Grantable is an AI-powered grant-writing tool. Users upload source material (e.g., past grants, org info, program documents) and the AI helps draft grant application responses.
Our core workflow is a document editor with AI assistance. Users could control which documents they uploaded, but those sources applied to the entire document. When working on a full grant application with multiple questions, the AI referenced every uploaded source for every question, which sometimes meant mixing up different programs, pulling last year's data into this year's responses, or citing Massachusetts information when answering questions about South Carolina.
Many users developed workarounds that involved bouncing between Grantable, Google Docs, and ChatGPT. Others simply defaulted back to ChatGPT alone (forgoing all the grant-specific features within Grantable! 😱)

Impact
The feature saw high adoption post-launch and strong satisfaction scores in in-app surveys.
Goals
Create a feature that:
Gives users better control over which sources apply to individual questions
Matches users' natural question-by-question workflow (discovered via user interviews)
Reduces drop-off from complex setup requirements
Keeps users in Grantable instead of bouncing to other tools
Balances simplicity (reducing drop-off) with power (users still need to refine answers)
The need for a more efficient, AI-driven workflow came up consistently in user interviews and product demos, and had long been a priority for leadership. My first attempt was a "quick drafter" that parsed applications from the document editor into individual question cards and offered an "auto-complete" that generated answers for every question. However, because it was designed as a view of the document, it created an unsolvable sync problem: edits to the original document had no clean way to propagate back to the drafter, and with only two engineers, maintaining two mirrored representations wasn't feasible.
The document editor gave the AI all source materials at once, causing it to mix up programs, geographies, and timeframes. The solution wasn't just faster generation but granular source control. I designed a card-based workflow that borrowed the question-by-question structure from the quick drafter but solved the core problem: each card could have its own specific source materials and instructions. This matched how users were already working manually across Google Docs, ChatGPT, and Grantable, but kept everything in one place with reliable AI context.
Question-by-question card interface
From user interviews, I learned that users naturally work on one grant question at a time, but our document editor forced them to see everything at once.
Parts auto-parses uploaded applications into individual cards, matching that mental model and reducing cognitive load.
"What I normally do is get like the application and I put it in a Google doc and then I copy that into Grantable and then do the writing in Grantable and then like copy [the answers] over question by question..."
Question-specific source materials
I designed each card to accept its own source documents and instructions, giving users precise control over what the AI references for each question. This leads to fewer wrong-year, wrong-program, or "close but not right" responses.
Stripped-down setup
Early designs required funder selection, global instructions, and source uploads before users could start. Testing showed confusion and drop-off. I removed everything non-essential so users could simply upload an application and start working, while keeping the option to add helpful context (funder, source materials) for the application.
AI context is a UX problem
The key to Parts wasn't better AI; it was giving users transparency and control over what the AI "knows" when generating responses.
Users don't trust AI when they can't control its inputs. AI models also tend to be people pleasers: they'll produce an answer even if it means making one up.
Test even when rushed
I was under pressure to ship, but I advocated for quick prototype testing. That testing surfaced valuable insights into how users interpreted the product, including:
Part reordering
Initially deprioritized by engineering, but user validation gave me the evidence to push it onto the immediate roadmap.
Part splitting
Parsing wasn't always accurate, so users needed a way to correct how questions were grouped. Based on testing feedback, everyone agreed to prioritize this, and it shipped shortly after launch.
Completion state
Users expected a "done" button at the end of the flow, something I'd missed because I was too close to the work.
What's next
Multiple users mentioned wanting to save strong responses and reuse them across applications. Grant applications often repeat similar questions (e.g., organization mission, program descriptions, target population), and users were already mentally tracking which answers they'd want to pull forward.
"I probably only have 7-10 favorite cards I use over and over. If I have a new favorite, I could be like 'make this the favorite now.'"
"I want a content bank - a source of truth where I can store core content like mission statements and program descriptions, and just pull from it. Right now I have to leave Grantable, update my Word doc, and re-upload it every time something changes. If that could all live in one place, it would be a game changer, especially for smaller organizations without a grant writer."
This would transform Parts from a single-application tool into a library of vetted content that compounds over time, directly supporting Grantable's core value proposition of faster, higher-quality applications.