Scientific RAG
Derisking an AI solution to ensure value (2025)
Confidentiality disclaimer
The images displayed on this page are independently recreated replicas of the original project work, produced solely for portfolio purposes. No confidential information, original assets, internal workflows, or copyrighted material is shown.
A RAG (retrieval-augmented generation) feature for scientific documents was planned, but its need and value had not been validated.
The feature was intended for the internal research application, for which I had been the UX strategist for over a year at that point.
Uncover real user needs, define a clear product outcome, and derisk the feature's development to ensure the solution would deliver meaningful value for both users and the business.
To validate the added value of the AI feature within our product, my UX/UI colleague Emma Rosenlind and I conducted semi-structured interviews and moderated usability tests using a Figma design prototype. We gathered many insights (selected highlights below), defined the core users for the feature, and established a baseline for manual document comparison to guide future value assessment.
Once the AI became testable, we evaluated real user inputs. Among other things, we discovered that most users asked about a specific type of content that was initially out of scope for the MVP. Recognizing its importance for adoption, the Product Owner and I expanded the MVP scope to ensure the first release matched core user needs.
Because our research showed the AI was less accurate than expected, user feedback became essential, even though it wasn't part of the original scope. I therefore highlighted the need for a qualitative, prompt-linked feedback option to improve the model. Quick solutions such as external forms risked low usage, so we opted for a pattern integrated into the interface. Even so, adoption remained uncertain, but it offered the best chance of capturing meaningful input from the start.
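To make "prompt-linked" concrete: the idea is that each piece of feedback stays attached to the exact prompt and response it refers to, rather than arriving as a detached form submission. The TypeScript sketch below shows one possible shape for such a record; all names, fields, and values are illustrative assumptions for this portfolio page, not the project's actual schema.

```typescript
// Hypothetical sketch of a prompt-linked feedback record.
// Every identifier here is an assumption, not the project's real data model.
interface PromptLinkedFeedback {
  exchangeId: string; // ties the feedback to one specific prompt/response pair
  prompt: string; // the user's original question
  response: string; // the AI answer being rated
  rating: "helpful" | "partially_helpful" | "not_helpful";
  comment?: string; // optional qualitative input for improving the model
  submittedAt: string; // ISO 8601 timestamp
}

// Example: feedback submitted directly from the chat UI,
// so it arrives with its full context attached.
const example: PromptLinkedFeedback = {
  exchangeId: "ex-0421",
  prompt: "Compare the stability data across these two documents.",
  response: "Document A reports ...",
  rating: "partially_helpful",
  comment: "The answer missed the second study arm.",
  submittedAt: new Date().toISOString(),
};
```

Keeping the prompt and response in the record is the design choice that distinguishes this pattern from an external form: the team reviewing feedback never has to reconstruct what the user was reacting to.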
The MVP was delivered with a much clearer understanding of user needs, the baseline effort, and the potential value it should deliver, all shaped by user research.
Although real value will only be proven through continuous learning and product metrics, our work established the strategic guardrails needed to avoid "AI for AI's sake" and laid a solid foundation for measuring impact and guiding future iterations.
This was the second AI feature I worked on for our research product, and I really enjoy exploring the possibilities this technology offers. The challenge with this particular feature was to make it user-friendly and deliver value in a short amount of time, so we initially focused on the prompts and responses.
Is a chatbot the best way to interact with this function? Perhaps not. Because AI is still a fairly new tool, there are few proven experience patterns to rely on, as I also learned from conversations with other designers and AI developers. It would therefore be valuable to involve designers in features like this from the very beginning. I, for one, am curious and excited to learn more about creating meaningful AI experiences.