Towards better health conversations: Research insights on a “wayfinding” AI agent based on Gemini

September 25, 2025

Mike Schaekermann, Research Scientist, and Rory Sayres, Researcher, Google Research

We share user insights from a novel research AI agent that helps people find their way to better health information through proactive conversational guidance, goal understanding, and tailored conversations.

The ability to find clear, relevant, and personalized health information is a cornerstone of empowerment for medical patients. Yet, navigating the world of online health information is often a confusing, overwhelming, and impersonal experience. We are met with a flood of generic information that does not account for our unique context, and it can be difficult to know what details are relevant.

Large language models (LLMs) have the potential to make this information more accessible and tailored. However, many AI tools today act as passive "question-answerers" — they provide a single, comprehensive answer to an initial query. But this isn't how an expert, like a doctor, helps someone navigate a complex topic. A health professional doesn't just provide a lecture; they ask clarifying questions to understand the full picture, discover a person's goals, and guide them through the information maze. Though this context-seeking is critical, it's a significant design challenge for AI.

In “Towards Better Health Conversations: The Benefits of Context-Seeking”, we describe how we designed and tested our “Wayfinding AI”, an early-stage research prototype based on Gemini that explores a new approach. Our fundamental thesis is that by proactively asking clarifying questions, an AI agent can better discover a user's needs, guide them in articulating their concerns, and provide more helpful, tailored information. In a series of four mixed-method user experience studies with a total of 163 participants, we examined how people interact with AI for their health questions, and we iteratively designed an agent that users found to be significantly more helpful, relevant, and tailored to their needs than a baseline AI agent.

Formative user experience insights: Challenges in finding health information online

To better understand the hurdles people face, we interviewed 33 participants about their experiences finding health information online. A key theme quickly emerged: people often struggle to articulate their health concerns. As one participant described, their process was to "...just kind of like throw all the words in there and then I'm just gonna see what comes back." It may be that without a clinical background, it’s difficult to know which details are medically relevant.

The people we interviewed then interacted with research prototypes of different chatbots (chat histories were not logged). These participants made up a diverse group and asked health questions on a wide range of topics (e.g., rib pain, vertigo, consistent and unexplained weight gain, tinnitus and surgery; more details in the paper). Our studies revealed that when a chatbot proactively asks clarifying questions, the experience changes dramatically. The majority of participants preferred a "deferred-answer" approach — where the AI asks questions first — over one that gives a comprehensive answer immediately. This conversational style was perceived as more personal and reassuring. As one person noted, "It feels more like the way it would work if you talk to a doctor... it does make me feel a little more confident that it wants to know more before jumping right into an answer." These clarifying questions not only help the AI provide better answers, but also empower users, guiding them to provide more relevant context. We found similar patterns in prior work on AI for dermatology.

However, the effectiveness of this clarifying question–based approach depends heavily on the execution — engagement drops if questions are poorly formulated, irrelevant, or buried within long paragraphs of text where they are easily missed.

Designing a Wayfinding AI to empower people through personal and proactive conversations

Informed by these insights, we designed our Wayfinding AI around three core principles to create a more empowering conversational experience:

  1. Proactive conversational guidance: At each turn, the Wayfinding AI asks up to three targeted questions designed to systematically reduce ambiguity. This helps users articulate their health story more completely and directly addresses users’ desire for more contextualized answers.

  2. Best-effort answers at each turn: Because some health-related questions may not require clarification to get a good answer, the Wayfinding AI provides a "best-effort" answer at every conversational turn, based on the information shared so far, while emphasizing that the answer can be improved if the user answers one or more of the follow-up questions. This approach gives the user helpful information throughout the conversation, while offering increasingly refined answers as the conversation progresses.

  3. Transparent reasoning: The Wayfinding AI explains how the user's latest answers have helped refine the previous answer. This makes the AI's reasoning process clear and understandable.

To ensure clarifying questions are never buried within longer best-effort answers, we designed an interface with a two-column layout. The conversation and clarifying questions appear in the left column, while best-effort answers and more detailed explanations appear in the right. This separates the interactive conversation from the informational content.


Example of a user starting to interact with our Wayfinding AI prototype interface, with the familiar multi-turn chat interface on the left and a “best information so far” panel on the right. This two-panel layout separates the context-seeking conversation from the more detailed informational content, letting users dive into the information once they feel they have shared all relevant context.
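To make the design concrete, below is a minimal, hypothetical sketch of how a single conversational turn and its two-panel rendering could be represented. The names (WayfindingTurn, render_two_panel) and the example content are our own illustration of the three principles and the two-column layout described above; they are not the prototype's actual implementation.

```python
from dataclasses import dataclass, field


@dataclass
class WayfindingTurn:
    """One turn from the agent (hypothetical structure, illustrative only)."""
    best_effort_answer: str   # principle 2: answer based on the context shared so far
    reasoning_update: str     # principle 3: how the latest details refined the answer
    clarifying_questions: list[str] = field(default_factory=list)  # principle 1

    def __post_init__(self) -> None:
        # Keep at most three clarifying questions per turn.
        self.clarifying_questions = self.clarifying_questions[:3]


def render_two_panel(turn: WayfindingTurn) -> dict[str, str]:
    """Map a turn onto the two-column layout: conversation and questions on the
    left, best-effort answer and detailed explanation on the right."""
    left_lines = [turn.reasoning_update, "To tailor this further, could you share:"]
    left_lines += [f"  {i + 1}. {q}" for i, q in enumerate(turn.clarifying_questions)]
    return {
        "left_panel": "\n".join(left_lines),
        "right_panel": turn.best_effort_answer,
    }


# Example with placeholder content (not taken from the actual prototype):
turn = WayfindingTurn(
    best_effort_answer="General information about common causes of rib pain...",
    reasoning_update="Knowing the pain started after exercise helps narrow the possibilities.",
    clarifying_questions=[
        "When did the pain start?",
        "Does it change when you breathe deeply?",
        "Have you had any recent injuries?",
    ],
)
print(render_two_panel(turn)["left_panel"])
```

In the prototype itself, the Gemini-based model and the interface handle this separation; the sketch only illustrates how the three design principles map onto the two panels.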

Evaluating our Wayfinding AI through a randomized user study

To evaluate the potential real-world impact of this agent, we conducted a randomized user study with 130 US-based participants recruited via a third-party platform. All participants were 21 years or older, were not health care professionals, and had a health-related question for which they were willing to interact with an AI. To ensure a broad range of health topics, we imposed very few restrictions on which topics were eligible for the study (details on excluded inquiries are provided in the paper).

In a randomized within-subjects design, each participant interacted with both our Wayfinding AI and a baseline Gemini 2.5 Flash model to explore their health topic. After providing informed consent and answering standard demographic questions, participants were instructed to spend at least 3 minutes conversing about their question and then resume the survey. After interacting with each AI, participants answered questions about their satisfaction with the experience along six dimensions: helpfulness, relevance of questions asked, tailoring to their situation, goal understanding, ease of use, and efficiency of getting useful information. They were able to provide open feedback about what they learned, and also had the option to upload their conversation with the AI. Sharing the conversation was not required to complete the survey.

At the end of the study, participants were prompted to explicitly compare the two AIs and indicate which they would prefer in terms of each of the six dimensions above. They were also asked, "For a future topic, would you prefer the first or the second AI?" The order of AI exposure (Baseline AI first vs. Wayfinding AI first) was randomized across participants. Throughout the study, participants were instructed not to provide any identifying information about themselves.


Illustration of our study design.
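For readers who prefer a schematic view, the sketch below restates this study flow in simplified Python. The function names, the placeholder Likert-style ratings, and the session structure are our own illustration of the design described above, not the survey tooling that was actually used.

```python
import random

DIMENSIONS = [
    "helpfulness",
    "relevance of questions asked",
    "tailoring to their situation",
    "goal understanding",
    "ease of use",
    "efficiency of getting useful information",
]


def collect_rating(participant_id: int, condition: str, dimension: str) -> int:
    # Placeholder for a survey response (e.g., a Likert-style rating).
    return random.randint(1, 5)


def collect_preference(participant_id: int, dimension: str) -> str:
    # Placeholder for the head-to-head choice between the two AIs.
    return random.choice(["baseline", "wayfinding"])


def run_session(participant_id: int) -> dict:
    # Within-subjects design: every participant uses both AIs, in randomized order.
    order = random.sample(["baseline", "wayfinding"], k=2)
    ratings = {}
    for condition in order:
        # The participant chats with this AI for at least 3 minutes, then resumes
        # the survey and rates the experience along the six dimensions.
        ratings[condition] = {d: collect_rating(participant_id, condition, d) for d in DIMENSIONS}
    # End of study: explicit comparison on each dimension, plus preference for a future topic.
    comparisons = {d: collect_preference(participant_id, d)
                   for d in DIMENSIONS + ["preference for a future topic"]}
    return {"order": order, "ratings": ratings, "comparisons": comparisons}


print(run_session(participant_id=1)["order"])
```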

Helpful and relevant information through goal understanding and tailored conversations

As shown below, the results of the study demonstrated that users preferred the Wayfinding AI's approach across several important dimensions, despite its less-familiar two-column interface. Users favored the Wayfinding AI for its helpfulness, the relevance of its questions, its understanding of their goals, and its tailoring of the conversation to their specific needs. These findings suggest that the Wayfinding AI's proactive, question-asking behavior created a more personalized and helpful experience without introducing undue friction.


User preferences between the baseline AI and our Wayfinding AI along multiple evaluation axes, including helpfulness of the agent, relevance of its responses, tailoring of the conversation to the user, understanding the user’s goal, ease of use, efficiency of the conversation, and willingness to use each for a future health information need.

Beyond simply preferring their conversations with the Wayfinding AI, participants had noticeably different conversations. Conversations were longer with the Wayfinding AI, in particular when participants were trying to understand the cause of their symptoms. For those topics, conversations with the Wayfinding AI had 4.96 turns on average, compared to 3.29 for the baseline AI. And the pattern of prompts they provided to each AI looked different across conversations:


Sankey diagram illustrating the flow of conversations with the baseline AI and the Wayfinding AI. Each of the vertical bars shows the breakdown of the types of user prompts, across the first 5 conversation turns. The blue bars indicate participants responding to clarifying questions — much more common for the Wayfinding AI.

Conclusion

Finding the right health information online can feel like navigating a maze. While AI has the potential to be a powerful guide, our research shows that its success hinges on its ability to move beyond being a passive question-answerer and become an active conversational partner.

By designing our Wayfinding AI to be personal and proactive, we demonstrated how asking targeted questions in a well-structured interface can power an experience that users prefer over a more classical question-answering approach, enabling people to obtain more helpful, relevant, and tailored information. The results from our user studies provide strong evidence that this human-centered, conversational approach is a promising direction for the future of AI in health, helping people navigate their health journeys.

Acknowledgements

The research described here is joint work across Google Research, Google Health, and partnering teams. We would like to thank Yuexing Hao, Abbi Ward, Amy Wang, Beverly Freeman, Serena Zhan, Diego Ardila, Jimmy Li, I-Ching Lee, Anna Iurchenko, Siyi Kou, Kartikeya Badola, Jimmy Hu, Bhawesh Kumar, Keith Johnson, Supriya Vijay, Justin Krogue, Avinatan Hassidim, Yossi Matias, Dale Webster, Sunny Virmani, Yun Liu, Quang Duong, Fereshteh Mahvar, Laura Vardoulakis, Tiffany Guo, and Meredith Ringel Morris for contributing or reviewing this work. We would also like to thank the participants who contributed to these studies.