Jon E. Froehlich

I am a Visiting Faculty Researcher working on the AI for Social Good team to enhance and amplify human abilities through advances in AI + HCI. I am also a Professor in Human-Computer Interaction at UW’s Allen School of Computer Science and Engineering.
Authored Publications
    StreetReaderAI: Making Street View Accessible Using Context-Aware Multimodal AI
    Alex Fiannaca
    Nimer Jaber
    Victor Tsaran
    Proceedings of the 2025 ACM Symposium on User Interface Software and Technology (UIST'25) (to appear)
    Abstract: Interactive streetscape mapping tools such as Google Street View (GSV) and Meta Mapillary enable users to virtually navigate and experience real-world environments via immersive 360° imagery but remain fundamentally inaccessible to blind users. We introduce StreetReaderAI, the first-ever accessible street view tool, which combines context-aware, multimodal AI, accessible navigation controls, and conversational speech. With StreetReaderAI, blind users can virtually examine destinations, engage in open-world exploration, or virtually tour any of the over 220 billion images and 100+ countries where GSV is deployed. We iteratively designed StreetReaderAI with a mixed-visual ability team and performed an evaluation with eleven blind users. Our findings demonstrate the value of an accessible street view in supporting POI investigations and remote route planning. We close by enumerating key guidelines for future work.
    “Does the cafe entrance look accessible? Where is the door?” Towards Geospatial AI Agents for Visual Inquiries
    Jared Hwang
    Zeyu Wang
    John S. O'Meara
    Xia Su
    William Huang
    Yang Zhang
    Alex Fiannaca
    ICCV'25 Workshop "Vision Foundation Models and Generative AI for Accessibility: Challenges and Opportunities" (2025)
    Abstract: Interactive digital maps have revolutionized how people travel and learn about the world; however, they rely on preexisting structured data in GIS databases (e.g., road networks, POI indices), limiting their ability to address geovisual questions related to what the world looks like. We introduce our vision for Geo-Visual Agents—multimodal AI agents capable of understanding and responding to nuanced visual-spatial inquiries about the world by analyzing large-scale repositories of geospatial images, including streetscapes (e.g., Google Street View), place-based photos (e.g., TripAdvisor, Yelp), and aerial imagery (e.g., satellite photos) combined with traditional GIS data sources. We define our vision, describe sensing and interaction approaches, provide three exemplars, and enumerate key challenges and opportunities for future work.