Andrea Burns
Andrea Burns is a Software Engineer at Google DeepMind working on multimodal understanding. Her interests include grounding and localized reasoning, rich image description generation, learning through interaction, and representation learning. She received her Ph.D. in Computer Science from Boston University in 2023, where she specialized in learning representations for web and app UIs and in modeling interactive tasks in those domains. Previously, she also worked on representation learning for general vision-language problems.
Authored Publications
ImageInWords: Unlocking Hyper-Detailed Image Descriptions
Andrew Bunner
Ranjay Krishna
(2024)
Despite the longstanding adage "an image is worth a thousand words," creating accurate and hyper-detailed image descriptions for training Vision-Language models remains challenging.
Current datasets typically have web-scraped descriptions that are short, low-granularity, and often contain details unrelated to the visual content. As a result, models trained on such data generate descriptions replete with missing information, visual inconsistencies, and hallucinations. To address these issues, we introduce ImageInWords (IIW), a carefully designed human-in-the-loop annotation framework for curating hyper-detailed image descriptions and a new dataset resulting from this process.
We validate the framework through evaluations focused on the quality of the dataset and its utility for fine-tuning with considerations for readability, comprehensiveness, specificity, hallucinations, and human-likeness. Our dataset significantly improves across these dimensions compared to recently released datasets (+66%) and GPT-4V outputs (+48%). Furthermore, models fine-tuned with IIW data excel by +31% against prior work along the same human evaluation dimensions. Given our fine-tuned models, we also evaluate text-to-image generation and vision-language reasoning. Our model's descriptions can generate images closest to the original, as judged by both automated and human metrics. We also find our model produces more compositionally rich descriptions, outperforming the best baseline by up to 6% on ARO, SVO-Probes, and Winoground datasets.