🔥[2024-09-26] Accepted at NeurIPS 2024!
Large vision-language models (LVLMs), such as the LLaVA series, are ignorant of up-to-date knowledge because frequent updates require prohibitive amounts of resources, and they therefore fail in many cases. For example, an LVLM released in January 2024 would not know the singer of the theme song for the new Detective Conan movie, which was not released until April 2024. To solve this problem, a promising solution motivated by retrieval-augmented generation (RAG) is to provide LVLMs with up-to-date knowledge via internet search during inference, i.e., internet-augmented generation (IAG), which is already integrated into some closed-source commercial LVLMs such as GPT-4V. However, the specific mechanics underpinning them remain a mystery. In this paper, we propose SearchLVLMs, a plug-and-play framework for augmenting existing LVLMs to handle visual question answering (VQA) about up-to-date knowledge. A hierarchical filtering model is trained to effectively and efficiently find the most helpful content from the websites returned by a search engine, which is then used to prompt LVLMs with up-to-date knowledge. To train the model and evaluate our framework, we propose a pipeline that automatically generates news-related VQA samples, yielding a dataset dubbed UDK-VQA. A multi-model voting mechanism is introduced to label the usefulness of each website/content snippet for a VQA sample when constructing the training set. Experimental results demonstrate the effectiveness of our framework, which outperforms GPT-4V by ∼25% in accuracy.
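The multi-model voting idea mentioned above can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the toy stand-in models, function names, and the 50% voting threshold are all assumptions; the real pipeline would query actual LVLMs.

```python
# Hedged sketch of a multi-model voting mechanism for labeling training data:
# a snippet is labeled useful (1) if enough models answer the question
# correctly when the snippet is prepended to the prompt.

def multi_model_vote(snippet: str, question: str, gold: str, models, threshold: float = 0.5) -> int:
    """Return 1 (useful) if at least `threshold` of the models answer
    correctly with the snippet as context, else 0 (not useful)."""
    correct = sum(1 for m in models if m(f"{snippet}\n{question}") == gold)
    return int(correct / len(models) >= threshold)

# Toy stand-in models: they answer correctly only if the evidence names the singer.
def model_a(prompt): return "singer A" if "singer A" in prompt else "unknown"
def model_b(prompt): return "singer A" if "singer A" in prompt else "unknown"
def model_c(prompt): return "unknown"

models = [model_a, model_b, model_c]
label_good = multi_model_vote("The theme song is performed by singer A.",
                              "Who sings the theme song?", "singer A", models)  # 1
label_bad = multi_model_vote("Ticket sales opened in April.",
                             "Who sings the theme song?", "singer A", models)   # 0
```

A snippet that lets a majority of the voters recover the gold answer is kept as a positive training example for the filtering model; snippets that do not help are negatives.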
An overview of the proposed SearchLVLMs, a framework that enables LVLMs to access up-to-date knowledge. It consists of four components: a query generator, a search engine, a hierarchical filtering model, and augmented generation.
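The four components above can be sketched end to end. Everything here is an illustrative assumption: the search engine is stubbed with canned snippets, and the hierarchical filter is replaced by a simple word-overlap ranker standing in for the trained filtering model.

```python
# Hedged sketch of the four-stage pipeline: query generation -> web search ->
# hierarchical filtering -> augmented generation. All names and heuristics
# are illustrative, not the authors' implementation.

def generate_query(question: str, image_caption: str) -> str:
    """Query generator: combine the question with visual context."""
    return f"{image_caption} {question}"

def search(query: str) -> list[str]:
    """Search-engine stub: a real system would call a web-search API."""
    return [
        "The theme song of the new movie is performed by singer A.",
        "Ticket sales for the movie opened in April 2024.",
        "Unrelated page about an older film.",
    ]

def hierarchical_filter(query: str, snippets: list[str], top_k: int = 1) -> list[str]:
    """Stand-in for the trained filtering model: rank snippets by word overlap."""
    q = set(query.lower().split())
    ranked = sorted(snippets, key=lambda s: len(q & set(s.lower().split())), reverse=True)
    return ranked[:top_k]

def augmented_prompt(question: str, evidence: list[str]) -> str:
    """Augmented generation: prepend the retrieved evidence to the LVLM prompt."""
    return "Context:\n" + "\n".join(evidence) + f"\n\nQuestion: {question}"

question = "Who sings the theme song"
query = generate_query(question, "poster of the new movie")
evidence = hierarchical_filter(query, search(query))
prompt = augmented_prompt(question, evidence)
```

The final `prompt` is what would be fed to the frozen LVLM, which is why the framework is plug-and-play: no LVLM parameters are touched.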
Comparison with SOTA LVLMs on UDK-VQA, where “Raw” denotes the model without IAG capability (e.g., the official API version), “IAG” denotes the model with built-in IAG capability (the official web version), and “LC” denotes the model with long-context input. “Gen.”, “Cham.” and “CLIP→FID (C→F)” denote the methods from [49], [13] and [45], respectively. “★” indicates that the method leverages our framework to access up-to-date knowledge. “Ours” stands for incorporating the Raw baseline into our framework. The value outside/inside the parentheses indicates the accuracy over samples that do not violate the content management policy of the current model/all models.
@misc{li2024searchlvlms,
title={SearchLVLMs: A Plug-and-Play Framework for Augmenting Large Vision-Language Models by Searching Up-to-Date Internet Knowledge},
author={Li, Chuanhao and Li, Zhen and Jing, Chenchen and Liu, Shuo and Shao, Wenqi and Wu, Yuwei and Luo, Ping and Qiao, Yu and Zhang, Kaipeng},
year={2024},
eprint={2405.14554},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2405.14554},
}