SearchLVLMs

A Plug-and-Play Framework for Augmenting Large Vision-Language Models by Searching Up-to-Date Internet Knowledge

Chuanhao Li2,1, Zhen Li2, Chenchen Jing3, Shuo Liu1, Wenqi Shao1, Yuwei Wu2,†, Ping Luo4,1, Yu Qiao1, Kaipeng Zhang1,†

1OpenGVLab, Shanghai AI Laboratory, 2Beijing Institute of Technology,
3Zhejiang University, 4The University of Hong Kong

†Corresponding Authors: wuyuwei@bit.edu.cn, zhangkaipeng@pjlab.org.cn

Visualization of samples requiring up-to-date internet knowledge.

🔔News

🔥[2024-09-26] Accepted at NeurIPS 2024!

Introduction

Large vision-language models (LVLMs), such as the LLaVA series, are ignorant of up-to-date knowledge because they cannot be updated frequently due to the large amount of resources required, and therefore fail in many cases. For example, an LVLM released in January 2024 would not know the singer of the theme song for the new Detective Conan movie, which was not released until April 2024. To solve this problem, a promising solution motivated by retrieval-augmented generation (RAG) is to provide LVLMs with up-to-date knowledge via internet search during inference, i.e., internet-augmented generation (IAG), which is already integrated into some closed-source commercial LVLMs such as GPT-4V. However, the specific mechanics underpinning them remain a mystery. In this paper, we propose a plug-and-play framework, dubbed SearchLVLMs, for augmenting existing LVLMs in handling visual question answering (VQA) about up-to-date knowledge. A hierarchical filtering model is trained to effectively and efficiently find the most helpful content from the websites returned by a search engine to prompt LVLMs with up-to-date knowledge. To train the model and evaluate our framework's performance, we propose a pipeline to automatically generate news-related VQA samples to construct a dataset, dubbed UDK-VQA. A multi-model voting mechanism is introduced to label the usefulness of websites/content for VQA samples to construct the training set. Experimental results demonstrate the effectiveness of our framework, outperforming GPT-4V by ∼25% in accuracy.
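To make the multi-model voting idea above concrete, here is a minimal, hypothetical Python sketch of how a piece of retrieved content could be scored as useful for training the filtering model: content is counted as helpful for a VQA sample if adding it to the prompt flips a reference model to the correct answer. The model wrappers, prompt format, and threshold are assumptions for illustration, not the released implementation.

```python
# Hypothetical sketch of a multi-model voting labeler. Each "model" below is a
# placeholder callable that maps a prompt string to an answer string; it is not
# a real SearchLVLMs API.
from typing import Callable, List


def vote_usefulness(
    question: str,
    gold_answer: str,
    content: str,
    models: List[Callable[[str], str]],
) -> float:
    """Return the fraction of models that the content flips to the correct answer."""
    helped = 0
    for model in models:
        without = model(question)
        with_content = model(f"Context: {content}\nQuestion: {question}")
        # Count the content as helpful if the model answers correctly
        # only when the content is present in the prompt.
        if gold_answer.lower() in with_content.lower() and \
           gold_answer.lower() not in without.lower():
            helped += 1
    return helped / max(len(models), 1)


# Content whose vote score exceeds some threshold (e.g., half of the models)
# would be labeled "useful" and serve as a positive example when training
# the hierarchical filtering model.
```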

SearchLVLMs

Overview


Overview of the proposed SearchLVLMs framework, which enables LVLMs to access up-to-date knowledge. It consists of four components: a query generator, a search engine, a hierarchical filtering model, and augmented generation.
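As a rough illustration of how these four components might compose at inference time, the following Python sketch shows the data flow. Every class, method, and parameter here (e.g., query_generator.generate, website_filter.rank) is a hypothetical placeholder standing in for the real query generator, search engine, filtering model, and LVLM, not the actual SearchLVLMs API.

```python
# Hypothetical end-to-end flow of SearchLVLMs-style internet-augmented generation.
def answer_with_up_to_date_knowledge(image, question,
                                     query_generator, search_engine,
                                     website_filter, content_filter, lvlm):
    # 1. Query generator: turn the (image, question) pair into text queries.
    queries = query_generator.generate(image, question)

    # 2. Search engine: retrieve candidate websites for each query.
    websites = [site for q in queries for site in search_engine.search(q)]

    # 3. Hierarchical filtering: first keep the most promising websites,
    #    then select the most helpful text segments within them.
    top_sites = website_filter.rank(question, websites)[:5]
    segments = [seg for site in top_sites for seg in site.split_into_segments()]
    best_segments = content_filter.rank(question, segments)[:3]

    # 4. Augmented generation: prompt the LVLM with the selected content.
    context = "\n".join(best_segments)
    prompt = f"Context:\n{context}\n\nQuestion: {question}"
    return lvlm.generate(image, prompt)
```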

Experiment Results

Main Results


Comparison with SOTA LVLMs on UDK-VQA, where “Raw” denotes the model without IAG ability (e.g., the official API version), “IAG” denotes the model with built-in IAG capability (the official web version), and “LC” denotes the model with long-context input. “Gen.”, “Cham.” and “CLIP→FID (C→F)” denote the methods from [49], [13] and [45], respectively. “★” indicates that the method leverages our framework to access up-to-date knowledge. “Ours” stands for incorporating the Raw baseline into our framework. Values outside/inside parentheses indicate the accuracy over samples that do not violate the content management policy of the current model/all models.

BibTeX


      @misc{li2024searchlvlms,
            title={SearchLVLMs: A Plug-and-Play Framework for Augmenting Large Vision-Language Models by Searching Up-to-Date Internet Knowledge},
            author={Li, Chuanhao and Li, Zhen and Jing, Chenchen and Liu, Shuo and Shao, Wenqi and Wu, Yuwei and Luo, Ping and Qiao, Yu and Zhang, Kaipeng},
            year={2024},
            eprint={2405.14554},
            archivePrefix={arXiv},
            primaryClass={cs.CV},
            url={https://arxiv.org/abs/2405.14554},
      }