Chatbot Arena meets multi-modality! Multi-Modality Arena allows you to benchmark vision-language models side-by-side while providing images as inputs. Supports MiniGPT-4, LLaMA-Adapter V2, LLaVA, BLIP-2, and many more!


multi-modality-arena's Introduction

Multi-Modality Arena 🚀

Multi-Modality Arena is an evaluation platform for large multi-modality models. Following FastChat, two anonymous models are compared side-by-side on visual question-answering tasks. We have released the Demo and welcome everyone to participate in this evaluation initiative.

βš”οΈ LVLM Arena arXiv arXiv GitHub StarsπŸ”₯πŸ”₯πŸ”₯

Holistic Evaluation of Large Multimodal Models

OmniMedVQA: A New Large-Scale Comprehensive Evaluation Benchmark for Medical LVLM

  • OmniMedVQA dataset: contains 118,010 images with 127,995 QA items, covering 12 different modalities and more than 20 human anatomical regions. The dataset can be downloaded from here.
  • 12 models: 8 general-domain LVLMs and 4 medical-specialized LVLMs.

Tiny LVLM-eHub: Early Multimodal Experiments with Bard

  • Tiny datasets: only 50 randomly selected samples per dataset, i.e., 2.1K samples in total across 42 text-related visual benchmarks, for ease of use.
  • More models: another 4 models, i.e., 12 models in total, including Google Bard.
  • ChatGPT Ensemble Evaluation: better agreement with human evaluation than the previous word-matching approach (see the sketch below).
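
For intuition, the sketch below shows one way such an ensemble judge could work: query ChatGPT several times about whether a prediction matches the ground truth and take a majority vote over the answers. The prompt wording, model name, and vote count are illustrative assumptions, not the paper's exact configuration.

# Hypothetical sketch of an ensemble ChatGPT judge; prompt wording, model name,
# and vote count are illustrative assumptions, not the paper's configuration.
from collections import Counter
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def ensemble_judge(question: str, gt_answer: str, prediction: str, n: int = 5) -> bool:
    # Ask ChatGPT n times whether the prediction matches the ground truth,
    # then take a majority vote over the yes/no judgments.
    votes = []
    for _ in range(n):
        resp = client.chat.completions.create(
            model="gpt-3.5-turbo",
            temperature=1.0,  # sampling diversity provides the "ensemble"
            messages=[{
                "role": "user",
                "content": (
                    f"Question: {question}\n"
                    f"Ground-truth answer: {gt_answer}\n"
                    f"Model prediction: {prediction}\n"
                    "Does the prediction match the ground truth? Answer yes or no."
                ),
            }],
        )
        votes.append(resp.choices[0].message.content.strip().lower().startswith("yes"))
    return Counter(votes).most_common(1)[0][0]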

LVLM-eHub: An Evaluation Benchmark for Large Vision-Language Models 🚀

LVLM-eHub is a comprehensive evaluation benchmark for publicly available large vision-language models (LVLMs). It extensively evaluates 8 LVLMs across 6 categories of multimodal capability, using 47 datasets and an online arena platform.

LVLM Leaderboard

The LVLM Leaderboard systematically categorizes the datasets featured in the Tiny LVLM Evaluation according to their specific targeted abilities including visual perception, visual reasoning, visual commonsense, visual knowledge acquisition, and object hallucination. This leaderboard includes recently released models to bolster its comprehensiveness.

You can download the benchmark from here; more details can be found here.

Rank Model Version Score
πŸ…οΈ InternVL InternVL-Chat 327.61
πŸ₯ˆ InternLM-XComposer-VL InternLM-XComposer-VL-7B 322.51
πŸ₯‰ Bard Bard 319.59
4 Qwen-VL-Chat Qwen-VL-Chat 316.81
5 LLaVA-1.5 Vicuna-7B 307.17
6 InstructBLIP Vicuna-7B 300.64
7 InternLM-XComposer InternLM-XComposer-7B 288.89
8 BLIP2 FlanT5xl 284.72
9 BLIVA Vicuna-7B 284.17
10 Lynx Vicuna-7B 279.24
11 Cheetah Vicuna-7B 258.91
12 LLaMA-Adapter-v2 LLaMA-7B 229.16
13 VPGTrans Vicuna-7B 218.91
14 Otter-Image Otter-9B-LA-InContext 216.43
15 VisualGLM-6B VisualGLM-6B 211.98
16 mPLUG-Owl LLaMA-7B 209.40
17 LLaVA Vicuna-7B 200.93
18 MiniGPT-4 Vicuna-7B 192.62
19 Otter Otter-9B 180.87
20 OFv2_4BI RedPajama-INCITE-Instruct-3B-v1 176.37
21 PandaGPT Vicuna-7B 174.25
22 LaVIN LLaMA-7B 97.51
23 MIC FlanT5xl 94.09

Update

  • 🔥 Mar. 31, 2024. We release OmniMedVQA, a large-scale comprehensive evaluation benchmark for medical LVLMs. Meanwhile, we evaluate 8 general-domain LVLMs and 4 medical-specialized LVLMs. For more details, please visit MedicalEval.
  • 🔥 Oct. 16, 2023. We present an ability-level dataset split derived from the LVLM-eHub, complemented by the inclusion of eight recently released models. For access to the dataset splits, evaluation code, model inference results, and comprehensive performance tables, please visit tiny_lvlm_evaluation ✅.
  • Aug. 8, 2023. We released Tiny LVLM-eHub. Evaluation source code and model inference results are open-sourced under tiny_lvlm_evaluation.
  • Jun. 15, 2023. We released LVLM-eHub, an evaluation benchmark for large vision-language models. The code is coming soon.
  • Jun. 8, 2023. Thanks to Dr. Zhang, the author of VPGTrans, for his corrections. The authors of VPGTrans mainly come from NUS and Tsinghua University. We previously had some minor issues when re-implementing VPGTrans; after fixing them, we found that its performance is actually better. Other model authors are welcome to contact us by email for discussion. Also, please follow our model ranking list, where more accurate results will be available.
  • May 22, 2023. Thanks to Dr. Ye, the author of mPLUG-Owl, for his corrections. We fixed some minor issues in our implementation of mPLUG-Owl.

Supported Multi-modality Models

The following models are currently participating in randomized battles.

More details about these models can be found in ./model_detail/model.jpg. We will try to schedule computing resources to host more multi-modality models in the arena.

Contact Us on WeChat

If you are interested in any part of our VLarena platform, feel free to join the WeChat group.

Installation

  1. Create the conda environment
conda create -n arena python=3.10
conda activate arena
  2. Install the packages required to run the controller and server
pip install numpy gradio uvicorn fastapi
  3. Each model may require conflicting versions of Python packages, so we recommend creating a dedicated environment for each model based on its GitHub repo.

Launch a Demo

To serve using the web UI, you need three main components: web servers that interface with users, model workers that host two or more models, and a controller to coordinate the web servers and model workers.

Here are the commands to follow in your terminal:

Launch the controller

python controller.py

This controller manages the distributed workers.

Launch the model worker(s)

python model_worker.py --model-name SELECTED_MODEL --device TARGET_DEVICE

Wait until the process finishes loading the model and you see "Uvicorn running on ...". The model worker will register itself to the controller. For each model worker, you need to specify the model and the device you want to use.
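
For example, to field two models in a battle you might launch two workers on separate GPUs. The model names below are illustrative placeholders; use the identifiers your local model_worker.py actually registers.

python model_worker.py --model-name blip2 --device cuda:0
python model_worker.py --model-name minigpt4 --device cuda:1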

Launch the Gradio web server

python server_demo.py

This is the user interface that users will interact with.

By following these steps, you will be able to serve your models using the web UI. You can now open your browser and chat with a model. If the models do not show up, try restarting the Gradio web server.

Contribution Guidelines

We deeply value all contributions aimed at enhancing the quality of our evaluations. This section comprises two key segments: Contributions to LVLM Evaluation and Contributions to LVLM Arena.

Contributing to LVLM Evaluation

You can access the most recent version of our evaluation code in the LVLM_evaluation folder. This directory encompasses a comprehensive set of evaluation code, accompanied by the necessary datasets. If you're enthusiastic about partaking in the evaluation process, please don't hesitate to share your evaluation outcomes or the model inference API with us via email at [email protected].

Contributions to LVLM Arena

Thank you for your interest in adding your model to our LVLM Arena! To have your model included, kindly prepare a model tester structured as follows:

class ModelTester:
    def __init__(self, device=None) -> None:
        # TODO: initialize the model and the required pre-processors
        raise NotImplementedError

    def move_to_device(self, device) -> None:
        # TODO: transfer the model between CPU and GPU (optional)
        pass

    def generate(self, image, question) -> str:
        # TODO: model inference code; return the answer as a string
        raise NotImplementedError
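
As an illustration, here is a minimal sketch of such a tester backed by BLIP-2 through Hugging Face Transformers. The checkpoint name, pre-processing, and generation settings are assumptions for illustration only, not part of the Arena codebase.

# Hypothetical example: a ModelTester-style class backed by BLIP-2 from Hugging Face
# Transformers; checkpoint and generation settings are illustrative assumptions.
import torch
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

class Blip2Tester:
    def __init__(self, device=None) -> None:
        # Load the model and its pre-processors once at start-up.
        self.processor = Blip2Processor.from_pretrained("Salesforce/blip2-flan-t5-xl")
        self.model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-flan-t5-xl")
        self.device = device or ("cuda" if torch.cuda.is_available() else "cpu")
        self.model.to(self.device)
        self.model.eval()

    def move_to_device(self, device) -> None:
        # Transfer the model between CPU and GPU (optional).
        self.device = device
        self.model.to(device)

    def generate(self, image, question) -> str:
        # Single VQA-style inference: accept a PIL image or a file path, return the answer text.
        if isinstance(image, str):
            image = Image.open(image).convert("RGB")
        inputs = self.processor(images=image, text=question, return_tensors="pt").to(self.device)
        with torch.no_grad():
            output_ids = self.model.generate(**inputs, max_new_tokens=64)
        return self.processor.decode(output_ids[0], skip_special_tokens=True).strip()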

Furthermore, we are open to online model inference links, such as those provided by platforms like Gradio. Your contributions are wholeheartedly appreciated.
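
For instance, reusing the hypothetical Blip2Tester sketched above, a minimal Gradio wrapper could expose a shareable inference link roughly as follows; the interface layout and labels are only a sketch.

# Hypothetical minimal Gradio wrapper exposing a tester's generate() as a shareable link.
import gradio as gr

tester = Blip2Tester()  # any ModelTester-style object, e.g. the sketch above

def answer(image, question):
    # Forward the uploaded image and the question to the tester.
    return tester.generate(image, question)

demo = gr.Interface(
    fn=answer,
    inputs=[gr.Image(type="pil"), gr.Textbox(label="Question")],
    outputs=gr.Textbox(label="Answer"),
)
demo.launch(share=True)  # share=True yields a temporary public inference link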

Acknowledgement

We express our gratitude to the esteemed team at Chatbot Arena and their paper Judging LLM-as-a-judge for their influential work, which inspired our LVLM evaluation endeavors. We would also like to extend our sincere appreciation to the providers of LVLMs, whose valuable work has significantly contributed to the progress and advancement of large vision-language models. Finally, we thank the providers of the datasets used in our LVLM-eHub.

Term of Use

The project is an experimental research tool intended for non-commercial purposes only. It has limited safeguards and may generate inappropriate content. It must not be used for anything illegal, harmful, violent, racist, or sexual.


multi-modality-arena's Issues

Cannot load the ScienceQA dataset.

I ran the scripts on ScienceQA, but it raises an error:
'''
File "./Multi-Modality-Arena/LVLM_evaluation/task_datasets/vqa_datasets.py", line 140, in load_save_dataset
self.image_list.append(sample['image'].convert('RGB'))
^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'dict' object has no attribute 'convert'
'''
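
A possible workaround is to coerce the image field into a PIL image before calling .convert("RGB"). The sketch below assumes the dict carries raw bytes or a file path under "bytes"/"path" keys (as in Hugging Face datasets); your datasets version may differ.

# Hypothetical workaround: coerce a datasets-style image field into an RGB PIL image
# before .convert("RGB"); the dict keys checked here are assumptions.
import io
from PIL import Image

def to_pil(sample_image):
    if isinstance(sample_image, Image.Image):
        return sample_image.convert("RGB")
    if isinstance(sample_image, dict):
        if sample_image.get("bytes") is not None:
            return Image.open(io.BytesIO(sample_image["bytes"])).convert("RGB")
        if sample_image.get("path"):
            return Image.open(sample_image["path"]).convert("RGB")
    raise TypeError(f"Unsupported image field type: {type(sample_image)}")

# In vqa_datasets.py, line 140 would then become:
# self.image_list.append(to_pil(sample['image']))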

mPLUG-Owl has been updated.

Hi, just a quick note that mPLUG-Owl has been updated with two different checkpoints (LoRA & FT). Would you mind updating the online arena demo accordingly? Thanks!

Model performance and evaluation metrics in the OmniMedVQA dataset

Thanks for your work!
After reading the paper OmniMedVQA, I have two questions and sincerely look forward to the answers.

  1. According to the MedVInT and RadFM papers, the dataset used to train RadFM is larger than MedVInT's (16M vs. 1.64M). However, MedVInT outperforms RadFM in your paper. Did you further analyze the prediction results of the two models?

  2. QA scores and prefix-based scores are distributed differently across image modalities. Which metric is more useful when selecting a model under a certain modality?

LLaVA evaluation on Flickr30k

Hello, thanks for the great work! I was looking at this script for LLaVA evaluation on Flickr30k, but am facing some issues, detailed here.

Could you please help me with the exact generation settings and model checkpoint used for this evaluation? Thanks!

always getting (error_code: 1)

Hello and thank you for your amazing work!

However, I have a problem: the models load fine, but I keep getting

NETWORK ERROR DUE TO HIGH TRAFFIC. PLEASE REGENERATE OR REFRESH THIS PAGE. (error_code: 1)

The models (blip2 and minigpt4) are both on GPUs (I'm running them with --keep-in-device, otherwise they were not loaded at all), but every attempt gives the same error.

Any advice?

MiniGPT-4 and LLaVA evaluation

Hi! I'm a fan of your work. Could you please provide more details about how to evaluate MiniGPT-4 and LLaVA on the various datasets? Thanks a lot!

Some Problems with VPGTrans

I am the first author of VPGTrans. Thanks so much for using VPGTrans! I came across this excellent work through the WeChat article. However, there seem to be some problems with VPGTrans.

  1. I tried your example from the WeChat article. My demo (https://vpgtrans.github.io/) shows:
    [screenshot: Selection_412]
    But the result in the WeChat article is:
    [screenshot: Selection_413]

The results are different. I am not sure whether the default hyperparameters, such as the prompt format or the beam size, were modified. I will also check the code, and if I find anything, I will report it here.

For debugging, you can compare against our demo (https://vpgtrans.github.io/). If the demo is down, just email me ([email protected]).

  2. The main authors are from NUS, but the main institution listed in the WeChat article is Tsinghua University. If possible, I hope you can change it to NUS & THU. If that is inconvenient, I hope you can add a comment at the bottom of the WeChat article, or at least correct it in this repo (model.jpg).

Which test set for Flickr30k?

Wondering if you used the Karpathy test split for Flickr30k, or a different test set, in your LVLM-eHub paper. Thanks!

details of the Elo rating algorithm

Nice work! I am interested in the design of the 1-vs-1 battles between LVLMs. Can you share more details about the Elo rating algorithm, such as the choice of K-factor and the expected confidence intervals given the collected user ratings? It would be appreciated if you could share more of the details.
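
For reference, the standard Elo update for a single battle looks like the sketch below; the K-factor of 32 is an illustrative assumption and may differ from what the arena actually uses.

# Standard Elo update for one 1-vs-1 battle; K = 32 is an illustrative assumption.
def elo_update(rating_a, rating_b, score_a, k=32):
    # score_a is 1.0 if model A wins, 0.5 for a tie, and 0.0 if it loses.
    expected_a = 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))
    new_a = rating_a + k * (score_a - expected_a)
    new_b = rating_b + k * ((1.0 - score_a) - (1.0 - expected_a))
    return new_a, new_b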

Chatbot Arena conversation data

Hi,

thanks for your efforts on this great work!
I would like to ask whether you plan to open-source the Chatbot Arena conversation data.

Thanks in advance!

Best, Wei

Code for VCR evaluation

First, I really appreciate your great contributions to the LVLM field.

Do you have any plan to release the visual commonsense reasoning (VCR) evaluation code?
There's some elaboration about how to properly locate and download the dataset, but I couldn't find the corresponding code.

Thanks again for your work.

Hardware requirements

Hi all,
Could anyone provide the hardware requirements to run and test these models? I am planning to run them on local systems.
It would be great if the hardware requirements for the open-source models were provided.

Thanks in advance.

How to reproduce the Tiny-eHub eval

Thanks for releasing this benchmark. We tried to compute the categorical score for each ability but found low scores on several abilities, such as visual reasoning and visual perception. We compute text-matching accuracy. We also downloaded the inference results of existing models, such as BLIP2, and manually checked that the text-matching accuracy can hardly reach ~50%. Below are example prediction results from BLIP2. How do these models achieve high scores in the Tiny LVLM evaluation?

{
    "question": "When does the coupon expire?",
    "answer": "it expires on january 31st",
    "gt_answers": ["12/31/87"],
    "image_path": "updated_datasets/Visual_Reasoning/001.png",
    "model_name": "BLIP2",
    "task_type": "VQA"
},
{
    "question": "What is the \u201cunit of quantity\u201d of Pulp?",
    "answer": "Pulp is a term used to refer to the amount of pulp produced by a pulp mill, or the amount of pulp produced by a",
    "gt_answers": ["Tonne"],
    "image_path": "updated_datasets/Visual_Reasoning/002.png",
    "model_name": "BLIP2",
    "task_type": "VQA"
},
{
    "question": "what is the % of sugar in ro-neet?",
    "answer": "% of sugar in ro-neet",
    "gt_answers": ["17.1%", "17.1"],
    "image_path": "updated_datasets/Visual_Reasoning/003.png",
    "model_name": "BLIP2",
    "task_type": "VQA"
},
{
    "question": "What is the total consultant costs under column \"-04\" based on \"II. CONSULTANT COSTS\"?",
    "answer": "0 0 0 0 0 0 0 0 0 0 0 0 0 0 0",
    "gt_answers": ["$1,532"],
    "image_path": "updated_datasets/Visual_Reasoning/004.png",
    "model_name": "BLIP2",
    "task_type": "VQA"
},
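
For reference, a naive word-matching accuracy over prediction records like those above could be computed as in the sketch below; the matching rule is an assumption, not the repository's official evaluation code (Tiny LVLM-eHub reports results with a ChatGPT ensemble judge instead).

# Hypothetical word-matching accuracy over a JSON array of prediction records;
# the matching rule is an assumption, not the official evaluation code.
import json

def word_match(prediction: str, gt_answers: list) -> bool:
    # Count a hit if any ground-truth string appears verbatim in the prediction.
    pred = prediction.lower()
    return any(gt.lower() in pred for gt in gt_answers)

def text_matching_accuracy(result_file: str) -> float:
    with open(result_file) as f:
        records = json.load(f)
    hits = sum(word_match(r["answer"], r["gt_answers"]) for r in records)
    return hits / len(records)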
