
UniBench: A Python Library to Evaluate Vision-Language Model (VLM) Robustness Across Diverse Benchmarks

Vision-language models (VLMs) have gained significant attention due to their ability to handle various multimodal tasks. However, the rapid proliferation of benchmarks for evaluating these models has created a complex and fragmented landscape. This situation poses several challenges for researchers. Implementing protocols for numerous benchmarks is time-consuming, and interpreting results across multiple evaluation metrics becomes…
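UniBench packages exactly this kind of unified protocol behind a single interface. As a rough illustration only (the class and function names below are hypothetical, not UniBench's actual API), a library like this lets each benchmark supply its own data loading and scoring behind one common evaluation loop:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class BenchmarkResult:
    """Score of one model on one benchmark (hypothetical structure)."""
    benchmark: str
    metric: str
    score: float

def evaluate_across_benchmarks(
    model: Callable,                  # model(image, prompt) -> text
    benchmarks: Dict[str, Callable],  # benchmark name -> runner
) -> List[BenchmarkResult]:
    """Run one model against many benchmarks under a single protocol."""
    results = []
    for name, run_benchmark in benchmarks.items():
        # Each runner owns its data loading and scoring, so adding a
        # new benchmark never changes this evaluation loop.
        metric, score = run_benchmark(model)
        results.append(BenchmarkResult(name, metric, score))
    return results
```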


Data-Augmented Contrastive Tuning: A Breakthrough in Object Hallucination Mitigation

A new research paper addresses a critical issue in Multimodal Large Language Models (MLLMs): the phenomenon of object hallucination. Object hallucination occurs when these models generate descriptions of objects not present in the input data, leading to inaccuracies that undermine their reliability and effectiveness. For instance, a model might incorrectly assert the presence of a “tie” in…
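As a rough sketch of the general idea behind contrastive tuning (not the paper's exact objective; the function name and margin formulation here are assumptions), the toy loss below pushes a model to assign a higher likelihood to a grounded caption than to a minimally edited variant that mentions a hallucinated object:

```python
import torch.nn.functional as F

def contrastive_tuning_loss(logp_grounded, logp_hallucinated, margin=1.0):
    """Toy contrastive objective: prefer grounded over hallucinated text.

    Both arguments are per-example sequence log-likelihoods under the
    model, for a correct caption and for an augmented copy of it that
    inserts an object absent from the image (e.g. a "tie").
    """
    # Penalize the model whenever the hallucinated caption comes within
    # `margin` of the grounded one in log-likelihood.
    return F.relu(margin - (logp_grounded - logp_hallucinated)).mean()
```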


AI in Medical Imaging: Balancing Performance and Fairness Across Populations

As AI models become more integrated into clinical practice, assessing their performance and potential biases across different demographic groups is crucial. Deep learning has achieved remarkable success in medical imaging tasks, but research shows these models often inherit biases from the data, leading to disparities in performance across various subgroups. For example, chest X-ray classifiers…
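A minimal sketch of how such performance disparities can be quantified, assuming binary labels, classifier scores, and one demographic attribute per example (the per-subgroup AUC gap used here is just one simple disparity measure among many):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def subgroup_auc(y_true, y_score, groups):
    """Compute AUC per demographic subgroup plus a best-worst gap.

    y_true: binary labels, y_score: classifier scores,
    groups: subgroup label per example (e.g. sex or age band).
    """
    y_true, y_score, groups = map(np.asarray, (y_true, y_score, groups))
    per_group = {}
    for g in np.unique(groups):
        mask = groups == g
        if len(np.unique(y_true[mask])) < 2:
            continue  # AUC is undefined when a subgroup has one class
        per_group[g] = roc_auc_score(y_true[mask], y_score[mask])
    # The spread between best- and worst-served subgroups is a simple
    # summary of disparity; a fair model keeps this gap small.
    gap = max(per_group.values()) - min(per_group.values())
    return per_group, gap
```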


VEnhancer: A Generative Space-Time Enhancement Method for Video Generation

Recent advancements in video generation have been driven by large models trained on extensive datasets, employing techniques like adding layers to existing models and joint training. Some approaches use multi-stage processes, combining base models with frame interpolation and super-resolution. Video Super-Resolution (VSR) enhances low-resolution videos, with newer techniques using varied degradation models to better mimic…
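To make the degradation-model idea concrete, here is a toy pipeline for synthesizing a low-resolution training frame from a high-resolution one. The blur/downsample/noise/compression stages and their parameters are illustrative assumptions; real VSR pipelines randomize the order and strength of such stages, and this is not VEnhancer's actual training recipe:

```python
import numpy as np
import cv2  # OpenCV: blurring, resizing, JPEG round-trips

def degrade_frame(frame: np.ndarray, scale: int = 4) -> np.ndarray:
    """Synthesize a degraded low-res frame: blur -> downsample ->
    noise -> JPEG compression (a common VSR degradation recipe)."""
    h, w = frame.shape[:2]
    blurred = cv2.GaussianBlur(frame, (5, 5), 1.5)
    low_res = cv2.resize(blurred, (w // scale, h // scale),
                         interpolation=cv2.INTER_AREA)
    noisy = np.clip(low_res + np.random.normal(0.0, 5.0, low_res.shape),
                    0, 255).astype(np.uint8)
    # A JPEG round-trip adds realistic compression artifacts.
    _, buf = cv2.imencode(".jpg", noisy, [int(cv2.IMWRITE_JPEG_QUALITY), 70])
    return cv2.imdecode(buf, cv2.IMREAD_COLOR)
```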


Revolutionising Visual-Language Understanding: VILA 2’s Self-Augmentation and Specialist Knowledge Integration

The field of language models has seen remarkable progress, driven by transformers and scaling efforts. OpenAI’s GPT series demonstrated the power of increasing parameters and high-quality data. Innovations like Transformer-XL expanded context windows, while models such as Mistral, Falcon, Yi, DeepSeek, DBRX, and Gemini pushed capabilities further. Visual language models (VLMs) have also advanced rapidly…


Visual Haystacks Benchmark: The First “Visual-Centric” Needle-In-A-Haystack (NIAH) Benchmark to Assess LMMs’ Capability in Long-Context Visual Retrieval and Reasoning

A significant challenge in the field of visual question answering (VQA) is the task of Multi-Image Visual Question Answering (MIQA). This involves generating relevant and grounded responses to natural language queries based on a large set of images. Existing Large Multimodal Models (LMMs) excel in single-image visual question answering but face substantial difficulties when queries…
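As a minimal sketch of the retrieval half of such a needle-in-a-haystack task, assume the query and every candidate image have already been embedded (for instance by a CLIP-style encoder); the check below asks whether the single relevant "needle" image surfaces in the top-k. This illustrates the task setup, not the benchmark's official harness:

```python
import numpy as np

def niah_retrieval_hit(query_emb, image_embs, needle_idx, k=1):
    """Does retrieval surface the one relevant image in a large haystack?

    query_emb: (d,) embedding of the question,
    image_embs: (N, d) embeddings of the candidate images,
    needle_idx: index of the image that actually answers the query.
    """
    # Cosine similarity between the query and every image.
    q = query_emb / np.linalg.norm(query_emb)
    imgs = image_embs / np.linalg.norm(image_embs, axis=1, keepdims=True)
    sims = imgs @ q
    return needle_idx in np.argsort(-sims)[:k]
```

Sweeping the haystack size N upward then measures how retrieval holds up as the image set grows, which is the long-context behavior the benchmark targets.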


From Diagrams to Solutions: MAVIS’s Three-Stage Framework for Mathematical AI

Large Language Models (LLMs) and their multi-modal counterparts (MLLMs) have made significant strides in advancing artificial general intelligence (AGI) across various domains. However, these models face a significant challenge in the realm of visual mathematical problem-solving. While MLLMs have demonstrated impressive capabilities in diverse tasks, they struggle to fully utilize their potential when confronted with…


A Decade of Transformation: How Deep Learning Redefined Stereo Matching in the Twenties

A fundamental topic in computer vision for nearly half a century, stereo matching involves computing dense disparity maps from a pair of rectified images. It plays a critical role in many applications, including autonomous driving, robotics, and augmented reality, among many others. Existing surveys categorize end-to-end architectures by their cost-volume computation and optimization methodologies into 2D…
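For contrast with those learned, end-to-end architectures, the sketch below implements the classical formulation: sum-of-absolute-differences (SAD) block matching along the scanlines of a rectified grayscale pair. Deep networks replace this hand-crafted matching cost with learned features and differentiable cost-volume aggregation, but the underlying disparity-as-matching formulation is the same:

```python
import numpy as np

def block_matching_disparity(left, right, max_disp=64, patch=5):
    """Classical SAD block matching on a rectified grayscale pair.

    For each pixel in the left image, slide a patch along the same
    scanline in the right image and keep the lowest-cost offset.
    (Plain loops for clarity; this is illustrative, not fast.)
    """
    left = left.astype(np.float32)
    right = right.astype(np.float32)
    h, w = left.shape
    r = patch // 2
    disp = np.zeros((h, w), dtype=np.float32)
    for y in range(r, h - r):
        for x in range(r, w - r):
            ref = left[y - r:y + r + 1, x - r:x + r + 1]
            best_cost, best_d = np.inf, 0
            # Rectification confines the search to one scanline.
            for d in range(min(max_disp, x - r) + 1):
                cand = right[y - r:y + r + 1, x - d - r:x - d + r + 1]
                cost = np.abs(ref - cand).sum()
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp
```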


Enhancing Vision-Language Models: Addressing Multi-Object Hallucination and Cultural Inclusivity for Improved Visual Assistance in Diverse Contexts

The research on vision-language models (VLMs) has gained significant momentum, driven by their potential to revolutionize various applications, including visual assistance for visually impaired individuals. However, current evaluations of these models often fail to account for the complexities introduced by multi-object scenarios and diverse cultural contexts. Two notable studies shed light on these…


MG-LLaVA: An Advanced Multi-Modal Model Adept at Processing Visual Inputs of Multiple Granularities, Including Object-Level Features, Original-Resolution Images, and High-Resolution Data

Multi-modal Large Language Models (MLLMs) have various applications in visual tasks. MLLMs rely on visual features extracted from an image to understand its content. When a low-resolution image with fewer pixels is provided as input, these models have less information to work with. Due to this limitation, these models often need to…
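The toy module below sketches one way such multi-granularity inputs can be combined: project low-resolution global tokens, high-resolution tokens, and object-level region features into a shared embedding space and concatenate them into one visual sequence for the language model. The dimensions, names, and plain-concatenation scheme are assumptions for illustration, not MG-LLaVA's actual architecture:

```python
import torch
import torch.nn as nn

class MultiGranularityFusion(nn.Module):
    """Fuse visual features of several granularities (illustrative)."""

    def __init__(self, d_low=1024, d_high=1024, d_obj=256, d_llm=4096):
        super().__init__()
        # One projection per granularity into the LLM embedding space.
        self.proj_low = nn.Linear(d_low, d_llm)
        self.proj_high = nn.Linear(d_high, d_llm)
        self.proj_obj = nn.Linear(d_obj, d_llm)

    def forward(self, low_tokens, high_tokens, obj_feats):
        # Inputs are (batch, tokens, dim); concatenate projected tokens
        # along the sequence axis so the LLM sees all granularities.
        return torch.cat([
            self.proj_low(low_tokens),
            self.proj_high(high_tokens),
            self.proj_obj(obj_feats),
        ], dim=1)
```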
