LLaVA (Large Language and Vision Assistant) is a large multimodal model designed for general-purpose visual and language understanding. It connects a vision encoder to the Vicuna large language model (LLM) and is trained end-to-end. LLaVA demonstrates impressive chat capabilities, at times exhibiting behavior similar to multimodal GPT-4, and sets a new state-of-the-art accuracy on the ScienceQA benchmark. A key idea behind LLaVA is its training data: language-only GPT-4 is used to generate multimodal language-image instruction-following data. LLaVA is open source, with publicly available data, models, and code, and it is fine-tuned for tasks such as visual chat and science-domain reasoning, achieving strong performance in both areas.
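To make the "visual chat" usage concrete, here is a minimal sketch of querying a LLaVA checkpoint through the Hugging Face transformers integration (LlavaForConditionalGeneration). The checkpoint name, image URL, and prompt template are illustrative assumptions; consult the model card of the checkpoint you actually use for the expected format.

```python
# Minimal sketch: ask a LLaVA checkpoint a question about an image.
# Assumes the Hugging Face transformers LLaVA integration; the model id,
# image URL, and prompt template below are examples, not the only options.
import requests
import torch
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration

model_id = "llava-hf/llava-1.5-7b-hf"  # example checkpoint name
processor = AutoProcessor.from_pretrained(model_id)
model = LlavaForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Load an image and pose a question about it (visual chat).
image = Image.open(
    requests.get("https://example.com/cat.jpg", stream=True).raw  # placeholder URL
)
prompt = "USER: <image>\nWhat is unusual about this picture? ASSISTANT:"

inputs = processor(images=image, text=prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(processor.decode(output[0], skip_special_tokens=True))
```

The processor handles both the image preprocessing for the vision encoder and the tokenization of the text prompt, which mirrors the model's design of feeding visual features into the LLM alongside the instruction.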