ABU DHABI, United Arab Emirates, May 14 (Bernama-BUSINESS WIRE) — The Technology Innovation Institute (TII), a leading global scientific research center and the applied research pillar of Abu Dhabi’s Advanced Technology Research Council (ATRC), today launched the second iteration of its renowned large language model (LLM), Falcon 2. The series debuts with two versions: Falcon 2 11B, a more efficient and accessible LLM with 11 billion parameters trained on 5.5 trillion tokens, and Falcon 2 11B VLM, a vision-to-language model (VLM) that converts visual inputs into textual outputs. Both models are multilingual, and Falcon 2 11B VLM notably stands out as TII’s first multimodal model and the only model currently in the top tier of the market with this image-to-text capability, marking a significant advancement in AI innovation.
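To make the announcement concrete, the sketch below shows how an openly released text model of this kind is typically queried through the Hugging Face transformers library. It is an illustrative example only: the repository id "tiiuae/falcon-11B", the precision, and the generation settings are assumptions rather than official TII instructions.

# Illustrative sketch only: querying an openly released 11B-parameter LLM with
# Hugging Face transformers. The repo id "tiiuae/falcon-11B" is an assumption.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tiiuae/falcon-11B"  # assumed Hugging Face repository id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half precision so an 11B model fits on a single large GPU
    device_map="auto",           # requires the `accelerate` package
)

prompt = "The Technology Innovation Institute in Abu Dhabi is"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=50, do_sample=False)
print(tokenizer.decode(output[0], skip_special_tokens=True))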
Tested against several prominent AI models in its class of pre-trained models, Falcon 2 11B surpasses the performance of Meta’s newly launched Llama 3 with 8 billion parameters (8B) and performs on par with Google’s Gemma 7B, which holds first place (Falcon 2 11B: 64.28 vs Gemma 7B: 64.29), as independently verified by Hugging Face, a US-based platform hosting an objective evaluation tool and global leaderboard for open LLMs. More importantly, Falcon 2 11B and 11B VLM are both open source, giving developers worldwide unrestricted access. In the near future, TII plans to broaden the Falcon 2 family of next-generation models with a range of sizes. These models will be further enhanced with advanced machine learning capabilities such as Mixture of Experts (MoE), aimed at pushing their performance to even more sophisticated levels.
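For context on the Mixture of Experts technique mentioned above: an MoE layer replaces a single dense feed-forward block with several smaller "expert" networks plus a gating network that routes each token to only a few of them, so model capacity grows without every parameter being used on every token. The toy top-2 router below, written in plain PyTorch, is a minimal sketch of the general idea and not TII's implementation; all layer sizes are made up.

# Toy top-2 Mixture-of-Experts layer (illustrative sketch, not TII code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoELayer(nn.Module):
    def __init__(self, d_model=512, d_ff=2048, n_experts=8, top_k=2):
        super().__init__()
        # Each expert is a small feed-forward network.
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        ])
        self.gate = nn.Linear(d_model, n_experts)  # router scores every expert per token
        self.top_k = top_k

    def forward(self, x):  # x: (num_tokens, d_model)
        scores = F.softmax(self.gate(x), dim=-1)
        weights, idx = scores.topk(self.top_k, dim=-1)         # keep only the best-scoring experts
        weights = weights / weights.sum(dim=-1, keepdim=True)  # renormalize their weights
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e  # tokens routed to expert e in this slot
                if mask.any():
                    out[mask] += weights[mask, slot:slot + 1] * expert(x[mask])
        return out

tokens = torch.randn(4, 512)     # a batch of 4 token embeddings
print(MoELayer()(tokens).shape)  # torch.Size([4, 512])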