Microsoft Phi-3 Launched As Company’s Smallest Open-Source AI Model To Date


Microsoft on Tuesday released Phi-3, its smallest artificial intelligence (AI) language model to date. Smaller AI models are significant because they can potentially run on smartphones. The latest AI model is the successor to Phi-2, which was released in December 2023, and was trained on a larger dataset with more parameters. The additional parameters help the AI model understand and respond to more complex questions than its predecessor. Microsoft also claims it performs on par with models trained using more than 10 times the number of parameters used for Phi-3.

A pre-print paper detailing the small language model (SLM) has been published on arXiv. However, since arXiv does not peer-review submissions, the validity of the claims is yet to be independently verified. AI enthusiasts can try out the model through Azure and Ollama. A Hugging Face listing for Phi-3-mini has also been created, but the weights are yet to be released.
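For readers who want to experiment once the weights do land on Hugging Face, a minimal sketch using the transformers library might look like the following. The repository id microsoft/Phi-3-mini-4k-instruct is an assumption here, not something confirmed in the announcement; check the official listing for the actual name.

```python
# Minimal sketch: loading and querying Phi-3-mini via Hugging Face transformers.
# The repository id below is an assumption; weights were not yet public at the
# time of writing. Some model repos also require trust_remote_code=True.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/Phi-3-mini-4k-instruct"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Explain what a small language model is in one sentence."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```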

phi-3 is here, and it’s … good :-).

I made a quick short demo to give you a feel of what phi-3-mini (3.8B) can do. Stay tuned for the open weights release and more announcements tomorrow morning!

(And ofc this wouldn’t be complete without the usual table of benchmarks!) pic.twitter.com/AWA7Km59rp

— Sebastien Bubeck (@SebastienBubeck) April 23, 2024

On performance, the AI model has been trained on 3.3 trillion tokens, the units of data (words, phrases, or fragments of words) fed to the system to train an AI model. It also contains 3.8 billion parameters, which indicate the level of complexity the model can handle. Parameters are essentially the weighted connections inside a neural network; tuned during training, they encode how the model relates one piece of information to the context around it.
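As a rough illustration of what "tokens" means in practice, the sketch below uses an off-the-shelf tokenizer (GPT-2's, chosen only because it is freely downloadable; Phi-3's own tokenizer may split text differently) to show how a sentence breaks into the units a model is trained on.

```python
# Illustration only: how a tokenizer splits text into tokens.
# GPT-2's tokenizer is used because it is freely available; Phi-3's
# tokenizer may segment the same text differently.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokens = tokenizer.tokenize("Small language models can run on smartphones.")

# Prints something like ['Small', 'Ġlanguage', 'Ġmodels', ...],
# where 'Ġ' marks a leading space. A corpus of 3.3 trillion such
# tokens was reportedly used to train Phi-3-mini.
print(tokens)
print(len(tokens))
```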

Microsoft claims, based on internal benchmarking, that the chatbot rivals models such as Mixtral 8x7B and GPT-3.5, which are much larger than the SLM. The AI is aligned for a chat format, which means it can respond to conversational queries. “We also provide some initial parameter-scaling results with a 7B and 14B models trained for 4.8T tokens, called phi-3-small and phi-3-medium, both significantly more capable than phi-3-mini,” the tech giant says.
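Because the model is tuned for a chat format, queries are typically passed as a list of role-tagged messages rather than raw text. A hedged sketch, again assuming the hypothetical repository id used above:

```python
# Sketch of chat-format prompting, assuming the same hypothetical repo id.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3-mini-4k-instruct")

messages = [
    {"role": "user", "content": "What can small language models do on a phone?"},
]

# apply_chat_template wraps the messages in the special tokens the model
# was aligned with during instruction tuning, ready for generation.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
```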

Reuters reports that the AI model, designed to perform simpler tasks, is also hosted on Microsoft Azure and Ollama. The company is yet to share details of Phi-3-mini's open-source license. Notably, the Apache 2.0 license, under which Grok AI was recently released, allows both academic and commercial usage.
