Microsoft launched several new "open" AI models on Wednesday, the most capable of which is competitive with OpenAI's o3-mini on at least one benchmark.
All of the new models, Phi 4 mini reasoning, Phi 4 reasoning, and Phi 4 reasoning plus, are permissively licensed "reasoning" models, meaning they can spend more time fact-checking solutions to complex problems. They expand Microsoft's Phi family of "small models," which the company launched a year ago to offer a foundation for AI developers building applications at the edge.
Phi 4 mini reasoning was trained on roughly one million synthetic math problems generated by R1, the reasoning model from Chinese AI startup DeepSeek. At around 3.8 billion parameters in size, Phi 4 mini reasoning is designed for educational applications, Microsoft says, such as "embedded tutoring" on lightweight devices.
Parameters roughly correspond to a model's problem-solving abilities, and models with more parameters generally perform better than those with fewer parameters.
Phi 4 reasoning, a 14-billion-parameter model, was trained using "high-quality" web data as well as "curated demonstrations" from OpenAI's aforementioned o3-mini. It is best suited for math, science, and coding, Microsoft says.
As for Phi 4 reasoning plus, it is Microsoft's previously released Phi-4 model adapted into a reasoning model to achieve better accuracy on particular tasks. Microsoft claims that Phi 4 reasoning plus approaches the performance level of R1, a model with significantly more parameters (671 billion). The company's internal benchmarking also has Phi 4 reasoning plus matching o3-mini on OmniMath, a math skills test.
Phi 4 mini reasoning, Phi 4 reasoning, and Phi 4 reasoning plus are available on the AI dev platform Hugging Face, accompanied by detailed technical reports.
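For developers who want to try the models, a minimal sketch of loading one with the Hugging Face `transformers` library is below. The repository ID `microsoft/Phi-4-mini-reasoning` is an assumption based on Microsoft's naming on Hugging Face; check the model hub for the exact IDs, and note that generation requires downloading several gigabytes of weights.

```python
# Hypothetical sketch: loading Phi 4 mini reasoning via Hugging Face transformers.
# The repo ID below is an assumption; verify it on huggingface.co before use.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "microsoft/Phi-4-mini-reasoning"  # assumed repository ID


def load_phi(model_id: str = MODEL_ID):
    """Download (on first call) and return the tokenizer and model."""
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype="auto",   # use the checkpoint's native precision
        device_map="auto",    # place weights on GPU if available
    )
    return tokenizer, model


if __name__ == "__main__":
    tokenizer, model = load_phi()
    prompt = "Solve step by step: if 3x + 5 = 20, what is x?"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=256)
    print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Because these are reasoning models, expect the output to include intermediate working before the final answer, which is why a generous `max_new_tokens` budget is used.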
"Using distillation, reinforcement learning, and high-quality data, these [new] models balance size and performance," Microsoft wrote in a blog post. "They are small enough for low-latency environments yet maintain strong reasoning capabilities that rival much bigger models. This blend allows even resource-limited devices to perform complex reasoning tasks efficiently."