A team of AI researchers at the Mohamed bin Zayed University of AI in Abu Dhabi, working with a colleague from the University of ...
These performance benchmarks highlight the potential of self-improving systems to redefine the landscape of AI research. By ...
Microsoft Research Asia has developed a training method called rStar-Math that enables small language models to match or exceed the math performance of much larger AI systems, such as OpenAI's ...
Large Language Models (LLMs) have shown remarkable capabilities across diverse natural language processing tasks, from text generation to contextual reasoning. However, their efficiency is often ...
Multi-hop queries have long been difficult for LLM agents, since answering them requires multiple reasoning steps and information drawn from different sources. They are crucial for analyzing a model’s ...
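To make the term concrete, here is a minimal, hypothetical sketch of why a multi-hop question cannot be answered with a single lookup: an intermediate result must be carried from one source into a second one. The sources, question, and helper functions below are illustrative only and are not drawn from any of the systems described in these articles.

```python
# Hypothetical multi-hop example: "Which country was Marie Curie born in?"
# No single source holds the answer; the birthplace found in the first hop
# must be fed into the second hop.

# Two separate "sources" of facts (stand-ins for documents, tools, or APIs).
SOURCE_BIOGRAPHIES = {
    "Marie Curie": {"birthplace": "Warsaw"},
}
SOURCE_GEOGRAPHY = {
    "Warsaw": {"country": "Poland"},
}


def hop_1(person: str) -> str:
    """First reasoning step: look up the person's birthplace."""
    return SOURCE_BIOGRAPHIES[person]["birthplace"]


def hop_2(city: str) -> str:
    """Second reasoning step: look up which country that city is in."""
    return SOURCE_GEOGRAPHY[city]["country"]


def answer_multi_hop(person: str) -> str:
    """Chain the two hops, passing the intermediate result between them."""
    city = hop_1(person)
    return hop_2(city)


if __name__ == "__main__":
    print(answer_multi_hop("Marie Curie"))  # -> "Poland"
```

Benchmarks built around questions like this probe whether an agent can plan the intermediate step and route each sub-question to the right source, rather than retrieving a single passage.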
Microsoft enhances the capabilities of small language models (SLMs) with rStar-Math. The technique allows SLMs to compete with or even surpass the math reasoning ...
A team of math and AI researchers at Microsoft Asia has designed and developed a small language model (SLM) that can be used to solve math problems. The group has posted a paper on the arXiv preprint ...