Google DeepMind Uses LLM to Solve a Long-Standing Open Math Problem

Google DeepMind has successfully used a large language model (LLM) to solve a renowned open problem in pure mathematics. In a paper published in Nature, the researchers say this is the first time an LLM has been used to discover a solution to a long-standing scientific puzzle, producing verifiable and valuable information that was previously unknown. Pushmeet Kohli, vice president of research at Google DeepMind and a coauthor, said, “It’s not in the training data—it wasn’t even known.”

While LLMs are more commonly associated with generating content than with uncovering new information, Google DeepMind’s new tool, FunSearch, could change that perception. It shows that these models can make genuine discoveries, provided they are guided in a specific way and the bulk of their output is discarded.

FunSearch builds on DeepMind’s record of using AI for fundamental advances in mathematics and computer science, following AlphaTensor, which found faster ways to multiply matrices, and AlphaDev, which optimized key sorting algorithms.

Unlike its predecessors, FunSearch takes a distinctive approach: it pairs a large language model called Codey with additional systems that discard incorrect answers and feed valid ones back into the workflow. The research team, led by Pushmeet Kohli, starts by writing the problem out as a Python program with a key function left incomplete, and FunSearch then proposes code to fill it in through trial and error.
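As a rough illustration of what such a formulation might look like (the function names below are hypothetical, not DeepMind’s actual code), the problem can be written as a Python program in which a fixed, hand-written scoring function stays unchanged and a stub function is left for the model to rewrite:

```python
# Illustrative sketch only (hypothetical names, not DeepMind's actual code):
# the problem "specification" is an ordinary Python file in which a fixed
# evaluate() scores candidates, and solve() is the stub that the language
# model is repeatedly asked to rewrite.

def solve(n: int) -> list:
    """Stub to be rewritten by the LLM: return a candidate construction."""
    return []  # trivial placeholder; proposed rewrites should do better


def evaluate(n: int) -> float:
    """Fixed scorer: run the candidate and return a numeric score.
    A real evaluator would also verify the construction is valid and give
    invalid or crashing candidates the worst possible score."""
    try:
        candidate = solve(n)
    except Exception:
        return float("-inf")
    return float(len(candidate))  # e.g. reward larger constructions


print(evaluate(8))  # scores the current (trivial) solve(); prints 0.0
```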

In the FunSearch process, Codey suggests code to complete the program, while a second algorithm checks and scores those suggestions. The best-scoring proposals, even if not yet correct, are fed back to Codey, creating a continuous improvement loop. After millions of suggestions and repeated rounds of this process, FunSearch generated code that yielded a correct and previously unknown solution to the cap set problem, a hard problem in pure mathematics.
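A heavily simplified sketch of that loop, with a hypothetical llm_propose standing in for a call to Codey and a random score standing in for the real evaluator, might look like this:

```python
import random

def llm_propose(parent_source: str) -> str:
    """Hypothetical stand-in for asking Codey to rewrite a program.
    Here it simply echoes the parent so the sketch runs end to end."""
    return parent_source

def score(program_source: str) -> float:
    """Hypothetical stand-in for the automated evaluator: higher is better."""
    return random.random()

def funsearch_loop(seed_program: str, iterations: int = 1000, pool_size: int = 10):
    """Keep a small pool of the best-scoring programs and keep mutating them."""
    pool = [(score(seed_program), seed_program)]
    for _ in range(iterations):
        _, parent = random.choice(pool)        # pick a promising program
        child = llm_propose(parent)            # ask the LLM for a variation
        pool.append((score(child), child))     # score it, even if imperfect
        pool.sort(key=lambda entry: entry[0], reverse=True)
        pool = pool[:pool_size]                # retain only the best candidates
    return pool[0]                             # best (score, program) pair found

best_score, best_program = funsearch_loop("def solve(n): return []")
```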

The cap set problem involves finding the largest possible set of points in a high-dimensional grid in which no three points lie on a line, a question in combinatorics that mathematicians have grappled with for years. What sets FunSearch apart is that the code it generates is comprehensible and interpretable by humans, offering a promising paradigm for harnessing the capabilities of large language models across a range of problem-solving domains.
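To make the object concrete: a cap set is a collection of points in the grid {0, 1, 2}^n with no three distinct points on a line, and in this grid three distinct points are collinear exactly when their coordinate-wise sum is divisible by 3. A small checker (an illustration, not DeepMind’s code) captures the property such an evaluator would need to verify:

```python
from itertools import combinations

def is_cap_set(points: list[tuple[int, ...]]) -> bool:
    """Check the cap set property: no three distinct points are collinear.

    Points live in the grid {0, 1, 2}^n; three distinct points there lie on a
    line exactly when their coordinate-wise sum is 0 modulo 3.
    """
    for a, b, c in combinations(points, 3):
        if all((x + y + z) % 3 == 0 for x, y, z in zip(a, b, c)):
            return False
    return True

# Dimension-2 example: these four points form a cap set (no three on a line).
print(is_cap_set([(0, 0), (0, 1), (1, 0), (1, 1)]))  # True
print(is_cap_set([(0, 0), (1, 1), (2, 2), (0, 1)]))  # False: first three are a line
```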

FunSearch’s adaptability was further demonstrated by applying it to the bin packing problem, a challenging mathematical task with practical applications across computer science. FunSearch not only found solutions but also outperformed methods devised by humans, highlighting its potential in diverse problem-solving scenarios.
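As background on the task itself (the code below is the classic best-fit baseline, not the heuristic FunSearch evolved), online bin packing assigns each arriving item to a bin without exceeding capacity, and the interesting part is the rule that chooses the bin; FunSearch’s contribution was a better, machine-discovered version of that rule:

```python
def best_fit(items: list[float], capacity: float = 1.0) -> list[list[float]]:
    """Classic best-fit heuristic: put each item into the open bin that would
    leave the least spare room, opening a new bin only when nothing fits."""
    bins: list[list[float]] = []        # contents of each open bin
    remaining: list[float] = []         # spare capacity of each open bin
    for item in items:
        best_index, best_left = -1, capacity + 1.0
        for i, space in enumerate(remaining):
            left = space - item
            if 0.0 <= left < best_left:
                best_index, best_left = i, left
        if best_index == -1:            # no existing bin can hold this item
            bins.append([item])
            remaining.append(capacity - item)
        else:
            bins[best_index].append(item)
            remaining[best_index] -= item
    return bins

packed = best_fit([0.5, 0.7, 0.5, 0.2, 0.4, 0.2, 0.5, 0.1])
print(len(packed))  # number of bins this baseline needs
```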

Mathematicians emphasize the need to integrate large language models into research workflows with care. Even so, the success of FunSearch marks a noteworthy advance in effectively leveraging AI to solve complex problems.

WRITTEN BY

Team Eela
