In a bid to address the AI ‘black box’ problem, Google has released a new AI model called “Gemini 2.0 Flash Thinking,” trained to generate its thinking process alongside its responses. It is currently available as an experimental model in Google AI Studio and for direct use through the Gemini API.
The model shows “promising results” when inference-time computation is increased, Jeff Dean, chief scientist at Google DeepMind, Google’s AI research division, said. Inference time is the time a model takes to produce output predictions from input data.
Google isn’t the first company to come out with a reasoning model. Earlier this year, OpenAI released its reasoning model, o1. Like Google, OpenAI found that its reasoning model performs better when it has more time to think.
How the model works:
The company says the model is “trained to use thoughts in a way that leads to stronger reasoning capabilities.” It can understand code, solve geometry and math problems, and generate questions adapted to a specific level of knowledge.
For example, if a user asks the model to create questions for the US Advanced Placement (AP) Physics exam, it first identifies the topics an AP Physics class would cover, develops a scenario where specific concepts apply, and spells out exam-relevant information (such as any assumptions it made). Before presenting the question to the user, the model also reviews both the question and its solution.
The model currently accepts up to 32,000 tokens of input at a time and caps its outputs at 8,000 tokens. It takes only text and images as input and returns responses only as text.
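For illustration, here is a minimal sketch of how a developer might call the model through the Gemini API’s Python SDK (google-generativeai). The model identifier “gemini-2.0-flash-thinking-exp” and the output-token setting are assumptions based on the limits described above, not confirmed details from Google.

```python
# A minimal sketch, assuming the google-generativeai Python SDK and an
# experimental model ID of "gemini-2.0-flash-thinking-exp" (assumed here;
# check Google AI Studio for the actual identifier).
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder API key

model = genai.GenerativeModel(
    model_name="gemini-2.0-flash-thinking-exp",
    # Cap output at the 8,000-token limit described above.
    generation_config={"max_output_tokens": 8000},
)

# Text-only prompt; the model also accepts images, but replies only in text.
response = model.generate_content(
    "Create one AP Physics exam question on rotational dynamics, "
    "state the assumptions behind it, and solve it."
)

print(response.text)
```

How the model’s thinking trace is surfaced in the API response may differ from the final answer text; the sketch above simply prints the response text.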
Why it matters:
One of the key challenges regulatory bodies across countries and sectors have pointed out in the context of AI is the lack of clarity around how models reach their decisions. This makes them cautious about the accuracy and fairness of AI output. To address this, governments have been looking at ways to make AI companies more transparent. For instance, earlier this year, the Indian Economic Advisory Council to the Prime Minister (EAC-PM) advised the government to open up the licensing of AI models’ core algorithms for external audits. The council said that AI factsheets for system audits by external experts would also “demystify the black box nature of AI,” as per a MediaNama report.
However, the council limited its suggestion to creating fact sheets of coding/revision control logs, data sources, training procedures, performance metrics, and known limitations. OpenAI’s and Google’s reasoning models go a step further: they give insight into the actual reasoning process of AI models. This can allow users and auditors to understand not just how companies build and train their models, but also how a model arrives at specific conclusions. As governments worldwide develop AI regulations, the ability to demonstrate transparent reasoning processes could help companies meet emerging requirements for AI explainability and accountability.