On September 12, during our section of the L’Altra Ràdio show (Radio 4, RNE), Javier Otero and Marcos Montero, Heads of Marketing and Digital Transformation at IThinkUPC, spoke about the large language models (LLMs) that power systems such as ChatGPT and Gemini, and explained the differences between classical models and the new reasoning models.
During the conversation, they highlighted that the first language models, like GPT, are based on predicting the next word to build coherent sentences but do not actually reason. In contrast, the new reasoning models, trained with millions of reasoning chains, are capable of solving complex problems much more effectively, although they need more time to respond since they “think” before giving an answer.
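The next-word prediction idea they describe can be illustrated with a deliberately tiny sketch: a bigram model that counts, in a toy corpus, which word most often follows each word, and then extends a sentence one predicted word at a time. This is an enormous simplification (real LLMs are neural networks trained on billions of tokens, not frequency tables), but the underlying principle of "predict the next word, append it, repeat" is the same.

```python
from collections import Counter, defaultdict

# Toy corpus; a real LLM learns from billions of tokens, not a few sentences.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count bigram frequencies: how often each word follows another.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the word most frequently observed after `word`."""
    return follows[word].most_common(1)[0][0]

def generate(start, length=4):
    """Greedily extend a sentence one predicted word at a time."""
    words = [start]
    for _ in range(length):
        words.append(predict_next(words[-1]))
    return " ".join(words)

print(generate("the"))
```

As the speakers note, a model built this way produces locally coherent text without any reasoning step; reasoning models add an intermediate "thinking" phase before committing to an answer, which is why they respond more slowly.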
They also explained that we can classify these models as either open source or closed. Closed models, like ChatGPT, offer speed and efficiency but lack transparency about their inner workings. Open models, like Meta’s Llama or Aina from the Barcelona Supercomputing Center, allow for greater adaptability and transparency—crucial aspects for researchers and organizations that want to control biases in the training data.
If you want to dive deeper into how these models work and their differences, we invite you to listen to the L’Altra Ràdio podcast (minute 6:33).