Google's two new AI milestones are LaMDA and MUM

For more than a decade, Google has invested in language research. Its recent work on machine learning techniques for understanding search queries and translating between languages has enabled new ways of organising information through language, and these breakthroughs point to substantial investment and future gains.

LaMDA (Language Model for Dialogue Applications) is Google's latest research effort on conversation. Human conversation is open-ended: it can jump quickly from one topic to another, which is difficult for modern conversational agents, or chatbots, whose dialogues are typically linear and predefined.

LaMDA, by contrast, can converse fluidly on a multitude of topics, opening up more natural ways of interacting with technology and entirely new applications.

Like recent language models such as GPT-3 and BERT, LaMDA has been in development for a long time and is built on Transformer, an open-source neural network architecture from Google Research. It is this architecture that allows a model to be trained to read language, learn how words relate to one another and, consequently, predict what comes next.
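The core operation of the Transformer architecture mentioned above is attention, which relates every word (token) in a sequence to every other. The following is a minimal illustrative sketch of scaled dot-product self-attention in NumPy, not Google's implementation; the token embeddings are random placeholders.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Each query attends to all keys; the output is a weighted
    average of the values, so every token mixes in context from
    the whole sequence."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                      # pairwise query-key similarity
    scores -= scores.max(axis=-1, keepdims=True)         # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)       # softmax over keys
    return weights @ V

# Three toy "token" embeddings of dimension 4 (random for illustration).
rng = np.random.default_rng(0)
x = rng.normal(size=(3, 4))
out = scaled_dot_product_attention(x, x, x)              # self-attention: Q = K = V
print(out.shape)                                         # (3, 4)
```

Stacking many such attention layers (plus feed-forward layers) is what lets Transformer-based models capture the relationships between words that the article describes.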

What distinguishes LaMDA from other recent language models is that it was trained on dialogue, picking up qualities such as sensibleness and the nuances that distinguish conversation from other forms of language.

Since 2020, Google has been developing Transformer-based language models trained on dialogue. These models can talk about virtually anything, and LaMDA in particular can be fine-tuned to improve the specificity of its responses.

In parallel, and at a slower pace, Google is developing MUM, a model that promises to supersede BERT (the model currently used in Google Search). Also based on the Transformer architecture, MUM aims to answer complex questions in more than 75 languages and better understand the context of each topic.

How does LaMDA work?

Today's voice assistants rely on programmed responses, and maintaining a non-linear conversation is difficult for them. Human conversations do not run from point A to point B; they touch on different topics and take unexpected paths depending on where they started. So although assistants are becoming increasingly sophisticated, they still rest on simple conversational systems.
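The "simple conversational systems" described above can be pictured as a state machine with fixed replies and fixed transitions. This toy sketch (invented flow and wording, purely illustrative) shows why any off-script input breaks the dialogue:

```python
# A minimal rule-based dialogue agent: each state has a fixed reply and
# fixed transitions, so the conversation can only follow predefined paths.
FLOW = {
    "start":   {"reply": "Hi! Weather or news?",
                "next": {"weather": "weather", "news": "news"}},
    "weather": {"reply": "It is sunny today.", "next": {}},
    "news":    {"reply": "Nothing new under the sun.", "next": {}},
}

def respond(state, user_input):
    """Return (reply, new_state); anything off-script gets a canned fallback."""
    nxt = FLOW[state]["next"].get(user_input.strip().lower())
    if nxt is None:
        return "Sorry, I didn't understand.", state   # the flow cannot adapt
    return FLOW[nxt]["reply"], nxt

reply, state = respond("start", "weather")
print(reply)   # It is sunny today.
```

A user who suddenly asks about Pluto simply hits the fallback; LaMDA's goal is precisely to handle such unexpected turns instead of failing on them.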

LaMDA aims to correct this by optimising for the sensibleness of a conversation and by being flexible enough to follow whichever direction the user steers it in.

Voice assistants are not its only application, however: LaMDA could also power chatbots, especially on customer-facing websites, where a better AI model translates directly into a better customer experience.

LaMDA is not yet ready for market, though: qualities such as interestingness and factual groundedness still need work, as does combating bias, a persistent problem in AI and machine learning. In other words, models can discriminate on race, gender or religion, denying users a level playing field.

Google is making great strides against bias, especially in the linguistic area, but there is still a long way to go before the system is perfected.

What does LaMDA hope to achieve?

Sensibleness and specificity are the main qualities this model is judged on. Its creators are also exploring other dimensions such as "interestingness", measuring whether the responses it generates are witty and unexpected. There is special interest in groundedness too: investigating ways to make the language not only convincing, but also correct.

Language is the greatest communication tool available to humans, but like everything else it can be misused, and it is important to check whether LaMDA complies with Google's AI principles. So far, almost every large language model has been found to internalise bias, produce hate speech or replicate false information.

Google, along with other companies, continues to do its utmost to build technologies like LaMDA with these risks minimised. Having worked with machine learning for many years, these large companies are well acquainted with the unfair biases it often produces. For this reason, Google has open-sourced its models so that outside researchers can use and analyse them, and LaMDA has been scrutinised at every step of its development as conversational skills were added incrementally.

Naturally, Google will fold the advances made with this model, and all its conversational capabilities, into its assistant, promising a conversational approach that interprets human interaction more effectively.

The first use case

LaMDA attracted media attention after a first demonstration in which it adopted the role of the planet Pluto and answered as if it were the planet itself. This was followed by another conversation in which the AI played a paper aeroplane. In both cases the conversation was fluid, the prompted keywords did not stand out from the rest of the dialogue, and the results for coherence were very good.

The conclusions from this first contact are impressive, especially how easily the model adapts to any topic of conversation while remaining sensible, fluent and interesting. Still, this was a first test, and the model remains under investigation.

Google's main goal is to change the perception of chatbots, transforming a rigid, cold interaction into a more fluid and productive conversation that delivers a better customer experience.

After testing, Google claims that LaMDA grasps more nuance than current algorithms, avoids giving repetitive answers, and understands the context of the conversation.

MUM: the next milestone in understanding information

In parallel, MUM is a model capable of transforming the way Google helps with complex tasks in more than 75 languages, generating coherent answers. It works on both text and images, and is considered the most powerful algorithm Google has created.

MUM stands for Multitask Unified Model and is positioned as the successor to BERT, the neural network currently used in Google Search. The new model is likewise based on the Transformer architecture.

The company also claims the model will answer complex questions and better understand the context associated with each topic. MUM's main purpose is to remove the language barrier, finding information more efficiently and surfacing more appropriate content regardless of the language in which the search is conducted.

The model seeks to make results as accurate as possible regardless of how much information exists in the language of the search. For example, even if you search in Spanish for a topic covered only in Chinese, MUM will surface the available information on that topic whatever language it is in.

In addition, because it is multimodal, MUM can associate text with images and video, meaning searches and queries can also be based on images. Google claims the model can combine the context of both forms and answer accordingly.
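One common way to picture this multimodal matching (an illustrative assumption, not Google's published method) is a shared embedding space: text and images map to vectors, a combined query fuses both, and documents are ranked by similarity. The vectors and document names below are made up for the sketch:

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy shared embedding space: documents in different languages live
# alongside image and text embeddings (all vectors invented here).
docs = {
    "hiking guide (es)": np.array([0.9, 0.1, 0.2]),
    "recipe blog (en)":  np.array([0.1, 0.9, 0.1]),
}
image_of_boots = np.array([0.8, 0.0, 0.3])   # e.g. a photo of hiking boots
text_query     = np.array([0.7, 0.2, 0.1])   # e.g. "can I use these for this trail?"
query = (image_of_boots + text_query) / 2    # fuse the two modalities

best = max(docs, key=lambda d: cosine(query, docs[d]))
print(best)   # hiking guide (es)
```

Because ranking happens in the embedding space rather than on surface words, the best match can be a document in any language, which is exactly the barrier MUM aims to remove.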


Conversational AI and search AI are thus both under active development at Google. The company's presentations are a preview of what is to come in the field, with some promising solutions.

LaMDA will help many companies transform their customer experience by bringing genuinely human-like conversation to their chatbots, with a strong focus on correcting major AI problems such as bias and on exploring new dimensions such as interestingness.

Finally, Google also presented MUM, the model behind the future neural system for Google Search, which will answer complex questions in different languages more easily by understanding the context behind each question.
