Google launches AI tools for practicing languages through personalized lessons

On Tuesday, Google released three new AI experiments aimed at helping people learn to speak a new language in a more personalized way. While the experiments are still in their early stages, it's possible that the company is looking to take on Duolingo with the help of Gemini, Google's multimodal large language model.
The first experiment helps you quickly learn specific phrases you need in the moment, while the second helps you sound less formal and more like a local.
With the third experiment you can use your camera to learn new words based on your environment.

Google notes that one of the most frustrating parts of learning a new language is finding yourself in a situation where you need a specific phrase you haven't learned yet.
With the new “Tiny Lesson” experiment, you can describe a situation, such as “finding a lost passport,” to receive vocabulary and grammar tips tailored to the context. You can also get suggestions for responses, such as “I don’t know where I lost it” or “I want to report it to the police.”
The next experiment, “Slang Hang,” aims to help people sound less like a textbook when speaking a new language. Google says that when you learn a new language, you often learn to speak formally, so it's experimenting with a way to teach people to speak more colloquially and with local slang.

With this feature, you can generate a realistic conversation between native speakers and see how the dialogue unfolds one message at a time. For example, you can learn through a conversation in which a street vendor chats with a customer, or one in which two long-lost friends reunite on the metro. You can hover over terms you're not familiar with to learn what they mean and how they're used.
Google says the experiment occasionally misuses certain slang and sometimes makes up words, so users need to cross-reference them with reliable sources.

With the third experiment, “Word Cam,” you can take a photo of your surroundings, after which Gemini will detect objects and label them in the language you're learning. The feature also gives you additional words you can use to describe those objects.
Google says that sometimes you just need words for the things in front of you, because it can show you how much you don't know yet. For example, you may know the word for “window,” but you might not know the word for “blinds.”
The company notes that the idea behind these experiments is to see how AI can be used to make independent learning more dynamic and personalized.
The new experiments support the following languages: Arabic, Chinese (China), Chinese (Hong Kong), Chinese (Taiwan), English (AU), English (UK), English (US), French (Canada), French (France), Hebrew, Hindi, Portuguese (Brazil), Portuguese (Portugal), Russian, Spanish (Latam), Spanish (Spain), and Turkish. The tools are accessible via Google Labs.