AI in its Second Phase

Erik says: Lessons from MWC 2025 - Voice Control is AI's Worst Enemy

The clearest trend at the mobile trade fair was, unsurprisingly, AI. Development has entered its second phase: after simpler AI tools, manufacturers now plan to completely reshape the mobile interface and how we interact with it.


With the help of AI agents that not only (hopefully) give you smart answers but actually carry out actions on their own, our dependence on apps is set to decrease. This is common to several of the AI initiatives the major manufacturers presented.

Tell the AI that you want a reservation at a Spanish restaurant at 7 pm for four people, and it will pick the right apps and handle the entire process. Tell the AI to turn off notifications from an annoying app, and it will do so. Ask for a coffee, and the AI will handle that too. The common thread here is voice control. And it is impractical, not only on the show floor, where those demonstrating the feature have to half-shout into the phone's microphone to be heard.

In addition to their own initiatives, the manufacturers all rely heavily on Google's Gemini AI, again with voice as the main way of giving instructions.

Rarely particularly suitable

We have lived with voice assistants for quite a while now. Google Assistant (now Gemini) and Siri have been around for almost ten and nearly fourteen years, respectively. The first AirPods relied entirely on Siri and voice control to let you change the song or adjust the volume, provided you didn't want to take the phone out of your pocket to do it. Today's AirPods have buttons so you can do it silently, because it turned out there are quite a few situations where we don't want to talk out loud. On public transport, as in an open-plan office, it disturbs the surroundings more than it helps you as an individual.

In practice, today's way of interacting with the phone is preferable to the new AI initiatives, precisely because the mobile is such a personal gadget. Personally, I use voice control today almost exclusively for simple timers when I'm cooking and for short commands to the smart speakers at home.

Apps should function but not be seen

A challenge all mobile manufacturers face in this AI development is what role apps should play. Their functions should be made available to us as users, but in practice we shouldn't have to worry about which app does what; it should just happen in the background. Both manufacturers and app developers have a task ahead of them: making this useful and clear while, in practice, being faster and easier than what we already have.