Human autonomy and AI: A shift in thinking
In today’s world, many people fall into two categories when it comes to interconnectivity: those who always like to be available, and those who don’t.
Problems arise when these two groups of people rely on and interact with the same technology, particularly when they use AI, such as daily task managers.
“If you send me a message on my phone and there is a virtual secretary that says ‘Philipp is not available right now’, then the level of freedom you have is how you negotiate with this AI secretary: whether it is so important to you that I should be interrupted or not. So, having a virtual assistant that could make decisions would influence both parties.”
Philipp Wintersberger, IT:U Professor of Intelligent User Interfaces.
And he says each party’s reaction will depend on their set of values: “If you think it is my right, that my calmness of mind is important, then you are OK with the system. But if you need access right now because work is your priority, then you are probably annoyed with the system.”
The professor and his team aim to improve AI interfaces by designing them with usability, safety and innovation in mind.
They are attending the CHI conference in Barcelona this week, where the team’s associate researcher and PhD candidate, Dinara Talypova, will present her most recent research paper on the topic, which received an ‘Honorable Mention Award’ from the organizers.

Dinara Talypova is presenting at the CHI conference in Barcelona this week.
Shift in thinking
Dinara Talypova calls for a shift in thinking when it comes to designing automated interfaces, where developers balance AI assistance with human autonomy.
A theoretical framework that borrows some of its key tenets from political science and philosophy is at the heart of her work.
Her research so far has centered on productivity and focus, but she plans to examine other aspects next.
“The core idea we found is that people are much more willing to use AI systems that are aligned with their values. We are ready to sacrifice some kind of freedom, some choices that we have, if we feel that this will help us achieve our end goal.”
Dinara Talypova, PhD candidate with the Intelligent User Interfaces team.
The end goal in this case could be choosing not to be disturbed by the virtual assistant while reading this text, if that’s part of our value set.
Dinara Talypova says this positive value system is not “just another thing to look at. What our study tells us is that all the other factors we had thought were important, such as people’s sense of agency or perceived performance, are marginal in comparison”.
Adopting this new way of thinking could pave the way for a future where AI is designed and programmed in a more individualized manner, where humanity and choice take center stage.
