Artificial intelligences can do a lot of things. They can be faster and more efficient than humans at many complex tasks, delivering sophisticated answers in the blink of an eye. That analytical power has changed how we interact with our digital world. AI systems are increasingly integrated into human-computer interaction (HCI) and user experience (UX) design, but as designers, we're still just getting started when it comes to making the most of this tech.
Human-AI co-creation in UX is all about how well we can interact with AIs and make the right requests. AIs, on their side, are trained on insane amounts of data that include human emotions and possible responses to those emotions. Because of that, they can analyze patterns and simulate empathetic behavior.
Now, empathy isn’t exactly an emotion itself, but more like an emotional and cognitive skill. It means noticing, understanding, and sometimes even sharing someone else’s feelings. So, we can say AIs aren’t really able to produce real empathy, since they can’t actually feel or share emotions.
Instead, Gen-AI can generate artificial empathy, meaning it can mimic empathetic behavior convincingly enough that users may feel understood or emotionally supported. But it's important to distinguish artificial empathy from authentic human empathy. Artificial empathy is powered by language models trained on vast examples of human interaction, not by actual emotional experience or moral understanding.
So, what are the main differences between humans and AI that stop machines from developing real empathy?
Cognitive limitations
Humans can recognize that other people have minds of their own and that those minds might see the world differently. This ability is called Theory of Mind (I’d never heard of it before, had to look it up). Humans have this skill, but it hasn’t been replicated in AI yet because it involves super complex thinking, plus understanding different perspectives and contexts.
Right now, most AIs just look for patterns in huge amounts of data. Things like nuances, subjectivity, and context usually go unnoticed by generative AIs, and that ends up affecting how they analyze data connected to those factors.
That's why using AI answers directly, especially in qualitative UX research, can go really wrong: you end up relying on information that ignores factors that could completely change the insights from your whole research.
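To make this concrete, here's a toy sketch of pure pattern matching. The keyword lists and function are entirely hypothetical (this is a deliberate oversimplification, not how any modern model actually works), but it shows how a context-blind pattern matcher misreads a sarcastic user quote:

```python
# Toy keyword-based sentiment scorer -- a caricature of pattern matching,
# not an implementation of any real AI system.
POSITIVE = {"great", "love", "easy", "fast"}
NEGATIVE = {"hate", "slow", "confusing", "broken"}

def naive_sentiment(text: str) -> str:
    words = {w.strip(".,!?").lower() for w in text.split()}
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

# A sarcastic quote from a usability session:
quote = "Oh great, the checkout crashed again. I just love this app."
print(naive_sentiment(quote))  # -> "positive" -- the sarcasm is invisible to the pattern
```

The quote is obviously negative to a human reader, but the pattern matcher only sees "great" and "love". Scale that blind spot up across a whole research dataset and the insights drift accordingly.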
Emotional limitations
This is probably the biggest obstacle for AIs: they just can't feel or get emotional. Connecting with others on an emotional level is super important for building real empathy. Even some humans struggle to deal with other people’s emotions and to empathize.
Even though AIs can give answers that sound like they understand how we feel, they actually can't measure or really get what human emotions mean. The best they can do is spot patterns that suggest certain feelings or emotions. Because of that, their responses in these situations can end up being generic, shallow, or even inappropriate.
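A minimal sketch of what that looks like in practice (the emotion keywords and canned templates here are illustrative assumptions, not any real product's code): the system spots a pattern that suggests a feeling and returns a pre-shaped response, which is exactly why the replies can feel generic.

```python
# Minimal sketch of "artificial empathy": match an emotion keyword,
# return a canned response. Keywords and templates are hypothetical.
TEMPLATES = {
    "frustrated": "I'm sorry you're feeling frustrated. That sounds difficult.",
    "sad": "I'm sorry to hear that. I'm here if you want to talk.",
}
DEFAULT = "I understand. Tell me more about how you're feeling."

def empathetic_reply(message: str) -> str:
    # Scan for an emotion keyword; no actual feeling is involved.
    for emotion, response in TEMPLATES.items():
        if emotion in message.lower():
            return response
    return DEFAULT  # generic fallback -- the shallow response described above

print(empathetic_reply("I'm so frustrated with this form"))
# -> "I'm sorry you're feeling frustrated. That sounds difficult."
```

The response sounds caring, but nothing in the code understands frustration; it only recognized a token. Real systems are vastly more sophisticated, yet the underlying gap between detecting an emotion pattern and experiencing an emotion is the same.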
Ethical limitations
Humans have a conscience, they know their social role, and they're always under society's watch when it comes to morals and ethics. AIs might understand rules and commitments, but they aren't judged by anyone and don't really grasp the impact their answers can have. Even with responsible AI initiatives, these rules can reflect biased patterns that machines simply accept as valid without questioning. All that data and language processing power doesn't regulate itself, and without control, it can easily cross ethical lines without hesitation. Things like copyright and who owns AI-generated content also need to be regulated. Same goes for transparency and privacy.
Making AIs actually feel or have real emotions is a huge leap, and honestly not something we're going to pull off anytime soon. For the moment, the best path forward is to keep improving the tools and systems we already have. We need to focus on creating ways for Gen-AIs to recognize and respond to human emotions, and to do it in a way that's responsible, ethical, and always puts people first. Basically, we should be all-in on developing emotionally intelligent technology.
When it comes to empathy, AIs still need our help and constant supervision. They're just not there yet. In the end, it kind of shows that we humans aren't as outdated as some people might think. We still have a pretty important role to play, especially when it comes to understanding feelings and connecting with others. So, for now at least, we're still in the driver's seat!
If you’re wondering where to start with AI interactions, I recommend the AI-Powered UX: Next-Gen Product Design course on Udemy. In this course, I teach how to use AI for research, data analysis, ideation, and prototyping, transforming the way you design digital products. The course is in English but includes Portuguese subtitles.