As artificial intelligence becomes increasingly realistic, our trust in those with whom we communicate may be compromised. Researchers at the University of Gothenburg have examined how advanced AI systems affect our trust in the people we interact with.
In one scenario, a would-be fraudster, believing he is calling an elderly man, is instead connected to a computer system that communicates through pre-recorded loops. The scammer spends considerable time attempting the fraud, patiently listening to the “man’s” somewhat confusing and repetitive stories. Oskar Lindwall, professor of communication at the University of Gothenburg, notes that it often takes a long time for people to realize they are interacting with a technical system.
Together with Jonas Ivarsson, professor of informatics, he has written an article, Suspicious Minds: The Problem of Trust and Conversational Agents, exploring how individuals interpret and relate to situations in which one of the parties may be an AI agent. The article highlights the negative consequences of harboring suspicion toward others, such as the damage it can cause to relationships.
Ivarsson gives the example of a romantic relationship in which trust issues arise, leading to jealousy and an increased tendency to search for evidence of deception. The authors argue that being unable to fully trust a conversational partner’s intentions and identity may result in excessive suspicion even when there is no reason for it.
Their study discovered that during interactions between two people, certain behaviors were interpreted as signs that one of them was, in fact, a bot.
The researchers suggest that a prevailing design perspective is driving the development of artificial intelligence with increasingly human-like features. While this may be appealing in some contexts, it can also be problematic, particularly when it is unclear whom you are communicating with. Ivarsson questions whether AI should have human-like voices at all, as they create a sense of familiarity and lead people to form impressions based on the voice alone.
In the case of the would-be fraudster calling the “elderly man”, the scam is only exposed after a long time, which Lindwall and Ivarsson attribute to the believability of the human voice and the assumption that the confused behavior is due to age. Once an AI has a voice, we infer attributes such as gender, age, and socioeconomic background, making it harder to recognize that we are interacting with a computer.
The researchers propose creating artificial intelligence with well-functioning, eloquent voices that are nonetheless clearly synthetic, thereby increasing transparency.
Communicating with others involves not only the risk of deception but also relationship-building and joint meaning-making. Uncertainty about whether one is talking to a human or a computer affects this aspect of communication. While it may not matter in some situations, such as cognitive behavioral therapy, other forms of therapy that require more human connection may be negatively affected.
Jonas Ivarsson and Oskar Lindwall analyzed data made available on YouTube. They studied three types of conversations, along with audience reactions and comments. In the first type, a bot calls a person to book a hair appointment, without the person on the other end knowing. In the second, a person calls another person for the same purpose. In the third, telemarketers are transferred to a computer system with pre-recorded speech.