Tetris reveals how people respond to unfair artificial intelligence


An experiment led by Cornell University, in which two people played a modified version of Tetris, revealed that players who got fewer turns perceived the other player as less likable, regardless of whether a person or an algorithm assigned the turns.

Most studies on the fairness of algorithms focus on the algorithm or the decision itself, but this research explored the relationships between the people affected by those decisions.

“We’re starting to see a lot of situations where AI makes decisions about how to distribute resources among people,” said Malte Jung, associate professor of information science, whose group conducted the study. “We want to understand how this affects the way people look at and act towards each other. We’re seeing more and more evidence that machines are messing with the way we interact with each other.”

In a previous study, a robot chose which person to give a block to, and the researchers studied each individual’s reactions to the machine’s allocation decisions.

“We noticed that every time the robot seemed to favor one person, the other got upset,” Jung said. “We wanted to study this further, because as machines that make decisions become more a part of the world – whether it’s a robot or an algorithm – how does that make a person feel?”

Using open-source software, Houston Claure, first author of the study and a postdoctoral researcher at Yale University, developed a two-player version of Tetris, in which players manipulate falling geometric blocks to stack them without leaving gaps before the blocks pile up to the top of the screen. Claure’s version, Co-Tetris, allows two people (one taking a turn at a time) to work together to complete each round.

The allocator, either a human or an AI (the players are told which), determines which player takes each turn. Jung and Claure designed their experiment so that a player received either 90% of the turns (“more”), 10% (“less”), or 50% (“equal”).
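As a rough illustration of that setup, the sketch below shows one way such a turn allocator could be implemented in Python. The function name, condition labels, and the choice to assign each falling block by a weighted coin flip are assumptions made for illustration; the article does not describe the study’s actual code.

```python
import random

# Hypothetical sketch of the turn-allocation scheme described above.
# None of this is the study's actual code: the names and the random
# per-block assignment are assumptions for illustration only.

# Fraction of turns given to player "A" in each experimental condition.
CONDITIONS = {"more": 0.90, "less": 0.10, "equal": 0.50}

def allocate_turns(condition, n_blocks, seed=0):
    """Return which player ("A" or "B") controls each falling block."""
    p_a = CONDITIONS[condition]  # probability that player A gets the turn
    rng = random.Random(seed)    # seeded so a session can be reproduced
    return ["A" if rng.random() < p_a else "B" for _ in range(n_blocks)]

if __name__ == "__main__":
    turns = allocate_turns("more", n_blocks=20)
    print(turns)
    print("Player A received", turns.count("A"), "of", len(turns), "turns")
```

Under the “more” condition one player would control roughly nine blocks in ten, while under “equal” the turns would split evenly between the two players.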

The researchers found that, as expected, those who were given fewer turns were acutely aware that their partner was given far more. But they were surprised to find that feelings about it were largely the same regardless of whether a human or an AI was allocating the turns.

The effect of these decisions is what the researchers call “machine allocation behavior”, analogous to the well-established phenomenon of “resource allocation behavior”: the observable behavior people exhibit in response to allocation decisions. Jung said machine allocation behavior is “the concept that there is this unique behavior that results from a machine making a decision about how something gets allocated.”

The researchers also found that fairness does not automatically lead to better gameplay. In fact, an equal split of turns led, on average, to a worse score than an unequal split.

“If a stronger player receives most of the blocks, the team is going to do better,” Claure said. “And if one person gets 90% of the turns, eventually they’ll get better at the game than if two average players split the blocks.”
