Robots can help improve mental health at work – as long as they look right


Robots can be useful as mental health coaches in the workplace, but how effective they are perceived to be depends largely on what the robot looks like.

Researchers from the University of Cambridge conducted a study at a technology consulting firm using two different robot wellbeing coaches, in which 26 employees participated in robot-led weekly wellbeing sessions for four weeks. Although the robots had identical voices, facial expressions, and scripts for the sessions, the robots’ physical appearance affected how participants interacted with them.

Participants who performed wellbeing exercises with a toy-like robot said they felt more connected to their “coach” than participants who worked with a more human-like robot. The researchers say that perceptions of robots are shaped by popular culture, where the only limit to what robots can do is the imagination. When people encounter a robot in the real world, it often does not live up to those expectations.

Because the toy-like robot looked simpler, participants may have had lower expectations and ended up finding it easier to talk to. Participants who worked with the more human-like robot found that their expectations did not match reality, because the robot was not able to hold interactive conversations.

Despite the gap between expectations and reality, the researchers say their study shows that robots can be a useful tool for promoting mental health in the workplace. The results will be presented today (March 15) at the ACM/IEEE International Conference on Human-Robot Interaction in Stockholm.

The World Health Organization recommends that employers take action to promote and protect mental health at work, but the implementation of well-being practices is often constrained by a lack of resources and staff. Robotics has shown some early promise to help bridge this gap, but most studies on robotics and well-being have been conducted in a laboratory setting.

“We wanted to get robots out of the lab and study how they can be useful in the real world,” said Dr. Micol Spitale, first author of the research paper.

The researchers collaborated with a local Cambridge technology consultancy to design and implement a workplace wellness program using robots. Over the course of four weeks, employees were guided through four different wellness exercises by one of two robots: either the QTRobot (QT) or the Misty II (Misty) robot.

QT is a child-like humanoid robot that stands about 90 cm tall, while Misty is a 36 cm tall, toy-like robot. Both robots have screen faces that can be programmed with different facial expressions.

“We interviewed different health and wellness coaches, and then programmed our robots to have a coach-like personality, with high openness and conscientiousness,” said co-author Minja Axelsson. “The robots were programmed to have the same personality, the same facial expressions, and the same voice, so the only difference between them was their physical form.”

Participants were guided through various positive psychology exercises by a robot in an office meeting room. Each session began with the robot asking participants to recall a positive experience or describe something in their life they were grateful for, and the robot would then ask follow-up questions. After each session, participants were asked to assess the robot through a questionnaire and an interview. Participants did one session per week for four weeks, working with the same robot for each session.

Participants who worked with the toy-like Misty robot reported having a better working connection with the robot than participants who worked with the child-like QT robot. Participants also had a more positive perception of Misty overall.

“It could be that because Misty is more toy-like, it matched participants’ expectations,” Spitale said. “But since QT is more human-like, they expected it to behave like a human, which may be why participants who worked with QT felt a little frustrated.”

“The most common response we received from participants was that their expectations of the robot did not match reality,” said Professor Hatice Gunes from the Department of Computer Science and Technology at Cambridge, who led the research. “We programmed the robots with a script, but participants were hoping there would be more interactivity. It’s very difficult to create a robot capable of natural conversation. New developments in large language models could be really useful in this regard.”

“Our perceptions of how robots look or behave may hinder the uptake of robots in areas where they could be useful,” Axelsson said.

Although the robots used in the experiment aren’t as advanced as C-3PO or other fictional robots, participants said they found the wellbeing exercises helpful and that they were open to the idea of talking to a robot in the future.

“The robot could serve as a physical reminder to commit to a wellbeing exercise,” said Gunes. “And just saying things out loud, even to a robot, can be helpful when you’re trying to improve mental wellbeing.”

The team is now working to improve the responsiveness of the robot coaches during the wellbeing exercises and interactions.

The research was supported by the Engineering and Physical Sciences Research Council (EPSRC), part of UK Research and Innovation (UKRI). Hatice Gunes is a Fellow of Trinity Hall, Cambridge.


