Findings suggest dogs are more attuned to actions than to who or what is performing them – ScienceDaily


Scientists have decoded visual images from a dog’s brain, providing a first look at how the canine mind reconstructs what it sees. The research, conducted at Emory University, was published in the Journal of Visualized Experiments.

The results suggest that dogs are more attuned to actions in their environment rather than who or what is doing it.

The researchers recorded fMRI neural data from two awake, unrestrained dogs as they watched videos in three 30-minute sessions, for a total of 90 minutes. They then used a machine-learning algorithm to analyze the patterns in the neural data.

“We’ve shown that we can monitor activity in a dog’s brain while they are watching a video and, to at least a limited degree, reconstruct what they are looking at,” says Gregory Berns, Emory professor of psychology and corresponding author of the research paper. “The fact that we are able to do this is wonderful.”

The project was inspired by recent advances in using machine learning and fMRI to decode visual stimuli from the human brain, providing new insights into the nature of cognition. Beyond humans, the technique has been applied to only a handful of other species, including some primates.

“While our work is based on only two dogs, it provides proof of concept that these methods work on canines,” says Erin Phillips, first author of the research paper, who did the work as a research specialist in Berns’s Canine Cognitive Neuroscience Laboratory. “I hope this paper will help pave the way for other researchers to apply these methods to dogs, as well as to other species, so that we can gain more data and greater insights into how the brains of different animals work.”

Phillips, a native of Scotland, came to Emory as a Bobby Jones Scholar, an exchange program between Emory and the University of St Andrews. She is currently a graduate student in ecology and evolutionary biology at Princeton University.

Berns and his colleagues pioneered training techniques to get dogs to walk into an fMRI scanner and remain completely still and unrestrained while their neural activity is measured. A decade ago, his team published the first fMRI brain images of a fully awake, unrestrained dog. That opened the door to what Berns calls the “dog project” – a series of experiments exploring the mind of the oldest domesticated species.

Over the years, his lab has published research on how a dog’s brain processes vision, words, smells, and rewards such as receiving praise or food.

Meanwhile, the technology behind machine-learning algorithms continued to improve, allowing scientists to decode some patterns of human brain activity. The technique “reads minds” by detecting, within patterns of brain data, the different objects or actions an individual is seeing while watching a video.

“I started wondering, could we apply similar techniques to dogs?” Berns recalls.

The first challenge was to create video content that a dog might find interesting enough to watch for an extended period. The Emory research team mounted a video recorder on a gimbal and selfie stick, which allowed them to shoot steady footage from a dog’s perspective – at about waist height to a human, or slightly below.

They used the device to create a half-hour video of scenes related to the lives of most dogs. Activities included dogs being petted by people and receiving treats from people. Dogs were also shown sniffing, playing, eating, or walking on a leash. Other scenes showed cars, bikes, or a motorcycle passing by on a road; a cat walking in a house; a deer crossing a path; people sitting; people hugging or kissing; people offering a rubber bone or a ball to the camera; and people eating.

The video data was segmented by timestamps into various classifiers, including object-based classifiers (e.g., dog, car, human, and cat) and action-based classifiers (e.g., sniffing, playing, or eating).
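The paper’s own annotation code is not reproduced here, but the timestamp-based labeling described above can be sketched roughly as follows. The table layout, column names, label values, and the 2-second volume timing are illustrative assumptions, not details taken from the study.

```python
import pandas as pd

# Hypothetical annotation table: each row marks a video segment (start/end in
# seconds) with an object label and an action label, mirroring the two
# classifier families described in the article.
annotations = pd.DataFrame([
    {"start": 0.0,  "end": 12.5, "object": "dog",   "action": "sniffing"},
    {"start": 12.5, "end": 30.0, "object": "human", "action": "eating"},
    {"start": 30.0, "end": 47.0, "object": "car",   "action": "passing"},
])

def labels_at(t):
    """Return the (object, action) labels active at time t, in seconds."""
    row = annotations[(annotations.start <= t) & (t < annotations.end)]
    if row.empty:
        return None, None
    return row.iloc[0]["object"], row.iloc[0]["action"]

# Example: tag each fMRI volume (assumed here to be acquired every 2 seconds)
# with the labels of the video segment it falls inside.
volume_times = [i * 2.0 for i in range(24)]
volume_labels = [labels_at(t) for t in volume_times]
```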

Only two of the dogs trained for fMRI experiments had the focus and temperament to lie perfectly still and watch the 30-minute video without a break, over three sessions totaling 90 minutes. These two “super” dogs were Daisy, a mixed breed who may be part Boston terrier, and Bhubo, a mixed breed who may be part boxer.

“They didn’t even need treats,” says Phillips, who monitored the animals during the fMRI sessions and watched their eyes track the video. “It was fun because it’s serious science, and a lot of time and effort went into it, but it came down to these dogs watching videos of other dogs and humans behaving in rather silly ways.”

Two humans also underwent the same experiment, watching the same 30-minute video in three separate sessions while lying in an fMRI scanner.

The brain data could then be mapped onto the video classifiers using the timestamps.

A machine-learning algorithm, a neural network known as Ivis, was then applied to the data. A neural network is a machine-learning method in which a computer learns by analyzing training examples. In this case, the neural network was trained to classify the content of the brain data.
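In broad strokes, the decoding step amounts to training a classifier on voxel activation patterns labeled by what was on screen. The minimal sketch below uses a generic scikit-learn neural network as a stand-in for the Ivis model used in the study, with synthetic data in place of real fMRI features; the array shapes, label names, and model choice are all assumptions for illustration.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

# Synthetic stand-in data: one feature vector of voxel activations per fMRI
# volume (X), and the action label of the video segment that volume was
# acquired during (y). Real decoding would use the preprocessed scanner data.
rng = np.random.default_rng(0)
X = rng.normal(size=(600, 500))              # 600 volumes x 500 voxel features
y = rng.choice(["sniffing", "playing", "eating"], size=600)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y)

# A small feed-forward network standing in for the Ivis model from the paper.
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf.fit(X_train, y_train)

# With random data this will hover around chance; with real brain data the
# held-out accuracy indicates how well the labels can be decoded.
print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```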

For the two human subjects, the model developed using the neural network showed 99% accuracy in mapping the brain data onto both the object- and action-based classifiers.

When decoding video content from the dogs, the model did not work for the object classifiers. It was 75% to 88% accurate, however, at decoding the action classifiers for the dogs.
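One common way to judge claims like “did not work” versus “75% to 88% accurate” is to compare the decoder against a chance-level baseline that ignores the brain data entirely. The sketch below shows that comparison with synthetic placeholders; it is an assumed evaluation recipe, not the paper’s actual procedure.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.dummy import DummyClassifier
from sklearn.neural_network import MLPClassifier

# Synthetic placeholders: voxel features and object labels for each volume.
rng = np.random.default_rng(1)
X = rng.normal(size=(600, 500))
y_objects = rng.choice(["dog", "human", "car", "cat"], size=600)

decoder = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
chance = DummyClassifier(strategy="most_frequent")   # ignores the brain data

decoder_acc = cross_val_score(decoder, X, y_objects, cv=5).mean()
chance_acc = cross_val_score(chance, X, y_objects, cv=5).mean()

# If the decoder's cross-validated accuracy is not clearly above chance, the
# classifier is effectively not decoding that label type, which is the pattern
# the article reports for object classifiers in the dogs.
print(f"decoder: {decoder_acc:.2f}  chance: {chance_acc:.2f}")
```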

The results point to significant differences in how the brains of humans and dogs work.

“We humans are very object oriented,” Berns says. “There are 10 times as many nouns as there are verbs in the English language because we have an obsession with naming things. Dogs seem less interested in who or what they are seeing and more interested in the action itself.”

Berns notes that dogs and humans also have significant differences in their visual systems. Dogs see only in shades of blue and yellow but have a slightly higher density of vision receptors designed to detect motion.

“It makes sense that dogs’ brains are highly attuned to actions first and foremost,” he says. “Animals must pay close attention to things going on in their environment to avoid being eaten or to keep an eye on animals they might want to hunt. Action and movement are paramount.”

For Phillips, understanding how different animals perceive the world is important to her current field research on how predator reintroduction in Mozambique may affect ecosystems. “Historically, there hasn’t been much overlap between computer science and ecology,” she says. “But machine learning is a growing field that is beginning to find broader applications, including in ecology.”

Additional authors of the paper include Daniel Dilks, Emory associate professor of psychology, and Kirsten Gillette, who worked on the project as an undergraduate in neuroscience and behavioral biology at Emory. Gillette has since graduated and is now in a post-baccalaureate program at the University of North Carolina.

Daisy is owned by Rebecca Beasley and Bhubo is owned by Ashwin Sakhardande. The human trials in the study were supported by a grant from the National Eye Institute.


