Machine learning generates a 3D model from 2D images


Researchers from the McKelvey School of Engineering at Washington University in St. Louis have developed a machine-learning algorithm that can create a continuous 3D model of cells from a partial set of 2D images taken with the same standard microscopy tools found in many labs today.

Their findings were published on September 16 in the journal Nature Machine Intelligence.

“We train the model on the set of digital images to get a continuous representation,” said Ulugbek Kamilov, assistant professor of electrical and systems engineering and of computer science and engineering. “Now, I can show it any way I want. I can zoom in smoothly, and there is no pixelation.”

Key to this work was the use of a neural field network, a type of machine-learning system that learns a mapping from spatial coordinates to the corresponding physical quantities. Once training is complete, the researchers can point to any coordinate, and the model can provide the image value at that location.
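For readers who want a concrete picture, that idea can be sketched in a few lines of code. The snippet below is an illustrative PyTorch-style coordinate network, not the published model; it simply maps a 3D position to a single predicted value, and every name and architecture choice in it is an assumption made for illustration.

```python
import torch
import torch.nn as nn

class NeuralField(nn.Module):
    """Illustrative neural field: maps a spatial coordinate (x, y, z)
    to one physical quantity (here, a single image value).
    This is a generic sketch, not the authors' implementation."""

    def __init__(self, hidden: int = 256, depth: int = 4):
        super().__init__()
        layers = [nn.Linear(3, hidden), nn.ReLU()]
        for _ in range(depth - 1):
            layers += [nn.Linear(hidden, hidden), nn.ReLU()]
        layers.append(nn.Linear(hidden, 1))  # one value per queried point
        self.net = nn.Sequential(*layers)

    def forward(self, coords: torch.Tensor) -> torch.Tensor:
        # coords: (N, 3) positions; returns (N, 1) predicted image values
        return self.net(coords)

# After training, the field can be queried at any coordinate, at any density,
# because there is no fixed pixel grid:
field = NeuralField()
points = torch.rand(1000, 3)   # arbitrary locations inside the sample
values = field(points)
```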

One of the particular strengths of neural field networks is that they do not need to be trained on copious amounts of similar data. Instead, as long as there are enough 2D images of the sample, the network can represent it in its entirety, inside and out.

The images used to train the network are like any other micrographs. In essence, the cell is lit from below; the light travels through it and is captured on the other side, creating an image.

“Because I have some views of the cell, I can use those images to train the model,” Kamilov said. This is done by feeding the model information about a point in the sample where the image captured some of the cell’s internal structure.

The network then makes its best guess at recreating that structure. If the output is wrong, the network is adjusted; if it is right, that pathway is reinforced. Once the predictions match the real-world measurements, the network is ready to fill in the parts of the cell that were not captured by the original 2D images.
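The article does not describe the training code, but the predict-compare-adjust loop it sketches might look roughly like the following. The helper `render_view`, which would simulate light passing through the predicted volume to form a 2D image, is a hypothetical placeholder, as are the variable names; only the overall loop structure reflects what the article describes.

```python
import torch

def fit_field(field, measured_views, coords_per_view, render_view,
              steps=2000, lr=1e-3):
    """Hypothetical fitting loop for a neural field.

    measured_views  : list of measured 2D micrographs (tensors)
    coords_per_view : list of (N, 3) coordinates sampled along each view
    render_view     : placeholder forward model turning predicted volume
                      values into a simulated 2D image
    """
    opt = torch.optim.Adam(field.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = 0.0
        for image, coords in zip(measured_views, coords_per_view):
            prediction = render_view(field(coords), coords)  # best guess
            loss = loss + torch.mean((prediction - image) ** 2)
        loss.backward()  # wrong guesses adjust the network's weights
        opt.step()       # matching guesses are effectively reinforced
    return field
```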

The model now holds a complete, continuous representation of the cell, and there is no need to save a data-heavy image file, because the image can always be recreated by the neural field network.

Not only is the model a true representation of the cell that is easy to store, Kamilov said, but it is also, in many ways, more useful than the real thing.

“I can put in any coordinate and generate that view,” he said. “Or I can generate entirely new views from different angles.” He can use the model to spin a cell like a top or zoom in for a closer look, use it for other numerical tasks, or even feed it into another algorithm.
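As a rough illustration of that kind of query, the trained field from the earlier sketch could be sampled on any grid, at any resolution or orientation; the region, resolution, and rotation below are arbitrary choices for illustration, not anything described by the researchers.

```python
import math
import torch

# Zoom: sample a small region of the (hypothetical) trained field very finely.
zoom = torch.linspace(0.4, 0.6, 512)
xx, yy = torch.meshgrid(zoom, zoom, indexing="ij")
plane = torch.stack([xx, yy, torch.full_like(xx, 0.5)], dim=-1).reshape(-1, 3)
close_up = field(plane).reshape(512, 512)   # a smooth close-up, no pixel grid

# New angle: rotate the query coordinates and sample again.
theta = math.radians(30)
rotation = torch.tensor([[math.cos(theta), -math.sin(theta), 0.0],
                         [math.sin(theta),  math.cos(theta), 0.0],
                         [0.0,              0.0,             1.0]])
new_view = field(plane @ rotation.T).reshape(512, 512)
```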

Story Source:

Materials provided by Washington University in St. Louis. Note: Content may be edited for style and length.


