A New AI Technique Provides Researchers Neural Imaging in Moving Mice – Psychology Today

Posted April 29, 2022 | Reviewed by Tyler Woods
One of the mysteries that neuroscience strives to solve is how patterns of brain activity determine behavior. A new study by researchers at Johns Hopkins University shows how machine learning, a form of artificial intelligence (AI), can improve the accuracy and speed of imaging the brains of mice in motion, a breakthrough that could one day help accelerate neuroscience research into human brain diseases and disorders.
“Establishing correlations between the activity of a population of neurons with discrete animal behaviors is a critical step in understanding how the brain encodes motor output,” wrote the Johns Hopkins University researchers.
Often, capturing neural activity in mice requires restraints that keep the animal from moving freely. The drawback of this method is that restraint may cause stress, which can alter the mouse’s brain activity. This type of neural imaging also rules out experiments that require the animal to move, such as navigating mazes, eating, and other activities.
Endomicroscopes, also called scanning two-photon (2P) fiberscopes, can capture continuous imaging of neural activity over time in freely moving mice. However, endomicroscopes have slower acquisition speeds, because their small size constrains the hardware they can carry. According to the researchers, the “ultra-compact design of the imaging probe limits the choices of beam scanner and imaging optics, and consequently limits the imaging frame rate.” Moreover, normal high-frequency physiological activity, such as heartbeats, can produce image artifacts that reduce imaging accuracy.
In this study, the researchers aimed to create an endomicroscope system that enables high-resolution imaging of freely moving mice at high speeds. To increase the frames per second rate, the team decided to reduce the number of points scanned. However, a reduction of scanned points reduces the image quality. To improve the image quality, the team trained an AI algorithm to identify and generate the missing points.
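The trade-off the team exploited can be illustrated with a simple toy sketch: scan only a fraction of the lines in each frame (faster acquisition), then fill in the skipped lines afterward. In the study, a trained deep network does the filling-in; the sketch below substitutes plain linear interpolation just to show the principle. All names and sizes here are illustrative, not from the paper.

```python
import numpy as np

def acquire_sparse(frame, step):
    """Keep every `step`-th scan line; acquisition time drops roughly step-fold."""
    return frame[::step]

def restore_linear(sparse, step, n_rows):
    """Fill skipped scan lines by linear interpolation between acquired lines.
    (The study trains a deep neural network to do this restoration instead.)"""
    full = np.empty((n_rows, sparse.shape[1]))
    acquired_rows = np.arange(0, n_rows, step)
    for col in range(sparse.shape[1]):
        full[:, col] = np.interp(np.arange(n_rows), acquired_rows, sparse[:, col])
    return full

# Synthetic "frame": a smooth blob standing in for a fluorescing neuron.
y, x = np.mgrid[0:64, 0:64]
frame = np.exp(-((x - 32) ** 2 + (y - 32) ** 2) / 120.0)

step = 4                                  # scan 1 of every 4 lines -> ~4x frame rate
sparse = acquire_sparse(frame, step)
restored = restore_linear(sparse, step, frame.shape[0])

print("lines scanned:", sparse.shape[0], "of", frame.shape[0])
print("mean abs restoration error:", float(np.abs(restored - frame).mean()))
```

Simple interpolation works only on smooth images; real undersampled neural recordings are noisy and structured, which is why the researchers turned to a learned restoration model instead.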
The scientists adapted an open-source, deep learning platform to recover image quality. Specifically, the researchers used a deep neural network (DNN) based on a conditional generative adversarial network (cGAN). Conditional generative adversarial networks are often used to address image-to-image translation problems.
“Video-rate imaging was achieved by increasing the scanning speed and decreasing the scanning density during data acquisition in conjunction with the assistance of DNNs,” the researchers wrote.
In artificial intelligence, a generative adversarial network (GAN) is a deep learning architecture in which a generator network learns to produce output with characteristics similar to its training data, while a discriminator network learns to distinguish generated output from the real thing. A conditional GAN is a type of GAN in which generation is conditioned on additional information, such as a class label or another image, enabling the network to generate output of a targeted type.
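The conditioning idea can be sketched in a few lines: both the generator and the discriminator receive the class label alongside their usual input, typically by concatenation. The toy below uses random, untrained weights purely to show the data flow; the dimensions and function names are made up for illustration and are not the authors' architecture (in their image-to-image setting, the "condition" is the undersampled frame itself rather than a class label).

```python
import numpy as np

rng = np.random.default_rng(0)
NOISE_DIM, N_CLASSES, IMG_DIM = 8, 3, 16   # toy sizes, not the paper's

def one_hot(label, n=N_CLASSES):
    v = np.zeros(n)
    v[label] = 1.0
    return v

# Random (untrained) weights stand in for learned parameters.
W_g = rng.normal(size=(NOISE_DIM + N_CLASSES, IMG_DIM))
W_d = rng.normal(size=(IMG_DIM + N_CLASSES, 1))

def generator(z, label):
    """Map noise plus a class label to an image vector;
    the label steers which class of image is produced."""
    return np.tanh(np.concatenate([z, one_hot(label)]) @ W_g)

def discriminator(img, label):
    """Score how plausible `img` is as a real image of class `label`."""
    logit = np.concatenate([img, one_hot(label)]) @ W_d
    return 1.0 / (1.0 + np.exp(-logit))    # sigmoid -> probability in (0, 1)

z = rng.normal(size=NOISE_DIM)
fake = generator(z, label=1)
score = discriminator(fake, label=1)
print("generated image shape:", fake.shape)
print("discriminator score:", float(score))
```

During training, the generator learns to fool the discriminator while the discriminator learns to catch it; at convergence the generator's output for a given condition resembles real data for that condition.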
“Compared with existing 2P fiberscopy configurations, we increased the frame rate by over 10-fold without compromising signal-to-noise ratio and imaging resolution,” the scientists reported. “This significant improvement in frame rate overcomes a critical bottleneck of 2P fiberscopy and enables it as a promising tool for functional neural imaging studies.”
Cami Rosso writes about science, technology, innovation, and leadership.
Psychology Today © 2022 Sussex Publishers, LLC