Yeah! Science!: Scientists at MIT have figured out a way to recover details from motion-blurred frames of a video to reproduce clear images. The “visual deprojection model” uses a convolutional neural network (CNN) to decipher the imagery.
Researchers at MIT’s Computer Science & Artificial Intelligence Laboratory (CSAIL) trained the CNN on thousands of image pairs: a low-quality projection and the high-quality source (signal) that produced it. The network uses this information to essentially reverse engineer the blurring, learning which pixel patterns correspond to which sources.
Another part of the system, called a “variational autoencoder,” analyzes the output and evaluates how well the network matched the signal. It then creates a “blueprint” that tells the AI how to go from a projection to all possible matching sources. When given a fresh image, the CNN examines the pixel patterns and uses the blueprint to find every possible signal that could have created the blurring. From there, it combines the data to create a “high-dimensional” copy.
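To see why the model must recover *all* possible sources rather than a single one, it helps to look at what a projection actually discards. The toy NumPy sketch below (a hypothetical illustration, not CSAIL's code) treats motion blur as averaging a video's frames over time: many distinct source videos collapse to the same blurred image, which is exactly the ambiguity the blueprint has to represent.

```python
import numpy as np

# A toy 3-frame, 4x4 "video": a bright pixel moving left to right.
frames = np.zeros((3, 4, 4))
for t in range(3):
    frames[t, 1, t] = 1.0  # object sits at column t in frame t

# The "projection": averaging over the time axis yields one
# motion-blurred image, discarding the temporal ordering.
blurred = frames.mean(axis=0)

# The blur records *where* the object passed (a faint streak) ...
assert np.allclose(blurred[1, :3], 1.0 / 3.0)

# ... but not the direction of travel: the time-reversed video
# (right-to-left motion) produces an identical projection. A
# deprojection model therefore has to output a distribution over
# plausible source signals, not a single answer.
assert np.allclose(frames[::-1].mean(axis=0), blurred)
```

This is why the variational autoencoder is needed: it models the space of signals consistent with a given projection, from which the network can then synthesize a plausible reconstruction.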
For example, in a video that shows a car speeding by, you might be able to tell that the car was red, but not much else. The visual deprojection model could take that footage and create a reproduction that is clear enough to identify the make and model.
“It’s almost like magic that we’re able to recover this detail,” said CSAIL postdoc Guha Balakrishnan, lead author of the paper.
Example aside, the researchers are more excited about what the technology could do in the medical field. They believe it could be used to create 3D scans similar to CTs from X-rays. This could significantly reduce costs, since MRI and CT machines are extremely expensive: the software would recreate a high-information image from a low-information one like an X-ray, which is comparatively inexpensive.
“If we can convert X-rays to CT scans, that would be somewhat game-changing,” said Balakrishnan. “You could just take an X-ray and push it through our algorithm and see all the lost information.”