X-ray imaging with AI
The U.S. Department of Energy’s Argonne National Laboratory has demonstrated the use of artificial intelligence (AI) to accelerate the process of reconstructing images from coherent X-ray scattering data.
Argonne’s technology, called PtychoNN, combines an X-ray imaging technique called ptychography with a neural network. This enables researchers to reconstruct X-ray images faster, which could aid innovations in fields like medicine, materials and energy.
Ptychography is a lensless, coherent X-ray imaging technique. A coherent X-ray beam is diffracted, or scattered, by a sample and then hits a detector. The data captured by the detector contains the information needed to reconstruct high-resolution images of the sample, or of structures inside it, according to researchers from Argonne.
“The challenge, however, is that while the photons in the X-ray beam carry two pieces of information — the amplitude, or the brightness of the beam, and the phase, or how much the beam changes when it passes through the sample — the detectors only capture one,” according to Argonne.
“Because the detectors can only detect amplitude and they cannot detect the phase, all that information is lost,” said Martin Holt from Argonne. “So we need to reconstruct it.”
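For illustration, the following Python sketch models a far-field diffraction measurement: the complex exit wave carries both amplitude and phase, but the detector records only the intensity, so the phase is lost and must be recovered computationally. The sample values here are random placeholders, not real experimental data.

```python
import numpy as np

# Hypothetical complex-valued exit wave: amplitude (brightness) and phase
# (how much the beam changes as it passes through the sample).
amplitude = np.random.rand(64, 64)
phase = np.random.uniform(-np.pi, np.pi, (64, 64))
exit_wave = amplitude * np.exp(1j * phase)

# Far-field diffraction is modeled here by a Fourier transform of the exit wave.
diffracted = np.fft.fftshift(np.fft.fft2(exit_wave))

# A real detector records only the intensity (squared magnitude);
# the phase of the diffracted wave is lost at this step.
measured_intensity = np.abs(diffracted) ** 2

# Reconstruction algorithms must recover the lost phase from many overlapping
# measurements -- the slow step that PtychoNN is meant to accelerate.
print(measured_intensity.shape)
```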
This reconstruction process works, but it is slow. That’s where neural networks, a form of machine learning, fit in. A branch of AI, machine learning uses algorithms to recognize patterns in data and to learn from and make predictions about that information. Neural networks promise faster and more accurate results in select areas like defect detection.
Using machine learning, researchers from Argonne can reconstruct images from X-ray data roughly 300 times faster than with the traditional method.
“There are two key takeaways,” said Mathew Cherukara, a computational scientist at Argonne. “If data acquisition is the same as today’s method, PtychoNN is 300 times faster. But it can also reduce the amount of data that needs to be acquired to produce images.”
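As an illustration of the general approach, and not the published PtychoNN architecture, the sketch below shows a toy encoder-decoder network in PyTorch that maps a single diffraction pattern to predicted amplitude and phase images. The layer sizes, class name and random input are assumptions for demonstration only.

```python
import torch
import torch.nn as nn

class DiffractionToImageNet(nn.Module):
    """Toy encoder-decoder that maps a diffraction pattern to predicted
    amplitude and phase images. Illustrative only; not the real PtychoNN."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        def decoder():
            return nn.Sequential(
                nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
                nn.ConvTranspose2d(16, 1, 2, stride=2),
            )
        # Two decoder branches: one predicts amplitude, one predicts phase.
        self.amplitude_head = decoder()
        self.phase_head = decoder()

    def forward(self, x):
        features = self.encoder(x)
        return self.amplitude_head(features), self.phase_head(features)

# One 64x64 diffraction pattern in, predicted amplitude and phase images out.
model = DiffractionToImageNet()
pattern = torch.rand(1, 1, 64, 64)
amp, phase = model(pattern)
print(amp.shape, phase.shape)  # torch.Size([1, 1, 64, 64]) each
```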
Ross Harder, a physicist at Argonne, added: “What’s next is showing that it works on more data sets and implementing it for everyday use.”
Tracking climate change with AI
The University of Maryland, Baltimore County (UMBC) has developed a machine learning technique to gain more insights into the impact of climate change in the Arctic.
For years, researchers have collected data about the changes in the Arctic and Antarctica. The data enables researchers to understand the impact of climate change, and the melting ice sheets, in those regions.
This is important for several reasons. If researchers can predict the severity of climate change, for example, they can better anticipate changes in sea levels.
But processing the data is challenging. NASA’s process for collecting and labeling polar data involves a lot of manual work, and changes in the data can take months or even years to see, according to UMBC.
There are some solutions. Researchers from UMBC have been working with data on the internal ice layers of the Arctic, collected through NASA’s Operation IceBridge.
NASA’s Operation IceBridge images the Earth’s polar ice in detail using a fleet of research aircraft. The fleet is equipped with scientific instruments that characterize the annual changes in the thickness of sea ice, glaciers and ice sheets. The mission collects data used to predict climate change and the resulting rise in sea levels.
Researchers from UMBC examined the radar data collected from the aircraft and then used deep learning methods to interpret the information.
“However, in many real-world problems, even when a large dataset is available, deep learning methods have shown less success, due to causes such as lack of a large labeled dataset, presence of noise in the data or missing data,” said Maryam Rahnemoonfar, an associate professor of information systems at UMBC, writing with co-authors in the Journal of Glaciology.
“In this work, we have studied a multi-scale deep learning model and various approaches to implement it for detecting ice layers in radar imagery,” Rahnemoonfar said in the journal. “It is important to note that most of the well-known deep learning approaches work very well on optical images, but cannot produce acceptable results for non-optical sensors especially in the presence of noise. The fact that deep learning models are not robust with respect to noise is discussed in various works. In our experiments we have shown that transfer learning approaches do not work well for radar images, while training from scratch yields far better results. However, the latter requires annotated data provided by the domain experts.”
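To make the idea concrete, here is a hedged PyTorch sketch of a multi-scale convolutional model for per-pixel ice-layer detection, trained from scratch on echogram and label pairs rather than fine-tuned from optical-image weights. The architecture, data shapes and labels are illustrative assumptions, not the UMBC team’s published model.

```python
import torch
import torch.nn as nn

class MultiScaleLayerDetector(nn.Module):
    """Illustrative multi-scale CNN: parallel branches with different dilation
    rates see the echogram at different scales, and their features are fused
    into a per-pixel layer map. A sketch of the general idea only."""
    def __init__(self):
        super().__init__()
        # Each branch uses a different dilation rate to capture a different scale.
        self.branches = nn.ModuleList([
            nn.Conv2d(1, 8, 3, padding=d, dilation=d) for d in (1, 2, 4)
        ])
        self.fuse = nn.Conv2d(24, 1, 1)  # fuse the three 8-channel branches

    def forward(self, x):
        feats = [torch.relu(branch(x)) for branch in self.branches]
        return torch.sigmoid(self.fuse(torch.cat(feats, dim=1)))

# Training from scratch -- no pretrained optical-image weights, reflecting the
# finding that transfer learning underperforms on noisy radar data.
model = MultiScaleLayerDetector()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()
echogram = torch.rand(4, 1, 128, 128)               # hypothetical radar echograms
mask = (torch.rand(4, 1, 128, 128) > 0.9).float()   # hypothetical layer labels
for _ in range(3):
    optimizer.zero_grad()
    loss = loss_fn(model(echogram), mask)
    loss.backward()
    optimizer.step()
```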
One way to reduce the need for expert-annotated data is to generate synthetic data. “Although the synthetic data used for training in this work only loosely match the actual snow radar, the results indicate that synthetic data could be successfully used for training. Future work should explore training with synthetic data which matches the noise and signal statistics of the actual snow radar data. In the future, we plan to combine AI and physical models to expand the simulated dataset and therefore better train our network. We also plan to develop advanced noise removal technique based on deep learning. Calculating the actual thickness of ice layers from the neural network is also another direction of our research,” Rahnemoonfar said in the Journal of Glaciology.
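Below is a minimal sketch of that idea, assuming a very simplified picture of a snow-radar echogram: draw a few undulating bright layers, add noise, and keep the layer positions as labels. The function name, layer model and noise statistics are illustrative assumptions and do not match the real instrument.

```python
import numpy as np

def synthetic_echogram(height=128, width=128, n_layers=5, noise_level=0.3, rng=None):
    """Generate a crude synthetic echogram: a few smoothly varying bright
    horizontal layers over a noisy background, plus a matching label mask.
    Purely illustrative; a real simulation would match the radar's actual
    noise and signal statistics, as the authors note."""
    rng = np.random.default_rng() if rng is None else rng
    image = np.zeros((height, width))
    mask = np.zeros((height, width))
    depths = np.sort(rng.uniform(10, height - 10, n_layers))
    x = np.arange(width)
    for depth in depths:
        # Each layer undulates slightly across the image.
        trace = depth + 3 * np.sin(2 * np.pi * x / width + rng.uniform(0, 2 * np.pi))
        rows = np.clip(trace.round().astype(int), 0, height - 1)
        image[rows, x] = 1.0
        mask[rows, x] = 1.0
    image += noise_level * rng.standard_normal((height, width))  # detector-like noise
    return image, mask

echogram, labels = synthetic_echogram()
print(echogram.shape, int(labels.sum()))
```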