Generative AI Achieves Superresolution with Minimal Tuning

GÖRLITZ, Germany, May 2, 2024 - Diffusion models for artificial intelligence (AI) produce high-quality samples and offer stable training, but their sensitivity to the choice of variance can be a drawback. The variance schedule controls the dynamics of the diffusion process, and typically it must be fine-tuned with a hyperparameter search for each application. This is a time-consuming task that can lead to suboptimal performance.
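
To picture what such a hand-set schedule looks like in practice, here is a minimal sketch in Python/NumPy of a typical fixed linear variance schedule and the forward noising it drives. The step count and beta range are illustrative assumptions, not values from the CVDM work.

```python
# Minimal sketch of a fixed (hand-tuned) variance schedule for a diffusion model.
# T and the beta range are assumed values chosen only for illustration.
import numpy as np

T = 1000                                  # number of diffusion steps (assumed)
betas = np.linspace(1e-4, 0.02, T)        # hand-picked linear variance schedule
alphas_cumprod = np.cumprod(1.0 - betas)  # cumulative signal-retention factors

def add_noise(x0, t, rng=None):
    """Noise a clean sample x0 up to diffusion step t under the fixed schedule."""
    rng = np.random.default_rng() if rng is None else rng
    noise = rng.standard_normal(x0.shape)
    return np.sqrt(alphas_cumprod[t]) * x0 + np.sqrt(1.0 - alphas_cumprod[t]) * noise
```

Every number above is normally picked by trial and error for each new application, which is exactly the hyperparameter search the CVDM is designed to avoid.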

A new open-source algorithm, from the Center for Advanced Systems Understanding (CASUS) at the Helmholtz-Zentrum Dresden-Rossendorf (HZDR), Imperial College London, and University College London, improves the quality and resolution of images, including microscopic images, with minimal fine-tuning.

The algorithm, called the Conditional Variational Diffusion Model (CVDM), learns the variance schedule as part of the training process. In experiments, the CVDM's approach of learning the schedule was shown to yield comparable or better results than models that set the schedule as a hyperparameter.
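
As a rough illustration of what "learning the schedule" can mean, the sketch below (Python/PyTorch) parameterizes a monotone noise-level curve with trainable weights instead of fixing it in advance. This is a generic, hypothetical construction, not the authors' CVDM code.

```python
# Hedged sketch: a schedule expressed as a trainable, monotone function of time.
# The architecture and sizes here are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LearnableSchedule(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.w1 = nn.Parameter(torch.randn(hidden, 1))
        self.b1 = nn.Parameter(torch.zeros(hidden))
        self.w2 = nn.Parameter(torch.randn(1, hidden))

    def forward(self, t):
        # t holds values in [0, 1]; softplus keeps the weights positive,
        # which keeps the resulting schedule monotone in t
        h = torch.tanh(F.softplus(self.w1) @ t.view(1, -1) + self.b1[:, None])
        gamma = (F.softplus(self.w2) @ h).squeeze(0)  # unnormalized noise level
        return torch.sigmoid(gamma)                   # variance mapped into (0, 1)

# Example: variance = LearnableSchedule()(torch.linspace(0.0, 1.0, 10))
```

In a setup like this, the schedule's weights are optimized jointly with the rest of the network, which is the essence of learning the schedule during training rather than tuning it by hand.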

The CVDM can be used to achieve superresolution through an inverse problem approach.

The availability of big data analytics, along with new ways to analyze mathematical and scientific data, allows researchers to use an inverse problem approach to uncover the causes behind specific observations, such as those made in microscopic imaging.

By calculating the parameters that produced the observation, i.e., the image, a researcher can achieve higher-resolution images. However, the path from observation to superresolution is usually not obvious, and the observational data is often noisy, incomplete, or uncertain.
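
To make the inverse-problem framing concrete, the sketch below (Python/NumPy with SciPy) simulates the forward, or degradation, process that turns an unknown high-resolution image into a noisy, low-resolution observation; the blur width, downsampling factor, and noise level are assumed values. Superresolution then amounts to inverting this mapping.

```python
# Illustrative forward model for the inverse-problem view of superresolution:
# the observation y is a blurred, downsampled, noisy version of the unknown
# high-resolution scene x. The parameter values below are assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter

def degrade(x_hr, factor=4, blur_sigma=1.5, noise_sigma=0.01, rng=None):
    """Map a 2D high-resolution image to a simulated low-resolution observation."""
    rng = np.random.default_rng() if rng is None else rng
    blurred = gaussian_filter(x_hr, sigma=blur_sigma)   # optical blur
    low_res = blurred[::factor, ::factor]               # sensor downsampling
    return low_res + noise_sigma * rng.standard_normal(low_res.shape)
```

Because blurring and downsampling discard information, many different high-resolution images are consistent with the same observation, which is why noisy or incomplete data make the inversion so difficult.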

Diffusion models are sensitive to the choice of the predefined schedule that controls the diffusion process, including how noise is added. When too little or too much noise is added, or when it is added at the wrong place or time, training can fail. Such unproductive runs hinder the effectiveness of diffusion models.

"Diffusion models have long been known as computationally expensive to train ... But new developments like our Conditional Variational Diffusion Model allow minimizing unproductive runs, which do not lead to the final model," researcher Artur Yakimovich said. "By lowering the computational effort and hence power consumption, this approach may also make diffusion models more eco-friendly to train."

The researchers tested the CVDM in three applications: superresolution microscopy, quantitative phase imaging, and image superresolution. For superresolution microscopy, the CVDM demonstrated comparable reconstruction quality and enhanced image resolution compared to previous methods. For quantitative phase imaging, it significantly outperformed previous methods. For image superresolution, reconstruction quality was comparable to previous methods. The CVDM also produced good results for a real-world clinical microscopy sample, indicating that it could be useful in medical microscopy.

Based on the experimental outcomes, the researchers concluded that fine-tuning the schedule by experimentation should be avoided, because the schedule can be learned during training in a stable way that yields the same or better results.

"Of course, there are several methods out there to increase the meaningfulness of microscopic images, some of them relying on generative AI models," Yakimovich said. "But we believe that our approach has some new, unique properties that will leave an impact in the imaging community, namely high flexibility and speed at a comparable, or even better, quality compared to other diffusion model approaches."

The CVDM supports probabilistic conditioning on data, is computationally less expensive than established diffusion models, and can be easily adapted for a variety of applications.

"In addition, our CVDM provides direct hints where it is not very sure about the reconstruction, a very helpful property that sets the path forward to address these uncertainties in new experiments and simulations," Yakimovich said.
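
One generic way such uncertainty hints can be obtained from any stochastic reconstruction model is to draw several samples conditioned on the same observation and look at their pixel-wise spread; the model.sample interface in the sketch below is hypothetical.

```python
# Hedged sketch: per-pixel uncertainty from repeated stochastic reconstructions.
# model.sample(observation) is a hypothetical interface, not the CVDM API.
import numpy as np

def reconstruction_with_uncertainty(model, observation, n_samples=16):
    samples = np.stack([model.sample(observation) for _ in range(n_samples)])
    mean = samples.mean(axis=0)   # point estimate of the reconstruction
    std = samples.std(axis=0)     # large values flag uncertain regions
    return mean, std
```

Regions with a large spread are the "direct hints" about where the reconstruction is least trustworthy and where follow-up experiments or simulations would help most.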

The work will be presented by della Maggiora at the International Conference on Learning Representations (ICLR 2024) on May 8 in poster session 3. ICLR 2024 takes place May 7-11, 2024, at the Messe Wien Exhibition and Congress Center, Vienna.

The research was published in the Proceedings of the Twelfth International Conference on Learning Representations, 2024 (www.arxiv.org/abs/2312.02246).
