Conditional Generative Adversarial Network Model for Conversion of 2 Dimensional Radiographs Into 3 Dimensional Views

Vocaturo, E.; Zumpano, E.
2023-01-01

Abstract

The inability of 2-Dimensional techniques to visualize all perspectives of an organ may lead to inaccurate diagnosis of a disease or deformity, which raises the need to adopt 3-Dimensional medical images. However, high cost, exposure to a large dose of harmful radiation, and the limited availability of image-capturing machinery restrict the adoption of 3-Dimensional medical imaging for the whole populace. The conversion of 2-Dimensional images into 3-Dimensional images has therefore gained considerable popularity in the field of medical imaging. Although numerous research works address the reconstruction of 3-Dimensional images, none of them provides visualization of all angles of view, from 0 degrees to 360 degrees, for a 2-Dimensional input image such as an X-ray or a dual-energy X-ray absorptiometry scan; these techniques also fail to handle noisy and deformed input images. The purpose of this research is to propose a tailored Conditional Adversarial Network model for translating 2-Dimensional images of bones into their corresponding 3-Dimensional views. The model is preceded by pre-processing techniques for cleaning the dataset, removing noise, and converting the dataset to a uniform format. The efficacy of the model is further improved by determining the optimal values of the model parameters and by employing a customized activation function and optimizers. Additionally, the visual quality of the generated 3-Dimensional images is evaluated to quantify the degree of quality degradation introduced during translation. Experimental results obtained on real-life datasets collected from hospitals across India demonstrate the efficacy of the proposed model in generating 3-Dimensional images: the generated images are similar in quality to the input image and effectively retain the information available in it.
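The record describes the method only at a high level; the network architecture, activation function, and optimizer settings are not given here. For orientation, the following is a minimal sketch of a pix2pix-style conditional GAN for paired image-to-image translation, the general family of models the abstract names. All class names, layer sizes, and hyperparameters are illustrative assumptions, not the authors' implementation (Python/PyTorch):

import torch
import torch.nn as nn

# Hypothetical sketch: pix2pix-style conditional GAN for translating a
# 1-channel 256x256 radiograph into a 1-channel synthesized view.
# Architecture and hyperparameters are assumptions, not the paper's.

class Generator(nn.Module):
    """Small encoder-decoder mapping an input radiograph to an output view."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 64, 4, stride=2, padding=1),    # 256 -> 128
            nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1),  # 128 -> 64
            nn.BatchNorm2d(128),
            nn.LeakyReLU(0.2),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),  # 64 -> 128
            nn.BatchNorm2d(64),
            nn.ReLU(),
            nn.ConvTranspose2d(64, 1, 4, stride=2, padding=1),    # 128 -> 256
            nn.Tanh(),  # outputs scaled to [-1, 1]
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

class Discriminator(nn.Module):
    """PatchGAN-style critic conditioned on the input: it scores
    (input, candidate output) pairs stacked along the channel axis."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 64, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, stride=1, padding=1),  # per-patch real/fake logits
        )

    def forward(self, x, y):
        return self.net(torch.cat([x, y], dim=1))

def train_step(gen, disc, g_opt, d_opt, x, y, l1_weight=100.0):
    """One training iteration with the standard cGAN + L1 objective."""
    bce = nn.BCEWithLogitsLoss()
    # Discriminator update: real pairs labelled 1, generated pairs labelled 0.
    d_opt.zero_grad()
    fake = gen(x).detach()
    d_real, d_fake = disc(x, y), disc(x, fake)
    d_loss = bce(d_real, torch.ones_like(d_real)) + \
             bce(d_fake, torch.zeros_like(d_fake))
    d_loss.backward()
    d_opt.step()
    # Generator update: fool the critic and stay close to the paired target.
    g_opt.zero_grad()
    fake = gen(x)
    d_fake = disc(x, fake)
    g_loss = bce(d_fake, torch.ones_like(d_fake)) + \
             l1_weight * nn.functional.l1_loss(fake, y)
    g_loss.backward()
    g_opt.step()
    return d_loss.item(), g_loss.item()

A usage example under the same assumptions, with random tensors standing in for a real paired dataset:

gen, disc = Generator(), Discriminator()
g_opt = torch.optim.Adam(gen.parameters(), lr=2e-4, betas=(0.5, 0.999))
d_opt = torch.optim.Adam(disc.parameters(), lr=2e-4, betas=(0.5, 0.999))
x = torch.randn(4, 1, 256, 256)  # batch of pre-processed 2-Dimensional radiographs
y = torch.randn(4, 1, 256, 256)  # paired target views (stand-in data)
d_loss, g_loss = train_step(gen, disc, g_opt, d_opt, x, y)

The abstract also mentions evaluating the visual quality of the generated images, but this record does not list the metrics used. Peak signal-to-noise ratio (PSNR) is one standard full-reference choice, sketched below under the assumption that images are first rescaled to [0, 1]:

def psnr(pred, target, max_val=1.0):
    """Peak signal-to-noise ratio in dB; higher means less degradation."""
    mse = torch.mean((pred - target) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return (10.0 * torch.log10(max_val ** 2 / mse)).item()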
Keywords

Generative adversarial networks
X-ray imaging

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.11770/360971
