
Causal knowledge fusion for 3D cross-modality cardiac image segmentation

Liu X.; Zhang H.; Guzzo A.; Fortino G.
2023-01-01

Abstract

Three-dimensional (3D) cross-modality cardiac image segmentation is critical for the diagnosis and treatment of cardiac disease. However, it confronts the challenge of modality-specific spatial confounding, which derives from the spatial entanglement between the anatomical factor and the modality factor and hurts inference of the causality between a 3D cardiac image and its predicted label. The challenge is exacerbated by the modality distribution discrepancy and the slice structure discrepancy. Existing cross-modality segmentation methods struggle to address this challenge because they lack causal reasoning. In this paper, we propose the causal knowledge fusion (CKF) framework to solve this challenge. First, CKF explores causal intervention to obtain the anatomical factor and discard the modality factor. The anatomical factor is a causally invariant representation that transfers across modalities; thus, CKF improves information fusion across imaging modalities. Second, CKF proposes a 3D hierarchical attention mechanism to extract multi-scale information from 3D cardiac images, improving the spatial learning ability on 3D anatomical structures. Extensive experiments on 3D cardiac images of 503 MR patients and 518 CT patients show that CKF is effective (Dice > 0.949) and superior to eighteen state-of-the-art segmentation methods.
3D cardiac image, Cross-modality segmentation, Causal learning, Causal invariant representation, Causal knowledge fusion
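The abstract describes the 3D hierarchical attention mechanism only at a high level, and the paper's actual architecture is not reproduced in this record. As a loose illustration of the multi-scale idea only, the following hypothetical NumPy sketch (the function names `avg_pool3d` and `hierarchical_attention` and all details are invented for this example, not taken from the paper) pools a 3D volume at several scales, turns each scale into a softmax-normalised map, and averages the maps into a single attention weighting applied back to the volume.

```python
import numpy as np

def avg_pool3d(x, k):
    # Non-overlapping average pooling of a (D, H, W) volume with cube size k.
    d, h, w = x.shape
    x = x[:d // k * k, :h // k * k, :w // k * k]
    return x.reshape(d // k, k, h // k, k, w // k, k).mean(axis=(1, 3, 5))

def hierarchical_attention(volume, scales=(1, 2, 4)):
    # Hypothetical multi-scale attention: each coarse scale is upsampled
    # (nearest-neighbour) back to full resolution, softmax-normalised over
    # all voxels, and the resulting maps are averaged into one weighting.
    d, h, w = volume.shape
    maps = []
    for s in scales:
        pooled = avg_pool3d(volume, s) if s > 1 else volume
        up = np.repeat(np.repeat(np.repeat(pooled, s, 0), s, 1), s, 2)
        up = up[:d, :h, :w]
        e = np.exp(up - up.max())          # numerically stable softmax
        maps.append(e / e.sum())
    attn = np.mean(maps, axis=0)
    # Rescale so the mean attention weight is ~1, keeping output magnitudes
    # comparable to the input.
    return volume * attn * attn.size
```

The choice of nearest-neighbour upsampling and a global softmax is an arbitrary simplification; a real network would learn the attention weights and operate on feature maps rather than raw intensities.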
Files in this record:
No files are associated with this record.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.11770/354199
Note: the displayed data have not been validated by the university.

Citations
  • Scopus: 24