Constitutive modeling of heterogeneous materials by interpretable neural networks: A review
Bilotta, Antonio; Turco, Emilio
2025-01-01
Abstract
Is it possible to interpret the modeling decisions made by a neural network trained to simulate the constitutive behavior of simple or complex materials? The interpretability of a neural network is a crucial aspect that has been studied since the first appearance of this type of modeling tool, and it is certainly not specific to applications in the constitutive modeling of heterogeneous materials. All areas of application, such as computer vision, biomedicine, and speech, suffer from this opacity, and for this reason neural networks are often referred to as "black-box models". The present work highlights the efforts dedicated to this aspect in the constitutive modeling of path-independent materials, reviewing both more standard neural networks and those that adopt, to varying degrees, an interpretability-oriented point of view.


