Online AI-based distributed data reduction for the dual-radiator RICH detector in the ePIC experiment
Capua, M.; Fazio, S.; Occhiuto, L.; Tassi, E.
2026-01-01
Abstract
The ePIC experiment at the EIC integrates a dual-radiator RICH (dRICH) detector for particle identification in the forward region. The detector will use silicon photomultipliers (SiPMs) to detect Cherenkov radiation with single-photon sensitivity. The detector channels will be read out by several Front End Boards (FEBs). Their output data will be collected by different Readout Boards (RDOs) and then forwarded to the Data Aggregation and Manipulation Boards (DAMs). In the ePIC dRICH DAQ, each DAM will collect and merge data from various RDOs, streaming the event fragments to the ePIC data buffering system (Echelon 0) via its 100 GbE interfaces. To mitigate the risk of excessive bandwidth demand caused by the increasing SiPM Dark Count Rate (DCR), we designed a real-time data reduction system to reduce the output bandwidth. This is achieved through a distributed data-flow processing scheme across the DAMs and an additional board acting as a Trigger Processor (TP) to discard DCR noise-only events online. The architecture employs a distributed Multi-Layer Perceptron (MLP) to discriminate DCR noise-only events, with 30 local sub-network replicas deployed on the DAMs extracting features that are relayed to the TP over a direct low-latency communication channel. The primary implementation challenge is the ∼100 MHz acquisition rate, dictated by the ∼10 ns electron–ion bunch crossing interval. In the following sections, we describe our technical approach to addressing these strict timing requirements, focusing on the design of the FPGA computing pipelines and high-speed communication channels. Finally, we report on the current implementation status of the system.
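The distributed MLP described above can be illustrated with a minimal sketch: each of the 30 DAMs runs a local sub-network that compresses its hit pattern into a short feature vector, and the TP concatenates all feature vectors and emits a keep/discard score. All sizes (channels per DAM, features per sub-network) and weights below are hypothetical placeholders, not the actual ePIC dRICH network; the real system runs on FPGAs, not NumPy.

```python
import numpy as np

rng = np.random.default_rng(0)

N_DAMS = 30        # local sub-network replicas, one per DAM (from the abstract)
N_CHANNELS = 64    # hypothetical number of input channels per DAM
N_FEATURES = 4     # hypothetical feature-vector length relayed to the TP


def relu(x):
    return np.maximum(x, 0.0)


# Placeholder weights: one local sub-network per DAM, one classifier on the TP.
W_local = rng.normal(size=(N_DAMS, N_CHANNELS, N_FEATURES))
w_tp = rng.normal(size=N_DAMS * N_FEATURES)


def tp_score(hits):
    """hits: (N_DAMS, N_CHANNELS) binary hit map for one bunch crossing."""
    # Each DAM computes its feature vector independently (in parallel on FPGA).
    feats = np.stack([relu(hits[i] @ W_local[i]) for i in range(N_DAMS)])
    # Feature vectors are relayed to the TP, which emits a single score.
    z = feats.reshape(-1) @ w_tp
    return 1.0 / (1.0 + np.exp(-z))  # sigmoid: signal vs. DCR noise-only


event = rng.integers(0, 2, size=(N_DAMS, N_CHANNELS)).astype(float)
keep = tp_score(event) > 0.5  # events scored as noise-only are discarded
```

The split mirrors the bandwidth argument in the abstract: only N_FEATURES values per DAM cross the low-latency link to the TP, rather than the full channel map.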


