Multimodal background subtraction for high-performance embedded systems
Cocorullo, G.; Corsonello, P.; Frustaci, F.; Perri, S.
2019-01-01
Abstract
In many computer vision systems, background subtraction algorithms play a crucial role in extracting information about moving objects. Although color features have been used extensively in several background subtraction algorithms with high efficiency and good performance, in real applications background subtraction accuracy remains a challenge due to dynamic, diverse, and complex background types. In this paper, a novel background subtraction method is proposed to achieve low computational cost and high accuracy in real-time applications. The proposed approach computes the background model using a limited number of historical frames, making it suitable for real-time embedded implementation. To compute the background model, pixel grayscale information and the color invariant H are jointly exploited. Differently from state-of-the-art competitors, the background model is updated by analyzing the percentage changes of the current pixels with respect to the corresponding pixels in the modeled background and in the historical frames. Comparison with several traditional and real-time state-of-the-art background subtraction algorithms demonstrates that the proposed approach can manage several challenges, such as the presence of a dynamic background and the absence of frames free from foreground objects, without undermining accuracy. Different hardware designs have been implemented, for several image resolutions, on an Avnet ZedBoard containing an xc7z020 Zynq FPGA device. Post-place-and-route characterization results demonstrate that the proposed approach is suitable for integration in low-cost high-definition embedded video systems and smart cameras.
In fact, the presented system uses 32 MB of external memory, 6 internal Block RAMs, fewer than 16,000 Slice FFs, and slightly more than 20,000 Slice LUTs, and it processes Full HD RGB video sequences at a frame rate of about 74 fps.
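As a rough illustration of the percentage-change update rule described in the abstract, the following is a minimal NumPy sketch, not the authors' exact algorithm: a pixel is blended into the background model only when its relative change with respect to both the current model and the recent history stays below a threshold. The threshold, blending factor, and function names here are illustrative assumptions.

```python
import numpy as np

def update_background(background, history, frame, change_thresh=0.15, alpha=0.1):
    """Sketch of a percentage-change background update (illustrative only).

    background : 2-D float array, current background model
    history    : 3-D float array, stack of recent historical frames
    frame      : 2-D float array, incoming frame (grayscale)
    change_thresh and alpha are arbitrary example values, not the paper's.
    """
    eps = 1e-6  # avoid division by zero on dark pixels
    # Relative (percentage) change w.r.t. the modeled background.
    rel_bg = np.abs(frame - background) / (background + eps)
    # Relative change w.r.t. the mean of the historical frames.
    hist_mean = history.mean(axis=0)
    rel_hist = np.abs(frame - hist_mean) / (hist_mean + eps)

    # A pixel is considered background when both relative changes are small.
    is_background = (rel_bg < change_thresh) & (rel_hist < change_thresh)

    # Blend stable pixels into the model; leave foreground pixels untouched.
    new_background = np.where(is_background,
                              (1.0 - alpha) * background + alpha * frame,
                              background)
    foreground_mask = ~is_background
    return new_background, foreground_mask
```

In a full multimodal version along the lines of the paper, the same comparison would also be applied to the color invariant H channel and the two decisions combined, with the history buffer kept short so the whole state fits in the limited external memory of an embedded platform.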