
Design of a Low-Power Super-Resolution Architecture for Virtual Reality Wearable Devices

Spagnolo F.; Corsonello P.; Frustaci F.; Perri S.
2023-01-01

Abstract

Head-mounted displays (HMDs) have made virtual reality (VR) accessible to a widespread consumer market, revolutionizing many applications. Among the limitations of current HMD technology, the need to generate high-resolution images and stream them at adequate frame rates is one of the most critical. Super-resolution (SR) convolutional neural networks (CNNs) can alleviate the timing and bandwidth bottlenecks of video streaming by reconstructing high-resolution images locally (i.e., near the display). However, such techniques involve a computational workload that often makes their deployment within area- and power-constrained wearable devices unfeasible. This work originated from the observation that the human eye captures details with high acuity only within a certain region, called the fovea. Therefore, we designed a custom hardware architecture that reconstructs high-resolution images by treating the foveal region (FR) and the peripheral region (PR) with accurate and approximate operations, respectively. Hardware experiments demonstrate the effectiveness of our proposal: a customized fast SR CNN (FSRCNN) accelerator, realized as described here and implemented in a 28-nm process technology, processes up to 214 ultrahigh-definition frames/s while consuming just 0.51 pJ/pixel, without compromising perceptual visual quality, thus achieving a 55% energy reduction and a 14× higher throughput with respect to state-of-the-art competitors.
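The foveated accurate/approximate split described in the abstract can be illustrated with a toy sketch: reconstruct the foveal region with a more accurate interpolation and the peripheral region with a cheaper one, selected by a circular fovea mask. This is not the authors' FSRCNN accelerator — the bilinear/nearest-neighbor pair, the mask shape, and all parameter names below are illustrative assumptions standing in for the paper's accurate and approximate CNN datapaths.

```python
import numpy as np

def foveated_upscale(lr, scale=2, fovea_center=(0.5, 0.5), fovea_radius=0.25):
    """Toy foveated upscaling (illustrative only, not the paper's method):
    foveal region (FR) -> bilinear ("accurate"),
    peripheral region (PR) -> nearest-neighbor ("approximate")."""
    h, w = lr.shape
    H, W = h * scale, w * scale
    ys, xs = np.mgrid[0:H, 0:W]
    # Map each high-resolution pixel back to low-resolution coordinates.
    src_y = ys / scale
    src_x = xs / scale

    # Cheap nearest-neighbor reconstruction, used for the periphery.
    nn = lr[np.clip(np.round(src_y).astype(int), 0, h - 1),
            np.clip(np.round(src_x).astype(int), 0, w - 1)]

    # More accurate bilinear reconstruction, used for the fovea.
    y0 = np.clip(np.floor(src_y).astype(int), 0, h - 1)
    x0 = np.clip(np.floor(src_x).astype(int), 0, w - 1)
    y1 = np.clip(y0 + 1, 0, h - 1)
    x1 = np.clip(x0 + 1, 0, w - 1)
    wy = src_y - y0
    wx = src_x - x0
    bil = (lr[y0, x0] * (1 - wy) * (1 - wx) + lr[y0, x1] * (1 - wy) * wx +
           lr[y1, x0] * wy * (1 - wx) + lr[y1, x1] * wy * wx)

    # Circular foveal mask in high-resolution coordinates.
    cy, cx = fovea_center[0] * H, fovea_center[1] * W
    r = fovea_radius * min(H, W)
    fovea = (ys - cy) ** 2 + (xs - cx) ** 2 <= r ** 2

    # Accurate result inside the fovea, approximate result outside.
    return np.where(fovea, bil, nn)
```

Since visual acuity falls off sharply outside the fovea, the periphery tolerates the cheaper reconstruction with little perceptual loss — the same rationale that lets the accelerator trade accuracy for energy in the PR.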
2023
Keywords: Hardware architecture; low power; super-resolution (SR); wearable devices

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.11770/357118

Citations
  • Scopus: 8
  • Web of Science: 3