Special issue on computational image sensors and smart camera hardware

J Fernández-Berni, R Carmona-Galán, G Sicard… - 2017 - digital.csic.es
Embedded computer vision is becoming a disruptive technological component for key market drivers of the semiconductor industry such as smartphones, the Internet of Things and the automotive sector. The recent incorporation of advanced artificial intelligence techniques into robust and precise inference schemes is underpinning this disruptiveness, along with the increase of on-chip computational power and the development of tools for rapid prototyping and experimentation. As a result, visual sensing is being embedded in all kinds of products and services, with different degrees of scene understanding. These products either did not exist before or are ousting existing ones.

In this scenario, vision-enabled systems face an ever-growing demand for lower power consumption, lower cost, smaller form factor, higher image resolution and higher throughput. These burdensome requirements call for new approaches to handling the massive flow of information associated with the visual stimulus. In particular, early vision stands out as the critical stage where raw pixels are transformed into features useful for the targeted task. To cope effectively with this stage, front-end hardware resources play a major role. Incorporating advanced sensing and computational capabilities into image sensors allows parallelism and distributed memory to be exploited from the very beginning of the signal processing chain. Sensing structures can be designed to produce multiple sensorial modalities, e.g. 2D/depth. Circuit blocks can be tuned to adapt the response of the sensor to the specifications of the subsequent processing stage, where parallelization and memory management will also be critical. An adequate dataflow organization in FPGAs, GPUs, DSPs, etc. is of utmost importance in order to preserve the benefits of these earlier performance boosters and further increase the throughput.

All in all, this special issue focuses on those aspects of embedded vision systems that bear on their degree of integrated intelligence.
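As a rough illustration of what "early vision" refers to here, the following minimal Python sketch (not taken from the editorial; frame size, kernel choice and function names are illustrative assumptions) shows raw pixels being turned into a simple edge-strength feature map with a 3x3 gradient convolution, the kind of per-pixel operation that focal-plane or near-sensor hardware can parallelize.

import numpy as np

def sobel_magnitude(raw: np.ndarray) -> np.ndarray:
    """Return an edge-strength feature map for a 2D grayscale frame.

    Illustrative sketch of an early-vision step: each output value is
    computed from a local 3x3 neighbourhood, so every pixel's feature
    could in principle be computed in parallel close to the sensor.
    """
    kx = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]], dtype=np.float32)  # horizontal gradient kernel
    ky = kx.T                                       # vertical gradient kernel
    h, w = raw.shape
    out = np.zeros((h - 2, w - 2), dtype=np.float32)
    for i in range(h - 2):
        for j in range(w - 2):
            patch = raw[i:i + 3, j:j + 3].astype(np.float32)
            gx = np.sum(patch * kx)        # horizontal gradient response
            gy = np.sum(patch * ky)        # vertical gradient response
            out[i, j] = np.hypot(gx, gy)   # edge-strength feature
    return out

if __name__ == "__main__":
    frame = np.random.randint(0, 256, size=(8, 8), dtype=np.uint8)  # stand-in raw frame
    features = sobel_magnitude(frame)
    print(features.shape)  # (6, 6): one feature value per interior pixel

In a smart image sensor or near-sensor processor, an operation of this kind would typically be mapped onto per-pixel or per-column circuitry rather than a sequential loop; the sketch only conveys the data dependency (local neighbourhood in, feature out) that makes such parallel, distributed-memory mappings possible.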