Author Affiliations
1 The Hong Kong Polytechnic University Shenzhen Research Institute, Shenzhen 518057, China
2 Department of Applied Physics, The Hong Kong Polytechnic University, Hong Kong, China
Fig. 1. (Color online) Schematic diagram of (a) the composition of the human visual system, (b) the multilayer structure of the human retina, and (c) a biological synapse.
Fig. 2. (Color online) (a) Abstracted pixel schematic of the DVS. (b) Principle of operation[45]. (c) Response of a DVS array to a person moving in the field of view of the sensor. (d) A DVS array observing a 500 Hz spiral on an analog oscilloscope. (e) The DVS output is a continuous sequence of address events (x, y) in time; red and blue events represent an increase or a decrease in light intensity, respectively[15].
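The event-generation principle in Fig. 2(b) — a pixel emits an ON or OFF address event whenever its log-intensity has changed by more than a fixed contrast threshold since the last event — can be sketched with a minimal per-pixel simulation. The function name, threshold value, and input format below are illustrative assumptions, not the circuit described in Ref. [45]:

```python
import math

def dvs_events(intensity_trace, threshold=0.15):
    """Simulate a single DVS pixel: emit +1 (ON) / -1 (OFF) events whenever
    log intensity moves by more than `threshold` since the last event.
    The ~15% contrast threshold is an illustrative choice."""
    events = []
    ref = math.log(intensity_trace[0])  # reference level, reset at each event
    for t, intensity in enumerate(intensity_trace[1:], start=1):
        diff = math.log(intensity) - ref
        while abs(diff) >= threshold:   # large changes fire several events
            polarity = 1 if diff > 0 else -1
            events.append((t, polarity))
            ref += polarity * threshold
            diff = math.log(intensity) - ref
    return events
```

A brightening-then-darkening trace such as `[1.0, 1.2, 1.5, 1.5, 1.0]` yields ON events while intensity rises, no events while it is constant, and OFF events when it falls — the same sparse, change-driven output shown in Fig. 2(e).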
Fig. 3. (Color online) (a) Abstracted schematic of an ATIS pixel. (b) Principle of operation of the two types of asynchronous AER events. (c) Change-detection events recorded (top) and gray-level updates at the corresponding pixel positions (bottom)[15, 16].
Fig. 4. (Color online) The circuit and output of the DAVIS vision sensor. (a) The DAVIS pixel circuit combines an APS with a DVS. (b) A snapshot from a DAVIS sensor illustrating a captured APS frame in gray scale with the DVS events in color; the football is flying toward the person. Inset: 5 ms of output immediately after the frame capture of the football. (c) Space-time 3D view of DVS events during 40 ms of a white rectangle spinning on a black disk at 100 Hz. Green events are older and red events are newer[49].
Fig. 5. (Color online) Neuromorphic vision sensors based on ORRAM. (a) I–V characteristics of the ORRAM with optical set and electrical reset. Inset: schematic structure of the MoOx ORRAM and its cross-sectional scanning electron microscopy (SEM) image. Scale bar, 100 nm. (b) Light-tunable synaptic characteristics under light intensities of 0.22, 0.45, 0.65 and 0.88 mW/cm2 with a pulse width of 200 ms. (c) Illustration of the image memory function of the ORRAM array. The letter F was stimulated with a light intensity of 0.88 mW/cm2. (d) Images before (left columns) and after (right columns) ORRAM image-sensor pre-processing. (e) Image recognition rate with and without ORRAM pre-processing[17].
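The pre-processing effect in Fig. 5(d) — contrast enhancement and background-noise suppression — arises because the device's light-to-conductance response is nonlinear: weak stimuli barely change the state while strong stimuli saturate it. A minimal sketch of that idea, using an assumed sigmoid response rather than the measured ORRAM curve (function name and parameters are illustrative):

```python
import math

def orram_preprocess(image, midpoint=0.5, gain=10.0):
    """Illustrative contrast enhancement: map each normalized pixel
    intensity (0..1) through a sigmoid-like conductance response.
    Weak background pixels are pushed toward 0, strong ones toward 1.
    The sigmoid form and its parameters are assumptions."""
    return [[1.0 / (1.0 + math.exp(-gain * (p - midpoint))) for p in row]
            for row in image]
```

After such a mapping, a noisy low-intensity background (e.g. 0.1) is suppressed well below its original value, while a bright stimulus (e.g. 0.9) is boosted — qualitatively the behavior that improves the recognition rate in Fig. 5(e).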
Fig. 6. (Color online) NN vision sensors. (a) Schematic of the 2D perovskite/graphene optical synaptic device[21]. (b) Schematic of an artificial optic-neural synapse device based on an h-BN/WSe2 heterostructure[20]. (c) Optical image of the WSe2/h-BN/Al2O3 vdW heterostructure device (left) and its structural diagram (right)[19]. (d) Optical microscope image of the photodiode array consisting of 3 × 3 pixels. Upper right: schematic of a WSe2 photodiode. Bottom right: SEM image of a pixel. (e) Schematic of the classifier. (f) Schematic of the autoencoder[18].
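In the classifier of Fig. 6(e), each subpixel's photoresponsivity acts as a trainable network weight: photocurrents generated under an optical input sum along shared output lines, so the array computes a matrix-vector product directly in the sensor. A hedged sketch of that readout (names, shapes, and the winner-take-all readout are illustrative assumptions, not the exact scheme of Ref. [18]):

```python
def sensor_classify(light, responsivities):
    """In-sensor single-layer classifier sketch: the photocurrent on
    output line j is the sum over pixels i of responsivity R[j][i]
    times light power P[i], i.e. a matrix-vector product computed by
    current summation. The class is the line with the largest current."""
    currents = [sum(r * p for r, p in zip(row, light))
                for row in responsivities]
    return max(range(len(currents)), key=lambda j: currents[j])
```

Training then amounts to programming the responsivity matrix (via gate voltages in the dual-gate WSe2 devices), after which classification happens at the speed of photodetection, with no separate memory or processor in the loop.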
Fig. 7. (Color online) A hemispherical retina based on a perovskite nanowire array and its properties. (a) Side view of a completed EC-EYE. (b) Structural diagram of the EC-EYE. (c) Photocurrent and responsivity as functions of light intensity for a perovskite nanowire photoreceptor. (d) I–V characteristics and response of individual pixels. (e) Comparison of the field of view (FOV) of planar and hemispherical image sensors. (f) Reconstructed image of the letter ‘A’ captured by the EC-EYE and its projection onto a flat plane[23].
Parameter | DVS[45, 46] | ATIS[47, 48] | DAVIS[49, 50]
---|---|---|---
Major function | Asynchronous temporal contrast event detection | DVS + intensity measurement for each event | DVS + synchronous imaging
Noise | 2.1% | 0.25% | 0.4% APS, 3.5% DVS
Pixel complexity | 26 transistors, 3 caps, 1 photodiode | 77 transistors, 3 caps, 2 photodiodes | 47 transistors, 3 caps, 1 photodiode
Power consumption (mW) | 24 | 50–175 | 5–14
Resolution | 128 × 128 | 304 × 240 | 240 × 180
Pixel size (μm²) | 40 × 40 | 30 × 30 | 18.5 × 18.5
Latency (μs) | 15 | 4 | 3
Dynamic range | 120 dB | 125 dB | 130 dB DVS, 51 dB APS
Date of publication | 2008 | 2011 | 2013
Application | Dynamic scenes | Surveillance | Dynamic scenes
Table 1. Comparison of three representative silicon retinas.
Neuromorphic vision sensors | Device structure | Terminal number | Light wavelength (nm) | Array size | Functions | Ref.
---|---|---|---|---|---|---
ORRAM vision sensors | WSe2/BN FET | Three | 405–638 | 3 × 9 | Multibit optoelectronic memory/broadband spectrum distinction | [58]
 | CuIn7Se11 FET | Three | 543 | 3 pixels | Optoelectronic memory | [53]
 | Pd/MoOx/ITO | Two | 365 | 8 × 8 | Contrast enhancement/noise reduction | [17]
NN vision sensors | Gra./2D perovskite/gra. FET | Three | 520 | – | High photo-responsivity/high stability/pattern recognition | [21]
 | WSe2/h-BN FET | Three | 405, 532, 655 | – | Colored and color-mixed pattern recognition/ultra-low power consumption | [20]
 | WSe2 dual-gate FET | Four | 650 | 27 pixels | Ultrafast recognition and encoding | [18]
 | WSe2/h-BN/Al2O3 FET | Three | – | 8 × 8 | Pattern recognition/edge enhancement/contrast correction | [19]
Hemispherically shaped vision sensors | Silicon photodiodes | Two | 620–700 | 16 × 16 | Hemispherical electronic eye cameras/arbitrary curvilinear shapes | [32]
 | Silicon photodiodes | Two | 532 | 8 × 8 | Hemispherical shapes/FOV (140–180°) | [30]
 | MoS2/graphene heterostructure FET | Three | 515 | >12 × 12 | High-density array design/small optical aberration/simplified optics | [22]
 | Silicon-based lateral P–i–N photodiodes | Two | 543, 594, 633 | 676 pixels | Hemisphere-like structures | [66]
 | Ionic liquid/perovskite nanowire/liquid metal | Two | Sunlight | 10 × 10 | High responsivity/reasonable response speed/low detection limit/wide FOV | [23]
Table 2. Comparisons of neuromorphic vision sensors based on emerging devices.