• Chinese Optics Letters
  • Vol. 17, Issue 11, 111001 (2019)
Qungang Ma1, Liangcai Cao2, Zehao He2,*, and Shengdong Zhang1
Author Affiliations
  • 1School of Electronic and Computer Engineering, Peking University, Shenzhen 518055, China
  • 2State Key Laboratory of Precision Measurement Technology and Instruments, Department of Precision Instruments, Tsinghua University, Beijing 100084, China
    DOI: 10.3788/COL201917.111001
    Qungang Ma, Liangcai Cao, Zehao He, Shengdong Zhang. Progress of three-dimensional light-field display [Invited][J]. Chinese Optics Letters, 2019, 17(11): 111001


    In this review, the principle of and the optical methods for light-field display are introduced. Light-field display is divided into three categories: the layer-based method, the projector-based method, and the integral imaging method. The principle, characteristics, history, and recent research results of each method are reviewed. The advantages of light-field display are discussed by comparing it with other display technologies, including binocular stereoscopic display, volumetric three-dimensional display, and holographic display.


    Information about the real world can be acquired by humans in a variety of ways. Among them, more than 70% of information is obtained through visual perception. Through visual information, humans perceive the three-dimensional (3D) layout of objects in the real world. Human perception of 3D information is achieved through pseudo 3D effects, binocular parallax, motion parallax, the monocular focusing effect, and the binocular convergence effect. Pseudo 3D effects, such as affine, texture, and shadow cues, contain no binocular depth information about the displayed object. They can only deceive the human brain into producing a psychological 3D feeling. Binocular parallax refers to the difference between the two images presented to the left and right eyes, respectively. These two slightly different images are fused by the brain, producing 3D immersion. Motion parallax refers to the unequal apparent movement of objects at different depths when a person observes a 3D scene while moving. The monocular focusing effect refers to the adjustment of the lens in the human eye to view objects at different depths more clearly. The binocular convergence effect refers to the rotation of the optical axes of the two eyes so that they converge at the center of the target object at a specific depth.
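    Two of the cues above, binocular parallax and binocular convergence, can be quantified with simple geometry. The sketch below is illustrative only: the 65 mm inter-pupillary distance and the on-axis fixation geometry are our assumptions, not values from the text.

```python
import math

# Illustrative geometry for two binocular cues. The convergence angle is
# the angle between the two eyes' optical axes when fixating a point
# straight ahead; the binocular-parallax signal is the difference in
# convergence angle between two depths. IPD of 65 mm is a typical
# assumed value.

IPD_M = 0.065  # inter-pupillary distance (m), assumed

def vergence_angle_deg(depth_m, ipd=IPD_M):
    """Angle (degrees) between the optical axes when both eyes fixate a
    point straight ahead at distance depth_m."""
    return math.degrees(2.0 * math.atan(ipd / (2.0 * depth_m)))

def disparity_deg(depth_near_m, depth_far_m, ipd=IPD_M):
    """Relative angular disparity between two points at different
    depths -- the signal fused by the brain into depth perception."""
    return vergence_angle_deg(depth_near_m, ipd) - vergence_angle_deg(depth_far_m, ipd)

theta = vergence_angle_deg(0.5)        # fixation at half a meter
d = disparity_deg(0.5, 1.0)            # nearer point has larger angle
```

    As expected, the vergence angle shrinks toward zero for distant objects, which is why binocular cues mainly help at close range.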

    Nowadays, humans often observe the real world through display devices instead of on-the-spot observation. The development status of display devices determines the comprehensiveness and authenticity of humans' cognition of the real world. According to the richness of the depth cues they provide, display devices can be divided into three levels, as shown in Fig. 1. The traditional two-dimensional (2D) display is on the basic level, as it can only provide pseudo 3D effects[1]. Binocular vision display is on the medium level, as it can provide binocular parallax and the binocular convergence effect[2,3]. However, in binocular vision display, the distance between the displayed contents and the human eyes is not equal to the distance between the display screen and the human eyes. The eyes focus on the display screen, while the intersection of their optical axes lies on the displayed contents. This phenomenon is called the convergence-accommodation conflict[4]. Viewing with this type of device for a long time causes dizziness and fatigue. Besides, binocular vision display cannot provide motion parallax. These shortcomings limit the application of binocular vision display. Volumetric 3D display employs point sources that emit light at specific positions in 3D space. Light emitted by these point sources enters the human eyes, and 3D objects can then be seen[5,6]. It suffers from limitations such as high system complexity, a huge calculation amount, and large size. Holographic display is a 3D display technology that records the information of 3D objects by interference and reconstructs them by diffraction[7-9]. The theoretical display effect of holographic display is the same as that of the real world. However, the amount of data to be processed is large, and the requirements for calculation power and transmission rate are quite demanding. Light-field display is a type of technology that constructs 3D objects by ray tracing. It can realize high-quality reconstruction of 3D objects with much less data processing compared to holographic display[10-12].


    Figure 1. Three levels of 3D display based on the comprehensiveness and authenticity.

    Volumetric 3D display, holographic display, and light-field display are on the top level of 3D display because they can provide vivid 3D display effects similar to the real world. A comparison of them is shown in Table 1. Compared to volumetric 3D display and holographic display, light-field display has the advantages of lower cost, lower calculation demand, and lower system complexity. Therefore, light-field display, as one of the high-quality 3D display technologies, is the most attractive technology in the industrial area, and it is the focus of this paper.

                      System Complexity   Data Amount   Calculation Power   Transmission Rate   3D Effect
    Volumetric 3D     High                High          High                High                Medium

    Table 1. Comparison of Volumetric 3D Display, Holographic Display, and Light-Field Display


    A. Theoretical Basis of Light-Field Display

    The reason why 3D objects can be seen is that the light emitted or reflected by the object is received by human eyes. For specific 3D objects, different images can be seen by the human eyes from different perspectives. This relationship can be quantitatively expressed as I = P(x, y, z; θ, φ; λ; t), where (x, y, z) is the coordinate of the human eye's position, (θ, φ) gives the angles of the light ray in the horizontal and vertical directions, λ is the wavelength of the light ray, and t is time, indicating that the light intensity changes over time. Therefore, the light rays emitted by 3D objects can be expressed by a seven-dimensional function, called the plenoptic function[13].

    However, it is extremely difficult to process and transmit seven-dimensional functions in real time with current calculation capacity. Assuming that the intensity of light does not attenuate and the wavelength does not change during propagation, the seven-dimensional plenoptic function can be simplified to a four-dimensional function, expressed as I = P(u, v; p, q), where (u, v) and (p, q) are coordinates on two non-coplanar planes. If a light ray intersects each of the planes, the ray can be represented by these two intersections[14]. Considering that the distance from the 3D objects to the human eyes is very limited, the attenuation of light in the air is very small. Thus, this simplification of the seven-dimensional plenoptic function is entirely reasonable.
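    The two-plane parameterization above can be made concrete in a few lines. This is a minimal sketch under our own conventions: we place the (u, v) plane at z = 0 and the (p, q) plane at z = d, whereas the text only requires the two planes to be non-coplanar.

```python
import numpy as np

# Sketch of the two-plane parameterization I = P(u, v; p, q): a ray is
# stored as its intersections (u, v) with the plane z = 0 and (p, q)
# with the plane z = d. The plane placement is our assumption.

D = 1.0  # separation between the two parameterization planes

def ray_from_intersections(u, v, p, q, d=D):
    """Recover the origin and unit direction of the ray crossing
    (u, v, 0) and (p, q, d)."""
    origin = np.array([u, v, 0.0])
    direction = np.array([p - u, q - v, d])
    return origin, direction / np.linalg.norm(direction)

def point_on_ray(u, v, p, q, z, d=D):
    """Point where the ray pierces the plane at depth z, by linear
    interpolation between the two recorded intersections."""
    t = z / d
    return np.array([u + t * (p - u), v + t * (q - v), z])

# A ray through (0, 0) on the first plane and (1, 1) on the second
# passes through (0.5, 0.5, 0.5) halfway between them.
x = point_on_ray(0.0, 0.0, 1.0, 1.0, 0.5)
```

    Any ray not parallel to the planes maps to a unique (u, v, p, q) quadruple, which is why four numbers suffice once attenuation and wavelength change are neglected.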

    In order to reconstruct the four-dimensional function, special display devices need to be built, in which the intensity and the direction of the light emitted by each point can be accurately controlled. The 3D objects to be displayed can then be reconstructed by these display devices. According to the different ways of realizing the four-dimensional function, light-field display can be divided into three categories: the layer-based method[15-19], the projector-based method[20-22], and the integral imaging method[23-25].

    B. Layer-Based Light-Field Display

    The schematic of the layer-based light-field display is shown in Fig. 2. The liquid crystal (LC) panel-1 and LC panel-2 indicate the (u, v) plane and the (p, q) plane, respectively. The light rays that determine the position of the real point P can be represented by the positions of a1, a2, a3 in the (u, v) plane and b1, b2, b3 in the (p, q) plane. Meanwhile, the light intensity of point P in different directions can be represented by the transmittances of these points. Similarly, the position and the intensity of the virtual point Q can also be reconstructed by points in the (u, v) plane and the (p, q) plane. The reconstruction of the 3D object is realized by controlling the transmittance of each pixel according to Eq. (2).


    Figure 2. Schematic diagram of the layer-based light-field display.

    The layer-based light-field display employs pixels on multiple planes to render the positions and intensities of points of 3D objects. The depth of field can be improved by increasing the number of layers, and it exceeds that of the traditional multi-view stereo display. However, in practical applications, the screen size of each plane is finite, and the effective size of the planes limits the viewing angle of the light-field display.
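    The way stacked panels encode ray intensity can be sketched with the multiplicative attenuation model: a ray from the backlight is attenuated by the pixel it crosses on each LC panel, so the emitted intensity is the product of the crossed transmittances. The 1D panel geometry and pixel values below are illustrative assumptions, not from the text.

```python
import numpy as np

# Minimal 1D sketch of the layer-based forward model with two LC
# panels: the intensity of the ray entering panel 1 at pixel u and
# leaving panel 2 at pixel p is the backlight times both transmittances.

def emitted_intensity(t1, t2, u, p, backlight=1.0):
    """Intensity of the ray indexed by (u, p) through two panels with
    transmittance arrays t1 and t2."""
    return backlight * t1[u] * t2[p]

# Two 4-pixel panels. Every (u, p) pair indexes one ray, so the whole
# emitted light field is the outer product L[u, p] = t1[u] * t2[p]
# (a rank-1 approximation of the target light field).
t1 = np.array([1.0, 0.8, 0.5, 0.2])
t2 = np.array([0.9, 0.9, 0.4, 1.0])
L = np.outer(t1, t2)
```

    Because a single pixel of one panel is shared by many rays, a two-layer stack can only realize light fields close to this product form, which is why adding layers (more factors) improves the achievable 3D effect.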

    With the development of the display devices, a new type of layer-based light-field display has appeared, which is called the vector-fields light-field display. The schematic of the vector-fields light-field display is shown in Fig. 3. The directional backlight panel indicates the (u,v) plane, while the LC panel indicates the (p,q) plane. The directional backlight device is composed of an illumination source and optical waveguides. The direction and divergence angle of each pixel are controlled by the optical waveguide. These directional rays can be employed to reconstruct 3D objects.


    Figure 3. Schematic diagram of vector-fields light-field display.

    The advantages of vector-fields light-field display include large viewing angle, high resolution, and high contrast. However, the pixel size of the directional backlight should be as small as possible. Meanwhile, the divergence angle of the exit light rays in the directional backlight panel should be narrow enough. Thus, this technology has high requirements for the design and fabrication of optical waveguides.

    C. Projector-Based Light-Field Display

    The projector-based light-field display could be divided into two categories: time-division method (TDM) and projector-array method. There are two typical configurations for the TDM light-field display. The first type is shown in Fig. 4(a). The projector plays the role of the (u,v) plane, while the directional diffuser plays the role of the (p,q) plane. The information of the 3D objects to be displayed is projected by the projector. A narrow viewing zone is generated by the directional diffuser. In order to enlarge the viewing zone, the directional diffuser is rotated at a high speed. The second type is shown in Fig. 4(b). In such a TDM light-field display, the projector moves at a high speed. The directional diffuser diffuses specific images to the corresponding viewing zone. Based on the persistence effect of human eyes, when the refresh rate of the image exceeds 30 Hz, human eyes could see continuous, non-fluctuating reconstructed images of 3D objects.


    Figure 4. Schematic diagram of TDM light-field display[26].

    The advantages of the TDM light-field display include high resolution and a large viewing angle. However, it places demanding requirements on the refresh rate of the display devices. The digital micro-mirror device (DMD), which can project 10^4 images per second, is the most commonly used device in TDM light-field display. Besides, there are mechanically moving parts in the TDM light-field display, which affect the stability of the display system.
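    The two numbers above fix the time-multiplexing budget: every viewpoint must be refreshed at the flicker-free rate, so the projector frame rate divided by the refresh rate bounds the number of viewpoints. A back-of-envelope sketch using the 10^4 frames/s DMD figure and the 30 Hz persistence threshold from the text:

```python
# TDM viewpoint budget: frames_per_second / refresh_hz viewpoints can
# each be redrawn once per refresh period. Both constants are taken
# from the surrounding text (DMD rate, flicker threshold).

DMD_FRAMES_PER_SECOND = 10_000
MIN_REFRESH_HZ = 30

def max_viewpoints(frames_per_second=DMD_FRAMES_PER_SECOND,
                   refresh_hz=MIN_REFRESH_HZ):
    """Largest number of time-multiplexed viewpoints the projector can
    serve while each viewpoint still refreshes at refresh_hz."""
    return frames_per_second // refresh_hz

n = max_viewpoints()  # roughly 333 viewpoints at 30 Hz
```

    This is consistent with the systems cited later that provide more than 200 viewpoints at a 30 Hz refresh rate.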

    The schematic of the projector-array light-field display is shown in Fig. 5. A series of projectors are employed to project their respective sub-images onto the directional diffuser. The projector array indicates the (u, v) plane, while the directional diffuser indicates the (p, q) plane. The directional diffuser, as a light-field control device, has a small divergence angle in the horizontal direction but a large divergence angle in the vertical direction. When a sub-image from a single projector is projected onto the directional diffuser, the corresponding narrowband sub-image can be seen along the line connecting the viewpoint and the pupil of that projector. A series of narrowband sub-images are stitched together to form a 3D image.


    Figure 5. Schematic diagram of projector-array light-field display.

    Generally, horizontal parallax is more important than vertical parallax in 3D display. Ignoring vertical parallax greatly reduces the amount of data needed by the 3D display system. Besides, the use of high-resolution projectors makes the projector-array light-field display more suitable for displaying large-scale, high-resolution 3D scenes. However, the lack of vertical parallax limits the quality of the 3D display. Meanwhile, when the number of narrowband sub-images is not large enough, the 3D effect of the projector-array light-field display is not continuous.
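    The stitching of narrowband sub-images can be sketched with horizontal-parallax-only geometry: from a given eye position, each diffuser point relays the projector closest to the line from the eye through that point. The flat diffuser at z = 0 and the specific positions below are illustrative assumptions, not the layout of any cited system.

```python
import numpy as np

# Horizontal-parallax-only sketch of projector-array stitching. The
# diffuser barely spreads light horizontally, so a diffuser point shows
# the projector whose pupil lies nearest to the extended eye->diffuser
# ray. All coordinates (x, z) in meters; geometry is assumed.

def visible_projector(eye, diffuser_x, projector_xs, projector_z):
    """Index of the projector seen through the diffuser point
    (diffuser_x, 0) from eye position (x, z)."""
    ex, ez = eye
    # x-coordinate where the eye->diffuser ray meets the projector plane
    x_hit = ex + (diffuser_x - ex) * (projector_z - ez) / (0.0 - ez)
    return int(np.argmin(np.abs(np.asarray(projector_xs) - x_hit)))

projectors = [-0.3, -0.1, 0.1, 0.3]   # pupil x-positions, assumed
idx = visible_projector(eye=(0.0, 2.0), diffuser_x=0.05,
                        projector_xs=projectors, projector_z=-1.0)
```

    Sweeping diffuser_x across the screen changes which projector is selected, so the eye sees a mosaic of vertical strips from different projectors, which is exactly the stitched 3D image; too few projectors makes the transitions between strips visible.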

    D. Integral Imaging Light-Field Display

    The schematic of the integral imaging light-field display is shown in Fig. 6. The display screen and micro-lens array indicate the (u,v) plane and the (p,q) plane, respectively. The images of different perspective projections of the 3D objects are displayed on the different parts of the screen. The micro-lens array collects and restores the light emitted from different parts of the screen and reconstructs 3D objects from different perspective projections.


    Figure 6. Schematic diagram of integral imaging light-field display.

    The integral imaging light-field display can supply both horizontal and vertical parallax simultaneously. However, the resolution of the 3D objects is reduced dramatically. An ultra-high-resolution display screen and a high-precision micro-lens array can improve the resolution of integral imaging, but the alignment requirement between the display screen and the micro-lens array is extremely high. The viewing angle of integral imaging is determined by the distance between the micro-lens array and the display screen, and it is usually less than 10 deg. The comparison of different realization methods of light-field display is shown in Table 2.

                        Resolution   Viewing Angle   Brightness   Contrast   Complexity
    Integral imaging    Low          Small           High         Low        Low

    Table 2. Comparison of Different Realization Methods of Light-Field Display
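    The two integral imaging trade-offs in Table 2 follow from elemental-image geometry: the viewing angle is set by the lens pitch and the lens-to-screen gap, and the resolution drops by the number of views packed under each lens. The formula is the standard geometric estimate, and the numeric values below are illustrative assumptions, not from a cited system.

```python
import math

# Standard geometric estimate for integral imaging: each elemental
# image of pitch p sits a gap g behind its micro-lens, giving a full
# viewing angle theta = 2 * arctan(p / (2 g)). The pitch, gap, and
# pixel size used below are made-up example values.

def viewing_angle_deg(pitch_mm, gap_mm):
    """Full viewing angle (degrees) of an integral imaging display."""
    return math.degrees(2.0 * math.atan(pitch_mm / (2.0 * gap_mm)))

def views_per_lens(pitch_mm, pixel_mm):
    """Perspective views per micro-lens per dimension, which is also
    the factor by which the 2D resolution drops in that dimension."""
    return round(pitch_mm / pixel_mm)

theta = viewing_angle_deg(pitch_mm=1.0, gap_mm=6.0)   # under 10 deg
n = views_per_lens(pitch_mm=1.0, pixel_mm=0.05)       # 20x resolution loss
```

    With these example numbers the viewing angle is about 9.5 deg, matching the "usually less than 10 deg" figure in the text, and each dimension of 2D resolution is divided by the 20 views per lens.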


    A. Early Stage of Light-Field Display

    Light-field display can be traced back to the "integral photography" proposed by Lippmann in 1908[27]. He indicated that 3D objects could be recorded and reconstructed by a small lens array or a compound-eye lens. Obviously, this is an integral-imaging-type light-field display, but the concept of the "light field" had not been proposed at that time. In fact, the term "light field" was first proposed, to the best of our knowledge, in Gershun's article in 1936. He indicated that the radiation of light in space could be expressed as a 3D vector of spatial position. This article was translated into English by Moon and Timoshenko in 1939[28]. In 1981, the concept of the "photic field" was proposed by Moon[29]. The photic field is a more comprehensive and systematic theory compared to the "light field" proposed by Gershun. On the basis of the photic field, many researchers further developed the theory of display technology based on the light field[13,30,31]. Light-field display gradually became a complete theoretical system.

    Research on light-field display systems began in the 1960s. In 1968, the light-field display of computer-generated objects was realized based on Lippmann's method by Chutjian and Collier[32], which marked the combination of light-field display technology and computer technology. In 1971, Okoshi proposed an optimum design method for a lens sheet that could be applied in integral photography and projection-type 3D display[33], which provided a theoretical basis for the design of the lens array in the integral-imaging-type light-field display. In 1977, a full-parallax display system based on integral imaging was proposed by Ueda and Nakayama[34]. There were 53 × 53 micro-lenses, each 1.09 mm square, in the display system, which provided a ±9.6 deg viewing angle. In the 1990s, with the enormous improvement of hardware performance, light-field display could be realized by ordinary personal computers[35]. With the increase in resolution and size of display devices, higher-quality color light-field display with a 50 in. size became possible[36]. Besides, multiple projectors were also introduced into light-field display, which brought a larger viewing angle and more viewpoints[37].

    Before the 2000s, although some progress had been made in light-field display, the speed of development was relatively slow. Since the 2000s, the development of light-field display has accelerated markedly. Different types of light-field display have fairly different characteristics, and thus each type has its own development direction.

    B. Developments of Layer-Based Light-Field Display

    The layer-based method is an emerging realization method of light-field display, proposed by Lanman[38] and Wetzstein[15]. In this method, the more layers of display devices are used, the better the 3D effect. In Wetzstein's research, a display system with five layers was employed. However, in practical situations, different light rays pass through the same position on some layers, so there is spatial multiplexing at some positions of each layer. Calculating the transmittance distribution on each layer therefore requires a huge amount of computation. Generally, an iterative algorithm is employed for this calculation in the layer-based method[17]. With the continuous improvement of graphics processing unit (GPU) performance, the transmittance distribution can also be solved by GPU-accelerated algorithms[39-41].
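    A toy version of such an iterative transmittance solver can be written for the two-layer case: with one ray per (u, p) pair, the emitted light field is the rank-1 product t1[u] * t2[p], and the layers can be fitted by alternating least-squares updates. This is a 1D caricature of the NMF-style tensor factorization in the cited work, with made-up sizes and data, not the authors' algorithm.

```python
import numpy as np

# Toy iterative solver for a two-layer attenuation display: fit layer
# transmittances t1, t2 so that outer(t1, t2) approximates a target
# light field L[u, p]. The target here is synthetic and exactly rank-1
# so the alternating updates converge quickly.

rng = np.random.default_rng(0)
target = np.outer(rng.uniform(0.2, 1.0, 8), rng.uniform(0.2, 1.0, 8))

t1 = np.full(8, 0.5)
t2 = np.full(8, 0.5)
for _ in range(100):
    # Alternating least-squares updates for the rank-1 factors.
    t1 = target @ t2 / (t2 @ t2)
    t2 = target.T @ t1 / (t1 @ t1)

# Rebalance the scale ambiguity so both layers are physically
# realizable transmittances in [0, 1].
s = np.sqrt(t1.max() / t2.max())
t1, t2 = t1 / s, t2 * s

err = np.abs(np.outer(t1, t2) - target).max()
```

    Real layered displays factor a full 4D light field across several layers (and time frames) under nonnegativity constraints, which is where the huge computation and the GPU acceleration mentioned above come in.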

    There is almost no convergence-accommodation conflict in the layer-based light-field display, and it can provide good accommodation effects for viewers[42]. Dizziness and fatigue can thus be effectively avoided. This feature has attracted wide attention from the field of near-eye display[43-45].

    Vector-fields light-field display is a display technology that imitates the luminous mode of real 3D objects. The core of this method is the directional backlight unit, which may be of the light-refraction, light-reflection, or light-diffraction type. The 3D film structure[46], prism-array structure[47], and lens-array structure[48] can be employed in the light-refraction-type directional backlight. Scattering pattern structures[49,50], elliptic mirror structures[51], and groove structures[52,53] are often applied in the light-reflection-type directional backlight. For the light-diffraction-type directional backlight, a system based on volume holographic optical elements (HOEs)[54] and a multi-directional backlight with 200 viewpoints and a viewing angle of 90 deg[55] have been reported. Commonly used directional backlight panels have grating structures on their surfaces. The length, width, direction, and spacing of the grating structures determine the direction and divergence angle of the emitted light[56]. Recently, some researchers have begun to study directional backlight structures based on metasurfaces[57], which provide a new solution for light-field display.

    C. Developments of Projector-Based Light-Field Display

    The TDM light-field display dates back more than 30 years[20]. The core of this method is the directional scanner and the high-frame-rate projector. In order to accurately reconstruct the spatial light field of 3D objects, the directional scanner must control the light direction precisely. The projector projects the light-field distributions, rather than images, of the 3D objects in all directions. Display systems based on TDM are usually realized by rotation; therefore, a panoramic display can easily be obtained.

    TDM light-field display can achieve a large display size with a 360 deg viewing angle[58]. It can provide more than 200 viewpoints[58] and a refresh rate of 30 Hz[59]. 3D color objects can be reconstructed with great realism[60,61]. With the help of interactive devices, viewers can interact with the display contents provided by the TDM light-field display system[62]. Due to the anisotropy of the light-field distribution projected by the high-frame-rate projector, viewers in different directions can see different information[63]. These features make the TDM light-field display system suitable for 3D video conferences. It should be noted that such a conference system can only reconstruct the parallax information in the direction of rotation. In order to reconstruct the parallax in the direction perpendicular to the rotation plane, eye-tracking devices and special algorithms need to be adopted[64].

    The projector-array light-field display has been studied by many researchers because it can display complex 3D color images without mechanically moving parts. In order to enlarge the number of narrowband sub-images to form a continuous 3D display effect, the number of projectors in such systems has increased rapidly[65,66]. However, the display consistency may decrease as the number of projectors increases. Many performance evaluation parameters and optimization methods have been proposed to address this issue[67,68].

    Nowadays, excellent horizontal parallax can be provided by light-field displays based on the projector-array method. However, the vertical parallax is fairly limited due to the properties of the directional diffuser. Eye-tracking technology has been employed in the projector-array light-field display to render the display contents for the corresponding viewpoints in real time according to the positions of the eyes[69]. However, when two or more viewers of different heights appear at the same horizontal position, only one of them can see the correct information.

    D. Developments of Integral Imaging Light-Field Display

    The integral imaging method is the oldest and most studied method in light-field display. For the integral imaging light-field display, the most important issue is resolution reduction. A high-quality 3D image can be reconstructed directly by extremely high-resolution projectors[70]. Meanwhile, it can also be obtained from a series of low-resolution perspective projections of a 3D scene[71]. Optimal design of the micro-lens-array parameters can play a useful role in improving resolution[72]. Besides, resolution improvements can be realized by special optical elements such as an irregular lens-array structure[73] and an electrically movable pinhole array[74]. In addition to resolution improvement, expansion of the viewing angle is another major concern. The viewing angle can be expanded by using a curved lens array instead of a flat one[75]. Elements that play the same role as a curved lens array, such as HOEs[76] and a variable LC prism array[77], are other options to expand the viewing angle. The employment of a two-layer lenticular lens array instead of a one-layer lens array can bring a larger viewing angle[78,79]. Through head-tracking devices, the positions of observers can be captured, and the display zone can be dynamically changed, which provides a larger viewing angle for observers[80]. The small depth range is the third major concern faced by the integral imaging light-field display. Electrically controlled polymer-dispersed LCs can be employed to enhance the depth range without mechanical movement[81]. Besides, specially designed lens arrays, such as the bifocal LC lens array[82], the lens array based on a sub-lens structure[83], and focus-tunable lenses[84], can also contribute to enlarging the depth range of the integral imaging light-field display.

    In recent years, the desktop-based integral imaging display has attracted great attention. Its display contents are suspended above the integral imaging display devices. It can be applied in numerous areas, including health care, education, the military, and intelligent manufacturing. However, for the desktop-based integral imaging display, specially designed lens-array structures are often employed to expand the viewing angle[85], which brings many challenges for design and manufacturing.


    With the continuous development of light-field display, viewers can get more realistic and immersive 3D visual experiences through different kinds of light-field display devices. Different realization methods of light-field display are analyzed in this paper. The layer-based method has a large depth of field with little convergence-accommodation conflict. Although it has a relatively small viewing angle, this does not affect its use in near-eye augmented reality display. The vector-fields method is a new approach to layer-based light-field display. With the continuous progress of manufacturing technology, the vector-fields method is expected to achieve multi-viewer naked-eye display with a large viewing angle and a low calculation amount. Projector-based light-field display can be divided into the TDM and projector-array methods. The TDM light-field display can achieve a large display size with a 360 deg viewing angle, but the moving elements in the TDM system make it large in size; it is generally suitable for 3D conference systems. The projector-array method can display complex 3D color images without mechanically moving parts and can be applied in large-scale 3D display systems. The integral imaging method is the most studied method in light-field display and has been used in many fields. Improvement directions for the integral imaging method include resolution improvement, viewing angle expansion, and depth range enlargement.


    [1] T. Ni, G. S. Schmidt, O. G. Staadt, M. A. Livingston, R. Ball, R. May. IEEE Virtual Reality Conference (VR 2006), 223(2006).

    [2] T. North, M. Wagner, S. Bourquin, L. Kilcher. J. Disp. Technol., 12, 982(2016).

    [3] Y. Wang, W. Liu, X. Meng, H. Fu, D. Zhang, Y. Kang, R. Feng, Z. Wei, X. Zhu, G. Jiang. Appl. Opt., 55, 6969(2016).

    [4] B. Wick, D. Currie. Optometry Vision Sci., 68, 226(1991).

    [5] K. Kumagai, I. Yamaguchi, Y. Hayasaki. Opt. Lett., 43, 3341(2018).

    [6] Y. Maeda, D. Miyazaki, T. Mukai, S. Maekawa. Opt. Express, 21, 27074(2013).

    [7] J.-S. Chen, D. P. Chu. Opt. Express, 23, 18143(2015).

    [8] Q. Gao, J. Liu, X. Duan, T. Zhao, X. Li, P. Liu. Opt. Express, 25, 8412(2017).

    [9] A. Maimone, A. Georgious, J. Kollin. ACM T. Graph., 36, 11(2017).

    [10] B. Javidi, H. Hua. Opt. Express, 22, 13484(2014).

    [11] S. Lee, C. Jang, S. Moon, J. Cho, B. Lee. ACM T. Graph., 35, 60(2016).

    [12] C. Jang, K. Bang, S. Moon, J. Kim, S. Lee, B. Lee. ACM T. Graph., 36, 190(2017).

    [13] E. H. Adelson, J. R. Bergen. Computational Models of Visual Processing, 3(1991).

    [14] A. Wenger, A. Gardner, C. Tchou, J. Unger, T. Hawkins, P. Debevec. ACM T. Graph., 24, 756(2005).

    [15] G. Wetzstein, D. Lanman, W. Heidrich, R. Raskar. ACM T. Graph., 30, 95(2011).

    [16] D. Teng, L. Liu. SID Symposium Digest of Technical Papers, 48, 1607(2017).

    [17] D. Lanman, G. Wetzstein, M. Hirsch, W. Heidrich, R. Raskar. ACM T. Graph., 30, 186(2011).

    [18] H. S. ElGhoroury, C.-L. Chuang, Z. Y. Alpaslan. SID Symposium Digest of Technical Papers, 371(2015).

    [19] Z. Y. Alpaslan, H. S. El-Ghoroury. Proc. SPIE, 9391, 93910E(2015).

    [20] S. C. Gustafson, G. R. Little, T. P. Staub, J. S. Loomis, J. M. Brown, N. F. O’Brien. Proc. SPIE, 1970, 149(1993).

    [21] Q. Zhong, Y. Peng, H. Li, X. Liu. J. Disp. Technol., 12, 1745(2016).

    [22] Q. Zhong, B. Chen, H. Li, X. Liu, B. Wang, H. Xu. Chin. Opt. Lett., 12, 060009(2014).

    [23] B. Lee, J.-H. Park, S.-W. Min. Digital Holography and Three-Dimensional Display, 333(2006).

    [24] W.-X. Zhao, Q.-H. Wang, A.-H. Wang, D.-H. Li. Opt. Lett., 35, 4127(2010).

    [25] X. Yu, X. Sang, X. Gao, Z. Chen, D. Chen, W. Duan, B. Yan, C. Yu, D. Xu. Opt. Express, 23, 25950(2015).

    [26] M. Yamaguchi. J. Opt. Soc. Am. A, 33, 2348(2016).

    [27] G. Lippmann. Comptes-Rendus Academie des Sciences, 146, 446(1908).

    [28] A. Gershun. Stud. Appl. Math., 18, 51(1939).

    [29] P. Moon, D. E. Spencer. The Photic Field(1981).

    [30] B. Javidi, F. Okano. Three-Dimensional Television, Video, and Display Technology(2002).

    [31] H. M. Ozaktas, L. Onural. Three-Dimensional Television: Capture, Transmission, Display(2008).

    [32] A. Chutjian, R. J. Collier. Appl. Opt., 7, 99(1968).

    [33] T. Okoshi. Appl. Opt., 10, 2284(1971).

    [34] M. Ueda, H. Nakayama. JPN. J. Appl. Phys., 16, 1269(1977).

    [35] J. Eichenlaub. Proc. SPIE, 1256, 156(1990).

    [36] H. Isono, M. Yasuda, D. Takemori, H. Kanayama, C. Yamada, K. Chiba. Proc. SPIE, 1669, 176(1992).

    [37] Y. Kajiki. Proceeding of the Third International Display Workshops, 489(1996).

    [38] D. Lanman, M. W. Hirsch, Y. Kim, R. Raskar. ACM T. Graph., 29, 163(2010).

    [39] G. Wetzstein, D. Lanman, M. W. Hirsch, R. Raskar. ACM T. Graph., 31, 80(2012).

    [40] X. Cao, Z. Geng, M. Zhang, X. Zhang. Proc. SPIE, 9391, 93910F(2015).

    [41] X. Cao, Z. Geng, T. Li, M. Zhang, Z. Zhang. Opt. Express, 23, 34007(2015).

    [42] A. Maimone, G. Wetzstein, M. W. Hirsch, D. Lanman, R. Raskar, H. Fuchs. ACM T. Graph., 32, 153(2013).

    [43] A. Maimone, H. Fuchs. IEEE International Symposium on Mixed and Augmented Reality, 29(2013).

    [44] F.-C. Huang, K. Chen, G. Wetzstein. ACM T. Graph., 34, 60(2015).

    [45] K. Guttag(2018).

    [46] J. C. Schultz, M. J. Sykora.

    [47] C.-W. Wei, C.-Y. Hsu, Y.-P. Huang. SID Symposium Digest of Technical Papers, 863(2010).

    [48] H. Kwon, H. J. Choi. Proc. SPIE, 8288, 82881Y(2012).

    [49] M. Minami, K. Yokomizo, Y. Shimpuku. SID Symposium Digest of Technical Papers, 468(2011).

    [50] M. Minami.

    [51] A. Hayashi, T. Kometani, A. Sakai, H. Ito. J. Soc. Inf. Display, 18, 507(2012).

    [52] K. Käläntär. J. Soc. Inf. Display, 20, 133(2012).

    [53] C.-F. Chen, S.-H. Kuo. J. Disp. Technol., 10, 1030(2014).

    [54] Y. S. Hwang, F.-K. Bruder, T. Fäcke, S.-C. Kim, G. Walze, R. Hagen, E.-S. Kim. Opt. Express, 22, 9820(2014).

    [55] D. Fattal, Z. Peng, T. Tran, S. Vo, M. Fiorentino, J. Brug, R. G. Beausoleil. Nature, 495, 348(2013).

    [56] K.-W. Chien, H.-P. D. Shieh. Appl. Opt., 45, 3106(2006).

    [57] D. Lin, M. Melli, E. Poliakov, P. Hilaire, S. Dhuey, C. Peroz, S. Cabrini, M. Brongersma, M. Klug. Sci. Rep., 7, 2286(2017).

    [58] A. Jones, I. McDowall, H. Yamada, M. Bolas, P. Debevec. ACM T. Graph., 26, 40(2007).

    [59] X. Xia, X. Liu, H. Li, Z. Zheng, H. Wang, Y. Peng, W. Shen. Opt. Express, 21, 11237(2013).

    [60] X. Xia, Z. Zheng, X. Liu, H. Li, C. Yan. Appl. Opt., 49, 4915(2010).

    [61] W. Song, Q. Zhu, Y. Liu, Y. Wang. Appl. Opt., 54, 4154(2015).

    [62] C. Su, Q. Zhong, L. Xu, H. Li, X. Liu. SID Symposium Digest of Technical Papers, 46, 346(2015).

    [63] J. Jurik, A. Jones, M. Bolas, P. Debevec. IEEE CVPR 2011 Workshops, 15(2011).

    [64] A. Jones, M. Lang, G. Fyffe, X. Yu, J. Busch, I. McDowall, M. Bolas, P. Debevec. ACM T. Graph., 28, 64(2009).

    [65] T. Agocs, T. Balogh, T. Forgacs, F. Bettio, E. Gobbetti, G. Zanetti, E. Bouvier. IEEE Virtual Reality Conference, 311(2006).

    [66] T. Balogh. Proc. SPIE, 6055, 60550U(2006).

    [67] J. A. I. Guitián, E. Gobbetti, F. Marton. Visual Comput., 26, 1037(2010).

    [68] J.-H. Lee, J. Park, D. Nam, S. Y. Choi, D. Park, C. Y. Kim. Opt. Express, 21, 26820(2013).

    [69] A. Jones, K. Nagano, J. Liu, J. Busch, X. Yu, M. Bolas, P. Debevec. J. Electron. Imaging, 23, 011005(2014).

    [70] M. Kawakita, H. Sasaki, J. Arai, F. Okano, K. Suehiro, Y. Haino, M. Yoshimura, M. Sato. Proc. SPIE, 6803, 680316(2008).

    [71] C. Yang, J. Wang, A. Stern, S. Gao, V. Gurev, B. Javidi. J. Disp. Technol., 11, 947(2015).

    [72] C. Wu, Q. Wang, H. Wang, J. Lan. J. Opt. Soc. Am. A, 30, 2328(2013).

    [73] Z. Kavehvash, K. Mehrany, S. Bagheri. Appl. Opt., 51, 6031(2012).

    [74] Y. Kim, J. Kim, J.-M. Kang, J.-H. Jung, H. Choi, B. Lee. Opt. Express, 15, 18253(2007).

    [75] D.-H. Shin, B. Lee, E.-S. Kim. Appl. Opt., 45, 7375(2006).

    [76] S. Lee, C. Jang, J. Cho, J. Yeom, J. Jeong, B. Lee. Appl. Opt., 55, A95(2016).

    [77] J. Kim, S.-W. Min, B. Lee. Opt. Express, 15, 13023(2007).

    [78] W.-X. Zhao, Q.-H. Wang, A.-H. Wang, D.-H. Li. Opt. Lett., 35, 4127(2010).

    [79] X. Yu, X. Sang, D. Chen, P. Wang, X. Gao, T. Zhao, B. Yan, C. Yu, D. Xu, W. Dou. Chin. Opt. Lett., 12, 121001(2014).

    [80] X. Shen, M. M. Corral, B. Javidi. J. Disp. Technol., 12, 542(2016).

    [81] J.-H. Park, H.-R. Kim, Y. Kim, J. Kim, J. Hong, S.-D. Lee, B. Lee. Opt. Lett., 29, 2734(2004).

    [82] X. Shen, Y.-J. Wang, H.-S. Chen, X. Xiao, Y.-H. Lin, B. Javidi. Opt. Lett., 40, 538(2015).

    [83] J.-Y. Jang, M. Cho. J. Disp. Technol., 12, 610(2016).

    [84] X. Shen, B. Javidi. Appl. Opt., 57, B184(2018).

    [85] X. Gao, X. Sang, X. Yu, W. Zhang, B. Yan, C. Yu. Chin. Opt. Lett., 15, 121201(2017).
