Integral imaging near-eye 3D display using a nanoimprint metalens array

Abstract

Integral imaging (II) display, one of the most critical true-3D display technologies, has received increasing research attention recently. Notably, an achromatic metalens array has enabled a broadband metalens-array-based II (meta-II) display. However, previous micro-scale metalens arrays were incompatible with commercial micro-displays; moreover, rendering the elemental image array (EIA) has always been slow. These two obstacles, in device and algorithm, have prevented meta-II from being used in practical video-rate near-eye displays (NEDs). This research demonstrates a meta-II NED combining a commercial micro-display and a metalens array. The metalens array is fabricated by large-area nanoimprint technology, and a novel real-time rendering algorithm is proposed to generate the EIA. Together, these hardware and software advances remove the bottlenecks of video-rate meta-II displays. We also build a see-through prototype based on our meta-II NED, demonstrating its feasibility for augmented reality. Our work explores the potential of video-rate meta-II displays, which we expect to be valuable for future virtual and augmented reality.

1 Introduction

True-3D display technologies [1], including integral imaging (II) display [2,3,4], holographic display [5,6,7], volumetric 3D display [8], super multi-view display [9,10,11,12], etc., can reproduce the three-dimensional visual experience that humans have of real scenes. In particular, the II display is considered one of the most promising true-3D display technologies owing to its compact volume, full parallax, quasi-continuous viewpoints, and convenient full-color display. Unlike current 3D display technologies based on binocular parallax [13,14,15], the II display does not suffer from the vergence-accommodation conflict (VAC), thus greatly alleviating visual fatigue. In the past few years, numerous efforts have been devoted to improving the performance of both glasses-free II displays [16,17,18,19,20,21,22] and II near-eye displays (NEDs) [23,24,25,26,27,28,29,30]. While most II displays currently adopt a microlens array (MLA) to modulate light, an II display using a novel nano-component, the metalens array, was proposed [22] as the "meta-II display." The meta-II display can exploit the metalens array's flat form factor and tremendous flexibility in light field manipulation, opening opportunities to improve the II display's resolution, field of view, depth of field, etc., which remain challenging for a regular MLA-based II display. Despite this promising perspective, video-rate meta-II displays for practical use have rarely been reported, for two reasons. First, only micron-scale metalens arrays were previously available, owing to the difficulty of fabricating large-area metalens arrays; such arrays could not cover a millimeter-scale commercial micro-display, so a fine mask had to be used as a static picture generation unit [22]. Second, real-time image rendering has always been challenging for II displays because of the high computational complexity of calculating the elemental image array (EIA).
Therefore, the challenges in nano-devices and algorithms both hinder practical meta-II displays.

We notice that deep-ultraviolet lithography [31,32,33,34] has been used in recent years to realize low-cost, large-diameter, and large-area metalenses, and nanoimprint technologies [35,36,37,38,39] have been proposed to rapidly replicate metalens samples. Such advances in metalens mass fabrication have significantly boosted interest in metalens-based virtual reality (VR) and augmented reality (AR) displays [40,41,42,43,44,45] and potentially enable large-area meta-II. On the algorithm side, almost all existing EIA generation methods are viewpoint-based: they repeatedly perform geometric projections, which induces high computational complexity. Although previous studies have proposed strategies such as parallel computing [46,47,48] and sparse viewpoints [49,50,51] to accelerate rendering, they either elevate hardware complexity or sacrifice rendering accuracy. In other words, current acceleration strategies face a tradeoff between computational complexity, hardware complexity, and rendering accuracy. In particular, advanced computing hardware is unacceptable in wearable near-eye devices, considering power consumption and portability. Therefore, in addition to large-area metalens devices, a new rendering method that breaks this tradeoff is needed to achieve the desired video-rate meta-II display.

Here, we demonstrate a novel meta-II NED that combines a commercial micro-display with a large-area metalens array, as shown in Fig. 1a. Our metalens array is designed with a high-refractive-index nanoimprint glue and experimentally fabricated by large-area nanoimprint technology. Next, a new rendering method is proposed to rapidly generate the EIA by exploiting the invariant voxel-pixel mapping in an II display. True-3D display is then verified through monocular focus cues and motion parallax in both simulation and experiment. An average frame rate as high as 67 FPS is achieved, supporting real-time rendering. Moreover, based on the meta-II display module, we build a see-through system that merges 3D images with surrounding objects, showing the broader potential of the meta-II display for AR.

Fig. 1

Diagrammatic drawing of meta-II NED. a The 3D AR effect of the meta-II NED with the major components of the metalens array and the micro-display. The light from the micro-display enters the human eye through the metalens array and a beam splitter’s reflection, while the ambient light can also be seen through the beam splitter. The virtual 3D images (number “3” and letter “D”) are reconstructed to coincide with the chess pieces (“Rook” and “Pawn”), respectively. b The photograph of the assembled meta-II micro-display panel. The micro-display and the metalens array are well aligned and assembled on both sides of the 3D-printed holder

2 Results

2.1 Optical architecture of the meta-II NED

Figure 1a illustrates the optical architecture of our meta-II NED. Three main components are shown in the top right corner: a micro-display, a 3D-printed holder, and a metalens array. The micro-display is crucial for providing high-resolution pictures. Here, we adopt a 0.39-inch Si-OLED micro-display (BOE Technology Group Co., Ltd., B039FH8A0) with a high pixel density of 5644 PPI (i.e., a pixel pitch of 4.6 μm). The metalens array design and EIA rendering algorithm described below comply with the micro-display's specifications. To assemble the micro-display and the metalens array, we specially design a 3D-printed holder: the two components are aligned and mounted on its opposite sides to form an easy-to-use NED. To implement a see-through system, we use a beam splitter to merge 3D virtual images with the real scene, as in the AR system of Fig. 1a. Figure 1b shows our meta-II NED module, in which the metalens array is aligned with the micro-display. The entire NED module in Fig. 1b weighs only 7.23 g for a seamless user experience.

2.2 Nanoimprint metalens array

For fabrication, nanoimprint technology is used to develop our metalens array. Compared with electron beam lithography, nanoimprint technology can quickly replicate many metalens array samples, especially large-area ones. Since the refractive index of nanoimprint adhesive is generally below 2.0, tall nanopillars are needed to cover the 2π phase interval, leading to a large depth-to-diameter ratio that increases the difficulty of stamping, as discussed in Additional file 1: Sect. S1. Here, balancing the difficulty of nanoimprint fabrication against the required phase coverage, we select an imprinting adhesive with a refractive index of 1.9 as the metalens material and set the nanopillar height to 500 nm. After design optimization, a lattice constant of 416 nm is selected with a rectangular lattice arrangement of the nanopillars. In the nanoimprint process, a residual adhesive layer inevitably remains on the substrate; however, as discussed in Additional file 1: Sect. S1, its thickness has little influence on the overall phase, so it is not considered here. As a demonstration, we choose a metalens array size of 1840 μm × 1840 μm with 4 × 4 metalenses; that is, each metalens has an aperture of 460 μm × 460 μm. Since the micro-display's pixel pitch is 4.6 μm, a single metalens corresponds to 100 × 100 pixels on the micro-display. Hence, the number of viewpoints of our meta-II NED is 100 × 100, and the effective pixel resolution of the micro-display is 400 × 400. Thanks to the rapid-replication advantage of nanoimprint technology, the four identical metalens array samples shown in Fig. 2a were fabricated. The detailed nanoimprint fabrication processes are described in Additional file 1: Sects. S2 and S3.
Figure 2b shows a microscopic image of the metalens array, in which no obvious defects can be seen and the entire sample is intact. Figure 2c, d show top-view and side-view electron microscope images of a local region of the metalens array. In summary, high-quality nanoimprint metalens arrays have been achieved.
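The array/display geometry above can be cross-checked with a short script (an illustrative sketch; all dimensions are taken from the text):

```python
# Cross-check of the metalens-array / micro-display geometry quoted above.
# All values come from the text; variable names are illustrative.

pixel_pitch_um = 4.6          # micro-display pixel pitch
lens_pitch_um = 460.0         # single metalens aperture (square side)
array_lenses = 4              # 4 x 4 metalens array

# Pixels covered by one metalens along one axis (also the viewpoint count).
pixels_per_lens = round(lens_pitch_um / pixel_pitch_um)   # 100

# Overall array side length and effective display resolution per axis.
array_size_um = array_lenses * lens_pitch_um              # 1840
effective_res = array_lenses * pixels_per_lens            # 400

print(pixels_per_lens, array_size_um, effective_res)
```

This confirms the quoted 100 × 100 viewpoints and 400 × 400 effective pixels.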

Fig. 2

The nanoimprint metalens arrays and the measurement results. a Optical photo of our nanoimprint metalens arrays. b The microscope photo. c Top-view SEM image of a portion of the metalens array. d Side-view SEM image at high magnification. e Normalized measured intensity distributions in the y-z plane of all 16 metalenses in the array. f The normalized measured focal-plane intensity distribution of a single metalens. g The x-direction cross section of the measured intensity profile in (f). The solid green curve denotes the Airy fit of the measured data (black squares). The green text gives the full width at half-maximum (FWHM) of the fit. h Calculated MTF (solid green curve) of a single metalens. The dashed line represents the corresponding diffraction limit

After fabricating the metalens array samples, optical field scanning experiments were carried out. The experimental optical path is shown in Additional file 1: Fig. S4, and the corresponding results are collected in Fig. 2e–h. Figure 2e shows the normalized light intensity distributions of all 16 metalenses in the array in the y-z plane at 547 nm. All metalenses converge the light onto approximately the same focal plane, and each has a single focal point with a certain depth of focus (DOF). The average focal length of the metalens array is 5.8 mm, and the average DOF is 392 μm at 547 nm. Furthermore, the intensity distributions of all metalenses are similar in the y-z plane, indicating good uniformity across the array. Hence, the optical field of a single metalens can represent the characteristics of the entire array. Figure 2f–h respectively show the focal-plane intensity, the cross profile along the x direction, and the corresponding modulation transfer function (MTF) of a single metalens at 547 nm. In Fig. 2g, the black points represent the experimental data, and the green curve is the Airy fit, from which the focal spot's full width at half maximum (FWHM) is 6.6 μm. In the MTF curves of Fig. 2h, the dashed line is the diffraction-limited MTF and the solid green line is the calculated MTF. Since the aperture of each metalens is square, the diagonal length is used in place of the side length when calculating the diffraction limit. As seen, the focusing performance of the metalens is nearly diffraction-limited.
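As a rough consistency check of the near-diffraction-limited claim, the expected spot size can be estimated (a sketch under the assumption that the circular-aperture Airy relation FWHM ≈ 1.03·λ·f/D applies, with the square aperture's diagonal used as D, as described above; this is not the authors' procedure):

```python
import math

# Rough diffraction-limit estimate for a single metalens (illustrative only;
# assumes the circular-aperture Airy-FWHM relation 1.03 * lambda * f / D).
wavelength_um = 0.547
focal_um = 5800.0                       # average focal length, 5.8 mm
side_um = 460.0                         # square aperture side
diagonal_um = side_um * math.sqrt(2)    # diagonal used as effective aperture

fwhm_limit_um = 1.03 * wavelength_um * focal_um / diagonal_um
print(f"diffraction-limited FWHM ~ {fwhm_limit_um:.1f} um (measured: 6.6 um)")
```

The estimate comes out at roughly 5 μm, the same order as the measured 6.6 μm spot.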

2.3 Real-time EIA rendering method

After obtaining the large-area metalens array, the conventional, slow EIA rendering method becomes the pivotal factor impeding the desired video-rate meta-II display. Conventional rendering imitates II photography [2]: an EIA is generated by virtually capturing elemental images with a camera array, which consumes much time projecting all viewpoints to elemental images. Here, we propose a new rendering method that achieves real-time performance without extra hardware or any loss of accuracy. As shown in the middle of Fig. 3a, a voxel in an II display is generated by optically integrating homogeneous pixels, which are mapped to the voxel through raytracing. Note that the voxels on a depth plane are invariantly determined by the system parameters (lens pitch, object distance, etc.) and statically mapped to pixels. Hence, we can exploit this static mapping before EIA rendering: acquire the voxels on specified depth planes and save the voxel-pixel mapping as a look-up table (LUT), as shown on the left of Fig. 3a. Each cell in the LUT records the coordinates of all homogeneous pixels that form one voxel. In this manner, only look-up operations are executed during EIA rendering, bringing ultra-fast EIA generation. The method proceeds as follows.

Fig. 3

Sketch map of the real-time EIA rendering method and the verification of true-3D display. a The proposed EIA rendering method mainly includes acquiring the voxel-pixel mapping, LUT construction, image resampling, and look-up operations. b–d Simulated 3D images with motion parallax and focus cues in the meta-II system. e–g The corresponding experimental results. The images at different view angles show evident differences, demonstrating effective motion parallax

The preprocessing contains two steps:

(a) Voxel-pixel mapping acquisition. Use a lens model (or the pinhole model) to trace the chief rays of all voxels for each metalens, then obtain the voxel-pixel mapping with homogeneous pixels, as shown in the middle of Fig. 3a.

(b) LUT construction. Store all homogeneous pixels in a LUT. A 3D scene is formed by multiple depth planes, whose number determines the LUT size. Usually, five to ten depth planes are ample, considering the depth resolution of human eyes [52]. Each LUT is then only several megabytes in size, a modest memory demand entirely affordable for modern electronic devices.
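The two preprocessing steps can be sketched in a few lines of Python (a minimal illustration using the pinhole model; the voxel-plane extent, data layout, and all names are assumptions for illustration, not the authors' code):

```python
from collections import defaultdict

# Preprocessing sketch: trace the chief ray of every voxel through each
# pinhole (lens center) down to the display, and store the hit pixels in a
# look-up table. Parameters follow the paper's geometry; the voxel plane is
# assumed to span the same lateral extent as the lens array.

PITCH = 0.46     # lens pitch, mm
GAP = 5.0        # metalens-array to micro-display gap, mm
PIX = 0.0046     # micro-display pixel pitch, mm
N_LENS = 4       # 4 x 4 metalens array
N_PIX = 100      # pixels behind one lens, per axis

def build_lut(depth_mm, n_vox=400):
    """Map each voxel on one depth plane to its homogeneous pixels."""
    lut = defaultdict(list)
    vox_pitch = N_LENS * PITCH / n_vox          # voxel size on the plane
    half = N_LENS * PITCH / 2
    for vy in range(n_vox):
        for vx in range(n_vox):
            # voxel center in array-centered coordinates (mm)
            x = (vx + 0.5) * vox_pitch - half
            y = (vy + 0.5) * vox_pitch - half
            for ly in range(N_LENS):
                for lx in range(N_LENS):
                    cx = (lx + 0.5) * PITCH - half
                    cy = (ly + 0.5) * PITCH - half
                    # chief ray: voxel -> pinhole, extended to the display
                    px = cx + (cx - x) * GAP / depth_mm
                    py = cy + (cy - y) * GAP / depth_mm
                    u = int((px - cx) / PIX) + N_PIX // 2 + lx * N_PIX
                    v = int((py - cy) / PIX) + N_PIX // 2 + ly * N_PIX
                    if 0 <= u < N_LENS * N_PIX and 0 <= v < N_LENS * N_PIX:
                        lut[(vx, vy)].append((u, v))
    return lut
```

A table such as `build_lut(36.0)` would be computed once offline and serialized; at display time only the stored coordinates are read.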

Second, with the LUT in place, our EIA rendering includes two procedures:

(c) Image resampling. Resample and rasterize an input 3D scene into voxels determined by the system parameters, as shown on the right of Fig. 3a. Black voxels can be ignored, which makes the method especially appealing for black-background content (e.g., AR systems). The resampling is performed only once, making it a negligible overhead in the whole rendering.

(d) Look-up operation. Assign the rasterized data of each voxel to its corresponding homogeneous pixels according to the LUT, and output the final EIA for the II display, as shown at the top left of Fig. 3a. Because the geometric projections of conventional viewpoint-based methods are replaced with ultra-fast look-up operations, rendering is accelerated by several orders of magnitude.
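Steps (c) and (d) then reduce to pure indexing. A minimal sketch, assuming a precomputed `lut` dictionary that maps voxel coordinates to lists of homogeneous-pixel coordinates (an illustrative data layout and naming, not the authors' code):

```python
import numpy as np

def render_eia(voxel_img, lut, eia_shape=(400, 400)):
    """Steps (c)+(d): take a rasterized depth-plane image (voxel values),
    then assign each voxel's value to its homogeneous pixels via the LUT.
    No geometric projection is performed at render time."""
    eia = np.zeros(eia_shape, dtype=voxel_img.dtype)
    for (vx, vy), pixels in lut.items():
        val = voxel_img[vy, vx]
        if val == 0:      # black voxels are skipped (cheap for AR content)
            continue
        for (u, v) in pixels:
            eia[v, u] = val
    return eia
```

Per frame, the cost is one memory write per (voxel, homogeneous pixel) pair, which is what permits video-rate generation on a CPU.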

Using the meta-II system with the 4 × 4 metalens array, we continuously executed the EIA rendering for 30 pictures; the environment and rendering performance are as follows. The results show that our method's average frame rate exceeds the 60 FPS required for video-rate performance. More importantly, we use only an entry-level personal computer, paying no cost in more complicated hardware.


Environment: i7-10700 CPU with no standalone GPU.
Platform: MATLAB R2021a with core code executed in C++.
Average runtime over 30 pictures: 15 ms.
Frame rate: 67 FPS.

2.4 Verification of true-3D display

We operated the meta-II display in the real mode (generating real images) to verify motion parallax and monocular focus cues, which are essential characteristics of true-3D displays compared with conventional binocular-parallax-based 3D displays. Consider a typical scene: the number "3" is located on the central depth plane (36 mm in this system), and the letter "D" is 10 mm behind it, as shown in Additional file 1: Fig. S7a. The EIA for this scene is calculated with the real-time generation method above and then input to the micro-display in both simulation (LightTools) and experiment, where the distance between the metalens array and the micro-display is 5 mm.
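The stated 36 mm central depth plane is consistent with a simple thin-lens estimate from the measured 5.8 mm focal length and the 5 mm lens-display gap (an illustrative consistency check, not the authors' derivation; only the magnitude of the image distance is considered, sign conventions aside):

```python
# Thin-lens consistency sketch for the central depth plane (illustrative).
f_mm = 5.8   # measured average focal length of the metalens array
g_mm = 5.0   # metalens-array to micro-display gap in this experiment

# 1/f = 1/g + 1/v; only the magnitude of v is of interest here.
v_mm = 1.0 / (1.0 / f_mm - 1.0 / g_mm)
print(f"central depth plane ~ {abs(v_mm):.1f} mm from the array")
```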

First, Fig. 3b–d show simulated images at viewing angles of −1°, 0°, and 1°, respectively, with the receiver's accommodation fixed at the depth plane of the number "3". Figure 3e–g show the corresponding experimental results, which are consistent with the simulation. When the viewing angle is 0°, as Fig. 3c, f show, the number "3" is in the middle of "D." After rotating the viewing angle to −1° in Fig. 3b, e, "3" moves close to the edge of "D"; in contrast, it approaches the arc of "D" at a viewing angle of 1°, as Fig. 3d, g show. The evident change in the relative positions of objects at different depths demonstrates effective motion parallax. Next, the monocular focus cue is considered. As the receiver in the simulation and the camera in the experiment both focus on the depth plane of "3," the number "3" is sharp while the letter "D" is relatively blurred. This result reveals that two optical depth planes are created, with the number "3" in focus and the letter "D" out of focus; in this manner, the monocular focus cue is also proved. More verification of the motion parallax and focus cue is provided in the Additional file 1, e.g., the experimental blue and red images at different viewing angles in Additional file 1: Fig. S8 and the corresponding videos with green, blue, and red images in Additional file 1: Movies S1–S3, respectively.

2.5 See-through AR prototype

Our nanoimprint-based large-area metalens array and the real-time EIA rendering method have enabled a practical meta-II display. This section adopts it as the engine to implement a see-through AR prototype with a true-3D feature. By reconstructing virtual 3D images far behind the micro-display, the AR prototype works in the virtual mode so that a beam splitter can merge the real world and the virtual image at adjustable depths for users, as shown in Additional file 1: Fig. S7b. Note that the EIA generation method is the same for the real and virtual modes by setting desired image depths. In this prototype, the distance between the metalens array and the micro-display is 5.74 mm for green images. The reconstructed depth planes of the number “3” and the letter “D” are 80 and 300 mm from the metalens array, respectively. We use a cellphone camera to imitate human eyes and capture images. The distance between the eye and the metalens array is about 20 mm. The light from the micro-display enters the camera through the metalens array and the beam splitter’s reflection, while the ambient light transmits through the beam splitter to enter the camera, as shown in Fig. 1a. Thus, the camera can capture merged virtual 3D images and the surroundings to verify the AR effect.

Figure 4 shows AR effects with different colors, captured experimentally in a dark ambience. In Fig. 4a, when the camera focuses on the chess piece "Rook" in the foreground, the number "3" and the "Rook" are both clear, showing that they are at the same depth. Meanwhile, the letter "D" is blurry, as expected, because it is reconstructed at a farther depth. When the camera focuses on the chess piece "Pawn" at the rear, as Fig. 4b shows, the letter "D" becomes sharp while the number "3" becomes unrecognizable; that is, the reconstructed depth of the letter "D" coincides with the chess piece "Pawn." Compared with conventional AR systems, e.g., those based on total-internal-reflection waveguides, our meta-II achieves true-3D through the monocular focus cue verified above, as shown in Fig. 4a, b. With this focus cue, the VAC can be alleviated by matching accommodation with binocular-parallax-induced convergence. In Fig. 4c–f, the reconstructed blue and red 3D images behave similarly, further verifying the AR capability of our meta-II. The related videos are provided in Additional file 1: Movies S4–S6.

Fig. 4

The AR prototype based on the meta-II display. Red, green, and blue images (real images) are presented on a dark background. a The image is captured by focusing on the green number "3" and the chess piece "Rook," with details enlarged in the red frame at the right. Note that the number "3" and the chess piece "Rook" are clear, while the letter "D" and the chess piece "Pawn" behind them are blurry. b The image is captured by focusing on the green letter "D" and the chess piece "Pawn." At this point, the letter "D" and the chess piece "Pawn" become clear, while the chess piece "Rook" and the number "3" in front become blurry. Similar results for (c, d) blue and (e, f) red images

Furthermore, we adopt ring and dice patterns to examine the resolution, as shown in Fig. 5. First, a similar focus cue is verified: when the camera focuses on the black chess piece in the middle, the rings and the dice are clear, while the real objects in the foreground and background are blurry. More importantly, sixteen bright/dark donut pairs can be distinguished in the rings in Fig. 5a besides the central bright spot. Each donut pair represents two voxels, so the rings contain 33 individual voxels. The outermost diameter of the rings is 14.2 mm, and the distance between the rings and the metalens array is 355 mm, giving a field of view of 2.29°. Thus, the angular resolution of our meta-II AR system is 33/2.29° = 14.4 PPD (pixels per degree), equivalent to the Oculus Quest. The clear image of the dice in Fig. 5b also demonstrates the ability to display complex patterns. In general, the critical parameters of our meta-II AR system are as follows: FOV, 2.29°; depth of field, 100–375 mm; angular resolution, 14.4 PPD.
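The angular-resolution figure can be reproduced from the quoted geometry (an illustrative sketch using only values given in the text):

```python
import math

# Angular-resolution estimate for the AR prototype (values from the text).
ring_diameter_mm = 14.2   # outermost ring diameter
distance_mm = 355.0       # distance from the rings to the metalens array
voxels_across = 33        # resolvable voxels across the ring pattern

fov_deg = 2 * math.degrees(math.atan(ring_diameter_mm / 2 / distance_mm))
ppd = voxels_across / fov_deg
print(f"FOV = {fov_deg:.2f} deg, angular resolution = {ppd:.1f} PPD")
```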

Fig. 5

The meta-II-based AR prototype used to estimate resolution and show a relatively complicated object. a The ring pattern has 16 visible donut pairs within the field of view of 2.29°; b a photograph of a dice

3 Discussion and conclusion

Combining a metalens array, a commercial micro-display, and a real-time EIA rendering method, a novel meta-II NED is achieved. High-precision large-area nanoimprint technology is used to fabricate the metalens array from a high-refractive-index nanoimprint glue. We also propose a new voxel-based EIA rendering method that supports real-time rendering, upgrading the conventional viewpoint-based method. As a result, true-3D display capability is verified. By merging 3D images with surrounding objects, we also implement an AR prototype with true-3D display. In addition to the large-F-number metalens array described above, we show the design of a small-F-number metalens array and its corresponding integral imaging simulation in Additional file 1: Sect. S6. Note that the design flexibility of metalens arrays is highly valuable for next-generation near-eye displays with respect to several long-standing issues of integral imaging. For example, an extended depth of field is vital for true-3D NEDs to present images from personal space to vista space, whereas a conventional microlens array offers a very limited depth of field. In contrast, a metalens array can easily be designed as a polarization-multiplexing element with different focal lengths for different polarization directions, allowing the depth of field to be extended. Another issue is the narrow FOV. Free-form surface optics, an effective aberration-correction scheme for increasing the FOV, is difficult to achieve with traditional microlens arrays. However, our meta-II provides a promising solution for further study: freeform phase profiles that precisely compensate for the field-dependent aberration can be recorded in a slim metalens array. More importantly, both the extended-depth-of-field meta-II and the FOV-expanded meta-II incur no extra cost in computational complexity or system volume compared with the meta-II proposed here.
Finally, we believe that the design flexibility of metalens arrays, the low-cost nanoimprint fabrication with its feasibility for mass production, and our real-time rendering method can promote video-rate meta-II NEDs for future VR and AR.

Availability of data and materials

All data needed to evaluate the conclusions in the paper are present in the paper and/or the Supplementary Materials. Additional data related to this paper may be requested from the authors.

References

  1. J. Geng, Three-dimensional display technologies. Adv. Opt. Photon. 5, 456–535 (2013)

  2. G. Lippmann, La photographie intégrale. C. R. Hebd. Seances Acad. Sci 146, 446–451 (1908)

  3. Q.-H. Wang, C.-C. Ji, L. Li, H. Deng, Dual-view integral imaging 3D display by using orthogonal polarizer array and polarization switcher. Opt. Express. 24, 9–16 (2016)

  4. X. Wang, H. Hua, Theoretical analysis for integral imaging performance based on microscanning of a microlens array. Opt. Lett. 33, 449–451 (2008)

  5. X. Ni, A.V. Kildishev, V.M. Shalaev, Metasurface holograms for visible light. Nat. Commun. 4, 2807 (2013)

  6. J. An et al., Slim-panel holographic video display. Nat. Commun. 11, 5568 (2020)

  7. Y.-L. Li et al., Tunable liquid crystal grating based holographic 3D display system with wide viewing angle and large size. Light: Sci. Appl. 11, 188 (2022)

  8. R. Hirayama, D. Martinez Plasencia, N. Masuda, S. Subramanian, A volumetric display for visual, tactile and audio presentation using acoustic trapping. Nature 575, 320–323 (2019)

  9. B. Liu et al., Time-multiplexed light field display with 120-degree wide viewing angle. Opt. Express. 27, 35728–35739 (2019)

  10. W. Wan et al., Holographic sampling display based on metagratings. iScience 23, 100773 (2020)

  11. J. Hua et al., Foveated glasses-free 3D display with ultrawide field of view via a large-scale 2D-metagrating complex. Light Sci. Appl. 10, 213 (2021)

  12. F. Zhou et al., Vector light field display based on an intertwined flat lens with large depth of focus. Optica 9, 288–294 (2022)

  13. F.L. Kooi, A. Toet, Visual comfort of binocular and 3D displays. Displays 25, 99–108 (2004)

  14. M. Lambooij, W. IJsselsteijn, M. Fortuin, I. Heynderickx, Visual discomfort and visual fatigue of stereoscopic displays: a review. J. Imaging Sci. Technol. 53, 30201–30201 (2009)

  15. H. Hiura, K. Komine, J. Arai, T. Mishina, Measurement of static convergence and accommodation responses to images of integral photography and binocular stereoscopy. Opt. Express. 25, 3454–3468 (2017)

  16. H. Choi, S.-W. Min, S. Jung, J.-H. Park, B. Lee, Multiple-viewing-zone integral imaging using a dynamic barrier array for three-dimensional displays. Opt. Express. 11, 927–932 (2003)

  17. N. Okaichi, M. Miura, J. Arai, M. Kawakita, T. Mishina, Integral 3D display using multiple LCD panels and multi-image combining optical system. Opt. Express. 25, 2805–2817 (2017)

  18. S. Yang et al., 162-inch 3D light field display based on aspheric lens array and holographic functional screen. Opt. Express. 26, 33013–33021 (2018)

  19. L. Yang et al., Viewing-angle and viewing-resolution enhanced integral imaging based on time-multiplexed lens stitching. Opt. Express. 27, 15679–15692 (2019)

  20. H. Watanabe, N. Okaichi, H. Sasaki, M. Kawakita, Pixel-density and viewing-angle enhanced integral 3D display with parallel projection of multiple UHD elemental images. Opt. Express 28, 24731–24746 (2020)

  21. Z.-F. Zhao, J. Liu, Z.-Q. Zhang, L.-F. Xu, Bionic-compound-eye structure for realizing a compact integral imaging 3D display in a cell phone with enhanced performance. Opt. Lett. 45, 1491–1494 (2020)

  22. Z.-B. Fan et al., A broadband achromatic metalens array for integral imaging in the visible. Light Sci. Appl. 8, 67 (2019)

  23. J. Zhang, X. Lan, C. Zhang, X. Liu, F. He, Switchable near-eye integral imaging display with difunctional metalens array. Optik. 204, 163852 (2020)

  24. D. Lanman, D. Luebke, Near-eye light field displays. ACM Trans. Graphics 32, 1–10 (2013)

  25. H. Huang, H. Hua, High-performance integral-imaging-based light field augmented reality display using freeform optics. Opt. Express. 26, 17578–17590 (2018)

  26. H. Hua, B. Javidi, A 3D integral imaging optical see-through head-mounted display. Opt. Express. 22, 13484–13491 (2014)

  27. H. Huang, H. Hua, An integral-imaging-based head-mounted light field display using a tunable lens and aperture array. J. Soc. Inform. Display 25, 200–207 (2017)

  28. X. Shen, B. Javidi, Large depth of focus dynamic micro integral imaging for optical see-through augmented reality display using a focus-tunable lens. Appl. Opt. 57, B184–B189 (2018)

  29. Z. Qin, P.-Y. Chou, J.-Y. Wu, C.-T. Huang, Y.-P. Huang, Resolution-enhanced light field displays by recombining subpixels across elemental images. Opt. Lett. 44, 2438–2441 (2019)

  30. Z. Qin et al., Revelation and addressing of accommodation shifts in microlens array-based 3D near-eye light field displays. Opt. Lett. 45, 228–231 (2020)

  31. J.-S. Park et al., All-glass, large metalens at visible wavelength using deep-ultraviolet projection lithography. Nano Lett. 19, 8673–8682 (2019)

  32. Q. Zhong et al., 1550nm-wavelength metalens demonstrated on 12-inch Si CMOS platform, in 2019 IEEE 16th International Conference on Group IV Photonics (GFP), pp. 1–2

  33. T. Hu et al., CMOS-compatible a-Si metalenses on a 12-inch glass wafer for fingerprint imaging. Nanophotonics 9, 823–830 (2020)

  34. L. Zhang et al., High-efficiency, 80 mm aperture metalens telescope. Nano Lett. 23, 51–57 (2023)

  35. G.-Y. Lee et al., Metasurface eyepiece for augmented reality. Nat. Commun. 9, 4562 (2018)

  36. G. Brière et al., An etching-free approach toward large-scale light-emitting metasurfaces. Adv. Opt. Mater. 7, 1801271 (2019)

  37. G. Yoon, K. Kim, D. Huh, H. Lee, J. Rho, Single-step manufacturing of hierarchical dielectric metalens in the visible. Nat. Commun. 11, 2268 (2020)

  38. H. Choi et al., Realization of high aspect ratio metalenses by facile nanoimprint lithography using water-soluble stamps. PhotoniX 4, 18 (2023)

  39. J. Kim et al., Scalable manufacturing of high-index atomic layer–polymer hybrid metasurfaces for metaphotonics in the visible. Nat. Mater. 22, 474–481 (2023)

  40. E. Bayati, A. Wolfram, S. Colburn, L. Huang, A. Majumdar, Design of achromatic augmented reality visors based on composite metasurfaces. Appl. Opt. 60, 844–850 (2021)

  41. Z. Li et al., Meta-optics achieves RGB-achromatic focusing for virtual reality. Sci. Adv. 7, eabe4458 (2021)

  42. D.K. Nikolov et al., Metaform optics: bridging nanophotonics and freeform optics. Sci. Adv. 7, eabe5112 (2021)

  43. C. Wang et al., Metalens eyepiece for 3D holographic near-eye display. Nanomaterials 11, 1920 (2021)

  44. Y. Li et al., Ultracompact multifunctional metalens visor for augmented reality displays. PhotoniX 3, 29 (2022)

  45. Z. Li et al., Inverse design enables large-scale high-performance meta-optics reshaping virtual reality. Nat. Commun. 13, 2409 (2022)

  46. K.-C. Kwon et al., High speed image space parallel processing for computer-generated integral imaging system. Opt. Express 20, 732–740 (2012)

  47. S. Xing et al., High-efficient computer-generated integral imaging based on the backward ray-tracing technique and optical reconstruction. Opt. Express 25, 330–338 (2017)

  48. Y. Guan et al., Backward ray tracing based high-speed visual simulation for light field display and experimental verification. Opt. Express 27, 29309–29318 (2019)

  49. H. Li, S. Wang, Y. Zhao, J. Wei, M. Piao, Large-scale elemental image array generation in integral imaging based on scale invariant feature transform and discrete viewpoint acquisition. Displays 69, 102025 (2021)

  50. X. Guo et al., Real-time dense-view imaging for three-dimensional light-field display based on image color calibration and self-supervised view synthesis. Opt. Express 30, 22260–22276 (2022)

  51. D. Chen et al., Virtual view synthesis for 3D light-field display based on scene tower blending. Opt. Express 29, 7866–7884 (2021)

  52. A. Aghasi, B. Heshmat, L. Wei, M. Tian, Optimal allocation of quantized human eye depth perception for multi-focal 3D display design. Opt. Express 29, 9878–9896 (2021)

Funding

This work was supported by the National Key R&D Program of China (Grant Nos. 2021YFB2802300 and 2022YFB3602803), the National Natural Science Foundation of China (Grant No. 62035016), the Guangdong Basic and Applied Basic Research Foundation (Grant Nos. 2023B1515040023 and 2020A1515110661), the Natural Science Foundation of Guangdong Province (Grant No. 2021A1515011449), and the China Postdoctoral Science Foundation (Grant No. 2021M703666).

Author information

Authors and Affiliations

Contributions

JWD, ZQ, SJJ and ZBF conceived the project. ZMC and SHL fabricated the metalens array; YFC performed the simulations of integral imaging and developed the code for the fast rendering method; ZBF designed the metalens arrays; ZBF, XL and WLL measured the metalens array; ZBF, YFC, XL and WLL performed the display experiments. All authors contributed to data analysis, discussions and manuscript writing.

Corresponding authors

Correspondence to Zong Qin or Jian-Wen Dong.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Supplementary Information

Additional file 1:

Section S1. Simulation results of nanopillar gratings.
Section S2. Fabrication of silicon master mold for metalens array.
Section S3. Nanoimprint fabrication of metalens array.
Section S4. Measurement setup for characterizing the performances of metalens array.
Section S5. Dispersion characteristic of metalens array.
Section S6. Comparison of eyebox size in metalens arrays with different F-numbers.
Fig. S1. Simulation results of the nanopillar gratings at the wavelength of 547 nm.
Fig. S2. Optical and SEM images of the silicon master mold of metalens array.
Fig. S3. Schematic illustration of nanoimprint fabrication of metalens array.
Fig. S4. Measurement setup for characterizing the performances of metalens array.
Fig. S5. Measured intensity distributions in the yz plane of a single metalens.
Fig. S6. Measured intensity distributions in the focal planes of a single metalens.
Fig. S7. Optical path diagrams for the 3D II display.
Fig. S8. Experimental results of the 3D parallax effect.
Fig. S9. The defocusing AR effects of the meta-II NED system.
Fig. S10. Simulated results of a small F-number metalens array.
Fig. S11. The eyebox size simulations of different F-number metalens arrays.
Movie S1. The results of the 3D parallax effect under green light.
Movie S2. The results of the 3D parallax effect under blue light.
Movie S3. The results of the 3D parallax effect under red light.
Movie S4. The results of the AR effect under green light.
Movie S5. The results of the AR effect under blue light.
Movie S6. The results of the AR effect under red light.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

About this article

Cite this article

Fan, ZB., Cheng, YF., Chen, ZM. et al. Integral imaging near-eye 3D display using a nanoimprint metalens array. eLight 4, 3 (2024). https://doi.org/10.1186/s43593-023-00055-1

Keywords