## Abstract

In this paper, we propose a geometric optical model to measure the distances of object planes in a light field image. The proposed model is composed of two sub-models based on ray tracing: an object space model and an image space model. The two theoretic sub-models are derived for on-axis point light sources. In the object space model, light rays propagate into the main lens and refract inside it following the refraction theorem. In the image space model, light rays exit from emission positions on the main lens and subsequently impinge on the image sensor with different imaging diameters. The relationships between the imaging diameters of objects and their corresponding emission positions on the main lens are investigated using refocusing and the similar triangle principle. By combining the two sub-models and tracing light rays back to the object space, the relationships between objects’ imaging diameters and the corresponding distances of object planes are derived. The performance of the proposed geometric optical model is compared with existing approaches using different configurations of hand-held plenoptic 1.0 cameras, and real experiments are conducted using a preliminary imaging system. Results demonstrate that the proposed model outperforms existing approaches in terms of accuracy and exhibits good performance over a general imaging range.

© 2017 Optical Society of America

## 1. Introduction

Light field imaging technologies are coming into prominence, and so-called light field cameras, also known as plenoptic cameras, have attracted increasing attention in recent years. Contrary to conventional digital cameras, plenoptic cameras can capture both 2D spatial and angular information in a single shot. To acquire this 4D information, plenoptic cameras insert a microlens array (MLA) between the main lens and the image sensor. According to the position of the MLA, plenoptic cameras can be classified into two categories, named plenoptic 1.0 and plenoptic 2.0. Ng *et al.* [1] first presented the prototype of plenoptic 1.0 cameras, with an MLA positioned at the imaging plane of the main lens, commercially known as Lytro [2]. In 2009, Lumsdaine *et al*. [3] introduced a new rendering technique for plenoptic 2.0 cameras, which makes the MLA focus on the imaging plane of the main lens to form a relay system that reimages the image of the object onto the image sensor, commercially known as Raytrix [4]. The optical structure of plenoptic 1.0 cameras is depicted in Fig. 1. As shown in Fig. 1, light rays emitted from objects located on plane $a$ propagate through the main lens and converge at the MLA. These converged light rays are split onto the sensor areas of the corresponding micro lenses, which leads to both 2D spatial and angular information being recorded. The acquired 4D information allows for digital refocusing [5], synthesizing viewpoints [6,7], extending depth of field [8], saliency detection [9], etc.

Currently, distance measurement using hand-held plenoptic 1.0 cameras is becoming an attractive application. Existing techniques for distance measurement can be mainly divided into two types: active ranging and passive ranging. Active ranging, such as laser ranging [10] and radar ranging [11], requires expensive equipment and can be easily affected by the environment. Passive ranging, such as binocular camera systems [12] and camera array systems [13], is limited by portability and calibration. These problems can be solved by utilizing hand-held plenoptic 1.0 cameras. Although the depth information estimated from light field data [14–19] can be used to inversely calculate distances, the accuracy is limited by the roughness of the depth maps, especially in texture-less regions. Therefore, Hahne *et al*. [20] analyzed the imaging system of hand-held plenoptic 1.0 cameras and developed an approach to measure the distances of the object planes at which the user is refocusing. The method treats a pair of light rays as a system of linear functions whose solution yields an intersection indicating the distance of the refocused object plane. However, the problem of obtaining all distances of object planes through a single refocusing process is not solved in [20]. In addition, refocusing to planes closer to the main lens with the refocusing synthesis technique in [20] inherently implies an interpolation of the light field image, which requires a large amount of computer memory.

In order to measure all distances of object planes in an image captured by a hand-held plenoptic 1.0 camera through a single refocusing implementation with higher accuracy, we put forward a geometric optical model in this paper. The proposed geometric optical model consists of two sub-models based on ray tracing, namely an object space model and an image space model, and the two theoretic sub-models are derived for on-axis point light sources. The object space model describes the refraction of light rays inside the main lens following the refraction theorem. The image space model describes the relationship between the emission positions of light rays exiting from the main lens and the corresponding imaging diameters on the image sensor through refocusing and the similar triangle principle. By combining the two sub-models and tracing light rays back to the object space, the relationships between objects’ imaging diameters and the corresponding distances of object planes are derived. Results of the performance comparison demonstrate that the proposed geometric optical model provides higher accuracy in distance measurement than existing methods and adapts better to different optical configurations of the imaging system.

The rest of the paper is organized as follows. Section 2 illustrates the proposed geometric optical model in detail based on the optical analysis of the imaging of objects. Experimental results and analyses are provided in Section 3. Section 4 draws conclusions and outlines future work.

## 2. Optical analysis and geometric optical model

#### 2.1 Optical analysis

Apart from the well-focused light field imaging case shown in Fig. 1, objects that are not located on plane $a$ will be defocused and imaged as Fig. 2 depicts. As shown in Fig. 2, under the assumptions that light rays propagate into the main lens covering its whole pupil diameter and that refraction at the MLA is neglected, light rays (highlighted in green) emitted from objects on plane ${a}^{\prime}$, which is in front of plane $a$, will focus behind the image sensor with the corresponding imaging diameter ${D}_{1}$. Similarly, light rays (highlighted in red) emitted from objects on plane $\tilde{a}$, which is nearer to plane $a$ but still in front of it, will focus between the image sensor and the MLA. Light rays (highlighted in blue) emitted from objects on plane $\widehat{a}$, which is behind plane $a$, will focus between the MLA and the main lens. As we can see from Fig. 2, the imaging diameters of objects on the image sensor differ and depend on the corresponding distances of the object planes. Based on this analysis, the distances of object planes can be measured by investigating their relationships with the corresponding objects’ imaging diameters.
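The dependence of the imaging diameter on the object-plane distance described above can be illustrated with an idealized thin-lens sketch. This is an illustrative assumption only (the proposed model treats the thick main lens and MLA geometry explicitly), and all numeric values are hypothetical:

```python
# Idealized thin-lens blur-circle sketch. All numeric values are hypothetical;
# the paper's model treats the thick main lens explicitly.

def blur_diameter(u, f, aperture, sensor_dist):
    """Blur-circle diameter on the sensor for an object at distance u.

    u           : object distance from the lens
    f           : focal length of the lens
    aperture    : pupil diameter D of the lens
    sensor_dist : lens-to-sensor distance
    (all lengths in meters)
    """
    v = 1.0 / (1.0 / f - 1.0 / u)               # thin-lens image distance
    return aperture * abs(sensor_dist - v) / v  # similar triangles

f, D = 0.08, 0.02                      # 80 mm lens, 20 mm pupil (hypothetical)
sensor = 1.0 / (1.0 / f - 1.0 / 1.0)   # sensor placed to focus a plane 1 m away

for u in (0.5, 1.0, 2.0):              # planes in front of / on / behind focus
    print(u, blur_diameter(u, f, D, sensor))
```

An object on the focused plane yields a vanishing blur diameter, while planes in front of or behind it produce distinct, distance-dependent diameters, which is the observation the proposed model exploits.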

In order to establish the relationships between the imaging diameters of objects and the distances of object planes, the whole light field imaging system is divided into two sub-systems. One sub-system covers light ray propagation between the objects and the main lens, and the other covers light ray propagation between the main lens and the image sensor. Based on the optical structure of hand-held plenoptic 1.0 cameras, light rays that impinge on the image sensor are first traced back to the main lens using refocusing and the similar triangle principle, to obtain the relationships between the imaging diameters and the emission positions of light rays on the main lens. Subsequently, light rays on the main lens are traced back to the object space using the refraction theorem, to derive the relationships between the emission positions and the distances of object planes. The details of deriving these relationships are described in the following section.

#### 2.2 Proposed geometric optical model

According to the above analyses, a geometric optical model is proposed which consists of two sub-models based on ray tracing: an object space model and an image space model. Herein, point light sources are regarded as the captured objects, and the two theoretic sub-models are derived for on-axis point light sources.

Using plane ${a}^{\prime}$ in Fig. 2 as an instance, in the object space model, as shown in Fig. 3, light rays emitted from an object (an on-axis point light source) on plane ${a}^{\prime}$ propagate into the main lens and refract inside it following the refraction theorem. As shown in the figure, ${{d}^{\prime}}_{out}$ defines the distance between plane ${a}^{\prime}$ and the main lens, which is exactly the quantity to be measured. Its relationship with the angle denoted by $\phi $ in Fig. 3 is mathematically given by

where $R$ represents the radius of curvature of the main lens, $T$ represents the central thickness of the main lens, and $D$ is the pupil diameter of the main lens. Equation (1) indicates that ${{d}^{\prime}}_{out}$ can be calculated once $\phi $ is known. Therefore, we need to acquire the value of $\phi $ in order to obtain ${{d}^{\prime}}_{out}$. In Fig. 3, light rays refract inside the main lens and arrive at the position $(p,q)$, which is marked by a green dot. The refractive angle $\psi $ satisfies the refraction theorem, which is given by

where ${n}_{1}$ is the refractive index of the main lens; $\psi $ is the included angle between the normal (the dashed line in purple) and the refracted light rays in the main lens; and ${\theta}_{1}$ follows from the geometry in Fig. 3. After arriving at position $(p,q)$ as marked in Fig. 3, light rays exit from position $(p,q)$ with a refractive angle $\omega $ and then impinge on the image sensor. These propagations of light rays take place in the image space, and the proposed image space model is shown in Fig. 4.
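The refraction step of the object space model can be sketched numerically. The geometry below is an illustrative assumption (a marginal ray from an on-axis source striking the front spherical surface at height $D/2$); it is not the paper's exact Eqs. (1)-(4):

```python
import math

# Hedged sketch of the object-space refraction step: a marginal ray from an
# on-axis point at distance d_out hits the front spherical surface of the
# main lens at height D/2; Snell's law then gives the in-lens angle psi.
# The coordinate conventions and values here are illustrative assumptions.

def front_surface_refraction(d_out, R, D, n1):
    h = D / 2.0
    sag = R - math.sqrt(R * R - h * h)      # surface depth at height h
    alpha = math.atan2(h, d_out + sag)      # ray angle w.r.t. the optical axis
    beta = math.asin(h / R)                 # tilt of the surface normal at h
    theta1 = alpha + beta                   # incidence angle
    psi = math.asin(math.sin(theta1) / n1)  # Snell: sin(theta1) = n1 * sin(psi)
    return theta1, psi

theta1, psi = front_surface_refraction(d_out=1.0, R=0.05, D=0.02, n1=1.5168)
print(theta1, psi)
```

As expected for refraction into a denser medium, the ray bends toward the normal, i.e. $\psi < {\theta}_{1}$.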

By utilizing the refraction theorem, we have

where ${\theta}_{1}-\psi $ equals the included angle between the refracted light rays and the horizontal axis, and ${\theta}_{2}$ satisfies a corresponding geometric relation. If the emergent light rays that impinge on the image sensor are extended, they will intersect on a plane behind the image sensor at a distance $d$, as shown in Fig. 4. Considering that the focal length and thickness of a general MLA are very small, the deviation caused by refraction at the MLA is neglected. As a consequence, we obtain a relation in which ${D}_{1}$ represents the imaging diameter on the image sensor. Besides, position $(p,q)$ lies on the curved surface of the main lens, so $p$ and $q$ satisfy the surface equations, where ${f}_{x}$ is the focal length of the MLA and ${d}_{in}$ defines the distance between the MLA and the main lens. Combining Eqs. (7) and (9), we obtain another relation. Analyses of Eqs. (8) and (10) show that $p$ and $q$ can only be obtained after ${d}_{in}$, ${D}_{1}$, and $d$ are all known. In order to obtain the values of ${d}_{in}$, ${D}_{1}$, and $d$, refocusing and inverse ray tracing are required. Therefore, the light field image is refocused to a reference plane at a known distance from the main lens once it is captured. The refocusing is carried out using the ray tracing technique proposed in [20], as shown in Fig. 5. The reference plane can be considered as plane $a$ in Fig. 2, whose distance is ${d}_{out}$. Combining Fig. 2 and Fig. 5, objects on other planes such as ${a}^{\prime}$ are defocused and equivalent to being captured under the optical configuration in which the distance between the MLA and the main lens equals ${d}_{in}$. According to [20], the slope of the light rays emitted from the centers of the pixels on the image sensor to the corresponding micro lens, denoted by ${m}_{i}$, is given by

where ${y}_{j}$ represents the vertical central coordinate of each micro lens and ${v}_{i}$ represents the vertical central coordinate of each pixel on the image sensor. Both subscripts $i$ and $j$ count from zero; the range of $i$ is the number of pixels in the vertical dimension under each micro lens, and the range of $j$ is the number of micro lenses in the vertical dimension. Since the terms on the right-hand side of Eq. (11) are all known, each slope ${m}_{i}$ can be calculated. These light rays propagate through the main lens and the corresponding positions on the focal plane ${F}_{u}$ (the dots marked in the same color as the light rays), and then converge to a point on plane $a$, as shown in Fig. 5. The intervals between the dots on plane ${F}_{u}$ are the baselines of the virtual viewpoints in hand-held plenoptic 1.0 cameras [20]. The positions of the dots can be derived accordingly, where $f$ is the focal length of the main lens. With the known slope ${m}_{i}$ and focal length $f$, the position of each dot can be obtained. Further, the slopes of the light rays in object space, denoted by ${k}_{i2}$, can be derived by

where ${{y}^{\prime}}_{2}$ represents the known vertical coordinate of point ${{y}^{\prime}}_{2}$ on plane $a$. Using the purple light ray in Fig. 5 as an instance, ${k}_{02}$ indicates the incident angle of this light ray. With the known ${{y}^{\prime}}_{2}$ and ${F}_{0}$, the intersection of this light ray with the right curve of the main lens can be obtained. Then, $({{p}^{\prime}}_{0},{{q}^{\prime}}_{0})$ can be ascertained by the refraction theorem. Finally, ${d}_{in}$ can be derived accordingly, and ${D}_{1}$ can be obtained by recording the imaging diameters of objects on plane ${a}^{\prime}$ in the refocused light field image.
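The slope computation of Eq. (11) can be sketched as follows. The exact expression is given in [20]; the form below is one plausible reading, under the assumption that the sensor sits one MLA focal length ${f}_{x}$ behind the MLA, and using a hypothetical pixel pitch:

```python
# Sketch of the per-pixel ray slopes m_i (cf. Eq. (11)); the exact form is
# in [20]. Assumption: the sensor sits one MLA focal length f_x behind the
# MLA, so the slope of the ray from pixel center v_i to micro-lens center
# y_j is (y_j - v_i) / f_x.

def ray_slopes(y_j, pixel_centers, f_x):
    """Slopes m_i of rays from pixel centers v_i to micro-lens center y_j."""
    return [(y_j - v_i) / f_x for v_i in pixel_centers]

f_x = 2.816e-3                                  # MLA focal length quoted in Sec. 3.2
y_j = 0.0                                       # an on-axis micro lens (assumed)
pitch = 14e-6                                   # hypothetical pixel pitch
v = [y_j + (i - 2) * pitch for i in range(5)]   # 5 pixel centers under the lens
slopes = ray_slopes(y_j, v, f_x)
print(slopes)
```

The slopes come out antisymmetric about the micro-lens center, matching the fan of rays drawn in Fig. 5.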

To obtain $d$, an approximation is made in the image space model: if the light rays are extended back to the main lens, they will intersect at the yellow dots marked in Fig. 4, the centers of the marginal pupil diameter of the main lens. By utilizing the similar triangle principle, which gives

$d$ can be approximately achieved from Eq. (16). After the above processing and approximation, $p$ and $q$ can be derived by plugging ${d}_{in}$, ${D}_{1}$, and $d$ into Eqs. (8) and (10), which are then used for deriving ${{d}^{\prime}}_{out}$.
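The similar-triangle step can be illustrated with a minimal numeric sketch, under the simplifying assumption that rays leave the main lens at heights $\pm q$ and converge at a point a distance $d$ behind the sensor, with the sensor a distance ${d}_{in}$ from the lens (the yellow-dot approximation of Fig. 4 is not modeled here):

```python
# Minimal similar-triangle sketch for the image space model. Assumption:
# rays exit the lens at heights +/- q and converge a distance d behind the
# sensor, which sits a distance d_in from the lens (MLA refraction neglected).

def imaging_diameter(q, d_in, d):
    """Diameter D1 of the ray bundle where it crosses the sensor plane."""
    # Triangle with base 2q at the lens and apex d_in + d behind the lens;
    # the sensor plane cuts it at distance d from the apex.
    return 2.0 * q * d / (d_in + d)

def convergence_distance(q, d_in, D1):
    """Invert the same triangles to recover d from a measured D1."""
    return D1 * d_in / (2.0 * q - D1)

q, d_in, d = 0.01, 0.1, 0.02        # hypothetical values (meters)
D1 = imaging_diameter(q, d_in, d)
print(D1, convergence_distance(q, d_in, D1))
```

Going forward gives the recorded diameter; going backward recovers $d$ from a measured ${D}_{1}$, which mirrors the inverse ray tracing used in the model.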

After that, the refractive angle $\omega $ can be calculated by

For planes $\tilde{a}$ and $\widehat{a}$ in Fig. 2, their respective object space models are the same as Fig. 3 shows. Differences exist in the image space model, since in these cases the focal planes in image space are located in front of the image sensor. Therefore, Eq. (9) is changed to

and this change is the same for both planes $\tilde{a}$ and $\widehat{a}$. Then, Eq. (10) is changed correspondingly to Eq. (21). In addition, Eq. (15) should be replaced by Eq. (22), which leads to a corresponding change in Eq. (16). The steps for deriving the distances of planes $\tilde{a}$ and $\widehat{a}$ are then exactly the same as those described for plane ${a}^{\prime}$, with the updated equations.

## 3. Experimental results and analyses

#### 3.1 Simulation system

For the purpose of validating the utility of the proposed geometric optical model, imaging systems of hand-held plenoptic 1.0 cameras are simulated in the optics tool Zemax [21], as shown in Fig. 6. Figure 6(a) depicts a screenshot of the simulated imaging systems; the components from left to right are the image sensor (white), the MLA, and the main lens, respectively. The gap between the image sensor and the MLA is magnified for better visibility. The performance of the proposed geometric optical model is compared with the method provided in [20]. As mentioned before, the method in [20] treats a pair of light rays as a system of linear functions and considers the solutions as the distances of the refocused object planes. Two optical configurations of hand-held plenoptic 1.0 cameras are designed for comparison, and the $F$-numbers of the main lens and MLA are always kept equal [22]. Figures 6(b) and 6(c) show the zoomed MLA used in the two optical configurations. The geometric parameters used for designing the two simulated imaging systems are summarized in Table 1. The focal lengths of the MLA and main lens in imaging system 1 are the same as those used in [20]. The wavelength of the light rays, $\lambda $, is set to 632.8 nm, the same as that used in [20], to ensure the fairness of the performance comparison.

#### 3.2 Performance comparison and analyses

The estimation error of distance, denoted by $ERROR$, is used for comparing the performance of the proposed geometric optical model and the method in [20], and is given by
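The metric is assumed here to be the relative error in percent, consistent with the percentage values reported in Table 2; a one-line sketch:

```python
# Relative distance-estimation error in percent (assumed form of ERROR,
# consistent with the percentage figures in Tables 2, 4, and 7).

def estimation_error(d_estimated, d_actual):
    return abs(d_estimated - d_actual) / d_actual * 100.0

print(estimation_error(1.02, 1.00))  # 2% error for a 1 m plane estimated at 1.02 m
```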

As we can see from Table 2, the proposed geometric optical model outperforms the method provided in [20] for the majority of distances, particularly in the actual distance range of 0.2 m to 2.5 m in imaging system **1**. In addition, the proposed geometric optical model outperforms the method in [20] by an average of 1.954% reduction in estimation error, with a maximum reduction of 4.678%. In imaging system **2**, the proposed geometric optical model is superior to the method in [20] over the whole distance range. The reduction in average estimation error is 4.643%, and the maximum reduction reaches 5.484%. More importantly, in the near distance range of 0.2 m to 1.3 m, the estimation errors obtained by the proposed geometric optical model are more than 4 times lower, and even 9 times lower at 0.3 m to 0.6 m, than those of the method provided in [20], which shows higher potential in applications.

The estimation errors of the two imaging systems in Table 2 are graphed in Figs. 7(a) and 7(b). In general, the proposed geometric optical model outperforms the algorithm in [20] by providing much higher accuracy for both optical configurations. It is also observed from Figs. 7(a) and 7(b) that the estimation error gradually increases with distance for the proposed algorithm. The reason mainly lies in the approximation made in the image space model. In deriving Eqs. (15) and (22), light rays ${r}_{1}$ and ${r}_{2}$ are approximated by ${{r}^{\prime}}_{1}$ and ${{r}^{\prime}}_{2}$, respectively, as shown in Fig. 8. Thus, deviations exist in the light ray emission position $(p,q)$ on the main lens, such as $\Delta {q}_{1}$ and $\Delta {q}_{2}$, which leads to the estimation errors in Eqs. (10) and (21). Farther distances in the object space generally cause a larger $\Delta q$ and finally result in larger estimation errors. This also causes slightly worse performance when the actual distance is beyond 2.5 m in imaging system **1**.

However, the larger estimation error at farther distances may be compensated by changing the geometric parameters of components in the hand-held plenoptic 1.0 camera, such as the main lens. Therefore, eight testing cases (*TC*s) are designed in Table 3 by changing the parameters of the main lens while keeping the MLA used in imaging system **2** (a convexo-convex MLA with unchanged focal length ${f}_{x}=2.816$ mm). The pitch of each micro lens, ${D}_{m}$, is varied with the change of the focal length of the main lens in order to keep the $F$-numbers of the main lens and MLA equal [22]. The eight testing cases can be classified into three categories. We mainly select two focal lengths for the main lens: one is around 79 mm, as in *TC*1-*TC*3; the other is around 98 mm, as in *TC*4-*TC*6. They are controlled by changing the radius of curvature of the main lens, $R$. Within each group, *TC*1-*TC*3 or *TC*4-*TC*6, the central thickness of the main lens, *T*, changes (which affects the focal length slightly) to investigate its effect on the estimation error. *TC*7 and *TC*8 use the same focal length of the main lens as *TC*6, but investigate the effect of changing the pupil diameter of the main lens, $D$. The refractive index of the main lens, ${n}_{1}$, and the wavelength of the light rays, $\lambda $, are the same as those in Table 1.

Estimation errors of all the testing cases are listed in Table 4. First, it is found that for the same $R$, which results in almost the same focal length of the main lens, a smaller $T$ contributes to smaller estimation errors, as in the comparisons among *TC*1 to *TC*3 and among *TC*4 to *TC*6. Actually, the imaging diameters on the image sensor are not affected by changing $T$, while the estimated distance $d$ decreases as $T$ becomes smaller, which results in a larger $\omega -{\theta}_{2}$. For a larger $\omega -{\theta}_{2}$, the estimated $q$ would be smaller, leading to a larger $\Delta q$. However, since $T$ becomes smaller, the estimated $p$ also becomes smaller, and $q$ eventually becomes larger according to Eq. (8). Therefore, $\Delta q$ and the estimation error decrease with decreasing $T$. Conversely, the estimation error increases with larger $T$, as depicted in Figs. 9(a) and 9(b).

Second, it is observed that for exactly the same focal length of the main lens, corresponding to the same $R$ and $T$, increasing $D$ can obviously improve the estimation accuracy, as we compare the results among *TC*6 to *TC*8. The variation of the estimation error with distance is shown in Fig. 9(c). As $D$ gets larger, the thickness of the margins of the main lens, where the yellow dots are located as shown in Fig. 4, becomes smaller. Thus, the effect of enlarging $D$ is approximately equivalent to reducing $T$, and inherently the maximum of $\Delta q$, as shown in Fig. 8, will be very small for a larger $D$, which results in smaller estimation errors.

Third, if we check the performance of each testing case across the groups, it can be further discovered that a larger focal length of the main lens, introduced by increasing $R$, provides lower estimation errors. Generally, a larger $R$ leads to a smaller $\omega -{\theta}_{2}$, and further to a smaller $\Delta q$ according to the above analyses. However, it also results in an increase in the thickness of the margins of the main lens, so from these two factors alone it is hard to judge whether $\Delta q$ is increased or reduced. Fortunately, it is found that the actual refractive angle, $\psi $, decreases with increasing $R$, which accounts for a smaller $\Delta q$. Integrating the above three factors, $\Delta q$ is eventually reduced, which results in lower estimation errors. We compared *TC*1 with *TC*4, *TC*2 with *TC*5, and *TC*3 with *TC*6, as shown in Figs. 9(e), 9(f) and 9(g), respectively. The three pairs of comparisons present a conclusion consistent with what we discovered.

In addition, it is observed that the estimation errors in Table 2 and Table 4 increase with small fluctuations. Although $\Delta q$ generally increases with the distance between the object plane and the main lens, the estimation error does not increase linearly with $\Delta q$. The reason is mainly that the majority of equations in the proposed model are non-linear. Besides, uncertainties exist in the amount of increment in $\Delta q$ when changing the parameters of the main lens, especially the pupil diameter $D$. Thus, tiny fluctuations are detected in the computational results shown in Fig. 7 and Fig. 9.

It can also be noticed from Figs. 9(a), 9(b) and 9(c) that the effects on the estimation accuracy are obvious when the central thickness $T$ and pupil diameter $D$ are changed substantially. Therefore, we compare their effects by increasing $T$ and $D$ at the same time. For this comparison, two additional testing cases are conducted, with the geometric parameters summarized in Table 5. The geometric parameters of *TC*9 are the same as those of imaging system **1** in Table 1.

The estimation errors of *TC*9 and *TC*10 are graphed in Fig. 10. As we can see from Fig. 10, the estimation error caused by larger $T$ can be decreased by enlarging $D$.

Generally speaking, more accurate distance measurement for object planes in a light field image can be achieved by the proposed geometric optical model for hand-held plenoptic 1.0 cameras using a main lens with a larger pupil diameter, a larger focal length, and a smaller thickness. We can also consider replacing the MLA with ones of different focal lengths so that new optical configurations of hand-held plenoptic 1.0 cameras become adaptable to measuring farther distances, which is under investigation as part of our future work.

#### 3.3 Testing on a real system

##### 3.3.1 Prototype of a plenoptic imaging system

In order to further demonstrate the effectiveness of the proposed model, a prototype of a real imaging system has been built, as depicted in Fig. 11. The geometric parameters of the optical elements are summarized in Table 6. As shown in Fig. 11, a laser is used as an on-axis point light source, and the wavelength of the parallel light rays emitted from it is 532 nm. To make the light rays omnidirectional, an objective lens is placed in front of the laser so that the parallel light rays first focus at the focal point of the objective lens and subsequently propagate divergently. The reference plane is 1000 mm away from the main lens. The distance between the main lens and the MLA is 101 mm.

##### 3.3.2 Experimental results and analyses

Images obtained by changing the distance between the main lens and the light source are shown in Fig. 12. As we can see from Fig. 12, the imaging diameter decreases as the distance increases. The imaging diameter is measured as the number of valid pixels in the vertical direction multiplied by the pixel pitch. Considering that the edges of the imaging results are not sharp enough due to the aberrations of the main lens, an advanced detection technique is desired to pick out the “valid” pixels at the image boundary. Several possible signal processing techniques could be developed, such as detecting the “valid” pixels by measuring the relative intensity variation along the radial direction or by measuring the absolute difference between the current pixel and the average intensity. The algorithms could be further optimized by considering regional continuity and smoothness, which needs to be investigated carefully in our future work. Here, a preliminary method is used to determine the imaging diameter according to the histogram of the intensity distribution. Counting down from the highest intensity value, e.g. 255, the intensity at which the accumulated distribution reaches a specific ratio of the total number of pixels is set as the threshold. The pixels whose intensities are larger than the threshold are regarded as “valid” for the imaging diameter. Results of the estimated distances and estimation errors are listed in Table 7.
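The preliminary histogram-based detection can be sketched as below. The accumulation ratio and pixel pitch are hypothetical values, and pixels at or above the resulting threshold are counted as valid (a discretization of the "larger than" rule for the synthetic 8-bit image):

```python
import numpy as np

# Sketch of the preliminary diameter-detection method: walk the intensity
# histogram down from 255 until the accumulated pixel count reaches a chosen
# ratio of the total, take that intensity as the threshold, then measure the
# vertical extent of the valid pixels. Ratio and pixel pitch are hypothetical.

def measure_diameter(image, ratio=0.05, pixel_pitch=14e-6):
    hist = np.bincount(image.ravel(), minlength=256)
    target = ratio * image.size
    acc, threshold = 0, 255
    for level in range(255, -1, -1):        # count down from highest intensity
        acc += hist[level]
        if acc >= target:
            threshold = level
            break
    valid_rows = np.any(image >= threshold, axis=1)  # rows with valid pixels
    return int(valid_rows.sum()) * pixel_pitch

# Synthetic test: a bright disk (radius 20 px) on a dark background.
yy, xx = np.mgrid[0:100, 0:100]
img = np.where((yy - 50) ** 2 + (xx - 50) ** 2 <= 20 ** 2, 200, 10).astype(np.uint8)
print(measure_diameter(img))                # 41 rows * 14 um
```

On the synthetic disk, the method recovers the vertical extent of the bright region exactly; on real images with soft edges, the choice of ratio becomes the tuning knob discussed above.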

As shown in Table 7, the proposed model outperforms the method in [20] with an average reduction of 0.4702% in estimation error. It is also found that when the distance is larger than 800 mm, the estimation accuracy of the proposed model is slightly worse than that of the method provided in [20]. This may be caused by suboptimality in detecting the imaging diameter, which needs to be further investigated.

Based on the experiments, it is found that some problems still need to be solved in estimating distances for a real imaging system, such as how to compensate for fabrication errors and how to reduce registration errors. We also note that aberrations of the main lens and micro lenses exist. The aberrations of the main lens cause the light rays to converge to a small spot on the focal plane, instead of a theoretically sharp point, after propagating through the main lens. Therefore, when implementing refocusing using the model shown in Fig. 5, errors arise in deriving ${d}_{in}$. Thus, it is important and indispensable to calibrate the imaging system, from which correction or compensation factors can be obtained and used to update the proposed model to ensure the estimation accuracy; this is also left for our future work.

## 4. Conclusions

In this paper, we put forward a geometric optical model to measure the distances of object planes in an image captured by a hand-held plenoptic 1.0 camera. The proposed geometric optical model consists of two sub-models based on ray tracing, namely an object space model and an image space model. Results of simulations and real experiments demonstrate that the proposed geometric optical model outperforms existing distance measurement methods in terms of accuracy, especially over a general imaging range. In addition, the measurement accuracy is further investigated for imaging systems with diverse geometric parameters, and the results reveal that the estimation error can be compensated by enlarging the pupil diameter or focal length, or by reducing the thickness, of the main lens.

In order to further optimize the proposed model and improve its versatility, our future work includes updating the image space model by taking refraction at the MLA into consideration, developing a calibration process to compensate for the aberrations of the main lens and to reduce registration errors in a real imaging system, and optimizing the signal processing techniques jointly with the image space model to detect the imaging diameter accurately.

The proposed two theoretic sub-models are derived for on-axis point light sources. For a real scenario, the model can be further extended by choosing feature points on the object planes as sampled point light sources, updating the on-axis models to off-axis ones, measuring the respective distance of each feature point, and taking the average of the measured distances as the final distance of the corresponding object plane. It is also part of our future work to make the proposed model applicable to industrial detection, microscopy, retina imaging, and even broader areas.

## Funding

National Natural Science Foundation of China (NSFC) Guangdong Joint Foundation Key Project (U1201255 and 61371138).

## References and links

**1. **R. Ng, M. Levoy, M. Bredif, G. Duval, M. Horowitz, and P. Hanrahan, “Light field photography with a hand-held plenoptic camera,” Computer Science Technical Reports (CSTR), 2005.

**2. ** Lytro, https://www.lytro.com/.

**3. **A. Lumsdaine and T. Georgiev, “The focused plenoptic camera,” in Proceedings of IEEE International Conference on Computational Photography (IEEE, 2009), pp. 1–8.

**4. ** Raytrix, https://www.raytrix.de/.

**5. **C. C. Chen, Y. C. Lu, and M. S. Su, “Light field based digital refocusing using a DSLR camera with a pinhole array mask,” in IEEE International Conference on Acoustics Speech and Signal Processing (IEEE, 2010), pp. 754–757. [CrossRef]

**6. **Y. Taguchi and T. Naemura, “View-Dependent Coding of Light Fields Based on Free-Viewpoint Image Synthesis,” in IEEE International Conference on Image Processing (2006), pp. 509–512. [CrossRef]

**7. **K. Rematas, T. Ritschel, M. Fritz, and T. Tuytelaars, “Image-Based Synthesis and Re-synthesis of Viewpoints Guided by 3D Models,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2014), pp. 3898–3905. [CrossRef]

**8. **T. E. Bishop and P. Favaro, “The light field camera: extended depth of field, aliasing, and superresolution,” IEEE Trans. Pattern Anal. Mach. Intell. **34**(5), 972–986 (2012). [CrossRef] [PubMed]

**9. **N. Y. Li, J. W. Ye, J. Yu, H. B. Ling, and J. Y. Yu, “Saliency Detection on Light Field,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2014), pp. 2806–2813.

**10. ** Metrology Resource Co, http://www.metrologyresource.com/.

**11. ** Weibel Inc., Weibel RR-60034 Ranging Radar System.

**12. **J. Xu, Y. M. Chen, and Z. L. Shi, “Distance measurement using binocular-camera with adaptive focusing,” J. Shanghai Univ. **15**(2), 10072861 (2009).

**13. **K. Venkataraman, P. Gallagher, A. Jain, and S. Nisenzon, “US patent 705,885”(2015).

**14. **Z. Yu, X. Guo, H. Ling, A. Lumsdaine, and J. Yu, “Line Assisted Light Field Triangulation and Stereo Matching,” in Proceedings of IEEE International Conference on Computer Vision (IEEE, 2013), pp. 2792–2799. [CrossRef]

**15. **M. J. Kim, T. H. Oh, and I. S. Kweon, “Cost-aware depth map estimation for Lytro camera,” in IEEE International Conference on Image Processing (2014), pp. 36–40. [CrossRef]

**16. **M. W. Tao, S. Hadap, J. Malik, and R. Ramamoorthi, “Depth from Combining Defocus and Correspondence Using Light-Field Cameras,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2013), pp. 673–680. [CrossRef]

**17. **M. W. Tao, P. P. Srinivasan, J. Malik, S. Rusinkiewicz, and R. Ramamoorthi, “Depth from shading, defocus, and correspondence using light-field angular coherence,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE,2015), pp. 1940–1948. [CrossRef]

**18. **Y. Xu, X. Jin, and Q. Dai, “Depth fused from intensity range and blur estimation for light-field cameras,” in IEEE International Conference on Acoustics Speech and Signal Processing (IEEE, 2016), pp. 2857–2861. [CrossRef]

**19. **H. G. Jeon, J. S. Park, G. M. Choe, J. S. Park, Y. S. Bok, Y. W. Tai, and I. S. Kweon, “Accurate depth map estimation from a lenslet light field camera,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2015), pp. 1547–1555. [CrossRef]

**20. **C. Hahne, A. Aggoun, S. Haxha, V. Velisavljevic, and J. C. Fernández, “Light field geometry of a standard plenoptic camera,” Opt. Express **22**(22), 26659–26673 (2014). [CrossRef] [PubMed]

**21. **“Zemax,” http://www.zemax.com/.

**22. **E. Y. Lam, “Computational photography with plenoptic camera and light field capture: tutorial,” J. Opt. Soc. Am. A **32**(11), 2021–2032 (2015). [CrossRef] [PubMed]