
Depth perception estimation of various stereoscopic displays

Open Access

Abstract

In this paper, we investigate the relationship between depth perception and several disparity parameters in stereoscopic images. A series of subjective experiments conducted on various 3D displays indicates that the depth perception of stereoscopic images is proportional to the depth difference and inversely related to the camera distance. Based on this observation, we developed formulas to quantify the degree of depth perception in stereoscopic images. The proposed method uses the depth differences and the camera distance between the objects and the 3D camera, and improves depth perception estimation by using non-linear functions of the depth difference and the camera distance. The results show that the proposed method provides noticeable improvements in correlation and produces more accurate depth perception estimates for stereoscopic images.

© 2016 Optical Society of America

1. Introduction

The rapid advancement of 3D imaging and display technologies has driven the growth of 3D-related industries. Different types of 3D displays have been introduced to the market, and a large number of 3D programs have been produced. The main benefit of 3D content is the depth information that allows viewers to perceive the relative depth of various objects. If the accuracy of this perception can be estimated, 3D content producers can maximize the 3D effects. Moreover, in applications such as medical operations and remote robot control, accurate depth perception by the operator is critical. Therefore, a model that estimates depth perception accuracy can be used for 3D content production, remote operations using stereo images, and similar applications.

Several authors have studied human depth perception of 3D images. In [1-3], depth perception was investigated for stereo random dot images, and the authors provided insights into the mechanisms and processes of stereopsis. It was reported that several monocular effects, such as spatial properties, noise, quantization level, or blurring in the 3D images, do not affect depth perception. In [4], random dot techniques were used to investigate human visual sensitivity to sinusoidal depth modulations specified by motion parallax information. It was reported that motion parallax provides the human visual system with a strong depth cue that contains, in principle, the same geometrical information as stereopsis. In [5], the authors investigated whether similar random dot techniques could be used to study depth from relative movement or motion parallax in the human visual system. In [6], the depth perception of 3D images captured under various binocular rivalry conditions was tested. In [7], experiments were conducted to investigate the relationship between binocular acuity and factors such as distance and illumination.

However, most previous research has focused on the processes of depth perception, and there has been little research on modeling or quantifying depth perception accuracy. In this paper, we investigate how to estimate the level of depth perception and propose models for depth perception accuracy.

Depth perception and eye comfort are important issues in 3D content production [8,9]. In particular, camera parameters such as the distance between the two lenses and the distance between the camera and the objects are important. On the other hand, the impact of different types of 3D displays on depth perception is not well understood.

With the increasing adoption of 3D video technologies in consumer electronics such as cameras, smartphones, and tablets, it is important to understand the relationship between image parameters and depth perception. This understanding can help both amateur and professional users produce high-quality 3D video content.

To ensure comfortable long-term 3D viewing experiences, high-quality 3D content without coding and transmission errors is desirable. Depth perception is another important factor for evaluating 3D sources. From an application perspective, the depth characteristics of 3D content determine whether the content is suitable for 3D services, so efficient methods for measuring depth perception are helpful. Depth perception assessment of 3D video content is a key component of 3D Quality of Experience (QoE) assessment. In particular, there is an increasing need to investigate the relationship between camera parameters and depth perception.

Several parameters affect depth perception, such as depth differences and the distance between the camera and objects. In general, it has been reported that depth perception is related to binocular disparity, which is the difference in image locations of an object seen by the left and right eyes.

In this paper, we investigate the relationship between depth perception and two image parameters: depth difference, and the distance between the camera and objects. We conducted subjective experiments with various image parameter settings and analyzed the subjective data. Then, we developed depth perception estimation functions that showed high correlations with depth perception accuracy.

The rest of this paper is organized as follows. Section 2 describes subjective experiments on depth perception along with a statistical analysis of the subjective test results. In Section 3, the experimental results are reported and Section 4 proposes depth perception estimation functions based on the experimental results. Finally, concluding remarks are drawn in Section 5.

2. Experimental setting and test environment

Several methods for the subjective evaluation of visual stimuli are described in ITU-R Recommendation BT.500 [10] for television pictures and in ITU-R Recommendation BT.2021 [11] for stereoscopic television pictures. Our subjective tests followed these recommendations.

2.1 Display systems

Depth perception may differ depending on the 3D display type. Several researchers have shown that depth perception is affected by factors such as crosstalk and brightness. In [12], the authors showed that crosstalk levels of only 1-2% had detrimental effects on the depth perceived from binocular disparity, reducing depth perception by up to 12%. The study in [13] shows that the brightness of displayed 3D images affects the viewers' pupil size, which in turn affects the depth perception of stereoscopic images. After considering these issues, three different types of 3D displays were used in the tests: a 55-inch TV for a home viewing environment, a 27-inch PC monitor, and notebook monitors (17.3-inch and 15.6-inch). The 3D display technologies included film patterned retarder (FPR), shutter glasses (SG), and autostereoscopic (parallax barrier). Table 1 shows the 3D display specifications.

Table 1. Display specifications used in the subjective experiments

2.2 Laboratory environment

ITU-R Recommendations BT.500 and BT.2021 describe the testing conditions in terms of laboratory brightness, viewing distance, display brightness, etc. The subjective tests were performed in accordance with these recommendations. For the 3D TV, two observers participated in each session at the same time; for the 3D PC monitor and the notebooks, one observer participated per session. Figure 1 illustrates the viewing environment of the subjective tests. The viewing distance was 3H (three times the display height) for the TV and PC monitors; for the notebooks, each viewer chose a comfortable distance (usually 1.5-2H).

Fig. 1 Viewing environments (H: monitor height): (a) TV monitor, (b) PC monitor, (c) notebooks.

2.3 Test stimuli

We generated a number of stereo images using a commercial 3D camera. Each image contained four objects, one of which was placed at a different depth (see Fig. 2). The sizes of the four objects were intentionally chosen to be different so that size information could not be used to select an answer. Figure 3 shows examples of the test images.

Fig. 2 Camera distance and depth differences.

Fig. 3 Example of the test images: (a) left and right images, (b) combined image, (c) object A, (d) object B, (e) object C, (f) object D.

Since the distance between the lenses of the stereo camera was fixed, only two factors affected the disparities: the distance between the camera and the objects (the camera distance) and the relative distances between the objects. At a close camera distance, small depth differences between the objects were sometimes noticeable; as the camera distance increased, a larger depth difference was needed for viewers to distinguish the relative locations of the objects. The test images were generated using 12 different depth differences (six positive and six negative). Generally, large disparities cause eye fatigue, and if they are too large, the objects cannot be fused. Thus, the tests were designed so that the disparity levels stayed within a comfortable viewing zone [6]. We used four camera distances (4 m, 4.5 m, 5 m, and 5.5 m). Table 2 shows the test conditions.

Table 2. Test conditions

Figure 4 shows the relationship between the width of Object A (the pig) and its distance. Even at the same width, the absolute distance showed some variation; for example, when the width was 130 pixels, the distance ranged from 450 cm to 500 cm. Thus, it was difficult for an observer to accurately estimate the absolute distances of the four objects from their relative sizes.

Fig. 4 Relationship between object widths and distances (object A).

Figure 5 shows the disparity difference distributions and the average disparity differences, and Fig. 6 shows the disparity difference histogram. For 10 cm depth differences, the disparity difference was about 1 pixel or less in most cases. The disparity differences ranged from 0.25 to 7 pixels.

Fig. 5 (a) disparity difference distributions, (b) average disparity differences.

Fig. 6 Disparity difference histogram.

2.4 Subjective test methodology

Subjective assessment was performed using the absolute category rating (ACR) method described in ITU-R Recommendations BT.500 [10] and BT.2021 [11] and in ITU-T Recommendation P.910 [14]. Each image was shown to the viewers for ten seconds, and they were then asked to choose the object located at a different distance from the others.

During the training phase, five sample images were shown so that the observers could become familiar with the test. These sample stereo images were not used in the data analyses. Figure 7 shows an example of the test procedure.

Fig. 7 Test image presentation.

Before the subjective tests, screening tests were conducted: a vision test using a Snellen chart, a color blindness test, and a stereoscopic acuity test using the Titmus test. In the Titmus test, there are nine diamonds, each containing four circles. One of the four circles is located at a different depth, and the viewer is asked to find that circle. The nine diamonds have increasing difficulty levels (1-9): the depth difference decreases with level, making the target circle harder to find. Each viewer was required to pass Level 5 of the Titmus test. The interpupillary distance of each participant was also recorded [15]. A total of 103 viewers participated in the tests: 62 males and 41 females, all between 20 and 30 years old.

After each subjective test, a post-screening was performed according to the guidelines described in [10,11] to remove untrustworthy viewers; that is, viewers whose ratings differed significantly from the others' were replaced by new viewers. At least 24 viewers participated in each test. For each stimulus, the depth perception accuracy was calculated as follows:

$$\text{depth perception accuracy} = \frac{\text{the number of viewers who chose the correct object}}{\text{the total number of viewers}}$$
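
As a reference, this computation is straightforward to implement. The following minimal Python sketch (with made-up responses, not data from our experiments) computes the accuracy for one stimulus:

```python
import numpy as np

def depth_perception_accuracy(responses, correct_object):
    """Fraction of viewers who chose the correct object for one stimulus."""
    responses = np.asarray(responses)
    return float(np.mean(responses == correct_object))

# Example: 24 viewers; Object B is the one at a different depth.
responses = ['B'] * 20 + ['A', 'C', 'D', 'A']
print(depth_perception_accuracy(responses, 'B'))  # 20/24 = 0.8333...
```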

3. Experimental results and analyses

3.1 Depth perception accuracy at depth difference

Table 3 shows the depth perception accuracy for each display. There are some variations among the displays and the objects. Displays A (FPR TV) and B (FPR PC monitor) showed good accuracy, while Display C showed the lowest accuracy. The viewers found Object A the most difficult to identify, whereas Object B was the most accurately recognized. Generally, small disparities on a wide object may be perceived less well than the same disparities on a narrow object. Figure 3 and Table 3 also show the relative widths (in pixels) of the objects: Object A has the largest width and Object B the smallest, which may have affected the recognition accuracy.

Table 3. Depth perception accuracy of each display

3.2 Correlation between displays

Figures 8-11 show the subjective test results for each display. In general, the depth perception accuracy increases as the depth difference increases. When the depth difference was 10 cm, the depth perception accuracy was less than 50%; when the depth difference was 20 cm or larger, the accuracy exceeded 80% in most cases. Conversely, as the distance between the camera and the objects increased, the depth perception accuracy decreased, as expected. Table 4 shows the correlation coefficients among the displays; the average correlation was 0.826. Figure 12 shows a scatter plot of Displays A and C. The points marked in red in Fig. 12 represent the images whose depth ratios were smaller than 0.05. In other words, the images whose accuracies differed noticeably across displays had small depth ratio values, indicating that their relative depth was difficult to perceive.
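
For reference, the inter-display correlations in Table 4 are ordinary Pearson correlation coefficients between the per-stimulus accuracies of two displays. A minimal sketch (the accuracy values below are illustrative, not our measured data):

```python
import numpy as np

# Hypothetical per-stimulus accuracies for two displays, aligned by stimulus
# (camera distance, depth difference). Values are for illustration only.
acc_display_a = np.array([0.45, 0.80, 0.92, 0.96, 0.98, 1.00])
acc_display_c = np.array([0.30, 0.70, 0.85, 0.90, 0.93, 0.97])

# Pearson correlation coefficient between the two displays' accuracies.
r = np.corrcoef(acc_display_a, acc_display_c)[0, 1]
print(f"correlation = {r:.3f}")
```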

Fig. 8 Depth perception accuracy at depth differences (Display A): (a) distance = 400 cm, (b) distance = 450 cm, (c) distance = 500 cm, (d) distance = 550 cm.

Fig. 9 Depth perception accuracy at depth differences (Display B): (a) distance = 400 cm, (b) distance = 450 cm, (c) distance = 500 cm, (d) distance = 550 cm.

Fig. 10 Depth perception accuracy at depth differences (Display C): (a) distance = 400 cm, (b) distance = 450 cm, (c) distance = 500 cm, (d) distance = 550 cm.

Fig. 11 Depth perception accuracy at depth differences (Display D): (a) distance = 400 cm, (b) distance = 450 cm, (c) distance = 500 cm, (d) distance = 550 cm.

Table 4. The correlation coefficients between each pair of displays

Fig. 12 Scatter plots between each pair of displays.

3.3 Relationship between depth perception and physical characteristics

Figure 13 shows scatter plots of depth perception accuracy versus interpupillary distance. The experimental results showed no clear relationship between depth perception accuracy and interpupillary distance, so this factor was not considered in modeling depth perception accuracy.

Fig. 13 Depth perception accuracy at different interpupillary distances: (a) Display A, (b) Display B, (c) Display C, (d) Display D.

Stereo acuity may also affect depth perception. Figure 14 shows the relationship between depth perception accuracy and the stereoscopic acuity level obtained from the Titmus test. Since no clear relationship was observed, stereoscopic acuity was not considered in modeling depth perception accuracy either.

Fig. 14 Depth perception accuracy at the stereoscopic acuity level: (a) Display A, (b) Display B, (c) Display C, (d) Display D.

4. Depth perception estimation

4.1 Depth perception estimation modeling based on the depth ratio

As can be seen in Figs. 8-11, the recognition accuracy increased as the depth difference between the objects increased. Also, the recognition accuracy decreased as the distance between the camera and the objects increased while the depth difference remained the same. Clearly, the ratio of the depth difference between the objects to the distance between the camera and the objects significantly affects the depth recognition accuracy. To investigate this relationship, we defined the depth ratio as follows:

$$\text{depth ratio} = \frac{\text{the depth difference between the objects}}{\text{the distance between the camera and the objects}}$$

The distance between the camera and the objects was measured from the camera to the three objects located at the same depth. Figure 15 plots the recognition accuracies against the depth ratio. As expected, the recognition accuracy increased as the depth ratio increased and then saturated beyond a certain level. For Displays A and B, the recognition accuracy became almost saturated at a depth ratio of about 0.07; for Display C, at about 0.08; and for Display D, at about 0.09-0.1. When the depth ratio was larger than 0.1, the recognition accuracy exceeded 90% in almost all cases.
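
To make the scale of this quantity concrete, the sketch below tabulates the depth ratio over the test grid. The camera distances (400-550 cm) are from Table 2; the depth differences of 10-60 cm in 10 cm steps are assumed for illustration (the tests used six positive and six negative differences):

```python
# Depth ratio = depth difference / camera distance, tabulated over a test grid.
camera_distances = [400, 450, 500, 550]        # cm, from Table 2
depth_differences = [10, 20, 30, 40, 50, 60]   # cm, assumed for illustration

for z in camera_distances:
    ratios = [dz / z for dz in depth_differences]
    print(z, [f"{r:.3f}" for r in ratios])
# The ratios span roughly 0.018-0.15; per Fig. 15, recognition accuracy
# saturates once the depth ratio exceeds about 0.07-0.1.
```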

Fig. 15 Depth perception accuracy at the depth ratio: (a) Display A, (b) Display B, (c) Display C, (d) Display D.

Figure 15 also suggests that the viewers recognized negative disparities better than positive disparities. Based on this relationship, several curve-fitting functions were investigated. We divided the data into training and test sets, used the training set to determine the coefficients, and used the test sets to evaluate performance. Figure 16 shows an example of curve fitting using a sigmoid function. After examining the characteristics of the fitted curves, we modeled them using three functions (sigmoid, exponential, and logarithmic), as follows:

$$\text{Sigmoid: } f(x) = \frac{a}{1 + e^{-bx}} + c$$
$$\text{Exponential: } f(x) = a_1 \exp(b_1 x) + a_2 \exp(b_2 x)$$
$$\text{Logarithmic: } f(x) = a \log(x) + b$$
We used three parameters ($a$, $b$, and $c$) for the sigmoid function, four parameters ($a_1$, $a_2$, $b_1$, and $b_2$) for the exponential function, and two parameters ($a$ and $b$) for the logarithmic function. The parameters were computed separately for each display.
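
A minimal sketch of how such a fit can be obtained with non-linear least squares (SciPy's curve_fit); the (depth ratio, accuracy) pairs below are illustrative, not our measured data:

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(x, a, b, c):
    # f(x) = a / (1 + exp(-b*x)) + c
    return a / (1.0 + np.exp(-b * x)) + c

# Illustrative (depth ratio, accuracy) training pairs for one display.
x = np.array([0.02, 0.04, 0.06, 0.08, 0.10, 0.12])
y = np.array([0.40, 0.65, 0.85, 0.93, 0.96, 0.97])

params, _ = curve_fit(sigmoid, x, y, p0=[1.0, 50.0, 0.0], maxfev=10000)
a, b, c = params
print(f"a = {a:.3f}, b = {b:.3f}, c = {c:.3f}")
```

The exponential and logarithmic models can be fitted the same way by swapping the model function.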

Fig. 16 Curve fitting.

Figure 17 shows the fitting results of the three functions for the four displays. Although there were some differences, the overall curve shapes were very similar. It is usually desirable for a model to be asymptotically correct (an accuracy of 0.25 at zero disparity for four objects, and 1 at infinite disparity). However, since the target input (disparity) covers only a limited range, we considered it more important for the model to perform well over that range. Only the sigmoid function was able to produce asymptotically correct models. Table 5 compares the performance of four models with different asymptotic behaviors; as can be seen, the models performed very similarly. On the other hand, when the number of objects changes, the lower target point (0, 0.25) also changes, which is undesirable for modeling. Thus, we used the model that is asymptotically correct at infinity (see Figs. 17(a) and 17(b)).

Fig. 17 Curve fitting (left: positive disparity, right: negative disparity): (a-b) sigmoid function (asymptotically correct), (c-d) exponential function (non-asymptotically correct), (e-f) logarithmic function (non-asymptotically correct).

Table 5. Performance comparison

4.2 Verification test

Next, the statistical significance of the results was examined. First, we applied ANOVA [16-19]; the hypotheses were verified with the F-test, which can be judged directly from the p-value. Table 6 shows the results of the two-way ANOVA. Both parameters (depth difference and camera distance) had significant effects on depth perception for Displays A and B (the effect of camera distance was marginally significant). However, for Displays C and D, only the depth difference had a significant effect. Table 7 shows the results of the one-way ANOVA for the proposed parameter (depth ratio), which had a significant effect on depth perception for all displays.

Table 6. The results of the two-way ANOVA test (camera distance, depth difference): (a) Display A, (b) Display B, (c) Display C, (d) Display D

Table 7. The results of the one-way ANOVA test (depth ratio)

In addition, Levene's test was conducted to assess the homogeneity of variance. The p-value of Levene's test was smaller than the significance level, which indicates that the data did not satisfy the homogeneity-of-variance assumption; thus, the reliability of the ANOVA analyses may not be guaranteed. We therefore also tested the proposed curve-fitting functions using the weighted least squares method [17]. The test results indicate that the two parameters and the proposed parameter have significant effects on depth perception.
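
For reference, both tests are available in standard statistics packages. A minimal Python sketch using SciPy (the grouped accuracy samples are illustrative, not our measurements):

```python
import numpy as np
from scipy import stats

# Illustrative accuracy samples grouped by depth-ratio level.
group_low  = np.array([0.35, 0.42, 0.38, 0.45])
group_mid  = np.array([0.70, 0.78, 0.75, 0.80])
group_high = np.array([0.93, 0.96, 0.95, 0.97])

# One-way ANOVA: does the depth ratio have a significant effect on accuracy?
f_stat, p_anova = stats.f_oneway(group_low, group_mid, group_high)

# Levene's test for homogeneity of variance (an ANOVA assumption).
w_stat, p_levene = stats.levene(group_low, group_mid, group_high)

print(f"ANOVA:  F = {f_stat:.2f}, p = {p_anova:.4f}")
print(f"Levene: W = {w_stat:.2f}, p = {p_levene:.4f}")
```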

Next, we used the leave-one-out method [20] to evaluate the model's performance on unseen data: each sample in turn was held out as test data while all the remaining samples were used for training. Figure 18 shows the differences between the real depth perception accuracy and the estimated accuracy. The results were similar to those of two-fold cross-validation. The proposed method produced good estimates for Displays A and B, whereas larger discrepancies were observed for Displays C and D.
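
A minimal sketch of this procedure for the sigmoid model (again using SciPy; here x and y stand for the per-stimulus depth ratios and accuracies of one display):

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(x, a, b, c):
    return a / (1.0 + np.exp(-b * x)) + c

def loo_residuals(x, y):
    """Leave-one-out: refit on all but one sample, predict the held-out one."""
    residuals = []
    for i in range(len(x)):
        mask = np.arange(len(x)) != i
        params, _ = curve_fit(sigmoid, x[mask], y[mask],
                              p0=[1.0, 50.0, 0.0], maxfev=10000)
        residuals.append(y[i] - sigmoid(x[i], *params))
    return np.array(residuals)  # one real-minus-estimated difference per sample
```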

Fig. 18 Differences between real accuracy and estimated accuracy (sigmoid functions, leave-one-out): (a) Display A, (b) Display B, (c) Display C, (d) Display D.

Tables 8-11 show the goodness of fit of the three fitting models. The sigmoid function showed the best performance for Displays A, B, and C, and the exponential function showed the best performance for Display D. Using the sigmoid and exponential functions, we were able to accurately estimate the expected recognition accuracy from the depth ratio.

Table 8. Performance analyses of the three functions (Display A)

Table 9. Performance analyses of the three functions (Display B)

Table 10. Performance analyses of the three functions (Display C)

Table 11. Performance analyses of the three functions (Display D)

4.3 Stereo random dot images

Monocular cues such as relative size and shape can affect depth perception. To exclude these effects, subjective tests were also conducted using stereo random dot images, which contain no monocular cues, and the proposed model was tested on this subjective data set. We used the disparity difference histogram of Fig. 6 to generate the stereo random dot images. Each image contained four objects, one of which had a different disparity. Figure 19 shows an example of the stereo random dot images. Additional subjective experiments using these random dot images were conducted on Display A, and we applied the proposed model (sigmoid, two-fold cross-validation) with the same parameters. Table 12 shows the correlation coefficients of depth perception accuracy between the real stereo images and the random dot stereo images, and Fig. 20 shows the depth perception accuracy of the random dot stereo images.
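
A minimal sketch of how such a stimulus can be generated (a single square target rather than the four objects used in our tests; the geometry values are placeholders):

```python
import numpy as np

def random_dot_stereo_pair(height=256, width=256, disparity=4,
                           box=(96, 96, 64, 64), seed=0):
    """Left/right random-dot pair in which only the `box` region
    (row, col, height, width) is shifted horizontally by `disparity` px."""
    rng = np.random.default_rng(seed)
    left = (rng.integers(0, 2, (height, width)) * 255).astype(np.uint8)
    right = left.copy()
    r, c, h, w = box
    # Shift the target region in the right image, then refill the uncovered
    # strip with fresh random dots so that no monocular cue reveals the target.
    right[r:r + h, c + disparity:c + w + disparity] = left[r:r + h, c:c + w]
    right[r:r + h, c:c + disparity] = \
        (rng.integers(0, 2, (h, disparity)) * 255).astype(np.uint8)
    return left, right
```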

Fig. 19 Example of the generated stereo random dot images.

Table 12. The correlation coefficients for the random dot stereo images

Fig. 20 Depth perception accuracy at disparity difference: (a) positive disparity, (b) negative disparity.

4.4 Modeling based on disparity

In the previous section, we estimated the recognition accuracy from the depth ratio. However, conventional disparity can also be used to predict recognition accuracy. Figure 21 shows the procedure for the disparity computation [21]. Table 13 shows the computed disparity values of the test images. As expected, the disparity value increased as the depth difference increased or as the camera distance decreased. The correlation between the depth ratio and the disparity was about 96%.
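
As background, for an idealized parallel-axis stereo camera the disparity difference between two depths follows the standard relation $d = bf(1/z_1 - 1/z_2)$, where $b$ is the baseline and $f$ the focal length. A minimal sketch (the camera parameters below are placeholders, not those of the camera used in the tests):

```python
def disparity_px(baseline_mm, focal_mm, z1_mm, z2_mm, pixel_pitch_mm):
    """Disparity difference (in pixels) between objects at depths z1 and z2
    for an idealized parallel-axis stereo camera: d = b*f*(1/z1 - 1/z2)."""
    d_mm = baseline_mm * focal_mm * (1.0 / z1_mm - 1.0 / z2_mm)
    return d_mm / pixel_pitch_mm

# Placeholder values: 65 mm baseline, 35 mm focal length, 0.01 mm pixel
# pitch, objects at 4.0 m and 4.2 m.
print(disparity_px(65.0, 35.0, 4000.0, 4200.0, 0.01))  # about 2.7 pixels
```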

Fig. 21 Binocular disparity in stereoscopic images.

Table 13. Computational disparity of various combinations of camera distances and depth differences

Figure 22 shows scatter plots of the recognition accuracies as a function of disparity. The same curve-fitting procedures were applied to the recognition accuracies and disparity values. The sigmoid function showed the best performance for Displays A and B, and the exponential function showed the best performance for Displays C and D.

Fig. 22 Depth perception accuracy at binocular disparity: (a) Display A, (b) Display B, (c) Display C, (d) Display D.

The depth perception estimation equations were used to predict depth perception accuracy from either the depth ratio or the disparity. Table 14 shows the correlations between the estimated accuracies and the viewers' recognition accuracies. The estimation based on computational disparity showed a 93.2% correlation with the depth recognition accuracies, whereas the estimation based on the depth ratio showed a higher correlation.

Table 14. Accuracy correlation comparison of the computational disparity and the proposed model

A potential problem with the computational disparity is that it can vary considerably when an object moves horizontally. In Fig. 23(a), Object Y moves horizontally while Object X stays at the same location; Fig. 23(b) shows the corresponding computational disparity, which decreases non-linearly, whereas the proposed depth ratio remains the same. Figure 24 shows the computational disparity as two objects with a fixed horizontal distance and a fixed depth difference move together horizontally from left to right. Although the depth difference is constant in Fig. 24, the computational disparity shows considerable variation. Because of this irregular behavior, the proposed depth ratio may provide more reliable depth perception prediction than the computational disparity.
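
This behavior can be sketched numerically. The toy example below uses the relative vergence angle of two points as a stand-in for the computational disparity (an assumption made for illustration; the actual disparity computation follows [21]) and shows that it shrinks with lateral offset while the depth ratio stays constant:

```python
import numpy as np

def vergence_angle(x_mm, z_mm, eye_sep_mm=65.0):
    """Binocular (vergence) angle, in radians, of a point at lateral offset x
    and depth z, for viewpoints at (+/- eye_sep/2, 0)."""
    return (np.arctan2(x_mm + eye_sep_mm / 2.0, z_mm)
            - np.arctan2(x_mm - eye_sep_mm / 2.0, z_mm))

# Two objects with a fixed 30 cm depth difference, moved together laterally.
for x in [0.0, 500.0, 1000.0, 1500.0]:  # mm
    disp = vergence_angle(x, 4000.0) - vergence_angle(x, 4300.0)
    print(f"x = {x:6.0f} mm  relative disparity = "
          f"{np.degrees(disp) * 3600:6.1f} arcsec  "
          f"depth ratio = {300.0 / 4000.0:.3f}")
# The relative disparity decreases as the pair moves off-center, while the
# depth ratio stays constant, mirroring the behavior shown in Fig. 24.
```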

Fig. 23 (a) horizontal movement, (b) computational disparity as a function of horizontal location.

Fig. 24 (a) two objects with a fixed horizontal distance and a fixed depth difference, (b) computational disparity as a function of horizontal location of object M.

5. Conclusions

In this paper, we conducted subjective tests of stereoscopic depth perception using various combinations of camera distances and depth differences. The results show that depth differences and camera distances have significant impacts on depth perception accuracy. We also proposed depth perception estimation equations based on the depth ratio and the computational disparity. The recognition accuracy predicted by the proposed model was highly correlated with the viewers' depth recognition accuracy: the correlation of the model based on the depth ratio was 0.952, whereas that of the computational disparity was 0.932. A drawback of the computational disparity is its non-linear behavior when objects move horizontally, whereas the depth ratio provides more consistent depth perception estimation. The proposed model may be improved by taking into account other factors such as object width, disparity level, and viewing angle. The proposed methods can be used to produce high-quality 3D content with good 3D effects.

Funding

National Research Foundation of Korea (NRF) (MEST) (No. 2011-0029381).

References and links

1. B. Julesz, "Binocular depth perception of computer-generated patterns," Bell Syst. Tech. J. 39(5), 1125–1162 (1960).

2. B. Julesz, Foundations of Cyclopean Perception (University of Chicago Press, 1971).

3. B. Julesz, "Binocular depth perception with familiarity cues," Science 145(3630), 356–362 (1964).

4. B. Rogers and M. Graham, "Similarities between motion parallax and stereopsis in human depth perception," Vision Res. 22(2), 261–270 (1982).

5. B. Rogers and M. Graham, "Motion parallax as an independent cue for depth perception," Perception 8(2), 125–134 (1979).

6. I. P. Howard and B. J. Rogers, Binocular Vision and Stereopsis (Oxford University Press, 1995).

7. H. M. S. Langlands, "Experiments in binocular vision," Trans. Opt. Soc. 28(2), 45–82 (1926).

8. F. Kooi and A. Toet, "Visual comfort of binocular and 3D displays," Displays 25(2), 99–108 (2004).

9. F. Speranza, W. Tam, R. Renaud, and N. Hur, "Effect of disparity and motion on visual comfort of stereoscopic images," Proc. SPIE 6055, 60550B (2006).

10. ITU-R, "Methodology for the subjective assessment of the quality of television pictures," ITU-R Recommendation BT.500-11 (2003).

11. ITU-R, "Subjective methods for the assessment of stereoscopic 3DTV systems," ITU-R Recommendation BT.2021 (2012).

12. I. Tsirlin, L. Wilcox, and R. Allison, "The effect of crosstalk on the perceived depth from disparity and monocular occlusions," IEEE Trans. Broadcast. 57(2), 445–453 (2011).

13. R. Patterson, "Human factors of 3-D displays," J. Soc. Inf. Disp. 15(11), 861–871 (2007).

14. ITU-T, "Subjective video quality assessment methods for multimedia applications," ITU-T Recommendation P.910 (2008).

15. L. S. Sasieni, The Principles and Practice of Optical Dispensing and Fitting (Butterworths, 1975).

16. G. Snedecor and W. Cochran, Statistical Methods (Oxford & IBH, 1967).

17. D. Anderson, D. Sweeney, and T. Williams, Statistics for Business & Economics (Cengage Learning, 2011).

18. R. Lomax, Statistical Concepts: A Second Course (Routledge, 2007).

19. G. Box, "Non-normality and tests on variances," Biometrika 40(3/4), 318–335 (1953).

20. R. Duda, P. Hart, and D. Stork, Pattern Classification (John Wiley & Sons, 2012).

21. N. Qian, "Binocular disparity and the perception of depth," Neuron 18(3), 359–368 (1997).
