
High-resolution 3D shape measurement with extended depth of field using fast chromatic focus stacking

Open Access

Abstract

Close-range 3D sensors based on the structured light principle have a constrained measuring range due to their depth of field (DOF). Focus stacking is a method to extend the DOF. The additional time to change the focus is a drawback in high-speed measurements. In our research, the method of chromatic focus stacking was applied to a high-speed 3D sensor with 180 fps frame rate. The extended DOF was evaluated by the distance-dependent 3D resolution derived from the 3D-MTF of a tilted edge. The conventional DOF of 14 mm was extended to 21 mm by stacking two foci at 455 and 520 nm wavelength. The 3D sensor allowed shape measurements with extended DOF within 44 ms.

© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

1.1 Motivation

Optical measurement systems that capture objects or scenes in their three-dimensional (3D) shape are widely used in industrial quality control, cultural heritage, medicine, human-machine interaction (HMI), robotic vision, and forensics [1–11]. Close-range photogrammetry in combination with a structured light projector is one popular method and the subject of this paper. Compared to other established techniques, such as laser scanning or photogrammetry, it achieves a high resolution in terms of point density and detail level as well as high speed in terms of fast data acquisition and low latency [12–14].

Structured light 3D sensors consist of at least one camera and one active light projector unit arranged in a triangulation setup [12]. Digital projectors, e.g., based on DLP (Digital Light Processing), are widely used, but other technical realizations of pattern projectors exist, such as gobo, laser speckle, or array projection [15–17]. A sequence of varying structured patterns, e.g., phase-shifted sinusoidal or aperiodic fringe patterns [18,19], is projected into the scene and captured by the camera(s). Through correlation of the patterns, corresponding points are identified, which allows reconstructing the 3D coordinates of the surface points by triangulation from the pre-calibrated camera(s) and/or projector.
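For illustration, the following minimal Python sketch shows the classic four-step phase-shifting retrieval that is commonly used with such sensors (a sketch of the general principle, not the specific correlation scheme of any particular sensor; all names are our own):

```python
import numpy as np

def wrapped_phase(I0, I1, I2, I3):
    """Standard 4-step algorithm for patterns I_k = A + B*cos(phi + k*pi/2).
    Returns the wrapped phase in (-pi, pi]; unwrapping is a separate step."""
    return np.arctan2(I3 - I1, I0 - I2)

# Toy example: synthesize four shifted images of a horizontal phase ramp
phi = np.linspace(0, 4 * np.pi, 640) * np.ones((480, 1))
imgs = [128 + 100 * np.cos(phi + k * np.pi / 2) for k in range(4)]
phi_wrapped = wrapped_phase(*imgs)  # recovers phi, wrapped into (-pi, pi]
```

The wrapped phase serves as the correspondence code: camera pixels and projector columns sharing the same (unwrapped) phase are corresponding points for triangulation.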

As an optical system, a structured light 3D sensor's performance depends on diverse properties of its optical components, i.e., light source, lenses, aperture stops, detector, etc. Naïvely optimizing one property degrades one or several others. One conflict is between the optical depth of field (DOF), which determines the measuring range of a structured light 3D sensor, i.e., the range between near and far plane, and the light throughput, which limits the maximum achievable measuring speed.

Because a projector is required, the light throughput from the light source to the detector is always a limiting factor. Therefore, structured light sensors are primarily applied in close-range situations with a small field of view that can be illuminated with reasonable effort. Known system realizations range up to a 4 × 2.2 m² field of view at a distance of ∼ 4 m [20] but require very large light sources [10] that, due to size and weight restrictions, are not applicable in domains with limited space and object distance, such as the inline inspection of printed circuit boards.

The limited light throughput can be counteracted by lenses with a large aperture (equivalent to a small F-number) [21]. However, that practice reduces the DOF and contradicts our goal. 3D shape measurement systems require a large DOF to achieve a certain level of detail over a large measuring range, for example, to resolve tiny components at different elevation layers on PCBs or to be more tolerant against positional errors of the object of interest. Sensors with high magnification especially achieve their specified resolution only within a few millimeters of depth. Application-specific tradeoffs are often made between acquisition time (dictated by light throughput) and DOF [22].

A requirement in this work was to extend a sensor's DOF without sacrificing most of its other properties; in particular, we wanted to retain high performance in terms of speed and resolution. Next, we present options for extending the DOF of a 3D sensor and their tradeoffs.

1.2 Related work

Different methods have been proposed to extend the conventional DOF of optical imaging systems. They were primarily invented and used in 2D imaging; in our review, only methods which were applied to structured light 3D sensors are considered. The basic idea to extend the DOF is to refocus on different focal planes during the acquisition. Typically, the size of the projected patterns is large compared to the camera resolution, so that the DOF of the projector is not the limiting constraint and only the camera lens of the structured light sensor is refocused. The known methods can be roughly grouped into focus stacking and focus sweeping. While stacking describes the subsequent 3D data acquisition at a set of distinct focal planes, sweeping indicates a single acquisition during the shift of the focal plane.

In the context of 3D sensors, stacking results in a set of multiple 3D point clouds, e.g., realized by [23]. Each single point cloud may contain blurred features which originate from object points outside the related DOF. However, due to the known camera-to-object distances for each point, they can be excluded from further analysis. In case the scene does not change during acquisition, the point clouds can be merged into one by using the ‘best’ focus for each point.

Focus sweeping is done in 2D imaging by continuously changing the focal plane during the exposure time of the camera sensor. By deconvolution with the known point spread function (PSF), an unblurred image can be recovered under the assumption that the PSF is independent of depth [24]. In structured light sensors, which require the acquisition of multiple temporally varying patterns, the sweeping process must be repeated for each pattern. Then, the stack of 2D images is deconvolved to a stack of unblurred images which is ultimately used for 3D point cloud reconstruction, e.g., realized by [25].
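As a minimal sketch of this reconstruction step, the following Python snippet applies Wiener-style deconvolution with a known, depth-invariant sweep PSF (our own illustrative code, not the pipeline of [25]; the PSF is assumed to be pre-measured, centered, and of the same shape as the image):

```python
import numpy as np

def wiener_deconvolve(blurred, psf, k=1e-3):
    """All-in-focus recovery from a focal-sweep image via Wiener filtering.
    k regularizes the noise amplification at weak frequencies."""
    H = np.fft.fft2(np.fft.ifftshift(psf))   # centered PSF -> zero-phase OTF
    G = np.fft.fft2(blurred)
    F = np.conj(H) / (np.abs(H) ** 2 + k) * G
    return np.real(np.fft.ifft2(F))
```

For a structured light sensor, this has to be repeated for every pattern image of the sequence, which is where the computational cost noted below comes from.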

While focus stacking is associated with a time-consuming acquisition, especially in case of many focus positions, focus sweeping involves a time-consuming reconstruction, as image deconvolution is a computationally expensive operation [26,27]. Furthermore, deconvolution is sensitive to noise and cannot recover high spatial frequencies robustly [27–29]. Thus, focus stacking should be the method of choice if the requirements in terms of detail level, resolution, and accuracy outrank a short acquisition time. Previous works about sweeping and deconvolution, e.g., [21,30,31], compared their results against focused images as reference. Others did not quantitatively evaluate how detail level and resolution are affected [25,27]. A focus stacking solution would be applicable to high-speed 3D shape measurement if a small set of focus positions is sufficient and if a fast change of the focal planes is realized. The projection of time-varying textures during the focus sweep was evaluated by [32] with the goal to improve depth recovery, but they found no major advantage for triangulation methods.

The mechanisms to change the focal plane are similar for focus stacking and sweeping and influence the overall time expense. In general, refocusing can be done by shifting the image sensor (e.g., [21]) or the imaging lens. Any mechanical shifting adds time delays, which makes it less attractive for high-speed applications. Shifting the measurement object is an option in applications where the object is small, rigid, lightweight, and controllable. In general, it is beneficial to integrate the refocusing mechanism directly into the 3D shape measurement system itself to be independent of object characteristics. Another proposed refocusing mechanism is changing the focal length of the imaging lens. Zoom objectives accomplish this by shifting single or multiple lenses, which again adds time delays. Liquid lenses allow quick changes of the focal length by deforming the lens surface. The usage of an electrically tunable liquid lens (ETL) in front of the conventional camera lens of a structured light 3D sensor was evaluated in [25] and [32]; the time delay is minimized to a few milliseconds. As liquid lenses consist of elastomer material, the image quality is limited compared to glass lenses [33].

In [34], the axial chromatic aberration of optical materials, normally corrected by, e.g., achromatic or apochromatic lens designs, was proposed to be used as a benefit; this idea provided the impetus for chromatic confocal microscopy. The same idea was proposed by [31] and [35] for focus sweeping by exploiting wavelength-dependent focal lengths. Refocusing is then realized through mixing illumination with different wavelengths. With common RGB color cameras, up to three focal planes could be captured at once, but with limited spatial resolution due to the Bayer pattern. With monochrome cameras, which are typically used in structured light 3D sensors to retain maximal spatial resolution, the colors would be captured sequentially (for focus stacking) or at once (for sweeping). It was discussed in [32] to extend their idea of time-varying texture projection during focus sweep by spectrally varying textures. Switching between LEDs of different wavelengths introduces negligible time delays of a few microseconds, which is a benefit over other refocusing mechanisms, such as ETLs. An additional advantage of such a chromatic aberration imaging lens is its reduced complexity, as neither electronic control nor moving parts are required.

A totally different idea to extend the DOF of an imaging system is to take the amount of blur itself into account. The usage of coded apertures, which generate an engineered PSF of known shape dependent on the distance from the camera, is a considerable research area [36,37] and can be combined with modern deep learning methods [38–40]. Chromatically dependent blur is used in this context by [41] and [42]. Such methods aim not merely to extend the DOF, but to estimate depth information directly from a single, monocular image. This allows fast 3D image acquisition with a single camera, which is very interesting for the consumer electronics market [43]. Nevertheless, the spatial resolution of those “depth from defocus” [37] techniques is limited because the blurred information cannot be fully recovered [44]. At least until now, they do not achieve a resolution close to the Nyquist limit of the used pixel array, as is the claim for structured light setups [45,46].

1.3 Proposed method

In our research, the chromatic refocusing method was designed into a camera lens and applied to a high-resolution and high-speed 3D sensor including a projector with switchable color LEDs (Fig. 1). We evaluated chromatic focus stacking with two focal planes against focus stacking by refocusing a motorized camera lens. Our 3D sensor aimed at a resolution below 100 µm; doubling the DOF to a few tens of millimeters would already advance its practicability. In contrast to previous works, we characterized the extended DOF in 3D space by measuring the distance-dependent 3D resolution using an expansion of the 3D-MTF approach from [47].

Fig. 1. Schematic diagram of a structured light 3D sensor applying chromatic focus stacking in the camera lens in combination with a multi-color projector.

Our lens design is explained in Section 2. In Section 3, we show the results of our experiments, which are discussed in Section 4. Section 5 summarizes our work. In the supplementary material (SM), we have included several appendices. Theoretical background with regard to DOF is outlined in Supplemental 1. Supplementals 2 and 3 include additional results from our lens design and MTF characterization. We characterized the DOF of our experimental 3D sensor setups by their distance-dependent 3D resolution derived from their 3D-MTF. The applied procedure is explained in detail in Supplemental 4.

2. Design

2.1 Chromatic aberration

Because of the dispersive property of refractive materials, the focal length $S^{\prime}$ of optical systems depends on the wavelength λ [48] (Fig. 2). The variation $\mathrm{\Delta }S_{\textrm{ax}}^{\prime}$ of paraxial focal length between two wavelengths ${\lambda _1}$ and ${\lambda _2}$ is called axial chromatic aberration and is given by

$$\Delta S_{\textrm{ax}}^{\prime} = S^{\prime}(\lambda_1) - S^{\prime}(\lambda_2).$$

Chromatic aberration is often corrected in lens design because it reduces image quality for broadband light spectra. However, it can also be exploited to extend the DOF, as a lens with axial chromatic aberration has a focal length that varies with wavelength [48]. In chromatic confocal microscopes, this idea has been applied for many years to make distance measurements [34].
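The order of magnitude of this effect can be estimated with the thin-lens relation for primary axial color, where the focal-length spread between the F (486 nm) and C (656 nm) lines of a singlet is roughly f/V_d, with V_d the Abbe number of the glass. A small illustrative computation (glass choice and focal length are our own assumptions, with f close to the design in Section 2.2):

```python
f_mm = 26.5           # focal length, close to the design in Section 2.2
V_d = 64.2            # Abbe number of N-BK7, an illustrative glass choice
delta_f_mm = f_mm / V_d
print(f"axial color between F and C lines: {delta_f_mm:.2f} mm")  # ~0.41 mm
```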

Fig. 2. Axial chromatic aberration $\mathrm{\Delta }S_{\textrm{ax}}^{\prime}$ of a lens.

Two or more wavelengths can be used to extend the conventional DOF. Through a proper optical design, the focus positions for selected wavelengths are placed such that their DOFs are concatenated (Fig. 1). The total DOF is then the sum of all spectral DOFs. In contrast to confocal systems, focus stacking or sweeping requires a two-dimensional imaging lens. This method was evaluated in 2D imaging by [31]. Our research investigates the applicability of chromatic focus stacking to a structured light triangulation 3D sensor.

2.2 Design of a camera lens applying chromatic focus stacking

We started with the design of a camera lens configured for two-channel chromatic focus stacking. Table 1 summarizes relevant parameters for the lens design, which we later realized in our experimental 3D sensor setup. The theoretical background we have considered with regard to the DOF of imaging lenses can be found in Supplemental 1. Furthermore, in a preliminary design study (cf. SM2), we investigated the dependence of the DOF on influencing parameters by using OpticStudio (www.zemax.com). The simulation in the study showed that the optimal parameter set, including F-number F#, wavelength λ, and magnification M, can reach a conventional DOF of about 17 mm at best, i.e., less than our requirement of > 20 mm. Next, we explain briefly how the requirements in Table 1 are derived and realized.

Table 1. Design requirements for the chromatic focus stacking camera lens

The main criterion for the DOF was an object-side modulation of at least 20% @ 10 line pairs per mm (LP/mm). The modulation is represented through the modulation transfer function (MTF). It was shown by [49] and [50] that for the MTF under defocus, neglecting diffraction, the circle of confusion $D^{\prime}$, i.e., the allowed non-distinguishable diameter in the sensor plane, corresponds to the inverse of the spatial frequency $R^{\prime}$ at which the modulation drops to 20%:

$$R_{\textrm{MTF20}}^{\prime} \approx \frac{1}{D^{\prime}}, \qquad R_{\textrm{MTF20}} \approx \frac{1}{D}.$$

Therefore, ${R_{\textrm{MTF20}}} = 10\; \textrm{LP/mm}$ in object space is approximately equivalent to a circle of confusion $D = 0.1\; \textrm{mm}$. $D^{\prime}$ and $R^{\prime}$ are related to their object-side counterparts D and R by the lens magnification M [51,52].
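The worked numbers behind this requirement, including the image-side conversion via M, read as follows (a small sketch; M = 1/12 is fixed in the next paragraph):

```python
R_obj = 10          # LP/mm at 20% modulation, object side (requirement)
D_obj = 1 / R_obj   # 0.1 mm circle of confusion in object space
M = 1 / 12          # lens magnification (see next paragraph / Table 1)
D_img = D_obj * M   # ~8.3 µm allowed blur circle on the sensor
R_img = 1 / D_img   # = 120 LP/mm image-side frequency (cf. Section 2.2)
```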

Camera parameters in Table 1 are given through our intended image sensor, the Gpixel GMAX0505. We set the camera-to-object distance to 330 mm, close to our design study in Supplemental 2. The magnification M is pre-defined by the relation between the widths of the camera sensor and the field of view. Including a small margin, it is set to $M = 1/12$. This results in a ground sample distance (GSD) of ∼ 30 µm. As the negative sign of M is not relevant for the DOF considerations, it is neglected. Preliminary experiments with a Tamron 35mm f/1.4 Di USD camera lens showed that $F\#$ ≤ 5.6 would permit exposure times below 5.5 ms, which is required for image acquisition at a 180 Hz frame rate in our experimental 3D sensor setup (cf. Section 3.2). Our design study in Supplemental 2 revealed that shorter wavelengths are favorable in our case with F# close to the critical aperture $F{\# _{\textrm{crit}}}$ (cf. Eq. (S7) in SM: $F{\# _{\textrm{crit}}} = 7.5\; @\; \lambda = 455\; \textrm{nm}$, $F{\# _{\textrm{crit}}} = 6.6\; @\; \lambda = 520\; \textrm{nm}$). We select 455 and 520 nm as wavelengths for chromatic focus stacking because they are typical for blue and green LEDs in DLP projectors.
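For reproducibility, a short sketch of how magnification and GSD follow from the sensor geometry (the 2.5 µm pixel pitch of the GMAX0505 is taken from the public sensor specification and is our assumption here):

```python
pixel_pitch_um = 2.5                            # GMAX0505 pixel pitch (assumed)
sensor_width_mm = 5120 * pixel_pitch_um / 1e3   # 12.8 mm across 5120 pixels
fov_width_mm = 155                              # field width (cf. Section 3.1)
M = sensor_width_mm / fov_width_mm              # ~1/12.1, set to 1/12 with margin
gsd_um = pixel_pitch_um / M                     # ~30 µm on the object
```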

The lens optimization shows that F# = 4 is optimal to achieve an extended DOF of $\textrm{DO}{\textrm{F}_{\textrm{extended}}} > 20\; \textrm{mm}$. According to the optical theory of DOF (cf. Eqs. (S3), (S5), and (S6) in SM), the single depths of field $\textrm{DO}{\textrm{F}_{\textrm{tot}}}$ would be 13.8 mm ($\lambda = 455\; \textrm{nm}$) and 14.4 mm ($\lambda = 520\; \textrm{nm}$) for the designed parameter set. Therefore, a $\textrm{DO}{\textrm{F}_{\textrm{extended}}} = 28.2\; \textrm{mm}$ could be achieved in theory. In practice, the spectral bandwidth of the light sources, the overlap of both DOFs, and higher-order aberrations will reduce the achievable value. Those factors are considered in our lens design in OpticStudio.
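These values stem from the supplemental equations; as a rough cross-check, a common textbook approximation that adds a geometric and a diffraction DOF term gives numbers in the same range (our own sketch; the paper's exact expressions in Eqs. (S3), (S5), (S6) differ in detail):

```python
def dof_mm(f_number, wavelength_nm, M, coc_obj_mm=0.1):
    """Object-side DOF: geometric term + diffraction term (approximation)."""
    coc_img = coc_obj_mm * M                           # blur circle on sensor
    dof_geom = 2 * f_number * coc_img * (1 + M) / M**2
    dof_diff = 4 * wavelength_nm * 1e-6 * f_number**2 / M**2
    return dof_geom + dof_diff

for lam_nm in (455, 520):
    print(lam_nm, round(dof_mm(4, lam_nm, 1 / 12), 1))
# prints ~14.6 and ~15.2 mm, in the vicinity of the 13.8/14.4 mm quoted above
```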

Figure 3 shows the DOF curve of the designed lens system. MTF values for $R = \; 10\; \textrm{LP}/\textrm{mm}$ are plotted against the focus position for the two wavelengths. The threshold is set to 35%, including a 15% margin for manufacturing tolerances. Spectra of (455 ± 10) nm and (520 ± 20) nm are simulated. $\textrm{DO}{\textrm{F}_{\textrm{tot}}}$ is 11 mm ($\lambda = 455\; \textrm{nm}$) and 11.3 mm ($\lambda = 520\; \textrm{nm}$); the overlap is 1.1 mm. In this way, an extended DOF of $\textrm{DO}{\textrm{F}_{\textrm{extended}}} = 21.2\; \textrm{mm}$ can be achieved by our lens design at a spatial frequency of $R_{\textrm{MTF20}}^{\prime} = 120\; \textrm{LP/mm}$ on the image side, or rather ${R_{\textrm{MTF20}}} = 10\; \textrm{LP/mm}$ on the object side. The lower peak modulation at λ = 520 nm arises mainly from the stronger diffraction at longer wavelengths. In addition, the coarser spatial resolution (GSD) at the larger distance is of relevance. As can be extrapolated from Fig. 3, our criterion on modulation would be hard to achieve for wavelengths λ > 600 nm. The focal lengths of our design are 26.464 mm ($\lambda = 455\; \textrm{nm}$) and 26.509 mm ($\lambda = 520\; \textrm{nm}$).

Fig. 3. DOF curves as modulation value against defocus for two wavelengths 455 and 520 nm at spatial frequency $R^{\prime} = 10\frac{{\textrm{LP}}}{{\textrm{mm}}}$.

The simulated camera lens design in Fig. 3 demonstrates that the axial chromatic aberration can be tailored for our purpose in such a way that the concatenation of spectral DOFs is realizable. Two narrow-band light sources (e.g., LEDs) must be used to illuminate the object sequentially.

3. Experiment

3.1 Camera lens characterization

The camera lens design from Section 2.2 was manufactured as a prototype by Docter Optics. First, we checked the camera lens performance in combination with the camera CB262MG-GP-X8G3 from Ximea, which includes the camera sensor from Table 1. The lens was mounted via a C-mount thread. A field of view of 155 × 120 mm² was realized at the designed camera-to-object distance. A CTF (contrast transfer function) target was used to evaluate the contrast @ 10 LP/mm. CTF and MTF values can be converted into each other by a factor of 4/π [52]. We placed the target in the center of the field of view and moved it stepwise along the optical axis by a motorized linear stage M-414.2PD from PI. Two LED sources and a diffusor were used to alternately illuminate the target. We checked their spectral emission beforehand with a multi-channel spectrometer from Zeiss: (455 ± 8) nm and (520 ± 15) nm.
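For reference, the modulation read-out and the conversion used here can be sketched in a few lines (our own helper names; the 4/π factor is the first-order Coltman relation between square-wave contrast and sine-wave MTF):

```python
import numpy as np

def michelson(region):
    """Modulation of a bar-target image region, robust to outliers."""
    i_max, i_min = np.percentile(region, 95), np.percentile(region, 5)
    return (i_max - i_min) / (i_max + i_min)

def ctf_to_mtf(ctf):
    return np.pi / 4 * ctf   # MTF ~ (pi/4) * CTF at the fundamental frequency
```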

Figure 4 shows the measured DOF curves as plots of the modulation @ 10 LP/mm in relation to the defocus position. The blue focus channel is above threshold between −14 and −1 mm, the green channel between −2 and +11 mm. An extended $\textrm{DO}{\textrm{F}_{\textrm{extended}}} = 25\; \textrm{mm}$ was realized, including an overlap region of 1 mm. One can conclude that the manufactured prototype performs better than the toleranced design with regard to DOF, given that the measured $\textrm{DO}{\textrm{F}_{\textrm{extended}}}$ is nearly 4 mm larger than in the simulated design (cf. Figure 3). Yet, the peak modulations of both focus channels are ∼ 20% below the design; manufacturing tolerances are likely the main reason for this drop. Influences from the camera and image processing were avoided by using unprocessed raw images. The peak modulation in the green channel is smaller compared to the blue one, which was predicted by the designed DOF curve.
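The DOF limits quoted above follow from threshold crossings of the sampled curves; a minimal sketch of that evaluation (our own code, using linear interpolation between measurement positions):

```python
def dof_limits(z_mm, mod, thr):
    """Positions where a sampled DOF curve crosses the modulation threshold."""
    crossings = []
    for i in range(len(z_mm) - 1):
        if (mod[i] - thr) * (mod[i + 1] - thr) < 0:          # sign change
            t = (thr - mod[i]) / (mod[i + 1] - mod[i])
            crossings.append(z_mm[i] + t * (z_mm[i + 1] - z_mm[i]))
    return crossings

# Stacked channels: DOF_extended = DOF_blue + DOF_green - overlap,
# here [-14, -1] and [-2, +11] mm -> 13 + 13 - 1 = 25 mm.
```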

Fig. 4. DOF curves at wavelengths 455 and 520 nm with modulation over defocus position and example images of the used CTF target with 10 LP/mm at −8, 0 and +8 mm.

In Supplemental 3, we additionally show the MTF characterization of our lens prototype determined with an ImageMaster Universal system from TRIOPTICS. Nevertheless, the 2D characterization of the DOF by MTF or CTF does not take into account influences from the 3D sensor setup and data processing. Next, it was evaluated how the extended DOF is represented in 3D measurements.

3.2 3D sensor setup with focus stacking

We evaluated the applicability of chromatic focus stacking for 3D scanning in an experimental structured light 3D sensor setup considering the parameters from Table 1. The experimental setup is shown in Fig. 5 on the left side. It consisted of one camera (2) with the chromatic aberration lens prototype (1) and one DLP projector (3). The overlap of the projector and camera fields of view resulted in a 3D sensor field of view of 155 × 104 mm². The DLP projector LightCrafter 4500 allowed instant switching between the integrated RGB LEDs within a pattern sequence, which is a necessity for fast chromatic focus stacking. The peak wavelengths of the integrated LEDs are close to our lens design.

Fig. 5. 3D sensor setups with chromatic (left) and mechanical focus stacking (right): 1 – camera lens, 2 – camera, 3 – projector

The right side of Fig. 5 shows a second 3D sensor setup, which was used for reference tests. In contrast to the first setup, the camera lens was a Tamron 35mm f/1.4 Di USD. It included a motorized focus control, so that mechanical focus stacking could be realized. A focus change of 10 mm took about 80 ms. This setup was used in some instances to stack more than two focus positions. The reference setup was realized with parameters approximating the experimental setup, so that the different refocusing mechanisms are comparable and other influences (e.g., the ground sample distance, GSD) are negligible. Table 2 summarizes the realized specifications of the experimental and reference setups.

Table 2. Comparison of experimental and reference 3D sensor setups

We applied classical phase shifting of sinusoidal patterns in both setups [1,2,12]. The sinusoidal patterns projected by the DLP had a spatial frequency of about 0.75 LP/mm in object space. Considering optical theory (cf. Eq. (S1) in SM), the projector DOF is thereby more than one order of magnitude larger than the camera DOF for the desired 10 LP/mm. The camera was oriented perpendicular to the scene to make optimum use of its DOF. Tilted setups would be an option if the camera lens were mounted in a Scheimpflug arrangement [51].

One main criterion for our 3D sensor setup was a minimal acquisition time. The camera allowed 180 frames per second (fps) for the applied region of interest of 5,120 × 3,980 pixels. We used a sequence of four phase-shifted sinusoidal patterns for 3D data acquisition as a reasonable compromise between speed and noise [53]. This resulted in a raw acquisition time of 22 ms, repeated for both focus positions with the refocusing time in between. In comparison, the chromatic focus stacking took 44 ms and the reference setup with mechanical focus stacking took 124 ms for the complete acquisition process. Almost a factor of three was gained.
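The timing budget behind these numbers is simple arithmetic (values from this section):

```python
fps = 180
patterns = 4
t_seq_ms = patterns / fps * 1e3       # ~22 ms per focus position
t_chromatic_ms = 2 * t_seq_ms         # LED switch costs microseconds -> ~44 ms
t_mechanical_ms = 2 * t_seq_ms + 80   # plus ~80 ms motorized shift -> ~124 ms
```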

The phase values were unwrapped using known unwrapped phase values on a reference plane captured beforehand, as described in [22] and [54]. The four patterns were stacked as 6-bit images into one 24-bit frame to use the LightCrafter 4500 in high-speed mode.
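One plausible way to perform such packing is to place each 6-bit pattern into six consecutive bit planes of the 24-bit RGB buffer (a sketch under that assumption; the actual plane order is set by the LightCrafter pattern-sequence configuration):

```python
import numpy as np

def pack_patterns(patterns_8bit):
    """Pack four patterns, quantized to 6 bit, into one 24-bit RGB frame."""
    frame24 = np.zeros(patterns_8bit[0].shape, dtype=np.uint32)
    for k, img in enumerate(patterns_8bit):            # k = 0..3
        frame24 |= (img.astype(np.uint32) >> 2) << (6 * k)
    r = (frame24 & 0xFF).astype(np.uint8)              # bit planes 0-7
    g = ((frame24 >> 8) & 0xFF).astype(np.uint8)       # bit planes 8-15
    b = ((frame24 >> 16) & 0xFF).astype(np.uint8)      # bit planes 16-23
    return np.dstack([r, g, b])
```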

We calibrated the experimental and the reference 3D sensor setups by using a target with circle markers in 20 varying positions and the bundle-block adjustment software BINGO ATM (www.bingo-atm.de). The geometric accuracy was checked without focus stacking by a distance normal of length 60.114 mm consisting of two spheres of Ø10 mm, measured in seven positions following [55]. Both setups are comparable, with a length measurement error of < 1% (0.59 mm) and a probing error of < 14 µm standard deviation against the sphere geometry. This further underlined that both 3D sensor setups were approximately identical except for the refocusing mechanism. In the following characterization and comparison, we focused on resolution and DOF, which are not covered by [55].

3.3 Evaluation of extended depth of field

3.3.1 Distance-dependent 3D resolution

In contrast to 2D imaging, there is no standardized method to characterize the resolution, modulation, or DOF of a 3D sensor. Based on the method to determine a sensor's 3D-MTF [47,56,57], we devised an extension to determine the distance-dependent 3D resolution. Details regarding the 3D-MTF and our procedure to characterize the DOF are explained in Supplemental 4. Because the 3D-MTF becomes erratic for small modulation values [47,57], it did not allow evaluating the cutoff frequency at the intended modulation threshold of 20% at 10 LP/mm. We decided to use an adapted image quality criterion to characterize the DOF through the 3D-MTF. We found the criterion provided by [58] for 2D images to be reasonable: the range in which a modulation > 60% was achieved at 4 LP/mm.
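The full 3D procedure (projecting the tilted-edge point cloud and binning it into an oversampled edge profile) is given in Supplemental 4; for orientation, the final step mirrors the classic slanted-edge recipe from 2D imaging, sketched here with our own names:

```python
import numpy as np

def edge_profile_to_mtf(esf):
    """ESF -> LSF by differentiation, then |FFT|, normalized to DC."""
    lsf = np.gradient(esf)
    lsf = lsf * np.hanning(len(lsf))   # window against spectral leakage
    mtf = np.abs(np.fft.rfft(lsf))
    return mtf / mtf[0]
```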

3.3.2 Extended DOF with mechanical focus stacking

First, the DOF was evaluated in the reference setup applying mechanical focus stacking. Figure 6 shows the DOF curve as cutoff frequency over distance from the camera for five consecutive focus positions. A focus shift of 10 mm was applied in between. F# was set to 4, equivalent to the camera lens in the chromatic focus stacking setup. The curves have a DOF of ∼ 15 mm (focus 1) and ∼ 14 mm (focus 2). The first two foci have an overlap of ∼ 7 mm, so that a $\textrm{DO}{\textrm{F}_{\textrm{extended}}}$ of approximately 22 mm was achieved. It can be concluded that a focus shift larger than 10 mm could be applied to further extend the DOF. The subsequent foci 3, 4, and 5 tended toward smaller DOFs: with larger distance from the camera, the magnification decreased, which reduced the spatial resolution (GSD). The object distances are measured from the optical camera center in our evaluations, so that the values are larger than the physical distances from the front side of the camera lens.

Figure 7 shows the drawback when the F-number is close to or larger than $F{\# _{\textrm{crit}}}$. Although the measured cutoff frequency is on a constant level over a large depth range, the maximum value is limited by diffraction. With F# = 8, the cutoff frequency is not reliably above the threshold of 4 LP/mm.

Fig. 6. Measured DOF curves for five consecutive focus positions by mechanical focus stacking with F# = 4 (the DOF limits and overlap for focus 1 and 2 are marked).

Fig. 7. Measured DOF curve for a single focus position with F# = 8.

3.3.3 Extended DOF with chromatic focus stacking

Figure 8 shows the measured DOF curves for the experimental 3D sensor setup applying chromatic focus stacking. The focus position for λ = 455 nm is closer to the camera and yields a DOF of ∼ 14 mm. The focus position for λ = 520 nm leads to a DOF of ∼ 8 mm. The overlap is ∼ 1 mm, so that a $\textrm{DO}{\textrm{F}_{\textrm{extended}}}$ of approximately 21 mm was achieved.

Fig. 8. Measured DOF curves for chromatic focus stacking at two wavelengths.

The green focus position was less prominent in comparison to the blue one; the peak cutoff frequencies differ by 25%, whereas the 2D characterization of the DOFs in Fig. 4 showed a difference of only 6% in modulation. The main reason was found in the light sources of the DLP projector LightCrafter 4500. Its blue LED had a narrow spectral emission of ± 10 nm with peak and dominant wavelength at 455 nm, equal to the intended lens design. Unfortunately, the green LED was based on phosphor conversion, which extended the spectrum toward longer wavelengths. The spectrum's center of gravity was at 542 nm with a bandwidth of ± 48 nm, a significant deviation from our design. The integrated bandpass filter BP 490-180 HT from Schneider Kreuznach with (490 ± 90) nm transmission range did not crop the interfering spectrum sufficiently. Thus, the measured 3D resolution and $\textrm{DO}{\textrm{F}_{\textrm{extended}}}$ of the 3D sensor were expected to be smaller than the capability of the system would allow. The application of chromatic focus stacking requires light sources with spectra fitting the lens design; a customized projection unit would therefore be required for optimal measurement results.

Despite the non-optimal LED spectra, the measured $\textrm{DO}{\textrm{F}_{\textrm{extended}}}$ of the experimental setup was close to the designed 21.2 mm. Furthermore, it was only 1 mm smaller than that of the reference setup applying mechanical focus stacking. The main benefit was the time saved for refocusing by chromatic focus stacking, which reduces the overall acquisition time from 124 ms by nearly a factor of three to 44 ms for this case of two focus positions. Thus, chromatic stacking is a reasonable method to realize an extended DOF for high-speed 3D sensor systems.

Table 3 summarizes the theoretical, designed, and measured DOF characteristics for our chromatic aberration lens prototype.

Table 3. Comparison of theoretical, designed, and realized DOF with chromatic focus stacking

3.4 Application for quality control

Our 3D sensor setup applying chromatic focus stacking is intended for applications which require fast frame rates (e.g., inline quality control or real-time analysis in HMI) as well as a high level of detail along with a relatively large field of view (e.g., inspection of tiny features on printed circuit boards (PCBs) or biometric analysis in HMI). Whereas the second requirement limits the achievable DOF of the sensor, the first requirement restricts the applicability of state-of-the-art focus stacking methods due to timing delays. Our approach almost doubles the measuring range of the 3D sensor without noticeable time loss for refocusing, which improves its practicability.

Figure 9 shows exemplary results of a high-speed 3D measurement with extended DOF of a populated PCB (Fig. 9(a)). It contains structures of up to 16 mm height. Considering technical tolerances when placing the samples in a certain distance range to the sensor, the conventional DOF of the structured light 3D sensor setups in Table 2 would not be sufficient to capture all PCB elements with the specified resolution of 10 LP/mm; an extended DOF is mandatory. Because such measurements should take place inline, a high acquisition speed is crucial. Chromatic focus stacking with two focus positions meets both criteria: a large DOF and a short acquisition time. The acquisition process in Fig. 9 was realized at a 180 fps frame rate with four phase-shifted sinusoidal patterns in each focus channel. It took a total of 44 ms to capture the raw images for the 3D reconstruction.

Fig. 9. 3D acquisition of a populated PCB by chromatic focus stacking:

(a) photo of the PCB (captured by an external photo camera); the circles indicate the areas of the details shown in (c) and (d)

(b) photos of exemplary fringe patterns during image acquisition with two wavelengths corresponding to two focus positions (captured by an external photo camera)

(c-d) detail views of tiny elements at two different height levels on the PCB; (c) is located in focus 1 and (d) in focus 2; each view shows the 2D camera image as well as color-coded top and side views of the 3D point clouds

In Fig. 9(b), two projected fringe patterns at blue and green illumination, taken with our experimental 3D sensor setup (Fig. 5, left), are shown. Two tiny details located at different height levels on the PCB (marked circles in Fig. 9(a)) allow a qualitative comparison of the measurement results at the two focus positions. The bottom half of Fig. 9 shows 2D camera images as well as color-coded top and side views of the captured 3D point clouds of both details (c/d) at both foci (1/2). The detail in Fig. 9(c) is in a top region of the PCB closer to the camera. In focus channel 1 (c1), a sharp image and point cloud are captured, while the results in channel 2 (c2) appear blurrier. The detail in Fig. 9(d) is in a bottom region of the PCB, ∼ 10 mm below detail (c). Here, the differences between the results at the two focus positions are less prominent due to the broad spectral emission of the green LED in our experimental 3D sensor setup (cf. Section 3.3.3). Still, the 2D camera image appears sharper in focus channel 2, while the 3D point clouds show no noticeable difference. This would be improved by a structured light projector with customized LEDs.

4. Discussion and outlook

Our experiments proved that chromatic focus stacking is an appropriate method to extend the DOF of structured light 3D sensors (cf. Table 3). We achieved results that are comparable to those of mechanical focus stacking with a motorized camera lens. The main benefit of chromatic focus stacking is the time saved for refocusing, which is realized by switching LED light sources within microseconds. This makes the method attractive for high-speed applications. In our experiments, an acquisition time of 124 ms with mechanical focus stacking was improved by almost a factor of three, down to 44 ms, with chromatic focus stacking. Tunable liquid lenses achieve a settling time of ∼ 10 ms for refocusing [25], which would result in ∼ 54 ms acquisition time in our two-foci setup.

Fabrication of our lens is much simpler compared to any motorized camera lens (cf. Figure 5, right) or tunable liquid lens [25] because no movable parts are involved. On the other hand, the adopted chromatic aberration lens is specific to a certain optical setup, and to achieve optimal results, the structured light projector must be customized with light sources of matching spectral emission. The selection of 455 nm and 520 nm for chromatic focus stacking was optimal for our concrete 3D sensor specifications (cf. Table 1), but this choice is not generally applicable. Simulations are mandatory to find optimal wavelengths for concrete specifications.

Expanding the principle to more than two focus positions is a reasonable next step. In general, the axial chromatic aberration allows further tailoring into the red or infrared spectrum (cf. Fig. 2). Design studies and further experiments must evaluate the feasibility for each concrete setup. If the criteria on 3D data quality are less strict, focus sweeping instead of stacking would be a practical choice in order to retain fast data acquisition. Here, the structured light projector would illuminate the object with all wavelengths at once during acquisition and, afterwards, the camera images would need to be deconvolved by the PSF.

It is possible to transfer the principle of chromatic focus stacking to alternative structured light projection techniques, such as [17,59,60], which have benefits over DLP projectors with regard to high-speed applications. The advancement of focus sweeping through time- and spectral-varying texture projection during sweep by [32] might benefit from a tailored chromatic aberration as well. Moreover, combining chromatic focus stacking with upcoming multispectral 3D measurement approaches could be an interesting direction of development [59,61].

In our review of related works in Section 1.2, we mentioned recent approaches to DOF extension by coded apertures, which generate an engineered PSF of known shape depending on the distance [36–43]. Those techniques realize single-shot, monocular 3D imaging. Although their achieved 3D data quality in terms of spatial resolution and accuracy is not yet competitive with established 3D sensor systems [44,45], it is worth investigating whether their performance might be improved by a tailored chromatic aberration of the lens in addition to the coded aperture. Single-shot methods could potentially surpass our proposed method regarding acquisition time.

Our experiments resulted in a set of independent 3D point clouds for each acquired focus position. We did not investigate any merging process. However, commercial software tools for photographers allow the fusion of focus stacks into one entity. A simple merging procedure could take the known camera-to-object distances for each 3D point and the known depth ranges for each focus position into account. Nevertheless, this might be prone to artifacts in the transition between focus positions and would require an accurate calibration of each focus channel separately. A method for merging stacked point clouds based on the sharpness of additional gray code patterns was proposed by [62].
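A minimal sketch of such a distance-based fusion, under the stated assumptions (per-focus point clouds in camera coordinates and known DOF ranges; the helper is hypothetical and does not handle transition artifacts):

```python
import numpy as np

def merge_stacks(points_per_focus, dof_ranges_mm):
    """Keep each point only if its camera distance lies inside the DOF
    of the focus position it was captured in, then concatenate."""
    kept = []
    for pts, (near, far) in zip(points_per_focus, dof_ranges_mm):
        dist = np.linalg.norm(pts, axis=1)   # camera at the origin
        kept.append(pts[(dist >= near) & (dist <= far)])
    return np.vstack(kept)
```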

In our research, we were able to demonstrate how the DOF of a 3D sensor can be characterized by the distance-dependent measurement of its 3D-MTF. Nevertheless, the 3D-MTF becomes erratic for modulations below 60%, which is a strong drawback compared to established 2D-MTF measurements. The DOF curves determined by pure 2D image contrast (cf. Fig. 4) appear more plausible than the more irregular DOF curves evaluated by the 3D-MTF (cf. Fig. 8); the current determination of the 3D-MTF is expected to underestimate the real 3D resolution capabilities [47]. The measurement of the distance-dependent 2D-MTF allows an estimation of the DOF but does not incorporate the complete 3D sensor system, consisting of camera(s), projector, and data processing, into the evaluation. Therefore, future efforts should try to improve and standardize the methodology of the 3D-MTF. It would be applicable to 3D measuring principles other than structured light as well, such as laser scanning or coded aperture masks. As focus sweeping methods still have timing advantages in case of stacking many focus positions, the characterization of their resolution and DOF by our proposed 3D-MTF measurement on a tilted edge (cf. Supplemental 4) would make their performance comparable to other methods.

5. Summary

The depth of field (DOF) of optical imaging systems is a restriction that limits the achievable measuring range of close-range 3D sensors. Focus stacking is a method to extend the DOF by subsequent refocusing to multiple focus positions. Chromatic focus stacking uses the axial chromatic aberration of an optical lens to stack focus positions by illuminating the scene with light of different wavelengths.

In our research, the method of chromatic focus stacking was applied to a structured light 3D sensor. A camera lens prototype with two stacked focus positions at 455 and 520 nm was designed and manufactured. An experimental setup consisting of a 20.3 Mpx camera with the lens prototype and an RGB DLP projector was realized to investigate the achievable extended DOF. The DOF was characterized by a new method which derives a distance-dependent 3D resolution using the 3D-MTF measured on a tilted edge.

In our experimental setup, we achieved an extended DOF of 21 mm by the stacking of two spectral channels, which was comparable to a reference setup with mechanical focus stacking ($\textrm{DO}{\textrm{F}_{\textrm{extended}}} = 22\; \textrm{mm}$). The conventional DOF of a single channel was ≤ 14 mm. As the refocusing is based on the switching of LED light sources, there is no time lost for the focus shift in contrast to mechanical focus stacking. In this way, we were able to carry out high-speed measurements with an overall acquisition time of 44 ms.

Funding

Bundesministerium für Bildung und Forschung (03ZZ0447G).

Acknowledgments

The authors thank the funding ministry BMBF and the 3Dsensation consortium for granting and supporting our research project 3D4F.

Disclosures

RR: The author declares no conflicts of interest; MMA: Docter Optics SE (E,P); DHN: The author declares no conflicts of interest; TH: Docter Optics SE (E); HS: The author declares no conflicts of interest; SK: Docter Optics SE (E); DHF: Docter Optics SE (E); SE: Docter Optics SE (E); PK: The author declares no conflicts of interest; SH: The author declares no conflicts of interest; GN: The author declares no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

Supplemental document

See Supplement 1 for supporting content.

References

1. P. Kühmstedt, C. Munkelt, M. Heinze, C. Bräuer-Burchardt, and G. Notni, “3D shape measurement with phase correlation based fringe projection,” Proc. SPIE 6616, 66160B (2017). [CrossRef]  

2. S. Zhang, “Handbook of 3D Machine Vision: Optical Metrology and Imaging” (1st. ed.). (CRC Press, Inc.), USA (2017).

3. C. Braeuer-Burchardt, F. Siegmund, D. Hoehne, P. Kuehmstedt, and G. Notni, “Finger Pointer Based Human Machine Interaction for Selected Quality Checks of Industrial Work Pieces,” ISR 2020; 52th International Symposium on Robotics, pp. 1–6 (2020).

4. C. Montgomerie, D. Raneri, and P. Maynard, “Validation study of three-dimensional scanning of footwear impressions,” Australian Journal of Forensic Sciences, DOI: 10.1080/00450618.2020.1789222 (2020).

5. C. Zhang, I. Gebhart, P. Kühmstedt, M. Rosenberger, and G. Notni, “Real-time multimodal 3D imaging system for remote estimation of vital signs,” Proc. SPIE 11785, 117850F (2021). [CrossRef]  

6. C. Munkelt, M. Heinze, S. Schindwolf, S. Heist, and G. Notni, “Irritation-free optical 3D-based measurement of tidal volume,” Proc. SPIE 11787, 117870A (2021). [CrossRef]  

7. S. Riehemann, M. Palme, P. Kuehmstedt, C. Grossmann, G. Notni, and J. Hintersehr, “Microdisplay-Based Intraoral 3D Scanner for Dentistry,” J. Disp. Technol. 7(3), 151–155 (2011). [CrossRef]  

8. M. Javaid, A. Haleem, and L. Kumar, “Current status and applications of 3D scanning in dentistry,” Clinical Epidemiology and Global Health 7(2), 228–233 (2019). [CrossRef]  

9. M. Preissler, C. Zhang, M. Rosenberger, and G. Notni, “Approach for Process Control in Additive Manufacturing Through Layer-Wise Analysis with 3-Dimensional Pointcloud Information,” 2018 Digital Image Computing: Techniques and Applications (DICTA), pp. 1–6, doi: 10.1109/DICTA.2018.8615803 (2018)

10. S. Heist, P. Lutzke, I. Schmidt, P. Dietrich, P. Kühmstedt, A. Tünnermann, and G. Notni, “High-speed three-dimensional shape measurement using GOBO projection,” Opt. Lasers Eng. 87, 90–96 (2016). [CrossRef]  

11. R. Ramm, M. Heinze, P. Kühmstedt, A. Christoph, S. Heist, and G. Notni, “Portable solution for high-resolution 3D and color texture on-site digitization of cultural heritage objects,” J. Cultural Heritage 53, 165–175 (2022). [CrossRef]  

12. T. Luhmann, S. Robson, S. Kyle, and J. Boehm, “Close-Range Photogrammetry and 3D Imaging”, (Boston: De Gruyter: Berlin). https://doi.org/10.1515/9783110607253 (2019)

13. S. Zhang, “High-speed 3D shape measurement with structured light methods: A review,” Opt. Lasers Eng. 106, 119–131 (2018). [CrossRef]  

14. V. A. Oleg, Y. B. Aleksandr, G. H. Steen, Y. Z. Claudia, I. M. Igor, and J. Zheng, “Structured Light: Ideas and Concepts,” Front. Phys. 8, 114 (2020). [CrossRef]

15. S. Heist, P. Dietrich, M. Landmann, P. Kühmstedt, G. Notni, and A. Tünnermann, “GOBO projection for 3D measurements at highest frame rates: a performance analysis,” Light: Sci. Appl. 7(1), 71 (2018). [CrossRef]  

16. M. Schaffer, M. Grosse, B. Harendt, and R. Kowarschik, “High-speed three-dimensional shape measurements of objects with laser speckles and acousto-optical deflection,” Opt. Lett. 36(16), 3097–3099 (2011). [CrossRef]  

17. S. Heist, P. Kühmstedt, A. Tünnermann, and G. Notni, “Array projection of aperiodic sinusoidal fringes for high-speed three-dimensional shape measurement,” Opt. Eng. 53(11), 112208 (2014). [CrossRef]  

18. S. Heist, P. Kühmstedt, A. Tünnermann, and G. Notni, “Experimental comparison of aperiodic sinusoidal fringes and phase-shifted sinusoidal fringes for high-speed three-dimensional shape measurement,” Opt. Eng. 55(2), 024105 (2016). [CrossRef]  

19. P. Lutzke, M. Schaffer, P. Kühmstedt, R. Kowarschik, and G. Notni, “Experimental comparison of phase-shifting fringe projection and statistical pattern projection for active triangulation systems,”, Proc. SPIE 8788, 878813 (2013). [CrossRef]  

20. C. Munkelt, M. Heinze, C. Bräuer-Burchardt, S. P. Kodgirwar, P. Kühmstedt, and G. Notni, “Large-volume NIR pattern projection sensor for continuous low-latency 3D measurements,” Proc. SPIE 10991, 109910K (2019). [CrossRef]  

21. H. Nagahara, S. Kuthirummal, C. Zhou, and S. K. Nayar, “Flexible Depth of Field Photography,” In: D. Forsyth, P. Torr, and A. Zisserman, eds. Computer Vision – ECCV 2008. ECCV 2008. Lecture Notes in Computer Science, vol 5305. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-88693-8_5 (2008)

22. C. Bräuer-Burchardt, S. Heist, P. Kühmstedt, and G. Notni, “High-speed 3D surface measurement with a fringe projection based optical sensor,” Proc. SPIE 9110, 91100E (2014). [CrossRef]  

23. Y. Liu, Y. Fu, Y. Zhuan, P. Zhou, K. Zhong, and B. Guan, “Large depth-of-field 3D measurement with a microscopic structured-light system,” Opt. Commun. 481, 126540 (2021). [CrossRef]  

24. Y. Bando, H. Holtzman, and R. Raskar, “Near-invariant blur for depth and 2D motion via time-varying light field analysis,” ACM Trans. Graph. 32(2), 1–15 (2013). [CrossRef]  

25. X. Hu, S. Zhang, Y. Zhang, Y. Liu, and G. Wang, “Large depth-of-field three-dimensional shape measurement with the focal sweep technique,” Opt. Express 28(21), 31197–31208 (2020). [CrossRef]  

26. R. Zanella, G. Zanghirati, R. Cavicchioli, L. Zanni, P. Boccacci, M. Bertero, and G. Vicidomini, “Towards real-time image deconvolution: application to confocal and STED microscopy,” Sci. Rep. 3(1), 2523 (2013). [CrossRef]  

27. C. Liu, S. Gao, X. Zhao, and J. Qiu, “All-in-Focus Sweep Imaging Based on Wigner Distribution function,” in IEEE Access, vol. 6, pp. 64858–64866, 2018, doi: 10.1109/ACCESS.2018.2878056. (2018)

28. P. Campisi, “Blind Image Deconvolution: Theory and Applications”, 10.1201/9781420007299 (2007)

29. J. R. Swedlow, J. W. Sedat, and D. A. Agard, “Deconvolution in optical microscopy. Deconvolution of images and spectra” (2nd ed.). Academic Press, Inc., USA, 284–309 (1996)

30. C. Zhou, D. Miau, and S. K. Nayar, “Focal Sweep Camera for Space Time Refocusing,” Tech Report, Computer Science, DOI:10.7916/D8V69SZB (2012)

31. O. Cossairt and S. Nayar, “Spectral Focal Sweep: Extended depth of field from chromatic aberrations,” 2010 IEEE International Conference on Computational Photography (ICCP), pp. 1–8, doi: 10.1109/ICCPHOT.2010.5585101. (2010)

32. M. Sheinin and Y. Y. Schechner, “Depth from texture integration,” Proc. IEEE ICCP - Int. Conference on Computational Photography (2019).

33. J. Bliedtner and G. Gräfe, “Optiktechnologie – Grundlagen – Verfahren – Anwendungen – Beispiele,” 2nd Edition, ISBN 978-3-446-42215-5, (Carl Hanser Verlag), Munich (2010)

34. J. Courtney-Pratt and R. Gregory, “Microscope with Enhanced Depth of Field and 3–D Capability,” Appl. Opt. 12(10), 2509–2519 (1973). [CrossRef]  

35. C.-L. Tisse, H. P. Nguyen, R. Tessières, M. Pyanet, and F. Guichard, “Extended depth-of-field (EDoF) using sharpness transport across colour channels,” Proc. SPIE 7061, 706105 (2008). [CrossRef]  

36. E. R. Dowski and W. T. Cathey, “Extended depth of field through wave-front coding,” Appl. Opt. 34(11), 1859–1866 (1995). [CrossRef]  

37. H. Nagahara, “19-3: Invited Paper: Computational 3D Imaging – PSF Engineering for Depth from Defocus,” SID Symposium Digest of Technical Papers 47, 227–230 (2016) [CrossRef]

38. J. Chang and G. Wetzstein, “Deep Optics for Monocular Depth Estimation and 3D Object Detection,” IEEE International Conference on Computer Vision (ICCV), (2019)

39. Y. Wu, V. Boominathan, H. Chen, A. Sankaranarayanan, and A. Veeraraghavan, “PhaseCam3D — Learning Phase Masks for Passive Single View Depth Estimation,” 2019 IEEE International Conference on Computational Photography (ICCP), pp. 1–12(2019). [CrossRef]  

40. H. Ikoma, C. M. Nguyen, C. A. Metzler, Y. Peng, and G. Wetzstein, “Depth from Defocus with Learned Optics for Imaging and Occlusion-aware Depth Estimation,” International Conference on Computational Photography (2021).

41. H. Haim, S. Elmalem, R. Giryes, A. M. Bronstein, and E. Marom, “Depth Estimation From a Single Image Using Deep Learned Phase Coded Mask,” IEEE Transactions on Computational Imaging 4(3), 298–310 (2018). [CrossRef]

42. L. Jin, Y. Tang, Y. Wu, J. B. Coole, M. T. Tan, X. Zhao, H. Badaoui, J. T. Robinson, M. D. Williams, A. M. Gillenwater, R. R. Richards-Kortum, and A. Veeraraghavan, “Deep learning extended depth-of-field microscope for fast and slide-free histology,” Proc. Natl. Acad. Sci. U.S.A. 117(52), 33051–33060 (2020). [CrossRef]

43. S. Elmalem, R. Giryes, and E. Marom, “Learned phase coded aperture for the benefit of depth of field extension,” Opt. Express 26(12), 15316–15331 (2018). [CrossRef]  

44. N. Chen, Z. Chao, Y. L. Edmund, and L. Byoungho, “3D Imaging Based on Depth Measurement Technologies,” Sensors 18(11), 3711 (2018). [CrossRef]  

45. G. Zhang, S. Yang, P. Hu, and P. H. Deng, “Advances and Prospects of Vision-Based 3D Shape Measurement Methods,” Machines 10(2), 124 (2022). [CrossRef]  

46. G. Häusler and S. Ettl, “Limitations of optical 3D sensors,” in Optical Measurement of Surface Topography, R. Leach, ed. (Springer), (2011).

47. P. Berssenbruegge, M. Dekiff, B. Kemper, C. Denz, and D. Dirksen, “Characterization of the 3D resolution of topometric sensors based on fringe and speckle pattern projection by a 3D transfer function,” Optics and Lasers in Engineering 50(3), 465–472 (2012). [CrossRef]  

48. R. Kingslake and R. B. Johnson, “Lens Design Fundamentals” 2nd Edition, ISBN 9780123743015, (Academic), (2010)

49. H. Hopkins, “The Frequency Response of Optical Systems,” Proc. Phys. Soc. B 69(5), 562–576 (1956). [CrossRef]  

50. W. S. Charles and O. A. Becklund, “Introduction To The Optical Transfer Function”, (Spie Press Book) (1989). [CrossRef]  

51. F. L. Pedrotti, L. M. Pedrotti, and L. S. Pedrotti, “Introduction to Optics”, 3rd edition. (Cambridge University), ISBN 978-1-108-42826-2 (2018)

52. U. Teubner and H. J. Brückner, “Optical Imaging and Photography: Introduction to Science and Technology of Optics, Sensors and Systems,” (Boston: De Gruyter: Berlin), https://doi.org/10.1515/9783110472943 (2019).

53. C. Zuo, S. Feng, L. Huang, T. Tao, W. Yin, and Q. Chen, “Phase shifting algorithms for fringe projection profilometry: A review,” Optics and Lasers in Engineering. 109, 23–59 (2018). [CrossRef]  

54. Y. Liu, O. Zhang, H. Zhang, Z. Wu, and W. Chen, “Improve Temporal Fourier Transform Profilometry for Complex Dynamic Three-Dimensional Shape Measurement,” Sensors 20(7), 1808 (2020). [CrossRef]  

55. VDI 2634, 2012, “Part 2: Optical 3-D measuring systems - Optical systems based on area scanning”, (VDI/VDE-Gesellschaft Mess- und Automatisierungstechnik), (2012)

56. M. Goesele, C. Fuchs, and H. Seidel, “Accuracy of 3D range scanners by measurement of the slanted edge modulation transfer function,” Fourth International Conference on 3-D Digital Imaging and Modeling, 2003. 3DIM 2003. Proceedings., pp. 37–44, doi: 10.1109/IM.2003.1240230 (2003)

57. T. Kellner, A. Breitbarth, C. Zhang, and G. Notni, “Characterizing 3D sensors using the 3D modulation transfer function,” Meas. Sci. Technol. 29(3), 035103 (2018). [CrossRef]

58. FBI, “EBTS - Electronic biometric transmission specification”, version 10.0.8, appendix F, NGI-DOC-01862-1.1, https://www.fbibiospecs.cjis.gov/Document/Get?fileName=Master%20EBTS%20v10.0.8%2009302017_Final.pdf (2017)

59. S. Heist, C. Zhang, K. Reichwald, P. Kühmstedt, G. Notni, and A. Tünnermann, “5D hyperspectral imaging: Fast and accurate measurement of surface shape and spectral characteristics using structured light,” Opt. Express 26(18), 23366 (2018). [CrossRef]  

60. C. Munkelt, H. Speck, C. Bösel, C. Junger, S. Töpfer, and G. Notni, “Continuous low-latency 3D measurements using efficient freeform GOBO pattern projection and close-to-sensor image rectification,” Proc. SPIE 11397, 1139705 (2020). [CrossRef]  

61. S. Kottner, M. M. Schulz, F. Berger, M. Thali, and D. Gascho, “Beyond the visible spectrum – applying 3D multispectral full-body imaging to the VirtoScan system,” Forensic Sci Med Pathol. https://doi.org/10.1007/s12024-021-00420-x (2021)

62. Y. Xiao, G. Wang, X. Hu, C. Shi, L. Meng, and H. Yang, “Guided, Fusion-Based, Large Depth-of-field 3D Imaging Using a Focal Stack,” Sensors 19(22), 4845 (2019). [CrossRef]  
