Optica Publishing Group

Spatial multiplexing and autofocus in holographic contouring for inspection of micro-parts

Open Access

Abstract

We present a method for fast geometrical inspection of micro deep drawing parts. It is based on single-shot two-wavelength contouring digital holographic microscopy (DHM). During capture, spatial multiplexing is utilized in order to record the two required holograms in a single shot. For fast evaluation, an autofocus algorithm determines the locations where the object is in focus and digitally stitches all in-focus areas of the object together without the need for any external intervention. Thus, the limited depth of focus of the microscope objective is extended. The autofocus algorithm is based on minimizing the total variation (TV) of the phase difference residuals of the two-wavelength measurements. In contrast to standard DHM, an object-side telecentric microscope objective is used to overcome the image scaling distortions caused by a conventional microscope objective. The method is used to reconstruct the 3D geometrical shape of a cold-formed micro-cup. Experimental results verify the improvement of the DHM's depth of focus.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Metallic micro-components used in a wide range of applications, for example in smartphones and tablets, the automotive industry and medical technology, have to be small and light. Moreover, they are commonly manufactured at high production rates, and such mass production has to be cost-effective. Cold forming is considered one of the most cost-effective production processes and yields up to several hundred micro-parts per minute. An example of a micro-part, the micro-cup fabricated within the Collaborative Research Center 747 "Micro Cold Forming - Processes, Characterization, Optimization", is shown in Fig. 1. Usually, such micro-parts should have tightly toleranced, well-defined form and surface characteristics. These characteristics strongly depend on the manufacturing process [1].

Fig. 1 Micro-cups as an example of metallic cold-formed micro-parts. The micro-cups have a size of less than 1 mm in at least two dimensions. Image captured by Lukas Heinrich, BIAS.

Commonly, micro-parts are subject to geometrical distortions and surface defects such as fractures, dents and scratches. Thus, quality control is essential to improving the production process. Optical metrology in an industrial environment for such micro-parts requires, however, a robust technique that measures the complete 3D form of the part under test with high measurement accuracy and high speed [2].

With regard to the state of the art, metrology techniques such as tactile probing, confocal microscopy, white light interferometry [3, 4], optical coherence tomography [5], Computational Shear Interferometry (CoSI) [6] and phase retrieval [7–9] have been used. However, such techniques are not suited here since they are either based on scanning or require sequential measurements. In contrast to these methods, digital holography (DH) [10, 11] offers fast measurements in combination with low measurement uncertainty. DH is based on measuring the optical path difference (OPD) by retrieving the phase difference of light diffracted by a test object and a reference wave [12]. Since commercially available cameras can only measure intensities of light, the OPD must be accordingly encoded. Depending on the encoding method and the geometrical complexity of the test part, measurement uncertainties down to a fraction of the illumination's wavelength can be achieved for smooth surfaces [13]. For metal micro-parts, we encounter two problems:

  1. For optically rough surfaces, as in our case, the result of the evaluation becomes ambiguous and cannot be used to reconstruct the shape of the test object. One way to solve this problem is measuring two phase distributions associated with two different wavelengths. This approach is referred to as the two-wavelength contouring technique [14–16]. However, this technique suffers from reduced measurement accuracy, since speckle fields corresponding to different wavelengths are not fully correlated. Consequently, the phase difference is influenced by decorrelation noise, which depends on the wavelength difference [17, 18].
  2. The depth of focus of any microscopic imaging system, as usually employed for the optical inspection of micro-parts, is limited. As a consequence, sharp contouring fringes appear only where parts of the test object are in focus.

Here we present a shape measurement method suitable for in-line quality control of micro-parts. The method is based on integrating a new autofocus approach with the two-wavelength holographic contouring technique. Based on spatial multiplexing, the two holograms associated with the two-wavelength contouring are simultaneously recorded in a single-shot. The method is used here to measure the 3D form of a cold formed micro-cup showing the value of the proposed approach with regard to the in-line inspection of deep drawing micro components.

2. Form measurements by means of holographic contouring

Two-wavelength holographic form measurement methods consist of two main steps. The first step is recording the holographic data, i.e. capturing the required holograms with a holographic system. In the second step, an evaluation process converts the measurement into a 3D form map [19, 20]. The two steps are discussed in detail in the following subsections.

2.1. Digital holographic microscopy

Figure 2 shows a scheme of the digital holographic microscopy (DHM) setup for the inspection of micro-parts. The setup consists of a telecentric long distance microscope objective (LDM) with a 10× magnification, a numerical aperture (NA) of 0.21 and a working distance of 51 mm, which is used to image the test object onto a charge-coupled device (CCD) camera sensor. The numerical aperture of the utilized objective limits the lateral resolution of the system to 1.86 µm, in accordance with the Rayleigh criterion at λ = 642 nm. The object-side telecentric objective has a depth of field of 62 µm. The lateral resolution could be improved by utilizing another microscope objective with a higher NA. In addition, the limited depth of field is extended by utilizing the numerical focusing properties of digital holography. A beam-splitter is used to combine light diffracted from the object under investigation and the reference wave, resulting in a hologram. The hologram is then captured at the camera plane. The surfaces of technical objects are optically rough, since they commonly exhibit steps and peaks larger than a quarter of the wavelength. Thus, the evaluation of the measurement becomes ambiguous and cannot be used to recover the form of the test object. As previously mentioned, the solution to this problem is referred to as the synthetic wavelength approach [21]. For its realization, two measurements associated with two different wavelengths λ1 and λ2 have to be acquired, which can be recorded simultaneously [22, 23].
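
As a quick sanity check of the quoted imaging parameters, the Rayleigh lateral resolution limit for the stated objective can be computed directly (the numbers follow the text; the formula is the standard Rayleigh criterion):

```python
import numpy as np

# Rayleigh lateral resolution limit for the telecentric LDM quoted in the text:
# NA = 0.21 at lambda = 642 nm.
NA = 0.21
lam = 642e-9                     # illumination wavelength in meters

d_rayleigh = 0.61 * lam / NA     # approx. 1.86 um, as stated in the text
```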

Fig. 2 Scheme of the digital holographic microscopy (DHM) setup. A long distance microscope objective (LDM) with 10× magnification, a numerical aperture of NA = 0.21 and a working distance of 51 mm images the object under test onto a CCD camera sensor. Optical fiber splitters and couplers are used for creating the illumination and reference waves, which are combined utilizing a 50:50 beam-splitter in front of the CCD camera. An angle of α = 22° is found to be appropriate for illuminating the micro-cup under test.

In order to simultaneously record the two holograms in a single shot, optical splitters and couplers are used. The scheme for the realization of the object illumination and reference waves is shown in Fig. 3. Each laser diode is coupled into a single mode fiber. Using a 1 × 2 fiber splitter, the input of each is divided into two outputs, creating four laser outputs. Two of these outputs are used for illuminating the test object from one direction by combining them with a 1 × 2 fiber coupler. The other two outputs are used as reference waves. The positions of the two reference waves are shifted from the center of the optical axis by different shifts Δxn = (Δxi, Δxj). Accordingly, the superposition of a spherical wave arising from the shifted reference point source and the one arising from the field curvature of the imaging system results in linear fringes, i.e. a linear phase ramp. Thus, each hologram generated across the recording plane is modulated by a phase ramp G which is given by:

$$G_{\lambda_n}(\mathbf{y}) = \exp\left[i 2\pi\, \Delta\mathbf{x}_n \cdot \mathbf{y}\right]. \tag{1}$$
The vector y = (yi, yj) represents the spatial coordinates at the recording domain, i.e. the hologram plane. Accordingly, the intensity distribution I(y) of the captured holograms can be written as:
$$I(\mathbf{y}) = A(\mathbf{y}) + \sum_{n=1}^{2} U_{O\lambda_n}(\mathbf{y})\, U^{*}_{R\lambda_n}(\mathbf{y})\, G_{\lambda_n}(\mathbf{y}) + \sum_{n=1}^{2} U^{*}_{O\lambda_n}(\mathbf{y})\, U_{R\lambda_n}(\mathbf{y})\, G^{*}_{\lambda_n}(\mathbf{y}). \tag{2}$$
Here A(y) represents the incoherent superposition of the interfering wave fields, U denotes a wave field whose indices Oλn and Rλn distinguish between the object and reference waves at illumination wavelength λn, and * denotes the complex conjugate. According to Eq. (2), the phase ramps introduce spatial carriers to the recorded holograms, so the holograms are separated in the Fourier domain by predefined shifts. A scheme of the expected spectrum of a single hologram, which contains the object's information for both illumination wavelengths at an illumination angle α, is shown in Fig. 4. Here, the +1 and −1 orders represent the Fourier transforms of the second and third terms on the right-hand side of Eq. (2), while the dc-term refers to the Fourier transform of the first, incoherently interfering term of Eq. (2).
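
The separation of the two multiplexed holograms in the Fourier domain can be illustrated with a small simulation. This is a sketch with hypothetical parameters (array size, carrier frequencies, test phase), not the experimental configuration: two interference terms with different carriers are summed into one frame, and their ±1 orders land at distinct positions in the spectrum, away from the dc term, as in Fig. 4.

```python
import numpy as np

# Build a single camera frame that multiplexes two off-axis holograms, as in
# Eq. (2). Each reference wave is tilted by a different carrier Delta_x_n, so
# the +/-1 orders of the two wavelengths separate in the Fourier domain.
N = 256
y_i, y_j = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")

# smooth stand-in object phase so each diffraction order stays localized
obj_phase = np.pi * ((y_i - N / 2) ** 2 + (y_j - N / 2) ** 2) / N ** 2

holo = np.zeros((N, N))
carriers = [(0.25, 0.0), (0.0, 0.25)]   # cycles/pixel for lambda_1, lambda_2
for dx_i, dx_j in carriers:
    # interference term O R* G + c.c.  ->  2 cos(carrier + object phase)
    holo += 1.0 + np.cos(2 * np.pi * (dx_i * y_i + dx_j * y_j) + obj_phase)

spectrum = np.abs(np.fft.fftshift(np.fft.fft2(holo)))
# dc term at (128, 128); the +/-1 orders of the two carriers appear at
# (128 +/- 64, 128) and (128, 128 +/- 64), i.e. at separated positions.
```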

Fig. 3 Scheme showing the configuration of the object illumination and reference waves originating from the laser diodes with wavelengths λ1 and λ2, utilizing optical fiber splitters and couplers.

Fig. 4 Schematic representation of the expected spectrum of a single hologram, where +1, −1 and the dc-term refer to the diffraction orders and fi and fj represent the spatial frequencies in the i and j directions. The spectra corresponding to the wavelengths λ1 and λ2 are shown as an example.

2.2. Spatial carrier frequency method

Now the adaptive spatial carrier frequency method [24–26] can be applied to recover the phase information from the recorded intensity hologram using the following steps which are summarized in Fig. 5:

  1. Applying the Fourier transform, ℱ, on the recorded intensity hologram.
  2. The −1 diffraction order for λ1 is selected and all other frequencies are removed.
  3. The selected −1 diffraction order is translated to the center of the spectrum. This is done by determining the center of the −1 order, which is represented by a Dirac delta (impulse) function. Convolving the −1 order with a Dirac delta function located at the negative of this center position shifts the −1 order to the center of the spectrum. For fast implementation, the convolution is realized using a fast Fourier transform.
  4. To obtain the phase distribution ϕλ1, which corresponds to the measurement performed with λ1 at the hologram plane, an inverse Fourier transform, ℱ−1, is applied to the centered −1 order. Then the argument of the resulting complex amplitude is taken to obtain ϕλ1.
  5. Steps ii) to iv) are repeated for the −1 order related to the second wavelength, λ2.
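
The steps above can be sketched on synthetic data. The hologram below is a minimal stand-in (hypothetical carrier frequency, mask width and test phase, not the experimental values): one order is masked in the Fourier domain, shifted to the dc position, and inverse transformed to recover the wrapped phase.

```python
import numpy as np

# Steps i)-iv): recover the object phase of one wavelength from a
# carrier-modulated intensity hologram (synthetic data).
N = 256
yi, yj = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
true_phase = 0.5 * np.pi * np.sin(2 * np.pi * yi / N)      # smooth test phase
fc = 64                                                     # carrier in FFT bins
holo = 1.0 + np.cos(2 * np.pi * fc * yj / N + true_phase)   # intensity hologram

# i) Fourier transform of the recorded intensity hologram
F = np.fft.fft2(holo)
# ii) select the order at +fc along j and suppress all other frequencies
mask = np.zeros((N, N))
mask[:, fc - 20:fc + 20] = 1.0
# iii) translate the selected order to the center (dc position) of the spectrum
F_shift = np.roll(F * mask, -fc, axis=1)
# iv) inverse transform and take the argument -> wrapped phase distribution
phase = np.angle(np.fft.ifft2(F_shift))
```

Because the synthetic phase stays within (−π, π), the recovered `phase` matches `true_phase` directly; for real data the result is wrapped.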


Fig. 5 Scheme representing the implementation of the spatial carrier frequency method, summarized in steps i) to iv) and utilized to recover the phase information from the captured intensity hologram. ℱ and ℱ−1 refer to the Fourier transform and its inverse, UOλi is the filtered object wave field corresponding to the λi measurement (i = 1, 2) and arg{·} is the argument of a complex number, which gives its phase ϕλi.


Thus, two phase distributions, ϕλ1 and ϕλ2, are recovered from a single measurement associated with the two wavelengths.

2.3. Holographic contouring

From the two recovered phase distributions, ϕλ1 and ϕλ2, retrieved from the measurements associated with the two different wavelengths, the phase difference Δϕ across the hologram plane is calculated using:

$$\Delta\phi = \phi_{\lambda_2} - \phi_{\lambda_1}. \tag{3}$$
Accordingly, the 3D height map zp of the test object is directly calculated utilizing [27]:
$$\Delta\phi = \frac{2\pi}{\Lambda}\, z_p\, (1 + \cos\alpha). \tag{4}$$
Here, α is the angle between the observation and illumination direction and Λ is referred to as the synthetic wavelength given by:
$$\Lambda = \frac{\lambda_1 \lambda_2}{|\lambda_2 - \lambda_1|}. \tag{5}$$
The resulting phase difference distribution Δϕ is equivalent to a single measurement performed with Λ. This map contains fringes and is referred to as the phase contouring map.
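
The contouring relations can be made concrete with the wavelengths used later in Sec. 4 (the helper name below is ours, not from the paper):

```python
import numpy as np

lam1, lam2 = 636.55e-9, 642.10e-9   # laser diode wavelengths from Sec. 4
alpha = np.deg2rad(22.0)            # illumination angle, Fig. 2

# Eq. (5): synthetic wavelength, approx. 73.64 um for these diodes
Lambda = lam1 * lam2 / abs(lam2 - lam1)

def height_from_phase(dphi):
    """Invert Eq. (4): z_p = Lambda * dphi / (2*pi*(1 + cos(alpha)))."""
    return Lambda * dphi / (2 * np.pi * (1 + np.cos(alpha)))
```

One full 2π contouring fringe thus corresponds to a height step of Λ/(1 + cos α), roughly 38 µm for this configuration.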

3. Autofocus in holographic contouring

Especially in DHM, numerical refocusing is widely used in applications such as flow analysis [28, 29], tracking of cells [13, 30–32] and inspection of micro-electro-mechanical systems (MEMS) [33]. Within these applications, autofocus is implemented to enable fast determination of the best focus plane from a set of object layers reconstructed from a single hologram [34–37].

Commonly, automated determination of the best focus plane is achieved by using a metric quantifying the degree of focus. Examples of focus metrics include energy invariance [36], contrast-based [37], variance [38], sparsity-based [39] and image-processing-based metrics [40].

However, these metrics are based on the intensity distribution, and in the presence of speckle noise they fail to find the focus plane. Therefore, a new focus metric has to be used for holographic contouring, since areas which are out of focus are corrupted by speckle decorrelation [18].

3.1. Contouring focus metric

In the following, we propose a new autofocus approach enabling the reconstruction of an undistorted contouring map for the whole micro-part without the need for any external intervention. Within this approach, the focus plane is defined by determining the minimum of the total variation (TV) of the phase difference residual ϕRes as a focus measure. Accordingly, the focus metric LTV is given by:

$$L_{TV} = \sum_{\mathbf{y} \in R} \left|\nabla\phi_{Res}\right|^2, \tag{6}$$
where the vector y = (yi, yj) represents the spatial coordinates at a certain object layer, R is the grid defined by the sampling across that layer and the phase difference residual ϕRes is calculated by:
$$\phi_{Res} = \Delta\phi - \widetilde{\Delta\phi}. \tag{7}$$
Here $\widetilde{\Delta\phi}$ represents a low-pass filtered version of Δϕ, calculated by convolution with a circular kernel having a diameter of 10 pixels, and ∇ϕRes is given by:
$$\nabla\phi_{Res} = \begin{pmatrix} \partial\phi_{Res}/\partial y_i \\ \partial\phi_{Res}/\partial y_j \end{pmatrix} \quad\text{and}\quad \left|\nabla\phi_{Res}\right|^2 = \left(\frac{\partial\phi_{Res}}{\partial y_i}\right)^2 + \left(\frac{\partial\phi_{Res}}{\partial y_j}\right)^2. \tag{8}$$
The first derivative of ϕRes can be approximated using Euler finite differences, e.g.:
$$\frac{\partial\phi_{Res}}{\partial y_i} \approx \left|\phi_{Res}(y_i + 1) - \phi_{Res}(y_i)\right| \bmod 2\pi. \tag{9}$$

In contrast to intensity-based focus metrics, the proposed focus metric is based on evaluating the phase difference instead of the intensity of the reconstruction. This enables its use in the presence of speckle noise.
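
A minimal sketch of Eqs. (6)–(9) is given below. The function name and the synthetic test maps are ours, and the low-pass filter is realized by FFT convolution with periodic boundaries, which is one plausible implementation of the circular 10-pixel kernel described above; an out-of-focus (speckle-decorrelated) map should score higher than an in-focus one.

```python
import numpy as np

def focus_metric(dphi, kernel_diameter=10):
    """L_TV of Eq. (6): total variation of the phase difference residual."""
    n_i, n_j = dphi.shape
    # circular averaging kernel, applied via FFT (periodic boundaries)
    yi, yj = np.meshgrid(np.arange(n_i), np.arange(n_j), indexing="ij")
    r = kernel_diameter / 2.0
    kernel = ((yi - n_i // 2) ** 2 + (yj - n_j // 2) ** 2 <= r ** 2).astype(float)
    kernel /= kernel.sum()
    smoothed = np.real(np.fft.ifft2(np.fft.fft2(dphi) *
                                    np.fft.fft2(np.fft.ifftshift(kernel))))
    res = dphi - smoothed                                    # Eq. (7)
    # wrapped forward differences in both directions, Eq. (9)
    d_i = np.angle(np.exp(1j * np.diff(res, axis=0)))
    d_j = np.angle(np.exp(1j * np.diff(res, axis=1)))
    return np.sum(d_i[:, :-1] ** 2 + d_j[:-1, :] ** 2)       # Eqs. (6), (8)

# synthetic check: a smooth (in-focus) map vs. a speckle-like (defocused) one
rng = np.random.default_rng(1)
smooth = np.fromfunction(lambda i, j: 0.01 * (i + j), (128, 128))
noisy = smooth + rng.uniform(-np.pi, np.pi, (128, 128))
```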

3.2. Autofocus implementation

In the following, we discuss in detail the implementation of the autofocus approach proposed in Sec. 3.1. It is based on finding the minimum of the focus metric LTV, which quantifies the statistical fluctuations of the phase difference distributions of the contouring map. Figure 6 shows a diagram demonstrating the process according to the following steps:

  1. Recover the complex amplitude across a certain object plane for each wavelength using the spatial carrier frequency approach discussed in Sec. 2.2 and implemented according to the scheme shown in Fig. 5. Thus, two complex amplitudes are obtained at that plane.
  2. Numerically propagate the recovered complex amplitudes to different layers covering the whole test object. The complex amplitude across the first object plane, U1, is propagated numerically to the adjacent plane, obtaining the complex amplitude U2, using the plane wave decomposition approach [41]:
    $$U_2 = \mathcal{F}^{-1}\left\{\mathcal{F}\{U_1\}\, H_z\right\}, \tag{10}$$
    where Hz is the transfer function of free-space propagation, which can be written as:
    $$H_z(\mathbf{v}) = \exp\left[i 2\pi \frac{\Delta z}{\lambda_1} \sqrt{1 - \lambda_1^2 |\mathbf{v}|^2}\right]. \tag{11}$$
    Here Δz denotes the distance between two object layers and v is the spatial frequency vector. Propagation is done in the −z direction. We start from the last layer of the micro-cup, where z = 0 mm, and proceed towards the top of the cup, i.e. the cup's contour, since the cup is observed at a 45° angle. The phase difference between the phase distributions recovered across a certain layer gives a contouring map.
  3. The focus region is indicated by monitoring where the focus measure is minimal, which means that the object in this area is in focus. This is implemented by calculating the value of LTV within a fixed window of 80 × 80 pixels across all object layers, see the brown box in the contouring maps in Fig. 6. Comparing these values and finding their minimum indicates where the object is in focus.
  4. Step iii) is repeated by successively moving the window to the neighboring area, as shown by the yellow box in Fig. 6. For each position, the minimum of LTV is determined. This step ends when all in-focus areas, corresponding to the minima of LTV of the complete phase difference images through all layers, have been defined.
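
The propagation of step ii) can be sketched with the angular spectrum (plane wave decomposition) of Eqs. (10)–(11). The function signature and sampling values below are illustrative assumptions; evanescent frequencies, where the square root would turn imaginary, are simply suppressed.

```python
import numpy as np

def propagate(u, dz, lam, pitch):
    """Propagate complex amplitude u by dz via Eqs. (10)-(11).

    u     : 2D complex field sampled on a regular grid
    dz    : propagation distance in meters (negative for -z direction)
    lam   : wavelength in meters
    pitch : sampling pitch in meters
    """
    n_i, n_j = u.shape
    fi = np.fft.fftfreq(n_i, d=pitch)
    fj = np.fft.fftfreq(n_j, d=pitch)
    v2 = fi[:, None] ** 2 + fj[None, :] ** 2          # |v|^2
    arg = 1.0 - lam ** 2 * v2
    # Eq. (11); evanescent components (arg < 0) are set to zero
    Hz = np.where(arg >= 0,
                  np.exp(2j * np.pi * dz / lam * np.sqrt(np.maximum(arg, 0.0))),
                  0.0)
    return np.fft.ifft2(np.fft.fft2(u) * Hz)           # Eq. (10)

# round-trip sanity check at the camera's pixel pitch (4.54 um, Fig. 9)
rng = np.random.default_rng(0)
u1 = rng.standard_normal((64, 64)) + 1j * rng.standard_normal((64, 64))
u2 = propagate(u1, 50e-6, 639e-9, 4.54e-6)
u1_back = propagate(u2, -50e-6, 639e-9, 4.54e-6)
```

Since |Hz| = 1 on the propagating band, the transfer function conserves energy and propagating forward and backward by the same distance recovers the input field.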

Fig. 6 Scheme representing the implementation of the autofocus approach. We start with the two simulated complex amplitudes, which are numerically propagated using Eq. (10), starting from z = 0 mm, through all subsequent object layers. The two complex amplitudes obtained at each layer are then complex-conjugate multiplied to obtain the contouring map across that layer. Within a window of 80 × 80 pixels (brown square), the minimum of the error metric LTV is determined over all object layers. By comparing the values of LTV, its minimum at each pixel within the window is defined, which indicates where the object is in focus. Shifting the window to a neighboring area (yellow box) and repeating the process of finding the minimum of LTV reveals the other areas where the object is in focus.

For demonstration purposes, two complex amplitudes for light diffracted from a numerical model of a micro-cup are simulated. The diameter of the simulated cup is 1 mm and its depth is 0.5 mm. To make the cup surface optically rough, random surface variations with a maximum height of 10 λ are added. Thus, speckle wave fields similar to the wave fields expected from the measurement are simulated. The two complex amplitudes correspond to two measurements, one for λ1 = 639.0 nm and the second for λ2 = 642.5 nm. This leads to a synthetic wavelength of Λ = 117.3 µm. The results of the autofocus approach are shown in Fig. 7. We started with the two simulated complex amplitudes. These complex amplitudes are numerically propagated using Eq. (10), starting from the z = 0.0 mm layer, which corresponds to the last layer of the cup's cover, to the previous object layer in the −z direction. Since we know the depth of the micro-cup a priori (ca. 0.5 mm), the end propagation distance is known. The distance between two layers depends on the predefined number of object layers. We found that ten object layers give good results, corresponding to a distance of 50 µm between two layers. Increasing the number of object layers increases the processing time. The two complex amplitudes obtained after the numerical propagation, which correspond to the two wavelengths at each layer, are complex-conjugate multiplied. Thus, the contouring map across that layer is obtained, see Fig. 7(a). Within a fixed window of 80 × 80 pixels, the minimum of the error metric LTV is determined through all object layers; the metric is relatively high within windows where the object is out of focus because of speckle decorrelation [42]. By comparing the values of LTV, the minimum, which indicates where the object is in focus within this window, is defined, see Fig. 7(b).
Shifting the window to a neighboring area and repeating the process of determining the minimum of LTV, other areas where the object is in-focus are found. Figure 7(c) shows a mask of the minimum of the error metric across the complete contouring phase map. At the end, all found focus areas are stitched together to obtain the focus contouring map as shown in Fig. 7(d). Visualization of the whole autofocus process is shown in Visualization 1.
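
The window-wise selection and stitching can be summarized in a short sketch. The function, the plain variance metric and the two-layer toy data below are our illustrative stand-ins for the LTV-based procedure described above: for every 80 × 80 window the layer with the lowest focus measure wins, and the winning windows are assembled into one map.

```python
import numpy as np

def stitch_in_focus(maps, metric, win=80):
    """Pick, per window, the layer minimizing `metric` and stitch the result.

    maps   : stack of contouring maps, shape (n_layers, n_i, n_j)
    metric : callable scoring a 2D window (lower = more in focus)
    """
    n_layers, n_i, n_j = maps.shape
    out = np.zeros((n_i, n_j))
    for i0 in range(0, n_i, win):
        for j0 in range(0, n_j, win):
            scores = [metric(maps[k, i0:i0 + win, j0:j0 + win])
                      for k in range(n_layers)]
            best = int(np.argmin(scores))   # layer where this window is in focus
            out[i0:i0 + win, j0:j0 + win] = maps[best, i0:i0 + win, j0:j0 + win]
    return out

# toy data: layer 0 is in focus (flat) on the left, layer 1 on the right,
# with speckle-like noise marking the defocused halves
rng = np.random.default_rng(2)
maps = np.zeros((2, 160, 160))
maps[0, :, 80:] = rng.uniform(-np.pi, np.pi, (160, 80))
maps[1, :, :80] = rng.uniform(-np.pi, np.pi, (160, 80))
stitched = stitch_in_focus(maps, np.var, win=80)   # all-flat stitched map
```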

Fig. 7 Demonstration of the autofocus approach using simulation results. a) shows the contouring phase map obtained for the third object layer at z = 150 µm. The yellow highlighted area represents the plane shown by the darkest area of the error metric in b). c) shows a median filtered version of the error metric. In d) all found focus areas are stitched together to obtain the focus contouring map (see Visualization 1).

4. Experimental results

Two laser diodes with λ1 = 636.55 nm and λ2 = 642.10 nm are used to create, in accordance with Fig. 3, the object and reference illumination for the experimental setup. Accordingly, a synthetic wavelength of Λ = 73.64 µm is utilized for the evaluation process.

Figure 8(a) shows an example of a captured hologram. During the recording process, the camera exposure time was 50 ms. Figure 8(b) shows the hologram spectrum, which contains all information required for implementing the contouring approach. It is noted that there is no cross talk between the two holograms recorded in the single-shot camera image: the object information related to each wavelength is shifted to a specific position.

Fig. 8 a) shows the captured intensity hologram, which contains the object information for the simultaneous illumination with the two wavelengths. The zoomed area shows the interference fringes of the recorded hologram, which cause the separation of the information of the two simultaneously captured holograms in the Fourier plane, as shown in the hologram's spectrum in b).

From the captured hologram one can retrieve ϕλ1 and ϕλ2 using the method described in Sec. 2.2. Thus, two phase distributions are obtained at the regarded plane, as shown in Fig. 9(a), where only the real amplitude is presented. The contouring fringes, Fig. 9(b), are only sharp for the object's parts which are in focus (see the area marked with the yellow box in Fig. 9(a)). Due to the limited depth of focus of the utilized microscope objective, the other areas are out of focus and corrupted by speckle decorrelation [18].

Fig. 9 Reconstructed amplitude images: a) a sharp image of the micro-cup under test at one plane, for the complex amplitude recovered from the λ1 measurement, and c) across the whole object after digitally extending the depth of focus of the DHM. b) and d) show the corresponding phase differences Δϕ, respectively. Image size is 2200 × 2200 pixels with a pixel pitch of 4.54 µm.

To obtain a sharp contouring map across the whole object, the recovered object wave fields are processed by the autofocus approach described in Sec. 3. The required complex amplitudes are recovered starting from the last layer of the cup's cover, and the propagation is again done in the −z direction. The trade-off between Δz and the minimum of LTV is governed by the depth of field of the utilized telecentric objective, since outside the objective's depth of field the speckle decorrelation is high. Thus, there is no need to decrease Δz below the objective's depth of field. Accordingly, a distance between consecutive layers of Δz = 65 µm is chosen. Ten layers, as in the simulation, are utilized to recover a focus contouring map. The focus region is indicated by monitoring where the focus measure is minimal.

Accordingly, the different areas where the object is in focus are found and stitched together, forming the sharp contouring map; see Fig. 9(c) for the real amplitude and Fig. 9(d) for the contouring map. In the contouring maps, Figs. 9(b) and 9(d), a surface defect (dent) is clearly visible, which means that the contouring map can be used as an input for surface inspection. Object defects with a lateral size from 2 µm, limited by the numerical aperture of the objective, and a depth of at least 5 µm can be detected using our setup.

For fast evaluation, the automated process was implemented on a graphics processing unit (GPU), a GeForce GTX TITAN X with 12 GB RAM, 3584 cores and 10 Gbps memory speed. The time required to carry out all proposed post-processing for the autofocus approach is one second, enabling real-time inspection of cold drawn micro-parts. As can be seen, the resulting contouring phase map is corrupted by speckle noise. To reduce the speckle noise in two-wavelength contouring, the multiple illumination approach described in [27] can be used. Finally, the contouring phase map has to be unwrapped. For this purpose, the phase unwrapping max-flow/min-cut algorithm (PUMA) is used [43]. Substituting the values into Eq. (4) yields the 3D height map shown in Fig. 10.

Fig. 10 3D height map obtained by substituting the phase values from the unwrapped contouring map into Eq. (4). It shows the part of the micro-cup that is observed by the DHM unit. The white spots within the surface refer to areas with phase singularities which cannot be unwrapped.

Based on these results, the technique is recommended for form measurements of macro- and micro-scale optically rough technical objects where speckle exists. Examples include in-line quality assurance during the production of micro components, where 100% quality inspection is mandatory to develop and/or optimize the product and the manufacturing process.

5. Conclusions

A novel approach for shape measurements of micro-parts is presented. It is based on two-wavelength holographic contouring in combination with a novel autofocus concept. The holographic system utilizes a telecentric lens instead of a standard microscope objective to compensate the scaling effects and wave field curvature which distort the reconstruction in digital holographic microscopy. Spatial multiplexing is used to simultaneously capture two holograms with a single camera, reducing the measuring time required to capture the two holograms in two-wavelength contouring. For fast evaluation, an autofocus algorithm is proposed that determines the location of the object plane and refocuses the object digitally without the need for any external intervention. It is based on finding the minimum of a metric which measures the total variation of the statistical fluctuations of the phase difference (contouring) maps. This is why the approach is limited to the investigation of optically rough objects. The evaluation algorithms are parallelized on a GPU in order to speed up the computation. The method is demonstrated by determining the 3D form of a cold-formed micro-cup. The system is able to detect defects from 2 µm lateral size with a depth of at least 5 µm. The complete evaluation process, including holographic capture and acquisition, takes about one second.

Funding

Deutsche Forschungsgemeinschaft (DFG, German Research Foundation)

Acknowledgments

The authors gratefully acknowledge the financial support by the DFG for Subproject B5 "Safe processes" within the Collaborative Research Center "SFB 747 - Micro Cold Forming". We also thank Reiner Klattenhoff for his support in the experiment and Fabian Thiemicke for his support in the Matlab-based data processing.

References

1. M. Agour, P. Huke, C. v. Kopylow, and C. Falldorf, “Shape measurement by means of phase retrieval using a spatial light modulator,” AIP Conf. Proc. 1236, 265–270 (2010). [CrossRef]  

2. C. v. Kopylow and R. B. Bergmann, “Optical metrology,” in Micro Metal Forming, F. Vollertsen, ed. (Springer, 2013), pp. 392–404.

3. J. C. Wyant, “White light interferometry,” Proc. SPIE 4737, 4737 (2002).

4. P. Pavlíček and G. Häusler, “White-light interferometer with dispersion: an accurate fiber-optic sensor for the measurement of distance,” Appl. Opt. 44, 2978–2983 (2005). [CrossRef]  

5. D. Huang, E. Swanson, C. Lin, J. Schuman, W. Stinson, W. Chang, M. Hee, T. Flotte, K. Gregory, C. Puliafito, et al., “Optical coherence tomography,” Science 254, 1178–1181 (1991). [CrossRef]   [PubMed]  

6. C. Falldorf, M. Agour, and R. B. Bergmann, “Digital holography and quantitative phase contrast imaging using computational shear interferometry,” Opt. Eng. 54, 024110 (2015). [CrossRef]  

7. C. Falldorf, M. Agour, C. v. Kopylow, and R. B. Bergmann, “Phase retrieval by means of a spatial light modulator in the fourier domain of an imaging system,” Appl. Opt. 49, 1826–1830 (2010). [CrossRef]   [PubMed]  

8. C. Falldorf, M. Agour, C. v. Kopylow, and R. B. Bergmann, “Phase retrieval for optical inspection of technical components,” J. Opt. 14, 065701 (2012). [CrossRef]  

9. M. Agour, “Optimal strategies for wave fields sensing by means of multiple planes phase retrieval,” J. Opt. 17, 085604 (2015). [CrossRef]  

10. U. Schnars and W. Jüptner, “Direct recording of holograms by a ccd target and numerical reconstruction,” Appl. Opt. 33, 179–181 (1994). [CrossRef]   [PubMed]  

11. U. Schnars, C. Falldorf, J. Watson, and W. Jüptner, Digital Holography and Wavefront Sensing: Principles, Techniques and Applications (Springer, Berlin/Heidelberg, 2015).

12. S. Seebacher, W. Osten, and W. Jüptner, “Measuring shape and deformation of small objects using digital holography,” (1998).

13. P. Marquet, B. Rappaz, P. J. Magistretti, E. Cuche, Y. Emery, T. Colomb, and C. Depeursinge, “Digital holographic microscopy: a noninvasive contrast imaging technique allowing quantitative visualization of living cells with subwavelength axial accuracy,” Opt. Lett. 30, 468–470 (2005). [CrossRef]   [PubMed]  

14. K. Haines and B. Hildebrand, “Contour generation by wavefront reconstruction,” Phys. Lett. 19, 10–11 (1965). [CrossRef]  

15. C. Wagner, W. Osten, and S. Seebacher, “Direct shape measurement by digital wavefront reconstruction and multi-wavelength contouring,” Opt. Eng. 39, 79–86 (2000). [CrossRef]  

16. I. Yamaguchi, T. Ida, M. Yokota, and K. Yamashita, “Surface shape measurement by phase-shifting digital holography with a wavelength shift,” Appl. Opt. 45, 7610–7616 (2006). [CrossRef]   [PubMed]  

17. N. George and A. Jain, "Space and wavelength dependence of speckle intensity," Appl. Phys. 4, 201–212 (1974). [CrossRef]  

18. N. Wang, C. v. Kopylow, and C. Falldorf, “Rapid shape measurement of micro deep drawing parts by means of digital holographic contouring,” in Proceedings of the 36th International MATADOR Conference, S. Hinduja and L. Li, eds. (Springer London, London, 2010), pp. 45–48.

19. M. Agour, R. Klattenhoff, C. Falldorf, and R. B. Bergmann, “Speckle noise reduction in single-shot holographic two-wavelength contouring,” Proc. SPIE 10233, 102330R (2017). [CrossRef]  

20. M. Agour, R. Klattenhoff, C. Falldorf, and R. B. Bergmann, “Spatial multiplexing digital holography for speckle noise reduction in single-shot holographic two-wavelength contouring,” Opt. Eng. 56, 124101 (2017). [CrossRef]  

21. T. Kreis, Handbook of holographic interferometry: optical and digital methods (John Wiley & Sons, 2005).

22. J. Kühn, T. Colomb, F. Montfort, F. Charrière, Y. Emery, E. Cuche, P. Marquet, and C. Depeursinge, “Real-time dual-wavelength digital holographic microscopy with a single hologram acquisition,” Opt. Express 15, 7231–7242 (2007). [CrossRef]   [PubMed]  

23. S. T. Thurman and A. Bratcher, “Multiplexed synthetic-aperture digital holography,” Appl. Opt. 54, 559–568 (2015). [CrossRef]  

24. M. Takeda, H. Ina, and S. Kobayashi, “Fourier-transform method of fringe-pattern analysis for computer-based topography and interferometry,” J. Opt. Soc. Am. 72, 156–160 (1982). [CrossRef]  

25. M. Agour, K. El-Farahaty, E. Seisa, E. Omar, and T. Sokkar, “Single-shot digital holography for fast measuring optical properties of fibers,” Appl. Opt. 54, E188–E195 (2015). [CrossRef]   [PubMed]  

26. T. Z. N. Sokkar, K. A. El-Farahaty, M. A. El-Bakary, E. Z. Omar, M. Agour, and A. A. Hamza, “Adaptive spatial carrier frequency method for fast monitoring optical properties of fibres,” Appl. Phys. B 122, 138 (2016). [CrossRef]  

27. C. Falldorf, S. H. von Luepke, C. von Kopylow, and R. B. Bergmann, “Reduction of speckle noise in multiwavelength contouring,” Appl. Opt. 51, 8211–8215 (2012). [CrossRef]   [PubMed]  

28. C. Yourassowsky and F. Dubois, “High throughput holographic imaging-in-flow for the analysis of a wide plankton size range,” Opt. Express 22, 6661–6673 (2014). [CrossRef]   [PubMed]  

29. D. Lebrun, A. Benkouider, S. Coëtmellec, and M. Malek, “Particle field digital holographic reconstruction in arbitrary tilted planes,” Opt. Express 11, 224–229 (2003). [CrossRef]   [PubMed]  

30. P. Langehanenberg, L. Ivanova, I. B. S. Ketelhut, A. Vollmer, D. Dirksen, G. K. Georgiev, G. v. Bally, and B. Kemper, “Automated three-dimensional tracking of living cells by digital holographic microscopy,” J. Biomed. Opt. 14, 014018 (2009). [CrossRef]   [PubMed]  

31. D. Dirksen, H. Droste, B. Kemper, H. Deleré, M. Deiwick, H. Scheld, and G. von Bally, “Lensless Fourier holography for digital holographic interferometry on biological samples,” Opt. Lasers Eng. 36, 241–249 (2001). [CrossRef]  

32. D. Carl, B. Kemper, G. Wernicke, and G. v. Bally, “Parameter-optimized digital holographic microscope for high-resolution living-cell analysis,” Appl. Opt. 43, 6536–6544 (2004). [CrossRef]  

33. P. Ferraro, G. Coppola, S. D. Nicola, A. Finizio, and G. Pierattini, “Digital holographic microscope with automatic focus tracking by detecting sample displacement in real time,” Opt. Lett. 28, 1257–1259 (2003). [CrossRef]   [PubMed]  

34. J. Gillespie and R. A. King, “The use of self-entropy as a focus measure in digital holography,” Pattern Recognit. Lett. 9, 19–25 (1989). [CrossRef]  

35. P. Ferraro, S. D. Nicola, A. Finizio, G. Coppola, S. Grilli, C. Magro, and G. Pierattini, “Compensation of the inherent wave front curvature in digital holographic coherent microscopy for quantitative phase-contrast imaging,” Appl. Opt. 42, 1938–1946 (2003). [CrossRef]   [PubMed]  

36. F. Dubois, C. Schockaert, N. Callens, and C. Yourassowsky, “Focus plane detection criteria in digital holography microscopy by amplitude analysis,” Opt. Express 14, 5895–5908 (2006). [CrossRef]   [PubMed]  

37. P. Memmolo, C. Distante, M. Paturzo, A. Finizio, P. Ferraro, and B. Javidi, “Automatic focusing in digital holography and its application to stretched holograms,” Opt. Lett. 36, 1945–1947 (2011). [CrossRef]   [PubMed]  

38. L. Ma, H. Wang, Y. Li, and H. Jin, “Numerical reconstruction of digital holograms for three-dimensional shape measurement,” J. Opt. A: Pure Appl. Opt. 6, 396 (2004). [CrossRef]  

39. M. Liebling and M. Unser, “Autofocus for digital Fresnel holograms by use of a Fresnelet-sparsity criterion,” J. Opt. Soc. Am. A 21, 2424–2430 (2004). [CrossRef]  

40. M. F. Toy, S. Richard, J. Kühn, A. Franco-Obregón, M. Egli, and C. Depeursinge, “Enhanced robustness digital holographic microscopy for demanding environment of space biology,” Biomed. Opt. Express 3, 313–326 (2012). [CrossRef]   [PubMed]  

41. J. W. Goodman, Introduction to Fourier optics (Roberts and Company Publishers, 2005).

42. C. Wykes, “De-correlation effects in speckle-pattern interferometry. 1. wavelength change dependent de-correlation with application to contouring and surface roughness measurement,” Opt. Acta: Int. J. Opt. 24, 517–532 (1977). [CrossRef]  

43. J. M. Bioucas-Dias and G. Valadao, “Phase unwrapping via graph cuts,” IEEE Trans. Image Process. 16, 698–709 (2007). [CrossRef]  

Supplementary Material (1)

Visualization 1: Visualization of the proposed autofocus approach.



Figures (10)

Fig. 1
Fig. 1 Micro-cups as an example of metallic cold-formed micro-parts. The micro-cups measure less than 1 mm in at least two dimensions. Image captured by Lukas Heinrich, BIAS.
Fig. 2
Fig. 2 Scheme of the digital holographic microscopy (DHM) setup. A long-distance microscope objective (LDM) with 10× magnification, a numerical aperture of N.A. = 0.21 and a working distance of 51 mm images the object under test onto a CCD camera sensor. Optical fiber splitters and couplers are used to create the illumination and reference waves, which are combined using a 50:50 beam splitter in front of the CCD camera. An angle of α = 22° is found to be appropriate for illuminating the micro-cup under test.
Fig. 3
Fig. 3 Scheme showing the configuration of the object illumination and reference waves originating from the laser diodes with wavelengths λ1 and λ2, utilizing optical fiber splitters and couplers.
Fig. 4
Fig. 4 Schematic representation of the expected spectrum of a single hologram, where +1, −1, and the dc-term refer to the diffraction orders and fi and fj represent the frequencies in the i and j directions. Here, the spectra corresponding to the wavelengths λ1 and λ2 are shown as an example.
Fig. 5
Fig. 5 Scheme representing the implementation of the spatial carrier frequency method, summarized in steps i) to iv) and utilized to recover the phase information from the captured intensity hologram. ℱ and ℱ−1 refer to the Fourier transform and its inverse, U_O^{λi} is the filtered object wave field corresponding to the λi measurement (i = 1, 2), and arg{·} is the argument of a complex number, which gives its phase ϕλi.
Fig. 6
Fig. 6 Scheme representing the implementation of the autofocus approach. We start with the two simulated complex amplitudes, which are numerically propagated using Eq. (10) from z = 0 mm to all subsequent object layers. The two complex amplitudes obtained at each layer are then conjugate-multiplied, yielding the contouring map across that layer. Within a window of 80 × 80 pixels (brown square), the minimum of the error metric LTV is determined across all object layers. By comparing the LTV values, the layer with the minimal metric at each pixel within the window is identified, which indicates where the object is in focus. The window is then shifted to a neighboring area (yellow box) and the process is repeated to find the remaining areas where the object is in focus.
Fig. 7
Fig. 7 Demonstration of the autofocus approach using simulation results. a) The contouring phase map obtained for the third object layer, where z = 150 µm. The yellow highlighted area represents the plane that appears as the darkest area of the error metric shown in b). c) A median-filtered version of the error metric. d) All in-focus areas found are stitched together to obtain the focused contouring map (see Visualization 1).
Fig. 8
Fig. 8 a) The captured intensity hologram, which contains the object information for simultaneous illumination with the two wavelengths. The zoomed area shows the interference fringes of the recorded hologram, which cause the separation of the information of the two simultaneously captured holograms in the Fourier plane, as shown in the hologram’s spectrum in b).
Fig. 9
Fig. 9 Reconstructed amplitude images of the micro-cup under test: a) sharp in a single plane, recovered from the λ1 measurement, and c) sharp across the whole object after digitally extending the depth of focus of the DHM. b) and d) show the corresponding phase differences Δϕ, respectively. Image size is 2200 × 2200 pixels with a pixel pitch of 4.54 µm.
Fig. 10
Fig. 10 3D height map obtained by substituting the phase values of the unwrapped contouring map into Eq. (4). It shows the part of the micro-cup that is observed by the DHM unit. The white spots within the surface refer to areas with phase singularities which cannot be unwrapped.
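The spatial carrier frequency demodulation summarized in Fig. 5 (Fourier transform, band-pass filtering of one +1 order, re-centering, inverse transform, arg{·}) can be sketched in a few lines. The function name, filter position and radius below are illustrative placeholders, not the paper's calibrated values:

```python
import numpy as np

def recover_phase(hologram, center, radius):
    """Minimal sketch of spatial-carrier phase recovery (steps i-iv of Fig. 5).

    A circular filter at `center` (row, col) with `radius` isolates one +1
    diffraction order in the shifted Fourier spectrum; the order is then
    moved to the spectrum center (removing the carrier) and transformed
    back. arg{.} of the resulting field gives the phase map.
    """
    F = np.fft.fftshift(np.fft.fft2(hologram))
    n, m = hologram.shape
    I, J = np.ogrid[:n, :m]
    mask = (I - center[0]) ** 2 + (J - center[1]) ** 2 <= radius ** 2
    filtered = np.where(mask, F, 0)
    # shift the selected order to the spectrum center (removes the carrier)
    filtered = np.roll(filtered, (n // 2 - center[0], m // 2 - center[1]),
                       axis=(0, 1))
    u = np.fft.ifft2(np.fft.ifftshift(filtered))
    return np.angle(u)
```

For a hologram with a pure carrier of k cycles per field and a constant object phase, the recovered map is that constant phase over the whole field.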
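The windowed search for in-focus layers described in Fig. 6 can be sketched as follows, assuming a precomputed stack of per-pixel error-metric values (one slice per propagation distance). `best_layer_map` and its arguments are hypothetical names for illustration, not the authors' implementation:

```python
import numpy as np

def best_layer_map(ltv_stack, win=80):
    """For each win x win window, pick the layer with minimal summed metric.

    ltv_stack: array (n_layers, H, W) holding the local error-metric values
               of the contouring map at every layer.
    Returns an (H//win, W//win) map of in-focus layer indices, from which
    the focused areas can be stitched together as in Fig. 6.
    """
    n_layers, H, W = ltv_stack.shape
    out = np.zeros((H // win, W // win), dtype=int)
    for a in range(H // win):
        for b in range(W // win):
            block = ltv_stack[:, a * win:(a + 1) * win, b * win:(b + 1) * win]
            # layer whose metric is minimal within this window
            out[a, b] = np.argmin(block.sum(axis=(1, 2)))
    return out
```

Scanning the window over the field and keeping, per window, the layer of minimal metric reproduces the stitching step of the autofocus scheme.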

Equations (11)


$$G_{\lambda_n}(\mathbf{y}) = \exp\left[i 2\pi\, \Delta\mathbf{x}_n \cdot \mathbf{y}\right]. \tag{1}$$

$$I(\mathbf{y}) = A(\mathbf{y}) + \sum_{n=1}^{2}\left[U_O^{\lambda_n}(\mathbf{y})\, U_R^{\lambda_n *}(\mathbf{y})\, G_{\lambda_n}(\mathbf{y})\right] + \sum_{n=1}^{2}\left[U_O^{\lambda_n *}(\mathbf{y})\, U_R^{\lambda_n}(\mathbf{y})\, G_{\lambda_n}^{*}(\mathbf{y})\right]. \tag{2}$$

$$\Delta\phi = \phi_{\lambda_2} - \phi_{\lambda_1}. \tag{3}$$

$$\Delta\phi = \frac{2\pi}{\Lambda}\, z_p \left(1 + \cos\alpha\right). \tag{4}$$

$$\Lambda = \frac{\lambda_1 \lambda_2}{\left|\lambda_2 - \lambda_1\right|}. \tag{5}$$

$$L_{TV} = \sum_{\mathbf{y} \in R} \left|\nabla\phi_{Res}\right|^2, \tag{6}$$

$$\phi_{Res} = \Delta\phi - \widetilde{\Delta\phi}. \tag{7}$$

$$\nabla\phi_{Res} = \left(\frac{\partial\phi_{Res}}{\partial y_i},\ \frac{\partial\phi_{Res}}{\partial y_j}\right) \quad\text{and}\quad \left|\nabla\phi_{Res}\right|^2 = \left(\frac{\partial\phi_{Res}}{\partial y_i}\right)^2 + \left(\frac{\partial\phi_{Res}}{\partial y_j}\right)^2. \tag{8}$$

$$\frac{\partial\phi_{Res}}{\partial y_i} \approx \left|\phi_{Res}(y_i + 1) - \phi_{Res}(y_i)\right| \bmod 2\pi. \tag{9}$$

$$U_2 = \mathcal{F}^{-1}\left\{\mathcal{F}\left\{U_1\right\} \cdot H_z\right\}. \tag{10}$$

$$H_z(\boldsymbol{\nu}) = \exp\left[i\, \frac{2\pi \Delta z}{\lambda_1} \sqrt{1 - \lambda_1^2 \left|\boldsymbol{\nu}\right|^2}\right]. \tag{11}$$
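The synthetic wavelength of Eq. (5) and the phase-to-height conversion of Eq. (4) can be exercised numerically. The wavelength values below are hypothetical, chosen only to illustrate the calculation, not the paper's laser-diode wavelengths:

```python
import numpy as np

# Hypothetical two-wavelength configuration (the paper's actual values differ)
lam1, lam2 = 640e-9, 660e-9          # illumination wavelengths [m]
alpha = np.deg2rad(22)               # illumination angle, cf. Fig. 2

# Eq. (5): synthetic wavelength Lambda = lam1*lam2 / |lam2 - lam1|
Lam = lam1 * lam2 / abs(lam2 - lam1)

def height_from_phase(delta_phi, Lam, alpha):
    """Invert Eq. (4): Delta_phi = (2*pi/Lambda) * z_p * (1 + cos(alpha)),
    returning the height z_p for an unwrapped phase difference delta_phi."""
    return delta_phi * Lam / (2 * np.pi * (1 + np.cos(alpha)))
```

Note that the unambiguous height range grows with Λ: the smaller the wavelength separation, the larger the measurable step height, at the cost of noise amplification.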
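The focus metric of Eqs. (6)-(9) can be sketched as a sum of squared wrapped finite differences of the phase-difference residual map. As a minimal sketch, the differences are wrapped into (−π, π] before squaring, which is equivalent to taking them modulo 2π as in Eq. (9) for the purposes of the squared metric; the function names are illustrative:

```python
import numpy as np

def wrap(p):
    """Wrap phase values into (-pi, pi]."""
    return (p + np.pi) % (2 * np.pi) - np.pi

def l_tv(phi_res):
    """Total-variation-like focus metric L_TV of a residual phase map.

    Finite differences along y_i and y_j are wrapped so that 2*pi phase
    jumps do not dominate the metric (cf. Eqs. (6)-(9)); the squared
    gradient magnitude is then summed over the evaluation region.
    """
    di = wrap(np.diff(phi_res, axis=0))   # derivative along y_i
    dj = wrap(np.diff(phi_res, axis=1))   # derivative along y_j
    return np.sum(di[:, :-1] ** 2 + dj[:-1, :] ** 2)
```

In the autofocus scheme, this metric is minimal where the reconstructed contouring map is smoothest, i.e. where the object layer is in focus.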
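The numerical propagation of Eqs. (10) and (11) is the standard angular-spectrum method. A minimal sketch, assuming a square sampling pitch `dx` and suppressing evanescent components:

```python
import numpy as np

def propagate(u1, dz, lam, dx):
    """Angular-spectrum propagation of a sampled field u1 over distance dz.

    Implements Eq. (10), U2 = F^{-1}{ F{U1} * H_z }, with the transfer
    function of Eq. (11), H_z(nu) = exp[i (2*pi*dz/lam) sqrt(1 - lam^2 |nu|^2)].
    """
    n, m = u1.shape
    fi = np.fft.fftfreq(n, d=dx)          # spatial frequencies along y_i
    fj = np.fft.fftfreq(m, d=dx)          # spatial frequencies along y_j
    nu2 = fi[:, None] ** 2 + fj[None, :] ** 2
    arg = 1.0 - lam ** 2 * nu2
    hz = np.exp(1j * 2 * np.pi * dz / lam * np.sqrt(np.maximum(arg, 0.0)))
    hz[arg < 0] = 0.0                     # suppress evanescent components
    return np.fft.ifft2(np.fft.fft2(u1) * hz)
```

Because |H_z| = 1 for all propagating components, the field energy is conserved, and repeated calls with small Δz step through the object layers as required by the autofocus scheme of Fig. 6.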