
Compact image scanner with large depth of field by compound eye system

Open Access

Abstract

A compact image scanner is designed using a compound eye system of plural optical units in which the ray path is folded by reflective optics. The optical units are aligned in two lines, and each takes an image of a separate, slightly overlapping field of view (FOV). Since the optical units are telecentric in the object space and the magnification ratio is constant regardless of the object distance, the separated pieces of the total image are easily combined with each other even at a defocused position. Since the optical axes of adjacent optical units cross obliquely, the object distance is derived from the parallax at each boundary position and an adequate deblurring process is applied to the defocused image.

©2012 Optical Society of America

1. Introduction

A reduction-type image sensor, which has scanning mirrors, a fixed imaging lens, and a line image sensor, is used as the main reading device of a copier to meet the specifications of high resolution and large depth of field (DOF). However, a reduction-type image sensor is bulky because the object distance is large, and it needs a complicated driving mechanism to scan the mirrors while keeping the object distance constant.

Compound eye optics has been researched for a long time as a way to reduce the size of imaging optics. A large FOV is divided into many small areas, and plural units of an imaging optical system each take an image of a separate area, thus reducing the total size.

In the first type of compound eye optics, plural images from the optical units are optically overlapped on the image plane. The first lens array forms inverted and shrunk images on an intermediate plane, and the second lens array accurately superimposes those images on an image sensor. Anderson developed a close-up imaging system forming an erect composite image with two arrays of lenses for an oscilloscope [1]. Recently, Meyer et al. reported a wafer-level camera based on microlens arrays and aperture arrays [2]. A contact image sensor (CIS) based on a gradient-index (GRIN) lens array [3] can be classified as this type of compound eye system. The GRIN lens array is widely used in the scanner market because of its compactness and low cost; however, its DOF is small, which limits its use in copiers to automatic document feeders only. This first type of compound eye system has two difficulties: (a) it requires an erecting system with strictly the same magnification for each lens, and (b) a slight misalignment between the lens arrays causes mismatching of the images.

In the second type of compound eye optics, each optical unit forms an independent image on its own image sensor, and those images are combined electronically into a final output image [4–6]. Limiting our discussion to image scanners, there are many reports and patents of image scanners based on this second type [7,8]. However, this type of image scanner is used only as a CIS, in which the object distance is constant, because the magnification ratio changes with the object distance and this causes mismatching in the image combining process.

We developed a compact image scanner based on the second type of compound eye optics. By applying a telecentric optical system, the DOF is large enough for the scanner to be used as the main reading device of a copier.

2. Basic construction of our compound eye optics and principle of large depth of field

Figure 1 shows the conceptual construction of our optics to explain the basic idea. Each cylinder in Fig. 1 represents an optical unit. The X direction is called the main scanning direction. The Y direction is called the sub scanning direction, in which a paper document placed on the top glass is scanned. A large FOV along the X direction is divided into small fields of 10 mm length, and each optical unit takes an image of one of them. The optical units are arranged in a zigzag layout of two lines, A and B. The optical axes are inclined in the Y direction as shown in Fig. 1(c) so that every optical unit has the same reading position on the top glass in the Y direction. Each optical unit forms an inverted image on its respective image sensor.

Fig. 1 Conceptual construction of our compound eye scanner.

Figure 2 shows the image combining process. Figure 2(a) shows a Japanese character as an object on the top glass, which straddles the FOVs of the No. 1 and No. 2 optical units. The FOVs overlap each other slightly. The images taken by the two optical units after scanning in the Y direction are shown in Fig. 2(b). The images in Fig. 2(b) share the same part S'S, which is the transcription of the overlapped FOV between the No. 1 and No. 2 optical units. In the electronic image combining process, the images are re-inverted and combined so that the overlapped parts coincide.

Fig. 2 Image combining process.

Each optical unit is telecentric on the object side, as shown in Fig. 1(b); this is the key feature of our optics. The image size produced by each optical unit does not change with the object distance. Even when an object floats above the top glass, for example when the binding area of a book does not contact the top glass as shown in Fig. 1(b), the overlapped area in the X direction does not change. Therefore the images can be combined with each other without the magnification variation that would degrade image quality. If the optical units were aligned in a single line and took images without any gap between their FOVs, it would be impossible to construct telecentric optics in the compound eye system, because all principal rays are parallel to the optical axis of the optical unit and the aperture of the first lens would therefore have to be larger than the field length of the optical unit. So the optical units are aligned in a zigzag arrangement of two lines to avoid mechanical interference between adjacent optical units. This causes a shift of the reading positions of the A and B lines in the Y direction when an object floats above the top glass. The gap between the images of adjacent optical units, Δy, changes linearly with the object distance from the top glass, Δz, as expressed by Eq. (1):

$$\Delta y \propto \Delta z. \tag{1}$$
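The proportionality constant is set by the inclination of the optical axes. As an illustrative paraxial sketch (the symmetric tilt angle θ of the A-line and B-line axes is our simplifying assumption, not a value stated in the design), an object floating by Δz moves the two reading positions by ±Δz tan θ in the Y direction, so the relative image gap is approximately

$$\Delta y \approx 2\,\Delta z \tan\theta .$$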

Cross correlation is used in the image combining process. Since the two pictures of the overlapped area differ in coordinates only in the Y direction, the value of Δy is found by searching for the peak of the cross correlation between the two pictures. Thus Δz is derived from Δy at every boundary between adjacent optical units and at every position in the Y direction. The images are then locally expanded or contracted according to the local irregularity of the object surface.
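A minimal sketch of this shift search is given below in Python, assuming grayscale strips of the overlapped area from two adjacent units; the function name estimate_dy, the search range max_shift, and the calibration constant k are hypothetical and not taken from the paper.

```python
import numpy as np

def estimate_dy(strip_a, strip_b, max_shift=20):
    """Estimate the sub-scan (Y) shift between two strips of the
    overlapped FOV taken by adjacent optical units (2-D arrays, Y x X).
    Returns the integer pixel shift that maximizes the normalized
    cross correlation."""
    best_shift, best_score = 0, -np.inf
    for s in range(-max_shift, max_shift + 1):
        # Compare the rows that the two strips share after shifting by s.
        if s >= 0:
            a, b = strip_a[s:], strip_b[:strip_b.shape[0] - s]
        else:
            a, b = strip_a[:s], strip_b[-s:]
        a = a - a.mean()
        b = b - b.mean()
        score = (a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
        if score > best_score:
            best_score, best_shift = score, s
    return best_shift

# Dz then follows from the linear relation of Eq. (1):
#   dz = estimate_dy(strip_a, strip_b) / k
# where k (pixels of image shift per mm of float) is a calibration
# constant of the assembled scanner, not a value given in the paper.
```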

Additionally, since Δz is known at every position, the local blur caused by defocus can easily be sharpened. The point spread function at Δz is readily obtained from ray-trace simulation, and the blurred image is deconvolved with this point spread function. This deblurring process extends the DOF, defined as the depth range in which the modulation transfer function (MTF) exceeds a threshold, as shown in Fig. 3. The threshold value of the MTF is set, for example, to 0.3 at half of the Nyquist frequency. The original MTF versus Δz is plotted as graph (a) in Fig. 3, and its DOF is shown as za. The deblurring process raises the MTF to graph (b), and the DOF is extended to zb.
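The paper does not specify the deconvolution algorithm; as one common choice, a frequency-domain Wiener deconvolution with the ray-traced PSF could look like the following sketch (the function name wiener_deblur and the noise-to-signal parameter nsr are assumptions for illustration).

```python
import numpy as np

def wiener_deblur(blurred, psf, nsr=0.01):
    """Deconvolve a blurred 2-D image with a known PSF using a
    frequency-domain Wiener filter.

    blurred : defocused image patch (2-D array)
    psf     : point spread function at the estimated dz, e.g. looked up
              from a ray-trace table indexed by dz
    nsr     : assumed noise-to-signal ratio used as regularization
    """
    # Embed the PSF in an image-sized array and center it on the origin
    # so that its Fourier transform carries no extra shift.
    psf_pad = np.zeros(blurred.shape, dtype=float)
    ph, pw = psf.shape
    psf_pad[:ph, :pw] = psf / psf.sum()
    psf_pad = np.roll(psf_pad, (-(ph // 2), -(pw // 2)), axis=(0, 1))

    H = np.fft.fft2(psf_pad)                      # optical transfer function
    G = np.fft.fft2(blurred)                      # spectrum of the blurred image
    F = np.conj(H) * G / (np.abs(H) ** 2 + nsr)   # Wiener estimate
    return np.real(np.fft.ifft2(F))
```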

Fig. 3 Extension of depth of field (DOF) by deblurring process.

3. Design of reflective optical unit

We designed each optical unit as reflective optics that fold the ray path, to reduce the size further in addition to the compound eye design, as shown in Fig. 4. It has five optical elements: two flat mirrors (M1 and M2), two concave mirrors (L1 and L2), and an aperture stop. It is important that the aperture stop is placed at the back focal position of L1 to form an optical system that is telecentric on the object side. M1 and M2 deflect the rays in the Y direction. Since the optical path from L1 to the focal point on the object side is designed to equal the focal length of L1, 20 mm, the rays after L1 are collimated. L1 and L2 have the same curvature for ease of trial manufacture, and L2 is placed 20 mm after the aperture stop; therefore, the image side is also telecentric and the magnification ratio of the optical unit equals one.
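As a rough plausibility check of this layout (not the authors' design tool), the unfolded unit can be modeled paraxially as two thin elements of focal length 20 mm with the stop at their common focal plane; the short ray-transfer-matrix sketch below, written in Python under these simplifying assumptions, confirms unit magnification with image inversion.

```python
import numpy as np

f = 20.0  # focal length of L1 and L2 in mm (from the design above)

def prop(d):
    """Free-space propagation over distance d (ray-transfer matrix)."""
    return np.array([[1.0, d], [0.0, 1.0]])

def lens(f):
    """Thin element of focal length f (concave mirror treated as a
    thin lens in the unfolded path)."""
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

# Unfolded path: object -> f -> L1 -> f -> stop -> f -> L2 -> f -> image.
# Matrices are applied right to left, so the first propagation is rightmost.
system = prop(f) @ lens(f) @ prop(2 * f) @ lens(f) @ prop(f)

print(system)
# -> [[-1.  0.]
#     [ 0. -1.]]
# A = -1: unit magnification with image inversion; B = 0: object and
# image planes are conjugate.  With the stop at the shared focal plane,
# the chief ray is parallel to the axis on both sides (double telecentric).
```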

Fig. 4 Configuration of reflective optical elements in an optical unit from perspective view.

Figure 5 shows a picture of an optical unit, in which the four reflective optical elements are mounted on a holder; M2 is hidden at the back. The holder is made by injection molding and includes the aperture stop. Figure 6 shows the alignment of the optical units shown in Fig. 5. Our experimental image scanner consists of 14 optical units, though four of them are removed in Fig. 6 to show the location of each unit.

Fig. 5 Picture of an optical unit. M2 and an aperture stop cannot be seen in this picture.

Fig. 6 Alignment of optical units.

Figure 7 shows the projection view of the optical elements in the Y-Z plane. A thin illumination module composed of LED arrays is also shown in Fig. 7. The optical track size is compact: 60 mm in the Y direction and 23.5 mm in the Z direction. The optical units are placed in alternately inverted directions, and the optical elements are arranged so as to avoid interference between the elements of adjacent optical units.

Fig. 7 Projection view of optical elements in Y-Z plane.

Figure 8 shows the designed MTF versus Δz in the object space at a spatial frequency of ν = 6 line pairs/mm (lp/mm). We define our MTF specification at 6 lp/mm, half of the Nyquist frequency of the 600 dots per inch (dpi) sensor pixel density. Graph lines (a) and (b) are calculated at object height 0 in the X and Y directions, respectively, and lines (c) and (d) are calculated at an object height of 5 mm in the X and Y directions, respectively. Since the rays are obliquely incident on L1 and L2, the optical unit has slight astigmatism, which appears as the difference between the peak positions of lines (a) and (b) in Fig. 8. Nevertheless, the MTF of every line in Fig. 8 exceeds 0.3 over a wide range of 5 mm on the horizontal axis.

Fig. 8 MTF at 6 lp/mm versus defocus position in object space by simulation analysis.

4. Experiment

Figure 9 shows the assembly of our compound eye optics. The 14 optical units are put into a frame as shown in Fig. 9(a), in alternately inverted directions as shown in Fig. 9(b). An illumination module and an image sensor board are attached above and below the assembled imaging optics, respectively, and then a driver circuit and a signal processing circuit are connected. Paper charts are pasted onto a rotating cylindrical drum, and our image scanner is fixed in front of the drum and takes images of the charts.

Fig. 9 Assembly of our compound eye optics. (a) The optical units are put into a frame with alternately inverted directions. (b) The optical units are arranged in the same way as Fig. 6.

Figure 10(a) shows four pictures taken by four optical units in such an experiment. The horizontal and vertical directions correspond to the X and Y directions in Fig. 1, respectively. There are overlapped areas at each boundary, as seen in Fig. 10(a). Figure 10(b) is the result of the combining process, in which each piece of the picture is shifted only in the Y direction so that the boundary areas of the four pictures coincide.
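A minimal sketch of this stitching step is shown below, assuming equally sized, already re-inverted pieces and a fixed overlap width; the names combine_pieces, y_shifts, and overlap are illustrative, and np.roll is used for brevity where a real implementation would crop the shifted rows.

```python
import numpy as np

def combine_pieces(pieces, y_shifts, overlap):
    """Stitch image pieces along X, shifting each piece only in Y.

    pieces   : list of 2-D arrays (Y x X) of equal height, already
               re-inverted to upright orientation
    y_shifts : Y shift of each piece relative to the first, in pixels
               (e.g. from the cross-correlation search of Section 2)
    overlap  : width of the overlapped area at each boundary, in pixels
    """
    # Align all pieces in Y; np.roll wraps rows around, which a real
    # implementation would instead crop away.
    aligned = [np.roll(p, -s, axis=0) for p, s in zip(pieces, y_shifts)]
    out = aligned[0].astype(float)
    for piece in aligned[1:]:
        piece = piece.astype(float)
        # Average the overlapped columns, then append the rest of the piece.
        blend = 0.5 * (out[:, -overlap:] + piece[:, :overlap])
        out = np.concatenate([out[:, :-overlap], blend, piece[:, overlap:]], axis=1)
    return out
```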

Fig. 10 Example of the image combining process. (a) Before the combining process. Each image is inverted. (b) After combining process.

Figure 11 shows output images of a part of a resolution chart at several object distances. The width of this part of the resolution chart is 25 mm; therefore each image is combined from the images of three optical units. The number n printed in the images gives the number of lines and spaces per inch at that vertical position, i.e., n/2 line pairs per inch; dividing by 25.4 mm per inch then converts n to ν by Eq. (2):

$$\nu = n/50.8. \tag{2}$$

Fig. 11 Images of a resolution chart. Each image is combined from images of three optical units.

In a conventional CIS based on a GRIN lens array, the resolution degrades rapidly with Δz, as shown in Fig. 11(a). In our image scanner, by contrast, the resolution is better than that of the conventional CIS even without the deblurring process, as shown in Fig. 11(b). Although the image at Δz = 7 mm in Fig. 11(b) is blurred, the resolution can be restored by the deblurring process, as shown in Fig. 11(c).

Figure 12 shows the contrast C(ν) of the images at Δz = 7 mm in Figs. 11(b) and 11(c). Here C(ν) is defined by Eq. (3):

$$C(\nu)=\frac{I_{\max}(\nu)-I_{\min}(\nu)}{I_{\max}(\nu)+I_{\min}(\nu)}, \tag{3}$$
where Imax(ν) and Imin(ν) are the maximum and minimum values of the intensity at ν. We used the contrast C(ν) as the evaluation index instead of the MTF derived from the Fourier transform of a point image, because the point image cannot be resolved by the 600 dpi sensor. The contrast is lower than 0.2 in the range ν > 4 lp/mm before the deblurring process, while after the deblurring process the contrast exceeds 0.4 in the range ν < 6 lp/mm.
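For reference, Eq. (3) amounts to the following small computation on a measured intensity profile across the line pattern; the function name contrast and the sinusoidal test profile are hypothetical examples, not data from the experiment.

```python
import numpy as np

def contrast(profile):
    """Contrast C of Eq. (3), computed from a 1-D intensity profile taken
    across a line-and-space pattern of a single spatial frequency."""
    i_max, i_min = float(np.max(profile)), float(np.min(profile))
    return (i_max - i_min) / (i_max + i_min)

# Hypothetical example: a sinusoidal profile with 40% modulation.
x = np.linspace(0.0, 2.0 * np.pi, 256)
profile = 1.0 + 0.4 * np.sin(4.0 * x)
print(round(contrast(profile), 2))   # -> 0.4
```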

Fig. 12 Contrast calculated from images at Δz = 7 mm in Figs. 11(b) and 11(c), before and after the deblurring process.

5. Summary

We proposed compound eye optics constructed from plural reflective optical units for a compact image scanner. Since each optical unit is telecentric in the object space and the magnification ratio is constant regardless of the object distance, no degradation occurs in the image combining process and the original DOF of the optical unit is preserved after combining. Since the optical axes of adjacent optical units cross obliquely, the images of adjacent units are shifted only in the sub scanning direction, and the object distance can be calculated from this shift. An adequate deblurring process using the object-distance information restores contrast, which means that the DOF is extended.

We fabricated a prototype that is as thin as 23.5 mm in optical track size, and demonstrated that the contrast at ν = 6 lp/mm is larger than 0.4 even at a large object distance of 7 mm from the top glass.

References and links

1. R. H. Anderson, “Close-up imaging of documents and displays with lens arrays,” Appl. Opt. 18(4), 477–484 (1979).

2. J. Meyer, A. Brückner, R. Leitel, P. Dannberg, A. Bräuer, and A. Tünnermann, “Optical cluster eye fabricated on wafer-level,” Opt. Express 19(18), 17506–17519 (2011).

3. M. Kawazu and Y. Ogura, “Application of gradient-index fiber arrays to copying machines,” Appl. Opt. 19(7), 1105–1112 (1980).

4. J. Tanida, T. Kumagai, K. Yamada, S. Miyatake, K. Ishida, T. Morimoto, N. Kondou, D. Miyazaki, and Y. Ichioka, “Thin observation module by bound optics (TOMBO): concept and experimental verification,” Appl. Opt. 40(11), 1806–1813 (2001).

5. A. Brückner, J. Duparré, R. Leitel, P. Dannberg, A. Bräuer, and A. Tünnermann, “Thin wafer-level camera lenses inspired by insect compound eyes,” Opt. Express 18(24), 24379–24394 (2010).

6. G. Druart, N. Guérineau, R. Haïdar, S. Thétas, J. Taboury, S. Rommeluère, J. Primot, and M. Fendler, “Demonstration of an infrared microcamera inspired by Xenos peckii vision,” Appl. Opt. 48(18), 3368–3374 (2009).

7. I. Maeda, T. Inokuchi, and T. Miyashita, US patent 4776683 (1988).

8. K. Nagatani, K. Morita, H. Okushiba, S. Kojima, and R. Sakaguchi, US patent 5399850 (1995).
