Optica Publishing Group

Analog image contouring using a twisted-nematic liquid-crystal display

Open Access

Abstract

We present a novel image contouring method based on the polarization features of twisted-nematic liquid-crystal displays (TN-LCDs). TN-LCDs are manufactured to work between a crossed polarizer-analyzer pair. When the analyzer is at 45 deg (instead of 90 deg) with respect to the polarizer, one obtains an optically processed image with pronounced outlines (dark contours) at middle intensity, i.e., the borders between illuminated and dark areas are enhanced. The proposed method is quite robust and requires neither precise alignment nor coherent illumination. Since it involves no numerical processing, it could be useful for contouring large images in real time, with potential applications in medical and biological imaging. Validation experiments are presented.

©2010 Optical Society of America

1. Introduction

One of the most challenging tasks in image processing is the partition of an image into separate regions, which ideally correspond to different real-world objects. This operation is closely related to the concept of contour detection (often a preliminary step toward image segmentation). Contour-detection algorithms rely on the discontinuity of image intensities or texture at object boundaries.

Automatic contour detection is a key operation in numerous applications in various fields, such as medical imaging [1–3], microscopy [4,5], automated defect inspection [6] and agriculture [7]. For example, in brain tumor studies (and, in general, in radiation oncology) accurate and reproducible characterization of abnormalities is not straightforward, and a major problem in treatment planning is the determination of tumor extent (tumor boundaries) [8,9]. Contouring techniques combined with information from PET/CT imaging have proven to be useful in this regard [10].

In general, contour techniques are computational algorithms, and their main disadvantage is the computation time required, which depends on the image size, the processing strategy, and the complexity of the algorithm [11,12]. One optical approach to determining the contour of an object is the use of traditional optical edge-detection methods, which are often based on high-pass filters implemented in coherent optical processors [13,14]. Several of these traditional systems require a liquid-crystal display (LCD) (in addition to the LCD used to display the image to be processed) for implementing the filter function in real time.

The purpose of the present paper is to introduce an optical contour-detection method that can be applied in real time in a great variety of fields. The method is based on properties of twisted-nematic liquid-crystal displays (TN-LCDs). These devices are manufactured to work between crossed polarizers, i.e., the incident light has a given polarization direction (fixed by the manufacturer) and the amplitude image is obtained with the analyzer orthogonal to it. We will demonstrate that, when the analyzer is at 45° (instead of 90°) with respect to the incident polarization direction, one obtains a modified image that presents a partial contrast reversal and pronounced outlines (dark contours) at middle intensity, i.e., the borders between illuminated and dark areas are enhanced, which represents a form of image contouring. [The projection of the polarization states of an LCD onto the direction of an analyzer at 45° has some precedents in the literature; see, e.g., [15].]

The optical processor described here performs contouring with no prior knowledge of the target and no numerical preprocessing. The contour adapts itself to the target’s shape, independently of its orientation and scale. Furthermore, the method can be easily implemented using a common LCD architecture; it requires neither a complex optical architecture nor a bank of filters. The potential field of application of the proposed method is the contouring of large images, or of an image sequence, in real time; e.g., cell contouring is a necessary first step of many automated biomedical image-processing procedures [16]. It could also be used to pre-process the input signal of an optical correlator in order to enhance object edges and thereby improve the performance of pattern-recognition systems [17,18].

In Section 2 we briefly describe the theory, in Section 3 we present validation experiments, and conclusions are summarized in Section 4.

2. Theory

The proposed setup is shown in Fig. 1. It consists of a twisted-nematic LCD in which the original image I(x,y) is displayed, a polarizer (P) with its transmission direction along the x-coordinate, and an analyzer (A) whose transmission direction forms an angle ξ with respect to the y-axis. The purpose of the lens (L) is to project the image generated by the combination P-LCD-A onto the detector array of a digital camera (C) (without objective lens). [When the images are observed with the naked eye, the lens and the camera are not necessary.] Twisted-nematic LCDs are currently manufactured to work between crossed polarizers, with the incident light polarization (P) having a prefixed direction that we denote as the x-direction. We will assume that the LCD lies in the (x,y)-plane and that the light wave propagates in the z-direction. In the ideal case, the electric field (E) after the liquid-crystal cells will be linearly polarized, with a polarization direction characterized by the angle θ(x,y) with respect to the y-axis (see Fig. 1), where θ(x,y) depends on the voltage applied to the LCD pixels. [Strictly speaking, in a certain voltage range and depending on the input polarization direction, the light after the TN-LCD may be elliptically polarized. However, throughout the present work we use a very simplified model of the TN-LCD, in which we consider it simply as a device that rotates the plane of polarization.]

Fig. 1 Proposed setup.

As mentioned above, when the analyzer direction is along the y-direction the image produced by the combination P-LCD-A will be a replica of the digital image displayed on the LCD, i.e.,

$$I(x,y)=\cos^2\!\big(\theta(x,y)\big), \tag{1}$$
where we arbitrarily set 0 ≤ I(x,y) ≤ 1. Now, if the analyzer transmission direction (A) forms an angle ξ with respect to the y-axis, at the system output we will have a modified image Iout(x,y) given by

$$I_{\mathrm{out}}(x,y)=\cos^2\!\big(\theta(x,y)+\xi\big)=\frac{1}{2}\Big\{1+\cos\big(2\theta(x,y)\big)\cos(2\xi)-\sin\big(2\theta(x,y)\big)\sin(2\xi)\Big\}. \tag{2}$$

From Eq. (1) it is easy to demonstrate that $\sin\big(2\theta(x,y)\big)=\sqrt{1-\big(2I(x,y)-1\big)^2}$, so that Eq. (2) can be rewritten as

$$I_{\mathrm{out}}(x,y)=\frac{1}{2}\Big\{1+\big(2I(x,y)-1\big)\cos(2\xi)-\sqrt{1-\big(2I(x,y)-1\big)^2}\,\sin(2\xi)\Big\}. \tag{3}$$

Hence it is clear that, when ξ differs from 0 and from ±π/4, the output image will be a linear combination of the original image and a non-linearly processed replica. In the particular case ξ = π/4, from Eq. (3) (omitting a physically irrelevant factor 1/2 in front of the curly brackets) it results

$$I_{\mathrm{out}}(x,y)=1-\sqrt{1-\big(2I(x,y)-1\big)^2}. \tag{4}$$
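The mapping of Eqs. (1)–(4) is easy to verify numerically. The following Python sketch (illustrative only; the function name is ours, and the simplified polarization-rotator model of the TN-LCD is assumed) evaluates Eq. (3) and checks its two limiting cases:

```python
import numpy as np

def lcd_output(I, xi):
    """Output intensity of the P-LCD-A system for analyzer angle xi, Eq. (3).

    I  : original image, values in [0, 1] (so that cos(2*theta) = 2*I - 1)
    xi : analyzer angle (rad) measured from the crossed position (y-axis)
    """
    u = 2.0 * np.asarray(I, dtype=float) - 1.0    # cos(2*theta), from Eq. (1)
    s = np.sqrt(np.clip(1.0 - u**2, 0.0, 1.0))    # sin(2*theta) >= 0
    return 0.5 * (1.0 + u * np.cos(2.0 * xi) - s * np.sin(2.0 * xi))

I = np.linspace(0.0, 1.0, 11)
# xi = 0 (crossed analyzer): the displayed image is reproduced, Eq. (1)
assert np.allclose(lcd_output(I, 0.0), I)
# xi = pi/4: Eq. (4), up to the omitted factor 1/2
assert np.allclose(2.0 * lcd_output(I, np.pi / 4),
                   1.0 - np.sqrt(1.0 - (2.0 * I - 1.0)**2))
```

Note that for intermediate analyzer angles the two terms of Eq. (3) mix, which is the "linear combination" behavior discussed below.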

Actually, a real-world LCD does not display the “true” digital image I(x,y); it displays an image aI(x,y)+IB (with 0 ≤ aI(x,y)+IB ≤ 1), where a is the gain of the display amplifier and IB is an offset or bias-term added to the voltage applied to the display, often called the “brightness” control. [For the sake of simplicity, we assume that the LCD presents a linear response.]

In principle, the gain factor (a) of the amplifier and the offset-term (IB) could already be included in what we consider to be the “original” digital image I(x,y). To describe the effect of the offset-term explicitly, we make the substitution I(x,y) → I(x,y)+IB in Eq. (4),

$$I_{\mathrm{out}}(x,y)=1-\sqrt{1-\big[2\big(I(x,y)+I_B\big)-1\big]^2}. \tag{5}$$

Hence we see that when I(x,y)+IB=0 or I(x,y)+IB=1, one obtains Iout(x,y)=1. Thus, dark and bright areas of the original image are both reproduced as bright areas in the optically processed image Iout(x,y). In a well-illuminated half-tone image, i.e., an image with intensity values ranging continuously from 0 to 1, the value I(x,y)+IB=0.5 (assuming IB<0.5) will occur at some points along the borders between bright and dark areas. From Eq. (5), it is clear that when I(x,y)+IB=0.5 one obtains Iout(x,y)=0; thus, the proposed method produces dark contours (outlines) along the borders between bright and dark regions of the original image I(x,y). In general, the offset-term (IB) determines which pixels of I(x,y) become contour points, since the condition I(x,y)+IB=0.5 is the implicit equation of the dark contour curves y=y(x) around the different image areas. [We are assuming that y=y(x) is a single-valued function; if not, we may always decompose the curves into a sum of single-valued functions.]
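As a numerical illustration of Eq. (5) (a sketch under the same idealized model; the function name is ours), applying the mapping to a smooth intensity ramp places the dark contour exactly at the mid-level I+IB = 0.5, while both intensity extremes map back to bright:

```python
import numpy as np

def contour_image(I, IB=0.0):
    """Eq. (5): contouring response at xi = pi/4 (factor 1/2 omitted, as in the text)."""
    u = 2.0 * (np.asarray(I, dtype=float) + IB) - 1.0
    return 1.0 - np.sqrt(np.clip(1.0 - u**2, 0.0, 1.0))

ramp = np.linspace(0.0, 1.0, 101)   # a smooth edge from dark to bright
out = contour_image(ramp)
print(ramp[np.argmin(out)])         # darkest output pixel sits at I = 0.5
print(out[0], out[-1])              # both extremes (I = 0 and I = 1) map to 1.0
```

Shifting IB simply moves the input level that satisfies I+IB = 0.5, i.e., it selects which gray level of the original image becomes the contour.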

From Eq. (5), one obtains

$$\frac{\partial I_{\mathrm{out}}(x,y)}{\partial\tau}=\frac{2\big[2\big(I(x,y)+I_B\big)-1\big]}{\sqrt{1-\big[2\big(I(x,y)+I_B\big)-1\big]^2}}\;\frac{\partial I(x,y)}{\partial\tau}, \tag{6}$$
where τ denotes the x- or y-coordinate. Then, when I(x,y)+IB < 0.5, the sign of $\partial I_{\mathrm{out}}/\partial\tau$ will be opposite to the sign of $\partial I/\partial\tau$, which means that the contrast of the poorly illuminated regions may be reversed.
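The sign behavior of Eq. (6) can be checked with a finite-difference sketch (illustrative only; the function name is ours): below the mid-level the output decreases while the input increases, and above it the slopes share the same sign.

```python
import numpy as np

def contour_image(I, IB=0.0):
    """Eq. (5): contouring response (factor 1/2 omitted)."""
    u = 2.0 * (I + IB) - 1.0
    return 1.0 - np.sqrt(max(0.0, 1.0 - u * u))

dI = 1e-6
# below mid-level (I + IB < 0.5): reversed contrast, slope of Iout is negative
slope_lo = (contour_image(0.2 + dI) - contour_image(0.2)) / dI
# above mid-level: slope of Iout has the same sign as the input slope
slope_hi = (contour_image(0.8 + dI) - contour_image(0.8)) / dI
assert slope_lo < 0 < slope_hi
```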

Some characteristic features of the proposed method are reminiscent of the Sabatier effect (often called solarization) of photographic films. However, the similarity cannot be pushed further, because the Sabatier effect depends on the photochemical processes inside the photographic film.

3. Experimental results

We have performed validation experiments using a liquid-crystal display of 600 × 800 pixels (model LC2002, Holoeye Corp.) illuminated by a white-light LED. The images were acquired with a FireWire CCD camera (model DC310, Thorlabs Inc.). Although our image processing is monochromatic, it would not be difficult to perform experiments using a color liquid-crystal display and camera or, alternatively, three monochromatic parallel channels to process the red, green, and blue colors.

In order to study the response of the method to noisy images, we have performed experiments using a digital image of a USAF 1951 test pattern with added Gaussian noise (standard deviation σ = 0.2). Figure 2(a) shows a partial view of the noisy USAF 1951 pattern, while Figs. 2(b)–2(d) show the optically processed images with the intensity offset (IB) increasing in steps of 0.2 between consecutive images. Despite the intentionally added noise, the processed images show the expected dark outline along the borders of the numbers and bars of the pattern, which illustrates the robustness of the method. [As expected from Eq. (5), the value of IB affects the gray level of the different regions of the processed image.]
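This robustness can be reproduced with a one-dimensional numerical sketch (our own synthetic analogue of the experiment, under the idealized model of Eq. (5); the function name and the tanh-shaped test bar are ours): even with Gaussian noise of σ = 0.2, the processed output remains markedly darker along the true edges than in the flat regions.

```python
import numpy as np

rng = np.random.default_rng(0)

def contour_image(I, IB=0.0):
    """Eq. (5): contouring response (factor 1/2 omitted)."""
    u = 2.0 * (np.asarray(I, dtype=float) + IB) - 1.0
    return 1.0 - np.sqrt(np.clip(1.0 - u**2, 0.0, 1.0))

# a synthetic "bar" with smooth edges near x = -1 and x = +1,
# plus additive Gaussian noise of sigma = 0.2, clipped to [0, 1]
x = np.linspace(-3.0, 3.0, 601)
bar = 0.5 * (np.tanh(4.0 * (x + 1.0)) - np.tanh(4.0 * (x - 1.0)))
noisy = np.clip(bar + rng.normal(0.0, 0.2, x.size), 0.0, 1.0)
out = contour_image(noisy)

# despite the noise, the output is much darker along the true borders
edge = np.abs(bar - 0.5) < 0.1        # pixels on the bar borders
flat = (bar < 0.05) | (bar > 0.95)    # interior and background pixels
assert out[edge].mean() < out[flat].mean()
```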

Fig. 2 (a) Partial view of the USAF 1951 pattern with additive Gaussian noise. (b)–(d) Optically processed images with IB increasing in steps of 0.2 between consecutive images.

In order to illustrate the potential applications of the method, we have performed a series of experiments with images of biological and medical interest.

Figure 3(a) shows an MRI image of a brain with a tumor (a meningioma) [19]. Figure 3(b) shows the optically processed image. The value of the offset-term was chosen so as to achieve a clearly contoured tumor, which enhances the visualization of the pathology and potentially improves diagnosis.

Fig. 3 Brain MRI image. The left side shows the original image, while the right side shows the optically processed image with the contoured tumor.

Figure 4 is a single-frame excerpt from a video recording (Media 1) showing the division of a cell nucleus, with the chromosomes splitting lengthwise (mitosis), of the African blood lily, Haemantus katherinae, observed with phase-contrast microscopy [20,21]. The optically processed video clearly shows the cell division (see, e.g., the small vesicles appearing between nuclei to form the cell plate) and the chromosomes with well-defined contours. Thus, the proposed contouring method improves the discrimination of structures that are hardly visible in the original images.

Fig. 4 (Media 1) Cell division of Haemantus katherinae observed with phase-contrast microscopy. The left side shows the original image, while the right side shows the optically processed image with contoured chromosomes.

4. Conclusions

We have presented an analog image contouring method using a twisted-nematic LCD. The proposed method produces a dark outline at middle intensity, i.e., an enhancement of the borders between illuminated and dark areas, which can be controlled by varying the value of the bias-term (IB). The method is robust, easily implemented, and involves no numerical processing. Thus, it is potentially useful for processing large images (or image sequences) in real-time applications such as biological and medical imaging, as demonstrated in our experiments.

Acknowledgement

J.A.F. thanks PEDECIBA (Uruguay) for financial support. J. L. Flores expresses his gratitude to the “Programa de Estancias Académicas, University of Guadalajara” for funding his academic stay at the Facultad de Ingeniería (UdelaR, Uruguay), where this research was developed.

References and links

1. A. van Baardwijk, G. Bosmans, L. Boersma, J. Buijsen, S. Wanders, M. Hochstenbag, R. J. van Suylen, A. Dekker, C. Dehing-Oberije, R. Houben, S. M. Bentzen, M. van Kroonenburgh, P. Lambin, and D. De Ruysscher, “PET-CT-based auto-contouring in non-small-cell lung cancer correlates with pathology and reduces interobserver variability in the delineation of the primary tumor and involved nodal volumes,” Int. J. Radiat. Oncol. Biol. Phys. 68(3), 771–778 (2007).

2. O. Tsujii, M. T. Freedman, and S. K. Mun, “Lung contour detection in chest radiographs using 1-D convolution neural networks,” J. Electron. Imaging 8(1), 46–53 (1999).

3. H. H. Lin, S. G. Shu, S. W. Kuo, C. H. Wang, Y. P. Chan, and S. S. Yu, “Alpha-gamma equalization-enhanced hand radiographic image segmentation scheme,” Opt. Eng. 48(10), 107001 (2009).

4. P. Phukpattaranont and P. Boonyaphiphat, “Color based segmentation of nuclear stained breast cancer cell images,” ECTI Trans. EEC 5, 158–164 (2007).

5. V. R. Korde, H. Bartels, J. Ranger-Moore, and J. Barton, “Automatic segmentation of cell nuclei in bladder and skin tissue for karyometric analysis,” Proc. SPIE 6633, 66330V (2007).

6. M. Ralló, M. S. Millán, and J. Escofet, “Referenceless segmentation of flaws in woven fabrics,” Appl. Opt. 46(27), 6688–6699 (2007).

7. Q. Zeng, Y. Miao, C. Liu, and S. Wang, “Algorithm based on marker-controlled watershed transform for overlapping plant fruit segmentation,” Opt. Eng. 48(2), 027201 (2009).

8. R. N. Strickland, Image-Processing Techniques for Tumor Detection (Marcel Dekker, New York, 2002).

9. D. Guliato, R. M. Rangayyan, W. A. Carnielli, J. A. Zuffo, and J. E. L. Desautels, “Segmentation of breast tumors in mammograms using fuzzy sets,” J. Electron. Imaging 12(3), 369–378 (2003).

10. C. Greco, K. Rosenzweig, G. L. Cascini, and O. Tamburrini, “Current status of PET/CT for tumour volume definition in radiotherapy treatment planning for non-small cell lung cancer (NSCLC),” Lung Cancer 57(2), 125–134 (2007).

11. M. Kass, A. Witkin, and D. Terzopoulos, “Snakes: active contour models,” Int. J. Comput. Vis. 1(4), 321–331 (1988).

12. G. Zhu, S. Zhang, Q. Zeng, and C. Wang, “Boundary-based image segmentation using binary level set method,” Opt. Eng. 46(5), 050501 (2007).

13. M. Y. Shih, A. Shishido, and I. C. Khoo, “All-optical image processing by means of a photosensitive nonlinear liquid-crystal film: edge enhancement and image addition-subtraction,” Opt. Lett. 26(15), 1140–1142 (2001).

14. C. S. Yelleswarapu, S. R. Kothapalli, and D. V. G. L. N. Rao, “Optical Fourier techniques for medical image processing and phase contrast imaging,” Opt. Commun. 281(7), 1876–1888 (2008).

15. J. A. Davis, G. H. Evans, K. Crabtree, and I. Moreno, “Programmable birefringent lenses with a liquid-crystal display,” Appl. Opt. 43(34), 6235–6241 (2004).

16. N. Kharma, H. Moghnieh, J. Yao, Y. P. Guo, A. Abu-Baker, J. Laganiere, G. Rouleau, and M. Cheriet, “Automatic segmentation of cells from microscopic imagery using ellipse detection,” IET Image Process. 1(1), 39–47 (2007).

17. B. L. Liang, Z. Q. Wang, G. G. Mu, J.-H. Guan, H. L. Liu, and C. M. Cartwright, “Real-time edge-enhanced optical correlation with a cerium-doped potassium sodium strontium barium niobate photorefractive crystal,” Appl. Opt. 39(17), 2925–2930 (2000).

18. Z. Wang, H. Zhang, C. M. Cartwright, M. S. Ding, N. J. Cook, and W. A. Gillespie, “Edge enhancement by use of moving gratings in a bismuth silicon oxide crystal and its application to optical correlation,” Appl. Opt. 37(20), 4449–4456 (1998).

19. http://neurosurgery.ucla.edu/body.cfm?id=178

20. http://www.bio.davidson.edu/misc/movies/mitosislily.mov

21. J. R. Price, D. Aykac, S. S. Gleason, K. Chourey, and Y. Liu, “Quantitative comparison of mitotic spindles by confocal image analysis,” J. Biomed. Opt. 10(4), 044012 (2005).

Supplementary Material (1)

Media 1: MPG (4093 KB)     
