
Automated manipulation of non-spherical micro-objects using optical tweezers combined with image processing techniques


Abstract

Automated optical trapping of non-spherical objects offers great flexibility as a non-contact micromanipulation tool in various research fields. Computer vision control enables fruitful applications of automated manipulation in biology and material science. Here we demonstrate fully-automated, simultaneous, independent trapping and manipulation of multiple non-spherical objects using multiple-force optical clamps. Customized real-time feature recognition and trapping beam control algorithms are also presented.

©2008 Optical Society of America

1. Introduction

Laser trapping, first demonstrated in 1970 by Ashkin [1] and now well known as optical tweezers [2], has been extended to line-scanning [3], holographic [4], time-sharing [6], generalized phase contrast (GPC) [5, 7], and other configurations. The ability to manipulate fluid-borne microscopic objects without physical contact, in contrast to mechanical micro-hands, allows many interesting studies in various research fields, including biological systems, by applying piconewton forces to the objects [8, 9]. The manipulation of arbitrarily shaped objects in three-dimensional (3D) space is essential and of widespread importance, since many naturally occurring objects [10] as well as microfabricated, anisotropic objects [7] are not spherical. Fully-automated optical trapping and manipulation of arbitrarily shaped objects based on computer vision control therefore offers great flexibility as a non-contact micromanipulation tool in biology, material science, Lab-on-a-Chip devices, and beyond. However, for automatic or dexterous real-time manipulation of non-spherical micron-sized objects, the control and vision system of a conventional laser trapping system is insufficient in a 2D/3D working space, since non-spherical objects may show different postures or orientations in 2D microscopic views and require complex coordinated movement of each trapped position. Although excellent commercial programming tools for image processing and system control, such as LabVIEW, are currently available, to our knowledge previous demonstrations of fully-automated optical trapping and manipulation combined with image processing techniques have been limited to spherical beads [11, 12] and microfabricated structures of known shape [13].

Here we demonstrate fully-automated, simultaneous, independent trapping and manipulation of multiple fluid-borne inhomogeneous objects based on real-time feature recognition and multiple-force optical clamps, where the operator is fully removed from the control sequences. We chose two kinds of sample with nontrivial shapes, ellipse-like diatoms and rod-like whiskers, selected for their observable size and the trapping forces that can be applied to them. We also outline the hardware setup and the control sequences for our demonstrations.

2. Experimental setup

2.1 Optical system and control system

A laser scanning method is suitable for real-time trajectory control of trapped objects based on the results of feature recognition, since it can generate and change multiple optical trapping positions rapidly with little computing time. This computational advantage derives from writing trapping positions directly, rather than calculating a holographic filter for a spatial light modulator (SLM). The laser scanning method with highly reflective mirrors also permits more powerful irradiation than holographic or GPC methods using an SLM. Hence, we chose a Time-Sharing Synchronized Scanning (T3S) approach [6] as the physical method of applying multiple optical clamps. Our experimental setup is illustrated in Fig. 1(a). An expanded continuous-wave Nd:YAG laser beam (Spectron SL902T, λ = 1064 nm, TEM00, 16 W max) is introduced into an inverted microscope (Olympus IX70) via a shutter, lenses L1 and L2, a PC-controlled 2-axis steering mirror (Newport FMS-300), a relay lens, and the fluorescence port, and is reflected upward by a dichroic mirror to an oil-immersion objective. The 3D focal position of the beam is controlled on the XY-plane by the 2-axis steering mirror, which can tilt at a rate similar to a piezoelectric mirror [14] (its closed-loop amplitude bandwidth exceeds 1 kHz), and along the Z-axis by the lens L1, mounted on a PC-controlled linear stage that can move parallel to the optical axis at up to 800 mm/s. No automated control of the Z-axis is installed, because the linear stage cannot move quickly enough to stay synchronized with the time-shared scan on the XY-plane; focal control along the Z-axis is therefore commanded only by the PC mouse, to raise samples to a specified Z-coordinate. Samples are illuminated by the microscope's standard halogen light source. An image processor (Hitachi IP5005) digitizes the images from a color CCD camera (Sony DXC-151A) in real time. The control software, written in C++ (Microsoft Visual C++ 6.0) for image processing and device control, runs on a PC (Intel Core 2 Duo CPU).
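To make the time-sharing idea concrete, here is a minimal Python sketch of a time-shared scan loop; the MirrorController class and its set_angle method are hypothetical stand-ins for the actual steering-mirror driver, and the 20 ms dwell matches the shared irradiation time used in the demonstrations below.

```python
import time

class MirrorController:
    """Hypothetical stand-in for the PC-controlled 2-axis steering mirror driver."""
    def set_angle(self, tilt_x: float, tilt_y: float) -> None:
        pass  # would command the mirror hardware here

def time_shared_scan(mirror, trap_positions, dwell_s=0.020, cycles=100):
    """Cycle the focused beam through all trap positions.

    Each position is visited for dwell_s seconds; if a full cycle is short
    compared with the time an object takes to diffuse away, every position
    acts as a quasi-continuous optical clamp.
    """
    for _ in range(cycles):
        for tilt_x, tilt_y in trap_positions:
            mirror.set_angle(tilt_x, tilt_y)
            time.sleep(dwell_s)  # irradiate this clamp position

# Example: three diatoms with three clamp points each give nine shared positions.
```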

Fig. 1. Schematic diagram of (a): the time-sharing synchronized scanning optical tweezers, (b): the control sequences, for automated multiple clamps and manipulation.

2.2 Control sequences

Figure 1(b) outlines the control sequences for applying automated clamps and manipulating multiple non-spherical objects. Our approach consists of four processes: contour shape detection (process 1), model matching (process 2), automated multiple clamping (process 3), and automated manipulation (process 4). Each process is implemented in the C++ software, depends on the output of the preceding process, and is tailored to the specific demonstration. First, in process 1, the contours of objects are extracted using a digital filter that finds local edges, for example a Sobel operator [15], followed by a noise reduction algorithm that removes isolated one-pixel elements from the binary images; a minimal sketch of this process is given below.
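The following sketch of process 1 assumes a grayscale frame held in a NumPy array; the Sobel filtering and the removal of isolated one-pixel elements follow the description above, while the threshold value is an illustrative assumption.

```python
import numpy as np
from scipy import ndimage

def extract_contours(frame: np.ndarray, threshold: float = 50.0) -> np.ndarray:
    """Sobel edge detection followed by removal of isolated one-pixel elements."""
    gx = ndimage.sobel(frame.astype(float), axis=1)   # horizontal gradient
    gy = ndimage.sobel(frame.astype(float), axis=0)   # vertical gradient
    edges = np.hypot(gx, gy) > threshold              # binary edge image

    # Count the 8-connected neighbours of each edge pixel; a pixel with no
    # edge neighbours is treated as isolated noise and removed.
    kernel = np.ones((3, 3))
    kernel[1, 1] = 0
    neighbours = ndimage.convolve(edges.astype(int), kernel, mode="constant")
    return edges & (neighbours > 0)
```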

Secondly, in process 2, the control parameters of the modeled shape are identified using the Hough transform [15], which allows robust detection, even under noisy conditions such as microscope images, of any shape that can be quantized with parameters. In popular textbooks on image processing, the classical Hough transform is a well-known algorithm for detecting lines and circles under noisy conditions. Its basic strategy for detecting an arbitrary shape is to compute the possible loci of reference points in parameter space from edge-point data in image space and to increment the corresponding points in an accumulator array (see textbook [15]). Micron-sized objects in a fluid diffuse by Brownian motion while they are not clamped by laser beams. Therefore, after digitizing images, we have to complete processes 1 and 2 within the allowable time during which the objects stay in the vicinity of the identified clamp positions. This allowable time, t_D, can be estimated from the Langevin equation [11],

$$ t_D = \frac{6 \pi \eta w^3}{k_B T}, \qquad (1) $$

where η, w, k_B, and T are the fluid viscosity, the average radius of the objects, the Boltzmann constant, and the absolute temperature, respectively. For typical samples ranging in size from w = 1 to 3 µm, at room temperature (T = 293 K) and with the viscosity of water (η = 0.001 Pa·s), the allowable computing time t_D ranges from roughly 5 s to 126 s.
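As a quick numerical check of Eq. (1), the short snippet below reproduces the quoted range from the stated values; the function name is ours.

```python
import math

def allowable_time(w, eta=1e-3, T=293.0):
    """Allowable computing time t_D = 6*pi*eta*w**3 / (k_B*T) from Eq. (1).

    w: average object radius in metres; eta: viscosity in Pa*s; T: kelvin.
    """
    k_B = 1.380649e-23  # Boltzmann constant, J/K
    return 6.0 * math.pi * eta * w**3 / (k_B * T)

print(allowable_time(1e-6))  # ~4.7 s  (w = 1 um)
print(allowable_time(3e-6))  # ~126 s  (w = 3 um)
```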

Thirdly, in process 3, the detected non-spherical objects are automatically clamped at pre-determined points on the modeled shape using the T3S optical tweezers. Finally, in process 4, all clamped objects are automatically translated/rotated from their initial positions/orientations to destinations that are automatically allocated by taking open spaces and the identified parameters into account. Once collision-free paths are generated based on the pre-designed manipulation/sorting plan, the simultaneous manipulations of the multiple objects are performed under an open-loop control strategy.
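The sketch below illustrates a rotate-then-translate waypoint plan of the kind process 4 could generate; the function, the step sizes, and the straight-segment interpolation are illustrative assumptions rather than the planner actually used.

```python
import numpy as np

def plan_path(start_xy, start_theta, goal_xy, goal_theta,
              rot_step=np.deg2rad(1.0), trans_step=0.2):
    """Generate open-loop waypoints (x, y, theta): rotate in place, then translate.

    The clamp positions on the object are derived from these waypoints at
    every scan cycle. Units here are microns and radians.
    """
    waypoints = []
    # Phase 1: rotate toward the orientation that is stable for dragging.
    n_rot = max(1, int(abs(goal_theta - start_theta) / rot_step))
    for t in np.linspace(start_theta, goal_theta, n_rot + 1)[1:]:
        waypoints.append((start_xy[0], start_xy[1], t))
    # Phase 2: translate along a straight (assumed collision-free) segment.
    dist = np.hypot(goal_xy[0] - start_xy[0], goal_xy[1] - start_xy[1])
    n_tr = max(1, int(dist / trans_step))
    for s in np.linspace(0.0, 1.0, n_tr + 1)[1:]:
        x = start_xy[0] + s * (goal_xy[0] - start_xy[0])
        y = start_xy[1] + s * (goal_xy[1] - start_xy[1])
        waypoints.append((x, y, goal_theta))
    return waypoints
```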

3. Demonstrations

3.1 Samples

For the demonstrations, we chose two kinds of sample with non-spherical, nontrivial shapes. One is an ellipse-like diatom, which has cell walls of silica consisting of two interlocking symmetrical valves [16]. The diatoms were collected from a creek and cleaned in an acid solution to remove organic matter. We selected diatoms roughly 20 µm long and 4 µm wide, a size chosen for adequate trapping power and for accurate visual detection of their postures using image processing techniques. The other sample consisted of aluminum borate whiskers. These whiskers are a refractory compound forming needle-shaped crystals or rod-like particles, and are a suitable material for the reinforcement of plastics or metal alloys [17]. We used rod-like particles from 10 µm to 15 µm long and roughly 1 µm wide, selected for the same reasons as above. In our demonstrations, these differently shaped samples were dispersed in deionized water.

3.2 Automated trapping and 3D manipulation of diatoms

In a previous report [10], we demonstrated that a diatom could be stably trapped, and its position and orientation controlled in 3D, by three-beam optical tweezers or a line-scanning laser beam. In those demonstrations, however, the focal position control for initially trapping a single diatom relied on interactive human operation with a PC mouse. Here, we demonstrate that our T3S approach, combined with computer vision control, can manipulate multiple diatoms automatically and simultaneously. To recognize all diatoms of a specified size in the scene, we apply the Hough transform to the elliptic model in Fig. 2(Left), which consists of five parameters x, y, θ, a, b: the 2D position, the orientation of the major axis, and the lengths of the major and minor axes, respectively. Two types of optical clamp are demonstrated experimentally: the three-point clamp, where each diatom is trapped at three edge points (C3), and the two-point clamp, where each diatom is trapped at two points (C2) on the major axis. For the demonstrations in this section, we used a ×60 oil-immersion objective (Olympus UPlanFLN, NA 1.25, IR). We adjusted the beam power to roughly 550 mW at the entrance pupil of the objective and set the shared irradiation time to 20 ms per clamp; these values were determined by preliminary experiments.
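To make the voting scheme concrete, the sketch below applies the Hough transform to the elliptic model with the axis lengths a and b held fixed, reducing the accumulator to the three parameters (x, y, θ); the quantizations are illustrative assumptions, and the full five-parameter search over a and b is omitted.

```python
import numpy as np

def hough_ellipse_fixed_axes(edge_points, a, b, shape, theta_bins=180):
    """Vote for ellipse centre (x, y) and orientation theta, with a, b fixed.

    For each edge point p and each sampled (theta, phi), the candidate centre
    is c = p - R(theta) @ (a*cos(phi), b*sin(phi)); the accumulator cell at
    (theta, cy, cx) is incremented. Returns the best centre and orientation.
    """
    h, w = shape
    # Note: a full-resolution accumulator is memory-hungry; coarser bins
    # would be used in practice.
    acc = np.zeros((theta_bins, h, w), dtype=np.int32)
    thetas = np.linspace(0, np.pi, theta_bins, endpoint=False)
    phis = np.linspace(0, 2 * np.pi, 64, endpoint=False)
    for (px, py) in edge_points:
        for ti, th in enumerate(thetas):
            c, s = np.cos(th), np.sin(th)
            for ph in phis:
                ex, ey = a * np.cos(ph), b * np.sin(ph)
                cx = int(round(px - (c * ex - s * ey)))
                cy = int(round(py - (s * ex + c * ey)))
                if 0 <= cx < w and 0 <= cy < h:
                    acc[ti, cy, cx] += 1
    ti, cy, cx = np.unravel_index(np.argmax(acc), acc.shape)
    return (cx, cy), thetas[ti]
```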

In Fig. 2 (Right: Media 1), we show a case of three-point clamping. First, three diatoms were automatically detected and stably clamped simultaneously at the edge points C3 on each diatom (Fig. 2(a)). Processes 1 and 2 took roughly 4 seconds to complete and to identify the control parameters. In process 3, we could automatically attract all the diatoms to the identified trapping positions (C3) and stably clamp them, since 4 seconds is ample compared with t_D = 37 s, calculated from Eq. (1) using half the minor axis of the elliptic model (w = 2 µm), room temperature (T = 293 K), and the viscosity of water (η = 0.001 Pa·s). Next, all the clamped diatoms were rotated from their detected initial orientations to the desired orientations, in which the major axis of each diatom is perpendicular to the translation direction for stable dragging (Fig. 2(Left) and (b)). Finally, subsequent translation (Fig. 2(b)) and rotation (Fig. 2(c)) without collision arranged all the diatoms automatically and simultaneously in the same orientation at the pre-determined destinations (Fig. 2(d)), while each diatom retained its initial posture, giving a repeatable 2D view in microscope images. Note that this initial 2D view of each diatom is almost always observed, since the diatoms lie on the cover glass because their valves are sufficiently flat.
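(As a cross-check with the sketch after Eq. (1), allowable_time(2e-6) evaluates to roughly 37 s, in agreement with this estimate.)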

Fig. 2. Left: Elliptic model and control parameters for detecting and manipulating diatoms. Right: (Media 1) Automated multiple clamps and simultaneous manipulation of multiple diatoms. Each diatom is automatically clamped at three edge points C3, and is rotated/translated to be arranged in the same final orientation.

In another demonstration, shown in Fig. 3 (Media 2), the same diatoms as in the previous demonstration were trapped by the two-point-clamp method and then arranged in the same orientation at pre-determined destinations. First, just as trapping by line-scanning allowed a single diatom to turn 90 degrees about its long axis (as described in our previous report [10]), all the diatoms clamped at points C2 autonomously turned 90 degrees about the major axis of the elliptic model shortly after irradiation with the laser beams (Fig. 3(a)). When clamped, each diatom's valves were parallel to the optical axis (Z-axis). (Note that without optical trapping, the 2D view of this posture is seldom observed in microscope images.) Next, all the clamped diatoms were automatically and simultaneously transported to the pre-determined destinations (Fig. 3(e)) by successive rotation (Fig. 3(b)), translation (Fig. 3(c)), and rotation (Fig. 3(d)), with each diatom retaining the above-mentioned posture in the 2D view. Finally, shortly after release of the clamps (i.e., after the laser irradiation stopped), the diatoms returned to their flat posture, giving the repeatable 2D view, under the influence of gravity on their valves.

Thus, multiple-force optical clamps with computer vision enabled the automatic manipulation of multiple diatoms along collision-free dynamic paths, with each diatom retaining its own unique posture corresponding to both the positions and the number of clamps.

Fig. 3. (Media 2) Automated multiple clamps and simultaneous manipulation of multiple diatoms. Each diatom is automatically clamped at two edge points C2 in Fig. 2. In this case, (a): shortly after irradiation of clamp beams, the diatoms autonomously turn 90 degrees about the major axis of elliptic model in Fig. 2; (f): shortly after release of the clamps, the diatoms return to their flat posture because of gravity, which gives a repeatable 2D view in microscope images.

3.3 Automated trapping and sorting of whiskers

In a previous report [18], we demonstrated that a rod-like whisker in water could be stably trapped in the XY-plane by simultaneous irradiation with a beam at each tip of the rod, after which dexterous manipulation of the trapped whisker was easy to perform by mouse-controlled laser-beam movements. In those demonstrations, however, precise beam irradiation for a stable initial two-point clamp was not easy, since a slight error in either position or timing led to a point-like trap [19], which rotated the whisker into the beam axis (Z-axis).

Here, we demonstrate that multiple whiskers can be automatically sorted by length using the two-point clamp, with each whisker trapped at both tip positions C2 in Fig. 4(Left). To recognize all whiskers in the field of view, we apply the Hough transform to the skeleton model in Fig. 4(Left), which consists of four parameters x, y, θ, l: the 2D position, orientation, and length of the whisker, respectively. For the demonstration in this section (Media 3), we used a ×100 oil-immersion objective (Olympus UPlanApo, NA 1.35, IR). We adjusted the beam power to roughly 350 mW at the entrance pupil of the objective and, as in the previous section, set the shared irradiation time to 20 ms per clamp position. First, three whiskers were automatically detected and stably clamped simultaneously at both tips (Fig. 4(a)). Secondly, the clamped whiskers were simultaneously translated to the left side of the field while keeping their initial orientations (Fig. 4(b)). Thirdly, after rotating from their initial orientations to horizontal, the whiskers were sequentially translated from the left side to the right side of the field in order of their lengths (Fig. 4(c)); the paths and order of translation are represented by the numbered black arrows in Fig. 4(b). Finally, the horizontally oriented whiskers were simultaneously translated from the right side to the center of the field and arranged vertically in order of length, as shown in Fig. 4(d). Note that these dexterous sorting movements, represented by the black arrows and free of mutual collisions, were generated automatically in process 4 from the recognized parameters of the skeleton model; a simplified sketch of this recognition and sorting step follows.
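The following sketch substitutes OpenCV's probabilistic Hough line transform for the custom skeleton-model implementation described above; each detected segment directly yields the four parameters (x, y, θ, l), and the detection thresholds are illustrative assumptions.

```python
import numpy as np
import cv2

def detect_and_sort_whiskers(edge_image: np.ndarray):
    """Detect rod-like skeletons and sort them by length (longest first).

    edge_image must be a binary (0/255) 8-bit single-channel image, e.g.
    the output of process 1 scaled to uint8.
    """
    segments = cv2.HoughLinesP(edge_image, rho=1, theta=np.pi / 180,
                               threshold=30, minLineLength=10, maxLineGap=2)
    whiskers = []
    for x1, y1, x2, y2 in (segments[:, 0] if segments is not None else []):
        length = float(np.hypot(x2 - x1, y2 - y1))
        theta = float(np.arctan2(y2 - y1, x2 - x1))
        centre = ((x1 + x2) / 2.0, (y1 + y2) / 2.0)
        # The clamp points C2 are the two tip positions of the skeleton.
        tips = ((x1, y1), (x2, y2))
        whiskers.append({"centre": centre, "theta": theta,
                         "length": length, "tips": tips})
    return sorted(whiskers, key=lambda wsk: wsk["length"], reverse=True)
```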

Fig. 4. Left: skeleton model and control parameters for detecting and manipulating whiskers. Right: (Media 3) Automated clamps and simultaneous dexterous manipulation of multiple whiskers for sorting by length. The whiskers are automatically clamped at both tip positions, C2, of each skeleton, and are translated/rotated to be arranged automatically according to their length measured by image processing.

3.4 Discussion

We applied the Hough technique to detect the ellipse-like and rod-like samples, which have five and four model parameters to be identified, respectively. The Hough transform is a widely used algorithm known to be robust under noisy conditions. In general, it can detect objects appearing in a 2D image with any shape that can be specified by parameters, and its detection ability is not limited by orientation or scale. The Hough transform therefore has favorable properties for automating optical trapping and subsequent manipulation based on visual information, although its main limitation is that it slows as the number of parameters increases. Our system, of course, cannot stably clamp smaller 3D objects that change their 2D view before the Hough process completes.
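As a rough illustration of this scaling (with quantizations of our own choosing), a fully quantized five-parameter accumulator with 640 × 480 centre positions, 180 orientations, and 20 bins each for a and b would already require 640 · 480 · 180 · 20 · 20 ≈ 2.2 × 10^10 cells; dropping one parameter, such as one axis length at 20 bins, reduces this by a factor of 20.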

We used the programming language C++ rather than a graphical programming environment such as LabVIEW, since tight integration between the control of the T3S optical tweezers for multiple clamps and the Hough-transform image processing was needed for real-time control. We also restricted the orientation resolution to a maximum of 1 degree in the parameter space to improve processing speed. The time required for the initial clamps is limited mainly by the Hough process and was about 4 seconds. For automating the subsequent manipulation (process 4 above), the Hough process is still too slow to run on every video frame and is hard to include in a real-time feedback loop, even with the latest PC. Thus, the simultaneous manipulations of the multiple diatoms/whiskers along collision-free paths were demonstrated under open-loop control based on pre-designed manipulation/sorting plans. The real power of this approach will become more apparent once live vision feedback is installed in the system.

4. Conclusion

We have demonstrated the feasibility of fully-automated trapping and manipulation of non-spherical micron-sized objects using T3S optical tweezers combined with computer vision techniques. Ellipse- and rod-like objects suspended in water were automatically recognized, trapped, and manipulated. To our knowledge, this is the first demonstration of fully-automated, simultaneous trapping and manipulation of multiple non-spherical objects using a multiple-force optical clamp technique. Although we have dealt with only two kinds of object shape, we believe these demonstrations open up new possibilities for manipulating arbitrarily shaped objects using multiple clamps based on computer vision techniques. Automated clamping of biological materials should enable exciting applications in cell biology, such as non-contact mechanotransduction studies in live cells [20]. Furthermore, control schemes coupled with high-speed vision feedback, in which the feedback period is shorter than the conventional video frame time (33 ms), may enable fruitful applications in the dynamic 3D motion control of arbitrarily shaped objects such as microstructures in MEMS and Lab-on-a-Chip devices.

Acknowledgments

We would like to thank Mr. Hideo Wada of AIST Shikoku for preparation of the whiskers. This work was partly supported by Grants-in-Aid for Scientific Research (C, #20560252) from the Japan Society for the Promotion of Science.

References and Links

1. A. Ashkin, “Acceleration and trapping of particles by radiation pressure,” Phys. Rev. Lett. 24, 156–159 (1970). [CrossRef]  

2. D. G. Grier, “A revolution in optical manipulation,” Nature 424, 810–816 (2003). [CrossRef]   [PubMed]  

3. K. Sasaki, M. Koshioka, H. Misawa, N. Kitamura, and H. Masuhara, “Pattern-formation and flow-control of fine particles by laser-scanning micromanipulation,” Opt. Lett. 16, 1463–1465 (1991). [CrossRef]   [PubMed]  

4. J. E. Curtis, B. A. Koss, and D. G. Grier, “Dynamic holographic optical tweezers,” Opt. Commun. 207, 169–175 (2002). [CrossRef]  

5. P. J. Rodrigo, R. L. Eriksen, V. R. Daria, and J. Glückstad, “Interactive light-driven and parallel manipulation of inhomogeneous particles,” Opt. Express 10, 1550–1556 (2002). [PubMed]  

6. F. Arai, K. Yoshikawa, T. Sakami, and T. Fukuda, “Synchronized laser micromanipulation of multiple targets along each trajectory by single laser,” Appl. Phys. Lett. 85, 4301–4303 (2004). [CrossRef]  

7. P. J. Rodrigo, L. Gammelgaard, P. Bøggild, I. R. Perch-Nielsen, and J. Glückstad, “Actuation of microfabricated tools using multiple GPC-based counterpropagating-beam traps,” Opt. Express 13, 6899–6904 (2005). [CrossRef]   [PubMed]  

8. J. T. Finer, R. M. Simmons, and J. A. Spudich, “Single myosin molecule mechanics: piconewton forces and nanometer steps,” Nature 368, 113–119 (1994). [CrossRef]   [PubMed]  

9. P. J. H. Bronkhorst, G. J. Streekstra, J. Grimbergen, E. J. Nijhof, J. J. Sixma, and G. J. Brakenhoff, “A new method to study shape recovery of red blood cell using multiple optical trapping,” Biophys. J. 69, 1666–1673 (1995). [CrossRef]   [PubMed]  

10. Y. Tanaka, K. Hirano, H. Nagata, and M. Ishikawa, “Real-time three-dimensional orientation control of non-spherical micro-objects using laser trapping,” Electron. Lett. 43, 412–414 (2007). [CrossRef]  

11. S. C. Chapin, V. Germain, and E. R. Dufresne, “Automated trapping, assembly, and sorting with holographic optical tweezers,” Opt. Express 14, 13095–13100 (2006). [CrossRef]   [PubMed]  

12. I. R. Perch-Nielsen, P. J. Rodrigo, C. A. Alonzo, and J. Glückstad, “Autonomous and 3D real-time multi-beam manipulation in a microfluidic environment,” Opt. Express 14, 12199–12205 (2006). [CrossRef]   [PubMed]  

13. P. J. Rodrigo, L. Kelemen, C. A. Alonzo, I. R. Perch-Nielsen, J. S. Dam, P. Ormos, and J. Glückstad, “2D optical manipulation and assembly of shape-complementary planar microstructures,” Opt. Express 15, 9009–9014 (2007). [CrossRef]   [PubMed]  

14. C. Mio and D. W. M. Marr, “Optical trapping for the manipulation of colloidal particles,” Adv. Mater. 12, 917–920 (2000). [CrossRef]  

15. D. H. Ballard and C. M. Brown, Computer Vision (Prentice-Hall, 1982), Chap. 3–4.

16. Y. A. Hicks, D. Marshall, P. L. Rosin, R. R. Martin, D. G. Mann, and S. J. M. Droop, “A model of diatom shape and texture for analysis, synthesis and identification,” Mach. Vision Appl. 17, 297–307 (2006). [CrossRef]  

17. H. Wada, K. Sakane, T. Kitamura, H. Hata, and H. Kambara, “Synthesis of aluminium borate whiskers in potassium sulphate flux,” J. Mater. Sci. Lett. 10, 1076–1077 (1991). [CrossRef]  

18. Y. Tanaka, A. Murakami, K. Hirano, H. Nagata, and M. Ishikawa, “Development of PC-controlled laser manipulation system with image processing functions,” Proc. SPIE 6374, 63740P (2006).

19. R. Agarwal, K. Ladavac, Y. Roichman, G. Yu, C. M. Lieber, and D. G. Grier, “Manipulation and assembly of nanowires with holographic optical traps,” Opt. Express 13, 8906–8912 (2005). [CrossRef]   [PubMed]  

20. X. Trepat, L. Deng, S. S. An, D. Navajas, D. J. Tschumperlin, W. T. Gerthoffer, J. P. Butler, and J. J. Fredberg, “Universal physical responses to stretch in the living cell,” Nature 447, 592–596 (2007). [CrossRef]   [PubMed]  

Supplementary Material (3)

Media 1: MOV (1023 KB)     
Media 2: MOV (2103 KB)     
Media 3: MOV (1581 KB)     
