
Dynamic micro-bead arrays using optical tweezers combined with intelligent control techniques


Abstract

Dynamic micro-bead arrays offer great flexibility and potential as sensing tools in various scientific fields. Here we present a software-oriented approach for fully automated assembly of versatile dynamic micro-bead arrays using multi-beam optical tweezers combined with intelligent control techniques. Four typical examples, including the collision-free sorting of array elements by bead features, are demonstrated in real time. Control algorithms and experimental apparatus for these demonstrations are also described.

©2009 Optical Society of America

1. Introduction

Microarrays are valuable tools in biology and medicine. The DNA chip, using micro-spots of bio-molecules on a static solid support, represents a widely used group of static microarrays for basic studies in biomedical fields. Compared with static microarrays, dynamic microarrays using mobile substrates, usually micro-beads coated with bio-molecules or chemicals, offer greater flexibility and have the potential to be used as sensing tools for advancing research in many fields such as basic biomedical studies, diagnostics and drug discovery [1]. Micro-bead handling techniques that allow us to transport selected beads and to immobilize them at desired positions for signal detection are essential to the creation of dynamic arrays. Among the demonstrated approaches, which include hydrodynamic [1], dielectrophoretic [2], mechanical [3, 4], and other techniques, optical tweezers [5, 6] are eminently suitable for dynamic handling of micro-beads.

Optical tweezers have several advantages. Firstly, there is no physical contact, which means that we can manipulate beads in a closed space such as a completely sealed compartment of a Lab-on-a-Chip. It also means that there is no adhesion between the manipulator, namely the laser beam, and the bead, which makes precise positioning easier to achieve. Secondly, simultaneous and independent manipulation of multiple beads is possible. This is achieved by techniques for generating multiple trapping beams, namely holographic [7], generalized phase contrast (GPC) [8], time-sharing [9], and other methods [10]. Thirdly, the use of a single microscope objective lens for both manipulation and observation means that we can control both operations in the same coordinate system. This allows for the easy application of computer vision techniques to fully automated applications in which complex coordinated movements of large numbers of beads are required. Lastly, optical tweezers enable a software-oriented configuration, which means that it is not necessary to redesign hardware, namely mechanical and fluidic parts, to handle beads of different sizes, shapes and numbers. The reusable platform, therefore, can greatly reduce the costs of operation.

For fully automated assembly of versatile dynamic micro-bead arrays, here we present a software-oriented approach using multi-beam optical tweezers combined with intelligent control techniques, namely computer vision and a knowledge database. On the basis of our approach consisting of three strategic stages, we demonstrate four typical examples: two fully automated collisionless assemblies of micro-bead arrays, a collision-free sorting of array elements by color, and another by size. We also describe the control algorithms and experimental apparatus for these demonstrations.

2. Software-oriented approach for versatile dynamic arrays

Dynamic micro-bead arrays require complex coordinated movements of large numbers of beads. A few optical tweezers systems with real-time feature recognition have demonstrated fully automated initial trapping and the subsequent complex, coordinated, simultaneous manipulation of large numbers of beads [16]. Our software-oriented approach using optical tweezers combined with intelligent control techniques comprises three strategic stages: fully automated trapping using multi-beam optical tweezers (stage 1), simultaneous transportation along collisionless paths (stage 2), and automatic sorting with collision-free interchange based on group theory (stage 3), as illustrated in Fig. 1.


Fig. 1 A new software-oriented approach to assemble a dynamic micro-bead array using multi-beam optical tweezers combined with intelligent control techniques. Labeled beads dispersed in pipetted liquid on a cover glass are automatically trapped, assembled, and sorted to act as a versatile dynamic microarray.


2.1 Fully automated optical trapping

In stage 1, a microscopic image of beads in pipetted droplets is digitized in real time using a color CCD camera and a frame grabber. Then, the position and features of each bead, namely the center position in the XY-plane, the radius and the color, are identified by image processing techniques. The center positions and radii are identified using the circular Hough transform [11], a well-known algorithm for robustly detecting circles under noisy conditions such as microscopic images. The color of each bead is identified by calculating hue values from the RGB video signals corresponding to the inner pixels of each detected circle. Finally, the desired number of beads with pre-specified features are automatically and simultaneously trapped at their identified center positions using the multi-beam optical tweezers.
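As an illustration of this recognition step, the following minimal Python sketch detects bead centers, radii and hues in a single frame. It uses OpenCV's Hough-circle detector rather than the authors' implementation; the function name, blur kernel, thresholds and radius bounds are illustrative assumptions.

import cv2
import numpy as np

def detect_beads(frame_bgr, min_r=5, max_r=30):
    """Sketch of stage 1 recognition: circle centers/radii via the circular
    Hough transform, then a mean hue over the inner pixels of each circle.
    Parameter values are illustrative assumptions, not the paper's settings."""
    gray = cv2.medianBlur(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY), 5)
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.2,
                               minDist=2 * min_r, param1=100, param2=30,
                               minRadius=min_r, maxRadius=max_r)
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    beads = []
    if circles is not None:
        for x, y, r in np.round(circles[0]).astype(int):
            mask = np.zeros(gray.shape, np.uint8)
            cv2.circle(mask, (int(x), int(y)), int(r), 255, -1)  # inner pixels only
            hue = cv2.mean(hsv, mask=mask)[0]                    # color feature
            beads.append({"center": (int(x), int(y)), "radius": int(r), "hue": hue})
    return beads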

2.2 Simultaneous transportation

In stage 2, all initially trapped beads are simultaneously transported to their destinations to form an array. We have developed a control algorithm for efficient, collisionless and simultaneous transportation to form a 2-dimensional (2D) M×N lattice array. Our control algorithm consists of two processes: destination assignment (process 1) and path generation (process 2).

In process 1, the algorithm assigns the pre-designed destinations ${}^d\mathbf{p}_k = [{}^d x_k, {}^d y_k]^T$ to the initial positions ${}^i\mathbf{p}_k = [{}^i x_k, {}^i y_k]^T$, which can be represented by M×N matrices ${}^d P$ and ${}^i P$, respectively. Let us denote the i-th row of ${}^d P$ by ${}^d P_i = [{}^d\mathbf{p}_{i1}, {}^d\mathbf{p}_{i2}, \ldots, {}^d\mathbf{p}_{iN}]$ and that of ${}^i P$ by ${}^i P_i = [{}^i\mathbf{p}_{i1}, {}^i\mathbf{p}_{i2}, \ldots, {}^i\mathbf{p}_{iN}]$, in which ${}^d\mathbf{p}_{ij}$, the element of ${}^d P$ at the i-th row and j-th column, is the position vector of the k-th destination of the lattice array, ${}^d\mathbf{p}_k = [{}^d x_k, {}^d y_k]^T = {}^d\mathbf{p}_{ij}$, where k = N(i−1)+j. Since the destinations form a 2D M×N lattice, we can assume the following relations:

$$ {}^d y_{N(i-1)+1} = \cdots = {}^d y_{N(i-1)+N} < {}^d y_{Ni+1} = \cdots = {}^d y_{Ni+N}, \qquad i \in [1, M-1], \tag{1} $$
$$ {}^d x_{N(i-1)+1} < {}^d x_{N(i-1)+2} < \cdots < {}^d x_{N(i-1)+N}, \qquad i \in [1, M-1]. \tag{2} $$

First, the initially trapped beads are numbered in ascending order of the identified position ${}^i y_k$; that is, the elements of ${}^i P$ satisfy the following relation:

$$ {}^i y_{N(i-1)+1} \le \cdots \le {}^i y_{N(i-1)+N} \le {}^i y_{Ni+1} \le \cdots \le {}^i y_{Ni+N}, \qquad i \in [1, M-1], \tag{3} $$
which means that the initially trapped beads at positions ${}^i y_{N(i-1)+1}$ to ${}^i y_{N(i-1)+N}$ are assigned to ${}^d P_i$ as the corresponding ${}^i P_i$. For example, in the case of the sixteen beads in Fig. 2, each set of four beads stained the identical color is assigned to one row of a 4×4 matrix. Second, for all beads constituting ${}^i P_i$, re-numbering is done on the basis of the identified ${}^i x_k$; that is, all beads constituting the same row are renumbered in ascending order of ${}^i x_k$. In Fig. 2, for example, the respective elements of ${}^i P_i$, that is, the four beads stained the identical color, are renumbered in ascending order of ${}^i x_k$. After the renumbering, the elements of ${}^i P$ satisfy the following relations:
$$ {}^i x_{N(i-1)+1} \le {}^i x_{N(i-1)+2} \le \cdots \le {}^i x_{N(i-1)+N}, \qquad i \in [1, M-1], \tag{4} $$
$$ \max\bigl({}^i y_{N(i-1)+1}, \ldots, {}^i y_{N(i-1)+N}\bigr) \le \min\bigl({}^i y_{Ni+1}, \ldots, {}^i y_{Ni+N}\bigr), \qquad i \in [1, M-1]. \tag{5} $$
Note that the inequality in Eq. (3) no longer holds after the renumbering. Thus, all rows of ${}^i P$ corresponding to ${}^d P$ are determined.
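As a concrete reading of process 1, the short NumPy sketch below orders the detected positions into rows by ascending y and then sorts each row by ascending x, so that the k-th bead pairs with the k-th lattice destination, k = N(i−1)+j. The function name and array layout are assumptions for illustration.

import numpy as np

def assign_destinations(initial_xy, M, N):
    """Return the M*N initial positions re-ordered so that the k-th entry is
    assigned to the k-th destination of the M x N lattice (process 1)."""
    order = np.argsort(initial_xy[:, 1])            # ascending y: row membership
    rows = order.reshape(M, N)
    for i in range(M):
        rows[i] = rows[i][np.argsort(initial_xy[rows[i], 0])]  # ascending x in a row
    return initial_xy[rows.reshape(-1)]             # i_P, row by row, paired with d_P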


Fig. 2 Assignment of sixteen destinations and collisionless trajectories for (a): negligible bead size, (b): non-negligible bead size compared with grid size.


In process 2, the algorithm generates the trajectories for the parallel transportation along the collisionless paths using the matrices ${}^i P$ and ${}^d P$. First, since we have found that an optimal step size for smooth transportation is less than the radius of the bead, each step size $\delta\mathbf{p}_k$ is determined by the following equations:

$$ \delta \mathbf{p}_k = [\delta x_k, \delta y_k]^T = \bigl({}^d\mathbf{p}_k - {}^i\mathbf{p}_k\bigr)/n, \tag{6} $$
$$ n \ge \max\left(\frac{\bigl|{}^d\mathbf{p}_k - {}^i\mathbf{p}_k\bigr|}{r_k}\right), \qquad k = 1, \ldots, MN, \tag{7} $$
where $r_k$ is the radius of the k-th bead and n is the smallest integer satisfying the inequality (7). Second, new positions in the next time-step are gradually generated using the M×N matrix $\delta P = [\delta\mathbf{p}_k]$.

In the case of negligible bead size, where we assume that the initially trapped beads are adequately spaced and that the lattice width of the array (that is, the grid size of the destinations) is also sufficiently wide compared with the bead size, the positions in the next time-step are

$$ P(t_{s+1}) = P(t_s + \tau) = P(t_s) + \delta P, \qquad s = 0, \ldots, n-1, \tag{8} $$
where $P(t_0) = {}^i P$ and $P(t_n) = {}^d P$. Each bead is transported directly toward its destination along a linear trajectory, as illustrated in Fig. 2(a). Using the inequalities (2) and (4), Eq. (8) yields
$$ {}^{t_s}x_i = {}^i x_i + \bigl({}^d x_i - {}^i x_i\bigr)\frac{t_s}{n} = {}^i x_i\left(1 - \frac{t_s}{n}\right) + {}^d x_i \frac{t_s}{n} < {}^i x_j\left(1 - \frac{t_s}{n}\right) + {}^d x_j \frac{t_s}{n} = {}^i x_j + \bigl({}^d x_j - {}^i x_j\bigr)\frac{t_s}{n} = {}^{t_s}x_j \quad \text{for } i < j, \tag{9} $$
that is,
$$ {}^{t_s}x_{N(i-1)+1} < {}^{t_s}x_{N(i-1)+2} < \cdots < {}^{t_s}x_{N(i-1)+N}, \qquad i \in [1, M-1], \tag{10} $$
with respect to the new x-positions of the beads that belonged to one row of ${}^i P$, at an arbitrary time-step $t_s$. Similarly, the inequality
$$ \max\bigl({}^{t_s}y_{N(i-1)+1}, \ldots, {}^{t_s}y_{N(i-1)+N}\bigr) < \min\bigl({}^{t_s}y_{Ni+1}, \ldots, {}^{t_s}y_{Ni+N}\bigr), \qquad i \in [1, M-1], \tag{11} $$
is derived by using the inequalities (1) and (5). The inequalities (10) and (11) are remarkable in that all beads transported along the trajectories gradually generated by Eq. (8) retain these orderings relative to their initial positions. Therefore, no bead ever overtakes or collides with another, provided the beads are regarded geometrically as points. Indeed, in the case of a 3×3 array, fewer collisions occur, although the potential for collisions does exist.
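For the negligible-bead-size case, Eqs. (6)-(8) reduce to a few lines of code. The sketch below is illustrative only: ip, dp and radii are assumed (MN, 2)- and (MN,)-shaped arrays, and move_traps is a placeholder callback standing in for whatever routine actually repositions the time-shared trap spots.

import numpy as np

def step_sizes(ip, dp, radii):
    # Eqs. (6)-(7): choose the smallest integer n that keeps every per-step
    # displacement |delta_p_k| no larger than the bead radius r_k.
    dist = np.linalg.norm(dp - ip, axis=1)
    n = int(np.ceil(np.max(dist / radii)))
    return (dp - ip) / n, n

def transport_linear(ip, dp, radii, move_traps):
    # Eq. (8): straight-line, synchronous transport toward the destinations.
    delta, n = step_sizes(ip, dp, radii)
    P = ip.astype(float).copy()
    for _ in range(n):
        P = P + delta        # P(t_{s+1}) = P(t_s) + delta_P
        move_traps(P)        # assumed callback that updates the trap positions
    return P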

In the case of non-negligible bead size, the positions in the next time-step are as follows:

$$ P(t_{s+1}) = P(t_s + \tau) = P(t_s) + {}_m\delta P, \qquad s = 0, \ldots, 2n + c_{\mathrm{stop}} - 1, \tag{12} $$
$$ {}_m\delta \mathbf{p}_k = \begin{cases} \text{Step 1: } [0, 0]^T & \text{if a collision occurs in the next time-step or } {}^{t_s}y_k = {}^d y_k,\\ \hphantom{\text{Step 1: }} [0, \delta y_k]^T & \text{otherwise, until } {}^{t_s}y_k = {}^{t_{s-1}}y_k, \quad k \in [1, MN],\\ \text{Step 2: } [\delta x_k, 0]^T & \text{for } s = c_{\mathrm{step1}}, c_{\mathrm{step1}}+1, \ldots, c_{\mathrm{step1}}+n-1,\\ \text{Step 3: } [0, 0]^T & \text{if the destination is reached},\\ \hphantom{\text{Step 3: }} [0, \delta y_k]^T & \text{otherwise, for } s = c_{\mathrm{step1}}+n, \ldots, 2n + c_{\mathrm{stop}} - 1, \end{cases} \tag{13} $$
where ${}_m\delta\mathbf{p}_k$ is the element of the modified step-size matrix ${}_m\delta P = [{}_m\delta\mathbf{p}_k]$ and $c_{\mathrm{stop}}$ is the largest value among all collision counters $c = [c_k]$. In order to avoid collisions, the modified control algorithm checks for potential collisions in the next time-step during Step 1. If the k-th bead is projected to collide from behind or head-on with another, then it skips the update of its position once, namely ${}_m\delta\mathbf{p}_k = [0, 0]^T$, and adds one to the k-th collision counter $c_k$. Under the modified algorithm, each bead is transported to its destination along a trajectory parallel to the grid, as illustrated by the dotted arrow in Fig. 2(b). Beads transported along trajectories parallel to the x-axis never collide with others during Step 2, because the inequalities (2) and (10) are satisfied with respect to the x-positions. Therefore, no checks for potential collisions are required after Step 1. Note that the only physical limitations on collisionless parallel transportation under the modified algorithm are the grid size, the bead size and the number of columns in the array. The sufficient condition for completing process 2 is
$$ {}^i L \ge 2 D_{\max} N, \tag{14} $$
where ${}^i L$ is the initial grid size, $D_{\max}$ is the largest bead size, and N is the number of columns in the array. After process 2, if necessary, we can shrink or expand the grid from ${}^i L$ to an arbitrary size L. The final grid size, L, is thus independent of the number and size of the beads. Therefore, the performance of our approach scales well with smaller or larger array sizes, namely grid widths, bead diameters and numbers of beads. The advantage of our approach is that once the transport paths have been generated for the destinations with ${}^i L$, the beads can be simultaneously transported without collisions, and subsequent sorting can also be achieved using the collision-free cyclic shifts described in Section 2.3. Furthermore, the algorithm is faster and more reliable than previously published work [16], since the potential for collisions does not increase even with larger array sizes.
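For the non-negligible-size case, one way to read Eqs. (12)-(13) is as three phases per bead: align in y while pausing whenever the next sub-step would cause a collision (Step 1), sweep in x (Step 2), then finish the remaining y motion (Step 3). The sketch below follows that reading and reuses step_sizes from the previous sketch; the pairwise overlap test and the data layout are assumptions, not the authors' implementation.

import numpy as np

def transport_grid_aligned(ip, dp, radii, move_traps):
    # Sketch of the modified, grid-parallel transport of Eqs. (12)-(13).
    delta, n = step_sizes(ip, dp, radii)      # helper from the previous sketch
    P = ip.astype(float).copy()
    y_steps = np.zeros(len(P), dtype=int)     # y sub-steps taken (n needed per bead)

    def would_collide(k, cand):
        # True if the candidate position of bead k overlaps any other bead.
        for j in range(len(P)):
            if j != k and np.linalg.norm(P[j] - cand) < radii[k] + radii[j]:
                return True
        return False

    # Step 1: move each bead in y toward its destination row; a bead whose next
    # sub-step would collide pauses for one time-step (its collision counter grows).
    moved = True
    while moved:
        moved = False
        for k in range(len(P)):
            if y_steps[k] == n:
                continue
            cand = P[k] + np.array([0.0, delta[k, 1]])
            if not would_collide(k, cand):
                P[k] = cand
                y_steps[k] += 1
                moved = True
        move_traps(P)

    # Step 2: n synchronous x steps; collision-free because inequalities (2)
    # and (10) preserve the x-order of beads within a row.
    for _ in range(n):
        P[:, 0] += delta[:, 0]
        move_traps(P)

    # Step 3: finish the remaining y motion toward the destinations.
    while np.any(y_steps < n):
        active = y_steps < n
        P[active, 1] += delta[active, 1]
        y_steps[active] += 1
        move_traps(P)
    return P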

2.3 Collision-free sorting of array elements

In stage 3, the beads forming a 2D lattice array are automatically sorted by their features, namely size, color, etc., which are identified in stage 1 or by another recognition process. We have developed sorting methods using the cyclic shift of six beads (CS6) and of four beads (CS4); these operations are illustrated in Figs. 3(b)-3(d). Under the CS6 and CS4 operations, the six or four beads are synchronously transported along the grid to shift their locations without collisions.


Fig. 3 Sorting method using the collision-free cyclic shifts. (b): Cyclic shift of six beads, (c): that of upper four beads, and (d): that of lower four beads, for the elements of a 3×2 array. (e): A 4×3 array is divided into two 2×3 arrays.


Let us denote the state of six grid nodes forming a 3×2 array in Fig. 3(a) by

$$ \begin{pmatrix} G_1 & G_2 & G_3 & G_4 & G_5 & G_6 \\ B_1 & B_2 & B_3 & B_4 & B_5 & B_6 \end{pmatrix}, \tag{15} $$
where $G_i$ and $B_i$ are the grid node numbered i and the bead numbered i, respectively. On the basis of group theory [12], the expression (15) implies a permutation of the grid nodes $G_i$ to the beads $B_i$, and all of the 6! = 720 permutations form a group $S_6$, called the permutation group. The operations of CS6 form a cyclic group $C_6$, and those of CS4 form a cyclic group $C_4$. We can interchange two beads at arbitrary nodes of the 3×2 array using a combination of CS6 and CS4 [13], much like solving the well-known Rubik's Cube puzzle [14]. This fact implies that all transpositions of $S_6$ can be generated by combinations of CS6 and CS4; a mathematical proof will be provided in a future report. According to a standard proposition of group theory, any permutation can be expressed as a product of transpositions. Therefore, we can sort the six beads forming a 3×2 array into an arbitrary order. An M×N array larger than a 3×3 array can be divided into 3×2 arrays; for example, a 4×3 array is divided into two 2×3 arrays, as shown in Fig. 3(e). Thus, we can complete the sorting of an M×N array using combinations of CS6 and CS4.
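To make this concrete, the sketch below encodes CS6 and the two CS4 shifts of Fig. 3 as permutations of the six nodes and searches, breadth-first over the group they generate, for a short sequence of shifts producing a desired arrangement. The cycle directions are read off Fig. 3 and are assumptions, as is the search itself; the authors' knowledge database may store such shift sequences differently.

from collections import deque

# Node indices of the 3x2 primitive array:
#   0 1
#   2 3
#   4 5
# Each generator lists, for every node, the node it receives its bead FROM
# after one shift.  The cycle directions are assumptions based on Fig. 3.
CS6       = (2, 0, 4, 1, 5, 3)   # all six beads advance one step around the perimeter
CS4_UPPER = (2, 0, 3, 1, 4, 5)   # upper 2x2 block cycles; nodes 4, 5 stay fixed
CS4_LOWER = (0, 1, 4, 2, 5, 3)   # lower 2x2 block cycles; nodes 0, 1 stay fixed
GENERATORS = {"CS6": CS6, "CS4u": CS4_UPPER, "CS4l": CS4_LOWER}

def shift(perm, state):
    """Apply one cyclic shift to the tuple of bead labels at nodes 0..5."""
    return tuple(state[perm[i]] for i in range(6))

def sort_plan(state, target):
    """Breadth-first search for a shortest sequence of shifts turning `state`
    into `target` (at most 6! = 720 states, so the search is trivial)."""
    frontier, seen = deque([(tuple(state), [])]), {tuple(state)}
    while frontier:
        s, ops = frontier.popleft()
        if s == tuple(target):
            return ops
        for name, perm in GENERATORS.items():
            nxt = shift(perm, s)
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, ops + [name]))
    return None   # returned only if the generators cannot reach the target

# Example: rearrange a mixed colored 3x2 array into traffic-light row order.
plan = sort_plan(("Y", "B", "R", "Y", "B", "R"), ("R", "R", "Y", "Y", "B", "B"))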

3. Experimental apparatus

The time-sharing scanning method of laser trapping is extremely useful for testing new control algorithms for dynamic micro-bead arrays, since it is simple to rapidly change multiple trapping positions. However, conventional time-sharing scanning optical tweezers are restricted to the objective lens' focal plane. Their use has therefore been limited to manipulating beads in 2D space, namely the XY-plane at a fixed Z-coordinate. In order to put an array on/off a cover glass (namely, its platform), we have developed a Time-Sharing Synchronized Scanning (T3S) optical tweezers system, which can translate the array to the XY-plane at any desired Z-coordinate, similar to that in our previous paper [15]. Note that translation to the XY-plane at an arbitrary Z-coordinate means that we can manipulate the array in two-and-a-half-dimensional (2.5D) space.

Figure 4 shows the optical configuration and control system for fully automated assembly of a dynamic micro-bead array. Our T3S optical tweezers system is configured around an inverted microscope (Olympus IX70) with an oil-immersion objective (Olympus UplanApoIR, ×100, 1.35NA). An expanded continuous-wave Nd:YAG laser beam (Spectron SL902T, 1064 nm, TEM00, 16W(max)) is introduced into the microscope via a shutter, lenses L1, L2, a PC-controlled 2-axis fast steering mirror (Newport FMS-300), a relay lens and the fluorescence port, and is reflected upward by a dichroic mirror to the objective. The focal positions of the time-shared beam on the XY-plane are controlled by the 2-axis steering mirror which can tilt at a considerable rate just as a piezoelectric mirror [9] can. Their Z-coordinate is controlled by the lens L1 mounted on a PC-controlled linear stage which can be moved parallel to the optical axis. An image processor (Hitachi IP5005) digitizes the images (512×512 pixels) from a color CCD camera (Sony DXC-151A) in real time for feature recognition. Software developed for image processing and device control is executed by a personal computer (PC). We can also interactively manipulate the assembled array in 2.5D space, that is, translate into 3D (Figs. 4(b1) and 4(b3)) and rotate in the XY-plane at a specified Z-coordinate (Fig. 4(b2)), using a PC’s 3-button mouse.


Fig. 4 Optical and control system configurations for handling an array in a two-and-half-dimensional (2.5D) working space. Multi-beam optical tweezers, which is generated by a time-sharing synchronized scanning (T3S) approach, can be controlled with a PC-controlled 2-axis steering mirror and a lens mounted on a PC-controlled linear stage. Commands by a control algorithm or a PC’s 3-button-mouse determine the position and orientation of the assembled array which can be located in 2.5D space.


4. Demonstrations

4.1 Fully automated assembly of bead arrays

On the basis of our assembly strategies described in Sections 2.1 and 2.2, we demonstrated the fully automated assembly of micro-bead arrays. Figure 5 (Media 1) is a sequence of images recorded with the CCD camera showing the result of the fully automated assembly of a 4×4 array. The sample was glass spheres (Duke Scientific, Borosilicate, 2.5 μm). First, all positions of the beads dispersed in the pipetted water on a cover glass were detected by the circular Hough transform, and then the sixteen beads nearest to the center position, o, were simultaneously trapped at the initially detected positions using the T3S optical tweezers (Fig. 5(b)). The micro-beads in the droplet diffuse by Brownian motion while untrapped. Therefore, after image digitization, we have to complete the recognition processes for the initial traps within the allowable time in which the beads stay in the neighborhood of the identified positions. This allowable time depends on the size of the beads, the viscosity of the droplet and the temperature, and can be estimated from the Langevin equation [16]. For typical micro-beads in water with diameters ranging from 1 to 3 μm, the allowable computing time is from 0.6 to 16 seconds. Under our control system with the latest PC (Intel Core2 Duo CPU, 3 GHz), we were able to complete the recognition processes within the allowable computing time, since the computing time required was dominated by the circular Hough process and was less than 0.2 seconds. Under the T3S optical tweezers, a scanning dwell time is also required to trap the beads stably; this dwell time was set to 0.01 seconds for each bead.
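The allowable-time estimate can be reproduced to order of magnitude from free Brownian diffusion alone. The sketch below uses the Stokes-Einstein diffusion coefficient and assumes, as an illustrative criterion not taken from the paper, that the tolerable drift before trapping is about one bead diameter; under that assumption it yields roughly 0.6 s for a 1 μm bead and about 16 s for a 3 μm bead in room-temperature water, consistent with the range quoted above.

import math

def allowable_time(diameter_m, temperature_K=298.0, viscosity_Pa_s=1.0e-3,
                   drift_budget_m=None):
    """Rough estimate of how long an untrapped bead stays near its detected
    position, from 2-D Brownian diffusion (<r^2> = 4 D t).  The default drift
    budget of one bead diameter is an assumed criterion, not the paper's
    Langevin calculation."""
    k_B = 1.380649e-23                               # Boltzmann constant, J/K
    D = k_B * temperature_K / (6.0 * math.pi * viscosity_Pa_s * (diameter_m / 2.0))
    if drift_budget_m is None:
        drift_budget_m = diameter_m
    return drift_budget_m ** 2 / (4.0 * D)

# allowable_time(1e-6) -> ~0.6 s, allowable_time(3e-6) -> ~16 s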


Fig. 5 (Media 1) Video frame sequence of the fully automated assembly of a 4×4 array. Computer vision detects all beads in an image, and sixteen beads nearest to the center position are trapped simultaneously (figure (b)). The sixteen beads are simultaneously transported along the collisionless paths (figure (a), (c)) to form the 4×4 array (figure (d)). Subsequent operations such as shrinking (figure (e)) and rotating (figure (f)) of the array are also demonstrated. The accompanying movie is in real time, not accelerated.


Second, the sixteen beads were simultaneously transported to their destinations along the collisionless paths illustrated in Fig. 5(a) to form the 4×4 array. These collisionless paths were gradually generated using Eqs. (12) and (13). Figure 5(c) shows the image after Step 1 of Eq. (13). Note that in Fig. 5(c) the sixteen beads are almost aligned in their assigned rows. However, bead number 3 could not reach its row because it would have collided with bead number 4, which had reached the row before it; beads number 6 and 7 also could not reach their row because they would have collided head-on. These bead situations after Step 1 are indicated by white circles in Fig. 5(a). Third, after the 4×4 array had been formed with initial grid size ${}^i L$ = 7.5 μm (Fig. 5(d)), the grid size was shrunk to 70% of its initial value, L = 5.2 μm (Fig. 5(e)). Finally, we rotated the array counterclockwise in the XY-plane (Fig. 5(f)).

In another demonstration, shown in Fig. 6 (Media 2), almost all the beads in an image of pipetted droplets were automatically assembled to form a 6×6 array using the same three-beam system as in our previous paper [17]. Each laser beam formed one set of the T3S optical tweezers described in Section 3, and a supervisor synchronously controlled the three sets of T3S optical tweezers to execute the assembly algorithm described in Sections 2.1 and 2.2. First, the destinations of the 6×6 array for the chosen 36 beads were divided into three sets of destinations for 2×6 arrays. Next, under supervisory control by a PC, the 36 beads were simultaneously transported along the collisionless paths based on the proposed algorithm to form the 6×6 array, with each divided 2×6 array assembled by one set of the T3S system. After shrinking the grid size (Fig. 6(b)), subsequent operations such as Z-axis translation of the 2×6 arrays (Fig. 6(c)) were also demonstrated.


Fig. 6 (Media 2) Video frame sequence of a fully automated assembly of a 6×6 array using three sets of T3S optical tweezers. The 36 beads are simultaneously transported along the collisionless paths based on the proposed algorithm to form the 6×6 array. Subsequent operations such as shrinking (figure (b)) and translating the 2×6 arrays along the Z-axis (figure (c)) are also demonstrated. The accompanying movie is in real time, not accelerated.


4.2 Fully automated sorting by bead features

Rearrangement of the beads located at arbitrary grid nodes is important for signal-analysis applications of dynamic micro-bead arrays. On the basis of our assembly strategy using the collision-free interchange described in Section 2.3, we demonstrated fully automated sorting of array elements by identified bead features. The samples were dyed latex beads (Polysciences Inc., Polybead® Dyed Red, Yellow, Blue, 3 μm). Figure 7 (Media 3) is a sequence of images recorded with the color CCD camera showing the result of fully automated assembly of a 3×2 array and subsequent sorting by identified colors. First, all bead positions in an image were identified by the circular Hough transform and their colors by thresholding the RGB signals using the Discriminant Threshold Selection Method (DTSM) [18, 19], and then three pairs of beads with identical colors were simultaneously trapped using a T3S system (Fig. 7(a)). Second, they were transported to form a 3×2 array (Fig. 7(b)). Note that the 3×2 array is the primitive array for the cyclic shift operations, CS4 and CS6, used to sort an arbitrary M×N array. Third, successive CS4 and CS6 operations were carried out to interchange the array elements in a collision-free manner (Figs. 7(b)-7(d)). Using the knowledge database based on group theory, this procedure continued until the sorting was complete and the colored 3×2 array was rearranged like a traffic light, that is, in the order red-yellow-blue (Fig. 7(e)). Finally, the array was translated and rotated in the XY-plane (Fig. 7(f)).
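Reference [18] is Otsu's discriminant threshold selection. A minimal version for a one-dimensional color feature such as the per-bead hue is sketched below; the binning and the choice of feature are illustrative assumptions, since the authors threshold the RGB signals directly.

import numpy as np

def otsu_threshold(values, bins=256):
    """Discriminant threshold selection (Otsu [18]): pick the cut that
    maximizes the between-class variance of a 1-D feature histogram."""
    hist, edges = np.histogram(values, bins=bins)
    p = hist.astype(float) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2.0
    best_t, best_var = edges[0], -1.0
    for t in range(1, bins):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 == 0.0 or w1 == 0.0:
            continue
        m0 = (p[:t] * centers[:t]).sum() / w0        # class means below/above the cut
        m1 = (p[t:] * centers[t:]).sum() / w1
        var_between = w0 * w1 * (m0 - m1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, edges[t]
    return best_t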


Fig. 7 (Media 3) Video frame sequence of a fully automated assembly of a dyed beads array. Ri: red beads, Yi: yellow beads, and Bi: blue beads. Computer vision detects all bead positions in an image, and three-color pairs nearest to the center position are trapped and transported simultaneously to form a colored 3×2 array (figure (a), (b)). Subsequent sorting procedure (figure (b)-(e)) proceeds automatically using knowledge data based on group theory in order to rearrange the order of colors like a traffic light. The accompanying movie is in real time, not accelerated.


In another demonstration, shown in Fig. 8 (Media 4), a 3×3 array consisting of nine beads of different sizes was automatically rearranged to sort its elements by size. The sample was glass spheres (Duke Scientific, Borosilicate, 2.5 μm ± 0.5 μm). First, all bead positions and radii in an image were identified by the circular Hough transform, and then the nine beads nearest to the center were simultaneously trapped using a T3S system (Fig. 8(a)). Second, under the control algorithm described in Section 2.2, they were transported to form a 3×3 array. Note that the array was not yet sorted at that moment; the bead sizes at the grid nodes were therefore random (Fig. 8(b)). Third, using the knowledge database based on group theory for 3×3 arrays, successive CS4 and CS6 operations were carried out to interchange the beads at specified nodes, much like solving the Rubik's Cube puzzle (Figs. 8(c)-8(k)). Note that a 3×3 array can be divided into two overlapping 3×2 arrays; therefore we can complete the sorting using only combinations of CS6 and CS4. Finally, after the nine operations, the beads at the grid nodes were sorted by size (Fig. 8(l)).


Fig. 8 (Media 4) Video frame sequence of a fully automated assembly and sorting of a 3×3 array consisting of silica beads with different sizes. The detected nine beads are trapped, and transported along the collisionless paths, simultaneously, to form a 3×3 array (figure (a), (b)). Subsequent cyclic shift of four beads (CS4: figure (c), (d), (e), (g), (h), (i), (k)) and six beads (CS6: figure (f), (j)) are executed successively using knowledge data based on group theory to sort the array elements by size. The accompanying movie is in real time, not accelerated.


5. Conclusion

We have demonstrated the fully automated, versatile assembly of micro-bead arrays based on a software-oriented approach using multi-beam optical tweezers. Beads with different colors and sizes were automatically assembled and subsequently sorted, without collisions, to form 2D dynamic arrays ordered by their specified features. Although we chose the T3S technique as the physical method of generating multi-beam optical tweezers, GPC and optoelectronic tweezers [2] can also be used with similar results. Additionally, the fully automated assembly of the 2D array implies that the beads forming a 2D array can be readily transformed into 3D structures using 3D manipulation techniques such as hologram sequences [20]. Furthermore, our approach enables not only the dynamic assembly of bead arrays but also the dynamic patterning of biological materials and colloidal structures. Under such software-oriented approaches, massively parallel optical manipulation techniques, including optoelectronic tweezers, combined with intelligent control techniques can evolve from tweezers into multi-arm robotic manipulators, enabling exciting applications in biomedical fields such as the fully automated handling of live cells in Lab-on-a-Chip devices.

Acknowledgements

This work was partly supported by Grants-in-Aid for Scientific Research (C, #20560252) from the Japan Society for the Promotion of Science, and also by Research for Promoting Technological Seeds from Japan Science and Technology Agency.

References and Links

1. W.-H. Tan and S. Takeuchi, “A trap-and-release integrated microfluidic system for dynamic microarray applications,” Proc. Natl. Acad. Sci. U.S.A. 104(4), 1146–1151 (2007).

2. P. Y. Chiou, A. T. Ohta, and M. C. Wu, “Massively parallel manipulation of single cells and microparticles using optical images,” Nature 436(7049), 370–372 (2005).

3. H. Noda, Y. Kohara, K. Okano, and H. Kambara, “Automated bead alignment apparatus using a single bead capturing technique for fabrication of a miniaturized bead-based DNA probe array,” Anal. Chem. 75(13), 3250–3255 (2003).

4. C. D. Onal and M. Sitti, “Visual servoing-based autonomous 2-D manipulation of microparticles using a nanoprobe,” IEEE Trans. Contr. Syst. Technol. 15(5), 842–852 (2007).

5. A. Ashkin, J. M. Dziedzic, J. E. Bjorkholm, and S. Chu, “Observation of a single-beam gradient force optical trap for dielectric particles,” Opt. Lett. 11(5), 288–290 (1986).

6. D. G. Grier, “A revolution in optical manipulation,” Nature 424(6950), 810–816 (2003).

7. J. E. Curtis, B. A. Koss, and D. G. Grier, “Dynamic holographic optical tweezers,” Opt. Commun. 207(1-6), 169–175 (2002).

8. P. J. Rodrigo, R. L. Eriksen, V. R. Daria, and J. Glueckstad, “Interactive light-driven and parallel manipulation of inhomogeneous particles,” Opt. Express 10(26), 1550–1556 (2002).

9. C. Mio and D. W. M. Marr, “Optical trapping for the manipulation of colloidal particles,” Adv. Mater. 12(12), 917–920 (2000).

10. J. M. Tam, I. Biran, and D. R. Walt, “An imaging fiber-based optical tweezer array for microparticle array assembly,” Appl. Phys. Lett. 84(21), 4289–4291 (2004).

11. D. H. Ballard and C. M. Brown, Computer Vision (Prentice-Hall, 1982), Chaps. 3–4.

12. J. Chen, Group Representation Theory for Physicists (World Scientific, 1989), Chap. 1.

13. Y. Tanaka, H. Kawada, K. Hirano, M. Ishikawa, and H. Kitajima, Japan patent 2008-101060 (April 9, 2008).

14. http://www.rubiks.com/

15. Y. Tanaka, H. Kawada, K. Hirano, M. Ishikawa, and H. Kitajima, “Automated manipulation of non-spherical micro-objects using optical tweezers combined with image processing techniques,” Opt. Express 16(19), 15115–15122 (2008).

16. S. C. Chapin, V. Germain, and E. R. Dufresne, “Automated trapping, assembly, and sorting with holographic optical tweezers,” Opt. Express 14(26), 13095–13100 (2006).

17. Y. Tanaka, K. Hirano, H. Nagata, and M. Ishikawa, “Real-time three-dimensional orientation control of non-spherical micro-objects using laser trapping,” Electron. Lett. 43(7), 412–414 (2007).

18. N. Otsu, “A threshold selection method from gray-level histograms,” IEEE Trans. Syst. Man Cybern. SMC-9(1), 62–66 (1979).

19. A. Murakami, Y. Tanaka, and Y. Kinouch, “Laser manipulation system for automatic control of microscopic particles,” in Proceedings of the 4th Asian Control Conference (Singapore, 2002), pp. 414–419.

20. G. S. Sinclair, P. Jordan, J. Courtial, M. Padgett, J. Cooper, and Z. J. Laczik, “Assembly of 3-dimensional structures using programmable holographic optical tweezers,” Opt. Express 12(22), 5475–5480 (2004).

Supplementary Material (4)

Media 1: MOV (1740 KB)     
Media 2: MOV (1860 KB)     
Media 3: MOV (1435 KB)     
Media 4: MOV (1485 KB)     
