
Haptic guidance for improved task performance in steering microparticles with optical tweezers

Open Access

Abstract

We report the manipulation of 4–5 µm diameter polymer microspheres floating in water using optical tweezers (OT) and a haptic device (i.e. force-reflecting robotic arm). Trapped microspheres are steered using the end-effector of a haptic device that is virtually coupled to an XYZ piezo-scanner controlling the movements of the fluid bed. To help with the manipulations, we first calculate a collision-free path for the particle and then display artificial guidance forces to the user through the haptic device to keep him/her on this path during steering. Experiments conducted with 8 subjects show almost two-fold improvements in the average path error and average speed under the guidance of haptic feedback.

©2007 Optical Society of America

1. Introduction

Since their first introduction in 1986, optical tweezers (OTs) have found numerous applications in which mesoscopic objects as small as 5 nm have been manipulated non-invasively [1, 2]. They have been used to characterize the viscoelastic properties of various biological structures (e.g. DNA, cell membranes, and actin) and the forces exerted by molecular motors (e.g. myosin, kinesin, processive enzymes, and ribosomes) [2]. OTs have also proven indispensable in constructing assemblies of mesoscopic objects [3].

Precise control is crucial in many optical manipulation tasks that require accurate transportation and positioning of the trapped particle. The trapping force generated by the intensity gradient of the laser beam acts like a nonlinear spring: an applied external force displaces the trapped particle from the focus of the laser beam much as it would displace an object attached to a mechanical spring. During controlled steering, the drag force applied to the particle must be balanced by the trapping force, and large steering speeds or external forces can easily cause the steered particle to escape from the trap. In this regard, haptic control appears to be a natural choice for optical manipulation [4]. Haptics refers to manual interaction with an environment, such as exploration to extract information about the environment or manipulation to modify it. During the last decade, haptic displays have emerged as effective human-machine interfaces for improving the realism of touch interactions in virtual worlds. The underlying technology, both in terms of hardware and software, is now quite mature and has many applications. These cover a wide range of fields including medicine (surgical simulation, tele-medicine, haptic user interfaces for blind persons, rehabilitation of patients with neurological disorders, dental medicine), art and entertainment (3D painting, character animation, digital sculpting, virtual museums), computer-aided product design (free-form modeling, assembly and disassembly including insertion and removal of parts), scientific visualization (geophysical data analysis, molecular simulation, flow visualization), and robotics (path planning, micro/nano tele-manipulation) [5, 6]. Most commercial haptic displays used today are force-reflecting robotic arms: as the user manipulates the end-effector of the haptic device, its tip position is sensed by encoders and feedback forces are displayed to the user via actuators.

Despite the relatively long history of optical trapping and manipulation, only a few studies have reported controlling the movements of trapped objects through human-machine interfaces. Arai et al. [7, 8] used a haptic device to manipulate biological samples indirectly via “microtools” including a micro bead, a micro basket, and a micro capsule. These tools were trapped by the laser beam and positioned by the haptic device, and were primarily used to manipulate and isolate targeted biological objects such as Escherichia coli and yeast cells. The haptic feedback was used only to convey trapping forces to the user, not for guidance during steering and assembly, and no user study demonstrating the benefits of haptic feedback was reported. Whyte et al. [9] developed a real-time interface for holographic optical tweezers in which the operator’s fingertips were mapped to the positions of silica beads captured in optical traps; the beads acted as the fingertips of a microhand that could be used to manipulate objects. Recently, the same group reported the use of a joystick to control the three-dimensional movements of optically trapped spheres that hold other objects [10]. However, no force feedback was applied to the user during manipulations in either of these studies [9, 10]. We have recently demonstrated the use of haptic guidance in assembling polymer microspheres to form coupled microsphere optical resonators [4]. Streptavidin-coated polymer microspheres were chemically attached to stationary biotin-coated ones, and different patterns of assemblies containing 4–5 microspheres were demonstrated. The user study demonstrated reduced positioning error in the formed patterns.

In this study we focus on the steering control of the optically trapped microspheres rather than improving positional precision in pattern formation. Steering control is a more general manipulation task which requires accurate tracking of a collision-free path. We demonstrate that haptic guidance leads to better steering performance with less error in path tracking and higher manipulation speed.

2. Experimental setup

The beam of a continuous-wave green laser (λ=532 nm) with an output power of 20 mW is used for optical trapping. After being reflected off a dichroic mirror, the laser beam is focused onto the sample by a high numerical aperture (NA=1.2, 60x) microscope objective. White-light images are taken using the same microscope objective and an intermediate magnification module, achieving a total magnification of 90x. A CCD camera is used to capture the images of the sample, and a red-pass filter is used to filter out the laser light. Manipulation of the trapped particle is achieved by an XYZ piezoelectric scanner operating in closed-loop control (Tritor 102 SG, Piezosystem Jena Inc.; scanning resolution 2 nm). The movements of the scanner are commanded by the user via a haptic device (Omni, Sensable Technologies Inc.). The displacements of the haptic stylus are scaled, and corresponding voltage values are sent to the scanner to control its movements in the sample plane. The forces acting on the trapped particle during the manipulations are scaled up and conveyed to the user through the same haptic device.

The components of the setup are synchronized by a computer program written in C++. First, a snapshot of the manipulation environment is captured via the CCD camera. A threshold is then applied to the captured image, and the center position and radius of each microsphere in the image are determined using a contour-finding algorithm. Using this information, a virtual model of the scene is constructed to provide the user with a bird’s-eye view of the manipulation environment during the execution of the task. Since the CCD camera is stationary and does not move with the scanner as the spheres move, the user may easily lose the “big picture” while steering a sphere without this bird’s-eye view. For the trapped sphere, the computer automatically displays the target location to the user in the virtual model. Based on the current locations of the trapped and untrapped spheres (obstacles) in the scene and the target location for the manipulated sphere, the program first calculates the collision-free path for steering and then generates an artificial force field to provide haptic guidance to the user during the execution of the steering task.
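As an illustration of this detection step, the following C++ sketch uses OpenCV's thresholding and contour-finding routines to recover the center and radius of each microsphere from a camera frame. The paper does not name a specific image-processing library, and the threshold value and noise cutoff are illustrative assumptions.

// Sketch of the bead-detection step described above, assuming OpenCV
// (the paper does not specify a vision library).
#include <opencv2/opencv.hpp>
#include <vector>

struct Bead { cv::Point2f center; float radius; };

std::vector<Bead> detectBeads(const cv::Mat& frame)
{
    cv::Mat gray, binary;
    cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);

    // Fixed threshold separating the spheres from the background;
    // the value 128 is illustrative, not taken from the paper.
    cv::threshold(gray, binary, 128, 255, cv::THRESH_BINARY);

    // Contour finding yields the outline of each microsphere.
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(binary, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    // Each contour is reduced to a center position and a radius,
    // which are used to build the virtual (bird's-eye) scene.
    std::vector<Bead> beads;
    for (const auto& c : contours) {
        Bead b;
        cv::minEnclosingCircle(c, b.center, b.radius);
        if (b.radius > 5.0f)            // reject small noise blobs (arbitrary cutoff)
            beads.push_back(b);
    }
    return beads;
}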

The haptic device and OTs are both commanded by the same computer; hence time delays during teleoperation are not significant. As the user manipulates the trapped particle via the haptic device, the movements of the particle are tracked from the camera image. The positional scaling (β in Fig. 1) between the movements of the haptic stylus and the XYZ scanner was adjusted such that one millimeter of movement of the haptic stylus corresponds to 3 pixels of movement of the trapped particle in the image (the size of one pixel is ~200 nm, hence the scale factor for positional movements is β=1667). Similarly, the drag and guidance forces applied to the trapped particle during the manipulations are scaled up by a coefficient α and displayed to the user via the haptic device. In our implementation, the maximum total force displayed to the user is saturated at 2 N [4].
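The following minimal C++ sketch illustrates this coupling: stylus displacements are divided by β = 1667 to obtain scanner commands, and micro-scale forces are scaled by α and saturated at 2 N before being sent to the haptic device. Only β and the 2 N limit come from the text; the value of α and the function names are illustrative assumptions.

// Minimal sketch of the position/force scaling described above.
#include <algorithm>

const double kBeta     = 1667.0;  // stylus displacement -> sample-plane displacement
const double kForceCap = 2.0;     // maximum force displayed to the user [N]

// Map a stylus displacement (in mm) to the commanded scanner displacement (in mm).
double scannerCommand(double stylusDisplacementMm)
{
    return stylusDisplacementMm / kBeta;   // 1 mm of stylus ~ 3 pixels (~600 nm) on the sample
}

// Scale up a micro-scale force component (drag + guidance, in N) by alpha and
// saturate it at 2 N before sending it to the haptic device.
double hapticForce(double microForceN, double alpha)
{
    return std::clamp(alpha * microForceN, -kForceCap, kForceCap);
}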

3. Haptic Guidance and Control

In our application, two types of forces are applied to the user during steering manipulations:

1) Drag force: Since the steering task is carried out in a fluid solution, the drag force acting on the trapped microsphere may exceed the gradient force applied by the laser beam if the particle is manipulated at high speed; as a result, the particle may escape from the trap. To avoid this situation, the maximum achievable speed in our setup (42 µm/s) is determined prior to the manipulations and used in the estimation of the drag force displayed to the user during the manipulations [4] (a drag-estimate sketch is given after this list).

2) Guidance forces: To move a trapped sphere to a target location while preventing collisions with other spheres (and hence to prevent the escape of the particle from the trap), we first calculate a collision-free path for the particle and then apply guidance forces to the user through the haptic device to keep him/her on this path. To calculate the collision-free path we use the potential field approach [11], which is commonly used for path and motion planning in robotics. In our application, we construct the potential field U such that the trapped sphere is attracted to the target location while being repelled from the boundaries of the untrapped spheres (i.e. obstacles). Hence, the total potential consists of two parts: U(q) = U_target(q) + Σ U_obstacle(q), where q is the position of the particle. Typically, U_target is defined as a parabolic attractor. The potential U_obstacle is defined such that the repulsive force reaches a maximum at the boundary of the obstacle but reduces to zero when the trapped sphere is sufficiently far from it; this distance is called the radius of influence and, in our application, is equal to 1.2 times the radius of the obstacle plus the radius of the trapped sphere. Path planning can then be treated as an optimization problem whose goal is to find the global minimum of U. The collision-free path is calculated recursively starting from the initial position: at each recursive step, the neighboring grid point with the minimum potential is selected as the next path point until the goal position, i.e. U=0, is reached. The resulting set of grid points between the initial and goal positions constitutes the collision-free path (a sketch of this planner is given after this list).
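As referenced in item 1 above, a drag estimate of the kind used for the force feedback can be sketched with Stokes' law, F = 6πηrv. The paper defers the details of the estimation to Ref. [4], so the use of Stokes' law here and the chosen bead radius are assumptions.

// Sketch of a Stokes-drag estimate for the force fed back to the user,
// assuming the drag on the trapped sphere follows Stokes' law F = 6*pi*eta*r*v.
#include <iostream>

constexpr double kPi = 3.14159265358979323846;

double stokesDrag(double radiusM, double speedMps, double viscosityPaS = 1.0e-3 /* water */)
{
    return 6.0 * kPi * viscosityPaS * radiusM * speedMps;
}

int main()
{
    // Example: a 2.5 um radius sphere steered at the 42 um/s escape speed of our
    // setup experiences roughly 2e-12 N (about 2 pN) of drag.
    std::cout << stokesDrag(2.5e-6, 42e-6) << " N\n";
    return 0;
}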
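The following C++ sketch illustrates the grid-based potential-field planner described in item 2: a parabolic attractor toward the target, repulsive terms that vanish beyond the radius of influence (1.2 times the obstacle radius plus the trapped-sphere radius), and a greedy descent over neighboring grid points. The gains, grid spacing, and descent details are illustrative assumptions, not values from the paper.

// Sketch of the potential-field path planner described above.
#include <algorithm>
#include <cmath>
#include <vector>

struct Vec2 { double x, y; };
struct Obstacle { Vec2 c; double r; };          // untrapped sphere: center and radius

// Total potential at point q: parabolic attraction to the goal plus a repulsive
// term for every obstacle, active only inside its radius of influence
// rho0 = 1.2 * r_obstacle + r_trapped (the value used in the text).
double potential(const Vec2& q, const Vec2& goal,
                 const std::vector<Obstacle>& obstacles, double trappedRadius)
{
    const double kAtt = 1.0, kRep = 100.0;      // illustrative gains
    double U = 0.5 * kAtt * ((q.x - goal.x) * (q.x - goal.x) +
                             (q.y - goal.y) * (q.y - goal.y));
    for (const auto& o : obstacles) {
        double d    = std::hypot(q.x - o.c.x, q.y - o.c.y);   // center-to-center distance
        double rho0 = 1.2 * o.r + trappedRadius;
        if (d < rho0) {
            double term = 1.0 / std::max(d, 1e-6) - 1.0 / rho0;
            U += 0.5 * kRep * term * term;      // large near the boundary, zero at rho0
        }
    }
    return U;
}

// Greedy descent on a square grid: from the current cell, move to the 8-neighbour
// with the lowest potential until the goal is reached (or a local minimum stops us).
std::vector<Vec2> planPath(Vec2 start, Vec2 goal, const std::vector<Obstacle>& obstacles,
                           double trappedRadius, double step /* grid spacing */)
{
    std::vector<Vec2> path{start};
    Vec2 q = start;
    while (std::hypot(q.x - goal.x, q.y - goal.y) > step) {
        Vec2 best = q;
        double bestU = potential(q, goal, obstacles, trappedRadius);
        for (int dx = -1; dx <= 1; ++dx)
            for (int dy = -1; dy <= 1; ++dy) {
                Vec2 n{q.x + dx * step, q.y + dy * step};
                double U = potential(n, goal, obstacles, trappedRadius);
                if (U < bestU) { bestU = U; best = n; }
            }
        if (best.x == q.x && best.y == q.y) break;   // stuck in a local minimum
        q = best;
        path.push_back(q);
    }
    path.push_back(goal);
    return path;
}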

Fig. 1. Our experimental setup for optical manipulation of microspheres using haptic feedback.

Once the collision-free path is defined, we use the concept of virtual fixtures to keep the user on this path with the help of the haptic device. The term virtual fixture refers to software-implemented guidance that helps the user perform a task by limiting his/her movements to restricted regions and/or influencing the movement along a desired path [12]. Virtual fixtures offer an excellent balance between automated operation and direct human control: they can be programmed to help the operator carry out a structured task faster and more precisely, or they can act as safety elements preventing the manipulated object from entering dangerous or undesired regions. In our application, when the trapped particle moves away from the collision-free path, the virtual fixture becomes active and pulls the user towards the path along the perpendicular direction from the current position of the particle to the path. This generates a tunneling effect such that the user easily slides the particle along a channel whose centerline is the collision-free path. We also programmed a snap effect at the end of the path such that the manipulated particle is pulled to the target location when it is sufficiently close to it. In our implementation, the restoring force of the trap, which is related to the displacement of the particle from the trap center, was not considered in the haptic feedback force calculation.
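A minimal sketch of such a virtual fixture is given below: the guidance force is a spring pulling the particle perpendicularly back to the nearest point of the planned path (the tunneling effect), replaced by a direct attraction to the target when the particle is sufficiently close (the snap effect). The gains, snap radius, and helper names are illustrative assumptions.

// Sketch of the tunneling and snap forces described above (before the alpha
// scaling and the 2 N saturation are applied).
#include <algorithm>
#include <cmath>
#include <vector>

struct Vec2 { double x, y; };

static Vec2 closestPointOnSegment(Vec2 p, Vec2 a, Vec2 b)
{
    double vx = b.x - a.x, vy = b.y - a.y;
    double t = ((p.x - a.x) * vx + (p.y - a.y) * vy) / (vx * vx + vy * vy + 1e-12);
    t = std::max(0.0, std::min(1.0, t));
    return {a.x + t * vx, a.y + t * vy};
}

// Guidance force on the particle; 'path' is the collision-free path (>= 2 points).
Vec2 guidanceForce(Vec2 particle, const std::vector<Vec2>& path,
                   double kTunnel, double kSnap, double snapRadius)
{
    // Find the closest point on the collision-free path (treated as a polyline).
    Vec2 best = path.front();
    double bestD = 1e30;
    for (size_t i = 0; i + 1 < path.size(); ++i) {
        Vec2 c = closestPointOnSegment(particle, path[i], path[i + 1]);
        double d = std::hypot(particle.x - c.x, particle.y - c.y);
        if (d < bestD) { bestD = d; best = c; }
    }

    // Snap effect: near the end of the path, attract the particle directly to the target.
    Vec2 target = path.back();
    if (std::hypot(particle.x - target.x, particle.y - target.y) < snapRadius)
        return {kSnap * (target.x - particle.x), kSnap * (target.y - particle.y)};

    // Tunneling effect: spring force along the perpendicular back to the path.
    return {kTunnel * (best.x - particle.x), kTunnel * (best.y - particle.y)};
}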

Fig. 2. (3.32 MB) Movie of the virtual scene (left) and camera image (right) viewed by the user during two consecutive manipulation tasks. Both visual and haptic feedback were displayed to the user during these manipulation tasks. Yellow, green, and red spheres in the virtual scene denote the trapped particle, obstacles, and the goal point, respectively. The collision-free path is shown in blue in the movie. The dimensions of the camera image are 69 µm×53 µm. [Media 1]

4. Experimental results

A user study was performed with 8 human subjects. In each trial, subjects were asked to steer a trapped microsphere to 6 given target locations near the corners of the scene. Each time, a collision-free path connecting the current position of the manipulated sphere to the new target position was calculated in advance. Subjects were asked to follow the displayed path with minimum path error (no instruction was given about the steering speed), and they used the global virtual scene and the local camera image in tandem during the manipulations. Subjects repeated the same experiment multiple times under two different sensory conditions: visual feedback only (V), and visual and haptic feedback together (V+H). Subjects were divided into two groups (four in each group), and the experiments were performed in two sets with a one-week interval between them. In the first set, the first group received 5–6 trials under condition (V+H) and then 5–6 trials under condition (V), while the second group received 5–6 trials under condition (V) and then 5–6 trials under condition (V+H). After one week, the order of conditions received by the groups was changed. Figure 2 shows an exemplary movie of the virtual scene and camera image viewed by the user during two consecutive manipulation tasks; both visual and haptic feedback were displayed to the user during these tasks.

Two measures were defined to evaluate the performance of the subjects: 1) average steering speed and 2) average path error (i.e. normalized average positional deviation from the collision-free path). The average speed is calculated by dividing the path length by the travel time. The average path error is the area between the path traversed by the subject and the collision-free path, normalized by the path length. It is calculated as $\sum_{i=1}^{N} \Delta x_i h_i / \sum_{i=1}^{N} \Delta x_i$, where N is the number of time intervals along the traversed path, $\Delta x_i$ is the corresponding position increment along the desired path, and $h_i$ is the perpendicular distance between the actual and desired path points.
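The path-error measure defined above can be computed directly from the logged increments and deviations, as in the short sketch below; the variable names are illustrative.

// Sketch of the average-path-error computation: the area between the traversed
// and planned paths, normalized by path length, i.e. sum(dx_i * h_i) / sum(dx_i).
#include <vector>

double averagePathError(const std::vector<double>& dx,   // increments along the desired path
                        const std::vector<double>& h)    // perpendicular deviations
{
    double area = 0.0, length = 0.0;
    for (size_t i = 0; i < dx.size() && i < h.size(); ++i) {
        area   += dx[i] * h[i];
        length += dx[i];
    }
    return (length > 0.0) ? area / length : 0.0;
}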

The statistical analysis did not reveal a significant difference between the results of the two groups at the 0.05 significance level, indicating that the order of stimuli had no influence on the results. Hence, the data from the two groups were combined and the results reported accordingly. The addition of haptic feedback (V+H) significantly improved both the speed and accuracy of the steering task (Fig. 3). The user study showed that the average path error was decreased by a factor of 2.2. Although the subjects were not specifically asked to maximize their speed during the experiments, haptic guidance also improved the average speed by a factor of 2.0.

Fig. 3. The results of the user study performed with 8 human subjects show that manipulating microspheres with haptic and visual feedback together (V+H) is significantly better than manipulating with visual feedback only (V). Almost two-fold improvements are seen in the (a) average path error and (b) average speed during optical manipulations (p<0.05).

5. Conclusions

We developed a setup for improved steering of microspheres using a haptic device and optical tweezers. In our setup, the trapped particle is virtually coupled to the haptic device: its movements are synchronized with the movements of the haptic stylus, while guidance forces are displayed to the user through the haptic device for better task performance. We conducted experiments with 8 human subjects under visual guidance alone (V) and under visual and haptic guidance together (V+H). The results showed that haptic guidance enabled the users to steer the trapped particle more accurately and efficiently (Fig. 3); the average path error and average speed during manipulations were improved by factors of 2.2 and 2.0, respectively. The achieved improvement in steering performance makes haptic guidance suitable for various optical manipulation tasks that require precise transportation.

Acknowledgments

A. Kiraz acknowledges the financial support of the Turkish Academy of Sciences in the framework of the Young Scientist Award program (Grant No. A.K/TÜBA-GEBİP/2006-19).

References and links

1. A. Ashkin, J. M. Dziedzic, J. E. Bjorkholm, and S. Chu, “Observation of a single-beam gradient force optical trap for dielectric particles,” Opt. Lett. 11, 288–290 (1986). [CrossRef]

2. D. G. Grier, “A revolution in optical manipulation,” Nature 424, 810–816 (2003). [CrossRef]   [PubMed]  

3. P. J. Pauzauskie, A. Radenovic, E. Trepagnier, H. Shroff, P. Yang, and J. Liphardt, “Optical trapping and integration of semiconductor nanowire assemblies in water,” Nat. Mater. 5, 97–101 (2006). [CrossRef]

4. I. Bukusoglu, C. Basdogan, A. Kiraz, and A. Kurt, “Haptic Manipulation of Microspheres Using Optical Tweezers Under the Guidance of Artificial Forces,” Presence: Teleoperators and Virtual Environments, in press (arxiv:0707.3325) (2007).

5. C. Basdogan, S. D. Laycock, A. M. Day, V. Patoglu, and R. B. Gillespie, “3-DoF Haptic Rendering,” in Haptic Rendering, M. C. Lin and M. Otaduy, eds. (A. K. Peters, 2007).

6. M. A. Srinivasan and C. Basdogan, “Haptics in Virtual Environments: Taxonomy, Research Status, and Challenges,” Comput. Graph. 21, 393–404 (1997). [CrossRef]  

7. F. Arai, M. Ogawa, T. Mizuno, T. Fukuda, K. Morishima, and K. Horio, “Teleoperated laser manipulator with dielectrophoretic assistance for selective separation of a microbe,” Proceedings of the IEEE Int. Conf. on Intelligent Robots and Systems, 1872–1877 (1999).

8. F. Arai, M. Ogawa, and T. Fukuda, “Bilateral control system for laser micromanipulation by force feedback,” Adv. Rob. 14, 381–383 (2000). [CrossRef]  

9. G. Whyte, G. Gibson, J. Leach, M. Padgett, D. Robert, and M. Miles, “An optical trapped microhand for manipulating micron-sized objects,” Opt. Express 14, 12497–12502 (2006). [CrossRef]   [PubMed]  

10. G. Gibson, L. Barron, F. Beck, G. Whyte, and M. Padgett, “Optically controlled grippers for manipulating micron-sized particles,” New J. Phys. 9, 14 (2007). [CrossRef]  

11. O. Khatib, “Real-time obstacle avoidance for manipulators and mobile robots,” Int. J. Robot. Res. 5, 90–98 (1986). [CrossRef]  

12. L. B. Rosenberg, “Virtual Fixtures: Perceptual Tools for Telerobotic Manipulation,” Proc. of IEEE Annual Virtual Reality International Symposium, 76–82 (1993). [CrossRef]

Supplementary Material (1)

Media 1: AVI (3401 KB)     
