
3D scanning characteristics of an amorphous silicon position sensitive detector array system


Abstract

The 3D scanning electro-optical characteristics of a data acquisition prototype system integrating a 32-element linear array of 1D amorphous silicon position sensitive detectors (PSDs) were analyzed. The system was mounted on a platform for imaging 3D objects using the triangulation principle with a sheet-of-light laser. Newly obtained results reveal a minimum detectable gap, or simulated defect, of approximately 350 μm. Furthermore, a first study of the scanning angle was also performed, showing that a broad range of angles can be used in the process. The relationship between the scanning angle of the incident light onto the object and the image displacement distance on the sensor was determined for the first time for this system setup. Rendering of 3D object profiles was performed at a significantly higher number of frames than in the past and was possible for incident light angles from 15° to 85°.

©2012 Optical Society of America

1. Introduction

Machine vision systems are now a vital component of quality control processes in all areas of industry. Automobile and semiconductor manufacturing, pharmaceutical packaging and film container printing are representative examples among the many sectors involved. Tasks typically allocated to these systems include the detection of surface defects, serial number identification, counting parts on a conveyor belt, etc.

The initial concept of using machine vision for industrial inspection dates back to the 1930s, although it was not until the 1970s that the idea developed further into a system [1]. In the 1990s, the concept received special interest due to the advances made in vision system and image processing technology, which resulted in huge growth of the machine vision industry. Since the 2000s, considerable technological improvements have taken place and the market is still growing and expanding at a fast pace. However, the successful design and integration of these systems is still a bottleneck and remains a challenge in several areas, partly due to the lack of skilled labour [1].

Machine vision can be defined as the process of designing, developing and integrating automatic imaging systems to improve manufacturing processes in industry. It draws on automation, mechanical engineering, optics and computer engineering.

What manufacturers need is to be able to perform visual inspections using machine vision systems that offer high speed, reliability and 24-hour operation, in order to achieve proper measurement repeatability, provide high magnifications and offer a high level of flexibility and adaptability. The high demand for quality therefore makes human inspection impractical. The components of a machine vision system do not suffer from fatigue as humans do, nor do they make mistakes or take arbitrary decisions that could affect the procedure. These systems work around the clock without losing the rhythm required by the production process, offer high precision and are capable of delivering consistent quality in their verifications. On the other hand, the cameras of machine vision systems cannot be compared directly to the human eye. In machine vision systems it is necessary to analyse each individual pixel of the acquired image and draw conclusions from it, after having processed all the collected information. Therefore, even though human inspection has been regarded above as impractical for industrial quality control, to date no machine vision system is capable of matching the human visual system in aspects such as tolerance to changes in lighting and image degradation, interpretation and comprehension of the image, or flexibility with respect to the variability of the parts inspected [2].

A typical machine vision system is usually composed of several or all of the following components:

  • One or more image sensor structures. Usually one or more digital or analogue cameras are used.
  • Suitable optics for image focusing and projection. Lenses are normally used to focus the desired field of view onto the image sensor.
  • Suitable light sources (lasers, LEDs, fluorescent or halogen lamps, etc.).
  • An interface device (widely known as a “frame grabber”) to digitize and transfer images to the processor/PC.
  • Input/output hardware, usually a processor/PC or an embedded processor.
  • Software platforms for image processing and representation. These platforms are also currently used to detect and recognise relevant image features.
  • A synchronized system configuration involving sensors (magnetic, optical, etc.) to detect movement of parts, for example on a production line, which would trigger image acquisition and processing. Actuators may also be included to sort, route or reject defective parts if necessary.

Continuous wave modulation, time-of-flight estimation and structured light triangulation are among the existing machine vision 3D measurement techniques. Of these, correctly configured optical triangulation systems are typically able to acquire data faster and more precisely than other conventional methods, and they normally do not require any data post-processing. Plenty of research has been devoted to triangulation-based approaches and their performance characteristics have always been a subject of close analysis [3, 4]. The optical triangulation geometry is similar for most of them. Frequently, CCD (charge coupled device) cameras in combination with laser spots or stripes are used for the study of the triangulation system parameters or related distance and angle calculations, as well as for the detection of objects in 3D [5–7]. Other approaches use photodiodes or photo-detectors such as position sensitive detectors, as opposed to CCD cameras [8–10]. Within the structured light triangulation technique for 3D range imaging, this work is directly relevant to the sheet-of-light triangulation method. In this method a plane of light is projected onto the object using a laser sheet-of-light, and the intersection of this plane with the object creates a line which in turn is reflected and projected onto the image area of the camera. Some commercial sheet-of-light scanners are able to generate 10000 data points (not frames) per second, a figure that keeps growing as technology advances, making this the fastest 3D data collecting method when compared to other laser scanners [11]. Previous research work on the sheet-of-light method has been reported using discrete sensors [12–15] and analogue sensors such as a position sensitive detector array of up to 128 elements [16]. In line with the existing state of the art, our previous work recently presented a high speed sheet-of-light range imaging system [17] integrating a position sensitive detector array, as an alternative to traditional machine vision systems, whose vision sensor component is usually a CCD. If optimized, the overall system approach offers higher processing and scanning speed capabilities than the reviewed state of the art. In line with our previous work, the present work aims to explore the 3D scanning optical characteristics of an inspection system using as sensor an array of 32 amorphous silicon based position sensitive detector elements [18–20], integrated inside a self-constructed machine vision system. Moreover, relevant detection issues such as the 3D profile detection resolution and the influence of the angle on 3D scanning are studied. The 3D profile detection resolution is determined from simulated defects or gaps created on the real object.

The influence of the incident light angle on 3D object rendering is studied by scanning a white rubber and a white ramp at various angles from 15° to 85°. Furthermore, an in-depth analysis of the relationship between the scanning angle of the incident light onto the object and the image displacement distance on the sensor is carried out. Finally, 3D object profiles of a white rubber and a white bent plastic fork are rendered at a significantly higher number of frames than in our previous work [17].

2. Working principle of position sensitive detectors for machine vision applications

The lateral photoeffect consists of the appearance of a lateral photovoltage (ΔV) or photocurrent when a light beam impinges locally (as a spot or collimated line) on the surface of a pn or pin junction between two collecting electrodes. This effect was first observed by Schottky in 1930 [21]. At that time, Schottky investigated the position dependence of the output photocurrent of a Cu-Cu2O diode and tried to correlate the optical behaviour of this diode with its electrical characteristics. This work remained unnoticed until Wallmark [22] observed the effect again in 1957, on a Ge-In p+-n junction, calling the device a position sensitive detector (PSD). Thus, when a light spot falls on the PSD, an electric charge proportional to the light energy is generated at the incident position. This electric charge is driven through a thin collecting resistive layer (TCRL), and the photocurrent collected by the lateral electrodes placed on the surface of the PSD is inversely proportional to the distance between the incident position and the corresponding reference electrode. After Wallmark’s rediscovery, research and development work in the field of PSDs has increased, as well as their application in various areas of optical measurement such as surface inspection and control [17–20, 23–26].

Position sensitive detectors (PSDs) based on crystalline silicon technology have already been used within machine vision systems [16]. These devices offer a wide spectral response range and high reliability, and they are able to detect simultaneously the intensity and the position of the centre of gravity of a light spot. The main areas of application where PSDs are needed are those where precision is crucial, such as machine tool and remote optical alignment, medical instrumentation or robotic vision, and their operating principle is described elsewhere [27, 28]. In fact, any application which requires little signal processing power or speeds higher than standard video frame rates is an ideal candidate for PSDs. Most commercially available PSDs are made of crystalline silicon [29]. However, amorphous silicon is an alternative low cost material solution, which should be taken into account for various PSD applications.

Several researchers have already used amorphous silicon to fabricate 1D, 2D and 3D thin film position sensitive detectors [20, 30–35], performing material studies and presenting devices with active areas of less than 1 cm2. For this type of device, the sensor is based on a pin structure [36], where the i-layer is highly photoconductive, while the p and n layers are highly conductive [37, 38].

To determine the electric field E responsible for the drift of the collected current, we must combine the Poisson, continuity and current density equations, including or not including the recombination losses (Rl), taking into account that the carrier concentration n’ within the illuminated area is equal to p1 and n1, these being the excess electron-hole pairs generated by the photon flux fλ. Outside the irradiated area, n’ equals the dark carrier concentration, n0. The losses are given by Rl ≈ na/τa, where τa is the average time that the carriers generated in the i-layer take to reach the TCRL, given by τa ≈ εε0/(2πσα), where ε is the relative dielectric constant, ε0 is the vacuum permittivity and σα is an equivalent conductivity, dependent on the TCRL and the doped collecting layer [39].

$$ J_x = q\mu_a n E + \varepsilon\varepsilon_0 v_d E + C_l \frac{dE}{dt} $$

In Eq. (1), Cl is the capacitance per unit length of the device and Jx is the lateral current density, which depends on the excess ambipolar carriers generated within the illuminated region and to which two main lateral components are ascribed:

$$ J_x = J_{xd} + J_{xl} $$
where Jxd = σr Ex (σr is the conductivity of the TCRL) and Jxl = Jsx (Jsx is the saturation current density of the device in reverse biased mode, given by Jsx ≈σ0 Ex, where σ0 is the dark conductivity of the i-layer) are respectively the drift and the lateral leakage current density components.

In addition to Jx we must also consider the role of the transverse current density, Jy, on the device performance. Within the illuminated region, Jy is given by Jy = −Js − Jph, where Jph is the generated photocurrent density, dependent on the excess of generated electron-hole pairs (Gna) and on Rl through the relation (1/q)(dJy/dy) = −Gna + Rl. Outside the illuminated region, Jph ≈ 0, and so Jy ≈ −Js.

The correlation between Jx and Jy is ruled by the principle of charge conservation, leading to [40]:

$$ \frac{dJ_x}{dx} = \frac{J_y}{W} \;\;\Rightarrow\;\; J_x = \int_{-a}^{a} \frac{J_y}{W}\,dx $$
where W is the thickness of the i layer and a is the half-length of the irradiated line on top of the device, with a negligible width z. If the device is reverse biased, Jy is equal to the contribution of the photocurrent density generated by the irradiated light, Jph, and the reverse current density (Js): Jy = Js[exp(qV/kT) − 1] − Jph. As |V| >> kT/q, this leads to [35]:
$$ J_x = \frac{2a}{W}\,(J_s + J_{ph}) $$
where Ex is determined assuming that the carriers collected by the TCRL flow through it towards the collecting electrodes, leading, at each point, inside or outside the illuminated region, to an ohmic dependence of Jx on Ex.

Substituting Eq. (1) into Eq. (4) and taking into account that ΔV is conditioned by σ0, σr, and σα, the dc (0) and ac (1) components of Ex are obtained through the equations:

$$ \frac{dE_0(x,t)}{dx} + \frac{4a^2\rho_s\sigma_0}{W W_r}\,E_0(x,t) + \frac{C_l}{\sigma_r}\frac{dE_0(x,t)}{dt} = \frac{J_{ph0}(x,t)}{\sigma_r}\,\frac{2a}{W} $$
$$ \frac{dE_1(x,t)}{dx} + \alpha E_1(x,t) + \frac{C_l}{\sigma_r}\frac{dE_1(x,t)}{dt} = \frac{J_{ph1}(x,t)}{\sigma_r}\,\frac{2a}{W} $$
where Wr is the thickness of the TCRL; ρs is the sheet resistance of the TCRL (ρs = 1/(σrWr)); and Jph has a dc (0) and an ac (1) component, Jph = Jph0 + Jph1, respectively outside and within the irradiated area. In the above relations α is the lateral fall-off parameter, given by α = 4a2ρsσ0/(WWr) when Rl is neglected. These conditions lead to the build-up of an electric field E along the junction plane that helps the ambipolar transport away from the irradiated region.

To solve Eq. (5) we consider that the origin of the co-ordinates is at the centre of the device, that is, -L/2 < x < L/2; E vanishes when |x| → ∞ and the generated carriers depend on their distribution within the irradiated region. Under these conditions the general solution of Eq. (5) for a semi-infinite slab is Ex = ± E0{1 - exp[α(a ± x)]}, with E0 = 2a(Jph + Js)ρs. The static potential spatial distribution function ϕ(x) is obtained taking into account that E = -dϕ(x)/dx:

$$ \phi(x) = \pm\frac{E_0}{\alpha}\,\frac{1 - \exp[\alpha(a \pm x)]}{\tanh(\alpha L/2)} + c $$
or after mathematical manipulations where ϕ(x) is assumed finite at x = ± L/2 [35, 41]:
$$ \phi(x) = \pm\frac{E_0}{\alpha}\,\frac{\sinh(\alpha x)}{\sinh(\alpha L/2) + \sinh(\alpha a)}\,x + c' $$
c and c’ are constants related to the field distortions at the device edges [35, 42]. Here, ΔV = ϕ(x) - ϕ(L - x). The spatial range over which a linear correlation exists between ΔV and x depends on α. Thus, to enhance the linearity, the spatial resolution and the signal-to-noise ratio, α should be << 1 cm−1.
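As a purely illustrative example of how the fall-off parameter constrains the design, the short sketch below evaluates α = 4a²ρsσ0/(WWr) for hypothetical device parameters (none of these values are taken from the present device); it merely shows that typical TCRL sheet resistances and a-Si:H dark conductivities can keep α well below 1 cm−1.

```python
# Illustrative evaluation of the lateral fall-off parameter alpha = 4*a^2*rho_s*sigma_0/(W*W_r).
# All parameter values below are hypothetical, chosen only to exercise the formula.

a       = 0.05    # half-length of the irradiated line, cm (hypothetical)
rho_s   = 100.0   # TCRL sheet resistance, ohm/square (hypothetical)
sigma_0 = 1e-10   # dark conductivity of the i-layer, S/cm (hypothetical)
W       = 1e-4    # i-layer thickness, cm (hypothetical, ~1 um)
W_r     = 1e-5    # TCRL thickness, cm (hypothetical, ~100 nm)

alpha = 4 * a**2 * rho_s * sigma_0 / (W * W_r)
print(f"alpha = {alpha:.2f} cm^-1")   # -> 0.10 cm^-1, i.e. << 1 cm^-1, as required for good linearity
```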

Once ΔV is defined, the device can be replaced by an equivalent electric circuit based on a current source If and an output resistance R0:

$$ I_f \propto J_{ph}\,(2a\Delta)\,\frac{\sinh(\alpha x)}{\sinh(\alpha L/2) + \sinh(\alpha a)} $$
$$ R_0 \propto \frac{\rho_s L\,\tanh(\alpha x)}{2\Delta\,\alpha L/2} $$
where R0 depends on ρs and on the spatial position of the light beam impinging on the surface of the PSD. Under these conditions, the lateral photocurrent collected is given by:
$$ I_L(x) \propto \left[J_s + J_{ph}(x,t)\right]\frac{\Delta a\, a}{W}\,\frac{\sinh(\alpha x)}{\sinh(\alpha L/2)} $$
where Δa is the illuminated area.

Thereby, it is possible to obtain the photocurrents IL1 and IL2 collected by the two electrodes as a function of the distance between the electrodes (L) and of the total photocurrent (I0) that flows in both directions from a spot localized at the position P on the surface of the PSD, under the linear approximation [35]:

$$ I_{L1} = I_0\,\frac{L - x}{L}\,;\qquad I_{L2} = I_0\,\frac{x}{L} $$
$$ P = \frac{I_{L2} - I_{L1}}{I_{L2} + I_{L1}} = \frac{2x - L}{L} $$

Extending this concept to an integrated array of linear PSDs, we have [35]:

$$ P_n(x_n) = \frac{L\,(I_{0,n} - I_{1,n})}{2\,(I_{0,n} + I_{1,n})} $$
where Pn(xn) represents the set of local points on the linear PSD array when illuminated by the light beam (projection of a light line over the PSD array).
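A minimal sketch of how this readout relation can be evaluated in software is given below, assuming (as a hypothetical helper, not code from the present system) that the two electrode photocurrents of each array element are available; the 7 mm electrode separation used here matches the usable sensor range quoted later (3.5 mm to −3.5 mm).

```python
import numpy as np

# Hypothetical helper implementing the array readout relation above:
# P_n = L * (I_0,n - I_1,n) / (2 * (I_0,n + I_1,n)), giving the light-line position
# on each 1D PSD element, referenced to the element centre.

L_ELECTRODES_MM = 7.0   # electrode separation; the usable range quoted later is 3.5 mm to -3.5 mm

def psd_positions(i0, i1, length=L_ELECTRODES_MM):
    """Return the centre-referenced impact position for each channel (same units as `length`)."""
    i0 = np.asarray(i0, dtype=float)
    i1 = np.asarray(i1, dtype=float)
    return length * (i0 - i1) / (2.0 * (i0 + i1))

# Example with three channels (currents in arbitrary units):
print(psd_positions([1.2, 0.8, 1.0], [0.8, 1.2, 1.0]))   # -> [ 0.7 -0.7  0. ] mm
```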

Issues such as linearity, spatial resolution and response times were also analysed, and these characteristics, together with others discussed in the literature [43], make pin amorphous silicon structures suitable for optical inspection and image processing applications, especially where continuous detection is required. The most relevant application is three dimensional PSD shape measurement [17]. Figure 1 shows a photograph of an amorphous silicon 32 element PSD linear array [17, 20] and Fig. 2 illustrates the measured spectral response of a single amorphous silicon PSD element from such an array.

Fig. 1 Photograph of an amorphous silicon 32 element PSD linear array developed at CEMOP/UNINOVA

Fig. 2 Measured spectral response of a single amorphous silicon PSD element from the array shown in Fig. 1. Peak is located around 650 nm and response range is 500-750 nm.

The spectral response (in A/W) of the pin device that constitutes each line of the PSD array is a measure of its sensitivity to light, defined as the ratio of the current generated by the device to the optical power falling on it. The spectral response peak is located at around 650 nm and the highest recorded values are obtained between 600 and 700 nm, corresponding to the maximum quantum efficiency of the i layer [44]. The overall response of the device extends from approximately 500 to 750 nm.
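For reference, the spectral response in A/W can be converted into an external quantum efficiency through the standard photodiode relation R = ηqλ/(hc); the sketch below applies it at the 650 nm peak using a purely illustrative responsivity value (the measured curve itself is shown in Fig. 2).

```python
# Standard conversion between responsivity (A/W) and external quantum efficiency:
# eta = R * h * c / (q * lambda). The responsivity value used here is illustrative only.

H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s
Q = 1.602e-19   # elementary charge, C

def external_qe(responsivity_a_per_w, wavelength_nm):
    return responsivity_a_per_w * H * C / (Q * wavelength_nm * 1e-9)

print(f"{external_qe(0.30, 650):.2f}")   # -> 0.57, i.e. ~57% external QE for an assumed 0.30 A/W at 650 nm
```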

Overall, the data show that the spectral response decreases as the wavelength increases beyond 650 nm, due to enhanced optical losses in the red region of the spectrum. On the other hand, as the wavelength decreases towards the blue region, the spectral response decreases due to reflection losses and weak absorption in the transparent conductive oxide layer used as TCRL [45, 46] and in the p layer [47], as well as possible back diffusion of carriers generated near the top of the i layer, which are not collected [48].

3. Experimental

The amorphous silicon 32 element PSD linear array was produced as described elsewhere [20] and mounted on the experimental setup shown in Fig. 3(a) and 3(b). The red laser line projected on the active area of the sensor after lens reduction was approximately 1 mm wide and about 18 mm long, ensuring that all sensor channels were covered by the laser projection. The projection angle was fixed at about 45° for the experiments of sections 3.1 and 3.4 and was varied from 15° to 85° for the experiments of sections 3.2 and 3.3.

Fig. 3 (a) Sketch of the experimental setup for 3D profile scanning. (b) Photograph of the experimental setup.

3.1 3D profile detection resolution

The spatial resolution is the minimum distance that can be clearly measured when the light spot/line is moved from one position to another. For the array under analysis, the resolution along the x direction depends on the performance of each 1D sensor, and the resolution along the z axis is limited by the clearance/tolerance of the patterned lines that constitute the array. On the other hand, the resolution along the y direction (perpendicular to the zx plane) depends on the incident angle (see Fig. 3(a) and 3(b)).

The following experimental procedure was performed to infer the smallest gap (resolution) that can be detected by the sensor system while scanning a predefined pattern glued on top of a white rubber (the object). This predefined black and white stripe pattern emulates the presence of small gaps or defects on the surface of an object, thereby mimicking surface defect detection procedures in industrial processes.

The projected frame image of the white rubber covers only a few sensor channels across its width. The velocity of the scanning system was also reduced to its minimum in order to analyze its possible influence on the obtained detection resolution. In turn, the rate of acquisition was varied in order to see its effect on the pattern detection.

A laser line was projected on the white rubber (with the black and white pattern) and the scanning process was carried out using the minimum acquisition time of 8 ms. The translation table moved from right to left so that the object (white rubber with black and white pattern) was scanned by the system. The object with the pattern and the pattern itself are shown in Fig. 4(a) and 4(b).

Fig. 4 (a) Horizontal photograph of the white rubber (object) with black and white pattern glued on top. (b) Black and white pattern (50 µm – 1000 µm).

The rubber (with the black and white pattern) of Fig. 4(a) and 4(b) was placed on top of a translation table, so the object lay flat on the translation table as shown in Fig. 3(a) and 3(b). This triangulation configuration is the standard way to render objects in 3D using a sheet-of-light laser [11]. The object reflection was projected onto an area of the sensor covered by just a few detectors. The dimensions of the white rubber object are: length = 40.86 mm; width = 18.49 mm; height = 11.32 mm.

When the translation table moved and the laser hit the rectangular rubber object, the laser line was projected (after a lens magnification reduction) onto the sensor. The height of the object creates a difference in the reflection profile frame, as shown in Fig. 5(b), 5(d) and 5(f). There are two ways to render in 3D, either by using a white support (white background) or a black support (black background).

Fig. 5 Sketch of how the reflection of the height of the object appears on the sensor active area. This is one frame of the object. (a) Laser line does not strike the object. (b) Laser line strikes the object. (c) Laser line does not strike the object. (d) Laser line strikes the object. (e) Laser line strikes a black stripe on the object (same as no object). (f) Laser line strikes the object.

In these experiments, the rubber was placed directly on top of the translation table (black background) and not on top of the white colour support usually lying on the translation table. This means that initially (object not present) the laser line was not projected on the sensor, because the black background does not reflect the light beam. When the object appeared, its reflection was projected onto the sensor because the white object reflects the laser line; in this situation, the light intensity recorded was 2.76 µW/cm2 for all these experiments.

The distance travelled from right to left by the translation table was about 69 mm and the incident angle shown in Fig. 3(a) and 3(b) was 45° and remained constant for all experiments. The rate of acquisition was set to 8 ms, for a translation table (object scanning) speed of 1.97 mm per second, in order to observe the relevant influence on the response and to determine the minimum or smallest detectable gap. Other results were taken at the following acquisition times with the same translation table speed of 1.97 mm per second: 1000 ms, 800 ms, 600 ms, 400 ms, 200 ms, 80 ms, 60 ms and 40 ms. The integration time of the system was set to 0.085 ms in all cases and each measurement acquired was the average of 128 samples.
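For reference, at this scanning speed the table advances 1.97 mm/s × 0.008 s ≈ 16 µm between consecutive 8 ms acquisitions, whereas at 200 ms the spacing between acquisitions grows to 1.97 mm/s × 0.2 s ≈ 394 µm, which is already larger than the finest stripes of the pattern in Fig. 4(b).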

3.2 Study of the influence of the incident angle on the obtained detection resolution - Angle change while 3D rendering white rubber object

The following experiments show the results of varying the incident angle while scanning the white rubber object from section 3.1. In this case, only the white rubber was scanned, without the black and white pattern. Again, the width of the rubber was covered by only a few channels. The experimental setup was the same as in section 3.1 and the incident angle shown in Fig. 3(a) and 3(b) was varied in order to observe its influence on the response as the object (white rubber) was scanned. Results were taken at the following angles: 15°, 35°, 55°, 65° and 75°, and here also each measurement recorded was the average of 128 acquired data samples.

3.3 Study of the influence of the incident angle on the obtained detection resolution - Angle change while 3D rendering white ramp object

A study of the variation of the incident angle shown in Fig. 3(a) and 3(b) was also performed using the same experimental procedure as described in section 3.2, but now a white ramp object was scanned instead of the white rubber object of section 3.2. These trials show the results of varying the incident angle while scanning a white ramp. The incident angle illustrated in Fig. 3(a) and 3(b) was varied in order to observe its influence on the response and results were taken at the following angles: 5°, 15°, 25°, 35°, 45°, 50°, 55°, 65°, 75° and 85°. Here also, each measurement acquired was the average of 128 data samples.

3.4 High resolution 3D object scanning

The experimental procedure was also the same as in section 3.1. The difference in this case was that the white rubber object from section 3.1, without the black and white pattern shown in Fig. 4(b), and a white plastic fork were 3D scanned at high resolution: approximately 2273 frames (72736 points) for the rubber and approximately 2130 frames (68160 points) for the plastic fork. Here also, each measurement acquired was the average of 128 data samples.

4. Results and discussions

4.1 3D profile detection resolution

In Fig. 4(b) there are 19 noticeable white colour gaps decreasing in width from 1000 µm to 100 µm on the black and white pattern shown and 19 noticeable black colour stripes decreasing in width from 1000 µm to 100 µm.

Figure 6(a) illustrates the 3D scanning response of sensor channel 11 when rendering the white rubber object glued with the black and white pattern (see Fig. 4(a)). The responses of channels 13 and 15 (not shown for clarity) are very similar to the response of channel 11.

Fig. 6 (a) Sketch of the 3D scanning response of sensor channel 11 when rendering the white rubber object glued with the black and white pattern (see Fig. 4(a)). (b) Sketch of the magnification of the smallest peak detected from the 3D scanning response of sensor channel 15 when rendering the white rubber object glued with the black and white pattern (see Fig. 4(a)). (c) Sketch of the 3D scanning response of sensor channel 17 when rendering the side region indicated, corresponding to the white rubber itself without the black and white pattern from Fig. 4(b).

The peaks detected and shown in Fig. 6(a) correspond to the white colour gaps in Fig. 4(b). Similarly, the gaps (or absence of peaks) detected in Fig. 6(a) correspond to the black colour stripes in Fig. 4(b). Therefore, in accordance with Fig. 4(b), only 14 white colour gaps and 14 black colour stripes are detected in Fig. 6(a). The last white colour gap or black colour stripe detected is 350 µm wide. Thereby, it can be stated that the minimum detail detected by the system under these conditions is 350 µm. Figure 6(b) shows a magnification of the response of channel 15 for the smallest peak detected. Here the detected peak is analysed more closely and the width of the peak can be measured to check whether it approximately matches the smallest detection value of about 350 µm. As shown in Fig. 6(b), the peak extends from 12.7 to 13.1 mm and its estimated width is therefore considered to be 375 µm ± 25 µm. These data are consistent with expectations, with deviations assigned to factors involved in the scanning process such as possible translation table vibrations or light optics alignment issues. Figure 6(c) presents the 3D scanning response of sensor channel 17 when rendering the side region indicated, corresponding to the white rubber itself without the black and white pattern of Fig. 4(b). In these experiments the laser line and the frame of the object are projected along the longitudinal length of each detector. Therefore each individual detector sees a change in height, which comes from a change in the position on each detector. The real measured height of the white rubber object without the black and white pattern glued on top is 11.32 mm. In this particular case (see Fig. 6(c)), this is equivalent to a height of 3 mm (from −3.5 mm to −0.5 mm on the Y axis) as detected by channel 17. Moreover, for full device reliability the sensors’ response to the optical stimulus must be the same, within an allowable margin of error of ± 1%.

Another interesting observation is that the length of all channel responses analysed corresponds to the measured length of the rubber object, approximately 40.86 mm. This length was measured using an electronic ruler, and all responses detect over approximately 40 mm, so the correspondence is quite good.

As already mentioned, results were taken at the following acquisition times: 1000 ms, 800 ms, 600 ms, 400 ms, 200 ms, 80 ms, 60 ms and 40 ms.

Table 1 provides an approximate guideline for the theoretical minimum possible detectable gaps at each of the acquisition times experimented.


Table 1. Resolution of Detection at a Speed of 1.97 mms−1

The data show that the peak or gap detection limit is not lower than 350 μm due to experimental setup limitations, such as the image frame optics reduction factor. Regarding detector 11, the laser line is reflected and the detection is performed along the analogue length of the detector. The image frame is reduced in size after the lens, limiting the spatial detection limit to 350 µm. These restrictions mean that below the 350 μm detection limit, i.e. below the 350 μm black line (corresponding to a 350 μm white gap) shown in Fig. 4(b), the white gaps between black lines are so small that the relevant sensor channel does not notice them and thereby detects only black colour, or absence of light.
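As a rough check, with the ≈3.8 image reduction factor quoted in section 4.4 for this 45° geometry, a 350 µm gap on the object maps to only about 350/3.8 ≈ 92 µm on the sensor surface, which helps explain why finer features of the pattern are no longer resolved by an individual channel.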

Results show that the pattern starts to be detected effectively once the acquisition time is decreased to 80 ms. Above 80 ms, the results are not regarded as completely reliable. When using an acquisition time of 80 ms or lower, 14 black colour peaks or 14 white colour gaps are detected, corresponding to a minimum detail detection of 350 μm. This matches the theoretical guideline provided in Table 1.
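A minimal sketch of this guideline is given below; it assumes (our assumption, since the exact formula behind Table 1 is not reproduced here) that the theoretical minimum resolvable feature along the scan direction is simply the table travel during one acquisition period at the 1.97 mm s−1 speed used.

```python
# Sketch of the acquisition-time guideline, assuming the minimum resolvable feature along
# the scan equals the translation-table travel per acquisition period (our assumption,
# not necessarily the exact formula used to build Table 1).

SPEED_MM_PER_S = 1.97
PATTERN_LIMIT_UM = 350.0   # smallest feature actually resolved in these experiments

for t_ms in (1000, 800, 600, 400, 200, 80, 60, 40, 8):
    travel_um = SPEED_MM_PER_S * (t_ms / 1000.0) * 1000.0   # table travel per frame, in micrometres
    ok = travel_um <= PATTERN_LIMIT_UM
    print(f"{t_ms:5d} ms -> {travel_um:7.1f} um per frame, below 350 um: {ok}")

# Only 80 ms and shorter give a per-frame travel under 350 um, consistent with the pattern
# being detected reliably only at acquisition times of 80 ms or less.
```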

Reducing the speed of the translation table by half, from 1.97 mm s−1 to the minimum translation table speed of 0.985 mm s−1, did not result in the detection of details smaller than 350 µm.

4.2 Study of the influence of the incident angle on the obtained detection resolution - Angle change while 3D rendering white rubber object

Figure 7 compiles all the angle responses analyzed by channel 4 of the sensor for the white rubber object scan. This allows a comparison of all angles studied for that particular sensor channel. As can be noticed, the higher the angle, for example 75° (closer to 90°), the larger the difference between the top laser line and the bottom laser lines seen in Fig. 5(b), 5(d) and 5(f). This is why the height of the 75° angle detection response in Fig. 7 is taller than the other angle responses shown.

Fig. 7 Sketch of the 3D scanning response of sensor channel 4 at several angles when rendering the white rubber itself without the black and white pattern from Fig. 4(b).

Consequently, the lower the angle, such as 15° (closer to 0°), the smaller the difference between the top laser line and the bottom laser lines seen in Fig. 5(b), 5(d) and 5(f). This is why the height of the 15° angle detection response in Fig. 7 is shorter than the other angle responses presented.

Another interesting observation in Fig. 7 is that the length of all angle responses analysed corresponds closely to the measured length of the rubber object, 40.86 mm. This length was measured using an electronic ruler and all responses detect over approximately 40 mm, so here the correspondence is also quite good. In relation to the measured width of the object, 18.49 mm, a rough estimate can be obtained by counting the number of channels detecting and measuring the specific area covered by those detectors on the sensor. This area will be the closest match to the width of the object. According to the 65° angle response in Fig. 7, the real measured height of the rubber, 11.32 mm, is equivalent in this case to a height of approximately 1.5 mm (from −3.5 mm to −2 mm on the Y axis) measured on the active area of the sensor. However, this value differs for each angle evaluated, as shown in section 4.4.

4.3 Study of the influence of the incident angle on the obtained detection resolution - Angle change while 3D rendering white ramp object

Figure 8 compiles all the angle responses analyzed by channel 17 of the sensor for the white ramp object scan. As in section 4.2, this also allows a comparison of all angles studied for that particular sensor channel. Each of the responses appears to match its associated scanning angle correctly.

Fig. 8 Sketch of the 3D scanning response of sensor channel 17 at several angles when rendering the white ramp.

In this case, the distance travelled during the scanning process by the translation table is about 37 mm. With a default angle of about 50 °, this corresponds to a measured distance from one side to the other on the sensor of about 7 mm. With a different angle the scenario changes accordingly as shown in Fig. 8.

When the angle is 85°, the laser line sweeps the ramp distance in only 7.5 mm (from 2.5 mm to 10 mm on the X-axis of Fig. 8), thereby travelling a longer distance on the active area of the sensor than the standard 7 mm range (3.5 mm to −3.5 mm). As the angle approaches 90°, the laser line takes less than 7.5 mm (according to the X axis in Fig. 8) to sweep the ramp distance and the active area of the sensor (from 3.5 mm to −3.5 mm).

Similarly, in accordance with the previously mentioned relationship, for small angles such as 5° or 15° the laser line moves along only a relatively short distance on the active area of the sensor, and in the case of a 5° angle it does not sweep the whole of the ramp distance (see Fig. 8). The 15° angle response spreads from 2 mm to about 30 mm (according to the X axis in Fig. 8), so it takes 28 mm for the laser line to sweep the ramp distance while travelling from approximately 1.75 mm to −2.5 mm on the sensor active area.

A 15° angle is probably suitable for the analysis of fine details when scanning a ramp or a range of heights, although in our 15° case only a range from 1.75 mm to −2.5 mm on the sensor was used, as opposed to the full available sensor range (3.5 mm to −3.5 mm).

Section 4.4 explains in detail the relationship between the projected laser line, the viewing or scanning angle, the object height and the image displacement distance on the sensor. This relationship confirms the observations made in sections 4.2 and 4.3.

4.4 Geometry analysis of the triangulation platform

As mentioned elsewhere [11], within this active stereo approach and structured light technique, the height or depth of the object is computed using triangulation since the laser line source, the sensor and the object are arranged together forming the geometry of a triangle. The size of the increments within the same height range (for all angles), is proportional to the distance between the laser line source and the sensor.

A geometrical analysis of a very similar optical triangulation platform has already been presented elsewhere [10]. The main difference in terms of geometry between the latter and the triangulation platform presented here is that, here, the laser light source is inclined at an angle rather than perpendicular to the plane of the object, and the sensor is perpendicular to the plane of the object rather than inclined at an angle. A schematic of the triangulation scheme is presented in Fig. 9, in which all distances and angles of interest are shown, as well as the relationship between w, the image displacement distance associated with the object height difference projected (after lens reduction) on the sensor active area, h, the real height of the object, and z, the diagonal distance between the two object height points shown. As in [10], all appropriate derivations have been worked out in order to obtain the required distance and angle relationships. The relevant explanations, formulae and calculations are presented below:

Fig. 9 Schematic of the geometry of the triangulation platform scheme.

In order to find the distance from the object to the lens, p, and the distance from the lens to the sensor, q, it is necessary to use the well-known lens equation presented below:

$$ \frac{1}{f} = \frac{1}{p} + \frac{1}{q} $$
where f is the focal length of the lens used. Similarly, in order to work out an expression for p’, the distance from the lowest new object height point related to the associated height difference distance w (see Fig. 9), and q’, the new distance from the lens to the sensor in relation to the associated height difference distance w (see Fig. 9), the same lens equation can be used by simply replacing p and q by p’ and q’. By rearranging Eq. (14), an expression for q in terms of p is obtained (or p in terms of q), as shown next:
$$ q = \frac{pf}{p - f} \qquad\text{or}\qquad q' = \frac{p'f}{p' - f} $$
Similarly, an expression for q’ in terms of p’ can be obtained (or p’ in terms of q’).
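As a numerical check with the values used later for Table 2 (f = 50 mm, p = 240 mm), Eq. (15) gives q = 240 × 50 / (240 − 50) ≈ 63.16 mm, which is the value of q listed in Table 2.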

In order to work out an expression relating the distance w to the angle θ, the law of cosines can be applied to the triangle formed by w, q’ and q:

$$ w^2 = q^2 + q'^2 - 2qq'\cos\theta $$
By rearranging Eq. (16), an expression for the cosine of the angle θ is obtained, as shown below:
$$ \cos\theta = \frac{q^2 + q'^2 - w^2}{2qq'} $$
Similarly, in order to work out an expression relating the distance z to the angle θ, the law of cosines can again be applied to the triangle formed by z, p’ and p:
$$ z^2 = p^2 + p'^2 - 2pp'\cos\theta $$
By rearranging Eq. (18), an expression for the cosine of the angle θ is also obtained here, as shown below:
$$ \cos\theta = \frac{p^2 + p'^2 - z^2}{2pp'} $$
Combining Eq. (17) and Eq. (19), an expression relating z and w is obtained:
$$ w = \left[\,q^2 + q'^2 - \left(p^2 + p'^2 - z^2\right)\frac{qq'}{pp'}\,\right]^{1/2} $$
This relationship is in effect also linked to the real object height h, since:
$$ \cos\beta = \frac{h}{z} $$
where β is the viewing angle or the angle between the laser light source and the axis perpendicular to the sensor.

Finally, it is possible to work out the distance p’ in relation to the viewing angle β by using basic trigonometry. The following expressions are obtained for the two triangles formed by r, p’, p + h and by h, r, z:

$$ p'^2 = (p + z\cos\beta)^2 + r^2 $$
$$ z^2 = (z\cos\beta)^2 + r^2 $$
By combining Eq. (22) and Eq. (23), an expression for p’ in terms of z and β is obtained:
$$ p' = \left(p^2 + 2pz\cos\beta + z^2\right)^{1/2} $$
Since p’ is now known, q’ can be obtained by substituting Eq. (24) back into Eq. (15).

All these expressions allow the calculation of all required parameters for the analysis of the triangulation system platform presented.

It is now possible to work out the image displacement distance on the sensor, w, from the real height of the object, h, at a specific viewing angle β. It is important to note that the focal length f is a fixed value of 50 mm for the particular lens used in our experiments. However, the modular focus present in our optical system allows the magnification to be changed, which in turn alters the distances p and q. The expression below defines the magnification in terms of p and q:

$$ m = -\frac{q}{p} $$

For the case of the 45° result obtained in Fig. 6(c), the magnification used was approximately −0.26, obtained from Table 2 below, where q = 63.16 mm and p = 240 mm. This corresponds to a frame or image reduction factor of about 3.8 for this particular case.


Table 2. Theoretical Calculated Values of w, for a Fixed h and Varying Angle β, where f = 50 mm; q = 63.16 mm; p = 240 mm and h = 11.32 mm

Table 2 shows the theoretical calculated values for image displacement distance on the sensor, w, for a white rubber object height h of 11.32 mm, when the viewing or scanning angle varies from 5 ° to 85 ° at intervals of 5 °. Associated distances z, p, q, p’ and q’ as well as the fixed focal length f are also presented or calculated.
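The short sketch below chains Eqs. (14)–(24) for the Table 2 geometry (f = 50 mm, p = 240 mm, h = 11.32 mm); it is a minimal reimplementation for illustration, not the authors’ own calculation code.

```python
import math

# Minimal sketch of the Table 2 calculation: image displacement w on the sensor for an
# object of height h viewed at angle beta, chaining Eqs. (14)-(24) of the Fig. 9 geometry.

def displacement_on_sensor(h_mm, beta_deg, p_mm=240.0, f_mm=50.0):
    beta = math.radians(beta_deg)
    q = p_mm * f_mm / (p_mm - f_mm)                                   # Eq. (15)
    z = h_mm / math.cos(beta)                                         # Eq. (21): cos(beta) = h / z
    p2 = math.sqrt(p_mm**2 + 2 * p_mm * z * math.cos(beta) + z**2)    # Eq. (24), giving p'
    q2 = p2 * f_mm / (p2 - f_mm)                                      # Eq. (15) applied to p', giving q'
    # Eq. (20): combine the two law-of-cosines relations to eliminate the angle theta
    w_sq = q**2 + q2**2 - (p_mm**2 + p2**2 - z**2) * (q * q2) / (p_mm * p2)
    return math.sqrt(w_sq)

print(round(displacement_on_sensor(11.32, 45.0), 2))   # ~2.9 mm, in line with the ~2.92 mm listed for 45 deg
```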

As can be seen in Table 2 for the 45° case, an image displacement distance (on the sensor) w of 2.92 mm is obtained, and this approximately matches the height of the scanned 3D white rubber object profile shown in Fig. 6(c), which is about 3 mm (from −3.5 mm to −0.5 mm on the Y axis). Similarly, concerning the results obtained in Fig. 7, the magnification used was approximately −0.08, obtained from Table 3 below, where q = 54 mm and p = 675 mm.


Table 3. Theoretical Calculated Values of w, for a Fixed h and Varying Angle β, where f = 50mm; p = 675mm; q = 54mm and h = 11.32mm

This corresponds to a frame or image reduction factor of about 12.5. As in Table 2, Table 3 presents all the specific values for this particular case plus the error between the theoretical and experimental values of w for the angles of interest. As can be seen in Table 3, the image displacement distance (on the sensor) w obtained approximately matches the height of the scanned 3D white rubber object profile for all the angles shown in Fig. 7. For example, for the 55° case Fig. 7 shows a height of about 1.25 mm (from −3.5 mm to −2.25 mm on the Y axis) and its theoretical calculated value (see Table 3) is 1.27 mm, so a good match exists. Figure 10 illustrates the match between the theoretical and experimental values of w for the angles of interest.

Fig. 10 Sketch of the theoretical versus the experimental values of w for the angles of interest.

Overall, the set of results presented indicates that an angle of 85° or higher becomes impractical, since such a large angle results in a height difference which is too large and extends beyond the available height range. Similarly, an angle of 5° or lower is also impractical, since such a small angle causes no noticeable height difference between the object being present or not. Therefore, a 5° angle or lower cannot be used in this system setup, since no height difference detection will occur. These observations agree with the work performed elsewhere [11].

Tables 2 and 3 and Fig. 9 illustrate the above mentioned relationships between small and large angles and the height difference observed on the sensor. In order to be on the safe side it is better to choose an angle midway between 75° and 15°. This angle of choice should generally be 45° or close to it, since this ensures a proper representation of the height of the object profile in 3D. This last statement also matches extremely well the remarks made elsewhere [11].

4.5. High resolution 3D object scanning

The high resolution 3D profile of the white colour rubber object is shown in Fig. 11(a).

Fig. 11 (a) High resolution 3D rendered profile of the white colour rubber. (b) High resolution 3D rendered profile of the white colour fork.

The X-axis represents the 32 sensor channels (0-31), the Y-axis represents the number of frames and the Z-axis represents the detected height (in mm) of the 3D profile. In this case the object was rendered using approximately 2273 frames, or 72736 points, so the size of the rendered 3D image is 72736 points. The gap seen on the rubber 3D object representation in Fig. 11(a) appears because one or two of the sensor channels covering that specific scanning area are not responding properly and are therefore not detecting the height difference as they should. The 3D rubber object representation looks very similar to the real rubber object shown in Fig. 4(a) (without the black and white pattern). The length of the profile also matches that of the real object correctly, as discussed in section 4.1. The high resolution 3D rendering result for the white colour plastic fork is shown in Fig. 11(b). Here also, the X-axis represents the 32 sensor channels (0-31), the Y-axis represents the number of frames and the Z-axis represents the detected height (in mm) of the 3D profile. As in the rubber object representation, the gap seen on the fork 3D object representation in Fig. 11(b) is due to one or two sensor channels covering that specific scanning area not responding properly. The resolution is determined by the number of frames scanned. In this case the object was rendered using approximately 2130 frames, or 68160 points, so the size of the rendered 3D image is 68160 points. The results of the angle study in sections 4.2 and 4.3 have a direct impact on the quality of the 3D rendered object profile. A highly detailed (high number of frames) 3D object profile can be obtained when using small angles, although the 3D scan can then take a long time. Similarly, a less detailed (low number of frames) 3D object profile is obtained when using large angles, with a correspondingly short scanning time.
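As a simple illustration of the data volume (a hypothetical layout sketch, not the actual acquisition software), the rendered profile is a height map with one reading per channel per frame, so the point count is just frames × channels:

```python
import numpy as np

# Hypothetical layout of the rendered 3D profile: one detected height per channel per frame.
N_CHANNELS = 32

for label, n_frames in (("rubber", 2273), ("fork", 2130)):
    heights_mm = np.zeros((n_frames, N_CHANNELS))   # filled frame by frame during the scan
    print(label, heights_mm.size)                   # -> rubber 72736, fork 68160 points
```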

5. Conclusions

An overall conclusion is that the peak or gap detection limit is not lower than 350 μm because experimental setup limitations, such as the image frame optics reduction factor or a sensor size smaller than the real object, do not allow it; the sensor therefore cannot detect details smaller than approximately 350 μm. Thus, the minimum detail or gap that can be detected by the sensor is 350 μm. Regarding the study of the angle while 3D scanning a rubber, it is important to note that at large angles the height difference is too large, so the height of the object extends beyond the available height range of the sensor and is not properly detected. At small angles the height difference is too small and the height of the object is also not distinguished correctly. We have demonstrated that it is possible to render an object in 3D within a scanning angle range of 15° to 85° and to identify its real height as a function of the scanning angle and the image displacement distance on the sensor. Regarding the study of the angle while 3D scanning a ramp, it can be concluded that the laser line requires less travelling distance (<7.5 mm on the translation table) to scan the ramp as the angle increases towards 90°, and the opposite relationship also holds. Overall, and based on these results, it can be concluded that small angles allow for a highly detailed but slow 3D analysis of the object and, inversely, large angles allow for a less detailed but rapid 3D analysis of the object. The results show how simple and not so simple objects, such as a rubber and a plastic fork, can be rendered in 3D properly and accurately at high resolution using this sensor and system platform. The accuracy of the method depends mainly on the sensor resolution as well as on the precision of the light source and the optical geometric dimensions used. The rate of acquisition and the translation table speed determine the gap between frames of the object. The higher the number of frames, the higher the expected profile resolution.

Acknowledgments

This work was supported by ‘Fundação para a Ciência e a Tecnologia’ (Foundation for Science and Technology, FCT-MCTES) through a contract with CENIMAT/I3N and by the projects PEst-C/CTM/LA0025/2011 (Projecto Estratégico - LA 25 - 2011-2012 - Strategic Project - LA 25 - 2011-2012), PTDC/CTM/099719/2008, ADI - 2009/003380, ADI - 2009/005610, and by the European Commission FP6, Marie Curie Research Training Network “ASSEMIC”, project MRTN-CT-2003-504826. The authors are very grateful to D. Guerreiro for his valuable help in mounting the system. J. Contreras would also like to thank FCT-MCTES for providing him with the PhD fellowship SFRH/BD/62217/2009. In addition, the authors extend their gratitude to P. Wojcik, R. Ferreira and A. Vicente for their great help with manuscript image preparation.

References and links

1. B. G. Batchelor and P. F. Whelan, Intelligent vision systems for industry (Springer Verlag, 1997).

2. J. Hochberg, “Machines should not see as people do, but must know how people see,” Comput. Vis. Graph. Image Process. 37(2), 221–237 (1987). [CrossRef]  

3. R. G. Dorsch, G. Häusler, and J. M. Herrmann, “Laser triangulation: fundamental uncertainty in distance measurement,” Appl. Opt. 33(7), 1306–1314 (1994). [CrossRef]   [PubMed]  

4. T. Sawatari, “Real-time noncontacting distance measurement using optical triangulation,” Appl. Opt. 15(11), 2821–2824 (1976). [CrossRef]   [PubMed]  

5. T. A. Clarke, K. T. V. Grattan, and N. E. Lindsey, “Laser-based triangulation techniques in optical inspection of industrial structures,” Proc. SPIE 1332, 474–486 (1991). [CrossRef]  

6. S. Klancnik, J. Balic, and P. Planincic, “Obstacle detection with active laser triangulation,” Adv. Prod. Eng. Manage. 2, 79–90 (2007).

7. A. Peiravi and B. Taabbodi, “A reliable 3D laser triangulation-based scanner with a new simple but accurate procedure for finding scanner parameters,” Am. J. Sci. 6, 80–85 (2010).

8. V. Lombardo, T. Marzulli, C. Pappalettere, and P. Sforza, “A time-of-scan laser triangulation technique for distance measurements,” Opt. Lasers Eng. 39(2), 247–254 (2003). [CrossRef]  

9. V. S. Mikhlyaev, “Influence of a tilt of mirror surface on the measurement accuracy of laser triangulation rangefinder,” J. Phys.: Conf. Ser. 48, 739–744 (2006). [CrossRef]  

10. P. B. Petrović, “Rubberized cord thickness measurement based on laser triangulation, Part I: technology,” FME Trans. 35, 77–84 (2007).

11. V. Raja and K. J. Fernandes, Reverse engineering - an industrial perspective (Springer, 2008).

12. T. H. Hoo and M. R. Arshad, “A simple surface mapping technique using laser triangulation method,” presented at ICOLA, Jakarta, Indonesia, (2002).

13. M. Johanesson and A. Astrom, “Sheet-of-light range imaging with MAPP2200,” Technical Report LiTHISY-1–140 1, Dept. EE, Linköping University, Linköping, Sweden, SE-581 83 (personal communication, 1992).

14. M. Johannesson, “Sheet-of-light range imaging in: linköping studies in science and technology,” Dept. EE, Linköping University, Linköping, Sweden, SE-581 83 (personal communication, 1993).

15. M. Johannesson, “Can sorting using sheet-of-light range imaging and MAPP2200,” in Conference Proceedings., International Conference on Systems, Man and Cybernetics, 'Systems Engineering in the Service of Humans' (1993), 3, 17–20, 325–330.

16. K. Araki, M. Shimizu, T. Noda, Y. Chiba, Y. Tsuda, K. Ikegaya, K. Sannomiya, and M. Gomi, “High speed and continuous 3-D measurement system,” in Proceedings., 11th IAPR International Conference on Pattern Recognition, Vol. IV. Conference D: Architectures for Vision and Pattern Recognition (1992), 30, 62–65.

17. J. Contreras, I. Ferreira, M. Idzikowski, S. Filonovich, S. Pereira, E. Fortunato and R. Martins, “Amorphous silicon position sensitive detector array for fast 3D object profiling,” IEEE Sens. J. (submitted).

18. J. Contreras, C. Baptista, I. Ferreira, D. Costa, S. Pereira, H. Águas, E. Fortunato, R. Martins, R. Wierzbicki, and H. Heerlein, “Amorphous silicon position sensitive detectors applied to micropositioning,” J. Non-Cryst. Solids 352(9-20), 1792–1796 (2006). [CrossRef]  

19. J. Contreras, D. Costa, S. Pereira, E. Fortunato, R. Martins, R. Wierzbicki, H. Heerlein, and I. Ferreira, “Micro cantilever movement detection with an amorphous silicon array of position sensitive detectors,” Sensors (Basel, Switzerland) 10(9), 8173–8184 (2010). [CrossRef]   [PubMed]  

20. R. Martins, E. Fortunato, J. Figueiredo, F. Soares, D. Brida, V. Silva, and A. Cabrita, “32 Linear array position sensitive detector based on NIP and hetero a-Si:H microdevices,” J. Non-Cryst. Solids 299–302, 1283–1288 (2002). [CrossRef]  

21. W. Schottky, “Über den entstehungsort der photoelektronen in kupfer-kupferoxydul-photozellen,” Phys. Z. 31, 913–925 (1930).

22. J. T. Wallmark, “A new semiconductor photocell using lateral photo effect,” Proc. IRE 45, 474–483 (1957).

23. L. D. Hutcheson, “Practical electro-optic deflection measurements system,” Opt. Eng. 15, 61–63 (1976).

24. S. Middelhoek and D. J. W. Noorlag, Silicon microtransducers: a new generation of measuring elements, modern electronic measuring systems (Delft University Press, 1978), Chap. 1.

25. H. Walcher, Position sensing - angle and distance measurement for engineers (Butterworth-Heinemann, Oxford, 1994).

26. R. Ohba, Intelligent sensor technology (Wiley, 1992).

27. SiTek Electro Optics AB, Ögärdesvägen, Partille, Sweden, 13A S-433 30. www.sitek.se

28. R. F. P. Martins and E. M. C. Fortunato, “Interpretation of the static and dynamic characteristics of 1-D thin film position sensitive detectors based on a-Si:H p-i-n diodes,” IEEE Trans. Electron. Dev. 43(12), 2143–2152 (1996). [CrossRef]  

29. J. Geist, “Sensor technology and devices,” L. Ristik, ed. (Artech House Publishers, 1994).

30. E. Fortunato, F. Soares, P. Teodoro, N. Guimarães, M. Mendes, H. Águas, V. Silva, and R. Martins, “Characteristics of a linear array of a-Si:H thin film position sensitive detector,” Thin Solid Films 337(1-2), 222–225 (1999). [CrossRef]  

31. A. Cabrita, J. Figueiredo, L. Pereira, H. Aguas, V. Silva, D. Brida, I. Ferreira, E. Fortunato, and R. Martins, “Thin film position sensitive detectors based on pin amorphous silicon carbide structures,” Appl. Surf. Sci. 184(1-4), 443–447 (2001). [CrossRef]  

32. L. Pereira, D. Brida, E. Fortunato, I. Ferreira, H. Aguas, V. Silva, M. F. M. Costa, V. Teixeira, and R. Martins, “a-Si:H interface optimisation for thin film position sensitive detectors produced on polymeric substrates,” J. Non-Cryst. Solids 299–302, 1289–1294 (2002). [CrossRef]  

33. E. Fortunato, L. Pereira, H. Águas, I. Ferreira, and R. Martins, “Flexible a-Si:H position sensitive detectors,” Proc. IEEE 93(7), 1281–1286 (2005). [CrossRef]  

34. R. Martins, G. Lavareda, E. Fortunato, F. Soares, L. Fernandes, and L. Ferreira, “A linear array position sensitive detector based on amorphous silicon,” Rev. Sci. Instrum. 66(11), 5317–5321 (1995). [CrossRef]  

35. R. Martins and E. Fortunato, “Thin film position sensitive detectors: from 1D to 3D applications,” in the technology and applications of amorphous silicon, R. Street, ed. (Spring, Verlagen, 2000).

36. R. Martins, L. Raniero, L. Pereira, D. Costa†, H. Águas, S. Pereira, L. Silva, A. Gonçalves, I. Ferreira, and E. Fortunato, “Nanostructured silicon and its application to solar cells, position sensors and thin film transistors,” Philos. Mag. 89, 2699–2721 (2009). [CrossRef]  

37. H. Águas, S. K. Ram, A. Araujo, D. Gaspar, A. Vicente, S. A. Filonovich, E. Fortunato, R. Martins, and I. Ferreira, “Silicon thin film solar cells on commercial tiles,” Energy Environ. Sci. 4(11), 4620–4632 (2011). [CrossRef]  

38. R. Martins, A. Macarico, I. Ferreira, R. Nunes, A. Bicho, and E. Fortunato, “Highly conductive and highly transparent n-type microcrystalline silicon thin films,” Thin Solid Films 303(1-2), 47–52 (1997). [CrossRef]  

39. C. A. Klein and R. W. Bierig, “Pulse-response characteristics of position-sensitive photodetectors,” IEEE Trans. Electron. Devices 21(8), 532–537 (1974). [CrossRef]  

40. W. P. Connors, “Lateral photodetector operating in the fully reverse-biased mode,” IEEE Trans. Electron. Devices 18(8), 591–596 (1971). [CrossRef]  

41. D. J. W. Noorlag and S. Middelhoek, “Two dimensional position-sensitive photodetector with high linearity made with standard i.c.-technology,” IEE J. Sol. State Electron. Devices 3, 75–82 (1979).

42. E. Fortunato, G. Lavareda, R. Martins, F. Soares, and L. Fernandes, “Large-area 1D thin-film position-sensitive detector with high detection resolution,” Sensor Actuators, A 51, 135–142 (1996).

43. J. Henry and J. Livingstone, “Sputtered a-Si:H thin-film position sensitive detectors,” J. Phys. D Appl. Phys. 34(13), 1939–1942 (2001). [CrossRef]  

44. A. Fantoni, M. Viera, and R. Martins, “Influence of the intrinsic layer characteristics on a-Si: H p-i-n solar cell performance analysed by means of a computer simulation,” Sol. Energy Mater. Sol. Cells 73(2), 151–162 (2002). [CrossRef]  

45. L. Raniero, I. Ferreira, A. Pimentel, A. Goncalves, P. Canhola, E. Fortunato, and R. Martins, “Role of hydrogen plasma on electrical and optical properties of ZGO, ITO and IZO transparent and conductive coatings,” Thin Solid Films 511–512, 295–298 (2006). [CrossRef]  

46. R. Martins and E. Fortunato, “Lateral Photoeffect in large area one-dimensional thin-film position-sensitive detectors based in a-Si:H p-i-n devices,” Rev. Sci. Instrum. 66(4), 2927–2934 (1995). [CrossRef]  

47. Z. H. Hu, X. B. Liao, H. W. Diao, Y. Cai, S. B. Zhang, E. Fortunato, and R. Martins, “Hydrogenated p-type nanocrystalline silicon in amorphous silicon solar cells,” J. Non-Cryst. Solids 352(9-20), 1900–1903 (2006). [CrossRef]  

48. E. Fortunato, G. Lavareda, R. Martins, F. Soares, and L. Fernandes, “High detection resolution presented by large-area thin film position-sensitive detectors,” Proc. SPIE 2397, 259–270 (1995). [CrossRef]  
