
Towards a transparent, flexible, scalable and disposable image sensor using thin-film luminescent concentrators

Open Access

Abstract

Most image sensors are planar, opaque, and inflexible. We present a novel image sensor that is based on a luminescent concentrator (LC) film which absorbs light from a specific portion of the spectrum. The absorbed light is re-emitted at a lower frequency and transported to the edges of the LC by total internal reflection. The light transport is measured at the border of the film by line scan cameras. With these measurements, images that are focused onto the LC surface can be reconstructed. Thus, our image sensor is fully transparent, flexible, scalable and, due to its low cost, potentially disposable.

© 2013 Optical Society of America

1. Introduction

Conventional optoelectronic techniques have forced image sensors into a planar shape. Recent approaches ease this situation. For instance, silicon photodiodes have been interconnected by elastomeric transfer elements in order to realize a hemispherical detector geometry that mimics the shape of the human eye, theoretically enabling a wide field of view and low aberrations [1]. Organic photodiodes, as another example, allow ink-jet digital lithography to be used to implement sensors on fully flexible substrates [2–4].

Image sensors that consist of a grid of light-sensing fibers are also flexible but, compared to those consisting of grids of photodiodes, block less light. They might enable new applications such as lens-less imaging [5]. Thin-film luminescent concentrators (LCs) are polymer films doped with fluorescent dyes that absorb light of a specific wavelength and re-emit it at a longer wavelength. LC-based waveguides forward the emitted light to the edges of the LC by total internal reflection, with a non-linear attenuation that depends on the distance the light has traveled. Normally, they are used to reduce the cost and improve the performance of solar cells with poor spectral response at short wavelengths. Photodiodes glued to the LC surface plane create an interface with a higher refractive index than air on the polymer surface of the LC. This causes light to be decoupled from the LC at the positions of the photodiodes. The attenuation of the measured light at these positions allows localization of an incident light point, either on horizontally and vertically interwoven LC strips or, with simple triangulation, on a continuous surface. Thus, LCs have also been used for camera-free laser-pointer tracking on potentially large and scalable sensor surfaces [6–9].

LCs have several interesting properties: they are flexible, fully transparent, and low-cost (and therefore scalable and disposable) polymer films (Fig. 1). The state of the art in LC-based light sensing is currently able to reconstruct only simple point images. Our approach makes the reconstruction of entire grayscale images possible.

Fig. 1 A thin-film luminescent concentrator (LC) is a flexible, fully transparent, scalable, and low-cost polymer film. Our approach reconstructs grayscale images focused onto the LC surface. The image shows Bayer Makrofol® LISA Green LC film that absorbs blue and re-emits green light.

This is, to the best of our knowledge, the first method that enables a fully transparent (no integrated circuits or other structures, such as grids of optical fibers or photodiodes), flexible (curved sensor shapes become possible), scalable (sensor size can range from small to large at similar cost; pixel size is not restricted to the size of the photodiodes), and disposable (the sensing area is low-cost and can be replaced if damaged) image sensor.

2. Light transport within luminescent concentrators

Luminescent concentrators, described in detail in [10], are polymer plates or foils that are doped with fluorescent molecules. Light that is not reflected on the surface of an LC passes through, making it transparent. The dye inside the LC absorbs a specific portion of the light spectrum that passes through, and re-emits it at a lower frequency. For instance, blue light is absorbed and re-emitted as green light. The band of the spectrum that can be absorbed depends on the chemical structure of the dye. The fluorescent particles randomly emit light in all directions. Most of the first-generation photons are trapped inside the LC due to total internal reflection (TIR) and are propagated to the edges of the LC.

The amount of light that finally reaches the edge is subject to various losses. Cone-loss occurs if the angle between the incident ray of light and the normal of the LC-to-air interface is smaller than the critical angle. The critical angle can be derived from Snell’s law and is given by

θ_c = arcsin(1/n),  (1)
where n is the refractive index. For example, TIR occurs at an angle greater than 39.2 degrees for an LC made of polycarbonate. The solid angles above and below a fluorescent particle where TIR does not occur are cone-shaped. For a planar LC with refractive index n, the fraction of luminescence P that is lost due to cone-loss is given by
P = 1 − √(1 − 1/n²).  (2)
For an LC film made of polycarbonate (n = 1.58), the loss is approximately 22.6%.
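As a quick check, both quantities can be evaluated directly from Eqs. (1) and (2). The following Python sketch reproduces the polycarbonate example:

import numpy as np

n = 1.58                                   # refractive index of polycarbonate
theta_c = np.degrees(np.arcsin(1.0 / n))   # Eq. (1): critical angle for TIR
P = 1.0 - np.sqrt(1.0 - 1.0 / n**2)        # Eq. (2): cone-loss fraction

print(f"critical angle: {theta_c:.1f} deg")  # ~39.2 deg
print(f"cone loss: {P:.1%}")                 # ~22.6%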

Absorption processes are another source of loss along the path from a fluorescent particle to the edge. Self-absorption is the re-absorption of a fluorescent photon by another dye molecule due to overlapping absorption and emission spectra. The longer the path length, the higher the probability of self-absorption. The polymer host itself is another source of absorption. The Beer-Lambert law states that the intensity of light decreases according to

I = I_0 · e^(−μd)  (3)
when it travels through a homogeneous absorbing substance, where I is the intensity leaving the material, I0 is the intensity entering the material, μ is the attenuation coefficient that is constant along the transport path, and d is the length of the transport path.
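A minimal sketch of Eq. (3); the attenuation coefficient used below is an illustrative assumption, since μ depends on the particular LC material:

import numpy as np

def beer_lambert(I0, mu, d):
    # Eq. (3): intensity after a transport path of length d through a
    # homogeneous absorbing substance with attenuation coefficient mu.
    return I0 * np.exp(-mu * d)

# Relative intensity remaining after 50 mm and 200 mm of transport
# (mu = 0.002 per mm is an assumed value, not a measured one).
for d in (50.0, 200.0):
    print(d, beer_lambert(1.0, 0.002, d))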

Other losses are due to scattering and incomplete total internal reflection because of LC surface imperfections. The absorption and propagation of light within an LC is illustrated in Fig. 2.

Fig. 2 Light transport within luminescent concentrator: 1) Incident light is transmitted and not absorbed by a fluorescent molecule. 2) Emitted light is lost at the critical escape cones. 3) Light that is not reflected on the surface is absorbed, re-emitted, and transported to the edge either directly or by total internal reflection. 4) Emitted light is self-absorbed by another dye molecule.

3. Measuring light transport by sampling a 2D light field

We use a square thin-film LC that collects and transports the incident light of an image which is focused on its surface. The LC area is divided into a virtual grid of n × m = l discrete entrance points (i.e., pixels). The amount of transported light at the four edges of the LC sheet is measured with CIS (contact image sensor) line scan cameras. Each line scan camera consists of a single array of photosensors.

The relationship between the entrance points (i.e., the pixels p) on the LC surface and the total of k photosensors (s) at the edges of the LC sheet, including all transport losses, can be represented by

s⃗ = T p⃗ + e⃗,  (4)
where s⃗ is the k-dimensional column vector of all photosensor responses, p⃗ the l-dimensional column vector of all pixel intensities, and T the k × l-dimensional light-transport matrix of the LC. Note that e⃗ is the k-dimensional column vector of the constant ambient light contribution that is additionally transported to the photosensors (which also includes the sensors’ constant noise level).

Computing the coefficients of the light-transport matrix T as explained in section 2 would require precise knowledge of the LC’s internal (and potentially imperfect) structure and shape at each location, which is practically impossible. Instead, we measure T as part of a one-time calibration procedure: projecting a single light impulse onto one pixel p⃗i enables simultaneous measurement of the i-th column of T, which equals the sensor response s⃗ under the impulse illumination at pixel p⃗i. Repeating this for all pixels p⃗i, with 1 ≤ i ≤ l, yields all coefficients of T. Note that the photosensor response has to be linear. Thus, the line scan cameras must initially be linearized. Furthermore, the ambient light contribution e⃗ must be measured and subtracted from s⃗ when the matrix coefficients are sampled. The measurement of e⃗ is part of the calibration process and must be repeated if the ambient light changes significantly over time. The transport matrix remains constant as long as the shape of the LC is not changed; otherwise, T must be re-calibrated.
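The calibration procedure can be summarized by the following sketch; project_impulse() and read_sensors() are hypothetical stand-ins for the projector and the linearized line scan cameras:

import numpy as np

def calibrate(k, l, project_impulse, read_sensors):
    # Dark measurement: ambient light plus the sensors' constant noise level.
    project_impulse(None)
    e = read_sensors()                 # k-dimensional vector e
    # One light impulse per pixel yields one column of T at a time.
    T = np.zeros((k, l))
    for i in range(l):
        project_impulse(i)             # impulse at pixel p_i
        T[:, i] = read_sensors() - e   # i-th column of the transport matrix
    return T, e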

In principle, the image focused on a statically (but arbitrarily) shaped LC surface can then be reconstructed with the inverse light transport

p⃗ = T⁻¹ (s⃗ − e⃗)  (5)
or an alternative image reconstruction technique, such as tomographic reconstruction (e.g., filtered backprojection).

However, since each photodiode measures the integral of all pixel contributions across the entire LC film, the light-transport matrix would be dense and have a high condition number, and image reconstruction using Eq. (5) would become very unstable (particularly in the presence of sensor noise). A tomographic reconstruction would be seriously undersampled.

To solve this problem, we cut triangular slits into the LC edges and placed the photodiodes on the LC surface at these slits (Figs. 3 and 4). While reflective paint underneath the photosensors at the back of the LC film reflects additional light towards the photosensors, opaque plasticine filled into the cut-out film areas reduces stray light. Both lead to a cleaner signal when measuring the decoupled light.

Fig. 3 Schema for sampling light transport as a 2D light field: (a) Photosensors (s1, s2,...,sj) located at the edges of the LC sheet that is divided into (p1, p2,..., pi) virtual pixels (from top left to bottom right). The photosensors are positioned at the bottom of the triangular aperture slits that are located along the LC edges. (b) Close-up of a triangular slit. Each photosensor measures the transported light integral at a particular angle. (c) The measurements of the photosensors at the same local position within each triangle at the same edge can be considered as the projection of light to the edge at a specific angle.

Fig. 4 Microscopic views of triangular slit structure: (a) Darkfield image of single triangular slit sampling multiple directions ϕ⃗ at one slit position x⃗i. (b) Brightfield image of multiple triangular slits sampling the 2D light field L(x, ϕ).

Note that each triangular slit can be considered as a simple one-dimensional camera with the slit-opening at the top corresponding to a one-dimensional aperture.

Measuring the light-transport matrix or a focused image using this triangular slit structure corresponds to sampling a two-dimensional light field L(x, ϕ), which describes the amount of light transported within the LC film towards each discrete position x at the LC edges from each discrete direction ϕ. In this case, the light-transport matrix used for image reconstruction in Eq. (5) becomes sparse and its condition number is reduced. Further, more positional and directional samples are available for an alternative tomographic image reconstruction.
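Since most pixels then contribute to only a few photosensors, T can be stored and processed in sparse form, which benefits the solvers discussed in the next section. A minimal sketch (the threshold is an assumption):

import numpy as np
from scipy import sparse

def sparsify(T, threshold=1e-6):
    # Zero out near-zero couplings and store T in compressed sparse row form.
    return sparse.csr_matrix(np.where(T < threshold, 0.0, T))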

4. Image reconstruction

The measured light-transport matrix includes all attenuations of transported light that are due to cone-loss, scattering, incomplete total internal reflection, imperfections of the LC structure, and (self-)absorption, as explained in section 2.

Reconstructing the image focused on the LC surface requires solving a system of linear equations (Eq. (4)) in p⃗. By determining the (pseudo-)inverse light-transport matrix T⁻¹, a direct solution for p⃗ can be found, as Eq. (5) shows. However, the inverse cannot be calculated for every matrix, and even where it can, the solution is not robust against noise. Other methods, such as QR decomposition (QRD), singular value decomposition (SVD), biconjugate gradients stabilized (BiCGStab), and non-negative least squares (NNLS), yield better solutions in the presence of noise. In our experiments, we found BiCGStab and NNLS to be the most robust when compared to QRD, SVD, and the pseudo-inverse of T. Section 7 discusses this in more detail.

The drawback of most of these methods is that they are not suitable for reconstructing high-resolution images. A resolution of 512 × 512 pixels results in a system of more than 260,000 unknowns. Even with today’s computing power, solving such an extensive equation system takes considerable time. However, numerical solvers such as NNLS can be useful for the reconstruction of low-resolution images, while BiCGStab also supports high resolutions. Section 7 presents a more detailed performance and quality analysis.
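For illustration, both solvers are available as standard numerical routines. BiCGStab expects a square system, so the sketch below applies it to the normal equations; this is one possible formulation, not necessarily the exact implementation used in our prototype:

import numpy as np
from scipy.optimize import nnls
from scipy.sparse.linalg import bicgstab

def reconstruct_nnls(T, s, e):
    p, _ = nnls(T, s - e)         # non-negative least-squares solution of Eq. (4)
    return p

def reconstruct_bicgstab(T, s, e, iters=100):
    b = T.T @ (s - e)             # normal equations: (T^T T) p = T^T (s - e)
    p, info = bicgstab(T.T @ T, b, maxiter=iters)
    return p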

Alternatively, 2D reconstruction of higher-resolution images can be achieved tomographically from the multiple 1D projections that are included in the sampled light field L(x, ϕ) over varying directions ϕ, as illustrated in Fig. 3(c). This corresponds to a Radon transform, and tomographic image reconstruction can be accomplished using a backprojection technique that enables fast and robust reconstruction of higher-resolution images:

For all pixels of the image to be reconstructed, backprojection integrates the light that is transported through a pixel in all directions. While a single row of the light-transport matrix T represents the contribution of all image pixels to one photosensor, a single column of T represents the contribution of one image pixel to all photosensors. Thus, the tomographic backprojection operator is, in principle, equivalent to the transpose of the light-transport matrix, and tomographic image reconstruction with backprojection corresponds to

p⃗ = Tᵀ (s⃗ − e⃗),  (6)
where applying the transpose of T corresponds to multiplying each column of T with the measured photosensor values (after subtraction of the ambient light contribution).
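In code, simple backprojection (Eq. (6)) is a single matrix-vector product:

import numpy as np

def backproject(T, s, e, shape):
    # Each column of T spreads the ambient-corrected measurement back over
    # the image pixel it describes; the result is a blurred image (see below).
    p = T.T @ (s - e)
    return p.reshape(shape)   # e.g., shape = (n, m)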

Simple backprojection does not directly reconstruct the original image but a blurred version thereof. Consider the backprojection result of an image containing a single point in the image center: it would show a point spread that falls off towards the image edges and is proportional to the light loss over a particular transport distance. Thus, backprojection reconstructs the convolution of the desired image with a point-spread function (PSF) that depends on the sampling of the light transport within the LC. To obtain the desired image, the backprojection result must be deconvolved (or inverse-filtered) with the PSF. In practice, the PSF can vary locally, as the internal structure of the LC and the sampling density might vary. Compared to simple backprojection, advanced filtered backprojection techniques reconstruct a deblurred image by pre-filtering the Radon transform before backprojection instead of deconvolving the backprojected result. This has significant advantages in terms of performance and robustness: while deconvolution is ill-posed, determining the proper filter parameters is still a challenging problem.

The algebraic reconstruction technique (ART) [11, 12] is an iterative approach to tomographic reconstruction that is based on series expansion. It starts with a guess at the solution vector p⃗, which is projected orthogonally onto the first hyperplane (the first equation) of the linear system, resulting in an updated solution for p⃗. This process is repeated for the remaining equations of the system, which yields a solution vector p⃗ that approximates the overall solution. This iterative step of ART is repeated n times, each time using the solution vector p⃗ of the previous iteration as the initial guess. We apply a faster variant of ART called the simultaneous algebraic reconstruction technique (SART) [13]. Instead of calculating each update of p⃗ sequentially for each equation, SART calculates p⃗ simultaneously for all equations of the linear system within one iteration.
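A compact SART sketch follows; passing a BiCGStab result as the initial guess x0 yields the BiSART variant used in section 7 (the relaxation factor lam is an assumed default):

import numpy as np

def sart(T, b, x0, iters=20, lam=1.0):
    # b is the ambient-corrected measurement s - e.
    # Row sums normalize each equation, column sums normalize each unknown;
    # all unknowns are updated simultaneously in every iteration.
    row_sums = np.maximum(T.sum(axis=1), 1e-12)
    col_sums = np.maximum(T.sum(axis=0), 1e-12)
    x = x0.astype(float).copy()
    for _ in range(iters):
        residual = (b - T @ x) / row_sums   # per-equation error
        x += lam * (T.T @ residual) / col_sums
    return x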

5. Super-resolution imaging

Generally, image reconstruction becomes more error-prone and time-consuming with increasing image resolution, which leads to large light-transport matrices. This applies to tomographic methods and to other techniques that solve large equation systems numerically. Insufficient dynamic range and low signal-to-noise ratio of the photosensors, as well as a low sampling density, make image reconstruction unstable for high-resolution images.

However, instead of reconstructing a high-resolution image with a single high-resolution light-transport matrix, the image can be approximated by combining the results of multiple reconstructions with low-resolution but shifted light-transport matrices (Fig. 5). This makes the reconstruction of higher-resolution images feasible at adequate quality and acceptable speed.

Fig. 5 Super-resolution reconstruction with shifted light-transport matrices (principle): reconstructing 2 × 2 low-resolution pixels (blue) at 3 × 3 sub-pixel-shifted positions results in an image with a resolution of 6 × 6 pixels (yellow).

Just as the image can be focused anywhere on the LC surface, the light-transport matrix can be measured at any position. Thus, during the one-time calibration procedure, we measure multiple light-transport matrices with light impulses that are shifted by sub-pixel, rather than full-pixel distances. This leads to a set of low-resolution light-transport matrices that reconstruct low-resolution images at sub-pixel-shifted positions on the LC surface. The intensities of the reconstructed low-resolution pixels equal the average of the intensities of the high-resolution image regions that are focused on the LC surface underneath the corresponding low-resolution pixel areas.

Thus, each sub-pixel-shifted and reconstructed low-resolution pixel corresponds to a pixel of the high-resolution image, as shown in Fig. 5 for the example of a super-resolution reconstruction from nine shifted 2 × 2 images to one 6 × 6 image.

The result reconstructs the desired high-resolution image convolved with an average filter whose kernel size equals the quotient of the two resolutions involved: high resolution / low resolution. Figure 6 illustrates the nine reconstruction steps needed to compute a 27 × 27 image from shifted 9 × 9 image reconstructions. The result (Fig. 6(a)) approximates the desired high-resolution image convolved with a 27/9 × 27/9 = 3 × 3 average kernel (Fig. 6(b)).
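The combination step itself is simple interleaving. In the sketch below, recon() stands for any reconstruction method from section 4, and Ts maps each sub-pixel shift to its calibrated low-resolution light-transport matrix (both names are hypothetical):

import numpy as np

def super_resolve(Ts, s, e, recon, l, f):
    # f = h / l sub-pixel shifts per axis fill an (l*f) x (l*f) image.
    h = l * f
    high = np.zeros((h, h))
    for (dy, dx), T in Ts.items():           # (dy, dx) in {0, ..., f-1}^2
        low = recon(T, s, e).reshape(l, l)   # one low-resolution reconstruction
        high[dy::f, dx::f] = low             # scatter to its shifted positions
    return high   # approximates the image blurred by an f x f average kernel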

Fig. 6 Super-resolution reconstruction steps (9 × 9 to 27 × 27 example): The upper row shows the nine low-resolution reconstructions created with the 3 × 3 shifted light-transport matrices. The center row shows the same images with the reconstructed pixels placed at the correct positions within the high-resolution image. The bottom row presents the accumulation of the center-row images from left to right. (a) The final 27 × 27 super-resolution reconstruction. (b) Best possible result: original image (d) convolved with a 3 × 3 average kernel. (c) Direct reconstruction with a single high-resolution transport matrix.

Without subsequent deconvolution, the convolved image defines the quality limit of our super-resolution technique. Nevertheless, it still leads to a better image quality than any of the low-resolution reconstructions that can be achieved with the same precision of light-transport sampling (i.e., with the same light-transport matrix resolution), as can be seen in Fig. 6. Figure 6(c) illustrates the result achieved with a single high-resolution light-transport matrix that attempts to reconstruct the 27 × 27 pixels directly. The low dynamic range and SNR of the photosensors lead to a noisy reconstruction. In section 7, we evaluate the advantages of super-resolution reconstruction over direct reconstruction in more detail.

Note that during capture, only a single measurement is necessary for reconstructing a super-resolution image. Thus, the recording time is not increased. Only the one-time calibration procedure takes additional time. To compute an image of h × h resolution with multiple l × l image reconstructions, (h/l)² light-transport matrices must be calibrated.

6. Experimental setup and implementation

The sampling schema in Fig. 3 provides a number of variable parameters that need to be optimized in order to obtain light-transport matrices with high numerical stability (Fig. 7):

Fig. 7 Optimizing LC sensor parameters: An aperture of width a and a distance d to the photosensors lead to the field of view α of one triangular slit. It defines the distance w that is required by the photosensors at the edge of the LC. The integration area for a single photosensor is highlighted in orange.

In addition to the aperture width a and the distance between photosensors and aperture d, the optimal number of triangular slits n that surround the LC imaging area must be determined.

In general, the integration area of one photosensor should cover only a single line of pixels, so that each equation contains as few unknowns as possible; this results in a sparse transport matrix and higher numerical stability. The area can be reduced by either decreasing a or increasing d, but, in order to retain a wide field of view for each triangular slit, a small aperture width is preferable to a large distance between sensor elements and aperture.

The aperture width a and the distances d and w define both the field of view α of a triangular slit and the integration area of a single photosensor:

a = 2d tan(α/2) − w tan(α/2) cot(α/2).  (7)
These parameters must be chosen such that a single photosensor captures the light of as few pixels as possible. At the same time, the whole LC surface area must be covered such that no pixel is omitted and each pixel is measured multiple times from different directions. In general, this requires a wide field of view α and a small aperture width a. However, there is no clear analytical correlation between the parameter values and the condition number of the resulting light-transport matrix (which defines its numerical stability).

We found the optimal values for our prototypes by a brute-force search of the entire parameter space (a, d, and the total number of triangular slits n per edge), minimizing the condition number of the resulting light-transport matrix, as sketched below. For a given set of parameters, the light-transport matrix is simulated with the analytical light-transport calculations explained in section 2. The constraints considered in this optimization task are defined by the limitations of the fabrication process, the line scan cameras used, the constant size of the evaluated LCs, and the desired image resolution.
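The search itself is straightforward; simulate_transport() is a hypothetical stand-in for the analytical light-transport model of section 2:

import itertools
import numpy as np

def optimize_parameters(a_vals, d_vals, n_vals, simulate_transport):
    best_cond, best_params = np.inf, None
    for a, d, n in itertools.product(a_vals, d_vals, n_vals):
        T = simulate_transport(a, d, n)   # simulated light-transport matrix
        cond = np.linalg.cond(T)          # proxy for numerical stability
        if cond < best_cond:
            best_cond, best_params = cond, (a, d, n)
    return best_params, best_cond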

We used CIS line scan cameras (M106-A4-R1/CMOS Sensor Inc.) with 1728 sensor elements over 210 mm in our experiments. While the integration time and the number of readouts per scan can be adjusted in software with a programmable USB controller (USB-Board-M106A4/Spectronic Devices Ltd), gain and dark level must be adjusted with potentiometers on the controller.

We evaluated two different LC sheets in our experiments: a smaller sheet of 108 mm × 108 mm and a larger sheet of 216 mm × 216 mm (both Bayer Makrofol® LISA Green with a thickness of 0.3 mm). We cut out the triangular slits with a GraphRobo/Graphtec cutting plotter.

The following parameter ranges were chosen: the triangular slits had to have a minimum aperture width a of 0.5 mm to avoid breakage, and the distance d was constrained to a range of 2 to 5 mm. The number of triangular slits per edge n was kept below 1.5 times the desired image resolution (i.e., n = 1...24 for a desired image resolution of 16 × 16).

As expected, the optimal aperture size a is always the defined minimum, as it ensures the highest-resolution directional sampling. It should be noted that, without fabrication limitations, smaller aperture sizes yield even smaller condition numbers. The optimal numbers of triangular slits per edge n were found to be 16 and 32 for the smaller sheet with a desired resolution of 16 × 16 and the larger sheet with a target resolution of 32 × 32, respectively. Thus, 54 photosensors were used for each triangular slit in both cases. The optimal distance d between aperture and photosensors was found to be 3.25 mm in both cases. Parameters for other configurations and target resolutions were determined analogously.

To increase the dynamic range and signal-to-noise ratio of the photosensors, we record multiple exposures (up to 11, between 20 ms and 900 ms) with multiple readouts per recording (on average 2 per exposure; more for shorter and fewer for longer exposures). Initially, the transfer functions of all photosensor elements are measured and linearized individually. The light-transport matrices and the ambient light contributions are also measured during this one-time calibration, as explained in section 3.
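One possible merging scheme for such multi-exposure recordings is sketched below; the 10-bit saturation level follows the dynamic range mentioned in section 8, while the clipping logic is an assumption:

import numpy as np

def merge_exposures(readings, times, saturation=1023):
    # readings: (n_exposures, k) linearized sensor values; times: exposure times.
    readings = np.asarray(readings, dtype=float)
    radiance = readings / np.asarray(times, dtype=float)[:, None]
    radiance[readings >= saturation] = np.nan   # discard clipped measurements
    return np.nanmean(radiance, axis=0)         # per-sensor radiance estimate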

To automate the calibration procedure in our experiments, we use an LCD video projector (SP-M250S/Samsung) to focus light impulses and sample images on the LC surface. The exposure times of the photosensors are adjusted to the projector brightness. Figure 8 illustrates our experimental setup.

Fig. 8 Experimental setup: LC sensor surrounded by four line scan cameras. An LCD projector provides focused light impulses and sample images for automated calibration and experimentation.

7. Results

In our experiments, we evaluated two different LC sheet sizes (smaller: 108 mm × 108 mm and larger: 216 mm × 216 mm, see section 6 for optimal slit configurations), several reconstruction resolutions (9 × 9, 16 × 16, 32 × 32, and 64 × 64 for direct reconstruction and 18 × 18, 27 × 27, 32 × 32, and 64 × 64 for super-resolution reconstruction), and various image reconstruction techniques (BiCGStab, NNLS, QRD, SVD, pseudo-inverse (PINV), SART, filtered backprojection (FBP)) using a total of nine different sample images. All images were focused on the entire (planar) LC sheet area. Figure 9 illustrates a comparison for a reconstructed image with a resolution of 16 × 16 pixels.

Fig. 9 Comparison of different image reconstruction techniques for a sample image with a resolution of 16 × 16 pixels: non-negative least squares (NNLS), simultaneous algebraic reconstruction technique (SART), biconjugate gradients stabilized (BiCGStab), QR decomposition (QRD), singular value decomposition (SVD), pseudo-inverse (PINV), filtered backprojection (FBP).

We found that the reconstruction quality of QRD, SVD, PINV, and FBP is not acceptable, even for low image resolutions. This is due to numerical instabilities resulting from high condition numbers of T (QRD, SVD, PINV) or insufficient directional sampling (FBP). Only SART, NNLS, and BiCGStab (without an excessive number of iterations) provided reasonable reconstruction quality. For these techniques, Fig. 10 shows direct reconstruction and super-resolution reconstruction results for different resolutions. Note that for SART we use an initial guess that is computed with a few (30) BiCGStab iterations; in the following, we refer to this as BiSART. NNLS and BiCGStab do not require an initial guess. We apply the structural similarity index (SSIM) [14] for a quantitative comparison of the results with the ground truth. The SSIM is a commonly applied objective method for assessing perceptual image quality based on the degradation of structural information, and it improves on traditional metrics such as the peak signal-to-noise ratio (PSNR) and the mean squared error (MSE). Our first observation is that no significant reconstruction difference between the small and the large LC sensor sheets could be found: SART, NNLS, and BiCGStab performed equally well independently of LC sheet size.
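For reference, the SSIM values reported in Fig. 10 can be computed with any standard implementation of [14], for example (assuming images scaled to [0, 1]):

from skimage.metrics import structural_similarity

def ssim_score(reconstruction, ground_truth):
    # 1.0 indicates a perfect match with the ground-truth image at the
    # corresponding resolution; lower values indicate larger differences.
    return structural_similarity(reconstruction, ground_truth, data_range=1.0)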

Fig. 10 Direct reconstruction results for target resolutions of 9 × 9, 16 × 16, and 32 × 32; and super-resolution reconstruction results for 16 × 16 to 32 × 32 and 32 × 32 to 64 × 64. The structural similarity index (SSIM) [14] is provided in blue. An SSIM of 1.0 indicates a perfect match with the ground-truth / best possible image at the corresponding resolution. Lower SSIM values indicate larger differences.

A second observation is that while BiSART always leads to slightly better reconstructions than BiCGStab, NNLS results in a higher quality only in two cases (Fig. 10, column j). We believe that the sparsity of these results is an advantage for least-squares estimators that constrain the solution by iteratively extending the active set of unknowns (e.g., based on Lagrange multipliers, as in NNLS).

A final observation is that we reach the limits of direct reconstruction with our prototype at a resolution of 32 × 32. The dynamic range and the signal-to-noise ratio of the line scan cameras, as well as the sampling resolution that is constrained by our fabrication process, were insufficient for directly reconstructing a resolution of 64 × 64. A super-resolution reconstruction of 64 × 64 from 2 × 2 shifted 32 × 32 reconstructions results in improvements. The reconstruction quality at the image center is lower than at the borders. The reasons for this are the wide apertures of the triangular slits (limiting depth of field) and the low-quality photosensors (making it impossible to pick up slight brightness variations of far pixels within the measured light integrals).

Table 1 presents execution timings of unoptimized image reconstruction algorithms implemented on multicore-CPU and graphics-processor (GPU) hardware. Note that the implicit matrix factorization required for NNLS is not well suited for the single-instruction-multiple-data (SIMD) architecture of GPUs. A higher performance for NNLS can be achieved on multiple-instructions-multiple-data (MIMD) architectures of multicore-CPUs that allow task-level parallelization. However, BiCGStab and BiSART are well suited for SIMD parallelization on GPUs.

Table 1. Computation times of the NNLS multicore-CPU code (on an Intel i7 QuadCore, 2.67 GHz) and of the BiCGStab/BiSART GPU implementations (on an NVIDIA GTX 580, 772 MHz) for direct reconstructions and super-resolution reconstructions. For higher resolutions (128 × 128 and above, in our experiments), super-resolution reconstruction outperforms direct reconstruction. The sizes of the light-transport matrices range from 280 thousand (9 × 9) to 57 million (128 × 128) entries.

As outlined above, the difference of reconstruction quality between BiCGStab and BiSART is marginal. However, BiCGStab is significantly faster – especially for higher resolutions. For target resolutions of 128 × 128 and higher, super-resolution reconstruction outperforms direct reconstruction in our experiments. The speed-up grows proportionally with the target resolution.

8. Limitations

The main limiting factors that constrain the reconstruction quality and resolution of our approach are the relatively small dynamic range (10 bit), low signal-to-noise ratio (we measured a 20-log-ratio as low as 20 dB), and low resolution (54 photosensors per triangular slit) of the CIS line scan cameras used. They are normally employed in flatbed scanners (where neither a high signal-to-noise ratio, a large dynamic range, nor a high resolution is required) and are not capable of measuring small differences in radiance or very low and very bright intensities. High-dynamic-range (HDR) measurements using multiple exposures improve this situation, but lead to longer recording times.

Other limitations are the relatively wide aperture openings used, the inefficient decoupling of light with the photosensors that are loosely placed (not glued) on top of the LC surface, and remaining stray light that passes imperfectly cut and filled triangular areas. All of these issues are due to fabrication constraints.

Smaller aperture openings would narrow the integration area that is measured by a single photosensor. This has a positive effect on the condition number of the light-transport matrix, as it increases the numerical stability of image reconstruction. It also opens up the possibility of using other techniques, such as filtered backprojection, which, in turn, enables the reconstruction of higher-resolution images in less time.

Higher decoupling efficiency, dynamic range, and signal-to-noise ratio support shorter exposure times and smaller (and therefore more) photosensors. This also leads to higher reconstruction resolution and quality in less time. More photosensors per triangle slit together with smaller aperture openings lead to a higher sampling density.

9. Future work and applications

Micro-lenses attached to the LC borders would be more light-efficient for sampling the 2D light field than triangle slits cut into the LC surface. However, this requires a production process that ensures a precise and robust alignment of lenses, photosensors and film. In addition to general fabrication improvements, we intend to investigate the following configurations that are enabled by transparent and flexible image sensors:

Using stacks of multiple thin-film LC layers with small overlaps of absorption and emission spectra makes color sensors possible, as illustrated in Fig. 11.

Fig. 11 Color sensor: Stack of multiple LC layers (a) with small overlaps of absorption and emission spectra (c).

Stacking also allows simultaneous recording at multiple exposures, as shown in Fig. 12(a). In this case, the photosensors attached to each LC layer would use a different exposure time. The overall capture time then depends only on the maximum exposure time and not on the sum of all exposure time slots. Compared to multi-exposure sequences, this effectively reduces the time for HDR recordings by a factor of two.

Fig. 12 High-dynamic-range sensor: Simultaneous measurements of multiple exposures with (a) stacked LC layers, (b) directional multiplexing, (c) positional multiplexing, or a combination thereof.

Simultaneous recording of multiple exposures can also be achieved by multiplexing them directionally (i.e., having multiple photosensors within each triangular slit record at different exposure times), positionally (i.e., having different triangular slits record at different exposure times), or a combination thereof. This is illustrated in Figs. 12(b) and 12(c).

In both cases, stacking and multiplexing, the photosensors that record shorter exposure times can repeat their measurements (possibly varying the exposure) within the time slot required for the maximum exposure time. This leads to more exposure samples without increasing the overall recording time. Initial experiments with directional and positional multiplexing revealed a decrease of reconstruction quality by 20% and a speed-up of recording by a factor of 2, compared to multiple sequential exposures for all directions. A higher sampling density will reduce the loss of reconstruction quality.

Applying different neutral density (ND) filters in front of the photosensors instead of varying exposure times would also enable HDR recordings.

We will investigate curved and flexible sensor shapes and will evaluate the efficiency of our light-transport measurements and image reconstruction techniques for higher degrees of cone-loss. We will also seek more robust and faster image reconstruction techniques and will optimize our GPU implementations for supporting applications that require better performance.

Potential applications of transparent and flexible imaging sensors include

  • new forms of user interfaces, such as non-touch screens (i.e., graphical user interfaces that react to user input without the screen surface being touched), e.g., by recording and evaluating shadows cast on the sensor surface;
  • novel lens-less imaging devices that record 4D light fields – as discussed in [5];
  • wide-field-of-view imaging systems with low aberrations – as presented in [1];
  • high-dynamic-range or multi-spectral extensions for conventional cameras, e.g., by mounting a stack of LC layers on top of a high-resolution CMOS or CCD sensor, and by recording and combining low- and high-resolution images at multiple exposures or spectral bands as done, for example, in HDR and wide color gamut displays combining a high-resolution LCD panel with a low-resolution LED backlight matrix [15];
  • improved touch-sensing devices that are based on frustrated total internal reflection (FTIR) [16, 17] – e.g., by enabling the recording of 2D light fields using arrays of triangular apertures within the light guides for improving image reconstruction, or by sandwiching our image sensor with an unmodified light guide to enable thin form-factors (compared to the common FTIR devices that apply regular cameras).

Acknowledgments

We thank Robert Koeppe of isiQiri interface technologies GmbH for fruitful discussions and for providing LC samples. This work was supported by Microsoft Research under contract number 2012-030(DP874903) – LumiConSense.

References and links

1. H. C. Ko, M. P. Stoykovich, J. Song, V. Malyarchuk, W. M. Choi, C. J. Yu, J. B. Geddes III, J. Xiao, S. Wang, Y. Huang, and J. A. Rogers, “A hemispherical electronic eye camera based on compressible silicon optoelectronics,” Nature 454(7205), 748–753 (2008).

2. T. N. Ng, W. S. Wong, M. L. Chabinyc, S. Sambandan, and R. A. Street, “Flexible image sensor array with bulk heterojunction organic photodiode,” Appl. Phys. Lett. 92(21), 213303 (2008).

3. G. Yu, J. Wang, J. McElvain, and A. J. Heeger, “Large-area, full-color image sensors made with semiconducting polymers,” Adv. Mater. 10(17), 1431–1434 (1998).

4. T. Someya, Y. Kato, S. Iba, Y. Noguchi, T. Sekitani, H. Kawaguchi, and T. Sakurai, “Integration of organic FETs with organic photodiodes for a large area, flexible, and lightweight sheet image scanners,” IEEE Trans. Electron Devices 52(11), 2502–2511 (2005).

5. A. F. Abouraddy, O. Shapira, M. Bayindir, J. Arnold, F. Sorin, D. S. Hinczewski, J. D. Joannopoulos, and Y. Fink, “Large-scale optical-field measurements with geometric fibre constructs,” Nat. Mater. 5(7), 532–536 (2006).

6. R. Koeppe, A. Neulinger, P. Bartu, and S. Bauer, “Video-speed detection of the absolute position of a light point on a large-area photodetector based on luminescent waveguides,” Opt. Express 18(3), 2209–2218 (2010).

7. S. A. Evenson and A. H. Rawicz, “Thin-film luminescent concentrators for integrated devices,” Appl. Opt. 34(31), 7231–7238 (1995).

8. P. J. Jungwirth, I. S. Melnik, and A. H. Rawicz, “Position-sensitive receptive fields based on photoluminescent concentrators,” Proc. SPIE 3199, 239–247 (1998).

9. I. S. Melnik and A. H. Rawicz, “Thin-film luminescent concentrators for position-sensitive devices,” Appl. Opt. 36(34), 9025–9033 (1997).

10. J. S. Batchelder, A. H. Zewail, and T. Cole, “Luminescent solar concentrators. 1: Theory of operation and techniques for performance evaluation,” Appl. Opt. 18(18), 3090–3110 (1979).

11. A. C. Kak and M. Slaney, Principles of Computerized Tomographic Imaging (IEEE Press, 1988).

12. G. T. Herman, Fundamentals of Computerized Tomography: Image Reconstruction from Projections, 2nd ed. (Springer Verlag, 2010).

13. A. H. Andersen and A. C. Kak, “Simultaneous algebraic reconstruction technique (SART): a superior implementation of the ART algorithm,” Ultrason. Imaging 6(1), 81–94 (1984).

14. Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. Image Process. 13(4), 600–612 (2004).

15. H. Seetzen, W. Heidrich, W. Stuerzlinger, G. Ward, L. Whitehead, M. Trentacoste, A. Ghosh, and A. Vorozcovs, “High dynamic range display systems,” ACM Trans. Graph. 23(3), 760–768 (2004).

16. J. Y. Han, “Low-cost multi-touch sensing through frustrated total internal reflection,” in Proceedings of the 18th Annual ACM Symposium on User Interface Software and Technology (ACM, 2005), 115–118.

17. J. Moeller and A. Kerne, “Scanning FTIR: unobtrusive optoelectronic multi-touch sensing through waveguide transmissivity imaging,” in Proceedings of the Fourth International Conference on Tangible, Embedded, and Embodied Interaction (ACM, 2010), 73–76.
