
Single-event transient imaging with an ultra-high-speed temporally compressive multi-aperture CMOS image sensor

Open Access

Abstract

In the work described in this paper, an image reproduction scheme with an ultra-high-speed temporally compressive multi-aperture CMOS image sensor was demonstrated. The sensor captures an object by compressing a sequence of images with focal-plane temporally random-coded shutters, followed by reconstruction of the time-resolved images. Because signals are modulated pixel by pixel during capture, the maximum frame rate is defined only by the charge transfer speed and can thus be higher than that of conventional ultra-high-speed cameras. The frame rate and optical efficiency of the multi-aperture scheme are discussed. To demonstrate the proposed imaging method, a 5×3 multi-aperture image sensor was fabricated. The average rising and falling times of the shutters were 1.53 ns and 1.69 ns, respectively, and the maximum skew among the shutters was 3 ns. The sensor observed plasma emission, compressing the event into 15 images, from which a series of 32 frames at 200 Mfps was reconstructed. In the experiment, artifacts in the reconstructed images were reduced by correcting disparities and accounting for the temporal pixel responses. An improvement in PSNR from 25.8 dB to 30.8 dB was confirmed in simulations.

© 2016 Optical Society of America

1. Introduction

Ultra-high-speed (UHS) imaging, which allows images to be captured at rates of mega frames per second (fps) or higher, is a powerful tool for analyzing UHS phenomena in various fields, such as chemistry, biology, and other scientific research. UHS imaging schemes are categorized into two types: time-resolved imaging in the optical domain and in the electronic domain. Examples of the optical-domain approach include the pump-probe method, which can observe repetitive phenomena at over 100 Gfps [1], and sequentially timed all-optical mapping photography (STAMP), which can capture 6 images at 4.4 Tfps for a single event [2]. Although these methods can capture images at high frame rates, bulky special light sources and optical systems are required. Electronic-domain methods, on the other hand, allow more compact systems. A streak camera [3] is an example of a UHS time-resolved device; it obtains spatially one-dimensional images at over 10 Tfps by mapping photons to two-dimensional space (one axis for space and the other for time). Although this type of camera achieves the highest frame rate among electronic UHS imaging devices, it can capture only one-dimensional images. In the field of computational photography, transient imaging methods based on conventional time-of-flight cameras have been studied [4, 5], and temporal resolutions shorter than 100 ps have been reported. However, these methods require scanning of delay or frequency, so single-event ultra-fast phenomena cannot be observed.

To capture two-dimensional high-frame-rate images with a simple system, UHS image sensors are utilized. Silicon-chip implementations of UHS image sensors are important in industrial and scientific applications because of their compactness, ease of handling, and low-power operation. They are classified into continuous capturing devices, in which images are captured and read out continuously, like video cameras [6], and burst capturing devices, in which a predefined number of frames are captured at one time and then read out. Burst capturing can achieve a much higher frame rate, e.g., faster than 1 Mfps, and frame rates in the 1 to 10 Mfps regime have been reported so far [7–9]. Typical silicon UHS image sensors are equipped with a multi-frame memory integrated into the sensor, and electronic images, formed by converting photons to electrical signals with photodiodes and related circuitry, are transferred to these memories during capturing. In burst capturing, the frame memories are not read out externally during capturing, so a high frame rate, determined by the transfer speed to the frame memory, can be realized. A UHS image sensor implemented as a charge-coupled device (CCD) [7, 8] has been used to capture 139 frames at 16.7 Mfps in a single burst. With complementary metal-oxide-semiconductor (CMOS) technology, a system that captures 248 frames at 20 Mfps has been achieved [9]. The frame rate of silicon-chip UHS image sensors is determined by several factors, such as photo-electron transfer from the photodiode to the detection node, the pixel readout circuit, and signal transfer. Although the ultimate frame rate of silicon-chip UHS image sensors is set by the saturation velocity of electrons in silicon, which is typically 105 μm/ns [10], the performance of conventional UHS image sensors is far from this level, and further improvement of the frame rate is expected.

Compressive sampling (CS) [11], an efficient sampling method, has been studied for extending the functionality and performance of imaging systems. CS imaging systems are categorized into computational imaging and computational photography approaches. CS is based on the sparsity of a signal and reproduces the original signal from an observed signal whose number of sampling points is smaller than that of the original signal. Information in space, time, wavelength, and light-ray angle can be compressed effectively. Several CMOS image sensors based on CS have been developed [12]. For example, CS has been incorporated into column-parallel ΔΣ analog-to-digital converters to reduce the amount of image data [13]. Furthermore, it has been combined with a streak camera to allow multiple two-dimensional spatial images to be captured; this method modulates image information in the optical and electrical domains to realize frame rates of up to 100 Gfps [14]. A multi-aperture (MA) camera, composed of multiple lenses and (sub-)image-sensors, is one of the emerging unconventional imaging optics in computational imaging [15–17]. A combination of MA (or multi-camera) systems and CS to enhance the frame rate has been proposed [18, 19], in which multiple frames are temporally compressed and consecutive images are reconstructed in post-processing by solving the inverse problem. The benefits of these methods are a high frame rate and high sampling efficiency.

We have found that a combination of MA optics and CS is also advantageous for implementing UHS image sensors. In the framework of frame rate enhancement based on MA and CS, some of the time-sequential images are selected by a binary random shutter, called a coded shutter, and are superimposed on the focal plane of the image sensor. The most important point is that the temporal resolution, or the frame rate, is defined only by the highest frequency of the shutter; neither the pixel readout circuit nor signal transfer to the frame memory affects it. Therefore, a pixel with UHS charge modulation can implement a UHS image sensor. We have proposed and demonstrated a UHS charge modulation pixel called a lateral electric field charge modulator (LEFM), whose temporal resolution is much shorter than 1 ns [20, 21]. From the viewpoint of UHS image sensor implementation, MA-CS frame rate enhancement is a promising approach for increasing the frame rate of burst capturing because the ultimate frame rate of the image sensor can be as fast as the electron transfer speed, defined by the saturation velocity of electrons, if the shutter control signaling is elaborately designed. This can be regarded as an ultra-fast electronic version of the flutter shutter [22]. However, unlike the original flutter shutter method, in which only one-dimensional motion is resolved, high-speed imaging based on MA and CS is free from directional dependency in temporal resolution enhancement. Based on the above idea, a dedicated UHS CMOS image sensor was fabricated and demonstrated. This prototype image sensor achieved the highest reported frame rate of up to 200 Mfps. With CS, 32 frames were reconstructed from 15 captured compressed images. The electronic design and the results of the image capturing demonstration are described in [23].

In this paper, the proposed method is compared with conventional methods based on CCD and CMOS devices, and the benefits and drawbacks are clarified. In addition, the image reproduction procedure is discussed, and the compression ratio and PSNR of the reproduced images are analyzed based on simulation. A processing flow specific to the MA optics, including corrections for disparity and shutter skew, is also shown, and its effectiveness was confirmed in simulations.

In Section 2, the concept and principle of the temporally compressive MA image sensor are introduced, and the frame rate and optical efficiency are compared with those of conventional UHS image sensors. In Section 3, the relationship between the PSNR and the compression ratio is investigated by simulation. Section 4 explains the fabricated 5×3 MA CMOS image sensor and its operation. Section 5 shows the processing flow of image capturing with the sensor. Section 6 describes the experimental configuration and the image processing results of plasma emission captured at 200 Mfps, where disparities, pixel temporal responses, and shutter skews among the apertures are corrected. Section 7 discusses scalability and signal-to-noise ratio. Finally, Section 8 concludes this paper.

2. Architecture

2.1. Principle

2.1.1. Compressive sampling

In CS [11], a whole signal can be reproduced from fewer samplings on the basis of the sparsity of the signal. The image acquisition process of the whole camera system, including space and time, is formulated by the following linear expression:

$$ y = Ax, \tag{1} $$

where x, y, and A are the original signal (n × 1), the observed signal (m × 1), and an observation matrix (m × n), respectively. Note that the dimension of y is smaller than that of x (m < n), so this problem is underdetermined.

If x is K-sparse, this problem becomes solvable by CS, and an approximate solution can be obtained from y and A. K-sparse means that most of the elements of x are zero and only K (< n) elements have non-zero values. In solving this inverse problem, total variation (TV) regularization [24] is often used, as denoted by

$$ \min_x \sum_i \| D_i x \|_p \quad \text{s.t.} \quad y = Ax, \tag{2} $$

where D_i x is the discrete gradient of x at pixel i, and ||D_i x||_p denotes the p-norm of D_i x; p can be either 1 or 2. In this formulation, the original signal x is estimated by minimizing the sum of ||D_i x||_p subject to the observation constraint.
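For concreteness, the objective that such a TV solver minimizes can be written in a few lines of NumPy. This is a minimal sketch assuming anisotropic TV (p = 1), a quadratic penalty in place of the hard constraint, and a raster-scanned image layout; none of these details are prescribed by the paper.

```python
import numpy as np

def tv_objective(x, A, y, lam=1.0):
    """Penalized form of Eq. (2): sum_i ||D_i x||_1 + (lam / 2) * ||Ax - y||^2."""
    # Discrete gradients D_i x as forward differences along each image axis.
    tv = np.abs(np.diff(x, axis=0)).sum() + np.abs(np.diff(x, axis=1)).sum()
    # The constraint y = Ax is relaxed to a quadratic penalty, as in
    # augmented-Lagrangian solvers.
    r = A @ x.ravel() - y
    return tv + 0.5 * lam * (r @ r)
```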

2.1.2. Compressive UHS image sensor

Our scheme realizes the fastest reported UHS imaging system, and high sampling efficiency is achieved by combining dedicated optics, an image sensor, and processing. As will be described in Sec. 2.3, the bottleneck in raising the frame rate of a solid-state UHS image sensor is eliminated by this cooperative architecture. As the imaging optics, the MA imaging system shown in Fig. 1 is used. A lens and an image sensor constitute an aperture, and the whole camera is composed of an array of apertures. The MA optics is combined with a temporally coded focal-plane shutter. Figure 2 depicts the procedure of our UHS imaging based on MA optics and CS with aperture-wise coded shutters. A subject (Fig. 2(a)), which moves or changes in time, is optically duplicated in every aperture by the lens array (Fig. 2(b)). Each aperture has its own sub-image-sensor (Fig. 2(c)) with a function for temporally modulating and accumulating photo-generated charges with a binary random-coded shutter (Fig. 2(d)). Note that the coded shutters are identical for all pixels within an aperture but differ from aperture to aperture. The operation at each aperture is similar to multiple exposure [25]. The images obtained by this image sensor are blurry because multiple images at different timings are superimposed (Fig. 2(e)). After solving the inverse problem of the image acquisition system with the coded shutters, the temporally multiplexed images are decomposed. Finally, temporally resolved images are obtained (Fig. 2(f)).

Fig. 1 Multi-aperture imaging system.

Fig. 2 Procedure of the proposed method.

In the proposed scheme, the coded shutters correspond to the observation matrix A, which consists of a stack of m row vectors a_1, ..., a_m. A vector a_i has n elements, s_i1, ..., s_in. x and y are composed of n consecutive images x_1, ..., x_n and m compressed images y_1, ..., y_m, respectively, where each x_j and y_i is a column vector obtained by raster-scanning the corresponding two-dimensional image.

$$ \begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_m \end{pmatrix} = \begin{pmatrix} a_1 \\ a_2 \\ \vdots \\ a_m \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix} = \begin{pmatrix} s_{11} & \cdots & s_{1n} \\ \vdots & \ddots & \vdots \\ s_{m1} & \cdots & s_{mn} \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix}. \tag{3} $$
a_i represents the temporal shutter pattern for aperture i as a row vector (s_i1, ..., s_in), where s_ij denotes the shutter state, 0 or 1. For s_ij = 1, the image of the j-th frame is included in the compressed image y_i of aperture i; for s_ij = 0, it is not. Thus, the observation matrix A is represented by a stack of the vectors a_i, as shown in Eq. (3).
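As an illustration, a minimal NumPy sketch of the forward model in Eq. (3); the array shapes and the function name are our own choices.

```python
import numpy as np

def compress(frames, shutters):
    """Forward model of Eq. (3).

    frames   : (n, H, W) array of time-resolved images x_1, ..., x_n
    shutters : (m, n) binary observation matrix A; row i holds aperture i's
               coded shutter (s_i1, ..., s_in)
    returns  : (m, H, W) array of compressed aperture images y_1, ..., y_m
    """
    n, H, W = frames.shape
    # Each aperture accumulates the frames for which its shutter bit is 1.
    y = shutters.astype(float) @ frames.reshape(n, H * W)
    return y.reshape(-1, H, W)
```

With m = 15 apertures observing n = 32 frames, as in the prototype described later, each pixel site yields 15 measurements of 32 unknowns, i.e., a compression ratio of 32/15 ≈ 213%.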

2.2. Sensor architecture

The MA UHS image sensor architecture composed of m apertures is shown in Fig. 3. Each aperture (AP) has a shutter controller and a pixel array. The key technology is a charge modulation pixel that realizes a focal-plane UHS electronic shutter with an arbitrary binary pattern. The function of the charge modulation pixel is schematically depicted in Fig. 3 as a charge multiplexer and an accumulator. The temporally multiplexed signal is stored in a floating diffusion (FD) or a storage diode (SD). The pixel structure is described in more detail in Sec. 4.1. The shutter controller has a memory for the binary shutter pattern and outputs the shutter control signal to the pixels during image capturing.

Fig. 3 Image sensor architecture.

In this scheme, the pixel array itself works as a frame memory and is read out only once, after the capture of n frames is completed. During image capturing, the pixel readout circuit does not operate, whereas it does in conventional CMOS UHS image sensors. Hence, the frame rate is limited only by the one-stage charge transfer time from the PD to the FD or SD. When the shutter controller is elaborately designed, the highest charge transfer speed is determined by the saturation velocity of electrons in silicon. Furthermore, CS enables a higher compression ratio than conventional UHS image sensors. Here, the compression ratio is defined as the ratio of the number of reproduced data points (the number of frames times the number of pixels) to the number of observed data points. Whereas the compression ratio of conventional image sensors is always one, that of the proposed scheme can exceed one.

2.3. Comparison

In this section, the proposed UHS camera is compared with conventional cameras in terms of frame rate and optical efficiency. In UHS imaging, three kinds of "rates" should be considered: 1) the charge modulation rate, 2) the effective frame rate, and 3) the readout frame rate. The charge modulation rate can be further classified into the pure charge modulation rate, determined only by the charge modulator performance, and the gross charge modulation rate, determined by both the charge modulator and the control circuit. The effective frame rate is the actual frame rate of the reproduced images. The readout frame rate, which is the frequency at which a full image is read out, is determined by the operation speed of peripheral circuits such as the correlated double sampling (CDS) circuit and the analog-to-digital converter in a CMOS image sensor. In the following, the effective frame rate is referred to as the frame rate of UHS image sensors.

2.3.1. Frame rate

First, the maximum effective frame rates are roughly compared. In CCD high-speed image sensors [7, 8], a multi-frame memory is implemented in a pixel-distributed manner; that is, a CCD frame memory is directly connected to every PD. Thus, electrons generated at the PD are transferred to the multi-stage CCD memory, as shown in Fig. 4. This memory is controlled by the voltages applied to the gate electrodes of the CCD elements. For example, four phases are necessary for one-stage charge transfer. The effective frame rate of this sensor is defined by the charge transfer time from the PD to the first element of the CCD, t_m, the single-stage charge transfer time, t_CCD, and the number of stages, N_CCD. t_CCD is limited by the parasitic resistance and capacitance of the control electrodes and by the charge transfer itself. The effective frame rate becomes

$$ f_{\mathrm{CCD}} = \frac{1}{\max\{ t_m, \; N_{\mathrm{CCD}} \times t_{\mathrm{CCD}} \}}. \tag{4} $$

Fig. 4 A CCD UHS image sensor.

In [8], t_m is 10 ns and the number of stages N_CCD is 4. Because 1/(N_CCD × t_CCD) = 16.7 Mfps is given in the reference, t_CCD is approximately 15 ns.

The CMOS UHS image sensor [9], which stores signals in a column frame memory, is shown in Fig. 5. This sensor reduces noise by using an in-pixel CDS circuit before transferring the signal to the column frame memory. In [9], one-fourth of the pixel values are transferred at once to the frame memory, and this operation is repeated two or four times. Therefore, the effective frame rate is defined by the charge transfer time from the PD to the FD, t_m, the CDS time, t_CDS, the number of readout divisions, N_CMOS, and the signal transfer time from the pixel to the memory, t_CMEM. To shorten t_CMEM, a large current is supplied to the pixel array. However, the maximum current is limited by electromigration, which bounds the maximum effective frame rate. Furthermore, the parasitic resistance and capacitance of the vertical read line also degrade it. The effective frame rate is given by

$$ f_{\mathrm{CMOS}} = \frac{1}{t_m + t_{\mathrm{CDS}} + N_{\mathrm{CMOS}} \times t_{\mathrm{CMEM}}}. \tag{5} $$

Fig. 5 A CMOS UHS image sensor.

The values of t_m, N_CMOS, and t_CMEM are given in [9] as 10 ns, 2, and shorter than 15 ns, respectively. t_CDS can be estimated from Fig. 6(b) of that reference to be about 1.5 × t_CMEM, namely t_CDS < 22.5 ns.

Fig. 6 Simulation results of the proposed method. (a) Original consecutive images and (b) compressed images for 6 apertures.

As shown in Fig. 2, our scheme based on MA optics and CS needs neither multi-stage charge transfer, as in the CCD, nor pixel readout accompanied by CDS during capturing, as in the CMOS sensor. Therefore, the effective frame rate is limited only by the gross charge modulation time, t_m. Thus, the effective frame rate becomes

$$ f_{\mathrm{proposed}} = \frac{1}{t_m}. \tag{6} $$

In the proposed method, note that the gross charge modulation rate and the effective frame rate after reproduction are the same. In [23], the charge modulation speed was determined by the digital control circuit, not by the charge transfer speed; therefore, t_m is 5 ns. Equations (4)–(6) show that the proposed method can potentially achieve the highest effective frame rate because the effective frame rate is determined only by t_m. The parameters described in or estimated from [8, 9, 23] are summarized in Table 1.
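As a quick numeric check, substituting the quoted parameters into Eqs. (4)–(6) reproduces the frame rates discussed above; a short Python sketch:

```python
# Parameters quoted from [8], [9], and [23], substituted into Eqs. (4)-(6).
t_m_ccd, N_ccd, t_ccd = 10e-9, 4, 15e-9
f_ccd = 1 / max(t_m_ccd, N_ccd * t_ccd)              # Eq. (4): ~16.7 Mfps

t_m_cmos, t_cds, N_cmos, t_cmem = 10e-9, 22.5e-9, 2, 15e-9
f_cmos = 1 / (t_m_cmos + t_cds + N_cmos * t_cmem)    # Eq. (5): ~16 Mfps

t_m = 5e-9                          # gross charge modulation time in [23]
f_proposed = 1 / t_m                # Eq. (6): 200 Mfps
```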

Table 1. Summary of parameters and frame rates.

2.3.2. Photosensitivity

This section discusses the photosensitivity of UHS image sensors with single- and multi-aperture optics. The number of electrons generated in one pixel is given by

$$ N_e = \frac{a \eta A R T}{4 F_n^2} E_o \tau, \tag{7} $$

where a is a constant of proportionality, η is the quantum efficiency (the ratio of the number of generated electrons to the number of incident photons), A is the pixel area, R is the reflectivity of the object, T is the lens transmittance, F_n is the F-number of the single-aperture lens, E_o is the illuminance on the object surface, and τ is the exposure time. This equation can be simplified with a constant C = a·η·R·T·E_o·τ/4:

$$ N_e = \frac{C A}{F_n^2}. \tag{8} $$

In terms of the limitation on sensor size determined by chip yield, a reasonable constraint is that the product of the pixel area and the number of apertures M in a multi-aperture system should equal the pixel area in a single-aperture system. Thus, we assume A_m = A_s/M, where the subscripts s and m denote single- and multi-aperture, respectively. When we consider square pixels, the multi-aperture pixel pitch becomes p_m = p_s/√M, where p_s is the single-aperture pixel pitch. The total number of electrons generated in the corresponding pixels over all apertures is expressed by

$$ N_{e_m,\mathrm{tot}} = \frac{C A_m}{F_{n_m}^2} M = \frac{C A_s}{F_{n_m}^2}. \tag{9} $$

If the F-numbers are the same, the same number of electrons is obtained, which means the photosensitivities are the same. However, with multiple apertures, the total noise is larger than that of a single aperture because the noise of the M corresponding pixels is summed [26]. This drawback can be alleviated with a low-noise ADC, as used in our prototype.

Pixel size is a significant factor related to photosensitivity. For example, the pixel size in [9] is 32 μm square. Assuming M = 7×7 apertures, which allows approximately 100 frames to be reconstructed at a 200% compression ratio, the pixel size of the multi-aperture sensor becomes about 4.6 μm square. This pixel size can be achieved with the pixel-sharing technique [27] and more advanced process technologies. Furthermore, development of a fast, low-distortion lens array is needed.
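The 4.6 μm figure follows directly from the area constraint A_m = A_s/M; a minimal sketch of the arithmetic:

```python
import math

p_s = 32e-6           # single-aperture pixel pitch from [9]
M = 7 * 7             # assumed aperture count (~100 frames at 200% compression)
A_m = p_s ** 2 / M    # multi-aperture pixel area under A_m = A_s / M
p_m = math.sqrt(A_m)  # ~4.57e-6 m, i.e. the ~4.6 um pitch quoted above
```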

3. Simulation results

The proposed scheme was confirmed by simulation. Twelve consecutive images shown in Fig. 6(a) were prepared as the original image set, which imitates an electric discharge [28]. Coded shutters were generated randomly under the following constraints: the average shutter opening was approximately 50%, and at least one shutter among the apertures was open in every frame. Because our prototype sensor has two storage diodes per photodiode, as described in Sec. 4, a shutter opening of 50% was selected to keep the saturation levels the same for both diodes. Figure 6(b) shows an example of the compressed images. Gaussian sensor noise and photon shot noise were included in the compressed images. The standard deviation of the image sensor noise was set to one electron, assuming a low-noise, high-dynamic-range column analog-to-digital converter [29]. The number of saturation electrons was 10,000.
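A minimal sketch of this noise model as applied to a compressed image in electrons; the clipping order and the function name are our own assumptions.

```python
import numpy as np

def add_sensor_noise(y, read_noise_e=1.0, full_well_e=10_000, seed=0):
    """Add photon shot noise and 1 e- (rms) Gaussian read noise to an
    image `y` given in electrons, saturating at 10,000 e-."""
    rng = np.random.default_rng(seed)
    y = rng.poisson(np.clip(y, 0, full_well_e)).astype(float)  # shot noise
    y = y + rng.normal(0.0, read_noise_e, y.shape)             # read noise
    return np.clip(y, 0, full_well_e)
```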

To investigate the relationship between the compression ratio and the peak signal-to-noise ratio (PSNR), the PSNR was calculated while changing the number of apertures. Fifty different sets of coded shutters were randomly generated for every compression ratio. The simulation results are shown in Fig. 7, and the average and maximum PSNRs are summarized in Table 2. TVAL3 [30] was used as the CS solver. The original TVAL3 code uses two-dimensional (x-y) total variation (TV). For simplicity, we assigned the temporal axis to the y-axis in the simulation, so that the TV in space is one-dimensional. Note that no dictionary, such as a wavelet or cosine transform, was applied.
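The paper does not specify how the constrained random patterns were drawn; one simple possibility is rejection sampling, sketched below with hypothetical parameter names.

```python
import numpy as np

def random_coded_shutters(m, n, opening=0.5, seed=0):
    """Draw an (m, n) binary shutter matrix with ~`opening` duty cycle in
    which at least one aperture is open in every frame."""
    rng = np.random.default_rng(seed)
    while True:
        S = (rng.random((m, n)) < opening).astype(np.uint8)
        if S.sum(axis=0).min() >= 1:  # every frame seen by some aperture
            return S
```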

Fig. 7 Relationship between compression ratio and PSNR.

Table 2. Summary of PSNR.

Figure 8 compares the reproduced image sets with the best PSNR for each compression ratio. Figure 8(a) is the original image set (the same as Fig. 6(a)). Figures 8(b)–8(d) are the reconstructed images for compression ratios of 400%, 200%, and 133%, respectively. At a compression ratio of 400%, the outline and the brightness were not reproduced correctly. However, the outlines in Figs. 8(c) and 8(d) appear to be correctly reproduced. A rough observation is that a PSNR of 20 dB is required for correct outline reproduction and 30 dB for correct brightness reproduction. Figures 9(a) (the same as Fig. 8(c)) and 9(b) compare the reproduced images with the best and worst PSNRs for the compression ratio of 200%, which indicates the importance of shutter pattern selection.

Fig. 8 Original and reconstructed images with different compression ratios. (a) Original images, and reconstructed images for compression ratios of (b) 400%, (c) 200%, and (d) 133%.

Fig. 9 Reconstructed images for (a) the best PSNR and (b) the worst PSNR for the compression ratio of 200%.

Coherence is an indicator of reproducibility in inverse problems, and is defined by the following equation [31],

$$ \mu(A) = \max_{1 \le i < j \le N} \frac{\left| \langle a_i, a_j \rangle \right|}{\| a_i \|_2 \, \| a_j \|_2}. \tag{10} $$

The minimum coherence values for the compression ratios of 400%, 200%, and 133% are 1, 0.87, and 0.77, respectively. In the proposed architecture, only the time information is compressed, and the maximum number of sampling points for each pixel is not large (12 in the simulation). Therefore, the coherence values are relatively high. Because no obvious relationship between coherence and PSNR was observed in simulation, the shutter patterns are evaluated only in terms of PSNR in this paper.
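Equation (10) is straightforward to evaluate over the shutter vectors; a sketch, treating the a_i as the rows of A in line with the notation of Sec. 2.1.2:

```python
import numpy as np

def coherence(A):
    """Mutual coherence of Eq. (10): the largest normalized inner product
    |<a_i, a_j>| / (||a_i|| ||a_j||) over distinct vector pairs."""
    A = np.asarray(A, dtype=float)
    norms = np.linalg.norm(A, axis=1)
    G = np.abs(A @ A.T) / np.outer(norms, norms)  # normalized Gram matrix
    np.fill_diagonal(G, 0.0)                      # exclude i == j terms
    return G.max()
```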

The dependency of PSNR on shutter opening was also examined for a compression ratio of 200%. The maximum (average) PSNRs for shutter openings of 25%, 50%, and 75% are 25.0 dB (20.7 dB), 26.2 dB (21.6 dB), and 24.8 dB (20.6 dB), respectively. The simulation results show that a shutter opening of 50% gives the best PSNR, although no significant difference is observed.

The original images are binary for simplicity, depicting the outline of a textureless object like the plasma described in Sec. 6.2. We confirmed that the dependence of PSNR on the shutter pattern was almost the same for binary and grayscale images, although grayscale images gave better PSNRs.

4. Implementation

4.1. Image sensor

The block diagram and a photomicrograph of the prototype image sensor are shown in Figs. 10 and 11, respectively. The specifications are summarized in Table 3. This image sensor consists of 5×3 apertures, a clock controller, a clock tree, an addressing module, a voltage and current reference block, a serial peripheral interface, column-parallel readout circuits, vertical and horizontal scanners, and an output block. Each aperture consists of a 64×108-pixel array and a shutter controller. The shutter controller supplies the shutter control signals to the pixels. The clock controller conveys one of two kinds of clock signals: one for writing a coded shutter pattern and one for generating the shutter control signals synchronized with the clock.

Fig. 10 Block diagram of a prototype CMOS image sensor.

Fig. 11 Photograph of a prototype CMOS image sensor.

Table 3. Specifications of the prototype image sensor.

To realize UHS charge modulation and accumulation, a lateral electric field charge modulator (LEFM) [20, 21] is utilized in each pixel. Figures 12(a) and 12(b) depict the pixel structure and its potential diagram, respectively. This pixel can transfer photo-generated electrons to one of the SDs or to the drain by using control gates G1 and G2 and a drain gate, TD. The accumulated photoelectrons are transferred to the FDs by applying a high voltage to a transfer gate, TX. However, the fabricated sensor cannot accumulate electrons in the SDs, possibly due to a fabrication problem. Therefore, photo-generated electrons are accumulated in the FDs directly by applying a constant high voltage to TX. When G1 or G2 is high and TD is low, the electrons in the photodiode are promptly transferred to FD1 or FD2, respectively, and accumulated. On the other hand, when TD is high and G1 and G2 are low, all the electrons are drained. Because the potential profile of the charge transfer path is controlled from both sides and there is no structure on the path, ultra-fast and loss-less charge transfer is realized. Electron transfer in the sub-nanosecond regime has been confirmed [21].

Fig. 12 (a) Simplified LEFM pixel structure and (b) its potential diagram.

The operation of this image sensor includes three phases: setting, capturing, and readout. These phases are explained below. In the setting phase, a shutter pattern with variable length is set in every shutter controller. The addressing module selects one of the APs to configure. In the capturing phase, the shutter controller outputs the shutter control signals to G1, G2, and TD of the pixels. Thus, the pixels accumulate modulated photo-generated electrons. After repeating the shutter generation as programmed, the captured images of all apertures are read out in the readout phase.

4.2. Aperture

Figure 13 shows a block diagram of one aperture. There are two types of circuits, one for generating the shutter pattern and one for controlling starting and stopping of the shutter pattern generating circuit, which are named a pattern generator and a start/stop controller, respectively. The pattern generator consists of a shutter pattern memory, a non-overlap signal generator, a clock tree, and level-shifting drivers. The memory is implemented by a cyclic shift register composed of 128 D-FFs, which can store a binary shutter pattern with 2 to 128 bits. The start/stop controller is equipped with an end-of-pattern memory and a start/stop detector.

Fig. 13 Block diagram of one aperture.

In the capturing phase, the shutter pattern memory is read out bit by bit, and the read-out bits are conveyed to the pixels through the non-overlap signal generator, the clock tree, and the level-shifting drivers. The non-overlap signal generator creates two complementary signals, GV1 and GV2, without any overlap so as not to let G1 and G2 turn on at the same time. During capturing, when the shutter pattern memory output is high, G1 is turned on, and the electrons are transferred to FD1. When the memory output is low, G2 becomes high. Then, the electrons are transferred to FD2.

The end-of-pattern memory and the start/stop detector control the starting and stopping of shutter pattern generation. The end-of-pattern memory is implemented with the same circuit as the shutter pattern memory and stores the position of the end of the shutter pattern.

The circuits operate on the clock signal CLK, and pattern generation is controlled by CAP TRG. STOP SIG and CYC END indicate the circuit state. The details of these signals are described in the next section.

4.3. Timing chart

A timing chart of the shutter controller is shown in Fig. 14. Because every aperture works independently only with the clock and trigger signals in the capturing phase, the operation of only one aperture is described here.

Fig. 14 Timing chart of the shutter controller.

In the setting phase, a shutter pattern is set through the serial interface. A clock signal, CLK, and a data signal, PAT SDI, are used for this purpose. The pattern is written to the shutter pattern memory in the aperture selected by the aperture addressing module.

In the capturing phase, the clock signal CLK for generating a shutter pattern is conveyed to all apertures from the clock controller. CAP TRG is also connected to all apertures. After the setting phase, image capturing starts when CAP TRG changes from low to high. Then, the start/stop detector lets the shutter pattern memory start shutter signal generation by turning STOP SIG low, and the shutter control signals are output to the pixels. CYC END, the output signal of the end-of-pattern memory, becomes low for three clock cycles at the end of one cycle. In the prototype image sensor, CYC END of one specific aperture out of the 15 apertures is output externally from the chip. While CAP TRG is high, pattern generation repeats continuously. To stop pattern generation, CAP TRG is changed to low. After the current shutter pattern cycle finishes, pattern generation ends and STOP SIG becomes high. STOP SIG is monitored from the outside to detect the end of capturing. The inverted signal of STOP SIG is distributed to the pixels through the clock tree as TDV, which controls charge draining in the pixels. As a result, photo-generated electrons are all drained after capturing. In the proposed architecture, all pixels operate in the global-shutter mode during capturing; that is, the same shutter control signals are applied to all the pixels in an array simultaneously. In the readout phase, by contrast, pixel values are read out in the rolling-shutter mode. Note that in the readout phase, photo-generated charges are all drained and are not transferred to the storage diodes, so no image distortion appears.

5. Capturing and processing flow

The operation flow of the MA temporally compressive imaging method is shown in Fig. 15. A UHS phenomenon is captured by the fabricated MA image sensor with coded shutter patterns, yielding compressed images (Fig. 15(a)). Because each aperture has parallax, the position of the object in each aperture image is different. Furthermore, the alignment of each lens has an error. To correct these spatial displacements, a simple method based on the centroids of images captured with the same shutter pattern (Fig. 15(b)) is used. This method can be applied in simple situations where the object is approximately planar and its distance is constant and known. In this paper, plasma emission in air is observed, as shown in Sec. 6.2. The plasma is generated around the focal point of the laser, so the distance can be measured prior to an observation. Its shape is almost elliptical, with the long axis along the optical axis; therefore, the plasma can be approximated as a planar object. In such a case, a simple registration method based on the centroid is applicable. By calculating the centroid of the object in every image of Fig. 15(b), the discrepancies between the apertures are obtained in sub-pixel units, assuming that the object is in a plane. After that, the images are cropped and registered based on the centroid (Fig. 15(c)). Although the ideal waveform of the applied shutter pattern consists of only the binary values 0 and 1, the actual one is skewed and blurred owing to the arrival time of the clock signal and the finite pixel response, as shown in Fig. 15(d), which should be considered in reproduction. Therefore, in processing, the ideal shutter patterns are convolved with the impulse responses of the apertures. Finally, temporally resolved images are reconstructed by solving the inverse problem with the applied shutter patterns (Fig. 15(e)).
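A minimal sketch of this centroid-based registration; the helper names are hypothetical, the 40×60 crop size follows Sec. 6.2.2, and sub-pixel shifts (here simply rounded) would in practice be applied by interpolation.

```python
import numpy as np

def centroid(img):
    """Intensity-weighted centroid of a 2-D image, in sub-pixel units."""
    H, W = img.shape
    total = img.sum()
    cy = (np.arange(H) @ img.sum(axis=1)) / total
    cx = (np.arange(W) @ img.sum(axis=0)) / total
    return cy, cx

def register_crop(img, size=(40, 60)):
    """Crop a window of `size` centered on the aperture's own centroid so
    that all apertures align on the (assumed planar) object."""
    cy, cx = (int(round(c)) for c in centroid(img))
    h, w = size
    return img[cy - h // 2 : cy - h // 2 + h, cx - w // 2 : cx - w // 2 + w]
```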

Fig. 15 Operation flow of the MA temporally compressive imaging.

6. Experimental results

6.1. Impulse response of every aperture

To obtain the real temporal response of every aperture, the impulse responses of pixels were measured. The shortest electronic shutter was programmed to open for 5 ns, and a short pulse laser (HAMAMATSU PHOTONICS, M10306-09, wavelength 635 nm, pulse width 88 ps) illuminated the sensor. The laser emission delay was scanned in 0.5 ns steps over a 13 ns range. The normalized average of the output signal of each aperture is shown in Fig. 16. In the figure, the color and type of each line correspond to the column and row of the aperture position, respectively. The average rising and falling times calculated from the steepest slopes were 1.53 ns and 1.69 ns, respectively. The maximum skew between apertures was 3 ns.
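One common way to extract such edge times from a sampled response is to divide the full swing by the steepest slope; whether the authors used exactly this definition is our assumption. A sketch:

```python
import numpy as np

def edge_time(t, r):
    """Transition time estimated from the steepest slope of a sampled
    response r(t): (max - min) / max|dr/dt|. Rising and falling edges are
    measured on the corresponding portions of the curve."""
    slope = np.gradient(r, t)
    return (r.max() - r.min()) / np.abs(slope).max()
```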

Fig. 16 Impulse response of every aperture.

6.2. 200 Mfps imaging of plasma emission

6.2.1. Experimental configuration

To demonstrate single-event UHS imaging, plasma emission generated by focusing a nanosecond pulse laser (Nd:YAG, wavelength 532 nm (SHG), pulse width (FWHM) ~8 ns) in air was observed with the prototype image sensor at 200 Mfps (corresponding to a 5 ns time resolution). Although the highest frame rate is defined by the pure charge modulation speed, i.e., the reciprocal of the sum of the rising and falling times, 1/(1.53 ns + 1.69 ns) = 311 Mfps, the effective frame rate of this sensor is determined by the gross charge modulation speed, including the operation frequency of the shutter controller, which depends on circuit design as well as process technology. Because the maximum operation frequency of the shutter controller was 200 MHz, the effective frame rate was limited to 200 Mfps in this experiment. Figures 17(a) and 17(b) show the experimental configuration and the applied shutter patterns, respectively. The image sensor was equipped with the lens array described in Table 4, a ×20 objective lens, a band-pass filter (red-transparent, transmittance of 0.14% at a wavelength of 532 nm), and ND filters, as shown in Fig. 18. The lens array was composed of a machined iron lens mount and doublet lenses with a diameter of 1.0 mm and a focal length of 3.0 mm (TS achromatic lens 1X3 MGF2, Edmund Optics). Because the horizontal pitch of 0.72 mm was smaller than the lens diameter, one or both sides of each lens were ground off. The lenses were then inserted into the mount and glued in place. Figure 19 shows a photograph of the lens array with the lid of the stop array removed. The camera captured continuously with 32-bit random shutter patterns at 200 Mfps, controlled by a field-programmable gate array (FPGA; Altera Cyclone III). When the plasma emission occurred, an avalanche photodiode (APD) output a trigger signal; when the FPGA detected it, image capturing was stopped by setting CAP TRG low. Finally, the captured images were read out. The readout frame rate was 10 fps, equal to the pulsing frequency of the laser. The low and high voltage levels of GV1 and GV2 were -1 V and 2 V, respectively; for TDV, they were 0 V and 3 V.

Fig. 17 (a) Experimental configuration for observing plasma emission and (b) the applied shutter patterns.

Table 4. Specifications of the prototype camera.

Fig. 18 Optical setup.

Fig. 19 A fabricated lens array.

6.2.2. Spatial and temporal calibration

A. Image registration

The 15 captured compressed images are shown in Fig. 20(a). The image positions reflect the lens positions and disparities. To compensate for these displacements, images for the same shutter pattern were captured (Fig. 20(b)). The image position of every lens was then determined from the centroid of the plasma image in Fig. 20(b). Based on the centroid, the surrounding 40×60-pixel regions were cropped (Fig. 20(c)).

Fig. 20 (a) Fifteen captured compressed images, (b) images for registration, and (c) aligned and cropped compressed images.

B. Single-event transient imaging

Temporally resolved images (Figs. 21(a) and 21(b)) were reconstructed by solving the inverse problem using TVAL3 from the original compressed images (Fig. 20(c)) and the applied shutter patterns (Fig. 17(b)). As shown in Fig. 20(c), the 15 compressed images are slightly different, reflecting the shutter patterns. Figures 21(a) and 21(b) compare the reproduced images obtained without and with taking into account the impulse responses of the image sensor, respectively. Figure 21(c) shows 15 non-compressed images used to confirm the accuracy of the reconstructed images. These non-compressed images were captured by sliding shutter patterns with 5 ns-wide time windows, shown in Fig. 21(d). In Fig. 21(a), the plasma emission was observed for 6 frames (30 ns), and in Fig. 21(b), it occurred for the same period. As shown in Fig. 21(b), the reproduced images obtained by taking the pixel responses into account are more consistent with Fig. 21(c), which is regarded as the ground truth, and there are fewer artifacts in comparison with Fig. 21(a). Thus, an improvement in the accuracy of the reconstructed images was confirmed. In simulation, the PSNR improved from 25.8 dB to 30.8 dB when the measured impulse response was considered.
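A sketch of how the ideal binary patterns can be combined with each aperture's measured impulse response to form the non-binary observation weights; the 0.5 ns grid matches the measurement step of Sec. 6.1, while the discretization itself is our own assumption.

```python
import numpy as np

def effective_shutter(ideal_bits, impulse, t_frame=5e-9, t_step=0.5e-9):
    """Convolve an ideal binary shutter pattern (one bit per 5 ns frame)
    with an aperture's impulse response (sampled every t_step) and return
    the per-frame weights actually applied during exposure."""
    k = int(round(t_frame / t_step))
    fine = np.repeat(ideal_bits.astype(float), k)  # upsample to the 0.5 ns grid
    blurred = np.convolve(fine, impulse, mode="same") / impulse.sum()
    return blurred.reshape(-1, k).mean(axis=1)     # average back to per-frame

# The reconstruction then proceeds as before, with these weights
# replacing the 0/1 entries of the observation matrix A.
```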

Fig. 21 Temporally resolved images obtained by (a) using the ideal shutter patterns as the observation matrix and (b) convolving the impulse response of each aperture, and (c) non-compressive temporally resolved images.

7. Discussion

In UHS cameras, the number of sequentially captured frames is important; in general, over 100 frames are needed for practical applications. The number of frames in the proposed method is determined by the number of apertures and the compression ratio. For example, at a compression ratio of about 200%, at least 7×7 apertures are necessary. Because every aperture of the proposed image sensor operates independently, controlled only by the trigger and clock signals, it is easy to increase the number of apertures; thus, the proposed method has high scalability. To implement a large aperture array, low-skew clock distribution is required. For simplicity, the current prototype image sensor distributes the clock signal from the left side of the aperture array through buffer circuits. The repeater in each aperture then conveys it horizontally from one aperture to the next. Therefore, different skews occur depending on the column position, as shown in Fig. 16. To reduce the skews, an H-tree is effective and will be introduced in the next design.

Furthermore, the number of pixels is also an important factor. Because the parasitic capacitance of the LEFM gates loads the control signal lines (GV1, GV2, and TDV in Fig. 13), a strong, low-delay driver is needed to realize UHS modulation. Because the LEFM drivers are implemented on one side of the pixel array in the prototype, skews that depend on the row position can occur. Reference [21] alleviates this problem by arranging drivers on both the top and bottom sides of the pixel array (256×512 pixels), and it was verified that time-resolved image capturing can be realized for over 100k pixels. Additionally, it is effective to place a local driver inside each pixel, as shown in [32]. To reduce the pixel size with the in-pixel driver scheme, stacked-chip technology is a promising approach [33].

Fill factor is one of the most important parameters of an image sensor because it determines photosensitivity. Although the multi-aperture optics does not influence the pixel fill factor, sensing area is consumed not only by the pixel arrays but also by the shutter controllers. On the focal plane, the area of the shutter controllers limits the field of view of the lenses. Stacking technology is also a good option to alleviate this issue.

Flexibility of exposure control is a significant issue, which is deeply related to layout in integrated-circuit implementation. As demonstrated in [34] and [35], pixel-wise exposure control is effective for better reproducibility in solving inverse problems. However, in terms of sensor layout, we have to prepare as many different control signal lines as there are different shutter patterns. In the prototype sensor, the shutter signal lines are laid out vertically, as shown in Figs. 3 and 13; this is a typical layout in time-resolving image sensors, whose pixel size is typically ten to a few tens of microns. If 32 different shutter signals are prepared for one column, they occupy a width of approximately 22 μm when the line and space are 0.5 μm and 0.2 μm, respectively (32 × (0.5 + 0.2) μm ≈ 22 μm). Although these sizes depend on technology and design, fast signaling in general requires wider metal and wider spacing to reduce resistance and capacitance, respectively. Such a large wiring area is not negligible in the pixel layout. The area problem can be mitigated by multiplexing the shutter control signals; however, this would still consume pixel area for demultiplexing circuits or degrade time resolution through temporal multiplexing of the control signals, which is not desirable in UHS imaging. Therefore, in the proposed architecture, we adopted the aperture-wise shutter control scheme.

In Sec. 6.2.2, a simple registration method was described; however, more advanced methods are required to observe general objects. We can consider three major situations: 1) the object is approximately planar, and its distance is constant and known; 2) the object is approximately planar, but the distance is unknown (though constant); and 3) the object is not planar, and its depth map is unknown. Only situation 1 is treated in this paper. For situation 2, a simple option is to solve the inverse problem while sweeping the assumed plane distance over a plausible range. In this case, no centroid measurement is required, although precise camera parameter extraction is necessary to emulate imaging with parallax. It is expected that the distance giving the sparsest reconstruction can serve as the solution. Situation 3 is the most complicated and includes the case where an object is planar but its distance changes in time. This topic is beyond the scope of this paper, and further investigation is required.

8. Conclusions

In this paper, an image reproduction scheme using an ultra-high-speed temporally compressive multi-aperture CMOS image sensor prototype with 5×3 apertures was demonstrated. The sensor captured plasma emission at a frame rate of 200 Mfps by compressing 32 frames with focal-plane temporally random-coded shutters, and a series of images was then successfully reconstructed. The average rising and falling times of the shutters were 1.53 ns and 1.69 ns, respectively, and the maximum skew between the shutters was 3 ns. Selection of the shutter pattern was discussed based on simulation, which showed that a PSNR of more than 20 dB was achieved at a compression ratio of 200% without a dictionary. The achieved frame rate of 200 Mfps is the fastest reported among silicon image sensors. The frame rate limitations and optical efficiency of the multi-aperture scheme were compared with those of conventional silicon UHS sensors based on a simple estimation, and the results showed that the proposed method can achieve the fastest reported frame rate, with almost no drawback in optical efficiency when using low-noise analog-to-digital converters and an elaborately designed imaging lens array with the same F-number as that used for single-aperture UHS cameras. Image processing specific to the multi-aperture optics and the prototype sensor, such as processing that takes account of disparities and temporal pixel responses, was applied. With this processing, an improvement in PSNR from 25.8 dB to 30.8 dB was confirmed in simulations.

Acknowledgments

The authors are grateful to M. Fukuda for his kind support. This work is partially supported by Grant-in-Aid for Scientific Research (B) Number 15H03989 and (S) Number 2522905, and JSPS KAKENHI Grant Number 15J10262. This work is also supported by the VLSI Design and Education Center (VDEC), The University of Tokyo, in collaboration with Cadence Corporation, Synopsys Corporation, and Mentor Graphics Corporation.

References and links

1. A. Velten, D. Wu, A. Jarabo, B. Masia, C. Barsi, C. Joshi, E. Lawson, M. Bawendi, D. Gutierrez, and R. Raskar, "Femto-photography: capturing and visualizing the propagation of light," ACM Trans. Graph. (SIGGRAPH) 32(4), 44 (2013).

2. K. Nakagawa, A. Iwasaki, Y. Oishi, R. Horisaki, A. Tsukamoto, A. Nakamura, K. Hirosawa, H. Liao, T. Ushida, K. Goda, F. Kannari, and I. Sakuma, "Sequentially timed all-optical mapping photography (STAMP)," Nat. Photonics 8, 695–700 (2014).

3. U. Frühling, M. Wieland, M. Gensch, T. Gebert, B. Schütte, M. Krikunova, R. Kalms, F. Budzyn, O. Grimm, J. Rossbach, E. Plönjes, and M. Drescher, "Single-shot terahertz field driven x-ray streak-camera," Nat. Photonics 3, 523–528 (2009).

4. A. Kadambi, R. Whyte, A. Bhandari, L. Streeter, C. Barsi, A. Dorrington, and R. Raskar, "Coded time of flight cameras: sparse deconvolution to address multipath interference and recover time profiles," ACM Trans. Graph. 32(6), 167 (2013).

5. F. Heide, M. B. Hullin, J. Gregson, and W. Heidrich, "Low-budget transient imaging using photonic mixer devices," ACM Trans. Graph. (SIGGRAPH) 32(4), 45 (2013).

6. M. Furuta, Y. Nishikawa, T. Inoue, and S. Kawahito, "A high-speed, high-sensitivity digital CMOS image sensor with a global shutter and 12-bit column-parallel cyclic A/D converter," IEEE J. Solid-State Circuits 42, 766–774 (2007).

7. T. G. Etoh, V. T. S. Dao, T. Yamada, and E. Charbon, "Toward one giga frames per second: evolution of in situ storage image sensors," MDPI Sensors 13(4), 4640–4658 (2013).

8. T. Arai, J. Yonai, T. Hayashida, H. Ohtake, H. van Kuijk, and T. G. Etoh, "A 252-V/lux·s, 16.7-million-frames-per-second 312-kpixel back-side-illuminated ultrahigh-speed charge-coupled device," IEEE Trans. Electron Devices 60, 3450–3458 (2013).

9. Y. Tochigi, K. Hanzawa, Y. Kato, R. Kuroda, H. Mutoh, R. Hirose, H. Tominaga, K. Takubo, Y. Kondo, and S. Sugawa, "A global-shutter CMOS image sensor with readout speed of 1-Tpixel/s burst and 780-Mpixel/s continuous," IEEE J. Solid-State Circuits 48(1), 329–338 (2013).

10. C. Canali, G. Majni, R. Minder, and G. Ottaviani, "Electron and hole drift velocity measurements in silicon and their empirical relation to electric field and temperature," IEEE Trans. Electron Devices 22(11), 1045–1047 (1975).

11. E. J. Candès and M. B. Wakin, "An introduction to compressive sampling," IEEE Signal Process. Mag. 25(2), 21–30 (2008).

12. M. Dadkhah, M. J. Deen, and S. Shirani, "Compressive sensing image sensors: hardware implementation," MDPI Sensors 13, 4961–4978 (2013).

13. Y. Oike and A. El Gamal, "A 256×256 CMOS image sensor with ΔΣ-based single-shot compressed sensing," in Proceedings of IEEE International Solid-State Circuits Conference (IEEE, 2012), pp. 386–387.

14. L. Gao, J. Liang, C. Li, and L. V. Wang, "Single-shot compressed ultrafast photography at one hundred billion frames per second," Nature 516, 74–77 (2014).

15. K. Fife, A. El Gamal, and H.-S. P. Wong, "A multi-aperture image sensor with 0.7 μm pixels in 0.11 μm CMOS technology," IEEE J. Solid-State Circuits 43(12), 2990–3005 (2008).

16. J. Tanida, T. Kumagai, K. Yamada, S. Miyatake, K. Ishida, T. Morimoto, N. Kondou, D. Miyazaki, and Y. Ichioka, "Thin observation module by bound optics (TOMBO): concept and experimental verification," Appl. Opt. 40(11), 1806–1813 (2001).

17. P. Green, W. Sun, W. Matusik, and F. Durand, "Multi-aperture photography," ACM Trans. Graph. (SIGGRAPH) 26(3), 68 (2007).

18. X. Wu and R. Pournaghi, "High frame rate video capture by multiple cameras with coded exposure," in Proceedings of IEEE Conference on Image Processing (IEEE, 2010), pp. 577–580.

19. M. Shankar, N. P. Pitsianis, and D. J. Brady, "Compressive video sensors using multichannel imagers," Appl. Opt. 49(10), B9–B17 (2010).

20. S. Kawahito, G. Baek, Z. Li, S. Han, M. Seo, K. Yasutomi, and K. Kagawa, "CMOS lock-in pixel image sensors with lateral electric field control for time-resolved imaging," in Proceedings of the International Image Sensor Workshop (2013), pp. 1417–1429.

21. M. Seo, K. Kagawa, K. Yasutomi, T. Takasawa, Y. Kawata, N. Teranishi, Z. Li, I. A. Halin, and S. Kawahito, "A 10.8 ps-time-resolution 256×512 image sensor with 2-tap true-CDS lock-in pixels for fluorescence lifetime imaging," in Proceedings of IEEE International Solid-State Circuits Conference (IEEE, 2015), pp. 198–199.

22. J. Holloway, A. C. Sankaranarayanan, A. Veeraraghavan, and S. Tambe, "Flutter shutter video camera for compressive sensing of videos," in Proceedings of IEEE Conference on Computational Photography (IEEE, 2012), pp. 1–9.

23. F. Mochizuki, K. Kagawa, S. Okihara, M. Seo, B. Zhang, T. Takasawa, K. Yasutomi, and S. Kawahito, "Single-shot 200Mfps 5×3-aperture compressive CMOS imager," in Proceedings of IEEE International Solid-State Circuits Conference (IEEE, 2015), pp. 116–117.

24. C. B. Li, "An efficient algorithm for total variation regularization with applications to the single pixel camera and compressive sensing," Master's thesis, Rice University (2009).

25. R. Raskar, A. Agrawal, and J. Tumblin, "Coded exposure photography: motion deblurring using fluttered shutter," ACM Trans. Graph. (SIGGRAPH) 25(3), 795–804 (2006).

26. B. Zhang, K. Kagawa, T. Takasawa, M. Seo, K. Yasutomi, and S. Kawahito, "RTS noise and dark current white defects reduction using selective averaging based on a multi-aperture system," MDPI Sensors 14(1), 1528–1543 (2014).

27. M. Seo, S. Kawahito, K. Yasutomi, K. Kagawa, and N. Teranishi, "A low dark leakage current high-sensitivity CMOS image sensor with STI-less shared pixel design," IEEE Trans. Electron Devices 61(6), 2093–2097 (2014).

28. NAC Image Technology, "ULTRA Neo," https://www.nacinc.jp/analysis/ultra-neo/.

29. M. Seo, T. Sawamoto, T. Akahori, Z. Liu, T. Iida, T. Takasawa, T. Kosugi, T. Watanabe, K. Isobe, and S. Kawahito, "A low-noise high-dynamic-range 17-b 1.3-Megapixel 30-fps CMOS image sensor with column-parallel two-stage folding-integration/cyclic ADC," IEEE Trans. Electron Devices 59(12), 3396–3400 (2012).

30. C. Li, W. Yin, and Y. Zhang, "TVAL3: TV minimization by augmented Lagrangian and alternating direction algorithms," http://www.caam.rice.edu/~optimization/L1/TVAL3/.

31. S. Gleichman and Y. C. Eldar, "Blind compressed sensing," IEEE Trans. Inf. Theory 57(10), 6958–6975 (2011).

32. K. Yasutomi, T. Usui, S. Han, T. Takasawa, K. Kagawa, and S. Kawahito, "A 0.3mm-resolution time-of-flight CMOS range imager with column-gating clock-skew calibration," in Proceedings of IEEE International Solid-State Circuits Conference (IEEE, 2014), pp. 132–133.

33. S. Sukegawa, T. Umebayashi, T. Nakajima, H. Kawanobe, K. Koseki, I. Hirota, T. Haruta, M. Kasai, K. Fukumoto, T. Wakano, K. Inoue, H. Takahashi, T. Nagano, Y. Nitta, T. Hirayama, and N. Fukushima, "A 1/4-inch 8Mpixel back-illuminated stacked CMOS image sensor," in Proceedings of IEEE International Solid-State Circuits Conference (IEEE, 2013), pp. 484–485.

34. Y. Hitomi, J. Gu, M. Gupta, T. Mitsunaga, and S. K. Nayar, "Video from a single coded exposure photograph using a learned over-complete dictionary," in Proceedings of IEEE International Conference on Computer Vision (IEEE, 2011), pp. 287–294.

35. P. Llull, X. Liao, X. Yuan, J. Yang, D. Kittle, L. Carin, G. Sapiro, and D. J. Brady, "Coded aperture compressive temporal imaging," Opt. Express 21(9), 10526–10545 (2013).
