
CAOS-CMOS camera


Abstract

Proposed and experimentally demonstrated is the CAOS-CMOS camera design that combines the coded access optical sensor (CAOS) imager platform with the CMOS multi-pixel optical sensor. The unique CAOS-CMOS camera engages the classic CMOS sensor light staring mode with the time-frequency-space agile pixel CAOS imager mode within one programmable optical unit to realize a high dynamic range imager for extreme light contrast conditions. The experimentally demonstrated CAOS-CMOS camera is built using a digital micromirror device, a silicon point photo-detector with a variable gain amplifier, and a silicon CMOS sensor with a maximum rated 51.3 dB dynamic range. White light imaging of three simultaneously viewed targets of vastly different brightness levels, which is not possible with the CMOS sensor alone, is achieved by the CAOS-CMOS camera, demonstrating an 82.06 dB dynamic range. Applications for the camera include industrial machine vision, welding, laser analysis, automotive, night vision, surveillance and multispectral military systems.

© 2016 Optical Society of America

1. Introduction

Depending on the application, imaging systems are called scanners, profilers, cameras, imagers, and optical sensors. Classic state-of-the-art optical imager designs in a variety of applications deploy photo-detector arrays such as Charge Coupled Device (CCD) and Complementary Metal Oxide Semiconductor (CMOS) optical sensor devices. In 2014, Sony introduced the global shutter Pregius model IMX174 CMOS sensor with a 72.94 dB dynamic range [1]. In addition, recent CMOS sensor research devices from major commercial manufacturers are showing 80 dB dynamic ranges [2]. Techniques and sensor architectures used to achieve these high dynamic ranges are diverse and include pixel level light integration time control, pixel level electronic gain control including non-linear conversion gain control, and doubling pixel size for large quantum well size in the rolling shutter mode [3]. Nevertheless, there exists a strong need for an alternative imager design that operates under extreme contrast and brightness conditions (i.e., irradiance contrast levels > 10⁴:1, requiring over 80 dB camera dynamic range) and that has the ability to provide a large quantum well capacity with time-space pixel agility, optical spectrum flexibility, and exceptional inter-pixel crosstalk control and suppression. Starting in 2001, we proposed and extensively demonstrated (using a DMD: Digital Micromirror Device) an agile pixel Spatial Light Modulator (SLM)-based optical imager based on single pixel and dual pixel photo-detection with a large quantum well capacity that is suited for operations with both coherent and incoherent light across broad spectral bands, i.e., 337 nm – 2500 nm [4–11]. This imager design operates with the agile pixels programmed in a limited Signal-to-Noise Ratio (SNR) staring time-multiplexed mode, where acquisition of image irradiance (i.e., intensity) data is done one agile pixel at a time across the SLM plane where the desired incident image radiation is present. In effect, the agile pixel electronically adapts in a deterministic way to the imaging scenario to extract the user desired image data. This imager does not use pseudo-random spatial coding of the optical radiation with iterative computational signal processing of the detected photo-current to produce an estimated “computational” image. Instead, our imager physically samples and detects the true optical irradiance information and then deploys computer processing to stitch the agile pixel data together to produce the user desired image map. This imager can operate adaptively, electronically reprogramming its user specified agile pixel settings to improve the desired image data quality. To put things in context, it is important to note that imaging with a single pixel (or single point detector) goes back to the late 1960s, when the USSR space program and NASA explored robust imager designs for space missions [12,13]. Today, our proposed SLM-based agile pixel imager design [4] is referred to by some as a single pixel imager/camera (as one point photo-detector can be used for light detection versus a multi-element detector array). This basic imager has been engineered to implement Compressed Sensing (CS) based imaging [14], where the DMD imparts pseudo-random spatial codes on the light irradiance under observation and spatial correlation methods and iterative image processing are deployed to create an estimate of the true image, which is considered to have sparse spatial content.
Interestingly, essentially the same optical design as our proposed SLM-based imager has been used to form a ghost computational [15] and a ghost compressive imager [16], but in this case, the SLM codes the light before it strikes the object to be imaged. It is also important to note that coding of optical radiation for designing a variety of single photo-detector optical instruments has been around for over 50 years and has been deployed in a variety of ways (e.g., moving 2-D binary spatial codes) to extract spectral and spatial information [17–21]. For example, the Ref. [17] single pixel spectrometer encodes infrared optical spectra with two dimensionally patterned rotating gratings, while the Ref. [21] single pixel active 3-D imager (using a laser) deploys 2-D spatial codes (of the CDMA variety) to encode and decode scanned object pixel irradiances in the 3-D sample space. In addition, the DMD has also been used with classic CCD/CMOS cameras to realize imaging spectrometers [22] and to control camera blooming [23], Field of View (FOV) and pixel level irradiance integration times [24].

Motivated by modern day advances in RF wireless, optical wired communications, ultra-high speed electronic signal processing and photonic device technologies, and using our prior-art SLM-based imager design [4], recently proposed and demonstrated is a new and improved imager design platform called the Coded Access Optical Sensor (CAOS) [25,26] that has the ability to provide high dynamic range, low inter-pixel crosstalk images using time-frequency-space coded agile pixels. Presented in this paper is a novel hybrid imager design [27] that combines the CMOS/CCD sensor, or any photo-detector array (PDA) sensor, with the CAOS imager platform within one fully programmable optical camera unit. Specifically, the CAOS-CMOS/CCD imager functions as a smart high dynamic range image information sifter that is guided by raw image data generated from a CCD/CMOS PDA sensor. This paper for the first time describes the implemented optical design and operations of a CAOS-CMOS camera, including an experiment demonstrating the power of its high dynamic range to decipher objects under extreme contrast and brightness conditions.

2. The CAOS-CMOS camera design

Figure 1 shows the design of the proposed CAOS-CMOS imager. Lens L1 directs the irradiance to be imaged from an external light distribution plane onto the agile pixel plane of the programmable Two Dimensional (2-D) DMD. The point Photo-Detector (PD) engaged with the DMD via lens L2 operating in the Scheimpflug [28] imaging condition forms the CAOS imaging platform. In contrast, the CMOS PDA engaged with the DMD via lens L3 operating in the Scheimpflug imaging condition forms the CMOS imager. The DMD consists of a multi-pixel grid of micromirrors. Each micromirror is electronically programmable between two distinct digital tilt states, i.e., ±θ. In the proposed hybrid CAOS-CMOS imager design, the DMD provides two functions. First, it performs a spatial routing function for camera platform selection by forming a programmable beam splitter that directs chosen target scene irradiance pixels to either the point PD port or the CMOS PDA port. Second, the DMD imparts CAOS mode temporal modulation to certain pixels in the target scene irradiance map. Specifically, these selected agile pixels on the DMD can operate in different time-frequency coding modes like Frequency Division Multiple Access (FDMA), Code Division Multiple Access (CDMA), and Time Division Multiple Access (TDMA). CDMA and FDMA will produce spread spectrum electrical signals from the point PD, while TDMA is the staring-mode operation of the CAOS platform with one agile pixel at a time producing a DC signal per agile pixel position (the same as our original DMD imager). For full impact of the CAOS platform, agile pixel codes should include CDMA, FDMA or mixed CDMA-FDMA codes that not only produce a point PD signal with a broad electrical spectrum (one that looks like a chaotic electrical signal), but also engage advanced information coding techniques to provide isolation (e.g., minimum cross-correlation) and robustness amongst the simultaneously deployed time-frequency codes. The proposed Fig. 1 design also shows the placement of optional Smart Modules (SM) labelled as SM1, SM2, and SM3 in the system. Depending on the camera application, each SM can contain a variety of electronically programmable optical conditioning elements such as variable apertures, on/off shutters, variable focus lenses, spectral filters, polarizers, and variable attenuators. When engaged with electronic post-processing, the SMs can improve imaging performance parameters, including limiting saturation of the point PD and PDA. The choice of point PD and PDA also depends on the application. For example, a variety of point PDs can be used, such as avalanche PDs, bolometers and photo-multiplier tubes, while PDAs can be CMOS/CCD and Focal Plane Array (FPA) devices. Note that a silicon CMOS PDA with a limited spectral range can be modified for broader band operations using heterogeneous integration of thin film detectors on silicon CMOS circuitry [29,30]. As shown in Fig. 1, the point PD can also be engaged with a Variable Gain Amplifier (VGA) module with phase-locking operations. All electronically programmable components are controlled by the camera processor, which can adapt component drive conditions to deliver the desired camera imaging performance.
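
As a rough software-level illustration of the FDMA coding function described above, the Python sketch below generates a sequence of binary DMD frames in which each selected agile pixel toggles between its ±θ tilt states as a square-wave approximation of its assigned cosine code. The frame rate, the agile pixel locations, and the use here of the 28 × 31 grid are illustrative assumptions, not values from this section.

```python
import numpy as np

FRAME_RATE_HZ = 4000   # assumed DMD binary pattern rate
N_FRAMES = 4000        # 1 s worth of patterns
GRID = (28, 31)        # agile-pixel grid (the scan grid used later in the paper)

# (row, col) -> FDMA code frequency in Hz; locations are hypothetical,
# frequencies follow the paper's f1 and f2
agile_pixels = {(10, 5): 133.4, (10, 20): 200.2}

def fdma_frames(pixels, n_frames, frame_rate):
    """Yield binary DMD frames: True = +theta (light routed to the point PD),
    False = -theta. Each coded agile pixel is driven as a 50% duty-cycle
    square wave at its code frequency, a binary approximation of
    cos(2*pi*f*t) since a micromirror has only two tilt states."""
    t = np.arange(n_frames) / frame_rate
    for k in range(n_frames):
        frame = np.zeros(GRID, dtype=bool)  # uncoded pixels stay at -theta
        for (r, c), f in pixels.items():
            frame[r, c] = np.cos(2 * np.pi * f * t[k]) >= 0.0
        yield frame

frames = list(fdma_frames(agile_pixels, N_FRAMES, FRAME_RATE_HZ))
print(len(frames), frames[0].shape)
```

Because the micromirror is binary, the realized modulation is a square wave rather than a pure cosine; the resulting odd harmonics are one reason the frequency codes must be selected carefully, as discussed in Section 3.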

Fig. 1 The CAOS-CMOS Camera design.

The CAOS-CMOS camera high dynamic range operation can be explained using the following example. Imagine the camera focused on a given far field scene using controlled SM1 parameters, with all DMD micromirrors set to their −θ state to direct all the imaged scene irradiance to the CMOS PDA. Now imagine that the scene happens to contain a high brightness target called target 1 as well as an extremely low brightness target called target 2. Furthermore, assume that the difference in irradiance between the target 1 maximum irradiance pixel and the target 2 maximum irradiance pixel exceeds the dynamic range capability of the CMOS sensor. In such a high contrast imaging scenario, the CMOS sensor will fail to register target 2 in the viewed scene. In fact, if target 1 pixels have significant optical irradiance due to being a high brightness target, all the CMOS sensor pixels can saturate, causing a full white image to be seen as the output of the CMOS sensor. In such a scenario, a variable optical attenuator in SM1 and/or SM3 can be engaged to start attenuating the optical irradiance reaching the CMOS sensor. The assumption here is that such an attenuation process does not spoil the final quality of the viewed image. Given adequate attenuation, the irradiance value of the maximum irradiance pixel in target 1 can be brought down to a level just below saturation of the CMOS pixel. At this stage, target 1 becomes visible to the CMOS sensor, providing a gray-scale image of target 1 within the CMOS sensor dynamic range limit. Target 2 remains undetected by the CMOS sensor. Also, as the camera processor has determined the location and pixel-based irradiances of target 1, the target free zones in the CMOS sensor viewed scene are recorded. Next, the CAOS mode of the camera is engaged by turning on the ±θ state temporal coded modulations for the DMD pixels in the target free zones. The point PD now provides a coded electrical signal that can undergo electronic signal processing (e.g., a Fourier Transform when using FDMA codes) to decode and recover the relative irradiance values of the pixels from the target 1 free zones. In general, the point PD gives a time-frequency encoded current i(t) which can be written as:

$$i(t) = K \sum_{n=1}^{N} c_n(t)\, I_n(x_n, y_n). \tag{1}$$

For the nth agile pixel on the DMD, In is the pixel irradiance at the pixel central coordinates (xn,yn), cn(t) is its time-frequency code, N is the total number of simultaneously coded agile pixels and K is a constant depending on various photo-detection factors. The next stage of signal conditioning is amplification where i(t) can be electrically amplified by a fixed factor GA giving iA(t):

$$i_A(t) = G_A\, i(t). \tag{2}$$

Electrical coherent amplification using a phase-locked amplifier can also be deployed for weak optical irradiance signal amplification. The signal iA(t) next undergoes time-frequency domain processing such as the Fourier Transform (FT) to recover the optical irradiance value at each agile pixel using the known agile pixel FDMA modulation codes. Using Eqs. (1) and (2), and assuming pure frequency codes cn = cos(2πfnt) are used for agile pixel coding where fn is the frequency code for the nth agile pixel, the output S(f) of the Fourier Transform of iA(t) is written as:

$$S(f) = \mathrm{FT}\{ i_A(t) \} = \mathrm{FT}\!\left\{ G_A K \sum_{n=1}^{N} c_n(t)\, I_n(x_n, y_n) \right\} = \mathrm{FT}\!\left\{ G_A K \sum_{n=1}^{N} \cos(2\pi f_n t)\, I_n(x_n, y_n) \right\}. \tag{3}$$

The Eq. (3) expression can be evaluated further. Assuming single sideband spectrum analysis and using G as a fixed spectrum analysis gain factor, Eq. (3) can be reformulated as:

$$S(f) = G\, G_A K \sum_{n=1}^{N} I_n(x_n, y_n)\, \delta(f - f_n) = G\, G_A K \left\{ I_1(x_1, y_1)\, \delta(f - f_1) + I_2(x_2, y_2)\, \delta(f - f_2) + \cdots + I_N(x_N, y_N)\, \delta(f - f_N) \right\}. \tag{4}$$

Equation (4) assumes that a single frequency spectral code appears as a delta function; in reality, each finite time duration signal has a finite spectral bandwidth. The Eq. (4) expression indicates that the nth agile pixel coded irradiance (having a unique identifier frequency code) separates out into a spectral peak in the frequency domain at f = fn having an amplitude proportional to In(xn,yn). Note that if FDMA codes are used, the simultaneously sampled agile DMD pixels need to be programmed with unique frequency codes to enable decoding of the optical irradiances at these pixel locations in the resulting spectral domain. The FDMA frequency codes need to be chosen carefully to avoid interference/crosstalk due to inter-modulation products and frequency harmonics of the chosen frequency codes. Using Eq. (4), the irradiance map of the image at the DMD plane can be reconstructed by sampling the amplitude of S(f) at each coded frequency location and then assigning each amplitude to its corresponding agile pixel location, allowing a complete 2-D image reconstruction. Because the point PD combined with the VGA, SM1/SM2 control, and electronic decoding can avoid point PD optical saturation and produce high computational signal processing gain, target 2 will register in the viewed camera image when using the CAOS mode. To calibrate the low brightness target 2 relative to the high brightness target 1, the CAOS mode is also applied to at least the maximum irradiance zone of the known target 1 zone so that a relative irradiance map for both targets can be generated. In effect, the CAOS-CMOS camera via smart camera processing operations can produce a true image from a high brightness and high contrast imaging scenario.
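
To make Eqs. (1)–(4) concrete, below is a minimal Python simulation of FDMA encoding and FFT decoding for N = 2 agile pixels. The pixel photocurrent amplitudes, the record length, and the unit detection constant K are illustrative assumptions; the two code frequencies and the 2.4 kHz sampling rate match values used later in the paper.

```python
import numpy as np

FS = 2400.0                 # ADC sampling rate, Hz (value used later in the paper)
T = 5.0                     # record length, s; chosen so f*T is an integer and
                            # both code frequencies fall on exact FFT bins
t = np.arange(0, T, 1 / FS)

K, GA = 1.0, 2.38e6         # detection constant (assumed) and VGA gain, V/A
codes = [133.4, 200.2]      # FDMA code frequencies f1, f2
I_true = [3.0e-7, 5.0e-8]   # per-pixel irradiance-scaled photocurrents (assumed)

# Eq. (1): i(t) = K * sum_n cos(2*pi*fn*t) * In(xn, yn)
i_t = K * sum(I * np.cos(2 * np.pi * f * t) for f, I in zip(codes, I_true))
iA_t = GA * i_t             # Eq. (2): fixed electrical amplification

# Eqs. (3)-(4): single-sided spectrum; each code yields a peak at f = fn whose
# amplitude is proportional to the corresponding pixel irradiance
S = np.fft.rfft(iA_t)
freqs = np.fft.rfftfreq(len(iA_t), 1 / FS)
mag = 2.0 * np.abs(S) / len(iA_t)   # scaling so a cosine of amplitude A reads A

for f, I in zip(codes, I_true):
    peak = mag[np.argmin(np.abs(freqs - f))]
    print(f"f = {f} Hz: recovered {peak:.3e}, expected {GA * K * I:.3e}")
```

With the 5 s record each code frequency lands on an exact FFT bin, so the recovered peak equals the expected GA·K·In to floating point precision; shorter records introduce spectral leakage, the finite-bandwidth effect noted after Eq. (4).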

How the camera sifts through the viewed pixels in the scene in the CAOS mode depends on the camera application and optical scene characteristics. In other words, the size, location and temporal characteristics of the coded agile pixel as well as the number of simultaneous agile pixels used during sifting is determined by the camera processor in collaboration with the CMOS sensor gathered real-time images plus machine learning parameters acquired via prior training of the application deployed camera. In effect, speed of acquisition of full images via the proposed camera is not only hardware and software dependent, but also application dependent with trade-offs between agile pixel sizes and sampled pixel total count versus camera response time.

3. Experimental demonstration of the CAOS-CMOS camera

For a first demonstration of the basics of the CAOS-CMOS imager, the experimental setup implemented in the laboratory is shown in Fig. 2. L1, L2, and L3 have a diameter of 2.54 cm, while the focal lengths of L1, L2 and L3 are 7.5 cm, 2.5 cm, and 4.0 cm, respectively. The DMD used is Texas Instruments' DLP3000 DMD chip having a micromirror pitch of 7.637 μm, a 608 × 684 micromirror array arranged in a diamond configuration, and a micromirror tilt angle θ = ±12° with respect to the DMD normal. The CMOS PDA sensor deployed is the IDS UI-1250LE-M-GL monochrome CMOS camera module, which uses the EV76C570ABT sensor with a dynamic range of 51.3 dB, a pixel size of 4.5 μm, an exposure time of 87.2 ms, a frame rate of 11.5 frames per second and a pixel count of 1600 × 1200 pixels [31]. The Dynamic Range (DR) of an optical image sensor is given by [1,32]: DR = 20 log[Full Well Charge Storage Capacity of the Photo-cell (electrons) / Noise Charge (electrons)] = 20 log[isat(Saturation) / iN(Dark Noise)] = 20 log(Pmax / Pmin), where Pmax and Pmin are the photo-cell (i.e., sensor pixel) maximum detected optical power and the noise-limited minimum detected optical power, respectively.
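
As a small numeric check of this definition, the helper below (not from the paper) converts a sensor's Pmax/Pmin power ratio to dB and back:

```python
import math

def dynamic_range_db(p_max: float, p_min: float) -> float:
    """Optical sensor dynamic range DR = 20*log10(Pmax/Pmin), in dB."""
    return 20.0 * math.log10(p_max / p_min)

# The CMOS sensor's rated 51.3 dB DR corresponds to a ~367:1 power ratio,
# while 82 dB corresponds to a ~12,600:1 ratio.
print(10 ** (51.3 / 20), 10 ** (82.0 / 20))
```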

Fig. 2 Snapshot of the CAOS-CMOS camera experimental setup in the laboratory.

The distance between the centers of L1 and the DMD chip is 7.9 cm, the distance between the centers of the DMD and L2 is 8.4 cm, the distance between the centers of L2 and the PD is 3.7 cm, the distance between the centers of the DMD and L3 is 8.8 cm, and the distance between the centers of L3 and the CMOS sensor is 7.3 cm. The SM1 deployed consists of a controllable aperture set to a diameter of 1.85 cm for the complete duration of the experiment. Note that for this demonstration, SM2 and SM3 are not deployed. The processor block in Fig. 2 consists of dedicated circuitry to control the DMD, a PD circuit for PD current-to-voltage conversion, a laptop, and an STM32F4 microcontroller (μC) from STMicroelectronics comprising a 12-bit Analog-to-Digital Converter (ADC). The PD voltage from the PD circuit is acquired by the ADC and then transferred in real time to the laptop using a USB serial communication interface. Note that this voltage is limited to 3 V, which is the maximum limit of the ADC deployed in the experiment. The laptop is also connected to the DMD circuitry via USB to control the tilt states of the individual DMD micromirrors. The size, shape and time-frequency codes of the agile pixels on the DMD chip are completely programmable. The CMOS sensor is interfaced with the laptop directly via a USB connection. A custom designed Graphical User Interface (GUI), developed in C++/CLI using Visual Studio 2012, runs on the laptop to manage CAOS-CMOS camera operations. The scene for this demonstration is shown in the Fig. 3 image, acquired using a Nikon D3300 camera. The Fig. 3 target scene consists of a Rolson 5 W Aluminum Z2 LED torch with a head diameter of 53 mm and a luminosity of 180 lumens, a custom made traffic sign, and an incandescent light bulb with a line filament. The torch represents a bright light source, the traffic sign is a dim passive target, while the filament in the light bulb is a current controlled variable brightness target. All three targets lie in the same plane located at a distance of 1.38 m from L1. To ensure adequate lighting of the target scene, particularly of the passive traffic sign region, the ambient room light is turned on. In addition, the target scene is also illuminated using two LED work lights (model Streetwize multi-purpose rechargeable torch/work light), providing adequate scene lighting. Note that the Fig. 3 image field of view is analogous to the field of view of the CAOS-CMOS camera.

Fig. 3 Field of View of the target scene as viewed from the CAOS-CMOS camera in the experimental demonstration.

The point PD (i.e., photo-cell) in the experiment is a Silicon Switchable Gain Detector PDA36A by Thorlabs [33] having an active area of 13 mm² (3.6 mm × 3.6 mm). This point PD is operated at its in-built 70 dB gain setting and the output is terminated with a 50 Ω resistor. In this setting, the PD provides a gain GA of 2.38 × 10⁶ V/A, a maximum voltage output Vmax of 5 V and a Bandwidth (BW) of 5 kHz. The rise time tr of the PD is computed as tr = 0.35/BW = 70 μs [34]. Since the ADC used has a maximum input voltage limit of 3 V, Vmax is limited to 3 V. The maximum current output imax can be found as imax = Vmax/GA = 3/(2.38 × 10⁶) = 1.26 × 10⁻⁶ A. Using a wavelength responsivity R(λ) of 0.35 A/W at λ = 550 nm (the central wavelength of the visible range), the maximum detectable optical power Pmax of the PD is computed to be Pmax = imax/R(550 nm) = (1.26 × 10⁻⁶)/0.35 = 3.6 × 10⁻⁶ W. To compute the dynamic range of the point PD, its minimum detectable optical power Pmin is also needed. For that, the Noise Equivalent Power (NEP) [35] of the PD is used. The NEP is the incident optical power required to produce a signal equal to the total noise power from all sources in the detector. For the PD in the experiment, the wavelength dependent NEP(λ) at λ = 950 nm is 2.10 × 10⁻¹² W/√Hz. R(950 nm) is also known to be 0.65 A/W from the point PD datasheet. Since the point PD is operated in the visible range, NEP(550 nm) is required and is computed using NEP(λ) = NEPmin × [Rmax/R(λ)], where in this setting, NEPmin = NEP(950 nm) and Rmax = R(950 nm). Thus, NEP(550 nm) = NEP(950 nm) × R(950 nm)/R(550 nm) = 4.01 × 10⁻¹² W/√Hz. Since the PD BW is 5 kHz at the 70 dB setting, Pmin can be computed using Pmin = NEP(λ) × √BW = NEP(550 nm) × √(5 × 10³) = 2.84 × 10⁻¹⁰ W. Therefore, the point PD electrical Dynamic Range (DR), defined earlier [1,32] as 20 log(Pmax/Pmin), is computed to be 82.0668 dB at the settings used in the experiment. This designed DR value for the point PD will be compared to the experimentally demonstrated CAOS mode camera DR.
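
The chain of calculations in this paragraph can be reproduced with a short script; all inputs below are the values quoted in the text, and the small difference between the scripted result and the quoted 82.07 dB comes from rounding of the intermediate NEP and responsivity values:

```python
import math

# Values quoted in the text for the Thorlabs PDA36A at its 70 dB gain setting.
GA = 2.38e6          # transimpedance gain, V/A
V_MAX = 3.0          # ADC input limit, V
R_550 = 0.35         # responsivity at 550 nm, A/W
R_950 = 0.65         # responsivity at 950 nm, A/W
NEP_950 = 2.10e-12   # NEP at 950 nm, W/sqrt(Hz)
BW = 5.0e3           # PD bandwidth at the 70 dB setting, Hz

i_max = V_MAX / GA                   # max photocurrent before ADC saturation
p_max = i_max / R_550                # ~3.6e-6 W
nep_550 = NEP_950 * R_950 / R_550    # ~3.9e-12 W/sqrt(Hz); the text quotes
                                     # 4.01e-12, presumably from responsivity
                                     # values read more finely off the curve
p_min = nep_550 * math.sqrt(BW)      # noise-limited minimum power, ~2.8e-10 W
dr_db = 20.0 * math.log10(p_max / p_min)
print(f"Pmax = {p_max:.2e} W, Pmin = {p_min:.2e} W, DR = {dr_db:.1f} dB")
# -> DR ~= 82 dB, consistent with the 82.0668 dB figure used in the paper.
```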

To begin the experimental demonstration, the CMOS mode of the CAOS-CMOS camera is deployed. With the torch off and the bulb on, an image of the scene is acquired using the CMOS mode, shown in Fig. 4(a). The torch, the traffic sign and the bulb's filament are all clearly seen in Fig. 4(a). Next, the torch is turned on, causing the CMOS sensor to saturate completely due to the high light intensity, as shown in Fig. 4(b). To combat CMOS sensor saturation, Thorlabs Neutral Density (ND) filters are incorporated into SM1. Attenuating the optical irradiance from the scene by a factor of 1,000 compared to Fig. 4(b) results in the image shown in Fig. 4(c). The Fig. 4(c) image shows that only the torch region is completely saturated, while the other targets are not visible. Next, the scene is attenuated by a factor of 3,200 compared to Fig. 4(b), resulting in the image in Fig. 4(d). This Fig. 4(d) image gives a clearer boundary of the torch region. However, the bright zones in the torch region in Fig. 4(d) are still saturated. Attenuating the optical irradiance from the scene by a factor of 10,000 compared to Fig. 4(b) results in the image in Fig. 4(e). The Fig. 4(e) image gives an even clearer outline of the torch region with the bright zones of the torch falling just under the saturation limit, indicating that this is the amount of attenuation required to bring the bright torch region into the unsaturated range of the CMOS sensor. However, in Fig. 4(e), the filament bulb and the traffic sign are absent, indicating that the optical irradiance from these zones has been attenuated into the noise floor of the CMOS sensor. This conclusion is further confirmed by viewing Fig. 4(f), which shows the logarithmic scale image of the irradiance data from the Fig. 4(e) image. Thus the CMOS mode of the CAOS-CMOS camera fails to register at the same time both the bright region of the torch and the dim regions of the traffic sign and the filament in the viewed scene. Nevertheless, the CMOS mode of the camera has provided the camera system with the locations (within the CMOS sensor pixel grid) of scene regions where a possible yet to be seen target may exist. With this CMOS-mode provided scene intelligence, the high DR CAOS mode of the CAOS-CMOS camera is deployed. Specifically, one can use smart CAOS-mode scanning of these possible target-free regions, such as by performing DMD programmed horizontal and vertical slit scans over a given sub-region, to quickly identify regions of the scene with possible low light level targets. If the fast scans indicate the presence of such low light level targets, the DMD can be programmed in the agile pixel pin-hole CAOS mode to spatially resolve the specific pixel-grid locations of these targets in the viewed scene. It is important to note that the CAOS-CMOS camera is a fully programmable adaptive system that efficiently engages both the CMOS mode and the CAOS mode to search for targets within the high DR scene environment, and no a priori scene information is required for successful imaging operations. Such coordinated operations also speed up the CAOS mode target search operations of the proposed hybrid camera. In summary, compared to a CAOS-only mode camera, the CAOS-CMOS camera has the ability to speed up CAOS-mode operations by only scanning the scene regions where the CMOS mode failed to register a target.

Fig. 4 Images of the scene viewed using the CMOS mode of the CAOS-CMOS camera. (a) Unsaturated scene with the torch off, (b) saturated scene due to torch lighting, (c) scene of (b) attenuated by a factor of 1,000, (d) scene of (b) attenuated by a factor of 3,200, (e) scene of (b) attenuated by a factor of 10,000, and (f) logarithmic scale image of the irradiance data from image (e).

For the present experiment, where we already know the locations of the deployed high brightness and low light level test targets, we do not require a CAOS mode target search operation using the CMOS-mode provided Fig. 4(f) image data. Thus, to implement a complete experimental image acquisition comparison of the CMOS mode versus the CAOS mode, an agile pixel pin-hole scan of the entire scene is done using the CAOS mode. For this CAOS mode implementation, there is no attenuation installed in SM1. The target scene consists of the torch on and the filament on, i.e., the same setting which resulted in the CMOS mode giving a saturated image in Fig. 4(b). In the experimental demonstration, the PD signal iA(t) is digitized using the ADC in the μC and stored in the laptop. This allows the Fourier Transform operation to be conducted using MATLAB, in which the Fast Fourier Transform (FFT) function is utilized. Additionally, the number of simultaneously coded agile pixels on the DMD chip is 2 (i.e., N = 2). This is due to hardware limitations of speed and memory imposed by the low cost DMD board and the interfacing between the DMD circuitry, μC and the laptop. Because of this limitation, image reconstruction in the CAOS mode is accomplished using a hybrid FDMA and TDMA mode of operations. First, two simultaneously time-frequency FDMA coded agile pixels modulate and direct the coded light to the PD. After the PD signal is digitized and transferred to the laptop, the next set of two agile pixels at different locations from the previous set is modulated with the same unique frequency codes as the previous set, and the corresponding PD signal is acquired and stored. This is repeated in time (implementing TDMA) until the PD signals from all agile pixels covering the desired scan zone have been acquired. Note that ideally, the image can be acquired almost instantly in the FDMA mode, given that all the pixels modulate with different codes at the same time. In the CAOS mode demonstration, the programmed agile pixel has dimensions of 20 × 20 micromirrors, resulting in a pixel size of 152 μm × 152 μm. This 20 × 20 micromirror pixel size is chosen to ensure that adequate irradiance emanating from the passive traffic sign region in the scene is captured to register an irradiance reading above the noise floor of the system. It is also important not to exceed the ADC input voltage limit when choosing the pixel size and the number of pixels. For example, the ADC in the experiment has a 3 volt limit, which means the PD voltage resulting from the PD current-to-voltage conversion circuitry must have a peak value of less than 3 volts in order to avoid saturation of the ADC readings. It turns out that when using the 20 × 20 micromirror pixel and with N = 2, the maximum voltage reading from the ADC in the demonstration (which occurs when the agile pixels are located at the brightest zone of the torch in the scene) is 2.8 volts. Note that if N = 3 is chosen, the ADC in the present experimental setting will saturate when the pixels are located in the torch region. In such a scenario, an ADC with a higher input voltage limit could be used, a smaller agile pixel size could be deployed, GA could be reduced, or N could be adjusted. Therefore, the choice of these parameters is important from the CAOS performance point of view.
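
The N = 3 saturation argument above is essentially a linear scaling estimate; a hedged back-of-envelope sketch follows, assuming the peak ADC voltage scales linearly with the number of equally bright coded pixels (only an approximation for real scenes):

```python
V_LIMIT = 3.0       # ADC input limit, V
v_peak_n2 = 2.8     # measured peak ADC voltage with N = 2 (brightest torch zone)
v_per_pixel = v_peak_n2 / 2   # crude per-pixel contribution estimate

for n in (2, 3, 4):
    v = n * v_per_pixel
    print(f"N = {n}: estimated peak ~ {v:.1f} V"
          f" ({'OK' if v < V_LIMIT else 'saturates the ADC'})")
```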
Using the chosen 20 × 20 micromirror pixel size and N = 2, a total of 868 agile pixels (i.e., 434 separate agile dual-pixel sets) are modulated on the DMD chip in the hybrid FDMA-TDMA mode to obtain the complete image of the Fig. 3 scene. The 868 pixel scan consists of 28 rows and 31 columns in the DMD’s diamond arrangement of micromirrors. The two frequency codes chosen are f1 = 133.4 Hz and f2 = 200.2 Hz. This choice of frequency codes ensures that the resulting intermodulation products between the frequencies and their harmonics do not affect the frequency domain amplitudes at f1 and f2.
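
One hedged way to automate this frequency code check is to enumerate each candidate code's low-order harmonics together with the dominant second and third order intermodulation products, and verify that none lands near a code frequency; the 1 Hz tolerance and the order cut-off below are illustrative choices, not values from the paper.

```python
from itertools import combinations

def codes_are_clean(freqs, tol_hz=1.0):
    """Check candidate FDMA code frequencies against their own 2nd/3rd
    harmonics and 2nd/3rd-order intermodulation products (the dominant
    spurious terms; higher-order products are progressively weaker)."""
    spurs = set()
    for f in freqs:
        spurs.update({2 * f, 3 * f})                  # harmonics
    for f1, f2 in combinations(freqs, 2):
        spurs.update({f1 + f2, abs(f1 - f2)})         # 2nd-order intermods
        spurs.update({2 * f1 + f2, abs(2 * f1 - f2),  # 3rd-order intermods
                      2 * f2 + f1, abs(2 * f2 - f1)})
    return all(abs(s - f) > tol_hz for s in spurs for f in freqs)

print(codes_are_clean([133.4, 200.2]))  # True for the paper's code pair
```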

The steps implemented to extract the optical irradiance data at each pixel of the image using the CAOS mode are: (1) Using the hybrid FDMA-TDMA mode, the point PD sequentially (TDMA) acquires the optical irradiance signals from all 434 agile pixel sets modulating at the FDMA codes f1 and f2. (2) The resulting PD voltage values for all sets are sampled at 2.4 kHz using the ADC and stored in the laptop memory as 434 “.txt” files (one per modulating dual-pixel set). (3) For each file, the resulting digitized signal is subjected to the FFT operation in MATLAB. (4) In the resulting frequency domain plots of |S(f)| vs. f in MATLAB for each file, the magnitudes of S(f) at f1 = 133.4 Hz and f2 = 200.2 Hz are recorded. Note that these |S(f1)| and |S(f2)| values for all 434 dual-pixel sets give the relative optical irradiance strengths at their respective agile pixel locations. (5) All |S(f1)| and |S(f2)| values acquired for all files are assigned to the corresponding 868 agile pixel locations to obtain the complete DMD plane optical irradiance map I(x, y).
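
Steps (2)–(5) amount to an FFT peak-picking loop over the stored records; a minimal Python sketch is given below (the paper uses MATLAB). The file naming and the record-to-pixel ordering are hypothetical, since the stored record format is not specified.

```python
import numpy as np

FS = 2400.0               # ADC sampling rate from the text, Hz
F_CODES = (133.4, 200.2)  # FDMA code frequencies f1 and f2, Hz
ROWS, COLS = 28, 31       # agile-pixel scan grid from the text

def dual_pixel_irradiance(signal, fs=FS, f_codes=F_CODES):
    """FFT one digitized PD record and return |S(f1)|, |S(f2)|, the relative
    irradiances of the two simultaneously coded agile pixels."""
    mag = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    return tuple(mag[np.argmin(np.abs(freqs - f))] for f in f_codes)

# Hypothetical assembly loop: one stored record per dual-pixel set, filled
# into the 28 x 31 irradiance map in an assumed raster scan order.
I_map = np.zeros((ROWS, COLS))
flat = I_map.ravel()                         # view into I_map
for k in range(434):                         # 434 dual-pixel sets = 868 pixels
    record = np.loadtxt(f"set_{k:03d}.txt")  # hypothetical file name
    flat[2 * k], flat[2 * k + 1] = dual_pixel_irradiance(record)
```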

Figure 5(a) shows the acquired 2-D image reconstruction I(x, y) of the target scene imaged on the DMD plane using the CAOS mode. The color coding in the Fig. 5(a) plot indicates the scaled relative intensity of the acquired I(x, y) map and is reported in the manuscript for visual illustration purposes only. Figure 5(a) is plotted in the diamond coordinates of the DMD chip micromirror layout. The Fig. 5(a) image map shows the shape of the torch region and is comparable to the CMOS sensor acquired image in Fig. 4(e). The Fig. 5(a) CAOS mode obtained scaled I(x, y) map includes both the traffic sign and the filament bulb data, but these are not visible due to the limited DR of the display mechanism (e.g., computer display, printed hard copy) used to show the irradiance map. To enable high DR viewing of the scene, the logarithm of the scaled I(x, y) data is taken and plotted as Fig. 5(b) to reveal the true power of the CAOS mode. In Fig. 5(b), the torch, traffic sign and filament regions are simultaneously “seen”, demonstrating the presence of two more targets in the field of view of the camera that were otherwise unseen by the CMOS mode of the hybrid camera.
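
The high DR display step used for Fig. 5(b) is simply a logarithmic remapping of the scaled irradiance map; a one-function sketch follows (the noise-floor guard value is an assumption to avoid taking the logarithm of zero):

```python
import numpy as np

def log_display(I_map, floor=1e-4):
    """Map a scaled irradiance image to log10 units for high-DR viewing,
    clipping values below an assumed noise-floor guard."""
    return np.log10(np.clip(I_map / I_map.max(), floor, None))
```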

Fig. 5 2-D image reconstruction of the target scene using the CAOS mode of the CAOS-CMOS camera. (a) scaled irradiance map I(x,y) is shown, (b) scaled irradiance map of the logarithm of I(x,y) values is shown, and (c) same plot as in (b) but with additional labels of regions R1, R2, R3 and R4 as well as location of traces T1, T2 and T3 used for quantitative image analysis purposes.

For an analysis of the Fig. 5(b) CAOS-mode acquired image, different sets of two pixels are marked in Fig. 5(c), which is the same plot as Fig. 5(b), as regions R1, R2, R3 and R4. R1 covers a segment of the torch, R2 has one pixel covering the filament and one in the background, R3 samples the traffic sign irradiance, while R4 is on the black background and acts as a comparison. Figure 6 shows the resulting FFT plots in MATLAB for all four regions R1, R2, R3 and R4. Note that in the Fig. 6 frequency domain plots, only the peaks at frequencies 133.4 Hz and 200.2 Hz are of interest, since these correspond to the modulation codes of the pixels. The 150 Hz peak labeled in each of the Fig. 6 plots is present due to the photodetection of light sources (including the room lighting and the torch) powered by the mains electricity supply. 150 Hz is the 3rd harmonic of the 50 Hz mains electricity supply [36] and is picked up in the FFT operation over all agile pixels. However, this 150 Hz peak does not carry I(x, y) data, since the peaks at the FDMA code frequencies of 133.4 Hz and 200.2 Hz are the ones of interest. Figure 6(a) plots the frequency domain amplitude for R1, giving heights of |S(f)| = 1940 for f1 = 133.4 Hz and |S(f)| = 401.7 for f2 = 200.2 Hz. The peak at f1 of frequency domain amplitude 1940 is the highest recorded peak in the CAOS mode acquired image. Figure 6(b) plots |S(f)| for R2 covering the filament. In Fig. 6(b), the amplitude |S(f1)| = 3.125 indicates the pixel covering the filament, while the pixel covering the scene background has a relative irradiance of |S(f2)| ~0.1. Figure 6(c) plots |S(f)| for R3 covering the traffic sign, giving |S(f1)| = 0.2823 and |S(f2)| = 0.3037. Figure 6(d) plots |S(f)| for R4 covering the background of the scene. Figure 6(d) shows that no significant irradiance is picked up in the frequency domain plot at the two frequencies of interest, i.e., |S(f1)| and |S(f2)| ~0.1, which is the noise floor of the experimental system. Also marked in Fig. 5(c) are line traces T1, T2, and T3. Trace T1 passes through the torch region, T2 passes through the traffic sign region, while T3 passes through the filament region. The 1-D optical irradiances I(y) along the y-direction of the DMD chip for traces T1, T2, and T3 are shown in Figs. 7(a), 7(b), and 7(c), respectively. These Fig. 7 plots illustrate the variation of optical irradiance along different line regions of the image.

Fig. 6 The frequency domain plots of chosen CAOS acquired signals at regions (a) R1 covering a segment of the torch, (b) R2 covering the filament, (c) R3 covering the traffic sign, and (d) R4 covering the black background. Note that only |S(f)| peaks at frequencies f1 = 133.4 Hz and f2 = 200.2 Hz indicate the scaled relative optical irradiance at the corresponding agile pixels. The peak at 150 Hz appearing in each plot is due to the 3rd harmonic of the 50 Hz electricity mains supply.

Fig. 7 Plots showing the measured scaled irradiances along Fig. 5(c) labeled traces (a) T1, (b) T2, and (c) T3.

To evaluate the Dynamic Range of the Fig. 5 CAOS mode acquired image, note that the highest measured relative irradiance value (|S(f)|) is 1940, which is for the torch region of the scene, whereas the lowest recorded “signal” (above the noise floor of ~0.1) is measured to be |S(f)| = 0.153. Using these values, the DR of the CAOS mode is computed to be 20 log(1940/0.153) = 82.06 dB. Note that this result matches the DR of 82.0668 dB calculated earlier for the point PD used in the experiment. Therefore, the DR obtained by the CAOS mode is limited by the DR of the point PD. In the experiment, due to the weak optical irradiance from the traffic sign portion of the scene, the 70 dB gain setting of the point PD amplifier is deployed. If, for example, the lower gain 30 dB setting of the PD VGA is used, which provides GA = 0.75 × 10⁴ V/A, NEP(970 nm) = 2.34 × 10⁻¹² W/√Hz and BW = 5.5 MHz, the point PD DR is computed to be 100.7432 dB, a near 19 dB improvement in camera DR compared to the 70 dB VGA gain setting. However, the trade-off for the higher DR at the reduced VGA gain is a lower photocurrent from weakly illuminated scene regions, such as the traffic sign in Fig. 3. In such a scenario, another parameter which can be adjusted is the agile pixel size. In the experimental demonstration, a 20 × 20 micromirror pixel size is deployed. Using a larger pixel size increases the per pixel coded optical irradiance striking the point PD, increasing the likelihood of detection of weak light irradiances, although at the expense of output image resolution. In this case, one must also be careful not to exceed the voltage limit of the ADC, as increasing the agile pixel size also increases the light irradiance collected from the bright regions of the scene. Note that there is also a trade-off between the CAOS-mode imaging speed and N, the number of simultaneously sampled agile pixels. To operate the camera at a faster imaging speed, a larger number of simultaneously modulating agile pixels is desirable, but this can result in a stronger total optical irradiance striking the point PD. To avoid saturation of the point PD, the VGA gain then needs to be reduced, and this affects the detection of weak irradiance regions in the viewed scene.
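
Repeating the earlier dynamic range calculation for the 30 dB VGA setting quoted above gives a quick check; the 550 nm and 970 nm responsivity values below are approximate readings from the PDA36A datasheet curve, so the result matches the quoted 100.7432 dB only to within rounding:

```python
import math

GA_30 = 0.75e4       # transimpedance gain at the 30 dB setting, V/A
V_MAX = 3.0          # ADC input limit, V
R_550 = 0.35         # responsivity at 550 nm, A/W
R_970 = 0.65         # approximate responsivity at 970 nm, A/W (datasheet curve)
NEP_970 = 2.34e-12   # NEP at 970 nm, W/sqrt(Hz)
BW_30 = 5.5e6        # bandwidth at the 30 dB setting, Hz

p_max = (V_MAX / GA_30) / R_550
p_min = NEP_970 * (R_970 / R_550) * math.sqrt(BW_30)
print(f"DR = {20 * math.log10(p_max / p_min):.1f} dB")  # ~101 dB
```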

The smart operation of the hybrid camera uses the fast acquisition CMOS-mode images, which undergo threshold image processing to first identify the spatial locations of any high brightness targets. To implement fast CAOS-mode operations, a search mode is implemented to look for low optical irradiance targets, such as by using orthogonal direction line (slit) scans, e.g., along the x and y directions of the DMD plane. For the case in the experiment, the right-hand side of the CMOS-mode image (see Fig. 4(c)) appears dark, covering about half the image frame. Considering a slit width of 20 micromirrors, there are 15 and 28 slit positions covering the x and y direction scans, respectively. Using a slit reset time of 5 seconds (which includes modulating and loading time), the experimental system takes 75 seconds to generate an x-slit scan and 140 seconds to generate a y-slit scan. Via image processing, the acquired 2-D slit data provides zoomed locations of possible weak irradiance targets. For the experiment, the 2-D slit scan mode indicates the presence of weak irradiance targets in the zoomed 15 × 28 agile pixel count region. With N = 2, 420 (15 × 28) agile pixels, FDMA modulation in sets of 2 agile pixels, and a set loading and acquisition time of 6 seconds (presently highly limited by our non-optimized electronic boards), a 21 minute image acquisition time is required. For a full DR analysis and comparison between the CMOS-mode image and the CAOS-mode image of the hybrid camera, a complete CAOS-mode scan of the DMD chip (28 × 31 agile pixels) is acquired, as shown in Fig. 5. Deploying CDMA codes using a current state-of-the-art 32 kHz frame rate DMD-based CAOS mode [37], one can estimate much faster image generation times. Assuming a scanning grid of 1000 agile pixels on the DMD plane, one can deploy 100 different CDMA orthogonal or pseudo-random codes [38], with each code having a length of 100 bits and each code assigned to a certain agile pixel in a 100 agile pixel grouping. Given a 32 kHz frame rate, a single bit time duration is 1/(32 × 10³) s = 31.25 µs. A code of 100 bits has a duration of 31.25 µs × 100 = 3.125 ms. Thus, in 3.125 ms, irradiance data is acquired from 100 agile pixels, each assigned one of the 100 simultaneously modulating spatially specific codes. To use 100 CDMA codes for 1000 agile pixels, TDMA is deployed to modulate 100 agile pixels at a time to cover the scan region, using the 100 separate codes for each set of 100 agile pixels. Therefore, the total irradiance acquisition duration becomes 3.125 ms × 10 = 31.25 ms, similar to what is considered real-time video rates. Note that if TDMA is not deployed and a 100 pixel image is desired, 100 CAOS coded pixels can be simultaneously deployed, giving a total imaging time of 3.125 ms, or an image refresh rate of near 300 Hz. Thus, using the programmability feature of the CAOS mode and even faster frame rate DMDs, one can expect faster image acquisition. This statement assumes that the CMOS/PDA sensor has image refresh rates that match or exceed the CAOS-mode imaging rate, and with today's CMOS optical sensor technology, this is indeed the case. Ultimately, image smear of fast moving targets depends on the speed of the target, and the proposed imager can be programmed to avoid or reduce image smear.
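
The acquisition-time estimates in this paragraph reduce to a few multiplications; the short sketch below reproduces them using only the values stated in the text.

```python
# Experimental hybrid FDMA-TDMA timing for the zoomed 15 x 28 region
dual_pixel_sets = 420 // 2            # 210 sets of 2 FDMA-coded pixels
print(dual_pixel_sets * 6 / 60)       # 6 s per set -> 21.0 minutes

# Projected timing with a 32 kHz frame-rate DMD and 100-bit CDMA codes
bit_time = 1 / 32e3                   # 31.25 us per binary DMD frame
code_time = 100 * bit_time            # 3.125 ms per 100-pixel CDMA group
tdma_steps = 1000 // 100              # 10 TDMA groups cover 1000 agile pixels
print(tdma_steps * code_time * 1e3)   # 31.25 ms total
```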

Today, fast 14-bit ADCs with greater than 65 MHz sampling rates are available [39] for digitizing the point PD signal to implement fast decoding and image processing. Note that in the deployed CAOS mode, inter-pixel crosstalk effects are minimal. There is zero crosstalk between the time multiplexed pixel data sets acquired using the TDMA mode, as the pixel sets are acquired at different instants in time. In the CAOS mode deploying orthogonal CDMA codes, one minimizes the inter-pixel crosstalk by using orthogonal codes [38], while the frequency codes in a pixel set in the CAOS FDMA mode are selected suitably to avoid spectral overlap. This pixel readout mechanism is unlike CMOS/CCD sensors, where optical and electrical crosstalk effects are ever-present due to the hardwired pixel array structure and charge readout architectures, thus limiting the inter-pixel crosstalk performance of these cameras.

In summary, camera programmable parameters such as the point PD VGA gain and bandwidth, the number of simultaneous agile pixels, the agile pixel size, the total agile pixels in the sampled scene, the SM settings, as well as the incident lighting levels of the sampled scene zones, all affect the performance of the CAOS mode. In conjunction with the CMOS mode processed image data, a search mode with a quick raw scan using larger agile pixels provides a coarse resolution irradiance distribution map before the different camera parameters are optimized for the different scene zones to implement smart image reconstruction via the CAOS mode.

4. Conclusion

For the first time, proposed and demonstrated is a CAOS-CMOS camera that combines the time-frequency agile pixel CAOS imager with the traditional CMOS camera architecture to realize a powerful, high dynamic range imaging platform. The CAOS-CMOS camera design deploys a DMD chip which receives the incoming optical irradiance from a target scene and selectively directs it towards the CAOS arm and the CMOS arm of the camera. In the experimental demonstration, a target scene consisting of a bright light source and two dim targets with irradiances near the CAOS-mode noise floor is used to investigate the performance limits of both the CAOS and CMOS operational modes of the hybrid camera. The CMOS camera, rated at 51.3 dB DR according to the manufacturer, is experimentally shown to provide an image having insufficient dynamic range, whereas the CAOS-mode camera exhibits a DR of 82.06 dB under the experimental conditions and successfully reconstructs the complete target scene. This experimental 82.06 dB DR value of the CAOS operational mode is limited by the components used in the demonstration. Much higher DR values are obtainable using the CAOS mode of the camera via different programmable settings of the camera smart components, improved electronic capabilities of the ADC, and/or by deploying coherent detection via phase-locked loop amplification. The high DR imaging performance of the CAOS-CMOS camera has diverse applications across the fields of astronomy, machine vision, safety and surveillance, undersea observations and marine science, medical imaging and extreme environment imaging.

References and links

1. Point Grey White Paper Series, “Sony Pregius Global Shutter CMOS Imaging Performance,” (Point Grey Research, 2015).

2. M.-W. Seo, S.-H. Suh, T. Iida, T. Takasawa, K. Isobe, T. Watanabe, S. Itoh, K. Yasutomi, and S. Kawahito, “A low-noise high intrascene dynamic range CMOS image sensor with a 13 to 19b variable-resolution column-parallel folding-integration/cyclic ADC,” IEEE J. Solid-State Circuits 47(1), 272–283 (2012). [CrossRef]  

3. S. Sukegawa, T. Umebayashi, T. Nakajima, H. Kawanobe, K. Koseki, I. Hirota, T. Haruta, M. Kasai, K. Fukumoto, T. Wakano, and K. Inoue, “A 1/4-inch 8Mpixel back-illuminated stacked CMOS image sensor,” in Proceedings of IEEE Conference on Solid-State Circuits Conference Digest of Technical Papers (ISSCC) (IEEE, 2013), pp. 484–485.

4. S. Sumriddetchkajorn and N. A. Riza, “Micro-electro-mechanical system-based digitally controlled optical beam profiler,” Appl. Opt. 41(18), 3506–3510 (2002). [CrossRef]   [PubMed]  

5. N. A. Riza and M. J. Mughal, “Optical Power Independent Optical Beam Profiler,” Opt. Eng. 43(4), 793–797 (2004). [CrossRef]  

6. N. A. Riza and F. N. Ghauri, “Super Resolution Hybrid Analog-Digital Optical Beam Profiler Using Digital Micromirror Device,” IEEE Photonics Technol. Lett. 17(7), 1492–1494 (2005). [CrossRef]  

7. M. Gentili and N. A. Riza, “Wide-Aperture No-Moving-Parts Optical Beam Profiler Using Liquid-Crystal Displays,” Appl. Opt. 46(4), 506–512 (2007). [CrossRef]   [PubMed]  

8. M. Sheikh and N. A. Riza, “Demonstration of Pinhole Laser Beam Profiling using a Digital Micromirror Device,” IEEE Photonics Technol. Lett. 21(10), 666–668 (2009). [CrossRef]  

9. N. A. Riza, S. A. Reza, and P. J. Marraccini, “Digital micro-mirror device-based broadband optical image sensor for robust imaging applications,” Opt. Commun. 284(1), 103–111 (2011). [CrossRef]  

10. N. A. Riza, P. J. Marraccini, and C. Baxley, “Data Efficient Digital Micromirror Device-Based Image Edge Detection Sensor using Space-Time Processing,” IEEE Sens. J. 12(5), 1043–1047 (2012).

11. M. J. Amin, J. P. La Torre, and N. A. Riza, “Embedded Optics and Electronics Single Digital Micromirror Device-based Agile Pixel Broadband Imager and Spectrum Analyser for Laser Beam Hotspot Detection,” Appl. Opt. 54(12), 3547–3559 (2015). [CrossRef]  

12. S. Selivanov, V. N. Govorov, A. S. Titov, and V. P. Chemodanov, “Lunar Station Television Camera,” (Reilly Translations): NASA CR-97884 (1968).

13. F. O. Huck and J. J. Lambiotte, “A Performance Analysis of the Optical-Mechanical Scanner as an Imaging System for Planetary Landers,” NASA TN D-5552 (1969).

14. D. Takhar, J. N. Laska, M. B. Wakin, M. F. Duarte, D. Baron, S. Sarvotham, K. F. Kelly, and R. G. Baraniuk, “A New Compressive Imaging Camera Architecture using Optical-Domain Compression,” Proc. SPIE 6065, 606509 (2006). [CrossRef]  

15. J. H. Shapiro, “Computational ghost imaging,” Phys. Rev. A 78(6), 061802 (2008). [CrossRef]  

16. O. Katz, Y. Bromberg, and Y. Silberberg, “Compressive ghost imaging,” Appl. Phys. Lett. 95(13), 131110 (2009). [CrossRef]  

17. M. J. E. Golay, “Multi-slit spectrometry,” J. Opt. Soc. Am. 39(6), 437–444 (1949). [CrossRef]   [PubMed]  

18. P. Gottlieb, “A television scanning scheme for a detector-noise limited system,” IEEE Trans. Inf. Theory 14(3), 428–433 (1968). [CrossRef]

19. E. E. Fenimore, “Coded aperture imaging: predicted performance of uniformly redundant arrays,” Appl. Opt. 17(22), 3562–3570 (1978). [CrossRef]   [PubMed]  

20. W. T. Cathey and E. R. Dowski, “New paradigm for imaging systems,” Appl. Opt. 41(29), 6080–6092 (2002). [CrossRef]   [PubMed]  

21. N. A. Riza and M. A. Arain, “Code-multiplexed optical scanner,” Appl. Opt. 42(8), 1493–1502 (2003). [CrossRef]   [PubMed]  

22. K. Kearney and Z. Ninkov, “Characterization of a digital micro-mirror device for use as an optical mask in imaging and spectroscopy,” Proc. SPIE 3292, 81–92 (1998). [CrossRef]  

23. J. Castracane and M. Gutin, “DMD-based bloom control for intensified imaging systems,” Proc. SPIE 3633, 234–242 (1999). [CrossRef]  

24. S. Nayar, V. Branzoi, and T. Boult, “Programmable imaging using a digital micro-mirror array,” in Proceedings of IEEE on Computer Vision and Pattern Recognition (IEEE, 2004), pp. 436–443.

25. N. A. Riza, M. J. Amin, and J. P. La Torre, “Coded Access Optical Sensor (CAOS) Imager,” J. Eur. Opt. Soc. Rapid Publ. 10, 15021 (2015). [CrossRef]

26. N. A. Riza, “Coded Access Optical Sensor (CAOS) imager and applications,” Proc. SPIE 9896, 98960A (2016).

27. N. A. Riza, “Compressive optical display and imager,” US Patent 8783874 B1 (2014).

28. T. Scheimpflug, “Improved Method and Apparatus for the Systematic Alteration or Distortion of Plane Pictures and Images by Means of Lenses and Mirrors for Photography and for other purposes,” GB Patent No. 1196 (1904).

29. K. Lee, S. Seo, S. Huang, Y. Joo, W. A. Doolittle, S. Fike, N. Jokerst, M. Brooke, and A. Brown, “Design of a Smart Pixel Multispectral Imaging Array Using 3D Stacked Thin Film Detectors on Si CMOS Circuits,” in Proceedings of IEEE Conference on Electronic-Enhanced Optics, Optical Sensing in Semiconductor Manufacturing, Electro-Optics in Space, Broadband Optical Networks (IEEE, 2000), pp. 157–158. [CrossRef]  

30. Y. Kim, T.-H. Lai, J. W. Lee, J. R. Manders, and F. So, “Multi-spectral imaging with infrared sensitive organic light emitting diode,” Sci. Rep. 4(5946), 5946 (2014). [PubMed]  

31. UI-1250SE-M-GL detailed datasheet, IDS, Germany.

32. D. V. Blerkom, C. Basset, and R. Yassine, “CMOS DETECTORS: New techniques recover dynamic range as CMOS pixels shrink,” Laser Focus World 46(6), 45 (2010).

33. PDA36A datasheet, Thorlabs, (2015).

34. Photodetector technical documents, Thorlabs, Germany.

35. V. Mackowiak, J. Peupelmann, Y. Ma, and A. Gorges, “NEP – Noise Equivalent Power,” White Paper, Thorlabs.

36. J. Arrillaga, D. A. Bradley, and P. S. Bodger, Power System Harmonics (John Wiley and Sons, 1985).

37. DLP7000 DLP 0.7 XGA datasheet, Texas Instruments, USA (2015).

38. S. P. Kim and M. J. Kim, “A constant amplitude coding for code select CDMA system,” in Proceedings of IEEE TENCON Conference on Computers, Communications, Control and Power Engineering, (IEEE, 2002), pp. 1035–1038. [CrossRef]  

39. ADC ADS52J90, Texas Instruments, USA.
