Demonstration of the CDMA-mode CAOS smart camera

Open Access

Abstract

Demonstrated is the code division multiple access (CDMA)-mode coded access optical sensor (CAOS) smart camera suited for bright target scenarios. Deploying a silicon CMOS sensor and a silicon point detector within a digital micro-mirror device (DMD)-based spatially isolating hybrid camera design, this smart imager first engages the DMD staring mode with a controlled factor of 200 high optical attenuation of the scene irradiance to provide a classic unsaturated CMOS sensor-based image for target intelligence gathering. Next, this CMOS sensor-provided image data is used to acquire a more robust, un-attenuated true target image of a focused zone using the time-modulated CDMA-mode of the CAOS camera. Using four different bright light test target scenes, successfully demonstrated is a proof-of-concept visible band CAOS smart camera operating in the CDMA-mode using up to 4096-bit length Walsh design CAOS pixel codes with a maximum 10 KHz code bit rate, giving a 0.4096 second CAOS frame acquisition time. A 16-bit analog-to-digital converter (ADC) with time domain correlation digital signal processing (DSP) generates the CDMA-mode images with a 3600 CAOS pixel count and a best spatial resolution of one micro-mirror square pixel of 13.68 μm side. The CDMA-mode of the CAOS smart camera is suited for applications where robust high dynamic range (DR) imaging is needed for un-attenuated, un-spoiled, bright light, spectrally diverse targets.

© 2017 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

There are several real world scenarios such as automotive sensing, industrial machine vision and surveillance (e.g., night vision) where one engages bright light targets within scenes having extreme image contrasts. When one adds targets having diverse spectral signatures from ultraviolet to infrared wavelengths, the challenge to view such bright targets and scenes gets even harder when using current image sensor technologies based on Complementary Metal Oxide Semiconductor (CMOS), Charge Coupled Device (CCD) and Focal Plane Array (FPA) devices. Previously, attempts have been made using Spatial Light Modulators (SLMs) like the DMD to design and demonstrate optical systems working with pixelated Photo-detector Array (PDA) sensors [1–3], point detectors [4], and a combination [5] of PDA and point Photo-Detectors (PDs) to control, condition, and image incident light maps with a limited DR and operational speed. The spectral versatility and device robustness offered via a point optical detector-based imager was recognized many years ago and led to efforts in the 1960’s to construct optical cameras for space-borne applications [6,7]. Recently, introduced is a new variant of the point-PD based imager called the CAOS smart camera [8]. This collaborative camera design engages the DMD, the point PD, and a prior-art CMOS/CCD/FPA sensor combined with image processing methods, all working in unison to enable target extraction using Radio Frequency (RF) wireless inspired space-time-frequency coding and processing techniques for the imaged light. Specifically, this RF communications and processing inspired light encoding and decoding method enables extraction of true un-attenuated target irradiance maps under extreme conditions of high contrast and high brightness levels [9]. The CAOS camera can engage various operational modes borrowed from RF wireless multiple access techniques such as Time Division Multiple Access (TDMA), Frequency Division Multiple Access (FDMA), CDMA and a combination of these modes, where multiple CAOS pixels in the selected scene zone are encoded and decoded with their appropriate time-frequency based signal processing. Previous demonstrations of the CAOS camera have deployed the FDMA-TDMA mode that engages time domain RF spectrum analysis via the DSP-based Fast Fourier Transform (FFT) algorithm to recover pixel irradiance levels. Ref. [8] demonstrated white light imaging of a three target scenario with an 82 dB linear DR while [10] demonstrated a multi-spectral imager with an image pixel extreme linear DR reaching 136 dB. Use of TDMA combined with FDMA as well as a 400 Hz frame rate DMD limited the earlier demonstrated speed of operation of the CAOS camera to a frame acquisition time of several minutes [11,12]. This paper for the first time describes the design and demonstration of a faster speed CAOS smart camera empowered by a higher speed DMD that implements the pure CDMA-mode of operation, where one does not use TDMA-mode CAOS pixel management to acquire the CAOS image frame. Specifically, the paper first describes the design theory deployed to implement the CDMA-mode within the CAOS smart camera. Next, various bright light targets are deployed to experimentally test the camera, showcasing the robustness of the captured un-attenuated image data versus the CMOS-mode of the camera that requires engaging high optical attenuation of the scene to prevent CMOS sensor saturation.
Experiments also demonstrate the various programmable features of the camera such as CAOS capture signal frame time control, targeted zoomed higher resolution imaging within sub-sections of an initial CAOS image, and variable gain imaging to improve output Signal-to-Noise Ratio (SNR). The paper concludes with a dual bright-weak targets test to achieve both bright and weak target recovery using the CDMA-mode.

2. CDMA-mode CAOS smart camera design

Figure 1 shows the CAOS smart camera design that can be programmed with the RF style time-frequency (Hertz domain) CDMA-mode to enable simultaneous viewing of P selected CAOS pixels within a CMOS-mode guided target scene zone. In addition, the size, shape, and location of each CAOS pixel in the P-pixel set is also programmed using the scene intelligence gathered using the CMOS-mode operations. Specifically, imaged scene light enters via the imaging lens L1 and falls on the DMD plane that is initially programmed in its –θ state of micromirrors to divert all imaged light to the PDA sensor such as a CMOS sensor for the visible light band. Note that the DMD is also a programmable spatial filter, although with a limited spatial rejection ratio (e.g., 20-30 dB on/off ratio), and this feature can also be used to direct imaged light away from the CMOS sensor or the point PD in order to optimize the usage of PD quantum well capacities [13]. The Scheimpflug imaging condition is maintained between the DMD plane and the point PD and PDA planes. The three optional Smart Modules (SM) SM1, SM2, and SM3 may contain programmable variable optical attenuators, wavelength filters, and apertures to optimally condition the light striking the DMD, point PD, and PDA devices, respectively. The point PD unit may incorporate a built-in variable gain electronic amplifier. It is conceivable that the point PD is replaced by an all-optical amplification device that connects to an optical-to-electrical transducer that generates the high speed electrical signal for electronic signal processing. Although the Fig. 1 design shows a single optical detector set (one point PD and one PDA), multiple point PD and PDA sets suited to the desired viewing optical bands can be deployed for simultaneous viewing of multi-spectral image data. Furthermore, programmable spectral filters within the SMs can further optimize spectral image content. It is interesting at this stage of the camera design discussion to note the natural synergy between the human eye hardware and its visual processing operations and the CAOS smart camera hardware and its operations. The retina in the human eye has many photo-receptors called rods that are responsible for lower visual acuity, lower light level, monochromatic, off-axis viewing (visual acuity commonly refers to the clarity of vision, i.e., a sharp image) [14]. In contrast, the retina also has fewer photo-receptors called cones that are solely responsible for higher light level and higher visual acuity on-axis (in the fovea) color viewing. Human visual processing in demanding stressful applications such as experienced by race car drivers [15] has shown initial brain controlled eye tracking to first get full off-axis scene views using mainly rod-based monochromatic vision and then switching to focused sub-scene cone (fovea) based viewing for high acuity color vision. When comparing the human vision system to the CAOS smart camera vision system hardware, one can deduce that the many non-CAOS mode pixels in the CAOS smart camera operate like the rods in the eye while the fewer CAOS-mode pixels operate like the spatially directional cones in the human eye. Furthermore, the operations of the human versus the CAOS smart camera processing are similar where one first gathers a general wider scene view using one set of hardware and then one uses a higher precision hardware to generate a more in-depth view of a target region of interest.
In this context, it is also well known that a lower spatial resolution color image provides greater pattern recognition power via human visual processing versus a higher spatial resolution monochromatic image of the scene. Thus the CAOS smart camera with its spectrally flexible operations matching the human visual system hardware and processing operations possesses inherent attributes that are expected to further enable not only better human vision-based systems but also improved machine vision systems.

Fig. 1 CAOS smart camera design programmed for operation in the CDMA-mode.

Today’s mobile phone networks are dominated by the CDMA-based RF network design that has its roots in the early RF coded microwave radar [16] of the 1960’s as well as the CDMA cellular phone networks of the 1980’s [17]. Both the radar and cellular airspaces are inundated with trillions of time-frequency coded RF waveforms. Nevertheless, the signal processing power of time/frequency domain correlation and spectral signal processing allows the extraction of the desired signal using the appropriate matched time-frequency code. CAOS operates on this same basic principle, assigning unique time-frequency codes to the selected pixel irradiances of the incident image.

Let cp(t) denote the CDMA encoding temporal code assigned to the pth CAOS-mode pixel irradiance Ip in the CMOS sensor guided image zone incident on the DMD, with p = 1, 2, 3,..., j,..., P. For this discussion, one can assume that Ip is not varying during the CAOS pixel encoding duration T, where T is the CDMA signal waveform duration. All P CAOS pixel CDMA encoded temporal waveforms simultaneously illuminate the point PD, generating the photo-current iPD(t). When viewed in the time domain, this current looks like a white noise RF signal, or like a common spread-spectrum RF signal when viewed in the frequency (Hz) domain, a feature of detected signals in CDMA RF systems. It is well known from RF radar and cellular systems design that for best signal recovery one must use CDMA codes within the multiple access coding network with a minimal cross-correlation (i.e., minimum side-lobe level) and a maximum autocorrelation value. In other words, the deployed temporal codes in the multiple access system ideally should satisfy the cross-correlation result:

$$c_p(t) \otimes c_j(t) = 0, \quad p \neq j \tag{1}$$
where ⊗ represents the temporal correlation signal processing operation. Hence, time-frequency code families should be chosen that display the best attributes of minimal cross-correlation and maximum auto-correlation. The photocurrent due to the pth CAOS pixel irradiance Ip is given by:
$$i_p(t) = K \times I_p \times c_p(t) \tag{2}$$
where K is a proportionality factor containing various aspects of the point PD module such as electronic gain and quantum efficiency and the DMD micro-mirror reflectivity. The total photo-current from the point PD due to P irradiance values from the P simultaneously encoded CAOS pixels is given by:
$$i_{PD}(t) = \sum_{p=1}^{P} i_p(t) = K \sum_{p=1}^{P} I_p\, c_p(t) \tag{3}$$
To independently recover each of the P scaled observed image irradiance values I′j from iPD(t), the temporal correlation operation is performed using j = 1,2,…,P, giving:
$$I'_j = w_j(t) = i_{PD}(t) \otimes c_j(t) \tag{4}$$
Note that the temporal correlation operation implemented to recover the jth CAOS pixel irradiance Ij using a real-valued decoding signal code cj(t) and the real-valued CAOS smart camera point PD photo-current iPD(t) is given by:
$$I'_j = \int_{-\infty}^{+\infty} i_{PD}(\tau)\, c_j(\tau - t)\, d\tau \tag{5}$$
Here t represents the relative delay between the point PD signal and the temporal decoding signal. Thus, to extract the scaled irradiance I′j of any arbitrary jth pixel:
$$\begin{aligned} I'_j &= \int_{-\infty}^{+\infty} [\,i_1(\tau) + i_2(\tau) + i_3(\tau) + \cdots + i_P(\tau)\,]\, c_j(\tau - t)\, d\tau \\ &= K \int_{-\infty}^{+\infty} [\,I_1 c_1(\tau) + I_2 c_2(\tau) + I_3 c_3(\tau) + \cdots + I_P c_P(\tau)\,]\, c_j(\tau - t)\, d\tau \\ &= K I_j \int_{-\infty}^{+\infty} c_j(\tau)\, c_j(\tau - t)\, d\tau + K \sum_{\substack{p=1 \\ p \neq j}}^{P} \left[ I_p \int_{-\infty}^{+\infty} c_p(\tau)\, c_j(\tau - t)\, d\tau \right] \end{aligned} \tag{6}$$
The first part of the Eq. (6) right-hand side includes the time domain auto-correlation operation of the jth coding signal that recovers the jth CAOS pixel irradiance Ij. The second part of Eq. (6) contains the cross-correlation of the jth decoding signal with all other (P−1) encoding signals. For recovery of the irradiance Ij of the jth pixel, the auto-correlation value should be much greater than the cross-correlation value. It is well known that the maximum autocorrelation value occurs when the relative time delay is zero between two identical waveforms undergoing the correlation operation. For Eq. (6), this means using a t = 0 value, giving the result:
$$I'_j = w_j(t=0) = K I_j \int_{-\infty}^{+\infty} c_j(\tau)\, c_j(\tau)\, d\tau + K \sum_{\substack{p=1 \\ p \neq j}}^{P} \left[ I_p \int_{-\infty}^{+\infty} c_p(\tau)\, c_j(\tau)\, d\tau \right] \tag{7}$$
Using the important condition in Eq. (1) that temporal codes deployed for CAOS CDMA-mode have ideally zero cross-correlation values, one can write the decoded jth CAOS pixel scaled irradiance I′j as:
$$I'_j = K I_j \int_{-\infty}^{+\infty} c_j(\tau)\, c_j(\tau)\, d\tau = K I_j\, [\,c_j(t) \otimes c_j(t)\,] \tag{8}$$
with the auto-correlation operation computed for the zero relative delay setting giving a constant value equal to the time averaged energy in the time-frequency coding signal cj(t), a value that is the same for all j. Hence K times this code energy constant is the scaling factor associated with the decoded jth CAOS pixel scaled irradiance I′j, where Ij is the true non-scaled jth CAOS pixel irradiance. Thus, by performing P independent time domain correlation operations with the P different decoding signals, all scaled irradiances of the observed CDMA-mode CAOS camera seen image can be recovered to produce the CAOS frame image.

Since the 1960’s, extensive work has been done to design analog, digital, and hybrid coding signals cj(t) for CDMA-style RF systems. Specifically, research has been conducted into optimal binary (i.e., on/off or two state) sequences for CDMA [18] where the on-state is represented by a “+1” signal state (e.g., 5 V) and the off-state is represented by a “0” signal state (e.g., 0 V) or a “−1” signal state (e.g., −5 V). For the CDMA-mode CAOS smart camera application, the binary sequence encoding signal cpE(t) for the pth CAOS pixel is:

$$c_p^E(t) = \sum_{q=-\frac{N}{2}+1}^{N/2} a_{pq}^E\, \mathrm{rect}\!\left[\frac{t - qT_b}{T_b}\right] \tag{9}$$
To represent one code bit slot in time, the classic rect function is used with a window or bit duration of Tb. The bit period used is also Tb seconds with code bit rate B = 1/Tb bits/sec. Here N is an even integer and represents the number of sequential bits that make up one encoding signal. N is even because half the code bits across all P codes are designed to occupy a “1” state and the other half are designed to occupy a “0” state. Such a design ensures optimal code bit usage efficiency for orthogonal code generation and also keeps the same optical exposure time of 0.5N Tb for all CAOS pixels at the point PD so all CAOS pixels experience the same photo-detection weighting factor. Using the degrees of freedom argument, N must be greater than or equal to the total number of CDMA-mode CAOS pixels P in one CAOS image frame. Designed for DMD light temporal modulation operations, the pth CAOS pixel encoding bit amplitude coefficient apqE has values of either 1 or 0. Here a value of 1 represents presence of the CAOS pth pixel light on the point PD and “0” represents the absence of the CAOS pth pixel light on the point PD. The total photo-current from the point PD due to P irradiance values from the P simultaneously encoded CAOS pixels using the binary sequence encoding signal waveform cpE(t) is given by:
$$i_{PD}(t) = \sum_{p=1}^{P} i_p(t) = K \sum_{p=1}^{P} I_p\, c_p^E(t) \tag{10}$$
This photo-current is recorded by a high speed data sampler such as an ADC, creating a recorded data set for high speed DSP operations of multi-channel time domain correlations. For the CDMA-mode CAOS smart camera application, the binary sequence decoding signal cjD(t) for the jth CAOS pixel can be written as:
$$c_j^D(t) = \sum_{q=-\frac{N}{2}+1}^{N/2} a_{jq}^D\, \mathrm{rect}\!\left[\frac{t - qT_b}{T_b}\right] \tag{11}$$
The jth CAOS pixel decoding bit amplitude coefficient ajqD has values of either 1 or −1. To independently recover each of the P scaled observed image irradiance values I′j from iPD(t), again the temporal correlation operation (with zero relative delay between encoding and decoding signal) is performed using j = 1,2,…,P, giving:
$$I'_j = w_j(t=0) = i_{PD}(t) \otimes c_j^D(t) = A_j + C \tag{12}$$
Here Aj represents the auto-correlation result for the jth CAOS pixel and C represents the cross-correlation contribution within the Eq. (12) computation. Furthermore, one can write:
$$A_j = K I_j\, [\,c_j^E(t) \otimes c_j^D(t)\,] = K I_j \sum_{q=-\frac{N}{2}+1}^{N/2} a_{jq}^E\, a_{jq}^D = K I_j\, \mathbf{a}_j^E \circ \mathbf{a}_j^D = \frac{K I_j N}{2} \tag{13}$$
where perfect orthogonality between the encoding and decoding sequences is assumed such as found in Walsh functions (i.e., orthogonal sequences derived from the rows or columns of Sylvester-type N x N Hadamard matrices) [19–21]. For the jth CAOS pixel, ajE and ajD are N-element encoding and decoding vector codes, respectively. Here ∘ is the dot product (or inner product) symbol for the two mutually orthogonal encoding and decoding vectors. The dot product between the two designed auto-correlating encoding and decoding vectors gives a scalar value of N/2. Next, the cross-correlation contribution C in Eq. (12) can be written as:
$$C = K \sum_{\substack{p=1 \\ p \neq j}}^{P} I_p\, [\,c_p^E(t) \otimes c_j^D(t)\,] = K \sum_{\substack{p=1 \\ p \neq j}}^{P} I_p \sum_{q=-\frac{N}{2}+1}^{N/2} a_{pq}^E\, a_{jq}^D = K \sum_{\substack{p=1 \\ p \neq j}}^{P} I_p\, \mathbf{a}_p^E \circ \mathbf{a}_j^D = 0 \tag{14}$$
Here one uses the condition that for the mutually orthogonal family of N-element codes, the dot product apE ∘ ajD between cross-correlating (i.e., p ≠ j) encoding and decoding vectors is zero. Thus, after using DSP, the decoded jth CAOS pixel scaled irradiance I′j is given by:
$$I'_j = A_j + C = \frac{K I_j N}{2} \tag{15}$$
Equation (15), using j = 1,2,…,P, indicates that all P simultaneous CDMA-mode CAOS pixel scaled irradiances are recovered using the described design theory when using ideal Walsh function-based orthogonal binary code sequences. To demonstrate a simple example of Walsh function based encoding and decoding of a CAOS pixel in CDMA-mode, consider the case of P = 16 CAOS pixels using a N = 16 bits code design with a specific jth pixel encoding time sequence (vector) ajE of [1 0 0 1 1 0 0 1 0 1 1 0 0 1 1 0] and a jth pixel decoding time sequence (vector) ajD of [1 −1 −1 1 1 −1 −1 1 −1 1 1 −1 −1 1 1 −1]. As needed for Eq. (13) and Eq. (15), computing the autocorrelation (i.e., dot product) of these two vectors gives ajE ∘ ajD = 16/2 = 8. For a pth CAOS pixel encoding code design with a vector code of [1 0 1 0 1 0 1 0 0 1 0 1 0 1 0 1], computing the cross-correlation (i.e., dot product) with the decoding vector ajD given earlier with p ≠ j gives apE ∘ ajD = 0, a result needed for Eq. (14). Thus the desired scaled irradiance of the CAOS jth pixel is recovered using the Walsh code time sequences of digital bits. One should note that using Walsh codes, the optical exposure time for all P CAOS pixels is designed to be 0.5N Tb. In other words, for optimal coding and light exposure efficiency, one should not use a code sequence with an unbalanced number of “on” to “off” states, i.e., the N bits code should have N/2 on-states and N/2 off-states. This also means one may have to discard some columns/rows from the N x N code generation matrix. The CAOS frame time is the sum of the total code time of N Tb plus the time it takes for the DSP to reconstruct the CAOS image frame. The DMD frame rate is B = 1/Tb. To achieve a faster CAOS frame time, the N Tb product and the DSP time should be reduced using faster DMDs and DSP electronics as well as the minimum N (and P) values for sufficient target of interest recovery from an observed scene.
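As a concrete illustration of the above code design and decoding arithmetic, the short Python/NumPy sketch below (an illustrative reconstruction, not the authors' MATLAB/Visual Studio software; all variable names and test values are assumptions) builds a Sylvester-type Hadamard matrix, forms the balanced 1/0 encoding and +1/−1 decoding vectors, simulates the summed per-bit-slot point PD values, and recovers all P scaled irradiances via zero-delay correlation, confirming the K Ij N/2 scaling of Eq. (15).

import numpy as np

def sylvester_hadamard(n):
    # Sylvester-type Hadamard matrix of order n (n must be a power of 2).
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.kron(H, np.array([[1, 1], [1, -1]]))
    return H

N = 16           # Walsh code length in bits (a power of 2)
P = 15           # number of simultaneously coded CAOS pixels (P <= N - 1 balanced codes)
K = 1.0          # point PD proportionality factor (gain, quantum efficiency, etc.)
H = sylvester_hadamard(N)
D = H[1:P + 1, :]                          # decoding vectors ajD in {+1, -1}; the all-ones row is discarded
E = (D + 1) // 2                           # encoding vectors ajE in {1, 0}: DMD micro-mirror on/off bits
assert np.all(E.sum(axis=1) == N // 2)     # every code is balanced: N/2 on-states and N/2 off-states
I_true = np.random.uniform(0.1, 10.0, P)   # true CAOS pixel irradiances Ip (arbitrary test values)
v = (K * I_true) @ E                       # summed point PD value in each of the N bit slots, Eq. (10)
w = v @ D.T                                # zero relative delay correlations wj of Eq. (12): wj = K Ij N/2
I_recovered = 2.0 * w / (K * N)            # undo the K N/2 scaling of Eq. (15)
print(np.allclose(I_recovered, I_true))    # True: all P pixel irradiances are recovered
# Cross-check with the 16-bit worked example above: row 11 of H reproduces the quoted decoding
# vector [1 -1 -1 1 1 -1 -1 1 -1 1 1 -1 -1 1 1 -1] and hence the quoted 1/0 encoding vector.

The same arithmetic applies, only at a larger scale, to the experiment's N = 4096 bit codes and P = 3600 CAOS pixels described in the next section.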

3. CAOS smart camera CDMA-mode demonstration experiment

The CAOS smart camera system of Fig. 1 is set-up in the laboratory using a Vialux high speed driver electronics-interfaced TI DMD model DLP7000 having a micro-mirror pitch of 13.68 µm, a 1024 × 768 micro-mirror array in rectangular configuration, a micro-mirror tilt of θ = ± 12° relative to the flat state, a maximum global array switching rate of 22,272 Hz and a spectral range of λ = 400 nm to 700 nm [22]. L1, L2, and L3 have diameters of 2.54 cm, 5.08 cm and 2.54 cm, respectively. L1, L2, and L3 have focal lengths of 4 cm, 5 cm and 2.54 cm, respectively. The distances between listed components are as follows: DMD and L1: 4.4 cm to 5.1 cm (target dependent), DMD and L2: 15.0 cm, L2 and PD: 7.5 cm, DMD and L3: 9 cm, L3 and CMOS sensor: 3.5 cm. The point PD is a Thorlabs Silicon Switchable Gain Detector PDA36A with an active area of 13 mm^2 (3.6 mm × 3.6 mm). The point PD has a variable gain setting from 0 dB to 70 dB and a spectral range of 350 nm to 1000 nm. For the experiment, the point PD is initially operated at the 20 dB gain setting with specifications of transimpedance gain GA = 1.5 × 10^4 V/A, maximum PD output Vmax = 10 V, Bandwidth (BW) = 1.0 MHz and a rise time tr = 0.35 µs. The maximum output current imax = Vmax/GA = 6.0 × 10^−4 A. To compute the point PD DR for these settings, knowledge of the maximum and minimum detectable input power is required. For the imaging tests, the white light illumination source used had a central wavelength at λ = 550 nm. Using a wavelength responsivity R(λ) of 0.34 A/W at λ = 550 nm, the maximum detectable optical power by the point PD is Pmax = imax/R(550 nm) = (6.0 × 10^−4)/0.34 = 1.765 × 10^−3 W. The wavelength dependent Noise Equivalent Power (NEP) is at its minimum value, NEPmin, at λ = 950 nm. Since the PD is operated at λ = 550 nm, NEP(550 nm) = NEPmin × R(950 nm)/R(550 nm), where R is the wavelength dependent responsivity. With NEPmin = 2.34 × 10^−12 W/√Hz, NEP(550 nm) = 4.47 × 10^−12 W/√Hz. The minimum detectable input optical power is Pmin = NEP(λ) × √BW = NEP(550 nm) × √(1 × 10^6) = 4.47 × 10^−9 W. Hence the point PD has a designed DR = 20log(Pmax/Pmin) = 111.92 dB. The PDA used is an IDS UI-1250LE-M-GL monochrome CMOS camera with an E2V sensor EV76C570ABT, a specified DR of 51.3 dB, a pixel pitch of 4.5 µm and 1600 × 1200 pixels.
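For reference, the point PD DR budget quoted above can be restated numerically; the short Python check below uses the values stated in the text (it is a sanity check under those stated values, not part of the camera software).

import math
i_max = 6.0e-4                   # stated maximum point PD output current, A
R_550 = 0.34                     # responsivity at 550 nm, A/W
NEP_550 = 4.47e-12               # noise equivalent power at 550 nm, W/sqrt(Hz)
BW = 1.0e6                       # point PD bandwidth, Hz
P_max = i_max / R_550            # maximum detectable optical power, about 1.765e-3 W
P_min = NEP_550 * math.sqrt(BW)  # minimum detectable optical power, about 4.47e-9 W
DR_dB = 20 * math.log10(P_max / P_min)
print(round(DR_dB, 1))           # about 111.9 dB, consistent with the quoted point PD DR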

The CAOS smart camera processing unit contains the following: a Vialux main circuit board model V4395 for DMD control; a model 6211 Data Acquisition (DAQ) card from National Instruments to acquire the PD output voltage in real time using the DAQ card's 16-bit ADC, with a maximum sampling rate of 250 Kilo-samples per second (Ksps) and a DAQ Vmax of 10 V; and a DELL i5 laptop computer with a clock speed of 2.8 GHz. The point PD digitized voltage signal is fed to the laptop using a USB serial communication interface. A program created in MATLAB 2015 creates an N × N Hadamard matrix used for Walsh bit sequence generation that is fed to another in-house program built using Visual Studio 2012 on the laptop to manage the CDMA-mode CAOS smart camera operations. This program allows the user to input the desired number P of CAOS pixels, the DMD frame rate 1/Tb, the spatial region of the CDMA-mode image and the CAOS pixel size in number of micro-mirrors. Another custom program created in MATLAB controls the point PD signal acquisition and allows the user to input the desired DAQ card sampling rate. This program also performs the temporal correlation operations required for incident image scaled irradiance decoding of the P CDMA-mode CAOS pixels.
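The decoding step performed by the MATLAB acquisition program can be summarized by the minimal Python sketch below (an illustrative reconstruction with assumed function name and array layout, not the in-house code): the oversampled DAQ samples are averaged over each code bit slot and then correlated, at zero relative delay, with the +1/−1 decoding vectors.

import numpy as np

def decode_caos_frame(daq_samples, D, samples_per_bit):
    # daq_samples: point PD record for one CAOS frame, at least N * samples_per_bit values.
    # D: P x N decoding matrix with +1/-1 entries (Walsh rows). Returns P scaled irradiances.
    P, N = D.shape
    v = daq_samples[:N * samples_per_bit].reshape(N, samples_per_bit).mean(axis=1)  # per-bit averages
    w = D @ v              # P zero relative delay time domain correlations, Eq. (12)
    return 2.0 * w / N     # remove the N/2 auto-correlation scaling; the factor K remains absorbed

Because every decoding vector has an equal number of +1 and −1 entries, any constant background level in the point PD signal cancels in the correlation sum.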

It is relevant to point out that as far back as 1949, time-frequency coding of light using spinning Two Dimensional (2-D) patterned gratings generating binary sequences in time was used in a spectrometer [23]. In 1968, the same essential approach was also independently proposed for an imaging display [24]. Then in 1988, again independently, time-frequency coding of light using two dimensional grating patterns on a spinning optical disk was used to generate a distributed local oscillator (in the Hertz domain) to implement optics-based RF fine frequency spectrum analysis [25]. Given the binary micro-mirror state nature of the DMD and the success of one dimensional Walsh functions [26,27] in the coding of electromagnetic waves within the microwave radar, RF communications, and optical instrumentation arenas, the first experimental demonstration of the CDMA-mode of the CAOS smart camera is designed using N-bit Walsh sequences in time. These codes form an excellent orthogonal set for time-frequency encoding the simultaneously sampled CAOS pixels and for scaled irradiance CAOS pixel decoding using time-domain correlation processing. The N-element Walsh sequences in time for the experiment, representing the encoding and decoding N-element vectors, are generated using columns/rows of an N x N Hadamard matrix [28,29]. The experiments described next showcase CDMA-mode CAOS smart camera imaging using four different types of test target scenes requiring different programmable features of the smart camera.

The first target observed by the CAOS smart camera is a Rolson 5 W aluminium housing, plastic cover, Z2 LED flashlight with a luminous flux of 180 lumens that is placed 47.0 cm from the camera frontend capture lens L1. The DMD to L1 distance is set to 4.4 cm. First, using a Canon SX500 IS camera and room lighting conditions, Fig. 2(a) shows an image of the flashlight captured with its LED off. For all target scene viewing experiments via the smart camera, all room lights are turned off to create bright light targets within a dark background. Next, the smart camera is set to its initial CMOS-mode with all the DMD micro-mirrors directing light to the CMOS sensor. With no ND filter placed between the DMD and the CMOS sensor, the high brightness of the flashlight completely saturates the CMOS sensor. Hence a sequence of ND filters is applied to attenuate the scene image irradiance map to create an image of the attenuated viewed scene. With the ND filter reaching an Optical Density (OD) value of 2.3, giving a 200X irradiance attenuation, Fig. 2(b) shows the unsaturated captured flashlight image that gives a measured irradiance max/min contrast of 115:1. The exposure time of the CMOS sensor is set to its shortest 20 μs duration. Using the acquired CMOS-mode image, a region of interest of 300 x 300 micro-mirrors on the DMD is identified to implement a 60 x 60 CAOS pixels CDMA-mode of the camera with each CAOS pixel made of 5 x 5 micro-mirrors. In this case, no optical attenuation is applied to the incident scene light, thereby removing chances of spoiling the true optical irradiance distribution within the viewed scene. An N = 4096 bit length Walsh code set is generated to temporally encode the P = 3600 CDMA-mode CAOS pixels. The all “1” sequence Walsh code in the code set is not used for the camera as each CDMA code should have an equal number of “1”s and “0”s across the encoding set to produce the same point PD light exposure times. The Hadamard matrices for code generation have an order of 2^n, with 2^12 = 4096 and 2^11 = 2048. As 2^11 = 2048 is less than P = 3600, the 4096 bit length codes are deployed for the CDMA-mode of the camera. Figure 2(c) shows the CDMA-mode CAOS image captured by the smart camera using a code bit rate of 10 KHz, giving a CDMA-mode image data acquisition time of 0.4096 seconds.
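The code length bookkeeping used here can be written out explicitly; the short sketch below (variable names are illustrative) picks the smallest power-of-two Walsh code length offering at least P balanced codes once the all “1” row is discarded, and computes the resulting CAOS frame signal acquisition time.

P = 3600                 # CDMA-mode CAOS pixels (60 x 60 grid of 5 x 5 micro-mirror pixels)
N = 2                    # candidate Walsh code length in bits
while N - 1 < P:         # the all-ones Hadamard row is discarded, leaving N - 1 usable balanced codes
    N *= 2
bit_rate = 10e3          # code bit rate, bits/s
T_signal = N / bit_rate  # CAOS frame signal acquisition time, s
print(N, T_signal)       # 4096 0.4096  (2048 - 1 = 2047 < 3600, so 2048-bit codes are insufficient)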

Fig. 2 (a) Flashlight target image obtained with flashlight off and using ambient room lighting and an external CMOS Camera. (b) Target image seen using the CMOS-mode of the CAOS smart camera engaging a 2.3 OD ND filter. (c) Target image seen with flashlight on and using the CDMA-mode of CAOS smart camera with code bit rate of 10 KHz and using a 4 mm x 4 mm zone on the DMD.

Figure 3(a) shows the factor of 200 attenuated unsaturated CMOS-mode gray-scale (color coded) image map with a 115:1 max/min contrast. Figure 3(b) shows the CDMA-mode CAOS image data gray-scale map (color coded) that has a similar irradiance max/min contrast of 156:1, although this data is the true un-attenuated gray-scale image map of the flashlight. These images point to the robustness of the CDMA image acquisition mode of the CAOS smart camera. Figure 3(c) shows a section of the DAQ-sampled point PD signal for the Fig. 2(c) image. A key point to note about the point PD output is that in the first bit time slot, all CDMA code sequences have a “1” value, maximizing the point PD signal and producing a peak. This peak signal location in time (see the t = 0 value in the Fig. 3(c) time trace) is used to time stamp the start of each new CAOS frame and allows extraction of the CAOS frame point PD signal for further signal processing and decoding. This feature of the operational camera provides a natural CAOS frame trigger signal within the camera electronics. The DAQ sampling rate was set to oversample the PD signal so as to compute a time averaged voltage value over each bit duration to enable robust CAOS image computation. The Fig. 3(c) trace used a DAQ sampling rate of 200 Ksps and 10 data samples per code bit duration were used to create an average point PD value. The Fig. 3(c) signal resembles a noisy waveform in time, highlighting the natural security feature of the CAOS smart camera. Figure 3(d) shows the RF spectrum (DC to 31 KHz) of the Fig. 3(c) time domain signal and highlights the spread spectrum nature of the signal, again indicative of a secure communications channel.
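A minimal sketch of this frame trigger idea is given below (an assumed illustration, not the actual acquisition code): since every CDMA code has a “1” in the first bit slot, averaging the DAQ record over a one-bit-wide sliding window and locating its maximum time stamps the start of the CAOS frame.

import numpy as np

def find_frame_start(daq_record, samples_per_bit):
    # Return the sample index where the all-ones first bit slot (the frame peak) begins.
    window = np.ones(samples_per_bit) / samples_per_bit
    smoothed = np.convolve(daq_record, window, mode="valid")   # one-bit-slot moving average
    return int(np.argmax(smoothed))    # the maximum occurs where the window covers the peak bit slot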

Fig. 3 (a) CMOS-mode CAOS smart camera generated flashlight target gray-scale (color coded) image with a 115:1 max/min irradiance ratio when using a 200X optical attenuation via ND filter. (b) CDMA-mode CAOS smart camera generated un-attenuated true flashlight target gray-scale (color coded) image with a 156:1 max/min irradiance ratio. (c) Time domain plot of the voltage signal from the point PD over a CDMA-mode CAOS pixel encoding duration of 0.4096 seconds over 4096 code bits. (d) Frequency domain plot of the point PD signal generated by 3600 CDMA-mode CAOS pixels.

Figure 4(a) shows a 1-D spatial trace of the normalized irradiance values of the 3600 CAOS pixels of the Fig. 2(c) image obtained using time-domain correlation processing with the assigned decoding binary sequences. This data showed a peak value of Imax = 24.58 and a minimum value of Imin = 0.1571, indicating a viewed flashlight target CAOS-mode irradiance ratio of 156.5:1 or an image DR = 20log(Imax/Imin) of 43.9 dB. The background computational noise floor in the correlation-based computed images is of the order of 10^−4 and appears as the black and very dark blue image display colors in Figs. 2(c) and 3(b), respectively. To further check the robustness of the CDMA-mode 43.9 dB DR reading, the extreme DR capability FDMA-TDMA mode [8] of the CAOS camera is deployed for this viewed un-attenuated target, giving a measured image DR of 42 dB. This DR value is similar to the CMOS-mode and CDMA-mode image DRs, again pointing to the robustness of the CDMA-mode image extraction process when viewing the approximately 42 dB DR bright test target.
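As a quick check, the quoted CDMA-mode image DR follows directly from the stated extreme recovered pixel values:

$$\mathrm{DR} = 20\log_{10}\left(\frac{I_{max}}{I_{min}}\right) = 20\log_{10}\left(\frac{24.58}{0.1571}\right) = 20\log_{10}(156.5) \approx 43.9\ \mathrm{dB}$$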

Fig. 4 (a) Normalized irradiance values of the CAOS pixels recovered after time domain correlation processing for 3600 CAOS pixels operating simultaneously in the CDMA-mode.

Next, experiments are conducted with two different targets to highlight the programmability features of the CAOS smart camera in terms of imaging specifications such as CDMA code bit rate and bit length (e.g., CAOS frame capture time), targeted image zone location, size and shape, imaging pixel spatial resolution, and electronic gain. Shown in Fig. 5(a) is the second target scene, a machined traffic sign transmissive aluminium target observed by the Canon camera under room lighting conditions. This target is placed 18.5 cm from the capture lens L1 and the target is illuminated using the Rolson flashlight to create a bright target scene within a dark background (with room lights off). The DMD to L1 distance is adjusted to 5.1 cm to optimize demagnification for imaging this target. With all the DMD micro-mirrors directing light to the CMOS sensor, Fig. 5(b) shows the CMOS-mode image of the target when using a 1.9 OD ND filter to suppress the bright light from the traffic sign to enable generation of an unsaturated image from the CMOS sensor so the CAOS camera can select a target sub-zone for intelligent target information extraction. Using this CMOS-mode based target information, the CAOS-mode of the camera next uses a 60 × 60 CAOS pixel target scene grid, where each pixel is made of 5 × 5 micro-mirrors. An N = 4096 bit length Walsh code set is used to view P = 3600 CDMA-mode CAOS pixels. To test the variable bit rate option in the smart camera system, the bit rate is changed to 1 KHz. Figure 5(c) shows the CAOS image for the traffic sign target. The irradiances obtained using time-domain correlation processing show a peak value of 8.3307 and a minimum value of 0.065, indicating a viewed target irradiance ratio of 128.17:1 or a DR of 42.2 dB.

Fig. 5 (a) Machined traffic sign target seen using the Canon camera and room lighting. (b) CMOS-mode image from the CAOS Smart Camera when using a 1.9 OD filter to attenuate the flashlight lit traffic sign target observed with no room lighting. (c) CDMA-mode CAOS image from Smart Camera with no optical attenuation of the target. (d) An improved 2x2 micro-mirror spatial resolution CDMA-mode CAOS image of the top end of the traffic sign. (e) A further improved 1 micro-mirror spatial resolution CDMA-mode CAOS image of the top right end edge of the arrowhead using a 20 dB point PD gain setting. (f) Using a higher 60 dB point PD gain setting, a lower spatial noise image of the captured image shown in (e).

To test the higher resolution imaging ability of the CAOS smart camera, the CAOS pixel size is programmed to a smaller size of 2 x 2 micro-mirrors and the captured CDMA-mode image of the top zone of the target (the arrowhead) is shown in Fig. 5(d). To further improve the imaged spatial resolution of a specific section of the target (the top right edge of the arrowhead), the CAOS pixel size is programmed to be 1 micro-mirror. P remains 3600 (60 x 60) CDMA-mode CAOS pixels and this pixel set forms a zoomed-in 0.82 mm × 0.82 mm viewing zone on the DMD that matches half of the arrowhead viewing zone. With these settings, Fig. 5(e) shows the obtained CAOS image when using the 20 dB electronic gain setting of the point PD module. With only a single micro-mirror spatial zone used to encode each CAOS pixel in the CDMA-mode (keeping P the same) to zoom into a focused region of the incident image, and assuming a uniform illumination target scene, a much smaller fraction of the light irradiance from the image plane is used to produce the encoded photo-current signal for the P CAOS pixels. In other words, for the described scenario, the SNR of the point PD signal drops when using a smaller size CAOS pixel. In imaging measurements, a common definition is SNR = V̄/σ, where V̄ is the mean of the signal and σ is the standard deviation (representing noise) of the signal [30]. The data acquired from the DAQ card is used to compute this SNR. Note that apart from the first bit duration in the point PD detected signal within one CAOS encoding time duration, the remaining N−1 bit time slots contain the same number of 1 value codes per bit slot across the full signal duration. Hence V̄, the time average across the N−1 bit slots, can be computed using the point PD signal data provided by the DAQ card. V̄n is the arithmetic average of the point PD signal per bit time slot. σn is the standard deviation for the nth bit [31] and σ is the arithmetic mean of the standard deviation over all N−1 bits of the time-frequency code. One can write:

$$\sigma_n = \sqrt{\frac{1}{F-1} \sum_{f=1}^{F} \left[ V_f - \bar{V}_n \right]^2}, \quad \sigma = \frac{1}{N-1} \sum_{n=1}^{N-1} \sigma_n \quad \mathrm{and} \quad \bar{V} = \frac{1}{N-1} \sum_{n=1}^{N-1} \bar{V}_n \tag{16}$$
Here Vf is the fth voltage data sample from the DAQ card with f = 1, 2, 3,…, F. For this experiment, F = 10. For the Fig. 5(e) image, the DAQ data computed SNR = 360. To increase the point PD signal output SNR and enable a reduction of the spatial noise of the Fig. 5(e) captured CAOS image, the point PD gain is therefore increased to 60 dB. As expected, Fig. 5(f) shows a more robust (i.e., reliable) higher SNR and lower spatial noise CAOS image of the target edge zone. Specifically, the Fig. 5(f) experimental image data SNR = 3394. To put things in context, the Rose criterion says that an SNR of at least 5 is needed to determine image features with 100% certainty [32]. Clearly, the CDMA-mode CAOS images shown meet this criterion.
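The SNR estimate of Eq. (16) can be computed from the recorded DAQ samples in a few lines; the sketch below (assumed function name and array layout, not the in-house code) takes the F oversampled values in each of the N−1 usable bit slots (the all “1” trigger bit excluded) and returns the mean-to-standard-deviation ratio defined above.

import numpy as np

def caos_pd_snr(bit_slot_samples):
    # bit_slot_samples: (N - 1, F) array of point PD DAQ voltages, one row per code bit slot.
    v_bar_n = bit_slot_samples.mean(axis=1)          # per-bit mean voltages, V-bar_n of Eq. (16)
    sigma_n = bit_slot_samples.std(axis=1, ddof=1)   # per-bit sample standard deviations (F - 1 divisor)
    v_bar = v_bar_n.mean()                           # frame averaged signal level, V-bar of Eq. (16)
    sigma = sigma_n.mean()                           # frame averaged noise estimate, sigma of Eq. (16)
    return v_bar / sigma                             # SNR as defined in the text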

The third target scene observed by the CAOS smart camera is a transmissive letter sign target that is an irregular non-uniform L-shaped cut-out in a piece of black cardboard paper. This target is again lit by the Rolson flashlight, creating a bright target surrounded by a dark background. Figure 6(a) shows an image of the target seen by the Canon camera under room lighting conditions. The target to L1 capture lens distance is 18.5 cm and the DMD to L1 distance is 5.1 cm. The smart camera is set to its initial CMOS-mode with all the DMD micro-mirrors directing light to the CMOS sensor and, with the ND filter set to 0.9 OD, Fig. 6(b) shows the image of the target. Although this image has some saturated pixels, it allows selection of the CAOS-mode viewing zone on the DMD. Specifically, to test the variable CAOS-mode pixel count feature in the system, P is set to a 17 x 11 = 187 CAOS pixel target scene grid. Here each CAOS pixel is 10 x 10 micro-mirrors, implying that a 170 x 110 micro-mirrors zone is activated for CDMA-mode operations on the DMD. With an N = 1024 bit length Walsh code set used to create the P = 187 CDMA-mode CAOS pixels with a bit rate set to 1 KHz, Fig. 6(c) shows the CDMA-mode CAOS image of the target observed by the smart camera with a measured max/min irradiance contrast of 123:1, i.e., DR = 41.83 dB. To obtain a higher spatial resolution image, the CAOS pixel size is reprogrammed to 5 x 5 micro-mirrors. Using P = 748 with a 34 x 22 CAOS pixels grid, Fig. 6(d) shows the higher resolution CAOS image of the target with a measured max/min irradiance contrast of 77:1, i.e., DR = 39.2 dB. The Fig. 6 image data points again to the robustness of the image data provided by the CDMA-mode of the CAOS smart camera, showcasing recovery of the non-uniform contours of the non-regular L-shaped target.

Fig. 6 (a) Irregular contour L-shaped target. (b) CMOS-mode CAOS smart camera target image when attenuating the target with a 0.9 ND filter. (c) CDMA-mode CAOS smart camera captured target image using 187 CAOS pixels and 10 x 10 micro-mirrors CAOS pixel size. (d) Higher resolution CDMA-mode CAOS smart camera captured target image using 748 CAOS pixels and 5 x 5 micro-mirrors CAOS pixel size.

Experiments so far have shown that the Rolson flashlight bright target has a DR of approximately 42 dB, which is within the 51 dB DR of the CMOS sensor. To generate a controllable DR scene that exceeds the 51 dB DR of the CMOS sensor, an electrically controlled small filament lamp acting as a weak light target is placed next to the bright Rolson flashlight (see Fig. 7(a)), thus forming a two-target controllable DR scenario. This target scene is placed 47 cm from L1 with the DMD to L1 distance set to 4.4 cm.

Fig. 7 (a) Photo of a high DR target using a small filament bulb placed next to a flashlight. (b) Logarithmic scale CMOS mode image of filament and flashlight targets when using a 2.3 OD optical attenuation of scene. (c) Log-scale CMOS-mode image missing the filament target as filament light level has dropped to create a > 51 dB DR scene.

The goal of the next experiments is to simultaneously recover both the bright flashlight target and the weak target using the CDMA-mode when the target scene DR exceeds the DR of the smart camera CMOS-mode. With the filament light level electrically set so that the flashlight brightest level versus the filament brightest level is ~20 dB, a ratio that is within the DR of the CMOS sensor, Fig. 7(b) shows both the recovered targets within the full scene that is also under a 2.3 OD attenuation to prevent CMOS sensor saturation. The filament appears as a light blue color protrusion at the top left of the Fig. 7(b) photo, where the log to the base-10 of the scaled CMOS-pixel irradiance value is plotted in color. With the filament light level now decreased to create a 2-target scene with a relative DR exceeding 51 dB, Fig. 7(c) shows that the CMOS sensor fails to see the filament target, a result expected as the scene DR exceeds the DR of the CMOS sensor in the CAOS smart camera.

Next, the CDMA-mode of the camera is engaged to image the dual-target scene using a setting of 625 CAOS pixels with a code sequence of 1024 bits per CAOS pixel and each CAOS pixel made up of 20 × 20 micro-mirrors. Figure 8(a) shows the log-scale CDMA-mode image that captures both targets in the scene and is taken for the same filament light level settings shown for the Fig. 7(b) image (except that no ND filter is used). With the filament light level much lower than that of the Fig. 7(c) setting, Fig. 8(b) shows that the CDMA-mode is able to recover the filament light (see the light blue pixels in the black rectangle) with a measured maximum bright flashlight pixel level of 580 and a maximum filament pixel level of 1, with background surrounding pixel values of 10^−2. For comparison, Fig. 8(c) shows the log-scale CDMA-mode image with the filament turned off. These results show that the experimentally acquired CDMA-mode CAOS image has a 55 dB DR that has enabled the dual-target detection. It is important to appreciate that further extreme DR extraction of a localized target zone (e.g., shown as the black rectangle zone in Fig. 8) can be achieved via the FDMA-TDMA mode of the CAOS smart camera, thereby highlighting the camera multi-mode programmability [10].
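The quoted dual-target DR follows from the measured extreme pixel levels:

$$\mathrm{DR} = 20\log_{10}\left(\frac{580}{1}\right) \approx 55.3\ \mathrm{dB},$$

which exceeds the 51 dB DR of the CMOS sensor, consistent with the filament target being lost in the CMOS-mode image of Fig. 7(c) yet recovered in the CDMA-mode image of Fig. 8(b).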

Fig. 8 Log-scale CDMA-mode images of the dual-targets scene acquired for different DR conditions. (a) Dual target DR is under the 51 dB limit of the CMOS sensor. (b) Dual target DR is at 55 dB. (c) The filament is off while the flashlight is on, so a scene background is observed.

4. Conclusion

Designed and demonstrated is the CDMA-mode CAOS smart camera that combines a conventional PDA sensor (e.g., a CMOS sensor) with time-frequency agile CAOS pixels operating in the CDMA-mode. RF style binary sequence Walsh CDMA codes encode the CAOS pixels for image capture, and time domain correlation processing using a DAQ card ADC and DSP decodes the observed irradiance map. CMOS sensor-based bright target imaging requires optical attenuation that in certain cases can optically spoil the imaged light characteristics, thereby impeding reliable machine vision. Using the proposed CAOS smart camera CDMA-mode, four different types of bright target visible band scenes are successfully observed without engaging optical scene attenuation, showing the robustness of the CDMA-mode versus the CMOS-mode. Various programmability features of the smart camera are experimentally tested by changing the coding bit rate, code bit length, number of CAOS pixels, CAOS pixel electronic gain, CAOS pixel spatial resolution, and scene DR. Using a fast frame rate DMD and 4096 bit sequence Walsh codes, a 3600 CAOS pixel frame is generated in 0.4096 seconds and a best spatial resolution of one micro-mirror size is achieved. The demonstrated CDMA-mode of the CAOS smart camera is suited for imaging of bright multispectral targets where robustness of image data is critical for reliable system decisions. Potential applications may include industrial machine vision, food inspection, robot vision, and automotive sensing.

Acknowledgments

Special thanks to J. Pablo La Torre for assistance with the experiment. Partial support is via a joint EU-Enterprise Ireland commercialization ERDF fund 2014-2020.

References and links

1. K. Kearney and Z. Ninkov, “Characterization of a digital micro-mirror device for use as an optical mask in imaging and spectroscopy,” Proc. SPIE 3292, 81 (1998).

2. J. Castracane and M. Gutin, “DMD-based bloom control for intensified imaging systems,” Proc. SPIE 3633, 234 (1999).

3. S. Nayar, V. Branzoi, and T. Boult, “Programmable imaging using a digital micro-mirror array,” in Proc. of IEEE Conf. on Computer Vision and Pattern Recognition, (IEEE, 2004), pp.436–443.

4. S. Sumriddetchkajorn and N. A. Riza, “Micro-electro-mechanical system-based digitally controlled optical beam profiler,” Appl. Opt. 41(18), 3506–3510 (2002).

5. N. A. Riza, “Compressive optical display and imager,” US Patent 8783874 B1 (2014).

6. S. Selivanov, V. N. Govorov, A. S. Titov, and V. P. Chemodanov, “Lunar Station Television Camera,” (Reilly Translations): NASA CR-97884 (1968).

7. F. O. Huck, and J. J. Lambiotte, “A Performance Analysis of the Optical-Mechanical Scanner as an Imaging System for Planetary Landers,” NASA TN D-5552 (1969).

8. N. A. Riza, J. P. La Torre, and M. J. Amin, “CAOS-CMOS camera,” Opt. Express 24(12), 13444–13458 (2016).

9. N. A. Riza, M. J. Amin, and J. P. La Torre, “Coded Access Optical Sensor (CAOS) Imager,” J. Eur. Opt. Soc. Rap. Pub. 10, 15021 (2015).

10. N. A. Riza and J. P. La Torre, “Demonstration of 136 dB dynamic range capability for a simultaneous dual optical band CAOS camera,” Opt. Express 24(26), 29427–29443 (2016).

11. N. A. Riza, “The CAOS Camera Platform – Ushering in a Paradigm Change in Extreme Dynamic Range Imager Design,” Proc. SPIE Vol. 10117, 101170L (2017).

12. N. A. Riza, “CAOS Smart Camera captures targets in extreme contrast scenarios,” (Photonics Spectra Magazine Technical Feature article, 2017). https://www.photonics.com/Article.aspx?AID=61645

13. N. A. Riza and M. A. Mazhar, “CAOS Smart Microscope,” in Proc. Photonics Ireland Conf. (2017).

14. Rochester Institute of Technology, “Rods and Cones,” https://www.cis.rit.edu/people/faculty/montag/vandplite/pages/chap_9/ch9p1.html

15. M. F. Land and B. W. Tatler, “Steering with the head: The visual strategy of a racing driver,” Elsevier Current Biol. J., Brief Commun. 11(15), 1215–1220 (2001).

16. E. C. Farnett and G. H. Stevens, “Pulse compression radar,” in RCA/GE Aerospace, Radar Handbook, ed., M. I. Skolnik, (McGraw-Hill, 1990).

17. W. C. Y. Lee, “Overview of Cellular CDMA,” IEEE Trans. Veh. Technol. 40(2) (1991).

18. R. Gold, “Optimal binary sequences for spread spectrum multiplexing (corresp.),” IEEE Trans. Inf. Theory 13(4), 619–621 (1967).

19. V. DaSilva and E. S. Sousa, “Performance of orthogonal CDMA codes for quasi-synchronous communication systems,” in Proc. of IEEE Conference on Universal Personal Communications (IEEE, 1993), pp. 995–999.

20. E. H. Dinan and B. Jabbari, “Spreading codes for direct sequence CDMA and wideband CDMA cellular networks,” IEEE Commun. Mag. 36(9), 48–54 (1998).

21. F. J. MacWilliams and N. J. A. Sloane, The Theory of Error Correcting Codes (North Holland 1986).

22. V-7001 SuperSpeed module datasheet, Vialux, Germany, (2017).

23. M. J. E. Golay, “Multi-slit spectrometry,” J. Opt. Soc. Am. 39(6), 437–444 (1949).

24. P. Gottlieb, “A television scanning scheme for a detector-noise limited system,” IEEE Trans. Inf. Theory 14(3), 428–433 (1968).

25. N. A. Riza, Chapter 4: Optical Disk-based Acousto-Optic Spectrum Analysis, Caltech Ph.D. Thesis, Oct. 1989.

26. J. D. Gibson, ed., Mobile Communications Handbook, (CRC, 2013 III Edition).

27. J. L. Walsh, “A closed set of normal orthogonal functions,” Am. J. Math. 45(1), 5–24 (1923).

28. J. Hadamard, “Resolution d’une question relative aux determinants,” Bull. Sci. Math. 17(1), 240–246 (1893).

29. J. Seberry and M. Yamada, “Hadamard matrices, sequences, and block designs,” in Contemporary Design Theory: A Collection of Essays, J. H. Dinitz, and D.R. Stinson, ed. (Wiley 1992).

30. J. T. Bushberg, J. A. Seibert, E.M. Leidholdt, Jr, and J. M. Boone, The Essential Physics of Medical Imaging (Lippincott Williams & Wilkins, 2002).

31. D. Shafer and Z. Zhang, Introductory Statistics, (Saylor Foundation 2012).

32. A. Rose, Vision – Human and Electronic (Plenum 1973).
