Expand this Topic clickable element to expand a topic
Skip to content
Optica Publishing Group

Single-pixel tracking of fast-moving object using geometric moment detection

Open Access Open Access

Abstract

Real-time tracking of fast-moving object have many important applications in various fields. However, it is a great challenge to track of fast-moving object with high frame rate in real-time by employing single-pixel imaging technique. In this paper, we present the first single-pixel imaging technique that measures zero-order and first-order geometric moments, which are leveraged to reconstruct and track the centroid of a fast-moving object in real time. This method requires only 3 geometric moment patterns to illuminate a moving object in one frame. And the corresponding intensities collected by a single-pixel detector are equivalent to the values of the zero-order and first-order geometric moments. We apply this new approach of measuring geometric moments to object tracking by detecting the centroid of the object in two experiments. The root mean squared errors in the transverse and axial directions are 5.46 pixels and 5.53 pixels respectively, according to the comparison of data captured by a camera system. In the second experiment, we successfully track a moving magnet with a frame rate up to 7400 Hz. The proposed scheme provides a new method for ultrafast target tracking applications.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Target tracking has important applications in various field, such as biomedical [1], computer vision [2,3] and non-line-of-light sensing [46]. The conventional method of target tracking based on plane array imaging to [710] relies mainly on photography. Image sensors are used to capture sequential images of a target, and image postprocessing and image analysis algorithms are further employed to determine the trajectory of the moving target object in the images. Hence, the accuracy of target tracking depends primarily on the quality of the sequential images captured and the performance of the algorithm used. Some high-performance high-speed cameras can capture sequential images with high signal-to-noise ratio within a short exposure time. However, when tracking a moving object in real time, high-speed cameras generally feature a huge data throughput, which is proportional to the frame rate of the camera resulting in excessive hardware requirements in practical applications. Furthermore, while high-speed cameras have superior performance in the visible region of the spectrum, they fail in some nonvisible regions, such as the infrared and terahertz wavebands thereby restricting application range of the plane array imaging technique for tracking the moving object.

Over the past two decades, a completely different imaging method -single-pixel imaging has been a major research focus [1114]. Single-pixel imaging retrieves spatial information by recording only the total light intensities resulting from each pattern measured by a single-pixel detector. Because of the wide spectral response of a single-pixel detector, single-pixel imaging exhibits superior performance in some nonvisible, such as infrared [13] and terahertz [14] wavebands. Therefore, when the single-pixel imaging technique is used to track a target, it can operate normally in wavebands where area-array cameras cannot work effectively.

There are two ways to obtain the trajectory of a moving object using single-pixel imaging. The first approach is similar to the plane array imaging method, and analyzes the trajectory of the moving object from successive reconstructed images [15,16]. The imaging speed determines the speed at which the target object’s position can be acquired. However, the frame rate of single-pixel imaging is limited by the modulation frequency of the spatial light modulator, which projects each of patterns individually onto the scene. The frame rate is therefore proportional to the ratio of the modulation frequency to the number of patterns. Since the single-pixel technique requires sequential measurements, the number of patterns used in the imaging process is very large. To reduce the number of patterns required for imaging, some researchers have proposed using compressed sensing algorithms [1719]. However, even at a compression ratio of ∼7% and sacrificing the image quality, reconstructing an image 64${\times} $ 64 still requires 600 patterns (include their inverse) [18]. The number of patterns is still relatively large. For the same spatial light modulator performance, it is assumed that a large number of illumination patterns directly corresponds to a lower imaging frame rate, resulting in a lower tracking frame rate. Therefore, to further increase the frame rate, some researchers have proposed a second approach in which the moving target's trajectory is directly obtained without reconstructing the target image [2022]. The key to this approach is to increase the frame rate by reducing the number of patterns used. For instance, we previously used 128 Hadamard patterns to track a moving object in real-time at 177 Hz [20]. Zhang et al. used 6 Fourier basis patterns to achieve the real-time tracking of a specific type of motion object at 1666 Hz in 2-D space [21]. Deng et al. improved the algorithm to assume that the object is far smaller than the scene and similarly used 6 Fourier basis patterns to track a moving object at 1666 Hz in 3-D space [22]. To date, this frame of 1666 Hz achieved using 6 Fourier basis patterns is the fastest frame rate attainable for target tracking using single-pixel imaging.

In this paper, we present a novel scheme to track a fast-moving object in real time by directly detecting the centroid of the object. To the best of our knowledge, this is the first demonstration of spatial light modulation and single-pixel detection for measuring zero-order and first-order geometric moments. Only 3 geometric moment patterns, constructed according to the geometric moment characteristics, are necessary for structured modulation, and the back-scattered intensities measured by a single-pixel detector are equivalent to the zero-order and first-order geometric moment values. The centroid of the object can be acquired by analyzing these zero-order and first-order geometric moment values. In this way, we experimentally demonstrate that the proposed method is capable of tracking a moving object with a frame rate up to 7400 Hz, which is more than 4 times greater than the currently attainable frame rate. Moreover, by combining time-of-flight [18], time encoding [23] and virtual fringe projection [24], the proposed technique can be extended to three-dimensional tracking.

2. Method

Single-pixel imaging is based on measurements of the level of correlation between a scene and a series of illumination patterns. The patterns can either be projected onto the scene, a technique known as structured illumination, or be used to passively mask an image of the scene, a technique known as structured detection [25]. The total amount of light reflected or transmitted by the scene is recorded by a single-pixel detector, representing a measurement. ${I_n}$. of the level of correlation between the scene $f({x,y} )$ and each pattern ${S_n}({x,y} )$

$${I_n} = \sum\limits_{x,y} {f({x,y} ){S_n}({x,y} )} .$$
where n is the pattern index. The patterns in single-pixel imaging are divided mainly into nonorthogonal and orthogonal basis patterns [26]. Here, we use a kind of non-orthogonal basis patterns known as ‘geometric moment patterns’, which are constructed according to the characteristics of geometric moment. As described below, we report three new illumination patterns herein, i.e., the geometric moment patterns ${S_1}({x,y} )$, ${S_2}({x,y} )$ and ${S_3}({x,y} )$. According to the literature [27], the geometric moments are nonorthogonal moments derived from the polynomials ${x^p}{y^q}.$ The geometric moment ${m_{pq}}$ of scene $f({x,y} )$ in 2-D space is defined as
$${m_{pq}} = \sum\limits_{x,y} {{x^p}{y^q}f({x,y} )} . $$
where p and q are non-negative integers and $({p + q} )$ is called the order of the geometric moment. Comparing Eq. (1) and Eq. (2), we find that when three 2-D illumination patterns are specified with ${S_1} = \left[ {\begin{array}{cccc} 1&1& \cdots &1\\ 1&1& \cdots &1\\ \vdots & \vdots & \vdots & \vdots \\ 1&1& \cdots &1 \end{array}} \right],$ ${S_2} = \left[ {\begin{array}{cccc} 1&2& \cdots &N\\ 1&2& \cdots &N\\ \vdots & \vdots & \vdots & \vdots \\ 1&2& \cdots &N \end{array}} \right]$ and ${S_3} = \left[ {\begin{array}{cccc} N&N& \cdots &N\\ {N - 1}&{N - 1}& \cdots &{N - 1}\\ \vdots & \vdots & \vdots & \vdots \\ 1&1& \cdots &1 \end{array}} \right]$ for the first quadrant of Cartesian coordinates, the back-scattered intensities ${I_n}({n = 1,2,3} )$ measured by the single-pixel detector are equivalent to the zero-order geometric moment ${m_{00}}$ and the two first-order geometric moments $({{m_{10}},{m_{01}}} ),$ respectively. After the zero-order and first-order geometric moment values are determined, the centroid $({{x_c},{y_c}} )$ of the scene can be obtained
$${x_c} = {m_{10}}/{m_{00}} = {I_2}/{I_1}$$
$${y_c} = {m_{01}}/{m_{00}} = {I_3}/{I_1}.$$

The above theoretical analysis shows that the proposed method uses only 3 geometric moment patterns for structured modulation, and the results of single-pixel detection are the zero-order and first-order geometric moment values. Based on the first application of geometric moment patterns, this method provides a new approach for measuring the geometric moment of an object and enable efficient centroid tracking. Furthermore, since only 3 geometric moment patterns are needed for one frame, we can achieve a fast-moving object with a high frame rate in the following experiments by using a 22.2 kHz spatial light modulator.

On the object plane, the projection size of the pattern is assumed to be $P \times P.$ At a certain speed, the temporal resolution is S1-S3 projection time $\Delta t.$ The maximum moving speed allowed in the x and y directions can be expressed as

$${V_x} = P/\Delta t$$
$${V_y} = P/\Delta t.$$

Thus, the maximum allowable moving speed of a fast-moving object is

$${V_{\max }} = \sqrt {V_x^2 + V_y^2} = \sqrt 2 P/\Delta t.$$

Obviously, at this maximum speed, the measurement results will have relatively large errors. For rigid motion, the motion blur caused to the object $g({x,y} )$ can be expressed as

$$f({x,y} )= \int_0^{\Delta t} {g({x - {V_x}t,y - {V_y}t} )dt.}$$

It can be seen that the motion blur becomes more and more serious with the increase of speed. Combining the centroid solution formula with Eq. (8), it can be found that the measured result $({{x_c},{y_c}} )$ is the centroid of the motion blurred image. The centroid measurement error increases with the increase of motion speed.

3. Experiments

The schematic diagram of the proposed method is shown in Fig. 1. In the illumination arm, a light-emitting diode (LED) with a maximum power of 200 mW (M530L4-C1, Thorlabs) serves as the light source. The nominal wavelength of LED is 530 nm and the bandwidth is 35 nm. A digital micromirror device (DMD, Texas Instruments Discovery V7000 with 1024${\times}$ 768 micromirrors) is used to project a set of patterns onto the experimental scene. The experimental scene, which contains the moving object located ∼2m from the tracking system is illuminated by structured modulated light through a projection lens (focal length 125 mm). In the detection arm, a high-speed photodiode (PMM02, Thorlabs) is used in conjunction with a collection lens (focal length 100 mm) to measure the back-scattered intensity resulting from each pattern. The analog photodiode output is sampled by a high-speed digitizer (M4i-2212-x8, Spectrum).

 figure: Fig. 1.

Fig. 1. Experimental setup. A light-emitting diode (LED) uniformly illuminates a digital micromirror device (DMD), used to provide structured illumination onto a scene through a projection lens (PL). A diffuse reflection paper driven by a 2-D motorized stage and a magnet in the scene are moving objects in the two set of experiments, respectively. Light from the scene is collected by a photomultiplier tube (PMT) through a collection lens (CL). The measured light intensities are used in a reconstruction algorithm run on an industrial personal computer (IPC) to reconstruct the centroid of the scene. (b) The photo geometric moment pattern S1

Download Full Size | PPT Slide | PDF

It is important to note that the 3 geometric moment patterns are grayscale. Therefore, to take advantage of high binary modulated rate of the DMD and considering that the grayscale modulation rate of the DMD is relatively low, we convert the geometric moment patterns from grayscale to binary by using the spatial dither method [28]. The procedure about spatial dithering is shown in Fig. 2. The spatial resolution of original grayscale pattern is 256${\times} $ 256 pixels as shown in Fig. 2(a). We then apply upsampling to the grayscale patterns using the ‘bicubic’ image interpolation algorithm. The size of the upsampled patterns (Fig. 2(b)) is 768${\times} $ 768 pixels. As such, each pixel in the original pattern will be represented by 3${\times} $ 3 pixels in the upsampled pattern. In other words, the upsampled pattern consist of 256${\times} $ 256 super pixels and each super pixel consist of 3${\times} $ 3 regular pixels similar to 3${\times} $ 3 pixels binning. Finally, we perform the Floyd-Steinberg error diffusion dithering method to the upsampled pattern and generate the binary geometric moment patterns (Fig. 2(c)). Figure 2(d) shows a partial view of the two parts in the binary geometric moment pattern. In our experiments, the spatial resolution of all 3 geometric moment patterns is 256${\times} $ 256. The partial photographs of experimentally generate binary geometric moment pattern (S2) are shown in Fig. 2(e). Three experiments are conducted: The first one is a comparative experiment at a relatively low frame rate (∼1800 Hz, online), The second is a tracking experiment at a high frame rate (∼7400 Hz, offline), and the third is a comparative experiment of tracking a fast-moving object at two different frame rates (∼2000 Hz and ∼7400 Hz, offline).

 figure: Fig. 2.

Fig. 2. Binary geometric moment patterns generation by spatial dithering. (a) Original grayscale geometric moment pattern S2 with a spatial resolution of 256${\times} $ 256 pixels. (b) Upsampled grayscale geometric moment pattern by using the ‘bicubic’ image interpolation algorithm. (c) Binary geometric moment pattern by using the Floyd-Steinberg error diffusion dithering method. (d) A partial view of the two parts in the binary geometric moment pattern. (e) Partial photographs of experimentally generate binary geometric moment patterns captured by a camera (iPhone X).To display clearly, white paper was used as the background during shooting.

Download Full Size | PPT Slide | PDF

In the first experiment, our goal is to track a moving diffuse reflection paper with a diameter of 50 mm driven by a 2-D motorized stage. The DMD, with 185 microseconds of modulated time, is illuminated by the LED, and 3 binary geometric moment patterns repeatedly displayed on the DMD are projected onto the scene (Fig. 3(a)). The back-scattered intensities measured by the single-pixel detector are used to calculate the centroid of the diffuse reflection paper. The obtained experimental results are depicted in Figs. 3(b)-3(d). To evaluate the object tracking accuracy, a side camera (30 Hz, iPhone) is used to capture successive images of the target object. These successive images are employed to analyze the trajectory of the object by the centroid algorithm [29]. The trajectories reconstructed by the two methods are plotted in Fig. 3(b), demonstrating that the trajectory calculated by the proposed method coincides with the corresponding result obtained by the camera. Notably, due to the limited frame rate of the camera, the amount of data captured by the camera is far smaller than the amount of data obtained by the proposed method in the tracking experiment. Therefore, to quantitatively assess the accuracy of object tracking with proposed method, we perform frame extraction technique to sample the experimental results, following which we compare the experimental results with the data captured by the camera, and find that the root mean squared errors in the transverse and axial directions are 5.46 pixels and 5.54 pixels respectively. Because noise directly affects the signal quality, resulting in errors in the measurement of the zero-order and first-order geometric moments, the centroid coordinates detected in real time fluctuate in the experiment. To evaluate the stability of the proposed method, the same trajectory of the diffuse reflection paper driven by the 2-D motorized stage is repeated ten times, and the average trajectory is illustrated in Fig. 3(b). The standard deviation of these ten sets of experimental data in the transverse and axial directions are provided in Fig. 3(d), showing that the standard deviation in the transverse and axial direction do not exceed 4.5 pixels, which demonstrates the stability of the proposed method. Since the modulated time of the DMD is 185 microseconds and each frame requires 3 patterns, the frame rate we achieve is ∼1800 Hz. Visualization 1 demonstrates the motion of the object and the corresponding real-time online detection results.

 figure: Fig. 3.

Fig. 3. Experimental results of the real-time online tracking of a moving diffuse reflection paper up to 1800 Hz. (a) Illustration of a scene containing diffuse reflection paper driven by a 2-D motorized stage and 3 geometric moment patterns. (b) The trajectories reconstructed from the two datasets. The solid blue line indicates the average trajectory of the object obtained by our method, while the solid red line represents the trajectory obtained from the continuous images captured by the camera. (see Visualization 1). The scale bar is 12.5 mm. (c) 1-D representation of the average trajectory by our method, where the red and blue lines denote the average trajectories in the transverse and axial directions, respectively. (d) Evolution of the error during the tracking procedure. The solid lines and the shaded areas indicate the average and standard deviation of ten trajectories in the transverse (top) and axial (bottom) directions.

Download Full Size | PPT Slide | PDF

To demonstrate the tracking performance for a fast-moving object, a second experiment is conducted with the same experimental setup as that in the first experiment except a magnet (47mm x 17mm x 4mm) is used as moving object in the experimental scene ( Fig. 4(a)) instead of the diffuse reflection paper. Additionally, the modulated time of DMD is reduced from 185 microseconds to 45 microseconds. Similar to the first experiment, the influence of noise on the coordinates of the centroid detected also fluctuate in this experiment. Thus, to mitigate the impact of this phenomenon, we apply a median filter to the original data. The experimental results, comprising 22,200 frames processed in approximately 3 seconds, are illustrated in Figs. 4(b)-4(c). Figure 4(b) shows the filtered trajectory of the moving magnet, demonstrating that the magnet rotates rapidly and irregularly over time. However, the magnet can still be tracked by the proposed method. The centroid coordinates are calculated based on the original data, and the filtered data are provided in Fig. 4(c). Since exploiting the DMD enables the display of 45 microseconds/pattern, resulting in a reconstructed frame rate up to 7400 Hz assuming that 3 geometric moment patterns are needed for one frame. The detailed procedure is also dynamically presented in Visualization 2.

 figure: Fig. 4.

Fig. 4. Experimental results of tracking a moving magnet up to 7400 Hz. (a) Illustration of a scene containing a magnet on a breadboard and 3 geometric moment patterns. (b) 3-D representation of the reconstructed trajectory after filtering. The scale bar is 12.5 mm. (c) 1-D representation of the reconstructed trajectory. The gray line represents the original data, while the red and blue lines indicate the filtered data.

Download Full Size | PPT Slide | PDF

In the third experiment, the same moving object were tracked at different frame rates. Figure 5(a) shows experimental scene containing a projector (VIALUX STAR-07) including a DMD 1, a projection lens (PL), a DMD 2 (Texas Instruments Discovery V7000), a collection lens (CL) and a PMT. The projector projects 1609 object images onto the DMD 2 target surface through the PL to simulate the moving object. The reflected light from DMD 2 displaying geometric moment pattens S1-S3 is collected by a single-pixel detector through the CL. The measured light intensities are used in the proposed algorithm to acquire the centroid of the moving object. In the experiment, the modulation time of the projector is unchanged at 45 microseconds, while the modulation time of DMD 2 is 45 microseconds and 166 microseconds, respectively. Since only 3 patterns are needed for each frame, the frame rate we achieved is ∼7400 Hz and ∼2000 Hz, respectively. Then the corresponding experimental results are illustrated in Figs. 5(b)-5(d). Figure 5(b) shows real trajectory and the reconstructed trajectories, demonstrating that when tracking a fast-moving object, the trajectory of the moving object depicted at ∼7400 Hz is more accurate than the corresponding result at ∼2000 Hz, and is closer to the real trajectory. In the 1-D representation of the trajectories shown in Figs. 5(c) and 5(d), the number of centroid points obtained at ∼7400 Hz is more than that at ∼2000 Hz. Tracking a moving object at a high frame rate can reflect the details of the moving object's trajectory more accurately. Compared to tracking moving objects at a low frame rate, the result obtained at a high frame rate is closer to the real trajectory.

 figure: Fig. 5.

Fig. 5. Tracking experimental results at frame rate ∼7400 Hz and ∼2000 Hz. (a) Illustration of a scene containing a projector including a DMD 1, a projection lens, a DMD 2, a collection lens and a PMT (photomultiplier tube). (b) Real trajectory and the reconstructed trajectories. The orange line represents the real trajectory, while the red dotted line and the blue line indicate the filtered trajectory at the frame rates of ∼7400 Hz and ∼2000 Hz, respectively. The scale bar is 1 mm. (c) 1-D representation of the reconstructed trajectory at frame rate ∼2000 Hz. The gray line represents the original data, while the yellow and green lines indicate the filtered data. (d) 1-D representation of the reconstructed trajectory at frame rate ∼7400 Hz. The gray line represents the original data, while the red and blue lines indicate the filtered data.

Download Full Size | PPT Slide | PDF

4. Discussion and conclusions

We have demonstrated that the data gathering capacity of measuring zero-order and first-order geometric moments that reconstruct the centroid for tracking a fast-moving object by single-pixel detection. The method represents the first use of geometric moment patterns for structured illumination. Single-pixel measurements are performed to directly acquire the zero-order and first-order geometric moments. In this process, only 3 geometric moment patterns are needed for measuring zero-order and first-order geometric moment values and detecting the centroid in one frame. Then by using a DMD with a modulation rate of 22.2 kHz, the method is capable of measuring the zero-order and first-order geometric moment values to detect the centroid of the object at ∼7.4 KHz. Therefore, the proposed method provides a new way for the geometric moments measuring and ultrafast target tracking.

The proposed scheme has two main limitations. First, this method can track only one object at present. When tracking multiple objects, the centroid of each object cannot be directly obtained by the proposed method. Second, the proposed method can achieve only 2-D tracking, and lacks the distance information of the object. In our future work, we intend to combine time-of-flight method to expand the proposed method to 3-D tracking. The benefits of combining time-of-flight method can not only make up for the lack of distance information in the proposed method, but also track multiple objects at different distances in 3-D space.

In conclusion, this paper introduces a single-pixel tracking approach capable of reconstructing the centroid of the moving object from zero-order and first-order geometric moment measurements in which a single-pixel detector is used to collect the reflected intensity resulting from each pattern. The proposed method can realize the real-time tracking of a fast-moving object with a high frame rate without imaging. Given the advantage of photosensitivity and wide operational spectrum of the single-pixel detector, the proposed method potentially works under low-light conditions or invisible wavebands, such as the infrared waveband, using modified source and detection optics. Last but not least, our method may also be combined with imaging of moving objects [30]. The proposed method can also obtain the speed of a moving object by the centroid of the time series, so that there is no need to foresee the speed of the moving object.

Funding

Foundation of Key Laboratory of Science and Technology Innovation of Chinese Academy of Sciences (CXJJ-20S028); Youth Fund of Advanced Laser Technology Laboratory of Anhui Province (20192201); Youth Innovation Promotion Association of the Chinese Academy of Sciences (2020438).

Acknowledgments

D.F.S. acknowledges support from Youth Innovation Promotion Association of the Chinese Academy of Sciences and Youth Fund of Advanced Laser Technology Laboratory of Anhui Province. J.H. acknowledges support from Foundation of Key Laboratory of Science and Technology Innovation of Chinese Academy of Sciences. L.B.Z. performed the experiment with the assistance of W.W.M., W.Y., R.B.J. and Y.F.C. D.F.S. conducted the numerical analysis. The paper was prepared by L.B.Z. with assistance of D.F.S., J.H., K.E.Y and Y.J.W. supervised the project.

Disclosures

The authors declare no conflicts of interest.

Data availability

No data were generated or analyzed in the presented research.

References

1. S. T. Acton and N. Ray, “Biomedical image analysis: tracking,” Synthesis Lectures on Image, Video, and Multimedia Processing 2(1), 1–152 (2006). [CrossRef]  

2. M. Danelljan, G. Häger, F. Khan, and M. Felsberg, “Accurate scale estimation for robust visual tracking,” British Machine Vision Conference, Michel Valstar, ed. (BMVC, 2014), pp. 1–11.

3. Y. Li, C. Fu, F. Ding, Z. Huang, and G. Lu, “Autotrack: Towards high-performance visual tracking for UAV with automatic spatio-temporal regularization,” Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. (IEEE, 2020), pp. 11923–11932.

4. G. Gariepy, F. Tonolini, R. Henderson, J. Leach, and D. Faccio, “Detection and tracking of moving objects hidden from view,” Nat. Photonics 10(1), 23–26 (2016). [CrossRef]  

5. J. Klein, C. Peter, J. Martín, M. Laurenzis, and M. Hullin, “Tracking objects outside the line of sight using 2D intensity images,” Sci Rep 6(1), 1–9 (2016). [CrossRef]  

6. S. Chan, R. E. Warburton, G. Gariery, J. Leach, and D. Faccio, “Non-line-of-sight tracking of people at long range,” Opt. Express 25(9), 10109–10118 (2017). [CrossRef]  

7. M. Wei, F. Xing, and Z. You, “A real-time detection and positioning method for small and weak targets using a 1Dmorphology-based approach in 2D images,” Light: Sci. Appl. 7(5), 18006 (2018). [CrossRef]  

8. M. El-Desouki, M. J. Deen, Q. Y. Fang, L. Liu, F. Tse, and D. Armstrong, “CMOS imagesensors for high speed applications,” Sensors 9(1), 430–444 (2009). [CrossRef]  

9. K. Nakagawa, A. Iwasaki, A. Y. Oishi, R. Horisaki, A. Tsukamoto, A. Nakamura, K. Hirosawa, H. Liao, T. Ushida, K. Goda, F. Kannari, and I. Sakuma, “Sequentially timed all-opticalmapping photography (STAMP),” Nat. Photonics 8(9), 695–700 (2014). [CrossRef]  

10. P. W. Fuller, “An introduction to high speed photography and photonics,” Imaging Sci. J. 57(6), 293–302 (2009). [CrossRef]  

11. Q. Sun, M. P. Edgar, R. Bowman, L. E. Vittert, S. Welsh, A. Bowman, and M. J. Padgett, “3D Computational Imaging with Single-Pixel Detectors,” Science 340(6134), 844–847 (2013). [CrossRef]  

12. M. P. Edgar, G. M. Gibson, and M. J. Padgett, “Principles and prospects for single-pixel imaging,” Nature Photon 13(1), 13–20 (2019). [CrossRef]  

13. M. P. Edgar, G. M. Gibson, R. W. Bowman, B. Sun, N. Radwell, K. J. Mitchell, S. S. Welsh, and M. J. Padgett, “Simultaneous real-time visible and infrared video with single-pixel detectors,” Sci. Rep. 5(1), 10669 (2015). [CrossRef]  

14. R. I. Stantchev, X. Yu, T. Blu, and E. Pickwell-MacPherson, “Real-time terahertz imaging with a single-pixel detector,” Nat. Commun. 11(1), 2535 (2020). [CrossRef]  

15. O. S. Magana-Loaiza, G. A. Howland, M. Malik, J. C. Howell, and R. W. Boyd, “Compressive object tracking using entangled photons,” Appl. Phys. Lett. 102, 231104 (2013). [CrossRef]  

16. S. Sun, H. Lin, Y. Xu, J. Gu, and W. Liu, “Tracking and imaging of moving objects with temporal intensity difference correlation,” Opt. Express 27(20), 27851–27862 (2019). [CrossRef]  

17. N. Radwell, K. J. Mitchell, G. M. Gibson, M. J. Padgett, R. Bowman, and M. J. Padgett, “Single-pixel infrared and visible microscope,” Optica 1(5), 285–289 (2014). [CrossRef]  

18. M. Sun, M. P. Edgar, G. M. Gibson, B. Sun, N. Radwell, R. Lamb, and M. J. Padgett, “Single-pixel three-dimensional imaging with time-based depth resolution,” Nat. Commun. 7(1), 12010 (2016). [CrossRef]  

19. A. C. Sankaranarayanan, L. Xu, C. Studer, Y. Li, K. F. Kelly, and R. G. Baraniuk, “Video compressive sensing for spatial multiplexing cameras using motion-flow models,” SIAM J. Imaging Sci. 8(3), 1489–1518 (2015). [CrossRef]  

20. D. Shi, K. Yin, J. Huang, K. Yuan, W. Zhu, C. Xie, D. Liu, and Y. Wang, “Fast tracking of moving objects using single-pixel imaging,” Opt. Commun. 440, 155–162 (2019). [CrossRef]  

21. Z. Zhang, J. Ye, Q. Deng, and J. Zhong, “Image-free real-time detection and tracking of fast moving object using a single-pixel detector,” Opt. Express 27(24), 35394–35402 (2019). [CrossRef]  

22. Q. Deng, Z. Zhang, and J. Zhong, “Image-free real-time 3-D tracking of a fast-moving object using dual-pixel detection,” Opt. Lett. 45(17), 4734–4737 (2020). [CrossRef]  

23. J. Teng, Q. Gao, M. Chen, S. Yang, and H. Chen, “Time-encoded single-pixel 3D imaging,” APL Photonics 5(2), 020801 (2020). [CrossRef]  

24. P. Kilcullen, J. Chen, T. Qzaki, and J. Liang, “Camera-free three-dimensional dual photograph,” Opt. Express 28(20), 29377–29388 (2020). [CrossRef]  

25. D. B. Phillips, M. J. Sun, J. M. Taylor, M. P. Edgar, S. M. Barnett, G. M. Gibson, and M. J. Padgett, “Adaptive foveated single-pixel imaging with dynamic supersampling,” Sci. Adv. 3(4), e1601782 (2017). [CrossRef]  

26. M. Sun, L. Meng, M. P. Edgar, M. J. Padgett, and N. Radwell, “A Russian Dolls ordering of the Hadamard basis for compressive single-pixel imaging,” Sci. Rep. 7(1), 3464 (2017). [CrossRef]  

27. J. Flusser, T. Suk, and B. Zitova, 2D and 3D image analysis by moments, (John Wiley & Sons, Chichester, United Kingdom, 2016) Chap. 3.

28. Z. Zhang, X. Wang, G. Zheng, and J. Zhong, “Fast Fourier single-pixel imaging via binary illumination,” Sci. Rep. 7(1), 12029 (2017). [CrossRef]  

29. Z. Li and X. Li, “Centroid computation for Shack-Hartmann wavefront sensor in extreme situations based on artificial neural networks,” Opt. Express 26(24), 31675–31693 (2018). [CrossRef]  

30. W. Jiang, X. Li, X. Peng, and B. Sun, “Imaging high-speed moving targets with a single-pixel detector,” Opt. Express 28(6), 7889–7897 (2020). [CrossRef]  

References

  • View by:

  1. S. T. Acton and N. Ray, “Biomedical image analysis: tracking,” Synthesis Lectures on Image, Video, and Multimedia Processing 2(1), 1–152 (2006).
    [Crossref]
  2. M. Danelljan, G. Häger, F. Khan, and M. Felsberg, “Accurate scale estimation for robust visual tracking,” British Machine Vision Conference, Michel Valstar, ed. (BMVC, 2014), pp. 1–11.
  3. Y. Li, C. Fu, F. Ding, Z. Huang, and G. Lu, “Autotrack: Towards high-performance visual tracking for UAV with automatic spatio-temporal regularization,” Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. (IEEE, 2020), pp. 11923–11932.
  4. G. Gariepy, F. Tonolini, R. Henderson, J. Leach, and D. Faccio, “Detection and tracking of moving objects hidden from view,” Nat. Photonics 10(1), 23–26 (2016).
    [Crossref]
  5. J. Klein, C. Peter, J. Martín, M. Laurenzis, and M. Hullin, “Tracking objects outside the line of sight using 2D intensity images,” Sci Rep 6(1), 1–9 (2016).
    [Crossref]
  6. S. Chan, R. E. Warburton, G. Gariery, J. Leach, and D. Faccio, “Non-line-of-sight tracking of people at long range,” Opt. Express 25(9), 10109–10118 (2017).
    [Crossref]
  7. M. Wei, F. Xing, and Z. You, “A real-time detection and positioning method for small and weak targets using a 1Dmorphology-based approach in 2D images,” Light: Sci. Appl. 7(5), 18006 (2018).
    [Crossref]
  8. M. El-Desouki, M. J. Deen, Q. Y. Fang, L. Liu, F. Tse, and D. Armstrong, “CMOS imagesensors for high speed applications,” Sensors 9(1), 430–444 (2009).
    [Crossref]
  9. K. Nakagawa, A. Iwasaki, A. Y. Oishi, R. Horisaki, A. Tsukamoto, A. Nakamura, K. Hirosawa, H. Liao, T. Ushida, K. Goda, F. Kannari, and I. Sakuma, “Sequentially timed all-opticalmapping photography (STAMP),” Nat. Photonics 8(9), 695–700 (2014).
    [Crossref]
  10. P. W. Fuller, “An introduction to high speed photography and photonics,” Imaging Sci. J. 57(6), 293–302 (2009).
    [Crossref]
  11. Q. Sun, M. P. Edgar, R. Bowman, L. E. Vittert, S. Welsh, A. Bowman, and M. J. Padgett, “3D Computational Imaging with Single-Pixel Detectors,” Science 340(6134), 844–847 (2013).
    [Crossref]
  12. M. P. Edgar, G. M. Gibson, and M. J. Padgett, “Principles and prospects for single-pixel imaging,” Nature Photon 13(1), 13–20 (2019).
    [Crossref]
  13. M. P. Edgar, G. M. Gibson, R. W. Bowman, B. Sun, N. Radwell, K. J. Mitchell, S. S. Welsh, and M. J. Padgett, “Simultaneous real-time visible and infrared video with single-pixel detectors,” Sci. Rep. 5(1), 10669 (2015).
    [Crossref]
  14. R. I. Stantchev, X. Yu, T. Blu, and E. Pickwell-MacPherson, “Real-time terahertz imaging with a single-pixel detector,” Nat. Commun. 11(1), 2535 (2020).
    [Crossref]
  15. O. S. Magana-Loaiza, G. A. Howland, M. Malik, J. C. Howell, and R. W. Boyd, “Compressive object tracking using entangled photons,” Appl. Phys. Lett. 102, 231104 (2013).
    [Crossref]
  16. S. Sun, H. Lin, Y. Xu, J. Gu, and W. Liu, “Tracking and imaging of moving objects with temporal intensity difference correlation,” Opt. Express 27(20), 27851–27862 (2019).
    [Crossref]
  17. N. Radwell, K. J. Mitchell, G. M. Gibson, M. J. Padgett, R. Bowman, and M. J. Padgett, “Single-pixel infrared and visible microscope,” Optica 1(5), 285–289 (2014).
    [Crossref]
  18. M. Sun, M. P. Edgar, G. M. Gibson, B. Sun, N. Radwell, R. Lamb, and M. J. Padgett, “Single-pixel three-dimensional imaging with time-based depth resolution,” Nat. Commun. 7(1), 12010 (2016).
    [Crossref]
  19. A. C. Sankaranarayanan, L. Xu, C. Studer, Y. Li, K. F. Kelly, and R. G. Baraniuk, “Video compressive sensing for spatial multiplexing cameras using motion-flow models,” SIAM J. Imaging Sci. 8(3), 1489–1518 (2015).
    [Crossref]
  20. D. Shi, K. Yin, J. Huang, K. Yuan, W. Zhu, C. Xie, D. Liu, and Y. Wang, “Fast tracking of moving objects using single-pixel imaging,” Opt. Commun. 440, 155–162 (2019).
    [Crossref]
  21. Z. Zhang, J. Ye, Q. Deng, and J. Zhong, “Image-free real-time detection and tracking of fast moving object using a single-pixel detector,” Opt. Express 27(24), 35394–35402 (2019).
    [Crossref]
  22. Q. Deng, Z. Zhang, and J. Zhong, “Image-free real-time 3-D tracking of a fast-moving object using dual-pixel detection,” Opt. Lett. 45(17), 4734–4737 (2020).
    [Crossref]
  23. J. Teng, Q. Gao, M. Chen, S. Yang, and H. Chen, “Time-encoded single-pixel 3D imaging,” APL Photonics 5(2), 020801 (2020).
    [Crossref]
  24. P. Kilcullen, J. Chen, T. Qzaki, and J. Liang, “Camera-free three-dimensional dual photograph,” Opt. Express 28(20), 29377–29388 (2020).
    [Crossref]
  25. D. B. Phillips, M. J. Sun, J. M. Taylor, M. P. Edgar, S. M. Barnett, G. M. Gibson, and M. J. Padgett, “Adaptive foveated single-pixel imaging with dynamic supersampling,” Sci. Adv. 3(4), e1601782 (2017).
    [Crossref]
  26. M. Sun, L. Meng, M. P. Edgar, M. J. Padgett, and N. Radwell, “A Russian Dolls ordering of the Hadamard basis for compressive single-pixel imaging,” Sci. Rep. 7(1), 3464 (2017).
    [Crossref]
  27. J. Flusser, T. Suk, and B. Zitova, 2D and 3D image analysis by moments, (John Wiley & Sons, Chichester, United Kingdom, 2016) Chap. 3.
  28. Z. Zhang, X. Wang, G. Zheng, and J. Zhong, “Fast Fourier single-pixel imaging via binary illumination,” Sci. Rep. 7(1), 12029 (2017).
    [Crossref]
  29. Z. Li and X. Li, “Centroid computation for Shack-Hartmann wavefront sensor in extreme situations based on artificial neural networks,” Opt. Express 26(24), 31675–31693 (2018).
    [Crossref]
  30. W. Jiang, X. Li, X. Peng, and B. Sun, “Imaging high-speed moving targets with a single-pixel detector,” Opt. Express 28(6), 7889–7897 (2020).
    [Crossref]

2020 (5)

2019 (4)

D. Shi, K. Yin, J. Huang, K. Yuan, W. Zhu, C. Xie, D. Liu, and Y. Wang, “Fast tracking of moving objects using single-pixel imaging,” Opt. Commun. 440, 155–162 (2019).
[Crossref]

Z. Zhang, J. Ye, Q. Deng, and J. Zhong, “Image-free real-time detection and tracking of fast moving object using a single-pixel detector,” Opt. Express 27(24), 35394–35402 (2019).
[Crossref]

M. P. Edgar, G. M. Gibson, and M. J. Padgett, “Principles and prospects for single-pixel imaging,” Nature Photon 13(1), 13–20 (2019).
[Crossref]

S. Sun, H. Lin, Y. Xu, J. Gu, and W. Liu, “Tracking and imaging of moving objects with temporal intensity difference correlation,” Opt. Express 27(20), 27851–27862 (2019).
[Crossref]

2018 (2)

M. Wei, F. Xing, and Z. You, “A real-time detection and positioning method for small and weak targets using a 1Dmorphology-based approach in 2D images,” Light: Sci. Appl. 7(5), 18006 (2018).
[Crossref]

Z. Li and X. Li, “Centroid computation for Shack-Hartmann wavefront sensor in extreme situations based on artificial neural networks,” Opt. Express 26(24), 31675–31693 (2018).
[Crossref]

2017 (4)

D. B. Phillips, M. J. Sun, J. M. Taylor, M. P. Edgar, S. M. Barnett, G. M. Gibson, and M. J. Padgett, “Adaptive foveated single-pixel imaging with dynamic supersampling,” Sci. Adv. 3(4), e1601782 (2017).
[Crossref]

M. Sun, L. Meng, M. P. Edgar, M. J. Padgett, and N. Radwell, “A Russian Dolls ordering of the Hadamard basis for compressive single-pixel imaging,” Sci. Rep. 7(1), 3464 (2017).
[Crossref]

Z. Zhang, X. Wang, G. Zheng, and J. Zhong, “Fast Fourier single-pixel imaging via binary illumination,” Sci. Rep. 7(1), 12029 (2017).
[Crossref]

S. Chan, R. E. Warburton, G. Gariery, J. Leach, and D. Faccio, “Non-line-of-sight tracking of people at long range,” Opt. Express 25(9), 10109–10118 (2017).
[Crossref]

2016 (3)

G. Gariepy, F. Tonolini, R. Henderson, J. Leach, and D. Faccio, “Detection and tracking of moving objects hidden from view,” Nat. Photonics 10(1), 23–26 (2016).
[Crossref]

J. Klein, C. Peter, J. Martín, M. Laurenzis, and M. Hullin, “Tracking objects outside the line of sight using 2D intensity images,” Sci Rep 6(1), 1–9 (2016).
[Crossref]

M. Sun, M. P. Edgar, G. M. Gibson, B. Sun, N. Radwell, R. Lamb, and M. J. Padgett, “Single-pixel three-dimensional imaging with time-based depth resolution,” Nat. Commun. 7(1), 12010 (2016).
[Crossref]

2015 (2)

A. C. Sankaranarayanan, L. Xu, C. Studer, Y. Li, K. F. Kelly, and R. G. Baraniuk, “Video compressive sensing for spatial multiplexing cameras using motion-flow models,” SIAM J. Imaging Sci. 8(3), 1489–1518 (2015).
[Crossref]

M. P. Edgar, G. M. Gibson, R. W. Bowman, B. Sun, N. Radwell, K. J. Mitchell, S. S. Welsh, and M. J. Padgett, “Simultaneous real-time visible and infrared video with single-pixel detectors,” Sci. Rep. 5(1), 10669 (2015).
[Crossref]

2014 (2)

N. Radwell, K. J. Mitchell, G. M. Gibson, M. J. Padgett, R. Bowman, and M. J. Padgett, “Single-pixel infrared and visible microscope,” Optica 1(5), 285–289 (2014).
[Crossref]

K. Nakagawa, A. Iwasaki, A. Y. Oishi, R. Horisaki, A. Tsukamoto, A. Nakamura, K. Hirosawa, H. Liao, T. Ushida, K. Goda, F. Kannari, and I. Sakuma, “Sequentially timed all-opticalmapping photography (STAMP),” Nat. Photonics 8(9), 695–700 (2014).
[Crossref]

2013 (2)

O. S. Magana-Loaiza, G. A. Howland, M. Malik, J. C. Howell, and R. W. Boyd, “Compressive object tracking using entangled photons,” Appl. Phys. Lett. 102, 231104 (2013).
[Crossref]

Q. Sun, M. P. Edgar, R. Bowman, L. E. Vittert, S. Welsh, A. Bowman, and M. J. Padgett, “3D Computational Imaging with Single-Pixel Detectors,” Science 340(6134), 844–847 (2013).
[Crossref]

2009 (2)

P. W. Fuller, “An introduction to high speed photography and photonics,” Imaging Sci. J. 57(6), 293–302 (2009).
[Crossref]

M. El-Desouki, M. J. Deen, Q. Y. Fang, L. Liu, F. Tse, and D. Armstrong, “CMOS imagesensors for high speed applications,” Sensors 9(1), 430–444 (2009).
[Crossref]

2006 (1)

S. T. Acton and N. Ray, “Biomedical image analysis: tracking,” Synthesis Lectures on Image, Video, and Multimedia Processing 2(1), 1–152 (2006).
[Crossref]

Acton, S. T.

S. T. Acton and N. Ray, “Biomedical image analysis: tracking,” Synthesis Lectures on Image, Video, and Multimedia Processing 2(1), 1–152 (2006).
[Crossref]

Armstrong, D.

M. El-Desouki, M. J. Deen, Q. Y. Fang, L. Liu, F. Tse, and D. Armstrong, “CMOS imagesensors for high speed applications,” Sensors 9(1), 430–444 (2009).
[Crossref]

Baraniuk, R. G.

A. C. Sankaranarayanan, L. Xu, C. Studer, Y. Li, K. F. Kelly, and R. G. Baraniuk, “Video compressive sensing for spatial multiplexing cameras using motion-flow models,” SIAM J. Imaging Sci. 8(3), 1489–1518 (2015).
[Crossref]

Barnett, S. M.

D. B. Phillips, M. J. Sun, J. M. Taylor, M. P. Edgar, S. M. Barnett, G. M. Gibson, and M. J. Padgett, “Adaptive foveated single-pixel imaging with dynamic supersampling,” Sci. Adv. 3(4), e1601782 (2017).
[Crossref]

Blu, T.

R. I. Stantchev, X. Yu, T. Blu, and E. Pickwell-MacPherson, “Real-time terahertz imaging with a single-pixel detector,” Nat. Commun. 11(1), 2535 (2020).
[Crossref]

Bowman, A.

Q. Sun, M. P. Edgar, R. Bowman, L. E. Vittert, S. Welsh, A. Bowman, and M. J. Padgett, “3D Computational Imaging with Single-Pixel Detectors,” Science 340(6134), 844–847 (2013).
[Crossref]

Bowman, R.

N. Radwell, K. J. Mitchell, G. M. Gibson, M. J. Padgett, R. Bowman, and M. J. Padgett, “Single-pixel infrared and visible microscope,” Optica 1(5), 285–289 (2014).
[Crossref]

Q. Sun, M. P. Edgar, R. Bowman, L. E. Vittert, S. Welsh, A. Bowman, and M. J. Padgett, “3D Computational Imaging with Single-Pixel Detectors,” Science 340(6134), 844–847 (2013).
[Crossref]

Bowman, R. W.

M. P. Edgar, G. M. Gibson, R. W. Bowman, B. Sun, N. Radwell, K. J. Mitchell, S. S. Welsh, and M. J. Padgett, “Simultaneous real-time visible and infrared video with single-pixel detectors,” Sci. Rep. 5(1), 10669 (2015).
[Crossref]

Boyd, R. W.

O. S. Magana-Loaiza, G. A. Howland, M. Malik, J. C. Howell, and R. W. Boyd, “Compressive object tracking using entangled photons,” Appl. Phys. Lett. 102, 231104 (2013).
[Crossref]

Chan, S.

Chen, H.

J. Teng, Q. Gao, M. Chen, S. Yang, and H. Chen, “Time-encoded single-pixel 3D imaging,” APL Photonics 5(2), 020801 (2020).
[Crossref]

Chen, J.

Chen, M.

J. Teng, Q. Gao, M. Chen, S. Yang, and H. Chen, “Time-encoded single-pixel 3D imaging,” APL Photonics 5(2), 020801 (2020).
[Crossref]

Danelljan, M.

M. Danelljan, G. Häger, F. Khan, and M. Felsberg, “Accurate scale estimation for robust visual tracking,” British Machine Vision Conference, Michel Valstar, ed. (BMVC, 2014), pp. 1–11.

Deen, M. J.

M. El-Desouki, M. J. Deen, Q. Y. Fang, L. Liu, F. Tse, and D. Armstrong, “CMOS imagesensors for high speed applications,” Sensors 9(1), 430–444 (2009).
[Crossref]

Deng, Q.

Ding, F.

Y. Li, C. Fu, F. Ding, Z. Huang, and G. Lu, “Autotrack: Towards high-performance visual tracking for UAV with automatic spatio-temporal regularization,” Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. (IEEE, 2020), pp. 11923–11932.

Edgar, M. P.

M. P. Edgar, G. M. Gibson, and M. J. Padgett, “Principles and prospects for single-pixel imaging,” Nature Photon 13(1), 13–20 (2019).
[Crossref]

D. B. Phillips, M. J. Sun, J. M. Taylor, M. P. Edgar, S. M. Barnett, G. M. Gibson, and M. J. Padgett, “Adaptive foveated single-pixel imaging with dynamic supersampling,” Sci. Adv. 3(4), e1601782 (2017).
[Crossref]

M. Sun, L. Meng, M. P. Edgar, M. J. Padgett, and N. Radwell, “A Russian Dolls ordering of the Hadamard basis for compressive single-pixel imaging,” Sci. Rep. 7(1), 3464 (2017).
[Crossref]

M. Sun, M. P. Edgar, G. M. Gibson, B. Sun, N. Radwell, R. Lamb, and M. J. Padgett, “Single-pixel three-dimensional imaging with time-based depth resolution,” Nat. Commun. 7(1), 12010 (2016).
[Crossref]

M. P. Edgar, G. M. Gibson, R. W. Bowman, B. Sun, N. Radwell, K. J. Mitchell, S. S. Welsh, and M. J. Padgett, “Simultaneous real-time visible and infrared video with single-pixel detectors,” Sci. Rep. 5(1), 10669 (2015).
[Crossref]

Q. Sun, M. P. Edgar, R. Bowman, L. E. Vittert, S. Welsh, A. Bowman, and M. J. Padgett, “3D Computational Imaging with Single-Pixel Detectors,” Science 340(6134), 844–847 (2013).
[Crossref]

El-Desouki, M.

M. El-Desouki, M. J. Deen, Q. Y. Fang, L. Liu, F. Tse, and D. Armstrong, “CMOS imagesensors for high speed applications,” Sensors 9(1), 430–444 (2009).
[Crossref]

Faccio, D.

S. Chan, R. E. Warburton, G. Gariery, J. Leach, and D. Faccio, “Non-line-of-sight tracking of people at long range,” Opt. Express 25(9), 10109–10118 (2017).
[Crossref]

G. Gariepy, F. Tonolini, R. Henderson, J. Leach, and D. Faccio, “Detection and tracking of moving objects hidden from view,” Nat. Photonics 10(1), 23–26 (2016).
[Crossref]

Fang, Q. Y.

M. El-Desouki, M. J. Deen, Q. Y. Fang, L. Liu, F. Tse, and D. Armstrong, “CMOS imagesensors for high speed applications,” Sensors 9(1), 430–444 (2009).
[Crossref]

Felsberg, M.

M. Danelljan, G. Häger, F. Khan, and M. Felsberg, “Accurate scale estimation for robust visual tracking,” British Machine Vision Conference, Michel Valstar, ed. (BMVC, 2014), pp. 1–11.

Flusser, J.

J. Flusser, T. Suk, and B. Zitova, 2D and 3D image analysis by moments, (John Wiley & Sons, Chichester, United Kingdom, 2016) Chap. 3.

Fu, C.

Y. Li, C. Fu, F. Ding, Z. Huang, and G. Lu, “Autotrack: Towards high-performance visual tracking for UAV with automatic spatio-temporal regularization,” Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. (IEEE, 2020), pp. 11923–11932.

Fuller, P. W.

P. W. Fuller, “An introduction to high speed photography and photonics,” Imaging Sci. J. 57(6), 293–302 (2009).
[Crossref]

Gao, Q.

J. Teng, Q. Gao, M. Chen, S. Yang, and H. Chen, “Time-encoded single-pixel 3D imaging,” APL Photonics 5(2), 020801 (2020).
[Crossref]

Gariepy, G.

G. Gariepy, F. Tonolini, R. Henderson, J. Leach, and D. Faccio, “Detection and tracking of moving objects hidden from view,” Nat. Photonics 10(1), 23–26 (2016).
[Crossref]

Gariery, G.

Gibson, G. M.

M. P. Edgar, G. M. Gibson, and M. J. Padgett, “Principles and prospects for single-pixel imaging,” Nature Photon 13(1), 13–20 (2019).
[Crossref]

D. B. Phillips, M. J. Sun, J. M. Taylor, M. P. Edgar, S. M. Barnett, G. M. Gibson, and M. J. Padgett, “Adaptive foveated single-pixel imaging with dynamic supersampling,” Sci. Adv. 3(4), e1601782 (2017).
[Crossref]

M. Sun, M. P. Edgar, G. M. Gibson, B. Sun, N. Radwell, R. Lamb, and M. J. Padgett, “Single-pixel three-dimensional imaging with time-based depth resolution,” Nat. Commun. 7(1), 12010 (2016).
[Crossref]

M. P. Edgar, G. M. Gibson, R. W. Bowman, B. Sun, N. Radwell, K. J. Mitchell, S. S. Welsh, and M. J. Padgett, “Simultaneous real-time visible and infrared video with single-pixel detectors,” Sci. Rep. 5(1), 10669 (2015).
[Crossref]

N. Radwell, K. J. Mitchell, G. M. Gibson, M. J. Padgett, R. Bowman, and M. J. Padgett, “Single-pixel infrared and visible microscope,” Optica 1(5), 285–289 (2014).
[Crossref]

Goda, K.

K. Nakagawa, A. Iwasaki, A. Y. Oishi, R. Horisaki, A. Tsukamoto, A. Nakamura, K. Hirosawa, H. Liao, T. Ushida, K. Goda, F. Kannari, and I. Sakuma, “Sequentially timed all-opticalmapping photography (STAMP),” Nat. Photonics 8(9), 695–700 (2014).
[Crossref]

Gu, J.

Häger, G.

M. Danelljan, G. Häger, F. Khan, and M. Felsberg, “Accurate scale estimation for robust visual tracking,” British Machine Vision Conference, Michel Valstar, ed. (BMVC, 2014), pp. 1–11.

Henderson, R.

G. Gariepy, F. Tonolini, R. Henderson, J. Leach, and D. Faccio, “Detection and tracking of moving objects hidden from view,” Nat. Photonics 10(1), 23–26 (2016).
[Crossref]

Hirosawa, K.

K. Nakagawa, A. Iwasaki, A. Y. Oishi, R. Horisaki, A. Tsukamoto, A. Nakamura, K. Hirosawa, H. Liao, T. Ushida, K. Goda, F. Kannari, and I. Sakuma, “Sequentially timed all-opticalmapping photography (STAMP),” Nat. Photonics 8(9), 695–700 (2014).
[Crossref]

Horisaki, R.

K. Nakagawa, A. Iwasaki, A. Y. Oishi, R. Horisaki, A. Tsukamoto, A. Nakamura, K. Hirosawa, H. Liao, T. Ushida, K. Goda, F. Kannari, and I. Sakuma, “Sequentially timed all-opticalmapping photography (STAMP),” Nat. Photonics 8(9), 695–700 (2014).
[Crossref]

Howell, J. C.

O. S. Magana-Loaiza, G. A. Howland, M. Malik, J. C. Howell, and R. W. Boyd, “Compressive object tracking using entangled photons,” Appl. Phys. Lett. 102, 231104 (2013).
[Crossref]

Howland, G. A.

O. S. Magana-Loaiza, G. A. Howland, M. Malik, J. C. Howell, and R. W. Boyd, “Compressive object tracking using entangled photons,” Appl. Phys. Lett. 102, 231104 (2013).
[Crossref]

Huang, J.

D. Shi, K. Yin, J. Huang, K. Yuan, W. Zhu, C. Xie, D. Liu, and Y. Wang, “Fast tracking of moving objects using single-pixel imaging,” Opt. Commun. 440, 155–162 (2019).
[Crossref]

Huang, Z.

Y. Li, C. Fu, F. Ding, Z. Huang, and G. Lu, “Autotrack: Towards high-performance visual tracking for UAV with automatic spatio-temporal regularization,” Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. (IEEE, 2020), pp. 11923–11932.

Hullin, M.

J. Klein, C. Peter, J. Martín, M. Laurenzis, and M. Hullin, “Tracking objects outside the line of sight using 2D intensity images,” Sci Rep 6(1), 1–9 (2016).
[Crossref]

Iwasaki, A.

K. Nakagawa, A. Iwasaki, A. Y. Oishi, R. Horisaki, A. Tsukamoto, A. Nakamura, K. Hirosawa, H. Liao, T. Ushida, K. Goda, F. Kannari, and I. Sakuma, “Sequentially timed all-opticalmapping photography (STAMP),” Nat. Photonics 8(9), 695–700 (2014).
[Crossref]

Jiang, W.

Kannari, F.

K. Nakagawa, A. Iwasaki, A. Y. Oishi, R. Horisaki, A. Tsukamoto, A. Nakamura, K. Hirosawa, H. Liao, T. Ushida, K. Goda, F. Kannari, and I. Sakuma, “Sequentially timed all-opticalmapping photography (STAMP),” Nat. Photonics 8(9), 695–700 (2014).
[Crossref]

Kelly, K. F.

A. C. Sankaranarayanan, L. Xu, C. Studer, Y. Li, K. F. Kelly, and R. G. Baraniuk, “Video compressive sensing for spatial multiplexing cameras using motion-flow models,” SIAM J. Imaging Sci. 8(3), 1489–1518 (2015).
[Crossref]

Khan, F.

M. Danelljan, G. Häger, F. Khan, and M. Felsberg, “Accurate scale estimation for robust visual tracking,” British Machine Vision Conference, Michel Valstar, ed. (BMVC, 2014), pp. 1–11.

Kilcullen, P.

Klein, J.

J. Klein, C. Peter, J. Martín, M. Laurenzis, and M. Hullin, “Tracking objects outside the line of sight using 2D intensity images,” Sci Rep 6(1), 1–9 (2016).
[Crossref]

Lamb, R.

M. Sun, M. P. Edgar, G. M. Gibson, B. Sun, N. Radwell, R. Lamb, and M. J. Padgett, “Single-pixel three-dimensional imaging with time-based depth resolution,” Nat. Commun. 7(1), 12010 (2016).
[Crossref]

Laurenzis, M.

J. Klein, C. Peter, J. Martín, M. Laurenzis, and M. Hullin, “Tracking objects outside the line of sight using 2D intensity images,” Sci Rep 6(1), 1–9 (2016).
[Crossref]

Leach, J.

S. Chan, R. E. Warburton, G. Gariery, J. Leach, and D. Faccio, “Non-line-of-sight tracking of people at long range,” Opt. Express 25(9), 10109–10118 (2017).
[Crossref]

G. Gariepy, F. Tonolini, R. Henderson, J. Leach, and D. Faccio, “Detection and tracking of moving objects hidden from view,” Nat. Photonics 10(1), 23–26 (2016).
[Crossref]

Li, X.

Li, Y.

A. C. Sankaranarayanan, L. Xu, C. Studer, Y. Li, K. F. Kelly, and R. G. Baraniuk, “Video compressive sensing for spatial multiplexing cameras using motion-flow models,” SIAM J. Imaging Sci. 8(3), 1489–1518 (2015).
[Crossref]

Y. Li, C. Fu, F. Ding, Z. Huang, and G. Lu, “Autotrack: Towards high-performance visual tracking for UAV with automatic spatio-temporal regularization,” Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. (IEEE, 2020), pp. 11923–11932.

Li, Z.

Liang, J.

Liao, H.

K. Nakagawa, A. Iwasaki, A. Y. Oishi, R. Horisaki, A. Tsukamoto, A. Nakamura, K. Hirosawa, H. Liao, T. Ushida, K. Goda, F. Kannari, and I. Sakuma, “Sequentially timed all-opticalmapping photography (STAMP),” Nat. Photonics 8(9), 695–700 (2014).
[Crossref]

Lin, H.

Liu, D.

D. Shi, K. Yin, J. Huang, K. Yuan, W. Zhu, C. Xie, D. Liu, and Y. Wang, “Fast tracking of moving objects using single-pixel imaging,” Opt. Commun. 440, 155–162 (2019).
[Crossref]

Liu, L.

M. El-Desouki, M. J. Deen, Q. Y. Fang, L. Liu, F. Tse, and D. Armstrong, “CMOS imagesensors for high speed applications,” Sensors 9(1), 430–444 (2009).
[Crossref]

Liu, W.

Lu, G.

Y. Li, C. Fu, F. Ding, Z. Huang, and G. Lu, “Autotrack: Towards high-performance visual tracking for UAV with automatic spatio-temporal regularization,” Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. (IEEE, 2020), pp. 11923–11932.

Magana-Loaiza, O. S.

O. S. Magana-Loaiza, G. A. Howland, M. Malik, J. C. Howell, and R. W. Boyd, “Compressive object tracking using entangled photons,” Appl. Phys. Lett. 102, 231104 (2013).
[Crossref]

Malik, M.

O. S. Magana-Loaiza, G. A. Howland, M. Malik, J. C. Howell, and R. W. Boyd, “Compressive object tracking using entangled photons,” Appl. Phys. Lett. 102, 231104 (2013).
[Crossref]

Martín, J.

J. Klein, C. Peter, J. Martín, M. Laurenzis, and M. Hullin, “Tracking objects outside the line of sight using 2D intensity images,” Sci Rep 6(1), 1–9 (2016).
[Crossref]

Meng, L.

M. Sun, L. Meng, M. P. Edgar, M. J. Padgett, and N. Radwell, “A Russian Dolls ordering of the Hadamard basis for compressive single-pixel imaging,” Sci. Rep. 7(1), 3464 (2017).
[Crossref]

Mitchell, K. J.

M. P. Edgar, G. M. Gibson, R. W. Bowman, B. Sun, N. Radwell, K. J. Mitchell, S. S. Welsh, and M. J. Padgett, “Simultaneous real-time visible and infrared video with single-pixel detectors,” Sci. Rep. 5(1), 10669 (2015).
[Crossref]

N. Radwell, K. J. Mitchell, G. M. Gibson, M. J. Padgett, R. Bowman, and M. J. Padgett, “Single-pixel infrared and visible microscope,” Optica 1(5), 285–289 (2014).
[Crossref]

Nakagawa, K.

K. Nakagawa, A. Iwasaki, A. Y. Oishi, R. Horisaki, A. Tsukamoto, A. Nakamura, K. Hirosawa, H. Liao, T. Ushida, K. Goda, F. Kannari, and I. Sakuma, “Sequentially timed all-opticalmapping photography (STAMP),” Nat. Photonics 8(9), 695–700 (2014).
[Crossref]

Nakamura, A.

K. Nakagawa, A. Iwasaki, A. Y. Oishi, R. Horisaki, A. Tsukamoto, A. Nakamura, K. Hirosawa, H. Liao, T. Ushida, K. Goda, F. Kannari, and I. Sakuma, “Sequentially timed all-opticalmapping photography (STAMP),” Nat. Photonics 8(9), 695–700 (2014).
[Crossref]

Oishi, A. Y.

K. Nakagawa, A. Iwasaki, A. Y. Oishi, R. Horisaki, A. Tsukamoto, A. Nakamura, K. Hirosawa, H. Liao, T. Ushida, K. Goda, F. Kannari, and I. Sakuma, “Sequentially timed all-opticalmapping photography (STAMP),” Nat. Photonics 8(9), 695–700 (2014).
[Crossref]

Padgett, M. J.

M. P. Edgar, G. M. Gibson, and M. J. Padgett, “Principles and prospects for single-pixel imaging,” Nature Photon 13(1), 13–20 (2019).
[Crossref]

D. B. Phillips, M. J. Sun, J. M. Taylor, M. P. Edgar, S. M. Barnett, G. M. Gibson, and M. J. Padgett, “Adaptive foveated single-pixel imaging with dynamic supersampling,” Sci. Adv. 3(4), e1601782 (2017).
[Crossref]

M. Sun, L. Meng, M. P. Edgar, M. J. Padgett, and N. Radwell, “A Russian Dolls ordering of the Hadamard basis for compressive single-pixel imaging,” Sci. Rep. 7(1), 3464 (2017).
[Crossref]

M. Sun, M. P. Edgar, G. M. Gibson, B. Sun, N. Radwell, R. Lamb, and M. J. Padgett, “Single-pixel three-dimensional imaging with time-based depth resolution,” Nat. Commun. 7(1), 12010 (2016).
[Crossref]

M. P. Edgar, G. M. Gibson, R. W. Bowman, B. Sun, N. Radwell, K. J. Mitchell, S. S. Welsh, and M. J. Padgett, “Simultaneous real-time visible and infrared video with single-pixel detectors,” Sci. Rep. 5(1), 10669 (2015).
[Crossref]

N. Radwell, K. J. Mitchell, G. M. Gibson, M. J. Padgett, R. Bowman, and M. J. Padgett, “Single-pixel infrared and visible microscope,” Optica 1(5), 285–289 (2014).
[Crossref]

N. Radwell, K. J. Mitchell, G. M. Gibson, M. J. Padgett, R. Bowman, and M. J. Padgett, “Single-pixel infrared and visible microscope,” Optica 1(5), 285–289 (2014).
[Crossref]

Q. Sun, M. P. Edgar, R. Bowman, L. E. Vittert, S. Welsh, A. Bowman, and M. J. Padgett, “3D Computational Imaging with Single-Pixel Detectors,” Science 340(6134), 844–847 (2013).
[Crossref]

Peng, X.

Peter, C.

J. Klein, C. Peter, J. Martín, M. Laurenzis, and M. Hullin, “Tracking objects outside the line of sight using 2D intensity images,” Sci Rep 6(1), 1–9 (2016).
[Crossref]

Phillips, D. B.

D. B. Phillips, M. J. Sun, J. M. Taylor, M. P. Edgar, S. M. Barnett, G. M. Gibson, and M. J. Padgett, “Adaptive foveated single-pixel imaging with dynamic supersampling,” Sci. Adv. 3(4), e1601782 (2017).
[Crossref]

Pickwell-MacPherson, E.

R. I. Stantchev, X. Yu, T. Blu, and E. Pickwell-MacPherson, “Real-time terahertz imaging with a single-pixel detector,” Nat. Commun. 11(1), 2535 (2020).
[Crossref]

Qzaki, T.

Radwell, N.

M. Sun, L. Meng, M. P. Edgar, M. J. Padgett, and N. Radwell, “A Russian Dolls ordering of the Hadamard basis for compressive single-pixel imaging,” Sci. Rep. 7(1), 3464 (2017).
[Crossref]

M. Sun, M. P. Edgar, G. M. Gibson, B. Sun, N. Radwell, R. Lamb, and M. J. Padgett, “Single-pixel three-dimensional imaging with time-based depth resolution,” Nat. Commun. 7(1), 12010 (2016).
[Crossref]

M. P. Edgar, G. M. Gibson, R. W. Bowman, B. Sun, N. Radwell, K. J. Mitchell, S. S. Welsh, and M. J. Padgett, “Simultaneous real-time visible and infrared video with single-pixel detectors,” Sci. Rep. 5(1), 10669 (2015).
[Crossref]

N. Radwell, K. J. Mitchell, G. M. Gibson, M. J. Padgett, R. Bowman, and M. J. Padgett, “Single-pixel infrared and visible microscope,” Optica 1(5), 285–289 (2014).
[Crossref]

Ray, N.

S. T. Acton and N. Ray, “Biomedical image analysis: tracking,” Synthesis Lectures on Image, Video, and Multimedia Processing 2(1), 1–152 (2006).
[Crossref]

Sakuma, I.

K. Nakagawa, A. Iwasaki, A. Y. Oishi, R. Horisaki, A. Tsukamoto, A. Nakamura, K. Hirosawa, H. Liao, T. Ushida, K. Goda, F. Kannari, and I. Sakuma, “Sequentially timed all-opticalmapping photography (STAMP),” Nat. Photonics 8(9), 695–700 (2014).
[Crossref]

Sankaranarayanan, A. C.

A. C. Sankaranarayanan, L. Xu, C. Studer, Y. Li, K. F. Kelly, and R. G. Baraniuk, “Video compressive sensing for spatial multiplexing cameras using motion-flow models,” SIAM J. Imaging Sci. 8(3), 1489–1518 (2015).
[Crossref]

Shi, D.

D. Shi, K. Yin, J. Huang, K. Yuan, W. Zhu, C. Xie, D. Liu, and Y. Wang, “Fast tracking of moving objects using single-pixel imaging,” Opt. Commun. 440, 155–162 (2019).
[Crossref]

Stantchev, R. I.

R. I. Stantchev, X. Yu, T. Blu, and E. Pickwell-MacPherson, “Real-time terahertz imaging with a single-pixel detector,” Nat. Commun. 11(1), 2535 (2020).
[Crossref]

Studer, C.

A. C. Sankaranarayanan, L. Xu, C. Studer, Y. Li, K. F. Kelly, and R. G. Baraniuk, “Video compressive sensing for spatial multiplexing cameras using motion-flow models,” SIAM J. Imaging Sci. 8(3), 1489–1518 (2015).
[Crossref]

Suk, T.

J. Flusser, T. Suk, and B. Zitova, 2D and 3D image analysis by moments, (John Wiley & Sons, Chichester, United Kingdom, 2016) Chap. 3.

Sun, B.

W. Jiang, X. Li, X. Peng, and B. Sun, “Imaging high-speed moving targets with a single-pixel detector,” Opt. Express 28(6), 7889–7897 (2020).
[Crossref]

M. Sun, M. P. Edgar, G. M. Gibson, B. Sun, N. Radwell, R. Lamb, and M. J. Padgett, “Single-pixel three-dimensional imaging with time-based depth resolution,” Nat. Commun. 7(1), 12010 (2016).
[Crossref]

M. P. Edgar, G. M. Gibson, R. W. Bowman, B. Sun, N. Radwell, K. J. Mitchell, S. S. Welsh, and M. J. Padgett, “Simultaneous real-time visible and infrared video with single-pixel detectors,” Sci. Rep. 5(1), 10669 (2015).
[Crossref]

Sun, M.

M. Sun, L. Meng, M. P. Edgar, M. J. Padgett, and N. Radwell, “A Russian Dolls ordering of the Hadamard basis for compressive single-pixel imaging,” Sci. Rep. 7(1), 3464 (2017).
[Crossref]

M. Sun, M. P. Edgar, G. M. Gibson, B. Sun, N. Radwell, R. Lamb, and M. J. Padgett, “Single-pixel three-dimensional imaging with time-based depth resolution,” Nat. Commun. 7(1), 12010 (2016).
[Crossref]

Sun, M. J.

D. B. Phillips, M. J. Sun, J. M. Taylor, M. P. Edgar, S. M. Barnett, G. M. Gibson, and M. J. Padgett, “Adaptive foveated single-pixel imaging with dynamic supersampling,” Sci. Adv. 3(4), e1601782 (2017).
[Crossref]

Sun, Q.

Q. Sun, M. P. Edgar, R. Bowman, L. E. Vittert, S. Welsh, A. Bowman, and M. J. Padgett, “3D Computational Imaging with Single-Pixel Detectors,” Science 340(6134), 844–847 (2013).
[Crossref]

Sun, S.

Taylor, J. M.

D. B. Phillips, M. J. Sun, J. M. Taylor, M. P. Edgar, S. M. Barnett, G. M. Gibson, and M. J. Padgett, “Adaptive foveated single-pixel imaging with dynamic supersampling,” Sci. Adv. 3(4), e1601782 (2017).
[Crossref]

Teng, J.

J. Teng, Q. Gao, M. Chen, S. Yang, and H. Chen, “Time-encoded single-pixel 3D imaging,” APL Photonics 5(2), 020801 (2020).
[Crossref]

Tonolini, F.

G. Gariepy, F. Tonolini, R. Henderson, J. Leach, and D. Faccio, “Detection and tracking of moving objects hidden from view,” Nat. Photonics 10(1), 23–26 (2016).
[Crossref]

Tse, F.

M. El-Desouki, M. J. Deen, Q. Y. Fang, L. Liu, F. Tse, and D. Armstrong, “CMOS imagesensors for high speed applications,” Sensors 9(1), 430–444 (2009).
[Crossref]

Tsukamoto, A.

K. Nakagawa, A. Iwasaki, A. Y. Oishi, R. Horisaki, A. Tsukamoto, A. Nakamura, K. Hirosawa, H. Liao, T. Ushida, K. Goda, F. Kannari, and I. Sakuma, “Sequentially timed all-opticalmapping photography (STAMP),” Nat. Photonics 8(9), 695–700 (2014).
[Crossref]

Ushida, T.

K. Nakagawa, A. Iwasaki, A. Y. Oishi, R. Horisaki, A. Tsukamoto, A. Nakamura, K. Hirosawa, H. Liao, T. Ushida, K. Goda, F. Kannari, and I. Sakuma, “Sequentially timed all-opticalmapping photography (STAMP),” Nat. Photonics 8(9), 695–700 (2014).
[Crossref]

Vittert, L. E.

Q. Sun, M. P. Edgar, R. Bowman, L. E. Vittert, S. Welsh, A. Bowman, and M. J. Padgett, “3D Computational Imaging with Single-Pixel Detectors,” Science 340(6134), 844–847 (2013).
[Crossref]

Wang, X.

Z. Zhang, X. Wang, G. Zheng, and J. Zhong, “Fast Fourier single-pixel imaging via binary illumination,” Sci. Rep. 7(1), 12029 (2017).
[Crossref]

Wang, Y.

D. Shi, K. Yin, J. Huang, K. Yuan, W. Zhu, C. Xie, D. Liu, and Y. Wang, “Fast tracking of moving objects using single-pixel imaging,” Opt. Commun. 440, 155–162 (2019).
[Crossref]

Warburton, R. E.

Wei, M.

M. Wei, F. Xing, and Z. You, “A real-time detection and positioning method for small and weak targets using a 1Dmorphology-based approach in 2D images,” Light: Sci. Appl. 7(5), 18006 (2018).
[Crossref]

Welsh, S.

Q. Sun, M. P. Edgar, R. Bowman, L. E. Vittert, S. Welsh, A. Bowman, and M. J. Padgett, “3D Computational Imaging with Single-Pixel Detectors,” Science 340(6134), 844–847 (2013).
[Crossref]

Welsh, S. S.

M. P. Edgar, G. M. Gibson, R. W. Bowman, B. Sun, N. Radwell, K. J. Mitchell, S. S. Welsh, and M. J. Padgett, “Simultaneous real-time visible and infrared video with single-pixel detectors,” Sci. Rep. 5(1), 10669 (2015).
[Crossref]

Xie, C.

D. Shi, K. Yin, J. Huang, K. Yuan, W. Zhu, C. Xie, D. Liu, and Y. Wang, “Fast tracking of moving objects using single-pixel imaging,” Opt. Commun. 440, 155–162 (2019).
[Crossref]

Xing, F.

M. Wei, F. Xing, and Z. You, “A real-time detection and positioning method for small and weak targets using a 1Dmorphology-based approach in 2D images,” Light: Sci. Appl. 7(5), 18006 (2018).
[Crossref]

Xu, L.

A. C. Sankaranarayanan, L. Xu, C. Studer, Y. Li, K. F. Kelly, and R. G. Baraniuk, “Video compressive sensing for spatial multiplexing cameras using motion-flow models,” SIAM J. Imaging Sci. 8(3), 1489–1518 (2015).
[Crossref]

Xu, Y.

Yang, S.

J. Teng, Q. Gao, M. Chen, S. Yang, and H. Chen, “Time-encoded single-pixel 3D imaging,” APL Photonics 5(2), 020801 (2020).
[Crossref]

Ye, J.

Yin, K.

D. Shi, K. Yin, J. Huang, K. Yuan, W. Zhu, C. Xie, D. Liu, and Y. Wang, “Fast tracking of moving objects using single-pixel imaging,” Opt. Commun. 440, 155–162 (2019).
[Crossref]

You, Z.

M. Wei, F. Xing, and Z. You, “A real-time detection and positioning method for small and weak targets using a 1Dmorphology-based approach in 2D images,” Light: Sci. Appl. 7(5), 18006 (2018).
[Crossref]

Yu, X.

R. I. Stantchev, X. Yu, T. Blu, and E. Pickwell-MacPherson, “Real-time terahertz imaging with a single-pixel detector,” Nat. Commun. 11(1), 2535 (2020).
[Crossref]

Yuan, K.

D. Shi, K. Yin, J. Huang, K. Yuan, W. Zhu, C. Xie, D. Liu, and Y. Wang, “Fast tracking of moving objects using single-pixel imaging,” Opt. Commun. 440, 155–162 (2019).
[Crossref]

Zhang, Z.

Zheng, G.

Z. Zhang, X. Wang, G. Zheng, and J. Zhong, “Fast Fourier single-pixel imaging via binary illumination,” Sci. Rep. 7(1), 12029 (2017).
[Crossref]

Zhong, J.

Zhu, W.

D. Shi, K. Yin, J. Huang, K. Yuan, W. Zhu, C. Xie, D. Liu, and Y. Wang, “Fast tracking of moving objects using single-pixel imaging,” Opt. Commun. 440, 155–162 (2019).
[Crossref]

Zitova, B.

J. Flusser, T. Suk, and B. Zitova, 2D and 3D image analysis by moments, (John Wiley & Sons, Chichester, United Kingdom, 2016) Chap. 3.

APL Photonics (1)

J. Teng, Q. Gao, M. Chen, S. Yang, and H. Chen, “Time-encoded single-pixel 3D imaging,” APL Photonics 5(2), 020801 (2020).
[Crossref]

Appl. Phys. Lett. (1)

O. S. Magana-Loaiza, G. A. Howland, M. Malik, J. C. Howell, and R. W. Boyd, “Compressive object tracking using entangled photons,” Appl. Phys. Lett. 102, 231104 (2013).
[Crossref]

Imaging Sci. J. (1)

P. W. Fuller, “An introduction to high speed photography and photonics,” Imaging Sci. J. 57(6), 293–302 (2009).
[Crossref]

Light: Sci. Appl. (1)

M. Wei, F. Xing, and Z. You, “A real-time detection and positioning method for small and weak targets using a 1Dmorphology-based approach in 2D images,” Light: Sci. Appl. 7(5), 18006 (2018).
[Crossref]

Nat. Commun. (2)

M. Sun, M. P. Edgar, G. M. Gibson, B. Sun, N. Radwell, R. Lamb, and M. J. Padgett, “Single-pixel three-dimensional imaging with time-based depth resolution,” Nat. Commun. 7(1), 12010 (2016).
[Crossref]

R. I. Stantchev, X. Yu, T. Blu, and E. Pickwell-MacPherson, “Real-time terahertz imaging with a single-pixel detector,” Nat. Commun. 11(1), 2535 (2020).
[Crossref]

Nat. Photonics (2)

K. Nakagawa, A. Iwasaki, A. Y. Oishi, R. Horisaki, A. Tsukamoto, A. Nakamura, K. Hirosawa, H. Liao, T. Ushida, K. Goda, F. Kannari, and I. Sakuma, “Sequentially timed all-opticalmapping photography (STAMP),” Nat. Photonics 8(9), 695–700 (2014).
[Crossref]

G. Gariepy, F. Tonolini, R. Henderson, J. Leach, and D. Faccio, “Detection and tracking of moving objects hidden from view,” Nat. Photonics 10(1), 23–26 (2016).
[Crossref]

Nature Photon (1)

M. P. Edgar, G. M. Gibson, and M. J. Padgett, “Principles and prospects for single-pixel imaging,” Nature Photon 13(1), 13–20 (2019).
[Crossref]

Opt. Commun. (1)

D. Shi, K. Yin, J. Huang, K. Yuan, W. Zhu, C. Xie, D. Liu, and Y. Wang, “Fast tracking of moving objects using single-pixel imaging,” Opt. Commun. 440, 155–162 (2019).
[Crossref]

Opt. Express (6)

Opt. Lett. (1)

Optica (1)

Sci Rep (1)

J. Klein, C. Peter, J. Martín, M. Laurenzis, and M. Hullin, “Tracking objects outside the line of sight using 2D intensity images,” Sci Rep 6(1), 1–9 (2016).
[Crossref]

Sci. Adv. (1)

D. B. Phillips, M. J. Sun, J. M. Taylor, M. P. Edgar, S. M. Barnett, G. M. Gibson, and M. J. Padgett, “Adaptive foveated single-pixel imaging with dynamic supersampling,” Sci. Adv. 3(4), e1601782 (2017).
[Crossref]

Sci. Rep. (3)

M. Sun, L. Meng, M. P. Edgar, M. J. Padgett, and N. Radwell, “A Russian Dolls ordering of the Hadamard basis for compressive single-pixel imaging,” Sci. Rep. 7(1), 3464 (2017).
[Crossref]

Z. Zhang, X. Wang, G. Zheng, and J. Zhong, “Fast Fourier single-pixel imaging via binary illumination,” Sci. Rep. 7(1), 12029 (2017).
[Crossref]

M. P. Edgar, G. M. Gibson, R. W. Bowman, B. Sun, N. Radwell, K. J. Mitchell, S. S. Welsh, and M. J. Padgett, “Simultaneous real-time visible and infrared video with single-pixel detectors,” Sci. Rep. 5(1), 10669 (2015).
[Crossref]

Science (1)

Q. Sun, M. P. Edgar, R. Bowman, L. E. Vittert, S. Welsh, A. Bowman, and M. J. Padgett, “3D Computational Imaging with Single-Pixel Detectors,” Science 340(6134), 844–847 (2013).
[Crossref]

Sensors (1)

M. El-Desouki, M. J. Deen, Q. Y. Fang, L. Liu, F. Tse, and D. Armstrong, “CMOS imagesensors for high speed applications,” Sensors 9(1), 430–444 (2009).
[Crossref]

SIAM J. Imaging Sci. (1)

A. C. Sankaranarayanan, L. Xu, C. Studer, Y. Li, K. F. Kelly, and R. G. Baraniuk, “Video compressive sensing for spatial multiplexing cameras using motion-flow models,” SIAM J. Imaging Sci. 8(3), 1489–1518 (2015).
[Crossref]

Synthesis Lectures on Image, Video, and Multimedia Processing (1)

S. T. Acton and N. Ray, “Biomedical image analysis: tracking,” Synthesis Lectures on Image, Video, and Multimedia Processing 2(1), 1–152 (2006).
[Crossref]

Other (3)

M. Danelljan, G. Häger, F. Khan, and M. Felsberg, “Accurate scale estimation for robust visual tracking,” British Machine Vision Conference, Michel Valstar, ed. (BMVC, 2014), pp. 1–11.

Y. Li, C. Fu, F. Ding, Z. Huang, and G. Lu, “Autotrack: Towards high-performance visual tracking for UAV with automatic spatio-temporal regularization,” Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. (IEEE, 2020), pp. 11923–11932.

J. Flusser, T. Suk, and B. Zitova, 2D and 3D image analysis by moments, (John Wiley & Sons, Chichester, United Kingdom, 2016) Chap. 3.

Supplementary Material (2)

NameDescription
Visualization 1       Tracking at ~1800fps
Visualization 2       Tracking at ~7400fps

Data availability

No data were generated or analyzed in the presented research.

Cited By

Optica participates in Crossref's Cited-By Linking service. Citing articles from Optica Publishing Group journals and other participating publishers are listed here.

Alert me when this article is cited.


Figures (5)

Fig. 1.
Fig. 1. Experimental setup. A light-emitting diode (LED) uniformly illuminates a digital micromirror device (DMD), used to provide structured illumination onto a scene through a projection lens (PL). A diffuse reflection paper driven by a 2-D motorized stage and a magnet in the scene are moving objects in the two set of experiments, respectively. Light from the scene is collected by a photomultiplier tube (PMT) through a collection lens (CL). The measured light intensities are used in a reconstruction algorithm run on an industrial personal computer (IPC) to reconstruct the centroid of the scene. (b) The photo geometric moment pattern S1
Fig. 2.
Fig. 2. Binary geometric moment patterns generation by spatial dithering. (a) Original grayscale geometric moment pattern S2 with a spatial resolution of 256${\times} $ 256 pixels. (b) Upsampled grayscale geometric moment pattern by using the ‘bicubic’ image interpolation algorithm. (c) Binary geometric moment pattern by using the Floyd-Steinberg error diffusion dithering method. (d) A partial view of the two parts in the binary geometric moment pattern. (e) Partial photographs of experimentally generate binary geometric moment patterns captured by a camera (iPhone X).To display clearly, white paper was used as the background during shooting.
Fig. 3.
Fig. 3. Experimental results of the real-time online tracking of a moving diffuse reflection paper up to 1800 Hz. (a) Illustration of a scene containing diffuse reflection paper driven by a 2-D motorized stage and 3 geometric moment patterns. (b) The trajectories reconstructed from the two datasets. The solid blue line indicates the average trajectory of the object obtained by our method, while the solid red line represents the trajectory obtained from the continuous images captured by the camera. (see Visualization 1). The scale bar is 12.5 mm. (c) 1-D representation of the average trajectory by our method, where the red and blue lines denote the average trajectories in the transverse and axial directions, respectively. (d) Evolution of the error during the tracking procedure. The solid lines and the shaded areas indicate the average and standard deviation of ten trajectories in the transverse (top) and axial (bottom) directions.
Fig. 4.
Fig. 4. Experimental results of tracking a moving magnet up to 7400 Hz. (a) Illustration of a scene containing a magnet on a breadboard and 3 geometric moment patterns. (b) 3-D representation of the reconstructed trajectory after filtering. The scale bar is 12.5 mm. (c) 1-D representation of the reconstructed trajectory. The gray line represents the original data, while the red and blue lines indicate the filtered data.
Fig. 5.
Fig. 5. Tracking experimental results at frame rate ∼7400 Hz and ∼2000 Hz. (a) Illustration of a scene containing a projector including a DMD 1, a projection lens, a DMD 2, a collection lens and a PMT (photomultiplier tube). (b) Real trajectory and the reconstructed trajectories. The orange line represents the real trajectory, while the red dotted line and the blue line indicate the filtered trajectory at the frame rates of ∼7400 Hz and ∼2000 Hz, respectively. The scale bar is 1 mm. (c) 1-D representation of the reconstructed trajectory at frame rate ∼2000 Hz. The gray line represents the original data, while the yellow and green lines indicate the filtered data. (d) 1-D representation of the reconstructed trajectory at frame rate ∼7400 Hz. The gray line represents the original data, while the red and blue lines indicate the filtered data.

Equations (8)

Equations on this page are rendered with MathJax. Learn more.

I n = x , y f ( x , y ) S n ( x , y ) .
m p q = x , y x p y q f ( x , y ) .
x c = m 10 / m 00 = I 2 / I 1
y c = m 01 / m 00 = I 3 / I 1 .
V x = P / Δ t
V y = P / Δ t .
V max = V x 2 + V y 2 = 2 P / Δ t .
f ( x , y ) = 0 Δ t g ( x V x t , y V y t ) d t .

Metrics

Select as filters


Select Topics Cancel
© Copyright 2022 | Optica Publishing Group. All Rights Reserved