Abstract

An event-based image sensor operates fundamentally differently from a conventional frame-based image sensor: it responds only to local brightness changes, whereas a frame-based sensor outputs a linear representation of the illumination integrated over a fixed exposure time. The output of an event-based image sensor is therefore an asynchronous stream of spatiotemporal events, each tagged with the location, timestamp, and polarity of the triggering brightness change. Compared to traditional frame-based image sensors, event-based image sensors offer high temporal resolution, low latency, high dynamic range, and low power consumption. Although event-based image sensors have been used in many computer vision, navigation, and even space situational awareness applications, little work has been done to explore their applicability to wavefront sensing. In this work, we present the integration of an event camera into a Shack-Hartmann wavefront sensor and the use of event data to determine spot displacements and estimate the wavefront. We show that it achieves the same functionality at substantially higher speed and can operate in extremely low light conditions. This makes an event-based Shack-Hartmann wavefront sensor a preferable choice for adaptive optics systems where the light budget is limited or high bandwidth is required.
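The reduction described above (event stream → spot displacement per subaperture) can be illustrated with a minimal sketch. This is not the paper's implementation; the event tuple layout, the reference centroid, and the unit-weight centroid are illustrative assumptions.

```python
import numpy as np

# Hypothetical event stream for one lenslet subaperture: each event carries
# pixel address (x, y), timestamp in microseconds, and polarity (+1/-1),
# mirroring the (location, timestamp, polarity) tagging described above.
events = np.array(
    [(12, 14, 100, 1), (13, 15, 120, 1), (14, 14, 150, -1), (13, 13, 180, 1)],
    dtype=[("x", "i4"), ("y", "i4"), ("t", "i8"), ("p", "i1")],
)

def event_centroid(ev):
    """Centroid of event addresses within one subaperture window.

    Every event gets unit weight; a real pipeline might first filter
    noise events or weight by recency before averaging.
    """
    return float(ev["x"].mean()), float(ev["y"].mean())

# Spot displacement relative to an assumed reference (unaberrated) centroid;
# in a Shack-Hartmann sensor this displacement is proportional to the local
# wavefront slope over the subaperture.
ref = (12.5, 13.5)
cx, cy = event_centroid(events)
dx, dy = cx - ref[0], cy - ref[1]
```

Repeating this per subaperture yields the slope map that a conventional Shack-Hartmann reconstructor consumes, but computed incrementally from sparse events rather than from full frames.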

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement



Tapson, J.

S. Afshar, T. J. Hamilton, J. Tapson, A. van Schaik, and G. Cohen, “Investigation of event-based surfaces for high-speed detection, unsupervised feature extraction, and object recognition,” Front. Neurosci. 12, 1047 (2019).
[Crossref]

Thakor, N.

R. Ghosh, A. Gupta, A. Nakagawa, A. Soares, and N. Thakor, “Spatiotemporal filtering for event-based action recognition,” arXiv e-prints arXiv:1903.07067 (2019).

Thomas, S.

S. Thomas, T. Fusco, A. Tokovinin, M. Nicolle, V. Michau, and G. Rousset, “Comparison of centroid computation algorithms in a Shack–Hartmann sensor,” Mon. Not. R. Astron. Soc. 371(1), 323–336 (2006).
[Crossref]

Tokovinin, A.

S. Thomas, T. Fusco, A. Tokovinin, M. Nicolle, V. Michau, and G. Rousset, “Comparison of centroid computation algorithms in a Shack–Hartmann sensor,” Mon. Not. R. Astron. Soc. 371(1), 323–336 (2006).
[Crossref]

Tschechne, S.

T. Brosch, S. Tschechne, and H. Neumann, “On event-based optical flow detection,” Front. Neurosci. 9, 137 (2015).
[Crossref]

van Schaik, A.

G. Cohen, S. Afshar, B. Morreale, T. Bessell, A. Wabnitz, M. Rutten, and A. van Schaik, “Event-based sensing for space situational awareness,” J. Astronaut. Sci. 66(2), 125–141 (2019).
[Crossref]

S. Afshar, T. J. Hamilton, J. Tapson, A. van Schaik, and G. Cohen, “Investigation of event-based surfaces for high-speed detection, unsupervised feature extraction, and object recognition,” Front. Neurosci. 12, 1047 (2019).
[Crossref]

S. Afshar, A. P. Nicholson, A. van Schaik, and G. Cohen, “Event-based object detection and tracking for space situational awareness,” arXiv e-prints arXiv:1911.08730 (2019).

G. Cohen, S. Afshar, and A. van Schaik, “Approaches for astrometry using event-based sensors,” in Advanced Maui Optical and Space Surveillance (AMOS) Technologies Conference, (2018), p. 25.

Wabnitz, A.

G. Cohen, S. Afshar, B. Morreale, T. Bessell, A. Wabnitz, M. Rutten, and A. van Schaik, “Event-based sensing for space situational awareness,” J. Astronaut. Sci. 66(2), 125–141 (2019).
[Crossref]

Wang, Y.

B. Son, Y. Suh, S. Kim, H. Jung, J.-S. Kim, C. Shin, K. Park, K. Lee, J. Park, J. Woo, Y. Roh, H. Lee, Y. Wang, I. Ovsiannikov, and H. Ryu, “4.1 a 640×480 dynamic vision sensor with a 9μm pixel and 300meps address-event representation,” in 2017 IEEE International Solid-State Circuits Conference (ISSCC), (2017), pp. 66–67.

Wohlgenannt, R.

C. Posch, D. Matolin, and R. Wohlgenannt, “A QVGA 143 dB dynamic range frame-free PWM image sensor with lossless pixel-level video compression and time-domain CDS,” IEEE J. Solid-State Circuits 46(1), 259–275 (2011).
[Crossref]

Woo, J.

B. Son, Y. Suh, S. Kim, H. Jung, J.-S. Kim, C. Shin, K. Park, K. Lee, J. Park, J. Woo, Y. Roh, H. Lee, Y. Wang, I. Ovsiannikov, and H. Ryu, “4.1 a 640×480 dynamic vision sensor with a 9μm pixel and 300meps address-event representation,” in 2017 IEEE International Solid-State Circuits Conference (ISSCC), (2017), pp. 66–67.

Yang, M.

C. Brandli, R. Berner, M. Yang, S.-C. Liu, and T. Delbruck, “A 240×180 130 dB 3μs latency global shutter spatiotemporal vision sensor,” IEEE J. Solid-State Circuits 49(10), 2333–2341 (2014).
[Crossref]

R. Berner, C. Brandli, M. Yang, S.-C. Liu, and T. Delbruck, “A 240 × 180 120 dB 10mW 12 μs-latency sparse output vision sensor for mobile applications,” in 2013 Symposium on VLSI Circuits Digest of Technical Papers, (2013), pp. c186–c187.

Appl. Opt. (2)

Front. Neurosci. (3)

S. Afshar, T. J. Hamilton, J. Tapson, A. van Schaik, and G. Cohen, “Investigation of event-based surfaces for high-speed detection, unsupervised feature extraction, and object recognition,” Front. Neurosci. 12, 1047 (2019).

V. Padala, A. Basu, and G. Orchard, “A noise filtering algorithm for event-based asynchronous change detection image sensors on TrueNorth and its implementation on TrueNorth,” Front. Neurosci. 12, 118 (2018).

T. Brosch, S. Tschechne, and H. Neumann, “On event-based optical flow detection,” Front. Neurosci. 9, 137 (2015).

IEEE J. Solid-State Circuits (3)

C. Brandli, R. Berner, M. Yang, S.-C. Liu, and T. Delbruck, “A 240×180 130 dB 3μs latency global shutter spatiotemporal vision sensor,” IEEE J. Solid-State Circuits 49(10), 2333–2341 (2014).

P. Lichtsteiner, C. Posch, and T. Delbruck, “A 128×128 120 dB 15μs latency asynchronous temporal contrast vision sensor,” IEEE J. Solid-State Circuits 43(2), 566–576 (2008).

C. Posch, D. Matolin, and R. Wohlgenannt, “A QVGA 143 dB dynamic range frame-free PWM image sensor with lossless pixel-level video compression and time-domain CDS,” IEEE J. Solid-State Circuits 46(1), 259–275 (2011).

IEEE Trans. Neural Netw. Learn. Syst. (1)

X. Lagorce, C. Meyer, S.-H. Ieng, D. Filliat, and R. Benosman, “Asynchronous event-based multikernel algorithm for high-speed visual features tracking,” IEEE Trans. Neural Netw. Learn. Syst. 26(8), 1710–1720 (2015).

J. Opt. Soc. Am. (1)

R. V. Shack and B. C. Platt, “Production and use of a lenticular Hartmann screen,” J. Opt. Soc. Am. 61, 656–660 (1971).

J. Astronaut. Sci. (1)

G. Cohen, S. Afshar, B. Morreale, T. Bessell, A. Wabnitz, M. Rutten, and A. van Schaik, “Event-based sensing for space situational awareness,” J. Astronaut. Sci. 66(2), 125–141 (2019).

Mon. Not. R. Astron. Soc. (1)

S. Thomas, T. Fusco, A. Tokovinin, M. Nicolle, V. Michau, and G. Rousset, “Comparison of centroid computation algorithms in a Shack–Hartmann sensor,” Mon. Not. R. Astron. Soc. 371(1), 323–336 (2006).

Neural Netw. (1)

R. Benosman, S.-H. Ieng, C. Clercq, C. Bartolozzi, and M. Srinivasan, “Asynchronous frameless event-based optical flow,” Neural Netw. 27, 32–37 (2012).

Zeitschrift für Instrumentenkunde (1)

J. Hartmann, “Bemerkungen über den Bau und die Justierung von Spektrographen,” Zeitschrift für Instrumentenkunde 20, 47 (1900).

Other (17)

G. Haessig, A. Cassidy, R. Alvarez, R. Benosman, and G. Orchard, “Spiking optical flow for event-based sensors using IBM’s TrueNorth neurosynaptic system,” arXiv e-prints arXiv:1710.09820 (2017).

J. C. Shelton, “Denominator-free control algorithms for adaptive optics,” in Adaptive Optics and Applications, (1997), pp. 455–459.

J. Hardy, Adaptive Optics for Astronomical Telescopes, Oxford series in optical and imaging sciences (Oxford University, 1998).

F. Barranco, C. Fermuller, and E. Ros, “Real-time clustering and multi-target tracking using event-based sensors,” in 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), (2018), pp. 5764–5769.

S. Afshar, A. P. Nicholson, A. van Schaik, and G. Cohen, “Event-based object detection and tracking for space situational awareness,” arXiv e-prints arXiv:1911.08730 (2019).

G. Cohen, S. Afshar, and A. van Schaik, “Approaches for astrometry using event-based sensors,” in Advanced Maui Optical and Space Surveillance (AMOS) Technologies Conference, (2018), p. 25.

A. Lambert and G. Cohen, “Study of a spatio-temporal sensor for turbulence characterisation and wavefront sensing (conference presentation),” in Environmental Effects on Light Propagation and Adaptive Systems, (2018).

P. Lichtsteiner, C. Posch, and T. Delbruck, “A 128 × 128 120 dB 30mW asynchronous vision sensor that responds to relative intensity change,” in 2006 IEEE International Solid State Circuits Conference - Digest of Technical Papers, (2006), pp. 2060–2069.

R. Berner, C. Brandli, M. Yang, S.-C. Liu, and T. Delbruck, “A 240 × 180 120 dB 10mW 12 μs-latency sparse output vision sensor for mobile applications,” in 2013 Symposium on VLSI Circuits Digest of Technical Papers, (2013), pp. c186–c187.

G. Gallego, T. Delbruck, G. Orchard, C. Bartolozzi, B. Taba, A. Censi, S. Leutenegger, A. Davison, J. Conradt, K. Daniilidis, and D. Scaramuzza, “Event-Based Vision: a Survey,” arXiv e-prints arXiv:1904.08405 (2019).

T. Delbruck, “Neuromorphic vision sensing and processing,” in 2016 46th European Solid-State Device Research Conference (ESSDERC), (2016), pp. 7–14.

B. Son, Y. Suh, S. Kim, H. Jung, J.-S. Kim, C. Shin, K. Park, K. Lee, J. Park, J. Woo, Y. Roh, H. Lee, Y. Wang, I. Ovsiannikov, and H. Ryu, “4.1 A 640×480 dynamic vision sensor with a 9 μm pixel and 300 Meps address-event representation,” in 2017 IEEE International Solid-State Circuits Conference (ISSCC), (2017), pp. 66–67.

H. Rebecq, D. Gehrig, and D. Scaramuzza, “ESIM: an open event camera simulator,” in Proceedings of The 2nd Conference on Robot Learning, vol. 87 of Proceedings of Machine Learning Research A. Billard, A. Dragan, J. Peters, and J. Morimoto, eds. (PMLR, 2018), pp. 969–982.

D. Joubert, “Design of a simulated event-based sensor to evaluate its potential for space applications,” (2020). V1.02.

H. Liu, C. Brandli, C. Li, S.-C. Liu, and T. Delbruck, “Design of a spatiotemporal correlation filter for event-based sensors,” in 2015 IEEE International Symposium on Circuits and Systems (ISCAS), (2015), pp. 722–725.

R. Ghosh, A. Gupta, A. Nakagawa, A. Soares, and N. Thakor, “Spatiotemporal filtering for event-based action recognition,” arXiv e-prints arXiv:1903.07067 (2019).

T. Delbruck, “Frame-free dynamic digital vision,” in Proceedings of the International Symposium on Secure-Life Electronics, Advanced Electronics for Quality Life and Society, vol. 1 (University of Tokyo, 2008), pp. 21–26.

Supplementary Material (10)

Name       Description
Visualization 1       Zernike 2
Visualization 2       Zernike 3
Visualization 3       Zernike 4
Visualization 4       Zernike 5
Visualization 5       Zernike 6
Visualization 6       Zernike 7
Visualization 7       Zernike 8
Visualization 8       Zernike 9
Visualization 9       Zernike 10
Visualization 10       Zernike 11



Figures (15)

Fig. 1. Abstracted DVS pixel diagram and principle of its operation [1].
Fig. 2. (a) Event stream simulated from a rotating Gaussian spot; (b) histogram of the generated events within the 1 s simulated rotation cycle.
Fig. 3. Scatter plots of the event stream after applying (a) the background activity filter and (b) the time surface activation test.
Fig. 4. Average position estimation using ON, OFF and both ON/OFF events.
Fig. 5. Estimation of spot motion using the blob tracker.
Fig. 6. Blob tracking and average position errors using ON, OFF and both ON/OFF events.
Fig. 7. Histograms of events simulated from varying Zernike modes.
Fig. 8. Wavefront reconstructions of the first 8 Zernike modes when cycling the individual Zernike modes $Z_4$ (Defocus), $Z_5$ (Astigmatism 45) and $Z_6$ (Astigmatism 0).
Fig. 9. (a) Optical setup for the event-based Shack-Hartmann wavefront sensing experiments (BS: beamsplitter, L: achromatic lens, ND: neutral density filter, MLA: micro lenslet array). (b) DAVIS 240C event camera with a customized 3D-printed mount compatible with Thorlabs' 30 mm cage systems.
Fig. 10. (a) An APS frame from the DAVIS 240C; (b) an exponentially decayed time surface within a time period of 50 ms with a decay constant of 10 ms; (c) histogram of events within the same 50 ms time period.
Fig. 11. Comparison of the averaged spot displacements in all valid sub-apertures from APS frames and DVS events when cycling the DMP horizontally.
Fig. 12. From left to right: central $4\times 4$ sub-apertures from the APS frame after inserting the ND1 filter; histograms of DVS events after inserting ND1, ND2, ND3 and ND4 filters.
Fig. 13. Averaged displacement in the $x$ direction estimated from events after reducing the light level by 10, 100 and 1000 times using ND1, ND2 and ND3 filters. The excitation is 20 cycles of Tilt X aberration.
Fig. 14. Histograms of events from varying Zernike modes recorded on the DAVIS 240C event camera. See Visualization 1, Visualization 2, Visualization 3, Visualization 4, Visualization 5, Visualization 6, Visualization 7, Visualization 8, Visualization 9, and Visualization 10 in the supplementary material for videos generated from event-rendered frames with varying individual Zernike modes.
Fig. 15. Wavefront reconstruction coefficients of the first 8 Zernike modes in the presence of cycling the individual Zernike modes $Z_4$ (Defocus), $Z_5$ (Astigmatism 45) and $Z_6$ (Astigmatism 0) using the DVS event data.

Equations (16)

Equations on this page are rendered with MathJax.

$$e_i = [\mathbf{u}_i, t_i, p_i]^T$$
$$\forall e_i,\ \forall \mathbf{u}_k \in U:\quad T_i(\mathbf{u}_k) = \begin{cases} T_{i-1}(\mathbf{u}_k) & \mathbf{u}_k \neq \mathbf{u}_i \\ t_i & \mathbf{u}_k = \mathbf{u}_i \end{cases}$$
$$\forall \mathbf{u}_k \in U:\quad S_i(\mathbf{u}_k) = e^{(T_i(\mathbf{u}_k) - t_i)/\tau}$$
$$Q_P = (T_P - F_P)/(T_P + F_P)$$
$$Q_N = (T_N - F_N)/(T_N + F_N)$$
$$\hat{x}_{t_i} = m\,\hat{x}_{t_{i-1}} + (1 - m)\,x_i$$
$$\hat{y}_{t_i} = m\,\hat{y}_{t_{i-1}} + (1 - m)\,y_i$$
$$p_i(\mathbf{u}) = \frac{1}{2\pi|\Sigma_i|^{1/2}}\, e^{-\frac{1}{2}(\mathbf{u} - \mu_i)^T \Sigma^{-1}(\mathbf{u} - \mu_i)}$$
$$\mu_i = [\mu_{ix}, \mu_{iy}]^T$$
$$\Sigma = \begin{bmatrix} \sigma_x^2 & \sigma_{xy} \\ \sigma_{xy} & \sigma_y^2 \end{bmatrix}$$
$$\mu_t = a_1\,\mu_{t-1} + (1 - a_1)\,\mathbf{u}$$
$$\Sigma_t = a_2\,\Sigma_{t-1} + (1 - a_2)\,\Delta\Sigma$$
$$\Delta\Sigma = \begin{bmatrix} (x - \mu_{tx})^2 & (x - \mu_{tx})(y - \mu_{ty}) \\ (x - \mu_{tx})(y - \mu_{ty}) & (y - \mu_{ty})^2 \end{bmatrix}$$
$$A_i(t) = \begin{cases} A_i(t - \Delta t)\exp(-\Delta t/\tau) + p_i & \text{if } e(\mathbf{u}, t) \text{ belongs to tracker } i \\ A_i(t - \Delta t)\exp(-\Delta t/\tau) & \text{otherwise} \end{cases}$$
$$\mathrm{distance}(P, Q) = \sqrt{(\Delta x)^2 + (\Delta y)^2} = \sqrt{(x_t - \hat{x}_t)^2 + (y_t - \hat{y}_t)^2}$$
$$g_i(r, c) = \frac{\Delta_i(r, c)}{f_{ML}} = \frac{\hat{\mathbf{u}}_i(r, c) - \mathbf{u}_{ref}(r, c)}{f_{ML}}$$
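Three of the quantities above lend themselves to a short sketch: the per-pixel time surface $S_i(\mathbf{u}) = e^{(T_i(\mathbf{u}) - t_i)/\tau}$, the exponential moving-average spot estimate $\hat{x}_{t_i}, \hat{y}_{t_i}$, and the sub-aperture slope $g = (\hat{\mathbf{u}} - \mathbf{u}_{ref})/f_{ML}$. The following Python is an illustrative sketch only, not the authors' implementation; the class and function names and the default parameter values (`tau`, `m`) are assumptions:

```python
import numpy as np


class TimeSurface:
    """Per-pixel last-event timestamps T_i(u); names/defaults are illustrative."""

    def __init__(self, shape, tau=10e-3):
        self.T = np.full(shape, -np.inf)  # no event yet -> surface value 0
        self.tau = tau

    def update(self, x, y, t):
        # T_i keeps its old value everywhere except the event pixel, which gets t_i
        self.T[y, x] = t

    def surface(self, t):
        # S_i(u) = exp((T_i(u) - t)/tau): 1 at freshly active pixels, decaying to 0
        return np.exp((self.T - t) / self.tau)


class EmaTracker:
    """Average-position tracker: x_hat_{t_i} = m*x_hat_{t_{i-1}} + (1 - m)*x_i."""

    def __init__(self, x0, y0, m=0.98):
        self.x, self.y, self.m = float(x0), float(y0), m

    def update(self, x_i, y_i):
        self.x = self.m * self.x + (1.0 - self.m) * x_i
        self.y = self.m * self.y + (1.0 - self.m) * y_i
        return self.x, self.y


def local_slope(u_hat, u_ref, f_ml):
    """g_i(r, c) = (u_hat - u_ref) / f_ML: wavefront gradient in one sub-aperture."""
    return (np.asarray(u_hat, dtype=float) - np.asarray(u_ref, dtype=float)) / f_ml
```

Feeding each incoming event $(x, y, t)$ through `TimeSurface.update` and `EmaTracker.update` reproduces the smoothing behaviour of the equations: an `m` close to 1 weights history heavily, trading noise rejection against tracking lag.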

Metrics