
Detection approach for GEO space objects with a wide-field optical telescope array

Open Access

Abstract

In 2020, Changchun Observatory developed a 280 mm wide-field optical telescope array to improve surveillance of space debris in the geosynchronous belt. The array offers many advantages, including a wide field of view, the ability to observe a large area of sky, and high reliability. However, the wide field of view causes a significant number of background stars to appear in the image when photographing space objects, making the objects difficult to detect. This research focuses on the precise detection of GEO space objects from images taken by this telescope array so that they can be positioned in large quantities. Our work further exploits a motion feature of such objects, namely that an object can be regarded as being in uniform linear motion for a brief length of time. Based on this feature, the belt is divided into a number of smaller areas, and the telescope array scans each smaller area one at a time from east to west. To detect objects in a subarea, a combination of image differencing and trajectory association is used. The image differencing algorithm removes most stars and screens out suspected objects in the image. The trajectory association algorithm then filters out the real objects among the suspected ones and links the trajectories attributed to the same object. The feasibility and accuracy of the approach were verified by the experimental results. The accuracy rate of trajectory association exceeds 90%, and on average more than 580 space objects can be detected per observation night. Since the J2000.0 equatorial system can accurately describe the apparent position of an object, detection is performed in this coordinate system rather than the pixel coordinate system.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Optical observation technology is one of the most important technical means of ground-based space surveillance. Especially for geosynchronous space objects (GEO objects), existing radar technology is constrained by high operating costs, making it difficult to use as a routine space surveillance means [1,2]. Ground-based optical observation therefore becomes the main means.

Among the existing observation strategies, a single optical telescope can only observe a single GEO object per unit time in staring mode [3]. This strategy has low efficiency and cannot meet modern requirements for cataloging space objects. Optical telescope arrays have been developed to improve observing efficiency. A telescope array is a distinctive design composed of multiple identical telescopes [4].

Compared with distant stars, GEO objects are very close to the earth. Therefore, the telescope array can survey objects by employing unit telescopes with smaller apertures. The telescope array, unlike the optical interferometer in astronomy, does not combine signals from two or more telescopes to produce measurements with higher resolution than one of them could obtain independently [5]. First, it uses a short focal length to increase the field of view (FOV) of each unit telescope, and then uses many units to cover as wide an area of the sky as possible.

The observation strategy used by the telescope array is the sidereal mode, which is suitable for the search and discovery of GEO objects. When operating in this mode, the telescope array first points to a fixed area of the sky and then remains stationary while the background drifts at about 15″/s (the rate of Earth's rotation), waiting for objects to cross the FOV [3,6]. As a result, the telescope array does not require high-precision tracking capability, and its servo control is simple and reliable.

In summary, the characteristics of the telescope array stand out significantly from those of conventional telescopes. It is a space surveillance system which is easy to implement and cost-effective.

Its unit telescope is designed with a short focal length, a wide FOV, and a small aperture to improve observing efficiency and save costs. However, a telescope with these qualities images GEO objects as dots. At the same time, the wide FOV introduces a vast number of background stars into the image. Stellar images also appear as dots that overwhelm the object, and the two are indistinguishable in terms of brightness. Therefore, one critical issue for space surveillance is to detect space objects among the vast number of stars.

Currently, the scanning technique [7], mathematical morphology [8] and image deconvolution [9] are used to detect GEO objects in a single image, while the stacking method [10], optical flow [11] and hypothesis testing [12] are employed on image sequences. The main topic of this research is multi-object detection in a wide FOV, which faces three major technical challenges.

  • (1) Since the object only takes up a small number of pixels in the image, none of its finely detailed features are visible. Furthermore, it is very difficult to distinguish it from background stars due to their extreme similarity.
  • (2) The telescope array is used as a sky survey telescope, and its task is to obtain as many surveillance data of the object as possible. Therefore, it must perform multi-object detection.
  • (3) During the observation, sometimes the object trajectory (i.e., the surveillance data) will be discontinuous due to changes in the attitude of the object. Therefore, it is necessary to accurately correlate the trajectory of each object so as to carry out the subsequent orbit improvements.

2. Dragonfly Telescope and its composition

The Changchun Observatory operates a number of telescope arrays. One of the most typical is the 280 mm wide-field optical telescope array, which is primarily used for the observation of GEO objects. It consists of four 280 mm aperture telescopes and two T-type equatorial instruments. Two 280 mm unit telescopes are mounted on either side of each equatorial instrument (Fig. 1). The telescope array is also known as the Dragonfly Telescope because of its dragonfly-like form.

Fig. 1. 280 mm wide-field optical telescope array. (a) The appearance. (b) The 3D structure.

In order to improve light-gathering power and make the lens adaptable to diverse environmental conditions for long-term operation (i.e., the lens does not require frequent recoating), the Dragonfly Telescope employs a fully refractive optical system.

To meet the sky survey requirements of a wide FOV and small aberrations, each unit telescope has a FOV of about 6.5° and a fast focal ratio of f/1.15. A unit telescope monitors 42.5 square degrees, and the entire array covers 170 square degrees. The important parameters of the Dragonfly Telescope are shown in Table 1.

Table 1. The important parameters of the Dragonfly Telescope

3. Motion characteristics of space objects

Because space objects and background stars differ little in static characteristics such as size and shape, it is challenging to accurately detect space objects by static features alone, and even when they can be detected, the detection rate is low. Space objects can be detected efficiently by using motion characteristics. Motion characteristics such as position, velocity, and acceleration are crucial because they offer a solid foundation for improving the detection rate.

3.1 Calculate the velocity of a space object

It can be assumed that the space object moves in a circular orbit because its geocentric distance fluctuates very little in a short period of time. The motion characteristics of a space object are illustrated in Fig. 2. Let S and T denote the object and the telescope, respectively. The altitude of the object S in relation to the Earth’s surface is denoted by H. The distance between the object S and the telescope T is denoted by ρ. The angle formed by the Earth’s center and the telescope T at the object S is indicated as φ. The angle formed by the telescope T and the object S at the Earth’s center is expressed as θ. The letters z and h are used to represent the zenith distance and the elevation angle, which are complementary. Thus, we have

$$\left\{ {\begin{array}{l} {h + z = {{90}^o}}\\ {z = \theta + \varphi } \end{array}} \right.. $$

Fig. 2. Motion characteristics of a space object.

The following formula can be used to determine the velocity v of the object S [14],

$$v = \sqrt {\mu /r}, $$
where, $r = {R_e} + H$ denotes the separation between the object S and the Earth’s center, and Re is the Earth’s radius (6378.14 km). The geocentric gravitational constant, denoted by the symbol µ, has a value of 398600.4 km3/s2.

3.2 Calculate the distance between the object and the telescope

According to the law of sines, it follows that

$$\frac{{{R_e}}}{{\sin \varphi }} = \frac{r}{{\sin ({{180}^o} - z)}}. $$

Together, Eqs. (1), (2) and (3) give

$$\theta = {90^o} - h - \arcsin \left( {\frac{{{R_e}}}{r}\cos h} \right). $$

As a result, the distance ρ between the object S and the telescope T is as follows,

$$\rho = \sqrt {{r^2} + R_e^2 - 2r{R_e}\cos \theta }. $$
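As a concrete check, the geometry of Eqs. (2), (4) and (5) can be sketched in a few lines of Python (a minimal illustration; the function name and the test values below are ours, not the paper's):

```python
import math

MU = 398600.4   # geocentric gravitational constant, km^3/s^2
RE = 6378.14    # Earth's radius, km

def object_geometry(h_deg, H_km):
    """Elevation angle h (deg) and altitude H (km) -> circular-orbit
    speed v (km/s) via Eq. (2), geocentric angle theta (rad) via
    Eq. (4), and telescope-object range rho (km) via Eq. (5)."""
    h = math.radians(h_deg)
    r = RE + H_km                                              # geocentric distance
    v = math.sqrt(MU / r)                                      # Eq. (2)
    theta = math.pi / 2 - h - math.asin(RE / r * math.cos(h))  # Eq. (4)
    rho = math.sqrt(r**2 + RE**2 - 2 * r * RE * math.cos(theta))  # Eq. (5)
    return v, theta, rho
```

For a GEO object seen at the zenith (h = 90°, H = 35 786 km) this gives θ = 0 and ρ = H, as expected.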

3.3 Calculate the angular velocity and acceleration of the object

The elevation angle h and the altitude H are assumed to be given. When the velocity v of the object S is perpendicular to the intersection line between the orbital plane and the camera plane, the object S moves vertically upward in the camera plane, as shown in Fig. 2. At this time, the angular velocity ω acquires the minimum value ωmin, and the angular acceleration a reaches the maximum value amax, that is

$${\omega _{\min }} = \frac{{{\rho ^2} + {r^2} - R_e^2}}{{2{\rho ^2}r}}\sqrt {\mu /r}, $$
and
$${a_{\max }} = \frac{{({r^2} - R_e^2){R_e}\mu \sin \theta }}{{{\rho ^4}{r^2}}}. $$

When the velocity v of the object S is parallel to the intersection line between the orbital plane and the camera plane, the object S moves to the highest point in the camera plane. At this time, the angular velocity ω gets the maximum value ωmax, and the angular acceleration a reaches the minimum value amin, that is

$${\omega _{\max }} = \frac{1}{\rho }\sqrt {\mu /r}, $$
and
$${a_{\min }} = 0. $$

The distributions of the angular velocity ω and acceleration a of the object S in a circular orbit, with the elevation angle h and the altitude H as variables, are illustrated in Fig. 3, Fig. 4 and Fig. 5. As can be seen from these figures, the motion characteristics of an object in circular orbits are as follows.

  • (1) The angular velocity and acceleration of the object gradually decrease as the orbit height rises under the same elevation angle h.
  • (2) As the elevation angle h increases, the angular velocity of the object increases gradually.
  • (3) For the same object, the angular acceleration is significantly lower than the angular velocity. Therefore, the object can be seen as being in a uniform linear motion for a brief length of time, particularly for the GEO object.
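These three observations can be reproduced numerically. The sketch below (our own illustration, reusing the geometry of Eqs. (4)-(5) and the extremes of Eqs. (6)-(8)) evaluates the angular-rate extremes for a given elevation and altitude:

```python
import math

MU, RE = 398600.4, 6378.14  # km^3/s^2, km

def angular_rates(h_deg, H_km):
    """Return (w_min, w_max, a_max) in rad/s and rad/s^2 from
    Eqs. (6)-(8), using the geometry of Eqs. (4)-(5)."""
    h = math.radians(h_deg)
    r = RE + H_km
    theta = math.pi / 2 - h - math.asin(RE / r * math.cos(h))
    rho = math.sqrt(r**2 + RE**2 - 2 * r * RE * math.cos(theta))
    v = math.sqrt(MU / r)
    w_min = (rho**2 + r**2 - RE**2) / (2 * rho**2 * r) * v                # Eq. (6)
    a_max = (r**2 - RE**2) * RE * MU * math.sin(theta) / (rho**4 * r**2)  # Eq. (7)
    w_max = v / rho                                                       # Eq. (8)
    return w_min, w_max, a_max
```

At GEO altitude the ratio of angular acceleration to angular velocity comes out tiny, which is exactly why a short arc is well approximated by uniform linear motion.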

Fig. 3. The maximum and minimum of angular velocity.

Fig. 4. The maximum of angular acceleration.

Fig. 5. The maximum ratio of angular acceleration to angular velocity.

4. GEO belt and observation strategy

As satellite launch activities continue to increase, the GEO orbit has become a scarce resource due to its non-renewable nature [15]. GEO objects are usually defined as man-made objects in near-circular orbits at an altitude of 35,786 ± 200 km [16]. At present, the number of GEO objects that have been catalogued is more than 1,000, including nearly 600 operating satellites [17]. The proportion of satellites with orbital inclinations of less than 0.1° is 65.4%, about 78% with inclinations of less than 1°, and up to 95.8% with inclinations of less than 15°.

The observation can be done per-sky-area in order to detect GEO objects in the belt. The observation area is first divided into a number of smaller areas, and the Dragonfly Telescope then scans each smaller area one at a time from east to west, as shown in Fig. 6. The observation is conducted for each subarea at stellar velocity (i.e., the sidereal mode, roughly 15′′/s). There is a 2-minute observation period for each subarea.
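The division of the belt into subareas can be sketched as follows (an illustrative snippet: the 6.5° FOV comes from Table 1 and the 80°-170° strip from Section 6, but the overlap and step size are our assumptions):

```python
def subarea_centers(lon_east, lon_west, fov=6.5, overlap=0.5):
    """Centers (deg) of subareas covering a GEO-belt strip, ordered for
    an east-to-west scan; fov/overlap values are illustrative."""
    step = fov - overlap          # adjacent subareas share a small overlap
    c, centers = lon_east, []
    while c >= lon_west:
        centers.append(c)
        c -= step
    return centers
```

With a 2-minute dwell per subarea, the length of this list directly gives the time needed to sweep the strip once.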

Fig. 6. Schematic illustration of the observation strategy for the GEO belt.

The apparent motion of GEO objects over 2 minutes is demonstrated in Fig. 7. The graph shows theoretically that a GEO object can be regarded as being in uniform linear motion for a brief length of time. The trajectory coordinates of the GEO object are almost in an arithmetic sequence on continuous images acquired at equal time intervals. Therefore, the detection of GEO objects is carried out in accordance with the uniform linear motion.

Fig. 7. Theoretical positions of multiple GEO objects. Specifically, these positions are predicted ephemerides derived from TLE data. In these predicted ephemerides of GEO objects, the position is given at an interval of 1s. (a) The direction of right ascension. (b) The direction of declination.

5. GEO objects detection

The detection of GEO objects is divided into three stages for each Dragonfly Telescope unit.

  • (1) The image is processed by first determining image thresholding and then measuring centroid coordinates of all stellar images.
  • (2) Most of the background stars in the image are removed by using the image difference of adjacent frames, which decreases the amount of calculation and screens out the suspected objects.
  • (3) The trajectory association is employed to further filter out the real objects among the suspected ones. At the same time, the trajectories attributed to the same object are linked to perform subsequent orbit improvement.

5.1 Image processing

The image is segmented by using image thresholding. Each pixel of the image whose gray value is less than a particular threshold is considered sky background; the others are considered stellar images.

There are several ways to calculate an image threshold, each with a different time overhead. Given that the sky background in the area at which the telescope array points is almost constant over a short period of time, the image can be segmented by using adaptive global thresholding.

Define $I(x,y,i)$ as the gray value of the coordinate (x, y) of the i-th frame. Assuming that the size of the image is $n \times m$ pixels, the average gray value is

$${\mu _i} = \frac{{\left( {\sum\limits_{y = 1}^n {\sum\limits_{x = 1}^m {I(x,y,i)} } } \right)}}{{m \times n}}, $$
and the variance is
$${\sigma _i} = \sqrt {\frac{{\left( {\sum\limits_{y = 1}^n {\sum\limits_{x = 1}^m {{{(I(x,y,i) - {\mu_i})}^2}} } } \right)}}{{m \times n}}}. $$

The adaptive global threshold of the image is defined as

$${T_i} = {\mu _i} + C \times {\sigma _i}, $$
where C is a constant whose value affects the minimum signal-to-noise ratio of the stellar image. When C takes the value of 3, it can effectively distinguish between stellar images and the sky background, as shown in Fig. 8. The signal-to-noise ratio of the stellar image is significantly improved.

Fig. 8. Image segmentation effect. (a) The original image. (b) The segmented image.

Following segmentation, connected component labeling [18] is used to extract all stellar images, and then their centroid coordinates are measured.
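The two steps of this subsection (global thresholding of Eqs. (10)-(12), then connected-component centroiding) can be sketched with NumPy and SciPy; the function name and the choice of `scipy.ndimage` are ours, not the paper's:

```python
import numpy as np
from scipy import ndimage

def extract_centroids(image, C=3.0):
    """Segment with the adaptive global threshold T = mu + C*sigma
    (Eqs. (10)-(12)) and return the intensity-weighted centroid (x, y)
    of every connected component."""
    mu, sigma = image.mean(), image.std()        # Eqs. (10)-(11)
    mask = image > mu + C * sigma                # Eq. (12)
    labels, n = ndimage.label(mask)              # connected component labeling
    coms = ndimage.center_of_mass(image * mask, labels, range(1, n + 1))
    return [(x, y) for y, x in coms]             # center_of_mass gives (row, col)
```

A single bright blob on a dark frame is recovered as one component with a sub-pixel centroid, which is the input the differencing stage expects.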

5.2 Image differencing

The Dragonfly Telescope employs an equatorial instrument, but its purpose is not to counteract the effects of the Earth’s rotation so that stellar images remain fixed in the FOV. In practice, the instrument points to a fixed area of the sky, follows the Earth's rotation, and waits for objects to cross the FOV. Therefore, when mounting the telescope and camera on the equatorial instrument, the camera is adjusted to the proper position so that the apparent motion directions of stellar images are all parallel to the x-direction of the camera.

The apparent velocity of a star relative to the ground-based telescope is about v = 15″/s. Therefore, the stellar image appears in successive frames as a uniform linear motion with a velocity of

$$\left\{ {\begin{array}{l} {{v_x} = {v / {{s_x}}} = 2.6169 \; pixels/s}\\ {v_y}=0\end{array}} \right.$$
where, ${s_x} = 5.732^{\prime\prime}/pixel$ is the instantaneous FOV (pixel scale) [13].

The Earth’s rotation is the source of the background interference. The standard approach is to model the background motion in successive frames and then correct the background motion by resolving the affine transformation matrix between adjacent frames. The outcome of the background compensation is essential for further detecting the object.

First, the background motion is modeled based on the uniform motion characteristics of the star. We select the ith and (i + 1)th frames. Based on the time interval ti + 1-ti between two exposures, the displacement dx of the stellar image is calculated as

$$dx = {v_x}({t_{i + 1}} - {t_i}). $$

Then, the coordinates of all stellar images on ith frame are translated by dx pixels in the x-direction to obtain the new coordinates

$$\left\{ {\begin{array}{l} {x_k^{i\textrm{ + 1}} = x_j^i + dx}\\ {y_k^{i + 1} = y_j^i} \end{array}} \right.$$

Comparing the coordinate sets $\{{(x,y)_{(j)}^{(i)}|j = 1,2, \cdots ,{N_j}} \}$ and $\{{(x,y)_{(k)}^{(i + 1)}|k = 1,2, \cdots ,{N_k}} \}$, if the difference between two elements is less than a threshold T, they are judged to be the same star; otherwise, they are judged to be suspected objects.

$$D(k,j) = \left\{ {\begin{array}{ll} 0&{|{(x,y)_{(k)}^{(i + 1)} - (x,y)_{(j)}^{(i)}} |\le T} \\ 1&{|{(x,y)_{(k)}^{(i + 1)} - (x,y)_{(j)}^{(i)}} |> T} \end{array}} \right., $$
where, T is related to the following accuracy of the equatorial instrument and is generally taken as 1 pixel. When $D(k,j) = 0$, the coordinates $(x,y)_{(k)}^{(i + 1)}$ and $(x,y)_{(j)}^{(i)}$ belong to the same star; when $D(k,j) = 1$, the coordinate $(x,y)_{(k)}^{(i + 1)}$ or $(x,y)_{(j)}^{(i)}$ is a suspected object. Figure 9 depicts the simplified flow of the differencing algorithm described above.

Fig. 9. Schematic of image differencing. The green icons represent suspect objects.

Figure 10 shows an example of image differencing. The stellar images in the red circle are the suspected objects after processing. The image differencing method significantly reduces the number of suspected objects, which improves the efficiency of subsequent trajectory association and also lowers its error rate. It is important to note that the brightness of the suspected objects is generally low, because they consist primarily of cosmic-ray hits, fainter stars, and real objects.
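The differencing flow can be condensed into a short sketch (our own simplification of Eqs. (14)-(16): centroids are plain (x, y) tuples and the per-axis comparison is done with a Chebyshev distance):

```python
def difference_frames(stars_i, stars_i1, vx=2.6169, dt=1.0, T=1.0):
    """Translate frame-i centroids by dx = vx*dt (Eqs. (14)-(15)) and
    return every centroid with no counterpart within T pixels in the
    other frame (Eq. (16)): the suspected objects."""
    dx = vx * dt
    shifted = [(x + dx, y) for x, y in stars_i]        # Eq. (15)
    def unmatched(src, ref):
        # a point is a star only if some counterpart lies within T
        return [p for p in src if all(
            max(abs(p[0] - q[0]), abs(p[1] - q[1])) > T for q in ref)]
    return unmatched(shifted, stars_i1) + unmatched(stars_i1, shifted)
```

The O(N²) comparison is fine for a few thousand centroids per frame; a k-d tree would be the natural replacement at larger scales.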

Fig. 10. The image processed by the image differencing method.

5.3 Trajectory association

The J2000.0 equatorial system is a geocentric celestial reference system (GCRS) commonly used in satellite orbital dynamics, which can accurately and quantitatively describe the apparent position of a satellite. In this inertial coordinate system, the satellite position is independent of both the Earth’s rotation and the observatory position. Therefore, the J2000.0 equatorial system can be employed for orbit determination. Through the use of a plate constant model, astronomical positioning [19] establishes a one-to-one correspondence between pixel coordinates and celestial coordinates. Star pattern matching is an essential part of astronomical positioning; it correlates the stars in the image with the stars in a catalog to solve for the plate constants. General subgraph matching [20] and pattern recognition [21] are the two fundamental techniques used in star pattern matching.

To predict the position of the object, the linear extrapolation approach is used, which has a very high accuracy, especially for the uniform linear motion. Based on the given two positions $({t_{i - 1}},{\alpha _{i - 1}},{\delta _{i - 1}})$ and $({t_i},{\alpha _i},{\delta _i})$, one can extrapolate the position at the time ti + 1 by this approach. The linear extrapolation formula is as follows.

$$\left\{ {\begin{array}{ll} {{\alpha_{i + 1}} = {\alpha_i} + \frac{{{t_{i + 1}} - {t_i}}}{{{t_i} - {t_{i - 1}}}}({\alpha_i} - {\alpha_{i - 1}})}\\ {{\delta_{i + 1}} = {\delta_i} + \frac{{{t_{i + 1}} - {t_i}}}{{{t_i} - {t_{i - 1}}}}({\delta_i} - {\delta_{i - 1}})} \end{array}} \right. .$$

The object trajectory is closely related to a range gate, in addition to being predicted by using linear extrapolation. In order to accomplish the proper association, the range gate should be chosen in a way that maximizes the likelihood that real measurements will enter the gate.

The range of the gate is closely related to the angular velocity of the object. As a result, the range gate can be constructed by using the angular velocity information. The constraint is expressed as

$$\left\{ {\begin{array}{l} {\frac{{{{f^{\prime}}_{i + 1}} - {f_i}}}{{{t_{i + 1}} - {t_i}}} \le {\omega_{\max }}}\\ {{\omega_{\max }} = \frac{1}{\rho }\sqrt {\mu /r} } \end{array}} \right., $$
where, fi denotes the associated celestial coordinates of an object and ${f^{\prime}_{i + 1}}$ denotes the candidate celestial coordinates of the object. For the object at an altitude of 36,000 km, its maximum angular velocity ${\omega _{\max }}$ is about 15′′/s, which is equal to the angular velocity of the Earth’s rotation.

If more than one candidate coordinates exist within the range gate, the one with the smallest distance from predicted value can be chosen as the trajectory based on the Euclidean distance.
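Prediction, gating, and nearest-candidate selection can be combined in a short sketch (our own arrangement of Eqs. (17)-(18); coordinates are in degrees, times in seconds, and the trace-point tuple layout is an assumption):

```python
import math

def predict(p0, p1, t2):
    """Linear extrapolation of a (t, ra, dec) trace point, Eq. (17)."""
    (t0, a0, d0), (t1, a1, d1) = p0, p1
    k = (t2 - t1) / (t1 - t0)
    return (t2, a1 + k * (a1 - a0), d1 + k * (d1 - d0))

def associate(p0, p1, candidates, t2, w_max_deg=15.0 / 3600.0):
    """Apply the angular-rate gate of Eq. (18) to candidate (ra, dec)
    pairs, then pick the one with the smallest Euclidean distance to the
    extrapolated position (None if no candidate passes the gate)."""
    _, ra_p, de_p = predict(p0, p1, t2)
    gated = [(a, d) for a, d in candidates
             if math.hypot(a - p1[1], d - p1[2]) / (t2 - p1[0]) <= w_max_deg]
    if not gated:
        return None
    return min(gated, key=lambda c: math.hypot(c[0] - ra_p, c[1] - de_p))
```

The default gate uses the 15″/s maximum angular velocity quoted in the text for GEO altitude.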

Only a stable trajectory (≥3 trace points) can be identified as a suspected object after linear extrapolation is used to establish the trajectory. The final confirmation of an object relies on orbit data, from which the International ID of the object is determined. There can be a significant gap in the trajectory due to interruptions brought on by the weather or an uncontrolled tumbling object. If linear extrapolation is still utilized, this may result in great deviation and may even lead to false detection.

We adopt the following processing strategy. The vector $\vec{f}^{\prime} = {f^{\prime}_n} - {f^{\prime}_{n - 1}}$ is first created using the two trace points ${f^{\prime}_{n - 1}}$ and ${f^{\prime}_n}$ at the end of trajectory 1. Second, the vector $\vec{f}^{\prime\prime} = {f^{\prime\prime}_2} - {f^{\prime\prime}_1}$ is built using the two trace points ${f^{\prime\prime}_1}$ and ${f^{\prime\prime}_2}$ at the start of trajectory 2. As a result, the angle $\varphi$ between the vector $\vec{f^{\prime}}$ and the vector $\vec{f^{\prime\prime}}$ is written as

$$\varphi = \arccos \left[ {\frac{{({{f^{\prime}}_n} - {{f^{\prime}}_{n - 1}}) \bullet ({{f^{\prime\prime}}_2} - {{f^{\prime\prime}}_1})}}{{|{{{f^{\prime}}_n} - {{f^{\prime}}_{n - 1}}} ||{{{f^{\prime\prime}}_2} - {{f^{\prime\prime}}_1}} |}}} \right]. $$

Then constraint conditions are

$$\varphi \le {\varphi _0}. $$

The setting of ${\varphi _0}$ is related to the motion state of an object; the more maneuverable the object is, the larger ${\varphi _0}$ should be set. Considering the observation error, ${\varphi _0} = {5^o}$ is chosen, which treats the object as moving linearly within this tolerance.

Several trajectories attributed to the same object can be linked together by calculating the vector angle between trajectories. Orbit improvement may be carried out after linking numerous trajectories.
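The angle test of Eqs. (19)-(20) amounts to a few lines (our sketch; trace points are (ra, dec) pairs in degrees and φ0 = 5° as chosen in the text):

```python
import math

def link_angle(traj1_tail, traj2_head):
    """Angle (deg) between the last segment of trajectory 1 and the
    first segment of trajectory 2, Eq. (19)."""
    (a1, d1), (a2, d2) = traj1_tail      # last two trace points of trajectory 1
    (b1, e1), (b2, e2) = traj2_head      # first two trace points of trajectory 2
    u, v = (a2 - a1, d2 - d1), (b2 - b1, e2 - e1)
    cos_phi = (u[0] * v[0] + u[1] * v[1]) / (math.hypot(*u) * math.hypot(*v))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_phi))))

def should_link(traj1_tail, traj2_head, phi0=5.0):
    """Eq. (20): link the two trajectories when phi <= phi0."""
    return link_angle(traj1_tail, traj2_head) <= phi0
```

Two collinear segments link even across a long gap, while a perpendicular pair is rejected regardless of proximity.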

6. Experiments and analysis

Using the Dragonfly Telescope, experimental observations and measurements of the area of the GEO belt over the Changchun Observatory were undertaken in September 2022. The longitude of the subsatellite point ranges from 80° to 170°, with a latitude coverage of −16° to 16°.

6.1 Statistics of observation data

In order to assess the accuracy of the object detection, correlation between the orbit data and the observation data is performed. The processing flow is to propagate TLE (two-line element set) data from a satellite database to the theoretical position of an object at the moment of observation by using a simplified perturbations model, and then compare it with the actual observed position. The correlation is considered successful when the difference between the two is less than a specified threshold, and the Satellite Catalog Number is then determined.
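The pass/fail decision of this correlation step can be sketched as an angular-separation test (our illustration: the 0.1° threshold is an assumed value, and in practice the predicted position would come from an SGP4-style propagator such as the `sgp4` package):

```python
import math

def correlated(pred, obs, threshold_deg=0.1):
    """True when the angular separation between a predicted and an
    observed (ra, dec) position, both in degrees, is below threshold."""
    ra1, de1 = map(math.radians, pred)
    ra2, de2 = map(math.radians, obs)
    # spherical angular separation between the two directions
    cos_s = (math.sin(de1) * math.sin(de2)
             + math.cos(de1) * math.cos(de2) * math.cos(ra1 - ra2))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_s)))) <= threshold_deg
```

A trajectory whose points all correlate with the same catalog entry inherits that entry's Satellite Catalog Number.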

The statistics for the object detection accuracy after the trial period are shown in Table 2. Symbol ① represents the number of trajectories. Symbol ② represents the number of trajectories successfully associated with an object in the database. Symbol ③ indicates the detection accuracy rate (i.e., the percentage of associated trajectories among all trajectories). Symbol ④ denotes the number of objects for which the Satellite Catalog Number has been determined. The overall accuracy rate of the algorithm is greater than 90%. There are two possible reasons why the remaining data fail to correlate. The first is that the trace points in a trajectory are too few to successfully match a cataloged object (short-arc orbit determination). The second is that the trajectory belongs to an uncatalogued object. As a result, a complete TLE database is a prerequisite for object correlation. The likelihood that a trajectory will be correlated increases with the number of objects in the database. On average, more than 580 space objects can be detected per observation night, with the vast majority being GEO objects.

Table 2. Statistics of observation data

6.2 Trajectory in different coordinate systems

As shown in Fig. 11, the pixel and celestial coordinates of the object are almost in an arithmetic sequence on continuous images, both of which can be used as the basis for the object detection.

Fig. 11. The coordinate sequence of the Norad ID 22608. (a) The x coordinate sequence. (b) The y coordinate sequence. (c) The right ascension coordinate sequence. (d) The declination coordinate sequence.

As shown in Fig. 12, under the pixel coordinate system the motion characteristics of the object are not obvious, so false detections occur easily. In contrast, under the celestial coordinate system, especially along the direction of right ascension, the object moves in a straight line, and it is simple to associate its trace points.

Fig. 12. The coordinate sequence of the Norad ID 22963. (a) The x coordinate sequence. (b) The y coordinate sequence. (c) The right ascension coordinate sequence. (d) The declination coordinate sequence.

The Norad ID 24209 is almost at rest relative to the Earth’s surface due to its small orbital inclination. As seen in Fig. 13, it is almost stationary in the pixel coordinate system. Therefore, it is difficult to detect objects with small orbit inclination in this coordinate system. In contrast, under the celestial coordinate system, especially along the direction of right ascension, the object remains in a uniform linear motion, and it is easy to associate its trace points together.

Fig. 13. The coordinate sequence of the Norad ID 24209. (a) The x coordinate sequence. (b) The y coordinate sequence. (c) The right ascension coordinate sequence. (d) The declination coordinate sequence.

The Norad ID 34780 is a rocket body with an uncontrolled attitude. As a result, its light curve changes and its trajectory may be incomplete, as shown in Fig. 14. Under the celestial coordinate system, especially along the direction of right ascension, the slopes of its two trajectories are basically the same. Therefore, it is easy to associate the two trajectories of the same object by using the angle between two vectors. However, the slopes of the two trajectories do not stay the same under the pixel coordinate system, as evidenced by the angle between the two vectors exceeding 5°. Therefore, the two trajectories cannot be associated. If the angular threshold is enlarged, false associations are easily created.

Fig. 14. The coordinate sequence of the Norad ID 34780. (a) The x coordinate sequence. (b) The y coordinate sequence. (c) The right ascension coordinate sequence. (d) The declination coordinate sequence.

The aforementioned four objects maintain uniform linear motion along the direction of right ascension because they are located in the GEO belt and their motion is prograde with the Earth's rotation. In summary, since the J2000.0 equatorial system can accurately describe the apparent position of an object, it is more precise to detect its motion trajectory in this coordinate system. Conversely, the pixel coordinate system does not accurately reflect the position of the object because it has no physical meaning. Neither trace points nor trajectories can be accurately associated in this coordinate system, which reduces the object detection rate. The above experiments demonstrate once again that a GEO object can be regarded as being in uniform linear motion for a brief length of time.

7. Conclusion

To improve the search, tracking and cataloging capability for GEO objects, the Changchun Observatory developed the 280 mm wide-field optical telescope array. In this paper, an object detection method combining image differencing and trajectory association is used to perform experiments on one month of observations collected by this telescope array. The experimental results show that the method is able to automatically and accurately detect multiple objects in a wide FOV.

Because of the great distance between GEO objects and the Earth, the vast majority of them appear faint and small in images. Figure 10 demonstrates this point as well. We can raise the detection rate by further improving the image processing algorithm to extract more faint objects.

As the FOV of a telescope is further increased, the plate constant model becomes less accurate in converting the pixel coordinates to the celestial coordinates, which affects subsequent trajectory association. As a result, we can employ the split-field method to divide a wide FOV into several small FOVs, and then use the plate constant model to convert coordinate systems. The accuracy can be ensured in this manner.

Funding

National Natural Science Foundation of China (11903051, 12273080).

Acknowledgments

We would like to express our sincere gratitude for the support and help from the technical team at the Jilin Observatory. The presented results would not be so complete and important without the invaluable contribution of the technical team operating the 280 mm wide-field optical telescope array.

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.


Figures (14)

Fig. 1. 280 mm wide-field optical telescope array. (a) The appearance. (b) The 3D structure.
Fig. 2. Motion characteristics of a space object.
Fig. 3. The maximum and minimum of angular velocity.
Fig. 4. The maximum of angular acceleration.
Fig. 5. The maximum ratio of angular acceleration to angular velocity.
Fig. 6. Schematic illustration of the observation strategy for the GEO belt.
Fig. 7. Theoretical positions of multiple GEO objects. Specifically, these positions are predicted ephemerides derived from TLE data, in which the position is given at an interval of 1 s. (a) The direction of right ascension. (b) The direction of declination.
Fig. 8. Image segmentation effect. (a) The original image. (b) The segmented image.
Fig. 9. Schematic of image differencing. The green icons represent suspect objects.
Fig. 10. The image processed by the image differencing method.
Fig. 11. The coordinate sequence of NORAD ID 22608. (a) The x coordinate sequence. (b) The y coordinate sequence. (c) The right ascension coordinate sequence. (d) The declination coordinate sequence.
Fig. 12. The coordinate sequence of NORAD ID 22963. (a) The x coordinate sequence. (b) The y coordinate sequence. (c) The right ascension coordinate sequence. (d) The declination coordinate sequence.
Fig. 13. The coordinate sequence of NORAD ID 24209. (a) The x coordinate sequence. (b) The y coordinate sequence. (c) The right ascension coordinate sequence. (d) The declination coordinate sequence.
Fig. 14. The coordinate sequence of NORAD ID 34780. (a) The x coordinate sequence. (b) The y coordinate sequence. (c) The right ascension coordinate sequence. (d) The declination coordinate sequence.

Tables (2)

Table 1. The important parameters of the Dragonfly Telescope
Table 2. Statistics of observation data

Equations (20)

(1) \(\begin{cases} h + z = 90^\circ \\ z = \theta + \varphi \end{cases}\)

(2) \(v = \sqrt{\mu / r}\)

(3) \(\dfrac{R_e}{\sin\varphi} = \dfrac{r}{\sin(180^\circ - z)}\)

(4) \(\theta = 90^\circ - h - \arcsin\!\left(\dfrac{R_e}{r}\cos h\right)\)

(5) \(\rho = \sqrt{r^2 + R_e^2 - 2 r R_e \cos\theta}\)

(6) \(\omega_{\min} = \dfrac{\rho^2 + r^2 - R_e^2}{2\rho^2 r}\sqrt{\mu / r}\)

(7) \(a_{\max} = \dfrac{(r^2 - R_e^2)\, R_e\, \mu \sin\theta}{\rho^4 r^2}\)

(8) \(\omega_{\max} = \dfrac{1}{\rho}\sqrt{\mu / r}\)

(9) \(a_{\min} = 0\)

(10) \(\mu_i = \dfrac{\sum_{y=1}^{n}\sum_{x=1}^{m} I(x, y, i)}{m \times n}\)

(11) \(\sigma_i = \sqrt{\dfrac{\sum_{y=1}^{n}\sum_{x=1}^{m} \left(I(x, y, i) - \mu_i\right)^2}{m \times n}}\)

(12) \(T_i = \mu_i + C \times \sigma_i\)

(13) \(\begin{cases} v_x = v / s_x = 2.6169\ \text{pixels/s} \\ v_y = 0 \end{cases}\)

(14) \(d_x = v_x (t_{i+1} - t_i)\)

(15) \(\begin{cases} x_k^{i+1} = x_j^{i} + d_x \\ y_k^{i+1} = y_j^{i} \end{cases}\)

(16) \(D(k, j) = \begin{cases} 0, & \left|(x, y)_k^{(i+1)} - (x, y)_j^{(i)}\right| \le T \\ 1, & \left|(x, y)_k^{(i+1)} - (x, y)_j^{(i)}\right| > T \end{cases}\)

(17) \(\begin{cases} \alpha_{i+1} = \alpha_i + \dfrac{t_{i+1} - t_i}{t_i - t_{i-1}}(\alpha_i - \alpha_{i-1}) \\ \delta_{i+1} = \delta_i + \dfrac{t_{i+1} - t_i}{t_i - t_{i-1}}(\delta_i - \delta_{i-1}) \end{cases}\)

(18) \(\begin{cases} \dfrac{|f_{i+1} - f_i|}{t_{i+1} - t_i} \le \omega_{\max} \\ \omega_{\max} = \dfrac{1}{\rho}\sqrt{\mu / r} \end{cases}\)

(19) \(\varphi = \arccos\!\left[\dfrac{(f_n - f_{n-1}) \cdot (f_2 - f_1)}{|f_n - f_{n-1}|\,|f_2 - f_1|}\right]\)

(20) \(\varphi \le \varphi_0\)
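The angular-velocity and angular-acceleration bounds above can be checked numerically. The sketch below, with assumed values for μ, R_e, and the GEO orbital radius r, evaluates ω_min, ω_max, and a_max at a given elevation h; at zenith (θ → 0) the two velocity bounds coincide and a_max vanishes, as the formulas require.

```python
import math

MU = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2 (assumed value)
RE = 6.378137e6       # Earth's equatorial radius, m (assumed value)
R_GEO = 4.2164e7      # GEO orbital radius, m (assumed value)

def geo_angular_rates(h_deg, r=R_GEO):
    """Topocentric angular-rate bounds for a GEO object at elevation h.

    theta is the geocentric angle between station and object, rho the
    slant range by the law of cosines, and the bounds follow from the
    circular orbital speed v = sqrt(mu / r).
    Returns (w_min, w_max, a_max) in rad/s and rad/s^2.
    """
    h = math.radians(h_deg)
    # geocentric angle: theta = 90 deg - h - arcsin((Re / r) * cos h)
    theta = math.pi / 2 - h - math.asin(RE / r * math.cos(h))
    # slant range: rho = sqrt(r^2 + Re^2 - 2 r Re cos theta)
    rho = math.sqrt(r**2 + RE**2 - 2 * r * RE * math.cos(theta))
    v = math.sqrt(MU / r)                                # orbital speed
    w_min = (rho**2 + r**2 - RE**2) / (2 * rho**2 * r) * v
    w_max = v / rho
    a_max = (r**2 - RE**2) * RE * MU * math.sin(theta) / (rho**4 * r**2)
    return w_min, w_max, a_max
```

A quick consistency check: for h = 90° the station, Earth's center, and the object are collinear, so ρ = r − R_e and the ω_min expression algebraically reduces to ω_max, while sin θ = 0 drives a_max to zero.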