Optica Publishing Group

Attitude-correlated frames adding approach to improve signal-to-noise ratio of star image for star tracker

Open Access

Abstract

When applied inside Earth's atmosphere, the star tracker is sensitive to the sky background produced by atmospheric scattering and stray light. The shot noise induced by the strong background reduces the star detection capability and can even put the tracker completely out of operation. To improve the star detection capability, an attitude-correlated frames adding (ACFA) approach is proposed in this paper. First, the attitude changes of the star tracker are measured by three gyroscope units (GUs). Then the mathematical relationship between the image coordinates at different times and the attitude changes of the star tracker is constructed (namely, the attitude-correlated transformation, ACT). Using the ACT, the image regions in different frames that correspond to the same star can be extracted and added to the current frame. After attitude-correlated frames adding, the intensity of the star signal increases to n times that of a single frame, while the shot noise increases by only √n/2 to √n times due to its stochastic characteristics. Consequently, the signal-to-noise ratio (SNR) of the star image is enhanced by a factor of √n to 2√n. Simulation and experimental results indicate that the proposed method can effectively improve the star detection ability, so more dim stars are detected and used for attitude determination. In addition, the star centroiding error induced by the background noise is also reduced.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

A star tracker is an avionics instrument that provides the absolute three-axis attitude of a spacecraft utilizing star observations [1, 2]. First-generation star trackers are characterized by outputting the positions of a few bright stars in sensor-referenced coordinates and require external processing [3]. Their star observation for navigation purposes is performed by aiming the stellar INS optoelectronic tools at the brightest stars and holding them long-term using gyrostabilized platforms [4].

The present second-generation star tracker, also called a star imager, performs this task by pattern recognition of the star constellations in the field of view (FOV) using a star catalog covering the entire firmament [5]. It offers smoother and more robust operation, lower cost, and higher accuracy than first-generation star trackers, and it has become the preferred autonomous navigator onboard most spacecraft [6–9]. However, when used inside the atmosphere, it is susceptible to the sky background produced by atmospheric scattering and stray light, and the star image has an extremely low SNR. This dramatic SNR decline reduces the detection capability and puts the tracker out of operation during daytime [10–13]. Therefore, it is crucial to improve the SNR of star images and to overcome the obstacle of daytime star detection under strong noise in order to adapt the star tracker to near-ground conditions [14–16].

Selecting an image sensor whose spectral response lies in the short-wave infrared band is the mainstream solution for daytime star trackers [17]. It achieves a higher SNR than a camera detecting visible light because the background radiation in the infrared spectral region is much weaker than that in the visible region. To further enhance star detection, there are three ways to improve the SNR of infrared star images. The first is to improve the optical system by using a lens with a large aperture and a long focal length, which inevitably increases the cost and size of the system. For example, the daytime star tracker developed by Trex Enterprise Corp. adopts three optical lenses of 200 mm aperture and 0.5° × 0.4° FOV to observe stars in the H-band, enabling the detection of stars of magnitude 6.8 at sea level during daytime; the total weight of the assembly is up to 100 pounds [18,19]. Extending the integration time is the second way, because the star signal increases faster than the noise with integration time [20]. The main problems are that the pixels saturate easily under strong background radiation and that a long exposure time blurs the star image under dynamic conditions. Thirdly, Mark O'Malley proposed a frame-adding approach as an alternative to long integration time to improve the SNR for low-light detection [21]. By shortening the exposure time of each single-frame image and adding the frames directly, it effectively prevents saturation of the detector under strong noise and works well for image enhancement of static targets. However, under dynamic conditions, the star shifts on the image with the angular motion of the vehicle [22,23]. Adding the star images directly results in motion blurring and deterioration of the star centroiding accuracy [24,25].

To solve the problems of the third way and adapt it to dynamic conditions, a star-image adding method based on the attitude-correlated frame (ACF) is proposed in this paper. In the ACF method [26], the star image frames measured at different times are correlated through the angular motion of the star tracker, calculated from the angular rates sensed by the strap-down GUs. The translation and rotation transformations between different frames (namely, the correlated transformation) are then conducted using the gyro-measured attitude changes. By correlating and adding successive image frames, the SNR of the star image is enhanced while the background noise is suppressed under dynamic conditions, so the star detection capability can be improved.

The paper is organized as follows. The ACFA approach is depicted in detail in Section 2. Simulation results are presented in Section 3. In Section 4, the experimental result is given to verify the approach. Finally, the conclusion is drawn in Section 5.

2. Basic theory

2.1. Frame adding to improve SNR under static conditions

For optical sensors, shot noise occurs due to random fluctuations (well approximated by a Gaussian distribution under strong illumination) in the number of photons detected, and its standard deviation is proportional to the square root of the average intensity. During the daytime, the performance of the focal plane array (FPA) is limited by the shot noise induced by strong sky background radiation. The SNR of the image can be enhanced by increasing the integration time; however, the pixels saturate easily under strong radiation. Frame adding is a viable way to improve the SNR with successive short-integration-time images while keeping them from saturation.

Considering a single frame obtained for low light detection, the SNR is given by [21]:

$$\mathrm{SNR} = \frac{\bar{s}_i}{\sqrt{N_\tau^2 + 2N_0^2}}, \tag{1}$$
where $\bar{s}_i$ is an estimate of the mean input signal, $N_\tau$ is the integration-dependent noise including shot noise, dark current noise, and transfer noise, and $N_0$ is the output noise, such as readout, amplifier, and quantization noise. It should be noted that shot noise is the main noise source under a strong sky background.

Consider n frames taken one after another and added together. The mean input signal increases to $n\bar{s}_i$. Assuming the mean-square fluctuations add, the noise for n frames added together becomes $\sqrt{n(N_\tau^2 + 2N_0^2)}$. Therefore, the SNR for frame adding is given by [21]:

$$\mathrm{SNR}_{add} = \frac{n\bar{s}_i}{\sqrt{n(N_\tau^2 + 2N_0^2)}} = \sqrt{n}\,\mathrm{SNR}. \tag{2}$$

There is an obvious SNR improvement ($\sqrt{n}$) for n added frames over a single frame. Theoretically, adding successive star images can improve the SNR of the image; in practice, however, the attitude of the vehicle is constantly changing, causing a given star to shift across different images. Consequently, if the images are added directly, the energy of a given star is no longer distributed within a small area. The frame-adding method for improving SNR is thus restricted to target signals that are static on the image, and it must be modified before being applied to dynamic conditions.
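As a quick numerical sanity check of Eq. (2), the $\sqrt{n}$ scaling can be reproduced by adding simulated static frames. The sketch below is an illustration, not the paper's code; the signal and noise levels are arbitrary assumptions.

```python
import numpy as np

# Illustrative parameters (assumed, arbitrary units).
rng = np.random.default_rng(0)
n = 25                 # number of frames added
sigma = 10.0           # per-frame noise standard deviation
signal = 2.0           # mean star signal per frame (buried in the noise)

# n static frames: constant signal plus zero-mean Gaussian noise.
frames = signal + sigma * rng.standard_normal((n, 256, 256))
added = frames.sum(axis=0)

# The signal grows as n, the noise only as sqrt(n) -> SNR gain of sqrt(n).
noise_added = added.std()
snr_gain = (n * signal / noise_added) / (signal / sigma)
print(round(noise_added / sigma, 1), round(snr_gain, 1))  # both close to sqrt(25) = 5
```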

2.2. Attitude-correlated transformation process

Under dynamic conditions, the attitude of the camera changes with the vehicle; therefore, the position of a given star on the focal plane array (FPA) shifts with time, as shown in Fig. 1. Adding dynamic frames directly results in motion blurring of the star image and deterioration of the star centroiding accuracy. The attitude-correlated frames adding (ACFA) approach is proposed to solve this problem. Considering n successive frames, to make the input signal after addition increase to n times that of a single frame, the pixels of a given star on different images should be added. As shown in Fig. 1(b), for a certain point on the celestial sphere, its coordinate on the FPA of the star tracker is (uj, vj) at tj and (uk, vk) at tk. Because of the attitude change of the star tracker from tk to tj, there is a correlated transformation between (uj, vj) and (uk, vk). The GUs information is used to calculate the attitude change between the two moments and to eliminate its impact on the star coordinates on the FPA. By transforming the coordinates at tk to find the star's position at other moments, the corresponding pixels can be added. In this way, the star radiation can be regarded as a static signal on the image, and adding successive frames after the attitude-correlated transformation improves the SNR of the star image and the star detection capability under dynamic conditions.

Fig. 1 Two successive frames taken by the star tracker at different moments. (a) The attitude of the star tracker changes with time in the inertial coordinate system. (b) The same 4 stars appear on two images taken at tk and tj, respectively. ST refers to the star tracker, and GUs refers to the gyroscope units.

The coordinate systems used in this paper are defined as follows:

The Earth-centered inertial coordinate system, which is fixed in inertial space, is represented by the i-frame. Its Xi axis lies in the equatorial plane and points to the vernal equinox. The Zi axis is aligned with the Earth's rotation axis and perpendicular to the equatorial plane. The Yi axis completes a right-handed frame with the other two axes. The star tracker coordinate system (s-frame) has its origin at the detector center; the Xs and Ys axes are parallel to a row and a column of the detector, respectively, and the three axes satisfy the right-hand rule. The strap-down GUs system, represented by the b-frame, has its Xb, Yb, and Zb axes aligned with the three mutually orthogonal sensitive axes of the GUs. It is rigidly mounted to the star tracker platform.

The vector of a certain star in the i-frame is $r^i$; it can be transformed to the s-frame given the attitude matrix of the star tracker at times tk and tj, respectively:

$$\begin{cases} r_k^s = [x_k^s \;\; y_k^s \;\; z_k^s]^T = C_i^s(k)\, r^i \\ r_j^s = [x_j^s \;\; y_j^s \;\; z_j^s]^T = C_i^s(j)\, r^i, \end{cases} \tag{3}$$
where $C_i^s$ is calculated from the following equation with the installation matrix (the transformation matrix from the b-frame to the s-frame) $C_b^s$ and the attitude matrix of the GUs $C_i^b$:
$$C_i^s(k) = C_b^s C_i^b(k), \qquad C_i^s(j) = C_b^s C_i^b(j). \tag{4}$$

It can be deduced that:

$$C_{s(j)}^{s(k)} = C_b^s\, C_{b(j)}^{b(k)}\, (C_b^s)^T, \tag{5}$$
where $(C_b^s)^T$ is the transposed matrix of $C_b^s$, and $C_{b(j)}^{b(k)}$ can be calculated from the GUs information [27]:
$$C_{b(j)}^{b(k)} = (I + [\Phi_j \times]) \cdots (I + [\Phi_{k-2} \times])(I + [\Phi_{k-1} \times]), \tag{6}$$
where [Φ×] is the skew-symmetric matrix of the angle increment:
$$[\Phi\times] = \begin{bmatrix} 0 & -\phi_z & \phi_y \\ \phi_z & 0 & -\phi_x \\ -\phi_y & \phi_x & 0 \end{bmatrix}. \tag{7}$$

For a sampling time of Δt, the angle increment can be calculated using the angular rate of the GUs in the b-frame, $\omega_{ib}^b$:

$$\Phi = \omega_{ib}^b \Delta t = [\phi_x \;\; \phi_y \;\; \phi_z]^T. \tag{8}$$
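The chain from gyro samples to the ACM (Eqs. (5)–(8)) can be sketched as follows. This is a minimal illustration, not the authors' implementation; the function names and the first-order small-angle DCM update are assumptions.

```python
import numpy as np

def skew(phi):
    """Skew-symmetric matrix [Phi x] of an angle-increment vector (Eq. (7))."""
    x, y, z = phi
    return np.array([[0.0,  -z,   y],
                     [  z, 0.0,  -x],
                     [ -y,   x, 0.0]])

def attitude_correlated_matrix(rates, dt, C_bs):
    """ACM C_{s(j)}^{s(k)} from gyro angular rates sampled between t_j and t_k
    (Eqs. (5)-(8)). rates: iterable of 3-vectors (rad/s) in the b-frame;
    dt: gyro sampling interval; C_bs: installation matrix."""
    C = np.eye(3)
    for w in rates:                 # chain small-angle DCM updates, Eq. (6)
        C = C @ (np.eye(3) + skew(np.asarray(w) * dt))
    return C_bs @ C @ C_bs.T       # Eq. (5)
```

With zero rates the ACM reduces to the identity; with a constant rate about one axis it approximates the corresponding single-axis rotation.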

The vector of a certain star in the i-frame is fixed over time. Thus, the vectors of the star in the s-frame at tk and tj have the following linear transformation relationship:

$$r_k^s = C_{s(j)}^{s(k)} r_j^s. \tag{9}$$

At two different moments, the vectors of a certain star, $r_k^s$ and $r_j^s$, become correlated through $C_{s(j)}^{s(k)}$, so $C_{s(j)}^{s(k)}$ is defined as the attitude-correlated matrix (ACM) from tk to tj. Letting $T_{mn}$ denote the entry in row m and column n of $C_{s(j)}^{s(k)}$, Eq. (9) can be rewritten as:

$$\begin{cases} x_k^s = T_{11}x_j^s + T_{12}y_j^s + T_{13}z_j^s \\ y_k^s = T_{21}x_j^s + T_{22}y_j^s + T_{23}z_j^s \\ z_k^s = T_{31}x_j^s + T_{32}y_j^s + T_{33}z_j^s. \end{cases} \tag{10}$$

The ideal imaging of a star tracker can be simplified to a pinhole model [28]. If the principal point coordinates (u0, v0), the focal length f, and the aberration terms δu, δv of the star tracker are known, the relationship between the vector of a star in the s-frame and its image coordinates is given by:

$$\begin{cases} u = u_0 + f\dfrac{x^s}{z^s} + \delta_u \\[4pt] v = v_0 + f\dfrac{y^s}{z^s} + \delta_v. \end{cases} \tag{11}$$
(uj, vj) is the coordinate on the FPA in frame #j, and (uk, vk) is the coordinate on the FPA in frame #k:
$$\begin{cases} u_j = u_0 + f\dfrac{x_j^s}{z_j^s} + \delta_u \\[4pt] v_j = v_0 + f\dfrac{y_j^s}{z_j^s} + \delta_v, \end{cases} \tag{12}$$
$$\begin{cases} u_k = u_0 + f\dfrac{x_k^s}{z_k^s} + \delta_u \\[4pt] v_k = v_0 + f\dfrac{y_k^s}{z_k^s} + \delta_v, \end{cases} \tag{13}$$
Since (uk, vk) is the coordinate corresponding to (uj, vj) at tk under dynamic conditions, Eq. (9) can be rewritten as:
$$\begin{cases} u_k = f\,\dfrac{T_{11}(u_j - u_0 - \delta_u) + T_{12}(v_j - v_0 - \delta_v) + T_{13}f}{T_{31}(u_j - u_0 - \delta_u) + T_{32}(v_j - v_0 - \delta_v) + T_{33}f} + u_0 + \delta_u \\[8pt] v_k = f\,\dfrac{T_{21}(u_j - u_0 - \delta_u) + T_{22}(v_j - v_0 - \delta_v) + T_{23}f}{T_{31}(u_j - u_0 - \delta_u) + T_{32}(v_j - v_0 - \delta_v) + T_{33}f} + v_0 + \delta_v. \end{cases} \tag{14}$$

Each pixel of frame #j is transformed through Eq. (14) to find the coordinates of the corresponding pixel on the FPA in frame #k.
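The pixel mapping of Eq. (14) can be sketched as below. This is an illustration rather than the paper's code; the aberration terms are assumed zero by default, and `T` is the ACM.

```python
import numpy as np

def transform_pixel(uj, vj, T, u0, v0, f, du=0.0, dv=0.0):
    """Map a frame-j pixel (uj, vj) to its frame-k position (uk, vk) through
    the ACM T (3x3), following Eq. (14). (u0, v0) is the principal point, f
    the focal length in pixels; du, dv are the aberration terms."""
    x = uj - u0 - du
    y = vj - v0 - dv
    denom = T[2, 0] * x + T[2, 1] * y + T[2, 2] * f
    uk = u0 + f * (T[0, 0] * x + T[0, 1] * y + T[0, 2] * f) / denom + du
    vk = v0 + f * (T[1, 0] * x + T[1, 1] * y + T[1, 2] * f) / denom + dv
    return uk, vk
```

With T equal to the identity (no attitude change), each pixel maps to itself, which is a convenient correctness check.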

For a moment tn at which the attitude is to be measured, the former n − 1 successive star images are correlated to the nth image by the attitude-correlated matrices before being added. The exact number of correlated images should be chosen considering other factors such as the dynamic condition of the vehicle, the integration time, the frame rate of the star tracker, and the SNR threshold for star detection.

2.3. Area-weighted averaging adding method

For static star images, there is a one-to-one correspondence of pixels between different frames. In reality, however, integer coordinates become fractional after transformation, so the pixels of the correlated images do not overlap perfectly. Considering that each row and column has hundreds of pixels and the entire image is added, transformed pixels located at the edge of the image can be ignored. A transformed pixel of the jth image (j = 1, 2, ..., n − 1) lies within a window of the nth image: the transformed pixel centered at A0 has overlapping areas with four pixels of the nth image centered at A1, A2, A3, and A4, respectively, as shown in Fig. 2.

Fig. 2 Coordinates of a transformed pixel: the coordinate of A0 is an integer between 1 and 512 in the jth image, but it becomes fractional after transformation and has overlapping areas with four pixels of the nth image.

The gray value of pixel A0 is divided into four parts according to the overlapping areas S and then added to the four pixels A1, A2, A3, and A4 of the nth image by the area-weighted averaging adding method, which is given by:

$$\begin{cases} I(A_1) = I_n(A_1) + S_1 I_j(A_0) \\ I(A_2) = I_n(A_2) + S_2 I_j(A_0) \\ I(A_3) = I_n(A_3) + S_3 I_j(A_0) \\ I(A_4) = I_n(A_4) + S_4 I_j(A_0), \end{cases} \tag{15}$$
where $I_n$, $I_j$, and $I$ are the gray-value functions of the image at $t_n$, at $t_j$ (j = 1, 2, ..., n − 1), and after addition, respectively.
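The area-weighted splatting of Eq. (15) can be sketched as below. This is a straightforward, unoptimized illustration; the hypothetical `map_uv` callback stands in for the ACT of Eq. (14).

```python
import numpy as np

def area_weighted_add(acc, frame_j, map_uv):
    """Splat each pixel of frame_j onto the accumulator acc (the nth frame)
    with bilinear area weights, per Eq. (15). map_uv(r, c) returns the
    transformed (fractional) coordinates of pixel (r, c)."""
    H, W = acc.shape
    for r in range(frame_j.shape[0]):
        for c in range(frame_j.shape[1]):
            u, v = map_uv(r, c)
            r0, c0 = int(np.floor(u)), int(np.floor(v))
            fr, fc = u - r0, v - c0
            # Overlap areas S1..S4 of the shifted pixel with its 4 neighbours.
            for dr, dc, w in ((0, 0, (1 - fr) * (1 - fc)),
                              (0, 1, (1 - fr) * fc),
                              (1, 0, fr * (1 - fc)),
                              (1, 1, fr * fc)):
                if 0 <= r0 + dr < H and 0 <= c0 + dc < W:
                    acc[r0 + dr, c0 + dc] += w * frame_j[r, c]
    return acc
```

Because the four weights sum to one, the total gray value of each transformed pixel is conserved.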

Because the area-weighted averaging adding method divides the original pixel into four parts, its standard deviation is split linearly as well. When the images are added after this division, it is the variances of the frames that superimpose linearly. Let σ denote the standard deviation of the image noise in a single frame. The noise standard deviation of the correlated image is then √n/2 to √n times that of a single frame (the derivation is elaborated in the Appendix):

$$\frac{\sqrt{n}}{2}\sigma \le \sigma_{add} \le \sqrt{n}\,\sigma. \tag{16}$$

The star signal is still proportional to the number of correlated frames n. Therefore, the added SNR satisfies the following inequality:

$$\sqrt{n}\,\mathrm{SNR} \le \mathrm{SNR}_{add} \le 2\sqrt{n}\,\mathrm{SNR}. \tag{17}$$

The area-weighted averaging method solves the problem that the pixels of correlated images do not overlap perfectly, and at the same time keeps the noise standard deviation after addition no greater than √n times that of a single frame.

The flowchart in Fig. 3 depicts the procedure of the ACFA approach. The system consists of a star tracker and GUs rigidly fixed together. First, the star tracker takes successive star images while the GUs output the angular rates used to calculate the ACM. Then the mathematical relationship between the image coordinates at different times and the ACM is constructed. Using the ACT, the image regions in different frames that correspond to the same star are correlated to the current frame. Next, the area-weighted averaging method is adopted to add the correlated frames. Finally, the added frame with increased SNR is used for star extraction and identification.

Fig. 3 Flow chart of the ACFA approach.

3. Simulation

3.1. Simulation parameters

The star images are generated by the star tracker simulator given the intrinsic parameters and attitude of the star tracker. The performance parameters of the typical star tracker and GUs used for this simulation are listed in Table 1:

Table 1. Simulation parameters

The three-axis attitude of the star tracker in i-frame is shown in Fig. 4. The GUs data is generated based on the given attitude of the star tracker at a sample rate of 50 Hz.

Fig. 4 Twelve seconds of the three-axis attitude of the vehicle in the i-frame.

The feasibility of the proposed method is validated by the calculated SNR of the star images and the star centroiding error. The gray values of the image pixels reflect the level of star signal and noise charges and are used to calculate the SNR. First, a 50 × 50 pixel sub-image containing the navigation star is selected, and the mean gray value is subtracted from each pixel in the sub-image. Then a 5 × 5 pixel window containing the star is extracted, and its mean gray value is taken as the signal intensity. Next, the standard deviation of the remaining pixels of the sub-image (excluding the 5 × 5 window) is taken as the noise intensity. Finally, the SNR equals the signal intensity divided by the noise intensity.
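The SNR-estimation procedure above can be sketched as follows. This is an illustration under the stated window sizes; for simplicity the star is assumed to sit at the center of the sub-image.

```python
import numpy as np

def estimate_snr(image, star_rc, sub=50, win=5):
    """SNR estimate following Section 3.1: background-subtracted mean of a
    win x win star window divided by the std of the remaining pixels of a
    sub x sub neighbourhood centered on the star at (row, col) = star_rc."""
    r, c = star_rc
    h = sub // 2
    patch = image[r - h:r + h, c - h:c + h].astype(float)
    patch -= patch.mean()                    # remove the mean background
    hw = win // 2
    rc = h                                   # star sits at the patch centre
    window = patch[rc - hw:rc + hw + 1, rc - hw:rc + hw + 1]
    signal = window.mean()                   # signal intensity
    mask = np.ones_like(patch, bool)
    mask[rc - hw:rc + hw + 1, rc - hw:rc + hw + 1] = False
    noise = patch[mask].std()                # noise intensity (window excluded)
    return signal / noise
```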

3.2. Adding result of correlated images

Using the approach proposed in Section 2, the simulated successive star images are correlated and added. As seen in Fig. 5(a), the star signal is overwhelmed by strong noise in a single frame. When 60 frames are correlated and added, as in Fig. 5(b), the star signal shows an obvious improvement over the noise and is easier to detect. Moreover, the star point after addition still keeps its shape.

Fig. 5 The same region of a single image and the correlated image: (a) The SNR of a single frame is too low to see the dim star. (b) The star signal becomes obvious when 60 frames are correlated and added.

Then the star signal intensity and the noise intensity of the correlated image are calculated. The growth of the signal of a certain star and of the noise with respect to the number of correlated frames is shown in Fig. 6. The star signal intensity in the window is proportional to the number of correlated frames, which indicates that the star point energy remains concentrated within a small region dominated by that star. The noise after correlation and addition is between √n/2 and √n times that of a single frame, which verifies the result of Eq. (16). The SNR of the star image obtained by the ACFA approach, shown in Fig. 7(a), is between √n and 2√n times that of a single frame, which agrees with Eq. (17).

Fig. 6 The star signal and noise growth of correlated images. (a) The star signal is proportional to the number of added frames. (b) The noise is between √n/2 and √n times that of a single frame.

Fig. 7 SNR growth and centroiding accuracy estimation: (a) The SNR is between √n and 2√n times that of a single frame. (b) The centroiding error of the navigation star with respect to the number of correlated frames. δx and δy represent the centroiding errors along the x- and y-axes of the image coordinate system, and δr represents the centroiding error in the radial direction.

The star centroids are subsequently determined and compared with the preset true values. As shown in Fig. 7(b), the centroiding error is always less than 0.1 pixel, declines with the number of correlated frames, and gradually approaches the limit set by systematic errors. Therefore, the ACFA approach improves the SNR of the star image, making dim stars detectable, while preserving the shape of the stars, which guarantees the centroiding accuracy.

3.3. Influence of attitude-correlated matrix error

The accuracy of the attitude-correlated matrix (calculated by Eq. (5)) is one of the main factors affecting the adding result of the ACFA approach. Since the GUs information is used to calculate the ACM, gyro error is the leading error source. In practice, the GUs have two dominant errors: bias and noise. Typical performances of gyroscopes of different grades are listed in Table 2 [29]. GUs errors of three grades (navigation, tactical, and automotive) are added while correlating the images, and the SNR of the star image is analyzed. The simulation result is shown in Fig. 8(a). Compared with the error-free condition, the effect of navigation- and tactical-grade gyroscope errors on the SNR after correlation is small enough to be ignored. As for the automotive grade, when the number of correlated frames is small, the simulation result agrees with theory; but when more frames are added, the SNR slightly decreases.

Fig. 8 Adding results with errors: (a) GUs of four different error grades; (b) different fixed-angle errors about the three perpendicular axes.

As indicated in Eq. (5), the installation matrix $C_b^s$ is used to calculate the attitude-correlated matrix. Since the fixed angle is used to calculate the installation matrix, a deviation from its actual value makes the calculated attitude-correlated matrix inaccurate, and the correlating result of the ACFA approach is affected as well. Different levels of fixed-angle error are added while correlating the images, and the SNR of the star image is analyzed. The simulation result is shown in Fig. 8(b). For fixed-angle errors less than [500, 500, 1000], the effect on the SNR growth is small enough to be ignored. For higher-level fixed-angle errors, the SNR shows a decreasing trend as more frames are correlated. Therefore, the ACFA approach performs well with middle-level fixed-angle errors.

The effects of the two kinds of errors are then analyzed in detail. When the GUs error is of automotive grade and 10 frames containing only the star signal are correlated and added by the ACFA approach, the result is as shown in Fig. 9(a): the added star signal is no longer concentrated within a small region, and with the increased noise, the SNR is reduced.

Fig. 9 Analysis of the declined SNR: for automotive-grade GUs, the star signal distribution of 10 correlated frames is dispersed.

The result of ACFA depends on the accuracy of the attitude-correlated matrix, so we further analyze the effects of the GUs error and the fixed-angle error on it. The error of the attitude-correlated matrix can be calculated as follows [27]:

$$E = I - C_{s(j)}^{s(k)} \left(\bar{C}_{s(j)}^{s(k)}\right)^T, \tag{18}$$

where $C_{s(j)}^{s(k)}$ is the true value, $\bar{C}_{s(j)}^{s(k)}$ is the matrix calculated with errors, and I is the identity matrix. The corresponding attitude error angles can be obtained by transforming E into three-axis Euler angles. The error of the attitude-correlated matrix caused by automotive-grade GUs, shown in Fig. 10(a), accumulates with navigation time. For a fixed-angle error of [500, 500, 1000], the error of the attitude-correlated matrix is shown in Fig. 10(b): it fluctuates around zero and is less than 0.02 arc-seconds, so it does not accumulate with time and is too small to affect the ACFA approach.
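Extracting the small error angles from E (Eq. (18)) can be sketched as below. This is illustrative only; it relies on the small-angle approximation in which E reduces to the skew-symmetric matrix of the error-angle vector.

```python
import numpy as np

def acm_error_angles(C_true, C_est):
    """Three-axis error angles (rad) between the true ACM and the one
    computed with gyro / fixed-angle errors, via E = I - C_true @ C_est.T
    (Eq. (18)). For small errors E is approximately skew-symmetric, so the
    error angles are read off its off-diagonal entries."""
    E = np.eye(3) - C_true @ C_est.T
    return np.array([E[2, 1], E[0, 2], E[1, 0]])
```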

Fig. 10 Analysis of the attitude-correlated matrix error: (a) The attitude error caused by gyro errors accumulates with time. (b) The attitude error caused by the fixed-angle error is small.

4. Experiment

The experiment verifying the proposed method was conducted at the laboratory building (Changsha, China) on 28 November at 2 a.m. The equipment used is shown in Fig. 11. The star tracker and GUs were rigidly fixed together and mounted on a rotary table, and their performance specifications are listed in Table 3. The rotary table provided a dynamic condition for the system. The experiment lasted about 40 minutes, and the three-axis attitude of the star tracker in the i-frame is shown in Fig. 12.

Fig. 11 Experimental system.

Table 3. Experimental parameters

Fig. 12 Attitude change of the vehicle.

During the process, 4556 successive star images were taken by the star tracker. The three-axis angular rate sensed by the GUs is transformed to the s-frame through the pre-calibrated installation matrix $C_b^s$, and the attitude-correlated matrices between different frames are then calculated. Once the attitude-correlated matrices are obtained, the star images can be correlated and added by the ACFA approach.

The comparison of the star images before and after correlation is shown in Fig. 13. The star on the added image keeps its shape, as shown in Fig. 13(b), which guarantees the subsequent star centroiding accuracy. There is a dim star in the original image that can be detected after adding 5 frames. The star identification result is then analyzed: for the single image in Fig. 13(a), 9 stars can be identified; when 5 successive frames are correlated and added, as shown in Fig. 13(b), the number of identified stars increases to 24. Since the dim stars become "brighter", the number of detectable stars increases as well.

Fig. 13 Comparison before and after addition: red circles mark the identified stars. (a) A single frame; (b) 5 correlated frames.

The noise and SNR growth with the number of correlated frames is shown in Fig. 14. The noise after correlation and addition is between √n/2 and √n times that of a single frame, which verifies the simulation result. The SNR of n correlated frames is √n to 2√n times that of a single frame, so the experimental results are consistent with the theory.

Fig. 14 Experimental results of the ACFA approach: (a) The noise of n correlated frames is between √n/2 and √n times that of a single image. (b) The SNR of n correlated frames is between √n and 2√n times that of a single image.

The 4556 images taken under dynamic conditions are divided into more than 400 groups of 10 successive star images each, and each group is correlated by the ACFA approach. The statistical results are shown in Fig. 15. Only stars brighter than magnitude 5 are detectable in a single frame. As the SNR of the star signal increases with the number of correlated frames, the maximum magnitude of detectable stars rises as well, reaching 5.64 for 10 correlated frames. Accordingly, the number of identified stars increases sharply from 1 to 9 when 7 frames are added, while the star identification failure rate shows the opposite trend, decreasing from 80% to 20%.

Fig. 15 The influence of ACFA on star detection and identification: (a) maximum identified star magnitude; (b) mean number of identified stars; (c) failure rate; (d) RMS value of angular distance error.

When fewer than 3 bright stars are present, star identification is likely to fail, and the star tracker cannot output the attitude of the vehicle normally. The ACFA approach raises the threshold of detectable star magnitude and the number of identified stars; with the accordingly lower failure rate, the application of the star tracker can be extended to strong-background-radiation conditions. Besides, with more identified stars on the image, the output attitude accuracy of the star tracker improves as well [9].

Since the true coordinates of the stars on the focal plane are unknown, the angular distance (the angular separation between two stars as observed from the detector) is used to estimate the star centroiding error. The angular distance between two given stars is constant across coordinate systems. In the i-frame (taken as reference), it can be calculated using the right ascension and declination of the identified stars; the corresponding angular distance in the s-frame (the measured value) can be obtained from the coordinates on the FPA and the focal length. Their difference is the angular distance error. The root mean square (RMS) value of the star angular distance error is shown in Fig. 15(d). The RMS of a single frame is as high as 13.1 arc-seconds. It declines to approximately 7.5 arc-seconds as the number of correlated frames increases, but it cannot keep decreasing to zero due to the limit of systematic errors, which is consistent with the simulation result.
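The angular-distance check can be sketched with a few helper functions. These are illustrative, not the paper's code; the pinhole model of Eq. (11) with zero aberration is assumed for the s-frame vector.

```python
import numpy as np

def star_vector_catalog(ra, dec):
    """Unit star vector in the i-frame from right ascension / declination (rad)."""
    return np.array([np.cos(dec) * np.cos(ra),
                     np.cos(dec) * np.sin(ra),
                     np.sin(dec)])

def star_vector_fpa(u, v, u0, v0, f):
    """Unit star vector in the s-frame from FPA coordinates and focal
    length (pixels), assuming the pinhole model with zero aberration."""
    w = np.array([u - u0, v - v0, f], dtype=float)
    return w / np.linalg.norm(w)

def angular_distance(v1, v2):
    """Angle (rad) between two unit vectors; frame-independent for a star pair."""
    return np.arccos(np.clip(np.dot(v1, v2), -1.0, 1.0))
```

The angular-distance error of a star pair is the catalog-derived distance (i-frame) minus the FPA-derived distance (s-frame).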

5. Conclusions

The star tracker is sensitive to strong background noise and can be put completely out of operation. In this paper, an attitude-correlated frames adding (ACFA) approach is proposed to break this bottleneck. The attitude-correlated matrices between different times are calculated using the GUs information, and successive star images are then correlated by transforming the pixels to find the regions of a given star on different images. The correlated images are added to enhance the star signal intensity and improve the SNR of the star images, with the area-weighted averaging method adopted to add the transformed pixels, which do not overlap perfectly. Simulations under dynamic conditions and experiments under oscillating conditions indicate that the star signal of n frames correlated by the ACFA approach is n times that of a single frame while the SNR increases to √n~2√n times, and the star centroiding accuracy is also improved. The impact of the GUs errors (including bias and noise) and the fixed-angle error can be ignored at low angular rates of the vehicle, which verifies the effectiveness of the proposed approach.

For the hardware implementation, the ACFA method, which uses sequential frames for attitude determination, requires larger memory space to store the image data and more computational resources for image adding. This costs more hardware resources and time for image processing, and the increased processing time induces more delay in the attitude output. To mitigate this, a high-speed central processing unit (CPU) with a clock speed of several GHz can be used for image processing. Moreover, the computational cost can be reduced by processing only the sub-images containing the signals of navigation stars. Because the integration of the star tracker and GUs provides attitude information, the coordinates of the navigation stars on each frame can be predicted. A sub-image (for example, a window of 15 × 15 pixels) containing a star can then be extracted, centered at the predicted star coordinates, and the ACFA method need store and process only these sequential sub-images. As a consequence, the hardware cost is greatly reduced.

The method can be applied to integrated star tracker/INS navigation systems: the GUs of the INS provide accurate attitude changes for this approach, and in return, the accumulating error of the inertial navigation system can be compensated with the more accurate attitude information of the star tracker. Using a short-wave infrared camera to detect stars during the daytime is another trend, but the noise of infrared images is also high; the feasibility of this method for improving the SNR of infrared star images should be further assessed.

Appendix

After the correlated images are added by the area weighted averaging method, the SNR of the star image is analyzed.

Assuming that the noise of an image (M × N pixels) is Gaussian distributed with mean value Ī and standard deviation σ:

$$\begin{cases} \bar{I} = \dfrac{1}{MN}\displaystyle\sum_{i=1}^{M}\sum_{j=1}^{N} I_{i,j} \\[8pt] \sigma = \sqrt{\dfrac{1}{MN}\displaystyle\sum_{i=1}^{M}\sum_{j=1}^{N}\left(I_{i,j}-\bar{I}\right)^2}, \end{cases}$$

Considering that the pixels of the correlated images after the ACT do not overlap perfectly (see Fig. 2), a transformed pixel is divided into four parts: S1, S2, S3, and S4. Before addition, each pixel of a single frame is divided into the same four parts with area ratios k1, k2, k3, and k4, respectively. So for each sub-pixel we have:

$$I_{i,j}^{S_1}=k_1 I_{i,j},\quad I_{i,j}^{S_2}=k_2 I_{i,j},\quad I_{i,j}^{S_3}=k_3 I_{i,j},\quad I_{i,j}^{S_4}=k_4 I_{i,j},\qquad k_1+k_2+k_3+k_4=1,$$
where $I_{i,j}^{S_1}$, $I_{i,j}^{S_2}$, $I_{i,j}^{S_3}$, and $I_{i,j}^{S_4}$ denote the gray values of the S1, S2, S3, and S4 portions of the pixel $I_{i,j}$, and $0 \le k_1,k_2,k_3,k_4 \le 1$. The mean value and standard deviation of all the sub-pixels with area S1 can be calculated:
$$\bar I^{S_1}=\frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}I_{i,j}^{S_1}=\frac{k_1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}I_{i,j}=k_1\bar I,\qquad \sigma_1=\sqrt{\frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\left(I_{i,j}^{S_1}-\bar I^{S_1}\right)^{2}}=\sqrt{\frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\left(k_1 I_{i,j}-k_1\bar I\right)^{2}}=k_1\sigma.$$
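The relation σ₁ = k₁σ can be checked numerically. The following sketch uses a synthetic Gaussian-noise frame and an illustrative ratio k₁ (both values are assumptions, not from the paper):

```python
import numpy as np

# Synthetic M x N frame of Gaussian background noise (illustrative values).
rng = np.random.default_rng(0)
M, N, sigma = 256, 256, 5.0
I = rng.normal(50.0, sigma, size=(M, N))

k1 = 0.3            # illustrative area ratio of the S1 sub-pixel
I_S1 = k1 * I       # gray values of the S1 portions of all pixels

# Scaling every pixel by k1 scales the standard deviation by k1: sigma_1 = k1 * sigma.
print(I_S1.std(), k1 * I.std())  # the two values agree up to rounding
```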

Similarly, the standard deviations of the S2, S3, and S4 sub-pixels are:

$$\sigma_2=k_2\sigma,\qquad \sigma_3=k_3\sigma,\qquad \sigma_4=k_4\sigma.$$

Therefore, the standard deviations of the four arrays satisfy:

$$\sigma=\sigma_1+\sigma_2+\sigma_3+\sigma_4.$$

When these divided parts of n frames with the same noise intensity (standard deviation σ) are added together, their variances add linearly, and the noise intensity of the added frame, σ_add, is given by:

$$\sigma_{\mathrm{add}}^{2}=\sum_{f=1}^{n}\left(\sigma_{f1}^{2}+\sigma_{f2}^{2}+\sigma_{f3}^{2}+\sigma_{f4}^{2}\right),$$
where σ_f1, σ_f2, σ_f3, and σ_f4 are the standard deviations of the sub-pixels with the areas S1, S2, S3, and S4 in the fth frame, respectively. We have:
$$\sigma=\sigma_{f1}+\sigma_{f2}+\sigma_{f3}+\sigma_{f4}.$$

For any nonnegative real numbers a, b, c, and d, the following inequality holds:

$$\frac{1}{4}(a+b+c+d)^{2}\le a^{2}+b^{2}+c^{2}+d^{2}\le(a+b+c+d)^{2},$$
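A quick numerical spot check of this inequality (the random draw is purely illustrative):

```python
import numpy as np

# Check (a+b+c+d)^2 / 4 <= a^2+b^2+c^2+d^2 <= (a+b+c+d)^2
# for random nonnegative reals.
rng = np.random.default_rng(2)
a, b, c, d = rng.uniform(0.0, 10.0, size=4)
s = a + b + c + d
q = a**2 + b**2 + c**2 + d**2
print(0.25 * s**2 <= q <= s**2)  # True
```

The left bound follows from the Cauchy-Schwarz inequality; the right bound needs the nonnegativity of a, b, c, and d.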
hence, for f = 1, 2, ..., n,
$$\frac{1}{4}(\sigma_{f1}+\sigma_{f2}+\sigma_{f3}+\sigma_{f4})^{2}\le\sigma_{f1}^{2}+\sigma_{f2}^{2}+\sigma_{f3}^{2}+\sigma_{f4}^{2}\le(\sigma_{f1}+\sigma_{f2}+\sigma_{f3}+\sigma_{f4})^{2}.$$
It can be deduced that:
$$\sum_{f=1}^{n}\frac{1}{4}(\sigma_{f1}+\sigma_{f2}+\sigma_{f3}+\sigma_{f4})^{2}\le\sum_{f=1}^{n}\left(\sigma_{f1}^{2}+\sigma_{f2}^{2}+\sigma_{f3}^{2}+\sigma_{f4}^{2}\right)\le\sum_{f=1}^{n}(\sigma_{f1}+\sigma_{f2}+\sigma_{f3}+\sigma_{f4})^{2},$$
that is:
$$\frac{n}{4}\sigma^{2}\le\sigma_{\mathrm{add}}^{2}\le n\sigma^{2}.$$

Thus,

$$\frac{\sqrt{n}}{2}\,\sigma\le\sigma_{\mathrm{add}}\le\sqrt{n}\,\sigma.$$
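This noise bound can be verified with a small Monte Carlo sketch. The Dirichlet draw of the area ratios, the noise level, and the pixel count are illustrative assumptions, not parameters from the paper:

```python
import numpy as np

# Monte Carlo check of sqrt(n)/2 * sigma <= sigma_add <= sqrt(n) * sigma.
rng = np.random.default_rng(1)
sigma, n, npix = 4.0, 60, 100_000

added = np.zeros(npix)
for _ in range(n):
    k = rng.dirichlet(np.ones(4))        # area ratios k1..k4 with sum 1
    for kf in k:
        # Each sub-pixel contribution comes from an independent noise pixel.
        added += kf * rng.normal(0.0, sigma, npix)

sigma_add = added.std()
lower, upper = np.sqrt(n) / 2 * sigma, np.sqrt(n) * sigma
print(lower <= sigma_add <= upper)  # True
```

Because each frame's four sub-pixel variances sum to between σ²/4 and σ², the measured σ_add always lands inside the derived band.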

The star signal is proportional to the number of correlated frames n. Therefore, the SNR of the correlated frame after addition satisfies the following inequality:

$$\sqrt{n}\,\mathrm{SNR}\le\mathrm{SNR}_{\mathrm{add}}\le 2\sqrt{n}\,\mathrm{SNR}.$$

Funding

National Natural Science Foundation of China (NSFC) (61803378, 61573368).

Disclosures

The authors declare that there are no conflicts of interest related to this article.

References

1. C. C. Liebe, “Accuracy performance of star trackers - a tutorial,” IEEE Trans. Aerosp. Electron. Syst. 38(2), 587–599 (2002).

2. S. Levine, R. Dennis, and K. L. Bachman, “Strapdown astro-inertial navigation utilizing the optical wide-angle lens startracker,” Navigation 37(2), 347–362 (1990).

3. A. R. Eisenman, C. C. Liebe, and J. L. Joergensen, “New generation of autonomous star trackers,” Proc. SPIE 3221, 524–535 (1997).

4. G. A. Avanesov, R. V. Bessonov, A. N. Kurkina, M. B. Lyudomirskii, I. S. Kayutin, and N. E. Yamshchikov, “Autonomous strapdown stellar-inertial navigation systems: design principles, operating modes and operational experience,” Gyroscopy Navig. 4(4), 204–215 (2013).

5. C. C. Liebe, “Star trackers for attitude determination,” IEEE Aerosp. Electron. Syst. Mag. 10(6), 10–16 (1995).

6. J. L. Jorgensen, T. Denver, M. Betto, and P. V. d. Braembussche, “The PROBA satellite star tracker performance,” Acta Astronaut. 56(1), 153–159 (2005).

7. E. F. Young, R. Mellon, J. W. Percival, K. P. Jaehnig, J. Fox, T. Lachenmeier, B. Oglevie, and M. Bingenheimer, “Sub-arcsecond performance of the ST5000 star tracker on a balloon-borne platform,” in 2012 IEEE Aerospace Conference (IEEE, 2012), pp. 1–7.

8. I. S. Kruzhilov, “Evaluation of instrument stellar magnitudes without recourse to data as to star spectral classes,” Proc. SPIE 0635, 063537 (2012).

9. W. Tan, S. Qin, R. M. Myers, T. J. Morris, G. Jiang, Y. Zhao, X. Wang, L. Ma, and D. Dai, “Centroid error compensation method for a star tracker under complex dynamic conditions,” Opt. Express 25(26), 33559–33574 (2017).

10. W. Wang, X. Wei, J. Li, and G. Wang, “Noise suppression algorithm of short-wave infrared star image for daytime star sensor,” Infrared Phys. Technol. 85, 382–394 (2017).

11. N. Truesdale, M. Skeen, J. Diller, K. Dinkel, Z. Dischner, A. Holt, T. Murphy, S. Schuette, and A. Zizzi, “DayStar: modeling the daytime performance of a star tracker for high altitude balloons,” in 51st AIAA Aerospace Sciences Meeting including the New Horizons Forum and Aerospace Exposition (American Institute of Aeronautics and Astronautics, 2013), https://arc.aiaa.org/doi/abs/10.2514/6.2013-139.

12. G. Wang, F. Xing, M. Wei, and Z. You, “Rapid optimization method of the strong stray light elimination for extremely weak light signal detection,” Opt. Express 25(21), 26175–26185 (2017).

13. M. Rex, E. Chapin, M. J. Devlin, J. Gundersen, J. Klein, E. Pascale, and D. Wiebe, “BLAST autonomous daytime star cameras,” Proc. SPIE 6269, 62693H (2006).

14. K. Ho and S. Nakasuka, “Novel star identification algorithm utilizing images of two star trackers,” in 2010 IEEE Aerospace Conference (IEEE, 2010), pp. 1–10.

15. M. Wei, F. Xing, and Z. You, “A real-time detection and positioning method for small and weak targets using a 1D morphology-based approach in 2D images,” Light Sci. Appl. 7, 18006 (2018).

16. J. Lu and L. Yang, “Optimal scheme of star observation of missile-borne inertial navigation system/stellar refraction integrated navigation,” Rev. Sci. Instrum. 89(5), 054501 (2018).

17. W. Wang, X. Wei, J. Li, and G. Zhang, “Guide star catalog generation for short-wave infrared (SWIR) all-time star sensor,” Rev. Sci. Instrum. 89, 075003 (2018).

18. M. Belenkii, D. G. Bruns, V. A. Rye, and T. Brinkley, “Daytime stellar imager,” U.S. Patent 7,349,804 B2 (Mar. 25, 2008).

19. Trex Enterprises Corporation Products/services, “Optical GPS” (Trex Enterprises Corporation, 2016), http://www.trexenterprise-s.com/Pages/Products

20. N. A. Truesdale, K. J. Dinkel, Z. J. B. Dischner, J. H. Diller, and E. F. Young, “DayStar: Modeling and test results of a balloon-borne daytime star tracker,” in IEEE Aerospace Conference (IEEE, 2013), pp. 1–12.

21. M. J. O’Malley and E. O’Mongain, “Charge-coupled devices: frame adding as an alternative to long integration times and cooling,” Opt. Eng. 31(3), 522–526 (1992).

22. J. Yan, J. Jiang, and G. Zhang, “Dynamic imaging model and parameter optimization for a star tracker,” Opt. Express 24(6), 5961–5983 (2016).

23. J. Yan, J. Jiang, and G. Zhang, “Modeling of intensified high dynamic star tracker,” Opt. Express 25(2), 927–948 (2017).

24. R. A. Fowell, S. I. Saeed, R. Li, and Y.-W. A. Wu, “Mitigation of angular acceleration effects on optical sensor data,” U.S. Patent 6,863,244 (2005).

25. R. A. Fowell, R. Li, and Y.-W. A. Wu, “Method for compensating star motion induced error in a stellar inertial attitude determination system,” U.S. Patent 7,487,016 (2009).

26. L. Ma, D. Zhan, G. Jiang, S. Fu, H. Jia, X. Wang, Z. Huang, J. Zheng, F. Hu, W. Wu, and S. Qin, “Attitude-correlated frames approach for a star sensor to improve attitude accuracy under highly dynamic conditions,” Appl. Opt. 54(25), 7559–7566 (2015).

27. D. H. Titterton and J. L. Weston, Strapdown Inertial Navigation Technology (The Institution of Engineering and Technology, 2004), chap. 12.

28. P. Sturm, S. Ramalingam, J. P. Tardif, S. Gasparini, and J. Barreto, Camera Models and Fundamental Concepts Used in Geometric Computer Vision (Now Publishers, 2004), chap. 3.

29. J. D. Gautier, GPS/INS Generalized Evaluation Tool (GIGET) for the Design and Testing of Integrated Navigation System (Stanford University, 2003).



Figures (15)

Fig. 1 Two successive frames taken by the star tracker at different moments. (a) The attitude of the star tracker changes with time in the inertial coordinate system. (b) The same 4 stars appear on two different images taken at tk and tj, respectively. ST refers to the star tracker, and GUs refers to the gyroscope units.
Fig. 2 Coordinates of a transformed pixel: the coordinates of A0 are integers between 1 and 512 in the jth image, but they become fractional after transformation, overlapping four pixels of the nth image.
Fig. 3 Flow chart of the ACFA approach.
Fig. 4 12 seconds of three-axis attitude of the vehicle in the i-frame.
Fig. 5 The same region of a single image and a correlated image: (a) the SNR of a single frame is too low to see the dim star in the image; (b) the star signal becomes obvious when 60 frames are correlated and added.
Fig. 6 Star signal growth and noise growth of correlated images: (a) the star signal is proportional to the number of added frames; (b) the noise is between √n/2 and √n times that of a single frame.
Fig. 7 SNR growth and centroiding accuracy estimation: (a) the SNR is between √n and 2√n times that of a single frame; (b) the centroiding error of the navigation star with respect to the number of correlated frames. δx and δy represent the centroiding errors along the x-axis and y-axis of the image coordinate system; δr represents the centroiding error in the radial direction.
Fig. 8 Adding results with errors: (a) GUs of four different error grades; (b) different fixed-angle errors of three perpendicular axes.
Fig. 9 Analysis of the reason for the declined SNR: for automotive-grade GUs, the star signal distribution of 10 correlated frames is dispersed.
Fig. 10 Analysis of the attitude-correlated matrix error: (a) the attitude error of the gyro output accumulates with time; (b) the attitude error caused by the fixed angle is small.
Fig. 11 Experimental system.
Fig. 12 Attitude change of the vehicle.
Fig. 13 Contrast before and after addition (red circles mark the identified stars): (a) a single frame; (b) 5 correlated frames.
Fig. 14 Experimental results of the ACFA approach: (a) the noise of n correlated frames is between √n/2 and √n times that of a single image; (b) the SNR of n correlated frames is between √n and 2√n times that of a single image.
Fig. 15 Influence of ACFA on star detection and star identification: (a) maximum identified star magnitude; (b) mean number of identified stars; (c) failure rate; (d) RMS value of the angular distance error.

Tables (3)

Table 1 Simulation parameters
Table 3 Experimental parameters
