
Star centroiding error compensation for intensified star sensors

Open Access

Abstract

A star sensor provides high-precision attitude information by capturing stellar images; however, the traditional star sensor has poor dynamic performance owing to its low sensitivity. In the intensified star sensor, an image intensifier is utilized to improve the sensitivity, thereby further improving the dynamic performance of the star sensor. However, the introduction of the image intensifier degrades the star centroiding accuracy, which in turn influences the attitude measurement precision of the star sensor. A star centroiding error compensation method for intensified star sensors is proposed in this paper to reduce this influence. First, the imaging model of the intensified detector, which includes the deformation parameters of the optical fiber panel, is established based on the orthographic projection through an analysis of the errors introduced by the image intensifier. Thereafter, the position errors at the target points are obtained from the model by using the Levenberg-Marquardt (LM) optimization method. Finally, the nearest trigonometric interpolation method is presented to compensate for the centroiding error at arbitrary positions on the image plane. Laboratory calibration and night sky experiment results show that the compensation method effectively eliminates the error introduced by the image intensifier, thus remarkably improving the precision of intensified star sensors.

© 2016 Optical Society of America

1. Introduction

By utilizing an imaging system to observe stars [1], a star sensor, an aerospace measuring instrument, can provide the attitude information of space vehicles, especially satellites, with high precision, no drift, high automation, and high reliability. With the development of satellites, attitude information and even angular velocity information should be acquired by the star sensor under fast maneuver conditions (e.g., angular velocities greater than 5°/s) [2–6]. Under such high dynamic conditions, star image smearing leads to a rapid decline in the sensitivity of the star sensor. The number of stars that can be observed within the field of view is reduced, resulting in a significant drop in the success rate of star pattern recognition. Generally, the traditional star sensor cannot output correct attitude information when its angular velocity exceeds 5°/s. Introducing an image intensifier as the imaging device of the star sensor is an effective way to improve the sensitivity. In addition, the image intensifier has lower noise, smaller size, and lower power consumption compared with other imaging devices [2]. However, the error introduced by the image intensifier severely decreases the star centroiding accuracy: without compensation, the star centroiding error of the intensified star sensor is five times larger than that of the traditional star sensor under the same conditions.

In recent studies, the star centroiding position error of the traditional star sensor has been systematically analyzed. Grossman et al. [7] determined that the star centroiding error decreases as the blur increases and the star dispersion spot expands. However, Hegedus et al. [8] indicated that the error first decreases and then increases with the increase of the Gaussian radius. Stanton et al. [9] concluded that, for a fixed-size dispersion spot, the relationship between the systematic error and the true position of the star point is approximately sinusoidal. Alexander et al. [10] obtained the expression of the star centroiding position by using a frequency domain method. Based on this method, Fillard [11] obtained the sub-pixel location of complex signals by proposing the Fourier phase-shift method. Rufino et al. [12] considered the diffraction and the point spread function of the star energy distribution under the defocus condition and compensated the systematic error by using a BP neural network. However, the preceding methods cannot compensate for the star centroiding error induced by the image intensifier, and applications of the intensified imaging technique in high-precision measurement have not yet been reported.

Motivated by the demand for precise star location in the intensified star sensor, an effective compensation method for the centroiding error is proposed in this paper. The imaging model of the intensified detector, which includes the deformation parameters of the optical fiber panel, is established based on the orthographic projection by analyzing the error introduced by the image intensifier. Thereafter, the centroiding error at each target point is obtained by using the optimization method. The nearest trigonometric interpolation method is presented to compensate for the centroiding error at arbitrary positions on the image plane. Finally, the effectiveness of the proposed method is verified through a laboratory calibration experiment as well as a night sky experiment.

2. Error analysis of the intensified imaging device

2.1 The structure of the image intensifier

The intensified star tracker employed in this study uses a Generation II+ double-proximity focusing image intensifier, whose internal structure is shown in Fig. 1 [2]. The imaging device comprises, from front to back, the input window, photocathode, micro-channel plate (MCP), fluorescent screen, coupling component, and CMOS image sensor chip. The photocathode, through its chemical coating, transforms optical signals into electric signals and supplies primary photoelectrons to the input face of the MCP. The photoelectrons enter the MCP and are multiplied under the effect of the high-potential electric field. The secondary electrons from the MCP strike the fluorescent screen, which emits photons; thereafter, the intensified optical image is obtained. The intensified optical signal passes through the coupling component of the optical fiber panel and finally reaches the coupled CMOS image sensor chip.

Fig. 1 Schematic of the image intensifier structure.

2.2 Error analysis of the intensified detector

The unique errors of the intensified imaging device mainly come from its core component, the image intensifier. The two key components inside the image intensifier, the optical fiber panel and the MCP, are the main sources of these errors.

2.2.1 Error analysis of the MCP gain fluctuation

As the core component of the image intensifier, the MCP is a glass fiber panel of micro hollow channels whose inner walls have good secondary emission properties. When primary electrons bombard the inner wall of a channel, more secondary electrons are produced, thus achieving electron multiplication. Based on the theory of secondary electron emission, the absorption of electron energy, the excitation, and the migration escape of the secondary electrons are all random processes, thereby leading to a random fluctuation error associated with the amplified weak signal. Because the electronic gain of the MCP is tremendous, this gain fluctuation becomes the main noise source of the output image. Based on the analysis in [13], the gain of the MCP follows the Polya distribution and can be regarded as Poisson distributed at low gain expectations, which indicates that the variance of the gain of a single channel is approximately equal to its expectation. The distribution of the resulting star centroiding error can thus be described statistically; however, it cannot be compensated, because the star centroiding position error caused by the gain fluctuation is an instantaneous (random) error. A Monte-Carlo simulation experiment shows that the star centroiding error induced by the gain fluctuation of the MCP increases with the gain voltage, as shown in Fig. 2. Fortunately, the maximum value of this error is less than 0.1 pixel, which has only a slight influence on the star centroiding accuracy; thus, this error can be ignored.
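For illustration, the following is a minimal sketch of such a Monte-Carlo experiment, assuming a Poisson-distributed per-electron gain (per the low-gain approximation above) and a Gaussian star spot; the gain expectations, spot intensity, and window size are hypothetical placeholders, not the values used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def centroid_error_std(gain_mean, spot_sigma=1.0, peak=100.0,
                       n_trials=1000, size=11):
    """Monte-Carlo estimate of the X-direction centroiding error caused by
    MCP gain fluctuation. Each photoelectron receives an i.i.d. Poisson(gain)
    multiplication, so the per-pixel output is Poisson(n_electrons * gain)."""
    c = (size - 1) / 2.0                                  # true spot center (pixels)
    y, x = np.mgrid[0:size, 0:size]
    spot = peak * np.exp(-((x - c)**2 + (y - c)**2) / (2 * spot_sigma**2))
    errs = np.empty(n_trials)
    for k in range(n_trials):
        electrons = rng.poisson(spot)                     # photoelectrons per pixel
        output = rng.poisson(gain_mean * electrons)       # fluctuating MCP gain
        errs[k] = (output * x).sum() / output.sum() - c   # weighted centroid error
    return errs.std()

for g in (50, 200, 800):   # hypothetical gain expectations for rising MCP voltage
    print(f"gain {g:4d}: centroid error std = {centroid_error_std(g):.4f} px")
```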

Fig. 2 Statistical results of the centroiding error under different gain voltages of the MCP and different intensities of the star spot. (a) The standard deviation of the centroiding error along the X direction. (b) The scatter radius of the centroiding error.

2.2.2 Error analysis of the sampling of the optical fiber panel

The interior of the image intensifier uses a multistage optical fiber panel structure for image transmission. The discretization and reconstruction of the image influence the gray distribution of the star spot, which produces an S-curve error similar to that of an ordinary imaging device and affects the star position as a systematic error. As analyzed in [14], this error repeats periodically on the scale of the monofilament of the fiber panel and does not change with time; it is a high-frequency spatial error. Because the image intensifier contains multilayer optical fiber panels, the systematic error is modulated by the point spread function, which makes its compensation difficult. Figure 3(a) shows the amplitude of the centroiding error induced by the sampling of the optical fiber panel; the amplitude ranges from 0.8 μm to 1.4 μm. If the selected CMOS imaging chip has 5.5 μm pixels, this centroiding error is less than 0.3 pixel. In addition, according to [14], when the pixel size and the core radius of the optical fiber satisfy a certain relationship, the centroiding error is further reduced to 0.1 pixel, as shown in Fig. 3(b); therefore, this error can also be ignored.

Fig. 3 (a) Simulation results of the centroiding error of the single layer optical fiber panel. (b) The relation between the amplitude of the centroiding error and the ratio of the dispersion radius to the monofilament spacing.

2.2.3 Error analysis of the optical fiber panel deformation

The optical fiber panel fabricated from multifilaments undergoes several multifilament fusion pressing processes. In these processes, the stress difference between the center and the edge of the multifilament results in a non-uniform stretching amount across the multifilament. This non-uniformity bends the monofilaments, resulting in a position offset in the image transmitted by the optical fiber panel. In addition to the distortion caused by the stress difference between the center and the edge of the multifilament, the image offset also includes the distortion attributed to the irregular distribution of local stress, which results from individual differences among the multifilaments. This processing defect of the optical fiber panel is referred to as the optical fiber panel deformation effect. The imaging result of a standard grid pattern transmitted through the optical fiber panel, observed under a microscope, is shown in Fig. 4, in which two fractures occur in the grid pattern due to the optical fiber deformation.

Fig. 4 Optical fiber deformation under the microscope.

The deformation of the optical fiber panel changes the positions of image elements. Because the deformation is produced during the fabrication of the optical fiber panel, it generally does not change with time and is therefore a spatial error. This deformation inevitably shifts the star spot position in the intensified star sensor, which greatly influences the centroiding accuracy. Experimental analysis shows that the centroiding error induced by the deformation is far larger than the two errors discussed above, induced by the sampling of the optical fiber panel and the gain fluctuation of the MCP. In this paper, the imaging model of the intensified detector is first established; thereafter, the centroiding error induced by the deformation is compensated based on the model.

3. The imaging model of the intensified detector

The image intensifier contains a multi-stage optical fiber panel, which has the image transmission property; that is, the input image can be transmitted to the surface of the imaging chip through the multi-stage optical fiber panel structure. Ideally, the output image of the image intensifier has the same size as, and is consistent with, the input image; this is called the orthographic projection imaging model.

Assuming that there is one target on the input end face of the image intensifier, the imaging model of the image intensifier can be established through the target imaging process. The coordinate systems are established as shown in Fig. 5. The target reference frame lies in the plane of the calibration target; its origin is at the center of the target; the x- and y-axes are along the two alignment directions of the target points, and the z-axis is perpendicular to the xy plane. The image reference frame lies in the image plane of the imaging chip, which is located at the back end of the image intensifier; its origin is at the center of the image plane; the X and Y axes are along the two alignment directions of the pixels, and the Z axis is perpendicular to the XY plane.

Fig. 5 The orthographic projection imaging model of the image intensifier.

A deviation including shifting and rotating, which is influenced by the alignment error of the calibration target, inevitably exists between the target reference frame and the image reference frame:

$$\begin{pmatrix} X \\ Y \\ Z \end{pmatrix} = \mathrm{Rot}(z,\theta)\begin{pmatrix} x \\ y \\ z \end{pmatrix} + T, \tag{1}$$
where $\mathrm{Rot}(z,\theta)$ is the rotation matrix representing a rotation around the z-axis by the angle θ, and $T$ is the shifting vector of the origin of the target reference frame. Their expressions are as follows:

$$\mathrm{Rot}(z,\theta) = \begin{bmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{bmatrix}, \tag{2}$$

$$T = \left[ x_T, y_T, z_T \right]^{\mathrm{T}}. \tag{3}$$

Simplifying Eq. (1) into two-dimensional form gives:

$$\begin{pmatrix} X \\ Y \end{pmatrix} = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}\begin{pmatrix} x \\ y \end{pmatrix} + \begin{pmatrix} x_T \\ y_T \end{pmatrix}. \tag{4}$$

In the target reference frame, assume that there are target points in 2N + 1 rows and 2N + 1 columns, and denote the distance between two neighboring rows or columns as d. Denote the column numbers from left to right as {-N, -N + 1, …, 0, …, N - 1, N} and the row numbers from top to bottom as {-N, -N + 1, …, 0, …, N - 1, N}. Thus, the orthographic projection position of the target point in the ith row and jth column in the target reference frame can be expressed as

$$\begin{cases} x_{ij} = jd \\ y_{ij} = id. \end{cases} \tag{5}$$

In the image reference frame, the position of this target point is denoted as $(X_{ij}, Y_{ij})$. Based on the relation between the target reference frame and the image reference frame:

$$\begin{pmatrix} X_{ij} \\ Y_{ij} \end{pmatrix} = R\begin{pmatrix} jd \\ id \end{pmatrix} + e_T, \tag{6}$$
where R is the rotation matrix, eT is the shifting deviation, and the expressions of R and eT are as follows:

$$R = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}, \tag{7}$$

$$e_T = \left( x_T, y_T \right)^{\mathrm{T}}. \tag{8}$$

The centroiding error induced by the deformation of the optical fiber panel mainly comprises the distortion and the dislocation. The distortion is expressed as $e_{DIS} = (e_{xDIS}, e_{yDIS})^{\mathrm{T}}$ by establishing the second-order model of Eq. (9). By contrast, the dislocation is expressed as $e_{TRAN} = (e_{xTRAN}, e_{yTRAN})^{\mathrm{T}}$, and it cannot be described by a concrete model.

$$\begin{cases} e_{xDIS} = e_q X' + e_p X'\left(X'^2 + Y'^2\right) \\ e_{yDIS} = e_q Y' + e_p Y'\left(X'^2 + Y'^2\right), \end{cases} \tag{9}$$

where $e_q$ and $e_p$ are the first- and second-order deformation coefficients, respectively, and $(X', Y')$ is the coordinate taking the deformation center as the origin, namely,

$$\begin{cases} X' = X - C_X \\ Y' = Y - C_Y, \end{cases} \tag{10}$$

where $(X, Y)$ is the centroid coordinate in the image reference frame and $(C_X, C_Y)$ is the coordinate of the deformation center.

In summary, the imaging model of the star spot passing through the image intensifier can be expressed in the image reference frame as follows:

$$\begin{pmatrix} X_{ij} \\ Y_{ij} \end{pmatrix} = R\begin{pmatrix} x_{ij} \\ y_{ij} \end{pmatrix} + \begin{pmatrix} x_T \\ y_T \end{pmatrix} + \begin{pmatrix} e_{xDIS} \\ e_{yDIS} \end{pmatrix} + \begin{pmatrix} e_{xTRAN} \\ e_{yTRAN} \end{pmatrix}. \tag{11}$$
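As a concrete illustration, the following is a minimal sketch of this forward model, assuming the distortion of Eq. (9) is evaluated at the undistorted (rotated and shifted) image position, which is one consistent reading of Eqs. (9)–(11); the function name and parameter ordering are illustrative, and the dislocation term $e_{TRAN}$ is deliberately omitted, since it is what the fit in Section 4 isolates.

```python
import numpy as np

def intensifier_model(params, i, j, d):
    """Forward imaging model of Eq. (11), without the dislocation term e_TRAN.
    params = [theta, xT, yT, cx, cy, ep, eq] (illustrative ordering);
    i, j: row/column indices of the target points; d: target spacing."""
    theta, xT, yT, cx, cy, ep, eq = params
    x, y = j * d, i * d                                # Eq. (5): target position
    X = x * np.cos(theta) - y * np.sin(theta) + xT     # rotation + shift, Eq. (6)
    Y = x * np.sin(theta) + y * np.cos(theta) + yT
    Xp, Yp = X - cx, Y - cy                            # Eq. (10): about deformation center
    r2 = Xp**2 + Yp**2
    return (X + eq * Xp + ep * Xp * r2,                # Eq. (9): 1st/2nd order distortion
            Y + eq * Yp + ep * Yp * r2)
```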

4. Method description

Based on the preceding imaging model, the least square LM optimization method is utilized in this paper to solve for the parameters of the model and the position errors of the target points on the image plane. Thereafter, the star centroid position error at any position on the image plane can be compensated by using the nearest trigonometric interpolation method.

4.1 Obtaining the target position errors

All the parameters of the imaging model should be obtained first to determine the error of each target point induced by the image intensifier. Among the error terms, $R$, $e_T$, and $e_{DIS}$ can be modeled, whereas the dislocation error $e_{TRAN}$ cannot be described by a concrete model. Therefore, the optimization goal in this paper is to minimize the sum of the dislocation error $e_{TRAN}$ and the other residual errors (e.g., the model error and the calculation error). Because the dislocation error is far larger than the other errors, this sum can be approximated by the dislocation error alone when obtaining the optimization solution. Utilizing the least square LM optimization method to determine the parameters of the model, the optimization goal can be expressed as

$$\min \left( \sum_{i=-N}^{N} \sum_{j=-N}^{N} \left[ \left(e_{xTRAN}^{ij}\right)^2 + \left(e_{yTRAN}^{ij}\right)^2 \right] \right). \tag{12}$$

The model equation can be written as

$$\begin{cases} \hat{X}_{ij} = jd\cos\theta - id\sin\theta + x_T + e_{xDIS} = f_x(n) \\ \hat{Y}_{ij} = jd\sin\theta + id\cos\theta + y_T + e_{yDIS} = f_y(n), \end{cases} \tag{13}$$
where $n = [\theta, x_T, y_T, c_x, c_y, e_p, e_q]$ is the vector of all the model parameters. The nonlinear least square method is used to estimate the parameter vector $n$ because the two functions of the model are both nonlinear. Assume that $(X_{ij}, Y_{ij})^{\mathrm{T}}$ is the actual centroid and $(\hat{X}_{ij}, \hat{Y}_{ij})^{\mathrm{T}}$ is the corresponding estimated value; let $\Delta n$ be the parameter estimation bias and $\Delta x$ and $\Delta y$ be the estimation biases of the vectors $X$ and $Y$, respectively. Then

$$\begin{cases} \Delta x = X - \hat{X} \approx A\,\Delta n \\ \Delta y = Y - \hat{Y} \approx B\,\Delta n, \end{cases} \tag{14}$$
where A and B are the sensitive matrices, and they can be expressed as

$$\begin{cases} A = \left[ \dfrac{\partial f_x}{\partial \theta}\;\; \dfrac{\partial f_x}{\partial x_T}\;\; \dfrac{\partial f_x}{\partial y_T}\;\; \dfrac{\partial f_x}{\partial c_x}\;\; \dfrac{\partial f_x}{\partial c_y}\;\; \dfrac{\partial f_x}{\partial e_p}\;\; \dfrac{\partial f_x}{\partial e_q} \right] \\[2ex] B = \left[ \dfrac{\partial f_y}{\partial \theta}\;\; \dfrac{\partial f_y}{\partial x_T}\;\; \dfrac{\partial f_y}{\partial y_T}\;\; \dfrac{\partial f_y}{\partial c_x}\;\; \dfrac{\partial f_y}{\partial c_y}\;\; \dfrac{\partial f_y}{\partial e_p}\;\; \dfrac{\partial f_y}{\partial e_q} \right]. \end{cases} \tag{15}$$

By combining the estimation biases Δx and Δy, as well as the sensitive matrices, the iterative equations of the parameter vector can be established as follows:

$$\Delta n^{(k+1)} = \Delta n^{(k)} - \left( M_k^{\mathrm{T}} M_k \right)^{-1} M_k^{\mathrm{T}} P^{(k)}, \tag{16}$$
where $k$ is the iteration index and the update is computed with the LM algorithm; $P$ comprises the estimation biases $\Delta x$ and $\Delta y$, and $M$ comprises the sensitive matrices $A$ and $B$. Their expressions are as follows:

$$P = \begin{bmatrix} \Delta x_{-N,-N} \\ \vdots \\ \Delta x_{N,N} \\ \Delta y_{-N,-N} \\ \vdots \\ \Delta y_{N,N} \end{bmatrix}, \tag{17}$$

$$M = \begin{bmatrix} A_{-N,-N} \\ \vdots \\ A_{N,N} \\ B_{-N,-N} \\ \vdots \\ B_{N,N} \end{bmatrix}. \tag{18}$$

When the LM least square algorithm converges, the parameters of the model are solved and the estimated position $(\hat{X}_{ij}, \hat{Y}_{ij})^{\mathrm{T}}$ of each target point is acquired. Thereafter, the centroiding error is expressed as follows:

$$\begin{cases} e_x^{ij} = X_{ij} - \hat{X}_{ij} \approx e_{xTRAN}^{ij} \\ e_y^{ij} = Y_{ij} - \hat{Y}_{ij} \approx e_{yTRAN}^{ij}. \end{cases} \tag{19}$$
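As a sketch of this step, the fragment below estimates the model parameters and the per-target residuals with SciPy's Levenberg-Marquardt solver in place of the hand-derived iteration of Eq. (16); it reuses the hypothetical intensifier_model from Section 3, and the zero initial guess is an assumption, not the paper's procedure.

```python
import numpy as np
from scipy.optimize import least_squares

def fit_target_errors(Xm, Ym, d):
    """Fit n = [theta, xT, yT, cx, cy, ep, eq] to the measured target
    centroids Xm, Ym (each of shape (2N+1, 2N+1), indexed by row i and
    column j), then return the residuals of Eq. (19), which approximate
    the dislocation error e_TRAN."""
    N = (Xm.shape[0] - 1) // 2
    ii, jj = np.mgrid[-N:N + 1, -N:N + 1]               # row/column indices

    def residuals(params):                              # stacked (Delta x, Delta y)
        Xh, Yh = intensifier_model(params, ii, jj, d)
        return np.concatenate([(Xm - Xh).ravel(), (Ym - Yh).ravel()])

    sol = least_squares(residuals, x0=np.zeros(7), method='lm')  # LM solver
    Xh, Yh = intensifier_model(sol.x, ii, jj, d)
    return sol.x, Xm - Xh, Ym - Yh                      # parameters, e_x^{ij}, e_y^{ij}
```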

4.2 The nearest trigonometric interpolation for error compensation

The errors of the image intensifier at every target position can be obtained:

$$(X_{ij}, Y_{ij}) \;\rightarrow\; (e_x^{ij}, e_y^{ij}). \tag{20}$$

Because the dislocation error is local, it is appropriate to compensate for the error within a small local region. In addition, owing to the preceding errors, the imaging positions $(X_{ij}, Y_{ij})$ of the orthogonal grid points on the target plane are no longer strictly orthogonally distributed, as shown in Fig. 6.

Fig. 6 (a) The ideal orthogonal distribution of the imaging position. (b) The actual distribution of the imaging position.

Therefore, based on the preceding imaging characteristic, the nearest trigonometric interpolation method is presented in this paper to compensate for the error. The proposed method uses the triangular area determined by three adjacent non-collinear target points to realize the interpolation. In Fig. 7, the point P to be compensated falls inside the triangle ΔABC, which is called the compensation triangle for P.

Fig. 7 The schematic of the nearest trigonometric interpolation.

The intensifier error at the to-be-compensated point P is acquired by linear interpolation of the errors at the three vertices of the compensation triangle ABC:

$$\begin{cases} e_x^P = a\,e_x^A + b\,e_x^B + c\,e_x^C \\ e_y^P = a\,e_y^A + b\,e_y^B + c\,e_y^C, \end{cases} \tag{21}$$
where a, b, and c are the interpolation coefficients, equal to the ratios of the areas of the sub-triangles opposite the vertices A, B, and C, respectively, to the area of the triangle ABC:

$$\begin{cases} a = S_{\Delta PBC} / S_{\Delta ABC} \\ b = S_{\Delta PAC} / S_{\Delta ABC} \\ c = S_{\Delta PAB} / S_{\Delta ABC}. \end{cases} \tag{22}$$
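The coefficients (a, b, c) are exactly the barycentric coordinates of P in triangle ABC, so the compensation can be sketched compactly with a Delaunay triangulation of the measured target positions. This is a minimal illustration under the assumption that the compensated position is the measured position minus the interpolated error; all names are illustrative.

```python
import numpy as np
from scipy.spatial import Delaunay

def compensate(points, targets, errors):
    """Nearest trigonometric interpolation, Eqs. (21)-(22).
    points:  (K, 2) centroids to compensate;
    targets: (M, 2) measured target centroids (X_ij, Y_ij);
    errors:  (M, 2) target errors (e_x^{ij}, e_y^{ij}) from the model fit."""
    tri = Delaunay(targets)
    s = tri.find_simplex(points)                   # enclosing triangle of each point
    # (points outside the target grid get simplex -1 and need special handling)
    T = tri.transform[s]                           # affine maps to barycentric coords
    b2 = np.einsum('kij,kj->ki', T[:, :2], points - T[:, 2])
    bary = np.c_[b2, 1.0 - b2.sum(axis=1)]         # (a, b, c): sub-triangle area ratios
    e_p = np.einsum('ki,kij->kj', bary, errors[tri.simplices[s]])  # Eq. (21)
    return points - e_p                            # assumed: subtract intensifier error
```

A Delaunay triangulation of the measured targets guarantees that each compensation triangle is formed by adjacent non-collinear target points, matching the construction of Fig. 7.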

5. Experiment and analysis

5.1 Error results of the image intensifier

For one intensified star sensor (its basic parameters are shown in Table 1), the error distribution of the centroid positions of the targets is solved based on the imaging model of the image intensifier. The distortion error of the image intensifier is shown in Fig. 8, and the dislocation error is shown in Fig. 9. The distortion error on the image plane reaches the level of 1.5 pixels. A large number of hexagonal nets exist in the dislocation error, which shows that this error mainly appears along the multifilament edges. In addition, these hexagonal nets are not distributed in a single regular pattern, which indicates that the panel structures of all the optical fiber layers in the image intensifier exhibit deformation effects.

Table 1. Imaging detector parameters of the intensified star sensor.

Fig. 8 The distortion error of the image intensifier. (a) The component in the X direction. (b) The component in the Y direction.

Fig. 9 The dislocation error of the image intensifier. (a) The component in the X direction. (b) The component in the Y direction.

5.2 Laboratory calibration and test

Under laboratory conditions, the aforementioned intensified star sensor is calibrated and tested by using a high-precision two-dimensional rotating platform. The position data of a 169-point (13 × 13) dot matrix covering the central 12° × 12° area of the FOV are collected with a sampling interval of 1°. Figure 10 presents the residual errors and the positions of the principal point obtained through the calibration of the intensified star sensor using the data before and after the error compensation of the image intensifier. The residual error after compensation is obviously smaller than that before compensation, and the position of the principal point after calibration is also closer to the center of the image plane. The single-star pointing accuracies tested before and after the error compensation of the image intensifier, as well as the single-star pointing accuracy of the traditional star sensor, are shown in Table 2.

Fig. 10 Calibration residuals of the original data and the compensated data.

Table 2. Pointing accuracy under different conditions.

5.3 Night sky experiment

The proposed compensation method is further verified by using star images actually observed in the field. As shown in Fig. 11, the night sky experiment is conducted at the Xinglong station of the National Astronomical Observatories of China on a clear, moonless night. Star pattern recognition is implemented for the same star image before and after the error compensation of the image intensifier. As shown in Fig. 12, the errors of the star angular distances are obtained by comparing the measured angular distance between two arbitrary guide stars with the standard angular distance in the star catalog.

Fig. 11 Night sky experiment.

Fig. 12 Angular distance error before and after the intensifier error compensation for a real star image.

Taking one star image as an example, 22 guide stars exist in the FOV; therefore, 231 star pairs can be produced. Before compensation, the maximum error of the angular distances is more than 0.15° (540″), whereas the errors of the angular distances of most star pairs are under 100″ after compensation. Furthermore, with the proposed compensation method, the intensified star sensor maintains stable tracking at angular velocities up to 25°/s in the real night sky experiment.
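For reference, the angular-distance error metric used above can be computed as in the sketch below, assuming unit direction vectors have already been derived from the (compensated) star centroids and matched to catalog stars; the function name and interfaces are illustrative.

```python
import numpy as np

def angular_distance_errors(obs, cat):
    """Pairwise star angular-distance errors in arcseconds.
    obs: (K, 3) measured unit direction vectors of the K identified stars;
    cat: (K, 3) matched catalog unit vectors (22 stars -> 231 pairs)."""
    p, q = np.triu_indices(len(obs), k=1)          # all unordered star pairs
    ang = lambda v: np.arccos(np.clip(np.sum(v[p] * v[q], axis=1), -1.0, 1.0))
    return np.degrees(ang(obs) - ang(cat)) * 3600.0
```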

6. Conclusion

Based on the characteristics of the deformation error of the optical fiber panel of the image intensifier, an error solving method and a compensation method are proposed in this paper. First, the imaging model of the intensified detector is established based on a detailed analysis of its errors. Thereafter, the distribution of the deformation error of the optical fiber panel at the target points is obtained through the optimization method. In addition, the estimated error at the to-be-compensated point is obtained by the nearest trigonometric interpolation method. The position error of the intensified star sensor can thus be directly and efficiently compensated. Finally, the laboratory calibration and test as well as the night sky experiment show that the accuracy of the intensified star sensor after compensation is approximately five times higher than that before compensation. This finding indicates that the compensation method can significantly improve the accuracy of the intensified star sensor. In addition, the proposed compensation method is highly efficient because it directly compensates the centroiding error of the star sensor.

Funding

National Natural Science Foundation of China (NSFC) (61222304); Specialized Research Fund for the Doctoral Program of Higher Education of China (20121102110032).

References and links

1. C. C. Liebe, "Star trackers for attitude determination," IEEE Aerosp. Electron. Syst. Mag. 10(6), 10–16 (1995).

2. A. B. Katake, “Modeling, image processing and attitude estimation of high speed star sensors,” Ph.D. dissertation (Texas A&M University, 2006).

3. T. M. Brady, C. E. Tiller, R. A. Brown, A. R. Jimenez, and A. S. Kourepnes, "The inertial stellar compass: a new direction in spacecraft attitude determination," presented at the 16th Annual AIAA/USU Conference on Small Satellites, Logan, Utah (2002).

4. J. L. Crassidis, "Angular velocity determination directly from star tracker measurements," J. Guid. Control Dyn. 25(6), 1165–1168 (2002).

5. C. C. Liebe, K. Gromov, and D. M. Meller, "Toward a stellar gyroscope for spacecraft attitude determination," J. Guid. Control Dyn. 27(1), 91–99 (2004).

6. M. A. Samaan, “Toward faster and more accurate star sensors using recursive centroiding and star identification,” Ph.D. dissertation (Texas A&M University, 2006).

7. S. B. Grossman and R. B. Emmons, "Performance analysis and size optimization of focal planes for point-source tracking algorithm applications," Opt. Eng. 23(2), 167–176 (1984).

8. Z. S. Hegedus and G. W. Small, “Shape measurement in industry with sub-pixel definition,” Acta Polytech. Scand. Appl. Phys. 150, 101–104 (1985).

9. R. H. Stanton, J. W. Alexander, E. W. Dennison, T. A. Glavich, and L. F. Hovland, "Optical tracking using charge-coupled devices," Opt. Eng. 26(9), 930–938 (1987).

10. B. F. Alexander and C. N. Kim, "Elimination of systematic error in subpixel accuracy centroid estimation," Opt. Eng. 30(9), 1320–1330 (1991).

11. J. P. Fillard, "Subpixel accuracy location estimation from digital signals," Opt. Eng. 31(11), 2465–2471 (1992).

12. G. Rufino and D. Accardo, "Enhancement of the centroiding algorithm for star tracker measure refinement," Acta Astronaut. 53(2), 135–147 (2003).

13. A. Frenkel, M. A. Sartor, and M. S. Wlodawski, "Photon-noise-limited operation of intensified CCD cameras," Appl. Opt. 36(22), 5288–5297 (1997).

14. K. Xiong and J. Jiang, "Reducing systematic centroid errors induced by fiber optic faceplates in intensified high-accuracy star trackers," Sensors (Basel) 15(6), 12389–12409 (2015).
