Optica Publishing Group

Motion measurements of explosive shock waves based on an event camera

Open Access

Abstract

Shock wave measurement is vital in assessing explosive power and designing warheads. To obtain satisfactory observation data of explosive shock waves, optical sensors should possess both high dynamic range and high temporal resolution. In this paper, an event camera is employed for the first time to observe explosive shock waves, leveraging its high dynamic range and low latency. A comprehensive procedure is devised to measure the motion parameters of shock waves accurately. Firstly, a plane lines-based calibration method is proposed to compute the calibration parameters of the event camera, exploiting the camera's sensitivity to edges. Then, the fitted ellipse parameters of the shock wave are estimated from concise event data, obtained by exploiting the characteristics of event triggering and the morphology of shock waves. Finally, the geometric relationship between the ellipse parameters and the radius of the shock wave is derived, and the motion parameters of the shock wave are estimated. To verify the performance of our method, we compare our measurement results in a TNT explosion test with pressure sensor results and empirical formula predictions. The relative measurement error with respect to the pressure sensors ranges from 0.33% at best to 7.58% at worst. The experimental results verify the rationality and effectiveness of our method.

© 2024 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Shock waves are the primary destructive element in conventional warhead explosions [1]. Based on the motion parameters of shock waves, overpressure and propagation models can be established. These models are crucial for accurately analyzing damage mechanisms, quantitatively representing damage characteristics, and assessing damage effectiveness [2]. Furthermore, the assessment of explosive power directly guides the structural design, performance evaluation, and operational use of warheads [3]. Therefore, accurate and reliable motion measurement of shock waves during practical experiments is crucial for enhancing overall combat capabilities.

Existing methods for measuring shock waves struggle to meet the demand for accurate characterization of warhead power. These methods suffer, respectively, from limited sampling points, time-consuming and laborious deployment, susceptibility to damage, poor stability, and difficulty in quantification [4]. Inspired by [4], the advantages and limitations of various methods are analyzed in Table 1. The primary measurement method remains the high-accuracy electrical measurement method, including both the wire testing and wireless storage testing methods [5–7]. Alternative methods employing equivalent pressure tanks or conducting biological experiments face challenges when quantitatively characterizing shock waves [8,9]. Although high-speed photography can record the complete wave-propagation process safely, overexposure caused by explosive firelight poses a significant challenge. Cameras therefore generally need to avoid the firelight, causing the loss of explosive-center information and preventing a single device from recording the shock wave in all directions [10,11]. Furthermore, the substantial weight and high power consumption of high-speed cameras make field experiment setup relatively complex, and their high cost and limited continuous-recording capability pose further challenges. Together, these factors make it difficult to measure shock waves using high-speed photography.


Table 1. Performance of various methods [4] (more ⋆ indicates better performance)

In recent years, emerging event cameras have provided a new solution for measuring the motion parameters of explosive shock waves. Event cameras trigger asynchronous sparse event data by sensing changes in brightness, which gives them advantages such as high dynamic range (140 dB), low latency (on the order of μs), no motion blur, small size, low cost (at the same spatiotemporal resolution, roughly one-third the price of a high-speed camera), and low power consumption (as low as 5 mW) [12,13]. In measurement tasks of explosive shock waves, they can overcome the influence of explosive flames and capture comprehensive high-resolution observation data at low cost. Their small size and low power consumption mean they need no external power supply, are simple to deploy, and are suitable for field observation. Additionally, since the event camera records only changes, the fast-moving shock front is highly salient in the event data. Overall, the emergence of event cameras provides a powerful tool for obtaining precise and comprehensive data in these complex scenes.

To the best of our knowledge, this paper is the first to measure the motion parameters of explosive shock waves using an event camera. We design a complete procedure to achieve this measurement. The main contributions of this paper are as follows:

  • 1. We propose a plane lines-based calibration method to compute the calibration parameters of the event camera. An energy function built from event information is used to optimize line extraction, which serves to obtain accurate control points based on the geometric constraints of a checkerboard.
  • 2. We propose an event-based shock wave extraction algorithm to estimate the fitted ellipse parameters. The spatiotemporal distribution and change information of events are used to remove unrelated events so that the remaining events describe the shock front. Events are then sampled by angle to homogenize their distribution and avoid parameter degeneration during the ellipse fitting process.
  • 3. We derive the geometric relationship between the ellipse parameters and the radius of a shock wave, which establishes an optical measurement model for shock wave propagation along the ground. The motion parameters of the shock wave are estimated with this model.

2. Related works

This section reviews the critical tasks of shock wave optical measurement and existing work in event-based vision.

2.1 Shock wave optical measurement

In recent years, limited research has been conducted on the application of high-speed cameras to measuring shock waves. Several related works have been undertaken, albeit against different explosive backgrounds. Li et al. [14] utilized a high-speed camera to observe underwater shock waves and estimate their radius. Similarly, Jaka et al. [15] employed a high-speed camera to measure the pressure and velocity of plasma shock waves. Zhang et al. [16] and Slangen et al. [17] individually observed explosive shock waves in pipes or tubes with 100,000 fps and 20,000 fps high-speed cameras, respectively. Both studies quantitatively analyzed the overpressure laws of shock waves in the pipeline and non-pipeline directions. Brian et al. [18] measured the shock wave and analyzed the turbulent boundary layer interaction with a 50,000 fps camera. Higham et al. [19] and Rigby et al. [20] employed a Photron FASTCAM SA-Z camera at 160,000 fps to observe the rapid expansion of the detonation-product fireball following an explosion. Gomez et al. [21] measured the overpressure of the shock front with a 2,000,000 fps high-speed camera. The explosive scenes of these works [19–21] are near-field. Notably, the work of Kyle et al. [22] is particularly relevant to this paper. They employed four high-speed cameras in their experiments and extracted shock waves from the images, ultimately deriving the curve depicting changes in the radius. It is important to note that measuring shock waves above ground is considerably more intricate than underwater due to the substantial impact of the background. Additionally, traditional explosion tests without pipes or outside the near-field often produce intense explosive flames, which are challenging for high-speed cameras susceptible to overexposure. These findings underscore the dearth of research on high-speed photography methods tailored explicitly to measuring shock waves in conventional explosion tests. Therefore, further exploration and development of techniques are crucial to address this domain's intricacies and challenges effectively.

2.2 Event-based vision

Inspired by the concept of bionic vision, event-based vision was first put forward in 1992 [23] and has developed rapidly over the past decade. Event cameras focus only on luminance changes in the scene and trigger a binary event when the change exceeds the contrast threshold. Unlike standard cameras that output synchronous image frames, event cameras output asynchronous event streams [24]. This change in how visual information is acquired enables high dynamic range, low latency, no motion blur, and low power consumption [25,26]. These characteristics give event cameras great potential in many visual application scenarios, such as motion segmentation, three-dimensional reconstruction, and feature tracking. Zhou et al. [27] and Parameshwara et al. [28] employed event cameras to identify independently moving objects, overcoming the motion blur and exposure artifacts of standard cameras. Muglikar et al. [29] employed an event camera for three-dimensional reconstruction. Using continuous measurements as the sensor moved, they effectively leveraged the camera's ability to detect and respond to scene edges. Hu et al. [30] proposed the eCDT algorithm for event-based feature tracking, handling the asynchronous event stream directly. This work enabled feature tracking with extremely high temporal resolution and unlocked the potential of event cameras. Furthermore, blinking LED patterns [31] or blinking screens [32] have been used for event camera calibration. Muglikar et al. [33] calibrated event cameras by utilizing events to reconstruct images. These calibration works unlocked the event cameras' potential for measurement. Thanks to the rapid development of event cameras and their applications, it is possible to chronicle transient explosive processes. Accordingly, this study leverages event cameras to measure the motion of shock waves in a conventional explosion test. Adopting this approach minimizes the limitations of traditional high-speed cameras in balancing high dynamic range, high temporal resolution, and cost-effectiveness. Therefore, detailed information on shock wave dynamics can be captured, facilitating a more profound comprehension and analysis of the explosion process.

3. Motion measurements of explosive shock wave

At the onset of this section, we introduce the event generation model. Unlike conventional frame-based optical sensors, event cameras monitor luminance changes and output asynchronous event streams based on a predefined contrast threshold ${\it{\Phi}}$. An individual event is denoted as ${e_i}({{x_i},{y_i},{t_i},{\varphi_i}} )$, where $({x_i},{y_i})$ is the pixel coordinate, ${t_i}$ is the time stamp, ${\varphi _i} \in \{{ + 1, -1} \}$ is the polarity, and i is the event index. An event indicates that the luminance changed at time ${t_i}$ according to Eq. (1) [13],

$$|{L({{x_i},{y_i},{t_i}} )- L({{x_i},{y_i},{t_i} - \xi } )} |\ge {\it{\Phi} }$$
where L is the luminance and $\xi $ is a tiny time interval. If the luminance difference $L({{x_i},{y_i},{t_i}} )- L({{x_i},{y_i},{t_i} - \xi } )$ is positive, ${\varphi _i} ={+} 1$ indicates an increase in luminance, and vice versa.
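As a minimal sketch of this triggering model, the per-pixel decision of Eq. (1) can be written as follows (the function name, the threshold value, and the use of plain luminance differences rather than log-intensity are illustrative assumptions, not the sensor's actual circuitry):

```python
def triggers_event(L_now, L_prev, phi_thresh=0.2):
    """Polarity of the event triggered by a luminance change per Eq. (1),
    or 0 when the change stays below the contrast threshold Phi."""
    delta = L_now - L_prev
    if abs(delta) < phi_thresh:
        return 0              # change too small: no event is emitted
    return 1 if delta > 0 else -1
```

A real sensor applies this test asynchronously and independently at every pixel, which is what produces the sparse event stream described above.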

The overview of our shock wave measurement method is shown in Fig. 1. The blue block represents the data flow, while the green block represents our proposed method. We employ an event camera for the optical measurement of shock waves. Firstly, a novel calibration method is introduced that exploits the sensitivity of event cameras to edges. This method utilizes plane lines for the precise computation of both intrinsic and extrinsic parameters. The event coordinates and time information are used to build a distance energy function that optimizes line extraction; control points are then extracted based on the known geometric constraints of the checkerboard and used for calibration. Secondly, an extraction algorithm is introduced for estimating the fitted ellipse parameters of a shock wave. This algorithm has two purposes: removing irrelevant events based on their distribution, and uniformly sampling events to maintain accuracy in ellipse fitting. Finally, the geometric relationship between the ellipse parameters and the radius of the shock wave is derived. This relationship supports establishing an optical measurement model for shock wave propagation along the ground. The motion parameters of conventional explosive shock waves are estimated with this model, based on the calibration and extraction results.


Fig. 1. Overview of our shock wave measurement method: The blue block represents the data flow, while the green block represents our proposed method. The method mainly contains three procedures: calibration, extraction, and measurement.


3.1 Plane lines-based calibration

Since event cameras are sensitive to edges, our method for event sensor calibration employs line features to extract control points. The overview of the extraction procedure for our plane lines-based calibration method is shown in Fig. 2, where ${C_1}, \ldots, {C_m}$ denote m different camera poses. The line-feature checkerboard is used to gather the event streams by inducing relative motion between the checkerboard and the camera. Once a line feature moves in the image plane, the brightness values of its adjacent pixels undergo distinct changes. This phenomenon leads to a large number of events being triggered parallel to the direction of the line feature. To ensure smoother line features, we find it more effective to accumulate events with a number window rather than a time window. Therefore, we set the initial sampling number ${n_0}$ to correspond to the camera pose ${C_1}$. A slice ${E_j}$ of the event stream, defined by Eq. (2), is related to the camera pose ${C_j}$,

$${E_j} = \{{{e_i}({{x_i},{y_i},{t_i},{\varphi_i}} )|({j - 1} )\Delta n + {n_0} \le i < j\Delta n + {n_0}} \}$$
where $\Delta n$ represents the size of the accumulated number window, i represents the index of an event, and j represents the index of an event frame. We accumulate events for each camera pose and utilize the LSD (Line Segments Detection) algorithm [34,35] to efficiently extract rough lines. Additionally, we remove short-line results by checking their length in pixels. Afterward, we proceed to optimize the lines using the energy function and extract the intersection points as described in section 3.1.1. Lastly, control points are chosen based on the strategy outlined in section 3.1.2.
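The number-window slicing of Eq. (2) can be sketched as below (the list-of-tuples event representation and the function name are our own assumptions):

```python
def slice_events(events, n0, delta_n, j):
    """Slice E_j of Eq. (2): events with index
    (j - 1) * delta_n + n0 <= i < j * delta_n + n0.
    `events` is a chronologically ordered list of (x, y, t, polarity)."""
    lo = (j - 1) * delta_n + n0
    hi = j * delta_n + n0
    return events[lo:hi]
```

Slicing by count rather than by time keeps the number of accumulated events per frame constant, which is what yields smoother line features in the accumulated image.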


Fig. 2. Overview of the extraction procedure for our plane lines-based calibration method. Sampling is started at the event number ${\textrm{n}_0}$, and events are accumulated for each $\Delta \textrm{n}$; then, the LSD [34,35] algorithm is used to extract lines. After that, lines are optimized, and all the intersection points are calculated. Finally, the control points are selected.


3.1.1 Optimize lines and extract intersection points

To obtain more accurate extraction results for the control points, we start with line extraction optimization. For any slice ${E_j}$ of the event stream, the events triggered at each pixel are first counted. This pre-treatment avoids processing each event individually during the optimization and improves efficiency. Then, by searching all the inliers of each line, the Levenberg-Marquardt (LM) non-linear optimization algorithm is used to optimize the line parameters based on the energy function defined by Eq. (3), as shown in Fig. 3. Its physical meaning is to minimize the weighted sum of distances from all inliers to the line,

$$\mathrm{{\cal L}} = \mathop \sum \limits_{s = 1}^\omega {N_s}\left( {\frac{{|{A{x_s} + B{y_s} + C} |}}{{\sqrt {{A^2} + {B^2}} }}} \right)\; $$
where $A,\; B,\; {\textrm{and}}\; C$ are the line parameters, s is the index of an inlier, $\omega $ is the number of inliers, and ${N_s}$ is the event count of the pixel $({{x_s},\; {y_s}} )$. The objective of the optimization is to minimize $\mathrm{{\cal L}}$. Then, all intersection points are calculated by solving the equations of the straight lines in pairs.
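Under the assumption that the per-pixel event counts are stored in a dictionary, the energy of Eq. (3) can be sketched as:

```python
import math

def line_energy(A, B, C, inlier_counts):
    """Energy of Eq. (3): event-count-weighted sum of point-to-line
    distances. `inlier_counts` maps an inlier pixel (x_s, y_s) to its
    event count N_s; (A, B, C) are the line parameters."""
    norm = math.hypot(A, B)  # sqrt(A^2 + B^2)
    return sum(n * abs(A * x + B * y + C) / norm
               for (x, y), n in inlier_counts.items())
```

In the actual optimization this energy would be passed to an LM solver; pixels where many events fired weigh more heavily, pulling the line toward the strongest edge response.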


Fig. 3. The red line represents the line extracted by LSD [34,35]. Subsequently, all events (blue) within the threshold distance (between two black lines) from the red line are used to optimize the extraction results of that line.


3.1.2 Select control points

After the procedure described in section 3.1.1, some noise points inevitably remain owing to the complex outdoor environment. To select control points, we employ a proximity-based strategy. As illustrated in Fig. 4, the strategy is to search for the three nearest points to each point. The yellow point represents the current checkpoint, the three nearest points to it are shown in red, and the other points are shown in blue. There are two cases for a control point: it is either one of the four corners (Fig. 4(b)) or not (Fig. 4(a)). Let us define ${d_1},\; {d_2},\; {d_3}$ as the Euclidean distances between the three nearest points and the current checkpoint. If a point is a control point, it must meet one of the two cases mentioned above. In the first case, where the control point is not located at a corner, ${d_1},\; {d_2},\; {d_3}$ are very close to each other and satisfy Eq. (4). In the second case, where the control point is positioned at a corner, ${d_1}$ is close to ${d_2}$, and ${d_3}$ is close to $\sqrt 2 {d_1}$. These conditions adhere to Eq. (5),

$$\left|{1 - \frac{{{d_1}{d_3}}}{{d_2^2}}} \right|+ |{{d_3} - {d_1}} |< \delta $$
$$\left|{1 - \frac{{\sqrt 2 {d_1}{d_3}}}{{2d_2^2}}} \right|+ \left|{{d_3} - \left( {\sqrt 2 - 1} \right){d_2} - {d_1}} \right|< \delta $$
where $\delta $ is the threshold. After the selection process, for all intersection points that have passed the criteria, we check the total number of points based on prior knowledge of the checkerboard size (e.g., 36 points for a 6 × 6 checkerboard). If the number of points matches the expected number, these points are chosen as control points.
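A sketch of the two-case test of Eqs. (4)–(5), assuming the three neighbour distances are already sorted as $d_1 \le d_2 \le d_3$ (the function name and threshold value are illustrative):

```python
import math

def is_control_point(d1, d2, d3, delta=0.1):
    """Check the two geometric cases of Eqs. (4)-(5) for the sorted
    distances d1 <= d2 <= d3 to the three nearest neighbours."""
    # Eq. (4): interior point, all three distances nearly equal
    interior = abs(1 - d1 * d3 / d2**2) + abs(d3 - d1) < delta
    # Eq. (5): corner point, d1 ~ d2 and d3 ~ sqrt(2) * d1
    corner = (abs(1 - math.sqrt(2) * d1 * d3 / (2 * d2**2))
              + abs(d3 - (math.sqrt(2) - 1) * d2 - d1) < delta)
    return interior or corner
```

Points passing neither test are rejected as noise; the surviving set is then checked against the expected grid-point count as described above.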


Fig. 4. Two different cases in the selection process. If a candidate point fits one of the two cases, it is chosen as a control point.


Ultimately, for the camera pose ${C_j}$, a relational mapping between grid points in two-dimensional and three-dimensional space is established to facilitate calibration.

3.1.3 Planar checkerboard-based calibration

Once all control points are obtained, the calibration method using a planar checkerboard [36] is employed. The known three-dimensional relationships of all the control points define the world coordinate system. Then, the correspondence between each control point's two-dimensional (image) and three-dimensional (world) coordinates is established. The calibration task is converted into minimizing the reprojection error in Eq. (6),

$$Err = \mathop \sum \limits_{u = 1}^U {{\|{\Pi ({K,\lambda ,R,T,{{\it{\Psi}}_u}})- {\psi_u}}\|}^2},\quad{{\it{\Psi}}_u} \in {\mathrm{\mathbb{R}}^3},{\psi _u} \in \mathrm{\mathbb{R}^2}$$
where u is the index of a control point, ${\it{\Psi}}_u$ is its three-dimensional coordinates, ${\psi _u}$ is its two-dimensional coordinates, U is the total number of control points, $\Pi $ is the reprojection function, K is the intrinsic matrix, which includes the focal length f and the principal point, $\lambda $ is the distortion parameters, and $R,\; T$ are the extrinsic parameters.

After minimizing the reprojection error in Eq. (6), the calibration parameters are estimated. Mainly owing to the complexity of lens design and technological limitations, the imaging system cannot strictly satisfy the central perspective projection model; the resulting lens distortion causes a slight deviation of light rays. To obtain more accurate measurement results, a distortion model is used to correct the critical image points (such as the explosive center). The distortion parameters ${\lambda _1},\; {\lambda _2},\; {\lambda _3},\; {\lambda _4},\; {\textrm{and}}\; {\lambda _5}$ are obtained during the calibration process, and Eq. (7) is used to correct the image points,

$$\left\{ {\begin{array}{*{20}{c}} {\tilde{x} = x - ({{\lambda_1}{x_g} + {\lambda_2}} )({x_g^2 + y_g^2} )- {\lambda_4}x_g^2 - {\lambda_5}{x_g}{y_g}}\\ {\tilde{y} = y - ({{\lambda_1}{y_g} + {\lambda_3}} )({x_g^2 + y_g^2} )- {\lambda_4}{x_g}{y_g} - {\lambda_5}y_g^2} \end{array}} \right.$$
where $\tilde{x}$, $\tilde{y}$ are the coordinates of the ideal image point, x, $\; y$ are the coordinates of the actual image point, and ${x_g},\; {y_g}$ are the normalized coordinates of the image point.
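Applying the correction of Eq. (7) is a direct evaluation; a sketch (function name and parameter values are illustrative) might read:

```python
def undistort_point(x, y, xg, yg, lam):
    """Correct an image point with the distortion model of Eq. (7).
    (x, y): actual point, (xg, yg): its normalized coordinates,
    lam: the five distortion parameters (lambda_1 .. lambda_5)."""
    l1, l2, l3, l4, l5 = lam
    r2 = xg * xg + yg * yg                      # squared normalized radius
    x_ideal = x - (l1 * xg + l2) * r2 - l4 * xg * xg - l5 * xg * yg
    y_ideal = y - (l1 * yg + l3) * r2 - l4 * xg * yg - l5 * yg * yg
    return x_ideal, y_ideal
```

With all five parameters set to zero the correction is the identity, which provides a quick sanity check of an implementation.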

3.2 Shock wave extraction

For event cameras, a pixel is triggered when the luminance changes, provided the pixel receives enough photons. For shock waves propagating along the ground, some pixels are not triggered owing to various complicating factors, so fitting or solving the shock wave's ellipse parameters directly often proves ineffective. To address this, we propose an extraction algorithm for estimating the fitted ellipse parameters of a shock wave. The events are accumulated into event frames with high temporal resolution. Our shock wave extraction method consists of several steps, as illustrated in Fig. 5. Firstly, the ROI is extracted to improve efficiency and eliminate some noise. Secondly, events unrelated to the shock wave are removed according to their density, coordinates, and polarity. Then, the points obtained by accumulating events are sampled at uniform angles to avoid parameter degeneration during the ellipse fitting. Finally, the best-fitting ellipse parameters, including the major and minor axes, are determined by searching for the parameters that best fit the sampled points.


Fig. 5. Overview of our method for shock wave extraction. Events are accumulated as event frames. Firstly, the ROI region is extracted to reduce redundant data. Secondly, events unrelated to shock waves are removed according to their spatiotemporal distribution and change information. Then, points are sampled using uniform angles to avoid parameter degradation during the fitting. Finally, the ellipse parameters are estimated.


3.2.1 Extract ROI

Background noise outside the shock wave range will interfere with the extraction. Hence, extracting a Region of Interest (ROI) is advantageous. Our approach to defining the ROI is based on calculating the density of points that share the same X-coordinate or Y-coordinate with the explosive center. In this context, the density ${\sigma _q}$ for a point ${p_q}$ is defined as Eq. (8),

$${\sigma _q} = \mathop \sum \limits_{x = {x_q} - \theta }^{{x_q} + \theta } \mathop \sum \limits_{y = {y_q} - \theta }^{{y_q} + \theta } |{\varphi ({x,\; y} )} |$$
where $\theta $ is the search radius and $\varphi ({x,y} )$ is the polarity of the event at the coordinate $({x,y} )$. Then, we form the discrete functions in Eq. (9) and Eq. (10) based on the density counted in Eq. (8). In detail, we use the density $\mathrm{\sigma }$ as the dependent variable and the horizontal/vertical coordinate as the independent variable,
$$F(x )= \sigma ({x,\; {y_c}} ),x \in {\mathrm{\mathbb{N}}^\ast } \cap x \in [{0,{\tau_x}} ]$$
$$G(y )= \sigma ({{x_c},\; y} ),y \in {\mathrm{\mathbb{N}}^\ast } \cap y \in [{0,{\tau_y}} ]\; $$
where ${x_c}$ is the X-coordinate of the explosive center, ${y_c}$ is the Y-coordinate, and ${\tau _x}$, $\; {\tau _y}$ are the pixel resolution of the event camera. Then, we generate the line chart of $F(x )$ to visualize the density distribution. For instance, as shown in Fig. 6, the horizontal axis represents the x-axis pixel coordinate while the vertical axis represents the density. We then traverse x and find the first $x^{\prime}$ and the last $x^{\prime\prime}$ that satisfy Eq. (11) and Eq. (12), respectively, based on the function $F(x )$ derived in Eq. (9),
$$F(x )> \delta \; \cap \; F({x + l} )- F(x )> 0,l \in \{{1,2,3} \}$$
$$F(x )> \delta \; \cap \; F({x - l} )- F(x )> 0,l \in \{{1,2,3} \}$$
where $\delta $ is the threshold. We obtain the first $y^{\prime}$ and the last $y^{\prime\prime}$ in the vertical coordinate with the same process. Then, we select the region with a tolerance window $\theta $, as in Eq. (13).
$$x \in [{x^{\prime} - \theta ,x^{\prime\prime} + \theta } ]\cap y \in [{y^{\prime} - \theta ,y^{\prime\prime} + \theta } ]$$
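The boundary scan of Eqs. (11)–(13) along the horizontal direction can be sketched as follows, assuming the density profile $F$ has been precomputed as a list (the same routine applies to $G(y)$; the function name is our own):

```python
def roi_bounds(F, delta, theta):
    """Find the first x' satisfying Eq. (11) and the last x'' satisfying
    Eq. (12) in the density profile F, then widen the range by the
    tolerance window theta per Eq. (13)."""
    n = len(F)
    # Eq. (11): density above delta and rising over the next 3 samples
    first = next(x for x in range(n - 3)
                 if F[x] > delta and all(F[x + l] > F[x] for l in (1, 2, 3)))
    # Eq. (12): scan from the right for the symmetric condition
    last = next(x for x in range(n - 1, 2, -1)
                if F[x] > delta and all(F[x - l] > F[x] for l in (1, 2, 3)))
    return first - theta, last + theta
```

Running both scans (on $F$ and on $G$) yields the rectangular ROI of Eq. (13).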


Fig. 6. The curve of density over the horizontal coordinate. The horizontal coordinate range of the ROI is chosen within the green band.


In fact, for y, it is not essential to ensure $y \ge y^{\prime} - \theta $, because this region contains the detonation products covering the shock wave. We remove this region in the following procedure.

3.2.2 Remove unrelated events

The shock wave generated by the explosion propagates outward from the center. At the same time, the detonation products consistently overlay certain sections of the shock wave situated at the top of the explosive center in the image field. Therefore, the events unrelated to the shock wave are removed with the following strategies.

Strategy 1 Density Check: Events corresponding to the shock front are expected to trigger continuously. Therefore, if a pixel relates to the shock front, its density should surpass a predefined threshold $\delta $. Otherwise, we consider it as background noise. The density of the current pixel is computed by Eq. (8). For ease of understanding, an example is shown in Fig. 7.
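Strategy 1 reduces to evaluating Eq. (8) at each pixel; a sketch, assuming the accumulated events are stored as a 2-D polarity map (our own illustrative representation):

```python
def density(polarity_map, xq, yq, theta):
    """Density of pixel (xq, yq) per Eq. (8): sum of |polarity| over a
    (2 * theta + 1)^2 neighbourhood, clipped at the image border.
    `polarity_map[y][x]` holds the pixel's polarity (0 = no event)."""
    h, w = len(polarity_map), len(polarity_map[0])
    return sum(abs(polarity_map[y][x])
               for y in range(max(0, yq - theta), min(h, yq + theta + 1))
               for x in range(max(0, xq - theta), min(w, xq + theta + 1)))
```

Pixels whose density falls below the threshold $\delta$ are then discarded as background noise.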


Fig. 7. An example for understanding the density of a pixel. For pixel ${p_q}$, there are six inlier points. Therefore, the density of the pixel ${p_q}$ equals 6.


Strategy 2 Coordinates Check: Considering the region of detonation products, for each pixel, if the angle between the X-coordinate axis and the ray connecting the explosive center to this pixel indicates that the pixel is positioned above the explosive center in the image field, the pixel is removed.

Strategy 3 Polarity Check: For the current frame j and a pixel ${p_q}$, we accumulate the polarities of all events triggered at this pixel up to frame j. Then, we select all the newly triggered pixels that were not triggered in frame $j - 1$. Since the shock wave propagates outward from the explosive center, a pixel related to the shock wave must occupy a new coordinate position and contain a newly triggered event. Hence, it must satisfy Eq. (14),

$$\left|{\mathop \sum \limits_{m = 1}^j {\varphi_m}({{x_q},{y_q}} )} \right|= 1 \cap {\varphi _j}({{x_q},{y_q}} )\ne 0 \cap {\varphi _{j - 1}}({{x_q},{y_q}} )= 0$$
where ${\varphi _m}({{x_q},{y_q}} )$ is the polarity of the latest event triggered at the pixel coordinate $({{x_q},{y_q}} )$ in frame m (0 if no event is triggered).
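The three conditions of Eq. (14) can be checked per pixel as sketched below, where `polarity_history[m]` (an assumed representation) holds the pixel's polarity in frame m, with 0 meaning no event:

```python
def is_new_shock_pixel(polarity_history, j):
    """Eq. (14): a pixel belongs to the shock front in frame j if its
    accumulated polarity magnitude is 1, it fired in frame j, and it
    was silent in frame j - 1. Frames are 1-indexed; index 0 is unused."""
    total = abs(sum(polarity_history[1:j + 1]))   # |sum of polarities up to j|
    return (total == 1
            and polarity_history[j] != 0          # fired in frame j
            and polarity_history[j - 1] == 0)     # silent in frame j - 1
```

Only pixels passing this test are kept as candidates for the outward-moving shock front.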

3.2.3 Sample with angle

During the shock wave propagation, some pixels are not triggered owing to various complicating factors, while others are still triggered within the shock wave range, making the data complex when fitting ellipses. Furthermore, when the camera pose has a slight pitch angle, the distribution of events becomes highly non-uniform. Hence, in this section, points are sampled evenly according to their angle to ensure a uniform data distribution during the ellipse fitting. The angle of a pixel is the included angle between the horizontal axis and the ray connecting the explosive center to this pixel.

For each angle, only one point is sampled. As depicted in Fig. 8, the explosive center is denoted by the yellow point, the red point represents the final sampled point for the angle, and the blue points indicate other points. All points near the ray within the distance $\theta $ (${p_1}$, ${p_2}$, ${p_3}$) are selected first, described as Eq. (15). Then, the furthest point ${p_3}$ is chosen as the sampled point because the shock front is always at the outermost point, which is denoted by Eq. (16),

$$Q \leftarrow {p_q},{d_q} < \theta $$
$$S \leftarrow \mathop {\textrm{max}}\limits_q ||ve{c_q}||_2$$
where Q is the set of candidate points near the ray, ${d_q}$ is the distance from point ${p_q}$ to the ray, $ve{c_q}$ is the vector from the explosive center to point ${p_q}$, and S is the sampling set. Finally, a rectangular sliding window checks all the sampled points and removes points with extreme density.
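The per-angle sampling of Eqs. (15)–(16) might be sketched as follows (the parameterization of the ray by a unit direction vector and the function name are our own formulation):

```python
import math

def sample_at_angle(points, center, angle, theta):
    """Sample one point per ray (Eqs. 15-16): keep candidates within
    distance theta of the ray from `center` at `angle`, then return
    the one farthest from the center (the shock front is outermost)."""
    cx, cy = center
    ux, uy = math.cos(angle), math.sin(angle)
    best, best_r = None, -1.0
    for (x, y) in points:
        vx, vy = x - cx, y - cy
        along = vx * ux + vy * uy            # projection onto the ray
        dist = abs(vx * uy - vy * ux)        # distance to the ray line
        if along > 0 and dist < theta:       # Eq. (15): candidate set Q
            r = math.hypot(vx, vy)
            if r > best_r:                   # Eq. (16): farthest point
                best, best_r = (x, y), r
    return best
```

Iterating this routine over a uniform grid of angles yields the homogenized sampling set S used in the ellipse fitting.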


Fig. 8. The strategy for sampling. Candidate points within threshold $\theta $ of the red ray are selected for each angle. Then, the farthest point from the explosive center is chosen as the sampled point, even though point ${p_3}$ lies farther from the angle ray than ${p_1}$.


3.2.4 Fit with ellipse

Solving for the major and minor axes of an ellipse directly with a known center requires only two points. However, these points come from events that do not strictly satisfy the ellipse's analytic equation, so direct solving yields a degenerate solution or no solution at all. In our work, the ellipse parameters are fitted by searching over the major and minor axes instead of solving for them. This approach enhances the robustness of the fitting algorithm, especially when dealing with near-degenerate ellipses. Furthermore, we employ the following strategies to reduce the search space and minimize computation: the major axis is not less than the abscissa of the vector extending from the explosive center to the farthest sampled point, while the minor axis is not less than the modulus of the vector extending from the explosive center to the nearest sampled point, yet smaller than the major axis. Therefore, the search space is chosen based on Eq. (17) and Eq. (18), which encode the strategies above,

$${\Omega _a} \in [{|{ve{c_{mq}}(1 )} |,\mu\| ve{c_{mq}}\|_2} ],with\; mq = \mathop {\textrm{max}}\limits_{q,q \in S} \|ve{c_q}\|_2\; $$
$${\Omega_b} \in [{{\textrm{min}}\|ve{c_q}\|_2,a}],q \in S$$
where $\Omega $ is the search space, $a,b$ are the major and minor axes, and $\mu $ is the tolerance factor. Then, we form the energy function in Eq. (19) and search for $a,\; b$ to minimize it,
$$\mathrm{{\cal L}}\mathrm{^{\prime}} = \mathop \sum \limits_{q = 1}^{{\textrm{Card}}(S)} |{Ax_q^2 + By_q^2 + C{x_q} + D{y_q} + 1} |$$
where $A,\; B,\; C, \; D$ are ellipse parameters that can be computed from $a,\; b,\; {x_c},$ and ${y_c}$, and q is the index over all points in S.
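A simplified sketch of the bounded search of Eqs. (17)–(19), using the centred axis-aligned form $(x-x_c)^2/a^2 + (y-y_c)^2/b^2 = 1$ in place of the general parameters $A, B, C, D$, an absolute-value residual, and a discrete grid over the candidate axes (all three simplifications are ours):

```python
def ellipse_energy(a, b, xc, yc, points):
    """Sum of absolute algebraic residuals of the centred, axis-aligned
    ellipse (x - xc)^2 / a^2 + (y - yc)^2 / b^2 = 1 over the samples."""
    return sum(abs((x - xc)**2 / a**2 + (y - yc)**2 / b**2 - 1)
               for (x, y) in points)

def fit_axes(points, xc, yc, a_candidates, b_candidates):
    """Grid search over candidate (a, b) pairs with b <= a, returning
    the pair with minimum energy (a stand-in for Eqs. 17-18 bounds)."""
    return min(((a, b) for a in a_candidates for b in b_candidates if b <= a),
               key=lambda ab: ellipse_energy(ab[0], ab[1], xc, yc, points))
```

Restricting the candidate ranges to the bounds of Eqs. (17)–(18) keeps the grid search small while ruling out degenerate fits.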

3.3 Motion measurement

This section first derives the geometric relationship in optical measurement between the ellipse parameters and the shock wave's radius. Then, the motion parameters of the shock wave are measured.

3.3.1 Geometric relationship

The following geometric relationship is derived to estimate the radius of the ground-level shock wave, as shown in Fig. 9.


Fig. 9. The geometric relationship in the measurement of shock waves.


Point C is the event camera, $\pi $ is the image plane, the explosive center O is projected as o in the plane $\pi $, P is the principal point of the event camera, the segment $OB$ is the radius of the shock wave on the ground level, and $oA$ is the major axis.

Since P is the principal point, we have $CP \bot \pi $, and segment $CP$ is the focal length. Since segment $oP$ lies in the plane $\pi $, we have $CP \bot oP$. We extract the two-dimensional coordinates of point o and calculate the pixel length of segment $oP$. The Pythagorean theorem then gives segment $oC$, as denoted by Eq. (20),

$$oC = \sqrt {{f^2} + {{\|({({{x_\beta } - {x_c}} ),({{y_\beta } - {y_c}} )} )\|}^2}} $$
where $f$ is the focal length obtained in the calibration step, ${x_\beta }$ and ${y_\beta }$ are the pixel coordinates of the principal point, and ${x_c}$ and ${y_c}$ are the extracted pixel coordinates of the explosive center.

Then, we extract the shock wave in the image plane $\pi $ and fit it with ellipse parameters. Since triangle $CoA$ is similar to triangle $COB$, the major axis $oA$ and the radius $OB$ satisfy the proportional relationship denoted by Eq. (21).

$$\frac{{oA}}{{oC}} = \frac{{OB}}{{OC}}$$

The major axis $oA$ is obtained from the ellipse fit. The distance $OC$ can be estimated by placing the calibration checkerboard near the three-dimensional explosive center. Combining Eq. (20) and Eq. (21), the radius of the shock wave is estimated via Eq. (22),

$$r = \frac{{ad}}{{\sqrt {{f^2} + \|({({{x_\beta } - {x_c}} ),({{y_\beta } - {y_c}} )} )\|_2^2} }}$$
where r is the radius of the shock wave, a is the major axis, and d is the distance between the optical center and the explosive center.
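A minimal numerical sketch of Eq. (22) follows; the function name and argument layout are ours, and all image-plane quantities are in pixels.

```python
import math

def shock_radius(a_pix, d_m, f_pix, principal, center):
    """Ground-level shock wave radius per Eq. (22): r = a*d / |oC|,
    with |oC| = sqrt(f^2 + |oP|^2) by the Pythagorean theorem, where
    oP joins the projected explosive center to the principal point."""
    dx = principal[0] - center[0]
    dy = principal[1] - center[1]
    oC_pix = math.sqrt(f_pix ** 2 + dx ** 2 + dy ** 2)
    return a_pix * d_m / oC_pix
```

When the explosive center projects exactly onto the principal point, $oC$ reduces to the focal length and $r = ad/f$.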

3.3.2 Motion parameters estimation

After calibrating the event camera and extracting the shock wave, the three-dimensional radius can be estimated using the geometric relationship above. Once the radius of each frame (obtained by accumulating events) is estimated with Eq. (22), the time information is used for velocity estimation. Owing to the rapid propagation of the shock wave, even a 1-pixel extraction error can produce a significant estimation error. To address this, the velocity of each event frame is estimated by sampling several frames, as in Eq. (23), which significantly reduces the error caused by dividing by small time differences,

$${v_j} = \frac{{{r_{j^{\prime}}} - {r_{j^{\prime\prime}}}}}{{\Delta t\Delta j}};with\; j^{\prime} = j + \lfloor\frac{{\Delta j}}{2}\rfloor,j^{\prime\prime} = j - \lceil\frac{{\Delta j}}{2}\rceil$$
where $\Delta j$ is the number of sampled event frames, $\Delta t$ is the accumulated time window, ${v_j}$ is the velocity of the frame j. Finally, the shock wave's radius and velocity results are obtained at each moment.
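The windowed difference of Eq. (23) can be sketched as follows; this is a hypothetical helper, and the boundary handling (returning None where the window leaves the sequence) is our choice.

```python
def frame_velocities(radii, dt, dj):
    """Central-difference velocity over a window of dj frames (Eq. (23)).
    radii: per-frame shock radii; dt: accumulation window; dj: frame span.
    Differencing over dj frames avoids dividing a tiny radius change by a
    tiny time step."""
    n = len(radii)
    v = []
    for j in range(n):
        jp = j + dj // 2          # j' = j + floor(dj/2)
        jm = j - (dj + 1) // 2    # j'' = j - ceil(dj/2)
        if jm < 0 or jp >= n:
            v.append(None)        # window falls outside the sequence
            continue
        v.append((radii[jp] - radii[jm]) / (dt * dj))
    return v
```

Note that $j' - j'' = \Delta j$, so the denominator $\Delta t\,\Delta j$ is exactly the elapsed time across the window.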

4. Experiment results

In our experiments, the event camera operates at an equivalent frame rate above 10,000 fps with an 8 mm lens. The explosive charge is 1 kg of TNT, placed at a height of approximately 1.5 m above the ground. The events generated by the shock wave are accumulated within a time window of 1000 μs, and the total measurement time is approximately 30 ms. First, the event camera is calibrated using the method described in section 3.1. Following calibration, the shock wave is extracted using the method described in section 3.2. Finally, the motion parameters of the shock wave are measured based on the calibration and extraction results.

4.1 Event camera calibration

The intrinsic and extrinsic parameters of our event camera are calibrated using the method described in section 3.1. Experiments in the laboratory and in the field are conducted to verify the effectiveness of our proposed plane lines-based calibration method. The number window $\Delta n$ in our experiments is chosen as $100000$. In the laboratory experiments, the event camera is moved to trigger events and acquire different poses; since background movement also triggers noise events, this setting tests the robustness of our method. The line-feature screen checkerboard used in the laboratory is illustrated in Fig. 10(a). In the field experiments, we employed the line-feature calibration checkerboard shown in Fig. 10(b). In this scenario the camera remains fixed, so events are triggered only by the observed checkerboard, and extracting control points becomes more straightforward because background noise is avoided. The control point extraction results in the laboratory and in the field are presented in the first and second rows of Fig. 11, respectively. From left to right are the original image, the line extraction results of LSD [34,35], the intersection extraction results, and the final control points. Overall, the control point extraction results demonstrate the effectiveness and robustness of our method.

Fig. 10. The plane line-feature checkerboard in our calibration task.

Fig. 11. Control point extraction results in the laboratory and field: the first row shows the laboratory results, and the second row shows the field results. (a) and (e) are the original pictures gained by accumulating events; (b) and (f) are the preliminary line results; (c) and (g) are candidate control points; (d) and (h) are the final control points.

After calibrating the event camera using the method introduced in section 3.1.3, we compare our results with manual extraction results, as shown in Table 2. The manual extraction is based on the calibration toolbox [33]. Additionally, we evaluate the reprojection error, depicted in Fig. 12. The better accuracy of the field calibration experiments may stem from the calibration board being smoother than the screen. The calibration accuracy of our proposed method exceeds that of manual extraction in both mean error and standard deviation. This experiment verifies the effectiveness of our calibration steps.

Fig. 12. The reprojection error for all control points: (a) our method in the laboratory, the maximum error is less than 0.94 pixels; (b) manual extracting results in the laboratory, the maximum error is less than 2.57 pixels; (c) our method in the field, the maximum error is less than 0.57 pixels; (d) manual extracting results in the field, the maximum error is less than 1.43 pixels.

Table 2. Calibration accuracy compared to manual extracting (Unit: pixel)

4.2 Shock wave extraction

Figure 13 shows the shock wave extraction results for four exemplar event frames. The original picture obtained by accumulating events is displayed in the first row (Fig. 13(a)). The second row (Fig. 13(b)) shows the remaining points after ROI extraction and noise removal. The third row (Fig. 13(c)) shows the points and segments generated during angle-based sampling; each red segment represents the vector from the explosive center to a sampled point. Finally, the shock wave extraction results, described by the fitted ellipse parameters, are shown in the last row (Fig. 13(d)). The extraction results are consistent with the shock front, verifying the effectiveness of our algorithm.

Fig. 13. Results of our shock wave extraction: (a) is the original accumulated picture; (b) and (c) are intermediate processing results; (d) is the final extraction result.

4.3 Motion parameters estimation

According to the geometric relationship derived in section 3.3, the radius of the shock wave can be estimated using Eq. (22). The detailed parameters are as follows: $f = 515.6$ pix, ${x_\beta } = 338.0$ pix, ${y_\beta } = 272.9$ pix, ${x_c} = 358.0$ pix, ${y_c} = 239.0$ pix. The radius-versus-time and velocity-versus-time curves are estimated using these parameters. Then, the upper and lower measurement bounds are analyzed according to Eq. (22) and Eq. (23). The partial derivatives of the radius r with respect to a and d are always positive, so a larger positive error in a or d produces a larger positive error in r. Similarly, the radius error is positively correlated with the negative error in the focal length f and with the negative error in the absolute values of ${x_\beta } - {x_c}$ and ${y_\beta } - {y_c}$. The upper and lower bounds of the radius error are therefore calculated from the following parameter errors: $\Delta a ={\pm} 3$ pix, $\Delta d ={\pm} 0.01$ m, $\Delta f ={\pm} 19.22$ pix, $\Delta {x_\beta } ={\pm} 1$ pix, $\Delta {y_\beta } ={\pm} 1$ pix, $\Delta {x_c} ={\pm} 9.84$ pix, $\Delta {y_c} ={\pm} 8.79$ pix. Since the intrinsic parameters, observation distance, and extracted explosion center remain unchanged during the observation, the velocity measurement error depends only on the major-axis extraction error once these parameters are fixed. The upper and lower bounds on the velocity measurement error are obtained by considering the extreme values.
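This corner-of-the-error-box evaluation can be sketched as a brute-force check over all sign combinations; the function name and its default error magnitudes (taken from the values quoted above) are our construction.

```python
import itertools
import math

def radius_bounds(a, d, f, x_b, y_b, x_c, y_c,
                  da=3.0, dd=0.01, df=19.22, dxb=1.0, dyb=1.0,
                  dxc=9.84, dyc=8.79):
    """Upper/lower bounds of the Eq. (22) radius, obtained by evaluating
    it at every corner of the parameter-error box. This is sufficient here
    because r is monotone in each parameter over the box, as argued in
    the text."""
    rs = []
    for sa, sd, sf, sxb, syb, sxc, syc in itertools.product((-1, 1), repeat=7):
        oC = math.sqrt((f + sf * df) ** 2
                       + ((x_b + sxb * dxb) - (x_c + sxc * dxc)) ** 2
                       + ((y_b + syb * dyb) - (y_c + syc * dyc)) ** 2)
        rs.append((a + sa * da) * (d + sd * dd) / oC)
    return min(rs), max(rs)
```

The nominal radius always lies strictly inside the returned interval when the error magnitudes are nonzero.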

All the results above are presented in Fig. 14(a) and (b); the blue curve is a polynomial fit to the scattered data points, the black curve is the upper error bound, and the pink curve is the lower error bound. We observe that the radius of the shock wave increases over time with a gradually decreasing growth rate; correspondingly, the propagation speed of the shock wave gradually decreases.

Fig. 14. Our measurement results of shock wave motion parameters.

To verify the performance of our method, pressure sensor measurements are used for comparison, as shown in Fig. 14(c). Owing to uncertain factors in the experiment, measurements are taken in multiple directions for mutual verification. Pressure sensors PCB-113B (produced by Suzhou Hefu Electronics Co., Ltd) are therefore placed to the north and west at distances of 1, 2, 3, 4, 6, 8, and 10 meters from the explosive center, as shown in Fig. 9. However, the data from the sensors at 1 and 2 meters were lost due to damage during the experiment, so we compare against the remaining sensors. The shock wave overpressure measured by the pressure sensors is converted into velocity using the Rankine-Hugoniot condition from [37],

$$P = \frac{{2\gamma }}{{\gamma + 1}}({{M^2} - 1} ){P_s}\; $$
where P is the overpressure, $\gamma = 1.4$ is the specific heat ratio, M is the Mach number, and ${P_s} = 101.325\textrm{ kPa}$ is the ambient atmospheric pressure. We then obtain the velocity change curves of the shock wave, as presented in Fig. 14(c). The green and yellow curves represent the velocities derived from the pressure measurements in the two sensor directions.
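Inverting Eq. (24) for the Mach number and converting to a propagation speed can be sketched as follows; the ambient sound speed c0 ≈ 340 m/s is our assumed value, not given in the paper.

```python
import math

def overpressure_to_velocity(P_kpa, c0=340.0, gamma=1.4, Ps_kpa=101.325):
    """Invert Eq. (24), P = (2*gamma/(gamma+1)) * (M^2 - 1) * Ps, for the
    Mach number M, then convert to shock speed v = M * c0 (m/s)."""
    M = math.sqrt(1.0 + (gamma + 1.0) * P_kpa / (2.0 * gamma * Ps_kpa))
    return M * c0
```

At zero overpressure the relation reduces to M = 1, i.e., the shock front has decayed to an ordinary sound wave.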

Furthermore, since our experiment is an ordinary ground explosion, the empirical formulas Eq. (25) and Eq. (26) [38–40] for shock wave overpressure are used for comparison,

$$Z = \frac{r}{{\sqrt[3]{W}}}$$
$$P = \frac{{0.102}}{Z} + \frac{{0.399}}{{{Z^2}}} + \frac{{1.26}}{{{Z^3}}}$$
where r is the radius of the shock wave, W is the TNT explosive equivalent, Z is the scaled distance, and P is the shock wave overpressure. Hence, the velocity predicted by Eq. (25) and Eq. (26) is included in Fig. 14(c).
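Eqs. (25) and (26) can be evaluated directly; in this sketch we take the overpressure in MPa with r in meters and W in kilograms of TNT, a common convention for these coefficients but our own reading here.

```python
def scaled_overpressure(r_m, W_kg):
    """Empirical ground-burst overpressure (Eqs. (25)-(26)):
    Z = r / W**(1/3) is the scaled distance, and
    P = 0.102/Z + 0.399/Z**2 + 1.26/Z**3."""
    Z = r_m / W_kg ** (1.0 / 3.0)
    return 0.102 / Z + 0.399 / Z ** 2 + 1.26 / Z ** 3
```

For the 1 kg charge used in our test, Z is numerically equal to the radius in meters, and the predicted overpressure decays rapidly with distance.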

Our event-based optical measurement results are very close to those of the pressure sensors and to the empirical-formula predictions. The measurement errors at distances of 3, 4, 6, 8, and 10 meters are summarized in Table 3. The pressure sensor results indicate that the shock wave velocity gradually decays and that the attenuation rate slows over time; remarkably, our method observes the same attenuation pattern as the pressure sensors. The relative measurement error compared to the pressure sensors ranges from 0.33% at the lowest to 7.58% at the highest. These experimental results show that our motion measurement method is practical and feasible.

Table 3. The error in our measurement result (“PSN” means Pressure Sensors (north), “PSW” means Pressure Sensors (west), “EFP” means Empirical formula prediction)

5. Conclusions

In this paper, the event camera is applied to measure the motion of an explosive shock wave for the first time. This work leverages the advantages of event cameras, such as high dynamic range, high temporal resolution, and low cost, expanding the application of optical measurement in conventional explosion tests. We propose a complete algorithmic pipeline for handling asynchronous, sparse event streams to measure shock waves, comprising three parts: plane lines-based calibration, shock wave extraction, and motion measurement. To verify the performance of our method, we compare our measurement results in the TNT explosion test with pressure sensor results and empirical formula predictions. The experimental results verify the rationality and effectiveness of our measurement procedure.

Funding

National Natural Science Foundation of China (12372189); Natural Science Foundation of Hunan Province for Excellent Young Scholars (2023JJ20045).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. F. Lu, X. Li, Z. Tian, et al., Weapon Damage and Assessment (Science Press, 2021).

2. N. Du, W. Xiong, T. Wang, et al., “Study on energy release characteristics of reactive material casings under explosive loading,” Def. Technol. 17(5), 1791–1803 (2021). [CrossRef]  

3. N. H. Guertin, “Director, Operational Test and Evaluation FY2022 Annual Report,” (2023).

4. X. Ye, J. Su, and J. Ji, “Review of effect target method for shock wave measurement,” J. Ordnance Equip. Eng. 40(12), 55–61 (2019).

5. X. Li, Z. Duan, and J. Ji, “Research on vibration compensation technology of pressure sensors in shock wave overpressure measurement,” J. Phys.: Conf. Ser. 2369, 012075 (2022).

6. H. Du and J. Zu, “Research on wireless shockwave overpressure test system,” Fire Control & Command Control. 37(1), 198–200 (2012).

7. C. Ma, H. Chao, T. Zhang, et al., “A study of reliable triggering technology for shock wave overpressure testing in the actual combat environment,” International Conference on Defence Technology (2018).

8. Z. Xiong and C. Bai, “Study of fuel-air explosive weapon power evaluation indexes,” Chin. J. Explos. Propellants. 2, 19–22 (2002).

9. J. Zu, T. Ma, J. Fan, et al., New Concept Dynamic Testing (National Defense Industry Press, 2016).

10. H. Li, X. Zhang, and K. Yan, “Warhead fragment position measurement method by using two light field cameras,” Microw. Opt. Technol. Lett. 61(12), 2910–2918 (2019). [CrossRef]  

11. B. Jim, O. Eric, and S. George, “Stereo Camera Optical Tracker,” ITEA Instrumentation Conference (2016).

12. D. Cho and T. Lee, “A review of bioinspired vision sensors and their applications,” Sens. Mater. 27(6), 447–463 (2015).

13. G. Gallego, T. Delbruck, G. Orchard, et al., “Event-based vision: a survey,” IEEE Trans. Pattern Anal. Mach. Intell. 44(1), 154–180 (2022). [CrossRef]  

14. S. Li, A. Zhang, R. Han, et al., “Experimental and numerical study of two underwater explosion bubbles: Coalescence, fragmentation and shock wave emission,” Ocean Eng. 190, 106414 (2019). [CrossRef]  

15. J. Mur, F. Reuter, J. Kočica, et al., “Multi-frame multi-exposure shock wave imaging and pressure measurements,” Opt. Express 30(21), 37664–37674 (2022). [CrossRef]  

16. G. Zhang, “Experimental study on shock wave propagation of the explosion in a pipe with holes by high-speed schlieren method,” Shock Vib. 2020, 1–9 (2020). [CrossRef]  

17. S. Pierre, L. Pierre, H. Frederic, et al., “High-speed imaging optical techniques for shockwave and droplets atomization analysis,” Opt. Eng 55(12), 121706 (2016). [CrossRef]  

18. J. Bolton, B. Thurow, F. Alvi, et al., “Single camera 3D measurement of a shock wave-turbulent boundary layer interaction,” Aerospace Sciences Meeting.

19. J. Higham, O. Isaac, and S. Rigby, “Optical flow tracking velocimetry of near-field explosions,” Meas. Sci. Technol. 33(4), 047001 (2022). [CrossRef]  

20. S. Rigby, R. Knighton, S. Clarke, et al., “Reflected Near-field Blast Pressure Measurements Using High Speed Video,” Exp Mech 60(7), 875–888 (2020). [CrossRef]  

21. M. Gomez, S. Grauer, J. Ludwigsen, et al., “Megahertz-rate background-oriented schlieren tomography in post-detonation blasts,” Appl. Opt. 61(10), 2444–2458 (2022). [CrossRef]  

22. O. Kyle and J. Michael, “Three-dimensional shock wave reconstruction using multiple high-speed digital cameras and background-oriented schlieren imaging,” Exp. Fluids 60(6) (2019).

23. M. Mahowald, “VLSI analogs of neuronal visual processing: a synthesis of form and function,” Calif. Inst. Technol. (1992).

24. L. Steffen, D. Reichard, J. Weinland, et al., “Neuromorphic stereo vision: a survey of bio-inspired sensors and algorithms,” Front. Neurorobot. 13, 28 (2019). [CrossRef]  

25. Y. Suh, S. Choi, M. Ito, et al., “A 1280×960 dynamic vision sensor with a 4.95-μm pixel pitch and motion artifact minimization,” International Symposium on Circuits and Systems (IEEE, 2020).

26. T. Finateu, A. Niwa, D. Matolin, et al., “5.10 A 1280×720 back-illuminated stacked temporal contrast event-based vision sensor with 4.86µm pixels, 1.066GEPS readout, programmable event-rate controller and compressive data-formatting pipeline,” IEEE International Solid-State Circuits Conference, 112–114 (2020).

27. Y. Zhou, G. Gallego, X. Lu, et al., “Event-based motion segmentation with spatio-temporal Graph cuts,” IEEE transactions on neural networks and learning systems 1–13 (2021).

28. C. Parameshwara, N. Sanket, and A. Gupta, “MOMS with events: multi-object motion segmentation with monocular event cameras,” (2020).

29. H. Rebecq, G. Gallego, E. Mueggler, et al., “EMVS: event-based multi-view stereo—3D reconstruction with an event camera in real-time,” Int J Comput Vis 126(12), 1394–1414 (2018). [CrossRef]  

30. S. Hu, Y. Kim, H. Lim, et al., “eCDT: event clustering for simultaneous feature detection and tracking,” IEEE/RSJ International Conference on Intelligent Robots and Systems (2022).

31. M. J. Domínguez-Morales, Á. Jiménez-Fernández, and G. Jiménez-Moreno, “Bio-inspired stereo vision calibration for dynamic vision sensors,” IEEE Access. 7, 138415 (2019). [CrossRef]  

32. E. Mueggler, B. Huber, and D. Scaramuzza, “Event-based, 6-DOF pose tracking for high-speed maneuvers,” IEEE/RSJ Int. Conf. Intell. Robot. Syst., 2761–2768 (2014).

33. M. Muglikar, M. Gehrig, D. Gehrig, et al., “How to Calibrate Your Event Camera,” IEEE/CVF Conference on Computer Vision and Pattern Recognition, 1403 (2021).

34. Y. Salaun, R. Marlet, and P. Monasse, “Multiscale line segment detector for robust and accurate SfM,” International Conference on Pattern Recognition (2016).

35. R. Gioi, J. Jakubowicz, J. Morel, et al., “LSD: a line segment detector,” Image Processing On Line. 2(4), 35–55 (2012). [CrossRef]  

36. Z. Zhang, “A flexible new technique for camera calibration,” IEEE Trans. Pattern Anal. Machine Intell. 22(11), 1330–1334 (2000). [CrossRef]  

37. H. Yu, J. Deng, and L. Zhou, Fluid Mechanics (Mechanical Industry Press, 2022).

38. C. Knock, N. Davies, and T. Reeves, “Predicting Blast Waves from the Axial Direction of a Cylindrical Charge,” Propellants, Explos., Pyrotech. 40(2), 169–179 (2015). [CrossRef]  

39. W. Paul, Explosives Engineering (Wiley, 2018).

40. Z. Han, B. Wang, J. Li, et al., Explosion Theory (Beijing Institute of Technology Press, 2022).





