Optica Publishing Group

Noninvasive material anisotropy estimation using oblique incidence reflectometry and machine learning

Open Access

Abstract

Anisotropy reveals interesting details of the subsurface structure of a material. We aim at noninvasive assessment of material anisotropy using as few measurements as possible. To this end, we evaluate different methods for detecting anisotropy when observing (1) several sample rotations, (2) two perpendicular planes of incidence, and (3) just one observation. We estimate anisotropy by fitting ellipses to diffuse reflectance isocontours, and we assess the robustness of this method as we reduce the number of observations. In addition, to support the validity of our ellipse fitting method, we propose a machine learning model for estimating material anisotropy.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Optical imaging and spectroscopic techniques are widely used to assess the attributes of different materials [1–3]. The chemical composition of a material (dissolved substances) affects its optical absorption, while the density, shape, and size distribution of scattering particles determine its scattering behavior. Several techniques have been developed for estimating optical properties of materials based on approximate solutions of radiative transfer, and these have been applied in numerous fields [3–6]. Many techniques assume a homogeneous, isotropic medium, and scattering anisotropy is sometimes considered. Scattering anisotropy refers to the preferred direction of the scattered light relative to the direction of the incident light. This kind of anisotropy is different from material anisotropy, where the optical properties (including scattering anisotropy) change with the global rotation of the medium. If we illuminate a fixed point on the surface of a flat medium exhibiting material anisotropy, the scattering of light (including scattering anisotropy) changes with rotation of the medium around the surface normal.

Estimation of material anisotropy is important in various fields. In the medical field, Elsheikh et al. [7] attempted to use anisotropy to evaluate corneal abnormalities. Pierpaoli et al. [8] proposed that the development of quantitative magnetic resonance imaging (MRI) measurements of diffusion anisotropy may have important biological and clinical applications. Anisotropy is also important in additive manufacturing with respect to understanding the optical and mechanical properties of printed objects [9–12]. In addition, anisotropic properties are used in food science for estimating meat quality and textural attributes such as tenderness [13,14].

Many techniques for measuring anisotropy, however, require several observations [8] or use mechanical stretching, which may cause irreversible damage to the samples [7]. Noninvasive methods may be preferred, especially for inline observations. On the other hand, noninvasive spectroscopic measurements usually require a large range of wavelengths. Saeys et al. [15] used three wide-spectrum (350–2200 nm) measurements, two of them involving an integrating sphere, to estimate the optical properties of samples. Similarly, López et al. [16] measured potato tissue samples using a double-integrating sphere (DIS), with two detectors set on the sphere wall to measure different wavelength ranges from 500–1900 nm. Many other studies are similar [17–19].

The measurement methods mentioned above are usually cumbersome, and applying them to assess anisotropy in a production setting can be difficult. It is a challenge to have onsite, fast feedback on quality when samples must be taken out of a line for invasive quality assessment. Inline, fast assessment of quality would result in less waste and more sustainable production. In the dairy industry, for example, anisotropy has a great influence on the properties of mozzarella [20,21]. In a medium like mozzarella, the anisotropy of the medium is linked to its extensional viscosity, and this affects the meltability and stretchability of the cheese. These properties define the functional quality of the product. Obtaining a desired anisotropy is therefore very important in mozzarella production. Anisotropy can be observed at various length scales, visually, or using rheological techniques or microscopic techniques such as confocal microscopy [22]. However, these measurements require extracting and preparing samples, which is disruptive, time-consuming, and not practical for inline use.

In this work, we look for a practical alternative to invasive methods and focus on noninvasive anisotropy detection using as few observations as possible. Our objective is a general and practical method for distinguishing between isotropic and anisotropic subsurface microstructure. To meet this objective while keeping our method noninvasive, we use dual direction oblique incidence reflectometry.

2. Related work

2.1 OIR and anisotropy

Oblique incidence reflectometry (OIR) is analysis of the spatially resolved diffuse reflectance observed when a sample is illuminated in a small spot at an oblique angle. OIR was introduced by Wang and Jacques [23]. The concept is to estimate the shift between the entry point of the light and a diffusion center in the reflectance profile. Finding the shift enables estimation of the optical properties (absorption and reduced scattering coefficients) [24–26]. If these properties are different for different rotations of the sample around the plane of incidence, the observed material is anisotropic. Marquez et al. [27] applied this technique to chicken breast tissue and detected anisotropy. These authors used observations for two perpendicular rotations of the sample. Inspired by such previous work, we perform OIR with two sets of two beams, where each set has perpendicular planes of incidence and the two sets have different wavelengths (short/violet and long/red). However, we found that considering only profiles did not provide a robust detection of anisotropy across a broader variety of materials. We thus decided to include more information and fit ellipses to isocontours of the oblique incidence reflectance.

2.2 Isocontours of diffuse reflectance images

Some of the first work using image intensity isocontours to investigate material anisotropy is that of Nickell et al. [28]. They estimated scattering properties and used a reflectometry system to capture isocontours of images on the abdomen. Interestingly, they found that the shape of the isocontours would change with the distance to the illumination center point and suggested using the eccentricity of the isocontour curves as a measure of the anisotropy of the light propagation. We follow up on this suggestion.

Investigating light scattering in teeth, Kienle et al. [29] studied the dependence of light propagation on the microstructure of a turbid medium. They applied the theory of scattering by an infinitely long cylinder to simulate the propagation of light in a dentin slab. Employing laser scanning microscopy (LSM), they visualized the three-dimensional arrangement of tubules in human teeth and found that the obtained scattering pattern had a ring-like structure associated with it. Later, the same research team posited that the elliptical isocontours observed in spatially-resolved diffuse reflectance images of many biological materials could be attributed to scattering from cylindrical structures [30]. Based on these observations, they used spatially and time-resolved reflectance to analyze the bovine Achilles tendon. They acquired reflectance contours with an elliptical shape and found the ellipse direction perpendicular to the direction of collagen fibers when close to the incident illumination but parallel to this direction at larger distances [31]. Similar phenomena have been demonstrated by the same group [32,33] in studies exploring the validity of the anisotropic diffusion equation. These researchers observed the elliptical pattern of spatially-resolved diffuse reflectance due to light scattered in an anisotropic medium. However, they did not fit ellipses to the patterns in order to determine whether a material is anisotropic or not.

Ranasinghesagara et al. [34] noticed a unique optical reflection pattern from fresh prerigor skeletal muscle, neither circular as in an intralipid solution nor elliptical as in fibrous extrudates. They developed a numerical fitting function for this to quantify the isocontours of the acquired reflection images. In subsequent research [35], they estimated tenderness in beef muscles by this numerical fitting method using two parameters derived from the isocontour profile of the reflectance pattern. As a follow-up on this study, Van Beers et al. [36] used the numerical fitting first developed by Ranasinghesagara et al. [34] to investigate the effect of anisotropic structure on light propagation in different beef muscle tissues. They concluded that anisotropic light propagation depends on the initial fiber orientation, muscle type, and wavelength. This related work focuses on how the reflectance patterns vary between an ellipse and a rhombus based on the tendency of the skeletal muscle. Our work has a broader focus on the anisotropy of materials. To have a method that does not depend on prior knowledge of the subsurface structure of the material, such as optical density and orientation of the fibers, we use the previously mentioned OIR with two sets of two beams.

Cha et al. [37] followed up on the work by Kienle et al. [30] also taking advantage of the isocontours of the scattering pattern. They utilized spatially-resolved diffuse reflectance of infrared (IR) light to measure fiber orientation in human skin. This was done using the ellipse orientation at a certain distance from the entry area of the light, which confirms the relationship between anisotropy and ellipse orientation that we further explore.

The mentioned related work supports the use of isocontours in diffuse reflectance images for analyzing the propagation of light in anisotropic materials. Some researchers [28,30,31,36,37] have observed changes in ellipse eccentricity and ellipse orientation in anisotropic materials, but they did not determine which of these two parameters is more indicative of the anisotropy of the material. In our machine learning model, we find that the ellipse orientation is more reliable with respect to judging the anisotropy of the material. Thus, we combine isocontour analysis methods and oblique incidence reflectometry to analyze the anisotropy of various materials, and we employ a machine learning model to establish a correlation between isocontours and anisotropy.

3. Experiments

3.1 Instrumentation

A customized oblique incidence reflectometry device (VideometerSLS by Videometer, Denmark) was used to collect diffuse reflectance images for several samples. The device has two violet lasers (405 nm) and two red lasers (650 nm). These are of the type Acculase with pulse width modulation (PWM), and the output is 5 mW for both 405 and 650 nm. The spot diameters of the lasers are around 2.5 mm. The lasers are monochromatic and have fixed wavelengths. The four lasers are turned on sequentially in the order Violet1, Violet2, Red1, Red2. When a laser is on, the camera records an image. After four cycles, four images have been recorded, and the software by Videometer combines them into a layered image with image layers 1–4 corresponding in order to the sequence of the four lasers listed above. The time needed to acquire a 4-layer image is less than 1 second. In Fig. 1, the four lasers are marked by their names. It is worth noting that the beams of the instrument are circular. We adjust the strobe time of the lasers based on the reflectivity of the materials to avoid overexposed images: for low reflectivity, a longer strobe time is used, and vice versa. Sources and samples are secured in an enclosure to prevent ambient light pollution.


Fig. 1. Illustration of our instrument for oblique incidence reflectometry. The view direction of the camera is perpendicular to the horizontal plane and the angle between the laser and the horizontal plane is $60^{\circ }$. The rotation stage is adjustable in height.


Figure 1 is a schematic diagram of the instrument. From the top view, the irradiation directions of the lasers of the same color are perpendicular to each other. The emitted laser spots are circular and linearly polarized parallel to the plane spanned by the irradiation direction and the view direction of the camera. The angle of incidence is $30^{\circ }$ if the sample surface is smooth and aligned with the horizontal plane. The sample is placed on a rotation stage inside the device. The height of the stage is carefully adjusted to ensure that the four lasers irradiate the same position on the sample surface. The stage can be manually rotated at 10-degree intervals. Four jigs on the stage help us fix the samples in a specific location for every measurement. A high-resolution ($5472\times 3648$), monochrome, 12-bit CMOS camera is placed at the top of the device to capture grayscale images of the sample.

3.2 Materials preparation

We prepared the following materials expected to be isotropic (X-Rite ColorChecker White Balance Target, milk, chocolate, semi-hard cheese, 3D printed isotropic object) and materials expected to be anisotropic, which visually showed aligned fibers (mozzarella cheese, bamboo, marshmallow, chicken breast, 3D printed anisotropic object) for our experiments.

3.2.1 Isotropic material

ColorChecker White Target. The ColorChecker White Balance Target by X-Rite/Calibrite is produced for camera white balancing. According to the manufacturer, it was engineered to be spectrally flat and to provide a neutral point of reference across different types of lighting conditions. Thus, the White Balance Target should exhibit consistent reflectance in all directions. This sample is thus expected to exhibit isotropy superior to that of regular paper.

Milk. We use a homogenized commercial sample (Arla Mini 0.4%). This is ordinary milk containing randomly distributed fat and protein particles, which makes it an isotropic material.

Chocolate. Chocolate is liquid when made and then solidifies. Although it forms microscopic crystals, it does not show laminar or fiber alignment, so we consider it isotropic at the tested length scale.

Semi-hard cheese. Unlike mozzarella cheese, semi-hard cheese (Danbo) is produced by forming the curd and pressing it into a homogeneous mass which when cut into slices or cubes exhibits no visible anisotropy.

3D printed polymeric block. We printed a block of size $25\times 50\times 10$ mm$^3$ using liquid-crystal display (LCD) 3D printing technology (Elegoo Saturn). The layer thickness was 0.05 mm, exposure time was 2.5 s with retract and lifting speeds at 210 mm/s and 70 mm/min, respectively, and we had anti-aliasing activated. This type of 3D printing is referred to as vat photopolymerization. A related vat photopolymerization technology has reached a percentage of mechanical isotropy of more than 95% [38], and according to Ward [39] mechanical and optical properties are correlated. We thus printed our objects with an expectation of reaching a reasonable degree of optical isotropy in our samples.

3.2.2 Anisotropic material

Mozzarella cheese. We used a commercial sample of wet mozzarella (Galbani), which is considered anisotropic. The white bulk has elongated fibers. A cheese cutter was used to cut the mozzarella to ensure it has a smooth and horizontal surface.

Bamboo. We used bamboo from chopsticks. Bamboo has a fiber structure; it is an ideal anisotropic material, as we can observe the fiber direction.

Marshmallow. Marshmallows can exhibit anisotropy if stretched, and the direction of the fiber is aligned with the stretch direction.

Chicken breast. Chicken breast, as it is composed of muscle tissue, is anisotropic in nature, with protein fibers that have a visible direction. The optical anisotropy of this material has been confirmed in previous work [27].

3D printed anisotropic block. We used fused filament fabrication 3D printing technology (Creality Ender 3) to print a block of size $25\times 50\times 10$ mm$^3$. The layer thickness was 0.12 mm, print speed at 50 mm/s and temperature at 200$^{\circ }$C. This printing technology deposits a filament as a printhead moves along a path. We created a path resulting in a material interior that exhibits visible optical material anisotropy.

3.3 Experiments process

We position the container holding the samples in the center of the device stage. To ensure that the four lasers hit the same spot on the sample surface, we activate the live mode, where the four lasers light up in sequence. This allows us to adjust the height of the stage to align the four illumination spots as closely as possible. We monitor the brightness of the spots and adjust the strobe time if needed. We mark the starting position as 0 degrees of rotation, capture a layered image of the sample, rotate the stage by 10 degrees, capture another layered image, and repeat until we have 19 images with rotation angles from 0 to 180 degrees.

4. Methods

The laser spots created by laser beams of the same wavelength incident on a sample surface from different directions will have differently shaped elliptical isocontours if the sample has a fiber structure. Ranasinghesagara et al. [40] reported that the fiber direction of anisotropic muscle tissue would influence light propagation [34]. Van Beers et al. [36] and Binzoni et al. [41] also found that photons tend to travel along the muscle fiber. In addition, Monte Carlo simulations confirm the general principle that light has a higher probability of scattering along the direction of the fibers [30].

4.1 Ellipse fitting method

Given an image of a laser spot on a sample surface, each pixel has image coordinates $x, y$ and a pixel value $I_{xy}$. We use the direct least squares method of Fitzgibbon et al. [42] to fit an ellipse to the image coordinates of pixels with values above a selected threshold $I_0$. To calculate the distance between a point and an ellipse, we use the following implicit second-order polynomial:

$$F(\mathbf{a,x})= \mathbf{x}^T\mathbf{a} = ax^2 +bxy+cy^2+dx+ey+f=0,$$
where $\mathbf {a}=[a\:b\:c\:d\:e\:f]^T$ and $\mathbf {x} = [x^2\:\,xy\,\:y^2\:x\:y\:1]^T$. Here, we have the freedom to arbitrarily scale the parameters in $\mathbf {a}$, so we incorporate the scaling into a constraint and impose the equality constraint $4ac-b^2=1$, which may be expressed in the matrix form $\mathbf {a}^T\mathbf {C}\mathbf {a}=1$. Given a set $A$ of $n$ points $\mathbf {x}_i=(x_i, y_i), i=1,\dots,n$, the sum of squared distances is
$$\mathcal{D}_A(\mathbf{a})=\sum_{i=1}^n \left(F(\mathbf{a}, \mathbf{x}_i)\right)^2,$$
which we rewrite as
$$\mathcal{D}_A(\mathbf{a}) = \sum_{i=1}^n\mathbf{a}^T\mathbf{x}_i \mathbf{x}_i^T\mathbf{a}=\mathbf{a}^T\mathbf{Sa}$$
with $\mathbf{S}=\sum_i \mathbf{x}_i\mathbf{x}_i^T$. Thus, the ellipse fitting problem becomes the following optimization:
$${\arg\min_{\mathbf{a}}(\mathbf{a}^T\mathbf{Sa})} \quad \textrm{subject to}\; \mathbf{a}^T\mathbf{Ca} = 1.$$

Introducing the Lagrange multiplier $\lambda$, we form the Lagrangian

$$\mathcal{L}(\mathbf{a},\lambda)=\mathbf{a}^T\mathbf{Sa}-\lambda (\mathbf{a}^T\mathbf{Ca}-1)\,.$$

Setting $\nabla \mathcal{L}(\mathbf{a},\lambda)=\mathbf{0}$, we obtain
$$\mathbf{Sa}=\lambda\mathbf{Ca}\,.$$

This leaves us with a system of equations for the minimization of $\mathcal {D}_A (\mathbf {a})$ subject to $4ac-b^2=1$ that yields exactly one solution. This solution corresponds, by virtue of the constraint, to an ellipse. Figure 2 shows an example of this ellipse fitting method.
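The constrained minimization above can be sketched in a few lines of NumPy. This is our own minimal illustration of a direct least-squares fit in the spirit of Fitzgibbon et al. (not the authors' code); `ellipse_params` recovers the geometric parameters from the conic coefficients using standard conic-to-geometry formulas:

```python
import numpy as np

def fit_ellipse(x, y):
    """Direct least-squares ellipse fit: minimize a^T S a
    subject to the constraint a^T C a = 4ac - b^2 = 1."""
    D = np.column_stack([x*x, x*y, y*y, x, y, np.ones_like(x)])  # design matrix
    S = D.T @ D                                   # scatter matrix
    C = np.zeros((6, 6))                          # constraint matrix
    C[0, 2] = C[2, 0] = 2.0
    C[1, 1] = -1.0
    # Generalized eigenproblem S a = lambda C a, solved via inv(S) C
    _, vecs = np.linalg.eig(np.linalg.solve(S, C))
    vecs = np.real(vecs)
    # Exactly one eigenvector satisfies 4ac - b^2 > 0: that one is the ellipse
    cond = 4*vecs[0]*vecs[2] - vecs[1]**2
    a = vecs[:, np.argmax(cond)]
    return a / np.sqrt(4*a[0]*a[2] - a[1]**2)     # rescale so 4ac - b^2 = 1

def ellipse_params(coef):
    """Convert conic coefficients [a, b, c, d, e, f] to center,
    semi-axes, and the angle of the major axis."""
    A, B, C, D, E, F = coef
    if A + C < 0:                                 # fix the overall sign ambiguity
        A, B, C, D, E, F = -A, -B, -C, -D, -E, -F
    den = B*B - 4*A*C                             # negative for an ellipse
    x0 = (2*C*D - B*E) / den                      # ellipse center
    y0 = (2*A*E - B*D) / den
    q = 2*(A*E*E + C*D*D - B*D*E + den*F)
    r = np.sqrt((A - C)**2 + B*B)
    major = -np.sqrt(q*(A + C + r)) / den         # semi-major radius
    minor = -np.sqrt(q*(A + C - r)) / den         # semi-minor radius
    theta = 0.5*np.arctan2(-B, C - A) % np.pi     # major-axis angle in [0, pi)
    return (x0, y0), major, minor, theta
```

In practice, `x` and `y` would be the image coordinates of the pixels with $I_{xy} > I_0$; the returned parameters correspond to the five ellipse parameters used in Sec. 4.2.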


Fig. 2. Fitting ellipses (outlines) to the image coordinates of pixels with values larger than $I_0 = 150$ (red points) for the case of milk. We show the result for the four different image layers (a–d) corresponding to the different laser beams in the instrument (Violet1, Violet2, Red1, Red2). The green line represents the direction of the major axis of the ellipse, with the 12 o’clock direction defined as 0 degrees. The blue line in (d) is the ellipse direction obtained from the PCA analysis described in Sec. 6.1. The result of this is only slightly different from the ellipse fitting method.


4.2 Image analysis

When capturing data, we obtain a raw image with four layers for each configuration, that is, each rotation of each sample. The image layers are an image for each of the lasers: Violet1, Violet2, Red1, and Red2, respectively (see Fig. 1). We select a $400\times 400$ pixels region of interest (ROI) using an axis-aligned square with the midpoint of the brightest area as its center. Since the brightness of each image layer can vary, we perform ellipse fitting on each image layer individually. To observe potentially different scattering patterns at different distances from the entry area of the laser, we perform 11 fits for each image layer. Our decision to use 11 fits is a compromise between use of computational resources and the number of data points needed to understand the behavior of the ellipse shape and orientation for different intensity levels. We select pixel intensity thresholds $I_0$ such that the pixels with values above the threshold ($I_{xy} > I_0$) cover a particular fraction of the image. Increasing from the minimum pixel intensity, we pick the first threshold when the coverage is 75% of the total image and the last threshold when the coverage reaches 5%. In between, we pick evenly spaced intensity thresholds, so that 11 ellipses in total are fitted for a single image layer. The fitted ellipse has five parameters: the $x$ and $y$ coordinates of its center, the semi-major radius $a$, the semi-minor radius $b$, and the ellipse angle. For the ellipse angle, we use the angle of the semi-major axis with the 12 o’clock direction in the image. In addition, we use the ellipse eccentricity

$$e = \sqrt{1 - \frac{b^2}{a^2}}$$
to evaluate whether ellipses are similar or not.
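The coverage-based threshold selection and the eccentricity computation can be sketched as follows. This is a hedged illustration: the quantile-based implementation of `coverage_thresholds` is our assumption about how evenly spaced coverages between 75% and 5% may be turned into intensity thresholds; the function names are ours.

```python
import numpy as np

def coverage_thresholds(img, n=11, cov_first=0.75, cov_last=0.05):
    """Choose n intensity thresholds so that the pixels above each threshold
    cover evenly spaced fractions of the image, from cov_first down to cov_last.
    A coverage of c above I0 means I0 is the (1 - c) intensity quantile."""
    coverages = np.linspace(cov_first, cov_last, n)
    return np.quantile(img, 1.0 - coverages)

def eccentricity(a, b):
    """Ellipse eccentricity e = sqrt(1 - b^2 / a^2) for semi-axes a >= b."""
    return np.sqrt(1.0 - (b / a)**2)
```

For a circular beam observed at $\theta = 30^\circ$ incidence, `eccentricity(1.0, np.cos(np.pi/6))` gives the theoretical value 0.5 used in Sec. 5.1.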

4.3 Rotation series analysis

For each sample of each experiment, we have $19\times 4$ images of rotation angles from 0 to 180 degrees with intervals of 10 degrees. We can thus observe the scattering pattern trends when the sample is rotated. When an anisotropic sample is rotated, we expect that the scattering patterns obtained at different angles will be different, which will lead to changes in the fitted isocontours. We apply our analysis method to all image layers of the different layered images in a rotation series and calculate the average value of the ellipse angle and the eccentricity for the 11 different pixel intensity thresholds.

5. Results

5.1 Single image

We first analyze our captured image data based on the method described in Section 4 to assess the anisotropy of the different materials included in the study.

Eccentricity. In the configuration of the instrument, the expected eccentricity of an observed laser spot incident on a flat perfectly diffuse material aligned with the horizontal plane should be 0.5: because the beam is circular, $b/a=\cos \theta$, where $\theta =30^\circ$ is the angle of incidence, so $e=\sqrt{1-\cos^2\theta}=\sin\theta=0.5$.

Figure 3(a) shows the ellipse eccentricity trends of the four image layers as the pixel intensity increases for the ColorChecker White Target. As the results of different color lasers incident from different directions show, the ColorChecker White Target exhibits an almost perfect agreement with the theoretical value of 0.5 for the eccentricity, due to its engineered diffuse nature. On the other hand, mozzarella cheese (Fig. 3(b)) has a rather different eccentricity between layers 1 and 2, as well as between layers 3 and 4. This is due to the anisotropy of the material, as two beams of the same wavelength incident from different directions form laser spots of different shapes. The difference between layers 1 and 3 versus layers 2 and 4 is due to a preferred direction of scattering in the subsurface microstructure of the material. Since light scatters along fiber directions, we expect the shape of the reflected spot to change more at different intensities the larger the misalignment between fiber directions and the direction of incidence of the light. The directions of incidence of layers 2 and 4 are thus nearly aligned with the fiber direction, while the directions of incidence of layers 1 and 3 are nearly perpendicular to the fiber direction. The difference between the results of layer 1 and layer 3 stems from the different optical properties at red and violet wavelengths in mozzarella cheese. We note that in this particular case, two image layers seem sufficient to distinguish whether a material is isotropic or anisotropic. If the laser is from a direction orthogonal to the structures giving the anisotropy, such as fibers or laminar structures, we can even distinguish between an isotropic and an anisotropic material using just one image layer (layer 1 or 3, in this case). However, if we are unlucky with our direction of incidence, we cannot reliably detect the anisotropy (layer 2 or 4, in this case).


Fig. 3. Ellipse eccentricity of the X-Rite ColorChecker White Target (a) and mozzarella cheese (b) as the pixel intensity increases. Layers 1 and 2 are captured by violet lasers whose incident directions are perpendicular to one another, and similarly layers 3 and 4 are captured by red lasers. The small difference of layer 1 in (a) is due to a calibration problem: the laser Violet1 could not irradiate exactly the same area as the other lasers.


Ellipse angle. When fitting the ellipse, we obtain the ellipse angle, that is, the angle between the semi-major axis and the 12 o’clock direction of the image. As examples, Figs. 4(a) and 4(b) show the fitted ellipses of layer 1 for milk and mozzarella cheese. The increasing pixel intensity thresholds are illustrated with colors. In Fig. 4(a), we see that the ellipse angle for isotropic milk does not vary much with pixel intensity, whereas the ellipse angles of mozzarella cheese in Fig. 4(b) tend to change as the fitted ellipses get farther away from the laser spot center, and the contour becomes more circular, making its direction uncertain to determine. Figures 4(c) and 4(d) show the variation of the ellipse angles for the four image layers of milk and mozzarella cheese with pixel intensity. The ellipse angles for the four layers of milk do not change much with pixel intensity. In contrast, the ellipse angles for layers 1 and 3 of the mozzarella cheese increase with pixel intensity, while the ellipse angles for layers 2 and 4 do not change much. As for the eccentricity, this is due to the direction of lasers 2 and 4 being nearly aligned with the direction of elongation of the subsurface microstructure, while the lasers of layers 1 and 3 are perpendicular to this direction. As the ellipse gets closer to the center, the direction of the ellipse is more closely aligned with the direction of incidence of the laser, while as it gets farther away from the center, the ellipse angle is more influenced by the direction of the material fibers and the scattering. Thus, the ellipse angles carry some of the same information as the eccentricities, but the signal is stronger.


Fig. 4. Ellipse angle comparison between milk and mozzarella. Figs. (a) and (b) show the variation of the ellipse angle with the pixel intensity for layer 1 of milk (a) and mozzarella cheese (b), respectively. The line represents the angle of the ellipse, and the correspondence between the fitted ellipse contour and the line is represented by the same color. Figs. (c) and (d) show the relationship between the ellipse angles of the four image layers and the pixel intensity for milk (c) and mozzarella cheese (d).


In cases where the anisotropy is relatively low or the surface is not very smooth, it can be a challenge to estimate anisotropy from only one layered image. An example of this is the semi-hard cheese (see the Supplement 1, Fig. S1). Although we label semi-hard cheese as isotropic, our analysis reveals that it is not as isotropic as some of the other isotropic materials. If only one layered image is used, we might accidentally classify it as an anisotropic material. The relationship between the ellipse angle and the pixel intensity in a single layer may not be clearly discernible due to noise or speckle. Additionally, if the fiber direction coincides with the midline between two mutually perpendicular lasers, comparing the results from the two laser points may not provide a clear indication of anisotropy. This problem can be solved by taking a series of layered images with different sample rotation angles.

5.2 Rotation series

Figure 5 shows how the ellipse eccentricity and ellipse angle of milk and mozzarella vary with the sample rotation angles. In Figs. 5(a) and 5(b), the eccentricity and ellipse angle hardly change with the rotation of the sample, while the opposite is the case in Figs. 5(c) and 5(d). In Fig. 5(c), at first Violet1 and Violet2 (layers 1 and 2) have similar eccentricity, probably because the fiber direction is halfway between the planes of incidence of the two lasers. When we rotate the sample, the eccentricity starts to change periodically, which is the expected property of an anisotropic material. A similar periodical change is seen for the ellipse angles in Fig. 5(d). These results demonstrate that by rotating the sample, we can more reliably determine the anisotropy of the material. This method is useful for obtaining ground truth for unknown samples.


Fig. 5. Relationship between the sample rotation angle and the fitted ellipse when the milk (a,b) and mozzarella (c,d) is rotated on the stage. The horizontal axis is the sample rotation angle and the vertical axis is the average eccentricity (a,c) and ellipse angle (b,d) in a single image considering 11 different pixel intensities. The plots include the results of fitting multiple ellipses in each layer using various pixel intensities, resulting in multiple values of eccentricity and ellipse angle for each rotation angle. The plots depict an approximation of the mean and a 95% confidence interval.


To verify our initial assumptions regarding the isotropy or anisotropy of the different materials in our experiments, we provide rotation series plots for the different materials in the Supplement 1 (Fig. S1). As a summary of these, we list the mean sum of the variances of the different image layers for ellipse eccentricity and angle in Table 1. When the mean sum of variances of the ellipse angles for a full rotation series is high, it strongly suggests anisotropy.


Table 1. Mean sum of the variances of the four image layers across a full rotation series computed for ellipse eccentricity and ellipse angle. In the case of the ellipse angle, this number clearly distinguishes isotropic materials from anisotropic materials (note the one or two orders of magnitude difference), which strongly suggests that our initial assumptions regarding isotropy or anisotropy of the different materials were right.
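The Table 1 summary statistic can be sketched as follows. This is our interpretation of the "mean sum of the variances": the variance of a quantity across the rotation series, computed per image layer and accumulated over the four layers; the function name and array layout are our assumptions.

```python
import numpy as np

def rotation_series_score(values):
    """Sum over image layers of the variance across a rotation series.
    values: array of shape (n_rotations, n_layers), e.g. 19 x 4 ellipse
    angles (or eccentricities). High scores suggest anisotropy."""
    return np.var(values, axis=0).sum()
```

An isotropic sample yields near-constant angles and a score close to zero, while an anisotropic sample yields periodically varying angles and a score that is orders of magnitude larger, matching the separation reported in Table 1.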

6. Machine learning models

In Section 5, we demonstrated the relationship between anisotropy and the ellipse eccentricity and angle, as well as their trends when rotating samples. To determine the number of observations that we would need to reliably judge whether a sample is anisotropic or not, we use machine learning models. We test the ability of various models with respect to estimating the anisotropy of various materials based on their ellipse eccentricities and angles.

6.1 Feature extraction

Using the layered images of the materials mentioned in Section 3, we fit multiple ellipses by the method described in Section 4. As the pixel intensity threshold $I_0$ increases, fewer points are used to fit the ellipse, leading to higher uncertainty. Thus, to ensure robustness, we also tested the use of principal component analysis (PCA) with two components to obtain the principal direction of the data (first eigenvector, providing the ellipse angle) and the spread of the data in two directions (the two eigenvalues, corresponding to the ellipse radii) for a set of pixels with $I_{xy} > I_0$. The blue line in Fig. 2(d) shows the direction of the first eigenvector obtained from the PCA. This is expected to correspond to the direction of the major axis of the ellipse, and indeed it does, with only a slight difference. PCA transforms the original data into a new coordinate system such that the variance of the data along the first dimension is maximized, the variance along the second dimension is second only to that of the first, and so on. Thus, up to the numerical robustness of the methods, the first two eigenvectors of the PCA and their corresponding eigenvalues are expected to be nearly identical to the directions of the major and minor ellipse axes and their corresponding radii. We use the two eigenvalues to calculate the ‘eccentricity’ of the PCA. Table 2 shows an example of the data we extracted from our images.
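The PCA-based feature extraction described above can be sketched as follows. This is a minimal illustration, not the paper's actual code; the function name and threshold handling are our own:

```python
import numpy as np

def pca_ellipse_features(image, intensity_threshold):
    """Estimate ellipse angle and 'eccentricity' from the coordinates
    of pixels brighter than the intensity threshold I_0 using PCA."""
    # Coordinates (x, y) of pixels with I_xy > I_0.
    ys, xs = np.nonzero(image > intensity_threshold)
    pts = np.column_stack([xs, ys]).astype(float)
    pts -= pts.mean(axis=0)                      # center the point cloud
    # Eigen-decomposition of the 2x2 covariance matrix.
    eigvals, eigvecs = np.linalg.eigh(np.cov(pts, rowvar=False))
    major, minor = np.sqrt(eigvals[1]), np.sqrt(eigvals[0])  # ascending order
    # Angle of the first eigenvector (major axis), mapped to [0, 180).
    vx, vy = eigvecs[:, 1]
    angle = np.degrees(np.arctan2(vy, vx)) % 180.0
    # 'Eccentricity' computed from the two spreads as for an ellipse.
    ecc = np.sqrt(1.0 - (minor / major) ** 2)
    return angle, ecc
```

For an elongated intensity pattern, the returned angle tracks the major-axis direction and the eccentricity approaches 1 as the pattern becomes more anisotropic.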


Table 2. Feature extraction for machine learning. Each row of the table contains the results of fitting an ellipse or performing PCA on a single image layer using a certain pixel intensity threshold. The column "Img No" indicates the index of a layered image in the material. The column "Label" specifies whether the material is considered anisotropic (1) or isotropic (0). The column "Transform" indicates whether the row represents an original image (0) or a data augmentation version (1). We use "ecc" as an abbreviation of eccentricity.

6.2 Preprocessing

For machine learning, we do data augmentation by applying random image rotations as well as contrast and brightness adjustments to our layered images. Data augmentation is a technique that expands a dataset by adding new synthetic data generated from the original data. This helps prevent overfitting in machine learning models by acting as a regularizer, and it has been found to improve model performance [43]. For our data augmentation, we use the full rotation range from 0 to 360 degrees and an adjustment range for contrast and brightness from 0.5 to 1.5. We apply 19 transformations to each image and then fit ellipses to the augmented images to extract numerical data from them. The data is split into training and test sets, with the training set containing combined original and augmented data and the test set containing only original data.
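An augmentation step with these ranges could be sketched as below; the exact transform implementation used in the study is not specified, so this is an assumed variant based on SciPy:

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)

def augment(image, n_transforms=19):
    """Generate synthetic variants of an image layer by random rotation
    (0-360 degrees) and contrast/brightness scaling (0.5-1.5)."""
    variants = []
    for _ in range(n_transforms):
        angle = rng.uniform(0.0, 360.0)
        contrast = rng.uniform(0.5, 1.5)
        brightness = rng.uniform(0.5, 1.5)
        # Rotate around the image center, keeping the original shape.
        out = ndimage.rotate(image, angle, reshape=False, mode='nearest')
        # Scale contrast about the mean, then scale brightness and clip.
        out = (out - out.mean()) * contrast + out.mean()
        out = np.clip(out * brightness, 0.0, 255.0)
        variants.append(out)
    return variants
```

Each original image then yields 19 augmented copies, from which ellipses are fitted as for the originals.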

We divide the test and training sets according to material type to keep each set balanced. Our data set includes 10 different materials (5 isotropic, 5 anisotropic), and we randomly select 4 materials (2 isotropic, 2 anisotropic) as the test set and the rest as the training set. During training, we randomly select 20% of the training set as a validation set, without regard to material type. We perform 50 training runs using a different test set each time to calculate the average performance of the model.
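A material-level split of this kind can be sketched as follows. The column names (`material`, `label`, `transform`) follow Table 2; the function itself is our own illustrative code, not the study's:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)

def split_by_material(df, n_test_per_class=2, val_fraction=0.2):
    """Hold out whole materials (2 isotropic + 2 anisotropic) as the test
    set, then split off a row-level validation set from the remainder."""
    iso = df.loc[df['label'] == 0, 'material'].unique()
    aniso = df.loc[df['label'] == 1, 'material'].unique()
    test_materials = np.concatenate([
        rng.choice(iso, n_test_per_class, replace=False),
        rng.choice(aniso, n_test_per_class, replace=False),
    ])
    in_test = df['material'].isin(test_materials)
    # Test set: only original (non-augmented) rows of held-out materials.
    test = df[in_test & (df['transform'] == 0)]
    # Shuffle remaining rows, then split off the validation fraction.
    train = df[~in_test].sample(frac=1.0, random_state=0)
    n_val = int(len(train) * val_fraction)
    val, train = train.iloc[:n_val], train.iloc[n_val:]
    return train, val, test
```

Holding out whole materials, rather than random rows, ensures the test score reflects generalization to unseen materials.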

6.3 Machine learning models

We take several popular models, use their default parameters to compare their performance, and select the best-performing model for fine-tuning. We select a support vector machine (SVM) with a radial basis function (RBF) kernel [44], logistic regression, decision tree [45], ExtraTrees [46], random forest [47], a fully connected neural network, AdaBoost [46], naïve Bayes, and quadratic discriminant analysis (QDA) [48] to compare the performance and differences between the models. The models are implemented using the open source Python package scikit-learn [49]. All of our machine learning models are trained using the data in Table 2.
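A comparison of this kind can be sketched with scikit-learn as follows. The model zoo mirrors the list above with default parameters, though the exact configurations used in the study may differ:

```python
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import (ExtraTreesClassifier, RandomForestClassifier,
                              AdaBoostClassifier)
from sklearn.neural_network import MLPClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.metrics import roc_auc_score

# Candidate classifiers, essentially at their scikit-learn defaults.
models = {
    'RBF SVM': SVC(kernel='rbf', probability=True),
    'Logistic regression': LogisticRegression(max_iter=1000),
    'Decision tree': DecisionTreeClassifier(),
    'ExtraTrees': ExtraTreesClassifier(),
    'Random forest': RandomForestClassifier(),
    'Neural network': MLPClassifier(max_iter=1000),
    'AdaBoost': AdaBoostClassifier(),
    'Naive Bayes': GaussianNB(),
    'QDA': QuadraticDiscriminantAnalysis(),
}

def compare_models(X_train, y_train, X_test, y_test):
    """Fit each model and report its test AUROC score."""
    scores = {}
    for name, model in models.items():
        model.fit(X_train, y_train)
        proba = model.predict_proba(X_test)[:, 1]
        scores[name] = roc_auc_score(y_test, proba)
    return scores
```

In practice, this loop would be wrapped in the 50-run resampling over different material-level test sets described above.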

Based on the results in Table 3, the neural network had the best performance, so we adjusted its parameters to enhance it further. Initially, the neural network had only one hidden layer with 100 neurons, but we increased this to five layers with 400, 200, 100, 50, and 20 neurons. Additionally, to prevent overfitting, we employed L2 regularization with a weight of 1 and implemented early stopping during training. L2 regularization helps prevent overfitting by adding a penalty term to the loss function that is proportional to the squared magnitude of the weights. This forces the weights to be small, but not exactly zero, which reduces the complexity of the model and avoids overfitting to the noise in the data. The maximum number of training iterations was set at 10,000.
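In scikit-learn terms, the tuned network could be configured roughly as follows (a sketch under the parameters stated above; solver and other settings not stated in the text are left at their defaults):

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Tuned network: five hidden layers, L2 penalty with weight 1,
# early stopping, and up to 10,000 training iterations.
tuned_mlp = MLPClassifier(
    hidden_layer_sizes=(400, 200, 100, 50, 20),
    alpha=1.0,               # L2 regularization weight
    early_stopping=True,     # hold out part of the training data internally
    max_iter=10_000,
)
```

The model is then fitted with `tuned_mlp.fit(X_train, y_train)` on the feature rows of Table 2.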


Table 3. Median value of training and test AUROC scores for each model. The four-image-layer and one-image-layer data contain information on both eccentricity and angle calculated by the ellipse fitting method.

6.4 Model performance

To assess the performance of our models, we use the area under the receiver operating characteristic curve (AUROC) score. AUROC is a metric used to evaluate the performance of a binary classifier; the ROC curve is obtained by plotting the true positive rate (TPR) against the false positive rate (FPR) at various threshold settings. The TPR is calculated as the fraction of true positives among all positives, while the FPR is the fraction of false positives among all negatives [50]. An unbiased estimator of the AUROC for a predictor $f$ can be calculated using the Wilcoxon-Mann-Whitney statistic,

$$\mathrm{AUROC}(f)=\frac{\sum_{t_0 \in \mathcal{D}^0} \sum_{t_1 \in \mathcal{D}^1} \mathbf{1}\!\left[f(t_0)<f(t_1)\right]}{\left|\mathcal{D}^0\right|\left|\mathcal{D}^1\right|},$$
where the indicator function $\mathbf {1}\!\left [f(t_0)<f(t_1)\right ]$ returns 1 if the condition in the bracket is true and 0 otherwise. The set of negative examples is represented by ${\mathcal {D}}^{0}$ and the set of positive examples is represented by ${\mathcal {D}}^{1}$. If the AUROC score of a model is 0.5, the model is no better than random chance, indicating that the model does not have the ability to make predictions. The maximum AUROC score is 1, which means that the model was able to correctly identify all true cases and did not make any false detections.
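This pairwise estimator translates directly into code; a minimal NumPy implementation:

```python
import numpy as np

def auroc(scores_neg, scores_pos):
    """Wilcoxon-Mann-Whitney estimate of the AUROC: the fraction of
    (negative, positive) pairs that the predictor f ranks correctly."""
    scores_neg = np.asarray(scores_neg, dtype=float)
    scores_pos = np.asarray(scores_pos, dtype=float)
    # Indicator 1[f(t0) < f(t1)] evaluated over all pairs via broadcasting.
    correct_pairs = (scores_neg[:, None] < scores_pos[None, :]).sum()
    return correct_pairs / (len(scores_neg) * len(scores_pos))
```

A perfect ranking gives 1, a reversed ranking gives 0, and chance-level scores give about 0.5.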

From Table 3, we find that all models perform poorly when only one image layer is used. In the case of using four image layers, the neural network model performed best. Although other models (e.g., ExtraTrees, AdaBoost, random forest) have extremely high validation scores, their test scores are very low, which is likely due to overfitting on the training set. Given the good performance of the neural network model, we adjust its hyperparameters to improve its performance and thoroughly analyze the results.

Figure 6 shows that the neural network performs best when using four image layers, with a median AUROC score of 0.9. When using only two image layers, the performance decreases to 0.83. Table 4 shows the test set average misclassification rate after 50 training runs for different materials irradiated by different lasers. The results of using both eccentricity and angle are similar to using only angle, but when only eccentricity is used, the model cannot reliably distinguish the anisotropy of the sample. It appears that the model is memorizing the samples instead of generalizing. We suspect that the eccentricity has a high correlation with the texture of the material. The performance of the red laser is slightly better than that of the violet laser, which may be because the red laser penetrates deeper into the materials and thus carries more scattering information. Additionally, PCA has smaller variance, indicating better stability. We recommend two samplings to obtain more accurate results. If the two sampling results are inconsistent, more measurements are required.


Fig. 6. Neural network AUROC score. Results were obtained by randomly sampling four materials (two isotropic and two anisotropic) from ten materials (five isotropic and five anisotropic) and performing 50 training runs to get the overall performance of different combinations. Panel (a) shows the AUROC score of the validation set, and (b) shows the AUROC score of the test set. The horizontal axis of the plots represents the outcome of training under various conditions: "4layers" indicates that all four layers were used during training simultaneously, "ang" implies that each layer holds only information about the angle of the ellipse/PCA, "ecc" means that each layer only contains information about the eccentricity of the ellipse/PCA, "ecc, ang" implies that each of the 2 layers holds both angle and eccentricity information, and "2layers" means that only two layers were utilized during training, with "r" denoting red and "v" violet.



Table 4. Average test set misclassification rate after 50 training runs for different materials irradiated by different lasers. Different lasers have different classification accuracy for different materials. The classification error rate of violet for chicken breast is much lower than that of red, for example. Conversely, the classification error rate of red for bamboo is much lower than that of violet. In five of the ten cases, we obtain better results when using the combined red+violet. Both ellipse eccentricity and angle are used for training and evaluation.

The quality of the samples used for the training set, such as their surface smoothness, also affects the performance of the models. Using 3D printed objects with a relatively smooth surface increased the AUROC score of the test set to varying degrees. The variance of the test set AUROC score when using ellipse fitting for feature extraction is also reduced, making the variance of the two methods similar (we omit these results for brevity).

We further validate our results by a two-way multivariate analysis of variance (MANOVA):

$$\begin{array}{r} \begin{bmatrix} \mathrm{ecc,ang} &\mathrm{ang} &\mathrm{ecc} \end{bmatrix} = \mu + \mathrm{imagelayers}_i + \mathrm{method}_j, \\ i=1,2,4 \; , \; j = 1,2, \end{array}$$
where $\mu$ is an intercept. The $P$-value for Wilks’ lambda is <0.0001 and 0.0011 for $\mathrm {imagelayers}$ and $\mathrm {method}$, respectively. Interestingly, when considering the marginal tests, the number of image layers used (indicated by $\mathrm {imagelayers}$) is strongly significant for all three measures, but for $\mathrm {method}$, no significant difference is found for $\mathrm {ecc,ang}$ ($P=0.1231$). When considering the impact of wavelength, we use
$$\begin{bmatrix}\mathrm{ecc,ang} & \mathrm{ang} & \mathrm{ecc} \end{bmatrix} = \mu + \mathrm{red} + \mathrm{violet},$$
where $\mu$ is an intercept, and $\mathrm {red}$ and $\mathrm {violet}$ are dummy variables indicating which color was used for a given measurement. Using Wilks’ lambda, the $P$-values for both $\mathrm {red}$ and $\mathrm {violet}$ are insignificant at 0.1762 and 0.1767, respectively. Although not significant, interaction plots showed that red had a positive effect on $\mathrm {ecc,ang}$ and a negative effect on $\mathrm {ecc}$, and vice versa for $\mathrm {violet}$.
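As an illustration of the statistic underlying these tests, the following is a minimal one-way sketch of Wilks' lambda computed with NumPy. The two-way models above additionally include a second factor and were fitted with standard statistical software; the function below is our own illustrative code:

```python
import numpy as np

def wilks_lambda(Y, groups):
    """One-way MANOVA statistic: Wilks' lambda = det(W) / det(T), where
    W is the within-group scatter matrix of the multivariate responses
    (here the columns ecc-ang, ang, ecc) and T is the total scatter.
    Values near 1 mean the factor explains little multivariate variance."""
    Y = np.asarray(Y, dtype=float)
    groups = np.asarray(groups)
    Yc = Y - Y.mean(axis=0)
    T = Yc.T @ Yc                              # total scatter matrix
    W = np.zeros_like(T)
    for g in np.unique(groups):
        Yg = Y[groups == g] - Y[groups == g].mean(axis=0)
        W += Yg.T @ Yg                         # within-group scatter
    return np.linalg.det(W) / np.linalg.det(T)
```

Small values of the statistic (with a correspondingly small $P$-value) indicate that the group means differ across the response variables.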

7. Discussion

We have demonstrated that, using the ellipse eccentricities and angles calculated from the isocontours of the scattering pattern, we can determine whether a sample is anisotropic. We supported our analysis method using machine learning models, which suggested that the method is reliable. We can achieve high accuracy with few observations. For example, using only the two 640 nm red lasers, we achieve an AUROC score of 0.83. Using both the two 640 nm red lasers and the two 405 nm violet lasers raises the AUROC score to 0.90.

At the outset, we tried to estimate optical properties for different sample rotations using existing OIR methods [23,26,27]. However, when examining more highly scattering substances, like mozzarella cheese, this approach became very challenging. The reason is that these methods look for the distance between the entry point of the light and the diffusion centre ($\Delta x$), which is very small and hidden by noise in the case of highly scattering materials.

We tested a new method for identifying anisotropy in materials using a layered image with images for two sets of two lasers with perpendicular planes of incidence and the same wavelength in each set. Our method is based on the assumption that light scatters along the direction of fibers in anisotropic materials. However, as mentioned in Section 5.1, using a single layered image is not always reliable. Thus, we recommend rotating the sample by at least 45 degrees and measuring again to determine whether the material is isotropic. The most reliable method is to rotate the sample from 0 to 180 degrees, measure every 10 degrees, and then calculate the average ellipse eccentricity and angle at each rotation to determine the overall tendency of the sample. If the ellipse eccentricity and angle remain almost constant across different rotations, we conclude that the sample is isotropic.

Machine learning models are more effective at discovering patterns in data and can identify the anisotropy of materials using just a photo with either four or two layers. Among the models we developed, we found that the neural network model can judge the anisotropy of a material well based on the ellipse angle. However, when using only the ellipse eccentricity, all models perform poorly. We speculate that the eccentricity of the ellipse may be closely related to the type of material, making it unable to generalize to unknown materials. Moreover, materials exhibit varying degrees of anisotropy rather than being solely anisotropic or isotropic. Thus, using only binary values of 0 or 1 to represent complete isotropy or anisotropy during training is an incomplete description of reality. A better model would be one capable of predicting the degree of material anisotropy. However, a metric describing the degree of anisotropy would be required for this.

A limitation of our method is that it requires the sample to be placed horizontally and have a relatively smooth surface. Unlike liquids which always have a smooth surface and stay horizontal, solid substances such as mozzarella can have a very uneven and tilted surface which does not meet our measurement requirements. Further research on uneven and tilted surfaces might be needed for an inline deployment.

8. Conclusion

We proposed a practical and versatile method for detecting optical anisotropy in materials. By acquiring layered images of various materials using an OIR device and fitting ellipses to pixel sets at different intensity thresholds, we noticed a relationship between the eccentricity of the ellipse and the anisotropy of the material. In addition, we observed a relationship between the directions of the major axes of the ellipses and the material anisotropy. Using lasers at oblique incidence, we had an expected eccentricity and major axis direction for a diffuse, isotropic material. Observing that the eccentricities and directions gradually deviated from these expectations in anisotropic materials when considering different pixel intensity thresholds led us to believe that our relatively simple OIR configuration is suitable for inline use for assessing a material’s anisotropy.

Finally, we developed a neural network model based on our findings and found that with few observations, the neural network model can effectively identify material anisotropy using ellipse angles as input. Our method and model are simple and efficient, with the potential to be applied to a wide range of fields, such as medical, industrial, and food production.

Funding

Innovationsfonden (0223-00041B); Villum Fonden (0037759); Horizon 2020 Framework Programme (814158).

Acknowledgments

We thank Niels Christian Krieger Lassen and the rest of the team at Videometer for their good work on the customized VideometerSLS instrument that we purchased from them for this project.

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are available in Ref. [51].

Supplemental document

See Supplement 1 for supporting content.

References

1. V. V. Tuchin, Tissue Optics: Light Scattering Methods and Instruments for Medical Diagnostics (SPIE, 2015), 3rd ed.

2. S. Stocker, F. Foschum, P. Krauter, F. Bergmann, A. Hohmann, C. Scalfi Happ, and A. Kienle, “Broadband optical properties of milk,” Appl. Spectrosc. 71(5), 951–962 (2017). [CrossRef]  

3. R. Lu, R. Van Beers, W. Saeys, C. Li, and H. Cen, “Measurement of optical properties of fruits and vegetables: A review,” Postharvest Biol. Technol. 159, 111003 (2020). [CrossRef]  

4. A. N. Bashkatov, E. A. Genina, V. I. Kochubey, A. A. Gavrilova, S. V. Kapralov, V. A. Grishaev, and V. V. Tuchin, “Optical properties of human stomach mucosa in the spectral range from 400 to 2000 nm: prognosis for gastroenterology,” Med. Laser Appl. 22(2), 95–104 (2007). [CrossRef]  

5. J. Qin and R. Lu, “Measurement of the absorption and scattering properties of turbid liquid foods using hyperspectral imaging,” Appl. Spectrosc. 61(4), 388–396 (2007). [CrossRef]  

6. B. Park and R. Lu, Hyperspectral Imaging Technology in Food and Agriculture (Springer, 2015).

7. A. Elsheikh, M. Brown, D. Alhasso, P. Rama, M. Campanelli, and D. Garway-Heath, “Experimental assessment of corneal anisotropy,” J. Refract. Surg. 24(2), 178–187 (2008). [CrossRef]  

8. C. Pierpaoli and P. J. Basser, “Toward a quantitative assessment of diffusion anisotropy,” Magn. Reson. Med. 36(6), 893–906 (1996). [CrossRef]  

9. J. B. Nielsen, E. R. Eiriksson, R. L. Kristensen, J. Wilm, J. R. Frisvad, K. Conradsen, and H. Aanaes, “Quality assurance based on descriptive and parsimonious appearance models,” in Workshop on Material Appearance Modeling, R. Klein and H. Rushmeier, eds. (The Eurographics Association, 2015).

10. M. Spoerk, C. Savandaiah, F. Arbeiter, G. Traxler, L. Cardon, C. Holzer, and J. Sapkota, “Anisotropic properties of oriented short carbon fibre filled polypropylene parts fabricated by extrusion-based additive manufacturing,” Compos. Part A: Appl. Sci. Manuf. 113, 95–104 (2018). [CrossRef]  

11. A. Camposeo, L. Persano, M. Farsari, and D. Pisignano, “Additive manufacturing: applications and directions in photonics and optoelectronics,” Adv. Opt. Mater. 7(1), 1800419 (2019). [CrossRef]  

12. N. Zohdi and R. Yang, “Material anisotropy in additively manufactured polymers and polymer composites: a review,” Polymers 13(19), 3368 (2021). [CrossRef]  

13. J.-L. Damez, S. Clerjon, S. Abouelkaram, and J. Lepetit, “Beef meat electrical impedance spectroscopy and anisotropy sensing for non-invasive early assessment of meat ageing,” J. Food Eng. 85(1), 116–122 (2008). [CrossRef]  

14. J.-L. Damez and S. Clerjon, “Meat quality assessment using biophysical methods related to meat structure,” Meat Sci. 80(1), 132–149 (2008). [CrossRef]  

15. W. Saeys, M. A. Velazco-Roa, S. N. Thennadil, H. Ramon, and B. M. Nicolaï, “Optical properties of apple skin and flesh in the wavelength range from 350 to 2200 nm,” Appl. Opt. 47(7), 908–919 (2008). [CrossRef]  

16. A. López-Maestresalas, B. Aernouts, R. Van Beers, S. Arazuri, C. Jarén, J. De Baerdemaeker, and W. Saeys, “Bulk optical properties of potato flesh in the 500–1900 nm range,” Food Bioprocess Technol. 9(3), 463–470 (2016). [CrossRef]  

17. Y. Huang, R. Lu, and K. Chen, “Development of a multichannel hyperspectral imaging probe for property and quality assessment of horticultural products,” Postharvest Biol. Technol. 133, 88–97 (2017). [CrossRef]  

18. R. Van Beers, B. Aernouts, L. León Gutiérrez, C. Erkinbaev, K. Rutten, A. Schenk, B. Nicolaï, and W. Saeys, “Optimal illumination-detection distance and detector size for predicting braeburn apple maturity from vis/nir laser reflectance measurements,” Food Bioprocess Technol. 8(10), 2123–2136 (2015). [CrossRef]  

19. K. Mollazade and A. Arefi, “Optical analysis using monochromatic imaging-based spatially-resolved technique capable of detecting mealiness in apple fruit,” Sci. Hortic. 225, 589–598 (2017). [CrossRef]  

20. A. Renda, D. M. Barbano, J. J. Yun, P. S. Kindstedt, and S. J. Mulvaney, “Influence of screw speeds of the mixer at low temperature on characteristics of mozzarella cheese,” J. Dairy Sci. 80(9), 1901–1907 (1997). [CrossRef]  

21. R. Feng, S. Barjon, F. W. van den Berg, S. K. Lillevang, and L. Ahrné, “Effect of residence time in the cooker-stretcher on mozzarella cheese composition, structure and functionality,” J. Food Eng. 309, 110690 (2021). [CrossRef]  

22. M. C. Gonçalves and H. R. Cardarelli, “Composition, microstructure and chemical interactions during the production stages of mozzarella cheese,” Int. Dairy J. 88, 34–41 (2019). [CrossRef]  

23. L. Wang and S. L. Jacques, “Use of a laser beam with an oblique angle of incidence to measure the reduced scattering coefficient of a turbid medium,” Appl. Opt. 34(13), 2362–2366 (1995). [CrossRef]  

24. S.-P. Lin, L. Wang, S. L. Jacques, and F. K. Tittel, “Measurement of tissue optical properties by the use of oblique-incidence optical fiber reflectometry,” Appl. Opt. 36(1), 136–143 (1997). [CrossRef]  

25. P. Sun, R. Q. Yang, F. H. Xie, J. Q. Ding, F. Q. Zhang, and X. P. Cao, “A method for determining optical properties of human tissues by measuring diffuse reflectance with CCD,” in Optics in Health Care and Biomedical Optics IV, vol. 7845 of Proc. SPIE (SPIE, 2010), pp. 396–407.

26. O. H. A. Abildgaard, F. Kamran, A. B. Dahl, J. L. Skytte, F. D. Nielsen, C. L. Thomsen, P. E. Andersen, R. Larsen, and J. R. Frisvad, “Non-invasive assessment of dairy products using spatially resolved diffuse reflectance spectroscopy,” Appl. Spectrosc. 69(9), 1096–1105 (2015). [CrossRef]  

27. G. Marquez, L. V. Wang, S.-P. Lin, J. A. Schwartz, and S. L. Thomsen, “Anisotropy in the absorption and scattering spectra of chicken breast tissue,” Appl. Opt. 37(4), 798–804 (1998). [CrossRef]  

28. S. Nickell, M. Hermann, M. Essenpreis, T. J. Farrell, U. Krämer, and M. S. Patterson, “Anisotropy of light propagation in human skin,” Phys. Med. Biol. 45(10), 2873–2886 (2000). [CrossRef]  

29. A. Kienle, F. K. Forster, R. Diebolder, and R. Hibst, “Light propagation in dentin: influence of microstructure on anisotropy,” Phys. Med. Biol. 48(2), N7–N14 (2003). [CrossRef]  

30. A. Kienle, F. K. Forster, and R. Hibst, “Anisotropy of light propagation in biological tissue,” Opt. Lett. 29(22), 2617–2619 (2004). [CrossRef]  

31. A. Kienle, C. Wetzel, A. L. Bassi, D. Comelli, P. Taroni, and A. Pifferi, “Determination of the optical properties of anisotropic biological media using an isotropic diffusion model,” J. Biomed. Opt. 12(1), 014026 (2007). [CrossRef]  

32. A. Kienle, “Anisotropic light diffusion: an oxymoron?” Phys. Rev. Lett. 98(21), 218104 (2007). [CrossRef]  

33. A. Kienle, F. Foschum, and A. Hohmann, “Light propagation in structural anisotropic media in the steady-state and time domains,” Phys. Med. Biol. 58(17), 6205–6223 (2013). [CrossRef]  

34. J. Ranasinghesagara and G. Yao, “Imaging 2D optical diffuse reflectance in skeletal muscle,” Opt. Express 15(7), 3998–4007 (2007). [CrossRef]  

35. J. Ranasinghesagara, T. M. Nath, S. J. Wells, A. D. Weaver, D. E. Gerrard, and G. Yao, “Imaging optical diffuse reflectance in beef muscles for tenderness prediction,” Meat Sci. 84(3), 413–421 (2010). [CrossRef]  

36. R. V. Beers, B. Aernouts, M. M. Reis, and W. Saeys, “Anisotropic light propagation in bovine muscle tissue depends on the initial fiber orientation, muscle type and wavelength,” Opt. Express 25(18), 22082–22095 (2017). [CrossRef]  

37. J. Cha, J. Kim, and S. Kim, “Noninvasive determination of fiber orientation and tracking 2-dimensional deformation of human skin utilizing spatially resolved reflectance of infrared light measurement in vivo,” Measurement 142, 170–180 (2019). [CrossRef]  

38. M. Monzón, Z. Ortega, A. Hernández, R. Paz, and F. Ortega, “Anisotropy of photopolymer parts made by digital light processing,” Materials 10(1), 64 (2017). [CrossRef]  

39. I. M. Ward, “Optical and mechanical anisotropy in crystalline polymers,” Proc. Phys. Soc. 80(5), 1176–1188 (1962). [CrossRef]  

40. J. Ranasinghesagara, F. Hsieh, and G. Yao, “A photon migration method for characterizing fiber formation in meat analogs,” J. Food Sci. 71(5), E227–E231 (2006). [CrossRef]  

41. T. Binzoni, C. Courvoisier, R. Giust, G. Tribillon, T. Gharbi, J. Hebden, T. Leung, J. Roux, and D. Delpy, “Anisotropic photon migration in human skeletal muscle,” Phys. Med. Biol. 51(5), N79–N90 (2006). [CrossRef]  

42. A. W. Fitzgibbon, M. Pilu, and R. B. Fisher, “Direct least squares fitting of ellipses,” in Proceedings of 13th International Conference on Pattern Recognition, (IEEE, 1996), pp. 253–257.

43. C. Shorten and T. M. Khoshgoftaar, “A survey on image data augmentation for deep learning,” J. Big Data 6(1), 60 (2019). [CrossRef]  

44. S.-i. Amari and S. Wu, “Improving support vector machine classifiers by modifying kernel functions,” Neural Networks 12(6), 783–789 (1999). [CrossRef]  

45. L. Breiman, J. H. Friedman, R. A. Olshen, and C. J. Stone, Classification and Regression Trees (Chapman & Hall / CRC, 1984).

46. D. H. Wolpert, “Stacked generalization,” Neural Networks 5(2), 241–259 (1992). [CrossRef]  

47. L. Breiman, “Random forests,” Mach. Learn. 45(1), 5–32 (2001). [CrossRef]  

48. T. Hastie, R. Tibshirani, and J. Friedman, The Elements of Statistical Learning: Data Mining, Inference, and Prediction, vol. 2 (Springer, 2009).

49. F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay, “Scikit-learn: Machine learning in Python,” Journal of Machine Learning Research 12(85), 2825–2830 (2011).

50. T. Fawcett, “An introduction to ROC analysis,” Pattern Recognit. Lett. 27(8), 861–874 (2006). [CrossRef]  

51. L. Wang, S. A. Bigdeli, A. N. Christensen, M. Corredig, R. Tonello, A. B. Dahl, and J. R. Frisvad, “Data for noninvasive material anisotropy estimation,” Technical University of Denmark Collection, 2023, https://doi.org/10.11583/DTU.c.6605581 .

Supplementary Material (1)

Supplement 1: Rotation series




Figures (6)

Fig. 1. Illustration of our instrument for oblique incidence reflectometry. The view direction of the camera is perpendicular to the horizontal plane and the angle between the laser and the horizontal plane is $60^{\circ }$. The rotation stage is adjustable in height.
Fig. 2. Fitting ellipses (outlines) to the image coordinates of pixels with values larger than $I_0 = 150$ (red points) for the case of milk. We show the result for the four different image layers (a–d) corresponding to the different laser beams in the instrument (Violet1, Violet2, Red1, Red2). The green line represents the direction of the major axis of the ellipse, with the 12 o’clock direction defined as 0 degrees. The blue line in (d) is the ellipse direction obtained from the PCA analysis described in Sec. 6.1. The result of this is only slightly different from the ellipse fitting method.
Fig. 3. Ellipse eccentricity of Xrite ColorChecker White Target (a) and mozzarella cheese (b) as the pixel intensity increases. Layers 1 and 2 are captured by violet lasers whose incident directions are perpendicular to one another, and similarly layers 3 and 4 are captured by red lasers. The small difference of layer 1 in (a) is due to a calibration problem: the laser Violet1 could not irradiate the exact same area as the other lasers.
Fig. 4. Ellipse angles comparison between milk and mozzarella. Fig. (a) and (b) show the variation of the ellipse angle with the pixel intensity of layer 1 of milk (a) and mozzarella cheese (b), respectively. The line represents the angle of the ellipse, and the correspondence between the fitted ellipse contour and the line is represented by the same color. Fig. (c) and Fig. (d) represent the relationship between the ellipse angles of the four image layers and pixel intensity of milk (c) and mozzarella cheese (d).


Equations (10)

$$F(\mathbf{a},\mathbf{x}) = \mathbf{x}^T\mathbf{a} = ax^2 + bxy + cy^2 + dx + ey + f = 0,$$
$$D_A(\mathbf{a}) = \sum_{i=1}^{n}\left(F(\mathbf{a},\mathbf{x}_i)\right)^2,$$
$$D_A(\mathbf{a}) = \sum_{i=1}^{n}\mathbf{a}^T\mathbf{x}_i\mathbf{x}_i^T\mathbf{a} = \mathbf{a}^T S\,\mathbf{a},$$
$$\mathop{\arg\min}_{\mathbf{a}}\left(\mathbf{a}^T S\,\mathbf{a}\right)\quad\text{subject to}\quad \mathbf{a}^T C\,\mathbf{a} = 1,$$
$$L(\mathbf{a},\lambda) = \mathbf{a}^T S\,\mathbf{a} - \lambda\left(\mathbf{a}^T C\,\mathbf{a} - 1\right),$$
$$S\,\mathbf{a} = \lambda C\,\mathbf{a},$$
$$e = \sqrt{1 - \frac{b^2}{a^2}}.$$

The remaining equations (the AUROC estimator and the two MANOVA models) appear in the text above.