
Optical hyperdimensional soft sensing: speckle-based touch interface and tactile sensor

Open Access

Abstract

Hyperdimensional computing (HDC) is an emerging computing paradigm that exploits the distributed representation of input data in a hyperdimensional space, the dimensions of which are typically between 1,000 and 10,000. The hyperdimensional distributed representation enables energy-efficient, low-latency, and noise-robust computation with low-precision, basic arithmetic operations. In this study, we propose optical hyperdimensional distributed representations based on laser speckles for adaptive, efficient, and low-latency optical sensor processing. In the proposed approach, sensory information is optically mapped into a hyperdimensional space with >250,000 dimensions, enabling HDC-based cognitive processing. We apply this approach to the processing of a soft-touch interface and a tactile sensor and demonstrate that it achieves high touch and tactile recognition accuracy while significantly reducing the amount of training data and the computational burden compared with previous machine-learning-based sensing approaches. Furthermore, we show that this approach enables adaptive recalibration that maintains high accuracy even under changing conditions.

© 2024 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Hyperdimensional computing (HDC), also known as vector symbolic architecture, is a brain-inspired computing paradigm [1–3] that has attracted significant attention, driven by the global search for alternatives to the conventional von Neumann computing paradigm. HDC takes advantage of the high-dimensional distributed representation of input data. For example, an input is represented by a long binary (or bipolar) quasi-random vector, frequently referred to as a hypervector (HV), whose dimensionality is typically greater than 1,000 [1]. The HV representation enables basic arithmetic operations on HVs, such as multiplication and addition, to be realized with simple logic operations, such as logical exclusive OR (XOR) and counters, to build composite HVs that represent objects of interest. Because of the hyperdimensionality of the space, two different HVs are likely to be almost orthogonal, which leads to a holographic representation of input information [1]. Another significant advantage of the HV representation is its fault robustness: the HV representation has no error-prone bits, such as the most significant bit or the sign bit of conventional binary representation. Owing to these advantages, HDC offers remarkable efficiency as a promising alternative to traditional machine-learning models and has demonstrated its advantages on emerging hardware, such as in-memory computing [4]. HDC-based cognitive processing has been achieved with non-iterative learning, without requiring optimization or model tuning, and has been applied to various tasks, including language identification [5], speech recognition [6], and object recognition for robotics [7]. It has also been used for analogy-based reasoning [8], bio-signal processing in electroencephalography [9], secure computing for the Internet of Things [10], and local processing in adaptive machine learning for wearable devices [11].

In-sensor computing (i.e., the local and real-time processing of sensory signals) reduces communication with external computational devices and improves latency and security. HDC is promising for such in-sensor or near-sensor computing because it can encode various types of sensory information as a single HV and enables energy-efficient, low-latency processing. In this study, we considered an HDC-based approach for optical touch or tactile sensing. In general, such sensors can acquire information regarding force, texture, shape, and temperature through the elastic deformation caused by physical contact [12]. They can be applied in various areas, such as robot interaction, medical probes, and haptic interfaces.

In contrast to electronics-based sensors, optical-sensing approaches exhibit remarkable features, such as high sensitivity, remote access, and immunity to electromagnetic interference. Many optical-sensing approaches exist, including stretchable waveguide- [13,14] or fiber-based sensors [15] and speckle-based sensors [16–18]. Among them, the speckle-based sensing approach enables highly sensitive detection when combined with an image correlation technique [16,17] or deep learning [18,19]. However, the image correlation technique is vulnerable to noise and lacks dynamic range [19]. Deep-learning-based techniques can achieve higher accuracy and consistency with a wider dynamic range [19] and multimodal sensing [18], but they normally require a substantial number of training samples and suffer from a heavy computational burden and limited adaptivity to environmental changes.

In this study, we employ the HDC concept for the efficient, adaptive, and accurate processing of optical sensing data with a low computational burden. Our implementation of HDC is based on optical encoding for HV generation: various types of sensory information can be encoded as a speckle pattern, which is used as a significantly long HV (typically with more than 250,000 dimensions). The high-dimensional distributed representation is achieved naturally through optical scattering, without additional computational burden. Similar speckle-based high-dimensional mapping techniques have already been utilized in extreme learning machines and reservoir computing [20–23]; however, their application to HDC has not been reported to date. The proposed HV generation approach enables accurate cognitive processing based on straightforward, low-precision operations without an iterative training process using a large amount of training data, in contrast to traditional deep-learning approaches. We apply the proposed approach to a soft touch sensor and a tactile sensor and demonstrate its features. This approach paves the way for an optical in-sensor computing paradigm that seamlessly integrates optical-sensing capabilities and information processing.

2. Principle and methods

2.1 Classification using HDC

Here, we briefly describe the use of HDC in cognitive processing tasks, such as classification [24]. HDC consists of an encoder and a memory and can achieve classification in a straightforward manner [Fig. 1(a)]. In HDC, the encoder is used to transform input data into HVs, while the memory is used to store and process the HVs. In general, HVs are represented as binary or bipolar vectors with $D$ dimensions, which are chosen independently from $\{0,1\}^D$ or $\{-1,1\}^D$. The probability of bit 1 is 0.5. Thus, the generated HVs are nearly orthogonal to each other.
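This near-orthogonality can be checked numerically; the following sketch (not from the paper; the dimension and seed are arbitrary) draws two independent random binary HVs and verifies that their normalized Hamming distance concentrates near 0.5:

```python
import numpy as np

# Two independent random binary HVs of dimension D; each bit is 1 with
# probability 0.5, so any given position disagrees with probability 0.5.
rng = np.random.default_rng(0)
D = 10_000
a = rng.integers(0, 2, size=D)
b = rng.integers(0, 2, size=D)

# Normalized Hamming distance: fraction of positions where the HVs differ.
dist = np.count_nonzero(a != b) / D
print(f"normalized Hamming distance: {dist:.3f}")  # concentrates near 0.5
```

At $D = 10{,}000$ the standard deviation of this fraction is about 0.005, so independently drawn HVs are reliably "dissimilar" in the Hamming sense.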


Fig. 1. (a) Classification using HDC. (b) Optical HV generation scheme using the speckle-based distributed representation of input sensory information. The touch sensory information is mapped into a speckle pattern, which is used to generate HVs.


Let $\{X_i,l_i\}_{i=1}^N$ be a training dataset with $N$ samples, where $X_i$ and $l_i$ are the $i$-th input vector and target label, respectively. The number of the target labels is defined as $L$. $X_i$ is encoded as HV $V_i$ through an encoding function $\Psi : X_i \rightarrow V_i$. Then, let $P_l$ be an HV representing class $l$, called a prototype vector. Let $\{V_k^l\}_{k=1}^{N_s}$ be a set of the HVs belonging to class $l$, where $k$ and $N_s$ are the sample index and the number of the HVs, respectively. For the sake of simplicity, we assumed $N_s = N/L$. The prototype vector $P_l$ for class $l$ is generated with point-wise addition as follows:

$$P_l = \left[V_1^l + V_2^l + \cdots + V_{N_s}^l\right],$$
where $\left [\cdot \right ]$ is the binarization operation used to transform any $D$-dimensional vector into a $D$-dimensional binary vector based on the majority rule. In this study, we used a simple majority rule, in which “0”(“1”) is taken if the number of “0”(“1”) is larger. The bias of adding an even number of HVs can be reduced by adding an extra random vector [25]; however, this bias problem was not addressed in this study. All trained prototype vectors, $\{P_1, P_2, \ldots, P_L\}$, are stored in a memory.
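The majority-rule bundling of Eq. (1) can be sketched as follows on a toy set of 8-dimensional HVs (the function name `bundle` and the data are illustrative, not from the paper; ties are broken toward "0" here, leaving the even-number bias unaddressed, as in the study):

```python
import numpy as np

def bundle(hvs):
    """Bundle binary HVs of one class into a prototype vector (Eq. (1)).

    hvs: (N_s, D) array of 0/1 vectors. Each output bit is "1" if "1"
    occurs in the majority of the N_s input HVs at that position.
    """
    hvs = np.asarray(hvs)
    counts = hvs.sum(axis=0)                      # number of 1s per dimension
    return (counts > hvs.shape[0] / 2).astype(np.uint8)

# Toy example: three 8-dimensional HVs belonging to one class.
hvs = np.array([[1, 0, 1, 1, 0, 0, 1, 0],
                [1, 1, 0, 1, 0, 1, 1, 0],
                [1, 0, 1, 0, 0, 0, 1, 1]])
print(bundle(hvs))  # -> [1 0 1 1 0 0 1 0]
```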

During the inference phase, unknown data can be classified as follows. First, the data value is mapped to a high-dimensional space using the same encoding scheme. This HV is called the query vector $V_q$. Then, the similarity between the generated query vector and all stored prototype vectors is measured. For any two binary HVs, $A = (A_1, \ldots, A_D)$ and $B = (B_1, \ldots, B_D)$, the similarity can be measured using the Hamming distance, which is given by

$$\mbox{Ham}(A,B) = \sum_{j=1}^D A_j \oplus B_j ,$$
where $\oplus$ is the XOR operation, which is unity if and only if arguments $A_j$ and $B_j$ differ; otherwise, it is zero. $\mbox {Ham}(A,B) = 0$ only for $A = B$, whereas $\mbox {Ham}(A,B) \approx 0.5D$ if $A$ and $B$ are nearly orthogonal or dissimilar. Finally, the unknown data are classified into class $l^*$, with which it has the highest similarity, that is, the shortest Hamming distance, as follows:
$$l^* = \mbox{argmin}_{l} \mbox{Ham}(P_l,V_q).$$
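Inference under Eqs. (2)–(3) is a nearest-neighbor search in Hamming distance. The sketch below uses random prototypes standing in for trained ones and a noisy copy of one prototype as the query; all names, labels, and the 5% noise level are illustrative assumptions:

```python
import numpy as np

def classify(query, prototypes):
    """Return the label l* minimizing Ham(P_l, V_q) [Eqs. (2)-(3)].

    query: (D,) binary HV; prototypes: dict mapping label -> (D,) binary HV.
    """
    distances = {l: int(np.count_nonzero(p != query))
                 for l, p in prototypes.items()}
    return min(distances, key=distances.get)

rng = np.random.default_rng(1)
D = 10_000
prototypes = {l: rng.integers(0, 2, size=D)
              for l in ["L1", "L2", "R1", "R2", "None"]}

# A query close to prototype "R1": flip 5% of its bits as noise.
query = prototypes["R1"].copy()
flip = rng.choice(D, size=D // 20, replace=False)
query[flip] ^= 1
print(classify(query, prototypes))  # -> R1
```

The query stays within Hamming distance $0.05D$ of its source prototype while remaining near $0.5D$ from the others, which is why the low-precision comparison is noise-robust.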

2.2 Optical hyperdimensional mapping for generating HVs

The core of HDC is its encoding process to map the input data into HVs. In conventional HDC, encoding methods are based on mathematical operations [24]. For example, a record-based encoding scheme utilizes two types of HVs, representing the feature position and feature value [24]. For the position encoding of a feature vector with $m$ elements, $m$ HVs are randomly generated. Then, each feature value is discretized to $n$ levels. The level values are represented by $n$ HVs, which in turn are generated such that HVs of neighbor levels are correlated. Details of this operation and various other encoding schemes can be found in the literature [26]. These standard encoding schemes require arithmetic operations, which are typically computationally expensive. In contrast to previous work, we did not use arithmetic operations in this study; rather, an optical scattering process was used for HV generation.
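As a rough sketch of the record-based scheme described above (the dimension, feature count $m$, number of levels $n$, and the bit-flip construction of correlated level HVs are all illustrative assumptions, not the paper's parameters):

```python
import numpy as np

rng = np.random.default_rng(2)
D, m, n = 10_000, 4, 8   # dimension, feature count, quantization levels

# One random HV per feature position.
pos_hvs = rng.integers(0, 2, size=(m, D))

# Level HVs built so that neighboring levels are correlated: start from a
# random HV and flip a fixed fraction of fresh bits at each step up the scale.
level_hvs = np.empty((n, D), dtype=np.int64)
level_hvs[0] = rng.integers(0, 2, size=D)
for v in range(1, n):
    level_hvs[v] = level_hvs[v - 1].copy()
    flip = rng.choice(D, size=D // (2 * (n - 1)), replace=False)
    level_hvs[v, flip] ^= 1

def encode(feature_values):
    """Record-based encoding: bind each position HV with its level HV (XOR),
    then bundle the m bound HVs by the majority rule."""
    bound = np.stack([pos_hvs[j] ^ level_hvs[v]
                      for j, v in enumerate(feature_values)])
    return (bound.sum(axis=0) > m / 2).astype(np.uint8)

hv = encode([0, 3, 5, 7])   # a 4-element feature vector, levels in 0..7
print(hv.shape)             # (10000,)
```

Every input thus costs $m$ XORs and one majority vote over $D$ bits, which is exactly the arithmetic the optical encoding below replaces with a scattering process.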

The proposed HV generation is based on optical encoding using the modulation sensitivity of speckle patterns in soft materials [Fig. 1(b)]. As is well known, optical scattering in a diffusive material is highly sensitive to external stimuli to the material [27,28]. Therefore, material deformation created by touch interactions is encoded as a speckle pattern. Thus, as opposed to conventional approaches, some arithmetic operations are skipped, and the memory for HV storage is not required in our optical approach.

The sensing of touch stimuli is treated as a classification problem using HDC (Fig. 2). To classify the stimulus information, speckle images are captured using a camera and then used as feature HVs. In our approach, the feature HVs are thresholded and binarized such that “0” and “1” appear with the same probability of 0.5. Within the same label, HVs are added to generate a prototype vector [see Eq. (1)]. The prototype vector contains the information features of each class. The test data, which are also obtained as speckle patterns, are mapped to a query HV and classified using the similarity measurement of each prototype vector [Eq. (3)].


Fig. 2. Overview of the HDC-based classification approach for optical soft sensors.


A remarkable feature of the optical approach is the natural generation of HVs with more than 100,000 dimensions using a simple optical setup. The dimensionality of the generated HVs depends on the number of pixels in the image sensor used to measure the speckle patterns. High dimensionality is important for the orthogonality between different HVs and for a reliable symbolic representation. However, longer HVs require more memory for storage, which normally forces them to be kept short: existing HDC approaches typically use HVs with 1,000–10,000 dimensions and must store them for each input value. Our approach generates HVs directly from an external stimulus; thus, a significant reduction in memory usage is expected.

3. Results

3.1 Optical touch sensor

3.1.1 Setup

Figure 3(a) shows an optical touch sensor used for an interface device. The sensing part consists of a transparent silicone elastomer, which is coupled with an optical fiber and a compact camera. Laser light is scattered inside the silicone and forms speckle patterns owing to the complex scattering process. The speckle patterns are captured using the camera. (See Supplement 1, Sections 1A–1C, for the details.) Considering that the speckle patterns change depending on the contact with the surface of the sensing part, the information concerning the contact action can be identified by learning the change characteristics. We used the HDC-based approach to identify the contact positions in the sensing part. As a proof-of-concept, we demonstrated the identification of contact positions, labeled as L1, L2, R1, R2, and None [Fig. 3(b)]. “None” represents no contact with the sensing part. In this experiment, the training data were collected automatically using a robotic arm.


Fig. 3. (a) Optical touch sensor. (b) Schematic of the touch sensor. Contact positions can be identified using the HDC-based approach.


The silicone surface was pushed at the positions labeled L1, L2, R1, and R2 by a solid indenter mounted on the robotic arm. The contact positions (L1, L2, R1, and R2) were randomly chosen. The pushing depth was estimated as < 2 mm. The resulting speckle patterns were measured using the camera. The temperature was approximately 23.3 $\pm$ 0.2 $^\circ$C. Data samples $\{X_i,l_i\}_{i=1}^{N_T}$ were collected, where $X_i$ and $l_i \in \{\mbox {L1, L2, R1, R2, None}\}$ represent the $i$-th speckle pattern and the label of contact position, respectively. $N_T = 500$ is the total number of collected data samples, and 80 ${\%}$ of the data samples were used for training (i.e., prototype vector generation).

3.1.2 Speckle, HVs, and prototype vectors

Each speckle pattern was trimmed into an image of approximately 500$\times$500 pixels, flattened, and thresholded to generate binary HVs with approximately 250,000 dimensions. Subsequently, the prototype vectors for each class were generated according to Eq. (1). Figure 4(a) shows examples of the speckle patterns, the corresponding HVs, and the corresponding prototype vectors, for each contact position (L1, L2, R1, R2, and None), where the HVs and prototype vectors are reshaped to the same shape as the corresponding speckle images for comparison. Different speckle patterns were formed for each contact. To demonstrate this feature, we measured the mean correlation between the speckle patterns,

$$C^{s}_{ll'} = \dfrac{1}{M}\sum_{k=1}^{N_s}\sum_{k'\ne k}^{N_s}\left( \dfrac{\langle (X_{lk}(i,j)-m^{x}_{lk})(X_{l'k'}(i,j)-m^{x}_{l'k'})\rangle_{ij} } { \sigma^{x}_{lk}\sigma^{x}_{l'k'}} \right),$$
where $X_{lk}(i,j)$ represents the $k$-th speckle pattern labeled by $l \in \{\mbox {L1, L2, R1, R2, None}\}$, $M = N_s(N_s-1)$, and $N_s = 100$. $(i,j)$ denotes the two-dimensional pixel coordinate in the speckle image. $\langle \cdot \rangle _{ij}$ represents the mean with respect to $i$ and $j$. $m^{x}_{lk}$ and $\sigma ^{x}_{lk}$ denote the mean and standard deviation of speckle pattern $X_{lk}(i,j)$, respectively. The correlation matrix $C^{s}_{ll'}$ is shown in Fig. 4(b). The diagonal elements of the matrix $C^{s}_{ll}$ (i.e., correlation values between speckle images belonging to the same class $(l = l')$) are larger than the non-diagonal elements $C^{s}_{ll'}$ $(l \ne l')$. However, $C^{s}_{ll}$ was at most 0.178. The maximum contrast between correlation values $\Delta C^{s} = \max _{ll'}|C^{s}_{ll}-C^{s}_{ll'}|$ was approximately 0.055. It is difficult to find any common features among the speckle images belonging to each class.
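The mean pairwise correlation of Eq. (4) can be sketched as follows, assuming the speckle images of the two classes are stacked in NumPy arrays (the function name and the random test data are illustrative; for uncorrelated random images the value should be near zero, consistent with the low off-diagonal values above):

```python
import numpy as np

def mean_cross_correlation(X_l, X_lp):
    """Mean normalized cross-correlation between two sets of speckle images,
    following Eq. (4): average the Pearson correlation over all ordered
    pairs (k, k') with k != k'.

    X_l, X_lp: (N_s, H, W) arrays of speckle images for classes l and l'.
    """
    A = X_l.reshape(len(X_l), -1)
    B = X_lp.reshape(len(X_lp), -1)
    # Z-score each image so the inner product becomes a Pearson correlation.
    A = (A - A.mean(axis=1, keepdims=True)) / A.std(axis=1, keepdims=True)
    B = (B - B.mean(axis=1, keepdims=True)) / B.std(axis=1, keepdims=True)
    C = A @ B.T / A.shape[1]        # all pairwise correlations
    np.fill_diagonal(C, 0.0)        # exclude k = k'
    N_s = len(A)
    return C.sum() / (N_s * (N_s - 1))

# Uncorrelated random "speckle" images give a mean correlation near zero.
rng = np.random.default_rng(3)
X = rng.random((20, 50, 50))
print(round(mean_cross_correlation(X, X), 3))
```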


Fig. 4. (a) Measured speckle patterns, binary HV images, and prototype vector images for each label. (b) Correlation matrix $C^{s}$ between speckle patterns. (c) Correlation matrix $C^{b}$ between HVs. (d) Correlation matrix $C^{p}$ between prototype vectors and binary HVs. (e) Correlation contrasts for speckle patterns $\Delta C^{s}$, binary HVs $\Delta C^{b}$, and prototype vectors $\Delta C^{p}$.


We also measured the mean correlation matrix between binary HVs,

$$C^{b}_{ll'} = \dfrac{1}{M}\sum_{k=1}^{N_s}\sum_{k'\ne k}^{N_s}\left( \dfrac{\langle (V_{lk}(i)-m^{v}_{lk})(V_{l'k'}(i)-m^{v}_{l'k'})\rangle_i } { \sigma^{v}_{lk}\sigma^{v}_{l'k'}} \right),$$
where $V_{lk}(i)$ represents the $i$-th component of the $k$-th sample HV labeled as $l$. $m^{v}_{lk}$ and $\sigma^{v}_{lk}$ denote the mean and standard deviation of $V_{lk}(i)$, respectively. The results are shown in Fig. 4(c). One can see a trend similar to that in Fig. 4(b): the correlation values were low even for $l = l'$, and the correlation contrast $\Delta C^{b} = \max_{ll'}|C^{b}_{ll}-C^{b}_{ll'}|$ for the binary HVs was approximately 0.036. However, the prototype vectors can aggregate the information of all HVs belonging to the same class through the bundling operation [Eq. (1)] and thereby constitute features representing each class. To gain further insight into the role of the prototype vectors, we computed them with 80 HVs for each class and measured the correlation matrix between the prototype vectors and binary HVs for each class,
$$C^{p}_{ll'} = \dfrac{1}{N_s}\sum_{k=1}^{N_s}\left( \dfrac{\langle (PV_{l}(i)-m^{p}_{l})(V_{l'k}(i)-m^{v}_{l'k})\rangle_i } { \sigma^{p}_{l}\sigma^{v}_{lk}} \right),$$
where $PV_{l}(i)$ represents the $i$-th component of the prototype vector labeled as $l$. $m^{p}_{l}$ and $\sigma ^{p}_{l}$ denote the mean and standard deviation of $PV_{l}(i)$, respectively. As seen in Fig. 4(d), the correlation matrix shows relatively high correlation values for the same class, $l = l'$. The correlation contrast $\Delta C^{p} = \max _{ll'}|C^{p}_{ll}-C^{p}_{ll'}|$ was approximately 0.1, which was higher than that for the speckle patterns and binary HVs [Fig. 4(e)]. This improvement suggests the capability of the prototype vectors to achieve better classifications.

3.1.3 Classification

Here, we discuss the classification of the contact positions. During the training phase, five prototype vectors were generated with $N = 80\times 5 = 400$ samples. For the performance evaluation, we used 100 different test samples and generated query HVs. The similarity between the query HVs and the prototype vectors was evaluated [Eq. (2)], and the contact positions (class labels) were identified. The classification results are shown in Fig. 5(a), exhibiting an accuracy of 100${\%}$. Figure 5(b) shows the dependence of the accuracy on the number of training samples $N$. For this simple task, only $N > 20$ training samples were needed for the accuracy to exceed 90${\%}$. This demonstrates the capability of the HDC-based approach to achieve high classification accuracy using only a few training samples.


Fig. 5. (a) Confusion matrix. The number of training samples, $N = 400$. (b) Accuracy dependence on the number of training samples $N$.


Another advantage of the proposed HDC-based sensing approach is its straightforward learning method, which does not require an iterative training process over a large number of parameters, as opposed to traditional machine learning. We measured the training time of the HDC-based approach as the time required to generate the prototype vectors: only 1.24 s for $N = 400$. For comparison, we also measured the training time, training accuracy, and test accuracy of several other machine-learning models. Table 1 presents the comparison in terms of training time and classification accuracy. We used Softmax regression, a convolutional neural network (CNN) with a single convolutional layer and max pooling, and a CNN with three convolutional layers and max pooling. For these computations, a personal computer (Apple Mac mini 2020, OS: macOS 13.2.1, CPU: Apple M1, memory: 16 GB) was used. The training of these machine-learning models was unstable because the number of training samples $N$ was limited to 400; the learning rate was therefore set to 0.0001 to ensure stable training. Their training time was measured as the time required for the accuracy to exceed 90${\%}$, and the training and test accuracies were taken as the maximum values. As shown in Table 1, the training time of the proposed HDC-based approach was significantly shorter than that of the Softmax regression and CNN models, and its test accuracy was slightly higher. The proposed approach thus reduces the computational burden of training and achieves high classification accuracy with only a limited number of data samples.


Table 1. Comparisons of training time and classification accuracy. The number of training samples was set as $N = 400$. Softmax regression, a CNN with a single convolutional layer and max pooling [CNN (1 layer)], and a CNN with three convolutional layers and max pooling [CNN (3 layer)] were used in the comparison.

3.1.4 Human-machine interface

In the above demonstration, a solid indenter was used to push the silicone material accurately with the same pressure and direction. However, when the proposed scheme is used in a human-machine interface, position identification for indentation under various conditions is required. We therefore investigated whether the sensor could identify contact with a person’s finger. Training samples were collected by repeatedly touching the surface of the sensing unit with a person’s index or middle finger [Fig. 6(a)]. The classification results are shown in Fig. 6(b). The accuracy was approximately 87.9${\%}$ for 20 test samples per label. The errors mainly occurred at positions R1, R2, and R3: R1 was confused with R2, whereas None was confused with R2 or R3. These contact positions are far from the camera, making it difficult to detect the optical signal that carries information about the deformation around them. A straightforward way to address this issue is to make the silicone material more diffusive by introducing scatterers inside it; the resulting stronger scattering would allow the optical signal containing contact information to be well detected by the camera.


Fig. 6. (a) Optical touch sensor as an interface device. L1, L2, L3, R1, R2, R3, and None can be identified with the HDC-based approach. (b) Confusion matrix.


3.1.5 Spatial resolution

The HDC-based sensing approach does not require the integration of multiple sensors or extensive wiring; spatially continuous position sensing is possible via optical scattering. To roughly estimate the spatial resolution for identifying contact positions, we measured the speckle patterns and corresponding HVs formed at a series of contact positions. In this experiment, the contact positions were shifted at 1-mm intervals, and the speckle patterns were measured at 51 contact positions, labeled $\{\text{“0”}, \text{“1”}, \ldots, \text{“50”}\}$ [see the inset in Fig. 7(a)]. The prototype vectors were computed using 80 samples for each position. Figure 7(a) shows the correlation matrix between the prototype vectors and the binary HVs. The correlation contrast shows that the sensor can distinguish contact positions at a 1-mm resolution, which is close to the positioning precision of the indenter used in this experiment. Figure 7(b) shows the identification performance for the 51 contact positions at a 1-mm resolution; the total accuracy was approximately 93${\%}$. These results suggest the scalability and high-resolution identification capability of the proposed speckle-based soft interface.


Fig. 7. (a) Correlation matrix between the prototype vectors and binary HVs. The inset in (a) shows the contact positions on the sensing unit. (b) Confusion matrix.


3.2 Adaptive update for varying conditions

Speckle-based sensing generally suffers from a stability issue because speckle patterns are highly sensitive to external environmental variations, including temperature changes and laser fluctuations. For example, in our soft interface device, the accuracy dropped to 32${\%}$ 16 days after training due to environmental change. To address this stability issue, an adaptive recalibration strategy can be incorporated [11]. This enables updates of the prototype vectors and recovers accuracy without requiring a large number of training samples, even under environmental changes.

In the update scheme [Fig. 8(a)], HVs are acquired in the new experimental environment, and new prototype vectors are computed. The number of acquired samples used for the newly computed prototype vectors, $N^{new}$, can be less than the number of samples used for the stored prototype vectors, $N^{old}$. The stored prototype vectors are then updated by merging them with the newly computed prototype vectors according to the weight parameter $p$ [Fig. 8(a)]. Specifically, an updated prototype vector is formed by randomly taking each element from the newly computed prototype vector with probability $p$ and otherwise keeping the element of the stored prototype vector. As a proof-of-concept, we measured HVs 16 days after training and updated the prototype vectors by this scheme. The total number of acquired samples was set as $N^{new} = N^{new}_s L$, where $N_s^{new}$ and $L = 5$ represent the number of acquired samples per prototype vector and the number of classes, respectively. Figure 8(b) shows the $N^{new}$-dependence of the classification accuracy for various values of $p$. With the stored (old) prototype vectors, the accuracy was 32${\%}$, but it recovered to 95${\%}$ when $p = 0.5$ or $0.75$ and $N^{new}$ was increased to 50, which is 1/8 of $N^{old}$. The adaptivity (i.e., how much information about the old environment is forgotten and replaced by that of the new environment) can be controlled by the parameter $p$.
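The element-wise merge can be sketched as follows, assuming binary prototype vectors stored as NumPy arrays (the function name, dimension, and probability value are illustrative):

```python
import numpy as np

def update_prototype(p_old, p_new, prob, rng):
    """Merge stored and newly computed prototype vectors: each element of
    the updated prototype is taken from p_new with probability prob and
    otherwise kept from p_old (the weight parameter p of Fig. 8(a))."""
    take_new = rng.random(p_old.shape) < prob
    return np.where(take_new, p_new, p_old)

rng = np.random.default_rng(4)
D = 10_000
p_old = rng.integers(0, 2, size=D)   # stand-ins for stored / new prototypes
p_new = rng.integers(0, 2, size=D)

updated = update_prototype(p_old, p_new, prob=0.5, rng=rng)
# Positions where old and new agree are unchanged; roughly half of the
# disagreeing positions now follow the new prototype.
frac_from_new = np.count_nonzero(updated == p_new) / D
print(f"fraction matching new prototype: {frac_from_new:.2f}")
```

Setting `prob` near 1 discards the old environment quickly, whereas a small `prob` retains it; this is the forgetting knob described above.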


Fig. 8. (a) Adaptive recalibration scheme based on the update of the prototype vectors. (b) Classification accuracy vs $N^{new}$. For $N^{new}/N^{old} =1/8$, the accuracy recovered to 95${\%}$ from 32${\%}$.


3.3 Robotic finger for tactile sensing

Finally, we demonstrated that the proposed optical approach can be applied to a robotic finger for tactile identification. Figures 9(a) and 9(b) show the developed tactile sensor deployed as a robotic finger. The simple sensor consists of a soft silicone elastomer coupled to an optical fiber and a compact camera [Fig. 9(c)]. (See Supplement 1, Section 1D, for further details.) The tactile sensor differs from a vision-based tactile sensor [29] in that no markers are used in the proposed approach. Instead, the speckle patterns were measured using the camera, because the speckle-based approach allows for high sensitivity [18].


Fig. 9. (a) Optical tactile sensor mounted as a robotic finger. (b) Enlarged view of (a). (c) Dimensions and structure of the tactile sensor. (d) Confusion matrix.


In the training phase, this sensor was repeatedly touched with three objects, namely paper and two sandpapers with grit sizes of 40 and 120, and the training samples were collected. (The grit size of a sandpaper refers to the particle size of the abrasive material embedded in it.) During training, we used 80 samples for each label ($N = 80\times 3 = 240$) and generated the prototype vectors. The performance was evaluated using 120 test samples. The results are shown in Fig. 9(d); the identification accuracy for the touched objects was 100${\%}$.

4. Conclusion

In this study, we presented an optical approach for HV generation in HDC. The proposed approach was applied to a soft interface for contact-position identification and to an optical tactile sensor for touched-object identification. The proposed approach enables fast and efficient cognitive processing without requiring an iterative training process over a large amount of data. The sensors can be adaptively recalibrated to maintain high classification accuracy by updating the prototype vectors. Such computational efficiency and adaptivity contrast with those of traditional deep-learning techniques.

Practically, it is important to improve the identification capability of the interface device and tactile sensor. This can be achieved by increasing the number of optical fibers and/or introducing scatterers inside the elastomer for increased diffusivity. In addition, the HDC-based approach can be extended to infer continuous values using a recently developed algorithm [30].

The proposed approach enables the resolution of various issues inherent in speckle-based sensing techniques and can be utilized for low-latency and adaptive in-sensor optical processing.

Funding

Japan Society for the Promotion of Science (JP22H01426, JP22H05198, JP22K18792); Precursory Research for Embryonic Science and Technology (JPMJPR19M4).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

Supplemental document

See Supplement 1 for supporting content.

References

1. P. Kanerva, “Hyperdimensional computing: An introduction to computing in distributed representation with high-dimensional random vectors,” Cognitive Comput. 1(2), 139–159 (2009).

2. D. Kleyko, D. A. Rachkovskij, E. Osipov, et al., “A survey on hyperdimensional computing aka vector symbolic architectures, part I: Models and data transformations,” ACM Comput. Surv. 55(6), 1–40 (2023).

3. D. Kleyko, D. Rachkovskij, E. Osipov, et al., “A survey on hyperdimensional computing aka vector symbolic architectures, part II: Applications, cognitive models, and challenges,” ACM Comput. Surv. 55(9), 1–52 (2023).

4. G. Karunaratne, M. Le Gallo, G. Cherubini, et al., “In-memory hyperdimensional computing,” Nat. Electron. 3(6), 327–337 (2020).

5. A. Rahimi, P. Kanerva, and J. M. Rabaey, “A robust and energy-efficient classifier using brain-inspired hyperdimensional computing,” in Proceedings of the 2016 International Symposium on Low Power Electronics and Design (ISLPED ’16) (Association for Computing Machinery, New York, NY, USA, 2016), pp. 64–69.

6. M. Imani, D. Kong, A. Rahimi, et al., “VoiceHD: Hyperdimensional computing for efficient speech recognition,” in 2017 IEEE International Conference on Rebooting Computing (ICRC) (2017), pp. 1–8.

7. P. Neubert, S. Schubert, and P. Protzel, “An introduction to hyperdimensional computing for robotics,” KI - Künstliche Intelligenz 33(4), 319–330 (2019).

8. P. Kanerva, “What we mean when we say ‘What’s the dollar of Mexico?’: Prototypes and mapping in concept space,” in AAAI Fall Symposium: Quantum Informatics for Cognitive, Social, and Semantic Processes, vol. FS-10-08 (2010), pp. 2–8.

9. A. Rahimi, A. Tchouprina, P. Kanerva, et al., “Hyperdimensional computing for blind and one-shot classification of EEG error-related potentials,” Mobile Netw. Appl. 25(5), 1958–1969 (2020).

10. M. Imani, Y. Kim, S. Riazi, et al., “A framework for collaborative learning in secure high-dimensional space,” in 2019 IEEE 12th International Conference on Cloud Computing (CLOUD) (2019), pp. 435–446.

11. A. Moin, A. Zhou, A. Rahimi, et al., “A wearable biosensing system with in-sensor adaptive machine learning for hand gesture recognition,” Nat. Electron. 4(1), 54–63 (2020).

12. M. I. Tiwana, S. J. Redmond, and N. H. Lovell, “A review of tactile sensing technologies with applications in biomedical engineering,” Sens. Actuators, A 179, 17–31 (2012).

13. H. Zhao, K. O’Brien, S. Li, et al., “Optoelectronically innervated soft prosthetic hand via stretchable optical waveguides,” Sci. Robot. 1(1), eaai7529 (2016).

14. T. Kim, S. Lee, T. Hong, et al., “Heterogeneous sensing in a multifunctional soft sensor for human-robot interfaces,” Sci. Robot. 5(49), eabc6878 (2020).

15. L. Xu, N. Liu, J. Ge, et al., “Stretchable fiber-Bragg-grating-based sensor,” Opt. Lett. 43(11), 2503–2506 (2018).

16. E. Fujiwara and L. de Oliveira Rosa, “Agar-based soft tactile transducer with embedded optical fiber specklegram sensor,” Results Opt. 10, 100345 (2023).

17. E. Fujiwara, Y. T. Wu, M. F. M. dos Santos, et al., “Development of a tactile sensor based on optical fiber specklegram analysis and sensor data fusion technique,” Sens. Actuators, A 263, 677–686 (2017).

18. S. Shimadera, K. Kitagawa, K. Sagehashi, et al., “Speckle-based high-resolution multimodal soft sensing,” Sci. Rep. 12(1), 13096 (2022).

19. D. L. Smith, L. V. Nguyen, D. J. Ottaway, et al., “Machine learning for sensing with a multimode exposed core fiber specklegram sensor,” Opt. Express 30(7), 10443–10455 (2022).

20. S. Sunada, K. Kanno, and A. Uchida, “Using multidimensional speckle dynamics for high-speed, large-scale, parallel photonic computing,” Opt. Express 28(21), 30349–30361 (2020).

21. S. Sunada and A. Uchida, “Photonic neural field on a silicon chip: large-scale, high-speed neuro-inspired computing and sensing,” Optica 8(11), 1388–1396 (2021).

22. U. Paudel, M. Luengo-Kovac, J. Pilawa, et al., “Classification of time-domain waveforms using a speckle-based optical reservoir computer,” Opt. Express 28(2), 1225–1237 (2020).

23. M. Rafayelyan, J. Dong, Y. Tan, et al., “Large-scale optical reservoir computing for spatiotemporal chaotic systems prediction,” Phys. Rev. X 10(4), 041037 (2020). [CrossRef]  

24. L. Ge and K. K. Parhi, “Classification using hyperdimensional computing: A review,” IEEE Circuits Syst. Mag. 20(2), 30–47 (2020). [CrossRef]  

25. M. Schmuck, L. Benini, and A. Rahimi, “Hardware optimizations of dense binary hyperdimensional computing: Rematerialization of hypervectors, binarized bundling, and combinational associative memory,” J. Emerg. Technol. Comput. Syst. 15(4), 1–25 (2019). [CrossRef]  

26. S. Aygun, M. S. Moghadam, M. H. Najafi, et al., “Learning from hypervectors: A survey on hypervector encoding,” arXiv, arXiv:2308.00685 (2023). [CrossRef]  

27. J. W. Goodman, “Some fundamental properties of speckle*,” J. Opt. Soc. Am. 66(11), 1145–1150 (1976). [CrossRef]  

28. J. W. Goodman, Speckle phenomena in optics: theory and applications (Roberts and Company Publishers, 2007).

29. A. Yamaguchi and C. G. Atkeson, “Combining finger vision and optical tactile sensing: Reducing and handling errors while cutting vegetables,” in 2016 IEEE-RAS 16th International Conference on Humanoid Robots (Humanoids), (2016), pp. 1045–1051.

30. A. Hernández-Cano, C. Zhuo, X. Yin, et al., “Reghd: Robust and efficient regression in hyper-dimensional learning system,” in 2021 58th ACM/IEEE Design Automation Conference (DAC), (2021), pp. 7–12.

Supplementary Material (1)

Supplement 1

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.



Figures (9)

Fig. 1.
Fig. 1. (a) Classification using HDC. (b) Optical HV generation scheme using the speckle-based distributed representation of input sensory information. The touch sensory information is mapped into a speckle pattern, which is used to generate HVs.
Fig. 2.
Fig. 2. Overview of the HDC-based classification approach for optical soft sensors.
Fig. 3.
Fig. 3. (a) Optical touch sensor. (b) Schematic of the touch sensor. Contact positions can be identified using the HDC-based approach.
Fig. 4.
Fig. 4. (a) Measured speckle patterns, binary HV images, and prototype vector images for each label. (b) Correlation matrix $C^{s}$ between speckle patterns. (c) Correlation matrix $C^{b}$ between HVs. (d) Correlation matrix $C^{p}$ between prototype vectors and binary HVs. (e) Correlation contrasts for speckle patterns $\Delta C^{s}$, binary HVs $\Delta C^{b}$, and prototype vectors $\Delta C^{p}$.
Fig. 5.
Fig. 5. (a) Confusion matrix. The number of training samples, $N = 400$. (b) Accuracy dependence on the number of training samples $N$.
Fig. 6.
Fig. 6. (a) Optical touch sensor as an interface device. L1, L2, L3, R1, R2, R3, and None can be identified with the HDC-based approach. (b) Confusion matrix.
Fig. 7.
Fig. 7. (a) Correlation matrix between the prototype vectors and binary HVs. The inset in (a) shows the contact positions on the sensing unit. (b) Confusion matrix.
Fig. 8.
Fig. 8. (a) Adaptive recalibration scheme based on the update of the prototype vectors. (b) Classification accuracy vs. $N^{new}$. For $N^{new}/N^{old} = 1/8$, the accuracy recovered from 32% to 95%.
Fig. 9.
Fig. 9. (a) Optical tactile sensor mounted as a robotic finger. (b) Enlarged view of (a). (c) Dimensions and structure of the tactile sensor. (d) Confusion matrix.

Tables (1)


Table 1. Comparisons of training time and classification accuracy. The number of training samples was set to $N = 400$. Softmax regression, a CNN with a single convolutional layer and max pooling [CNN (1 layer)], and a CNN with three convolutional layers and max pooling [CNN (3 layers)] were used in the comparison.

Equations (6)


$$P^{l} = \left[ V_{1}^{l} + V_{2}^{l} + \cdots + V_{N_s}^{l} \right],$$

$$\mathrm{Ham}(A, B) = \sum_{j=1}^{D} A_{j} \oplus B_{j},$$

$$l^{*} = \operatorname*{arg\,min}_{l} \, \mathrm{Ham}(P^{l}, V^{q}),$$

$$C_{ll'}^{s} = \frac{1}{M} \sum_{k=1}^{N_s} \sum_{k' \neq k}^{N_s} \frac{\left\langle \left( X_{l}^{k}(i,j) - m_{lk}^{x} \right)\left( X_{l'}^{k'}(i,j) - m_{l'k'}^{x} \right) \right\rangle_{ij}}{\sigma_{lk}^{x} \, \sigma_{l'k'}^{x}},$$

$$C_{ll'}^{b} = \frac{1}{M} \sum_{k=1}^{N_s} \sum_{k' \neq k}^{N_s} \frac{\left\langle \left( V_{l}^{k}(i) - m_{lk}^{v} \right)\left( V_{l'}^{k'}(i) - m_{l'k'}^{v} \right) \right\rangle_{i}}{\sigma_{lk}^{v} \, \sigma_{l'k'}^{v}},$$

$$C_{ll'}^{p} = \frac{1}{N_s} \sum_{k=1}^{N_s} \frac{\left\langle \left( P^{l}(i) - m_{l}^{p} \right)\left( V_{l'}^{k}(i) - m_{l'k}^{v} \right) \right\rangle_{i}}{\sigma_{l}^{p} \, \sigma_{l'k}^{v}}.$$