## Abstract

A block-based compressive imaging (BCI) system using a sequential architecture is presented in this paper. Feature measurements are collected using the principal component analysis (PCA) projection. The linear Wiener operator and a nonlinear method based on the Field-of-Expert (FoE) prior model are used for object reconstruction. Experimental results are given to demonstrate the superior reconstruction performance of the FoE-based method over the Wiener operator. In addition, the effects of system parameters, such as the object block size, the number of features per block, and the noise level, on the BCI reconstruction performance are discussed for different kinds of objects. An optimal block size is then defined and studied for BCI.

© 2012 Optical Society of America

## 1. Introduction

Compressive imaging (CI) [1, 2] is an image acquisition technique which makes use of linear combinations of object pixels, known as features, as system measurements. Compared to conventional imaging, which essentially collects an object’s isomorphic copy, CI has several advantages, such as fewer measurements, lower requirements for data storage and transmission, lower energy consumption, and better reconstruction performance at small measurement signal-to-noise ratios (SNRs) [3]. CI has been extensively studied in magnetic resonance imaging (MRI) [4], holography [5–7], spectroscopy [8, 9], and distributed sensor networks [10] for applications such as reconstruction, detection, and recognition [11–13]. To implement a CI system, sequential, parallel, and photon-sharing architectures have been discussed [3]. Projections such as principal component analysis (PCA), discrete cosine transform (DCT), Hadamard, and random projections are used to collect features. For object reconstruction, the linear Wiener operator and nonlinear methods using the *ℓ*_{1} [14, 15] or total variation (TV) [16] regularization have been discussed for static and adaptive CI [3, 17].

Despite significant advances in compressive imaging, a main challenge remains in its implementation for large objects. In a conventional CI system, an object is imaged as a unit. For a large object, although the number of feature measurements is much smaller than the number of object pixels, it is generally still large if fine details of the object need to be recovered. For example, for an object of size 1024 × 1024, even if the number of features is $\frac{1}{16}$ of the number of pixels, 65536 features still need to be collected. On the other hand, whether for the sequential, parallel, or photon-sharing CI architecture, the feature measurement SNR is reduced as the number of features increases, because more noise and less object intensity are collected in each feature measurement [3]. Therefore, it is difficult to obtain a small reconstruction error for a large object using a conventional system; in other words, the object size is limited in conventional CI.

To overcome this difficulty, we conducted an initial study of a block-based CI (BCI) system [18]. With an example, we demonstrated that it is feasible to image a large object, such as one of size 1024 × 768, by collecting features for each block of the object. In this paper, we continue this imaging strategy. Different from other work on BCI, we discuss the technique with an optical system implementation. Detector noise, measurement SNR, and their effects on system reconstruction performance are studied in detail. We also investigate in particular the effect of the block size on the system performance. This is because in BCI, a common concern is the blocking artifact in reconstruction [19, 20]. Various kinds of smoothing methods have been used to tackle this issue, such as using a Wiener filter [20] or the total variation (TV) regularization [21] to postprocess the initial reconstructions obtained in the spatial domain, or averaging overlapped block reconstructions [19]. Here, we propose to use a nonlinear method based on an object prior model, Field-of-Expert (FoE) [22], to deal with the blocking issue. Besides using an object prior, the linear filters in the FoE model incorporate the spatial correlation among the blocks, hence they are not sensitive to the size of an object block. Due to the use of the trained object prior knowledge and of the correlation among blocks for reconstruction, the FoE-based method is expected to present improved performance.

The paper is organized as follows. In Section 2, a sequential architecture BCI is discussed with the PCA projection. Then the linear Wiener operator and a nonlinear method based on the FoE object prior model are studied for object reconstruction. In Section 3.1, experimental results using both methods are presented for comparison. In Section 3.2, an optimal block size for BCI is defined. System parameters such as the number of features per block, the detector noise, the reconstruction error, and the connections among these parameters are discussed. In Section 4, we draw the conclusions for this work.

## 2. Block-based compressive imaging

#### 2.1. Block-based compressive imaging system

Sequential architecture is used in all known CI experimental systems for making feature measurements, because it is easy to implement. Figure 1 presents a sequential architecture BCI system diagram. An object is imaged onto a spatial light modulator (SLM) for block-based modulation. Then the modulated image is refocused and detected by a CCD array. Each pixel of the CCD is assumed to collect the feature measurements of an object block. Thus, if an object has *K* blocks, assuming *K* is a square number, a CCD with
$\sqrt{K}\times \sqrt{K}$ pixels is used. In addition, for a sequential architecture BCI, to collect *M* features for a block, the detector is exposed *M* times sequentially. During each detector exposure, a modulation pattern defined by a projection vector is displayed on the SLM, with the corresponding feature collected by the detector. In this work, PCA projection vectors [23] are used for feature collection, because they are optimal according to the reconstruction minimum mean square error (MMSE) criterion in the small detector noise case. Compared to random sensing matrices, the PCA matrix also presents a better reconstruction performance [3].
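As a minimal sketch of how such a projection matrix can be formed from data (the block size, feature count, and random training blocks below are hypothetical stand-ins, not the paper's training set):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training blocks: K' sample block vectors of length N,
# stacked as columns (the paper estimates R_x from 100,000 samples).
N, M, K_train = 16, 4, 1000          # 4x4 blocks, 4 features per block
X_train = rng.standard_normal((N, K_train))

# Estimate the block autocorrelation matrix R_x = (1/K') sum_i x_i x_i^T.
R_x = X_train @ X_train.T / K_train

# PCA projection: the M eigenvectors of R_x with the largest eigenvalues.
d, Q = np.linalg.eigh(R_x)           # eigh returns ascending eigenvalues
order = np.argsort(d)[::-1]          # re-sort into descending order
H = Q[:, order[:M]].T                # projection matrix, size M x N
```

Each SLM modulation pattern then corresponds to one row of `H`, reshaped back to the 2D block geometry.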

We assume an object block is of size
$\sqrt{N}\times \sqrt{N}$ (similarly, taking *N* to be a square number). The pixels of a block are lexicographically ordered and represented by a vector **x** of size *N* × 1. Accordingly, there are *K* blocks for an object of size
$\sqrt{KN}\times \sqrt{KN}$. These blocks are also lexicographically ordered. Therefore, the object can be represented as a matrix X of size *N* × *K*, with each column corresponding to one block. In the case of collecting *M* features for each block, we have a projection matrix H (dimensions *M* × *N*), the same for all blocks. With these definitions, the feature measurements in a BCI system can be represented as

$$\text{Y}=\text{H}\text{X}+\text{N}, \qquad (1)$$

where Y is of size *M* × *K* and each of its columns is the feature vector **y** of an object block. The matrix N, also of size *M* × *K*, represents the additive white Gaussian detector noise 𝒩(0, ${\sigma}_{0}^{2}$) with energy per bandwidth equal to ${\sigma}_{0}^{2}$.
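A minimal sketch of this block decomposition and measurement model, assuming an illustrative object size and a random stand-in for the projection matrix (not the PCA projection of Section 2.1):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 12x12 object split into 4x4 blocks (so N = 16, K = 9);
# the sizes and the random projection are illustrative, not the paper's.
side, b = 12, 4
obj = rng.random((side, side))

# Lexicographically order the blocks into X of size N x K.
N = b * b
K = (side // b) ** 2
blocks = [obj[r:r + b, c:c + b].reshape(N)
          for r in range(0, side, b)
          for c in range(0, side, b)]
X = np.stack(blocks, axis=1)                  # N x K

# Feature measurements Y = H X + noise, with an M x N projection H
# (random stand-in) and additive white Gaussian detector noise.
M, sigma0 = 6, 0.01
H = rng.standard_normal((M, N))
Y = H @ X + sigma0 * rng.standard_normal((M, K))
```

Each column of `Y` holds the *M* features of one block, i.e., what one CCD pixel accumulates over the *M* sequential exposures.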

Note that the total detector exposure time is assumed to be *T*_{0}. If this exposure time is uniformly distributed over all *M* features, then the time to collect one feature is *T*_{0}/*M*. Because the bandwidth of a detector is inversely proportional to its exposure time, the detector noise energy, or the noise variance in a feature, becomes ${\sigma}^{2}=M{\sigma}_{0}^{2}/{T}_{0}$. On the other hand, the measured object intensity in a feature is proportional to *T*_{0}/*M*. Therefore, as *M* becomes larger, the measurement SNR decreases and the reconstruction performance of BCI is degraded, due to the reduced object intensity and the increased noise variance. However, as *M* increases, more object features can be used for reconstruction, which benefits the performance of a BCI system. These two factors balance each other at an optimal *M*_{opt} [3, 10]. For *M* < *M*_{opt}, the truncation error caused by the shortage of feature measurements is the dominant error; otherwise, the measurement error becomes the main factor deciding the reconstruction performance.
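The exposure-splitting arithmetic above can be sketched directly; *T*_{0} and *σ*_{0} below are arbitrary illustrative values, not system measurements:

```python
# Illustrative exposure-splitting arithmetic; T0 and sigma0 are
# arbitrary stand-in values, not measured system parameters.
T0, sigma0 = 1.0, 0.1

def feature_noise_var(M):
    # Detector bandwidth grows as 1/exposure, so the noise variance
    # in one feature is sigma^2 = M * sigma0^2 / T0.
    return M * sigma0**2 / T0

def feature_intensity(M, total=1.0):
    # The object intensity measured in one feature scales as T0/M.
    return total * T0 / M

def feature_snr(M):
    # Amplitude SNR = intensity / noise standard deviation, which
    # therefore falls off as M^(-3/2).
    return feature_intensity(M) / feature_noise_var(M) ** 0.5
```

For instance, quadrupling *M* quadruples the noise variance while cutting the per-feature intensity to a quarter, so the amplitude SNR drops by a factor of 8.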

#### 2.2. The Wiener operator and the FoE-based method

In BCI, feature measurements are collected independently for each block. In this work, the Wiener operator and a nonlinear FoE-based method are used for object reconstruction. The Wiener operator, which is optimal under the MMSE criterion, is used to reconstruct each object block independently. It is defined as

$$\text{W}={\text{R}}_{\mathbf{x}}{\text{H}}^{\text{T}}{\left(\text{H}{\text{R}}_{\mathbf{x}}{\text{H}}^{\text{T}}+{\sigma}^{2}\text{I}\right)}^{-1}, \qquad (2)$$

where ${\text{R}}_{\mathbf{x}}$ of size *N* × *N* is the autocorrelation matrix of an object block **x**, and I is an identity matrix of size *M* × *M*. Using the Wiener operator, we can reconstruct the object as X_{est} = WY.
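A small numerical sketch of this block-wise Wiener reconstruction, assuming a synthetic correlated block ensemble and a random stand-in projection (not the PCA projection or the paper's data):

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative sizes; H is a random stand-in for the PCA projection.
N, M, K = 16, 8, 50
sigma = 0.05

# A synthetic autocorrelation R_x and blocks drawn consistently with it.
A = rng.standard_normal((N, N))
R_x = A @ A.T / N                          # symmetric positive semidefinite
L = np.linalg.cholesky(R_x + 1e-9 * np.eye(N))
X = L @ rng.standard_normal((N, K))        # block columns with cov ~ R_x

H = rng.standard_normal((M, N))
Y = H @ X + sigma * rng.standard_normal((M, K))

# Wiener operator W = R_x H^T (H R_x H^T + sigma^2 I)^{-1}, applied
# to every feature column at once: X_est = W Y.
W = R_x @ H.T @ np.linalg.inv(H @ R_x @ H.T + sigma**2 * np.eye(M))
X_est = W @ Y
```

Because `W` only depends on `R_x`, `H`, and the noise level, it is computed once and reused for all *K* blocks.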

Although there are other nonlinear methods for reconstruction in CI, such as the conventional TV method and the *ℓ*_{1}-norm sparsity method, these methods focus on reconstructing each block independently. Therefore, they inherently cannot overcome the blocking issue in BCI. Another nonlinear method is the BM3D method [24], which generates multiple overlapped object block estimations and then aggregates them to obtain the final reconstructed object. Here a simpler nonlinear method based on Field-of-Expert (FoE) is discussed. FoE was developed as a framework to learn the prior of an image “patch”, or a set of image pixels [22]. It is a modified version of another prior model, Products-of-Experts (PoE). Both models assume the image pixels satisfy a statistical distribution in a projection domain, and are therefore defined using a set of linear filters. More specifically, this means the convolutions of the filters with an image patch can be described accurately using a product of multiple Student-t distributions [22]. Note that the linear filters are different from the PCA projection vectors defined in Section 2.1, and an image patch is not required to be the same as an object block. Compared to PoE, FoE presents advantages such as insensitivity to the size of an image patch [22]. Because of this advantage, the linear filters in FoE define object prior knowledge beyond the limitation of individual image patches or object blocks. Thus, we expect the blocking issue discussed in the introduction can be avoided using the FoE model.

Before continuing the study of FoE, several notations are explained first. In Section 2.1, we represent an object and one of its blocks as a matrix X of size *N* × *K* and a vector **x** of size *N* × 1, respectively. Following the same strategy, here we represent an image of size $\sqrt{\tilde{K}\tilde{N}}\times \sqrt{\tilde{K}\tilde{N}}$ and its *k̃*^{th} image patch of size $\sqrt{\tilde{N}}\times \sqrt{\tilde{N}}$ using a matrix X̃ of size *Ñ* × *K̃* and a vector **x̃**(*k̃*) of size *Ñ* × 1. Figure 2 presents one example of an object consisting of 9 blocks and an image of the object with 4 image patches marked by the dashed lines.

In FoE, an image X̃ is assumed to be a multidimensional random vector with the probability density function (pdf)

$$p(\tilde{\text{X}}|\Theta)=\frac{1}{Z(\Theta)}\prod_{\tilde{k}=1}^{\tilde{K}}\prod_{i=1}^{L}\phi\left({\mathbf{g}}_{i}^{\text{T}}\tilde{\mathbf{x}}(\tilde{k});{\alpha}_{i}\right), \qquad (3)$$

where *Z*(Θ) is the normalizing function for the pdf, Θ = {*θ*_{i}} = {*α*_{i} > 0, **g**_{i}}, **g**_{i} is the *i*^{th} of *L* linear filters of size *Ñ* × 1, and the Student-t expert *ϕ*(·) has the form

$$\phi\left({\mathbf{g}}_{i}^{\text{T}}\tilde{\mathbf{x}}(\tilde{k});{\alpha}_{i}\right)={\left(1+\frac{1}{2}{\left({\mathbf{g}}_{i}^{\text{T}}\tilde{\mathbf{x}}(\tilde{k})\right)}^{2}\right)}^{-{\alpha}_{i}}. \qquad (4)$$

Note that *θ*_{i} = {*α*_{i}, **g**_{i}} is the same for all image patches **x̃**(*k̃*). Such a homogeneous property ensures the insensitivity of the filters to the size of an image patch [22]. To simplify notations, the pdf can be re-written as

$$p(\tilde{\text{X}}|\Theta)=\frac{1}{Z(\Theta)}\exp\left(-{E}_{\text{FoE}}(\tilde{\text{X}},\Theta)\right), \qquad (5)$$

with the energy

$${E}_{\text{FoE}}(\tilde{\text{X}},\Theta)=-\sum_{\tilde{k}=1}^{\tilde{K}}\sum_{i=1}^{L}\log \phi\left({\mathbf{g}}_{i}^{\text{T}}\tilde{\mathbf{x}}(\tilde{k});{\alpha}_{i}\right). \qquad (6)$$

The next task is to train the model parameters *θ*_{i} = {*α*_{i}, **g**_{i}}. To do so, a set of training samples is used to maximize the likelihood of the image, *p*(X̃|Θ). This is equivalent to minimizing the Kullback-Leibler divergence [22, 25, 26] between the model and the data distribution. Because there is no closed-form solution to the problem, gradient ascent on the log-likelihood is performed to update the parameters. The iterative algorithm for searching *θ*_{i} is therefore defined as

$$\delta {\theta}_{i}=\eta \left[{\left\langle \frac{\partial {E}_{\text{FoE}}}{\partial {\theta}_{i}}\right\rangle}_{{p}_{j}}-{\left\langle \frac{\partial {E}_{\text{FoE}}}{\partial {\theta}_{i}}\right\rangle}_{{p}_{0}}\right], \qquad (7)$$

where *η* is a user-defined learning rate, 〈·〉_{p₀} denotes the expectation value with respect to the data distribution, and 〈·〉_{p_j} denotes the expectation value with respect to the distribution after *j* Markov chain Monte Carlo (MCMC) iterations [27]. In this work, *j* = 1 MCMC iteration is used in each parameter update step [22, 26]. More details about the FoE model and the parameter update algorithm can be found in the literature [22, 26, 27].
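The Student-t expert and the derivative of its logarithm, which drive both the training and the reconstruction updates, can be sketched as follows; the filters, patch, and parameter values passed to `patch_energy` are hypothetical stand-ins, not trained FoE filters:

```python
import math

# Student-t expert phi(y; alpha) = (1 + y^2/2)^(-alpha) of the FoE model,
# and the derivative of its log, needed for gradient-based updates.
def log_phi(y, alpha):
    return -alpha * math.log(1.0 + 0.5 * y * y)

def psi(y, alpha):
    # d/dy log phi(y; alpha) = -alpha * y / (1 + y^2/2)
    return -alpha * y / (1.0 + 0.5 * y * y)

# FoE energy of one patch: E = -sum_i log phi(g_i^T x; alpha_i).
# The filters g_i and patch x here are small illustrative lists.
def patch_energy(x, filters, alphas):
    return -sum(log_phi(sum(gi * xi for gi, xi in zip(g, x)), a)
                for g, a in zip(filters, alphas))
```

With one filter `[1.0, 0.0]` and `alpha = 2.0`, a patch `[0.5, 0.3]` has filter response 0.5 and energy `2*log(1.125)`.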

The FoE model has been used for image denoising and inpainting problems [22]. In the former, the original object X̂ (dimensions $\sqrt{KN}\times \sqrt{KN}$) is estimated from its noisy isomorphic measurement Ŷ (dimensions $\sqrt{KN}\times \sqrt{KN}$) with white Gaussian noise. The estimation is performed by maximizing the posterior probability *p*(X̂|Ŷ) ∝ *p*(Ŷ|X̂) · *p*(X̂), where the likelihood *p*(Ŷ|X̂) is

$$p(\hat{\text{Y}}|\hat{\text{X}})\propto \prod_{i}\exp\left(-\frac{{\left({\hat{y}}_{i}-{\hat{x}}_{i}\right)}^{2}}{2{\sigma}^{2}}\right), \qquad (8)$$

with *x̂*_{i} and *ŷ*_{i} representing the *i*^{th} pixel of the original object and the noisy measurement, respectively. The object pdf is defined using the FoE model. To find the object estimate, the gradient of the log-likelihood

$${\nabla}_{\hat{\text{X}}}\log p(\hat{\text{Y}}|\hat{\text{X}})=\frac{1}{{\sigma}^{2}}\left(\hat{\text{Y}}-\hat{\text{X}}\right) \qquad (9)$$

and the gradient of the log-prior defined as

$${\nabla}_{\hat{\text{X}}}\log p(\hat{\text{X}})=\sum_{i=1}^{L}{\text{G}}_{i}^{-}\ast {\psi}_{i}\left({\text{G}}_{i}\ast \hat{\text{X}}\right), \quad {\psi}_{i}(s)=\frac{\partial \log \phi (s;{\alpha}_{i})}{\partial s}=\frac{-{\alpha}_{i}s}{1+{s}^{2}/2}, \qquad (10)$$

are used, where *ψ*_{i}(·) is applied element-wise to the filter response *s*, G_{i} is the 2D linear filter corresponding to the filter **g**_{i}, G_{i} ∗ X̂ is the convolution of an object X̂ with filter G_{i}, and ${\text{G}}_{i}^{-}$ denotes the filter obtained by mirroring G_{i} around its center pixel. Using *t* for the iteration index, *η*′ for the update rate, and *λ* for an optional weight, the gradient ascent denoising algorithm [22] is

$${\hat{\text{X}}}^{(t+1)}={\hat{\text{X}}}^{(t)}+{\eta}^{\prime}\left[\sum_{i=1}^{L}{\text{G}}_{i}^{-}\ast {\psi}_{i}\left({\text{G}}_{i}\ast {\hat{\text{X}}}^{(t)}\right)+\frac{\lambda}{{\sigma}^{2}}\left(\hat{\text{Y}}-{\hat{\text{X}}}^{(t)}\right)\right]. \qquad (11)$$

The above is an FoE implementation for image denoising, to which our BCI problem bears a close resemblance. For us, the measurements are features instead of a noisy image. The object prior is not changed and can still be defined by the linear filters G_{i}. However, the denoising algorithm in BCI is changed. Following the same derivation as presented above, the gradient ascent algorithm for BCI becomes

$${\text{X}}^{(t+1)}={\text{X}}^{(t)}+{\eta}^{\prime}\left[{\mathcal{O}}^{-1}\left\{\sum_{i=1}^{L}{\text{G}}_{i}^{-}\ast {\psi}_{i}\left({\text{G}}_{i}\ast \mathcal{O}\left\{{\text{X}}^{(t)}\right\}\right)\right\}+\frac{\lambda}{{\sigma}^{2}}{\text{H}}^{\text{T}}\left(\text{Y}-\text{H}{\text{X}}^{(t)}\right)\right], \qquad (12)$$

where 𝒪{·} rearranges an object matrix X (dimensions *N* × *K*) into its 2D image form and 𝒪^{−1}{·} is the inverse operation.
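A toy sketch of this gradient ascent update, assuming a single hand-picked 3 × 3 filter in place of the trained FoE filters and illustrative sizes, noise level, and step parameters:

```python
import numpy as np

rng = np.random.default_rng(3)

# Tiny illustrative setup: 2x2 blocks (N = 4) on a 4x4 object (K = 4).
# The single Laplacian-like filter and all parameters are stand-ins.
b, side = 2, 4
N, K, M = b * b, (side // b) ** 2, 2
sigma, eta, lam, alpha = 0.5, 0.01, 0.2, 1.0

def to_image(X):
    # The operation O{.}: rearrange the N x K block matrix into the image.
    img = np.zeros((side, side))
    for k, (r, c) in enumerate((r, c) for r in range(0, side, b)
                               for c in range(0, side, b)):
        img[r:r + b, c:c + b] = X[:, k].reshape(b, b)
    return img

def to_blocks(img):
    # The inverse operation O^{-1}{.}: image back to the block matrix.
    return np.stack([img[r:r + b, c:c + b].reshape(N)
                     for r in range(0, side, b)
                     for c in range(0, side, b)], axis=1)

def filt(img, f):
    # 2D filtering with zero padding ('same' output size).
    fh, fw = f.shape
    pad = np.pad(img, ((fh // 2,) * 2, (fw // 2,) * 2))
    out = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(pad[i:i + fh, j:j + fw] * f)
    return out

G = np.array([[0., -1., 0.], [-1., 4., -1.], [0., -1., 0.]])
G_mirror = G[::-1, ::-1]                     # G^-: mirrored about center

H = rng.standard_normal((M, N))              # stand-in projection
X_true = rng.random((N, K))
Y = H @ X_true + sigma * rng.standard_normal((M, K))

X = np.zeros((N, K))
for _ in range(100):
    resp = filt(to_image(X), G)              # filter response on O{X}
    psi_resp = -alpha * resp / (1 + 0.5 * resp**2)
    prior = to_blocks(filt(psi_resp, G_mirror))
    X = X + eta * (prior + (lam / sigma**2) * (H.T @ (Y - H @ X)))
```

The prior term operates on the whole image after the `to_image` rearrangement, which is what lets the filter responses couple neighboring blocks.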

## 3. Experimental results

#### 3.1. Using the FoE model to improve the reconstructions obtained with the Wiener operator

In Fig. 3, we present the three objects used for the experimental study. They represent objects of different sizes and with different kinds of content that can be encountered by BCI. PCA features are collected as system measurements. The PCA projection matrix H is defined using the *M* eigenvectors corresponding to the largest *M* eigenvalues of the object autocorrelation matrix ${\text{R}}_{\mathbf{x}}$. ${\text{R}}_{\mathbf{x}}$ is estimated as ${\text{R}}_{\mathbf{x}}=\frac{1}{{K}^{\prime}}\sum _{i=1}^{{K}^{\prime}}{\mathbf{x}}_{i}{\mathbf{x}}_{i}^{\text{T}}$ with *K*′ = 100,000. The Wiener operator and the FoE-based method are used to reconstruct the objects. The reconstruction error for each object is quantified by the normalized root mean square error (RMSE) $\frac{{\Vert \text{X}-{\text{X}}_{\text{est}}\Vert}_{\text{F}}}{{\Vert \text{X}\Vert}_{\text{F}}}$, where ||X||_{F} is the Frobenius norm of X [28].

In the FoE model, the image patch size is chosen as $\sqrt{\tilde{N}}\times \sqrt{\tilde{N}}=5\times 5$. Then *L* = 24 linear filters denoted as G_{i} are used to define the model. To obtain G_{i}, 134,400 pre-whitened image samples cropped from 200 natural images in the Berkeley segmentation database [29] are used for the training process. Each image sample (dimensions $\sqrt{\tilde{K}\tilde{N}}\times \sqrt{\tilde{K}\tilde{N}}=15\times 15$) has *K̃* = 9 patches. Figure 4(a) presents 10 × 10, or a total of 100, such samples. Using the update algorithm defined by Eq. (7), the 24 linear filters G_{i} presented in Fig. 4(b) are obtained after 10,000 iterations. The corresponding *α*_{i} value for each G_{i} is also labeled in the figure.

In Fig. 5, we present the reconstructions using the Wiener operator and the FoE-based method for the object shown in Fig. 3(b). *M* = 16 PCA features are collected for each 16 × 16 block. The detector noise is assumed to be *σ*_{0} = 0.938, which gives a measurement SNR of 19.42 dB. Figure 5(a) presents the reconstruction obtained using the Wiener operator. The RMSE value for this reconstruction is 0.0906. We can observe that the high frequency information in the reconstruction is visually missing. The discontinuity between blocks, due to reconstructing the blocks independently, also looks unpleasant. The reconstruction result using the FoE-based method is presented in (b). The parameters used in the gradient ascent algorithm are *η*′ = 0.0526 and *λ* = 0.95, with 340 iterations used to obtain the result. Compared with (a), (b) avoids the blocking issue, presents less noise residue, and therefore has better visual quality. In addition, its RMSE value of 0.0842 is also smaller than the RMSE value of 0.0906 for (a).

We repeat the experiment with the other two objects in Fig. 3. The detector noise is still *σ*_{0} = 0.938. Figure 6 presents zoomed-in parts of the reconstructions obtained with *M* = 24 PCA feature measurements per block for the first object example. The RMSE values using the Wiener method and the FoE-based method are 0.180 and 0.159, respectively. Again, we observe that the reconstruction using the FoE model avoids the blocking effect. Figure 6(b) presents smoother curves and better black-white contrast than (a), especially in the areas indicated by the red arrows. However, if high frequency information has been lost due to not having enough feature measurements, the FoE model cannot recover it. An example of such a case is marked by the ellipses in both (a) and (b). To further compare the two methods, Fig. 7 presents columns 60 and 265 of the reconstructions. Once again, it can be observed that less noise residue is present in the reconstruction using the FoE-based method than in that using the Wiener operator. Additionally, we calculate the contrast values for the four areas indicated by arrows in Fig. 6. These areas are also listed in the first row of Table 1. Using the original object, the pixels in these areas with a maximum of 255 and a minimum of 0 are selected. Then the averages over the same pixels in the reconstructed areas are used to calculate the contrast ratios. Table 1 summarizes these results. It can be seen that the FoE method presents a better reconstruction contrast. Figure 8 presents two zoomed-in parts of the reconstructions for the object presented in Fig. 3(c). For each 16 × 16 block, *M* = 16 PCA features are collected. Once again, we observe that the reconstruction using the nonlinear method has better visual quality. Its RMSE value of 0.183 is also smaller than the 0.199 obtained using the Wiener operator.

#### 3.2. The connections among the reconstruction RMSE, the detector noise *σ*_{0}, the number of features *M* per block, and the block size $\sqrt{N}\times \sqrt{N}$

Here we first define the average RMSE over multiple objects and the compression ratio *r* for BCI. Note that the normalized RMSE defined in Section 3.1 is an error per object pixel. Therefore, the averaged RMSE over the three object examples can be defined as $\frac{{\Vert \overline{\text{X}}-{\overline{\text{X}}}_{\text{est}}\Vert}_{\text{F}}}{{\Vert \overline{\text{X}}\Vert}_{\text{F}}}$ with X̄ = [X_{1} X_{2} X_{3}] and X̄_{est} = [X_{1,est} X_{2,est} X_{3,est}], where X_{i} and X_{i,est} represent the *i*^{th} object and its reconstruction for *i* = 1, 2, or 3. To simplify notation, from now on we use RMSE to refer to the averaged RMSE over the three objects. The reconstruction RMSE is a function of the number of features *M* per block, the length *N* of a block vector **x**, and the number of blocks *K* in an object. To compare RMSE values for different (*M*, *N*, *K*), we define the ratio of the total number of features for an object to the number of object pixels, named the compression ratio $r=\frac{KM}{KN}=\frac{M}{N}\in \left(0,1\right]$. Note that *r* is independent of *K*. For *K* = 1, i.e., a conventional CI system, RMSE versus *r* is equivalent to RMSE versus *M* as studied in the literature [3, 11, 17].
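The averaged RMSE and the compression ratio amount to the following small helpers; the arrays in the test usage are toy stand-ins, not the paper's objects:

```python
import numpy as np

# Averaged RMSE over several objects via horizontal concatenation,
# X_bar = [X1 X2 ...], and the compression ratio r = M/N.
def avg_rmse(objs, recs):
    X_bar = np.hstack(objs)
    X_bar_est = np.hstack(recs)
    # np.linalg.norm on a 2-D array defaults to the Frobenius norm.
    return np.linalg.norm(X_bar - X_bar_est) / np.linalg.norm(X_bar)

def compression_ratio(M, N):
    # r = KM/(KN) = M/N, independent of the number of blocks K.
    return M / N
```

Concatenating before normalizing weights each object by its pixel count, matching the per-pixel interpretation of the RMSE.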

We make use of the three objects with six different block sizes, namely 2 × 2, 3 × 3, 4 × 4, 6 × 6, 8 × 8, and 10 × 10, in the first experiment. The detector noise standard deviation is assumed to be *σ*_{0} = 0.1. The Wiener operator is used to reconstruct the objects. Figure 9(a) presents the RMSE versus *r* for the six block sizes. We can observe that, for each block size, the RMSE value first decreases and then increases as *r* keeps growing. An optimal compression ratio *r*_{opt} corresponding to a minimal RMSE exists for each curve. This observation is in agreement with the discussion about the optimal number of features *M*_{opt} in Section 2.1.

The second observation is that for a high compression or a small *r* value, the RMSE using 8 × 8 or 10 × 10 blocks is smaller than the value from the other four cases. For example, at *r* = 0.25, the RMSE values for block sizes of 4 × 4, 6 × 6, and 8 × 8, are 0.092, 0.078, and 0.076, respectively. This demonstrates the benefit of implementing a large *N* in BCI when a high compression is required.

The third observation is that the RMSE value for a large *N*, such as 8 × 8 or 10 × 10, reaches a larger minimum at a smaller *r*_{opt}. As discussed in Section 2.1, for a fixed block size, the feature measurement SNR of a BCI system degrades as *M* increases due to two factors: the reduced object intensity **x**/*M* and the increased detector noise variance $M{\sigma}_{0}^{2}/{T}_{0}$, or standard deviation $\sqrt{M}{\sigma}_{0}/\sqrt{{T}_{0}}$. Quantitatively, the SNR decreases with ${M}^{\frac{3}{2}}$ [17]. Therefore, for a large *N*, although the total intensity of a block **x** increases with *N*, the SNR decreasing with ${M}^{\frac{3}{2}}$ becomes the dominant factor for the reconstruction error, even at smaller *r* = *M/N* values.

In BCI, an important system parameter is the block size. Besides being closely related to the reconstruction RMSE, it also determines the number of detectors in a system. Reducing the number of detectors is one of the aims of CI. Therefore, in the second experiment, we search for the optimal block size $\sqrt{{N}_{\text{opt}}}\times \sqrt{{N}_{\text{opt}}}$ using the results obtained in Fig. 9(a).

There are two kinds of optimal block size $\sqrt{{N}_{\text{opt}}}\times \sqrt{{N}_{\text{opt}}}$. One is for the case that the reconstruction RMSE is minimized for a fixed compression ratio *r*. The other is defined when *r* is minimized for a fixed RMSE value. For example, if the largest tolerable reconstruction RMSE in Fig. 9(a) is chosen as 0.035, then the minimal *r* to satisfy the requirement can be plotted as a function of $\sqrt{N}$ as shown in (b). Similarly, the RMSE value versus $\sqrt{N}$ for a fixed *r* = 0.6 is presented in (c). From both figures, we can observe an optimal block size $\sqrt{{N}_{\text{opt}}}\times \sqrt{{N}_{\text{opt}}}$ equal to 8 × 8.

To further understand the connections among RMSE, ${\sigma}_{0}^{2}$, *M*, and *N*_{opt}, we follow the derivation in [30] and write the RMSE for a sequential architecture PCA-based CI system using the Wiener operator for reconstruction as

$$\varepsilon =\sqrt{\frac{\text{Tr}\left\{E\left\{(\text{X}-{\text{X}}_{\text{est}}){(\text{X}-{\text{X}}_{\text{est}})}^{\text{T}}\right\}\right\}}{\text{Tr}\left\{E\left\{\text{X}{\text{X}}^{\text{T}}\right\}\right\}}}=\sqrt{\frac{\sum_{i=M+1}^{N}{d}_{i}+\sum_{i=1}^{M}{d}_{i}\left[1-1/\left(1+{M}^{3}{\sigma}_{0}^{2}/{d}_{i}\right)\right]}{\sum_{i=1}^{N}{d}_{i}}}, \qquad (13)$$

where *E*{·} is the expectation of a random variable, Tr{·} is the trace of a matrix, and *d*_{i}, where *i* = 1, 2,..., *N* with *d*_{1} ≥ *d*_{2} ≥ ··· ≥ *d*_{N} ≥ 0, are the *N* eigenvalues of ${\text{R}}_{\mathbf{x}}$. The details of this derivation can be found in the Appendix.

The RMSE value *ε* in Eq. (13) consists of two nonnegative terms. As *M* increases, the first term decreases, because fewer *d*_{i} values are summed. On the other hand, for the second term, ${M}^{3}{\sigma}_{0}^{2}/{d}_{i}$ becomes larger for each *d*_{i} as *M* increases. Hence, the factor $1-1/\left(1+{M}^{3}{\sigma}_{0}^{2}/{d}_{i}\right)$ increases towards unity, and we are also summing more weighted *d*_{i}. Thus, the second term in Eq. (13) increases with *M*. We can therefore conclude that a minimum RMSE exists at an optimal *M*_{opt}, or *r*_{opt} = *M*_{opt}/*N*. This is what we observe for each block size in Fig. 9(a).
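This tradeoff can be checked numerically by evaluating Eq. (13) on a hypothetical eigenvalue spectrum; the power-law *d*_{i} and the *σ*_{0} value below are illustrative choices, not measured data:

```python
import math

# Evaluate Eq. (13) on a hypothetical power-law spectrum d_i = 1/i^2.
N, sigma0 = 64, 1e-3
d = [1.0 / (i * i) for i in range(1, N + 1)]
total = sum(d)

def eps(M):
    trunc = sum(d[M:])                             # truncation error term
    noise = sum(di * (1.0 - 1.0 / (1.0 + M**3 * sigma0**2 / di))
                for di in d[:M])                   # measurement error term
    return math.sqrt((trunc + noise) / total)

errs = [eps(M) for M in range(1, N + 1)]
M_opt = 1 + errs.index(min(errs))                  # optimal feature count
```

Sweeping *M* traces the expected U-shaped curve: truncation error dominates at small *M*, measurement error at large *M*, with the minimum at an interior *M*_{opt}.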

Eq. (13) is for one block size $\sqrt{N}\times \sqrt{N}$. As *N* changes, the object autocorrelation matrix ${\text{R}}_{\mathbf{x}}$, or essentially its eigenvalues *d*_{i}, varies. Note that ${\text{R}}_{\mathbf{x}}$ represents the second-order statistical property of **x**. Therefore, the ${\text{R}}_{\mathbf{x}}$ matrices are different for different kinds of objects. Figures 10(a) and (b) present several examples of astronomy images [31] captured using telescopes and natural images [29] taken with digital cameras. The eigenvalues of ${\text{R}}_{\mathbf{x}}$ in descending order for the two kinds of objects with multiple block sizes are presented in Fig. 11(a). These *d*_{i} values are normalized by $\sum _{i=1}^{N}{d}_{i}$. Note that, for a larger $\sqrt{N}$, the curve of *d*_{i} versus *i/N* decreases faster. Hence, fewer *d*_{i} values dominate the sum of eigenvalues $\sum _{i=1}^{N}{d}_{i}$. Another observation from this figure is that, compared with the natural images, the eigenvalues for the astronomy images spread more uniformly, because these images do not contain many clear structures. Such a property makes them harder to distinguish from detector noise; hence, object reconstruction is a more challenging task for the astronomy images.

Using the same *d*_{i} for both kinds of objects, we repeat the study of *r*_{min} versus $\sqrt{N}$ for three noise levels, *σ*_{0} = 0.03, 0.04, and 0.05. The largest tolerable reconstruction errors for the astronomy and the natural images are RMSE = 0.095 and RMSE = 0.06, respectively. The results are shown in Fig. 11(b) and (c). As expected, although the RMSE requirement for the astronomy images is more relaxed, they generally need a larger *r*_{min}, or more features, to satisfy the requirement. From both figures, we can also observe that as *σ*_{0} increases, *r*_{min} increases and the optimal block size $\sqrt{{N}_{\text{opt}}}\times \sqrt{{N}_{\text{opt}}}$ moves towards smaller values. For example, $\sqrt{{N}_{\text{opt}}}$ decreases from 10 to 8 and from 9 to 7 as *σ*_{0} increases from 0.03 to 0.05 for the astronomy and the natural images, respectively. This requires a detector array with more pixels for collecting feature measurements.

Up to this point, although we have only used the Wiener operator for the study of *N*_{opt}, we can conclude that the optimal block size in a BCI system making PCA feature measurements is determined by two factors: first, the image content of an object, represented by the eigenvalues *d*_{i} of the object autocorrelation matrix ${\text{R}}_{\mathbf{x}}$; and second, the detector noise, or the feature measurement SNR.

In the last experiment, we use the FoE-based method to repeat the study of RMSE versus *r* for various block sizes. The three objects in Fig. 3 are used. In Fig. 12(a), the reconstruction RMSE versus *r* for block sizes 6 × 6, 10 × 10, and 14 × 14 is presented. The noise standard deviation *σ*_{0} is 0.1. As a comparison, the RMSE values using the Wiener operator are also presented in the same figure. Once again, for all block sizes, the RMSE values using the FoE-based method are smaller than those using the Wiener operator. It can also be observed that the FoE model benefits the BCI system performance more for larger block sizes.

Using Fig. 12(a), we repeat the study for *N*_{opt}. The largest tolerable RMSE for object reconstruction is chosen to be 0.1. Then, the minimal compression ratio *r*_{min} to satisfy this requirement is plotted as a function of $\sqrt{N}$ in Fig. 12(b). For the Wiener operator, the minimum RMSE values for the block sizes 14 × 14 and 16 × 16 are larger than the required RMSE = 0.1; hence its *r*_{min} versus $\sqrt{N}$ curve ends at the block size 12 × 12. Compared with the linear Wiener method, the FoE-based method requires smaller *r*_{min} values to satisfy the reconstruction error requirement. Besides this advantage, the optimal block size $\sqrt{{N}_{\text{opt}}}\times \sqrt{{N}_{\text{opt}}}$ is enlarged from 12 × 12 using the Wiener operator to 14 × 14 using the FoE model. The corresponding *r*_{min} values for the linear and the nonlinear methods are 0.1528 and 0.1378, respectively. As discussed before, a large block size benefits BCI by requiring fewer detectors, thus reducing the system hardware requirement and the system energy consumption. For example, for an object of size 1008 × 2016, the number of detectors for a BCI system using a block size of 14 × 14 is 10368, which is 26.5% less than the 14112 detectors for a system using an object block size of 12 × 12.

## 4. Conclusions

In this work, we study a sequential architecture BCI system for high resolution object reconstruction. PCA features are collected for individual blocks and then used to reconstruct the object with the linear Wiener operator and the nonlinear FoE-based method. By using the FoE object prior, the nonlinear method presents a smaller reconstruction error and an improved visual quality. In addition, we study four important parameters for BCI: the block size $\sqrt{N}\times \sqrt{N}$, the number of features *M* per block, the detector noise *σ*_{0}, and the reconstruction RMSE. We find there is an optimal block size $\sqrt{{N}_{\text{opt}}}\times \sqrt{{N}_{\text{opt}}}$ for each kind of object, as characterized by the object autocorrelation matrix ${\text{R}}_{\mathbf{x}}$. *N*_{opt} decreases as the detector noise *σ*_{0} increases. By using the FoE-based method for reconstruction, a larger *N*_{opt} can be obtained compared with using the Wiener operator.

## Appendix

The reconstruction RMSE for a sequential architecture BCI system collecting PCA features and using the Wiener operator for reconstruction is

$$\varepsilon =\sqrt{\frac{E\left\{{\Vert \text{X}-\text{WY}\Vert}_{\text{F}}^{2}\right\}}{E\left\{{\Vert \text{X}\Vert}_{\text{F}}^{2}\right\}}}. \qquad (14)$$

Expanding the numerator with *E*{·} and Tr{·}, we have

$$E\left\{{\Vert \text{X}-\text{WY}\Vert}_{\text{F}}^{2}\right\}=K\,\text{Tr}\left\{{\text{R}}_{\mathbf{x}}-2\text{WH}{\text{R}}_{\mathbf{x}}+\text{W}\left(\text{H}{\text{R}}_{\mathbf{x}}{\text{H}}^{\text{T}}+{\text{R}}_{\mathbf{n}}\right){\text{W}}^{\text{T}}\right\}, \qquad (15)$$

where ${\text{R}}_{\mathbf{n}}={\sigma}^{2}\text{I}$ is the autocorrelation matrix of the detector noise vector **n**. Because the Wiener operator is defined as $\text{W}={\text{R}}_{\mathbf{x}}{\text{H}}^{\text{T}}{\left(\text{H}{\text{R}}_{\mathbf{x}}{\text{H}}^{\text{T}}+{\sigma}^{2}\text{I}\right)}^{-1}$, the RMSE can be further simplified as

$$\varepsilon =\sqrt{\frac{\text{Tr}\left\{{\text{R}}_{\mathbf{x}}-{\text{R}}_{\mathbf{x}}{\text{H}}^{\text{T}}{\left(\text{H}{\text{R}}_{\mathbf{x}}{\text{H}}^{\text{T}}+{\sigma}^{2}\text{I}\right)}^{-1}\text{H}{\text{R}}_{\mathbf{x}}\right\}}{\text{Tr}\left\{{\text{R}}_{\mathbf{x}}\right\}}}. \qquad (16)$$

If we define the eigenvector of ${\text{R}}_{\mathbf{x}}$ for the eigenvalue *d*_{i} as a vector **q**_{i} (dimensions *N* × 1), then ${\text{R}}_{\mathbf{x}}$ and H can be written as

$${\text{R}}_{\mathbf{x}}=\sum_{i=1}^{N}{d}_{i}{\mathbf{q}}_{i}{\mathbf{q}}_{i}^{\text{T}}, \qquad \text{H}={\left[{\mathbf{q}}_{1}\;{\mathbf{q}}_{2}\;\cdots \;{\mathbf{q}}_{M}\right]}^{\text{T}}, \qquad (17)$$

where ${\mathbf{q}}_{i}^{\text{T}}{\mathbf{q}}_{j}=1$ if *i* = *j*, and 0 otherwise. In addition, note that as *M* PCA features are collected for each block, the collected object intensity in one feature is reduced by *M*, or the object energy in a feature degrades by *M*^{2}, while the noise energy increases by *M*. Therefore, the feature measurement SNR reduces by *M*^{3}, or equivalently the detector noise variance *σ*^{2} becomes ${M}^{3}{\sigma}_{0}^{2}$. Substituting these expressions into Eq. (16), we finally obtain the reconstruction RMSE of Eq. (13),

$$\varepsilon =\sqrt{\frac{\sum_{i=M+1}^{N}{d}_{i}+\sum_{i=1}^{M}{d}_{i}\left[1-1/\left(1+{M}^{3}{\sigma}_{0}^{2}/{d}_{i}\right)\right]}{\sum_{i=1}^{N}{d}_{i}}}. \qquad (18)$$
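As a numerical sanity check of this derivation (with an arbitrary random autocorrelation matrix, not the paper's data), the trace form and the eigenvalue form can be compared directly:

```python
import numpy as np

rng = np.random.default_rng(4)

# Build H from the M leading eigenvectors of a generic SPD R_x and
# use the effective noise variance sigma^2 = M^3 sigma0^2; sizes and
# sigma0 are illustrative.
N, M, sigma0 = 8, 3, 0.2
A = rng.standard_normal((N, N))
R_x = A @ A.T
d, Q = np.linalg.eigh(R_x)
order = np.argsort(d)[::-1]
d, Q = d[order], Q[:, order]                    # d1 >= d2 >= ... >= dN

H = Q[:, :M].T                                  # PCA projection rows q_i^T
sigma2 = M**3 * sigma0**2

# Trace form: Tr{R_x - R_x H^T (H R_x H^T + sigma^2 I)^{-1} H R_x}.
mse_direct = np.trace(R_x - R_x @ H.T @ np.linalg.inv(
    H @ R_x @ H.T + sigma2 * np.eye(M)) @ H @ R_x)

# Eigenvalue form: sum_{i>M} d_i + sum_{i<=M} d_i [1 - 1/(1 + sigma^2/d_i)].
mse_eigen = d[M:].sum() + (d[:M] * (1 - 1 / (1 + sigma2 / d[:M]))).sum()
```

The two quantities agree to machine precision, confirming the eigen-decomposition step of the appendix.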

## Acknowledgments

This work and its earlier conference version [18] are supported in part by the University Research Committee of the University of Hong Kong under Project 21476029.

## References and links

**1. **D. L. Donoho, “Compressed sensing,” IEEE Trans. Inf. Theory **52**, 1289–1306 (2006). [CrossRef]

**2. **E. J. Candès, “Compressive sampling,” in *Proceedings of the International Congress of Mathematicians*, (European Mathematical Society, 2006), pp. 1433–1452.

**3. **M. A. Neifeld and J. Ke, “Optical architectures for compressive imaging,” Appl. Opt. **46**, 5293–5303 (2007). [CrossRef] [PubMed]

**4. **M. Lustig, D. L. Donoho, and J. M. Pauly, “Sparse MRI: the application of compressed sensing for rapid MR imaging,” Magn. Reson. Med. **58**, 1182–1195 (2007). [CrossRef] [PubMed]

**5. **D. J. Brady, K. Choi, D. L. Marks, R. Horisaki, and S. Lim, “Compressive holography,” Opt. Express **17**, 13040–13049 (2009). [CrossRef] [PubMed]

**6. **H. Di, K. Zheng, X. Zhang, E. Y. Lam, T. Kim, Y. S. Kim, T.-C. Poon, and C. Zhou, “Multiple-image encryption by compressive holography,” Appl. Opt. **51**, 1000–1009 (2012). [CrossRef] [PubMed]

**7. **X. Zhang and E. Y. Lam, “Sectional image reconstruction in optical scanning holography using compressed sensing,” in IEEE International Conference on Image Processing, (IEEE, 2010), pp. 3349–3352. [CrossRef]

**8. **M. E. Gehm, R. John, D. J. Brady, R. M. Willett, and T. J. Schulz, “Single-shot compressive spectral imaging with a dual-disperser architecture,” Opt. Express **15**, 14013–14027 (2007). [CrossRef] [PubMed]

**9. **Z. Xu and E. Y. Lam, “Image reconstruction using spectroscopic and hyperspectral information for compressive terahertz imaging,” J. Opt. Soc. Am. A **27**, 1638–1646 (2010). [CrossRef]

**10. **J. Ke, P. Shankar, and M. A. Neifeld, “Distributed imaging using an array of compressive cameras,” Opt. Commun. **282**, 185–197 (2009). [CrossRef]

**11. **J. Ke, A. Ashok, and M. A. Neifeld, “Block-wise motion detection using compressive imaging system,” Opt. Commun. **284**, 1170–1180 (2011). [CrossRef]

**12. **P. K. Baheti and M. A. Neifeld, “Recognition using information-optimal adaptive feature-specific imaging,” J. Opt. Soc. Am. A **26**, 1055–1070 (2009). [CrossRef]

**13. **X. Zhang and E. Y. Lam, “Edge-preserving sectional image reconstruction in optical scanning holography,” J. Opt. Soc. Am. A **27**, 1630–1637 (2010). [CrossRef]

**14. **D. Needell and J. A. Tropp, “CoSaMP: Iterative signal recovery from incomplete and inaccurate samples,” Appl. Comput. Harmon. Anal. **26**, 301–321 (2009). [CrossRef]

**15. **J. A. Tropp and A. C. Gilbert, “Signal recovery from random measurements via orthogonal matching pursuit,” IEEE Trans. Inf. Theory **53**, 4655–4666 (2007). [CrossRef]

**16. **L. I. Rudin, S. Osher, and E. Fatemi, “Nonlinear total variation based noise removal algorithms,” Physica D **60**, 259–268 (1992). [CrossRef]

**17. **J. Ke, A. Ashok, and M. A. Neifeld, “Object reconstruction from adaptive compressive measurements in feature-specific imaging,” Appl. Opt. **49**, H27–H39 (2010). [CrossRef] [PubMed]

**18. **J. Ke and E. Y. Lam, “Nonlinear image reconstruction in block-based compressive imaging,” in IEEE International Symposium on Circuits and Systems, (IEEE, 2012), pp. 2917–2920.

**19. **S.-H. Cho, S.-H. Lee, C. Nam-Gung, S.-J. Oh, J.-H. Son, H. Park, and C.-B. Ahn, “Fast terahertz reflection tomography using block-based compressed sensing,” Opt. Express **19**, 16401–16409 (2011). [CrossRef] [PubMed]

**20. **L. Gan, “Block compressed sensing of natural images,” in *15th International Conference on Digital Signal Processing* (IEEE, 2007), pp. 403–406. [CrossRef]

**21. **L. Sun, X. Wen, M. Lei, H. Xu, J. Zhu, and Y. Wei, “Signal reconstruction based on block compressed sensing,” in *Artificial Intelligence and Computational Intelligence*, Lecture Notes in Computer Science (Springer, 2011), pp. 312–319.

**22. **S. Roth and M. J. Black, “Fields of experts,” Int. J. Comput. Vis. **82**, 205–229 (2009). [CrossRef]

**23. **I. T. Jolliffe, *Principal Component Analysis* (Springer, 2002).

**24. **K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, “Image denoising by sparse 3-D transform-domain collaborative filtering,” IEEE Trans. Image Process. **16**, 2080–2095 (2007). [CrossRef] [PubMed]

**25. **M. Welling, G. Hinton, and S. Osindero, “Learning sparse topographic representations with products of Student-t distributions,” in *Advances in Neural Information Processing Systems* (MIT Press, 2003).

**26. **G. E. Hinton, “Training products of experts by minimizing contrastive divergence,” Neural Comput. **14**, 1771–1800 (2002). [CrossRef] [PubMed]

**27. **J. S. Liu, *Monte Carlo Strategies in Scientific Computing* (Springer, 2003).

**28. **G. H. Golub and C. F. Van Loan, *Matrix Computations* (The Johns Hopkins University Press, 1996).

**29. **The Berkeley Segmentation Dataset and Benchmark, http://www.eecs.berkeley.edu/Research/Projects/CS/vision/bsds/.

**30. **J. Ke, M. D. Stenner, and M. A. Neifeld, “Minimum reconstruction error in feature-specific imaging,” in Visual Information Processing XIV, Proc. SPIE **5817**, 7–12 (2005).

**31. **National Optical Astronomy Observatory/Association of Universities for Research in Astronomy/National Science Foundation, http://www.noao.edu/image_gallery/.