# Sparse Reconstruction with Compressed Sensing


Author: Ngoc Ly (SysEn 5800 Fall 2021)

## Introduction

### Goal

The goal of compressed sensing is to solve an underdetermined linear system in which the number of variables is much greater than the number of observations, so that infinitely many signal coefficient vectors ${\displaystyle x}$ are consistent with the same set of compressive measurements ${\displaystyle y}$. The objective is to reconstruct the vector ${\displaystyle x}$ from a given set of measurements ${\displaystyle y}$ and a sensing matrix ${\displaystyle A}$. Instead of taking a large number of high-resolution measurements and discarding the majority of them, consider taking far fewer random measurements and reconstructing the original ${\displaystyle x}$ with high probability from its sparse representation.

### Basic Idea and Notation

Begin with the linear equation ${\displaystyle y=Ax+e}$, where ${\displaystyle A\in \mathbb {R} ^{M\times N}}$ is a sensing matrix that must be obtained and whose choice determines whether the optimum solution is recovered exactly or only approximately; ${\displaystyle x\in \mathbb {R} ^{N}}$ is a signal vector that is at most ${\displaystyle k}$-sparse, meaning ${\displaystyle x}$ has at most ${\displaystyle k}$ non-zero entries; ${\displaystyle [N]=\{1,\dots ,N\}}$ and ${\displaystyle [M]=\{1,\dots ,M\}}$ are index sets; ${\displaystyle y\in \mathbb {R} ^{M}}$ is a compressed measurement vector; ${\displaystyle e\in \mathbb {R} ^{M}}$ is a noise vector, assumed bounded as ${\displaystyle \|e\|_{2}\leq \eta }$ if it exists; and ${\displaystyle M\ll N}$, read as ${\displaystyle M}$ much less than ${\displaystyle N}$.

### Sparsity

A vector ${\displaystyle x}$ is said to be ${\displaystyle k}$-sparse in ${\displaystyle \mathbb {R} ^{N}}$ if it has at most ${\displaystyle k}$ nonzero coefficients. The support of ${\displaystyle x}$ is ${\displaystyle supp(x)=\{i\in [N]:x_{i}\neq 0\}}$, and ${\displaystyle x}$ is a ${\displaystyle k}$-sparse signal when the cardinality ${\displaystyle |supp(x)|\leq k}$. The set of ${\displaystyle k}$-sparse vectors is denoted by ${\displaystyle \Sigma _{k}=\{x\in \mathbb {R} ^{N}:\|x\|_{0}\leq k\}}$. Consequently, there are ${\displaystyle {\binom {N}{k}}}$ different possible supports for ${\displaystyle k}$-sparse vectors. If a random ${\displaystyle k}$-sparse ${\displaystyle x}$ is drawn uniformly from ${\displaystyle \Sigma _{k}}$, its entropy ${\displaystyle \log {\binom {N}{k}}}$ is approximately ${\displaystyle k\log {\frac {N}{k}}}$, which is the number of bits required for compression of ${\displaystyle \Sigma _{k}}$ [1][2].
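These definitions can be checked directly in code; a minimal sketch, where the vector below is a hypothetical example:

```python
import numpy as np

# A hypothetical 10-dimensional signal with k = 3 nonzero entries.
x = np.array([0.0, 1.5, 0.0, 0.0, -2.1, 0.0, 0.7, 0.0, 0.0, 0.0])

support = np.flatnonzero(x)   # supp(x) = {i in [N] : x_i != 0}
k = support.size              # |supp(x)|, i.e. the "l0 norm" ||x||_0

print(support)
print(k)
```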

The idea is to search for the sparsest ${\displaystyle x\in \Sigma _{k}}$ given the measurement vector ${\displaystyle y\in \mathbb {R} ^{M}}$ and a sensing matrix ${\displaystyle A\in \mathbb {R} ^{M\times N}}$ with ${\displaystyle M\ll N}$. If the number of linear measurements is at least twice the sparsity of ${\displaystyle x}$, i.e., ${\displaystyle M\geq 2k}$, then there exists at most one signal ${\displaystyle x\in \Sigma _{k}}$ that satisfies the constraint ${\displaystyle y=Ax}$, which produces the correct result for any ${\displaystyle x\in \Sigma _{k}}$ [2]. Hence, the reconstruction problem can be formulated as an ${\displaystyle \ell _{0}}$ "norm" program.

${\displaystyle {\hat {x}}={\underset {x\in \Sigma _{k}}{argmin}}\|x\|_{0}\quad s.t.\quad y=Ax}$

This optimization problem minimizes the number of nonzero entries of ${\displaystyle x}$ subject to the constraint ${\displaystyle y=Ax}$, that is, it finds the sparsest element in the affine space ${\displaystyle \{x\in \mathbb {R} ^{N}:Ax=y\}}$ [3]. It turns out to be a combinatorial optimization problem, which is NP-hard because it must search over all possible ${\displaystyle k}$-sparse subsets of ${\displaystyle [N]}$. Furthermore, if noise is present, the recovery is unstable [16].

### Restricted Isometry Property (RIP)

A matrix ${\displaystyle A}$ is said to satisfy the RIP of order ${\displaystyle k}$ if there exists a ${\displaystyle \delta _{k}\in [0,1)}$ such that the inequality below holds for all ${\displaystyle x\in \Sigma _{k}}$. The restricted isometry constant (RIC) of ${\displaystyle A}$ is the smallest ${\displaystyle \delta _{k}}$ satisfying this condition [4][5][6].

${\displaystyle (1-\delta _{k})\|x\|_{2}^{2}\leq \|Ax\|_{2}^{2}\leq (1+\delta _{k})\|x\|_{2}^{2}}$

Under projections through matrix ${\displaystyle A}$, the restricted isometry property allows ${\displaystyle k}$-sparse vectors to have unique measurement vectors ${\displaystyle y}$. If ${\displaystyle A}$ meets RIP, then ${\displaystyle A}$ does not send two distinct ${\displaystyle k}$-sparse ${\displaystyle x\in \Sigma _{k}}$ to the same measurement vector ${\displaystyle y}$, indicating that ${\displaystyle x}$ is a unique solution under RIP.

If the matrix ${\displaystyle A\in \mathbb {R} ^{M\times N}}$ satisfies the RIP condition of order ${\displaystyle 2k}$ with constant ${\displaystyle \delta _{2k}\in [0,1)}$, then for any two distinct ${\displaystyle k}$-sparse vectors ${\displaystyle x_{1},x_{2}}$ the difference ${\displaystyle x_{1}-x_{2}\in \Sigma _{2k}}$ satisfies ${\displaystyle \|A(x_{1}-x_{2})\|_{2}>0}$; that is, no two ${\displaystyle k}$-sparse vectors are mapped to the same measurement vector ${\displaystyle y}$ by ${\displaystyle A}$. In other words, when working with sparse vectors, the RIP ensures that the columns of ${\displaystyle A}$ are nearly orthonormal. Furthermore, ${\displaystyle A}$ is an approximately norm-preserving map: it preserves the distances between ${\displaystyle k}$-sparse signals more and more closely as ${\displaystyle \delta _{k}}$ approaches zero. Candes, Romberg, and Tao [4] proved that if ${\displaystyle x}$ is ${\displaystyle k}$-sparse and ${\displaystyle A}$ satisfies the RIP of order ${\displaystyle 2k}$ with RIP-constant ${\displaystyle \delta _{2k}<{\sqrt {2}}-1}$, then ${\displaystyle \ell _{1}}$ minimization gives a unique sparse solution. The solution of the ${\displaystyle \ell _{1}}$ convex optimization problem coincides with that of the ${\displaystyle \ell _{0}}$ program and can be computed by linear programming [3][2]. Hence, the ${\displaystyle \ell _{1}}$ reconstruction problem is as follows, which can be solved by basis pursuit [4][7].

${\displaystyle {\hat {x}}={\underset {x\in \Sigma _{k}}{argmin}}\|x\|_{1}\quad s.t.\quad y=Ax}$
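The ${\displaystyle \ell _{1}}$ program above can be recast as a linear program by splitting ${\displaystyle x}$ into positive and negative parts. A sketch using SciPy's `linprog`; the dimensions, sparsity, and random test problem are illustrative assumptions, not values from the text:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
M, N, k = 10, 30, 2                      # M << N, k-sparse signal

A = rng.standard_normal((M, N)) / np.sqrt(M)   # random Gaussian sensing matrix
x_true = np.zeros(N)
x_true[rng.choice(N, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true                           # noiseless measurements

# Basis pursuit as a linear program: write x = u - v with u, v >= 0,
# then minimize sum(u) + sum(v) subject to [A, -A][u; v] = y.
c = np.ones(2 * N)
res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=y, bounds=(0, None))
x_hat = res.x[:N] - res.x[N:]

print(np.linalg.norm(A @ x_hat - y))     # constraint residual (near zero)
print(np.sum(np.abs(x_hat)))             # the minimized l1 norm
```

Since the true sparse signal is feasible, the minimized ${\displaystyle \ell _{1}}$ norm can never exceed that of ${\displaystyle x}$, and for Gaussian matrices with enough measurements the program typically recovers ${\displaystyle x}$ exactly.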

## Theory and Algorithmic Discussions

Two main things need to be considered when recovering ${\displaystyle x}$:

• (1) The design of the sensing matrix
• (2) The recovery algorithm

#### Mutual Coherence

The performance of a recovery algorithm is often characterized by quantities associated with the measurement matrix (sensing matrix), such as its coherence. In general, recovery performance improves as the coherence gets smaller: small coherence means that moderately sized subsets of columns of the matrix are well-conditioned.

Let ${\displaystyle A\in \mathbb {R} ^{M\times N}}$ have ${\displaystyle \ell _{2}}$-normalized columns; the mutual coherence ${\displaystyle \mu =\mu (A)}$ is defined by

${\displaystyle \mu (A)={\underset {1\leq i\neq j\leq N}{max}}{\frac {|\langle a_{i},a_{j}\rangle |}{\|a_{i}\|_{2}\|a_{j}\|_{2}}}}$, where ${\displaystyle a_{i}}$ is the ${\displaystyle i}$th column of ${\displaystyle A}$[8].

For a matrix ${\displaystyle A\in \mathbb {R} ^{M\times N}}$ with ${\displaystyle M<N}$ and ${\displaystyle \ell _{2}}$-normalized columns, the coherence satisfies the lower bound below, which is attained exactly when the columns of the matrix form an equiangular tight frame:

${\displaystyle \mu (A)\geq {\sqrt {\frac {N-M}{M(N-1)}}}}$ [8]

Let ${\displaystyle A\in \mathbb {R} ^{M\times N}}$ be a matrix with ${\displaystyle \ell _{2}}$-normalized columns and let ${\displaystyle s\in [N]}$. Then for all ${\displaystyle s}$-sparse ${\displaystyle x\in \Sigma _{s}}$:

${\displaystyle (1-\mu (s-1))\|x\|_{2}^{2}\leq \|Ax\|_{2}^{2}\leq (1+\mu (s-1))\|x\|_{2}^{2}}$ [8]
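Both the mutual coherence and the lower bound above can be computed directly; a minimal sketch, where the matrix sizes are arbitrary:

```python
import numpy as np

def mutual_coherence(A):
    """mu(A) = max over i != j of |<a_i, a_j>| / (||a_i||_2 ||a_j||_2)."""
    A = A / np.linalg.norm(A, axis=0)   # l2-normalize the columns
    G = np.abs(A.T @ A)                 # absolute Gram matrix of inner products
    np.fill_diagonal(G, 0.0)            # ignore the diagonal <a_i, a_i> = 1
    return G.max()

rng = np.random.default_rng(1)
M, N = 20, 50
A = rng.standard_normal((M, N))

mu = mutual_coherence(A)
welch = np.sqrt((N - M) / (M * (N - 1)))   # the lower bound above
print(mu)
print(mu >= welch)
```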

### Sensing Matrix

Although the sensing matrix ${\displaystyle A}$ satisfies RIP of order ${\displaystyle k}$ in some situations, confirming that a given matrix ${\displaystyle A}$ meets RIP's criteria is NP-hard in general. As a result, designing an efficient sensing matrix is critical. These sensing matrices are responsible for signal compression at the encoder end and accurate or approximate reconstruction at the decoder end. Different sensing matrices are utilized for signal compression in compressed sensing: random, deterministic, structural, and optimized sensing matrices [9].

• Random Sensing Matrices

Some classes of random matrices satisfy RIP, specifically those matrices with independent and identically distributed (i.i.d.) entries drawn from a sub-Gaussian distribution. They require ${\displaystyle M=O(k\log(N/k))}$ measurements [4][10][7] to recover ${\displaystyle x}$ with high probability. Other popular random sensing matrices draw their entries from Gaussian, Bernoulli, or Rademacher distributions, i.e., matrices with entries in ${\displaystyle \{0,\pm 1\}}$. However, those random dense matrices incur tremendous computational and memory costs, making them unsuitable for large-scale applications. To address this issue, several researchers have turned to sparse measurement matrices, such as binary or bi-adjacency matrices, rather than dense random matrices. Nonetheless, those sparse matrices are unstructured in the same way that acyclic networks or trees are. Fortunately, the new random Weibull matrices (Zhang and Li), built with an appropriate number of observations, provide exact sparse signal reconstruction with a greater probability [9].

### Weibull Example
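The cited Weibull construction is not reproduced here; as a loose illustration only, the sketch below draws i.i.d. Weibull-distributed magnitudes with random signs — the shape parameter and scaling are arbitrary assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)
M, N = 50, 200

# Illustrative sketch only: i.i.d. Weibull-distributed magnitudes (shape
# parameter p is a free choice here) with random signs attached so that
# the entries are symmetric about zero.
p = 1.5
magnitudes = rng.weibull(p, size=(M, N))
signs = rng.choice([-1.0, 1.0], size=(M, N))
A = signs * magnitudes / np.sqrt(M)      # rough column-norm scaling

print(A.shape)
print(abs(A.mean()) < 0.1)               # symmetric entries average near zero
```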

• Deterministic Sensing Matrices

Although deterministic sensing matrices are often insufficient to satisfy the RIP condition, they can be used to provide instances for novel concepts. The Vandermonde ${\displaystyle k\times N}$ matrices are among the best deterministic matrices for recovering the ${\displaystyle k}$-sparse signal; however, the reconstruction technique becomes unstable as ${\displaystyle N}$ rises. Despite the lack of RIP support in deterministic sensing matrices, chirp-function-based ${\displaystyle M\times M^{2}}$ complex-valued deterministic matrices have successfully recovered the original sparse signal. Binary, bipolar, and ternary matrices are deterministic constructions of sensing matrices that have been proved to satisfy the RIP of order ${\displaystyle k}$ (Amini and Marvasti). Because of the cyclic feature, the reconstruction process can be sped up using a fast Fourier transform (FFT) technique. Another type of deterministic CS sensing matrix is one based on mutual coherence: finite-geometry-based sparse binary matrices were constructed with low coherence using a Steiner system (Li and Ge). The sparseness property of these matrices helps reduce storage requirements and improve the reconstruction process [9].

### Finite Geometry Example
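As a concrete finite-geometry instance, the incidence matrix of the Fano plane, the Steiner system ${\displaystyle S(2,3,7)}$, yields a sparse binary matrix whose coherence is exactly ${\displaystyle 1/3}$. This small sketch is an illustration, not the construction from the cited work:

```python
import numpy as np

# Blocks (lines) of the Fano plane, the Steiner system S(2, 3, 7):
# 7 points, 7 blocks of size 3, every pair of points in exactly one block.
blocks = [(0, 1, 2), (0, 3, 4), (0, 5, 6), (1, 3, 5),
          (1, 4, 6), (2, 3, 6), (2, 4, 5)]

A = np.zeros((7, 7))
for j, block in enumerate(blocks):
    A[list(block), j] = 1.0        # column j is the indicator of block j

A /= np.sqrt(3)                    # l2-normalize the columns
G = np.abs(A.T @ A)
np.fill_diagonal(G, 0.0)
print(G.max())                     # any two blocks share exactly one point,
                                   # so the mutual coherence mu(A) = 1/3
```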

• Structural Sensing Matrices

Sensing technologies require structured measurement matrices to perform a variety of tasks. Those matrices can be created from a small number of parameters. Furthermore, structured matrices can speed up the recovery algorithms, making them suitable for large problems. The Toeplitz matrix is akin to a linear sparse ruler, and the circulant matrix, a square Toeplitz matrix, is akin to a circular sparse ruler. In applications that follow a Toeplitz structure, these matrices perform similarly to i.i.d. Gaussian matrices, which satisfy RIP with high probability, in terms of estimation accuracy, signal reconstruction speed, and coherence. The new sparse block circulant matrix (SBCM) structure greatly reduces computational complexity. Other structural sensing matrices include the Hadamard matrix, which provides near-optimal guarantees of recovery while requiring less complexity and thus allowing for simple hardware implementation [9].

### Toeplitz matrix Example
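A partial circulant sensing matrix can be sketched with `scipy.linalg.circulant`; the dimensions and row-subsampling scheme are illustrative assumptions:

```python
import numpy as np
from scipy.linalg import circulant

rng = np.random.default_rng(3)
N, M = 64, 16

# A circulant matrix is generated by one length-N vector: every row is a
# cyclic shift of the previous row, so only N parameters are stored.
c = rng.standard_normal(N)
C = circulant(c)

# Partial circulant sensing matrix: keep M randomly chosen rows.
rows = rng.choice(N, M, replace=False)
A = C[rows] / np.sqrt(M)

print(A.shape)
print(np.allclose(C[1, 1:], C[0, :-1]))   # each row is a shift of the last
```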

• Optimized Sensing Matrices

In order to accomplish high-quality signal reconstruction, the sensing matrices must satisfy RIP. However, the RIP is difficult to verify. Another method for validating the RIP is to compute the mutual coherence between the sensing matrix and the sparse matrix, or to compute the Gram matrix ${\displaystyle G=A^{T}A\in \mathbb {R} ^{N\times N}}$. The objective is to improve the sensing matrix to lower its coherence using numerous strategies, such as the random-detecting framework-based improvement strategy using the shrinkage technique and the irregular estimation of sensing matrix improvement employing a symmetrical strategy. Another technique is to optimize the sensing matrix using a block-sparsified dictionary approach, decreasing the total inter-block and sub-block coherence of the dictionary matrix and thereby significantly improving the reconstruction [9].

### Algorithms

Several big groups of algorithms are:

• Optimization methods: include ${\displaystyle \ell _{1}}$ minimization, i.e., basis pursuit, and quadratically constrained ${\displaystyle \ell _{1}}$ minimization, i.e., basis pursuit denoising.
• Greedy methods: include orthogonal matching pursuit (OMP) and compressive sampling matching pursuit (CoSaMP).
• Thresholding-based methods: such as iterative hard thresholding (IHT), iterative soft thresholding (IST), and approximate IHT (AM-IHT).
• Dynamic programming.
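As an illustration of the greedy family, a minimal orthogonal matching pursuit (OMP) sketch; the random test problem below is an arbitrary assumption:

```python
import numpy as np

def omp(y, A, k):
    """Orthogonal matching pursuit: greedily grow the support of x."""
    residual = y.copy()
    support = []
    for _ in range(k):
        # Pick the column most correlated with the current residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        support.append(j)
        # Refit y on all chosen columns by least squares.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(5)
M, N, k = 30, 80, 3
A = rng.standard_normal((M, N)) / np.sqrt(M)
x_true = np.zeros(N)
x_true[rng.choice(N, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true

x_hat = omp(y, A, k)
print(np.linalg.norm(x_hat - x_true))   # reconstruction error
```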

#### Algorithm IHT

The ${\displaystyle \ell _{1}}$ convex program mentioned in the introduction has an equivalent unconstrained optimization program [11].

The thresholding operator is defined as ${\displaystyle {\mathcal {H}}_{k}[x]={\underset {z\in \Sigma _{k}}{argmin}}\|x-z\|_{2}}$, which selects the best ${\displaystyle k}$-term approximation of ${\displaystyle x}$. IHT provably recovers ${\displaystyle x}$ when ${\displaystyle A}$ satisfies the RIP of order ${\displaystyle 3k}$ with RIC ${\displaystyle \delta _{3k}<{\frac {1}{\sqrt {32}}}}$ [5][11]; the stopping criterion is ${\displaystyle \|y-Ax^{(n)}\|_{2}\leq \epsilon }$ [11].

In addition, the ${\displaystyle \ell _{1}}$-regularized program ${\displaystyle {\hat {x}}=arg{\underset {x\in \Sigma _{k}}{min}}{\frac {1}{2}}\|y-Ax\|_{2}^{2}+\lambda \|x\|_{1}}$ is known in statistics as the LASSO, with ${\displaystyle \lambda }$ as the regularization parameter. This is the closest convex relaxation to the ${\displaystyle \ell _{0}}$ program, the first program mentioned in the introduction.

Reducing the loss function ${\displaystyle {\frac {1}{2}}\|Ax-y\|_{2}^{2}}$ requires the gradient ${\displaystyle z_{j}^{(n)}=\nabla _{j}f(x^{(n)})=-A_{j}^{T}(y-Ax^{(n)})}$ at each iteration, computed before pruning with the hard-threshold operator, which keeps the largest values while setting the values below the threshold to zero.

Then ${\displaystyle x^{(n+1)}={\mathcal {H}}_{k}\left(x^{(n)}-\tau z^{(n)}\right)}$, where ${\displaystyle \tau }$ is the step size and ${\displaystyle z^{(n)}=(z_{1}^{(n)},\dots ,z_{N}^{(n)})}$ is the gradient vector.

The IHT algorithm reads as follows:

• Input ${\displaystyle A,y,e\ {\mbox{with}}\ y=Ax+e}$ and ${\displaystyle k}$
• output ${\displaystyle IHT(y,A,k)}$, a ${\displaystyle k}$-sparse approximation of ${\displaystyle x}$
• Set ${\displaystyle x^{(0)}=0;n=0}$
• While stopping criterion false do
• ${\displaystyle x^{(n+1)}\leftarrow {\mathcal {H}}_{k}\left[x^{(n)}+A^{*}(y-Ax^{(n)})\right]}$
• ${\displaystyle n\leftarrow n+1}$
• end while
• return: ${\displaystyle IHT(y,A,k)\leftarrow x^{(n)}}$

${\displaystyle A^{*}}$ is the adjoint (conjugate transpose) of ${\displaystyle A}$; for real-valued ${\displaystyle A}$, ${\displaystyle A^{*}=A^{T}}$.
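The algorithm above can be sketched in NumPy. A step size ${\displaystyle \tau =0.99/\|A\|_{2}^{2}}$ is added here for stability — an assumption of this sketch, since the pseudocode uses ${\displaystyle \tau =1}$ — and the random test problem is illustrative:

```python
import numpy as np

def hard_threshold(x, k):
    """H_k[x]: keep the k largest-magnitude entries, zero out the rest."""
    z = np.zeros_like(x)
    keep = np.argsort(np.abs(x))[-k:]
    z[keep] = x[keep]
    return z

def iht(y, A, k, max_iter=500, eps=1e-8):
    """Iterative hard thresholding: x <- H_k[x + tau * A^T (y - A x)]."""
    tau = 0.99 / np.linalg.norm(A, 2) ** 2   # step size added for stability
    x = np.zeros(A.shape[1])
    for _ in range(max_iter):
        x = hard_threshold(x + tau * A.T @ (y - A @ x), k)
        if np.linalg.norm(y - A @ x) <= eps:  # stopping criterion
            break
    return x

# Hypothetical test problem: recover a 3-sparse signal from 40 measurements.
rng = np.random.default_rng(4)
M, N, k = 40, 100, 3
A = rng.standard_normal((M, N)) / np.sqrt(M)
x_true = np.zeros(N)
x_true[rng.choice(N, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true

x_hat = iht(y, A, k)
print(np.linalg.norm(x_hat - x_true))   # reconstruction error
```

With the step size below ${\displaystyle 1/\|A\|_{2}^{2}}$, the residual ${\displaystyle \|y-Ax^{(n)}\|_{2}}$ is non-increasing, and the output is ${\displaystyle k}$-sparse by construction.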

## Numerical Example

${\displaystyle x=\left[0.1779,1.1649,-0.8262,-1.014,-0.7975,0.1883,0.2648,-0.9036,0.9409,0.4295\right]}$

${\displaystyle A=\left[{\begin{matrix}1.24302&0.512088&0.727509&-0.621372&-0.167216&1.28429&1.80216&-0.860403&-0.190791&0.235666\\-0.199659&0.197401&-0.496226&1.1034&-1.06153&0.0130745&-0.402581&0.788863&0.591947&-0.355693\\0.0202145&-0.0043102&-1.39167&-0.873447&0.253648&1.59332&0.988392&-1.32271&-0.951241&-0.192874\\0.238206&1.43934&-1.08785&-0.807931&0.285027&-2.64908&-0.140972&-0.215606&-0.760412&1.77824\\-0.17931&-1.31525&-0.0348558&-0.550808&0.460217&-1.5188&-1.45967&0.508893&-0.942668&1.1059\\\end{matrix}}\right]}$

${\displaystyle {\hat {x}}=arg{\underset {x\in \Sigma _{k}}{min}}{\frac {1}{a}}\|y-Ax\|_{2}^{2}+\lambda \|x\|_{1}}$

${\displaystyle a=2}$ ${\displaystyle \lambda =10^{-1}}$

### Iteration 1

Calculate ${\displaystyle b=x+{\frac {1}{a}}A^{T}(Ax-y)}$

${\displaystyle b=[-1.1194851676858286,-0.5425433278394107,1.1296652382901562,3.3691735377460557,-1.7975322402864389,0.812781461654327,-1.5305641112291442,2.4886377108681788,2.7744632017898994,-2.450112036057253]}$

#### Thresholding

${\displaystyle x_{i}^{(1)}={\begin{cases}b_{i}&{\text{if }}|b_{i}|>\lambda /2a\\0&{\text{if }}|b_{i}|\leq \lambda /2a\end{cases}}}$ [12]

Another commonly used threshold is

${\displaystyle x_{i}^{(1)}={\begin{cases}b_{i}&{\text{if }}|b_{i}|>{\sqrt {\lambda }}\\0&{\text{if }}|b_{i}|\leq {\sqrt {\lambda }}\end{cases}}}$ [13]

${\displaystyle x=[-1.1194851676858286,0.0,1.1296652382901562,3.3691735377460557,-1.7975322402864389,0.812781461654327,-1.5305641112291442,2.4886377108681788,2.7744632017898994,-2.450112036057253]}$

Calculate the error ${\displaystyle \|Ax-y\|_{2}=24.020025581623976}$

Calculate the new y

${\displaystyle y=Ax=[-7.325091157735414,10.392457780350403,-10.669635806747262,-13.670535847170495,-5.580526608842023]}$

Check if the error is less than ${\displaystyle 10^{-5}}$

### Iteration 2

Calculate ${\displaystyle b=x+{\frac {1}{a}}A^{T}(Ax-y)}$

${\displaystyle b=[-1.1194851676858286,0.0,1.1296652382901562,3.3691735377460557,-1.7975322402864389,0.812781461654327,-1.5305641112291442,2.4886377108681788,2.7744632017898994,-2.450112036057253]}$

#### Thresholding

${\displaystyle x_{i}^{(2)}={\begin{cases}b_{i}&{\text{if }}|b_{i}|>\lambda /2a\\0&{\text{if }}|b_{i}|\leq \lambda /2a\end{cases}}}$

${\displaystyle x=[-1.1194851676858286,0.0,1.1296652382901562,3.3691735377460557,-1.7975322402864389,0.812781461654327,-1.5305641112291442,2.4886377108681788,2.7744632017898994,-2.450112036057253]}$

Calculate the error ${\displaystyle \|Ax-y\|_{2}=0.0}$

Calculate the new y

${\displaystyle y=Ax=[-7.325091157735414,10.392457780350403,-10.669635806747262,-13.670535847170495,-5.580526608842023]}$

Check if the error is less than ${\displaystyle 10^{-5}}$

The stopping condition is met.
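The elementwise thresholding rule used in the iterations above can be sketched as follows; the vector ${\displaystyle b}$ below is illustrative, not the one from the example:

```python
import numpy as np

def hard_threshold(b, t):
    """Keep entries with |b_i| > t, set the rest to zero."""
    return np.where(np.abs(b) > t, b, 0.0)

# Illustrative vector (not the b from the worked example above).
b = np.array([-1.12, 0.02, 0.2, 3.37, -0.01])
lam, a = 1e-1, 2

print(hard_threshold(b, lam / (2 * a)))   # threshold lambda/(2a) = 0.025
print(hard_threshold(b, np.sqrt(lam)))    # threshold sqrt(lambda) ~= 0.316
```

The two rules differ only in the cutoff: the looser ${\displaystyle \lambda /2a}$ cutoff keeps the entry 0.2, while the ${\displaystyle {\sqrt {\lambda }}}$ cutoff zeroes it.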

## Applications and Motivations

### Low-Rank Matrices

The Netflix Prize popularized low-rank matrix recovery, also known as the matrix completion problem. The approach fills in the missing entries of the user-movie rating matrix, i.e., ratings for movies a user has not seen. These estimates are based on ratings from other users: if a matrix is created with all the users as rows and the movie titles as columns, users with similar interests have similar, overlapping rows, so the degrees of freedom can be reduced significantly. This low-rank structure is frequently assumed for the problem domain of collaborative filtering [14][3].

### Dictionary Learning

The goal in dictionary learning is to infer the original dictionary as accurately as possible. Instead of using a predefined dictionary, researchers have found that learning the dictionary by obtaining "dynamic features" from training data often yields better representations. Biometric features can be taken from video clips of each subject in a dataset and used to populate the dictionary's columns. Using random projections and sparse representations of video samples for iris detection in noncontact biometrics-based authentication systems has been proposed [15].

### Single-pixel cameras

Single-pixel cameras, or single-detector imaging, are used in situations where detectors are either prohibitively expensive or difficult to miniaturize. A micromirror array, made up of a large number of miniature mirrors that can be individually turned on and off, reflects the light from the scene, and a lens combines all of the reflected beams into one sensor, the single detector of the camera used to capture the image. The mechanism behind the random sampling, which results in low coherence between measurements, is the most important component of the single-pixel camera [16][17].

## Conclusion

The theory builds up from the notions of an inverse problem and sparsity. It develops into the ${\displaystyle \ell _{0}}$ norm program and then the ${\displaystyle \ell _{1}}$ norm program, together with sufficient conditions under which basis pursuit succeeds. Candes, Romberg, Tao, and Donoho [4][7] were the first to overcome this problem.

Although the sensing matrix fulfills RIP of order ${\displaystyle k}$ in some cases, establishing that a given matrix satisfies RIP's conditions is generally NP-hard. In many cases, verifying the sensing matrix isn't a reasonable task, so designing the sensing matrix is crucial. In addition to the computing costs, it must fulfill RIP and be well fitted to the problem domain. This demands some imagination as well as a grasp of the problem domain. It is possible to conclude that the sparse sensing matrix is the most significant component. To conclude, David Donoho saw sparsity everywhere and encouraged mathematicians, engineers, and scientists to view problems through the lens of sparsity.

## References

1. Jason Noah Laska. Regime change: sampling rate vs. bit-depth in compressive sensing. Rice University, 2011.
2. Giulio Coluccia, Chiara Ravazzi, and Enrico Magli. Compressed sensing for distributed systems, 2015.
3. Niklas Koep, Arash Behboodi, and Rudolf Mathar. An introduction to compressed sensing, 2019.
4. Emmanuel Candes, Justin Romberg, and Terence Tao. Stable signal recovery from incomplete and inaccurate measurements. March 2005.
5. Emmanuel J. Candès and Terence Tao. Decoding by linear programming. IEEE Transactions on Information Theory, 51(12):4203–4215, 2005.
6. Richard Baraniuk, Mark Davenport, Ronald DeVore, and Michael Wakin. A simple proof of the restricted isometry property for random matrices. 28:253–263, 2008.
7. D. L. Donoho. Compressed sensing. 52:1289–1306, 2006.
8. Simon Foucart and Holger Rauhut. A mathematical introduction to compressive sensing. Applied and numerical harmonic analysis. Birkhäuser, New York [u.a.], 2013.
9. Y. V. Parkale and S. L. Nalbalwar, “Sensing Matrices in Compressed Sensing.” pp. 113–123, 2020. doi: 10.1007/978-981-32-9515-5_11.
10. E. J. Candes, J. Romberg, and T. Tao. Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. 52:489–509, 2006.
11. Thomas Blumensath and Mike E. Davies. Iterative hard thresholding for compressed sensing. May 2008.
12. Angshul Majumdar. Compressed sensing for engineers. Devices, circuits, and systems.
13. Mohammed Rostami. Compressed sensing with side information on feasible re- gion, 2013.
14. Bennett, James and Stan Lanning. “The Netflix Prize.” (2007).
15. J. K. Pillai, V. M. Patel, R. Chellappa, and N. K. Ratha. Secure and robust iris recognition using random projections and sparse representations. 33:1877–1893, 2011.
16. Richard G. Baraniuk. Compressive sensing [lecture notes]. IEEE Signal Processing Magazine, 24(4):118–121, 2007.
17. Martin Burger, Janic Föcke, Lukas Nickel, Peter Jung, and Sven Augustin. Reconstruction methods in THz single-pixel imaging, 2019.