Sparse Reconstruction with Compressed Sensing
Author: Ngoc Ly (SysEn 5800 Fall 2021)
Compressed Sensing (CS)
Compressed Sensing recovers a sparse (or approximately sparse) signal from far fewer linear measurements than the signal's ambient dimension, by combining a random sensing matrix with a sparsity-promoting reconstruction.
Introduction
$ x \in \mathbb{R}^N $ is the signal of interest; in practice it is often not exactly sparse but approximately sparse.

$ \Phi \in \mathbb{R}^{M \times N} $, with $ M \ll N $, is a random Gaussian or Bernoulli matrix.

$ y \in \mathbb{R}^M $ is the vector of observed samples.

$ e \in \mathbb{R}^M $ is a noise vector with $ \| e \|_2 \leq \eta $.

How can we reconstruct $ x $ from $ y = \Phi x + e $? The goal is to reconstruct $ x \in \mathbb{R}^N $ given $ y $ and $ \Phi $. The sensing matrix $ \Phi $ must satisfy the RIP; random Gaussian or Bernoulli matrices do (cite). Let $ [ N ] = \{ 1, \dots , N \} $ be an index set; $ [N] $ enumerates the columns of $ \Phi $ and the entries of $ x $. Since $ M \ll N $, $ y = \Phi x $ is an underdetermined system with infinitely many solutions. This is why the $ l_2 $ norm does not work: the minimum-$ l_2 $-norm solution spreads energy over all coordinates and is generally not sparse.
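To see this concretely, here is a minimal NumPy sketch (dimensions are illustrative): the pseudoinverse gives the minimum-$ l_2 $-norm solution, which matches the measurements exactly but is dense rather than sparse.

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, k = 20, 100, 3                      # M << N: underdetermined system

x_true = np.zeros(N)
x_true[rng.choice(N, k, replace=False)] = rng.standard_normal(k)  # k-sparse signal

Phi = rng.standard_normal((M, N)) / np.sqrt(M)   # random Gaussian sensing matrix
y = Phi @ x_true

# Minimum-l2-norm solution via the pseudoinverse: it satisfies y = Phi x exactly,
# but spreads energy over all N coordinates instead of recovering the sparse x.
x_l2 = np.linalg.pinv(Phi) @ y
print("nonzeros in true x:", np.count_nonzero(x_true))
print("nonzeros in l2 solution:", np.count_nonzero(np.abs(x_l2) > 1e-8))
```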
Compression is synonymous with sparsity.
The problem formulation is to recover sparse data $ \mathbf{x} \in \mathbb{R}^N $.

The support of $ \mathbf{x} $ is $ supp(\mathbf{x}) = \{i \in [N] : \mathbf{x}_i \neq 0 \} $; we say $ \mathbf{x} $ is $ k $-sparse when $ |supp(\mathbf{x})| \leq k $.

We are interested in the solution with the smallest support, i.e. minimizing $ |supp(\mathbf{x})| $.
Before we get into the RIP, let us first define the RIC.
The Restricted Isometry Constant (RIC), introduced by Candès and Tao, is the smallest $ \delta_s $ such that the RIP condition below holds for every index set $ S \subseteq [N] $ with $ |S| \leq s $.
Random Gaussian and Bernoulli matrices satisfy the RIP with high probability.
Let $ \Phi \in \mathbb{R}^{M \times N} $ satisfy the RIP, and let $ [N] $ be the index set enumerating its columns. For $ S \subseteq [N] $, the restriction of $ x \in \mathbb{R}^N $ to $ S $ is denoted $ x_{|S} $, and $ \Phi_{|S} $ is the submatrix of $ \Phi $ whose columns are indexed by $ i \in S $. For a $ k $-sparse $ \mathbf{x} $ with $ S = supp(\mathbf{x}) $ and $ s = |S| $, the RIP is required to hold with constant $ \delta_s $.
In search of a unique solution we have the following $ l_0 $ optimization problem, where $ \| x \|_0 = |supp(x)| $: $ \mathbf{\hat{s}} = \underset{\mathbf{s}}{\arg\min} \| \mathbf{s}\|_0 \quad s.t. \quad \mathbf{y} = \Phi \mathbf{s} $, which is NP-hard.
From the results of Candès, Romberg, Tao, and Donoho: if $ \Phi $ satisfies the RIP and $ \mathbf{x} $ is sparse, the $ l_0 $ problem has a unique solution, and it can be found by the equivalent convex $ l_1 $ program $ \mathbf{\hat{s}} = \underset{\mathbf{s}}{\arg\min} \| \mathbf{s}\|_1 \quad s.t. \quad \mathbf{y} = \Phi \mathbf{s} $.
Here the restriction $ x_{|S} $ is defined entrywise by

$ (x_{|S})_i = \begin{cases} x_i, & \text{if } i \in S \\ 0, & \text{otherwise} \end{cases} $
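The $ l_1 $ program above can be solved as a linear program by splitting $ \mathbf{s} $ into positive and negative parts. A minimal sketch with NumPy and SciPy (sizes are illustrative; with enough random measurements, exact recovery holds with high probability):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
M, N, k = 40, 100, 4
x_true = np.zeros(N)
x_true[rng.choice(N, k, replace=False)] = rng.standard_normal(k)
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
y = Phi @ x_true

# Basis pursuit as a linear program: write s = u - v with u, v >= 0.
# Then  min 1^T (u + v)  s.t.  [Phi, -Phi] [u; v] = y
# is equivalent to  min ||s||_1  s.t.  Phi s = y.
c = np.ones(2 * N)
A_eq = np.hstack([Phi, -Phi])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None))
assert res.success
s_hat = res.x[:N] - res.x[N:]
print("max recovery error:", np.max(np.abs(s_hat - x_true)))
```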
The RIP is defined as

$ (1 - \delta_s) \| x \|_2^2 \leq \| \Phi x \|_2^2 \leq (1 + \delta_s) \| x \|_2^2 $

for all $ s $-sparse $ x $.
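Computing $ \delta_s $ exactly is combinatorial, but the RIP inequality can be probed empirically. A small Monte Carlo sketch (assuming NumPy; the worst observed deviation only lower-bounds $ \delta_s $ over the sampled vectors):

```python
import numpy as np

rng = np.random.default_rng(2)
M, N, s = 50, 200, 5
Phi = rng.standard_normal((M, N)) / np.sqrt(M)   # columns have unit expected norm

# Track how far ||Phi x||^2 / ||x||^2 deviates from 1 over random s-sparse x;
# the largest deviation seen is an empirical lower bound on delta_s.
worst = 0.0
for _ in range(2000):
    x = np.zeros(N)
    x[rng.choice(N, s, replace=False)] = rng.standard_normal(s)
    ratio = np.linalg.norm(Phi @ x) ** 2 / np.linalg.norm(x) ** 2
    worst = max(worst, abs(ratio - 1.0))
print("empirical lower bound on delta_s:", worst)
```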
Three lemmas (Blumensath & Davies, Iterative Hard Thresholding for Compressed Sensing, p. 267)
Lemma 1 (Blumensath & Davies 2009, Iterative Hard Thresholding for Compressed Sensing). For all index sets $ \Gamma $ and all $ \Phi $ for which the RIP holds with $ s = |\Gamma| $, that is $ \Gamma = supp(x) $:

$ \| \Phi_{\Gamma}^T \mathbf{y} \|_2 \leq \sqrt{1 + \delta_{|\Gamma|}} \, \| \mathbf{y} \|_2 $

$ (1 - \delta_{|\Gamma|}) \| x_{\Gamma} \|_2^2 \leq \| \Phi_{\Gamma}^T \Phi_{\Gamma} x_{\Gamma} \|_2^2 \leq (1 + \delta_{|\Gamma|}) \| x_{\Gamma} \|_2^2 $

and

$ \| (I - \Phi_{\Gamma}^T \Phi_{\Gamma}) x_{\Gamma} \|_2 \leq \delta_{|\Gamma|} \| x_{\Gamma} \|_2 $

Suppose $ \Gamma \cap \Lambda = \emptyset $; then

$ \| \Phi_{\Gamma}^T \Phi_{\Lambda} x_{\Lambda} \|_2 \leq \delta_s \| x_{\Lambda} \|_2 $
Lemma 2 (Needell & Tropp, Prop. 3.5 in CoSaMP: Iterative Signal Recovery from Incomplete and Inaccurate Samples). If $ \Phi $ satisfies the RIP, i.e. $ \| \Phi x_s \|_2 \leq \sqrt{1 + \delta_s} \, \| x_s \|_2 $ for all $ x_s $ with $ \| x_s \|_0 \leq s $, then for all $ x $:

$ \| \Phi x \|_2 \leq \sqrt{1 + \delta_s} \, \| x \|_2 + \sqrt{1 + \delta_s} \, \frac{\| x \|_1}{\sqrt{s}} $
Lemma 3 (Needell & Tropp, Prop. 3.5 in CoSaMP: Iterative Signal Recovery from Incomplete and Inaccurate Samples). Let $ x_s $ be the best $ s $-term approximation to $ x $, and let $ x_r = x - x_s $, so that

$ y = \Phi x + e = \Phi x_s + \Phi x_r + e = \Phi x_s + \tilde{e} $

If $ \Phi $ satisfies the RIP for sparsity $ s $, then the norm of the error $ \tilde{e} $ is bounded by

$ \| \tilde{e} \|_2 \leq \sqrt{1 + \delta_s} \, \| x - x_s \|_2 + \sqrt{1 + \delta_s} \, \frac{\| x - x_s \|_1}{\sqrt{s}} + \| e \|_2 $

for all $ x $.
Theory
Gel'fand $ n $-width

Errors $ E(S, \Phi, D) $
Definition (Mutual Coherence). Let $ A \in \mathbb{R}^{M \times N} $; the mutual coherence $ \mu_A $ is defined by

$ \mu_A = \max_{i \neq j} \frac{|\langle a_i, a_j \rangle|}{\| a_i \| \| a_j \|} $

where $ a_i $ denotes the $ i $-th column of $ A $.
We want a small $ \mu_A $ because then $ A $ behaves close to an orthonormal matrix, which helps it satisfy the RIP.
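A short sketch of computing $ \mu_A $ with NumPy (the function name `mutual_coherence` is ours, for illustration):

```python
import numpy as np

def mutual_coherence(A: np.ndarray) -> float:
    """Largest normalized inner product between distinct columns of A."""
    G = A / np.linalg.norm(A, axis=0)        # normalize each column
    gram = np.abs(G.T @ G)                   # |<a_i, a_j>| / (||a_i|| ||a_j||)
    np.fill_diagonal(gram, 0.0)              # exclude the i == j terms
    return gram.max()

rng = np.random.default_rng(3)
A = rng.standard_normal((50, 200))
print("mu(A) =", mutual_coherence(A))        # small for a random Gaussian A
```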
Algorithm IHT
- Initialize $ \Phi, \mathbf{y}, \mathbf{e} \ \mbox{with} \ \mathbf{y} = \mathbf{\Phi} \mathbf{x} + \mathbf{e} $ and sparsity model $ \mathfrak{M} $
- output $ IHT(\mathbf{y}, \mathbf{\Phi}, \mathfrak{M}) $
- While halting criterion is false do
- $ \mathbf{x}^{(n+1)} \leftarrow P_{\mathfrak{M}} \left[ \mathbf{x}^{(n)} + \Phi^* (\mathbf{y} - \mathbf{\Phi x}^{(n)}) \right] $
- $ n \leftarrow n + 1 $
- end while return: $ IHT(\mathbf{y}, \mathbf{\Phi}, \mathfrak{M}) \leftarrow \mathbf{x}^{(n)} $
Numerical Example
Iterative Hard Thresholding (IHT)
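A minimal NumPy sketch of the IHT iteration above, assuming the model $ \mathfrak{M} $ is the set of $ k $-sparse vectors, so that $ P_{\mathfrak{M}} $ is hard thresholding (keep the $ k $ largest-magnitude entries). A step size $ \mu $ is added for stability, as in normalized IHT variants; all problem sizes are illustrative.

```python
import numpy as np

def iht(y, Phi, k, n_iters=500):
    """Iterative Hard Thresholding: x <- H_k( x + mu * Phi^T (y - Phi x) ),
    where H_k keeps the k largest-magnitude entries and zeroes the rest."""
    x = np.zeros(Phi.shape[1])
    mu = 1.0 / np.linalg.norm(Phi, 2) ** 2   # step size for stability (normalized variant)
    for _ in range(n_iters):
        x = x + mu * Phi.T @ (y - Phi @ x)   # gradient step on (1/2)||y - Phi x||_2^2
        small = np.argsort(np.abs(x))[:-k]   # indices of all but the k largest entries
        x[small] = 0.0                       # projection P_M: hard thresholding
    return x

rng = np.random.default_rng(4)
M, N, k = 60, 200, 5
x_true = np.zeros(N)
x_true[rng.choice(N, k, replace=False)] = rng.standard_normal(k)
Phi = rng.standard_normal((M, N)) / np.sqrt(M)    # random Gaussian sensing matrix
y = Phi @ x_true + 0.01 * rng.standard_normal(M)  # noisy measurements y = Phi x + e

x_hat = iht(y, Phi, k)
print("support recovered:", set(np.flatnonzero(x_hat)) == set(np.flatnonzero(x_true)))
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

Here a fixed iteration count stands in for the halting criterion; in practice one stops when the residual $ \| \mathbf{y} - \Phi \mathbf{x}^{(n)} \|_2 $ falls below a tolerance.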