Sparse Reconstruction with Compressed Sensing
Author: Ngoc Ly (SysEn 5800 Fall 2021)
== Introduction ==
The signal $ x \in \mathbb{R}^N $ is often not exactly sparse but only approximately sparse.
$ \Phi \in \mathbb{R}^{M \times N} $ with $ M \ll N $ is a random Gaussian or Bernoulli sensing matrix.
$ y \in \mathbb{R}^M $ is the vector of observed samples.
$ e \in \mathbb{R}^M $ is a noise vector such that $ \| e \|_2 \leq \eta $.
How can we reconstruct $ x $ from $ y = \Phi x + e $?
The goal is to reconstruct $ x \in \mathbb{R}^N $ given $ y $ and $ \Phi $.
The sensing matrix $ \Phi $ must satisfy the Restricted Isometry Property (RIP); random Gaussian and Bernoulli matrices satisfy it with high probability (cite).
Let $ [ N ] = \{ 1, \dots , N \} $ be an index set; $ [N] $ enumerates the columns of $ \Phi $ and the entries of $ x $. Because $ M \ll N $, the system $ y = \Phi x $ is underdetermined and has infinitely many solutions. This is why $ l_2 $-norm minimization does not work: the minimum-$ l_2 $-norm solution spreads energy across all coordinates rather than concentrating it on a sparse support.
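The failure of $ l_2 $ minimization can be seen numerically. Below is a minimal sketch, assuming NumPy (the dimensions, seed, and variable names are illustrative choices, not from the article): the minimum-$ l_2 $-norm solution of an underdetermined system is dense, while the true signal is sparse.

<syntaxhighlight lang="python">
# Minimal demo: the minimum-l2-norm solution of y = Phi x is dense,
# so it does not recover a sparse x. (Sizes and seed are illustrative.)
import numpy as np

rng = np.random.default_rng(0)
M, N, k = 20, 100, 3                             # M << N: underdetermined system

Phi = rng.standard_normal((M, N)) / np.sqrt(M)   # random Gaussian sensing matrix
x_true = np.zeros(N)
x_true[rng.choice(N, size=k, replace=False)] = rng.standard_normal(k)  # k-sparse
y = Phi @ x_true

# Minimum-l2-norm solution via the pseudoinverse: x = Phi^+ y
x_l2 = np.linalg.pinv(Phi) @ y

print("nonzeros in true x      :", np.count_nonzero(x_true))
print("nonzeros in l2 solution :", np.count_nonzero(np.abs(x_l2) > 1e-8))
# The l2 solution spreads energy over essentially all N coordinates.
</syntaxhighlight>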
Compression is synonymous with sparsity. The problem formulation is to recover sparse data $ \mathbf{x} \in \mathbb{R}^N $.

The support of $ \mathbf{x} $ is $ supp(\mathbf{x}) = \{i \in [N] : \mathbf{x}_i \neq 0 \} $; we say $ \mathbf{x} $ is $ k $-sparse when $ |supp(\mathbf{x})| \leq k $.

We are interested in the solution with the smallest support, i.e. the one minimizing $ \| \mathbf{x} \|_0 = |supp(\mathbf{x})| $ subject to the measurements: $ \min \| \mathbf{x} \|_0 \ \mbox{s.t.} \ \| \Phi \mathbf{x} - y \|_2 \leq \eta $.
Before we get into the RIP, let us first define the Restricted Isometry Constant (RIC). The RIC, introduced by Candès and Tao, is the smallest $ \delta_s $ over all index sets $ s \subseteq [N] $ that satisfies the RIP condition below. Random Gaussian and Bernoulli matrices satisfy the RIP with high probability.
Let $ \Phi \in \mathbb{R}^{M \times N} $ satisfy the RIP and let $ S \subseteq [N] $ be an index set with $ s = |S| $. The restriction of $ x \in \mathbb{R}^N $ to $ S $, denoted $ x_{|S} $, keeps the entries indexed by $ i \in S $ and zeroes out the rest; likewise $ \Phi_{|S} $ denotes the submatrix of $ \Phi $ whose columns are indexed by $ i \in S $:

$ (x_{|S})_i = \begin{cases} x_i, & \mbox{if } i \in S \\ 0, & \mbox{otherwise} \end{cases} $
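As a small illustration, the restriction $ x_{|S} $ can be computed directly; a minimal sketch assuming NumPy (the function name <code>restrict</code> is mine):

<syntaxhighlight lang="python">
# Restriction of x to an index set S: keep x_i for i in S, zero elsewhere.
import numpy as np

def restrict(x, S):
    """Return x_{|S}: x_i if i in S, 0 otherwise."""
    out = np.zeros_like(x)
    idx = list(S)
    out[idx] = x[idx]
    return out

x = np.array([3.0, -1.0, 0.5, 2.0])
print(restrict(x, {0, 3}))   # [ 3.  0.  0.  2.]
</syntaxhighlight>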
The RIP is defined as

$ (1 - \delta_s) \| x \|_2^2 \leq \| \Phi x \|_2^2 \leq (1 + \delta_s) \| x \|_2^2 $
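The RIC $ \delta_s $ is intractable to compute exactly (it involves all column subsets of size $ s $), but it can be estimated by Monte Carlo sampling. A rough sketch assuming NumPy; the sampling scheme and sizes are my own illustrative choices:

<syntaxhighlight lang="python">
# Estimate delta_s for a random Gaussian Phi by sampling column subsets:
# for each subset S, the squared extreme singular values of Phi_{|S}
# bound ||Phi x||_2^2 / ||x||_2^2, giving the (1 -/+ delta_s) factors.
import numpy as np

rng = np.random.default_rng(1)
M, N, s = 40, 80, 3
Phi = rng.standard_normal((M, N)) / np.sqrt(M)

delta = 0.0
for _ in range(2000):                         # Monte Carlo over supports
    S = rng.choice(N, size=s, replace=False)
    sv = np.linalg.svd(Phi[:, S], compute_uv=False)
    delta = max(delta, sv[0]**2 - 1, 1 - sv[-1]**2)

print(f"estimated delta_{s} (lower bound) = {delta:.3f}")
</syntaxhighlight>

Since only a sample of subsets is checked, this yields a lower bound on the true $ \delta_s $.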
Three lemmas follow, from page 267 of Blumensath and Davies, "Iterative hard thresholding for compressed sensing".
Lemma 1 (Blumensath and Davies 2009, "Iterative hard thresholding for compressed sensing"). For all index sets $ \Gamma $ and all $ \Phi $ for which the RIP holds with $ s = | \Gamma | $, where $ \Gamma = supp(x) $:

$ \| \Phi_{\Gamma}^T y \|_2 \leq \sqrt{1 + \delta_{|\Gamma|}} \ \| y \|_2 $

$ (1 - \delta_{|\Gamma|}) \| x_{\Gamma} \|_2^2 \leq \| \Phi_{\Gamma}^T \Phi_{\Gamma} x_{\Gamma} \|_2^2 \leq (1 + \delta_{|\Gamma|}) \| x_{\Gamma} \|_2^2 $

and

$ \| (I - \Phi_{\Gamma}^T \Phi_{\Gamma}) x_{\Gamma} \|_2 \leq \delta_{|\Gamma|} \| x_{\Gamma} \|_2 $

Suppose $ \Gamma \cap \Lambda = \emptyset $. Then

$ \| \Phi_{\Gamma}^T \Phi_{\Lambda} x_{\Lambda} \|_2 \leq \delta_s \| x_{\Lambda} \|_2 $
Lemma 2 (Needell and Tropp, Prop. 3.5 in "CoSaMP: Iterative signal recovery from incomplete and inaccurate samples"). If $ \Phi $ satisfies the RIP, i.e. $ \| \Phi x_s \|_2 \leq \sqrt{1 + \delta_s} \ \| x_s \|_2 $ for all $ x_s $ with $ \| x_s \|_0 \leq s $, then for all $ x $:

$ \| \Phi x \|_2 \leq \sqrt{1 + \delta_s} \ \| x \|_2 + \frac{\sqrt{1 + \delta_s}}{\sqrt{s}} \| x \|_1 $
Lemma 3 (Needell and Tropp, Prop. 3.5 in "CoSaMP: Iterative signal recovery from incomplete and inaccurate samples"). Let $ x_s $ be the best $ s $-term approximation to $ x $, and let $ x_r = x - x_s $. Let

$ y = \Phi x + e = \Phi x_s + \Phi x_r + e = \Phi x_s + \tilde{e} $

If $ \Phi $ satisfies the RIP for sparsity $ s $, then the norm of the error $ \tilde{e} $ is bounded by

$ \| \tilde{e} \|_2 \leq \sqrt{1 + \delta_s} \ \| x - x_s \|_2 + \frac{\sqrt{1 + \delta_s}}{\sqrt{s}} \| x - x_s \|_1 + \| e \|_2 \quad \forall x $
== Theory ==
Gel'fand n-width

Errors $ E(S, \Phi, D) $
Definition (Mutual Coherence). Let $ A \in \mathbb{R}^{M \times N} $ with columns $ a_i $. The mutual coherence $ \mu_A $ is defined by:

$ \mu_A = \max_{i \neq j} \frac{ | \langle a_i , a_j \rangle | }{ \| a_i \|_2 \| a_j \|_2 } $
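A minimal sketch assuming NumPy that computes $ \mu_A $ (the function name is mine):

<syntaxhighlight lang="python">
# Mutual coherence: the largest normalized inner product between
# distinct columns of A.
import numpy as np

def mutual_coherence(A):
    An = A / np.linalg.norm(A, axis=0)   # normalize each column
    G = np.abs(An.T @ An)                # |<a_i, a_j>| / (||a_i|| ||a_j||)
    np.fill_diagonal(G, 0.0)             # exclude i == j
    return G.max()

rng = np.random.default_rng(2)
A = rng.standard_normal((40, 80))
print("mu_A =", mutual_coherence(A))
</syntaxhighlight>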
We want a small $ \mu_A $ because then the columns of $ A $ are nearly orthogonal, i.e. $ A^T A $ is close to the identity matrix, which helps $ A $ satisfy the RIP.
=== Algorithm IHT ===
- Initialize $ \Phi, \mathbf{y}, \mathbf{e} \ \mbox{with} \ \mathbf{y} = \mathbf{\Phi} \mathbf{x} + \mathbf{e} $, and the sparsity model $ \mathfrak{M} $
- Output: $ IHT(\mathbf{y}, \mathbf{\Phi}, \mathfrak{M}) $
- While the halting criterion is false do
- $ \mathbf{x}^{(n+1)} \leftarrow P_{\mathfrak{M}} \left[ \mathbf{x}^{(n)} + \mathbf{\Phi}^* (\mathbf{y} - \mathbf{\Phi} \mathbf{x}^{(n)}) \right] $
- $ n \leftarrow n + 1 $
- end while
- return: $ IHT(\mathbf{y}, \mathbf{\Phi}, \mathfrak{M}) \leftarrow \mathbf{x}^{(n)} $
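Below is a minimal Python sketch of this iteration, assuming NumPy (function names are mine, not from the article). For the model $ \mathfrak{M} $ of $ s $-sparse vectors, the projection $ P_{\mathfrak{M}} $ is hard thresholding: keep the $ s $ largest-magnitude entries and zero the rest. The standard convergence analysis of Blumensath and Davies assumes $ \| \Phi \|_2 < 1 $.

<syntaxhighlight lang="python">
import numpy as np

def hard_threshold(x, s):
    """Projection P_M onto s-sparse vectors: keep the s largest |x_i|."""
    out = np.zeros_like(x)
    keep = np.argsort(np.abs(x))[-s:]    # indices of the s largest entries
    out[keep] = x[keep]
    return out

def iht(y, Phi, s, n_iter=500, tol=1e-8):
    """IHT iteration: x <- P_M[ x + Phi^T (y - Phi x) ]."""
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        x_new = hard_threshold(x + Phi.T @ (y - Phi @ x), s)
        if np.linalg.norm(x_new - x) < tol:   # halting criterion
            return x_new
        x = x_new
    return x
</syntaxhighlight>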
== Numerical Example ==
Iterative Hard Thresholding (IHT)
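A hedged synthetic example, reusing the <code>iht</code> sketch above (the sizes, seed, and noise level are illustrative assumptions, not from the article): generate a sparse $ \mathbf{x} $, measure it with a Gaussian $ \Phi $ scaled so that $ \| \Phi \|_2 < 1 $, and reconstruct.

<syntaxhighlight lang="python">
# Synthetic IHT run; requires hard_threshold() and iht() from the sketch above.
import numpy as np

rng = np.random.default_rng(3)
M, N, s = 50, 200, 4

Phi = rng.standard_normal((M, N))
Phi /= 1.01 * np.linalg.norm(Phi, 2)     # enforce ||Phi||_2 < 1 (step-size condition)

x_true = np.zeros(N)
x_true[rng.choice(N, size=s, replace=False)] = rng.standard_normal(s)
e = 0.001 * rng.standard_normal(M)       # small noise, ||e||_2 <= eta
y = Phi @ x_true + e

x_hat = iht(y, Phi, s)
print("support recovered:",
      set(np.flatnonzero(x_hat)) == set(np.flatnonzero(x_true)))
print("relative l2 error:",
      np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
</syntaxhighlight>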