User:Molokaicat

From Cornell University Computational Optimization Open Textbook - Optimization Wiki
Revision as of 01:17, 28 November 2021

MEng Systems Engineering student; MS and BS in Statistics.

= Introduction =

The signal <math>x \in \mathbb{R}^N</math> is often not truly sparse but approximately sparse.

<math>\Phi \in \mathbb{R}^{M \times N}</math> with <math>M \ll N</math> is a random Gaussian or Bernoulli matrix.

<math>y \in \mathbb{R}^M</math> is the vector of observed samples.

<math>e \in \mathbb{R}^M</math> is a noise vector such that <math>\| e \|_2 \leq \eta</math>.

How can we reconstruct <math>x</math> from <math>y = \Phi x + e</math>? The goal is to reconstruct <math>x_0 \in \mathbb{R}^N</math> given <math>y</math> and <math>\Phi</math>.
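The sketch below sets up this measurement model numerically. It is a minimal illustration assuming NumPy; the dimensions <math>N, M</math>, the sparsity <math>k</math>, and the noise level <math>\eta</math> are arbitrary choices, not values from this page.

<syntaxhighlight lang="python">
# Minimal NumPy sketch of the measurement model y = Phi x0 + e.
import numpy as np

rng = np.random.default_rng(0)
N, M, k = 256, 64, 5                      # ambient dim, measurements, sparsity

# k-sparse ground truth: k random positions with Gaussian amplitudes
x0 = np.zeros(N)
x0[rng.choice(N, size=k, replace=False)] = rng.standard_normal(k)

# Random Gaussian sensing matrix, scaled so that E||Phi x||_2^2 = ||x||_2^2
Phi = rng.standard_normal((M, N)) / np.sqrt(M)

# Noise vector with ||e||_2 <= eta
eta = 0.01
e = rng.standard_normal(M)
e *= eta / np.linalg.norm(e)

y = Phi @ x0 + e                          # observed samples
</syntaxhighlight>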

The sensing matrix <math>\Phi</math> must satisfy the RIP; random Gaussian and Bernoulli matrices satisfy it (cite).

Let <math>[N] = \{1, \dots, N\}</math> be an index set; <math>[N]</math> enumerates the columns of <math>\Phi</math> and the entries of <math>x</math>.

<math>\Phi x = y</math> is an underdetermined system with infinitely many solutions, which is why minimizing the <math>\ell_2</math> norm does not work: the least-squares solution spreads energy across all coordinates instead of concentrating it on a few.

Here compression is synonymous with sparsity. The problem formulation is to recover the sparse data <math>x_0 \in \mathbb{R}^N</math>.

The support of <math>x_0</math> is <math>\operatorname{supp}(x_0) = \{ i \in [N] : x_0(i) \neq 0 \}</math>.

We say <math>x</math> is <math>k</math>-sparse when <math>|\operatorname{supp}(x)| \leq k</math>.

We are interested in the solution with the smallest support, i.e., the one minimizing <math>|\operatorname{supp}(x)|</math>.
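These definitions translate directly into code; a small helper sketch (NumPy assumed, with an arbitrary floating-point tolerance standing in for "exactly nonzero"):

<syntaxhighlight lang="python">
# Helpers for the support and k-sparsity definitions above.
import numpy as np

def supp(x, tol=1e-12):
    """Return supp(x) = {i in [N] : x(i) != 0} as an index array."""
    return np.flatnonzero(np.abs(x) > tol)

def is_k_sparse(x, k):
    """True when |supp(x)| <= k."""
    return supp(x).size <= k
</syntaxhighlight>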

The restricted isometry property (RIP) was introduced by Candès and Tao. Random Gaussian and Bernoulli matrices satisfy the RIP.

Let <math>\Phi \in \mathbb{R}^{M \times N}</math> satisfy the RIP, and let <math>[N]</math> be an index set. Before we get into the RIP, let's talk about the RIC.

The restricted isometry constant (RIC) is the smallest <math>\delta_s</math> such that the RIP condition below holds for every support set <math>S \subseteq [N]</math> with <math>|S| \leq s</math>.

For <math>S \subseteq [N]</math>, the restriction of <math>x \in \mathbb{R}^N</math> to <math>S</math> is denoted by <math>x_{|S}</math>; it is the k-sparse vector for which the RIP is satisfied with <math>s = |\Gamma|</math>. Likewise <math>\Phi_{|S} \subseteq \Phi</math> is the submatrix whose columns are indexed by <math>i \in S</math>:

<math display="block">(x_{|S})_i = \begin{cases} x_i, & \text{if } i \in S \\ 0, & \text{otherwise} \end{cases}</math>

The RIP is defined as

<math display="block">(1 - \delta_s) \| x \|_2^2 \leq \| \Phi x \|_2^2 \leq (1 + \delta_s) \| x \|_2^2</math>
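Computing <math>\delta_s</math> exactly means checking all <math>\binom{N}{s}</math> supports, which is intractable, but a Monte Carlo sketch can produce a lower bound by sampling supports and reading off the extreme singular values of the corresponding column submatrices (NumPy assumed; the trial count is arbitrary):

<syntaxhighlight lang="python">
# Monte Carlo lower bound on the RIC delta_s. For x supported on S,
# ||Phi x||_2^2 / ||x||_2^2 lies in [sigma_min^2, sigma_max^2] of Phi[:, S],
# so each sampled support contributes max(|sigma^2 - 1|) to the bound.
import numpy as np

def estimate_ric(Phi, s, n_trials=2000, rng=None):
    rng = rng if rng is not None else np.random.default_rng(0)
    N = Phi.shape[1]
    delta = 0.0
    for _ in range(n_trials):
        S = rng.choice(N, size=s, replace=False)
        sv = np.linalg.svd(Phi[:, S], compute_uv=False)
        delta = max(delta, abs(sv[0]**2 - 1.0), abs(sv[-1]**2 - 1.0))
    return delta   # a lower bound on the true delta_s
</syntaxhighlight>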

Three lemmas (p. 267 of Blumensath and Davies, "Iterative hard thresholding for compressed sensing"):

Lemma 1 (Blumensath and Davies, 2009, "Iterative hard thresholding for compressed sensing"). For all index sets <math>\Gamma</math> and all <math>\Phi</math> for which the RIP holds with <math>s = |\Gamma|</math>, that is, <math>s = \operatorname{supp}(x)</math>:

<math display="block">\begin{split}
& \| \Phi_\Gamma^T y \|_2 \leq \sqrt{1 + \delta_{|\Gamma|}} \, \| y \|_2 \\
& (1 - \delta_{|\Gamma|}) \| x_\Gamma \|_2^2 \leq \| \Phi_\Gamma^T \Phi_\Gamma x_\Gamma \|_2^2 \leq (1 + \delta_{|\Gamma|}) \| x_\Gamma \|_2^2 \\
& \| (I - \Phi_\Gamma^T \Phi_\Gamma) x_\Gamma \|_2 \leq \delta_{|\Gamma|} \| x_\Gamma \|_2 \\
& \text{and, if } \Gamma \cap \Lambda = \emptyset, \\
& \| \Phi_\Gamma^T \Phi_\Lambda x_\Lambda \|_2 \leq \delta_s \| x_\Lambda \|_2
\end{split}</math>

Lemma 2 (Needell and Tropp, Prop. 3.5 in "CoSaMP: Iterative signal recovery from incomplete and inaccurate samples"). If <math>\Phi</math> satisfies the RIP, i.e. <math>\| \Phi x^s \|_2 \leq \sqrt{1 + \delta_s} \| x^s \|_2</math> for all <math>x^s</math> with <math>\| x^s \|_0 \leq s</math>, then for all <math>x</math>

<math display="block">\| \Phi x \|_2 \leq \sqrt{1 + \delta_s} \| x \|_2 + \sqrt{1 + \delta_s} \frac{\| x \|_1}{\sqrt{s}}</math>

Lemma 3 (Needell and Tropp, Prop. 3.5 in "CoSaMP: Iterative signal recovery from incomplete and inaccurate samples"). Let <math>x^s</math> be the best s-term approximation to <math>x</math>, and let <math>x_r = x - x^s</math>, so that

<math display="block">y = \Phi x + e = \Phi x^s + \Phi x_r + e = \Phi x^s + \tilde{e}</math>

If <math>\Phi</math> satisfies the RIP for sparsity <math>s</math>, then the norm of the error <math>\tilde{e}</math> is bounded by

<math display="block">\| \tilde{e} \|_2 \leq \sqrt{1 + \delta_s} \| x - x^s \|_2 + \sqrt{1 + \delta_s} \frac{\| x - x^s \|_1}{\sqrt{s}} + \| e \|_2 \quad \forall x</math>
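Lemma 3's bound is easy to check numerically. The sketch below builds an approximately sparse <math>x</math>, forms <math>\tilde{e}</math>, and compares both sides; <math>\delta_s</math> is an assumed illustrative constant rather than the true RIC, and all other values are arbitrary (NumPy assumed).

<syntaxhighlight lang="python">
# Numerical sanity check of Lemma 3's error bound.
import numpy as np

rng = np.random.default_rng(1)
N, M, s = 256, 64, 5
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
x = np.zeros(N)
x[rng.choice(N, size=s, replace=False)] = rng.standard_normal(s)
x += 0.001 * rng.standard_normal(N)           # approximately sparse signal
e = rng.standard_normal(M)
e *= 0.01 / np.linalg.norm(e)

def best_s_term(v, s):
    """Best s-term approximation: keep the s largest-magnitude entries."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-s:]
    out[idx] = v[idx]
    return out

xs = best_s_term(x, s)
e_tilde = Phi @ (x - xs) + e                  # y = Phi x^s + e_tilde

delta_s = 0.3                                 # assumed illustrative RIP constant
lhs = np.linalg.norm(e_tilde)
rhs = (np.sqrt(1 + delta_s) * (np.linalg.norm(x - xs)
       + np.linalg.norm(x - xs, 1) / np.sqrt(s)) + np.linalg.norm(e))
print(lhs <= rhs)   # expect True when delta_s really bounds the RIC of Phi
</syntaxhighlight>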

= Theory =

Gelfand n-width

Errors <math>E(S, \Phi, D)</math>

Definition (mutual coherence). Let <math>A \in \mathbb{R}^{M \times N}</math>; the mutual coherence <math>\mu_A</math> is defined by:

<math display="block">\mu_A = \max_{i \neq j} \frac{| \langle a_i, a_j \rangle |}{\| a_i \| \| a_j \|}</math>

We want a small <math>\mu_A</math> because then <math>A</math> is close to an orthonormal matrix, which satisfies the RIP.
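The definition computes directly from the normalized Gram matrix; a short sketch (NumPy assumed):

<syntaxhighlight lang="python">
# Mutual coherence: largest absolute inner product between distinct
# unit-normalized columns of A.
import numpy as np

def mutual_coherence(A):
    G = A / np.linalg.norm(A, axis=0)     # unit-norm columns
    gram = np.abs(G.T @ G)                # |<a_i, a_j>| / (||a_i|| ||a_j||)
    np.fill_diagonal(gram, 0.0)           # exclude i == j
    return gram.max()
</syntaxhighlight>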

= Numerical Example =

Basis Pursuit
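Since minimizing <math>|\operatorname{supp}(x)|</math> directly is combinatorial, basis pursuit relaxes it to <math>\min \| x \|_1</math> subject to <math>\Phi x = y</math>. One standard route is to rewrite the <math>\ell_1</math> objective as a linear program over <math>z = [x; t]</math> with <math>-t \leq x \leq t</math>; the sketch below does this with SciPy's <code>linprog</code>, as one possible solver rather than the one used on this page:

<syntaxhighlight lang="python">
# Basis pursuit via linear programming: minimize sum(t) subject to
# -t <= x <= t and Phi x = y, then return the x part of the solution.
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(Phi, y):
    M, N = Phi.shape
    c = np.concatenate([np.zeros(N), np.ones(N)])    # objective: sum of t
    I = np.eye(N)
    A_ub = np.block([[I, -I], [-I, -I]])             # x - t <= 0, -x - t <= 0
    b_ub = np.zeros(2 * N)
    A_eq = np.hstack([Phi, np.zeros((M, N))])        # Phi x = y
    bounds = [(None, None)] * N + [(0, None)] * N    # x free, t >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y, bounds=bounds)
    return res.x[:N]
</syntaxhighlight>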

Iterative Hard Thresholding (IHT)
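IHT iterates <math>x^{n+1} = H_k\!\left(x^n + \mu \Phi^T (y - \Phi x^n)\right)</math>, where <math>H_k</math> keeps the <math>k</math> largest-magnitude entries. A sketch of the basic iteration from Blumensath and Davies (NumPy assumed; the step size <math>\mu = 1</math> presumes <math>\| \Phi \|_2 \leq 1</math>, and the iteration count is arbitrary):

<syntaxhighlight lang="python">
# Iterative hard thresholding for y = Phi x + e with sparsity level k.
import numpy as np

def hard_threshold(x, k):
    """H_k: keep the k largest-magnitude entries of x, zero the rest."""
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-k:]
    out[idx] = x[idx]
    return out

def iht(Phi, y, k, mu=1.0, n_iters=200):
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iters):
        x = hard_threshold(x + mu * Phi.T @ (y - Phi @ x), k)
    return x
</syntaxhighlight>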

Solver
