MEng Systems Engineering student

MS and BS in Statistics
= Introduction =

In practice a signal <math display="inline">x \in \mathbb{R}^N</math> is often not exactly sparse but only approximately sparse. We observe

<math display="block">y = \Phi x + e,</math>

where <math display="inline">\Phi \in \mathbb{R}^{M \times N}</math> with <math display="inline">M \ll N</math> is a random Gaussian or Bernoulli sensing matrix, <math display="inline">y \in \mathbb{R}^M</math> is the vector of observed samples, and <math display="inline">e \in \mathbb{R}^M</math> is a noise vector with <math display="inline">\| e \|_2 \leq \eta</math>. The goal is to reconstruct <math display="inline">x_0 \in \mathbb{R}^N</math> given only <math display="inline">y</math> and <math display="inline">\Phi</math>.

Since <math display="inline">M \ll N</math>, the system is underdetermined and has infinitely many solutions, so simply minimizing the <math display="inline">l_2</math> norm does not recover the right solution; in this setting compression is synonymous with sparsity, and the problem is to recover the sparse data <math display="inline">x_0 \in \mathbb{R}^N</math>. The sensing matrix <math display="inline">\Phi</math> must satisfy the restricted isometry property (RIP); random Gaussian and Bernoulli matrices satisfy it (cite).

Let <math display="inline">[N] = \{1, \dots , N\}</math> be the index set that enumerates the columns of <math display="inline">\Phi</math> and the entries of <math display="inline">x</math>. The support of <math display="inline">x_0</math> is <math display="inline">supp(x_0) = \{i \in [N]: x_0(i) \neq 0 \}</math>, and we say <math display="inline">x</math> is <math display="inline">k</math>-sparse when <math display="inline">|supp(x)| \leq k</math>. We are interested in the solution with the smallest support, i.e. the minimizer of <math display="inline">|supp(x)|</math>.

The restricted isometry property (RIP) was introduced by Candes and Tao, and random Gaussian and Bernoulli matrices satisfy it. Before stating the RIP, we define the restricted isometry constant (RIC): it is the smallest <math display="inline">\delta_s</math> such that the RIP condition below holds for every <math display="inline">s</math>-sparse <math display="inline">x</math>, i.e. for every support <math display="inline">S \subseteq [N]</math> with <math display="inline">|S| = s</math>. For such an <math display="inline">S</math>, the restriction of <math display="inline">x \in \mathbb{R}^N</math> to <math display="inline">S</math> is denoted <math display="inline">x_{|S}</math>, and <math display="inline">\Phi_{|S}</math> denotes the submatrix of <math display="inline">\Phi</math> whose columns are indexed by <math display="inline">i \in S</math>:
<math display="block">(x_{|S})_i = \left\{
\begin{array}{c c}
x_i, & \text{if } i \in S\\
0, & \text{otherwise}
\end{array}
\right.</math>
The RIP with constant <math display="inline">\delta_s</math> is defined as

<math display="block">(1 - \delta_s) \| x \|_2 ^2 \leq \| \Phi x \|_2^2 \leq (1 + \delta_s) \| x \|_2 ^2</math>

for every <math display="inline">s</math>-sparse <math display="inline">x</math>.
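To make the setup concrete, the following sketch (an illustration added here, not taken from the cited papers) builds a random Gaussian <math display="inline">\Phi</math>, a <math display="inline">k</math>-sparse signal, and noisy measurements <math display="inline">y = \Phi x + e</math>, and then estimates how far <math display="inline">\| \Phi x \|_2^2 / \| x \|_2^2</math> strays from 1 over random sparse vectors, which is only a rough empirical proxy for <math display="inline">\delta_k</math>; the dimensions and noise level are arbitrary choices.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (arbitrary choices): ambient dimension N,
# number of measurements M, sparsity level k.
N, M, k = 256, 80, 8

# Random Gaussian sensing matrix, scaled so columns have unit expected norm.
Phi = rng.normal(0.0, 1.0 / np.sqrt(M), size=(M, N))

# A k-sparse signal x0 and noisy measurements y = Phi @ x0 + e with ||e||_2 <= eta.
x0 = np.zeros(N)
x0[rng.choice(N, size=k, replace=False)] = rng.normal(size=k)
eta = 0.01
e = rng.normal(size=M)
e *= eta / np.linalg.norm(e)
y = Phi @ x0 + e

# Monte Carlo proxy for the RIC: see how far ||Phi x||^2 / ||x||^2 strays
# from 1 over random k-sparse vectors (a lower bound on the true delta_k,
# which would require checking every support of size k).
ratios = []
for _ in range(2000):
    x = np.zeros(N)
    x[rng.choice(N, size=k, replace=False)] = rng.normal(size=k)
    ratios.append(np.linalg.norm(Phi @ x) ** 2 / np.linalg.norm(x) ** 2)
print("empirical delta_k estimate:", max(1 - min(ratios), max(ratios) - 1))
</syntaxhighlight>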
The following three lemmas are collected on page 267 of Blumensath and Davies, Iterative Hard Thresholding for Compressed Sensing.
Lemma 1 (Blumensath and Davies 2009, Iterative Hard Thresholding for Compressed Sensing). For all index sets <math display="inline">\Gamma</math> and all <math display="inline">\Phi</math> for which the RIP holds with <math display="inline">s = |\Gamma|</math>, that is, with <math display="inline">\Gamma = supp(x)</math>:
<math display="block">\begin{split}
&\| \Phi_\Gamma^T y \|_2 \leq \sqrt{1 + \delta_{|\Gamma|}} \| y\|_2,\\
&(1 - \delta_{|\Gamma|}) \| x_{\Gamma} \|_2 ^2 \leq \| \Phi_{\Gamma} x_{\Gamma} \|_2^2 \leq (1 + \delta_{|\Gamma|}) \| x_{\Gamma} \|_2 ^2, \\
&\| (I - \Phi_{\Gamma}^T \Phi_{\Gamma}) x_{\Gamma}\|_2 \leq \delta _{|\Gamma|} \| x_{\Gamma}\|_2,\\
&\text{and, whenever } \Gamma \cap \Lambda = \emptyset,\\
&\| \Phi_{\Gamma}^T \Phi_{\Lambda} x_\Lambda \|_2 \leq \delta_s \| x_\Lambda \|_2.
\end{split}</math>
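The constants in Lemma 1 can be spot-checked numerically for one particular support. The sketch below (an added illustration with arbitrary dimensions) computes the tightest restricted isometry constant of a single column submatrix <math display="inline">\Phi_\Gamma</math> from its singular values and verifies the bound on <math display="inline">\| (I - \Phi_\Gamma^T \Phi_\Gamma) x_\Gamma \|_2</math>.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(4)
M, N, s = 80, 256, 8
Phi = rng.normal(0.0, 1.0 / np.sqrt(M), size=(M, N))

# Pick one support Gamma and look at the singular values of the
# corresponding column submatrix Phi_Gamma.
Gamma = rng.choice(N, size=s, replace=False)
Phi_G = Phi[:, Gamma]
sv = np.linalg.svd(Phi_G, compute_uv=False)

# For this particular Gamma, the tightest constant d in
# (1 - d)||x||^2 <= ||Phi_Gamma x||^2 <= (1 + d)||x||^2 is:
d_Gamma = max(sv.max() ** 2 - 1.0, 1.0 - sv.min() ** 2)
print("restricted isometry constant on this support:", d_Gamma)

# The eigenvalues of Phi_Gamma^T Phi_Gamma lie in [1 - d_Gamma, 1 + d_Gamma],
# which gives the bound ||(I - Phi_Gamma^T Phi_Gamma) x_Gamma|| <= d_Gamma ||x_Gamma||.
x_G = rng.normal(size=s)
lhs = np.linalg.norm(x_G - Phi_G.T @ (Phi_G @ x_G))
print(lhs, "<=", d_Gamma * np.linalg.norm(x_G))
</syntaxhighlight>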
Lemma 2 (Needell and Tropp, Proposition 3.5 in CoSaMP: Iterative signal recovery from incomplete and inaccurate samples).
If <math display="inline">\Phi</math> satisfies the RIP <math display="inline">\| \Phi x^s\|_2 \leq \sqrt{1 + \delta_s} \| x^s\|_2</math> for all <math display="inline">x^s</math> with <math display="inline">\| x^s\|_0 \leq s</math>, then for all <math display="inline">x</math>
<math display="block">\| \Phi x\|_2 \leq \sqrt{1 + \delta_s} \| x\|_2 + \sqrt{1 + \delta_s} \frac{\| x \|_1}{\sqrt{s}}</math>
Lemma 3 (Needell and Tropp, Proposition 3.5 in CoSaMP: Iterative signal recovery from incomplete and inaccurate samples).
Let <math display="inline">x^s</math> be the best <math display="inline">s</math>-term approximation to <math display="inline">x</math>, and let <math display="inline">x_r = x - x^s</math>, so that

<math display="block">y = \Phi x + e = \Phi x^s + \Phi x_r + e = \Phi x^s + \tilde{e}.</math>
If <math display="inline">\Phi</math> satisfies the RIP for sparsity <math display="inline">s</math>, then the norm of the error <math display="inline">\tilde{e}</math> is bounded by
<math display="block">\| \tilde{e} \|_2 \leq \sqrt{1 + \delta_s} \| x - x^s\|_2 + \sqrt{1 + \delta_s} \frac{\| x - x^s\|_1}{\sqrt{s}} + \| e \|_2 \quad \forall x</math>
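As a quick numerical sanity check (an added illustration, not from the cited paper), the sketch below forms the best <math display="inline">s</math>-term approximation of a compressible signal by keeping its <math display="inline">s</math> largest-magnitude entries, builds <math display="inline">\tilde{e} = \Phi x_r + e</math>, and compares <math display="inline">\| \tilde{e} \|_2</math> with the right-hand side of the bound; since the true RIC is combinatorial to compute, a nominal placeholder value of <math display="inline">\delta_s</math> is used.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)
N, M, s = 256, 80, 8
Phi = rng.normal(0.0, 1.0 / np.sqrt(M), size=(M, N))

# A compressible (approximately sparse) signal: coefficients decay like 1/i^2.
x = rng.choice([-1.0, 1.0], size=N) / np.arange(1, N + 1) ** 2

# Best s-term approximation x_s keeps the s largest-magnitude entries.
x_s = np.zeros(N)
keep = np.argsort(np.abs(x))[-s:]
x_s[keep] = x[keep]
x_r = x - x_s

e = 0.01 * rng.normal(size=M)
e_tilde = Phi @ x_r + e          # so that y = Phi @ x_s + e_tilde

# Right-hand side of the Lemma 3 bound. delta_s = 0.5 is a placeholder,
# not the true restricted isometry constant of Phi.
delta_s = 0.5
rhs = (np.sqrt(1 + delta_s) * np.linalg.norm(x_r)
       + np.sqrt(1 + delta_s) * np.linalg.norm(x_r, 1) / np.sqrt(s)
       + np.linalg.norm(e))
print(np.linalg.norm(e_tilde), "<=", rhs)
</syntaxhighlight>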
= Theory =
Gel’fand n-width
Errors <math display="inline">E(S, \Phi, D)</math>
Definition (Mutual Coherence)
Let <math display="inline">A \in \mathbb{R}^{M \times N}</math>; the mutual coherence <math display="inline">\mu_A</math> is defined by:
<math display="block">\mu_A = \max_{i \neq j} \frac{| \langle a_i, a_j \rangle |}{ \| a_i \| \| a_j \|}</math>
We want a small <math display="inline">\mu_A</math> because the columns of <math display="inline">A</math> are then nearly orthonormal, which helps <math display="inline">A</math> satisfy the RIP.
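For a concrete matrix, <math display="inline">\mu_A</math> can be computed directly from the Gram matrix of the normalized columns; the sketch below is an added illustration using an arbitrary random <math display="inline">A</math>.

<syntaxhighlight lang="python">
import numpy as np

def mutual_coherence(A: np.ndarray) -> float:
    """Largest absolute normalized inner product between distinct columns of A."""
    # Normalize each column to unit l2 norm.
    A_normed = A / np.linalg.norm(A, axis=0, keepdims=True)
    G = np.abs(A_normed.T @ A_normed)   # Gram matrix of normalized columns
    np.fill_diagonal(G, 0.0)            # exclude the i == j terms
    return float(G.max())

rng = np.random.default_rng(2)
A = rng.normal(size=(80, 256))
print("mutual coherence:", mutual_coherence(A))
</syntaxhighlight>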
= Numerical Example =
Basis Pursuit
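Basis pursuit recovers <math display="inline">x</math> by solving <math display="inline">\min_x \| x \|_1</math> subject to <math display="inline">\Phi x = y</math>. A minimal sketch, assuming SciPy is available, recasts the <math display="inline">l_1</math> objective as a linear program in the variables <math display="inline">(x, u)</math> with <math display="inline">|x_i| \leq u_i</math>; a dedicated convex-optimization solver would normally be used instead.

<syntaxhighlight lang="python">
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(Phi: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Solve min ||x||_1 subject to Phi x = y via the standard LP reformulation."""
    M, N = Phi.shape
    # Variables z = [x, u]; minimize sum(u) with |x_i| <= u_i.
    c = np.concatenate([np.zeros(N), np.ones(N)])
    A_eq = np.hstack([Phi, np.zeros((M, N))])
    # x - u <= 0 and -x - u <= 0 together encode |x| <= u.
    A_ub = np.block([[np.eye(N), -np.eye(N)],
                     [-np.eye(N), -np.eye(N)]])
    b_ub = np.zeros(2 * N)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
                  bounds=[(None, None)] * N + [(0, None)] * N)
    return res.x[:N]

# Small noiseless example: exact measurements of a 5-sparse signal.
rng = np.random.default_rng(3)
M, N, k = 40, 120, 5
Phi = rng.normal(size=(M, N)) / np.sqrt(M)
x0 = np.zeros(N)
x0[rng.choice(N, size=k, replace=False)] = rng.normal(size=k)
x_hat = basis_pursuit(Phi, Phi @ x0)
print("recovery error:", np.linalg.norm(x_hat - x0))
</syntaxhighlight>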
Iterative Hard Thresholding (IHT)
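A minimal sketch of the IHT iteration <math display="inline">x^{n+1} = H_k\left(x^n + \mu \Phi^T (y - \Phi x^n)\right)</math>, where <math display="inline">H_k</math> keeps the <math display="inline">k</math> largest-magnitude entries; the fixed step size <math display="inline">\mu = 1 / \| \Phi \|_2^2</math> and the iteration count are simple stabilizing choices made for this illustration, not the exact scheme analyzed by Blumensath and Davies.

<syntaxhighlight lang="python">
import numpy as np

def iht(Phi: np.ndarray, y: np.ndarray, k: int, n_iter: int = 300) -> np.ndarray:
    """Iterative hard thresholding: gradient step on ||y - Phi x||^2, then keep k entries."""
    N = Phi.shape[1]
    x = np.zeros(N)
    # Fixed step size; keeps the iteration stable when ||Phi||_2 > 1.
    step = 1.0 / np.linalg.norm(Phi, 2) ** 2
    for _ in range(n_iter):
        x = x + step * Phi.T @ (y - Phi @ x)
        # Hard threshold: zero out everything except the k largest-magnitude entries.
        small = np.argsort(np.abs(x))[:-k]
        x[small] = 0.0
    return x

# Small noiseless example with the same dimensions as the basis pursuit sketch.
rng = np.random.default_rng(5)
M, N, k = 40, 120, 5
Phi = rng.normal(size=(M, N)) / np.sqrt(M)
x0 = np.zeros(N)
x0[rng.choice(N, size=k, replace=False)] = rng.normal(size=k)
print("IHT recovery error:", np.linalg.norm(iht(Phi, Phi @ x0, k) - x0))
</syntaxhighlight>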
Solver