MEng Systems Engineering student; MS and BS in Statistics.
= Introduction =
A signal <math display="inline">x \in \mathbb{R}^N</math> is often not exactly sparse but only approximately sparse. The sensing matrix <math display="inline">\Phi \in \mathbb{R}^{M \times N}</math>, with <math display="inline">M \ll N</math>, is a random Gaussian or Bernoulli matrix. The vector <math display="inline">y \in \mathbb{R}^M</math> contains the observed samples, and <math display="inline">e \in \mathbb{R}^M</math> is a noise vector with <math display="inline">\| e\|_2 \leq \eta</math>.

How can we reconstruct <math display="inline">x</math> from <math display="inline">y = \Phi x + e</math>? The goal is to recover <math display="inline">x_0 \in \mathbb{R}^N</math> given <math display="inline">y</math> and <math display="inline">\Phi</math>. The sensing matrix <math display="inline">\Phi</math> must satisfy the restricted isometry property (RIP); random Gaussian and Bernoulli matrices satisfy it (cite). Let <math display="inline">[N] = \{1, \dots , N\}</math> be an index set; <math display="inline">[N]</math> enumerates the columns of <math display="inline">\Phi</math> and the entries of <math display="inline">x</math>. Because <math display="inline">M \ll N</math>, the system <math display="inline">y = \Phi x</math> is underdetermined and has infinitely many solutions. Minimizing the <math display="inline">l_2</math> norm does not work, because the minimum-<math display="inline">l_2</math>-norm solution is generally not sparse. In this setting, compressibility is synonymous with sparsity.

The problem formulation is to recover sparse data <math display="inline">x_0 \in \mathbb{R}^N</math>. The support of <math display="inline">x_0</math> is <math display="inline">supp(x_0) = \{i \in [N]: x_0(i) \neq 0 \}</math>, and we say <math display="inline">x</math> is <math display="inline">k</math>-sparse when <math display="inline">|supp(x)| \leq k</math>. Among all solutions we are interested in the one with the smallest support, i.e. the one minimizing <math display="inline">|supp(x)|</math>.

The restricted isometry property (RIP) was introduced by Candès and Tao, and random Gaussian and Bernoulli matrices satisfy it. Before stating the RIP, we define the restricted isometry constant (RIC): the RIC of order <math display="inline">s</math> is the smallest constant <math display="inline">\delta_s</math> such that the RIP condition below holds for every index set <math display="inline">S \subseteq [N]</math> with <math display="inline">|S| \leq s</math>; in particular, for a <math display="inline">k</math>-sparse <math display="inline">x</math> with support <math display="inline">\Gamma</math>, the relevant order is <math display="inline">s = |\Gamma|</math>. For <math display="inline">S \subseteq [N]</math>, the restriction of <math display="inline">x \in \mathbb{R}^N</math> to <math display="inline">S</math> is denoted <math display="inline">x_{|S}</math>, and <math display="inline">\Phi_{|S}</math> denotes the submatrix of <math display="inline">\Phi</math> whose columns are indexed by <math display="inline">i \in S</math>:
<math display="block">(x_{|S})_i = \left\{
\begin{array}{c c}
x_i, & if \ i \in S\\
0 & otherwise
\end{array}
\right.</math>
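A minimal NumPy sketch of this restriction operator (the helper name restrict and the example vector are illustrative, not taken from the cited papers):

<syntaxhighlight lang="python">
import numpy as np

def restrict(x, S):
    """Return x_|S: keep the entries of x indexed by S and zero out the rest."""
    idx = list(S)
    x_S = np.zeros_like(x)
    x_S[idx] = x[idx]
    return x_S

# Example: restrict a vector to the support S = {0, 3}
x = np.array([2.0, -1.0, 0.5, 4.0])
print(restrict(x, {0, 3}))   # [2. 0. 0. 4.]
</syntaxhighlight>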
The RIP of order <math display="inline">s</math> is then defined by requiring that, for every <math display="inline">s</math>-sparse <math display="inline">x</math>,
<math display="block">(1 - \delta_s) \| x \|_2 ^2 \leq \| \Phi x \|_2^2 \leq (1 + \delta_s) \| x \|_2 ^2</math>
Three lemmas are used below (cf. p. 267 of Blumensath and Davies, Iterative hard thresholding for compressed sensing).
Lemma 1 (Blumensath and Davies 2009, Iterative hard thresholding for compressed sensing). For all index sets <math display="inline">\Gamma</math> and all <math display="inline">\Phi</math> for which the RIP holds with <math display="inline">s = |\Gamma|</math>, that is, with <math display="inline">\Gamma = supp(x)</math>:
<math display="block">\begin{split}
&\| \Phi_\Gamma^T\|_2 \leq \sqrt{1 + \delta_{|\Gamma|}} \| y\|_2\\
&(1 - \delta_{|\Gamma|}) \| x_{\Gamma} \|_2 ^2 \leq \| \Phi_{\Gamma}^T \Phi_{\Gamma} x_{\Gamma} \|_2^2 \leq (1 + \delta_{|\Gamma|}) \| x_{\Gamma} \|_2 ^2 \\
and & \\
&\| (I - \Phi_{\Gamma}^T \Phi_{\Gamma})\|_2 \leq \delta _{|\Gamma|} \| x_{\Gamma}\|_2\\
& Suppose \Gamma \cap \Lambda = \emptyset\\
& \| \Phi_{\Gamma}^T \Phi_{\Lambda}) x_\Lambda \|_2 \leq \delta_s \| x_\Lambda \|_2
\end{split}</math>
Lemma 2 (Needell and Tropp, Prop. 3.5 in CoSaMP: Iterative signal recovery from incomplete and inaccurate samples).
If <math display="inline">\Phi</math> satisfies the RIP, so that <math display="inline">\| \Phi x^s\|_2 \leq \sqrt{1 + \delta_s} \| x^s\|_2</math> for all <math display="inline">x^s</math> with <math display="inline">\| x^s\|_0 \leq s</math>, then for all <math display="inline">x</math>
<math display="block">\| \Phi x\|_2 \leq \sqrt{1 + \delta_s} \| x\|_2 + \sqrt{1 + \delta_s} \frac{\| x \|_1}{sqrt{s}}</math>
Lemma 3 (Needell and Tropp, Prop. 3.5 in CoSaMP: Iterative signal recovery from incomplete and inaccurate samples).
Let <math display="inline">x^s</math> be the best <math display="inline">s</math>-term approximation to <math display="inline">x</math>, and let <math display="inline">x_r = x - x^s</math>, so that <math display="block">y = \Phi x + e = \Phi x^s + \Phi x_r + e = \Phi x^s + \tilde{e}</math>
If <math display="inline">\Phi</math> satisfies the RIP for sparsity <math display="inline">s</math>, then the norm of the error <math display="inline">\tilde{e}</math> is bounded by
<math display="block">\| \tilde{e} \|_2 \leq \sqrt{1 + \delta_s} \| x - x^s\|_2 + \sqrt{1 + \delta_s} \frac{\| x - x^s\|_1}{\sqrt{s}} + \| e \|_2 \quad \forall x</math>
= Theory =
* Gel'fand n-width
* Errors <math display="inline">E(S, \Phi, D)</math>
Definition (Mutual Coherence).
Let <math display="inline">A \in \mathbb{R}^{M \times N}</math> with columns <math display="inline">a_1, \dots, a_N</math>; the mutual coherence <math display="inline">\mu_A</math> is defined by
<math display="block">\mu_A = \underset{i \neq j} {\frac{| \langle a_i, a_j \rangle |}{ \| a_i \| \| a_j \|}}</math>
We want a small <math display="inline">\mu_A</math>, because then the columns of <math display="inline">A</math> are nearly orthogonal, i.e. <math display="inline">A</math> behaves like an orthonormal matrix, which helps it satisfy the RIP.
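A short sketch of this definition (the function name mutual_coherence is illustrative): normalize the columns of <math display="inline">A</math> to unit norm, form the Gram matrix, and take the largest off-diagonal magnitude.

<syntaxhighlight lang="python">
import numpy as np

def mutual_coherence(A):
    """mu_A = max_{i != j} |<a_i, a_j>| / (||a_i|| ||a_j||) over the columns of A."""
    A_norm = A / np.linalg.norm(A, axis=0, keepdims=True)   # unit-norm columns
    G = np.abs(A_norm.T @ A_norm)                           # |<a_i, a_j>| for normalized columns
    np.fill_diagonal(G, 0.0)                                # ignore the i = j terms
    return G.max()

rng = np.random.default_rng(2)
A = rng.standard_normal((60, 200))
print("mutual coherence:", mutual_coherence(A))
</syntaxhighlight>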
= Numerical Example =
* Basis Pursuit (sketch below)
* Iterative Hard Thresholding (IHT) (sketch below)
* Solver
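A minimal sketch of the first two approaches, under stated assumptions: basis pursuit is written as the linear program <math display="inline">\min \| x \|_1</math> subject to <math display="inline">\Phi x = y</math> (noiseless model, split as <math display="inline">x = u - v</math> with <math display="inline">u, v \geq 0</math> to make the objective linear) and solved with scipy.optimize.linprog, while IHT iterates <math display="inline">x^{n+1} = H_s(x^n + \mu \Phi^T (y - \Phi x^n))</math> with the conservative step size <math display="inline">\mu = 1 / \| \Phi \|_2^2</math>. The dimensions, iteration count, and step-size choice are illustrative, not prescriptions from the cited papers.

<syntaxhighlight lang="python">
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(3)
M, N, s = 60, 200, 5

Phi = rng.standard_normal((M, N)) / np.sqrt(M)
x_true = np.zeros(N)
x_true[rng.choice(N, size=s, replace=False)] = rng.standard_normal(s)
y = Phi @ x_true                       # noiseless measurements for simplicity

# --- Basis pursuit as an LP: minimize sum(u + v) s.t. [Phi, -Phi][u; v] = y, u, v >= 0
c = np.ones(2 * N)
A_eq = np.hstack([Phi, -Phi])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None))
assert res.success
x_bp = res.x[:N] - res.x[N:]

# --- Iterative hard thresholding: x <- H_s(x + mu * Phi^T (y - Phi x))
def hard_threshold(v, s):
    out = np.zeros_like(v)
    keep = np.argsort(np.abs(v))[-s:]
    out[keep] = v[keep]
    return out

mu = 1.0 / np.linalg.norm(Phi, 2) ** 2     # step size based on the spectral norm of Phi
x_iht = np.zeros(N)
for _ in range(500):
    x_iht = hard_threshold(x_iht + mu * Phi.T @ (y - Phi @ x_iht), s)

print("basis pursuit error:", np.linalg.norm(x_bp - x_true))
print("IHT error:          ", np.linalg.norm(x_iht - x_true))
</syntaxhighlight>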