Petros Boufounos

Title: L1 minimization without amplitude information

Abstract: In many practical situations, we need to reconstruct sparse signals from measurements that eliminate amplitude information of the acquired signal. This talk discusses two specific acquisition scenarios. In the first scenario, the sparse signal is acquired by recording the timing of its zero crossings. Signal reconstruction from zero crossings is well known to be unstable. The sparsity prior and L1 regularization stabilize the reconstruction and allow us to recover the signal. In the second scenario, we acquire only the signs of linear measurements of a sparse signal. This introduces significant reconstruction ambiguity, which the sparsity prior allows us to resolve.

In both cases, the measurements eliminate amplitude information on the signal, resulting in a fundamental ambiguity. To resolve this ambiguity we impose the constraint that the signal lies on the unit L2 sphere. This constraint significantly reduces the search space and improves reconstruction results. We demonstrate a variation of the Fixed Point Continuation algorithm that runs on the unit sphere and, despite the nonconvexity of the problems, produces very good results.
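A sphere-constrained iteration of this flavor might be sketched as follows. This is a hypothetical NumPy reconstruction, not the talk's algorithm: the function name, the specific sign-consistency penalty, and the step and threshold parameters are our illustrative choices.

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def sphere_fpc(A, y, tau=0.02, iters=500, seed=0):
    """Recover a sparse unit vector from sign measurements y = sign(A x):
    take a gradient step on a sign-consistency penalty, soft-threshold to
    promote sparsity, then renormalize onto the unit l2 sphere."""
    m, n = A.shape
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n)
    x /= np.linalg.norm(x)
    step = 1.0 / m
    for _ in range(iters):
        r = np.minimum(y * (A @ x), 0.0)   # nonzero only where a sign is violated
        g = A.T @ (y * r)                  # gradient of 0.5 * ||r||^2
        x = soft(x - step * g, tau)        # shrink toward sparsity
        nrm = np.linalg.norm(x)
        if nrm > 0:
            x /= nrm                       # back onto the unit sphere
    return x
```

The renormalization is the key step: without the unit-sphere constraint, x = 0 trivially satisfies every sign measurement after shrinkage.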


Tony Chan

Title: TVL1 models for imaging: global optimization and geometric properties: Part I

Abstract: Rudin, Osher, and Fatemi's total variation based image denoising model has been very popular. We describe properties of a variant in which the fidelity term is replaced by the L1 norm. This seemingly modest modification has important consequences, in terms of contrast invariance, multiscale image decompositions, and geometric properties of solutions. In addition, it has connections to convex formulations of certain important shape optimization problems from computer vision, such as shape denoising and piecewise constant segmentation. In particular, we show that it leads to algorithms based on convex optimization techniques that are guaranteed to find the global minimizer of certain non-convex shape optimization problems.
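For reference, the modification in question replaces the quadratic fidelity of the Rudin-Osher-Fatemi functional with an L1 fidelity; schematically (our notation, constants suppressed):

```latex
% ROF (quadratic fidelity)            vs.  the TV-L1 variant discussed here
\min_u \int |\nabla u|\,dx + \lambda \int (u - f)^2\,dx
\qquad \text{vs.} \qquad
\min_u \int |\nabla u|\,dx + \lambda \int |u - f|\,dx
```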


Ronald DeVore

Title:  Decoders for Compressed Sensing

Abstract: In compressed sensing, we encode a discrete signal $x\in \mathbb{R}^N$ by the vector $y=\Phi x$, where $\Phi$ is a suitably chosen $n\times N$ matrix with $n \ll N$.  The vector $x$ is underdetermined by $y$, and so decoding $y$ is a typical inverse problem. We shall discuss various ways of decoding $y$, including $\ell_1$ minimization and greedy algorithms, and compare the performance of encoder-decoder pairs in terms of accuracy for a given computational budget.
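A minimal sketch of the $\ell_1$-minimization decoder, via the standard linear-programming reformulation; the solver and variable names below are our choices, not the talk's:

```python
import numpy as np
from scipy.optimize import linprog

def l1_decode(Phi, y):
    """Decode y = Phi x by min ||x||_1 s.t. Phi x = y, rewritten as a
    linear program: x = u - v with u >= 0, v >= 0, objective sum(u + v)."""
    n, N = Phi.shape
    c = np.ones(2 * N)                    # sum(u) + sum(v) = ||x||_1 at optimum
    A_eq = np.hstack([Phi, -Phi])         # Phi u - Phi v = y
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * N))
    u, v = res.x[:N], res.x[N:]
    return u - v
```

Greedy decoders replace the LP solve with iterative support identification, trading some accuracy guarantees for speed; the budget comparison mentioned above weighs exactly this trade-off.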


Selim Esedoglu

Title: TVL1 models for imaging: global optimization and geometric properties: Part II

Abstract: We first discuss how the convex formulation of two-phase, piecewise constant image segmentation motivated by the L1 fidelity version of the ROF model allows adapting the convex duality based optimization methods developed originally for the standard ROF model to the context of image segmentation. We also briefly indicate recent work on applying these ideas to other related shape optimization problems. Then, we shift focus and discuss the correct way to generalize L1 regularization to the context of geometry denoising, which is a standard problem (also called surface fairing) in computer graphics.


Anna Gilbert (joint work with R. Berinde, P. Indyk, H. Karloff, and M. Strauss)

Title: Combining geometry and combinatorics: a unified approach to sparse signal recovery

Abstract: There are two main algorithmic approaches to sparse signal recovery: geometric and combinatorial. The geometric approach starts with a geometric constraint on the measurement matrix Phi and then uses linear programming to decode information about x from Phi x. The combinatorial approach constructs Phi and a combinatorial decoding algorithm to match. We present a unified approach to these two classes of sparse signal recovery algorithms. The unifying elements are the adjacency matrices of high-quality unbalanced expanders. We generalize the notion of Restricted Isometry Property (RIP), crucial to compressed sensing results for signal recovery, from the Euclidean norm to the l_p norm for p >= 1, and then show that unbalanced expanders are essentially equivalent to RIP-p matrices. From known deterministic constructions for such matrices, we obtain new deterministic measurement matrix constructions and algorithms for signal recovery which, compared to previous deterministic algorithms, are superior in either the number of measurements or in noise tolerance.
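For illustration, here is the adjacency matrix of a random bipartite graph with constant left degree d, the kind of object that is an unbalanced expander with high probability. This generic random construction is only a stand-in; the abstract's deterministic constructions are a separate matter.

```python
import numpy as np

def random_bipartite_adjacency(m, N, d, seed=0):
    """m-by-N 0/1 matrix in which every column has exactly d ones: the
    adjacency matrix of a bipartite graph where each of the N left (signal)
    nodes connects to d of the m right (measurement) nodes."""
    rng = np.random.default_rng(seed)
    Phi = np.zeros((m, N))
    for j in range(N):
        Phi[rng.choice(m, size=d, replace=False), j] = 1.0
    return Phi
```

Measuring with such a Phi uses only additions and touches few signal coordinates per measurement, which is what makes the combinatorial decoding algorithms fast; the talk's point is that an RIP-type property in the l1 norm still holds for these matrices.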


Jean-Luc Guermond/Bojan Popov

Title: Approximating PDEs in L1

Abstract: In this talk we will consider L_1-based minimization methods for three different problems:
  (1) Ill-posed transport equations;
  (2) Stationary Hamilton-Jacobi equations;
  (3) Digital elevation maps for natural and urban terrain.
We will describe each of the three problems. In the case of stationary Hamilton-Jacobi equations, a convergence theory will be presented. In all cases we construct approximate solutions using a special type of L1-based minimization. The main features of our methods are that they are of arbitrary polynomial order and that the numerical solutions are oscillation-free even when the underlying data or solution has singularities.
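Schematically, and in our notation rather than necessarily the authors', such an approach selects the discrete solution by minimizing the equation residual in l1 over a finite-dimensional approximation space $V_h$:

```latex
% l1-residual minimization for a stationary Hamilton-Jacobi equation
% H(x, \nabla u) = 0: pick the approximation with smallest l1 residual
u_h \in \operatorname*{arg\,min}_{v_h \in V_h} \ \| H(\cdot, \nabla v_h) \|_{\ell_1}
```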


Alexander Kurganov

Title: Numerical Methods for Modern Traffic Flow Models

Abstract: I will describe two different traffic flow models. The first model is a scalar conservation law with Arrhenius look-ahead dynamics. It is a modification of the classical fluid dynamics model by Lighthill and Whitham. The modification is based on an assumption that unlike the fluid particles, drivers can see the traffic dynamics ahead of their cars and adjust the speeds of their cars correspondingly. This factor results in a global dependence of the flux on the solution, which makes it challenging to develop numerical methods for the studied scalar conservation law.

The second model is a modification of the Aw-Rascle system, designed to describe the formation and dynamics of traffic jams. The model consists of a constrained pressureless gas dynamics system, which is extremely stiff. The stiffness makes it difficult to develop robust and accurate explicit numerical schemes for this model.

I will present several numerical methods for the above two models and demonstrate their superb performance on a number of numerical examples.


Gitta Kutyniok (joint work with David Donoho, Stanford University)

Title: l1-Minimization and the Geometric Separation Problem

Abstract: Modern data are often composed of two (or more) geometrically distinct constituents -- for instance, pointlike and curvelike structures in astronomical imaging of galaxies. Although it seems impossible to extract those components -- as there are two unknowns for every datum -- suggestive empirical results have already been obtained.

In this talk we develop a theoretical approach to this Geometric Separation Problem in which a deliberately overcomplete representation is chosen made of two frames. One is suited to pointlike structures (wavelets) and the other suited to curvelike structures (curvelets or shearlets). The decomposition principle is to minimize the  $\ell_1$ norm of the analysis (rather than synthesis) frame coefficients. Our theoretical results show that at all sufficiently fine scales, nearly-perfect separation is indeed achieved.

Our analysis has three interesting features. Firstly, we use a viewpoint deriving from microlocal analysis to understand heuristically why separation might be possible and to organize a rigorous analysis. Secondly, we introduce some novel technical tools: cluster coherence, rather than the now-traditional singleton coherence and $\ell_1$-minimization in frame settings, including those where singleton coherence within one frame may be high. Thirdly, our approach to exploiting $\ell_1$ minimization by using the geometry of the problem as the driving force reaches conclusions reminiscent of those which have been obtained previously using randomness of the sparsity pattern to be recovered; however here, the pattern is not random but highly organized.
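In symbols (our notation), with $\Phi_1$ a frame suited to pointlike structures and $\Phi_2$ a frame suited to curvelike structures, the analysis-side decomposition principle reads:

```latex
% split the data f into pointlike + curvelike parts by minimizing the
% l1 norm of the ANALYSIS coefficients of each part
\min_{f_1 + f_2 = f} \ \| \Phi_1^{*} f_1 \|_{\ell_1} + \| \Phi_2^{*} f_2 \|_{\ell_1}
```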


Stanley Osher

Title: Fast Bregman Iteration for Compressive Sensing and Sparse Denoising

Abstract: We propose, analyze, and show results of extremely fast, efficient, and simple methods for solving the convex optimization problem $\min_u \{\|u\|_{\ell_1} : Au = f\}$, where $u \in \mathbb{R}^n$ and $A$ is an $m \times n$ matrix with $m < n$. The linearized method was first devised with J. Darbon; the realization that it is relevant to this problem was then developed in more generality, with theoretical and computational results, with W. Yin, D. Goldfarb, and J. Darbon; convergence results for the linearized method were obtained with J. Cai and Z. Shen; and an improvement, with sparse denoising added, was developed with Y. Mao, B. Dong, and W. Yin. Bregman iteration, introduced to image science with M. Burger, D. Goldfarb, J.-J. Xu, and W. Yin, is the key idea.
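A rough sketch of Bregman iteration for this constrained problem, with the inner subproblem solved approximately by iterative soft-thresholding; the step sizes, iteration counts, and parameter names below are illustrative choices of ours:

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def bregman_l1(A, f, lam=5.0, inner=100, outer=30):
    """Bregman iteration for min ||u||_1 s.t. Au = f: repeatedly solve
        u <- argmin ||u||_1 + (lam/2) ||Au - fk||^2
    (approximately, by ISTA steps) and then 'add back the residual':
        fk <- fk + f - A u."""
    m, n = A.shape
    u = np.zeros(n)
    fk = f.copy()
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # ISTA step size, 1/||A||_2^2
    for _ in range(outer):
        for _ in range(inner):
            u = soft(u - step * (A.T @ (A @ u - fk)), step / lam)
        fk = fk + f - A @ u                  # Bregman residual update
    return u
```

The residual-adding update is what distinguishes this from simply solving one penalized problem: it drives Au toward f exactly, rather than to within a penalty-dependent bias.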


Alexander Petoukhov (joint work with Inna Kozlov)

Title: l^1 greedy algorithm for finding solutions of underdetermined linear systems

Abstract: Finding sparse solutions of underdetermined systems Ax=b has been a hot topic in recent years. Many problems in information theory (data compression, error-correcting codes, compressed sensing, etc.) can be reformulated in terms of this problem.

We are going to discuss different applications of this problem and to present a new algorithm that solves it for a wider range of input data. The known algorithms represent two competing classes: l^1 minimization and orthogonal greedy methods. Our new algorithm, essentially based on reweighted l^1 minimization (by E. Candes, M. Wakin and S. Boyd), may serve as "a peace agreement" between the two competing approaches. The algorithm outperforms reweighted l^1 optimization by about 10%.
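A compact sketch of the reweighted l^1 scheme referred to above, in the Candes-Wakin-Boyd style; this is our own minimal implementation on top of an LP solver, with illustrative parameter choices, not the authors' algorithm:

```python
import numpy as np
from scipy.optimize import linprog

def reweighted_l1(A, b, iters=4, eps=0.1):
    """Iteratively solve min sum_i w_i |x_i| s.t. Ax = b, then update the
    weights w_i = 1 / (|x_i| + eps) so that small entries are penalized
    more heavily on the next pass.  The first pass (w = 1) is plain l1."""
    m, n = A.shape
    w = np.ones(n)
    A_eq = np.hstack([A, -A])             # x = u - v, u >= 0, v >= 0
    x = np.zeros(n)
    for _ in range(iters):
        c = np.concatenate([w, w])        # weighted l1 objective
        res = linprog(c, A_eq=A_eq, b_eq=b, bounds=[(0, None)] * (2 * n))
        x = res.x[:n] - res.x[n:]
        w = 1.0 / (np.abs(x) + eps)       # reweight toward the support of x
    return x
```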


Justin Romberg

Title: Architectures for Compressive Sampling

Abstract: Several recent results in Compressive Sampling show that a sparse signal (i.e. one which can be compressed in a known orthobasis) can be efficiently acquired by taking linear measurements against random test functions.  In practice, however, it is difficult to build sensing devices which take these types of measurements.  In this presentation, we will show how to extend some of the results in Compressive Sampling to measurement systems which are more amenable to real-world implementation.  We will discuss applications to radar imaging, coherent optical imaging, and next-generation analog-to-digital converters.


Panagiotis Souganidis

Title: Rates of convergence for monotone approximations of viscosity solutions

Abstract: In this talk I will review a general theory yielding the convergence of monotone approximations to viscosity solutions of first- and second-order PDEs. I will also present new results about error estimates.


Eitan Tadmor

Title: L1 Techniques for Three Problems in Nonlinear PDEs, Signal and Image Processing

Abstract: We address three different questions using L1 techniques.    
    (i) The vanishing viscosity limit of two-dimensional nonlinear convection: L1 convergence follows from spectral dynamics (joint work with D. Wei);
    (ii) Detection of edges in spectral information of piecewise smooth data (joint work with S. Engelberg and J. Zou); and
    (iii) Recent results on image processing using (BV, L1) multi-scale hierarchical decompositions (joint work with P. Athavale).  


Jared Tanner (joint work with David Donoho, Stanford University)

Title: The surprising structure of Gaussian point clouds and its implications for signal processing

Abstract: We will explore connections between the structure of high-dimensional convex polytopes and information acquisition for compressible signals. A classical result in the field of convex polytopes is that if N points are drawn i.i.d. from a Gaussian distribution in dimension n<<N, then only order (log N)^n of the points are vertices of their convex hull.  Recent results show that, provided n grows slowly with N, then with high probability all of the points are vertices of their convex hull.  More surprisingly, a rich "neighborliness" structure emerges in the faces of the convex hull.  One implication of this phenomenon is that an N-vector with k non-zeros can be recovered computationally efficiently from only n random projections with n=2e k log(N/n). Alternatively, the best k-term approximation of a signal in any basis can be recovered from 2e k log(N/n) non-adaptive measurements, which is within a log factor of the optimal rate achievable for adaptive sampling.
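The vertex phenomenon is easy to observe numerically; the dimensions and sample sizes below are our illustrative choices:

```python
import numpy as np
from scipy.spatial import ConvexHull

def vertex_fraction(N, n, seed=0):
    """Fraction of N i.i.d. Gaussian points in R^n that are vertices
    of their own convex hull."""
    rng = np.random.default_rng(seed)
    pts = rng.standard_normal((N, n))
    return len(ConvexHull(pts).vertices) / N
```

In the plane, most Gaussian points fall in the interior of the hull; in even moderately high dimension, essentially every point is a vertex.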


Richard Tsai

Title: Exploratory path planning and target detection

Abstract: I will present recent work on planning a path through an unknown environment under various settings. In this talk, a robot is placed in an unknown environment and must map out the environment using what can be seen from its current and previous locations. When an unknown diffusive source is present, the robot, equipped with an appropriate sensor, must move to locations from which the unknown source can be determined from the collected sensor data while being kept under visual surveillance by the robot.


Yin Zhang

Title:  Enhanced Compressive Sensing and More

Abstract: We consider the equivalence between finding the sparsest vector in a set and continuous optimization on the same set.  This approach enables remarkably simple proofs to unify and extend some compressive sensing recoverability and stability results, and reveals potential benefits of doing enhanced compressive sensing (ECS) where prior information could allow recovery of much less sparse signals.

We will also introduce some recent algorithmic work done by our CAAM group at Rice University.