The variational approach to inverse problems in imaging has seen many developments over the last thirty years. Its flexible mathematical framework has made it possible to prove results guaranteeing recovery success under assumptions on the model parameters (sparsity, number of measurements, nature of the noise, etc.). Many questions remain open in this field: inverse problems in spaces of measures (e.g. super-resolution), regularization adapted to new low-dimensional models, guarantees for recovery methods based on deep learning, etc. During this one-day workshop we propose to discuss the latest advances in the field.
English title: Workshop on “Imaging inverse problems - regularization, low dimensional models and applications”
Call for contributions
Contributions in English are welcome; the workshop will be held in English. Please send an abstract before February 21st, 2023.
If you would like to give a talk during the workshop, we invite you to submit a short abstract for a 20-minute presentation (plus 10 minutes for questions).
The deadline for abstract submission is Tuesday, February 21st.
Venue and registration
Institut de mathématiques de Bordeaux, Salle de Conférence, Talence. Attendance is free of charge, but registration is mandatory and subject to available seating (send an email before February 21st).
A Zoom session will be available to follow the workshop online. Just send an email to firstname.lastname@example.org to register before March 21st.
Speakers who need financial support for their travel can contact us directly.
Contact and abstract submission
- By email to Yann Traonmilin: email@example.com
Program
9:15 - Welcome - Set-up for first session
9:30 - 10:00 Clémence Prévost: Nonnegative block-term decomposition with the beta-divergence: joint data fusion and blind spectral unmixing
10:00 - 10:30 Zev Woodstock: Signal recovery from inconsistent nonlinear observations
10:30 - 11:00 Coffee break
11:00 - 11:30 Pierre Maréchal: On the deconvolution of random variables by mollification
11:30 - 12:00 Nathanaël Munier: The MLE is a reliable source: sharp performance guarantees for localization problems
12:00 - 13:30 Lunch (Buffet)
13:30 - 14:00 Mimoun Mohamed: Support Exploration Algorithm for Sparse Support Recovery
14:00 - 14:30 Pierre Weiss: Grid is Good: Adaptive Refinement Algorithms for Off-the-Grid Total Variation Minimization
14:30 - 15:00 Bastien Laville: Off-the-grid curve reconstruction: bridge the gap between point and level sets measure
15:00 - 15:30 Coffee break
15:30 - 16:00 Julia Lascar: Non-stationary hyperspectral unmixing with learnt regularization
16:30 - 17:00 Nathan Buskulic: Convergence Guarantees of Overparametrized Wide Deep Inverse Prior
17:00 - 17:30 Hui Shi: Compressive learning of deep regularization for denoising
Abstracts
Title: Nonnegative block-term decomposition with the beta-divergence: joint data fusion and blind spectral unmixing
Abstract: We present a new method for simultaneously solving hyperspectral super-resolution and spectral unmixing of the unknown super-resolution image. Our method relies on three key elements: (1) the nonnegative decomposition in rank-(Lr,Lr,1) block-terms, (2) joint tensor factorization with multiplicative updates, and (3) the formulation of a family of optimization problems with beta-divergence objective functions. We obtain a family of simple, robust and efficient algorithms, adaptable to various noise statistics. Experiments show that our approach competes favorably with state-of-the-art methods for both problems at hand.
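As background for item (3): in the simpler nonnegative matrix factorization setting (rather than the block-term tensor decomposition of the talk), the classical multiplicative updates for the beta-divergence take the form below. This is a minimal illustrative sketch, not the authors' algorithm.

```python
import numpy as np

def nmf_beta(V, rank, beta=1.0, n_iter=200, seed=0):
    """NMF V ≈ W @ H via multiplicative updates for the beta-divergence
    (beta=2: Frobenius, beta=1: Kullback-Leibler, beta=0: Itakura-Saito)."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, rank)) + 1e-3
    H = rng.random((rank, n)) + 1e-3
    eps = 1e-12
    for _ in range(n_iter):
        WH = W @ H + eps
        H *= (W.T @ (V * WH ** (beta - 2))) / (W.T @ WH ** (beta - 1) + eps)
        WH = W @ H + eps
        W *= ((V * WH ** (beta - 2)) @ H.T) / (WH ** (beta - 1) @ H.T + eps)
    return W, H

# Exactly rank-3 nonnegative data; KL divergence (beta=1) updates
rng = np.random.default_rng(1)
V = rng.random((20, 3)) @ rng.random((3, 30))
W, H = nmf_beta(V, rank=3, beta=1.0)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

The same update structure, applied jointly to the factors of all modalities, is what makes the approach adaptable to different noise statistics through the choice of beta.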
Title: Signal recovery from inconsistent nonlinear observations
Abstract: We show that many nonlinear observation models in signal recovery can be represented using firmly nonexpansive operators. To address problems with inaccurate measurements, we propose solving a variational inequality relaxation which is guaranteed to possess solutions under mild conditions and which coincides with the original problem if it happens to be consistent. We then present an efficient algorithm for its solution, as well as numerical applications in signal and image recovery, including an experimental operator-theoretic method of promoting sparsity.
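A standard example of a firmly nonexpansive operator (our illustration, not necessarily a model from the talk) is the proximity operator of the l1 norm, i.e. soft-thresholding, which is also a classical device for promoting sparsity. Firm nonexpansiveness, ||Tx - Ty||^2 <= <x - y, Tx - Ty>, can be checked numerically:

```python
import numpy as np

def soft_threshold(x, lam):
    """Proximity operator of lam * ||.||_1 -- firmly nonexpansive,
    like every proximity operator of a proper lsc convex function."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

rng = np.random.default_rng(0)
x, y = rng.normal(size=50), rng.normal(size=50)
Tx, Ty = soft_threshold(x, 0.5), soft_threshold(y, 0.5)
lhs = np.sum((Tx - Ty) ** 2)        # ||Tx - Ty||^2
rhs = np.dot(x - y, Tx - Ty)        # <x - y, Tx - Ty>
```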
Title: On the deconvolution of random variables by mollification
Abstract: We approach the problem of deconvolution of random variables by means of mollification. We will propose a unifying framework to better understand the advantages and disadvantages of classical methods (deconvolution kernels, Tikhonov regularization, spectral cutoff). Mollification makes it possible to relax the restrictive assumptions of deconvolution kernels, and has better stabilizing properties than spectral cutoff or Tikhonov regularization. We will show that this approach gives rise to optimal convergence rates for both convolution kernels with exponential decay and kernels with polynomial decay, under a source condition corresponding to a regularity of the Sobolev or even Besov type.
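To fix ideas on the classical methods being compared, here is a minimal FFT-based sketch of Tikhonov-regularized deconvolution on synthetic 1-D data; all signal, kernel and parameter choices are illustrative assumptions, and the mollification approach itself is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256
t = np.linspace(-1, 1, n, endpoint=False)
signal = (np.abs(t) < 0.3).astype(float)        # box-shaped signal
kernel = np.exp(-t**2 / (2 * 0.07**2))          # Gaussian convolution kernel
kernel /= kernel.sum()
kernel = np.fft.ifftshift(kernel)               # center the kernel at index 0

blurred = np.real(np.fft.ifft(np.fft.fft(signal) * np.fft.fft(kernel)))
noisy = blurred + 1e-4 * rng.normal(size=n)

# Tikhonov-regularized inverse filter in the Fourier domain
K, Y = np.fft.fft(kernel), np.fft.fft(noisy)
alpha = 1e-5
recon = np.real(np.fft.ifft(np.conj(K) * Y / (np.abs(K) ** 2 + alpha)))

err_blur = np.linalg.norm(noisy - signal) / np.linalg.norm(signal)
err_rec = np.linalg.norm(recon - signal) / np.linalg.norm(signal)
```

Replacing the Tikhonov filter by a hard frequency truncation gives spectral cutoff, the other classical baseline mentioned in the abstract.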
Title: The MLE is a reliable source: sharp performance guarantees for localization problems
Abstract: Understanding the intrinsic performance limits of sparse recovery problems is typically addressed using the Cramér-Rao lower bound. However, this bound is only reliable asymptotically, which means either for very small noise level or for a large number of measurements. In this talk I will provide non-asymptotic necessary conditions and also sufficient conditions for the maximum likelihood estimator (MLE) to yield a given precision with high probability. The two conditions match closely, with a discrepancy related to the conditioning of a noiseless cost function. They tightly surround the Cramér-Rao lower bound for low noise levels. However, they are significantly more precise for larger noise levels. In addition, this analysis reveals key geometric features that govern the localization accuracy.
This talk is based on this preprint
Title: Support Exploration Algorithm for Sparse Support Recovery
Abstract: In this work, we are interested in solving sparse linear inverse problems involving an l0 constraint. To this end, we propose a new algorithm called the Support Exploration Algorithm (SEA) that guarantees sparse support recovery. SEA uses a non-sparse exploratory vector evolving in the input space to select a sparse support. We exhibit an oracle update rule for the exploratory vector before using the straight-through estimator (STE) update for sparse recovery. Our theoretical analysis provides sufficient conditions for support recovery when the measurement matrix is arbitrary or satisfies the Restricted Isometry Property (RIP). First experiments on deconvolution problems show that SEA can supplement state-of-the-art algorithms and outperform them, especially when the measurement matrix is coherent.
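The interplay between a dense exploratory vector and sparse estimates can be mimicked as follows. This is a loose, hypothetical simplification (top-k selection, least squares on the selected support, straight-through-style gradient update on the dense vector), not the authors' exact SEA or its oracle update rule.

```python
import numpy as np

def support_exploration(A, y, k, n_iter=50):
    """Sketch: a dense exploratory vector v selects a k-sparse support; the
    residual gradient computed at the sparse estimate updates v (in the
    spirit of the straight-through estimator)."""
    m, n = A.shape
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    v = A.T @ y                                    # warm start
    best_x, best_res = np.zeros(n), np.inf
    for _ in range(n_iter):
        S = np.argsort(np.abs(v))[-k:]             # current candidate support
        x = np.zeros(n)
        x[S] = np.linalg.lstsq(A[:, S], y, rcond=None)[0]
        r = A @ x - y
        if np.linalg.norm(r) < best_res:
            best_res, best_x = np.linalg.norm(r), x.copy()
        v -= step * (A.T @ r)                      # update the dense vector
    return best_x

rng = np.random.default_rng(0)
m, n, k = 50, 80, 3
A = rng.normal(size=(m, n))
x_true = np.zeros(n)
x_true[[5, 20, 60]] = 5.0                          # well-separated sparse signal
y = A @ x_true                                     # noiseless measurements
x_hat = support_exploration(A, y, k)
```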
This is joint work of Mimoun Mohamed, François Malgouyres, Valentin Emiya and Caroline Chaux. The talk is based on this preprint.
Title: Grid is Good: Adaptive Refinement Algorithms for Off-the-Grid Total Variation Minimization
Abstract: We propose an adaptive refinement algorithm to solve total variation regularized measure optimization problems. The method iteratively constructs dyadic partitions of the unit cube based on (i) the resolution of discretized dual problems and (ii) the detection of cells containing points that violate the dual constraints. The detection is based on upper bounds on the dual certificate, in the spirit of branch-and-bound methods. The interest of this approach is that it avoids the use of heuristic approaches to find the maximizers of dual certificates. We prove the convergence of this approach under mild hypotheses and a linear convergence rate under additional non-degeneracy assumptions.
This is joint work of Axel Flinth, Frédéric de Gournay and Pierre Weiss.
Title: Off-the-grid curve reconstruction: bridge the gap between point and level sets measure
Abstract: The last few years have witnessed the development of super-resolution optimisation in measure spaces. These so-called “off-the-grid” approaches offer both theoretical guarantees (uniqueness, reconstruction guarantees) and very convincing numerical results in biomedical imaging. However, gridless variational optimisation is tailored for spike reconstruction, which is not always suitable in imaging: more realistic biological structures such as curves, representing blood vessels or filaments, should also be considered. We propose to discuss a new line of research allowing off-the-grid curve reconstruction, understood as the reconstruction of measures supported on curves from an image. By introducing a new space of measures with finite divergence, which we coined \( V \), as well as a new functional inspired by the BLASSO, we will present a result on the extreme points of the unit ball of the norm of \( V \), thus opening a promising avenue for numerical results that we propose to present.
Title: Non-stationary hyperspectral unmixing with learnt regularization
Abstract: In astrophysics or remote sensing, spectro-imagers can record cubes of data called hyperspectral images, with two spatial dimensions and a third dimension of energy. Often, the observed data are a mixture of several emitting sources. Thus, the task of source separation is key to perform detailed studies of the underlying physical components. Most source separation algorithms assume a stationary mixing model, i.e. a sum of spectra, one per component, each multiplied by an amplitude map. But in many cases, this assumption is erroneous, since the spectral shape of each component varies spatially due to physical properties. Our algorithm’s goal is to achieve non-stationary source separation, obtaining for each component a cube with varying spectral shape. This is an ill-posed problem, thus in need of regularization. For spectral regularization, we use a generative model learned with auto-encoders, which constrains the spectra to interpretable shapes in a semi-supervised scheme. This is combined with a spatial regularization scheme, via a sparse modelling of the generative model’s latent parameters. The optimization is carried out by an alternating proximal gradient descent algorithm. It was tested on the case study of X-ray astrophysics spectro-imagery, for which results will be shown on realistic simulated data. To our knowledge, this is the first method to extend sparse blind source separation to the non-stationary case.
Title: Convergence Guarantees of Overparametrized Wide Deep Inverse Prior
Abstract: Neural networks have become a prominent approach to solve inverse problems in recent years. Amongst the different existing methods, the Deep Image/Inverse Priors (DIPs) technique is an unsupervised approach that optimizes a highly overparametrized neural network to transform a random input into an object whose image under the forward model matches the observation. However, the level of overparametrization necessary for such methods remains an open problem. In this work, we aim to investigate this question for a two-layer neural network with a smooth activation function. We provide overparametrization bounds under which such a network trained via continuous-time gradient descent converges exponentially fast with high probability, from which recovery prediction bounds are derived. This work is thus a first step towards a theoretical understanding of overparametrized DIP networks, and more broadly it contributes to the theoretical understanding of neural networks in inverse problem settings.
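The setting can be mimicked numerically: a small two-layer network with a smooth activation and a fixed random input, trained by (discrete-time) gradient descent so that its output matches observations through a linear forward operator. All dimensions and step sizes below are illustrative assumptions, not the quantities of the analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, width, d = 40, 15, 400, 10           # signal dim, measurements, hidden width, input dim
A = rng.normal(size=(m, n)) / np.sqrt(m)   # linear forward operator
y = A @ rng.normal(size=n)                 # observed measurements of an unknown signal

z = rng.normal(size=d)                     # fixed random network input (the "prior")
W1 = rng.normal(size=(width, d)) / np.sqrt(d)
W2 = rng.normal(size=(n, width)) / np.sqrt(width)

sigmoid = lambda u: 1.0 / (1.0 + np.exp(-u))
losses, lr = [], 2e-4
for _ in range(3000):
    h = W1 @ z
    a = np.logaddexp(0.0, h)               # softplus: a smooth activation
    x = W2 @ a                             # network output G(z)
    r = A @ x - y
    losses.append(0.5 * np.dot(r, r))      # data-fidelity loss
    g_x = A.T @ r                          # backpropagation by hand
    g_h = (W2.T @ g_x) * sigmoid(h)        # softplus' = sigmoid
    W2 -= lr * np.outer(g_x, a)
    W1 -= lr * np.outer(g_h, z)
```

In this wide regime (hidden width much larger than the number of measurements) the training loss decays to near zero, consistent with the kind of convergence behavior the talk quantifies.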
Title: Compressive learning of deep regularization for denoising
Abstract: Solving ill-posed inverse problems can be done accurately if a regularizer well adapted to the nature of the data is available. Such a regularizer can be systematically linked with the distribution of the data itself through the maximum a posteriori Bayesian framework. Recently, regularizers designed with the help of deep neural networks (DNN) have had impressive success. Such regularizers are typically learned from large datasets. To reduce the computational burden of this task, we propose to adapt the compressive learning framework (called sketching) to the learning of regularizers parametrized by DNN. In this talk, we start by introducing the sketching framework and ReLU networks. Then we focus on describing the proposed framework. After that, we show the performance of the proposed methods on both synthetic data and real-life data. We conclude the presentation with some interesting open questions regarding the limitations of our method.
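The core sketching idea (compressing a whole dataset into a single mean vector of random features) can be written in a few lines. The random Fourier feature sketch below is a standard ingredient of compressive learning, though the exact sketch and its use for learning a DNN regularizer in the talk may differ.

```python
import numpy as np

def sketch(X, Omega):
    """Empirical sketch of dataset X: the average of the random Fourier
    features exp(i w^T x), one coordinate per frequency column of Omega."""
    return np.exp(1j * X @ Omega).mean(axis=0)

rng = np.random.default_rng(0)
d, k = 2, 50                         # data dimension, sketch size
Omega = rng.normal(size=(d, k))      # random frequencies, drawn once

X1 = rng.normal(size=(5000, d))          # dataset drawn from N(0, I)
X2 = rng.normal(size=(5000, d))          # fresh sample from the same law
X3 = rng.normal(size=(5000, d)) + 3.0    # sample from a shifted law

s1, s2, s3 = sketch(X1, Omega), sketch(X2, Omega), sketch(X3, Omega)
same = np.linalg.norm(s1 - s2)       # sketches of the same law are close
diff = np.linalg.norm(s1 - s3)       # sketches of different laws are far
```

The sketch size k is fixed regardless of the number of samples, which is what makes learning from the sketch, rather than from the full dataset, computationally attractive.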