Optimal feedback controls for nonlinear systems are characterized by the solutions to a Hamilton–Jacobi–Bellman (HJB) equation. In the deterministic case, this is a first-order hyperbolic equation.

Its dimension is that of the state space of the nonlinear system. Thus, solving the HJB equation is a formidable task, and one is confronted with a curse of dimensionality.

In practice, optimal feedback controls are frequently based on linearisation and subsequent treatment by efficient Riccati solvers. This can be effective, but it is a local procedure, and it may fail or lead to erroneous results.

In this talk, I give a brief survey of current solution strategies that partially cope with this challenge. Subsequently, I describe three approaches in some detail. The first is a data-driven technique, which approximates the solution to the HJB equation and its gradient from an ensemble of open-loop solves.

The second is based on Newton steps applied to the HJB equation. Combined with tensor calculus, this allows the approximate solution of HJB equations in up to 100 dimensions. Results are shown for the control of discretized Fokker–Planck equations. The third technique circumvents the direct solution of the HJB equation: instead, a neural network is trained by means of a suitably chosen ansatz. It is proven that the network approximates the optimal feedback gains as its dimension is increased.
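To give a flavour of the Newton-step idea in its simplest setting (a minimal sketch, not the speaker's tensor-based implementation): for a linear-quadratic problem, the HJB equation reduces to an algebraic Riccati equation, and Newton's method on it becomes the classical Kleinman–Newton iteration, where each step requires only a linear Lyapunov solve.

```python
import numpy as np

def kleinman_newton(A, B, Q, R, K0, iters=20):
    """Kleinman-Newton iteration for the algebraic Riccati equation
    A^T P + P A - P B R^{-1} B^T P + Q = 0.
    Each iteration is one Newton step on the (LQR instance of the)
    HJB equation and amounts to solving a Lyapunov equation."""
    K = K0  # assumed to be an initial stabilizing feedback gain
    n = A.shape[0]
    for _ in range(iters):
        Ac = A - B @ K
        # Solve the Lyapunov equation Ac^T P + P Ac + Q + K^T R K = 0
        # via vectorization: (I (x) Ac^T + Ac^T (x) I) vec(P) = -vec(Q + K^T R K)
        M = np.kron(np.eye(n), Ac.T) + np.kron(Ac.T, np.eye(n))
        P = np.linalg.solve(M, -(Q + K.T @ R @ K).reshape(-1)).reshape(n, n)
        K = np.linalg.solve(R, B.T @ P)  # updated feedback gain
    return P, K
```

For the scalar test problem `A = 0`, `B = Q = R = 1`, the Riccati equation `-P^2 + 1 = 0` has solution `P = 1` with gain `K = 1`, which the iteration reproduces. The curse of dimensionality enters because, beyond this linear-quadratic case, `P` is replaced by a value function on the full state space.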

This work relies on collaborations with B. Azmi, S. Dolgov, D. Kalise, D. Vasquez, and D. Walter.

Aberrations caused by the varying speed of sound in tissue, mainly in the body wall composed of skin, fat, and muscle, reduce image quality in medical ultrasound imaging. The phenomenon can be compared to smearing a thin layer of vaseline on a camera lens: no matter how well you try to focus, the image remains a bit blurred. Trying to compensate for this effect is referred to as aberration correction. The phenomenon is very similar to what happens in optics when light encounters atmospheric turbulence, and aberration correction in optics and in medical ultrasound have a lot in common. In this presentation I will go through the background of aberrations in medical imaging, how image quality is reduced, and how this may be compensated for.

**Location**: RICAM, SP2 416-1


Two main concepts are commonly used in the literature on quasi-Monte Carlo (QMC) methods for finding integration node sets of high-quality QMC rules. On the one hand, there are lattice point sets and lattice sequences. The other class of commonly used QMC integration nodes is that of (digital) $(t, m, d)$-nets and $(t,d)$-sequences, where a special instance of $(t,m,d)$-nets, namely so-called polynomial lattice point sets, is also of interest.

In this thesis, we study, in particular, efficient algorithms for constructing rank-1 lattice rules and polynomial lattice rules. In both cases, given the number of points $N$ and the dimension $d$, the rules are entirely determined by the choice of a generating vector. A well-known algorithm to construct such rules is the component-by-component (CBC) algorithm, which can build both good lattice rules and good polynomial lattice rules in multivariate function spaces.

Motivated by earlier work of Korobov from 1963 and 1982, we study variants of CBC search algorithms for good lattice rules and polynomial lattice rules. We show that the resulting rules exhibit a convergence rate in weighted function spaces that can be arbitrarily close to the optimal rate. Moreover, contrary to most other algorithms, we often do not need to know the smoothness of our integrands in advance; the generating vector will still recover the convergence rate associated with the smoothness of the particular integrand and, under appropriate conditions on the weights, the error bounds can be stated without dependence on the dimension $d$. The search algorithms presented are a component-by-component digit-by-digit algorithm and a particular version of the component-by-component algorithm. The algorithms can be implemented in a fast manner, reducing the construction cost to $\mathcal{O}(dN \ln N)$ with $N$ the number of points and $d$ the dimension, and we illustrate our findings with extensive numerical results.
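The greedy principle behind CBC constructions can be sketched in a few lines. The following toy implementation (a naive $\mathcal{O}(dN^2)$ illustration, not the fast FFT-based algorithm of the thesis; the smoothness-2 Korobov kernel and product weights are assumptions for the example) chooses the components of the generating vector one at a time, each time minimizing a worst-case-error criterion while keeping the earlier components fixed.

```python
import numpy as np

def cbc_lattice(N, d, gamma):
    """Naive component-by-component search for a rank-1 lattice
    generating vector z, minimizing the squared worst-case error in a
    weighted Korobov space of smoothness 2 with product weights gamma."""
    k = np.arange(N)
    # Bernoulli-polynomial kernel omega(x) = 2*pi^2*(x^2 - x + 1/6)
    omega = lambda x: 2 * np.pi**2 * (x**2 - x + 1.0 / 6.0)
    prod = np.ones(N)  # running product over the components chosen so far
    z = []
    for j in range(d):
        best_z, best_err = None, np.inf
        for zj in range(1, N):          # candidates coprime with N
            if np.gcd(zj, N) != 1:
                continue
            vals = prod * (1 + gamma[j] * omega((k * zj % N) / N))
            err = vals.mean() - 1       # squared worst-case error criterion
            if err < best_err:
                best_err, best_z = err, zj
        z.append(best_z)
        prod = prod * (1 + gamma[j] * omega((k * z[j] % N) / N))
    return np.array(z)
```

The resulting QMC nodes are then $\{k\,\mathbf{z}/N\} \bmod 1$ for $k = 0, \dots, N-1$. The fast construction mentioned above replaces the inner loop over candidates by FFT-based matrix-vector products, which is what brings the cost down to $\mathcal{O}(dN \ln N)$.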

We study the application of tailored quasi-Monte Carlo (QMC) methods to a class of optimal control problems subject to partial differential equation (PDE) constraints under uncertainty: the state in our setting is the solution of an elliptic or parabolic PDE with a random thermal diffusion coefficient, steered by a linear control function. To account for the presence of uncertainty in the optimal control problem, the objective function is composed with a risk measure. We focus on two risk measures, both involving high-dimensional integrals with respect to the stochastic variables: the expected value and the (nonlinear) entropic risk measure. The high-dimensional integrals are computed numerically using specially designed QMC methods, and under moderate assumptions on the input random field, the error rate is shown to be essentially linear, independently of the stochastic dimension of the problem, and thereby superior to ordinary Monte Carlo methods.
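As a small illustration of how such high-dimensional expected values are approximated in practice (a generic sketch, not the tailored construction of the paper; the generating vector below is an arbitrary assumed choice), one can use a randomly shifted rank-1 lattice rule: averaging over independent random shifts gives an unbiased estimate of the integral together with a practical error indicator.

```python
import numpy as np

rng = np.random.default_rng(0)

def qmc_expect(f, z, N, shifts=8):
    """Randomly shifted rank-1 lattice rule estimating E[f(x)] for
    x uniform on [0,1]^d. z is the generating vector; averaging the
    rule over random shifts yields an unbiased QMC estimate."""
    d = len(z)
    k = np.arange(N)[:, None]
    base = (k * np.array(z)[None, :] / N) % 1.0   # lattice point set
    ests = []
    for _ in range(shifts):
        delta = rng.random(d)                     # random shift
        pts = (base + delta) % 1.0                # shifted lattice
        ests.append(np.mean(f(pts)))
    return np.mean(ests)

# toy integrand over [0,1]^4 with known expected value 1
f = lambda x: np.prod(1.0 + (x - 0.5), axis=1)
est = qmc_expect(f, z=[1, 21, 13, 7], N=64)
```

In the PDE-constrained setting, each evaluation of `f` would involve a PDE solve for one realization of the random coefficient, which is why the dimension-independent, essentially linear error rate of the tailored QMC rules matters.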


**Inverse Problems and Mathematical Imaging Group**

In this talk, I will outline several inverse problems and related issues that I have had occasion to deal with in the past.

First, I will show a complex-analytic approach to a Cauchy problem for the Laplace equation. Such a problem arises in crack and corrosion detection contexts. The proposed treatment allows incorporating additional measurements from the interior of the domain.

Then, I will show some results on the inverse obstacle problem for the wave equation from partial space-time data. A peculiarity here is that measurements are available only on part of the boundary and only for a finite time interval, while the geometry of the reconstructed obstacle is a priori unknown.

The third problem comes from a concrete lab set-up: reconstruction of the overall magnetisation of a paleomagnetic sample. Interestingly enough, this problem admits a closed-form asymptotic solution, with an asymptotic parameter related to the size of the measurement area.

Finally, if time permits, I will also show my recent results on the convolution integral equations which are naturally pertinent to some basic inverse problem settings.

**Inverse Problems and Mathematical Imaging**

It is well known that the traditional inverse problems of recovering objects from remote measurements are, mostly, highly unstable. To overcome this instability, it is advised in the engineering literature to create the missing contrast in the targets to image, Ω, by injecting micro-bubbles or nano-particles.

In this thesis, we follow this direction and propose an approach to analyse mathematically the effect of the injected agents. These are small-sized particles modelled with materials that enjoy a high contrast compared to those of the background, Ω, or a desired sign of the electromagnetic properties. The enhanced fields can be measured on the accessible boundary ∂Ω. The goal is then to extract the values of the needed coefficients from the measured enhanced fields. We state and provide a detailed analysis for two classes of such imaging modalities.

1. Acoustic imaging modality. Here, the contrast agents are micro-bubbles modelled by a mass density and bulk modulus with highly contrasting scales. These contrasting scales allow them to resonate at certain incident frequencies, such as the Minnaert resonance. The goal is then to reconstruct the acoustic coefficients, given by the mass density and bulk modulus, in the target Ω from the remotely measured ultrasound, which is enhanced by the presence of the bubbles' resonance and excited by incident frequencies close to the Minnaert resonance.

2. Photo-acoustic imaging modality. This technique consists of exciting the heterogeneous (i.e. damaged) tissue, Ω, with an electromagnetic wave at a given incident frequency, which in turn creates a pressure wave that we can measure on the accessible part ∂Ω. The goal is then to extract information about the optical properties (i.e. the permittivity and conductivity) of this tissue, Ω, from these measurements. As a first step, we consider the case of the 2D TM-model, where we use dielectric nano-particles (enjoying high contrasts in their electric permittivity). As a second step, we deal with the full Maxwell system, where we use plasmonic nano-particles (having permittivity of negative sign).

We show that the curves, or surfaces, given by the measured fields in terms of the used bands of frequencies have peaks only at the related resonant frequencies (i.e. Minnaert for acoustic fields, dielectric or plasmonic for electromagnetic fields). In particular, from the denominators of these fields we recover the resonant frequencies, and from their numerators we derive the fields generated before injecting the agents. This recovered information allows us to reconstruct the needed coefficients.

**Inverse Problems and Mathematical Imaging Group**

We present a new factorization method for recovering a conductivity inclusion in two dimensions from multi-static measurements. A conductivity inclusion induces a perturbation in the background potential, and the perturbation admits a multipole expansion whose coefficients are the so-called generalized polarization tensors (GPTs). We derive a factorization formula for the matrix composed of the GPTs in terms of the material parameters and the coefficients of the exterior conformal mapping associated with the inclusion. Using this formula, we derive accurate representations for the coefficients of the conformal mapping in terms of the GPTs. Our approach provides a non-iterative method for recovering the shape of a Lipschitz inclusion with arbitrary finite conductivity.

**Kirsten Thonicke, Potsdam Institute for Climate Impact Research**

Wildfires are a global phenomenon and an Earth system process, while human use of fire has created cultural landscapes over centuries. In many regions, vegetation is adapted to a specific fire regime, often established over centuries. In recent decades, global biomass burning has released between 1.5 and 4 PgC p.a., depending on climate conditions causing extreme heat and droughts. These are the challenges that global fire models, embedded in dynamic global vegetation models (DGVMs), have to meet in order to project climate-change impacts on fire and vegetation and to advance our understanding of the role of fire in tipping points. I will explain the concept of a process-based fire model embedded in a DGVM, its application to European and tropical fire regimes, and how parameter optimization techniques can help to improve both the interannual variability of fire and the distribution of vegetation types. Coupling the fire-enabled DGVM to an Earth system model makes it possible to investigate the role of fire in increasing the risk of crossing future Amazon tipping points.

**Symbolic Computation Group**

Sum-product theory revolves around the fundamental idea that additive and multiplicative structures cannot coexist in sets of numbers. The most famous manifestation of this principle is the Erdős-Szemerédi sum-product conjecture, which says that any set of integers must determine very many distinct sums or products. This is a very important problem in additive number theory which remains wide open.
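The principle is easy to see numerically (a toy illustration, not part of the talk): for an arithmetic progression, which is maximally additively structured, the sumset is as small as possible while the product set is much larger.

```python
def sumset(A):
    """All pairwise sums a + b with a, b in A."""
    return {a + b for a in A for b in A}

def productset(A):
    """All pairwise products a * b with a, b in A."""
    return {a * b for a in A for b in A}

A = list(range(1, 101))  # arithmetic progression {1, ..., 100}
n_sums = len(sumset(A))      # sums range over 2..200, so exactly 199 values
n_prods = len(productset(A)) # far more distinct products than sums
```

Here `n_sums` is 199, the minimum possible for a 100-element set, whereas the product set is an order of magnitude larger. The Erdős-Szemerédi conjecture asserts that no set of integers can keep both quantities small simultaneously.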

There are many other problems which have the same idea of additive/multiplicative disharmony at their core. These include

- transferring the Erdős-Szemerédi conjecture to other fields,
- focusing on extreme cases where one of the sets is particularly small,
- proving that sets defined by a combination of additive and multiplicative operations are always large,
- various beautiful geometric problems which turn out to be secretly about sums and products.

In this Habilitation defense, I will give a survey of this area of research and focus on some of what I consider to be my favourite contributions to the field.
