In this paper, we analyze an operator splitting scheme for the nonlinear heat equation in $\Omega\subset\mathbb{R}^d$ ($d\geq 1$): $\partial_t u = \Delta u + \lambda |u|^{p-1} u$ in $\Omega\times(0,\infty)$, $u=0$ on $\partial\Omega\times(0,\infty)$, $u ({\bf x},0) =\phi ({\bf x})$ in $\Omega$, where $\lambda\in\{-1,1\}$ and $\phi \in W^{1,q}(\Omega)\cap L^{\infty} (\Omega)$ with $2\leq p < \infty$ and $d(p-1)/2<q<\infty$. We establish the well-posedness of the approximation of $u$ in $L^r$-spaces ($r\geq q$) and, furthermore, derive a convergence rate of order $\mathcal{O}(\tau)$ for a time step $\tau>0$. Finally, we give numerical examples that confirm the theoretical result.
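The abstract does not pin down the particular splitting, but a standard instance is Lie splitting, which alternates a heat substep with the exact flow of the reaction ODE $u' = \lambda|u|^{p-1}u$. The Python sketch below illustrates this in one dimension; the backward Euler heat substep, the grid, and all parameters are illustrative assumptions, not necessarily the scheme analyzed in the paper.

```python
import numpy as np

def lie_splitting_step(u, tau, h, lam=-1.0, p=3.0):
    """One Lie splitting step for u_t = u_xx + lam*|u|^(p-1)*u on a
    uniform 1-D grid with homogeneous Dirichlet boundary conditions.
    Heat substep: backward Euler with the 3-point Laplacian.
    Reaction substep: exact flow of u' = lam*|u|^(p-1)*u, which is
    u(tau) = u0 * (1 - lam*(p-1)*tau*|u0|^(p-1))^(-1/(p-1)).
    """
    n = u.size
    L = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1)) / h**2
    u = np.linalg.solve(np.eye(n) - tau * L, u)   # heat substep
    denom = 1.0 - lam * (p - 1.0) * tau * np.abs(u) ** (p - 1.0)
    # For lam = +1 the ODE may blow up in finite time (denom <= 0);
    # with lam = -1 (absorbing case) denom > 1 always holds.
    return u * denom ** (-1.0 / (p - 1.0))

# Example: phi(x) = sin(pi*x) on Omega = (0, 1), absorbing nonlinearity.
h, tau = 1.0 / 64, 1.0e-3
x = np.arange(h, 1.0, h)          # interior grid points
u = np.sin(np.pi * x)
for _ in range(100):
    u = lie_splitting_step(u, tau, h)
```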
In this paper we prove convergence rates for time discretisation schemes for semi-linear stochastic evolution equations with additive or multiplicative Gaussian noise, where the leading operator $A$ is the generator of a strongly continuous semigroup $S$ on a Hilbert space $X$, and the focus is on non-parabolic problems. The main results are optimal bounds for the uniform strong error $$\mathrm{E}_{k}^{\infty} := \Big(\mathbb{E} \sup_{j\in \{0, \ldots, N_k\}} \|U(t_j) - U^j\|^p\Big)^{1/p},$$ where $p \in [2,\infty)$, $U$ is the mild solution, $U^j$ is obtained from a time discretisation scheme, $k$ is the step size, and $N_k = T/k$. The standard schemes, such as splitting/exponential Euler, implicit Euler, and Crank--Nicolson, are included as special cases. Under conditions on the nonlinearity and the noise we show:
- $\mathrm{E}_{k}^{\infty}\lesssim k \log(T/k)$ (linear equation, additive noise, general $S$);
- $\mathrm{E}_{k}^{\infty}\lesssim \sqrt{k} \log(T/k)$ (nonlinear equation, multiplicative noise, contractive $S$);
- $\mathrm{E}_{k}^{\infty}\lesssim k \log(T/k)$ (nonlinear wave equation, multiplicative noise).
The logarithmic factor can be removed if the splitting scheme is used with a (quasi-)contractive $S$. The obtained bounds coincide with the optimal bounds for SDEs. Most of the existing literature is concerned with bounds for the simpler pointwise strong error $$\mathrm{E}_k:=\bigg(\sup_{j\in \{0,\ldots,N_k\}}\mathbb{E} \|U(t_j) - U^{j}\|^p\bigg)^{1/p}.$$ Applications to Maxwell equations, Schr\"odinger equations, and wave equations are included. For these equations our results improve and reprove several existing results with a unified method.
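As a point of reference for the schemes covered, here is a minimal Python sketch of one common variant of the splitting/exponential Euler scheme, posed on $\mathbb{R}^d$ as a finite-dimensional stand-in for the Hilbert space $X$; the drift, diffusion, and parameters are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import expm

def exponential_euler(E, f, g, U0, k, N, rng):
    """One common variant of the splitting/exponential Euler scheme for
    dU = (A U + F(U)) dt + G(U) dW, here on R^d:
        U^{j+1} = e^{kA} (U^j + k F(U^j) + G(U^j) dW_j).
    E is the precomputed propagator e^{kA}."""
    U = np.empty((N + 1, U0.size))
    U[0] = U0
    for j in range(N):
        dW = rng.normal(0.0, np.sqrt(k), size=U0.size)  # Brownian increment
        U[j + 1] = E @ (U[j] + k * f(U[j]) + g(U[j]) * dW)
    return U

# Example: skew-adjoint A generating a unitary group (not analytic -- the
# non-parabolic situation emphasized above), cubic drift, additive noise.
k, T = 1.0e-3, 1.0
A = np.array([[0.0, -1.0], [1.0, 0.0]])
rng = np.random.default_rng(0)
path = exponential_euler(expm(k * A), lambda u: -u**3, lambda u: 0.1,
                         np.array([1.0, 0.0]), k, int(T / k), rng)
```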
New emerging technologies powered by Artificial Intelligence (AI) have the potential to disruptively transform our societies for the better. In particular, data-driven learning approaches (i.e., Machine Learning (ML)) have been a true revolution in the advancement of multiple technologies across various application domains. At the same time, however, there is growing concern about certain intrinsic characteristics of these methodologies that carry potential risks to both safety and fundamental rights. Although there are mechanisms in the adoption process to minimize these risks (e.g., safety regulations), they do not exclude the possibility of harm, and if harm occurs, victims should be able to seek compensation. Liability regimes will therefore play a key role in ensuring basic protection for victims using or interacting with these systems. However, the same characteristics that make AI systems inherently risky, such as lack of causality, opacity, unpredictability, or their self- and continuous-learning capabilities, may make it considerably difficult to prove causation. This paper presents three case studies, together with the methodology used to develop them, that illustrate these difficulties. Specifically, we address the cases of cleaning robots, delivery drones, and robots in education. The outcome of the proposed analysis suggests the need to revise liability regimes to alleviate the burden of proof on victims in cases involving AI technologies.
We study the problem of finding $K$ collision pairs in a random function $f : [N] \rightarrow [N]$ by using a quantum computer. We prove that the number of queries to the function in the quantum random oracle model must increase significantly when the size of the available memory is limited. Namely, we demonstrate that any algorithm using $S$ qubits of memory must perform a number $T$ of queries that satisfies the tradeoff $T^3 S \geq \Omega(K^3 N)$. Classically, the same question was settled only recently by Dinur [Eurocrypt'20], who showed that the Parallel Collision Search algorithm of van Oorschot and Wiener achieves the optimal time-space tradeoff of $T^2 S = \Theta(K^2 N)$. Our result limits the extent to which quantum computing may decrease this tradeoff. Our method is based on a novel application of Zhandry's recording query technique [Crypto'19] for proving lower bounds in the exponentially small success probability regime. As a second application, we give a simpler proof of the time-space tradeoff $T^2 S \geq \Omega(N^3)$ for sorting $N$ numbers on a quantum computer, which was first obtained by Klauck, \v{S}palek and de Wolf [K\v{S}W07].
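To read the quantum tradeoff concretely, one can solve the stated bound for $T$; the display below is pure algebra on that bound, with the polylogarithmic memory budget chosen purely as an illustration.

```latex
% Rearranging T^3 S \geq \Omega(K^3 N) for the query count T:
T \;\geq\; \Omega\!\left(\Big(\tfrac{K^3 N}{S}\Big)^{1/3}\right)
  \;=\; \Omega\!\left(K \Big(\tfrac{N}{S}\Big)^{1/3}\right).
% Example: with S = \mathrm{polylog}(N) qubits of memory, any algorithm
% must make T \geq \widetilde{\Omega}\big(K N^{1/3}\big) quantum queries.
```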
Given a $k\times n$ integer primitive matrix $\mathbf{A}$ (i.e., a matrix that can be extended to an $n\times n$ unimodular matrix over the integers) with the maximal absolute value of entries $\|\mathbf{A}\|$ bounded from above by an integer $\lambda$, we study the probability that the $m\times n$ matrix obtained from $\mathbf{A}$ by appending $m-k$ further row vectors of dimension $n$, with entries chosen randomly and independently from the uniform distribution over $\{0, 1,\ldots, \lambda-1\}$, is still primitive. We present a complete and rigorous proof of a lower bound on this probability, which is at least a constant for fixed $m$ in the range $[k+1, n-4]$. As an application, we prove that there exists a fast Las Vegas algorithm that completes a $k\times n$ primitive matrix $\mathbf{A}$ to an $n\times n$ unimodular matrix using an expected number of $\tilde{O}(n^{\omega}\log \|\mathbf{A}\|)$ bit operations, where $\tilde{O}$ denotes big-$O$ with logarithmic factors omitted and $\omega$ is the exponent of matrix multiplication.
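For intuition, a $k\times n$ integer matrix is primitive exactly when all of its invariant factors are $1$, i.e., its Smith normal form is $(I_k \mid 0)$. The sketch below (using sympy, with a naive rejection-sampling loop for illustration rather than the paper's $\tilde{O}(n^{\omega}\log \|\mathbf{A}\|)$ algorithm) appends random rows and retries until the extension is primitive.

```python
import numpy as np
from sympy import Matrix, ZZ
from sympy.matrices.normalforms import smith_normal_form

def is_primitive(A):
    """A k x n integer matrix (k <= n) is primitive iff all of its
    invariant factors are units, i.e. its Smith normal form is (I_k | 0)."""
    S = smith_normal_form(A, domain=ZZ)
    return all(abs(S[i, i]) == 1 for i in range(A.rows))

def extend_randomly(A, m, lam, rng):
    """Append m - k rows drawn uniformly from {0, ..., lam-1}^n and retry
    until the extended matrix is primitive (Las Vegas style: by the lower
    bound above, each attempt succeeds with at least constant probability
    for m in [k+1, n-4])."""
    while True:
        rows = rng.integers(0, lam, size=(m - A.rows, A.cols))
        B = A.col_join(Matrix(rows.tolist()))
        if is_primitive(B):
            return B

rng = np.random.default_rng(1)
# A 2 x 9 primitive matrix (it starts with I_2, hence is primitive).
A = Matrix([[1, 0, 2, 3, 5, 7, 1, 4, 6],
            [0, 1, 4, 1, 2, 8, 3, 0, 5]])
B = extend_randomly(A, 5, 10, rng)   # m = 5 lies in [k+1, n-4] = [3, 5]
```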
A Coefficient Inverse Problem for the radiative transport equation is considered. A globally convergent numerical method, the so-called convexification, is developed. For the first time, the viscosity solution is considered for a boundary value problem for the resulting system of two coupled partial differential equations. A Lipschitz stability estimate is proved for this boundary value problem using a Carleman estimate for the Laplace operator. Next, the global convergence analysis is carried out via that Carleman estimate. Results of numerical experiments demonstrate the high computational efficiency of this approach.
The matrix sensing problem is an important low-rank optimization problem that has found a wide range of applications, such as matrix completion, phase synchronization/retrieval, robust PCA, and power system state estimation. In this work, we focus on the general matrix sensing problem with linear measurements that are corrupted by random noise. We investigate the scenario where the search rank $r$ is equal to the true rank $r^*$ of the unknown ground truth (the exactly parametrized case), as well as the scenario where $r$ is greater than $r^*$ (the overparametrized case). We quantify the role of the restricted isometry property (RIP) in shaping the landscape of the non-convex factorized formulation and in enabling the success of local search algorithms. First, we develop a global guarantee on the maximum distance between an arbitrary local minimizer of the non-convex problem and the ground truth under the assumption that the RIP constant is smaller than $1/(1+\sqrt{r^*/r})$. We then present a local guarantee for problems with an arbitrary RIP constant, which states that any local minimizer is either considerably close to the ground truth or far away from it. More importantly, we prove that this noisy, overparametrized problem exhibits the strict saddle property, which leads to the global convergence of the perturbed gradient descent algorithm in polynomial time. The results of this work provide a comprehensive understanding of the geometric landscape of the matrix sensing problem in the noisy and overparametrized regime.
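To make the setting concrete, the following sketch runs perturbed gradient descent on the factorized objective $f(X)=\frac12\sum_i(\langle A_i, XX^{\top}\rangle - y_i)^2$ for a synthetic noisy, overparametrized instance; the step size, perturbation rule, and problem sizes are illustrative assumptions, not the constants from the paper's analysis.

```python
import numpy as np

def perturbed_gd(As, y, n, r, eta, iters, pert=1e-3, tol=1e-8, rng=None):
    """Perturbed gradient descent on the non-convex factorized objective
        f(X) = 1/2 * sum_i (<A_i, X X^T> - y_i)^2,   X in R^{n x r}.
    When the gradient nearly vanishes, a small random perturbation is
    injected, which suffices to escape strict saddle points."""
    rng = rng or np.random.default_rng()
    X = rng.normal(size=(n, r)) / np.sqrt(n)
    for _ in range(iters):
        resid = np.array([np.tensordot(A, X @ X.T) - yi
                          for A, yi in zip(As, y)])
        grad = sum(ri * (A + A.T) @ X for ri, A in zip(resid, As))
        if np.linalg.norm(grad) < tol:
            X = X + pert * rng.normal(size=X.shape)   # escape a saddle
        else:
            X = X - eta * grad
    return X

# Synthetic instance: Gaussian measurements (RIP holds w.h.p.), noisy
# observations, and an overparametrized search rank r > r_star.
rng = np.random.default_rng(0)
n, r_star, r, m = 10, 1, 2, 200
Z = rng.normal(size=(n, r_star))
M = Z @ Z.T                                       # ground truth, rank r_star
As = [rng.normal(size=(n, n)) / np.sqrt(m) for _ in range(m)]
y = np.array([np.tensordot(A, M) for A in As]) + 0.01 * rng.normal(size=m)
X = perturbed_gd(As, y, n, r, eta=0.01, iters=2000, rng=rng)
print(np.linalg.norm(X @ X.T - M, "fro"))         # should be small
```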
In this paper we study multi-robot path planning for persistent monitoring tasks. We consider the case where robots have a limited battery capacity with a discharge time $D$. We represent the areas to be monitored as the vertices of a weighted graph. For each vertex, there is a constraint on the maximum allowable time between robot visits, called the latency. The objective is to find the minimum number of robots that can satisfy these latency constraints while also ensuring that the robots periodically charge at a recharging depot. The decision version of this problem is known to be PSPACE-complete. We present an $O(\frac{\log D}{\log \log D}\log \rho)$-approximation algorithm for the problem, where $\rho$ is the ratio of the maximum and the minimum latency constraints. We also present an orienteering-based heuristic to solve the problem and show empirically that it typically provides higher-quality solutions than the approximation algorithm. We extend our results to provide an algorithm for the problem of minimizing the maximum weighted latency given a fixed number of robots. We evaluate our algorithms on large problem instances in a patrolling scenario and in a wildfire monitoring application. We also compare the algorithms with an existing solver on benchmark instances.
A convergent numerical method for $\alpha$-dissipative solutions of the Hunter--Saxton equation is derived. The method is based on applying a tailor-made projection operator to the initial data, and then solving exactly using the generalized method of characteristics. The projection step is the only step that introduces any approximation error. It is therefore crucial that its design ensures not only a good approximation of the initial data, but also that errors due to the energy dissipation at later times remain small. Furthermore, it is shown that the main quantity of interest, the wave profile, converges in $L^{\infty}$ for all $t \geq 0$, while a subsequence of the energy density converges weakly for almost every time.
We propose a new randomized method for solving systems of nonlinear equations, which can find sparse solutions or solutions under certain simple constraints. The scheme only takes gradients of component functions and uses Bregman projections onto the solution space of a Newton equation. In the special case of Euclidean projections, the method is known as the nonlinear Kaczmarz method. Furthermore, if the component functions are nonnegative, we are in the setting of optimization under the interpolation assumption and the method reduces to SGD with the recently proposed stochastic Polyak step size. For general Bregman projections, our method is a stochastic mirror descent with a novel adaptive step size. We prove that in the convex setting each iteration of our method results in a smaller Bregman distance to exact solutions than the standard Polyak step. Our generalization to Bregman projections comes at the price that a convex one-dimensional optimization problem needs to be solved in each iteration, which can typically be done with globalized Newton iterations. Convergence is proved in two classical settings of nonlinearity: for convex nonnegative functions and locally for functions that fulfill the tangential cone condition. Finally, we show examples in which the proposed method outperforms similar methods with the same memory requirements.
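In the Euclidean special case the update has a closed form: projecting onto the linearization $\{x : f_i(x_k) + \langle \nabla f_i(x_k), x - x_k\rangle = 0\}$ gives an SGD step with the stochastic Polyak step size. Below is a minimal Python sketch of this nonlinear Kaczmarz special case on a toy system (the general Bregman version would additionally solve the one-dimensional convex subproblem mentioned above); the test problem is an illustrative assumption.

```python
import numpy as np

def nonlinear_kaczmarz(fs, grads, x0, iters, rng):
    """Nonlinear Kaczmarz (the Euclidean special case of the Bregman
    projection method): at each step pick a random component f_i and
    project onto its linearization at the current iterate, i.e. take
    the step x <- x - (f_i(x) / ||grad f_i(x)||^2) * grad f_i(x)."""
    x = x0.astype(float)
    n = len(fs)
    for _ in range(iters):
        i = rng.integers(n)
        g = grads[i](x)
        gg = g @ g
        if gg > 0:                      # skip vanishing gradients
            x = x - (fs[i](x) / gg) * g
    return x

# Toy system: intersection of a circle and a line in R^2,
# f_1(x) = x_1^2 + x_2^2 - 2 = 0,  f_2(x) = x_1 - x_2 = 0.
fs = [lambda x: x[0]**2 + x[1]**2 - 2.0, lambda x: x[0] - x[1]]
grads = [lambda x: np.array([2.0 * x[0], 2.0 * x[1]]),
         lambda x: np.array([1.0, -1.0])]
rng = np.random.default_rng(0)
x = nonlinear_kaczmarz(fs, grads, np.array([2.0, 0.5]), 200, rng)
print(x)   # approaches a solution, here (1, 1)
```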
The determinant lower bound of Lov\'asz, Spencer, and Vesztergombi [European Journal of Combinatorics, 1986] is a powerful general way to prove lower bounds on the hereditary discrepancy of a set system. In their paper, Lov\'asz, Spencer, and Vesztergombi asked if hereditary discrepancy can also be bounded from above by a function of the determinant lower bound. This was answered in the negative by Hoffman, and the largest known multiplicative gap between the two quantities for a set system of $m$ subsets of a universe of size $n$ is on the order of $\max\{\log n, \sqrt{\log m}\}$. On the other hand, building on work of Matou\v{s}ek [Proceedings of the AMS, 2013], Jiang and Reis [SOSA, 2022] recently showed that this gap is always bounded up to constants by $\sqrt{\log(m)\log(n)}$. This is tight when $m$ is polynomial in $n$, but leaves open what happens for large $m$. We show that the bound of Jiang and Reis is tight for nearly the entire range of $m$. Our proof relies on a technique of amplifying discrepancy via taking Kronecker products, and on discrepancy lower bounds for a set system derived from the discrete Haar basis.