We consider space-time tracking type distributed optimal control problems for the wave equation in the space-time domain $Q:= \Omega \times (0,T) \subset {\mathbb{R}}^{n+1}$, where the control is assumed to be in the energy space $[H_{0;,0}^{1,1}(Q)]^*$, rather than in the more common choice $L^2(Q)$. While the latter ensures a unique state in the Sobolev space $H^{1,1}_{0;0,}(Q)$, it does not define a solution isomorphism. Hence we use an appropriate state space $X$ such that the wave operator becomes an isomorphism from $X$ onto $[H_{0;,0}^{1,1}(Q)]^*$. Using space-time finite element spaces of piecewise linear continuous basis functions on completely unstructured but shape-regular simplicial meshes, we derive a priori estimates for the error $\|\widetilde{u}_{\varrho h}-\overline{u}\|_{L^2(Q)}$ between the computed space-time finite element solution $\widetilde{u}_{\varrho h}$ and the target function $\overline{u}$ with respect to the regularization parameter $\varrho$ and the space-time finite element mesh size $h$, depending on the regularity of the desired state $\overline{u}$. These estimates lead to the optimal choice $\varrho = h^2$, which either defines the regularization parameter $\varrho$ for a given space-time finite element mesh size $h$, or determines the required mesh size $h$ when $\varrho$ is a given constant representing the cost of the control. The theoretical results are supported by numerical examples with targets of different regularities, including discontinuous targets. Furthermore, an adaptive space-time finite element scheme is proposed and numerically analyzed.
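To illustrate where the choice $\varrho = h^2$ comes from, consider a hedged model of the error (the split into a regularization and a discretization contribution below is assumed for illustration only; the precise norms, exponents, and constants are those of the a priori analysis): if, for a sufficiently regular target,
\[
  \|\widetilde{u}_{\varrho h}-\overline{u}\|_{L^2(Q)} \;\leq\; c_1\,\varrho^{1/2} \;+\; c_2\,\frac{h^2}{\varrho^{1/2}},
\]
then minimizing the right-hand side over $\varrho$ yields $\varrho \simeq h^2$ and a total error of order $h$.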
Convex function constrained optimization has received growing research interest lately. For a special convex problem which has strongly convex function constraints, we develop a new accelerated primal-dual first-order method that obtains an $\mathcal{O}(1/\sqrt{\varepsilon})$ complexity bound, improving on the $\mathcal{O}(1/\varepsilon)$ result for the state-of-the-art first-order methods. The key ingredient in our development is a set of novel techniques for progressively estimating the strong convexity of the Lagrangian function, which enables adaptive step-size selection and faster convergence. In addition, we show that the complexity can be further improved in its dependence on a problem parameter, via a restart scheme that calls the accelerated method repeatedly. As an application, we consider sparsity-inducing constrained optimization, which has a separable convex objective and a strongly convex loss constraint. In addition to achieving fast convergence, we show that the restarted method can effectively identify the sparsity pattern (active set) of the optimal solution in finitely many steps. To the best of our knowledge, this is the first active-set identification result for sparsity-inducing constrained optimization.
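The restart idea can be illustrated on a toy problem. The sketch below is generic: it uses plain Nesterov acceleration in place of the paper's accelerated primal-dual method, and the strongly convex quadratic objective is invented purely for demonstration.

```python
# Minimal, runnable illustration of a restart scheme on a toy strongly
# convex quadratic, with Nesterov's accelerated gradient as the inner
# method. This is a generic sketch, not the paper's primal-dual algorithm.
import numpy as np

def accelerated_gradient(grad, x0, L, iters):
    """Plain Nesterov acceleration for an L-smooth convex function."""
    x, y, t = x0.copy(), x0.copy(), 1.0
    for _ in range(iters):
        x_new = y - grad(y) / L
        t_new = (1 + np.sqrt(1 + 4 * t**2)) / 2
        y = x_new + ((t - 1) / t_new) * (x_new - x)
        x, t = x_new, t_new
    return x

def restarted(grad, x0, L, rounds, inner_iters):
    """Restart the accelerated method, warm-starting from the last iterate;
    under strong convexity this gives linear error decay across rounds."""
    x = x0
    for _ in range(rounds):
        x = accelerated_gradient(grad, x, L, inner_iters)
    return x

rng = np.random.default_rng(0)
A = np.diag(np.linspace(0.1, 10.0, 50))   # strongly convex quadratic
b = rng.standard_normal(50)
grad = lambda x: A @ x - b
x_star = np.linalg.solve(A, b)
x_hat = restarted(grad, np.zeros(50), L=10.0, rounds=5, inner_iters=100)
print(np.linalg.norm(x_hat - x_star))     # small residual error
```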
The development of surrogate models to study uncertainties in hydrologic systems requires significant effort in the development of sampling strategies and forward model simulations. Furthermore, in applications where prediction time is critical, such as the prediction of hurricane storm surge, predictions of system response and uncertainties may be required within short time frames. Here, we develop an efficient stochastic shallow water model to address these issues. To discretize the physical and probability spaces, we use a Stochastic Galerkin method and an Incremental Pressure Correction scheme to advance the solution in time. To overcome discrete stability issues, we propose cross-mode stabilization methods, which extend existing stabilization methods to the probability space by adding stabilization terms to every stochastic mode in a mode-coupled way. We extensively verify the developed method on both idealized shallow water test cases and hindcasts of past hurricanes. We subsequently use the developed and verified method to perform a comprehensive statistical analysis of the established shallow water surrogate models. Finally, we propose a predictor for hurricane storm surge under uncertain wind drag coefficients and demonstrate its effectiveness for Hurricanes Ike and Harvey.
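To make the mode coupling concrete, here is a minimal stochastic Galerkin sketch on a scalar toy equation with a single Gaussian random input; the shallow water system and the cross-mode stabilization terms of the paper are far richer, and everything below is illustrative only.

```python
# Minimal stochastic Galerkin sketch for the scalar toy problem
#   du/dt = -(a0 + a1*xi) * u,   xi ~ N(0,1),
# using probabilists' Hermite chaos He_k, which satisfies the recurrence
#   xi*He_k = He_{k+1} + k*He_{k-1}.
# Galerkin projection couples neighbouring stochastic modes:
#   du_j/dt = -(a0*u_j + a1*(u_{j-1} + (j+1)*u_{j+1})).
import numpy as np

P = 12                              # chaos truncation order
a0, a1, dt, T = 1.0, 0.5, 1e-4, 1.0
u = np.zeros(P + 1); u[0] = 1.0     # deterministic initial condition u(0) = 1

for _ in range(int(T / dt)):
    du = np.zeros_like(u)
    for j in range(P + 1):
        lower = u[j - 1] if j > 0 else 0.0            # coupling from He_{j-1}
        upper = (j + 1) * u[j + 1] if j < P else 0.0  # coupling from He_{j+1}
        du[j] = -(a0 * u[j] + a1 * (lower + upper))
    u += dt * du                    # explicit Euler in time

exact_mean = np.exp(-a0 * T + 0.5 * a1**2 * T**2)
print(u[0], exact_mean)             # mode 0 approximates the mean
```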
Latent factor model estimation typically relies either on using domain knowledge to manually pick several observed covariates as factor proxies, or on purely conducting multivariate analysis such as principal component analysis. However, the former approach may suffer from bias, while the latter cannot incorporate additional information. We propose to bridge these two approaches while allowing the number of factor proxies to diverge, making latent factor model estimation robust, flexible, and statistically more accurate. As a bonus, the number of factors is also allowed to grow. At the heart of our method is a penalized reduced rank regression that combines the two sources of information. To further deal with heavy-tailed data, a computationally attractive penalized robust reduced rank regression method is proposed. We establish faster rates of convergence compared with the benchmark. Extensive simulations and real examples are used to illustrate the advantages.
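As a concrete, hedged illustration of penalized reduced rank regression, the sketch below solves a nuclear-norm penalized least-squares problem by proximal gradient with singular value soft-thresholding; the paper's actual estimator, penalty, and robustification may differ, and the simulated data are invented for demonstration.

```python
# Hedged sketch: nuclear-norm penalized reduced rank regression solved by
# proximal gradient with singular value soft-thresholding, illustrating how
# proxies X and responses Y are combined through a low-rank coefficient B.
import numpy as np

def svt(M, tau):
    """Singular value soft-thresholding: prox of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def penalized_rrr(X, Y, lam, iters=500):
    """min_B 0.5*||Y - X B||_F^2 + lam*||B||_*  via proximal gradient."""
    L = np.linalg.norm(X, 2) ** 2      # Lipschitz constant of the gradient
    B = np.zeros((X.shape[1], Y.shape[1]))
    for _ in range(iters):
        grad = X.T @ (X @ B - Y)
        B = svt(B - grad / L, lam / L)
    return B

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 10))
B_true = rng.standard_normal((10, 2)) @ rng.standard_normal((2, 8))  # rank 2
Y = X @ B_true + 0.1 * rng.standard_normal((200, 8))
B_hat = penalized_rrr(X, Y, lam=5.0)
print(np.linalg.matrix_rank(B_hat, tol=1e-6))  # low rank is recovered
```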
In this paper, we study an online sampling problem for the Wiener process. The goal is to minimize the mean squared error (MSE) of the remote estimator under a sampling frequency constraint when the transmission delay distribution is unknown. The sampling problem is reformulated as an optimal stopping problem, and we propose an online sampling algorithm that adaptively learns the optimal stopping threshold through stochastic approximation. We prove that the cumulative MSE regret grows at rate $\mathcal{O}(\ln k)$, where $k$ is the number of samples. Through Le Cam's two-point method, we show that the worst-case cumulative MSE regret of any online sampling algorithm is lower bounded by $\Omega(\ln k)$. Hence, the proposed online sampling algorithm is minimax order-optimal. Finally, we validate the performance of the proposed algorithm via numerical simulations.
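The stochastic-approximation mechanism can be sketched as follows. This toy version tunes a threshold $\beta$ so that the average inter-sample time of a simulated Wiener process matches an assumed target value; the actual algorithm adapts the threshold to minimize the MSE under the frequency constraint, which this sketch does not reproduce.

```python
# Hedged sketch of threshold learning via stochastic approximation
# (Robbins-Monro). A sample is taken when |W| first exceeds beta; the
# threshold is then nudged so the mean inter-sample time matches
# `target_gap` (an assumed constraint, chosen for illustration).
import numpy as np

rng = np.random.default_rng(2)
dt, beta, target_gap = 1e-3, 1.0, 0.5

for k in range(1, 2001):
    # Simulate the Wiener process until |W| first exceeds the threshold.
    w, gap = 0.0, 0.0
    while abs(w) < beta:
        w += np.sqrt(dt) * rng.standard_normal()
        gap += dt
    # Robbins-Monro update with diminishing step size 1/k:
    # raise beta if we sampled too soon, lower it if we waited too long.
    beta += (target_gap - gap) / k

# For threshold sampling, E[gap] = beta^2, so the fixed point is
# beta = sqrt(target_gap).
print(beta, np.sqrt(target_gap))
```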
The first globally convergent numerical method for a Coefficient Inverse Problem (CIP) for the Riemannian Radiative Transfer Equation (RRTE) is constructed. This is a version of the so-called "convexification" principle, which has been pursued by this research group for a number of years for some other CIPs for PDEs; those PDEs are significantly different from the RRTE. The key element of the convexification is the presence of a Carleman Weight Function (CWF) in the numerical scheme. The CWF is the function that appears as the weight in the Carleman estimate for the corresponding PDE operator. Convergence analysis is presented, along with the results of numerical experiments, which confirm the theory. The RRTE governs the propagation of photons in a diffuse medium in the case when they propagate along geodesic lines between collisions. The geodesic lines are generated by the spatially variable dielectric constant of the medium.
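In generic terms (the specific weight and operator for the RRTE are constructed in the paper; the display below only recalls the common shape of a convexification functional), one minimizes a weighted Tikhonov-like functional
\[
  J_{\lambda,\beta}(v) \;=\; \int |A(v)|^{2}\, e^{2\lambda \psi}\,dx \;+\; \beta\,\|v\|^{2},
\]
where $A$ is the nonlinear operator derived from the CIP, $e^{\lambda\psi}$ is the CWF, and $\beta > 0$ is a regularization parameter. The Carleman estimate guarantees strict convexity of $J_{\lambda,\beta}$ on a bounded set once $\lambda$ is sufficiently large, which is the source of the global convergence.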
The purpose of this paper is to introduce a new numerical method to solve multi-marginal optimal transport problems with pairwise interaction costs. The complexity of multi-marginal optimal transport generally scales exponentially in the number of marginals $m$. We introduce a one-parameter family of cost functions, parametrized by $\epsilon$, that interpolates between the original cost and a special cost function for which the problem's complexity scales linearly in $m$. We then show that the solution to the original problem can be recovered by solving an ordinary differential equation in the parameter $\epsilon$, whose initial condition corresponds to the solution for the special cost function mentioned above. Finally, we present some simulations, using both explicit Euler and explicit higher-order Runge-Kutta schemes to compute solutions to the ODE, and, as a result, to the multi-marginal optimal transport problem.
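A generic sketch of the continuation idea: start from the solution $u(0)$ of the easy problem and integrate $du/d\epsilon = f(\epsilon, u)$ up to $\epsilon = 1$ to recover the solution of the original problem. The right-hand side `f` below is a placeholder with a known answer, used only to check the integrator; the paper derives the actual ODE from the optimal transport optimality conditions.

```python
# Classical fourth-order Runge-Kutta continuation from eps = 0 to eps = 1.
import numpy as np

def rk4_continuation(f, u0, n_steps):
    """Integrate du/d(eps) = f(eps, u) from eps = 0 to eps = 1."""
    h, u, eps = 1.0 / n_steps, np.asarray(u0, float), 0.0
    for _ in range(n_steps):
        k1 = f(eps, u)
        k2 = f(eps + h / 2, u + h / 2 * k1)
        k3 = f(eps + h / 2, u + h / 2 * k2)
        k4 = f(eps + h, u + h * k3)
        u = u + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
        eps += h
    return u

# Toy placeholder RHS with known solution u(eps) = u0 * exp(eps).
u1 = rk4_continuation(lambda e, u: u, u0=[1.0], n_steps=50)
print(u1, np.e)
```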
The sheaf-function correspondence identifies the group of constructible functions on a real analytic manifold $M$ with the Grothendieck group of constructible sheaves on $M$. When $M$ is a finite-dimensional real vector space, Kashiwara-Schapira have recently introduced the convolution distance between sheaves of $k$-vector spaces on $M$. In this paper, we characterize distances on the group of constructible functions on a finite-dimensional real vector space that can be controlled by the convolution distance through the sheaf-function correspondence. Our main result asserts that such distances are almost trivial: they vanish as soon as two constructible functions have the same Euler integral. We formulate the consequences of our result for Topological Data Analysis: there cannot exist non-trivial additive invariants of persistence modules that are continuous for the interleaving distance.
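For reference, the Euler integral appearing in the statement is the additive extension of the Euler characteristic: writing a constructible function as a finite sum $\varphi = \sum_i c_i\,\mathbf{1}_{X_i}$ with integer coefficients $c_i$ and constructible sets $X_i$, one sets
\[
  \int_M \varphi \, d\chi \;=\; \sum_i c_i\, \chi(X_i),
\]
where $\chi$ denotes the Euler characteristic with compact support; this is well defined precisely because $\chi$ is additive on constructible sets.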
With continuous outcomes, the average causal effect is typically defined using a contrast of expected potential outcomes. However, in the presence of skewed outcome data, the expectation may no longer be meaningful, and the definition of the causal effect should be considered more closely. When faced with this challenge in practice, the typical approach is to either "ignore or transform" - ignore the skewness in the data entirely or transform the outcome to obtain a more symmetric distribution for which the expectation is interpretable as the central value. However, neither approach is entirely satisfactory. An appealing alternative is to define the causal effect using a contrast of median potential outcomes, although there is limited discussion and availability of confounding-adjustment methods to estimate this parameter. In this study, we described and compared confounding-adjustment methods to estimate the causal difference in medians, addressing this gap. The methods considered were multivariable quantile regression, an inverse probability weighted (IPW) estimator, weighted quantile regression and two little-known implementations of g-computation. The methods were evaluated within a simulation study under varying degrees of skewness in the outcome and applied to an empirical study. Results indicated that the IPW estimator, weighted quantile regression and g-computation implementations minimised bias across all simulation settings, provided the corresponding model was correctly specified, with g-computation additionally minimising the variance of the estimates. The methods presented within this paper provide appealing alternatives to the common "ignore or transform" approach, enhancing our capability to obtain meaningful causal effect estimates with skewed outcome data.
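A hedged sketch of the IPW estimator for the causal difference in medians: fit a propensity score model, then compare weighted medians of the outcome between treatment groups. The simulated data, variable names, and model choices below are illustrative only and are not taken from the paper's simulation study.

```python
# IPW estimate of the causal difference in medians with a skewed outcome.
import numpy as np
from sklearn.linear_model import LogisticRegression

def weighted_median(y, w):
    """Median of y under normalized weights w."""
    order = np.argsort(y)
    cw = np.cumsum(w[order]) / np.sum(w)
    return y[order][np.searchsorted(cw, 0.5)]

def ipw_median_difference(X, a, y):
    """Difference in medians of potential outcomes, adjusted via IPW."""
    ps = LogisticRegression().fit(X, a).predict_proba(X)[:, 1]
    w1, w0 = a / ps, (1 - a) / (1 - ps)   # inverse probability weights
    return weighted_median(y[a == 1], w1[a == 1]) - \
           weighted_median(y[a == 0], w0[a == 0])

rng = np.random.default_rng(3)
X = rng.standard_normal((2000, 2))                          # confounders
a = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))             # treatment
y = np.exp(0.5 * X[:, 0] + a + rng.standard_normal(2000))   # skewed outcome
print(ipw_median_difference(X, a, y))
```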
In this paper we present a new Eulerian finite element method for the discretization of scalar partial differential equations on evolving surfaces. In this method we use the restriction of standard space-time finite element spaces on a fixed bulk mesh to the space-time surface. The structure of the method is such that it naturally fits a level set representation of the evolving surface. The higher-order version of the method is based on a space-time variant of a mesh deformation developed in the literature for stationary surfaces. The discretization method that we present is of (optimal) higher-order accuracy for smoothly varying surfaces with sufficiently smooth solutions. Without any modifications, the method can be used for the discretization of problems with topological singularities. A numerical study demonstrates both the higher-order accuracy for smooth cases and the robustness with respect to topological singularities.
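In compact form, and with notation assumed here for illustration, the construction restricts a standard finite element space $V_h$ on a fixed space-time bulk mesh of $\Omega \times (0,T)$ to the space-time manifold swept out by the evolving surface $\Gamma(t)$,
\[
  \mathcal S \;:=\; \bigcup_{t\in(0,T)} \Gamma(t)\times\{t\},
  \qquad
  V_h^{\mathcal S} \;:=\; \{\, v|_{\mathcal S} \,:\, v \in V_h \,\},
\]
so that no surface-fitted remeshing is needed as $\Gamma(t)$ evolves, and the zero level set of a discrete level set function can be used to represent $\mathcal S$.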
In large-scale systems there are fundamental challenges when centralised techniques are used for task allocation: the number of interactions is limited by resource constraints such as computation, storage, and network communication. We can increase scalability by implementing the system as a distributed task-allocation system, sharing tasks across many agents. However, this also increases the resource cost of communication and synchronisation, and is itself difficult to scale. In this paper we present four algorithms to address these problems. In combination, these algorithms enable each agent to improve its task allocation strategy through reinforcement learning, while varying how much it explores the system in response to how optimal it believes its current strategy to be, given its past experience. We focus on distributed agent systems where the agents' behaviours are constrained by resource usage limits, restricting agents to local rather than system-wide knowledge. We evaluate these algorithms in a simulated environment where agents are given a task composed of multiple subtasks that must be allocated to other agents with differing capabilities, which then carry out those tasks. We also simulate real-life system effects such as network instability. Our solution is shown to solve the task allocation problem to within 6.7% of the theoretical optimum within the system configurations considered. It provides a 5x improvement in performance recovery over approaches with no knowledge retention when system connectivity is impacted, and it is tested on systems of up to 100 agents with less than a 9% impact on the algorithms' performance.
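A hedged sketch of the adaptive-exploration idea: an agent picks task allocations epsilon-greedily and adjusts its exploration rate according to how well its current strategy matches observed rewards. The update rules and the confidence measure below are invented for illustration, not the paper's four algorithms.

```python
# Toy allocator agent: value estimates per peer, epsilon-greedy choice,
# and an exploration rate that shrinks when estimates are stable and grows
# when observed rewards deviate from expectation (e.g. network instability).
import random

class AllocatorAgent:
    def __init__(self, actions, alpha=0.1, eps=1.0):
        self.q = {a: 0.0 for a in actions}  # value of allocating to each peer
        self.alpha, self.eps = alpha, eps

    def choose(self):
        if random.random() < self.eps:            # explore
            return random.choice(list(self.q))
        return max(self.q, key=self.q.get)        # exploit best-known peer

    def update(self, action, reward):
        old = self.q[action]
        self.q[action] += self.alpha * (reward - old)
        surprise = abs(reward - old)              # mismatch with expectation
        self.eps = min(1.0, max(0.05, 0.9 * self.eps + 0.1 * surprise))

# Toy usage: peer "b" is the better worker on average.
agent = AllocatorAgent(actions=["a", "b"])
for _ in range(500):
    act = agent.choose()
    reward = random.gauss(1.0 if act == "b" else 0.5, 0.1)
    agent.update(act, reward)
print(agent.q, agent.eps)
```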