The hybrid high-order (HHO) method is a modern numerical framework for the approximation of elliptic PDEs. We present an extension of the HHO method to meshes possessing curved edges/faces. Such an extension allows us to enforce boundary conditions exactly on curved domains and to capture curved geometries that appear in the interior of the domain, e.g., discontinuities in a diffusion coefficient. The method makes use of non-polynomial functions on the curved faces and does not require any mappings between reference elements/faces. The approach does not require the faces to be polynomial and admits a strict upper bound on the number of degrees of freedom on a curved face for a given polynomial degree. Moreover, this approach of enriching the space of unknowns on the curved faces with non-polynomial functions should extend naturally to other polytopal methods. We show the method to be stable and consistent on curved meshes and derive optimal error estimates in the $L^2$ and energy norms. We present numerical examples of the method on a domain with a curved boundary, and for a diffusion problem in which the diffusion tensor is discontinuous along a curved arc.
This paper studies the third-order asymptotics of nonsingular discrete memoryless channels and the Gaussian channel with a maximal power constraint. The third-order term in our expansions employs a new quantity, here called the \emph{channel skewness}, which affects the approximation accuracy more significantly as the error probability decreases. For the Gaussian channel, evaluating Shannon's (1959) random coding and sphere-packing bounds in the central limit theorem (CLT) regime enables exact computation of the channel skewness. For discrete memoryless channels, this work generalizes Moulin's (2017) bounds on the asymptotic expansion of the maximum achievable message set size for nonsingular channels from the CLT regime to the moderate deviations (MD) regime, thereby refining Altu\u{g} and Wagner's (2014) MD result. For an example binary symmetric channel and most practically important $(n, \epsilon)$ pairs, including $n \in [100, 500]$ and $\epsilon \in [10^{-10}, 10^{-1}]$, an approximation up to the channel skewness is the most accurate among several expansions in the literature. A derivation of the third-order term in the type-II error exponent of binary hypothesis testing in the MD regime is also included; the resulting third-order term is similar to the channel skewness.
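For orientation, here is a minimal sketch of the expansion being refined, in standard notation assumed here (capacity $C$, dispersion $V$, inverse Gaussian complementary CDF $Q^{-1}$); the skewness correction itself is derived in the paper and is not reproduced below. For the Gaussian channel with a maximal power constraint, the classical normal approximation of the maximum message set size reads

$$\log M^*(n,\epsilon) = nC - \sqrt{nV}\,Q^{-1}(\epsilon) + \tfrac{1}{2}\log n + O(1),$$

and the channel skewness enters the refinement of this expansion beyond the $\tfrac{1}{2}\log n$ term, mattering more as $\epsilon$ decreases.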
We solve parabolic partial differential equations numerically on complex domains with a finite element method that avoids mesh generation: computations are carried out on a regular background mesh that does not fit the domain or its real boundary exactly. Our technique follows the phi-FEM paradigm, which assumes that the domain is given by a level-set function. In this paper, we prove a priori error estimates in the $\ell^2(H^1)$ and $\ell^\infty(L^2)$ norms for an implicit Euler discretization in time. We give numerical illustrations to highlight the performance of phi-FEM, which combines optimal convergence accuracy, ease of implementation, and computational speed.
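As an illustration of the time discretization only, here is a minimal backward (implicit) Euler sketch for an already-assembled finite element system; the matrices, load, and initial data are placeholders, and the phi-FEM spatial formulation (level-set-dependent terms and stabilization) is not shown.

```python
from scipy.sparse.linalg import splu

def implicit_euler(M, A, F, u0, dt, n_steps):
    """Backward Euler for the semi-discrete system M du/dt + A u = F(t).

    M, A : placeholder sparse mass/stiffness matrices (the actual phi-FEM
           system also carries level-set-dependent stabilization terms).
    F    : callable returning the load vector at time t.
    """
    lu = splu((M + dt * A).tocsc())   # factorize the time-stepping matrix once
    u = u0.copy()
    for k in range(1, n_steps + 1):
        u = lu.solve(M @ u + dt * F(k * dt))
    return u
```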
We introduce and compare computational techniques for sharp extreme event probability estimates in stochastic differential equations with small additive Gaussian noise. In particular, we focus on strategies that are scalable, i.e., their efficiency does not degrade upon spatial and temporal refinement. For that purpose, we extend algorithms based on the Laplace method for estimating the probability of an extreme event to infinite dimensions. The method estimates the limiting exponential scaling using a single realization of the random variable, the large deviation minimizer. Finding this minimizer amounts to solving an optimization problem governed by a differential equation. The probability estimate becomes sharp when it additionally includes prefactor information, which necessitates computing the determinant of a second derivative operator to evaluate a Gaussian integral around the minimizer. We present an approach in infinite dimensions based on Fredholm determinants, and develop numerical algorithms to compute these determinants efficiently for the high-dimensional systems that arise upon discretization. We also give an interpretation of this approach using Gaussian process covariances and transition tubes. An example model problem, for which we also provide an open-source Python implementation, is used throughout the paper to illustrate all methods discussed. To study the performance of the methods, we consider examples of stochastic differential and stochastic partial differential equations, including the randomly forced incompressible three-dimensional Navier-Stokes equations.
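To make the prefactor computation concrete, below is a small sketch (not taken from the paper's implementation) of the generic quadrature-based evaluation of a Fredholm determinant $\det(I+K)$ for a smooth kernel on an interval; the paper applies such determinants to second-derivative operators along the large deviation minimizer.

```python
import numpy as np

def fredholm_det(kernel, a, b, n=64):
    """Approximate det(I + K) for the integral operator
    (K f)(x) = \int_a^b kernel(x, y) f(y) dy
    via a Gauss-Legendre (Nystrom-type) discretization."""
    x, w = np.polynomial.legendre.leggauss(n)      # nodes/weights on [-1, 1]
    x = 0.5 * (b - a) * x + 0.5 * (b + a)          # map nodes to [a, b]
    w = 0.5 * (b - a) * w
    sw = np.sqrt(w)
    K = kernel(x[:, None], x[None, :])             # kernel matrix K(x_i, x_j)
    return np.linalg.det(np.eye(n) + sw[:, None] * K * sw[None, :])

# Check on a rank-one kernel K(x,y) = exp(-(x+y)) on [0,1],
# for which det(I+K) = 1 + \int_0^1 exp(-2x) dx exactly.
approx = fredholm_det(lambda x, y: np.exp(-(x + y)), 0.0, 1.0)
exact = 1.0 + 0.5 * (1.0 - np.exp(-2.0))
```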
Most continual learning (CL) algorithms have focused on tackling the stability-plasticity dilemma, that is, the challenge of preventing the forgetting of previous tasks while learning new ones. However, they have overlooked the impact of knowledge transfer when the dataset for a certain task is biased, namely, when unintended spurious correlations are learned from the biased dataset. In that case, how would such biases affect learning future tasks or the knowledge already acquired from past tasks? In this work, we carefully design systematic experiments using one synthetic and two real-world datasets to answer this question from our empirical findings. Specifically, we first show through two-task CL experiments that standard CL methods, which are unaware of dataset bias, can transfer biases from one task to another, both forward and backward, and that the severity of this transfer depends on whether the CL method emphasizes stability or plasticity. We then show that bias transfer also exists, and even accumulates, in longer task sequences. Finally, we propose a simple yet strong plug-in method for debiasing-aware continual learning, dubbed Group-class Balanced Greedy Sampling (BGS). We show that BGS consistently reduces the bias of a CL model, at the cost of at most a slight loss in CL performance.
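As a rough illustration of the flavor of such a sampler (an assumed sketch, not the paper's exact BGS procedure, whose details are given in the paper), a greedy buffer construction that balances (class, group) cells could look as follows.

```python
from collections import defaultdict
import random

def balanced_greedy_sample(samples, budget, seed=0):
    """Greedily draw a buffer that balances (class, group) cell counts.

    samples : list of (x, class_label, group_label) triples.
    budget  : number of samples to keep.
    Illustrative sketch of group-class balanced sampling only.
    """
    rng = random.Random(seed)
    pools = defaultdict(list)
    for x, y, g in samples:
        pools[(y, g)].append((x, y, g))
    for pool in pools.values():
        rng.shuffle(pool)

    buffer, counts = [], defaultdict(int)
    while len(buffer) < budget and any(pools.values()):
        # pick the currently least-represented non-empty (class, group) cell
        cell = min((c for c in pools if pools[c]), key=lambda c: counts[c])
        buffer.append(pools[cell].pop())
        counts[cell] += 1
    return buffer
```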
The paper presents a new error analysis of a class of mixed FEMs for stationary incompressible magnetohydrodynamics, with standard inf-sup stable velocity-pressure pairs for the Navier-Stokes equations and N\'ed\'elec's edge elements for the magnetic field. These methods have been widely used in numerical simulations over the last several decades, yet the existing analysis is not optimal, owing to the strong coupling of the system and the pollution of the lower-order N\'ed\'elec edge approximation in the analysis. By means of a newly modified Maxwell projection, we establish new and optimal error estimates. In particular, we prove that the method based on the commonly used Taylor-Hood/lowest-order N\'ed\'elec edge element pair is efficient and provides second-order accuracy for the numerical velocity. Two numerical examples, for the problem in both convex and nonconvex polygonal domains, are presented. The numerical results confirm our theoretical analysis.
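For reference, one commonly used nondimensional form of the stationary incompressible MHD system targeted by such discretizations is (scalings, coefficients, and the enforcement of the magnetic divergence constraint, e.g. via a Lagrange multiplier, vary across references and may differ from the paper's exact setting)

$$-\nu\,\Delta \mathbf{u} + (\mathbf{u}\cdot\nabla)\mathbf{u} + \nabla p - S\,(\nabla\times\mathbf{B})\times\mathbf{B} = \mathbf{f}, \qquad \nabla\cdot\mathbf{u} = 0,$$
$$\nu_m\,\nabla\times(\nabla\times\mathbf{B}) - \nabla\times(\mathbf{u}\times\mathbf{B}) = \mathbf{g}, \qquad \nabla\cdot\mathbf{B} = 0,$$

with inf-sup stable pairs (e.g. Taylor-Hood) for $(\mathbf{u},p)$ and N\'ed\'elec edge elements for $\mathbf{B}$.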
We present an extension of the linear sampling method for solving the sound-soft inverse acoustic scattering problem with randomly distributed point sources. The theoretical justification of our sampling method is based on the Helmholtz--Kirchhoff identity, the cross-correlation between measurements, and the volume and imaginary near-field operators, which we introduce and analyze. Implementations in MATLAB using boundary elements, the SVD, Tikhonov regularization, and Morozov's discrepancy principle are also discussed. We demonstrate the robustness and accuracy of our algorithms with several numerical experiments in two dimensions.
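For the regularized solution of the discretized sampling equations, a standard SVD-based Tikhonov solver combined with Morozov's discrepancy principle can be sketched as follows (generic code under assumed matrix and noise-level inputs, not the paper's MATLAB implementation). In a linear sampling context, such a solve is typically repeated for each sampling point, with the right-hand side depending on the point.

```python
import numpy as np
from scipy.optimize import brentq

def tikhonov_svd(A, b, alpha):
    """Tikhonov solution x = argmin ||Ax - b||^2 + alpha ||x||^2 via the SVD."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    beta = U.conj().T @ b
    return Vt.conj().T @ (s * beta / (s**2 + alpha))

def morozov_alpha(A, b, delta, alpha_min=1e-14, alpha_max=1e2):
    """Pick alpha so that ||A x_alpha - b|| matches the noise level delta
    (Morozov's discrepancy principle); assumes delta lies between the
    residuals at the two bracket endpoints so a root exists."""
    def discrepancy(log_alpha):
        x = tikhonov_svd(A, b, np.exp(log_alpha))
        return np.linalg.norm(A @ x - b) - delta
    return np.exp(brentq(discrepancy, np.log(alpha_min), np.log(alpha_max)))
```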
Traction parameters, which characterize the wheel-ground contact dynamics, are a central factor in the energy efficiency of vehicles. To optimize fuel consumption, reduce tire wear, and increase productivity, knowledge of the current traction parameters is indispensable. Unfortunately, these parameters are difficult to measure and require expensive force and torque sensors. An alternative is to determine them via system identification. In this work, we validate such a method in field experiments with a mobile robot. The method is based on an adaptive Kalman filter. We show how it estimates the traction parameters online, during motion in the field, and compare the estimates to values determined via a six-axis force-torque sensor installed for verification. Adhesion-slip-ratio curves are also recorded and compared to curves from the literature for additional validation of the method. The results can establish a foundation for a number of optimal traction methods.
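To fix ideas, here is a minimal (non-adaptive) Kalman filter sketch for online estimation of slowly varying parameters; the measurement matrix encoding the traction model is a placeholder, and the online tuning of the noise covariances that makes the paper's filter adaptive is not shown.

```python
import numpy as np

class ParameterKF:
    """Kalman filter for parameters theta modeled as a random walk:
    theta_k = theta_{k-1} + w_k,   z_k = H_k theta_k + v_k.
    H_k is a placeholder for the (application-specific) traction model."""

    def __init__(self, theta0, P0, Q, R):
        self.theta, self.P, self.Q, self.R = theta0, P0, Q, R

    def step(self, z, H):
        # predict: random-walk model, so the state is unchanged and covariance grows
        P = self.P + self.Q
        # update with measurement z
        S = H @ P @ H.T + self.R
        K = P @ H.T @ np.linalg.inv(S)
        self.theta = self.theta + K @ (z - H @ self.theta)
        self.P = (np.eye(len(self.theta)) - K @ H) @ P
        return self.theta
```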
Statistical inference for stochastic processes based on high-frequency observations has been an active research area for more than two decades. One of the most well-known and widely studied problems is the estimation of the quadratic variation of the continuous component of an It\^o semimartingale with jumps. Several rate- and variance-efficient estimators have been proposed in the literature when the jump component is of bounded variation. However, to date, very few methods can deal with jumps of unbounded variation. By developing new high-order expansions of the truncated moments of a locally stable L\'evy process, we construct a new rate- and variance-efficient volatility estimator for a class of It\^o semimartingales whose jumps behave locally like those of a stable L\'evy process with Blumenthal-Getoor index $Y\in (1,8/5)$ (hence, of unbounded variation). The proposed method is based on a two-step debiasing procedure for the truncated realized quadratic variation of the process. Our Monte Carlo experiments indicate that the method outperforms other efficient alternatives in the literature in the setting covered by our theoretical framework.
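For concreteness, the first ingredient of such a procedure, the truncated realized quadratic variation with a power threshold, can be sketched as below; the two debiasing steps of the paper, which handle jumps of unbounded variation, are not included, and the threshold constants are illustrative only.

```python
import numpy as np

def truncated_rv(x, delta_n, alpha=3.0, varpi=0.48):
    """Truncated realized quadratic variation:
    sum of |dX_i|^2 over increments with |dX_i| <= u_n,
    using a power threshold u_n = alpha * delta_n**varpi.

    x       : equally spaced observations of the process.
    delta_n : sampling interval.
    """
    dx = np.diff(x)
    u_n = alpha * delta_n ** varpi
    return np.sum(dx[np.abs(dx) <= u_n] ** 2)
```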
We introduce and analyze a discontinuous Galerkin method for the numerical modelling of the equations of Multiple-Network Poroelastic Theory (MPET) in the dynamic formulation. The MPET model can comprehensively describe functional changes in the brain by considering multiple scales of fluids. Concerning the spatial discretization, we employ a high-order discontinuous Galerkin method on polygonal and polyhedral grids and derive stability and a priori error estimates. The temporal discretization is based on a coupling between a Newmark $\beta$-method for the momentum equation and a $\theta$-method for the pressure equations. After presenting some verification tests, we perform a convergence analysis using an agglomerated mesh of a brain-slice geometry. Finally, we present a simulation on a three-dimensional patient-specific brain geometry reconstructed from magnetic resonance images. The model presented in this paper can be regarded as a preliminary attempt to model perfusion in the brain.
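For reference, the generic Newmark-$\beta$ displacement and velocity updates assumed here are the standard ones (the paper's fully coupled discrete MPET system is more involved):

$$\mathbf{u}^{n+1} = \mathbf{u}^n + \Delta t\,\mathbf{v}^n + \Delta t^2\Big[\big(\tfrac12-\beta\big)\mathbf{a}^n + \beta\,\mathbf{a}^{n+1}\Big], \qquad \mathbf{v}^{n+1} = \mathbf{v}^n + \Delta t\Big[(1-\gamma)\,\mathbf{a}^n + \gamma\,\mathbf{a}^{n+1}\Big],$$

while each pressure equation is advanced by a $\theta$-method, i.e. with its spatial terms evaluated at $\theta\,p^{n+1} + (1-\theta)\,p^n$.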
Our work explores the hardness of $3$SUM instances without certain additive structures, and its applications. As our main technical result, we show that solving $3$SUM on a size-$n$ integer set that avoids solutions to $a+b=c+d$ for $\{a, b\} \ne \{c, d\}$ still requires $n^{2-o(1)}$ time, under the $3$SUM hypothesis. Such sets are called Sidon sets and are well-studied in the field of additive combinatorics.

- Combined with previous reductions, this implies that the All-Edges Sparse Triangle problem on $n$-vertex graphs with maximum degree $\sqrt{n}$ and at most $n^{k/2}$ $k$-cycles for every $k \ge 3$ requires $n^{2-o(1)}$ time, under the $3$SUM hypothesis. This can be used to strengthen the previous conditional lower bounds by Abboud, Bringmann, Khoury, and Zamir [STOC'22] for $4$-Cycle Enumeration, Offline Approximate Distance Oracle, and Approximate Dynamic Shortest Path. In particular, we show that no algorithm for the $4$-Cycle Enumeration problem on $n$-vertex, $m$-edge graphs with $n^{o(1)}$ delay has $O(n^{2-\varepsilon})$ or $O(m^{4/3-\varepsilon})$ pre-processing time for any $\varepsilon > 0$. We also present a matching upper bound via simple modifications of the known algorithms for $4$-Cycle Detection.

- A slight generalization of the main result also extends the result of Dudek, Gawrychowski, and Starikovskaya [STOC'20] on the $3$SUM hardness of nontrivial 3-Variate Linear Degeneracy Testing (3-LDTs): we show $3$SUM hardness for all nontrivial 4-LDTs.

The proof of our main technical result combines a wide range of tools: the Balog-Szemer{\'e}di-Gowers theorem, sparse convolution algorithms, and a new almost-linear hash function with an almost $3$-universal guarantee for integers that do not have small-coefficient linear relations.
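For readers unfamiliar with the terminology, the Sidon condition used above can be checked by brute force as in the following illustrative sketch; it is quadratic time and plays no algorithmic role in the results.

```python
from itertools import combinations_with_replacement

def is_sidon(A):
    """Check whether an integer set A is a Sidon set: a + b = c + d has no
    solutions in A with {a, b} != {c, d}.  Equivalently, all sums over
    unordered pairs (with repetition) are distinct."""
    seen = set()
    for a, b in combinations_with_replacement(sorted(set(A)), 2):
        if a + b in seen:
            return False   # two different unordered pairs share a sum
        seen.add(a + b)
    return True
```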