The high-frequency Helmholtz equation on the entire space is truncated to a bounded domain using the perfectly matched layer (PML) technique and subsequently discretized by the higher-order finite element method (FEM) and the continuous interior penalty finite element method (CIP-FEM). By formulating an elliptic problem involving a linear combination of a finite number of eigenfunctions related to the PML differential operator, a wave-number-explicit decomposition lemma is proved for the PML problem, which implies that the PML solution can be decomposed into a non-oscillating elliptic part and an oscillating but analytic part. The preasymptotic error estimates in the energy norm for both the $p$-th order CIP-FEM and FEM are proved to be $C_1(kh)^p + C_2k(kh)^{2p} + C_3 E^{\rm PML}$ under the mesh condition that $k^{2p+1}h^{2p}$ is sufficiently small, where $k$ is the wave number, $h$ is the mesh size, and $E^{\rm PML}$ is the PML truncation error, which is exponentially small. In particular, the dependence of the coefficients $C_j~(j=1,2)$ on the source $f$ is improved. Numerical experiments are presented to validate the theoretical findings, illustrating that the higher-order CIP-FEM can greatly reduce the pollution errors.
A new sparse semiparametric model is proposed, which incorporates the influence of two functional random variables on a scalar response in a flexible and interpretable manner. One of the functional covariates is included through a single-index structure, while the other is included linearly through the high-dimensional vector formed by its discretised observations. For this model, two new algorithms are presented for selecting the relevant variables in the linear part and estimating the model. Both procedures exploit the functional origin of the linear covariates. Finite sample experiments demonstrate the scope of application of both algorithms: the first is a fast algorithm that avoids, without loss of predictive ability, the significant computational cost that standard variable selection methods incur when estimating this model, while the second completes the set of relevant linear covariates provided by the first, thus improving its predictive efficiency. Asymptotic results provide theoretical support for both procedures. A real data application demonstrates the applicability of the presented methodology from a predictive perspective, in terms of interpretability of outputs and low computational cost.
We present a polymorphic linear lambda-calculus as a proof language for second-order intuitionistic linear logic. The calculus includes addition and scalar multiplication, enabling the proof of a linearity result at the syntactic level.
Map matching is a common preprocessing step for analysing vehicle trajectories. In the theory community, the most popular approach for map matching is to compute a path on the road network that is the most spatially similar to the trajectory, where spatial similarity is measured using the Fr\'echet distance. A shortcoming of existing map matching algorithms under the Fr\'echet distance is that every time a trajectory is matched, the entire road network needs to be reprocessed from scratch. An open problem is whether one can preprocess the road network into a data structure, so that map matching queries can be answered in sublinear time. In this paper, we investigate map matching queries under the Fr\'echet distance. We provide a negative result for geometric planar graphs. We show that, unless SETH fails, there is no data structure that can be constructed in polynomial time that answers map matching queries in $O((pq)^{1-\delta})$ query time for any $\delta > 0$, where $p$ and $q$ are the complexities of the geometric planar graph and the query trajectory, respectively. We provide a positive result for realistic input graphs, which we regard as the main result of this paper. We show that for $c$-packed graphs, one can construct a data structure of $\tilde O(cp)$ size that can answer $(1+\varepsilon)$-approximate map matching queries in $\tilde O(c^4 q \log^4 p)$ time, where $\tilde O(\cdot)$ hides lower-order factors and dependence on $\varepsilon$.
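As a point of orientation, the discrete Fréchet distance between two polygonal curves, the simplest relative of the similarity measure discussed above, can be computed by a classical $O(pq)$ dynamic program. A minimal sketch (the paper's results concern the continuous Fréchet distance between a trajectory and paths in a graph, not this discrete variant):

```python
# Discrete Fréchet distance between two polylines via dynamic programming.
# dp[i][j] = cost of the best coupling of prefixes P[..i], Q[..j].
from math import hypot

def discrete_frechet(P, Q):
    """O(pq) dynamic program over couplings of the vertex sequences P, Q."""
    p, q = len(P), len(Q)
    d = lambda a, b: hypot(a[0] - b[0], a[1] - b[1])
    INF = float("inf")
    dp = [[INF] * q for _ in range(p)]
    for i in range(p):
        for j in range(q):
            best = 0.0 if i == j == 0 else min(
                dp[i - 1][j] if i > 0 else INF,
                dp[i][j - 1] if j > 0 else INF,
                dp[i - 1][j - 1] if i > 0 and j > 0 else INF,
            )
            dp[i][j] = max(best, d(P[i], Q[j]))
    return dp[-1][-1]
```

For two parallel horizontal segments at vertical distance 1, the coupling that walks both curves in lockstep is optimal and the distance is 1.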
Obtaining high-resolution, accurate channel topography and deposit conditions is a primary challenge in the study of channelized debris flow. Currently, widely used mapping technologies, including satellite imaging and drone photogrammetry, struggle to precisely observe the channel interior conditions of mountainous long-deep gullies, particularly those in the Wenchuan Earthquake region. SLAM is an emerging technology for 3D mapping; however, the extremely rugged environment in long-deep gullies poses two major challenges even for state-of-the-art SLAM: (1) atypical features; (2) violent swaying and oscillation of the sensors. These issues result in large deviations and substantial noise in SLAM results. To improve SLAM mapping in such environments, we propose an advanced SLAM-based channel detection and mapping system, named AscDAMs. It features three main enhancements that post-process SLAM results: (1) a digital orthophoto map aided deviation correction algorithm that greatly reduces the systematic error; (2) a point cloud smoothing algorithm that substantially diminishes noise; (3) a cross section extraction algorithm that enables the quantitative assessment of channel deposits and their changes. Two field experiments were conducted in Chutou Gully, Wenchuan County, China in February and November 2023, representing observations before and after the rainy season. We demonstrate the capability of AscDAMs to greatly improve SLAM results, promoting the use of SLAM for mapping this especially challenging environment. The proposed method compensates for the insufficiencies of existing technologies in observing debris flow channel interiors, including detailed channel morphology, erosion patterns, deposit distinction, volume estimation and change detection. It serves to enhance the study of full-scale debris flow mechanisms, long-term post-seismic evolution, and hazard assessment.
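A minimal sketch of what a cross section extraction step might look like on a georeferenced point cloud, assuming the simplest possible setting: a vertical cutting plane and a plain slab filter. All names and parameters here are illustrative, not AscDAMs' actual implementation:

```python
# Hypothetical minimal cross-section extraction from a 3D point cloud:
# keep points within half-width `tol` of a vertical plane through `origin`
# with in-plane horizontal direction `u`, and return (offset, elevation)
# pairs describing the channel profile along that plane.
from math import hypot

def cross_section(points, origin, u, tol=0.25):
    ux, uy = u
    n = hypot(ux, uy)
    ux, uy = ux / n, uy / n            # unit in-plane direction
    nx, ny = -uy, ux                   # horizontal normal of the plane
    ox, oy = origin
    profile = []
    for x, y, z in points:
        dx, dy = x - ox, y - oy
        if abs(dx * nx + dy * ny) <= tol:          # inside the slab
            profile.append((dx * ux + dy * uy, z))  # (offset, elevation)
    return sorted(profile)
```

Stacking such profiles from surveys before and after the rainy season is one simple way to quantify deposit changes along a channel reach.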
The problem of straggler mitigation in distributed matrix multiplication (DMM) is considered for a large number of worker nodes and a fixed small finite field. Polynomial codes and MatDot codes are generalized by making use of algebraic function fields (i.e., fields of algebraic functions on an algebraic curve) over a finite field. The construction of optimal solutions is translated into a combinatorial problem on the Weierstrass semigroups of the corresponding algebraic curves. Optimal or almost optimal solutions are provided. These have the same computational complexity per worker as classical polynomial and MatDot codes, and their recovery thresholds are almost optimal in the asymptotic regime (growing number of workers over a fixed finite field).
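The classical polynomial-code construction being generalized can be illustrated with scalar (1×1) blocks over the rationals. A toy sketch, with the caveat that the actual codes use matrix blocks over a finite field, where the shortage of evaluation points is precisely what motivates moving to algebraic function fields:

```python
# Toy polynomial code for distributed matrix multiplication with scalar
# blocks: A is split into m blocks, B into n blocks; worker at point x
# returns one evaluation of the product polynomial, and any mn responses
# determine all block products by interpolation (recovery threshold mn).
from fractions import Fraction

def encode(blocks_A, blocks_B, x):
    """One worker's task: evaluate both encodings at x and multiply."""
    m = len(blocks_A)
    At = sum(a * x**j for j, a in enumerate(blocks_A))
    Bt = sum(b * x**(k * m) for k, b in enumerate(blocks_B))
    return At * Bt

def decode(points, values, m, n):
    """Lagrange-interpolate the degree-(mn - 1) product polynomial from
    mn responses, then read each block product a_j * b_k off coefficient
    j + k*m."""
    mn = m * n
    coeffs = [Fraction(0)] * mn
    for xi, yi in zip(points, values):
        basis, denom = [Fraction(1)], Fraction(1)
        for xl in points:
            if xl == xi:
                continue
            denom *= xi - xl
            new = [Fraction(0)] * (len(basis) + 1)
            for t, c in enumerate(basis):   # multiply basis by (x - xl)
                new[t + 1] += c
                new[t] -= xl * c
            basis = new
        for t in range(mn):
            coeffs[t] += yi * basis[t] / denom
    return [[coeffs[j + k * m] for k in range(n)] for j in range(m)]
```

Over a field with fewer than mn elements there are not enough distinct evaluation points, which is the obstruction the function-field construction removes.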
We construct a fast exact algorithm for the simulation of the first-passage time, jointly with the undershoot and overshoot, of a tempered stable subordinator over an arbitrary non-increasing absolutely continuous function. We prove that the running time of our algorithm has finite exponential moments and provide bounds on its expected running time with explicit dependence on the characteristics of the process and the initial value of the function. The expected running time grows at most cubically in the stability parameter (as it approaches either $0$ or $1$) and is linear in the tempering parameter and the initial value of the function. Numerical performance, based on the implementation in the dedicated GitHub repository, exhibits good agreement with our theoretical bounds. We provide numerical examples to illustrate the performance of our algorithm in Monte Carlo estimation.
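For contrast with an exact method, the naive baseline is to discretize time and walk the subordinator across the level on a fixed grid. The sketch below does this for a gamma subordinator (a limiting case of the tempered stable family) and a constant barrier, with illustrative parameter names; such a scheme is biased by the grid and slow, which is what exact first-passage algorithms avoid:

```python
# Crude grid approximation (not an exact scheme) of the first-passage
# triple (time, undershoot, overshoot) of a gamma subordinator over a
# constant level b: sum independent Gamma(a*h, theta) increments until
# the running value first exceeds b.
import random

def first_passage_gamma(b, a=1.0, theta=1.0, h=1e-3, seed=1):
    rng = random.Random(seed)
    t, s, prev = 0.0, 0.0, 0.0
    while s <= b:
        prev = s                              # last value below the level
        s += rng.gammavariate(a * h, theta)   # stationary independent increment
        t += h
    return t, b - prev, s - b                 # time, undershoot, overshoot
```

Every Monte Carlo sample here costs on the order of $b/(a\theta h)$ random draws, whereas an exact sampler produces the joint law of the triple directly.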
We consider the scalar anisotropic wave equation. Recently, a convergence analysis for radial perfectly matched layers (PML) in the frequency domain was reported; in the present article we carry this approach over to the time domain. First, we explain why there is good hope that radial complex scalings can overcome the instabilities of PML methods caused by anisotropic materials. Next, we discuss some sensitive details which seem paradoxical at first glance: if the absorbing layer and the inhomogeneities are sufficiently separated, then the solution is indeed stable; however, for more general data the problem becomes unstable. In numerical computations we observe instabilities regardless of the position of the inhomogeneities, although the instabilities arise only for sufficiently fine discretizations. As a remedy we propose a complex frequency shifted scaling and discretizations by Hardy space infinite elements or truncation-free PMLs. We present numerical experiments which confirm the stability and convergence of these methods.
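For orientation, one standard form of complex frequency shifted (CFS) scaling modifies the radial stretching factor as follows; the article's precise scaling may differ:

```latex
% standard CFS stretching factor with damping profile \sigma and shift
% parameter \alpha > 0; the choice \alpha = 0 recovers the classical
% radial complex scaling
s(r,\omega) = 1 + \frac{\sigma(r)}{\alpha + i\omega},
\qquad
\hat r(r,\omega) = \int_0^r s(\rho,\omega)\, d\rho .
```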
The celebrated Kleene fixed point theorem is crucial in the mathematical modelling of recursive specifications in Denotational Semantics. In this paper we discuss whether the hypotheses of the aforementioned result can be weakened. An affirmative answer is provided by characterizing those properties that a self-mapping must satisfy in order to guarantee that its set of fixed points is non-empty when no notion of completeness is assumed for the partially ordered set. Moreover, the case in which the partially ordered set arises from a quasi-metric space is treated in depth. Finally, an application of the exposed theory is given: we present a mathematical method for discussing the asymptotic complexity of those algorithms whose running time satisfies a recurrence equation. The method retrieves the fixed-point-based methods that appear in the literature for asymptotic complexity analysis of algorithms, but improves upon them because it imposes fewer requirements than those assumed in the literature and, in addition, allows upper and lower asymptotic bounds on the running time to be stated simultaneously.
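In the simplest setting, a monotone self-map on a finite lattice reaches its least fixed point by Kleene iteration from the bottom element. A minimal sketch of this iteration (an illustrative finite example, not the weakened hypotheses studied in the paper):

```python
# Kleene iteration on a finite lattice: iterate a monotone map from the
# bottom element until it stabilizes; the result is the least fixed point.
# Example: forward reachability from vertex 1 in a small directed graph,
# a monotone map on the powerset lattice of the vertex set.
def kleene_lfp(f, bottom):
    x = bottom
    while True:
        y = f(x)
        if y == x:
            return x
        x = y

edges = {1: {2}, 2: {3}, 3: set(), 4: {1}}
step = lambda s: {1} | {w for v in s for w in edges[v]}  # monotone on 2^V
reach = kleene_lfp(step, set())  # least set containing 1, closed under edges
```

Here the chain $\emptyset \subseteq f(\emptyset) \subseteq f^2(\emptyset) \subseteq \dots$ stabilizes at $\{1,2,3\}$; on infinite posets the theorem needs the completeness hypotheses whose weakening is the subject of the paper.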
We study the sharp interface limit of the stochastic Cahn-Hilliard equation with cubic double-well potential and additive space-time white noise $\epsilon^{\sigma}\dot{W}$, where $\epsilon>0$ is an interfacial width parameter. We prove that, for sufficiently large scaling constant $\sigma >0$, the stochastic Cahn-Hilliard equation converges to the deterministic Mullins-Sekerka/Hele-Shaw problem as $\epsilon\rightarrow 0$. The convergence is shown in suitable fractional Sobolev norms as well as in the $L^p$-norm for $p\in (2, 4]$ in spatial dimension $d=2,3$. This generalizes the existing result for the space-time white noise to dimension $d=3$ and improves the existing results for smooth noise, which were so far limited to $p\in \left(2, \frac{2d+8}{d+2}\right]$ in spatial dimension $d=2,3$. As a byproduct of the analysis of the stochastic problem with space-time white noise, we identify minimal regularity requirements on the noise which allow convergence to the sharp interface limit in the $\mathbb{H}^1$-norm and also provide improved convergence estimates for the sharp interface limit of the deterministic problem.
We consider the posets of equivalence relations on finite sets under the standard embedding ordering and under the consecutive embedding ordering. In the latter case, the relations are also assumed to have an underlying linear order, which governs consecutive embeddings. For each poset we ask the well quasi-order and atomicity decidability questions: Given finitely many equivalence relations $\rho_1,\dots,\rho_k$, is the downward closed set Av$(\rho_1,\dots,\rho_k)$ consisting of all equivalence relations which do not contain any of $\rho_1,\dots,\rho_k$: (a) well-quasi-ordered, meaning that it contains no infinite antichains? and (b) atomic, meaning that it is not a union of two proper downward closed subsets, or, equivalently, that it satisfies the joint embedding property?
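For the standard (unordered) embedding ordering, containment of equivalence relations depends only on block sizes: $\rho$ embeds into $\sigma$ exactly when the blocks of $\rho$ can be injectively matched to distinct blocks of $\sigma$ of at least the same size, which a greedy comparison of sorted size sequences decides. A sketch of this test and of membership in an avoidance class (the consecutive ordering additionally constrains positions and is not covered here; function names are illustrative):

```python
# Embedding test for equivalence relations under the standard ordering,
# represented by their multisets of block sizes: match blocks of rho
# injectively to distinct, at-least-as-large blocks of sigma. Sorting
# both size lists in decreasing order makes a greedy pairing optimal.
def embeds(rho_sizes, sigma_sizes):
    rho = sorted(rho_sizes, reverse=True)
    sigma = sorted(sigma_sizes, reverse=True)
    if len(rho) > len(sigma):
        return False
    return all(r <= s for r, s in zip(rho, sigma))

def avoids(sizes, forbidden):
    """Membership in Av(rho_1, ..., rho_k): contains none of the forbidden."""
    return not any(embeds(f, sizes) for f in forbidden)
```

For example, the relation with blocks of sizes (3, 1) avoids the relation with blocks (2, 2), since the second block of the latter cannot be placed, while (2, 2, 1) contains it.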