Numerical resolution of high-dimensional nonlinear PDEs remains a formidable challenge due to the curse of dimensionality. Starting from the weak formulation of the Lawson-Euler scheme, this paper proposes a stochastic particle method (SPM) that tracks the deterministic motion, random jumps, resampling, and reweighting of particles. SPM approximates the high-dimensional solution with real-valued weighted particles, automatically adjusting the point distribution to match the relevant features of the solution. A piecewise constant reconstruction on a virtual uniform grid is employed to evaluate the nonlinear terms, fully exploiting the intrinsic adaptivity of SPM. Combining the two enables SPM to achieve adaptive sampling in time. Numerical experiments on the 6-D Allen-Cahn equation and the 7-D Hamilton-Jacobi-Bellman equation demonstrate the potential of SPM to solve high-dimensional nonlinear PDEs efficiently while maintaining acceptable accuracy.
In this article, we study nonparametric inference for a covariate-adjusted regression function. This parameter captures the average association between a continuous exposure and an outcome after adjusting for other covariates. In particular, under certain causal conditions, this parameter corresponds to the average outcome had all units been assigned to a specific exposure level, known as the causal dose-response curve. We propose a debiased local linear estimator of the covariate-adjusted regression function, and demonstrate that our estimator converges pointwise to a mean-zero normal limit distribution. We use this result to construct asymptotically valid confidence intervals for function values and differences thereof. In addition, we use approximation results for the distribution of the supremum of an empirical process to construct asymptotically valid uniform confidence bands. Our methods do not require undersmoothing, permit the use of data-adaptive estimators of nuisance functions, and attain the optimal rate of convergence for a twice-differentiable function. We illustrate the practical performance of our estimator using numerical studies and an analysis of the effect of air pollution exposure on cardiovascular mortality.
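To fix ideas, a plain (non-debiased) local linear estimator evaluates a weighted least-squares fit of an intercept and slope around the point of interest; the sketch below illustrates only this classical building block, without the paper's debiasing or covariate adjustment. The Gaussian kernel and bandwidth choice here are illustrative assumptions, not the authors' specification.

```python
import math

def local_linear(x0, xs, ys, h):
    """Local linear regression at x0: weighted least squares on
    (intercept, slope), with a Gaussian kernel of bandwidth h."""
    w = [math.exp(-0.5 * ((x - x0) / h) ** 2) for x in xs]
    # Weighted sums for the 2x2 normal equations in centered coordinates.
    s0 = sum(w)
    s1 = sum(wi * (x - x0) for wi, x in zip(w, xs))
    s2 = sum(wi * (x - x0) ** 2 for wi, x in zip(w, xs))
    t0 = sum(wi * y for wi, y in zip(w, ys))
    t1 = sum(wi * (x - x0) * y for wi, x, y in zip(w, xs, ys))
    det = s0 * s2 - s1 * s1
    # The fitted intercept is the regression estimate at x0.
    return (s2 * t0 - s1 * t1) / det

xs = [i / 10 for i in range(11)]
ys = [2.0 * x + 1.0 for x in xs]  # exactly linear data
# Local linear fits reproduce linear functions exactly, for any bandwidth.
assert abs(local_linear(0.5, xs, ys, h=0.2) - 2.0) < 1e-9
```

The exactness on linear data is the reason local linear smoothers avoid the boundary bias of local constant (Nadaraya-Watson) fits.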
We study the Weyl formula for the asymptotic number of eigenvalues of the Laplace-Beltrami operator with Dirichlet boundary condition on a Riemannian manifold in the context of geometric flows. Assuming the eigenvalues to be the energies of an associated statistical system, we show that geometric flows are directly related to the chosen direction of increasing entropy. For a closed Riemannian manifold we obtain a volume-preserving geometric flow that is equivalent to the increase of the Gibbs entropy derived from the spectrum of the Laplace-Beltrami operator. A resemblance to the Arnowitt-Deser-Misner (ADM) formalism of gravity is also noted for open Riemannian manifolds, where the geometric flow parameter is directly equated with the direction of increasing entropy as the time direction.
In a recently developed variational discretization scheme for second-order initial value problems (J. Comput. Phys. 498, 112652 (2024)), it was shown that the Noether charge associated with time translation symmetry is exactly preserved in the interior of the simulated domain. The obtained solution also fulfils the naively discretized equations of motion inside the domain, except at the last two grid points. Here we provide an explanation for the deviations at the boundary as stemming from the Lagrange multipliers used to implement the initial and connection conditions. We show explicitly that the Noether charge, once the boundary corrections are included, is exactly preserved at its continuum value over the whole simulation domain, including the boundary points.
Trajectory prediction in traffic scenes involves accurately forecasting the behaviour of surrounding vehicles. To achieve this objective, it is crucial to consider contextual information, including the driving paths of vehicles, road topology, lane dividers, and traffic rules. Although studies have demonstrated the potential of leveraging heterogeneous context to improve trajectory prediction, state-of-the-art deep learning approaches still rely on a limited subset of this information, mainly due to the limited availability of comprehensive representations. This paper presents an approach that utilizes knowledge graphs to model the diverse entities and their semantic connections within traffic scenes. Further, we present the nuScenes Knowledge Graph (nSKG), a knowledge graph for the nuScenes dataset that explicitly models all scene participants and road elements, as well as their semantic and spatial relationships. To facilitate the use of the nSKG with graph neural networks for trajectory prediction, we provide the data in a format ready to use with the PyG library. All artefacts can be found here: //github.com/boschresearch/nuScenes_Knowledge_Graph
We present a comprehensive analysis of the implications of artificial latency in the Proposer-Builder Separation framework on the Ethereum network. Focusing on the MEV-Boost auction system, we analyze how strategic latency manipulation affects Maximum Extractable Value yields and network integrity. Our findings reveal both increased profitability for node operators and significant systemic challenges, including heightened network inefficiencies and centralization risks. We empirically validate these insights with a pilot that Chorus One has been operating on Ethereum mainnet, demonstrating the nuanced effects of latency on bid selection and validator dynamics. Ultimately, this research underscores the need for balanced strategies that optimize Maximum Extractable Value capture while preserving the Ethereum network's decentralization ethos.
With the increasing demand for intelligent systems capable of operating in different contexts (e.g., users on the move), the correct interpretation of the user's need by such systems has become crucial for giving consistent answers to user questions. The most effective applications addressing this task lie in the fields of natural language processing and semantic expansion of terms. These techniques aim to estimate the goal of an input query by reformulating it as an intent, commonly relying on textual resources built by exploiting different semantic relations such as \emph{synonymy}, \emph{antonymy}, and many others. The aim of this paper is to generate such resources using the labels of a given taxonomy as the source of information. The obtained resources are integrated into a plain classifier that reformulates a set of input queries as intents, and the effect of each relation is tracked in order to quantify its impact on the classification. As an extension, we evaluate the best trade-off between improvement and noise introduction when combining such relations. The assessment is made by generating the resources and their combinations and using them to tune the classifier, which in turn reformulates the user questions as labels. The evaluation employs a wide and varied taxonomy as a use case, exploiting its labels as the basis for the semantic expansion and producing several corpora with the purpose of enhancing the pseudo-query estimation.
We use Stein characterisations to derive new moment-type estimators for the parameters of several multivariate distributions in the i.i.d. case; we also derive the asymptotic properties of these estimators. Our examples include the multivariate truncated normal distribution and several spherical distributions. The estimators are explicit and therefore provide an interesting alternative to the maximum-likelihood estimator. The quality of these estimators is assessed through comparative simulation studies in which we benchmark their behaviour against the performance of other estimators available in the literature.
This manuscript is devoted to investigating the conservation laws of the incompressible Navier-Stokes equations (NSEs), written in the energy-momentum-angular momentum conserving (EMAC) formulation, after being linearized by two-level methods. With appropriate correction steps (e.g., Stokes/Newton corrections), we show that the two-level methods, discretized from the EMAC NSEs, preserve momentum and angular momentum, and asymptotically preserve energy. Error estimates and (asymptotic) conservation properties are analyzed and obtained, and numerical experiments are conducted to validate the theoretical results, mainly confirming that the two-level linearized methods indeed (almost) retain the conservation laws. Moreover, experimental error estimates and optimal convergence rates of two newly defined types of pressure approximation in the EMAC NSEs are also obtained.
Given any finite set equipped with a probability measure, one may compute its Shannon entropy or information content. The entropy becomes the logarithm of the cardinality of the set when the uniform probability is used. Leinster introduced a notion of Euler characteristic for certain finite categories, also known as magnitude, that can be seen as a categorical generalization of cardinality. This paper aims to connect the two ideas by considering the extension of Shannon entropy to finite categories endowed with probability, in such a way that the magnitude is recovered when a certain choice of "uniform" probability is made.
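The classical fact the abstract starts from is easy to check directly: the Shannon entropy of a finite probability measure reduces to the logarithm of the cardinality under the uniform measure. The sketch below illustrates only this baseline identity, not the categorical extension to magnitudes of finite categories developed in the paper.

```python
import math

def shannon_entropy(p):
    """Shannon entropy H(p) = -sum_i p_i * log(p_i) of a finite
    probability measure, with the convention 0 * log 0 = 0."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

n = 8
uniform = [1.0 / n] * n
# Under the uniform measure, entropy equals log of the cardinality.
assert abs(shannon_entropy(uniform) - math.log(n)) < 1e-12
# Any non-uniform measure on the same set has strictly smaller entropy.
assert shannon_entropy([0.5, 0.5, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]) < math.log(n)
```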
The goal of explainable Artificial Intelligence (XAI) is to generate human-interpretable explanations, but there are no computationally precise theories of how humans interpret AI-generated explanations. The lack of theory means that validation of XAI must be done empirically, on a case-by-case basis, which prevents systematic theory-building in XAI. We propose a psychological theory of how humans draw conclusions from saliency maps, the most common form of XAI explanation, which for the first time allows for precise prediction of explainee inference conditioned on explanation. Our theory posits that, absent an explanation, humans expect the AI to make decisions similar to their own, and that they interpret an explanation by comparing it to the explanations they themselves would give. Comparison is formalized via Shepard's universal law of generalization in a similarity space, a classic theory from cognitive science. A pre-registered user study on AI image classifications with saliency map explanations demonstrates that our theory quantitatively matches participants' predictions of the AI.
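The comparison mechanism named above rests on Shepard's universal law of generalization: the probability of generalizing from one stimulus to another decays exponentially with their distance in psychological similarity space. A minimal sketch of that law follows; the Euclidean metric and the scale parameter are illustrative assumptions, not the paper's fitted model.

```python
import math

def shepard_similarity(x, y, scale=1.0):
    """Shepard's universal law of generalization: similarity decays
    exponentially with distance in psychological similarity space."""
    d = math.sqrt(sum((xi - yi) ** 2 for xi, yi in zip(x, y)))
    return math.exp(-d / scale)

# Identical points in similarity space generalize perfectly.
assert shepard_similarity([0.0, 0.0], [0.0, 0.0]) == 1.0
# Similarity falls off exponentially with Euclidean distance (here d = 5).
assert abs(shepard_similarity([0.0, 0.0], [3.0, 4.0]) - math.exp(-5.0)) < 1e-12
```

In a theory of this kind, the points would be embeddings of the explanations (e.g., saliency maps) a human would give and those the AI gives, with generalization strength predicting how the explainee updates beliefs about the AI's decision.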