Autonomous vehicles (AVs) need to determine their position and orientation accurately with respect to a global coordinate system or local features under different scene geometries, traffic conditions, and environmental conditions. \cite{reid2019localization} provides a comprehensive framework for the localization requirements of AVs. However, the framework is overly restrictive in that (a) only a very small deviation from the lane is tolerated (one every $10^{8}$ hours of operation), (b) all roadway types are treated the same, with no attention to the restrictions the environment imposes on localization, and (c) the temporal nature of position and orientation is not considered in the requirements. In this research, we present a more practical view of the localization requirements aimed at keeping the AV safe during operation. We make the following novel contributions: (a) we propose a deviation penalty defined as the cumulative distribution function of a Weibull distribution, anchored at the adjacent lane boundary; (b) we customize the parameters of the deviation penalty according to the current roadway type, the particular lane boundary that the ego vehicle is up against, and the roadway curvature; and (c) we update the deviation penalty based on the available gap in the adjacent lane. We postulate that this formulation provides a more robust and achievable view of the localization requirements than previous research while focusing on safety.
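As a concrete illustration (not code from the paper), the sketch below implements a Weibull-CDF deviation penalty with hypothetical shape/scale parameters per roadway type and a heuristic tightening based on the adjacent-lane gap; all numeric values are placeholders.

```python
import numpy as np
from scipy.stats import weibull_min

# Hypothetical shape/scale parameters per roadway type (illustrative values only;
# the paper customizes these by roadway type, lane boundary, and curvature).
ROADWAY_PARAMS = {
    "freeway":  {"shape": 2.5, "scale": 0.6},   # metres past the lane boundary
    "arterial": {"shape": 2.0, "scale": 0.4},
    "local":    {"shape": 1.5, "scale": 0.3},
}

def deviation_penalty(lateral_deviation_m, roadway_type="freeway", adjacent_gap_m=None):
    """Penalty in [0, 1] that starts accruing at the adjacent lane boundary.

    lateral_deviation_m: distance the ego vehicle has crossed past the lane
        boundary (<= 0 means still inside its own lane, so zero penalty).
    adjacent_gap_m: available gap in the adjacent lane; a smaller gap tightens
        the scale so the penalty rises faster (illustrative heuristic).
    """
    p = ROADWAY_PARAMS[roadway_type]
    scale = p["scale"]
    if adjacent_gap_m is not None:
        scale *= np.clip(adjacent_gap_m / 30.0, 0.2, 1.0)  # 30 m taken as a comfortable gap
    d = np.maximum(np.asarray(lateral_deviation_m, dtype=float), 0.0)
    return weibull_min.cdf(d, c=p["shape"], scale=scale)

if __name__ == "__main__":
    for dev in [0.0, 0.2, 0.5, 1.0]:
        print(dev, deviation_penalty(dev, "arterial", adjacent_gap_m=10.0))
```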
Markov chain Monte Carlo (MCMC) is a commonly used method for approximating expectations with respect to probability distributions. Uncertainty assessment for MCMC estimators is essential in practical applications. Moreover, for multivariate functions of a Markov chain, it is important to estimate not only the auto-correlation for each component but also the cross-correlations, in order to better assess sample quality, improve estimates of effective sample size, and use more effective stopping rules. Berg and Song [2022] introduced the moment least squares (momentLS) estimator, a shape-constrained estimator for the autocovariance sequence of a reversible Markov chain, for univariate functions of the chain. Based on this sequence estimator, they proposed an estimator of the asymptotic variance of the sample mean from MCMC samples. In this study, we propose novel autocovariance sequence and asymptotic variance estimators for Markov chain functions with multiple components, built on the univariate momentLS estimators of Berg and Song [2022]. We establish strong consistency of the proposed auto(cross)-covariance sequence and asymptotic variance matrix estimators. We conduct empirical comparisons of our method with other state-of-the-art approaches on simulated and real-data examples, using popular samplers including the random-walk Metropolis sampler and the No-U-Turn sampler from Stan.
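For orientation, the sketch below computes the empirical auto/cross-covariance sequence of a multivariate chain and a multivariate batch-means estimate of the asymptotic covariance of the sample mean; these are generic baseline quantities, not the momentLS-based estimators proposed in the paper.

```python
import numpy as np

def autocov_sequence(x, max_lag):
    """Empirical auto/cross-covariance matrices R(s), s = 0..max_lag,
    for a chain x of shape (n, p) (a generic baseline, not momentLS)."""
    x = np.asarray(x, dtype=float)
    n, p = x.shape
    xc = x - x.mean(axis=0)
    return np.array([(xc[: n - s].T @ xc[s:]) / n for s in range(max_lag + 1)])

def batch_means_asymptotic_cov(x, batch_size=None):
    """Multivariate batch-means estimate of the asymptotic covariance of the sample mean."""
    x = np.asarray(x, dtype=float)
    n, p = x.shape
    b = batch_size or int(np.floor(np.sqrt(n)))
    a = n // b                                   # number of batches
    means = x[: a * b].reshape(a, b, p).mean(axis=1)
    diff = means - x[: a * b].mean(axis=0)
    return b * (diff.T @ diff) / (a - 1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Bivariate AR(1) chain with correlated components as a stand-in for MCMC output.
    n, phi = 50_000, 0.8
    z = rng.standard_normal((n, 2)) @ np.array([[1.0, 0.5], [0.0, 1.0]])
    x = np.zeros((n, 2))
    for t in range(1, n):
        x[t] = phi * x[t - 1] + z[t]
    print(autocov_sequence(x, 3)[0])          # lag-0 covariance matrix
    print(batch_means_asymptotic_cov(x))      # estimates Sigma for the sample mean
```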
The growing number of scientific publications in acoustics makes traditional literature surveys increasingly difficult to conduct. This work explores the use of a generative pre-trained transformer (GPT) model to automate a literature survey of 116 articles on data-driven speech enhancement methods. The main objective is to evaluate the capabilities and limitations of the model in providing accurate responses to specific queries about papers selected from a reference human-based survey. While we see great potential for automating literature surveys in acoustics, improvements are needed for the model to address technical questions more clearly and accurately.
Evaluating environmental variables that vary stochastically is a central topic in designing better environmental management and restoration schemes. Both upper and lower estimates of these variables, such as water quality indices and flood and drought water levels, are important and should be evaluated consistently within a unified mathematical framework. We propose a novel pair of Orlicz regrets to consistently bound the statistics of random variables both from below and from above. Here, consistency means that the upper and lower bounds are evaluated with common coefficients and parameter values, in contrast to some of the risk measures proposed thus far. Orlicz regrets can flexibly evaluate the statistics of random variables based on their tail behavior. We exploit the explicit linkage between Orlicz regrets and divergence risk measures to better understand both. We obtain sufficient conditions under which the Orlicz regrets and divergence risk measures are well-posed, and further provide gradient-descent-type numerical algorithms to compute them. Finally, we apply the proposed mathematical framework to the statistical evaluation of 31 years of water quality data as key environmental indicators in a Japanese river environment.
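As a simplified illustration of consistent upper and lower evaluation with a single common tail-sensitivity parameter, the sketch below uses the entropic (KL-divergence) risk measure, a special case of the divergence risk measures linked to Orlicz regrets; it is not the paper's Orlicz-regret construction, and the data are synthetic.

```python
import numpy as np

def entropic_upper(x, theta):
    """Upper tail-weighted statistic: (1/theta) * log E[exp(theta * X)].
    This is the KL-divergence (entropic) risk measure, used here only as a
    tractable stand-in for the upper evaluation in the paper's framework."""
    x = np.asarray(x, dtype=float)
    m = theta * x
    return (np.log(np.mean(np.exp(m - m.max()))) + m.max()) / theta  # log-sum-exp for stability

def entropic_lower(x, theta):
    """Lower counterpart with the same parameter theta: -(1/theta) * log E[exp(-theta * X)]."""
    return -entropic_upper(-np.asarray(x, dtype=float), theta)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    concentration = rng.lognormal(mean=1.0, sigma=0.5, size=31 * 365)  # synthetic daily water-quality data
    theta = 0.5  # common tail-sensitivity parameter for both bounds
    lo, hi = entropic_lower(concentration, theta), entropic_upper(concentration, theta)
    print(f"lower {lo:.3f} <= mean {concentration.mean():.3f} <= upper {hi:.3f}")
```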
We propose a novel family of test statistics to detect the presence of changepoints in a sequence of dependent, possibly multivariate, functional-valued observations. Our approach allows testing for a very general class of changepoints, including the "classical" case of changes in the mean, and even changes in the whole distribution. Our statistics are based on a generalisation of the empirical energy distance: we propose weighted functionals of the energy distance process, designed to enhance the ability to detect breaks occurring at sample endpoints. The limiting distribution of the maximally selected version of our statistics requires only the computation of the eigenvalues of the covariance function, making it readily implementable in the most commonly employed packages, e.g. R. We show that, under the alternative, our statistics are able to detect changepoints occurring even very close to the beginning or end of the sample. In the presence of multiple changepoints, we propose a binary segmentation algorithm to estimate their number and locations. Simulations show that our procedures work very well in finite samples. We complement our theory with applications to financial and temperature data.
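A minimal sketch of the underlying idea: compute the empirical energy distance between the pre- and post-break segments at each candidate location and take the maximally selected statistic. The CUSUM-type weight used here is only illustrative, not the paper's weighting scheme.

```python
import numpy as np

def energy_distance(x, y):
    """Empirical energy distance between samples x (n, d) and y (m, d):
    2*E||X - Y|| - E||X - X'|| - E||Y - Y'||."""
    def mean_pdist(a, b):
        diff = a[:, None, :] - b[None, :, :]
        return np.sqrt((diff ** 2).sum(-1)).mean()
    return 2 * mean_pdist(x, y) - mean_pdist(x, x) - mean_pdist(y, y)

def scan_changepoint(z, min_seg=20):
    """Scan candidate break locations k and return the maximiser of a
    weighted energy distance between z[:k] and z[k:] (an illustrative
    CUSUM-type weight, not the paper's exact weighting)."""
    n = len(z)
    stats = {}
    for k in range(min_seg, n - min_seg):
        w = (k / n) * (1 - k / n)
        stats[k] = w * energy_distance(z[:k], z[k:])
    k_hat = max(stats, key=stats.get)
    return k_hat, stats[k_hat]

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    z = np.vstack([rng.normal(0, 1, size=(150, 3)), rng.normal(0.8, 1, size=(150, 3))])
    print(scan_changepoint(z))   # break planted at observation 150
```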
We introduce a general differentiable solver for time-dependent deformation problems with contact and friction. Our approach uses a finite element discretization with a high-order time integrator, coupled with the recently proposed incremental potential contact method for handling contact and friction forces, to solve PDE- and ODE-constrained optimization problems on scenes with complex geometry. It supports static and dynamic problems and differentiation with respect to all physical parameters involved in the problem description, including shape, material parameters, friction parameters, and initial conditions. Our analytically derived adjoint formulation is efficient, with a small overhead (typically less than 10% for nonlinear problems) over the forward simulation, and shares many similarities with the forward problem, allowing the reuse of large parts of existing forward simulator code. We implement our approach on top of the open-source PolyFEM library and demonstrate the applicability of our solver to shape design, initial condition optimization, and material estimation, both on simulated results and in physical validations.
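To illustrate the adjoint idea on a toy problem (a damped spring integrated with explicit Euler, standing in for the FEM/IPC setting; this is not PolyFEM code), the sketch below computes the gradient of a terminal-state loss with respect to the stiffness by a reverse pass that reuses the stored forward trajectory, and then runs plain gradient descent on the stiffness.

```python
import numpy as np

def simulate(k, c, x0, v0, dt, n_steps):
    """Forward explicit-Euler integration of x'' = -k*x - c*x'."""
    xs, vs = [x0], [v0]
    for _ in range(n_steps):
        x, v = xs[-1], vs[-1]
        xs.append(x + dt * v)
        vs.append(v + dt * (-k * x - c * v))
    return np.array(xs), np.array(vs)

def loss_and_grad_k(k, c, x0, v0, dt, n_steps, x_target):
    """Loss = 0.5*(x_N - x_target)^2; gradient w.r.t. stiffness k via the
    adjoint (reverse) pass, which reuses the stored forward trajectory."""
    xs, vs = simulate(k, c, x0, v0, dt, n_steps)
    loss = 0.5 * (xs[-1] - x_target) ** 2
    # Adjoint state (ax, av) initialised from dLoss/dstate at the final step.
    ax, av = xs[-1] - x_target, 0.0
    grad_k = 0.0
    for t in range(n_steps - 1, -1, -1):
        # Step: x_{t+1} = x_t + dt*v_t,  v_{t+1} = v_t + dt*(-k*x_t - c*v_t)
        grad_k += av * (-dt * xs[t])                       # explicit dependence on k
        ax, av = ax + av * (-dt * k), ax * dt + av * (1 - dt * c)  # transpose of step Jacobian
    return loss, grad_k

if __name__ == "__main__":
    # Recover the stiffness that brings the mass to x_target at the final time.
    args = dict(c=0.1, x0=1.0, v0=0.0, dt=0.01, n_steps=500, x_target=-0.3)
    k = 2.0
    for _ in range(300):
        _, g = loss_and_grad_k(k, **args)
        k -= 0.2 * g                                       # plain gradient descent
    loss, _ = loss_and_grad_k(k, **args)
    print(f"estimated stiffness: {k:.3f}, final loss: {loss:.2e}")
```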
A long-standing issue in the parallel-in-time community is the poor convergence of standard iterative parallel-in-time methods for hyperbolic partial differential equations (PDEs), and for advection-dominated PDEs more broadly. Here, a local Fourier analysis (LFA) convergence theory is derived for the two-level variant of the iterative parallel-in-time method of multigrid reduction-in-time (MGRIT). This closed-form theory provides new insights into the poor convergence of MGRIT for advection-dominated PDEs when using the standard approach of rediscretizing the fine-grid problem on the coarse grid. Specifically, we show that this poor convergence arises, at least in part, from inadequate coarse-grid correction of certain smooth Fourier modes known as characteristic components, which were previously identified as causing poor convergence of classical spatial multigrid on steady-state advection-dominated PDEs. We apply this convergence theory to show that, for certain semi-Lagrangian discretizations of advection problems, MGRIT convergence using rediscretized coarse-grid operators cannot be robust with respect to CFL number or coarsening factor. A consequence of this analysis is that techniques developed for improving convergence in the spatial multigrid context can be re-purposed in the MGRIT context to develop more robust parallel-in-time solvers. This strategy has been used in recent work to great effect; here, we provide further theoretical evidence supporting the effectiveness of this approach.
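A simplified LFA-style diagnostic (for explicit-Euler upwind advection rather than the paper's semi-Lagrangian setting) is sketched below: it compares the rediscretized coarse-grid Fourier symbol with the $m$-fold product of the fine-grid symbol. A large mismatch signals a poor coarse-grid correction, and rediscretization makes the mismatch grow with the coarsening factor $m$.

```python
import numpy as np

# Fourier symbol of explicit-Euler + first-order upwind for u_t + a u_x = 0:
# lambda(theta) = 1 - c*(1 - exp(-1j*theta)), with CFL number c = a*dt/dx.
def upwind_symbol(theta, cfl):
    return 1.0 - cfl * (1.0 - np.exp(-1j * theta))

def coarse_mismatch(cfl, m, n_modes=256):
    """Max over Fourier modes of |lambda_coarse(theta) - lambda_fine(theta)^m|,
    the quantity an accurate coarse-grid operator must keep small for two-level
    MGRIT; rediscretisation on the coarse grid means the coarse CFL is m*cfl.
    (A simplified diagnostic, not the full two-level LFA bound.)"""
    theta = np.linspace(-np.pi, np.pi, n_modes, endpoint=False)
    lam_fine_m = upwind_symbol(theta, cfl) ** m
    lam_coarse = upwind_symbol(theta, m * cfl)   # rediscretised coarse operator
    return np.max(np.abs(lam_coarse - lam_fine_m))

if __name__ == "__main__":
    for m in [2, 4, 8, 16]:
        print(m, coarse_mismatch(cfl=0.5, m=m))   # mismatch grows with coarsening factor m
```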
A central challenge in the verification of quantum computers is benchmarking their performance as a whole and demonstrating their computational capabilities. In this work, we find a universal model of quantum computation, Bell sampling, that can be used for both of these tasks and thus provides an ideal stepping stone towards fault tolerance. In Bell sampling, we measure two copies of a state prepared by a quantum circuit in the transversal Bell basis. We show that the Bell samples are classically intractable to produce and, at the same time, constitute what we call a circuit shadow: from the Bell samples we can efficiently extract information about the quantum circuit preparing the state, as well as diagnose circuit errors. In addition to known properties that can be efficiently extracted from Bell samples, we give two new and efficient protocols: a test for the depth of the circuit and an algorithm to estimate a lower bound on the number of T gates in the circuit. With some additional measurements, our algorithm learns a full description of states prepared by circuits with low T-count.
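The sketch below is a dense-vector toy simulation of transversal Bell sampling on two copies of a small state, useful only for building intuition; the hardness result is precisely the statement that such classical simulation does not scale.

```python
import numpy as np
from itertools import product

# Bell-basis change for one qubit pair: rows are the bras of
# |Phi+>, |Psi+>, |Phi->, |Psi-> in the computational basis |q_A q_B>.
BELL = np.array([[1, 0, 0, 1],
                 [0, 1, 1, 0],
                 [1, 0, 0, -1],
                 [0, 1, -1, 0]], dtype=complex) / np.sqrt(2)

def bell_sample_distribution(psi):
    """Outcome distribution of a transversal Bell measurement on two copies of
    an n-qubit state psi (a toy simulation; real Bell sampling runs on hardware)."""
    n = int(np.log2(psi.size))
    two_copies = np.kron(psi, psi)                        # qubit order A_1..A_n B_1..B_n
    t = two_copies.reshape((2,) * (2 * n))
    # Reorder to pairs (A_1, B_1, A_2, B_2, ...) so each Bell rotation acts on one axis of size 4.
    perm = [ax for i in range(n) for ax in (i, n + i)]
    t = np.transpose(t, perm).reshape((4,) * n)
    # Apply the Bell-basis change on every transversal pair.
    for i in range(n):
        t = np.moveaxis(np.tensordot(BELL, t, axes=([1], [i])), 0, i)
    return np.abs(t.reshape(-1)) ** 2                     # probabilities over 4^n Bell outcomes

if __name__ == "__main__":
    # Two copies of the 2-qubit state prepared by H then CNOT (a Bell pair itself).
    psi = np.zeros(4, dtype=complex); psi[0] = psi[3] = 1 / np.sqrt(2)
    probs = bell_sample_distribution(psi)
    labels = ["".join(p) for p in product("0123", repeat=2)]  # 0:Phi+, 1:Psi+, 2:Phi-, 3:Psi-
    for lab, pr in zip(labels, probs):
        if pr > 1e-12:
            print(lab, round(pr, 4))
```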
We introduce an approach for detecting causal relationships between variables whose time evolution is available. Causality is assessed by a variational scheme based on the Information Imbalance of distance ranks, a statistical test capable of inferring the relative information content of different distance measures. We test whether the predictability of a putative driven system Y can be improved by incorporating information from a potential driver system X, without making assumptions on the underlying dynamics and without the need to compute probability densities of the dynamic variables. This framework makes causality detection possible even for high-dimensional systems in which only a few of the variables are known or measured. Benchmark tests on coupled chaotic dynamical systems demonstrate that our approach outperforms other model-free causality detection methods, successfully handling both unidirectional and bidirectional couplings. We also show that the method can be used to robustly detect causality in human electroencephalography data.
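A minimal sketch of the core statistic, the Information Imbalance between two distance measures computed from their neighbour ranks; the full causality test builds the distances from lagged X/Y embeddings and optimizes a variational weight, which is omitted here.

```python
import numpy as np

def information_imbalance(dist_a, dist_b):
    """Delta(A -> B) = (2/N) * mean over points of the rank, under distance B,
    of each point's nearest neighbour under distance A
    (close to 0: A predicts B's neighbourhoods; close to 1: no information)."""
    dist_a = np.array(dist_a, dtype=float, copy=True)
    n = dist_a.shape[0]
    np.fill_diagonal(dist_a, np.inf)                          # exclude self-neighbours
    nn_a = np.argmin(dist_a, axis=1)                          # nearest neighbour under A
    ranks_b = np.argsort(np.argsort(dist_b, axis=1), axis=1)  # rank 0 = self (distance 0)
    rank_of_nn = ranks_b[np.arange(n), nn_a]
    return 2.0 * rank_of_nn.mean() / n

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    x = rng.standard_normal((500, 3))
    y = x + 0.1 * rng.standard_normal((500, 3))               # y is essentially a noisy copy of x
    d = lambda z: np.linalg.norm(z[:, None, :] - z[None, :, :], axis=-1)
    print(information_imbalance(d(x), d(y)))                  # small: x informs y's neighbourhoods
    print(information_imbalance(d(x), d(rng.standard_normal((500, 3)))))  # ~1: no information
```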
We present a multigrid algorithm to efficiently solve the large saddle-point systems of equations that typically arise in PDE-constrained optimization under uncertainty. The algorithm is based on a collective smoother that, at each iteration, sweeps over the nodes of the computational mesh and solves a reduced saddle-point system whose size depends on the number $N$ of samples used to discretize the probability space. We show that this reduced system can be solved with optimal $O(N)$ complexity. We test the multigrid method on three problems: a linear-quadratic problem, possibly with a local or a boundary control, for which the multigrid method is used to solve the linear optimality system directly; a nonsmooth problem with box constraints and $L^1$-norm penalization on the control, in which the multigrid scheme is used within a semismooth Newton iteration; and a risk-averse problem with the smoothed CVaR risk measure, where the multigrid method is called within a preconditioned Newton iteration. In all cases, the multigrid algorithm exhibits excellent performance and robustness with respect to the parameters of interest.
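The abstract does not spell out the algebra of the node-level reduced system, so the sketch below assumes an arrowhead structure, i.e. small per-sample blocks coupled through a single control unknown, which is a typical way $O(N)$ solvability arises: eliminate each sample block and solve a scalar Schur complement for the control.

```python
import numpy as np

def solve_arrowhead(D, C, alpha, r, s):
    """Solve the (assumed) arrowhead system
        D_i w_i + C_i u = r_i  (i = 1..N),   sum_i C_i^T w_i + alpha*u = s,
    by eliminating the w_i (one small solve per sample) and solving a scalar
    Schur complement for u: O(N) work overall.
    D: (N, k, k), C: (N, k), r: (N, k); k is the small per-sample block size."""
    Dinv_C = np.linalg.solve(D, C[..., None])[..., 0]        # D_i^{-1} C_i
    Dinv_r = np.linalg.solve(D, r[..., None])[..., 0]        # D_i^{-1} r_i
    schur = alpha - np.einsum("ik,ik->", C, Dinv_C)          # alpha - sum_i C_i^T D_i^{-1} C_i
    u = (s - np.einsum("ik,ik->", C, Dinv_r)) / schur
    w = Dinv_r - Dinv_C * u                                  # back-substitution
    return w, u

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    N, k = 1000, 2                                           # N samples, 2x2 per-sample blocks
    D = 0.3 * rng.standard_normal((N, k, k)) + 3 * np.eye(k) # well-conditioned sample blocks
    C, r = rng.standard_normal((N, k)), rng.standard_normal((N, k))
    w, u = solve_arrowhead(D, C, alpha=2.0, r=r, s=1.0)
    # Residual of the coupling equation should be ~0:
    print(abs(np.einsum("ik,ik->", C, w) + 2.0 * u - 1.0))
```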
A conjecture attributed to Smith states that every pair of longest cycles in a $k$-connected graph intersect each other in at least $k$ vertices. In this paper, we show that every pair of longest cycles in a~$k$-connected graph on $n$ vertices intersect each other in at least~$\min\{n,8k-n-16\}$ vertices, which confirms Smith's conjecture when $k\geq (n+16)/7$. An analogous conjecture for paths instead of cycles was stated by Hippchen. By a simple reduction, we relate the two conjectures, showing that Hippchen's conjecture is valid when either $k \leq 6$ or $k \geq (n+9)/7$.
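The stated threshold follows from a one-line check: the bound $\min\{n,\,8k-n-16\}$ already implies Smith's conjecture (an intersection of at least $k$ vertices) exactly when
\[
  8k - n - 16 \;\ge\; k \iff 7k \;\ge\; n + 16 \iff k \;\ge\; \frac{n+16}{7},
\]
since $n \ge k$ holds automatically (a $k$-connected graph has at least $k+1$ vertices).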