Allocating responsibility for accidents involving autonomous vehicles remains a difficult open problem. Owing to the complexity of autonomous driving technology, most research on accident responsibility has remained at the theoretical level, yet actual accidents demand a practical and fair resolution. To address this gap, this study proposes a multi-subject responsibility allocation optimization method based on the RCModel (Risk Chain Model), which analyzes the responsibility of each actor from a technical perspective and promotes a more reasonable and fair allocation of responsibility.
This paper presents a new discretization error quantification method for the numerical integration of ordinary differential equations. The error is modelled using the Wishart distribution, which enables us to capture the correlation between variables. Error quantification is achieved by solving an optimization problem under order constraints on the covariance matrices. An algorithm for this optimization problem is also established in a slightly broader context.
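As a concrete, deliberately simplified illustration of the modelling idea, the sketch below draws a Wishart-distributed covariance for the local error of a Runge-Kutta step. The toy dynamics, the scale matrix `V`, and the $h^{2(p+1)}$ scaling are illustrative assumptions, not the paper's calibrated procedure.

```python
import numpy as np
from scipy.stats import wishart

# Minimal sketch: model the local discretization error of an ODE step as a
# zero-mean Gaussian whose covariance is Wishart distributed (names and
# dynamics are illustrative, not the paper's algorithm).

def rk4_step(f, t, y, h):
    """One classical Runge-Kutta step of size h."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

f = lambda t, y: np.array([y[1], -y[0]])   # harmonic oscillator
y, h, d = np.array([1.0, 0.0]), 0.1, 2

# Hypothetical scale matrix; the order p of the method enters through the
# h**(2*(p+1)) scaling of the covariance (RK4: p = 4, local error O(h^5)).
V = np.eye(d) * h ** 10
Sigma = wishart.rvs(df=d + 2, scale=V / (d + 2))   # so that E[Sigma] = V

y_next = rk4_step(f, 0.0, y, h) + np.random.multivariate_normal(np.zeros(d), Sigma)
print(y_next)
```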
The work of Kalman and Bucy has established a duality between filtering and optimal estimation in the context of time-continuous linear systems. This duality has recently been extended to time-continuous nonlinear systems in terms of an optimization problem constrained by a backward stochastic partial differential equation. Here we revisit this problem from the perspective of appropriate forward-backward stochastic differential equations. This approach sheds new light on the estimation problem and provides a unifying perspective. It is also demonstrated that certain formulations of the estimation problem lead to deterministic formulations similar to the linear Gaussian case as originally investigated by Kalman and Bucy. Finally, optimal control of partially observed diffusion processes is discussed as an application of the proposed estimators.
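For the linear Gaussian case the duality admits a compact statement; as a reminder (in illustrative notation, not necessarily that of the paper), the Kalman-Bucy error covariance and the dual optimal control problem share one Riccati equation:

```latex
% Linear-Gaussian duality (illustrative notation). For the system
%   dX_t = A X_t\,dt + C\,dW_t, \qquad dY_t = H X_t\,dt + dV_t,
% the Kalman--Bucy error covariance P_t solves the forward Riccati equation
\dot{P}_t = A P_t + P_t A^{\top} + C C^{\top} - P_t H^{\top} H P_t ,
% while for the dual control system
%   \dot{z}_t = A^{\top} z_t + H^{\top} u_t, \qquad
%   \min_u \int_0^T \bigl( |u_t|^2 + |C^{\top} z_t|^2 \bigr)\,dt,
% the LQR value matrix satisfies the same Riccati equation run backward
% in time.
```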
Convex PCA, which was introduced by Bigot et al., is a dimension reduction methodology for data with values in a convex subset of a Hilbert space. This setting arises naturally in many applications, including distributional data in the Wasserstein space of an interval, and ranked compositional data under the Aitchison geometry. Our contribution in this paper is threefold. First, we present several new theoretical results including consistency as well as continuity and differentiability of the objective function in the finite dimensional case. Second, we develop a numerical implementation of finite dimensional convex PCA when the convex set is polyhedral, and show that this provides a natural approximation of Wasserstein geodesic PCA. Third, we illustrate our results with two financial applications, namely distributions of stock returns ranked by size and the capital distribution curve, both of which are of independent interest in stochastic portfolio theory.
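To make the polyhedral computation concrete, here is a minimal sketch (assuming \texttt{cvxpy}; the set $C$ and the names `G`, `h`, `m`, `u` are illustrative) of the inner step of finite-dimensional convex PCA: the metric projection of a data point onto the intersection of a candidate one-dimensional component with a polyhedral convex set.

```python
import numpy as np
import cvxpy as cp

# Inner step of convex PCA on a polyhedral set C = {z : G z <= h}:
# project a data point x onto the intersection of C with the line
# {m + t u : t in R}. A small quadratic program in the scalar t.

def project_onto_component(x, m, u, G, h):
    t = cp.Variable()
    z = m + t * u
    prob = cp.Problem(cp.Minimize(cp.sum_squares(z - x)), [G @ z <= h])
    prob.solve()
    return m + t.value * u

# Example: C is the probability simplex in R^3 in inequality form.
G = np.vstack([-np.eye(3), np.ones((1, 3)), -np.ones((1, 3))])
h = np.concatenate([np.zeros(3), [1.0], [-1.0]])
m = np.full(3, 1 / 3)                          # barycenter of the simplex
u = np.array([1.0, -1.0, 0.0]) / np.sqrt(2)    # candidate direction
x = np.array([0.7, 0.2, 0.1])
print(project_onto_component(x, m, u, G, h))
```

The outer search over the direction $u$, which is nonconvex in general, would wrap repeated calls to this projection.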
Dual consistency is an important issue in developing stable dual-weighted residual (DWR) error estimation for goal-oriented mesh adaptivity. In this paper, this issue is studied in depth within a Newton-GMG framework for the steady Euler equations. Theoretically, the numerical framework is recast as a Petrov-Galerkin scheme, from which dual consistency is characterized. A boundary modification technique is discussed for preserving dual consistency within this framework. Numerically, a geometric multigrid method is proposed for solving the dual problem, and a regularization term is designed to guarantee convergence of the iteration. Numerical experiments demonstrate two features of our method: (i) stable convergence of the quantity of interest is obtained smoothly for problems with different configurations, and (ii) for accurate calculation of the quantity of interest, significantly fewer mesh grids are required by the proposed dual-consistent DWR method than by its dual-inconsistent counterpart.
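To fix ideas, the classical error representation behind DWR estimation (in the Becker-Rannacher form, with notation independent of this paper) reads:

```latex
% Classical dual-weighted residual error representation (notation
% illustrative). With exact solution u, discrete solution u_h, residual
% r(u_h)(\cdot), and dual (adjoint) solution z,
J(u) - J(u_h) \;=\; r(u_h)(z - \varphi_h) + R_h
\quad \text{for all } \varphi_h \in V_h,
% where R_h is a remainder that is quadratic in the primal and dual errors
% for nonlinear problems. Dual consistency ensures that the discrete dual
% problem is a consistent discretization of the continuous adjoint, so the
% computed z_h can safely replace z in the weights.
```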
We present the full approximation scheme constraint decomposition (FASCD) multilevel method for solving variational inequalities (VIs). FASCD is a common extension of both the full approximation scheme (FAS) multigrid technique for nonlinear partial differential equations, due to A.~Brandt, and the constraint decomposition (CD) method introduced by X.-C.~Tai for VIs arising in optimization. We extend the CD idea by exploiting the telescoping nature of certain function space subset decompositions arising from multilevel mesh hierarchies. When a reduced-space (active set) Newton method is applied as a smoother, with work proportional to the number of unknowns on a given mesh level, FASCD V-cycles exhibit nearly mesh-independent convergence rates, and full multigrid cycles are optimal solvers. The example problems include unilateral and bilateral VIs with differential operators that are symmetric linear, nonsymmetric linear, and nonlinear.
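FASCD itself is a multilevel scheme; purely as a single-level illustration of how a smoother respects the constraint, the sketch below applies projected Gauss-Seidel to a one-dimensional obstacle problem (the load and obstacle are hypothetical choices, and this is not the reduced-space Newton smoother of the paper).

```python
import numpy as np

# Projected Gauss-Seidel for the 1D obstacle problem: -u'' >= f, u >= psi,
# with complementarity, zero boundary values, uniform grid. A multilevel
# method like FASCD would embed such a constrained smoother inside V-cycles
# over decomposed constraint sets.

n = 100
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)
f = -8.0 * np.ones(n)                       # load pushing u downward
psi = -0.2 + 0.6 * np.minimum(x, 1 - x)     # tent-shaped obstacle
u = np.maximum(np.zeros(n), psi)            # feasible initial iterate

for sweep in range(500):
    for i in range(n):
        left = u[i - 1] if i > 0 else 0.0
        right = u[i + 1] if i < n - 1 else 0.0
        # Unconstrained Gauss-Seidel update for -u'' = f, then projection
        # onto the constraint set {u_i >= psi_i}.
        u[i] = max((h * h * f[i] + left + right) / 2.0, psi[i])

print("active set size:", np.sum(np.isclose(u, psi)))
```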
We present a scalable strategy for developing mesh-free hybrid neuro-symbolic partial differential equation solvers based on existing mesh-based numerical discretization methods. In particular, this strategy can be used to efficiently train neural network surrogate models of partial differential equations by (i) leveraging the accuracy and convergence properties of advanced numerical methods, solvers, and preconditioners, and (ii) improving scalability to higher-order PDEs by strictly limiting optimization to first-order automatic differentiation. The presented neural bootstrapping method (dubbed NBM here) is based on evaluating the finite-discretization residuals of the PDE system, obtained on implicit Cartesian cells centered on a set of random collocation points, with respect to the trainable parameters of the neural network. Importantly, the conservation laws and symmetries present in the bootstrapped finite-discretization equations inform the neural network about solution regularities within local neighborhoods of the training points. We apply NBM to the important class of elliptic problems with jump conditions across irregular interfaces in three spatial dimensions. We show the method is convergent: model accuracy improves as the number of collocation points in the domain increases and the residuals are preconditioned. We show NBM is competitive in terms of memory and training speed with other PINN-type frameworks. The algorithms presented here are implemented using \texttt{JAX} in a software package named \texttt{JAX-DIPS} (https://github.com/JAX-DIPS/JAX-DIPS), standing for differentiable interfacial PDE solver. We have open-sourced \texttt{JAX-DIPS} to facilitate research into the use of differentiable algorithms for developing hybrid PDE solvers.
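The following sketch conveys the bootstrapping idea on a one-dimensional Poisson analogue (it is not the \texttt{JAX-DIPS} implementation; network and source term are illustrative): the loss is a finite-difference residual evaluated on stencils centered at random collocation points, so only first-order automatic differentiation with respect to the parameters is needed.

```python
import jax
import jax.numpy as jnp

# Simplified 1D analogue of the bootstrapping idea: the loss is the
# central-difference residual of u''(x) = f(x) at random collocation
# points; derivatives of u come from the discretization, not nested AD,
# so only one reverse-mode pass over the parameters is required.

def init_params(key, width=32):
    k1, k2 = jax.random.split(key)
    return {"W1": jax.random.normal(k1, (1, width)) * 0.5,
            "b1": jnp.zeros(width),
            "W2": jax.random.normal(k2, (width, 1)) * 0.5,
            "b2": jnp.zeros(1)}

def u(params, x):                      # scalar network surrogate, x: (1,)
    hdn = jnp.tanh(x @ params["W1"] + params["b1"])
    return jnp.squeeze(hdn @ params["W2"] + params["b2"])

def residual_loss(params, xs, h=1e-2):
    f = lambda s: -jnp.sin(jnp.pi * s) * jnp.pi ** 2   # manufactured source
    def cell_residual(x):
        # Finite-difference Laplacian on an implicit cell around x.
        lap = (u(params, x + h) - 2 * u(params, x) + u(params, x - h)) / h**2
        return lap - f(x[0])
    return jnp.mean(jax.vmap(cell_residual)(xs) ** 2)

key = jax.random.PRNGKey(0)
params = init_params(key)
xs = jax.random.uniform(jax.random.split(key)[1], (256, 1))  # collocation pts
loss, grads = jax.value_and_grad(residual_loss)(params, xs)
print(loss)
```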
Tikhonov regularization is a widely used technique for solving inverse problems, as it can enforce prior properties on the desired solution. In this paper, we propose a Krylov subspace based iterative method for solving linear inverse problems with a general-form Tikhonov regularization term $x^TMx$, where $M$ is a positive semi-definite matrix. An iterative process called the preconditioned Golub-Kahan bidiagonalization (pGKB) is designed, which implicitly utilizes a proper preconditioner to generate a series of solution subspaces with desirable properties encoded by the regularizer $x^TMx$. Based on the pGKB process, we propose an iterative regularization algorithm that projects the original problem onto small-dimensional solution subspaces. We analyze the regularization effect of this algorithm, including the incorporation of prior properties of the desired solution into the solution subspace and the semi-convergence behavior of the regularized solution. To overcome instabilities caused by semi-convergence, we further propose two pGKB based hybrid regularization algorithms. All the proposed algorithms are tested on both small-scale and large-scale linear inverse problems. Numerical results demonstrate that these iterative algorithms exhibit excellent performance, outperforming other state-of-the-art algorithms in some cases.
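For orientation, the sketch below shows the familiar *standard-form* GKB hybrid skeleton on which pGKB-type variants build; the preconditioning by $M$, the parameter-choice rules, and the stopping criteria of the actual algorithms are omitted.

```python
import numpy as np

# Standard-form Golub-Kahan bidiagonalization with Tikhonov regularization
# applied to the small projected problem (a classical hybrid method; not
# the paper's preconditioned pGKB).

def gkb_hybrid(A, b, k=20, lam=1e-2):
    m, n = A.shape
    U = np.zeros((m, k + 1)); V = np.zeros((n, k))
    alpha = np.zeros(k); beta = np.zeros(k + 1)
    beta[0] = np.linalg.norm(b); U[:, 0] = b / beta[0]
    for j in range(k):
        w = A.T @ U[:, j] - (beta[j] * V[:, j - 1] if j > 0 else 0)
        alpha[j] = np.linalg.norm(w); V[:, j] = w / alpha[j]
        w = A @ V[:, j] - alpha[j] * U[:, j]
        beta[j + 1] = np.linalg.norm(w); U[:, j + 1] = w / beta[j + 1]
    # Projected (k+1) x k lower-bidiagonal matrix B_k with A V_k = U_{k+1} B_k.
    B = np.zeros((k + 1, k))
    B[np.arange(k), np.arange(k)] = alpha
    B[np.arange(1, k + 1), np.arange(k)] = beta[1:]
    # Hybrid step: Tikhonov-regularize the small projected problem, which
    # stabilizes the semi-convergence of the pure iterative method.
    rhs = beta[0] * np.eye(k + 1)[:, 0]
    y = np.linalg.solve(B.T @ B + lam * np.eye(k), B.T @ rhs)
    return V @ y

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 100)) @ np.diag(0.9 ** np.arange(100))
x_true = np.ones(100)
b = A @ x_true + 1e-3 * rng.standard_normal(200)
print(np.linalg.norm(gkb_hybrid(A, b) - x_true) / np.linalg.norm(x_true))
```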
We study the problem of estimating non-linear functionals of discrete distributions in the context of local differential privacy. The initial data $x_1,\ldots,x_n \in [K]$ are assumed to be i.i.d. and distributed according to an unknown discrete distribution $p = (p_1,\ldots,p_K)$. Only $\alpha$-locally differentially private (LDP) samples $z_1,\ldots,z_n$ are publicly available, where the term 'local' means that each $z_i$ is produced using only the individual attribute $x_i$. We exhibit privacy mechanisms that are interactive (i.e. allowed to use already published confidential data) or non-interactive. We describe the behavior of the quadratic risk for estimating the power sum functional $F_{\gamma} = \sum_{k=1}^K p_k^{\gamma}$, $\gamma > 0$, as a function of $K$, $n$ and $\alpha$. In the non-interactive case, we study two plug-in type estimators of $F_{\gamma}$, for all $\gamma > 0$, that are similar to the MLE analyzed by Jiao et al. (2017) in the multinomial model. However, due to the privacy constraint, the rates we attain are slower, similar to those obtained in the Gaussian model by Collier et al. (2020). In the interactive case, we introduce for all $\gamma > 1$ a two-step procedure which attains the faster parametric rate $(n \alpha^2)^{-1/2}$ when $\gamma \geq 2$. We establish lower bounds over all $\alpha$-LDP mechanisms and all estimators using the private samples.
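As a minimal non-interactive illustration (a standard histogram mechanism, not necessarily the paper's estimator), each user can release a one-hot encoding of $x_i$ perturbed by Laplace noise of scale $2/\alpha$ (the $\ell_1$-sensitivity of a one-hot vector is 2), and the analyst forms a plug-in estimate of $F_\gamma$:

```python
import numpy as np

# Non-interactive plug-in estimation of F_gamma = sum_k p_k^gamma under
# alpha-LDP via the Laplace mechanism on one-hot encodings (a standard
# construction that the paper's estimators refine).

rng = np.random.default_rng(1)
K, n, alpha, gamma = 10, 100_000, 1.0, 2.0
p = rng.dirichlet(np.ones(K))
x = rng.choice(K, size=n, p=p)

Z = np.eye(K)[x] + rng.laplace(scale=2.0 / alpha, size=(n, K))  # private views
p_hat = Z.mean(axis=0)                 # unbiased for p
p_hat = np.clip(p_hat, 0.0, 1.0)       # project back to [0, 1]

F_hat = np.sum(p_hat ** gamma)
print(F_hat, np.sum(p ** gamma))       # estimate vs. truth
```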
The purpose of this paper is to provide a characterization of the error of the best polynomial approximation of composite functions in weighted spaces. Such a characterization is essential for the convergence analysis of numerical methods applied to non-linear problems, and for numerical approaches that use regularization techniques to remedy the low smoothness of the solution. This result is obtained through an estimate of the derivatives of composite functions in the weighted uniform norm.
Quantile treatment effects (QTEs) can characterize the potentially heterogeneous causal effect of a treatment on different points of the entire outcome distribution. Propensity score (PS) methods are commonly employed for estimating QTEs in non-randomized studies. Empirical and theoretical studies have shown that insufficient or unnecessary adjustment for covariates in PS models can lead to bias and efficiency loss in estimating treatment effects. Striking a balance between bias and efficiency through variable selection is a crucial concern in causal inference. It is essential to acknowledge that the covariates related to the treatment and outcome may vary across different quantiles of the outcome distribution. However, previous studies have overlooked adjusting for different covariates separately in the PS models when estimating different QTEs. In this article, we propose the quantile regression outcome-adaptive lasso (QROAL) method to select covariates that can provide unbiased and efficient estimates of QTEs. A distinctive feature of our proposed method is the use of linear quantile regression models to construct penalty weights, enabling covariate selection in PS models separately when estimating different QTEs. We conducted simulation studies to show the superiority of the proposed method over the outcome-adaptive lasso (OAL) method in variable selection. Moreover, the proposed method exhibited favorable performance compared with the OAL method in terms of root mean square error in a range of settings, including both homogeneous and heterogeneous scenarios. Additionally, we applied the QROAL method to datasets from the China Health and Retirement Longitudinal Study (CHARLS) to explore the impact of smoking status on the severity of depression symptoms.
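The following schematic (illustrative, not the authors' exact algorithm; the data-generating model and tuning constants are made up) shows the mechanics: quantile regression coefficients at a quantile $\tau$ supply adaptive-lasso weights for an L1-penalized propensity score model, implemented here via the standard column-rescaling trick.

```python
import numpy as np
import statsmodels.api as sm
from sklearn.linear_model import LogisticRegression

# Sketch of the QROAL idea: outcome-side quantile regression at quantile
# tau drives covariate selection in the propensity score (PS) model, so
# the selected adjustment set can differ across quantiles.

rng = np.random.default_rng(0)
n, d, tau = 2000, 6, 0.5
X = rng.standard_normal((n, d))
treat = rng.binomial(1, 1 / (1 + np.exp(-(X[:, 0] - 0.5 * X[:, 1]))))
y = treat + X[:, 0] + 0.5 * X[:, 2] + rng.standard_normal(n)

# Step 1: linear quantile regression outcome model at quantile tau.
qr = sm.QuantReg(y, sm.add_constant(X)).fit(q=tau)
beta = qr.params[1:]                        # drop the intercept

# Step 2: adaptive weights; covariates weakly related to the outcome at
# this quantile get large weights, hence heavier penalization in the PS
# model (floor avoids division by near-zero coefficients).
w = 1.0 / np.maximum(np.abs(beta), 1e-8)

# Step 3: L1-penalized PS model; adaptive lasso via column rescaling.
ps = LogisticRegression(penalty="l1", C=0.1, solver="liblinear")
ps.fit(X / w, treat)
selected = np.flatnonzero(ps.coef_.ravel() != 0)
print("covariates kept in the PS model at tau=%.2f:" % tau, selected)
```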