In this paper, we develop an accurate pseudospectral method to approximate numerically the Riesz-Feller operator $D_\gamma^\alpha$ on $\mathbb R$, where $\alpha\in(0,2)$ and $|\gamma|\le\min\{\alpha, 2 - \alpha\}$. This operator can be written as a linear combination of the Weyl-Marchaud derivatives $\mathcal{D}^{\alpha}$ and $\overline{\mathcal{D}^\alpha}$, when $\alpha\in(0,1)$, and of $\partial_x\mathcal{D}^{\alpha-1}$ and $\partial_x\overline{\mathcal{D}^{\alpha-1}}$, when $\alpha\in(1,2)$. Given the so-called Higgins functions $\lambda_k(x) = ((ix-1)/(ix+1))^k$, where $k\in\mathbb Z$, we compute explicitly, using complex-variable techniques, $\mathcal{D}^{\alpha}[\lambda_k](x)$, $\overline{\mathcal{D}^\alpha}[\lambda_k](x)$, $\partial_x\mathcal{D}^{\alpha-1}[\lambda_k](x)$, $\partial_x\overline{\mathcal{D}^{\alpha-1}}[\lambda_k](x)$ and $D_\gamma^\alpha[\lambda_k](x)$ in terms of the Gaussian hypergeometric function ${}_2F_1$, and relate these results to previous ones for the fractional Laplacian. This enables us to approximate $\mathcal{D}^{\alpha}[u](x)$, $\overline{\mathcal{D}^\alpha}[u](x)$, $\partial_x\mathcal{D}^{\alpha-1}[u](x)$, $\partial_x\overline{\mathcal{D}^{\alpha-1}}[u](x)$ and $D_\gamma^\alpha[u](x)$ for bounded continuous functions $u(x)$. Finally, we simulate a nonlinear Riesz-Feller fractional diffusion equation characterized by front-propagating solutions whose speed grows exponentially in time.
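As a hedged illustration of the pseudospectral idea above (not code from the paper): under the substitution $x=\tan(s/2)$ the Higgins functions become Fourier modes, $\lambda_k(\tan(s/2))=(-1)^k e^{-iks}$, so the coefficients of a bounded function $u$ in the Higgins basis can be computed with an FFT. The Python sketch below performs only this expansion and reconstruction; applying $D_\gamma^\alpha$ would additionally require the closed-form ${}_2F_1$ images of the $\lambda_k$ derived in the paper, which are not reproduced here. The test function and resolution are illustrative choices.

```python
import numpy as np

def higgins(k, x):
    """Higgins function lambda_k(x) = ((i*x - 1) / (i*x + 1))**k."""
    return ((1j * x - 1) / (1j * x + 1)) ** k

N = 256                                    # number of modes (illustrative)
s = 2 * np.pi * np.arange(N) / N - np.pi   # equispaced grid in s
x = np.tan(s / 2)                          # mapped collocation nodes on the real line

u = 1.0 / (1.0 + x ** 2)                   # bounded test function

# On this grid lambda_k(x_j) = exp(-2*pi*1j*k*j/N), so the expansion
# coefficients c_k with u = sum_k c_k * lambda_k come from an inverse FFT.
c = np.fft.fftshift(np.fft.ifft(u))
k = np.arange(-N // 2, N // 2)

u_rec = sum(c[j] * higgins(k[j], x) for j in range(N))
print(np.max(np.abs(u_rec - u)))           # tiny reconstruction error expected

# A full pseudospectral evaluation of D_gamma^alpha[u] would now replace each
# lambda_k by its closed-form 2F1 image and sum the series with the same c_k.
```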
The $L_{\infty}$ star discrepancy is a very well-studied measure used to quantify the uniformity of a point set's distribution. Constructing optimal point sets for this measure is seen as a very hard problem in the discrepancy community. Indeed, optimal point sets are, up to now, known only for $n\leq 6$ in dimension 2 and $n \leq 2$ in higher dimensions. In this paper, we introduce mathematical programming formulations to construct point sets with as low an $L_{\infty}$ star discrepancy as possible. First, we present two models to construct optimal sets and show that there always exist optimal sets with the property that no two points share a coordinate. Then, we provide possible extensions of our models to other measures, such as the extreme and periodic discrepancies. For the $L_{\infty}$ star discrepancy, we are able to compute optimal point sets for up to 21 points in dimension 2 and for up to 8 points in dimension 3. For $d=2$ and $n\ge 7$ points, these point sets have a roughly 50% lower discrepancy than the previously best known point sets and exhibit a very different structure.
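For readers who want to check discrepancy values of small candidate sets, the following Python sketch evaluates the $L_{\infty}$ star discrepancy by brute force, using the standard fact that the supremum over anchored boxes is attained on the grid spanned by the point coordinates (and 1). It is a simple $O(n^{d+1})$ check for tiny instances, not the mathematical programming models of the paper, and the example point set is purely illustrative.

```python
import itertools
import numpy as np

def linf_star_discrepancy(points):
    """Brute-force L_inf star discrepancy of points in [0,1]^d (O(n^(d+1)))."""
    P = np.asarray(points, dtype=float)
    n, d = P.shape
    # Critical anchored boxes [0, q) have corners q on the coordinate grid.
    grids = [np.unique(np.append(P[:, i], 1.0)) for i in range(d)]
    best = 0.0
    for q in itertools.product(*grids):
        q = np.array(q)
        vol = np.prod(q)
        n_open = np.sum(np.all(P < q, axis=1))     # points strictly inside [0, q)
        n_closed = np.sum(np.all(P <= q, axis=1))  # points inside the closed box
        best = max(best, vol - n_open / n, n_closed / n - vol)
    return best

# Illustrative 2-D example (not an optimal set from the paper).
pts = [[0.125, 0.625], [0.375, 0.125], [0.625, 0.875], [0.875, 0.375]]
print(linf_star_discrepancy(pts))
```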
We prove that the Minimum Distance Problem (MDP) on linear codes over any fixed finite field and parameterized by the input distance bound is W[1]-hard to approximate within any constant factor. We also prove analogous results for the parameterized Shortest Vector Problem (SVP) on integer lattices. Specifically, we prove that SVP in the $\ell_p$ norm is W[1]-hard to approximate within any constant factor for any fixed $p >1$ and W[1]-hard to approximate within a factor approaching $2$ for $p=1$. (We show hardness under randomized reductions in each case.) These results answer the main questions left open (and explicitly posed) by Bhattacharyya, Bonnet, Egri, Ghoshal, Karthik C. S., Lin, Manurangsi, and Marx (Journal of the ACM, 2021) on the complexity of parameterized MDP and SVP. For MDP, they established similar hardness for binary linear codes and left the case of general fields open. For SVP in $\ell_p$ norms with $p > 1$, they showed inapproximability within some constant factor (depending on $p$) and left open showing such hardness for arbitrary constant factors. They also left open showing W[1]-hardness even of exact SVP in the $\ell_1$ norm.
Depth estimation is a critical technology in autonomous driving, and multi-camera systems are often used to achieve 360$^\circ$ perception. These 360$^\circ$ camera sets often have limited or low-quality overlap regions, making multi-view stereo methods infeasible for the entire image, while monocular methods may not produce consistent cross-view predictions. To address these issues, we propose the Stereo Guided Depth Estimation (SGDE) method, which enhances depth estimation of the full image by explicitly utilizing multi-view stereo results on the overlap. We suggest building virtual pinhole cameras to resolve the distortion problem of fisheye cameras and to unify the processing for the two types of 360$^\circ$ cameras. To handle the varying noise on camera poses caused by unstable movement, we employ a self-calibration method that obtains highly accurate relative poses of adjacent cameras with minor overlap. Together, these enable the use of robust stereo methods to obtain a high-quality depth prior in the overlap region. This prior serves not only as an additional input but also as pseudo-labels that enhance the accuracy of depth estimation methods and improve cross-view prediction consistency. The effectiveness of SGDE is evaluated on one fisheye camera dataset, Synthetic Urban, and two pinhole camera datasets, DDAD and nuScenes. Our experiments demonstrate that SGDE is effective for both supervised and self-supervised depth estimation, and highlight the potential of our method for advancing downstream autonomous driving technologies such as 3D object detection and occupancy prediction.
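A minimal sketch of the "virtual pinhole camera" step, assuming OpenCV's fisheye camera model: a fisheye image is remapped onto a chosen pinhole intrinsic matrix and rotation so that standard rectified-stereo matching can be run on the overlap. All intrinsics, distortion coefficients, and file names below are placeholders, and the rest of the SGDE pipeline (self-calibration, stereo prior, pseudo-labels) is not reproduced.

```python
import cv2
import numpy as np

# Placeholder fisheye intrinsics K, distortion D (4 coefficients in OpenCV's
# fisheye model), and rotation R toward the overlap region.
K = np.array([[350.0, 0.0, 640.0],
              [0.0, 350.0, 400.0],
              [0.0, 0.0, 1.0]])
D = np.array([[0.05], [-0.01], [0.002], [-0.0005]])   # illustrative values
R = np.eye(3)

# Intrinsics of the virtual pinhole camera we remap onto.
P_virtual = np.array([[400.0, 0.0, 512.0],
                      [0.0, 400.0, 256.0],
                      [0.0, 0.0, 1.0]])
out_size = (1024, 512)                                # (width, height)

map1, map2 = cv2.fisheye.initUndistortRectifyMap(
    K, D, R, P_virtual, out_size, cv2.CV_16SC2)

fisheye_img = cv2.imread("fisheye.png")               # placeholder input image
pinhole_img = cv2.remap(fisheye_img, map1, map2, interpolation=cv2.INTER_LINEAR)
# pinhole_img can now be fed to a standard rectified-stereo matcher on the
# overlap region shared with the neighbouring camera.
```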
Given the growing significance of reliable, trustworthy, and explainable machine learning, the need for uncertainty quantification in anomaly detection systems has become increasingly important. In this context, effectively controlling Type I error rates ($\alpha$) without compromising the statistical power ($1-\beta$) of these systems can build trust and reduce costs related to false discoveries, particularly when follow-up procedures are expensive. Leveraging the principles of conformal prediction emerges as a promising approach for providing such statistical guarantees by calibrating a model's uncertainty. This work introduces a novel framework for anomaly detection, termed cross-conformal anomaly detection, building upon well-known cross-conformal methods designed for prediction tasks. In doing so, it addresses a natural research gap by extending previous works on inductive conformal anomaly detection, which rely on the split-conformal approach for model calibration. Drawing on insights from conformal prediction, we demonstrate that the derived methods for calculating cross-conformal $p$-values strike a practical compromise between statistical efficiency (full-conformal) and computational efficiency (split-conformal) for uncertainty-quantified anomaly detection on benchmark datasets.
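A hedged sketch of the cross-conformal construction in an anomaly-detection setting (the paper's exact procedure may differ): the training data are split into $K$ folds, a detector trained on the remaining folds provides nonconformity scores for each calibration fold, and the fold-wise rank counts are pooled into a single $p$-value per test point. IsolationForest is used here only as an assumed stand-in for an arbitrary anomaly score.

```python
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.model_selection import KFold

def cross_conformal_pvalues(X_train, X_test, n_folds=5, seed=0):
    """Cross-conformal p-values for anomaly detection (sketch).

    Larger nonconformity score = more anomalous; a small p-value rejects
    the null hypothesis that the test point is 'normal'.
    """
    X_train, X_test = np.asarray(X_train), np.asarray(X_test)
    n = len(X_train)
    counts = np.zeros(len(X_test))
    for fit_idx, cal_idx in KFold(n_folds, shuffle=True, random_state=seed).split(X_train):
        model = IsolationForest(random_state=seed).fit(X_train[fit_idx])
        cal_scores = -model.score_samples(X_train[cal_idx])   # nonconformity scores
        test_scores = -model.score_samples(X_test)
        # Count calibration points that look at least as anomalous as each test point.
        counts += (cal_scores[None, :] >= test_scores[:, None]).sum(axis=1)
    return (1.0 + counts) / (n + 1.0)

# Usage sketch: flag anomalies at level alpha = 0.05.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 2))                 # 'normal' training data
X_test = np.vstack([rng.normal(size=(5, 2)), rng.normal(4.0, 1.0, size=(5, 2))])
print(cross_conformal_pvalues(X_train, X_test) < 0.05)
```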
We consider the reinforcement learning problem for the constrained Markov decision process (CMDP), which plays a central role in satisfying safety or resource constraints in sequential learning and decision-making. In this problem, we are given finite resources and an MDP with unknown transition probabilities. At each stage, we take an action, collecting a reward and consuming some resources, all of which are unknown and must be learned over time. In this work, we take the first step towards deriving optimal problem-dependent guarantees for CMDP problems. We derive a logarithmic regret bound, which translates into an $O(\frac{\kappa}{\epsilon}\cdot\log^2(1/\epsilon))$ sample complexity bound, with $\kappa$ being a problem-dependent parameter, yet independent of $\epsilon$. Our sample complexity bound improves upon the state-of-the-art $O(1/\epsilon^2)$ sample complexity for CMDP problems established in the previous literature, in terms of the dependency on $\epsilon$. To achieve this advance, we develop a new framework for analyzing CMDP problems. Specifically, our algorithm operates in the primal space, and we resolve the primal LP for the CMDP problem at each period in an online manner, with \textit{adaptive} remaining resource capacities. The key elements of our algorithm are: (i) an elimination procedure that characterizes one optimal basis of the primal LP, and (ii) a resolving procedure that is adaptive to the remaining resources and sticks to the characterized optimal basis.
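To make the primal LP concrete, here is a hedged sketch (not the paper's algorithm, and without the elimination step) of the occupancy-measure LP for a finite-horizon CMDP with a single resource constraint, solved from estimated model quantities; in an online resolving scheme it would be re-solved each period with the current state, the remaining horizon, and the remaining budget. All inputs are assumed to be supplied by the learner's estimates.

```python
import numpy as np
from scipy.optimize import linprog

def solve_cmdp_lp(P, r, c, horizon, s0, budget):
    """Occupancy-measure LP for a finite-horizon CMDP (illustrative sketch).

    P[s, a, s2] : estimated transition probabilities
    r[s, a]     : estimated rewards,   c[s, a] : estimated resource costs
    s0          : current state,       budget  : remaining resource capacity
    Returns the LP value and the first-period occupancy measure q0[s, a].
    """
    S, A = r.shape
    H = horizon

    def idx(t, s, a):
        return (t * S + s) * A + a

    nvar = H * S * A

    # Flow constraints on the occupancy measures q_t(s, a).
    A_eq = np.zeros((H * S, nvar))
    b_eq = np.zeros(H * S)
    for s in range(S):
        A_eq[s, [idx(0, s, a) for a in range(A)]] = 1.0
    b_eq[s0] = 1.0                      # all mass starts in the current state
    for t in range(1, H):
        for s2 in range(S):
            row = t * S + s2
            A_eq[row, [idx(t, s2, a) for a in range(A)]] = 1.0
            for s in range(S):
                for a in range(A):
                    A_eq[row, idx(t - 1, s, a)] -= P[s, a, s2]

    # Single resource constraint with the remaining budget.
    A_ub = np.tile(c.ravel(), H)[None, :]
    b_ub = np.array([budget])

    obj = -np.tile(r.ravel(), H)        # linprog minimizes, so negate rewards
    res = linprog(obj, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=(0, None), method="highs")
    # (Assumes feasibility; error handling omitted in this sketch.)
    return -res.fun, res.x[:S * A].reshape(S, A)

# In an online resolving scheme, solve_cmdp_lp would be called every period
# with the observed state, the remaining horizon, and the remaining budget.
```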
We consider the problem of the exact computation of the marginal eigenvalue distributions in the Laguerre and Jacobi $\beta$ ensembles. In the case $\beta=1$ this is a question of long standing in the mathematical statistics literature. A recursive procedure to accomplish this task is given for $\beta$ a positive integer and the parameter $\lambda_1$ a non-negative integer. This case is special due to a finite basis of elementary functions, with coefficients which are polynomials. In the Laguerre case with $\beta = 1$ and $\lambda_1 + 1/2$ a non-negative integer, some evidence is given of there again being a finite basis, now consisting of elementary functions and the error function multiplied by elementary functions. Moreover, from this the corresponding distributions in the fixed trace case permit a finite basis of power functions, as they also do for $\lambda_1$ a non-negative integer. The fixed trace case in this setting is relevant to quantum information theory and quantum transport problems, allowing in particular the exact determination of Landauer conductance distributions in a previously intractable parameter regime. Our findings also aid in analyzing zeros of the generating function for specific gap probabilities, supporting the validity of an associated large $N$ local central limit theorem.
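Exact marginal distributions of the kind discussed here can be cross-checked numerically by Monte Carlo. The sketch below samples the Laguerre $\beta$ ensemble via the Dumitriu-Edelman bidiagonal matrix model (a standard construction, stated here under the parameterization we recall, so treat the exponent bookkeeping as an assumption) and estimates the empirical distribution of the smallest eigenvalue.

```python
import numpy as np

def beta_laguerre_eigs(n, beta, a, rng):
    """Sample eigenvalues of the beta-Laguerre ensemble via the
    Dumitriu-Edelman bidiagonal model (requires a > beta*(n-1)/2)."""
    B = np.zeros((n, n))
    for i in range(n):
        B[i, i] = np.sqrt(rng.chisquare(2 * a - beta * i))
        if i < n - 1:
            B[i + 1, i] = np.sqrt(rng.chisquare(beta * (n - 1 - i)))
    return np.linalg.eigvalsh(B @ B.T)

# Monte Carlo estimate of the smallest-eigenvalue distribution, against which
# exact recursive formulas of the kind discussed above could be checked.
rng = np.random.default_rng(0)
n, beta, a = 4, 2, 6.0
samples = np.array([beta_laguerre_eigs(n, beta, a, rng).min() for _ in range(20000)])
x = 1.0
print(np.mean(samples <= x))   # empirical P(lambda_min <= x)
```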
Geometric matching is an important topic in computational geometry and has been extensively studied for decades. In this paper, we study a geometric-matching problem known as geometric many-to-many matching. In this problem, the input is a set $S$ of $n$ colored points in $\mathbb{R}^d$, which implicitly defines a graph $G = (S,E(S))$ where $E(S) = \{(p,q): p,q \in S \text{ have different colors}\}$, and the goal is to compute a minimum-cost subset $E^* \subseteq E(S)$ of edges that cover all points in $S$. Here the cost of $E^*$ is the sum of the costs of all edges in $E^*$, where the cost of a single edge $e$ is the Euclidean distance (or more generally, the $L_p$-distance) between the two endpoints of $e$. Our main result is a $(1+\varepsilon)$-approximation algorithm with an optimal running time of $O_\varepsilon(n \log n)$ for geometric many-to-many matching in any fixed dimension, which works under any $L_p$-norm. This is the first near-linear approximation scheme for the problem in any dimension $d \geq 2$. Prior to this work, only the bipartite case of geometric many-to-many matching had been considered, in $\mathbb{R}^1$ and $\mathbb{R}^2$, and the best known approximation scheme in $\mathbb{R}^2$ takes $O_\varepsilon(n^{1.5} \cdot \mathsf{poly}(\log n))$ time.
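For very small instances, the problem can be written directly as an integer program: pick a minimum-cost set of bichromatic edges such that every point is covered. The sketch below does exactly that with SciPy's MILP interface; it is a brute-force baseline for sanity checks, not the near-linear $(1+\varepsilon)$-approximation scheme of the paper, and the example instance is illustrative.

```python
import itertools
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

def many_to_many_matching(points, colors, p_norm=2):
    """Exact (tiny-instance) ILP for geometric many-to-many matching:
    choose a minimum-cost set of bichromatic edges covering every point."""
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    edges = [(i, j) for i, j in itertools.combinations(range(n), 2)
             if colors[i] != colors[j]]
    w = np.array([np.linalg.norm(pts[i] - pts[j], ord=p_norm) for i, j in edges])

    # Covering constraints: every point is an endpoint of >= 1 chosen edge.
    A = np.zeros((n, len(edges)))
    for e, (i, j) in enumerate(edges):
        A[i, e] = A[j, e] = 1.0
    res = milp(c=w,
               constraints=LinearConstraint(A, lb=np.ones(n), ub=np.inf),
               integrality=np.ones(len(edges)),
               bounds=Bounds(0, 1))
    chosen = [edges[e] for e in range(len(edges)) if res.x[e] > 0.5]
    return res.fun, chosen

# Tiny illustrative instance in R^2 with two colors.
pts = [(0.0, 0.0), (1.0, 0.1), (2.0, 0.0), (0.1, 1.0), (2.1, 1.0)]
cols = ["red", "blue", "red", "blue", "blue"]
print(many_to_many_matching(pts, cols))
```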
We present a KE-tableau-based procedure for the main TBox and ABox reasoning tasks for the description logic $\mathcal{DL}\langle \mathsf{4LQS^{R,\!\times}}\rangle(\mathbf{D})$, in short $\mathcal{DL}_{\mathbf{D}}^{4,\!\times}$. The logic $\mathcal{DL}_{\mathbf{D}}^{4,\!\times}$, representable in the decidable multi-sorted quantified set-theoretic fragment $\mathsf{4LQS^R}$, combines the high scalability and efficiency of rule languages such as the Semantic Web Rule Language (SWRL) with the expressivity of description logics. Our algorithm is based on a variant of the KE-tableau system for sets of universally quantified clauses, where the KE-elimination rule is generalized in such a way as to incorporate the $\gamma$-rule. The novel system, called KE$^\gamma$-tableau, turns out to be an improvement of the system introduced in \cite{RR2017} and of the standard first-order KE-tableau \cite{dagostino94}. Suitable benchmark test sets executed on C++ implementations of the three mentioned systems show that the performance of the KE$^\gamma$-tableau-based reasoner is often up to about 400% better than that of the other two systems. This is a first step towards the construction of efficient reasoners for expressive OWL ontologies based on fragments of computable set theory.
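For intuition about the KE style of proof search (though far simpler than the first-order KE$^\gamma$-tableau above, which also handles the $\gamma$-rule), here is a hedged propositional sketch: the elimination rule acts like unit propagation on clause sets, and the principle of bivalence (PB) is the only branching rule. The clause encoding and function names are illustrative.

```python
def ke_satisfiable(clauses, assignment=frozenset()):
    """Tiny propositional KE-style satisfiability test (illustration only).

    Clauses are sets of integer literals (negative = negated literal).
    The elimination rule behaves like unit propagation; PB (principle of
    bivalence) is the single branching rule, mirroring the KE calculus.
    """
    simplified = []
    for clause in clauses:
        if clause & assignment:
            continue                                    # clause already satisfied
        rest = frozenset(l for l in clause if -l not in assignment)
        if not rest:
            return False                                # branch closes
        simplified.append(rest)
    if not simplified:
        return True                                     # open, saturated branch
    for clause in simplified:                           # E-rule on unit clauses
        if len(clause) == 1:
            (lit,) = clause
            return ke_satisfiable(simplified, assignment | {lit})
    lit = next(iter(simplified[0]))                     # PB: branch on a literal
    return (ke_satisfiable(simplified, assignment | {lit})
            or ke_satisfiable(simplified, assignment | {-lit}))

# Example: (p or q), (not p or q), (not q or r) is satisfiable.
print(ke_satisfiable([{1, 2}, {-1, 2}, {-2, 3}]))       # True
```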
We present an ongoing implementation of a KE-tableau-based reasoner for a decidable fragment of stratified elementary set theory expressing the description logic $\mathcal{DL}\langle \mathsf{4LQS^{R,\!\times}}\rangle(\mathbf{D})$ ($\mathcal{DL}_{\mathbf{D}}^{4,\!\times}$, for short). The reasoner checks the consistency of $\mathcal{DL}_{\mathbf{D}}^{4,\!\times}$-knowledge bases (KBs) represented in set-theoretic terms. It is implemented in \textsf{C++} and supports $\mathcal{DL}_{\mathbf{D}}^{4,\!\times}$-KBs serialized in the OWL/XML format. To the best of our knowledge, this is the first attempt to implement, via a fragment of set theory, a reasoner for the consistency checking of a description logic that can also classify standard OWL ontologies.
We present a KE-tableau-based implementation of a reasoner for a decidable fragment of (stratified) set theory expressing the description logic $\mathcal{DL}\langle \mathsf{4LQS^{R,\!\times}}\rangle(\mathbf{D})$ ($\mathcal{DL}_{\mathbf{D}}^{4,\!\times}$, for short). Our application solves the main TBox and ABox reasoning problems for $\mathcal{DL}_{\mathbf{D}}^{4,\!\times}$. In particular, it solves the consistency problem for $\mathcal{DL}_{\mathbf{D}}^{4,\!\times}$-knowledge bases represented in set-theoretic terms, and a generalization of the \emph{Conjunctive Query Answering} problem in which conjunctive queries with variables of three sorts are admitted. The reasoner, which extends and optimizes a previous prototype for the consistency checking of $\mathcal{DL}_{\mathbf{D}}^{4,\!\times}$-knowledge bases (see \cite{cilc17}), is implemented in \textsf{C++}. It supports $\mathcal{DL}_{\mathbf{D}}^{4,\!\times}$-knowledge bases serialized in the OWL/XML format, and it also admits rules expressed in SWRL (Semantic Web Rule Language).