An expansion procedure using Chebyshev polynomials of the third kind as basis functions is proposed for solving Volterra integral equations of the second kind with logarithmic kernels. The convergence of the algorithm is studied, and illustrative examples are presented to demonstrate the method's efficiency and reliability; comparisons with other methods from the literature are also made.
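For reference, the basis functions involved are the third-kind Chebyshev polynomials $V_n$, which satisfy $V_0(x)=1$, $V_1(x)=2x-1$, and the three-term recurrence $V_{n+1}(x)=2xV_n(x)-V_{n-1}(x)$. A minimal sketch of their evaluation follows; the paper's expansion scheme itself is not reproduced, and the function name is illustrative:

```python
def cheb_third_kind(n, x):
    """Evaluate the third-kind Chebyshev polynomial V_n(x)
    via the three-term recurrence
    V_0 = 1, V_1 = 2x - 1, V_{n+1} = 2x V_n - V_{n-1}."""
    if n == 0:
        return 1.0
    prev, curr = 1.0, 2.0 * x - 1.0
    for _ in range(1, n):
        prev, curr = curr, 2.0 * x * curr - prev
    return curr
```

A useful sanity check is that $V_n(1)=1$ for all $n$, which follows immediately from the recurrence.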
Economic models may be incomplete depending on whether they admit certain policy-relevant features such as strategic interaction, self-selection, or state dependence. We develop a novel test of model incompleteness and analyze its asymptotic properties. A key observation is that one can identify the least-favorable parametric model, which represents the most challenging scenario for detecting local alternatives, without knowledge of the selection mechanism. We build a robust test of incompleteness on a score function constructed from such a model. The proposed procedure remains computationally tractable even with nuisance parameters because it suffices to estimate them only under the null hypothesis of model completeness. We illustrate the test by applying it to a market entry model and a triangular model with a set-valued control function.
We study the problem of estimating mixtures of Gaussians under the constraint of differential privacy (DP). Our main result is that $\tilde{O}(k^2 d^4 \log(1/\delta) / \alpha^2 \varepsilon)$ samples are sufficient to estimate a mixture of $k$ Gaussians up to total variation distance $\alpha$ while satisfying $(\varepsilon, \delta)$-DP. This is the first finite sample complexity upper bound for the problem that does not make any structural assumptions on the GMMs. To solve the problem, we devise a new framework that may be useful for other tasks. At a high level, we show that if a class of distributions (such as Gaussians) is (1) list decodable and (2) admits a "locally small" cover [BKSW19] with respect to total variation distance, then the class of its mixtures is privately learnable. The proof circumvents a known barrier indicating that, unlike Gaussians, GMMs do not admit a locally small cover [AAL21].
Spherical polygons used in practice are generally well behaved, yet the spherical point-in-polygon problem (SPiP) has long eluded solutions based on the winding number (wn); the fact that a punctured sphere is simply connected is to blame. As a workaround, we prove that requiring the boundary of a spherical polygon never to intersect its antipode is sufficient to reduce its SPiP problem to the planar point-in-polygon (PiP) problem, whose state-of-the-art solution uses wn and does not rely on known interior points (KIP). We refer to such spherical polygons as boundary antipode-excluding (BAE) and show that every spherical polygon fully contained within an open hemisphere is BAE. We document two successful reduction methods, one based on rotation and the other on shearing, and address a common concern. Both reduction algorithms, when combined with a wn-based PiP algorithm, solve SPiP correctly and efficiently for BAE spherical polygons. The MATLAB code provided demonstrates scenarios that are problematic for previous work.
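The planar wn-based PiP step to which the problem is reduced can be sketched as follows. This is the textbook crossing-based winding-number algorithm, not the authors' MATLAB implementation, and all names are illustrative:

```python
def is_left(x0, y0, x1, y1, px, py):
    # Cross product: > 0 iff (px, py) lies left of the directed edge
    # from (x0, y0) to (x1, y1).
    return (x1 - x0) * (py - y0) - (px - x0) * (y1 - y0)

def winding_number(point, polygon):
    """Winding number of a closed polygon (list of (x, y) vertices,
    in order) around `point`; nonzero means the point is inside."""
    px, py = point
    wn = 0
    n = len(polygon)
    for i in range(n):
        x0, y0 = polygon[i]
        x1, y1 = polygon[(i + 1) % n]
        if y0 <= py:
            # upward crossing with the point strictly left of the edge
            if y1 > py and is_left(x0, y0, x1, y1, px, py) > 0:
                wn += 1
        # downward crossing with the point strictly right of the edge
        elif y1 <= py and is_left(x0, y0, x1, y1, px, py) < 0:
            wn -= 1
    return wn
```

For example, for the unit square the winding number is 1 at an interior point and 0 at an exterior one; note that, unlike ray-crossing parity tests, this formulation handles self-intersecting boundaries by counting signed turns.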
In this paper we consider the filtering of a class of partially observed piecewise deterministic Markov processes (PDMPs). In particular, we assume that an ordinary differential equation (ODE) drives the deterministic element and can only be solved numerically via a time discretization. We develop, based upon the approach in [20], a new particle filter and multilevel particle filter (MLPF) in order to approximate the filter associated with the discretized ODE. We provide a bound on the mean square error associated with the MLPF, which provides guidance on setting the simulation parameters of the algorithm and implies that significant computational gains can be obtained versus using a particle filter. Our theoretical claims are confirmed in several numerical examples.
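At a single level, the idea can be sketched as a bootstrap particle filter whose state is propagated through an Euler discretization of the ODE. The multilevel coupling and the specific PDMP model of the paper are not reproduced; the model (scalar state, Gaussian observations) and all names and parameters below are illustrative assumptions:

```python
import math
import random

def euler_step(x, dt, drift):
    """One Euler step for the ODE x' = drift(x)."""
    return x + dt * drift(x)

def bootstrap_pf(observations, n_particles, dt, steps_per_obs,
                 drift, obs_sd, init_sampler):
    """Single-level bootstrap particle filter for a state driven by a
    time-discretized ODE, observed with Gaussian noise of sd obs_sd.
    Returns the filtering means at each observation time."""
    particles = [init_sampler() for _ in range(n_particles)]
    means = []
    for y in observations:
        # Propagate each particle through the discretized dynamics.
        for _ in range(steps_per_obs):
            particles = [euler_step(x, dt, drift) for x in particles]
        # Weight by the Gaussian observation density (up to a constant).
        weights = [math.exp(-0.5 * ((y - x) / obs_sd) ** 2)
                   for x in particles]
        total = sum(weights)
        weights = [w / total for w in weights]
        means.append(sum(w * x for w, x in zip(weights, particles)))
        # Multinomial resampling.
        particles = random.choices(particles, weights=weights,
                                   k=n_particles)
    return means
```

In an MLPF, filters at coarse and fine discretization levels would be coupled and combined in a telescoping sum; that machinery is omitted here.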
Graph representations are the generalization of geometric graph drawings from the plane to higher dimensions. A method introduced by Tutte to optimize properties of graph drawings is to minimize their energy. We explore this minimization for spherical graph representations, where the vertices lie on a unit sphere such that the origin is their barycentre. We present a primal and dual semidefinite program which can be used to find such a spherical graph representation minimizing the energy. We denote the optimal value of this program by $\rho(G)$ for a given graph $G$. The value turns out to be related to the second largest eigenvalue of the adjacency matrix of $G$, which we denote by $\lambda_2$. We show that for $G$ regular, $\rho(G) \leq \frac{\lambda_{2}}{2} \cdot v(G)$, and that equality holds if and only if the $\lambda_{2}$ eigenspace contains a spherical 1-design. Moreover, if $G$ is a random $d$-regular graph, $\rho(G)=\left(\sqrt{d-1} +o(1)\right)\cdot v(G)$, asymptotically almost surely.
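The spectral quantity in the bound can be checked numerically. The sketch below, a hypothetical illustration rather than anything from the paper, computes $\lambda_2$ for a cycle (a 2-regular graph, so the bound $\rho(G) \leq \frac{\lambda_2}{2} \cdot v(G)$ applies) using NumPy; the SDP itself is not reproduced:

```python
import numpy as np

def adjacency_cycle(n):
    """Adjacency matrix of the cycle C_n, a 2-regular graph
    whose eigenvalues are 2*cos(2*pi*k/n), k = 0, ..., n-1."""
    A = np.zeros((n, n))
    for i in range(n):
        A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
    return A

n = 12
A = adjacency_cycle(n)
eigs = np.sort(np.linalg.eigvalsh(A))[::-1]  # descending
lambda2 = eigs[1]                   # second-largest adjacency eigenvalue
upper_bound = lambda2 / 2.0 * n     # rho(G) <= (lambda_2 / 2) * v(G)
```

For $C_{12}$ one gets $\lambda_2 = 2\cos(2\pi/12) = \sqrt{3}$, consistent with the known spectrum of cycles.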
Thermodynamic equations of state (EOS) are essential for many industries as well as in academia. Even leaving aside the expensive and extensive measurement campaigns required for data acquisition, the development of EOS is an intensely time-consuming process, which often still relies heavily on expert knowledge and iterative fine-tuning. To improve upon and accelerate the EOS development process, we introduce thermodynamics-informed symbolic regression (TiSR), a symbolic regression (SR) tool aimed at thermodynamic EOS modeling. TiSR is already a capable SR tool and was used in the research reported in //doi.org/10.1007/s10765-023-03197-z. It aims to combine an SR base with the extensions required to work with often strongly scattered experimental data, different residual pre- and post-processing options, and additional features required for thermodynamic EOS development. Although TiSR is not ready for end users yet, this paper is intended to report on its current state, showcase the progress, and discuss (distant and not so distant) future directions. TiSR is available at //github.com/scoop-group/TiSR and can be cited as //doi.org/10.5281/zenodo.8317547.
We provide a method, based on automata theory, to mechanically prove the correctness of many numeration systems based on Fibonacci numbers. With it, long case-based and induction-based proofs of correctness can be replaced by simply constructing a regular expression (or finite automaton) specifying the rules for valid representations, followed by a short computation. Examples of the systems that can be handled using our technique include Brown's lazy representation (1965), the far-difference representation developed by Alpert (2009), and three representations proposed by Hajnal (2023). We also provide three additional systems and prove their validity.
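As background on the kind of system involved, the classical Zeckendorf representation writes each positive integer as a sum of non-adjacent Fibonacci numbers, and its validity condition, no two adjacent 1s in the digit string, is a regular language recognizable by a two-state automaton. A minimal Python sketch of the greedy encoding and the regular check (not the authors' mechanized proofs) follows:

```python
def zeckendorf(n):
    """Greedy Zeckendorf representation of n >= 1 as a bit string over
    the Fibonacci numbers 1, 2, 3, 5, 8, ... (most significant first)."""
    fibs = [1, 2]
    while fibs[-1] <= n:
        fibs.append(fibs[-1] + fibs[-2])
    bits = []
    for f in reversed(fibs[:-1]):
        if f <= n:
            bits.append("1")
            n -= f
        else:
            bits.append("0")
    return "".join(bits).lstrip("0") or "0"

def is_valid(bits):
    # Regular-language validity condition for Zeckendorf strings:
    # no two adjacent 1s (checkable by a two-state automaton).
    return "11" not in bits
```

For example, `zeckendorf(11)` yields `"10100"`, i.e. $11 = 8 + 3$; proving that the greedy rule always produces a string in this regular language is exactly the kind of statement the automata-theoretic method can verify mechanically.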
Unmeasured confounding bias is among the largest threats to the validity of observational studies. Although sensitivity analyses and various study designs have been proposed to address this issue, they do not leverage the growing availability of auxiliary data accessible through open data platforms. The use of negative controls has been introduced in the causal inference literature as a promising approach to account for unmeasured confounding bias. In this paper, we develop a Bayesian nonparametric method to estimate a causal exposure-response function (CERF). This estimation method effectively utilizes auxiliary information from negative control variables to fully adjust for unmeasured confounding. We model the CERF as a mixture of linear models. This strategy offers the dual advantage of capturing the potential nonlinear shape of CERFs while maintaining computational efficiency, and it leverages closed-form results that hold under the linear model assumption. We assess the performance of our method through simulation studies. The results demonstrate the method's ability to accurately recover the true shape of the CERF in the presence of unmeasured confounding. To showcase the practical utility of our approach, we apply it to adjust for a potential unmeasured confounder when evaluating the relationship between long-term exposure to ambient $PM_{2.5}$ and cardiovascular hospitalization rates among the elderly in the continental U.S. We implement our estimation procedure in open-source software to ensure transparency and reproducibility and make our code publicly available.
Artificial Intelligence for IT Operations (AIOps) leverages AI approaches to handle the massive amount of data generated during the operations of software systems. Prior works have proposed various AIOps solutions to support different tasks in system operations and maintenance, such as anomaly detection. In this study, we conduct an in-depth analysis of open-source AIOps projects to understand the characteristics of AIOps in practice. We first carefully identify a set of AIOps projects from GitHub and analyze their repository metrics (e.g., the programming languages used). Then, we qualitatively examine the projects to understand their input data, analysis techniques, and goals. Finally, we assess the quality of these projects using different quality metrics, such as the number of bugs. To provide context, we also sample two sets of baseline projects from GitHub: a random sample of machine learning projects and a random sample of general-purpose projects. By comparing different metrics between our identified AIOps projects and these baselines, we derive meaningful insights. Our results reveal a recent and growing interest in AIOps solutions. However, the quality metrics indicate that AIOps projects suffer from more issues than our baseline projects. We also pinpoint the most common issues in AIOps approaches and discuss potential solutions to address these challenges. Our findings offer valuable guidance to researchers and practitioners, enabling them to comprehend the current state of AIOps practices and shedding light on different ways of improving AIOps' weaker aspects. To the best of our knowledge, this work marks the first attempt to characterize open-source AIOps projects.
The dominant NLP paradigm of training a strong neural predictor to perform one task on a specific dataset has led to state-of-the-art performance in a variety of applications (e.g., sentiment classification, span-prediction-based question answering, or machine translation). However, it builds upon the assumption that the data distribution is stationary, i.e., that the data is sampled from a fixed distribution both at training and test time. This way of training is inconsistent with how we as humans are able to learn from and operate within a constantly changing stream of information. Moreover, it is ill-adapted to real-world use cases where the data distribution is expected to shift over the course of a model's lifetime. The first goal of this thesis is to characterize the different forms this shift can take in the context of natural language processing, and to propose benchmarks and evaluation metrics to measure its effect on current deep learning architectures. We then proceed to take steps to mitigate the effect of distributional shift on NLP models. To this end, we develop methods based on parametric reformulations of the distributionally robust optimization framework. Empirically, we show that these approaches yield more robust models on a selection of realistic problems. In the third and final part of this thesis, we explore ways of efficiently adapting existing models to new domains or tasks. Our contribution to this topic takes inspiration from information geometry to derive a new gradient update rule which alleviates catastrophic forgetting issues during adaptation.