We investigate the computational properties of basic mathematical notions pertaining to $\mathbb{R}\rightarrow \mathbb{R}$-functions and subsets of $\mathbb{R}$, such as finiteness, countability, (absolute) continuity, bounded variation, suprema, and regularity. We work in higher-order computability theory based on Kleene's S1-S9 schemes. We show that the aforementioned properties give rise to two huge and robust classes of computationally equivalent operations, the latter based on well-known theorems from the mainstream mathematics literature. As part of this endeavour, we develop an equivalent $\lambda$-calculus formulation of S1-S9 that accommodates partial objects. We show that the latter are essential to our enterprise via the study of countably based and partial functionals of type $3$.
An $r$-quasiplanar graph is a graph drawn in the plane with no $r$ pairwise crossing edges. Let $s \geq 3$ be an integer and $r=2^s$. We prove that there is a constant $C$ such that every $r$-quasiplanar graph with $n \geq r$ vertices has at most $n\left(Cs^{-1}\log n\right)^{2s-4}$ edges. A graph whose vertices are continuous curves in the plane, two being connected by an edge if and only if they intersect, is called a string graph. We show that for every $\epsilon>0$, there exists $\delta>0$ such that every string graph with $n$ vertices whose chromatic number is at least $n^{\epsilon}$ contains a clique of size at least $n^{\delta}$. A clique of this size, or a coloring using fewer than $n^{\epsilon}$ colors, can be found by a polynomial-time algorithm in terms of the size of the geometric representation of the set of strings. In the process, we use, generalize, and strengthen previous results of Lee, Tomon, and others. All of our theorems are related to geometric variants of the following classical graph-theoretic problem of Erdős, Gallai, and Rogers: given a $K_r$-free graph on $n$ vertices and an integer $s<r$, at least how many vertices can we find such that the subgraph induced by them is $K_s$-free?
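For illustration, at the smallest admissible value $s=3$ (so $r=2^3=8$), the exponent is $2s-4=2$, and the theorem gives that every $8$-quasiplanar graph on $n \geq 8$ vertices has at most $n\left(\frac{C}{3}\log n\right)^{2}=O\left(n\log^{2} n\right)$ edges, a near-linear bound.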
In this paper, we consider recent progress in estimating the average treatment effect when extreme inverse probability weights are present, focusing on methods that account for a possible violation of the positivity assumption. These methods aim to estimate the treatment effect on the subpopulation of patients for whom there is clinical equipoise. We propose a systematic approach to determining their related causal estimands and develop new insights into the properties of the weights targeting such a subpopulation. Then, we examine the roles of overlap weights, matching weights, Shannon's entropy weights, and beta weights. This helps us characterize and compare their underlying estimators, analytically and via simulations, in terms of accuracy, precision, and root mean squared error. Moreover, we study the asymptotic behavior of their augmented estimators (which mimic doubly robust estimators), which yield improved estimation when either the propensity score model or the outcome regression model is correctly specified. Based on the analytical and simulation results, we conclude that, overall, overlap weights are preferable to matching weights, especially when there are moderate or extreme violations of the positivity assumption. Finally, we illustrate the methods using a real data example marked by extreme inverse probability weights.
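As a concrete illustration, the sketch below codes the tilting functions commonly associated with these four weighting schemes, together with the Hajek-style weighted estimator they induce. It is a minimal sketch using standard parametrisations from the balancing-weights literature; the beta-family exponents and all function names are illustrative choices, not taken from the paper.

```python
import numpy as np

def tilting(e, kind="overlap", nu=(2.0, 2.0)):
    """Tilting function h(e) defining the target subpopulation, 0 < e < 1.
    'overlap'  : h = e(1-e)
    'matching' : h = min(e, 1-e)
    'entropy'  : h = -(e log e + (1-e) log(1-e))
    'beta'     : h = e^{nu1-1} (1-e)^{nu2-1}  (illustrative exponents;
                 nu1 = nu2 = 2 recovers the overlap weights)"""
    if kind == "overlap":
        return e * (1 - e)
    if kind == "matching":
        return np.minimum(e, 1 - e)
    if kind == "entropy":
        return -(e * np.log(e) + (1 - e) * np.log(1 - e))
    if kind == "beta":
        nu1, nu2 = nu
        return e ** (nu1 - 1) * (1 - e) ** (nu2 - 1)
    raise ValueError(kind)

def weighted_effect(y, z, e, kind="overlap"):
    """Hajek-style weighted estimator of the treatment effect on the
    subpopulation targeted by the tilting function h."""
    h = tilting(e, kind)
    w1, w0 = h / e, h / (1 - e)        # treated and control weights
    mu1 = np.sum(z * w1 * y) / np.sum(z * w1)
    mu0 = np.sum((1 - z) * w0 * y) / np.sum((1 - z) * w0)
    return mu1 - mu0
```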
Urban rail transit offers substantial benefits such as high capacity and high speed, making it one of the most important components of urban transport construction, management, and congestion relief. Using real passenger flow data from an Asian subway system from April to June 2018, this work analyzes the space-time distribution of passenger flow through short-term traffic flow prediction. Stations are divided into four types for passenger flow forecasting, and meteorological records are collected for the same period. Machine learning methods with different inputs are then applied, and multivariate regression is performed, to evaluate how much each weather element improves hourly passenger flow forecasting at representative metro stations. Our results show that including weather variables improves prediction accuracy on weekends, while performance on weekdays improves only marginally; the contributions of the individual weather elements also differ. In addition, different categories of stations are affected differently by weather. This study provides a possible method to further improve other prediction models, and attests to the promise of data-driven analytics for optimizing short-term scheduling in transit management.
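A minimal sketch of the with/without-weather comparison, assuming a gradient-boosted regressor and a chronological train/test split; every column name and the synthetic data below are hypothetical stand-ins, not the paper's data or model.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_squared_error

# Synthetic stand-in for the hourly station-level ridership data.
rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "hour": rng.integers(6, 24, n),
    "dow": rng.integers(0, 7, n),
    "flow_lag1": rng.gamma(5.0, 100.0, n),
    "flow_lag24": rng.gamma(5.0, 100.0, n),
    "temperature": rng.normal(25.0, 5.0, n),
    "rainfall": rng.exponential(1.0, n),
    "humidity": rng.uniform(40.0, 95.0, n),
    "wind_speed": rng.exponential(2.0, n),
})
df["hourly_flow"] = (0.6 * df.flow_lag1 + 0.3 * df.flow_lag24
                     - 50.0 * df.rainfall + rng.normal(0.0, 50.0, n))

base = ["hour", "dow", "flow_lag1", "flow_lag24"]
weather = ["temperature", "rainfall", "humidity", "wind_speed"]

def rmse_with(features, target="hourly_flow", split=0.8):
    cut = int(len(df) * split)                 # chronological split
    train, test = df.iloc[:cut], df.iloc[cut:]
    model = GradientBoostingRegressor().fit(train[features], train[target])
    return mean_squared_error(test[target], model.predict(test[features])) ** 0.5

print("base RMSE:   ", rmse_with(base))
print("with weather:", rmse_with(base + weather))
```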
In this paper, we present an abstract framework of many-valued modal logic in which atomic propositions and modal operators are interpreted as predicate liftings over coalgebras for an endofunctor on the category of sets. It generalizes Pattinson's stratification method for coalgebraic modal logic to the many-valued setting. In contrast to the standard techniques of canonical model construction and filtration, this method employs an induction principle to prove the soundness, completeness, and finite model property of the logics. As a consequence, we can lift a restriction of the previous approach~\cite{Lin2022}, which requires that the underlying language have the expressive power to internalize the meta-level truth valuation operations.
We consider a high-dimensional random constrained optimization problem in which a set of binary variables is subject to a linear system of equations. The cost function is a simple linear one, measuring the Hamming distance from a reference configuration. Despite its apparent simplicity, this problem exhibits a rich phenomenology. We show that different situations arise depending on the random ensemble of linear systems. When each variable is involved in at most two linear constraints, the problem can be partially solved analytically; in particular, we show that upon convergence, the zero-temperature limit of the cavity equations returns the optimal solution. We then study the geometrical properties of more general random ensembles. In particular, we observe a range in the density of constraints at which the system enters a glassy phase where the cost function has many minima. Interestingly, algorithmic performance is sensitive only to another phase transition affecting the structure of configurations allowed by the linear constraints. We also extend our results to variables belonging to $\text{GF}(q)$, the Galois field of order $q$. We show that increasing the value of $q$ allows one to achieve a better optimum, as confirmed by the predictions of the replica-symmetric cavity method.
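As a toy illustration of the optimization problem itself (not of the cavity method), the brute-force sketch below finds a solution of $Ax=b$ over $\text{GF}(q)$ at minimal Hamming distance from a reference configuration; it is exponential in the number of variables and only meant to make the setting concrete.

```python
import itertools
import numpy as np

def min_hamming_solution(A, b, x_ref, q=2):
    """Among all x in GF(q)^n with A x = b (mod q), return one minimising
    the Hamming distance to x_ref. Exhaustive search: a toy check only."""
    n = A.shape[1]
    best, best_d = None, n + 1
    for cand in itertools.product(range(q), repeat=n):
        x = np.array(cand)
        if np.all(A.dot(x) % q == b % q):
            d = int(np.sum(x != x_ref))
            if d < best_d:
                best, best_d = x, d
    return best, best_d

# Toy instance over GF(2) in which each variable appears in at most
# two equations, the analytically tractable case of the abstract.
A = np.array([[1, 1, 0, 0],
              [0, 1, 1, 0],
              [0, 0, 1, 1]])
b = np.array([1, 0, 1])
x_ref = np.zeros(4, dtype=int)
print(min_hamming_solution(A, b, x_ref, q=2))  # -> ([0 1 1 0], 2)
```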
We study the rank of sub-matrices arising out of kernel functions $F(\pmb{x},\pmb{y}): \mathbb{R}^d \times \mathbb{R}^d \mapsto \mathbb{R}$, where $\pmb{x},\pmb{y} \in \mathbb{R}^d$ and $F(\pmb{x},\pmb{y})$ is smooth everywhere except along the diagonal $\pmb{x}=\pmb{y}$. Such kernel functions are frequently encountered in a wide range of applications such as $N$-body problems, Green's functions, integral equations, geostatistics, kriging, and Gaussian processes. One of the challenges in dealing with these kernel functions is that the associated matrices are large and dense, so the computational cost of matrix operations is high. In this article, we prove new theorems bounding the numerical rank of sub-matrices arising out of these kernel functions. Under reasonably mild assumptions, we prove that certain sub-matrices are rank-deficient in finite precision. This rank depends on the dimension of the ambient space and also on the type of interaction between the hyper-cubes containing the corresponding sets of particles. This rank structure can be leveraged to reduce the computational cost of matrix operations such as matrix-vector products and solving linear systems. We also present numerical results on the growth of the rank of certain sub-matrices in $1$D, $2$D, $3$D, and $4$D, which, not surprisingly, agree with the theoretical results.
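A minimal numerical sketch of the phenomenon, assuming a 1D log-kernel and the standard SVD-based notion of numerical rank: the off-diagonal block between two well-separated clusters has rank far below its dimensions.

```python
import numpy as np

def numerical_rank(M, tol=1e-12):
    """Number of singular values above tol times the largest one."""
    s = np.linalg.svd(M, compute_uv=False)
    return int(np.sum(s > tol * s[0]))

# F(x, y) = log|x - y| is smooth away from the diagonal x = y.
rng = np.random.default_rng(0)
n = 500
x = rng.uniform(0.0, 1.0, n)   # source cluster
y = rng.uniform(2.0, 3.0, n)   # well-separated target cluster
K = np.log(np.abs(x[:, None] - y[None, :]))

print(numerical_rank(K))       # far smaller than n = 500
```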
In the adversarially robust streaming model, a stream of elements is presented to an algorithm and is allowed to depend on the output of the algorithm at earlier times during the stream. In the classic insertion-only model of data streams, Ben-Eliezer et al. (PODS 2020, best paper award) show how to convert a non-robust algorithm into a robust one with roughly a $1/\varepsilon$ factor overhead. This was subsequently improved to a $1/\sqrt{\varepsilon}$ factor overhead by Hassidim et al. (NeurIPS 2020, oral presentation), suppressing logarithmic factors. For general functions the latter is known to be best possible, by a result of Kaplan et al. (CRYPTO 2021). We show how to bypass this impossibility result by developing data stream algorithms for a large class of streaming problems, with no overhead in the approximation factor. Our class of streaming problems includes the most well-studied problems, such as the $L_2$-heavy hitters problem, $F_p$-moment estimation, and empirical entropy estimation. We substantially improve upon all prior work on these problems, giving the first optimal dependence on the approximation factor. As in previous work, we obtain a general transformation that applies to any non-robust streaming algorithm and depends on the so-called twist number. However, the key technical innovation is that we apply the transformation to what we call a difference estimator for the streaming problem, rather than an estimator for the streaming problem itself. We then develop the first difference estimators for a wide range of problems. Our difference estimator methodology is not only applicable to the adversarially robust model, but also to other streaming models where temporal properties of the data play a central role. (Abstract shortened to meet arXiv limit.)
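For intuition, the sketch below illustrates the earlier sketch-switching paradigm (in the spirit of Ben-Eliezer et al.) on $F_2$ estimation with AMS sketches: the published output is frozen until the live estimate drifts by a $(1+\varepsilon)$ factor, at which point a fresh independent copy takes over. It is a toy with illustrative parameters, not the difference-estimator construction introduced in this paper.

```python
import random
from statistics import median

class AMSF2:
    """A single (non-robust) AMS estimator of the second moment F2
    for an insertion-only stream."""
    def __init__(self, reps=48, seed=0):
        self.rng = random.Random(seed)
        self.signs = [dict() for _ in range(reps)]  # lazily sampled +-1 hashes
        self.z = [0] * reps

    def update(self, item):
        for j, s in enumerate(self.signs):
            if item not in s:
                s[item] = self.rng.choice((-1, 1))
            self.z[j] += s[item]

    def estimate(self):
        sq = [v * v for v in self.z]
        means = [sum(sq[i:i + 6]) / 6 for i in range(0, len(sq), 6)]
        return median(means)    # median of means over 8 groups of 6

class RobustF2:
    """Sketch switching: freeze the output until the live estimate grows
    by a (1+eps) factor, then reveal it and burn the current copy. F2 is
    monotone in insertion-only streams, so only upward drift is checked."""
    def __init__(self, eps=0.25, copies=40):
        self.eps, self.cur, self.out = eps, 0, 0.0
        self.copies = [AMSF2(seed=1 + s) for s in range(copies)]

    def update(self, item):
        for c in self.copies[self.cur:]:  # exhausted copies never revisited
            c.update(item)
        live = self.copies[self.cur].estimate()
        if live > (1 + self.eps) * self.out:
            self.out = live
            # enough copies should be provisioned that this never saturates
            self.cur = min(self.cur + 1, len(self.copies) - 1)
        return self.out
```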
Analyzing observational data from multiple sources can be useful for increasing statistical power to detect a treatment effect; however, practical constraints such as privacy considerations may restrict individual-level information sharing across data sets. This paper develops federated methods that utilize only summary-level information from heterogeneous data sets. Our federated methods provide doubly robust point estimates of treatment effects as well as variance estimates. We derive the asymptotic distributions of our federated estimators, which are shown to be asymptotically equivalent to the corresponding estimators from the combined, individual-level data. We show that, to achieve these properties, federated methods should be adjusted based on conditions such as whether models are correctly specified and stable across the heterogeneous data sets.
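As a simplified sketch of the summary-level idea, each site could share only its doubly robust (AIPW-style) point estimate and standard error, which a coordinating center combines by inverse-variance weighting. This fixed-effect combination is an assumption-laden stand-in: the paper's estimators additionally adjust for model misspecification and cross-site heterogeneity.

```python
import numpy as np

def combine_sites(summaries):
    """Fixed-effect, inverse-variance combination of site-level summaries.
    Each entry is (tau_hat, se_hat); no individual-level data leave a site."""
    tau = np.array([s[0] for s in summaries])
    var = np.array([s[1] ** 2 for s in summaries])
    w = 1.0 / var
    tau_fed = np.sum(w * tau) / np.sum(w)
    se_fed = np.sqrt(1.0 / np.sum(w))
    return tau_fed, se_fed

# Three heterogeneous sites (hypothetical numbers):
print(combine_sites([(1.9, 0.40), (2.3, 0.55), (2.0, 0.35)]))
```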
This paper focuses on the expected difference in borrowers' repayment when there is a change in the lender's credit decisions. Classical estimators overlook confounding effects, and hence their estimation error can be substantial. We therefore propose an alternative approach to constructing the estimators that greatly reduces this error. The proposed estimators are shown to be unbiased, consistent, and robust through a combination of theoretical analysis and numerical testing. Moreover, we compare the classical and proposed estimators in terms of their power to estimate the causal quantities. The comparison spans a wide range of models, including linear regression models, tree-based models, and neural network-based models, under simulated datasets exhibiting different strengths of causal effects, different degrees of nonlinearity, and different distributional properties. Most importantly, we apply our approaches to a large observational dataset provided by a global technology firm that operates in both the e-commerce and the lending business. We find that the relative reduction in estimation error is strikingly substantial when the causal effects are accounted for correctly.
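To see why ignoring confounding inflates the error, consider the following sketch of a simulated lending scenario (all variable names and effect sizes are hypothetical): a naive difference in repayment means is badly biased, while a generic inverse-probability-weighted adjustment, standing in here for the paper's proposed estimators, recovers the true effect.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 20_000
safety = rng.normal(size=n)                       # confounder: creditworthiness
z = rng.binomial(1, 1 / (1 + np.exp(-1.5 * safety)))  # lender's credit decision
tau = 0.5                                         # true causal effect
y = 2.0 * safety + tau * z + rng.normal(size=n)   # repayment outcome

naive = y[z == 1].mean() - y[z == 0].mean()       # confounded, overstates tau

X = safety.reshape(-1, 1)
e = LogisticRegression().fit(X, z).predict_proba(X)[:, 1]  # propensity scores
w = z / e + (1 - z) / (1 - e)                     # inverse probability weights
ipw = (np.sum(z * w * y) / np.sum(z * w)
       - np.sum((1 - z) * w * y) / np.sum((1 - z) * w))
print(f"true {tau:.2f}  naive {naive:.2f}  adjusted {ipw:.2f}")
```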
Causal inference has been a critical research topic across many domains, such as statistics, computer science, education, public policy, and economics, for decades. Nowadays, estimating causal effects from observational data has become an appealing research direction owing to the large amount of available data and the low budget requirement, compared with randomized controlled trials. Spurred by the rapid development of machine learning, various causal effect estimation methods for observational data have sprung up. In this survey, we provide a comprehensive review of causal inference methods under the potential outcome framework, one of the most well-known causal inference frameworks. The methods are divided into two categories depending on whether they require all three assumptions of the potential outcome framework or not. For each category, both the traditional statistical methods and the recent machine learning enhanced methods are discussed and compared. The plausible applications of these methods are also presented, including applications in advertising, recommendation, medicine, and so on. Moreover, the commonly used benchmark datasets and open-source codes are summarized, which can help researchers and practitioners explore, evaluate, and apply causal inference methods.