The Byzantine consensus problem involves $n$ processes, out of which $t < n$ could be faulty and behave arbitrarily. Three properties characterize consensus: (1) termination, requiring correct (non-faulty) processes to eventually reach a decision; (2) agreement, preventing them from deciding different values; and (3) validity, precluding ``unreasonable'' decisions. But what is a reasonable decision? Strong validity, a classical property, stipulates that, if all correct processes propose the same value, only that value can be decided. Weak validity, another established property, stipulates that, if all processes are correct and they propose the same value, that value must be decided. The space of possible validity properties is vast, yet their impact on consensus remains unclear. This paper addresses the question of which validity properties allow Byzantine consensus to be solvable under partial synchrony, and at what cost. First, we determine the necessary and sufficient conditions for a validity property to make the consensus problem solvable; we say that such validity properties are solvable. Notably, we prove that, if $n \leq 3t$, all solvable validity properties are trivial (there exists an always-admissible decision). Furthermore, we show that, with any non-trivial (and solvable) validity property, consensus requires $\Omega(t^2)$ messages. This extends the seminal Dolev-Reischuk bound, originally proven for strong validity, to all non-trivial validity properties. Lastly, we give a general Byzantine consensus algorithm, which we call Universal, for any solvable (and non-trivial) validity property. Importantly, Universal incurs $O(n^2)$ message complexity. Together with our lower bound, Universal thus implies a fundamental result in partial synchrony: with $t \in \Omega(n)$, the message complexity of all (non-trivial) consensus variants is $\Theta(n^2)$.
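For concreteness, the notions discussed above can be sketched formally as follows (a hedged formalization; the paper's exact definitions may differ in detail). A validity property maps each input configuration $c$ (an assignment of proposals to processes, together with which processes are faulty) to a non-empty set $\mathrm{val}(c)$ of admissible decisions, and triviality means that some value is admissible in every configuration:
$$ \mathrm{val}: \mathcal{I} \to 2^{\mathcal{V}} \setminus \{\emptyset\}, \qquad \text{trivial} \iff \bigcap_{c \in \mathcal{I}} \mathrm{val}(c) \neq \emptyset. $$
Under this reading, strong validity forces $\mathrm{val}(c) = \{v\}$ whenever all correct processes propose $v$, so no single value is always admissible and the property is non-trivial.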
The common cause principle for two random variables $A$ and $B$ is examined in the case of causal insufficiency, when their common cause $C$ is known to exist but only the joint probability of $A$ and $B$ is observed. As a result, $C$ cannot be uniquely identified (the latent confounder problem). We show that the generalized maximum likelihood method can be applied to this situation and allows identification of a $C$ that is consistent with the common cause principle; the method is closely related to the maximum entropy principle. Investigation of two symmetric binary variables reveals a non-analytic behavior of conditional probabilities, reminiscent of a second-order phase transition, which occurs during the transition from correlation to anti-correlation in the observed probability distribution. The relation between the generalized likelihood approach and alternative methods, such as predictive likelihood and minimum common cause entropy, is discussed. Considering the common cause of three observed variables (with one hidden cause) uncovers causal structures that cannot be represented by directed acyclic graphs satisfying the Markov condition.
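As a reminder of the underlying constraint (the standard formulation, not anything specific to this paper), the common cause principle requires $A$ and $B$ to be conditionally independent given $C$, so that the observed joint distribution is a mixture
$$ p(a,b) \;=\; \sum_{c} p(a\mid c)\, p(b\mid c)\, p(c), $$
and any latent $C$ satisfying this factorization for the observed $p(a,b)$ is a candidate confounder; the identification problem is to single one out.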
Foundational ontologies devoted to the effective representation of processes and procedures are not widely investigated at present, thereby limiting the practical adoption of semantic approaches in real scenarios where precise instructions to follow must be considered. Moreover, the representation ought to capture how agents should carry out the actions associated with a process, whether agents are able to perform those actions, the possible roles played, and the related events. The OASIS ontology provides an established model for capturing agents and their interactions but lacks the means to represent processes and procedures carried out by agents. This motivates the research presented in this article, which delivers an extension of the OASIS 2 ontology that combines the capabilities for representing agents and their behaviours with a full conceptualization of processes and procedures. The overarching goal is to deliver a foundational OWL ontology that deals with agent planning, striking a balance between generality and applicability, which is known to be an open challenge.
Models with intractable normalizing functions have numerous applications. Because the normalizing constants are functions of the parameters of interest, standard Markov chain Monte Carlo cannot be used for Bayesian inference in these models. A number of algorithms have been developed for such models; some have the posterior distribution as their asymptotic distribution, while other ``asymptotically inexact'' algorithms do not possess this property. There is limited guidance for evaluating approximations based on these algorithms, which makes them very hard to tune. We propose two new diagnostics that address these problems for intractable normalizing function models. Our first diagnostic, inspired by the second Bartlett identity, is in principle broadly applicable to Monte Carlo approximations beyond the normalizing function problem; we develop an approximate version of it that applies to intractable normalizing function problems. Our second diagnostic is a Monte Carlo approximation to the kernel Stein discrepancy-based diagnostic introduced by Gorham and Mackey (2017). We provide theoretical justification for our methods and apply them to several algorithms in challenging simulated and real data examples, including an Ising model, an exponential random graph model, and a Conway--Maxwell--Poisson regression model, obtaining interesting insights about the algorithms in these contexts.
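For reference, the second Bartlett identity that motivates the first diagnostic states (in its textbook form under the usual regularity conditions; the paper's approximate version is a separate construction) that the expected Hessian of the log-likelihood cancels the expected outer product of the score:
$$ \mathbb{E}_{\theta}\!\left[\nabla_{\theta}^{2} \log f(Y\mid\theta)\right] \;+\; \mathbb{E}_{\theta}\!\left[\nabla_{\theta} \log f(Y\mid\theta)\, \nabla_{\theta} \log f(Y\mid\theta)^{\top}\right] \;=\; 0, $$
so the extent to which a Monte Carlo approximation violates this identity can be used as a diagnostic signal.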
We consider a decision maker allocating one unit of a renewable and divisible resource in each period over a number of arms. The arms have unknown and random rewards whose means are proportional to the allocated resource and whose variances are proportional to an order $b$ of the allocated resource. In particular, if the decision maker allocates resource $A_i$ to arm $i$ in a period, then the reward is $Y_i(A_i)=A_i \mu_i+A_i^b \xi_{i}$, where $\mu_i$ is the unknown mean and the noise $\xi_{i}$ is independent and sub-Gaussian. As the order $b$ ranges from $0$ to $1$, the framework smoothly bridges the standard stochastic multi-armed bandit and online learning with full feedback. We design two algorithms that attain the optimal gap-dependent and gap-independent regret bounds for $b\in [0,1]$, and demonstrate a phase transition at $b=1/2$. The theoretical results hinge on a novel concentration inequality we develop that bounds a linear combination of sub-Gaussian random variables whose weights are fractional, adapted to the filtration, and monotonic.
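A minimal simulation of the feedback model above may help fix ideas (an illustrative sketch only; the arm count, the fixed allocation, and the Gaussian noise are placeholder assumptions, not the paper's algorithms):

```python
import numpy as np

def pull(alloc, mu, b, rng):
    """Draw rewards Y_i(A_i) = A_i * mu_i + A_i**b * xi_i for one period.

    alloc: nonnegative allocations summing to 1 (one unit of resource).
    mu:    unknown arm means.
    b:     order controlling how the noise scales with the allocation.
    """
    xi = rng.normal(0.0, 1.0, size=len(mu))  # sub-Gaussian noise (Gaussian here)
    return alloc * mu + alloc**b * xi

rng = np.random.default_rng(0)
mu = np.array([0.3, 0.5, 0.8])
alloc = np.array([0.2, 0.3, 0.5])            # an arbitrary feasible allocation
for b in (0.0, 0.5, 1.0):                    # endpoints and the phase-transition point
    print(b, pull(alloc, mu, b, rng))
```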
Uniform sampling of bipartite graphs and hypergraphs with given degree sequences is necessary for building null models to statistically evaluate their topology. Because these graphs can be represented as binary matrices, the problem is equivalent to uniformly sampling $r \times c$ binary matrices with fixed row and column sums. The trade algorithm, which includes both the curveball and fastball implementations, is the state of the art for performing such sampling. Its mixing time is unknown, although $5r$ is commonly used as a heuristic. In this paper we propose a new distribution-based approach that not only provides an estimate of the mixing time, but also returns a sample of matrices that are guaranteed (within a user-chosen error tolerance) to be uniformly randomly sampled. In numerical experiments on matrices that vary by size, fill, and row and column sum distributions, we find that the upper bound on the mixing time is at least $10r$, and that it increases as a function of both $c$ and the fraction of cells containing a 1.
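To make the sampler concrete, here is a minimal sketch of a single "trade" between two rows in the curveball style (an illustrative simplification that assumes rows are stored as sets of column indices; it is not the fastball implementation):

```python
import random

def curveball_trade(rows, i, j, rng=random):
    """One trade step between rows i and j (curveball-style sketch).

    rows[i] is the set of column indices holding a 1 in row i.  The
    columns shared by both rows are kept; the columns held by exactly
    one of the two rows are reshuffled between them, preserving all
    row and column sums.
    """
    a, b = rows[i], rows[j]
    shared = a & b                       # columns both rows keep
    tradeable = list((a | b) - shared)   # columns held by exactly one row
    rng.shuffle(tradeable)
    k = len(a) - len(shared)             # row i must keep its degree
    rows[i] = shared | set(tradeable[:k])
    rows[j] = shared | set(tradeable[k:])
```

Repeating such trades for many randomly chosen row pairs leaves the margins unchanged while mixing over the set of matrices with those row and column sums; how many repetitions suffice is exactly the mixing-time question studied here.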
In this paper we report a new finding on the linear sampling and factorization methods: in addition to shape identification, these methods are also capable of parameter identification. Our demonstration concerns shape/parameter identification associated with a restricted Fourier integral operator, which arises from the multi-frequency inverse source problem with a fixed observation direction and from Born inverse scattering problems. Within the framework of the linear sampling method, we develop both a shape identification theory and a parameter identification theory, which are motivated, analyzed, and implemented with the help of the prolate spheroidal wave functions and their generalizations. Both identification theories are general, since they allow any general regularization scheme, such as Tikhonov or singular-value cut-off regularization. We further propose a prolate-Galerkin formulation of the linear sampling method for implementation and provide numerical experiments demonstrating how the linear sampling method can reconstruct both the shape and the parameter.
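For orientation, a typical restricted (finite) Fourier integral operator of the kind referred to above takes the form (the precise scaling, normalization, and domains here are illustrative assumptions rather than the paper's exact setup)
$$ (F g)(s) \;=\; \int_{-1}^{1} e^{\,\mathrm{i} s t}\, g(t)\, \mathrm{d}t, \qquad s \in (-k, k), $$
whose singular functions are (up to scaling) the prolate spheroidal wave functions; this is what makes a prolate-Galerkin discretization of the linear sampling method natural.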
We study the properties of conformal prediction for network data under various sampling mechanisms that commonly arise in practice but often result in a non-representative sample of nodes. We interpret these sampling mechanisms as selection rules applied to a superpopulation and study the validity of conformal prediction conditional on an appropriate selection event. We show that the sampled subarray is exchangeable conditional on the selection event if the selection rule satisfies a permutation invariance property and a joint exchangeability condition holds for the superpopulation. Our result implies the finite-sample validity of conformal prediction for certain selection events related to ego networks and snowball sampling. We also show that when data are sampled via a random walk on a graph, a variant of weighted conformal prediction yields asymptotically valid prediction sets for an independently selected node from the population.
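As background on the mechanics being adapted here, the following is a generic split-conformal construction (a sketch of the standard unweighted procedure; it is not the selection-conditional or weighted variant analyzed in the paper):

```python
import numpy as np

def split_conformal_interval(cal_pred, cal_y, test_pred, alpha=0.1):
    """Generic split-conformal prediction intervals (illustrative sketch).

    cal_pred, cal_y: point predictions and observed responses on a
    calibration set; test_pred: predictions for new points.  Under
    exchangeability of calibration and test data, the returned
    intervals have marginal coverage at least 1 - alpha.
    """
    n = len(cal_y)
    scores = np.abs(cal_y - cal_pred)               # nonconformity scores
    level = np.ceil((1 - alpha) * (n + 1)) / n      # finite-sample correction
    q = np.quantile(scores, min(level, 1.0), method="higher")
    return test_pred - q, test_pred + q
```

The paper's contribution can be read as identifying when a version of this exchangeability argument still goes through conditional on the selection event induced by the network sampling mechanism.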
Noise-shaping quantization techniques are widely used for converting bandlimited signals from the analog to the digital domain. They work by ``shaping'' the quantization noise so that it falls close to the null space of the reconstruction operator. We investigate the compatibility of two such schemes, namely $\Sigma\Delta$ quantization and distributed noise-shaping quantization, with random samples of bandlimited functions. Let $f$ be a real-valued $\pi$-bandlimited function. Suppose $R>1$ is a real number and assume that $\{x_i\}_{i=1}^m$ is a sequence of i.i.d. random variables uniformly distributed on $[-\tilde{R},\tilde{R}]$, where $\tilde{R}>R$ is appropriately chosen. We show that, by using a noise-shaping quantizer to quantize the values of $f$ at $\{x_i\}_{i=1}^m$, a function $f^{\sharp}$ can be reconstructed from these quantized values such that $\|f-f^{\sharp}\|_{L^2[-R, R]}$ decays with high probability as $m$ and $\tilde{R}$ increase. We emphasize that the sample points $\{x_i\}_{i=1}^m$ are completely random, i.e., they have no predefined structure, which makes our findings the first of their kind.
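For illustration, here is the standard greedy first-order $\Sigma\Delta$ quantizer (a generic sketch; the paper's schemes, including the distributed noise-shaping variant and the treatment of unstructured random samples, are more involved):

```python
import numpy as np

def first_order_sigma_delta(samples, levels=(-1.0, 1.0)):
    """Greedy first-order Sigma-Delta quantization (illustrative sketch).

    Rule: q_i = argmin_{q in levels} |u_{i-1} + y_i - q|, then
    u_i = u_{i-1} + y_i - q_i.  The state u stays bounded and the
    quantization error y - q is a first-order difference of u, which
    is what "shapes" the noise away from the reconstruction operator.
    """
    levels = np.asarray(levels, dtype=float)
    y = np.asarray(samples, dtype=float)
    q = np.empty_like(y)
    u = 0.0
    for i, yi in enumerate(y):
        q[i] = levels[np.argmin(np.abs(u + yi - levels))]
        u = u + yi - q[i]
    return q
```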
An effective and efficient architecture performance evaluation scheme is essential for the success of Neural Architecture Search (NAS). To save computational cost, most existing NAS algorithms train and evaluate intermediate neural architectures on a small proxy dataset with limited training epochs, but such a coarse evaluation can hardly yield an accurate estimate of an architecture's performance. This paper advocates a new neural architecture evaluation scheme, which aims to determine which architecture would perform better instead of accurately predicting absolute architecture performance. Therefore, we propose a \textbf{relativistic} architecture performance predictor in NAS (ReNAS). We encode neural architectures into feature tensors and further refine the representations with the predictor. The proposed relativistic performance predictor can be deployed in discrete searching methods to search for the desired architectures without additional evaluation. Experimental results on the NAS-Bench-101 dataset suggest that sampling 424 ($0.1\%$ of the entire search space) neural architectures and their corresponding validation performance is already enough for learning an accurate architecture performance predictor. The accuracies of the searched neural architectures on the NAS-Bench-101 and NAS-Bench-201 datasets are higher than those of the state-of-the-art methods, which shows the superiority of the proposed method.
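Since the predictor only needs to get relative orderings right, a pairwise ranking objective is a natural fit; the sketch below shows one such hinge-style loss (an illustrative assumption, not necessarily the exact ReNAS training objective):

```python
import torch

def pairwise_ranking_loss(scores, accs, margin=0.1):
    """Pairwise hinge ranking loss for a performance predictor (sketch).

    scores: predicted scores for a batch of architectures (1-D tensor).
    accs:   their measured validation accuracies (1-D tensor).
    Penalizes every ordered pair (i, j) with acc_i > acc_j whose
    predicted scores are not separated by at least `margin`.
    """
    diff_acc = accs.unsqueeze(1) - accs.unsqueeze(0)       # acc_i - acc_j
    diff_score = scores.unsqueeze(1) - scores.unsqueeze(0) # score_i - score_j
    mask = (diff_acc > 0).float()                          # pairs that should be ordered
    loss = torch.clamp(margin - diff_score, min=0.0) * mask
    return loss.sum() / mask.sum().clamp(min=1.0)
```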
This paper focuses on the expected difference in a borrower's repayment when there is a change in the lender's credit decisions. Classical estimators overlook the confounding effects, and hence the estimation error can be substantial. We therefore propose an alternative approach to constructing the estimators so that this error can be greatly reduced. The proposed estimators are shown to be unbiased, consistent, and robust through a combination of theoretical analysis and numerical testing. Moreover, we compare the classical and proposed estimators in terms of their power to estimate the causal quantities. The comparison is carried out across a wide range of models, including linear regression models, tree-based models, and neural network-based models, on simulated datasets that exhibit different levels of causality, degrees of nonlinearity, and distributional properties. Most importantly, we apply our approach to a large observational dataset provided by a global technology firm that operates in both the e-commerce and the lending business. We find that the relative reduction of estimation error is strikingly substantial when the causal effects are accounted for correctly.
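One way to phrase the target quantity and the confounding issue above (a standard potential-outcomes sketch, not necessarily the paper's exact notation):
$$ \Delta \;=\; \mathbb{E}\!\left[\,Y(d') - Y(d)\,\right], \qquad \text{whereas in general} \qquad \mathbb{E}\!\left[\,Y \mid D=d'\,\right] - \mathbb{E}\!\left[\,Y \mid D=d\,\right] \;\neq\; \Delta, $$
where $Y(d)$ denotes the repayment that would be observed under credit decision $d$; the gap between the two expressions is precisely the confounding bias that the classical estimators ignore and that the proposed estimators aim to remove.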