
We make two contributions to the study of theory combination in satisfiability modulo theories. The first is a table of examples for the combinations of the most common model-theoretic properties in theory combination, namely stable infiniteness, smoothness, convexity, finite witnessability, and strong finite witnessability (and therefore politeness and strong politeness as well). All of our examples are sharp, in the sense that we also prove that no such theories exist over simpler signatures. This table significantly advances the current understanding of the various properties and their interactions. The most remarkable example in the table is a theory over a single sort that is polite but not strongly polite (until now, such theories were known to exist only over two-sorted signatures). The second contribution is a new combination theorem showing that, in order to apply polite theory combination, it suffices for one theory to be stably infinite and strongly finitely witnessable, thus showing that smoothness is not a critical property in this combination method. This result has the potential to greatly simplify the process of determining which theories can be used in polite combination, as proving stable infiniteness is considerably simpler than proving smoothness.
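For orientation, the two properties most relevant to the second contribution can be recalled as follows; this informal recap (for a theory $T$ and a set $S$ of sorts) is ours and follows the standard definitions from the polite-combination literature, not new material from the paper:

\[
\begin{array}{ll}
\textit{stable infiniteness:} & \text{every quantifier-free } T\text{-satisfiable formula is satisfiable in a model of } T \text{ in which}\\
& \text{the domain of every sort in } S \text{ is infinite;}\\[3pt]
\textit{smoothness:} & \text{every quantifier-free } T\text{-satisfiable formula remains satisfiable when the cardinality of}\\
& \text{each sort in } S \text{ is increased to any prescribed larger (or equal) cardinal.}
\end{array}
\]

In particular, smoothness implies stable infiniteness, so requiring only the latter (together with strong finite witnessability) is a weaker hypothesis on that theory.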

Related Content

A sequence of random variables is called exchangeable if its joint distribution is invariant under permutations. The original formulation of de Finetti's theorem says that any exchangeable sequence of $\{0,1\}$-valued random variables can be thought of as a mixture of independent and identically distributed sequences in a certain precise mathematical sense. Interpreting this statement from a convex analytic perspective, Hewitt and Savage obtained the same conclusion for more general state spaces under some topological conditions. The main contribution of this paper is in providing a new framework that explains the theorem purely as a consequence of the underlying distribution of the random variables, with no topological conditions (beyond Hausdorffness) on the state space being necessary if the distribution is Radon. We also show that it is consistent with the axioms of ZFC that de Finetti's theorem holds for all sequences of exchangeable random variables taking values in any complete metric space. The framework we use is based on nonstandard analysis. We have provided a self-contained introduction to nonstandard analysis as an appendix, thus rendering measure theoretic probability and point-set topology as the only prerequisites for this paper. Our introduction aims to develop some new ideologies that might be of interest to mathematicians, philosophers, and mathematics educators alike. Our technical tools come from nonstandard topological measure theory, in which a highlight is a new generalization of Prokhorov's theorem. Modulo such technical tools, our proof relies on properties of the empirical measures induced by hyperfinitely many identically distributed random variables -- a feature that allows us to establish de Finetti's theorem in the generality that we seek while still retaining the combinatorial intuition of proofs of simpler versions of de Finetti's theorem.
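As a concrete reference point, the original $\{0,1\}$-valued formulation mentioned above can be written in the usual way (this is the standard textbook statement, not the paper's more general result): for an exchangeable sequence $X_1, X_2, \ldots$ of $\{0,1\}$-valued random variables there is a unique probability measure $\mu$ on $[0,1]$ such that

\[
P(X_1 = x_1, \ldots, X_n = x_n) \;=\; \int_0^1 p^{\sum_{i=1}^n x_i} (1-p)^{\,n-\sum_{i=1}^n x_i} \, \mu(\mathrm{d}p)
\qquad \text{for all } n \in \mathbb{N} \text{ and } x_1, \ldots, x_n \in \{0,1\},
\]

i.e., the sequence is a mixture, governed by $\mu$, of iid Bernoulli$(p)$ sequences.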

In compact settings, the convergence rate of the empirical optimal transport cost to its population value is well understood for a wide class of spaces and cost functions. In unbounded settings, however, hitherto available results require strong assumptions on the ground costs and the concentration of the involved measures. In this work, we pursue a decomposition-based approach to generalize the convergence rates found in compact spaces to unbounded settings under generic moment assumptions that are sharp up to an arbitrarily small $\epsilon > 0$. Hallmark properties of empirical optimal transport on compact spaces, like the recently established adaptation to lower complexity, are shown to carry over to the unbounded case.
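To fix notation for what the empirical optimal transport cost refers to, a standard setup (our notation, kept generic) is

\[
T_c(\mu, \nu) \;=\; \inf_{\pi \in \Pi(\mu,\nu)} \int c(x,y)\, \mathrm{d}\pi(x,y),
\qquad
\hat{\mu}_n \;=\; \frac{1}{n}\sum_{i=1}^n \delta_{X_i}, \quad X_i \overset{\mathrm{iid}}{\sim} \mu,
\]

where $\Pi(\mu,\nu)$ is the set of couplings of $\mu$ and $\nu$; the convergence rates in question concern the deviation $\mathbb{E}\,\bigl|T_c(\hat{\mu}_n, \nu) - T_c(\mu, \nu)\bigr|$ (and its two-sample analogue) as $n \to \infty$.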

We consider the problem of learning from data corrupted by underrepresentation bias, where positive examples are filtered from the data at different, unknown rates for a fixed number of sensitive groups. We show that with a small amount of unbiased data, we can efficiently estimate the group-wise drop-out parameters, even in settings where intersectional group membership makes learning each intersectional rate computationally infeasible. Using this estimate for the group-wise drop-out rate, we construct a re-weighting scheme that allows us to approximate the loss of any hypothesis on the true distribution, even if we only observe the empirical error on a biased sample. Finally, we present an algorithm encapsulating this learning and re-weighting process, and we provide strong PAC-style guarantees that, with high probability, our estimate of the risk of the hypothesis over the true distribution will be arbitrarily close to the true risk.
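As a rough illustration of the re-weighting idea described above (and not the paper's exact construction), suppose the estimated drop-out rate for positives in group $g$ is $\hat\beta_g$, so a positive example from group $g$ survives the filtering with probability $1-\hat\beta_g$. Up-weighting surviving positives by $1/(1-\hat\beta_g)$ then gives an importance-weighted estimate of the loss on the true distribution; the function and variable names below are ours.

import numpy as np

def reweighted_risk(losses, labels, groups, drop_rates):
    # losses: per-example losses of a hypothesis on the biased sample
    # labels: binary labels (1 = positive); only positives were filtered
    # groups: group index of each example
    # drop_rates: estimated per-group drop-out rates beta_g for positives
    losses = np.asarray(losses, dtype=float)
    labels = np.asarray(labels)
    retention = 1.0 - np.asarray(drop_rates)[np.asarray(groups)]
    # Surviving positives are up-weighted by 1 / (1 - beta_g); negatives keep weight 1.
    weights = np.where(labels == 1, 1.0 / retention, 1.0)
    # Self-normalized importance-weighted estimate of the true-distribution risk.
    return float(np.sum(weights * losses) / np.sum(weights))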

Inferring the means in the multivariate normal model $X \sim N_n(\theta, I)$ with unknown mean vector $\theta=(\theta_1,...,\theta_n)' \in \mathbb{R}^n$ and observed data $X=(X_1,...,X_n)'\in \mathbb{R}^n$ is a challenging task, known as the problem of many normal means (MNMs). This paper tackles two fundamental kinds of MNMs within the framework of Inferential Models (IMs). The first kind, referred to as the {\it classic} kind, is presented as is. The second kind, referred to as the {\it empirical Bayes} kind, assumes that the individual means $\theta_i$'s are drawn independently {\it a priori} from an unknown distribution $G(\cdot)$. The IM formulation for the empirical Bayes kind utilizes numerical deconvolution, enabling prior-free probabilistic inference with over-parameterization for $G(\cdot)$. The IM formulation for the classic kind, on the other hand, utilizes a latent random permutation, providing a novel approach for reasoning with uncertainty and deeper understanding. For uncertainty quantification within the familiar frequentist inference framework, the IM method of maximum plausibility is used for point estimation. Conservative interval estimation is obtained based on plausibility, using a Monte Carlo-based adaptive adjustment approach to construct shorter confidence intervals with targeted coverage. These methods are demonstrated through simulation studies and a real-data example. The numerical results show that the proposed methods for point estimation outperform the traditional James-Stein estimator and Efron's $g$-modeling in terms of mean squared error, and the adaptive intervals are satisfactory in both coverage and efficiency. The paper concludes with suggestions for future developments and extensions of the proposed methods.
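For reference, the classic-kind benchmark used in the comparison is the usual one: under $X \sim N_n(\theta, I)$, the James-Stein estimator

\[
\hat{\theta}^{\mathrm{JS}} \;=\; \left(1 - \frac{n-2}{\lVert X \rVert^2}\right) X
\]

dominates the maximum likelihood estimator $\hat{\theta}^{\mathrm{MLE}} = X$ in total mean squared error whenever $n \ge 3$, which is why it is a natural baseline for point estimation in the MNM problem.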

In this paper, we present a novel characterization of the smoothness of a model based on basic principles of Large Deviation Theory. In contrast to prior work, where the smoothness of a model is normally characterized by a real value (e.g., the weights' norm), we show that smoothness can be described by a simple real-valued function. Based on this concept of smoothness, we propose a unifying theoretical explanation of why some interpolators generalize remarkably well and why a wide range of modern learning techniques (i.e., stochastic gradient descent, $\ell_2$-norm regularization, data augmentation, invariant architectures, and overparameterization) are able to find them. The emergent conclusion is that all these methods provide complementary procedures that bias the optimizer toward smoother interpolators, which, according to this theoretical analysis, are the ones with better generalization error.

Unfolding can mitigate the path-explosion problem caused by concurrency. Traditional unfolding generation faces an NP-complete problem when adding events to the unfolding structure, and this problem persists when verifying linear temporal logic (LTL). The reason is that it is necessary to enumerate possible concurrent event combinations after adding an event. Many state-of-the-art methods optimally explore the unfolding-based structure (called an event structure) via a tree-like structure, which must be constructed on an event structure with complete conflict and causal relations. However, the synchronization of a Petri net and the Büchi representation of an LTL formula as a folded net cannot represent complete conflict and causal relations. Thus, it is difficult to apply such a tree-like structure directly to the folded net. To resolve this difficulty, we propose a new method, called partial-order checking with unfolding, to verify LTL based on PDNet (program dependence net). We define an exploration tree with a new notion of delayed transitions, which differs from the existing tree-like structure. It improves unfolding generation by avoiding the enumeration of all possible event combinations. We then propose an algorithm that constructs the exploration tree while simultaneously generating the unfolding structure and checking the LTL property. We implement a tool, PUPER, for concurrent programs with POSIX threads. It improves on traditional unfolding generation via our exploration-tree-based algorithms and shows better performance than SPIN and DiVine on the benchmarks used.

In epidemiological studies, the capture-recapture (CRC) method is a powerful tool that can be used to estimate the number of diseased cases or, potentially, disease prevalence based on data from overlapping surveillance systems. Estimators derived from log-linear models are widely applied by epidemiologists when analyzing CRC data. The popularity of the log-linear model framework is largely associated with its accessibility and the fact that interaction terms can allow for certain types of dependency among data streams. In this work, we shed new light on significant pitfalls associated with the log-linear model framework in the context of CRC using real data examples and simulation studies. First, we demonstrate that the log-linear model paradigm is highly exclusionary: that is, it can exclude, by design, many possible estimates that are potentially consistent with the observed data. Second, we clarify the ways in which regularly used model selection metrics (e.g., information criteria) can be fundamentally misleading when used to select a best model in this setting. By focusing attention on these important cautionary points and on the fundamental untestable dependency assumption made when fitting a log-linear model to CRC data, we hope to improve the quality of, and transparency associated with, subsequent surveillance-based CRC estimates of case counts.
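For concreteness, with three overlapping surveillance streams the observed cells of the incomplete $2^3$ contingency table are commonly modeled as

\[
\log \mathbb{E}[n_{ijk}] \;=\; \lambda + \lambda_1 i + \lambda_2 j + \lambda_3 k + \lambda_{12} ij + \lambda_{13} ik + \lambda_{23} jk,
\qquad (i,j,k) \in \{0,1\}^3 \setminus \{(0,0,0)\},
\]

where $i,j,k$ indicate capture by streams 1, 2, and 3. The count $n_{000}$ of cases missed by all three streams is unobserved and is extrapolated from the fitted model as $\exp(\hat\lambda)$; the three-way interaction cannot be estimated from the observed cells and is implicitly assumed away, which is one facet of the untestable dependency assumption referred to above.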

This paper introduces Artificial Intelligence Clinics on Mobile (AICOM), an open-source project devoted to addressing United Nations Sustainable Development Goal 3 (SDG3) on health, which represents a universal recognition that health is fundamental to human capital and to social and economic development. The core motivation for the AICOM project is the fact that over 80% of the people in the least developed countries (LDCs) own a mobile phone, even though fewer than 40% of these people have internet access. Hence, enabling AI-based disease diagnostics and screening on affordable mobile phones without connectivity is a critical first step toward addressing healthcare access problems. The technologies developed in the AICOM project achieve exactly this goal, and we have demonstrated the effectiveness of AICOM on monkeypox screening tasks. We plan to continue expanding and open-sourcing the AICOM platform, aiming for it to evolve into a universal AI doctor for the Underserved and Hard-to-Reach.

This paper focuses on the expected difference in borrowers' repayment when there is a change in the lender's credit decisions. Classical estimators overlook confounding effects, and hence their estimation error can be substantial. We therefore propose an alternative approach to constructing the estimators so that this error is greatly reduced. The proposed estimators are shown to be unbiased, consistent, and robust through a combination of theoretical analysis and numerical testing. Moreover, we compare the classical and proposed estimators in terms of their power to estimate the causal quantities. The comparison is carried out across a wide range of models, including linear regression models, tree-based models, and neural network-based models, under simulated datasets that exhibit different levels of causality, degrees of nonlinearity, and distributional properties. Most importantly, we apply our approach to a large observational dataset provided by a global technology firm that operates in both the e-commerce and the lending business. We find that the relative reduction in estimation error is striking when the causal effects are accounted for correctly.
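The abstract does not spell out the proposed estimators, so the sketch below only illustrates the generic ingredient they must address: adjusting for confounders that drive both the credit decision and repayment. It shows a standard inverse-propensity-weighted contrast (our choice of method and names, not the paper's estimator).

import numpy as np
from sklearn.linear_model import LogisticRegression

def ipw_repayment_difference(X, decision, repayment):
    # X: observed confounders used by the lender; decision: 0/1 credit decision;
    # repayment: observed repayment outcome.
    decision = np.asarray(decision)
    repayment = np.asarray(repayment, dtype=float)
    # Propensity of receiving decision = 1 given the confounders.
    propensity = LogisticRegression(max_iter=1000).fit(X, decision).predict_proba(X)[:, 1]
    propensity = np.clip(propensity, 1e-3, 1 - 1e-3)  # guard against extreme weights
    treated = decision == 1
    n = len(decision)
    # Horvitz-Thompson contrast of the two potential-outcome means.
    mean_if_approved = np.sum(repayment[treated] / propensity[treated]) / n
    mean_if_declined = np.sum(repayment[~treated] / (1.0 - propensity[~treated])) / n
    return mean_if_approved - mean_if_declined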

Federated learning is a new distributed machine learning framework in which a set of heterogeneous clients collaboratively train a model without sharing their training data. In this work, we consider a practical and ubiquitous issue in federated learning: intermittent client availability, where the set of eligible clients may change during the training process. Such intermittent client availability significantly degrades the performance of the classical Federated Averaging algorithm (FedAvg for short). We propose a simple distributed non-convex optimization algorithm, called Federated Latest Averaging (FedLaAvg for short), which leverages the latest gradients of all clients, even when the clients are not available, to jointly update the global model in each iteration. Our theoretical analysis shows that FedLaAvg attains a convergence rate of $O(1/(N^{1/4} T^{1/2}))$, achieving a sublinear speedup with respect to the total number of clients. We implement and evaluate FedLaAvg on the CIFAR-10 dataset. The evaluation results demonstrate that FedLaAvg indeed achieves a sublinear speedup and attains 4.23% higher test accuracy than FedAvg.
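A minimal sketch of the latest-gradient-averaging idea as described above: the server caches each client's most recent gradient and, at every round, averages the cached gradients of all $N$ clients, so clients that are currently unavailable still contribute their last reported gradient. The interface and names are ours, not the authors' implementation.

import numpy as np

def fed_la_avg(init_model, client_grads, available, rounds, lr):
    # client_grads[i](model) returns client i's gradient at the current model;
    # available(t) returns the indices of clients reachable at round t.
    model = np.array(init_model, dtype=float)
    n = len(client_grads)
    latest = [np.zeros_like(model) for _ in range(n)]  # cached latest gradients
    for t in range(rounds):
        for i in available(t):                  # only reachable clients refresh
            latest[i] = client_grads[i](model)  # their cached gradient
        # Update with the average of the *latest* gradients of all N clients,
        # including those currently unavailable (clients not yet seen contribute
        # a zero gradient in this simplified sketch).
        model -= lr * np.mean(latest, axis=0)
    return model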
