
The edges of the characteristic imset polytope, $\operatorname{CIM}_p$, were recently shown to have strong connections to causal discovery, since many algorithms can be interpreted as greedy restricted edge-walks, even though only a strict subset of the edges is known. To better understand the general edge structure of the polytope, we describe the edge structure of faces with a clear combinatorial interpretation: for any undirected graph $G$ we have the face $\operatorname{CIM}_G$, the convex hull of the characteristic imsets of DAGs with skeleton $G$. We give a full edge-description of $\operatorname{CIM}_G$ when $G$ is a tree, leading to interesting connections to other polytopes. In particular, the well-studied stable set polytope can be recovered as a face of $\operatorname{CIM}_G$ when $G$ is a tree. Building on this connection, we are also able to describe all edges of $\operatorname{CIM}_G$ when $G$ is a cycle, suggesting possible inroads for generalization. We then introduce an algorithm for learning directed trees from data, utilizing our newly discovered edges, that outperforms classical methods on simulated Gaussian data.
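
As an illustration of the objects involved: below is a minimal Python sketch of the standard characteristic imset of a DAG (the 0/1 vector indexed by vertex subsets of size at least two), computed by brute force over subsets. The dictionary encoding of the DAG is an illustrative choice, not the paper's code.

```python
from itertools import combinations

def characteristic_imset(parents):
    """Characteristic imset of a DAG given as {node: set of parents}.

    For each subset S with |S| >= 2, the coordinate c(S) is 1 iff some
    i in S has S \\ {i} contained in pa(i), and 0 otherwise.
    """
    nodes = sorted(parents)
    imset = {}
    for k in range(2, len(nodes) + 1):
        for S in combinations(nodes, k):
            imset[S] = int(any(set(S) - {i} <= parents[i] for i in S))
    return imset

# Two DAGs with the same skeleton (the path a - b - c) but different imsets:
chain = characteristic_imset({"a": set(), "b": {"a"}, "c": {"b"}})
collider = characteristic_imset({"a": set(), "b": {"a", "c"}, "c": set()})
print(chain[("a", "b", "c")], collider[("a", "b", "c")])  # 0 1
```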

Related content

We consider the computational problem of finding short paths in the skeleton of the perfect matching polytope of a bipartite graph. We prove that unless $P=NP$, there is no polynomial-time algorithm that computes a path of constant length between two vertices at distance two of the perfect matching polytope of a bipartite graph. Conditioned on $P\neq NP$, this disproves a conjecture by Ito, Kakimura, Kamiyama, Kobayashi and Okamoto [SIAM Journal on Discrete Mathematics, 36(2), pp. 1102-1123 (2022)]. Assuming the Exponential Time Hypothesis we prove the stronger result that there exists no polynomial-time algorithm computing a path of length at most $\left(\frac{1}{4}-o(1)\right)\frac{\log N}{\log \log N}$ between two vertices at distance two of the perfect matching polytope of an $N$-vertex bipartite graph. These results remain true if the bipartite graph is restricted to be of maximum degree three. The above has the following interesting implication for the performance of pivot rules for the simplex algorithm on simply-structured combinatorial polytopes: If $P\neq NP$, then for every simplex pivot rule executable in polynomial time and every constant $k \in \mathbb{N}$ there exists a linear program on a perfect matching polytope and a starting vertex of the polytope such that the optimal solution can be reached in two monotone steps from the starting vertex, yet the pivot rule will require at least $k$ steps to reach the optimal solution. This result remains true in the more general setting of pivot rules for so-called circuit-augmentation algorithms.
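
For context, vertices of the perfect matching polytope are perfect matchings, and two vertices are classically adjacent exactly when the symmetric difference of the two matchings is a single alternating cycle. The sketch below applies this criterion by brute force; the edge encoding is an assumption made for illustration.

```python
from collections import defaultdict

def are_adjacent(m1, m2):
    """Adjacency test on the perfect matching polytope.

    m1, m2: perfect matchings as sets of frozenset({u, v}) edges.
    Classically, two distinct perfect matchings are adjacent vertices
    iff their symmetric difference is a single alternating cycle.
    """
    diff = m1 ^ m2
    if not diff:
        return False  # same matching, same vertex
    adj = defaultdict(set)
    for e in diff:
        u, v = tuple(e)
        adj[u].add(v)
        adj[v].add(u)
    # Every vertex touched by the difference has degree exactly 2 (one
    # edge from each matching), so the difference is a disjoint union of
    # cycles; adjacency holds iff it is connected, i.e. a single cycle.
    start = next(iter(adj))
    seen, stack = {start}, [start]
    while stack:
        for w in adj[stack.pop()]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == len(adj)

# C6 with its two disjoint perfect matchings; their difference is one 6-cycle:
E = lambda u, v: frozenset((u, v))
m1 = {E(0, 1), E(2, 3), E(4, 5)}
m2 = {E(1, 2), E(3, 4), E(5, 0)}
print(are_adjacent(m1, m2))  # True
```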

Bayesian Networks (BNs) have become increasingly popular over the last few decades as a tool for reasoning under uncertainty in fields as diverse as medicine, biology, epidemiology, economics and the social sciences. This is especially true in real-world areas where we seek to answer complex questions based on hypothetical evidence to determine actions for intervention. However, determining the graphical structure of a BN remains a major challenge, especially when modelling a problem under causal assumptions. Solutions to this problem include the automated discovery of BN graphs from data, constructing them based on expert knowledge, or a combination of the two. This paper provides a comprehensive review of combinatorial algorithms proposed for learning BN structure from data, describing 74 algorithms including prototypical, well-established and state-of-the-art approaches. The basic approach of each algorithm is described in consistent terms, and the similarities and differences between them are highlighted. Methods of evaluating algorithms and their comparative performance are discussed, including the consistency of claims made in the literature. Approaches for dealing with data noise in real-world datasets and for incorporating expert knowledge into the learning process are also covered.
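
None of the 74 algorithms is specified in the abstract, but as a minimal illustration of the prototypical score-based approach such reviews cover, here is a greedy hill-climbing sketch over DAGs. The `score` callable (a caller-supplied decomposable score such as BIC) and the restricted move set are assumptions for illustration.

```python
from itertools import permutations

def hill_climb(nodes, score, max_iter=100):
    """Greedy score-based structure search: repeatedly apply the single
    edge addition or deletion that most improves the decomposable score
    score(node, parent_set), stopping at a local optimum."""
    parents = {v: set() for v in nodes}

    def creates_cycle(u, v):
        # Adding u -> v closes a directed cycle iff v already reaches u.
        seen, stack = set(), [v]
        while stack:
            w = stack.pop()
            if w == u:
                return True
            if w not in seen:
                seen.add(w)
                stack.extend(x for x in nodes if w in parents[x])
        return False

    for _ in range(max_iter):
        best_delta, best_move = 0.0, None
        for u, v in permutations(nodes, 2):
            if u in parents[v]:                 # candidate deletion
                new_ps = parents[v] - {u}
            elif not creates_cycle(u, v):       # candidate addition
                new_ps = parents[v] | {u}
            else:
                continue
            delta = score(v, new_ps) - score(v, parents[v])
            if delta > best_delta:
                best_delta, best_move = delta, (v, new_ps)
        if best_move is None:
            break  # local optimum reached
        v, ps = best_move
        parents[v] = ps
    return parents
```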

An integrated experimental, computational, and non-deterministic approach is demonstrated to predict the damage tolerance of an aluminum plate reinforced with a co-cured bonded quasi-isotropic E-glass/epoxy composite overlay and to determine the most sensitive material parameters and their ranges of influence on the damage tolerance of the hybrid system. To simulate the complex progressive damage in the repaired structure, a high fidelity three-dimensional finite element model is developed and validated using four-point bend testing to investigate potential damage mechanisms. A surrogate model is then generated to explore the complex parameter space of this model. Global sensitivity analysis and uncertainty quantification are performed for non-deterministic analysis to characterize the energy absorption capability of the patched structure relative to these influential design properties. Additionally, correlating the data quality of the material parameters with the sensitivity analysis results provides practical guidelines for model improvement and the design optimization of the patched structure.
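
The abstract does not name a particular sensitivity estimator; below is a generic pick-freeze Monte Carlo sketch for first-order Sobol indices, of the kind one could run against such a surrogate model. The toy model and sample sizes are purely illustrative.

```python
import numpy as np

def sobol_first_order(model, d, n=100_000, rng=None):
    """Monte Carlo first-order Sobol indices via the pick-freeze scheme.

    model: vectorized function of an (n, d) array of inputs in [0, 1)^d.
    Uses S_i ~ mean(y_B * (y_ABi - y_A)) / Var(y), where ABi is sample
    matrix A with column i replaced by B's column i.
    """
    rng = rng or np.random.default_rng(0)
    A, B = rng.random((n, d)), rng.random((n, d))
    y_A, y_B = model(A), model(B)
    var = np.var(np.concatenate([y_A, y_B]))
    S = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]
        S[i] = np.mean(y_B * (model(ABi) - y_A)) / var
    return S

# Toy surrogate y = x0 + 0.5*x1: analytic indices are (0.8, 0.2).
print(sobol_first_order(lambda X: X[:, 0] + 0.5 * X[:, 1], d=2))
```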

Let $L$ be a finite lattice and $\mathcal{E}(L)$ be the set of join endomorphisms of $L$. We consider the following problem: given $L$ and $f,g \in \mathcal{E}(L)$, find the greatest lower bound $f \sqcap_{\mathcal{E}(L)} g$ in the lattice $\mathcal{E}(L)$. (1) We show that if $L$ is distributive, the problem can be solved in time $O(n)$ where $n=|L|$. The previous upper bound was $O(n^2)$. (2) We provide new algorithms for arbitrary lattices and give experimental evidence that they are significantly faster than the existing algorithm. (3) We characterize the standard notion of distributed knowledge of a group as the greatest lower bound of the join endomorphisms representing the knowledge of each member of the group. (4) We show that deciding whether an agent has the distributed knowledge of two other agents can be done in time $O(n^2)$ where $n$ is the size of the underlying set of states. (5) For the special case of $S5$ knowledge, we show that it can be decided in time $O(n\alpha_{n})$ where $\alpha_{n}$ is the inverse of the Ackermann function.
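
This is not the paper's $O(n)$ algorithm; it is only the defining property of a join endomorphism turned into an executable check, with an illustrative encoding of a finite lattice as a join table.

```python
from itertools import product

def is_join_endomorphism(f, join, bottom):
    """Check the defining property of a join endomorphism on a finite
    lattice: f(bottom) = bottom and f(a v b) = f(a) v f(b) for all a, b.
    f: dict element -> image; join: dict (a, b) -> a v b."""
    return f[bottom] == bottom and all(
        f[join[a, b]] == join[f[a], f[b]] for a, b in product(f, repeat=2))

# Chain 0 < 1 < 2 with join = max; f sends 1 up to the top element:
L = [0, 1, 2]
join = {(a, b): max(a, b) for a in L for b in L}
print(is_join_endomorphism({0: 0, 1: 2, 2: 2}, join, bottom=0))  # True
```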

We study the problem of covering and learning sums $X = X_1 + \cdots + X_n$ of independent integer-valued random variables $X_i$ (SIIRVs) with unbounded, or even infinite, support. De et al. (FOCS 2018) showed that the maximum value of the collective support of the $X_i$'s necessarily appears in the sample complexity of learning $X$. In this work, we address two questions: (i) Are there general families of SIIRVs with unbounded support that can be learned with sample complexity independent of both $n$ and the maximal element of the support? (ii) Are there general families of SIIRVs with unbounded support that admit proper sparse covers in total variation distance? As for question (i), we provide a set of simple conditions that allow the unbounded SIIRV to be learned with complexity $\text{poly}(1/\epsilon)$, bypassing the aforementioned lower bound. We further address question (ii) in the general setting where each variable $X_i$ has a unimodal probability mass function and is a different member of some, possibly multi-parameter, exponential family $\mathcal{E}$ that satisfies some structural properties. These properties allow $\mathcal{E}$ to contain heavy-tailed and non-log-concave distributions. Moreover, we show that for every $\epsilon > 0$ and every $k$-parameter family $\mathcal{E}$ that satisfies some structural assumptions, there exists an algorithm with $\tilde{O}(k) \cdot \text{poly}(1/\epsilon)$ samples that learns a sum of $n$ arbitrary members of $\mathcal{E}$ within $\epsilon$ in TV distance. The output of the learning algorithm is also a sum of random variables whose distribution lies in the family $\mathcal{E}$. En route, we prove that any discrete unimodal exponential family with bounded constant-degree central moments can be approximated by the family corresponding to a bounded subset of the initial (unbounded) parameter space.
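
Since the guarantees above are all stated in total variation distance, a short reference implementation for finitely supported distributions may help fix notation; the dictionary encoding is illustrative.

```python
def tv_distance(p, q):
    """Total variation distance between two discrete distributions,
    given as {outcome: probability} dicts: d_TV = (1/2) * sum |p - q|."""
    support = set(p) | set(q)
    return 0.5 * sum(abs(p.get(x, 0.0) - q.get(x, 0.0)) for x in support)

print(tv_distance({0: 0.5, 1: 0.5}, {0: 0.8, 1: 0.2}))  # 0.3
```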

We build a general framework which establishes a one-to-one correspondence between species abundance distribution (SAD) and species accumulation curve (SAC). The appearance rates of the species and the appearance times of individuals of each species are modeled as Poisson processes. The number of species can be finite or infinite. Hill numbers are extended to the framework. We introduce a linear derivative ratio family of models, $\mathrm{LDR}_1$, in which the ratio of the first and second derivatives of the expected SAC is a linear function. A D1/D2 plot is proposed to detect this linear pattern in the data. By extrapolating the curve in the D1/D2 plot, we obtain a species richness estimator that extends the Chao1 estimator. The SAD of $\mathrm{LDR}_1$ is Engen's extended negative binomial distribution, and the SAC encompasses several popular parametric forms, including the power law. Family $\mathrm{LDR}_1$ is extended in two ways: $\mathrm{LDR}_2$, which allows species with zero detection probability, and $\mathrm{RDR}_1$, where the derivative ratio is a rational function. Real data are analyzed to demonstrate the proposed methods. We also consider the scenario where we record only a few leading appearance times of each species. We show how maximum likelihood inference can be performed when only the empirical SAC is observed, and elucidate its advantages over the traditional curve-fitting method.
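
For reference, the classical Chao1 estimator that the proposed D1/D2 extrapolation extends can be computed as follows; this is the standard textbook formula, not the paper's extended estimator.

```python
from collections import Counter

def chao1(abundances):
    """Classical Chao1 species-richness estimator:
    S_hat = S_obs + f1^2 / (2 * f2), where f1 and f2 count singleton and
    doubleton species; the bias-corrected form f1*(f1-1) / (2*(f2+1)) is
    used when f2 = 0. `abundances`: count of each detected species."""
    counts = Counter(abundances)
    s_obs = sum(1 for a in abundances if a > 0)
    f1, f2 = counts[1], counts[2]
    if f2 > 0:
        return s_obs + f1 * f1 / (2 * f2)
    return s_obs + f1 * (f1 - 1) / (2 * (f2 + 1))

print(chao1([1, 1, 1, 2, 2, 3, 5, 8]))  # 8 observed + 3^2/(2*2) = 10.25
```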

A number of rules for resolving majority cycles in elections have been proposed in the literature. Recently, Holliday and Pacuit (Journal of Theoretical Politics 33 (2021) 475-524) axiomatically characterized one such cycle-resolving rule, dubbed Split Cycle: in each majority cycle, discard the majority preferences with the smallest majority margin. They showed that any rule satisfying five standard axioms, plus a weakening of Arrow's Independence of Irrelevant Alternatives (IIA) called Coherent IIA, is refined by Split Cycle. In this paper, we go further and show that Split Cycle is the only rule satisfying the axioms of Holliday and Pacuit together with two additional axioms: Coherent Defeat and Positive Involvement in Defeat. Coherent Defeat states that any majority preference not occurring in a cycle is retained, while Positive Involvement in Defeat is closely related to the well-known axiom of Positive Involvement (as in J. P\'{e}rez, Social Choice and Welfare 18 (2001) 601-616). We characterize Split Cycle not only as a collective choice rule but also as a social choice correspondence, over both profiles of linear ballots and profiles of ballots allowing ties.
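
A known computational characterization of Split Cycle (a majority preference survives iff its margin strictly exceeds the strength of the strongest reverse path, where a path's strength is its minimum margin) can be sketched as follows; the margin-matrix encoding is illustrative.

```python
from itertools import product

def split_cycle_defeats(margin, candidates):
    """Split Cycle defeat relation from a margin matrix.

    margin[a][b] = (voters preferring a to b) - (voters preferring b to a).
    a defeats b iff margin[a][b] > 0 and margin[a][b] exceeds the strength
    of the strongest path from b back to a in the positive-margin graph
    (computed here with Floyd-Warshall widest paths).
    """
    strength = {x: {y: margin[x][y] if margin[x][y] > 0 else float("-inf")
                    for y in candidates} for x in candidates}
    for k, x, y in product(candidates, repeat=3):  # k varies slowest
        strength[x][y] = max(strength[x][y],
                             min(strength[x][k], strength[k][y]))
    return {(a, b) for a in candidates for b in candidates
            if margin[a][b] > 0 and margin[a][b] > strength[b][a]}

# Majority cycle a > b > c > a with margins 4, 6, 2; the weakest
# majority preference (c over a, margin 2) is discarded:
M = {"a": {"a": 0, "b": 4, "c": -2},
     "b": {"a": -4, "b": 0, "c": 6},
     "c": {"a": 2, "b": -6, "c": 0}}
print(sorted(split_cycle_defeats(M, ["a", "b", "c"])))
# [('a', 'b'), ('b', 'c')]
```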

The independent set polynomial is important in many areas. For every integer $\Delta\geq 2$, the Shearer threshold is the value $\lambda^*(\Delta)=(\Delta-1)^{\Delta-1}/\Delta^{\Delta}$. It is known that for $\lambda < - \lambda^*(\Delta)$, there are graphs $G$ with maximum degree $\Delta$ whose independent set polynomial, evaluated at $\lambda$, is at most $0$. Also, there are no such graphs for any $\lambda > -\lambda^*(\Delta)$. This paper is motivated by the computational problem of approximating the independent set polynomial when $\lambda < - \lambda^*(\Delta)$. The key issue in complexity bounds for this problem is "implementation". Informally, an implementation of a real number $\lambda'$ is a graph whose hard-core partition function, evaluated at $\lambda$, simulates a vertex-weight of $\lambda'$ in the sense that $\lambda'$ is the ratio between the contribution to the partition function from independent sets containing a certain vertex and the contribution from independent sets that do not contain that vertex. Implementations are the cornerstone of intractability results for the problem of approximately evaluating the independent set polynomial. Our main result is that, for any $\lambda < - \lambda^*(\Delta)$, it is possible to implement a set of values that is dense over the reals. The result is tight in the sense that it is not possible to implement a set of values that is dense over the reals for any $\lambda > \lambda^*(\Delta)$. Our result has already been used in a paper with Bez\'{a}kov\'{a} (STOC 2018) to show that it is \#P-hard to approximate the evaluation of the independent set polynomial on graphs of degree at most $\Delta$ at any value $\lambda<-\lambda^*(\Delta)$. In the appendix, we give an additional, incomparable inapproximability result (strengthening the inapproximability bound to an exponential factor, but weakening the hardness to NP-hardness).
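
The notion of "implementation" defined above is concrete enough to compute by brute force on small graphs; a sketch, with an illustrative adjacency-dict encoding:

```python
from itertools import combinations

def hard_core_Z(adj, lam):
    """Hard-core partition function Z_G(lambda): the sum of lambda^|I|
    over all independent sets I, by brute force over vertex subsets."""
    V = list(adj)
    return sum(lam ** k
               for k in range(len(V) + 1)
               for S in combinations(V, k)
               if all(v not in adj[u] for u, v in combinations(S, 2)))

def implemented_value(adj, lam, v):
    """The value lambda' implemented at vertex v: the ratio between the
    contribution to Z from independent sets containing v (lam times Z of
    the graph minus v's closed neighborhood) and the contribution from
    independent sets avoiding v (Z of the graph minus v)."""
    closed = adj[v] | {v}
    adj_in = {u: adj[u] - closed for u in adj if u not in closed}
    adj_out = {u: adj[u] - {v} for u in adj if u != v}
    return lam * hard_core_Z(adj_in, lam) / hard_core_Z(adj_out, lam)

# A single edge at lambda = -0.3 implements -0.3 / 0.7 at either endpoint:
adj = {"u": {"w"}, "w": {"u"}}
print(implemented_value(adj, -0.3, "u"))  # -0.42857...
```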

We prove two sets of results concerning computational complexity classes. The first concerns a variation of the random oracle hypothesis posed by Bennett and Gill after they showed that, relative to a randomly chosen oracle, $P \neq NP$ with probability 1. This hypothesis was quickly disproven in several ways, most famously in 1992 with the result that IP equals PSPACE, in spite of the two classes being shown unequal with probability 1. Here we propose a variation of what it means to be ``large'' using the Ellentuck topology. In this new context, we demonstrate that the set of oracles separating NP and co-NP is not small, and we obtain similar results for the separation of PSPACE from PH and the separation of NP from BQP. We demonstrate that this version of the hypothesis turns it into a sufficient condition for unrelativized relationships, at least in the three cases considered here. Second, we examine the descriptive complexity of the classes of oracles providing the separations for these various classes, and determine their exact placement in the Borel hierarchy.

Many view deep learning as a "black box" used only for forecasting. However, this paper provides an alternative application by constructing a structural deep neural network to generate latent factors in asset pricing. The conventional approach of sorting firm characteristics to generate long-short factor portfolio weights is an underappreciated form of nonlinear modeling. First, we describe the complete mechanism for fitting cross-sectional returns with firm characteristics through risk factors. Second, unlike statistical models, our model has an economics-guided objective function that minimizes pricing errors. Empirically, we find improvements in asset pricing and investment using individual stocks and test portfolios, for both in-sample and out-of-sample analysis.
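
The abstract does not specify the architecture, so the toy numpy sketch below (all variable names and the synthetic data are mine) only illustrates the general idea: map firm characteristics to long-short factor weights with a small network and score them by squared pricing errors. Gradient training over the weights is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K, P = 200, 3, 10              # stocks, latent factors, characteristics
Z = rng.standard_normal((N, P))   # firm characteristics, one cross-section
r = rng.standard_normal(N)        # stock returns for the same period

# Hypothetical two-layer network mapping characteristics to K weight
# vectors; demeaning makes each factor a zero-investment (long-short)
# portfolio, a nonlinear generalization of sorting on characteristics.
W1 = 0.1 * rng.standard_normal((P, 32))
W2 = 0.1 * rng.standard_normal((32, K))
w = np.tanh(Z @ W1) @ W2
w -= w.mean(axis=0)

# Economic objective: the squared pricing errors (alphas) left once the
# cross-section of returns is spanned by the generated factor portfolios;
# training would minimize this loss over W1 and W2.
g, *_ = np.linalg.lstsq(w, r, rcond=None)
alpha = r - w @ g
print("pricing-error loss:", np.mean(alpha ** 2))
```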
