A family of sets is $(p,q)$-intersecting if every nonempty subfamily of $p$ or fewer sets has at least $q$ elements in its total intersection. A family of sets has the $(p,q)$-Helly property if every nonempty $(p,q)$-intersecting subfamily has total intersection of cardinality at least $q$. The $(2,1)$-Helly property is the usual Helly property. A hypergraph is $(p,q)$-Helly if its edge family has the $(p,q)$-Helly property and hereditary $(p,q)$-Helly if each of its subhypergraphs has the $(p,q)$-Helly property. A graph is $(p,q)$-clique-Helly if the family of its maximal cliques has the $(p,q)$-Helly property and hereditary $(p,q)$-clique-Helly if each of its induced subgraphs is $(p,q)$-clique-Helly. The classes of $(p,q)$-biclique-Helly and hereditary $(p,q)$-biclique-Helly graphs are defined analogously. We prove several characterizations of hereditary $(p,q)$-Helly hypergraphs, including one by minimal forbidden partial subhypergraphs. We give an improved time bound for the recognition of $(p,q)$-Helly hypergraphs for each fixed $q$ and show that the recognition of hereditary $(p,q)$-Helly hypergraphs can be solved in polynomial time if $p$ and $q$ are fixed but co-NP-complete if $p$ is part of the input. In addition, we generalize to $(p,q)$-clique-Helly graphs the characterization of $p$-clique-Helly graphs in terms of expansions and give different characterizations of hereditary $(p,q)$-clique-Helly graphs, including one by forbidden induced subgraphs. We give an improvement on the time bound for the recognition of $(p,q)$-clique-Helly graphs and prove that the recognition problem of hereditary $(p,q)$-clique-Helly graphs is polynomial-time solvable for $p$ and $q$ fixed but NP-hard if $p$ or $q$ is part of the input. Finally, we provide different characterizations, give recognition algorithms, and prove hardness results for (hereditary) $(p,q)$-biclique-Helly graphs.
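
As a concrete illustration of these definitions, the following brute-force sketch (function names and the triangle example are my own, and the search is exponential in the family size) checks the $(p,q)$-Helly property of a small set family directly from the definitions above.
```python
# A brute-force check of the (p,q)-Helly property for a small set family,
# following the definitions in the abstract. Purely illustrative and
# exponential in the family size; function names are my own.
from itertools import combinations
from functools import reduce


def total_intersection(sets):
    """Intersection of a nonempty collection of sets."""
    return reduce(lambda a, b: a & b, sets)


def is_pq_intersecting(family, p, q):
    """Every nonempty subfamily of at most p sets meets in >= q elements."""
    for r in range(1, min(p, len(family)) + 1):
        for sub in combinations(family, r):
            if len(total_intersection(sub)) < q:
                return False
    return True


def has_pq_helly_property(family, p, q):
    """Every nonempty (p,q)-intersecting subfamily meets in >= q elements."""
    for r in range(1, len(family) + 1):
        for sub in combinations(family, r):
            if is_pq_intersecting(sub, p, q) and len(total_intersection(sub)) < q:
                return False
    return True


if __name__ == "__main__":
    # The edge family of a triangle {1,2}, {2,3}, {1,3} is pairwise
    # intersecting but has empty total intersection, so it is not (2,1)-Helly.
    triangle = [{1, 2}, {2, 3}, {1, 3}]
    print(has_pq_helly_property(triangle, p=2, q=1))  # False
```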

Related content

This paper uses Gaussian mixture models instead of linear Gaussian models to fit the distribution of every node in a Bayesian network. We explain why and how we use Gaussian mixture models in Bayesian networks. We also propose a new method, called the double iteration algorithm, to optimize the mixture model: it combines the expectation-maximization algorithm with gradient descent, and it performs well on Bayesian networks with mixture models. In the experiments we test the Gaussian mixture model and the optimization algorithm on different graphs generated by different structure learning algorithms on real data sets, and give the details of every experiment.
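
The abstract does not spell out the details of the double iteration algorithm, so the sketch below only illustrates the EM component on a plain univariate two-component Gaussian mixture; the combination with gradient descent and the embedding into a Bayesian network are not reproduced here.
```python
# A minimal EM fit for a univariate two-component Gaussian mixture, meant
# only to illustrate the EM side of the approach sketched above; the paper's
# "double iteration" combination with gradient descent is not reproduced.
import numpy as np


def em_gmm_1d(x, n_components=2, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    n = len(x)
    # Random initialisation of weights, means and variances.
    w = np.full(n_components, 1.0 / n_components)
    mu = rng.choice(x, n_components, replace=False)
    var = np.full(n_components, np.var(x))

    for _ in range(n_iter):
        # E-step: responsibilities of each component for each point.
        dens = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        resp = w * dens
        resp /= resp.sum(axis=1, keepdims=True)

        # M-step: closed-form updates of the mixture parameters.
        nk = resp.sum(axis=0)
        w = nk / n
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return w, mu, var


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    data = np.concatenate([rng.normal(-2, 1, 500), rng.normal(3, 0.5, 500)])
    print(em_gmm_1d(data))
```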

The approximate uniform sampling of graph realizations with a given degree sequence is an everyday task in many social science, computer science, and engineering projects. One approach is to use Markov chains. The best currently available result about the well-studied switch Markov chain is that it is rapidly mixing on P-stable degree sequences (see DOI:10.1016/j.ejc.2021.103421). The switch Markov chain does not change the degree sequence. However, there are cases where degree intervals are specified rather than a single degree sequence. (A natural scenario where this problem arises is hypothesis testing on social networks that are only partially observed.) Rechner, Strowick, and M\"uller-Hannemann introduced in 2018 the notion of the degree interval Markov chain, which uses three (separately well-studied) local operations (switch, hinge-flip and toggle) and operates on degree sequence realizations where any two sequences under scrutiny have very small coordinate-wise distance. Recently Amanatidis and Kleer published a beautiful paper (arXiv:2110.09068) showing that the degree interval Markov chain is rapidly mixing if the sequences come from a system of very thin intervals which are centered not far from a regular degree sequence. In this paper we substantially extend their result, showing that the degree interval Markov chain is rapidly mixing if the intervals are centered at P-stable degree sequences.
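
For readers unfamiliar with the switch operation, here is a minimal sketch of one step of the switch Markov chain on a simple graph stored as a set of frozenset edges; it only illustrates why every switch preserves the degree sequence and says nothing about mixing times. The helper name is my own.
```python
# One step of the switch Markov chain on a simple graph: replace two edges
# {a,b},{c,d} by {a,d},{c,b} whenever the result is again a simple graph.
# Every vertex keeps its degree. Mixing-time questions are not addressed here.
import random


def switch_step(edges, rng=random):
    """Attempt one switch move in place; return True if a switch was applied."""
    e1, e2 = rng.sample(list(edges), 2)
    a, b = tuple(e1)
    c, d = tuple(e2)
    # Randomly choose one of the two ways to re-pair the four endpoints.
    if rng.random() < 0.5:
        c, d = d, c
    new1, new2 = frozenset((a, d)), frozenset((c, b))
    # Reject if the move would create a loop or a parallel edge.
    if len(new1) < 2 or len(new2) < 2 or new1 in edges or new2 in edges:
        return False
    edges -= {e1, e2}
    edges |= {new1, new2}
    return True


if __name__ == "__main__":
    # A 6-cycle; its degree sequence (all twos) is preserved by every switch.
    cycle = {frozenset((i, (i + 1) % 6)) for i in range(6)}
    for _ in range(100):
        switch_step(cycle)
    print(sorted(sum(v in e for e in cycle) for v in range(6)))  # [2, 2, 2, 2, 2, 2]
```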

Conditional independence models associated with directed acyclic graphs (DAGs) may be characterized in at least three different ways: via a factorization, the global Markov property (given by the d-separation criterion), and the local Markov property. Marginals of DAG models also imply equality constraints that are not conditional independences; the well-known `Verma constraint' is an example. Constraints of this type are used for testing edges, and in a computationally efficient marginalization scheme via variable elimination. We show that equality constraints like the `Verma constraint' can be viewed as conditional independences in kernel objects obtained from joint distributions via a fixing operation that generalizes conditioning and marginalization. We use these constraints to define, via ordered local and global Markov properties, and a factorization, a graphical model associated with acyclic directed mixed graphs (ADMGs). We prove that marginal distributions of DAG models lie in this model, and that a set of these constraints given by Tian provides an alternative definition of the model. Finally, we show that the fixing operation used to define the model leads to a particularly simple characterization of identifiable causal effects in hidden variable causal DAG models.
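
As background, the sketch below implements only the classical d-separation (global Markov) criterion for DAGs via the usual ancestral moral graph construction; the fixing operation, ADMGs and the nested model defined in the paper are not touched. The DAG encoding as a parent dictionary is my own choice.
```python
# d-separation for DAGs via the ancestral moral graph: restrict to the
# ancestors of X ∪ Y ∪ Z, moralise, delete Z, and test whether X and Y
# are disconnected. The DAG maps each node to the list of its parents.
from collections import deque


def ancestors(parents, nodes):
    seen = set(nodes)
    stack = list(nodes)
    while stack:
        for p in parents.get(stack.pop(), ()):
            if p not in seen:
                seen.add(p)
                stack.append(p)
    return seen


def d_separated(parents, X, Y, Z):
    """True iff X and Y are d-separated by Z in the DAG `parents`."""
    keep = ancestors(parents, set(X) | set(Y) | set(Z))
    # Moralise: connect co-parents, then forget edge directions.
    adj = {v: set() for v in keep}
    for v in keep:
        ps = [p for p in parents.get(v, ()) if p in keep]
        for p in ps:
            adj[v].add(p)
            adj[p].add(v)
        for i, p in enumerate(ps):
            for q in ps[i + 1:]:
                adj[p].add(q)
                adj[q].add(p)
    # Remove Z and search for a path from X to Y in the moral graph.
    blocked = set(Z)
    frontier = deque(x for x in X if x not in blocked)
    reached = set(frontier)
    while frontier:
        v = frontier.popleft()
        for w in adj[v]:
            if w not in blocked and w not in reached:
                reached.add(w)
                frontier.append(w)
    return not (reached & set(Y))


if __name__ == "__main__":
    # Collider a -> c <- b: a and b are d-separated given {} but not given {c}.
    dag = {"a": [], "b": [], "c": ["a", "b"]}
    print(d_separated(dag, {"a"}, {"b"}, set()))  # True
    print(d_separated(dag, {"a"}, {"b"}, {"c"}))  # False
```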

Faces play a central role in the combinatorial and computational aspects of polyhedra. In this paper, we present the first formalization of faces of polyhedra in the proof assistant Coq. This builds on the formalization of a library providing the basic constructions and operations over polyhedra, including projections, convex hulls and images under linear maps. Moreover, we design a special mechanism which automatically introduces an appropriate representation of a polyhedron or a face, depending on the context of the proof. We demonstrate the usability of this approach by establishing some of the most important combinatorial properties of faces, namely that they constitute a family of graded atomistic and coatomistic lattices closed under interval sublattices. We also prove a theorem due to Balinski on the $d$-connectedness of the adjacency graph of polytopes of dimension $d$.

A natural way of increasing our understanding of NP-complete graph problems is to restrict the input to a special graph class. Classes of $H$-free graphs, that is, graphs that do not contain some graph $H$ as an induced subgraph, have proven to be an ideal testbed for such a complexity study. However, if the forbidden graph $H$ contains a cycle or claw, then these problems often stay NP-complete. A recent complexity study on the $k$-Colouring problem shows that we may still obtain tractable results if we also bound the diameter of the $H$-free input graph. We continue this line of research by initiating a complexity study on the impact of bounding the diameter for a variety of classical vertex partitioning problems restricted to $H$-free graphs. We prove that bounding the diameter does not help for Independent Set, but leads to new tractable cases for problems closely related to 3-Colouring. That is, we show that Near-Bipartiteness, Independent Feedback Vertex Set, Independent Odd Cycle Transversal, Acyclic 3-Colouring and Star 3-Colouring are all polynomial-time solvable for chair-free graphs of bounded diameter. To obtain these results we exploit a new structural property of 3-colourable chair-free graphs.

Let $m$ be a positive integer and $p$ a prime. In this paper, we investigate the differential properties of the power mapping $x^{p^m+2}$ over $\mathbb{F}_{p^n}$, where $n=2m$ or $n=2m-1$. For the case $n=2m$, by transforming the derivative equation of $x^{p^m+2}$ and studying some related equations, we completely determine the differential spectrum of this power mapping. For the case $n=2m-1$, the derivative equation can be transformed to a polynomial of degree $p+3$. The problem is more difficult and we obtain partial results about the differential spectrum of $x^{p^m+2}$.
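
To make the notion of a differential spectrum concrete, here is a brute-force sketch restricted to prime fields (so only the $n=2m-1$, $m=1$ case, where $\mathbb{F}_{p^n}=\mathbb{F}_p$); extension fields would need finite-field arithmetic not included here, and the output is not checked against the paper's results.
```python
# Brute-force differential spectrum of a power map x -> x^d over a prime
# field F_p. For a power map it suffices to count, for every b, the number
# of solutions of (x+1)^d - x^d = b.
from collections import Counter


def differential_spectrum(p, d):
    """Multiset {omega_i} where omega_i = #{b in F_p : delta(1, b) = i}."""
    delta = Counter((pow(x + 1, d, p) - pow(x, d, p)) % p for x in range(p))
    counts = Counter(delta[b] for b in range(p))
    return dict(sorted(counts.items()))


if __name__ == "__main__":
    # Example with p = 5 and m = 1, so d = p^m + 2 = 7 and n = 2m - 1 = 1.
    print(differential_spectrum(5, 7))  # {0: 2, 1: 1, 2: 2}
```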

In this short note, we show that for any $\epsilon >0$ and $k<n^{0.5-\epsilon}$ the choice number of the Kneser graph $KG_{n,k}$ is $\Theta (n\log n)$.
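
Purely to make the object in the statement concrete, here is a hypothetical construction of the Kneser graph $KG_{n,k}$ (vertices are the $k$-subsets of an $n$-set, edges join disjoint subsets), assuming networkx is available; nothing about choice numbers is computed.
```python
# Build the Kneser graph KG_{n,k}: k-subsets of {1,...,n}, adjacent iff disjoint.
from itertools import combinations
import networkx as nx


def kneser_graph(n, k):
    G = nx.Graph()
    vertices = [frozenset(c) for c in combinations(range(1, n + 1), k)]
    G.add_nodes_from(vertices)
    G.add_edges_from((u, v) for u, v in combinations(vertices, 2) if not (u & v))
    return G


if __name__ == "__main__":
    G = kneser_graph(5, 2)                           # KG_{5,2} is the Petersen graph
    print(G.number_of_nodes(), G.number_of_edges())  # 10 15
```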

Universal coding of integers~(UCI) is a class of variable-length codes such that the ratio of the expected codeword length to $\max\{1,H(P)\}$ is within a constant factor, where $H(P)$ is the Shannon entropy of the decreasing probability distribution $P$. However, if we instead consider the ratio of the expected codeword length to $H(P)$, then for UCI this ratio tends to infinity as $H(P)$ tends to zero. To address this issue, this paper introduces a class of codes, termed generalized universal coding of integers~(GUCI), such that the ratio of the expected codeword length to $H(P)$ is within a constant factor $K$. First, the definition of GUCI is proposed and the coding structure of GUCI is introduced. Next, we propose a class of GUCI $\mathcal{C}$ that achieves the expansion factor $K_{\mathcal{C}}=2$ and show that the optimal GUCI expansion factor is in the range $1\leq K_{\mathcal{C}}^{*}\leq 2$. Then, by comparing UCI and GUCI, we show that when the entropy is very large or $P(0)$ is not large, there are also cases where the average codeword length of GUCI is shorter. Finally, the asymptotically optimal GUCI is presented.
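
As one classical member of the UCI family (not the GUCI construction of the paper), the Elias gamma code, shifted to cover $0$ by encoding $i+1$, can serve as a concrete example; the helper functions below are my own.
```python
# Elias gamma coding of the non-negative integers: write i+1 in binary and
# prefix it with as many zeros as there are bits after the leading one.
def elias_gamma(i):
    """Elias gamma codeword (as a bit string) for a non-negative integer i."""
    n = i + 1                       # gamma codes the positive integers
    binary = bin(n)[2:]             # binary expansion, no leading zeros
    return "0" * (len(binary) - 1) + binary


def decode_elias_gamma(bits):
    """Inverse of elias_gamma for a single codeword."""
    zeros = len(bits) - len(bits.lstrip("0"))
    return int(bits[zeros:zeros + zeros + 1], 2) - 1


if __name__ == "__main__":
    for i in [0, 1, 2, 5, 100]:
        cw = elias_gamma(i)
        assert decode_elias_gamma(cw) == i
        print(i, cw)                # codeword length is 2*floor(log2(i+1)) + 1
```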

Holonomic functions play an essential role in Computer Algebra since they allow the application of many symbolic algorithms. Among all algorithmic attempts to find formulas for power series, the holonomic property remains the most important requirement to be satisfied by the function under consideration. The targeted functions are mainly meromorphic functions. However, expressions like $\tan(z)$, $z/(\exp(z)-1)$, $\sec(z)$, etc., in particular reciprocals, quotients and compositions of holonomic functions, are generally not holonomic. Therefore their power series are inaccessible by the holonomic framework. From the mathematical dictionaries, one can observe that most of the known closed-form formulas of non-holonomic power series involve another sequence whose evaluation depends on some finite summations. In the case of $\tan(z)$ and $\sec(z)$ the corresponding sequences are the Bernoulli and Euler numbers, respectively. Thus, providing a symbolic approach that yields complete representations when linear summations for power series coefficients of non-holonomic functions appear might be seen as a step forward towards the representation of non-holonomic power series. By adapting the method of ansatz with undetermined coefficients, we build an algorithm that computes least-order quadratic differential equations with polynomial coefficients for a large class of non-holonomic functions. A differential equation resulting from this procedure is converted into a recurrence equation by applying the Cauchy product formula and rewriting powers into polynomials and derivatives into shifts. Finally, using enough initial values we are able to give normal form representations to characterize several non-holonomic power series and prove non-trivial identities. We discuss this algorithm and its implementation for Maple 2022.
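
A toy version of the ansatz with undetermined coefficients, assuming sympy, might look as follows: search for a quadratic differential equation $c_0 + c_1 y + c_2 y^2 + c_3 y' = 0$ satisfied by $y=\tan(z)$ by matching truncated power series coefficients. The actual algorithm and its Maple 2022 implementation are far more general.
```python
# Find a quadratic differential equation for y = tan(z) by the method of
# undetermined coefficients: expand the ansatz as a power series in z and
# require the first few coefficients to vanish.
import sympy as sp

z = sp.symbols("z")
c = sp.symbols("c0:4")
y = sp.tan(z)

# Quadratic ansatz in y and y' with undetermined constant coefficients.
ansatz = c[0] + c[1] * y + c[2] * y**2 + c[3] * sp.diff(y, z)

order = 8
series = sp.expand(sp.series(ansatz, z, 0, order).removeO())
equations = [sp.Eq(series.coeff(z, k), 0) for k in range(order)]
solution = sp.solve(equations, list(c), dict=True)[0]
print(solution)  # expect c0 = -c3, c1 = 0, c2 = -c3, i.e. y' = 1 + y**2
```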

The conjoining of dynamical systems and deep learning has become a topic of great interest. In particular, neural differential equations (NDEs) demonstrate that neural networks and differential equations are two sides of the same coin. Traditional parameterised differential equations are a special case. Many popular neural network architectures, such as residual networks and recurrent networks, are discretisations of differential equations. NDEs are suitable for tackling generative problems, dynamical systems, and time series (particularly in physics, finance, ...) and are thus of interest to both modern machine learning and traditional mathematical modelling. NDEs offer high-capacity function approximation, strong priors on model space, the ability to handle irregular data, memory efficiency, and a wealth of available theory on both sides. This doctoral thesis provides an in-depth survey of the field. Topics include: neural ordinary differential equations (e.g. for hybrid neural/mechanistic modelling of physical systems); neural controlled differential equations (e.g. for learning functions of irregular time series); and neural stochastic differential equations (e.g. to produce generative models capable of representing complex stochastic dynamics, or sampling from complex high-dimensional distributions). Further topics include: numerical methods for NDEs (e.g. reversible differential equation solvers, backpropagation through differential equations, Brownian reconstruction); symbolic regression for dynamical systems (e.g. via regularised evolution); and deep implicit models (e.g. deep equilibrium models, differentiable optimisation). We anticipate this thesis will be of interest to anyone interested in the marriage of deep learning with dynamical systems, and hope it will provide a useful reference for the current state of the art.
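
As a minimal illustration of the "residual networks are discretisations" viewpoint (not code from the thesis), the PyTorch sketch below integrates a learned vector field with explicit Euler steps; all names and sizes are illustrative assumptions.
```python
# Explicit Euler steps y_{t+1} = y_t + h * f(y_t) with a small learned vector
# field f: each step is exactly a residual block, so the unrolled model is a
# deep residual network viewed as an ODE discretisation.
import torch
import torch.nn as nn


class EulerNeuralODE(nn.Module):
    """Integrate dy/dt = f(y) with explicit Euler; equivalently, a residual net."""

    def __init__(self, dim=4, hidden=32, steps=10, step_size=0.1):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(), nn.Linear(hidden, dim))
        self.steps = steps
        self.step_size = step_size

    def forward(self, y):
        for _ in range(self.steps):
            y = y + self.step_size * self.f(y)   # one residual block per Euler step
        return y


if __name__ == "__main__":
    model = EulerNeuralODE()
    y0 = torch.randn(8, 4)                       # a batch of initial conditions
    print(model(y0).shape)                       # torch.Size([8, 4])
```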
