
We discuss some highlights of our computer-verified proof of the construction, given a countable transitive set-model $M$ of $\mathit{ZFC}$, of generic extensions satisfying $\mathit{ZFC}+\neg\mathit{CH}$ and $\mathit{ZFC}+\mathit{CH}$. Moreover, let $\mathcal{R}$ be the set of instances of the Axiom of Replacement. We isolated a 21-element subset $\Omega\subseteq\mathcal{R}$ and defined $\mathcal{F}:\mathcal{R}\to\mathcal{R}$ such that for every $\Phi\subseteq\mathcal{R}$ and $M$-generic $G$, $M\models \mathit{ZC} \cup \mathcal{F}\text{``}\Phi \cup \Omega$ implies $M[G]\models \mathit{ZC} \cup \Phi \cup \{ \neg \mathit{CH} \}$, where $\mathit{ZC}$ is Zermelo set theory with Choice. To achieve this, we worked in the proof assistant Isabelle, basing our development on the Isabelle/ZF library by L. Paulson and others.

Related content

The growth pattern of an invasive cell-to-cell propagation (the successive coronas) on the square grid is a tilted square; on the triangular and hexagonal grids it is a hexagon. Remarkably, on the aperiodic structure of Penrose tilings, this cell-to-cell diffusion process tends, in the limit, to a regular decagon. In this article we generalize this result to any regular multigrid dual tiling by defining the characteristic polygon of a multigrid and its dual tiling. Exploiting this elegant duality allows us to fully understand why such a surprising phenomenon, highly regular polygonal shapes emerging from aperiodic underlying structures, occurs.

The hull of a linear code $C$ is the intersection of $C$ with its dual code. We analyze the number of linear $q$-ary codes of the same length and dimension whose hulls have different dimensions. We prove that for given dimension $k$ and length $n\ge 2k$ the number of all $[n,k]_q$ linear codes with hull dimension $l$ decreases as $l$ increases. We also present classification results for binary and ternary linear codes with trivial hulls (LCD and self-orthogonal) for some values of the length $n$ and dimension $k$, comparing the obtained numbers with the number of all linear codes for the given $n$ and $k$.
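For a binary code with full-rank generator matrix $G$, the hull dimension can be computed from the known identity $\dim(C \cap C^\perp) = k - \operatorname{rank}_{GF(2)}(G G^\mathsf{T})$. A minimal Python sketch (function names are illustrative):

```python
def gf2_rank(rows):
    """Rank of a binary matrix (list of 0/1 rows) over GF(2), by Gaussian elimination."""
    rows = [r[:] for r in rows]
    rank, ncols = 0, len(rows[0]) if rows else 0
    for col in range(ncols):
        pivot = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i][col]:
                rows[i] = [a ^ b for a, b in zip(rows[i], rows[rank])]
        rank += 1
    return rank

def hull_dimension(G):
    """dim(C ∩ C⊥) = k − rank_GF(2)(G·Gᵀ), assuming G has full rank k."""
    k = len(G)
    GGt = [[sum(a * b for a, b in zip(r1, r2)) % 2 for r2 in G] for r1 in G]
    return k - gf2_rank(GGt)
```

For instance, the generator matrix $[[1,0,1,0],[0,1,0,1]]$ spans a self-orthogonal code (hull dimension $2$), while $[[1,0,0,0],[0,1,0,0]]$ spans an LCD code (hull dimension $0$).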

We consider the problem of finding a maximum size triangle-free $2$-matching in a graph $G$. A $2$-matching is any subset of the edges such that each vertex is incident to at most two edges from the subset. We present a fast combinatorial algorithm for the problem. Our algorithm and its analysis are dramatically simpler than the very complicated result by Hartvigsen from 1984. In the design of this algorithm we use several new concepts. It has been proven before that for any triangle-free $2$-matching $M$ which is not maximum the graph contains an $M$-augmenting path, whose application to $M$ results in a larger triangle-free $2$-matching. However, it was not known how to efficiently find such a path. A new observation is that the search for an augmenting path $P$ can be restricted to so-called {\em amenable} paths that go through any triangle $t$ contained in $P \cup M$ a limited number of times. To find an amenable augmenting path, whose application therefore does not create any triangle, we forbid certain edges from being followed by certain others. This operation can be thought of as using gadgets in which some pairs of edges get disconnected. To be able to disconnect two edges we employ {\em half-edges}. A {\em half-edge} of edge $e$ is, informally speaking, a half of $e$ containing exactly one of its endpoints. This is another novel application of half-edges, which were previously used for TSP and other matching problems. Additionally, gadgets are not fixed during any augmentation phase, but change dynamically according to the currently discovered state of reachability by amenable paths.
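For intuition about the objects involved (not the paper's algorithm, which is far more subtle), a brute-force computation of the maximum triangle-free $2$-matching works on tiny graphs:

```python
from itertools import combinations

def is_triangle_free(edge_set):
    """True if no three chosen edges form a triangle."""
    es = set(frozenset(e) for e in edge_set)
    verts = {v for e in edge_set for v in e}
    return not any(
        frozenset((a, b)) in es and frozenset((b, c)) in es and frozenset((a, c)) in es
        for a, b, c in combinations(sorted(verts), 3)
    )

def max_triangle_free_2matching(edges):
    """Largest edge subset with every vertex degree ≤ 2 and no triangle (exponential time)."""
    for k in range(len(edges), 0, -1):
        for subset in combinations(edges, k):
            deg = {}
            for u, v in subset:
                deg[u] = deg.get(u, 0) + 1
                deg[v] = deg.get(v, 0) + 1
            if max(deg.values()) <= 2 and is_triangle_free(subset):
                return k
    return 0
```

On $K_3$ the answer is $2$ (the triangle itself is a $2$-matching but not triangle-free), while on $K_4$ a $4$-cycle attains the maximum of $4$.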

The volume function $V(t)$ of a compact set $S\subset\mathbb{R}^d$ is just the Lebesgue measure of the set of points within distance at most $t$ of $S$. According to some classical results in geometric measure theory, the volume function turns out to be a polynomial, at least in a finite interval, under a quite intuitive, easy to interpret, sufficient condition (called ``positive reach'') which can be seen as an extension of the notion of convexity. However, many other simple sets, not fulfilling the positive reach condition, also have a polynomial volume function. To our knowledge, there is no general, simple geometric description of such sets. Still, the polynomial character of $V(t)$ has some relevant consequences since the polynomial coefficients carry useful geometric information. In particular, the constant term is the volume of $S$ and the first order coefficient is the boundary measure (in Minkowski's sense). This paper is focused on sets whose volume function is polynomial on some interval starting at zero, whose length (which we call the ``polynomial reach'') might be unknown. Our main goal is to approximate such polynomial reach by statistical means, using only a large enough random sample of points inside $S$. The practical motivation is simple: when the value of the polynomial reach, or rather a lower bound for it, is approximately known, the polynomial coefficients can be estimated from the sample points by using standard methods in polynomial approximation. As a result, we get a quite general method to estimate the volume and boundary measure of the set, relying only on an inner sample of points and not requiring the use of any smoothing parameter. This paper explores the theoretical and practical aspects of this idea.
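A minimal sketch of the basic ingredient, a Monte Carlo estimate of $V(t)$ in which $S$ is approximated by an inner sample of points (the paper's actual estimators are more refined; all names and parameter values here are illustrative, and the approximation is only meaningful for $t$ above the sample's resolution):

```python
import random

def volume_function(sample, t, box, n_mc=4000, rng=None):
    """Monte Carlo estimate of V(t) = Leb({x : dist(x, S) <= t}) in R^2,
    approximating S by a finite inner sample of points."""
    rng = rng or random.Random(0)  # fixed seed so estimates at different t are comparable
    (xmin, xmax), (ymin, ymax) = box
    area = (xmax - xmin) * (ymax - ymin)
    t2 = t * t
    hits = 0
    for _ in range(n_mc):
        x, y = rng.uniform(xmin, xmax), rng.uniform(ymin, ymax)
        # a point counts if it lies within distance t of some sample point
        if any((x - px) ** 2 + (y - py) ** 2 <= t2 for px, py in sample):
            hits += 1
    return area * hits / n_mc
```

With a dense enough sample of, say, the unit disk, the estimate at $t=0.5$ should approach $\pi(1.5)^2$; by construction the estimate is nondecreasing in $t$ when the same seed is reused.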

Propensity score matching is commonly used to draw causal inference from observational survival data. However, its asymptotic properties have yet to be established, and variance estimation is still open to debate. We derive the statistical properties of the propensity score matching estimator of the marginal causal hazard ratio based on matching with replacement and a fixed number of matches. We also propose a double-resampling technique for variance estimation that takes into account the uncertainty due to propensity score estimation prior to matching.
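As a small illustration of the matching step only (not the paper's estimator, which further involves the marginal hazard ratio and the propensity model fitted before matching), nearest-neighbor matching with replacement on given propensity scores can be sketched as:

```python
def match_with_replacement(treated_ps, control_ps, m=1):
    """For each treated unit, return the indices of its m nearest controls
    by propensity score; with replacement, a control may be reused."""
    matches = []
    for ps in treated_ps:
        order = sorted(range(len(control_ps)), key=lambda j: abs(control_ps[j] - ps))
        matches.append(order[:m])
    return matches
```

The fixed number of matches $m$ is the tuning constant referred to in the abstract; reuse of controls is exactly what complicates the variance and motivates the proposed double-resampling scheme.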

We describe an algorithm which, given two essential curves on a surface $S$, computes their distance in the curve graph of $S$, up to multiplicative and additive errors. As an application, we present an algorithm to decide the Nielsen-Thurston type (periodic, reducible, or pseudo-Anosov) of a mapping class of $S$. The novelty of our algorithms lies in the fact that their running time is polynomial in the size of the input and in the complexity of $S$ -- say, its Euler characteristic. This is in contrast with previously known algorithms, which run in polynomial time in the size of the input for any fixed surface $S$.

The Na\"ive Bayes classifier has proven to be a tractable and efficient method for classification in multivariate analysis. However, features are usually correlated, a fact that violates the Na\"ive Bayes' assumption of conditional independence and may deteriorate the method's performance. Moreover, datasets are often characterized by a large number of features, which may complicate the interpretation of the results as well as slow down the method's execution. In this paper we propose a sparse version of the Na\"ive Bayes classifier that is characterized by three properties. First, the sparsity is achieved taking into account the correlation structure of the covariates. Second, different performance measures can be used to guide the selection of features. Third, performance constraints on groups of higher interest can be included. Our proposal leads to a smart search, which yields competitive running times while integrating flexibility in the choice of performance measure for classification. Our findings show that, when compared against well-referenced feature selection approaches, the proposed sparse Na\"ive Bayes obtains competitive results regarding accuracy, sparsity and running times for balanced datasets. In the case of datasets with unbalanced (or with different importance) classes, a better compromise between classification rates for the different classes is achieved.
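The paper's correlation-aware sparse formulation is more involved; as a baseline for comparison, a plain Gaussian naive Bayes with greedy forward feature selection guided by a performance measure (here training accuracy) can be sketched as follows. All function names are illustrative:

```python
import math
from collections import defaultdict

def fit_gaussian_nb(X, y, features):
    """Per-class priors and (mean, variance) of each selected feature."""
    by_class = defaultdict(list)
    for xi, yi in zip(X, y):
        by_class[yi].append(xi)
    stats, priors = {}, {}
    for c, rows in by_class.items():
        priors[c] = len(rows) / len(y)
        stats[c] = []
        for f in features:
            vals = [r[f] for r in rows]
            mu = sum(vals) / len(vals)
            var = sum((v - mu) ** 2 for v in vals) / len(vals) + 1e-9  # variance floor
            stats[c].append((mu, var))
    return stats, priors

def predict(x, stats, priors, features):
    def logpdf(v, mu, var):
        return -0.5 * (math.log(2 * math.pi * var) + (v - mu) ** 2 / var)
    return max(priors, key=lambda c: math.log(priors[c]) +
               sum(logpdf(x[f], mu, var) for f, (mu, var) in zip(features, stats[c])))

def greedy_select(X, y, n_features, budget):
    """Forward selection: repeatedly add the feature improving accuracy the most."""
    chosen = []
    while len(chosen) < budget:
        best_f, best_acc = None, -1.0
        for f in range(n_features):
            if f in chosen:
                continue
            feats = chosen + [f]
            stats, priors = fit_gaussian_nb(X, y, feats)
            acc = sum(predict(x, stats, priors, feats) == yi
                      for x, yi in zip(X, y)) / len(y)
            if acc > best_acc:
                best_f, best_acc = f, acc
        chosen.append(best_f)
    return chosen
```

Swapping training accuracy for another performance measure in `greedy_select` mirrors the flexibility the abstract describes, though the paper's smart search also exploits covariate correlations, which this naive greedy loop ignores.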

The problem of estimating a parameter in the drift coefficient is addressed for $N$ discretely observed independent and identically distributed stochastic differential equations (SDEs). This is done considering additional constraints, wherein only public data can be published and used for inference. The concept of local differential privacy (LDP) is formally introduced for a system of stochastic differential equations. The objective is to estimate the drift parameter by proposing a contrast function based on a pseudo-likelihood approach. A suitably scaled Laplace noise is incorporated to meet the privacy requirements. Our key findings encompass the derivation of explicit conditions tied to the privacy level. Under these conditions, we establish the consistency and asymptotic normality of the associated estimator. Notably, the convergence rate is intricately linked to the privacy level, and in some situations may be completely different from the case where privacy constraints are ignored. Our results hold true as the discretization step approaches zero and the number of processes $N$ tends to infinity.
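A minimal sketch of the generic LDP release step (not the paper's exact contrast construction): each observation is clipped to $[-c, c]$, so its sensitivity is $2c$, and is perturbed with Laplace noise of scale $2c/\varepsilon$, sampled here by inverse CDF:

```python
import math
import random

def privatize(x, clip, epsilon, rng):
    """ε-LDP release of one observation: clip to [-clip, clip]
    (sensitivity 2*clip), then add Laplace(2*clip/ε) noise."""
    xc = max(-clip, min(clip, x))
    u = rng.random() - 0.5  # uniform on [-1/2, 1/2)
    # inverse-CDF sample from Laplace(0, b) with b = 2*clip/epsilon
    noise = -(2 * clip / epsilon) * (1 if u >= 0 else -1) * math.log(1 - 2 * abs(u))
    return xc + noise
```

As $\varepsilon \to \infty$ the noise vanishes and only the clipping bias remains, which is one way to see why the convergence rate of any downstream estimator must depend on the privacy level.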

We present a novel approach for differentially private data synthesis of protected tabular datasets, a relevant task in highly sensitive domains such as healthcare and government. Current state-of-the-art methods predominantly use marginal-based approaches, where a dataset is generated from private estimates of the marginals. In this paper, we introduce PrivPGD, a new generation method for marginal-based private data synthesis, leveraging tools from optimal transport and particle gradient descent. Our algorithm outperforms existing methods on a large range of datasets while being highly scalable and offering the flexibility to incorporate additional domain-specific constraints.

The forcing number of a graph with a perfect matching $M$ is the minimum number of edges in $M$ whose endpoints need to be deleted, such that the remaining graph only has a single perfect matching. This number is of great interest in theoretical chemistry, since it conveys information about the structural properties of several interesting molecules. On the other hand, in bipartite graphs the forcing number corresponds to the famous feedback vertex set problem in digraphs. Determining the complexity of finding the smallest forcing number of a given planar graph is still a wide open and important question in this area, originally proposed by Afshani, Hatami, and Mahmoodian in 2004. We take a first step towards the resolution of this question by providing an algorithm that determines the set of all possible forcing numbers of an outerplanar graph in polynomial time. This is the first polynomial-time algorithm concerning this problem for a class of graphs of comparable or greater generality.
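For intuition, the forcing number admits an equivalent standard formulation: the smallest $S \subseteq M$ such that $M$ is the unique perfect matching containing $S$. A brute-force sketch for tiny graphs (exponential time, purely illustrative):

```python
from itertools import combinations

def perfect_matchings(n, edges):
    """All perfect matchings of a graph on vertices 0..n-1."""
    pms = []
    for subset in combinations(edges, n // 2):
        verts = [v for e in subset for v in e]
        if len(set(verts)) == n:  # the n//2 edges cover every vertex exactly once
            pms.append(frozenset(frozenset(e) for e in subset))
    return pms

def forcing_number(n, edges, M):
    """Smallest |S|, S ⊆ M, such that M is the unique perfect matching containing S."""
    pms = perfect_matchings(n, edges)
    M = frozenset(frozenset(e) for e in M)
    for k in range(len(M) + 1):
        for S in combinations(M, k):
            if sum(1 for pm in pms if set(S) <= pm) == 1:
                return k
    return None
```

On the $4$-cycle, which has exactly two perfect matchings, fixing any single matched edge already forces the matching, so the forcing number of each perfect matching is $1$.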
