
The union-closed sets conjecture states that in any nonempty union-closed family $\mathcal{F}$ of subsets of a finite set, there exists an element contained in at least a proportion $1/2$ of the sets of $\mathcal{F}$. Using an information-theoretic method, Gilmer \cite{gilmer2022constant} recently showed that there exists an element contained in at least a proportion $0.01$ of the sets of such an $\mathcal{F}$. He conjectured that his technique can be pushed to the constant $\frac{3-\sqrt{5}}{2}$, which was subsequently confirmed by several researchers \cite{sawin2022improved,chase2022approximate,alweiss2022improved,pebody2022extension}. Furthermore, Sawin \cite{sawin2022improved} showed that Gilmer's technique can be improved to obtain a bound better than $\frac{3-\sqrt{5}}{2}$, but did not state this new bound explicitly. This paper further improves Gilmer's technique to derive new bounds, in optimization form, for the union-closed sets conjecture. These bounds include Sawin's improvement as a special case. By providing cardinality bounds on the auxiliary random variables, we make Sawin's improvement computable, and then evaluate it numerically, which yields a bound of around $0.38234$, slightly better than $\frac{3-\sqrt{5}}{2}\approx0.38197$.
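
For concreteness, here is a minimal brute-force check of the conjecture's statement on a toy family; the family and helper names are illustrative, and this sketches only the combinatorial statement, not the information-theoretic method.

```python
from itertools import combinations

def is_union_closed(F):
    """Check that the union of any two member sets is again in the family."""
    return all(a | b in F for a, b in combinations(F, 2))

def max_element_fraction(F):
    """Largest fraction of the sets of F containing one fixed element."""
    universe = set().union(*F)
    return max(sum(1 for s in F if x in s) for x in universe) / len(F)

# A small union-closed family over {1, 2, 3}.
F = {frozenset(s) for s in ({1}, {2}, {1, 2}, {1, 3}, {1, 2, 3})}
assert is_union_closed(F)
print(max_element_fraction(F))  # 0.8 >= 1/2, as the conjecture predicts
```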

Related content

We revisit the problem of computing with noisy information considered in Feige et al. 1994, which includes computing the OR function from noisy queries, and computing the MAX, SEARCH, and SORT functions from noisy pairwise comparisons. For $K$ given elements, the goal is to correctly recover the desired function with probability at least $1-\delta$ when the outcome of each query is flipped with probability $p$. We consider both the adaptive sampling setting, where each query can be designed based on past outcomes, and the non-adaptive sampling setting, where queries cannot depend on past outcomes. Prior work provides tight bounds on the worst-case query complexity in terms of the dependence on $K$; however, the upper and lower bounds do not match in terms of the dependence on $\delta$ and $p$. We improve the lower bounds for all four functions under both adaptive and non-adaptive query models. Most of our lower bounds match the upper bounds up to constant factors when either $p$ or $\delta$ is bounded away from $0$, whereas the ratio between the best prior upper and lower bounds goes to infinity when $p\rightarrow 0$ or $p\rightarrow 1/2$. In addition, we provide matching upper and lower bounds on the number of queries in expectation, improving both the upper and lower bounds for the variable-length query model.
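
As a baseline illustration of the query model (not of the improved bounds above), the following sketch computes OR non-adaptively by repeating each noisy query and majority-voting; the repetition count follows the standard Chernoff-bound heuristic, and all names are illustrative.

```python
import random

def noisy_query(bit, p):
    """Return the bit, flipped with probability p."""
    return bit ^ (random.random() < p)

def noisy_or(bits, p, reps):
    """Non-adaptive scheme: query each bit `reps` times, majority-vote per
    bit, then OR the votes.  By a Chernoff bound plus a union bound,
    reps = O(log(K/delta) / (1 - 2p)^2) drives the error below delta."""
    votes = (sum(noisy_query(b, p) for _ in range(reps)) for b in bits)
    return int(any(v > reps / 2 for v in votes))

random.seed(1)
print(noisy_or([0] * 9 + [1], p=0.1, reps=31))  # 1 with high probability
```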

In the network coding framework, given a prime power $q$ and the vector space $\mathbb{F}_q^n$, a constant type flag code is a set of nested sequences of $\mathbb{F}_q$-subspaces (flags) sharing the same increasing sequence of dimensions (the type of the flag). If a flag code arises as the orbit of a flag under the action of a cyclic subgroup of the general linear group, we say that it is a cyclic orbit flag code. Among the parameters of such a family of codes is its best friend, that is, the largest field over which all the subspaces in the generating flag are vector spaces. This object makes it possible to compute the cardinality of the code and to estimate its minimum distance. However, as occurs with other absolute parameters of a flag code, the information given by the best friend is incomplete in many cases, since the same best friend can be obtained in different ways. In this work, we present a new invariant, the best friend vector, that captures the specific way the best friend is unfolded. Furthermore, throughout the paper we analyze the strong underlying interaction between this invariant and other parameters such as the cardinality, the flag distance, and the type vector, and how it conditions them. Finally, we investigate the realizability of a prescribed best friend vector in a vector space.
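
To fix notation, a flag of type $(d_1,\dots,d_k)$ on $\mathbb{F}_q^n$ is a chain

\[
\mathcal{F}\colon \{0\}\subsetneq \mathcal{F}_1\subsetneq \cdots \subsetneq \mathcal{F}_k\subsetneq \mathbb{F}_q^n, \qquad \dim_{\mathbb{F}_q}\mathcal{F}_i = d_i .
\]

As background from the cyclic orbit code literature (stated here under the assumption that the acting cyclic group is a Singer cycle, not as a result of this paper): if the best friend of the generating flag $\mathcal{F}$ is the field $\mathbb{F}_{q^m}$, the stabilizer of $\mathcal{F}$ is $\mathbb{F}_{q^m}^*$, so that

\[
|\mathrm{Orb}(\mathcal{F})| = \frac{q^n-1}{q^m-1},
\]

which is how the best friend determines the cardinality of the code.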

Learning the graphical structure of Bayesian networks is key to describing data-generating mechanisms in many complex applications but poses considerable computational challenges. Observational data can only identify the equivalence class of the directed acyclic graph underlying a Bayesian network model, and a variety of methods exist to tackle the problem. Under certain assumptions, the popular PC algorithm can consistently recover the correct equivalence class by reverse-engineering the conditional independence (CI) relationships holding in the variable distribution. The dual PC algorithm is a novel scheme for carrying out the CI tests within the PC algorithm by leveraging the inverse relationship between the covariance and precision matrices. By exploiting block matrix inversions, we can also perform tests on partial correlations of complementary (or dual) conditioning sets. The multiple CI tests of the dual PC algorithm proceed by first considering marginal and full-order CI relationships and progressively moving to central-order ones. Simulation studies show that the dual PC algorithm outperforms the classic PC algorithm both in run time and in recovering the underlying network structure, even in the presence of deviations from Gaussianity. Additionally, we show that the dual PC algorithm applies to Gaussian copula models, and demonstrate its performance in that setting.
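
The dual trick can be stated concretely: with covariance $\Sigma$ and precision $\Omega = \Sigma^{-1}$, the partial correlation $\rho_{ij\mid S}$ can be computed either by inverting the margin of $\Sigma$ over $\{i,j\}\cup S$ or, dually, by inverting the block of $\Omega$ over $\{i,j\}\cup D$, where $D$ is the complementary conditioning set. Below is a minimal numerical check of this identity; the function names are ours, not from the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
p = 6
A = rng.standard_normal((p, p))
Sigma = A @ A.T + p * np.eye(p)   # a random SPD covariance matrix
Omega = np.linalg.inv(Sigma)      # the corresponding precision matrix

def pcorr_primal(Sigma, i, j, S):
    """rho_{ij|S} from the covariance: invert the margin over {i,j} u S."""
    idx = [i, j] + list(S)
    K = np.linalg.inv(Sigma[np.ix_(idx, idx)])
    return -K[0, 1] / np.sqrt(K[0, 0] * K[1, 1])

def pcorr_dual(Omega, i, j, S):
    """Same rho_{ij|S} from the precision: invert over the dual set
    {i,j} u D with D = V \\ ({i,j} u S); cheap when S is large."""
    D = [k for k in range(Omega.shape[0]) if k not in set(S) | {i, j}]
    idx = [i, j] + D
    C = np.linalg.inv(Omega[np.ix_(idx, idx)])  # = Cov(X_idx | X_S)
    return C[0, 1] / np.sqrt(C[0, 0] * C[1, 1])

S = [2, 3, 4]
assert np.isclose(pcorr_primal(Sigma, 0, 1, S), pcorr_dual(Omega, 0, 1, S))
```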

The convexity of a set can be generalized to the two weaker notions of reach and $r$-convexity; both describe the regularity of a set's boundary. For any compact subset of $\mathbb{R}^d$, we provide methods for computing upper bounds on these quantities from point cloud data. The bounds converge to the respective quantities as the point cloud becomes dense in the set, and the rate of convergence for the bound on the reach is given under a weak regularity condition. We also introduce the $\beta$-reach, a generalization of the reach that excludes small-scale features of size less than a parameter $\beta\in[0,\infty)$. Numerical studies suggest how the $\beta$-reach can be used in high dimensions to infer the reach and other geometric properties of smooth submanifolds.
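
As a hedged illustration of the flavor of such bounds (the paper treats general compact sets from point clouds alone; here we assume a smooth curve with known tangent lines), the classical inequality $\tau \le \|q-p\|^2 / \bigl(2\,\mathrm{dist}(q-p, T_p)\bigr)$ recovers the reach of a circle exactly:

```python
import numpy as np

# Points on a circle of radius R; the reach of a circle equals R.
R = 2.0
theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
pts = R * np.column_stack([np.cos(theta), np.sin(theta)])
tangents = np.column_stack([-np.sin(theta), np.cos(theta)])  # unit tangents

bound = np.inf
for i in range(len(pts)):
    d = pts - pts[i]                 # vectors p_j - p_i
    norm2 = (d ** 2).sum(axis=1)
    # distance from p_j - p_i to the tangent line at p_i (normal component)
    perp = np.abs(d[:, 0] * (-tangents[i, 1]) + d[:, 1] * tangents[i, 0])
    mask = perp > 1e-12              # skip j == i (zero normal component)
    bound = min(bound, (norm2[mask] / (2 * perp[mask])).min())

print(bound)  # ~ 2.0 = R: the upper bound is tight for the circle
```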

We study the problem of social welfare maximization in bilateral trade, where two agents, a buyer and a seller, trade an indivisible item. We consider arguably the simplest form of mechanisms -- the fixed-price mechanisms, where the designer offers trade at a fixed price to the seller and buyer. Besides their simple form, fixed-price mechanisms are also the only DSIC and budget-balanced mechanisms in bilateral trade. We obtain improved approximation ratios of fixed-price mechanisms in different settings. In the full prior information setting, where the designer has access to the value distributions of both the seller and the buyer, we show that the optimal fixed-price mechanism achieves at least $0.72$ of the optimal welfare, and no fixed-price mechanism can achieve more than $0.7381$ of the optimal welfare. Prior to our result, the state-of-the-art approximation ratio was $1 - 1/e + 0.0001 \approx 0.632$. Interestingly, we further show that the optimal approximation ratio achievable with full prior information is identical to the optimal approximation ratio obtainable with only one-sided prior information. We further consider two limited information settings. In the first, the designer is only given the mean of the buyer's (or the seller's) value. We show that with such minimal information, one can already design a fixed-price mechanism that achieves $2/3$ of the optimal social welfare, which surpasses the previous state-of-the-art ratio even when the designer has access to the full prior information. Furthermore, $2/3$ is the optimal attainable ratio in this setting. In the second, we assume that the designer has sample access to the value distributions. We propose a new family of mechanisms called order statistic mechanisms and provide a complete characterization of their approximation ratios for any fixed number of samples.
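
A quick Monte-Carlo sketch of a fixed-price mechanism in the mean-based setting; posting the buyer's mean value as the price is our illustrative choice, not necessarily the paper's $2/3$-optimal rule, and the value distributions are arbitrary examples.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10 ** 6
buyer = rng.exponential(1.0, n)            # illustrative value distributions
seller = rng.uniform(0.0, 2.0, n)

price = buyer.mean()                       # mean-based fixed price (assumed)
trade = (seller <= price) & (price <= buyer)
welfare = np.where(trade, buyer, seller)   # buyer gets the item iff trade
optimal = np.maximum(buyer, seller)        # first-best: item to higher value

print(welfare.sum() / optimal.sum())       # ~ 0.89 here, above 2/3
```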

This paper explores variants of the subspace iteration algorithm for computing approximate invariant subspaces. The standard subspace iteration approach is revisited, and new variants that exploit gradient-type techniques combined with a Grassmann manifold viewpoint are developed. A gradient method as well as a conjugate gradient technique are described. Convergence of the gradient-based algorithm is analyzed, and a few numerical experiments are reported, indicating that the proposed algorithms are sometimes superior to standard Chebyshev-based subspace iteration when compared in terms of the number of matrix-vector products, while not requiring the estimation of optimal parameters. An important contribution of this paper to achieving this good performance is an accurate and efficient implementation of an exact line search. In addition, new convergence proofs are presented for the non-accelerated gradient method, including local exponential convergence when started in a $\mathcal{O}(\sqrt{\delta})$ neighbourhood of the dominant subspace with spectral gap $\delta$.
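
For reference, the standard (non-accelerated, non-gradient) subspace iteration that the paper takes as its starting point can be sketched as follows; the gradient and conjugate gradient variants on the Grassmann manifold are not reproduced here.

```python
import numpy as np

def subspace_iteration(A, k, iters=200):
    """Basic subspace iteration: returns an orthonormal basis Q whose span
    converges to the dominant k-dimensional invariant subspace of A, at a
    rate governed by the spectral gap."""
    rng = np.random.default_rng(0)
    Q, _ = np.linalg.qr(rng.standard_normal((A.shape[0], k)))
    for _ in range(iters):
        Q, _ = np.linalg.qr(A @ Q)   # multiply, then re-orthonormalize
    return Q

A = np.diag(np.arange(1.0, 11.0))    # eigenvalues 1..10
Q = subspace_iteration(A, k=3)
# span(Q) ~ eigenspace of the 3 largest eigenvalues (last 3 coordinates)
print(np.abs(Q[-3:, :]).round(2))
```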

Learning the kernel parameters for Gaussian processes is often the computational bottleneck in applications such as online learning, Bayesian optimization, or active learning. Amortizing parameter inference over different datasets is a promising approach to dramatically speeding up training time. However, existing methods restrict the amortized inference procedure to a fixed kernel structure. The amortization network must be redesigned manually and trained again whenever a different kernel is employed, which leads to a large overhead in design time and training time. We propose amortizing kernel parameter inference over a complete kernel-structure family rather than a fixed kernel structure. We do so by defining an amortization network over pairs of datasets and kernel structures. This enables fast kernel inference for each element of the kernel family without retraining the amortization network. As a by-product, our amortization network can perform fast ensembling over kernel structures. In our experiments, we show drastically reduced inference time combined with competitive test performance for a large set of kernels and datasets.
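
A minimal sketch of what an amortization network over (dataset, kernel structure) pairs could look like, assuming a DeepSets-style permutation-invariant dataset encoder; all layer sizes, names, and the one-dimensional regression setup are illustrative, not the paper's architecture.

```python
import torch
import torch.nn as nn

class AmortizedKernelInference(nn.Module):
    """Map a dataset embedding plus a kernel-structure embedding to kernel
    parameters, so one network serves a whole kernel-structure family."""

    def __init__(self, n_structures, d_embed=32, n_params=4):
        super().__init__()
        self.point_net = nn.Sequential(nn.Linear(2, 64), nn.ReLU(),
                                       nn.Linear(64, d_embed))
        self.structure_emb = nn.Embedding(n_structures, d_embed)
        self.head = nn.Sequential(nn.Linear(2 * d_embed, 64), nn.ReLU(),
                                  nn.Linear(64, n_params))

    def forward(self, xy, structure_id):
        # xy: (n_points, 2) dataset of (x, y) pairs; mean-pool for invariance
        data_emb = self.point_net(xy).mean(dim=0)
        struct_emb = self.structure_emb(structure_id)
        return self.head(torch.cat([data_emb, struct_emb]))

model = AmortizedKernelInference(n_structures=5)
xy = torch.randn(50, 2)
theta = model(xy, torch.tensor(0))   # kernel parameters for structure 0
print(theta.shape)                   # torch.Size([4])
```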

In this paper we study the threshold model of \emph{geometric inhomogeneous random graphs} (GIRGs); a generative random graph model that is closely related to \emph{hyperbolic random graphs} (HRGs). These models have been observed to capture complex real-world networks well with respect to their structural and algorithmic properties. Following comprehensive studies of their \emph{connectivity}, i.e., which parts of the graphs are connected, we have a good understanding of the circumstances under which a \emph{giant} component (containing a constant fraction of the graph) emerges. While previous results are rather technical and challenging to work with, the goal of this paper is to provide more accessible proofs. At the same time, we significantly improve the previously known probabilistic guarantees, showing that GIRGs contain a giant component with probability $1 - \exp(-\Omega(n^{(3-\tau)/2}))$ for graph size $n$ and a degree distribution with power-law exponent $\tau \in (2, 3)$. Based on that, we additionally derive insights about the connectivity of certain induced subgraphs of GIRGs.
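
A compact sampler for the threshold GIRG model as it is commonly defined: i.i.d. weights with power-law tail exponent $\tau$, positions on the $d$-dimensional torus ($d = 1$ here), and an edge between $u$ and $v$ iff the torus distance is at most $c\,(w_u w_v / W)^{1/d}$. Constants and the concrete weight distribution are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n, tau, c = 1000, 2.5, 1.0          # size, power-law exponent, constant

w = rng.pareto(tau - 1, n) + 1      # weights with P(w > x) ~ x^{-(tau-1)}
x = rng.random(n)                   # positions on the 1-D unit torus
W = w.sum()

dist = np.abs(x[:, None] - x[None, :])
dist = np.minimum(dist, 1 - dist)   # torus metric
thresh = c * np.outer(w, w) / W     # (w_u w_v / W)^{1/d} with d = 1
adj = (dist <= thresh) & ~np.eye(n, dtype=bool)
print(adj.sum() // 2, "edges")
```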

Reliable probabilistic primality tests are fundamental in public-key cryptography. In adversarial scenarios, a composite with a high probability of passing a specific primality test could be chosen deliberately; in such cases, we need worst-case error estimates for the test. However, in many scenarios the numbers are chosen at random and thus have a significantly smaller error probability, so average-case error estimates are of interest. In this paper, we establish such bounds for the strong Lucas primality test, for which only worst-case, but no average-case, error bounds are currently available. This allows us to use the test with more confidence. We examine an algorithm that draws odd $k$-bit integers uniformly and independently, runs $t$ independent iterations of the strong Lucas test with randomly chosen parameters on each, and outputs the first number that passes all $t$ consecutive rounds. We obtain numerical upper bounds on the probability that this algorithm returns a composite. Furthermore, we consider a modified version of the algorithm that excludes integers divisible by small primes, resulting in improved bounds. Additionally, we classify the numbers that contribute most to our estimate.
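
Below is a sketch of the sampling algorithm analyzed above, with a from-scratch strong Lucas round using randomly chosen parameters $(P, Q)$. It follows the textbook formulation of the strong Lucas test and is meant as an illustration, not a vetted cryptographic implementation.

```python
import random
from math import gcd, isqrt

def jacobi(a, n):
    """Jacobi symbol (a/n) for odd n > 0."""
    a %= n
    r = 1
    while a:
        while a % 2 == 0:
            a //= 2
            if n % 8 in (3, 5):
                r = -r
        a, n = n, a
        if a % 4 == 3 and n % 4 == 3:
            r = -r
        a %= n
    return r if n == 1 else 0

def strong_lucas_round(n):
    """One round with random (P, Q), D = P^2 - 4Q, requiring (D/n) = -1
    and gcd(n, 2QD) = 1; n passes iff U_d = 0 or V_{d*2^r} = 0, r < s."""
    if n % 2 == 0 or isqrt(n) ** 2 == n:
        return n == 2            # squares never admit (D/n) = -1
    while True:
        P, Q = random.randrange(1, n), random.randrange(1, n)
        D = (P * P - 4 * Q) % n
        if D and gcd(n, 2 * Q * D) == 1 and jacobi(D, n) == -1:
            break
    d, s = n + 1, 0              # write n + 1 = d * 2^s with d odd
    while d % 2 == 0:
        d, s = d // 2, s + 1
    inv2 = (n + 1) // 2          # 2^{-1} mod n for odd n
    U, V, qk = 1, P, Q           # (U_1, V_1, Q^1) mod n
    for bit in bin(d)[3:]:       # binary ladder: index 1 -> d
        U, V, qk = (U * V) % n, (V * V - 2 * qk) % n, (qk * qk) % n
        if bit == "1":           # U_{m+1} = (P U_m + V_m)/2, etc.
            U, V = ((P * U + V) * inv2) % n, ((D * U + P * V) * inv2) % n
            qk = (qk * Q) % n
    if U == 0 or V == 0:         # U_d = 0 or V_d = 0 (r = 0)
        return True
    for _ in range(s - 1):       # V_{d*2^r} = 0 for some 0 < r < s
        V, qk = (V * V - 2 * qk) % n, (qk * qk) % n
        if V == 0:
            return True
    return False

def draw_probable_prime(k, t):
    """Draw odd k-bit integers uniformly until one passes t rounds."""
    while True:
        n = random.randrange(2 ** (k - 1) + 1, 2 ** k, 2)
        if all(strong_lucas_round(n) for _ in range(t)):
            return n

print(draw_probable_prime(k=32, t=5))
```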

We present high-order, finite-element-based Second Moment Methods (SMMs) for solving radiation transport problems in two spatial dimensions. We leverage the close connection between the Variable Eddington Factor (VEF) method and SMM to convert existing discretizations of the VEF moment system into discretizations of the SMM moment system. The moment discretizations are coupled to a high-order Discontinuous Galerkin discretization of the Discrete Ordinates transport equations. We show that the resulting methods achieve high-order accuracy on high-order (curved) meshes, preserve the thick diffusion limit, and are effective on a challenging multi-material problem, both in the outer fixed-point iterations and in the inner preconditioned iterative solves of the discrete moment systems. We also present parallel scaling results and provide direct comparisons to the VEF algorithms from which the SMM algorithms were derived.
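
For orientation, one common grey, steady-state form of the SMM moment system lags the transport correction from the most recent sweep; the exact discretized system in the paper may differ in details:

\[
\nabla \cdot \mathbf{J} + \sigma_a \phi = Q_0, \qquad
\frac{1}{3}\nabla \phi + \sigma_t \mathbf{J} = \mathbf{Q}_1 - \nabla \cdot \Bigl( \mathbf{P} - \tfrac{1}{3}\phi\,\mathbf{I} \Bigr),
\]

where $\phi$ and $\mathbf{J}$ are the scalar flux and current, and $\mathbf{P} = \int \boldsymbol{\Omega}\boldsymbol{\Omega}^{\mathsf{T}} \psi \,\mathrm{d}\Omega$ is the second angular moment evaluated from the latest Discrete Ordinates solution.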
