
The fractional list packing number $\chi_{\ell}^{\bullet}(G)$ of a graph $G$ is a graph invariant that has recently arisen from the study of disjoint list-colourings. It measures how large the lists of a list-assignment $L:V(G)\rightarrow 2^{\mathbb{N}}$ need to be to ensure the existence of a `perfectly balanced' probability distribution on proper $L$-colourings, i.e., such that at every vertex $v$, every colour appears with equal probability $1/|L(v)|$. In this work we give various bounds on $\chi_{\ell}^{\bullet}(G)$, which admit strengthenings for correspondence and local-degree versions. As a corollary, we improve theorems on the related notion of flexible list colouring. In particular we study Cartesian products and $d$-degenerate graphs, and we prove that $\chi_{\ell}^{\bullet}(G)$ is bounded from above by the pathwidth of $G$ plus one. The correspondence analogue of the latter is false for treewidth instead of pathwidth.
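
For reference, the `perfectly balanced' condition can be written out explicitly (this is a restatement of the definition above, with $\mathbf{c}$ denoting a random proper $L$-colouring drawn from the distribution; the notation is ours):
\[
  \Pr\big[\mathbf{c}(v) = x\big] \;=\; \frac{1}{|L(v)|} \qquad \text{for every } v \in V(G) \text{ and every } x \in L(v).
\]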

Related content

For a subfamily ${F}\subseteq 2^{[n]}$ of the Boolean lattice, consider the graph $G_{F}$ on ${F}$ based on the pairwise inclusion relations among its members. Given a positive integer $t$, how large can ${F}$ be before $G_{F}$ must contain some component of order greater than $t$? For $t=1$, this question was answered exactly by Sperner almost a century ago: the size of a middle layer of the Boolean lattice. For $t=2^n$, this question is trivial. We are interested in what happens between these two extremes. For $t=2^{g}$ with $g=g(n)$ being any integer function that satisfies $g(n)=o(n/\log n)$ as $n\to\infty$, we give an asymptotically sharp answer to the above question: not much larger than the size of a middle layer. This constitutes a nontrivial generalisation of Sperner's theorem. We do so by a reduction to a Tur\'an-type problem for rainbow cycles in properly edge-coloured graphs. Among other results, we also give a sharp answer to the question of how large ${F}$ can be before $G_{F}$ must be connected.
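
Concretely, the $t=1$ case is Sperner's theorem: the largest ${F}$ for which every component of $G_{F}$ is a single vertex is a maximum antichain, realised by a middle layer, so the threshold is
\[
  \binom{n}{\lfloor n/2\rfloor}.
\]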

We consider the following task: suppose an algorithm is given copies of an unknown $n$-qubit quantum state $|\psi\rangle$, promised that either $(i)$ $|\psi\rangle$ is $\varepsilon_1$-close to a stabilizer state in fidelity, or $(ii)$ $|\psi\rangle$ is $\varepsilon_2$-far from all stabilizer states; decide which is the case. We show that for every $\varepsilon_1>0$ and $\varepsilon_2\leq \varepsilon_1^C$, there is a $\textsf{poly}(1/\varepsilon_1)$-sample and $n\cdot \textsf{poly}(1/\varepsilon_1)$-time algorithm that decides which is the case (where $C>1$ is a universal constant). Our proof includes a new definition of the Gowers norm for quantum states, an inverse theorem for the Gowers-$3$ norm of quantum states, and new bounds on stabilizer covering for structured subsets of Paulis using results in additive combinatorics.
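
One way to make the promise precise (our restatement, consistent with the requirement $\varepsilon_2\leq\varepsilon_1^C$; the paper's normalisation may differ) is via the stabilizer fidelity $F_{\mathcal{S}}(|\psi\rangle)=\max_{|\phi\rangle\in\mathcal{S}}|\langle\phi|\psi\rangle|^2$, where $\mathcal{S}$ is the set of $n$-qubit stabilizer states:
\[
  (i)\ F_{\mathcal{S}}(|\psi\rangle)\geq\varepsilon_1
  \qquad\text{versus}\qquad
  (ii)\ F_{\mathcal{S}}(|\psi\rangle)\leq\varepsilon_2 .
\]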

This paper gives a self-contained introduction to the Hilbert projective metric $\mathcal{H}$ and its fundamental properties, with a particular focus on the space of probability measures. We start by defining the Hilbert pseudo-metric on convex cones, focusing mainly on dual formulations of $\mathcal{H}$. We show that linear operators on convex cones contract in the distance given by the hyperbolic tangent of $\mathcal{H}$, which in particular implies Birkhoff's classical contraction result for $\mathcal{H}$. Turning to spaces of probability measures, where $\mathcal{H}$ is a metric, we analyse the dual formulation of $\mathcal{H}$ in the general setting, and explore the geometry of the probability simplex under $\mathcal{H}$ in the special case of discrete probability measures. Throughout, we compare $\mathcal{H}$ with other distances between probability measures. In particular, we show how convergence in $\mathcal{H}$ implies convergence in total variation, $p$-Wasserstein distance, and any $f$-divergence. Furthermore, we derive a novel sharp bound for the total variation between two probability measures in terms of their Hilbert distance.
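
For discrete distributions with strictly positive entries, the Hilbert metric admits the explicit form $\mathcal{H}(p,q)=\log\max_i(p_i/q_i)-\log\min_i(p_i/q_i)$. The following short sketch (ours, not code from the paper) computes it alongside the total variation distance:

```python
import numpy as np

def hilbert_metric(p, q):
    """Hilbert projective metric between strictly positive probability vectors."""
    r = p / q
    return float(np.log(r.max()) - np.log(r.min()))

def total_variation(p, q):
    """Total variation distance between probability vectors."""
    return 0.5 * float(np.abs(p - q).sum())

p = np.array([0.5, 0.3, 0.2])
q = np.array([0.4, 0.4, 0.2])
print(hilbert_metric(p, q), total_variation(p, q))
```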

We investigate critical points of eigencurves of bivariate matrix pencils $A+\lambda B +\mu C$. Points $(\lambda,\mu)$ for which $\det(A+\lambda B+\mu C)=0$ form algebraic curves in $\mathbb C^2$ and we focus on points where $\mu'(\lambda)=0$. Such points are referred to as zero-group-velocity (ZGV) points, following terminology from engineering applications. We provide a general theory for the ZGV points and show that they form a subset (with equality in the generic case) of the 2D points $(\lambda_0,\mu_0)$, where $\lambda_0$ is a multiple eigenvalue of the pencil $(A+\mu_0 C)+\lambda B$, or, equivalently, there exist nonzero $x$ and $y$ such that $(A+\lambda_0 B+\mu_0 C)x=0$, $y^H(A+\lambda_0 B+\mu_0 C)=0$, and $y^HBx=0$. We introduce three numerical methods for computing 2D and ZGV points. The first method calculates all 2D (ZGV) points from the eigenvalues of a related singular two-parameter eigenvalue problem. The second method employs a projected regular two-parameter eigenvalue problem to compute either all eigenvalues or only a subset of eigenvalues close to a given target. The third approach is a locally convergent Gauss--Newton-type method that computes a single 2D point from an initial approximation; the latter can be provided for all 2D points via the method of fixed relative distance by Jarlebring, Kvaal, and Michiels. In our numerical examples we use these methods to compute 2D-eigenvalues, solve double eigenvalue problems, determine ZGV points of a parameter-dependent quadratic eigenvalue problem, evaluate the distance to instability of a stable matrix, and find critical points of eigencurves of a two-parameter Sturm--Liouville problem.
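
As a minimal numerical sketch (ours, not one of the paper's three methods), the 2D-point conditions above can be checked for a candidate pair $(\lambda_0,\mu_0)$ by extracting left and right null vectors of $A+\lambda_0 B+\mu_0 C$ and testing $y^H B x=0$; matrix names and tolerances below are illustrative.

```python
import numpy as np

def is_2d_point(A, B, C, lam, mu, tol=1e-8):
    """Check the 2D-point conditions: there exist nonzero x, y with
    (A + lam*B + mu*C) x = 0,  y^H (A + lam*B + mu*C) = 0,  y^H B x = 0."""
    M = A + lam * B + mu * C
    U, s, Vh = np.linalg.svd(M)
    if s[-1] > tol:            # M must be singular
        return False
    x = Vh[-1].conj()          # right null vector of M
    y = U[:, -1]               # left null vector of M
    return abs(y.conj() @ (B @ x)) < tol
```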

Conditional independence and graphical models are crucial concepts for sparsity and statistical modeling in higher dimensions. For L\'evy processes, a widely applied class of stochastic processes, these notions have not been studied. By the L\'evy-It\^o decomposition, a multivariate L\'evy process can be decomposed into the sum of a Brownian motion part and an independent jump process. We show that conditional independence statements between the marginal processes can be studied separately for these two parts. While the Brownian part is well-understood, we derive a novel characterization of conditional independence between the sample paths of the jump process in terms of the L\'evy measure. We define L\'evy graphical models as L\'evy processes that satisfy undirected or directed Markov properties. We prove that the graph structure is invariant under changes of the univariate marginal processes. L\'evy graphical models allow the construction of flexible, sparse dependence models for L\'evy processes in large dimensions, which are interpretable thanks to the underlying graph. For trees, we develop statistical methodology to learn the underlying structure from low- or high-frequency observations of the L\'evy process and show consistent graph recovery. We apply our method to model stock returns from U.S. companies to illustrate the advantages of our approach.
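
For background (a standard fact, not a contribution of the paper), the L\'evy-It\^o decomposition referred to above writes a L\'evy process $X_t$ with characteristic triplet $(\gamma,\Sigma,\nu)$ as
\[
  X_t = \gamma t + W_t + \int_{\|x\|\leq 1} x\,\big(N(t,\mathrm{d}x)-t\,\nu(\mathrm{d}x)\big) + \int_{\|x\|>1} x\,N(t,\mathrm{d}x),
\]
where $W_t$ is a Brownian motion with covariance $\Sigma$, $N$ is the Poisson random measure of jumps, and $\nu$ is the L\'evy measure; the last two terms form the jump process whose conditional independence structure is characterised in terms of $\nu$.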

The paper addresses the problem of finding a low-rank approximation of a multi-dimensional tensor, $\Phi $, using a subset of its entries. A distinctive aspect of the tensor completion problem explored here is that entries of the $d$-dimensional tensor $\Phi$ are reconstructed via $C$-dimensional slices, where $C < d - 1$. This setup is motivated by, and applied to, the reduced-order modeling of parametric dynamical systems. In such applications, parametric solutions are often reconstructed from space-time slices through sparse sampling over the parameter domain. To address this non-standard completion problem, we introduce a novel low-rank tensor format called the hybrid tensor train. Completion in this format is then incorporated into a Galerkin reduced order model (ROM), specifically an interpolatory tensor-based ROM. We demonstrate the performance of both the completion method and the ROM on several examples of dynamical systems derived from finite element discretizations of parabolic partial differential equations with parameter-dependent coefficients or boundary conditions.
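
As a toy illustration of the sampling pattern (our example; the hybrid tensor train format itself is not reproduced here), consider a $d=4$-dimensional solution tensor indexed by (space, time, parameter 1, parameter 2): the observed data are $C=2$-dimensional space-time slices at a sparse set of parameter pairs, so $C<d-1$ as in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic parametric solution tensor: (space, time, param1, param2).
Phi = rng.standard_normal((50, 40, 10, 10))

# Sparse sampling over the parameter domain: a few (i, j) parameter pairs.
sampled_params = [(0, 3), (2, 7), (5, 1), (9, 9)]

# Each observation is a full space-time slice at one parameter pair.
slices = {ij: Phi[:, :, ij[0], ij[1]] for ij in sampled_params}
print({ij: s.shape for ij, s in slices.items()})  # all (50, 40)
```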

For image generation with diffusion models (DMs), a negative prompt n can be used to complement the text prompt p, helping define properties not desired in the synthesized image. While this improves prompt adherence and image quality, finding good negative prompts is challenging. We argue that this is due to a semantic gap between humans and DMs, which makes good negative prompts for DMs appear unintuitive to humans. To bridge this gap, we propose a new diffusion-negative prompting (DNP) strategy. DNP is based on a new procedure to sample images that are least compliant with p under the distribution of the DM, denoted as diffusion-negative sampling (DNS). Given p, one such image is sampled, which is then translated into natural language by the user or a captioning model, to produce the negative prompt n*. The pair (p, n*) is finally used to prompt the DM. DNS is straightforward to implement and requires no training. Experiments and human evaluations show that DNP performs well both quantitatively and qualitatively and can be easily combined with several DM variants.
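
The overall workflow can be sketched in a few lines; the stubs below are placeholders of our own (not the paper's code or any real DM API) standing in for the diffusion model and the captioning model.

```python
# Sketch of the DNP workflow described above, with placeholder stubs.

def diffusion_negative_sampling(prompt):
    """DNS (placeholder): sample an image least compliant with `prompt`
    under the diffusion model's distribution."""
    return "image_least_compliant_with_" + prompt  # stands in for an image

def caption(image):
    """Placeholder captioner (or a human) turning the sampled image into text."""
    return "caption_of_" + image

def diffusion_negative_prompting(prompt):
    image = diffusion_negative_sampling(prompt)   # step 1: DNS
    negative_prompt = caption(image)              # step 2: translate to language
    return prompt, negative_prompt                # step 3: prompt the DM with (p, n*)

print(diffusion_negative_prompting("a red bicycle"))
```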

We improve bounds on the degree and sparsity of Boolean functions representing the Legendre symbol, as well as on the $N$th linear complexity of the Legendre sequence. We also prove similar results for both the Liouville function for integers and its analog for polynomials over $\mathbb{F}_2$, or, more generally, for any (binary) arithmetic function which satisfies $f(2n)=-f(n)$ for $n=1,2,\ldots$
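
For instance (a quick check of the condition, not part of the paper), the integer Liouville function $\lambda(n)=(-1)^{\Omega(n)}$, with $\Omega(n)$ the number of prime factors of $n$ counted with multiplicity, satisfies $f(2n)=-f(n)$ because it is completely multiplicative with $\lambda(2)=-1$:

```python
def liouville(n):
    """Liouville function lambda(n) = (-1)^Omega(n)."""
    omega = 0
    d = 2
    while d * d <= n:
        while n % d == 0:
            n //= d
            omega += 1
        d += 1
    if n > 1:
        omega += 1
    return (-1) ** omega

# f(2n) = -f(n) holds for the Liouville function.
assert all(liouville(2 * n) == -liouville(n) for n in range(1, 200))
```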

We explore the theoretical possibility of learning $d$-dimensional targets with $W$-parameter models by gradient flow (GF) when $W<d$. Our main result shows that if the targets are described by a particular $d$-dimensional probability distribution, then there exist models with as few as two parameters that can learn the targets with arbitrarily high success probability. On the other hand, we show that for $W<d$ there is necessarily a large subset of GF-non-learnable targets. In particular, the set of learnable targets is not dense in $\mathbb R^d$, and any subset of $\mathbb R^d$ homeomorphic to the $W$-dimensional sphere contains non-learnable targets. Finally, we observe that the model in our main theorem on almost guaranteed two-parameter learning is constructed using a hierarchical procedure and as a result is not expressible by a single elementary function. We show that this limitation is essential in the sense that most models written in terms of elementary functions cannot achieve the learnability demonstrated in this theorem.

Hashing has been widely used in approximate nearest neighbor search for large-scale database retrieval because of its computational and storage efficiency. Deep hashing, which uses convolutional neural network architectures to extract the semantic information or features of images, has received increasing attention recently. In this survey, several deep supervised hashing methods for image retrieval are evaluated, and I identify three main directions for deep supervised hashing methods. Several comments are made at the end. Moreover, to break through the bottleneck of the existing hashing methods, I propose a Shadow Recurrent Hashing (SRH) method as a first attempt. Specifically, I devise a CNN architecture to extract the semantic features of images and design a loss function that encourages similar images to be projected close to each other. To this end, I propose the concept of a shadow of the CNN output. During the optimization process, the CNN output and its shadow guide each other so as to approach the optimal solution as closely as possible. Experiments on the CIFAR-10 dataset show the satisfactory performance of SRH.
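
As a generic illustration of the kind of objective described above (a sketch of a standard pairwise similarity-preserving hashing loss with a quantization term, not the SRH loss or its shadow mechanism; all names are ours):

```python
import torch
import torch.nn.functional as F

def pairwise_hashing_loss(codes, labels, margin=2.0):
    """Encourage real-valued codes of same-label images to be close and codes of
    different-label images to be at least `margin` apart, while pushing codes
    toward binary values. codes: (batch, bits); labels: (batch,) class ids."""
    dist = torch.cdist(codes, codes)                      # pairwise Euclidean distances
    same = (labels[:, None] == labels[None, :]).float()   # 1 if same class, else 0
    loss_similar = same * dist.pow(2)
    loss_dissimilar = (1 - same) * F.relu(margin - dist).pow(2)
    quant = (codes.abs() - 1).pow(2).mean()               # quantization toward {-1, +1}
    return (loss_similar + loss_dissimilar).mean() + 0.1 * quant

codes = torch.randn(8, 16, requires_grad=True)   # stand-in for CNN outputs
labels = torch.randint(0, 3, (8,))
loss = pairwise_hashing_loss(codes, labels)
loss.backward()
print(float(loss))
```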
