
Hausdorff $\Phi$-dimension is a notion of Hausdorff dimension developed using a restricted class of coverings of a set. We introduce a constructive analogue of $\Phi$-dimension using the notion of constructive $\Phi$-$s$-supergales. We prove a Point-to-Set Principle for $\Phi$-dimension, through which we obtain Point-to-Set Principles for Hausdorff dimension, continued-fraction dimension, and dimension of Cantor coverings as special cases. We also provide a Kolmogorov complexity characterization of constructive $\Phi$-dimension. A class of covering sets $\Phi$ is said to be ``faithful'' to Hausdorff dimension if the $\Phi$-dimension and Hausdorff dimension coincide for every set. Similarly, $\Phi$ is said to be ``faithful'' to constructive dimension if the constructive $\Phi$-dimension and constructive dimension coincide for every set. Using the Point-to-Set Principle for Cantor coverings and a new technique for the construction of sequences satisfying a certain Kolmogorov complexity condition, we show that the notions of ``faithfulness'' of Cantor coverings at the Hausdorff and constructive levels are equivalent. We adapt the result by Albeverio, Ivanenko, Lebid, and Torbin to derive the necessary and sufficient conditions for the constructive dimension faithfulness of the coverings generated by the Cantor series expansion, based on the terms of the expansion.
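For background, the standard unrestricted notions can be recalled as follows (the $\Phi$-restricted variants introduced in the paper modify which coverings, and correspondingly which supergales, are admitted). An $s$-supergale is a function $d:\{0,1\}^*\to[0,\infty)$ satisfying, for every finite string $w$,
\[
  d(w) \;\ge\; 2^{-s}\bigl[\,d(w0) + d(w1)\,\bigr],
\]
and the classical Point-to-Set Principle of J. Lutz and N. Lutz states that for every $X \subseteq \{0,1\}^{\infty}$,
\[
  \dim_H(X) \;=\; \min_{A \subseteq \mathbb{N}} \, \sup_{x \in X} \dim^{A}(x),
\]
where $\dim^{A}(x)$ is the constructive dimension of $x$ relative to the oracle $A$.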



We introduce $5/2$- and $7/2$-order $L^2$-accurate randomized Runge-Kutta-Nystr\"{o}m methods, tailored for approximating Hamiltonian flows within non-reversible Markov chain Monte Carlo samplers, such as unadjusted Hamiltonian Monte Carlo and unadjusted kinetic Langevin Monte Carlo. We establish quantitative $5/2$-order $L^2$-accuracy upper bounds under gradient and Hessian Lipschitz assumptions on the potential energy function. Numerical experiments demonstrate the superior efficiency of the proposed unadjusted samplers on a variety of well-behaved, high-dimensional target distributions.
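The proposed randomized Runge-Kutta-Nystr\"{o}m integrators are not reproduced here; as a point of reference only, the sketch below shows the baseline scheme such integrators would replace, namely one unadjusted Hamiltonian Monte Carlo step driven by a standard leapfrog integrator. The potential-gradient callable grad_U, the step size h, and the step count are illustrative assumptions.

```python
import numpy as np

def leapfrog(q, p, grad_U, h, n_steps):
    """Standard leapfrog integration of Hamiltonian dynamics.

    q, p    : position and momentum arrays
    grad_U  : callable returning the gradient of the potential energy
    h       : step size; n_steps : number of leapfrog steps
    """
    p = p - 0.5 * h * grad_U(q)            # initial half step for momentum
    for _ in range(n_steps - 1):
        q = q + h * p                       # full step for position
        p = p - h * grad_U(q)               # full step for momentum
    q = q + h * p
    p = p - 0.5 * h * grad_U(q)             # final half step for momentum
    return q, p

def unadjusted_hmc_step(q, grad_U, h, n_steps, rng):
    """One unadjusted HMC transition: refresh the momentum, integrate the
    Hamiltonian flow, and accept unconditionally (no Metropolis correction).
    The integrator's discretization error is what higher-order schemes
    aim to shrink."""
    p = rng.standard_normal(q.shape)        # full momentum refreshment
    q_new, _ = leapfrog(q, p, grad_U, h, n_steps)
    return q_new
```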

We present the first polynomial-time algorithm to exactly compute the number of labeled chordal graphs on $n$ vertices. Our algorithm solves a more general problem: given $n$ and $\omega$ as input, it computes the number of $\omega$-colorable labeled chordal graphs on $n$ vertices, using $O(n^7)$ arithmetic operations. A standard sampling-to-counting reduction then yields a polynomial-time exact sampler that generates an $\omega$-colorable labeled chordal graph on $n$ vertices uniformly at random. Our counting algorithm improves upon the previous best result by Wormald (1985), which computes the number of labeled chordal graphs on $n$ vertices in time exponential in $n$. An implementation of the polynomial-time counting algorithm gives the number of labeled chordal graphs on up to $30$ vertices in less than three minutes on a standard desktop computer. Previously, the number of labeled chordal graphs was only known for graphs on up to $15$ vertices. In addition, we design two approximation algorithms: (1) an approximate counting algorithm that computes a $(1\pm\varepsilon)$-approximation of the number of $n$-vertex labeled chordal graphs, and (2) an approximate sampling algorithm that generates a random labeled chordal graph according to a distribution whose total variation distance from the uniform distribution is at most $\varepsilon$. The approximate counting algorithm runs in $O(n^3\log{n}\log^7(1/\varepsilon))$ time, and the approximate sampling algorithm runs in $O(n^3\log{n}\log^7(1/\varepsilon))$ expected time.

This paper considers the $\varepsilon$-differentially private (DP) release of an approximate cumulative distribution function (CDF) of the samples in a dataset. We assume that the true (approximate) CDF is obtained after lumping the data samples into a fixed number $K$ of bins. In this work, we extend the well-known binary tree mechanism to the class of \emph{level-uniform tree-based} mechanisms and identify $\varepsilon$-DP mechanisms that have a small $\ell_2$-error. We identify optimal or close-to-optimal tree structures when either set of parameters (the branching factors or the privacy budgets at each tree level) is given, as well as when the algorithm designer is free to choose both sets of parameters. Interestingly, when we allow the branching factors to take on real values, under certain mild restrictions, the optimal level-uniform tree-based mechanism is obtained by choosing equal branching factors \emph{independent} of $K$, and equal privacy budgets at all levels. Furthermore, for selected $K$ values, we explicitly identify the optimal \emph{integer} branching factors and tree height, assuming equal privacy budgets at all levels. Finally, we describe general strategies for improving the private CDF estimates further, by combining multiple noisy estimates and by post-processing the estimates for consistency.
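The classical binary tree mechanism that the level-uniform family generalizes can be sketched as follows; this is a minimal illustration assuming K is a power of two, an equal split of the privacy budget across levels, and Laplace noise, and it is not the optimized construction described above.

```python
import numpy as np

def binary_tree_cdf(bin_counts, eps, rng):
    """Classical binary tree mechanism for eps-DP prefix sums (unnormalized CDF).

    Each sample falls in exactly one node per level, so splitting eps equally
    over the levels and adding Laplace noise of scale 1/eps_per_level to every
    node gives eps-DP overall. Any prefix sum then needs at most one noisy node
    per level, which keeps the l2-error small.
    """
    K = len(bin_counts)                      # assumed to be a power of two
    levels = int(np.log2(K)) + 1             # leaf level up to the root
    eps_per_level = eps / levels

    # noisy[l][i] = noisy sum of the i-th block of 2**l consecutive bins
    noisy = []
    for l in range(levels):
        block = 2 ** l
        sums = [sum(bin_counts[i:i + block]) for i in range(0, K, block)]
        noise = rng.laplace(scale=1.0 / eps_per_level, size=len(sums))
        noisy.append(np.array(sums) + noise)

    def prefix(j):
        """Noisy count of samples in bins 0..j-1 via a dyadic decomposition."""
        total, pos = 0.0, 0
        for l in reversed(range(levels)):
            block = 2 ** l
            if pos + block <= j:
                total += noisy[l][pos // block]
                pos += block
        return total

    return [prefix(j) for j in range(1, K + 1)]
```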

In an error-correcting code, a sender encodes a message $x \in \{ 0, 1 \}^k$ such that it is still decodable by a receiver on the other end of a noisy channel. In the setting of \emph{error-correcting codes with feedback}, after sending each bit, the sender learns what was received at the other end and can tailor future messages accordingly. While the unique decoding radius of feedback codes has long been known to be $\frac13$, the list decoding capabilities of feedback codes are not well understood. In this paper, we provide the first nontrivial bounds on the list decoding radius of feedback codes for lists of size $\ell$. For $\ell = 2$, we fully determine the $2$-list decoding radius to be $\frac37$. For larger values of $\ell$, we show an upper bound of $\frac12 - \frac{1}{2^{\ell + 2} - 2}$, and show that the same techniques used for the $\ell = 2$ case cannot match this upper bound in general.
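As a quick arithmetic check, substituting $\ell = 2$ into the general upper bound recovers the exact $2$-list decoding radius stated above:
\[
  \frac{1}{2} - \frac{1}{2^{\ell+2} - 2}\,\bigg|_{\ell = 2}
  \;=\; \frac{1}{2} - \frac{1}{14}
  \;=\; \frac{3}{7}.
\]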

The practical use of text-to-image generation has evolved from simple, monolithic models to complex workflows that combine multiple specialized components. While workflow-based approaches can lead to improved image quality, crafting effective workflows requires significant expertise, owing to the large number of available components, their complex inter-dependence, and their dependence on the generation prompt. Here, we introduce the novel task of prompt-adaptive workflow generation, where the goal is to automatically tailor a workflow to each user prompt. We propose two LLM-based approaches to tackle this task: a tuning-based method that learns from user-preference data, and a training-free method that uses the LLM to select existing flows. Both approaches lead to improved image quality when compared to monolithic models or generic, prompt-independent workflows. Our work shows that prompt-dependent flow prediction offers a new pathway to improving text-to-image generation quality, complementing existing research directions in the field.
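Purely as an illustration of the training-free variant (the abstract does not specify prompts, models, or how workflows are represented), a selection step could look like the following sketch, where llm_complete and the workflow descriptions are hypothetical placeholders.

```python
def select_workflow(prompt, workflows, llm_complete):
    """Training-free, prompt-adaptive selection sketch: ask an LLM to pick the
    existing workflow best suited to a user prompt.

    workflows    : list of dicts, each with a human-readable 'description'
                   (hypothetical format, for illustration only)
    llm_complete : hypothetical callable wrapping whatever LLM API is
                   available; takes a prompt string, returns a completion
    """
    listing = "\n".join(
        f"[{i}] {wf['description']}" for i, wf in enumerate(workflows)
    )
    query = (
        "Given the text-to-image prompt below, reply with the index of the "
        "generation workflow most likely to produce a high-quality image.\n\n"
        f"Prompt: {prompt}\n\nWorkflows:\n{listing}\n\nIndex:"
    )
    reply = llm_complete(query)
    try:
        index = int(reply.strip().split()[0].strip("[]."))
    except (ValueError, IndexError):
        index = 0                            # fall back to a default workflow
    return workflows[max(0, min(index, len(workflows) - 1))]
```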

This paper presents an optimal construction of $N$-bit-delay almost instantaneous fixed-to-variable-length (AIFV) codes, the general form of binary codes that can be constructed when finitely many bits of decoding delay are allowed. The presented method enables us to optimize lossless codes over a broader class than the conventional FV and AIFV codes. The paper first discusses the code-construction problem, which contains several essential subproblems, and defines three classes of optimality to clarify how far these problems can be solved. The properties of the optimal codes are analyzed theoretically, yielding sufficient conditions for achieving the optimum. We then propose an algorithm for constructing $N$-bit-delay AIFV codes for given stationary memoryless sources. The optimality of the constructed codes is examined both theoretically and empirically: they achieve shorter expected code lengths for $N\ge 3$ than the conventional AIFV-$m$ and extended Huffman codes, and in random-number simulations they attain higher compression efficiency than 32-bit-precision range codes under reasonable conditions.

The random walk $d$-ary cuckoo hashing algorithm was defined by Fotakis, Pagh, Sanders, and Spirakis to generalize and improve upon the standard cuckoo hashing algorithm of Pagh and Rodler. Random walk $d$-ary cuckoo hashing has low space overhead, guaranteed fast access, and insertion times that are fast in practice. In this paper, we give a theoretical insertion time bound for this algorithm. More precisely, for every $d\ge 3$ hashes, let $c_d^*$ be the sharp threshold for the load factor $c$ below which a valid assignment of $cm$ objects to a hash table of size $m$ is likely to exist. We show that for any $d\ge 4$ hashes and load factor $c<c_d^*$, the expectation of the random walk insertion time is $O(1)$, that is, a constant depending only on $d$ and $c$ but not on $m$.
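For readers unfamiliar with the algorithm being analyzed, a minimal sketch of random walk $d$-ary cuckoo insertion follows; the toy hash functions, table size, and step cap are illustrative choices, not part of the analysis above.

```python
import random

def cuckoo_insert(table, hashes, x, max_steps=10_000):
    """Random walk d-ary cuckoo insertion.

    table  : list of slots (None when empty)
    hashes : list of d hash functions, each mapping a key to a slot index
    x      : key to insert

    Place the current key in any free candidate slot; if all d candidates
    are occupied, evict the occupant of a uniformly random candidate slot
    and continue the random walk with the evicted key.
    """
    cur = x
    for _ in range(max_steps):
        slots = [h(cur) for h in hashes]
        for s in slots:
            if table[s] is None:             # free slot: place and stop
                table[s] = cur
                return True
        s = random.choice(slots)             # all occupied: evict at random
        table[s], cur = cur, table[s]
    return False                             # give up after max_steps evictions

# Illustrative usage with d = 4 toy hash functions and a table of size m.
m = 101
table = [None] * m
hashes = [lambda k, i=i: hash((i, k)) % m for i in range(4)]
for key in range(60):
    cuckoo_insert(table, hashes, key)
```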

We prove that the constructive and intuitionistic variants of the modal logic $\mathsf{KB}$ coincide. This result contrasts with a recent result by Das and Marin, who showed that the constructive and intuitionistic variants of $\mathsf{K}$ do not prove the same diamond-free formulas.

Principal component analysis (PCA) is a widely used technique for dimension reduction. As datasets continue to grow in size, distributed PCA (DPCA) has become an active research area. A key challenge in DPCA lies in efficiently aggregating results across multiple machines or computing nodes due to computational overhead. Fan et al. (2019) introduced a pioneering DPCA method to estimate the leading rank-$r$ eigenspace, aggregating local rank-$r$ projection matrices by averaging. However, their method does not utilize eigenvalue information. In this article, we propose a novel DPCA method that incorporates eigenvalue information to aggregate local results via the matrix $\beta$-mean, which we call $\beta$-DPCA. The matrix $\beta$-mean offers a flexible and robust aggregation method through the adjustable choice of $\beta$ values. Notably, for $\beta=1$, it corresponds to the arithmetic mean; for $\beta=-1$, the harmonic mean; and as $\beta \to 0$, the geometric mean. Moreover, the matrix $\beta$-mean is shown to be associated with the matrix $\beta$-divergence, a subclass of the Bregman matrix divergence, which supports the robustness of $\beta$-DPCA. We also study the stability of eigenvector ordering under eigenvalue perturbation for $\beta$-DPCA. The performance of our proposal is evaluated through numerical studies.
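For reference, the eigenvalue-agnostic averaging baseline of Fan et al. (2019) described above can be sketched as below: each node contributes its local rank-$r$ projection matrix, the projections are averaged, and the leading eigenspace of the average is returned. This simplified sketch covers only the averaging baseline; the matrix $\beta$-mean aggregation of $\beta$-DPCA is defined in the paper and not reproduced here.

```python
import numpy as np

def local_projection(X, r):
    """Rank-r projection matrix V V^T built from the top-r right singular
    vectors of a centered local data matrix X (n_samples x p)."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    V = Vt[:r].T                              # p x r orthonormal basis
    return V @ V.T

def distributed_pca_average(local_datasets, r):
    """Averaging baseline for distributed PCA: average the local rank-r
    projection matrices and return the top-r eigenvectors of the average
    as the estimated leading eigenspace."""
    P_bar = np.mean([local_projection(X, r) for X in local_datasets], axis=0)
    eigvals, eigvecs = np.linalg.eigh(P_bar)  # eigenvalues in ascending order
    return eigvecs[:, -r:][:, ::-1]           # top-r eigenvectors, descending
```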

Video instance segmentation (VIS) is the task of simultaneously classifying, segmenting, and tracking object instances of interest in video. Recent methods typically develop sophisticated pipelines to tackle this task. Here, we propose a new video instance segmentation framework built upon Transformers, termed VisTR, which views the VIS task as a direct end-to-end parallel sequence decoding/prediction problem. Given a video clip consisting of multiple image frames as input, VisTR directly outputs the sequence of masks for each instance in the video, in order. At the core is a new, effective instance sequence matching and segmentation strategy, which supervises and segments instances at the sequence level as a whole. VisTR frames instance segmentation and tracking from the same perspective of similarity learning, which considerably simplifies the overall pipeline and differs significantly from existing approaches. Without bells and whistles, VisTR achieves the highest speed among all existing VIS models and the best result among methods using a single model on the YouTube-VIS dataset. For the first time, we demonstrate a much simpler and faster video instance segmentation framework built upon Transformers, achieving competitive accuracy. We hope that VisTR can motivate future research on more video understanding tasks.
