
In this paper we extend to two-dimensional data two recently introduced one-dimensional compressibility measures: the $\gamma$ measure defined in terms of the smallest string attractor, and the $\delta$ measure defined in terms of the number of distinct substrings of the input string. Concretely, we introduce the two-dimensional measures $\gamma_{2D}$ and $\delta_{2D}$ as natural generalizations of $\gamma$ and $\delta$ and study some of their properties. Among other things, we prove that $\delta_{2D}$ is monotone and can be computed in linear time, and we show that although it is still true that $\delta_{2D} \leq \gamma_{2D}$, the gap between the two measures can be $\Omega(\sqrt{n})$ for families of $n\times n$ matrices, and is therefore asymptotically larger than the gap in one dimension. Finally, we use the measures $\gamma_{2D}$ and $\delta_{2D}$ to provide the first analysis of the space usage of the two-dimensional block tree introduced in [Brisaboa et al., Two-dimensional block trees, The Computer Journal, 2023].
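For intuition, here is a naive Python sketch of the two measures, assuming the standard one-dimensional definition $\delta(S) = \max_k d_k(S)/k$ (with $d_k$ the number of distinct length-$k$ substrings) and the analogous guess that $\delta_{2D}$ maximizes the number of distinct $k\times k$ submatrices divided by $k^2$; the paper's exact definition may differ, and this brute-force computation is far from the linear-time algorithm claimed above.

```python
# Naive sketches of delta and a guessed delta_2D; illustrative only.

def delta_1d(s: str) -> float:
    n = len(s)
    return max(len({s[i:i + k] for i in range(n - k + 1)}) / k
               for k in range(1, n + 1))

def delta_2d(m: list[str]) -> float:
    # m is an n x n matrix given as a list of equal-length strings
    n = len(m)
    best = 0.0
    for k in range(1, n + 1):
        blocks = {tuple(row[j:j + k] for row in m[i:i + k])
                  for i in range(n - k + 1) for j in range(n - k + 1)}
        best = max(best, len(blocks) / (k * k))
    return best

print(delta_1d("abababab"))     # small value: the string is highly repetitive
print(delta_2d(["ab", "ba"]))   # toy 2x2 example
```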

Related content

Let $(X_t)$ be a reflected diffusion process in a bounded convex domain in $\mathbb R^d$, solving the stochastic differential equation $$dX_t = \nabla f(X_t) dt + \sqrt{2f (X_t)} dW_t, ~t \ge 0,$$ with $W_t$ a $d$-dimensional Brownian motion. The data $X_0, X_D, \dots, X_{ND}$ consist of discrete measurements and the time interval $D$ between consecutive observations is fixed so that one cannot `zoom' into the observed path of the process. The goal is to infer the diffusivity $f$ and the associated transition operator $P_{t,f}$. We prove injectivity theorems and stability inequalities for the maps $f \mapsto P_{t,f} \mapsto P_{D,f}, t<D$. Using these estimates we establish the statistical consistency of a class of Bayesian algorithms based on Gaussian process priors for the infinite-dimensional parameter $f$, and show optimality of some of the convergence rates obtained. We discuss an underlying relationship between the degree of ill-posedness of this inverse problem and the `hot spots' conjecture from spectral geometry.
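As a concrete illustration of the observation scheme, the sketch below simulates a one-dimensional analogue of the model on $[0,1]$ with a hypothetical diffusivity $f$, using Euler-Maruyama steps and reflection by mirroring at the boundary; the paper works with general bounded convex domains in $\mathbb R^d$, and only the coarse samples $X_0, X_D, \dots, X_{ND}$ would be available to the statistician.

```python
# A minimal 1D sketch of the data-generating model on [0, 1]; the diffusivity f
# below is an illustrative choice, not one studied in the paper.
import numpy as np

def simulate_reflected(f, f_prime, D=1.0, N=200, substeps=1000, x0=0.5, seed=0):
    rng = np.random.default_rng(seed)
    dt = D / substeps
    x, obs = x0, [x0]
    for _ in range(N):
        for _ in range(substeps):
            x += f_prime(x) * dt + np.sqrt(2 * f(x) * dt) * rng.standard_normal()
            x = abs(x)                    # reflect at 0
            x = 2 - x if x > 1 else x     # reflect at 1
        obs.append(x)                     # only X_0, X_D, ..., X_{ND} are observed
    return np.array(obs)

f = lambda x: 1.0 + 0.5 * np.sin(np.pi * x)        # hypothetical diffusivity
f_prime = lambda x: 0.5 * np.pi * np.cos(np.pi * x)
print(simulate_reflected(f, f_prime)[:5])
```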

In this work we propose a method to perform optimization on manifolds. We assume we are given an objective function $f$ defined on a manifold and think of it as the potential energy of a mechanical system. By adding a momentum-dependent kinetic energy we define its Hamiltonian function, which allows us to write the corresponding Hamiltonian system. We make it conformal by introducing a dissipation term: the result is the continuous model of our scheme. We solve it via splitting methods (Lie-Trotter and leapfrog): we combine the RATTLE scheme, which approximates the conservative part of the flow, with the exact flow of the dissipative part. The result is a conformal symplectic method for constant stepsizes. We also propose an adaptive stepsize version of it. We test it on an example, the minimization of a function defined on a sphere, and compare it with the usual gradient descent method.
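The following is a simplified sketch of the underlying idea (damped momentum plus re-projection onto the unit sphere), not the RATTLE-based splitting scheme of the paper; the step size, friction coefficient and the linear test objective are illustrative choices only.

```python
# Dissipative (conformal) momentum descent on the unit sphere: a sketch.
import numpy as np

def conformal_descent(grad_f, x0, h=0.05, gamma=1.0, steps=500):
    x = x0 / np.linalg.norm(x0)
    p = np.zeros_like(x)
    for _ in range(steps):
        p = np.exp(-gamma * h) * p            # exact flow of the dissipative part
        g = grad_f(x)
        g_tan = g - np.dot(g, x) * x          # project gradient onto the tangent space
        p = p - h * g_tan                     # momentum kick
        x = x + h * p
        x = x / np.linalg.norm(x)             # project back onto the sphere
        p = p - np.dot(p, x) * x              # keep the momentum tangent
    return x

# minimize f(x) = <c, x> on the sphere; the minimizer is -c / ||c||
c = np.array([1.0, 2.0, 2.0])
x_star = conformal_descent(lambda x: c, np.array([1.0, 0.0, 0.0]))
print(x_star, -c / np.linalg.norm(c))
```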

We consider spin systems on general $n$-vertex graphs of unbounded degree and explore the effects of spectral independence on the rate of convergence to equilibrium of global Markov chains. Spectral independence is a novel way of quantifying the decay of correlations in spin system models, which has significantly advanced the study of Markov chains for spin systems. We prove that whenever spectral independence holds, the popular Swendsen--Wang dynamics for the $q$-state ferromagnetic Potts model on graphs of maximum degree $\Delta$, where $\Delta$ is allowed to grow with $n$, converges in $O((\Delta \log n)^c)$ steps where $c > 0$ is a constant independent of $\Delta$ and $n$. We also show a similar mixing time bound for the block dynamics of general spin systems, again assuming that spectral independence holds. Finally, for monotone spin systems such as the Ising model and the hardcore model on bipartite graphs, we show that spectral independence implies that the mixing time of the systematic scan dynamics is $O(\Delta^c \log n)$ for a constant $c>0$ independent of $\Delta$ and $n$. Systematic scan dynamics are widely popular but are notoriously difficult to analyze. Our result implies optimal $O(\log n)$ mixing time bounds for any systematic scan dynamics of the ferromagnetic Ising model on general graphs up to the tree uniqueness threshold. Our main technical contribution is an improved factorization of the entropy functional: this is the common starting point for all our proofs. Specifically, we establish the so-called $k$-partite factorization of entropy with a constant that depends polynomially on the maximum degree of the graph.
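For reference, here is a minimal sketch of one Swendsen-Wang update for the $q$-state ferromagnetic Potts model on an arbitrary edge list, using the standard convention that a same-spin edge is opened with probability $1 - e^{-\beta}$; it only illustrates the dynamics whose mixing time is analyzed above.

```python
# One Swendsen-Wang step: percolate same-spin edges, then recolour each cluster.
import math
import random

def swendsen_wang_step(spins, edges, q, beta, rng=random):
    n = len(spins)
    parent = list(range(n))

    def find(v):                              # union-find with path compression
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    p = 1.0 - math.exp(-beta)                 # open a same-spin edge with prob 1 - e^(-beta)
    for u, v in edges:
        if spins[u] == spins[v] and rng.random() < p:
            parent[find(u)] = find(v)         # merge the two clusters
    colour = {}                               # fresh uniform colour per cluster
    return [colour.setdefault(find(v), rng.randrange(q)) for v in range(n)]

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]      # a 4-cycle
spins = [0, 0, 1, 2]
for _ in range(5):
    spins = swendsen_wang_step(spins, edges, q=3, beta=1.0)
print(spins)
```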

We introduce an algorithm for estimating the trace of a matrix function $f(\mathbf{A})$ using implicit products with a symmetric matrix $\mathbf{A}$. Existing methods for implicit trace estimation of a matrix function tend to treat matrix-vector products with $f(\mathbf{A})$ as a black box to be computed by a Krylov subspace method. Like other recent algorithms for implicit trace estimation, our approach is based on a combination of deflation and stochastic trace estimation. However, we take a closer look at how products with $f(\mathbf{A})$ are integrated into these approaches, which enables several efficiencies not present in previously studied methods. In particular, we describe a Krylov subspace method for computing a low-rank approximation of a matrix function by a computationally efficient projection onto a Krylov subspace.
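The sketch below shows the general deflation-plus-stochastic-estimation template (in the spirit of Hutch++) that the abstract refers to, with a dense eigendecomposition standing in for products with $f(\mathbf{A})$; the paper instead integrates efficient Krylov-based products with $f(\mathbf{A})$ into this template.

```python
# Deflated stochastic trace estimation of tr(f(A)) for symmetric A: a sketch.
import numpy as np

def trace_fA(A, f, r=10, m=30, seed=0):
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    w, V = np.linalg.eigh(A)                    # dense stand-in for Krylov matvecs with f(A)
    fA = (V * f(w)) @ V.T
    # deflation: a sketched orthonormal basis Q for the dominant range of f(A)
    Q, _ = np.linalg.qr(fA @ rng.standard_normal((n, r)))
    t_deflated = np.trace(Q.T @ fA @ Q)
    # Hutchinson estimate on the remainder (I - QQ^T) f(A) (I - QQ^T)
    G = rng.standard_normal((n, m))
    G = G - Q @ (Q.T @ G)
    t_rest = np.trace(G.T @ fA @ G) / m
    return t_deflated + t_rest

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 50))
A = (A + A.T) / 2
exact = np.sum(np.exp(np.linalg.eigvalsh(A)))   # tr(exp(A)) = sum of exp(eigenvalues)
print(trace_fA(A, np.exp), exact)
```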

This paper presents a new foundational approach to information theory based on the concept of the information efficiency of a recursive function, which is defined as the difference between the information in the input and the output. The theory allows us to study planar representations of various infinite domains. Dilation theory studies the information effects of recursive operations in terms of topological deformations of the plane. I show that the well-known class of finite sets of natural numbers behaves erratically under such transformations. It is subject to phase transitions that in some cases have a fractal nature. The class is \emph{semi-countable}: there is no intrinsic information theory for this class and there are no efficient methods for systematic search. There is a relation between the information efficiency of a function and the time needed to compute it: a deterministic computational process can destroy information in linear time, but it can only generate information at logarithmic speed. Checking functions for problems in $NP$ are information-discarding. Consequently, when we try to solve a decision problem based on an efficiently computable checking function, we need exponential time to reconstruct the information destroyed by such a function. At the end of the paper I sketch a systematic taxonomy for problems in $NP$.

A Las Vegas randomized algorithm is given to compute the Hermite normal form of a nonsingular integer matrix $A$ of dimension $n$. The algorithm uses quadratic integer multiplication and cubic matrix multiplication and has running time bounded by $O(n^3 (\log n + \log ||A||)^2(\log n)^2)$ bit operations, where $||A||= \max_{ij} |A_{ij}|$ denotes the largest entry of $A$ in absolute value. A variant of the algorithm that uses pseudo-linear integer multiplication is also given, with running time $(n^3 \log ||A||)^{1+o(1)}$ bit operations, where the exponent $+o(1)$ captures additional factors $c_1 (\log n)^{c_2} (\log \log ||A||)^{c_3}$ for positive real constants $c_1,c_2,c_3$.
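For readers who want to see a Hermite normal form concretely, the snippet below computes one with SymPy (assuming a version that provides sympy.matrices.normalforms.hermite_normal_form); it does not implement the algorithm of the paper.

```python
# Compute the HNF of a small nonsingular integer matrix with SymPy.
from sympy import Matrix
from sympy.matrices.normalforms import hermite_normal_form

A = Matrix([[2, 3, 6],
            [5, 6, 1],
            [8, 3, 1]])        # det(A) = -183, so A is nonsingular
print(hermite_normal_form(A))
```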

A storage code is an assignment of symbols to the vertices of a connected graph $G(V,E)$ with the property that the value of each vertex is a function of the values of its neighbors, or more generally, of a certain neighborhood of the vertex in $G$. In this work we introduce a new construction method of storage codes, enabling one to construct new codes from known ones via an interleaving procedure driven by resolvable designs. We also study storage codes on $\mathbb Z$ and ${\mathbb Z}^2$ (lines and grids), finding closed-form expressions for the capacity of several one- and two-dimensional systems depending on their recovery set, using connections between storage codes, graphs, anticodes, and difference-avoiding sets.
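A toy example of the definition (not a construction from the paper): on the complete graph $K_n$ with binary symbols and the recovery rule "each vertex is the XOR of its neighbours", the even-weight code is a storage code, since the rule holds at every vertex exactly when the total parity is zero.

```python
# Check whether an assignment is a codeword under the assumed XOR recovery rule.
from functools import reduce

def is_storage_codeword(values, neighbours):
    return all(values[v] == reduce(lambda a, b: a ^ b, (values[u] for u in nbrs), 0)
               for v, nbrs in enumerate(neighbours))

n = 4
k4 = [[u for u in range(n) if u != v] for v in range(n)]   # complete graph K_4
print(is_storage_codeword([1, 1, 0, 0], k4))   # even weight -> True
print(is_storage_codeword([1, 0, 0, 0], k4))   # odd weight  -> False
```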

We study the continuous multi-reference alignment model of estimating a periodic function on the circle from noisy and circularly-rotated observations. Motivated by analogous high-dimensional problems that arise in cryo-electron microscopy, we establish minimax rates for estimating generic signals that are explicit in the dimension $K$. In a high-noise regime with noise variance $\sigma^2 \gtrsim K$, for signals with Fourier coefficients of roughly uniform magnitude, the rate scales as $\sigma^6$ and has no further dependence on the dimension. This rate is achieved by a bispectrum inversion procedure, and our analyses provide new stability bounds for bispectrum inversion that may be of independent interest. In a low-noise regime where $\sigma^2 \lesssim K/\log K$, the rate scales instead as $K\sigma^2$, and we establish this rate by a sharp analysis of the maximum likelihood estimator that marginalizes over latent rotations. A complementary lower bound that interpolates between these two regimes is obtained using Assouad's hypercube lemma. We extend these analyses also to signals whose Fourier coefficients have a slow power law decay.
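Below is a small sketch of the observation model and of the shift invariance that makes the bispectrum $B[k_1,k_2] = F[k_1]\,F[k_2]\,\overline{F[k_1+k_2]}$ useful here; the actual inversion and debiasing steps of the procedure analyzed above are omitted.

```python
# Multi-reference alignment observations and bispectrum shift invariance.
import numpy as np

def bispectrum(x):
    F = np.fft.fft(x)
    k = np.arange(len(x))
    return F[:, None] * F[None, :] * np.conj(F[(k[:, None] + k[None, :]) % len(x)])

rng = np.random.default_rng(0)
K, sigma = 16, 0.5
signal = rng.standard_normal(K)

# shift invariance: a circular rotation leaves the bispectrum unchanged
print(np.allclose(bispectrum(np.roll(signal, 5)), bispectrum(signal)))   # True

# the observation model: a random circular shift plus Gaussian noise
y = np.roll(signal, rng.integers(K)) + sigma * rng.standard_normal(K)
print(y[:4])
```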

Model degrees of freedom ($\df$) is a fundamental concept in statistics because it quantifies the flexibility of a fitting procedure and is indispensable in model selection. The $\df$ is often intuitively equated with the number of independent variables in the fitting procedure. But for adaptive regressions that perform variable selection (e.g., best subset regression), the model $\df$ is larger than the number of selected variables. The excess part has been defined as the \emph{search degrees of freedom} ($\sdf$) to account for model selection. However, this definition is limited since it does not consider fitting procedures in augmented space, such as splines and regression trees, and it does not use the same fitting procedure for $\sdf$ and $\df$. For example, the lasso's $\sdf$ is defined through the \emph{relaxed} lasso's $\df$ instead of the lasso's $\df$. Here we propose a \emph{modified search degrees of freedom} ($\msdf$) to directly account for the cost of searching in the original or augmented space. Since many fitting procedures can be characterized by a linear operator, we define the search cost as the effort to determine such a linear operator. When we construct a linear operator for the lasso via the iterative ridge regression, $\msdf$ offers a new perspective on its search cost. For some complex procedures such as multivariate adaptive regression splines (MARS), the search cost needs to be pre-determined to serve as a tuning parameter for the procedure itself, but this pre-determined value might be inaccurate. To investigate the effect of an inaccurate pre-determined search cost, we develop two concepts, \emph{nominal} $\df$ and \emph{actual} $\df$, and formulate a property named \emph{self-consistency} when there is no gap between the \emph{nominal} $\df$ and the \emph{actual} $\df$.
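The sketch below is a Monte Carlo illustration of the covariance characterization $\df = \sigma^{-2}\sum_i \mathrm{Cov}(\hat y_i, y_i)$, contrasting OLS on one fixed predictor ($\df = 1$) with best-subset selection of a single predictor, whose $\df$ exceeds 1 because of the search; the design, noise level and null signal are illustrative choices, not the paper's experiments.

```python
# Monte Carlo degrees of freedom via the covariance formula.
import numpy as np

rng = np.random.default_rng(0)
n, p, sigma, reps = 50, 10, 1.0, 4000
X = rng.standard_normal((n, p))
mu = np.zeros(n)                              # null model: y is pure noise

def fit_fixed(y):                             # OLS on predictor 0 only
    b = X[:, 0] @ y / (X[:, 0] @ X[:, 0])
    return b * X[:, 0]

def fit_best_subset1(y):                      # search over all single predictors
    scores = (X.T @ y) ** 2 / np.sum(X ** 2, axis=0)
    j = int(np.argmax(scores))
    b = X[:, j] @ y / (X[:, j] @ X[:, j])
    return b * X[:, j]

Y = mu + sigma * rng.standard_normal((reps, n))
for fit in (fit_fixed, fit_best_subset1):
    Yhat = np.array([fit(y) for y in Y])
    cov = np.mean((Yhat - Yhat.mean(0)) * (Y - mu), axis=0)   # Cov(yhat_i, y_i)
    print(fit.__name__, cov.sum() / sigma ** 2)               # ~1 vs noticeably > 1
```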

A garden $G$ is populated by $n\ge 1$ bamboos $b_1, b_2, ..., b_n$ with respective daily growth rates $h_1 \ge h_2 \ge \dots \ge h_n$. It is assumed that the initial heights of the bamboos are zero. The robotic gardener maintaining the garden regularly attends to the bamboos and trims them to height zero according to some schedule. The Bamboo Garden Trimming Problem (BGT) is to design a perpetual schedule of cuts that keeps the elevation of the bamboo garden as low as possible. The bamboo garden is a metaphor for a collection of machines which have to be serviced, with different frequencies, by a robot which can service only one machine at a time. The objective is to design a perpetual schedule of servicing which minimizes the maximum (weighted) waiting time for servicing. We consider two variants of BGT. In discrete BGT the robot trims only one bamboo at the end of each day. In continuous BGT the bamboos can be cut at any time; however, the robot needs time to move from one bamboo to the next. For discrete BGT, we give tighter approximation algorithms for the case when the growth rates are balanced and for the general case. The former algorithm settles one of the conjectures about the Pinwheel problem. The general approximation algorithm improves on the previous best approximation ratio. For continuous BGT, we propose approximation algorithms which achieve approximation ratios $O(\log \lceil h_1/h_n\rceil)$ and $O(\log n)$.
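As a toy baseline (not one of the paper's algorithms), the sketch below simulates discrete BGT under the natural "cut the tallest bamboo" rule and reports the largest height ever observed; since the total daily growth is $H = \sum_i h_i$ and only one bamboo is cut per day, $H$ is the natural reference scale for this quantity.

```python
# Reduce-Max baseline for discrete Bamboo Garden Trimming.
def reduce_max(rates, days):
    heights = [0.0] * len(rates)
    worst = 0.0
    for _ in range(days):
        heights = [h + r for h, r in zip(heights, rates)]   # one day of growth
        worst = max(worst, max(heights))
        tallest = max(range(len(heights)), key=heights.__getitem__)
        heights[tallest] = 0.0                              # trim the tallest bamboo
    return worst

rates = [4, 2, 1, 1]            # daily growth rates h_1 >= ... >= h_n
print(reduce_max(rates, 10000), sum(rates))   # observed maximum vs H = sum of rates
```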
