
We consider spin systems on general $n$-vertex graphs of unbounded degree and explore the effects of spectral independence on the rate of convergence to equilibrium of global Markov chains. Spectral independence is a novel way of quantifying the decay of correlations in spin system models, which has significantly advanced the study of Markov chains for spin systems. We prove that whenever spectral independence holds, the popular Swendsen--Wang dynamics for the $q$-state ferromagnetic Potts model on graphs of maximum degree $\Delta$, where $\Delta$ is allowed to grow with $n$, converges in $O((\Delta \log n)^c)$ steps where $c > 0$ is a constant independent of $\Delta$ and $n$. We also show a similar mixing time bound for the block dynamics of general spin systems, again assuming that spectral independence holds. Finally, for monotone spin systems such as the Ising model and the hardcore model on bipartite graphs, we show that spectral independence implies that the mixing time of the systematic scan dynamics is $O(\Delta^c \log n)$ for a constant $c>0$ independent of $\Delta$ and $n$. Systematic scan dynamics are widely popular but are notoriously difficult to analyze. Our result implies optimal $O(\log n)$ mixing time bounds for any systematic scan dynamics of the ferromagnetic Ising model on general graphs up to the tree uniqueness threshold. Our main technical contribution is an improved factorization of the entropy functional: this is the common starting point for all our proofs. Specifically, we establish the so-called $k$-partite factorization of entropy with a constant that depends polynomially on the maximum degree of the graph.
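
For intuition, here is a minimal sketch of one Swendsen--Wang update for the $q$-state ferromagnetic Potts model, the dynamics analyzed above. The adjacency-list representation, the parameter names (`adj`, `sigma`, `beta`, `q`), and the convention that the edge-retention probability is $1 - e^{-\beta}$ are illustrative choices, not taken from the paper.

```python
import math
import random

def swendsen_wang_step(adj, sigma, beta, q):
    """One Swendsen--Wang update for the q-state ferromagnetic Potts model.

    adj   : dict mapping each vertex to a list of neighbours
    sigma : dict mapping each vertex to its current colour in {0, ..., q-1}
    beta  : coupling; each monochromatic edge is kept with prob. 1 - exp(-beta)
    """
    p = 1.0 - math.exp(-beta)
    # Step 1: keep each monochromatic edge independently with probability p.
    kept = {v: [] for v in adj}
    for u in adj:
        for v in adj[u]:
            if u < v and sigma[u] == sigma[v] and random.random() < p:
                kept[u].append(v)
                kept[v].append(u)
    # Step 2: recolour every connected component of the kept subgraph uniformly.
    new_sigma, visited = {}, set()
    for s in adj:
        if s in visited:
            continue
        colour, stack = random.randrange(q), [s]
        visited.add(s)
        while stack:
            u = stack.pop()
            new_sigma[u] = colour
            for v in kept[u]:
                if v not in visited:
                    visited.add(v)
                    stack.append(v)
    return new_sigma

# Example: a few sweeps on a 6-cycle with q = 3.
adj = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
sigma = {i: random.randrange(3) for i in range(6)}
for _ in range(10):
    sigma = swendsen_wang_step(adj, sigma, beta=1.0, q=3)
```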

Related Content

Engineers often face the task of selecting, from a candidate set, the most appropriate model for simulating the behavior of an engineered system. Experimental monitoring data can generate significant value by supporting such decisions. Such data can be leveraged within a Bayesian model updating process, enabling the uncertainty-aware calibration of any candidate model. The model selection task can subsequently be cast as a problem of decision-making under uncertainty, where one seeks the model that yields an optimal balance between the reward associated with model precision, in terms of recovering target Quantities of Interest (QoI), and the cost of each model, in terms of complexity and compute time. In this work, we examine the model selection task by means of Bayesian decision theory, in a setting where models of varying refinement, and thus varying levels of fidelity, are available. We illustrate this framework with an application to the IMAC-MVUQ Round-Robin Challenge. Numerical investigations show various outcomes of model selection depending on the target QoI.
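
As a rough illustration of the decision-theoretic viewpoint, the sketch below scores candidate models by an expected utility that trades off QoI accuracy against model cost. The utility functional, the penalty weight, and the synthetic posterior QoI samples are hypothetical placeholders, not quantities from the IMAC-MVUQ challenge.

```python
import numpy as np

def expected_utility(qoi_samples, qoi_target, cost, penalty=1.0):
    """Score one candidate model: reward for recovering the target QoI
    (here, a negative expected squared error over posterior samples)
    minus a user-chosen penalty on complexity / compute time."""
    reward = -np.mean((np.asarray(qoi_samples) - qoi_target) ** 2)
    return reward - penalty * cost

# Hypothetical posterior-predictive QoI samples for three models of increasing
# fidelity, together with their (normalised) compute costs.
rng = np.random.default_rng(0)
models = {
    "coarse": (rng.normal(1.30, 0.40, 5000), 0.1),
    "medium": (rng.normal(1.05, 0.15, 5000), 0.4),
    "fine":   (rng.normal(1.01, 0.05, 5000), 1.0),
}
target = 1.0
scores = {name: expected_utility(s, target, c) for name, (s, c) in models.items()}
best = max(scores, key=scores.get)  # model with the best reward/cost balance
```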

Matrix reduction is the standard procedure for computing the persistent homology of a filtered simplicial complex with $m$ simplices. Its output is a particular decomposition of the total boundary matrix, from which the persistence diagrams and generating cycles are derived. Persistence diagrams are known to vary continuously with respect to their input, motivating the study of their computation for time-varying filtered complexes. Computing persistence dynamically can be reduced to maintaining a valid decomposition under adjacent transpositions in the filtration order. Since there are $O(m^2)$ such transpositions, this maintenance procedure exhibits limited scalability and is often too fine for many applications. We propose a coarser strategy for maintaining the decomposition over a 1-parameter family of filtrations. By reduction to a particular longest common subsequence problem, we show that the minimal number of decomposition updates $d$ can be found in $O(m \log \log m)$ time and $O(m)$ space, and that the corresponding sequence of permutations -- which we call a schedule -- can be constructed in $O(d m \log m)$ time. We also show that, in expectation, the storage needed to employ this strategy is actually sublinear in $m$. Exploiting this connection, we show experimentally that the decrease in operations to compute diagrams across a family of filtrations is proportional to the difference between the expected quadratic number of states and the proposed sublinear coarsening. Applications to video data, dynamic metric space data, and multiparameter persistence are also presented.
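
For reference, the following is a minimal sketch of the standard left-to-right column reduction over GF(2) on which such dynamic maintenance schemes build; the sparse, set-based column representation is an implementation choice made here for brevity, not the paper's.

```python
def reduce_boundary_matrix(columns):
    """Standard left-to-right column reduction over GF(2).

    columns[j] is the set of row indices with a 1 in column j of the boundary
    matrix.  Returns the reduced columns; each nonempty reduced column j gives
    a persistence pair (low(j), j)."""
    low_to_col = {}                              # pivot row -> column with that pivot
    R = [set(c) for c in columns]
    for j in range(len(R)):
        while R[j] and max(R[j]) in low_to_col:
            R[j] ^= R[low_to_col[max(R[j])]]     # add the conflicting column mod 2
        if R[j]:
            low_to_col[max(R[j])] = j
    return R

# Filtration of a filled triangle: three vertices, three edges, one 2-cell.
cols = [set(), set(), set(), {0, 1}, {1, 2}, {0, 2}, {3, 4, 5}]
R = reduce_boundary_matrix(cols)
pairs = [(max(c), j) for j, c in enumerate(R) if c]   # persistence pairs (birth, death)
```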

We characterize the convergence properties of traditional best-response (BR) algorithms for computing solutions to mixed-integer Nash equilibrium problems (MI-NEPs) that reduce to a class of monotone Nash equilibrium problems (NEPs) once the integer restrictions are relaxed. We show that the sequence produced by a Jacobi/Gauss-Seidel BR method always approaches a bounded region containing the entire solution set of the MI-NEP, whose tightness depends on the problem data and is related to the degree of strong monotonicity of the relaxed NEP. When the underlying algorithm is applied to the relaxed NEP, we establish data-dependent complexity results characterizing its convergence to the unique solution of the NEP. In addition, we derive one of the very few sufficient conditions for the existence of solutions to MI-NEPs. The theoretical results developed bring important practical advantages that are illustrated on a numerical instance of a smart building control application.
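
A minimal sketch of a Gauss--Seidel best-response sweep of the kind analyzed above is given below; it assumes scalar strategies and a user-supplied `best_response` oracle, both illustrative simplifications. For an MI-NEP, such an iteration is only guaranteed, per the result above, to approach a bounded region around the solution set rather than to converge.

```python
def gauss_seidel_best_response(best_response, x0, max_iters=100, tol=1e-8):
    """Generic Gauss--Seidel best-response loop for scalar strategies.

    best_response(i, x) returns player i's best response when the other
    players' strategies are held fixed at x (a list of strategies).
    Stops when a full sweep changes no strategy by more than tol."""
    x = list(x0)
    for _ in range(max_iters):
        shift = 0.0
        for i in range(len(x)):
            xi_new = best_response(i, x)   # uses the freshest strategies (Gauss-Seidel)
            shift = max(shift, abs(xi_new - x[i]))
            x[i] = xi_new
        if shift <= tol:
            break
    return x
```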

The categorical Gini correlation, $\rho_g$, was proposed by Dang et al. to measure the dependence between a categorical variable, $Y$, and a numerical variable, $X$. It has been shown that $\rho_g$ has more appealing properties than existing dependence measures. In this paper, we develop the jackknife empirical likelihood (JEL) method for $\rho_g$. Confidence intervals for the Gini correlation are constructed without estimating the asymptotic variance. Adjusted and weighted JEL are explored to improve the performance of the standard JEL. Simulation studies show that our methods are competitive with existing methods in terms of coverage accuracy and interval length. The proposed methods are illustrated in applications to two real datasets.
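
The sketch below illustrates, under the usual definition of the categorical Gini correlation in terms of within-category and overall Gini mean differences, how the jackknife pseudo-values underlying the JEL could be computed; the formula used and the helper names are assumptions made for illustration, not code from the paper.

```python
import numpy as np

def gini_mean_difference(x):
    """E|X1 - X2| estimated via the sorted-sample identity."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    return 2.0 * np.sum((2 * np.arange(1, n + 1) - n - 1) * x) / (n * (n - 1))

def categorical_gini_correlation(x, y):
    """rho_g = (Delta - sum_k p_k Delta_k) / Delta, with Delta the overall Gini
    mean difference of x and Delta_k the within-category Gini mean difference."""
    x, y = np.asarray(x, dtype=float), np.asarray(y)
    delta = gini_mean_difference(x)
    within = sum(np.mean(y == k) * gini_mean_difference(x[y == k]) for k in np.unique(y))
    return (delta - within) / delta

def jackknife_pseudo_values(x, y):
    """Leave-one-out pseudo-values; the JEL applies empirical likelihood to these."""
    n = len(x)
    full = categorical_gini_correlation(x, y)
    loo = np.array([categorical_gini_correlation(np.delete(x, i), np.delete(y, i))
                    for i in range(n)])
    return n * full - (n - 1) * loo

# Illustrative data: two groups with different means, so rho_g should be positive.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1, 100), rng.normal(1, 1, 100)])
y = np.repeat([0, 1], 100)
pseudo = jackknife_pseudo_values(x, y)   # input to the (adjusted/weighted) JEL
```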

The domatic number of a graph is the maximum number of vertex disjoint dominating sets that partition the vertex set of the graph. In this paper we consider the fractional variant of this notion. Graphs with fractional domatic number 1 are exactly the graphs that contain an isolated vertex. Furthermore, it is known that all other graphs have fractional domatic number at least 2. In this note we characterize graphs with fractional domatic number 2. More specifically, we show that a graph without isolated vertices has fractional domatic number 2 if and only if it has a vertex of degree 1 or a connected component isomorphic to a 4-cycle. We conjecture that if the fractional domatic number is more than 2, then it is at least 7/3.
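
To make the definition concrete, here is a brute-force LP sketch, suitable only for tiny graphs, that computes the fractional domatic number by putting weights on dominating sets; the enumeration-plus-`linprog` approach is purely illustrative. On the 4-cycle it returns 2, consistent with the characterization above.

```python
from itertools import combinations
from scipy.optimize import linprog

def fractional_domatic_number(n, edges):
    """Maximise the total weight on dominating sets subject to every vertex
    receiving total weight at most 1 (brute force over all vertex subsets)."""
    nbhd = [{v} for v in range(n)]               # closed neighbourhoods
    for u, v in edges:
        nbhd[u].add(v); nbhd[v].add(u)
    vertices = range(n)
    dominating = [set(S) for r in range(1, n + 1) for S in combinations(vertices, r)
                  if all(nbhd[v] & set(S) for v in vertices)]
    # maximise sum_S w_S  s.t.  for every v: sum_{S containing v} w_S <= 1, w >= 0
    c = [-1.0] * len(dominating)
    A = [[1.0 if v in S else 0.0 for S in dominating] for v in vertices]
    b = [1.0] * n
    res = linprog(c, A_ub=A, b_ub=b, bounds=(0, None))
    return -res.fun

# The 4-cycle has fractional domatic number 2, matching the characterization.
print(fractional_domatic_number(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))
```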

We introduce the extremal range, a local statistic for studying the spatial extent of extreme events in random fields on $\mathbb{R}^2$. Conditioned on exceedance of a high threshold at a location $s$, the extremal range at $s$ is the random variable defined as the smallest distance from $s$ to a location where there is a non-exceedance. We leverage tools from excursion-set theory to study distributional properties of the extremal range, propose parametric models and predict the median extremal range at extreme threshold levels. The extremal range captures the rate at which the spatial extent of conditional extreme events scales for increasingly high thresholds, and we relate its distributional properties with the bivariate tail dependence coefficient and the extremal index of time series in classical Extreme-Value Theory. Consistent estimation of the distribution function of the extremal range for stationary random fields is proven. For non-stationary random fields, we implement generalized additive median regression to predict extremal-range maps at very high threshold levels. An application to two large daily temperature datasets, namely reanalyses and climate-model simulations for France, highlights decreasing extremal dependence for increasing threshold levels and reveals strong differences in joint tail decay rates between reanalyses and simulations.
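
As a simple illustration, the sketch below computes an empirical, grid-based version of the extremal range from a simulated field: for each exceedance cell, the Euclidean distance to the nearest non-exceeding cell. The smoothed Gaussian noise field, the threshold quantile, and the cell size are illustrative choices, not the paper's data or estimator.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt, gaussian_filter

def empirical_extremal_range(field, threshold, cell_size=1.0):
    """Empirical extremal range on a gridded field: for every grid cell that
    exceeds the threshold, the distance to the nearest non-exceeding cell."""
    exceed = field > threshold
    # distance_transform_edt gives, for each True cell, the Euclidean distance
    # to the nearest False cell (a non-exceedance), in grid-cell units.
    dist = distance_transform_edt(exceed) * cell_size
    return dist[exceed]                  # one extremal-range value per exceedance site

# Illustrative use on a smoothed Gaussian noise field (not real temperature data).
rng = np.random.default_rng(0)
field = gaussian_filter(rng.standard_normal((200, 200)), sigma=5)
u = np.quantile(field, 0.95)
print(np.median(empirical_extremal_range(field, u)))   # median extremal range at level u
```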

We revisit the task of quantum state redistribution in the one-shot setting, and design a protocol for this task whose communication cost is expressed in terms of a measure of distance from quantum Markov chains. More precisely, the distance is defined in terms of quantum max-relative entropy and quantum hypothesis testing entropy. Our result is the first to operationally connect quantum state redistribution and quantum Markov chains, and provides an operational interpretation of a possible one-shot analogue of quantum conditional mutual information. The communication cost of our protocol is lower than all previously known ones and asymptotically achieves the well-known rate of quantum conditional mutual information. Thus, our work takes a step towards the important open question of a near-optimal characterization of one-shot quantum state redistribution.

There has been significant progress in the study of sampling discretization of integral norms for both a designated finite-dimensional function space and a finite collection of such function spaces (universal discretization). Sampling discretization results turn out to be very useful in various applications, particularly in sampling recovery. Recent sampling discretization results typically provide existence of good sampling points for discretization. In this paper, we show that independent and identically distributed random points provide good universal discretization with high probability. Furthermore, we demonstrate that a simple greedy algorithm based on those points that are good for universal discretization provides excellent sparse recovery results in the square norm.
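
The following toy experiment illustrates the flavor of the result: for a random trigonometric polynomial, the squared $L_2$ norm is compared with its discretization at i.i.d. uniform sample points. The degree, sample size, and normalization are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_trig_poly(N):
    """Random real trigonometric polynomial of degree N on [0, 2*pi)."""
    a, b = rng.standard_normal(N + 1), rng.standard_normal(N + 1)
    def f(x):
        k = np.arange(N + 1)[:, None]
        return a @ np.cos(k * x) + b @ np.sin(k * x)
    # Exact squared L2 norm w.r.t. the normalised Lebesgue measure on [0, 2*pi)
    norm_sq = a[0] ** 2 + 0.5 * (np.sum(a[1:] ** 2) + np.sum(b[1:] ** 2))
    return f, norm_sq

# i.i.d. uniform sample points used to discretize the squared L2 norm.
m, N = 2000, 20
points = rng.uniform(0.0, 2 * np.pi, m)
f, norm_sq = random_trig_poly(N)
discrete = np.mean(f(points) ** 2)
print(discrete, norm_sq)   # the two quantities should be close with high probability
```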

Spectral deferred corrections (SDC) are a class of iterative methods for the numerical solution of ordinary differential equations. SDC can be interpreted as a Picard iteration to solve a fully implicit collocation problem, preconditioned with a low-order method. It has been widely studied for first-order problems, using explicit, implicit or implicit-explicit Euler and other low-order methods as preconditioner. For first-order problems, SDC achieves arbitrary order of accuracy and possesses good stability properties. While numerical results for SDC applied to the second-order Lorentz equations exist, no theoretical results are available for SDC applied to second-order problems. We present an analysis of the convergence and stability properties of SDC using velocity-Verlet as the base method for general second-order initial value problems. Our analysis proves that the order of convergence depends on whether the force in the system depends on the velocity. We also demonstrate that the SDC iteration is stable under certain conditions. Finally, we show that SDC can be computationally more efficient than a simple Picard iteration or a fourth-order Runge-Kutta-Nystr\"om method.
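
For context, here is a minimal velocity-Verlet step of the kind used as the base method above; the handling of a velocity-dependent force via the half-step velocity is an illustrative convention, and connects to the observation that the SDC convergence order depends on whether the force depends on the velocity.

```python
import numpy as np

def velocity_verlet_step(x, v, force, dt):
    """One velocity-Verlet step for x'' = force(x, v).

    The classical scheme assumes the force does not depend on v; if it does,
    the half-step velocity is used here as a simple explicit approximation."""
    v_half = v + 0.5 * dt * force(x, v)
    x_new = x + dt * v_half
    v_new = v_half + 0.5 * dt * force(x_new, v_half)
    return x_new, v_new

# Harmonic oscillator x'' = -x, one period resolved with 100 steps.
x, v, dt = 1.0, 0.0, 2 * np.pi / 100
for _ in range(100):
    x, v = velocity_verlet_step(x, v, lambda x, v: -x, dt)
print(x, v)   # close to the initial condition (1, 0)
```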

Random objects are complex non-Euclidean data taking values in general metric spaces, possibly devoid of any underlying vector-space structure. Such data are becoming increasingly abundant with the rapid advancement of technology. Examples include probability distributions, positive semi-definite matrices, and data on Riemannian manifolds. However, except for regression for object-valued responses with Euclidean predictors and distribution-on-distribution regression, there has been limited development of a general framework for object-valued responses with object-valued predictors in the literature. To fill this gap, we introduce the notion of a weak conditional Fr\'echet mean based on Carleman operators and then propose a global nonlinear Fr\'echet regression model through the reproducing kernel Hilbert space (RKHS) embedding. Furthermore, we establish the relationships between the conditional Fr\'echet mean and the weak conditional Fr\'echet mean for both Euclidean and object-valued data. We also show that the state-of-the-art global Fr\'echet regression of Petersen and M\"uller (2019) emerges as a special case of our method by choosing a linear kernel. We require that the metric space for the predictor admits a reproducing kernel, while the intrinsic geometry of the metric space for the response is utilized to study the asymptotic properties of the proposed estimates. Numerical studies, including extensive simulations and a real application, are conducted to investigate the performance of our estimator in finite samples.
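
To illustrate the special case mentioned above, the sketch below computes the global Fr\'echet regression weights of Petersen and M\"uller (2019) and checks, for Euclidean responses, that the resulting weighted mean coincides with the ordinary least-squares fit; the data and variable names are synthetic placeholders, and the weight formula is the standard one rather than code from the paper.

```python
import numpy as np

def global_frechet_weights(X, x0):
    """Global Frechet regression weights s_i(x0) = 1 + (X_i - Xbar)' Sigma^{-1} (x0 - Xbar),
    with Sigma the (biased) sample covariance of the predictors."""
    X = np.atleast_2d(np.asarray(X, dtype=float))
    xbar = X.mean(axis=0)
    Sigma = np.atleast_2d(np.cov(X, rowvar=False, bias=True))
    return 1.0 + (X - xbar) @ np.linalg.solve(Sigma, np.atleast_1d(x0) - xbar)

# Sanity check with Euclidean responses: the weighted Frechet mean
# (1/n) * sum_i s_i(x0) * Y_i coincides with the ordinary least-squares fit,
# illustrating how a linear kernel recovers global Frechet regression.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 2))
Y = X @ np.array([1.5, -0.7]) + 0.3 + 0.1 * rng.standard_normal(200)
x0 = np.array([0.2, -0.4])
s = global_frechet_weights(X, x0)
frechet_fit = np.mean(s * Y)
design = np.column_stack([np.ones(len(X)), X])
beta = np.linalg.lstsq(design, Y, rcond=None)[0]
ols_fit = np.concatenate([[1.0], x0]) @ beta
print(frechet_fit, ols_fit)   # the two fitted values agree up to numerical error
```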
