
Roman domination in a graph $G$ is a variant of classical domination, defined by means of a so-called Roman dominating function $f\colon V(G)\to \{0,1,2\}$ such that if $f(v)=0$, then the vertex $v$ is adjacent to at least one vertex $w$ with $f(w)=2$. The weight $f(G)$ of a Roman dominating function of $G$ is the sum of the weights of all vertices of $G$, that is, $f(G)=\sum_{u\in V(G)}f(u)$. The Roman domination number $\gamma_R(G)$ is the minimum weight of a Roman dominating function of $G$. In this paper we propose algorithms to compute this parameter that involve the $(\min,+)$ powers of large matrices; these operations have high computational requirements, and the GPU (Graphics Processing Unit) allows us to accelerate them. Specific routines have been developed to efficiently compute the $(\min,+)$ product on the GPU architecture, taking advantage of its computational power. These algorithms allow us to compute the Roman domination number of cylindrical graphs $P_m\Box C_n$, i.e., the Cartesian product of a path and a cycle, in the cases $m=7,8,9$, $n\geq 3$ and $m\geq 10$, $n\equiv 0\pmod 5$. Moreover, we provide a lower bound for the remaining cases $m\geq 10$, $n\not\equiv 0\pmod 5$.
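As a rough illustration of the core operation, the NumPy sketch below computes the $(\min,+)$ matrix product and its repeated powers on the CPU; the transfer matrices for the cylinders $P_m\Box C_n$ and the authors' GPU routines are not reproduced here, and all names are illustrative.

```python
import numpy as np

def min_plus_product(A, B):
    """(min,+) product: C[i, j] = min_k (A[i, k] + B[k, j])."""
    # Broadcast A's rows against B's columns and reduce with min over k.
    return np.min(A[:, :, None] + B[None, :, :], axis=1)

def min_plus_power(A, p):
    """Naive repeated (min,+) multiplication; real implementations would use
    repeated squaring and GPU kernels for large matrices."""
    result = A.copy()
    for _ in range(p - 1):
        result = min_plus_product(result, A)
    return result

# Tiny example; INF marks "no transition" between states.
INF = np.inf
A = np.array([[0.0, 2.0, INF],
              [INF, 0.0, 1.0],
              [3.0, INF, 0.0]])
print(min_plus_power(A, 3))
```

On a GPU the same reduction is typically written as a tiled kernel in which addition replaces the multiply and the minimum replaces the add of an ordinary matrix product.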

Related content

While overparameterization is known to benefit generalization, its impact on Out-Of-Distribution (OOD) detection is less understood. This paper investigates the influence of model complexity on OOD detection. We propose an expected OOD risk metric to evaluate classifiers' confidence on both training and OOD samples. Leveraging Random Matrix Theory, we derive bounds for the expected OOD risk of binary least-squares classifiers applied to Gaussian data. We show that the OOD risk exhibits an infinite peak when the number of parameters is equal to the number of samples, which we associate with the double descent phenomenon. Our experimental study on different OOD detection methods across multiple neural architectures extends our theoretical insights and highlights a double descent curve. Our observations suggest that overparameterization does not necessarily lead to better OOD detection. Using the Neural Collapse framework, we provide insights to better understand this behavior. To facilitate reproducibility, our code will be made publicly available upon publication.
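The following is a minimal, purely illustrative sketch (not the paper's expected OOD risk metric): a binary least-squares classifier fitted to Gaussian class data and scored with a simple confidence proxy on shifted OOD samples. The data distributions, shift, and confidence proxy are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 50                       # samples and parameters (vary d to probe over-parameterization)

# In-distribution data: two Gaussian classes with opposite means +/- mu.
mu = np.ones(d) / np.sqrt(d)
s = rng.choice([-1.0, 1.0], size=n)  # class labels
X = rng.normal(size=(n, d)) + np.outer(s, mu)
y = s

# Least-squares (ridgeless) classifier via the pseudo-inverse.
w = np.linalg.pinv(X) @ y

# OOD samples: a shifted Gaussian the classifier never saw.
X_ood = rng.normal(loc=3.0, size=(n, d))

# Simple confidence proxy: |margin|. An ideal detector is confident on
# training data and unconfident on OOD data.
conf_train = np.abs(X @ w)
conf_ood = np.abs(X_ood @ w)
print("mean train confidence:", conf_train.mean())
print("mean OOD confidence:  ", conf_ood.mean())
```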

We propose a tamed-adaptive Milstein scheme for stochastic differential equations in which the first-order derivatives of the coefficients are locally H\"older continuous of order $\alpha$. We show that the scheme converges in the $L_2$-norm with a rate of $(1+\alpha)/2$ over both finite intervals $[0, T]$ and the infinite interval $(0, +\infty)$, under certain growth conditions on the coefficients.
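For orientation, here is a sketch of a scalar Milstein step with a simple, commonly used drift-taming factor; it is a fixed-step illustration only, not the tamed-adaptive scheme of the paper, and the taming factor and test SDE are assumptions.

```python
import numpy as np

def milstein_tamed(a, b, db, x0, T, n_steps, rng, taming=True):
    """Scalar Milstein step X_{k+1} = X_k + a*h + b*dW + 0.5*b*b'*(dW^2 - h),
    with optional taming of the drift increment to curb blow-up."""
    h = T / n_steps
    x = x0
    for _ in range(n_steps):
        dW = rng.normal(scale=np.sqrt(h))
        drift = a(x) * h
        if taming:
            drift /= 1.0 + np.abs(a(x)) * h   # one common taming factor (illustrative)
        x = x + drift + b(x) * dW + 0.5 * b(x) * db(x) * (dW**2 - h)
    return x

# Example: dX = -X^3 dt + X dW (super-linear drift), with b(x) = x, b'(x) = 1.
rng = np.random.default_rng(1)
samples = [milstein_tamed(lambda x: -x**3, lambda x: x, lambda x: 1.0,
                          x0=1.0, T=1.0, n_steps=1000, rng=rng) for _ in range(500)]
print("sample mean of X_T:", np.mean(samples))
```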

Parameter inference for linear and non-Gaussian state space models is challenging because the likelihood function contains an intractable integral over the latent state variables. While Markov chain Monte Carlo (MCMC) methods provide exact samples from the posterior distribution as the number of samples goes to infinity, they tend to have high computational cost, particularly for observations of a long time series. Variational Bayes (VB) methods are a useful alternative when inference with MCMC methods is computationally expensive. VB methods approximate the posterior density of the parameters by a simple and tractable distribution found through optimisation. In this paper, we propose a novel sequential variational Bayes approach that makes use of the Whittle likelihood for computationally efficient parameter inference in this class of state space models. Our algorithm, which we call Recursive Variational Gaussian Approximation with the Whittle Likelihood (R-VGA-Whittle), updates the variational parameters by processing data in the frequency domain. At each iteration, R-VGA-Whittle requires the gradient and Hessian of the Whittle log-likelihood, which are available in closed form for a wide class of models. Through several examples using a linear Gaussian state space model and a univariate/bivariate non-Gaussian stochastic volatility model, we show that R-VGA-Whittle provides good approximations to posterior distributions of the parameters and is very computationally efficient when compared to asymptotically exact methods such as Hamiltonian Monte Carlo.
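As a small illustration of the frequency-domain ingredient, the sketch below evaluates the Whittle log-likelihood of an AR(1) model from the periodogram; the R-VGA-Whittle updates themselves (gradient and Hessian recursions) are not shown, and the AR(1) spectral density is only an example.

```python
import numpy as np

def periodogram(x):
    """Periodogram I(omega_k) = |DFT(x)|^2 / (2*pi*n) at positive Fourier frequencies."""
    n = len(x)
    dft = np.fft.fft(x)
    freqs = 2.0 * np.pi * np.arange(n) / n
    I = np.abs(dft) ** 2 / (2.0 * np.pi * n)
    k = np.arange(1, (n - 1) // 2 + 1)       # exclude frequency zero and Nyquist
    return freqs[k], I[k]

def ar1_spectral_density(omega, phi, sigma2):
    """AR(1) spectral density: sigma^2 / (2*pi*(1 - 2*phi*cos(w) + phi^2))."""
    return sigma2 / (2.0 * np.pi * (1.0 - 2.0 * phi * np.cos(omega) + phi ** 2))

def whittle_loglik(x, phi, sigma2):
    """Whittle log-likelihood: -sum_k [ log f(omega_k) + I(omega_k)/f(omega_k) ]."""
    omega, I = periodogram(x)
    f = ar1_spectral_density(omega, phi, sigma2)
    return -np.sum(np.log(f) + I / f)

# Simulate an AR(1) series and evaluate the Whittle log-likelihood at the true parameters.
rng = np.random.default_rng(2)
n, phi, sigma2 = 2000, 0.8, 1.0
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.normal(scale=np.sqrt(sigma2))
print(whittle_loglik(x, phi=0.8, sigma2=1.0))
```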

Many models require integrals of high-dimensional functions: for instance, to obtain marginal likelihoods. Such integrals may be intractable, or too expensive to compute numerically. Instead, we can use the Laplace approximation (LA). The LA is exact if the function is proportional to a normal density; its effectiveness therefore depends on the function's true shape. Here, we propose the use of the probabilistic numerical framework to develop a diagnostic for the LA and its underlying shape assumptions, modelling the function and its integral as a Gaussian process and devising a "test" by conditioning on a finite number of function values. The test is decidedly non-asymptotic and is not intended as a full substitute for numerical integration; rather, it is simply intended to test the feasibility of the assumptions underpinning the LA with minimal computation. We discuss approaches to optimize and design the test, apply it to known sample functions, and highlight the challenges of high dimensions.
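A minimal one-dimensional sketch of the LA itself, compared against quadrature, may help fix ideas; it is not the probabilistic-numerical diagnostic proposed here, and the test function is an arbitrary example.

```python
import numpy as np
from scipy import integrate, optimize

def laplace_approx_1d(g):
    """Laplace approximation of the integral of exp(g(x)) over the real line:
    exp(g(x*)) * sqrt(2*pi / -g''(x*)), with g'' estimated by central differences."""
    x_star = optimize.minimize_scalar(lambda x: -g(x)).x
    h = 1e-4
    g2 = (g(x_star + h) - 2.0 * g(x_star) + g(x_star - h)) / h ** 2
    return np.exp(g(x_star)) * np.sqrt(2.0 * np.pi / -g2)

# Example: g(x) = -x^2/2 - x^4/4, mildly non-Gaussian around its mode at 0.
g = lambda x: -0.5 * x ** 2 - 0.25 * x ** 4
exact, _ = integrate.quad(lambda x: np.exp(g(x)), -np.inf, np.inf)
print("Laplace:", laplace_approx_1d(g), " quadrature:", exact)
```

The gap between the two printed values is exactly the kind of departure from the Gaussian shape assumption that the proposed diagnostic is meant to flag before committing to the LA.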

A split graph is a graph whose vertex set can be partitioned into a clique and an independent set. A connected graph $G$ is said to be $t$-admissible if it admits a spanning tree in which the distance between any two adjacent vertices of $G$ is at most $t$. Given a graph $G$, determining the smallest $t$ for which $G$ is $t$-admissible, i.e., the stretch index of $G$, denoted by $\sigma(G)$, is the goal of the $t$-admissibility problem. Split graphs are $3$-admissible and can be partitioned into three subclasses: split graphs with $\sigma = 1$, $2$ or $3$. In this work we consider such a partition while dealing with the problem of coloring the edges of a split graph. Vizing proved that the edges of any graph can be colored with $\Delta$ or $\Delta+1$ colors, where $\Delta$ is its maximum degree, and thus graphs can be classified as Class $1$ or Class $2$, respectively. The edge coloring problem is open for split graphs in general. In previous results, we classified split graphs with $\sigma = 2$, and in this paper we classify and provide an algorithm to color the edges of a subclass of split graphs with $\sigma = 3$.
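As a small illustration of the stretch index, the sketch below checks the stretch of a given spanning tree of a split graph by taking, over all edges $uv$ of $G$, the maximum tree distance between $u$ and $v$; $\sigma(G)$ is the minimum of this quantity over all spanning trees. The toy graph, tree, and helper names are illustrative, and this is not the classification algorithm of the paper.

```python
from collections import deque

def tree_distance(tree_adj, u, v):
    """BFS distance between u and v in the spanning tree."""
    dist = {u: 0}
    queue = deque([u])
    while queue:
        x = queue.popleft()
        if x == v:
            return dist[x]
        for y in tree_adj[x]:
            if y not in dist:
                dist[y] = dist[x] + 1
                queue.append(y)
    raise ValueError("v not reachable: not a spanning tree")

def stretch_of_tree(graph_edges, tree_adj):
    """Max over edges uv of G of the tree distance between u and v."""
    return max(tree_distance(tree_adj, u, v) for u, v in graph_edges)

# Split graph: clique {0,1,2} plus independent set {3,4}, with 3~0 and 4~1.
graph_edges = [(0, 1), (0, 2), (1, 2), (0, 3), (1, 4)]
# A spanning tree rooted at 0; the clique edge (1,2) gets tree distance 2.
tree_adj = {0: [1, 2, 3], 1: [0, 4], 2: [0], 3: [0], 4: [1]}
print(stretch_of_tree(graph_edges, tree_adj))   # prints 2, so this tree certifies 2-admissibility
```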

We prove explicit uniform two-sided bounds for the phase functions of Bessel functions and of their derivatives. As a consequence, we obtain new enclosures for the zeros of Bessel functions and their derivatives in terms of inverse values of some elementary functions. These bounds are valid, with a few exceptions, for all zeros and all Bessel functions with non-negative indices. We provide numerical evidence showing that our bounds either improve or closely match the best previously known ones.
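The sketch below merely illustrates what an enclosure of a Bessel zero means numerically, by bracketing a sign change of $J_\nu$ and refining it with SciPy; the explicit elementary-function bounds of the paper are not reproduced here.

```python
from scipy import special, optimize

def refine_bessel_zero(nu, lo, hi):
    """Given an interval [lo, hi] on which J_nu changes sign,
    refine the enclosed zero with Brent's method."""
    f = lambda x: special.jv(nu, x)
    assert f(lo) * f(hi) < 0, "interval does not bracket a zero"
    return optimize.brentq(f, lo, hi)

# The first positive zero of J_0 lies in (2, 3): j_{0,1} = 2.404825...
print(refine_bessel_zero(0.0, 2.0, 3.0))
# Cross-check against SciPy's tabulated zeros for integer order.
print(special.jn_zeros(0, 1)[0])
```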

For linear inverse problems with Gaussian priors and observation noise, the posterior is Gaussian, with mean and covariance determined by the conditioning formula. We analyse measure approximation problems of finding the best approximation to the posterior in a family of Gaussians with approximate covariance or approximate mean, for Hilbert parameter spaces and finite-dimensional observations. We quantify the error of the approximating Gaussian either with the Kullback-Leibler divergence or the family of R\'{e}nyi divergences. Using the Feldman-Hajek theorem and recent results on reduced-rank operator approximations, we identify optimal solutions to these measure approximation problems. Our results extend those of Spantini et al. (SIAM J. Sci. Comput. 2015) to Hilbertian parameter spaces. In addition, our results show that the posterior differs from the prior only on a subspace of dimension equal to the rank of the Hessian of the negative log-likelihood, and that this subspace is a subspace of the Cameron-Martin space of the prior.
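For the finite-dimensional case, the conditioning formula is easy to state and compute; the sketch below does so with NumPy and also illustrates that the posterior covariance differs from the prior only on a subspace of dimension at most the number of observations (the rank of the Hessian of the negative log-likelihood). The Hilbertian setting and the optimal low-rank approximation results of the paper are not reproduced.

```python
import numpy as np

def gaussian_posterior(A, y, m0, C0, Gamma):
    """Posterior of x | y for y = A x + eps, eps ~ N(0, Gamma), prior x ~ N(m0, C0):
    C_post = (C0^{-1} + A^T Gamma^{-1} A)^{-1},
    m_post = C_post (C0^{-1} m0 + A^T Gamma^{-1} y)."""
    C0_inv = np.linalg.inv(C0)
    Gamma_inv = np.linalg.inv(Gamma)
    C_post = np.linalg.inv(C0_inv + A.T @ Gamma_inv @ A)
    m_post = C_post @ (C0_inv @ m0 + A.T @ Gamma_inv @ y)
    return m_post, C_post

rng = np.random.default_rng(3)
d, k = 5, 2                          # parameter dimension, number of observations
A = rng.normal(size=(k, d))
x_true = rng.normal(size=d)
y = A @ x_true + 0.1 * rng.normal(size=k)
m, C = gaussian_posterior(A, y, m0=np.zeros(d), C0=np.eye(d), Gamma=0.01 * np.eye(k))
# With k < d observations, the prior covariance changes on at most a rank-k subspace:
# I - C_post has only k non-negligible eigenvalues.
print(np.round(np.linalg.eigvalsh(np.eye(d) - C), 3))
```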

We introduce an algebraic concept of the frame for abstract conditional independence (CI) models, together with basic operations with respect to which such a frame should be closed: copying and marginalization. Three standard examples of such frames are (discrete) probabilistic CI structures, semi-graphoids and structural semi-graphoids. We concentrate on those frames which are closed under the operation of set-theoretical intersection because, for these, the respective families of CI models are lattices. This allows one to apply the results from lattice theory and formal concept analysis to describe such families in terms of implications among CI statements. The central concept of this paper is that of self-adhesivity defined in algebraic terms, which is a combinatorial reflection of the self-adhesivity concept studied earlier in the context of polymatroids and information theory. The generalization also leads to a self-adhesivity operator defined on the hyper-level of CI frames. We answer some of the questions related to this approach and raise other open questions. The core of the paper lies in computations. The combinatorial approach to computation might overcome some memory and space limitations of software packages based on polyhedral geometry, in particular if SAT solvers are utilized. We characterize some basic CI families over 4 variables in terms of canonical implications among CI statements. We apply our method in the information-theoretical context to the task of entropic region demarcation over 5 variables.
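To make the notion of implications among CI statements concrete, the sketch below computes the semi-graphoid closure of a small set of CI statements by repeatedly applying symmetry, decomposition, weak union and contraction; this brute-force closure is only an illustration and is unrelated to the lattice-theoretic, formal-concept-analysis and SAT-based machinery of the paper.

```python
from itertools import combinations, chain

def subsets(s):
    s = list(s)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

def semigraphoid_closure(statements):
    """Close a set of CI statements (A, B | C), stored as triples of frozensets,
    under the semi-graphoid axioms: symmetry, decomposition, weak union, contraction."""
    closed = set(statements)
    changed = True
    while changed:
        changed = False
        new = set()
        for (A, B, C) in closed:
            new.add((B, A, C))                                  # symmetry
            for D in map(frozenset, subsets(B)):
                if D and D != B:
                    new.add((A, B - D, C))                      # decomposition
                    new.add((A, B - D, C | D))                  # weak union
        for (A, B, Z) in closed:                                # contraction:
            for (A2, D, C) in closed:                           # (A,B|CD) & (A,D|C) -> (A,BD|C)
                if A2 == A and Z == C | D and not (D & B) and not (D & C):
                    new.add((A, B | D, C))
        if not new <= closed:
            closed |= new
            changed = True
    return closed

def ci(a, b, c=()):
    return (frozenset(a), frozenset(b), frozenset(c))

# Example over variables {1,2,3}: from (1 _||_ 2 | 3) and (1 _||_ 3),
# contraction yields (1 _||_ {2,3}) and all of its consequences.
start = {ci({1}, {2}, {3}), ci({1}, {3})}
for A, B, C in sorted(semigraphoid_closure(start), key=str):
    print(set(A), "_||_", set(B), "|", set(C))
```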

We give a fault tolerant construction for error correction and computation using two punctured quantum Reed-Muller (PQRM) codes. In particular, we consider the $[[127,1,15]]$ self-dual doubly-even code that has transversal Clifford gates (CNOT, H, S) and the triply-even $[[127,1,7]]$ code that has transversal T and CNOT gates. We show that code switching between these codes can be accomplished using Steane error correction. For fault-tolerant ancilla preparation we utilize the low-depth hypercube encoding circuit along with different code automorphism permutations in different ancilla blocks, while decoding is handled by the high-performance classical successive cancellation list decoder. In this way, every logical operation in this universal gate set is amenable to extended rectangle analysis. The CNOT exRec has a failure rate approaching $10^{-9}$ at $10^{-3}$ circuit-level depolarizing noise. Furthermore, we map the PQRM codes to a 2D layout suitable for implementation in arrays of trapped atoms and try to reduce the circuit depth of parallel atom movements in state preparation. The resulting protocol is strictly fault-tolerant for the $[[127,1,7]]$ code and practically fault-tolerant for the $[[127,1,15]]$ code. Moreover, each patch requires a permutation consisting of $7$ sub-hypercube swaps only. These are swaps of rectangular grids in our 2D hypercube layout and can be naturally created with acousto-optic deflectors (AODs). Lastly, we show for the family of $[[2^{2r},{2r\choose r},2^r]]$ QRM codes that the entire logical Clifford group can be achieved using only permutations, transversal gates, and fold-transversal gates.
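For background only: the PQRM codes above are obtained from classical Reed-Muller codes of length $2^7=128$ by puncturing and a CSS-type construction, none of which is shown here. The sketch below builds classical RM$(r,m)$ generator matrices via the standard $(u,u+v)$ recursion; all further steps of the fault-tolerant protocol are omitted.

```python
import numpy as np

def rm_generator(r, m):
    """Generator matrix of the classical Reed-Muller code RM(r, m) via the
    (u, u+v) recursion: G(r,m) = [[G(r,m-1), G(r,m-1)], [0, G(r-1,m-1)]]."""
    if r == 0:
        return np.ones((1, 2 ** m), dtype=int)       # repetition code
    if r == m:
        top = rm_generator(m - 1, m)                 # even-weight code of dimension 2^m - 1
        e = np.zeros((1, 2 ** m), dtype=int)
        e[0, 0] = 1                                  # any odd-weight vector completes the full space
        return np.vstack([top, e])
    G1 = rm_generator(r, m - 1)
    G2 = rm_generator(r - 1, m - 1)
    upper = np.hstack([G1, G1])
    lower = np.hstack([np.zeros_like(G2), G2])
    return np.vstack([upper, lower])

# RM(1, 3) is the classical [8, 4, 4] code; the PQRM codes of the paper arise from
# length-128 RM codes after puncturing one coordinate (not shown).
G = rm_generator(1, 3)
print(G.shape)   # (4, 8)
print(G)
```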

Consider a finite directed graph without cycles in which the arrows are weighted. We present an algorithm for the computation of a new distance, called path-length-weighted distance, which has proven useful for graph analysis in the context of fraud detection. The idea is that the new distance explicitly takes into account the size of the paths in the calculations. Thus, although our algorithm is based on arguments similar to those at work for the Bellman-Ford and Dijkstra methods, it is in fact essentially different. We lay out the appropriate framework for its computation, showing the constraints and requirements for its use, along with some illustrative examples.
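The abstract does not spell out the distance, so the sketch below only illustrates a length-aware shortest-path computation on a weighted DAG: each arc weight is scaled by a coefficient $\alpha(i)$ depending on its position $i$ along the path, and a dynamic program over (node, number of arcs) states minimizes the total. The coefficient function and the toy DAG are assumptions for the example, not the authors' definition or algorithm.

```python
from collections import defaultdict
from math import inf

def path_length_weighted_distance(n, arcs, source, target, alpha):
    """Minimum over all source->target paths of sum_i alpha(i) * w_i, where w_i is
    the weight of the i-th arc on the path. Works on a DAG with nodes 0..n-1;
    dp[k][v] = best cost of reaching v using exactly k arcs."""
    adj = defaultdict(list)
    for u, v, w in arcs:
        adj[u].append((v, w))
    dp = [[inf] * n for _ in range(n)]   # dp[k][v]
    dp[0][source] = 0.0
    for k in range(1, n):                # simple paths in a DAG use at most n-1 arcs
        for u in range(n):
            if dp[k - 1][u] < inf:
                for v, w in adj[u]:
                    cost = dp[k - 1][u] + alpha(k) * w
                    if cost < dp[k][v]:
                        dp[k][v] = cost
    return min(dp[k][target] for k in range(n))

# Toy DAG: 0 -> 1 -> 3 and 0 -> 2 -> 3, with alpha(i) = 1/i discounting later arcs.
arcs = [(0, 1, 1.0), (1, 3, 4.0), (0, 2, 3.0), (2, 3, 1.0)]
print(path_length_weighted_distance(4, arcs, 0, 3, alpha=lambda i: 1.0 / i))   # 3.0 via 0-1-3
```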
