
Graph burning is a graph process that models the spread of social contagion. Initially, all the vertices of a graph $G$ are unburnt. At each step, an unburnt vertex is set on fire and the fire from the vertices burnt in previous steps spreads to their adjacent unburnt vertices. This process continues until all the vertices are burnt. The burning number $b(G)$ of the graph $G$ is the minimum number of steps required to burn all of its vertices. The burning number conjecture of Bonato et al. states that for a connected graph $G$ of order $n$, $b(G) \leq \lceil \sqrt{n} \rceil$. It is easy to observe that in order to burn a graph it is enough to burn a spanning tree of it. Hence it suffices to prove that $b(T) \leq \lceil \sqrt{n} \rceil$ for every tree $T$ of order $n$, where $T$ is a spanning tree of $G$. It was proved in 2018 that $b(T) \leq \lceil \sqrt{n + n_2 + 1/4} + 1/2 \rceil$ for a tree $T$, where $n_2$ is the number of degree-$2$ vertices in $T$. In this paper, we provide an algorithm to burn a tree and use it to improve the existing bound. We prove that $b(T)\leq \lceil \sqrt{n + n_2 + 8}\rceil -1$, which improves the existing bound for $n\geq 50$. We also provide an algorithm to burn some subclasses of binary trees and prove the burning number conjecture for these subclasses.
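To make the burning process concrete, the following small Python sketch (an illustration of the definition, not the paper's tree-burning algorithm) simulates a candidate burning sequence and brute-forces the burning number of a tiny graph; the function names and the adjacency-list representation are illustrative choices.

```python
from itertools import permutations

def burns(adj, sequence):
    """Simulate the burning process: at step i the i-th source is set on fire,
    after the fire has spread one hop from every already-burnt vertex.
    Returns True if every vertex of the graph is burnt at the end."""
    burnt = set()
    for source in sequence:
        # fire spreads from all currently burnt vertices to their neighbours
        burnt |= {w for v in burnt for w in adj[v]}
        burnt.add(source)          # the new source is put on fire
    return len(burnt) == len(adj)

def burning_number(adj):
    """Brute-force burning number of a small graph given as adjacency lists."""
    vertices = list(adj)
    for k in range(1, len(vertices) + 1):
        if any(burns(adj, seq) for seq in permutations(vertices, k)):
            return k

# Path on 4 vertices (a tree): sources (1, 3) burn it in 2 steps.
path4 = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(burning_number(path4))   # -> 2
```

For the path on four vertices the sketch returns $2 = \lceil\sqrt{4}\rceil$, matching the conjectured bound.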

Related Content

Linear codes are widely studied in coding theory as they have nice applications in distributed storage, combinatorics, lattices, cryptography and so on. Constructing linear codes with desirable properties is an interesting research topic. In this paper, based on the augmentation technique, we present two families of linear codes constructed from functions over finite fields. The first family of linear codes is constructed from monomial functions over finite fields. Their locality is determined and the weight distributions of two subfamilies of the codes are also given. An infinite family of locally recoverable codes that are at least almost optimal, as well as some optimal locally recoverable codes, are obtained from these linear codes. In particular, the two subfamilies of the codes are proved to be self-orthogonal and either optimally or almost optimally extendable. The second family of linear codes is constructed from weakly regular bent functions over finite fields, and its weight distribution is determined. This family of codes is proved to have locality 3 in some cases and is conjectured to have locality 2 in the remaining cases. In particular, two families of optimal locally recoverable codes are derived from these linear codes. Besides, this family of codes is also proved to be self-orthogonal and either optimally or almost optimally extendable.
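As generic background on the weight distributions mentioned above (and unrelated to the specific constructions in the paper), the following sketch enumerates the codewords generated by a binary generator matrix and tallies their Hamming weights; it works over GF(2) only and is meant purely as an illustration of the notion.

```python
from itertools import product

def weight_distribution(G):
    """Weight distribution of the binary linear code generated by the rows
    of G: counts[w] = number of codewords of Hamming weight w."""
    k, n = len(G), len(G[0])
    counts = [0] * (n + 1)
    for message in product((0, 1), repeat=k):
        codeword = [sum(m * g for m, g in zip(message, col)) % 2
                    for col in zip(*G)]
        counts[sum(codeword)] += 1
    return counts

# The [7,4] Hamming code: weight enumerator 1 + 7z^3 + 7z^4 + z^7.
G = [[1, 0, 0, 0, 1, 1, 0],
     [0, 1, 0, 0, 1, 0, 1],
     [0, 0, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]
print(weight_distribution(G))  # -> [1, 0, 0, 7, 7, 0, 0, 1]
```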

We study least-squares trace regression when the parameter is the sum of an $r$-low-rank matrix and an $s$-sparse matrix, and a fraction $\epsilon$ of the labels is corrupted. For subgaussian distributions and feature-dependent noise, we highlight three design properties that are needed, each derived from a different process inequality: a "product process inequality", "Chevet's inequality" and a "multiplier process inequality". These properties simultaneously handle additive decomposition, label contamination and design-noise interaction. They imply the near-optimality of a tractable estimator with respect to the effective dimensions $d_{\mathrm{eff},r}$ and $d_{\mathrm{eff},s}$ of the low-rank and sparse components, $\epsilon$ and the failure probability $\delta$. The near-optimal rate is $\mathsf{r}(n,d_{\mathrm{eff},r}) + \mathsf{r}(n,d_{\mathrm{eff},s}) + \sqrt{(1+\log(1/\delta))/n} + \epsilon\log(1/\epsilon)$, where $\mathsf{r}(n,d_{\mathrm{eff},r})+\mathsf{r}(n,d_{\mathrm{eff},s})$ is the optimal rate on average in the uncontaminated setting. Our estimator is adaptive to $(s,r,\epsilon,\delta)$ and, for a fixed absolute constant $c>0$, it attains the stated rate with probability $1-\delta$ uniformly over all $\delta\ge\exp(-cn)$. Without matrix decomposition, our analysis also entails optimal bounds for a robust estimator adapted to the noise variance. Our estimators are based on "sorted" versions of Huber's loss. We present simulations matching the theory; in particular, they reveal the superiority of the "sorted" Huber losses over the classical Huber loss.
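For reference, the classical Huber loss with threshold $\tau > 0$, which the "sorted" losses mentioned above build on, is the standard

\[
\rho_\tau(u) =
\begin{cases}
\tfrac{1}{2}u^{2}, & |u| \le \tau,\\[2pt]
\tau|u| - \tfrac{1}{2}\tau^{2}, & |u| > \tau .
\end{cases}
\]

Roughly, the "sorted" versions let the threshold applied to a residual depend on the rank of its magnitude; that description is only a gloss, and the precise definition used in the paper is not reproduced here.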

Gradient-enhanced Kriging (GE-Kriging) is a well-established surrogate modelling technique for approximating expensive computational models. However, it tends to become impractical for high-dimensional problems due to the size of the inherent correlation matrix and the associated high-dimensional hyper-parameter tuning problem. To address these issues, a new method called sliced GE-Kriging (SGE-Kriging) is developed in this paper to reduce both the size of the correlation matrix and the number of hyper-parameters. We first split the training sample set into multiple slices and invoke Bayes' theorem to approximate the full likelihood function by a sliced likelihood function, in which multiple small correlation matrices, rather than one large one, describe the correlation structure of the sample set. Then, we replace the original high-dimensional hyper-parameter tuning problem with a low-dimensional counterpart by learning the relationship between the hyper-parameters and derivative-based global sensitivity indices. The performance of SGE-Kriging is finally validated by means of numerical experiments with several benchmarks and a high-dimensional aerodynamic modelling problem. The results show that the SGE-Kriging model achieves accuracy and robustness comparable to the standard model at a much lower training cost. The benefits are most evident for high-dimensional problems with tens of variables.
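The slicing idea can be illustrated with a deliberately simplified sketch: a zero-mean Gaussian-process (simple Kriging) negative log-likelihood is evaluated either with one large correlation matrix or as a sum over slices, each slice needing only a small matrix. This omits the gradient enhancement, the regression trend and the paper's exact sliced-likelihood derivation; the kernel, function names and data below are illustrative assumptions.

```python
import numpy as np

def gauss_kernel(X, theta):
    """Gaussian correlation matrix R_ij = exp(-sum_k theta_k (x_ik - x_jk)^2)."""
    diff = X[:, None, :] - X[None, :, :]
    return np.exp(-np.einsum('ijk,k->ij', diff**2, theta))

def neg_log_likelihood(X, y, theta, nugget=1e-10):
    """Concentrated zero-mean Gaussian-process negative log-likelihood."""
    n = len(y)
    R = gauss_kernel(X, theta) + nugget * np.eye(n)
    L = np.linalg.cholesky(R)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))   # R^{-1} y
    sigma2 = (y @ alpha) / n                               # concentrated variance
    log_det = 2.0 * np.sum(np.log(np.diag(L)))
    return 0.5 * (n * np.log(sigma2) + log_det)

def sliced_neg_log_likelihood(X, y, theta, n_slices):
    """Sliced approximation: a sum of slice-wise likelihoods, each requiring
    only a small correlation matrix instead of one n-by-n matrix."""
    return sum(neg_log_likelihood(Xs, ys, theta)
               for Xs, ys in zip(np.array_split(X, n_slices),
                                 np.array_split(y, n_slices)))

# Illustrative comparison on random data (not a real Kriging fit).
rng = np.random.default_rng(0)
X, y = rng.normal(size=(200, 10)), rng.normal(size=200)
theta = np.full(10, 0.5)
print(neg_log_likelihood(X, y, theta), sliced_neg_log_likelihood(X, y, theta, 5))
```

Each slice only requires factorizing a matrix of size roughly $n/\text{(number of slices)}$, which is the source of the training-cost reduction described above.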

The mutual-visibility problem in a graph $G$ asks for the cardinality of a largest set of vertices $S\subseteq V(G)$ such that for any two vertices $x,y\in S$ there is a shortest $x,y$-path $P$ whose internal vertices all lie outside $S$. In this case, $x$ and $y$ are said to be visible with respect to $S$, or $S$-visible for short. Variations of this problem, based on extending the visibility requirement to vertices inside and/or outside $S$, are known as the total, outer and dual mutual-visibility problems. This work focuses on the corresponding four visibility parameters in graphs of diameter two, establishing bounds and/or closed formulae for them. The mutual-visibility problem in the Cartesian product of two complete graphs is equivalent to (an instance of) the celebrated Zarankiewicz problem. Here we study the dual and outer mutual-visibility problems for the Cartesian product of two complete graphs, as well as all the mutual-visibility problems for the direct product of such graphs. We also study all the mutual-visibility problems for the line graphs of complete and complete bipartite graphs. As a consequence of this study, we present several relationships between the mentioned problems and some instances of the classical Tur\'an problem. Moreover, we study the visibility problems for cographs and for several non-trivial diameter-two graphs of minimum size.
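To pin down the definition, the following brute-force Python sketch (illustrative only, not taken from the paper) checks whether a given set $S$ is a mutual-visibility set: for each pair in $S$ it compares the distance in $G$ with the distance when the remaining vertices of $S$ are forbidden as internal vertices.

```python
from collections import deque
from itertools import combinations

def bfs_dist(adj, source, blocked=frozenset()):
    """BFS distances from `source`, never entering vertices in `blocked`."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            if w not in dist and w not in blocked:
                dist[w] = dist[v] + 1
                queue.append(w)
    return dist

def is_mutual_visibility_set(adj, S):
    """Every pair x, y in S must be S-visible: some shortest x,y-path in the
    whole graph has all of its internal vertices outside S."""
    for x, y in combinations(S, 2):
        d = bfs_dist(adj, x).get(y)          # distance in G
        blocked = set(S) - {x, y}            # forbidden internal vertices
        if d is None or bfs_dist(adj, x, blocked).get(y) != d:
            return False
    return True

# 4-cycle 0-1-2-3-0: {0, 1, 2} is a mutual-visibility set, all four vertices are not.
c4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
print(is_mutual_visibility_set(c4, {0, 1, 2}))     # -> True
print(is_mutual_visibility_set(c4, {0, 1, 2, 3}))  # -> False
```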

In the Bayes paradigm and for a given loss function, we propose the construction of a new type of posterior distribution, which extends the classical Bayes one, for estimating the law of an $n$-sample. The loss functions we have in mind are based on the total variation and Hellinger distances as well as some $\mathbb{L}_{j}$-distances. We prove that, with a probability close to one, this new posterior distribution concentrates its mass in a neighbourhood of the law of the data, for the chosen loss function, provided that this law belongs to the support of the prior or, at least, lies close enough to it. We therefore establish that the new posterior distribution enjoys some robustness properties with respect to a possible misspecification of the prior, or more precisely, of its support. For the total variation and squared Hellinger losses, we also show that the posterior distribution keeps its concentration properties when the data are only independent, hence not necessarily i.i.d., provided that most of their marginals, or the average of these, are close enough to some probability distribution around which the prior puts enough mass. The posterior distribution is therefore also stable with respect to the equidistribution assumption. We illustrate these results by several applications. We consider the problems of estimating a location parameter or both the location and the scale of a density in a nonparametric framework. Finally, we also tackle the problem of estimating a density, with the squared Hellinger loss, in a high-dimensional parametric model under some sparsity conditions. The results established in this paper are non-asymptotic and provide, as much as possible, explicit constants.
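For concreteness, the total variation and (squared) Hellinger losses referred to above are based on the following standard distances between probability measures $P$ and $Q$ with densities $p$ and $q$ with respect to a dominating measure $\mu$ (the normalization conventions may differ from those used in the paper):

\[
\|P - Q\|_{TV} = \frac{1}{2}\int |p - q|\, d\mu,
\qquad
h^{2}(P, Q) = \frac{1}{2}\int \bigl(\sqrt{p} - \sqrt{q}\,\bigr)^{2} d\mu .
\]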

Factor analysis based on the multivariate $t$ distribution ($t$fa) is a useful robust tool for extracting common factors from heavy-tailed or contaminated data. However, $t$fa is only applicable to vector data. When $t$fa is applied to matrix data, it is common to first vectorize the matrix observations. This introduces two challenges for $t$fa: (i) the inherent matrix structure of the data is broken, and (ii) robustness may be lost, since vectorizing matrix data typically results in a high data dimension, which can easily lead to the breakdown of $t$fa. To address these issues, starting from the intrinsic matrix structure of the data, a novel robust factor analysis model, namely bilinear factor analysis built on the matrix-variate $t$ distribution ($t$bfa), is proposed in this paper. The novelty is that it can simultaneously extract common factors for both the row and column variables of interest from heavy-tailed or contaminated matrix data. Two efficient algorithms for maximum likelihood estimation of $t$bfa are developed. A closed-form expression for the Fisher information matrix, used to assess the accuracy of the parameter estimates, is derived. Empirical studies are conducted to investigate the proposed $t$bfa model and compare it with related competitors. The results demonstrate the superiority and practicality of $t$bfa. Importantly, $t$bfa exhibits a significantly higher breakdown point than $t$fa, making it more suitable for matrix data.
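As a point of reference, a generic bilinear (two-sided) factor model for $p \times q$ matrix observations $\mathbf{X}_1,\dots,\mathbf{X}_n$ has the form below, with a mean matrix $\mathbf{M}$, row loadings $\mathbf{A}$ ($p \times k$), column loadings $\mathbf{B}$ ($q \times r$), latent factor matrices $\mathbf{F}_i$ ($k \times r$) and matrix-variate errors $\mathbf{E}_i$; the exact $t$bfa specification, in particular the matrix-variate $t$ law of $\mathbf{E}_i$ and its degrees of freedom, is as defined in the paper and may differ in details from this generic form.

\[
\mathbf{X}_i = \mathbf{M} + \mathbf{A}\,\mathbf{F}_i\,\mathbf{B}^{\top} + \mathbf{E}_i, \qquad i = 1, \dots, n .
\]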

There is a folkloric belief that a depth-$\Theta(m)$ quantum circuit is needed to estimate the trace of the product of $m$ density matrices (i.e., a multivariate trace), a subroutine crucial to applications in condensed matter and quantum information science. We prove that this belief is overly conservative by constructing a constant quantum-depth circuit for the task, inspired by the method of Shor error correction. Furthermore, our circuit demands only local gates in a two-dimensional layout: we show how to implement it in a highly parallelized way on an architecture similar to that of Google's Sycamore processor. With these features, our algorithm brings the central task of multivariate trace estimation closer to the capabilities of near-term quantum processors. We instantiate the latter application with a theorem on estimating nonlinear functions of quantum states with "well-behaved" polynomial approximations.
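For context, the multivariate trace in question is $\operatorname{Tr}(\rho_1\rho_2\cdots\rho_m)$, and the standard identity behind (generalized) swap-test estimators rewrites it as the expectation of a cyclic-shift operator; this identity is recalled here only as background and is not the paper's constant-depth construction.

\[
\operatoratorname{Tr}\!\left(\rho_1 \rho_2 \cdots \rho_m\right)
= \operatorname{Tr}\!\bigl(S_m\,(\rho_1 \otimes \rho_2 \otimes \cdots \otimes \rho_m)\bigr),
\qquad
S_m\,\bigl(|\psi_1\rangle \otimes \cdots \otimes |\psi_m\rangle\bigr) = |\psi_2\rangle \otimes \cdots \otimes |\psi_m\rangle \otimes |\psi_1\rangle .
\]

The opposite orientation of the cyclic shift yields the trace of the product in reversed order.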

We study effective randomness-preserving transformations of path-incompressible trees. Some path-incompressible trees with infinitely many paths do not compute perfect path-random trees with computable oracle-use. Sparse perfect path-incompressible trees can be effectively densified, almost surely. We characterize the branching density of path-random trees.

The coupling effects in multiphysics processes are often neglected when designing multiscale methods. The coupling may be described by a non-positive-definite operator, which in turn brings significant challenges to multiscale simulations. In this paper, we develop a regularized coupling multiscale method based on the generalized multiscale finite element method (GMsFEM) to solve coupled thermomechanical problems; we refer to it as the coupling generalized multiscale finite element method (CGMsFEM). The method consists in defining the coupling multiscale basis functions through local regularized coupling spectral problems in each coarse-grid block, which can be implemented by a novel design of two relaxation parameters. Compared to the standard GMsFEM, the proposed method not only accurately captures the multiscale coupling correlation effects of multiphysics problems but also greatly improves computational efficiency with fewer multiscale basis functions. In addition, the convergence analysis is established and optimal error estimates are derived, where the upper bound on the error is independent of the magnitude of the relaxation coefficient. Several numerical examples with periodic microstructure, random microstructure, and random material coefficients are presented to validate the theoretical analysis. The numerical results demonstrate that CGMsFEM exhibits better robustness and efficiency than the uncoupled GMsFEM.

The remarkable practical success of deep learning has revealed some major surprises from a theoretical perspective. In particular, simple gradient methods easily find near-optimal solutions to non-convex optimization problems, and despite giving a near-perfect fit to training data without any explicit effort to control model complexity, these methods exhibit excellent predictive accuracy. We conjecture that specific principles underlie these phenomena: that overparametrization allows gradient methods to find interpolating solutions, that these methods implicitly impose regularization, and that overparametrization leads to benign overfitting. We survey recent theoretical progress that provides examples illustrating these principles in simpler settings. We first review classical uniform convergence results and why they fall short of explaining aspects of the behavior of deep learning methods. We give examples of implicit regularization in simple settings, where gradient methods lead to minimal norm functions that perfectly fit the training data. Then we review prediction methods that exhibit benign overfitting, focusing on regression problems with quadratic loss. For these methods, we can decompose the prediction rule into a simple component that is useful for prediction and a spiky component that is useful for overfitting but, in a favorable setting, does not harm prediction accuracy. We focus specifically on the linear regime for neural networks, where the network can be approximated by a linear model. In this regime, we demonstrate the success of gradient flow, and we consider benign overfitting with two-layer networks, giving an exact asymptotic analysis that precisely demonstrates the impact of overparametrization. We conclude by highlighting the key challenges that arise in extending these insights to realistic deep learning settings.
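As a toy illustration of the "minimal norm functions that perfectly fit the training data," the sketch below runs gradient descent from zero on an overparametrized least-squares problem and compares the result with the explicit minimum-$\ell_2$-norm interpolant; this is a standard fact about linear least squares, offered as background rather than an excerpt from the survey.

```python
import numpy as np

# Overparametrized linear regression: more parameters (d) than samples (n),
# so infinitely many weight vectors interpolate the data exactly.
rng = np.random.default_rng(0)
n, d = 20, 100
X = rng.normal(size=(n, d))
y = rng.normal(size=n)

# Gradient descent on the squared loss, started from zero.
w = np.zeros(d)
lr = 0.5 / np.linalg.norm(X, 2) ** 2        # step size below 2 / (largest singular value)^2
for _ in range(20000):
    w -= lr * X.T @ (X @ w - y)

# Minimum-l2-norm interpolant, computed via the pseudoinverse.
w_min = np.linalg.pinv(X) @ y

print(np.max(np.abs(X @ w - y)))            # ~0: perfect fit (interpolation)
print(np.linalg.norm(w - w_min))            # ~0: GD converges to the min-norm solution
```

Because gradient descent started at zero never leaves the row space of $X$, it converges to the interpolant of smallest Euclidean norm, which is the implicit regularization effect discussed above.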
