
We study upper bounds on the size of optimum locating-total dominating sets in graphs. A set $S$ of vertices of a graph $G$ is a locating-total dominating set if every vertex of $G$ has a neighbor in $S$, and if any two vertices outside $S$ have distinct neighborhoods within $S$. The smallest size of such a set is denoted by $\gamma^L_t(G)$. It has been conjectured that $\gamma^L_t(G)\leq\frac{2n}{3}$ holds for every twin-free graph $G$ of order $n$ without isolated vertices. We prove that the conjecture holds for cobipartite graphs, split graphs, block graphs and subcubic graphs.
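To make the two defining conditions concrete, here is a minimal Python check of whether a vertex set is locating-total dominating; the graph representation (a dict of adjacency sets) is our own choice for illustration.

```python
from itertools import combinations

def is_locating_total_dominating(adj, S):
    """Check whether S is a locating-total dominating set of the graph
    given by adjacency sets `adj` (dict: vertex -> set of neighbors)."""
    S = set(S)
    # Total domination: every vertex of G has a neighbor in S.
    if any(not (adj[v] & S) for v in adj):
        return False
    # Location: vertices outside S have pairwise distinct neighborhoods in S.
    outside = [v for v in adj if v not in S]
    return all(adj[u] & S != adj[v] & S for u, v in combinations(outside, 2))

# The 4-cycle 0-1-2-3-0: S = {0, 1} totally dominates, and the outside
# vertices are separated, since N(2) & S = {1} while N(3) & S = {0}.
C4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(is_locating_total_dominating(C4, {0, 1}))  # True, so gamma_t^L(C4) = 2
```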

Related Content

The fractional list packing number $\chi_{\ell}^{\bullet}(G)$ of a graph $G$ is a graph invariant that has recently arisen from the study of disjoint list-colourings. It measures how large the lists of a list-assignment $L:V(G)\rightarrow 2^{\mathbb{N}}$ need to be to ensure the existence of a `perfectly balanced' probability distribution on proper $L$-colourings, i.e., such that at every vertex $v$, every colour appears with equal probability $1/|L(v)|$. In this work we give various bounds on $\chi_{\ell}^{\bullet}(G)$, which admit strengthenings for correspondence and local-degree versions. As a corollary, we improve theorems on the related notion of flexible list colouring. In particular we study Cartesian products and $d$-degenerate graphs, and we prove that $\chi_{\ell}^{\bullet}(G)$ is bounded from above by the pathwidth of $G$ plus one. The correspondence analogue of the latter is false for treewidth instead of pathwidth.

We prove that multilevel Picard approximations are capable of approximating solutions of semilinear heat equations in the $L^{p}$-sense, $p\in [2,\infty)$, in the case of gradient-dependent, Lipschitz-continuous nonlinearities, in the sense that the computational effort of the multilevel Picard approximations grows at most polynomially in both the dimension $d$ and the reciprocal $1/\epsilon$ of the prescribed accuracy $\epsilon$.
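For concreteness (a representative form only; sign and time-direction conventions vary in this literature), the PDEs in question can be written as terminal value problems

$$\partial_t u(t,x) + \Delta_x u(t,x) + f\bigl(u(t,x), \nabla_x u(t,x)\bigr) = 0, \qquad u(T,x) = g(x), \quad x \in \mathbb{R}^d,$$

with $f$ Lipschitz continuous in both arguments; gradient dependence means $f$ may depend on $\nabla_x u$, not just on $u$.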

Approximating a univariate function on the interval $[-1,1]$ with a polynomial is among the most classical problems in numerical analysis. When the function evaluations come with noise, a least-squares fit is known to reduce the effect of noise as more samples are taken. The generic algorithm for the least-squares problem requires $O(Nn^2)$ operations, where $N+1$ is the number of sample points and $n$ is the degree of the polynomial approximant. This algorithm is unstable when $n$ is large, for example $n\gg \sqrt{N}$ for equispaced sample points. In this study, we blend numerical analysis and statistics to introduce a stable and fast $O(N\log N)$ algorithm called NoisyChebtrunc, based on Chebyshev interpolation. It has the same error-reduction effect as least squares, and the convergence is spectral until the error reaches $O(\sigma \sqrt{{n}/{N}})$, where $\sigma$ is the noise level, after which the error continues to decrease at the Monte Carlo rate $O(1/\sqrt{N})$. To determine the polynomial degree, NoisyChebtrunc employs a statistical criterion, namely Mallows' $C_p$. We analyze the variance of NoisyChebtrunc and its concentration, in the infinity norm, around the underlying noiseless function. These results show that, with high probability, the infinity-norm error is bounded by a small constant times $\sigma \sqrt{{n}/{N}}$ when the noise is independent and follows a subgaussian or subexponential distribution. We illustrate the performance of NoisyChebtrunc with numerical experiments.
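A minimal sketch of the underlying idea, under our own simplifications: interpolate the noisy samples at Chebyshev points and truncate the series at a hand-picked degree $n$ (the paper selects $n$ via Mallows' $C_p$, and achieves $O(N\log N)$ via a DCT-style transform rather than the $O(N^2)$ fit used here).

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def noisy_cheb_truncate(f, N, n, sigma=0.0, rng=None):
    """Interpolate noisy samples at N+1 Chebyshev points, then truncate
    the Chebyshev series at degree n (fixed by hand in this sketch)."""
    rng = np.random.default_rng(rng)
    x = C.chebpts2(N + 1)                  # Chebyshev points of the 2nd kind
    y = f(x) + sigma * rng.standard_normal(x.shape)
    coeffs = C.chebfit(x, y, N)            # degree-N interpolant (O(N^2) here)
    return C.Chebyshev(coeffs[: n + 1])    # truncated degree-n approximant

# Example: f(x) = exp(x), N = 500 noisy samples, truncation degree n = 10.
p = noisy_cheb_truncate(np.exp, N=500, n=10, sigma=1e-2, rng=0)
xs = np.linspace(-1, 1, 201)
print(np.max(np.abs(p(xs) - np.exp(xs))))  # error of order sigma*sqrt(n/N)
```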

The aim of this study is to establish a general transformation matrix between B-spline surfaces and ANCF surface elements, continuing earlier work on the conversion between the ANCF and B-spline surfaces. In this paper, a general transformation matrix between Bezier surfaces and ANCF surface elements is established; this matrix essentially describes the linear relationship between the ANCF and Bezier surface descriptions. Moreover, the general transformation matrix can help to improve the efficiency of transferring a distorted configuration from the CAA back to the CAD, an urgent requirement in engineering practice. In addition, a special Bezier surface control polygon is given in this study. The Bezier surface described by this control polygon can be converted to an ANCF surface element with fewer degrees of freedom, namely the ANCF surface element with 36 d.o.f. previously presented by Dufva and Shabana. The special control polygon can therefore be regarded as the geometric condition for conversion to an ANCF surface element with 36 d.o.f. Based on the fact that a B-spline surface can be seen as a set of Bezier surfaces joined together, a method to establish a general transformation matrix between the ANCF and lower-order B-spline surfaces is given. Notably, the general transformation is not in a recursive form but in a simplified form.
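As a one-dimensional analogue of such a linear conversion (our illustration, not the paper's surface-level matrix): a cubic Bezier curve and the position/slope nodal data of the kind ANCF elements carry are related by a constant matrix.

```python
import numpy as np

# For a cubic Bezier curve P(t) with control points b0..b3:
#   P(0) = b0,  P(1) = b3,  P'(0) = 3(b1 - b0),  P'(1) = 3(b3 - b2),
# so the Hermite-style nodal data is T @ [b0; b1; b2; b3] for a fixed T.
T = np.array([
    [ 1.0, 0.0,  0.0, 0.0],   # P(0)
    [ 0.0, 0.0,  0.0, 1.0],   # P(1)
    [-3.0, 3.0,  0.0, 0.0],   # P'(0)
    [ 0.0, 0.0, -3.0, 3.0],   # P'(1)
])

bezier = np.array([[0.0, 0.0], [1.0, 2.0], [2.0, 2.0], [3.0, 0.0]])  # b0..b3
hermite = T @ bezier   # rows: P(0), P(1), P'(0), P'(1)
print(hermite)
```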

Classical generative diffusion models learn an isotropic Gaussian denoising process, treating all spatial regions uniformly and thus neglecting potentially valuable structural information in the data. Inspired by the long-established work on anisotropic diffusion in image processing, we present a novel edge-preserving diffusion model that is a generalization of denoising diffusion probabilistic models (DDPM). In particular, we introduce an edge-aware noise scheduler that varies between edge-preserving and isotropic Gaussian noise. We show that our model's generative process converges faster to results that more closely match the target distribution. We demonstrate its capability to better learn the low-to-mid frequencies within the dataset, which play a crucial role in representing shapes and structural information. Our edge-preserving diffusion process consistently outperforms state-of-the-art baselines in unconditional image generation. It is also more robust for generative tasks guided by a shape-based prior, such as stroke-to-image generation. We present qualitative and quantitative results showing consistent improvements of up to 30% in FID score for both tasks.
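A hypothetical sketch of what such a scheduler could look like; the edge map, blending schedule, and parameter names here are our own assumptions, not the paper's actual design.

```python
import numpy as np

def edge_aware_noise(x0, t, T, lam=0.5, rng=None):
    """Hypothetical edge-aware scheduler: attenuate noise near edges early in
    the forward process and blend toward isotropic Gaussian noise as t -> T."""
    rng = np.random.default_rng(rng)
    # Crude edge map from image gradients (Perona-Malik-style conductance).
    gy, gx = np.gradient(x0)
    edge = np.exp(-(gx**2 + gy**2) / lam**2)   # ~0 at edges, ~1 in flat areas
    blend = t / T                              # 0: edge-preserving, 1: isotropic
    scale = (1 - blend) * edge + blend         # per-pixel noise std in (0, 1]
    return scale * rng.standard_normal(x0.shape)

x0 = np.zeros((64, 64)); x0[:, 32:] = 1.0      # toy image with one vertical edge
eps = edge_aware_noise(x0, t=100, T=1000, rng=0)  # edge column stays cleaner
```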

We show that, under mild assumptions, every distribution on the hypercube $\{0, 1\}^{n}$ that admits a polynomial-time Markov chain approximate sampler also has an exact sampling algorithm with expected running time in poly$(n)$.

We construct an interpolatory high-order cubature rule to compute integrals of smooth functions over self-affine sets with respect to an invariant measure. The main difficulty is the computation of the cubature weights, which we characterize algebraically, by exploiting a self-similarity property of the integral. We propose an $h$-version and a $p$-version of the cubature, present an error analysis and conduct numerical experiments.
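The self-similarity exploited for the weights is presumably Hutchinson's invariance identity: if $\mu$ is the invariant measure of the affine contractions $F_1,\dots,F_M$ with probability weights $w_1,\dots,w_M$, then

$$\int f \, d\mu \;=\; \sum_{i=1}^{M} w_i \int f \circ F_i \, d\mu,$$

and imposing this identity on the cubature's polynomial basis yields algebraic equations characterizing the cubature weights.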

Image edge detection (ED) is a fundamental task in computer vision. While CNN-based models have greatly improved ED performance, current models still suffer from unsatisfactory precision, especially when only a small error-tolerance distance is allowed. Model architectures for more precise predictions therefore still require investigation. On the other hand, the unavoidable noise in human-annotated training data leads to unsatisfactory predictions even when the inputs are edge maps themselves, which also calls for a solution. In this paper, more precise ED models are presented, built on cascaded skipping density blocks (CSDB). Our models achieve state-of-the-art (SOTA) results on several datasets, especially in average precision (AP), on a high-standard benchmark, as confirmed by extensive experiments. In addition, a novel modification of data augmentation for training is employed, which allows noiseless data to be used in model training for the first time and thus further improves model performance. The related Python code can be found at //github.com/Hao-B-Shu/SDPED.
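One plausible reading of the block name, sketched in PyTorch; this is our guess from the terminology, not the repository's actual architecture.

```python
import torch
import torch.nn as nn

class CSDB(nn.Module):
    """Hypothetical 'cascaded skipping density block': densely connected
    convolutions applied in cascade, with a skip around the whole block."""
    def __init__(self, channels, layers=3):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv2d(channels * (i + 1), channels, 3, padding=1)
            for i in range(layers)
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        feats = [x]
        for conv in self.convs:
            # Each layer sees the concatenation of all earlier features.
            feats.append(self.act(conv(torch.cat(feats, dim=1))))
        return x + feats[-1]   # skip connection around the block

block = CSDB(channels=16)
y = block(torch.randn(1, 16, 64, 64))   # shape preserved: (1, 16, 64, 64)
```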

The classical $k$-means clustering requires a complete data matrix without missing entries. As a natural extension of $k$-means clustering to missing data, the $k$-POD clustering has been proposed, which simply ignores the missing entries in the $k$-means objective. This paper shows the inconsistency of the $k$-POD clustering even under the missing-completely-at-random mechanism. More specifically, the expected loss of the $k$-POD clustering can be represented as a weighted sum of the expected $k$-means losses over subsets of the variables. Thus, the $k$-POD clustering converges to a clustering different from the $k$-means clustering as the sample size goes to infinity. This result indicates that, even in settings where $k$-means clustering works well, the $k$-POD clustering may fail to capture the hidden cluster structure. On the other hand, for high-dimensional data, the $k$-POD clustering could be a suitable choice when the missing rate in each variable is low.
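A minimal sketch of $k$-POD as described above, with missing entries marked NaN and simply dropped from both the assignment and update steps; edge cases (empty clusters, coordinates missing in an entire cluster) are glossed over.

```python
import numpy as np

def kpod(X, k, iters=50, rng=None):
    """k-means in which NaN (missing) entries are ignored in both steps."""
    rng = np.random.default_rng(rng)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    centers = np.where(np.isnan(centers), 0.0, centers)
    for _ in range(iters):
        # Assignment: squared distance summed over observed coordinates only.
        dist = np.nansum((X[:, None, :] - centers[None]) ** 2, axis=2)
        labels = dist.argmin(axis=1)
        # Update: per-cluster mean over observed entries only.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = np.nanmean(X[labels == j], axis=0)
    return centers, labels
```

Note that the assignment step already hints at the inconsistency: rows with more missing entries accumulate smaller distances, so the minimized objective is not the $k$-means loss.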

We consider the problem of estimating the factors of a rank-$1$ matrix with i.i.d. Gaussian, rank-$1$ measurements that are nonlinearly transformed and corrupted by noise. Considering two prototypical choices for the nonlinearity, we study the convergence properties of a natural alternating update rule for this nonconvex optimization problem starting from a random initialization. We show sharp convergence guarantees for a sample-split version of the algorithm by deriving a deterministic recursion that is accurate even in high-dimensional problems. Notably, while the infinite-sample population update is uninformative and suggests exact recovery in a single step, the algorithm -- and our deterministic prediction -- converges geometrically fast from a random initialization. Our sharp, non-asymptotic analysis also exposes several other fine-grained properties of this problem, including how the nonlinearity and noise level affect convergence behavior. On a technical level, our results are enabled by showing that the empirical error recursion can be predicted by our deterministic sequence within fluctuations of the order $n^{-1/2}$ when each iteration is run with $n$ observations. Our technique leverages leave-one-out tools originating in the literature on high-dimensional $M$-estimation and provides an avenue for sharply analyzing higher-order iterative algorithms from a random initialization in other high-dimensional optimization problems with random data.
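A toy version of such an alternating update, under our own simplifications: identity link rather than the paper's nonlinearities, no sample splitting, and plain least squares in each half-step.

```python
import numpy as np

def alternating_rank1(A, B, y, iters=20, rng=None):
    """Alternating least squares for y_i ~ (a_i . u)(b_i . v), i.e. rank-one
    Gaussian measurements a_i b_i^T of the rank-1 matrix u v^T."""
    rng = np.random.default_rng(rng)
    v = rng.standard_normal(B.shape[1])          # random initialization
    for _ in range(iters):
        # Fix v: the model is linear in u with design rows (b_i . v) * a_i.
        u = np.linalg.lstsq((B @ v)[:, None] * A, y, rcond=None)[0]
        # Fix u: the symmetric step for v.
        v = np.linalg.lstsq((A @ u)[:, None] * B, y, rcond=None)[0]
    return u, v

# Toy usage: d = 10, n = 2000 noisy bilinear measurements.
rng = np.random.default_rng(0)
d, n = 10, 2000
u_star, v_star = rng.standard_normal(d), rng.standard_normal(d)
A, B = rng.standard_normal((n, d)), rng.standard_normal((n, d))
y = (A @ u_star) * (B @ v_star) + 0.1 * rng.standard_normal(n)
u, v = alternating_rank1(A, B, y, rng=1)
```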
