
The symmetric circulant TSP is a special case of the traveling salesman problem in which edge costs are symmetric and obey circulant symmetry. Despite the substantial symmetry of the input, remarkably little is known about the symmetric circulant TSP, and the complexity of the problem has been an often-cited open question. Considerable effort has been made to understand the case in which only edges of two lengths are allowed to have finite cost: the two-stripe symmetric circulant TSP. In this paper, we resolve the complexity of the two-stripe symmetric circulant TSP. To do so, we reduce two-stripe symmetric circulant TSP to the problem of finding certain minimum-cost Hamiltonian paths on cylindrical graphs. We then solve this Hamiltonian path problem. Our results show that the two-stripe symmetric circulant TSP is in P. Note that a two-stripe symmetric circulant TSP instance consists of a constant number of inputs (including $n$, the number of cities), so that a polynomial-time algorithm for the decision problem must run in time polylogarithmic in $n$, and a polynomial-time algorithm for the optimization problem cannot output the tour. We address this latter difficulty by showing that the optimal tour must fall into one of two parameterized classes of tours, and that we can output the class and the parameters in polynomial time. Thus we make a substantial contribution to the set of polynomial-time solvable special cases of the TSP, and take an important step towards resolving the complexity of the general symmetric circulant TSP.
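For concreteness, a two-stripe symmetric circulant instance can be sketched as follows; this is a minimal illustration (names and parameter values are ours, not the paper's), assuming the usual convention that an edge is usable only if its circulant length is one of the finite-cost stripes:

```python
# Sketch: building a two-stripe symmetric circulant TSP cost matrix.
# The instance is described by a constant number of inputs: n and the
# stripe-length/cost pairs; all other edges get infinite cost.

import math

def circulant_cost(n, stripes):
    """Return the n x n symmetric circulant cost matrix.

    `stripes` maps a circulant length L (1 <= L <= n // 2) to its cost;
    the edge {i, j} has circulant length min(|i - j|, n - |i - j|).
    """
    cost = [[math.inf] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                cost[i][j] = 0.0
                continue
            length = min(abs(i - j), n - abs(i - j))
            if length in stripes:
                cost[i][j] = stripes[length]
    return cost

# A two-stripe instance on n = 6 cities with stripe lengths 1 and 2:
C = circulant_cost(6, {1: 3.0, 2: 5.0})
print(C[0][1], C[0][2], C[0][3])  # -> 3.0 5.0 inf
```

Note that the matrix itself has size n x n, which is exponential in the bit-length of the input (n, stripe lengths, stripe costs); this is exactly why, as the abstract points out, a polynomial-time algorithm cannot afford to write out the tour.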


A two-dimensional eigenvalue problem (2DEVP) of a Hermitian matrix pair $(A, C)$ is introduced in this paper. The 2DEVP can be viewed as a linear algebraic formulation of the well-known eigenvalue optimization problem for the parameter matrix $H(\mu) = A - \mu C$. We present fundamental properties of the 2DEVP, such as existence, a necessary and sufficient condition for the number of 2D-eigenvalues to be finite, and variational characterizations. We use eigenvalue optimization problems arising from the minimax of two Rayleigh quotients and from the computation of the distance to instability to show their connections with the 2DEVP, and to derive new insights into these problems from the properties of the 2DEVP.
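The eigenvalue-optimization problem that the 2DEVP linearizes can be made concrete with a tiny numerical sketch (our own illustration, not the paper's method): for a 2x2 real symmetric pair, scan $\mu \mapsto \lambda_{\max}(H(\mu))$ over a grid. The matrices below are placeholders.

```python
# Sketch: grid search for the minimizer of mu -> lambda_max(A - mu * C)
# for an illustrative 2x2 symmetric pair (A, C).

import math

def eig_max_2x2(m):
    """Largest eigenvalue of a 2x2 symmetric matrix [[a, b], [b, d]]."""
    a, b, d = m[0][0], m[0][1], m[1][1]
    tr, det = a + d, a * d - b * b
    # For symmetric matrices the discriminant (a-d)^2 + 4b^2 is nonnegative.
    return (tr + math.sqrt(tr * tr - 4 * det)) / 2

def h(mu, A, C):
    """The parameter matrix H(mu) = A - mu * C."""
    return [[A[i][j] - mu * C[i][j] for j in range(2)] for i in range(2)]

A = [[2.0, 1.0], [1.0, 0.0]]
C = [[1.0, 0.0], [0.0, -1.0]]

# Here lambda_max(mu) = 1 + sqrt(1 + (mu - 1)^2), minimized at mu = 1.
best_mu = min((mu / 100 for mu in range(-300, 301)),
              key=lambda m: eig_max_2x2(h(m, A, C)))
print(best_mu)  # -> 1.0
```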

The asymptotic study of the partition function $p(n)$ began with the work of Hardy and Ramanujan. Later, Rademacher obtained a convergent series for $p(n)$, and an error bound was given by Lehmer. Despite this, a full asymptotic expansion for $p(n)$ with an explicit error bound is not known. Recently, O'Sullivan studied the asymptotic expansion of $p^{k}(n)$, the number of partitions of $n$ into $k$th powers, initiated by Wright, and consequently obtained an asymptotic expansion for $p(n)$ along with a concise description of the coefficients involved in the expansion, but without any estimate of the error term. Here we carry out a detailed and comprehensive analysis of the error term obtained by truncating the asymptotic expansion for $p(n)$ at any positive integer $n$. This gives rise to an infinite family of inequalities for $p(n)$, which finally answers a question proposed by Chen. Our error-term estimation relies predominantly on applications of algorithmic methods from symbolic summation.
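The classical starting point that these expansions refine is easy to check numerically; a minimal sketch (our own illustration) comparing the exact $p(n)$, computed via Euler's pentagonal-number recurrence, with the Hardy-Ramanujan leading term $p(n) \sim e^{\pi\sqrt{2n/3}}/(4n\sqrt{3})$:

```python
# Sketch: exact partition numbers via the pentagonal-number recurrence
# versus the Hardy-Ramanujan leading-order asymptotic.

import math

def partitions(N):
    """Return [p(0), ..., p(N)] using Euler's pentagonal-number recurrence."""
    p = [1] + [0] * N
    for n in range(1, N + 1):
        k, total = 1, 0
        while True:
            g1 = k * (3 * k - 1) // 2      # generalized pentagonal numbers
            g2 = k * (3 * k + 1) // 2
            if g1 > n:
                break
            sign = -1 if k % 2 == 0 else 1
            total += sign * p[n - g1]
            if g2 <= n:
                total += sign * p[n - g2]
            k += 1
        p[n] = total
    return p

def hardy_ramanujan(n):
    """Leading term of the Hardy-Ramanujan asymptotic for p(n)."""
    return math.exp(math.pi * math.sqrt(2 * n / 3)) / (4 * n * math.sqrt(3))

p = partitions(100)
print(p[100])                    # -> 190569292
print(round(hardy_ramanujan(100)))  # leading term alone is off by a few percent
```

Already at $n=100$ the leading term has a relative error of roughly 5%, which is why explicit bounds on the truncated expansion, as studied here, are needed to turn the asymptotics into provable inequalities.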

Point-interactive image colorization aims to colorize grayscale images when a user provides the colors for specific locations. It is essential for point-interactive colorization methods to appropriately propagate user-provided colors (i.e., user hints) across the entire image to obtain a reasonably colorized image with minimal user effort. However, existing approaches often produce partially colorized results due to the inefficient design of stacking convolutional layers to propagate hints to distant relevant regions. To address this problem, we present iColoriT, a novel point-interactive colorization Vision Transformer capable of propagating user hints to relevant regions by leveraging the global receptive field of Transformers. The self-attention mechanism of Transformers enables iColoriT to selectively colorize relevant regions with only a few local hints. Our approach colorizes images in real time by utilizing pixel shuffling, an efficient upsampling technique that replaces the decoder architecture. In addition, to mitigate the artifacts caused by pixel shuffling with large upsampling ratios, we present the local stabilizing layer. Extensive quantitative and qualitative results demonstrate that our approach substantially outperforms existing methods for point-interactive colorization, producing accurately colorized images with minimal user effort. Official code is available at //pmh9960.github.io/research/iColoriT
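The pixel-shuffling upsampling step can be sketched in isolation; a plain-Python version of the standard sub-pixel rearrangement (the generic operation, not the iColoriT code itself), where a tensor is a nested list of shape (channels, height, width):

```python
# Sketch: pixel shuffle rearranges (C*r*r, H, W) -> (C, H*r, W*r), trading
# channel depth for spatial resolution without any learned decoder layers.

def pixel_shuffle(x, r):
    """Sub-pixel rearrangement with upscale factor r."""
    cr2, H, W = len(x), len(x[0]), len(x[0][0])
    assert cr2 % (r * r) == 0, "channel count must be divisible by r*r"
    C = cr2 // (r * r)
    out = [[[0] * (W * r) for _ in range(H * r)] for _ in range(C)]
    for c in range(C):
        for h in range(H * r):
            for w in range(W * r):
                # each output pixel comes from one of the r*r channel groups
                out[c][h][w] = x[c * r * r + (h % r) * r + (w % r)][h // r][w // r]
    return out

# 4 channels of 1x1 become 1 channel of 2x2 with upscale factor 2:
y = pixel_shuffle([[[1]], [[2]], [[3]], [[4]]], 2)
print(y)  # -> [[[1, 2], [3, 4]]]
```

Because the operation is a pure reindexing, it runs in time linear in the number of pixels, which is what makes decoder-free real-time upsampling feasible.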

We present a Newton-Krylov solver for a viscous-plastic sea-ice model. This constitutive relation is commonly used in climate models to describe the material properties of sea ice. Due to the strong nonlinearity introduced by the material law in the momentum equation, the development of fast, robust and scalable solvers is still a substantial challenge. In this paper, we propose a novel primal-dual Newton linearization for the implicitly-in-time discretized momentum equation. Compared to existing methods, it converges faster and more robustly with respect to mesh refinement, and thus enables numerically converged sea-ice simulations at high resolutions. Combined with an algebraic multigrid-preconditioned Krylov method for the linearized systems, which contain strongly varying coefficients, the resulting solver scales well and can be used in parallel. We present experiments for two challenging test problems and study solver performance for problems with up to 8.4 million spatial unknowns.

Linear computation broadcast (LCBC) refers to a setting with $d$ dimensional data stored at a central server, where $K$ users, each with some prior linear side-information, wish to retrieve various linear combinations of the data. The goal is to determine the minimum amount of information that must be broadcast to satisfy all the users. The reciprocal of the optimal broadcast cost is the capacity of LCBC. The capacity is known for up to $K=3$ users. Since LCBC includes index coding as a special case, large $K$ settings of LCBC are at least as hard as the index coding problem. Instead of the general setting (all instances), by focusing on the generic setting (almost all instances) this work shows that the generic capacity of the symmetric LCBC (where every user has $m'$ dimensions of side-information and $m$ dimensions of demand) for a large number of users ($K>d$ suffices) is $C_g=1/\Delta_g$, where $\Delta_g=\min\left\{\max\{0,d-m'\}, Km, \frac{dm}{m+m'}\right\}$ is the broadcast cost that is both achievable and unbeatable asymptotically almost surely for large $n$, among all LCBC instances with the given parameters $p,K,d,m,m'$. Relative to baseline schemes of random coding or separate transmissions, $C_g$ shows an extremal gain by a factor of $K$ as a function of the number of users, and by a factor of $\approx d/4$ as a function of the data dimension, when optimized over the remaining parameters. For an arbitrary number of users, the generic capacity of the symmetric LCBC is characterized within a factor of $2$.
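The closed-form broadcast cost in the abstract is straightforward to evaluate; a minimal sketch with illustrative parameter values (not taken from the paper):

```python
# Sketch: the generic broadcast cost Delta_g from the abstract, for the
# symmetric LCBC with d data dimensions, K users, m demand dimensions and
# m' side-information dimensions per user; the generic capacity is 1/Delta_g.

from fractions import Fraction

def generic_broadcast_cost(d, K, m, m_prime):
    """Delta_g = min{ max(0, d - m'), K*m, d*m / (m + m') }."""
    return min(Fraction(max(0, d - m_prime)),
               Fraction(K * m),
               Fraction(d * m, m + m_prime))

delta = generic_broadcast_cost(d=6, K=10, m=1, m_prime=2)
print(delta)  # -> 2, i.e. min{4, 10, 2}
```

Using exact rationals (`Fraction`) avoids floating-point ties when the third term $dm/(m+m')$ is not an integer.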

Stochastic kriging has been widely employed for simulation metamodeling to predict the response surface of complex simulation models. However, its use is limited to cases where the design space is low-dimensional because, in general, the sample complexity (i.e., the number of design points required for stochastic kriging to produce an accurate prediction) grows exponentially in the dimensionality of the design space. The large sample size results in both a prohibitive sample cost for running the simulation model and a severe computational challenge due to the need to invert large covariance matrices. Based on tensor Markov kernels and sparse grid experimental designs, we develop a novel methodology that dramatically alleviates the curse of dimensionality. We show that the sample complexity of the proposed methodology grows only slightly in the dimensionality, even under model misspecification. We also develop fast algorithms that compute stochastic kriging in its exact form without any approximation schemes. We demonstrate via extensive numerical experiments that our methodology can handle problems with a design space of more than 10,000 dimensions, improving both prediction accuracy and computational efficiency by orders of magnitude relative to typical alternative methods in practice.

Variational Bayesian posterior inference often requires simplifying approximations such as mean-field parametrisation to ensure tractability. However, prior work has associated the variational mean-field approximation for Bayesian neural networks with underfitting in the case of small datasets or large model sizes. In this work, we show that invariances in the likelihood function of over-parametrised models contribute to this phenomenon because these invariances complicate the structure of the posterior by introducing discrete and/or continuous modes which cannot be well approximated by Gaussian mean-field distributions. In particular, we show that the mean-field approximation has an additional gap in the evidence lower bound compared to a purpose-built posterior that takes into account the known invariances. Importantly, this invariance gap is not constant; it vanishes as the approximation reverts to the prior. We proceed by first considering translation invariances in a linear model with a single data point in detail. We show that, while the true posterior can be constructed from a mean-field parametrisation, this is achieved only if the objective function takes into account the invariance gap. Then, we transfer our analysis of the linear model to neural networks. Our analysis provides a framework for future work to explore solutions to the invariance problem.

For Lagrange polynomial interpolation on open arcs $X=\gamma$ in $\CC$, it is well known that the Lebesgue constant for the family of Chebyshev points ${\bf{x}}_n:=\{x_{n,j}\}^{n}_{j=0}$ on $[-1,1]\subset \RR$ has growth order $O(\log n)$. The same growth order was shown in \cite{ZZ} for the Lebesgue constant of the family ${\bf {z^{**}_n}}:=\{z_{n,j}^{**}\}^{n}_{j=0}$ of some properly adjusted Fej\'er points on a rectifiable smooth open arc $\gamma\subset \CC$. On the other hand, in our recent work \cite{CZ2021}, it was observed that if the smooth open arc $\gamma$ is replaced by an $L$-shape arc $\gamma_0 \subset \CC$ consisting of two line segments, numerical experiments suggest that the Marcinkiewicz-Zygmund inequalities are no longer valid for the family of Fej\'er points ${\bf z}_n^{*}:=\{z_{n,j}^{*}\}^{n}_{j=0}$ on $\gamma$, and that the rate of growth of the corresponding Lebesgue constant $L_{{\bf {z}}^{*}_n}$ is as fast as $c\log^2 n$ for some constant $c>0$. The main objective of the present paper is three-fold: firstly, it will be shown that for the special case of the $L$-shape arc $\gamma_0$ consisting of two line segments of the same length that meet at an angle of $\pi/2$, the growth rate of the Lebesgue constant $L_{{\bf {z}}_n^{*}}$ is at least as fast as $O(\log^2 n)$, with $\limsup_{n\to\infty} \frac{L_{{\bf {z}}_n^{*}}}{\log^2 n} = \infty$; secondly, the corresponding (modified) Marcinkiewicz-Zygmund inequalities fail to hold; and thirdly, a proper adjustment ${\bf z}_n^{**}:=\{z_{n,j}^{**}\}^{n}_{j=0}$ of the Fej\'er points on $\gamma$ will be described that ensures the growth rate of $L_{{\bf z}_n^{**}}$ is exactly $O(\log^2 n)$.
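The classical $O(\log n)$ baseline on $[-1,1]$ mentioned at the start can be observed numerically; a brute-force sketch (our own illustration of the real-interval case, not the $L$-shape arc setting of the paper) that estimates the Lebesgue constant by maximizing $\sum_j |\ell_j(x)|$ over a fine grid:

```python
# Sketch: estimating the Lebesgue constant of Chebyshev points on [-1, 1]
# directly from the Lagrange basis; its growth in n is only logarithmic.

import math

def chebyshev_points(n):
    """x_{n,j} = cos((2j+1) pi / (2n+2)), j = 0, ..., n."""
    return [math.cos((2 * j + 1) * math.pi / (2 * n + 2)) for j in range(n + 1)]

def lebesgue_constant(nodes, grid_size=2000):
    """Max over a uniform grid of the Lebesgue function sum_j |l_j(x)|."""
    lam = 0.0
    grid = [-1 + 2 * t / grid_size for t in range(grid_size + 1)]
    for x in grid:
        s = 0.0
        for j, xj in enumerate(nodes):
            lj = 1.0
            for k, xk in enumerate(nodes):
                if k != j:
                    lj *= (x - xk) / (xj - xk)
            s += abs(lj)
        lam = max(lam, s)
    return lam

# Growth is roughly (2/pi) log(n + 1) + const:
for n in (5, 10, 20):
    print(n, round(lebesgue_constant(chebyshev_points(n)), 3))
```

A grid maximum slightly underestimates the true supremum, but it suffices to see the slow logarithmic growth that contrasts with the $\log^2 n$ rates on the $L$-shape arc.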

This paper designs a new, scientific environmental quality assessment method and takes the Saihan dam as an example to explore the degree of environmental improvement in the local and Beijing areas. The AHP method is used to assign values to each weight; 7 primary indicators and 21 secondary indicators were used to establish an environmental quality assessment model. The conclusions show that after the establishment of the Saihan dam, the local environmental quality improved by a factor of 7, and the environmental quality in Beijing improved by 13%. The future environmental index is then predicted. Finally, the Spearman correlation coefficient is analyzed: when the back-propagation algorithm is used for testing, the correlation is 99%, indicating that the error is small.
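The AHP weighting step the model relies on can be sketched as follows; the 3x3 pairwise comparison matrix below is an illustrative placeholder, not the paper's actual 7-indicator judgments:

```python
# Sketch: standard AHP weight derivation -- the principal eigenvector of a
# positive reciprocal pairwise comparison matrix, via power iteration.

def ahp_weights(M, iters=100):
    """Approximate principal-eigenvector weights, normalized to sum to 1."""
    n = len(M)
    w = [1.0 / n] * n
    for _ in range(iters):
        v = [sum(M[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(v)
        w = [x / s for x in v]
    return w

# Illustrative judgments: criterion 1 is 3x as important as criterion 2
# and 5x as important as criterion 3 (a perfectly consistent matrix).
M = [[1.0,     3.0,     5.0],
     [1 / 3.0, 1.0,     5 / 3.0],
     [1 / 5.0, 3 / 5.0, 1.0]]
w = ahp_weights(M)
print([round(x, 3) for x in w])  # -> [0.652, 0.217, 0.13]
```

For a consistent matrix like this one, the weights are exactly the first column normalized to sum to 1; for real (inconsistent) judgments, AHP additionally checks a consistency ratio before accepting the weights.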

Let $L$ be an $n\times n$ array whose top left $r\times r$ subarray is filled with $k$ different symbols, each occurring at most once in each row and at most once in each column. We establish necessary and sufficient conditions that ensure the remaining cells of $L$ can be filled such that each symbol occurs at most once in each row and at most once in each column, $L$ is symmetric with respect to the main diagonal, and each symbol occurs a prescribed number of times in $L$. The case where the prescribed number of times each symbol occurs is $n$ was solved by Cruse (J. Combin. Theory Ser. A 16 (1974), 18-22), and the case where the top left subarray is $r\times n$ and the symmetry is not required was settled by Goldwasser et al. (J. Combin. Theory Ser. A 130 (2015), 26-41). Our result allows the entries of the main diagonal to be specified as well, which leads to an extension of the Andersen-Hoffman Theorem (Annals of Disc. Math. 15 (1982), 9-26; European J. Combin. 4 (1983), 33-35).
