
Since the seminal result of Karger, Motwani, and Sudan, algorithms for approximate 3-coloring have primarily centered around SDP-based rounding. However, it is likely that important combinatorial or algebraic insights are needed in order to break the $n^{o(1)}$ threshold. One way to develop new understanding in graph coloring is to study special subclasses of graphs. For instance, Blum studied the 3-coloring of random graphs, and Arora and Ge studied the 3-coloring of graphs with low threshold-rank. In this work, we study graphs which arise from a tensor product, which appear to be novel instances of the 3-coloring problem. We consider graphs of the form $H = (V,E)$ with $V =V( K_3 \times G)$ and $E = E(K_3 \times G) \setminus E'$, where $E' \subseteq E(K_3 \times G)$ is any edge set such that no vertex has more than an $\epsilon$ fraction of its edges in $E'$. We show that one can construct $\widetilde{H} = K_3 \times \widetilde{G}$ with $V(\widetilde{H}) = V(H)$ that is close to $H$. For arbitrary $G$, $\widetilde{H}$ satisfies $|E(H) \Delta E(\widetilde{H})| \leq O(\epsilon|E(H)|)$. Additionally when $G$ is a mild expander, we provide a 3-coloring for $H$ in polynomial time. These results partially generalize an exact tensor factorization algorithm of Imrich. On the other hand, without any assumptions on $G$, we show that it is NP-hard to 3-color $H$.
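To make the tensor-product structure concrete (an illustration only, not the paper's algorithm): the adjacency matrix of $K_3 \times G$ is the Kronecker product of the factors' adjacency matrices, and projecting each vertex $(i,v)$ onto its $K_3$ coordinate always yields a proper 3-coloring. A minimal sketch with a hypothetical small factor $G = C_4$:

```python
import numpy as np

# Sketch (illustrative only): the adjacency matrix of the tensor product
# K_3 x G is the Kronecker product of the factors' adjacency matrices,
# and projecting onto the K_3 coordinate gives a proper 3-coloring.
# Here G is a hypothetical small factor, the 4-cycle C_4.
A_K3 = np.ones((3, 3), dtype=int) - np.eye(3, dtype=int)
A_G = np.array([[0, 1, 0, 1],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [1, 0, 1, 0]])

A_H = np.kron(A_K3, A_G)          # adjacency of H = K_3 x G, 12 vertices
degrees = A_H.sum(axis=1)         # deg(i, v) = deg_K3(i) * deg_G(v) = 2*2
num_edges = int(A_H.sum()) // 2   # |E| = 2 * |E(K_3)| * |E(C_4)| = 24

# Color vertex (i, v) with i: every edge joins distinct K_3 coordinates
# i != j, so this coloring is proper.
colors = np.repeat([0, 1, 2], 4)
rows, cols = np.nonzero(A_H)
is_proper = bool((colors[rows] != colors[cols]).all())
```

The hardness result in the abstract corresponds to destroying exactly this Kronecker structure by deleting the edge set $E'$.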

Related content

Parallel-in-time methods for partial differential equations (PDEs) have been the subject of intense development over recent decades, particularly for diffusion-dominated problems. It has been widely reported in the literature, however, that many of these methods perform quite poorly for advection-dominated problems. Here we analyze the particular iterative parallel-in-time algorithm of multigrid reduction-in-time (MGRIT) for discretizations of constant-wave-speed linear advection problems. We focus on common method-of-lines discretizations that employ upwind finite differences in space and Runge-Kutta methods in time. Using a convergence framework we developed in previous work, we prove for a subclass of these discretizations that, if using the standard approach of rediscretizing the fine-grid problem on the coarse grid, robust MGRIT convergence with respect to CFL number and coarsening factor is not possible. This poor convergence and non-robustness is caused, at least in part, by an inadequate coarse-grid correction for smooth Fourier modes known as characteristic components. We propose an alternative coarse-grid operator that provides a better correction of these modes. This operator is related to previous work and uses a semi-Lagrangian discretization combined with an implicitly treated truncation error correction. Theory and numerical experiments show that the coarse-grid operator yields fast MGRIT convergence for many of the method-of-lines discretizations considered, including both implicit and explicit discretizations of high order.
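As background for the class of discretizations discussed above, here is a hypothetical minimal method-of-lines setup (my own toy example, not the paper's MGRIT solver): first-order upwind in space with forward Euler (RK1) in time for $u_t + a u_x = 0$ on a periodic grid, which is stable and dissipative whenever the CFL number $a\,\Delta t/\Delta x$ is at most 1.

```python
import numpy as np

# Hypothetical minimal method-of-lines setup: first-order upwind in space,
# forward Euler (RK1) in time, for u_t + a u_x = 0 with a > 0 on a
# periodic grid. The update
#   u_i <- (1 - cfl) u_i + cfl u_{i-1}
# is a convex combination whenever cfl = a*dt/dx <= 1, so the explicit
# scheme is stable (and dissipative, which is exactly the behaviour a
# coarse-grid operator must reproduce).
a, nx = 1.0, 64
dx = 1.0 / nx
cfl = 0.8
dt = cfl * dx / a                      # time step implied by the CFL number

x = np.arange(nx) * dx
u = np.exp(-100.0 * (x - 0.5) ** 2)    # smooth initial profile
u0_max = float(u.max())

for _ in range(200):
    u = (1.0 - cfl) * u + cfl * np.roll(u, 1)   # upwind step for a > 0

stable = bool(u.max() <= u0_max + 1e-12)        # max-norm does not grow
```

The smooth "characteristic components" mentioned in the abstract are precisely the Fourier modes that such a scheme transports with little damping, and which rediscretized coarse grids fail to correct.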

The absolute value equations (AVE) problem is the algebraic problem of solving $Ax+|x|=b$. So far, most research has focused on methods for solving AVEs; we instead address the problem itself by analysing properties of the AVE and the corresponding solution set. In particular, we investigate topological properties of the solution set, such as convexity, boundedness, connectedness, or whether it consists of finitely many solutions. Further, we address problems related to nonnegativity of solutions, such as solvability or unique solvability. The AVE can be formulated by means of different optimization problems, and in this regard we are interested in how the solutions of the AVE are related to the optima, Karush-Kuhn-Tucker points, and feasible solutions of these optimization problems. We characterize the matrix classes associated with the above-mentioned properties and inspect the computational complexity of the recognition problem; some of the classes are polynomially recognizable, while some others are proved to be NP-hard. For the intractable cases, we propose various sufficient conditions. We also pose new challenging problems that arose during our investigation.
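For concreteness, a standard fixed-point scheme from the AVE literature (shown for illustration; it is not one of the paper's contributions): if the smallest singular value of $A$ exceeds $1$, a classical sufficient condition for unique solvability, then the Picard iteration $x \leftarrow A^{-1}(b - |x|)$ is a contraction. The matrix below is a hypothetical well-conditioned example.

```python
import numpy as np

# Picard iteration for the AVE  Ax + |x| = b:  x <- A^{-1} (b - |x|).
# If the smallest singular value of A exceeds 1, then ||A^{-1}||_2 < 1 and
# the iteration map is a contraction, so the AVE has a unique solution.
rng = np.random.default_rng(0)
n = 5
A = 3.0 * np.eye(n) + 0.1 * rng.standard_normal((n, n))  # sigma_min >> 1
b = rng.standard_normal(n)

x = np.zeros(n)
for _ in range(100):
    x = np.linalg.solve(A, b - np.abs(x))

residual = np.linalg.norm(A @ x + np.abs(x) - b)  # ~ 0 at convergence
```

When $\sigma_{\min}(A) \le 1$, none of this applies, which is where the solution-set pathologies studied in the paper (nonconvexity, disconnectedness, exponentially many solutions) arise.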

For $r:=(r_1,\dots,r_k)$, an $r$-factorization of the complete $\lambda$-fold $h$-uniform $n$-vertex hypergraph $\lambda K_n^h$ is a partition of (the edges of) $\lambda K_n^h$ into $F_1,\dots, F_k$ such that for $i=1,\dots,k$, $F_i$ is $r_i$-regular and spanning. Suppose that $n \geq (h-1)(2m-1)$. Given a partial $r$-factorization of $\lambda K_m^h$, that is, a coloring (i.e. partition) $P$ of the edges of $\lambda K_m^h$ into $F_1,\dots, F_k$ such that for $i=1,\dots,k$, $F_i$ is spanning and the degree of each vertex in $F_i$ is at most $r_i$, we find necessary and sufficient conditions that ensure $P$ can be extended to a connected $r$-factorization of $\lambda K_n^h$ (i.e. an $r$-factorization in which each factor is connected). Moreover, we prove a general result that implies the following. Given a partial $s$-factorization $P$ of any sub-hypergraph of $\lambda K_m^h$, where $s:=(s_1,\dots,s_q)$ and $q$ is not too big, we find necessary and sufficient conditions under which $P$ can be embedded into a connected $r$-factorization of $\lambda K_n^h$. These results can be seen as unified generalizations of various classical combinatorial results such as Cruse's theorem on embedding partial symmetric latin squares, Baranyai's theorem on factorization of hypergraphs, Hilton's theorem on extending path decompositions into Hamiltonian decompositions, H\"{a}ggkvist and Hellgren's theorem on extending 1-factorizations, and Hilton, Johnson, Rodger, and Wantland's theorem on embedding connected factorizations.
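A minimal instance of the definitions above, in the graph case $h = 2$, $\lambda = 1$, $r = (1,1,1)$: the edge set of $K_4$ partitions into three spanning $1$-regular factors (perfect matchings). A quick sketch verifying both properties:

```python
from itertools import combinations

# The 1-factorization of K_4 (unique up to relabelling): three factors,
# each a perfect matching, i.e. a spanning 1-regular factor.
factors = [
    [(0, 1), (2, 3)],
    [(0, 2), (1, 3)],
    [(0, 3), (1, 2)],
]

# The factors partition E(K_4) ...
all_edges = sorted(e for f in factors for e in f)
is_partition = all_edges == sorted(combinations(range(4), 2))

# ... and every vertex has degree exactly r_i = 1 in every factor.
def degree(v, f):
    return sum(v in e for e in f)

is_one_regular = all(degree(v, f) == 1 for f in factors for v in range(4))
```

The embedding results in the abstract ask when a partial version of such a coloring on $\lambda K_m^h$ extends to a full (and connected) one on $\lambda K_n^h$.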

Let $S_{p,n}$ denote the sample covariance matrix based on $n$ independent identically distributed $p$-dimensional random vectors in the null case. The main result of this paper is an expansion of trace moments and power-trace covariances of $S_{p,n}$ simultaneously for both high- and low-dimensional data. To this end we develop a graph-theoretic ansatz that describes trace moments as weighted sums over colored graphs. Specifically, explicit formulas for the highest-order coefficients in the expansion are deduced by restricting attention to graphs with either no or one cycle. The novelty is a color-preserving decomposition of graphs into a tree structure and their seed graphs, which allows for the identification of Euler circuits from graphs with the same tree structure but different seed graphs. This approach may also be used to approximate the mean and covariance to even higher degrees of accuracy.
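The trace moments in question can be illustrated by a Monte Carlo check of the two lowest ones (classical Wishart identities, used here only to show the quantities whose expansions the paper studies): for $S = XX^T/n$ with $X$ a $p \times n$ standard normal matrix, $\mathbb{E}[\operatorname{tr} S] = p$ and $\mathbb{E}[\operatorname{tr} S^2] = p(n+p+1)/n$.

```python
import numpy as np

# Monte Carlo illustration of the first two null-case trace moments of
# the sample covariance S = X X^T / n, X a p x n standard normal matrix:
#   E[tr S]   = p,
#   E[tr S^2] = p (n + p + 1) / n.
rng = np.random.default_rng(42)
p, n, reps = 4, 10, 5000

tr1 = np.empty(reps)
tr2 = np.empty(reps)
for i in range(reps):
    X = rng.standard_normal((p, n))
    S = X @ X.T / n
    tr1[i] = np.trace(S)
    tr2[i] = np.trace(S @ S)

mean_tr1 = tr1.mean()   # should be close to p = 4
mean_tr2 = tr2.mean()   # should be close to 4 * (10 + 4 + 1) / 10 = 6
```

Note that both formulas hold for every $(p, n)$, which is why an expansion valid simultaneously in the high- and low-dimensional regimes is natural here.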

A list assignment $L$ of a graph $G$ is a function that assigns to every vertex $v$ of $G$ a set $L(v)$ of colors. A proper coloring $\alpha$ of $G$ is called an $L$-coloring of $G$ if $\alpha(v)\in L(v)$ for every $v\in V(G)$. For a list assignment $L$ of $G$, the $L$-recoloring graph $\mathcal{G}(G,L)$ of $G$ is a graph whose vertices correspond to the $L$-colorings of $G$, and two vertices of $\mathcal{G}(G,L)$ are adjacent if their corresponding $L$-colorings differ at exactly one vertex of $G$. A $d$-face in a plane graph is a face of length $d$. Dvo\v{r}\'ak and Feghali conjectured that for a planar graph $G$ and a list assignment $L$ of $G$: (i) If $|L(v)|\geq 10$ for every $v\in V(G)$, then the diameter of $\mathcal{G}(G,L)$ is $O(|V(G)|)$. (ii) If $G$ is triangle-free and $|L(v)|\geq 7$ for every $v\in V(G)$, then the diameter of $\mathcal{G}(G,L)$ is $O(|V(G)|)$. In a recent paper, Cranston (European J. Combin. (2022)) proved (ii). In this paper, we make progress towards the conjecture by proving the following results. Let $G$ be a plane graph and $L$ be a list assignment of $G$. $\bullet$ If every $3$-face of $G$ is adjacent to at most two $3$-faces and $|L(v)|\geq 10$ for every $v\in V(G)$, then the diameter of $\mathcal{G}(G,L)$ is at most $190|V(G)|$. $\bullet$ If every $3$-face of $G$ is adjacent to at most one $3$-face and $|L(v)|\geq 9$ for every $v\in V(G)$, then the diameter of $\mathcal{G}(G,L)$ is at most $13|V(G)|$. $\bullet$ If the faces adjacent to any $3$-face have length at least $6$ and $|L(v)|\geq 7$ for every $v\in V(G)$, then the diameter of $\mathcal{G}(G,L)$ is at most $242|V(G)|$. This last result strengthens Cranston's result on (ii).
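To make the central object concrete, a hypothetical toy computation (not from the paper): for $G$ the path $v_0 v_1 v_2$ with every list equal to $\{0,1,2\}$, one can enumerate the 12 proper $L$-colorings, build $\mathcal{G}(G,L)$, and compute its diameter by breadth-first search.

```python
from collections import deque
from itertools import product

# Toy L-recoloring graph: G is the path v0 - v1 - v2, every list is
# {0, 1, 2}. Vertices of G(G, L) are the proper L-colorings; two are
# adjacent iff they differ at exactly one vertex of G.
edges = [(0, 1), (1, 2)]
lists = [range(3)] * 3

colorings = [c for c in product(*lists)
             if all(c[u] != c[v] for u, v in edges)]  # 12 proper colorings

def adjacent(c1, c2):
    return sum(a != b for a, b in zip(c1, c2)) == 1

def bfs_dist(src):
    dist = {src: 0}
    q = deque([src])
    while q:
        c = q.popleft()
        for d in colorings:
            if d not in dist and adjacent(c, d):
                dist[d] = dist[c] + 1
                q.append(d)
    return dist

# Here the recoloring graph is connected, so the diameter is the largest
# BFS eccentricity over all colorings.
diameter = max(max(bfs_dist(c).values()) for c in colorings)
```

The conjecture and the results above concern exactly this diameter, but with linear upper bounds in $|V(G)|$ for planar graphs with sufficiently large lists.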

Trawl processes belong to the class of continuous-time, strictly stationary, infinitely divisible processes; they are defined as L\'{e}vy bases evaluated over deterministic trawl sets. This article presents the first nonparametric estimator of the trawl function characterising the trawl set and the serial correlation of the process. Moreover, it establishes a detailed asymptotic theory for the proposed estimator, including a law of large numbers and a central limit theorem for various asymptotic relations between an in-fill and a long-span asymptotic regime. In addition, it develops consistent estimators for both the asymptotic bias and variance, which are subsequently used for establishing feasible central limit theorems which can be applied to data. A simulation study shows the good finite sample performance of the proposed estimators and, in an empirical illustration, the new methodology is applied to modelling and forecasting high-frequency financial spread data from a limit order book.

We study the rank of sub-matrices arising out of kernel functions $F(\pmb{x},\pmb{y}): \mathbb{R}^d \times \mathbb{R}^d \mapsto \mathbb{R}$, where $\pmb{x},\pmb{y} \in \mathbb{R}^d$, that have a singularity along the diagonal $\pmb{x}=\pmb{y}$. Such kernel functions are frequently encountered in a wide range of applications such as $N$-body problems, Green's functions, integral equations, geostatistics, kriging, Gaussian processes, etc. One of the challenges in dealing with these kernel functions is that the corresponding matrices are large and dense, so the computational cost of matrix operations is high. In this article, we prove new theorems bounding the numerical rank of sub-matrices arising out of these kernel functions. Under reasonably mild assumptions, we prove that certain sub-matrices are rank-deficient in finite precision. This rank depends on the dimension of the ambient space and also on the type of interaction between the hyper-cubes containing the corresponding sets of particles. This rank structure can be leveraged to reduce the computational cost of certain matrix operations such as matrix-vector products, solving linear systems, etc. We also present numerical results on the growth of rank of certain sub-matrices in $1$D, $2$D, $3$D and $4$D, which, not surprisingly, agree with the theoretical results.
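The phenomenon is easy to observe numerically. In this hypothetical 1-D setup (an illustration, not the paper's theorems), the off-diagonal block of the singular kernel $F(x,y) = 1/|x-y|$ between two well-separated clusters is numerically low-rank even though it is dense:

```python
import numpy as np

# Off-diagonal kernel block between two well-separated 1-D clusters for
# the singular kernel F(x, y) = 1 / |x - y|. The block is dense, yet its
# singular values decay geometrically, so its numerical rank is tiny.
x = np.linspace(0.0, 1.0, 200)   # source cluster
y = np.linspace(3.0, 4.0, 200)   # target cluster, well separated
K = 1.0 / np.abs(x[:, None] - y[None, :])

# Numerical rank at relative tolerance eps, read off the singular values.
s = np.linalg.svd(K, compute_uv=False)
eps = 1e-10
numerical_rank = int(np.sum(s > eps * s[0]))   # far smaller than 200
```

Fast algorithms such as hierarchical matrices exploit exactly this: only blocks touching the diagonal singularity need to be stored densely.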

Approximate solutions to large least squares problems can be computed efficiently using leverage score-based row-sketches, but directly computing the leverage scores, or sampling according to them with naive methods, still requires an expensive manipulation and processing of the design matrix. In this paper we develop efficient leverage score-based sampling methods for matrices with certain Kronecker product-type structure; in particular we consider matrices that are monotone lower column subsets of Kronecker product matrices. Our discussion is general, encompassing least squares problems on infinite domains, in which case matrices formally have infinitely many rows. We briefly survey leverage score-based sampling guarantees from the numerical linear algebra and approximation theory communities, and follow this with efficient algorithms for sampling when the design matrix has Kronecker-type structure. Our numerical examples confirm that sketches based on exact leverage score sampling for our class of structured matrices achieve superior residual compared to approximate leverage score sampling methods.
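For readers unfamiliar with the baseline, a sketch of generic (unstructured) leverage-score row sampling for least squares, the expensive step the paper's Kronecker-structured algorithms avoid; all sizes here are hypothetical:

```python
import numpy as np

# Generic leverage-score sketching for least squares min ||Ax - b||_2.
# The paper's point is to exploit Kronecker-type structure so that the
# orthonormal factor Q never has to be formed explicitly.
rng = np.random.default_rng(1)
m, n = 2000, 10
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

# Leverage scores: squared row norms of an orthonormal basis for range(A).
Q, _ = np.linalg.qr(A, mode='reduced')
lev = np.sum(Q ** 2, axis=1)          # scores sum to n = rank(A)
p = lev / lev.sum()

# Sample k rows with probability proportional to leverage, reweight by
# 1/sqrt(k p_i) (importance sampling), and solve the small sketched system.
k = 300
idx = rng.choice(m, size=k, replace=True, p=p)
w = 1.0 / np.sqrt(k * p[idx])
x_sk, *_ = np.linalg.lstsq(w[:, None] * A[idx], w * b[idx], rcond=None)
x_full, *_ = np.linalg.lstsq(A, b, rcond=None)

resid_sk = np.linalg.norm(A @ x_sk - b)
resid_full = np.linalg.norm(A @ x_full - b)   # sketch is nearly as good
```

Computing `Q` costs $O(mn^2)$ (and is impossible when the domain is infinite, as in the paper's continuous setting), which motivates structure-exploiting exact sampling.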

This work extends the results of [Garde and Hyv\"onen, Math. Comp. 91:1925-1953] on series reversion for Calder\'on's problem to the case of realistic electrode measurements, with both the internal admittivity of the investigated body and the contact admittivity at the electrode-object interfaces treated as unknowns. The forward operator, sending the internal and contact admittivities to the linear electrode current-to-potential map, is first proven to be analytic. A reversion of the corresponding Taylor series yields a family of numerical methods of different orders for solving the inverse problem of electrical impedance tomography, with the possibility to employ different parametrizations for the unknown internal and boundary admittivities. The functionality and convergence of the methods are established only if the employed finite-dimensional parametrization of the unknowns allows the Fr\'echet derivative of the forward map to be injective, but we also heuristically extend the methods to more general settings by resorting to regularization motivated by Bayesian inversion. The performance of this regularized approach is tested via three-dimensional numerical examples based on simulated data. The effect of modeling errors is a focal point of the numerical studies.

Substantial progress has been made recently on developing provably accurate and efficient algorithms for low-rank matrix factorization via nonconvex optimization. While conventional wisdom often takes a dim view of nonconvex optimization algorithms due to their susceptibility to spurious local minima, simple iterative methods such as gradient descent have been remarkably successful in practice. The theoretical footings, however, had been largely lacking until recently. In this tutorial-style overview, we highlight the important role of statistical models in enabling efficient nonconvex optimization with performance guarantees. We review two contrasting approaches: (1) two-stage algorithms, which consist of a tailored initialization step followed by successive refinement; and (2) global landscape analysis and initialization-free algorithms. Several canonical matrix factorization problems are discussed, including but not limited to matrix sensing, phase retrieval, matrix completion, blind deconvolution, robust principal component analysis, phase synchronization, and joint alignment. Special care is taken to illustrate the key technical insights underlying their analyses. This article serves as a testament that the integrated consideration of optimization and statistics leads to fruitful research findings.
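The two-stage paradigm surveyed above can be sketched on a toy instance (illustrative only; the surveyed problems involve partial or noisy observations rather than a fully observed $M$): spectral initialization followed by gradient descent on $f(U) = \|UU^T - M\|_F^2/4$, whose gradient is $(UU^T - M)U$.

```python
import numpy as np

# Toy two-stage scheme for PSD low-rank factorization:
#   minimize f(U) = ||U U^T - M||_F^2 / 4  over U in R^{n x r},
# with grad f(U) = (U U^T - M) U.
rng = np.random.default_rng(0)
n, r = 30, 2
U_star = rng.standard_normal((n, r))
M = U_star @ U_star.T                  # ground-truth rank-r PSD matrix

# Stage 1: spectral initialization from the top-r eigenpairs of M.
vals, vecs = np.linalg.eigh(M)         # eigenvalues in ascending order
U = vecs[:, -r:] * np.sqrt(vals[-r:])

# Stage 2: gradient descent, step size scaled by the top eigenvalue.
eta = 0.5 / vals[-1]
for _ in range(200):
    U = U - eta * (U @ U.T - M) @ U

err = np.linalg.norm(U @ U.T - M) / np.linalg.norm(M)   # ~ 0
```

The statistical models emphasized in the article are what guarantee, in the realistic partially observed settings, that the initialization lands in a benign region where such gradient iterations provably converge.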
