
In the 1970s, Gy\H{o}ri and Lov\'{a}sz showed that for every $k$-connected $n$-vertex graph, any terminal vertices $t_1, \dots, t_k$, and any natural numbers $n_1, \dots, n_k$ satisfying $\sum_{i=1}^{k} n_i = n$, there exists a connected vertex partition $S_1, \dots, S_k$ with $t_i \in S_i$ and $|S_i| = n_i$. However, polynomial-time algorithms that actually compute such partitions are known so far only for $k \leq 4$. This motivates us to take a new approach and constrain the problem to particular graph classes instead of restricting the values of $k$. More precisely, we consider $k$-connected chordal graphs and a broader class of graphs related to them. For the former, we give an algorithm with $O(n^2)$ running time that solves the problem exactly, and for the latter, an algorithm with $O(n^4)$ running time that deviates by at most one vertex from the required partition sizes.
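
As a concrete illustration of the conditions in the theorem, the following sketch (our own, not from the paper; the function name and the use of networkx are assumptions) checks whether a candidate partition is a valid Gy\H{o}ri--Lov\'{a}sz partition:

```python
# Hypothetical checker for the Gyori-Lovasz conditions: the parts partition the
# vertex set, each part contains its terminal, has the prescribed size, and
# induces a connected subgraph.
import networkx as nx

def check_gl_partition(G, parts, terminals, sizes):
    covered = set().union(*parts)
    if covered != set(G.nodes) or sum(len(S) for S in parts) != G.number_of_nodes():
        return False  # the parts must be disjoint and cover all vertices
    for S, t, n in zip(parts, terminals, sizes):
        if t not in S or len(S) != n:
            return False  # terminal membership and size constraint
        if not nx.is_connected(G.subgraph(S)):
            return False  # each part must induce a connected subgraph
    return True
```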

Related Content

Stochastic kriging has been widely employed for simulation metamodeling to predict the response surface of complex simulation models. However, its use is limited to cases where the design space is low-dimensional because, in general, the sample complexity (i.e., the number of design points required for stochastic kriging to produce an accurate prediction) grows exponentially in the dimensionality of the design space. The large sample size results in both a prohibitive sample cost for running the simulation model and a severe computational challenge due to the need to invert large covariance matrices. Based on tensor Markov kernels and sparse grid experimental designs, we develop a novel methodology that dramatically alleviates the curse of dimensionality. We show that the sample complexity of the proposed methodology grows only slightly in the dimensionality, even under model misspecification. We also develop fast algorithms that compute stochastic kriging in its exact form without any approximation schemes. We demonstrate via extensive numerical experiments that our methodology can handle problems with a design space of more than 10,000 dimensions, improving both prediction accuracy and computational efficiency by orders of magnitude relative to typical alternative methods in practice.
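
For intuition, here is a minimal sketch (our own simplification, not the paper's implementation) of kriging prediction with a tensor Markov kernel, i.e., a product of one-dimensional Laplace kernels; it uses the naive $O(n^3)$ solve and omits the sparse-grid fast algorithms the paper develops:

```python
# Tensor Markov kernel: k(x, y) = prod_j exp(-theta * |x_j - y_j|).
import numpy as np

def tensor_markov_kernel(X, Y, theta=1.0):
    diff = np.abs(X[:, None, :] - Y[None, :, :])      # (n, m, d) pairwise coordinate gaps
    return np.exp(-theta * diff.sum(axis=2))          # product of 1-D Laplace kernels

def krige_predict(X, y, X_new, noise_var, theta=1.0):
    # Naive stochastic-kriging posterior mean; noise_var models the simulation noise.
    K = tensor_markov_kernel(X, X, theta) + noise_var * np.eye(len(X))
    k_new = tensor_markov_kernel(X_new, X, theta)
    return k_new @ np.linalg.solve(K, y)
```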

This thesis investigates the quality of randomly collected data by employing a framework built on information-based complexity, a field related to the numerical analysis of abstract problems. The quality or power of gathered information is measured by its radius, which is the uniform error obtainable by the best possible algorithm that uses it. The main aim is to present progress towards understanding the power of random information for approximation and integration problems.
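
For reference, in standard information-based-complexity notation (ours, not necessarily the thesis's), the radius of an information mapping $N$ for a solution operator $S$ on a problem class $F$ is
\[
  r(N) \;=\; \inf_{\varphi}\, \sup_{f \in F} \bigl\| S(f) - \varphi(N(f)) \bigr\|,
\]
i.e., the worst-case error of the best algorithm $\varphi$ that has access only to the information $N(f)$.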

Parallel-in-time methods for partial differential equations (PDEs) have been the subject of intense development over recent decades, particularly for diffusion-dominated problems. It has been widely reported in the literature, however, that many of these methods perform quite poorly for advection-dominated problems. Here we analyze the iterative parallel-in-time algorithm of multigrid reduction-in-time (MGRIT) for discretizations of constant-wave-speed linear advection problems. We focus on common method-of-lines discretizations that employ upwind finite differences in space and Runge-Kutta methods in time. Using a convergence framework we developed in previous work, we prove for a subclass of these discretizations that, if the fine-grid problem is simply rediscretized on the coarse grid, robust MGRIT convergence with respect to CFL number and coarsening factor is not possible. This poor convergence and non-robustness is caused, at least in part, by an inadequate coarse-grid correction of smooth Fourier modes known as characteristic components. We propose an alternative coarse-grid operator that provides a better correction of these modes. This operator is related to previous work and uses a semi-Lagrangian discretization combined with an implicitly treated truncation-error correction. Theory and numerical experiments show that the coarse-grid operator yields fast MGRIT convergence for many of the method-of-lines discretizations considered, including both implicit and explicit discretizations of high order.
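
As a minimal illustration (our own sketch with illustrative names, not the paper's code), two-level MGRIT with F-relaxation on a linear one-step problem $u_{n+1} = \Phi u_n$ coincides with the parareal iteration; here $\Phi$ is the fine propagator over one fine step, $\Psi$ the coarse propagator over one coarse step, and $m$ the coarsening factor:

```python
import numpy as np

def mgrit_two_level(Phi, Psi, u0, n_coarse, m, iters):
    Phi_m = np.linalg.matrix_power(Phi, m)            # fine propagation across one coarse interval
    U = [u0]
    for n in range(n_coarse):
        U.append(Psi @ U[n])                          # sequential coarse solve as initial guess
    for _ in range(iters):
        V = [u0]
        for n in range(n_coarse):                     # coarse-grid correction sweep
            V.append(Psi @ V[n] + Phi_m @ U[n] - Psi @ U[n])
        U = V
    return U                                          # iterates at the coarse time points
```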

The real world and its geographic objects are modeled and represented in different spatial databases. Each of these databases provides only a partial description (in space and time) of the geographic objects represented. Producers and users of these databases sometimes need to connect several of them for updates or comparisons. Much of the work in spatial database management focuses on matching such spatial databases, and more particularly network databases such as road networks. With regard to network data, one situation remains neglected: matching a linear database (with polyline objects) against a surface database (with polygon objects). Users nevertheless need to connect these two types of spatial database. In this paper, a case study is carried out on French examples (Cachan, near Paris) as well as international ones (Bordeaux in France, Victoria in Canada, and Copenhagen in Denmark) to propose an approach intended to make network geographical objects coherent across two different reference frames (linear and surface). The issue is addressed here through the example of road data. The surface data are formalized so as to adapt them to the linear data describing the same geographical objects. In the end, a polygon in the surface data corresponds to a single polyline in the linear data. This consistency should simplify the transfer of information from one reference frame (linear or surface) to the other. In other words, the methodology developed aims to make linear and surface geographic data interoperable.
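
As one hypothetical building block (our illustration; the paper's full matching approach is richer), a road polygon can be associated with the polyline whose buffered footprint overlaps it the most, e.g., with shapely:

```python
from shapely.geometry import LineString, Polygon

def match_polygon_to_polyline(polygon, polylines, buffer_width=10.0):
    """Return the index of the best-matching polyline, or None if none overlaps."""
    best, best_area = None, 0.0
    for i, line in enumerate(polylines):
        overlap = polygon.intersection(line.buffer(buffer_width)).area
        if overlap > best_area:                       # keep the largest overlap area
            best, best_area = i, overlap
    return best

# Example: a square road segment footprint matched against two candidate road axes.
square = Polygon([(0, 0), (10, 0), (10, 10), (0, 10)])
roads = [LineString([(-5, 5), (15, 5)]), LineString([(50, 0), (60, 0)])]
assert match_polygon_to_polyline(square, roads) == 0
```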

Spectral embedding finds vector representations of the nodes of a network, based on the eigenvectors of its adjacency or Laplacian matrix, and has found applications throughout the sciences. Many such networks are multipartite, meaning their nodes can be divided into groups and nodes of the same group are never connected. When the network is multipartite, this paper demonstrates that the node representations obtained via spectral embedding live near group-specific low-dimensional subspaces of a higher-dimensional ambient space. For this reason we propose a follow-on step after spectral embedding, to recover node representations in their intrinsic rather than ambient dimension, proving uniform consistency under a low-rank, inhomogeneous random graph model. Our method naturally generalizes bipartite spectral embedding, in which node representations are obtained by singular value decomposition of the biadjacency or bi-Laplacian matrix.
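
The two-step procedure can be sketched as follows (our simplification with illustrative names: adjacency spectral embedding into the ambient dimension $d$, followed by a group-wise SVD projection to an intrinsic dimension $r$):

```python
import numpy as np

def spectral_embed(A, d):
    vals, vecs = np.linalg.eigh(A)                    # A: symmetric adjacency matrix
    idx = np.argsort(np.abs(vals))[::-1][:d]          # d largest-magnitude eigenvalues
    return vecs[:, idx] * np.sqrt(np.abs(vals[idx]))  # ambient node representations

def recover_intrinsic(X_group, r):
    # Rows of X_group are one group's node embeddings; project them onto the
    # top-r right singular directions of their own (near low-dimensional) span.
    _, _, Vt = np.linalg.svd(X_group, full_matrices=False)
    return X_group @ Vt[:r].T                         # group-specific r-dim representation
```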

We consider polynomial optimization problems (POP) on a semialgebraic set contained in the nonnegative orthant (every POP on a compact set can be put in this format by a simple translation of the origin). Such a POP can be converted to an equivalent POP by squaring each variable. Using even symmetry and the concept of factor width, we propose a hierarchy of semidefinite relaxations based on the extension of P\'olya's Positivstellensatz by Dickinson-Povh. As its distinguishing and crucial feature, the maximal matrix size of each resulting semidefinite relaxation can be chosen arbitrarily and in addition, we prove that the sequence of values returned by the new hierarchy converges to the optimal value of the original POP at the rate $O(\varepsilon^{-c})$ if the semialgebraic set has nonempty interior. When applied to (i) robustness certification of multi-layer neural networks and (ii) computation of positive maximal singular values, our method based on P\'olya's Positivstellensatz provides better bounds and runs several hundred times faster than the standard Moment-SOS hierarchy.
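
Concretely, the conversion step can be written as follows (our notation): for a POP over a set $S$ contained in the nonnegative orthant, squaring each variable gives the equivalent even POP
\[
  \min_{x \in S \subseteq \mathbb{R}^n_{\geq 0}} f(x)
  \;=\;
  \min_{z \in \mathbb{R}^n} \bigl\{\, f(z_1^2, \dots, z_n^2) \;:\; (z_1^2, \dots, z_n^2) \in S \,\bigr\},
\]
and it is this even symmetry that the factor-width-based relaxations then exploit.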

Graphs in many applications, such as social networks and IoT, are inherently streaming, involving continuous additions and deletions of vertices and edges at high rates. Constructing random walks in a graph, i.e., sequences of vertices selected with a specific probability distribution, is a prominent task in many of these graph applications as well as in machine learning (ML) on graph-structured data. In a streaming scenario, random walks need to constantly keep up with the graph updates to avoid stale walks and, thus, performance degradation in the downstream tasks. We present Wharf, a system that efficiently stores and updates random walks on streaming graphs. It avoids a potential size explosion by maintaining a compressed, high-throughput, and low-latency data structure. It achieves (i) a succinct representation by coupling compressed purely functional binary trees with pairing functions for storing the walks, and (ii) efficient walk updates by effectively pruning the walk search space. We evaluate Wharf, on real and synthetic graphs, in terms of throughput and latency when updating random walks. The results show that Wharf substantially outperforms inverted-index- and tree-based baselines.
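
For illustration (our own sketch; Wharf's actual encoding details may differ), a pairing function such as Szudzik's bijectively packs a pair of nonnegative integers, e.g., a walk identifier and a position within the walk, into a single integer key:

```python
def szudzik_pair(a, b):
    # Bijection N x N -> N; distinguishes (a, b) from (b, a).
    return a * a + a + b if a >= b else b * b + a

def szudzik_unpair(z):
    s = int(z ** 0.5)
    while (s + 1) * (s + 1) <= z:                     # guard against float rounding
        s += 1
    r = z - s * s
    return (r, s) if r < s else (s, r - s)

assert szudzik_unpair(szudzik_pair(7, 42)) == (7, 42)
```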

Fourier phase retrieval, which seeks to reconstruct a signal from its Fourier magnitude, is of fundamental importance in engineering and science. In this paper, we give a theoretical understanding of algorithms for Fourier phase retrieval. In particular, we show that if there exists an algorithm that can reconstruct an arbitrary signal ${\mathbf x}\in {\mathbb C}^N$ in $\mbox{Poly}(N) \log(1/\epsilon)$ time to $\epsilon$-precision from the magnitude of its discrete Fourier transform and its initial value $x(0)$, then $\mathcal{P}=\mathcal{NP}$. This demystifies the phenomenon that, although almost all signals are determined uniquely by their Fourier magnitude under prior conditions, no algorithm with theoretical guarantees has been proposed over the past few decades. Our proofs employ the result from computational complexity theory that the Product Partition problem is NP-complete in the strong sense.
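
The hardness result concerns worst-case guarantees; in practice, heuristics such as alternating projections in the Gerchberg-Saxton style remain common. A minimal sketch (our own, with a random start and no convergence guarantee), assuming a real-valued signal:

```python
import numpy as np

def phase_retrieval_heuristic(mag, iters=500, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(len(mag))                 # random real-valued start
    for _ in range(iters):
        X = np.fft.fft(x)
        X = mag * np.exp(1j * np.angle(X))            # impose the measured magnitudes
        x = np.real(np.fft.ifft(X))                   # impose the real-valued constraint
    return x
```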

High-order meshes provide a more accurate geometrical approximation of an object's boundary (where stress usually concentrates, especially in the presence of contacts) than linear elements, at a negligible additional cost when used in a finite element simulation. High-order bases provide major advantages over linear ones in terms of efficiency, as they provide (for the same physical model) higher accuracy for the same running time, and in terms of reliability, as they are less affected by locking artifacts and mesh quality. We therefore introduce a high-order finite element formulation (high-order bases) for elastodynamic simulation on high-order (curved) meshes with contact handling based on the recently proposed Incremental Potential Contact (IPC) model. Our approach is based on the observation that each IPC optimization step used to minimize the elasticity, contact, and friction potentials leads to linear trajectories even in the presence of non-linear meshes or non-linear finite element bases. It is thus possible to retain the strong non-penetration guarantees and large time steps of the original formulation while benefiting from the high-order bases and high-order geometry. Additionally, we show that collision proxies can be naturally incorporated into this formulation. We demonstrate the effectiveness of our approach on a selection of problems from graphics, computational fabrication, and scientific computing.
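
For context, the smoothly clamped log-barrier at the heart of the IPC model (the formula is from the original IPC paper; the code itself is our sketch) activates only when the unsigned distance $d$ between surface primitives drops below a threshold $\hat{d}$:

```python
import math

def ipc_barrier(d, dhat):
    if d <= 0.0:
        return math.inf                               # penetration is never allowed
    if d >= dhat:
        return 0.0                                    # inactive beyond the threshold
    return -((d - dhat) ** 2) * math.log(d / dhat)    # grows to infinity as d -> 0
```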

The aim of this work is to develop a fully distributed algorithmic framework for training graph convolutional networks (GCNs). The proposed method is able to exploit the meaningful relational structure of the input data, which are collected by a set of agents that communicate over a sparse network topology. After formulating the centralized GCN training problem, we first show how to perform inference in a distributed scenario where the underlying data graph is split among different agents. Then, we propose a distributed gradient descent procedure to solve the GCN training problem. The resulting model distributes computation along three lines: during inference, during back-propagation, and during optimization. Convergence to stationary solutions of the GCN training problem is also established under mild conditions. Finally, we propose an optimization criterion to design the communication topology between agents so that it matches the graph describing the data relationships. A wide set of numerical results validates our proposal. To the best of our knowledge, this is the first work to combine graph convolutional neural networks with distributed optimization.
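
Schematically (our illustration, not the paper's algorithm), a single GCN layer split across agents that each own a row-block of the normalized adjacency matrix looks as follows; in a real deployment the loop runs in parallel and agents first exchange the neighbor features they are missing:

```python
import numpy as np

def distributed_gcn_layer(A_blocks, H, W):
    """A_blocks: per-agent row-blocks of the normalized adjacency; H: features; W: shared weights."""
    outputs = []
    for A_i in A_blocks:                              # one iteration per agent
        H_i = A_i @ H @ W                             # local aggregation + shared linear map
        outputs.append(np.maximum(H_i, 0.0))          # ReLU nonlinearity
    return np.vstack(outputs)                         # stacked row-blocks = full layer output
```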
