
We propose a geometric structure induced by any given convex polygon $P$, called $Nest(P)$, which is an arrangement of $\Theta(n^2)$ line segments, each parallel to an edge of $P$, where $n$ denotes the number of edges of $P$. We then deduce six nontrivial properties of $Nest(P)$ from the convexity of $P$ and the parallelism of the line segments in $Nest(P)$. Among other results, we show that $Nest(P)$ is a subdivision of the exterior of $P$ and that its inner boundary interleaves the boundary of $P$. Together, these properties show that $Nest(P)$ interacts surprisingly well with the boundary of $P$. Furthermore, we study some computational problems on $Nest(P)$. In particular, we consider three kinds of location queries on $Nest(P)$ and answer each of them in (amortized) $O(\log^2 n)$ time. Our algorithm for answering these queries avoids an explicit construction of $Nest(P)$, which would take $\Omega(n^2)$ time. By applying all six properties together, we find that the geometric optimization problem of finding the maximum-area parallelogram(s) in $P$ can be reduced to answering $O(n)$ such location queries, and can thus be solved in $O(n\log^2 n)$ time. This application will be reported in a subsequent paper.
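To make the flavor of such location queries concrete, the following is a minimal sketch (with hypothetical function names, not taken from the paper) of the standard $O(\log n)$ binary-search point-location test for a convex polygon given in counter-clockwise order; the queries on $Nest(P)$ operate on the arrangement rather than on $P$ itself, but they rely on the same kind of logarithmic search.

```python
# Minimal sketch: O(log n) point-in-convex-polygon test via binary search.
# This only illustrates the flavor of logarithmic location queries; the
# queries on Nest(P) in the paper operate on the arrangement, not on P.

def cross(o, a, b):
    """Signed area of triangle (o, a, b); > 0 iff b lies to the left of ray o->a."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def inside_convex(poly, q):
    """poly: list of vertices in counter-clockwise order, len(poly) = n >= 3."""
    n = len(poly)
    # q must lie in the wedge spanned by the first and last fan edges.
    if cross(poly[0], poly[1], q) < 0 or cross(poly[0], poly[n - 1], q) > 0:
        return False
    # Binary search for the fan triangle (poly[0], poly[lo], poly[lo+1]) containing q.
    lo, hi = 1, n - 1
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if cross(poly[0], poly[mid], q) >= 0:
            lo = mid
        else:
            hi = mid
    return cross(poly[lo], poly[hi], q) >= 0

# Example: unit square, counter-clockwise.
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(inside_convex(square, (0.5, 0.5)))  # True
print(inside_convex(square, (1.5, 0.5)))  # False
```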

Related content

In this paper we provide more details on the numerical analysis, and we present some enlightening numerical results related to the spectrum of a finite element least-squares approximation of a recently introduced linear elasticity formulation. We show that, although the formulation is robust in the incompressible limit for the source problem, its spectrum is strongly dependent on the Lam\'e parameters and on the underlying mesh.

We consider the problem of designing fundamental graph algorithms in the model of Massive Parallel Computation (MPC). The input is an undirected graph $G$ with $n$ vertices and $m$ edges, and with $D$ being the maximum diameter of any connected component in $G$. We consider the MPC model with low local space, allowing each machine to store only $\Theta(n^\delta)$ words for an arbitrary constant $\delta > 0$, and with linear global space (which equals the number of machines times the local space available), that is, with optimal utilization. In a recent breakthrough, Andoni et al. (FOCS 18) and Behnezhad et al. (FOCS 19) designed parallel randomized algorithms that, in $O(\log D + \log \log n)$ rounds on an MPC with low local space, determine all connected components of an input graph, improving upon the classic bound of $O(\log n)$ derived from earlier works on PRAM algorithms. In this paper, we show that asymptotically identical bounds can also be achieved by deterministic algorithms: we present a deterministic low-local-space MPC algorithm that in $O(\log D + \log \log n)$ rounds determines all connected components of the input graph.
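For comparison with the round bounds above, the sketch below simulates the simplest round-based approach to connected components, min-label propagation, in which every vertex repeatedly adopts the smallest label in its closed neighborhood. It stabilizes only after $O(D)$ rounds, so it is not the algorithm of this paper; it merely makes the round-by-round model concrete (all names are illustrative).

```python
# Sketch (not the paper's algorithm): naive min-label propagation for connected
# components, simulated round by round. Each round, every vertex adopts the
# minimum label among itself and its neighbors; this stabilizes after O(D)
# rounds, which is much weaker than the O(log D + log log n) bound above.

def connected_components_by_rounds(n, edges):
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    label = list(range(n))
    rounds = 0
    changed = True
    while changed:
        changed = False
        new_label = label[:]
        for v in range(n):
            best = min([label[v]] + [label[u] for u in adj[v]])
            if best < new_label[v]:
                new_label[v] = best
                changed = True
        label = new_label
        rounds += 1
    return label, rounds

# Two components: a path 0-1-2-3 and an isolated edge 4-5.
labels, rounds = connected_components_by_rounds(6, [(0, 1), (1, 2), (2, 3), (4, 5)])
print(labels, rounds)  # [0, 0, 0, 0, 4, 4] after a few rounds
```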

Recent years have witnessed a rapid growth of distributed machine learning (ML) frameworks, which exploit the massive parallelism of computing clusters to expedite ML training. However, the proliferation of distributed ML frameworks also introduces many unique technical challenges in computing system design and optimization. In a networked computing cluster that supports a large number of training jobs, a key question is how to design efficient scheduling algorithms to allocate workers and parameter servers across different machines to minimize the overall training time. Toward this end, in this paper, we develop an online scheduling algorithm that jointly optimizes resource allocation and locality decisions. Our main contributions are three-fold: i) We develop a new analytical model that considers both resource allocation and locality; ii) Based on an equivalent reformulation and observations on the worker-parameter server locality configurations, we transform the problem into a mixed packing and covering integer program, which enables approximation algorithm design; iii) We propose a meticulously designed approximation algorithm based on randomized rounding and rigorously analyze its performance. Collectively, our results contribute to the state of the art of distributed ML system optimization and algorithm design.
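The algorithm outlined above relies on randomized rounding of a mixed packing-and-covering program; the toy sketch below (generic, not the paper's algorithm, with all names illustrative) shows the basic rounding step for a fractional covering solution: each indicator is set independently with probability proportional to its fractional value, and the rounding is repeated until every covering constraint is satisfied.

```python
import random

# Generic randomized-rounding sketch for a fractional set-cover-style solution
# (illustrative only; the paper's algorithm rounds a mixed packing/covering
# program for worker/parameter-server placement and is more involved).

def round_fractional_cover(x_frac, constraints, inflation=2.0, seed=0):
    """x_frac: fractional values in [0, 1]; constraints: list of index sets,
    each of which must contain at least one chosen index. Repeats independent
    rounding with inflated probabilities until every constraint is covered."""
    rng = random.Random(seed)
    while True:
        chosen = {i for i, x in enumerate(x_frac)
                  if rng.random() < min(1.0, inflation * x)}
        if all(chosen & set(c) for c in constraints):
            return chosen

x = [0.5, 0.5, 0.1, 0.9]
cover = round_fractional_cover(x, [[0, 1], [2, 3]])
print(cover)
```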

We show that the decision problem of determining whether a given (abstract simplicial) $k$-complex has a geometric embedding in $\mathbb R^d$ is complete for the Existential Theory of the Reals for all $d\geq 3$ and $k\in\{d-1,d\}$. This implies that the problem is polynomial-time equivalent to determining whether a polynomial equation system has a real root. Moreover, it implies NP-hardness and constitutes the first hardness result for the algorithmic problem of geometrically embedding (abstract simplicial) complexes.
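For concreteness, a toy instance (not taken from the paper) of the equivalent question, whether a polynomial equation system has a real root, is the system

$$x^2 + y^2 = 1, \qquad xy = 1,$$

which has no real root, since $x^2 + y^2 \ge 2|xy| = 2 > 1$ for all real $x, y$, although it does admit complex solutions.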

The work explores a specific scenario for structural computational optimization based on the following elements: (a) a relaxed optimization setting considering the ersatz (bi-material) approximation, (b) a treatment based on a non-smoothed characteristic function field as the topological design variable, (c) the consistent derivation of a relaxed topological derivative whose determination is simple, general and efficient, (d) the formulation of the topological sensitivity of the overall (increasing) cost function as a suitable optimality criterion, and (e) the consideration of a pseudo-time framework for the problem solution, ruled by the evolution of the problem constraint. In this setting, it is shown that the optimization problem can be solved analytically in a variational framework, leading to nonlinear, closed-form algebraic solutions for the characteristic function. These are then solved, in every time step, via fixed-point methods based on a pseudo-energy cutting algorithm, combined with the exact fulfillment of the constraint at every iteration of the nonlinear algorithm via a bisection method. The ill-posedness (mesh dependency) of the topological solution is then easily resolved via a Laplacian smoothing of that pseudo-energy. In this context, a number of (3D) topological structural optimization benchmarks are solved, and the solutions obtained with the explored closed-form solution method are analyzed and compared with those obtained by an alternative level-set method. Although the results of the two methods are very similar in terms of cost function values and topology designs, the associated computational cost is about five times smaller for the closed-form solution method, which is possibly one of its advantages.
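The constraint-fulfillment step described above, cutting the pseudo-energy field at a level found by bisection so that the volume constraint is met exactly, can be illustrated with a minimal self-contained sketch; the variable names and setup are hypothetical, and the actual relaxed-topological-derivative update in the paper is considerably richer.

```python
import numpy as np

# Minimal sketch of the "cut-and-bisect" step: find the threshold on a
# pseudo-energy field so that the resulting characteristic function meets a
# prescribed volume fraction. Names and setup are illustrative only.

def cut_by_bisection(psi, target_fraction, tol=1e-3, max_iter=100):
    """psi: pseudo-energy values on the elements; returns (chi, threshold)
    with chi = 1 where psi exceeds the threshold, matching target_fraction."""
    lo, hi = psi.min(), psi.max()
    for _ in range(max_iter):
        mid = 0.5 * (lo + hi)
        chi = (psi > mid).astype(float)
        frac = chi.mean()
        if abs(frac - target_fraction) < tol:
            break
        if frac > target_fraction:
            lo = mid      # too much material: raise the cutting level
        else:
            hi = mid      # too little material: lower the cutting level
    return chi, mid

rng = np.random.default_rng(0)
psi = rng.random(10_000)
chi, level = cut_by_bisection(psi, target_fraction=0.4)
print(level, chi.mean())  # threshold near 0.6, volume fraction near 0.4
```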

Location Routing is a fundamental planning problem in logistics, in which strategic location decisions on the placement of facilities (depots, distribution centers, warehouses, etc.) are taken based on accurate estimates of operational routing costs. We present an approximation algorithm, i.e., an algorithm with proven worst-case guarantees both in terms of running time and solution quality, for the general capacitated version of this problem, in which both vehicles and facilities are capacitated. Before now, such algorithms were known only for the special case where facilities are uncapacitated or where their capacities can be extended arbitrarily at linear cost. Previously established lower bounds that are known to approximate the optimal solution value well in the uncapacitated case can be off by an arbitrary factor in the general case. We show that this issue can be overcome by a bifactor approximation algorithm that may slightly exceed facility capacities by an adjustable, arbitrarily small margin while approximating the optimal cost by a constant factor. In addition to these proven worst-case guarantees, we also assess the practical performance of our algorithm in a comprehensive computational study, showing that the approach allows efficient computation of near-optimal solutions for instance sizes beyond the reach of current state-of-the-art heuristics.
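Stated formally (our paraphrase of the guarantee described above, with symbols introduced here only for illustration), a bifactor approximation in this sense returns a solution $\mathrm{SOL}$ such that

$$\mathrm{cost}(\mathrm{SOL}) \le \alpha \cdot \mathrm{cost}(\mathrm{OPT}) \qquad \text{and} \qquad \mathrm{load}(f) \le (1+\varepsilon)\, u_f \ \text{ for every opened facility } f,$$

where $u_f$ is the capacity of facility $f$, $\varepsilon > 0$ is the adjustable margin, and $\alpha$ is a constant factor (which may depend on $\varepsilon$).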

In 1991, Craig Gotsman and Nathan Linial conjectured that for all $n$ and $d$, the average sensitivity of a degree-$d$ polynomial threshold function on $n$ variables is maximized by the degree-$d$ symmetric polynomial which computes the parity function on the $d$ layers of the hypercube with Hamming weight closest to $n/2$. We refute the conjecture for almost all $d$ and for almost all $n$, and we confirm the conjecture in many of the remaining cases.
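As a reference point for the quantity in the conjecture, the following brute-force sketch (exponential in $n$, purely illustrative) computes the average sensitivity of a Boolean function, i.e., the expected number of coordinates whose flip changes the function value at a uniformly random input.

```python
from itertools import product

# Brute-force average sensitivity of f: {0,1}^n -> {0,1}: the expected number
# of coordinates whose flip changes f(x) when x is uniform. Exponential in n;
# only meant to make the definition concrete.

def average_sensitivity(f, n):
    total = 0
    for x in product((0, 1), repeat=n):
        for i in range(n):
            y = list(x)
            y[i] ^= 1
            if f(x) != f(tuple(y)):
                total += 1
    return total / 2 ** n

parity = lambda x: sum(x) % 2            # every flip changes the value, so AS = n
majority = lambda x: int(sum(x) * 2 > len(x))

print(average_sensitivity(parity, 5))    # 5.0
print(average_sensitivity(majority, 5))  # 1.875 (= 5 * C(4,2) / 2^4)
```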

Graph Convolutional Networks (GCNs) have recently become the primary choice for learning from graph-structured data, superseding hash fingerprints in representing chemical compounds. However, GCNs lack the ability to take into account the ordering of node neighbors, even when there is a geometric interpretation of the graph vertices that provides an order based on their spatial positions. To remedy this issue, we propose the Geometric Graph Convolutional Network (geo-GCN), which uses spatial features to efficiently learn from graphs that can be naturally located in space. Our contribution is threefold: we propose a GCN-inspired architecture which (i) leverages node positions, (ii) is a proper generalisation of both GCNs and Convolutional Neural Networks (CNNs), and (iii) benefits from augmentation, which further improves performance and ensures invariance with respect to the desired properties. Empirically, geo-GCN outperforms state-of-the-art graph-based methods on image classification and chemical tasks.
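A generic sketch of one way a convolution layer can exploit node positions is given below; it is not the authors' geo-GCN layer, and all names are illustrative. Neighbor features are weighted by a simple learned score of the relative position before aggregation, followed by a shared linear transform and a ReLU.

```python
import numpy as np

# Illustrative position-aware aggregation layer (NOT the geo-GCN of the paper):
# each node aggregates neighbor features weighted by a learned linear score of
# the relative position, then applies a shared linear transform and a ReLU.

def position_aware_conv(X, pos, adj, W, u, bias=0.0):
    """X: (n, f) node features; pos: (n, 2) coordinates; adj: (n, n) 0/1
    adjacency with self-loops; W: (f, f_out) weights; u: (2,) position weights."""
    n = X.shape[0]
    out = np.zeros((n, W.shape[1]))
    for i in range(n):
        neigh = np.nonzero(adj[i])[0]
        rel = pos[neigh] - pos[i]                         # relative positions
        scores = rel @ u + bias                           # per-neighbor score
        weights = np.exp(scores) / np.exp(scores).sum()   # softmax over neighbors
        out[i] = (weights[:, None] * X[neigh]).sum(axis=0) @ W
    return np.maximum(out, 0.0)                           # ReLU

rng = np.random.default_rng(0)
n, f, f_out = 5, 3, 4
X, pos = rng.normal(size=(n, f)), rng.normal(size=(n, 2))
adj = np.ones((n, n))                                     # toy fully connected graph
W, u = rng.normal(size=(f, f_out)), rng.normal(size=2)
print(position_aware_conv(X, pos, adj, W, u).shape)       # (5, 4)
```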

We show that for the problem of testing if a matrix $A \in F^{n \times n}$ has rank at most $d$, or requires changing an $\epsilon$-fraction of entries to have rank at most $d$, there is a non-adaptive query algorithm making $\widetilde{O}(d^2/\epsilon)$ queries. Our algorithm works for any field $F$. This improves upon the previous $O(d^2/\epsilon^2)$ bound (SODA'03), and bypasses an $\Omega(d^2/\epsilon^2)$ lower bound from (KDD'14) which holds if the algorithm is required to read a submatrix. Our algorithm is the first such algorithm which does not read a submatrix, and instead reads a carefully selected non-adaptive pattern of entries in rows and columns of $A$. We complement our algorithm with a matching query complexity lower bound for non-adaptive testers over any field. We also give tight bounds of $\widetilde{\Theta}(d^2)$ queries in the sensing model, in which query access comes in the form of $\langle X_i, A\rangle := \mathrm{tr}(X_i^\top A)$; perhaps surprisingly, these bounds do not depend on $\epsilon$. We next develop a novel property testing framework for testing numerical properties of a real-valued matrix $A$ more generally, which includes the stable rank, Schatten-$p$ norms, and SVD entropy. Specifically, we propose a bounded entry model, where $A$ is required to have entries bounded by $1$ in absolute value. We give upper and lower bounds for a wide range of problems in this model, and discuss connections to the sensing model above.
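To make the query model concrete, the sketch below implements the naive submatrix-based tester that the non-adaptive algorithm above bypasses: sample a random set of rows and columns, read the induced submatrix, and accept iff its rank is at most $d$. The sample size and the use of real-valued matrices are illustrative only and are not tuned to any particular guarantee.

```python
import numpy as np

# Naive "read a submatrix" tester (the baseline the paper's algorithm bypasses):
# sample s random rows and columns, read the induced submatrix, and reject if
# its rank exceeds d. The sample size below is illustrative, not a tuned bound.

def naive_rank_tester(A, d, eps, seed=0):
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    s = min(n, int(np.ceil(2 * (d + 1) / eps)))    # illustrative sample size
    rows = rng.choice(n, size=s, replace=False)
    cols = rng.choice(n, size=s, replace=False)
    sub = A[np.ix_(rows, cols)]
    return np.linalg.matrix_rank(sub) <= d          # True = "accept"

rng = np.random.default_rng(1)
low = rng.normal(size=(100, 2)) @ rng.normal(size=(2, 100))   # rank 2
full = rng.normal(size=(100, 100))                            # full rank w.h.p.
print(naive_rank_tester(low, d=2, eps=0.1))    # True: consistent with rank <= 2
print(naive_rank_tester(full, d=2, eps=0.1))   # False with high probability
```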

Because of continuous advances in mathematical programming, Mixed Integer Optimization has become a competitive alternative to popular regularization methods for feature selection in regression problems. The approach exhibits unquestionable foundational appeal and versatility, but also poses important challenges. We tackle these challenges, reducing the computational burden of tuning the sparsity bound (a parameter that is critical for effectiveness) and improving performance in the presence of feature collinearity and of signals that vary in nature and strength. Importantly, we render the approach efficient and effective in applications of realistic size and complexity, without resorting to relaxations or heuristics in the optimization, or abandoning rigorous cross-validation tuning. Computational viability and improved performance in subtler scenarios are achieved with a multi-pronged blueprint, leveraging characteristics of the Mixed Integer Programming framework and employing whitening, a data pre-processing step.
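To fix ideas, one standard Mixed Integer (quadratic) formulation of sparse regression with sparsity bound $k$ is stated below; the paper's formulation and enhancements may differ in details. It uses binary indicators $z_j$ and a big-$M$ constant bounding the coefficient magnitudes:

$$\min_{\beta \in \mathbb{R}^p,\; z \in \{0,1\}^p} \tfrac12 \|y - X\beta\|_2^2 \quad \text{s.t.} \quad |\beta_j| \le M z_j \ \ (j = 1, \dots, p), \qquad \sum_{j=1}^p z_j \le k,$$

so that tuning the sparsity bound $k$ amounts to re-solving the program over a grid of candidate values, which is the computational burden the approach seeks to reduce.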
