
In this paper, we consider the problem of partitioning a polygon into a set of connected disjoint sub-polygons, each of which covers an area of a specific size. The work is motivated by terrain-covering applications in robotics, where the goal is to find a set of efficient plans for a team of heterogeneous robots to cover a given area. Within this application, solving a polygon partitioning problem is an essential stepping stone. Unlike previous work, the problem formulation proposed in this paper also considers a compactness metric of the generated sub-polygons, in addition to the area size constraints. Maximizing the compactness of sub-polygons directly influences the optimality of any generated motion plans. Consequently, this increases the efficiency with which robotic tasks can be performed within each sub-region. The proposed problem representation is based on grid cell decomposition and a potential field model that allows for the use of standard optimization techniques. A new algorithm, the AreaDecompose algorithm, is proposed to solve this problem. The algorithm includes a number of existing and new optimization techniques combined with two post-processing methods. The approach has been evaluated on a set of randomly generated polygons, which are then divided using different criteria, and the results have been compared with a state-of-the-art algorithm. Results show that the proposed algorithm can efficiently divide polygon regions while maximizing the compactness of the resulting partitions: the sub-polygon regions are on average up to 73% more compact than those produced by existing techniques.
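
As a rough illustration of the grid-cell formulation, the sketch below grows connected regions from seed cells over a rasterized polygon, always expanding whichever region is furthest below its requested area. It is a minimal stand-in under assumed inputs (a rectangular mask, hypothetical seed cells and area fractions); the paper's potential-field model, compactness objective, and post-processing methods are not reproduced here.

```python
from collections import deque
import numpy as np

def partition_grid(mask, seeds, quotas):
    """Grow connected regions from seed cells over the True cells of `mask`,
    trying to match each region's target cell count in `quotas`."""
    h, w = mask.shape
    label = -np.ones((h, w), dtype=int)
    frontiers = []
    for r, (y, x) in enumerate(seeds):
        label[y, x] = r
        frontiers.append(deque([(y, x)]))
    remaining = [q - 1 for q in quotas]           # seed cell already assigned
    while any(frontiers):
        # expand the region that is furthest below its area quota
        r = max(range(len(seeds)),
                key=lambda i: remaining[i] if frontiers[i] else -np.inf)
        y, x = frontiers[r].popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and label[ny, nx] < 0:
                label[ny, nx] = r
                remaining[r] -= 1
                frontiers[r].append((ny, nx))
    return label

mask = np.ones((20, 30), dtype=bool)              # stand-in for a rasterized polygon
seeds = [(2, 2), (17, 27), (2, 27)]               # hypothetical seed cells
fractions = [0.5, 0.3, 0.2]                       # requested area fractions
quotas = [int(f * mask.sum()) for f in fractions]
labels = partition_grid(mask, seeds, quotas)
for r in range(3):
    print(r, (labels == r).sum(), "cells, target", quotas[r])
```

Breadth-first growth keeps each region connected and roughly round, which is in the spirit of, though not equivalent to, optimizing a compactness metric.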

Related Content

Computing sample means on Riemannian manifolds is typically computationally costly. The Fr\'echet mean offers a generalization of the Euclidean mean to general metric spaces, particularly to Riemannian manifolds. Evaluating the Fr\'echet mean numerically on Riemannian manifolds requires the computation of geodesics for each sample point. When no closed-form expression for geodesics exists, an optimization-based approach is employed. In geometric deep learning, particularly Riemannian convolutional neural networks, a weighted Fr\'echet mean enters each layer of the network, potentially requiring an optimization in each layer. The weighted diffusion mean offers an alternative weighted sample mean estimator on Riemannian manifolds that does not require the computation of geodesics. Instead, we present a simulation scheme to sample guided diffusion bridges on a product manifold conditioned to intersect at a predetermined time. Such a conditioning is non-trivial since, in general, manifolds cannot be covered by a single chart. Exploiting the exponential chart, the conditioning can be made similar to that in the Euclidean setting.
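
For reference, the geodesic-based baseline that the diffusion mean avoids is explicit on the sphere, where the exp and log maps have closed forms. The sketch below runs Riemannian gradient descent for the Fr\'echet mean on S^2 with simulated data; it is a minimal illustration of the baseline only, not the paper's guided-bridge simulation scheme.

```python
import numpy as np

def sphere_log(x, y):
    """Log map on the unit sphere: tangent vector at x pointing toward y."""
    c = np.clip(x @ y, -1.0, 1.0)
    theta = np.arccos(c)
    if theta < 1e-12:
        return np.zeros_like(x)
    v = y - c * x
    return theta * v / np.linalg.norm(v)

def sphere_exp(x, v):
    """Exp map on the unit sphere: follow the geodesic from x along v."""
    n = np.linalg.norm(v)
    if n < 1e-12:
        return x
    return np.cos(n) * x + np.sin(n) * (v / n)

def frechet_mean(points, steps=100, lr=0.5):
    """Riemannian gradient descent for the Frechet mean on S^2."""
    x = points[0] / np.linalg.norm(points[0])
    for _ in range(steps):
        grad = np.mean([sphere_log(x, p) for p in points], axis=0)
        x = sphere_exp(x, lr * grad)
    return x

rng = np.random.default_rng(0)
pts = rng.normal(size=(50, 3)) + np.array([0.0, 0.0, 5.0])
pts /= np.linalg.norm(pts, axis=1, keepdims=True)   # samples near the north pole
print(frechet_mean(pts))
```

Each iteration computes one log map per sample; on manifolds without closed-form geodesics, each of those log maps would itself be an optimization problem, which is exactly the cost the diffusion-mean estimator sidesteps.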

In the Geometric Median problem with outliers, we are given a finite set of points in d-dimensional real space and an integer m; the goal is to locate a new point in space (the center) and choose m of the input points so as to minimize the sum of the Euclidean distances from the center to the chosen points. This problem can be solved "almost exactly" in polynomial time if d is fixed and admits a polynomial-time approximation scheme (PTAS) in high dimensions. However, the complexity of the problem was an open question. We prove that, if the dimension of the space is not fixed, Geometric Median with outliers is strongly NP-hard, does not admit an FPTAS unless P=NP, and is W[1]-hard with respect to the parameter m. The proof is by a reduction from the Independent Set problem. Based on a similar reduction, we also obtain NP-hardness of closely related geometric 2-clustering problems in which a given set of points must be partitioned into two balanced clusters minimizing the cost of median clustering. Finally, we study Geometric Median with outliers in $\ell_\infty$ space and prove the same complexity results as for the Euclidean problem.
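
The problem itself is easy to state in code. The sketch below is a simple alternating heuristic, not the paper's complexity analysis: it alternates between keeping the m points nearest the current center and re-centering them with Weiszfeld iterations for the geometric median. All data and parameters are illustrative.

```python
import numpy as np

def weiszfeld(points, iters=200, eps=1e-9):
    """Weiszfeld iterations for the geometric median of `points`."""
    c = points.mean(axis=0)
    for _ in range(iters):
        d = np.linalg.norm(points - c, axis=1)
        w = 1.0 / np.maximum(d, eps)
        c = (points * w[:, None]).sum(axis=0) / w.sum()
    return c

def median_with_outliers(points, m, rounds=20):
    """Alternate between choosing the m nearest points and re-centering."""
    c = points.mean(axis=0)
    for _ in range(rounds):
        d = np.linalg.norm(points - c, axis=1)
        chosen = points[np.argsort(d)[:m]]
        c = weiszfeld(chosen)
    return c, chosen

rng = np.random.default_rng(1)
inliers = rng.normal(size=(90, 5))
outliers = rng.normal(loc=20.0, size=(10, 5))
X = np.vstack([inliers, outliers])
center, kept = median_with_outliers(X, m=90)
print(np.linalg.norm(kept - center, axis=1).sum())   # cost of the chosen points
```

The hardness results in the abstract concern exactly the coupled choice of center and m points; the heuristic above can get stuck in local optima, consistent with the problem being strongly NP-hard in high dimensions.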

One of the most pressing problems in modern analysis is the study of the growth rate of the norms of all possible matrix products $\|A_{i_{n}}\cdots A_{i_{0}}\|$ with factors from a set of matrices $\mathscr{A}$. So far, only for a relatively small number of classes of matrices $\mathscr{A}$ has it been possible to rigorously describe the sequences of matrices $\{A_{i_{n}}\}$ that guarantee the maximal growth rate of the corresponding norms. Moreover, in almost all theoretically studied cases, the index sequences $\{i_{n}}\}$ of matrices maximizing the norms of the corresponding matrix products turned out to be periodic or so-called Sturmian sequences, which entails a whole set of ``good'' properties of the sequences $\{A_{i_{n}}\}$, in particular the existence of a limiting frequency of occurrence of each matrix factor $A_{i}\in\mathscr{A}$ in them. The paper exhibits a class of $2\times 2$ matrices, consisting of two matrices each similar to a rotation of the plane, for which the sequence $\{A_{i_{n}}\}$ maximizing the growth rate of the norms $\|A_{i_{n}}\cdots A_{i_{0}}\|$ is not Sturmian. All considerations are based on numerical modeling and cannot be considered mathematically rigorous in this part; rather, they should be interpreted as a set of questions for further comprehensive theoretical analysis.
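
Numerical experiments of the kind the abstract refers to can be emulated by brute-force enumeration. The sketch below builds a hypothetical pair of 2x2 matrices similar to plane rotations (the paper's specific matrix class is not reproduced) and exhaustively searches for the index sequence maximizing the spectral norm of the product, so that one can inspect whether the maximizing sequences look periodic or Sturmian.

```python
import itertools
import numpy as np

def similar_to_rotation(angle, S):
    """A 2x2 matrix similar to a plane rotation: S R(angle) S^{-1}."""
    c, s = np.cos(angle), np.sin(angle)
    R = np.array([[c, -s], [s, c]])
    return S @ R @ np.linalg.inv(S)

# hypothetical pair of matrices; illustrative parameters only
A = [similar_to_rotation(0.7, np.array([[1.0, 0.5], [0.0, 1.0]])),
     similar_to_rotation(-1.3, np.array([[1.0, 0.0], [0.7, 1.0]]))]

def best_sequence(n):
    """Exhaustive search for the index sequence maximizing ||A_{i_n}...A_{i_0}||."""
    best, best_seq = -np.inf, None
    for seq in itertools.product((0, 1), repeat=n):
        P = np.eye(2)
        for i in seq:
            P = A[i] @ P
        norm = np.linalg.norm(P, 2)           # spectral norm of the product
        if norm > best:
            best, best_seq = norm, seq
    return best, best_seq

for n in (4, 8, 12):
    norm, seq = best_sequence(n)
    print(n, round(norm, 4), "".join(map(str, seq)))
```

Exhaustive search over 2^n sequences is only feasible for short products, which is one reason such experiments suggest questions rather than settle them.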

We present a method for comparing point forecasts in a region of interest, such as the tails or centre of a variable's range. This method cannot be hedged, in contrast to conditionally selecting events to evaluate and then using a scoring function that would have been consistent (or proper) prior to event selection. Our method also gives decompositions of scoring functions that are consistent for the mean or a particular quantile or expectile. Each member of each decomposition is itself a consistent scoring function that emphasises performance over a selected region of the variable's range. The score of each member of the decomposition has a natural interpretation rooted in optimal decision theory. It is the weighted average of economic regret over user decision thresholds, where the weight emphasises those decision thresholds in the corresponding region of interest.
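
For the quantile case, such a decomposition can be checked numerically via the standard mixture representation of consistent quantile scoring functions into elementary scores indexed by a decision threshold. The sketch below, with simulated forecasts and observations, verifies that a uniform weight over thresholds recovers the pinball (quantile) loss, while a weight supported only on the upper tail yields a score that emphasises performance in that region.

```python
import numpy as np

ALPHA = 0.9   # quantile level

def elementary_score(theta, x, y):
    """Elementary alpha-quantile score at decision threshold theta."""
    return (((y < x).astype(float) - ALPHA)
            * ((theta < x).astype(float) - (theta < y).astype(float)))

def pinball(x, y):
    """Standard pinball (quantile) loss for forecasts x and observations y."""
    return ((y < x).astype(float) - ALPHA) * (x - y)

rng = np.random.default_rng(2)
y = rng.normal(size=1000)                      # observations
x = 0.8 * y + 0.2 * rng.normal(size=1000)      # imperfect forecasts

thetas = np.linspace(-6, 6, 4001)              # grid covering the data range
dtheta = thetas[1] - thetas[0]

# uniform weight over all thresholds recovers the pinball loss
full = sum(elementary_score(t, x, y).mean() for t in thetas) * dtheta
print(full, pinball(x, y).mean())              # approximately equal

# weight concentrated on the upper tail scores tail performance only
tail = sum(elementary_score(t, x, y).mean() for t in thetas if t > 1.0) * dtheta
print(tail)
```

Each elementary score is itself nonnegative and consistent for the quantile, which is what makes the restricted score unhedgeable, in contrast to selecting events after the fact.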

Data in non-Euclidean spaces are commonly encountered in many fields of science and engineering. For instance, in robotics, attitude sensors capture orientation, which is an element of a Lie group. In the recent past, several researchers have reported methods that take into account the geometry of Lie groups in designing parameter estimation algorithms in nonlinear spaces. Maximum likelihood estimators (MLE) are quite commonly used for such tasks, and it is well known in the field of statistics that Stein's shrinkage estimators dominate the MLE in a mean-squared sense assuming the observations are from a normal population. In this paper, we present a novel shrinkage estimator for data residing in Lie groups, specifically, abelian or compact Lie groups. The key theoretical results presented in this paper are: (i) Stein's Lemma and its proof for Lie groups and, (ii) proof of dominance of the proposed shrinkage estimator over MLE for abelian and compact Lie groups. We present simulation studies demonstrating the dominance of the proposed shrinkage estimator and an application of shrinkage estimation to multiple-robot localization.
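
The simplest instance of the dominance phenomenon being generalized is the Euclidean one: R^d under addition is an abelian Lie group, and there the classical James-Stein estimator dominates the MLE for d >= 3. The Monte Carlo sketch below illustrates that Euclidean baseline; it is not the paper's Lie-group construction, and the mean and dimension are illustrative.

```python
import numpy as np

def james_stein(x):
    """James-Stein shrinkage of a single observation x ~ N(mu, I_d) toward 0."""
    d = x.shape[-1]
    norm2 = np.sum(x**2, axis=-1, keepdims=True)
    return (1.0 - (d - 2) / norm2) * x

rng = np.random.default_rng(3)
d, trials = 10, 100000
mu = np.full(d, 0.5)
x = rng.normal(loc=mu, size=(trials, d))       # the MLE of mu is x itself

mse_mle = np.mean(np.sum((x - mu) ** 2, axis=1))
mse_js = np.mean(np.sum((james_stein(x) - mu) ** 2, axis=1))
print(mse_mle, mse_js)   # shrinkage risk is strictly smaller for d >= 3
```

The paper's contribution is to establish the analogue of Stein's Lemma, the ingredient behind this risk calculation, on abelian and compact Lie groups rather than on R^d.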

The problem of Approximate Nearest Neighbor (ANN) search is fundamental in computer science and has benefited from significant progress in the past couple of decades. However, most work has been devoted to point sets, whereas complex shapes have not been sufficiently treated. Here, we focus on distance functions between discretized curves in Euclidean space: they appear in a wide range of applications, from road segments to time-series in general dimension. For $\ell_p$-products of Euclidean metrics, for any $p$, we design simple and efficient data structures for ANN, based on randomized projections, which are of independent interest. They serve to solve proximity problems under a notion of distance between discretized curves, which generalizes both discrete Fr\'echet and Dynamic Time Warping distances. These are the most popular and practical approaches to comparing such curves. We offer the first data structures and query algorithms for ANN with arbitrarily good approximation factor, at the expense of increased space usage and preprocessing time over existing methods. Query time complexity is comparable to, or significantly improved over, that of existing methods; our algorithm is especially efficient when the length of the curves is bounded.
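
As a toy illustration of the randomized-projection building block, the sketch below embeds discretized curves as concatenated point vectors (a fixed index-to-index alignment, i.e. the $\ell_2$-product metric only), applies a Gaussian Johnson-Lindenstrauss style projection, and answers queries by exact search in the projected space. The paper's actual structures for discrete Fr\'echet and DTW are substantially more involved; all sizes here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
n, m, d, k = 2000, 32, 2, 24       # n curves, m points each, d dims, target dim k

curves = np.cumsum(rng.normal(size=(n, m, d)), axis=1)   # random-walk "curves"
X = curves.reshape(n, m * d)                             # fixed-alignment embedding

# Gaussian random projection approximately preserves l2 distances w.h.p.
P = rng.normal(size=(m * d, k)) / np.sqrt(k)
Xp = X @ P

def ann(query):
    """Approximate nearest neighbor: exact search in the projected space."""
    qp = query.reshape(-1) @ P
    return int(np.argmin(np.linalg.norm(Xp - qp, axis=1)))

q = curves[7] + 0.01 * rng.normal(size=(m, d))   # slightly perturbed copy
print(ann(q))                                    # expected: 7
```

Handling the alignment-invariant distances (Fr\'echet, DTW) requires searching over alignments as well, which is where the query algorithms of the paper depart from this simple picture.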

The availability of large microarray data has led to a growing interest in biclustering methods in the past decade. Several algorithms have been proposed to identify subsets of genes and conditions according to different similarity measures and under varying constraints. In this paper we focus on the exclusive row biclustering problem for gene expression data sets, in which each row can only be a member of a single bicluster while columns can participate in multiple ones. This type of biclustering may be adequate, for example, for clustering groups of cancer patients where each patient (row) is expected to be carrying only a single type of cancer, while each cancer type is associated with multiple (and possibly overlapping) genes (columns). We present a novel method to identify these exclusive row biclusters through a combination of existing biclustering algorithms and combinatorial auction techniques. We devise an approach for tuning the threshold for our algorithm based on comparison to a null model, in the spirit of the Gap statistic approach. We demonstrate our approach on both synthetic and real-world gene expression data and show its power in identifying large-span, non-overlapping row submatrices, while considering their unique nature. The Gap statistic approach succeeds in identifying appropriate thresholds in all our examples.
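
The threshold-tuning idea can be illustrated with the classic Gap statistic for k-means, which compares within-cluster dispersion on the data against the same quantity under a uniform null over the bounding box; the paper applies the same spirit to its biclustering threshold rather than to k. A minimal sketch, assuming scikit-learn is available:

```python
import numpy as np
from sklearn.cluster import KMeans

def log_wk(X, k):
    """Log within-cluster dispersion (k-means inertia) for k clusters."""
    return np.log(KMeans(n_clusters=k, n_init=10, random_state=0).fit(X).inertia_)

def gap(X, k, n_refs=10, seed=0):
    """Gap statistic: null-model dispersion minus data dispersion."""
    rng = np.random.default_rng(seed)
    lo, hi = X.min(axis=0), X.max(axis=0)
    ref = [log_wk(rng.uniform(lo, hi, size=X.shape), k) for _ in range(n_refs)]
    return np.mean(ref) - log_wk(X, k)

rng = np.random.default_rng(5)
X = np.vstack([rng.normal(loc=c, scale=0.3, size=(50, 2))
               for c in ((0, 0), (3, 0), (0, 3))])
for k in range(1, 6):
    print(k, round(gap(X, k), 3))   # the gap should peak near k = 3
```

A large gap indicates structure well beyond what the null model produces, which is the same criterion used to accept or reject a candidate threshold.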

Large margin nearest neighbor (LMNN) is a metric learner which optimizes the performance of the popular $k$NN classifier. However, its resulting metric relies on pre-selected target neighbors. In this paper, we address the feasibility of LMNN's optimization constraints regarding these target points, and introduce a mathematical measure to evaluate the size of the feasible region of the optimization problem. We enhance the optimization framework of LMNN by a weighting scheme which prefers data triplets which yield a larger feasible region. This increases the chances to obtain a good metric as the solution of LMNN's problem. We evaluate the performance of the resulting feasibility-based LMNN algorithm using synthetic and real datasets. The empirical results show an improved accuracy for different types of datasets in comparison to regular LMNN.
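
For context, a minimal sketch of the baseline LMNN objective appears below: a hinged triplet loss over (point, target neighbor, impostor) triplets, optimized over a linear map L defining the metric M = L^T L. The paper's contribution, a feasibility-based weighting of such triplets, would reweight the terms of this loss; it is not implemented here, and the data, triplet selection, and step sizes are illustrative.

```python
import numpy as np

def lmnn_loss_grad(L, X, triplets, mu=0.5, margin=1.0):
    """LMNN-style pull/push loss and its gradient w.r.t. the linear map L."""
    Z = X @ L.T                               # mapped points, Z[i] = L x_i
    grad = np.zeros_like(L)
    loss = 0.0
    for i, j, k in triplets:                  # j: target neighbor, k: impostor
        a, b = X[i] - X[j], X[i] - X[k]
        dij, dik = Z[i] - Z[j], Z[i] - Z[k]
        loss += dij @ dij                     # pull target neighbors close
        grad += 2 * np.outer(dij, a)
        slack = margin + dij @ dij - dik @ dik
        if slack > 0:                         # impostor violates the margin
            loss += mu * slack
            grad += 2 * mu * (np.outer(dij, a) - np.outer(dik, b))
    return loss, grad

rng = np.random.default_rng(6)
X = rng.normal(size=(30, 4))
y = np.arange(30) % 2
triplets = [(i, j, k) for i in range(30) for j in range(30) for k in range(30)
            if i != j and y[i] == y[j] and y[i] != y[k]][:200]

L = np.eye(4)
for _ in range(100):                          # plain gradient descent
    loss, g = lmnn_loss_grad(L, X, triplets)
    L -= 0.001 * g
print(loss)
```

The feasibility question the paper studies is whether, for pre-selected targets, any L can satisfy all margin constraints at once; triplets that leave a larger feasible region would receive larger weights in this sum.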

Spectral clustering is a leading and popular technique in unsupervised data analysis. Two of its major limitations are scalability and generalization of the spectral embedding (i.e., out-of-sample extension). In this paper we introduce a deep learning approach to spectral clustering that overcomes the above shortcomings. Our network, which we call SpectralNet, learns a map that embeds input data points into the eigenspace of their associated graph Laplacian matrix and subsequently clusters them. We train SpectralNet using a procedure that involves constrained stochastic optimization. Stochastic optimization allows it to scale to large datasets, while the constraints, which are implemented using a special-purpose output layer, allow us to keep the network output orthogonal. Moreover, the map learned by SpectralNet naturally generalizes the spectral embedding to unseen data points. To further improve the quality of the clustering, we replace the standard pairwise Gaussian affinities with affinities learned from unlabeled data using a Siamese network. Additional improvement can be achieved by applying the network to code representations produced, e.g., by standard autoencoders. Our end-to-end learning procedure is fully unsupervised. In addition, we apply VC dimension theory to derive a lower bound on the size of SpectralNet. State-of-the-art clustering results are reported on the Reuters dataset. Our implementation is publicly available at //github.com/kstant0725/SpectralNet.
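
The orthogonality constraint can be enforced by a linear output layer computed from the batch itself, in the spirit of SpectralNet's special-purpose layer: a Cholesky factorization of the batch Gram matrix yields a map under which the batch outputs become orthonormal. A minimal numpy sketch of that step, with random stand-in outputs:

```python
import numpy as np

def orthonorm(Y):
    """Cholesky-based output layer: linearly map the batch outputs Y so that
    the mapped columns satisfy (Y_tilde^T Y_tilde) / m = I."""
    m = Y.shape[0]
    L = np.linalg.cholesky(Y.T @ Y / m)       # Y^T Y / m = L L^T
    return Y @ np.linalg.inv(L).T             # Y_tilde = Y L^{-T}

rng = np.random.default_rng(7)
Y = rng.normal(size=(256, 5)) @ rng.normal(size=(5, 5))   # correlated outputs
Y_tilde = orthonorm(Y)
print(np.round(Y_tilde.T @ Y_tilde / Y.shape[0], 6))      # approximately identity
```

Because the map is linear in Y, it can sit as the final layer of the network and be differentiated through during stochastic training.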

We introduce a new neural architecture to learn the conditional probability of an output sequence with elements that are discrete tokens corresponding to positions in an input sequence. Such problems cannot be trivially addressed by existing approaches such as sequence-to-sequence and Neural Turing Machines, because the number of target classes in each step of the output depends on the length of the input, which is variable. Problems such as sorting variable sized sequences, and various combinatorial optimization problems belong to this class. Our model solves the problem of variable size output dictionaries using a recently proposed mechanism of neural attention. It differs from the previous attention attempts in that, instead of using attention to blend hidden units of an encoder to a context vector at each decoder step, it uses attention as a pointer to select a member of the input sequence as the output. We call this architecture a Pointer Net (Ptr-Net). We show Ptr-Nets can be used to learn approximate solutions to three challenging geometric problems -- finding planar convex hulls, computing Delaunay triangulations, and the planar Travelling Salesman Problem -- using training examples alone. Ptr-Nets not only improve over sequence-to-sequence with input attention, but also allow us to generalize to variable size output dictionaries. We show that the learnt models generalize beyond the maximum lengths they were trained on. We hope our results on these tasks will encourage a broader exploration of neural learning for discrete problems.
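
The pointing mechanism itself is compact. The sketch below (with random, untrained parameters) computes the Ptr-Net attention scores u_i = v^T tanh(W1 e_i + W2 d) over the encoder states and normalizes them into a distribution over input positions, which is used directly as the output distribution rather than as blending weights for a context vector.

```python
import numpy as np

def pointer_attention(E, d, W1, W2, v):
    """Pointer-network attention: score each encoder state e_i against the
    decoder state d, and return a distribution over input positions."""
    u = np.tanh(E @ W1 + d @ W2) @ v          # u_i = v^T tanh(W1 e_i + W2 d)
    exp_u = np.exp(u - u.max())
    return exp_u / exp_u.sum()                # softmax over input positions

rng = np.random.default_rng(8)
n_inputs, h = 6, 16
E = rng.normal(size=(n_inputs, h))            # encoder hidden states
d = rng.normal(size=h)                        # current decoder state
W1 = rng.normal(size=(h, h))
W2 = rng.normal(size=(h, h))
v = rng.normal(size=h)

p = pointer_attention(E, d, W1, W2, v)
print(p, p.argmax())   # the argmax "points" at one input element
```

Since the softmax ranges over however many encoder states there are, the output dictionary automatically grows and shrinks with the input length, which is what lets the trained models generalize beyond the lengths seen in training.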
