
We present an empirical study of a range of evolutionary algorithms applied to various noisy combinatorial optimisation problems. There are three sets of experiments. The first looks at several toy problems, such as OneMax and other linear problems. We find that UMDA and the Paired-Crossover Evolutionary Algorithm (PCEA) are the only ones able to cope robustly with noise, within a reasonable fixed time budget. In the second stage, UMDA and PCEA are tested on more complex noisy problems: SubsetSum, Knapsack and SetCover. Both perform well under increasing levels of noise, with UMDA being the better of the two. In the third stage, we consider two noisy multi-objective problems (CountingOnesCountingZeros and a multi-objective formulation of SetCover). We compare several adaptations of UMDA for multi-objective problems with the Simple Evolutionary Multi-objective Optimiser (SEMO) and NSGA-II. We conclude that UMDA, and its variants, can be highly effective on a variety of noisy combinatorial optimisation problems, outperforming many other evolutionary algorithms.
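As a concrete illustration of the kind of algorithm and noise model involved (a minimal sketch, not the paper's experimental setup; the population sizes, noise level, and the function name `umda_noisy_onemax` are hypothetical choices):

```python
import random

def umda_noisy_onemax(n=50, pop_size=100, parents=50, noise_sigma=1.0, generations=200):
    """Minimal UMDA sketch on OneMax with additive Gaussian evaluation noise.
    Parameter values are illustrative, not those used in the paper."""
    p = [0.5] * n  # marginal probability of sampling a 1 at each position
    for _ in range(generations):
        pop = [[1 if random.random() < p[i] else 0 for i in range(n)]
               for _ in range(pop_size)]
        # Noisy fitness: true OneMax value plus Gaussian noise.
        scored = sorted(pop, key=lambda x: sum(x) + random.gauss(0, noise_sigma),
                        reverse=True)
        selected = scored[:parents]
        # Re-estimate marginals from the selected individuals, with margins
        # keeping every probability away from 0 and 1.
        for i in range(n):
            freq = sum(ind[i] for ind in selected) / parents
            p[i] = min(max(freq, 1.0 / n), 1.0 - 1.0 / n)
    return p

probabilities = umda_noisy_onemax()
print(sum(probabilities))  # close to n once the marginals have converged towards 1
```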

Related Content

Conventional multi-agent path planners typically compute an ensemble of paths while optimizing a single objective, such as path length. However, many applications may require multiple objectives, say fuel consumption and completion time, to be simultaneously optimized during planning, and these criteria may not be readily compared and sometimes lie in competition with each other. Naively applying existing multi-objective search algorithms, such as multi-objective A* (MOA*), to multi-agent path finding may prove to be inefficient, as the size of the space of possible solutions, i.e., the Pareto-optimal set, can grow exponentially with the number of agents (the dimension of the search space). This article presents an approach named Multi-Objective Conflict-Based Search (MO-CBS) that bypasses this so-called curse of dimensionality by leveraging Conflict-Based Search (CBS), a well-known algorithm for single-objective multi-agent path finding, and dominance principles from the multi-objective optimization literature. We also develop several variants of MO-CBS to further improve its performance. We prove that MO-CBS and its variants are able to compute the entire Pareto-optimal set. Numerical results show that MO-CBS outperforms both MOA* and MOM*, a recently developed state-of-the-art multi-objective multi-agent planner.
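The dominance principle that MO-CBS borrows from the multi-objective optimization literature can be illustrated with a small generic sketch (not taken from the MO-CBS implementation) that filters a set of cost vectors down to its non-dominated, i.e. Pareto-optimal, subset:

```python
def dominates(a, b):
    """True if cost vector a is at least as good as b in every objective
    and strictly better in at least one (minimisation convention)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_filter(cost_vectors):
    """Return the non-dominated subset of a list of cost vectors."""
    return [c for c in cost_vectors
            if not any(dominates(other, c) for other in cost_vectors if other != c)]

print(pareto_filter([(3, 5), (4, 4), (5, 5), (2, 7)]))
# [(3, 5), (4, 4), (2, 7)] -- (5, 5) is dominated by both (3, 5) and (4, 4)
```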

Moving mesh methods are designed to redistribute a mesh in a regular way. This applied problem can be viewed as overlapping with the problem of finding a diffeomorphic mapping between density measures. In applications, an off-the-shelf grid needs to be restructured to have higher grid density in some regions than in others. This should be done in a way that avoids tangling; hence the attractiveness of diffeomorphic mapping techniques. For exact diffeomorphic mapping on the sphere, a major tool is Optimal Transport, which allows for diffeomorphic mapping even between non-continuous source and target densities. Recently, however, Optimal Information Transport was rigorously developed, allowing for both exact and inexact diffeomorphic mapping while requiring only the solution of a simpler partial differential equation. In this manuscript, we solve adaptive mesh problems using Optimal Transport and Optimal Information Transport on the sphere and show how to generalize these computations to more general manifolds. We perform this comparison with provably convergent solvers, which is generally challenging for either problem due to the lack of boundary conditions and the lack of a comparison principle in the partial differential equation formulation.
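For reference, a hedged reminder of the generic requirement behind both methods: a diffeomorphism $\varphi$ pushing a source density $\rho_0$ forward to a target density $\rho_1$ must satisfy the standard change-of-variables (Jacobian) condition below. This is the generic condition, not the particular partial differential equation formulations compared in the manuscript.

$$\rho_1(\varphi(x))\,\bigl|\det D\varphi(x)\bigr| \;=\; \rho_0(x).$$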

We present a general technique, based on parametric search with a twist, for solving a variety of optimization problems on a set of points in the plane or in higher dimensions. These problems include (i) the reverse shortest path problem in unit-disk graphs, recently studied by Wang and Zhao, (ii) the same problem for weighted unit-disk graphs, with a decision procedure recently provided by Wang and Xue, (iii) extensions of these problems to three and higher dimensions, (iv) the discrete Fr\'echet distance with one-sided shortcuts in higher dimensions, extending the study by Ben Avraham et al., and (v) the maximum-height independent towers problem, in which we want to erect vertical towers of maximum height over a 1.5-dimensional terrain so that no two tower tips are mutually visible. We obtain significantly improved solutions for problems (i) and (ii), and new efficient solutions to problems (iii), (iv) and (v), which do not appear to have been studied earlier.
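As a much-simplified illustration of the role the decision procedure plays (true parametric search avoids the explicit candidate enumeration and binary search used here), consider the unweighted reverse shortest path question: find the smallest radius for which two given points are within $k$ hops in the unit-disk graph. The brute-force sketch below is hypothetical and only meant to make the problem statement concrete:

```python
import math
from collections import deque

def hop_distance(points, r, s, t):
    """BFS hop distance from s to t in the unit-disk graph of radius r.
    Brute-force O(n^2) adjacency test; purely illustrative."""
    dist = [-1] * len(points)
    dist[s] = 0
    queue = deque([s])
    while queue:
        u = queue.popleft()
        for v in range(len(points)):
            if dist[v] == -1 and math.dist(points[u], points[v]) <= r:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist[t] if dist[t] != -1 else math.inf

def reverse_shortest_path(points, s, t, k):
    """Smallest radius r such that s and t are within k hops.
    The hop structure only changes at pairwise distances, so binary-search over them."""
    candidates = sorted({math.dist(p, q) for p in points for q in points if p != q})
    lo, hi, answer = 0, len(candidates) - 1, None
    while lo <= hi:
        mid = (lo + hi) // 2
        if hop_distance(points, candidates[mid], s, t) <= k:
            answer, hi = candidates[mid], mid - 1
        else:
            lo = mid + 1
    return answer

pts = [(0, 0), (1, 0), (2, 0), (3, 0)]
print(reverse_shortest_path(pts, 0, 3, 3))  # 1.0: radius 1 yields a 3-hop path
```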

A tournament graph $T = \left(V, E \right)$ is an oriented complete graph, which can be used to model a round-robin tournament between $n$ players. In this paper, we address the problem of finding a champion of the tournament, also known as a Copeland winner, which is a player that wins the highest number of matches. Solving this problem has important implications for several Information Retrieval applications, including Web search, conversational IR, machine translation, question answering, recommender systems, etc. Our goal is to solve the problem by minimizing the number of times we probe the adjacency matrix, i.e., the number of matches played. We prove that any deterministic/randomized algorithm finding a champion with constant success probability requires $\Omega(\ell n)$ comparisons, where $\ell$ is the number of matches lost by the champion. We then present an optimal deterministic algorithm matching this lower bound without knowing $\ell$, and we extend our analysis to three strictly related problems. Lastly, we conduct a comprehensive experimental assessment of the proposed algorithms, applying them to speed up a state-of-the-art solution for ranking on public data. Results show that our proposals speed up the retrieval of the champion by up to $13\times$ in this scenario.
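To make the cost model concrete, here is a naive baseline that probes every entry of the adjacency matrix; the algorithms in the paper are designed to answer the same question with far fewer probes (the data layout and function name are hypothetical):

```python
def copeland_winner_naive(beats):
    """Find the player with the most wins by probing every match outcome.
    `beats[i][j]` is True if player i beats player j; this baseline uses
    all n*(n-1)/2 probes, which the paper's algorithms aim to avoid."""
    n = len(beats)
    wins = [0] * n
    probes = 0
    for i in range(n):
        for j in range(i + 1, n):
            probes += 1
            if beats[i][j]:
                wins[i] += 1
            else:
                wins[j] += 1
    champion = max(range(n), key=lambda i: wins[i])
    return champion, wins[champion], probes

# Toy round-robin among 4 players: player 0 beats everyone.
beats = [[False, True, True, True],
         [False, False, True, False],
         [False, False, False, True],
         [False, True, False, False]]
print(copeland_winner_naive(beats))  # (0, 3, 6)
```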

We study a new class of NP search problems, those which can be proved total using standard combinatorial reasoning based on approximate counting. Our model for this kind of reasoning is the bounded arithmetic theory $\mathrm{APC}_2$ of [Je\v{r}\'abek 2009]. In particular, the Ramsey and weak pigeonhole search problems lie in the new class. We give a purely computational characterization of this class and show that, relative to an oracle, it does not contain the problem CPLS, a strengthening of PLS. As CPLS is provably total in the theory $T^2_2$, this shows that $\mathrm{APC}_2$ does not prove every $\forall \Sigma^b_1$ sentence which is provable in bounded arithmetic. This answers the question posed in [Buss, Ko{\l}odziejczyk, Thapen 2014] and represents some progress in the programme of separating the levels of the bounded arithmetic hierarchy by low-complexity sentences. Our main technical tool is an extension of the "fixing lemma" from [Pudl\'ak, Thapen 2017], a form of switching lemma, which we use to show that a random partial oracle from a certain distribution will, with high probability, determine an entire computation of a $\textrm{P}^{\textrm{NP}}$ oracle machine. The introduction to the paper is intended to make the statements and context of the results accessible to someone unfamiliar with NP search problems or with bounded arithmetic.

Combinatorial designs provide an interesting source of optimization problems. Among them, permutation codes are particularly interesting given their applications in powerline communications, flash memories, and block ciphers. This paper addresses the design of permutation codes with evolutionary algorithms (EAs) by developing an iterative approach. Starting from a single random permutation, new permutations satisfying the minimum distance constraint are incrementally added to the code by using a permutation-based EA. We evaluate our approach with four different fitness functions, which target the minimum distance requirement at different levels of detail, and with two different policies for code expansion and pruning. We compare the results achieved by our EA approach to those of a simple random search, noting that neither method scales well with the problem size.
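A minimal sketch of the incremental construction, using the random-search baseline as the proposal mechanism (the paper's EA replaces the random shuffles with permutation-based evolutionary operators; the parameters $n=8$ and minimum distance $5$ are hypothetical):

```python
import random

def hamming_distance(p, q):
    """Number of positions where two permutations differ."""
    return sum(1 for a, b in zip(p, q) if a != b)

def grow_permutation_code(n=8, d=5, attempts=20000, seed=0):
    """Incrementally build a permutation code with minimum Hamming distance d
    by random search; the paper's approach replaces the random proposals with an EA."""
    rng = random.Random(seed)
    code = [list(range(n))]  # start from a single (here: identity) permutation
    for _ in range(attempts):
        candidate = list(range(n))
        rng.shuffle(candidate)
        if all(hamming_distance(candidate, member) >= d for member in code):
            code.append(candidate)
    return code

code = grow_permutation_code()
print(len(code), "codewords with pairwise Hamming distance >= 5")
```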

We study constrained reinforcement learning (CRL) from a novel perspective by setting constraints directly on state density functions, rather than on the value functions considered by previous works. State density has a clear physical and mathematical interpretation and is able to express a wide variety of constraints, such as resource limits and safety requirements. Density constraints can also avoid the time-consuming process of designing and tuning the cost functions required by value-function-based constraints to encode system specifications. We leverage the duality between density functions and Q functions to develop an effective algorithm that solves the density-constrained RL problem optimally while guaranteeing that the constraints are satisfied. We prove that the proposed algorithm converges to a near-optimal solution with a bounded error even when the policy update is imperfect. We use a set of comprehensive experiments to demonstrate the advantages of our approach over state-of-the-art CRL methods, on a wide range of density-constrained tasks as well as standard CRL benchmarks such as Safety-Gym.
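As a hedged sketch of the general idea of constraining state densities (a generic primal-dual treatment, not the duality-based algorithm of the paper, whose update exploits the relationship between densities and Q functions; all names, caps, and step sizes are illustrative):

```python
from collections import Counter

def empirical_state_density(trajectories):
    """Estimate the state visitation density from sampled trajectories."""
    counts = Counter(s for traj in trajectories for s in traj)
    total = sum(counts.values())
    return {s: c / total for s, c in counts.items()}

def shaped_reward(reward, state, lam):
    """Lagrangian-style reward shaping: subtract the multiplier of the visited
    state so the policy is discouraged from over-visiting constrained states."""
    return reward - lam.get(state, 0.0)

def dual_update(lam, density, density_cap, step=0.1):
    """Projected gradient ascent on the multipliers: raise lam where the
    empirical density exceeds its cap, lower it (down to 0) where it does not."""
    return {s: max(0.0, lam.get(s, 0.0) + step * (density.get(s, 0.0) - density_cap[s]))
            for s in density_cap}

# Toy usage with abstract states "a", "b", "c".
trajs = [["a", "b", "b"], ["b", "c", "b"]]
rho = empirical_state_density(trajs)
lam = dual_update({}, rho, {"a": 0.5, "b": 0.3, "c": 0.5})
print(rho, lam, shaped_reward(1.0, "b", lam))
```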

Many meta-learning approaches for few-shot learning rely on simple base learners such as nearest-neighbor classifiers. However, even in the few-shot regime, discriminatively trained linear predictors can offer better generalization. We propose to use these predictors as base learners to learn representations for few-shot learning and show they offer better tradeoffs between feature size and performance across a range of few-shot recognition benchmarks. Our objective is to learn feature embeddings that generalize well under a linear classification rule for novel categories. To efficiently solve the objective, we exploit two properties of linear classifiers: implicit differentiation of the optimality conditions of the convex problem and the dual formulation of the optimization problem. This allows us to use high-dimensional embeddings with improved generalization at a modest increase in computational overhead. Our approach, named MetaOptNet, achieves state-of-the-art performance on miniImageNet, tieredImageNet, CIFAR-FS, and FC100 few-shot learning benchmarks. Our code is available at //github.com/kjunelee/MetaOptNet.
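A hedged sketch of the overall pattern, with a closed-form ridge-regression base learner standing in for the paper's SVM dual and plain NumPy standing in for a differentiable framework: the point is that the episode-specific linear predictor is an explicit function of the support embeddings, which is what makes differentiating through the inner solver tractable.

```python
import numpy as np

def ridge_base_learner(support_embeddings, support_labels_onehot, reg=1.0):
    """Closed-form ridge regression base learner: W = (X^T X + reg*I)^{-1} X^T Y.
    Because W is an explicit function of the embeddings, gradients of a
    meta-objective could flow through this inner solve (the paper instead
    differentiates the optimality conditions of an SVM dual)."""
    X, Y = support_embeddings, support_labels_onehot
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + reg * np.eye(d), X.T @ Y)

def classify(query_embeddings, W):
    """Score query embeddings with the episode-specific linear predictor."""
    return np.argmax(query_embeddings @ W, axis=1)

# Toy 2-way, 2-shot episode with 4-dimensional embeddings.
rng = np.random.default_rng(0)
support = rng.normal(size=(4, 4))
labels = np.eye(2)[[0, 0, 1, 1]]
W = ridge_base_learner(support, labels)
print(classify(support, W))
```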

Developing classification algorithms that are fair with respect to sensitive attributes of the data has become an important problem due to the growing deployment of classification algorithms in various social contexts. Several recent works have focused on fairness with respect to a specific metric, modeled the corresponding fair classification problem as a constrained optimization problem, and developed tailored algorithms to solve it. Despite this, there remain important metrics for which we do not have fair classifiers, and many of the aforementioned algorithms do not come with theoretical guarantees, perhaps because the resulting optimization problem is non-convex. The main contribution of this paper is a new meta-algorithm for classification that takes as input a large class of fairness constraints, with respect to multiple non-disjoint sensitive attributes, and which comes with provable guarantees. This is achieved by first developing a meta-algorithm for a large family of classification problems with convex constraints, and then showing that classification problems with general types of fairness constraints can be reduced to those in this family. We present empirical results that show that our algorithm can achieve near-perfect fairness with respect to various fairness metrics, and that the loss in accuracy due to the imposed fairness constraints is often small. Overall, this work unifies several prior works on fair classification, presents a practical algorithm with theoretical guarantees, and can handle fairness metrics that prior methods could not.
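As a small illustration of the kind of group fairness metric such constraints encode (statistical parity with a single binary sensitive attribute, one hypothetical instance of the much broader metric class the meta-algorithm supports):

```python
def statistical_parity_gap(predictions, sensitive):
    """Absolute difference in positive-prediction rates between the two groups
    encoded in `sensitive` (0/1). One example of a group fairness metric;
    the meta-algorithm in the paper handles a much broader class."""
    rate = {}
    for g in (0, 1):
        group = [p for p, s in zip(predictions, sensitive) if s == g]
        rate[g] = sum(group) / len(group)
    return abs(rate[0] - rate[1])

preds = [1, 0, 1, 1, 0, 0, 1, 0]
groups = [0, 0, 0, 0, 1, 1, 1, 1]
print(statistical_parity_gap(preds, groups))  # |0.75 - 0.25| = 0.5
```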

We present an end-to-end framework for solving the Vehicle Routing Problem (VRP) using reinforcement learning. In this approach, we train a single model that finds near-optimal solutions for problem instances sampled from a given distribution, only by observing the reward signals and following feasibility rules. Our model represents a parameterized stochastic policy, and by applying a policy gradient algorithm to optimize its parameters, the trained model produces the solution as a sequence of consecutive actions in real time, without the need to re-train for every new problem instance. On capacitated VRP, our approach outperforms classical heuristics and Google's OR-Tools on medium-sized instances in solution quality with comparable computation time (after training). We demonstrate how our approach can handle problems with split delivery and explore the effect of such deliveries on the solution quality. Our proposed framework can be applied to other variants of the VRP such as the stochastic VRP, and has the potential to be applied more generally to combinatorial optimization problems.
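To make the feasibility rules concrete, here is a hedged sketch of a greedy decoder for the capacitated VRP that only considers customers whose demand fits the remaining load and returns to the depot to refill otherwise; the trained policy in the paper replaces the nearest-feasible-customer choice below with a learned, stochastic one.

```python
import math

def greedy_cvrp_route(depot, customers, demands, capacity):
    """Greedy decoder for the capacitated VRP: repeatedly visit the nearest
    customer whose demand fits the remaining load, returning to the depot
    to refill when nothing fits. A learned policy would replace the
    nearest-feasible choice with a sampled action."""
    unvisited = set(range(len(customers)))
    route, position, load = [depot], depot, capacity
    while unvisited:
        feasible = [i for i in unvisited if demands[i] <= load]
        if not feasible:                      # nothing fits: refill at the depot
            route.append(depot)
            position, load = depot, capacity
            continue
        nxt = min(feasible, key=lambda i: math.dist(position, customers[i]))
        route.append(customers[nxt])
        load -= demands[nxt]
        position = customers[nxt]
        unvisited.remove(nxt)
    route.append(depot)
    return route

depot = (0.0, 0.0)
customers = [(1.0, 0.0), (0.0, 2.0), (2.0, 2.0)]
print(greedy_cvrp_route(depot, customers, demands=[3, 4, 5], capacity=6))
```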
