
Population diversity is crucial in evolutionary algorithms as it helps with global exploration and facilitates the use of crossover. Despite many runtime analyses showing advantages of population diversity, we have no clear picture of how diversity evolves over time. We study how population diversity of $(\mu+1)$ algorithms, measured by the sum of pairwise Hamming distances, evolves in a fitness-neutral environment. We give an exact formula for the drift of population diversity and show that it is driven towards an equilibrium state. Moreover, we bound the expected time for getting close to the equilibrium state. We find that these dynamics, including the location of the equilibrium, are unaffected by surprisingly many algorithmic choices. All unbiased mutation operators with the same expected number of bit flips have the same effect on the expected diversity. Many crossover operators have no effect at all, including all binary unbiased, respectful operators. We review crossover operators from the literature and identify crossovers that are neutral towards the evolution of diversity and crossovers that are not.
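As a concrete illustration of the diversity measure used above, here is a minimal Python sketch (names are mine, not from the paper) computing the sum of pairwise Hamming distances over a population of bit strings:

```python
import itertools

def population_diversity(pop):
    """Sum of pairwise Hamming distances over all unordered pairs."""
    return sum(
        sum(a != b for a, b in zip(x, y))
        for x, y in itertools.combinations(pop, 2)
    )

pop = [[0, 1, 1, 0], [1, 1, 0, 0], [0, 0, 1, 1]]
print(population_diversity(pop))  # pairwise distances 2 + 2 + 4 = 8
```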

Related Content

In game theory, mechanism design is concerned with the design of incentives so that a desired outcome of the game can be achieved. In this paper, we explore the concept of equilibrium design, where incentives are designed to obtain a desirable equilibrium that satisfies a specific temporal logic property. Our study is based on a framework where system specifications are represented as temporal logic formulae, games as quantitative concurrent game structures, and players' goals as mean-payoff objectives. We consider system specifications given by LTL and GR(1) formulae, and show that designing incentives to ensure that a given temporal logic property is satisfied on some/every Nash equilibrium of the game can be achieved in PSPACE for LTL properties and in NP/$\Sigma^P_2$ for GR(1) specifications. We also examine the complexity of related decision and optimisation problems, such as optimality and uniqueness of solutions, as well as the consideration of social welfare, and show that the complexities of all these problems lie within the polynomial hierarchy. Equilibrium design can be used as an alternative solution to rational synthesis and verification problems for concurrent games with mean-payoff objectives when no solution exists, or as a technique to repair, in an optimal way, concurrent games with undesirable Nash equilibria.
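For readers unfamiliar with mean-payoff objectives, the standard textbook definition of the mean-payoff value of an infinite play $\pi$ with weight function $w$ is the limit inferior of its running average; this is background notation, not taken from the paper itself:

```latex
\mathrm{mp}(\pi) \;=\; \liminf_{n \to \infty} \frac{1}{n} \sum_{i=0}^{n-1} w(\pi_i)
```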

The classical and extended occupancy distributions are useful for examining the number of occupied bins in problems involving random allocation of balls to bins. We examine the extended occupancy problem by framing it as a Markov chain and deriving the spectral decomposition of the transition probability matrix. We look at three distributions of interest that arise from the problem, all involving the noncentral Stirling numbers of the second kind. These distributions provide useful generalisations of the binomial and negative-binomial distributions. We examine how these distributions relate to one another, and we derive recursive properties and mixture properties that characterise the distributions.
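To make the Markov-chain framing concrete, the sketch below builds the transition matrix for the classical occupancy problem, where the state is the number of occupied bins and each new ball lands in an already-occupied bin with probability k/n. The extended problem generalises this chain; the code illustrates the classical case only:

```python
import numpy as np

def classical_occupancy_transition(n):
    """Transition matrix over states k = 0..n (number of occupied bins):
    a new ball lands in an occupied bin w.p. k/n, in a new bin w.p. (n-k)/n."""
    P = np.zeros((n + 1, n + 1))
    for k in range(n + 1):
        P[k, k] = k / n
        if k < n:
            P[k, k + 1] = (n - k) / n
    return P

P = classical_occupancy_transition(4)
# Distribution of the number of occupied bins after 3 balls, starting empty:
e0 = np.zeros(5)
e0[0] = 1.0
print(e0 @ np.linalg.matrix_power(P, 3))  # [0, 1/16, 9/16, 6/16, 0]
```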

Min-max routing problems aim to minimize the maximum tour length among agents as they collaboratively visit all cities, i.e., the completion time. These problems have impactful real-world applications but are known to be NP-hard. Existing methods face challenges, particularly on large-scale problems that require the coordination of numerous agents to cover thousands of cities. This paper proposes a new deep-learning framework to solve large-scale min-max routing problems. We model the simultaneous decision-making of multiple agents as a sequential generation process, allowing the use of scalable deep-learning models for sequential decision-making. For the sequentially approximated problem, we propose a scalable contextual Transformer model, Equity-Transformer, which generates sequential actions while maintaining an equitable workload across agents. The effectiveness of Equity-Transformer is demonstrated through its superior performance on two representative min-max routing tasks: the min-max multiple traveling salesman problem (min-max mTSP) and the min-max multiple pick-up and delivery problem (min-max mPDP). Notably, our method achieves a roughly 335-fold reduction in runtime and about a 53% reduction in cost compared to a competitive heuristic (LKH3) on min-max mTSP with 100 vehicles and 1,000 cities. We provide reproducible source code: //github.com/kaist-silab/equity-transformer
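A minimal sketch of the min-max objective itself (function names are mine): given each agent's closed tour over 2-D city coordinates, the completion time is the longest tour length among agents:

```python
import math

def tour_length(tour, coords):
    """Length of the closed tour visiting the given city indices in order."""
    return sum(
        math.dist(coords[tour[i]], coords[tour[(i + 1) % len(tour)]])
        for i in range(len(tour))
    )

def completion_time(tours, coords):
    """Min-max objective: the longest tour among all agents."""
    return max(tour_length(t, coords) for t in tours)

coords = [(0, 0), (1, 0), (1, 1), (0, 1)]  # city 0 as a shared depot
tours = [[0, 1, 2], [0, 3]]                # one tour per agent
print(completion_time(tours, coords))      # ~3.414 (the first agent's tour)
```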

Experiential AI is an emerging research field that addresses the challenge of making AI tangible and explicit, both to fuel cultural experiences for audiences, and to make AI systems more accessible to human understanding. The central theme is how artists, scientists and other interdisciplinary actors can come together to understand and communicate the functionality of AI, ML and intelligent robots, their limitations, and consequences, through informative and compelling experiences. It provides an approach and methodology for the arts and tangible experiences to mediate between impenetrable computer code and human understanding, making not just AI systems but also their values and implications more transparent, and therefore accountable. In this paper, we report on an empirical case study of an experiential AI system designed for creative data exploration of a user-defined dimension, to enable creators to gain more creative control over the AI process. We discuss how experiential AI can increase legibility and agency for artists, and how the arts can provide creative strategies and methods which can add to the toolbox for human-centred XAI.

We consider the problem of solving linear least squares problems in a framework where only evaluations of the linear map are possible. We derive randomized methods that need no matrix operations other than forward evaluations; in particular, no evaluation of the adjoint map is needed. Our method is motivated by the simple observation that one can obtain an unbiased estimate of the application of the adjoint. We show convergence of the method and then derive a more efficient method that uses an exact linesearch. This method, called random descent, resembles known methods in other contexts and has the randomized coordinate descent method as a special case. We provide a convergence analysis of the random descent method, emphasizing the dependence on the underlying distribution of the random vectors. Furthermore, we investigate the applicability of the method in the context of ill-posed inverse problems and show that the method can have beneficial properties when the unknown solution is rough. We illustrate the theoretical findings in numerical examples. One particular result is that the random descent method actually outperforms established transpose-free methods (TFQMR and CGS) in examples.
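The key observation can be made concrete: if $v$ is a random vector with $\mathbb{E}[vv^T] = I$, then $\mathbb{E}[\langle Av, r\rangle v] = A^T r$, so the gradient of the least squares objective is estimable from forward evaluations alone. The sketch below is one plausible reading of the resulting random-descent iteration with exact linesearch (naming and setup are mine, not the paper's exact algorithm); choosing $v$ as random coordinate vectors recovers randomized coordinate descent:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 50, 30
A = rng.standard_normal((m, n))
b = A @ rng.standard_normal(n)     # consistent system for illustration

def matvec(v):                     # forward evaluations are the only access to A
    return A @ v

x = np.zeros(n)
r = matvec(x) - b
for _ in range(2000):
    v = rng.standard_normal(n)     # random direction with E[v v^T] = I
    Av = matvec(v)
    step = (Av @ r) / (Av @ Av)    # exact linesearch along v
    x -= step * v
    r -= step * Av                 # update residual without re-evaluating A @ x
print(np.linalg.norm(r))           # residual norm is non-increasing by construction
```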

Recently, the use of deep equilibrium methods has emerged as a new approach for solving imaging and other ill-posed inverse problems. While learned components may be a key factor in the good performance of these methods in practice, a theoretical justification from a regularization point of view is still lacking. In this paper, we address this issue by providing stability and convergence results for the class of equilibrium methods. In addition, we derive convergence rates and stability estimates in the symmetric Bregman distance. We strengthen our results for regularization operators with contractive residuals. Furthermore, we use the presented analysis to gain insight into the practical behavior of these methods, including a lower bound on the performance of the regularized solutions. In addition, we show that the convergence analysis leads to the design of a new type of loss function which has several advantages over previous ones. Numerical simulations are used to support our findings.
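As background on how equilibrium methods operate, the sketch below solves a generic deep equilibrium layer $z = f(z, x)$ by plain fixed-point iteration; the toy layer is contractive, mirroring the contractive-residual setting in which the abstract strengthens its results. All names and the toy layer are illustrative assumptions, not the paper's construction:

```python
import numpy as np

def solve_equilibrium(f, x, z0, tol=1e-8, max_iter=500):
    """Find z with z = f(z, x) by fixed-point iteration
    (converges when f is contractive in z)."""
    z = z0
    for _ in range(max_iter):
        z_next = f(z, x)
        if np.linalg.norm(z_next - z) < tol:
            return z_next
        z = z_next
    return z

# Toy contractive layer: f(z, x) = tanh(W z + x) with spectral norm of W < 1
rng = np.random.default_rng(1)
W = 0.3 * rng.standard_normal((8, 8)) / np.sqrt(8)
f = lambda z, x: np.tanh(W @ z + x)
x = rng.standard_normal(8)
z_star = solve_equilibrium(f, x, np.zeros(8))
print(np.linalg.norm(z_star - f(z_star, x)))  # ~0: z_star is an equilibrium
```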

Ranking functions such as PageRank assign numeric values (ranks) to nodes of graphs, most notably the web graph. Node rankings are an integral part of Internet search algorithms, since they can be used to order the results of queries. However, these ranking functions are famously subject to attacks by spammers, who modify the web graph in order to give their own pages more rank. We characterize the interplay between rankers and spammers as a game. We define the two critical features of this game, spam resistance and distortion, based on how spammers spam and how rankers protect against spam. We observe that all the ranking functions that are well-studied in the literature, including the original formulation of PageRank, have poor spam resistance, poor distortion, or both. Finally, we study Min-PPR, the form of PageRank used at Google itself, which has nonetheless received no theoretical or empirical treatment in the literature. We prove that Min-PPR has low distortion and high spam resistance. A secondary benefit is that Min-PPR comes with an explicit cost function on nodes that shows how important they are to the spammer; thus a ranker can focus their spam-detection capacity on these vulnerable nodes. Both Min-PPR and its associated cost function are straightforward to compute.
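For background, the original PageRank formulation the paper contrasts against can be computed by power iteration on the damped random-walk matrix; this standard version is sketched below (Min-PPR itself is not specified in the abstract, so it is not reproduced here):

```python
import numpy as np

def pagerank(adj, alpha=0.85, tol=1e-10):
    """Classic PageRank by power iteration."""
    n = adj.shape[0]
    out = adj.sum(axis=1, keepdims=True)
    # Row-normalize; dangling nodes (no out-links) jump uniformly.
    P = np.where(out > 0, adj / np.maximum(out, 1), 1.0 / n)
    r = np.full(n, 1.0 / n)
    while True:
        r_next = alpha * (r @ P) + (1 - alpha) / n
        if np.abs(r_next - r).sum() < tol:
            return r_next
        r = r_next

adj = np.array([[0, 1, 1],
                [1, 0, 0],
                [0, 1, 0]], dtype=float)
print(pagerank(adj))  # ranks sum to 1
```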

We study variance reduction methods for extragradient (EG) algorithms for a class of variational inequalities satisfying a classical error-bound condition. Previously, linear convergence was only known to hold under strong monotonicity. The error-bound condition is much weaker than strong monotonicity and captures a larger class of problems, including bilinear saddle-point problems such as those arising from two-player zero-sum Nash equilibrium computation. We show that EG algorithms with SVRG-style variance reduction (SVRG-EG) achieve linear convergence under the error-bound condition. In addition, motivated by the empirical success of increasing iterate averaging techniques in solving saddle-point problems, we also establish new convergence results for variance-reduced EG with increasing iterate averaging. Finally, we conduct numerical experiments to demonstrate the advantage of SVRG-EG, with and without increasing iterate averaging, over deterministic EG.
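To fix ideas, the deterministic extragradient baseline on an unconstrained bilinear saddle point (one of the problem classes mentioned above) looks as follows; SVRG-EG would replace the exact operator evaluations with variance-reduced stochastic estimates, which this sketch deliberately omits:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5
B = np.eye(d) + 0.1 * rng.standard_normal((d, d))  # well-conditioned for the demo

def F(z):
    """VI operator for the bilinear saddle point min_x max_y x^T B y."""
    x, y = z[:d], z[d:]
    return np.concatenate([B @ y, -B.T @ x])

z = rng.standard_normal(2 * d)
eta = 0.1 / np.linalg.norm(B, 2)  # step size below 1/L, with L = ||B||
for _ in range(5000):
    z_half = z - eta * F(z)       # extrapolation step
    z = z - eta * F(z_half)       # update using the extrapolated operator value
print(np.linalg.norm(z))          # converges toward the unique solution z* = 0
```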

Real-world games, which involve imperfect information, multiple players, and simultaneous moves, are less frequently discussed in the existing game-theory literature. While reinforcement learning (RL) provides a general framework for extending game-theoretic algorithms, the assumptions that guarantee their convergence towards Nash equilibria may no longer hold in real-world games. Starting from the definition of the Nash distribution, we construct a continuous-time dynamic named imperfect-information exponential-decay score-based learning (IESL) to find approximate Nash equilibria in games with the above-mentioned features. Theoretical analysis demonstrates that IESL yields equilibrium-approaching policies in imperfect-information simultaneous games under a basic concavity assumption. Experimental results show that IESL finds approximate Nash equilibria in four canonical poker scenarios and significantly outperforms three other representative algorithms in 3-player Leduc poker, demonstrating its equilibrium-finding ability even in practical sequential games. Furthermore, in connection with the concept of game hypomonotonicity, we observe a trade-off, both theoretically and experimentally, between the convergence of the IESL dynamic and the ultimate NashConv of the convergent policies.
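NashConv, the metric referenced above, is standardly defined as the total payoff the players could gain by unilaterally deviating to best responses. A minimal sketch for a two-player normal-form game (the sequential imperfect-information games of the paper require a more involved best-response computation):

```python
import numpy as np

def nash_conv(A, B, p, q):
    """NashConv for a two-player normal-form game with payoff matrices
    A (row player) and B (column player) under mixed strategies p, q."""
    u_row, u_col = p @ A @ q, p @ B @ q
    br_row = (A @ q).max()  # best pure response value for the row player
    br_col = (p @ B).max()  # best pure response value for the column player
    return (br_row - u_row) + (br_col - u_col)

# Matching pennies: uniform play is the Nash equilibrium, so NashConv = 0.
A = np.array([[1, -1], [-1, 1]], dtype=float)
p = q = np.array([0.5, 0.5])
print(nash_conv(A, -A, p, q))  # 0.0
```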

Graph machine learning has been extensively studied in both academia and industry. However, as the literature on graph learning booms with a vast number of emerging methods and techniques, it becomes increasingly difficult to manually design the optimal machine learning algorithm for different graph-related tasks. To tackle this challenge, automated graph machine learning, which aims to discover the best hyper-parameter and neural architecture configurations for different graph tasks/data without manual design, is attracting increasing attention from the research community. In this paper, we extensively discuss automated graph machine learning approaches, covering hyper-parameter optimization (HPO) and neural architecture search (NAS) for graph machine learning. We briefly overview existing libraries designed for either graph machine learning or automated machine learning, and then introduce in depth AutoGL, our dedicated, and the world's first, open-source library for automated graph machine learning. Last but not least, we share our insights on future research directions for automated graph machine learning. This paper is the first systematic and comprehensive discussion of approaches, libraries, and directions for automated graph machine learning.
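To illustrate the HPO side of such a pipeline, here is a generic random-search loop of the kind these libraries automate; the search space and the `train_and_evaluate` callback are hypothetical placeholders for a user's GNN training routine, not AutoGL's actual API:

```python
import random

# Hypothetical GNN hyper-parameter search space.
search_space = {
    "hidden_dim": [16, 64, 128],
    "num_layers": [2, 3],
    "lr": [1e-2, 5e-3, 1e-3],
    "dropout": [0.0, 0.5],
}

def random_search(train_and_evaluate, n_trials=20, seed=0):
    """Sample configurations and keep the one with the best validation score."""
    rng = random.Random(seed)
    best_cfg, best_score = None, float("-inf")
    for _ in range(n_trials):
        cfg = {k: rng.choice(v) for k, v in search_space.items()}
        score = train_and_evaluate(cfg)  # user-supplied: returns validation accuracy
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score
```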
