
In the MAXSPACE problem, given a set of ads A, one wants to place a subset A' of A into K slots B_1, ..., B_K, each of size L. Each ad A_i in A has a size s_i and a frequency w_i. A schedule is feasible if the total size of the ads in any slot is at most L and each ad A_i in A' appears in exactly w_i slots. The goal is to find a feasible schedule that maximizes the total space occupied in all slots. We introduce MAXSPACE-RDWV, a generalization of MAXSPACE with release dates, deadlines, variable frequencies, and generalized profits. In MAXSPACE-RDWV, each ad A_i has a release date r_i >= 1, a deadline d_i >= r_i, a profit v_i that may be unrelated to s_i, and lower and upper bounds w^min_i and w^max_i on its frequency. In this problem, an ad A_i may only appear in a slot B_j with r_i <= j <= d_i, and the goal is to find a feasible schedule that maximizes the sum of the profits of the scheduled ads. This paper presents algorithms based on the meta-heuristics Greedy Randomized Adaptive Search Procedure (GRASP), Variable Neighborhood Search (VNS), Local Search, and Tabu Search for MAXSPACE and MAXSPACE-RDWV. We compare our algorithms with the Hybrid-GA proposed by Kumar et al. (2006). We also adapt Hybrid-GA to MAXSPACE-RDWV and compare it with our meta-heuristics on this problem. Some of our meta-heuristics, such as VNS and GRASP+VNS, obtain better results than Hybrid-GA on both problems.
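To make the feasibility constraints concrete, the following is a minimal sketch of a greedy construction heuristic for MAXSPACE-RDWV, the kind of routine a GRASP construction phase could start from. The Ad fields mirror the definitions above, but the ordering rule, the function names, and the insertion policy are illustrative assumptions, not the authors' algorithm.

from dataclasses import dataclass

@dataclass
class Ad:
    size: int        # s_i
    profit: float    # v_i
    release: int     # r_i (1-indexed slot)
    deadline: int    # d_i
    w_min: int       # minimum frequency if the ad is scheduled
    w_max: int       # maximum frequency

def greedy_schedule(ads, K, L):
    load = [0] * (K + 1)   # load[j] = space already occupied in slot B_j (index 0 unused)
    assignment = {}        # ad index -> slots it occupies
    # Consider ads in decreasing profit density; this ordering is an assumption.
    order = sorted(range(len(ads)), key=lambda i: ads[i].profit / ads[i].size, reverse=True)
    for i in order:
        ad = ads[i]
        # Slots inside the ad's window [r_i, d_i] with enough residual capacity.
        feasible = [j for j in range(ad.release, min(ad.deadline, K) + 1)
                    if load[j] + ad.size <= L]
        if len(feasible) < ad.w_min:
            continue                      # cannot meet the minimum frequency: skip the ad
        chosen = feasible[:ad.w_max]      # place as many copies as the upper bound allows
        for j in chosen:
            load[j] += ad.size
        assignment[i] = chosen
    return assignment, load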

Related content

Data augmentation has been an indispensable tool for improving the performance of deep neural networks; however, augmentation policies hardly transfer across tasks and datasets. Consequently, a recent trend is to adopt AutoML techniques to learn proper augmentation policies without extensive hand-crafted tuning. In this paper, we propose an efficient differentiable search algorithm called Direct Differentiable Augmentation Search (DDAS). It exploits meta-learning with a one-step gradient update and a continuous relaxation of the expected training loss for efficient search. DDAS achieves efficient augmentation search without relying on approximations such as Gumbel-Softmax or second-order gradient approximation. To further reduce the adverse effect of improper augmentations, we organize the search space into a two-level hierarchy, in which we first decide whether to apply augmentation and then determine the specific augmentation policy. On standard image classification benchmarks, DDAS achieves a state-of-the-art performance-efficiency trade-off while reducing the search cost dramatically, e.g., 0.15 GPU hours for CIFAR-10. In addition, we use DDAS to search augmentations for the object detection task and achieve performance comparable to AutoAugment while being 1000x faster.
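The continuous-relaxation idea can be illustrated by the minimal PyTorch-style sketch below, in which the expected training loss is a probability-weighted sum over candidate augmentations. This only illustrates the relaxation under assumed names (expected_aug_loss, aug_logits); it is not the DDAS algorithm and omits the meta-learning step and the two-level hierarchy.

import torch
import torch.nn.functional as F

def expected_aug_loss(model, x, y, augmentations, aug_logits):
    # aug_logits: learnable tensor with one logit per candidate augmentation.
    probs = torch.softmax(aug_logits, dim=0)  # continuous relaxation of the discrete choice
    losses = torch.stack([F.cross_entropy(model(aug(x)), y) for aug in augmentations])
    return (probs * losses).sum()             # differentiable w.r.t. the model and aug_logits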

We consider a new functional inequality controlling the rate of relative entropy decay for random walks, the interchange process, and more general block-type dynamics for permutations. The inequality lies between the classical logarithmic Sobolev inequality and the modified logarithmic Sobolev inequality, roughly interpolating between the two as the size of the blocks grows. Our results suggest that the new inequality may have some advantages over the latter two well-known inequalities when multi-particle processes are considered. We prove a strong form of tensorization for independent particles interacting through synchronous updates. Moreover, for block dynamics on permutations we compute the optimal constants in all mean-field settings, namely whenever the rate of update of a block depends only on the size of the block. Along the way we establish the independence of the spectral gap from the number of particles for these mean-field processes. As an application of our entropy inequalities we prove a new subadditivity estimate for permutations, which implies a sharp upper bound on the permanent of arbitrary matrices with nonnegative entries, thus resolving a well-known conjecture.
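For reference, the two classical inequalities that the new functional inequality sits between have the following standard forms (normalization conventions vary and may differ from the paper's); here $\pi$ is the stationary distribution, $\mathcal{E}$ the Dirichlet form, and $\mathrm{Ent}_\pi$ the entropy functional:

\[
  \mathrm{Ent}_\pi(f^2) \le C_{\mathrm{LS}}\,\mathcal{E}(f,f) \qquad \text{(log-Sobolev)},
\]
\[
  \mathrm{Ent}_\pi(f) \le C_{\mathrm{MLS}}\,\mathcal{E}(f,\log f), \quad f>0 \qquad \text{(modified log-Sobolev)}.
\]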

In the study of collective motion, it is common practice to collect movement information at the level of the group to infer the characteristics of the individual agents and their interactions. However, it is not clear whether one can always correctly infer individual characteristics from movement data of the collective. We investigate this question in the context of a composite crowd with two groups of agents, each with its own desired direction of motion. A simple observer attempts to classify an agent into its group based on its movement information. However, collective effects such as collisions, entrainment of agents, and the formation of lanes and clusters render the classification problem non-trivial and lead to misclassifications. Based on our understanding of these effects, we propose a new observer algorithm that infers, based only on observed movement information, how the local neighborhood aids or hinders agent movement. Unlike a traditional supervised learning approach, this algorithm is based on physical insights and scaling arguments and does not rely on training data. This new observer improves classification performance and is able to differentiate agents belonging to different groups even when their motion is identical. Data-agnostic approaches like this are relevant to a large class of real-world problems where clean, labeled data is difficult to obtain, and are a step towards hybrid approaches that integrate both data and domain knowledge.

This paper addresses the problem of learning an equilibrium efficiently in general-sum Markov games through decentralized multi-agent reinforcement learning. Given the fundamental difficulty of calculating a Nash equilibrium (NE), we instead aim at finding a coarse correlated equilibrium (CCE), a solution concept that generalizes NE by allowing possible correlations among the agents' strategies. We propose an algorithm in which each agent independently runs optimistic V-learning (a variant of Q-learning) to efficiently explore the unknown environment, while using a stabilized online mirror descent (OMD) subroutine for policy updates. We show that the agents can find an $\epsilon$-approximate CCE in at most $\widetilde{O}( H^6S A /\epsilon^2)$ episodes, where $S$ is the number of states, $A$ is the size of the largest individual action space, and $H$ is the length of an episode. This appears to be the first sample complexity result for learning in generic general-sum Markov games. Our results rely on a novel investigation of an anytime high-probability regret bound for OMD with a dynamic learning rate and weighted regret, which would be of independent interest. One key feature of our algorithm is that it is fully \emph{decentralized}, in the sense that each agent has access to only its local information, and is completely oblivious to the presence of others. This way, our algorithm can readily scale up to an arbitrary number of agents, without suffering from the exponential dependence on the number of agents.
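As a point of reference, a plain (unstabilized) entropy-regularized OMD step over a finite action set reduces to a multiplicative-weights update, sketched below. The paper's stabilized variant, dynamic learning rate, and weighted-regret analysis are not reproduced here, and the function name is illustrative.

import numpy as np

def omd_policy_update(policy, action_values, lr):
    # One entropy-regularized online mirror descent step: multiply each action's
    # probability by exp(lr * value) and renormalize. Assumes policy is strictly positive.
    logits = np.log(policy) + lr * action_values
    logits -= logits.max()                 # numerical stability
    new_policy = np.exp(logits)
    return new_policy / new_policy.sum()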

We argue that proven exponential upper bounds on runtimes, an established area in classic algorithms, are also of interest in heuristic search, and we prove several such results. We show that any of the algorithms randomized local search, the Metropolis algorithm, simulated annealing, and the (1+1) evolutionary algorithm can optimize any pseudo-Boolean weakly monotonic function under a large set of noise assumptions in a runtime that is at most exponential in the problem dimension $n$. This drastically extends a previous such result, which was limited to the (1+1) EA, the LeadingOnes function, and one-bit or bit-wise prior noise with noise probability at most $1/2$, and at the same time simplifies its proof. With the same general argument we also derive, among other results, a sub-exponential upper bound for the runtime of the $(1,\lambda)$ evolutionary algorithm on the OneMax problem when the offspring population size $\lambda$ is logarithmic but below the efficiency threshold. To show that our approach can also deal with non-trivial parent population sizes, we prove an exponential upper bound for the runtime of the mutation-based version of the simple genetic algorithm on the OneMax benchmark, matching a known exponential lower bound.
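For concreteness, the (1+1) evolutionary algorithm referenced above is the elitist single-individual heuristic sketched below, here in a minimal textbook form for maximizing a pseudo-Boolean function and without the noise model studied in the paper.

import random

def one_plus_one_ea(f, n, max_iters):
    # (1+1) EA: keep a single bit string, flip each bit independently with
    # probability 1/n, and accept the offspring if it is at least as good.
    x = [random.randint(0, 1) for _ in range(n)]
    for _ in range(max_iters):
        y = [1 - b if random.random() < 1.0 / n else b for b in x]
        if f(y) >= f(x):
            x = y
    return x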

In this article, we develop and analyse a new spectral method to solve the semi-classical Schr\"odinger equation based on the Gaussian wave-packet transform (GWPT) and Hagedorn's semi-classical wave-packets (HWP). The GWPT equivalently recasts the highly oscillatory wave equation as a much less oscillatory one (the $w$ equation) coupled with a set of ordinary differential equations governing the dynamics of the so-called GWPT parameters. The Hamiltonian of the $w$ equation consists of a quadratic part and a small non-quadratic perturbation of order $\mathcal{O}(\sqrt{\varepsilon})$, where $\varepsilon \ll 1$ is the rescaled Planck's constant. By expanding the solution of the $w$ equation as a superposition of Hagedorn's wave-packets, we construct a spectral method in which the $\mathcal{O}(\sqrt{\varepsilon})$ perturbation part is treated by a Galerkin approximation. This numerical implementation of the GWPT avoids imposing artificial boundary conditions and facilitates rigorous numerical analysis. For arbitrary dimensions, we establish how the error of solving the semi-classical Schr\"odinger equation with the GWPT is determined by the errors of solving the $w$ equation and the GWPT parameters. We prove that, in one dimension, the scheme converges spectrally with respect to the number of Hagedorn's wave-packets. Extensive numerical tests are provided to demonstrate the properties of the proposed method.
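For reference, the semi-classical Schr\"odinger equation in question is usually written in the following standard form (the potential $V$ and the normalization may differ from the paper's):

\[
  i\varepsilon\,\partial_t \psi(x,t)
  = -\frac{\varepsilon^2}{2}\,\Delta \psi(x,t) + V(x)\,\psi(x,t),
  \qquad 0 < \varepsilon \ll 1 .
\]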

Recent research has revealed that the security of deep neural networks that directly process 3D point clouds to classify objects can be threatened by adversarial samples. Although existing adversarial attack methods achieve high success rates, they do not restrict the point modifications enough to preserve the appearance of the point cloud. To overcome this shortcoming, two constraints are proposed: hard bounds on the number of modified points and on the point perturbation norms. Due to the restrictive nature of the problem, the search space contains many local maxima. The proposed method addresses this issue by using a large step size at the beginning of the algorithm to search the main surface of the point cloud quickly and effectively; the step size is then gradually decreased so that the search converges to the desired output. To evaluate the performance of the proposed method, we run it on the ModelNet40 and ScanObjectNN datasets with state-of-the-art point cloud classification models, including PointNet, PointNet++, and DGCNN. The results show that it performs successful attacks and achieves state-of-the-art results with only a limited number of point modifications while preserving the appearance of the point cloud. Moreover, thanks to the effective search algorithm, it can perform successful attacks in just a few steps. Additionally, the proposed step-size scheduling algorithm yields an improvement of up to $14.5\%$ when adopted by other methods. The proposed method also performs effectively against popular defense methods.
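Below is a minimal sketch of a decaying step-size schedule of the kind described above, geometrically interpolating from a large exploratory step to a small refinement step. The exact schedule used in the paper is not specified in the abstract, so the functional form and names here are assumptions.

def step_size(initial, final, step, total_steps):
    # Geometric decay from `initial` down to `final` over the attack iterations:
    # a large early step explores the surface, a small late step refines the perturbation.
    frac = step / max(total_steps - 1, 1)
    return initial * (final / initial) ** frac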

In this paper, we propose a deep reinforcement learning framework called GCOMB to learn algorithms that solve combinatorial problems over large graphs. GCOMB mimics the greedy algorithm for the original problem and incrementally constructs a solution. The proposed framework uses a Graph Convolutional Network (GCN) to generate node embeddings that predict, from the entire node set, which nodes are likely to belong to the solution set. These embeddings enable an efficient training process that learns the greedy policy via Q-learning. Through extensive evaluation on several real and synthetic datasets containing up to a million nodes, we establish that GCOMB is up to 41% better than the state of the art, up to seven times faster than the greedy algorithm, and robust and scalable to large dynamic networks.
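The greedy construction that such a framework imitates can be sketched as follows; q_value stands in for the learned GCN/Q-network and is an assumed name, so this only illustrates the decoding loop, not GCOMB itself.

def greedy_construct(candidates, q_value, budget):
    # Repeatedly add the node with the highest predicted marginal value,
    # as a learned stand-in for the problem's greedy algorithm.
    solution = []
    remaining = set(candidates)
    for _ in range(min(budget, len(remaining))):
        best = max(remaining, key=lambda v: q_value(v, solution))
        solution.append(best)
        remaining.remove(best)
    return solution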

Policy gradient methods are widely used in reinforcement learning to search for better policies in a parameterized policy space. They perform gradient search in the policy space and are known to converge very slowly. Nesterov developed an accelerated gradient search algorithm for convex optimization problems, which has recently been extended to non-convex and stochastic optimization. We use Nesterov's acceleration for the policy gradient search in the well-known actor-critic algorithm and show convergence using the ODE method. We test this algorithm on a scheduling problem in which an incoming job is scheduled into one of four queues based on the queue lengths. The experimental results show that the algorithm with Nesterov's acceleration performs significantly better than the one without it. To the best of our knowledge, this is the first time Nesterov's acceleration has been used with an actor-critic algorithm.
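A minimal sketch of a Nesterov-style accelerated update of the actor parameters is shown below; grad_fn stands for a critic-based policy-gradient estimate, and the names and the coupling with the critic are illustrative assumptions rather than the paper's exact scheme.

import numpy as np

def nesterov_actor_step(theta, theta_prev, grad_fn, lr, momentum):
    # Form the lookahead (momentum) point, then take a gradient-ascent step on
    # the policy objective evaluated at that point.
    lookahead = theta + momentum * (theta - theta_prev)
    new_theta = lookahead + lr * grad_fn(lookahead)
    return new_theta, theta  # new parameters and the previous iterate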

Superpixel segmentation has become an important research problem in image processing. In this paper, we propose the Iterative Spanning Forest (ISF) framework, based on sequences of Image Foresting Transforms, in which one can choose i) a seed sampling strategy, ii) a connectivity function, iii) an adjacency relation, and iv) a seed pixel recomputation procedure to generate improved sets of connected superpixels (supervoxels in 3D) at each iteration. The superpixels in ISF structurally correspond to spanning trees rooted at the seeds. We present five ISF methods to illustrate different choices of its components and compare them with state-of-the-art approaches in effectiveness and efficiency. The experiments involve 2D and 3D datasets with distinct characteristics and a high-level application, sky image segmentation. The theoretical properties of ISF are demonstrated in the supplementary material, and the results show that some of its methods are competitive with or superior to the best baselines in effectiveness and efficiency.
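The overall ISF loop can be sketched as follows; the component functions (sample_seeds, ift, recompute_seeds) stand for the user-chosen ingredients listed above, and the names here are illustrative.

def iterative_spanning_forest(image, sample_seeds, ift, recompute_seeds, n_iters):
    # Each iteration runs an Image Foresting Transform from the current seeds
    # (producing spanning trees rooted at them) and then recomputes the seeds,
    # e.g. moving each seed toward its superpixel's centroid.
    seeds = sample_seeds(image)
    labels = None
    for _ in range(n_iters):
        labels = ift(image, seeds)
        seeds = recompute_seeds(image, labels)
    return labels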
