
Consider oriented graph nodes requiring periodic visits by a service agent. The agent moves among the nodes and receives a payoff for each completed service task, depending on the time elapsed since the previous visit to that node. We consider the problem of finding a suitable schedule for the agent that maximizes its long-run average payoff per time unit. We show that the problem of constructing an $\varepsilon$-optimal schedule is PSPACE-hard for every fixed $\varepsilon \geq 0$, and that there exists an optimal periodic schedule of exponential length. We propose randomized finite-memory (RFM) schedules as a compact description of the agent's strategies and design an efficient algorithm for constructing RFM schedules. Furthermore, we construct deterministic periodic schedules by sampling from RFM schedules.
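To make the objective concrete, here is a small simulation sketch (not the paper's construction): it runs a memoryless one-state RFM schedule on a toy oriented graph and estimates the long-run average payoff per time unit. The graph, travel times, and saturating payoff function are all illustrative assumptions.

```python
import random

# A minimal sketch (not the paper's algorithm): simulate a randomized
# finite-memory (RFM) schedule on an oriented graph and estimate its
# long-run average payoff per time unit. All names and the linear
# saturating payoff function are illustrative assumptions.

edges = {"a": ["b"], "b": ["c", "a"], "c": ["a"]}          # oriented graph
travel = {("a", "b"): 1, ("b", "c"): 2, ("b", "a"): 1, ("c", "a"): 1}

def payoff(elapsed, cap=5.0):
    # Payoff for servicing a node grows with the time since its last
    # visit, saturating at `cap` (an assumed shape).
    return min(elapsed, cap)

def simulate(steps=100_000, seed=0):
    rng = random.Random(seed)
    now, node = 0.0, "a"
    last_visit = {v: 0.0 for v in edges}
    total = 0.0
    for _ in range(steps):
        nxt = rng.choice(edges[node])        # memoryless = 1-state RFM
        now += travel[(node, nxt)]
        total += payoff(now - last_visit[nxt])
        last_visit[nxt] = now
        node = nxt
    return total / now                       # average payoff per time unit

print(f"estimated mean payoff: {simulate():.3f}")
```

A general RFM schedule would additionally carry a finite memory state and randomize transitions based on it; the one-state case above is the simplest instance.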

Related content

While multilinear algebra appears natural for studying the multiway interactions modeled by hypergraphs, tensor methods for general hypergraphs have been stymied by theoretical and practical barriers. A recently proposed adjacency tensor is applicable to nonuniform hypergraphs, but is prohibitively costly to form and analyze in practice. We develop tensor times same vector (TTSV) algorithms for this tensor which improve complexity from $O(n^r)$ to a low-degree polynomial in $r$, where $n$ is the number of vertices and $r$ is the maximum hyperedge size. Our algorithms are implicit, avoiding formation of the order $r$ adjacency tensor. We demonstrate the flexibility and utility of our approach in practice by developing tensor-based hypergraph centrality and clustering algorithms. We also show these tensor measures offer complementary information to analogous graph-reduction approaches on data, and are also able to detect higher-order structure that many existing matrix-based approaches provably cannot.
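As a simplified illustration of the implicit-evaluation idea, restricted to the $r$-uniform case rather than the paper's general nonuniform adjacency tensor, the sketch below computes a tensor-times-same-vector product by iterating over hyperedges, so the cost scales with the hyperedge list rather than $O(n^r)$.

```python
import numpy as np

# A minimal sketch, for the simpler r-uniform case rather than the
# paper's general nonuniform adjacency tensor: compute b = A x^{r-1}
# implicitly by iterating over hyperedges instead of forming the
# O(n^r)-entry tensor A. Weights/normalization are assumed uniform.

def ttsv1_uniform(hyperedges, x, n):
    """b[i] = sum over hyperedges e containing i of prod_{j in e, j != i} x[j]."""
    b = np.zeros(n)
    for e in hyperedges:
        prod = np.prod([x[j] for j in e])
        for i in e:
            # Divide out x[i]; assumes x[i] != 0 (a simplification).
            b[i] += prod / x[i]
    return b

# Tiny 3-uniform example with 4 vertices.
hyperedges = [(0, 1, 2), (1, 2, 3)]
x = np.array([1.0, 2.0, 3.0, 4.0])
print(ttsv1_uniform(hyperedges, x, n=4))   # cost O(sum of |e|), not O(n^r)
```

Iterating this map with normalization yields a power-method-style centrality; the paper's TTSV algorithms additionally handle the blow-up structure of the nonuniform adjacency tensor, which this simplification omits.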

Screw and Lie group theory allows for user-friendly modeling of multibody systems (MBS) while at the same time giving rise to computationally efficient recursive algorithms. The inherent frame invariance of such formulations allows for the use of arbitrary reference frames within the kinematics modeling (rather than obeying modeling conventions such as the Denavit-Hartenberg convention) and avoids the introduction of joint frames. The computational efficiency is owed to a representation of twists, accelerations, and wrenches that minimizes the computational effort. This can be directly carried over to dynamics formulations. In this paper, recursive $O(n)$ Newton-Euler algorithms are derived for the four most frequently used representations of twists, and their specific features are discussed. These formulations are related to the corresponding algorithms that were presented in the literature. The MBS motion equations are derived in closed form using the Lie group formulation. One set is the so-called 'Euler-Jourdain' or 'projection' equations, of which Kane's equations are a special case, and the other is the Lagrange equations. The recursive kinematics formulations are readily extended to higher orders in order to compute derivatives of the motion equations. To this end, recursive formulations for the acceleration and jerk are derived. It is briefly discussed how this can be employed for the derivation of the linearized motion equations and their time derivatives. The geometric modeling allows for direct application of Lie group integration methods, which is also briefly discussed.
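For orientation on the recursive structure, the sketch below implements one standard body-fixed form of the forward twist recursion, $V_i = \mathrm{Ad}_{T_{i,i-1}} V_{i-1} + S_i \dot{q}_i$, with twists ordered as $(\omega, v)$. The two-link chain data are illustrative, and the paper's four twist representations differ precisely in conventions like this one.

```python
import numpy as np

# A minimal sketch of the forward (outward) twist recursion used by
# recursive Newton-Euler algorithms, in body-fixed coordinates with
# twists ordered as (omega, v). The chain data below are illustrative;
# conventions differ across the four twist representations discussed
# in the paper.

def hat(w):
    return np.array([[0, -w[2], w[1]],
                     [w[2], 0, -w[0]],
                     [-w[1], w[0], 0]])

def Ad(R, p):
    # Adjoint of the SE(3) transform (R, p) acting on (omega, v) twists.
    A = np.zeros((6, 6))
    A[:3, :3] = R
    A[3:, :3] = hat(p) @ R
    A[3:, 3:] = R
    return A

def forward_twists(transforms, screw_axes, qdot):
    # transforms[i] = (R, p) mapping twists from frame i-1 to frame i;
    # screw_axes[i] = joint screw coordinates in body frame i.
    V = np.zeros(6)
    out = []
    for (R, p), S, qd in zip(transforms, screw_axes, qdot):
        V = Ad(R, p) @ V + S * qd
        out.append(V)
    return out

# Two revolute joints about local z, unit link offset along x.
I, ez = np.eye(3), np.array([0.0, 0.0, 1.0])
T = [(I, np.zeros(3)), (I, np.array([1.0, 0.0, 0.0]))]
S = [np.concatenate([ez, np.zeros(3)])] * 2
for i, V in enumerate(forward_twists(T, S, [1.0, 0.5]), 1):
    print(f"body {i} twist:", V.round(3))
```

The backward recursion for wrenches and the assembly of the closed-form motion equations follow the same adjoint-based pattern.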

Motivated by distribution problems arising in the supply chain of Haleon, we investigate a discrete optimization problem that we call the "container delivery scheduling problem". The problem models a supplier dispatching ordered products with shipping containers from manufacturing sites to distribution centers, where orders are collected by the buyers at agreed due times. The supplier may expedite or delay item deliveries to reduce transshipment costs at the price of increasing inventory costs, as measured by the number of containers and distribution center storage/backlog costs, respectively. The goal is to compute a delivery schedule attaining good trade-offs between the two. This container delivery scheduling problem is a temporal variant of classic bin packing problems, where the item sizes are not fixed, but depend on the item due times and delivery times. An approach for solving the problem should specify a batching policy for container consolidation and a scheduling policy for deciding when each container should be delivered. Based on the available item due times, we develop algorithms with sequential and nested batching policies as well as on-time and delay-tolerant scheduling policies. We elaborate on the problem's hardness and substantiate the proposed algorithms with positive and negative approximation bounds, including the derivation of an algorithm achieving an asymptotically tight 2-approximation ratio.
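As a concrete, deliberately simplified illustration of the two policy ingredients, the sketch below combines a sequential batching rule with on-time scheduling: items are scanned in due-time order, a container is closed when full, and each container is delivered at the earliest due time among its items, so no item is late but early items incur storage. The capacity and order data are assumed for the example; the paper's algorithms and guarantees are more refined.

```python
# A minimal sketch (illustrative, not the paper's algorithms) of a
# sequential batching policy with on-time scheduling: scan items in
# due-time order, close a container whenever it is full, and deliver
# each container at the earliest due time among its items.

CAPACITY = 3  # assumed container size, in item units

def sequential_on_time(items):
    """items: list of (item_id, size, due_time); returns (delivery_time, batch) pairs."""
    items = sorted(items, key=lambda it: it[2])        # order by due time
    schedule, batch, load = [], [], 0
    for item_id, size, due in items:
        if load + size > CAPACITY:                     # close current container
            schedule.append((min(d for _, _, d in batch), batch))
            batch, load = [], 0
        batch.append((item_id, size, due))
        load += size
    if batch:
        schedule.append((min(d for _, _, d in batch), batch))
    return schedule

orders = [("o1", 1, 5), ("o2", 2, 5), ("o3", 1, 7), ("o4", 2, 9)]
for when, batch in sequential_on_time(orders):
    print(f"deliver at t={when}: {[i for i, _, _ in batch]}")
```

Delaying a container instead (a delay-tolerant policy) would trade the storage cost of early items against backlog cost for late ones, which is exactly the trade-off the paper quantifies.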

Wasserstein gradient flows on probability measures have found a host of applications in various optimization problems. They typically arise as the continuum limit of exchangeable particle systems evolving by some mean-field interaction involving a gradient-type potential. However, in many problems, such as in multi-layer neural networks, the so-called particles are edge weights on large graphs whose nodes are exchangeable. Such large graphs are known to converge to continuum limits called graphons as their size grows to infinity. We show that the Euclidean gradient flow of a suitable function of the edge weights converges to a novel continuum limit given by a curve on the space of graphons that can be appropriately described as a gradient flow or, more technically, a curve of maximal slope. Several natural functions on graphons, such as homomorphism functions and the scalar entropy, are covered by our set-up, and the examples are worked out in detail.
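For orientation, the metric-space notion the abstract alludes to is the standard one of Ambrosio, Gigli, and Savaré: a curve of maximal slope is characterized by an energy dissipation inequality, stated below in its generic form (not the paper's exact graphon formulation).

```latex
% Generic energy dissipation inequality characterizing a curve of
% maximal slope for a functional F on a metric space.
\[
  F(x_s) - F(x_t) \;\ge\; \frac{1}{2}\int_s^t |x'_r|^2 \, dr
  \;+\; \frac{1}{2}\int_s^t |\partial F|^2(x_r) \, dr
  \qquad \text{for all } 0 \le s \le t,
\]
% where |x'_r| is the metric derivative of the curve and |\partial F|
% is the (local) slope of F.
```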

The core numbers of vertices in a graph are one of the most well-studied cohesive subgraph models, owing to their linear-time computability. In practice, many data graphs are dynamic, continuously changing through edge insertions and removals. Updating the core numbers in dynamic graphs under edge insertions and deletions is called core maintenance. When a burst of a large number of inserted or removed edges comes in, we have to handle these edges on time to keep up with the data stream. There are two main sequential algorithms for core maintenance, \textsc{Traversal} and \textsc{Order}. It has been proved that the \textsc{Order} algorithm significantly outperforms the \textsc{Traversal} algorithm over all tested graphs, with speedups of up to 2,083x. To the best of our knowledge, all existing parallel approaches are based on the \textsc{Traversal} algorithm; moreover, their parallelism exists only for affected vertices with different core numbers, so they reduce to sequential execution when all vertices have the same core number. In this paper, we propose a new parallel core maintenance algorithm based on the \textsc{Order} algorithm. Importantly, our new approach always has parallelism, even for graphs where all vertices have the same core number. Extensive experiments are conducted over real-world, temporal, and synthetic graphs on a 64-core machine. The results show that, for inserting and removing 100,000 edges using 16 workers, our method achieves up to 289x and 10x speedups, respectively, compared with the most efficient existing method.
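For reference, the linear-time peeling algorithm below computes the core numbers being maintained; it is the static batch baseline, not the \textsc{Order}-based maintenance algorithm, and the bucket-queue structure is one standard implementation choice.

```python
from collections import defaultdict

# A minimal sketch of the static linear-time peeling algorithm that
# defines core numbers; dynamic core maintenance avoids rerunning this
# from scratch after each edge insertion or deletion.

def core_numbers(adj):
    """adj: dict vertex -> set of neighbors; returns dict vertex -> core number."""
    degree = {v: len(ns) for v, ns in adj.items()}
    buckets = defaultdict(list)                 # bucket queue keyed by degree
    for v, d in degree.items():
        buckets[d].append(v)
    core, removed = {}, set()
    for d in range(len(adj)):
        while buckets[d]:
            v = buckets[d].pop()
            if v in removed or degree[v] != d:  # stale bucket entry
                continue
            core[v] = d
            removed.add(v)
            for u in adj[v]:
                if u not in removed and degree[u] > d:
                    degree[u] -= 1
                    buckets[degree[u]].append(u)
    return core

triangle_plus_tail = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
print(core_numbers(triangle_plus_tail))        # {3: 1, 2: 2, 1: 2, 0: 2}
```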

Many applications rely on solving time-dependent partial differential equations (PDEs) that include second derivatives. Summation-by-parts (SBP) operators are crucial for developing stable, high-order accurate numerical methodologies for such problems. Conventionally, SBP operators are tailored to the assumption that polynomials accurately approximate the solution, and SBP operators should thus be exact for them. However, this assumption falls short for a range of problems for which other approximation spaces are better suited. We recently addressed this issue and developed a theory for first-derivative SBP operators based on general function spaces, coined function-space SBP (FSBP) operators. In this paper, we extend the innovation of FSBP operators to accommodate second derivatives. The developed second-derivative FSBP operators maintain the desired mimetic properties of existing polynomial SBP operators while allowing for greater flexibility by being applicable to a broader range of function spaces. We establish the existence of these operators and detail a straightforward methodology for constructing them. By exploring various function spaces, including trigonometric, exponential, and radial basis functions, we illustrate the versatility of our approach. We showcase the superior performance of these non-polynomial FSBP operators over traditional polynomial-based operators for a suite of one- and two-dimensional problems, encompassing a boundary layer problem and the viscous Burgers' equation. The work presented here opens up possibilities for using second-derivative SBP operators based on suitable function spaces, paving the way for a wide range of applications in the future.
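For readers new to SBP operators, the sketch below numerically verifies the defining property $HD + (HD)^T = B$ for the classical second-order polynomial first-derivative SBP operator; FSBP operators satisfy the same identity, with exactness imposed on a general function space instead of on polynomials. The grid size is arbitrary.

```python
import numpy as np

# A minimal numerical check of the defining SBP property
#   H D + (H D)^T = B = diag(-1, 0, ..., 0, 1),
# shown for the classical second-order polynomial SBP first-derivative
# operator, the simplest member of the family the paper generalizes.

n, h = 8, 1.0 / 7                      # 8 grid nodes on [0, 1]
D = np.zeros((n, n))
D[0, :2] = [-1, 1]                     # one-sided boundary stencils
D[-1, -2:] = [-1, 1]
for i in range(1, n - 1):              # central interior stencil
    D[i, i - 1], D[i, i + 1] = -0.5, 0.5
D /= h
H = h * np.diag([0.5] + [1.0] * (n - 2) + [0.5])   # quadrature (norm) matrix

Q = H @ D
B = np.zeros((n, n)); B[0, 0], B[-1, -1] = -1, 1
assert np.allclose(Q + Q.T, B)         # summation-by-parts mimics integration by parts

x = np.linspace(0, 1, n)
print(np.allclose(D @ x, np.ones(n)))  # exact for linear functions: True
```

An FSBP first-derivative operator replaces the exactness test on polynomials with exactness on, e.g., trigonometric or exponential basis functions while keeping the same algebraic identity.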

In this paper, we are concerned with efficiently solving the sequences of regularized linear least squares problems associated with employing Tikhonov-type regularization with regularization operators designed to enforce edge recovery. An optimal regularization parameter, which balances the fidelity to the data with the edge-enforcing constraint term, is typically not known a priori. This adds to the total number of regularized linear least squares problems that must be solved before the final image can be recovered. Therefore, in this paper, we determine effective multigrid preconditioners for these sequences of systems. We focus our approach on the sequences that arise as a result of the edge-preserving method introduced in [6], where we can exploit an interpretation of the regularization term as a diffusion operator; however, our methods are also applicable in other edge-preserving settings, such as iteratively reweighted least squares problems. Particular attention is paid to the selection of components of the multigrid preconditioner in order to achieve robustness for different ranges of the regularization parameter value. In addition, we present a parameter culling approach that, when used with the L-curve heuristic, reduces the total number of solves required. We demonstrate our preconditioning and parameter culling routines on examples in computed tomography and image deblurring.
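To fix ideas, the sketch below sets up the kind of sequence the paper targets: one Tikhonov-type normal-equation solve per trial regularization parameter, recording the residual and seminorm pairs that feed an L-curve heuristic. The small dense solve stands in for the multigrid-preconditioned iterations; $A$, $L$, and $b$ are synthetic.

```python
import numpy as np

# A minimal sketch of the sequence of regularized least squares solves:
#   (A^T A + lam * L^T L) x = A^T b   for each trial parameter lam.
# The paper's contribution is preconditioning these solves with
# multigrid; here a direct solve stands in for that iteration.

rng = np.random.default_rng(0)
n = 50
A = rng.standard_normal((n, n)) / np.sqrt(n)
L = np.eye(n) - np.eye(n, k=1)                 # 1-D difference operator
x_true = np.cumsum(rng.standard_normal(n) > 1.0).astype(float)  # piecewise constant
b = A @ x_true + 1e-2 * rng.standard_normal(n)

for lam in np.logspace(-6, 0, 7):              # the *sequence* of systems
    x = np.linalg.solve(A.T @ A + lam * (L.T @ L), A.T @ b)
    print(f"lam={lam:.0e}  residual={np.linalg.norm(A @ x - b):.3e}  "
          f"seminorm={np.linalg.norm(L @ x):.3e}")
```

The L-curve heuristic picks the corner of this (residual, seminorm) curve in log-log scale; parameter culling aims to reach that corner with fewer solves.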

In this paper, an online game is studied, where at each time step a group of players aim at selfishly minimizing their own time-varying cost functions simultaneously, subject to time-varying coupled constraints and local feasible set constraints. Only local cost functions and local constraints are available to individual players, who can share limited information with their neighbors through a fixed and connected graph. In addition, players have no prior knowledge of future cost functions and future local constraint functions. In this setting, a novel decentralized online learning algorithm is devised based on mirror descent and a primal-dual strategy. The proposed algorithm achieves sublinearly bounded regrets and constraint violation by appropriately choosing decaying stepsizes. Furthermore, it is shown that the sequence of play generated by the designed algorithm converges to the variational generalized Nash equilibrium (GNE) of a strongly monotone game, to which the online game converges. Additionally, a payoff-based case, i.e., a bandit feedback setting, is also considered, and a new payoff-based learning policy is devised to generate sublinear regrets and constraint violation. Finally, the obtained theoretical results are corroborated by numerical simulations.
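As a single-player, Euclidean-mirror-map illustration of this kind of primal-dual online update (with a Euclidean mirror map, the mirror descent step reduces to projected gradient descent), the sketch below uses decaying stepsizes, a time-varying quadratic cost, and one coupled constraint handled via a dual multiplier; all problem data are invented for the example.

```python
import numpy as np

# A minimal single-player primal-dual online update: decaying
# stepsizes, a time-varying cost f_t(x) = (x - theta_t)^2 revealed
# after acting, and one constraint g(x) = x - 0.5 <= 0 handled
# through a dual multiplier. All problem data are illustrative.

T = 2000
x, mu = 0.0, 0.0                         # primal action, dual multiplier
cum_violation = 0.0
rng = np.random.default_rng(1)

for t in range(1, T + 1):
    eta = 1.0 / np.sqrt(t)               # decaying stepsize
    theta = 0.8 + 0.1 * rng.standard_normal()
    grad_f = 2.0 * (x - theta)           # gradient of the period-t cost
    grad_g = 1.0                         # gradient of g(x) = x - 0.5
    # Primal step: Lagrangian gradient, projected onto X = [-1, 1].
    x = np.clip(x - eta * (grad_f + mu * grad_g), -1.0, 1.0)
    # Dual step: ascent on the observed constraint value, kept nonnegative.
    mu = max(0.0, mu + eta * (x - 0.5))
    cum_violation += max(0.0, x - 0.5)

print(f"final action={x:.3f}, multiplier={mu:.3f}, "
      f"avg violation={cum_violation / T:.4f}")
```

In the paper's multi-player setting, each player additionally mixes neighbors' information over the communication graph, and the bandit variant replaces the gradients with payoff-based estimates.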

Deep neural networks (DNNs) have achieved unprecedented success in the field of artificial intelligence (AI), including computer vision, natural language processing and speech recognition. However, their superior performance comes at the considerable cost of computational complexity, which greatly hinders their applications in many resource-constrained devices, such as mobile phones and Internet of Things (IoT) devices. Therefore, methods and techniques that are able to lift the efficiency bottleneck while preserving the high accuracy of DNNs are in great demand in order to enable numerous edge AI applications. This paper provides an overview of efficient deep learning methods, systems and applications. We start by introducing popular model compression methods, including pruning, factorization, quantization as well as compact model design. To reduce the large design cost of these manual solutions, we discuss the AutoML framework for each of them, such as neural architecture search (NAS) and automated pruning and quantization. We then cover efficient on-device training to enable user customization based on the local data on mobile devices. Apart from general acceleration techniques, we also showcase several task-specific accelerations for point cloud, video and natural language processing by exploiting their spatial sparsity and temporal/token redundancy. Finally, to support all these algorithmic advancements, we introduce the efficient deep learning system design from both software and hardware perspectives.
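As a concrete taste of two of the compression primitives surveyed, the sketch below applies unstructured magnitude pruning and symmetric int8 post-training quantization to a random stand-in weight matrix; the threshold and scale rules are the usual textbook choices, not any specific system's.

```python
import numpy as np

# A minimal sketch of two compression primitives: unstructured
# magnitude pruning and symmetric linear (int8) post-training
# quantization, applied to a random stand-in weight matrix rather
# than a real trained model.

def magnitude_prune(w, sparsity=0.5):
    """Zero out the smallest-magnitude fraction `sparsity` of weights."""
    threshold = np.quantile(np.abs(w), sparsity)
    return np.where(np.abs(w) >= threshold, w, 0.0)

def quantize_int8(w):
    """Symmetric per-tensor quantization: w ~ scale * q, q in [-127, 127]."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

w = np.random.default_rng(0).standard_normal((4, 4)).astype(np.float32)
w_pruned = magnitude_prune(w, sparsity=0.5)
q, scale = quantize_int8(w_pruned)
w_restored = q.astype(np.float32) * scale

print("sparsity:", np.mean(w_pruned == 0))
print("max quantization error:", np.abs(w_restored - w_pruned).max())
```

Automated pruning and quantization, as discussed in the survey, replace the hand-picked sparsity and bit-width here with a learned per-layer policy.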

Reinforcement learning is one of the core components in designing an artificial intelligence system emphasizing real-time response. Reinforcement learning enables the system to take actions within an arbitrary environment, either with or without previous knowledge of the environment model. In this paper, we present a comprehensive study of reinforcement learning, covering various dimensions including challenges, recent developments in state-of-the-art techniques, and future directions. The fundamental objective of this paper is to provide a framework for presenting the available reinforcement learning methods that is informative enough and simple to follow for new researchers and academics in this domain, considering the latest concerns. First, we illustrate the core techniques of reinforcement learning in an easily understandable and comparable way. Finally, we analyze and depict recent developments in reinforcement learning approaches. Our analysis points out that most of the models focus on tuning policy values rather than on tuning other elements of a particular state of reasoning.
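As a minimal instance of the value-tuning loop referred to above, here is a tabular Q-learning sketch on a toy one-dimensional chain; the environment, exploration scheme, and hyperparameters are illustrative choices.

```python
import random

# A minimal tabular Q-learning sketch on a 1-D chain: the classic
# value-tuning loop in which the agent improves its policy by
# repeatedly updating state-action values. Toy environment and
# hyperparameters throughout.

N_STATES, GOAL = 6, 5                      # states 0..5, reward only at state 5
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.2
Q = [[0.0, 0.0] for _ in range(N_STATES)]  # actions: 0 = left, 1 = right
rng = random.Random(0)

def choose(s):
    # Epsilon-greedy action selection with random tie-breaking.
    if rng.random() < EPS or Q[s][0] == Q[s][1]:
        return rng.randrange(2)
    return 0 if Q[s][0] > Q[s][1] else 1

for episode in range(300):
    s = 0
    for _ in range(100):                   # cap episode length
        a = choose(s)
        s2 = max(0, min(N_STATES - 1, s + (1 if a else -1)))
        r = 1.0 if s2 == GOAL else 0.0
        # Temporal-difference update toward the bootstrapped target.
        Q[s][a] += ALPHA * (r + GAMMA * max(Q[s2]) - Q[s][a])
        s = s2
        if s == GOAL:
            break

print("greedy policy:", ["LR"[0 if q[0] >= q[1] else 1] for q in Q])
```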
