
Solving planning and scheduling problems for multiple tasks with highly coupled state and temporal constraints is notoriously challenging. An appealing approach to effectively decouple the problem is to judiciously order the events so that decisions can be made over sequences of tasks. As many problems encountered in practice are over-constrained, we must instead find relaxed solutions in which certain requirements are dropped. This motivates a formulation of optimality with respect to the costs of relaxing constraints, and the problem of finding an optimal ordering under which this relaxation cost is minimized. In this paper, we present Generalized Conflict-directed Ordering (GCDO), a branch-and-bound ordering method that generates an optimal total order of events by leveraging generalized conflicts of both inconsistency and suboptimality from sub-solvers for cost estimation and solution-space pruning. Owing to its ability to reason over generalized conflicts, GCDO is much more efficient at finding high-quality total orders than the previous conflict-directed approach, CDITO. We demonstrate this by benchmarking GCDO against CDITO and Mixed-Integer Linear Programming (MILP) on temporal network configuration problems, which involve managing networks over time and making necessary tradeoffs between network flows. Within a given runtime limit, our algorithm solves two orders of magnitude more benchmark problems to optimality than CDITO and twice as many as MILP.
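As a rough illustration of the conflict-directed idea (a hypothetical sketch, not the authors' GCDO, which is a branch-and-bound that also uses cost bounds from its sub-solvers), the code below enumerates candidate total orders, records conflicts returned by an assumed sub-solver `solve(order)`, and prunes any later order containing a recorded conflict as a subsequence:

```python
import itertools

def conflict_directed_ordering(events, solve):
    """Hypothetical sketch: enumerate total orders, record conflicts returned
    by a sub-solver, and skip any order containing a known conflict as a
    subsequence. `solve(order)` is assumed to return (cost, conflict), where
    conflict is a tuple of events explaining infeasibility or suboptimality,
    or None if no conflict was extracted."""
    conflicts = set()
    best_cost, best_order = float("inf"), None

    def contains(order, conflict):
        it = iter(order)                      # subsequence test
        return all(e in it for e in conflict)

    for order in itertools.permutations(events):
        if any(contains(order, c) for c in conflicts):
            continue                          # pruned by a recorded conflict
        cost, conflict = solve(order)
        if conflict is not None:
            conflicts.add(conflict)
        if cost < best_cost:
            best_cost, best_order = cost, order
    return best_order, best_cost
```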

Related Content

This paper explores a family of generalized sweeping preconditioners for Helmholtz problems with a non-overlapping checkerboard partition of the computational domain. The domain decomposition procedure relies on high-order transmission conditions and cross-point treatments, which cannot scale without an efficient preconditioning technique as the number of subdomains increases. With the proposed approach, existing sweeping preconditioners, such as the symmetric Gauss-Seidel and parallel double-sweep preconditioners, can be applied to checkerboard partitions with different sweeping directions (e.g. horizontal and diagonal). Several directions can be combined thanks to the flexible version of GMRES, allowing for the rapid transfer of information across the different zones of the computational domain and thus accelerating the convergence of the final iterative solution procedure. Several two-dimensional finite element results are presented to study and compare the sweeping preconditioners, and to illustrate their performance on cases of increasing complexity.
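For intuition only, here is a minimal algebraic analogue: a symmetric Gauss-Seidel preconditioner applied inside SciPy's GMRES on a stand-in sparse system. The paper's preconditioners instead act sweep-wise on subdomain blocks with high-order transmission conditions and combine sweeping directions through flexible GMRES, which the plain `gmres` call below does not do:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import LinearOperator, gmres, spsolve_triangular

n = 200
# Stand-in sparse system (a shifted 1D Laplacian, not a Helmholtz operator).
A = sp.diags([-1.0, 2.1, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

D = sp.diags(A.diagonal())
DL = sp.tril(A, format="csr")   # D + L (lower triangle incl. diagonal)
DU = sp.triu(A, format="csr")   # D + U (upper triangle incl. diagonal)

def sgs_apply(r):
    """One symmetric Gauss-Seidel sweep: forward solve, scale, backward
    solve, i.e. apply M^{-1} with M = (D + L) D^{-1} (D + U)."""
    y = spsolve_triangular(DL, r, lower=True)
    return spsolve_triangular(DU, D @ y, lower=False)

M = LinearOperator(A.shape, matvec=sgs_apply)
x, info = gmres(A, b, M=M)      # info == 0 signals convergence
```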

While operating communication networks adaptively may improve utilization and performance, frequent adjustments also introduce an algorithmic challenge: the re-optimization of traffic engineering solutions is time-consuming and may limit the granularity at which a network can be adjusted. This paper is motivated by the question of whether the reactivity of a network can be improved by re-optimizing solutions dynamically rather than from scratch, especially if inputs such as link weights do not change significantly. We explore to what extent dynamic algorithms can be used to speed up fundamental tasks in network operations. We specifically investigate optimizations related to traffic engineering (namely shortest-path and maximum-flow computations), but also consider spanning tree and matching applications. While prior work on dynamic graph algorithms focuses on link insertions and deletions, we are interested in the practical problem of link weight changes. We revisit existing upper bounds in the weight-dynamic model and present several novel lower bounds on the amortized runtime for recomputing solutions. In general, we find that the potential performance gains depend on the application, and there are also strict limitations on what can be achieved, even if link weights change only slightly.
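To make the weight-dynamic setting concrete, the sketch below repairs a single-source shortest-path distance table after one link weight decreases, instead of rerunning Dijkstra from scratch; only vertices whose distances actually improve are touched. This is a simplified, decrease-only illustration in the spirit of classic dynamic SSSP algorithms, not one of the paper's algorithms, and handling weight increases requires more machinery:

```python
import heapq

def dijkstra(adj, src):
    """Plain Dijkstra over adj = {u: {v: weight}}; returns a distance dict."""
    dist, pq = {src: 0.0}, [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj[u].items():
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist

def on_weight_decrease(adj, dist, u, v, new_w):
    """Repair distances after the weight of link (u, v) decreases, touching
    only vertices whose distance actually improves (no recomputation from
    scratch). Weight increases need more machinery and are not handled."""
    adj[u][v] = new_w
    if dist.get(u, float("inf")) + new_w >= dist.get(v, float("inf")):
        return dist                  # nothing improves: no work to do
    dist[v] = dist[u] + new_w
    pq = [(dist[v], v)]
    while pq:                        # propagate the improvement outward
        d, x = heapq.heappop(pq)
        if d > dist.get(x, float("inf")):
            continue
        for y, w in adj[x].items():
            if d + w < dist.get(y, float("inf")):
                dist[y] = d + w
                heapq.heappush(pq, (d + w, y))
    return dist
```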

In this paper, we consider contention resolution algorithms that are augmented with predictions about the network. We begin by studying the natural setup in which the algorithm is provided a distribution defined over the possible network sizes that predicts the likelihood of each size occurring. The goal is to leverage the predictive power of this distribution to improve on worst-case time complexity bounds. Using a novel connection between contention resolution and information theory, we prove lower bounds on the expected time complexity with respect to the Shannon entropy of the corresponding network size random variable, for both the collision detection and no collision detection assumptions. We then analyze upper bounds for these settings, assuming now that the distribution provided as input might differ from the actual distribution generating network sizes. We express their performance with respect to both the entropy and the statistical divergence between the two distributions, which allows us to quantify the cost of poor predictions. Finally, we turn our attention to the related perfect advice setting, parameterized by a length $b \geq 0$, in which all active processes in a given execution are provided the best possible $b$ bits of information about their network. We provide tight bounds on the speed-up possible with respect to $b$ for deterministic and randomized algorithms, with and without collision detection. These bounds give a fundamental limit on the maximum power that can be provided by any predictive model with a bounded output size.
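For reference, the two standard information-theoretic quantities the bounds are expressed in are the Shannon entropy of the network-size random variable $N$ and the divergence between the predicted distribution $q$ and the true distribution $p$ (the exact forms of the time bounds are in the paper):

$$H(N) = -\sum_{n} \Pr[N = n] \log_2 \Pr[N = n], \qquad D(p \,\|\, q) = \sum_{n} p(n) \log_2 \frac{p(n)}{q(n)}.$$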

In this article, we propose a new approach, optimize then agree, for minimizing a sum $f = \sum_{i=1}^{n} f_i(x)$ of convex objective functions over a directed graph. The optimize-then-agree approach decouples the optimization step and the consensus step in a distributed optimization framework. The key motivation is to guarantee that the disagreement between the agents' estimates during every iteration of the distributed optimization algorithm remains below any a priori specified tolerance; existing algorithms do not provide such a guarantee, which is required in many practical scenarios. In this method, each agent maintains an estimate of the optimal solution during each iteration and utilizes its locally available gradient information along with a finite-time approximate consensus protocol to move towards the optimal solution (hence the name Gradient-Consensus algorithm). We establish that the proposed algorithm has a global R-linear rate of convergence if the aggregate function $f$ is strongly convex and Lipschitz differentiable. We also show that, under the relaxed assumption that the $f_i$ are convex and Lipschitz differentiable, the objective function error residual decreases at a Q-linear rate (in terms of the number of gradient computation steps) until it reaches a small value, which can be managed using the tolerance specified for the finite-time approximate consensus protocol; no existing method in the literature offers such strong convergence guarantees when the $f_i$ are not necessarily strongly convex. The communication overhead for the improved guarantees on meeting constraints and the better convergence of our algorithm is $O(k\log k)$ iterations, compared to $O(k)$ for traditional algorithms. Finally, we numerically evaluate the performance of the proposed algorithm by solving a distributed logistic regression problem.
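The optimize-then-agree loop can be sketched as follows for the simpler undirected case with a doubly stochastic mixing matrix `W` (the paper's protocol instead handles directed graphs and reaches the tolerance in finite time); each agent takes a local gradient step, and the agents then re-agree to within a tolerance `eps` before the next step:

```python
import numpy as np

def approximate_consensus(X, W, eps):
    """Averaging rounds X <- W X until the maximum disagreement from the
    running mean drops below eps (undirected, doubly stochastic W; the
    paper uses a finite-time protocol over directed graphs instead)."""
    while np.max(np.linalg.norm(X - X.mean(axis=0), axis=1)) > eps:
        X = W @ X
    return X

def gradient_consensus(grads, x0, W, step=0.05, eps=1e-3, iters=200):
    """Optimize-then-agree: each agent takes a local gradient step, then
    all agents re-agree to within eps before the next iteration."""
    n = len(grads)
    X = np.tile(np.asarray(x0, dtype=float), (n, 1))
    for _ in range(iters):
        X = np.array([X[i] - step * grads[i](X[i]) for i in range(n)])
        X = approximate_consensus(X, W, eps)
    return X.mean(axis=0)

# Toy check: f_i(x) = 0.5 * ||x - a_i||^2, so the minimizer of f is mean(a_i).
rng = np.random.default_rng(0)
a = rng.normal(size=(4, 3))
grads = [lambda x, ai=ai: x - ai for ai in a]
W = np.full((4, 4), 0.25)               # complete graph, uniform averaging
print(gradient_consensus(grads, np.zeros(3), W))   # approx a.mean(axis=0)
```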

The relationship between safety and optimality in control is not well understood, and they are often seen as important yet conflicting objectives. There is a pressing need to formalize this relationship, especially given the growing prominence of learning-based methods. Indeed, it is common practice in reinforcement learning to simply modify reward functions by penalizing failures, with the penalty treated as a mere heuristic. We rigorously examine this relationship and formalize the requirements for safe value functions: value functions that are both optimal for a given task and enforce safety. We reveal the structure of this relationship through a proof of strong duality, showing that there always exists a finite penalty that induces a safe value function. This penalty is not unique, and it is unbounded above: larger penalties do not harm optimality. Although it is often not possible to compute the minimum required penalty, we reveal clear structure in how the penalty, rewards, discount factor, and dynamics interact. This insight suggests practical, theory-guided heuristics for designing reward functions for control problems where safety is important.
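Schematically, and consistent with the abstract's description, the practice being formalized replaces the reward with a penalized one, and the duality result says that some finite threshold $p^{*}$ exists beyond which any penalty yields a value function that is both optimal and safe (the precise definitions of safety and of the threshold are in the paper):

$$\tilde{r}(s,a) = r(s,a) - p \cdot \mathbf{1}\big[(s,a)\ \text{violates the safety constraint}\big], \qquad \text{any } p \geq p^{*} \text{ induces a safe value function}.$$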

We present an incremental search algorithm, called Lifelong-GLS, which combines the vertex efficiency of Lifelong Planning A* (LPA*) with the edge efficiency of Generalized Lazy Search (GLS) for efficient replanning on dynamic graphs where edge evaluation is expensive. We use a lazily evaluated LPA* to repair the cost-to-come inconsistencies of the relevant region of the current search tree based on the previous search results, and we then restrict the expensive edge evaluations to the current shortest subpath, as in the GLS framework. The proposed algorithm is complete and correct: it finds the optimal solution in the current graph, if one exists. We also show that the search returns a bounded-suboptimal solution if an inflated heuristic edge weight is used and the tree-repairing propagation is truncated early for faster search. Finally, we demonstrate the efficiency of the proposed algorithm compared to the standard LPA* and GLS algorithms over consecutive search episodes in a dynamic environment. For each search, the proposed algorithm reduces the number of edge evaluations significantly compared to LPA*. Both the number of vertex expansions and the number of edge evaluations are reduced substantially compared to GLS, as the proposed algorithm utilizes previous search results to facilitate the new search.
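The edge-lazy half of the framework can be illustrated with a minimal LazySP-style loop: plan on optimistic weights, then spend expensive evaluations only on edges of the current candidate path. This sketch replans from scratch each round; Lifelong-GLS's contribution is to instead repair the LPA* search tree across episodes. Here unevaluated edges carry `None` and `est` holds optimistic estimates (assumed to underestimate the true weights):

```python
import heapq

def optimistic_shortest_path(adj, est, src, goal):
    """Dijkstra using the true weight where an edge has been evaluated and
    the optimistic estimate est[(u, v)] otherwise (goal assumed reachable)."""
    dist, prev, pq = {src: 0.0}, {}, [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj[u].items():
            w = w if w is not None else est[(u, v)]
            if d + w < dist.get(v, float("inf")):
                dist[v], prev[v] = d + w, u
                heapq.heappush(pq, (d + w, v))
    path = [goal]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return path[::-1]

def lazy_search(adj, est, evaluate, src, goal):
    """Plan on current knowledge, then evaluate (expensively) only edges on
    the candidate path; repeat until a fully evaluated path comes back.
    With underestimating est, the returned path is optimal."""
    while True:
        path = optimistic_shortest_path(adj, est, src, goal)
        unevaluated = [(u, v) for u, v in zip(path, path[1:])
                       if adj[u][v] is None]
        if not unevaluated:
            return path
        u, v = unevaluated[0]       # evaluate one edge per round
        adj[u][v] = evaluate(u, v)
```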

Deep learning (DL), in particular deep neural networks (DNNs), is by design purely data-driven and in general does not require physics. This is the strength of DL, but also one of its key limitations when applied to science and engineering problems in which underlying physical properties (such as stability, conservation, and positivity) and a desired accuracy need to be achieved. DL methods in their original form are not capable of respecting the underlying mathematical models or achieving desired accuracy even in big-data regimes. On the other hand, many data-driven science and engineering problems, such as inverse problems, typically have limited experimental or observational data, and DL would overfit the data in this case. Leveraging information encoded in the underlying mathematical models, we argue, not only compensates for missing information in low-data regimes but also provides opportunities to equip DL methods with the underlying physics, and hence to achieve higher accuracy. This short communication introduces several model-constrained DL approaches (including both feed-forward DNNs and autoencoders) that are capable of learning not only information hidden in the training data but also information in the underlying mathematical models, in order to solve inverse problems. We present and provide intuitions for our formulations for general nonlinear problems. For linear inverse problems and linear networks, the first-order optimality conditions show that our model-constrained DL approaches can learn information encoded in the underlying mathematical models, and thus can produce consistent or equivalent inverse solutions, while naive, purely data-based counterparts cannot.
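One plausible form of such a model-constrained training objective (a sketch consistent with the abstract, not necessarily the paper's exact formulation) augments the usual data misfit with a term forcing the network's reconstruction $u_{\theta}(d_i)$ to be consistent with the forward model $F$ that generated the data:

$$\min_{\theta}\ \frac{1}{m}\sum_{i=1}^{m} \big\| u_{\theta}(d_i) - u_i \big\|^2 \;+\; \frac{\lambda}{m}\sum_{i=1}^{m} \big\| F\big(u_{\theta}(d_i)\big) - d_i \big\|^2,$$

where $(u_i, d_i)$ are training pairs of parameters and observations and $\lambda > 0$ weights the model constraint.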

Bi-objective search is a well-known algorithmic problem concerned with finding a set of optimal solutions in a two-dimensional domain. This problem has a wide variety of applications, such as planning in transport systems or optimal control in energy systems. Recently, bi-objective A*-based search (BOA*) has shown state-of-the-art performance in large networks. This paper develops a bi-directional variant of BOA*, enriched with several speed-up heuristics. Our experimental results on 1,000 benchmark cases show that our bi-directional A* algorithm for bi-objective search (BOBA*) can optimally solve all of the benchmark cases within the time limit, outperforming the state-of-the-art BOA*, bi-objective Dijkstra, and bi-directional bi-objective Dijkstra by an average runtime improvement of a factor of five over all of the benchmark instances.
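The key efficiency device in BOA*-style search is a constant-time dominance check: labels are expanded in lexicographic order of $(f_1, f_2)$, and a label is pruned if its second cost cannot improve on the best second cost already expanded at its node (or at the goal). A minimal uni-directional sketch with consistent heuristics `h1`, `h2` follows; BOBA* adds the backward search and further speed-up heuristics:

```python
import heapq

def boa_star(adj, h1, h2, src, goal):
    """Simplified uni-directional BOA*: expand labels in lexicographic order
    of (f1, f2); prune a label if its g2 cannot improve on the best g2
    already expanded at its node, or its f2 on the best g2 at the goal.
    Heuristics h1, h2 are assumed consistent. adj[u] maps v -> (w1, w2)."""
    g2_min = {}                      # best second cost expanded per node
    pareto = []                      # Pareto-optimal (g1, g2) at the goal
    pq = [(h1[src], h2[src], 0.0, 0.0, src)]
    while pq:
        f1, f2, g1, g2, u = heapq.heappop(pq)
        if g2 >= g2_min.get(u, float("inf")) or \
           f2 >= g2_min.get(goal, float("inf")):
            continue                 # dominated: prune in O(1)
        g2_min[u] = g2
        if u == goal:
            pareto.append((g1, g2))
            continue
        for v, (w1, w2) in adj[u].items():
            heapq.heappush(pq, (g1 + w1 + h1[v], g2 + w2 + h2[v],
                                g1 + w1, g2 + w2, v))
    return pareto
```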

We present Neural A*, a novel data-driven search method for path planning problems. Despite recent growing attention to data-driven path planning, a machine learning approach to search-based planning remains challenging due to the discrete nature of search algorithms. In this work, we reformulate the canonical A* search algorithm to be differentiable and couple it with a convolutional encoder to form an end-to-end trainable neural network planner. Neural A* solves a path planning problem by encoding a problem instance into a guidance map and then performing a differentiable A* search with that guidance map. By learning to match the search results with ground-truth paths provided by experts, Neural A* can produce paths consistent with the ground truth accurately and efficiently. Our extensive experiments confirmed that Neural A* outperformed state-of-the-art data-driven planners in terms of the trade-off between search optimality and efficiency, and, furthermore, successfully predicted realistic human trajectories by directly performing search-based planning on natural image inputs.
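The differentiability hinge is replacing A*'s hard argmin node selection with a soft relaxation so that gradients can flow back to the learned guidance map. A tiny illustration of that idea (not the exact discretized activation used in the paper) is:

```python
import numpy as np

def soft_select(f_values, open_mask, tau=0.1):
    """Differentiable relaxation of A*'s argmin node selection: a softmax
    over negative f-values restricted to the open list, yielding a soft
    one-hot distribution instead of a hard pick (tau -> 0 recovers argmin)."""
    scores = np.where(open_mask, -np.asarray(f_values) / tau, -np.inf)
    scores = scores - scores.max()        # numerical stability
    weights = np.exp(scores)
    return weights / weights.sum()

f = np.array([3.0, 2.0, 2.2, 5.0])
open_mask = np.array([True, True, True, False])
print(soft_select(f, open_mask))  # mass concentrates on the smallest open f
```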

Prevalent techniques in zero-shot learning do not generalize well to other related problem scenarios. Here, we present a unified approach for conventional zero-shot, generalized zero-shot, and few-shot learning problems. Our approach is based on a novel Class Adapting Principal Directions (CAPD) concept that allows multiple embeddings of image features into a semantic space. Given an image, our method produces one principal direction for each seen class. It then learns how to combine these directions to obtain the principal direction for each unseen class, such that the CAPD of the test image is aligned with the semantic embedding of the true class and opposite to the other classes. This allows efficient and class-adaptive information transfer from seen to unseen classes. In addition, we propose an automatic process for selecting the most useful seen classes for each unseen class, to achieve robustness in zero-shot learning. Our method can update the unseen CAPDs by taking advantage of a few unseen images, so as to work in a few-shot learning scenario. Furthermore, our method can generalize the seen CAPDs by estimating seen-unseen diversity, which significantly improves the performance of generalized zero-shot learning. Our extensive evaluations demonstrate that the proposed approach consistently achieves superior performance in zero-shot, generalized zero-shot, and few/one-shot learning problems.
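To make the "combine seen directions" step concrete, here is one simple, hypothetical instantiation: fit combination weights by ridge regression in the semantic space and apply them to the seen classes' principal directions (the paper learns this combination; the function and variable names here are illustrative):

```python
import numpy as np

def unseen_capd(seen_dirs, seen_sem, unseen_sem, reg=1e-3):
    """Form an unseen class's principal direction as a linear combination of
    seen-class directions, with combination weights fit by ridge regression
    in the semantic space (a simple stand-in for the learned combination).
    seen_dirs: (k, d_feat); seen_sem: (k, d_sem); unseen_sem: (d_sem,)."""
    gram = seen_sem @ seen_sem.T + reg * np.eye(len(seen_sem))
    alpha = np.linalg.solve(gram, seen_sem @ unseen_sem)  # weights over seen
    return alpha @ seen_dirs                              # (d_feat,)
```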
