
With the emergence of energy communities, in which a number of prosumers invest in shared generation and storage, the fair allocation of benefits is increasingly important. The Shapley value has attracted growing interest for redistribution in energy settings; however, computing it exactly is intractable beyond a few dozen prosumers. In this paper, we first conduct a systematic review of the literature on the use of the Shapley value in energy-related applications, as well as of efforts to compute or approximate it. Next, we formalise the main methods for approximating the Shapley value in community energy settings and propose a new one, which we call the stratified expected value approximation. To compare the performance of these methods, we design a novel method for exact Shapley value computation, which can be applied to communities of up to several hundred agents by clustering the prosumers into a smaller number of demand profiles. We perform a large-scale experimental comparison of the proposed methods, for communities of up to 200 prosumers, using publicly available data from two large energy trials in the UK (UKERC Energy Data Centre, 2017; UK Power Networks Innovation, 2021). Our analysis shows that, as the number of agents in the community increases, the relative difference to the exact Shapley value converges to under 1% for all the approximation methods considered. In particular, for most experimental scenarios, we show that there is no statistically significant difference between the newly proposed stratified expected value method and the existing state-of-the-art method based on adaptive sampling (O'Brien et al., 2015), while the computational cost for large communities is an order of magnitude lower.
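As a hedged illustration of why sampling makes the problem tractable, the following sketch contrasts exact permutation enumeration with Monte Carlo permutation sampling on a toy cost-sharing game (the game, demand values, and function names are illustrative, not those of the paper):

```python
import itertools
import random

def shapley_exact(players, value):
    """Exact Shapley value: average marginal contribution over all n! orderings."""
    phi = {p: 0.0 for p in players}
    perms = list(itertools.permutations(players))
    for perm in perms:
        coalition = set()
        for p in perm:
            before = value(frozenset(coalition))
            coalition.add(p)
            phi[p] += value(frozenset(coalition)) - before
    return {p: phi[p] / len(perms) for p in players}

def shapley_sampled(players, value, n_samples=2000, seed=0):
    """Monte Carlo permutation sampling: feasible even for hundreds of agents."""
    rng = random.Random(seed)
    phi = {p: 0.0 for p in players}
    order = list(players)
    for _ in range(n_samples):
        rng.shuffle(order)
        coalition = set()
        for p in order:
            before = value(frozenset(coalition))
            coalition.add(p)
            phi[p] += value(frozenset(coalition)) - before
    return {p: phi[p] / n_samples for p in players}

# Toy game (illustrative): coalition value is the square of total demand.
demand = {"a": 1.0, "b": 2.0, "c": 3.0}
v = lambda S: sum(demand[p] for p in S) ** 2
exact = shapley_exact(list(demand), v)
approx = shapley_sampled(list(demand), v)
```

For this game the exact values are 6, 12 and 18, and the sampled estimates land close to them; the point is that the sampler's cost grows with the number of samples, not with $2^n$ coalitions.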

Related content

Parametrizations of data manifolds in shape spaces can be computed using the rich toolbox of Riemannian geometry. This, however, often comes with high computational costs, which raises the question of whether one can learn an efficient neural network approximation. We show that this is indeed possible for shape spaces with a special product structure, namely those smoothly approximable by a direct sum of low-dimensional manifolds. Our proposed architecture leverages this structure by separately learning approximations for the low-dimensional factors and a subsequent combination. After developing the approach as a general framework, we apply it to a shape space of triangular surfaces. Here, typical examples of data manifolds are given through datasets of articulated models and can be factorized, for example, by a Sparse Principal Geodesic Analysis (SPGA). We demonstrate the effectiveness of our proposed approach with experiments on synthetic data as well as manifolds extracted from data via SPGA.
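A minimal sketch of the separate-factors-then-combine idea, using polynomial least squares as a stand-in for the per-factor neural networks (the names, the additive combination, and the 1D factors are simplifying assumptions, not the paper's architecture):

```python
import numpy as np

def fit_factor(params, points, degree=3):
    """Fit one low-dimensional factor by polynomial least squares
    (a stand-in for the per-factor network)."""
    V = np.vander(params, degree + 1)
    coeffs, *_ = np.linalg.lstsq(V, points, rcond=None)
    return lambda t: np.vander(np.atleast_1d(t), degree + 1) @ coeffs

def direct_sum(factors):
    """Combine factor approximations additively, mirroring the
    direct-sum product structure assumed for the shape space."""
    return lambda ts: sum(f(t) for f, t in zip(factors, ts))

# Two synthetic 1D factors sampled at 10 parameter values each.
t = np.linspace(0, 1, 10)
f1 = fit_factor(t, np.column_stack([t, t ** 2]))
f2 = fit_factor(t, np.column_stack([t ** 3, t]))
g = direct_sum([f1, f2])
```

Each factor is learned independently and only the cheap combination couples them, which is the computational point of exploiting the product structure.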

Computation of a tensor singular value decomposition (t-SVD) with only a few passes over the underlying data tensor is crucial on modern computer architectures, where the main concern is communication cost. The current randomized subspace algorithms for computing the t-SVD need 2q + 2 passes over the data tensor, where q is a non-negative integer (the power iteration parameter). In this paper, we propose an efficient and flexible randomized algorithm that works for any number of passes q, not necessarily even. The flexibility of the proposed algorithm in using fewer passes naturally leads to lower computational and communication costs, which makes it applicable especially when the data tensors are large or when multiple tensor decompositions are required. The proposed algorithm generalizes methods developed for matrices to tensors. The expected/average error bound of the proposed algorithm is derived. Several numerical experiments on random and real-world datasets are conducted, and the proposed algorithm is compared with several baseline algorithms. The results confirm that the proposed algorithm is efficient and applicable, and can provide better performance than the existing algorithms. We also use the proposed method to develop a fast algorithm for the tensor completion problem.
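For intuition, here is a sketch of the matrix analogue that the t-SVD method generalizes: a randomized subspace SVD in which q power iterations cost 2q + 2 passes over the data (the interface and defaults are illustrative):

```python
import numpy as np

def randomized_svd(A, rank, q=1, oversample=10, seed=0):
    """Randomized SVD with q power iterations, i.e. 2q + 2 passes over A."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    k = min(rank + oversample, m, n)
    Q = A @ rng.standard_normal((n, k))     # pass 1: sketch range(A)
    for _ in range(q):                      # each iteration: 2 more passes
        Q, _ = np.linalg.qr(Q)
        Q, _ = np.linalg.qr(A.T @ Q)
        Q = A @ Q
    Q, _ = np.linalg.qr(Q)
    B = Q.T @ A                             # final pass: project onto the basis
    Uhat, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Uhat)[:, :rank], s[:rank], Vt[:rank]
```

Each multiplication by A or A.T is one pass; the paper's contribution is, roughly, decoupling the pass count from this even-only pattern in the tensor setting.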

This paper establishes a structure-preserving numerical scheme for the Cahn--Hilliard equation with degenerate mobility. First, by applying a finite volume method with upwind numerical fluxes to the degenerate Cahn--Hilliard equation rewritten by the scalar auxiliary variable (SAV) approach, we obtain an unconditionally bound-preserving, energy-stable and fully-discrete scheme, which, for the first time, addresses the boundedness of the classical SAV approach under $H^{-1}$-gradient flow. Then, a dimensional-splitting technique is introduced in high-dimensional cases, which greatly reduces the computational complexity while preserving the original structural properties. Numerical experiments are presented to verify the bound-preserving and energy-stable properties of the proposed scheme. Finally, by applying the proposed structure-preserving scheme, we numerically demonstrate that surface diffusion can be approximated by the Cahn--Hilliard equation with degenerate mobility and Flory--Huggins potential when the absolute temperature is sufficiently low, which agrees well with the theoretical result shown by formal matched asymptotics.
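For reference, the standard SAV reformulation of such a gradient flow can be sketched as follows (the notation is assumed, not taken from the paper; $C_0$ is a constant ensuring the radicand stays positive):

$$\phi_t = \nabla\cdot\big(M(\phi)\nabla\mu\big), \qquad \mu = -\varepsilon^2\Delta\phi + \frac{r(t)}{\sqrt{E_1(\phi)+C_0}}\,F'(\phi),$$

$$r_t = \frac{1}{2\sqrt{E_1(\phi)+C_0}}\int_\Omega F'(\phi)\,\phi_t\,\mathrm{d}x, \qquad E_1(\phi) = \int_\Omega F(\phi)\,\mathrm{d}x,$$

where $r(t) = \sqrt{E_1(\phi)+C_0}$ is the scalar auxiliary variable. Discretizing $r$ together with $\phi$ yields an unconditionally stable scheme for the modified energy $\tfrac{\varepsilon^2}{2}\|\nabla\phi\|^2 + r^2 - C_0$, which is the property the upwind finite volume discretization above builds on.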

We present a novel reinforcement-learning-based algorithm for the multi-robot task allocation problem in warehouse environments. We formulate it as a Markov decision process and solve it via a novel deep multi-agent reinforcement learning method (called RTAW) with an attention-inspired policy architecture. Our proposed policy network uses global embeddings that are independent of the number of robots/tasks. We use the proximal policy optimization algorithm for training, with a carefully designed reward, to obtain a converged policy. The converged policy ensures cooperation among different robots to minimize total travel delay (TTD), which ultimately improves the makespan for a sufficiently large task list. In our extensive experiments, we compare the performance of our RTAW algorithm to state-of-the-art methods such as myopic pickup-distance minimization (greedy) and regret-based baselines on different navigation schemes. We show an improvement of up to 14% (25-1000 seconds) in TTD on scenarios with hundreds or thousands of tasks for different challenging warehouse layouts and task generation schemes. We also demonstrate the scalability of our approach by showing performance with up to $1000$ robots in simulations.
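A sketch of the myopic pickup-distance (greedy) baseline mentioned above, in which each robot simply takes the nearest unassigned task (the data layout and names are illustrative, not the paper's):

```python
import math

def greedy_assign(robots, tasks):
    """Myopic baseline: iterate over robots, each claiming the nearest
    unassigned task by Euclidean pickup distance."""
    assignments = {}
    remaining = dict(tasks)
    for rid, rpos in robots.items():
        if not remaining:
            break
        tid = min(remaining, key=lambda t: math.dist(rpos, remaining[t]))
        assignments[rid] = tid
        del remaining[tid]
    return assignments
```

This baseline ignores future task arrivals and robot interactions, which is exactly the gap a learned, cooperative policy such as the one described above aims to close.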

Cylindrical Algebraic Decomposition (CAD) is a key proof technique for formal verification of cyber-physical systems. CAD is computationally expensive, with worst-case doubly-exponential complexity. Selecting an optimal variable ordering is paramount to efficient use of CAD. Prior work has demonstrated that machine learning can be useful in determining efficient variable orderings. Much of this work has been driven by CAD problems extracted from applications of the MetiTarski theorem prover. In this paper, we revisit this prior work and consider issues of bias in existing training and test data. We observe that the classical MetiTarski benchmarks are heavily biased towards particular variable orderings. To address this, we apply symmetries to create a new dataset containing more than 41K MetiTarski challenges designed to remove bias. Furthermore, we evaluate issues of information leakage, and test the generalizability of our models on the new dataset.
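The symmetry-based de-biasing step can be sketched as generating one benchmark per variable permutation; here polynomials are represented abstractly as sets of (coefficient, exponent-tuple) monomials, a simplified stand-in for the actual MetiTarski problem format:

```python
from itertools import permutations

def apply_symmetries(poly):
    """Return the set of all variable-permuted variants of a polynomial.

    A polynomial is a frozenset of (coeff, exponent-tuple) monomials;
    permuting the positions of the exponent tuple renames the variables.
    """
    n = len(next(iter(poly))[1])
    variants = set()
    for perm in permutations(range(n)):
        variants.add(frozenset(
            (c, tuple(exps[i] for i in perm)) for c, exps in poly))
    return variants
```

A problem that is symmetric in its variables yields a single variant, while an asymmetric one yields up to n! variants, so the augmented dataset no longer favours any particular ordering.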

We study the implicit upwind finite volume scheme for numerically approximating the advection-diffusion equation with a vector field in the low-regularity DiPerna-Lions setting. That is, we are concerned with advecting velocity fields that are spatially Sobolev regular and data that are merely integrable. We prove that on unstructured regular meshes the rate of convergence of approximate solutions generated by the upwind scheme towards the unique solution of the continuous model is at least one. The numerical error is estimated in terms of logarithmic Kantorovich-Rubinstein distances and thus provides a bound on the rate of weak convergence.
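For intuition, a sketch of a first-order upwind step for 1D advection with periodic boundaries (explicit rather than implicit, and without the diffusion term, purely to illustrate the upwind flux choice):

```python
def upwind_step(u, velocity, dx, dt):
    """One explicit first-order upwind step for u_t + a u_x = 0.

    The flux is taken from the upwind side of each cell interface;
    Python's negative indexing gives the periodic wrap at i = 0.
    """
    n = len(u)
    c = velocity * dt / dx  # Courant number
    if velocity >= 0:
        return [u[i] - c * (u[i] - u[i - 1]) for i in range(n)]
    return [u[i] - c * (u[(i + 1) % n] - u[i]) for i in range(n)]
```

The scheme conserves total mass on a periodic mesh and is stable for Courant numbers up to one; the implicit variant analysed above removes that time-step restriction.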

Kernel methods for solving partial differential equations on surfaces have the advantage that they work intrinsically on the surface and yield high approximation rates if the solution to the partial differential equation is smooth enough. Naive implementations of kernel-based methods suffer, however, from cubic complexity in the degrees of freedom. Localized Lagrange bases have proven to overcome this computational complexity to some extent. In this article we present a rigorous proof for a geometric multigrid method with $\tau\ge 2$-cycle for elliptic partial differential equations on surfaces, based on precomputed Lagrange basis functions. Our new multigrid provably works on quasi-uniform point clouds on the surface and hence does not require a grid structure. Moreover, the computational cost scales log-linearly in the degrees of freedom.
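A hedged sketch of the underlying multigrid idea in the simplest possible setting: one two-grid cycle for the 1D Poisson problem on a uniform grid (the paper's method instead works on quasi-uniform point clouds on surfaces with kernel-based Lagrange functions):

```python
import numpy as np

def jacobi(u, f, h, sweeps=3, omega=2 / 3):
    """Damped Jacobi smoothing for -u'' = f with zero Dirichlet boundaries."""
    for _ in range(sweeps):
        up = np.pad(u, 1)  # zero boundary values as ghost entries
        u = (1 - omega) * u + omega * 0.5 * (up[:-2] + up[2:] + h * h * f)
    return u

def residual(u, f, h):
    up = np.pad(u, 1)
    return f - (2 * u - up[:-2] - up[2:]) / (h * h)

def two_grid(u, f, h):
    """One two-grid cycle: smooth, restrict the residual, solve the
    coarse problem exactly, interpolate the correction, smooth again.
    Requires an odd number of interior points so the coarse grid nests."""
    u = jacobi(u, f, h)                                   # pre-smoothing
    r = residual(u, f, h)
    rc = 0.25 * r[0:-1:2] + 0.5 * r[1::2] + 0.25 * r[2::2]  # full weighting
    nc, hc = len(rc), 2 * h
    Ac = (2 * np.eye(nc) - np.eye(nc, k=1) - np.eye(nc, k=-1)) / (hc * hc)
    ec = np.linalg.solve(Ac, rc)                          # exact coarse solve
    e = np.zeros_like(u)                                  # linear interpolation
    e[1::2] = ec
    ecp = np.pad(ec, 1)
    e[0::2] = 0.5 * (ecp[:-1] + ecp[1:])
    return jacobi(u + e, f, h)                            # post-smoothing
```

Recursing on the coarse solve instead of inverting directly yields the full multigrid hierarchy whose log-linear cost the abstract refers to.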

Community detection is a fundamental task in social network analysis. Online social networks have dramatically increased the volume and speed of interactions among users, enabling advanced analysis of these dynamics. Despite a growing interest in tracking the evolution of groups of users in real-world social networks, most community detection efforts focus on communities within static networks. Here, we describe a framework for tracking communities over time in a dynamic network, where a series of significant events is identified for each community. To this end, a modularity-based strategy is proposed to effectively detect and track dynamic communities. The potential of our framework is shown by conducting extensive experiments on synthetic networks containing embedded events. Results indicate that our framework outperforms other state-of-the-art methods. In addition, we briefly explore how the proposed approach can identify dynamic communities in a Twitter network composed of more than 60,000 users, which posted over 5 million tweets throughout 2020. The proposed framework can be applied to different social networks and provides a valuable tool to understand the evolution of communities in dynamic social networks.
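One common way to track communities across snapshots, sketched below, matches them by Jaccard similarity and labels simple lifecycle events; the threshold and the event taxonomy are assumptions for illustration, not necessarily those of the framework:

```python
def jaccard(a, b):
    """Jaccard similarity of two member sets."""
    return len(a & b) / len(a | b)

def track_events(prev, curr, theta=0.3):
    """Label continue/death/birth events between two snapshots.

    prev, curr: lists of communities (sets of member ids);
    theta: assumed similarity threshold for a match.
    """
    events, matched_prev, matched_curr = [], set(), set()
    for i, c_prev in enumerate(prev):
        for j, c_curr in enumerate(curr):
            if jaccard(c_prev, c_curr) >= theta:
                events.append(("continue", i, j))
                matched_prev.add(i)
                matched_curr.add(j)
    events += [("death", i, None) for i in range(len(prev)) if i not in matched_prev]
    events += [("birth", None, j) for j in range(len(curr)) if j not in matched_curr]
    return events
```

Richer taxonomies (merge, split, growth, shrinkage) follow the same pattern by inspecting how many matches each community participates in.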

The use of high order fully implicit Runge-Kutta methods is of significant importance in the context of the numerical solution of transient partial differential equations, in particular when solving large scale problems due to fine space resolution with many millions of spatial degrees of freedom and long time intervals. In this study we consider strongly A-stable implicit Runge-Kutta methods of arbitrary order of accuracy, based on Radau quadratures, for which efficient preconditioners have been introduced. A refined spectral analysis of the corresponding matrices and matrix-sequences is presented, both in terms of localization and of the asymptotic global distribution of the eigenvalues. Specific expressions for the eigenvectors are also obtained. The analysis fully agrees with the numerically observed spectral behavior and substantially improves the theoretical studies done in this direction so far. Concluding remarks and open problems end the work, with specific attention to potential generalizations of the proposed approach.
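For concreteness, the two-stage Radau IIA method and its stability function $R(z) = 1 + z\,b^{\top}(I - zA)^{-1}\mathbf{1}$ can be checked numerically; this is a spot check of A- and L-stability on one member of the family, not the paper's spectral analysis:

```python
import numpy as np

# Butcher tableau of the two-stage Radau IIA method (order 3).
A = np.array([[5 / 12, -1 / 12],
              [3 / 4, 1 / 4]])
b = np.array([3 / 4, 1 / 4])

def stability(z):
    """Stability function R(z) = 1 + z b^T (I - zA)^{-1} e of the RK method."""
    e = np.ones(2)
    return 1 + z * (b @ np.linalg.solve(np.eye(2) - z * A, e))
```

The eigenvalues of A lie strictly in the right half-plane, and R(z) decays to zero as z tends to minus infinity, which is the stiff-decay behaviour that makes Radau-based methods attractive for the stiff systems arising from fine spatial discretizations.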

Interval scheduling is a basic problem in the theory of algorithms and a classical task in combinatorial optimization. We develop a set of techniques for partitioning and grouping jobs based on their starting and ending times, which enable us to view an instance of interval scheduling on many jobs as a union of multiple interval scheduling instances, each containing only a few jobs. Instantiating these techniques in dynamic and local settings of computation leads to several new results. For $(1+\varepsilon)$-approximation of job scheduling of $n$ jobs on a single machine, we develop a fully dynamic algorithm with $O(\frac{\log{n}}{\varepsilon})$ update and $O(\log{n})$ query worst-case time. Further, we design a local computation algorithm that uses only $O(\frac{\log{N}}{\varepsilon})$ queries when all jobs have length at least $1$ and have starting/ending times within $[0,N]$. Our techniques are also applicable in a setting where jobs have rewards/weights. For this case we design a fully dynamic deterministic algorithm whose worst-case update and query time are $\operatorname{poly}(\log n,\frac{1}{\varepsilon})$. Equivalently, this is the first algorithm that maintains a $(1+\varepsilon)$-approximation of the maximum independent set of a collection of weighted intervals with $\operatorname{poly}(\log n,\frac{1}{\varepsilon})$ time per update/query. This is an exponential improvement in $1/\varepsilon$ over the running time of a randomized algorithm of Henzinger, Neumann, and Wiese [SoCG, 2020], while also removing all dependence on the values of the jobs' starting/ending times and rewards, as well as removing the need for any randomness. We also extend our approaches for interval scheduling on a single machine to examine the setting with $M$ machines.
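As background, the static unweighted problem on one machine is solved exactly by the earliest-finish-time greedy, which is the quantity the dynamic algorithms above maintain approximately (a sketch, with jobs as (start, end) pairs):

```python
def max_compatible_jobs(jobs):
    """Classic earliest-finish-time greedy: returns the maximum number of
    pairwise non-overlapping jobs, which is optimal for the static problem."""
    count, last_end = 0, float("-inf")
    for start, end in sorted(jobs, key=lambda j: j[1]):
        if start >= last_end:  # job is compatible with the chosen set
            count += 1
            last_end = end
    return count
```

The greedy takes $O(n \log n)$ time from scratch; the point of the dynamic results above is sustaining a $(1+\varepsilon)$-approximation of this value under insertions and deletions in polylogarithmic time.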
