The sum-utility maximization problem is known to be important in the energy systems literature. The conventional assumption when addressing this problem is that the utility is concave, but for some key applications such an assumption is not reasonable and does not reflect the actual behavior of the consumer. To address this issue, we pose and solve a more general optimization problem, namely one in which the consumer's utility is assumed to be sigmoidal and to lie in a given class of functions. The considered class of functions is very attractive for at least two reasons. First, the classical NP-hardness issue associated with sum-utility maximization is circumvented. Second, the class encompasses well-known performance metrics used to analyze pricing and energy-efficiency problems. This allows us to design a new and optimal inclining block rates (IBR) pricing policy, which also has the virtue of flattening power consumption and reducing peak power. We also show how to maximize energy-efficiency with a low-complexity algorithm. Simulations fully support the benefits of the proposed approach when compared with existing policies.
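To make the setting concrete, the following is a minimal numerical sketch of sum-utility maximization with sigmoidal utilities under a total power budget. It uses a generic nonlinear solver rather than the structured, provably optimal approach described above; the steepness parameters, inflection points, and budget are made-up values for demonstration.

```python
# Illustrative sketch (not the paper's algorithm): maximizing a sum of
# sigmoidal consumer utilities under a total power budget. The utility
# parameters a_i (steepness), b_i (inflection point), and the budget
# P_max are hypothetical values.
import numpy as np
from scipy.optimize import minimize

a = np.array([2.0, 1.5, 3.0])   # hypothetical steepness per consumer
b = np.array([1.0, 2.0, 1.5])   # hypothetical inflection points
P_max = 4.0                     # hypothetical total power budget

def neg_sum_utility(p):
    # Sigmoidal utility per consumer; the sum is nonconvex in general.
    return -np.sum(1.0 / (1.0 + np.exp(-a * (p - b))))

res = minimize(
    neg_sum_utility,
    x0=np.full(3, P_max / 3),   # start from an equal split
    bounds=[(0.0, P_max)] * 3,
    constraints={"type": "ineq", "fun": lambda p: P_max - p.sum()},
    method="SLSQP",
)
print("allocation:", res.x, "sum-utility:", -res.fun)
```

Because a sum of sigmoids is nonconvex in general, a generic solver such as SLSQP is only guaranteed a local optimum; exploiting the structure of the considered function class, as proposed above, is what circumvents the NP-hardness of the general problem.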

Group testing is one of the fundamental problems in coding theory and combinatorics in which one is to identify a subset of contaminated items from a given ground set. There has been renewed interest in group testing recently due to its applications in diagnostic virology, including pool testing for the novel coronavirus. The majority of existing works on group testing focus on the \emph{uniform} setting in which any subset of size $d$ from a ground set $V$ of size $n$ is potentially contaminated. In this work, we consider a {\em generalized} version of group testing with an arbitrary set-system of potentially contaminated sets. The generalized problem is characterized by a hypergraph $H=(V,E)$, where $V$ represents the ground set and edges $e\in E$ represent potentially contaminated sets. The problem of generalized group testing is motivated by practical settings in which not all subsets of a given size $d$ may be potentially contaminated; rather, due to social dynamics, geographical limitations, or other considerations, there exist subsets that can be readily ruled out. For example, in the context of pool testing, the edge set $E$ may consist of families, work teams, or students in a classroom, i.e., subsets likely to be mutually contaminated. The goal in studying the generalized setting is to leverage the additional knowledge characterized by $H=(V,E)$ to significantly reduce the number of required tests. The paper considers both adaptive and non-adaptive group testing and makes the following contributions. First, for the non-adaptive setting, we show that finding an optimal solution for the generalized version of group testing is NP-hard. For this setting, we present a solution that requires $O(d\log{|E|})$ tests, where $d$ is the maximum size of a set $e \in E$. Our solutions generalize those given for the traditional setting and are shown to be of order-optimal size $O(\log{|E|})$ for hypergraphs with edges that have ``large'' symmetric differences. For the adaptive setting, when edges in $E$ are of size exactly $d$, we present a solution of size $O(\log{|E|}+d\log^2{d})$ that comes close to the lower bound of $\Omega(\log{|E|} + d)$.
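For intuition, here is a minimal sketch of classic non-adaptive group testing with a random Bernoulli test design and the standard COMP decoder; the generalized hypergraph construction described above is more involved, and all sizes below are illustrative.

```python
# Minimal sketch of classic (uniform) non-adaptive group testing with the
# standard COMP decoder, for intuition only; the paper's construction for
# an arbitrary hypergraph H=(V,E) is more involved.
import numpy as np

rng = np.random.default_rng(0)
n, d, T = 100, 3, 60            # items, defectives, number of tests (illustrative)
defective = rng.choice(n, size=d, replace=False)

# Random Bernoulli design: include each item in each test with probability 1/d.
A = rng.random((T, n)) < 1.0 / d
outcomes = A[:, defective].any(axis=1)   # a test is positive iff it hits a defective

# COMP: any item appearing in a negative test is definitely not defective.
candidates = np.ones(n, dtype=bool)
for t in range(T):
    if not outcomes[t]:
        candidates[A[t]] = False
print("true:", sorted(defective), "recovered superset:", np.flatnonzero(candidates))
```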

Federated learning (FL) is capable of performing large distributed machine learning tasks across multiple edge users by periodically aggregating trained local parameters. To address key challenges of enabling FL over a wireless fog-cloud system (e.g., non-i.i.d. data and user heterogeneity), we first propose an efficient FL algorithm based on Federated Averaging (called FedFog) that performs the local aggregation of gradient parameters at fog servers and the global training update at the cloud. Next, we deploy FedFog in wireless fog-cloud systems by investigating a novel network-aware FL optimization problem that strikes a balance between the global loss and completion time. An iterative algorithm is then developed to obtain a precise measurement of the system performance, which helps in designing an efficient stopping criterion that outputs an appropriate number of global rounds. To mitigate the straggler effect, we propose a flexible user aggregation strategy that trains fast users first to obtain a certain level of accuracy before allowing slow users to join the global training updates. Extensive numerical results using several real-world FL tasks are provided to verify the theoretical convergence of FedFog. We also show that the proposed co-design of FL and communication is essential to substantially improving resource utilization while achieving comparable accuracy of the learning model.
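The following toy sketch illustrates the hierarchical aggregation pattern (users, then fog servers, then the cloud) behind FedFog-style training. The single-vector model, quadratic local losses, and all constants are placeholders, not the paper's actual formulation.

```python
# Illustrative sketch of hierarchical federated averaging in the spirit of
# FedFog: local SGD at users, aggregation at fog servers, then a global
# average at the cloud. The "model" is a single weight vector and the
# per-user optima are fake stand-ins for non-i.i.d. local data.
import numpy as np

rng = np.random.default_rng(1)
dim, fogs, users_per_fog, local_steps, lr = 5, 3, 4, 2, 0.1
w_global = np.zeros(dim)
targets = rng.normal(size=(fogs, users_per_fog, dim))  # fake non-i.i.d. optima

for _ in range(20):                        # global rounds
    fog_models = []
    for f in range(fogs):
        user_models = []
        for u in range(users_per_fog):
            w = w_global.copy()
            for _ in range(local_steps):   # local SGD on a toy quadratic loss
                w -= lr * (w - targets[f, u])
            user_models.append(w)
        fog_models.append(np.mean(user_models, axis=0))  # fog-level aggregation
    w_global = np.mean(fog_models, axis=0)               # cloud-level update
print("global model:", w_global)
```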

We study a new generative modeling technique based on adversarial training (AT). We show that in a setting where the model is trained to discriminate in-distribution data from adversarial examples perturbed from out-distribution samples, the model learns the support of the in-distribution data. The learning process is also closely related to MCMC-based maximum likelihood learning of energy-based models (EBMs), and can be considered an approximate maximum likelihood learning method. We show that this AT generative model achieves image generation performance competitive with state-of-the-art EBMs, while being stable to train and offering better sampling efficiency. We also demonstrate that the AT generative model is well-suited to the tasks of image translation and worst-case out-of-distribution detection.
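As a rough illustration of the training principle, the sketch below trains a toy discriminator to separate in-distribution samples from out-distribution samples that have been perturbed by gradient ascent on the discriminator's score (the role short-run MCMC plays in EBM training). The architecture, toy data, and hyperparameters are assumptions for demonstration only, not the paper's setup.

```python
# Conceptual sketch (assumed setup, not the paper's exact training code).
import torch
import torch.nn as nn
import torch.nn.functional as F

d = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))  # toy scorer
opt = torch.optim.Adam(d.parameters(), lr=1e-3)

def perturb(x, steps=10, step_size=0.1):
    # Gradient ascent on the discriminator score, analogous to drawing
    # negative samples in EBM maximum likelihood learning.
    x = x.clone().requires_grad_(True)
    for _ in range(steps):
        grad, = torch.autograd.grad(d(x).sum(), x)
        x = (x + step_size * grad.sign()).detach().requires_grad_(True)
    return x.detach()

for _ in range(200):
    x_in = torch.randn(64, 2) * 0.5 + 2.0    # toy in-distribution data
    x_out = torch.randn(64, 2) * 2.0 - 1.0   # toy out-distribution data
    x_adv = perturb(x_out)
    # Binary discrimination: in-distribution high score, adversarial low.
    loss = F.softplus(-d(x_in)).mean() + F.softplus(d(x_adv)).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```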

We are concerned with the problem of decomposing the parameter space of a parametric system of polynomial equations, and possibly some polynomial inequality constraints, with respect to the number of real solutions that the system attains. Previous studies apply a two-step approach to this problem, where first the discriminant variety of the system is computed via a Groebner Basis (GB), and then a Cylindrical Algebraic Decomposition (CAD) of this is produced to give the desired computation. However, even on some reasonably small applied examples this process is too expensive, with computation of the discriminant variety alone infeasible. In this paper we develop new approaches to building the discriminant variety using resultant methods (the Dixon resultant and a new method using iterated univariate resultants). This reduces the complexity compared to GB and allows a previously infeasible example to be tackled. We demonstrate the benefit by giving a symbolic solution to a problem from population dynamics -- the analysis of the steady states of three connected populations which exhibit Allee effects -- which previously could only be tackled numerically.
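The following small sympy example illustrates the iterated-univariate-resultant idea on a made-up two-equation system with one parameter; the Dixon-resultant pipeline developed in the paper is more general than this toy.

```python
# Small illustration (not the paper's pipeline) of iterated univariate
# resultants for eliminating variables from a parametric polynomial system.
from sympy import symbols, resultant, factor

x, y, a = symbols('x y a')           # a is the parameter
f = x**2 + y**2 - 1                  # toy system f = g = 0
g = x*y - a

# Eliminate y, then project onto the parameter: the vanishing locus of the
# final resultant plays the role of (part of) a discriminant variety,
# where the number of real solutions can change.
r_y = resultant(f, g, y)               # polynomial in x and a
disc = resultant(r_y, r_y.diff(x), x)  # parameter values with repeated roots in x
print(factor(disc))
```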

Game engines are increasingly used as simulation platforms by the autonomous vehicle (AV) community to develop vehicle control systems and test environments. A key requirement for simulation-based development and verification is determinism, since a deterministic process will always produce the same output given the same initial conditions and event history. Thus, in a deterministic simulation environment, tests are rendered repeatable and yield simulation results that are trustworthy and straightforward to debug. However, game engines are seldom deterministic. This paper reviews and identifies the potential causes of non-deterministic behaviours in game engines. A case study using CARLA, an open-source autonomous driving simulation environment powered by Unreal Engine, is presented to highlight its inherent shortcomings in providing sufficient precision in experimental results. Different configurations and utilisations of the software and hardware are explored to determine an operational domain where the simulation variance is sufficiently low, i.e.\ where the variance between repeated executions becomes negligible for development and testing work. Finally, a general method is proposed that can be used to find the domains of permissible variance in game engine simulations for any given system configuration.
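A generic harness for the kind of repeatability measurement described above might look as follows; `run_simulation` is a hypothetical stand-in for replaying a fixed scenario in CARLA or another engine, and the tolerance is illustrative.

```python
# Generic sketch of a repeatability check: run the same scenario N times
# and measure the worst-case deviation between recorded actor trajectories.
import numpy as np

def run_simulation() -> np.ndarray:
    # Placeholder: a real implementation would replay a fixed scenario in
    # the simulator with identical initial conditions and return actor
    # positions of shape (timesteps, n_actors, 3).
    rng = np.random.default_rng(0)   # fixed seed -> deterministic stub
    return np.cumsum(rng.normal(size=(100, 2, 3)), axis=0)

def max_pairwise_deviation(runs):
    # Largest position difference between any two runs at any time step.
    stacked = np.stack(runs)
    return np.abs(stacked[None] - stacked[:, None]).max()

runs = [run_simulation() for _ in range(5)]
tolerance = 1e-6                     # illustrative permissible variance
dev = max_pairwise_deviation(runs)
print("max deviation:", dev, "within tolerance:", dev <= tolerance)
```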

In the present work we tackle the problem of finding the optimal price tariff to be set by a risk-averse electric retailer participating in the pool and whose customers are price-sensitive. We assume that the retailer has access to a sufficiently large smart-meter dataset from which it can statistically characterize the relationship between the tariff price and the demand load of its clients. Three different models are analyzed to predict the aggregated load as a function of the electricity prices and other parameters, such as humidity or temperature. More specifically, we train linear regression (predictive) models to forecast the resulting demand load as a function of the retail price. This model is then embedded in a quadratic optimization problem that determines the optimal price to be offered. The optimization problem accounts for different sources of uncertainty, including consumers' response, pool prices, and renewable source availability, and relies on a stochastic and risk-averse formulation. In particular, one important contribution of this work is to base the scenario generation and reduction procedure on the statistical properties of the resulting predictive model. This allows us to properly quantify, in a data-driven manner, not only the expected value but also the level of uncertainty associated with the main problem parameters. Moreover, we consider both standard forward contracts and the recently introduced power purchase agreement contracts as risk-hedging tools for the retailer. The results are promising: the retailer obtains profits while offering highly competitive prices, and further improvements are shown to be possible if richer datasets become available in the future. A realistic case study and multiple sensitivity analyses have been performed to characterize the risk-aversion behavior of the retailer in the presence of price-sensitive consumers.
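A toy sketch of the two-stage idea follows: fit a linear demand model from synthetic price-load data, then embed it in a (here, closed-form) quadratic price optimization with an assumed wholesale cost. The full model above additionally handles uncertainty, risk aversion, and hedging contracts.

```python
# Toy two-stage sketch: (i) fit demand as a linear function of the retail
# price, (ii) plug the fit into a quadratic profit objective and solve for
# the optimal price. All data and the wholesale cost are synthetic.
import numpy as np

rng = np.random.default_rng(2)
price = rng.uniform(20, 60, size=200)                 # synthetic price history
load = 100 - 0.8 * price + rng.normal(0, 3, 200)      # price-sensitive demand

# Stage 1: least-squares fit load ~ a + b * price.
X = np.column_stack([np.ones_like(price), price])
a, b = np.linalg.lstsq(X, load, rcond=None)[0]

# Stage 2: profit(p) = (p - c) * (a + b*p) is concave in p when b < 0,
# so the optimum has the closed form p* = (c*b - a) / (2*b).
c = 30.0                                              # assumed wholesale cost
p_star = (c * b - a) / (2 * b)
print(f"fitted demand: {a:.1f} + {b:.2f}*p, optimal price: {p_star:.2f}")
```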

Machine prediction algorithms (e.g., binary classifiers) are often adopted on the basis of claimed performance using classic metrics such as sensitivity and predictive value. However, classifier performance depends heavily upon the context (workflow) in which the classifier operates. Classic metrics do not reflect the realized utility of a predictor unless certain implicit assumptions are met, and these assumptions cannot be met in many common clinical scenarios. This often results in suboptimal implementations and in disappointment when expected outcomes are not achieved. One common failure mode for classic metrics arises when multiple predictions can be made for the same event, particularly when redundant true positive predictions produce little additional value; this describes many clinical alerting systems. We explain why classic metrics cannot correctly represent predictor performance in such contexts, and introduce an improved performance assessment technique that uses utility functions to score predictions based on their utility in a specific workflow context. The resulting utility metrics (u-metrics) explicitly account for the effects of temporal relationships on prediction utility. Compared to traditional measures, u-metrics more accurately reflect the real-world costs and benefits of a predictor operating in a live clinical context, and the improvement can be significant. We also describe a formal approach to snoozing, a mitigation strategy in which some predictions are suppressed to improve predictor performance by reducing false positives while retaining event capture. Snoozing is especially useful for predictors that generate interruptive alarms. U-metrics correctly measure and predict the performance benefits of snoozing, whereas traditional metrics do not.
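The following schematic sketch shows one way a utility-scored metric with snoozing could be computed: each prediction is scored by its utility in context, redundant true positives for an already-captured event add nothing, and a snooze window suppresses alerts after one fires. The utility assignments and window length are illustrative assumptions, not the paper's u-metric definitions.

```python
# Schematic sketch of utility-scored predictions with snoozing.
from dataclasses import dataclass

@dataclass
class Prediction:
    time: float       # when the alert fires
    event_id: int     # -1 for a false positive, else the event it predicts

U_TP, U_FP, U_REDUNDANT = 1.0, -0.2, 0.0   # assumed utility assignments
SNOOZE = 60.0                              # suppress alerts for 60 time units

def u_score(preds):
    total, captured, snoozed_until = 0.0, set(), -float("inf")
    for p in sorted(preds, key=lambda p: p.time):
        if p.time < snoozed_until:
            continue                        # suppressed by snoozing
        snoozed_until = p.time + SNOOZE
        if p.event_id < 0:
            total += U_FP
        elif p.event_id in captured:
            total += U_REDUNDANT            # duplicate capture: no added value
        else:
            captured.add(p.event_id)
            total += U_TP
    return total

preds = [Prediction(0, 7), Prediction(10, 7), Prediction(200, -1)]
print("u-score:", u_score(preds))           # second alert for event 7 is snoozed
```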

To address the information explosion problem and enhance the user experience in various online applications, recommender systems have been developed to model users' preferences. Although numerous efforts have been made toward more personalized recommendations, recommender systems still suffer from several challenges, such as data sparsity and cold start. In recent years, generating recommendations with the knowledge graph as side information has attracted considerable interest. Such an approach can not only alleviate the abovementioned issues, yielding more accurate recommendations, but also provide explanations for recommended items. In this paper, we conduct a systematic survey of knowledge graph-based recommender systems. We collect recently published papers in this field and summarize them from two perspectives. On the one hand, we investigate the proposed algorithms, focusing on how they utilize the knowledge graph for accurate and explainable recommendation. On the other hand, we introduce the datasets used in these works. Finally, we propose several potential research directions in this field.

The concept of the smart grid has been introduced as a new vision of the conventional power grid, aimed at finding an efficient way to integrate green and renewable energy technologies. Along these lines, the Internet-connected smart grid, also called the energy Internet, is emerging as an innovative approach to making energy available anywhere at any time. The ultimate goal of these developments is to build a sustainable society. However, integrating and coordinating a large number of growing connections can be a challenging issue for the traditional centralized grid system. Consequently, the smart grid is undergoing a transformation from its centralized form to a decentralized topology. Meanwhile, blockchain has some excellent features that make it a promising technology for the smart grid paradigm. In this paper, we aim to provide a comprehensive survey on the application of blockchain in the smart grid. We identify the significant security challenges of smart grid scenarios that can be addressed by blockchain, and then present a number of recent blockchain-based research works addressing security issues in the smart grid area. We also summarize several related practical projects, trials, and products that have emerged recently. Finally, we discuss essential research challenges and future directions for applying blockchain to smart grid security issues.

In this paper, we study the optimal convergence rate for distributed convex optimization problems in networks. We model the communication restrictions imposed by the network as a set of affine constraints and provide optimal complexity bounds for four different setups, namely when the function $F(\mathbf{x}) \triangleq \sum_{i=1}^{m} f_i(\mathbf{x})$ is (i) strongly convex and smooth, (ii) strongly convex, (iii) smooth, or (iv) just convex. Our results show that Nesterov's accelerated gradient descent on the dual problem can be executed in a distributed manner and obtains the same optimal rates as in the centralized version of the problem (up to constant or logarithmic factors), with an additional cost related to the spectral gap of the interaction matrix. Finally, we discuss some extensions of the proposed setup, such as proximal-friendly functions, time-varying graphs, and improvement of the condition number.
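As a concrete toy instance of accelerated dual methods of this kind, the sketch below runs Nesterov's accelerated gradient on the dual of a one-dimensional consensus problem over a path graph; the quadratic local objectives and the graph are placeholders, and each update uses only neighbor information through the incidence matrix.

```python
# Toy sketch: Nesterov's accelerated gradient on the dual of
#   minimize sum_i f_i(x_i)  subject to  x_i = x_j on graph edges,
# with placeholder local objectives f_i(x) = 0.5 * (x - t_i)**2.
import numpy as np

t = np.array([1.0, 3.0, 5.0, 7.0])     # local targets; optimum is their mean
edges = [(0, 1), (1, 2), (2, 3)]       # path graph
A = np.zeros((len(edges), len(t)))     # edge-node incidence matrix
for k, (i, j) in enumerate(edges):
    A[k, i], A[k, j] = 1.0, -1.0

lam = np.zeros(len(edges))
lam_prev = lam.copy()
L = np.linalg.eigvalsh(A @ A.T).max()  # Lipschitz constant of the dual gradient
for k in range(200):
    mu = lam + (k / (k + 3)) * (lam - lam_prev)   # Nesterov momentum
    x = t - A.T @ mu                   # local minimizers of the Lagrangian
    lam_prev = lam
    lam = mu + (1.0 / L) * (A @ x)     # ascent step; A @ x is the consensus residual
print("local copies:", x, "consensus value:", t.mean())
```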
