
This work considers the economic dispatch problem for a single micro-gas turbine, governed by a discrete state-space model, under combined heat and power (CHP) operation and coupled with a utility. If the exact power and heat demands are given, existing algorithms can quickly compute an optimal solution to the economic dispatch problem. In practice, however, the power and heat demands cannot be known deterministically; they are predicted, yielding an estimate and a bound on the estimation error. We consider the case in which the power and heat demands are unknown and present a robust optimization-based approach for scheduling the turbine's heat and power generation, in which the demand is assumed to lie inside an uncertainty set. We consider two choices of the uncertainty set, based on the $\ell^\infty$- and the $\ell^1$-norms, each with different advantages, and study the associated robust economic dispatch problems. We recast these as robust shortest-path problems on appropriately defined graphs. For the first choice, we provide an exact linear-time algorithm for the solution of the robust shortest-path problem; for the second, we provide an exact quadratic-time algorithm and an approximate linear-time algorithm. The efficiency and usefulness of the algorithms are demonstrated in a detailed case study that employs real data on energy demand profiles and electricity tariffs.
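
As a rough illustration of the shortest-path reformulation, the sketch below solves the nominal (known-demand) dispatch as a dynamic program over a time-expanded graph whose nodes are admissible turbine states. The robust $\ell^\infty$/$\ell^1$ variants and the actual cost model are beyond this sketch; all names and costs are hypothetical.

```python
def cheapest_schedule(num_steps, states, step_cost):
    """Shortest path on a time-expanded graph by dynamic programming.

    states: admissible turbine operating points (e.g. power levels);
    step_cost(t, s_prev, s_next): arc cost at step t (fuel cost minus
    revenue plus ramping penalties -- all hypothetical here).
    Runs in O(num_steps * len(states)**2), i.e. linear in the horizon.
    """
    cost = {s: 0.0 for s in states}      # cost of reaching s at current step
    back = []                            # backpointers, one dict per step
    for t in range(num_steps):
        new_cost, prev = {}, {}
        for s_next in states:
            best = min(states, key=lambda s: cost[s] + step_cost(t, s, s_next))
            new_cost[s_next] = cost[best] + step_cost(t, best, s_next)
            prev[s_next] = best
        cost = new_cost
        back.append(prev)
    end = min(cost, key=cost.get)
    path, s = [end], end
    for prev in reversed(back):          # backtrack the optimal schedule
        s = prev[s]
        path.append(s)
    return cost[end], path[::-1]
```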

Related content

We develop novel methods for using persistent homology to infer the homology of an unknown Riemannian manifold $(M, g)$ from a point cloud sampled from an arbitrary smooth probability density function. Standard distance-based filtered complexes, such as the \v{C}ech complex, often have trouble distinguishing noise from features that are simply small. We address this problem by defining a family of "density-scaled filtered complexes" that includes a density-scaled \v{C}ech complex and a density-scaled Vietoris--Rips complex. We show that the density-scaled \v{C}ech complex is homotopy-equivalent to $M$ for filtration values in an interval whose starting point converges to $0$ in probability as the number of points $N \to \infty$ and whose ending point approaches infinity as $N \to \infty$. By contrast, the standard \v{C}ech complex may only be homotopy-equivalent to $M$ for a very small range of filtration values. The density-scaled filtered complexes also have the property that they are invariant under conformal transformations, such as scaling. We implement a filtered complex $\widehat{DVR}$ that approximates the density-scaled Vietoris--Rips complex, and we empirically test the performance of our implementation. As examples, we use $\widehat{DVR}$ to identify clusters that have different densities, and we apply $\widehat{DVR}$ to a time-delay embedding of the Lorenz dynamical system. Our implementation is stable (under conditions that are almost surely satisfied) and designed to handle outliers in the point cloud that do not lie on $M$.
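
To make the rescaling concrete, the sketch below computes a density-scaled distance matrix in the spirit of a conformal rescaling such as $\tilde{g} = \rho^{2/n} g$; the exact normalisation used here is an assumption, not the paper's construction. The resulting matrix can be fed to any Vietoris--Rips persistence implementation (e.g. `ripser(D, distance_matrix=True)` in ripser.py) to approximate a density-scaled complex.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.stats import gaussian_kde

def density_scaled_distances(points):
    """Pairwise distances rescaled by an estimated density, so that sparse
    and dense regions enter the filtration at comparable scales.

    points: (N, d) array sampled from the (unknown) manifold M.
    The rescaling rho**(1/d) and the symmetric averaging below are an
    assumed normalisation, not the paper's exact construction.
    """
    n, d = points.shape
    rho = gaussian_kde(points.T)(points.T)   # density estimate at each point
    scale = rho ** (1.0 / d)                 # local unit-length correction
    dist = squareform(pdist(points))         # Euclidean distance matrix
    s = 0.5 * (scale[:, None] + scale[None, :])
    return dist * s                          # density-scaled distances
```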

Optimising the ambulance dispatching process is important for patients who need early treatment. However, the problem of dynamic ambulance redeployment for destination hospital selection has rarely been investigated. This paper proposes an approach to model and simulate the ambulance dispatching process in the multi-agent healthcare environments of large cities. The approach couples a game-theoretic (GT) model to identify hospital strategies (treating hospitals as players in a non-cooperative game) with a discrete-event simulation (DES) of patient delivery and provision of healthcare services to evaluate ambulance dispatching (selection of the target hospital). Assuming the collective nature of decisions on patient delivery, the approach assesses the influence of the diverse behaviours of hospitals on system performance, with possible further optimisation of this performance. The approach is studied through a series of cases, starting with a simplified 1D model and proceeding to a coupled 2D model and a real-world application. The study considers the problem of dispatching ambulances to patients with acute coronary syndrome (ACS) who are directed to percutaneous coronary intervention (PCI) at the target hospital. A real-world case study using data from Saint Petersburg (Russia) shows that the global characteristics of the healthcare system (mortality rate) conform better to observations when the proposed approach is applied to discover the agents' diverse behaviour.
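
The game-theoretic layer is hard to compress, but the DES layer admits a toy sketch: the snippet below simulates dispatch of patients to hospitals under a "minimise predicted door-to-treatment time" rule. All arrival, travel, and service parameters are illustrative, not calibrated to the Saint Petersburg data, and the hospitals' strategic behaviour is not modelled.

```python
import random

def simulate(num_patients, hospitals, seed=0):
    """Toy discrete-event simulation of ambulance dispatch.

    hospitals: list of dicts with 'travel' (mean travel time, minutes)
    and 'service' (mean treatment time, minutes).
    """
    rng = random.Random(seed)
    for h in hospitals:
        h['busy_until'] = 0.0
    t, delays = 0.0, []
    for _ in range(num_patients):
        t += rng.expovariate(1.0)                      # next emergency call
        # Predicted delay: travel plus waiting for the hospital to free up.
        h = min(hospitals,
                key=lambda hp: hp['travel'] + max(0.0, hp['busy_until'] - t))
        arrival = t + rng.expovariate(1.0 / h['travel'])
        start = max(arrival, h['busy_until'])          # queue if hospital busy
        h['busy_until'] = start + rng.expovariate(1.0 / h['service'])
        delays.append(start - t)                       # time to treatment
    return sum(delays) / len(delays)                   # mean delay (minutes)

# Example: two hospitals with different travel/treatment profiles.
# simulate(1000, [{'travel': 10.0, 'service': 45.0},
#                 {'travel': 20.0, 'service': 30.0}])
```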

Artificial Neural Networks (ANNs) can be viewed as nonlinear sieves that can approximate complex functions of high-dimensional variables more effectively than linear sieves. We investigate the computational performance of various ANNs in nonparametric instrumental variables (NPIV) models with moderately high-dimensional covariates that are relevant to empirical economics. We present two efficient procedures for estimation and inference on a weighted average derivative (WAD): an orthogonalized plug-in with optimally-weighted sieve minimum distance (OP-OSMD) procedure and a sieve efficient score (ES) procedure. Both estimators for WAD use ANN sieves to approximate the unknown NPIV function and are root-n asymptotically normal and first-order equivalent. We provide a detailed practitioner's recipe for implementing both efficient procedures. This involves the choice of tuning parameters for the unknown NPIV function, the conditional expectations, and the optimal weighting function that appear in both procedures, as well as the choice of tuning parameters for the unknown Riesz representer in the ES procedure. We compare their finite-sample performances in various simulation designs that involve smooth NPIV functions of up to 13 continuous covariates, different nonlinearities, and covariate correlations. Some Monte Carlo findings are: 1) tuning and optimization are more delicate in ANN estimation; 2) given proper tuning, ANN estimators with various architectures can perform well; 3) ANN OP-OSMD estimators are easier to tune than ANN ES estimators; 4) stable inferences are more difficult to achieve with ANN (than spline) estimators; 5) there are gaps between current implementations and approximation theories. Finally, we apply ANN NPIV to estimate average partial derivatives in two empirical demand examples with multivariate covariates.
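
For orientation only, the following is a minimal sieve-minimum-distance NPIV sketch with scalar $X$ and $Z$, a one-hidden-layer ANN sieve, identity weighting, and derivative-free optimisation. It omits the orthogonalised plug-in step, the optimal weighting, and the Riesz-representer estimation that the OP-OSMD and ES procedures involve.

```python
import numpy as np
from scipy.optimize import minimize

def smd_npiv(y, x, z, width=8, n_basis=10, seed=0):
    """Sieve-minimum-distance NPIV with a one-hidden-layer ANN sieve for
    the unknown function h and a polynomial instrument sieve in z.
    Identity weighting; scalar x and z; all tuning choices are ad hoc.
    """
    rng = np.random.default_rng(seed)
    B = np.column_stack([z ** j for j in range(n_basis)])  # instrument basis
    P = B @ np.linalg.pinv(B.T @ B) @ B.T                  # projection onto it

    def h(theta, xs):
        w1, b1 = theta[:width], theta[width:2 * width]
        w2, b2 = theta[2 * width:3 * width], theta[-1]
        return np.tanh(np.outer(xs, w1) + b1) @ w2 + b2

    def objective(theta):
        m = P @ (y - h(theta, x))     # estimated E[Y - h(X) | Z], projected
        return m @ m / len(y)

    theta0 = 0.1 * rng.standard_normal(3 * width + 1)
    res = minimize(objective, theta0, method="L-BFGS-B")
    return lambda xs: h(res.x, np.asarray(xs))   # fitted h-hat
```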

We estimate best-approximation errors using vector-valued finite elements for fields with low regularity in the scale of fractional-order Sobolev spaces. By additionally assuming that the target field has a curl or divergence property, we establish upper bounds on these errors that can be localized to the mesh cells. These bounds build on the quasi-interpolation error estimates, with or without boundary prescription, established in [A. Ern and J.-L. Guermond, ESAIM Math. Model. Numer. Anal., 51 (2017), pp.~1367--1385]. Using the face-to-cell lifting operators analyzed in [A. Ern and J.-L. Guermond, Found. Comput. Math., (2021)] and exploiting the additional assumption on the curl or the divergence of the target field, we derive a localized upper bound on the quasi-interpolation error. As an illustration, we show how to apply these results to the error analysis of the curl-curl problem associated with Maxwell's equations.
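
To fix ideas, a localized bound of the generic shape one expects here, assuming $r \in (0,1]$ and a shape-regular mesh $\mathcal{T}_h$ with cell sizes $h_K$ (an illustrative form based on dimensional scaling, not the paper's precise statement), reads

\[
\inf_{v_h \in V_h} \| v - v_h \|_{L^2(D)}
\;\le\; C \Bigl( \sum_{K \in \mathcal{T}_h}
  h_K^{2r} \, | v |_{H^r(K)}^2
  + h_K^{2(r+1)} \, | \nabla \times v |_{H^r(K)}^2 \Bigr)^{1/2},
\]

with $\nabla \times v$ replaced by $\nabla \cdot v$ in the divergence case; the key point is that each summand involves only quantities local to the cell $K$.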

This paper proposes a new image-based localization framework that explicitly localizes the camera/robot by fusing a Convolutional Neural Network (CNN) with the geometric constraints of sequential images. The camera is localized using one or a few observed images together with training images carrying 6-degree-of-freedom pose labels. A Siamese network structure is adopted to train an image descriptor network, and the visually most similar candidate image in the training set is retrieved to localize the test image geometrically. Meanwhile, a probabilistic motion model predicts the pose based on a constant-velocity assumption. The two estimated poses are finally fused using their uncertainties to yield an accurate pose prediction. This method leverages the geometric uncertainty and is applicable in indoor scenarios dominated by diffuse illumination. Experiments on simulated and real data sets demonstrate the effectiveness of the proposed method. The results further show that combining the CNN-based framework with geometric constraints achieves better accuracy than CNN-only methods, especially when the training data set is small.
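
A plausible reading of the uncertainty-based fusion step is the standard Gaussian information-weighted mean, sketched below; the paper's exact formulation, in particular its handling of the rotation manifold, may differ.

```python
import numpy as np

def fuse_poses(p1, cov1, p2, cov2):
    """Fuse two pose estimates by inverse-covariance weighting (the
    Gaussian product rule).  p1, p2: pose vectors, e.g. [x, y, z, roll,
    pitch, yaw]; cov1, cov2: their covariance matrices.  A full treatment
    would compose rotations on the manifold rather than averaging Euler
    angles; this sketch ignores that subtlety.
    """
    info1, info2 = np.linalg.inv(cov1), np.linalg.inv(cov2)
    cov = np.linalg.inv(info1 + info2)        # fused covariance
    pose = cov @ (info1 @ p1 + info2 @ p2)    # information-weighted mean
    return pose, cov
```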

In this paper we discuss a reduced basis method for linear evolution PDEs based on the application of the Laplace transform. The main advantage of this approach is that, unlike time-stepping methods such as Runge-Kutta integrators, the Laplace transform allows one to compute the solution directly at a given time instant; this is done by approximating the contour integral associated with the inverse Laplace transform by a suitable quadrature formula. In terms of the reduced basis methodology, this yields a significant improvement in the reduction phase, such as one based on the classical proper orthogonal decomposition (POD), since the number of vectors to which the decomposition is applied is drastically reduced: it no longer contains all the intermediate solutions generated along an integration grid by a time-stepping method. We show the effectiveness of the method on some illustrative parabolic PDEs arising from finance, and we also provide evidence that the proposed method, when applied to a simple advection equation, does not suffer from the slow decay of singular values that affects methods based on time integration of the Cauchy problem arising from space discretization.
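
As an illustration of the contour-quadrature idea, the sketch below inverts a Laplace transform with the fixed-Talbot rule of Abate and Valkó; the paper's specific contour and quadrature may differ.

```python
import cmath, math

def talbot_inverse_laplace(F, t, M=32):
    """Numerically invert a Laplace transform F(s) at time t > 0 using the
    fixed-Talbot contour quadrature (Abate & Valko).  A generic stand-in
    for the contour quadrature mentioned in the abstract.
    """
    r = 2.0 * M / (5.0 * t)                      # contour parameter
    total = 0.5 * cmath.exp(r * t) * F(r)        # endpoint term (theta = 0)
    for k in range(1, M):
        theta = k * math.pi / M
        cot = math.cos(theta) / math.sin(theta)
        s = r * theta * (cot + 1j)               # point on the Talbot contour
        sigma = theta + (theta * cot - 1.0) * cot
        total += (cmath.exp(t * s) * F(s) * (1.0 + 1j * sigma)).real
    return (r / M) * total.real

# Sanity check: F(s) = 1/(s + 1) has inverse exp(-t).
print(talbot_inverse_laplace(lambda s: 1.0 / (s + 1.0), 1.0))  # ~0.3679
```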

Behavioral science researchers have shown strong interest in disaggregating within-person relations from between-person differences (stable traits) using longitudinal data. In this paper, we propose a method of within-person variability score-based causal inference for estimating joint effects of time-varying continuous treatments by effectively controlling for stable traits. After explaining the assumed data-generating process and providing formal definitions of stable trait factors, within-person variability scores, and joint effects of time-varying treatments at the within-person level, we introduce the proposed method, which consists of a two-step analysis. Within-person variability scores for each person, which are disaggregated from the stable traits of that person, are first calculated using weights based on a best linear correlation-preserving predictor through structural equation modeling (SEM). Causal parameters are then estimated via a potential outcome approach, either marginal structural models (MSMs) or structural nested mean models (SNMMs), using the calculated within-person variability scores. Unlike the approach that relies entirely on SEM, the present method does not assume linearity for observed time-varying confounders at the within-person level. We emphasize the use of SNMMs with G-estimation because it is doubly robust to misspecification of how observed time-varying confounders are functionally related to treatments/predictors and outcomes at the within-person level. Through simulation, we show that the proposed method can recover causal parameters well and that causal estimates might be severely biased if one does not properly account for stable traits. An empirical application using data on sleep habits and mental health status from the Tokyo Teen Cohort study is also provided.
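
For the second step only, the sketch below estimates an MSM by inverse-probability-of-treatment weighting on scores assumed to be already disaggregated from stable traits; the SEM-based first step and the doubly robust SNMM/G-estimation route are not reproduced, and the propensity model and the omitted weight stabilisation are simplifications.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

def msm_ipw(treat, confounder, outcome):
    """Marginal structural model via inverse-probability weighting.

    treat: (n, T) binary treatments (at the within-person level);
    confounder: (n, T) time-varying confounders; outcome: (n,)
    end-of-study outcomes.
    """
    treat = treat.astype(int)
    n, T = treat.shape
    w = np.ones(n)
    for t in range(T):
        ps = LogisticRegression().fit(confounder[:, [t]], treat[:, t])
        p_obs = ps.predict_proba(confounder[:, [t]])[np.arange(n), treat[:, t]]
        w /= p_obs                                # accumulate IPT weights
    msm = LinearRegression().fit(treat, outcome, sample_weight=w)
    return msm.coef_                              # per-period joint effects
```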

We study the class of first-order locally-balanced Metropolis--Hastings algorithms introduced in Livingstone & Zanella (2021). To choose a specific algorithm within the class, the user must select a balancing function $g:\mathbb{R} \to \mathbb{R}$ satisfying $g(t) = tg(1/t)$ and a noise distribution for the proposal increment. Popular choices within the class are the Metropolis-adjusted Langevin algorithm and the recently introduced Barker proposal. We first establish, among all members of the class, a universal limiting optimal acceptance rate of 57% and a scaling of $n^{-1/3}$ as the dimension $n$ tends to infinity, under mild smoothness assumptions on $g$ and when the target distribution of the algorithm is of product form. In particular, we obtain an explicit expression for the asymptotic efficiency of an arbitrary algorithm in the class, as measured by expected squared jumping distance. We then consider how to optimise this expression under various constraints. We derive the optimal choice of noise distribution for the Barker proposal, the optimal choice of balancing function under a Gaussian noise distribution, and the optimal choice of first-order locally-balanced algorithm among the entire class, which turns out to depend on the specific target distribution. Numerical simulations confirm our theoretical findings and in particular show that a bi-modal choice of noise distribution in the Barker proposal gives rise to a practical algorithm that is consistently more efficient than the original Gaussian version.
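
For concreteness, one member of the class, the Barker proposal with Gaussian noise (balancing function $g(t) = t/(1+t)$), can be sketched as follows; this is the basic version, not the optimally tuned variants studied in the paper.

```python
import numpy as np

def barker_step(x, log_pi, grad_log_pi, sigma, rng):
    """One Metropolis--Hastings step with the Barker proposal and Gaussian
    noise increments (a minimal sketch for a target on R^d).
    """
    z = sigma * rng.standard_normal(x.shape)
    # Keep or flip each increment with the locally-balanced probability.
    p_keep = 1.0 / (1.0 + np.exp(-z * grad_log_pi(x)))
    b = np.where(rng.random(x.shape) < p_keep, 1.0, -1.0)
    y = x + b * z

    def log_q(frm, to):
        # Log of the skewing factor of the proposal density q(frm -> to);
        # the symmetric Gaussian part cancels in the acceptance ratio.
        d = to - frm
        return -np.sum(np.log1p(np.exp(-d * grad_log_pi(frm))))

    log_alpha = log_pi(y) + log_q(y, x) - log_pi(x) - log_q(x, y)
    return y if np.log(rng.random()) < log_alpha else x

# Example: standard Gaussian target in 10 dimensions.
rng = np.random.default_rng(0)
x = np.zeros(10)
for _ in range(1000):
    x = barker_step(x, lambda v: -0.5 * v @ v, lambda v: -v, 1.0, rng)
```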

Cellular networks are expected to be the main communication infrastructure supporting the expanding applications of Unmanned Aerial Vehicles (UAVs). As these networks are deployed to serve ground User Equipment (UEs), several issues need to be addressed to enhance cellular service for UAVs. In this paper, we propose a realistic communication model for the downlink, and we show that the Quality of Service (QoS) for the users is affected by the number of interfering base stations (BSs) and the impact they cause. We therefore address the joint problem of sub-carrier and power allocation. Given its complexity, the problem being known to be NP-hard, we introduce a solution based on game theory. First, we argue that separating UAVs from UEs in terms of the assigned sub-carriers reduces the impact of interference on the users; this is materialized through a matching game. Moreover, to improve the resulting partition, we propose a coalitional game that takes the outcome of the matching game and enables users to change coalitions and enhance their QoS. Furthermore, a power optimization solution is introduced and incorporated into both games. Performance evaluations are conducted, and the obtained results demonstrate the effectiveness of the proposed schemes.
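
As a toy version of the matching stage, the sketch below matches users to sub-carriers by deferred acceptance with an abstract utility function; the paper's actual game formulation, coalition formation, and power optimisation are not reproduced here.

```python
def match_subcarriers(users, subcarriers, utility):
    """One-to-one matching of users to sub-carriers by deferred acceptance:
    users propose in preference order; each sub-carrier keeps its best
    proposer.  utility(u, c) (e.g. achievable rate under interference) is
    left abstract, and the sub-carrier's preference is simplified to the
    same utility.
    """
    prefs = {u: sorted(subcarriers, key=lambda c: -utility(u, c)) for u in users}
    held = {}                                  # sub-carrier -> current holder
    free = list(users)
    while free:
        u = free.pop()
        for c in prefs[u]:
            rival = held.get(c)
            if rival is None:
                held[c] = u
                break
            if utility(u, c) > utility(rival, c):
                held[c] = u                    # displace the weaker rival
                free.append(rival)
                break
    return {u: c for c, u in held.items()}
```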

In two-phase image segmentation, convex relaxation has made it possible to compute global minimisers for a variety of data fitting terms, and many efficient approaches exist to compute a solution quickly. Here we consider whether the nature of the data fitting in this formulation allows reasonable assumptions to be made about the solution that improve the computational performance further. In particular, we employ a well-known dual formulation of this problem and solve the corresponding equations in a restricted domain. We present experimental results that explore the dependence of the solution on this restriction and quantify improvements in the computational performance. This approach extends simply to analogous methods and could provide an efficient alternative for problems of this type.
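
As one concrete instance of the restricted-domain idea, the sketch below runs a Chambolle-type dual projection iteration for a TV-regularised model while updating the dual variable only inside a mask (e.g. a band around the current interface); the paper's data-fitting term and restriction strategy may differ.

```python
import numpy as np

def grad(u):
    """Forward-difference gradient with Neumann boundary."""
    gx, gy = np.zeros_like(u), np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def div(px, py):
    """Discrete divergence, the negative adjoint of grad."""
    dx, dy = np.zeros_like(px), np.zeros_like(py)
    dx[0, :] = px[0, :]
    dx[1:-1, :] = px[1:-1, :] - px[:-2, :]
    dx[-1, :] = -px[-2, :]
    dy[:, 0] = py[:, 0]
    dy[:, 1:-1] = py[:, 1:-1] - py[:, :-2]
    dy[:, -1] = -py[:, -2]
    return dx + dy

def chambolle_restricted(f, lam, mask, tau=0.125, iters=200):
    """Chambolle-type dual projection iteration, updating the dual variable
    only inside `mask` (the restricted domain); outside it is frozen.
    """
    px, py = np.zeros_like(f), np.zeros_like(f)
    for _ in range(iters):
        gx, gy = grad(div(px, py) - f / lam)
        norm = 1.0 + tau * np.sqrt(gx ** 2 + gy ** 2)
        px = np.where(mask, (px + tau * gx) / norm, px)
        py = np.where(mask, (py + tau * gy) / norm, py)
    return f - lam * div(px, py)   # relaxed solution, thresholded afterwards
```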
