
The estimation of wall thermal properties from \emph{in situ} measurements increases the reliability of model predictions for building energy efficiency. Nevertheless, retrieving the unknown parameters carries a significant computational cost, since several computations of the heat transfer problem are required to identify these thermal properties. To address this drawback, an innovative approach is investigated. The first step is to search for the optimal experiment design within an observation sequence spanning several months. A reduced three-day sequence of observations is identified that guarantees estimation of the parameter with maximum accuracy, and the inverse problem is then solved only for this short sequence. To reduce the computational effort further, a reduced-order model based on the modal identification method is employed. This \emph{a posteriori} model reduction method approximates the solution with fewer degrees of freedom. The whole methodology is illustrated by estimating the thermal diffusivity of a historical building that has been monitored with temperature sensors for several months. The computational cost is cut by a factor of five, and the estimated parameter improves the reliability of the predictions of the wall's thermal efficiency.
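
As a minimal, self-contained illustration of the parameter-estimation step only (not the paper's optimal-experiment-design or modal-identification reduced-order approach), the sketch below fits a thermal diffusivity to transient temperature data by least squares on a 1D finite-difference heat model; the wall geometry, boundary data, noise level and "true" diffusivity are all assumptions made for the example.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical setup: a 0.3 m wall discretized in 1D, measured surface
# temperatures imposed as Dirichlet boundary conditions, and one sensor
# in the middle of the wall. All values below are assumptions.
L, nx, dt, nt = 0.3, 16, 60.0, 24 * 60          # one day of minute data
dx = L / (nx - 1)
t = np.arange(nt) * dt
T_out = 10.0 + 5.0 * np.sin(2 * np.pi * t / 86400.0)   # outdoor side [degC]
T_in = np.full(nt, 20.0)                                # indoor side [degC]

def simulate(alpha):
    """Explicit finite-difference solution of T_t = alpha * T_xx."""
    T = np.full(nx, 15.0)
    mid = np.empty(nt)
    r = alpha * dt / dx**2                  # stability requires r <= 0.5
    for k in range(nt):
        T[0], T[-1] = T_out[k], T_in[k]
        T[1:-1] += r * (T[2:] - 2 * T[1:-1] + T[:-2])
        mid[k] = T[nx // 2]
    return mid

alpha_true = 6e-7                           # assumed "true" diffusivity [m^2/s]
rng = np.random.default_rng(0)
data = simulate(alpha_true) + 0.05 * rng.standard_normal(nt)  # synthetic sensor

res = minimize_scalar(lambda a: np.sum((simulate(a) - data) ** 2),
                      bounds=(1e-7, 3e-6), method="bounded")
print(f"estimated diffusivity: {res.x:.2e} m^2/s")
```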

Related content

We consider the problem: is the optimal expected total reward to reach a goal state in a partially observable Markov decision process (POMDP) below a given threshold? We tackle this -- generally undecidable -- problem by computing under-approximations on these total expected rewards. This is done by abstracting finite unfoldings of the infinite belief MDP of the POMDP. The key issue is to find a suitable under-approximation of the value function. We provide two techniques: a simple (cut-off) technique that uses a good policy on the POMDP, and a more advanced technique (belief clipping) that uses minimal shifts of probabilities between beliefs. We use mixed-integer linear programming (MILP) to find such minimal probability shifts and experimentally show that our techniques scale quite well while providing tight lower bounds on the expected total reward.
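
The belief MDP whose unfoldings are abstracted here is built from the standard Bayesian belief update of a POMDP; the sketch below shows that update for a small discrete model. The transition and observation matrices are placeholders, and the cut-off and belief-clipping machinery of the paper is not included.

```python
import numpy as np

def belief_update(b, a, o, T, O):
    """One step of the belief MDP underlying a POMDP.

    b : current belief over states, shape (S,)
    a : action index
    o : observed observation index
    T : transition probabilities, T[a, s, s'] = P(s' | s, a)
    O : observation probabilities, O[a, s', o] = P(o | s', a)
    Returns the updated belief and the probability of observing o.
    """
    pred = b @ T[a]                      # predictive state distribution
    unnorm = pred * O[a][:, o]           # weight by observation likelihood
    p_obs = unnorm.sum()
    if p_obs == 0.0:
        raise ValueError("observation has zero probability under this belief")
    return unnorm / p_obs, p_obs

# Tiny placeholder POMDP: 2 states, 1 action, 2 observations.
T = np.array([[[0.9, 0.1], [0.2, 0.8]]])        # T[a, s, s']
O = np.array([[[0.8, 0.2], [0.3, 0.7]]])        # O[a, s', o]
b0 = np.array([0.5, 0.5])
b1, p = belief_update(b0, a=0, o=0, T=T, O=O)
print(b1, p)
```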

Continuous determinantal point processes (DPPs) are a class of repulsive point processes on $\mathbb{R}^d$ with many statistical applications. Although an explicit expression of their density is known, it is too complicated to be used directly for maximum likelihood estimation. In the stationary case, an approximation using Fourier series has been suggested, but it is limited to rectangular observation windows and no theoretical results support it. In this contribution, we investigate a different way to approximate the likelihood by looking at its asymptotic behaviour as the observation window grows towards $\mathbb{R}^d$. This new approximation is not limited to rectangular windows, is faster to compute than the previous one, does not require any tuning parameter, and comes with some theoretical justification. Moreover, it provides an explicit formula for estimating the asymptotic variance of the associated estimator. Its performance is assessed in a simulation study on standard parametric models on $\mathbb{R}^d$ and compares favourably with common alternative estimation methods for continuous DPPs.

Much of the theory for the lasso in the linear model $Y = X \beta^* + \varepsilon$ hinges on the quantity $2 \| X^\top \varepsilon \|_{\infty} / n$, which we call the lasso's effective noise. Among other things, the effective noise plays an important role in finite-sample bounds for the lasso, the calibration of the lasso's tuning parameter, and inference on the parameter vector $\beta^*$. In this paper, we develop a bootstrap-based estimator of the quantiles of the effective noise. The estimator is fully data-driven, that is, does not require any additional tuning parameters. We equip our estimator with finite-sample guarantees and apply it to tuning parameter calibration for the lasso and to high-dimensional inference on the parameter vector $\beta^*$.
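
As a rough sketch of this kind of bootstrap estimator (the exact scheme of the paper may differ), the following uses residuals from a pilot lasso fit together with Gaussian multipliers to approximate quantiles of the effective noise $2 \| X^\top \varepsilon \|_{\infty} / n$; the pilot penalty, the number of bootstrap draws, and the toy data are assumptions.

```python
import numpy as np
from sklearn.linear_model import Lasso

def effective_noise_quantile(X, y, level=0.95, n_boot=1000, pilot_alpha=0.1,
                             seed=None):
    """Multiplier-bootstrap quantile of 2 * ||X^T eps||_inf / n."""
    rng = np.random.default_rng(seed)
    n, _ = X.shape
    # Pilot lasso fit; its residuals stand in for the unobserved noise.
    pilot = Lasso(alpha=pilot_alpha).fit(X, y)
    resid = y - pilot.predict(X)
    stats = np.empty(n_boot)
    for b in range(n_boot):
        e = rng.standard_normal(n)                   # Gaussian multipliers
        stats[b] = 2.0 * np.max(np.abs(X.T @ (e * resid))) / n
    return np.quantile(stats, level)

# Toy data (assumed): sparse linear model with Gaussian design and noise.
rng = np.random.default_rng(1)
n, p = 200, 500
X = rng.standard_normal((n, p))
beta = np.zeros(p); beta[:5] = 1.0
y = X @ beta + rng.standard_normal(n)

lam = effective_noise_quantile(X, y, level=0.95, seed=2)
# The quantile can serve as a penalty level, up to the scaling convention
# of the particular lasso objective being used.
print("0.95-quantile of effective noise:", lam)
```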

A method of Sequential Log-Convex Programming (SLCP) is constructed that exploits the log-convex structure present in many engineering design problems. The mathematical structure of Geometric Programming (GP) is combined with the ability of Sequential Quadratic Programming (SQP) to accommodate a wide range of objective and constraint functions, resulting in a practical algorithm that can be adopted with little to no modification of existing design practices. Three test problems are considered to demonstrate the SLCP algorithm, comparing it with SQP and the modified Logspace Sequential Quadratic Programming (LSQP). In these cases, SLCP shows up to a 77% reduction in the number of iterations compared to SQP, and an 11% reduction compared to LSQP. The airfoil analysis code XFOIL is integrated into one of the case studies to show how SLCP can be used to evolve the fidelity of design problems that have initially been modeled as GP-compatible. Finally, a methodology for design based on GP and SLCP is briefly discussed.
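
For orientation, the log-convex (GP) structure exploited here is the standard change of variables $x_j = e^{u_j}$: a monomial $c\,x_1^{a_1}\cdots x_n^{a_n}$ becomes $\exp(\log c + a^\top u)$, and a posynomial constraint becomes a convex log-sum-exp constraint,
\[
\sum_{k=1}^{K} c_k\, x_1^{a_{1k}} \cdots x_n^{a_{nk}} \le 1
\quad\Longleftrightarrow\quad
\log \sum_{k=1}^{K} \exp\big(\log c_k + a_k^\top u\big) \le 0 .
\]
This is textbook GP material; the specific SLCP update built on top of it is not reproduced in this sketch.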

For the approximation and simulation of twofold iterated stochastic integrals and the corresponding L\'{e}vy areas w.r.t. a multi-dimensional Wiener process, we review four algorithms based on a Fourier series approach. In particular, the very efficient algorithm due to Wiktorsson and a newly proposed algorithm due to Mrongowius and R\"ossler are considered. To put recent advances into context, we analyse the four Fourier-based algorithms in a unified framework to highlight differences and similarities in their derivation. A comparison of theoretical properties is complemented by a numerical simulation that reveals the order of convergence of each algorithm. Further, concrete instructions for the choice of the optimal algorithm and parameters for the simulation of solutions of stochastic (partial) differential equations are given. Additionally, we provide advice for an efficient implementation of the considered algorithms and have incorporated these insights into an open-source toolbox that is freely available for both the Julia and MATLAB programming languages. The performance of this toolbox is analysed by comparing it to some existing implementations, where we observe a significant speed-up.
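
For orientation, the objects being approximated are $I_{ij}(h) = \int_0^h \int_0^s \mathrm{d}W_i(r)\,\mathrm{d}W_j(s)$ and the L\'{e}vy areas $A_{ij} = \tfrac{1}{2}(I_{ij} - I_{ji})$. The sketch below generates them by brute-force refinement of the Wiener path; this is a naive reference, not one of the four Fourier-based algorithms reviewed, and the step size and refinement level are arbitrary choices.

```python
import numpy as np

def iterated_integrals_bruteforce(m, h, n_sub=10_000, seed=None):
    """Sample the Wiener increment over [0, h], the twofold iterated Ito
    integrals I[i, j] ~ int_0^h W_i(s) dW_j(s) (left-point rule on a fine
    grid), and the Levy areas A = (I - I^T) / 2."""
    rng = np.random.default_rng(seed)
    dW = np.sqrt(h / n_sub) * rng.standard_normal((n_sub, m))
    W = np.vstack([np.zeros(m), np.cumsum(dW, axis=0)])[:-1]   # left endpoints
    I = W.T @ dW                       # I[i, j] ~ int_0^h W_i(s) dW_j(s)
    A = 0.5 * (I - I.T)                # Levy areas
    return dW.sum(axis=0), I, A

DW, I, A = iterated_integrals_bruteforce(m=2, h=0.01, seed=0)
# Ito consistency check: for i != j, I[i, j] + I[j, i] should be close
# to DW_i * DW_j up to the discretization error.
print(DW, A[0, 1], I[0, 1] + I[1, 0] - DW[0] * DW[1])
```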

An informative measurement is the most efficient way to gain information about an unknown state. We give a first-principles derivation of a general-purpose dynamic programming algorithm that returns an optimal sequence of informative measurements by sequentially maximizing the entropy of possible measurement outcomes. This algorithm can be used by an autonomous agent or robot to decide where best to measure next, planning a path corresponding to an optimal sequence of informative measurements. The algorithm is applicable to states and controls that are continuous or discrete, and to agent dynamics that are either stochastic or deterministic, including Markov decision processes and Gaussian processes. Recent results from approximate dynamic programming and reinforcement learning, including on-line approximations such as rollout and Monte Carlo tree search, allow the measurement task to be solved in real time. The resulting solutions include non-myopic paths and measurement sequences that can generally outperform, sometimes substantially, commonly used greedy approaches. This is demonstrated for a global search problem, where on-line planning with an extended local search is found to reduce the number of measurements in the search by approximately half. A variant of the algorithm for Gaussian processes is derived for active sensing.
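
A one-step (greedy) version of the entropy criterion, picking the measurement whose outcome distribution has maximal entropy under the current belief, can be sketched as follows; the hidden-state space, sensor models, and belief are illustrative placeholders, and the full dynamic-programming and rollout machinery is not shown.

```python
import numpy as np

def outcome_entropy(belief, likelihood):
    """Entropy of the outcome distribution p(y) = sum_x p(x) p(y | x, m).

    belief     : p(x), shape (n_states,)
    likelihood : p(y | x, m), shape (n_states, n_outcomes)
    """
    p_y = belief @ likelihood
    p_y = p_y[p_y > 0]
    return -np.sum(p_y * np.log(p_y))

def greedy_measurement(belief, measurements):
    """Index of the candidate measurement maximizing outcome entropy."""
    return int(np.argmax([outcome_entropy(belief, L) for L in measurements]))

# Toy search problem (assumed): 4 hidden locations, two candidate sensors
# that each report detect / no-detect with different error rates.
belief = np.array([0.4, 0.3, 0.2, 0.1])
sensor_A = np.array([[0.9, 0.1], [0.1, 0.9], [0.1, 0.9], [0.1, 0.9]])  # covers cell 0
sensor_B = np.array([[0.1, 0.9], [0.1, 0.9], [0.9, 0.1], [0.9, 0.1]])  # covers cells 2-3
print("greedy choice:", greedy_measurement(belief, [sensor_A, sensor_B]))
```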

We address the solution of large-scale Bayesian optimal experimental design (OED) problems governed by partial differential equations (PDEs) with infinite-dimensional parameter fields. The OED problem seeks to find sensor locations that maximize the expected information gain (EIG) in the solution of the underlying Bayesian inverse problem. Computation of the EIG is usually prohibitive for PDE-based OED problems. To make the evaluation of the EIG tractable, we approximate the (PDE-based) parameter-to-observable map with a derivative-informed projected neural network (DIPNet) surrogate, which exploits the geometry, smoothness, and intrinsic low-dimensionality of the map using a small and dimension-independent number of PDE solves. The surrogate is then deployed within a greedy algorithm-based solution of the OED problem such that no further PDE solves are required. We analyze the EIG approximation error in terms of the generalization error of the DIPNet, and demonstrate the efficiency and accuracy of the method via numerical experiments involving inverse scattering and inverse reactive transport.
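
For intuition about the greedy OED step, the following sketch performs greedy sensor selection in a linear-Gaussian toy problem, where the EIG of a sensor subset $S$ has the closed form $\tfrac{1}{2}\log\det(I + \sigma^{-2} G_S \Gamma_{\mathrm{pr}} G_S^\top)$. The closed form merely stands in for the DIPNet surrogate of the PDE-based map; the forward matrix $G$, prior covariance, noise level, and budget are assumptions.

```python
import numpy as np

def eig_linear_gaussian(G_S, prior_cov, noise_var):
    """Expected information gain for observations y = G_S x + noise."""
    m = G_S.shape[0]
    M = np.eye(m) + (G_S @ prior_cov @ G_S.T) / noise_var
    return 0.5 * np.linalg.slogdet(M)[1]

def greedy_sensor_selection(G, prior_cov, noise_var, budget):
    """Greedily add the sensor row of G that most increases the EIG."""
    chosen = []
    for _ in range(budget):
        candidates = [i for i in range(G.shape[0]) if i not in chosen]
        gains = [eig_linear_gaussian(G[chosen + [i]], prior_cov, noise_var)
                 for i in candidates]
        chosen.append(candidates[int(np.argmax(gains))])
    return chosen

# Toy problem (assumed): 50 candidate sensors observing a 20-dim parameter.
rng = np.random.default_rng(0)
G = rng.standard_normal((50, 20))
prior_cov = np.eye(20)
print(greedy_sensor_selection(G, prior_cov, noise_var=0.1, budget=5))
```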

In the present paper, non-convex multi-objective parameter optimization problems are considered which are governed by elliptic parametrized partial differential equations (PDEs). To solve these problems numerically, the Pascoletti-Serafini scalarization is applied and the resulting scalar optimization problems are solved by an augmented Lagrangian method. However, due to the PDE constraints, the numerical solution is very expensive, so the model is reduced using the reduced basis (RB) method. The quality of the RB approximation is ensured by a trust-region strategy that does not require any offline procedure; the RB functions are computed by a greedy algorithm. Moreover, convergence of the proposed method is guaranteed. Numerical examples illustrate the efficiency of the proposed solution technique.
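
For reference, the Pascoletti-Serafini scalarization in its standard form replaces the vector objective $f = (f_1,\dots,f_k)$ by a parametric scalar problem: given a reference point $a \in \mathbb{R}^k$ and a direction $r \in \mathbb{R}^k$ with $r_i > 0$, one solves
\[
\min_{(t,x) \in \mathbb{R} \times X_{\mathrm{ad}}} t
\quad \text{subject to} \quad
f_i(x) \le a_i + t\, r_i, \quad i = 1,\dots,k,
\]
and Pareto-optimal points are traced out by varying $(a, r)$. The augmented Lagrangian treatment and the RB trust-region safeguards of the paper are not reproduced in this sketch.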

Across research disciplines, cluster randomized trials (CRTs) are commonly implemented to evaluate interventions delivered to groups of participants, such as communities and clinics. Despite advances in the design and analysis of CRTs, several challenges remain. First, there are many possible ways to specify the intervention effect (e.g., at the individual level or at the cluster level). Second, the theoretical and practical performance of common methods for CRT analysis remains poorly understood. Here, we use causal models to formally define an array of causal effects as summary measures of counterfactual outcomes. Next, we provide a comprehensive overview of well-known CRT estimators, including the t-test and generalized estimating equations (GEE), as well as less well-known methods, including augmented GEE and targeted maximum likelihood estimation (TMLE). In finite-sample simulations, we illustrate the performance of these estimators and the importance of effect specification, especially when cluster size varies. Finally, our application to data from the Preterm Birth Initiative (PTBi) study demonstrates the real-world importance of selecting an analytic approach corresponding to the research question. Given its flexibility to estimate a variety of effects and its ability to adaptively adjust for covariates for precision gains while maintaining Type I error control, we conclude that TMLE is a promising tool for CRT analysis.
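
As a concrete baseline among the estimators compared, the unadjusted cluster-level analysis reduces each cluster to its mean outcome and compares arms with a two-sample t-test. The sketch below assumes a simple long-format dataset with columns for cluster id, treatment arm, and outcome (column names and the simulated data are placeholders, not the PTBi data).

```python
import numpy as np
import pandas as pd
from scipy import stats

def cluster_level_ttest(df, cluster="cluster", arm="arm", outcome="y"):
    """Unadjusted cluster-level analysis: aggregate to cluster means,
    then compare arms with a Welch two-sample t-test."""
    means = df.groupby([cluster, arm])[outcome].mean().reset_index()
    treated = means.loc[means[arm] == 1, outcome]
    control = means.loc[means[arm] == 0, outcome]
    effect = treated.mean() - control.mean()
    t_stat, p_val = stats.ttest_ind(treated, control, equal_var=False)
    return effect, t_stat, p_val

# Toy CRT (assumed): 20 clusters of varying size, cluster-randomized arms.
rng = np.random.default_rng(0)
rows = []
for c in range(20):
    arm = c % 2
    n_c = rng.integers(10, 60)                    # varying cluster sizes
    cluster_effect = rng.normal(0.0, 0.5)
    y = rng.normal(0.3 * arm + cluster_effect, 1.0, size=n_c)
    rows.append(pd.DataFrame({"cluster": c, "arm": arm, "y": y}))
df = pd.concat(rows, ignore_index=True)
print(cluster_level_ttest(df))
```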

The quadrature error associated with a regular quadrature rule for the evaluation of a layer potential increases rapidly when the evaluation point approaches the surface and the integral becomes nearly singular. Error estimates are needed to determine when the accuracy is insufficient and a more costly special quadrature method should be utilized. The final results of this paper are such quadrature error estimates for the composite Gauss-Legendre rule and the global trapezoidal rule, when applied to evaluate layer potentials defined over smooth curved surfaces in $\mathbb{R}^3$. The estimates have no unknown coefficients and can be efficiently evaluated given the discretization of the surface, invoking a local one-dimensional root-finding procedure. They are derived starting with integrals over curves, using complex analysis involving contour integrals, residue calculus and branch cuts. By complexifying the parameter plane, the theory can also be used to derive estimates for curves in $\mathbb{R}^3$. These results are then used in the derivation of the estimates for integrals over surfaces. In this procedure, we also obtain error estimates for layer potentials evaluated over curves in $\mathbb{R}^2$. Such estimates, combined with a local root-finding procedure for their evaluation, were earlier derived for the composite Gauss-Legendre rule for layer potentials written in complex form [4]. This is here extended to provide quadrature error estimates for both complex and real formulations of layer potentials, for both the Gauss-Legendre and the trapezoidal rule. Numerical examples are given to illustrate the performance of the quadrature error estimates. The estimates for integration over curves are in many cases remarkably precise, and the estimates for curved surfaces in $\mathbb{R}^3$ are also sufficiently precise, at sufficiently low computational cost, to be practically useful.
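
The near-singularity effect being estimated is easy to reproduce: the trapezoidal rule applied to a single-layer-type integral over the unit circle loses accuracy as the target point approaches the curve. The sketch below compares it with an adaptive reference integration; the kernel, the unit density, and the geometry are illustrative choices, and the paper's actual error estimates are not implemented here.

```python
import numpy as np
from scipy.integrate import quad

def integrand(t, target):
    """Single-layer-type kernel log|x - y(t)| with unit density on the unit circle."""
    y = np.array([np.cos(t), np.sin(t)])
    return np.log(np.linalg.norm(target - y))

def trapezoid_layer(target, n):
    """Global trapezoidal rule with n equispaced nodes on the unit circle."""
    t = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
    y = np.stack([np.cos(t), np.sin(t)], axis=1)
    r = np.linalg.norm(target - y, axis=1)
    return (2 * np.pi / n) * np.sum(np.log(r))

n = 64
for d in [0.5, 0.2, 0.1, 0.05, 0.02]:            # distance from the curve
    target = np.array([1.0 + d, 0.0])
    ref, _ = quad(integrand, 0.0, 2 * np.pi, args=(target,), limit=200)
    approx = trapezoid_layer(target, n)
    print(f"d = {d:5.2f}   abs error = {abs(approx - ref):.2e}")
```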
