
We present an open digital ecosystem based on a web framework with a functional back-end server for user-centric energy retrofits. This data-driven web framework is proposed for building energy renovation benchmarking as part of an energy advisory service being developed for the Västerbotten region, Sweden. A four-tier architecture is developed and programmed to support interactive design and visualization by users via a web browser. Six data-driven methods are integrated into this framework as back-end server functions. Based on these functions, the decision-making system supports users in assessing whether a building needs to be renovated. Meanwhile, the influential factors (input values) from the databases that affect energy usage in buildings are analyzed quantitatively, i.e., via sensitivity analysis. The contributions of this open ecosystem platform for energy renovation are: 1) a systematic framework that can be applied to energy efficiency with data-driven approaches, 2) a user-friendly web-based platform that is easy and flexible to use, and 3) quantitative analysis integrated into the framework to rank the importance of the relevant factors. This computational framework is designed for stakeholders who seek preliminary information from an energy advisory service. The improved energy advisory service enabled by the developed platform can significantly reduce the cost of decision-making, enabling decision-makers to participate in such knowledge-intensive decisions in a deliberate and efficient manner. This work is funded by the AURORAL project, which integrates an open and interoperable digital platform, demonstrated through regional large-scale pilots in different European countries and interdisciplinary applications.
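To make the sensitivity-analysis step concrete, the sketch below ranks building factors by permutation importance with scikit-learn. The feature names, synthetic data, and model choice are illustrative assumptions, not the platform's actual back-end code.

```python
# Minimal sketch (not the AURORAL platform code): ranking influential factors
# for building energy use via permutation importance. Feature names and data
# are hypothetical placeholders.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
# Hypothetical building factors: construction year, heated area, U-value, indoor set-point.
X = np.column_stack([
    rng.integers(1940, 2020, n),   # construction year
    rng.uniform(60, 400, n),       # heated floor area [m^2]
    rng.uniform(0.2, 1.5, n),      # average U-value [W/m^2K]
    rng.uniform(18, 24, n),        # indoor set-point temperature [C]
])
# Synthetic annual energy use [kWh/m^2], for illustration only.
y = 50 + 80 * X[:, 2] + 2.5 * X[:, 3] - 0.2 * (X[:, 0] - 1940) + rng.normal(0, 5, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X_tr, y_tr)

result = permutation_importance(model, X_te, y_te, n_repeats=20, random_state=0)
names = ["construction_year", "heated_area", "u_value", "indoor_temp"]
for name, imp in sorted(zip(names, result.importances_mean), key=lambda t: -t[1]):
    print(f"{name:18s} importance = {imp:.3f}")
```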

Related content


We study optimal data pooling for shared learning in two common maintenance operations: condition-based maintenance and spare parts management. We consider a set of systems subject to Poisson input -- the degradation or demand process -- that are coupled through an a priori unknown rate. Decision problems involving these systems are high-dimensional Markov decision processes (MDPs) and hence notoriously difficult to solve. We present a decomposition result that reduces such an MDP to two-dimensional MDPs, enabling structural analyses and computations. Leveraging this decomposition, we (i) demonstrate that pooling data can lead to significant cost reductions compared to not pooling, and (ii) show that the optimal policy for the condition-based maintenance problem is a control limit policy, while for the spare parts management problem, it is an order-up-to level policy, both dependent on the pooled data.
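The order-up-to structure can be checked numerically on a toy model. The sketch below runs value iteration for a single-item periodic-review spare-parts problem with a known Poisson demand rate; all cost parameters are arbitrary assumptions, and the known rate deliberately sidesteps the paper's pooled learning of an unknown rate.

```python
# Minimal sketch (toy model, not the paper's decomposition): value iteration for
# a discounted periodic-review spare-parts problem with Poisson demand of known
# rate, used only to illustrate the order-up-to structure of the optimal policy.
import numpy as np
from scipy.stats import poisson

lam, S_max = 3.0, 15                      # demand rate, maximum stock level (assumptions)
c, h, p, gamma = 2.0, 1.0, 10.0, 0.95     # order, holding, shortage costs, discount

d = np.arange(0, 40)
pmf = poisson.pmf(d, lam)
pmf[-1] += 1.0 - pmf.sum()                # fold residual tail mass onto the last point

V = np.zeros(S_max + 1)
for _ in range(2000):
    Q = np.full((S_max + 1, S_max + 1), np.inf)
    for s in range(S_max + 1):
        for level in range(s, S_max + 1):         # order up to `level`
            short = np.maximum(d - level, 0)
            nxt = np.maximum(level - d, 0)
            cost = c * (level - s) + pmf @ (h * nxt + p * short + gamma * V[nxt])
            Q[s, level] = cost
    V_new = Q.min(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-8:
        break
    V = V_new

policy = Q.argmin(axis=1)   # chosen post-order level per state; constant (the
print(policy)               # base-stock level) for all states below it
```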

Safe reinforcement learning (RL) with hard constraint guarantees is a promising optimal control direction for multi-energy management systems. It only requires the environment-specific constraint functions themselves a priori, and not a complete model. The project-specific upfront and ongoing engineering efforts are therefore still reduced, better representations of the underlying system dynamics can still be learnt, and modelling bias is kept to a minimum. However, even the constraint functions alone are not always trivial to accurately provide in advance, leading to potentially unsafe behaviour. In this paper, we present two novel advancements: (I) combining the OptLayer and SafeFallback methods, named OptLayerPolicy, to increase the initial utility while keeping a high sample efficiency and the possibility to formulate equality constraints; (II) introducing self-improving hard constraints, to increase the accuracy of the constraint functions as more and new data becomes available so that better policies can be learnt. Both advancements keep the constraint formulation decoupled from the RL formulation, so new (presumably better) RL algorithms can act as drop-in replacements. We have shown that, in a simulated multi-energy system case study, the initial utility is increased to 92.4% (OptLayerPolicy) compared to 86.1% (OptLayer) and that the policy after training is increased to 104.9% (GreyOptLayerPolicy) compared to 103.4% (OptLayer) - all relative to a vanilla RL benchmark. Although introducing surrogate functions into the optimisation problem requires special attention, we conclude that the newly presented GreyOptLayerPolicy method is the most advantageous.
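The core idea of an OptLayer-style safety layer is to project the policy's proposed action onto the constraint set before it reaches the environment. The sketch below does this with a small quadratic programme solved by SciPy rather than a differentiable optimisation layer, and the constraint data are hypothetical.

```python
# Minimal sketch (not the paper's OptLayerPolicy implementation): a generic
# safety layer that projects a proposed multi-energy dispatch action onto
# linear constraints by solving a small quadratic programme.
import numpy as np
from scipy.optimize import minimize

def safe_action(a_rl, A_ineq, b_ineq, lb, ub):
    """Return argmin ||a - a_rl||^2 s.t. A_ineq @ a <= b_ineq and lb <= a <= ub."""
    cons = [{"type": "ineq", "fun": lambda a: b_ineq - A_ineq @ a}]
    res = minimize(lambda a: np.sum((a - a_rl) ** 2),
                   x0=np.clip(a_rl, lb, ub),
                   method="SLSQP", bounds=list(zip(lb, ub)), constraints=cons)
    return res.x

# Example: two controllable power set-points whose sum may not exceed 5 kW.
a_proposed = np.array([4.0, 3.0])              # from the (possibly unsafe) RL policy
A, b = np.array([[1.0, 1.0]]), np.array([5.0])
lb, ub = np.array([0.0, 0.0]), np.array([5.0, 5.0])
print(safe_action(a_proposed, A, b, lb, ub))   # projected, constraint-satisfying action
```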

We collect robust proposals given in the field of regression models with heteroscedastic errors. Our motivation stems from the fact that the practitioner frequently faces the confluence of two phenomena in the context of data analysis: non-linearity and heteroscedasticity. The impact of heteroscedasticity on the precision of the estimators is well known; however, the conjunction of these two phenomena makes handling outliers more difficult. An iterative procedure to estimate the parameters of a heteroscedastic non-linear model is considered. The studied estimators combine weighted $MM$-regression estimators, to control the impact of high leverage points, with a robust method to estimate the parameters of the variance function.
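As a rough illustration of such an iterative scheme (not the paper's weighted MM-estimator), the sketch below alternates between a robust weighted fit of a non-linear mean function and a robust fit of a log-linear variance function to the residuals, using SciPy's soft-L1 loss as a stand-in robust loss; the model forms and data are assumptions.

```python
# Minimal sketch (generic scheme, not the paper's weighted MM-estimator):
# alternate between a robust weighted non-linear fit of the mean function and a
# robust fit of a log-linear variance function to the residuals.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(1)
x = np.linspace(0, 2, 200)
theta_true, tau_true = (1.0, 1.2), (-2.0, 1.0)
sigma = np.exp(tau_true[0] + tau_true[1] * x)
y = theta_true[0] * np.exp(theta_true[1] * x) + sigma * rng.standard_normal(x.size)
y[::25] += 5.0                                  # inject a few gross outliers

f = lambda th, x: th[0] * np.exp(th[1] * x)     # assumed mean function
theta, tau = np.array([1.0, 1.0]), np.array([0.0, 0.0])
for _ in range(10):
    w = 1.0 / np.exp(tau[0] + tau[1] * x)       # weights from the current variance fit
    theta = least_squares(lambda th: w * (y - f(th, x)), theta, loss="soft_l1").x
    r = y - f(theta, x)
    # robust log-linear fit of the scale: log|r| ~ tau0 + tau1 * x (up to a constant)
    tau = least_squares(lambda t: np.log(np.abs(r) + 1e-12) - (t[0] + t[1] * x),
                        tau, loss="soft_l1").x

print("mean parameters    :", theta)
print("variance parameters (intercept biased by a constant):", tau)
```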

Stress prediction in porous materials and structures is challenging due to the high computational cost associated with direct numerical simulations. Convolutional Neural Network (CNN) based architectures have recently been proposed as surrogates to approximate and extrapolate the solution of such multiscale simulations. These methodologies are usually limited to 2D problems due to the high computational cost of 3D voxel-based CNNs. We propose a novel geometric learning approach based on a Graph Neural Network (GNN) that efficiently deals with three-dimensional problems by performing convolutions over 2D surfaces only. Following our previous developments using pixel-based CNNs, we train the GNN to automatically add local fine-scale stress corrections to an inexpensively computed coarse stress prediction in the porous structure of interest. Our method is Bayesian and generates densities of stress fields, from which credible intervals may be extracted. As a second scientific contribution, we propose to improve the extrapolation ability of our network by deploying a strategy of online physics-based corrections. Specifically, we condition the posterior predictions of our probabilistic model to satisfy partial equilibrium at the microscale at the inference stage. This is done using an Ensemble Kalman algorithm, to ensure tractability of the Bayesian conditioning operation. We show that this innovative methodology allows us to alleviate the effect of undesirable biases observed in the outputs of the uncorrected GNN, and improves the accuracy of the predictions in general.
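The ensemble Kalman conditioning step can be sketched generically: given an ensemble of predicted fields and an operator returning an equilibrium residual, each member is nudged toward a zero-residual pseudo-observation. The residual operator, sizes, and noise level below are toy assumptions, not the paper's GNN pipeline or microscale operator.

```python
# Minimal sketch (generic, not the paper's GNN pipeline): an ensemble Kalman
# update that conditions an ensemble of predicted fields on a (here linear)
# equilibrium residual being close to zero.
import numpy as np

rng = np.random.default_rng(2)
n_dof, n_ens, n_con = 50, 100, 10

X = rng.normal(size=(n_dof, n_ens))                     # ensemble of stress predictions
B = rng.normal(size=(n_con, n_dof)) / np.sqrt(n_dof)    # toy linear residual operator
R = 1e-4 * np.eye(n_con)                                # small "observation" noise

# EnKF update towards the pseudo-observation B x = 0.
A = X - X.mean(axis=1, keepdims=True)                   # ensemble anomalies
Y = B @ X                                               # predicted residuals
Yc = Y - Y.mean(axis=1, keepdims=True)
C_xy = A @ Yc.T / (n_ens - 1)
C_yy = Yc @ Yc.T / (n_ens - 1) + R
K = C_xy @ np.linalg.inv(C_yy)                          # Kalman gain from sample covariances
X_post = X + K @ (0.0 - Y)                              # push each member toward B x ≈ 0

print("mean |residual| before:", np.abs(B @ X).mean())
print("mean |residual| after :", np.abs(B @ X_post).mean())
```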

The synthesis of information deriving from complex networks is a topic receiving increasing relevance in ecology and environmental sciences. In particular, the aggregation of multilayer networks, i.e. network structures formed by multiple interacting networks (the layers), constitutes a fast-growing field. In several environmental applications, the layers of a multilayer network are modelled as a collection of similarity matrices describing how similar pairs of biological entities are, based on different types of features (e.g. biological traits). The present paper first discusses two main techniques for combining the multi-layered information into a single network (the so-called monoplex), i.e. Similarity Network Fusion (SNF) and Similarity Matrix Average (SMA). Then, the effectiveness of the two methods is tested on a real-world dataset of the relative abundance of microbial species in the ecosystems of nine glaciers (four glaciers in the Alps and five in the Andes). A preliminary clustering analysis on the monoplexes obtained with different methods shows the emergence of a tightly connected community formed by species that are typical of cryoconite holes worldwide. Moreover, the weights assigned to different layers by the SMA algorithm suggest that two large South American glaciers (Exploradores and Perito Moreno) are structurally different from the smaller glaciers in both Europe and South America. Overall, these results highlight the importance of integration methods in the discovery of the underlying organizational structure of biological entities in multilayer ecological networks.
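For intuition, the sketch below builds a monoplex as a plain weighted average of per-layer similarity matrices and clusters it; this is a simplification, not the actual SNF or SMA algorithms, and the layer data and weights are arbitrary placeholders.

```python
# Minimal sketch (a plain weighted average, not the actual SNF or SMA
# algorithms): fuse per-layer similarity matrices into a single monoplex and
# cluster it. Layer data and weights are hypothetical.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(3)
n_species, n_layers = 30, 4

def random_similarity(n):
    """Symmetric similarity matrix with unit diagonal (placeholder for one trait layer)."""
    M = rng.uniform(size=(n, n))
    S = (M + M.T) / 2
    np.fill_diagonal(S, 1.0)
    return S

layers = [random_similarity(n_species) for _ in range(n_layers)]
weights = np.array([0.4, 0.3, 0.2, 0.1])      # layer weights (e.g. as returned by SMA); arbitrary here

monoplex = sum(w * S for w, S in zip(weights, layers))
distance = 1.0 - monoplex                     # turn similarity into a dissimilarity
np.fill_diagonal(distance, 0.0)

Z = linkage(squareform(distance, checks=False), method="average")
clusters = fcluster(Z, t=3, criterion="maxclust")
print("cluster labels:", clusters)
```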

This paper investigates the multiple testing problem for high-dimensional sparse binary sequences, motivated by the crowdsourcing problem in machine learning. We study the empirical Bayes approach for multiple testing on the high-dimensional Bernoulli model with a conjugate spike and uniform slab prior. We first show that the hard thresholding rule deduced from the posterior distribution is suboptimal. Consequently, the $\ell$-value procedure constructed using this posterior tends to be overly conservative in estimating the false discovery rate (FDR). We then propose two new procedures based on adjusted $\ell$-values and $q$-values to correct this issue. Sharp frequentist theoretical results are obtained, demonstrating that both procedures can effectively control the FDR under sparsity. Numerical experiments are conducted to validate our theory in finite samples. To the best of our knowledge, this work provides the first uniform FDR control result in multiple testing for high-dimensional sparse binary data.
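To fix ideas, the sketch below computes local false discovery rates ($\ell$-values) under a spike and uniform slab for Bernoulli counts with a known null weight, then applies a standard $q$-value style cut-off; it is a simplification and does not reproduce the paper's adjusted $\ell$-value construction.

```python
# Minimal sketch (simplified, not the paper's adjusted ell-value procedure):
# posterior null probabilities under a Bernoulli spike and uniform slab, and a
# q-value style cut-off controlling the estimated FDR at level alpha.
import numpy as np
from scipy.stats import binom

rng = np.random.default_rng(4)
m, n, p0, w, alpha = 2000, 20, 0.05, 0.9, 0.1   # tests, trials, spike, null weight, FDR level

# Simulate sparse signals: most coordinates are "null" with success probability p0.
is_null = rng.uniform(size=m) < w
p = np.where(is_null, p0, rng.uniform(size=m))
x = rng.binomial(n, p)

f0 = binom.pmf(x, n, p0)          # spike (null) likelihood
f1 = np.full(m, 1.0 / (n + 1))    # uniform-slab marginal of Binomial(n, p)
ell = w * f0 / (w * f0 + (1 - w) * f1)          # local FDR ("ell-value")

order = np.argsort(ell)
cum_mean = np.cumsum(ell[order]) / np.arange(1, m + 1)   # estimated FDR of the top-k set
k = np.max(np.nonzero(cum_mean <= alpha)[0]) + 1 if np.any(cum_mean <= alpha) else 0
reject = np.zeros(m, dtype=bool)
reject[order[:k]] = True

fdp = np.sum(reject & is_null) / max(np.sum(reject), 1)
print(f"rejections: {reject.sum()}, false discovery proportion: {fdp:.3f}")
```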

Today, digital identity management for individuals is either inconvenient and error-prone or creates undesirable lock-in effects and violates privacy and security expectations. These shortcomings inhibit the digital transformation in general and seem particularly concerning in the context of novel applications such as access control for decentralized autonomous organizations and identification in the Metaverse. Decentralized or self-sovereign identity (SSI) aims to offer a solution to this dilemma by empowering individuals to manage their digital identity through machine-verifiable attestations stored in a "digital wallet" application on their edge devices. However, when presented to a relying party, these attestations typically reveal more attributes than required and allow tracking end users' activities. Several academic works and practical solutions exist to reduce or avoid such excessive information disclosure, from simple selective disclosure to data-minimizing anonymous credentials based on zero-knowledge proofs (ZKPs). We first demonstrate that the SSI solutions that are currently built with anonymous credentials still lack essential features such as scalable revocation, certificate chaining, and integration with secure elements. We then argue that general-purpose ZKPs in the form of zk-SNARKs can appropriately address these pressing challenges. We describe our implementation and conduct performance tests on different edge devices to illustrate that the performance of zk-SNARK-based anonymous credentials is already practical. We also discuss further advantages that general-purpose ZKPs can easily provide for digital wallets, for instance, to create "designated verifier presentations" that facilitate new design options for digital identity infrastructures that previously were not accessible because of the threat of man-in-the-middle attacks.

At least two different approaches exist to define and solve statistical models for the analysis of economic systems: the typical econometric one, which interprets the Gravity Model specification as the expected link weight of an arbitrary probability distribution, and the one rooted in statistical physics, which constructs maximum-entropy distributions constrained to satisfy certain network properties. In two recent companion papers, these have been successfully integrated within the framework induced by the constrained minimisation of the Kullback-Leibler divergence: specifically, two broad classes of models have been devised, the integrated and the conditional ones, defined by different probabilistic rules to place links, load them with weights and turn them into proper econometric prescriptions. Still, the recipes adopted by the two approaches to estimate the parameters entering the definition of each model differ. In econometrics, a likelihood that decouples the binary and weighted parts of a model, treating a network as deterministic, is typically maximised; to restore its random character, two alternatives exist: either solving the likelihood maximisation on each configuration of the ensemble and averaging the parameters afterwards, or averaging the likelihood function and maximising the latter. The difference between these approaches lies in the order in which the operations of averaging and maximisation are taken, a difference that is reminiscent of the quenched and annealed ways of averaging out the disorder in spin glasses. The results of the present contribution, devoted to comparing these recipes in the case of continuous, conditional network models, indicate that the annealed estimation recipe represents the best alternative to the deterministic one.
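The quenched/annealed distinction can be made concrete on a toy model of exponentially distributed link weights, where the maximum-likelihood estimate has a closed form; the sketch below is only an illustration of the two averaging orders, not the paper's network models.

```python
# Minimal sketch (toy, not the paper's network models): quenched vs annealed
# parameter estimation for exponentially distributed link weights, where the
# maximum-likelihood estimate is available in closed form.
import numpy as np

rng = np.random.default_rng(5)
beta_true, n_links, n_configs = 2.0, 50, 200

# Ensemble of weighted configurations drawn from the same Exponential(beta) law.
W = rng.exponential(scale=1.0 / beta_true, size=(n_configs, n_links))

# Quenched: maximise the likelihood on each configuration, then average the estimates.
beta_quenched = np.mean(n_links / W.sum(axis=1))

# Annealed: average the log-likelihood over the ensemble, then maximise once.
# For the exponential model this amounts to using the ensemble-averaged total weight.
beta_annealed = n_links / W.sum(axis=1).mean()

print(f"true beta             : {beta_true:.3f}")
print(f"quenched (avg of MLEs): {beta_quenched:.3f}")   # upward biased for small samples
print(f"annealed (MLE of avg) : {beta_annealed:.3f}")
```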

This paper focuses on the construction of accurate and predictive data-driven reduced models of large-scale numerical simulations with complex dynamics and sparse training data sets. In these settings, standard, single-domain approaches may be too inaccurate or may overfit and hence generalize poorly. Moreover, processing large-scale data sets typically requires significant memory and computing resources which can render single-domain approaches computationally prohibitive. To address these challenges, we introduce a domain decomposition formulation into the construction of a data-driven reduced model. In doing so, the basis functions used in the reduced model approximation become localized in space, which can increase the accuracy of the domain-decomposed approximation of the complex dynamics. The decomposition furthermore reduces the memory and computing requirements to process the underlying large-scale training data set. We demonstrate the effectiveness and scalability of our approach in a large-scale three-dimensional unsteady rotating detonation rocket engine simulation scenario with over $75$ million degrees of freedom and a sparse training data set. Our results show that compared to the single-domain approach, the domain-decomposed version reduces both the training and prediction errors for pressure by up to $13 \%$ and up to $5\%$ for other key quantities, such as temperature, and fuel and oxidizer mass fractions. Lastly, our approach decreases the memory requirements for processing by almost a factor of four, which in turn reduces the computing requirements as well.
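The effect of localising the basis can be previewed on a toy snapshot matrix: the sketch below compares a single global POD basis with per-subdomain POD bases of the same total size. The 1D synthetic data and subdomain split are assumptions and bear no relation to the engine dataset.

```python
# Minimal sketch (toy data, not the rotating detonation engine study): compare a
# single global POD basis with per-subdomain ("domain decomposed") POD bases of
# the same total size on a snapshot matrix.
import numpy as np

rng = np.random.default_rng(6)
n_dof, n_snap, n_sub, r = 400, 40, 4, 5     # dofs, snapshots, subdomains, modes per subdomain

# Synthetic snapshots with spatially localised features.
x = np.linspace(0, 1, n_dof)
t = np.linspace(0, 1, n_snap)
S = sum(np.exp(-((x[:, None] - c) ** 2) / 0.002) * np.sin(2 * np.pi * (k + 1) * t[None, :])
        for k, c in enumerate(np.linspace(0.1, 0.9, 8)))

def pod_residual(snapshots, rank):
    """Frobenius norm of the POD reconstruction residual at the given rank."""
    U, _, _ = np.linalg.svd(snapshots, full_matrices=False)
    Ur = U[:, :rank]
    return np.linalg.norm(snapshots - Ur @ (Ur.T @ snapshots))

# Global basis with n_sub * r modes.
err_global = pod_residual(S, n_sub * r) / np.linalg.norm(S)

# Localised bases: r modes on each of the n_sub spatial blocks.
blocks = np.array_split(np.arange(n_dof), n_sub)
err_local = np.sqrt(sum(pod_residual(S[idx, :], r) ** 2 for idx in blocks)) / np.linalg.norm(S)

print(f"global POD relative error ({n_sub * r} modes): {err_global:.3e}")
print(f"local  POD relative error ({n_sub}x{r} modes): {err_local:.3e}")
```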

Deep learning constitutes a recent, modern technique for image processing and data analysis, with promising results and large potential. As deep learning has been successfully applied in various domains, it has recently also entered the domain of agriculture. In this paper, we perform a survey of 40 research efforts that employ deep learning techniques, applied to various agricultural and food production challenges. We examine the particular agricultural problems under study, the specific models and frameworks employed, the sources, nature and pre-processing of the data used, and the overall performance achieved according to the metrics used in each work under study. Moreover, we study comparisons of deep learning with other existing popular techniques, with respect to differences in classification or regression performance. Our findings indicate that deep learning provides high accuracy, outperforming existing commonly used image processing techniques.
