
Simulated Moving Bed (SMB) chromatography is a well-known technique for the resolution of several high-value-added compounds. Parameter identification and model topology definition are arduous when dealing with complex systems such as an SMB unit. Moreover, the large number of experiments required can make identification an expensive and lengthy process. Hence, this work proposes a novel methodology that estimates the parameters, screens the most suitable topology of the model's sink-source term (defined by the adsorption isotherm equation), and determines the minimum number of experiments necessary to identify the model. To this end, a nested-loop optimization problem is proposed with three levels, one per goal: parameter estimation; topology screening through isotherm selection; and determination of the minimum number of experiments needed to yield a precise model. The proposed methodology emulates a real scenario by introducing noise into the data and using a Software-in-the-Loop (SIL) approach. Data reconciliation and uncertainty evaluation add robustness, precision, and reliability to the parameter estimates. The methodology is validated against experimental data from the literature that were held out from the samples used for parameter estimation, following a cross-validation scheme. The results corroborate that trustworthy parameter estimation can be carried out directly on an SMB unit with minimal prior knowledge of the system.
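
As a rough illustration of the nested-loop structure (all function names, the candidate isotherms, and the precision target below are hypothetical stand-ins, not the authors' implementation; a real inner level would call a full SMB column simulator):

```python
import numpy as np
from collections import namedtuple
from scipy.optimize import minimize

Experiment = namedtuple("Experiment", ["inputs", "outputs"])

def langmuir(c, qmax, b):            # candidate isotherm (topology 1)
    return qmax * b * c / (1.0 + b * c)

def linear_isotherm(c, k, _):        # candidate isotherm (topology 2)
    return k * c

def simulate_smb(isotherm, theta, inputs):
    """Toy stand-in for a full SMB simulator: a real unit solves the column
    PDEs; here the isotherm is simply evaluated on the feed concentrations."""
    return isotherm(inputs, *theta)

def fit_parameters(isotherm, experiments):
    """Innermost level: least-squares parameter estimation for one topology."""
    def sse(theta):
        return sum(np.sum((simulate_smb(isotherm, theta, e.inputs) - e.outputs) ** 2)
                   for e in experiments)
    res = minimize(sse, x0=np.ones(2), method="Nelder-Mead")
    return res.x, res.fun

def identify_model(pool, isotherms=(langmuir, linear_isotherm), target=1e-2):
    """Middle/outer levels: screen topologies, growing the experiment set
    until one topology reaches the (assumed) precision target."""
    for n in range(2, len(pool) + 1):
        scored = [(fit_parameters(iso, pool[:n]), iso) for iso in isotherms]
        (theta, sse), best = min(scored, key=lambda s: s[0][1])
        if sse < target:
            return best.__name__, theta, n
    return best.__name__, theta, len(pool)

# Tiny demo: noisy Langmuir-type data; the loop should pick the Langmuir
# topology with a small experiment subset.
rng = np.random.default_rng(0)
c = np.linspace(0.1, 2.0, 20)
pool = [Experiment(c, langmuir(c, 1.0, 2.0) + 0.01 * rng.standard_normal(20))
        for _ in range(4)]
print(identify_model(pool))
```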

Related content

In practical applications, data is used to make decisions in two steps: estimation and optimization. First, a machine learning model estimates parameters for a structural model relating decisions to outcomes. Second, a decision is chosen to optimize the structural model's predicted outcome as if its parameters were correctly estimated. Due to its flexibility and simple implementation, this "estimate-then-optimize" approach is often used for data-driven decision-making. Errors in the estimation step can lead estimate-then-optimize to make sub-optimal decisions that incur regret, i.e., a difference in value between the decision made and the best decision available with knowledge of the structural model's parameters. We provide a novel bound on this regret for smooth and unconstrained optimization problems. Using this bound, in settings where estimated parameters are linear transformations of sub-Gaussian random vectors, we provide a general procedure for experimental design to minimize the regret resulting from estimate-then-optimize. We demonstrate our approach on simple examples and a pandemic control application.
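
The regret notion is easy to make concrete on a toy quadratic objective; the sketch below (illustrative only, not the paper's bound or design procedure) shows how the estimator covariance induced by an experimental design determines expected regret:

```python
# For f(x; theta) = 0.5 * (x - theta)^T Q (x - theta), the optimal decision is
# x(theta) = theta, so regret(theta_hat) = 0.5 (theta_hat - theta)^T Q (theta_hat - theta).
import numpy as np

rng = np.random.default_rng(0)
Q = np.array([[4.0, 0.0], [0.0, 1.0]])   # curvature of the structural model
theta = np.array([1.0, -2.0])            # true parameters (unknown in practice)

# Two hypothetical designs yielding different estimator covariances: design A is
# accurate in the high-curvature coordinate, design B in the low-curvature one.
for name, cov in [("A", np.diag([0.01, 0.25])), ("B", np.diag([0.25, 0.01]))]:
    samples = rng.multivariate_normal(theta, cov, size=50_000)
    d = samples - theta
    print(name, 0.5 * np.einsum("ni,ij,nj->n", d, Q, d).mean())
# Expected regret equals 0.5 * trace(Q @ cov): about 0.145 for A versus 0.505
# for B, so a good design spends accuracy where the objective is most curved.
```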

Physical law learning is the ambitious attempt to automate the derivation of governing equations using machine learning techniques. The current literature, however, focuses solely on the development of methods to achieve this goal, and a theoretical foundation is at present missing. This paper shall thus serve as a first step toward a comprehensive theoretical framework for learning physical laws, aiming to provide reliability guarantees for such algorithms. One key problem is that the governing equations might not be uniquely determined by the given data. We study this problem in the common situation where a physical law is described by an ordinary or partial differential equation. For various classes of differential equations, we provide both necessary and sufficient conditions for a function to uniquely determine the differential equation governing the phenomenon. We then use our results to devise numerical algorithms that determine whether a function solves a differential equation uniquely. Finally, we provide extensive numerical experiments showing that our algorithms, combined with common approaches for learning physical laws, indeed make it possible to guarantee that a unique governing differential equation is learned, without assuming any knowledge about the function, thereby ensuring reliability.
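
A minimal numerical illustration of the uniqueness question, assuming a finite library of candidate terms (a simplified stand-in for the paper's algorithms): within the library, the fitted ODE is unique precisely when the library matrix evaluated along the trajectory has full column rank.

```python
import numpy as np

t = np.linspace(0.0, 4.0, 400)
u = np.exp(-t)                     # observed trajectory, here solving u' = -u
du = np.gradient(u, t)             # numerical derivative

# Candidate library: [1, u, u^2]; along u = exp(-t) these are independent.
Theta = np.column_stack([np.ones_like(u), u, u**2])
sigma = np.linalg.svd(Theta, compute_uv=False)
rank = np.sum(sigma > 1e-8 * sigma[0])
print("unique within library:", rank == Theta.shape[1])

xi, *_ = np.linalg.lstsq(Theta, du, rcond=None)
print("recovered coefficients:", np.round(xi, 3))   # ~ [0, -1, 0], i.e. u' = -u

# Counterexample: for the constant trajectory u = 1, the columns 1, u, u^2
# coincide, the rank drops to 1, and infinitely many ODEs fit the same data.
```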

Sparse decision trees are one of the most common forms of interpretable models. While recent advances have produced algorithms that fully optimize sparse decision trees for prediction, that work does not address policy design, because the algorithms cannot handle weighted data samples. Specifically, they rely on the discreteness of the loss function, which means that real-valued weights cannot be directly used. For example, none of the existing techniques produce policies that incorporate inverse propensity weighting on individual data points. We present three algorithms for efficient sparse weighted decision tree optimization. The first approach directly optimizes the weighted loss function; however, it tends to be computationally inefficient for large datasets. Our second approach, which scales more efficiently, transforms weights to integer values and uses data duplication to transform the weighted decision tree optimization problem into an unweighted (but larger) counterpart. Our third algorithm, which scales to much larger datasets, uses a randomized procedure that samples each data point with a probability proportional to its weight. We present theoretical bounds on the error of the two fast methods and show experimentally that these methods can be two orders of magnitude faster than the direct optimization of the weighted loss, without losing significant accuracy.
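
The second and third approaches are simple dataset transformations that let any unweighted sparse-tree optimizer be reused unchanged; the sketches below are hypothetical illustrations, not the authors' code:

```python
import numpy as np

def duplicate_by_weight(X, y, w, precision=10):
    """Approach 2: scale and round weights to integers, then duplicate each
    sample that many times, giving an unweighted but larger dataset."""
    counts = np.maximum(np.rint(np.asarray(w) * precision).astype(int), 0)
    idx = np.repeat(np.arange(len(y)), counts)
    return X[idx], y[idx]

def sample_by_weight(X, y, w, n_samples, seed=0):
    """Approach 3: draw each point with probability proportional to its weight,
    so the expected unweighted loss matches the weighted loss."""
    p = np.asarray(w, dtype=float)
    idx = np.random.default_rng(seed).choice(len(y), size=n_samples, p=p / p.sum())
    return X[idx], y[idx]

# E.g., with inverse propensity weights w, either transformed dataset can be
# passed to an off-the-shelf optimal sparse decision tree learner.
```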

Bayesian nonparametric mixture models are common for modeling complex data. While these models are well-suited for density estimation, their application for clustering has some limitations. Miller and Harrison (2014) proved posterior inconsistency in the number of clusters when the true number of clusters is finite for Dirichlet process and Pitman--Yor process mixture models. In this work, we extend this result to additional Bayesian nonparametric priors such as Gibbs-type processes and finite-dimensional representations of them. The latter include the Dirichlet multinomial process and the recently proposed Pitman--Yor and normalized generalized gamma multinomial processes. We show that mixture models based on these processes are also inconsistent in the number of clusters and discuss possible solutions. Notably, we show that a post-processing algorithm introduced by Guha et al. (2021) for the Dirichlet process extends to more general models and provides a consistent method to estimate the number of components.
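
The phenomenon is easy to probe empirically, for instance with scikit-learn's variational truncated DP mixture (a rough proxy only; the paper's results concern the exact posterior):

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(1)
for n in (200, 2000, 20000):
    X = rng.normal(size=(n, 1))               # data from one Gaussian: truth is 1 cluster
    dp = BayesianGaussianMixture(
        n_components=10,                      # truncation level
        weight_concentration_prior_type="dirichlet_process",
        max_iter=500, random_state=0).fit(X)
    occupied = len(np.unique(dp.predict(X)))  # clusters actually used
    print(n, occupied)
# If spurious small clusters persist as n grows, that mirrors the inconsistency;
# a post-processing step in the spirit of Guha et al. (2021), which merges
# negligible clusters, is what recovers a consistent estimate of the number
# of components.
```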

Model Predictive Control (MPC) is a well-established approach to solve infinite horizon optimal control problems. Since optimization over an infinite time horizon is generally infeasible, MPC determines a suboptimal feedback control by repeatedly solving finite time optimal control problems. Although MPC has been successfully used in many applications, applying MPC to large-scale systems -- arising, e.g., through discretization of partial differential equations -- requires the solution of high-dimensional optimal control problems and thus entails immense computational effort. We consider systems governed by parametrized parabolic partial differential equations and employ the reduced basis (RB) method as a low-dimensional surrogate model for the finite time optimal control problem. The reduced order optimal control serves as feedback control for the original large-scale system. We analyze the proposed RB-MPC approach by first developing a posteriori error bounds for the errors in the optimal control and associated cost functional. These bounds can be evaluated efficiently in an offline-online computational procedure and allow us to guarantee asymptotic stability of the closed-loop system using the RB-MPC approach in several practical scenarios. We also propose an adaptive strategy to choose the prediction horizon of the finite time optimal control problem. Numerical results are presented to illustrate the theoretical properties of our approach.
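
Schematically, the RB-MPC loop plans on a Galerkin-projected surrogate and applies only the first input to the full-order system; the toy below (a crude stand-in: linear dynamics, random placeholder snapshots, a one-step-greedy horizon solver, and no error bounds) illustrates that structure only:

```python
import numpy as np

rng = np.random.default_rng(2)
N, r, H = 200, 8, 10                        # full dim, reduced dim, horizon
A = 0.99 * np.eye(N) + 0.01 * rng.standard_normal((N, N)) / np.sqrt(N)
B = rng.standard_normal((N, 1))
snapshots = rng.standard_normal((N, 50))    # placeholder snapshot matrix (POD input)
V = np.linalg.svd(snapshots, full_matrices=False)[0][:, :r]   # reduced basis
Ar, Br = V.T @ A @ V, V.T @ B               # Galerkin-projected dynamics

def solve_horizon(xr):
    """Crude surrogate for the finite-horizon problem: at each step pick the
    input minimizing the next reduced-state norm; return the first input."""
    us = []
    for _ in range(H):
        u = np.linalg.lstsq(Br, -Ar @ xr, rcond=None)[0]
        xr = Ar @ xr + Br @ u
        us.append(u)
    return us[0]

x = rng.standard_normal(N)
for _ in range(100):                        # receding-horizon (MPC) loop
    u0 = solve_horizon(V.T @ x)             # plan on the low-dimensional model
    x = A @ x + (B @ u0).ravel()            # apply only the first input at full order
print("final full-state norm:", np.linalg.norm(x))
```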

We address the problem of sparse recovery using greedy compressed sensing recovery algorithms without explicit knowledge of the sparsity. Estimating the sparsity order is a crucial problem in many practical scenarios, e.g., wireless communications, where the exact sparsity order of the unknown channel may be unavailable a priori. In this paper we propose a new greedy algorithm, referred to as Multiple Choice Hard Thresholding Pursuit (MCHTP), which suitably modifies the popular hard thresholding pursuit (HTP) to iteratively recover the unknown sparse vector along with its sparsity order. We provide provable performance guarantees ensuring that MCHTP estimates the sparsity order exactly and recovers the unknown sparse vector exactly from noiseless measurements. The simulation results corroborate the theoretical findings, demonstrating that even with only a loose upper bound on the sparsity, MCHTP exhibits recovery performance almost identical to that of conventional HTP with exact sparsity knowledge. Furthermore, simulation results demonstrate a much lower computational complexity of MCHTP compared to other state-of-the-art techniques such as MSP.
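
For orientation, here is plain HTP together with a naive sparsity sweep that exploits the residual collapse at the true sparsity order under noiseless measurements (a baseline sketch; MCHTP's joint update is more refined than this wrapper):

```python
import numpy as np

def htp(Phi, y, k, iters=50):
    """Standard HTP: gradient step, keep the k largest entries, then
    re-fit by least squares on the selected support."""
    x = np.zeros(Phi.shape[1])
    for _ in range(iters):
        g = x + Phi.T @ (y - Phi @ x)          # gradient step (unit step size)
        S = np.argsort(np.abs(g))[-k:]          # hard thresholding
        x = np.zeros_like(x)
        x[S] = np.linalg.lstsq(Phi[:, S], y, rcond=None)[0]
    return x

def estimate_sparsity(Phi, y, k_max, tol=1e-6):
    """Sweep candidate sparsity orders; with noiseless measurements the
    residual collapses once k reaches the true sparsity order."""
    for k in range(1, k_max + 1):
        x = htp(Phi, y, k)
        if np.linalg.norm(y - Phi @ x) < tol * np.linalg.norm(y):
            return k, x
    return k_max, x

rng = np.random.default_rng(3)
m, n, s = 80, 256, 5
Phi = rng.standard_normal((m, n)) / np.sqrt(m)  # normalized Gaussian matrix
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
k_hat, x_hat = estimate_sparsity(Phi, Phi @ x_true, k_max=20)
print("estimated sparsity:", k_hat, "error:", np.linalg.norm(x_hat - x_true))
```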

The presence of outliers can significantly degrade the performance of ellipse fitting methods. We develop an ellipse fitting method that is robust to outliers based on the maximum correntropy criterion with variable center (MCC-VC), where a Laplacian kernel is used. For single ellipse fitting, we formulate a non-convex optimization problem to estimate the kernel bandwidth and center and divide it into two subproblems, each estimating one parameter. We design sufficiently accurate convex approximations to each subproblem such that computationally efficient closed-form solutions are obtained. The two subproblems are solved alternately until convergence. We also investigate coupled ellipses fitting. While existing multiple-ellipse fitting methods could be applied, we develop a coupled ellipses fitting method that exploits the special structure of the problem. Since the association between data points and ellipses is unknown, we introduce an association vector for each data point and formulate a non-convex mixed-integer optimization problem to estimate the data associations, which is approximately solved by relaxing it into a second-order cone program. Using the estimated data associations, we extend the proposed method to achieve the final coupled ellipses fitting. The proposed method significantly outperforms existing methods on both simulated data and real images.
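
The correntropy idea can be illustrated with a simplified IRLS variant of the algebraic fit, where Laplacian-kernel weights suppress points with large residuals (illustrative only; the paper instead derives closed-form convex approximations and also estimates the kernel center):

```python
import numpy as np

def fit_ellipse_mcc(x, y, bandwidth=0.05, iters=20):
    # Design matrix for the conic a1 x^2 + a2 xy + a3 y^2 + a4 x + a5 y + a6 = 0.
    D = np.column_stack([x**2, x * y, y**2, x, y, np.ones_like(x)])
    w = np.ones(len(x))
    for _ in range(iters):
        Dw = D * np.sqrt(w)[:, None]
        a = np.linalg.svd(Dw)[2][-1]      # min ||Dw a|| subject to ||a|| = 1
        r = np.abs(D @ a)                 # algebraic residuals
        w = np.exp(-r / bandwidth)        # Laplacian-kernel correntropy weights
    return a

# Demo: points on an ellipse plus gross outliers; outliers get weights near 0.
rng = np.random.default_rng(4)
t = rng.uniform(0, 2 * np.pi, 200)
x = 3 * np.cos(t) + 0.01 * rng.standard_normal(200)
y = 2 * np.sin(t) + 0.01 * rng.standard_normal(200)
x[:20] += rng.uniform(-5, 5, 20); y[:20] += rng.uniform(-5, 5, 20)  # outliers
print(np.round(fit_ellipse_mcc(x, y), 3))
```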

Simulation-based Bayesian inference (SBI) can be used to estimate the parameters of complex mechanistic models given observed model outputs without requiring access to explicit likelihood evaluations. A prime example of the application of SBI in neuroscience involves estimating the parameters governing the response dynamics of Hodgkin-Huxley (HH) models from electrophysiological measurements, by inferring a posterior over the parameters that is consistent with a set of observations. To this end, many SBI methods employ a set of summary statistics or scientifically interpretable features to estimate a surrogate likelihood or posterior. However, currently, there is no way to identify how much each summary statistic or feature contributes to reducing posterior uncertainty. To address this challenge, one could simply compare the posteriors with and without a given feature included in the inference process. However, for large or nested feature sets, this would necessitate repeatedly estimating the posterior, which is computationally expensive or even prohibitive. Here, we provide a more efficient approach based on the SBI method neural likelihood estimation (NLE): We show that one can marginalize the trained surrogate likelihood post-hoc before inferring the posterior to assess the contribution of a feature. We demonstrate the usefulness of our method by identifying the most important features for inferring parameters of an example HH neuron model. Beyond neuroscience, our method is generally applicable to SBI workflows in other scientific fields that rely on data features for inference.
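
The post-hoc marginalization idea is easiest to see with a toy Gaussian surrogate, where dropping a feature is exact marginalization (with a flow-based NLE surrogate, the marginal would instead be approximated, e.g. by Monte Carlo over the dropped feature); all numbers below are invented for illustration:

```python
import numpy as np

# Assumed trained surrogate: x | theta ~ N(M @ theta, S), with 3 summary features.
M = np.array([[1.0, 0.0], [0.5, 1.0], [0.2, 0.1]])
S = np.diag([0.1, 0.1, 2.0])            # feature 2 is noisy, hence uninformative
prior_prec = np.eye(2)                  # standard normal prior on theta

def posterior_cov(keep):
    """Gaussian posterior covariance using only the kept features; dropping a
    feature of a Gaussian is exact marginalization (delete its rows/columns)."""
    Mk, Sk = M[keep], S[np.ix_(keep, keep)]
    return np.linalg.inv(prior_prec + Mk.T @ np.linalg.inv(Sk) @ Mk)

full = posterior_cov([0, 1, 2])
for j in range(3):
    drop = [i for i in range(3) if i != j]
    gain = np.trace(posterior_cov(drop)) - np.trace(full)
    print(f"uncertainty added by dropping feature {j}: {gain:.4f}")
# Dropping feature 2 barely changes the posterior, so it contributes little
# information -- without ever re-running inference.
```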

The paper proposes a decoupled numerical scheme for the time-dependent Ginzburg-Landau equations under the temporal gauge. For the order parameter and the magnetic potential, the discrete scheme adopts the second-type Nédélec element and the linear element for spatial discretization, respectively, and a fully linearized backward Euler method and the first-order exponential time differencing method for temporal discretization, respectively. The maximum bound principle for the order parameter and the energy dissipation law are proved in the discrete sense for this finite-element-based scheme. This permits adaptive time stepping, which can significantly speed up long-time simulations compared to existing numerical schemes, especially for superconductors with complicated shapes. The error estimate is rigorously established in the fully discrete sense. Numerical examples verify the theoretical results of the proposed scheme and demonstrate the vortex motion of superconductors in an external magnetic field.
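
For reference, a commonly used nondimensional form of the system under the temporal gauge (the zero-electric-potential gauge) is the following, with order parameter $\psi$, magnetic potential $\mathbf{A}$, Ginzburg-Landau parameter $\kappa$, and applied field $\mathbf{H}$; the paper's exact setup may differ in notation:

```latex
\eta \frac{\partial \psi}{\partial t}
  + \Big(\tfrac{i}{\kappa}\nabla + \mathbf{A}\Big)^{2}\psi
  + \big(|\psi|^{2} - 1\big)\psi = 0,
\qquad
\frac{\partial \mathbf{A}}{\partial t}
  + \nabla \times \nabla \times \mathbf{A}
  + \operatorname{Re}\Big[\bar{\psi}\Big(\tfrac{i}{\kappa}\nabla + \mathbf{A}\Big)\psi\Big]
  = \nabla \times \mathbf{H}.
```

The two equations correspond to the two unknowns discretized above: the first (for $\psi$) is treated with the Nédélec-element/backward-Euler discretization, and the second (for $\mathbf{A}$) with the linear-element/exponential-time-differencing discretization.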

In this paper, we investigate the Gaussian graphical model inference problem in a novel setting that we call erose measurements, referring to irregularly measured or observed data. For graphs, this results in different node pairs having vastly different sample sizes, a situation that frequently arises in data integration, genomics, neuroscience, and sensor networks. Existing works characterize graph selection performance using the minimum pairwise sample size, which provides little insight for erosely measured data, and no existing inference method is applicable. We aim to fill this gap by proposing the first inference method that characterizes the different uncertainty levels over the graph caused by erose measurements, named GI-JOE (Graph Inference when Joint Observations are Erose). Specifically, we develop an edge-wise inference method and an affiliated FDR control procedure, where the variance of each edge depends on the sample sizes associated with the corresponding neighbors. We prove statistical validity under erose measurements, thanks to a careful localized edge-wise analysis that disentangles the dependencies across the graph. Finally, through simulation studies and a real neuroscience data example, we demonstrate the advantages of our inference methods for graph selection from erosely measured data.
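
The setting can be sketched as follows (illustration only: plain pairwise-complete correlations with Fisher-transform standard errors stand in for GI-JOE's edge-wise test statistic and FDR procedure):

```python
import numpy as np

rng = np.random.default_rng(5)
n, p = 500, 6
X = rng.standard_normal((n, p))
X[rng.random((n, p)) < rng.uniform(0.0, 0.8, size=p)] = np.nan  # erose pattern

obs = ~np.isnan(X)
N_pair = obs.T.astype(int) @ obs.astype(int)   # pairwise sample sizes n_jk
print("pairwise sample sizes range:", N_pair.min(), "-", N_pair.max())

# Pairwise-complete correlations with a per-edge normal approximation via the
# Fisher transform: se(atanh r_jk) ~ 1/sqrt(n_jk - 3), so edges backed by few
# joint observations get honestly wider uncertainty.
Xc = np.where(obs, X - np.nanmean(X, axis=0), 0.0)
C = (Xc.T @ Xc) / np.maximum(N_pair - 1, 1)
sd = np.sqrt(np.diag(C))
R = np.clip(C / np.outer(sd, sd), -0.999, 0.999)
z = np.arctanh(R) * np.sqrt(np.maximum(N_pair - 3, 1))
np.fill_diagonal(z, 0.0)
print("edge-wise z-scores:\n", np.round(z, 1))
```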
