
This paper considers the optimal boundary control of chemical systems described by advection-diffusion-reaction (ADR) equations. We use a discontinuous Galerkin finite element method (DG-FEM) for the spatial discretization of the governing partial differential equations, and the optimal control problem is discretized directly using multiple shooting. The temporal discretization and the corresponding sensitivity computations are performed with an explicit singly diagonally implicit Runge-Kutta (ESDIRK) method. ADR systems arise in process systems engineering, and their operation can potentially be improved by nonlinear model predictive control (NMPC). We demonstrate a numerical approach to the solution of their optimal control problems (OCPs) in a chromatography case study. Preparative liquid chromatography is an important downstream process in biopharmaceutical manufacturing. We show that multi-step elution trajectories for batch processes can be optimized for economic objectives, providing superior performance compared to classical gradient elution trajectories.
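
To make the multiple-shooting formulation concrete, the following minimal Python sketch discretizes a toy two-state ODE (standing in for a DG-FEM semi-discretized ADR system) with shooting nodes and continuity constraints. The dynamics, cost, and all names here are illustrative assumptions, not the paper's model, and a generic scipy integrator replaces the ESDIRK scheme.

import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize, NonlinearConstraint

nx, N, T = 2, 10, 1.0            # states, shooting intervals, horizon
dt = T / N

def f(t, x, u):                  # toy dynamics in place of the ADR semi-discretization
    return np.array([x[1], -x[0] - 0.5 * x[1] + u])

def shoot(x0, u):                # integrate one interval (piecewise-constant control)
    return solve_ivp(f, (0.0, dt), x0, args=(u,), rtol=1e-8).y[:, -1]

def unpack(z):                   # z = [controls, states at interior shooting nodes]
    return z[:N], z[N:].reshape(N, nx)

def defects(z):                  # continuity constraints linking the intervals
    u, X = unpack(z)
    prev, d = np.zeros(nx), []
    for k in range(N):
        d.append(shoot(prev, u[k]) - X[k])
        prev = X[k]
    return np.concatenate(d)

def cost(z):                     # placeholder objective: control effort + terminal target
    u, X = unpack(z)
    return dt * np.sum(u**2) + np.sum((X[-1] - np.array([1.0, 0.0]))**2)

res = minimize(cost, np.zeros(N + N * nx), method="SLSQP",
               constraints=NonlinearConstraint(defects, 0.0, 0.0))
u_opt, X_opt = unpack(res.x)     # optimal control moves and node states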

Related Content

Growth in system complexity increases the need for automated techniques dedicated to different log analysis tasks, such as Log-based Anomaly Detection (LAD). The latter has been widely addressed in the literature, mostly by means of a variety of deep learning techniques. Despite their many advantages, this focus on deep learning techniques is somewhat arbitrary, as traditional Machine Learning (ML) techniques may perform well in many cases, depending on the context and the datasets. In the same vein, semi-supervised techniques deserve the same attention as supervised techniques, since the former have clear practical advantages. Further, current evaluations mostly rely on the assessment of detection accuracy. However, this is not enough to decide whether a specific ML technique is suitable for addressing the LAD problem in a given context. Other aspects to consider include training and prediction times, as well as the sensitivity to hyperparameter tuning, which in practice matters to engineers. In this paper, we present a comprehensive empirical study in which we evaluate supervised and semi-supervised, traditional and deep ML techniques with respect to four evaluation criteria: detection accuracy, time performance, and the sensitivity of both detection accuracy and time performance to hyperparameter tuning. The experimental results show that supervised traditional and deep ML techniques fare similarly in terms of detection accuracy and prediction time. Moreover, the sensitivity analysis shows that, with respect to detection accuracy, supervised traditional ML techniques are overall less sensitive to hyperparameter tuning than deep learning techniques. Further, semi-supervised techniques yield significantly worse detection accuracy than supervised techniques.
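
As a toy illustration of the four criteria (not the paper's benchmark), the sketch below trains one traditional supervised technique on synthetic log-feature vectors and records detection accuracy, training and prediction times, and the spread of accuracy across a small hyperparameter sweep. The dataset, model, and grid are all placeholder assumptions.

import time
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 30))                  # synthetic event-count features
y = (X[:, 0] + 0.5 * X[:, 1] > 0.8).astype(int)  # synthetic anomaly labels
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

scores, fit_times, pred_times = [], [], []
for n_estimators in (50, 100, 200):              # small hyperparameter sweep
    clf = RandomForestClassifier(n_estimators=n_estimators, random_state=0)
    t0 = time.perf_counter(); clf.fit(Xtr, ytr)
    fit_times.append(time.perf_counter() - t0)
    t0 = time.perf_counter(); yp = clf.predict(Xte)
    pred_times.append(time.perf_counter() - t0)
    scores.append(f1_score(yte, yp))

print("detection accuracy (F1):", scores)
print("training / prediction times:", fit_times, pred_times)
# Sensitivity to tuning: spread of a criterion across the sweep.
print("F1 sensitivity:", max(scores) - min(scores))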

The exploration of network structures through the lens of graph theory has become a cornerstone in understanding complex systems across diverse fields. Identifying densely connected subgraphs within larger networks is crucial for uncovering functional modules in biological systems, cohesive groups within social networks, and critical paths in technological infrastructures. The most representative approach, the SM algorithm, cannot locate large subgraphs and therefore cannot identify dense subgraphs, while the SA algorithm previously used by researchers combines simulated annealing with efficient moves for the Markov chain. However, simulated annealing methods, including SA, are not guaranteed to locate the global optimum unless a logarithmic cooling schedule is used. To this end, our study introduces and evaluates the Stochastic Approximation Annealing (SAA) algorithm, which combines simulated annealing with the stochastic approximation Monte Carlo algorithm. The performance of SAA against the two other numerical algorithms, SM and SA, is examined in the context of identifying these critical subgraph structures, using simulated graphs with embedded cliques. We find that SAA outperforms both SA and SM in terms of 1) the number of iterations needed to find the densest subgraph, 2) the percentage of runs in which the algorithm finds a clique after 10,000 iterations, and 3) computation time. These promising results suggest that SAA could offer a robust tool for dissecting complex systems, potentially transforming how problems in interdisciplinary fields are approached.
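
For intuition, here is a bare-bones version of the plain simulated annealing baseline (the SA-style approach discussed above, not the SAA variant itself) applied to a graph with an embedded clique. The graph size, geometric cooling schedule, and swap move are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(1)
n, k = 100, 10
A = (rng.random((n, n)) < 0.1).astype(int)       # Erdos-Renyi background
A = np.triu(A, 1); A = A + A.T
A[np.ix_(range(k), range(k))] = 1 - np.eye(k, dtype=int)  # embed a k-clique

def density(S):                       # number of edges inside the candidate set
    idx = list(S)
    return A[np.ix_(idx, idx)].sum() // 2

S = set(rng.choice(n, size=k, replace=False))
best, best_d = set(S), density(S)
T = 2.0
for it in range(10_000):
    out = rng.choice(list(set(range(n)) - S))
    inn = rng.choice(list(S))
    S2 = (S - {inn}) | {out}          # swap move keeps |S| = k
    d, d2 = density(S), density(S2)
    if d2 >= d or rng.random() < np.exp((d2 - d) / T):
        S = S2
    if density(S) > best_d:
        best, best_d = set(S), density(S)
    T *= 0.999                        # geometric cooling (not logarithmic)
print(best_d, k * (k - 1) // 2)       # best density found vs. clique density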

Efficient parameter identification of electrochemical models is crucial for accurate monitoring and control of lithium-ion cells. This process becomes challenging when applied to complex models that rely on a considerable number of interdependent parameters affecting the output response. Gradient-based and metaheuristic optimization techniques, although previously employed for this task, are limited by their lack of robustness, high computational costs, and susceptibility to local minima. In this study, Bayesian Optimization is used to tune the dynamic parameters of an electrochemical equivalent circuit battery model (E-ECM) for a nickel-manganese-cobalt (NMC)-graphite cell. The performance of Bayesian Optimization is compared with baseline gradient-based and metaheuristic methods. The robustness of the parameter optimization method is tested by verification against an experimental drive cycle. The results indicate that Bayesian Optimization outperforms the gradient descent and particle swarm optimization (PSO) techniques, reducing the average testing loss by 28.8% and 5.8%, respectively. Moreover, Bayesian Optimization significantly reduces the variance in testing loss, by 95.8% and 72.7%, respectively.
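
The following is a hedged sketch of a generic Bayesian Optimization loop (Gaussian-process surrogate plus expected improvement). The quadratic toy loss stands in for the E-ECM parameter-fitting objective, and the kernel, budget, and bounds are assumptions rather than the study's settings.

import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def loss(theta):                       # placeholder for the model-vs-data loss
    return (theta[0] - 0.3) ** 2 + (theta[1] + 0.1) ** 2

rng = np.random.default_rng(0)
bounds = np.array([[-1.0, 1.0], [-1.0, 1.0]])
X = rng.uniform(bounds[:, 0], bounds[:, 1], size=(5, 2))   # initial design
y = np.array([loss(x) for x in X])

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
for _ in range(25):
    gp.fit(X, y)                       # refit the surrogate on all evaluations
    cand = rng.uniform(bounds[:, 0], bounds[:, 1], size=(512, 2))
    mu, sigma = gp.predict(cand, return_std=True)
    imp = y.min() - mu                 # expected improvement (minimization)
    z = imp / np.maximum(sigma, 1e-12)
    ei = imp * norm.cdf(z) + sigma * norm.pdf(z)
    x_next = cand[np.argmax(ei)]       # most promising candidate
    X = np.vstack([X, x_next])
    y = np.append(y, loss(x_next))

print("best parameters:", X[np.argmin(y)], "loss:", y.min())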

The classical Kosambi-Cartan-Chern (KCC) theory developed in differential geometry provides a powerful method for analyzing the behaviors of dynamical systems. In the KCC theory, the properties of a dynamical system are described in terms of five geometrical invariants, of which the second corresponds to the so-called Jacobi stability of the system. Unlike Lyapunov stability, which has been studied extensively in the literature, Jacobi stability has been investigated only more recently, using geometrical concepts and tools. Existing work on Jacobi stability analysis, however, remains theoretical, and the problem of its algorithmic and symbolic treatment has yet to be addressed. In this paper, we initiate a study of this problem for a class of ODE systems of arbitrary dimension and propose two algorithmic schemes using symbolic computation to check whether a nonlinear dynamical system may exhibit Jacobi stability. The first scheme, based on the construction of the complex root structure of a characteristic polynomial and on the method of quantifier elimination, is capable of detecting the existence of Jacobi stability for the given dynamical system. The second algorithmic scheme exploits the method of semi-algebraic system solving and allows one to determine conditions on the parameters under which a given dynamical system has a prescribed number of Jacobi-stable fixed points. Several examples are presented to demonstrate the effectiveness of the proposed algorithmic schemes.
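
As a one-dimensional illustration of the objects involved (a hand computation, not the paper's schemes), the sympy sketch below forms the KCC deviation curvature for the damped oscillator x'' + 2*beta*x' + omega^2*x = 0 and prints the Jacobi-stability condition. The deviation-curvature formula used is the standard one from the KCC literature, specialized to dimension one.

import sympy as sp

x, y, beta, omega = sp.symbols("x y beta omega", real=True)
# Write the system as x'' + 2*G(x, y) = 0 with y = x'.
G = beta * y + sp.Rational(1, 2) * omega**2 * x

N = sp.diff(G, y)                          # nonlinear connection N^1_1
B = sp.diff(N, y)                          # Berwald connection G^1_11
# Deviation curvature (second KCC invariant), 1-D specialization:
P = sp.simplify(-2 * sp.diff(G, x) - 2 * G * B + y * sp.diff(N, x) + N * N)

print("P^1_1 =", P)                        # beta**2 - omega**2
# Jacobi stability requires the (single) eigenvalue of P to be negative:
print("Jacobi stable iff:", sp.Lt(P, 0))   # beta**2 - omega**2 < 0, i.e. underdamped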

We present an efficient theoretical and numerical framework for solving global non-convex polynomial optimization problems. We analytically demonstrate that such problems can be efficiently reformulated as problems with a nonlinear objective over a convex set; furthermore, these reformulated problems possess no spurious local minima (i.e., every local minimum is a global minimum). We introduce an algorithm for solving the resulting problems using the augmented Lagrangian and the method of Burer and Monteiro. We show through numerical experiments that polynomial scaling in dimension and degree is achievable for computing the optimal value and location of previously intractable global polynomial optimization problems in high dimension.
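
To ground the named subroutine, here is a minimal sketch of the Burer-Monteiro factorization with an augmented Lagrangian on a small SDP with unit-diagonal constraints (a max-cut-style relaxation). The problem instance, step sizes, and schedule are assumptions for illustration, not the paper's reformulation.

import numpy as np

rng = np.random.default_rng(0)
n, r = 30, 6                                   # matrix size, factorization rank
C = rng.normal(size=(n, n)); C = (C + C.T) / 2 # cost matrix for min <C, X>

Y = rng.normal(size=(n, r)) / np.sqrt(r)       # X = Y Y^T, so diag(X) starts near 1
lam = np.zeros(n)                              # multipliers for diag(Y Y^T) = 1
mu = 10.0                                      # penalty parameter

for outer in range(50):
    for inner in range(200):                   # gradient steps on the augmented Lagrangian
        c = np.einsum("ij,ij->i", Y, Y) - 1.0  # constraint residuals
        grad = 2 * C @ Y + 2 * ((lam + mu * c)[:, None] * Y)
        Y -= 1e-3 * grad
    lam += mu * (np.einsum("ij,ij->i", Y, Y) - 1.0)  # dual (multiplier) update

X = Y @ Y.T
print("objective <C, X>:", np.sum(C * X))
print("max |diag(X) - 1|:", np.max(np.abs(np.diag(X) - 1.0)))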

Stable partitioned techniques for simulating unsteady fluid-structure interaction (FSI) are known to be computationally expensive when high added-mass is involved. Multiple coupling strategies have been developed to accelerate these simulations, but they often rely on predictors in the form of simple finite-difference extrapolations. In this work, we propose a non-intrusive data-driven predictor that couples reduced-order models of both the solid and fluid subproblems, providing an initial guess for the nonlinear problem solved at the next time step. Each reduced-order model consists of a nonlinear encoder-regressor-decoder architecture and is equipped with an adaptive update strategy that adds robustness for extrapolation. In doing so, the proposed methodology leverages physics-based insights from high-fidelity solvers, thus establishing a physics-aware machine learning predictor. Using three strongly coupled FSI examples, this study demonstrates the improved convergence obtained with the new predictor and the overall computational speedup realized compared to classical approaches.
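
The sketch below is a deliberately simplified, linear stand-in for the predictor idea: a POD basis as encoder/decoder and a least-squares latent regressor mapping the last few interface snapshots to a warm start for the next coupled solve. The paper's architecture is nonlinear and adaptively updated; all shapes and names here are assumptions.

import numpy as np

def fit_predictor(snapshots, rank=8, lag=3):
    # snapshots: (n_dof, n_steps) history from the high-fidelity solver
    U, _, _ = np.linalg.svd(snapshots, full_matrices=False)
    Phi = U[:, :rank]                          # POD basis: "encoder"/"decoder"
    Z = Phi.T @ snapshots                      # latent trajectories
    X = np.vstack([Z[:, i:Z.shape[1] - lag + i] for i in range(lag)])
    Y = Z[:, lag:]                             # next latent state
    W, *_ = np.linalg.lstsq(X.T, Y.T, rcond=None)  # latent "regressor"
    return Phi, W.T, lag

def predict_next(Phi, W, lag, recent):
    # recent: list of the last `lag` snapshots, oldest first
    z = np.concatenate([Phi.T @ s for s in recent[-lag:]])
    return Phi @ (W @ z)                       # decoded initial guess

# usage: warm-start the next FSI subiteration instead of extrapolating
hist = np.random.default_rng(0).normal(size=(500, 40))
Phi, W, lag = fit_predictor(hist)
guess = predict_next(Phi, W, lag, [hist[:, -3], hist[:, -2], hist[:, -1]])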

We study the sensitivity of infinite-dimensional Bayesian linear inverse problems governed by partial differential equations (PDEs) with respect to modeling uncertainties. In particular, we consider derivative-based sensitivity analysis of the information gain, as measured by the Kullback-Leibler divergence from the posterior to the prior distribution. To facilitate this, we develop a fast and accurate method for computing derivatives of the information gain with respect to auxiliary model parameters. Our approach combines low-rank approximations, adjoint-based eigenvalue sensitivity analysis, and post-optimal sensitivity analysis. The proposed approach also paves the way for global sensitivity analysis by computing derivative-based global sensitivity measures. We illustrate different aspects of the proposed approach using an inverse problem governed by a scalar linear elliptic PDE and an inverse problem governed by the three-dimensional equations of linear elasticity, the latter motivated by inversion for the fault-slip field after an earthquake.
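
For context, in the finite-dimensional linear Gaussian setting the information gain has a closed form in terms of the eigenvalues $\lambda_i$ of the prior-preconditioned data-misfit Hessian $\widetilde{H}$, which is the kind of structure that low-rank approximations exploit. The identity below is a standard result, stated here as background rather than quoted from the paper:
\[
D_{\mathrm{KL}}\left(\mu_{\mathrm{post}} \,\|\, \mu_{\mathrm{pr}}\right)
= \frac{1}{2}\sum_{i}\left(\ln(1+\lambda_i) - \frac{\lambda_i}{1+\lambda_i}\right)
+ \frac{1}{2}\left\| m_{\mathrm{post}} - m_{\mathrm{pr}} \right\|^2_{\Gamma_{\mathrm{pr}}^{-1}}.
\]
Differentiating this expression with respect to an auxiliary parameter $\theta$ reduces, for simple eigenvalues, to the classical perturbation formula $\partial_\theta \lambda_i = v_i^{\top}\,(\partial_\theta \widetilde{H})\, v_i$, which is plausibly where the adjoint-based eigenvalue sensitivities enter.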

We present a novel method for generating sequential parameter estimates and quantifying epistemic uncertainty in dynamical systems within a data-consistent (DC) framework. The DC framework differs from traditional Bayesian approaches through the incorporation of the push-forward of an initial density, which performs selective regularization in parameter directions not informed by the data in the resulting updated density. This extends a previous study that developed the linear Gaussian theory within the DC framework and introduced the maximal updated density (MUD) estimate as an alternative to both least-squares and maximum a posteriori (MAP) estimates. In this work, we introduce algorithms for operational settings of MUD estimation in real or near-real time, where spatio-temporal datasets arrive in packets, to provide updated parameter estimates and identify potential parameter drift. Computational diagnostics within the DC framework prove critical for evaluating (1) the quality of the DC update and MUD estimate and (2) the detection of parameter value drift. The algorithms are applied to estimate (1) wind drag parameters in a high-fidelity storm surge model, (2) the thermal diffusivity field in a heat conduction problem, and (3) changing infection and incubation rates in an epidemiological model.
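
A one-dimensional sketch of the DC update and the MUD point follows; the map, densities, and sample sizes are toy assumptions, with the push-forward (predicted) density estimated by a kernel density estimate.

import numpy as np
from scipy.stats import norm, gaussian_kde

Q = lambda lam: lam ** 2               # toy parameter-to-observable map
initial = norm(0.5, 0.5)               # initial density on the parameter
observed = norm(0.25, 0.05)            # density describing the observed data

lam_s = initial.rvs(size=20_000, random_state=0)
predicted = gaussian_kde(Q(lam_s))     # push-forward of the initial density

# DC update: updated(lam) = initial(lam) * observed(Q(lam)) / predicted(Q(lam))
lam = np.linspace(-1.5, 2.0, 2001)
updated = initial.pdf(lam) * observed.pdf(Q(lam)) / predicted(Q(lam))
mud = lam[np.argmax(updated)]          # maximal updated density estimate
print("MUD estimate:", mud)            # near sqrt(0.25) = 0.5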

We propose an efficient semi-Lagrangian characteristic mapping method for solving the (1+1)-dimensional Vlasov-Poisson equations with high precision on a coarse grid. The flow map is evolved numerically, and exponential resolution in linear time is obtained. Global third-order convergence in space and time is shown, and conservation properties are assessed. For benchmarking, we consider linear and nonlinear Landau damping and the two-stream instability, and we compare the results with a Fourier pseudo-spectral method. The resolution of extremely fine-scale features is illustrated, showing the method's capability to efficiently treat filamentation in fusion plasma simulations.
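
The toy code below shows only the elementary semi-Lagrangian building block (trace characteristics back one step and interpolate) for 1-D advection; the actual method evolves the flow map itself with submap composition to reach exponential resolution, which this sketch does not attempt.

import numpy as np

n, L, dt = 256, 2 * np.pi, 0.05
x = np.linspace(0.0, L, n, endpoint=False)
u = np.sin(x)                          # advected quantity
a = np.ones(n)                         # advection velocity field

def semi_lagrangian_step(u, a, dt):
    x_dep = (x - a * dt) % L           # foot of the characteristic
    return np.interp(x_dep, x, u, period=L)  # interpolate at departure points

for _ in range(100):
    u = semi_lagrangian_step(u, a, dt)
# after t = 5 with unit velocity, the exact profile is sin(x - 5)
print(np.max(np.abs(u - np.sin(x - 100 * dt))))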

We propose a novel discrete Poisson equation approach to estimating the statistical error of a broad class of numerical integrators for the underdamped Langevin dynamics. Here, the statistical error refers to the mean square error, after a finite number of iterations, between the estimator and the exact ensemble average. Within the proposed error analysis framework, we show that when the potential function $U(x)$ is strongly convex in $\mathbb{R}^d$ and the numerical integrator has strong order $p$, the statistical error is $O(h^{2p} + \frac{1}{Nh})$, where $h$ is the time step and $N$ is the number of iterations. Moreover, this approach can be adapted to analyze integrators with stochastic gradients, for which quantitative estimates can be derived as well. Our approach requires only the geometric ergodicity of the continuous-time underdamped Langevin dynamics, and it relaxes the constraint on the time step.
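
As one concrete member of the class of integrators covered (an assumption for illustration; the analysis above is integrator-generic), the BAOAB scheme below simulates underdamped Langevin dynamics for the strongly convex potential $U(x) = x^2/2$ and forms the time-average estimator whose statistical error behaves like $O(h^{2p} + \frac{1}{Nh})$.

import numpy as np

gamma, beta, h, N = 1.0, 1.0, 0.05, 200_000
rng = np.random.default_rng(0)
grad_U = lambda x: x                    # U(x) = x^2 / 2

c1 = np.exp(-gamma * h)                 # exact Ornstein-Uhlenbeck damping
c2 = np.sqrt((1.0 - c1**2) / beta)      # matching noise amplitude

x, v, acc = 0.0, 0.0, 0.0
for _ in range(N):
    v -= 0.5 * h * grad_U(x)            # B: half kick
    x += 0.5 * h * v                    # A: half drift
    v = c1 * v + c2 * rng.standard_normal()   # O: exact OU step
    x += 0.5 * h * v                    # A: half drift
    v -= 0.5 * h * grad_U(x)            # B: half kick
    acc += x * x                        # accumulate the observable

print("estimate of E[x^2]:", acc / N, " exact:", 1.0 / beta)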
