
Variational autoencoder (VAE) architectures have the potential to develop reduced-order models (ROMs) for chaotic fluid flows. We propose a method for learning compact and near-orthogonal ROMs using a combination of a $\beta$-VAE and a transformer, tested on numerical data from a two-dimensional viscous flow in both periodic and chaotic regimes. The $\beta$-VAE is trained to learn a compact latent representation of the flow velocity, and the transformer is trained to predict the temporal dynamics in latent space. By using the $\beta$-VAE to learn disentangled representations in latent space, we obtain a more interpretable flow model with features that resemble those observed in the proper orthogonal decomposition, but with a more efficient representation. Analysis of Poincar\'e maps shows that our method captures the underlying dynamics of the flow and outperforms other prediction models. The proposed method has potential applications in other fields such as weather forecasting, structural dynamics, or biomedical engineering.
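As a concrete illustration, here is a minimal sketch of a $\beta$-VAE training objective of the kind described above; the loss weighting and tensor names are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of a beta-VAE objective (illustrative, not the authors' code).
import torch
import torch.nn.functional as F

def beta_vae_loss(x, x_recon, mu, log_var, beta=4.0):
    """Reconstruction error plus a beta-weighted KL term.

    Setting beta > 1 penalizes latent capacity more strongly, which is what
    encourages the disentangled, near-orthogonal latent modes described above.
    """
    recon = F.mse_loss(x_recon, x, reduction="sum")
    # KL(q(z|x) || N(0, I)) for a diagonal-Gaussian posterior.
    kl = -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp())
    return recon + beta * kl
```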

Related content

The ACM/IEEE 23rd International Conference on Model Driven Engineering Languages and Systems (MODELS) is the premier conference series for model-driven software and systems engineering, organized with the support of ACM SIGSOFT and IEEE TCSE. Since 1998, MODELS has covered all aspects of modeling, from languages and methods to tools and applications. Its participants come from diverse backgrounds, including researchers, academics, engineers, and industry professionals. MODELS 2019 is a forum in which participants can exchange cutting-edge research results and innovative practical experience around modeling and model-driven software and systems. This year's edition offers the modeling community an opportunity to further advance the foundations of modeling, and to present innovative applications of modeling in emerging areas such as cyber-physical systems, embedded systems, socio-technical systems, cloud computing, big data, machine learning, security, open source, and sustainability.

This paper explores an iterative coupling approach for solving linear thermo-poroelasticity problems, using a finite element discretization as the high-fidelity model for training projection-based reduced order models. One of the main challenges in addressing coupled multi-physics problems is their complexity and computational expense. In this study, we introduce a decoupled iterative solution approach, integrated with reduced order modeling, aimed at improving the efficiency of the computational algorithm. The iterative coupling technique we employ builds upon the established fixed-stress splitting scheme that has been extensively investigated for Biot's poroelasticity. By leveraging solutions derived from this coupled iterative scheme, the reduced order model employs an additional Galerkin projection onto a reduced basis space spanned by a small number of modes obtained through proper orthogonal decomposition. The effectiveness of the proposed algorithm is demonstrated through numerical experiments, showcasing its computational efficiency.
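For orientation, the sketch below shows the generic offline POD and Galerkin projection steps that this kind of reduced order model rests on; the snapshot data and system matrices are random stand-ins, not the paper's thermo-poroelastic operators.

```python
# Generic POD + Galerkin projection sketch (illustrative stand-in data).
import numpy as np

def pod_basis(snapshots, n_modes):
    """Leading n_modes left singular vectors of the snapshot matrix."""
    U, _, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :n_modes]

def galerkin_project(A, b, V):
    """Project the full-order system A u = b onto the reduced basis V."""
    return V.T @ A @ V, V.T @ b

rng = np.random.default_rng(0)
S = rng.standard_normal((200, 40))   # 40 snapshots of a 200-dof field
V = pod_basis(S, n_modes=5)
A = np.eye(200)
b = rng.standard_normal(200)
Ar, br = galerkin_project(A, b, V)
u_reduced = np.linalg.solve(Ar, br)  # a 5x5 solve instead of 200x200
```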

Linear codes are widely studied in coding theory as they have applications in distributed storage, combinatorics, lattices, cryptography, and other areas. Constructing linear codes with desirable properties is an interesting research topic. In this paper, based on the augmentation technique, we present two families of linear codes from some functions over finite fields. The first family of linear codes is constructed from monomial functions over finite fields. Their locality is determined, and the weight distributions of two subfamilies of the codes are given. An infinite family of locally recoverable codes that are at least almost optimal, as well as some optimal locally recoverable codes, are obtained from these linear codes. In particular, the two subfamilies of the codes are proved to be both self-orthogonal and optimally or almost optimally extendable. The second family of linear codes is constructed from weakly regular bent functions over finite fields, and their weight distribution is determined. This family of codes is proved to have locality 3 in some cases and is conjectured to have locality 2 in the others. In particular, two families of optimal locally recoverable codes are derived from the linear codes. This family of codes is also proved to be both self-orthogonal and optimally or almost optimally extendable.
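To make the notion of a weight distribution concrete, here is a brute-force enumeration for a small binary code (the [7,4] Hamming code); the codes in the paper are over general finite fields and far larger, so this is purely illustrative.

```python
# Brute-force weight distribution of the binary [7,4] Hamming code.
from itertools import product
from collections import Counter

G = [[1, 0, 0, 0, 0, 1, 1],   # generator matrix rows
     [0, 1, 0, 0, 1, 0, 1],
     [0, 0, 1, 0, 1, 1, 0],
     [0, 0, 0, 1, 1, 1, 1]]

def weight_distribution(G):
    k = len(G)
    dist = Counter()
    for msg in product([0, 1], repeat=k):
        codeword = [sum(m * g for m, g in zip(msg, col)) % 2
                    for col in zip(*G)]
        dist[sum(codeword)] += 1
    return dict(sorted(dist.items()))

print(weight_distribution(G))  # {0: 1, 3: 7, 4: 7, 7: 1}
```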

We consider a non-isothermal compositional gas-liquid model for the simulation of well operations in geothermal processes. The model accounts for phase transitions assumed to be at thermodynamic equilibrium and is based on a hydrodynamic Drift Flux Model (DFM) combined with a No Pressure Wave approximation of the momentum equation. The focus of this work is on the design of a robust discretization accounting for slanted and multibranch wells, with the ability to simulate both transient behavior, such as well opening, and coupled simulations at the time scale of the reservoir. It is based on a staggered finite volume scheme in space combined with a fully implicit Euler time integration. The construction of consistent and stable numerical fluxes is a key feature of a robust numerical method. It is achieved by combining a monotone flux approximation for the phase superficial velocities with an upwind approximation of the phase molar fractions, density, and enthalpy. In order to facilitate the coupling of the well and reservoir models, the Newton linearization accounts for the elimination of the hydrodynamic unknowns, leading to Jacobian systems with the same primary unknowns as those of the reservoir model. The efficiency of our approach is investigated on stand-alone well test cases, with and without cross-flow, and on a fully coupled well-reservoir simulation.
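The upwinding principle used here can be shown in a toy one-dimensional setting: transported quantities are taken from the upstream cell. This is a generic first-order sketch, not the paper's staggered multiphase scheme.

```python
# Toy first-order upwind advection on a periodic 1-d grid (illustrative).
import numpy as np

def upwind_face_value(q_left, q_right, velocity):
    """Take the value from the cell the flow is coming from."""
    return q_left if velocity >= 0.0 else q_right

def advect(q, v_faces, dx, dt):
    """One explicit step of dq/dt + d(v q)/dx = 0; face i sits between
    cells i-1 and i, with periodic wrap-around."""
    n = len(q)
    flux = np.array([v_faces[i] * upwind_face_value(q[i - 1], q[i], v_faces[i])
                     for i in range(n)])
    return q - dt / dx * (np.roll(flux, -1) - flux)

q = np.zeros(50)
q[20:30] = 1.0                                   # a square pulse
q = advect(q, np.full(50, 1.0), dx=1.0, dt=0.5)  # CFL = 0.5, stable
```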

Gradient-enhanced Kriging (GE-Kriging) is a well-established surrogate modelling technique for approximating expensive computational models. However, it tends to become impractical for high-dimensional problems due to the size of the inherent correlation matrix and the associated high-dimensional hyper-parameter tuning problem. To address these issues, a new method called sliced GE-Kriging (SGE-Kriging) is developed in this paper to reduce both the size of the correlation matrix and the number of hyper-parameters. We first split the training sample set into multiple slices and invoke Bayes' theorem to approximate the full likelihood function via a sliced likelihood function, in which multiple small correlation matrices, rather than one large one, describe the correlation of the sample set. Then, we replace the original high-dimensional hyper-parameter tuning problem with a low-dimensional counterpart by learning the relationship between the hyper-parameters and the derivative-based global sensitivity indices. The performance of SGE-Kriging is validated by means of numerical experiments with several benchmarks and a high-dimensional aerodynamic modeling problem. The results show that SGE-Kriging matches the accuracy and robustness of the standard model at a much lower training cost. The benefits are most evident for high-dimensional problems with tens of variables.
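The computational appeal of slicing can be seen in a simplified sketch: the Gaussian log-likelihood is summed over slices, so each factorization involves only a small per-slice correlation matrix. The kernel and the slicing rule here are simplifications, not the paper's exact formulation.

```python
# Simplified sliced log-likelihood: several small matrices instead of one
# large correlation matrix (illustrative kernel and slicing).
import numpy as np

def gauss_kernel(X, theta):
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2 * theta).sum(-1)
    return np.exp(-d2)

def sliced_neg_log_likelihood(X, y, theta, n_slices):
    nll = 0.0
    for Xs, ys in zip(np.array_split(X, n_slices), np.array_split(y, n_slices)):
        R = gauss_kernel(Xs, theta) + 1e-10 * np.eye(len(Xs))  # regularized
        _, logdet = np.linalg.slogdet(R)
        nll += 0.5 * (logdet + ys @ np.linalg.solve(R, ys))
    return nll

rng = np.random.default_rng(0)
X = rng.random((200, 10))
y = np.sin(X).sum(axis=1)
# Five 40x40 factorizations instead of one 200x200 factorization.
print(sliced_neg_log_likelihood(X, y, theta=np.ones(10), n_slices=5))
```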

Dynamic programming (DP) is an algorithmic design paradigm for the efficient, exact solution of otherwise intractable combinatorial problems. However, DP algorithm design is often presented in an ad-hoc manner, and it can be difficult to justify algorithm correctness. To address this issue, this paper presents a rigorous algebraic formalism for systematically deriving DP algorithms, based on semiring polymorphism. We start with a specification and construct an algorithm that computes the required solution and is self-evidently correct, because it exhaustively generates and evaluates all possible solutions meeting the specification. We then derive, through the use of shortcut fusion, an implementation of this algorithm that is both efficient and correct. We also demonstrate how, with the use of semiring lifting, the specification can be augmented with combinatorial constraints, showing how these constraints can be fused with the algorithm. We furthermore demonstrate how existing DP algorithms for a given combinatorial problem can be abstracted from their original context and re-purposed. This approach can be applied to the full scope of combinatorial problems expressible in terms of semirings. This includes, for example, optimal probability and Viterbi decoding, probabilistic marginalization, logical inference, fuzzy sets, differentiable softmax, and relational and provenance queries. The approach, building on ideas from the existing literature on constructive algorithmics, exploits generic properties of polymorphic functions, tupling, formal sums, and algebraic simplifications arising from constraint algebras. We demonstrate the effectiveness of this formalism for some example applications arising in signal processing, bioinformatics, and reliability engineering. Python software implementing these algorithms can be downloaded from //www.maxlittle.net/software/dppolyalg.zip
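The key idea, one DP routine made polymorphic over the semiring, can be illustrated in a few lines (a toy sketch, unrelated to the downloadable package): swapping (min, +) for (max, *) turns a shortest-path computation into Viterbi-style most-probable-path scoring.

```python
# One DP routine, many semirings: (min, +) gives shortest paths,
# (max, *) gives the most probable (Viterbi) path weight.
from dataclasses import dataclass
from functools import reduce
from typing import Callable

@dataclass
class Semiring:
    plus: Callable    # combines alternative solutions
    times: Callable   # extends a partial solution
    zero: float       # identity of plus
    one: float        # identity of times

min_plus = Semiring(min, lambda a, b: a + b, float("inf"), 0.0)
max_times = Semiring(max, lambda a, b: a * b, 0.0, 1.0)

def dp_path_weight(stages, sr):
    """Layered-graph DP: stages[t][i][j] weights the edge from state i at
    step t to state j at step t+1; the path starts in state 0."""
    values = [sr.one] + [sr.zero] * (len(stages[0]) - 1)
    for W in stages:
        values = [reduce(sr.plus,
                         (sr.times(values[i], W[i][j]) for i in range(len(W))),
                         sr.zero)
                  for j in range(len(W[0]))]
    return reduce(sr.plus, values)

stage = [[1.0, 4.0], [2.0, 0.5]]
print(dp_path_weight([stage] * 3, min_plus))   # 3.0: cheapest 3-step path
```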

We introduce a novel sampler, called the energy-based diffusion generator, for generating samples from arbitrary target distributions. The sampling model employs a structure similar to a variational autoencoder: a decoder transforms latent variables from a simple distribution into random variables approximating the target distribution, and we design an encoder based on the diffusion model. Leveraging the powerful capacity of the diffusion model for modeling complex distributions, we obtain an accurate variational estimate of the Kullback-Leibler divergence between the distributions of the generated samples and the target. Moreover, we propose a decoder based on generalized Hamiltonian dynamics to further enhance sampling performance. Through empirical evaluation, we demonstrate the effectiveness of our method across various complex distribution functions, showcasing its superiority compared to existing methods.
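For context only, here is one leapfrog step of standard Hamiltonian dynamics, the classical building block that generalized Hamiltonian decoders extend; the energy function and step size are placeholders, and this is not the paper's construction.

```python
# One leapfrog step for H(x, p) = U(x) + |p|^2 / 2 (classical, illustrative).
import numpy as np

def leapfrog(x, p, grad_energy, eps=0.1):
    """Volume-preserving update: half momentum kick, position drift,
    half momentum kick."""
    p = p - 0.5 * eps * grad_energy(x)
    x = x + eps * p
    p = p - 0.5 * eps * grad_energy(x)
    return x, p

# Standard-normal target: U(x) = |x|^2 / 2, so grad U(x) = x.
x, p = np.ones(2), np.zeros(2)
x, p = leapfrog(x, p, lambda x: x)
```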

Dynamical models described by ordinary differential equations (ODEs) are a fundamental tool in the sciences and engineering. Exact reduction aims at producing a lower-dimensional model in which each macro-variable can be directly related to the original variables; it is thus a natural step towards the model's formal analysis and mechanistic understanding. We present an algorithm which, given a polynomial ODE model, computes a longest possible chain of exact linear reductions of the model such that each reduction refines the previous one, thus giving the user control over the level of detail preserved by the reduction. This significantly generalizes existing approaches, which compute only the lowest-dimensional reduction subject to an approach-specific constraint. The algorithm reduces finding exact linear reductions to a question about representations of finite-dimensional algebras. We provide an implementation of the algorithm, demonstrate its performance on a set of benchmarks, and illustrate its applicability via case studies. Our implementation is freely available at //github.com/x3042/ExactODEReduction.jl
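To see what an exact linear reduction means concretely, the snippet below verifies a hand-picked example by symbolic differentiation; the linked package automates the search, while this example system and its macro-variable are chosen here purely for illustration.

```python
# Verifying an exact linear reduction of a polynomial ODE with sympy.
import sympy as sp

x1, x2 = sp.symbols("x1 x2")
f = {x1: x1 * (x1 + x2),      # x1' = x1*(x1 + x2)
     x2: x2 * (x1 + x2)}      # x2' = x2*(x1 + x2)

# Candidate macro-variable y = x1 + x2; along the flow,
# y' = f[x1] + f[x2] = (x1 + x2)**2 = y**2, a closed 1-d model.
dy = sp.expand(f[x1] + f[x2])
print(sp.simplify(dy - (x1 + x2) ** 2) == 0)   # True: the reduction is exact
```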

In designing external validation studies of clinical prediction models, contemporary sample size calculation methods are based on the frequentist inferential paradigm. Net benefit (NB) is one of the most widely reported metrics of model performance, yet the relevance of conventional inference around NB as a measure of clinical utility is doubtful. Value of Information methodology quantifies the consequences of uncertainty in terms of its impact on the clinical utility of decisions. We introduce the expected value of sample information (EVSI) for validation as the expected gain in NB from conducting an external validation study of a given size. We propose algorithms for EVSI computation, and in a case study demonstrate how EVSI changes as a function of the amount of current information and the future study's sample size. Value of Information methodology provides a decision-theoretic lens on the process of planning a validation study of a risk prediction model and can complement conventional methods when designing such studies.
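As background, the sketch below computes the net benefit that EVSI reasons about, using the standard decision-curve formula; the outcomes, risk scores, and threshold are synthetic placeholders.

```python
# Net benefit at a decision threshold pt: NB = TP/n - FP/n * pt / (1 - pt).
import numpy as np

def net_benefit(y_true, risk, threshold):
    treat = risk >= threshold           # treat everyone above the threshold
    n = len(y_true)
    tp = np.sum(treat & (y_true == 1)) / n
    fp = np.sum(treat & (y_true == 0)) / n
    return tp - fp * threshold / (1 - threshold)

rng = np.random.default_rng(1)
y = rng.integers(0, 2, 500)                             # synthetic outcomes
risk = np.clip(0.3 * y + 0.7 * rng.random(500), 0, 1)   # toy risk scores
print(net_benefit(y, risk, threshold=0.2))
```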

One important class of statistical hypothesis testing procedures is equivalence tests, whose main objective is to establish practical equivalence rather than the usual statistically significant difference. These tests are common in bioequivalence studies, where one wishes to show that, for example, an existing drug and a new one under development have the same therapeutic effect. In this article, we consider a two-stage randomized (RAND2) p-value that utilizes the uniformly most powerful (UMP) p-value in the first stage when multiple two one-sided hypotheses are of interest. We investigate the behavior of the distribution functions of the two p-values when the boundaries of the null or alternative hypothesis change, or when the chosen parameters are too close to these boundaries. We also consider how the power functions respond to an increase in sample size. Specifically, we investigate how conservative the procedures are across sample sizes, to check whether the type I error rate is controlled when using either of the two p-values at any sample size. In multiple tests, we evaluate the performance of the two p-values in estimating the proportion of true null hypotheses. We control the family-wise error rate using an adaptive Bonferroni procedure with a plug-in estimator to account for the multiplicity arising from the multiple hypotheses under consideration. We verify the various claims in this research using a simulation study and real-world data analysis.
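For reference, here is a minimal version of the classical TOST (two one-sided tests) p-value for mean equivalence against margins (-delta, delta); the randomized two-stage construction studied above is not reproduced here.

```python
# Minimal TOST p-value for mean equivalence against margins (-delta, delta).
import numpy as np
from scipy import stats

def tost_p_value(x, delta):
    """Larger of the two one-sided p-values; equivalence is declared at
    level alpha when this value falls below alpha."""
    n = len(x)
    m = np.mean(x)
    se = np.std(x, ddof=1) / np.sqrt(n)
    p_lower = stats.t.sf((m + delta) / se, df=n - 1)   # H0: mu <= -delta
    p_upper = stats.t.cdf((m - delta) / se, df=n - 1)  # H0: mu >= +delta
    return max(p_lower, p_upper)

rng = np.random.default_rng(2)
x = rng.normal(0.02, 0.1, 60)      # small true effect, within the margins
print(tost_p_value(x, delta=0.1))  # small p-value: conclude equivalence
```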

The coupling effects in multiphysics processes are often neglected when designing multiscale methods. The coupling may be described by a non-positive definite operator, which in turn brings significant challenges to multiscale simulations. In this paper, we develop a regularized coupling multiscale method based on the generalized multiscale finite element method (GMsFEM) to solve coupled thermomechanical problems; we refer to it as the coupling generalized multiscale finite element method (CGMsFEM). The method consists of defining the coupling multiscale basis functions through local regularized coupling spectral problems in each coarse-grid block, which can be implemented by a novel design of two relaxation parameters. Compared to the standard GMsFEM, the proposed method not only accurately captures the multiscale coupling correlation effects of multiphysics problems but also greatly improves computational efficiency with fewer multiscale basis functions. In addition, the convergence analysis is established and optimal error estimates are derived, where the upper bound of the errors is independent of the magnitude of the relaxation coefficient. Several numerical examples with periodic and random microstructures and random material coefficients are presented to validate the theoretical analysis. The numerical results show that CGMsFEM achieves better robustness and efficiency than the uncoupled GMsFEM.
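At the core of GMsFEM-type methods is a local spectral problem whose leading eigenvectors become the multiscale basis; the toy sketch below solves such a generalized eigenproblem with random stand-ins for the local stiffness and mass pairs, not the paper's regularized coupled formulation.

```python
# Toy local spectral step: multiscale basis from a generalized eigenproblem.
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(3)
B = rng.standard_normal((30, 30))
A = B @ B.T + 30 * np.eye(30)   # SPD stand-in for a local operator
M = np.eye(30)                  # stand-in "mass" matrix

# Solve A v = lam M v; eigenvalues are returned in ascending order.
eigvals, eigvecs = eigh(A, M)
basis = eigvecs[:, :4]          # keep the smallest-eigenvalue modes
```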
