
Stochastic Galerkin formulations of the two-dimensional shallow water systems parameterized with random variables may lose hyperbolicity, and hence change the nature of the original model. In this work, we present a hyperbolicity-preserving stochastic Galerkin formulation by carefully selecting the polynomial chaos approximations to the nonlinear terms in the shallow water equations. We derive a sufficient condition to preserve the hyperbolicity of the stochastic Galerkin system which requires only a finite collection of positivity conditions on the stochastic water height at selected quadrature points in parameter space. Based on our theoretical results for the stochastic Galerkin formulation, we develop a corresponding well-balanced hyperbolicity-preserving central-upwind scheme. We demonstrate the accuracy and the robustness of the new scheme on several challenging numerical tests.
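To make the positivity condition concrete, here is a minimal sketch (all names and coefficients are illustrative, not taken from the paper): it expands a stochastic water height in a one-dimensional Legendre polynomial chaos and checks positivity at Gauss-Legendre quadrature nodes, which is the kind of finite collection of pointwise conditions to which the sufficient hyperbolicity criterion reduces.

```python
# Minimal sketch, assuming a single uniform random parameter xi on [-1, 1],
# Legendre polynomial chaos, and hypothetical gPC coefficients `h_hat`.
import numpy as np
from numpy.polynomial.legendre import leggauss, legval

def hyperbolicity_check(h_hat, n_quad):
    """True if the gPC water height is positive at all Gauss-Legendre nodes."""
    nodes, _ = leggauss(n_quad)          # quadrature points in parameter space
    h_at_nodes = legval(nodes, h_hat)    # h(xi) = sum_k h_hat[k] * P_k(xi)
    return bool(np.all(h_at_nodes > 0.0))

# Example: mean height 1.0 with a small random perturbation.
h_hat = np.array([1.0, 0.3, -0.05])      # illustrative gPC coefficients
print(hyperbolicity_check(h_hat, n_quad=5))   # True
```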

Related Content

We propose and analyze volume-preserving parametric finite element methods for surface diffusion, conserved mean curvature flow, and an intermediate evolution law in an axisymmetric setting. The weak formulations are presented in terms of the generating curves of the axisymmetric surfaces. The proposed numerical methods are based on piecewise linear parametric finite elements. The constructed fully practical schemes preserve the enclosed volume. In addition, we prove unconditional stability and study the distribution of vertices for the discretized schemes. The introduced methods are implicit, and the resulting nonlinear systems of equations can be solved efficiently and accurately via Newton's method. Numerical results are presented to show the accuracy and efficiency of the introduced schemes for computing the considered axisymmetric geometric flows.
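As a rough illustration of that last solution step, the following is a generic Newton iteration for a nonlinear system F(x) = 0; the residual, Jacobian, and test problem are placeholders, not the discretized geometric-flow systems from the paper.

```python
# Generic Newton iteration sketch; F, J, and the example are illustrative.
import numpy as np

def newton(F, J, x0, tol=1e-12, max_iter=50):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = F(x)
        if np.linalg.norm(r) < tol:        # converged
            break
        x = x - np.linalg.solve(J(x), r)   # Newton update
    return x

# Example: intersect the unit circle with the line x = y.
F = lambda x: np.array([x[0]**2 + x[1]**2 - 1.0, x[0] - x[1]])
J = lambda x: np.array([[2*x[0], 2*x[1]], [1.0, -1.0]])
print(newton(F, J, [1.0, 0.0]))   # ~ [0.7071, 0.7071]
```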

Reversible visible watermarking (RVW) is an active copyright protection mechanism. It not only transparently superimposes copyright patterns on specific positions of digital images or video frames to declare copyright ownership, but also completely erases the visible watermark, enabling restoration of the original host image without any distortion. However, existing RVW algorithms mostly construct the reversible mapping mechanism for one specific visible watermarking scheme and are therefore not versatile. Hence, we propose a generic RVW framework that accommodates various visible watermarking schemes. In particular, we obtain a reconstruction data packet -- the compressed difference image between the watermarked image and the original host image -- which is embedded into the watermarked image via any conventional reversible data hiding method to facilitate blind recovery of the host image. The key is to achieve compact compression of the difference image for efficient embedding of the reconstruction data packet. To this end, we propose regularized Graph Fourier Transform (GFT) coding, in which the difference image is smoothed via the graph Laplacian regularizer for more efficient compression and then encoded by multi-resolution GFTs in an approximately optimal manner. Experimental results show that the proposed framework is far more versatile than state-of-the-art methods. Because only a small amount of auxiliary information needs to be embedded, the visual quality of the watermarked image is also higher.
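For readers unfamiliar with GFT coding, this is a minimal sketch of the basic transform pair on a path graph; the paper's multi-resolution construction and Laplacian-regularized smoothing are not reproduced, and the signal is a random stand-in for the difference image.

```python
# Basic Graph Fourier Transform sketch on a path graph (illustrative only).
import numpy as np

def path_laplacian(n):
    A = np.zeros((n, n))
    idx = np.arange(n - 1)
    A[idx, idx + 1] = A[idx + 1, idx] = 1.0   # chain adjacency
    return np.diag(A.sum(1)) - A              # L = D - A

n = 8
L = path_laplacian(n)
_, U = np.linalg.eigh(L)           # GFT basis: Laplacian eigenvectors
x = np.random.default_rng(0).normal(size=n)   # stand-in difference signal
x_hat = U.T @ x                    # forward GFT
assert np.allclose(U @ x_hat, x)   # inverse GFT recovers the signal
```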

We propose a general method for distributed Bayesian model choice, using the marginal likelihood, in which a data set is split into non-overlapping subsets. These subsets are only accessed locally by individual workers, and no data is shared between the workers. We approximate the model evidence for the full data set through Monte Carlo sampling from the posterior on every subset, generating a model evidence per subset. The results are combined using a novel approach that corrects for the splitting using summary statistics of the generated samples. Our divide-and-conquer approach enables Bayesian model choice in the large-data setting, exploiting all available information while limiting communication between workers. We derive theoretical error bounds that quantify the resulting trade-off between computational gain and loss in precision. The embarrassingly parallel nature yields important speed-ups on massive data sets, as illustrated by our real-world experiments. In addition, we show how the suggested approach can be extended to model choice within a reversible jump setting that explores multiple feature combinations within one run.
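The following toy sketch shows only the embarrassingly parallel pattern of estimating a log evidence per subset; the model, data, and combination step are illustrative, and the paper's summary-statistic correction is deliberately not reproduced here.

```python
# Toy divide-and-conquer evidence sketch (model and data are illustrative).
import numpy as np

rng = np.random.default_rng(1)
subsets = np.array_split(rng.normal(loc=0.5, size=1000), 4)

def log_evidence_mc(data, n_samples=5000):
    """Naive Monte Carlo evidence for y ~ N(theta, 1), theta ~ N(0, 1)."""
    theta = rng.normal(size=n_samples)                  # prior draws
    loglik = np.array([np.sum(-0.5 * (data - t)**2 - 0.5 * np.log(2 * np.pi))
                       for t in theta])
    m = loglik.max()
    return m + np.log(np.mean(np.exp(loglik - m)))      # stable log-mean-exp

# Each worker handles one subset. Naively adding the log evidences reuses the
# prior once per subset, which is what the paper's correction repairs.
print(sum(log_evidence_mc(s) for s in subsets))
```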

In this paper, we propose a deep learning based reduced order modeling method for stochastic underground flow problems in highly heterogeneous media. We use supervised learning to build a reduced surrogate model mapping the stochastic parameter space, which characterizes the possible highly heterogeneous media, to the solution space of the stochastic flow problem, enabling fast online simulations. Dominant POD modes, obtained from a well-designed spectral problem in a global snapshot space, are used to represent the solution of the flow problem. Because the reduced solution is low dimensional, the complexity of the neural network is significantly reduced. We adopt the generalized multiscale finite element method (GMsFEM), in which a set of local multiscale basis functions that capture the heterogeneity of the media and the source information is constructed to efficiently generate a globally defined snapshot space. Rigorous theoretical analyses are provided, and extensive numerical experiments for linear and nonlinear stochastic flows verify the superior performance of the proposed method.
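As a minimal sketch of the POD step, the code below extracts dominant modes from a snapshot matrix via the SVD; the random matrix stands in for the GMsFEM snapshot space, and the subsequent step of learning the map from parameters to POD coefficients is not shown.

```python
# POD via SVD on an illustrative snapshot matrix.
import numpy as np

rng = np.random.default_rng(0)
snapshots = rng.normal(size=(200, 50))     # 200 dofs x 50 solution snapshots

U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.99)) + 1 # modes retaining 99% of the energy
basis = U[:, :r]                           # dominant POD modes

coeffs = basis.T @ snapshots               # low-dimensional representation
print(r, np.linalg.norm(snapshots - basis @ coeffs))   # small residual
```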

Semiconductor device models are essential to understand charge transport in thin film transistors (TFTs). Drawing inference with these TFT models involves estimating the parameters that fit the model to experimental data, such as extracted charge carrier mobility or measured current. Estimating these parameters helps us draw inferences about device performance. Fitting a TFT model to given experimental data traditionally relies on manual fine-tuning of multiple parameters by human experts. Several of these parameters may have confounding effects on the experimental data, making the extraction of their individual effects a non-intuitive process during manual tuning. To avoid this convoluted process, we propose a new method for automating the model parameter extraction process, resulting in an accurate model fit. In this work, model-choice-based approximate Bayesian computation (ABC) is used to generate the posterior distribution of the estimated parameters from mobility observed at various gate voltage values. Furthermore, we show that the extracted parameters can be accurately predicted from the mobility curves using gradient boosted trees. This work also provides a comparative analysis of the proposed framework against fine-tuned neural networks, wherein the proposed framework is shown to perform better.
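A minimal rejection-ABC sketch follows, assuming a hypothetical one-parameter mobility model `mu_model`; the paper infers multiple TFT model parameters with model-choice ABC, which this toy version compresses to a single distance threshold.

```python
# Rejection ABC on a toy mobility-vs-gate-voltage curve (illustrative).
import numpy as np

rng = np.random.default_rng(0)
v_gate = np.linspace(0.0, 5.0, 20)

def mu_model(theta):
    return theta * np.sqrt(v_gate)          # hypothetical mobility curve

observed = mu_model(2.0) + rng.normal(scale=0.05, size=v_gate.size)

# Accept prior draws whose simulated curve is close to the observation.
prior = rng.uniform(0.0, 5.0, size=20000)
dist = np.array([np.linalg.norm(mu_model(t) - observed) for t in prior])
posterior = prior[dist < 0.5]
print(posterior.mean(), posterior.std())    # ~2.0, small spread
```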

We consider a biochemical model consisting of a system of partial differential equations with reaction terms, subject to non-homogeneous Dirichlet boundary conditions. The model is discretised using the gradient discretisation method (GDM), a framework covering a large class of conforming and nonconforming schemes. Under classical regularity assumptions on the exact solutions, the GDM enables us to establish the existence of weak solutions of the model and strong convergence of the approximate solution and its approximate gradient. A numerical test employing a finite volume method is presented to demonstrate the behaviour of the solutions to the model.
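For intuition, here is a minimal finite-volume sketch for 1-D diffusion with non-homogeneous Dirichlet data, standing in for a single species of the reaction system; the GDM framework covers schemes of this kind as special cases, but all parameters here are illustrative.

```python
# Explicit finite-volume update for u_t = D u_xx with Dirichlet boundaries.
import numpy as np

n, L_dom, dt, D = 50, 1.0, 1e-4, 1.0
dx = L_dom / n
u = np.zeros(n); u_left, u_right = 1.0, 0.0   # non-homogeneous Dirichlet data

for _ in range(2000):
    flux = np.empty(n + 1)                    # D * du/dx at the cell faces
    flux[0] = D * (u[0] - u_left) / (dx / 2)
    flux[-1] = D * (u_right - u[-1]) / (dx / 2)
    flux[1:-1] = D * (u[1:] - u[:-1]) / dx
    u += dt / dx * (flux[1:] - flux[:-1])     # conservative explicit update

print(u[:5])   # relaxes toward the linear steady profile between 1 and 0
```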

Approximate inference methods, such as the Laplace method, Laplace approximations, and variational methods, are popular when exact inference is not feasible due to the complexity of the model or the abundance of data. In this paper we propose a hybrid approximate method, the Low-Rank Variational Bayes Correction (VBC), which uses the Laplace method followed by a Variational Bayes correction to the posterior mean. The cost is essentially that of the Laplace method, which ensures scalability. We illustrate the method and its advantages with simulated and real data, at small and large scales.
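The sketch below shows the Laplace step only, on a hypothetical one-dimensional log posterior: find the mode, measure the curvature there, and report the resulting Gaussian approximation. The paper's subsequent Variational Bayes correction to the posterior mean is not reproduced.

```python
# Laplace approximation for a toy 1-D posterior (model and data illustrative).
import numpy as np
from scipy.optimize import minimize_scalar

data = np.array([1.2, 0.8, 1.1, 0.9, 1.0])

def neg_log_post(theta):
    # Toy model: theta ~ N(0, 10) prior, observations y_i ~ N(theta, 1).
    return 0.5 * theta**2 / 10.0 + 0.5 * np.sum((data - theta)**2)

mode = minimize_scalar(neg_log_post).x          # posterior mode
h = 1e-4                                        # finite-difference curvature
curv = (neg_log_post(mode + h) - 2 * neg_log_post(mode)
        + neg_log_post(mode - h)) / h**2
print(mode, 1.0 / np.sqrt(curv))                # Gaussian approx N(mode, 1/curv)
```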

In this letter, we revisit the invariant energy quadratization (IEQ) method and provide a new perspective on its ability to preserve original energy dissipation laws. The IEQ method has been widely used to design energy stable numerical schemes for phase-field and gradient flow models. Despite its many merits, one major disadvantage is that the IEQ method usually respects only a modified energy law, where the modified energy is expressed in terms of the auxiliary variables, while dissipation of the original energy is not guaranteed. Using the widely used Cahn-Hilliard equation as an example, we demonstrate that the Runge-Kutta IEQ method can in fact preserve the original energy dissipation law in certain situations, up to arbitrarily high-order accuracy. Interested readers are encouraged to apply our idea to other phase-field equations or gradient flow models.
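To illustrate the quadratization idea itself, here is a first-order IEQ sketch on the scalar gradient flow u' = -F'(u) with the double-well F(u) = (u^2 - 1)^2 / 4, standing in for Cahn-Hilliard; this is my own minimal variant, not the Runge-Kutta scheme analyzed in the letter. The auxiliary variable q = sqrt(F(u) + C0) makes the update linear in q, and the modified energy q^2 - C0 decays by construction.

```python
# First-order IEQ on a scalar double-well gradient flow (illustrative).
import numpy as np

C0, dt, u = 1.0, 0.1, 1.8
F = lambda v: 0.25 * (v**2 - 1.0)**2            # double-well potential
q = np.sqrt(F(u) + C0)                          # auxiliary variable

for _ in range(100):
    g = u * (u**2 - 1.0) / np.sqrt(F(u) + C0)   # F'(u) / sqrt(F(u) + C0)
    q_new = q / (1.0 + 0.5 * dt * g**2)         # semi-implicit step, closed form
    u += -dt * g * q_new
    assert q_new**2 <= q**2 + 1e-14             # modified energy q^2 - C0 decays
    q = q_new

print(u)   # relaxes toward the well at u = 1
```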

The bond graph is a unified graphical approach for describing the dynamics of complex engineering and physical systems, widely adopted in domains such as electrical, mechanical, medical, thermal, and fluid mechanics. Traditionally, these dynamics are analyzed using paper-and-pencil proof methods or computer-based techniques. However, both techniques suffer from inherent limitations, such as proneness to human error, approximation of results, and enormous computational requirements, so they cannot be trusted for bond graph based dynamical analysis of systems from safety-critical domains like robotics and medicine. Formal methods, in particular higher-order-logic theorem proving, can overcome the shortcomings of these traditional methods and provide an accurate analysis, and they have been widely used for analyzing the dynamics of engineering and physical systems. In this paper, we propose to use higher-order-logic theorem proving for bond graph based analysis of physical systems. In particular, we provide a formalization of bond graphs, which mainly includes functions that convert a bond graph to its corresponding mathematical model (a state-space model) and verify its various properties, such as stability. To illustrate the practical effectiveness of our approach, we present a formal stability analysis of a prosthetic mechatronic hand using the HOL Light theorem prover. Moreover, to help non-experts in HOL, we encode our formally verified stability theorems in MATLAB to perform the stability analysis of an anthropomorphic prosthetic mechatronic hand.
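To see what the stability property amounts to numerically, here is a plain eigenvalue check on a hypothetical state-space matrix A for x' = A x; the paper establishes such conditions as formally verified theorems in HOL Light, whereas this is only the corresponding numerical test.

```python
# Numerical stability check: all eigenvalues of A have negative real part.
import numpy as np

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])                     # hypothetical system matrix
eigvals = np.linalg.eigvals(A)
print(eigvals, bool(np.all(eigvals.real < 0)))   # eigenvalues -1, -2: stable
```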

Owing to recent advances in "Big Data" modeling and prediction tasks, variational Bayesian estimation has gained popularity due to its ability to provide tractable approximations to exact posteriors. One key technique for approximate inference is stochastic variational inference (SVI), which poses variational inference as a stochastic optimization problem and solves it iteratively using noisy gradient estimates. It handles massive data for predictive and classification tasks by applying complex Bayesian models with observed as well as latent variables. This paper aims to decentralize SVI, allowing parallel computation, secure learning, and robustness benefits. We use the Alternating Direction Method of Multipliers (ADMM) in a top-down setting to develop a distributed SVI algorithm in which independent learners running inference algorithms only need to share estimated model parameters, not their private data sets. Our work extends the distributed SVI-ADMM algorithm that we first propose to an ADMM-based networked SVI algorithm in which the learners not only work distributively but also share information according to the rules of a graph on which they form a network. This work falls under the umbrella of 'deep learning over networks', and we verify our algorithm on a topic-modeling problem for a corpus of Wikipedia articles. We illustrate the results on the latent Dirichlet allocation (LDA) topic model in large document classification, compare performance with the centralized algorithm, and use numerical experiments to corroborate the analytical results.
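The following is a minimal ADMM consensus sketch with toy quadratic local losses, mimicking how the distributed learners exchange only parameter estimates rather than data; the losses, penalty rho, and targets are all illustrative, not the SVI updates from the paper.

```python
# ADMM consensus averaging: each worker fits (1/2)(x_i - t_i)^2 s.t. x_i = z.
import numpy as np

rng = np.random.default_rng(0)
targets = rng.normal(loc=3.0, scale=0.5, size=5)   # each worker's local optimum
rho = 1.0
x = np.zeros(5); z = 0.0; lam = np.zeros(5)

for _ in range(50):
    x = (targets - lam + rho * z) / (1.0 + rho)    # local updates (in parallel)
    z = np.mean(x + lam / rho)                     # consensus update
    lam += rho * (x - z)                           # dual update

print(z, targets.mean())   # consensus converges to the average of local optima
```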
