
Most lattice Boltzmann methods simulate an approximation of the sharp-interface problem of dissolution and precipitation. In such studies, the curvature-driven motion of the interface is neglected in the Gibbs-Thomson condition. In order to simulate these phenomena with or without curvature-driven motion, we propose a phase-field model derived from a thermodynamic functional of grand potential. Compared to the well-known free energy, the main advantage of the grand potential is that it provides a theoretical framework consistent with equilibrium properties such as the equality of chemical potentials. The model is composed of one equation for the phase field $\phi$ coupled with one equation for the chemical potential $\mu$. In the phase-field method, curvature-driven motion is always contained in the phase-field equation; to cancel it, a counter term must be added to the $\phi$-equation. For reasons of mass conservation, the $\mu$-equation is written in a mixed formulation involving both the composition $c$ and the chemical potential. The closure relationship between $c$ and $\mu$ is derived by assuming quadratic free energies for the bulk phases. An anti-trapping current is also included in the composition equation for simulations with zero diffusion in the solid. The lattice Boltzmann schemes are implemented in LBM_saclay, a numerical code running on various High Performance Computing architectures. Validations are carried out with several analytical solutions representative of dissolution and precipitation. Simulations with and without the counter term are compared on the shape of a porous medium characterized by microtomography. The computations ran on a single V100 GPU.
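
As a concrete illustration of the closure relation mentioned above, here is a minimal sketch assuming the standard grand-potential construction with quadratic bulk free energies $f_\alpha(c) = (\epsilon_\alpha/2)(c - c_\alpha^{eq})^2$; the interpolation function and all coefficient values are illustrative choices, not taken from the paper.

```python
import numpy as np

# Minimal sketch of the c(phi, mu) closure for quadratic bulk free energies
# f_a(c) = (eps_a / 2) * (c - c_a_eq)^2, a in {solid, liquid}, which gives
# c_a(mu) = c_a_eq + mu / eps_a. All coefficient values are illustrative.
eps_s, eps_l = 1.0, 1.0        # curvatures of the two bulk free energies
c_s_eq, c_l_eq = 0.1, 0.9      # equilibrium compositions of solid / liquid

def h(phi):
    # smooth interpolation between phases: h(0) = 0 (solid), h(1) = 1 (liquid)
    return phi ** 2 * (3 - 2 * phi)

def composition(phi, mu):
    # composition as a function of phase field and chemical potential
    c_s = c_s_eq + mu / eps_s
    c_l = c_l_eq + mu / eps_l
    return h(phi) * c_l + (1 - h(phi)) * c_s

print(composition(np.array([0.0, 0.5, 1.0]), mu=0.05))
```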

Related content

The numerical solution of a linear Schr\"odinger equation in the semiclassical regime is very well understood on the torus $\mathbb{T}^d$. A raft of modern computational methods are precise and affordable, while conserving energy and resolving high oscillations very well. This, however, is far from the case for its solution in $\mathbb{R}^d$, a setting more suitable for many applications. In this paper we extend the theory of splitting methods to this end. The main idea is to derive the solution using a spectral method from a combination of solutions of the free Schr\"odinger equation and of linear scalar ordinary differential equations, in a symmetric Zassenhaus splitting method. This necessitates a detailed analysis of certain orthonormal spectral bases on the real line and their evolution under the free Schr\"odinger operator.
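
For orientation, the following sketch shows the simplest symmetric splitting of this kind: a Strang step for the semiclassical equation $i\varepsilon\partial_t\psi = -\tfrac{\varepsilon^2}{2}\Delta\psi + V\psi$ on a periodic grid, i.e. the well-understood torus baseline rather than the paper's $\mathbb{R}^d$ method; grid, potential and parameters are illustrative.

```python
import numpy as np

# Strang splitting for i*eps*psi_t = -(eps^2/2) psi_xx + V(x) psi on a
# periodic grid: half-step with the potential, free-Schrodinger step via
# FFT, half-step with the potential. Parameters are illustrative.
eps = 1e-2
N, L = 1024, 2 * np.pi
x = np.linspace(0, L, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)         # angular wavenumbers
V = 1 + np.cos(x)                                  # smooth periodic potential
psi = np.exp(-50 * (x - np.pi) ** 2).astype(complex)
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * L / N)   # unit L2 norm

dt, steps = 1e-3, 200
half_V = np.exp(-1j * V * dt / (2 * eps))          # potential half-step
kinetic = np.exp(-1j * eps * k ** 2 * dt / 2)      # free Schrodinger step
for _ in range(steps):
    psi = half_V * psi
    psi = np.fft.ifft(kinetic * np.fft.fft(psi))
    psi = half_V * psi
print("mass drift:", abs(np.sum(np.abs(psi) ** 2) * L / N - 1))
```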

Lattice Boltzmann methods are known for their simplicity, efficiency and ease of parallelization, usually relying on uniform Cartesian meshes with a strong bond between spatial and temporal discretization. This complicates the crucial issue of reducing the computational cost and the memory footprint by automatically coarsening the grid where a fine mesh is unnecessary, while still ensuring the overall quality of the numerical solution through error control. This work provides a possible answer to this question by connecting, for the first time, the field of lattice Boltzmann methods (LBM) to the adaptive multiresolution (MR) approach based on wavelets. To this end, we employ an MR multi-scale transform to adapt the mesh as the solution evolves in time according to its local regularity. The collision phase is not affected, owing to its inherent local nature and because we do not modify the speed of sound, contrary to most of the LBM/Adaptive Mesh Refinement (AMR) strategies proposed in the literature, thus preserving the original structure of any LBM scheme. Besides, an original use of the MR framework allows the scheme to resolve the proper physics by efficiently controlling the accuracy of the transport phase. We carefully test our method on a wide family of existing lattice Boltzmann schemes, treating both hyperbolic and parabolic systems of equations, thus making it less problem-dependent than AMR approaches, which have a hard time guaranteeing effective error control. We also show that the method yields a very efficient compression rate, and thus a reduction of computational cost, for solutions involving localized structures with loss of regularity, while guaranteeing precise control of the approximation error introduced by the spatial adaptation of the grid. The numerical strategy is implemented on a specific open-source platform called SAMURAI with a dedicated data structure relying on set algebra.
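
To make the refinement criterion concrete, here is a minimal sketch of one level of a multiresolution transform in its simplest (Haar-type) form: a cell is kept at the fine level only where the detail coefficient, the error of predicting fine values from the coarse average, exceeds a threshold. The prediction operator, test signal and threshold are illustrative stand-ins for the wavelet machinery used in the paper.

```python
import numpy as np

# One level of a Haar-type multiresolution transform: the detail coefficient
# measures the error of predicting fine-grid values from the coarse average;
# cells with small details can live on the coarser grid.
def mr_details(u):
    coarse = 0.5 * (u[0::2] + u[1::2])   # projection onto the coarser level
    detail = u[0::2] - coarse            # prediction error (Haar prediction)
    return coarse, detail

x = np.linspace(0, 1, 256, endpoint=False)
u = np.tanh(50 * (x - 0.5))              # localized structure (steep front)
coarse, detail = mr_details(u)
keep_fine = np.abs(detail) > 1e-3        # refinement criterion
print(f"fine cells kept: {keep_fine.sum()} of {coarse.size}")
```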

In this contribution, a novel framework for simulating mixed-mode failure in rock is presented. Based on a hybrid phase-field model for mixed-mode fracture, separate phase-field variables are introduced for tensile (mode I) and shear (mode II) fracture. The resulting three-field problem features separate length scale parameters for mode I and mode II cracks. In contrast to the classic two-field mixed-mode approaches, it can thus account for the different tensile and shear strengths of rock. The two phase-field equations are implicitly coupled through the degradation of the material in the elastic equation, and the three fields are solved using a staggered iteration scheme. For its validation, the three-field model is calibrated for two types of rock, Solnhofen Limestone and Pfraundorfer Dolostone. To this end, double-edge notched Brazilian disk (DNBD) tests are performed to determine the mode II fracture toughness. The numerical results demonstrate that the proposed phase-field model is able to reproduce the different crack patterns observed in the DNBD tests. A final example of a uniaxial compression test on a rare drill core demonstrates that the proposed model is able to capture complex, 3D mixed-mode crack patterns when calibrated with the correct mode I and mode II fracture toughness.
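
As a schematic illustration of the staggered iteration (not the paper's finite-element implementation), the sketch below alternates a stand-in "elastic solve" with pointwise AT2-type updates of the two damage fields, dropping gradient terms; the energy split, material values and the redistribution model are all hypothetical placeholders.

```python
import numpy as np

# Schematic staggered iteration for two damage fields with separate
# toughness / length-scale pairs. Pointwise AT2-type updates (gradient
# terms dropped); the elastic solve and mode split are hypothetical.
Gc1, l1 = 1.0, 0.02   # mode I toughness and length scale (illustrative)
Gc2, l2 = 2.0, 0.01   # mode II values, allowed to differ

def damage_update(psi, Gc, l):
    # pointwise optimum of (1-d)^2 * psi + Gc/(2l) * d^2
    return 2 * psi / (Gc / l + 2 * psi)

def elastic_solve(d1, d2, x):
    # stand-in for the elastic solve: an undegraded strain energy density
    # whose magnitude grows as the combined degradation g reduces stiffness
    g = (1 - d1) ** 2 * (1 - d2) ** 2
    return 50 * np.exp(-((x - 0.5) / 0.05) ** 2) * (2 - g)

x = np.linspace(0, 1, 100)
d1 = np.zeros_like(x)
d2 = np.zeros_like(x)
for it in range(50):
    psi0 = elastic_solve(d1, d2, x)
    psi_I, psi_II = 0.7 * psi0, 0.3 * psi0                 # stand-in split
    d1_new = np.maximum(d1, damage_update(psi_I, Gc1, l1))  # irreversible
    d2_new = np.maximum(d2, damage_update(psi_II, Gc2, l2))
    if max(np.abs(d1_new - d1).max(), np.abs(d2_new - d2).max()) < 1e-8:
        break
    d1, d2 = d1_new, d2_new
print(f"stopped after {it + 1} sweeps, max mode-I damage {d1.max():.3f}")
```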

Lattice Boltzmann schemes rely on enlarging the size of the target problem in order to solve PDEs in a highly parallelizable, efficient, kinetic-like fashion split into a collision phase and a stream phase. Despite its well-known computational advantages, this structure is not suitable for constructing a rigorous notion of consistency with respect to the target equations or for providing a precise notion of stability. To address these shortcomings and introduce a rigorous framework, we demonstrate that any lattice Boltzmann scheme can be rewritten as a corresponding multi-step Finite Difference scheme on the conserved variables. This is achieved by devising a suitable formalism based on operators, commutative algebra and polynomials. The notion of consistency of the corresponding Finite Difference scheme then allows one to invoke the Lax-Richtmyer theorem in the case of linear lattice Boltzmann schemes. Moreover, we show that the frequently used von Neumann-like stability analysis for lattice Boltzmann schemes corresponds exactly to the von Neumann stability analysis of their Finite Difference counterpart. More generally, the usual tools for the analysis of Finite Difference schemes are now readily available to study lattice Boltzmann schemes. Their relevance is verified by means of numerical illustrations.
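
The correspondence can be checked by hand in the simplest setting. For a D1Q2 scheme with BGK collision and one conserved moment $\rho$, eliminating the distributions yields the two-step Finite Difference scheme $\rho^{n+1}_j = [(1-\omega) + \tfrac{\omega}{2}(1+C)]\,\rho^n_{j-1} + [(1-\omega) + \tfrac{\omega}{2}(1-C)]\,\rho^n_{j+1} - (1-\omega)\,\rho^{n-1}_j$, with $C = c/\lambda$ the ratio of transport to lattice speed. The sketch below is a toy re-derivation under these assumptions (not code from the paper) verifying numerically that the two evolutions coincide.

```python
import numpy as np

N = 64
rng = np.random.default_rng(0)
rho0 = rng.random(N)
C = 0.4    # ratio c / lambda of transport speed to lattice speed
w = 1.3    # BGK relaxation parameter

a = 0.5 * (1 + C) * rho0   # f_plus initialized at equilibrium
b = 0.5 * (1 - C) * rho0   # f_minus

def lbm_step(a, b):
    rho = a + b
    a_post = (1 - w) * a + w * 0.5 * (1 + C) * rho   # BGK collision
    b_post = (1 - w) * b + w * 0.5 * (1 - C) * rho
    return np.roll(a_post, 1), np.roll(b_post, -1)   # periodic streaming

rho_prev = rho0.copy()
a, b = lbm_step(a, b)              # one LBM step to obtain rho^1
rho_now = a + b
cp = (1 - w) + 0.5 * w * (1 + C)   # coefficients of the FD scheme
cm = (1 - w) + 0.5 * w * (1 - C)
for _ in range(100):
    a, b = lbm_step(a, b)                            # LBM: rho^(n+1)
    rho_fd = (cp * np.roll(rho_now, 1) + cm * np.roll(rho_now, -1)
              - (1 - w) * rho_prev)                  # two-step FD prediction
    rho_prev, rho_now = rho_now, a + b
    assert np.allclose(rho_now, rho_fd)
print("LBM and its multi-step finite-difference form coincide.")
```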

We propose a novel method for computing $p$-values based on nested sampling (NS) applied to the sampling space rather than the parameter space of the problem, in contrast to its usage in Bayesian computation. The computational cost of NS scales as $\log^2(1/p)$, which compares favorably to the $1/p$ scaling for Monte Carlo (MC) simulations. For significances greater than about $4\sigma$ in both a toy problem and a simplified resonance search, we show that NS requires orders of magnitude fewer simulations than ordinary MC estimates. This is particularly relevant for high-energy physics, which adopts a $5\sigma$ gold standard for discovery. We conclude with remarks on new connections between Bayesian and frequentist computation and possibilities for tuning NS implementations for still better performance in this setting.
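
The idea can be illustrated with a toy tail-probability computation: treat the test statistic as nested sampling's "likelihood", repeatedly replace the worst of $N$ live points by a draw constrained above the current threshold, and compress the estimated tail volume by $e^{-1/N}$ per iteration. The constrained draw below uses naive rejection sampling, so this toy does not achieve the $\log^2(1/p)$ cost of a real implementation, which would use MCMC for that step.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
N = 500                      # number of live points
t_obs = 3.0                  # observed test statistic (exact p ~ 1.35e-3)

# live points: test statistics of datasets simulated under H0; in this toy
# the statistic is simply a standard normal draw
live = rng.standard_normal(N)
k = 0
while live.min() < t_obs:
    threshold = live.min()
    while True:                      # toy rejection sampling for the
        x = rng.standard_normal()    # constrained draw; real code uses MCMC
        if x > threshold:
            break
    live[np.argmin(live)] = x
    k += 1                   # each iteration compresses volume by e^(-1/N)
print(f"NS estimate: {np.exp(-k / N):.2e}, exact: {norm.sf(t_obs):.2e}")
```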

We consider the inverse problem of reconstructing the boundary curve of a cavity embedded in a bounded domain. The problem is formulated in two dimensions for the wave equation. We combine the Laguerre transform with the integral equation method and reduce the inverse problem to a system of boundary integral equations. We propose an iterative scheme that linearizes the equation using the Fr\'echet derivative of the forward operator. The application of special quadrature rules results in an ill-conditioned linear system, which we solve using Tikhonov regularization. The numerical results show that the proposed method produces accurate and stable reconstructions.
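
As a minimal stand-in for that final regularized solve (using a classic ill-conditioned test matrix, not the paper's discretized operator), the sketch below compares a naive solve with the Tikhonov solution $x = (A^\top A + \lambda I)^{-1} A^\top b$; the noise level and $\lambda$ are illustrative.

```python
import numpy as np

# Tikhonov-regularized solve of an ill-conditioned system: x minimizes
# ||A x - b||^2 + lam * ||x||^2. The Hilbert matrix is a classic
# ill-conditioned test case; noise level and lam are illustrative.
n = 12
A = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1.0)  # Hilbert
x_true = np.ones(n)
b = A @ x_true + 1e-8 * np.random.default_rng(0).standard_normal(n)

lam = 1e-10
x_tik = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)
x_naive = np.linalg.solve(A, b)
print("naive error:   ", np.linalg.norm(x_naive - x_true))
print("Tikhonov error:", np.linalg.norm(x_tik - x_true))
```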

The performance of a program on modern hardware is characterized by \emph{locality of reference}: it is faster to access data that is close in address space to recently accessed data than data in a random location. This is due to many architectural features, including caches, prefetching, virtual address translation and the physical properties of a hard disk drive; attempting to model all the components that constitute the performance of a modern machine is impossible, especially for general algorithm design purposes. What if one could prove an algorithm asymptotically optimal on all systems that reward locality of reference, no matter how it manifests itself, within reasonable limits? We show that this is possible and that, excluding some pathological cases, cache-oblivious algorithms that are asymptotically optimal in the ideal-cache model are asymptotically optimal in any reasonable setting that rewards locality of reference. This is surprising, as the cache-oblivious framework envisions a particular architectural model, involving blocked memory transfer into a multi-level hierarchy of caches of varying sizes, and was not designed to directly model performance correlated with locality of reference.
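
For readers unfamiliar with the class of algorithms the result covers, here is a classic cache-oblivious kernel, recursive matrix transposition: it never references a cache or block size, yet its divide-and-conquer recursion achieves asymptotically optimal transfers in the ideal-cache model. This is a standard textbook example, not one taken from the paper.

```python
import numpy as np

# Cache-oblivious matrix transpose: recursively split the longer dimension
# until the subproblem is tiny. No block size appears anywhere, yet the
# recursion eventually fits any cache, giving optimal transfers in the
# ideal-cache model.
def co_transpose(A, B, r0, r1, c0, c1):
    # writes A[r0:r1, c0:c1] transposed into B[c0:c1, r0:r1]
    if (r1 - r0) * (c1 - c0) <= 16:            # small base case
        B[c0:c1, r0:r1] = A[r0:r1, c0:c1].T
        return
    if r1 - r0 >= c1 - c0:                     # split the longer dimension
        m = (r0 + r1) // 2
        co_transpose(A, B, r0, m, c0, c1)
        co_transpose(A, B, m, r1, c0, c1)
    else:
        m = (c0 + c1) // 2
        co_transpose(A, B, r0, r1, c0, m)
        co_transpose(A, B, r0, r1, m, c1)

A = np.arange(12 * 8).reshape(12, 8).astype(float)
B = np.empty((8, 12))
co_transpose(A, B, 0, 12, 0, 8)
assert np.array_equal(B, A.T)
```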

When AI agents don't align their actions with human values, they may cause serious harm. One way to solve the value alignment problem is to include a human operator who monitors all of the agent's actions. Although this solution guarantees maximal safety, it is very inefficient, since it requires the human operator to dedicate all of their attention to the agent. In this paper, we propose a much more efficient solution that allows an operator to be engaged in other activities without neglecting the monitoring task. In our approach, the AI agent requests permission from the operator only for critical actions, that is, potentially harmful actions. We introduce the concept of critical actions with respect to AI safety and discuss how to build a model that measures action criticality. We also discuss how the operator's feedback could be used to make the agent smarter.
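
A hypothetical sketch of this permission protocol, where the harm model, the threshold and the operator interface are assumptions for illustration rather than components specified in the paper:

```python
# Hypothetical sketch of the permission protocol: the agent queries the
# operator only when an action's estimated criticality exceeds a threshold.
# The harm model, threshold and operator interface are illustrative.
def act(state, action, harm_model, ask_operator, threshold=0.8):
    criticality = harm_model(state, action)    # estimated potential harm
    if criticality > threshold:
        if not ask_operator(state, action):    # human may veto the action
            return None                        # critical action blocked
    return action                              # safe (or approved) action

# toy usage with a stub harm model and an operator who always vetoes
harm = lambda s, a: 0.9 if a == "delete_all" else 0.1
veto = lambda s, a: False
print(act("s0", "delete_all", harm, veto))     # -> None (blocked)
print(act("s0", "noop", harm, veto))           # -> 'noop' (allowed)
```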

Recent years have witnessed significant progress in deep Reinforcement Learning (RL). Empowered with large-scale neural networks, carefully designed architectures, novel training algorithms and massively parallel computing devices, researchers are able to attack many challenging RL problems. However, in machine learning, more training power comes with a potential risk of more overfitting. As deep RL techniques are being applied to critical problems such as healthcare and finance, it is important to understand the generalization behavior of the trained agents. In this paper, we conduct a systematic study of standard RL agents and find that they could overfit in various ways. Moreover, overfitting can happen "robustly": commonly used techniques in RL that add stochasticity do not necessarily prevent or detect overfitting. In particular, the same agents and learning algorithms can have drastically different test performance, even when all of them achieve optimal rewards during training. These observations call for more principled and careful evaluation protocols in RL. We conclude with a general discussion of overfitting in RL and a study of generalization behavior from the perspective of inductive bias.
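
One concrete form such an evaluation protocol could take is a train/test split over environment instances. The sketch below is a hypothetical illustration: `make_env`, `train` and `evaluate` are placeholder names, not an API from the paper.

```python
# Hypothetical train/test protocol over environment instances: train on one
# set of seeds, evaluate on held-out seeds. `make_env`, `train` and
# `evaluate` are placeholder names, not an API from the paper.
TRAIN_SEEDS = list(range(0, 10))
TEST_SEEDS = list(range(10, 20))

def mean_return(agent, make_env, evaluate, seeds):
    return sum(evaluate(agent, make_env(seed=s)) for s in seeds) / len(seeds)

def generalization_gap(make_env, train, evaluate):
    agent = train([make_env(seed=s) for s in TRAIN_SEEDS])
    gap = (mean_return(agent, make_env, evaluate, TRAIN_SEEDS)
           - mean_return(agent, make_env, evaluate, TEST_SEEDS))
    return gap   # a large positive gap indicates overfitting
```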

We consider the task of learning the parameters of a {\em single} component of a mixture model, for the case when we are given {\em side information} about that component; we call this the "search problem" in mixture models. We would like to solve this with computational and sample complexity lower than that of solving the overall original problem, where one learns the parameters of all components. Our main contributions are the development of a simple but general model for the notion of side information, and a corresponding simple matrix-based algorithm for solving the search problem in this general setting. We then specialize this model and algorithm to four common scenarios: Gaussian mixture models, LDA topic models, subspace clustering, and mixed linear regression. For each of these we show that if (and only if) the side information is informative, we obtain parameter estimates with greater accuracy and improved computational complexity than existing moment-based mixture model algorithms (e.g. tensor methods). We also illustrate several natural ways one can obtain such side information for specific problem instances. Our experiments on real data sets (NY Times, Yelp, BSDS500) further demonstrate the practicality of our algorithms, showing significant improvements in runtime and accuracy.
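
As a toy illustration of the setting (not the paper's matrix-based algorithm), suppose the side information consists of a few samples known to come from the target component of a two-component Gaussian mixture; a simple reweighting then approximately recovers that component's mean without fitting the full mixture:

```python
import numpy as np

# Toy illustration: side information given as a few samples known to come
# from the target component of a two-component Gaussian mixture; weighting
# the data around the side-information anchor approximately recovers that
# component's mean without fitting the whole mixture.
rng = np.random.default_rng(0)
X = np.concatenate([rng.normal(-3, 1, 500), rng.normal(3, 1, 500)])
side = rng.normal(3, 1, 5)               # labeled draws from the target

anchor = side.mean()
w = np.exp(-0.5 * (X - anchor) ** 2)     # weights favor the target component
mu_hat = np.sum(w * X) / np.sum(w)
print(f"estimated target mean: {mu_hat:.2f} (true mean 3.0)")
```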
