
Many interesting physical problems described by systems of hyperbolic conservation laws are stiff and thus impose a very small time-step because of the restrictive CFL stability condition. In this case, one can exploit the superior stability properties of implicit time integration, which allows the time-step to be chosen from accuracy requirements alone, thereby avoiding unnecessarily small time-steps. We discuss an efficient framework for devising high-order implicit schemes for stiff hyperbolic systems without tailoring it to a specific problem. The nonlinearity of high-order schemes, due to the space- and time-limiting procedures that control nonphysical oscillations, makes implicit time integration difficult, e.g.~because the discrete system is nonlinear even for linear problems. This nonlinearity of the scheme is circumvented as proposed in (Puppo et al., Comm.~Appl.~Math.~\& Comput., 2023) for scalar conservation laws, where a first-order implicit predictor is computed to freeze the nonlinear coefficients of the essentially non-oscillatory space reconstruction and to assist limiting in time. In addition, we propose a novel conservative flux-centered a-posteriori time-limiting procedure that uses numerical entropy indicators to detect troubled cells. The numerical tests involve classical and artificially devised stiff problems based on the Euler system of gas dynamics.
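As a rough, hypothetical illustration of the predictor idea (not the scheme of this abstract), the Python sketch below advances linear advection with a first-order implicit upwind step, far above the explicit CFL limit, and then evaluates a minmod-type limiter at that predictor; freezing such nonlinear reconstruction coefficients is what would render the subsequent high-order implicit stage linear. All function names and the toy limiter are our own assumptions.

```python
import numpy as np

# Toy sketch: first-order implicit upwind predictor for u_t + a u_x = 0, a > 0,
# on a periodic grid. The predictor u_star is where the nonlinear (ENO/WENO-type)
# reconstruction coefficients would be evaluated and then frozen, so that the
# high-order implicit stage reduces to a linear solve.
def implicit_upwind_predictor(u, a, dt, dx):
    n = u.size
    lam = a * dt / dx
    S = np.roll(np.eye(n), -1, axis=1)      # (S u)_i = u_{i-1} (periodic shift)
    A = (1.0 + lam) * np.eye(n) - lam * S   # backward Euler + upwind differencing
    return np.linalg.solve(A, u)

def frozen_minmod_slopes(u_star, dx):
    # Nonlinear limiter evaluated once at the predictor, then held fixed.
    dl = (u_star - np.roll(u_star, 1)) / dx
    dr = (np.roll(u_star, -1) - u_star) / dx
    return np.where(dl * dr > 0.0,
                    np.sign(dl) * np.minimum(np.abs(dl), np.abs(dr)), 0.0)

# usage: time-step 20x larger than the explicit CFL limit dx/a
n, a, dt = 200, 1.0, 0.1
x = np.linspace(0.0, 1.0, n, endpoint=False)
dx = x[1] - x[0]
u0 = np.exp(-200.0 * (x - 0.5) ** 2)
u_star = implicit_upwind_predictor(u0, a, dt, dx)
slopes = frozen_minmod_slopes(u_star, dx)   # would feed the linear high-order stage
```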

Related content


We investigate various forms of (model-theoretic) stability for hypergraphs and their corresponding strengthenings of the hypergraph regularity lemma with respect to partitions of vertices. On the one hand, we provide a complete classification of the various possibilities in the ternary case. On the other hand, we provide an example of a family of slice-wise stable 3-hypergraphs such that for no partition of the vertices do all triples of parts have density close to 0 or 1. In particular, this addresses some questions and conjectures of Terry and Wolf. We work in the general measure-theoretic context of graded probability spaces, so all our results apply both to measures in ultraproducts of finite graphs, leading to the aforementioned combinatorial applications, and to commuting definable Keisler measures, leading to applications in model theory.

The dynamic mode decomposition (DMD) is a simple and powerful data-driven modeling technique that is capable of revealing coherent spatiotemporal patterns from data. The method's linear algebra-based formulation additionally allows for a variety of optimizations and extensions that make the algorithm practical and viable for real-world data analysis. As a result, DMD has grown to become a leading method for dynamical system analysis across multiple scientific disciplines. PyDMD is a Python package that implements DMD and several of its major variants. In this work, we expand the PyDMD package to include a number of cutting-edge DMD methods and tools specifically designed to handle dynamics that are noisy, multiscale, parameterized, prohibitively high-dimensional, or even strongly nonlinear. We provide a complete overview of the features available in PyDMD as of version 1.0, along with a brief overview of the theory behind the DMD algorithm, information for developers, tips regarding practical DMD usage, and introductory coding examples. All code is available at https://github.com/PyDMD/PyDMD.
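As a minimal usage sketch, one might fit DMD to synthetic oscillatory data as follows. The `DMD(svd_rank=...)` constructor, `fit`, and the `eigs`/`modes`/`reconstructed_data` attributes reflect the package's documented core API, but treat exact signatures as assumptions for your installed version; snapshots are stored as columns.

```python
import numpy as np
from pydmd import DMD

# Synthetic snapshot matrix: two spatial modes oscillating at fixed frequencies.
x = np.linspace(-5, 5, 128)
t = np.linspace(0, 4 * np.pi, 129)
X = (np.outer(1.0 / np.cosh(x), np.cos(2.3 * t))
     + np.outer(np.tanh(x) / np.cosh(x), np.sin(1.0 * t)))  # shape (space, time)

dmd = DMD(svd_rank=4)        # truncate the SVD to rank 4
dmd.fit(X)                   # snapshots are the columns of X

print(dmd.eigs)              # discrete-time eigenvalues (|lambda| ~ 1 => neutral modes)
modes = dmd.modes            # spatial modes, one column per retained eigenvalue
X_rec = dmd.reconstructed_data.real
print("relative reconstruction error:", np.linalg.norm(X - X_rec) / np.linalg.norm(X))
```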

Multi-sequence magnetic resonance imaging (MRI) has found wide applications in both modern clinical studies and deep learning research. However, in clinical practice, it frequently occurs that one or more of the MRI sequences are missing due to different image acquisition protocols or contrast agent contraindications of patients, limiting the utilization of deep learning models trained on multi-sequence data. One promising approach is to leverage generative models to synthesize the missing sequences, which can serve as a surrogate acquisition. State-of-the-art methods tackling this problem are based on convolutional neural networks (CNN) which usually suffer from spectral biases, resulting in poor reconstruction of high-frequency fine details. In this paper, we propose Conditional Neural fields with Shift modulation (CoNeS), a model that takes voxel coordinates as input and learns a representation of the target images for multi-sequence MRI translation. The proposed model uses a multi-layer perceptron (MLP) instead of a CNN as the decoder for pixel-to-pixel mapping. Hence, each target image is represented as a neural field that is conditioned on the source image via shift modulation with a learned latent code. Experiments on BraTS 2018 and an in-house clinical dataset of vestibular schwannoma patients showed that the proposed method outperformed state-of-the-art methods for multi-sequence MRI translation both visually and quantitatively. Moreover, we conducted spectral analysis, showing that CoNeS was able to overcome the spectral bias issue common in conventional CNN models. To further evaluate the usage of synthesized images in clinical downstream tasks, we tested a segmentation network using the synthesized images at inference.
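To make the conditioning mechanism concrete, here is a minimal, hypothetical PyTorch sketch of an MLP decoder whose hidden activations are shift-modulated by a latent code derived from the source image. Layer sizes, the latent pathway, and all names are illustrative assumptions rather than the authors' architecture.

```python
import torch
import torch.nn as nn

class ShiftModulatedMLP(nn.Module):
    """Coordinate MLP whose hidden layers receive additive (shift) modulation
    from a latent code -- a simplified stand-in for a CoNeS-style decoder."""
    def __init__(self, coord_dim=2, latent_dim=64, hidden=128, depth=4, out_dim=1):
        super().__init__()
        self.inp = nn.Linear(coord_dim, hidden)
        self.layers = nn.ModuleList([nn.Linear(hidden, hidden) for _ in range(depth)])
        # One shift vector per hidden layer, predicted from the latent code.
        self.shifts = nn.ModuleList([nn.Linear(latent_dim, hidden) for _ in range(depth)])
        self.out = nn.Linear(hidden, out_dim)

    def forward(self, coords, z):
        # coords: (N, coord_dim) voxel coordinates; z: (N, latent_dim) latent codes
        h = torch.relu(self.inp(coords))
        for layer, shift in zip(self.layers, self.shifts):
            h = torch.relu(layer(h) + shift(z))   # shift modulation
        return self.out(h)

# usage: predict target intensities at 4096 sampled voxel coordinates
net = ShiftModulatedMLP()
coords = torch.rand(4096, 2) * 2 - 1              # normalized to [-1, 1]
z = torch.randn(4096, 64)                         # e.g. from a source-image encoder
pred = net(coords, z)                             # (4096, 1) predicted intensities
```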

We propose a hybrid iterative method based on MIONet for PDEs, which combines a traditional numerical iterative solver with the recent powerful machine learning method of neural operators, and we systematically analyze its theoretical properties, including the convergence condition, the spectral behavior, and the convergence rate, in terms of the errors of the discretization and the model inference. We establish the theoretical results for the frequently used smoothers, i.e. Richardson (damped Jacobi) and Gauss-Seidel. We give an upper bound on the convergence rate of the hybrid method as a function of the model-correction period, which exhibits a minimizer at which the hybrid iteration converges fastest. Several numerical examples, including the hybrid Richardson (Gauss-Seidel) iteration for the 1-d (2-d) Poisson equation, are presented to verify our theoretical results and also demonstrate an excellent acceleration effect. As a meshless acceleration method, it holds great potential for practical applications.
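The following NumPy sketch illustrates the hybrid structure (our own toy version, not the paper's implementation): damped-Jacobi sweeps for a 1-d Poisson problem, with a model-based correction applied every `period` sweeps. Here `model_correct` is a stand-in that mimics an imperfect neural-operator inference (exact solve plus noise); it is not MIONet.

```python
import numpy as np

# Hybrid iteration sketch for -u'' = f on (0,1), u(0)=u(1)=0.
rng = np.random.default_rng(0)
n = 127
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)
f = np.pi**2 * np.sin(np.pi * x)               # exact solution: sin(pi x)
A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2

def model_correct(residual, rel_noise=1e-3):
    e = np.linalg.solve(A, residual)            # placeholder for model inference
    return e + rel_noise * np.linalg.norm(e) * rng.standard_normal(n)

u = np.zeros(n)
omega, dinv, period = 2.0 / 3.0, h**2 / 2.0, 50
for k in range(1, 501):
    u += omega * dinv * (f - A @ u)             # damped Jacobi (Richardson-type) sweep
    if k % period == 0:
        u += model_correct(f - A @ u)           # periodic model correction
print("max error:", np.max(np.abs(u - np.sin(np.pi * x))))
```

Varying `period` trades off smoothing work against inference cost, which is exactly the quantity whose optimal value the convergence-rate bound identifies.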

We provide a new theoretical framework for the variable-step deferred correction (DC) methods based on the well-known BDF2 formula. By using the discrete orthogonal convolution (DOC) kernels, some high-order BDF2-DC methods are proven to be stable on arbitrary time grids according to the recent definition of stability (SINUM, 60: 2253-2272). This significantly relaxes the existing step-ratio restrictions for the BDF2-DC methods (BIT, 62: 1789-1822). The associated sharp error estimates are established by taking the numerical effects of the starting approximations into account, and they show that the BDF2-DC methods have no aftereffect: the lower-order starting scheme for the BDF2 step does not degrade the accuracy of the high-order BDF2-DC methods. Extensive tests on graded and random time meshes are presented to support the new theory.
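For orientation, a standard form of the variable-step BDF2 formula (our notation, with step sizes $\tau_k = t_k - t_{k-1}$ and ratios $r_k = \tau_k/\tau_{k-1}$) is given below; the DC methods then raise the order by solving a formula of the same type for a correction driven by the defect of the previous iterate.

```latex
\[
  D_2 u^k \;=\; \frac{1+2r_k}{\tau_k(1+r_k)}\,u^k
          \;-\; \frac{1+r_k}{\tau_k}\,u^{k-1}
          \;+\; \frac{r_k^2}{\tau_k(1+r_k)}\,u^{k-2}
  \;=\; f(t_k, u^k).
\]
% Classical zero-stability analyses require step-ratio bounds such as
% r_k < 1 + \sqrt{2}; the DOC-kernel framework relaxes such restrictions
% for the BDF2-DC methods on arbitrary time grids.
```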

Liquid droplet dynamics arise widely in biological and engineering applications, and involve complex interfacial instabilities and pattern formation such as droplet merging, splitting, and transport. This paper studies a class of mean field control formulations for these droplet dynamics, used to control and maintain the manipulation of droplets in applications. We first formulate the droplet dynamics as gradient flows of free energies in modified optimal transport metrics with nonlinear mobilities. We then design an optimal control problem for these gradient flows. We lastly apply the primal-dual hybrid gradient algorithm with high-order finite element methods to simulate the proposed mean field control problems. Numerical examples, including droplet formation, bead-up/spreading, transport, and merging/splitting on a two-dimensional spatial domain, demonstrate the effectiveness of the proposed mean field control mechanism.
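In generic form (our notation, not necessarily the paper's), a gradient flow of a free energy $\mathcal{E}(\rho)$ in a modified optimal transport metric with nonlinear mobility $M(\rho)$ reads:

```latex
\[
  \partial_t \rho \;=\; \nabla \cdot \Big( M(\rho)\, \nabla \frac{\delta \mathcal{E}}{\delta \rho} \Big).
\]
% The mean field control problem then augments this flow with a control field
% and minimizes a running control cost subject to the controlled dynamics.
```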

In the realm of statistical exploration, the manipulation of pseudo-random values to discern their impact on data distribution presents a compelling avenue of inquiry. This article investigates the question: is it possible to add pseudo-random values without compelling a shift towards a normal distribution? Employing Python techniques, the study explores the nuances of adding pseudo-random values, aiming to unravel the interplay between randomness and the resulting statistical characteristics. The Materials and Methods chapter details the construction of datasets comprising up to 300 billion pseudo-random values, employing three distinct layers of manipulation. The Results chapter visually and quantitatively explores the generated datasets, emphasizing distribution and standard-deviation metrics. The study concludes with reflections on the implications of pseudo-random value manipulation and suggests avenues for future research. In the layered exploration, the first layer introduces subtle normalization with an increasing number of summands, and the second layer enhances normality, while the third layer disrupts typical distribution patterns, leaning towards randomness despite the summation of pseudo-random values. Standard-deviation patterns across the layers further illuminate the dynamic interplay between pseudo-random operations and statistical characteristics. While not aiming to disrupt academic norms, this work modestly contributes insights into the complexities of data distributions. Future studies are encouraged to delve deeper into the implications of data manipulation on statistical outcomes, extending the understanding of pseudo-random operations in diverse contexts.
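The baseline phenomenon here is the central limit theorem: sums of independent pseudo-random values tend toward normality as the number of summands grows. A minimal Python sketch (our illustration, unrelated to the article's specific layers) makes this visible:

```python
import numpy as np

rng = np.random.default_rng(0)

# Sum k independent Uniform(0, 1) draws; by the CLT the standardized sums
# approach a normal distribution as k grows, regardless of the uniform shape.
for k in (1, 2, 10, 100):
    sums = rng.random((100_000, k)).sum(axis=1)
    z = (sums - sums.mean()) / sums.std()
    # A standard normal has skewness 0 and P(z > 2) ~ 0.0228.
    print(f"k={k:3d}  skew~{(z**3).mean():+.3f}  P(z>2)={np.mean(z > 2):.4f}")
```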

Deep learning solvers for partial differential equations typically have limited accuracy. We propose to overcome this problem by using them as preconditioners. More specifically, we apply discretization-invariant neural operators to learn preconditioners for the flexible conjugate gradient method (FCG). The architecture, paired with a novel loss function and training scheme, allows for learning efficient preconditioners that can be used across different resolutions. On the theoretical side, FCG theory allows us to safely use nonlinear preconditioners that can be applied in $O(N)$ operations without constraining the form of the preconditioner matrix. To justify the components of the learning scheme (the loss function and the way training data is collected), we perform several ablation studies. Numerical results indicate that our approach compares favorably with classical preconditioners and allows preconditioners learned at lower resolutions to be reused on higher-resolution data.
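For context, flexible CG differs from standard preconditioned CG in that the $\beta$ coefficient is computed with a Polak-Ribiere-type formula, which keeps the iteration sound when the preconditioner changes between applications, as a nonlinear learned operator does. A minimal NumPy sketch in our own notation, where `precond` is any callable, possibly nonlinear:

```python
import numpy as np

def flexible_cg(A, b, precond, x0=None, tol=1e-8, maxiter=500):
    """Flexible (Polak-Ribiere) preconditioned CG; `precond` may be nonlinear
    and may vary between iterations, e.g. a learned neural operator."""
    x = np.zeros_like(b) if x0 is None else x0.copy()
    r = b - A @ x
    z = precond(r)
    p = z.copy()
    rz = r @ z
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) <= tol * np.linalg.norm(b):
            return x
        z_new = precond(r_new)
        beta = (z_new @ (r_new - r)) / rz   # flexible beta; plain CG uses z_new @ r_new / rz
        rz = r_new @ z_new
        r, z, p = r_new, z_new, z_new + beta * p
    return x

# usage with a simple (linear) Jacobi preconditioner as a stand-in
n = 100
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = flexible_cg(A, b, precond=lambda r: r / np.diag(A))
print("residual norm:", np.linalg.norm(b - A @ x))
```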

We propose a new loss function for supervised and physics-informed training of neural networks and operators that incorporates an a posteriori error estimate. More specifically, during the training stage, the neural network learns additional physical fields that lead to rigorous error majorants after a computationally cheap postprocessing stage. The theoretical results are based upon the theory of functional a posteriori error estimates, which allows for the systematic construction of such loss functions for a diverse class of practically relevant partial differential equations. On the numerical side, we demonstrate on a series of elliptic problems that, for a variety of architectures and approaches (physics-informed neural networks, physics-informed neural operators, neural operators, and classical architectures in the regression and physics-informed settings), we can reach better or comparable accuracy and, in addition, cheaply recover high-quality upper bounds on the error after training.
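To indicate the flavor of such estimates (in our notation), the classical functional majorant for the Poisson problem $-\Delta u = f$ with homogeneous Dirichlet conditions bounds the error of any conforming approximation $v$ via an auxiliary flux field $y$:

```latex
\[
  \|\nabla(u - v)\|_{L^2(\Omega)}
  \;\le\; \|y - \nabla v\|_{L^2(\Omega)}
  \;+\; C_{\Omega}\, \|\operatorname{div} y + f\|_{L^2(\Omega)},
  \qquad y \in H(\operatorname{div};\Omega),
\]
% where C_Omega is the Friedrichs constant. A network that learns the flux y
% alongside v yields a guaranteed, cheaply evaluable upper bound of this type.
```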

We formulate a uniform tail bound for empirical processes indexed by a class of functions, in terms of the individual deviations of the functions rather than the worst-case deviation over the whole class. The tail bound is established by introducing an initial "deflation" step into the standard generic chaining argument. The resulting bound is the sum of the complexity of the "deflated function class", measured by a generalization of Talagrand's $\gamma$ functional, and the deviation of the function instance, both formulated in terms of the natural seminorm induced by the corresponding Cram\'{e}r functions. We also provide approximations of this seminorm when the function class lies in a given (exponential-type) Orlicz space, which can be used to make the complexity term and the deviation term more explicit.
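For orientation (standard notation, not necessarily the paper's generalization), Talagrand's $\gamma_\alpha$ functional of a metric space $(T, d)$ is defined over admissible sequences of nets $(T_n)$ with $|T_0| = 1$ and $|T_n| \le 2^{2^n}$:

```latex
\[
  \gamma_\alpha(T, d) \;=\; \inf_{(T_n)} \,\sup_{t \in T}\; \sum_{n \ge 0} 2^{n/\alpha}\, d(t, T_n),
\]
% so that, e.g., gamma_2 governs subgaussian processes via generic chaining:
% E sup_t X_t is bounded by gamma_2(T, d) up to a universal constant.
```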
