
The scheduled launch of the LISA mission in the next decade has called attention to the gravitational self-force problem. Despite an extensive body of theoretical work, long-time numerical computations of gravitational waves from extreme-mass-ratio inspirals (EMRIs) remain challenging. This work proposes a class of numerical evolution schemes suited to this problem based on Hermite integration. Their most important features are time-reversal symmetry and unconditional stability, which enable these methods to preserve symplectic structure, energy, momentum, and other Noether charges over long time periods. We apply Noether's theorem to the master fields of black hole perturbation theory on a hyperboloidal slice of Schwarzschild spacetime to show that there exist constants of evolution that numerical simulations must preserve. We demonstrate that time-symmetric integration schemes based on a 2-point Taylor expansion (such as Hermite integration) numerically conserve these quantities, unlike schemes based on a 1-point Taylor expansion (such as Runge-Kutta). This makes time-symmetric schemes ideal for long-time EMRI simulations.
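To see why time symmetry matters for conservation, here is a minimal sketch (not the paper's Hermite scheme) comparing the implicit midpoint rule, the simplest time-symmetric 2-point method, against classical RK4 on a harmonic oscillator, whose quadratic energy plays the role of a Noether charge:

```python
import numpy as np

# Harmonic oscillator y' = A y with y = (q, p), a stand-in for a
# conservative wave system; the conserved energy is E = (q^2 + p^2) / 2.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])

def energy(y):
    return 0.5 * (y[0] ** 2 + y[1] ** 2)

def midpoint_step(y, h):
    # Implicit midpoint: y_{n+1} = y_n + h A (y_n + y_{n+1}) / 2.
    # Time-symmetric: exchanging (y_n, y_{n+1}) and h -> -h leaves the
    # relation unchanged; for this linear system it conserves E exactly.
    I = np.eye(2)
    return np.linalg.solve(I - 0.5 * h * A, (I + 0.5 * h * A) @ y)

def rk4_step(y, h):
    # Classical RK4 (a 1-point Taylor-type scheme); not time-symmetric.
    k1 = A @ y
    k2 = A @ (y + 0.5 * h * k1)
    k3 = A @ (y + 0.5 * h * k2)
    k4 = A @ (y + h * k3)
    return y + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

h, n_steps = 0.1, 100_000
y_mid = np.array([1.0, 0.0])
y_rk4 = np.array([1.0, 0.0])
for _ in range(n_steps):
    y_mid = midpoint_step(y_mid, h)
    y_rk4 = rk4_step(y_rk4, h)

print("midpoint energy drift:", energy(y_mid) - 0.5)  # round-off only
print("RK4 energy drift:     ", energy(y_rk4) - 0.5)  # slow secular decay
```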

Related content

We design, analyze, and implement a new conservative discontinuous Galerkin (DG) method for the simulation of solitary wave solutions to the generalized Korteweg-de Vries (KdV) equation. The key feature of our method is the conservation, at the numerical level, of the mass, energy, and Hamiltonian that are conserved by exact solutions of all KdV equations. To our knowledge, this is the first DG method that conserves all three of these quantities, a property critical for the accurate long-time evolution of solitary waves. To achieve the desired conservation properties, our novel idea is to introduce two stabilization parameters in the numerical fluxes as new unknowns, which then allows us to enforce the conservation of energy and Hamiltonian in the formulation of the numerical scheme. We prove the conservation properties of the scheme, which are corroborated by numerical tests. This idea of achieving conservation properties by implicitly defining penalization parameters that are traditionally specified a priori can serve as a framework for designing physics-preserving numerical methods for other types of problems.
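For reference, with the classical normalization u_t + 6 u u_x + u_xxx = 0 the three conserved quantities are the mass ∫u dx, the L2 energy ∫u² dx, and the Hamiltonian ∫(u_x²/2 − u³) dx. The sketch below, a generic Fourier pseudospectral evolution of a soliton and not the paper's DG method, monitors their drift, which is exactly what a conservative scheme must keep at round-off level:

```python
import numpy as np

# Fourier pseudospectral evolution of a KdV soliton on a periodic domain,
# used only to monitor the three invariants: mass, L2 energy, Hamiltonian.
N, L = 256, 40.0
x = L * np.arange(N) / N - L / 2
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)   # angular wavenumbers

c = 4.0                                       # soliton speed
u = 0.5 * c / np.cosh(0.5 * np.sqrt(c) * x) ** 2

def invariants(u):
    ux = np.real(np.fft.ifft(1j * k * np.fft.fft(u)))
    dx = L / N
    return (np.sum(u) * dx,                   # mass       I1 = ∫ u
            np.sum(u**2) * dx,                # energy     I2 = ∫ u^2
            np.sum(0.5 * ux**2 - u**3) * dx)  # Hamiltonian ∫ (u_x^2/2 - u^3)

def rhs(v, t):
    # Integrating-factor form: v(t) = exp(-i k^3 t) * fft(u(t)).
    u = np.real(np.fft.ifft(np.exp(1j * k**3 * t) * v))
    return -3j * k * np.exp(-1j * k**3 * t) * np.fft.fft(u**2)

dt, nsteps = 1e-4, 10_000
v, t = np.fft.fft(u), 0.0
i0 = invariants(u)
for _ in range(nsteps):                       # classical RK4 in v
    r1 = rhs(v, t)
    r2 = rhs(v + 0.5 * dt * r1, t + 0.5 * dt)
    r3 = rhs(v + 0.5 * dt * r2, t + 0.5 * dt)
    r4 = rhs(v + dt * r3, t + dt)
    v, t = v + dt / 6 * (r1 + 2 * r2 + 2 * r3 + r4), t + dt

u = np.real(np.fft.ifft(np.exp(1j * k**3 * t) * v))
print([b - a for a, b in zip(i0, invariants(u))])  # drift of each invariant
```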

Projection-based model order reduction allows for the parsimonious representation of full order models (FOMs), typically obtained by discretizing certain partial differential equations (PDEs) with conventional techniques, in which case the discretization may contain a very large number of degrees of freedom. Thanks to this more compact representation, projection-based reduced order models (ROMs) can achieve considerable computational speedups, which are especially useful in real-time or multi-query analyses. One known deficiency of projection-based ROMs is that they can suffer from a lack of robustness, stability, and accuracy, especially in the predictive regime, which ultimately limits their useful application. Another research gap that has prevented the widespread adoption of ROMs within the modeling and simulation community is the lack of theoretical and algorithmic foundations necessary for the "plug-and-play" integration of these models into existing multi-scale and multi-physics frameworks. This paper describes a new methodology that has the potential to address both of the aforementioned deficiencies by coupling projection-based ROMs with each other as well as with conventional FOMs by means of the Schwarz alternating method. Leveraging recent work that adapted the Schwarz alternating method to enable consistent and concurrent multi-scale coupling of finite element FOMs in solid mechanics, we present a new extension of the Schwarz formulation that enables ROM-FOM and ROM-ROM coupling in nonlinear solid mechanics. To maintain efficiency, we employ hyper-reduction via the Energy-Conserving Sampling and Weighting approach. We evaluate the proposed coupling approach in both the reproductive and the predictive regime on a canonical test case involving the dynamic propagation of a traveling wave in a nonlinear hyperelastic material.
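The alternating Schwarz idea underlying the coupling can be seen in a toy setting. The sketch below is a minimal illustration only, not the paper's ROM-FOM formulation and without hyper-reduction: it solves a 1D Poisson problem on two overlapping subdomains, each subdomain solve using the other's latest trace as a Dirichlet boundary condition.

```python
import numpy as np

def dirichlet_solve(n, h, f, ua, ub):
    # Second-order finite-difference solve of u'' = f on one subdomain,
    # with Dirichlet data ua, ub at its two endpoints.
    A = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1)) / h**2
    b = f.copy()
    b[0] -= ua / h**2
    b[-1] -= ub / h**2
    return np.linalg.solve(A, b)

# Global grid on [0, 1]; overlapping subdomains [0, 0.6] and [0.4, 1].
N = 101
x = np.linspace(0.0, 1.0, N)
h = x[1] - x[0]
f = -np.pi**2 * np.sin(np.pi * x)   # manufactured so u_exact = sin(pi x)
u = np.zeros(N)

for _ in range(50):                  # alternating Schwarz sweeps
    # Left solve on interior indices 1..59 uses u at x = 0.6 as BC.
    u[1:60] = dirichlet_solve(59, h, f[1:60], 0.0, u[60])
    # Right solve on interior indices 41..99 uses updated u at x = 0.4.
    u[41:100] = dirichlet_solve(59, h, f[41:100], u[40], 0.0)

print("max error:", np.max(np.abs(u - np.sin(np.pi * x))))
```

With this much overlap the sweeps converge geometrically, and the remaining error is just the finite-difference discretization error.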

Learning high-dimensional distributions is often done with explicit likelihood modeling or implicit modeling via minimizing integral probability metrics (IPMs). In this paper, we expand this learning paradigm to stochastic orders, namely the convex or Choquet order between probability measures. Towards this end, exploiting the relation between convex orders and optimal transport, we introduce the Choquet-Toland distance between probability measures, which can be used as a drop-in replacement for IPMs. We also introduce the Variational Dominance Criterion (VDC) to learn probability measures with dominance constraints that encode the desired stochastic order between the learned measure and a known baseline. We analyze both quantities, show that they suffer from the curse of dimensionality, and propose surrogates via input convex maxout networks (ICMNs), which enjoy parametric rates. We provide a min-max framework for learning with stochastic orders and validate it experimentally on synthetic and high-dimensional image generation, with promising results. Finally, the ICMN class of convex functions and its derived Rademacher complexity are of independent interest beyond their application in convex orders.
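The convexity mechanism behind ICMNs can be seen in its simplest form: a maxout unit is a pointwise maximum of affine functions and is therefore convex in its input. The snippet below is only this one-layer illustration; the paper's ICMN architecture adds depth and weight constraints not shown here.

```python
import numpy as np

rng = np.random.default_rng(0)

# A maxout unit f(x) = max_k (w_k . x + b_k) is convex in x, since it is
# a pointwise maximum of affine functions: the basic building block of
# input-convex maxout architectures.
K, d = 8, 2
W = rng.normal(size=(K, d))
b = rng.normal(size=K)

def f(x):
    return np.max(W @ x + b)

# Numerical check of the convexity inequality
# f(t x + (1-t) y) <= t f(x) + (1-t) f(y) on random points.
for _ in range(1000):
    x, y = rng.normal(size=d), rng.normal(size=d)
    t = rng.uniform()
    assert f(t * x + (1 - t) * y) <= t * f(x) + (1 - t) * f(y) + 1e-12
print("convexity check passed")
```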

The diffusive-viscous wave equation (DVWE) is widely used in seismic exploration since it can explain frequency-dependent seismic reflections in a reservoir with hydrocarbons. Most existing numerical approximations of the DVWE are based on domain truncation with ad hoc boundary conditions, which generates artificial reflections as well as truncation errors. To avoid these issues, we consider the DVWE directly in unbounded domains. We first show the existence, uniqueness, and regularity of the solution of the DVWE. We then develop a Hermite spectral Galerkin scheme and derive the corresponding error estimate, showing that the Hermite spectral Galerkin approximation delivers a spectral rate of convergence provided the solution is sufficiently smooth. Several numerical experiments with constant and discontinuous coefficients are provided to verify the theoretical result and to demonstrate the effectiveness of the proposed method. In particular, we verify the error estimate for both smooth and non-smooth source terms and initial conditions. In view of the error estimate and the regularity result, we show the sharpness of the convergence rate in terms of the regularity of the source term. We also show that no artificial reflections occur with the present method.
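To make the "spectral rate of convergence" concrete, the sketch below approximates a smooth, Gaussian-decaying function on the whole real line in the orthonormal Hermite-function basis. This illustrates only the basis, not the paper's Galerkin scheme for the DVWE; the maximum error decays faster than any power of the truncation size N.

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss, hermval
from math import factorial, pi, sqrt

def psi(n, x):
    # Orthonormal Hermite function psi_n(x) = H_n(x) exp(-x^2/2) / norm,
    # the natural spectral basis on the whole real line.
    coeffs = np.zeros(n + 1)
    coeffs[n] = 1.0
    norm = sqrt(2.0**n * factorial(n) * sqrt(pi))
    return hermval(x, coeffs) * np.exp(-x**2 / 2) / norm

f = lambda x: np.sin(2 * x) * np.exp(-x**2 / 2)   # smooth, Gaussian decay

# c_n = ∫ f psi_n dx, computed with Gauss-Hermite quadrature:
# ∫ g(x) exp(-x^2) dx ≈ Σ w_i g(x_i), so the integrand gets exp(x^2).
xq, wq = hermgauss(80)
coeff = lambda n: np.sum(wq * np.exp(xq**2) * f(xq) * psi(n, xq))

xs = np.linspace(-6, 6, 400)
for N in (4, 8, 16, 32):
    fN = sum(coeff(n) * psi(n, xs) for n in range(N))
    print(N, np.max(np.abs(fN - f(xs))))   # error decays spectrally in N
```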

We propose Characteristic Neural Ordinary Differential Equations (C-NODEs), a framework for extending Neural Ordinary Differential Equations (NODEs) beyond ODEs. While NODEs model the evolution of latent variables as the solution to an ODE, C-NODEs model the evolution of latent variables as the solution of a family of first-order quasi-linear partial differential equations (PDEs) along curves on which the PDEs reduce to ODEs, referred to as characteristic curves. This in turn allows the application of standard frameworks for solving ODEs, namely the adjoint method. Learning optimal characteristic curves for given tasks improves performance and computational efficiency compared with state-of-the-art NODE models. We prove that the C-NODE framework extends the classical NODE on classification tasks by exhibiting functions representable by C-NODEs but not expressible by NODEs. Additionally, we present C-NODE-based continuous normalizing flows, which describe the density evolution of latent variables along multiple dimensions. Empirical results demonstrate the improvements provided by the proposed method for classification and density estimation on the CIFAR-10, SVHN, and MNIST datasets under a computational budget similar to that of existing NODE methods. The results also provide empirical evidence that the learned curves improve the efficiency of the system through a lower number of parameters and function evaluations compared with baselines.
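The reduction C-NODEs exploit can be shown on the simplest quasi-linear example: for u_t + c(x) u_x = 0, u is constant along characteristic curves solving dx/dt = c(x), so evaluating the PDE solution amounts to integrating an ODE backward in time. Here is a minimal sketch with c(x) = x; this is illustrative only, since C-NODEs learn the characteristics rather than fixing them.

```python
import numpy as np

# Method of characteristics for u_t + c(x) u_x = 0 with c(x) = x:
# u is constant along curves solving dx/dt = c(x), so u(x, t) = u0(x0)
# where x0 is obtained by tracing the characteristic back to t = 0.
u0 = lambda x: np.exp(-x**2)
c = lambda x: x

def trace_back(x, t, n=100):
    # RK4 integration of dx/ds = -c(x) from s = 0 to s = t (backward).
    h = t / n
    for _ in range(n):
        k1 = -c(x)
        k2 = -c(x + 0.5 * h * k1)
        k3 = -c(x + 0.5 * h * k2)
        k4 = -c(x + h * k3)
        x = x + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return x

t = 1.0
x = np.linspace(-3, 3, 7)
u_num = u0(trace_back(x, t))
u_exact = u0(x * np.exp(-t))   # exact: characteristics are x0 * e^s
print(np.max(np.abs(u_num - u_exact)))
```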

In this paper, we analyze two classes of spectral volume (SV) methods for one-dimensional hyperbolic equations with degenerate variable coefficients. The two classes of SV methods are constructed by letting a piecewise $k$-th order ($k\ge 1$ is an arbitrary integer) polynomial function satisfy the local conservation law in each {\it control volume} obtained by dividing the interval element of the underlying mesh with $k$ Gauss-Legendre points (LSV) or Radau points (RSV). The $L^2$-norm stability and optimal order convergence properties of both methods are rigorously proved for general non-uniform meshes. The superconvergence behaviors of the two SV schemes have also been investigated: it is proved that, under the $L^2$ norm, the SV flux function approximates the exact flux with $(k+2)$-th order and the SV solution approximates the exact solution with $(k+\frac32)$-th order; superconvergence behaviors at certain special points and for element averages have also been discovered and proved. Our theoretical findings are verified by several numerical experiments.
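The LSV construction can be illustrated in a single reference element: the $k$ Gauss-Legendre points partition it into $k+1$ control volumes, and a degree-$k$ polynomial is recovered from its control-volume averages. The sketch below shows only this reconstruction step (the RSV variant would use Radau points instead):

```python
import numpy as np

# Spectral-volume reconstruction on the reference element [-1, 1]:
# k interior Gauss-Legendre points split it into k+1 control volumes (CVs),
# and a degree-k polynomial is recovered from its k+1 CV averages.
k = 3
gl, _ = np.polynomial.legendre.leggauss(k)            # k Gauss-Legendre pts
bnds = np.concatenate(([-1.0], np.sort(gl), [1.0]))   # CV boundaries
a, b = bnds[:-1], bnds[1:]

# M[i, m] = average of x^m over control volume i.
M = np.array([(b**(m + 1) - a**(m + 1)) / ((m + 1) * (b - a))
              for m in range(k + 1)]).T

# Reconstruct sin(x) from its exact CV averages (∫ sin = cos a - cos b).
avgs = np.array([(np.cos(ai) - np.cos(bi)) / (bi - ai)
                 for ai, bi in zip(a, b)])
coeffs = np.linalg.solve(M, avgs)                     # monomial coefficients

xs = np.linspace(-1, 1, 5)
recon = sum(cm * xs**m for m, cm in enumerate(coeffs))
print(np.max(np.abs(recon - np.sin(xs))))             # small degree-k error
```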

The gradient flow (GF) is the ODE whose explicit Euler discretization is the gradient descent method. In this work, we investigate a family of methods derived from \emph{approximate implicit discretizations} of GF, drawing the connection between larger stability regions and less sensitive hyperparameter tuning. We focus on the implicit $\tau$-step backwards differentiation formulas (BDFs), approximated in an inner loop with a few iterations of vanilla gradient descent, and give their convergence rates when the objective function is convex, strongly convex, or nonconvex. Numerical experiments show the wide range of effects of these different methods on extremely poorly conditioned problems, especially those arising in the training of deep neural networks.
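The simplest member of this family, the one-step BDF (backward Euler), already shows the mechanism: the implicit step is the minimizer of a proximal objective, which an inner loop approximates with a few iterations of vanilla gradient descent. Here is a minimal sketch on a poorly conditioned quadratic; step sizes and iteration counts are arbitrary illustrative choices, not the paper's.

```python
import numpy as np

def approx_implicit_euler_step(grad_f, x_k, h, inner_iters=5, inner_lr=0.01):
    # Backward Euler x_{k+1} = x_k - h * grad_f(x_{k+1}) is the minimizer
    # of the proximal objective g(x) = f(x) + ||x - x_k||^2 / (2h);
    # approximate it with a few gradient descent iterations on g. The
    # inner step size must be stable for g's Hessian, here A + I/h.
    x = x_k.copy()
    for _ in range(inner_iters):
        x = x - inner_lr * (grad_f(x) + (x - x_k) / h)
    return x

# Poorly conditioned quadratic f(x) = x^T A x / 2. Explicit Euler (plain
# gradient descent) requires h < 2/100 here; the approximately solved
# implicit step tolerates a far larger h: the enlarged stability region.
A = np.diag([1.0, 100.0])
grad_f = lambda x: A @ x

x = np.array([1.0, 1.0])
h = 0.5                      # gradient descent with this step would diverge
for _ in range(200):
    x = approx_implicit_euler_step(grad_f, x, h)
print("final iterate:", x)   # decays toward the minimizer at the origin
```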

Improving a model's generalizability against domain shifts is crucial, especially for safety-critical applications such as autonomous driving. Real-world domain styles can vary substantially due to environment changes and sensor noise, but deep models only know the training domain style. Such a domain style gap impedes model generalization on diverse real-world domains. Our proposed Normalization Perturbation (NP) can effectively overcome this domain style overfitting problem. We observe that this problem is mainly caused by the biased distribution of low-level features learned in shallow CNN layers. Thus, we propose to perturb the channel statistics of source domain features to synthesize various latent styles, so that the trained deep model can perceive diverse potential domains and generalize well even without observing target domain data during training. We further explore style-sensitive channels for effective style synthesis. Normalization Perturbation relies on only a single source domain and is surprisingly effective and extremely easy to implement. Extensive experiments verify the effectiveness of our method for generalizing models under real-world domain shifts.
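A generic version of the channel-statistics perturbation is easy to state: normalize each channel of a feature map, then restore it with randomly rescaled mean and standard deviation. The sketch below shows only this generic idea; the paper's exact noise model, insertion points, and style-sensitive channel selection may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

def normalization_perturbation(feat, sigma=0.5):
    # feat: (N, C, H, W) feature maps from a shallow CNN layer.
    # Perturb per-channel statistics to synthesize new latent styles:
    # rescale each channel's mean and standard deviation by random
    # factors drawn around 1, leaving the normalized content intact.
    mu = feat.mean(axis=(2, 3), keepdims=True)           # (N, C, 1, 1)
    std = feat.std(axis=(2, 3), keepdims=True) + 1e-6
    normalized = (feat - mu) / std                       # style-free content
    alpha = 1.0 + sigma * rng.standard_normal(mu.shape)  # perturbed scale
    beta = 1.0 + sigma * rng.standard_normal(mu.shape)   # perturbed shift
    return normalized * (std * alpha) + mu * beta        # restyled features

feat = rng.standard_normal((4, 8, 16, 16))
aug = normalization_perturbation(feat)
print(aug.shape)   # same shape; channel statistics randomly perturbed
```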

This PhD thesis contains several contributions to the field of statistical causal modeling. Statistical causal models are statistical models embedded with causal assumptions that allow for inference and reasoning about the behavior of stochastic systems affected by external manipulations (interventions). This thesis contributes to the research areas concerning the estimation of causal effects, causal structure learning, and distributionally robust (out-of-distribution generalizing) prediction methods. We present novel and consistent linear and non-linear causal effect estimators for instrumental variable settings that employ data-dependent mean squared prediction error regularization. In certain settings, our proposed estimators show mean squared error improvements over both canonical and state-of-the-art estimators. We show that recent research on distributionally robust prediction methods has connections to well-studied estimators from econometrics. This connection leads us to prove that general K-class estimators possess distributional robustness properties. Furthermore, we propose a general framework for distributional robustness with respect to intervention-induced distributions. In this framework, we derive sufficient conditions for the identifiability of distributionally robust prediction methods and present impossibility results that show the necessity of several of these conditions. We present a new structure learning method applicable to additive noise models with directed trees as causal graphs. We prove consistency in a vanishing identifiability setup and provide a method for testing substructure hypotheses with asymptotic family-wise error control that remains valid post-selection. Finally, we present heuristic ideas for learning summary graphs of nonlinear time-series models.
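For concreteness, the K-class family referenced above interpolates between OLS (kappa = 0) and two-stage least squares (kappa = 1) via beta(kappa) = (X'(I − kappa M_Z)X)^{-1} X'(I − kappa M_Z)y, with M_Z the residual maker of the instruments Z. Below is a minimal sketch on synthetic confounded data; the thesis's regularized estimators are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def k_class(X, Z, y, kappa):
    # beta(kappa) = (X'(I - kappa*M_Z)X)^{-1} X'(I - kappa*M_Z)y,
    # with M_Z = I - Z(Z'Z)^{-1}Z' the residual maker of the instruments.
    n = len(y)
    M_Z = np.eye(n) - Z @ np.linalg.solve(Z.T @ Z, Z.T)
    W = np.eye(n) - kappa * M_Z
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

# Toy confounded data: OLS (kappa = 0) is biased, TSLS (kappa = 1) is not.
n = 500
z = rng.normal(size=(n, 1))                  # instrument
hid = rng.normal(size=(n, 1))                # hidden confounder
x = z + hid + 0.5 * rng.normal(size=(n, 1))  # endogenous regressor
y = (2.0 * x + hid + 0.5 * rng.normal(size=(n, 1))).ravel()  # true beta = 2

for kappa in (0.0, 0.5, 1.0):
    print(kappa, k_class(x, z, y, kappa))
```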

Since deep neural networks were developed, they have made huge contributions to everyday life. Machine learning can now provide more rational advice than humans in many aspects of daily life. However, despite this achievement, the design and training of neural networks remain challenging and unpredictable procedures. To lower the technical barrier for ordinary users, automated hyper-parameter optimization (HPO) has become a popular topic in both academia and industry. This paper provides a review of the most essential topics in HPO. The first section introduces the key hyper-parameters related to model training and structure and discusses their importance and methods for defining their value ranges. The paper then focuses on major optimization algorithms and their applicability, covering their efficiency and accuracy, especially for deep learning networks. Next, it reviews major services and toolkits for HPO, comparing their support for state-of-the-art search algorithms, compatibility with major deep learning frameworks, and extensibility for user-designed modules. The paper concludes with problems that arise when HPO is applied to deep learning, a comparison between optimization algorithms, and prominent approaches for model evaluation with limited computational resources.
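As a baseline for the algorithms such a review compares, random search is worth seeing in full. The sketch below uses a hypothetical synthetic objective in place of actual model training; the config keys and the loss function are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Minimal random-search loop, the baseline against which surveyed HPO
# algorithms (Bayesian optimization, Hyperband, etc.) are compared. The
# space mixes a log-uniform continuous hyper-parameter with a categorical
# one, the two most common cases discussed in HPO reviews.
def sample_config():
    return {
        "lr": 10 ** rng.uniform(-5, -1),          # log-uniform learning rate
        "batch_size": rng.choice([32, 64, 128]),  # categorical
    }

def validation_loss(cfg):
    # Stand-in for training a model and returning validation loss;
    # here a synthetic function with an optimum near lr = 1e-3.
    return (np.log10(cfg["lr"]) + 3) ** 2 + 0.01 * np.log2(cfg["batch_size"])

best_cfg, best_loss = None, np.inf
for _ in range(50):                               # fixed evaluation budget
    cfg = sample_config()
    loss = validation_loss(cfg)
    if loss < best_loss:
        best_cfg, best_loss = cfg, loss
print(best_cfg, best_loss)
```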
