
Linear wave equations sourced by a Dirac delta distribution $\delta(x)$ and its derivative(s) can serve as a model for many different phenomena. We describe a discontinuous Galerkin (DG) method to numerically solve such equations with source terms proportional to $\partial^n \delta /\partial x^n$. Despite the presence of singular source terms, which imply discontinuous or potentially singular solutions, our DG method achieves global spectral accuracy even at the source's location. Our DG method is developed for the wave equation written in fully first-order form. The first-order reduction is carried out using a distributional auxiliary variable that removes some of the source term's singular behavior. While this is helpful numerically, it gives rise to a distributional constraint. We show that a time-independent spurious solution can develop if the initial constraint violation is proportional to $\delta(x)$. Numerical experiments verify this behavior and our scheme's convergence properties by comparing against exact solutions.
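The weak formulation is what makes such singular sources tractable: testing $\partial^n \delta(x - x_0)/\partial x^n$ against a basis function $\varphi$ gives $(-1)^n \varphi^{(n)}(x_0)$, so the element load vector only requires basis (derivative) evaluations at the source point. A minimal sketch of that load-vector assembly, assuming a Legendre modal basis on the reference element $[-1, 1]$ (the basis and element choice are illustrative, not taken from the paper):

```python
import numpy as np
from numpy.polynomial import legendre as L

def delta_load_vector(x0, p, n=0):
    """Load vector entries for a source d^n/dx^n delta(x - x0),
    tested against the Legendre modal basis P_0..P_p on [-1, 1].
    By definition of the distributional derivative,
    <d^n delta/dx^n, phi> = (-1)^n * phi^(n)(x0)."""
    b = np.zeros(p + 1)
    for i in range(p + 1):
        c = np.zeros(i + 1)
        c[i] = 1.0                               # coefficients of P_i
        dc = L.legder(c, n) if n > 0 else c      # n-th derivative of P_i
        b[i] = (-1) ** n * L.legval(x0, dc)
    return b

# plain delta at x0 = 0: only even-degree Legendre polynomials contribute
b = delta_load_vector(0.0, 4)
```

Because the entries are exact point evaluations rather than quadrature of a singular integrand, no accuracy is lost at the source's location, which is consistent with the spectral accuracy the abstract claims.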

Related content

We adopt the integral definition of the fractional Laplace operator and study, on Lipschitz domains, an optimal control problem that involves a fractional elliptic partial differential equation (PDE) as state equation and a control variable that enters the state equation as a coefficient; pointwise constraints on the control variable are considered as well. We establish the existence of optimal solutions and analyze first order optimality conditions as well as necessary and sufficient second order optimality conditions. Regularity estimates for optimal variables are also analyzed. We devise two strategies of finite element discretization: a semidiscrete scheme where the control variable is not discretized, and a fully discrete scheme where the control variable is discretized with piecewise constant functions. For both schemes, we analyze convergence properties of the discretizations and derive error estimates.
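The pointwise control constraints enter numerically through a projection: each candidate control is clipped to the admissible box, and a projected gradient step stays feasible by construction. A toy sketch of that mechanism on a quadratic objective (a stand-in for the reduced cost functional; the fractional state equation itself is not discretized here):

```python
import numpy as np

def projected_gradient(A, b, lo, hi, step, iters=500):
    """Projected gradient for min 0.5 u^T A u - b^T u subject to
    pointwise bounds lo <= u <= hi. The clip is the discrete analogue
    of the pointwise projection onto the admissible control set; the
    quadratic objective is only an illustrative surrogate."""
    u = np.clip(np.zeros_like(b), lo, hi)
    for _ in range(iters):
        u = np.clip(u - step * (A @ u - b), lo, hi)   # gradient step + projection
    return u

A = np.array([[2.0, 0.0], [0.0, 2.0]])
b = np.array([4.0, -4.0])
u = projected_gradient(A, b, lo=0.0, hi=1.0, step=0.4)
# unconstrained minimizer is (2, -2); the bounds clip it to (1, 0)
```

With piecewise constant controls, as in the fully discrete scheme, this projection acts independently on each cell value, which is why the box constraint discretizes so cleanly.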

We consider the estimation of two-sample integral functionals, of the type that occur naturally, for example, when the object of interest is a divergence between unknown probability densities. Our first main result is that, in wide generality, a weighted nearest neighbour estimator is efficient, in the sense of achieving the local asymptotic minimax lower bound. Moreover, we also prove a corresponding central limit theorem, which facilitates the construction of asymptotically valid confidence intervals for the functional, having asymptotically minimal width. One interesting consequence of our results is the discovery that, for certain functionals, the worst-case performance of our estimator may improve on that of the natural `oracle' estimator, which is given access to the values of the unknown densities at the observations.
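To make the setting concrete, a classical unweighted special case: a 1-nearest-neighbour estimator of the KL divergence between the two samples, in the style of Wang, Kulkarni and Verdú. The paper's efficient estimator adds data-driven weights across several neighbour orders; this plain version is only a sketch of the two-sample nearest-neighbour idea:

```python
import numpy as np

def knn_kl_estimate(x, y):
    """1-nearest-neighbour estimate of KL(P || Q) from samples
    x ~ P (n x d) and y ~ Q (m x d). Uses brute-force distances,
    so it is suitable only for small samples."""
    n, d = x.shape
    m = y.shape[0]
    # rho_i: distance from x_i to its nearest other sample of P
    dxx = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    np.fill_diagonal(dxx, np.inf)
    rho = dxx.min(axis=1)
    # nu_i: distance from x_i to its nearest sample of Q
    dxy = np.linalg.norm(x[:, None, :] - y[None, :, :], axis=-1)
    nu = dxy.min(axis=1)
    return d * np.mean(np.log(nu / rho)) + np.log(m / (n - 1))

rng = np.random.default_rng(0)
same = knn_kl_estimate(rng.normal(size=(500, 1)), rng.normal(size=(500, 1)))
shifted = knn_kl_estimate(rng.normal(size=(500, 1)),
                          rng.normal(3.0, 1.0, size=(500, 1)))
# the shifted pair should show a clearly larger divergence estimate
```

The weighting scheme in the paper is what removes the bias of such plain estimators in higher dimensions and delivers the efficiency claimed in the abstract.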

This work describes experiments on the thermal dynamics of pure H2O excited by hydrodynamic cavitation, which has been reported to facilitate the spin conversion of para- and ortho-isomers at water interfaces. Previous NMR and capillary measurements of excited samples demonstrated changes in proton density of 12-15% and in surface tension of up to 15.7%, which can be attributed to a non-equilibrium para-/ortho- ratio. Besides these changes, we also expect a variation of heat capacity. The experiments use a differential calorimetric approach with two devices: one with an active thermostat for diathermic measurements, the other fully passive for long-term measurements. After excitation, samples are degassed at -0.09 MPa and thermally equalized in a water bath. The experiments demonstrated changes in the heat capacity of excited samples of 4.17%-5.72%, measured in the transient dynamics within 60 min after excitation, decreasing to 2.08% in the steady-state dynamics 90-120 min after excitation. Additionally, we observed thermal fluctuations at the level of 10^-3 C in relative temperature on 20-40 min mesoscale dynamics, and a long-term increase of such fluctuations in experimental samples. The results are reproducible in both devices and are supported by previously published outcomes on four-photon scattering spectra in the range from -1.5 to 1.5 cm^-1 and on electrochemical reactivity in CO2 and H2O2 pathways. Based on these results, we propose the hypothesis that a spin conversion process is ongoing on mesoscopic scales under a weak influx of energy caused by thermal, EM, or geomagnetic factors; this would explain the electrochemical and thermal anomalies observed in long-term measurements.

In the present work, we examine and analyze a variant of the unfitted mesh finite element method that omits a stabilization penalty applied near the boundary, the so-called ghost penalty, which is computationally expensive, especially for fluids. This approach is based on the discontinuous Galerkin method, enriched by arbitrarily shaped boundary-element techniques. In this framework, we examine a stationary Stokes fluid system and prove the inf-sup condition and hp- a priori error estimates, to our knowledge for the first time in the literature, while we investigate the optimal convergence rates numerically. This approach retains the flexibility and strength of unfitted methods whenever geometrical deformations take place, combined with the efficiency of hp-version techniques based on arbitrarily shaped elements on the boundary.

Scientists often must simultaneously localize and discover signals. For instance, in genetic fine-mapping, high correlations between nearby genetic variants make it hard to identify the exact locations of causal variants. So the statistical task is to output as many disjoint regions containing a signal as possible, each as small as possible, while controlling false positives. Similar problems arise in any application where signals cannot be perfectly localized, such as locating stars in astronomical surveys and changepoint detection in sequential data. Common Bayesian approaches to these problems involve computing a posterior distribution over signal locations. However, existing procedures to translate these posteriors into actual credible regions for the signals fail to capture all the information in the posterior, leading to lower power and (sometimes) inflated false discoveries. With this motivation, we introduce Bayesian Linear Programming (BLiP). Given a posterior distribution over signals, BLiP solves an extremely high-dimensional and nonconvex problem to output credible regions that verifiably nearly maximize expected power while controlling false positives. BLiP is very computationally efficient compared to the cost of computing the posterior and can wrap around nearly any Bayesian model and algorithm. Applying BLiP to existing state-of-the-art analyses of UK Biobank data (for genetic fine-mapping) and the Sloan Digital Sky Survey (for astronomical point source detection) increased power by 30-120% in just a few minutes of additional computation. BLiP is implemented in pyblip (Python) and blipr (R).
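The underlying selection problem can be seen on a tiny instance: given candidate regions with posterior inclusion probabilities, choose disjoint regions maximizing the expected number of true discoveries, subject to an FDR-style bound on the expected fraction of false ones. The sketch below brute-forces this integer program for a three-region toy example; BLiP's contribution is solving a scalable relaxation of exactly this kind of problem, so neither the brute force nor the specific constraint form here should be read as the pyblip implementation:

```python
from itertools import product

def select_regions(regions, pips, q):
    """Brute-force the region-selection program: maximize
    sum of posterior inclusion probabilities (pips) over chosen
    regions, subject to (i) disjointness and (ii) an FDR-style bound
    sum(1 - pip_i) <= q * (#chosen). Feasible only for tiny candidate
    sets; a toy sketch of the problem BLiP relaxes and solves at scale."""
    best, best_power = (), 0.0
    for choice in product([0, 1], repeat=len(regions)):
        chosen = [i for i, c in enumerate(choice) if c]
        if not chosen:
            continue
        # (i) selected regions must not overlap
        cover = [loc for i in chosen for loc in regions[i]]
        if len(cover) != len(set(cover)):
            continue
        # (ii) expected false discovery proportion at most q
        if sum(1.0 - pips[i] for i in chosen) > q * len(chosen):
            continue
        power = sum(pips[i] for i in chosen)
        if power > best_power:
            best, best_power = tuple(chosen), power
    return best

regions = [{1, 2}, {2, 3}, {5}]
best = select_regions(regions, pips=[0.9, 0.6, 0.95], q=0.1)
# regions 0 and 2 are disjoint and jointly satisfy the error bound
```

Note how region 1 is excluded twice over: it overlaps region 0, and its inclusion probability is too low to meet the error budget on its own.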

The hybrid high-order method is a modern numerical framework for the approximation of elliptic PDEs. We present here an extension of the hybrid high-order method to meshes possessing curved edges/faces. Such an extension allows us to enforce boundary conditions exactly on curved domains, and to capture curved geometries that appear internally in the domain, e.g., discontinuities in a diffusion coefficient. The method makes use of non-polynomial functions on the curved faces and does not require any mappings between reference elements/faces. Such an approach does not require the faces to be polynomial, and has a strict upper bound on the number of degrees of freedom on a curved face for a given polynomial degree. Moreover, this approach of enriching the space of unknowns on the curved faces with non-polynomial functions should extend naturally to other polytopal methods. We show the method to be stable and consistent on curved meshes and derive optimal error estimates in $L^2$ and energy norms. We present numerical examples of the method on a domain with curved boundary, and for a diffusion problem in which the diffusion tensor is discontinuous along a curved arc.

Stochastic versions of proximal methods have gained much attention in statistics and machine learning. These algorithms tend to admit simple, scalable forms, and enjoy numerical stability via implicit updates. In this work, we propose and analyze a stochastic version of the recently proposed proximal distance algorithm, a class of iterative optimization methods that recover solutions of a desired constrained estimation problem as a penalty parameter $\rho \rightarrow \infty$. By uncovering connections to related stochastic proximal methods and interpreting the penalty parameter as the learning rate, we justify heuristics used in practical manifestations of the proximal distance method, establishing their convergence guarantees for the first time. Moreover, we extend recent theoretical devices to establish finite error bounds and a complete characterization of convergence rate regimes. We validate our analysis via a thorough empirical study, also showing that, unsurprisingly, the proposed method outpaces batch versions on popular learning tasks.
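The basic object is the penalized problem $\min_x f(x) + \tfrac{\rho}{2}\,\mathrm{dist}(x, C)^2$, with one data point sampled per step. The sketch below uses an explicit stochastic gradient of this penalty for least squares with a nonnegativity constraint; the paper analyzes implicit/proximal updates and a growing $\rho$ schedule, so this fixed-$\rho$ explicit variant is only illustrative:

```python
import numpy as np

def stochastic_proximal_distance(A, b, rho, gamma, iters, seed=0):
    """Stochastic sketch of the proximal distance idea for
    min 0.5 ||Ax - b||^2 subject to x >= 0, via the penalty
    f(x) + (rho/2) * dist(x, C)^2, C the nonnegative orthant.
    Each step samples one row of A; dist(x, C)^2 has gradient
    2 * (x - proj_C(x)), here x - max(x, 0) componentwise."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    for _ in range(iters):
        i = rng.integers(n)
        grad_fi = (A[i] @ x - b[i]) * A[i] * n        # unbiased gradient of f
        dist_grad = rho * (x - np.maximum(x, 0.0))    # gradient of the penalty
        x = x - gamma * (grad_fi + dist_grad)
    return x

A = np.eye(2)
b = np.array([1.0, -1.0])
x = stochastic_proximal_distance(A, b, rho=10.0, gamma=0.05, iters=3000)
# the constrained solution is (1, 0); finite rho leaves a small
# negative bias of order -1/rho in the second coordinate
```

The residual bias in the clamped coordinate is exactly why the exact method sends $\rho \to \infty$, and why interpreting $\rho$ jointly with the learning rate matters for the stochastic analysis.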

We consider the problem of estimating the optimal transport map between two probability distributions, $P$ and $Q$ in $\mathbb R^d$, on the basis of i.i.d. samples. All existing statistical analyses of this problem require the assumption that the transport map is Lipschitz, a strong requirement that, in particular, excludes any examples where the transport map is discontinuous. As a first step towards developing estimation procedures for discontinuous maps, we consider the important special case where the data distribution $Q$ is a discrete measure supported on a finite number of points in $\mathbb R^d$. We study a computationally efficient estimator initially proposed by Pooladian and Niles-Weed (2021), based on entropic optimal transport, and show in the semi-discrete setting that it converges at the minimax-optimal rate $n^{-1/2}$, independent of dimension. Other standard map estimation techniques both lack finite-sample guarantees in this setting and provably suffer from the curse of dimensionality. We confirm these results in numerical experiments, and provide experiments for other settings, not covered by our theory, which indicate that the entropic estimator is a promising methodology for other discontinuous transport map estimation problems.
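The estimator in question runs Sinkhorn iterations between the two samples and then reads off a map through the barycentric projection $T(x_i) = \sum_j \pi_{ij} y_j / \sum_j \pi_{ij}$. A self-contained numpy sketch in that spirit, with uniform weights on both samples and an illustrative regularization strength (not the authors' code or tuning):

```python
import numpy as np

def entropic_map(x, y, eps=0.1, iters=200):
    """Entropic OT map estimate via Sinkhorn + barycentric projection.
    x (n x d) and y (m x d) carry uniform weights; eps is the entropic
    regularization. Returns T(x_i) for each source sample."""
    n, m = len(x), len(y)
    C = np.sum((x[:, None, :] - y[None, :, :]) ** 2, axis=-1)  # squared cost
    K = np.exp(-C / eps)
    a, b = np.ones(n) / n, np.ones(m) / m                      # marginals
    u, v = np.ones(n), np.ones(m)
    for _ in range(iters):                                     # Sinkhorn scaling
        u = a / (K @ v)
        v = b / (K.T @ u)
    pi = u[:, None] * K * v[None, :]                           # entropic plan
    return (pi @ y) / pi.sum(axis=1, keepdims=True)            # barycentric map

# semi-discrete toy: continuous-looking source, two-point target
x = np.linspace(0.0, 1.0, 50)[:, None]
y = np.array([[0.25], [0.75]])
Tx = entropic_map(x, y)
```

In the semi-discrete setting the barycentric projection smooths the discontinuous optimal map across cell boundaries, which is the mechanism behind the dimension-free rate discussed in the abstract.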

This paper aims to simulate viscoplastic flow in a shallow-water regime. We specifically use the Bingham model, in which the material behaves as a solid if the stress is below a certain threshold and otherwise moves as a fluid. The main difficulty of this problem is the coupling of the shallow-water equations with the viscoplastic constitutive laws and the high computational effort needed for its solution. Although there have been many studies of this problem, most of these works use explicit methods with simplified empirical models. In our work, to accommodate non-uniform grids and complicated geometries, we use the discontinuous Galerkin method to solve shallow viscoplastic flows. This method is attractive due to its ease of parallelization, h- and p-adaptivity, and ability to capture shocks. Additionally, we treat the discontinuities at the interfaces between elements with numerical fluxes that ensure a stable solution of the nonlinear hyperbolic equations. To couple the Bingham model with the shallow-water equations, we regularize the problem with three alternatives. Finally, in order to show the effectiveness of our approach, we perform numerical examples for the usual benchmarks of the shallow-water equations.
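Regularization is needed because the Bingham effective viscosity $\mu_{\mathrm{eff}}(\dot\gamma) = \mu + \tau_y/|\dot\gamma|$ is singular as the shear rate vanishes. The abstract does not name its three alternatives, so the sketch below uses three standard choices from the viscoplasticity literature as stand-ins; all agree with the exact law at large shear rates while remaining bounded at rest:

```python
import numpy as np

def eff_viscosity(gd, tau_y, mu, method, m=1000.0, eps=1e-3):
    """Regularized Bingham effective viscosity at shear rate gd,
    yield stress tau_y, plastic viscosity mu. Three common
    regularizations (illustrative; not necessarily the paper's three):
    a simple cutoff, Bercovier-Engelman, and Papanastasiou."""
    gd = np.abs(gd)
    if method == "simple":          # bounded cutoff of the plastic term
        return mu + tau_y / (gd + eps)
    if method == "bercovier":       # Bercovier-Engelman smoothing
        return mu + tau_y / np.sqrt(gd ** 2 + eps ** 2)
    if method == "papanastasiou":   # exponential stress growth
        return mu + tau_y * (1.0 - np.exp(-m * gd)) / np.maximum(gd, 1e-30)
    raise ValueError(method)
```

For all three, the shear stress $|\dot\gamma|\,\mu_{\mathrm{eff}}$ approaches $\tau_y + \mu|\dot\gamma|$ away from the yield surface, so the regularization only alters the nearly rigid zones where the exact model is multivalued.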

The geometric optimisation of crystal structures is a procedure widely used in Chemistry that changes the geometrical placement of the particles inside a structure. It is called structural relaxation and constitutes a local minimization problem with a non-convex objective function whose domain complexity increases along with the number of particles involved. In this work we study the performance of the two most popular first order optimisation methods, Gradient Descent and Conjugate Gradient, in structural relaxation. The respective pseudocodes can be found in Section 6. Although frequently employed, these methods have scarcely been studied in this context from an algorithmic point of view. In order to accurately define the problem, we provide a thorough derivation of all necessary formulae related to the crystal structure energy function and the function's differentiation. We run each algorithm in combination with a constant step size, which provides a benchmark for the methods' analysis and direct comparison. We also design dynamic step size rules and study how these improve the two algorithms' performance. Our results show that there is a trade-off between convergence rate and the probability that an experiment succeeds, hence we construct a function to assign utility to each method based on our respective preference. The function is built according to a recently introduced model of preference indication concerning algorithms with deadlines and their run time. Finally, building on all our insights from the experimental results, we provide algorithmic recipes that best correspond to each of the presented preferences and select one recipe as optimal for equally weighted preferences.
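The two methods under comparison can be sketched on a toy quadratic energy $f(x) = \tfrac{1}{2}x^\top H x$, a stand-in for the crystal energy function, which the paper derives in full (the sketch is not the paper's Section 6 pseudocode). Gradient descent uses a constant step, as in the paper's benchmark; the conjugate gradient variant here uses a Fletcher-Reeves direction update with the exact line search that is available in the quadratic case:

```python
import numpy as np

def gd_constant_step(H, x0, step, iters):
    """Gradient descent with constant step on f(x) = 0.5 x^T H x
    (gradient H x); step must be below 2 / lambda_max(H) to converge."""
    x = x0.copy()
    for _ in range(iters):
        x = x - step * (H @ x)
    return x

def nonlinear_cg(H, x0, iters):
    """Fletcher-Reeves conjugate gradient on the same quadratic.
    alpha is the exact line-search step for a quadratic; on an
    n-dimensional quadratic the method converges in at most n steps,
    so iters should not exceed len(x0) (to avoid a 0/0 in beta)."""
    x = x0.copy()
    g = H @ x
    d = -g
    for _ in range(iters):
        alpha = (g @ g) / (d @ (H @ d))      # exact step along d
        x = x + alpha * d
        g_new = H @ x
        beta = (g_new @ g_new) / (g @ g)     # Fletcher-Reeves coefficient
        d = -g_new + beta * d
        g = g_new
    return x
```

On an ill-conditioned $H$ this already displays the paper's trade-off in miniature: the constant-step method is robust but geometrically slow, while the conjugate directions finish in $n$ steps yet are more sensitive to step-size and curvature information.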
