
The propagation of charged particles through a scattering medium in the presence of a magnetic field can be described by a Fokker-Planck equation with a Lorentz force term. This model is studied from both a theoretical and a numerical point of view. A particular trace estimate is derived for the relevant function spaces to clarify the meaning of boundary values. Existence of a weak solution is then proven by the Rothe method. In the second step of our investigations, a fully practicable discretization scheme is proposed, based on implicit time-stepping through the energy levels and a spherical-harmonics finite-element discretization with respect to the remaining variables. A full error analysis of the resulting scheme is given, and numerical results are presented to illustrate the theoretical results and the performance of the proposed method.


Motivated by Fredholm theory, we develop a framework to establish the convergence of spectral methods for operator equations $\mathcal L u = f$. The framework posits the existence of a left-Fredholm regulator for $\mathcal L$ and the existence of a sufficiently good approximation of this regulator. Importantly, the numerical method itself need not make use of this extra approximant. We apply the framework to finite-section and collocation-based numerical methods for solving differential equations with periodic boundary conditions and to solving Riemann--Hilbert problems on the unit circle. We also obtain improved results concerning the approximation of eigenvalues of differential operators with periodic coefficients.
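The finite-section idea can be illustrated on a toy periodic problem. The sketch below (illustrative only, not the paper's framework; the function name and the choice $q(x) = 2.5 + \cos x$ are my own) truncates the bi-infinite Fourier representation of $u'' + q(x)u = f$ to modes $|k| \le N$ and solves the resulting finite linear system:

```python
import numpy as np

def finite_section_solve(q_hat, f_hat, N):
    """Finite-section approximation of u'' + q(x) u = f with 2*pi-periodic
    boundary conditions.  In the Fourier basis the operator has entries
    -k^2 on the diagonal plus q_hat[i - k] from multiplication by q; we
    truncate to modes |k| <= N and solve the (2N+1) x (2N+1) system.
    q_hat maps Fourier mode -> coefficient; f_hat is indexed by k = -N..N."""
    ks = np.arange(-N, N + 1)
    n = len(ks)
    L = np.diag(-(ks ** 2)).astype(complex)
    for i in range(n):
        for j in range(n):
            L[i, j] += q_hat.get(int(ks[i] - ks[j]), 0.0)
    return ks, np.linalg.solve(L, f_hat)
```

Because multiplication by a trigonometric polynomial couples only nearby modes, band-limited exact solutions are recovered exactly once the section is large enough to contain them.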

Stein thinning is a promising algorithm proposed by Riabiz et al. (2022) for post-processing outputs of Markov chain Monte Carlo (MCMC). The main principle is to greedily minimize the kernelized Stein discrepancy (KSD), which only requires the gradient of the log-target distribution and is thus well-suited for Bayesian inference. The main advantages of Stein thinning are the automatic removal of the burn-in period, the correction of the bias introduced by recent MCMC algorithms, and asymptotic convergence towards the target distribution. Nevertheless, Stein thinning suffers from several empirical pathologies, which may result in poor approximations, as observed in the literature. In this article, we conduct a theoretical analysis of these pathologies to clearly identify the mechanisms at play, and we suggest improved strategies. We then introduce the regularized Stein thinning algorithm to alleviate the identified pathologies. Finally, theoretical guarantees and extensive experiments show the high efficiency of the proposed algorithm.
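A minimal sketch of the underlying greedy KSD minimization (illustrative only, not the authors' or the regularized implementation; the IMQ base kernel and function names are my choices) might look like:

```python
import numpy as np

def stein_kernel_matrix(x, score, c=1.0, beta=0.5):
    """Stein kernel matrix k_p(x_i, x_j) built from the IMQ base kernel
    k(x, y) = (c^2 + ||x - y||^2)^(-beta) and the score s = grad log p."""
    n, d = x.shape
    diff = x[:, None, :] - x[None, :, :]              # (n, n, d), x_i - x_j
    sq = np.sum(diff ** 2, axis=-1)                   # ||x_i - x_j||^2
    q = c ** 2 + sq
    base = q ** (-beta)
    g = 2.0 * beta * q ** (-beta - 1.0)               # magnitude of grad coefficient
    # div_x div_y k = 2 b [ d q^(-b-1) - 2 (b+1) q^(-b-2) ||x-y||^2 ]
    trace = 2.0 * beta * (d * q ** (-beta - 1.0)
                          - 2.0 * (beta + 1.0) * q ** (-beta - 2.0) * sq)
    si_r = np.einsum("id,ijd->ij", score, diff)       # s(x_i) . (x_i - x_j)
    sj_r = np.einsum("jd,ijd->ij", score, diff)       # s(x_j) . (x_i - x_j)
    dot = score @ score.T                             # s(x_i) . s(x_j)
    return trace + g * (si_r - sj_r) + dot * base

def stein_thinning(kp, m):
    """Greedy KSD minimization: at each step pick the point minimizing
    kp(x, x) / 2 + sum over already-selected points of kp(x_j, x)."""
    running = np.zeros(kp.shape[0])
    selected = []
    for _ in range(m):
        i = int(np.argmin(0.5 * np.diag(kp) + running))
        selected.append(i)
        running += kp[i]
    return selected
```

Only the score function (gradient of the log-target) enters the kernel, which is what makes the method applicable to unnormalized Bayesian posteriors.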

In this paper, a multiscale constitutive framework for one-dimensional blood flow modeling is presented and discussed. By analyzing the asymptotic limits of the proposed model, it is shown that different types of blood propagation phenomena in arteries and veins can be described through an appropriate choice of scaling parameters, which are related to distinct characterizations of the fluid-structure interaction mechanism (whether elastic or viscoelastic) that exist between vessel walls and blood flow. In these asymptotic limits, well-known blood flow models from the literature are recovered. Additionally, by analyzing the perturbation of the local elastic equilibrium of the system, a new viscoelastic blood flow model is derived. The proposed approach is highly flexible and suitable for studying the human cardiovascular system, which is composed of vessels with high morphological and mechanical variability. The resulting multiscale hyperbolic model of blood flow is solved using an asymptotic-preserving Implicit-Explicit Runge-Kutta Finite Volume method, which ensures the consistency of the numerical scheme with the different asymptotic limits of the mathematical model without affecting the choice of the time step by restrictions related to the smallness of the scaling parameters. Several numerical tests confirm the validity of the proposed methodology, including a case study investigating the hemodynamics of a thoracic aorta in the presence of a stent.

In computational finance, the quantities of interest are often expected values of functions of solutions to stochastic differential equations (SDEs). For SDEs with globally Lipschitz coefficients and commutative diffusion coefficients, the explicit Milstein scheme, relying only on Brownian increments and thus easily implementable, can be combined with the multilevel Monte Carlo (MLMC) method proposed by Giles \cite{giles2008multilevel} to give the optimal overall computational cost $\mathcal{O}(\epsilon^{-2})$, where $\epsilon$ is the required target accuracy. For multi-dimensional SDEs that do not satisfy the commutativity condition, a one-half-order truncated Milstein-type scheme without L\'evy areas was introduced by Giles and Szpruch \cite{giles2014antithetic}, which, combined with the antithetic MLMC, gives the optimal computational cost under globally Lipschitz conditions. In the present work, we turn to SDEs with non-globally Lipschitz continuous coefficients, for which a family of modified Milstein-type schemes without L\'evy areas is proposed. The expected one-half order of strong convergence is recovered in a non-globally Lipschitz setting, where the diffusion coefficients are allowed to grow superlinearly. This allows us to analyze the relevant variance of the multilevel estimator, and the optimal computational cost is finally achieved for the antithetic MLMC. The analysis of both the convergence rate and the desired variance in the non-globally Lipschitz setting is highly non-trivial, and non-standard arguments are developed to overcome some essential difficulties. Numerical experiments are provided to confirm the theoretical findings.
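To fix ideas, here is a minimal sketch of a plain (non-antithetic) Milstein-based MLMC estimator for a scalar SDE, using geometric Brownian motion, where the coefficients are globally Lipschitz and the commutativity issue does not arise. It is illustrative only; function names and parameters are my own, and the paper's modified schemes and antithetic coupling are not reproduced here.

```python
import numpy as np

def milstein_pair(x0, mu, sigma, T, level, rng, M=2):
    """Coupled fine/coarse Milstein paths of 1-d geometric Brownian motion
    dX = mu X dt + sigma X dW, driven by the same Brownian increments.
    Fine step h = T / M^level; the coarse path sums M fine increments."""
    nf = M ** level
    hf = T / nf
    dW = rng.normal(0.0, np.sqrt(hf), nf)
    xf = x0
    for w in dW:
        xf += mu * xf * hf + sigma * xf * w + 0.5 * sigma ** 2 * xf * (w * w - hf)
    if level == 0:
        return xf, None
    hc = M * hf
    xc = x0
    for k in range(0, nf, M):
        w = dW[k:k + M].sum()
        xc += mu * xc * hc + sigma * xc * w + 0.5 * sigma ** 2 * xc * (w * w - hc)
    return xf, xc

def mlmc_estimate(x0, mu, sigma, T, L, N, rng):
    """Multilevel Monte Carlo estimate of E[X_T]: telescoping sum of
    level-wise corrections, with N[l] coupled samples on level l."""
    est = 0.0
    for level in range(L + 1):
        s = 0.0
        for _ in range(N[level]):
            xf, xc = milstein_pair(x0, mu, sigma, T, level, rng)
            s += xf if xc is None else xf - xc
        est += s / N[level]
    return est
```

The telescoping structure means the estimator's bias is that of the finest level only, while most samples are spent on the cheap coarse levels.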

In recent years, there has been a significant growth in research focusing on minimum $\ell_2$ norm (ridgeless) interpolation least squares estimators. However, the majority of these analyses have been limited to a simple regression error structure, assuming independent and identically distributed errors with zero mean and common variance, independent of the feature vectors. Additionally, the main focus of these theoretical analyses has been on the out-of-sample prediction risk. This paper breaks away from the existing literature by examining the mean squared error of the ridgeless interpolation least squares estimator, allowing for more general assumptions about the regression errors. Specifically, we investigate the potential benefits of overparameterization by characterizing the mean squared error in a finite sample. Our findings reveal that including a large number of unimportant parameters relative to the sample size can effectively reduce the mean squared error of the estimator. Notably, we establish that the estimation difficulties associated with the variance term can be summarized through the trace of the variance-covariance matrix of the regression errors.
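The estimator under study can be written down in a few lines. The sketch below (a toy illustration under my own simulated design, not the paper's analysis) computes the minimum $\ell_2$ norm least squares estimator via the Moore-Penrose pseudoinverse in an overparameterized setting and checks that it interpolates the training data:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 200                                  # overparameterized: p > n
X = rng.normal(size=(n, p))
beta_star = np.zeros(p)
beta_star[:5] = 1.0                             # only a few important coordinates
y = X @ beta_star + 0.1 * rng.normal(size=n)

# Minimum l2-norm (ridgeless) interpolator: the limit of ridge regression as
# the penalty tends to 0+, computed via the Moore-Penrose pseudoinverse.
beta_hat = np.linalg.pinv(X) @ y

print(np.allclose(X @ beta_hat, y))             # interpolates: True
```

Equivalently, `beta_hat` is the limit of `X.T @ inv(X @ X.T + lam * I) @ y` as `lam -> 0+`, which is the "ridgeless" characterization used in this literature.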

Temporal point processes (TPPs) are a natural tool for modeling event-based data. Among all TPP models, Hawkes processes have proven to be the most widely used, mainly because they model various applications adequately, particularly with exponential or non-parametric kernels. Although non-parametric kernels are an option, such models require large datasets. While exponential kernels are more data efficient and relevant for specific applications where events immediately trigger more events, they are ill-suited for applications where latencies need to be estimated, such as in neuroscience. This work aims to offer an efficient solution to TPP inference using general parametric kernels with finite support. The developed solution consists of a fast $\ell_2$ gradient-based solver leveraging a discretized version of the events. After theoretically supporting the use of discretization, the statistical and computational efficiency of the novel approach is demonstrated through various numerical experiments. Finally, the method's effectiveness is evaluated by modeling the occurrence of stimuli-induced patterns from brain signals recorded with magnetoencephalography (MEG). Given the use of general parametric kernels, results show that the proposed approach leads to improved estimation of pattern latency compared with the state-of-the-art.
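The discretization idea can be sketched as follows. The toy code below (my own illustration, not the paper's solver: it fits unconstrained kernel weights on the grid rather than a parametric kernel, which makes the discretized $\ell_2$ objective exactly quadratic) bins events and models the intensity as a baseline plus a finite-support discrete convolution with past counts:

```python
import numpy as np

def discretized_intensity(counts, mu, kernel):
    """lambda[t] = mu + sum_s kernel[s] * counts[t - 1 - s] on a regular grid
    (finite-support kernel of length len(kernel))."""
    n = len(counts)
    lam = np.full(n, float(mu))
    for s in range(len(kernel)):
        lam[s + 1:] += kernel[s] * counts[: n - s - 1]
    return lam

def l2_fit(counts, W):
    """Least-squares estimate of (mu, kernel) from binned events: with
    grid-valued kernel weights the discretized l2 objective is quadratic,
    so the fit reduces to a linear least-squares problem."""
    n = len(counts)
    A = np.ones((n, W + 1))
    for s in range(W):
        A[:, s + 1] = np.concatenate([np.zeros(s + 1), counts[: n - s - 1]])
    theta, *_ = np.linalg.lstsq(A, counts, rcond=None)
    return theta[0], theta[1:]
```

Because the kernel has finite support, each intensity evaluation touches only a fixed window of past bins, which is what makes the discretized solver fast.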

Plug-and-play (PnP) prior is a well-known class of methods for solving imaging inverse problems by computing fixed-points of operators combining physical measurement models and learned image denoisers. While PnP methods have been extensively used for image recovery with known measurement operators, there is little work on PnP for solving blind inverse problems. We address this gap by presenting a new block-coordinate PnP (BC-PnP) method that efficiently solves this joint estimation problem by introducing learned denoisers as priors on both the unknown image and the unknown measurement operator. We present a new convergence theory for BC-PnP compatible with blind inverse problems by considering nonconvex data-fidelity terms and expansive denoisers. Our theory analyzes the convergence of BC-PnP to a stationary point of an implicit function associated with an approximate minimum mean-squared error (MMSE) denoiser. We numerically validate our method on two blind inverse problems: automatic coil sensitivity estimation in magnetic resonance imaging (MRI) and blind image deblurring. Our results show that BC-PnP provides an efficient and principled framework for using denoisers as PnP priors for jointly estimating measurement operators and images.
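The fixed-point structure of PnP can be sketched in a few lines. The code below (illustrative only: a single-block PnP iteration with a known operator and a simple moving-average smoother standing in for a learned denoiser, not the BC-PnP method, which alternates updates over the image and the measurement operator) iterates $x \leftarrow \mathsf{D}\bigl(x - \gamma \nabla f(x)\bigr)$:

```python
import numpy as np

def pnp_fixed_point(A, y, denoise, iters=2000):
    """PnP fixed-point (ISTA-style) iteration x <- D(x - gamma * grad f(x))
    with data fidelity f(x) = ||A x - y||^2 / 2; the denoiser D plays the
    role of an implicit prior on x."""
    gamma = 1.0 / np.linalg.norm(A, 2) ** 2   # step size below 1 / Lipschitz const.
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = denoise(x - gamma * A.T @ (A @ x - y))
    return x

def box_smoother(width=5):
    """Toy stand-in for a learned denoiser: a moving-average smoother."""
    k = np.ones(width) / width
    return lambda x: np.convolve(x, k, mode="same")
```

With the identity map as "denoiser" the iteration reduces to gradient descent on the data-fidelity term; swapping in a denoiser injects the prior without ever writing it down explicitly, which is the essence of PnP.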

In a network of reinforced stochastic processes, for certain values of the parameters, all the agents' inclinations synchronize and converge almost surely toward a certain random variable. The present work aims at clarifying when the agents can asymptotically polarize, i.e. when the common limit inclination can take the extreme values, 0 or 1, with probability zero, strictly positive, or equal to one. Moreover, we present a suitable technique for estimating this probability which, along with the theoretical results, is framed in the general setting of a class of martingales taking values in [0, 1].
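A simulation-based estimate of such a polarization probability can be illustrated on the simplest reinforced process, the classical Pólya urn, whose inclination is a [0, 1]-valued martingale with a Uniform(0, 1) limit. This toy sketch is mine and does not reproduce the paper's network model or its estimation technique:

```python
import numpy as np

def simulate_inclinations(n_runs, n_steps, a0=1.0, b0=1.0, rng=None):
    """Vectorized simulation of n_runs independent Polya urns: a toy
    linearly-reinforced process whose inclination Z_n = a / (a + b) is a
    bounded martingale converging a.s. to a random limit."""
    rng = np.random.default_rng() if rng is None else rng
    a = np.full(n_runs, a0)
    b = np.full(n_runs, b0)
    for _ in range(n_steps):
        z = a / (a + b)
        draw = rng.random(n_runs) < z    # reinforce colour 1 with prob. Z_n
        a += draw
        b += ~draw
    return a / (a + b)

def polarization_probability(z_final, eps=0.05):
    """Monte Carlo estimate of the probability that the limit inclination
    lies within eps of the extremes 0 or 1."""
    return np.mean((z_final < eps) | (z_final > 1.0 - eps))
```

For the (1, 1) urn the limit is Uniform(0, 1), so the estimate should be close to $2\varepsilon$; for reinforcement schemes that do polarize, the same estimator would concentrate mass near the endpoints.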

Markov chain Monte Carlo (MCMC) allows one to generate dependent replicates from a posterior distribution for effectively any Bayesian hierarchical model. However, MCMC can produce a significant computational burden. This motivates us to consider finding expressions of the posterior distribution that are computationally straightforward to obtain independent replicates from directly. We focus on a broad class of Bayesian latent Gaussian process (LGP) models that allow for spatially dependent data. First, we derive a new class of distributions we refer to as the generalized conjugate multivariate (GCM) distribution. The GCM distribution's theoretical development is similar to that of the CM distribution with two main differences; namely, (1) the GCM allows for latent Gaussian process assumptions, and (2) the GCM explicitly accounts for hyperparameters through marginalization. The development of GCM is needed to obtain independent replicates directly from the exact posterior distribution, which has an efficient projection/regression form. Hence, we refer to our method as Exact Posterior Regression (EPR). Illustrative examples are provided including simulation studies for weakly stationary spatial processes and spatial basis function expansions. An additional analysis of poverty incidence data from the U.S. Census Bureau's American Community Survey (ACS) using a conditional autoregressive model is presented.
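The contrast with MCMC can be made concrete in the simplest conjugate special case. The sketch below (my own illustration of direct posterior sampling in a Gaussian-Gaussian linear model; it is not the GCM distribution or EPR, which handle latent Gaussian processes and marginalize hyperparameters) draws independent replicates straight from the exact posterior, with no Markov chain:

```python
import numpy as np

def posterior_direct_samples(X, y, tau2, sigma2, n_draws, rng):
    """i.i.d. draws from the exact Gaussian posterior of beta in the conjugate
    model y = X beta + eps, beta ~ N(0, tau2 I), eps ~ N(0, sigma2 I).
    No Markov chain is run: each replicate comes directly from the posterior."""
    p = X.shape[1]
    prec = X.T @ X / sigma2 + np.eye(p) / tau2      # posterior precision
    cov = np.linalg.inv(prec)
    mean = cov @ (X.T @ y) / sigma2                 # regression-form posterior mean
    L = np.linalg.cholesky(cov)
    return mean[:, None] + L @ rng.normal(size=(p, n_draws))   # (p, n_draws)
```

Each column is an exact, independent posterior replicate, so there is no burn-in, no thinning, and no convergence diagnostics, which is the practical payoff the abstract describes for EPR in the far richer LGP setting.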

When dealing with time series data, causal inference methods often employ structural vector autoregressive (SVAR) processes to model time-evolving random systems. In this work, we rephrase recursive SVAR processes with possible latent component processes as a linear Structural Causal Model (SCM) of stochastic processes on a simple causal graph, the \emph{process graph}, that models every process as a single node. Using this reformulation, we generalise Wright's well-known path-rule for linear Gaussian SCMs to the newly introduced process SCMs and we express the auto-covariance sequence of an SVAR process by means of a generalised trek-rule. Employing the Fourier transform, we derive compact expressions for causal effects in the frequency domain that allow us to efficiently visualise the causal interactions in a multivariate SVAR process. Finally, we observe that the process graph can be used to formulate graphical criteria for identifying causal effects and to derive algebraic relations with which these frequency domain causal effects can be recovered from the observed spectral density.
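The frequency-domain quantities involved are straightforward to compute. The sketch below (standard VAR frequency-domain formulas as an illustration; function names are mine and the paper's trek-rule and graphical machinery are not reproduced) evaluates the transfer function $H(\omega) = (I - \sum_k A_k e^{-ik\omega})^{-1}$ and the spectral density $S(\omega) = H(\omega)\,\Sigma\,H(\omega)^* / (2\pi)$:

```python
import numpy as np

def transfer_function(A_lags, omega):
    """H(omega) = (I - sum_k A_k e^{-i k omega})^{-1} for an SVAR process
    X_t = sum_k A_k X_{t-k} + eps_t; entry (i, j) encodes the frequency-
    domain effect of component j on component i."""
    d = A_lags[0].shape[0]
    Aw = sum(Ak * np.exp(-1j * (k + 1) * omega) for k, Ak in enumerate(A_lags))
    return np.linalg.inv(np.eye(d) - Aw)

def spectral_density(A_lags, Sigma, omega):
    """Spectral density S(omega) = H(omega) Sigma H(omega)^* / (2 pi)."""
    H = transfer_function(A_lags, omega)
    return H @ Sigma @ H.conj().T / (2.0 * np.pi)
```

Sweeping `omega` over $[0, \pi)$ and plotting the entries of `H` gives exactly the kind of frequency-resolved visualisation of causal interactions the abstract mentions.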
