
This paper introduces a new algorithm to improve the accuracy of numerical phase-averaging in oscillatory, multiscale differential equations. Phase-averaging is a technique that applies averaging to a mapped variable in order to remove highly oscillatory linear terms from the differential equation. This retains the main contribution of the fast oscillations to the low frequencies without needing to resolve the rapid oscillations themselves. However, it comes at the cost of an averaging error, which we aim to offset with a modified mapping. The new mapping includes a mean correction that encodes an average measure of the nonlinear interactions. Such a mapping was introduced in Tao (2019) for weak nonlinearity and relied on classical time averaging. Our algorithm extends this work to the case where 1) the nonlinearity is not weak but the linear oscillations are fast, and 2) finite averaging windows are applied via a smooth kernel, which has the advantage of retaining low frequencies whilst still eliminating the fastest oscillations. We show that the new algorithm reduces phase errors in the mapped variable for the swinging-spring ODE. We also demonstrate accuracy improvements over standard phase-averaging in numerical experiments with the one-dimensional Rotating Shallow Water Equations, a useful test case for weather and climate applications. As the mean correction term can be computed in parallel, the new mapping has potential as a more accurate, yet still computationally cheap, coarse propagator for the oscillatory parareal method.
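For orientation, a hedged sketch of the setting in our own notation (not taken verbatim from the paper): for a system $\dot u = Lu + N(u)$ with a fast skew linear operator $L$, phase-averaging maps $u \mapsto v = e^{-tL}u$, so that $\dot v = e^{-tL}N(e^{tL}v)$, and then replaces the right-hand side by a kernel-weighted finite-window average,

$$ \dot{\bar v}(t) \;=\; \int_{-T/2}^{T/2} \rho\!\left(\tfrac{s}{T}\right) e^{-(t+s)L}\, N\!\left(e^{(t+s)L}\,\bar v(t)\right) \frac{ds}{T}, $$

where $\rho$ is a smooth averaging kernel and $T$ the window length. The modified mapping adds a mean correction to the exponent of the map, so that an average measure of the nonlinear interactions is carried by the map itself rather than lost to the averaging error.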

Related content


This paper presents a novel, generic asymptotic expansion formula for expectations of multidimensional Wiener functionals, obtained through a Malliavin calculus technique. A uniform estimate for the asymptotic expansion is shown under a weaker condition on the Malliavin covariance matrix of the target Wiener functional. In particular, the method provides a tractable expansion for the expectation of an irregular functional of the solution to a multidimensional rough differential equation driven by fractional Brownian motion with Hurst index $H<1/2$, without using complicated fractional integral calculus for the singular kernel. In a numerical experiment, our expansion yields a much better approximation of a probability distribution function than its normal approximation, which demonstrates the validity of the proposed method.
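To fix ideas, expansions of this Watanabe type generically take the following first-order form (illustrative notation, not the paper's exact formula): for Wiener functionals $F^\varepsilon = F_0 + \varepsilon F_1 + O(\varepsilon^2)$ with $F_0$ Gaussian,

$$ \mathbb{E}\big[f(F^\varepsilon)\big] \;=\; \mathbb{E}\big[f(F_0)\big] \;+\; \varepsilon\,\mathbb{E}\big[f(F_0)\,\pi_1\big] \;+\; O(\varepsilon^2), $$

where the weight $\pi_1$ is constructed from $F_0$, $F_1$ and the Malliavin covariance via integration by parts. Because the derivative of $f$ is traded for the weight $\pi_1$, the test function $f$ may be irregular (e.g. an indicator), which is what makes such expansions usable for probability distribution functions.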

The paper concerns the recovery of linear operators defined on sets of functions from information about these functions given with stochastic errors. The optimal recovery methods constructed here do not, in general, use all of the available information. As a consequence, optimal methods are obtained for recovering derivatives of functions from Sobolev classes using their Fourier transforms given with stochastic errors. A similar problem is considered for solutions of the heat equation.
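As a toy illustration of why discarding information can be optimal (our example, not the paper's construction): differentiating in Fourier space multiplies the data, and hence the noise, by $i\xi$, so a good linear recovery method deliberately truncates the high frequencies of the noisy data.

import numpy as np

rng = np.random.default_rng(1)
n = 512
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
f = np.sin(3 * x) + 0.5 * np.cos(7 * x)
df_true = 3 * np.cos(3 * x) - 3.5 * np.sin(7 * x)

xi = np.fft.fftfreq(n, d=x[1] - x[0]) * 2 * np.pi      # angular frequencies
fhat_noisy = np.fft.fft(f) + (rng.normal(size=n) + 1j * rng.normal(size=n))

def derivative(fhat, cutoff):
    mult = 1j * xi * (np.abs(xi) <= cutoff)            # differentiate, drop high frequencies
    return np.fft.ifft(mult * fhat).real

for cutoff in (8, 64, np.inf):
    err = np.abs(derivative(fhat_noisy, cutoff) - df_true).max()
    print(f"cutoff {cutoff}: max error {err:.2f}")

With no cutoff the $i\xi$ multiplier amplifies the noise at the highest frequencies and the estimate is useless; a modest cutoff, which throws away most of the supplied Fourier data, recovers the derivative well.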

Quantum hypothesis testing (QHT) has traditionally been studied from the information-theoretic perspective, in which one is interested in the optimal decay rate of error probabilities as a function of the number of samples of an unknown state. In this paper, we study the sample complexity of QHT, in which the goal is to determine the minimum number of samples needed to reach a desired error probability. By making use of the wealth of knowledge that already exists in the literature on QHT, we characterize the sample complexity of binary QHT in the symmetric and asymmetric settings, and we provide bounds on the sample complexity of multiple QHT. In more detail, we prove that the sample complexity of symmetric binary QHT depends logarithmically on the inverse error probability and inversely on the negative logarithm of the fidelity. As a counterpart to the quantum Stein's lemma, we also find that the sample complexity of asymmetric binary QHT depends logarithmically on the inverse type II error probability and inversely on the quantum relative entropy, provided that the type II error probability is sufficiently small. We then provide lower and upper bounds on the sample complexity of multiple QHT, and improving these bounds remains an intriguing open question. The final part of our paper outlines and reviews how the sample complexity of QHT is relevant to a broad swathe of research areas and can enhance the understanding of many fundamental concepts, including quantum algorithms for simulation and search, quantum learning and classification, and the foundations of quantum mechanics. As such, we view our paper as an invitation to researchers from different communities to study and contribute to the problem of sample complexity of QHT, and we outline a number of open directions for future research.
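In symbols, the two binary characterizations stated above can be summarized as

$$ n^{*}_{\mathrm{sym}}(\rho,\sigma,\varepsilon) \;=\; \Theta\!\left(\frac{\log(1/\varepsilon)}{-\log F(\rho,\sigma)}\right), \qquad n^{*}_{\mathrm{asym}}(\rho,\sigma,\beta) \;=\; \Theta\!\left(\frac{\log(1/\beta)}{D(\rho\,\|\,\sigma)}\right), $$

where $F$ denotes the fidelity, $D$ the quantum relative entropy, $\varepsilon$ the symmetric error probability, and $\beta$ a sufficiently small type II error probability. This display is our shorthand for the dependencies stated in the text, suppressing constants and regime conditions.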

This paper introduces the design and implementation of PyOptInterface, a modeling language for mathematical optimization embedded in the Python programming language. PyOptInterface uses a lightweight and compact data structure to efficiently bridge high-level entities in optimization models, such as variables and constraints, to the internal indices of optimizers. It supports a variety of optimization solvers and a range of common problem classes. We provide benchmarks that exhibit the competitive performance of PyOptInterface compared with other state-of-the-art modeling languages.
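The bridging idea can be sketched in a few lines of plain Python (an illustrative toy of handle-to-index mapping, not PyOptInterface's actual code or API):

from dataclasses import dataclass

@dataclass(frozen=True)
class VariableHandle:
    index: int                # compact integer handle; cheap to copy, hash, and store

class Model:
    def __init__(self):
        self._next = 0
        self._column = []     # handle.index -> optimizer's internal column (-1 = deleted)

    def add_variable(self):
        h = VariableHandle(self._next)
        self._column.append(self._next)   # eagerly assign a solver column
        self._next += 1
        return h

    def solver_column(self, h):
        c = self._column[h.index]
        if c < 0:
            raise KeyError("variable was deleted")
        return c

    def delete_variable(self, h):
        self._column[h.index] = -1        # invalidate lazily; compact columns in batches

m = Model()
x, y = m.add_variable(), m.add_variable()
print(m.solver_column(x), m.solver_column(y))   # -> 0 1

Keeping the map from handles to solver columns in a flat array, rather than in heavyweight expression objects, is one way to keep model construction overhead small relative to solver time.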

This paper studies the linear reconstruction of partially observed functional data recorded on a discrete grid. We propose a novel estimation approach based on approximate factor models with increasing rank, taking potential covariate information into account. Whereas alternative reconstruction procedures commonly involve some preliminary smoothing, our method separates the signal from the noise and reconstructs the missing fragments in one step. We establish uniform convergence rates for our estimator and introduce a new method for constructing simultaneous prediction bands for the missing trajectories. A simulation study examines the performance of the proposed methods in finite samples. Finally, a real-data application to temperature curves demonstrates that our theory provides a simple and effective method for recovering missing fragments.
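A minimal sketch of the flavour of linear reconstruction, via best linear prediction from an empirical covariance (a simplification: the paper's estimator is factor-based with increasing rank and covariates, and treats noise and partial observation jointly):

import numpy as np

def reconstruct_fragment(X_complete, x_obs, obs_idx, mis_idx):
    """Predict the missing grid values of one curve from its observed fragment.

    X_complete: (n_curves, n_grid) fully observed training curves
    x_obs:      observed values of the target curve at grid indices obs_idx
    """
    mu = X_complete.mean(axis=0)
    C = np.cov(X_complete, rowvar=False)           # empirical covariance on the grid
    Coo = C[np.ix_(obs_idx, obs_idx)] + 1e-8 * np.eye(len(obs_idx))  # ridge for stability
    Cmo = C[np.ix_(mis_idx, obs_idx)]
    # best linear predictor: mu_m + C_mo C_oo^{-1} (x_o - mu_o)
    return mu[mis_idx] + Cmo @ np.linalg.solve(Coo, x_obs - mu[obs_idx])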

This paper addresses the problem of pathological lung segmentation, a significant challenge in medical image analysis that is particularly pronounced in cases of peripheral opacities (severe fibrosis and consolidation) because of the textural similarity between lung tissue and the surrounding areas. To overcome these challenges, this paper employs CycleGAN for unpaired image-to-image translation, providing an augmentation method able to generate fake pathological images that match an existing ground truth. Although previous studies have employed CycleGAN, they often neglect the challenge of shape deformation, which is crucial for accurate medical image segmentation. Our work introduces an innovative strategy that incorporates additional loss functions. Specifically, it proposes an L1 loss based on the lung surrounding, whose shape is constrained to remain unchanged in the transition from the healthy to the pathological domain; the lung surrounding is derived from the ground-truth lung masks available in the healthy domain. Furthermore, preprocessing steps, such as cropping based on rib/vertebra locations, are applied to refine the input to the CycleGAN, ensuring that the network focuses on the lung region. This is essential to avoid extraneous biases, such as the zoom-effect bias, which can divert attention from the main task. The method is applied to enhance, in a semi-supervised manner, the lung segmentation process, by employing a U-Net model trained with on-the-fly data augmentation that incorporates synthetic pathological tissues generated by the CycleGAN model. Preliminary results from this research demonstrate significant qualitative and quantitative improvements, setting a new benchmark in the field of pathological lung segmentation. Our code is available at https://github.com/noureddinekhiati/Semi-supervised-lung-segmentation
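A hedged sketch of such a surrounding-shape term as a PyTorch-style loss (the names and the weighting are our assumptions, not the authors' released code):

import torch
import torch.nn.functional as F

def surrounding_l1_loss(healthy_img, fake_pathological, lung_mask, weight=10.0):
    """Penalize changes outside the ground-truth lung mask, so that the lung
    surrounding keeps its shape across the healthy -> pathological translation."""
    surround = 1.0 - lung_mask                     # 1 outside the lungs, 0 inside
    return weight * F.l1_loss(fake_pathological * surround, healthy_img * surround)

This term is added to the usual CycleGAN adversarial and cycle-consistency losses; because it only constrains the region outside the lung mask, the generator remains free to synthesize pathological texture inside the lungs.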

We present a subspace method based on neural networks for solving partial differential equations in weak form with high accuracy. The basic idea of the method is to use functions built from neural networks as basis functions to span a subspace, and then to find an approximate solution in this subspace. Training the basis functions and finding an approximate solution can be separated; that is, different methods can be used to train the basis functions, and different methods can be used to find an approximate solution. In this paper, we find an approximate solution of the partial differential equation in weak form. Our method achieves high accuracy at a low training cost. Numerical examples show that the cost of training the basis functions is low: only one hundred to two thousand epochs are needed for most tests. The error of our method falls below $10^{-7}$ for some tests. The proposed method thus offers better performance in terms of both accuracy and computational cost.
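A minimal runnable sketch of the idea, assuming fixed random features in place of trained network basis functions and a 1D Poisson model problem:

import numpy as np

# Solve -u'' = f on (0,1) with u(0)=u(1)=0 in a subspace spanned by
# network-style features phi_j(x) = tanh(w_j*x + b_j) * x*(1-x).
rng = np.random.default_rng(0)
m = 40
w, b = rng.normal(0.0, 10.0, m), rng.normal(0.0, 3.0, m)

x = np.linspace(0.0, 1.0, 2001)
dx = x[1] - x[0]
t = np.tanh(np.outer(x, w) + b)
bump = (x * (1.0 - x))[:, None]                  # enforces the boundary conditions
P = t * bump                                     # basis values  phi_j(x_k)
D = (1.0 - t**2) * w * bump + t * (1.0 - 2.0 * x)[:, None]   # derivatives phi_j'(x_k)

f = np.pi**2 * np.sin(np.pi * x)                 # manufactured solution u = sin(pi*x)

# Galerkin weak form: A_ij = int phi_i' phi_j' dx,  F_i = int f phi_i dx
A = D.T @ D * dx
F = P.T @ f * dx
c = np.linalg.lstsq(A, F, rcond=None)[0]         # least squares handles ill-conditioning

u_h = P @ c
print("max error:", np.abs(u_h - np.sin(np.pi * x)).max())

In the method proper, the features would first be trained (the few-hundred-epoch stage mentioned above) before the weak-form system is assembled and solved in the resulting subspace.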

We study the numerical approximation of the stochastic heat equation with a distributional reaction term $b$. Under a condition on the Besov regularity of $b$, it was proven recently that a strong solution exists and is unique in the pathwise sense, in a class of H\"older continuous processes. For a suitable choice of a sequence $(b^k)_{k\in \mathbb{N}}$ approximating $b$, we prove that the error between the solution $u$ of the SPDE with reaction term $b$ and its tamed Euler finite-difference scheme with mollified drift $b^k$ converges to $0$ in $L^m(\Omega)$ with a rate that depends on the Besov regularity of $b$. Two interesting cases can be considered: first, even when $b$ is only a (finite) measure, a rate of convergence is obtained; second, when $b$ is a bounded measurable function, the (almost) optimal rates of convergence $\frac{1}{2}-\varepsilon$ in space and $\frac{1}{4}-\varepsilon$ in time are achieved. Stochastic sewing techniques are used in the proofs, in particular to deduce new regularising properties of the discrete Ornstein-Uhlenbeck process.
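For orientation, a tamed Euler finite-difference step with mollified drift typically takes a form like the following (our notation; the paper's precise scheme and noise discretization may differ):

$$ u_{n+1}(x_i) \;=\; u_n(x_i) \;+\; \Delta t\,(\Delta_h u_n)(x_i) \;+\; \Delta t\,\frac{b^{k}\!\big(u_n(x_i)\big)}{1+\Delta t\,\big|b^{k}\!\big(u_n(x_i)\big)\big|} \;+\; \frac{\Delta W_n(x_i)}{h}, $$

where $\Delta_h$ is the discrete Laplacian on a grid of mesh $h$, the taming factor prevents the mollified drift $b^k$ from blowing up the scheme, and the $\Delta W_n(x_i)$ are Gaussian increments approximating space-time white noise.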

This paper presents an analysis of the properties of two hybrid discretization methods for Gaussian derivatives, based on convolution with either the normalized sampled Gaussian kernel or the integrated Gaussian kernel, followed by central differences. The motivation for studying these discretization methods is that, when multiple spatial derivatives of different orders are needed at the same scale level, they can be computed significantly more efficiently than with more direct derivative approximations based on explicit convolutions with either sampled or integrated Gaussian derivative kernels. These computational benefits also hold for the genuinely discrete approach, which computes discrete analogues of Gaussian derivatives by convolution with the discrete analogue of the Gaussian kernel followed by central differences. However, the underlying mathematical primitives for the discrete analogue of the Gaussian kernel, in terms of modified Bessel functions of integer order, may not be available in certain frameworks for image processing, such as deep learning based on scale-parameterized filters expressed in terms of Gaussian derivatives, with learning of the scale levels. In this paper, we characterize the properties of these hybrid discretization methods using quantitative performance measures concerning the amount of spatial smoothing they imply, as well as the relative consistency of scale estimates obtained from scale-invariant feature detectors with automatic scale selection. We place particular emphasis on the behaviour for very small values of the scale parameter, which may differ significantly both from the corresponding results of the fully continuous scale-space theory and between the different types of discretization methods.
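In one dimension, such a hybrid scheme amounts to a single smoothing convolution followed by repeated central differencing, which is what makes several derivative orders at the same scale cheap. A minimal numpy sketch:

import numpy as np

def sampled_gaussian(sigma, radius=None):
    # Normalized sampled Gaussian kernel
    radius = int(np.ceil(4 * sigma)) if radius is None else radius
    u = np.arange(-radius, radius + 1)
    g = np.exp(-u**2 / (2.0 * sigma**2))
    return g / g.sum()

def hybrid_gaussian_derivatives(signal, sigma):
    # One smoothing convolution, then cheap difference operators per order
    smoothed = np.convolve(signal, sampled_gaussian(sigma), mode="same")
    d1 = np.gradient(smoothed)     # central differences in the interior
    d2 = np.gradient(d1)           # repeated differencing for the second order
    return smoothed, d1, d2

The direct alternative would instead convolve the signal with a separate sampled (or integrated) Gaussian derivative kernel for every derivative order.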

This paper proposes families of multimatricvariate and multimatrix variate distributions based on elliptically contoured laws in the context of real normed division algebras. The work makes it possible to answer the following inference problems about random matrix variate distributions: 1) modeling of two or more probabilistically dependent random quantities in all possible combinations, whether univariate, vector, or matrix valued, simultaneously; 2) expected marginal distributions under independence, and joint estimation of models under likelihood functions of dependent samples; 3) definition of a likelihood function for dependent samples of the mentioned random dimensions over real normed division algebras. The corresponding real distributions are alternative approaches to the existing univariate and vector-variate copulas, with the additional advantages listed above. An application to quaternionic algebra is illustrated by a computable joint distribution of a dependent sample of landmark data arising from shape theory.
