
This work develops a class of probabilistic algorithms for the numerical solution of nonlinear, time-dependent partial differential equations (PDEs). Current state-of-the-art PDE solvers treat the space- and time-dimensions separately, serially, and with black-box algorithms, which obscures the interactions between spatial and temporal approximation errors and misguides the quantification of the overall error. To fix this issue, we introduce a probabilistic version of a technique called method of lines. The proposed algorithm begins with a Gaussian process interpretation of finite difference methods, which then interacts naturally with filtering-based probabilistic ordinary differential equation (ODE) solvers because they share a common language: Bayesian inference. Joint quantification of space- and time-uncertainty becomes possible without losing the performance benefits of well-tuned ODE solvers. Thereby, we extend the toolbox of probabilistic programs for differential equation simulation to PDEs.
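
The core idea can be illustrated with a classical method-of-lines computation: discretize the spatial derivative with a finite-difference stencil and hand the resulting ODE system to a time integrator. The sketch below does exactly that for the 1-D heat equation $u_t = u_{xx}$ with NumPy and SciPy; the grid size, initial condition, and tolerances are illustrative choices, and the paper's probabilistic version replaces both building blocks (the stencil by Gaussian-process conditioning, the Runge–Kutta integrator by an ODE filter), which is not reproduced here.

```python
# Minimal classical method-of-lines sketch for the 1-D heat equation u_t = u_xx.
# The probabilistic method of lines replaces both building blocks below
# (the finite-difference stencil and the ODE solver) with Gaussian-process
# and filtering-based counterparts; here classical stand-ins are used.
import numpy as np
from scipy.integrate import solve_ivp

n = 100                          # number of grid points, boundaries included (illustrative)
x = np.linspace(0.0, 1.0, n)
dx = x[1] - x[0]
u0 = np.sin(np.pi * x)           # initial condition (illustrative)

def rhs(t, u):
    """Spatial discretization: second-order central differences,
    homogeneous Dirichlet boundary conditions."""
    du = np.zeros_like(u)
    du[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
    return du

# Time integration; a probabilistic ODE filter would go here instead.
sol = solve_ivp(rhs, (0.0, 0.1), u0, method="RK45", rtol=1e-6, atol=1e-8)
print(sol.y[:, -1].max())        # peak of the (decayed) solution at t = 0.1
```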

Related content


In the context of image processing, given a $k$-th order, homogeneous and linear differential operator with constant coefficients, we study a class of variational problems whose regularizing terms depend on the operator. More precisely, the regularizers are integrals of spatially inhomogeneous integrands with convex dependence on the differential operator applied to the image function. The setting is made rigorous by means of the theory of Radon measures and of suitable function spaces modeled on $BV$. We prove the lower semicontinuity of the functionals at stake and the existence of minimizers for the corresponding variational problems. Then, we embed the latter into a bilevel scheme in order to automatically compute the space-dependent regularization parameters, thus allowing for good flexibility and preservation of details in the reconstructed image. We establish existence of optima for the scheme and finally substantiate its feasibility by numerical examples in image denoising. The cases that we treat are Huber versions of the first- and second-order total variation with both the Huber and the regularization parameter being spatially dependent. Notably, the spatially dependent version of the second-order total variation produces high-quality reconstructions when compared to regularizations of similar type, and the introduction of the spatially dependent Huber parameter leads to a further enhancement of the image details.
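
As a rough illustration of the kind of model being analyzed, the sketch below minimizes a first-order Huber total-variation energy with a spatially varying weight $\alpha(x)$ and Huber parameter $\gamma(x)$ by plain gradient descent. The maps $\alpha$ and $\gamma$, the toy image, and the step size are hand-picked assumptions; the paper's bilevel scheme, which learns these spatially dependent parameters, is not reproduced.

```python
# Sketch: spatially weighted Huber total-variation denoising by gradient
# descent.  alpha(x) and gamma(x) are illustrative hand-picked maps; the
# paper learns them through a bilevel scheme, which is not reproduced here.
import numpy as np

rng = np.random.default_rng(0)
clean = np.zeros((64, 64)); clean[16:48, 16:48] = 1.0
f = clean + 0.1 * rng.standard_normal(clean.shape)   # noisy observation

alpha = 0.10 * np.ones_like(f)       # spatially varying regularization weight
gamma = 0.05 * np.ones_like(f)       # spatially varying Huber parameter

def grad(u):          # forward differences with replicated boundary
    ux = np.diff(u, axis=1, append=u[:, -1:])
    uy = np.diff(u, axis=0, append=u[-1:, :])
    return ux, uy

def div(px, py):      # backward-difference divergence (approx. -grad^T)
    dx = np.diff(px, axis=1, prepend=px[:, :1])
    dy = np.diff(py, axis=0, prepend=py[:1, :])
    return dx + dy

u, tau = f.copy(), 0.05              # step size chosen small enough for stability
for _ in range(500):
    ux, uy = grad(u)
    g = np.sqrt(ux**2 + uy**2) + 1e-12
    w = np.where(g <= gamma, 1.0 / gamma, 1.0 / g)   # h'_gamma(g) / g
    u -= tau * (u - f - div(alpha * w * ux, alpha * w * uy))
print(float(np.mean((u - clean) ** 2)))              # reconstruction MSE
```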

In fields such as finance, insurance, and system reliability, it is often of interest to measure the dependence among variables by modeling a multivariate distribution using a copula. Copula models with parametric assumptions are easy to estimate but can be highly biased when such assumptions are false, while empirical copulas are non-smooth and often not genuine copulas, making inference about dependence challenging in practice. As a compromise, the empirical Bernstein copula provides a smooth estimator, but the choice of its tuning parameters remains elusive. In this paper, by using the so-called empirical checkerboard copula, we build a hierarchical empirical Bayes model that enables the estimation of a smooth copula function for arbitrary dimensions. The proposed estimator based on the multivariate Bernstein polynomials is itself a genuine copula, and the selection of its dimension-varying degrees is data-dependent. We also show that the proposed copula estimator provides a more accurate estimate of several multivariate dependence measures, which can be obtained in closed form. We investigate the asymptotic and finite-sample performance of the proposed estimator and compare it with some nonparametric estimators through simulation studies. An application to portfolio risk management is presented along with a quantification of estimation uncertainty.
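
For intuition, the sketch below computes a bivariate empirical Bernstein copula with a single fixed degree $m$, built from the empirical copula evaluated on a grid and Bernstein (binomial) weights. The degree $m = 10$ and the toy Gaussian data are illustrative assumptions; the paper's hierarchical empirical Bayes selection of dimension-varying degrees is not reproduced.

```python
# Sketch: bivariate empirical Bernstein copula with a single fixed degree m.
# The paper selects dimension-varying degrees via a hierarchical empirical
# Bayes model built on the empirical checkerboard copula; that selection
# step is not reproduced here, and m = 10 is an illustrative choice.
import numpy as np
from scipy.stats import binom, rankdata

rng = np.random.default_rng(1)
x = rng.standard_normal(500)
y = 0.7 * x + np.sqrt(1 - 0.7**2) * rng.standard_normal(500)   # toy data

u_r = rankdata(x) / len(x)          # pseudo-observations (scaled ranks)
v_r = rankdata(y) / len(y)

def empirical_copula(u, v):
    return np.mean((u_r <= u) & (v_r <= v))

def bernstein_copula(u, v, m=10):
    grid = np.arange(m + 1)
    # empirical copula evaluated on the (m+1) x (m+1) grid
    C = np.array([[empirical_copula(j / m, k / m) for k in grid] for j in grid])
    pu = binom.pmf(grid, m, u)      # Bernstein weights in u
    pv = binom.pmf(grid, m, v)      # Bernstein weights in v
    return float(pu @ C @ pv)

print(bernstein_copula(0.5, 0.5))   # smooth copula estimate at (0.5, 0.5)
```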

In comparative research on time-to-event data for two groups, when two survival curves cross each other, it may be difficult to use the log-rank test and hazard ratio (HR) to properly assess the treatment benefit. Our aim was to identify a method for evaluating the treatment benefits for two groups in the above situation. We quantified treatment benefits based on an intuitive measure called the area between two survival curves (ABS), which is a robust measure of treatment benefits in clinical trials regardless of whether the proportional hazards assumption is violated or two survival curves cross each other. Additionally, we propose a permutation test based on the ABS, and we evaluate the effectiveness and reliability of this test with simulated data. The ABS permutation test is a robust statistical inference method with an acceptable type I error rate and superior power to detect differences in treatment effects, especially when the proportional hazards assumption is violated. The ABS can be used to intuitively quantify treatment differences over time and provide reliable conclusions in complicated situations, such as crossing survival curves. The R Package "ComparisonSurv" contains the proposed methods and is available from //CRAN.R-project.org/package=ComparisonSurv. Keywords: Survival analysis; Area between two survival curves; Crossing survival curves; Treatment benefit
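
A simplified, self-contained sketch of the ABS idea follows: estimate a Kaplan-Meier curve per group, integrate the absolute difference between the two step functions up to a truncation time, and calibrate the statistic by permuting the group labels. The truncation at the 90% quantile of follow-up, the number of permutations, and the toy crossing-hazards data are illustrative simplifications rather than the procedure implemented in ComparisonSurv.

```python
# Simplified sketch of the area-between-survival-curves (ABS) statistic and
# a label-permutation test.  The integration limit and permutation count are
# illustrative; the ComparisonSurv package implements the authors' procedure.
import numpy as np

rng = np.random.default_rng(2)

def km(time, event):
    """Kaplan-Meier survival estimate; returns event times and S(t)."""
    order = np.argsort(time)
    time, event = time[order], event[order]
    n_at_risk, s = len(time), 1.0
    times, surv = [], []
    for t in np.unique(time):
        here = time == t
        d = int(event[here].sum())            # events at time t
        if d > 0:
            s *= 1.0 - d / n_at_risk
            times.append(t); surv.append(s)
        n_at_risk -= int(here.sum())          # drop events and censorings
    return np.array(times), np.array(surv)

def step_eval(times, surv, grid):
    """Evaluate the right-continuous step function S(t) on a grid."""
    idx = np.searchsorted(times, grid, side="right") - 1
    out = np.ones_like(grid)
    mask = idx >= 0
    out[mask] = surv[idx[mask]]
    return out

def abs_stat(time, event, group, grid):
    t0, s0 = km(time[group == 0], event[group == 0])
    t1, s1 = km(time[group == 1], event[group == 1])
    diff = np.abs(step_eval(t1, s1, grid) - step_eval(t0, s0, grid))
    return float(np.sum(diff) * (grid[1] - grid[0]))   # rectangle-rule area

# toy crossing-hazards data: the two groups differ early versus late
n = 200
group = np.repeat([0, 1], n // 2)
t_event = np.where(group == 0, rng.exponential(1.0, n), rng.weibull(2.0, n) * 1.3)
t_cens = rng.uniform(0.5, 3.0, n)
time = np.minimum(t_event, t_cens)
event = (t_event <= t_cens).astype(int)

grid = np.linspace(0.0, np.quantile(time, 0.9), 200)
observed = abs_stat(time, event, group, grid)
perm = [abs_stat(time, event, rng.permutation(group), grid) for _ in range(500)]
print(observed, float(np.mean(np.array(perm) >= observed)))   # ABS and p-value
```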

Existing frameworks for probabilistic inference assume the inferential target is the posited statistical model's parameter. In machine learning applications, however, often there is no statistical model, so the quantity of interest is not a model parameter but a statistical functional. In this paper, we develop a generalized inferential model framework for cases when this functional is a risk minimizer or solution to an estimating equation. We construct a data-dependent possibility measure for uncertainty quantification and inference whose computation is based on the bootstrap. We then prove that this new generalized inferential model provides approximately valid inference in the sense that the plausibility values assigned to hypotheses about the unknowns are asymptotically well-calibrated in a frequentist sense. Among other things, this implies that confidence regions for the underlying functional derived from our new generalized inferential model are approximately valid. The method is shown to perform well in classical examples, including quantile regression, and in a personalized medicine application.
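
One simple way to picture a bootstrap-based possibility measure is sketched below for the median as an illustrative functional: the plausibility of a hypothesized value is the bootstrap proportion of estimates at least as far from the point estimate as that value. This mimics the flavor of the construction but is not claimed to be the paper's exact definition, and the calibration guarantees proved in the paper are not verified by this toy.

```python
# Hedged sketch: a bootstrap-based possibility contour for a statistical
# functional (here the median, as an illustrative example).  The plausibility
# of a hypothesized value is the fraction of bootstrap estimates at least as
# far from the point estimate; this is an illustrative construction, not the
# paper's exact definition.
import numpy as np

rng = np.random.default_rng(3)
data = rng.standard_normal(200) + 1.0      # toy sample, true median = 1

theta_hat = np.median(data)
boot = np.array([np.median(rng.choice(data, size=data.size, replace=True))
                 for _ in range(2000)])

def plausibility(theta):
    """Possibility contour pl(theta) in [0, 1]; pl(theta_hat) = 1."""
    return float(np.mean(np.abs(boot - theta_hat) >= abs(theta - theta_hat)))

# approximate 95% plausibility region: all theta with pl(theta) > 0.05
grid = np.linspace(0.5, 1.5, 201)
region = grid[np.array([plausibility(t) for t in grid]) > 0.05]
print(theta_hat, region.min(), region.max())
```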

Let $P$ be a linear differential operator over $\mathcal{D} \subset \mathbb{R}^d$ and $U = (U_x)_{x \in \mathcal{D}}$ a second order stochastic process. In the first part of this article, we prove a new, simple necessary and sufficient condition for all the trajectories of $U$ to satisfy the partial differential equation (PDE) $P(U) = 0$. This condition is formulated in terms of the covariance kernel of $U$. The novelty of this result is that the equality $P(U) = 0$ is understood in the sense of distributions, which is a functional analysis framework particularly adapted to the study of PDEs. This theorem provides valuable insights for the second part of this article, which is dedicated to performing "physically informed" machine learning on data that solves the homogeneous three-dimensional free space wave equation. We perform Gaussian process regression (GPR) on this data, which is a kernel-based Bayesian approach to machine learning. To do so, we put Gaussian process (GP) priors over the wave equation's initial conditions and propagate them through the wave equation. We obtain explicit formulas for the covariance kernel of the corresponding stochastic process; this kernel can then be used for GPR. We explore two particular cases: radial symmetry and a point source. For the former, we derive convolution-free GPR formulas; for the latter, we show a direct link between GPR and the classical triangulation method for point source localization used, e.g., in GPS systems. Additionally, this Bayesian framework gives rise to a new answer to the ill-posed inverse problem of reconstructing initial conditions for the wave equation from finite dimensional data, and simultaneously provides a way of estimating physical parameters from this data as in [Raissi et al., 2017]. We finish by showcasing this physically informed GPR on a number of practical examples.
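
The GPR step itself is standard once a kernel is available, as the generic sketch below shows; a squared-exponential kernel stands in for the paper's wave-equation kernels (obtained by propagating GP priors on the initial conditions through the solution operator), which are not reproduced here.

```python
# Generic Gaussian-process-regression sketch.  A squared-exponential kernel
# is a placeholder for the physics-informed, wave-equation-derived kernels
# constructed in the paper.
import numpy as np

def k(a, b, ell=0.5, sigma=1.0):
    """Squared-exponential kernel (stand-in for the wave-equation kernel)."""
    return sigma**2 * np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)

rng = np.random.default_rng(4)
x_train = np.linspace(-2.0, 2.0, 15)
y_train = np.sin(2.0 * x_train) + 0.05 * rng.standard_normal(x_train.size)
x_test = np.linspace(-2.5, 2.5, 100)

noise = 0.05**2
K = k(x_train, x_train) + noise * np.eye(x_train.size)
L = np.linalg.cholesky(K)
alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))

mean = k(x_test, x_train) @ alpha                        # posterior mean
v = np.linalg.solve(L, k(x_train, x_test))
var = np.diag(k(x_test, x_test)) - np.sum(v**2, axis=0)  # posterior variance
print(mean[:3], var[:3])
```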

A local discontinuous Galerkin (LDG) method for approximating large deformations of prestrained plates was introduced, and tested on several insightful numerical examples, in our previous computational work. This paper presents a numerical analysis of this LDG method, focusing on the free boundary case. The problem consists of minimizing a fourth order bending energy subject to a nonlinear and nonconvex metric constraint. The energy is discretized using LDG and a discrete gradient flow is used for computing discrete minimizers. We first show $\Gamma$-convergence of the discrete energy to the continuous one. Then we prove that the discrete gradient flow decreases the energy at each step and computes discrete minimizers with control of the metric constraint defect. We also present a numerical scheme for initializing the gradient flow and discuss its conditional stability.
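
To illustrate only the energy-decrease mechanism of such a scheme, the toy sketch below runs a discrete gradient flow with step-size backtracking on a simple one-dimensional quadratic energy; the LDG bending energy and the linearized metric constraint of the paper are not reproduced.

```python
# Toy sketch of an energy-decreasing discrete gradient flow with step-size
# backtracking.  A discrete Dirichlet-type energy on a 1-D grid stands in
# for the LDG bending energy, only to illustrate the "decrease the energy
# at each step" mechanism; the metric constraint is omitted.
import numpy as np

n, h = 50, 1.0 / 50

def energy(u):
    return 0.5 * np.sum(np.diff(u)**2) / h + 0.5 * np.sum((u - 1.0)**2) * h

def gradient(u):
    g = (u - 1.0) * h
    g[:-1] -= np.diff(u) / h
    g[1:] += np.diff(u) / h
    return g

u = np.zeros(n)
for step in range(200):
    g, tau = gradient(u), 1.0
    while energy(u - tau * g) > energy(u) - 0.5 * tau * np.sum(g**2):
        tau *= 0.5                        # backtrack until the energy decreases
    u -= tau * g
print(energy(u))                          # monotonically decreased energy
```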

Computational methods for fractional differential equations exhibit an essential instability: even a minor modification of the coefficients or other input data may turn good results into divergent ones. The goal of this paper is to suggest a reliable dual approach that fixes this inconsistency. We suggest using two parallel methods based on the transformation of fractional derivatives through integration by parts or by means of substitution. We introduce the method of substitution and choose the proper discretization scheme that fits the grid points of the by-parts method. The solution is considered reliable only if both methods produce the same results. As an additional control tool, the Taylor series expansion allows us to estimate the approximation errors of fractional derivatives. To demonstrate the proposed dual approach, we apply it to linear, quasilinear and semilinear equations and obtain very good precision in the results. The provided examples and counterexamples support the necessity of the dual approach, because either method, used separately, may produce incorrect results. The order of accuracy is close to that of the fractional derivative approximations.
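
The kind of cross-check the dual approach relies on can be illustrated by comparing a standard discretization of a fractional derivative against a known reference value. The sketch below applies the classical L1 scheme to the Caputo derivative of $u(t) = t^2$ and compares it with the exact value $2t^{2-\alpha}/\Gamma(3-\alpha)$; the paper's two specific transformations (integration by parts and substitution) are not reproduced.

```python
# Sketch: L1 finite-difference approximation of the Caputo derivative,
# checked against the exact Caputo derivative of u(t) = t^2.  This only
# illustrates the kind of cross-check between an approximation and a
# reference value that the dual approach advocates.
import numpy as np
from math import gamma

alpha, T, n = 0.6, 1.0, 400          # fractional order, final time, steps
dt = T / n
t = np.linspace(0.0, T, n + 1)
u = t**2

def caputo_l1(u, dt, alpha):
    """L1 scheme for the Caputo derivative of order alpha in (0, 1) at t_n."""
    n = len(u) - 1
    k = np.arange(n)                 # k = 0, ..., n-1
    w = (n - k)**(1 - alpha) - (n - k - 1)**(1 - alpha)
    return dt**(-alpha) / gamma(2 - alpha) * np.sum(w * np.diff(u))

approx = caputo_l1(u, dt, alpha)
exact = 2.0 * T**(2 - alpha) / gamma(3 - alpha)   # Caputo derivative of t^2
print(approx, exact, abs(approx - exact))
```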

There has been recent interest in first-order methods for linear programming (LP). In this paper, we propose a stochastic algorithm using variance reduction and restarts for solving sharp primal-dual problems such as LP. We show that the proposed stochastic method exhibits a linear convergence rate for solving sharp instances with high probability. In addition, we propose an efficient coordinate-based stochastic oracle for unconstrained bilinear problems, which has $\mathcal O(1)$ per-iteration cost and improves the complexity of the existing deterministic and stochastic algorithms. Finally, we show that the obtained linear convergence rate is nearly optimal (up to $\log$ terms) for a wide class of stochastic primal-dual methods.
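
For context, the sketch below runs the deterministic primal-dual hybrid gradient method on an unconstrained bilinear saddle-point problem $\min_x \max_y \; c^\top x + y^\top A x - b^\top y$, the structure underlying the LP formulation; the random problem data and step sizes are illustrative, and the paper's coordinate-based stochastic oracle, variance reduction, and restarts are not included.

```python
# Deterministic primal-dual hybrid gradient (PDHG) sketch for the bilinear
# saddle-point problem min_x max_y c^T x + y^T A x - b^T y.  The paper's
# stochastic coordinate oracle, variance reduction and restarts are omitted.
import numpy as np

rng = np.random.default_rng(5)
m, n = 20, 30
A = rng.standard_normal((m, n))
x_star = rng.standard_normal(n)
y_star = rng.standard_normal(m)
b = A @ x_star                        # choose data so a saddle point exists
c = -A.T @ y_star

step = 0.9 / np.linalg.norm(A, 2)     # tau = sigma = step, tau*sigma*||A||^2 < 1
x, y = np.zeros(n), np.zeros(m)
for _ in range(5000):
    x_new = x - step * (c + A.T @ y)
    y = y + step * (A @ (2 * x_new - x) - b)
    x = x_new

# residuals of the optimality conditions A^T y = -c and A x = b
print(np.linalg.norm(A.T @ y + c), np.linalg.norm(A @ x - b))
```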

In this paper, we study an adaptive finite element method for the elliptic equation with line Dirac delta functions as a source term. We investigate the regularity of the solution and the corresponding transmission problem to obtain the jump of the normal derivative of the solution on the line fractures. To handle the singularity of the solution, we adopt meshes that conform to the line fractures, and propose a novel a posteriori error estimator, in which the edge jump residual essentially uses the jump of the normal derivative of the solution on the line fractures. The error estimator is proven to be both reliable and efficient. Finally, an adaptive finite element algorithm is proposed based on the error estimator and the bisection refinement method. Numerical tests are presented to justify the theoretical findings.
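
A runnable one-dimensional toy analogue of the adaptive loop (solve, estimate, mark, refine) is sketched below for $-u'' = \delta_{x_0}$, whose point source plays the role of the line Dirac. The slope-jump indicator, the Dörfler parameter, and the bisection of marked cells are illustrative stand-ins, not the estimator analyzed in the paper.

```python
# Toy analogue of the adaptive loop SOLVE -> ESTIMATE -> MARK -> REFINE for a
# singular source: 1-D Poisson -u'' = delta_{x0} on (0, 1), homogeneous
# Dirichlet data, linear finite elements.  The jump-based indicator is an
# illustrative stand-in for the paper's estimator (which uses the
# normal-derivative jump across line fractures in 2-D).
import numpy as np

x0 = 1.0 / np.pi                      # Dirac location, never a mesh node

def solve(nodes):
    """Linear FEM for -u'' = delta_{x0}, u(0) = u(1) = 0."""
    h = np.diff(nodes)
    n = len(nodes)
    A = np.zeros((n, n))
    for k in range(len(h)):           # assemble the stiffness matrix
        A[k, k] += 1.0 / h[k]; A[k + 1, k + 1] += 1.0 / h[k]
        A[k, k + 1] -= 1.0 / h[k]; A[k + 1, k] -= 1.0 / h[k]
    F = np.zeros(n)
    k = np.searchsorted(nodes, x0) - 1        # element containing x0
    F[k] = (nodes[k + 1] - x0) / h[k]         # hat-function values at x0
    F[k + 1] = (x0 - nodes[k]) / h[k]
    u = np.zeros(n)
    u[1:-1] = np.linalg.solve(A[1:-1, 1:-1], F[1:-1])   # Dirichlet BCs
    return u

def estimate(nodes, u):
    """Heuristic element indicators from slope jumps at the element ends."""
    h = np.diff(nodes)
    slope = np.diff(u) / h
    jump = np.abs(np.diff(slope))             # jumps at interior nodes
    eta = np.zeros(len(h))
    eta[:-1] += np.sqrt(h[:-1]) * jump
    eta[1:] += np.sqrt(h[1:]) * jump
    return eta

nodes = np.linspace(0.0, 1.0, 5)
for it in range(12):
    u = solve(nodes)
    eta = estimate(nodes, u)
    order = np.argsort(eta)[::-1]             # Doerfler marking (theta = 0.5)
    cumulative = np.cumsum(eta[order]**2)
    marked = order[: int(np.searchsorted(cumulative, 0.5 * cumulative[-1])) + 1]
    mids = 0.5 * (nodes[marked] + nodes[marked + 1])     # bisect marked cells
    nodes = np.sort(np.concatenate([nodes, mids]))
print(len(nodes), float(np.min(np.diff(nodes))))   # mesh concentrates near x0
```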

We introduce a new family of deep neural network models. Instead of specifying a discrete sequence of hidden layers, we parameterize the derivative of the hidden state using a neural network. The output of the network is computed using a black-box differential equation solver. These continuous-depth models have constant memory cost, adapt their evaluation strategy to each input, and can explicitly trade numerical precision for speed. We demonstrate these properties in continuous-depth residual networks and continuous-time latent variable models. We also construct continuous normalizing flows, a generative model that can train by maximum likelihood, without partitioning or ordering the data dimensions. For training, we show how to scalably backpropagate through any ODE solver, without access to its internal operations. This allows end-to-end training of ODEs within larger models.
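
A minimal forward pass of such a model is sketched below: a small MLP defines the hidden-state dynamics $dh/dt = f(h, t, \theta)$, and a black-box SciPy solver produces the output state. The weights are random and the adjoint-based backpropagation described in the paper is not implemented; the authors' reference implementation is the torchdiffeq library.

```python
# Minimal forward pass of a "continuous-depth" model: a small MLP defines
# the hidden-state dynamics dh/dt = f(h, t; theta) and a black-box ODE
# solver produces the output state.  Adjoint-based backpropagation from the
# paper is not implemented here; weights are random for illustration.
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(6)
d, hidden = 4, 16
W1, b1 = 0.1 * rng.standard_normal((hidden, d + 1)), np.zeros(hidden)
W2, b2 = 0.1 * rng.standard_normal((d, hidden)), np.zeros(d)

def f(t, h):
    """MLP dynamics; the time t is appended as an extra input feature."""
    z = np.tanh(W1 @ np.concatenate([h, [t]]) + b1)
    return W2 @ z + b2

h0 = rng.standard_normal(d)                    # "input layer" state
sol = solve_ivp(f, (0.0, 1.0), h0, rtol=1e-5, atol=1e-7)
h1 = sol.y[:, -1]                              # "output layer" state
print(h0, h1)
```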
