
We consider the two-dimensional Cahn-Hilliard equation with logarithmic potentials and periodic boundary conditions. We employ the standard semi-implicit numerical scheme which treats the linear fourth-order dissipation term implicitly and the nonlinear term explicitly. Under natural constraints on the time step we prove strict phase separation and energy stability of the semi-implicit scheme. This appears to be the first rigorous result for the semi-implicit discretization of the Cahn-Hilliard equation with singular potentials.
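
For reference, a minimal sketch of such a semi-implicit step on a periodic grid, using a Fourier spectral discretization and, for simplicity, the polynomial potential $f(u)=u^3-u$ in place of the paper's logarithmic potential; the grid size, time step, and parameter values are illustrative assumptions, not the paper's:

```python
import numpy as np

def semi_implicit_ch_step(u, tau, eps, k2, k4):
    """One semi-implicit Cahn-Hilliard step: the linear fourth-order
    dissipation is treated implicitly, the nonlinear term explicitly.
    Sketch with the polynomial potential f(u) = u**3 - u (the paper
    treats the logarithmic potential)."""
    f_hat = np.fft.fft2(u**3 - u)
    u_hat = np.fft.fft2(u)
    # (1 + tau*eps*k^4) u_hat_new = u_hat - tau*k^2 * f_hat
    u_hat_new = (u_hat - tau * k2 * f_hat) / (1.0 + tau * eps * k4)
    return np.real(np.fft.ifft2(u_hat_new))

# periodic grid, integer wavenumbers
n = 64
k = np.fft.fftfreq(n, d=1.0 / n)
kx, ky = np.meshgrid(k, k, indexing="ij")
k2 = kx**2 + ky**2
k4 = k2**2

rng = np.random.default_rng(0)
u = 0.05 * rng.standard_normal((n, n))
mass0 = u.mean()
for _ in range(50):
    u = semi_implicit_ch_step(u, tau=1e-3, eps=0.01, k2=k2, k4=k4)
```

Note that the zero mode is untouched by both terms, so the scheme conserves mass exactly, one of the structural properties behind the stability analysis.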


In this work, we aim to calibrate the score outputs of an estimator for the binary classification problem by finding an 'optimal' mapping to class probabilities, where 'optimal' means that the mapping minimizes the classification error (or, equivalently, maximizes the accuracy). We show that for the given target variables and the score outputs of an estimator, an 'optimal' soft mapping, which monotonically maps the score values to probabilities, is in fact a hard mapping that maps the score values to $0$ and $1$. We show that this hard-mapping characteristic is preserved for class-weighted errors (where the accuracy on one class is more important), sample-weighted errors (where the accurate classification of individual samples is not equally important), and even general linear losses. We propose a sequential recursive merger approach, which produces an 'optimal' hard mapping (for the samples observed so far) sequentially with each incoming new sample. Our approach has time complexity logarithmic in the sample size, which is optimally efficient.
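
To see why the optimal monotone mapping is hard, note that for 0-1 loss it reduces to a single threshold on the scores. The sketch below brute-forces that threshold over the $n+1$ candidate cuts; it illustrates the hard-mapping observation, not the paper's sequential merger algorithm, and the function name is ours:

```python
import numpy as np

def optimal_hard_mapping(scores, labels):
    """Best monotone hard mapping score -> {0, 1}, i.e. a single
    threshold minimizing the number of misclassified samples.
    Cut c sends the c lowest scores to class 0, the rest to class 1."""
    order = np.argsort(scores)
    y = np.asarray(labels)[order]
    n = len(y)
    # err(c) = (#positives below cut c) + (#negatives at/above cut c)
    pos_below = np.concatenate(([0], np.cumsum(y)))
    neg_above = np.concatenate((np.cumsum((1 - y)[::-1])[::-1], [0]))
    errors = pos_below + neg_above
    c = int(np.argmin(errors))
    thresh = -np.inf if c == 0 else np.sort(scores)[c - 1]
    return thresh, int(errors[c])
```

The brute-force scan is O(n); the paper's sequential scheme maintains this optimum in logarithmic time per new sample.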

The Wilcoxon rank-sum test is one of the most popular distribution-free procedures for testing the equality of two univariate probability distributions. One of the main reasons for its popularity can be attributed to the remarkable result of Hodges and Lehmann (1956), which shows that the asymptotic relative efficiency of Wilcoxon's test with respect to Student's $t$-test, under location alternatives, never falls below 0.864, despite the former being exactly distribution-free for all sample sizes. Even more striking is the result of Chernoff and Savage (1958), which shows that the efficiency of a Gaussian score transformed Wilcoxon's test, against the $t$-test, is lower bounded by 1. In this paper we study the two-sample problem in the multivariate setting and propose distribution-free analogues of the Hotelling $T^2$ test (the natural multidimensional counterpart of Student's $t$-test) based on optimal transport, and obtain extensions of the above celebrated results over various natural families of multivariate distributions. Our proposed tests are consistent against a general class of alternatives and satisfy Hodges-Lehmann and Chernoff-Savage-type efficiency lower bounds, despite being entirely agnostic to the underlying data generating mechanism. In particular, a collection of our proposed tests suffers from no loss in asymptotic efficiency when compared to Hotelling $T^2$. To the best of our knowledge, this is the first collection of multivariate, nonparametric, exactly distribution-free tests that provably achieve such attractive efficiency lower bounds. We also demonstrate the broader scope of our methods in optimal transport based nonparametric inference by constructing exactly distribution-free multivariate tests for mutual independence, which suffer from no loss in asymptotic efficiency against the classical Wilks' likelihood ratio test, under Konijn alternatives.
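
For background, the classical univariate statistic the paper generalizes is simple to compute; a sketch (assuming no ties) together with its exact null mean and variance is below. The paper's multivariate tests replace these univariate ranks with optimal-transport-based multivariate ranks:

```python
import numpy as np

def rank_sum(x, y):
    """Wilcoxon rank-sum statistic W = sum of the ranks of the x-sample
    in the pooled sample, plus its exact null mean and variance
    (no ties assumed)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    m, n = len(x), len(y)
    pooled = np.concatenate([x, y])
    ranks = pooled.argsort().argsort() + 1  # ranks 1 .. m+n
    w = float(ranks[:m].sum())
    mean = m * (m + n + 1) / 2.0
    var = m * n * (m + n + 1) / 12.0
    return w, mean, var
```

Because W depends on the data only through ranks, its null distribution is the same for every continuous distribution, which is the exact distribution-freeness property the multivariate construction preserves.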

This paper studies bulk-surface splitting methods of first order for (semi-linear) parabolic partial differential equations with dynamic boundary conditions. The proposed Lie splitting scheme is based on a reformulation of the problem as a coupled partial differential-algebraic equation system, i.e., the boundary conditions are considered as a second dynamic equation which is coupled to the bulk problem. The splitting approach is combined with bulk-surface finite elements and an implicit Euler discretization of the two subsystems. We prove first-order convergence of the resulting fully discrete scheme in the presence of a weak CFL condition of the form $\tau \leq c h$ for some constant $c>0$. The convergence is also illustrated numerically using dynamic boundary conditions of Allen-Cahn-type.
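
The splitting-plus-implicit-Euler structure can be illustrated on a linear toy system $u' = (A+B)u$ with non-commuting $A$ and $B$ standing in for the bulk and surface subsystems; the matrices and step counts below are illustrative assumptions. Each subflow is advanced by one implicit Euler solve, and the observed error decays at first order, mirroring the paper's convergence result:

```python
import numpy as np

def lie_split_step(u, tau, A, B):
    """One Lie splitting step, each subflow advanced by implicit Euler:
    solve (I - tau*A) u_star = u, then (I - tau*B) u_new = u_star."""
    I = np.eye(len(u))
    u_star = np.linalg.solve(I - tau * A, u)
    return np.linalg.solve(I - tau * B, u_star)

A = np.array([[0.0, 1.0], [0.0, 0.0]])   # stands in for the "bulk" part
B = np.array([[0.0, 0.0], [1.0, 0.0]])   # stands in for the "surface" part
u0 = np.array([1.0, 0.0])
T = 1.0
# (A+B)^2 = I, so exp(T(A+B)) = cosh(T) I + sinh(T) (A+B)
exact = np.cosh(T) * u0 + np.sinh(T) * (A + B) @ u0

def solve(nsteps):
    u, tau = u0.copy(), T / nsteps
    for _ in range(nsteps):
        u = lie_split_step(u, tau, A, B)
    return u

e1 = np.linalg.norm(solve(100) - exact)
e2 = np.linalg.norm(solve(200) - exact)  # roughly halves: first order
```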

Solving semiparametric models can be computationally challenging because the dimension of the parameter space may grow large with increasing sample size. The classical Newton's method becomes quite slow and unstable due to the intensive calculation of the large Hessian matrix and its inverse. Iterative methods that separately update the parameters of the finite-dimensional component and the infinite-dimensional component have been developed to speed up each iteration, but they often take more steps to converge, or even sacrifice estimation precision due to sub-optimal update directions. We propose a computationally efficient implicit profiling algorithm that simultaneously achieves the fast iteration steps of iterative methods and the optimal update direction of Newton's method, by profiling out the infinite-dimensional component as a function of the finite-dimensional component. We devise a first-order approximation for the case where the profiling function has no explicit analytical form. We show that our implicit profiling method always solves any local quadratic programming problem in two steps. In two numerical experiments, under semiparametric transformation models and GARCH-M models, we demonstrate the computational efficiency and statistical precision of our implicit profiling method.
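
The two-step behavior on quadratic problems can be checked directly on a toy quadratic $f(z) = \tfrac12 z^\top H z - b^\top z$ with $z = (\theta, \eta)$, where $\theta$ plays the finite-dimensional component and $\eta$ the (here finite) nuisance; the dimensions and data below are illustrative assumptions. Profiling out $\eta$ makes the profiled Hessian the Schur complement of $H_{22}$, so one Newton step in $\theta$ followed by one profiling solve for $\eta$ lands exactly on the minimizer:

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((4, 4))
H = M @ M.T + 4 * np.eye(4)      # positive definite Hessian
b = rng.standard_normal(4)
p = 1                             # dim of the "finite" component theta
H11, H12, H22 = H[:p, :p], H[:p, p:], H[p:, p:]
b1, b2 = b[:p], b[p:]

def profile_eta(theta):
    """Inner solve: eta(theta) = argmin_eta f(theta, eta)."""
    return np.linalg.solve(H22, b2 - H12.T @ theta)

def profiled_newton_step(theta):
    """Newton step on g(theta) = f(theta, profile_eta(theta)).
    By the envelope theorem, g'(theta) = H11 theta + H12 eta(theta) - b1,
    and the profiled Hessian is the Schur complement of H22 in H."""
    eta = profile_eta(theta)
    grad = H11 @ theta + H12 @ eta - b1
    S = H11 - H12 @ np.linalg.solve(H22, H12.T)
    return theta - np.linalg.solve(S, grad)

theta = profiled_newton_step(np.zeros(p))  # step 1: update theta
eta = profile_eta(theta)                   # step 2: profile out eta
z = np.concatenate([theta, eta])
z_star = np.linalg.solve(H, b)             # exact minimizer, H z = b
```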

We propose a new iterative scheme to compute the numerical solution to an over-determined boundary value problem for a general quasilinear elliptic PDE. The main idea is to repeatedly solve its linearization by using the quasi-reversibility method with a suitable Carleman weight function. The presence of the Carleman weight function allows us to employ a Carleman estimate to prove the convergence of the sequence generated by the iterative scheme to the desired solution. The convergence of the iteration is fast, at an exponential rate, without the need for a good initial guess. We apply this method to compute solutions to some general quasilinear elliptic equations and a large class of first-order Hamilton-Jacobi equations. Numerical results are presented.
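
The linearize-and-solve loop can be illustrated on a 1D toy semilinear problem $-u'' = \sin(u) + g(x)$ with zero Dirichlet data, iterating $-u_{k+1}'' = \sin(u_k) + g$; the right-hand side and grid are illustrative assumptions, and this unweighted Picard sketch omits the Carleman weight (which is what gives the paper convergence without a good initial guess for the over-determined problem):

```python
import numpy as np

n = 99
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)
# standard second-difference matrix for -u'' with zero Dirichlet BCs
L = (np.diag(2 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2

def iterate(u):
    """One linearization step: solve -u_new'' = sin(u) + 10*x*(1-x)."""
    return np.linalg.solve(L, np.sin(u) + 10 * x * (1 - x))

u = np.zeros(n)
diffs = []
for _ in range(8):
    u_new = iterate(u)
    diffs.append(np.linalg.norm(u_new - u, np.inf))
    u = u_new
# the map is a contraction (||L^{-1}|| ~ 1/8, sin is 1-Lipschitz),
# so successive differences decay geometrically
```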

A new fixed (non-adaptive) recursive scheme for multigrid algorithms is introduced. Governed by a positive parameter $\kappa$ called the cycle counter, this scheme generates a family of multigrid cycles dubbed $\kappa$-cycles. The well-known $V$-cycle, $F$-cycle, and $W$-cycle are shown to be particular members of this rich $\kappa$-cycle family, which satisfies the property that the total number of recursive calls in a single cycle is a polynomial of degree $\kappa$ in the number of levels of the cycle. This broadening of the scope of fixed multigrid cycles is shown to be potentially significant for the solution of some large problems on platforms, such as GPU processors, where the overhead induced by recursive calls may be relatively significant. In cases of problems for which the convergence of standard $V$-cycles or $F$-cycles (corresponding to $\kappa=1$ and $\kappa=2$, respectively) is particularly slow, and yet the cost of $W$-cycles is very high due to the large number of recursive calls (which is exponential in the number of levels), intermediate values of $\kappa$ may prove to yield significantly faster run-times. This is demonstrated in examples where $\kappa$-cycles are used for the solution of rotated anisotropic diffusion problems, both as a stand-alone solver and as a preconditioner. Moreover, a simple model is presented for predicting the approximate run-time of the $\kappa$-cycle, which is useful in pre-selecting an appropriate cycle counter for a given problem on a given platform. Implementing the $\kappa$-cycle requires making just a small change in the classical multigrid cycle.
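
The abstract does not spell out the recursion, but one candidate consistent with its stated properties is a cycle that recurses into one $\kappa$-cycle and one $(\kappa-1)$-cycle on the next coarser level. This is an assumption for illustration, not necessarily the paper's exact definition; it does reproduce the stated call counts ($\kappa=1$ gives the V-cycle's linear count, $\kappa=2$ the F-cycle's quadratic count, and $\kappa \geq$ levels the W-cycle's exponential count):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def calls(kappa, levels):
    """Total recursive calls of a kappa-cycle on `levels` levels under
    the assumed recursion: each call recurses into one kappa-cycle and
    one (kappa-1)-cycle on the next coarser level; kappa = 0 means no
    further recursion. The count is a degree-kappa polynomial in levels
    (capped at 2**levels - 1 once kappa >= levels)."""
    if levels == 0 or kappa == 0:
        return 0
    return 1 + calls(kappa, levels - 1) + calls(kappa - 1, levels - 1)
```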

Physics-informed neural networks (PINNs) show great advantages in solving partial differential equations. In this paper, we propose, for the first time, to study conformable time-fractional diffusion equations using PINNs. By solving a supervised learning task, we design a new spatio-temporal function approximator with high data efficiency. The L-BFGS algorithm is used to optimize our loss function, and the backpropagation algorithm is used to update our parameters to obtain our numerical solutions. For the forward problem, we take the initial/boundary conditions as the data and use the PINN to solve the corresponding partial differential equation. Three numerical examples are carried out to demonstrate the effectiveness of our methods. In particular, when the order of the conformable fractional derivative $\alpha$ tends to $1$, a class of weighted PINNs is introduced to overcome the accuracy degradation caused by the singularity of solutions. For the inverse problem, we use the obtained data to train the neural network, and the estimation of the parameter $\lambda$ in the equation is elaborated. Similarly, we give three numerical examples to show that our method can accurately identify the parameters, even if the training data is corrupted with 1\% uncorrelated noise.
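
The derivative notion the PINN loss enforces is the conformable fractional derivative, $T_\alpha f(t) = \lim_{\epsilon\to 0} [f(t + \epsilon t^{1-\alpha}) - f(t)]/\epsilon$, which for differentiable $f$ equals $t^{1-\alpha} f'(t)$. A small numeric check of this identity (the network and training loop themselves are omitted; the test point and order below are arbitrary choices):

```python
def conformable_derivative(f, t, alpha, eps=1e-6):
    """Conformable fractional derivative of order alpha in (0, 1],
    approximated directly from its limit definition:
    T_alpha f(t) ~ (f(t + eps * t**(1 - alpha)) - f(t)) / eps."""
    return (f(t + eps * t ** (1 - alpha)) - f(t)) / eps

t, alpha = 2.0, 0.5
approx = conformable_derivative(lambda s: s**2, t, alpha)
exact = 2 * t ** (2 - alpha)   # t**(1-alpha) * f'(t) with f'(t) = 2t
```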

We consider a nonlocal evolution equation representing the continuum limit of a large ensemble of interacting particles on graphs forced by noise. The two principal ingredients of the continuum model are a nonlocal term and a Q-Wiener process, describing the interactions among the particles in the network and the stochastic forcing, respectively. The network connectivity is given by a square-integrable function called a graphon. We prove that the initial value problem for the continuum model is well-posed. Further, we construct a semidiscrete (discrete in space, continuous in time) scheme and a fully discrete scheme for the nonlocal model. The former is obtained by a discontinuous Galerkin method, and the latter is based on further discretizing time using the Euler-Maruyama method. We prove convergence and estimate the rate of convergence in each case. For the semidiscrete scheme, the rate-of-convergence estimate is expressed in terms of the regularity of the graphon, the Q-Wiener process, and the initial data. We work in generalized Lipschitz spaces, which allows us to treat models with data of lower regularity. This is important for applications, as many interesting types of connectivity, including small-world and power-law, are expressed by graphons that are not smooth. The error analysis of the fully discrete scheme, on the other hand, reveals that for some models common in applied science, one has a higher speed of convergence than that predicted by the standard estimates for the Euler-Maruyama method. The rate-of-convergence analysis is supplemented with detailed numerical experiments, which are consistent with our analytical results. As a by-product, this work presents a rigorous justification for taking the continuum limit for a large class of interacting dynamical systems on graphs subject to noise.
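
The time discretization used in the fully discrete scheme is the standard Euler-Maruyama method; a generic scalar sketch (not the paper's graph model, and the drift, noise level, and step counts are illustrative assumptions) is:

```python
import numpy as np

def euler_maruyama(x0, drift, sigma, T, nsteps, rng):
    """Euler-Maruyama for dX = drift(X) dt + sigma dW with additive
    noise: X_{n+1} = X_n + drift(X_n) dt + sigma * sqrt(dt) * xi_n."""
    dt = T / nsteps
    x = float(x0)
    for _ in range(nsteps):
        dW = rng.standard_normal() * np.sqrt(dt)
        x = x + drift(x) * dt + sigma * dW
    return x

rng = np.random.default_rng(0)
# with sigma = 0 the scheme reduces to explicit Euler on x' = -x,
# so x(1) should approximate exp(-1)
det = euler_maruyama(1.0, lambda x: -x, 0.0, 1.0, 10000, rng)
```

For additive noise such as the Q-Wiener forcing here, Euler-Maruyama converges strongly faster than the generic order 1/2, which is consistent with the improved rates the error analysis reveals.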

In this paper, we study the weight spectrum of linear codes with \emph{super-linear} field size and use the probabilistic method to show that for nearly all such codes, the corresponding weight spectrum is very close to that of a maximum distance separable (MDS) code.
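
The MDS benchmark spectrum is classical (it follows from the MacWilliams identities): an $[n,k]$ MDS code over $\mathrm{GF}(q)$ has minimum distance $d = n-k+1$ and weight distribution $A_w = \binom{n}{w}(q-1)\sum_{j=0}^{w-d}(-1)^j\binom{w-1}{j}q^{w-d-j}$. A sketch computing it, with the parameter values in the checks chosen purely for illustration:

```python
from math import comb

def mds_weight_spectrum(n, k, q):
    """Weight distribution A_0..A_n of an [n, k] MDS code over GF(q),
    with minimum distance d = n - k + 1:
    A_w = C(n, w) (q-1) * sum_{j=0}^{w-d} (-1)^j C(w-1, j) q^(w-d-j)."""
    d = n - k + 1
    A = [0] * (n + 1)
    A[0] = 1                       # the zero codeword
    for w in range(d, n + 1):
        A[w] = comb(n, w) * (q - 1) * sum(
            (-1) ** j * comb(w - 1, j) * q ** (w - d - j)
            for j in range(w - d + 1))
    return A
```

The spectrum sums to $q^k$, the total number of codewords; for example, the $[4,2,3]$ tetracode over $\mathrm{GF}(3)$ has 8 words of weight 3 and none of weight 4.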

We develop an approach to risk minimization and stochastic optimization that provides a convex surrogate for variance, allowing near-optimal and computationally efficient trading between approximation and estimation error. Our approach builds on techniques for distributionally robust optimization and Owen's empirical likelihood, and we provide a number of finite-sample and asymptotic results characterizing the theoretical performance of the estimator. In particular, we show that our procedure comes with certificates of optimality, achieving (in some scenarios) faster rates of convergence than empirical risk minimization by virtue of automatically balancing bias and variance. We give corroborating empirical evidence showing that in practice, the estimator indeed trades between variance and absolute performance on a training sample, improving out-of-sample (test) performance over standard empirical risk minimization for a number of classification problems.
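
To first order, such variance-aware objectives behave like the empirical mean plus a standard-error penalty, $\bar{\ell} + C\sqrt{\widehat{\mathrm{Var}}(\ell)/n}$; the sketch below uses that simple stand-in, not the paper's exact convex robust-optimization surrogate, and the constant $C$ is an illustrative choice:

```python
import numpy as np

def variance_regularized_risk(losses, C=1.0):
    """Mean loss plus a variance penalty, mean + C * sqrt(var / n):
    a first-order stand-in for a variance-aware convex surrogate.
    Between two predictors with equal mean loss, it prefers the one
    with lower loss variance."""
    losses = np.asarray(losses, float)
    n = len(losses)
    return losses.mean() + C * np.sqrt(losses.var(ddof=1) / n)

# two predictors with identical mean loss but different variance
low_var = variance_regularized_risk([0.5, 0.5, 0.5, 0.5])
high_var = variance_regularized_risk([0.0, 1.0, 0.0, 1.0])
```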
