
This paper applies the gradient discretisation method (GDM) to fourth-order elliptic variational inequalities. The GDM provides a new formulation of error estimates and a complete convergence analysis for several numerical methods. We show that the convergence is unconditional, and that only classical assumptions on the data are needed to establish the convergence results. These results apply to all schemes that fall within the GDM framework.

Related content

We consider the framework of penalized estimation where the penalty term is given by a real-valued polyhedral gauge, which encompasses methods such as LASSO (and many variants thereof such as the generalized LASSO), SLOPE, OSCAR, PACS and others. Each of these estimators can uncover a different structure or ``pattern'' of the unknown parameter vector. We define a general notion of patterns based on subdifferentials and formalize an approach to measure their complexity. For pattern recovery, we provide a minimal condition for a particular pattern to be detected by the procedure with positive probability, the so-called accessibility condition. Using our approach, we also introduce the stronger noiseless recovery condition. For the LASSO, it is well known that the irrepresentability condition is necessary for pattern recovery with probability larger than $1/2$, and we show that the noiseless recovery condition plays exactly the same role, thereby extending and unifying the irrepresentability condition of the LASSO to a broad class of penalized estimators. We show that the noiseless recovery condition can be relaxed when turning to thresholded penalized estimators, extending the idea of the thresholded LASSO: we prove that the accessibility condition is already sufficient (and necessary) for sure pattern recovery by thresholded penalized estimation provided that the signal of the pattern is large enough. Throughout the article, we demonstrate how our findings can be interpreted through a geometrical lens.
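
As an illustration of the thresholding idea in a familiar special case, the following sketch (assuming a plain LASSO via scikit-learn, synthetic data, and an illustrative threshold value, none of which come from the paper) shows how zeroing small coefficients of a LASSO fit can recover the sign pattern of a sparse vector when the signal is strong enough.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p = 200, 50
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:3] = [4.0, -4.0, 4.0]            # strong signal on the first three coordinates
y = X @ beta + rng.standard_normal(n)

# Plain LASSO fit (alpha is the regularisation strength in sklearn's parametrisation).
fit = Lasso(alpha=0.1, fit_intercept=False).fit(X, y)

# Thresholded LASSO: zero out small coefficients before reading off the pattern.
tau = 0.5                              # illustrative threshold
beta_thr = np.where(np.abs(fit.coef_) > tau, fit.coef_, 0.0)

pattern_true = np.sign(beta).astype(int)
pattern_hat = np.sign(beta_thr).astype(int)
print("sign pattern recovered:", np.array_equal(pattern_true, pattern_hat))
```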

We study Bayesian histograms for distribution estimation on $[0,1]^d$ under the Wasserstein distance $W_v$, $1 \leq v < \infty$, in the i.i.d. sampling regime. We show, for the first time, that when $d < 2v$, histograms possess a special \textit{memory efficiency} property: in reference to the sample size $n$, only order $n^{d/2v}$ bins are needed to obtain minimax-rate optimality. This result holds both for the posterior mean histogram and with respect to posterior contraction, under the class of Borel probability measures and under some classes of smooth densities. The attained memory footprint improves on existing minimax-optimal procedures by a polynomial factor in $n$; for example, it yields an $n^{1 - d/2v}$-factor reduction in the footprint compared to the empirical measure, a minimax estimator in the Borel probability measure class. Additionally, constructing both the posterior mean histogram and the posterior itself can be done super-linearly in $n$. Due to the popularity of the $W_1, W_2$ metrics and the coverage provided by the $d < 2v$ case, our results are of most practical interest in the $(d=1, v=1,2)$, $(d=2, v=2)$, and $(d=3, v=2)$ settings, and we provide simulations demonstrating the theory in several of these instances.
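
A rough numerical illustration of the memory-efficiency claim in the simplest covered setting ($d=1$, $v=1$): the sketch below uses a frequentist histogram standing in for the posterior-mean histogram and a large auxiliary sample standing in for the true distribution (both are assumptions for illustration). It stores only about $\sqrt{n}$ bins yet attains a $W_1$ error comparable to the empirical measure, which stores all $n$ points.

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(1)
n = 10_000
sample = rng.beta(2.0, 5.0, size=n)           # i.i.d. data on [0, 1]

# Histogram with ~ n^{d/(2v)} bins: here d = 1, v = 1, i.e. about sqrt(n) bins.
n_bins = int(round(np.sqrt(n)))
counts, edges = np.histogram(sample, bins=n_bins, range=(0.0, 1.0))
centers = 0.5 * (edges[:-1] + edges[1:])
mask = counts > 0                             # keep only occupied bins

reference = rng.beta(2.0, 5.0, size=200_000)  # large sample standing in for the truth

w1_hist = wasserstein_distance(centers[mask], reference, u_weights=counts[mask])
w1_emp = wasserstein_distance(sample, reference)
print(f"bins stored: {n_bins}  vs. sample points stored: {n}")
print(f"W1(histogram, truth) ~ {w1_hist:.4f}   W1(empirical, truth) ~ {w1_emp:.4f}")
```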

Prophet inequalities consist of many beautiful statements that establish tight performance ratios between online and offline allocation algorithms. Typically, tightness is established by constructing an algorithmic guarantee and a worst-case instance separately, whose bounds match as a result of some "ingenuity". In this paper, we instead formulate the construction of the worst-case instance as an optimization problem, which directly finds the tight ratio without needing to construct two bounds separately. Our analysis of this complex optimization problem involves identifying the structure in a new "Type Coverage" dual problem. It can be seen as akin to the celebrated Magician and OCRS problems, except more general in that it can also provide tight ratios relative to the optimal offline allocation, whereas the earlier problems only concern the ex-ante relaxation of the offline problem. Through this analysis, our paper provides a unified framework that derives new prophet inequalities and recovers existing ones, including two important new results. First, we show that the "oblivious" method of setting a static threshold due to Chawla et al. (2020) is, surprisingly, best-possible among all static threshold algorithms, under any number $k$ of units. We emphasize that this result is derived without needing to explicitly find any counterexample instances. This implies that the asymptotic convergence rate of $1-O(\sqrt{\log k/k})$ for static threshold algorithms from Hajiaghayi et al. (2007) is tight, confirming for the first time a separation from the convergence rate of adaptive algorithms, which is $1-\Theta(\sqrt{1/k})$ due to Alaei (2014). Second, turning to the IID setting, our framework allows us to numerically illustrate the tight guarantee (of adaptive algorithms) under any number $k$ of starting units. Our guarantees for $k>1$ exceed the state-of-the-art.
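
For intuition only, the following Monte Carlo sketch simulates a static-threshold rule for $k=1$ unit, with the threshold set to half the expected maximum (a standard choice from the prophet-inequality literature, not the oblivious thresholds of Chawla et al. analyzed in the paper); the instance with independent exponential rewards is hypothetical. The expected online value should be at least half the prophet's expected value.

```python
import numpy as np

rng = np.random.default_rng(2)
m, trials = 10, 20_000
scales = rng.uniform(0.5, 2.0, size=m)       # independent exponential rewards, fixed scales

# Estimate E[max_i X_i] by simulation and use the classical threshold T = E[max] / 2.
sims = rng.exponential(scales, size=(trials, m))
T = sims.max(axis=1).mean() / 2.0

# Online rule: accept the first reward exceeding T (0 if none); the prophet takes the max.
X = rng.exponential(scales, size=(trials, m))
first_idx = np.argmax(X > T, axis=1)         # index of the first exceedance (0 if none)
accepted = np.where((X > T).any(axis=1), X[np.arange(trials), first_idx], 0.0)
prophet = X.max(axis=1)

print(f"static-threshold value / prophet value ~ {accepted.mean() / prophet.mean():.3f}")
```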

In this paper, we study a priori error estimates for the finite element approximation of the nonlinear Schr\"{o}dinger-Poisson model. The electron density is defined by an infinite series over all eigenvalues of the Hamiltonian operator. To establish the error estimate, we present a unified theory of error estimates for a class of nonlinear problems. The theory is based on three conditions: 1) the original problem has a solution $u$ which is the fixed point of a compact operator $\mathcal{A}$, 2) $\mathcal{A}$ is Fr\'{e}chet-differentiable at $u$ and $\mathcal{I}-\mathcal{A}'[u]$ has a bounded inverse in a neighborhood of $u$, and 3) there exists an operator $\mathcal{A}_h$ which converges to $\mathcal{A}$ in the neighborhood of $u$. The theory states that $\mathcal{A}_h$ has a fixed point $u_h$ which solves the approximate problem. It also gives an error estimate between $u$ and $u_h$, without assumptions on the well-posedness of the approximate problem. We apply the unified theory to the finite element approximation of the Schr\"{o}dinger-Poisson model and obtain an optimal error estimate between the numerical solution and the exact solution. Numerical experiments are presented to verify the convergence rates of the numerical solutions.
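
The abstract setting (a fixed point $u$ of a compact operator $\mathcal{A}$ approximated by the fixed point $u_h$ of a discrete operator $\mathcal{A}_h$) can be illustrated on a toy Hammerstein integral equation, entirely unrelated to the Schr\"{o}dinger-Poisson model: in this sketch $\mathcal{A}_h$ replaces the integral by trapezoid quadrature, and the error between $u_h$ and a fine-grid reference decreases as the mesh is refined. All problem data are hypothetical.

```python
import numpy as np

def solve_fixed_point(N, iters=200):
    """Fixed point of the quadrature-discretised operator A_h on an N-point grid,
    for u(x) = int_0^1 0.5*exp(-(x-y)^2) sin(u(y)) dy + x."""
    x = np.linspace(0.0, 1.0, N)
    w = np.full(N, 1.0 / (N - 1)); w[0] = w[-1] = 0.5 / (N - 1)   # trapezoid weights
    K = 0.5 * np.exp(-(x[:, None] - x[None, :]) ** 2)
    u = np.zeros(N)
    for _ in range(iters):                   # contraction: kernel bound * Lip(sin) < 1
        u = K @ (w * np.sin(u)) + x          # u = A_h(u)
    return x, u

x_ref, u_ref = solve_fixed_point(2001)       # fine-grid reference solution
for N in (11, 41, 161):
    x_h, u_h = solve_fixed_point(N)
    err = np.max(np.abs(u_h - np.interp(x_h, x_ref, u_ref)))
    print(f"N = {N:4d}   max error ~ {err:.2e}")
```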

This paper focuses on the achievable accuracy of center-of-gravity (CoG) centroiding with respect to the ultimate limits defined by the Cramér-Rao lower variance bounds. In a practical scenario, systematic centroiding errors occur through coarse sampling of the point-spread function (PSF) as well as signal truncation at the boundaries of the region of interest (ROI). While previous studies focused on sampling errors alone, this paper derives and analyzes the full systematic error, since truncation errors become increasingly important for small ROIs, where the effect of random pixel noise may be more efficiently suppressed than for large ROIs. Unbiased estimators are introduced and analytical expressions are derived for their variance, detailing the effects of photon shot noise, random pixel noise, and residual systematic error. Analytical results are verified by Monte Carlo simulations, and the performance is compared to that of other algorithms, such as Iteratively Weighted CoG, Thresholded CoG, and Least Squares Fits. The unbiased estimators achieve centroiding errors very close to the Cramér-Rao lower bound (CRLB), for both low and high photon numbers, at significantly lower computational effort than the other algorithms. Additionally, optimal configurations in relation to PSF radius, ROI size, and other specific parameters are determined for all other algorithms, and their normalized centroid errors are assessed with respect to the CRLB.
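
A minimal simulation of the error sources discussed above (illustrative PSF width, photon count, ROI size, and true offset; the paper's unbiased estimators are not reproduced here): photons are drawn from a Gaussian PSF, binned onto a truncated pixel grid, and the plain CoG estimate exhibits both a systematic bias from sampling and truncation and a random scatter from shot noise.

```python
import numpy as np

rng = np.random.default_rng(3)
sigma, n_photons, roi = 1.2, 2000, 7          # PSF radius (pixels), photon count, ROI size
true_x, true_y = 0.30, -0.15                  # true centroid offset from the ROI centre

half = roi // 2
ix, iy = np.meshgrid(np.arange(-half, half + 1), np.arange(-half, half + 1))

errors = []
for _ in range(2000):
    # Draw photon positions from the PSF and bin them onto the truncated ROI
    # (photons falling outside the ROI are lost, modelling truncation).
    px = rng.normal(true_x, sigma, n_photons)
    py = rng.normal(true_y, sigma, n_photons)
    img, _, _ = np.histogram2d(py, px, bins=roi,
                               range=[[-half - 0.5, half + 0.5]] * 2)
    # Plain centre-of-gravity estimate over the ROI.
    s = img.sum()
    cx = (ix * img).sum() / s
    cy = (iy * img).sum() / s
    errors.append([cx - true_x, cy - true_y])

errors = np.array(errors)
print("systematic bias (x, y):", errors.mean(axis=0))
print("random std (x, y):     ", errors.std(axis=0))
```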

The starting point for much of multivariate analysis (MVA) is an $n\times p$ data matrix whose $n$ rows represent observations and whose $p$ columns represent variables. Some multivariate data sets, however, may be best conceptualized not as $n$ discrete $p$-variate observations, but as $p$ curves or functions defined on a common time interval. We introduce a framework for extending techniques of multivariate analysis to such settings. The proposed framework rests on the assumption that the curves can be represented as linear combinations of basis functions such as B-splines. This is formally identical to the Ramsay-Silverman representation of functional data; but whereas functional data analysis extends MVA to the case of observations that are curves rather than vectors -- heuristically, $n\times p$ data with $p$ infinite -- we are instead concerned with what happens when $n$ is infinite. We describe how to translate the classical MVA methods of covariance and correlation estimation, principal component analysis, Fisher's linear discriminant analysis, and $k$-means clustering to the continuous-time setting. We illustrate the methods with a novel perspective on a well-known Canadian weather data set, and with applications to neurobiological and environmetric data. The methods are implemented in the publicly available R package \texttt{ctmva}.
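A sketch of the basis-representation idea (in Python with SciPy rather than the R package \texttt{ctmva}, and with illustrative curves and knot placement): each curve is projected onto a cubic B-spline basis, and a continuous-time covariance matrix between the variables is obtained from the coefficient vectors together with the basis Gram matrix, with time points playing the role of observations.

```python
import numpy as np
from scipy.interpolate import BSpline, make_lsq_spline
from scipy.integrate import trapezoid

rng = np.random.default_rng(4)
t = np.linspace(0.0, 1.0, 400)                 # common, finely sampled time interval
p = 3                                          # number of variables (curves)
curves = np.array([np.sin(2.0 * np.pi * (j + 1) * t) + 0.1 * rng.standard_normal(t.size)
                   for j in range(p)])

# Cubic B-spline representation of each curve (knot placement is illustrative).
k = 3
knots = np.r_[[0.0] * (k + 1), np.linspace(0.1, 0.9, 15), [1.0] * (k + 1)]
coefs = np.array([make_lsq_spline(t, c, knots, k=k).c for c in curves])
n_basis = coefs.shape[1]

# Basis evaluations and Gram matrix G[m, l] = int B_m(t) B_l(t) dt (quadrature in t).
B = np.array([BSpline(knots, np.eye(n_basis)[m], k)(t) for m in range(n_basis)])
G = trapezoid(B[:, None, :] * B[None, :, :], t, axis=2)
g = trapezoid(B, t, axis=1)                    # int B_m(t) dt

# Continuous-time covariance between variables: time averages replace sample means.
means = coefs @ g                              # the interval has length 1
cov_ct = coefs @ G @ coefs.T - np.outer(means, means)
print(np.round(cov_ct, 3))
```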

The number needed to treat (NNT) is an efficacy index defined as the average number of patients who need to be treated to attain one additional treatment benefit. In observational studies, specifically in epidemiology, the adequacy of the populationwise NNT is questionable, since the characteristics of the exposed group may substantially differ from those of the unexposed group. To address this issue, groupwise efficacy indices were defined: the Exposure Impact Number (EIN) for the exposed group and the Number Needed to Expose (NNE) for the unexposed group. Each index answers a unique research question since it targets a unique sub-population. In observational studies, the group allocation is typically affected by confounders that might be unmeasured. The available estimation methods, which rely either on randomization or on the sufficiency of the measured covariates for confounding control, result in inconsistent estimators of the true NNT (EIN, NNE) in such settings. Using Rubin's potential outcomes framework, we explicitly define the NNT and its derived indices as causal contrasts. Next, we introduce a novel method that uses instrumental variables to estimate the three aforementioned indices in observational studies. We present two analytical examples and a corresponding simulation study. The simulation study illustrates that the novel estimators are consistent, unlike the previously available methods, and that their confidence intervals attain the nominal coverage rates. Finally, a real-world data example of the effect of vitamin D deficiency on the mortality rate is presented.
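
A simplified simulation of the general idea (a binary instrument, a Wald-type ratio estimator of the risk difference, and a constant-effect data-generating process, all of which are assumptions made for illustration rather than the paper's estimators): with an unmeasured confounder, the naive NNT is biased while the instrument-based NNT recovers the value implied by the simulated effect.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 200_000
U = rng.binomial(1, 0.5, n)                    # unmeasured confounder
Z = rng.binomial(1, 0.5, n)                    # instrument (e.g. randomised encouragement)
A = rng.binomial(1, 0.1 + 0.6 * Z + 0.2 * U)   # exposure affected by Z and U
Y = rng.binomial(1, 0.2 + 0.3 * A + 0.3 * U)   # outcome affected by A and U (effect = 0.3)

# Naive (confounded) risk difference and NNT.
rd_naive = Y[A == 1].mean() - Y[A == 0].mean()

# Wald-type IV estimate: ratio of the instrument's effects on outcome and exposure.
rd_iv = (Y[Z == 1].mean() - Y[Z == 0].mean()) / (A[Z == 1].mean() - A[Z == 0].mean())

print(f"naive NNT ~ {1 / rd_naive:.2f}   IV NNT ~ {1 / rd_iv:.2f}   "
      f"true NNT = {1 / 0.3:.2f}")
```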

In recent years, the interest of the optimization and machine learning communities in high-probability convergence of stochastic optimization methods has been growing. One of the main reasons for this is that high-probability complexity bounds are more accurate and less studied than in-expectation ones. However, state-of-the-art high-probability non-asymptotic convergence results are derived under strong assumptions, such as the boundedness of the gradient noise variance or of the objective's gradient itself. In this paper, we propose several algorithms with high-probability convergence results under less restrictive assumptions. In particular, we derive new high-probability convergence results under the assumption that the gradient/operator noise has bounded central $\alpha$-th moment for $\alpha \in (1,2]$ in the following setups: (i) smooth non-convex / Polyak-Lojasiewicz / convex / strongly convex / quasi-strongly convex minimization problems, and (ii) Lipschitz / star-cocoercive and monotone / quasi-strongly monotone variational inequalities. These results justify the usage of the considered methods for solving problems that do not fit the standard functional classes studied in stochastic optimization.
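
The abstract does not spell out the algorithms; gradient clipping is a common device for handling noise with bounded $\alpha$-th moment, so the following sketch (clipped SGD on a toy quadratic with symmetrized Pareto noise of infinite variance; all parameters are illustrative) only conveys why such modifications typically matter, and is not the paper's actual method.

```python
import numpy as np

rng = np.random.default_rng(6)
d, steps, lr, clip = 20, 10_000, 0.01, 1.0
A = np.diag(np.linspace(0.5, 2.0, d))          # quadratic objective f(x) = 0.5 x^T A x

def noisy_grad(x):
    # Heavy-tailed zero-mean noise: symmetrised Pareto with finite mean, infinite variance.
    noise = rng.pareto(1.5, d) * rng.choice([-1.0, 1.0], d)
    return A @ x + noise

def run(clipping):
    x = np.full(d, 5.0)
    for _ in range(steps):
        g = noisy_grad(x)
        if clipping:
            g = g * min(1.0, clip / (np.linalg.norm(g) + 1e-12))   # norm clipping
        x = x - lr * g
    return 0.5 * x @ A @ x

print(f"final f(x): plain SGD ~ {run(False):.3f}   clipped SGD ~ {run(True):.3f}")
```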

The quasi-random discrete ordinates method (QRDOM) is proposed here for the approximation of transport problems. Its central idea is to employ quasi-Monte Carlo integration within the classical source iteration technique. It preserves the main characteristics of the discrete ordinates method (DOM), but has the advantage of providing solutions with mitigated ray effects. The QRDOM is discussed in detail for applications to one-group transport problems with isotropic scattering in rectangular domains. The method is tested against benchmark problems for which DOM solutions are known to suffer from ray effects. The numerical experiments indicate that the QRDOM provides accurate results and demands fewer discrete ordinates per source iteration when compared against the classical DOM.
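
The core ingredient, as described above, is quasi-Monte Carlo integration over directions. The sketch below (not a transport solver, just an angular quadrature comparison with an illustrative integrand) maps a Halton sequence to unit directions on the sphere and typically attains a smaller integration error than pseudo-random directions with the same number of ordinates.

```python
import numpy as np
from scipy.stats import qmc

def directions_from_unit_square(u):
    """Map points in [0,1]^2 to unit vectors on the sphere (area-preserving)."""
    mu = 1.0 - 2.0 * u[:, 0]                  # cosine of the polar angle, uniform in [-1, 1]
    phi = 2.0 * np.pi * u[:, 1]
    s = np.sqrt(1.0 - mu ** 2)
    return np.column_stack([s * np.cos(phi), s * np.sin(phi), mu])

def angular_average(dirs):
    # Example angular integrand; its exact average over the sphere is 1/3.
    return np.mean(dirs[:, 2] ** 2)

rng = np.random.default_rng(7)
for m in (64, 256, 1024):
    qr = directions_from_unit_square(qmc.Halton(d=2, seed=0).random(m))
    pr = directions_from_unit_square(rng.random((m, 2)))
    print(f"m = {m:5d}   quasi-random error {abs(angular_average(qr) - 1/3):.2e}"
          f"   pseudo-random error {abs(angular_average(pr) - 1/3):.2e}")
```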

This paper considers the Cauchy problem for the nonlinear dynamic string equation of Kirchhoff type with time-varying coefficients. The objective of this work is to develop a temporal discretization algorithm capable of approximating a solution to this initial-boundary value problem. To this end, a symmetric three-layer semi-discrete scheme is employed with respect to the temporal variable, wherein the nonlinear term is evaluated at the middle node point. This approach enables the numerical solution at each temporal step to be obtained by inverting linear operators, yielding a system of second-order linear ordinary differential equations. Local convergence of the proposed scheme is established, with quadratic convergence with respect to the time step on the local temporal interval. We have conducted several numerical experiments using the proposed algorithm for various test problems to validate its performance, and the obtained numerical results are in accordance with the theoretical findings.
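
A minimal version of the time-stepping idea (for a constant-coefficient Kirchhoff string $u_{tt} = (a + b\int u_x^2\,dx)\,u_{xx}$ with a standard finite-difference space discretization; the paper's time-varying coefficients and convergence analysis are not reproduced): freezing the nonlinear coefficient at the middle layer makes each step a single linear solve.

```python
import numpy as np

# Space grid on [0, 1] with homogeneous Dirichlet BCs; toy initial data.
M, T, Nt = 100, 1.0, 1000
h, tau = 1.0 / M, T / Nt
x = np.linspace(0.0, 1.0, M + 1)[1:-1]                    # interior nodes
D2 = (np.diag(np.full(M - 2, 1.0), -1) - 2.0 * np.eye(M - 1)
      + np.diag(np.full(M - 2, 1.0), 1)) / h ** 2          # second-difference matrix

a0, b = 1.0, 0.5
def kirchhoff_coef(u):
    ux = np.diff(np.concatenate(([0.0], u, [0.0]))) / h     # include boundary zeros
    return a0 + b * np.sum(ux ** 2) * h                     # a + b * int u_x^2 dx

u_prev = np.sin(np.pi * x)                                  # u(x, 0)
u_curr = u_prev.copy()                                      # zero initial velocity (crude first step)

I = np.eye(M - 1)
for _ in range(Nt - 1):
    a_mid = kirchhoff_coef(u_curr)                          # nonlinearity frozen at the middle layer
    # Three-layer scheme: (u^{n+1} - 2u^n + u^{n-1})/tau^2 = a_mid * D2 (u^{n+1} + u^{n-1})/2.
    A = I - 0.5 * tau ** 2 * a_mid * D2
    rhs = 2.0 * u_curr - u_prev + 0.5 * tau ** 2 * a_mid * (D2 @ u_prev)
    u_next = np.linalg.solve(A, rhs)
    u_prev, u_curr = u_curr, u_next

print("max |u(., T)| ~", np.max(np.abs(u_curr)))
```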
