
In this paper, a computational method is developed to find an approximate solution of the stochastic Volterra-Fredholm integral equation using the Walsh function approximation and its operational matrix. Moreover, convergence and error analyses of the method are carried out to strengthen its validity. Furthermore, the method is numerically compared with the block pulse function method and the Haar wavelet method on some non-trivial examples.
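As a rough illustration of the Walsh machinery (only the function-approximation step; the operational matrix and the stochastic Volterra-Fredholm solver are not reproduced here), the sketch below expands a sample function into a finite Walsh series built from a Hadamard matrix. The ordering convention, truncation level, and test function are assumptions made for the example.

```python
# Minimal sketch: finite Walsh-series approximation on [0, 1).
# Walsh functions (Hadamard ordering) are taken as the rows of a Hadamard
# matrix, viewed as piecewise-constant functions on m = 2^k dyadic subintervals.
import numpy as np
from scipy.linalg import hadamard

k = 5
m = 2 ** k                                # number of Walsh functions / subintervals
H = hadamard(m)                           # rows = Walsh functions in Hadamard order
t = (np.arange(m) + 0.5) / m              # midpoints of the dyadic subintervals

f = np.exp(-t) * np.sin(2 * np.pi * t)    # example function (assumption)

# Walsh coefficients: c_i = integral of f * w_i  ~  (1/m) * sum_j f(t_j) * w_i(t_j)
c = H @ f / m

# Reconstruction on the grid (H is symmetric and H @ H = m * I, so this is exact
# at the sample points; the approximation error lives between grid points).
f_approx = H @ c
print("max error at the sample points:", np.abs(f - f_approx).max())
```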

Related Content

Despite widespread adoption in practice, guarantees for the LASSO and Group LASSO are strikingly lacking in settings beyond statistical problems, and these algorithms are usually considered heuristics in the context of sparse convex optimization on deterministic inputs. We give the first recovery guarantees for the Group LASSO for sparse convex optimization with vector-valued features. We show that if a sufficiently large Group LASSO regularization is applied when minimizing a strictly convex function $l$, then the minimizer is a sparse vector supported on the vector-valued features with the largest $\ell_2$ norm of the gradient. Thus, repeating this procedure selects the same set of features as the Orthogonal Matching Pursuit algorithm, which admits recovery guarantees for any function $l$ with restricted strong convexity and smoothness via weak submodularity arguments. This answers open questions of Tibshirani et al. and Yasuda et al. Our result is the first to theoretically explain the empirical success of the Group LASSO for convex functions under general input instances assuming only restricted strong convexity and smoothness. Our result also generalizes provable guarantees for the Sequential Attention algorithm of Yasuda et al., a feature selection algorithm inspired by the attention mechanism. As an application of our result, we give new results for the column subset selection problem, which is well-studied when the loss is the Frobenius norm or other entrywise matrix losses. We give the first result for this problem with general loss functions, requiring only restricted strong convexity and smoothness.
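For reference, a standard form of the Group LASSO objective discussed here (notation ours, with $x_g$ the block of coordinates belonging to vector-valued feature $g$) is
$$ \min_{x \in \mathbb{R}^d} \; l(x) + \lambda \sum_{g \in \mathcal{G}} \lVert x_g \rVert_2 , $$
and the claim above is that for a sufficiently large regularization strength $\lambda$ the minimizer is supported on the groups whose gradient blocks have the largest $\ell_2$ norm.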

Forecasting water content dynamics in heterogeneous porous media is of significant interest in hydrological applications; in particular, the treatment of infiltration in the presence of cracks and fractures can be accomplished by resorting to peridynamic theory, which allows proper modeling of nonlocality in space. In this framework, we apply the Chebyshev transform to the diffusive component of the equation and then integrate forward in time using an explicit method. We prove that the proposed spectral numerical scheme provides a solution converging to the unique solution in an appropriate Sobolev space. We finally provide examples for several different soils, also considering a sink term representing root water uptake.
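The sketch below illustrates only the generic spectral-in-space, explicit-in-time idea: a one-dimensional diffusive term is differentiated through Chebyshev coefficients and stepped forward with explicit Euler. The peridynamic nonlocal operator, soil heterogeneity, and the root-uptake sink term from the paper are not included, and all parameters are placeholder assumptions.

```python
# Minimal sketch: Chebyshev treatment of a diffusive term u_t = D * u_xx on
# [-1, 1] with homogeneous Dirichlet data, advanced with explicit Euler.
import numpy as np
from numpy.polynomial import chebyshev as C

N = 32                          # number of Chebyshev modes (assumption)
D = 1e-2                        # diffusivity (assumption)
x = C.chebpts2(N + 1)           # Chebyshev points of the second kind
u = np.exp(-20 * x**2)          # initial water-content profile (assumption)

dt = 1e-5
for _ in range(2000):
    c = C.chebfit(x, u, N)                 # Chebyshev coefficients of u on the grid
    uxx = C.chebval(x, C.chebder(c, 2))    # spectral second derivative
    u = u + dt * D * uxx                   # explicit Euler step
    u[0] = u[-1] = 0.0                     # re-impose the Dirichlet boundary values

print("peak value after time stepping:", u.max())
```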

Variance reduction is a crucial idea for Monte Carlo simulation, and the stochastic Lanczos quadrature method is a dedicated method for approximating the trace of a matrix function. Inspired by their advantages, we combine these two techniques to approximate the log-determinant of large-scale symmetric positive definite matrices. Key questions to be answered for such a method are how to construct or choose an appropriate projection subspace and how to derive guaranteed theoretical bounds. This paper applies probabilistic approaches, including the projection-cost-preserving sketch and matrix concentration inequalities, to construct a suboptimal subspace. Furthermore, we provide insights on choosing the design parameters of the underlying algorithm by deriving the corresponding approximation error and probabilistic error estimates. Numerical experiments demonstrate our method's effectiveness and illustrate the quality of the derived error bounds.
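As a baseline for comparison, the sketch below implements plain stochastic Lanczos quadrature for $\log\det(A)=\operatorname{tr}(\log A)$ with Rademacher probes; the projection-based variance reduction that is the subject of the paper is not reproduced, and the test matrix, probe count, and Lanczos depth are assumptions for the example.

```python
# Plain stochastic Lanczos quadrature (SLQ) estimate of log det(A) = tr(log A)
# for a symmetric positive definite A; no variance reduction is applied here.
import numpy as np
from scipy.linalg import eigh_tridiagonal

def lanczos(A, v, m):
    """Run m Lanczos steps on SPD matrix A starting from the unit vector v."""
    n = len(v)
    alpha, beta = np.zeros(m), np.zeros(m - 1)
    V = np.zeros((n, m))
    V[:, 0] = v
    w = A @ v
    alpha[0] = v @ w
    w = w - alpha[0] * v
    for j in range(1, m):
        beta[j - 1] = np.linalg.norm(w)
        V[:, j] = w / beta[j - 1]
        w = A @ V[:, j] - beta[j - 1] * V[:, j - 1]
        alpha[j] = V[:, j] @ w
        w = w - alpha[j] * V[:, j]
    return alpha, beta

def slq_logdet(A, num_probes=30, m=40, seed=0):
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(num_probes):
        z = rng.choice([-1.0, 1.0], size=n)        # Rademacher probe, ||z||^2 = n
        alpha, beta = lanczos(A, z / np.sqrt(n), m)
        theta, S = eigh_tridiagonal(alpha, beta)   # Ritz values and vectors
        tau = S[0, :] ** 2                         # Gauss quadrature weights
        total += n * np.sum(tau * np.log(theta))   # estimate of z^T log(A) z
    return total / num_probes

# Small SPD test matrix (assumption); compare against the exact log-determinant.
rng = np.random.default_rng(1)
B = rng.standard_normal((300, 300))
A = B @ B.T + 300 * np.eye(300)
print(slq_logdet(A), np.linalg.slogdet(A)[1])
```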

We introduce a new method of estimation of parameters in semiparametric and nonparametric models. The method is based on estimating equations that are $U$-statistics in the observations. The $U$-statistics are based on higher order influence functions, which extend ordinary linear influence functions of the parameter of interest and represent higher derivatives of this parameter. For parameters for which the representation cannot be perfect, the method leads to a bias-variance trade-off and results in estimators that converge at a rate slower than $\sqrt n$. In a number of examples the resulting rate can be shown to be optimal. We are particularly interested in estimating parameters in models with a nuisance parameter of high dimension or low regularity, where the parameter of interest cannot be estimated at a $\sqrt n$-rate, but we also consider efficient $\sqrt n$-estimation using novel nonlinear estimators. The general approach is applied in detail to the example of estimating a mean response when the response is not always observed.
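For concreteness, an estimating equation built from a second-order $U$-statistic with a symmetric kernel $h$ takes the generic form
$$ U_n = \binom{n}{2}^{-1} \sum_{1 \le i < j \le n} h(X_i, X_j) ; $$
the kernels used in the paper are constructed from higher order influence functions rather than this generic $h$.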

For solving the discretized three-temperature energy linear systems, Xu et al. proposed a physical-variable-based coarsening two-level iterative method (the PCTL algorithm) in 2009 and verified its efficiency through numerical experiments in practical applications. In this paper, we study in detail the convergence properties of the PCTL algorithm based on the theory of the algebraic multigrid (AMG) method, and we give a reasonable upper bound on the convergence factor, which provides a theoretical guarantee for the PCTL algorithm. Moreover, we analyse the algebraic features that affect the convergence of the PCTL algorithm, such as diagonal dominance and coupling strength, in the hope of providing theoretical guidance for the application and further optimization of the PCTL algorithm.
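For orientation only, a generic two-level iteration (damped-Jacobi smoothing plus a Galerkin coarse-grid correction) is sketched below; the restriction operator and smoother are placeholder assumptions and do not reproduce the physical-variable-based coarsening of the PCTL algorithm.

```python
# Schematic two-level iteration: damped-Jacobi smoothing plus a coarse-grid
# correction built from a generic restriction operator R (not the PCTL coarsening).
import numpy as np

def two_level_step(A, b, x, R, omega=0.7, nu=2):
    Dinv = 1.0 / np.diag(A)
    for _ in range(nu):                                  # pre-smoothing
        x = x + omega * Dinv * (b - A @ x)
    Ac = R @ A @ R.T                                     # Galerkin coarse operator
    x = x + R.T @ np.linalg.solve(Ac, R @ (b - A @ x))   # coarse-grid correction
    for _ in range(nu):                                  # post-smoothing
        x = x + omega * Dinv * (b - A @ x)
    return x

# Toy example (assumption): 1D Poisson matrix with piecewise-constant aggregation.
n = 64
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
R = np.zeros((n // 2, n))
for i in range(n // 2):
    R[i, 2 * i:2 * i + 2] = 0.5
b, x = np.ones(n), np.zeros(n)
for _ in range(20):
    x = two_level_step(A, b, x, R)
print("residual norm:", np.linalg.norm(b - A @ x))
```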

The quantum thermal average plays a central role in describing the thermodynamic properties of a quantum system. From the computational perspective, the quantum thermal average can be computed by path integral molecular dynamics (PIMD), but knowledge of the quantitative convergence of such approximations is lacking. We propose an alternative computational framework named continuous loop path integral molecular dynamics (CL-PIMD), which replaces the ring polymer beads by a continuous loop in the spirit of the Feynman--Kac formula. By truncating the number of normal modes to a finite integer $N\in\mathbb N$, we quantify the discrepancy between the statistical average of the truncated CL-PIMD and the true quantum thermal average, and we prove that the truncated CL-PIMD has uniform-in-$N$ geometric ergodicity. These results show that CL-PIMD provides an accurate approximation to the quantum thermal average and serves as a mathematical justification of the PIMD methodology.
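For reference, the quantum thermal average being approximated is the standard canonical expectation of an observable $O$ at inverse temperature $\beta$ for a Hamiltonian $H$,
$$ \langle O \rangle_\beta = \frac{\operatorname{Tr}\!\left( O\, e^{-\beta H} \right)}{\operatorname{Tr}\!\left( e^{-\beta H} \right)} , $$
which PIMD-type methods sample through an extended classical system.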

For several decades the dominant techniques for integer linear programming have been branching and cutting planes. Recently, several authors have developed core point methods for solving symmetric integer linear programs (ILPs). An integer point is called a core point if its orbit polytope is lattice-free. It has been shown that for symmetric ILPs, optimizing over the set of core points gives the same answer as considering the entire space. Existing core point techniques rely on the number of core points (or equivalence classes) being finite, which requires special symmetry groups. In this paper we develop some new methods for solving symmetric ILPs (based on outer approximations of core points) that do not depend on finiteness but are more efficient if the group has large disjoint cycles in its set of generators.
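In symbols (notation ours), for a permutation group $G$ acting on $\mathbb{Z}^n$, an integer point $z$ is a core point exactly when its orbit polytope contains no integer points beyond the orbit itself:
$$ \operatorname{conv}\{\sigma z : \sigma \in G\} \cap \mathbb{Z}^n = \{\sigma z : \sigma \in G\} . $$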

We investigate the problem of fixed-budget best arm identification (BAI) for minimizing expected simple regret. In an adaptive experiment, a decision maker draws one of multiple treatment arms based on past observations and observes the outcome of the drawn arm. After the experiment, the decision maker recommends the treatment arm with the highest expected outcome. We evaluate the decision based on the expected simple regret, which is the difference between the expected outcomes of the best arm and the recommended arm. Due to inherent uncertainty, we evaluate the regret using the minimax criterion. First, we derive asymptotic lower bounds for the worst-case expected simple regret, which are characterized by the variances of potential outcomes (leading factor). Based on the lower bounds, we propose the Two-Stage (TS)-Hirano-Imbens-Ridder (HIR) strategy, which utilizes the HIR estimator (Hirano et al., 2003) in recommending the best arm. Our theoretical analysis shows that the TS-HIR strategy is asymptotically minimax optimal, meaning that the leading factor of its worst-case expected simple regret matches our derived worst-case lower bound. Additionally, we consider extensions of our method, such as the asymptotic optimality for the probability of misidentification. Finally, we validate the proposed method's effectiveness through simulations.
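Concretely, writing $\mu_a$ for the expected outcome of arm $a$ and $\hat a$ for the arm recommended after the budget is exhausted, the expected simple regret is
$$ \max_a \mu_a - \mathbb{E}\left[ \mu_{\hat a} \right] , $$
and the minimax criterion evaluates a strategy by the worst case of this quantity over a class of bandit instances.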

We present a new residual-type energy-norm a posteriori error analysis for interior penalty discontinuous Galerkin (dG) methods for linear elliptic problems. The new error bounds are also applicable to dG methods on meshes consisting of elements with very general polygonal/polyhedral shapes. The case of simplicial and/or box-type elements is included in the analysis as a special case. In particular, for the upper bounds, an arbitrary number of very small faces are allowed on each polygonal/polyhedral element, as long as certain mild shape regularity assumptions are satisfied. As a corollary, the present analysis generalizes known a posteriori error bounds for dG methods, allowing in particular for meshes with an arbitrary number of irregular hanging nodes per element. The proof hinges on a new conforming recovery strategy in conjunction with a Helmholtz decomposition formula. The resulting a posteriori error bound involves jumps on the tangential derivatives along elemental faces. Local lower bounds are also proven for a number of practical cases. Numerical experiments are also presented, highlighting the practical value of the derived a posteriori error bounds as error estimators.
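As a point of reference, a typical residual-type energy-norm estimator for the interior penalty dG discretization of $-\Delta u = f$ (with $[\cdot]$ denoting the jump across a face; the bound derived in the paper additionally involves jumps of tangential derivatives along faces) takes the form
$$ \eta^2 = \sum_{K \in \mathcal{T}} h_K^2 \, \| f + \Delta u_h \|_{L^2(K)}^2 + \sum_{F \in \mathcal{F}} h_F \, \| [\nabla u_h \cdot \mathbf{n}] \|_{L^2(F)}^2 + \sum_{F \in \mathcal{F}} h_F^{-1} \, \| [u_h] \|_{L^2(F)}^2 . $$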

The number of modes in a probability density function is representative of the model's complexity and can also be viewed as the number of existing subpopulations. Despite its relevance, little research has been devoted to its estimation. Focusing on the univariate setting, we propose a novel approach targeting prediction accuracy, inspired by some overlooked aspects of the problem. We argue for the need for structure in the solutions, the subjective and uncertain nature of modes, and the convenience of a holistic view blending global and local density properties. Our method builds upon a combination of flexible kernel estimators and parsimonious compositional splines. Feature exploration, model selection and mode testing are implemented in the Bayesian inference paradigm, providing soft solutions and allowing expert judgement to be incorporated in the process. The usefulness of our proposal is illustrated through a case study in sports analytics, showcasing multiple companion visualisation tools. A thorough simulation study demonstrates that traditional modality-driven approaches paradoxically struggle to provide accurate results. In this context, our method emerges as a top-tier alternative offering innovative solutions for analysts.
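As a naive point of comparison (a plug-in count of the kind that motivates the more careful treatment above; the Bayesian kernel/compositional-spline method itself is not reproduced), the sketch below counts modes of a univariate sample as the local maxima of a Gaussian kernel density estimate and shows how sensitive the count is to the bandwidth.

```python
# Naive baseline: count modes of a univariate sample as the local maxima of a
# Gaussian kernel density estimate on a fine grid.  The count is driven by the
# bandwidth choice, which is the sensitivity more careful methods must address.
import numpy as np
from scipy.signal import find_peaks
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
# Example two-component sample (assumption): the underlying density is bimodal.
x = np.concatenate([rng.normal(-2, 1, 400), rng.normal(2, 1, 400)])

grid = np.linspace(x.min() - 1, x.max() + 1, 2000)
for bw in (0.05, 0.2, 1.0):                  # bandwidth factors to compare
    dens = gaussian_kde(x, bw_method=bw)(grid)
    print(f"bandwidth factor {bw}: {len(find_peaks(dens)[0])} estimated modes")
```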
