
Algorithms for solving the linear classification problem have a long history, dating back at least to 1936 with linear discriminant analysis. For linearly separable data, many algorithms can obtain the exact solution to the corresponding 0-1 loss classification problem efficiently, but for data which is not linearly separable, this problem has been shown to be NP-hard in full generality. Alternative approaches all involve approximations of some kind, including the use of surrogates for the 0-1 loss (for example, the hinge or logistic loss) or approximate combinatorial search, none of which can be guaranteed to solve the problem exactly. Finding efficient algorithms that obtain an exact, i.e. globally optimal, solution to the 0-1 loss linear classification problem in fixed dimension has remained an open problem. In the research we report here, we detail the construction of a new algorithm, incremental cell enumeration (ICE), that solves the 0-1 loss classification problem exactly in polynomial time. To our knowledge, this is the first rigorously proven polynomial-time algorithm for this long-standing problem.
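
The ICE construction itself is not given in the abstract; as a rough, hedged illustration of why fixed dimension makes exact search tractable, the sketch below enumerates the finitely many candidate separators induced by pairs of points in 2D and keeps the one with the fewest misclassifications. This is a brute-force stand-in for cell enumeration, not the authors' algorithm; the function name and the tie-breaking offsets are ours.

```python
import numpy as np
from itertools import combinations

def exact_01_loss_2d(X, y):
    """Exact 0-1 loss minimization for 2-D data with labels y in {-1, +1}.

    Each cell of the arrangement of point-induced lines yields one labeling,
    and an optimal cell can be reached by slightly shifting a line through two
    data points, so enumerating those lines (both orientations, tiny offsets)
    covers all candidate labelings in O(n^3) time for fixed dimension 2.
    """
    n = len(X)
    best = (None, None, n + 1)                 # (w, b, error count)
    for i, j in combinations(range(n), 2):
        d = X[j] - X[i]
        if np.allclose(d, 0):
            continue
        w = np.array([-d[1], d[0]])            # normal of the line through X[i], X[j]
        b = -w @ X[i]
        for eps in (1e-9, -1e-9):              # shift the line off the data points
            for s in (1.0, -1.0):              # both orientations
                errs = int(np.sum(np.sign(s * (X @ w + b + eps)) != y))
                if errs < best[2]:
                    best = (s * w, s * (b + eps), errs)
    return best
```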

Related content

Dual consistency is an important issue in developing stable DWR error estimation for goal-oriented mesh adaptivity. In this paper, this issue is studied in depth within a Newton-GMG framework for the steady Euler equations. Theoretically, the numerical framework is recast as a Petrov-Galerkin scheme, from which dual consistency is characterized. A boundary modification technique is discussed for preserving dual consistency within the Newton-GMG framework. Numerically, a geometric multigrid method is proposed for solving the dual problem, and a regularization term is designed to guarantee the convergence of the iteration. Numerical experiments exhibit the following features of our method: (i) the quantity of interest converges stably and smoothly for problems with different configurations, and (ii) for accurate calculation of the quantity of interest, the proposed dual-consistent DWR method requires significantly fewer mesh cells than its dual-inconsistent counterpart.
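
The core of any DWR estimate is the dual weighting of the primal residual: for a quantity of interest $J(u) = j^T u$ and a discrete primal problem $Au = f$, the dual solve $A^T z = j$ turns the residual of an inexact solution $u_H$ into the estimate $J(u) - J(u_H) \approx z^T (f - A u_H)$, which is exact in the linear case. A minimal numpy sketch of this mechanism (our toy 1-D Poisson setup, not the paper's Newton-GMG/Euler implementation):

```python
import numpy as np

# 1-D Poisson model problem A u = f on a uniform grid (Dirichlet BCs folded in).
n = 100
h = 1.0 / (n + 1)
A = (np.diag(2 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2
f = np.ones(n)

u = np.linalg.solve(A, f)          # "exact" discrete solution
u_H = np.zeros(n)                  # inexact solution from a truncated Jacobi solve,
D = np.diag(A)                     # standing in for a coarse-level approximation
for _ in range(50):
    u_H = u_H + (f - A @ u_H) / D

j = np.zeros(n); j[n // 2] = 1.0   # quantity of interest: midpoint value
z = np.linalg.solve(A.T, j)        # dual (adjoint) problem

print("true QoI error:", j @ (u - u_H))
print("DWR estimate  :", z @ (f - A @ u_H))   # identical for a linear problem
```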

We present the full approximation scheme constraint decomposition (FASCD) multilevel method for solving variational inequalities (VIs). FASCD is a common extension of both the full approximation scheme (FAS) multigrid technique for nonlinear partial differential equations, due to A. Brandt, and the constraint decomposition (CD) method introduced by X.-C. Tai for VIs arising in optimization. We extend the CD idea by exploiting the telescoping nature of certain function-space subset decompositions arising from multilevel mesh hierarchies. When a reduced-space (active set) Newton method is applied as a smoother, with work proportional to the number of unknowns on a given mesh level, FASCD V-cycles exhibit nearly mesh-independent convergence rates, and full multigrid cycles are optimal solvers. The example problems include symmetric linear, nonsymmetric linear, and nonlinear differential operators, in both unilateral and bilateral VI problems.
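
FASCD itself combines FAS cycles with the constraint decomposition, but its single-level ingredient is easy to picture. The sketch below applies projected Gauss-Seidel (a standard VI smoother, used here in place of the paper's reduced-space Newton smoother) to a 1-D obstacle problem; the setup and parameters are ours.

```python
import numpy as np

def projected_gauss_seidel(A, f, psi, u, sweeps=1):
    """Smoother for the VI: u >= psi, (A u - f)_i >= 0, with componentwise
    complementarity. Each Gauss-Seidel update is projected back onto the
    constraint set, which is the one-level building block of VI multigrid."""
    for _ in range(sweeps):
        for i in range(len(f)):
            gs = (f[i] - A[i] @ u + A[i, i] * u[i]) / A[i, i]
            u[i] = max(gs, psi[i])
    return u

# 1-D obstacle problem: -u'' = f with obstacle psi and zero boundary values.
n, h = 200, 1.0 / 201
A = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
f = -10.0 * np.ones(n)             # downward load pushes u toward the obstacle
psi = -0.1 * np.ones(n)            # flat obstacle below
u = projected_gauss_seidel(A, f, psi, np.zeros(n), sweeps=500)  # rough solve
print("active set size:", int(np.sum(np.isclose(u, psi))))
```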

Tikhonov regularization is a widely used technique for solving inverse problems, as it can enforce prior properties on the desired solution. In this paper, we propose a Krylov subspace based iterative method for solving linear inverse problems with a general-form Tikhonov regularization term $x^TMx$, where $M$ is a positive semi-definite matrix. An iterative process called the preconditioned Golub-Kahan bidiagonalization (pGKB) is designed, which implicitly utilizes a proper preconditioner to generate a series of solution subspaces with desirable properties encoded by the regularizer $x^TMx$. Based on the pGKB process, we propose an iterative regularization algorithm that projects the original problem onto low-dimensional solution subspaces. We analyze the regularization effect of this algorithm, including the incorporation of prior properties of the desired solution into the solution subspace and the semi-convergence behavior of the regularized solution. To overcome instabilities caused by semi-convergence, we further propose two pGKB based hybrid regularization algorithms. All the proposed algorithms are tested on both small-scale and large-scale linear inverse problems. Numerical results demonstrate that these iterative algorithms exhibit excellent performance, outperforming other state-of-the-art algorithms in some cases.
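
For orientation, the sketch below shows the plain Golub-Kahan bidiagonalization (without the preconditioner that distinguishes pGKB) together with a hybrid step that applies Tikhonov regularization to the small projected problem, which is the standard device for taming semi-convergence. Function names and the fixed regularization parameter are our choices.

```python
import numpy as np

def gkb(A, b, k):
    """Golub-Kahan bidiagonalization: builds U, V with A V_k = U_{k+1} B_k.
    Plain version; pGKB additionally encodes x^T M x via a preconditioner."""
    m, n = A.shape
    U = np.zeros((m, k + 1)); V = np.zeros((n, k)); B = np.zeros((k + 1, k))
    beta = np.linalg.norm(b); U[:, 0] = b / beta
    for j in range(k):
        v = A.T @ U[:, j] - (B[j, j - 1] * V[:, j - 1] if j > 0 else 0.0)
        alpha = np.linalg.norm(v); V[:, j] = v / alpha; B[j, j] = alpha
        u = A @ V[:, j] - alpha * U[:, j]
        bnext = np.linalg.norm(u); U[:, j + 1] = u / bnext; B[j + 1, j] = bnext
    return U, B, V, beta

def hybrid_solution(A, b, k, lam):
    """Hybrid regularization: project onto the k-dim Krylov subspace, then
    solve the small Tikhonov problem min ||B y - beta e1||^2 + lam ||y||^2.
    In practice lam is chosen adaptively (e.g. by GCV) at each iteration."""
    U, B, V, beta = gkb(A, b, k)
    e1 = np.zeros(k + 1); e1[0] = beta
    y = np.linalg.solve(B.T @ B + lam * np.eye(k), B.T @ e1)
    return V @ y
```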

We study the problem of estimating non-linear functionals of discrete distributions in the context of local differential privacy. The initial data $x_1,\ldots,x_n \in [K]$ are assumed to be i.i.d., distributed according to an unknown discrete distribution $p = (p_1,\ldots,p_K)$. Only $\alpha$-locally differentially private (LDP) samples $z_1,\ldots,z_n$ are publicly available, where the term 'local' means that each $z_i$ is produced using one individual attribute $x_i$. We exhibit privacy mechanisms (PM) that are interactive (i.e. allowed to use already published confidential data) or non-interactive. We describe the behavior of the quadratic risk for estimating the power sum functional $F_{\gamma} = \sum_{k=1}^K p_k^{\gamma}$, $\gamma > 0$, as a function of $K$, $n$ and $\alpha$. In the non-interactive case, we study two plug-in type estimators of $F_{\gamma}$, for all $\gamma > 0$, that are similar to the MLE analyzed by Jiao et al. (2017) in the multinomial model. However, due to the privacy constraint the rates we attain are slower, and similar to those obtained in the Gaussian model by Collier et al. (2020). In the interactive case, we introduce for all $\gamma > 1$ a two-step procedure which attains the faster parametric rate $(n \alpha^2)^{-1/2}$ when $\gamma \geq 2$. We establish lower bounds over all $\alpha$-LDP mechanisms and all estimators using the private samples.
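
The paper's mechanisms are not reproduced here; as a minimal non-interactive illustration, $K$-ary randomized response is a textbook $\alpha$-LDP mechanism, and debiased frequency estimates support a plug-in estimate of $F_\gamma$ in the spirit of the estimators above. All parameter values below are ours.

```python
import numpy as np

rng = np.random.default_rng(0)
K, n, alpha, gamma = 10, 100_000, 1.0, 2.0
p = rng.dirichlet(np.ones(K))        # unknown true distribution (toy example)
x = rng.choice(K, size=n, p=p)       # confidential samples

# K-ary randomized response: report the truth w.p. e^a/(e^a+K-1), otherwise a
# uniform other symbol; the likelihood ratio is bounded by e^a, hence a-LDP.
keep = np.exp(alpha) / (np.exp(alpha) + K - 1)
other = 1.0 / (np.exp(alpha) + K - 1)
z = x.copy()
mask = rng.random(n) >= keep
z[mask] = (x[mask] + rng.integers(1, K, size=mask.sum())) % K

# Debias the private empirical frequencies, then plug in.
q_hat = np.bincount(z, minlength=K) / n
p_hat = (q_hat - other) / (keep - other)         # unbiased for p
F_hat = np.sum(np.clip(p_hat, 0.0, 1.0) ** gamma)
print("plug-in estimate:", F_hat, " true F_gamma:", np.sum(p ** gamma))
```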

Direct deep learning simulation of multi-scale problems remains a challenging issue. In this work, a novel higher-order multi-scale deep Ritz method (HOMS-DRM) is developed for the thermal transfer equation of authentic composite materials with highly oscillatory and discontinuous coefficients. In this novel HOMS-DRM, higher-order multi-scale analysis and modeling are first employed to overcome the prohibitive computational cost and the Frequency Principle limitation that arise in direct deep learning simulation. Then, an improved deep Ritz method is designed for high-accuracy, mesh-free simulation of the macroscopic homogenized equation, which is free of multi-scale properties, and of the microscopic lower-order and higher-order cell problems with highly discontinuous coefficients. Moreover, the theoretical convergence of the proposed HOMS-DRM is rigorously demonstrated under appropriate assumptions. Finally, extensive numerical experiments are presented to show the computational accuracy of the proposed HOMS-DRM. This study offers a robust and high-accuracy multi-scale deep learning framework that enables the effective simulation and analysis of multi-scale problems in authentic composite materials.
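
The building block of any deep Ritz variant is the minimization of the Ritz energy by a neural network. A minimal PyTorch sketch for the 1-D Poisson problem $-u'' = f$, $u(0)=u(1)=0$ (our toy problem, not the authors' homogenized or cell problems; network size, penalty weight, and step counts are arbitrary choices):

```python
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def ritz_loss(n_mc=256, penalty=100.0):
    # Monte Carlo estimate of the energy E(u) = int_0^1 (u'^2/2 - f u) dx,
    # with a penalty enforcing the Dirichlet boundary conditions.
    x = torch.rand(n_mc, 1, requires_grad=True)
    u = net(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    f = (torch.pi ** 2) * torch.sin(torch.pi * x)   # exact solution sin(pi x)
    energy = (0.5 * du ** 2 - f * u).mean()
    bc = net(torch.zeros(1, 1)) ** 2 + net(torch.ones(1, 1)) ** 2
    return energy + penalty * bc.squeeze()

for step in range(2000):
    opt.zero_grad()
    loss = ritz_loss()
    loss.backward()
    opt.step()
```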

Blumer et al. (1987, 1989) showed that any concept class that is learnable by Occam algorithms is PAC learnable. Board and Pitt (1990) showed a partial converse of this theorem: for concept classes that are closed under exception lists, any class that is PAC learnable is learnable by an Occam algorithm. However, their Occam algorithm outputs a hypothesis whose complexity is $\delta$-dependent, which is an important limitation. In this paper, we show that their partial converse applies to Occam algorithms with $\delta$-independent complexities as well. Thus, we provide a posteriori justification of various theoretical results and algorithm design methods which use the partial converse as a basis for their work.

We describe a novel algorithm for solving general parametric (nonlinear) eigenvalue problems. Our method has two steps: first, high-accuracy solutions of non-parametric versions of the problem are gathered at some values of the parameters; these are then combined to obtain global approximations of the parametric eigenvalues. To gather the non-parametric data, we use non-intrusive contour-integration-based methods, which, however, cannot track eigenvalues that migrate into/out of the contour as the parameter changes. Special strategies are described for performing the combination-over-parameter step despite having only partial information on such "migrating" eigenvalues. Moreover, we dedicate a special focus to the approximation of eigenvalues that undergo bifurcations. Finally, we propose an adaptive strategy that allows one to effectively apply our method even without any a priori information on the behavior of the sought-after eigenvalues. Numerical tests are performed, showing that our algorithm can achieve remarkably high approximation accuracy.
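
Among the non-intrusive contour-integration-based solvers usable in the first step, Beyn's method is perhaps the simplest to sketch: two contour moments of $T(z)^{-1}V$ reduce the eigenvalues inside the contour to a small linear eigenproblem. A compact numpy version for one fixed parameter value (quadrature size, probe count, and rank tolerance are our choices):

```python
import numpy as np

def beyn(T, center, radius, n_dim, n_probe=8, n_quad=64, tol=1e-10):
    """Beyn's contour-integral method: eigenvalues of the analytic matrix
    function T(z) inside the circle |z - center| = radius. The probe count
    n_probe must be at least the number of eigenvalues inside."""
    rng = np.random.default_rng(0)
    V = rng.standard_normal((n_dim, n_probe))
    A0 = np.zeros((n_dim, n_probe), dtype=complex)
    A1 = np.zeros((n_dim, n_probe), dtype=complex)
    for j in range(n_quad):                      # trapezoidal rule on the circle
        phi = 2 * np.pi * j / n_quad
        z = center + radius * np.exp(1j * phi)
        S = np.linalg.solve(T(z), V)             # T(z)^{-1} V
        w = radius * np.exp(1j * phi) / n_quad   # weight absorbing dz/(2*pi*i)
        A0 += w * S
        A1 += w * z * S
    U, s, Wh = np.linalg.svd(A0, full_matrices=False)
    k = int(np.sum(s > tol * s[0]))              # numerical rank = eigenvalue count
    B = U[:, :k].conj().T @ A1 @ Wh[:k].conj().T / s[:k]
    return np.linalg.eigvals(B)

# Sanity check: the linear problem T(z) = z*I - A recovers A's eigenvalues.
A = np.diag([0.3, -0.5, 1.7, 0.9])
print(np.sort_complex(beyn(lambda z: z * np.eye(4) - A, 0.0, 1.0, 4)))
```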

By combining a logarithmic transformation with a corrected Milstein-type method, this article proposes an explicit, unconditionally boundary- and dynamics-preserving scheme for the stochastic susceptible-infected-susceptible (SIS) epidemic model, which takes values in $(0,N)$. The scheme applied to the model is first proved to have a strong convergence rate of order one. Further, the dynamical behavior of the numerical approximations is analyzed, and the scheme is shown to unconditionally preserve both the domain and the dynamics of the model. More precisely, the proposed scheme produces numerical approximations that remain in the domain $(0,N)$ and reproduce the extinction and persistence properties of the original model for any time step-size $h > 0$, without additional requirements on the model parameters. Numerical experiments are presented to verify our theoretical results.
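
The paper's corrected Milstein construction is not reproduced here; to convey the flavor of transformation-based domain preservation, the sketch below uses a logit transform $Y = \log(I/(N-I))$ (a different transformation than the paper's) on the stochastic SIS model of Gray et al. (2011). Under this transform the noise becomes additive, and mapping back through $I = N/(1+e^{-Y})$ keeps every iterate in $(0,N)$ by construction. The drift follows from Ito's formula; parameter values are ours.

```python
import numpy as np

# Stochastic SIS model (Gray et al. 2011), with I(t) in (0, N):
#   dI = I*(beta*N - mu - gamma - beta*I) dt + sigma*I*(N - I) dW.
N, beta, mu, gamma, sigma = 100.0, 0.02, 0.5, 0.5, 0.005
T, h = 10.0, 0.01
rng = np.random.default_rng(1)

# For Y = log(I/(N-I)), Ito's formula gives additive noise:
#   dY = [ N*(beta*N - mu - gamma - beta*I)/(N - I)
#          + 0.5*sigma**2*N*(2*I - N) ] dt + sigma*N dW,
# so an explicit Euler step in Y stays finite and I = N/(1+exp(-Y))
# lies in (0, N) for every step-size h > 0.
I = 10.0
Y = np.log(I / (N - I))
for _ in range(int(T / h)):
    drift = (N * (beta * N - mu - gamma - beta * I) / (N - I)
             + 0.5 * sigma**2 * N * (2 * I - N))
    Y += drift * h + sigma * N * np.sqrt(h) * rng.standard_normal()
    I = N / (1 + np.exp(-Y))
print("final I, guaranteed in (0, N):", I)
```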

We derive information-theoretic generalization bounds for supervised learning algorithms based on the information contained in predictions rather than in the output of the training algorithm. These bounds improve over the existing information-theoretic bounds, are applicable to a wider range of algorithms, and solve two key challenges: (a) they give meaningful results for deterministic algorithms and (b) they are significantly easier to estimate. We show experimentally that the proposed bounds closely follow the generalization gap in practical scenarios for deep learning.

When and why can a neural network be successfully trained? This article provides an overview of optimization algorithms and theory for training neural networks. First, we discuss the issue of gradient explosion/vanishing and the more general issue of an undesirable spectrum, and then discuss practical solutions including careful initialization and normalization methods. Second, we review generic optimization methods used in training neural networks, such as SGD, adaptive gradient methods and distributed methods, together with theoretical results for these algorithms. Third, we review existing research on the global issues of neural network training, including results on bad local minima, mode connectivity, the lottery ticket hypothesis, and infinite-width analysis.
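
Careful initialization is the classic remedy for the explosion/vanishing issue mentioned above; a minimal numpy check (our setup) of how the activation scale propagates through a deep ReLU stack under a naive versus a variance-preserving (He) initialization:

```python
import numpy as np

rng = np.random.default_rng(0)
depth, width, batch = 50, 256, 1024

for label, std in [("naive std=0.01          ", 0.01),
                   ("He    std=sqrt(2/fan_in)", np.sqrt(2.0 / width))]:
    h = rng.standard_normal((width, batch))
    for _ in range(depth):
        W = std * rng.standard_normal((width, width))
        h = np.maximum(W @ h, 0.0)     # one ReLU layer
    # Naive init shrinks the signal geometrically (vanishing activations);
    # He init keeps the activation scale roughly constant with depth.
    print(label, "-> activation std after", depth, "layers:", h.std())
```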
