
One of the most interesting problems in the recent renaissance of the study of kernel regression is whether kernel interpolation can generalize well, since it may help us understand the `benign overfitting phenomenon' reported in the literature on deep networks. In this paper, under mild conditions, we show that for any $\varepsilon>0$, the generalization error of kernel interpolation is lower bounded by $\Omega(n^{-\varepsilon})$. In other words, kernel interpolation generalizes poorly for a large class of kernels. As a direct corollary, we show that overfitted wide neural networks defined on the sphere generalize poorly.
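
To make the setting concrete, below is a minimal sketch of kernel interpolation (kernel "ridgeless" regression, i.e. the minimum-norm RKHS fit that passes through every noisy training label). The Laplacian kernel, sample size and noise level are illustrative assumptions, not the setting analyzed in the paper.

```python
import numpy as np

# Minimal sketch of kernel interpolation: the minimum-norm RKHS fit that
# passes through every noisy training label (no regularization term).
# Kernel choice, sample size and noise level are illustrative assumptions.

def laplacian_kernel(X, Y, bandwidth=0.2):
    return np.exp(-np.abs(X[:, None] - Y[None, :]) / bandwidth)

rng = np.random.default_rng(0)
n = 50
x_train = np.sort(rng.uniform(-1, 1, n))
y_train = np.sin(np.pi * x_train) + 0.3 * rng.standard_normal(n)   # noisy labels

# Interpolant: f(x) = k(x, X) K^{-1} y  -- exact fit of the noisy data
K = laplacian_kernel(x_train, x_train)
alpha = np.linalg.solve(K, y_train)

x_test = np.linspace(-1, 1, 400)
f_test = laplacian_kernel(x_test, x_train) @ alpha

train_err = np.mean((K @ alpha - y_train) ** 2)
test_err = np.mean((f_test - np.sin(np.pi * x_test)) ** 2)   # error vs. noiseless target
print(f"train error: {train_err:.2e}   test error: {test_err:.3f}")
```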

Related Content

The numerical solution of continuum damage mechanics (CDM) problems suffers from convergence-related challenges during the material softening stage, and consequently existing iterative solvers are subject to a trade-off between computational expense and solution accuracy. In this work, we present a novel unified arc-length (UAL) method, and we derive the formulation of the analytical tangent matrix and governing system of equations for both local and non-local gradient damage problems. Unlike existing versions of arc-length solvers that monolithically scale the external force vector, the proposed method treats the latter as an independent variable and determines the position of the system on the equilibrium path based on all the nodal variations of the external force vector. This approach renders the proposed solver substantially more efficient and robust than existing solvers used in CDM problems. We demonstrate the considerable advantages of the proposed algorithm through several benchmark 1D problems with sharp snap-backs and 2D examples under various boundary conditions and loading scenarios. The proposed UAL approach exhibits a superior ability to overcome critical increments along the equilibrium path. Moreover, the proposed UAL method is 1-2 orders of magnitude faster than force-controlled arc-length and monolithic Newton-Raphson solvers.
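
To illustrate the general idea of treating the load factor as an unknown that is solved for together with the displacements, here is a minimal sketch of a classical spherical arc-length step on a single-DOF softening bar. This is not the paper's UAL formulation; the softening law and all parameters are illustrative assumptions.

```python
import numpy as np

# Classical spherical arc-length continuation on a single-DOF softening bar,
# shown only to illustrate treating the load factor as an unknown alongside
# the displacement; this is NOT the paper's UAL formulation.
# The softening law g(u) = u*exp(-u) and all parameters are illustrative.

def g(u):   return u * np.exp(-u)           # internal force (softening branch)
def dg(u):  return (1.0 - u) * np.exp(-u)   # tangent stiffness

f_ext, psi, dl = 1.0, 1.0, 0.05             # reference load, scaling, arc length
u, lam = 0.0, 0.0
path = [(u, lam)]

for step in range(120):
    u_prev, lam_prev = u, lam
    u, lam = u_prev + dl, lam_prev          # simple predictor
    for _ in range(50):                     # Newton corrector on the augmented system
        R = lam * f_ext - g(u)                                       # equilibrium residual
        A = (u - u_prev) ** 2 + psi**2 * (lam - lam_prev) ** 2 - dl**2
        J = np.array([[-dg(u),             f_ext],
                      [2 * (u - u_prev),   2 * psi**2 * (lam - lam_prev)]])
        du, dlam = np.linalg.solve(J, -np.array([R, A]))
        u, lam = u + du, lam + dlam
        if abs(R) < 1e-10 and abs(A) < 1e-10:
            break
    path.append((u, lam))

print("max load factor along the path:", max(l for _, l in path))
```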

We evaluate the performance of novel numerical methods for solving one-dimensional nonlinear fractional dispersive and dissipative evolution equations. The methods are based on affine combinations of time-splitting integrators and pseudo-spectral discretizations using Hermite and Fourier expansions. We show the effectiveness of the proposed methods by numerically computing the dynamics of soliton solutions of the standard and fractional variants of the nonlinear Schr{\"o}dinger equation (NLSE) and the complex Ginzburg-Landau equation (CGLE), and by comparing the results with those obtained by standard splitting integrators. An exhaustive numerical investigation shows that the new technique is competitive when compared to traditional composition-splitting schemes for the case of Hamiltonian problems, both in terms of accuracy and computational cost. Moreover, it is applicable straightforwardly to irreversible models, outperforming high-order symplectic integrators, which can become unstable due to their need for negative time steps. Finally, we discuss potential improvements of the numerical methods aimed at increasing their efficiency, and possible applications to the investigation of dissipative solitons that arise in nonlinear optical systems of contemporary interest. Overall, the method offers a promising alternative for solving a wide range of evolutionary partial differential equations.
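
As background for the kind of splitting integrator being combined, a minimal Strang split-step Fourier sketch for the standard cubic NLSE is given below. The fractional variants and the affine-combination technique itself are not reproduced, and the discretization parameters are illustrative assumptions.

```python
import numpy as np

# Minimal Strang split-step Fourier integrator for the focusing cubic NLSE
#     i u_t + u_xx + 2 |u|^2 u = 0,
# the baseline splitting scheme the paper's affine combinations build on;
# the fractional variants are not reproduced in this sketch.

L, N, dt, steps = 40.0, 1024, 1e-3, 2000
dx = L / N
x = np.arange(N) * dx - L / 2
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)          # angular wavenumbers

u = 1.0 / np.cosh(x)                             # one-soliton initial datum
mass0 = np.sum(np.abs(u) ** 2) * dx
half_linear = np.exp(-1j * k ** 2 * dt / 2)      # exact dispersive half-step in Fourier space

for _ in range(steps):
    u = np.fft.ifft(half_linear * np.fft.fft(u))     # linear half-step
    u *= np.exp(2j * np.abs(u) ** 2 * dt)            # nonlinear full step (exact pointwise)
    u = np.fft.ifft(half_linear * np.fft.fft(u))     # linear half-step

mass = np.sum(np.abs(u) ** 2) * dx
print(f"relative L2-mass drift after t = {steps * dt:.1f}: {abs(mass - mass0) / mass0:.2e}")
```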

We present a new splitting method to solve Lyapunov equations in Kronecker product form. Although the resulting Kronecker matrix is of order $n^2$, each iteration demands only two operations with the matrix $A$: a multiplication of the form $(A-\sigma I) \tilde{B}$ and an inversion of the form $(A-\sigma I)^{-1}\tilde{B}$. We show that, for a suitable choice of the parameter $\sigma$, all eigenvalues of the iteration matrix are less than 1 in absolute value. Moreover, we present a theorem that enables us to obtain a good starting vector for the method.
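
For context, the sketch below implements a classical single-shift Smith/ADI-type splitting for a dense Lyapunov equation, which exhibits the same per-iteration cost pattern described above (one multiplication and one solve with a shifted matrix). The paper's particular splitting, its Kronecker-product formulation, and its parameter and starting-vector choices may differ.

```python
import numpy as np

# A classical single-shift Smith/ADI-type splitting for the Lyapunov equation
#     A X + X A^T + W = 0,  A stable,
# written in dense form for clarity.  Each sweep uses one multiplication with
# (A + sigma I) and one solve with (A - sigma I); this is an illustration of
# the general splitting idea, not the paper's specific method.

def smith_lyapunov(A, W, sigma=1.0, tol=1e-10, max_iter=500):
    n = A.shape[0]
    M = A - sigma * np.eye(n)
    # Equivalent Stein (fixed-point) form:  X = Ad X Ad^T + Wd
    Ad = np.linalg.solve(M, A + sigma * np.eye(n))
    Wd = 2 * sigma * np.linalg.solve(M, np.linalg.solve(M, W).T).T
    X = np.zeros_like(W)
    for _ in range(max_iter):
        X_new = Ad @ X @ Ad.T + Wd
        if np.linalg.norm(X_new - X) <= tol * max(np.linalg.norm(X_new), 1.0):
            return X_new
        X = X_new
    return X

rng = np.random.default_rng(1)
n = 30
A = rng.standard_normal((n, n))
A -= (np.abs(np.linalg.eigvals(A)).max() + 0.5) * np.eye(n)   # shift to guarantee stability
B = rng.standard_normal((n, 2))
W = B @ B.T

X = smith_lyapunov(A, W, sigma=1.0)
print("Lyapunov residual norm:", np.linalg.norm(A @ X + X @ A.T + W))
```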

We consider a one-dimensional singularly perturbed fourth-order problem with the additional feature of a shift term. An expansion into a smooth term, boundary layers and an inner layer yields a formal solution decomposition, which, together with a stability result, provides the estimates needed for the subsequent numerical analysis. Using classical layer-adapted meshes, we present a numerical method that achieves supercloseness and optimal convergence orders in the associated energy norm. We also consider coarser meshes in view of the weak layers. Some numerical examples conclude the paper and support the theory.
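
For readers unfamiliar with layer-adapted meshes, the sketch below constructs a generic piecewise-uniform Shishkin mesh. The transition parameter, and whether an additional inner-layer region around the shift point is required, follow from the paper's decomposition and are only indicated schematically here.

```python
import numpy as np

# Sketch of a piecewise-uniform Shishkin mesh on [0, 1] with boundary layers at
# both end points -- the generic construction behind "classical layer-adapted
# meshes".  The transition parameter below is schematic; the actual choice for
# a fourth-order problem with a shift term depends on the paper's analysis.

def shishkin_mesh(N, eps, sigma=2.0):
    """N+1 points on [0, 1]; N divisible by 4; boundary layers of width O(eps)."""
    tau = min(0.25, sigma * eps * np.log(N))      # transition point
    left   = np.linspace(0.0, tau, N // 4 + 1)
    middle = np.linspace(tau, 1.0 - tau, N // 2 + 1)
    right  = np.linspace(1.0 - tau, 1.0, N // 4 + 1)
    return np.concatenate([left, middle[1:], right[1:]])

mesh = shishkin_mesh(N=64, eps=1e-4)
print(len(mesh), mesh[:3], mesh[-3:])
```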

Many mechanisms behind the evolution of cooperation, such as reciprocity, indirect reciprocity, and altruistic punishment, require group knowledge of individual actions. But what keeps people cooperating when no one is looking? Conformist norm internalization, the tendency to abide by the behavior of the majority of the group, even when it is individually harmful, could be the answer. In this paper, we analyze a world where (1) there is group selection and punishment by indirect reciprocity but (2) many actions (half) go unobserved, and therefore unpunished. Can norm internalization fill this "observation gap" and lead to high levels of cooperation, even when agents may in principle cooperate only when likely to be caught and punished? Specifically, we seek to understand whether adding norm internalization to the strategy space in a public goods game can lead to higher levels of cooperation when both norm internalization and cooperation start out rare. We find the answer to be positive, but, interestingly, not because norm internalizers end up making up a substantial fraction of the population, nor because they cooperate much more than other agent types. Instead, norm internalizers, by polarizing, catalyzing, and stabilizing cooperation, can increase the cooperation levels of other agent types, while only making up a minority of the population themselves.
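
The sketch below is a deliberately minimal, single-round illustration of the "observation gap" only. The agent types, payoffs, and observation probability are illustrative assumptions; the paper's full model with group selection, indirect reciprocity, and evolving norm internalization is not reproduced.

```python
import numpy as np

# Minimal one-round sketch of the "observation gap": actions are observed (and
# defection punished) only half of the time, so agents that condition their
# cooperation on being watched free-ride on the unobserved half.  Agent types,
# payoffs and the observation probability are illustrative assumptions.

rng = np.random.default_rng(0)
N, r, cost, fine, p_obs = 100, 1.6, 1.0, 1.5, 0.5

# strategies: always defect, cooperate only when observed, always cooperate (internalized norm)
types = rng.choice(["defector", "conditional", "internalizer"], size=N, p=[0.4, 0.4, 0.2])
observed = rng.random(N) < p_obs

cooperates = np.where(types == "internalizer", True,
              np.where(types == "conditional", observed, False))

pot = r * cost * cooperates.sum() / N                      # public good shared equally
payoff = pot - cost * cooperates - fine * (~cooperates & observed)

for t in ["defector", "conditional", "internalizer"]:
    print(f"{t:12s} mean payoff: {payoff[types == t].mean():+.3f}")
```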

We present a novel stabilized isogeometric formulation for the Stokes problem, where the geometry of interest is obtained via overlapping NURBS (non-uniform rational B-spline) patches, i.e., one patch on top of another in an arbitrary but predefined hierarchical order. All the visible regions constitute the computational domain, whereas independent patches are coupled through visible interfaces using Nitsche's formulation. Such a geometric representation inevitably involves trimming, which may yield trimmed elements of extremely small measures (referred to as bad elements) and thus lead to stability issues. Motivated by the minimal stabilization method that rigorously guarantees stability for trimmed geometries [1], in this work we generalize it to the Stokes problem on overlapping patches. Central to our method are the distinct treatments of the pressure and velocity spaces: stabilization for velocity is carried out for the flux terms on interfaces, whereas pressure is stabilized in all the bad elements. We provide a priori error estimates with a comprehensive theoretical study. Through a suite of numerical tests, we first show that optimal convergence rates are achieved, in agreement with our theoretical findings. Second, we show that the accuracy of the pressure is improved by several orders of magnitude with the proposed stabilization method, compared to the results without stabilization. Finally, we also demonstrate the flexibility and efficiency of the proposed method in capturing local features in the solution field.

A sequential pattern with negation, or negative sequential pattern, takes the form of a sequential pattern for which the negation symbol may be used in front of some of the pattern's itemsets. Intuitively, such a pattern occurs in a sequence if the negated itemsets are absent from the sequence. Recent work has shown that different semantics can be attributed to these pattern forms, and that state-of-the-art algorithms do not extract the same sets of patterns. This raises the important question of the interpretability of sequential patterns with negation. In this study, our focus is on exploring how potential users perceive negation in sequential patterns. Our aim is to determine whether specific semantics are more "intuitive" than others and whether these align with the semantics employed by one or more state-of-the-art algorithms. To achieve this, we designed a questionnaire to reveal which semantics each user finds intuitive. This article presents both the design of the questionnaire and an in-depth analysis of the 124 responses obtained. The outcomes indicate that two of the semantics are predominantly intuitive; however, neither of them aligns with the semantics of the primary state-of-the-art algorithms. As a result, we provide recommendations to account for this disparity in the conclusions drawn.
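
As a toy illustration of why negation admits several semantics, the sketch below checks the pattern <a, not b, c> against the same sequences under two readings of absence. The item-based encoding and the two particular semantics are simplified stand-ins for the itemset-based semantics compared in the study, not a reproduction of any specific algorithm.

```python
# Toy illustration of why negation in sequential patterns admits several
# semantics.  The pattern <a, not b, c> is checked against the same sequences
# under two readings of "absence": (1) b must not occur between the matched
# occurrences of a and c, and (2) b must not occur anywhere in the sequence.

def occurs_with_gap_absence(seq, a, neg, c):
    """<a, not neg, c>: some a ... c with no neg strictly between them."""
    return any(seq[i] == a and seq[j] == c and neg not in seq[i + 1:j]
               for i in range(len(seq)) for j in range(i + 1, len(seq)))

def occurs_with_global_absence(seq, a, neg, c):
    """<a, not neg, c>: some a before some c, and no neg anywhere."""
    return neg not in seq and any(
        seq[i] == a and seq[j] == c
        for i in range(len(seq)) for j in range(i + 1, len(seq)))

sequences = [list("adc"), list("abcac"), list("abc")]
for s in sequences:
    print("".join(s),
          "gap-absence:",    occurs_with_gap_absence(s, "a", "b", "c"),
          "global-absence:", occurs_with_global_absence(s, "a", "b", "c"))
```

Running this shows that the sequence "abcac" supports the pattern under the gap-absence reading but not under the global-absence reading, which is exactly the kind of divergence between semantics the questionnaire probes.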

Neuromorphic computing is one of the few current approaches that have the potential to significantly reduce power consumption in Machine Learning and Artificial Intelligence. Imam & Cleland presented an odour-learning algorithm that runs on a neuromorphic architecture and is inspired by circuits described in the mammalian olfactory bulb. They assess the algorithm's performance in "rapid online learning and identification" of gaseous odorants and odorless gases (collectively, "gases") using a set of gas-sensor recordings of different odour presentations, corrupted by impulse noise. We replicated parts of the study and discovered limitations that affect some of the conclusions drawn. First, the dataset used suffers from sensor drift and a non-randomised measurement protocol, rendering it of limited use for odour identification benchmarks. Second, we found that the model is restricted in its ability to generalise over repeated presentations of the same gas. We demonstrate that the task the study refers to can be solved with a simple hash table approach, matching or exceeding the reported results in accuracy and runtime. Therefore, a validation of the model that goes beyond restoring a learned data sample remains to be shown, in particular its suitability for odour identification tasks.
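
To make the baseline concrete, the sketch below shows one way a plain hash-table classifier for "store one sample per gas, then recognize impulse-corrupted copies" could look. It is a hedged illustration of the general idea, not necessarily the replication's exact implementation; the channel count, quantization step and noise level are illustrative assumptions.

```python
import numpy as np

# Hedged sketch of a hash-table baseline for "learn one sample per gas, then
# recognize impulse-corrupted copies".  Sensor readings are quantized so that
# uncorrupted entries hash identically; corrupted queries fall back to the
# stored key sharing the most entries.  All numbers below are illustrative.

def quantize(sample, step=0.05):
    return tuple(np.round(np.asarray(sample) / step).astype(int))

class HashTableClassifier:
    def __init__(self):
        self.table = {}

    def learn(self, sample, label):                  # one-shot storage
        self.table[quantize(sample)] = label

    def identify(self, sample):
        key = quantize(sample)
        if key in self.table:                        # exact hit
            return self.table[key]
        # fallback: stored pattern agreeing on the most channels
        return max(self.table.items(),
                   key=lambda kv: sum(a == b for a, b in zip(kv[0], key)))[1]

rng = np.random.default_rng(0)
clf = HashTableClassifier()
prototypes = {name: rng.random(72) for name in ["toluene", "acetone", "ammonia"]}
for name, proto in prototypes.items():
    clf.learn(proto, name)

query = prototypes["acetone"].copy()
idx = rng.choice(72, size=14, replace=False)         # impulse noise on ~20% of channels
query[idx] = rng.random(14)
print(clf.identify(query))                           # expected: "acetone"
```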

We examine a stochastic formulation for data-driven optimization wherein the decision-maker is not privy to the true distribution, but knows that it lies in some hypothesis set and possesses a historical data set from which information about it can be gleaned. We define a prescriptive solution as a decision rule mapping such a data set to decisions. As no prescriptive solution is generalizable over the entire hypothesis set, we define out-of-sample optimality as a local average over a neighbourhood of hypotheses, averaged over the sampling distribution. We prove sufficient conditions for local out-of-sample optimality, which reduce to functions of the sufficient statistic of the hypothesis family. We present an optimization problem that solves for such an out-of-sample optimal solution, and does so efficiently through a combination of sampling and bisection search algorithms. Finally, we illustrate our approach on the newsvendor problem and find strong performance compared against alternatives in the literature. Our research has potential implications for end-to-end learning and Bayesian optimization.
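
For reference, the sketch below computes the standard sample-average-approximation (critical-fractile quantile) order quantity for a data-driven newsvendor, the textbook baseline that prescriptive rules are typically compared against. It is not the out-of-sample-optimal decision rule constructed in the paper; the demand distribution and cost parameters are illustrative assumptions.

```python
import numpy as np

# Standard data-driven newsvendor baseline: order the empirical
# critical-fractile quantile of the demand sample (sample-average
# approximation).  The demand model and costs below are illustrative.

def saa_newsvendor(demand_sample, underage_cost, overage_cost):
    """Order quantity minimizing the empirical expected newsvendor cost."""
    critical_fractile = underage_cost / (underage_cost + overage_cost)
    return np.quantile(demand_sample, critical_fractile)

def newsvendor_cost(order, demand, cu, co):
    return cu * np.maximum(demand - order, 0) + co * np.maximum(order - demand, 0)

rng = np.random.default_rng(0)
true_demand = rng.exponential(scale=100.0, size=100_000)   # unknown in practice
sample = rng.exponential(scale=100.0, size=50)             # the historical data set

q = saa_newsvendor(sample, underage_cost=4.0, overage_cost=1.0)
print(f"SAA order quantity : {q:.1f}")
print(f"out-of-sample cost : {newsvendor_cost(q, true_demand, 4.0, 1.0).mean():.2f}")
```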

We propose an approach to compute inner- and outer-approximations of the sets of values satisfying constraints expressed as arbitrarily quantified formulas. Such formulas arise, for instance, when specifying important problems in control such as robustness, motion planning, or controller comparison. We develop an interval-based method that allows for tractable yet tight approximations. We demonstrate its applicability through a series of examples and benchmarks using a prototype implementation.
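
The sketch below illustrates the flavour of such interval-based inner/outer approximation on a toy universally quantified constraint. The example formula and the bisection strategy are illustrative assumptions and do not reflect the prototype implementation.

```python
# Interval-arithmetic sketch of inner/outer approximation of a quantified set:
#   S = { p in [-2, 2] : forall x in [-1, 1], f(x, p) <= 0 },  f(x, p) = p*x^2 - 1,
# for which S = [-2, 1] exactly.  A parameter box is certified inside S when an
# interval evaluation of f over (x-domain, p-box) is <= 0, certified outside
# when some sampled x already makes f > 0 for the whole p-box, and bisected
# otherwise.  The toy f and the strategy are illustrative only.

def imul(a, b):                      # interval product
    c = [a[0]*b[0], a[0]*b[1], a[1]*b[0], a[1]*b[1]]
    return (min(c), max(c))

def isqr(a):                         # interval square
    lo = 0.0 if a[0] <= 0.0 <= a[1] else min(a[0]**2, a[1]**2)
    return (lo, max(a[0]**2, a[1]**2))

def f_range(x_box, p_box):           # interval enclosure of f(x, p) = p*x^2 - 1
    lo, hi = imul(p_box, isqr(x_box))
    return (lo - 1.0, hi - 1.0)

def classify(p_box, x_dom, depth=12):
    if f_range(x_dom, p_box)[1] <= 0.0:                  # inside: f <= 0 for all x, all p
        return [(p_box, "inner")]
    for x in (x_dom[0], 0.5 * (x_dom[0] + x_dom[1]), x_dom[1]):
        if f_range((x, x), p_box)[0] > 0.0:              # outside: witness x violates for all p
            return [(p_box, "outside")]
    if depth == 0:
        return [(p_box, "boundary")]
    mid = 0.5 * (p_box[0] + p_box[1])
    return (classify((p_box[0], mid), x_dom, depth - 1)
            + classify((mid, p_box[1]), x_dom, depth - 1))

boxes = classify((-2.0, 2.0), (-1.0, 1.0))
inner = sum(b[1] - b[0] for b, tag in boxes if tag == "inner")
outer = sum(b[1] - b[0] for b, tag in boxes if tag != "outside")
print(f"inner measure ~ {inner:.3f}, outer measure ~ {outer:.3f} (exact: 3.0)")
```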
