
Vandermonde matrices are usually exponentially ill-conditioned and often result in unstable approximations. In this paper, we introduce and analyze the \textit{multivariate Vandermonde with Arnoldi (V+A) method}, which is based on least-squares approximation together with a Stieltjes orthogonalization process, for approximating continuous, multivariate functions on $d$-dimensional irregular domains. The V+A method addresses the ill-conditioning of the Vandermonde approximation by constructing a set of basis functions that are orthogonal with respect to a discrete measure. The V+A method is simple and general: it relies only on sample points from the domain and requires no prior knowledge of the domain. In this paper, we first analyze the sample complexity of the V+A approximation. In particular, we show that, for a large class of domains, the V+A method gives a well-conditioned and near-optimal $N$-dimensional least-squares approximation using $M=\mathcal{O}(N^2)$ equispaced sample points or $M=\mathcal{O}(N^2\log N)$ random sample points, independently of $d$. We also give a comprehensive analysis of the error estimates and the rate of convergence of the V+A approximation. Based on the multivariate V+A approximation, we propose a new variant of the weighted V+A least-squares algorithm that uses only $M=\mathcal{O}(N\log N)$ sample points to give a near-optimal approximation. Our numerical results confirm that the (weighted) V+A method gives a more accurate approximation than the standard orthogonalization method for high-degree approximation using the Vandermonde matrix.
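To make the construction concrete, the sketch below shows a minimal univariate version of the Vandermonde-with-Arnoldi idea in Python: the monomial columns are replaced, one by one, with columns orthogonalized over the sample points via the Stieltjes/Arnoldi recurrence, and the resulting well-conditioned basis is used for the least-squares fit. This is only an illustrative univariate sketch, not the paper's multivariate or weighted algorithm, and the function names are hypothetical.

```python
import numpy as np

def vandermonde_arnoldi_fit(x, f, n):
    """Fit a degree-n polynomial to samples f at points x by building a
    discretely orthogonal basis with the Stieltjes/Arnoldi recurrence."""
    M = len(x)
    Q = np.zeros((M, n + 1))
    H = np.zeros((n + 1, n))
    Q[:, 0] = 1.0
    for k in range(n):
        v = x * Q[:, k]                      # multiply previous basis column by x
        for j in range(k + 1):               # orthogonalize against earlier columns
            H[j, k] = Q[:, j] @ v / M
            v = v - H[j, k] * Q[:, j]
        H[k + 1, k] = np.linalg.norm(v) / np.sqrt(M)
        Q[:, k + 1] = v / H[k + 1, k]
    c, *_ = np.linalg.lstsq(Q, f, rcond=None)  # well-conditioned least-squares problem
    return c, H

def vandermonde_arnoldi_eval(s, c, H):
    """Evaluate the fitted polynomial at new points s by replaying the recurrence."""
    n = H.shape[1]
    W = np.zeros((len(s), n + 1))
    W[:, 0] = 1.0
    for k in range(n):
        v = s * W[:, k]
        for j in range(k + 1):
            v = v - H[j, k] * W[:, j]
        W[:, k + 1] = v / H[k + 1, k]
    return W @ c
```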

Related Content

In this article, we propose a new class of consistent tests for $p$-variate normality. These tests combine two ingredients: the characterization of the standard multivariate normal distribution by the fact that the Hessian of its cumulant generating function is identically the $p\times p$ identity matrix, and the idea of decomposing the information in the joint distribution into the dependence copula and the marginal distributions. Under the null hypothesis of multivariate normality, our proposed test statistic is independent of the unknown mean vector and covariance matrix, so that distribution-free critical values of the test can be obtained by Monte Carlo simulation. We also derive the asymptotic null distribution of the proposed test statistic and establish the consistency of the test against different fixed alternatives. Finally, a comprehensive Monte Carlo study illustrates that our test is a powerful and computationally convenient competitor to many well-known existing test statistics.
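For reference, the characterization invoked above rests on standard facts about the cumulant generating function (general background, not notation taken from the article): for $Z \sim N_p(0, I_p)$,
\[
K_Z(t) \;=\; \log \mathbb{E}\, e^{t^{\top} Z} \;=\; \tfrac12\, t^{\top} t,
\qquad
\nabla^2 K_Z(t) \;=\; I_p \quad \text{for all } t\in\mathbb{R}^p,
\]
whereas for a general $X \sim N_p(\mu, \Sigma)$ one has $K_X(t) = t^{\top}\mu + \tfrac12\, t^{\top}\Sigma\, t$ and $\nabla^2 K_X(t) = \Sigma$, so the Hessian equals $I_p$ precisely in the standardized case.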

We present an extension of the linear sampling method for solving the sound-soft inverse acoustic scattering problem with randomly distributed point sources. The theoretical justification of our sampling method is based on the Helmholtz--Kirchhoff identity, the cross-correlation between measurements, and the volume and imaginary near-field operators, which we introduce and analyze. Implementations in MATLAB using boundary elements, the SVD, Tikhonov regularization, and Morozov's discrepancy principle are also discussed. We demonstrate the robustness and accuracy of our algorithms with several numerical experiments in two dimensions.
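As a concrete illustration of the regularization step named above (Tikhonov regularization with the parameter selected by Morozov's discrepancy principle via the SVD), here is a generic Python sketch; it is not the authors' MATLAB implementation, and the function name, noise-level convention, and root-finding bracket are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import brentq

def tikhonov_morozov(A, b, delta):
    """Tikhonov-regularized solution of A x = b, with the regularization
    parameter alpha chosen by Morozov's discrepancy principle
    ||A x_alpha - b|| = delta, computed via the SVD of A."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    beta = U.conj().T @ b                      # coefficients of b in the left singular basis
    out_of_range = max(np.linalg.norm(b)**2 - np.linalg.norm(beta)**2, 0.0)

    def discrepancy(alpha):
        # residual norm of the Tikhonov solution minus the target noise level
        filt = alpha / (s**2 + alpha)          # equals 1 - s^2/(s^2 + alpha)
        return np.sqrt(np.linalg.norm(filt * beta)**2 + out_of_range) - delta

    # Bracket assumes ||(I - U U*) b|| < delta < ||b||, i.e. delta is a plausible noise level.
    alpha = brentq(discrepancy, 1e-14, 1e14)
    x = Vt.conj().T @ ((s / (s**2 + alpha)) * beta)
    return x, alpha
```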

The use of expectiles in risk management has recently gathered remarkable momentum due to their excellent axiomatic and probabilistic properties. In particular, the class of elicitable law-invariant coherent risk measures consists only of expectiles. While the theory of expectile estimation at central levels is substantial, tail estimation at extreme levels has so far only been considered when the tail of the underlying distribution is heavy. This article is the first work to handle the short-tailed setting, in which the loss distribution of interest (e.g. that of negative log-returns) is bounded to the right and the corresponding extreme value index is negative. We derive an asymptotic expansion of tail expectiles in this challenging context under a general second-order extreme value condition, which allows us to construct two semiparametric estimators of extreme expectiles and to establish their asymptotic properties in a general model of strictly stationary but weakly dependent observations. A simulation study and a real data analysis from a forecasting perspective are performed to verify and compare the proposed competing estimation procedures.
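For background, the standard asymmetric-least-squares definition of the expectile of level $\tau\in(0,1)$ is recalled below (general background rather than notation taken from this article):
\[
\xi_\tau \;=\; \operatorname*{arg\,min}_{\theta\in\mathbb{R}} \; \mathbb{E}\big[\eta_\tau(X-\theta) - \eta_\tau(X)\big],
\qquad
\eta_\tau(u) \;=\; \big|\tau - \mathbb{1}\{u\le 0\}\big|\, u^2,
\]
which is equivalently characterized by the first-order condition $\tau\,\mathbb{E}[(X-\xi_\tau)_{+}] = (1-\tau)\,\mathbb{E}[(\xi_\tau - X)_{+}]$; extreme expectiles correspond to $\tau \to 1$.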

We study the hidden-action principal-agent problem in an online setting. In each round, the principal posts a contract that specifies the payment to the agent based on each outcome. The agent then makes a strategic choice of action that maximizes her own utility, but the action is not directly observable by the principal. The principal observes the outcome and receives utility from the agent's choice of action. Based on past observations, the principal dynamically adjusts the contracts with the goal of maximizing her utility. We introduce an online learning algorithm and provide an upper bound on its Stackelberg regret. We show that when the contract space is $[0,1]^m$, the Stackelberg regret is upper bounded by $\widetilde O(\sqrt{m} \cdot T^{1-1/(2m+1)})$, and lower bounded by $\Omega(T^{1-1/(m+2)})$, where $\widetilde O$ omits logarithmic factors. This result shows that exponential-in-$m$ samples are sufficient and necessary to learn a near-optimal contract, resolving an open problem on the hardness of online contract design. Moreover, when contracts are restricted to some subset $\mathcal{F} \subset [0,1]^m$, we define an intrinsic dimension of $\mathcal{F}$ that depends on the covering number of the spherical code in the space and bound the regret in terms of this intrinsic dimension. When $\mathcal{F}$ is the family of linear contracts, we show that the Stackelberg regret grows exactly as $\Theta(T^{2/3})$. The contract design problem is challenging because the utility function is discontinuous. Bounding the discretization error in this setting has been an open problem. In this paper, we identify a limited set of directions in which the utility function is continuous, allowing us to design a new discretization method and bound its error. This approach enables the first upper bound with no restrictions on the contract and action space.
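For readers unfamiliar with the benchmark, one common way to formalize Stackelberg regret in this kind of online setting is sketched below; the notation is illustrative and may differ from the paper's:
\[
\mathrm{Reg}(T) \;=\; T \cdot \max_{c\in\mathcal{F}}\, u^{P}\big(c,\, b(c)\big) \;-\; \sum_{t=1}^{T} u^{P}\big(c_t,\, b(c_t)\big),
\]
where $c_t\in\mathcal{F}$ is the contract posted in round $t$, $b(c)$ denotes the agent's best-response action under contract $c$, and $u^{P}$ is the principal's expected utility; the benchmark is thus the best fixed contract in hindsight, evaluated against a best-responding agent.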

The paper concerns problems of the recovery of operators from noisy information in weighted $L_q$-spaces with homogeneous weights. A number of general theorems are proved and then applied to finding exact constants in multidimensional Carlson-type inequalities with several weights, as well as to problems of recovering differential operators from a noisy Fourier transform. In particular, optimal methods are obtained for the recovery of powers of generalized Laplace operators from a noisy Fourier transform in the $L_p$-metric.

We consider truncated multivariate normal distributions for which every component is one-sided truncated. We show that this family of distributions is an exponential family. We identify $\mathcal{D}$, the corresponding natural parameter space, and deduce that the family of distributions is not regular. We prove that the gradient of the cumulant generating function of the family remains bounded near certain boundary points of $\mathcal{D}$, and therefore the family is also not steep. We also consider maximum likelihood estimation for $\boldsymbol{\mu}$, the location vector parameter, and $\boldsymbol{\Sigma}$, the positive definite (symmetric) dispersion matrix parameter, of a truncated non-singular multivariate normal distribution. We prove that each solution to the score equations for $(\boldsymbol{\mu},\boldsymbol{\Sigma})$ satisfies the method-of-moments equations, and we obtain a necessary condition for the existence of solutions to the score equations.
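To see why an exponential family structure arises here, note the following standard computation (general background, stated under the assumption that the truncation region $A\subset\mathbb{R}^p$ is fixed and does not depend on the parameters): the truncated density satisfies
\[
f(\boldsymbol{x};\boldsymbol{\mu},\boldsymbol{\Sigma})
\;\propto\;
\exp\!\Big( (\boldsymbol{\Sigma}^{-1}\boldsymbol{\mu})^{\top}\boldsymbol{x}
\;-\; \tfrac12\,\operatorname{tr}\!\big(\boldsymbol{\Sigma}^{-1}\boldsymbol{x}\boldsymbol{x}^{\top}\big)\Big)\,
\mathbb{1}_{A}(\boldsymbol{x}),
\]
so the natural parameters are $(\boldsymbol{\Sigma}^{-1}\boldsymbol{\mu},\,-\tfrac12\boldsymbol{\Sigma}^{-1})$ with sufficient statistics $(\boldsymbol{x},\,\boldsymbol{x}\boldsymbol{x}^{\top})$; the regularity and steepness questions studied in the paper then concern the behaviour of the corresponding log-normalizing constant over $A$.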

We consider network games where a large number of agents interact according to a network sampled from a random network model, represented by a graphon. By exploiting previous results on convergence of such large network games to graphon games, we examine a procedure for estimating unknown payoff parameters from observations of equilibrium actions, without requiring exact network information. We prove smoothness and local convexity of the optimization problem involved in computing the proposed estimator. Additionally, under a notion of graphon parameter identifiability, we show that the optimal estimator is globally unique. We present several examples of identifiable homogeneous and heterogeneous parameters in different classes of linear quadratic network games, with numerical simulations to validate the proposed estimator.

We propose a "learning to reject" framework to address the problem of silent failures in Domain Generalization (DG), where the test distribution differs from the training distribution. Assuming a mild distribution shift, we wish to accept out-of-distribution (OOD) data whenever a model's estimated competence foresees trustworthy responses, instead of rejecting OOD data outright. Trustworthiness is then predicted via a proxy incompetence score that is tightly linked to the performance of a classifier. We present a comprehensive experimental evaluation of incompetence scores for classification and highlight the resulting trade-offs between rejection rate and accuracy gain. For comparability with prior work, we focus on standard DG benchmarks and consider the effect of measuring incompetence via different learned representations in a closed versus an open world setting. Our results suggest that increasing incompetence scores are indeed predictive of reduced accuracy, leading to significant improvements in average accuracy below a suitable incompetence threshold. However, the scores are not yet good enough to allow for a favorable accuracy/rejection trade-off in all tested domains. Surprisingly, our results also indicate that classifiers optimized for DG robustness do not outperform a naive Empirical Risk Minimization (ERM) baseline in the competence region, that is, where test samples elicit low incompetence scores.

Computing accurate splines of degree greater than three is still a challenging task in today's applications. In this type of interpolation, high-order derivatives are needed on the given mesh. As these derivatives are rarely known and are often not easy to approximate accurately, high-degree splines are difficult to obtain using standard approaches. In Beaudoin (1998), Beaudoin and Beauchemin (2003), and Pepin et al. (2019), a new method to compute spline approximations of low or high degree from equidistant interpolation nodes based on the discrete Fourier transform is analyzed. The accuracy of this method greatly depends on the accuracy of the boundary conditions. An algorithm for the computation of the boundary conditions can be found in Beaudoin (1998) and Beaudoin and Beauchemin (2003). However, this algorithm lacks robustness since the approximation of the boundary conditions depends strongly on the choice of $\theta$ arbitrary parameters, where $\theta$ is the degree of the spline. The goal of this paper is therefore to propose two new robust algorithms, independent of arbitrary parameters, for the computation of the boundary conditions in order to obtain accurate splines of any degree. Numerical results will be presented to show the efficiency of these new approaches.

Edge intelligence refers to a set of connected systems and devices that perform data collection, caching, processing, and analysis, based on artificial intelligence, in locations close to where the data is captured. The aim of edge intelligence is to enhance the quality and speed of data processing and to protect the privacy and security of the data. Although this field of research emerged only recently, spanning the period from 2011 to now, it has shown explosive growth over the past five years. In this paper, we present a thorough and comprehensive survey of the literature on edge intelligence. We first identify four fundamental components of edge intelligence, namely edge caching, edge training, edge inference, and edge offloading, based on theoretical and practical results pertaining to proposed and deployed systems. We then aim for a systematic classification of the state of these solutions by examining research results and observations for each of the four components, and we present a taxonomy that includes practical problems, adopted techniques, and application goals. For each category, we elaborate on, compare, and analyse the literature from the perspectives of adopted techniques, objectives, performance, advantages and drawbacks, etc. This survey article provides a comprehensive introduction to edge intelligence and its application areas. In addition, we summarise the development of this emerging research field and the current state of the art, and discuss important open issues and possible theoretical and technical solutions.
