
In this work we introduce a reduced-rank algorithm for Gaussian process regression. Our numerical scheme converts a Gaussian process on a user-specified interval to its Karhunen-Lo\`eve expansion, the $L^2$-optimal reduced-rank representation. Numerical evaluation of the Karhunen-Lo\`eve expansion is performed once during precomputation and involves computing a numerical eigendecomposition of an integral operator whose kernel is the covariance function of the Gaussian process. The Karhunen-Lo\`eve expansion is independent of observed data and depends only on the covariance kernel and the size of the interval on which the Gaussian process is defined. The scheme of this paper does not require translation invariance of the covariance kernel. We also introduce a class of fast algorithms for Bayesian fitting of hyperparameters, and demonstrate the performance of our algorithms with numerical experiments in one and two dimensions. Extensions to higher dimensions are mathematically straightforward but suffer from the standard curse of dimensionality.
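The precomputation described above can be sketched with a simple Nystrom-style discretization: replace the integral operator by a quadrature rule, symmetrize with the square roots of the weights, and eigendecompose. The squared-exponential kernel, node count, and rank below are illustrative choices, not the paper's specific setup.

```python
# A minimal sketch of the precomputation step: a Nystrom-style numerical
# eigendecomposition of the covariance integral operator on [a, b].
import numpy as np

def kl_expansion(kernel, a, b, n_nodes=200, rank=20):
    """Approximate the leading Karhunen-Loeve eigenpairs of `kernel` on [a, b]."""
    # Gauss-Legendre nodes/weights mapped from [-1, 1] to [a, b].
    x, w = np.polynomial.legendre.leggauss(n_nodes)
    x = 0.5 * (b - a) * x + 0.5 * (b + a)
    w = 0.5 * (b - a) * w

    # Symmetrized quadrature discretization: W^{1/2} K W^{1/2}.
    K = kernel(x[:, None], x[None, :])
    sw = np.sqrt(w)
    A = sw[:, None] * K * sw[None, :]

    eigvals, eigvecs = np.linalg.eigh(A)
    idx = np.argsort(eigvals)[::-1][:rank]
    # Undo the weight scaling to recover eigenfunction values at the nodes.
    phi = eigvecs[:, idx] / sw[:, None]
    return eigvals[idx], phi, x

# Illustrative squared-exponential covariance kernel (not the paper's choice).
se_kernel = lambda s, t: np.exp(-0.5 * (s - t) ** 2 / 0.3 ** 2)
lam, phi, nodes = kl_expansion(se_kernel, 0.0, 1.0)
print(lam[:5])  # leading eigenvalues decay rapidly for smooth kernels
```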

Related Content

Processing is an open-source programming language and the name of its accompanying integrated development environment (IDE). Processing is used in the electronic arts and visual design communities to teach the fundamentals of programming, and it has been applied in a large number of new-media and interactive art works.

This article studies global testing of the slope function in the functional linear regression model in the framework of reproducing kernel Hilbert spaces. We propose a new test statistic based on smoothness regularization estimators. The asymptotic distribution of the test statistic is established under the null hypothesis. It is shown that the null asymptotic distribution is determined jointly by the reproducing kernel and the covariance function. Our theoretical analysis shows that the proposed test is consistent over a class of smooth local alternatives. Despite the generality of the regularization method, we show that the procedure is easily implementable. Numerical examples are provided to demonstrate its empirical advantages over competing methods.
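To make the ingredients concrete, the sketch below fits a slope function on a discretized grid with a roughness (second-difference) penalty and forms a naive $L^2$-norm statistic for testing $\beta = 0$. Both the penalty and the statistic are illustrative stand-ins; the paper's construction works in a reproducing kernel Hilbert space and derives the statistic's null distribution.

```python
# A minimal sketch of a smoothness-regularization estimator for the slope
# function, with a naive norm-based statistic for testing beta = 0.
import numpy as np

def fit_slope(X, y, grid, lam=1e-3):
    """X: (n, m) functional covariates evaluated on `grid`; y: (n,) responses."""
    m = X.shape[1]
    h = grid[1] - grid[0]
    S = X * h                                   # quadrature for integral of X(t) beta(t) dt
    D = np.diff(np.eye(m), 2, axis=0) / h**2    # discrete second derivative
    # Penalized least squares: ||y - S beta||^2 + lam * ||D beta||^2.
    return np.linalg.solve(S.T @ S + lam * D.T @ D, S.T @ y)

rng = np.random.default_rng(0)
grid = np.linspace(0, 1, 50)
X = rng.standard_normal((200, 50))
beta_true = np.sin(2 * np.pi * grid)
y = X @ beta_true * (grid[1] - grid[0]) + 0.1 * rng.standard_normal(200)

beta_hat = fit_slope(X, y, grid)
T = np.sum(beta_hat**2) * (grid[1] - grid[0])   # L2-norm statistic (illustrative)
print(T)
```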

We consider a variant of the clustering problem for a complete weighted graph. The aim is to partition the nodes into clusters maximizing the sum of the edge weights within the clusters. This problem is known as the clique partitioning problem; it is NP-hard in the general case, where edge weights may have different signs. We propose a new method of estimating an upper bound on the objective function, which we combine with the classical branch-and-bound technique to find the exact solution. We evaluate our approach on a broad range of random graphs and real-world networks. The proposed approach provides tighter upper bounds and achieves significant improvements in convergence speed compared to known alternative methods.
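A minimal version of such a branch-and-bound scheme can be sketched as follows, using the classical bound "sum of the remaining positive edge weights" rather than the tighter bound proposed in the paper. Nodes are assigned to clusters one at a time, and a branch is pruned when its bound cannot beat the incumbent.

```python
# A minimal branch-and-bound sketch for clique partitioning on a small
# complete weighted graph. The bound here is the classical positive-weight
# bound, not the tighter bound proposed in the paper.
import numpy as np

def clique_partition(W):
    """W: symmetric (n, n) weight matrix; returns (best score, cluster labels)."""
    n = len(W)
    best = {"score": -np.inf, "labels": None}

    def recurse(i, labels, score):
        if i == n:
            if score > best["score"]:
                best.update(score=score, labels=labels.copy())
            return
        # Upper bound: every still-undecided edge with positive weight joins.
        remaining = sum(W[a][b] for a in range(i, n) for b in range(a + 1, n)
                        if W[a][b] > 0)
        remaining += sum(max(W[a][b], 0) for a in range(i) for b in range(i, n))
        if score + remaining <= best["score"]:
            return  # prune this branch
        # Branch: put node i into each existing cluster, or open a new one.
        for c in range(max(labels[:i], default=-1) + 2):
            labels[i] = c
            gain = sum(W[j][i] for j in range(i) if labels[j] == c)
            recurse(i + 1, labels, score + gain)

    recurse(0, [0] * n, 0.0)
    return best["score"], best["labels"]

W = np.array([[0, 3, -2, 1], [3, 0, 2, -4], [-2, 2, 0, 5], [1, -4, 5, 0]], float)
print(clique_partition(W))
```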

Numerical stabilization is often used to eliminate (or at least alleviate) the spurious oscillations generally produced by full order models (FOMs) in under-resolved or marginally-resolved simulations of convection-dominated flows. In this paper, we investigate the role of numerical stabilization in reduced order models (ROMs) of marginally-resolved convection-dominated flows. Specifically, we investigate FOM-ROM consistency, i.e., whether the numerical stabilization is beneficial both at the FOM and the ROM level. As a numerical stabilization strategy, we focus on the evolve-filter-relax (EFR) regularization algorithm, which centers around spatial filtering. To investigate FOM-ROM consistency, we consider two ROM strategies: (i) the EFR-ROM, in which the EFR stabilization is used at the FOM level but not at the ROM level; and (ii) the EFR-EFRROM, in which the EFR stabilization is used both at the FOM and at the ROM level. We compare the EFR-ROM with the EFR-EFRROM in the numerical simulation of a 2D flow past a circular cylinder in the convection-dominated, marginally-resolved regime. We also perform model reduction with respect to both time and Reynolds number. Our numerical investigation shows that the EFR-EFRROM is more accurate than the EFR-ROM, which suggests that FOM-ROM consistency is beneficial in convection-dominated, marginally-resolved flows.
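For readers unfamiliar with EFR, the sketch below performs one evolve-filter-relax step for a 1D periodic advection problem: an unstabilized time step, a differential filter $(I - \delta^2 \Delta)\bar{v} = v$, and a convex relaxation between the two fields. The filter radius `delta` and relaxation parameter `chi` are illustrative; the paper works with 2D incompressible flow at both the FOM and ROM levels.

```python
# A minimal 1D sketch of one evolve-filter-relax (EFR) step for periodic
# advection. Parameter values are illustrative only.
import numpy as np

def efr_step(u, dt, dx, c=1.0, delta=0.02, chi=0.1):
    n = len(u)
    # (1) Evolve: unstabilized step (forward Euler, centered differences).
    dudx = (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)
    v = u - dt * c * dudx
    # (2) Filter: solve (I - delta^2 d^2/dx^2) v_bar = v (differential filter).
    lap = np.eye(n, k=-1) - 2 * np.eye(n) + np.eye(n, k=1)
    lap[0, -1] = lap[-1, 0] = 1.0               # periodic boundary
    v_bar = np.linalg.solve(np.eye(n) - delta**2 / dx**2 * lap, v)
    # (3) Relax: blend the evolved and filtered fields.
    return (1 - chi) * v + chi * v_bar

x = np.linspace(0, 1, 200, endpoint=False)
u = np.exp(-100 * (x - 0.5) ** 2)               # smooth initial pulse
for _ in range(100):
    u = efr_step(u, dt=1e-3, dx=x[1] - x[0])
```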

Multiclass logistic regression is a fundamental task in machine learning with applications in classification and boosting. Previous work (Foster et al., 2018) has highlighted the importance of improper predictors for achieving "fast rates" in the online multiclass logistic regression problem without suffering exponentially from secondary problem parameters, such as the norm of the predictors in the comparison class. While Foster et al. (2018) introduced a statistically optimal algorithm, it is computationally intractable in practice, since its running time is a large polynomial in the time horizon and the dimension of the input feature vectors. In this paper, we develop a new algorithm, FOLKLORE, which runs significantly faster than the algorithm of Foster et al. (2018): its running time per iteration scales quadratically in the dimension, at the cost of a linear dependence on the norm of the predictors in the regret bound. This yields the first practical algorithm for online multiclass logistic regression, resolving an open problem of Foster et al. (2018). Furthermore, we show that our algorithm can be applied to online bandit multiclass prediction and online multiclass boosting, yielding more practical algorithms for both problems than those of Foster et al. (2018), with similar performance guarantees. Finally, we also provide an online-to-batch conversion result for our algorithm.
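As a point of reference for the problem setting, here is a minimal online multiclass logistic regression loop using plain online gradient descent, a simple proper baseline rather than FOLKLORE itself (whose improper updates and quadratic-in-dimension per-iteration cost are the paper's contribution).

```python
# A minimal sketch of online multiclass logistic regression with plain
# online gradient descent; an illustrative baseline, not FOLKLORE.
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def online_logistic(stream, d, k, lr=0.1):
    """stream yields (x, y) pairs with x in R^d and y in {0, ..., k-1}."""
    W = np.zeros((k, d))
    total_loss = 0.0
    for x, y in stream:
        p = softmax(W @ x)
        total_loss -= np.log(p[y])      # log loss of the current predictor
        grad = np.outer(p, x)           # gradient of log loss w.r.t. W
        grad[y] -= x
        W -= lr * grad
    return W, total_loss

rng = np.random.default_rng(0)
W_true = rng.standard_normal((3, 5))
stream = []
for _ in range(1000):
    x = rng.standard_normal(5)
    stream.append((x, int(np.argmax(W_true @ x))))
W, loss = online_logistic(stream, d=5, k=3)
print(loss / 1000)                      # average per-round log loss
```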

We argue that proven exponential upper bounds on runtimes, an established area in the study of classical algorithms, are also of interest in heuristic search, and we prove several such results. We show that any of the algorithms randomized local search, the Metropolis algorithm, simulated annealing, and the (1+1) evolutionary algorithm can optimize any pseudo-Boolean weakly monotonic function under a large set of noise assumptions in a runtime that is at most exponential in the problem dimension~$n$. This drastically extends a previous such result, which was limited to the (1+1) EA, the LeadingOnes function, and one-bit or bit-wise prior noise with noise probability at most $1/2$, and at the same time simplifies its proof. With the same general argument, we also derive, among other results, a sub-exponential upper bound for the runtime of the $(1,\lambda)$ evolutionary algorithm on the OneMax problem when the offspring population size $\lambda$ is logarithmic but below the efficiency threshold. To show that our approach can also deal with non-trivial parent population sizes, we prove an exponential upper bound for the runtime of the mutation-based version of the simple genetic algorithm on the OneMax benchmark, matching a known exponential lower bound.
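The noisy optimization setting can be made concrete with a small experiment: the (1+1) EA on OneMax with one-bit prior noise, where with probability q a uniformly random bit is flipped before each fitness evaluation. The parameters below are illustrative, and the observed runtimes are empirical rather than the proven bounds.

```python
# A minimal sketch of the (1+1) EA on OneMax under one-bit prior noise.
import random

def noisy_onemax(x, q):
    y = list(x)
    if random.random() < q:             # prior noise: flip one random bit
        i = random.randrange(len(y))
        y[i] ^= 1
    return sum(y)

def one_plus_one_ea(n=30, q=0.1, max_iters=10**6):
    x = [random.randint(0, 1) for _ in range(n)]
    for t in range(max_iters):
        if sum(x) == n:                 # true fitness check, for measurement only
            return t
        y = [b ^ (random.random() < 1 / n) for b in x]   # standard bit mutation
        if noisy_onemax(y, q) >= noisy_onemax(x, q):     # noisy comparison
            x = y
    return max_iters

random.seed(0)
print(one_plus_one_ea())                # empirical hitting time of the optimum
```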

We design an adaptive unfitted finite element method on a Cartesian mesh with hanging nodes. We derive a reliable and efficient hp residual-type a posteriori error estimate on K-meshes. A key ingredient is a novel hp-domain inverse estimate, which allows us to prove the stability of the finite element method under practical interface-resolving mesh conditions and also to prove the lower bound of the hp a posteriori error estimate. Numerical examples are included.
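The solve-estimate-mark-refine loop that such an a posteriori estimate drives can be illustrated in a much simpler setting: P1 elements for the 1D Poisson problem, with a residual indicator (for P1 in 1D, the interior residual reduces to a data term). This is only a cartoon of the adaptive loop, not the paper's hp unfitted method.

```python
# A minimal sketch of residual-driven adaptive refinement for -u'' = f in 1D.
import numpy as np

f = lambda x: 100 * np.exp(-100 * (x - 0.5) ** 2)   # sharp source term

def solve_p1(nodes):
    """P1 FEM for -u'' = f, u(0) = u(1) = 0, on the given 1D mesh."""
    n = len(nodes)
    h = np.diff(nodes)
    A = np.zeros((n, n)); b = np.zeros(n)
    for k in range(n - 1):
        A[k:k+2, k:k+2] += np.array([[1, -1], [-1, 1]]) / h[k]
        mid = 0.5 * (nodes[k] + nodes[k + 1])
        b[k:k+2] += 0.5 * h[k] * f(mid)             # midpoint-rule load
    A[0, :] = A[-1, :] = 0; A[0, 0] = A[-1, -1] = 1; b[0] = b[-1] = 0
    return np.linalg.solve(A, b)

def estimate(nodes):
    """Indicator eta_K ~ h_K ||f||_{L2(K)}; for P1 in 1D, u_h'' = 0 on elements."""
    h = np.diff(nodes)
    mids = 0.5 * (nodes[:-1] + nodes[1:])
    return h * np.abs(f(mids)) * np.sqrt(h)

nodes = np.linspace(0, 1, 9)
for _ in range(10):
    u = solve_p1(nodes)                             # richer indicators would use u
    eta = estimate(nodes)
    marked = eta > 0.5 * eta.max()                  # maximum marking strategy
    mids = 0.5 * (nodes[:-1] + nodes[1:])[marked]
    nodes = np.sort(np.concatenate([nodes, mids]))  # bisect marked elements
print(len(nodes), "nodes after adaptation")
```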

Multi-output Gaussian processes (MOGPs) are an extension of Gaussian processes (GPs) for predicting multiple output variables (also called channels or tasks) simultaneously. In this paper we use the convolution theorem to design a new kernel for MOGPs, modeling cross-channel dependencies through cross convolution of time- and phase-delayed components in the spectral domain. The resulting kernel is called the Multi-Output Convolution Spectral Mixture (MOCSM) kernel. Results of extensive experiments on synthetic and real-life datasets demonstrate the advantages of the proposed kernel and its state-of-the-art performance. MOCSM enjoys the desirable property of reducing to the well-known Spectral Mixture (SM) kernel when a single channel is considered. A comparison with the recently introduced Multi-Output Spectral Mixture kernel reveals that this is not the case for the latter kernel, which contains quadratic terms that generate undesirable scale effects when the spectral densities of different channels are either very close to or very far from each other in the frequency domain.
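Since MOCSM reduces to the Spectral Mixture kernel in the single-channel case, the sketch below implements that SM kernel, $k(\tau) = \sum_q w_q \exp(-2\pi^2\tau^2 v_q)\cos(2\pi\mu_q\tau)$. The cross-channel MOCSM terms additionally carry per-component time and phase delays, which are omitted here; the component parameters are illustrative.

```python
# A minimal sketch of the single-channel Spectral Mixture (SM) kernel.
import numpy as np

def sm_kernel(t1, t2, weights, means, variances):
    """k(tau) = sum_q w_q * exp(-2 pi^2 tau^2 v_q) * cos(2 pi mu_q tau)."""
    tau = t1[:, None] - t2[None, :]
    k = np.zeros_like(tau, dtype=float)
    for w, mu, v in zip(weights, means, variances):
        k += w * np.exp(-2 * np.pi**2 * tau**2 * v) * np.cos(2 * np.pi * mu * tau)
    return k

t = np.linspace(0, 5, 100)
K = sm_kernel(t, t, weights=[1.0, 0.5], means=[0.9, 2.5], variances=[0.05, 0.1])
print(np.all(np.linalg.eigvalsh(K) > -1e-8))   # PSD up to round-off
```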

As soon as abstract mathematical computations were adapted to computation on digital computers, the problem of efficient representation, manipulation, and communication of the numerical values in those computations arose. Strongly related to the problem of numerical representation is the problem of quantization: in what manner should a set of continuous real-valued numbers be distributed over a fixed discrete set of numbers to minimize the number of bits required and also to maximize the accuracy of the attendant computations? This perennial problem of quantization is particularly relevant whenever memory and/or computational resources are severely restricted, and it has come to the forefront in recent years due to the remarkable performance of Neural Network models in computer vision, natural language processing, and related areas. Moving from floating-point representations to low-precision fixed integer values represented in four bits or less holds the potential to reduce the memory footprint and latency by a factor of 16x; and, in fact, reductions of 4x to 8x are often realized in practice in these applications. Thus, it is not surprising that quantization has emerged recently as an important and very active sub-area of research in the efficient implementation of computations associated with Neural Networks. In this article, we survey approaches to the problem of quantizing the numerical values in deep Neural Network computations, covering the advantages/disadvantages of current methods. With this survey and its organization, we hope to have presented a useful snapshot of the current research in quantization for Neural Networks and to have given an intelligent organization to ease the evaluation of future research in this area.
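The basic building block behind most of the surveyed methods is uniform affine quantization: map floating-point values to b-bit integers through a scale and zero point, then dequantize to measure the induced error. A minimal sketch follows; the 4-bit setting and min/max calibration are illustrative choices.

```python
# A minimal sketch of uniform affine (asymmetric) quantization.
import numpy as np

def quantize(x, bits=4):
    qmin, qmax = 0, 2**bits - 1
    scale = (x.max() - x.min()) / (qmax - qmin)     # min/max calibration
    zero_point = np.round(qmin - x.min() / scale)
    q = np.clip(np.round(x / scale + zero_point), qmin, qmax).astype(np.int32)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return scale * (q.astype(np.float32) - zero_point)

rng = np.random.default_rng(0)
w = rng.standard_normal(10000).astype(np.float32)   # stand-in weight tensor
q, s, z = quantize(w, bits=4)
err = np.abs(w - dequantize(q, s, z)).mean()
print(f"mean abs quantization error at 4 bits: {err:.4f}")
```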

We show that the output of a (residual) convolutional neural network (CNN) with an appropriate prior over the weights and biases is a Gaussian process (GP) in the limit of infinitely many convolutional filters, extending similar results for dense networks. For a CNN, the equivalent kernel can be computed exactly and, unlike "deep kernels", has very few parameters: only the hyperparameters of the original CNN. Further, we show that this kernel has two properties that allow it to be computed efficiently: the cost of evaluating the kernel for a pair of images is similar to that of a single forward pass through the original CNN with only one filter per layer. The kernel equivalent to a 32-layer ResNet obtains 0.84% classification error on MNIST, a new record for GPs with a comparable number of parameters.
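The flavor of the kernel computation can be seen in the dense-network analogue: the infinite-width ReLU kernel recursion, where each layer maps the previous kernel through the arc-cosine formula. The convolutional version applies the same map patch-wise; the depth and variance hyperparameters below are illustrative.

```python
# A minimal sketch of the infinite-width (NNGP) kernel recursion for a
# fully connected ReLU network, the dense analogue of the CNN result.
import numpy as np

def relu_kernel_map(K):
    """One layer of the recursion: E[relu(u) relu(v)] for Gaussian (u, v)."""
    d = np.sqrt(np.diag(K))
    cos_theta = np.clip(K / np.outer(d, d), -1.0, 1.0)
    theta = np.arccos(cos_theta)
    return (np.outer(d, d) / (2 * np.pi)) * (np.sin(theta) + (np.pi - theta) * cos_theta)

def nngp_kernel(X, depth=3, sigma_w=1.0, sigma_b=0.1):
    K = sigma_w**2 * (X @ X.T) / X.shape[1] + sigma_b**2   # input layer
    for _ in range(depth):
        K = sigma_w**2 * relu_kernel_map(K) + sigma_b**2
    return K

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 20))
print(nngp_kernel(X).round(3))
```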

This work considers the problem of provably optimal reinforcement learning for episodic finite-horizon MDPs, i.e., how an agent learns to maximize its long-term reward in an uncertain environment. The main contribution is a novel algorithm --- Variance-reduced Upper Confidence Q-learning (vUCQ) --- which enjoys a regret bound of $\widetilde{O}(\sqrt{HSAT} + H^5SA)$, where $T$ is the number of time steps the agent acts in the MDP, $S$ is the number of states, $A$ is the number of actions, and $H$ is the (episodic) horizon time. This is the first regret bound that is both sub-linear in the model size and asymptotically optimal. The algorithm is sub-linear in that the time to achieve $\epsilon$-average regret for any constant $\epsilon$ is $O(SA)$, which is far fewer samples than are required to learn any non-trivial estimate of the transition model (the transition model is specified by $O(S^2A)$ parameters). The importance of sub-linear algorithms is largely the motivation for algorithms such as $Q$-learning and other "model-free" approaches. vUCQ also enjoys minimax-optimal regret in the long run, matching the $\Omega(\sqrt{HSAT})$ lower bound. vUCQ is a successive refinement method: the algorithm reduces the variance in the $Q$-value estimates and couples this estimation scheme with an upper-confidence-based algorithm. Technically, it is the coupling of these two techniques that allows the algorithm to enjoy both the sub-linear regret property and asymptotically optimal regret.
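A minimal sketch of the simpler scheme that vUCQ refines is episodic Q-learning with a Hoeffding-style upper-confidence bonus and optimistic initialization. The bonus constant, the learning rate $\alpha_t = (H+1)/(H+t)$, and the toy random MDP below are illustrative; vUCQ additionally performs variance reduction on the $Q$-value estimates.

```python
# A minimal sketch of episodic Q-learning with a Hoeffding-style UCB bonus.
import numpy as np

class RandomMDP:
    """Toy tabular MDP with random transitions and rewards in [0, 1]."""
    def __init__(self, S=5, A=2, seed=0):
        rng = np.random.default_rng(seed)
        self.P = rng.dirichlet(np.ones(S), size=(S, A))   # transition kernel
        self.R = rng.uniform(size=(S, A))                 # mean rewards
        self.rng = rng
    def reset(self):
        self.s = 0
        return self.s
    def step(self, a):
        r = self.R[self.s, a]
        self.s = self.rng.choice(len(self.P), p=self.P[self.s, a])
        return self.s, r

def ucb_q_learning(env, S, A, H, episodes, c=1.0):
    Q = np.full((H, S, A), float(H))          # optimistic initialization
    N = np.zeros((H, S, A))
    for _ in range(episodes):
        s = env.reset()
        for h in range(H):
            a = int(np.argmax(Q[h, s]))
            s2, r = env.step(a)
            N[h, s, a] += 1
            t = N[h, s, a]
            alpha = (H + 1) / (H + t)                     # stage-dependent rate
            bonus = c * np.sqrt(H**3 * np.log(episodes * H) / t)
            target = r + (np.max(Q[h + 1, s2]) if h + 1 < H else 0.0)
            Q[h, s, a] += alpha * (min(target + bonus, H) - Q[h, s, a])
            s = s2
    return Q

env = RandomMDP()
Q = ucb_q_learning(env, S=5, A=2, H=10, episodes=2000)
print(Q[0, 0])
```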
