
For radial basis function (RBF) kernel interpolation of scattered data, Schaback in 1995 proved that the attainable approximation error and the condition number of the underlying interpolation matrix cannot be made small simultaneously. He referred to this finding as an "uncertainty relation", an undesirable consequence of which is that RBF kernel interpolation is susceptible to noisy data. In this paper, we propose and study a distributed interpolation method to manage and quantify the uncertainty brought on by interpolating noisy spherical data of non-negligible magnitude. We also present numerical simulation results showing that our method is practical and robust in terms of handling noisy data from challenging computing environments.
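
The tension described above can be seen in a few lines of NumPy. The sketch below uses plain Gaussian-RBF interpolation with an optional ridge term, not the authors' distributed method; the shape parameter `eps` and regularization `reg` are illustrative choices. Flattening the kernel (smaller `eps`) worsens the conditioning of the interpolation matrix, and a small regularization term is one simple way to cope with noise.

```python
import numpy as np

def gaussian_kernel_matrix(x, eps):
    # Pairwise Gaussian RBF kernel: K[i, j] = exp(-(eps * |x_i - x_j|)**2)
    d = x[:, None] - x[None, :]
    return np.exp(-((eps * d) ** 2))

def rbf_fit_predict(x, y, x_new, eps, reg=0.0):
    # Solve (K + reg*I) c = y, then evaluate sum_j c_j * phi(|x_new - x_j|).
    # The ridge term `reg` trades exact interpolation for stability on
    # noisy data -- one simple way to manage the ill-conditioning.
    K = gaussian_kernel_matrix(x, eps) + reg * np.eye(x.size)
    c = np.linalg.solve(K, y)
    d = x_new[:, None] - x[None, :]
    return np.exp(-((eps * d) ** 2)) @ c

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 15)
y = np.sin(2 * np.pi * x) + 0.02 * rng.standard_normal(x.size)

# The uncertainty relation in miniature: flatter kernels (smaller eps)
# promise higher accuracy but make the system far worse conditioned.
cond_sharp = np.linalg.cond(gaussian_kernel_matrix(x, eps=8.0))
cond_flat = np.linalg.cond(gaussian_kernel_matrix(x, eps=2.0))
y_hat = rbf_fit_predict(x, y, x, eps=8.0, reg=1e-8)
```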

Related Content

Quantum Relative Entropy (QRE) programming is a recently popular and challenging class of convex optimization problems with significant applications in quantum computing and quantum information theory. We are interested in modern interior point (IP) methods based on optimal self-concordant barriers for the QRE cone. A range of theoretical and numerical challenges associated with such barrier functions and the QRE cones have hindered the scalability of IP methods. To address these challenges, we propose a series of numerical and linear algebraic techniques and heuristics aimed at enhancing the efficiency of gradient and Hessian computations for the self-concordant barrier function, solving linear systems, and performing matrix-vector products. We also introduce and discuss some interesting concepts related to QRE, such as symmetric quantum relative entropy (SQRE), and introduce a two-phase method for performing facial reduction that can significantly improve the performance of QRE programming. Our new techniques have been implemented in the latest version (DDS 2.2) of the software package DDS. In addition to handling QRE constraints, DDS accepts any combination of several other conic and non-conic convex constraints. Our comprehensive numerical experiments encompass several parts, including 1) a comparison of DDS 2.2 with Hypatia for the nearest correlation matrix problem, 2) using DDS to combine QRE constraints with various other constraint types, and 3) calculating the key rate for quantum key distribution (QKD) channels and presenting results for several QKD protocols.
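
As a small illustration of the central quantity, the snippet below computes quantum relative entropy for dense density matrices via eigendecompositions. This is the textbook formula, not the barrier or IP machinery of DDS, and the helper names `logm_psd` and `random_density` are illustrative.

```python
import numpy as np

def logm_psd(A):
    # Matrix logarithm of a Hermitian positive-definite matrix.
    vals, vecs = np.linalg.eigh(A)
    return (vecs * np.log(vals)) @ vecs.conj().T

def qre(rho, sigma):
    # D(rho || sigma) = Tr[rho (log rho - log sigma)], assuming both
    # density matrices have full support.
    return float(np.trace(rho @ (logm_psd(rho) - logm_psd(sigma))).real)

def random_density(n, rng):
    # Random full-rank density matrix: Hermitian PSD with unit trace.
    A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    M = A @ A.conj().T + 1e-6 * np.eye(n)
    return M / np.trace(M).real

rng = np.random.default_rng(0)
rho, sigma = random_density(4, rng), random_density(4, rng)
```

By Klein's inequality, `qre(rho, sigma)` is nonnegative and vanishes only when the two states coincide.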

This study explores the robustness of label noise classifiers, aiming to enhance model resilience against noisy data in complex real-world scenarios. Label noise in supervised learning, characterized by erroneous or imprecise labels, significantly impairs model performance. This research focuses on the increasingly pertinent issue of label noise's impact on practical applications. Addressing the prevalent challenge of inaccurate training data labels, we integrate adversarial machine learning (AML) and importance reweighting techniques. Our approach involves employing convolutional neural networks (CNN) as the foundational model, with an emphasis on parameter adjustment for individual training samples. This strategy is designed to heighten the model's focus on samples critically influencing performance.
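
A minimal sketch of the importance-reweighting idea, using weighted logistic regression on synthetic data rather than the paper's CNN/AML setup; the oracle weights that down-weight flipped labels are purely illustrative, since a real method must estimate them.

```python
import numpy as np

def weighted_logistic_fit(X, y, w, lr=0.5, steps=500):
    # Gradient descent on the weighted logistic loss: sample i enters the
    # gradient with weight w[i] -- the core mechanism of reweighting.
    theta = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ theta))
        grad = X.T @ (w * (p - y)) / len(y)
        theta -= lr * grad
    return theta

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 2))
y_clean = (X[:, 0] + X[:, 1] > 0).astype(float)
y_noisy = y_clean.copy()
flip = rng.random(200) < 0.2              # 20% symmetric label noise
y_noisy[flip] = 1.0 - y_noisy[flip]

# Oracle weights purely for illustration: trust non-flipped samples more.
w = np.where(flip, 0.1, 1.0)
theta = weighted_logistic_fit(X, y_noisy, w)
acc = float((((X @ theta) > 0).astype(float) == y_clean).mean())
```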

Nonparametric estimates of frequency response functions (FRFs) are often suitable for describing the dynamics of a mechanical system. When treated as measurement inputs, these estimates can be used for parametric identification of, e.g., a gray-box model. Classical methods for nonparametric FRF estimation of MIMO systems require at least as many experiments as the system has inputs. Local parametric FRF estimation methods have been developed to avoid multiple experiments. In this paper, these local methods are adapted and applied to estimate the FRFs of a 6-axis robotic manipulator, which is a nonlinear MIMO system operating in closed loop. The aim is to reduce the experiment time and the amount of data needed for identification. The resulting FRFs are analyzed in an experimental study and compared to estimates obtained by classical MIMO techniques. It is furthermore shown that an accurate parametric model identification is possible based on local parametric FRF estimates and that the total experiment time can be significantly reduced.
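
For context, the classical nonparametric starting point is the empirical transfer-function estimate (ETFE); a toy single-input version on a known first-order system might look like the sketch below. This is the textbook estimate, not the local parametric method of the paper, and the system parameters are arbitrary illustrative choices.

```python
import numpy as np

def etfe(u, y):
    # Empirical transfer-function estimate: H(w_k) = Y(w_k) / U(w_k).
    return np.fft.rfft(y) / np.fft.rfft(u)

# Simulate a known first-order system y[t] = a*y[t-1] + b*u[t]
# excited by white noise, then compare the estimate to the true FRF.
rng = np.random.default_rng(0)
N, a, b = 4096, 0.8, 1.0
u = rng.standard_normal(N)
y = np.zeros(N)
y[0] = b * u[0]
for t in range(1, N):
    y[t] = a * y[t - 1] + b * u[t]

H_hat = etfe(u, y)
w = 2.0 * np.pi * np.arange(N // 2 + 1) / N
H_true = b / (1.0 - a * np.exp(-1j * w))
median_err = float(np.median(np.abs(H_hat - H_true)))
```

The ETFE is noisy bin by bin; local parametric methods smooth it by fitting a low-order model in a sliding frequency window.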

Randomizing the mapping of addresses to cache entries has proven to be an effective technique for hardening caches against contention-based attacks like Prime+Probe. While attacks and defenses are still evolving, it is clear that randomized caches significantly increase the security against such attacks. However, one aspect that is missing from most analyses of randomized cache architectures is the choice of the replacement policy. Often, only the random and LRU replacement policies are investigated. However, LRU is not applicable to randomized caches due to its immense hardware overhead, while the random replacement policy is not ideal from a performance and security perspective. In this paper, we explore replacement policies for randomized caches. We develop two new replacement policies and evaluate a total of five replacement policies regarding their security against Prime+Prune+Probe attackers. Moreover, we analyze the effect of the replacement policy on the system's performance and quantify the introduced hardware overhead. We implement randomized caches with configurable replacement policies in software and hardware using a custom cache simulator, gem5, and the CV32E40P RISC-V core. Among other results, we show that constructing eviction sets with our new policy, VARP-64, requires over 25 times more cache accesses than with the random replacement policy, while also enhancing overall performance.
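
A toy cache model illustrates why the replacement policy matters. The sketch below uses simple modulo indexing rather than a keyed randomized mapping, and compares plain random vs. LRU policies (not VARP-64, whose details are not reproduced here) on a cyclic access pattern that slightly exceeds cache capacity.

```python
import random
from collections import deque

def simulate(trace, num_sets=16, ways=4, policy="random", seed=0):
    # Tiny set-associative cache model returning the hit rate.
    # A real randomized cache would use a keyed index function instead
    # of the modulo mapping used here.
    rng = random.Random(seed)
    sets = [deque() for _ in range(num_sets)]
    hits = 0
    for addr in trace:
        s = sets[addr % num_sets]
        if addr in s:
            hits += 1
            if policy == "lru":
                s.remove(addr)
                s.append(addr)           # move to MRU position
        else:
            if len(s) >= ways:
                if policy == "random":
                    del s[rng.randrange(len(s))]
                else:
                    s.popleft()          # evict the LRU line
            s.append(addr)
    return hits / len(trace)

# Cyclic trace slightly exceeding capacity (96 lines vs. 64 entries):
# LRU thrashes on this pattern, while random replacement keeps some hits.
trace = [i % 96 for i in range(20_000)]
hit_lru = simulate(trace, policy="lru")
hit_rand = simulate(trace, policy="random")
```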

A sharp, distribution free, non-asymptotic result is proved for the concentration of a random function around the mean function, when the randomization is generated by a finite sequence of independent data and the random functions satisfy uniform bounded variation assumptions. The specific motivation for the work comes from the need for inference on the distributional impacts of social policy intervention. However, the family of randomized functions that we study is broad enough to cover wide-ranging applications. For example, we provide a Kolmogorov-Smirnov like test for randomized functions that are almost surely Lipschitz continuous, and novel tools for inference with heterogeneous treatment effects. A Dvoretzky-Kiefer-Wolfowitz like inequality is also provided for the sum of almost surely monotone random functions, extending the famous non-asymptotic work of Massart for empirical cumulative distribution functions generated by i.i.d. data, to settings without micro-clusters proposed by Canay, Santos, and Shaikh. We illustrate the relevance of our theoretical results for applied work via empirical applications. Notably, the proof of our main concentration result relies on a novel stochastic rendition of the fundamental result of Debreu, generally dubbed the "gap lemma," that transforms discontinuous utility representations of preorders into continuous utility representations, and on an envelope theorem of an infinite dimensional optimisation problem that we carefully construct.
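
The Dvoretzky-Kiefer-Wolfowitz/Massart bound referenced above is easy to check empirically for i.i.d. data; a quick Monte Carlo sanity check (uniform data, so the true CDF is F(x) = x) might look like:

```python
import numpy as np

def dkw_bound(n, alpha):
    # Massart's tight constant: P(sup|F_n - F| > eps) <= 2*exp(-2*n*eps^2),
    # giving eps(alpha) = sqrt(log(2/alpha) / (2n)).
    return np.sqrt(np.log(2.0 / alpha) / (2.0 * n))

def sup_deviation(sample):
    # sup_x |F_n(x) - F(x)| for Uniform(0,1) data, where F(x) = x.
    x = np.sort(sample)
    n = x.size
    up = np.max(np.arange(1, n + 1) / n - x)
    down = np.max(x - np.arange(0, n) / n)
    return max(up, down)

rng = np.random.default_rng(0)
n, alpha, trials = 500, 0.05, 400
eps = dkw_bound(n, alpha)
coverage = np.mean([sup_deviation(rng.random(n)) <= eps for _ in range(trials)])
```

The observed coverage should be at least 1 - alpha up to Monte Carlo noise, since the bound holds non-asymptotically for every n.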

Modeling of multivariate random fields through Gaussian processes calls for the construction of valid cross-covariance functions describing the dependence between any two component processes at different spatial locations. The required validity conditions often present challenges that lead to complicated restrictions on the parameter space. The purpose of this paper is to present simplified techniques for establishing multivariate validity for the recently-introduced Confluent Hypergeometric (CH) class of covariance functions. Specifically, we use multivariate mixtures to present both simplified and comprehensive conditions for validity, based on results on conditionally negative semidefinite matrices and the Schur product theorem. In addition, we establish the spectral density of the CH covariance and use this to construct valid multivariate models as well as propose new cross-covariances. We show that our proposed approach leads to valid multivariate cross-covariance models that inherit the desired marginal properties of the CH model and outperform the multivariate Mat\'ern model in out-of-sample prediction under slowly-decaying correlation of the underlying multivariate random field. We also establish properties of multivariate CH models, including equivalence of Gaussian measures, and demonstrate their use in modeling a multivariate oceanography data set consisting of temperature, salinity and oxygen, as measured by autonomous floats in the Southern Ocean.
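
The flavor of argument behind such validity results can be illustrated with a toy separable construction. Here a squared-exponential kernel stands in for the CH covariance (the CH class itself is not reproduced), and the matrices `B` and `K` are illustrative choices.

```python
import numpy as np

def gauss_gram(x, length):
    # Squared-exponential Gram matrix, a stand-in for the CH covariance.
    d = x[:, None] - x[None, :]
    return np.exp(-((d / length) ** 2))

# Separable bivariate construction C((s, i), (t, j)) = B[i, j] * K(s, t):
# a Kronecker product of PSD matrices is PSD, the same flavor of argument
# (via Schur products of PSD matrices) used for multivariate validity.
x = np.linspace(0.0, 1.0, 30)
K = gauss_gram(x, 0.2)
B = np.array([[1.0, 0.6],
              [0.6, 1.0]])        # colocated cross-correlation matrix
C = np.kron(B, K)
eig_min = float(np.linalg.eigvalsh(C).min())
```

Checking the smallest eigenvalue of the assembled joint covariance is a cheap numerical sanity test that a proposed cross-covariance model is valid on a given set of locations.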

Code provides a general syntactic structure to build complex programs and perform precise computations when paired with a code interpreter - we hypothesize that language models (LMs) can leverage code-writing to improve Chain of Thought reasoning not only for logic and arithmetic tasks, but also for semantic ones (and in particular, those that are a mix of both). For example, consider prompting an LM to write code that counts the number of times it detects sarcasm in an essay: the LM may struggle to write an implementation for "detect_sarcasm(string)" that can be executed by the interpreter (handling the edge cases would be insurmountable). However, LMs may still produce a valid solution if they not only write code, but also selectively "emulate" the interpreter by generating the expected output of "detect_sarcasm(string)" and other lines of code that cannot be executed. In this work, we propose Chain of Code (CoC), a simple yet surprisingly effective extension that improves LM code-driven reasoning. The key idea is to encourage LMs to format semantic sub-tasks in a program as flexible pseudocode such that the interpreter can explicitly catch undefined behaviors and hand them off to an LM for simulation (as an "LMulator"). Experiments demonstrate that Chain of Code outperforms Chain of Thought and other baselines across a variety of benchmarks; on BIG-Bench Hard, Chain of Code achieves 84%, a gain of 12% over Chain of Thought. CoC scales well with large and small models alike, and broadens the scope of reasoning questions that LMs can correctly answer by "thinking in code". Project webpage: //chain-of-code.github.io.
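
The "LMulator" idea can be sketched as an interpreter loop that executes what it can and defers the rest; the stub `fake_lm` below stands in for the language model and is purely illustrative.

```python
def run_chain_of_code(lines, lm_emulate):
    # Execute pseudocode line by line; when a line raises (e.g. it calls a
    # function only an LM could implement), hand it to `lm_emulate`, which
    # returns the bindings the LM "imagines" that line produced.
    state = {}
    for line in lines:
        try:
            exec(line, {}, state)
        except Exception:
            state.update(lm_emulate(line, state))
    return state

def fake_lm(line, state):
    # Illustrative stand-in for the LM: it "knows" detect_sarcasm's answer.
    if "detect_sarcasm" in line:
        var = line.split("=")[0].strip()
        return {var: 1}
    return {}

program = [
    "count = 0",
    "hit = detect_sarcasm('Oh great, another meeting.')",  # undefined -> LM
    "count = count + hit",
]
state = run_chain_of_code(program, fake_lm)
```

The interpreter handles the exact bookkeeping (`count`), while the semantic sub-task is resolved by the emulator, which is the division of labor CoC proposes.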

Graph neural networks (GNNs) have been demonstrated to be a powerful algorithmic model in broad application fields for their effectiveness in learning over graphs. To scale GNN training up for large-scale and ever-growing graphs, the most promising solution is distributed training, which distributes the workload of training across multiple computing nodes. However, the workflows, computational patterns, communication patterns, and optimization techniques of distributed GNN training are still only preliminarily understood. In this paper, we provide a comprehensive survey of distributed GNN training by investigating various optimization techniques used in distributed GNN training. First, distributed GNN training is classified into several categories according to their workflows. In addition, their computational patterns and communication patterns, as well as the optimization techniques proposed by recent work, are introduced. Second, the software frameworks and hardware platforms of distributed GNN training are also introduced for a deeper understanding. Third, distributed GNN training is compared with distributed training of deep neural networks, emphasizing the uniqueness of distributed GNN training. Finally, interesting issues and opportunities in this field are discussed.

As soon as abstract mathematical computations were adapted to computation on digital computers, the problem of efficient representation, manipulation, and communication of the numerical values in those computations arose. Strongly related to the problem of numerical representation is the problem of quantization: in what manner should a set of continuous real-valued numbers be distributed over a fixed discrete set of numbers to minimize the number of bits required and also to maximize the accuracy of the attendant computations? This perennial problem of quantization is particularly relevant whenever memory and/or computational resources are severely restricted, and it has come to the forefront in recent years due to the remarkable performance of Neural Network models in computer vision, natural language processing, and related areas. Moving from floating-point representations to low-precision fixed integer values represented in four bits or less holds the potential to reduce the memory footprint and latency by a factor of 16x; and, in fact, reductions of 4x to 8x are often realized in practice in these applications. Thus, it is not surprising that quantization has emerged recently as an important and very active sub-area of research in the efficient implementation of computations associated with Neural Networks. In this article, we survey approaches to the problem of quantizing the numerical values in deep Neural Network computations, covering the advantages/disadvantages of current methods. With this survey and its organization, we hope to have presented a useful snapshot of the current research in quantization for Neural Networks and to have given an intelligent organization to ease the evaluation of future research in this area.
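
A concrete instance of the problem described above is uniform affine quantization with min/max calibration; the sketch below (illustrative helper names, not any particular library's API) shows the accuracy/precision trade-off between 8-bit and 4-bit grids.

```python
import numpy as np

def quantize(x, num_bits=8):
    # Uniform affine quantization: map floats in [x.min(), x.max()] onto
    # 2**num_bits integer levels; returns (ints, scale, zero_point).
    qmin, qmax = 0, 2 ** num_bits - 1
    scale = (x.max() - x.min()) / (qmax - qmin)
    zero_point = round(-x.min() / scale)
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.int32)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    # Reconstruct approximate floats from the integer codes.
    return scale * (q.astype(np.float64) - zero_point)

rng = np.random.default_rng(0)
w = rng.standard_normal(1000)       # stand-in for a weight tensor
q8, s8, z8 = quantize(w, 8)
q4, s4, z4 = quantize(w, 4)
err8 = float(np.abs(dequantize(q8, s8, z8) - w).max())
err4 = float(np.abs(dequantize(q4, s4, z4) - w).max())
```

The round-trip error is on the order of the step size `scale`, which is why shrinking from 8 to 4 bits costs accuracy unless the distribution of values is also taken into account.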

Neural machine translation (NMT) is a deep learning based approach for machine translation, which yields the state-of-the-art translation performance in scenarios where large-scale parallel corpora are available. Although high-quality, domain-specific translation is crucial in the real world, domain-specific corpora are usually scarce or nonexistent, and thus vanilla NMT performs poorly in such scenarios. Domain adaptation, which leverages both out-of-domain parallel corpora and monolingual corpora for in-domain translation, is therefore very important for domain-specific translation. In this paper, we give a comprehensive survey of the state-of-the-art domain adaptation techniques for NMT.
