
Consider the supervised learning setting where the goal is to learn to predict labels $\mathbf y$ given points $\mathbf x$ from a distribution. An \textit{omnipredictor} for a class $\mathcal L$ of loss functions and a class $\mathcal C$ of hypotheses is a predictor whose predictions incur less expected loss than the best hypothesis in $\mathcal C$ for every loss in $\mathcal L$. Since the work of [GKR+21] that introduced the notion, there has been a large body of work in the setting of binary labels where $\mathbf y \in \{0, 1\}$, but much less is known about the regression setting where $\mathbf y \in [0,1]$ can be continuous. Our main conceptual contribution is the notion of \textit{sufficient statistics} for loss minimization over a family of loss functions: these are a set of statistics about a distribution such that knowing them allows one to take actions that minimize the expected loss for any loss in the family. The notion of sufficient statistics relates directly to the approximate rank of the family of loss functions. Our key technical contribution is a bound of $O(1/\varepsilon^{2/3})$ on the $\varepsilon$-approximate rank of convex, Lipschitz functions on the interval $[0,1]$, which we show is tight up to a factor of $\mathrm{polylog}(1/\varepsilon)$. This yields improved runtimes for learning omnipredictors for the class of all convex, Lipschitz loss functions under weak learnability assumptions about the class $\mathcal C$. We also give efficient omnipredictors when the loss families have low-degree polynomial approximations, or arise from generalized linear models (GLMs). This translation from sufficient statistics to faster omnipredictors is made possible by lifting the technique of loss outcome indistinguishability introduced by [GKH+23] for Boolean labels to the regression setting.
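For concreteness, one common way to state the omniprediction guarantee is the following (a hedged formalization; the predictor $p$, the loss-specific post-processing $k_\ell$, and the slack $\varepsilon$ are notation introduced here, and the paper's exact quantification may differ in detail):
$$
\forall\, \ell \in \mathcal L:\qquad \mathbb E\big[\ell\big(\mathbf y,\, k_\ell(p(\mathbf x))\big)\big] \;\le\; \min_{c \in \mathcal C}\; \mathbb E\big[\ell\big(\mathbf y,\, c(\mathbf x)\big)\big] \;+\; \varepsilon .
$$
Sufficient statistics then capture the information about the conditional label distribution that the post-processing $k_\ell$ needs in order to choose a loss-minimizing action for every $\ell$ in the family.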

Related content

Reinforcement learning (RL) on high-dimensional and complex problems relies on abstraction for improved efficiency and generalization. In this paper, we study abstraction in the continuous-control setting, and extend the definition of Markov decision process (MDP) homomorphisms to the setting of continuous state and action spaces. We derive a policy gradient theorem on the abstract MDP for both stochastic and deterministic policies. Our policy gradient results allow for leveraging approximate symmetries of the environment for policy optimization. Based on these theorems, we propose a family of actor-critic algorithms that are able to learn the policy and the MDP homomorphism map simultaneously, using the lax bisimulation metric. Finally, we introduce a series of environments with continuous symmetries to further demonstrate our algorithm's ability to perform action abstraction in the presence of such symmetries. We demonstrate the effectiveness of our method on these environments, as well as on challenging visual control tasks from the DeepMind Control Suite. Our method's ability to utilize MDP homomorphisms for representation learning leads to improved performance, and the visualizations of the latent space clearly demonstrate the structure of the learned abstraction.

Let $\mathcal{P}$ be a simple polygon with $m$ vertices and let $P$ be a set of $n$ points inside $\mathcal{P}$. We prove that there exists, for any $\varepsilon>0$, a set $\mathcal{C} \subset P$ of size $O(1/\varepsilon^2)$ such that the following holds: for any query point $q$ inside the polygon $\mathcal{P}$, the geodesic distance from $q$ to its furthest neighbor in $\mathcal{C}$ is at least $1-\varepsilon$ times the geodesic distance to its furthest neighbor in $P$. Thus the set $\mathcal{C}$ can be used for answering $\varepsilon$-approximate furthest-neighbor queries with a data structure whose storage requirement is independent of the size of $P$. The coreset can be constructed in $O\left(\frac{1}{\varepsilon} \left( n\log(1/\varepsilon) + (n+m)\log(n+m)\right) \right)$ time.
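As an illustration of how such a coreset is used at query time (a minimal sketch; `geodesic_distance` is a placeholder for a geodesic-distance routine inside the polygon and is not part of the paper, and the coreset construction itself is not shown):

```python
# Sketch: answering an approximate furthest-neighbor query with a coreset.
# Assumes a user-supplied geodesic_distance(q, p, polygon) routine.

def approx_furthest_neighbor(q, coreset, polygon, geodesic_distance):
    """Return the coreset point furthest from q under geodesic distance.

    If `coreset` satisfies the (1 - eps)-guarantee described above, the
    distance to the returned point is at least (1 - eps) times the true
    furthest geodesic distance from q to the full point set P.
    """
    return max(coreset, key=lambda p: geodesic_distance(q, p, polygon))
```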

An asymptotic theory is established for linear functionals of the predictive function given by kernel ridge regression, when the reproducing kernel Hilbert space is equivalent to a Sobolev space. The theory covers a wide variety of linear functionals, including point evaluations, evaluation of derivatives, $L_2$ inner products, etc. We establish the upper and lower bounds of the estimates and their asymptotic normality. It is shown that $\lambda\sim n^{-1}$ is the universal optimal order of magnitude for the smoothing parameter to balance the variance and the worst-case bias. The theory also implies that the optimal $L_\infty$ error of kernel ridge regression can be attained under the optimal smoothing parameter $\lambda\sim n^{-1}\log n$. These optimal rates for the smoothing parameter differ from the known optimal rate $\lambda\sim n^{-\frac{2m}{2m+d}}$ that minimizes the $L_2$ error of the kernel ridge regression.
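For context, the estimator whose linear functionals are analyzed is the standard kernel ridge regression solution (a standard textbook formulation, restated here rather than quoted from the paper):
$$
\hat f_\lambda \;=\; \operatorname*{arg\,min}_{f \in \mathcal H}\; \frac{1}{n}\sum_{i=1}^{n}\big(y_i - f(x_i)\big)^2 \;+\; \lambda\,\lVert f\rVert_{\mathcal H}^2 ,
$$
where $\mathcal H$ is the reproducing kernel Hilbert space (here assumed equivalent to a Sobolev space of order $m$) and $\lambda$ is the smoothing parameter whose optimal order the results above characterize.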

Acoustic recognition has become a common task for deep learning in recent research, typically relying on spectral feature extraction such as the short-time Fourier transform and the wavelet transform. However, few studies discuss the advantages and drawbacks of, or compare the performance of, these spectral feature extractors. With this in mind, this paper compares the attributes of the two resulting representations, the spectrogram and the scalogram. A convolutional neural network for acoustic fault recognition is implemented, and the performance obtained with each of the two spectral extractors is recorded for comparison. A recent study on the same audio database is used as a benchmark to assess how well the designed spectrogram and scalogram features perform. Their advantages and limitations are also analyzed. In doing so, the results of this paper provide guidance on application scenarios for spectrograms and scalograms, as well as potential directions for further research in acoustic recognition.
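A minimal sketch of how the two representations can be computed, assuming SciPy and PyWavelets as the signal-processing backends (the paper's actual extraction parameters and toolchain are not specified in the abstract, so the window length and wavelet choice below are illustrative):

```python
# Illustrative sketch (not the paper's exact pipeline): computing the two
# spectral representations compared above.
import numpy as np
from scipy.signal import stft
import pywt

def spectrogram(signal, fs, nperseg=256):
    """Magnitude spectrogram via the short-time Fourier transform."""
    freqs, times, Zxx = stft(signal, fs=fs, nperseg=nperseg)
    return freqs, times, np.abs(Zxx)

def scalogram(signal, fs, scales=np.arange(1, 128)):
    """Magnitude scalogram via the continuous wavelet transform (Morlet)."""
    coeffs, freqs = pywt.cwt(signal, scales, 'morl', sampling_period=1.0 / fs)
    return freqs, np.abs(coeffs)
```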

Consider a binary statistical hypothesis testing problem, where $n$ independent and identically distributed random variables $Z^n$ are either distributed according to the null hypothesis $P$ or the alternative hypothesis $Q$, and only $P$ is known. A well-known test that is suitable for this case is the so-called Hoeffding test, which accepts $P$ if the Kullback-Leibler (KL) divergence between the empirical distribution of $Z^n$ and $P$ is below some threshold. This work characterizes the first and second-order terms of the type-II error probability for a fixed type-I error probability for the Hoeffding test as well as for divergence tests, where the KL divergence is replaced by a general divergence. It is demonstrated that, irrespective of the divergence, divergence tests achieve the first-order term of the Neyman-Pearson test, which is the optimal test when both $P$ and $Q$ are known. In contrast, the second-order term of divergence tests is strictly worse than that of the Neyman-Pearson test. It is further demonstrated that divergence tests with an invariant divergence achieve the same second-order term as the Hoeffding test, but divergence tests with a non-invariant divergence may outperform the Hoeffding test for some alternative hypotheses $Q$. Potentially, this behavior could be exploited by a composite hypothesis test with partial knowledge of the alternative hypothesis $Q$ by tailoring the divergence of the divergence test to the set of possible alternative hypotheses.
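A minimal sketch of the Hoeffding test as described above, assuming a finite alphabet $\{0,\dots,K-1\}$ and leaving the threshold (which controls the type-I error) to the caller; this is an illustration, not the paper's code:

```python
# Hoeffding test: accept the null hypothesis P iff the KL divergence from the
# empirical distribution of the samples to P is below a threshold.
import numpy as np
from scipy.stats import entropy  # entropy(pk, qk) computes D(pk || qk)

def hoeffding_test(samples, p_null, threshold):
    """Return True (accept P) iff D(empirical || P) < threshold."""
    K = len(p_null)
    counts = np.bincount(samples, minlength=K)
    empirical = counts / counts.sum()
    return entropy(empirical, p_null) < threshold
```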

This study presents a systematic enumeration of variants of spherical ($SO(3)$) parallel robots using an analytical velocity-level approach. These robots are known for their ability to perform arbitrary rotations around a fixed point, making them suitable for numerous applications. Despite their architectural diversity, existing research has predominantly approached them on a case-by-case basis, which hinders the exploration of all possible variants and thereby limits the benefits derived from architectural diversity. By employing a generalized analytical approach based on the reciprocal screw method, we systematically explore all the kinematic conditions for limbs yielding $SO(3)$ motion. Consequently, all 73 possible types of non-redundant limbs suitable for generating the target $SO(3)$ motion are identified. The approach involves an in-depth algebraic motion-constraint analysis and the identification of common characteristics among different variants. This leads us to systematically enumerate all 73 symmetric and 5256 asymmetric variants, a total of 5329, each potentially having different workspace capability, stiffness performance, and dynamics. Having all these variants can facilitate the innovation of novel spherical robots and makes it easier to find the optimal ones for specific applications.

The $(k, z)$-Clustering problem in Euclidean space $\mathbb{R}^d$ has been extensively studied. Given the scale of data involved, compression methods for the Euclidean $(k, z)$-Clustering problem, such as data compression and dimension reduction, have received significant attention in the literature. However, the space complexity of the clustering problem, specifically, the number of bits required to compress the cost function within a multiplicative error $\varepsilon$, remains unclear in existing literature. This paper initiates the study of space complexity for Euclidean $(k, z)$-Clustering and offers both upper and lower bounds. Our space bounds are nearly tight when $k$ is constant, indicating that storing a coreset, a well-known data compression approach, serves as the optimal compression scheme. Furthermore, our lower bound result for $(k, z)$-Clustering establishes a tight space bound of $\Theta( n d )$ for terminal embedding, where $n$ represents the dataset size. Our technical approach leverages new geometric insights for principal angles and discrepancy methods, which may hold independent interest.
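For concreteness, the cost function whose compressibility is studied is the standard $(k, z)$-Clustering objective (a standard definition, restated here):
$$
\mathrm{cost}_z(X, C) \;=\; \sum_{x \in X}\; \min_{c \in C}\; \lVert x - c\rVert_2^{\,z},
$$
where $X \subset \mathbb{R}^d$ is the dataset, $C$ is a set of $k$ candidate centers, and $z \ge 1$ ($z=1$ gives $k$-median, $z=2$ gives $k$-means); the compression question asks how many bits suffice to recover this cost within a multiplicative $(1 \pm \varepsilon)$ factor for every choice of $C$.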

This paper presents an algorithmic method that, given a positive integer $j$, generates the $j$-th convergence stair containing all natural numbers from which the Collatz conjecture holds by exactly $j$ applications of the Collatz function. To this end, we present a novel formulation of the Collatz conjecture as a concurrent program, and provide the general case specification of the $j$-th convergence stair for any $j > 0$. The proposed specifications provide a layered and linearized orientation of Collatz numbers organized in an infinite set of infinite binary trees. To the best of our knowledge, this is the first time that such a general specification is provided, which can have significant applications in analyzing and testing the behaviors of complex non-linear systems. We have implemented this method as a software tool that generates the Collatz numbers of individual stairs. We also show that, starting from any value in any convergence stair, the conjecture holds. However, to prove the conjecture, one has to show that every natural number appears in some stair; i.e., the union of all stairs is equal to the set of natural numbers, which remains an open problem.
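A minimal sketch of the underlying arithmetic, under the reading of the abstract that the stair index of a starting value is the number of Collatz applications needed to reach 1 (the tool described in the paper generates entire stairs, which is not reproduced here):

```python
# Collatz function and the stair index of a starting value, as interpreted
# from the abstract above.

def collatz(n: int) -> int:
    """One application of the Collatz function."""
    return n // 2 if n % 2 == 0 else 3 * n + 1

def stair_index(n: int) -> int:
    """Number of Collatz applications taking n down to 1 (assumes termination)."""
    j = 0
    while n != 1:
        n = collatz(n)
        j += 1
    return j
```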

The fusion of causal models with deep learning, which introduces increasingly intricate data such as the causal associations within images or between textual components, has surfaced as a focal research area. Nonetheless, broadening the original causal concepts and theories to such complex, non-statistical data has met with serious challenges. In response, our study proposes a redefinition of causal data into three distinct categories from the standpoint of causal structure and representation: definite data, semi-definite data, and indefinite data. Definite data chiefly pertains to the statistical data used in conventional causal scenarios, while semi-definite data refers to a spectrum of data formats germane to deep learning, including time series, images, text, and others. Indefinite data is an emergent research sphere that we infer from the progression of data forms. To comprehensively present these three data paradigms, we elaborate on their formal definitions, the differences manifested in datasets, resolution pathways, and the development of research. We summarize key tasks and achievements pertaining to definite and semi-definite data from myriad research undertakings, and present a roadmap for indefinite data, beginning with its current research conundrums. Lastly, we classify and scrutinize the key datasets presently utilized within these three paradigms.

In contrast to batch learning, where all training data is available at once, continual learning represents a family of methods that accumulate knowledge and learn continuously from data arriving in sequential order. Similar to the human learning process, which learns, fuses, and accumulates new knowledge arriving at different time steps, continual learning is considered to have high practical significance. Hence, continual learning has been studied in various artificial intelligence tasks. In this paper, we present a comprehensive review of the recent progress of continual learning in computer vision. In particular, the works are grouped by their representative techniques, including regularization, knowledge distillation, memory, generative replay, parameter isolation, and combinations of the above techniques. For each category of techniques, both its characteristics and its applications in computer vision are presented. At the end of this overview, we discuss several subareas where continuous knowledge accumulation is potentially helpful but continual learning has not yet been well studied.
