
Nonlinear mapping is an essential and common demand in online systems, such as sensor systems and mobile phones. Accelerating nonlinear mapping will directly speed up online systems. Previously, the authors of this paper proposed a Dendrite Net (DD) with enormously lower time complexity than existing nonlinear mapping algorithms; however, there are still redundant calculations in DD. This paper presents a DD with an acceleration module (AC) to accelerate nonlinear mapping further. We conduct three experiments to verify whether DD with AC has lower time complexity while retaining DD's nonlinear mapping and system identification properties. The first experiment evaluates the precision and identification of unary nonlinear mappings, reflecting the computational performance of DD with AC on basic functions in online systems. The second experiment evaluates the mapping precision and identification of a multi-input nonlinear system, reflecting the performance of DD with AC for designing online systems. Finally, this paper compares the time complexity of DD and DD with AC and analyzes the theoretical reasons through repeated experiments. Results: DD with AC retains DD's excellent mapping and identification properties and has lower time complexity. Significance: DD with AC can be used for most engineering systems, such as sensor systems, and will speed up computation in these online systems. The code of DD with AC is available at //github.com/liugang1234567/Gang-neuron
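As a concrete illustration of the kind of nonlinear mapping DD performs, the following is a minimal sketch of a DD-style forward pass, assuming the recursion from the authors' earlier Dendrite Net work in which each module multiplies a linear transform element-wise with the input; the abstract does not describe the acceleration module, so the AC is not shown and all dimensions and the final linear readout are illustrative.

```python
import numpy as np

def dendrite_net_forward(x, weights):
    """Minimal sketch of a Dendrite Net (DD) forward pass.

    Assumes the recursion A_l = (W_l @ A_{l-1}) * x with a final linear
    readout; the acceleration module (AC) from the abstract is not shown.
    x       : input vector, shape (n_in,)
    weights : list of weight matrices [W_1, ..., W_L]
    """
    a = x
    for w in weights[:-1]:
        a = (w @ a) * x          # element-wise product with the input keeps the map polynomial
    return weights[-1] @ a       # linear readout

# Toy usage: 3 inputs, two DD modules, scalar output
rng = np.random.default_rng(0)
ws = [rng.standard_normal((3, 3)), rng.standard_normal((3, 3)), rng.standard_normal((1, 3))]
print(dendrite_net_forward(rng.standard_normal(3), ws))
```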

Related content

In this note, we prove that the following function space with absolutely convergent Fourier series \[ F_d:=\left\{ f\in L^2([0,1)^d)\:\middle| \: \|f\|:=\sum_{\boldsymbol{k}\in \mathbb{Z}^d}|\hat{f}(\boldsymbol{k})| \max\left(1,\min_{j\in \mathrm{supp}(\boldsymbol{k})}\log |k_j|\right) <\infty \right\}\] where $\hat{f}(\boldsymbol{k})$ is the $\boldsymbol{k}$-th Fourier coefficient of $f$ and $\mathrm{supp}(\boldsymbol{k}):=\{j\in \{1,\ldots,d\}\mid k_j\neq 0\}$, is polynomially tractable for multivariate integration in the worst-case setting. Here polynomial tractability means that the minimum number of function evaluations required to make the worst-case error less than or equal to a tolerance $\varepsilon$ grows only polynomially with respect to $\varepsilon^{-1}$ and $d$. It is important to remark that the function space $F_d$ is unweighted, that is, all variables contribute equally to the norm of functions. Our tractability result is in contrast to those for most of the unweighted integration problems studied in the literature, in which polynomial tractability does not hold and the problem suffers from the curse of dimensionality. Our proof is constructive in the sense that we provide an explicit quasi-Monte Carlo rule that attains a desired worst-case error bound.
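For readers unfamiliar with quasi-Monte Carlo rules, the sketch below shows a generic rank-1 lattice rule estimator in Python; the generating vector z, the point count, and the integrand are placeholders for illustration only and are not the explicit rule constructed in the note.

```python
import numpy as np

def lattice_rule_estimate(f, z, n):
    """Generic rank-1 lattice QMC estimate of the integral of f over [0,1)^d.

    f : callable accepting an array of shape (n, d)
    z : integer generating vector of length d (placeholder, not the vector
        derived in the note)
    n : number of points
    """
    i = np.arange(n).reshape(-1, 1)
    points = (i * np.asarray(z)) % n / n      # x_i = frac(i * z / n)
    return f(points).mean()

# Toy usage on a smooth periodic integrand whose exact integral is 1
f = lambda x: np.prod(1.0 + 0.5 * np.sin(2 * np.pi * x), axis=1)
print(lattice_rule_estimate(f, z=[1, 182667], n=2**16))
```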

Knowledge transfer has been shown to be a very successful technique for training neural classifiers: together with the ground-truth data, it uses the "privileged information" (PI) obtained by a "teacher" network to train a "student" network. It has been observed that classifiers learn much faster and more reliably via knowledge transfer. However, there has been little or no theoretical analysis of this phenomenon. To bridge this gap, we propose to approach the problem of knowledge transfer by regularizing the fit between the teacher and the student with the PI provided by the teacher. Using tools from dynamical systems theory, we show that when the student is an extremely wide two-layer network, we can analyze it in the kernel regime and show that it is able to interpolate between the PI and the given data. This characterization sheds new light on the relation between the training error and the capacity of the student relative to the teacher. Another contribution of the paper is a quantitative statement on the convergence of the student network. We prove that the teacher reduces the number of iterations required for the student to learn, and consequently improves the generalization power of the student. We give a corresponding experimental analysis that validates the theoretical results and yields additional insights.
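As a point of reference for the setup being analyzed, here is a minimal sketch of a standard soft-target distillation objective that regularizes the student's fit with the teacher's outputs; the mixing weight lam and the temperature are illustrative hyperparameters, and the paper's theory concerns wide two-layer students in the kernel regime rather than this particular loss.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, lam=0.5, temperature=2.0):
    """Generic knowledge-transfer objective: fit the ground truth and, as a
    regularizer, the teacher's soft outputs (the 'privileged information').
    lam and temperature are illustrative hyperparameters."""
    ce = F.cross_entropy(student_logits, labels)
    kd = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=1),
        F.softmax(teacher_logits / temperature, dim=1),
        reduction="batchmean",
    ) * temperature ** 2
    return (1 - lam) * ce + lam * kd

# Toy usage
s = torch.randn(8, 10, requires_grad=True)
t = torch.randn(8, 10)
y = torch.randint(0, 10, (8,))
distillation_loss(s, t, y).backward()
```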

Online meta-learning has recently emerged as a marriage between batch meta-learning and online learning, for achieving the capability of quick adaptation on new tasks in a lifelong manner. However, most existing approaches focus on the restrictive setting where the distribution of the online tasks remains fixed with known task boundaries. In this work, we relax these assumptions and propose a novel algorithm for task-agnostic online meta-learning in non-stationary environments. More specifically, we first propose two simple but effective detection mechanisms for task switches and distribution shifts based on empirical observations, which serve as a key building block for more elegant online model updates in our algorithm: the task-switch detection mechanism allows reuse of the best model available for the current task at hand, and the distribution-shift detection mechanism differentiates the meta-model update in order to preserve the knowledge for in-distribution tasks and quickly learn the new knowledge for out-of-distribution tasks. In particular, our online meta-model updates are based only on the current data, which eliminates the need to store previous data, as required in most existing methods. We further show that a sublinear task-averaged regret can be achieved by our algorithm under mild conditions. Empirical studies on three different benchmarks clearly demonstrate the significant advantage of our algorithm over related baseline approaches.
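The abstract does not specify the detection statistics, so the following is only a hypothetical, loss-based change detector of the kind that could flag task switches or distribution shifts in an online stream; the window size and z-score threshold are invented placeholders, not the paper's mechanisms.

```python
import numpy as np

class LossChangeDetector:
    """Hypothetical loss-based detector for task switches / distribution
    shifts in an online stream; the z-score test and window size are
    illustrative assumptions, not the paper's exact mechanisms."""
    def __init__(self, window=20, z_threshold=3.0):
        self.window, self.z = window, z_threshold
        self.history = []

    def update(self, loss):
        """Return True if the new loss is an outlier w.r.t. recent history."""
        detected = False
        if len(self.history) >= self.window:
            recent = np.array(self.history[-self.window:])
            detected = abs(loss - recent.mean()) > self.z * (recent.std() + 1e-8)
        self.history.append(loss)
        return detected

# Toy usage: a sudden jump in the loss stream is flagged
det = LossChangeDetector()
stream = [0.5 + 0.01 * np.random.randn() for _ in range(30)] + [2.0]
print([det.update(l) for l in stream][-1])   # True at the jump
```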

This paper presents a mobile supernumerary robotic approach to physical assistance in human-robot conjoined actions. The study starts with a description of the SUPER-MAN concept. The idea is to develop and utilize mobile collaborative systems that can follow human loco-manipulation commands to perform industrial tasks through three main components: i) an admittance-type interface, ii) a human-robot interaction controller, and iii) a supernumerary robotic body. Next, we present two possible implementations within this framework from theoretical and hardware perspectives. The first system, called MOCA-MAN, comprises a redundant torque-controlled robotic arm and an omnidirectional mobile platform. The second, called Kairos-MAN, is formed by a high-payload 6-DoF velocity-controlled robotic arm and an omnidirectional mobile platform. The systems share the same admittance interface, through which user wrenches are translated into loco-manipulation commands generated by the whole-body controller of each system. In addition, a thorough user study with multiple subjects of different genders is presented to reveal the quantitative performance of the two systems in effort-demanding and dexterous tasks. Moreover, we provide qualitative results from the NASA-TLX questionnaire to demonstrate the SUPER-MAN approach's potential and its acceptability from the users' viewpoint.
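To make the admittance-type interface concrete, here is a minimal discrete admittance law mapping a measured user wrench to a desired velocity command; the virtual mass and damping gains are placeholders, and the real MOCA-MAN and Kairos-MAN interfaces handle full 6-D wrenches routed through whole-body controllers rather than this 3-D toy.

```python
import numpy as np

def admittance_update(v, wrench, dt, M, D):
    """One step of a simple admittance law  M * dv/dt + D * v = wrench,
    mapping a measured user wrench to a desired velocity command.
    A minimal sketch of the interface idea; M and D are placeholder gains."""
    dv = np.linalg.solve(M, wrench - D @ v)
    return v + dt * dv

# Toy usage: a constant 10 N push along x converges to v_x = 10 / D_x = 0.5 m/s
M = np.diag([10.0, 10.0, 10.0])      # virtual mass
D = np.diag([20.0, 20.0, 20.0])      # virtual damping
v = np.zeros(3)
for _ in range(2000):
    v = admittance_update(v, np.array([10.0, 0.0, 0.0]), dt=0.01, M=M, D=D)
print(v)   # approximately [0.5, 0.0, 0.0]
```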

We study the problem of optimal sampling in an edge-based video analytics system (VAS), where sensor samples collected at a terminal device are offloaded to a back-end server that processes them and generates feedback for a user. Sampling the system at the maximum allowed frequency results in the timely detection of relevant events with minimum delay; however, it incurs high energy costs and causes unnecessary use of network and compute resources through the communication and processing of redundant samples. On the other hand, infrequent sampling results in a higher delay in detecting the relevant event, thus increasing the idle energy usage and degrading the quality of experience in terms of the responsiveness of the system. We quantify this sampling-frequency trade-off as a weighted function of the number of samples and the responsiveness. We propose an energy-optimal aperiodic sampling policy that improves over the state-of-the-art optimal periodic sampling policy. Numerically, we show that the proposed policy provides a consistent improvement of more than 10% over the state-of-the-art.
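As a toy illustration of the samples-versus-responsiveness trade-off, the sketch below evaluates a weighted cost for a periodic sampling policy under the simplifying assumption that the event of interest arrives uniformly over the horizon; the weights, the cost form, and the uniform-arrival assumption are illustrative and are not the paper's system model or its proposed aperiodic policy.

```python
import numpy as np

def periodic_policy_cost(period, horizon, w):
    """Toy weighted cost for a periodic sampling policy: an event arrives
    uniformly in [0, horizon]; we pay w per sample taken before detection
    and (1 - w) per unit of detection delay. Illustration only."""
    expected_samples = horizon / (2 * period)   # samples taken before the event, on average
    expected_delay = period / 2                 # the event waits half a period on average
    return w * expected_samples + (1 - w) * expected_delay

# Sweep the period to see the trade-off under this toy model
periods = np.linspace(0.1, 10, 100)
costs = [periodic_policy_cost(T, horizon=100, w=0.1) for T in periods]
print(f"best period ~ {periods[int(np.argmin(costs))]:.2f}")
```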

Sensitivity analysis measures the influence of a Bayesian network's parameters on a quantity of interest defined by the network, such as the probability of a variable taking a specific value. Various sensitivity measures have been defined to quantify such influence, most commonly some function of the quantity of interest's partial derivative with respect to the network's conditional probabilities. However, computing these measures in large networks with thousands of parameters can become computationally very expensive. We propose an algorithm combining automatic differentiation and exact inference to efficiently calculate the sensitivity measures in a single pass. It first marginalizes the whole network once, using, e.g., variable elimination, and then backpropagates this operation to obtain the gradient with respect to all input parameters. Our method can be used for one-way and multi-way sensitivity analysis and the derivation of admissible regions. Simulation studies highlight the efficiency of our algorithm by scaling it to massive networks with up to 100,000 parameters and investigate the feasibility of generic multi-way analyses. Our routines are also showcased on two medium-sized Bayesian networks: the first modeling the country risks of a humanitarian crisis, the second studying the relationship between the use of technology and the psychological effects of forced social isolation during the COVID-19 pandemic. An implementation of the methods using the popular machine learning library PyTorch is freely available.
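The "marginalize once, then backpropagate" idea can be pictured on a two-node network in PyTorch; this is a minimal sketch rather than the authors' implementation, and a real network would use a proper inference routine such as variable elimination over many variables.

```python
import torch

# Tiny two-node network A -> B; sensitivity of P(B = 1) to every CPT entry.
p_a = torch.tensor([0.3, 0.7], requires_grad=True)            # P(A)
p_b_given_a = torch.tensor([[0.9, 0.1],                       # P(B | A=0)
                            [0.2, 0.8]], requires_grad=True)  # P(B | A=1)

p_b1 = (p_a.unsqueeze(1) * p_b_given_a).sum(dim=0)[1]  # marginalize A, take P(B=1)
p_b1.backward()                                        # one backward pass ...

print(p_a.grad)           # ... yields dP(B=1)/dP(A=a) for all a
print(p_b_given_a.grad)   # ... and dP(B=1)/dP(B=b | A=a) for all CPT entries
```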

Anderson acceleration (AA) is a well-known method for accelerating the convergence of iterative algorithms, with applications in various fields including deep learning and optimization. Despite its popularity in these areas, the effectiveness of AA in classical machine learning classifiers has not been thoroughly studied. Tabular data, in particular, presents a unique challenge for deep learning models, and classical machine learning models are known to perform better in these scenarios. However, the convergence analysis of these models has received limited attention. To address this gap in research, we implement a support vector machine (SVM) classifier variant that incorporates AA to speed up convergence. We evaluate the performance of our SVM with and without Anderson acceleration on several datasets from the biology domain and demonstrate that the use of AA significantly improves convergence and reduces the training loss as the number of iterations increases. Our findings provide a promising perspective on the potential of Anderson acceleration in the training of simple machine learning classifiers and underscore the importance of further research in this area. By showing the effectiveness of AA in this setting, we aim to inspire more studies that explore the applications of AA in classical machine learning.
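For context, a standard Anderson acceleration step for a fixed-point map x = g(x) can be written in a few lines of NumPy; the sketch below is generic and is not the paper's SVM training loop, which would wrap a step like this around the classifier's update.

```python
import numpy as np

def anderson_step(x_hist, g_hist):
    """One Anderson acceleration (Type-II) step for a fixed-point map x = g(x).

    x_hist, g_hist : lists of the last few iterates x_i and their images g(x_i).
    Returns the accelerated next iterate."""
    X = np.array(x_hist)
    G = np.array(g_hist)
    F = G - X                            # residuals f_i = g(x_i) - x_i
    if len(x_hist) == 1:
        return G[-1]                     # plain fixed-point step
    D = (F[:-1] - F[-1]).T               # columns: f_i - f_m
    gamma, *_ = np.linalg.lstsq(D, -F[-1], rcond=None)
    return G[-1] + (G[:-1] - G[-1]).T @ gamma

# Toy usage: accelerate the contraction g(x) = cos(x) toward its fixed point
g = np.cos
x = np.array([1.0])
xs, gs = [x], [g(x)]
for _ in range(10):
    x = anderson_step(xs[-5:], gs[-5:])
    xs.append(x)
    gs.append(g(x))
print(x)   # approximately 0.739085, the fixed point of cos
```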

Millions of smart contracts have been deployed on the Ethereum platform, making them potential attack targets. Analyzing contract binaries is therefore vital, since their source code is unavailable; a key step is function identification, which comprises locating function entries and detecting their boundaries. Such boundaries are critical to many smart contract applications, e.g., reverse engineering and profiling. Unfortunately, it is challenging to identify functions in these stripped contract binaries due to the lack of internal function call statements and compiler-induced instruction reshuffling. Several recent works rely heavily on a set of handcrafted heuristic rules, which introduces several faults. To address this issue, we propose a novel neural network-based framework for EVM bytecode Function Entries and Boundaries Identification (neural-FEBI) that does not rely on a fixed set of handcrafted rules. Instead, it uses a two-level bidirectional Long Short-Term Memory network and a Conditional Random Field network to locate the function entries. The framework also devises a control-flow traversal algorithm to determine the code segments reachable from a function entry as its boundary. Experiments on 38,996 publicly available smart contracts collected as binaries demonstrate that neural-FEBI achieves F1-scores between 88.3 and 99.7 on the function entry identification task across different datasets. Its performance on the function boundary identification task also improves over the state-of-the-art, from 79.4% to 97.1%. We further demonstrate that the identified function information can be used to construct more accurate intra-procedural CFGs and call graphs. The experimental results confirm that the proposed framework significantly outperforms state-of-the-art approaches, which are often based on handcrafted heuristic rules.
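The boundary-determination step can be pictured as a reachability traversal over a control-flow graph; the sketch below is a simplified version that assumes the CFG is already built, whereas the actual algorithm works on EVM basic blocks and must resolve jump targets, which is not modeled here.

```python
from collections import deque

def function_boundary(cfg, entry):
    """Collect the basic blocks reachable from a function entry via a simple
    breadth-first traversal of a control-flow graph.

    cfg   : dict mapping block offset -> list of successor block offsets
    entry : offset of the function entry block
    """
    seen, queue = {entry}, deque([entry])
    while queue:
        block = queue.popleft()
        for succ in cfg.get(block, []):
            if succ not in seen:
                seen.add(succ)
                queue.append(succ)
    return seen

# Toy CFG: blocks 0x10..0x40 belong to the function entered at 0x10
cfg = {0x10: [0x20, 0x30], 0x20: [0x40], 0x30: [0x40], 0x40: [], 0x50: [0x60]}
print(sorted(hex(b) for b in function_boundary(cfg, 0x10)))
```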

Graph convolutional networks (GCNs) have been successfully applied to many graph-based applications; however, training a large-scale GCN remains challenging. Current SGD-based algorithms suffer from either a high computational cost that grows exponentially with the number of GCN layers, or a large space requirement for keeping the entire graph and the embedding of each node in memory. In this paper, we propose Cluster-GCN, a novel GCN algorithm that is suitable for SGD-based training by exploiting the graph clustering structure. Cluster-GCN works as follows: at each step, it samples a block of nodes associated with a dense subgraph identified by a graph clustering algorithm, and restricts the neighborhood search to within this subgraph. This simple but effective strategy leads to significantly improved memory and computational efficiency while achieving test accuracy comparable to previous algorithms. To test the scalability of our algorithm, we create a new Amazon2M dataset with 2 million nodes and 61 million edges, which is more than 5 times larger than the previous largest publicly available dataset (Reddit). For training a 3-layer GCN on this data, Cluster-GCN is faster than the previous state-of-the-art VR-GCN (1523 seconds vs. 1961 seconds) while using much less memory (2.2GB vs. 11.2GB). Furthermore, for training a 4-layer GCN on this data, our algorithm can finish in around 36 minutes, while all existing GCN training algorithms fail due to out-of-memory issues. Cluster-GCN also allows us to train much deeper GCNs without much time and memory overhead, which leads to improved prediction accuracy: using a 5-layer Cluster-GCN, we achieve a state-of-the-art test F1 score of 99.36 on the PPI dataset, while the previous best result was 98.71 by [16]. Our code is publicly available at //github.com/google-research/google-research/tree/master/cluster_gcn.
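The core training idea can be sketched in a few lines: restrict each SGD step to one cluster's subgraph. In the sketch below the partition is a random placeholder standing in for the graph clustering algorithm (e.g., METIS), and the adjacency normalization is assumed to be done beforehand; it is an illustration of the scheme, not the released implementation.

```python
import torch

def gcn_forward(adj, x, weights):
    """Two-layer GCN forward pass on a (sub)graph: A @ relu(A @ X @ W1) @ W2."""
    h = torch.relu(adj @ x @ weights[0])
    return adj @ h @ weights[1]

def cluster_gcn_step(adj, x, y, clusters, weights, lr=0.01):
    """One Cluster-GCN-style SGD step: pick a cluster, restrict the graph and
    the neighborhood search to that dense subgraph, and update on it only."""
    nodes = clusters[torch.randint(len(clusters), (1,)).item()]
    sub_adj = adj[nodes][:, nodes]               # subgraph adjacency
    out = gcn_forward(sub_adj, x[nodes], weights)
    loss = torch.nn.functional.cross_entropy(out, y[nodes])
    loss.backward()
    with torch.no_grad():
        for w in weights:
            w -= lr * w.grad
            w.grad = None
    return loss.item()

# Toy usage: 100 nodes, a symmetric random adjacency, 2 random "clusters"
n, d, c = 100, 16, 3
adj = torch.rand(n, n)
adj = (adj + adj.T) / n
x, y = torch.randn(n, d), torch.randint(0, c, (n,))
weights = [torch.randn(d, 32, requires_grad=True), torch.randn(32, c, requires_grad=True)]
clusters = list(torch.randperm(n).chunk(2))
print(cluster_gcn_step(adj, x, y, clusters, weights))
```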

We examine the problem of question answering over knowledge graphs, focusing on simple questions that can be answered by the lookup of a single fact. Adopting a straightforward decomposition of the problem into entity detection, entity linking, relation prediction, and evidence combination, we explore simple yet strong baselines. On the popular SimpleQuestions dataset, we find that basic LSTMs and GRUs plus a few heuristics yield accuracies that approach the state of the art, and techniques that do not use neural networks also perform reasonably well. These results show that gains from sophisticated deep learning techniques proposed in the literature are quite modest and that some previous models exhibit unnecessary complexity.
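The four-stage decomposition can be written down as a tiny pipeline with trivial string-matching stand-ins for each stage; the paper's baselines use LSTMs/GRUs plus heuristics, so everything below, including the data structures, is illustrative only.

```python
def answer_simple_question(question, kg, entity_aliases, relation_keywords):
    """Skeleton of the four-stage decomposition with string-matching stand-ins.

    kg                : dict mapping (subject, relation) -> answer
    entity_aliases    : dict mapping surface form -> entity id
    relation_keywords : dict mapping keyword -> relation id
    """
    # 1) entity detection: find a known surface form mentioned in the question
    mention = next((a for a in entity_aliases if a in question.lower()), None)
    # 2) entity linking: map the surface form to a knowledge-graph entity
    entity = entity_aliases.get(mention)
    # 3) relation prediction: pick a relation whose keyword appears in the question
    relation = next((r for k, r in relation_keywords.items() if k in question.lower()), None)
    # 4) evidence combination: look up the single fact that answers the question
    return kg.get((entity, relation))

# Toy usage
kg = {("Q1", "birthplace"): "Ulm"}
print(answer_simple_question(
    "Where was Einstein born?", kg,
    entity_aliases={"einstein": "Q1"},
    relation_keywords={"born": "birthplace"},
))
```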
