
We consider the problem of private computation (PC) in a distributed storage system. In such a setting a user wishes to compute a function of $f$ messages replicated across $n$ noncolluding databases, while revealing no information about the desired function to the databases. We provide an achievable PC rate with information-theoretic privacy, defined as the ratio between the smallest desired amount of information and the total amount of downloaded information, for the scenario of nonlinear computation. For a large message size the rate equals the PC capacity, i.e., the maximum achievable PC rate, when the candidate functions are the $f$ independent messages and one arbitrary nonlinear function of these. As the number of messages grows, the PC rate approaches an outer bound on the PC capacity. As a special case, we consider private monomial computation (PMC) and numerically compare the achievable PMC rate to the outer bound for a finite number of messages.
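As a hedged illustration of the rate notion above (the symbols here are ours, not the paper's): with $G$ candidate functions $W^{(1)}, \dots, W^{(G)}$ and an expected total download of $\mathsf{D}$ symbols, the PC rate can be written as

\[
R \;=\; \frac{\min_{v \in \{1, \dots, G\}} H\bigl(W^{(v)}\bigr)}{\mathsf{D}},
\]

i.e., the scheme is rated against the candidate function carrying the least information, and the PC capacity is the supremum of $R$ over all PC schemes.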

Related Content

Secure computation protocols combine inputs from the involved parties to generate an output while keeping those inputs private. Private Set Intersection (PSI) is a secure computation protocol that allows two parties, each holding a set of items, to learn the intersection of their sets without revealing anything else about the items. Private Intersection Sum (PIS) extends PSI to the case where the two parties want to learn the cardinality of the intersection, as well as the sum of the associated integer values for each identifier in the intersection, but nothing more. Finally, Private Join and Compute (PJC) is a scalable extension of the PIS protocol that helps organizations work together on confidential data sets. The extensions proposed in this paper include: (a) extending the PJC protocol to additional data columns and applying columnar aggregation based on supported homomorphic operations, (b) exploring Ring Learning with Errors (RLWE) homomorphic encryption schemes to apply arithmetic operations such as sum and sum of squares, (c) ensuring stronger security through mutual authentication of the communicating parties using certificates, and (d) developing a website to operationalize such a service offering. We applied our results to develop a proof-of-concept solution called JingBing, a voter-list validation service that allows different states to register, acquire secure communication modules, install them, and then conduct authenticated peer-to-peer communication. We conclude the paper with directions for future research to make such a solution scalable for practical real-life scenarios.
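To make the PSI-to-PIS flow concrete, here is a minimal Python sketch of the classic double-masking (Diffie-Hellman-style) intersection followed by a sum over matched values. All parameters are illustrative, the group is a toy one, and the associated values are left in the clear; a real PIS run keeps them under additively homomorphic encryption (e.g., Paillier) so that only the sum is revealed. This is not the paper's JingBing implementation.

# Toy sketch of a Private Intersection Sum (PIS) flow, for illustration only.
# Not secure as written; a real protocol uses a proper DDH group and keeps the
# values encrypted under an additively homomorphic scheme.
import hashlib
import secrets

P = 2**127 - 1  # a Mersenne prime standing in for a real DDH group modulus

def h2g(item: str) -> int:
    """Hash an identifier into the multiplicative group mod P."""
    return int.from_bytes(hashlib.sha256(item.encode()).digest(), "big") % P

# Party A holds identifiers; Party B holds identifiers with integer values.
a_items = {"alice", "bob", "carol"}
b_items = {"bob": 10, "carol": 32, "dave": 5}

ka = secrets.randbelow(P - 2) + 1  # A's secret exponent
kb = secrets.randbelow(P - 2) + 1  # B's secret exponent

# Round 1: each party masks its own items with its secret exponent.
a_masked = {pow(h2g(x), ka, P) for x in a_items}
b_masked = {pow(h2g(x), kb, P): v for x, v in b_items.items()}

# Round 2: the parties exchange masked sets and apply their own exponent
# again, so every identifier ends up as g^(ka*kb) on both sides.
a_double = {pow(y, kb, P) for y in a_masked}                # computed by B
b_double = {pow(y, ka, P): v for y, v in b_masked.items()}  # computed by A

# Match on doubly-masked values; only the size and sum should be revealed.
common = a_double & set(b_double)
print("intersection size:", len(common))   # 2  (bob, carol)
print("intersection sum :", sum(b_double[y] for y in common))  # 42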

We present a novel deep learning method for computing eigenvalues of the fractional Schr\"odinger operator. Our approach combines a newly developed loss function with an innovative neural network architecture that incorporates prior knowledge of the problem. These improvements enable our method to handle both high-dimensional problems and problems posed on irregular bounded domains. We successfully compute up to the first 30 eigenvalues for various fractional Schr\"odinger operators. As an application, we propose a conjecture on the fractional-order isospectral problem, which has not yet been studied.
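The paper's loss and architecture are not reproduced here, but the general idea of learning an eigenvalue by minimizing a Rayleigh quotient can be sketched for the ordinary (non-fractional) 1-D Laplacian with Dirichlet boundary conditions, where the smallest eigenvalue is known to be $\pi^2$; the fractional operator requires a more involved treatment.

# A minimal sketch of eigenvalue computation via Rayleigh-quotient
# minimization for -u'' on (0,1) with u(0) = u(1) = 0. Hyperparameters are
# arbitrary; this is not the paper's loss or network.
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def trial(x):
    # The factor x*(1-x) hard-codes the boundary conditions, so the network
    # only shapes the interior of the candidate eigenfunction.
    return x * (1 - x) * net(x)

for step in range(2000):
    x = torch.rand(1024, 1, requires_grad=True)
    u = trial(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    # Monte Carlo Rayleigh quotient; its minimum over trial functions is the
    # smallest eigenvalue of -u'', namely pi^2 ~ 9.8696.
    rayleigh = (du**2).mean() / (u**2).mean()
    opt.zero_grad()
    rayleigh.backward()
    opt.step()

print("estimated lambda_1:", float(rayleigh))  # should approach pi**2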

Adapting the User Interface (UI) of software systems to user requirements and the context of use is challenging. The main difficulty is suggesting the right adaptation at the right time in the right place so that it is valuable to end-users. We believe that recent progress in machine learning provides useful ways to support adaptation more effectively. In particular, reinforcement learning (RL) can be used to personalise interfaces for each context of use in order to improve the user experience (UX). However, determining the reward of each adaptation alternative is a challenge in RL for UI adaptation. Recent research has explored the use of reward models to address this challenge, but there is currently no empirical evidence on this type of model. In this paper, we propose a confirmatory study design that aims to investigate the effectiveness of two different approaches for the generation of reward models in the context of UI adaptation using RL: (1) a reward model derived exclusively from predictive Human-Computer Interaction (HCI) models (HCI), and (2) predictive HCI models augmented by human feedback (HCI&HF). The controlled experiment will use an AB/BA crossover design with two treatments: HCI and HCI&HF. We shall determine how the manipulation of these two treatments affects the UX when users interact with adaptive user interfaces (AUIs). The UX will be measured in terms of user engagement and user satisfaction, operationalized by means of predictive HCI models and the Questionnaire for User Interaction Satisfaction (QUIS), respectively. By comparing the performance of the two reward models in terms of their ability to adapt to user preferences and improve the UX, our study contributes to the understanding of how reward modelling can facilitate UI adaptation using RL.
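As a purely hypothetical sketch of the two treatments (all names, weights, and formulas below are our own illustrations, not the study's instruments), the distinction is whether the reward comes from a predictive HCI model alone or is blended with collected human feedback:

# Illustrative contrast between an HCI-model reward and an HCI&HF reward.
from dataclasses import dataclass, field

@dataclass
class RewardModel:
    human_feedback: dict = field(default_factory=dict)  # adaptation -> list of ratings

    def hci_reward(self, predicted_time_s: float, predicted_errors: float) -> float:
        # Toy predictive-HCI reward: faster predicted task completion and
        # fewer predicted errors yield a higher reward (weights are assumed).
        return -0.1 * predicted_time_s - 1.0 * predicted_errors

    def hci_hf_reward(self, adaptation: str, t: float, e: float) -> float:
        # HCI&HF: blend the model-based reward with an average human rating.
        base = self.hci_reward(t, e)
        ratings = self.human_feedback.get(adaptation, [])
        return base if not ratings else 0.5 * base + 0.5 * (sum(ratings) / len(ratings))

rm = RewardModel(human_feedback={"enlarge_buttons": [0.8, 1.0]})
print(rm.hci_reward(12.0, 1))                        # HCI treatment
print(rm.hci_hf_reward("enlarge_buttons", 12.0, 1))  # HCI&HF treatment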

Teleoperation enables a user to perform tasks from a remote location: the user interacts with a distant environment through the operation of a robotic system. Teleoperation is often required for dangerous tasks (e.g., work in disaster zones or in chemical plants) while keeping the user out of harm's way. Nevertheless, common approaches are often cumbersome and unnatural to use. In this letter, we propose TeleFMG, an approach for teleoperation of a multi-finger robotic hand through natural motions of the user's hand. Using a low-cost wearable Force-Myography (FMG) device, musculoskeletal activity on the user's forearm is mapped to hand poses which, in turn, are mimicked by a robotic hand. The mapping is performed by a data-based model that considers the spatial positions of the sensors on the forearm along with the temporal dependencies of the FMG signals. A set of experiments shows the ability of a teleoperator to control a multi-finger hand through intuitive and natural finger motion. Furthermore, transfer to new users is demonstrated.
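A hypothetical sketch of such a data-based model (shapes, layer sizes, and the spatial/temporal split below are assumptions, not TeleFMG's actual architecture): a 1-D convolution runs across the forearm sensors at each time step, and an LSTM aggregates the temporal dependencies of the FMG signal before regressing a hand pose.

# Illustrative FMG-to-hand-pose regressor: spatial conv + temporal LSTM.
import torch
import torch.nn as nn

class FMG2Pose(nn.Module):
    def __init__(self, n_sensors=16, n_joints=15, hidden=64):
        super().__init__()
        # Convolve across the sensor axis at each time step.
        self.spatial = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.temporal = nn.LSTM(8 * n_sensors, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_joints)  # e.g., finger joint angles

    def forward(self, x):          # x: (batch, time, n_sensors)
        b, t, s = x.shape
        z = self.spatial(x.reshape(b * t, 1, s))   # (b*t, 8, s)
        z = z.reshape(b, t, -1)                    # (b, t, 8*s)
        out, _ = self.temporal(z)
        return self.head(out[:, -1])               # pose at the last step

model = FMG2Pose()
fmg_window = torch.randn(2, 50, 16)  # two windows: 50 samples x 16 sensors
print(model(fmg_window).shape)       # torch.Size([2, 15])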

We study the problem of estimating the derivatives of a regression function, which has a wide range of applications as a key nonparametric functional of unknown functions. Standard analysis may be tailored to specific derivative orders, and parameter tuning remains a daunting challenge particularly for high-order derivatives. In this article, we propose a simple plug-in kernel ridge regression (KRR) estimator in nonparametric regression with random design that is broadly applicable for multi-dimensional support and arbitrary mixed-partial derivatives. We provide a non-asymptotic analysis to study the behavior of the proposed estimator in a unified manner that encompasses the regression function and its derivatives, leading to two error bounds for a general class of kernels under the strong $L_\infty$ norm. In a concrete example specialized to kernels with polynomially decaying eigenvalues, the proposed estimator recovers the minimax optimal rate up to a logarithmic factor for estimating derivatives of functions in H\"older and Sobolev classes. Interestingly, the proposed estimator achieves the optimal rate of convergence with the same choice of tuning parameter for any order of derivatives. Hence, the proposed estimator enjoys a \textit{plug-in property} for derivatives in that it automatically adapts to the order of derivatives to be estimated, enabling easy tuning in practice. Our simulation studies show favorable finite sample performance of the proposed method relative to several existing methods and corroborate the theoretical findings on its minimax optimality.
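A minimal 1-D illustration of the plug-in idea (our own toy setup, with a Gaussian kernel and $f(x) = \sin(2\pi x)$): fit KRR once, then estimate $f'$ by differentiating the kernel and reusing the same coefficients and the same tuning parameter $\lambda$.

# Plug-in KRR derivative estimation: one fit serves both f and f'.
import numpy as np

rng = np.random.default_rng(0)
n, lam, ell = 200, 1e-4, 0.1
X = rng.uniform(0, 1, n)
y = np.sin(2 * np.pi * X) + 0.1 * rng.standard_normal(n)

def k(a, b):       # Gaussian kernel matrix k(a_i, b_j)
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * ell**2))

def dk(a, b):      # derivative of the kernel in its first argument
    return -(a[:, None] - b[None, :]) / ell**2 * k(a, b)

alpha = np.linalg.solve(k(X, X) + n * lam * np.eye(n), y)

x0 = np.linspace(0.1, 0.9, 5)
f_hat  = k(x0, X) @ alpha   # plug-in estimate of f
df_hat = dk(x0, X) @ alpha  # plug-in estimate of f' (same alpha, same lambda)
print(np.round(df_hat, 2))
print(np.round(2 * np.pi * np.cos(2 * np.pi * x0), 2))  # true derivative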

A cloud scheduler packs tasks onto machines with the contradictory goals of (1) using the machines as efficiently as possible while (2) avoiding overloads that might result in CPU throttling or out-of-memory errors. We take a stochastic approach that models the uncertainty of tasks' resource requirements by random variables. We focus on the little-explored case in which each item follows a Bernoulli distribution, corresponding to tasks that are either idle or need a certain CPU share. RPAP, our online approximation algorithm, upper-bounds a subset of the items by Poisson distributions. Unlike existing algorithms for Bernoulli items, which prove the approximation ratio only up to a multiplicative constant, we provide a closed-form expression. We also derive RPAPC, a combined approach with the same theoretical guarantees as RPAP. In simulations, RPAPC's results are close to those of FFR, a greedy heuristic with no worst-case guarantees, and RPAPC slightly outperforms FFR on datasets with small items.
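An illustrative sketch of the core idea, not the paper's RPAP (the capacity, tolerance, and the simplification that every active task needs one CPU unit are our assumptions): treat each machine's Bernoulli load as if it were Poisson and place an item on the first machine whose approximated overload probability stays below a tolerance.

# First-fit packing of Bernoulli items under a Poisson overload bound.
import math

CAPACITY = 10      # CPU units per machine
EPS = 1e-3         # tolerated overload probability

def poisson_tail(lam: float, c: int) -> float:
    """P(Poisson(lam) > c), via the complementary CDF."""
    pmf, cdf = math.exp(-lam), math.exp(-lam)
    for k in range(1, c + 1):
        pmf *= lam / k
        cdf += pmf
    return 1.0 - cdf

def place(p: float, machines: list) -> int:
    """machines[i] holds the sum of activity probabilities on machine i."""
    for i, lam in enumerate(machines):
        if poisson_tail(lam + p, CAPACITY) <= EPS:
            machines[i] += p
            return i
    machines.append(p)        # no machine fits: open a new one
    return len(machines) - 1

machines = []
for p in [0.5, 0.1, 0.9, 0.3] * 10:   # a stream of Bernoulli activity probs
    place(p, machines)
print(len(machines), "machines for 40 items; loads:", machines)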

One-class classification (OCC) is a longstanding method for anomaly detection. With the powerful representation capability of pre-trained backbones, OCC methods have witnessed significant performance improvements. Typically, these OCC methods employ transfer learning to enhance the discriminative nature of the pre-trained backbone's features, achieving remarkable efficacy. While most current approaches emphasize feature transfer strategies, we argue that the optimization objective space within OCC methods may also be an underlying critical factor influencing performance. In this work, we conduct a thorough investigation into the optimization objective of OCC. Through rigorous theoretical analysis and derivation, we unveil a key insight: any space with a suitable norm can serve as an equivalent substitute for the hypersphere center, without relying on distributional assumptions about the training samples. Further, we provide guidelines for determining the feasible domain of norms for the OCC optimization objective. This insight sparks a simple and data-agnostic deep one-class classification method. Our method is straightforward: a single 1x1 convolutional layer serves as a trainable projector, and any space with a suitable norm serves as the optimization objective. Extensive experiments validate the reliability and efficacy of our findings and the corresponding methodology, resulting in state-of-the-art performance in both one-class classification and industrial vision anomaly detection and segmentation tasks.
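A hedged sketch of the training-loop shape described above (the backbone is a random placeholder and the fixed-target objective is a DeepSVDD-style stand-in for the paper's norm-based objective): only a single 1x1 convolutional projector is trained on top of frozen features.

# One-class training with a frozen backbone and a 1x1 conv projector.
import torch
import torch.nn as nn

backbone = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU())
for prm in backbone.parameters():
    prm.requires_grad_(False)       # the pre-trained backbone stays frozen

projector = nn.Conv2d(64, 32, kernel_size=1)   # the only trainable module
opt = torch.optim.Adam(projector.parameters(), lr=1e-3)
center = torch.ones(32)  # fixed nonzero target in the projected space

for step in range(100):
    x = torch.randn(8, 3, 32, 32)               # stand-in for normal images
    with torch.no_grad():
        feats = backbone(x)
    z = projector(feats).mean(dim=(2, 3))       # (batch, 32) pooled projection
    loss = ((z - center) ** 2).sum(dim=1).mean()
    # Real OCC methods add constraints to rule out trivial constant solutions.
    opt.zero_grad(); loss.backward(); opt.step()

# At test time, the distance ||z - center|| serves as the anomaly score.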

A prevalent practice in recommender systems consists of averaging item embeddings to represent users or higher-level concepts in the same embedding space. This paper investigates the relevance of such a practice. For this purpose, we propose an expected precision score, designed to measure the consistency of an average embedding relative to the items used for its construction. We subsequently analyze the mathematical expression of this score in a theoretical setting with specific assumptions, as well as its empirical behavior on real-world data from music streaming services. Our results emphasize that real-world averages are less consistent for recommendation than their theoretical counterparts, which paves the way for future research to better align real-world embeddings with the assumptions of our theoretical setting.
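An illustrative consistency check in the spirit of the expected precision score (the paper's exact definition may differ): build an average embedding from a user's items and measure how many of those items are recovered among the average's nearest neighbors.

# Consistency of an average embedding on synthetic unit-norm items.
import numpy as np

rng = np.random.default_rng(0)
catalog = rng.standard_normal((1000, 32))
catalog /= np.linalg.norm(catalog, axis=1, keepdims=True)

user_items = rng.choice(1000, size=10, replace=False)
avg = catalog[user_items].mean(axis=0)          # average item embedding

scores = catalog @ avg                          # cosine-style similarity
top_k = np.argsort(-scores)[:10]                # 10 nearest items to the mean
precision = len(set(top_k) & set(user_items)) / 10
print("precision of the average embedding:", precision)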

As soon as abstract mathematical computations were adapted to computation on digital computers, the problem of efficient representation, manipulation, and communication of the numerical values in those computations arose. Strongly related to the problem of numerical representation is the problem of quantization: in what manner should a set of continuous real-valued numbers be distributed over a fixed discrete set of numbers to minimize the number of bits required and also to maximize the accuracy of the attendant computations? This perennial problem of quantization is particularly relevant whenever memory and/or computational resources are severely restricted, and it has come to the forefront in recent years due to the remarkable performance of Neural Network models in computer vision, natural language processing, and related areas. Moving from floating-point representations to low-precision fixed integer values represented in four bits or less holds the potential to reduce the memory footprint and latency by a factor of 16x; and, in fact, reductions of 4x to 8x are often realized in practice in these applications. Thus, it is not surprising that quantization has emerged recently as an important and very active sub-area of research in the efficient implementation of computations associated with Neural Networks. In this article, we survey approaches to the problem of quantizing the numerical values in deep Neural Network computations, covering the advantages/disadvantages of current methods. With this survey and its organization, we hope to have presented a useful snapshot of the current research in quantization for Neural Networks and to have given an intelligent organization to ease the evaluation of future research in this area.
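As a concrete anchor for the survey's subject matter, here is a minimal sketch of uniform affine (asymmetric) quantization, the basic building block behind most low-precision schemes: real values are mapped to $b$-bit integers via a scale and a zero-point, and approximately recovered on dequantization.

# Uniform affine quantization to b-bit integers and back.
import numpy as np

def quantize(x: np.ndarray, bits: int = 4):
    qmin, qmax = 0, 2**bits - 1
    scale = (x.max() - x.min()) / (qmax - qmin)
    zero_point = int(round(-x.min() / scale))
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return scale * (q.astype(np.float32) - zero_point)

w = np.random.default_rng(0).standard_normal(8).astype(np.float32)
q, s, z = quantize(w, bits=4)
print("original  :", np.round(w, 3))
print("quantized :", q)                              # 4-bit integer codes
print("recovered :", np.round(dequantize(q, s, z), 3))  # small rounding error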

Neural machine translation (NMT) is a deep learning based approach for machine translation, which yields state-of-the-art translation performance in scenarios where large-scale parallel corpora are available. Although high-quality, domain-specific translation is crucial in the real world, domain-specific corpora are usually scarce or nonexistent, and thus vanilla NMT performs poorly in such scenarios. Domain adaptation, which leverages both out-of-domain parallel corpora and monolingual corpora for in-domain translation, is therefore very important for domain-specific translation. In this paper, we give a comprehensive survey of the state-of-the-art domain adaptation techniques for NMT.
