
Number Theoretic Transform (NTT) is an essential mathematical tool for computing polynomial multiplication in lattice-based cryptography, a promising family of post-quantum schemes. However, costly division operations and complex data dependencies make efficient and flexible hardware design challenging, especially on resource-constrained edge devices. Existing approaches either support only a limited set of parameter settings or impose substantial hardware overhead. In this paper, we introduce a hardware-algorithm methodology to efficiently accelerate NTT in various settings using in-cache computing. By leveraging an optimized bit-parallel modular multiplication and introducing cost-free shift operations, our proposed solution provides up to 29x higher throughput-per-area and 2.8-100x better throughput-per-area-per-joule compared to the state-of-the-art.
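To make the arithmetic concrete, the following is a minimal plaintext sketch of a forward and inverse NTT over a toy prime field; the parameters (q = 17, n = 8, ω = 2) and the naive O(n^2) evaluation are purely illustrative and are not the parameter sets or the in-cache bit-parallel design described above.

```python
# Toy forward/inverse NTT over Z_q: evaluate a length-n coefficient vector at
# the powers of a primitive n-th root of unity omega.  Parameters are
# illustrative only (2 has multiplicative order 8 modulo 17).
Q, N, OMEGA = 17, 8, 2

def ntt(a):
    """Naive O(n^2) forward NTT of a length-N vector a (entries in Z_Q)."""
    return [sum(a[j] * pow(OMEGA, i * j, Q) for j in range(N)) % Q
            for i in range(N)]

def intt(A):
    """Inverse NTT: evaluate at powers of omega^{-1}, then scale by n^{-1} mod q."""
    omega_inv = pow(OMEGA, Q - 2, Q)   # modular inverses via Fermat's little theorem
    n_inv = pow(N, Q - 2, Q)
    return [(n_inv * sum(A[j] * pow(omega_inv, i * j, Q) for j in range(N))) % Q
            for i in range(N)]

if __name__ == "__main__":
    a = [3, 1, 4, 1, 5, 9, 2, 6]
    assert intt(ntt(a)) == a   # the transform round-trips, so pointwise products
                               # of NTTs realize (cyclic) polynomial multiplication
```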

Related Content

Principal Component Analysis (PCA) is a pivotal technique in the fields of machine learning and data analysis. It aims to reduce the dimensionality of a dataset while minimizing the loss of information. In recent years, there have been endeavors to utilize homomorphic encryption in privacy-preserving PCA algorithms. These approaches commonly employ a PCA routine known as PowerMethod, which takes the covariance matrix as input and generates an approximate eigenvector corresponding to the primary component of the dataset. However, their performance and accuracy are constrained by the inability to compute the covariance matrix homomorphically and by the absence of a universal vector normalization strategy for the PowerMethod algorithm. In this study, we propose a novel approach to privacy-preserving PCA that addresses these limitations, resulting in superior efficiency, accuracy, and scalability compared to previous approaches. We attain such efficiency and precision through the following contributions: (i) We implement space optimization techniques for a homomorphic matrix multiplication method (Jiang et al., SIGSAC 2018), making it less prone to memory saturation in parallel computation scenarios. (ii) Leveraging the benefits of this optimized matrix multiplication, we devise an efficient homomorphic circuit for computing the covariance matrix homomorphically. (iii) Utilizing the covariance matrix, we develop a novel and efficient homomorphic circuit for the PowerMethod that incorporates a universal homomorphic vector normalization strategy to enhance both its accuracy and practicality.
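For reference, this is what the PowerMethod looks like in the clear: repeated multiplication by the covariance matrix followed by normalization converges to the dominant eigenvector. The homomorphic circuits in the paper replace these plaintext steps; the data and iteration count below are illustrative assumptions.

```python
# Plaintext PowerMethod sketch: the homomorphic version replaces the matrix
# product, the normalization, and the covariance computation with encrypted
# circuits (contributions (ii) and (iii) above).
import numpy as np

def power_method(cov, num_iters=50):
    """Approximate the dominant eigenvector of a symmetric matrix `cov`."""
    v = np.ones(cov.shape[0])
    for _ in range(num_iters):
        v = cov @ v
        v = v / np.linalg.norm(v)   # normalization step; the hard part to do homomorphically
    return v

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))
    cov = np.cov(X, rowvar=False)        # covariance matrix of the toy dataset
    v = power_method(cov)
    w, V = np.linalg.eigh(cov)
    print(abs(v @ V[:, -1]))             # close to 1: aligned with the top eigenvector
```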

General Matrix Multiplication (GEMM) is a fundamental operation widely used in scientific computing. Its performance and accuracy significantly impact those of the applications that depend on it. One such application is semidefinite programming (SDP), which often requires binary128 or higher-precision arithmetic to be solved stably. However, only a few processors support binary128 arithmetic natively, which makes SDP solvers generally slow. In this study, we focused on accelerating GEMM with binary128 arithmetic on field-programmable gate arrays (FPGAs), which enable flexible design of accelerators for the desired computations. Our binary128 GEMM designs on a recent high-performance FPGA achieved approximately 90 GFlops, 147x faster than the same computation executed on a recent CPU with 20 threads for large matrices. Using our binary128 GEMM design on the FPGA, we successfully accelerated two numerical applications, LU decomposition and SDP problems, for the first time.
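As a point of reference for why native binary128 support matters, the sketch below emulates a quad-like-precision GEMM entirely in software (mpmath with a 113-bit significand, roughly mirroring binary128): every multiply-add becomes a library call, which is exactly the cost a hardware design avoids. The routine and test data are illustrative, not the paper's FPGA kernel.

```python
# Software-emulated high-precision GEMM: C := alpha*A*B + beta*C.
from mpmath import mp, mpf

mp.prec = 113   # binary128 carries a 113-bit significand

def gemm(alpha, A, B, beta, C):
    """Reference GEMM with A (n x k), B (k x m), C (n x m) as lists of mpf."""
    n, k, m = len(A), len(B), len(B[0])
    out = [[beta * C[i][j] for j in range(m)] for i in range(n)]
    for i in range(n):
        for p in range(k):
            aip = alpha * A[i][p]               # hoisted; every op is a software routine
            for j in range(m):
                out[i][j] += aip * B[p][j]
    return out

if __name__ == "__main__":
    A = [[mpf(1) / 3, mpf(2)], [mpf(3), mpf(4)]]
    B = [[mpf(5), mpf(6)], [mpf(7), mpf(1) / 7]]
    C = [[mpf(0), mpf(0)], [mpf(0), mpf(0)]]
    print(gemm(mpf(1), A, B, mpf(0), C))
```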

Continuum robots have gained widespread popularity due to their inherent compliance and flexibility, particularly their adjustable stiffness for various application scenarios. Despite efforts in dynamic modeling and control synthesis over the past decade, few studies have incorporated stiffness regulation into feedback control design, even though this was one of the original motivations for developing continuum robots. This paper addresses the crucial challenge of controlling both the position and stiffness of a class of highly underactuated continuum robots actuated by antagonistic tendons. To this end, we first present a high-dimensional rigid-link dynamical model that can analyze the open-loop stiffening of tendon-driven continuum robots. Based on this model, we propose a novel passivity-based position-and-stiffness controller that adheres to the non-negative tension constraint. To demonstrate the effectiveness of our approach, we tested the theoretical results on our continuum robot, and the experimental results show the efficacy and precise performance of the proposed methodology.

In this paper, we investigate an unmanned aerial vehicle (UAV)-assisted integrated communication and localization network in emergency scenarios, where a single UAV is deployed as both an airborne base station (BS) and an anchor node to assist ground BSs in communication and localization services. We formulate an optimization problem to maximize the sum communication rate of all users under localization accuracy constraints by jointly optimizing the 3D position of the UAV and the communication bandwidth and power allocation of the UAV and ground BSs. To handle the intractable localization accuracy constraints, we introduce a new performance metric and geometrically characterize the UAV's feasible deployment region in which the localization accuracy constraints are satisfied. Accordingly, we combine Gibbs sampling (GS) and block coordinate descent (BCD) techniques to tackle the non-convex joint optimization problem. Numerical results show that the proposed method attains almost identical rate performance to the meta-heuristic benchmark method while reducing the CPU time by 89.3%.
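For readers unfamiliar with block coordinate descent, the sketch below shows the basic pattern the approach builds on: the variables are split into blocks and each block is updated in turn with the others held fixed. The toy quadratic objective is an illustrative assumption and is unrelated to the rate-maximization problem above.

```python
# Minimal block coordinate descent (BCD): cycle over variable blocks, taking a
# gradient step on one block while the others stay fixed.
def bcd(block_grads, x, step=0.1, iters=500):
    for _ in range(iters):
        for i, grad in enumerate(block_grads):
            x[i] -= step * grad(x)
    return x

# Toy objective f(x, y) = (x - 1)^2 + (y + 2)^2 + 0.5*x*y, split into two 1-D blocks.
grads = [lambda v: 2 * (v[0] - 1) + 0.5 * v[1],   # df/dx
         lambda v: 2 * (v[1] + 2) + 0.5 * v[0]]   # df/dy
print(bcd(grads, [0.0, 0.0]))   # converges to the stationary point (1.6, -2.4)
```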

Integrated sensing, computation, and communication (ISCC) has recently been considered a promising technique for beyond-5G systems. In ISCC systems, the competition for communication and computation resources between sensing tasks for ambient intelligence and computation tasks from mobile devices becomes an increasingly challenging issue. To address it, we first propose an efficient sensing framework with a novel action detection module. In this module, a threshold is used to detect whether the sensing target is static, so that the sensing overhead can be reduced. Subsequently, we mathematically analyze the sensing performance of the proposed framework and theoretically prove its effectiveness with the help of the sampling theorem. Based on the sensing performance models, we formulate a sensing performance maximization problem while guaranteeing the quality-of-service (QoS) requirements of the tasks. To solve it, we propose an optimal resource allocation strategy in which the minimum required resources are allocated to computation tasks and the rest are devoted to the sensing task. Besides, a threshold selection policy is derived, and the results further demonstrate the necessity of the proposed sensing framework. Finally, a real-world test of action recognition tasks based on a USRP B210 is conducted to verify the sensing performance analysis. Extensive experiments demonstrate the performance improvement of our proposal compared with several benchmark schemes.
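The following toy sketch illustrates the threshold idea of the action detection module: a window of sensing samples is declared static when its fluctuation falls below a threshold, so the heavier recognition pipeline can be skipped. The variance statistic, threshold value, and synthetic signals are illustrative assumptions, not the paper's detector.

```python
# Toy static/active detector: skip further processing when a sample window
# shows little variation.
import numpy as np

def is_static(window: np.ndarray, threshold: float = 0.05) -> bool:
    """Return True when the window's sample variance is below the threshold."""
    return float(np.var(window)) < threshold

rng = np.random.default_rng(0)
static_win = 1.0 + 0.01 * rng.standard_normal(256)               # idle channel
moving_win = 1.0 + 0.5 * np.sin(np.linspace(0, 8 * np.pi, 256))   # motion-induced variation
print(is_static(static_win), is_static(moving_win))               # True False
```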

A stable and high-order accurate solver for linear and nonlinear parabolic equations is presented. An additive Runge-Kutta method is used for the time stepping, which integrates the stiff linear terms with an explicit-first-stage, singly diagonally implicit Runge-Kutta (ESDIRK) method and the nonlinear terms with an explicit Runge-Kutta (ERK) method. In each time step, the implicit solve is performed by the recently developed Hierarchical Poincar\'e-Steklov (HPS) method. This is a fast direct solver for elliptic equations that decomposes the spatial domain into a hierarchical tree of subdomains and builds spectral collocation solvers locally on the subdomains. These ideas combine naturally in the presented method, since the single diagonal coefficient of the ESDIRK scheme and a fixed time step ensure that the coefficient matrix in the implicit solve of HPS remains the same for all stages. This means that the precomputed inverse can be efficiently reused, leading to a scheme with complexity (in two dimensions) $\mathcal{O}(N^{1.5})$ for the precomputation, where the solution operator for the elliptic problems is built, and then $\mathcal{O}(N)$ for each time step. The stability of the method is proved for first order in time and any order in space, and numerical evidence substantiates a claim of stability for a much broader class of time discretization methods. Numerical experiments supporting the accuracy and efficiency of the method in one and two dimensions are presented.
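The key structural point, reusing one factorization across all implicit stages and time steps, can be illustrated in a few lines. The sketch below uses the trapezoidal rule written as a two-stage ESDIRK (γ = 1/2) on a purely linear toy problem u' = Lu, with a dense LU factorization standing in for the HPS solution operator; these are simplifying assumptions, not the paper's scheme.

```python
# Because the method is *singly* diagonally implicit and the step size is
# fixed, every implicit stage solves a system with the same matrix
# (I - gamma*h*L); factor it once, reuse it everywhere.
import numpy as np
from scipy.linalg import lu_factor, lu_solve

n, h = 50, 1e-3
L = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)   # toy 1D Laplacian stencil
gamma = 0.5                                                # trapezoidal rule as a 2-stage ESDIRK

factor = lu_factor(np.eye(n) - gamma * h * L)              # precomputed once, reused below

def esdirk_step(u):
    """One step for u' = L u; the implicit stage reuses the stored factorization."""
    k1 = L @ u                                             # explicit first stage
    k2 = lu_solve(factor, L @ (u + gamma * h * k1))        # implicit stage, no refactorization
    return u + h * (0.5 * k1 + 0.5 * k2)

u = np.sin(np.linspace(0, np.pi, n))
for _ in range(100):
    u = esdirk_step(u)
```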

In federated learning (FL), distributed clients can collaboratively train a shared global model while keeping their training data local. Nevertheless, the performance of FL is often limited by slow convergence due to poor communication links when FL is deployed over wireless networks. Because radio resources are scarce, it is crucial to select clients carefully and allocate communication resources accurately to enhance FL performance. To address these challenges, this paper formulates a joint client selection and resource allocation problem that minimizes the total time consumption of each FL round over a non-orthogonal multiple access (NOMA) enabled wireless network. Specifically, considering the staleness of the local FL models, we propose a novel age-of-update (AoU) based client selection scheme. Subsequently, closed-form expressions for the resource allocation are derived via monotonicity analysis and the dual decomposition method. In addition, a server-side artificial neural network (ANN) is proposed to predict the FL models of the clients that are not selected in each round, further improving FL performance. Finally, extensive simulation results demonstrate the superiority of the proposed schemes in terms of FL performance, average AoU, and total time consumption.
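A toy sketch of the age-of-update bookkeeping is given below: the server tracks how many rounds have passed since each client's update was last incorporated and favors the stalest clients. The greedy top-k selection is an illustrative assumption; the scheme above selects clients jointly with the NOMA resource allocation.

```python
# Toy AoU tracking: ages grow each round and reset when a client is selected.
import numpy as np

def select_clients(aou, num_selected):
    """Pick the `num_selected` clients with the largest age of update."""
    return np.argsort(aou)[-num_selected:]

def update_aou(aou, selected):
    aou += 1                 # every client ages by one round...
    aou[selected] = 0        # ...except those whose updates were just incorporated
    return aou

aou = np.zeros(10)
for round_idx in range(5):
    chosen = select_clients(aou, num_selected=3)
    aou = update_aou(aou, chosen)
    print(round_idx, sorted(chosen.tolist()))
```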

Distributed full-graph training of Graph Neural Networks (GNNs) over large graphs is bandwidth-demanding and time-consuming. Frequent exchanges of node features, embeddings, and embedding gradients (all referred to as messages) across devices bring significant communication overhead for nodes with remote neighbors on other devices (marginal nodes) and unnecessary waiting time for nodes without remote neighbors (central nodes) in the training graph. This paper proposes an efficient GNN training system, AdaQP, to expedite distributed full-graph GNN training. We stochastically quantize messages transferred across devices to lower-precision integers to reduce communication traffic and advocate communication-computation parallelization between marginal nodes and central nodes. We provide theoretical analysis proving fast training convergence (at a rate of $\mathcal{O}(T^{-1})$, with T being the total number of training epochs) and, based on this analysis, design an adaptive quantization bit-width assignment scheme for each message, targeting a good trade-off between training convergence and efficiency. Extensive experiments on mainstream graph datasets show that AdaQP substantially improves distributed full-graph training throughput (by up to 3.01x) with negligible accuracy drop (at most 0.30%) or even accuracy improvement (up to 0.19%) in most cases, showing significant advantages over state-of-the-art works.
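The core of message quantization can be sketched in a few lines: values are mapped to b-bit integers with probabilistic rounding so that dequantization is unbiased in expectation. The uniform per-tensor scale and fixed bit-width below are illustrative simplifications; AdaQP assigns bit-widths adaptively per message.

```python
# Stochastic quantization to low-precision integers with an unbiased decoder.
import numpy as np

def stochastic_quantize(x, bits, rng):
    """Map a float array x to unsigned `bits`-bit integers with stochastic rounding."""
    levels = 2 ** bits - 1
    lo, hi = float(x.min()), float(x.max())
    scale = max(hi - lo, 1e-12) / levels
    normalized = (x - lo) / scale                            # values now lie in [0, levels]
    floor = np.floor(normalized)
    round_up = rng.random(x.shape) < (normalized - floor)    # round up with prob = fractional part
    return (floor + round_up).astype(np.int32), lo, scale

def dequantize(q, lo, scale):
    return q.astype(np.float64) * scale + lo                 # unbiased estimate of the original x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    msg = rng.standard_normal(10_000)
    q, lo, scale = stochastic_quantize(msg, bits=4, rng=rng)
    print(np.abs(dequantize(q, lo, scale) - msg).mean())     # error bounded by the 4-bit step size
```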

The number of satellites, especially those operating in low-Earth orbit (LEO), has grown explosively in recent years. Additionally, the use of commercial off-the-shelf (COTS) hardware in those satellites enables a new paradigm of computing: orbital edge computing (OEC). OEC entails more technically advanced steps than single-satellite computing, which opens a vast design space with multiple parameters and renders several novel approaches feasible. The mobility of LEO satellites in the network and the limited communication, computation, and storage resources make it challenging to design appropriate scheduling algorithms for specific tasks compared with traditional ground-based edge computing. This article provides the first comprehensive survey of OEC, covering its significant areas of focus, including protocol optimization, mobility management, and resource allocation; previous surveys have concentrated only on ground-based edge computing or the integration of space and ground technologies. It reviews research from 2000 to 2023 on orbital edge computing, spanning network design, computation offloading, resource allocation, performance analysis, and optimization. Moreover, after discussing several related works, both technological challenges and future directions in the field are highlighted.

It is known that the multiplication of an $N \times M$ matrix with an $M \times P$ matrix can be performed using fewer multiplications than what the naive $NMP$ approach suggests. The most famous instance of this is Strassen's algorithm for multiplying two $2\times 2$ matrices in 7 instead of 8 multiplications. This gives rise to the constraint satisfaction problem of fast matrix multiplication, where a set of $R < NMP$ multiplication terms must be chosen and combined such that they satisfy correctness constraints on the output matrix. Despite its highly combinatorial nature, this problem has not been exhaustively examined from that perspective, as evidenced for example by the recent deep reinforcement learning approach of AlphaTensor. In this work, we propose a simple yet novel Constraint Programming approach to find non-commutative algorithms for fast matrix multiplication or provide proof of infeasibility otherwise. We propose a set of symmetry-breaking constraints and valid inequalities that are particularly helpful in proving infeasibility. On the feasible side, we find that exploiting solver performance variability in conjunction with a sparsity-based problem decomposition enables finding solutions for larger (feasible) instances of fast matrix multiplication. Our experimental results using CP Optimizer demonstrate that we can find fast matrix multiplication algorithms for matrices up to $3\times 3$ in a short amount of time.
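To make the correctness constraints concrete, the classic feasible solution for the $2 \times 2$ case is Strassen's set of $R = 7$ bilinear products; the sketch below checks those constraints numerically. It is only a worked example of a known solution, not the constraint programming model described above.

```python
# Strassen's <2,2,2> algorithm: 7 products of linear combinations of A's and
# B's entries, recombined into the 4 outputs (vs. 8 naive multiplications).
import numpy as np

def strassen_2x2(A, B):
    a, b, c, d = A[0, 0], A[0, 1], A[1, 0], A[1, 1]
    e, f, g, h = B[0, 0], B[0, 1], B[1, 0], B[1, 1]
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    return np.array([[m1 + m4 - m5 + m7, m3 + m5],
                     [m2 + m4,           m1 - m2 + m3 + m6]])

rng = np.random.default_rng(1)
A, B = rng.integers(-5, 5, (2, 2)), rng.integers(-5, 5, (2, 2))
assert np.array_equal(strassen_2x2(A, B), A @ B)   # correctness constraints hold with R = 7 < NMP = 8
```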
