
The reconfigurable intelligent surface (RIS) has attracted considerable interest because of its capacity to reconfigure the wireless communication environment in a cost- and energy-efficient way. However, realistic power consumption modeling and measurement validation of RIS have received far too little attention. Therefore, in this work, we model the power consumption of RIS and conduct measurement validations using various RISs to fill this gap. First, we propose a practical power consumption model of RIS. The RIS hardware is divided into three basic parts: the FPGA control board, the drive circuits, and the RIS unit cells. The power consumption of the first two parts is modeled as $P_{\text{static}}$ and that of the last part as $P_{\text{units}}$. The expressions for $P_{\text{static}}$ and $P_{\text{units}}$ vary among different types of RISs. Second, we conduct measurements on various RISs to validate the proposed model. Five different RISs, including PIN-diode, varactor-diode, and RF-switch types, are measured, and the measurement results validate the generality and applicability of the proposed power consumption model. Finally, we summarize the measurement results and discuss approaches to achieve low-power-consumption designs of RIS-assisted wireless communication systems.
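As a rough illustration of the proposed decomposition, the sketch below evaluates a total power of the form $P_{\text{static}} + P_{\text{units}}$ for a hypothetical PIN-diode RIS; all component values and the duty-cycle scaling are invented for illustration and are not taken from the paper's measurements.

```python
# Minimal sketch of the additive RIS power model P_total = P_static + P_units.
# All numbers are hypothetical; real values come from measurement.

def ris_power(num_units, p_fpga_w, p_drive_w, p_per_unit_on_w, duty_cycle):
    """Total RIS power in watts under the P_static + P_units decomposition."""
    p_static = p_fpga_w + p_drive_w          # FPGA control board + drive circuits
    # For e.g. a PIN-diode RIS, unit-cell power scales with the fraction of
    # cells biased to the ON state by the current coding pattern.
    p_units = num_units * duty_cycle * p_per_unit_on_w
    return p_static + p_units

# Example: a 256-cell PIN-diode RIS with half the cells in the ON state.
print(ris_power(256, p_fpga_w=2.5, p_drive_w=0.8,
                p_per_unit_on_w=0.02, duty_cycle=0.5))  # -> 5.86 W
```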

Related Content

The ACM/IEEE 23rd International Conference on Model Driven Engineering Languages and Systems (MODELS) is the premier conference series for model-driven software and systems engineering, organized with the support of ACM SIGSOFT and IEEE TCSE. Since 1998, MODELS has covered all aspects of modeling, from languages and methods to tools and applications. Its attendees come from diverse backgrounds, including researchers, academics, engineers, and industry professionals. MODELS 2019 is a forum in which participants can exchange cutting-edge research results and innovative practical experiences around modeling and model-driven software and systems. This year's edition will provide the modeling community with opportunities to further advance the foundations of modeling and to propose innovative applications of modeling in emerging areas such as cyber-physical systems, embedded systems, socio-technical systems, cloud computing, big data, machine learning, security, open source, and sustainability.
March 18, 2024

We study the problem of efficiently computing the derivative of the fixed point of a parametric non-differentiable contraction map. This problem has wide applications in machine learning, including hyperparameter optimization, meta-learning, and data poisoning attacks. We analyze two popular approaches: iterative differentiation (ITD) and approximate implicit differentiation (AID). A key challenge in the nonsmooth setting is that the chain rule no longer holds. Building upon the recent work by Bolte et al. (2022), who proved the linear convergence of non-differentiable ITD, we provide refined linear convergence rates for both ITD and AID in the deterministic case. We further introduce NSID, a new method to compute the implicit derivative when the fixed point is defined as the composition of an outer map and an inner map that is accessible only through a stochastic unbiased estimator. We establish rates for the convergence of NSID to the true derivative, encompassing the best available rates in the smooth setting. We present illustrative experiments confirming our analysis.
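To make the ITD/AID distinction concrete, here is a minimal NumPy sketch on a toy smooth contraction $T(x,\theta) = 0.5\tanh(x) + \theta$, chosen so that the Jacobians are analytic; it shows only the two baseline estimators, not the paper's NSID method or the nonsmooth setting.

```python
import numpy as np

# Toy smooth contraction in x (Lipschitz constant 0.5 < 1):
#   T(x, theta) = 0.5 * tanh(x) + theta,  with dT/dtheta = I.
def T(x, theta):
    return 0.5 * np.tanh(x) + theta

def dT_dx(x):
    return np.diag(0.5 * (1.0 - np.tanh(x) ** 2))  # Jacobian of T w.r.t. x

d = 3
theta = np.array([0.3, -0.1, 0.2])
x = np.zeros(d)
J_itd = np.zeros((d, d))                # running derivative dx_k / dtheta

# ITD: differentiate the unrolled iteration x_{k+1} = T(x_k, theta),
# propagating J_{k+1} = dT/dx(x_k) J_k + dT/dtheta(x_k).
for _ in range(100):
    J_itd = dT_dx(x) @ J_itd + np.eye(d)
    x = T(x, theta)

# AID: solve the implicit linear system at the (approximate) fixed point,
# (I - dT/dx(x*)) J = dT/dtheta(x*).
J_aid = np.linalg.solve(np.eye(d) - dT_dx(x), np.eye(d))

print(np.max(np.abs(J_itd - J_aid)))    # the two estimates agree closely
```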

Reconfigurable intelligent surface (RIS) and ambient backscatter communication (AmBC) have been envisioned as two promising technologies due to their high transmission reliability and energy efficiency. This paper investigates the secrecy performance of RIS-assisted AmBC networks. New closed-form and asymptotic expressions of the secrecy outage probability for RIS-AmBC networks are derived, taking into account both the imperfect successive interference cancellation (ipSIC) and perfect SIC (pSIC) cases. On top of these, the secrecy diversity order of the legitimate user is obtained in the high signal-to-noise ratio region; it equals \emph{zero} for ipSIC and is proportional to the number of RIS elements for pSIC. The secrecy throughput and energy efficiency are further surveyed to evaluate the secure effectiveness of RIS-AmBC networks. Numerical results are provided to verify the accuracy of the theoretical analyses and show that: i) the secrecy outage behavior of RIS-AmBC networks is superior to that of conventional AmBC networks; ii) due to the mutual interference between the direct and backscattering links, there is an optimal number of RIS elements that minimizes the system secrecy outage probability; and iii) secrecy throughput and energy efficiency are strongly influenced by the reflecting coefficient and the eavesdropper's wiretapping ability.
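As a toy illustration of the secrecy-outage notion, the Monte Carlo sketch below aligns the RIS phases to the legitimate user and measures how often the secrecy capacity falls below a target rate. It deliberately omits the paper's backscatter link and SIC modeling, and all channel and rate parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def secrecy_outage(N, snr_db, R_s=1.0, trials=50_000):
    """Monte Carlo secrecy outage probability for a toy RIS link with N
    elements and Rayleigh fading on both hops. The RIS phases are aligned
    to the legitimate user, so the eavesdropper sees randomly phased
    reflections. (No backscatter link or SIC modeling here.)"""
    snr = 10.0 ** (snr_db / 10.0)
    shape = (trials, N)
    h = (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)
    g = (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)
    e = (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)
    phase = np.exp(-1j * np.angle(h * g))        # align cascades for the user
    gamma_b = snr * np.abs(np.sum(h * g * phase, axis=1)) ** 2   # coherent
    gamma_e = snr * np.abs(np.sum(h * e * phase, axis=1)) ** 2   # incoherent
    c_s = np.maximum(np.log2(1 + gamma_b) - np.log2(1 + gamma_e), 0.0)
    return float(np.mean(c_s < R_s))

for N in (8, 16, 32):
    print(N, secrecy_outage(N, snr_db=0.0))      # outage falls as N grows
```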

Recent advances in the theory of Neural Operators (NOs) have enabled fast and accurate computation of the solutions to complex systems described by partial differential equations (PDEs). Despite their great success, current NO-based solutions face important challenges when dealing with spatio-temporal PDEs over long time scales. Specifically, the current theory of NOs does not present a systematic framework to perform data assimilation and efficiently correct the evolution of PDE solutions over time based on sparsely sampled noisy measurements. In this paper, we propose a learning-based state-space approach to compute the solution operators of infinite-dimensional semilinear PDEs. Exploiting the structure of semilinear PDEs and the theory of nonlinear observers in function spaces, we develop a flexible recursive method that allows for both prediction and data assimilation by combining prediction and correction operations. The proposed framework is capable of producing fast and accurate predictions over long time horizons, dealing with irregularly sampled noisy measurements to correct the solution, and benefiting from the decoupling between the spatial and temporal dynamics of this class of PDEs. We show through experiments on the Kuramoto-Sivashinsky, Navier-Stokes, and Korteweg-de Vries equations that the proposed model is robust to noise and can leverage arbitrary amounts of measurements to correct its prediction over a long time horizon with little computational overhead.
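Below is a minimal sketch of the predict-correct recursion, with a trivial local-averaging map standing in for the learned neural-operator surrogate and a Luenberger-style additive correction at sparsely observed points; the map, gain, and observation pattern are invented for illustration and are not the paper's architecture.

```python
import numpy as np

def assimilate(u0, predict, obs_idx, measurements, gain=0.5, horizon=20):
    """Toy predict-correct recursion in the spirit of a nonlinear observer.

    predict:      one-step solution map u_{t+1} = predict(u_t); in the paper
                  this role is played by a learned neural-operator surrogate.
    obs_idx:      spatial indices where point measurements arrive.
    measurements: dict mapping time step -> noisy measured values; steps
                  without an entry are pure prediction.
    """
    u, traj = u0, [u0]
    for t in range(horizon):
        u = predict(u)                             # prediction step
        if t in measurements:                      # correction step
            u = u.copy()
            u[obs_idx] += gain * (measurements[t] - u[obs_idx])
        traj.append(u)
    return traj

# Stand-in "PDE": diffusion by periodic local averaging on a 1-D grid.
def predict(u):
    return 0.25 * np.roll(u, 1) + 0.5 * u + 0.25 * np.roll(u, -1)

u0 = np.sin(np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False))
obs_idx = np.array([0, 16, 32, 48])
obs = {5: 0.9 * u0[obs_idx], 12: 0.8 * u0[obs_idx]}  # sparse, noisy data
traj = assimilate(u0, predict, obs_idx, obs)
print(len(traj), traj[-1].shape)                     # 21 states of shape (64,)
```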

The field of efficient Large Language Model (LLM) inference is rapidly evolving, presenting a unique blend of opportunities and challenges. Although the field has expanded and is vibrant, there has been no concise framework that analyzes the various methods of LLM inference to provide a clear understanding of this domain. Our survey stands out from traditional literature reviews by not only summarizing the current state of research but also by introducing a framework based on the roofline model for systematic analysis of LLM inference techniques. This framework identifies the bottlenecks when deploying LLMs on hardware devices and provides a clear understanding of practical problems, such as why LLMs are memory-bound, how much memory and computation they need, and how to choose the right hardware. We systematically collate the latest advancements in efficient LLM inference, covering crucial areas such as model compression (e.g., knowledge distillation and quantization), algorithm improvements (e.g., early exit and mixture-of-experts), and both hardware- and system-level enhancements. Our survey is distinctive in analyzing these methods with the roofline model, helping us understand their impact on memory access and computation. This approach not only showcases the current research landscape but also delivers valuable insights for practical implementation, positioning our work as a resource for researchers new to the field as well as for those seeking to deepen their understanding of efficient LLM deployment. The analysis tool, LLM-Viewer, is open-sourced.
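Here is a small sketch of the roofline reasoning such an analysis builds on: compare a kernel's arithmetic intensity to the hardware ridge point to decide whether it is memory- or compute-bound. The accelerator figures below are hypothetical, not taken from the survey.

```python
def roofline(flops, bytes_moved, peak_flops, peak_bw):
    """Roofline estimate: a kernel is memory-bound when its arithmetic
    intensity (FLOPs per byte) falls below the hardware ridge point."""
    intensity = flops / bytes_moved
    ridge = peak_flops / peak_bw
    bound = "memory-bound" if intensity < ridge else "compute-bound"
    time_s = max(flops / peak_flops, bytes_moved / peak_bw)
    return intensity, bound, time_s

# Single-token decode through one d x d weight matrix in fp16:
# ~2*d^2 FLOPs, but the entire d^2-element weight matrix must be read.
d = 4096
flops = 2 * d * d
bytes_moved = 2 * d * d        # 2 bytes per fp16 weight dominates traffic
# Hypothetical accelerator: 300 TFLOP/s peak compute, 1 TB/s memory bandwidth.
ai, bound, t = roofline(flops, bytes_moved, 300e12, 1e12)
print(f"intensity = {ai:.1f} FLOP/byte -> {bound}, est. {t * 1e6:.1f} us")
```

With one FLOP per byte against a ridge point of 300, single-token decode sits deep in the memory-bound regime, which is exactly why weight traffic, not compute, dominates LLM inference latency.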

Kernel ridge regression (KRR) is widely used for nonparametric regression over reproducing kernel Hilbert spaces. It offers powerful modeling capabilities but incurs significant computational costs, typically requiring $O(n^3)$ computational time and $O(n^2)$ storage space, where $n$ is the sample size. We introduce a novel framework of multi-layer kernel machines that approximate KRR by employing a multi-layer structure and random features, and study how the optimal number of random features and layer sizes can be chosen while still preserving the minimax optimality of the approximate KRR estimate. For various classes of random features, including those corresponding to Gaussian and Matérn kernels, we prove that multi-layer kernel machines can achieve $O(n^2\log^2 n)$ computational time and $O(n\log^2 n)$ storage space, and yield fast and minimax optimal approximations to the KRR estimate for nonparametric regression. Moreover, we construct uncertainty quantification for multi-layer kernel machines by using conformal prediction techniques with robust coverage properties. The analysis and theoretical predictions are supported by simulations and real data examples.
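For intuition, here is the single-layer random-feature building block (random Fourier features for a Gaussian kernel) that replaces KRR's $O(n^3)$ solve with a small linear system; the paper's multi-layer construction stacks such layers, which this sketch does not attempt. Feature count, lengthscale, and regularization below are placeholder choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def rff_ridge(X, y, num_features=200, lengthscale=1.0, lam=1e-2):
    """Random Fourier features for a Gaussian kernel: fit ridge regression
    in the m-dimensional feature space, O(n m^2) time instead of O(n^3)."""
    d = X.shape[1]
    W = rng.standard_normal((d, num_features)) / lengthscale
    b = rng.uniform(0.0, 2.0 * np.pi, num_features)

    def phi(Z):
        return np.sqrt(2.0 / num_features) * np.cos(Z @ W + b)

    P = phi(X)
    alpha = np.linalg.solve(P.T @ P + lam * np.eye(num_features), P.T @ y)
    return lambda Z: phi(Z) @ alpha

# Toy 1-D regression: the approximate KRR fit recovers sin(x) from noise.
X = rng.uniform(-3.0, 3.0, (500, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(500)
f = rff_ridge(X, y)
print(float(np.mean((f(X) - y) ** 2)))   # training MSE near the noise floor
```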

Finding an approximate second-order stationary point (SOSP) is a well-studied and fundamental problem in stochastic nonconvex optimization with many applications in machine learning. However, this problem is poorly understood in the presence of outliers, limiting the use of existing nonconvex algorithms in adversarial settings. In this paper, we study the problem of finding SOSPs in the strong contamination model, where a constant fraction of datapoints are arbitrarily corrupted. We introduce a general framework for efficiently finding an approximate SOSP with \emph{dimension-independent} accuracy guarantees, using $\widetilde{O}({D^2}/{\epsilon})$ samples where $D$ is the ambient dimension and $\epsilon$ is the fraction of corrupted datapoints. As a concrete application of our framework, we apply it to the problem of low rank matrix sensing, developing efficient and provably robust algorithms that can tolerate corruptions in both the sensing matrices and the measurements. In addition, we establish a Statistical Query lower bound providing evidence that the quadratic dependence on $D$ in the sample complexity is necessary for computationally efficient algorithms.
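To illustrate the strong contamination model (though not the paper's estimator, whose guarantees are dimension-independent), the sketch below corrupts an $\epsilon$ fraction of sample gradients and recovers the mean with a simple coordinate-wise trimmed mean; all data here are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

def trimmed_mean(grads, eps):
    """Coordinate-wise trimmed mean: drop the largest and smallest
    eps-fraction of values in each coordinate, then average the rest.
    A simple robust aggregator for the strong contamination model; the
    paper's estimator is more sophisticated and dimension-independent."""
    n = grads.shape[0]
    k = int(np.ceil(eps * n))
    s = np.sort(grads, axis=0)
    return s[k:n - k].mean(axis=0)

# n sample gradients in R^D, with an eps-fraction arbitrarily corrupted.
n, D, eps = 1000, 5, 0.1
grads = rng.standard_normal((n, D)) + np.array([1.0, 0.0, 0.0, 0.0, 0.0])
grads[: int(eps * n)] = 1e6            # adversarial outliers
print(trimmed_mean(grads, eps))        # close to the true mean (1,0,0,0,0)
print(grads.mean(axis=0))              # the naive mean is destroyed
```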

Causal Machine Learning (CausalML) is an umbrella term for machine learning methods that formalize the data-generation process as a structural causal model (SCM). This allows one to reason about the effects of changes to this process (i.e., interventions) and what would have happened in hindsight (i.e., counterfactuals). We categorize work in CausalML into five groups according to the problems they tackle: (1) causal supervised learning, (2) causal generative modeling, (3) causal explanations, (4) causal fairness, and (5) causal reinforcement learning. For each category, we systematically compare its methods and point out open problems. Further, we review modality-specific applications in computer vision, natural language processing, and graph representation learning. Finally, we provide an overview of causal benchmarks and a critical discussion of the state of this nascent field, including recommendations for future work.
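A minimal two-variable SCM sketch makes the observational/interventional distinction concrete; the structural equations below are invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two-variable SCM:  X := U_X,   Y := 2*X + U_Y   (structural equations).
def sample_scm(n, do_x=None):
    """Draw n samples; `do_x` implements the intervention do(X = do_x),
    which severs X's dependence on its exogenous noise U_X."""
    u_x = rng.standard_normal(n)
    u_y = rng.standard_normal(n)
    x = np.full(n, do_x) if do_x is not None else u_x
    y = 2.0 * x + u_y
    return x, y

_, y_obs = sample_scm(100_000)            # observational distribution
_, y_int = sample_scm(100_000, do_x=1.0)  # interventional distribution
print(y_obs.mean(), y_int.mean())         # ~0.0 vs ~2.0: effect of do(X=1)
```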

The existence of representative datasets is a prerequisite for many successful artificial intelligence and machine learning models. However, the subsequent application of these models often involves scenarios that are inadequately represented in the data used for training. The reasons for this are manifold and range from time and cost constraints to ethical considerations. As a consequence, the reliable use of these models, especially in safety-critical applications, remains a major challenge. Leveraging additional, already existing sources of knowledge is key to overcoming the limitations of purely data-driven approaches, and eventually to increasing the generalization capability of these models. Furthermore, predictions that conform with knowledge are crucial for making trustworthy and safe decisions, even in underrepresented scenarios. This work provides an overview of existing techniques and methods in the literature that combine data-based models with existing knowledge. The identified approaches are structured according to the categories of integration, extraction, and conformity. Special attention is given to applications in the field of autonomous driving.

Generative commonsense reasoning, which aims to empower machines to generate sentences that reason over a set of concepts, is a critical bottleneck for text generation. Even state-of-the-art pre-trained language generation models struggle at this task and often produce implausible and anomalous sentences. One reason is that they rarely consider incorporating a knowledge graph, which can provide rich relational information among commonsense concepts. To promote the ability of commonsense reasoning for text generation, we propose a novel knowledge-graph-augmented pre-trained language generation model, KG-BART, which encompasses the complex relations of concepts through the knowledge graph and produces more logical and natural sentences as output. Moreover, KG-BART can leverage graph attention to aggregate the rich concept semantics, which enhances the model's generalization on unseen concept sets. Experiments on the CommonGen benchmark dataset verify the effectiveness of our proposed approach in comparison with several strong pre-trained language generation models; in particular, KG-BART outperforms BART by 5.80 and 4.60 points in terms of BLEU-3 and BLEU-4, respectively. Moreover, we show that the context generated by our model can serve as background scenarios to benefit downstream commonsense QA tasks.
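As a sketch of the kind of relational aggregation involved, here is a generic single-head graph-attention layer over a toy concept graph; it is not KG-BART's exact architecture, and all dimensions and weights below are random placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def graph_attention(H, A, W, a):
    """Single-head graph-attention aggregation: each node attends over its
    neighbors (per adjacency A) and pools their projected features."""
    Z = H @ W                                    # project node features
    n = Z.shape[0]
    scores = np.full((n, n), -np.inf)            # -inf masks non-edges
    for i in range(n):
        for j in range(n):
            if A[i, j]:
                e = np.concatenate([Z[i], Z[j]]) @ a
                scores[i, j] = np.where(e > 0, e, 0.2 * e)  # LeakyReLU
    att = np.exp(scores - scores.max(axis=1, keepdims=True))
    att = att / att.sum(axis=1, keepdims=True)   # row-wise softmax
    return att @ Z                               # attention-weighted pooling

n, d, d_hidden = 4, 8, 16                        # 4 concept nodes
H = rng.standard_normal((n, d))                  # concept embeddings
A = np.eye(n, dtype=bool) | (rng.random((n, n)) < 0.5)  # toy concept graph
out = graph_attention(H, A, rng.standard_normal((d, d_hidden)),
                      rng.standard_normal(2 * d_hidden))
print(out.shape)                                 # (4, 16)
```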

The difficulty of deploying various deep learning (DL) models on diverse DL hardware has boosted the research and development of DL compilers in the community. Several DL compilers have been proposed by both industry and academia, such as TensorFlow XLA and TVM. These DL compilers take the DL models described in different DL frameworks as input and then generate optimized code for diverse DL hardware as output. However, none of the existing surveys has analyzed the unique design of DL compilers comprehensively. In this paper, we perform a comprehensive survey of existing DL compilers by dissecting their commonly adopted design in detail, with emphasis on the DL-oriented multi-level IRs and frontend/backend optimizations. Specifically, we provide a comprehensive comparison among existing DL compilers from various aspects. In addition, we present a detailed analysis of the multi-level IR design and compiler optimization techniques. Finally, we highlight several insights as potential research directions for DL compilers. This is the first survey paper focusing on the unique design of DL compilers, which we hope can pave the way for future research on DL compilers.
