
Binary multipliers have long been a staple component in digital circuitry, serving crucial roles in microprocessor design, digital signal processing units, and many other applications. This work presents a unique multiplier design that uses a reformed-array-logic approach to compute the product of two unsigned binary numbers. We employ a multiplexer and a barrel shifter to generate each partial product in a single clock cycle, speeding up the traditional array logic. In addition, we employ a combination of Carry Save Adders (CSA) and Ripple Carry Adders (RCA) to accumulate the partial products instead of using standalone RCAs, further speeding up the multiplication. Finally, we demonstrate our design by multiplying two 16-bit unsigned binary numbers in Cadence Virtuoso. The design is modular and can be scaled up or down to accommodate the multiplication of any n-bit unsigned numbers.
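As a rough illustration of the data flow described above (a behavioral sketch, not the transistor-level design), shifted copies of the multiplicand stand in for the multiplexer/barrel-shifter stage, and a 3:2 carry-save reduction followed by one final addition stands in for the CSA/RCA accumulation:

```python
def partial_products(a: int, b: int, n: int = 16):
    """One shifted copy of `a` per set bit of `b` (multiplexer + barrel shifter)."""
    return [(a << i) if (b >> i) & 1 else 0 for i in range(n)]

def carry_save_add(x: int, y: int, z: int):
    """3:2 carry-save compression: three operands in, sum and carry vectors out."""
    s = x ^ y ^ z                             # bitwise sum without carry propagation
    c = ((x & y) | (x & z) | (y & z)) << 1    # carries, shifted into the next column
    return s, c

def multiply(a: int, b: int, n: int = 16) -> int:
    ops = partial_products(a, b, n)
    while len(ops) > 2:                       # CSA tree: reduce operands 3 at a time
        x, y, z, *rest = ops
        s, c = carry_save_add(x, y, z)
        ops = rest + [s, c]
    return sum(ops)                           # final carry-propagate add (RCA in hardware)

assert multiply(51234, 60001, 16) == 51234 * 60001
```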

Related Content

Class imbalance and label noise are pervasive in large-scale datasets, yet much of machine learning research assumes well-labeled, balanced data, which rarely reflects real-world conditions. Existing approaches typically address either label noise or class imbalance in isolation, leading to suboptimal results when both issues coexist. In this work, we propose Conformal-in-the-Loop (CitL), a novel training framework that addresses both challenges with a conformal prediction-based approach. CitL evaluates sample uncertainty to adjust weights and prune unreliable examples, enhancing model resilience and accuracy with minimal computational cost. Our extensive experiments include a detailed analysis showing how CitL effectively emphasizes impactful data in noisy, imbalanced datasets. Our results show that CitL consistently boosts model performance, achieving up to a 6.1% increase in classification accuracy and a 5.0-point mIoU improvement in segmentation. Our code is publicly available: CitL.
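A heavily simplified sketch of conformal-style sample weighting in the spirit of the description above (not the authors' implementation): calibration nonconformity scores turn each training sample's score into a conformal p-value, which then down-weights or prunes the examples the model is most uncertain about.

```python
import numpy as np

def nonconformity(probs: np.ndarray, labels: np.ndarray) -> np.ndarray:
    """Score = 1 - predicted probability of the true class (higher = less conforming)."""
    return 1.0 - probs[np.arange(len(labels)), labels]

def conformal_weights(cal_scores, train_scores, prune_quantile=0.95):
    """p-value of each training score against the calibration distribution."""
    cal_scores = np.sort(cal_scores)
    # fraction of calibration scores at or above the training score (smoothed)
    p = 1.0 - np.searchsorted(cal_scores, train_scores, side="left") / (len(cal_scores) + 1)
    weights = p.copy()
    weights[train_scores > np.quantile(cal_scores, prune_quantile)] = 0.0  # prune
    return weights

# toy usage with hypothetical model outputs
rng = np.random.default_rng(0)
cal_probs, cal_y = rng.dirichlet(np.ones(3), 200), rng.integers(0, 3, 200)
tr_probs, tr_y = rng.dirichlet(np.ones(3), 500), rng.integers(0, 3, 500)
w = conformal_weights(nonconformity(cal_probs, cal_y), nonconformity(tr_probs, tr_y))
# `w` would multiply the per-sample loss in the next training epoch
```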

Robust estimation provides essential tools for analyzing data that contain outliers, ensuring that statistical models remain reliable even in the presence of some anomalous data. While robust methods have long been available in R, users of Python have lacked a comprehensive package that offers these methods in a cohesive framework. RobPy addresses this gap by offering a wide range of robust methods in Python, built upon established libraries including NumPy, SciPy, and scikit-learn. This package includes tools for robust preprocessing, univariate estimation, covariance matrices, regression, and principal component analysis, which are able to detect outliers and to mitigate their effect. In addition, RobPy provides specialized diagnostic plots for visualizing casewise and cellwise outliers. This paper presents the structure of the RobPy package, demonstrates its functionality through examples, and compares its features to existing implementations in other statistical software. By bringing robust methods to Python, RobPy enables more users to perform robust data analysis in a modern and versatile programming language.
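RobPy's own API is not reproduced here; as an illustration of the kind of robust estimation the package covers, the sketch below uses scikit-learn's MinCovDet (Minimum Covariance Determinant) to fit a robust covariance on contaminated data and flag casewise outliers via robust Mahalanobis distances.

```python
import numpy as np
from scipy.stats import chi2
from sklearn.covariance import MinCovDet

rng = np.random.default_rng(1)
clean = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=200)
outliers = rng.multivariate_normal([6, -6], np.eye(2), size=10)
X = np.vstack([clean, outliers])

mcd = MinCovDet(random_state=0).fit(X)
d2 = mcd.mahalanobis(X)                   # squared robust distances
cutoff = chi2.ppf(0.975, df=X.shape[1])   # usual chi-square cutoff
print("flagged outliers:", np.where(d2 > cutoff)[0])
```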

Modern optimizers such as AdamW, equipped with momentum and an adaptive learning rate, are designed to escape local minima and explore the vast parameter space. This exploration is beneficial for finding good loss basins when training from scratch, but it is not necessarily ideal when resuming from a powerful foundation model, because it can lead to large deviations from the pre-trained initialization and, consequently, worse robustness and generalization. At the same time, strong regularization on all parameters can lead to under-fitting. We hypothesize that selectively regularizing the parameter space is the key to fitting new data while retaining the pre-trained knowledge. This paper proposes a new weight decay technique, Selective Projection Decay (SPD), that selectively imposes a strong penalty on certain layers while allowing others to change freely. Intuitively, SPD expands and contracts the parameter search space for layers with consistent and inconsistent loss reduction, respectively. Experimentally, when equipped with SPD, Adam consistently provides better in-distribution generalization and out-of-distribution robustness on multiple popular vision and language benchmarks. Code is available at \url{//github.com/GT-RIPL/Selective-Projection-Decay.git}
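A simplified sketch of selective decay toward the pre-trained initialization (not the exact SPD update from the paper): layers on a chosen list receive a strong penalty pulling them back to their pre-trained weights, while the remaining layers are left unregularized. The layer names and strength below are illustrative assumptions.

```python
import torch

def selective_decay_loss(model, pretrained_state, penalized_layers, strength=1e-2):
    """Quadratic pull toward the pre-trained weights for selected layers only."""
    penalty = 0.0
    for name, param in model.named_parameters():
        if any(name.startswith(layer) for layer in penalized_layers):
            anchor = pretrained_state[name].to(param.device)
            penalty = penalty + (param - anchor).pow(2).sum()
    return strength * penalty

# usage inside a fine-tuning step (model, optimizer, and `task_loss` assumed to exist):
#   loss = task_loss + selective_decay_loss(model, pretrained_state,
#                                           penalized_layers=["encoder.layer.0"])
#   loss.backward(); optimizer.step()
```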

Error detection and correction are essential for ensuring robust and reliable operation in modern communication systems, particularly in complex transmission environments. However, these topics have largely been overlooked in semantic communication (SemCom), which focuses on transmitting meaning rather than symbols and thereby achieves significant improvements in communication efficiency. Despite these advantages, semantic errors -- stemming from discrepancies between transmitted and received meanings -- present a major challenge to system reliability. This paper addresses this gap by proposing a comprehensive framework for detecting and correcting semantic errors in SemCom systems. We formally define semantic errors together with their detection and correction mechanisms, and identify key sources of semantic errors. To address these challenges, we develop a Gaussian process (GP)-based method for latent space monitoring to detect errors, alongside a human-in-the-loop reinforcement learning (HITL-RL) approach that optimizes semantic model configurations using user feedback. Experimental results validate the effectiveness of the proposed methods in mitigating semantic errors under various conditions, including adversarial attacks, input feature changes, physical channel variations, and user preference shifts. This work lays the foundation for more reliable and adaptive SemCom systems with robust semantic error management techniques.
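A hedged sketch of GP-based latent monitoring in the spirit of the description above (not the paper's exact monitor): fit a Gaussian process on nominal latent behaviour, then flag test samples whose observed value falls outside the GP's predictive interval as candidate semantic errors. The 1-D latent projection and semantic-consistency score are toy stand-ins.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
# toy stand-ins: `z` = a 1-D projection of the transmitted latent,
# `score` = a semantic-consistency score measured at the receiver
z_train = rng.uniform(-3, 3, size=(100, 1))
score_train = np.sin(z_train).ravel() + rng.normal(0, 0.05, 100)

gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(z_train, score_train)

def is_semantic_error(z_new, observed_score, k=3.0):
    mean, std = gp.predict(np.atleast_2d(z_new), return_std=True)
    return abs(observed_score - mean[0]) > k * std[0]   # outside the k-sigma band

print(is_semantic_error([0.5], np.sin(0.5)))         # nominal -> False
print(is_semantic_error([0.5], np.sin(0.5) + 2.0))   # large deviation -> True
```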

The emergence of distinct local mark behaviours is becoming increasingly common in applications of spatial marked point processes. This highlights a limitation of existing global mark correlation functions: because distinct mark behaviours can dominate one another, global summaries may fail to identify the true patterns of mark associations and variations among points, giving an incomplete picture of mark association. In this paper, we introduce a family of local indicators of mark association (LIMA) functions for spatial marked point processes. These functions are defined on general state spaces and can accommodate marks that are either real-valued or function-valued. Unlike global mark correlation functions, which are often distorted by the presence of distinct mark behaviours, LIMA functions reliably identify all types of mark associations and variations among points. Additionally, they accurately determine the interpoint distances at which individual points show significant mark associations. Through simulation studies featuring various scenarios, and four real applications in forestry, criminology, and urban mobility, we study spatial marked point processes in $\mathbb{R}^2$ and on linear networks with either real-valued or function-valued marks, demonstrating that LIMA functions significantly outperform existing global mark correlation functions.
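For intuition only, a naive local analogue of a mark correlation function is sketched below; it is not the LIMA definition from the paper. For a focal point i and a distance r, it averages the products of i's mark with the marks of points whose distance from i falls in a small band around r, normalized by the squared mean mark.

```python
import numpy as np

def local_mark_association(points, marks, i, r, bandwidth=0.05):
    """Crude local mark-product summary at distance r around focal point i."""
    d = np.linalg.norm(points - points[i], axis=1)
    in_band = (np.abs(d - r) < bandwidth) & (d > 0)
    if not in_band.any():
        return np.nan
    return np.mean(marks[i] * marks[in_band]) / np.mean(marks) ** 2

rng = np.random.default_rng(2)
pts = rng.uniform(0, 1, size=(300, 2))    # point pattern on the unit square
mks = rng.gamma(2.0, 1.0, size=300)       # positive real-valued marks
print(local_mark_association(pts, mks, i=0, r=0.1))
```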

The emergence of intelligent applications has fostered the development of a task-oriented communication paradigm, where a comprehensive, universal, and practical metric is crucial for unleashing the potential of this paradigm. To this end, we introduce an innovative metric, the Task-oriented Age of Information (TAoI), to measure whether the content of information is relevant to the system task, thereby assisting the system in efficiently completing designated tasks. Also, we study the TAoI in a remote monitoring system, whose task is to identify target images and transmit them for subsequent analysis. We formulate the dynamic transmission problem as a Semi-Markov Decision Process (SMDP) and transform it into an equivalent Markov Decision Process (MDP) to minimize TAoI and find the optimal transmission policy. Furthermore, we demonstrate that the optimal strategy is a threshold-based policy regarding TAoI and propose a relative value iteration algorithm based on the threshold structure to obtain the optimal transmission policy. Finally, simulation results show the superior performance of the optimal transmission policy compared to the baseline policies.
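As a generic illustration of the solution approach (relative value iteration for an average-cost MDP), the sketch below runs on a toy age-like chain; the paper's SMDP-to-MDP model and TAoI cost are not reproduced here. The "transmit" action resets the age at a fixed cost, and "wait" lets it grow, so the learned policy exhibits the threshold structure mentioned above.

```python
import numpy as np

N, WAIT, TX = 20, 0, 1
cost = lambda s, a: s + (2.0 if a == TX else 0.0)   # age penalty + transmission cost

def step_dist(s, a):
    """Next-state distribution over {0,...,N-1} for each action."""
    p = np.zeros(N)
    if a == TX:
        p[0] = 1.0                       # successful transmission resets the age
    else:
        p[min(s + 1, N - 1)] = 1.0       # age keeps increasing while waiting
    return p

h = np.zeros(N)                          # relative value function
for _ in range(2000):
    q = np.array([[cost(s, a) + step_dist(s, a) @ h for a in (WAIT, TX)] for s in range(N)])
    h_new = q.min(axis=1)
    h = h_new - h_new[0]                 # subtract the reference-state value (RVI step)

policy = q.argmin(axis=1)
print("threshold policy:", policy)       # expect WAIT below some age, TX above it
```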

The proliferation of malware, particularly through the use of packing, presents a significant challenge to static analysis and signature-based malware detection techniques. Applying packing to the original executable code makes extracting meaningful features and signatures difficult. To deal with the increasing amount of malware in the wild, researchers and anti-malware companies have started harnessing machine learning capabilities, with very promising results. However, little is known about the effects of packing on static machine learning-based malware detection and classification systems. This work addresses this gap by investigating the impact of packing on the performance of static machine learning-based models used for malware detection and classification, with a particular focus on those using visualisation techniques. To this end, we present a comprehensive analysis of various packing techniques and their effects on the performance of machine learning-based detectors and classifiers. Our findings highlight the limitations of current static detection and classification systems and underscore the need to be proactive in counteracting the evolving tactics of malware authors.
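One common visualisation technique in this line of work renders a binary's raw bytes as a grayscale image (as in Nataraj et al.); the paper's exact pipeline may differ. Packing changes the byte distribution, and hence the "texture" a classifier learns from. The file path below is a hypothetical input.

```python
import numpy as np
from PIL import Image

def bytes_to_image(path: str, width: int = 256) -> Image.Image:
    """Reshape a file's bytes into a width-column grayscale image."""
    data = np.frombuffer(open(path, "rb").read(), dtype=np.uint8)
    rows = len(data) // width
    return Image.fromarray(data[: rows * width].reshape(rows, width), mode="L")

# img = bytes_to_image("sample.exe")   # hypothetical input file
# img.resize((64, 64)) would then be flattened or fed to a CNN-based classifier
```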

The problem of attacks on new-generation network infrastructures is becoming increasingly relevant, given the widening of the attack surface of these networks resulting from the greater number of devices that will access them in the future (sensors, actuators, vehicles, household appliances, etc.). Approaches to the design of intrusion detection systems must evolve beyond the traditional concept of perimeter control and build on new paradigms that exploit the typical characteristics of future 5G and 6G networks, such as in-network computing and intelligent programmable data planes. The aim of this research is to propose a disruptive paradigm in which devices in a typical data plane of a future programmable network have classification and anomaly detection capabilities and cooperate in a fully distributed fashion to act as an ML-enabled Active Intrusion Detection System "embedded" into the network. The reported proof-of-concept experiments demonstrate that the proposed paradigm works effectively and with a good level of precision while occupying overall fewer CPU and RAM resources on the devices involved.
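As a conceptual sketch only (not the paper's data-plane implementation), a small decision tree over per-flow counters is the kind of lightweight model that can be compiled into match-action rules on a programmable switch, letting each device flag anomalous flows locally. The flow features and traffic distributions below are hypothetical.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(3)
# hypothetical per-flow features: [packets/s, mean packet size, SYN ratio]
benign = np.column_stack([rng.normal(50, 10, 500), rng.normal(800, 100, 500), rng.uniform(0, 0.2, 500)])
attack = np.column_stack([rng.normal(900, 200, 100), rng.normal(80, 20, 100), rng.uniform(0.6, 1, 100)])
X = np.vstack([benign, attack])
y = np.array([0] * 500 + [1] * 100)

tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
print(export_text(tree, feature_names=["pps", "pkt_size", "syn_ratio"]))
# each root-to-leaf path maps to one match-action rule (range matches on features -> verdict)
```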

Self-supervised learning (SSL) models have become crucial in speech processing, with recent advancements concentrating on developing architectures that capture representations across multiple timescales. The primary goal of these multi-scale architectures is to exploit the hierarchical nature of speech, where lower-resolution components aim to capture representations that align with increasingly abstract concepts (e.g., from phones to words to sentences). Although multi-scale approaches have demonstrated some improvements over single-scale models, the precise reasons for these enhancements remain poorly supported empirically. In this study, we present an initial analysis of layer-wise representations in multi-scale architectures, focusing on Canonical Correlation Analysis (CCA) and Mutual Information (MI). We apply this analysis to Multi-Resolution HuBERT (MR-HuBERT) and find that (1) the improved performance on SUPERB tasks is primarily due to the auxiliary low-resolution loss rather than the downsampling itself, and (2) downsampling to lower resolutions neither improves downstream performance nor correlates with higher-level information (e.g., words), though it does improve computational efficiency. These findings challenge assumptions about the multi-scale nature of MR-HuBERT and motivate the importance of disentangling computational efficiency from learning better representations.
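A sketch of the kind of layer-wise CCA analysis mentioned above (not the exact MR-HuBERT probing setup): the mean canonical correlation between two layers' frame-level activations serves as a similarity score. The activation matrices below are random stand-ins.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

def cca_similarity(acts_a: np.ndarray, acts_b: np.ndarray, n_components: int = 10) -> float:
    """acts_*: (frames, features) activations from two layers on the same audio."""
    cca = CCA(n_components=n_components, max_iter=1000)
    a_c, b_c = cca.fit_transform(acts_a, acts_b)
    corrs = [np.corrcoef(a_c[:, k], b_c[:, k])[0, 1] for k in range(n_components)]
    return float(np.mean(corrs))

# toy stand-ins for two layers' activations over 2000 frames
rng = np.random.default_rng(4)
layer_lo, layer_hi = rng.normal(size=(2000, 64)), rng.normal(size=(2000, 64))
print(cca_similarity(layer_lo, layer_hi))
```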

We address the task of automatically scoring candidate competency from textual features, namely automatic speech recognition (ASR) transcriptions of asynchronous video job interviews (AVIs). The key challenge is to model the dependency relation between questions and answers and to capture semantic-level interaction within each question-answer (QA) pair. Most recent studies on AVI focus on representing questions and answers better, but ignore the dependency information and interaction between them, which are critical for QA evaluation. In this work, we propose a Hierarchical Reasoning Graph Neural Network (HRGNN) for the automatic assessment of question-answer pairs. Specifically, we construct a sentence-level relational graph neural network to capture the dependency information of sentences within and between the question and the answer. On top of these graphs, we employ a semantic-level reasoning graph attention network to model the interaction states of the current QA session. Finally, we propose a gated recurrent unit (GRU) encoder to represent the temporal sequence of question-answer pairs for the final prediction. Empirical results on CHNAT (a real-world dataset) validate that our proposed model significantly outperforms text-matching-based benchmark models. Ablation studies and experiments with 10 random seeds further demonstrate the effectiveness and stability of our models.
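A minimal sketch of the overall pipeline shape (attention over sentence representations within each QA pair, then a recurrent encoder over the sequence of pairs); it swaps plain multi-head attention for the relational graph components and uses illustrative dimensions, so it is not the paper's HRGNN.

```python
import torch
import torch.nn as nn

class QAPairScorer(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.gru = nn.GRU(dim, dim, batch_first=True)
        self.head = nn.Linear(dim, 1)

    def forward(self, sent_emb):
        # sent_emb: (num_pairs, sentences_per_pair, dim) sentence embeddings
        ctx, _ = self.attn(sent_emb, sent_emb, sent_emb)  # sentence-level interaction
        pair_repr = ctx.mean(dim=1)                       # pooled representation per QA pair
        out, _ = self.gru(pair_repr.unsqueeze(0))         # temporal encoding over QA pairs
        return self.head(out[:, -1]).squeeze()            # single competency score

scorer = QAPairScorer()
print(scorer(torch.randn(6, 12, 128)))                    # 6 QA pairs, 12 sentences each
```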
