
Randomizing the mapping of addresses to cache entries has proven to be an effective technique for hardening caches against contention-based attacks like Prime+Probe. While attacks and defenses are still evolving, it is clear that randomized caches significantly increase the security against such attacks. However, one aspect that is missing from most analyses of randomized cache architectures is the choice of the replacement policy. Often, only the random and LRU replacement policies are investigated. However, LRU is not applicable to randomized caches due to its immense hardware overhead, while the random replacement policy is not ideal from a performance and security perspective. In this paper, we explore replacement policies for randomized caches. We develop two new replacement policies and evaluate a total of five replacement policies with respect to their security against Prime+Prune+Probe attackers. Moreover, we analyze the effect of the replacement policy on the system's performance and quantify the introduced hardware overhead. We implement randomized caches with configurable replacement policies in software and hardware using a custom cache simulator, gem5, and the CV32E40P RISC-V core. Among other results, we show that constructing eviction sets against our new policy, VARP-64, requires over 25 times more cache accesses than against the random replacement policy, while VARP-64 also improves overall performance.
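To make the setting concrete, the following is a minimal sketch (not the paper's VARP-64 design) of a randomized set-associative cache in which the set index is derived from a keyed hash of the address and the replacement policy is pluggable; the class name, parameters, and the FIFO stand-in policy are illustrative assumptions.

```python
import hashlib
import random

class RandomizedCache:
    """Minimal sketch of a set-associative cache whose set index is derived
    from a keyed hash of the address, with a pluggable replacement policy."""

    def __init__(self, num_sets=64, ways=4, key=b"secret", policy="random"):
        self.num_sets, self.ways, self.key, self.policy = num_sets, ways, key, policy
        self.sets = [[] for _ in range(num_sets)]   # each set holds up to `ways` tags

    def _index(self, addr):
        # The keyed hash hides the address-to-set mapping from the attacker.
        digest = hashlib.sha256(self.key + addr.to_bytes(8, "little")).digest()
        return int.from_bytes(digest[:4], "little") % self.num_sets

    def access(self, addr):
        """Return True on a hit, False on a miss (with fill and eviction)."""
        s = self.sets[self._index(addr)]
        if addr in s:
            return True
        if len(s) >= self.ways:
            victim = random.randrange(len(s)) if self.policy == "random" else 0
            s.pop(victim)                           # index 0 = FIFO stand-in policy
        s.append(addr)
        return False

cache = RandomizedCache()
print([cache.access(a) for a in (0x1000, 0x2000, 0x1000)])  # [False, False, True]
```

An attacker building an eviction set must find addresses that collide under the hidden index function, and the replacement policy determines how many colliding accesses are needed to reliably evict a victim line.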

Related Content

Membership inference attacks (MIAs) can reveal whether a particular data point was part of the training dataset, potentially exposing sensitive information about individuals. This article provides theoretical guarantees by exploring the fundamental statistical limitations associated with MIAs on machine learning models. More precisely, we first derive the statistical quantity that governs the effectiveness and success of such attacks. We then deduce that, in a very general regression setting with overfitting algorithms, attacks may have a high probability of success. Finally, we investigate several situations for which we provide bounds on this quantity of interest. Our results enable us to deduce the accuracy of potential attacks based on the number of samples and other structural parameters of learning models. In certain instances, these parameters can be directly estimated from the dataset.
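As a concrete (and deliberately simple) illustration of how overfitting enables such attacks, here is a sketch of a standard loss-threshold membership inference baseline; it is not the statistical quantity derived in the article, and the loss distributions and threshold below are hypothetical.

```python
import numpy as np

def loss_threshold_mia(losses_members, losses_nonmembers, threshold):
    """Baseline membership inference attack: predict 'member' when the
    per-sample loss falls below a threshold. Returns the attack accuracy
    on the given (labelled) samples."""
    correct_members = (losses_members < threshold).sum()       # low loss -> member
    correct_nonmembers = (losses_nonmembers >= threshold).sum()  # high loss -> non-member
    return (correct_members + correct_nonmembers) / (len(losses_members) + len(losses_nonmembers))

# Hypothetical illustration: an overfit model yields much lower loss on its
# training points, so even this crude attack succeeds well above chance.
rng = np.random.default_rng(0)
losses_members = rng.exponential(scale=0.1, size=1000)      # small training losses
losses_nonmembers = rng.exponential(scale=1.0, size=1000)   # larger test losses
print(loss_threshold_mia(losses_members, losses_nonmembers, threshold=0.3))
```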

Distributed deep neural networks (DNNs) have been shown to reduce the computational burden of mobile devices and decrease the end-to-end inference latency in edge computing scenarios. While distributed DNNs have been studied, to the best of our knowledge their resilience to adversarial action remains an open problem. In this paper, we fill this research gap by rigorously analyzing the robustness of distributed DNNs against adversarial action. We cast this problem in the context of information theory and introduce two new measurements for distortion and robustness. Our theoretical findings indicate that (i) assuming the same level of information distortion, latent features are always more robust than input representations; and (ii) adversarial robustness is jointly determined by the feature dimension and the generalization capability of the DNN. To test our theoretical findings, we perform an extensive experimental analysis considering 6 different DNN architectures, 6 different approaches to distributing DNNs, and 10 different adversarial attacks on the ImageNet-1K dataset. Our experimental results support our theoretical findings by showing that the compressed latent representations reduce the success rate of adversarial attacks by 88% in the best case and by 57% on average compared to attacks on the input space.

Missing data is a pernicious problem in epidemiologic research. Research on the validity of complete case analysis for missing data has typically focused on estimating the average treatment effect (ATE) in the whole population. However, other target populations like the treated (ATT) or external targets can be of substantive interest. In such cases, whether missing covariate data occurs within or outside the target population may impact the validity of complete case analysis. We sought to assess bias in complete case analysis when covariate data is missing outside the target (e.g., missing covariate data among the untreated when estimating the ATT). We simulated a study of the effect of a binary treatment X on a binary outcome Y in the presence of 3 confounders C1-C3 that modified the risk difference (RD). We induced missingness in C1 only among the untreated under 4 scenarios: completely randomly (similar to MCAR); randomly based on C2 and C3 (similar to MAR); randomly based on C1 (similar to MNAR); or randomly based on Y (similar to MAR). We estimated the ATE and ATT using weighting and averaged results across the replicates. We conducted a parallel simulation transporting trial results to a target population in the presence of missing covariate data in the trial. In the complete case analysis, the estimated ATE was unbiased only when C1 was MCAR among the untreated. The estimated ATT, on the other hand, was unbiased in all scenarios except when Y caused missingness. The parallel simulation of generalizing and transporting trial results showed similar bias patterns. If missing covariate data is only present outside the target population, complete case analysis is unbiased except when missingness is associated with the outcome.
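To make the simulation protocol tangible, here is a minimal sketch of a single replicate under the MCAR-among-the-untreated scenario, using odds weighting to estimate the ATT from the complete cases; all coefficients, sample sizes, and missingness rates are illustrative assumptions rather than the values used in the study.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 100_000

# One simulation replicate (all coefficients and rates are illustrative).
C = rng.binomial(1, 0.5, size=(n, 3))                      # confounders C1-C3
p_x = 1 / (1 + np.exp(-(-0.5 + 0.8 * C.sum(axis=1))))      # treatment model
X = rng.binomial(1, p_x)
p_y = 0.1 + 0.1 * X + 0.05 * C[:, 0] + 0.05 * X * C[:, 0]  # risk with effect modification by C1
Y = rng.binomial(1, p_y)

# Induce missingness in C1 only among the untreated, completely at random (MCAR).
missing = (X == 0) & (rng.random(n) < 0.3)
cc = ~missing                                              # complete-case indicator

# Complete-case ATT via odds weighting: treated weight 1, untreated weight PS/(1-PS).
design = sm.add_constant(C[cc])
ps = sm.Logit(X[cc], design).fit(disp=0).predict(design)
treated = X[cc] == 1
w_untreated = ps[~treated] / (1 - ps[~treated])
att_rd = Y[cc][treated].mean() - np.average(Y[cc][~treated], weights=w_untreated)
print("complete-case ATT risk difference:", round(att_rd, 3))
```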

Shape-morphing devices, a crucial branch of soft robotics, hold significant application value in areas like human-machine interfaces, biomimetic robotics, and tools for interacting with biological systems. To achieve three-dimensional (3D) programmable shape morphing (PSM), the deployment of array-based actuators is essential. However, a critical knowledge gap impeding the development of 3D PSM is the challenge of controlling the complex systems formed by these soft actuator arrays. This study introduces a novel approach that, for the first time, represents the configuration of shape-morphing devices using point cloud data and employs deep learning to map these configurations to control inputs. We propose Shape Morphing Net (SMNet), a method that realizes the regression from point cloud data to high-dimensional continuous vectors. Applied to previous 2D PSM actuator arrays, SMNet significantly enhances control precision from 82.23% to 97.68%. Further, we extend its application to 3D PSM devices with three different actuator mechanisms, demonstrating the universal applicability of SMNet to the control of 3D shape-morphing technologies. In our demonstrations, we confirm the efficacy of inverse control, where 3D PSM devices successfully replicate target shapes obtained either through 3D scanning of physical objects or via 3D modeling software. The results show that, within the deformable range of 3D PSM devices, accurate reproduction of the desired shapes is achievable. The findings of this research represent a substantial advancement in soft robotics, particularly for applications demanding intricate 3D shape transformations, and establish a foundational framework for future developments in the field.
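The following is a minimal sketch of the general idea of regressing from a point cloud to a continuous control vector; it is a generic PointNet-style model, not the authors' SMNet, and the layer sizes, actuator count, and input resolution are hypothetical.

```python
import torch
import torch.nn as nn

class PointCloudRegressor(nn.Module):
    """Sketch of a PointNet-style regressor: maps a point cloud describing
    the current shape to a high-dimensional continuous control vector."""

    def __init__(self, n_actuators=16):
        super().__init__()
        self.point_mlp = nn.Sequential(          # shared per-point features
            nn.Conv1d(3, 64, 1), nn.ReLU(),
            nn.Conv1d(64, 256, 1), nn.ReLU(),
        )
        self.head = nn.Sequential(               # global feature -> control inputs
            nn.Linear(256, 128), nn.ReLU(),
            nn.Linear(128, n_actuators),
        )

    def forward(self, points):                   # points: (batch, 3, N)
        feats = self.point_mlp(points)           # (batch, 256, N)
        global_feat = feats.max(dim=2).values    # permutation-invariant pooling
        return self.head(global_feat)            # (batch, n_actuators)

# Hypothetical usage: 1024 sampled surface points -> 16 actuator inputs.
controls = PointCloudRegressor()(torch.randn(2, 3, 1024))
print(controls.shape)  # torch.Size([2, 16])
```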

We study the geometry of linear networks with one-dimensional convolutional layers. The function spaces of these networks can be identified with semi-algebraic families of polynomials admitting sparse factorizations. We analyze the impact of the network's architecture on the function space's dimension, boundary, and singular points. We also describe the critical points of the network's parameterization map. Furthermore, we study the optimization problem of training a network with the squared error loss. We prove that for architectures where all strides are larger than one and generic data, the non-zero critical points of that optimization problem are smooth interior points of the function space. This property is known to be false for dense linear networks and linear convolutional networks with stride one.
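As a minimal worked example of the polynomial viewpoint (assuming the usual identification of a 1D filter $(a_0, a_1)$ with its generating polynomial $a_0 + a_1 x$), composing two stride-1 layers multiplies the corresponding polynomials:

```latex
% Composing two stride-1 convolutional layers with filters (a_0, a_1) and
% (b_0, b_1) multiplies their generating polynomials, so the end-to-end
% filter corresponds to the product
\[
  (a_0 + a_1 x)(b_0 + b_1 x) = a_0 b_0 + (a_0 b_1 + a_1 b_0)\,x + a_1 b_1\,x^2 .
\]
% With a stride s > 1 in the first layer, the second factor becomes a
% polynomial in x^s, giving the sparse factorizations referred to above.
```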

We present a numerical discretisation of the coupled moment systems, previously introduced in Dahm and Helzel, which approximate the kinetic multi-scale model by Helzel and Tzavaras for sedimentation in suspensions of rod-like particles for a two-dimensional flow problem and a shear flow problem. We use a splitting ansatz which, during each time step, separately computes the update of the macroscopic flow equation and of the moment system. The proof of the hyperbolicity of the moment systems in Dahm and Helzel suggests solving the moment systems with standard numerical methods for hyperbolic problems, like LeVeque's Wave Propagation Algorithm. The number of moment equations used in the hyperbolic moment system can be adapted to locally varying flow features. An error analysis is proposed, which compares the approximation with $2N+1$ moment equations to an approximation with $2N+3$ moment equations. This analysis suggests an error indicator which can be computed from the numerical approximation of the moment system with $2N+1$ moment equations. In order to use moment approximations with a different number of moment equations in different parts of the computational domain, we consider an interface coupling of moment systems with different resolution. Finally, we derive a conservative high-resolution Wave Propagation Algorithm for solving moment systems with different numbers of moment equations.
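To illustrate the structure of the splitting ansatz in code (the actual solvers, e.g. the wave propagation scheme and the adaptive choice of the number of moment equations, are not reproduced here), a minimal sketch with hypothetical callables might look as follows.

```python
def splitting_step(flow_state, moment_state, dt, advance_flow, advance_moments):
    """Sketch of the splitting ansatz: within one time step, the macroscopic
    flow equation and the hyperbolic moment system are updated separately.
    The solvers are passed in as callables; their internals are assumed."""
    flow_state = advance_flow(flow_state, moment_state, dt)       # flow update
    moment_state = advance_moments(moment_state, flow_state, dt)  # moment update
    return flow_state, moment_state
```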

Hyperspectral imaging produces massive amounts of data, leading to significant challenges for data processing, storage, and transmission. Compressive sensing has been used in the field of hyperspectral imaging as a technique to compress this large amount of data. This work addresses the recovery of hyperspectral images compressed by a factor of 2.5. A comparative study of the accuracy and performance of the convex FISTA/ADMM recovery algorithms and the greedy gOMP/BIHT/CoSaMP recovery algorithms is presented. The results indicate that all algorithms successfully recover the compressed data, with gOMP achieving superior accuracy and faster recovery than the other algorithms, at the expense of a strong dependence on the (unknown) sparsity level of the data.
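For illustration, here is a minimal sketch of greedy sparse recovery in the spirit of the methods compared above, using plain Orthogonal Matching Pursuit rather than gOMP/BIHT/CoSaMP, with a hypothetical 2.5x-compressed toy signal standing in for hyperspectral data.

```python
import numpy as np

def omp(A, y, sparsity):
    """Orthogonal Matching Pursuit: recover a `sparsity`-sparse vector x
    from compressed measurements y = A x."""
    residual = y.copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(sparsity):
        # Greedily pick the column most correlated with the current residual.
        idx = int(np.argmax(np.abs(A.T @ residual)))
        if idx not in support:
            support.append(idx)
        # Least-squares fit on the selected support, then update the residual.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x

# Hypothetical 2.5x compression: 100 measurements of a 250-dimensional,
# 10-sparse vector standing in for one compressed hyperspectral slice.
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 250)) / np.sqrt(100)
x_true = np.zeros(250)
x_true[rng.choice(250, size=10, replace=False)] = rng.standard_normal(10)
x_hat = omp(A, A @ x_true, sparsity=10)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```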

Mobile edge computing (MEC) is a powerful means of alleviating the heavy computing load in integrated sensing and communication (ISAC) systems. In this paper, we investigate joint beamforming and offloading design in a three-tier integrated sensing, communication, and computation (ISCC) framework comprising one cloud server, multiple mobile edge servers, and multiple terminals. While executing sensing tasks, the user terminals can optionally offload sensing data to either the MEC servers or the cloud server. To minimize the execution latency, we jointly optimize the transmit beamforming matrices and offloading decision variables under a sensing-performance constraint. An alternating optimization algorithm based on multidimensional fractional programming is proposed to tackle the non-convex problem. Simulation results demonstrate the superiority of the proposed mechanism in terms of convergence and task execution latency reduction, compared with the state-of-the-art two-tier ISCC framework.

Deep neural networks have revolutionized many machine learning tasks in power systems, ranging from pattern recognition to signal processing. The data in these tasks are typically represented in Euclidean domains. Nevertheless, there is an increasing number of applications in power systems where data are collected from non-Euclidean domains and represented as graph-structured data with high-dimensional features and interdependency among nodes. The complexity of graph-structured data has brought significant challenges to existing deep neural networks defined in Euclidean domains. Recently, many studies on extending deep neural networks to graph-structured data in power systems have emerged. In this paper, a comprehensive overview of graph neural networks (GNNs) in power systems is presented. Specifically, several classical paradigms of GNN structures (e.g., graph convolutional networks, graph recurrent neural networks, graph attention networks, graph generative networks, spatial-temporal graph convolutional networks, and hybrid forms of GNNs) are summarized, and key applications in power systems such as fault diagnosis, power prediction, power flow calculation, and data generation are reviewed in detail. Furthermore, the main issues and research trends regarding the application of GNNs in power systems are discussed.
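As a concrete reference point for the simplest of these paradigms, here is a minimal sketch of a single graph convolutional layer in the Kipf-Welling style applied to a hypothetical 5-bus grid graph; the adjacency, feature sizes, and use of dense tensors are illustrative assumptions.

```python
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    """One graph convolutional layer: H' = relu(A_hat @ H @ W), where A_hat
    is the symmetrically normalized adjacency with self-loops."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, adj, features):
        a = adj + torch.eye(adj.shape[0])                 # add self-loops
        d_inv_sqrt = torch.diag(a.sum(dim=1).pow(-0.5))   # D^{-1/2}
        a_hat = d_inv_sqrt @ a @ d_inv_sqrt               # normalized adjacency
        return torch.relu(a_hat @ self.linear(features))

# Hypothetical 5-bus ring graph with 3 features per bus (e.g. P, Q, |V|).
adj = torch.tensor([[0., 1, 0, 0, 1],
                    [1., 0, 1, 0, 0],
                    [0., 1, 0, 1, 0],
                    [0., 0, 1, 0, 1],
                    [1., 0, 0, 1, 0]])
out = GCNLayer(3, 8)(adj, torch.randn(5, 3))
print(out.shape)  # torch.Size([5, 8])
```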

With the rapid increase of large-scale, real-world datasets, it becomes critical to address the problem of long-tailed data distribution (i.e., a few classes account for most of the data, while most classes are under-represented). Existing solutions typically adopt class re-balancing strategies such as re-sampling and re-weighting based on the number of observations for each class. In this work, we argue that as the number of samples increases, the additional benefit of a newly added data point will diminish. We introduce a novel theoretical framework to measure data overlap by associating with each sample a small neighboring region rather than a single point. The effective number of samples is defined as the volume of samples and can be calculated by a simple formula $(1-\beta^{n})/(1-\beta)$, where $n$ is the number of samples and $\beta \in [0,1)$ is a hyperparameter. We design a re-weighting scheme that uses the effective number of samples for each class to re-balance the loss, thereby yielding a class-balanced loss. Comprehensive experiments are conducted on artificially induced long-tailed CIFAR datasets and large-scale datasets including ImageNet and iNaturalist. Our results show that when trained with the proposed class-balanced loss, the network is able to achieve significant performance gains on long-tailed datasets.
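A minimal sketch of turning the effective-number formula above into per-class loss weights; the class counts and the normalization that rescales the weights to sum to the number of classes are illustrative choices.

```python
import numpy as np

def class_balanced_weights(samples_per_class, beta=0.999):
    """Per-class weights from the effective number of samples
    E_n = (1 - beta^n) / (1 - beta); weights are the inverse effective
    numbers, normalized to sum to the number of classes."""
    samples_per_class = np.asarray(samples_per_class, dtype=float)
    effective_num = (1.0 - np.power(beta, samples_per_class)) / (1.0 - beta)
    weights = 1.0 / effective_num
    return weights / weights.sum() * len(samples_per_class)

# Hypothetical long-tailed class counts: the rarest class gets the largest weight.
print(class_balanced_weights([5000, 500, 50], beta=0.999))
```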
