
Identity and Access Management (IAM) is an access control service in cloud platforms. To securely manage cloud resources, customers are required to configure IAM to specify the access control rules for their cloud organizations. However, IAM misconfigurations may be exploited to perform privilege escalation attacks, which can cause severe economic loss. To detect privilege escalations due to IAM misconfigurations, existing third-party cloud security services apply whitebox penetration testing techniques, which require access to complete IAM configurations. This requirement places a considerable burden on customers: to prevent sensitive information disclosure, they must spend substantial manual effort anonymizing their configurations. In this paper, we propose a precise greybox penetration testing approach called TAC for third-party services to detect IAM privilege escalations. To mitigate the dual challenges of labor-intensive anonymization and potential sensitive information disclosure, TAC interacts with customers by selectively querying only the essential information it needs. To accomplish this, we first propose abstract IAM modeling, which enables TAC to detect IAM privilege escalations based on the partial information collected from queries. Moreover, to improve the efficiency and applicability of TAC, we minimize the interactions with customers by applying Reinforcement Learning (RL) with Graph Neural Networks (GNNs), allowing TAC to learn to make as few queries as possible. To pretrain and evaluate TAC on a sufficiently diverse set of tasks, we propose an IAM privilege escalation task generator called IAMVulGen. Experimental results on both our task set and the only publicly available task set, IAM Vulnerable, show that, compared with state-of-the-art whitebox approaches, TAC detects IAM privilege escalations with competitively low false negative rates while issuing a limited number of queries.
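To make the greybox idea concrete, the sketch below (not TAC itself) checks one classic IAM privilege escalation pattern using only selectively queried facts; the query_allowed callback is a hypothetical oracle standing in for a single question to the customer, so the full configuration never has to be disclosed.

# Illustrative sketch, assuming a hypothetical query interface: detecting the
# well-known iam:PassRole + Lambda escalation chain from partial information.
from typing import Callable

# Passing a privileged role to a new Lambda function and invoking it lets an
# attacker run arbitrary code with that role's permissions.
ESCALATION_CHAIN = ("iam:PassRole", "lambda:CreateFunction", "lambda:InvokeFunction")

def can_escalate(principal: str,
                 query_allowed: Callable[[str, str], bool]) -> bool:
    """Return True if every action in the chain is allowed for the principal.

    query_allowed(principal, action) models one interaction with the customer;
    a greybox tester would order and prune these queries to ask as few as possible.
    """
    return all(query_allowed(principal, action) for action in ESCALATION_CHAIN)

if __name__ == "__main__":
    # Toy ground truth standing in for the customer's (private) IAM policies.
    allowed = {("dev-user", a) for a in ESCALATION_CHAIN}
    print(can_escalate("dev-user", lambda p, a: (p, a) in allowed))   # True
    print(can_escalate("auditor", lambda p, a: (p, a) in allowed))    # False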

Related content

IEEE Transactions on Affective Computing (TAC) is a cross-disciplinary international archival journal aimed at disseminating research results on the design of systems that can recognize, interpret, and simulate human emotions and related affective phenomena.

Q&A platforms have been an integral part of programmers' web help-seeking behavior over the past decade. However, with the recent introduction of ChatGPT, this paradigm is shifting. Despite ChatGPT's popularity, no comprehensive study has evaluated the characteristics or usability of ChatGPT's answers to software engineering questions. To bridge this gap, we conducted the first in-depth analysis of ChatGPT's answers to 517 Stack Overflow (SO) questions, examining the correctness, consistency, comprehensiveness, and conciseness of the answers. Furthermore, we conducted a large-scale linguistic analysis and a user study to understand the characteristics of ChatGPT answers from linguistic and human perspectives. Our analysis shows that 52% of ChatGPT answers are incorrect and 77% are verbose. Nonetheless, ChatGPT answers are still preferred 39.34% of the time due to their comprehensiveness and well-articulated language style. Our results imply the need for close examination and rectification of errors in ChatGPT answers, while also raising awareness among users of the risks associated with seemingly correct answers.

Secure data deletion enables data owners to fully control the erasure of their data stored in local or cloud data centers and is essential for preventing data leakage, especially in cloud storage. However, traditional data deletion based on unlinking, overwriting, or cryptographic key management is either ineffective in cloud storage or relies on impractical assumptions. In this paper, we present SevDel, a secure and verifiable data deletion scheme that leverages zero-knowledge proofs to verify the encryption of outsourced data without retrieving the ciphertexts, while the deletion of the encryption keys is guaranteed by Intel SGX. SevDel implements secure interfaces to perform data encryption and decryption for secure cloud storage. It also utilizes smart contracts to enforce that the operations of the cloud service provider follow the service level agreements with data owners, and to impose penalties on a service provider that discloses the cloud data on its servers. Evaluation on real-world workloads demonstrates that SevDel achieves efficient data deletion verification and maintains high bandwidth savings.
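The following minimal sketch illustrates the key-destruction principle that such schemes build on (it is not SevDel's protocol): data is stored only as ciphertext, so erasing the short key renders the data unrecoverable. SevDel additionally proves the encryption in zero knowledge and guards the key inside an SGX enclave, which this toy code omits; the stream cipher here is a plain SHA-256 counter-mode construction chosen only to keep the example dependency-free.

# Minimal sketch of crypto-based deletion, assuming a toy stream cipher.
import hashlib, os

def keystream(key: bytes, length: int) -> bytes:
    # SHA-256 in counter mode as an illustrative keystream generator.
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(plaintext, keystream(key, len(plaintext))))

decrypt = encrypt  # XOR stream ciphers are symmetric

if __name__ == "__main__":
    key = os.urandom(32)
    ct = encrypt(key, b"outsourced record")
    print(decrypt(key, ct))   # b'outsourced record'
    key = None                # "secure deletion": only the short key is erased
    # Without the key, the remaining ciphertext reveals nothing recoverable.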

A practical wireless network has many tiers in which end users do not communicate directly with the central server; moreover, the users' devices have limited computation and battery power, and the serving base station (BS) has a fixed bandwidth. Motivated by these practical constraints and system models, this paper leverages model pruning and proposes a pruning-enabled hierarchical federated learning (PHFL) scheme for heterogeneous networks (HetNets). We first derive an upper bound on the convergence rate that clearly demonstrates the impact of model pruning and of the wireless communication between the clients and their associated BS. We then jointly optimize the model pruning ratio, the central processing unit (CPU) frequency, and the transmission power of the clients in order to minimize the controllable terms of the convergence bound under strict delay and energy constraints. Since the original problem is not convex, we perform successive convex approximation (SCA) and jointly optimize the parameters of the relaxed convex problem. Through extensive simulations, we validate the effectiveness of the proposed PHFL algorithm in terms of test accuracy, wall-clock time, energy consumption, and bandwidth requirements.
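As a rough illustration of what a pruning ratio controls in such a setting, the sketch below applies standard magnitude pruning to a weight vector: the smallest-magnitude fraction of weights is zeroed, so a client only trains and transmits the surviving parameters. This is a generic sketch, not the paper's joint optimization of pruning ratio, CPU frequency, and transmit power.

# Illustrative magnitude pruning with a tunable pruning ratio.
import numpy as np

def prune_by_magnitude(weights: np.ndarray, pruning_ratio: float) -> np.ndarray:
    """Return a copy of `weights` with the smallest `pruning_ratio` fraction zeroed."""
    k = int(pruning_ratio * weights.size)
    if k == 0:
        return weights.copy()
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

if __name__ == "__main__":
    w = np.random.randn(1000)
    pruned = prune_by_magnitude(w, pruning_ratio=0.7)
    print(f"fraction of weights kept: {np.count_nonzero(pruned) / w.size:.2f}")  # ~0.30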

The unmanned aerial vehicle (UAV) network is a promising technology for assisting the Internet of Things (IoT), where a UAV can use its limited service coverage to harvest and disseminate data from IoT devices with low transmission capabilities. Existing UAV-assisted data harvesting and dissemination schemes largely require UAVs to fly frequently between the IoT devices and access points, incurring extra energy and time costs. A key way to reduce both costs is to enhance the transmission performance of the IoT devices and UAVs. In this work, we introduce collaborative beamforming into both the IoT devices and the UAVs to achieve energy- and time-efficient data harvesting and dissemination from multiple IoT clusters to remote base stations (BSs). Beyond these costs, another non-negligible threat is the presence of potential eavesdroppers, and handling eavesdroppers typically increases the energy and time costs, conflicting with the goal of minimizing them. Moreover, the relative importance of these goals may vary across applications. Thus, we formulate a multi-objective optimization problem (MOP) to simultaneously minimize the mission completion time, the signal strength towards the eavesdropper, and the total energy cost of the UAVs. We prove that the formulated MOP is an NP-hard, mixed-variable, large-scale optimization problem. We therefore propose a swarm intelligence-based algorithm to find a set of candidate solutions with different trade-offs that can meet various requirements at low computational complexity. We also show that swarm intelligence methods need enhanced solution initialization, solution update, and algorithm parameter update phases when dealing with mixed-variable, large-scale problems. Simulation results demonstrate that the proposed algorithm outperforms state-of-the-art swarm intelligence algorithms.
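A set of trade-off solutions for a multi-objective problem is typically reported as a Pareto front. The sketch below shows only that final filtering step under the assumption that each candidate has already been evaluated into a triple (completion time, eavesdropper signal strength, UAV energy cost), all to be minimized; the paper's swarm-intelligence search and mixed-variable handling are not reproduced.

# Sketch: extracting non-dominated (Pareto-optimal) candidates.
from typing import List, Tuple

Objectives = Tuple[float, float, float]

def dominates(a: Objectives, b: Objectives) -> bool:
    """a dominates b if it is no worse in every objective and strictly better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(candidates: List[Objectives]) -> List[Objectives]:
    return [c for c in candidates
            if not any(dominates(other, c) for other in candidates if other != c)]

if __name__ == "__main__":
    evaluated = [(120.0, 0.4, 35.0), (100.0, 0.9, 40.0), (150.0, 0.2, 30.0),
                 (130.0, 0.5, 45.0)]  # the last candidate is dominated by the first
    print(pareto_front(evaluated))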

Many conversational domains require the system to present nuanced information to users. Such systems must follow up on what they say to address clarification questions and repair misunderstandings. In this work, we explore this interactive strategy in a referential communication task. Using simulation, we analyze the communication trade-offs between the initial presentation and the subsequent follow-up as a function of the user's clarification strategy, and compare the performance of several baseline strategies to policies derived by reinforcement learning. We find surprising advantages of coherence-based representations of dialogue strategy, which bring minimal data requirements, explainable choices, and strong audit capabilities, while incurring little loss in predicted outcomes across a wide range of user models.

Search query classification, as an effective way to understand user intent, is of great importance in real-world online advertising systems. To ensure low latency, a shallow model (e.g., FastText) is widely used for efficient online inference. However, the representation ability of the FastText model is insufficient, resulting in poor classification performance, especially on low-frequency queries and tail categories. Using a deeper and more complex model (e.g., BERT) is an effective solution, but it incurs higher online inference latency and more expensive computing costs. Thus, balancing inference efficiency and classification performance is of great practical importance. To overcome this challenge, we propose knowledge condensation (KC), a simple yet effective knowledge distillation framework that boosts the classification performance of the online FastText model under strict low-latency constraints. Specifically, we train an offline BERT model to retrieve more potentially relevant data. Benefiting from its powerful semantic representation, relevant labels not exposed in the historical data are added to the training set so that a better FastText model can be trained. Moreover, a novel distribution-diverse multi-expert learning strategy is proposed to further improve the mining of relevant data: by training multiple BERT models on different data distributions, the experts perform better on high-, middle-, and low-frequency search queries, respectively, and their ensemble makes retrieval more powerful. We have deployed two versions of this framework in JD search, and both offline experiments and online A/B tests on multiple datasets validate the effectiveness of the proposed approach.
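The core data-enrichment step can be pictured as below. This is an illustrative sketch with invented names (condense_labels, teacher_scores), not the paper's code: an offline teacher scores every category for a query, and confident labels missing from the historical data are added so the lightweight online model trains on a richer label set.

# Sketch of teacher-driven label augmentation for the online model's training set.
from typing import Callable, Dict, List, Set

def condense_labels(query: str,
                    historical_labels: Set[str],
                    teacher_scores: Callable[[str], Dict[str, float]],
                    threshold: float = 0.8) -> List[str]:
    """Union of observed labels and teacher-predicted labels above the confidence threshold."""
    predicted = {c for c, p in teacher_scores(query).items() if p >= threshold}
    return sorted(historical_labels | predicted)

if __name__ == "__main__":
    # Toy stand-in for an offline BERT classifier.
    teacher = lambda q: {"electronics": 0.95, "toys": 0.10, "books": 0.85}
    print(condense_labels("ereader for kids", {"electronics"}, teacher))
    # -> ['books', 'electronics']  (the teacher adds a confident missing label)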

Due to the correctness and security risks of outsourced cloud computing, we consider a new paradigm called crowdsourcing: distributing tasks, receiving answers, and aggregating the results from multiple entities. Through this approach, we can aggregate the wisdom of the crowd to complete tasks, ensuring the accuracy of task completion while reducing the risks posed by the malicious acts of a single entity. The ensuing question is: how can we ensure that the aggregator has done its work honestly and that each contributor's work has been evaluated fairly? In this paper, we propose a new scheme called $\mathsf{zkTI}$. This scheme ensures that the aggregator has honestly completed the aggregation and that each data source is fairly evaluated. We combine a cryptographic primitive called \textit{zero-knowledge proof} with a class of \textit{truth inference algorithms} widely studied in AI/ML scenarios. Under this scheme, various complex outsourced tasks can be solved efficiently and accurately. To build our scheme, we propose a novel method to prove exact computation over floating-point numbers, which is nearly optimal and compatible with existing argument systems; this may be of independent interest. Our work can thus prove the processes of aggregation and inference without loss of precision. We fully implement and evaluate our ideas. Compared with recent works, our scheme achieves a $2$-$4\times$ efficiency improvement and is robust enough to be widely applied.
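For readers unfamiliar with truth inference, the sketch below shows one simple member of that algorithm class: iterative weighted voting in which each worker's weight is re-estimated from its agreement with the current truth estimates. zkTI would additionally produce a zero-knowledge proof that such an aggregation was computed correctly, which is not shown here.

# Sketch of a simple iterative truth-inference algorithm.
from collections import defaultdict
from typing import Dict

def truth_inference(answers: Dict[str, Dict[str, str]], rounds: int = 5) -> Dict[str, str]:
    """answers[task][worker] = label; returns an estimated label per task."""
    workers = {w for per_task in answers.values() for w in per_task}
    weight = {w: 1.0 for w in workers}
    truth: Dict[str, str] = {}
    for _ in range(rounds):
        # E-step: weighted vote per task.
        for task, per_task in answers.items():
            votes = defaultdict(float)
            for w, label in per_task.items():
                votes[label] += weight[w]
            truth[task] = max(votes, key=votes.get)
        # M-step: a worker's weight is its agreement rate with the current truths.
        for w in workers:
            judged = [(t, l) for t, per in answers.items()
                      for ww, l in per.items() if ww == w]
            agree = sum(1 for t, l in judged if truth[t] == l)
            weight[w] = agree / max(len(judged), 1)
    return truth

if __name__ == "__main__":
    data = {"q1": {"a": "cat", "b": "cat", "c": "dog"},
            "q2": {"a": "red", "b": "blue", "c": "red"}}
    print(truth_inference(data))   # {'q1': 'cat', 'q2': 'red'}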

With the popularity of cloud computing and machine learning, it has become a trend to outsource machine learning processes (including model training and model-based inference) to the cloud. Beyond utilizing the extensive and scalable resources offered by the cloud service provider, it is also attractive to users if the cloud servers can manage the machine learning processes autonomously on their behalf. Such a feature is especially salient when the machine learning is expected to be a long-term continuous process and the users are not always available to participate. Due to security and privacy concerns, it is also desirable that the autonomous learning preserve the confidentiality of the users' data and models. Hence, in this paper, we aim to design a scheme that enables autonomous and confidential model refining in the cloud. Homomorphic encryption and trusted execution environment technology can both protect the confidentiality of autonomous computation, but each has its own limitations, and the two are complementary. We therefore propose to integrate these two techniques in the design of the model refining scheme. Through implementation and experiments, we evaluate the feasibility of the proposed scheme. The results indicate that, with our scheme, the cloud server can autonomously refine an encrypted model with newly provided encrypted training data to continuously improve its accuracy. Although the efficiency is still significantly lower than that of the baseline scheme, which refines a plaintext model with plaintext data, we expect it can be improved by fully exploiting the higher level of parallelism and the computational power of GPUs at the cloud server.
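To show why homomorphic encryption enables computation on data the server cannot read, the sketch below implements a toy Paillier cryptosystem with tiny, insecure parameters: multiplying two ciphertexts adds the underlying plaintexts, which is the property that lets a server combine encrypted updates without decrypting them. This illustrates the general additively homomorphic primitive, not the paper's specific scheme or its TEE component.

# Toy Paillier encryption (insecure parameters, for illustration only).
import math, random

def keygen(p: int = 1789, q: int = 1867):
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    g = n + 1
    mu = pow((pow(g, lam, n * n) - 1) // n, -1, n)
    return (n, g), (lam, mu, n)

def encrypt(pub, m: int) -> int:
    n, g = pub
    while True:
        r = random.randrange(1, n)
        if math.gcd(r, n) == 1:
            break
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(priv, c: int) -> int:
    lam, mu, n = priv
    return ((pow(c, lam, n * n) - 1) // n) * mu % n

if __name__ == "__main__":
    pub, priv = keygen()
    c1, c2 = encrypt(pub, 42), encrypt(pub, 58)
    # Multiplying ciphertexts adds the underlying plaintexts: 42 + 58.
    print(decrypt(priv, (c1 * c2) % (pub[0] ** 2)))   # 100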

Point cloud-based large-scale place recognition is fundamental for many applications such as Simultaneous Localization and Mapping (SLAM). Although many models have achieved good performance by learning short-range local features, long-range contextual properties have often been neglected, and model size has become a bottleneck for wide deployment. To overcome these challenges, we propose a super-lightweight network model termed SVT-Net for large-scale place recognition. Specifically, on top of the highly efficient 3D Sparse Convolution (SP-Conv), an Atom-based Sparse Voxel Transformer (ASVT) and a Cluster-based Sparse Voxel Transformer (CSVT) are proposed to learn both short-range local features and long-range contextual features. Composed of ASVT and CSVT, SVT-Net achieves state-of-the-art performance on benchmark datasets in terms of both accuracy and speed with a super-light model size (0.9M). Meanwhile, two simplified versions of SVT-Net are introduced, which also achieve state-of-the-art performance while further reducing the model size to 0.8M and 0.4M, respectively.
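The intuition behind attending over voxels can be sketched with plain scaled dot-product attention: every occupied voxel attends to every other one, so features can pick up long-range context that purely local sparse convolutions miss. The code below is a generic numpy sketch with randomly initialized projections, not SVT-Net's ASVT or CSVT modules.

# Minimal numpy sketch of self-attention over non-empty voxel features.
import numpy as np

def voxel_self_attention(feats: np.ndarray) -> np.ndarray:
    """feats: (N, d) features of the N non-empty voxels; returns refined (N, d) features."""
    d = feats.shape[1]
    rng = np.random.default_rng(0)
    w_q, w_k, w_v = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))
    q, k, v = feats @ w_q, feats @ w_k, feats @ w_v
    scores = q @ k.T / np.sqrt(d)
    attn = np.exp(scores - scores.max(axis=1, keepdims=True))   # stable softmax
    attn /= attn.sum(axis=1, keepdims=True)
    return attn @ v

if __name__ == "__main__":
    voxel_features = np.random.default_rng(1).standard_normal((256, 32))
    print(voxel_self_attention(voxel_features).shape)   # (256, 32)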

The task of detecting 3D objects in point clouds plays a pivotal role in many real-world applications. However, 3D object detection performance lags behind that of 2D object detection due to the lack of powerful 3D feature extraction methods. To address this issue, we propose to build a 3D backbone network that learns rich 3D feature maps using sparse 3D CNN operations for 3D object detection in point clouds. The 3D backbone network can inherently learn 3D features from almost raw data, without compressing the point cloud into multiple 2D images, and generates rich feature maps for object detection. The sparse 3D CNN takes full advantage of the sparsity of the 3D point cloud to accelerate computation and save memory, which makes the 3D backbone network practical. Empirical experiments on the KITTI benchmark show that the proposed method achieves state-of-the-art performance for 3D object detection.
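The sparsity that such a backbone exploits can be quantified with a simple voxelization sketch: only the occupied voxels are stored and processed, so memory and compute scale with the number of points rather than with the full dense 3D grid. This generic sketch does not reproduce the paper's sparse convolution kernels.

# Sketch: sparse voxelization of a point cloud (only occupied voxels are kept).
import numpy as np
from collections import defaultdict

def voxelize_sparse(points: np.ndarray, voxel_size: float = 0.2):
    """points: (N, 3) xyz coordinates; returns {voxel index: mean point of that voxel}."""
    voxels = defaultdict(list)
    for p in points:
        idx = tuple((p // voxel_size).astype(int))
        voxels[idx].append(p)
    return {idx: np.mean(pts, axis=0) for idx, pts in voxels.items()}

if __name__ == "__main__":
    pts = np.random.default_rng(0).uniform(0, 50, size=(20000, 3))
    occupied = voxelize_sparse(pts)
    total_cells = int((50 / 0.2) ** 3)
    print(f"occupied voxels: {len(occupied)} of {total_cells} grid cells")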
