
Modern computer systems are ubiquitous in contemporary life yet many of them remain opaque. This poses significant challenges in domains where desiderata such as fairness or accountability are crucial. We suggest that the best strategy for achieving system transparency varies depending on the specific source of opacity prevalent in a given context. Synthesizing and extending existing discussions, we propose a taxonomy consisting of eight sources of opacity that fall into three main categories: architectural, analytical, and socio-technical. For each source, we provide initial suggestions as to how to address the resulting opacity in practice. The taxonomy provides a starting point for requirements engineers and other practitioners to understand contextually prevalent sources of opacity, and to select or develop appropriate strategies for overcoming them.

Related Content

Taxonomy is the practice and science of classification. Wikipedia categories illustrate one taxonomy, and a full taxonomy of Wikipedia categories can be extracted by automatic means. As of 2009, it had been shown that manually constructed taxonomies, such as those of computational lexicons like WordNet, can be used to improve and restructure the Wikipedia category taxonomy. In a broader sense, taxonomy also applies to relationship schemes other than parent-child hierarchies, such as network structures. A taxonomy may then include a single child with multiple parents; for example, "car" might appear with both parents "vehicle" and "steel structure". To some, however, this merely means that "car" is part of several different taxonomies. A taxonomy may also simply organize things into groups, or be an alphabetical list; here, though, the term "vocabulary" is more appropriate. In current usage within knowledge management, taxonomies are considered narrower than ontologies, since ontologies apply a wider variety of relation types.
Mathematically, a hierarchical taxonomy is a tree structure of classifications for a given set of objects. At the top of this structure is a single classification, the root node, which applies to all objects. Nodes below the root are more specific classifications that apply to subsets of the total set of classified objects. Reasoning thus progresses from the general to the more specific.
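
The hierarchical structure described above can be sketched as a minimal tree data structure; the node names ("thing", "vehicle", "car") follow the car/vehicle example from the passage:

```python
class TaxonomyNode:
    """A node in a hierarchical taxonomy: one classification plus its sub-classifications."""
    def __init__(self, name):
        self.name = name
        self.children = []

    def add_child(self, name):
        child = TaxonomyNode(name)
        self.children.append(child)
        return child

    def path_to(self, name, path=None):
        """Return the root-to-node path, illustrating reasoning from general to specific."""
        path = (path or []) + [self.name]
        if self.name == name:
            return path
        for child in self.children:
            found = child.path_to(name, path)
            if found:
                return found
        return None

# The root applies to all objects; nodes below it are more specific.
root = TaxonomyNode("thing")
vehicle = root.add_child("vehicle")
car = vehicle.add_child("car")
print(root.path_to("car"))  # ['thing', 'vehicle', 'car']
```

A network-structured taxonomy (a child with multiple parents, as in the "car" example) would require a directed graph rather than this strict tree.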

知識薈萃

精品入門和進(jin)階教(jiao)程(cheng)、論文和代碼(ma)整(zheng)理等

更多

查看相關(guan)VIP內容、論(lun)文、資訊(xun)等

Physical attacks form one of the most severe threats against secure computing platforms. Their criticality arises from the corresponding threat model: by passively measuring an integrated circuit's (IC's) environment during a security-related operation, for example, internal secrets may be disclosed. Furthermore, by actively disturbing the physical runtime environment of an IC, an adversary can cause specific, exploitable misbehavior. The set of physical attacks consists of techniques that apply either globally or locally. Compared to global techniques, local techniques exhibit much higher precision and hence have the potential to be used in advanced attack scenarios. However, using physical techniques with additional spatial dependency expands the parameter search space exponentially. In this work, we present and compare two techniques, namely laser logic state imaging (LLSI) and lock-in thermography (LIT), that can be used to discover sub-circuitry of an entirely unknown IC based on optical and thermal principles. We show that the time required to identify specific regions can be drastically reduced, thus lowering the complexity of physical attacks requiring positional information. Our case study on an Intel H610 Platform Controller Hub shows that, depending on the targeted voltage rail, our technique reduces the search space by around 90 to 98 percent.
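
As a back-of-the-envelope illustration of what such a reduction means in practice (the grid dimensions below are hypothetical, not taken from the case study):

```python
# Hypothetical numbers for illustration only: a scan grid of candidate probe
# positions over an IC die, before and after region-of-interest discovery.
grid_x, grid_y = 1000, 1000
total_positions = grid_x * grid_y

# If LLSI/LIT-style localization rules out 90-98% of the die, only the
# remaining fraction must be searched with precise (and slow) local techniques.
remaining = {r: round(total_positions * (1 - r)) for r in (0.90, 0.98)}
for r, n in remaining.items():
    print(f"{r:.0%} reduction: {n} of {total_positions} candidate positions remain")
```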

Controlled execution of dynamic motions in quadrupedal robots, especially those with articulated soft bodies, presents a unique set of challenges that traditional methods struggle to address efficiently. In this study, we tackle these issues by relying on a simple yet effective two-stage learning framework to generate dynamic motions for quadrupedal robots. First, a gradient-free evolution strategy is employed to discover simply represented control policies, eliminating the need for a predefined reference motion. Then, we refine these policies using deep reinforcement learning. Our approach enables the acquisition of complex motions like pronking and back-flipping, effectively from scratch. Additionally, our method simplifies the traditionally labour-intensive task of reward shaping, boosting the efficiency of the learning process. Importantly, our framework proves particularly effective for articulated soft quadrupeds, whose inherent compliance and adaptability make them ideal for dynamic tasks but also introduce unique control challenges.
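
The first stage described above, a gradient-free evolution strategy over simply represented policies, can be sketched as follows; the fitness function, policy representation, and hyperparameters are illustrative placeholders, not the paper's:

```python
import random

random.seed(0)  # for reproducibility of this sketch

def evolve(fitness, dim=4, pop=20, elite=5, gens=50, sigma=0.3, decay=0.95):
    """Gradient-free (mu, lambda)-style evolution strategy: sample perturbed
    parameter vectors, keep the elite, and recombine them into a new mean.
    No predefined reference motion is needed, only a fitness signal."""
    mean = [0.0] * dim
    for _ in range(gens):
        population = [[m + random.gauss(0, sigma) for m in mean] for _ in range(pop)]
        population.sort(key=fitness, reverse=True)
        elites = population[:elite]
        mean = [sum(p[i] for p in elites) / elite for i in range(dim)]
        sigma *= decay  # anneal exploration noise
    return mean

# Toy stand-in for motion quality: reward parameters near a target vector.
target = [0.5, -0.2, 0.1, 0.8]
best = evolve(lambda p: -sum((a - b) ** 2 for a, b in zip(p, target)))
```

In the two-stage framework, a policy discovered this way would then be handed to deep reinforcement learning for refinement.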

Federated training of Graph Neural Networks (GNNs) has become popular in recent years due to its ability to perform graph-related tasks under data isolation scenarios while preserving data privacy. However, graph heterogeneity issues in federated GNN systems continue to pose challenges. Existing frameworks address the problem by representing local tasks using different statistics and relating them through a simple aggregation mechanism. However, these approaches suffer from limited effectiveness in two respects: low-quality task-relatedness quantification and failure to exploit the collaboration structure. To address these issues, we propose FedGKD, a novel federated GNN framework that uses a client-side graph dataset distillation method to extract task features that better describe task-relatedness, and a server-side aggregation mechanism that is aware of the global collaboration structure. We conduct extensive experiments on six real-world datasets of different scales, demonstrating that our framework outperforms existing approaches.
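
The general idea of relatedness-aware server-side aggregation can be sketched as below. This is a generic illustration under an assumed cosine-similarity weighting, not FedGKD's actual mechanism:

```python
import numpy as np

def aggregate(client_updates, task_features, temperature=1.0):
    """Relatedness-weighted aggregation sketch: each client receives a
    personalized aggregate in which other clients' updates are weighted by
    task-feature similarity (generic illustration, not FedGKD's mechanism)."""
    feats = np.asarray(task_features, dtype=float)
    unit = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    sim = unit @ unit.T                # cosine similarity between tasks
    w = np.exp(sim / temperature)      # softmax -> per-client aggregation weights
    w /= w.sum(axis=1, keepdims=True)
    return w @ np.asarray(client_updates, dtype=float)

# Two clients with similar tasks and one with a dissimilar task.
feats = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]]
updates = [[1.0], [1.0], [-1.0]]
personalized = aggregate(updates, feats)  # row i: aggregate for client i
```

The similar clients (rows 0 and 1) end up weighting each other heavily, while the dissimilar client's aggregate stays dominated by its own update.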

Detecting firearms and accurately localizing individuals carrying them in images or videos is of paramount importance in security, surveillance, and content customization. However, this task presents significant challenges in complex environments due to clutter and the diverse shapes of firearms. To address this problem, we propose a novel approach that leverages human-firearm interaction information, which provides valuable clues for localizing firearm carriers. Our approach incorporates an attention mechanism that effectively distinguishes humans and firearms from the background by focusing on relevant areas. Additionally, we introduce a saliency-driven locality-preserving constraint to learn essential features while preserving foreground information in the input image. By combining these components, our approach achieves exceptional results on a newly proposed dataset. To handle inputs of varying sizes, we pass paired human-firearm instances with attention masks as channels through a deep network for feature computation, utilizing an adaptive average pooling layer. We extensively evaluate our approach against existing methods in human-object interaction detection and achieve significant improvements (AP = 77.8%) over the baseline approach (AP = 63.1%). This demonstrates the effectiveness of leveraging attention mechanisms and saliency-driven locality preservation for accurate human-firearm interaction detection. Our findings contribute to advancing the fields of security and surveillance, enabling more efficient firearm localization and identification in diverse scenarios.
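
The adaptive-average-pooling step that lets variable-size inputs yield fixed-length features can be sketched in plain NumPy; the crop size and channel layout below are illustrative assumptions:

```python
import numpy as np

def adaptive_avg_pool2d(x, out_h, out_w):
    """Average-pool a (C, H, W) feature map to a fixed (C, out_h, out_w) size,
    so crops of any spatial size produce features of the same shape."""
    c, h, w = x.shape
    out = np.zeros((c, out_h, out_w))
    for i in range(out_h):
        # Bin boundaries follow the usual adaptive-pooling index arithmetic:
        # start = floor(i*H/out_h), end = ceil((i+1)*H/out_h).
        h0, h1 = (i * h) // out_h, -((-(i + 1) * h) // out_h)
        for j in range(out_w):
            w0, w1 = (j * w) // out_w, -((-(j + 1) * w) // out_w)
            out[:, i, j] = x[:, h0:h1, w0:w1].mean(axis=(1, 2))
    return out

# A 4-channel input: 3 image channels plus one attention-mask channel.
crop = np.random.rand(4, 37, 53)   # arbitrary crop size
feat = adaptive_avg_pool2d(crop, 7, 7)
print(feat.shape)  # (4, 7, 7)
```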

The transformation to Industry 4.0 changes the way embedded software systems are developed. Digital twins have the potential to enable cost-effective software development and maintenance strategies. With reduced costs and faster development cycles, small and medium-sized enterprises (SMEs) have the chance to grow with new smart products. We interviewed SMEs about their current development processes. In this paper, we present the first results of these interviews. They show that real-time requirements have so far prevented a Software-in-the-Loop development approach, due to a lack of proper tooling. Security and safety concerns, as well as limited access to hardware, are the main impediments. Where access to hardware is only temporary, Software-in-the-Loop development approaches based on simulations and emulators emerge; yet this is not possible in all use cases. All interviewees see the potential of Software-in-the-Loop approaches and digital twins with regard to quality and customization. One reason it will take some effort to convince engineers is the conservative nature of the embedded community, particularly in SMEs.

In neural network training, RMSProp and ADAM remain widely favoured optimization algorithms. One of the keys to their performance lies in selecting the correct step size: these algorithms' performance can vary considerably depending on the chosen step sizes. Additionally, questions about their theoretical convergence properties continue to be a subject of interest. In this paper, we theoretically analyze a constant-stepsize version of ADAM in the non-convex setting. We show sufficient conditions on the stepsize to achieve almost sure asymptotic convergence of the gradients to zero with minimal assumptions. We also provide runtime bounds for deterministic ADAM to reach approximate criticality when working with smooth, non-convex functions.
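
A constant-stepsize ADAM update of the kind analyzed above can be sketched using the standard textbook update rule; the toy objective and hyperparameters are illustrative:

```python
import numpy as np

def adam_step(theta, grad, m, v, t, alpha=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One ADAM step with a constant stepsize alpha."""
    m = b1 * m + (1 - b1) * grad          # first-moment (mean) estimate
    v = b2 * v + (1 - b2) * grad ** 2     # second-moment estimate
    m_hat = m / (1 - b1 ** t)             # bias correction
    v_hat = v / (1 - b2 ** t)
    theta = theta - alpha * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Minimize a smooth non-convex toy objective f(x) = x^4 - x^2 via its gradient.
theta = np.array([1.5])
m = v = np.zeros_like(theta)
for t in range(1, 5001):
    grad = 4 * theta ** 3 - 2 * theta
    theta, m, v = adam_step(theta, grad, m, v, t, alpha=1e-2)
# theta converges toward the minimum near x = 1/sqrt(2) ~ 0.707
```

Note that with a constant stepsize, ADAM's normalized update keeps the iterate oscillating within roughly alpha of the critical point rather than settling exactly on it, which is one reason stepsize conditions matter for asymptotic convergence of the gradients.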

Knowledge plays a critical role in artificial intelligence. Recently, the extensive success of pre-trained language models (PLMs) has drawn significant attention to how knowledge can be acquired, maintained, updated and used by language models. Despite the enormous amount of related studies, a unified view of how knowledge circulates within language models throughout the learning, tuning, and application processes is still lacking, which may prevent us from understanding the connections between different lines of progress or recognizing existing limitations. In this survey, we revisit PLMs as knowledge-based systems by dividing the life cycle of knowledge in PLMs into five critical periods, and investigating how knowledge circulates when it is built, maintained and used. To this end, we systematically review existing studies of each period of the knowledge life cycle, summarize the main challenges and current limitations, and discuss future directions.

Multi-agent influence diagrams (MAIDs) are a popular form of graphical model that, for certain classes of games, have been shown to offer key complexity and explainability advantages over traditional extensive form game (EFG) representations. In this paper, we extend previous work on MAIDs by introducing the concept of a MAID subgame, as well as subgame perfect and trembling hand perfect equilibrium refinements. We then prove several equivalence results between MAIDs and EFGs. Finally, we describe an open source implementation for reasoning about MAIDs and computing their equilibria.

Deep neural networks (DNNs) are successful in many computer vision tasks. However, the most accurate DNNs require millions of parameters and operations, making them energy, computation and memory intensive. This impedes the deployment of large DNNs in low-power devices with limited compute resources. Recent research improves DNN models by reducing the memory requirement, energy consumption, and number of operations without significantly decreasing the accuracy. This paper surveys the progress of low-power deep learning and computer vision, specifically with regard to inference, and discusses the methods for compacting and accelerating DNN models. The techniques can be divided into four major categories: (1) parameter quantization and pruning, (2) compressed convolutional filters and matrix factorization, (3) network architecture search, and (4) knowledge distillation. We analyze the accuracy, advantages, disadvantages, and potential solutions to the problems with the techniques in each category. We also discuss new evaluation metrics as a guideline for future research.
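
The first category, parameter quantization and pruning, can be sketched as follows; the sparsity level and bit width are illustrative choices, not recommendations from the survey:

```python
import numpy as np

np.random.seed(0)

def magnitude_prune(w, sparsity):
    """Unstructured pruning: zero out the smallest-magnitude weights."""
    k = int(w.size * sparsity)
    threshold = np.sort(np.abs(w), axis=None)[k]
    return np.where(np.abs(w) < threshold, 0.0, w)

def quantize_uint8(w):
    """Affine 8-bit quantization; dequantize with q * scale + zero_point."""
    lo, hi = float(w.min()), float(w.max())
    scale = (hi - lo) / 255.0
    q = np.round((w - lo) / scale).astype(np.uint8)
    return q, scale, lo

w = np.random.randn(64, 64).astype(np.float32)        # a toy weight matrix
pruned = magnitude_prune(w, 0.9)                      # ~90% of weights zeroed
q, scale, zero_point = quantize_uint8(pruned)         # 4x smaller than float32
restored = q.astype(np.float32) * scale + zero_point  # per-weight error <= scale/2
```

Pruning reduces the number of operations (zeros can be skipped), while quantization reduces memory and energy per operation; the two are commonly combined, as the survey's grouping suggests.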

Automatic KB completion for commonsense knowledge graphs (e.g., ATOMIC and ConceptNet) poses unique challenges compared to the much-studied conventional knowledge bases (e.g., Freebase). Commonsense knowledge graphs use free-form text to represent nodes, resulting in orders of magnitude more nodes compared to conventional KBs (18x more nodes in ATOMIC compared to Freebase (FB15K-237)). Importantly, this implies significantly sparser graph structures - a major challenge for existing KB completion methods that assume densely connected graphs over a relatively smaller set of nodes. In this paper, we present novel KB completion models that can address these challenges by exploiting the structural and semantic context of nodes. Specifically, we investigate two key ideas: (1) learning from local graph structure, using graph convolutional networks and automatic graph densification and (2) transfer learning from pre-trained language models to knowledge graphs for enhanced contextual representation of knowledge. We describe our method to incorporate information from both these sources in a joint model and provide the first empirical results for KB completion on ATOMIC and evaluation with ranking metrics on ConceptNet. Our results demonstrate the effectiveness of language model representations in boosting link prediction performance and the advantages of learning from local graph structure (+1.5 points in MRR for ConceptNet) when training on subgraphs for computational efficiency. Further analysis of model predictions sheds light on the types of commonsense knowledge that language models capture well.
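
The local-graph-structure idea can be illustrated with a single graph-convolution layer in the style of Kipf and Welling; the toy graph and feature dimensions below are assumptions for illustration, not the paper's model:

```python
import numpy as np

def gcn_layer(adj, feats, weight):
    """One graph-convolution layer: symmetrically normalized neighborhood
    averaging followed by a linear map and ReLU."""
    a_hat = adj + np.eye(adj.shape[0])            # add self-loops
    deg = a_hat.sum(axis=1)
    d_inv_sqrt = np.diag(deg ** -0.5)
    norm_adj = d_inv_sqrt @ a_hat @ d_inv_sqrt    # D^{-1/2} (A + I) D^{-1/2}
    return np.maximum(norm_adj @ feats @ weight, 0.0)

# Toy sparse graph: 4 nodes in a chain, mimicking a sparsely connected KB.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
feats = np.random.randn(4, 8)   # node features, e.g., from a language model
weight = np.random.randn(8, 4)  # learnable projection
h = gcn_layer(adj, feats, weight)
print(h.shape)  # (4, 4)
```

On a sparse graph like this, each node's representation is updated from only its immediate neighbors, which is why densification and language-model features help when neighborhoods are small.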
