
Application compartmentalization and privilege separation are our primary weapons against ever-increasing security threats and privacy concerns on connected devices. Despite significant progress, it is still challenging to enforce privilege separation within a single application address space and in multithreaded environments, particularly on resource-constrained and mobile devices. We propose MicroGuards, a lightweight kernel modification and a set of security primitives and APIs aimed at flexible and fine-grained in-process memory protection and privilege separation in multithreaded applications. MicroGuards take advantage of hardware support in modern CPUs and are high-level enough to be adaptable to various architectures. This paper focuses on enabling MicroGuards on embedded and mobile devices running the Linux kernel and utilizes tagged memory support to achieve good performance. Our evaluation shows that MicroGuards add small runtime overhead (less than 3.5\%), have a minimal memory footprint, and are practical to integrate into existing applications to enable fine-grained privilege separation.
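MicroGuards' own API targets tagged-memory hardware and is not reproduced here. As a rough, hedged illustration of the kind of hardware-backed, per-thread in-process memory protection the abstract describes, the sketch below instead uses an existing Linux primitive, memory protection keys (pkeys, glibc >= 2.27 on x86 CPUs with MPK support); the constants and page setup are illustrative only.

    import ctypes, ctypes.util, mmap

    libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)

    PAGE = mmap.PAGESIZE
    PROT_READ, PROT_WRITE = mmap.PROT_READ, mmap.PROT_WRITE
    PKEY_DISABLE_WRITE = 0x2                      # from <sys/mman.h>

    # Map one anonymous page and obtain its address.
    buf = mmap.mmap(-1, PAGE)
    addr = ctypes.addressof(ctypes.c_char.from_buffer(buf))

    # Allocate a protection key and bind it to the page.
    pkey = libc.pkey_alloc(0, 0)                  # returns -1 if the CPU/kernel lacks pkey support
    libc.pkey_mprotect(ctypes.c_void_p(addr), PAGE, PROT_READ | PROT_WRITE, pkey)

    buf[:5] = b"hello"                            # this thread currently holds full rights

    # Revoke write access for the *current thread only*; other threads are unaffected.
    libc.pkey_set(pkey, PKEY_DISABLE_WRITE)
    print(buf[:5])                                # reads still succeed
    # buf[:5] = b"world"                          # would now fault in this thread

The per-thread nature of the access rights is what makes this style of primitive attractive for privilege separation inside multithreaded applications.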

Related Content

In certain Blockchain systems, multiple Blockchains are required to operate cooperatively for security, performance, and capacity considerations. This invention defines a cross-chain mechanism where a main Blockchain issues the tokens, which can then be transferred and used in multiple side Blockchains to drive their operations. A set of witnesses is created to securely manage the token exchange across the main chain and multiple side chains. The system decouples the consensus algorithms between the main chain and side chains. We also discuss the coexistence of the main tokens and the native tokens in the side chains.
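A hedged Python sketch of the witness-mediated flow described above: tokens are locked on the main chain, and a quorum of witnesses must co-sign the lock event before a side chain credits the same amount. All class names, fields, and the quorum rule are illustrative, not taken from the invention text.

    from dataclasses import dataclass, field

    @dataclass
    class LockEvent:
        tx_id: str
        owner: str
        amount: int
        target_chain: str
        signatures: set = field(default_factory=set)

    class WitnessSet:
        def __init__(self, witnesses, quorum):
            self.witnesses = set(witnesses)
            self.quorum = quorum              # e.g. 2 of 3 witnesses

        def sign(self, witness, event):
            if witness in self.witnesses:
                event.signatures.add(witness)

        def approved(self, event):
            return len(event.signatures & self.witnesses) >= self.quorum

    class SideChain:
        def __init__(self, name):
            self.name = name
            self.balances = {}                # main-token balances held on the side chain
            self.processed = set()

        def credit(self, event, witness_set):
            # Mint side-chain representations only for approved, unseen lock events.
            if witness_set.approved(event) and event.tx_id not in self.processed:
                self.balances[event.owner] = self.balances.get(event.owner, 0) + event.amount
                self.processed.add(event.tx_id)

    # Example: three witnesses with a 2-of-3 quorum.
    ws = WitnessSet({"w1", "w2", "w3"}, quorum=2)
    ev = LockEvent("tx-001", owner="alice", amount=10, target_chain="side-A")
    ws.sign("w1", ev); ws.sign("w2", ev)
    side = SideChain("side-A")
    side.credit(ev, ws)
    assert side.balances["alice"] == 10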

We consider the fair allocation of indivisible items to several agents with additional conflict constraints. These are represented by a conflict graph where each item corresponds to a vertex of the graph and edges in the graph represent incompatible pairs of items which should not be allocated to the same agent. This setting combines the issues of Partition and Independent Set and can be seen as a partial coloring of the conflict graph. In the resulting optimization problem each agent has its own valuation function for the profits of the items. We aim at maximizing the lowest total profit obtained by any of the agents. In a previous paper this problem was shown to be strongly \NP-hard for several well-known graph classes, e.g., bipartite graphs and their line graphs. On the other hand, it was shown that pseudo-polynomial time algorithms exist for the classes of chordal graphs, cocomparability graphs, biconvex bipartite graphs, and graphs of bounded treewidth. In this contribution we extend this line of research by developing pseudo-polynomial time algorithms that solve the problem for the classes of convex bipartite conflict graphs, graphs of bounded clique-width, and graphs of bounded tree-independence number. The algorithms are based on dynamic programming and also permit fully polynomial-time approximation schemes (FPTAS).
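In symbols (chosen here for illustration; the abstract does not fix notation): with items $V$, agents $1,\dots,n$ with valuations $v_a$, and conflict graph $G=(V,E)$, the task is a max-min partial coloring:
\[
\max_{S_1,\dots,S_n \subseteq V}\ \min_{a \in \{1,\dots,n\}} \sum_{j \in S_a} v_a(j)
\qquad \text{s.t.}\quad S_a \cap S_b = \emptyset \ \text{for } a \neq b, \ \text{and each } S_a \text{ is an independent set in } G.
\]
Since the coloring is partial, items may remain unassigned.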

Deep neural networks are susceptible to human-imperceptible adversarial perturbations. One of the strongest defense mechanisms is \emph{Adversarial Training} (AT). In this paper, we aim to address two predominant problems in AT. First, there is still little consensus on how to set hyperparameters with a performance guarantee, and customized settings impede fair comparison between different model designs in AT research. Second, robustly trained neural networks struggle to generalize well and suffer from severe overfitting. This paper focuses on the primary AT framework, Projected Gradient Descent Adversarial Training (PGD-AT). We approximate the dynamics of PGD-AT by a continuous-time Stochastic Differential Equation (SDE) and show that the diffusion term of this SDE determines robust generalization. An immediate implication of this theoretical finding is that robust generalization is positively correlated with the ratio between learning rate and batch size. We further propose a novel approach, \emph{Diffusion Enhanced Adversarial Training} (DEAT), which manipulates the diffusion term to improve robust generalization with virtually no extra computational burden. We theoretically show that DEAT obtains a tighter generalization bound than PGD-AT. Our extensive empirical investigation firmly attests that DEAT universally outperforms PGD-AT by a significant margin.
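For reference, a minimal PyTorch sketch of the baseline PGD-AT inner loop is given below; the hyperparameters are common CIFAR-style choices rather than the paper's exact settings, and DEAT's manipulation of the diffusion term is not shown.

    import torch
    import torch.nn.functional as F

    def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
        # L-infinity PGD with a random start (Madry et al.); values are illustrative.
        delta = torch.empty_like(x).uniform_(-eps, eps)
        delta = ((x + delta).clamp(0, 1) - x).requires_grad_(True)
        for _ in range(steps):
            loss = F.cross_entropy(model(x + delta), y)
            grad, = torch.autograd.grad(loss, delta)
            delta = (delta + alpha * grad.sign()).clamp(-eps, eps)
            delta = ((x + delta).clamp(0, 1) - x).detach().requires_grad_(True)
        return (x + delta).detach()

    def pgd_at_step(model, optimizer, x, y):
        # One PGD-AT update: train on the adversarial examples only.
        model.eval()                      # freeze BN statistics while attacking
        x_adv = pgd_attack(model, x, y)
        model.train()
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        optimizer.step()
        return loss.item()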

Filtering is concerned with online estimation of the state of a dynamical system from partial and noisy observations. In applications where the state of the system is high dimensional, ensemble Kalman filters are often the method of choice. These algorithms rely on an ensemble of interacting particles to sequentially estimate the state as new observations become available. Despite the practical success of ensemble Kalman filters, theoretical understanding is hindered by the intricate dependence structure of the interacting particles. This paper investigates ensemble Kalman filters that incorporate an additional resampling step to break the dependency between particles. The new algorithm is amenable to a theoretical analysis that extends and improves upon those available for filters without resampling, while also performing well in numerical examples.
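For concreteness, a minimal NumPy sketch of one analysis step of a stochastic (perturbed-observation) ensemble Kalman filter is given below; the closing Gaussian redraw is only one simple way to break the dependence between particles, and the paper's actual resampling scheme may differ.

    import numpy as np

    def enkf_analysis(ensemble, H, y, R, rng, resample=True):
        """ensemble: (N, d) particles, H: (m, d) observation matrix,
        y: (m,) observation, R: (m, m) observation noise covariance."""
        N, d = ensemble.shape
        A = ensemble - ensemble.mean(axis=0)          # anomalies
        C = A.T @ A / (N - 1)                         # empirical state covariance
        S = H @ C @ H.T + R                           # innovation covariance
        K = np.linalg.solve(S, H @ C).T               # Kalman gain C H^T S^{-1}, shape (d, m)
        # Perturbed observations give each particle its own innovation.
        y_pert = y + rng.multivariate_normal(np.zeros(len(y)), R, size=N)
        analysis = ensemble + (y_pert - ensemble @ H.T) @ K.T
        if resample:
            # Break the dependence induced by the shared gain: refit a Gaussian
            # to the analysis ensemble and redraw N i.i.d. particles from it.
            m_a = analysis.mean(axis=0)
            P_a = np.cov(analysis, rowvar=False)
            analysis = rng.multivariate_normal(m_a, P_a, size=N)
        return analysis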

We consider a Multi-Agent Path Finding (MAPF) setting where agents have been assigned a plan, but some agents are delayed during its execution. Instead of replanning from scratch when such a delay occurs, we propose delay introduction, whereby we delay some additional agents so that the remainder of the plan can be executed safely. We show that the corresponding decision problem is NP-complete in general. In practice, however, we can find optimal delay introductions using CBS for very large numbers of agents, and both planning time and the resulting plan length are comparable to, and sometimes better than, those of state-of-the-art replanning heuristics.
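The safety requirement behind delay introduction can be made concrete with a small sketch: after some agents' schedules are shifted, the joint plan must remain free of vertex and swap conflicts. The data structures and the brute-force check below are illustrative only, not the paper's formalism or its CBS-based solver.

    def is_safe(paths, delays):
        """paths: {agent: [v0, v1, ...]} (one vertex per timestep),
        delays: {agent: timesteps to wait at the start}.
        Returns True iff no two agents collide under the given delays."""
        def pos(agent, t):
            path, d = paths[agent], delays.get(agent, 0)
            i = max(0, min(t - d, len(path) - 1))   # wait, then follow, then stay at goal
            return path[i]

        agents = list(paths)
        horizon = max(len(p) for p in paths.values()) + max(delays.values(), default=0)
        for t in range(horizon + 1):
            for i, a in enumerate(agents):
                for b in agents[i + 1:]:
                    if pos(a, t) == pos(b, t):                       # vertex conflict
                        return False
                    if t > 0 and pos(a, t) == pos(b, t - 1) and pos(b, t) == pos(a, t - 1):
                        return False                                 # swap (edge) conflict
        return True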

The emergence of new communication technologies allows us to expand our understanding of distributed control and consider collaborative decision-making paradigms. With collaborative algorithms, certain local decision-making entities (or agents) are enabled to communicate and collaborate on their actions with one another to attain better system behavior. By limiting the amount of communication, these algorithms exist somewhere between centralized and fully distributed approaches. To understand the possible benefits of this inter-agent collaboration, we model a multi-agent system as a common-interest game in which groups of agents can collaborate on their actions to jointly increase the system welfare. We specifically consider $k$-strong Nash equilibria as the emergent behavior of these systems and address how well these states approximate the system optimal, formalized by the $k$-strong price of anarchy ratio. Our main contributions are in generating tight bounds on the $k$-strong price of anarchy in finite resource allocation games as the solution to a tractable linear program. By varying $k$, the maximum size of a collaborative coalition, we observe exactly how much performance is gained from inter-agent collaboration. To investigate further opportunities for improvement, we generate upper bounds on the maximum attainable $k$-strong price of anarchy when the agents' utility function can be designed.
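Under one common convention (notation chosen here for illustration), the $k$-strong price of anarchy of a class of games $\mathcal{G}$ with welfare function $W$ is
\[
\mathrm{SPoA}_k(\mathcal{G}) \;=\; \sup_{G \in \mathcal{G}} \; \frac{\max_{a} W(a)}{\min_{a \in \mathrm{NE}_k(G)} W(a)},
\]
where $\mathrm{NE}_k(G)$ is the set of $k$-strong Nash equilibria of $G$, i.e., joint actions from which no coalition of at most $k$ agents can deviate so that every member of the coalition strictly benefits. With $k=1$ this recovers the usual price of anarchy, and increasing $k$ can only shrink the equilibrium set.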

When is heterogeneity in the composition of an autonomous robotic team beneficial and when is it detrimental? We investigate and answer this question in the context of a minimally viable model that examines the role of heterogeneous speeds in perimeter defense problems, where defenders share a total allocated speed budget. We consider two distinct problem settings and develop strategies based on dynamic programming and on local interaction rules. We present a theoretical analysis of both approaches and our results are extensively validated using simulations. Interestingly, our results demonstrate that the viability of heterogeneous teams depends on the amount of information available to the defenders. Moreover, our results suggest a universality property: across a wide range of problem parameters the optimal ratio of the speeds of the defenders remains nearly constant.

Since cyberspace was consolidated as the fifth warfare domain, actors across the defense sector have engaged in an arms race toward cyber superiority, to which research, academic, and industrial stakeholders contribute from a dual-use perspective, largely built on a broad and heterogeneous heritage of developing and adopting civilian cybersecurity capabilities. In this context, greater awareness of the warfare environment and of the risks and impacts of cyber threats on kinetic actions has become a critical game-changer for military decision-makers. A major challenge in acquiring mission-centric Cyber Situational Awareness (CSA) is the dynamic inference and assessment of how situations occurring in the mission-supporting Information and Communications Technologies (ICT) propagate vertically up to the military tactical, operational, and strategic views. To contribute to acquiring CSA, this paper addresses a major gap in the cyber defence state of the art: the dynamic identification of Key Cyber Terrains (KCT) in a mission-centric context. The proposed KCT identification approach exploits the dependency degrees among tasks and assets defined by commanders as part of the assessment criteria, and correlates them with discoveries on the operational network and with the asset vulnerabilities identified throughout the supported mission. The proposal is presented as a reference model that highlights the key aspects of mission-centric KCT analysis and supports its enforcement, and it is accompanied by an illustrative application case.

Seamlessly interacting with humans or robots is hard because these agents are non-stationary. They update their policy in response to the ego agent's behavior, and the ego agent must anticipate these changes to co-adapt. Inspired by humans, we recognize that robots do not need to explicitly model every low-level action another agent will make; instead, we can capture the latent strategy of other agents through high-level representations. We propose a reinforcement learning-based framework for learning latent representations of an agent's policy, where the ego agent identifies the relationship between its behavior and the other agent's future strategy. The ego agent then leverages these latent dynamics to influence the other agent, purposely guiding them towards policies suitable for co-adaptation. Across several simulated domains and a real-world air hockey game, our approach outperforms the alternatives and learns to influence the other agent.
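As a hedged illustration of what capturing a latent strategy through high-level representations can look like in code, the PyTorch sketch below encodes the other agent's recent interaction history into a latent vector z and conditions the ego policy on it; the architecture and dimensions are assumptions, not the paper's model.

    import torch
    import torch.nn as nn

    class LatentStrategyEncoder(nn.Module):
        """Summarizes the other agent's recent observation-action history
        into a latent strategy vector z (sizes are illustrative)."""
        def __init__(self, obs_dim, act_dim, latent_dim=8, hidden=64):
            super().__init__()
            self.rnn = nn.GRU(obs_dim + act_dim, hidden, batch_first=True)
            self.head = nn.Linear(hidden, latent_dim)

        def forward(self, history):                 # (batch, T, obs_dim + act_dim)
            _, h = self.rnn(history)
            return self.head(h[-1])                 # latent strategy z

    class ConditionedPolicy(nn.Module):
        """Ego policy that acts on its own observation plus the inferred z."""
        def __init__(self, obs_dim, latent_dim, n_actions, hidden=64):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(obs_dim + latent_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, n_actions))

        def forward(self, obs, z):
            return self.net(torch.cat([obs, z], dim=-1))   # action logits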

Detecting carried objects is one of the requirements for developing systems to reason about activities involving people and objects. We present an approach to detect carried objects from a single video frame with a novel method that incorporates features from multiple scales. Initially, a foreground mask in a video frame is segmented into multi-scale superpixels. Then the human-like regions in the segmented area are identified by matching a set of features extracted from superpixels against learned features in a codebook. A carried object probability map is generated using the complement of the matching probabilities of superpixels to human-like regions and background information. A group of superpixels with high carried object probability and strong edge support is then merged to obtain the shape of the carried object. We applied our method to two challenging datasets, and the results show that our method is competitive with or better than the state-of-the-art.
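A toy NumPy sketch of the probability-map idea: each foreground superpixel's carried-object probability is taken as the complement of its match to the learned human-appearance codebook, attenuated where it resembles background. The combination rule here is a guess for illustration, not the paper's exact formula.

    import numpy as np

    def carried_object_probability(human_match, background_prob):
        """human_match, background_prob: per-superpixel probabilities in [0, 1].
        Returns per-superpixel carried-object probabilities."""
        p = (1.0 - human_match) * (1.0 - background_prob)
        return np.clip(p, 0.0, 1.0)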
