
Digital Twins (DTs) virtually model cyber-physical objects using Internet of Things (IoT) components (e.g., sensors) to gather and process sensitive information stored in the cloud. The trustworthiness of the streamed data is crucial and requires quantum safety and breach resiliency. Digital signatures are essential for scalable authentication and non-repudiation. Yet, the NIST PQC signature standards are exorbitantly costly for low-end IoT and do not consider forward security. Moreover, Post-Quantum (PQ) signatures lack aggregation, which is highly desirable to reduce the transmission and storage burdens in DTs. Hence, there is an urgent need for lightweight digital signatures that offer compromise resiliency and compactness while permitting an effective transition into the PQ era for DTs. We create a series of highly lightweight digital signatures called Hardware-ASsisted Efficient Signature (HASES) that meet these requirements. The core of HASES is a hardware-assisted cryptographic commitment construct oracle (CCO) that permits verifiers to obtain expensive commitments without signer interaction. We create three HASES schemes: PQ-HASES is a forward-secure PQ signature, LA-HASES is an efficient aggregate Elliptic-Curve signature, and HY-HASES is a novel hybrid scheme that combines PQ-HASES and LA-HASES with novel strong nesting and sequential aggregation. HASES does not require secure hardware on the signer. We prove that the HASES schemes are secure and implement them on commodity hardware and an 8-bit AVR ATmega2560. Our experiments confirm that PQ-HASES and LA-HASES are two orders of magnitude more signer-efficient than their PQ and conventional secure counterparts, respectively. HY-HASES outperforms NIST PQC and conventional signature combinations, offering a standard-compliant transitional solution for emerging DTs. We open-source the HASES schemes for public testing and adaptation.
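
To illustrate the forward-security property the abstract refers to, the following is a minimal, hedged sketch of key evolution through a one-way hash chain. It is not the HASES or PQ-HASES construction; an HMAC tag stands in for the real signing primitive purely to show why compromising the current key does not expose keys from earlier epochs.

```python
# Illustrative sketch only: forward-secure key evolution via a one-way hash chain.
# NOT the HASES/PQ-HASES construction; an HMAC tag stands in for the signing
# primitive to show that a key compromised at epoch t cannot reveal earlier keys
# (the hash chain cannot be inverted).
import hashlib
import hmac

def evolve_key(sk: bytes) -> bytes:
    """Derive the next epoch's key; the old key is then securely erased."""
    return hashlib.sha256(b"evolve" + sk).digest()

def tag_message(sk: bytes, msg: bytes) -> bytes:
    """Placeholder authentication tag under the current epoch's key."""
    return hmac.new(sk, msg, hashlib.sha256).digest()

sk = hashlib.sha256(b"initial seed").digest()   # epoch-0 key (toy value)
for epoch, msg in enumerate([b"reading-1", b"reading-2", b"reading-3"]):
    t = tag_message(sk, msg)
    print(f"epoch {epoch}: tag {t.hex()[:16]}...")
    sk = evolve_key(sk)  # forward security: the previous sk is discarded
```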

Related content

Differential privacy is a widely used notion of security that enables the processing of sensitive information. In short, differentially private algorithms map "neighbouring" inputs to close output distributions. Prior work has proposed several quantum extensions of differential privacy, each built on substantially different notions of neighbouring quantum states. In this paper, we propose a novel and general definition of neighbouring quantum states. We demonstrate that this definition captures the underlying structure of quantum encodings and can be used to provide exponentially tighter privacy guarantees for quantum measurements. Our approach combines the addition of classical and quantum noise and is motivated by the noisy nature of near-term quantum devices. Moreover, we also investigate an alternative setting where we are provided with multiple copies of the input state. In this case, differential privacy can be ensured with little loss in accuracy by combining concentration of measure and noise-adding mechanisms. En route, we prove the advanced joint convexity of the quantum hockey-stick divergence and demonstrate how this result can be applied to quantum differential privacy. Finally, we complement our theoretical findings with an empirical estimation of the certified adversarial robustness ensured by differentially private measurements.
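
For orientation, here is a hedged sketch of the *classical* hockey-stick divergence, which the quantum hockey-stick divergence discussed above generalizes. For output distributions P and Q of a mechanism on neighbouring inputs, (eps, delta)-differential privacy corresponds to E_{e^eps}(P || Q) <= delta; the toy randomized-response example below is only illustrative.

```python
# Classical hockey-stick divergence E_gamma(P || Q) = sum_x max(P(x) - gamma*Q(x), 0).
import numpy as np

def hockey_stick(p: np.ndarray, q: np.ndarray, gamma: float) -> float:
    return float(np.maximum(p - gamma * q, 0.0).sum())

# Toy example: randomized response on one bit, kept truthfully with probability 0.75.
p = np.array([0.75, 0.25])   # output distribution for input 0
q = np.array([0.25, 0.75])   # output distribution for input 1
eps = np.log(3)              # this mechanism is ln(3)-DP
print(hockey_stick(p, q, np.exp(eps)))   # 0.0, i.e. delta = 0
```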

Resilience testing, which measures the ability to minimize service degradation caused by unexpected failures, is crucial for microservice systems. The current practice for resilience testing relies on manually defining rules for different microservice systems. Due to the diverse business logic of microservices, there are no one-size-fits-all microservice resilience testing rules. As the quantity and dynamics of microservices and failures increase, manual configuration exhibits scalability and adaptivity issues. To overcome these two issues, we empirically compare the impacts of common failures in resilient and unresilient deployments of a benchmark microservice system. Our study demonstrates that the resilient deployment can block the propagation of degradation from system performance metrics (e.g., memory usage) to business metrics (e.g., response latency). In this paper, we propose AVERT, the first AdaptiVE Resilience Testing framework for microservice systems. AVERT first injects failures into microservices and collects the available monitoring metrics. Then AVERT ranks all the monitoring metrics according to their contributions to the overall service degradation caused by the injected failures. Lastly, AVERT produces a resilience index that quantifies how much the degradation in system performance metrics propagates to the degradation in business metrics. The higher the degradation propagation, the lower the resilience of the microservice system. We evaluate AVERT on two open-source benchmark microservice systems. The experimental results show that AVERT can accurately and efficiently test the resilience of microservice systems.
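
The snippet below is a hedged sketch of one simple way to turn "degradation propagation" into a resilience index; it is not AVERT's actual formula. It compares how much business metrics (e.g., response latency) degrade relative to system performance metrics (e.g., memory usage) during a hypothetical fault-injection window.

```python
# Hedged sketch (not AVERT's formula): a toy resilience index based on how much
# business-metric degradation tracks system-metric degradation under fault injection.
import numpy as np

def relative_degradation(baseline: np.ndarray, faulty: np.ndarray) -> float:
    """Mean relative change of a metric during fault injection vs. baseline."""
    return float(np.mean(np.abs(faulty - baseline) / (np.abs(baseline) + 1e-9)))

def resilience_index(sys_base, sys_fault, biz_base, biz_fault) -> float:
    """Higher index = less propagation from system-level to business-level metrics."""
    sys_deg = relative_degradation(sys_base, sys_fault)
    biz_deg = relative_degradation(biz_base, biz_fault)
    propagation = biz_deg / (sys_deg + 1e-9)
    return 1.0 / (1.0 + propagation)

# Toy data: memory usage spikes under the injected failure,
# but latency barely moves in the resilient deployment.
mem_base, mem_fault = np.array([0.4, 0.4, 0.4]), np.array([0.8, 0.9, 0.85])
lat_base, lat_fault = np.array([120., 118., 122.]), np.array([125., 124., 126.])
print(resilience_index(mem_base, mem_fault, lat_base, lat_fault))
```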

Cutting planes and branching are two of the most important algorithms for solving mixed-integer linear programs. For both algorithms, disjunctions play an important role, being used both as branching candidates and as the foundation for some cutting planes. We relate branching decisions and cutting planes to each other through the underlying disjunctions that they are based on, with a focus on Gomory mixed-integer cuts and their corresponding split disjunctions. We show that selecting branching decisions based on quality measures of Gomory mixed-integer cuts leads to relatively small branch-and-bound trees, and that the results improve when using cuts that more accurately represent the branching decisions. Finally, we show how the history of previously computed Gomory mixed-integer cuts can be used to improve the performance of the state-of-the-art hybrid branching rule of SCIP. Our results show a 4% decrease in solve time and an 8% decrease in the number of nodes over affected instances of MIPLIB 2017.
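
As background for readers unfamiliar with the cuts referenced above, here is a hedged sketch of the textbook Gomory mixed-integer (GMI) cut derived from one row of an optimal simplex tableau; it shows the underlying cut only, not the paper's branching-selection procedure.

```python
# Textbook GMI cut from a tableau row x_B = b_bar - sum_j a_bar[j] * x_j, where x_B
# is a basic integer variable with fractional value. The cut is
#     sum_j pi[j] * x_j >= f0,   with f0 = frac(b_bar).
import math

def gmi_coefficients(b_bar: float, a_bar: list, is_integer: list):
    f0 = b_bar - math.floor(b_bar)
    pi = []
    for a, integer_var in zip(a_bar, is_integer):
        if integer_var:
            fj = a - math.floor(a)
            pi.append(fj if fj <= f0 else f0 * (1 - fj) / (1 - f0))
        else:
            pi.append(a if a >= 0 else -a * f0 / (1 - f0))
    return pi, f0

# Toy row: basic integer variable at 2.3, two integer and one continuous nonbasics.
pi, rhs = gmi_coefficients(2.3, [0.4, -0.7, 0.5], [True, True, False])
print(pi, ">=", rhs)  # cut coefficients and right-hand side
```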

Traditional centralized multi-agent reinforcement learning (MARL) algorithms are sometimes impractical in complicated applications due to non-interactivity among agents, the curse of dimensionality, and computational complexity. This has motivated several decentralized MARL algorithms. However, existing decentralized methods only handle the fully cooperative setting, where massive amounts of information must be transmitted during training. The block coordinate gradient descent scheme they use, with successive independent actor and critic steps, simplifies the calculation but introduces serious bias. In this paper, we propose a flexible, fully decentralized actor-critic MARL framework that can incorporate most actor-critic methods and handle the large-scale general cooperative multi-agent setting. A primal-dual hybrid gradient descent algorithm framework is designed to learn individual agents separately for decentralization. From the perspective of each agent, policy improvement and value evaluation are jointly optimized, which stabilizes multi-agent policy learning. Furthermore, our framework achieves scalability and stability for large-scale environments and reduces information transmission through a parameter-sharing mechanism and a novel modeling-other-agents method based on theory of mind and online supervised learning. Extensive experiments in the cooperative Multi-agent Particle Environment and StarCraft II show that our decentralized MARL instantiation algorithms perform competitively against conventional centralized and decentralized methods.
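
The toy snippet below is a hedged, single-agent miniature of the contrast drawn above: rather than alternating full critic and actor phases (block coordinate style), the policy and value estimates are updated together on each sample from the same TD signal. It is only meant to illustrate the joint update, not the paper's decentralized framework.

```python
# Toy tabular actor-critic with a *joint* actor/critic update per sample.
import numpy as np

n_states, n_actions = 4, 2
theta = np.zeros((n_states, n_actions))   # policy logits (actor)
v = np.zeros(n_states)                    # state values (critic)
alpha_pi, alpha_v, gamma = 0.1, 0.2, 0.9
rng = np.random.default_rng(0)

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

s = 0
for _ in range(200):
    probs = softmax(theta[s])
    a = rng.choice(n_actions, p=probs)
    s_next = int(rng.integers(n_states))        # toy random transition
    r = 1.0 if s_next == 3 else 0.0             # toy reward
    td_error = r + gamma * v[s_next] - v[s]
    # Joint update: critic and actor move together on the same TD signal.
    v[s] += alpha_v * td_error
    grad_logpi = -probs
    grad_logpi[a] += 1.0                        # d log pi(a|s) / d theta[s]
    theta[s] += alpha_pi * td_error * grad_logpi
    s = s_next

print(softmax(theta[0]))   # learned action preferences in state 0
```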

A package query returns a package -- a multiset of tuples -- that maximizes or minimizes a linear objective function subject to linear constraints, thereby enabling in-database decision support. Prior work has established the equivalence of package queries to Integer Linear Programs (ILPs) and developed the SketchRefine algorithm for package query processing. While this algorithm was an important first step toward supporting prescriptive analytics scalably inside a relational database, it struggles when the data size grows beyond a few hundred million tuples or when the constraints become very tight. In this paper, we present Progressive Shading, a novel algorithm for processing package queries that can scale efficiently to billions of tuples and gracefully handle tight constraints. Progressive Shading solves a sequence of optimization problems over a hierarchy of relations, each resulting from an ever-finer partitioning of the original tuples into homogeneous groups until the original relation is obtained. This strategy avoids the premature discarding of high-quality tuples that can occur with SketchRefine. Our novel partitioning scheme, Dynamic Low Variance, can handle very large relations with multiple attributes and can dynamically adapt to both concentrated and spread-out sets of attribute values, provably outperforming traditional partitioning schemes such as KD-Tree. We further optimize our system by replacing our off-the-shelf optimization software with customized ILP and LP solvers, called Dual Reducer and Parallel Dual Simplex respectively, that are highly accurate and orders of magnitude faster.
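
To make the package-query-to-ILP equivalence concrete, here is a hedged toy formulation using the PuLP modeling library (used only for illustration; the paper develops its own solvers, Dual Reducer and Parallel Dual Simplex, which are not shown here). The "meal planner" relation, attribute names, and bounds are all invented for this example.

```python
# A package query expressed as an ILP: choose tuple multiplicities x_i to maximize
# total protein subject to a calorie cap and a package-size limit.
from pulp import LpProblem, LpMaximize, LpVariable, lpSum, value

# A tiny relation: (name, protein, calories)
tuples = [("oats", 10, 300), ("eggs", 12, 150), ("rice", 4, 200), ("beans", 9, 250)]

prob = LpProblem("package_query", LpMaximize)
x = [LpVariable(f"x_{name}", lowBound=0, upBound=3, cat="Integer")
     for name, _, _ in tuples]

prob += lpSum(p * xi for (_, p, _), xi in zip(tuples, x))            # objective
prob += lpSum(c * xi for (_, _, c), xi in zip(tuples, x)) <= 2000    # calorie cap
prob += lpSum(x) <= 6                                                # package size

prob.solve()
print({xi.name: int(xi.varValue) for xi in x}, "protein =", value(prob.objective))
```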

Elementary trapping sets (ETSs) are the main culprits for the performance of LDPC codes in the error-floor region. Due to the large number, complex structures, and computational difficulty of ETSs, eliminating dominant ETSs when designing LDPC codes is a pivotal issue for improving error-floor behavior. In practice, researchers commonly address this problem by avoiding special graph structures so as to free the Tanner graph of specific ETSs. In this paper, we derive the exact Tur\'an number of $\theta(1,2,2)$ and prove that every $(a,b)$-ETS in a variable-regular Tanner graph with degree $d_L(v)=\gamma$ must satisfy the bound $b\geq a\gamma-\frac{1}{2}a^2$, which improves the lower bound obtained by Amirzade when the girth is 6. For the case of girth 8, by limiting the relation between any two 8-cycles in the Tanner graph, we prove a similar inequality $b\geq a\gamma-\frac{a(\sqrt{8a-7}-1)}{2}$. Simulation results show that the designed codes have good performance with lower error floors over additive white Gaussian noise channels.
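
As a purely numerical illustration (not part of the paper's proof), the snippet below evaluates the two lower bounds stated above for a hypothetical variable-regular degree gamma = 4, showing how the girth-8 bound compares to the girth-6 bound as a grows.

```python
# Compare b >= a*gamma - a^2/2 (girth 6) with b >= a*gamma - a*(sqrt(8a-7)-1)/2 (girth 8).
import math

gamma = 4
for a in range(2, 11):
    girth6 = a * gamma - a * a / 2
    girth8 = a * gamma - a * (math.sqrt(8 * a - 7) - 1) / 2
    print(f"a={a:2d}  girth-6 bound: {girth6:6.2f}   girth-8 bound: {girth8:6.2f}")
# For a >= 4 the girth-8 bound is at least as large (strictly larger for a >= 5),
# i.e. it forces a larger b for the same trapping-set size a.
```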

Transition amplitudes and transition probabilities are relevant to many areas of physics simulation, including the calculation of response properties and correlation functions. These quantities can also be related to solving linear systems of equations. Here we present three related algorithms for calculating transition probabilities. First, we extend a previously published short-depth algorithm, allowing for the two input states to be non-orthogonal. Building on this first procedure, we then derive a higher-depth algorithm based on Trotterization and Richardson extrapolation that requires fewer circuit evaluations. Third, we introduce a tunable algorithm that allows for trading off circuit depth and measurement complexity, yielding an algorithm that can be tailored to specific hardware characteristics. Finally, we implement proof-of-principle numerics for models in physics and chemistry and for a subroutine in variational quantum linear solving (VQLS). The primary benefits of our approaches are that (a) arbitrary non-orthogonal states may now be used with small increases in quantum resources, (b) we (like another recently proposed method) entirely avoid subroutines such as the Hadamard test that may require three-qubit gates to be decomposed, and (c) in some cases fewer quantum circuit evaluations are required as compared to the previous state-of-the-art in NISQ algorithms for transition probabilities.
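
For reference, the following hedged sketch computes the quantity the above algorithms estimate, the transition probability |<psi_f| U |psi_i>|^2, by direct statevector simulation with numpy; it does not implement any of the paper's quantum circuits, and the single-qubit states and unitary are arbitrary examples.

```python
# Classical reference computation of a transition probability for (possibly
# non-orthogonal) states psi_i, psi_f and a unitary U.
import numpy as np

def transition_probability(psi_i, psi_f, U):
    amp = np.vdot(psi_f, U @ psi_i)       # <psi_f| U |psi_i>  (vdot conjugates psi_f)
    return float(np.abs(amp) ** 2)

# Example: U = Hadamard, |psi_i> = |0>, |psi_f> = (|0> + |1>)/sqrt(2), non-orthogonal to |0>.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
psi_i = np.array([1.0, 0.0])
psi_f = np.array([1.0, 1.0]) / np.sqrt(2)
print(transition_probability(psi_i, psi_f, H))   # 1.0 here, since H|0> = |psi_f>
```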

Computerization has created a digital ecological niche in which humans live in a state of interconnection that modifies their epigenetics. Within this hyper-datafied virtual space, the logged-in agent enhances their intellectual and rational abilities, giving rise to a new cognitive entity. Humans are evolving towards a new anthropological status that shifts the terms of the Digital History debate from History to the historian, compelling the latter to reflect on the positions of Fichte and Schelling regarding the mind-body-world relationship (the ecological niche). This reflection leads to the possibility of overcoming the crisis of History imposed by presentism, and to the necessity of redefining research methodology based on a new vision of the interconnection between the mind and the digital niche as an investigative tool.

By interacting, synchronizing, and cooperating with its physical counterpart in real time, the digital twin promises to enable an intelligent, predictive, and optimized modern city. By interconnecting massive physical entities and their virtual twins with inter-twin and intra-twin communications, the Internet of digital twins (IoDT) enables free data exchange, dynamic mission cooperation, and efficient information aggregation for composite insights across vast physical/virtual entities. However, as IoDT incorporates various cutting-edge technologies to spawn a new ecology, severe known and unknown security flaws and privacy invasions hinder its wide deployment. Besides, intrinsic characteristics of IoDT such as \emph{decentralized structure}, \emph{information-centric routing}, and \emph{semantic communications} entail critical challenges for security service provisioning in IoDT. To this end, this paper presents an in-depth review of the IoDT with respect to system architecture, enabling technologies, and security/privacy issues. Specifically, we first explore a novel distributed IoDT architecture with cyber-physical interactions and discuss its key characteristics and communication modes. Afterward, we investigate the taxonomy of security and privacy threats in IoDT, discuss the key research challenges, and review state-of-the-art defense approaches. Finally, we point out new trends and open research directions related to IoDT.

Automated Driving Systems (ADS) have made great achievements in recent years thanks to efforts from both academia and industry. A typical ADS is composed of multiple modules, including sensing, perception, planning, and control, which brings together the latest advances in multiple domains. Despite these achievements, safety assurance of such systems remains of great significance, since unsafe behavior of an ADS can bring catastrophic consequences and unacceptable economic and social losses. Testing is an important approach to system validation for deployment in practice; in the context of ADS, it is extremely challenging due to the system's complexity and multidisciplinarity. There has been a great deal of literature that focuses on the testing of ADS, and a number of surveys have emerged to summarize the technical advances. However, most of these surveys focus on system-level testing performed within software simulators and thereby ignore the distinct features of the individual modules. In this paper, we provide a comprehensive survey of the existing ADS testing literature that takes into account both module-level and system-level testing. Specifically, we make the following contributions: (1) we build a threat model that reveals the potential safety threats for each module of an ADS; (2) we survey the module-level testing techniques for ADS and highlight the technical differences affected by the properties of the modules; (3) we also survey the system-level testing techniques, focusing on empirical studies that take a bird's-eye view of the system, the problems arising from the collaboration between modules, and the gaps between ADS testing in simulators and in the real world; (4) we identify the challenges and opportunities in ADS testing, which facilitates future research in this field.
