
Security properties are often focused on the technological side of the system. One implicitly assumes that the users will behave in the right way to preserve the property at hand. In real life, this cannot be taken for granted. In particular, security mechanisms that are difficult and costly to use are often ignored by the users, and do not really defend the system against possible attacks. Here, we propose a graded notion of security based on the complexity of the user's strategic behavior. More precisely, we suggest that the level to which a security property $\varphi$ is satisfied can be defined in terms of (a) the complexity of the strategy that the user needs to execute to make $\varphi$ true, and (b) the resources that the user must employ on the way. The simpler and cheaper it is to obtain $\varphi$, the higher the degree of security. We demonstrate how the idea works in a case study based on an electronic voting scenario. To this end, we model the vVote implementation of the Prêt à Voter protocol for coercion-resistant and voter-verifiable elections. Then, we identify "natural" strategies for the voter to obtain receipt-freeness, and measure the effort they require from the voter. We also look at how hard it is for the coercer to compromise the election through a randomization attack.
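The graded notion can be made concrete with a toy scoring function. The sketch below is a minimal illustration under assumed encodings: each "natural" strategy is summarized by the number of steps the voter must perform and a resource cost, and the grade is driven by the cheapest strategy that enforces the property. The weights, the inverse scoring rule, and the example strategies are illustrative assumptions, not the paper's definitions.

```python
# Toy grading of a security property by the effort of the user's strategy.
# The encoding (steps, resource cost, weights) is an illustrative assumption,
# not the definition used in the paper.
from dataclasses import dataclass

@dataclass
class Strategy:
    name: str
    steps: int        # number of actions the voter must execute
    resources: float  # e.g., time or auxiliary devices needed

def security_grade(strategies, w_steps=1.0, w_res=1.0):
    """Higher grade = a simpler/cheaper strategy is available to enforce the property."""
    if not strategies:
        return 0.0  # no strategy enforces the property at all
    cheapest = min(w_steps * s.steps + w_res * s.resources for s in strategies)
    return 1.0 / (1.0 + cheapest)

# Hypothetical strategies a voter could use to obtain receipt-freeness.
grade = security_grade([
    Strategy("discard ballot receipt immediately", steps=1, resources=0.0),
    Strategy("verify the vote, then discard the receipt", steps=3, resources=1.0),
])
print(f"receipt-freeness grade: {grade:.2f}")
```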

Related content

Most modern computing tasks have digital electronic input and output data. Because of this constraint imposed by real-world use cases of computer systems, any analog computing accelerator, whether analog electronic or optical, must perform a digital-to-analog conversion on its input data and a subsequent analog-to-digital conversion on its output data. The energy and latency costs incurred by data conversion place performance limits on analog computing accelerators. To avoid this overhead, analog hardware would have to replace the full functionality of traditional digital electronic computer hardware, which is not currently possible for optical computing accelerators due to limitations in gain, input-output isolation, and information storage in optical hardware. This article presents a case study that profiles 27 benchmarks for an analog optical Fourier transform and convolution accelerator which we designed and built. The case study shows that an ideal optical Fourier transform and convolution accelerator can produce an average speedup of 9.4 times and a median speedup of 1.9 times across the benchmark set, but produces significant speedup only for pure Fourier transform (45.3 times) and convolution (159.4 times) applications.
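The gap between kernel-level and end-to-end speedups is essentially Amdahl's law plus conversion overhead. The sketch below reproduces that reasoning with assumed runtime fractions and conversion costs; the numbers are placeholders, not measurements from the 27 benchmarks.

```python
# Amdahl's-law style estimate of end-to-end speedup for an analog optical
# FFT/convolution accelerator. All numbers are illustrative assumptions.

def end_to_end_speedup(fft_fraction, kernel_speedup, conversion_overhead=0.0):
    """fft_fraction: share of total runtime spent in FFT/convolution kernels.
    kernel_speedup: speedup of those kernels on the optical accelerator.
    conversion_overhead: extra DAC/ADC time, as a fraction of total runtime."""
    accelerated = fft_fraction / kernel_speedup + conversion_overhead
    return 1.0 / ((1.0 - fft_fraction) + accelerated)

# A benchmark dominated by convolution benefits greatly...
print(end_to_end_speedup(fft_fraction=0.99, kernel_speedup=200))
# ...while a mixed workload sees only a modest end-to-end gain,
# and DAC/ADC overhead erodes it further.
print(end_to_end_speedup(fft_fraction=0.5, kernel_speedup=200,
                         conversion_overhead=0.05))
```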

Optimal transport aims to learn a mapping from sources to targets by minimizing the cost, which is typically defined as a function of distance. The solution to this problem consists of straight line segments optimally connecting sources to targets, and it does not exhibit branching. These optimal solutions stand in stark contrast with both natural and man-made transportation networks, where branching structures are prevalent. Here we discuss a fast heuristic branching method for optimal transport in networks and provide several applications.
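For equal numbers of unit-mass sources and targets, the non-branching optimum reduces to a minimum-cost assignment along straight segments. The sketch below solves such a toy instance with SciPy's assignment solver; the point sets are made up for illustration and the branching heuristic itself is not shown.

```python
# Minimum-cost matching of unit-mass sources to targets: the classical
# (non-branching) optimal transport solution for a toy instance.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
sources = rng.random((5, 2))   # 5 source points in the unit square
targets = rng.random((5, 2))   # 5 target points

# Cost = Euclidean distance between every source-target pair.
cost = np.linalg.norm(sources[:, None, :] - targets[None, :, :], axis=2)

rows, cols = linear_sum_assignment(cost)          # optimal pairing
print("total transport cost:", cost[rows, cols].sum())
for i, j in zip(rows, cols):
    print(f"source {i} -> target {j} (straight segment, no branching)")
```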

In multivariate functional data analysis, different functional covariates can be homogeneous in some sense. The hidden homogeneity structure is informative about the connectivity or association of different covariates. Covariates with pronounced homogeneity can be analyzed jointly within the same group, which gives rise to a parsimonious way of modeling multivariate functional data. In this paper, we develop a multivariate functional regression technique based on a new regularization approach termed "coefficient shape alignment" to tackle the potential homogeneity of different functional covariates. The modeling procedure includes two main steps: first, the unknown grouping structure is detected with the new regularization approach, which aggregates covariates into disjoint groups; then, a grouped multivariate functional regression model is established based on the detected grouping structure. In this grouped model, the coefficient functions of covariates in the same homogeneous group share the same shape, invariant to scaling. The new regularization approach builds on penalizing the discrepancy of coefficient shape. The consistency of the detected grouping structure is thoroughly investigated, and conditions that guarantee recovery of the underlying true grouping structure are derived. The asymptotic properties of the model estimates are also established. Extensive simulation studies are conducted to investigate the finite-sample properties of the developed methods. The practical utility of the proposed methods is illustrated in an analysis of sugar quality evaluation. This work provides a novel means of analyzing the underlying homogeneity of functional covariates and of developing parsimonious model structures for multivariate functional data.
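To fix ideas, one plausible way to penalize the discrepancy of coefficient shape, i.e., to compare coefficient functions up to a positive rescaling, is sketched below. The notation and the exact functional form are assumptions for illustration, not necessarily the penalty used in the paper.

```latex
% One illustrative form of a shape-discrepancy penalty between the coefficient
% functions \beta_j and \beta_k of two functional covariates: compare them
% after the best possible rescaling, so that covariates whose coefficients
% differ only by a positive scale factor incur no penalty.
\[
  d_{\mathrm{shape}}(\beta_j, \beta_k)
  \;=\;
  \min_{c > 0} \int_{\mathcal{T}} \bigl( \beta_j(t) - c\,\beta_k(t) \bigr)^2 \, dt ,
  \qquad
  \text{penalty} \;=\; \lambda \sum_{j < k} d_{\mathrm{shape}}(\beta_j, \beta_k).
\]
```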

Autonomous systems are emerging in many application domains. With recent advancements in artificial intelligence and machine learning, sensor technology, perception algorithms, and robotics, scenarios that previously required strong human involvement can now be handled by autonomous systems. With this independence from human control, the cybersecurity of such systems becomes even more critical, as no human intervention is possible in case of undesired behavior. In this context, this paper discusses emerging security challenges in autonomous systems design, which arise in areas such as autonomous incident response, risk assessment, data availability, systems interaction, trustworthiness, updatability, access control, and the reliability and explainability of machine learning methods. In each of these areas, the paper reviews the state of the art, identifies emerging security challenges, and proposes research directions to address them, with the aim of developing secure autonomous systems.

Regulated industries, such as healthcare and finance, are starting to move parts of their data and workloads to the public cloud. However, they are still reluctant to trust the public cloud with their most sensitive records, and hence leave those on premises, leveraging a hybrid cloud architecture. We address the security and performance challenges of big data analytics in a hybrid cloud through a real-life use case from a hospital. In this use case, the hospital collects sensitive patient data and wants to run analytics on it in order to lower antibiotic resistance, a significant challenge in healthcare. We show that it is possible to run large-scale analytics on data that is securely stored in the public cloud, encrypted using Apache Parquet Modular Encryption (PME), without significant performance losses, even when the secret encryption keys are stored on premises. PME is a standard mechanism for data encryption and key management that is not specific to any public cloud, and therefore helps prevent vendor lock-in. It also provides privacy and integrity guarantees and enables granular access control to the data. We also present an innovation in PME that lowers the performance hit incurred by calls to the Key Management Service. Our solution therefore makes it possible to protect large amounts of sensitive data in hybrid clouds while still efficiently gaining valuable insights from them.
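The general idea behind reducing KMS round trips is envelope encryption: many data encryption keys are wrapped under a small number of key-encryption keys, and unwrapped keys are cached locally. The sketch below is an illustrative toy using the cryptography package's Fernet primitive and an in-memory stand-in for a KMS; it is not the PME implementation, and the class and function names are assumptions.

```python
# Toy envelope encryption with a cached key-encryption key (KEK) to cut down
# round trips to a key management service. Illustrative only; not Apache
# Parquet Modular Encryption itself.
from functools import lru_cache
from cryptography.fernet import Fernet

class ToyKMS:
    """Stands in for an on-premises KMS that unwraps key-encryption keys."""
    def __init__(self):
        self._master = Fernet(Fernet.generate_key())
        self.calls = 0
    def wrap(self, kek: bytes) -> bytes:
        return self._master.encrypt(kek)
    def unwrap(self, wrapped_kek: bytes) -> bytes:
        self.calls += 1                      # each unwrap is a remote call
        return self._master.decrypt(wrapped_kek)

kms = ToyKMS()
kek = Fernet.generate_key()
wrapped_kek = kms.wrap(kek)

@lru_cache(maxsize=32)
def unwrap_cached(wrapped: bytes) -> bytes:
    return kms.unwrap(wrapped)

# Many data keys (e.g., one per file or column) are wrapped locally under the
# KEK, so handling N objects needs one KMS call instead of N.
for _ in range(100):
    data_key = Fernet.generate_key()
    wrapped_data_key = Fernet(unwrap_cached(wrapped_kek)).encrypt(data_key)
print("KMS calls for 100 data keys:", kms.calls)   # -> 1
```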

By allowing users to erase their data's impact on federated learning models, federated unlearning protects users' right to be forgotten and their data privacy. Despite a burgeoning body of research on the technical feasibility of federated unlearning, there is a paucity of literature investigating the considerations behind users' requests for data revocation. This paper proposes a non-cooperative game framework to study users' data revocation strategies in federated unlearning. We prove the existence of a Nash equilibrium. However, users' best-response strategies are coupled via model performance and unlearning costs, which makes the equilibrium computation challenging. We obtain the Nash equilibrium by establishing its equivalence with a much simpler auxiliary optimization problem. We also summarize users' multi-dimensional attributes into a single-dimensional metric and derive a closed-form characterization of the equilibrium when users' unlearning costs are negligible. Moreover, we compare the cases of allowing and forbidding partial data revocation in federated unlearning. Interestingly, the results reveal that, due to the game structure, allowing partial revocation does not necessarily increase users' data contributions or payoffs. Additionally, we demonstrate that positive externalities may exist between users' data revocation decisions when users incur unlearning costs, whereas this is not the case when their unlearning costs are negligible.
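The coupling of best responses through model performance can be illustrated with a toy version of such a game: each user chooses how much data to keep contributed (i.e., not revoke), all users share a concave benefit from the total retained data, and each pays a privacy cost on its own retained data. Best-response iteration on this assumed payoff converges to a fixed point, a Nash equilibrium of the toy game; the functional forms and parameters are illustrative, not the paper's model.

```python
# Best-response dynamics for a toy data-revocation game.
# Payoff_i(x) = benefit(total retained data) - privacy_cost_i * x_i,
# with x_i in [0, d_i] the amount of data user i keeps contributed.
# Functional forms and parameters are illustrative assumptions.
import numpy as np

d = np.array([4.0, 3.0, 2.0])            # each user's data endowment
privacy_cost = np.array([0.2, 0.3, 0.5])

def benefit(total):                      # concave model-performance benefit
    return np.log(1.0 + total)

def best_response(i, x):
    # one-dimensional grid search over user i's retained data
    grid = np.linspace(0.0, d[i], 401)
    totals = grid + np.delete(x, i).sum()
    payoffs = benefit(totals) - privacy_cost[i] * grid
    return grid[np.argmax(payoffs)]

x = d.copy()                             # start from "no revocation"
for _ in range(100):
    x_new = np.array([best_response(i, x) for i in range(len(d))])
    if np.allclose(x_new, x, atol=1e-6):
        break
    x = x_new
print("equilibrium retained data per user:", np.round(x, 2))
```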

Semantic communication, recognized as a promising technology for future intelligent applications, has received widespread research attention. Despite its potential to enhance transmission reliability, especially in low signal-to-noise ratio (SNR) environments, the critical issue of resource allocation and compatibility in dynamic wireless environments remains largely unexplored. In this paper, we propose an adaptive semantic resource allocation paradigm with semantic-bit quantization (SBQ) that is compatible with existing wireless communications and resolves the inaccurate environment perception introduced by the additional mapping between semantic metrics and transmission metrics. To characterize the performance of semantic communication networks, we propose, for the first time, a quality of service for semantic communication (SC-QoS) comprising semantic quantization efficiency (SQE) and transmission latency. The problem of maximizing the overall effective SC-QoS is formulated by jointly optimizing the transmit beamforming of the base station, the bits used for semantic representation, the subchannel assignment, and the bandwidth resource allocation. To address this non-convex problem, we propose an intelligent resource allocation scheme based on a hybrid deep reinforcement learning (DRL) algorithm, in which the intelligent agent perceives both semantic tasks and the dynamic wireless environment. Simulation results demonstrate that our design can effectively combat semantic noise and achieves superior performance compared to several benchmark schemes. Furthermore, compared to resource allocation schemes based on the mapping-guided paradigm, our adaptive scheme achieves up to a 13% improvement in SC-QoS.
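Schematically, the joint design can be read as a mixed-integer problem of the following shape. The symbols below (beamformers $\mathbf{w}_k$, semantic-representation bits $b_k$, subchannel assignment $\alpha_{k,n}$, bandwidth shares $\rho_n$, and a per-user SC-QoS function $Q_k(\cdot)$) are assumed notation for illustration rather than the paper's exact formulation.

```latex
% Schematic form of the SC-QoS maximization (assumed notation, for illustration):
\[
  \max_{\{\mathbf{w}_k\},\,\{b_k\},\,\{\alpha_{k,n}\},\,\{\rho_n\}}
  \;\sum_{k} Q_k\bigl(\mathbf{w}_k,\, b_k,\, \boldsymbol{\alpha}_k,\, \boldsymbol{\rho}\bigr)
  \quad \text{s.t.} \quad
  \sum_{k} \|\mathbf{w}_k\|^2 \le P_{\max},\;\;
  \alpha_{k,n} \in \{0,1\},\;\;
  \sum_{k} \alpha_{k,n} \le 1 \;\;\forall n,\;\;
  \sum_{n} \rho_n \le 1 .
\]
```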

For Reinforcement Learning (RL) to be useful in real systems, it is crucial that agents are robust to noise and adversarial attacks. In adversarial RL, an external attacker has the power to manipulate the victim agent's interaction with the environment. We study the full class of online manipulation attacks, which includes (i) state attacks, (ii) observation attacks (a generalization of perceived-state attacks), (iii) action attacks, and (iv) reward attacks. We show that the attacker's problem of designing a stealthy attack that maximizes its own expected reward, which often corresponds to minimizing the victim's value, is captured by a Markov Decision Process (MDP) that we call a meta-MDP, since it is not the true environment but a higher-level environment induced by the attacked interaction. We show that the attacker can derive optimal attacks by planning in polynomial time or by learning with polynomial sample complexity using standard RL techniques. We argue that the optimal defense policy for the victim can be computed as the solution to a stochastic Stackelberg game, which can be further simplified into a partially observable turn-based stochastic game (POTBSG). Neither the attacker nor the victim would benefit from deviating from their respective optimal policies, so such solutions are truly robust. Although the defense problem is NP-hard, we show that optimal Markovian defenses can be computed (learned) in polynomial time (with polynomial sample complexity) in many scenarios.
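Since the attacker's problem is itself an MDP, standard dynamic programming applies. The sketch below runs value iteration on a tiny made-up meta-MDP (states, attack actions, rewards, and transition probabilities are random placeholders) to show how an optimal attack policy would be recovered by planning.

```python
# Value iteration on a tiny, made-up "meta-MDP" describing the attacker's
# choice among manipulation actions. States/actions/rewards are placeholders.
import numpy as np

n_states, n_actions, gamma = 4, 3, 0.95
rng = np.random.default_rng(1)

# P[a, s, s'] = transition probability; R[s, a] = attacker's expected reward.
P = rng.random((n_actions, n_states, n_states))
P /= P.sum(axis=2, keepdims=True)
R = rng.random((n_states, n_actions))

V = np.zeros(n_states)
for _ in range(1000):
    Q = R + gamma * np.einsum("asn,n->sa", P, V)   # Bellman backup
    V_new = Q.max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-8:
        break
    V = V_new

attack_policy = Q.argmax(axis=1)                   # optimal attack per state
print("optimal attack action in each state:", attack_policy)
```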

This manuscript portrays optimization as a process. In many practical applications the environment is so complex that it is infeasible to lay out a comprehensive theoretical model and apply classical algorithmic theory and mathematical optimization. It is then necessary, as well as beneficial, to take a robust approach: apply an optimization method that learns as it goes, drawing on experience as more aspects of the problem are observed. This view of optimization as a process has become prominent in varied fields and has led to spectacular successes in models and systems that are now part of our daily lives.
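The "learning as one goes" view is exactly the setting of online convex optimization. A minimal sketch of online gradient descent on a stream of loss functions is shown below; the quadratic losses and step-size schedule are illustrative assumptions, not taken from the manuscript.

```python
# Online gradient descent: update the decision after each newly observed loss,
# rather than solving one fixed offline model. Losses here are illustrative.
import numpy as np

rng = np.random.default_rng(2)
x = np.zeros(3)                          # current decision
eta = 0.1                                # base step size

for t in range(1, 201):
    target = rng.normal(size=3)          # revealed only after we commit to x
    grad = 2.0 * (x - target)            # gradient of the loss ||x - target||^2
    x = x - (eta / np.sqrt(t)) * grad    # decreasing step size
print("decision after 200 rounds:", np.round(x, 3))
```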

In the era of deep learning, modeling for most NLP tasks has converged to several mainstream paradigms. For example, we usually adopt the sequence labeling paradigm for a range of tasks such as POS tagging, NER, and chunking, and the classification paradigm for tasks like sentiment analysis. With the rapid progress of pre-trained language models, recent years have seen a rising trend of paradigm shift: solving one NLP task by reformulating it as another. Paradigm shifts have achieved great success on many tasks and have become a promising way to improve model performance. Moreover, some of these paradigms have shown great potential to unify a large number of NLP tasks, making it possible to build a single model that handles diverse tasks. In this paper, we review this phenomenon of paradigm shift in recent years, highlighting several paradigms that have the potential to solve different NLP tasks.
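To give a concrete flavor of a paradigm shift: the same NER example can be posed in the sequence labeling paradigm or reformulated as span-extraction reading comprehension. The string formats below are illustrative and not tied to any specific model or dataset.

```python
# Two paradigms for the same NER example: sequence labeling vs. machine
# reading comprehension (span extraction). Formats are illustrative only.
sentence = ["Barack", "Obama", "visited", "Paris", "yesterday"]

# Paradigm 1: sequence labeling -- one BIO tag per token.
bio_tags = ["B-PER", "I-PER", "O", "B-LOC", "O"]

# Paradigm 2: reading comprehension -- one question per entity type,
# answered with a span of the original sentence.
mrc_examples = [
    {"question": "Which person is mentioned in the text?",
     "context": " ".join(sentence), "answer_span": "Barack Obama"},
    {"question": "Which location is mentioned in the text?",
     "context": " ".join(sentence), "answer_span": "Paris"},
]

for tok, tag in zip(sentence, bio_tags):
    print(f"{tok}\t{tag}")
print(mrc_examples[0])
```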
