
The pervasive use of smart speakers has raised numerous privacy concerns. While work to date provides an understanding of user perceptions of these threats, limited research focuses on how we can mitigate these concerns, either through redesigning the smart speaker or through dedicated privacy-preserving interventions. In this paper, we present the design and prototyping of two privacy-preserving interventions: `Obfuscator', targeted at disabling recording at the microphones, and `BlackOut', targeted at disabling power to the smart speaker. We present our findings from a technology probe study involving 24 households that interacted with our prototypes; the primary objective was to gain a better understanding of the design space for technological interventions that might address these concerns. Our data and findings reveal complex trade-offs among utility, privacy, and usability, and stress the importance of multi-functionality, aesthetics, ease of use, and form factor. We discuss the implications of our findings for the development of subsequent interventions and the future design of smart speakers.

Related Content

Increasing concerns about data privacy and security are driving an emerging field that studies privacy-preserving machine learning on isolated data sources, i.e., federated learning. One class of federated learning, vertical federated learning, where different parties hold different features for common users, has great potential to drive a wider variety of business cooperation among enterprises in many fields. In machine learning, decision tree ensembles such as gradient boosting decision trees (GBDT) and random forests are widely applied, powerful models with high interpretability and modeling efficiency. However, interpretability is compromised in state-of-the-art vertical federated learning frameworks such as SecureBoost, which anonymize features to avoid possible data breaches. To address this issue in the inference process, in this paper we propose Fed-EINI, which protects data privacy while allowing the disclosure of feature meaning, by concealing decision paths with a communication-efficient secure computation method for inference outputs. The advantages of Fed-EINI are demonstrated through both theoretical analysis and extensive numerical results.
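Fed-EINI's actual protocol is not reproduced here, but the idea of computing an aggregate inference output without revealing any party's contribution can be illustrated with additive secret sharing, a standard building block of secure computation. This is a minimal sketch with hypothetical integer scores; the modulus choice is illustrative.

```python
import random

PRIME = 2**61 - 1  # illustrative modulus for the share arithmetic

def share(value, n_parties):
    """Split an integer into n additive shares that sum to value mod PRIME."""
    shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

def reconstruct(shares):
    """Recover the secret by summing all shares mod PRIME."""
    return sum(shares) % PRIME

# Each party holds a partial inference score; only the sum is ever revealed.
partial_scores = [12, 7, 30]
all_shares = [share(s, 3) for s in partial_scores]
# Party j sums the j-th share of every secret locally; no single share leaks anything.
aggregated = [sum(col) % PRIME for col in zip(*all_shares)]
print(reconstruct(aggregated))  # 49 == 12 + 7 + 30
```

Reconstructing the aggregated shares yields the total score while each individual party's score remains hidden behind uniformly random shares.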

Privacy is important for all individuals in everyday life. Emerging technologies such as AR-enabled smartphones, social networking applications, and artificial-intelligence-driven modes of surveillance tend to intrude on privacy. This study aimed to explore and discover various concerns related to the perception of privacy in this era of ubiquitous technologies. It employed an online survey questionnaire to study user perspectives on privacy. Purposive sampling was used to collect data from 60 participants, and inductive thematic analysis was used to analyze the data. Our study discovered key themes such as attitudes towards privacy in public and private spaces, privacy awareness, consent seeking, dilemmas and confusions related to various technologies, and the impact of attitudes and beliefs on individuals' actions to protect themselves from invasion of privacy in both public and private spaces. These themes interacted among themselves and influenced the formation of various actions; they acted as core principles that shaped actions preventing invasion of privacy for both participants and bystanders. The findings of this study can help improve the privacy and personalization of various emerging technologies. This study contributes to privacy by design and positive design by considering the psychological needs of users, suggesting that the findings can be applied in the areas of experience design, positive technologies, social computing, and behavioral interventions.

Activity-based learning helps students learn through participation. As part of this learning scheme, a virtual codeathon activity was conducted for 180 undergraduate students, focusing on the analysis and design of solutions to crucial real-world problems in the Covid-19 pandemic. In this paper, we analyze the problem-solving skills students demonstrate when given a single problem statement. Evaluators can further collate these multiple solutions into one optimal solution. The codeathon activity strengthens the students' practical approach to analysis and design.

Speech synthesis, voice cloning, and voice conversion techniques present severe privacy and security threats to users of voice user interfaces (VUIs). These techniques transform one or more elements of a speech signal, e.g., identity and emotion, while preserving linguistic information. Adversaries may use advanced transformation tools to trigger a spoofing attack using fraudulent biometrics for a legitimate speaker. Conversely, such techniques have been used to generate privacy-transformed speech by suppressing personally identifiable attributes in voice signals, achieving anonymization. Prior works have studied the security and privacy vectors in parallel, which raises the alarm that if a benign user can achieve privacy through a transformation, a malicious user can likewise break security by bypassing the anti-spoofing mechanism. It remains unclear what the vulnerabilities in one domain imply for the other, and what dynamic interactions exist between them; a better understanding of these aspects is crucial for assessing and mitigating vulnerabilities inherent in VUIs and for building effective defenses. In this paper, we take a step towards balancing two seemingly conflicting requirements: security and privacy. Specifically, (i) we investigate the applicability of current voice anonymization methods by deploying a tandem framework that jointly combines anti-spoofing and authentication models, and evaluate the performance of these methods; (ii) examining analytical and empirical evidence, we reveal a duality between the two mechanisms, as they offer different ways to achieve the same objective, and we show that leveraging one vector significantly amplifies the effectiveness of the other; and (iii) we demonstrate that to effectively defend VUIs from potential attacks, it is necessary to investigate the attacks from multiple complementary perspectives (security and privacy).
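The tandem framework described above combines two decisions per trial. A minimal sketch (all scores and thresholds hypothetical): a trial is accepted only when the anti-spoofing countermeasure judges it bona fide and the speaker verifier judges it the target speaker.

```python
def tandem_decision(cm_score, asv_score, cm_threshold=0.5, asv_threshold=0.5):
    """Accept a trial only if the countermeasure (CM) deems it bona fide
    AND the automatic speaker verification (ASV) model deems it the target."""
    is_bonafide = cm_score >= cm_threshold
    is_target = asv_score >= asv_threshold
    return is_bonafide and is_target

# A spoofed trial may fool the ASV (high asv_score) yet be caught by the CM.
trials = [
    {"cm": 0.9, "asv": 0.8},  # genuine target speaker  -> accept
    {"cm": 0.1, "asv": 0.9},  # spoof, caught by the CM -> reject
    {"cm": 0.8, "asv": 0.2},  # genuine non-target      -> reject
]
decisions = [tandem_decision(t["cm"], t["asv"]) for t in trials]
print(decisions)  # [True, False, False]
```

The same structure makes the duality visible: anonymized (privacy-transformed) speech that evades the CM would be treated like a spoof-resistant input, which is why the two vectors must be evaluated jointly.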

In this article we present hygiea, an end-to-end blockchain-based solution for the Covid-19 pandemic. hygiea has two main objectives. The first is to allow governments to issue Covid-19-related certificates to citizens, which designated verifiers can check to ensure safer workplaces. The second is to provide experts and decision makers with the tools to better understand the impact of the pandemic through statistical models built on top of the data collected by the platform. This work covers all steps of the certificate issuance, verification, and revocation cycles, with well-defined roles for all stakeholders. We also propose a governance model implemented via smart contracts, ensuring security, transparency, and auditability. Finally, we propose techniques for deriving statistical models that can be used by decision makers.
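hygiea's smart contracts are not reproduced here; the issue/verify/revoke lifecycle it describes can nevertheless be sketched in plain Python, with an HMAC tag standing in for an on-chain issuer signature and a set standing in for the on-chain revocation registry. All names and keys are hypothetical.

```python
import hashlib
import hmac
import json

ISSUER_KEY = b"issuer-secret-key"  # stand-in for the government issuer's signing key
revocation_list = set()            # stand-in for an on-chain revocation registry

def issue(payload: dict) -> dict:
    """Issuer signs a canonical encoding of the certificate payload."""
    body = json.dumps(payload, sort_keys=True).encode()
    tag = hmac.new(ISSUER_KEY, body, hashlib.sha256).hexdigest()
    return {"payload": payload, "tag": tag}

def verify(cert: dict) -> bool:
    """Verifier checks the signature, then consults the revocation registry."""
    body = json.dumps(cert["payload"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, cert["tag"]):
        return False                           # signature invalid
    return cert["tag"] not in revocation_list  # valid and not revoked

def revoke(cert: dict) -> None:
    revocation_list.add(cert["tag"])

cert = issue({"citizen": "alice", "status": "vaccinated"})
print(verify(cert))  # True
revoke(cert)
print(verify(cert))  # False
```

In the real system the verifier would not share a secret with the issuer; a public-key signature and a smart-contract registry replace the HMAC and the local set.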

The coronavirus disease 2019 (COVID-19) pandemic is an unprecedented global public health challenge. In the United States (US), state governments have implemented various non-pharmaceutical interventions (NPIs), such as physical distancing closures (lockdowns), stay-at-home orders, and mandatory facial masks in public, in response to the rapid spread of COVID-19. To evaluate the effectiveness of these NPIs, we propose a nested case-control design with propensity score weighting under the quasi-experiment framework to estimate the average intervention effect on disease transmission across states. We further develop a method to test for factors that moderate intervention effects to assist precision public health intervention. Our method accounts for the underlying dynamics of disease transmission and balances state-level pre-intervention characteristics. We prove that our estimator identifies the causal intervention effect under stated assumptions. We apply this method to US COVID-19 incidence data to estimate the effects of six interventions. We show that lockdown has the largest effect on reducing transmission and that reopening bars significantly increases transmission. States with a higher percentage of non-white population are at greater risk of increased $R_t$ associated with reopening bars.
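The paper's nested case-control estimator is not reproduced here, but the propensity score weighting it builds on can be sketched with the basic inverse-propensity-weighted (IPW) average treatment effect. All units, outcomes, and propensity scores below are hypothetical.

```python
# Each unit: (treated?, outcome, propensity score e = P(treated | covariates)).
units = [
    (1, 10.0, 0.8), (1, 12.0, 0.5), (1, 11.0, 0.5),
    (0,  6.0, 0.2), (0,  5.0, 0.5), (0,  7.0, 0.2),
]

def ipw_ate(units):
    """Horvitz-Thompson style IPW estimate of the average treatment effect:
    weight treated outcomes by 1/e and control outcomes by 1/(1-e), so each
    group stands in for the full covariate distribution."""
    n = len(units)
    treated = sum(y / e for t, y, e in units if t == 1)
    control = sum(y / (1 - e) for t, y, e in units if t == 0)
    return treated / n - control / n

print(ipw_ate(units))
```

The weighting re-balances pre-treatment characteristics between groups, which is the same role the state-level propensity weights play in the paper's quasi-experimental comparison.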

In the given technology-driven era, smart cities are the next frontier of technology, aiming to improve the quality of people's lives. Many research works take a holistic approach towards future smart city development. In this paper, we introduce future smart cities that leverage blockchain technology in areas such as data security, energy and waste management, governance, transport, supply chains (including emergency events), and environmental monitoring. Blockchain, being a decentralized immutable ledger, has the potential to promote the development of smart cities by guaranteeing transparency, data security, interoperability, and privacy. In particular, using blockchain in emergency events provides interoperability between the many parties involved in the response, increases the timeliness of services, and establishes transparency. However, if fee-based or first-come-first-served processing is used, emergency events may be delayed due to competition, threatening people's lives. There is thus a need for transaction prioritization based on the priority of information and for quick creation of blocks (a variable-interval block creation mechanism). Also, since leaders ensure transaction prioritization while generating blocks, leader rotation and a proper election procedure become important for the prioritization process to take place honestly and efficiently. In our consensus protocol, we deploy a machine learning (ML) algorithm to achieve efficient leader election and design a novel dynamic block creation algorithm. To ensure honest assessment from followers of the blocks generated by leaders, a peer-prediction-based verification mechanism is proposed. Both security analysis and simulation experiments are carried out to demonstrate the robustness and accuracy of our proposed scheme.
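The paper's ML-based leader election and dynamic block creation are not reproduced here, but the core idea of priority-based transaction ordering, as opposed to fee-based or first-come-first-served ordering, can be sketched with a priority queue. Transaction names and priority levels are hypothetical.

```python
import heapq
import itertools

counter = itertools.count()  # tie-breaker that preserves arrival order within a priority

def submit(pool, tx, priority):
    """Add a transaction to the pool; lower number = higher priority,
    so emergency transactions use priority 0."""
    heapq.heappush(pool, (priority, next(counter), tx))

def create_block(pool, block_size):
    """The leader pops the highest-priority transactions into the next block."""
    return [heapq.heappop(pool)[2] for _ in range(min(block_size, len(pool)))]

pool = []
submit(pool, "payment-A", priority=2)
submit(pool, "ambulance-dispatch", priority=0)  # emergency event
submit(pool, "payment-B", priority=2)
submit(pool, "fire-alert", priority=0)          # emergency event
block = create_block(pool, 3)
print(block)  # ['ambulance-dispatch', 'fire-alert', 'payment-A']
```

Emergency transactions jump ahead of routine payments regardless of arrival order, which is exactly the property fee-based ordering cannot guarantee.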

As data are increasingly stored in different silos and societies become more aware of data privacy issues, the traditional centralized training of artificial intelligence (AI) models is facing efficiency and privacy challenges. Recently, federated learning (FL) has emerged as an alternative solution and continues to thrive in this new reality. Existing FL protocol designs have been shown to be vulnerable to adversaries within or outside of the system, compromising data privacy and system robustness. Besides training powerful global models, it is of paramount importance to design FL systems that have privacy guarantees and are resistant to different types of adversaries. In this paper, we conduct the first comprehensive survey on this topic. Through a concise introduction to the concept of FL and a unique taxonomy covering: 1) threat models; 2) poisoning attacks and defenses against robustness; 3) inference attacks and defenses against privacy, we provide an accessible review of this important topic. We highlight the intuitions, key techniques, and fundamental assumptions adopted by various attacks and defenses. Finally, we discuss promising future research directions towards robust and privacy-preserving federated learning.
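To make the FL setting concrete, here is a minimal federated averaging (FedAvg) round on a toy 1-D least-squares model; this is a generic sketch, not a protocol from the survey, and the client datasets are hypothetical. It also shows why FedAvg is a target for poisoning: the server trusts each client's returned weight.

```python
def local_update(weights, data, lr=0.1):
    """One gradient-descent step on a client's private data
    for a 1-D least-squares model y ~ w * x."""
    w = weights
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad

def fed_avg(global_w, client_datasets):
    """One FedAvg round: clients train locally on private data,
    the server averages the returned weights, weighted by dataset size."""
    sizes = [len(d) for d in client_datasets]
    local_ws = [local_update(global_w, d) for d in client_datasets]
    return sum(w * n for w, n in zip(local_ws, sizes)) / sum(sizes)

clients = [
    [(1.0, 2.0), (2.0, 4.0)],   # client 1's private data: y = 2.0 * x
    [(1.0, 2.1), (3.0, 6.3)],   # client 2's private data: y = 2.1 * x
]
w = 0.0
for _ in range(50):
    w = fed_avg(w, clients)
print(round(w, 2))  # converges between the clients' slopes, ~2.07
```

Raw data never leaves a client, only weights do; the survey's inference attacks show that even these weight updates can leak private information, and its poisoning attacks show a malicious client can steer the average.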

We detail a new framework for privacy-preserving deep learning and discuss its benefits. The framework puts a premium on ownership and secure processing of data and introduces a valuable representation based on chains of commands and tensors. This abstraction allows one to implement complex privacy-preserving constructs such as Federated Learning, Secure Multiparty Computation, and Differential Privacy while still exposing a familiar deep learning API to the end user. We report early results on the Boston Housing and Pima Indian Diabetes datasets. While the privacy features apart from Differential Privacy do not impact prediction accuracy, the current implementation of the framework introduces a significant performance overhead, which will be addressed at a later stage of development. We believe this work is an important milestone introducing the first reliable, general framework for privacy-preserving deep learning.
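The framework's chain-of-commands abstraction is not reproduced here, but the Differential Privacy construct it mentions (and its accuracy cost) can be illustrated with the classic Laplace mechanism on a hypothetical count query. Sampling uses the fact that the difference of two exponential variables with mean b is Laplace(0, b).

```python
import random
import statistics

def laplace_mechanism(true_value, sensitivity, epsilon):
    """epsilon-DP release: add Laplace(0, sensitivity/epsilon) noise,
    sampled as the difference of two exponentials with mean `scale`."""
    scale = sensitivity / epsilon
    noise = random.expovariate(1 / scale) - random.expovariate(1 / scale)
    return true_value + noise

# Private count query: sensitivity 1, since one person changes the count by 1.
random.seed(0)
releases = [laplace_mechanism(100, sensitivity=1, epsilon=0.5) for _ in range(5000)]
print(round(statistics.mean(releases)))  # unbiased: averages out near the true count, 100
```

Each individual release is noisy (this is the accuracy impact the abstract notes for Differential Privacy), while the noise scale 1/epsilon bounds what any single release reveals about one individual.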

Voice conversion (VC) aims to convert speaker characteristics without altering linguistic content. Due to training data limitations and modeling imperfections, it is difficult to achieve believable speaker mimicry without introducing processing artifacts; performance assessment of VC therefore usually involves both speaker similarity and quality evaluation by a human panel. As a time-consuming, expensive, and non-reproducible process, this hinders rapid prototyping of new VC technology. We address artifact assessment using an alternative, objective approach leveraging prior work on spoofing countermeasures (CMs) for automatic speaker verification. Therein, CMs are used to reject `fake' inputs such as replayed, synthetic, or converted speech, but their potential for automatic speech artifact assessment remains unknown. This study serves to fill that gap. As a supplement to subjective results for the 2018 Voice Conversion Challenge (VCC'18) data, we configure a standard constant-Q cepstral coefficient CM to quantify the extent of processing artifacts. The equal error rate (EER) of the CM, a confusability index of VC samples with real human speech, serves as our artifact measure. Two clusters of VCC'18 entries are identified: low-quality ones with detectable artifacts (low EERs), and higher-quality ones with fewer artifacts. None of the VCC'18 systems, however, is perfect: all EERs are below 30% (the `ideal' value would be 50%). Our preliminary findings suggest the potential of CMs outside their original application, as a supplemental optimization and benchmarking tool to enhance VC technology.
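The EER used as the artifact measure is the operating point where the false-acceptance rate on spoofed/converted speech equals the false-rejection rate on genuine speech. A minimal linear-scan sketch (all scores hypothetical; higher score = more human-like):

```python
def compute_eer(genuine_scores, spoof_scores):
    """Equal error rate: scan candidate thresholds and return the point where
    the false-acceptance rate (spoof scored as genuine) is closest to the
    false-rejection rate (genuine scored as spoof)."""
    thresholds = sorted(set(genuine_scores + spoof_scores))
    best_gap, best_eer = float("inf"), None
    for t in thresholds:
        far = sum(s >= t for s in spoof_scores) / len(spoof_scores)
        frr = sum(s < t for s in genuine_scores) / len(genuine_scores)
        if abs(far - frr) < best_gap:
            best_gap, best_eer = abs(far - frr), (far + frr) / 2
    return best_eer

genuine = [0.9, 0.8, 0.7, 0.65, 0.3]   # real human speech scores (hypothetical)
converted = [0.6, 0.5, 0.4, 0.35, 0.75]  # VC output scores (hypothetical)
print(compute_eer(genuine, converted))  # 0.2
```

A VC system whose outputs were indistinguishable from human speech would push the score distributions together and the EER towards the `ideal' 50%, which is how the EER doubles as a confusability index.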
