
Privacy is important to all individuals in everyday life. Emerging technologies such as AR-enabled smartphones, social networking applications, and artificial-intelligence-driven modes of surveillance tend to intrude on privacy. This study aimed to explore concerns related to perceptions of privacy in this era of ubiquitous technologies. It employed an online survey questionnaire to study user perspectives on privacy, with purposive sampling used to collect data from 60 participants and inductive thematic analysis used to analyze the data. The study identified key themes including attitudes towards privacy in public and private spaces, privacy awareness, consent seeking, dilemmas and confusion related to various technologies, and the impact of attitudes and beliefs on individuals' actions for protecting themselves from invasions of privacy in both public and private spaces. These themes interacted with one another and acted as core principles that shaped actions preventing invasion of privacy for both participants and bystanders. The findings of this study can help improve the privacy and personalization of various emerging technologies. The study contributes to privacy by design and positive design by considering the psychological needs of users, suggesting that the findings can be applied in the areas of experience design, positive technologies, social computing, and behavioral interventions.

Related content

The IFIP TC13 Conference on Human-Computer Interaction is an important venue for researchers and practitioners in the field of human-computer interaction to present their work. Over the years, these conferences have attracted researchers from many countries and cultures.
September 24, 2021

In machine learning, differential privacy and federated learning concepts are gaining more and more importance in an increasingly interconnected world. While the former refers to the sharing of private data characterized by strict security rules to protect individual privacy, the latter refers to distributed learning techniques in which a central server exchanges information with different clients for machine learning purposes. In recent years, many studies have shown the possibility of bypassing the privacy shields of these systems and exploiting the vulnerabilities of machine learning models, making them leak the information with which they have been trained. In this work, we present the 3DGL framework, an alternative to the current federated learning paradigms. Its goal is to share generative models with high levels of $\varepsilon$-differential privacy. In addition, we propose DDP-$\beta$VAE, a deep generative model capable of generating synthetic data with high levels of utility and safety for the individual. We evaluate the 3DGL framework based on DDP-$\beta$VAE, showing how the overall system is resilient to the principal attacks in federated learning and improves the performance of distributed learning algorithms.
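
For intuition, here is a minimal sketch of the kind of building blocks involved: a $\beta$-VAE trained with a clipped, noisy gradient step in the spirit of differentially private optimization. The class and function names (BetaVAE, noisy_sgd_step) and the batch-level clipping simplification are assumptions for illustration, not the paper's DDP-$\beta$VAE or 3DGL implementation.

```python
# Minimal sketch of a beta-VAE trained with DP-style noisy gradients.
# Names and the batch-level clipping are illustrative assumptions only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BetaVAE(nn.Module):
    def __init__(self, x_dim=64, z_dim=8, h_dim=128):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
        self.mu, self.logvar = nn.Linear(h_dim, z_dim), nn.Linear(h_dim, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(), nn.Linear(h_dim, x_dim))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        return self.dec(z), mu, logvar

def beta_vae_loss(x_hat, x, mu, logvar, beta=4.0):
    recon = F.mse_loss(x_hat, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + beta * kl  # beta > 1 pushes toward more disentangled latents

def noisy_sgd_step(model, loss, lr=1e-3, clip=1.0, noise_mult=1.0):
    # Simplified DP-flavoured update: clip the *batch* gradient and add Gaussian
    # noise. True DP-SGD clips per-sample gradients; this is only a sketch.
    model.zero_grad()
    loss.backward()
    torch.nn.utils.clip_grad_norm_(model.parameters(), clip)
    with torch.no_grad():
        for p in model.parameters():
            p -= lr * (p.grad + noise_mult * clip * torch.randn_like(p.grad))
```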

This demo presents a functional proof-of-concept prototype of a smart bracelet that utilizes IoT and ML to help contain pandemics such as COVID-19. The smart bracelet helps people navigate daily life safely by monitoring health signs and by detecting and alerting them when they violate social-distancing regulations. In addition, the bracelet communicates with similar bracelets to keep track of recent contacts. Using RFID technology, the bracelet helps automate access control to premises such as workplaces. All of this is achieved while preserving the privacy of the users.
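
As an illustration of the distancing logic such a bracelet could run, the sketch below estimates proximity from received signal strength and keeps only recent close contacts. The RSSI-to-distance model, thresholds, and function names are assumptions, not the prototype's firmware.

```python
# Illustrative sketch of the bracelet's distancing / contact-tracking logic.
import time

SAFE_DISTANCE_M = 1.5
CONTACT_WINDOW_S = 15 * 60  # retain close contacts from the last 15 minutes only

def estimate_distance_m(rssi_dbm, tx_power_dbm=-59, path_loss_exp=2.0):
    # Log-distance path-loss model: coarse, but enough for threshold alerts.
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exp))

def update_contacts(contacts, peer_id, rssi_dbm, now=None):
    now = now or time.time()
    if estimate_distance_m(rssi_dbm) < SAFE_DISTANCE_M:
        contacts[peer_id] = now  # record or refresh the close contact
    # Drop stale contacts (data minimization helps preserve privacy).
    return {p: t for p, t in contacts.items() if now - t <= CONTACT_WINDOW_S}
```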

In recent years, emerging hardware storage technologies have focused on divergent goals: better performance or lower cost-per-bit of storage. Correspondingly, data systems that employ these new technologies are typically optimized either to be fast (but expensive) or cheap (but slow). We take a different approach: by designing a storage engine to natively utilize two tiers of fast and low-cost storage technologies, we can achieve a Pareto-efficient balance between performance and cost-per-bit. This paper presents the design and implementation of PrismDB, a novel key-value store that exploits two extreme ends of the spectrum of modern NVMe storage technologies (3D XPoint and QLC NAND) simultaneously. Unlike prior work that has retrofitted data structures designed for a single tier to multi-tier storage, PrismDB's data structures and migration mechanism are tailored to different storage layers. The key design contributions of PrismDB are a novel hybrid data structure for tiered storage and an adaptive, lightweight migration mechanism that efficiently demotes cold objects from the fast to the slow storage tier and promotes hot objects to the fast tier. Compared to the standard use of RocksDB on flash in datacenters today, PrismDB's average throughput on tiered storage is 3.3$\times$ faster and its read tail latency is 2$\times$ better, using equivalently-priced hardware.
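
The general hot/cold placement idea can be illustrated with a toy two-tier store: recently accessed objects stay on the fast tier and the coldest ones are demoted. The class name, LRU-style heat tracking, and capacity handling below are assumptions, not PrismDB's actual data structures or migration mechanism.

```python
# Toy sketch of hot/cold placement across a fast and a slow storage tier.
from collections import OrderedDict

class TieredStore:
    def __init__(self, fast_capacity):
        self.fast = OrderedDict()   # e.g. 3D XPoint: small and fast (LRU order)
        self.slow = {}              # e.g. QLC NAND: large and cheap
        self.fast_capacity = fast_capacity

    def put(self, key, value):
        self.fast[key] = value
        self.fast.move_to_end(key)        # newly written keys are treated as hot
        self._maybe_demote()

    def get(self, key):
        if key in self.fast:
            self.fast.move_to_end(key)    # refresh recency
            return self.fast[key]
        value = self.slow[key]
        self.put(key, value)              # promote a hot object on read
        self.slow.pop(key, None)
        return value

    def _maybe_demote(self):
        while len(self.fast) > self.fast_capacity:
            key, value = self.fast.popitem(last=False)  # evict the coldest key
            self.slow[key] = value                      # demote it to the slow tier
```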

Mobile devices have become an indispensable component of modern life. Their high storage capacity gives these devices the capability to store vast amounts of sensitive personal data, which makes them a high-value target: these devices are routinely stolen by criminals for data theft, and are increasingly viewed by law enforcement agencies as a valuable source of forensic data. Over the past several years, providers have deployed a number of advanced cryptographic features intended to protect data on mobile devices, even in the strong setting where an attacker has physical access to a device. Many of these techniques draw from the research literature, but have been adapted to this entirely new problem setting. This involves a number of novel challenges, which are incompletely addressed in the literature. In this work, we outline those challenges, and systematize the known approaches to securing user data against extraction attacks. Our work proposes a methodology that researchers can use to analyze cryptographic data confidentiality for mobile devices. We evaluate the existing literature for securing devices against data extraction adversaries with powerful capabilities including access to devices and to the cloud services they rely on. We then analyze existing mobile device confidentiality measures to identify research areas that have not received proper attention from the community and represent opportunities for future research.
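
One representative building block in this space is passcode-derived file encryption, sketched below in simplified form; real devices additionally entangle the key with a hardware-bound secret, which is omitted here. The KDF parameters and record format are illustrative assumptions, not any vendor's actual scheme.

```python
# Minimal sketch of passcode-derived per-record encryption (hardware key
# entanglement omitted). Parameters are illustrative assumptions only.
import os
import hashlib
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def derive_key(passcode: str, salt: bytes, iterations: int = 200_000) -> bytes:
    # A slow KDF raises the cost of offline passcode guessing after extraction.
    return hashlib.pbkdf2_hmac("sha256", passcode.encode(), salt, iterations)

def encrypt_record(passcode: str, plaintext: bytes) -> bytes:
    salt, nonce = os.urandom(16), os.urandom(12)
    ciphertext = AESGCM(derive_key(passcode, salt)).encrypt(nonce, plaintext, None)
    return salt + nonce + ciphertext        # stored alongside the record

def decrypt_record(passcode: str, blob: bytes) -> bytes:
    salt, nonce, ciphertext = blob[:16], blob[16:28], blob[28:]
    return AESGCM(derive_key(passcode, salt)).decrypt(nonce, ciphertext, None)
```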

Machine learning (ML) is increasingly being adopted in a wide variety of application domains. Usually, a well-performing ML model relies on a large volume of training data and high-powered computational resources. The need for and use of such huge volumes of data raise serious privacy concerns because of the potential risks of leakage of highly privacy-sensitive information; further, evolving regulatory environments that increasingly restrict access to and use of privacy-sensitive data add significant challenges to fully benefiting from the power of ML for data-driven applications. A trained ML model may also be vulnerable to adversarial attacks such as membership, attribute, or property inference attacks and model inversion attacks. Hence, well-designed privacy-preserving ML (PPML) solutions are critically needed for many emerging applications. Increasingly, significant research efforts from both academia and industry can be seen in PPML areas that aim toward integrating privacy-preserving techniques into the ML pipeline or specific algorithms, or designing various PPML architectures. In particular, existing PPML research cross-cuts ML, systems and applications design, as well as security and privacy; hence, there is a critical need to understand the state of the art, the related challenges, and a research roadmap for future work in the PPML area. In this paper, we systematically review and summarize existing privacy-preserving approaches and propose a Phase, Guarantee, and Utility (PGU) triad-based model to understand and guide the evaluation of various PPML solutions by decomposing their privacy-preserving functionalities. We discuss the unique characteristics and challenges of PPML and outline possible research directions that leverage, as well as benefit, multiple research communities such as ML, distributed systems, security, and privacy.
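
As a concrete example of the leakage PPML defenses aim to bound, the snippet below sketches a simple loss-threshold membership inference test, a common baseline in the literature rather than a method from this survey; the threshold and toy loss values are assumptions.

```python
# Hedged illustration of a loss-threshold membership inference baseline.
import numpy as np

def loss_threshold_membership(loss_values, threshold):
    # Samples on which the trained model has unusually low loss are guessed
    # to be training members; this is the leakage PPML defences try to bound.
    return loss_values < threshold

# Usage with toy losses: members tend to have lower loss than non-members.
member_losses = np.array([0.05, 0.10, 0.08])
nonmember_losses = np.array([0.60, 0.45, 0.70])
print(loss_threshold_membership(member_losses, 0.3))     # mostly True
print(loss_threshold_membership(nonmember_losses, 0.3))  # mostly False
```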

Smart Meters (SMs) are a fundamental component of smart grids, but they carry sensitive information about users, such as the occupancy status of houses, and have therefore raised serious concerns about leakage of consumers' private information. In particular, we focus on real-time privacy threats, i.e., potential attackers that try to infer sensitive data from the readings reported by SMs in an online fashion. We adopt an information-theoretic privacy measure and show that it effectively limits the performance of any real-time attacker. Using this privacy measure, we propose a general formulation to design a privatization mechanism that can provide a target level of privacy by adding a minimal amount of distortion to the SMs' measurements. To cope with different applications, a flexible distortion measure is considered. This formulation leads to a general loss function, which is optimized using a deep learning adversarial framework, where two neural networks $-$ referred to as the releaser and the adversary $-$ are trained with opposite goals. An exhaustive empirical study is then performed to validate the performance of the proposed approach for the occupancy detection privacy problem, assuming the attacker has either limited or full access to the training dataset.
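
The releaser/adversary game can be sketched as an alternating training loop: the adversary learns to detect occupancy from the released readings, while the releaser learns to fool it under a distortion penalty. The network shapes, the $\lambda$ weighting, and the schedule below are illustrative assumptions, not the paper's exact architecture or loss.

```python
# Minimal sketch of adversarial training between a releaser and an adversary.
import torch
import torch.nn as nn

releaser = nn.Sequential(nn.Linear(48, 64), nn.ReLU(), nn.Linear(64, 48))
adversary = nn.Sequential(nn.Linear(48, 64), nn.ReLU(), nn.Linear(64, 1))
opt_r = torch.optim.Adam(releaser.parameters(), lr=1e-3)
opt_a = torch.optim.Adam(adversary.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def train_step(x, occupancy, lam=1.0):
    # x: (batch, 48) daily load profile; occupancy: (batch, 1) private label as float.
    # 1) Adversary step: learn to detect occupancy from released readings.
    released = releaser(x).detach()
    loss_a = bce(adversary(released), occupancy)
    opt_a.zero_grad(); loss_a.backward(); opt_a.step()

    # 2) Releaser step: fool the adversary while keeping distortion small.
    released = releaser(x)
    privacy_loss = -bce(adversary(released), occupancy)  # opposite goal
    distortion = torch.mean((released - x) ** 2)
    loss_r = privacy_loss + lam * distortion
    opt_r.zero_grad(); loss_r.backward(); opt_r.step()
    return loss_a.item(), loss_r.item()
```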

As data are increasingly being stored in different silos and societies become more aware of data privacy issues, the traditional centralized training of artificial intelligence (AI) models faces efficiency and privacy challenges. Recently, federated learning (FL) has emerged as an alternative solution and continues to thrive in this new reality. Existing FL protocol designs have been shown to be vulnerable to adversaries within or outside of the system, compromising data privacy and system robustness. Besides training powerful global models, it is of paramount importance to design FL systems that have privacy guarantees and are resistant to different types of adversaries. In this paper, we conduct the first comprehensive survey on this topic. Through a concise introduction to the concept of FL and a unique taxonomy covering: 1) threat models; 2) poisoning attacks and defenses against robustness; 3) inference attacks and defenses against privacy, we provide an accessible review of this important topic. We highlight the intuitions, key techniques, and fundamental assumptions adopted by various attacks and defenses. Finally, we discuss promising future research directions towards robust and privacy-preserving federated learning.
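
To make the robustness discussion concrete, the sketch below contrasts plain federated averaging with a simple coordinate-wise-median aggregation rule, one member of the defense families such surveys cover; it is a toy illustration, not a scheme proposed in the paper.

```python
# Toy contrast between plain FedAvg and a robust median aggregation rule.
import numpy as np

def fedavg(client_updates, client_weights):
    # Weighted mean of client updates; vulnerable to a few poisoned updates.
    w = np.asarray(client_weights, dtype=float)
    return np.average(np.stack(client_updates), axis=0, weights=w / w.sum())

def coordinate_median(client_updates):
    # Per-parameter median limits the pull of outliers from malicious clients.
    return np.median(np.stack(client_updates), axis=0)

# Usage: flattened parameter-delta vectors, one per client, with one poisoned.
updates = [np.random.randn(10) for _ in range(9)] + [np.full(10, 100.0)]
print(fedavg(updates, [1] * 10))   # dragged toward the poisoned update
print(coordinate_median(updates))  # largely unaffected
```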

In recent years, disinformation, including fake news, has become a global phenomenon due to its explosive growth, particularly on social media. The wide spread of disinformation and fake news can cause detrimental societal effects. Despite recent progress in detecting disinformation and fake news, the task remains non-trivial due to its complexity, diversity, multi-modality, and the costs of fact-checking or annotation. The goal of this chapter is to pave the way for appreciating the challenges and advancements by: (1) introducing the types of information disorder on social media and examining their differences and connections; (2) describing important and emerging tasks to combat disinformation for characterization, detection, and attribution; and (3) discussing a weak supervision approach to detecting disinformation with limited labeled data. We then provide an overview of the chapters in this book, which represent the recent advancements in three related parts: (1) user engagement in the dissemination of information disorder; (2) techniques for detecting and mitigating disinformation; and (3) trending issues such as ethics, blockchain, and clickbait. We hope this book serves as a convenient entry point for researchers, practitioners, and students to understand the problems and challenges, learn state-of-the-art solutions for their specific needs, and quickly identify new research problems in their domains.

Fake news can significantly misinform people who often rely on online sources and social media for their information. Current research on fake news detection has mostly focused on analyzing fake news content and how it propagates on a network of users. In this paper, we emphasize the detection of fake news by assessing its credibility. By analyzing public fake news data, we show that information on news sources (and authors) can be a strong indicator of credibility. Our findings suggest that an author's history of association with fake news, and the number of authors of a news article, can play a significant role in detecting fake news. Our approach can help improve traditional fake news detection methods, wherein content features are often used to detect fake news.
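
The kind of source/author credibility signals described above can be sketched as simple features fed to a standard classifier; the feature definitions, toy data, and model choice below are assumptions, not the paper's pipeline.

```python
# Illustrative author-credibility features with a simple classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression

def credibility_features(article, author_history):
    # author_history maps author -> (num_fake, num_total) from past labeled news.
    fake_ratios = [
        author_history.get(a, (0, 0))[0] / max(author_history.get(a, (0, 0))[1], 1)
        for a in article["authors"]
    ]
    return [
        max(fake_ratios) if fake_ratios else 0.0,  # worst author's fake-news ratio
        len(article["authors"]),                   # number of listed authors
        1.0 if not article["authors"] else 0.0,    # anonymous articles flagged
    ]

# Usage with toy data (1 = fake, 0 = credible).
history = {"a1": (3, 4), "a2": (0, 10)}
X = np.array([credibility_features({"authors": ["a1"]}, history),
              credibility_features({"authors": ["a2", "a1"]}, history),
              credibility_features({"authors": []}, history),
              credibility_features({"authors": ["a2"]}, history)])
y = np.array([1, 1, 1, 0])
clf = LogisticRegression().fit(X, y)
```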

Recommender systems rely on large datasets of historical data and entail serious privacy risks. A server offering recommendations as a service to a client might leak more information than necessary regarding its recommendation model and training dataset. At the same time, the disclosure of the client's preferences to the server is also a matter of concern. Providing recommendations while preserving privacy in both senses is a difficult task, which often comes into conflict with the utility of the system in terms of its recommendation-accuracy and efficiency. General-purpose cryptographic primitives such as secure multi-party computation and homomorphic encryption offer strong security guarantees, but in conjunction with state-of-the-art recommender systems yield far-from-practical solutions. We precisely define the above notion of security and propose CryptoRec, a novel recommendations-as-a-service protocol, which encompasses a crypto-friendly recommender system. This model possesses two interesting properties: (1) It models user-item interactions in a user-free latent feature space in which it captures personalized user features by an aggregation of item features. This means that a server with a pre-trained model can provide recommendations for a client without having to re-train the model with the client's preferences. Nevertheless, re-training the model still improves accuracy. (2) It only uses addition and multiplication operations, making the model straightforwardly compatible with homomorphic encryption schemes.
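
The "user-free" prediction idea can be sketched with plain matrix products: the client's latent vector is an aggregation of pretrained item features weighted by their known ratings, and scoring uses only additions and multiplications (hence, in principle, compatible with homomorphic encryption). The aggregation rule and dimensions below are assumptions, not CryptoRec's exact model.

```python
# Toy sketch of user-free scoring via aggregation of pretrained item features.
import numpy as np

rng = np.random.default_rng(0)
n_items, dim = 100, 16
Q = rng.normal(size=(n_items, dim))  # pretrained item feature matrix (server side)

def predict_scores(ratings):
    # ratings: dense vector of the client's known ratings (0 where unrated).
    user_vec = ratings @ Q           # aggregate item features, weighted by ratings
    return user_vec @ Q.T            # predicted score for every item (add/mult only)

ratings = np.zeros(n_items)
ratings[[3, 17, 42]] = [5.0, 3.0, 4.0]  # the client's few known ratings
scores = predict_scores(ratings)
print(np.argsort(scores)[::-1][:5])     # top-5 recommended item indices
```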
