
While digital divide studies have in the past focused primarily on access to information and communications technology (ICT), the digital divide's influence on associated dimensions such as privacy is becoming critical, with a far-reaching impact on people and society. For example, the varying levels of government legislation and compliance on information privacy worldwide have created a new era of digital divide in the privacy preservation domain. In this article, we introduce the concept of the "digital privacy divide (DPD)" to describe the perceived gap in the privacy preservation of individuals based on the geopolitical location of different countries. To better understand the DPD phenomenon, we created an online questionnaire and collected answers from more than 700 respondents in four countries (the United States, Germany, Bangladesh, and India) that represent two distinct cultural orientations according to Hofstede's individualist vs. collectivist dimension. Our results revealed some interesting findings: DPD does not depend on the Hofstede cultural orientation of a country. For example, individuals residing in Germany and Bangladesh share similar privacy concerns, as do individuals residing in the United States and India. Moreover, while most respondents acknowledge the importance of privacy legislation to protect their digital privacy, they do not mind their governments allowing domestic companies and organizations to collect personal data on individuals residing outside their countries if there are economic, employment, or crime prevention benefits. These results suggest a social dilemma in perceived privacy preservation, which could depend on many other contextual factors beyond government legislation and a country's cultural orientation.

Related Content

Distributed and Parallel Databases (DPD) publishes papers in all traditional and emerging areas of database research, including: data integration, data sharing, security and privacy, transaction management, process and workflow management, information extraction, query processing and optimization, mining and visualization for the analysis of large datasets, storage, data fragmentation, placement and allocation, replication protocols, reliability, fault tolerance, persistence, retention, performance and scalability, and the use of various communication and dissemination platforms and middleware. Official website:

Decentralized exchange markets leveraging blockchain have been proposed recently to provide open and equal access to traders, improve transparency, and reduce the systemic risk of centralized exchanges. However, they compromise on the privacy of traders with respect to their asset ownership, account balance, order details, and identity. In this paper, we present Rialto, a fully decentralized privacy-preserving exchange marketplace with support for matching trade orders, on-chain settlement, and market price discovery. Rialto provides confidentiality of order rates and account balances and unlinkability between traders and their trade orders, while retaining the desirable properties of a traditional marketplace such as front-running resilience and market fairness. We define formal security notions and present a security analysis of the marketplace. We perform a detailed evaluation of our solution and demonstrate that it scales well and is suitable for a large class of goods and financial instruments traded in modern exchange markets.
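
The abstract does not spell out Rialto's cryptographic constructions, so none are reproduced here. As a rough, assumed illustration of one standard building block for front-running resilience, the sketch below shows a hash-based commit-reveal scheme in which order details stay hidden until a matching round closes; the function names and encoding are illustrative, not Rialto's actual protocol.

```python
import hashlib
import os

def commit_order(side: str, rate: int, amount: int) -> tuple[bytes, bytes]:
    """Create a hiding, binding commitment to an order using a random nonce."""
    nonce = os.urandom(32)
    payload = f"{side}|{rate}|{amount}".encode() + nonce
    return hashlib.sha256(payload).digest(), nonce

def open_order(commitment: bytes, side: str, rate: int, amount: int, nonce: bytes) -> bool:
    """Verify that the revealed order matches the earlier commitment."""
    payload = f"{side}|{rate}|{amount}".encode() + nonce
    return hashlib.sha256(payload).digest() == commitment

# Phase 1: traders post only commitments (order details stay hidden on-chain).
c, n = commit_order("BUY", rate=102, amount=50)
# Phase 2: after the batch closes, orders are revealed, verified, and matched.
assert open_order(c, "BUY", 102, 50, n)
```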

With the growing processing power of computing systems and the increasing availability of massive datasets, machine learning algorithms have led to major breakthroughs in many different areas. This development has influenced computer security, spawning a series of work on learning-based security systems, such as for malware detection, vulnerability discovery, and binary code analysis. Despite great potential, machine learning in security is prone to subtle pitfalls that undermine its performance and render learning-based systems potentially unsuitable for security tasks and practical deployment. In this paper, we take a critical look at this problem. First, we identify common pitfalls in the design, implementation, and evaluation of learning-based security systems. We conduct a study of 30 papers from top-tier security conferences within the past 10 years, confirming that these pitfalls are widespread in the current security literature. In an empirical analysis, we further demonstrate how individual pitfalls can lead to unrealistic performance and interpretations, obstructing the understanding of the security problem at hand. As a remedy, we propose actionable recommendations to support researchers in avoiding or mitigating the pitfalls where possible. Furthermore, we identify open problems when applying machine learning in security and provide directions for further research.
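
As a concrete, assumed illustration (the paper's own catalogue of pitfalls is not reproduced here), the sketch below shows the base-rate effect, a well-known evaluation issue in security settings: a detector with seemingly strong true- and false-positive rates still yields poor precision when attacks are rare.

```python
# Illustrative only: a detector with 95% TPR and 1% FPR looks strong, yet its
# precision collapses as the attack base rate shrinks.

def precision(tpr: float, fpr: float, base_rate: float) -> float:
    """Precision = P(attack | alarm) for a given attack base rate."""
    tp = tpr * base_rate
    fp = fpr * (1.0 - base_rate)
    return tp / (tp + fp)

for base_rate in (0.5, 0.01, 0.001):
    p = precision(tpr=0.95, fpr=0.01, base_rate=base_rate)
    print(f"base rate {base_rate:>6}: precision {p:.3f}")
# With a 0.1% base rate, precision drops below 0.09 despite 95% TPR and 1% FPR.
```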

Disaster victim identification (DVI) entails a protracted process of evidence collection and data matching to reconcile physical remains with victim identity. Technology is critical to DVI by enabling the linkage of physical evidence to information. However, labelling physical remains and collecting data at the scene are dominated by low-technology paper-based practices. We ask, how can technology help us tag and track the victims of disaster? Our response has two parts. First, we conducted a human-computer interaction led investigation into the systematic factors impacting DVI tagging and tracking processes. Through interviews with Australian DVI practitioners, we explored how technologies to improve linkage might fit with prevailing work practices and preferences; practical and social considerations; and existing systems and processes. Using insights from these interviews and relevant literature, we identified four critical themes: protocols and training; stress and stressors; the plurality of information capture and management systems; and practicalities and constraints. Second, we applied the themes identified in the first part of the investigation to critically review technologies that could support DVI practitioners by enhancing DVI processes that link physical evidence to information. This resulted in an overview of candidate technologies matched with consideration of their key attributes. This study recognises the importance of considering human factors that can affect technology adoption into existing practices. We provide a searchable table (Supplementary Information) that relates technologies to the key attributes relevant to DVI practice, for the reader to apply to their own context. While this research directly contributes to DVI, it also has applications to other domains in which a physical/digital linkage is required, particularly within high-stress environments.

Female researchers may have experienced more difficulties than their male counterparts since the COVID-19 outbreak because of gendered housework and childcare. Using Microsoft Academic Graph data from 2016 to 2020, this study examined how the proportion of female authors in academic journals on a global scale changed in 2020 (net of recent yearly trends). We observed a decrease in research productivity for female researchers in 2020, most pronounced in first-author positions, followed by last-author positions. Female researchers were not necessarily excluded from research, but they were marginalised within it. We also identified various factors that amplified the gender gap by dividing the authors' backgrounds into individual, organisational, and national characteristics. Female researchers were more vulnerable when they were in mid-career, affiliated with the least influential organisations, and, more importantly, from less gender-equal countries with higher mortality and restricted mobility as a result of COVID-19.

Differential privacy (DP) allows the quantification of privacy loss when the data of individuals is subjected to algorithmic processing such as machine learning, as well as the provision of objective privacy guarantees. However, while techniques such as individual Rényi DP (RDP) allow for granular, per-person privacy accounting, few works have investigated the impact of each input feature on the individual's privacy loss. Here we extend the view of individual RDP by introducing a new concept we call partial sensitivity, which leverages symbolic automatic differentiation to determine the influence of each input feature on the gradient norm of a function. We experimentally evaluate our approach on queries over private databases, where we obtain a feature-level contribution of private attributes to the DP guarantee of individuals. Furthermore, we explore our findings in the context of neural network training on synthetic data by investigating the partial sensitivity of input pixels on an image classification task.
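
As a minimal numeric sketch of the idea (assuming a toy logistic-regression model and finite differences rather than the paper's symbolic automatic differentiation), the per-feature influence on the per-example gradient norm can be approximated as follows; all function names here are illustrative.

```python
import numpy as np

def grad_norm(w: np.ndarray, x: np.ndarray, y: float) -> float:
    """L2 norm of the parameter gradient of the logistic loss at (x, y)."""
    p = 1.0 / (1.0 + np.exp(-w @ x))   # predicted probability
    grad = (p - y) * x                  # d loss / d w for logistic loss
    return float(np.linalg.norm(grad))

def partial_sensitivity(w, x, y, eps=1e-5):
    """Finite-difference estimate of d ||grad|| / d x_j for every feature j."""
    sens = np.zeros_like(x)
    for j in range(x.size):
        x_hi, x_lo = x.copy(), x.copy()
        x_hi[j] += eps
        x_lo[j] -= eps
        sens[j] = (grad_norm(w, x_hi, y) - grad_norm(w, x_lo, y)) / (2 * eps)
    return sens

rng = np.random.default_rng(0)
w, x, y = rng.normal(size=4), rng.normal(size=4), 1.0
print(partial_sensitivity(w, x, y))  # per-feature influence on the gradient norm
```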

Data protection law, including the General Data Protection Regulation (GDPR), usually requires a privacy policy before data can be collected from individuals. We analysed 15,145 privacy policies from 26,910 mobile apps in May 2019 (about one year after the GDPR came into force), finding that merely opening the policy webpage shares data with third parties for 48.5% of policies, potentially violating the GDPR. We compare this data sharing across countries, payment models (free, in-app purchases, paid), and platforms (Google Play Store, Apple App Store). We further contacted 52 developers of apps that did not provide a privacy policy and asked them about their data practices. Despite being legally required to answer such queries, 12 developers (23%) failed to respond.
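
A rough, assumed sketch of the kind of check behind the headline finding is shown below: it lists third-party hosts referenced by a policy page's HTML as a crude static proxy for the network requests a browser would make when opening it. This is not the authors' measurement pipeline, and the example URL is a placeholder.

```python
import re
from urllib.parse import urlparse

import requests

def third_party_hosts(policy_url: str) -> set[str]:
    """Return hostnames referenced by the page that differ from the policy's own host."""
    first_party = urlparse(policy_url).hostname
    html = requests.get(policy_url, timeout=10).text
    hosts = {urlparse(u).hostname for u in re.findall(r'https?://[^\s<>"]+', html)}
    return {h for h in hosts if h and not h.endswith(first_party)}

if __name__ == "__main__":
    # Placeholder URL for illustration only.
    print(third_party_hosts("https://example.com/privacy-policy"))
```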

Power consumption data is very useful, as it enables grid optimization, anomaly detection, and failure prevention, in addition to serving diverse research purposes. However, the use of power consumption data raises significant privacy concerns, as this data usually belongs to clients of a power company. As a solution, we propose a method to generate synthetic power consumption samples that faithfully imitate the originals but are detached from the clients and their identities. Our method is based on Generative Adversarial Networks (GANs). Our contribution is twofold. First, we focus on the quality of the generated data, which is not a trivial task as no standard evaluation methods are available. Then, we study the privacy guarantees provided to members of the training set of our neural network. As a minimum requirement for privacy, we demand that our neural network be robust to membership inference attacks, as these provide a gateway for further attacks in addition to presenting a privacy threat of their own. We find that a compromise must be made between the privacy and the performance provided by the algorithm.
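
As a hedged illustration of the minimum privacy requirement mentioned above (not the paper's evaluation code), the sketch below scores a simple threshold-based membership inference attack: if a model assigns systematically different scores to training members and held-out non-members, the attack's AUC rises above 0.5, while an AUC near 0.5 indicates robustness. The scores used here are synthetic.

```python
import numpy as np

def membership_auc(member_scores: np.ndarray, nonmember_scores: np.ndarray) -> float:
    """AUC of a threshold attack that flags high-score samples as training members."""
    labels = np.r_[np.ones_like(member_scores), np.zeros_like(nonmember_scores)]
    scores = np.r_[member_scores, nonmember_scores]
    order = np.argsort(scores)
    ranks = np.empty_like(order, dtype=float)
    ranks[order] = np.arange(1, len(scores) + 1)
    n_pos, n_neg = len(member_scores), len(nonmember_scores)
    # Mann-Whitney U statistic normalised to an AUC in [0, 1].
    return (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

rng = np.random.default_rng(1)
members = rng.normal(0.6, 0.1, 1000)     # slightly higher scores for members
nonmembers = rng.normal(0.5, 0.1, 1000)
print(f"membership-inference AUC: {membership_auc(members, nonmembers):.3f}")
```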

Social networks are usually considered positive sources of social support, a role which has been extensively studied in the context of domestic violence. To victims of abuse, social networks often provide initial emotional and practical help, as well as useful information, ahead of formal institutions. Recently, however, attention has been paid to the negative responses of social networks. In this article, we advance the theoretical debate on social networks as a source of social support by moving beyond the distinction between positive and negative ties. We do so by proposing the concepts of relational ambivalence and consistency, which describe the interactive processes by which people, intentionally or inadvertently, disregard or align with each other's relational role expectations, thereby undermining or reinforcing individual choices of action. We analyse the qualitative accounts of nineteen female victims of domestic violence in Sweden, who described the responses of their personal networks during and after the abuse. We observe how the relationships embedded in these networks were described in ambivalent and consistent terms, and how they played a role in supporting or undermining the women in reframing their loving relationships as abusive; in acknowledging or dismissing perpetrators' responsibility for the abuse; in relieving the women from role expectations and obligations or burdening them with further responsibilities; and in supporting or challenging their pathways out of domestic abuse. Our analysis suggests that social isolation cannot be considered simply the result of a lack of support, but rather of the complex dynamics through which support is offered and accepted, or withdrawn and refused.

Virtual Research Environments (VREs) provide user-centric support in the lifecycle of research activities, e.g., discovering and accessing research assets, or composing and executing application workflows. A typical VRE is often implemented as an integrated environment, which includes a catalog of research assets, a workflow management system, a data management framework, and tools for enabling collaboration among users. Notebook environments, such as Jupyter, allow researchers to rapidly prototype scientific code and share their experiments as online accessible notebooks. Jupyter can support several popular languages that are used by data scientists, such as Python, R, and Julia. However, such notebook environments do not have seamless support for running heavy computations on remote infrastructure or finding and accessing software code inside notebooks. This paper investigates the gap between a notebook environment and a VRE and proposes an embedded VRE solution for the Jupyter environment called Notebook-as-a-VRE (NaaVRE). The NaaVRE solution provides functional components via a component marketplace and allows users to create a customized VRE on top of the Jupyter environment. From the VRE, a user can search research assets (data, software, and algorithms), compose workflows, manage the lifecycle of an experiment, and share the results among users in the community. We demonstrate how such a solution can enhance a legacy workflow that uses Light Detection and Ranging (LiDAR) data from country-wide airborne laser scanning surveys for deriving geospatial data products of ecosystem structure at high resolution over broad spatial extents. This enables users to scale out the processing of multi-terabyte LiDAR point clouds for ecological applications to more data sources in a distributed cloud environment.
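
As a minimal conceptual sketch (an assumption for illustration, not NaaVRE's actual API), the idea of treating notebook cells as named components that can be chained into a workflow might look as follows; the step names and the toy LiDAR-style pipeline are hypothetical.

```python
from typing import Any, Callable

class Workflow:
    """Chain named component functions, each standing in for a notebook cell."""

    def __init__(self) -> None:
        self.steps: list[tuple[str, Callable[[Any], Any]]] = []

    def add_step(self, name: str, func: Callable[[Any], Any]) -> "Workflow":
        self.steps.append((name, func))
        return self

    def run(self, data: Any) -> Any:
        for name, func in self.steps:
            print(f"running step: {name}")
            data = func(data)
        return data

# Hypothetical pipeline: each lambda stands in for a cell a user would publish
# as a component and later scale out to remote infrastructure.
wf = (Workflow()
      .add_step("load_points", lambda _: list(range(10)))
      .add_step("filter_noise", lambda pts: [p for p in pts if p % 2 == 0])
      .add_step("derive_metric", lambda pts: sum(pts) / len(pts)))
print(wf.run(None))
```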

Machine learning is a widely used method for generating predictions. These predictions are more accurate when the model is trained on a larger dataset. On the other hand, the data is usually divided amongst different entities. For privacy reasons, the training can be done locally and the models can then be safely aggregated amongst the participants. However, if there are only two participants in collaborative learning, the safe aggregation loses its power, since the output of the training already contains much information about the participants. To resolve this issue, the participants must employ privacy-preserving mechanisms, which inevitably affect the accuracy of the model. In this paper, we model the training process as a two-player game in which each player aims to achieve a higher accuracy while preserving its privacy. We introduce the notion of the Price of Privacy, a novel approach to measuring the effect of privacy protection on the accuracy of the model. We develop a theoretical model for different player types, and we either find a Nash equilibrium or prove its existence under certain assumptions. Moreover, we confirm these assumptions via a recommendation systems use case: for a specific learning algorithm, we apply three privacy-preserving mechanisms on two real-world datasets. Finally, as complementary work to the designed game, we interpolate the relationship between privacy and accuracy for this use case and present three other methods to approximate it in a real-world scenario.
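
As a toy, assumed illustration of the game-theoretic framing (the payoff values and strategy labels below are made up, not the paper's model), the sketch enumerates pure-strategy Nash equilibria of a two-player game in which each participant trades model accuracy against privacy cost.

```python
import numpy as np

strategies = ["weak protection", "strong protection"]
# payoff_A[i, j] / payoff_B[i, j]: player A plays strategy i, player B plays j.
# Made-up numbers: collaboration helps accuracy, stronger protection costs both.
payoff_A = np.array([[3.0, 2.0],
                     [2.5, 1.5]])
payoff_B = np.array([[3.0, 2.5],
                     [2.0, 1.5]])

def pure_nash(pA: np.ndarray, pB: np.ndarray):
    """Return strategy pairs where neither player gains by deviating unilaterally."""
    eq = []
    for i in range(pA.shape[0]):
        for j in range(pA.shape[1]):
            a_best = pA[i, j] >= pA[:, j].max()   # A cannot improve by switching rows
            b_best = pB[i, j] >= pB[i, :].max()   # B cannot improve by switching columns
            if a_best and b_best:
                eq.append((strategies[i], strategies[j]))
    return eq

print(pure_nash(payoff_A, payoff_B))  # -> [('weak protection', 'weak protection')]
```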
