
Contemporary mobile applications (apps) are designed to track, use, and share users' data, often without their consent, which results in potential privacy and transparency issues. To investigate whether mobile apps have always been (non-)transparent about how they collect information about users, we perform a longitudinal analysis of the historical versions of 268 Android apps, comprising 5,240 releases between 2008 and 2016. We detect inconsistencies between apps' behaviors and the stated use of data collection in their privacy policies to reveal compliance issues. We use machine learning techniques to classify privacy policy text and identify the purported practices that collect and/or share users' personal information, such as phone numbers and email addresses. We then uncover an app's actual data leaks through static and dynamic analysis. Our results show a steady increase over time in the number of apps' data collection practices that are undisclosed in their privacy policies. This behavior is particularly troubling because the privacy policy is the primary tool for describing an app's privacy-protection practices. We find that newer versions of the apps are more likely to be non-compliant than their preceding versions. The discrepancies between the purported and the actual data practices show that privacy policies are often incoherent with the apps' behaviors, thus defying the 'notice and choice' principle when users install apps.
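The policy-classification step can be illustrated with a toy sketch: a bag-of-words Naive Bayes classifier that flags sentences purporting to collect personal information. The labels, training sentences, and model choice below are invented for illustration; the paper's actual pipeline uses its own annotated corpus and features.

```python
import math
from collections import Counter, defaultdict

# Toy training data: privacy-policy sentences labeled by purported practice.
# Labels and sentences are illustrative, not from the paper's dataset.
TRAIN = [
    ("we may collect your phone number and email address", "collects_pii"),
    ("your email address is shared with our partners", "collects_pii"),
    ("we collect your device phone number for verification", "collects_pii"),
    ("we use cookies to improve site performance", "no_pii"),
    ("aggregate statistics help us improve the app", "no_pii"),
    ("we do not sell any data to advertisers", "no_pii"),
]

def tokenize(text):
    return text.lower().split()

class NaiveBayes:
    """Multinomial Naive Bayes with add-one smoothing."""
    def fit(self, pairs):
        self.word_counts = defaultdict(Counter)
        self.class_counts = Counter()
        self.vocab = set()
        for text, label in pairs:
            self.class_counts[label] += 1
            for w in tokenize(text):
                self.word_counts[label][w] += 1
                self.vocab.add(w)
        return self

    def predict(self, text):
        best, best_lp = None, float("-inf")
        total_docs = sum(self.class_counts.values())
        for label in self.class_counts:
            # log prior + log likelihood of each token under this class
            lp = math.log(self.class_counts[label] / total_docs)
            denom = sum(self.word_counts[label].values()) + len(self.vocab)
            for w in tokenize(text):
                lp += math.log((self.word_counts[label][w] + 1) / denom)
            if lp > best_lp:
                best, best_lp = label, lp
        return best

clf = NaiveBayes().fit(TRAIN)
print(clf.predict("the app will collect your phone number"))  # collects_pii
```

In the paper's setting, a classifier like this would be run over policy sentences, and its positive predictions compared against the data flows observed via static and dynamic analysis.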

Related Content

Vagueness and ambiguity in privacy policies threaten the ability of consumers to make informed choices about how businesses collect, use, and share their personal information. The California Consumer Privacy Act (CCPA) of 2018 was intended to provide Californian consumers with more control by mandating that businesses (1) clearly disclose their data practices and (2) provide choices for consumers to opt out of specific data practices. In this work, we explore to what extent CCPA's disclosure requirements, as implemented in actual privacy policies, can help consumers to answer questions about the data practices of businesses. First, we analyzed 95 privacy policies from popular websites; our findings showed that there is considerable variance in how businesses interpret CCPA's definitions. Then, our user survey of 364 Californian consumers showed that this variance affects the ability of users to understand the data practices of businesses. Our results suggest that CCPA's mandates for privacy disclosures, as currently implemented, have not yet yielded the level of clarity they were designed to deliver, due to both vagueness and ambiguity in CCPA itself as well as potential non-compliance by businesses in their privacy policies.

Password managers help users more effectively manage their passwords, encouraging them to adopt stronger passwords across their many accounts. In contrast to desktop systems, where password managers receive no system-level support, mobile operating systems provide autofill frameworks designed to integrate with password managers to provide secure and usable autofill for browsers and other apps installed on mobile devices. In this paper, we evaluate mobile autofill frameworks on iOS and Android, examining whether they achieve substantive benefits over the ad-hoc desktop environment or become a problematic single point of failure. We find that while the frameworks address several common issues, they also enforce insecure behavior and fail to provide password managers sufficient information to override that insecure behavior, leaving mobile managers overall less secure than their desktop counterparts. We also demonstrate how these frameworks act as a confused deputy in manager-assisted credential phishing attacks. These findings demonstrate the need for significant improvements to mobile autofill frameworks. We conclude with recommendations for the design and implementation of secure autofill frameworks.

While many studies have looked at privacy properties of the Android and Google Play app ecosystem, comparatively much less is known about iOS and the Apple App Store, the most widely used ecosystem in the US. At the same time, there is increasing competition around privacy between these smartphone operating system providers. In this paper, we present a study of 24k Android and iOS apps from 2020 along several dimensions relating to user privacy. We find that third-party tracking and the sharing of unique user identifiers were widespread in apps from both ecosystems, even in apps aimed at children. In the children's category, iOS apps used far less advertising-related tracking than their Android counterparts, but could more often access children's location (by a factor of 7). Across all studied apps, our study highlights widespread potential violations of US, EU and UK privacy law, including 1) the use of third-party tracking without user consent, 2) the lack of parental consent before sharing PII with third parties in children's apps, 3) the non-data-minimising configuration of tracking libraries, 4) the sending of personal data to countries without an adequate level of data protection, and 5) the continued absence of transparency around tracking, partly due to design decisions by Apple and Google. Overall, we find that neither platform is clearly better than the other for privacy across the dimensions we studied.

Offline reinforcement learning (RL) enables learning policies from pre-collected datasets without environment interaction, which provides a promising direction for making RL usable in real-world systems. Although recent offline RL studies have achieved much progress, existing methods still face many practical challenges in real-world system control tasks, such as computational restrictions during agent training and the requirement of extra control flexibility. Model-based planning frameworks provide an attractive solution for such tasks. However, most model-based planning algorithms are not designed for offline settings. Simply combining the ingredients of offline RL with existing methods either provides over-restrictive planning or leads to inferior performance. We propose a new lightweight model-based offline planning framework, namely MOPP, which tackles the dilemma between the restrictions of offline learning and high-performance planning. MOPP encourages more aggressive trajectory rollout guided by the behavior policy learned from data, and prunes out problematic trajectories to avoid potential out-of-distribution samples. Experimental results show that MOPP provides competitive performance compared with existing model-based offline planning and RL approaches.
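The rollout-and-prune idea can be sketched schematically. This is not the paper's MOPP implementation: the dynamics model, behavior policy, and pruning rule below are toy 1-D stand-ins for the learned networks the method actually uses.

```python
import random

# Schematic sketch of offline model-based planning in the spirit of MOPP:
# roll out candidate action sequences guided by a learned behavior policy,
# prune rollouts that stray out of the data distribution, and pick the
# best surviving trajectory by predicted return.

def dynamics_model(state, action):
    # Toy learned dynamics: next state and reward (goal: drive state to 0).
    next_state = 0.9 * state + action
    return next_state, -abs(next_state)

def behavior_policy(state):
    # Toy behavior policy estimated from offline data: (mean, std) of action.
    return -0.5 * state, 0.3

def out_of_distribution(state, action, threshold=2.0):
    # Prune actions far from the behavior policy's support.
    mean, std = behavior_policy(state)
    return abs(action - mean) > threshold * std

def plan(state, horizon=5, n_rollouts=64, seed=0):
    rng = random.Random(seed)
    best_return, best_first_action = float("-inf"), None
    for _ in range(n_rollouts):
        s, total, first = state, 0.0, None
        ok = True
        for _ in range(horizon):
            mean, std = behavior_policy(s)
            a = rng.gauss(mean, 1.5 * std)   # slightly aggressive sampling
            if out_of_distribution(s, a):
                ok = False                   # prune this rollout
                break
            if first is None:
                first = a
            s, r = dynamics_model(s, a)
            total += r
        if ok and total > best_return:
            best_return, best_first_action = total, first
    return best_first_action

action = plan(state=1.0)
print(action is not None)
```

The key design point mirrored here is that sampling is deliberately wider than the behavior policy (aggressive rollout) while an explicit support check discards trajectories likely to be out-of-distribution.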

Demonstrating the veracity of videos is a longstanding problem that has recently become more urgent and acute. It is extremely hard to accurately detect manipulated videos using content analysis, especially in the face of subtle, yet effective, manipulations, such as frame rate changes or skin tone adjustments. One prominent alternative to content analysis is to securely embed provenance information into videos. However, prior approaches have poor performance and/or granularity that is too coarse. To this end, we construct Vronicle -- a video provenance system that offers fine-grained provenance information and substantially better performance. It allows a video consumer to authenticate the camera that originated the video and the exact sequence of video filters that were subsequently applied to it. Vronicle exploits the increasing popularity and availability of Trusted Execution Environments (TEEs) on many types of computing platforms. One contribution of Vronicle is the design of provenance information that allows the consumer to verify various aspects of the video, thereby defeating numerous fake-video creation methods. Vronicle's adversarial model allows for a powerful adversary that can manipulate the video (e.g., in transit) and the software state outside the TEE. Another contribution is the use of fixed-function Intel SGX enclaves to post-process videos. This design facilitates verification of provenance information. We present a prototype implementation of Vronicle (to be open sourced), which relies on current technologies, making it readily deployable. Our evaluation demonstrates that Vronicle's performance is well-suited for offline use cases.
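The chained-provenance idea can be illustrated with a toy sketch in which each processing stage emits a signed record binding its input hash, output hash, and filter identity, so a consumer can replay the chain. The record format and shared-key HMAC signing are invented for this example; Vronicle itself relies on TEE attestation and enclave-held keys, not a shared secret.

```python
import hashlib
import hmac

# Illustrative provenance chain: one signed record per filter stage.
# A consumer re-derives every hash and checks every signature.

def sign(key, record):
    return hmac.new(key, record.encode(), hashlib.sha256).hexdigest()

def stage_record(key, stage, data_in, data_out):
    """Record binding the stage name to its input and output hashes."""
    rec = "|".join([stage,
                    hashlib.sha256(data_in).hexdigest(),
                    hashlib.sha256(data_out).hexdigest()])
    return rec, sign(key, rec)

def verify_chain(key, original, records, outputs):
    """Check each record's signature and that hashes chain correctly."""
    prev = original
    for (rec, sig), out in zip(records, outputs):
        if not hmac.compare_digest(sig, sign(key, rec)):
            return False                      # forged record
        stage, h_in, h_out = rec.split("|")
        if h_in != hashlib.sha256(prev).hexdigest():
            return False                      # broken chain
        if h_out != hashlib.sha256(out).hexdigest():
            return False                      # output was tampered with
        prev = out
    return True

KEY = b"demo-key"            # stand-in for an enclave-held signing key
raw = b"frame-bytes"
blurred = b"frame-bytes-blurred"
records = [stage_record(KEY, "blur", raw, blurred)]
print(verify_chain(KEY, raw, records, [blurred]))  # True
```

Because each record commits to both its input and output, an adversary who modifies the video in transit invalidates the hash chain even without touching any signature.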

Decentralized Finance (DeFi), a blockchain-powered peer-to-peer financial system, is mushrooming. One and a half years ago the total value locked in DeFi systems was approximately $700m USD; now, as of September 2021, it stands at around $100bn USD. The frenetic evolution of the ecosystem has created challenges in understanding the basic principles of these systems and their security risks. In this Systematization of Knowledge (SoK) we delineate the DeFi ecosystem along the following axes: its primitives, its operational protocol types, and its security. We provide a distinction between technical security, which has a healthy literature, and economic security, which is largely unexplored, connecting the latter with new models and thereby synthesizing insights from computer science, economics, and finance. Finally, we outline the open research challenges in the ecosystem across these security types.

Health-related rumors spreading online during a public crisis may pose a serious threat to people's well-being. Existing crisis informatics research lacks in-depth insights into the characteristics of health rumors and the efforts to debunk them on social media in a pandemic. To fill this gap, we conduct a comprehensive analysis of four months of rumor-related online discussion during COVID-19 on Weibo, a Chinese microblogging site. Results suggest that dread-type health rumors (those that provoke fear) generated significantly more discussion and lasted longer than wish-type rumors (those that raise hope). We further explore how four kinds of social media users (i.e., government, media, organization, and individual) combat health rumors, and identify their preferred ways of sharing debunking information and the key rhetorical strategies used in the process. We examine the relationship between debunking and rumor discussions using a Granger causality approach, and show the efficacy of debunking in suppressing rumor discussions, which is time-sensitive and varies across rumor types and debunkers. Our results can provide insights into crisis informatics and risk management on social media in pandemic settings.
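The Granger-style analysis can be illustrated with a minimal lag-1 check in pure Python: does adding past debunking activity improve the prediction of rumor-discussion volume beyond the volume's own past? The synthetic series and single lag are illustrative only; the paper's analysis applies proper multi-lag Granger causality tests to the Weibo time series.

```python
import random

# Lag-1 Granger-style F-statistic: compare the residual sum of squares of
# y_t ~ y_{t-1} (restricted) against y_t ~ y_{t-1} + x_{t-1} (unrestricted).

def ols_rss(X, y):
    """RSS of least-squares fit y ~ X (with intercept), via normal equations."""
    n, k = len(X), len(X[0]) + 1
    A = [[1.0] + list(row) for row in X]          # prepend intercept column
    M = [[sum(A[i][p] * A[i][q] for i in range(n)) for q in range(k)]
         for p in range(k)]                        # A^T A
    v = [sum(A[i][p] * y[i] for i in range(n)) for p in range(k)]  # A^T y
    for p in range(k):                             # Gaussian elimination
        for q in range(p + 1, k):
            f = M[q][p] / M[p][p]
            for r in range(k):
                M[q][r] -= f * M[p][r]
            v[q] -= f * v[p]
    b = [0.0] * k
    for p in reversed(range(k)):                   # back-substitution
        b[p] = (v[p] - sum(M[p][q] * b[q] for q in range(p + 1, k))) / M[p][p]
    return sum((y[i] - sum(b[q] * A[i][q] for q in range(k))) ** 2
               for i in range(n))

def granger_f(y, x):
    """F-statistic for 'x Granger-causes y' with one lag."""
    ylag, xlag, ycur = y[:-1], x[:-1], y[1:]
    rss_r = ols_rss([[a] for a in ylag], ycur)               # restricted
    rss_u = ols_rss([list(t) for t in zip(ylag, xlag)], ycur)  # unrestricted
    n = len(ycur)
    return (rss_r - rss_u) / (rss_u / (n - 3))

# Synthetic example: debunking x at time t-1 suppresses rumor volume y at t.
rng = random.Random(1)
x = [rng.random() for _ in range(200)]
y = [1.0]
for t in range(1, 200):
    y.append(0.5 * y[t - 1] - 0.8 * x[t - 1] + 0.05 * rng.random())

f = granger_f(y, x)
print(f > 10)  # large F: past debunking clearly helps predict rumors
```

A large F-statistic rejects the null that past debunking adds no predictive power, which is the sense in which the paper speaks of debunking "Granger-causing" a drop in rumor discussion.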

We study a fundamental cooperative message-delivery problem on the plane. Assume $n$ robots, each able to move in any direction, are placed arbitrarily on the plane. Each robot has its own maximum speed, and robots can communicate with each other face-to-face (i.e., when they are at the same location at the same time). There are also two designated points on the plane, $S$ (the source) and $D$ (the destination). The robots are required to transmit the message from the source to the destination as quickly as possible by face-to-face message passing. We consider both the offline setting, where all information (the locations and maximum speeds of the robots) is known in advance, and the online setting, where each robot knows only its own position and speed along with the positions of $S$ and $D$. In the offline case, we discover an important connection between the problem for two-robot systems and the well-known Apollonius circle, which we employ to design an optimal algorithm. We also propose a $\sqrt{2}$-approximation algorithm for systems with any number of robots. In the online setting, we provide an algorithm with competitive ratio $\frac 17 \left( 5+ 4 \sqrt{2} \right)$ for two-robot systems and show that the same algorithm has a competitive ratio less than $2$ for systems with any number of robots. We also show these results are tight for the given algorithm. Finally, we give two lower bounds (employing different arguments) on the competitive ratio of any online algorithm, one of $1.0391$ and the other of $1.0405$.
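The Apollonius-circle connection the abstract alludes to rests on a standard geometric fact: for robots at points $A$ and $B$ with maximum speeds $v_1$ and $v_2$, the set of points $P$ that both robots can reach at the same earliest time satisfies

\[
\frac{|PA|}{v_1} = \frac{|PB|}{v_2}
\quad\Longleftrightarrow\quad
\frac{|PA|}{|PB|} = \frac{v_1}{v_2},
\]

which, for $v_1 \neq v_2$, is precisely the Apollonius circle of $A$ and $B$ with ratio $k = v_1/v_2$ (and degenerates to the perpendicular bisector of $AB$ when $v_1 = v_2$). An optimal hand-off point for the message between the two robots can then be sought on this locus.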

This demo presents a functional proof-of-concept prototype of a smart bracelet that utilizes IoT and ML to help in the effort to contain pandemics such as COVID-19. The smart bracelet helps people navigate daily life safely by monitoring health signs and by detecting and alerting users when they violate social-distancing regulations. In addition, the bracelet communicates with similar bracelets to keep track of recent contacts. Using RFID technology, the bracelet helps automate access control to premises such as workplaces. All this is achieved while preserving the privacy of the users.

We present the results and the main findings of the NLP4IF-2021 shared tasks. Task 1 focused on fighting the COVID-19 infodemic in social media, and it was offered in Arabic, Bulgarian, and English. Given a tweet, it asked to predict whether that tweet contains a verifiable claim, and if so, whether it is likely to be false, is of general interest, is likely to be harmful, and is worthy of manual fact-checking; also, whether it is harmful to society, and whether it requires the attention of policy makers. Task 2 focused on censorship detection, and was offered in Chinese. A total of ten teams submitted systems for task 1, and one team participated in task 2; nine teams also submitted a system description paper. Here, we present the tasks, analyze the results, and discuss the system submissions and the methods they used. Most submissions achieved sizable improvements over several baselines, and the best systems used pre-trained Transformers and ensembles. The data, the scorers and the leaderboards for the tasks are available at //gitlab.com/NLP4IF/nlp4if-2021.
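The reported ensembling can be illustrated with a hypothetical majority-vote sketch over the per-question binary predictions of several models. The question names and model outputs below are hard-coded stand-ins, not real system predictions from the shared task.

```python
# Majority vote over per-question binary labels from multiple models,
# mirroring the seven yes/no questions asked about each tweet in Task 1.

QUESTIONS = ["verifiable", "false", "interest", "harmful",
             "check_worthy", "harm_society", "attn_policy"]

def majority_vote(predictions):
    """predictions: list of dicts mapping question -> 0/1, one per model."""
    out = {}
    for q in QUESTIONS:
        votes = sum(p[q] for p in predictions)
        out[q] = 1 if votes * 2 > len(predictions) else 0
    return out

# Three stand-in models: model_c agrees with model_a except on "false".
model_a = {q: 1 for q in QUESTIONS}
model_b = {q: 0 for q in QUESTIONS}
model_c = dict(model_a, false=0)

print(majority_vote([model_a, model_b, model_c]))
```

In practice each constituent model would be a fine-tuned pre-trained Transformer; the vote (or an average of probabilities) typically smooths out individual models' errors.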
