
Smartphone technology has improved drastically over the past decade. These improvements have enabled the creation of specialized health applications, which offer consumers a range of health-related activities, such as tracking and checking symptoms of health conditions or diseases, through their smartphones. We term these applications Symptom Checking apps, or simply SymptomCheckers. Due to the sensitive nature of the private data they collect, store, and manage, leakage of user information could have significant consequences. In this paper, we use a combination of static and dynamic analysis techniques to detect, trace, and categorize security and privacy issues in 36 popular SymptomCheckers on Google Play. Our analyses reveal that SymptomCheckers request a significantly higher number of sensitive permissions and embed a higher number of third-party tracking libraries for targeted advertising and analytics; these libraries exploit the privileged access of the SymptomCheckers in which they are embedded as a means of collecting and sharing critically sensitive data about users and their devices. We find that they share the data they collect in unencrypted plain text with third-party advertisers and, in some cases, with malicious domains. The results reveal that such exploitation is present in popular apps that remain readily available on Google Play.
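As a rough illustration of the static-analysis step described above, the sketch below uses the open-source androguard library to list an APK's requested permissions and scan its classes for well-known tracker SDK package prefixes. The APK path, permission set, and tracker list are illustrative assumptions, not the paper's actual tooling.

```python
# Minimal static-analysis sketch (assumed tooling, not the paper's):
# extract requested permissions and look for known ad/tracker SDK
# package prefixes in an APK. Requires: pip install androguard
from androguard.misc import AnalyzeAPK

APK_PATH = "symptom_checker.apk"  # illustrative path
TRACKER_PREFIXES = ["Lcom/google/ads/", "Lcom/facebook/ads/", "Lcom/flurry/"]
SENSITIVE = {
    "android.permission.READ_CONTACTS",
    "android.permission.ACCESS_FINE_LOCATION",
    "android.permission.READ_PHONE_STATE",
}

a, d, dx = AnalyzeAPK(APK_PATH)

# Permissions declared in the manifest, flagged if sensitive.
for perm in a.get_permissions():
    flag = "!" if perm in SENSITIVE else " "
    print(flag, perm)

# Classes whose names match known third-party tracking SDK packages.
found = {p for c in dx.get_classes() for p in TRACKER_PREFIXES
         if c.name.startswith(p)}
print("Embedded tracker packages:", found or "none found")
```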

Related Content

Google Play (formerly Android Market) is a service created by Google for users of the Android operating system, allowing users of Android phones and tablets to browse and download applications. Users can purchase these applications or try them for free.

In the 21st century, the industry of drones, also known as Unmanned Aerial Vehicles (UAVs), has grown rapidly, bringing a large number of new airspace users. The tremendous benefits of this technology in civilian applications such as hostage rescue and parcel delivery will help integrate it into the smart cities of the future. Nowadays, the affordability of commercial drones is expanding their usage at a large scale. However, the development of drone technology is accompanied by vulnerabilities and threats due to the lack of efficient security implementations. Moreover, the complexity of UAVs in software and hardware raises potential security and privacy issues, posing significant challenges for industry, academia, and governments. In this paper, we extensively survey the security and privacy issues of UAVs by providing a systematic classification at four levels: Hardware-level, Software-level, Communication-level, and Sensor-level. In particular, for each level, we thoroughly investigate (1) common vulnerabilities affecting UAVs that malicious actors could exploit, (2) existing threats that jeopardize the civilian application of UAVs, (3) active and passive attacks performed by adversaries to compromise the security and privacy of UAVs, and (4) possible countermeasures and mitigation techniques to protect UAVs from such malicious activities. In addition, we summarize takeaways that highlight lessons learned about UAVs' security and privacy issues. Finally, we conclude our survey by presenting critical pitfalls and suggesting promising future research directions for the security and privacy of UAVs.

Recommender models are commonly used to suggest relevant items to users in e-commerce and online-advertising applications. These models use massive embedding tables to store numerical representations of item and user categorical variables (memory-intensive) and employ neural networks (compute-intensive) to generate final recommendations. Training these large-scale recommendation models requires ever-increasing data and compute resources. The highly parallel neural-network portion of these models can benefit from GPU acceleration; however, large embedding tables often cannot fit in the limited-capacity GPU device memory. Hence, this paper dives deep into the semantics of training data and obtains insights about the feature access, transfer, and usage patterns of these models. We observe that, due to the popularity of certain inputs, accesses to the embeddings are highly skewed, with a few embedding entries being accessed up to 10000x more often than others. This paper leverages this asymmetrical access pattern in a framework, called FAE, that proposes a hot-embedding-aware data layout for training recommender models. This layout uses the scarce GPU memory to store the most frequently accessed embeddings, thus reducing data transfers from CPU to GPU. At the same time, FAE engages the GPU to accelerate the execution of these hot embedding entries. Experiments on production-scale recommendation models with real datasets show that FAE reduces the overall training time by 2.3x and 1.52x in comparison to XDL CPU-only and XDL CPU-GPU execution, respectively, while maintaining baseline accuracy.
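To fix ideas, here is a small NumPy sketch of a hot/cold embedding split in the spirit of FAE (not the authors' implementation): profile access frequencies over a sample of training inputs, pin the most-accessed rows in GPU memory, and keep the long tail on the CPU. The table size, skew, and GPU budget are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
NUM_ROWS = 1_000_000                    # embedding table rows (assumed)
# Zipf-like accesses: a few rows dominate, as the paper observes.
accesses = rng.zipf(1.5, size=500_000) % NUM_ROWS

counts = np.bincount(accesses, minlength=NUM_ROWS)
GPU_BUDGET_ROWS = 50_000                # rows that fit in spare GPU memory
hot_ids = np.argsort(counts)[::-1][:GPU_BUDGET_ROWS]

is_hot = np.zeros(NUM_ROWS, dtype=bool)
is_hot[hot_ids] = True
print(f"hot rows cover {counts[hot_ids].sum() / counts.sum():.1%} of accesses")

def route(batch_ids):
    """Send hot lookups to the GPU-resident table, the rest to the CPU,
    avoiding a CPU->GPU transfer for the common case."""
    return batch_ids[is_hot[batch_ids]], batch_ids[~is_hot[batch_ids]]
```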

Password managers help users more effectively manage their passwords, encouraging them to adopt stronger passwords across their many accounts. In contrast to desktop systems, where password managers receive no system-level support, mobile operating systems provide autofill frameworks designed to integrate with password managers to provide secure and usable autofill for browsers and other apps installed on mobile devices. In this paper, we evaluate the mobile autofill frameworks on iOS and Android, examining whether they achieve substantive benefits over the ad-hoc desktop environment or become a problematic single point of failure. We find that while the frameworks address several common issues, they also enforce insecure behavior and fail to provide password managers sufficient information to override the frameworks' insecure behavior, with the result that mobile managers are overall less secure than their desktop counterparts. We also demonstrate how these frameworks act as a confused deputy in manager-assisted credential phishing attacks. Our results demonstrate the need for significant improvements to mobile autofill frameworks. We conclude the paper with recommendations for the design and implementation of secure autofill frameworks.

While many studies have examined the privacy properties of the Android and Google Play app ecosystem, comparatively little is known about iOS and the Apple App Store, the most widely used ecosystem in the US. At the same time, there is increasing competition around privacy between these smartphone operating system providers. In this paper, we present a study of 24k Android and iOS apps from 2020 along several dimensions relating to user privacy. We find that third-party tracking and the sharing of unique user identifiers were widespread in apps from both ecosystems, even in apps aimed at children. In the children's category, iOS apps used far less advertising-related tracking than their Android counterparts, but could access children's location more often (by a factor of 7). Across all studied apps, our study highlights widespread potential violations of US, EU, and UK privacy law, including 1) the use of third-party tracking without user consent, 2) the lack of parental consent before sharing PII with third parties in children's apps, 3) the non-data-minimising configuration of tracking libraries, 4) the sending of personal data to countries without an adequate level of data protection, and 5) the continued absence of transparency around tracking, partly due to design decisions by Apple and Google. Overall, we find that neither platform is clearly better than the other for privacy across the dimensions we studied.

Hardware Security Modules (HSMs) are trusted machines that perform sensitive operations in critical ecosystems. They are usually required by law in financial and government digital services. The most important feature of an HSM is its ability to store sensitive credentials and cryptographic keys inside tamper-resistant hardware, so that every operation is done internally through a suitable API, and such sensitive data are never exposed outside the device. HSMs are now conveniently provided in the cloud, meaning that the physical machines are remotely hosted by some provider and customers can access them through a standard API. The property of keeping sensitive data inside the device is even more important in this setting, as a vulnerable application might expose the full API to an attacker. Unfortunately, over the last 20+ years a multitude of practical API-level attacks have been found and proved feasible in real devices. The latest version of PKCS#11, the most popular standard API for HSMs, does not address these issues, leaving all of these flaws possible. In this paper, we propose the first secure HSM configuration that does not require any restriction or modification of the PKCS#11 API and is suitable for cloud HSM solutions, where compliance with the standard API is of paramount importance. The configuration relies on a careful separation of roles among the different HSM users so that known API flaws are not exploitable by any attacker taking control of the application. We prove the correctness of the configuration by providing a formal model in the state-of-the-art Tamarin prover, and we show how to implement the configuration in a real cloud HSM solution.
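As a concrete taste of the attribute hygiene such a configuration builds on, the sketch below uses the python-pkcs11 bindings to generate a key that is sensitive and non-extractable and is never granted both wrap and decrypt rights (the combination behind the classic wrap/decrypt API-level attack). The module path, token label, and PIN are placeholders, and this is only a fragment, not the paper's full role-separation setup.

```python
# Hedged PKCS#11 sketch (pip install python-pkcs11); paths/PINs are placeholders.
import pkcs11
from pkcs11 import Attribute, KeyType

lib = pkcs11.lib("/usr/lib/softhsm/libsofthsm2.so")  # placeholder module
token = lib.get_token(token_label="demo")            # placeholder token

with token.open(rw=True, user_pin="1234") as session:
    data_key = session.generate_key(
        KeyType.AES, 256, label="data-key",
        template={
            Attribute.SENSITIVE: True,    # value never leaves the HSM in clear
            Attribute.EXTRACTABLE: False,
            Attribute.ENCRYPT: True,
            Attribute.DECRYPT: True,
            Attribute.WRAP: False,        # a decrypt key must not also wrap
            Attribute.UNWRAP: False,
        },
    )
    iv = session.generate_random(128)
    ciphertext = data_key.encrypt(b"sensitive record", mechanism_param=iv)
```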

This demo presents a functional proof-of-concept prototype of a smart bracelet that uses IoT and ML to help contain pandemics such as COVID-19. The smart bracelet helps people navigate daily life safely by monitoring vital signs and by detecting and alerting wearers when they violate social-distancing regulations. In addition, the bracelet communicates with similar bracelets to keep track of recent contacts. Using RFID technology, the bracelet helps automate access control to premises such as workplaces. All this is achieved while preserving the privacy of the users.
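One plausible way to implement the distancing alert (an assumption on our part; the demo's actual method may differ) is to estimate peer distance from BLE RSSI with the log-distance path-loss model, as sketched below. The reference power and path-loss exponent would need per-device calibration.

```python
TX_POWER = -59   # assumed RSSI at 1 m, a typical BLE calibration value
N = 2.0          # assumed path-loss exponent (free space)

def estimated_distance_m(rssi: int) -> float:
    """Log-distance path-loss model: d = 10^((P_1m - RSSI) / (10 * n))."""
    return 10 ** ((TX_POWER - rssi) / (10 * N))

def violates_distancing(rssi: int, threshold_m: float = 2.0) -> bool:
    """Alert when a nearby bracelet appears closer than the threshold."""
    return estimated_distance_m(rssi) < threshold_m

print(violates_distancing(-63))  # True: about 1.6 m away
```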

Mobile health applications (mHealth apps for short) are being increasingly adopted in the healthcare sector, enabling stakeholders such as governments, health units, medics, and patients to use health services in a pervasive manner. Despite their known benefits, mHealth apps entail significant security and privacy challenges that can lead to data breaches with serious social, legal, and financial consequences. This research presents an empirical investigation of the security awareness of end-users of mHealth apps available on major mobile platforms, including Android and iOS. We collaborated with two mHealth providers in Saudi Arabia to survey 101 end-users, investigating their security awareness of (i) existing and desired security features, (ii) security-related issues, and (iii) methods to improve security knowledge. Findings indicate that the majority of end-users are aware of the existing security features provided by the apps (e.g., restricted app permissions); however, they desire usable security (e.g., biometric authentication) and are concerned about the privacy of their health information (e.g., data anonymization). End-users suggested that protocols such as session timeouts or two-factor authentication (2FA) positively impact security but compromise the usability of the app. Security awareness via social media, peer guidance, or training from app providers can increase end-users' trust in mHealth apps. This research investigates human-centric knowledge based on empirical evidence and provides a set of guidelines to develop secure and usable mHealth apps.

As data are increasingly stored in different silos and societies become more aware of data privacy issues, the traditional centralized training of artificial intelligence (AI) models is facing efficiency and privacy challenges. Recently, federated learning (FL) has emerged as an alternative solution and continues to thrive in this new reality. Existing FL protocol designs have been shown to be vulnerable to adversaries within or outside of the system, compromising data privacy and system robustness. Besides training powerful global models, it is of paramount importance to design FL systems that have privacy guarantees and are resistant to different types of adversaries. In this paper, we conduct the first comprehensive survey on this topic. Through a concise introduction to the concept of FL and a unique taxonomy covering 1) threat models, 2) poisoning attacks on robustness and their defenses, and 3) inference attacks on privacy and their defenses, we provide an accessible review of this important topic. We highlight the intuitions, key techniques, and fundamental assumptions adopted by various attacks and defenses. Finally, we discuss promising future research directions towards robust and privacy-preserving federated learning.
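To make the protocol under discussion concrete, below is a minimal one-server FedAvg round in NumPy: clients take gradient steps on their private data and send back only model updates, which the server averages weighted by local dataset size. The poisoning and inference attacks the survey covers target exactly these exchanged updates. The toy linear-regression task is an illustration only.

```python
import numpy as np

def local_step(w, X, y, lr=0.1):
    """One gradient step of linear least squares on a client's private data."""
    grad = 2 * X.T @ (X @ w - y) / len(y)
    return w - lr * grad

def fedavg_round(w, clients):
    """Average client updates, weighted by the number of local examples."""
    total = sum(len(y) for _, y in clients)
    return sum(len(y) / total * local_step(w.copy(), X, y) for X, y in clients)

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(5):                       # five clients with private datasets
    X = rng.normal(size=(100, 2))
    clients.append((X, X @ true_w + 0.01 * rng.normal(size=100)))

w = np.zeros(2)
for _ in range(50):
    w = fedavg_round(w, clients)
print(w)  # approaches [2, -1] without raw data ever leaving a client
```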

In recent years, mobile devices have developed rapidly, gaining stronger computation capability and larger storage. Some computation-intensive machine learning and deep learning tasks can now run on mobile devices. To take advantage of the resources available on mobile devices and preserve users' privacy, the idea of mobile distributed machine learning has been proposed. It uses local hardware resources and local data to solve machine learning sub-problems on mobile devices, and uploads only computation results, rather than original data, to contribute to the optimization of the global model. This architecture not only relieves the computation and storage burden on servers, but also protects users' sensitive information. Another benefit is bandwidth reduction, as various kinds of local data can now participate in the training process without being uploaded to the server. In this paper, we provide a comprehensive survey of recent studies of mobile distributed machine learning. We survey a number of widely used mobile distributed machine learning methods and present an in-depth discussion of the challenges and future directions in this area. We believe this survey provides a clear overview of mobile distributed machine learning and offers guidelines for applying it to real applications.
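The bandwidth claim is easy to see with back-of-envelope arithmetic, as in the sketch below; every size here is an assumption chosen only to make the comparison concrete.

```python
NUM_IMAGES = 10_000            # assumed photos held on the device
BYTES_PER_IMAGE = 200 * 1024   # assumed ~200 KB per compressed photo
MODEL_PARAMS = 1_000_000       # assumed small on-device model
BYTES_PER_PARAM = 4            # float32 update

raw_upload = NUM_IMAGES * BYTES_PER_IMAGE        # ship the data itself
update_upload = MODEL_PARAMS * BYTES_PER_PARAM   # ship only the update

print(f"raw data upload:     {raw_upload / 2**20:,.0f} MiB")     # ~1,953 MiB
print(f"model update upload: {update_upload / 2**20:,.0f} MiB")  # ~4 MiB
print(f"reduction factor:    {raw_upload / update_upload:,.0f}x")  # 512x
```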

Privacy is an important good for users of personalized services such as recommender systems. In the field of health informatics, users' privacy concerns may be amplified, but the potential utility of such services is also high. Despite the availability of technologies such as k-anonymity, differential privacy, privacy-aware recommendation, and personalized privacy trade-offs, little research has been conducted on users' willingness to share health data for use in such systems. In two conjoint-decision studies (sample size n=521), we investigate the importance and utility of privacy-preserving techniques for sharing personal health data under k-anonymity and differential privacy. Users were asked to pick a preferred sharing scenario depending on the recipient of the data, the benefit of sharing the data, the type of data, and the parameterized privacy level. Users objected to sharing data about mental illnesses for commercial purposes and under high de-anonymization risks, but showed little concern when data related to physical illnesses was used for scientific purposes. Suggestions for the development of health recommender systems are derived from the findings.
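For readers unfamiliar with the parameterized privacy the scenarios varied, here is a minimal sketch of the differential-privacy side: the Laplace mechanism answers a count query over health records with noise scaled to sensitivity/epsilon, so a smaller epsilon means stronger privacy and a noisier answer. The records and epsilon values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def dp_count(records, predicate, epsilon):
    """Differentially private count; a count query has sensitivity 1."""
    true_count = sum(predicate(r) for r in records)
    return true_count + rng.laplace(scale=1.0 / epsilon)

records = [{"condition": "asthma"}] * 40 + [{"condition": "anxiety"}] * 60
for eps in (0.1, 1.0, 10.0):
    noisy = dp_count(records, lambda r: r["condition"] == "anxiety", eps)
    print(f"epsilon={eps:>4}: reported count ~ {noisy:.1f} (true: 60)")
```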
