With the growing advances in Internet of Things (IoT) technology, IoT device management platforms are becoming increasingly important. We conducted a web-based survey and usability study with 43 participants who use IoT devices frequently to: 1) examine their smart home IoT usage patterns and privacy preferences, and 2) evaluate a web-based prototype for smart home IoT device management. We found that participants perceived privacy as more important than the convenience afforded by the IoT devices. When grouped by their average privacy- and convenience-importance scores, participants who rated both privacy and convenience as low reported significantly weaker preferences for privacy controls and convenience features than participants who rated both as high. Overall, participants were satisfied with the proposed website prototype, and their usability evaluation demonstrated a good understanding of the website's features. This paper provides an empirical examination of the privacy versus convenience trade-offs smart home users make when managing their IoT devices.
The Internet of Things (IoT) is a concept that aims to connect networked information systems to physical objects. IoT has applications in almost every aspect of modern life, and inventory management is no exception. IoT offers a solution to this problem by making it easier to connect the many organizations in a logistics system using Wireless Sensor Networks. We describe an Interactive Shopping Model and an Automated Inventory Intelligent Management System that use the Internet of Things to provide real-time product tracking, management, and monitoring. A survey and analysis of the prevalence of IoT among manufacturing SMEs is presented, along with the current obstacles to, and opportunities for, enabling predictive analytics. The four research capabilities are described alongside an overview of the IoT enablers. Future trends and challenges in emerging research and development topics are highlighted, such as making IoT technologies accessible to SMEs. The purpose of this paper is to examine how the Internet of Things is changing our lives and workplaces, and to highlight some of the best business practices, statistics, and trends. Given the growing importance of enterprise IoT and the research gap in this field, an IoT architecture and the IoT services industry are examined. A model is needed to select and deploy IoT services in different organizational settings.
Automated driving systems (ADS) are expected to be reliable and robust against a wide range of driving scenarios. Their decisions, first and foremost, must be well understood. Understanding a decision made by an ADS is a great challenge, because it is not straightforward to tell whether the decision is correct and how to verify it systematically. In this paper, a Sequential MetAmoRphic Testing Smart framework is proposed based on metamorphic testing, a mainstream software testing approach. In metamorphic testing, metamorphic groups are constructed by selecting multiple inputs according to so-called metamorphic relations, which are essentially necessary properties of the system under test; the violation of certain relations by corresponding metamorphic groups implies the detection of erroneous system behaviors. The proposed framework uses sequences of metamorphic groups to understand ADS behaviors and is applicable without the need for ground-truth datasets. To demonstrate its effectiveness, the framework is applied to test three ADS models that steer an autonomous car in different scenarios with another car either leading in front or approaching in the opposite direction. The conducted experiments reveal a large number of undesirable behaviors in these top-ranked deep learning models. These counter-intuitive behaviors are associated with how the core models of the ADS respond to different positions, directions, and properties of the other car in their proximity. Further analysis of the results helps identify critical factors affecting ADS decisions, demonstrating that the framework can be used to gain a comprehensive understanding of an ADS before its deployment.
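To make the idea of a metamorphic relation concrete, the sketch below checks a hypothetical mirror-symmetry relation for a steering model: flipping the scene left-right should negate the steering command. The scene encoding, the relation, and `toy_model` are all illustrative assumptions, not the paper's actual relations or models.

```python
def mirror(scene):
    """Left-right flip of a scene: the other car's lateral offset negates."""
    return {**scene, "other_car_x": -scene["other_car_x"]}

def violates_mirror_relation(steer_fn, scene, tol=1e-6):
    """Hypothetical metamorphic relation: steering on the mirrored scene
    should be the negation of steering on the original scene. A violation
    flags a suspect behavior without needing any ground-truth label."""
    return abs(steer_fn(mirror(scene)) + steer_fn(scene)) > tol

# Toy steering model: steer away from the other car, proportional to offset.
toy_model = lambda scene: -0.1 * scene["other_car_x"]
```

A model with a systematic bias (e.g., one that always adds a constant drift) would violate this relation even though no single output can be called "wrong" in isolation, which is exactly the appeal of metamorphic testing.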
We consider experiments in dynamical systems where interventions on some experimental units impact other units through a limiting constraint (such as a limited inventory). Despite outsize practical importance, the best estimators for this `Markovian' interference problem are largely heuristic in nature, and their bias is not well understood. We formalize the problem of inference in such experiments as one of policy evaluation. Off-policy estimators, while unbiased, apparently incur a large penalty in variance relative to state-of-the-art heuristics. We introduce an on-policy estimator: the Differences-In-Q's (DQ) estimator. We show that the DQ estimator can in general have exponentially smaller variance than off-policy evaluation. At the same time, its bias is second order in the impact of the intervention. This yields a striking bias-variance tradeoff so that the DQ estimator effectively dominates state-of-the-art alternatives. From a theoretical perspective, we introduce three separate novel techniques that are of independent interest in the theory of Reinforcement Learning (RL). Our empirical evaluation includes a set of experiments on a city-scale ride-hailing simulator.
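The core of the Differences-In-Q's idea can be illustrated with a toy tabular sketch: fit Q-values from a single on-policy trajectory under randomized treatment, then average the treatment-control Q difference over visited states. This is a simplified, hypothetical rendering (every-visit Monte Carlo, tabular states), not the paper's estimator or its variance-reduction machinery.

```python
from collections import defaultdict

def dq_estimate(trajectory, gamma=0.97):
    """Toy Differences-In-Q's sketch.
    trajectory: list of (state, action, reward) with action in {0, 1}
    drawn from a randomized on-policy experiment."""
    # Every-visit Monte Carlo estimate of Q under the randomized policy.
    q_sum = defaultdict(float)
    q_cnt = defaultdict(int)
    ret = 0.0
    for s, a, r in reversed(trajectory):
        ret = r + gamma * ret
        q_sum[(s, a)] += ret
        q_cnt[(s, a)] += 1
    q = {k: q_sum[k] / q_cnt[k] for k in q_sum}
    # Average Q(s, treat) - Q(s, control) over states seen under both arms.
    diffs = [q[(s, 1)] - q[(s, 0)]
             for s in {s for s, _, _ in trajectory}
             if (s, 1) in q and (s, 0) in q]
    return sum(diffs) / len(diffs)
```

Because the Q estimates come from the single on-policy trajectory, the estimator avoids the variance blow-up of off-policy importance weighting, at the cost of a bias that is second order in the intervention's impact.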
Mobile devices have access to personal, potentially sensitive data, and a large number of mobile applications and third-party libraries transmit this information over the network to remote servers (including app developer servers and third-party servers). In this paper, we are interested in a better understanding of not just the extent of personally identifiable information (PII) exposure, but also its context (i.e., the functionality of the app, the destination server, the encryption used, etc.) and the risk perceived by mobile users today. To that end, we take two steps. First, we perform a measurement study: we collect a new dataset via manual and automatic testing and capture the exposure of 16 PII types from the 400 most popular Android apps. We analyze these exposures and provide insights into the extent and patterns of mobile apps sharing PII, which can later be used for prediction and prevention. Second, we perform a user study with 220 participants on Amazon Mechanical Turk: we summarize the results of the measurement study in categories, present them in a realistic context, and assess users' understanding, concern, and willingness to take action. To the best of our knowledge, our user study is the first to collect and analyze user input in such fine granularity and on actual (not just potential or permitted) privacy exposures on mobile devices. Although many users did not initially understand the full implications of their PII being exposed, after being better informed through the study, they became appreciative and interested in better privacy practices.
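As a rough illustration of how PII exposure can be flagged in captured traffic, the sketch below scans an outgoing payload for a few PII types with generic patterns. The patterns and type names are illustrative assumptions; real measurement pipelines typically match against the device's known values (its actual IMEI, email, etc.) rather than generic regexes.

```python
import re

# Hypothetical signatures for a few PII types (illustrative only).
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "imei":  re.compile(r"\b\d{15}\b"),
    "mac":   re.compile(r"\b(?:[0-9A-Fa-f]{2}:){5}[0-9A-Fa-f]{2}\b"),
}

def pii_exposures(payload):
    """Return the set of PII types detected in one outgoing payload."""
    return {name for name, pat in PII_PATTERNS.items() if pat.search(payload)}
```

Combining such per-payload detections with the destination server and whether the connection was encrypted yields the kind of contextualized exposure records summarized for participants in the user study.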
Modern websites frequently use and embed third-party services to facilitate web development, connect to social media, or monetize content. This often introduces privacy issues, as the inclusion of third-party services on a website can allow the third party to collect personal data about the website's visitors. While the prevalence and mechanisms of third-party web tracking have been widely studied, little is known about the decision processes that lead websites to use third-party functionality and whether efforts are made to protect their visitors' privacy. We report results from an online survey with 395 participants involved in the creation and maintenance of websites. For ten common website functionalities, we investigated whether privacy played a role in decisions about how the functionality is integrated, whether specific efforts for privacy protection were made during integration, and to what degree people are aware of data collection through third parties. We find that ease of integration drives third-party adoption, but visitor privacy is considered when there are legal requirements or respective guidelines. Awareness of data collection and privacy risks is higher if the collection is directly associated with the purpose for which the third-party service is used.
Introduction: Systems in hospital or clinic settings are capable of providing services in the physical environment, and some (e.g., Picture Archiving and Communication Systems) provide remote services for patients. Designing such systems requires dedicated methods such as the software development life cycle and approaches such as prototyping. Clinical setting: This study designs an image exchange system for the private dental sector of Urmia city using user-centered methods and prototyping. Methods: Information was collected at each stage of the software development life cycle. Interviews and observations were used to gather data on user needs, and object-oriented programming was used to develop a prototype. Results: The users' needs were determined first; ease of use, security, and mobile apps were their most essential needs. The prototype was then designed and evaluated in focus group sessions, and these steps continued until the users were satisfied. After the users' approval, the prototype became the final system. Discussion: Instant access to information, voluntariness, user interface design, and usefulness were the most critical variables users considered. An advantage of this system is less radiation exposure for the patient, since patient images are no longer lost or missing. Conclusion: The success of such a system requires considering end-users' needs and applying them to the system. In addition to this system, an electronic health record can improve the treatment process and the work of the medical staff.
Increasingly, information systems rely on computational, storage, and network resources deployed in third-party facilities such as cloud centers and edge nodes. Such an approach further exacerbates cybersecurity concerns constantly raised by numerous incidents of security and privacy attacks resulting in data leakage and identity theft, among others. These have, in turn, forced the creation of stricter security and privacy-related regulations and have eroded the trust in cyberspace. In particular, security-related services and infrastructures, such as Certificate Authorities (CAs) that provide digital certificate services and Third-Party Authorities (TPAs) that provide cryptographic key services, are critical components for establishing trust in crypto-based privacy-preserving applications and services. To address such trust issues, various transparency frameworks and approaches have recently been proposed in the literature. This paper proposes the TAB framework, which provides transparency and trustworthiness of third-party authorities and third-party facilities using blockchain techniques for emerging crypto-based privacy-preserving applications. TAB employs the Ethereum blockchain as the underlying public ledger and also includes a novel smart contract to automate accountability, with an incentive mechanism that motivates users to participate in auditing and penalizes unintentional errors or malicious behavior. We implement TAB and show through experimental evaluation in the Ethereum official test network, Rinkeby, that the framework is efficient. We also formally show the security guarantee provided by TAB, and analyze the privacy guarantee and trustworthiness it provides.
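The tamper-evidence property that a public ledger provides for auditing can be illustrated with a minimal hash-chained log, where each entry commits to its predecessor's hash. This is a conceptual sketch of the transparency idea only, not the TAB smart contract or its Ethereum implementation; all names are hypothetical.

```python
import hashlib
import json

def append_entry(log, record):
    """Append a record to a hash-chained audit log. Each entry commits to
    the previous entry's hash, so any later modification is detectable."""
    prev = log[-1]["hash"] if log else "0" * 64
    body = json.dumps(record, sort_keys=True)
    digest = hashlib.sha256((prev + body).encode()).hexdigest()
    log.append({"prev": prev, "record": record, "hash": digest})
    return log

def verify(log):
    """Recompute the chain from the start; any tampered entry breaks it."""
    prev = "0" * 64
    for entry in log:
        body = json.dumps(entry["record"], sort_keys=True)
        expected = hashlib.sha256((prev + body).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True
```

On a blockchain, this verification is performed by the network rather than a single auditor, which is what removes the need to trust the authority keeping the log.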
Covid-19 has radically changed our lives, with many governments and businesses mandating work-from-home (WFH) and remote education. However, work-from-home policy is not always known globally, and even when enacted, compliance can vary. These uncertainties suggest a need to measure WFH and confirm actual policy implementation. We present new algorithms that detect WFH from changes in network use during the day. We show that change-sensitive networks reflect mobile computer use, detecting WFH from changes in network intensity: the diurnal and weekly patterns of IP address response. Our algorithm provides a new analysis of existing, continuous, global scans of most of the responsive IPv4 Internet (about 5.1M /24 blocks). Reuse of existing data allows us to study the emergence of Covid-19, revealing global reactions. We demonstrate the algorithm on networks with known ground truth, evaluate the data reconstruction and algorithm design choices with studies of real-world data, and validate our approach by testing random samples against news reports. In addition to Covid-related WFH, we also find other government-mandated lockdowns. Our results show the first use of network intensity to infer real-world behavior and policies.
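The underlying signal can be sketched very simply: for a network block, compare weekday daytime address-response intensity in a baseline window against a current window, and flag a WFH shift when a sustained relative drop appears. This toy detector is an assumption-laden illustration of the idea, not the paper's change-detection algorithm.

```python
def detect_wfh(baseline, current, drop=0.3):
    """Flag a likely work-from-home shift for one network block.
    baseline/current: per-day weekday daytime address-response counts.
    A sustained relative drop in daytime intensity suggests machines
    have left the office network (toy threshold rule)."""
    base = sorted(baseline)[len(baseline) // 2]   # median, robust to outages
    cur = sorted(current)[len(current) // 2]
    return base > 0 and (base - cur) / base >= drop
```

Using medians rather than means keeps a one-day outage or scan gap from mimicking a policy change; the real analysis must additionally separate diurnal and weekly periodicity from long-term trends.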
End-to-end object detection has progressed rapidly since the emergence of DETR. DETRs use a set of sparse queries that replace the dense candidate boxes in most traditional detectors. However, sparse queries cannot guarantee as high a recall as dense priors. At the same time, making queries dense is not trivial in current frameworks: it suffers not only from heavy computational cost but also from difficult optimization. As both sparse and dense queries are imperfect, \emph{what are the expected queries in end-to-end object detection}? This paper shows that the expected queries should be Dense Distinct Queries (DDQ). Concretely, we introduce dense priors back into the framework to generate dense queries. A duplicate query removal pre-process is applied to these queries so that they are distinguishable from each other. The dense distinct queries are then iteratively processed to obtain the final sparse outputs. We show that DDQ is stronger, more robust, and converges faster. It obtains 44.5 AP on the MS COCO detection dataset with only 12 epochs. DDQ is also robust in that it outperforms previous methods on both object detection and instance segmentation tasks across various datasets. DDQ blends advantages from traditional dense priors and recent end-to-end detectors. We hope it can serve as a new baseline and inspire researchers to revisit the complementarity between traditional methods and end-to-end detectors. The source code is publicly available at \url{//github.com/jshilong/DDQ}.
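The "distinct" step can be pictured as a greedy, class-agnostic duplicate removal over the dense candidate boxes: keep the highest-scoring box of each near-duplicate group, measured by IoU. This is a generic sketch of that kind of pre-process, with hypothetical thresholds, not DDQ's exact implementation.

```python
def iou(a, b):
    """Intersection-over-union of two boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda box: (box[2] - box[0]) * (box[3] - box[1])
    return inter / (area(a) + area(b) - inter)

def distinct_queries(boxes, scores, thr=0.7):
    """Greedy duplicate removal: visit boxes in descending score order and
    keep a box only if it overlaps no already-kept box above `thr`.
    Returns indices of the surviving (distinct) dense queries."""
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    kept = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < thr for j in kept):
            kept.append(i)
    return kept
```

The survivors are dense enough to cover the image (high recall) yet distinct enough that one-to-one matching in the decoder remains easy to optimize, which is the trade-off the abstract highlights.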
User engagement is a critical metric for evaluating the quality of open-domain dialogue systems. Prior work has focused on conversation-level engagement by using heuristically constructed features such as the number of turns and the total time of the conversation. In this paper, we investigate the possibility and efficacy of estimating utterance-level engagement and define a novel metric, {\em predictive engagement}, for the automatic evaluation of open-domain dialogue systems. Our experiments demonstrate that (1) human annotators have high agreement on assessing utterance-level engagement scores, and (2) conversation-level engagement scores can be predicted from properly aggregated utterance-level engagement scores. Furthermore, we show that the utterance-level engagement scores can be learned from data. These scores can improve automatic evaluation metrics for open-domain dialogue systems, as shown by their correlation with human judgements. This suggests that predictive engagement can be used as real-time feedback for training better dialogue models.
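The aggregation step (2) can be sketched as a simple pooling of per-utterance scores into one conversation-level score. The aggregation choices shown are generic assumptions; the paper's "properly aggregated" scheme is what the learned predictor approximates.

```python
def conversation_engagement(utterance_scores, agg="mean"):
    """Pool utterance-level engagement scores (each in [0, 1]) into a
    single conversation-level score (toy mean/max aggregators)."""
    if agg == "mean":
        return sum(utterance_scores) / len(utterance_scores)
    if agg == "max":
        return max(utterance_scores)
    raise ValueError(f"unknown aggregator: {agg}")
```

Because the score is computed per utterance, the same pooling can run incrementally during a conversation, which is what makes real-time feedback for training feasible.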