
Mobility analysis, or understanding and modeling of people's mobility patterns in terms of when, where, and how people move from one place to another, is fundamentally important, as such information is the basis for large-scale investment decisions on the nation's multi-modal transportation infrastructure. The recent rise of passively generated data from mobile devices has raised questions about using such data to capture the mobility patterns of a population, because: 1) there is a great variety of mobile data sources whose respective properties are unknown; and 2) data pre-processing and analysis methods are often not explicitly reported. The high stakes involved in mobility analysis and the issues associated with passively generated mobile data call for mobility analysis (including data, methods and results) to be accessible to all, interoperable across different computing systems, and reproducible and reusable by others. In this study, a container system named Mobility Analysis Workflow (MAW), which integrates data, methods and results, is developed. Built upon containerization technology, MAW allows its users to easily create, configure, modify, execute and share their methods and results in the form of Docker containers. Tools for operationalizing MAW are also developed and made publicly available on GitHub. One use case of MAW is comparative analysis of the impacts of different pre-processing and mobility analysis methods on inferred mobility patterns. This study finds that different pre-processing and analysis methods do affect the resulting mobility patterns. The creation of MAW, and the better understanding it facilitates of the relationship between data, methods and resulting mobility patterns, represent an important first step toward promoting reproducibility and reusability in mobility analysis with passively generated data.

Related content

The farming industry constantly seeks the automation of different processes involved in agricultural production, such as sowing, harvesting and weed control. The use of mobile autonomous robots to perform those tasks is of great interest. Arable lands present hard challenges for Simultaneous Localization and Mapping (SLAM) systems, which are key for mobile robotics, given the visual difficulty caused by the highly repetitive scene and the movement of crop leaves in the wind. In recent years, several Visual-Inertial Odometry (VIO) and SLAM systems have been developed. They have proved to be robust and capable of achieving high accuracy in indoor and outdoor urban environments. However, they have not been properly assessed in agricultural fields. In this work we assess the most relevant state-of-the-art VIO systems in terms of accuracy and processing time on arable lands in order to better understand how they behave in these environments. In particular, the evaluation is carried out on a collection of sensor data recorded by our wheeled robot in a soybean field, which was publicly released as the Rosario Dataset. The evaluation shows that the highly repetitive appearance of the environment, the strong vibration produced by the rough terrain and the movement of the leaves caused by the wind expose the limitations of the current state-of-the-art VIO and SLAM systems. We analyze the systems' failures and highlight the observed drawbacks, including initialization failures, tracking loss and sensitivity to IMU saturation. Finally, we conclude that even though certain systems like ORB-SLAM3 and S-MSCKF show good results with respect to others, more improvements should be made to make them reliable in agricultural fields for applications such as soil tillage of crop rows and pesticide spraying.
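The accuracy evaluations described above are typically reported via the absolute trajectory error (ATE). A minimal sketch of the RMSE form of this metric, assuming the estimated and ground-truth trajectories have already been time-associated and spatially aligned (e.g. with the Umeyama method):

```python
import numpy as np

def ate_rmse(estimated, ground_truth):
    """Root-mean-square absolute trajectory error between an estimated
    trajectory and ground truth. Assumes both are (N, d) arrays of
    positions that are already time-associated and aligned."""
    est = np.asarray(estimated, dtype=float)
    gt = np.asarray(ground_truth, dtype=float)
    per_pose_err = np.linalg.norm(est - gt, axis=1)  # error per pose
    return float(np.sqrt(np.mean(per_pose_err ** 2)))
```

Lower ATE means the VIO/SLAM estimate tracks the true trajectory more closely; a tracking loss typically shows up as a sudden, sustained jump in per-pose error.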

Cross-validation is a widely used technique to estimate prediction error, but its behavior is complex and not fully understood. Ideally, one would like to think that cross-validation estimates the prediction error for the model at hand, fit to the training data. We prove that this is not the case for the linear model fit by ordinary least squares; rather, it estimates the average prediction error of models fit on other unseen training sets drawn from the same population. We further show that this phenomenon occurs for most popular estimates of prediction error, including data splitting, bootstrapping, and Mallows' Cp. Moreover, the standard confidence intervals for prediction error derived from cross-validation may have coverage far below the desired level. Because each data point is used for both training and testing, there are correlations among the measured accuracies for each fold, and so the usual estimate of variance is too small. We introduce a nested cross-validation scheme to estimate this variance more accurately, and we show empirically that this modification leads to intervals with approximately correct coverage in many examples where traditional cross-validation intervals fail.
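A minimal sketch of the setting the abstract analyzes: plain K-fold cross-validation for OLS, together with the naive standard error that treats the n held-out errors as independent. The data-generating model and all parameter values here are hypothetical; the point of the argument above is that this naive SE is generally too small because of train/test correlations across folds.

```python
import numpy as np

rng = np.random.default_rng(0)

def kfold_cv(X, y, k=5):
    """K-fold CV estimate of squared prediction error for OLS, plus the
    naive standard error that (incorrectly) treats the n held-out
    squared errors as independent."""
    n = len(y)
    idx = rng.permutation(n)
    folds = np.array_split(idx, k)
    sq_err = np.empty(n)
    for test in folds:
        train = np.setdiff1d(idx, test)
        beta, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)
        sq_err[test] = (y[test] - X[test] @ beta) ** 2
    return sq_err.mean(), sq_err.std(ddof=1) / np.sqrt(n)

# toy linear model with unit noise variance
n, p = 200, 5
X = rng.standard_normal((n, p))
y = X @ rng.standard_normal(p) + rng.standard_normal(n)
cv_estimate, naive_se = kfold_cv(X, y)
```

The nested scheme the authors propose replaces `naive_se` with a variance estimate built from an inner layer of cross-validation, which corrects the interval width.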

Mobile crowd sensing and computing (MCSC) enables heterogeneous users (workers) to contribute real-time sensed, generated, and pre-processed data from their mobile devices to the MCSC platform for intelligent service provisioning. This paper investigates a novel hybrid worker recruitment problem in which the MCSC platform employs workers to serve MCSC tasks with diverse quality requirements and budget constraints, while considering uncertainties in workers' participation and their local workloads. We propose a hybrid worker recruitment framework consisting of offline and online trading modes. The former enables the platform to overbook long-term workers (services) to cope with dynamic service supply by signing contracts in advance, which is formulated as a 0-1 integer linear program (ILP) with probabilistic constraints on service quality and budget. Besides, motivated by the existing uncertainties, which may cause long-term workers to fail to meet the service quality requirement of each task, we augment our methodology with an online temporary worker recruitment scheme as a backup (Plan B) to support seamless service provisioning for MCSC tasks; this also represents a 0-1 ILP problem. To tackle these problems, which are proven to be NP-hard, we develop three algorithms, namely: i) exhaustive search; ii) unique index-based stochastic search with a risk-aware filter constraint; and iii) a geometric programming-based successive convex algorithm, which achieve optimal (with high computational complexity) or sub-optimal (with low complexity) solutions. Experimental results demonstrate the effectiveness of our proposed hybrid worker recruitment mechanism in terms of service quality, time efficiency, etc.
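To make the 0-1 ILP structure concrete, here is a toy sketch of baseline i), exhaustive search: enumerate every subset of workers and keep the feasible one with the best expected quality. The worker costs and quality values are made-up illustrative numbers, and the probabilistic constraints from the paper are collapsed into a single deterministic budget for brevity.

```python
from itertools import combinations

# hypothetical workers: (recruitment cost, expected service quality)
workers = [(3, 5.0), (2, 3.0), (4, 6.0), (1, 1.5)]
budget = 6

def exhaustive(workers, budget):
    """Enumerate all 0-1 assignments of workers; exponential in the
    number of workers, hence only viable for small instances."""
    best_set, best_quality = (), 0.0
    for r in range(len(workers) + 1):
        for subset in combinations(range(len(workers)), r):
            cost = sum(workers[i][0] for i in subset)
            quality = sum(workers[i][1] for i in subset)
            if cost <= budget and quality > best_quality:
                best_set, best_quality = subset, quality
    return best_set, best_quality
```

The two other algorithms in the paper trade this exactness for polynomial running time, which is what makes them practical at platform scale.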

We present a systematic refactoring of the conventional treatment of privacy analyses, basing it on mathematical concepts from the framework of Quantitative Information Flow (QIF). The approach we suggest brings three principal advantages: it is flexible, allowing for precise quantification and comparison of privacy risks for attacks both known and novel; it can be computationally tractable for very large, longitudinal datasets; and its results are explainable both to politicians and to the general public. We apply our approach to a very large case study: the Educational Censuses of Brazil, curated by the governmental agency INEP, which comprise over 90 attributes of approximately 50 million individuals released longitudinally every year since 2007. These datasets have only very recently (2018-2021) attracted legislation to regulate their privacy -- while at the same time continuing to maintain the openness that had been sought in Brazilian society. INEP's reaction to that legislation was the genesis of our project with them. In our conclusions here we share the scientific, technical, and communication lessons we learned in the process.
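The QIF framework mentioned above quantifies privacy risk by comparing an adversary's chance of guessing a secret before and after observing a data release, modeled as a probabilistic channel. A minimal sketch with a made-up prior and channel matrix, using the standard prior/posterior Bayes vulnerability definitions from QIF:

```python
# hypothetical prior over three secret values
prior = [0.5, 0.3, 0.2]

# hypothetical channel: C[x][y] = P(observation y | secret x)
C = [[0.8, 0.2],
     [0.4, 0.6],
     [0.1, 0.9]]

def prior_vulnerability(prior):
    """Adversary's best chance of guessing the secret in one try,
    before seeing any output."""
    return max(prior)

def posterior_vulnerability(prior, C):
    """Expected best guessing chance after observing the channel
    output: sum over y of max_x prior[x] * C[x][y]."""
    return sum(max(prior[x] * C[x][y] for x in range(len(prior)))
               for y in range(len(C[0])))
```

The ratio of posterior to prior vulnerability (the multiplicative leakage) gives a single comparable number per attack model, which is what enables comparing known and novel attacks on the same dataset.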

Rule sets are highly interpretable logical models in which the predicates for decision are expressed in disjunctive normal form (DNF, OR-of-ANDs); equivalently, the overall model comprises an unordered collection of if-then decision rules. In this paper, we consider a submodular optimization based approach for learning rule sets. The learning problem is framed as a subset selection task in which a subset of all possible rules needs to be selected to form an accurate and interpretable rule set. We employ an objective function that exhibits submodularity and thus is amenable to submodular optimization techniques. To overcome the difficulty arising from the exponential-sized ground set of rules, the subproblem of searching for a rule is cast as another subset selection task that asks for a subset of features. We show it is possible to write the induced objective function for the subproblem as a difference of two submodular (DS) functions, making it approximately solvable by DS optimization algorithms. Overall, the proposed approach is simple, scalable, and likely to benefit from further research on submodular optimization. Experiments on real datasets demonstrate the effectiveness of our method.
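The classic way to exploit submodularity in a subset selection task like this is greedy selection by marginal gain. A toy sketch, using a simple coverage-of-positives objective (monotone submodular) over made-up single-feature rules; the paper's actual objective also accounts for accuracy and interpretability:

```python
# toy data: binary feature dicts with 0/1 labels (all hypothetical)
X = [{"a": 1, "b": 0, "c": 1}, {"a": 1, "b": 1, "c": 0},
     {"a": 0, "b": 1, "c": 1}, {"a": 0, "b": 0, "c": 0}]
y = [1, 1, 1, 0]

# candidate rules: here, single (feature, value) tests
rules = [("a", 1), ("b", 1), ("c", 1)]

def covers(rule, x):
    feat, val = rule
    return x[feat] == val

def coverage(selected):
    """Number of positive examples covered by at least one selected
    rule -- a monotone submodular set function."""
    return sum(1 for x, lbl in zip(X, y)
               if lbl == 1 and any(covers(r, x) for r in selected))

def greedy(rules, k):
    """Greedily add the rule with the largest marginal coverage gain;
    for monotone submodular objectives this gives the classical
    (1 - 1/e) approximation guarantee."""
    chosen = []
    for _ in range(k):
        best = max((r for r in rules if r not in chosen),
                   key=lambda r: coverage(chosen + [r]))
        if coverage(chosen + [best]) == coverage(chosen):
            break  # no rule improves the objective
        chosen.append(best)
    return chosen
```

The DS decomposition in the paper addresses the inner `max` step: when the ground set of rules is exponential, the best next rule must itself be searched for as a subset-of-features problem.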

Many applications that benefit from data offload to cloud services operate on private data. A now-long line of work has shown that, even when data is offloaded in an encrypted form, an adversary can learn sensitive information by analyzing data access patterns. Existing techniques for oblivious data access-that protect against access pattern attacks-require a centralized and stateful trusted proxy to orchestrate data accesses from applications to cloud services. We show that, in failure-prone deployments, such a centralized and stateful proxy results in violation of oblivious data access security guarantees and/or system unavailability. We thus initiate the study of distributed, fault-tolerant, oblivious data access. We present SHORTSTACK, a distributed proxy architecture for oblivious data access in failure-prone deployments. SHORTSTACK achieves the classical obliviousness guarantee--access patterns observed by the adversary being independent of the input--even under a powerful passive persistent adversary that can force failure of arbitrary (bounded-sized) subset of proxy servers at arbitrary times. We also introduce a security model that enables studying oblivious data access with distributed, failure-prone, servers. We provide a formal proof that SHORTSTACK enables oblivious data access under this model, and show empirically that SHORTSTACK performance scales near-linearly with number of distributed proxy servers.

A sensor is a device that converts a physical parameter or an environmental characteristic (e.g., temperature, distance, speed, etc.) into a signal that can be digitally measured and processed to perform specific tasks. Mobile robots need sensors to measure properties of their environment, thus allowing for safe navigation, complex perception and corresponding actions and effective interactions with other agents that populate it. Sensors used by mobile robots range from simple tactile sensors, such as bumpers, to complex vision-based sensors such as structured light cameras. All of them provide a digital output (e.g., a string, a set of values, a matrix, etc.) that can be processed by the robot's computer. Such output is typically obtained by discretizing one or more analog electrical signals by using an Analog to Digital Converter (ADC) included in the sensor. In this chapter we present the most common sensors used in mobile robotics, providing an introduction to their taxonomy, basic features and specifications. The description of the functionalities and the types of applications follows a bottom-up approach: the basic principles and components on which the sensors are based are presented before describing real-world sensors, which are generally based on multiple technologies and basic devices.
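The ADC stage described above maps a continuous voltage onto a finite set of integer codes. A minimal sketch of that discretization for a hypothetical 10-bit ADC with a 5 V reference (the bit width and reference voltage are illustrative, not tied to any specific sensor):

```python
def adc_read(voltage, v_ref=5.0, bits=10):
    """Quantize an analog voltage into an integer ADC code in
    [0, 2**bits - 1], as a truncating (floor) quantizer."""
    levels = 2 ** bits                    # 1024 codes for 10 bits
    code = int(voltage / v_ref * levels)  # floor to nearest code below
    return max(0, min(levels - 1, code))  # clamp out-of-range inputs
```

The resolution of such a converter is `v_ref / 2**bits` volts per code (about 4.9 mV here), which bounds how finely the robot can distinguish values of the underlying physical quantity.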

In this paper, we focus our attention on private Empirical Risk Minimization (ERM), one of the most commonly used data analysis methods. We take a first step toward understanding the privacy-utility trade-off by theoretically exploring the effect of epsilon (the parameter of differential privacy that determines the strength of the privacy guarantee) on the utility of the learning model. We trace how utility changes as epsilon varies and reveal the relationship between epsilon and utility. We then formalize this relationship and propose a practical approach for estimating the utility under an arbitrary value of epsilon. Both theoretical analysis and experimental results demonstrate the high estimation accuracy and broad applicability of our approach. As providing algorithms with strong utility guarantees that also preserve privacy becomes more widely accepted, our approach has high practical value and is likely to be adopted by companies and organizations that would like to preserve privacy without compromising on utility.
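The epsilon-utility relationship is easiest to see on the simplest differentially private primitive, the Laplace mechanism, where the noise scale (and hence the expected utility loss) shrinks as 1/epsilon. A sketch on a bounded-mean query with made-up data; the paper studies the analogous trade-off for ERM models rather than this toy query:

```python
import numpy as np

rng = np.random.default_rng(1)

def dp_mean(data, epsilon, lo=0.0, hi=1.0):
    """Epsilon-differentially private mean via the Laplace mechanism.
    The sensitivity of the mean of n values in [lo, hi] is (hi-lo)/n,
    so the noise scale is (hi-lo) / (n * epsilon)."""
    n = len(data)
    scale = (hi - lo) / (n * epsilon)
    return float(np.mean(data)) + rng.laplace(0.0, scale)

data = rng.uniform(0.0, 1.0, size=100)
true_mean = float(np.mean(data))

def mean_abs_error(epsilon, trials=2000):
    """Empirical utility loss: average absolute deviation from the
    true mean across repeated private releases."""
    return float(np.mean([abs(dp_mean(data, epsilon) - true_mean)
                          for _ in range(trials)]))
```

Running `mean_abs_error` for a small epsilon and a large one shows the monotone error-versus-epsilon curve that the paper's estimation approach generalizes to learned models.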

Recommender systems (RSs) have been the most important technology for driving business at Taobao, the largest online consumer-to-consumer (C2C) platform in China. The billion-scale data in Taobao creates three major challenges for Taobao's RS: scalability, sparsity and cold start. In this paper, we present our technical solutions to address these three challenges. The methods are based on a graph embedding framework. We first construct an item graph from users' behavior history. Each item is then represented as a vector using graph embedding. The item embeddings are employed to compute pairwise similarities between all items, which are then used in the recommendation process. To alleviate the sparsity and cold start problems, side information is incorporated into the embedding framework. We propose two aggregation methods to integrate the embeddings of items and their corresponding side information. Offline experimental results show that methods incorporating side information are superior to those that do not. Further, we describe the platform upon which the embedding methods are deployed and the workflow to process the billion-scale data in Taobao. Using online A/B tests, we show that online click-through rates (CTRs) improve compared to the previous recommendation methods widely used in Taobao, further demonstrating the effectiveness and feasibility of our proposed methods in Taobao's live production environment.
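A minimal sketch of the first two stages described above: building a weighted item graph from users' click sessions, then sampling random walks over it. The sessions and item IDs are made-up toy data; in the full pipeline the walks would be fed to a skip-gram model (e.g. word2vec) to produce the item embeddings, a step omitted here for brevity.

```python
import random
from collections import defaultdict

random.seed(0)

# toy behavior history: each session is one user's ordered item clicks
sessions = [["i1", "i2", "i3"], ["i2", "i3", "i4"], ["i1", "i3"]]

# step 1: weighted directed item graph; an edge a -> b counts how
# often item b was clicked immediately after item a
graph = defaultdict(lambda: defaultdict(int))
for s in sessions:
    for a, b in zip(s, s[1:]):
        graph[a][b] += 1

# step 2: weight-proportional random walks over the item graph;
# these walk sequences are the training corpus for the embeddings
def random_walk(start, length):
    walk = [start]
    for _ in range(length - 1):
        nbrs = graph[walk[-1]]
        if not nbrs:
            break  # dead end: no outgoing edges
        items, weights = zip(*nbrs.items())
        walk.append(random.choices(items, weights=weights)[0])
    return walk
```

Incorporating side information, as the paper proposes, would attach attribute embeddings (brand, category, etc.) to each item and aggregate them with the walk-learned vector, which is what lets cold-start items without behavior history still receive a usable embedding.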
