
This case study compares the service quality of timetabled buses with that of on-demand ridepooling cabs during the late evening hours in Wuppertal, Germany. To evaluate the efficiency of ridepooling relative to bus services, and to simulate bus rides during the evening hours, transport requests are generated using a predictive simulation. To this end, we create a framework in the programming language R that automatically combines generalized linear models for count regression to model the demand at each bus stop. Furthermore, we use classification models to predict trip destinations. To solve the resulting dynamic dial-a-ride problem, we use a rolling-horizon algorithm based on the iterative solution of mixed-integer linear programs (MILPs). A feasible-path heuristic enhances the performance of the algorithm in the presence of high request densities. This allows us to estimate, for each weekday, the number of cabs needed to match or exceed the overall service quality of the bus system.
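
As a rough illustration of the rolling-horizon structure described above (and not the authors' MILP formulation), the following Python sketch replans every few minutes and greedily inserts newly visible requests into cab routes; the travel-time oracle, request data, and insertion rule are all hypothetical placeholders, and the greedy step merely stands in for where the MILP solve would sit.

```python
from dataclasses import dataclass

@dataclass
class Request:
    t_request: float   # minute at which the request becomes known
    origin: int        # boarding stop id
    destination: int   # alighting stop id

def travel_time(a, b):
    # Placeholder travel-time oracle; a real implementation would
    # query a road network.  Here: 5 minutes per unit of stop distance.
    return 5.0 * abs(a - b)

def insertion_cost(route, req):
    # Extra driving time if origin and destination are appended
    # at the end of the route (simplest possible insertion rule).
    last = route[-1]
    return travel_time(last, req.origin) + travel_time(req.origin, req.destination)

def plan_step(now, horizon, open_requests, cab_routes):
    # One rolling-horizon step: commit every request visible within
    # [now, now + horizon) to the cheapest cab.  The cited work solves
    # a MILP here; greedy insertion only marks where that solve sits.
    for req in [r for r in open_requests if r.t_request < now + horizon]:
        best = min(cab_routes, key=lambda route: insertion_cost(route, req))
        best.extend([req.origin, req.destination])
        open_requests.remove(req)

# Hypothetical evening requests and two cabs starting at depot stop 0.
requests = [Request(2, 1, 4), Request(7, 3, 9), Request(11, 6, 2)]
cabs = [[0], [0]]
now, step, horizon = 0.0, 5.0, 15.0   # minutes
while requests:
    plan_step(now, horizon, requests, cabs)
    now += step
print(cabs)
```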

Related content

The ACM/IEEE 23rd International Conference on Model Driven Engineering Languages and Systems (MODELS) is the premier conference series for model-driven software and systems engineering, organized with the support of ACM SIGSOFT and IEEE TCSE. Since 1998, MODELS has covered all aspects of modeling, from languages and methods to tools and applications. MODELS attendees come from diverse backgrounds, including researchers, academics, engineers, and industry professionals. MODELS 2019 is a forum in which participants can exchange cutting-edge research results and innovative practical experience around modeling and model-driven software and systems. This year's edition will give the modeling community an opportunity to further advance the foundations of modeling and to propose innovative applications of modeling in emerging areas such as cyber-physical systems, embedded systems, socio-technical systems, cloud computing, big data, machine learning, security, open source, and sustainability.
March 27, 2023

Measurement error (ME) and missing values in covariates are often unavoidable in disciplines that deal with data, and both problems have separately received considerable attention during the past decades. However, while most researchers are familiar with methods for treating missing data, accounting for ME in covariates of regression models is less common. In addition, ME and missing data are typically treated as two separate problems, despite practical and theoretical similarities. Here, we exploit the fact that missing data in a continuous covariate is an extreme case of classical ME, allowing us to use existing methodology that accounts for ME via a Bayesian framework that employs integrated nested Laplace approximations (INLA), and thus to simultaneously account for both ME and missing data in the same covariate. As a useful by-product, we present an approach to handle missing data in INLA, since this corresponds to the special case when no ME is present. In addition, we show how to account for Berkson ME in the same framework. In its broadest generality, the proposed joint Bayesian framework can thus account for Berkson ME, classical ME, and missing data, or for any combination of these in the same or different continuous covariates of the family of regression models that are feasible with INLA. The approach is exemplified using both simulated and real data. We provide extensive and fully reproducible Supplementary Material with thoroughly documented examples using R-INLA and inlabru.
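
As a hedged sketch of the unification, the classical ME model for a single continuous covariate (in generic notation, not necessarily the paper's) is:

```latex
% Regression model with a covariate x_i observed only through w_i:
\begin{align*}
  y_i &= \beta_0 + \beta_x x_i + \varepsilon_i,
      & \varepsilon_i &\sim \mathcal{N}(0, \sigma_\varepsilon^2),\\
  w_i &= x_i + u_i,
      & u_i &\sim \mathcal{N}(0, \sigma_u^2).
\end{align*}
```

A missing value then corresponds to the limit $\sigma_u^2 \to \infty$: the observation $w_i$ carries no information about $x_i$, so the Bayesian machinery falls back entirely on the exposure model for $x_i$, which is exactly model-based imputation of missing data.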

Epilepsy is a chronic neurological disorder with significant prevalence. However, there is still no adequate technological support for epilepsy detection and continuous outpatient monitoring in everyday life. Hyperdimensional (HD) computing is an interesting alternative for wearable devices, characterized by a much simpler learning process and lower memory requirements. In this work, we demonstrate additional ways in which HD computing, and the way its models are built and stored, can be used to understand, compare, and create more advanced machine learning models for epilepsy detection. These possibilities are not feasible with other state-of-the-art models, such as random forests or neural networks. We compare the inter-subject similarity of models for different classes (seizure and non-seizure), then study how generalized models can be created from personalized ones, and finally how personalized and generalized models can be combined into hybrid models, which improves epilepsy detection performance. We also test knowledge transfer between models created on two different datasets. These examples are of interest not only from an engineering perspective, to create better models for wearables, but also from a neurological perspective, to better understand individual epilepsy patterns.
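
As a minimal illustration of why HD models are easy to compare and merge, here is a toy record-based encoding in Python with NumPy; the encoding scheme, feature count, and data are hypothetical, not the paper's pipeline.

```python
import numpy as np

D = 10_000                            # hypervector dimensionality
rng = np.random.default_rng(0)
n_feat = 16
feat_vectors = rng.choice([-1, 1], size=(n_feat, D))   # random feature HVs

def encode(window, feat_vectors):
    # Flip each feature's hypervector by the sign of its (thresholded)
    # value and bundle everything into one bipolar window vector.
    signs = np.where(window > np.median(window), 1, -1)
    return np.sign((feat_vectors * signs[:, None]).sum(axis=0))

def train(windows, labels):
    # A personalized model is just one bundled prototype per class.
    return {c: np.sign(sum(encode(w, feat_vectors)
                           for w, l in zip(windows, labels) if l == c))
            for c in (0, 1)}

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def predict(window, protos):
    hv = encode(window, feat_vectors)
    return max(protos, key=lambda c: cosine(hv, protos[c]))

# Toy data: 40 feature windows with random seizure/non-seizure labels.
windows = rng.normal(size=(40, n_feat))
labels = rng.integers(0, 2, size=40)
protos = train(windows, labels)
print("predicted class:", predict(windows[0], protos))
print("inter-class similarity:", round(cosine(protos[0], protos[1]), 3))
```

Because each personalized model is just a pair of D-dimensional vectors, inter-subject comparison reduces to cosine similarity, and a generalized model can be formed by bundling personalized prototypes, e.g. `np.sign(protos_a[1] + protos_b[1])`.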

Large neural networks can improve accuracy and generalization on tasks across many domains. However, this trend cannot continue indefinitely due to limited hardware memory. As a result, researchers have devised a number of memory optimization methods (MOMs) to alleviate the memory bottleneck, such as gradient checkpointing, quantization, and swapping. In this work, we study memory optimization methods and show that, although these strategies indeed lower peak memory usage, they can actually decrease training throughput by up to 9.3x. To provide practical guidelines for practitioners, we propose a simple but effective performance model, PAPAYA, to quantitatively explain the trade-off between memory and training time. PAPAYA can be used to determine when to apply the various memory optimization methods in training different models. Based on implications derived from PAPAYA, we outline the circumstances in which memory optimization techniques are more advantageous. We assess the accuracy of PAPAYA and the derived implications on a variety of machine learning models, showing that it achieves an $R^2$ score of over 0.97 when predicting peak memory and throughput, and accurately predicts the effectiveness of MOMs across five evaluated models on vision and NLP tasks.
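
PAPAYA itself is not specified in the abstract; as a generic illustration of the memory/throughput trade-off it models, the following back-of-envelope Python calculator covers uniform-layer gradient checkpointing only, with all layer counts, sizes, and timings hypothetical.

```python
import math

def checkpoint_tradeoff(n_layers, act_mb, fwd_ms, bwd_ms, segment):
    """Approximate peak activation memory and step time when only every
    `segment`-th activation is checkpointed and the rest are recomputed
    during backward.  Uniform layers; weights and optimizer state are
    ignored, so all numbers are illustrative."""
    n_ckpt = math.ceil(n_layers / segment)
    peak_mb = (n_ckpt + segment) * act_mb   # checkpoints + one live segment
    recomputed = n_layers - n_ckpt          # layers rerun during backward
    step_ms = n_layers * (fwd_ms + bwd_ms) + recomputed * fwd_ms
    return peak_mb, step_ms

baseline_mb = 48 * 80                       # store all 48 layers of 80 MB
for seg in (1, 4, 7, 12):
    mb, ms = checkpoint_tradeoff(48, 80, 2.0, 4.0, seg)
    print(f"segment={seg:2d}  peak={mb:5.0f} MB  "
          f"(baseline {baseline_mb} MB)  step={ms:4.0f} ms")
```

Even this toy model reproduces the qualitative trade-off: peak memory is minimized near a segment length of about the square root of the layer count, while every segment length above 1 pays roughly one extra forward pass in throughput.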

Multitask learning is widely used in practice to train a low-resource target task by augmenting it with multiple related source tasks. Yet naively combining all the source tasks with a target task does not always improve prediction performance for the target task, due to negative transfer. Thus, a critical problem in multitask learning is identifying subsets of source tasks that would benefit the target task. This problem is computationally challenging since the number of subsets grows exponentially with the number of source tasks, and efficient heuristics for subset selection do not always capture the relationship between task subsets and multitask learning performance. In this paper, we introduce an efficient procedure to address this problem via surrogate modeling. We sample (random) subsets of source tasks and precompute their multitask learning performance; then we approximate the precomputed performances with a linear regression model that can also predict the multitask performance of unseen task subsets. We show theoretically and empirically that fitting this model requires sampling only linearly many subsets in the number of source tasks. The fitted model provides a relevance score between each source task and the target task; we use the relevance scores to perform subset selection for multitask learning by thresholding. Through extensive experiments, we show that our approach predicts negative transfer from multiple source tasks to target tasks much more accurately than existing task affinity measures. Additionally, we demonstrate that on five weak supervision datasets, our approach consistently improves upon existing optimization methods for multitask learning.
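
A minimal sketch of this surrogate-modeling loop in Python with NumPy; `mtl_performance` is a stand-in oracle (in practice one would train a multitask model per sampled subset), and the task counts and effect sizes are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_tasks, n_samples = 12, 60          # 60 subsets: linear in n_tasks

def mtl_performance(subset_mask):
    # Stand-in for "train on these source tasks, report target metric".
    # Hypothetically, tasks 0-3 help and tasks 8-11 hurt (negative
    # transfer), plus a little evaluation noise.
    gain = subset_mask[:4].sum() * 0.03 - subset_mask[8:].sum() * 0.02
    return 0.70 + gain + rng.normal(0, 0.005)

# 1) Sample random subsets and precompute their MTL performance.
X = rng.integers(0, 2, size=(n_samples, n_tasks)).astype(float)
y = np.array([mtl_performance(x) for x in X])

# 2) Fit a linear surrogate: perf ~ b0 + sum_k theta_k * 1[k in subset].
A = np.hstack([np.ones((n_samples, 1)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
relevance = coef[1:]                 # per-source relevance scores

# 3) Subset selection by thresholding the relevance scores at zero.
print("relevance:", np.round(relevance, 3))
print("selected source tasks:", np.flatnonzero(relevance > 0))
```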

This study compares the National Cybersecurity Strategies (NCSSs), as publicly available documents, of ten nations across Europe (the United Kingdom, France, Lithuania, Estonia, Spain, and Norway), the Asia-Pacific (Singapore and Australia), and the Americas (the United States and Canada). The study observes that there is no unified understanding of the term "cybersecurity"; however, a common trajectory of the NCSSs shows that the fight against cybercrime is a joint effort among various stakeholders, hence the need for strong international cooperation. Using a comparative structure and an NCSS framework, the research finds similarities in the protection of critical assets, commitment to research and development, and improved national and international collaboration. It also finds that the lack of a unified underlying cybersecurity framework leads to disparities in the structure and contents of the strategies. The strengths and weaknesses of the NCSSs identified by the research can benefit countries planning to develop or update their cybersecurity strategies, and the study gives recommendations that strategy developers can consider when developing an NCSS.

The emergence of Consumer-to-Consumer (C2C) platforms has allowed consumers to buy and sell goods directly, but it has also created problems such as commodity fraud and fake reviews. Trust Management Algorithms (TMAs) are expected to be a countermeasure for detecting fraudulent users. However, it is unknown whether TMAs are as effective as reported, since they were designed for Peer-to-Peer (P2P) communication between devices on a network. Here we examine the applicability of EigenTrust, a representative TMA, to C2C services using an agent-based model. First, we defined the transaction process in C2C services and assumed six types of fraudulent transactions; we then analysed the dynamics of EigenTrust in C2C systems through simulations. We found that EigenTrust could correctly estimate low trust scores for two types of simple fraud. Furthermore, we observed oscillating trust scores for two types of advanced fraud that previous research did not address. This suggests that by detecting such oscillations, EigenTrust may be able to detect some (but not all) advanced frauds. Our study helps increase the trustworthiness of transactions in C2C services and provides insights for further technological development for consumer services.
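
For reference, here is a compact NumPy implementation of basic EigenTrust (Kamvar et al., 2003) with damping toward pre-trusted peers; the rating matrix below is a made-up toy, not the paper's agent-based simulation.

```python
import numpy as np

def eigentrust(S, pretrusted, alpha=0.15, tol=1e-10, max_iter=1000):
    """Basic EigenTrust.  S[i, j] is the local trust user i assigns to
    user j (e.g. positive minus negative ratings).  Rows are clipped
    and normalized; `pretrusted` users anchor the damping vector."""
    n = S.shape[0]
    C = np.maximum(S, 0.0)
    row_sums = C.sum(axis=1, keepdims=True)
    p = np.zeros(n)
    p[pretrusted] = 1.0 / len(pretrusted)
    # Users with no outgoing trust fall back to the pre-trusted vector.
    C = np.where(row_sums > 0, C / np.where(row_sums == 0, 1, row_sums), p)
    t = p.copy()
    for _ in range(max_iter):
        t_next = (1 - alpha) * C.T @ t + alpha * p
        if np.abs(t_next - t).sum() < tol:
            break
        t = t_next
    return t

# Four honest users rating each other; user 4 is a fraud nobody trusts.
S = np.array([
    [0, 3, 2, 1, 0],
    [2, 0, 3, 1, 0],
    [1, 2, 0, 2, 0],
    [2, 1, 2, 0, 0],
    [1, 1, 0, 0, 0],   # the fraud rates others to look active
], dtype=float)
print(eigentrust(S, pretrusted=[0]))
```

In this toy run the fraudulent user receives a global trust score of zero, matching the simple-fraud behaviour the abstract describes; the oscillations the paper reports arise only in dynamic simulations where ratings change over time.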

This paper proposes three new approaches for additive functional regression models with functional responses. The first is a reformulation of the linear regression model, and the other two address the still scarcely studied case of additive nonlinear functional regression models. Both proposals are based on extensions of similar models for scalar responses. One of our nonlinear models constructs a Spectral Additive Model (the word "Spectral" refers to the representation of the covariates in an $\mathcal{L}_2$ basis), which is restricted (by construction) to Hilbertian spaces. The other extends the kernel estimator, and it can be applied to general metric spaces since it is based only on distances. We include our new approaches, as well as real datasets, in an R package. The performance of the new proposals is compared with that of previous approaches, which we review theoretically and practically in this paper. The simulation results show the advantages of the nonlinear proposals and the small loss of efficiency when the simulation scenario is truly linear. Finally, the supplementary material provides a visualization tool for checking the linearity of the relationship between a single covariate and the response.
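
A generic sketch of the distance-based kernel estimator idea (functional Nadaraya-Watson on discretized curves, in Python with NumPy); this is not the R package's implementation, and the Gaussian kernel, bandwidth `h`, and toy data are arbitrary illustrative choices.

```python
import numpy as np

def fnw_predict(X_train, Y_train, x_new, h):
    """Functional Nadaraya-Watson: curves are discretized on a common
    grid, distance is an approximate L2 distance, and the predicted
    response curve is a kernel-weighted average of training responses.
    Only distances are used, which is why the estimator carries over
    to general metric spaces."""
    d = np.sqrt(((X_train - x_new) ** 2).mean(axis=1))   # curve distances
    w = np.exp(-0.5 * (d / h) ** 2)                      # Gaussian kernel
    w /= w.sum()
    return w @ Y_train                                   # weighted average

# Toy data: predictor and response curves on 50 grid points with a
# nonlinear link between their amplitudes.
rng = np.random.default_rng(1)
grid = np.linspace(0, 1, 50)
a = rng.uniform(0.5, 2.0, size=100)                      # per-curve scale
X = np.sin(2 * np.pi * grid) * a[:, None] + rng.normal(0, 0.05, (100, 50))
Y = np.cos(2 * np.pi * grid) * (a ** 2)[:, None]
pred = fnw_predict(X, Y, X[0], h=0.2)
print(np.round(pred[:5], 3))
```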

We consider a multi-process remote estimation system observing $K$ independent Ornstein-Uhlenbeck processes. In this system, a shared sensor samples the $K$ processes in such a way that the long-term average sum mean square error (MSE) is minimized. The sensor operates under a total sampling frequency constraint $f_{\max}$. The samples from all processes incur random processing delays in a shared queue and are then transmitted over an erasure channel with erasure probability $\epsilon$. We study two variants of the problem: first, when the samples are scheduled according to a Maximum-Age-First (MAF) policy and the receiver provides erasure status feedback; and second, when samples are scheduled according to a Round-Robin (RR) policy and there is no erasure status feedback from the receiver. Aided by structural results, we show that the optimal sampling policy for both settings is, under some conditions, a \emph{threshold policy}. We characterize the optimal threshold and the corresponding optimal long-term average sum MSE as a function of $K$, $f_{\max}$, $\epsilon$, and the statistical properties of the observed processes. Our results show that, with exponentially distributed service times, the optimal threshold $\tau^*$ increases as the number of processes $K$ increases, for both settings. Additionally, we show that the optimal threshold is an \emph{increasing} function of $\epsilon$ when erasure status feedback is \emph{available}, while it exhibits the \emph{opposite behavior}, i.e., $\tau^*$ is a \emph{decreasing} function of $\epsilon$, when erasure status feedback is \emph{absent}.
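
To make the threshold-policy idea concrete, here is a toy Monte Carlo sketch in Python for a single OU process: it takes a sample whenever the age of the latest sample reaches a threshold `tau`, and lets the MMSE estimate decay exponentially between samples. The parameters `theta`, `sigma`, and `tau`, as well as the zero-delay, erasure-free channel, are simplifying assumptions of this sketch, not the paper's model (which includes a shared queue and erasures).

```python
import numpy as np

rng = np.random.default_rng(0)
theta, sigma = 0.5, 1.0        # OU dynamics: dX = -theta X dt + sigma dW
dt, T = 0.01, 500.0
tau = 1.2                      # age threshold under test

x, xhat, age = 0.0, 0.0, 0.0
se_sum, n_steps, n_samples = 0.0, 0, 0
for _ in range(int(T / dt)):
    x += -theta * x * dt + sigma * np.sqrt(dt) * rng.normal()
    age += dt
    if age >= tau:                     # threshold policy: sample now
        xhat, age, n_samples = x, 0.0, n_samples + 1
    else:
        xhat *= np.exp(-theta * dt)    # MMSE estimate decays between samples
    se_sum += (x - xhat) ** 2
    n_steps += 1

print(f"avg MSE ~ {se_sum / n_steps:.3f}, "
      f"sampling rate ~ {n_samples / T:.3f} samples per unit time")
```

Sweeping `tau` in this sketch traces out the MSE-versus-sampling-frequency trade-off that the paper optimizes analytically under the constraint $f_{\max}$.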

Evaluating human performance is a common need across many applications, such as engineering and sports. When evaluating human performance on complex, interactive tasks, the most common approaches are to use a metric that has proven effective in that context or to use subjective measurement techniques. However, this can be an error-prone and unreliable process, since static metrics cannot capture all the complex contexts associated with such tasks and subjective measurement is subject to bias. The objective of our research is to create data-driven AI agents as computational benchmarks for evaluating human performance on difficult tasks involving multiple humans and contextual factors. We demonstrate this within the context of football performance analysis. We train a generative model based on a Conditional Variational Recurrent Neural Network (VRNN) on a large player and ball tracking dataset. The trained model is used to imitate the interactions between two teams and to predict the performance of each team; it then serves as a benchmark for evaluating team performance. Experimental results on a Premier League football dataset demonstrate the usefulness of our method relative to an existing state-of-the-art static metric used in football analytics.
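
To make the architecture concrete, here is a minimal single-step Conditional VRNN cell in Python with PyTorch; the layer sizes, the 46-dimensional tracking state (hypothetically 22 players plus the ball in 2D), and the plain-MSE reconstruction term are simplifying assumptions of this sketch, not the paper's exact model.

```python
import torch
import torch.nn as nn

class CondVRNNCell(nn.Module):
    """One step of a conditional VRNN: prior p(z|h), encoder q(z|x,c,h),
    decoder p(x|z,c,h), and recurrence h' = GRU([x, z, c], h), where c
    is a conditioning vector (e.g. team/match context)."""
    def __init__(self, x_dim, c_dim, z_dim, h_dim):
        super().__init__()
        self.prior = nn.Linear(h_dim, 2 * z_dim)
        self.enc = nn.Linear(x_dim + c_dim + h_dim, 2 * z_dim)
        self.dec = nn.Linear(z_dim + c_dim + h_dim, x_dim)
        self.rnn = nn.GRUCell(x_dim + z_dim + c_dim, h_dim)

    def forward(self, x, c, h):
        pm, plv = self.prior(h).chunk(2, dim=-1)             # prior mean/logvar
        qm, qlv = self.enc(torch.cat([x, c, h], -1)).chunk(2, -1)
        z = qm + torch.randn_like(qm) * (0.5 * qlv).exp()    # reparameterize
        x_rec = self.dec(torch.cat([z, c, h], -1))
        # KL divergence between the two diagonal Gaussians q and p.
        kl = 0.5 * (plv - qlv + (qlv.exp() + (qm - pm) ** 2) / plv.exp() - 1).sum(-1)
        h = self.rnn(torch.cat([x, z, c], -1), h)
        return x_rec, kl, h

# Toy rollout: batch of 4 sequences, 10 frames, 46-dim tracking state.
cell = CondVRNNCell(x_dim=46, c_dim=8, z_dim=16, h_dim=64)
x = torch.randn(4, 10, 46)
c = torch.randn(4, 8)              # per-match conditioning vector
h = torch.zeros(4, 64)
loss = 0.0
for t in range(10):
    x_rec, kl, h = cell(x[:, t], c, h)
    loss = loss + ((x_rec - x[:, t]) ** 2).sum(-1) + kl
print(loss.mean())
```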

With the increasing availability of non-Euclidean data objects, statisticians are faced with the task of developing appropriate statistical methods for their analysis. For regression models in which the predictors lie in $\mathbb{R}^p$ and the response variables are situated in a metric space, conditional Fréchet means can be used to define the Fréchet regression function. Global and local Fréchet methods have recently been developed for modeling and estimating this regression function as extensions of multiple and local linear regression, respectively. This paper expands on these methodologies by proposing the Fréchet Single Index model, in which the Fréchet regression function is assumed to depend only on a scalar projection of the multivariate predictor. Estimation is performed by combining local Fréchet regression with M-estimation to estimate both the coefficient vector and the underlying regression function, and these estimators are shown to be consistent. The method is illustrated by simulations for response objects on the surface of the unit sphere and through an analysis of human mortality data in which lifetable data are represented by age-at-death distributions, viewed as elements of the Wasserstein space of distributions.
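
In standard notation from this literature (the symbols $g$, $\theta_0$, and $(\Omega, d)$ are generic choices here, not necessarily the paper's), the model can be summarized as follows: the Fréchet regression function is a conditional Fréchet mean, and the single-index restriction lets it depend on $x \in \mathbb{R}^p$ only through one projection.

```latex
\begin{align*}
  m_\oplus(x) &= \operatorname*{arg\,min}_{\omega \in \Omega}
      \mathbb{E}\!\left[ d^2(Y, \omega) \mid X = x \right],\\
  m_\oplus(x) &= g\!\left(\theta_0^\top x\right)
      \quad \text{for some link } g : \mathbb{R} \to \Omega,
      \ \|\theta_0\| = 1.
\end{align*}
```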
