
In this manuscript we present several possible ways of modeling human capital accumulation during the spread of a disease, following an agent-based approach in which agents maximize their intertemporal utility. We assume that the interaction between agents is of mean-field type, yielding a Mean Field Game description of the problem. We discuss how the analysis of a model that includes both the mechanism by which agents move from one epidemiological state to another and an optimization problem for each agent leads to an aggregate behavior that is not easy to describe, and that sometimes exhibits structural problems. We therefore propose and study numerically a SEIRD model in which the rate of infection depends on the distribution of the population, given exogenously as the solution to the Mean Field Game system arising as the macroscopic description of the discrete multi-agent economic model for the accumulation of human capital. This model is in fact a simplified but tractable version of the initial one.
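To fix intuition for the epidemiological side of this abstract, the following is a minimal sketch of plain SEIRD compartmental dynamics with a constant contact rate; it is an illustrative assumption, not the paper's mean-field-coupled model, and all parameter names and values are hypothetical.

```python
# Forward-Euler integration of a basic SEIRD model (illustrative sketch).
# Compartments: Susceptible, Exposed, Infected, Recovered, Dead.
# In the paper the infection rate depends on the population distribution;
# here `beta` is simply held constant for readability.

def simulate_seird(beta=0.3, sigma=0.2, gamma=0.1, mu=0.01,
                   s0=0.99, e0=0.01, steps=1000, dt=0.1):
    s, e, i, r, d = s0, e0, 0.0, 0.0, 0.0
    for _ in range(steps):
        new_exposed   = beta * s * i   # S -> E (contacts with infected)
        new_infected  = sigma * e      # E -> I (incubation ends)
        new_recovered = gamma * i      # I -> R
        new_deaths    = mu * i         # I -> D
        s -= dt * new_exposed
        e += dt * (new_exposed - new_infected)
        i += dt * (new_infected - new_recovered - new_deaths)
        r += dt * new_recovered
        d += dt * new_deaths
    return s, e, i, r, d
```

Because every term moves mass from one compartment to another, the total population is conserved up to floating-point error, which is a quick sanity check on any such implementation.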

Related content

Recently, the use of mobile technologies in Ecological Momentary Assessments (EMA) and Interventions (EMI) has made it easier to collect data suitable for intra-individual variability studies in the medical field. Nevertheless, especially when self-reports are used during data collection, it is difficult to balance data quality against the burden placed on the subjects. In this paper, we address this problem for a specific EMA setting which aims to submit a demanding task to subjects at high/low values of a self-reported variable. We adopt a dynamic approach inspired by control chart methods and design optimization techniques to obtain an EMA triggering mechanism for data collection that takes into account the individual variability of both the self-reported variable and the adherence rate. We test the algorithm both in a simulation setting and on real, large-scale data from a longitudinal tinnitus study. A Wilcoxon-Mann-Whitney rank sum test shows that the algorithm tends to achieve both a higher F1 score and higher utility than a random schedule and a rule-based algorithm with static thresholds, which are the current state-of-the-art approaches. In conclusion, the algorithm proves effective in balancing data quality against the burden placed on the participants, especially, as our analysis suggests, in studies where data collection is affected by adherence.
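As a rough illustration of the control-chart idea behind such a trigger, the sketch below fires an assessment when a new self-report falls outside individual control limits built from the running mean and standard deviation of past reports. This is an assumption-laden toy, not the paper's algorithm; the class name, the `k` multiplier, and the warm-up length are all hypothetical.

```python
# Control-chart-style EMA trigger (illustrative sketch).
# Running mean/variance are maintained with Welford's algorithm, so the
# limits adapt to each individual's own variability over time.

import math

class EmaTrigger:
    def __init__(self, k=1.5, warmup=5):
        self.k, self.warmup = k, warmup
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # running sum of squared deviations (Welford)

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def should_trigger(self, x):
        """Return True when x lies outside mean +/- k * std."""
        if self.n < self.warmup:
            self.update(x)
            return False
        std = math.sqrt(self.m2 / (self.n - 1))
        fire = abs(x - self.mean) > self.k * std
        self.update(x)
        return fire
```

A real design would additionally modulate the limits by the observed adherence rate, as the abstract indicates; that coupling is omitted here for brevity.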

We develop a general method to study the Fisher information distance in the central limit theorem for nonlinear statistics. We first construct explicit representations for the score functions. We then use these representations to derive quantitative estimates for the Fisher information distance. To illustrate the applicability of our approach, we provide explicit rates of Fisher information convergence for quadratic forms and for functions of sample means. The case of sums of independent random variables is discussed as well.
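For readers who want the central objects spelled out, the standard definitions of the score function and of the Fisher information distance to the standard Gaussian are recalled below; these are the textbook notions, not the paper's specific representations.

```latex
% Score function of a random variable $X$ with differentiable density $p$:
\[
  \rho_X(x) = \frac{p'(x)}{p(x)} .
\]
% Standardized Fisher information distance between $X$
% (with $\mathbb{E}X = 0$, $\operatorname{Var}X = 1$) and $Z \sim \mathcal{N}(0,1)$:
\[
  J(X) = \mathbb{E}\bigl[\,(\rho_X(X) + X)^2\,\bigr] = I(X) - 1 ,
\]
% where $I(X) = \mathbb{E}[\rho_X(X)^2]$ is the Fisher information of $X$.
% $J(X) \ge 0$, with equality if and only if $X$ is standard Gaussian.
```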

Manufacturing companies face challenges when it comes to quickly adapting their production control to fluctuating demands or changing requirements. Control approaches that encapsulate production functions as services have been shown to be promising for increasing the flexibility of Cyber-Physical Production Systems. But an open challenge for such approaches is finding a production plan based on the provided functionalities for a demanded product, especially when there is no direct (i.e., syntactic) match between demanded and provided functions. While there is a variety of approaches to production planning, flexible production poses specific requirements that are not covered by existing research. In this contribution, we first capture these requirements for flexible production environments. Afterwards, we give an overview of current Artificial Intelligence approaches that can be utilized to overcome the aforementioned challenges. For this purpose, we focus on planning algorithms, but also consider models of production systems that can act as inputs to these algorithms. Approaches from symbolic AI planning as well as approaches based on Machine Learning are discussed and eventually compared against the requirements. Based on this comparison, a research agenda is derived.

The Receiver Operating Characteristic (ROC) curve is a useful tool that measures the discriminating power of a continuous variable or the accuracy of a pharmaceutical or medical test to distinguish between two conditions or classes. In certain situations, the practitioner may be able to measure some covariates related to the diagnostic variable which can increase the discriminating power of the ROC curve. To protect against the existence of atypical data among the observations, a procedure to obtain robust estimators for the ROC curve in the presence of covariates is introduced. The considered proposal focuses on a semiparametric approach which fits a location-scale regression model to the diagnostic variable and considers empirical estimators of the distributions of the regression residuals. Robust parametric estimators are combined with adaptive weighted empirical distribution estimators to down-weight the influence of outliers. The uniform consistency of the proposal is derived under mild assumptions. A Monte Carlo study is carried out to compare the performance of the proposed robust estimators with the classical ones in both clean and contaminated samples. A real data set is also analysed.
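As background for the "discriminating power" the ROC curve measures, the sketch below computes the empirical area under the ROC curve (AUC) via the rank (Mann-Whitney) statistic, i.e., the probability that a diseased subject scores higher than a healthy one. It is a deliberately naive, outlier-sensitive illustration, not the robust covariate-adjusted estimator the paper proposes.

```python
# Empirical AUC as the Mann-Whitney probability P(D > H),
# counting ties as half a win (illustrative sketch only).

def empirical_auc(diseased, healthy):
    """AUC = P(D > H) + 0.5 * P(D == H) over all diseased/healthy pairs."""
    wins = 0.0
    for d in diseased:
        for h in healthy:
            if d > h:
                wins += 1.0
            elif d == h:
                wins += 0.5
    return wins / (len(diseased) * len(healthy))
```

An AUC of 0.5 corresponds to no discriminating power and 1.0 to perfect separation; a single gross outlier in either sample can visibly distort this naive estimate, which motivates the robust, down-weighted estimators of the abstract.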

Since the inception of Bitcoin in 2009, the market for cryptocurrencies has grown beyond initial expectations, with daily trades exceeding $10 billion. As industries become automated, the need for an automated fraud detector becomes very apparent. Detecting anomalies in real time prevents potential accidents and economic losses. Anomaly detection in multivariate time series data poses a particular challenge because it requires simultaneous consideration of temporal dependencies and relationships between variables. Identifying an anomaly in real time is not an easy task, specifically because of the varied anomalous behavior points can exhibit: some points show pointwise global or local anomalous behavior, while others are anomalous due to their frequency or seasonal behavior, or due to a change in the trend. In this paper we work with real time series of Ethereum trades from specific accounts and survey a large variety of algorithms, both traditional and new. We categorize them according to their strategy and the anomalous behavior they search for, and show that when bundled together into different groups, they can form a good real-time detector with an alarm time of no longer than a few seconds and with very high confidence.
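The "bundling" idea can be illustrated with a toy ensemble of two detectors targeting different anomaly types, a global z-score detector for pointwise global anomalies and a rolling-window detector for local ones, combined with an "any detector fires" rule. This is an assumed sketch, not the paper's detector suite or its grouping scheme.

```python
# Toy ensemble of anomaly detectors (illustrative sketch).
# Each detector targets a different anomalous behavior; flags are OR-ed.

import statistics

def global_zscore_flags(series, z=3.0):
    """Flag points far from the whole-series mean (global point anomalies)."""
    mu = statistics.mean(series)
    sd = statistics.pstdev(series)
    return [abs(x - mu) > z * sd for x in series]

def local_window_flags(series, window=5, z=3.0):
    """Flag points far from their recent rolling window (local anomalies)."""
    flags = [False] * len(series)
    for t in range(window, len(series)):
        past = series[t - window:t]
        mu = statistics.mean(past)
        sd = statistics.pstdev(past) or 1e-9  # guard against constant windows
        flags[t] = abs(series[t] - mu) > z * sd
    return flags

def ensemble_flags(series):
    g = global_zscore_flags(series)
    l = local_window_flags(series)
    return [a or b for a, b in zip(g, l)]
```

A production detector would add frequency, seasonality, and trend-change members to the ensemble, in line with the taxonomy the abstract describes.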

We study the following repeated non-atomic routing game. In every round, nature chooses a state in an i.i.d. manner according to a publicly known distribution, which influences link latency functions. The system planner makes private route recommendations to participating agents, which constitute a fixed fraction of the population, according to a publicly known signaling strategy. The participating agents choose between obeying and not obeying the recommendation according to the cumulative regret of the participating agent population in the previous round. The non-participating agents choose routes according to a myopic best response to a calibrated forecast of the routing decisions of the participating agents. We show that, for parallel networks, if the planner's signaling strategy satisfies the obedience condition, then, almost surely, the link flows are asymptotically consistent with the Bayes correlated equilibrium induced by the signaling strategy.

We propose and examine the idea of continuously adapting state-of-the-art neural network (NN)-based orthogonal frequency division multiplex (OFDM) receivers to current channel conditions. This online adaptation via retraining is mainly motivated by two reasons: First, receiver design typically focuses on the universal optimal performance for a wide range of possible channel realizations. However, in actual applications and within short time intervals, only a subset of these channel parameters is likely to occur, as macro parameters, e.g., the maximum channel delay, can be assumed to be static. Second, in-the-field alterations like temporal interferences or other conditions outside the originally intended specifications can occur in practical (real-world) transmissions. While conventional (filter-based) systems would require reconfiguration or additional signal processing to cope with these unforeseen conditions, NN-based receivers can learn to mitigate previously unseen effects even after their deployment. For this, we showcase on-the-fly adaptation to current channel conditions and temporal alterations solely based on recovered labels from an outer forward error correction (FEC) code without any additional piloting overhead. To underline the flexibility of the proposed adaptive training, we showcase substantial gains for scenarios with static channel macro parameters, for out-of-specification usage and for interference compensation.
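The principle of adapting a receiver from recovered labels rather than pilots can be illustrated on a much smaller scale with a decision-directed LMS equalizer: the receiver's own hard decisions play the role of the labels recovered from the FEC code. The one-tap setup, the channel gain, and the step size below are all illustrative assumptions, not the paper's NN receiver.

```python
# Decision-directed LMS adaptation of a one-tap equalizer (toy sketch).
# BPSK symbols pass through a flat channel y = h * x; the equalizer
# weight w is updated using its own hard decision as the training label,
# i.e., no pilots are transmitted.

def adapt_one_tap(h=0.5, mu=0.1, steps=400, w0=0.2):
    w = w0
    symbols = [1.0, -1.0]  # deterministic BPSK stream for reproducibility
    for k in range(steps):
        x = symbols[k % 2]
        y = h * x                          # received sample
        x_hat = w * y                      # equalized estimate
        d = 1.0 if x_hat >= 0 else -1.0    # recovered (decision) label
        w += mu * y * (d - x_hat)          # LMS step toward the label
    return w
```

As long as the initial decisions are mostly correct (here they always are), the weight converges to the channel inverse 1/h, mirroring how FEC-recovered labels can keep an NN receiver tracking slow channel drift without piloting overhead.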

The recurrence rebuild and retrieval method (R3M) is proposed in this paper to accelerate the electromagnetic (EM) validations of large-scale digital coding metasurfaces (DCMs). R3M aims to accelerate the EM validations of DCMs with varied codebooks, which involves the analysis of a group of similar but not identical structures. The method transforms general DCMs into rigorously periodic arrays by replacing each coding unit with a macro unit, which comprises all possible coding states. The system matrix corresponding to the rigorously periodic array is globally shared among DCMs with arbitrary codebooks via implicit retrieval. The discrepancy in the interactions for edge and corner units is resolved by extending the bases across the periodic boundaries. Moreover, the hierarchical pattern exploitation (HPE) algorithm is leveraged to efficiently assemble the system matrix for further acceleration. Due to the full utilization of the rigorous periodicity, the computational complexity of R3M-HPE is theoretically lower than that of $\mathcal{H}$-matrix methods within the same paradigm. Numerical results for two types of DCMs indicate that R3M-HPE is accurate in comparison with commercial software. Besides, R3M-HPE is also compatible with preconditioning for efficient iterative solutions. For DCMs, R3M-HPE outperforms conventional fast algorithms in both storage and CPU time.

Encouraged by decision makers' appetite for future information on topics ranging from elections to pandemics, and enabled by the explosion of data and computational methods, model-based forecasts have garnered increasing influence over a breadth of decisions in modern society. Using several classic examples from fisheries management, I demonstrate that selecting the model or models that produce the most accurate and precise forecast (measured by statistical scores) can sometimes lead to worse outcomes (measured by real-world objectives). This can create a forecast trap, in which outcomes such as fish biomass or economic yield decline even as the manager becomes increasingly convinced that their actions are consistent with the best models and data available. The forecast trap is not unique to this example, but is a fundamental consequence of the non-uniqueness of models. Existing practices that promote a broader set of models are the best way to avoid the trap.

Behaviors of the synthetic characters in current military simulations are limited since they are generally generated by rule-based and reactive computational models with minimal intelligence. Such computational models cannot adapt to reflect the experience of the characters, resulting in brittle intelligence for even the most effective behavior models devised via costly and labor-intensive processes. Observation-based behavior model adaptation that leverages machine learning and the experience of synthetic entities in combination with appropriate prior knowledge can address the issues in the existing computational behavior models to create a better training experience in military training simulations. In this paper, we introduce a framework that aims to create autonomous synthetic characters that can perform coherent sequences of believable behavior while being aware of human trainees and their needs within a training simulation. This framework brings together three mutually complementary components. The first component is a Unity-based simulation environment - Rapid Integration and Development Environment (RIDE) - supporting One World Terrain (OWT) models and capable of running and supporting machine learning experiments. The second is Shiva, a novel multi-agent reinforcement and imitation learning framework that can interface with a variety of simulation environments, and that can additionally utilize a variety of learning algorithms. The final component is the Sigma Cognitive Architecture that will augment the behavior models with symbolic and probabilistic reasoning capabilities. We have successfully created proof-of-concept behavior models leveraging this framework on realistic terrain as an essential step towards bringing machine learning into military simulations.
