
This paper presents SimAEN, an agent-based simulation whose purpose is to assist public health in understanding and controlling Automated Exposure Notification (AEN). SimAEN models a population of interacting individuals, or 'agents', in which COVID-19 is spreading. These individuals interact with a public health system that includes AEN and Manual Contact Tracing (MCT). These interactions influence when individuals enter and leave quarantine, affecting the spread of the simulated disease. Over 70 user-configurable parameters influence the outcome of SimAEN's simulations. These parameters allow the user to tailor SimAEN to a specific public health jurisdiction and to test the effects of various interventions, including different sensitivity settings of AEN.
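
To make the mechanics concrete, here is a minimal, hypothetical sketch of the kind of agent loop such a simulation runs. The parameter names (p_transmit, p_notify, quarantine_days) are illustrative stand-ins rather than SimAEN's actual configuration, and recovery, MCT, and many other mechanisms are omitted for brevity.

```python
import random

# Hypothetical stand-ins for a few of SimAEN's 70+ configurable parameters.
p_transmit = 0.05       # probability an infectious contact transmits
p_notify = 0.6          # probability a contact receives an AEN exposure notification
quarantine_days = 10    # days a notified agent stays in quarantine
n_agents, n_days, contacts_per_day = 1000, 60, 8

agents = [{"state": "S", "quarantine": 0} for _ in range(n_agents)]
for a in random.sample(agents, 5):
    a["state"] = "I"    # seed a few infections

for day in range(n_days):
    for a in agents:                       # count down existing quarantines
        if a["quarantine"] > 0:
            a["quarantine"] -= 1
    infectious = [a for a in agents if a["state"] == "I" and a["quarantine"] == 0]
    for src in infectious:
        for dst in random.sample(agents, contacts_per_day):
            if dst["state"] == "S" and dst["quarantine"] == 0 and random.random() < p_transmit:
                dst["state"] = "I"
            # AEN: a notified contact enters quarantine, cutting its further contacts
            if random.random() < p_notify:
                dst["quarantine"] = quarantine_days

print(sum(a["state"] == "I" for a in agents), "agents ever infected after", n_days, "days")
```

Raising p_notify in this toy loop plays the role of a more sensitive AEN setting: more contacts are quarantined, at the cost of quarantining more uninfected people.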

Related content

The IFIP TC13 Conference on Human-Computer Interaction is an important venue for researchers and practitioners in the field of human-computer interaction to present their work. Over the years, these conferences have attracted researchers from a number of countries and cultures.
January 31, 2022

Ensemble Kalman inversion (EKI) is a technique for the numerical solution of inverse problems. A great advantage of EKI's ensemble approach is that derivatives are not required in its implementation. In theory, however, EKI's ensemble size needs to surpass the dimension of the problem because of EKI's "subspace property": the EKI solution is always a linear combination of the initial ensemble it starts from. We show that the ensemble can break out of this initial subspace when "localization" is applied. In essence, localization enforces an assumed correlation structure onto the problem and is heavily used in ensemble Kalman filtering and data assimilation. We describe and analyze how to apply localization to the EKI, and how localization helps the EKI ensemble break out of the initial subspace. Specifically, we show that the localized EKI (LEKI) ensemble will collapse to a single point (as intended) and that the LEKI ensemble mean will converge to the global optimum at a sublinear rate. Under strict assumptions on the localization procedure and observation process, we further show that the data misfit decays uniformly. We illustrate our ideas and theoretical developments with numerical examples on toy problems, a Lorenz model, and an inversion of electromagnetic data, where some of our mathematical assumptions may only be approximately valid.
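
The following is a minimal numpy sketch, for a linear toy problem, of a perturbed-observation EKI update with a simple distance-based (Schur/Hadamard) localization taper. The taper shape, the step count, and the identification of observation indices with parameter indices are illustrative assumptions, not the paper's algorithm; note that the ensemble size J is smaller than the dimension d, so an unlocalized update could not leave the initial ensemble's span.

```python
import numpy as np

rng = np.random.default_rng(0)
d, J = 20, 10                       # parameter dimension, ensemble size (J < d)
A = rng.normal(size=(d, d))         # linear forward map G(u) = A @ u
u_true = rng.normal(size=d)
gamma = 0.1
y = A @ u_true + gamma * rng.normal(size=d)

# Distance-based localization matrix (tapers spurious long-range correlations).
idx = np.arange(d)
L = np.exp(-np.abs(idx[:, None] - idx[None, :]) / 3.0)

U = rng.normal(size=(d, J))         # initial ensemble, columns are members
for step in range(50):
    G = A @ U                                        # forward evaluations
    du = U - U.mean(axis=1, keepdims=True)
    dg = G - G.mean(axis=1, keepdims=True)
    C_ug = L * (du @ dg.T / (J - 1))                 # localized cross-covariance
    C_gg = L * (dg @ dg.T / (J - 1))                 # localized output covariance
    K = C_ug @ np.linalg.inv(C_gg + gamma**2 * np.eye(d))
    noise = gamma * rng.normal(size=(d, J))
    U = U + K @ (y[:, None] + noise - G)             # perturbed-observation update

print("error of ensemble mean:", np.linalg.norm(U.mean(axis=1) - u_true))
```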

Decision support is the science and associated practice of providing recommendations to decision makers facing problems, based on available theoretical knowledge and empirical data. Although this activity is often seen as being concerned with solving mathematical problems and designing algorithms, it is essentially an empirical and socially framed activity, in which interactions between clients and analysts, and between them and concerned third parties, play a crucial role. Since the 1980s, two concepts have structured the literature devoted to analysing this aspect of decision support: validity and legitimacy. Whereas validity is focused on the interactions between the client and the analyst, legitimacy refers to the broader picture: the organisational context, the overall problem situation, the environment, culture, and history. Despite its importance, this concept has not received the attention it deserves in the decision support literature. The present paper aims to fill this gap. For that purpose, we review the literature in other disciplines that is relevant to elaborating a concept of legitimacy useful in decision support contexts. Based on this review, we propose a general theory of legitimacy, adapted to decision support contexts, encompassing the relevant contributions we found in the literature. According to this general theory, a legitimate decision support intervention is one for which the decision support provider produces a justification that satisfies two conditions: (i) it effectively convinces the decision support provider's interlocutors (effectiveness condition) and (ii) it is organised around the active elicitation of as many and as diverse counterarguments as possible (truthfulness condition). Despite its conceptual simplicity, legitimacy, understood in this sense, is a very exacting requirement, opening ambitious research avenues that we delineate.

Exoskeletons and orthoses are wearable mobile systems that provide mechanical benefits to their users. Despite significant improvements in recent decades, the technology is not yet mature enough to be adopted for strenuous and non-programmed tasks. To address this shortcoming, different aspects of the technology need to be analysed and improved. Numerous studies have tried to address particular aspects of exoskeletons, e.g. mechanism design, intent prediction, and control schemes. However, most works have focused on a specific element of design or application without providing a comprehensive review framework. This study aims to analyse and survey the aspects that contribute to the improvement and broad adoption of this technology. To this end, after introducing assistive devices and exoskeletons, the main design criteria will be investigated from a physical Human-Robot Interface (HRI) perspective. The study will be further developed by outlining several examples of known assistive devices in different categories. In order to establish an intelligent HRI strategy and enable intuitive control for users, cognitive HRI will be investigated. Various approaches to this strategy will be reviewed, and a model for intent prediction will be proposed. This model is used to predict the gait phase from a single Electromyography (EMG) channel input. The outcomes of the modelling show the potential of single-channel input in low-power assistive devices. Furthermore, the proposed model can provide redundancy in devices with a complex control strategy.
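
Below is a minimal sketch of the general idea of single-channel EMG gait-phase prediction: window the signal, extract classic time-domain features, and train a lightweight classifier. The synthetic EMG generator, the two-phase (stance vs. swing) labelling, and the logistic-regression classifier are illustrative assumptions, not the model proposed in the study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
fs, win = 1000, 200                      # sampling rate (Hz), window length (samples)

def emg_window(active, n=win):
    """Synthetic single-channel EMG burst: higher amplitude when the muscle is active."""
    amp = 1.0 if active else 0.2
    return amp * rng.normal(size=n)

def features(x):
    """Classic time-domain EMG features used in low-power embedded classifiers."""
    mav = np.mean(np.abs(x))                     # mean absolute value
    rms = np.sqrt(np.mean(x**2))                 # root mean square
    zc = np.sum(np.diff(np.sign(x)) != 0)        # zero-crossing count
    return [mav, rms, zc]

# Illustrative labels: 1 = stance (muscle active), 0 = swing.
y = rng.integers(0, 2, size=400)
X = np.array([features(emg_window(bool(lbl))) for lbl in y])

clf = LogisticRegression(max_iter=1000).fit(X[:300], y[:300])
print("held-out accuracy:", clf.score(X[300:], y[300:]))
```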

Driven by the "user-centered design" philosophy, this paper first outlines the human factors problems of flight deck automation for large civil aircraft and the human factors research carried out under the "human-centered automation" approach. The paper then reviews earlier, preliminary human factors research on intelligent civil flight decks based on the "human-centered AI" approach and discusses prospects for future human factors research. Based on our proposed human factors engineering model for intelligent human-computer interaction and the framework of joint cognitive ecosystems, this paper proposes an initial human factors solution for the single-pilot operations of large civil aircraft and presents preliminary suggestions for future human factors research.

Researchers are often faced with evaluating the effect of a policy or program that was simultaneously initiated across an entire population of units at a single point in time, and whose effects on the targeted population can manifest at any time period afterwards. In the presence of data measured over time, Bayesian time series models have been used to impute what would have happened after the policy was initiated, had the policy not taken place, in order to estimate causal effects. However, the considerations regarding the definition of the target estimands, the underlying assumptions, the plausibility of those assumptions, and the choice of an appropriate model have not been thoroughly investigated. In this paper, we establish useful estimands for the evaluation of large-scale policies. We discuss that imputation of missing potential outcomes relies on an assumption which, even though untestable, can be partially evaluated using observed data. We illustrate an approach to evaluate this key causal assumption and facilitate model elicitation based on data from the time interval before policy initiation and using classic statistical techniques. As an illustration, we study the Hospital Readmissions Reduction Program (HRRP), a US federal intervention aiming to improve health outcomes for patients with pneumonia, acute myocardial infarction, or congestive heart failure admitted to a hospital. We evaluate the effect of the HRRP on population mortality across the US and in four geographic subregions, and at different time windows. We find that the HRRP increased mortality from the three targeted conditions across most scenarios considered, and is likely to have had a detrimental effect on public health.
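
The core imputation logic can be sketched very simply: fit an outcome model on the pre-policy window only, project it forward to impute the untreated potential outcomes, and contrast those with what was observed. The sketch below uses synthetic data and a plain least-squares trend-plus-seasonality model instead of the Bayesian time series models discussed in the paper; all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
T0, T1 = 60, 24                     # months before / after policy initiation
t = np.arange(T0 + T1)
season = np.sin(2 * np.pi * t / 12)

# Synthetic monthly outcome with trend, seasonality, noise, and a post-policy shift.
y = 5.0 + 0.01 * t + 0.3 * season + 0.1 * rng.normal(size=t.size)
y[T0:] += 0.15                      # the (unknown) causal effect we want to recover

# Fit on the pre-policy window only, then impute Y(0) for the post-policy window.
X = np.column_stack([np.ones_like(t, dtype=float), t, season])
beta, *_ = np.linalg.lstsq(X[:T0], y[:T0], rcond=None)
y0_hat = X[T0:] @ beta              # imputed untreated potential outcomes

effect = y[T0:] - y0_hat            # estimated effect at each post-policy time point
print("estimated average effect over the post-policy window:", effect.mean())
```

The validity of this contrast rests on the untestable assumption that the pre-policy model would have continued to describe the outcome had the policy not taken place, which is exactly the assumption the paper proposes to probe with pre-initiation data.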

We introduce a numerical technique for controlling the location and stability properties of Hopf bifurcations in dynamical systems. The algorithm consists of solving an optimization problem constrained by an extended system of nonlinear partial differential equations that characterizes Hopf bifurcation points. The flexibility and robustness of the method allow us to advance or delay a Hopf bifurcation to a target value of the bifurcation parameter, as well as to control the oscillation frequency with respect to a parameter of the system or the shape of the domain on which solutions are defined. Numerical applications are presented in systems arising from biology and fluid dynamics, such as the FitzHugh-Nagumo model, Ginzburg-Landau equation, Rayleigh-B\'enard convection problem, and Navier-Stokes equations, where the control of the location and oscillation frequency of periodic solutions is of high interest.
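
For readers unfamiliar with what "the location of a Hopf bifurcation" means numerically, the sketch below locates one in the FitzHugh-Nagumo model (one of the examples named in the abstract) as the parameter value where a complex eigenvalue pair of the equilibrium Jacobian crosses the imaginary axis. This is only a detection sketch with assumed standard parameter values; it is not the constrained-optimization control algorithm the paper introduces.

```python
import numpy as np
from scipy.optimize import brentq

a, b, eps = 0.7, 0.8, 0.08          # classic FitzHugh-Nagumo parameters (assumed)

def max_real_eig(I):
    """Largest real part of the Jacobian eigenvalues at the equilibrium for forcing I."""
    # Equilibrium: v = (u + a) / b and u - u^3/3 - v + I = 0 (monotone in u).
    u = brentq(lambda u: u - u**3 / 3 - (u + a) / b + I, -3, 3)
    J = np.array([[1 - u**2, -1.0],
                  [eps,      -eps * b]])
    return np.max(np.linalg.eigvals(J).real)

# The equilibrium loses stability (Hopf bifurcation) where this real part crosses zero.
I_hopf = brentq(max_real_eig, 0.0, 1.0)
print("Hopf bifurcation near I =", round(I_hopf, 4))
```

The paper's method goes further: instead of merely finding such a point, it treats its location (and the oscillation frequency) as quantities to be steered toward target values by optimizing over system parameters or domain shape.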

As neural networks are increasingly deployed, confidence in their predictions has become more and more important. However, basic neural networks do not deliver certainty estimates and may suffer from over- or under-confidence. Many researchers have been working on understanding and quantifying uncertainty in a neural network's prediction. As a result, different types and sources of uncertainty have been identified, and a variety of approaches to measure and quantify uncertainty in neural networks have been proposed. This work gives a comprehensive overview of uncertainty estimation in neural networks, reviews recent advances in the field, highlights current challenges, and identifies potential research opportunities. It is intended to give anyone interested in uncertainty estimation in neural networks a broad overview and introduction, without presupposing prior knowledge of the field. A comprehensive introduction to the most crucial sources of uncertainty is given, along with their separation into reducible model uncertainty and irreducible data uncertainty. The modeling of these uncertainties based on deterministic neural networks, Bayesian neural networks, ensembles of neural networks, and test-time data augmentation approaches is introduced, and different branches of these fields as well as the latest developments are discussed. For practical application, we discuss different measures of uncertainty and approaches for the calibration of neural networks, and give an overview of existing baselines and implementations. Different examples from the wide spectrum of challenges in different fields give an idea of the needs and challenges regarding uncertainties in practical applications. Additionally, the practical limitations of current methods for mission- and safety-critical real-world applications are discussed, and an outlook on the next steps towards a broader usage of such methods is given.
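
One of the approach families mentioned above, ensembles of neural networks, can be illustrated in a few lines: train several identically structured networks with different random initialisations and read the spread of their predictions as a proxy for (reducible) model uncertainty. The toy data, architecture, and ensemble size below are assumptions for illustration only.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)

# A small ensemble: same architecture, different random initialisations.
ensemble = [MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                         random_state=seed).fit(X, y) for seed in range(5)]

X_test = np.linspace(-6, 6, 7).reshape(-1, 1)        # includes out-of-distribution inputs
preds = np.stack([m.predict(X_test) for m in ensemble])
mean, std = preds.mean(axis=0), preds.std(axis=0)    # std acts as a model-uncertainty proxy
for x, m_, s_ in zip(X_test[:, 0], mean, std):
    print(f"x={x:+.1f}  prediction={m_:+.2f}  ensemble spread={s_:.2f}")
```

The spread typically grows for the out-of-range inputs, which is the behaviour one wants from a model-uncertainty estimate; capturing irreducible data uncertainty requires additional machinery, such as predicting an output variance.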

Training datasets for machine learning often have some form of missingness. For example, to learn a model for deciding whom to give a loan, the available training data includes individuals who were given a loan in the past, but not those who were not. This missingness, if ignored, nullifies any fairness guarantee of the training procedure when the model is deployed. Using causal graphs, we characterize the missingness mechanisms in different real-world scenarios. We show conditions under which various distributions, used in popular fairness algorithms, can or cannot be recovered from the training data. Our theoretical results imply that many of these algorithms cannot guarantee fairness in practice. Modeling missingness also helps to identify correct design principles for fair algorithms. For example, in multi-stage settings where decisions are made in multiple screening rounds, we use our framework to derive the minimal distributions required to design a fair algorithm. Our proposed algorithm decentralizes the decision-making process and still achieves similar performance to the optimal algorithm that requires centralization and non-recoverable distributions.
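
The loan example can be made concrete with a tiny simulation of the underlying missingness mechanism: outcomes are only observed for people the historical policy approved, so the label distribution seen at training time differs systematically from the deployment population. This is only a toy illustration of that selection effect, not the paper's causal-graph analysis or proposed algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
score = rng.normal(size=n)                          # creditworthiness feature
repay = rng.random(n) < 1 / (1 + np.exp(-score))    # true repayment outcome

# Historical policy: loans were only granted above a score threshold, so repayment
# labels are missing for everyone else -- the missingness mechanism in the example.
approved = score > 0.5

print("repayment rate, full population:    ", repay.mean().round(3))
print("repayment rate, observed (approved):", repay[approved].mean().round(3))
# A model (or fairness constraint) fit only on the observed rows sees a distribution
# that does not match the population it will be deployed on.
```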

Since deep neural networks were developed, they have made huge contributions to everyday life. Machine learning provides more rational advice than humans are capable of in almost every aspect of daily life. However, despite this achievement, the design and training of neural networks are still challenging and unpredictable procedures. To lower the technical threshold for non-expert users, automated hyper-parameter optimization (HPO) has become a popular topic in both academia and industry. This paper provides a review of the most essential topics in HPO. The first section introduces the key hyper-parameters related to model training and structure, and discusses their importance and methods to define their value ranges. Then, the research focuses on major optimization algorithms and their applicability, covering their efficiency and accuracy, especially for deep learning networks. This study next reviews major services and toolkits for HPO, comparing their support for state-of-the-art searching algorithms, compatibility with major deep learning frameworks, and extensibility for new modules designed by users. The paper concludes with problems that exist when HPO is applied to deep learning, a comparison between optimization algorithms, and prominent approaches for model evaluation with limited computational resources.
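
As a baseline point of reference for the optimization algorithms such a review covers, here is a minimal random-search HPO loop: sample hyper-parameter configurations from user-defined ranges, score each by cross-validation, and keep the best. The search space, trial budget, and model below are illustrative assumptions, not recommendations from the paper.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

best_params, best_score = None, -np.inf
for trial in range(10):
    # Sample hyper-parameters: log-uniform learning rate and L2 penalty, categorical width.
    params = {
        "learning_rate_init": float(10 ** rng.uniform(-4, -1)),
        "hidden_layer_sizes": (int(rng.choice([16, 32, 64, 128])),),
        "alpha": float(10 ** rng.uniform(-6, -2)),
    }
    model = MLPClassifier(max_iter=500, random_state=0, **params)
    score = cross_val_score(model, X, y, cv=3).mean()
    if score > best_score:
        best_params, best_score = params, score

print("best hyper-parameters:", best_params, "cv accuracy:", round(best_score, 3))
```

More sophisticated methods discussed in the HPO literature (e.g. Bayesian optimization or multi-fidelity schemes) replace the uniform sampling with a model of which configurations are promising, or allocate less compute to unpromising trials.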

This paper describes the development of the Microsoft XiaoIce system, the most popular social chatbot in the world. XiaoIce is uniquely designed as an AI companion with an emotional connection to satisfy the human need for communication, affection, and social belonging. We take into account both intelligence quotient (IQ) and emotional quotient (EQ) in system design, cast human-machine social chat as decision-making over Markov Decision Processes (MDPs), and optimize XiaoIce for long-term user engagement, measured in expected Conversation-turns Per Session (CPS). We detail the system architecture and key components including the dialogue manager, core chat, skills, and an empathetic computing module. We show how XiaoIce dynamically recognizes human feelings and states, understands user intents, and responds to user needs throughout long conversations. Since its release in 2014, XiaoIce has communicated with over 660 million users and succeeded in establishing long-term relationships with many of them. Analysis of large-scale online logs shows that XiaoIce has achieved an average CPS of 23, which is significantly higher than that of other chatbots and even human conversations.
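
To illustrate the decision-theoretic framing rather than the system itself, the toy sketch below casts each turn as a choice among response generators ("skills"), with one unit of reward for every turn the user stays, so that maximizing return corresponds to maximizing CPS. The skill names, the bandit-style value updates, and the epsilon-greedy policy are simplifying assumptions, not XiaoIce's dialogue manager.

```python
import random

# Toy stand-ins for response generators the dialogue manager can route to.
skills = ["core_chat", "image_commenting", "task_completion"]
value = {s: 0.0 for s in skills}    # running average of turns observed after choosing s
counts = {s: 0 for s in skills}

def choose_skill(epsilon=0.1):
    """Epsilon-greedy policy: mostly pick the skill with the best engagement estimate."""
    if random.random() < epsilon:
        return random.choice(skills)              # explore
    return max(skills, key=lambda s: value[s])    # exploit

def update(skill, observed_turns):
    """Update the engagement estimate from a logged session (reward = turns kept)."""
    counts[skill] += 1
    value[skill] += (observed_turns - value[skill]) / counts[skill]

# Example usage with a fabricated logged session of 23 turns:
s = choose_skill()
update(s, observed_turns=23)
print(s, "->", value[s])
```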
