
In a bipartite experiment, units that are assigned treatments differ from the units for which we measure outcomes. The two groups of units are connected by a bipartite graph, governing how the treated units can affect the outcome units. Often motivated by experiments in marketplaces, the bipartite experimental framework has been used for example to investigate the causal effects of supply-side changes on demand-side behavior. In this paper, we consider the problem of estimating the average total treatment effect in the bipartite experimental framework under a linear exposure-response model. We introduce the Exposure Reweighted Linear (ERL) Estimator, an unbiased linear estimator of the average treatment effect in this setting. We show that the estimator is consistent and asymptotically normal, provided that the bipartite graph is sufficiently sparse. We derive a variance estimator which facilitates confidence intervals based on a normal approximation. In addition, we introduce Exposure-Design, a cluster-based design which aims to increase the precision of the ERL estimator by realizing desirable exposure distributions. Finally, we demonstrate the effectiveness of the described estimator and design with an application using a publicly available Amazon user-item review graph.
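
Below is a minimal sketch, in Python, of how exposures can be computed from a bipartite graph under a Bernoulli design and turned into a reweighted linear estimate of the average total treatment effect. The slope-style weighting used here is a simplified stand-in for the paper's exact ERL weights, and all names, sizes, and parameters are illustrative.

    import numpy as np

    def exposures(adjacency, z):
        """Fraction of each outcome unit's neighbors that are treated.

        adjacency: (n_outcome, n_diversion) 0/1 bipartite matrix.
        z: 0/1 treatment vector over the diversion (treated) units.
        """
        degree = adjacency.sum(axis=1)
        return adjacency @ z / np.maximum(degree, 1)

    def exposure_reweighted_estimate(y, e, p):
        """Reweighted linear estimate of the average total treatment effect.

        Under a linear exposure-response model y_j = a_j + b_j * e_j + noise,
        the total effect is the average slope from exposure 0 to exposure 1.
        The centered-exposure slope below is a simplified stand-in for the
        paper's ERL weights; p is the known treatment probability, so the
        exposure of each unit has mean p under the design.
        """
        e_centered = e - p
        return np.sum(e_centered * y) / np.sum(e_centered ** 2)

    # toy usage with a sparse random bipartite graph and a Bernoulli(0.5) design
    rng = np.random.default_rng(0)
    A = (rng.random((500, 200)) < 0.03).astype(float)
    z = rng.binomial(1, 0.5, size=200)
    e = exposures(A, z)
    y = 1.0 + 2.0 * e + rng.normal(scale=0.1, size=500)   # true total effect = 2
    print(exposure_reweighted_estimate(y, e, 0.5))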

Related Content

One of the possible objectives when designing experiments is to build or formulate a model for predicting future observations. When the primary objective is prediction, typical approaches are to use well-established small-sample experimental designs (e.g., Definitive Screening Designs) in the design phase and to construct predictive models using widely used model selection algorithms such as the Lasso in the analysis phase. These design and analytic strategies, however, do not guarantee high prediction performance, partly because the small sample sizes prevent partitioning the data into training and validation sets, a strategy commonly used in machine learning to improve out-of-sample prediction. In this work, we propose a novel framework for building high-performance predictive models from experimental data that capitalizes on the advantage of having both training and validation sets. However, instead of partitioning the data into two mutually exclusive subsets, we propose a weighting scheme based on the fractional random weight bootstrap that emulates data partitioning by assigning anti-correlated training and validation weights to each observation. The proposed methodology, called Self-Validated Ensemble Modeling (SVEM), proceeds in the spirit of bagging: it iterates through bootstraps of anti-correlated weights and fitted models, with the final SVEM model being the average of the bootstrapped models. We investigate the performance of the SVEM algorithm with several model-building approaches such as stepwise regression, the Lasso, and the Dantzig selector. Finally, through simulation and case studies, we show that SVEM generally produces models with better prediction performance than one-shot model selection approaches.
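
A minimal sketch of the anti-correlated fractional-weight idea with a Lasso base learner is given below. It follows the general description in the abstract (weights w_i = -log(u_i) for training, -log(1 - u_i) for validation, tuning on the validation-weighted error, averaging across bootstraps); it is not necessarily the authors' implementation, and the penalty grid and scikit-learn learner are assumptions.

    import numpy as np
    from sklearn.linear_model import Lasso

    def svem_lasso(X, y, n_boot=200, alphas=np.logspace(-3, 1, 20), seed=0):
        """Self-Validated Ensemble Modeling sketch with a Lasso base learner.

        Each bootstrap draws fractional random weights: training weight
        w_i = -log(u_i) and validation weight v_i = -log(1 - u_i), so an
        observation weighted heavily for training is weighted lightly for
        validation (anti-correlation). The penalty is tuned on the
        validation-weighted error, and coefficients are averaged across
        bootstraps in the spirit of bagging.
        """
        rng = np.random.default_rng(seed)
        n, p = X.shape
        coefs, intercepts = [], []
        for _ in range(n_boot):
            u = rng.uniform(1e-6, 1 - 1e-6, size=n)
            w_train, w_val = -np.log(u), -np.log(1 - u)
            best, best_err = None, np.inf
            for alpha in alphas:
                model = Lasso(alpha=alpha, max_iter=10000)
                model.fit(X, y, sample_weight=w_train)
                err = np.average((y - model.predict(X)) ** 2, weights=w_val)
                if err < best_err:
                    best, best_err = model, err
            coefs.append(best.coef_)
            intercepts.append(best.intercept_)
        return np.mean(intercepts), np.mean(coefs, axis=0)

    # prediction for new data X_new: y_hat = intercept + X_new @ coef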

Optimal feedback control (OFC) is a theory from the motor control literature that explains how humans move their body to achieve a certain goal, e.g., pointing with the finger. OFC is based on the assumption that humans aim to control their body optimally, within the constraints imposed by body, environment, and task. In this paper, we explain how this theory can be applied to understanding Human-Computer Interaction. We propose that in this case, the dynamics of the human body and computer can be interpreted as a single dynamical system. The state of this system is controlled by the user via muscle control signals, and estimated from observations. Between-trial variability arises from signal-dependent control noise and observation noise. We compare four different models from optimal control theory and evaluate to what degree these models can replicate movements in the case of mouse pointing. We introduce a procedure to identify parameters that best explain observed user behavior, and show how these parameters can be used to gain insights regarding user characteristics and preferences. We conclude that OFC presents a powerful framework for HCI to understand and simulate the motion of the human body and of the interface on a moment-by-moment basis.
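
As an illustration of one member of this model family, the sketch below simulates a 1-D point-mass "pointing" plant under finite-horizon LQR feedback with signal-dependent (multiplicative) control noise. The plant, costs, and noise scale are illustrative placeholders, not the four models or fitted parameters from the paper.

    import numpy as np

    # 1-D point-mass "pointing" plant: state = [position, velocity]
    dt = 0.01
    A = np.array([[1.0, dt], [0.0, 1.0]])
    B = np.array([[0.0], [dt]])
    N = 100                      # horizon (a 1 s movement)
    Q = np.diag([0.0, 0.0])      # no running state cost
    Qf = np.diag([1e4, 1e2])     # penalize missing the target and residual speed
    R = np.array([[1e-4]])       # effort cost

    # finite-horizon LQR gains via backward Riccati recursion
    P = Qf
    gains = []
    for _ in range(N):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ A - A.T @ P @ B @ K
        gains.append(K)
    gains = gains[::-1]

    # forward simulation with signal-dependent control noise
    rng = np.random.default_rng(0)
    target = np.array([0.1, 0.0])              # point 10 cm away and stop
    x = np.zeros(2)
    for k in range(N):
        u = -gains[k] @ (x - target)           # feedback on the error to the target
        u_noisy = u * (1 + 0.2 * rng.normal())  # noise magnitude scales with the signal
        x = A @ x + B @ u_noisy
    print(x)   # ends near the target despite the noise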

When developing a new networking algorithm, it is established practice to run a randomized experiment, or A/B test, to evaluate its performance. In an A/B test, traffic is randomly allocated between a treatment group, which uses the new algorithm, and a control group, which uses the existing algorithm. However, because networks are congested, both treatment and control traffic compete against each other for resources in a way that biases the outcome of these tests. This bias can have a surprisingly large effect; for example, in lab A/B tests with two widely used congestion control algorithms, the treatment appeared to deliver 150% higher throughput when used by a few flows, and 75% lower throughput when used by most flows, despite the fact that the two algorithms have identical throughput when used by all traffic. Beyond the lab, we show that A/B tests can also be biased at scale. In an experiment run in cooperation with Netflix, estimates from A/B tests mistake the direction of change of some metrics, miss changes in other metrics, and overestimate the size of effects. We propose alternative experiment designs, previously used in online platforms, to more accurately evaluate new algorithms and allow experimenters to better understand the impact of congestion on their tests.
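
The toy model below illustrates why such A/B tests are biased when treatment and control flows share a bottleneck: bandwidth is split in proportion to an "aggressiveness" weight, a crude stand-in for how congestion-control algorithms compete. It is not Netflix's setup or any real congestion-control implementation; the capacities and weights are made up, and it only demonstrates the direction of the interference bias.

    def throughput_share(n_treat, n_control, capacity=100.0,
                         w_treat=3.0, w_control=1.0):
        """Per-flow throughput when flows share one bottleneck.

        Bandwidth is split in proportion to an 'aggressiveness' weight,
        a crude proxy for how congestion-control algorithms compete.
        Returns (per-flow treatment throughput, per-flow control throughput).
        """
        total = n_treat * w_treat + n_control * w_control
        per_treat = capacity * w_treat / total if n_treat else 0.0
        per_control = capacity * w_control / total if n_control else 0.0
        return per_treat, per_control

    n = 100
    # full deployment of either algorithm: identical per-flow throughput (no true effect)
    print(throughput_share(n, 0)[0], throughput_share(0, n)[1])
    # yet in a 10%-treatment A/B test, treatment flows get about 3x the control throughput
    print(throughput_share(10, 90))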

This study presents contemporaneous modeling of asset return and price range within the framework of stochastic volatility with leverage. A new representation of the probability density function for the price range is provided, and an accurate sampling algorithm for it is developed. Bayesian estimation of the model parameters and unobserved variables is carried out using a Markov chain Monte Carlo (MCMC) method. MCMC samples can be generated rigorously, even though the estimation procedure requires sampling from a density function that involves the sum of an infinite series. The empirical results obtained using data from the U.S. market indices are consistent with the stylized facts in the financial market, such as the existence of the leverage effect. In addition, to explore the model's predictive ability, a model comparison based on volatility forecast performance is conducted.
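
For intuition, the sketch below simulates the stochastic-volatility-with-leverage data-generating process and shows how a daily price range arises from an intraday price path. It does not reproduce the paper's density representation, range sampling algorithm, or MCMC scheme; the parameter values are illustrative.

    import numpy as np

    def simulate_sv_leverage(T=1000, mu=-1.0, phi=0.97, sigma_eta=0.15,
                             rho=-0.5, n_intraday=390, seed=0):
        """Simulate daily returns and log price ranges under SV with leverage.

        Log-volatility follows an AR(1): h_{t+1} = mu + phi*(h_t - mu) + eta_t,
        the daily return is r_t = exp(h_t/2) * eps_t, and corr(eps_t, eta_t) =
        rho < 0 gives the leverage effect. The range is max - min of an
        intraday log-price random walk whose total variance matches the day.
        """
        rng = np.random.default_rng(seed)
        h = np.empty(T)
        r = np.empty(T)
        price_range = np.empty(T)
        h[0] = mu
        for t in range(T):
            sigma_t = np.exp(h[t] / 2)
            steps = sigma_t / np.sqrt(n_intraday) * rng.normal(size=n_intraday)
            path = np.cumsum(steps)
            r[t] = path[-1]                            # daily return
            price_range[t] = path.max() - path.min()   # daily log price range
            if t + 1 < T:
                eps = r[t] / sigma_t                   # standardized return shock
                xi = rng.normal()
                eta = sigma_eta * (rho * eps + np.sqrt(1 - rho**2) * xi)
                h[t + 1] = mu + phi * (h[t] - mu) + eta
        return r, price_range

    returns, ranges = simulate_sv_leverage()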

High-frequency market making is a liquidity-providing trading strategy that simultaneously generates many bids and asks for a security at ultra-low latency while maintaining a relatively neutral position. The strategy earns the bid-ask spread on every buy and sell transaction, in exchange for bearing adverse selection, uncertain execution, and inventory risk. We design realistic simulations of limit order markets and develop a high-frequency market making strategy in which agents process order book information to post the optimal price, order type, and execution time. By introducing the Deep Hawkes process to the high-frequency market making strategy, we allow a feedback loop to be created between order arrival and the state of the limit order book, together with self- and cross-excitation effects. Our high-frequency market making strategy accounts for the cancellation of orders that influence order queue position, profitability, bid-ask spread and the value of the order. The experimental results show that our trading agent outperforms the baseline strategy, which uses a probability density estimate of the fundamental price. We investigate the effect of cancellations on market quality and the agent's profitability. We validate how closely the simulation framework approximates reality by reproducing stylised facts from the empirical analysis of the simulated order book data.
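
To illustrate the self- and cross-excitation mechanism, the sketch below simulates a two-dimensional Hawkes process (buy and sell order arrivals) with exponential kernels using Ogata's thinning. The Deep Hawkes model in the paper parameterizes the intensity with a neural network and feeds back order book state, which is not reproduced here; the base rates and excitation matrix are illustrative.

    import numpy as np

    def simulate_hawkes_2d(mu, alpha, beta, T, seed=0):
        """Ogata thinning for a 2-D Hawkes process with exponential kernels.

        lambda_i(t) = mu[i] + sum_j sum_{t_k in events[j], t_k < t}
                      alpha[i][j] * exp(-beta * (t - t_k))
        Dimension 0 = buy orders, dimension 1 = sell orders; the diagonal of
        alpha is self-excitation and the off-diagonal is cross-excitation.
        """
        rng = np.random.default_rng(seed)
        mu, alpha = np.asarray(mu, float), np.asarray(alpha, float)
        events = [[], []]
        t = 0.0

        def intensities(t):
            lam = mu.copy()
            for j in (0, 1):
                past = np.asarray(events[j])
                if past.size:
                    lam += alpha[:, j] * np.exp(-beta * (t - past)).sum()
            return lam

        while t < T:
            lam_bar = intensities(t).sum()     # valid bound: intensity only decays
            t += rng.exponential(1.0 / lam_bar)
            if t >= T:
                break
            lam = intensities(t)
            u = rng.uniform(0, lam_bar)
            if u < lam[0]:
                events[0].append(t)            # buy order
            elif u < lam[0] + lam[1]:
                events[1].append(t)            # sell order
            # otherwise: candidate rejected (thinning step)
        return events

    buys, sells = simulate_hawkes_2d(mu=[0.5, 0.5],
                                     alpha=[[0.6, 0.2], [0.2, 0.6]],
                                     beta=1.5, T=100.0)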

This paper draws upon three themes in the bipedal control literature to achieve highly agile, terrain-aware locomotion. By terrain aware, we mean the robot can use information on terrain slope and friction cone as supplied by state-of-the-art mapping and trajectory planning algorithms. The process starts by abstracting, from the full dynamics of a Cassie 3D bipedal robot, an exact low-dimensional representation of its centroidal dynamics, parameterized by angular momentum. Under a piecewise planar terrain assumption, and the elimination of terms for the angular momentum about the robot's center of mass, the centroidal dynamics become linear and have dimension four. Four-step-horizon model predictive control (MPC) of the centroidal dynamics provides step-to-step foot placement commands. Importantly, we also include the intra-step dynamics at 10 ms intervals so that realistic terrain-aware constraints on the robot's evolution can be imposed in the MPC formulation. The output of the MPC is directly implemented on Cassie through the method of virtual constraints. In experiments, we validate the performance of our control strategy for the robot on inclined and stationary terrain, both indoors on a treadmill and outdoors on a hill.
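
The sketch below conveys the flavor of step-to-step foot placement MPC using a toy 1-D linear-inverted-pendulum model, solved as a bounded least-squares problem over a four-step horizon. It is a stand-in only: Cassie's four-dimensional angular-momentum-based model, the intra-step dynamics, and the terrain-aware constraints from the paper are not reproduced, and all parameters are illustrative.

    import numpy as np
    from scipy.optimize import lsq_linear

    # toy 1-D linear-inverted-pendulum step-to-step model
    g, H, Ts = 9.81, 0.9, 0.35          # gravity, CoM height, step duration
    w = np.sqrt(g / H)
    A = np.array([[np.cosh(w * Ts), np.sinh(w * Ts) / w],
                  [w * np.sinh(w * Ts), np.cosh(w * Ts)]])
    B = np.array([[-1.0], [0.0]])       # foot placement shifts CoM position relative to the foot

    def mpc_foot_placements(x0, v_des, n_steps=4, u_max=0.5, reg=1e-3):
        """Choose footstep displacements over a four-step horizon.

        Minimizes deviation of the start-of-step CoM velocity from v_des plus a
        small step-length regularization, with a box constraint on step length
        (a stand-in for kinematic/terrain limits). x_{k+1} = A x_k + B u_k.
        """
        n_x = 2
        Phi = np.vstack([np.linalg.matrix_power(A, k + 1) for k in range(n_steps)])
        Gamma = np.zeros((n_steps * n_x, n_steps))
        for k in range(n_steps):
            for i in range(k + 1):
                Gamma[k*n_x:(k+1)*n_x, i:i+1] = np.linalg.matrix_power(A, k - i) @ B
        S = np.kron(np.eye(n_steps), np.array([[0.0, 1.0]]))   # pick velocities
        M = np.vstack([S @ Gamma, np.sqrt(reg) * np.eye(n_steps)])
        b = np.concatenate([np.full(n_steps, v_des) - S @ Phi @ x0,
                            np.zeros(n_steps)])
        res = lsq_linear(M, b, bounds=(-u_max, u_max))
        return res.x        # apply res.x[0], then re-plan (receding horizon)

    print(mpc_foot_placements(x0=np.array([0.0, 0.3]), v_des=0.8))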

The growing availability of observational databases like electronic health records (EHR) provides unprecedented opportunities for secondary use of such data in biomedical research. However, these data can be error-prone and need to be validated before use. It is usually unrealistic to validate the whole database due to resource constraints. A cost-effective alternative is to implement a two-phase design that validates a subset of patient records that are enriched for information about the research question of interest. Herein, we consider odds ratio estimation under differential outcome and exposure misclassification. We propose optimal designs that minimize the variance of the maximum likelihood odds ratio estimator. We develop a novel adaptive grid search algorithm that can locate the optimal design in a computationally feasible and numerically accurate manner. Because the optimal design requires specification of unknown parameters at the outset and thus is unattainable without prior information, we introduce a multi-wave sampling strategy to approximate it in practice. We demonstrate the efficiency gains of the proposed designs over existing ones through extensive simulations and two large observational studies. We provide an R package and Shiny app to facilitate the use of the optimal designs.
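
The skeleton below sketches the grid-search idea: enumerate phase-two validation allocations across strata and keep the one with the smallest approximate estimator variance. The variance function shown is only a crude inverse-count placeholder; the paper instead minimizes the variance of the maximum likelihood odds ratio estimator under differential misclassification and refines the grid adaptively, neither of which is reproduced here. All numbers are illustrative.

    import numpy as np
    from itertools import product

    def grid_search_design(stratum_sizes, budget, var_fn, step=10):
        """Exhaustive grid search over phase-two validation allocations.

        stratum_sizes: phase-one record counts per stratum (e.g., the four
        cells of error-prone outcome x exposure). budget: total records that
        can be validated. var_fn(alloc) returns the approximate variance of
        the odds ratio estimator under that allocation. A coarse fixed grid
        stands in for the paper's adaptive grid search.
        """
        best_alloc, best_var = None, np.inf
        grids = [range(0, min(s, budget) + 1, step) for s in stratum_sizes]
        for alloc in product(*grids):
            if sum(alloc) != budget:
                continue
            v = var_fn(np.asarray(alloc))
            if v < best_var:
                best_alloc, best_var = np.asarray(alloc), v
        return best_alloc, best_var

    # placeholder variance: inverse-expected-count surrogate using pilot
    # (wave-1) guesses of within-stratum cell probabilities
    pilot_probs = np.array([[0.7, 0.3], [0.4, 0.6], [0.6, 0.4], [0.2, 0.8]])
    def var_fn(alloc):
        expected = np.maximum(alloc[:, None] * pilot_probs, 1e-9)
        return (1.0 / expected).sum()

    print(grid_search_design([400, 350, 500, 250], budget=300, var_fn=var_fn))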

Privacy preference signals allow users to express preferences over how their personal data is processed. These signals become important in determining privacy outcomes when they reference an enforceable legal basis, as is the case with recent signals such as the Global Privacy Control and the Transparency & Consent Framework. However, the coexistence of multiple privacy preference signals creates ambiguity as users may transmit more than one signal. This paper collects evidence about ambiguity flowing from the aforementioned two signals and the historic Do Not Track signal. We provide the first empirical evidence that ambiguous signals are sent by web users in the wild. We also show that preferences stored in the browser are reliable predictors of privacy preferences expressed in web dialogs. Finally, we provide the first evidence that popular cookie dialogs are blocked by the majority of users who adopted the Do Not Track and Global Privacy Control standards. These empirical results inform forthcoming legal debates about how to interpret privacy preference signals.
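
As a small illustration of how such signals co-occur on a single request, the sketch below classifies the combination of opt-out signals a browser sends. Do Not Track and Global Privacy Control are carried in the DNT and Sec-GPC request headers; the TCF consent string requires a dedicated parser, so it is represented here only by a pre-parsed boolean, which is an assumption for illustration.

    def classify_signals(headers, tcf_objection=None):
        """Classify the combination of privacy preference signals on a request.

        headers: dict of HTTP request headers ("DNT: 1", "Sec-GPC: 1").
        tcf_objection: pre-parsed stand-in for whether the TCF consent string
        records an objection (parsing the real TC string is out of scope).
        """
        dnt = headers.get("DNT") == "1"
        gpc = headers.get("Sec-GPC") == "1"
        signals = {"DNT": dnt, "GPC": gpc}
        if tcf_objection is not None:
            signals["TCF-objection"] = tcf_objection
        sent = [name for name, on in signals.items() if on]
        if len(sent) > 1:
            return "ambiguous: multiple signals sent " + str(sent)
        if (dnt or gpc) and tcf_objection is False:
            return "conflicting: opt-out header sent alongside TCF consent"
        if len(sent) == 1:
            return "single signal: " + sent[0]
        return "no opt-out signal"

    print(classify_signals({"DNT": "1", "Sec-GPC": "1"}))          # ambiguous
    print(classify_signals({"Sec-GPC": "1"}, tcf_objection=False))  # conflicting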

Mathematical models of infectious diseases exhibit robust dynamics, such as a stable endemic or disease-free equilibrium, or convergence of the solutions to periodic epidemic waves. The present work shows that the accuracy of such models can be significantly improved by incorporating both local and global dynamics of the infection. To demonstrate the improved accuracy, we extended a standard Susceptible-Infected-Recovered (SIR) model by incorporating the global dynamics of the COVID-19 pandemic. The extended SIR model assumes three possibilities for susceptible individuals traveling outside of their community: they can return to the community without any exposure to the infection, they can be exposed and develop symptoms after returning to the community, or they can test positive during the trip and remain quarantined until fully recovered. To examine the predictive accuracy of the extended SIR model, we studied the prevalence of COVID-19 infection in Kansas City, Missouri as influenced by the COVID-19 global pandemic. Using a two-step model-fitting algorithm, the extended SIR model was parameterized using the Kansas City, Missouri COVID-19 data from March to October 2020. The extended SIR model significantly outperformed the standard SIR model and revealed oscillatory behaviors with an increasing trend of infected individuals. In conclusion, the analytical and predictive accuracy of disease models can be significantly improved by incorporating the global dynamics of the infection in the models.
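
A minimal sketch of coupling a local SIR model to global/travel dynamics is given below. The three traveler outcomes described in the abstract are collapsed into an imported-infection term and a quarantine compartment driven by a global prevalence curve, so this illustrates the idea rather than the authors' exact compartmental structure or fitted parameters; all rates are made up.

    import numpy as np
    from scipy.integrate import solve_ivp

    def extended_sir(t, y, beta, gamma, travel_rate, p_exposed, p_quarantined,
                     global_prevalence):
        """SIR extended with travel outside the community.

        Susceptibles travel at rate travel_rate; with risk proportional to the
        global prevalence, a fraction p_exposed returns infectious (seeding
        local spread), a fraction p_quarantined is detected during the trip and
        isolated until recovery, and the rest effectively return susceptible.
        """
        S, I, Q, R = y
        g = global_prevalence(t)
        imported = travel_rate * S * g * p_exposed        # return infectious
        quarantined = travel_rate * S * g * p_quarantined  # detected abroad, isolated
        dS = -beta * S * I - imported - quarantined
        dI = beta * S * I + imported - gamma * I
        dQ = quarantined - gamma * Q
        dR = gamma * (I + Q)
        return [dS, dI, dQ, dR]

    # illustrative parameters and a rising-then-falling global prevalence curve
    prevalence = lambda t: 0.02 * np.exp(-((t - 120) / 60) ** 2)
    sol = solve_ivp(extended_sir, (0, 240), [0.999, 0.001, 0.0, 0.0],
                    args=(0.25, 0.1, 0.05, 0.6, 0.4, prevalence),
                    dense_output=True)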

The rise of social networks as the primary means of communication in almost every country in the world has simultaneously triggered an increase in the amount of fake news circulating online. This fact became particularly evident during the 2016 U.S. political elections and even more so with the advent of the COVID-19 pandemic. Several research studies have shown how the effects of fake news dissemination can be mitigated by promoting greater competence through lifelong learning and discussion communities, and more generally through rigorous training in the scientific method and a broad interdisciplinary education. The urgent need for models that can describe the growing infodemic of fake news has been highlighted by the current pandemic. The resulting slowdown in vaccination campaigns due to misinformation, and more generally the inability of individuals to discern the reliability of information, is posing enormous risks to the governments of many countries. In this research, using the tools of kinetic theory, we describe the interaction between fake news spreading and the competence of individuals through multi-population models in which fake news spreads analogously to an infectious disease, with different impact depending on the level of competence of individuals. The level of competence, in particular, is subject to an evolutionary dynamic due to both social interactions between agents and external learning dynamics. The results show that the model is able to correctly describe the dynamics of diffusion of fake news and the important role of competence in its containment.
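
To make the multi-population idea concrete, the sketch below runs a crude ODE caricature: two competence classes, each with an SIR-like fake-news dynamic whose sharing rate depends on competence, plus a slow learning flow from the low- to the high-competence class. This is not the authors' kinetic (Boltzmann-type) model, and the rates and initial conditions are illustrative.

    import numpy as np
    from scipy.integrate import solve_ivp

    def fake_news_competence(t, y, beta_low, beta_high, gamma, learn_rate):
        """Two competence classes, each with an SIR-like fake-news dynamic.

        The sharing (infection) rate depends on the competence of the receiving
        agent; learning slowly moves susceptible agents from the low- to the
        high-competence class. States: S/I/R for each competence class.
        """
        S_lo, I_lo, R_lo, S_hi, I_hi, R_hi = y
        I_tot = I_lo + I_hi                       # any spreader can reach anyone
        dS_lo = -beta_low * S_lo * I_tot - learn_rate * S_lo
        dI_lo = beta_low * S_lo * I_tot - gamma * I_lo
        dR_lo = gamma * I_lo
        dS_hi = -beta_high * S_hi * I_tot + learn_rate * S_lo
        dI_hi = beta_high * S_hi * I_tot - gamma * I_hi
        dR_hi = gamma * I_hi
        return [dS_lo, dI_lo, dR_lo, dS_hi, dI_hi, dR_hi]

    sol = solve_ivp(fake_news_competence, (0, 100),
                    [0.69, 0.01, 0.0, 0.30, 0.0, 0.0],
                    args=(0.5, 0.1, 0.2, 0.01), dense_output=True)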
