
Optimizing the allocation of units into treatment groups can help researchers improve the precision of causal estimators and decrease costs when running factorial experiments. However, existing optimal allocation results typically assume a super-population model and that the outcome data come from a known family of distributions. Instead, we focus on randomization-based causal inference in the finite-population setting, which requires neither model specifications for the data nor sampling assumptions. We propose exact theoretical solutions for optimal allocation in $2^K$ factorial experiments under complete randomization with A-, D- and E-optimality criteria. We then extend this work to factorial designs with block randomization. We also derive results for optimal allocations under cost-based constraints. To connect our theory to practice, we provide convenient integer-constrained programming solutions, based on a greedy optimization approach, that find integer optimal allocations for both complete and block randomization. The proposed methods are demonstrated using two real-life factorial experiments conducted by social scientists.
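To illustrate the flavor of a greedy integer allocation (a minimal sketch, not the paper's algorithm): under an A-type criterion the objective is a sum of per-group terms $s_g^2/n_g$, and a greedy routine assigns one unit at a time to the group with the largest marginal reduction. The group variances below are illustrative.

```python
def greedy_allocation(s2, total):
    """Greedily allocate `total` units across groups to minimize
    sum(s2[g] / n[g]), a proxy for an A-type (trace) criterion.
    Starts with one unit per group so every term is finite."""
    n = [1] * len(s2)
    for _ in range(total - len(s2)):
        # marginal reduction in the objective from adding one unit to group g
        gains = [s2[g] / n[g] - s2[g] / (n[g] + 1) for g in range(len(s2))]
        n[max(range(len(s2)), key=gains.__getitem__)] += 1
    return n

# a 2^2 factorial with one high-variance cell: the result matches a
# Neyman-style allocation proportional to group standard deviations
print(greedy_allocation([4.0, 1.0, 1.0, 1.0], 10))  # [4, 2, 2, 2]
```

The exchange step is cheap because only one term of the objective changes per assignment, which is what makes greedy search practical for integer-constrained designs.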

Related content

A causal decomposition analysis allows researchers to determine whether the difference in a health outcome between two groups can be attributed to a difference in each group's distribution of one or more modifiable mediator variables. With this knowledge, researchers and policymakers can focus on designing interventions that target these mediator variables. Existing methods for causal decomposition analysis either focus on one mediator variable or assume that each mediator variable is conditionally independent given the group label and the mediator-outcome confounders. In this paper, we propose a flexible causal decomposition analysis method that can accommodate multiple correlated and interacting mediator variables, which are frequently seen in studies of health behaviors and studies of environmental pollutants. We extend a Monte Carlo-based causal decomposition analysis method to this setting by using a multivariate mediator model that can accommodate any combination of binary and continuous mediator variables. Furthermore, we state the causal assumptions needed to identify both joint and path-specific decomposition effects through each mediator variable. To illustrate the reduction in bias and confidence interval width of the decomposition effects under our proposed method, we perform a simulation study. We also apply our approach to examine whether differences in smoking status and dietary inflammation score explain any of the Black-White differences in incident diabetes using data from a national cohort study.

Formation control of multi-agent systems has been a prominent research topic, spanning both theoretical and practical domains, over the past two decades. Our study considers the leader-follower framework and addresses two critical, previously overlooked aspects. First, we investigate the impact of an unknown nonlinear manifold, which adds complexity to the formation control problem. Second, we address the practical constraint of a limited follower sensing range, which makes it difficult for followers to localize the leader accurately. Our core objective is to employ Koopman operator theory and Extended Dynamic Mode Decomposition (EDMD) to craft a reliable prediction algorithm with which a follower robot can anticipate the leader's position. Experiments on an elliptical paraboloid manifold, using two omni-directional wheeled robots, validate the prediction algorithm's effectiveness.
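The EDMD step can be sketched in a toy setting (a minimal sketch under strong assumptions: scalar state, dictionary psi(x) = [1, x], data from a known affine map; the paper's manifold setting is far richer). EDMD fits a linear operator on lifted observations by least squares and then predicts by applying it to the current lifted state.

```python
def edmd_fit(xs):
    """One-step EDMD on a scalar trajectory with dictionary psi(x) = [1, x].
    Returns K^T such that psi(x_{k+1}) ~= psi(x_k) @ K^T (least squares)."""
    PX = [[1.0, x] for x in xs[:-1]]
    PY = [[1.0, x] for x in xs[1:]]
    def gram(A, B):  # 2x2 product A^T B for n-by-2 row matrices
        return [[sum(a[i] * b[j] for a, b in zip(A, B)) for j in range(2)]
                for i in range(2)]
    G, C = gram(PX, PX), gram(PX, PY)
    det = G[0][0] * G[1][1] - G[0][1] * G[1][0]
    Ginv = [[G[1][1] / det, -G[0][1] / det],
            [-G[1][0] / det, G[0][0] / det]]
    # K^T = (PX^T PX)^{-1} (PX^T PY), the normal-equations solution
    return [[sum(Ginv[i][k] * C[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def edmd_step(KT, x):
    """Predict the next state: the x-component of psi(x) @ K^T."""
    return 1.0 * KT[0][1] + x * KT[1][1]

# trajectory of the affine map x -> 0.5*x + 1 (fixed point at 2)
xs = [0.0]
for _ in range(4):
    xs.append(0.5 * xs[-1] + 1.0)
KT = edmd_fit(xs)
print(edmd_step(KT, 0.0), edmd_step(KT, 2.0))  # ~1.0, ~2.0
```

Because the toy dynamics lie exactly in the span of the dictionary, the fit is exact; in the manifold setting the choice of dictionary functions governs the approximation quality.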

This work studies how the choice of representation for parametric, spatially distributed inputs to elliptic partial differential equations (PDEs) affects the efficiency of a polynomial surrogate, based on Taylor expansion, for the parameter-to-solution map. In particular, we show potential advantages of representations using functions with localized supports. As a model problem, we consider the steady-state diffusion equation, where the diffusion coefficient and right-hand side depend smoothly, but potentially in a highly nonlinear way, on a parameter $y\in [-1,1]^{\mathbb{N}}$. Following previous work for affine parameter dependence and for the lognormal case, we use pointwise instead of norm-wise bounds to prove $\ell^p$-summability of the Taylor coefficients of the solution. As an application, we consider surrogates for solutions to elliptic PDEs on parametric domains. Using a mapping to a nominal configuration, this case fits into the general framework, and higher convergence rates can be attained when modeling the parametric boundary via spatially localized functions. The theoretical results are supported by numerical experiments for the parametric domain problem, illustrating the efficiency of the proposed approach and providing further insight into numerical aspects. Although the methods and ideas are carried out for the steady-state diffusion equation, they extend easily to other elliptic and parabolic PDEs.
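For orientation, the Taylor surrogate in question truncates the power series of the parameter-to-solution map $y \mapsto u(y)$ (the notation here follows the standard construction rather than the paper's exact indexing):

```latex
u(y) \;\approx\; \sum_{\nu \in \Lambda} t_\nu \, y^\nu,
\qquad t_\nu = \frac{1}{\nu!}\,\partial_y^\nu u(0),
\qquad y^\nu = \prod_{j \ge 1} y_j^{\nu_j},
```

with $\Lambda$ a finite, downward-closed set of multi-indices. By a Stechkin-type argument, $\ell^p$-summability of the coefficient norms $(\|t_\nu\|)_\nu$ with $p<1$ yields an algebraic best $n$-term convergence rate $n^{1-1/p}$, which is why sharper summability proofs translate directly into more efficient surrogates.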

Recently established, directed dependence measures for pairs $(X,Y)$ of random variables build upon the natural idea of comparing the conditional distributions of $Y$ given $X=x$ with the marginal distribution of $Y$. They assign pairs $(X,Y)$ values in $[0,1]$; the value is $0$ if and only if $X$ and $Y$ are independent, and it is $1$ exclusively when $Y$ is a function of $X$. Here we show that comparing randomly drawn conditional distributions with each other instead (equivalently, analyzing how sensitive the conditional distribution of $Y$ given $X=x$ is to $x$) opens the door to constructing novel families of dependence measures $\Lambda_\varphi$ induced by general convex functions $\varphi: \mathbb{R} \rightarrow \mathbb{R}$, containing, e.g., Chatterjee's coefficient of correlation as a special case. After establishing additional useful properties of $\Lambda_\varphi$, we focus on continuous $(X,Y)$, translate $\Lambda_\varphi$ to the copula setting, consider the $L^p$-version, and establish an estimator that is strongly consistent in full generality. A real-data example and a simulation study illustrate the chosen approach and the performance of the estimator. Complementing the aforementioned results, we show how a slight modification of the construction underlying $\Lambda_\varphi$ can be used to define new measures of explainability generalizing the fraction of explained variance.
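For concreteness, Chatterjee's coefficient, the special case mentioned above, has a simple rank-based estimator (the sketch below uses the no-ties formula; the $\Lambda_\varphi$ estimators of the paper are more general):

```python
def chatterjee_xi(xs, ys):
    """Chatterjee's rank correlation coefficient (no-ties formula):
    near 0 under independence, near 1 when y is a noiseless function of x."""
    n = len(xs)
    order = sorted(range(n), key=xs.__getitem__)
    y_by_x = [ys[i] for i in order]              # y values sorted by x
    rank = {v: r for r, v in enumerate(sorted(y_by_x), start=1)}
    r = [rank[v] for v in y_by_x]                # ranks of y, in x-order
    return 1.0 - 3.0 * sum(abs(r[i + 1] - r[i])
                           for i in range(n - 1)) / (n * n - 1)

# y a noiseless function of x: the coefficient approaches 1 as n grows
xs = list(range(1, 101))
print(chatterjee_xi(xs, [x * x for x in xs]))  # 1 - 3*99/9999, about 0.97
```

Note the asymmetry of the construction: the coefficient measures whether $Y$ is a function of $X$, not the reverse, matching the "directed" character of the measures discussed above.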

We address the computational efficiency of solving A-optimal Bayesian design of experiments problems in which the observational map is based on partial differential equations and is consequently computationally expensive to evaluate. A-optimality is a widely used and easy-to-interpret criterion for Bayesian experimental design. This criterion seeks the optimal experimental design by minimizing the expected conditional variance, also known as the expected posterior variance. This study presents a novel likelihood-free approach to A-optimal experimental design that does not require sampling or integrating the Bayesian posterior distribution. The expected conditional variance is obtained via the variance of the conditional expectation using the law of total variance, and we exploit the orthogonal projection property to approximate the conditional expectation. We derive an asymptotic error estimate for the proposed estimator of the expected conditional variance and show that the intractability of the posterior distribution does not affect the performance of our approach. We use an artificial neural network (ANN) to approximate the nonlinear conditional expectation in the implementation of our method. We then extend our approach to the case where the domain of the experimental design parameters is continuous by integrating the training of the ANN into the minimization of the expected conditional variance. Through numerical experiments, we demonstrate that our method greatly reduces the number of observation model evaluations compared with widely used importance-sampling-based approaches. This reduction is crucial given the high computational cost of the observational models. Code is available at //github.com/vinh-tr-hoang/DOEviaPACE.
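The identity at the heart of the approach is the law of total variance, $\mathrm{Var}(\theta) = \mathbb{E}[\mathrm{Var}(\theta\mid y)] + \mathrm{Var}(\mathbb{E}[\theta\mid y])$: the expected conditional variance can be obtained by estimating the variance of the conditional expectation and subtracting. The sketch below is a deliberately simplified toy, with a least-squares line standing in for the paper's ANN as the orthogonal projection; in the linear-Gaussian model used, the exact value is 0.5.

```python
import random

def expected_conditional_variance(thetas, ys):
    """Estimate E[Var(theta | y)] = Var(theta) - Var(E[theta | y]) via the
    law of total variance, approximating E[theta | y] by its orthogonal
    projection onto affine functions of y (a least-squares line)."""
    n = len(ys)
    mt, my = sum(thetas) / n, sum(ys) / n
    vt = sum((t - mt) ** 2 for t in thetas) / n
    vy = sum((y - my) ** 2 for y in ys) / n
    cov = sum((t - mt) * (y - my) for t, y in zip(thetas, ys)) / n
    var_cond_exp = cov * cov / vy  # variance of the fitted values
    return vt - var_cond_exp

# linear-Gaussian toy: theta ~ N(0,1), y = theta + N(0,1); exact value is 0.5
random.seed(0)
thetas = [random.gauss(0.0, 1.0) for _ in range(20000)]
ys = [t + random.gauss(0.0, 1.0) for t in thetas]
print(expected_conditional_variance(thetas, ys))  # close to 0.5
```

The point of the construction is that no posterior samples appear anywhere: only joint draws of $(\theta, y)$ and a regression of $\theta$ on $y$ are required, which is what makes the approach likelihood-free.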

While there is wide agreement that physical activity is an important component of a healthy lifestyle, it is unclear how many people adhere to public health recommendations on physical activity. The Physical Activity Guidelines (PAG), published by the CDC, provide guidelines to American adults, but it is difficult to assess compliance with these guidelines. The PAG further complicate adherence assessment by recommending that activity occur in bouts of at least 10 minutes. To better understand the measurement capabilities of various instruments for quantifying activity, and to propose an approach to evaluating activity relative to the PAG, researchers at Iowa State University administered the Physical Activity Measurement Survey (PAMS) to over 1,000 participants in four Iowa counties. In this paper, we develop a two-part Bayesian measurement error model and apply it to the PAMS data to assess compliance with the PAG in the Iowa adult population. The model accurately accounts for the 10-minute bout requirement put forth in the PAG, corrects biased estimates, and accounts for day-to-day variation in activity. The model is also applied to the nationally representative National Health and Nutrition Examination Survey.

The application of deep learning to non-stationary temporal datasets can lead to overfitted models that underperform under regime changes. In this work, we propose a modular machine learning pipeline for ranking predictions on temporal panel datasets which is robust under regime changes. The modularity of the pipeline allows the use of different models, including Gradient Boosting Decision Trees (GBDTs) and Neural Networks, with and without feature engineering. We evaluate our framework on financial data for stock portfolio prediction, and find that GBDT models with dropout display high performance, robustness and generalisability with reduced complexity and computational cost. We then demonstrate how online learning techniques, which require no retraining of models, can be used post-prediction to enhance the results. First, we show that dynamic feature projection improves robustness by reducing drawdown in regime changes. Second, we demonstrate that dynamical model ensembling based on selection of models with good recent performance leads to improved Sharpe and Calmar ratios of out-of-sample predictions. We also evaluate the robustness of our pipeline across different data splits and random seeds with good reproducibility.
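As a toy illustration of the dynamic ensembling step (the scoring rule and data below are hypothetical; the paper evaluates models with Sharpe- and Calmar-based metrics): at each time step, rank the candidate models by recent performance over a trailing window and follow the current leader.

```python
def dynamic_select(preds, realized, window):
    """For each t >= window, pick the model whose predictions scored best
    (mean of prediction * realized return) over the previous `window` steps,
    and emit its prediction at t. Ties break alphabetically."""
    names = sorted(preds)
    picks = []
    for t in range(window, len(realized)):
        def score(m):
            return sum(preds[m][s] * realized[s]
                       for s in range(t - window, t)) / window
        best = max(names, key=score)
        picks.append((t, best, preds[best][t]))
    return picks

# regime change at t = 4: model A is right early, model B late
realized = [1, 1, 1, 1, -1, -1, -1, -1]
preds = {"A": [1] * 8, "B": [-1] * 8}
print([m for _, m, _ in dynamic_select(preds, realized, 2)])
# ['A', 'A', 'A', 'A', 'B', 'B']
```

Because selection uses only already-realized returns, the scheme is applied post-prediction and requires no retraining, mirroring the online-learning setting described above.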

We present a robust deep incremental learning framework for regression tasks on financial temporal tabular datasets, built upon the incremental use of commonly available tabular and time series prediction models to adapt to the distributional shifts typical of financial data. The framework uses a simple basic building block (decision trees) to build self-similar models of any required complexity, delivering robust performance under adverse conditions such as regime changes, fat-tailed distributions, and low signal-to-noise ratios. As a detailed study, we demonstrate our scheme using XGBoost models trained on the Numerai dataset and show that a two-layer deep ensemble of XGBoost models over different model snapshots delivers high-quality predictions under different market regimes. We further show that the performance of XGBoost models with different numbers of boosting rounds in three scenarios (small, standard and large) increases monotonically with model size and converges towards the generalisation upper bound. We also evaluate the robustness of the model under variation of hyperparameters such as model complexity and data sampling settings. Our model has low hardware requirements, as no specialised neural architectures are used and each base model can be trained independently in parallel.
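Structurally, the two-layer ensemble amounts to averaging within and then across groups of model snapshots. In the shape sketch below, plain callables stand in for trained XGBoost snapshots; this illustrates only the aggregation, not the paper's training scheme.

```python
def two_layer_ensemble(layers, x):
    """Average each inner list of snapshot models, then average those
    averages: a two-layer deep ensemble over model snapshots."""
    inner = [sum(m(x) for m in snaps) / len(snaps) for snaps in layers]
    return sum(inner) / len(inner)

# two base models, the first saved at two boosting-round snapshots
layers = [[lambda x: x, lambda x: x + 2.0], [lambda x: 2.0 * x]]
print(two_layer_ensemble(layers, 1.0))  # 2.0
```

The self-similar structure is visible here: the outer layer treats each inner average exactly as the inner layer treats a single snapshot, so deeper stacks can be built by nesting the same operation.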

The ability of robots to autonomously navigate through 3D environments depends on their comprehension of spatial concepts, ranging from low-level geometry to high-level semantics such as objects, places, and buildings. To enable such comprehension, 3D scene graphs have emerged as a robust tool for representing the environment as a layered graph of concepts and their relationships. However, building these representations using monocular vision systems in real time remains a difficult task that has not been explored in depth. This paper puts forth Mono-Hydra, a real-time spatial perception system combining a monocular camera and an IMU sensor setup, focusing on indoor scenarios; the proposed approach nevertheless adapts to outdoor applications, offering flexibility in its potential uses. The system employs a suite of deep learning algorithms to derive depth and semantics, and uses a robocentric visual-inertial odometry (VIO) algorithm based on square-root information, thereby ensuring consistent visual odometry with an IMU and a monocular camera. The system achieves sub-20 cm error in real-time processing at 15 fps, enabling real-time 3D scene graph construction on a laptop GPU (NVIDIA 3080). This enhances decision-making efficiency and effectiveness with simple camera setups, augmenting robotic system agility. We make Mono-Hydra publicly available at: //github.com/UAV-Centre-ITC/Mono_Hydra

Graph representation learning for hypergraphs can be used to extract patterns among higher-order interactions that are critically important in many real world problems. Current approaches designed for hypergraphs, however, are unable to handle different types of hypergraphs and are typically not generic for various learning tasks. Indeed, models that can predict variable-sized heterogeneous hyperedges have not been available. Here we develop a new self-attention-based graph neural network called Hyper-SAGNN, applicable to homogeneous and heterogeneous hypergraphs with variable hyperedge sizes. We perform extensive evaluations on multiple datasets, including four benchmark network datasets and two single-cell Hi-C datasets in genomics. We demonstrate that Hyper-SAGNN significantly outperforms the state-of-the-art methods on traditional tasks while also achieving great performance on a new task called outsider identification. Hyper-SAGNN will be useful for graph representation learning to uncover complex higher-order interactions in different applications.
