Platform trials are randomized clinical trials that allow simultaneous comparison of multiple interventions, usually against a common control. Arms testing experimental interventions may enter and leave the platform over time, so the number of experimental arms in the trial may change over time. Determining the rates at which patients are allocated to the treatment and control arms in platform trials is challenging, because the optimal allocation rates change whenever treatments enter or leave the platform. In addition, the optimal allocation depends on the analysis strategy used. In this paper, we derive optimal treatment allocation rates for platform trials with shared controls, assuming that a stratified estimation and testing procedure based on a regression model is used to adjust for time trends. We consider both analyses using concurrent controls only and analyses that also incorporate non-concurrent controls, and we assume that the total sample size is fixed. The objective function to be minimized is the maximum of the variances of the effect estimators. We show that the optimal solution depends on the entry times of the arms into the trial and, in general, does not correspond to the square-root-of-$k$ allocation rule used in classical multi-arm trials. By means of a case study, we illustrate the optimal allocation and evaluate the power and type 1 error rate compared to trials using one-to-one and square-root-of-$k$ allocations.
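As a rough illustration of the objective described above, the following sketch computes the minimax-variance allocation for a static multi-arm trial with one shared control, using the simple two-sample variance formula $\sigma^2(1/n_k + 1/n_0)$ and a continuous relaxation of the sample sizes; the constants `K`, `N`, and `sigma2` are illustrative placeholders, and the paper's stratified, time-adjusted estimator is not modeled. With all arms present from the start, the optimizer recovers the classical square-root-of-$k$ rule, which is precisely the behaviour that staggered arm entry breaks.

```python
import numpy as np
from scipy.optimize import minimize

# Minimax-variance allocation for K experimental arms sharing one control,
# under the simple two-sample variance sigma^2 (1/n_k + 1/n_0) and a
# continuous relaxation of the sample sizes. Illustrative constants only.
K, N, sigma2 = 3, 400, 1.0

# Epigraph form: minimize t subject to Var_k <= t for all k, sum(n) = N.
def objective(x):          # x = [t, n_0, n_1, ..., n_K]
    return x[0]

cons = [{"type": "eq", "fun": lambda x: x[1:].sum() - N}]
for k in range(1, K + 1):
    cons.append({"type": "ineq",
                 "fun": lambda x, k=k: x[0] - sigma2 * (1 / x[k + 1] + 1 / x[1])})

x0 = np.r_[1.0, np.full(K + 1, N / (K + 1))]
res = minimize(objective, x0, constraints=cons,
               bounds=[(None, None)] + [(1, N)] * (K + 1))
print(f"optimal control share: {res.x[1] / N:.3f}")
print(f"sqrt(K) rule share:    {np.sqrt(K) / (K + np.sqrt(K)):.3f}")
```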

Related Content

Goal-oriented error estimation provides the ability to approximate the discretization error in a chosen functional quantity of interest. Adaptive mesh methods provide the ability to control this discretization error and thereby obtain accurate quantity-of-interest approximations while remaining computationally feasible. Traditional discrete goal-oriented error estimates incur linearization errors in their derivation. In this paper, we investigate the role of linearization errors in adaptive goal-oriented simulations. In particular, we develop a novel two-level goal-oriented error estimate that is free of linearization errors. Additionally, we highlight how linearization errors can facilitate the verification of the adjoint solution used in goal-oriented error estimation. We then verify the newly proposed error estimate by applying it to a model nonlinear problem for several quantities of interest, and we further highlight its asymptotic effectiveness as mesh sizes are reduced. In an adaptive mesh context, we then compare the newly proposed estimate to a more traditional two-level goal-oriented error estimate. We highlight that accounting for linearization errors in the error estimate can improve its effectiveness in certain situations, and we demonstrate that localizing linearization errors can lead to better-adapted meshes.
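To make the adjoint-weighted-residual mechanics concrete, here is a hedged sketch on a toy nonlinear algebraic system (not the paper's model problem and not its two-level construction): the adjoint is linearized about the coarse solution, which is exactly where traditional estimates pick up linearization error.

```python
import numpy as np

# Adjoint-weighted residual estimate for a tiny nonlinear system F(u) = 0
# with quantity of interest J(u) = sum(u). Toy problem, illustrative only.

def F(u):          # nonlinear residual
    return u**3 + u - 1.0

def dF(u):         # Jacobian (diagonal for this toy problem)
    return np.diag(3 * u**2 + 1)

def J(u):          # quantity of interest
    return u.sum()

# "Coarse" solution: a deliberately under-converged Newton solve.
u = np.full(4, 0.5)
for _ in range(2):
    u -= np.linalg.solve(dF(u), F(u))
u_exact = np.full(4, 0.6823278038280193)  # root of x^3 + x - 1 = 0

# Adjoint solve linearized about the coarse solution; this linearization
# is the source of the linearization error discussed above. dJ/du = 1.
psi = np.linalg.solve(dF(u).T, np.ones_like(u))

eta = -psi @ F(u)  # adjoint-weighted residual error estimate
print("estimated QoI error:", eta)
print("true QoI error     :", J(u_exact) - J(u))
```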

We study the problem of multi-agent coordination in unpredictable and partially observable environments, that is, environments whose future evolution is unknown a priori and that can only be partially observed. We are motivated by the future of autonomy that involves multiple robots coordinating actions in dynamic, unstructured, and partially observable environments to complete complex tasks such as target tracking, environmental mapping, and area monitoring. Such tasks are often modeled as submodular maximization coordination problems due to the information overlap among the robots. We introduce the first submodular coordination algorithm with bandit feedback and bounded tracking regret -- bandit feedback being the robots' ability to compute in hindsight only the effect of their chosen actions, rather than of all the alternative actions they could have chosen, due to the partial observability; and tracking regret being the algorithm's suboptimality with respect to the optimal time-varying actions that fully know the future a priori. The bound gracefully degrades with the environments' capacity to change adversarially, quantifying how often the robots should re-select actions to learn to coordinate as if they fully knew the future a priori. The algorithm generalizes the seminal Sequential Greedy algorithm of Fisher et al. to the bandit setting by leveraging submodularity and algorithms for the problem of tracking the best action. We validate our algorithm in simulated scenarios of multi-target tracking.
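For context, the following sketch implements the full-information Sequential Greedy baseline of Fisher et al. on a toy multi-robot coverage task; the bandit variant described above would replace the exact marginal-gain evaluations with estimates built from bandit feedback. The target set, candidate actions, and coverage radius are all illustrative.

```python
import numpy as np

# Sequential Greedy on a toy coverage task: robots pick actions one after
# another, each maximizing its marginal gain given earlier choices.
rng = np.random.default_rng(0)
targets = rng.random((20, 2))                     # targets to cover
actions = [rng.random((5, 2)) for _ in range(3)]  # candidate positions per robot

def coverage(positions):
    """Submodular objective: number of targets within radius of any robot."""
    if not positions:
        return 0
    d = np.linalg.norm(targets[:, None] - np.array(positions)[None], axis=-1)
    return int((d.min(axis=1) < 0.3).sum())

chosen = []
for robot_actions in actions:  # robots decide sequentially
    gains = [coverage(chosen + [a]) - coverage(chosen) for a in robot_actions]
    chosen.append(robot_actions[int(np.argmax(gains))])

print("covered targets:", coverage(chosen))
```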

Graph Neural Networks (GNNs) have recently emerged as a promising approach to tackling power allocation problems in wireless networks. Since unpaired transmitters and receivers are often spatially distant, a distance-based threshold has been proposed to reduce computation time by excluding distant channel state information from the GNN. In this paper, we are the first to introduce a neighbour-based threshold approach to GNNs to reduce the time complexity. Furthermore, we conduct a comprehensive analysis of both distance-based and neighbour-based thresholds and provide recommendations for selecting the appropriate threshold value in different communication channel scenarios. We design the corresponding distance-based and neighbour-based Graph Neural Networks with the aim of allocating transmit powers to maximise the network throughput. Our results show that the proposed GNNs offer significant advantages in reducing time complexity while preserving strong performance. Moreover, we show that by choosing a suitable threshold, the time complexity is reduced from $O(|V|^2)$ to $O(|V|)$, where $|V|$ is the total number of transceiver pairs.
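A minimal sketch of the two sparsification rules contrasted above, assuming random planar positions and placeholder values for the radius and for K: a distance-based threshold keeps all edges shorter than a fixed radius, while the neighbour-based threshold keeps each node's K nearest neighbours, so every node has O(K) incident edges and message passing costs O(K|V|) rather than O(|V|^2).

```python
import numpy as np

# Build the adjacency under each thresholding rule. Positions, the radius,
# and K are illustrative placeholders.
rng = np.random.default_rng(1)
pos = rng.random((8, 2))                        # transceiver-pair locations
dist = np.linalg.norm(pos[:, None] - pos[None], axis=-1)

# Distance-based threshold: keep edges shorter than a fixed radius.
A_dist = (dist < 0.4) & ~np.eye(len(pos), dtype=bool)

# Neighbour-based threshold: each node keeps its K nearest neighbours
# (a directed edge set with exactly K outgoing edges per node).
K = 3
nearest = np.argsort(dist, axis=1)[:, 1:K + 1]  # skip self at column 0
A_nb = np.zeros_like(A_dist)
np.put_along_axis(A_nb, nearest, True, axis=1)

print("edges (distance):", A_dist.sum(), "| edges (neighbour):", A_nb.sum())
```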

This paper studies an integrated sensing and communication (ISAC) system for single-target detection in a cloud radio access network architecture. The system combines downlink communication with a multi-static sensing approach, where ISAC transmit access points (APs) jointly serve the user equipments (UEs) and optionally steer a beam toward the target. A centralized operation of cell-free massive MIMO (multiple-input multiple-output) is considered for both communication and sensing purposes. A maximum a posteriori ratio test detector is developed to detect the target in the presence of clutter, i.e., target-free signals. Moreover, a power allocation algorithm is proposed to maximize the sensing signal-to-interference-plus-noise ratio (SINR) while ensuring a minimum communication SINR for each UE and meeting per-AP power constraints. Two ISAC setups are studied: i) using only existing communication beams for sensing and ii) using additional sensing beams. The proposed algorithm's efficiency is investigated in both realistic and idealistic scenarios, corresponding to the presence and absence of the target-free channels, respectively. Although the detection probability degrades in the presence of target-free channels, which act as interference, the proposed algorithm significantly outperforms an interference-unaware benchmark by exploiting the statistics of the clutter. We also show that the proposed algorithm outperforms a fully communication-centric algorithm, both in the presence and absence of clutter. Moreover, using an additional sensing beam improves the detection performance for targets with lower radar cross-section variances compared to the case without sensing beams.
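The gain from exploiting clutter statistics can be illustrated with a deliberately simplified scalar energy detector (not the paper's multi-static MAP ratio test): calibrating the threshold against the noise power alone, as an interference-unaware design would, inflates the realized false-alarm rate far beyond its nominal value. All variances below are illustrative.

```python
import numpy as np
from scipy.stats import chi2

# Scalar energy detector for a zero-mean Gaussian target return in Gaussian
# clutter ("target-free" signal) plus noise. Illustrative variances.
rng = np.random.default_rng(2)
trials, sig2_t, sig2_c, sig2_n = 200_000, 1.0, 0.5, 0.1

y0 = rng.normal(0, np.sqrt(sig2_c + sig2_n), trials)           # H0: no target
y1 = rng.normal(0, np.sqrt(sig2_t + sig2_c + sig2_n), trials)  # H1: target

# Calibrate thresholds for a nominal false-alarm rate of 1%. The
# clutter-aware design uses the true H0 variance; the unaware design
# calibrates against the noise power only.
pfa_nom = 0.01
tau_aware = (sig2_c + sig2_n) * chi2.ppf(1 - pfa_nom, df=1)
tau_unaware = sig2_n * chi2.ppf(1 - pfa_nom, df=1)

for name, tau in [("clutter-aware", tau_aware), ("unaware", tau_unaware)]:
    print(f"{name:14s} realized P_FA={np.mean(y0**2 > tau):.3f} "
          f"P_D={np.mean(y1**2 > tau):.3f}")
```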

Offloading computation to nearby edge/fog computing nodes, including those carried by moving vehicles, e.g., vehicular fog nodes (VFNs), has proved to be a promising approach for enabling low-latency and compute-intensive mobility applications, such as cooperative and autonomous driving. This work considers vehicular fog computing scenarios in which the clients of computation offloading services try to minimize their own costs while deciding to which VFNs to offload their tasks. We focus on decentralized multi-agent decision-making in a repeated unknown game where each agent, e.g., each service client, can observe only its own action and realized cost. In other words, each agent is unaware of the game composition or even of the existence of opponents. We apply a completely uncoupled learning rule to generalize the decentralized decision-making algorithm presented in \cite{Cho2021} to the multi-agent case. The proposed multi-agent solution captures the unknown variations in offloading cost caused by resource congestion within an adversarial framework, in which each agent implicitly estimates costs and adapts its resource choices to the dynamics of volatile supply and demand. Simulation results show that such individual perturbations, introduced for robustness to uncertainty and adaptation to dynamics, ensure a certain level of optimality in terms of social welfare: the actual sequence of play converges despite unknown and asymmetric attributes, and the social-welfare loss due to the agents' self-interested behavior is reduced.
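As a generic stand-in for a completely uncoupled learning rule, the sketch below has each client run an independent EXP3-style update over the VFNs, observing only its own realized congestion cost (bandit feedback) and nothing about the other agents; this conveys the flavour of the approach but is not the specific rule of \cite{Cho2021} or the paper's multi-agent generalization of it.

```python
import numpy as np

# Each agent keeps private weights over VFNs and updates only the arm it
# actually played, via an importance-weighted loss. Constants illustrative.
rng = np.random.default_rng(3)
n_agents, n_vfns, T, eta, gamma = 4, 2, 3000, 0.05, 0.05
weights = np.ones((n_agents, n_vfns))

for _ in range(T):
    probs = (1 - gamma) * weights / weights.sum(axis=1, keepdims=True) \
            + gamma / n_vfns                      # mix in uniform exploration
    picks = np.array([rng.choice(n_vfns, p=p) for p in probs])
    load = np.bincount(picks, minlength=n_vfns)
    cost = load[picks] / n_agents                 # congestion cost in [0, 1]
    for i in range(n_agents):
        loss_hat = cost[i] / probs[i, picks[i]]   # importance-weighted loss
        weights[i, picks[i]] *= np.exp(-eta * loss_hat)
    # Rescale for numerical stability (does not change the policy).
    weights /= weights.max(axis=1, keepdims=True)

print(np.round(weights / weights.sum(axis=1, keepdims=True), 2))
```

With congestion costs, the agents spread themselves across the VFNs over time even though no agent ever observes another's choice.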

We consider a multi-agent delegation mechanism without money. In our model, given a set of agents, each agent has a fixed number of solutions, exogenous to the mechanism, and privately sends a signal, e.g., a subset of solutions, to the principal. The principal then selects a final solution based on the agents' signals. In stark contrast to the single-agent setting of Kleinberg and Kleinberg (EC'18) with an approximate Bayesian mechanism, we show that there exist efficient approximate prior-independent mechanisms with both information and performance gains, thanks to the competitive tension between the agents. Interestingly, however, the strength of this competitive effect varies significantly with the information available to the agents and with the degree of correlation between the principal's and the agents' utilities. Technically, we conduct a comprehensive study of the multi-agent delegation problem and derive several results on the approximation factors of Bayesian/prior-independent mechanisms in complete/incomplete information settings. As a special case of independent interest, we obtain comparative statics with respect to the number of agents, implying the dominance of the multi-agent setting ($n \ge 2$) over the single-agent setting ($n=1$) in terms of the principal's utility. We further extend our problem by considering an examination cost of the mechanism and derive analogous results in the complete information setting.
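A Monte Carlo sketch of the comparative statics mentioned above, using a simple threshold-style delegation mechanism: the principal commits to an eligible set, each agent proposes its own favourite eligible solution, and the principal keeps the best proposal. The i.i.d. uniform utilities and the threshold value are illustrative assumptions, not the paper's construction.

```python
import numpy as np

# Compare the principal's expected utility with n = 1 vs n = 2 agents under
# a threshold delegation mechanism. All distributions are illustrative.
rng = np.random.default_rng(4)
k, trials, tau = 5, 50_000, 0.6   # solutions per agent, runs, threshold

def principal_utility(n):
    x = rng.random((trials, n, k))        # principal's utility per solution
    y = rng.random((trials, n, k))        # agent's utility per solution
    eligible = x >= tau                   # principal commits to a threshold set
    y_masked = np.where(eligible, y, -np.inf)
    best = y_masked.argmax(axis=2)        # each agent proposes its favourite
    x_prop = np.take_along_axis(x, best[..., None], axis=2)[..., 0]
    ok = eligible.any(axis=2)             # agents without eligible solutions abstain
    x_prop = np.where(ok, x_prop, 0.0)
    return x_prop.max(axis=1).mean()      # principal keeps the best proposal

for n in (1, 2):
    print(f"n={n}: expected principal utility {principal_utility(n):.3f}")
```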

We study the problem of fairly allocating a set of indivisible goods among agents with matroid rank valuations -- every good provides a marginal value of $0$ or $1$ when added to a bundle and valuations are submodular. We generalize the Yankee Swap algorithm to create a simple framework, called General Yankee Swap, that can efficiently compute allocations that maximize any justice criterion (or fairness objective) satisfying some mild assumptions. Along with maximizing a justice criterion, General Yankee Swap is guaranteed to maximize utilitarian social welfare, ensure strategyproofness and use at most a quadratic number of valuation queries. We show how General Yankee Swap can be used to compute allocations for five different well-studied justice criteria: (a) Prioritized Lorenz dominance, (b) Maximin fairness, (c) Weighted leximin, (d) Max weighted Nash welfare, and (e) Max weighted $p$-mean welfare. In particular, our framework provides the first polynomial time algorithms to compute weighted leximin, max weighted Nash welfare and max weighted $p$-mean welfare allocations for agents with matroid rank valuations.
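To fix ideas, the sketch below runs a greedy, poorest-agent-first allocation for agents with (truncated partition-)matroid rank valuations; real (General) Yankee Swap additionally searches for transfer paths that reshuffle goods between agents, a step omitted here, so this conveys only the flavour of the framework, not the framework itself.

```python
# Toy matroid rank valuation: agent i values a bundle by how many goods it
# contains from its interest set, capped at a budget. Sets are illustrative.
interest = [{0, 1, 2, 3}, {2, 3, 4, 5}, {4, 5, 6, 7}]
cap = [2, 3, 2]

def value(i, bundle):
    return min(cap[i], len(bundle & interest[i]))

goods = set(range(8))
bundles = [set() for _ in interest]

while True:
    # Poorest agent first (a maximin-style justice criterion).
    order = sorted(range(len(interest)), key=lambda i: value(i, bundles[i]))
    for i in order:
        gain = [g for g in goods
                if value(i, bundles[i] | {g}) > value(i, bundles[i])]
        if gain:
            g = gain[0]
            bundles[i].add(g)
            goods.remove(g)
            break
    else:
        break  # no agent gains marginal value 1 from the remaining goods

print([sorted(b) for b in bundles],
      "values:", [value(i, b) for i, b in enumerate(bundles)])
```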

Randomized controlled trials (RCTs) are increasingly prevalent in education research, and are often regarded as a gold standard of causal inference. Two main virtues of randomized experiments are that they (1) do not suffer from confounding, thereby allowing for an unbiased estimate of an intervention's causal impact, and (2) allow for design-based inference, meaning that the physical act of randomization largely justifies the statistical assumptions made. However, RCT sample sizes are often small, leading to low precision; in many cases RCT estimates may be too imprecise to guide policy or inform science. Observational studies, by contrast, have strengths and weaknesses complementary to those of RCTs. Observational studies typically offer much larger sample sizes, but may suffer from confounding. In many contexts, experimental and observational data exist side by side, allowing the possibility of integrating "big observational data" with "small but high-quality experimental data" to get the best of both. Such approaches hold particular promise in the field of education, where RCT sample sizes are often small due to cost constraints, but automatic collection of observational data, such as in computerized educational technology applications, or in state longitudinal data systems (SLDS) with administrative data on hundreds of thousands of students, has made rich, high-dimensional observational data widely available. We outline an approach that allows one to employ machine learning algorithms to learn from the observational data, and use the resulting models to improve precision in randomized experiments. Importantly, there is no requirement that the machine learning models be "correct" in any sense, and the final experimental results are guaranteed to be exactly unbiased. Thus, there is no danger of confounding biases in the observational data leaking into the experiment.
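One minimal way to realize the "exactly unbiased" guarantee sketched above is to fit an arbitrary ML model on the observational data and subtract its predictions from the RCT outcomes before taking a difference in means: because the predictions depend only on covariates and not on the randomized assignment, the adjusted estimator remains unbiased no matter how wrong the model is. The simulated data, model choice, and effect size below are illustrative, and the paper's estimators may be more refined.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(5)

def outcomes(x, z):  # simulated outcome: covariate signal + effect + noise
    return 2 * x[:, 0] + np.sin(3 * x[:, 1]) + 0.5 * z + rng.standard_normal(len(x))

# Large observational dataset (no experimental treatment) trains the model.
X_obs = rng.random((5000, 2))
y_obs = outcomes(X_obs, np.zeros(5000))
model = GradientBoostingRegressor().fit(X_obs, y_obs)

# Small RCT: assignment z is randomized, independent of covariates.
X_rct = rng.random((200, 2))
z = rng.integers(0, 2, 200)
y = outcomes(X_rct, z)

tau_raw = y[z == 1].mean() - y[z == 0].mean()
resid = y - model.predict(X_rct)   # covariate adjustment via the ML model
tau_adj = resid[z == 1].mean() - resid[z == 0].mean()
print(f"difference in means: {tau_raw:.3f}, ML-adjusted: {tau_adj:.3f} (truth 0.5)")
```

The adjusted estimate is typically far less variable than the raw difference in means, while randomization alone guarantees its unbiasedness.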

Transformers have achieved superior performance in many tasks in natural language processing and computer vision, which has also sparked great interest in the time series community. Among the multiple advantages of Transformers, the ability to capture long-range dependencies and interactions is especially attractive for time series modeling, leading to exciting progress in various time series applications. In this paper, we systematically review Transformer schemes for time series modeling, highlighting their strengths as well as their limitations through a new taxonomy that summarizes existing time series Transformers from two perspectives. From the perspective of network modifications, we summarize the module-level and architecture-level adaptations of time series Transformers. From the perspective of applications, we categorize time series Transformers based on common tasks, including forecasting, anomaly detection, and classification. Empirically, we perform robustness analysis, model size analysis, and seasonal-trend decomposition analysis to study how Transformers perform on time series. Finally, we discuss and suggest future directions to provide useful research guidance. To the best of our knowledge, this paper is the first work to comprehensively and systematically summarize the recent advances of Transformers for modeling time series data. We hope this survey will ignite further research interest in time series Transformers.
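As a small aside on the seasonal-trend decomposition analysis mentioned above, the following sketch splits a synthetic series into trend (moving average), seasonal (periodic means of the detrended part), and remainder components; the period is assumed known and the series is simulated.

```python
import numpy as np

# Classical moving-average seasonal-trend decomposition on a toy series.
rng = np.random.default_rng(6)
period, n = 12, 240
t = np.arange(n)
y = 0.05 * t + np.sin(2 * np.pi * t / period) + 0.3 * rng.standard_normal(n)

kernel = np.ones(period) / period
trend = np.convolve(y, kernel, mode="same")      # moving-average trend
detrended = y - trend
seasonal = np.tile([detrended[i::period].mean() for i in range(period)],
                   n // period)                   # periodic means
remainder = y - trend - seasonal
print("remainder std:", remainder.std().round(3))
```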

Medical image segmentation requires consensus ground truth segmentations to be derived from multiple expert annotations. We propose a novel approach that obtains consensus segmentations from experts using graph cuts (GC) and semi-supervised learning (SSL). Popular approaches use iterative Expectation Maximization (EM) to estimate the final annotation and quantify each annotator's performance; such techniques risk getting trapped in local minima. We propose a self-consistency (SC) score to quantify annotator consistency using low-level image features. SSL is used to predict missing annotations by considering global features and local image consistency. The SC score also serves as the penalty cost in a second-order Markov random field (MRF) cost function optimized using graph cuts to derive the final consensus label; graph cuts obtain a global optimum without an iterative procedure. Experimental results on synthetic images, real data from Crohn's disease patients, and retinal images show our final segmentation to be accurate and more consistent than that of competing methods.
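A hedged sketch of the consistency-weighted fusion idea: each simulated annotator gets a self-consistency proxy score (agreement with a smoothed version of its own annotation, standing in for the paper's low-level image features), and annotations are fused by a weighted vote rather than by the MRF/graph-cut optimization the paper actually uses.

```python
import numpy as np
from scipy.ndimage import uniform_filter

# Simulate three annotators with different noise levels on a binary mask.
rng = np.random.default_rng(7)
truth = rng.random((32, 32)) > 0.5
experts = np.stack([truth ^ (rng.random(truth.shape) < p)
                    for p in (0.05, 0.15, 0.30)])

def smooth(a):
    """Local smoothing; a stand-in for local image consistency."""
    return uniform_filter(a.astype(float), size=3) > 0.5

# Self-consistency proxy: agreement of each annotation with its own
# smoothed version. Noisier annotators score lower.
sc = np.array([(e == smooth(e)).mean() for e in experts])
weights = sc / sc.sum()

# Consistency-weighted vote in place of the graph-cut consensus step.
consensus = (weights[:, None, None] * experts).sum(axis=0) > 0.5
print("consensus accuracy:", (consensus == truth).mean().round(3))
```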
