
We consider a slotted-time system with a transmitter-receiver pair. In the system, a transmitter observes a dynamic source and sends updates to a remote receiver through a communication channel. We assume that the channel is error-free but suffers from a random delay. Moreover, when an update has been in transmission for too long, the transmission is terminated immediately and the update is discarded. We assume the maximum transmission time is predetermined and is not controlled by the transmitter. The receiver maintains an estimate of the current state of the dynamic source using the received updates. In this paper, we adopt the Age of Incorrect Information (AoII) as the performance metric and investigate the problem of optimizing the transmitter's action in each time slot to minimize AoII. We first characterize the optimization problem as a Markov decision process and evaluate the performance of some canonical transmission policies. Then, by leveraging the policy improvement theorem, we prove that, under a simple and easy-to-verify condition, the optimal policy for the transmitter is the one that initiates a transmission whenever the channel is idle and AoII is not zero. Lastly, we take the case where the transmission time is geometrically distributed as an example. For this example, we verify the condition numerically and provide numerical results that highlight the performance of the optimal policy.
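
As a rough illustration of the setup, the sketch below simulates AoII under the "transmit whenever the channel is idle and AoII is nonzero" policy, assuming a symmetric binary Markov source, a geometrically distributed transmission time, and a hard transmission deadline. All parameter values (`p`, `q`, `t_max`) are illustrative, not taken from the paper.

```python
import random

def simulate_aoii(p=0.1, q=0.6, t_max=5, n_slots=100_000, seed=0):
    """Monte Carlo sketch of time-average AoII under the policy
    'start a transmission whenever the channel is idle and AoII > 0'.

    p     : per-slot flip probability of a symmetric binary Markov source
    q     : per-slot delivery probability (geometric transmission time)
    t_max : updates in transmission longer than t_max slots are discarded
    """
    rng = random.Random(seed)
    src, est = 0, 0          # true source state and receiver's estimate
    in_flight = None         # (payload, slots in transmission) or None
    aoii, total = 0, 0
    for _ in range(n_slots):
        # source evolves
        if rng.random() < p:
            src ^= 1
        # channel: geometric delivery with a deadline
        if in_flight is not None:
            payload, age = in_flight
            if rng.random() < q:
                est, in_flight = payload, None   # delivered
            elif age + 1 >= t_max:
                in_flight = None                 # deadline hit: discarded
            else:
                in_flight = (payload, age + 1)
        # AoII: resets to 0 when the estimate is correct, else grows
        aoii = 0 if est == src else aoii + 1
        total += aoii
        # policy: transmit when idle and AoII > 0
        if in_flight is None and aoii > 0:
            in_flight = (src, 0)
    return total / n_slots
```

A faster channel (larger `q`) should yield a lower time-average AoII, which the simulation reproduces.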

Related content

This journal publishes high-quality papers that expand the scope of operations research and computing, seeking original research on theory, methodology, experiments, systems, and applications, as well as novel survey and tutorial papers and papers describing new and useful software tools.
March 10, 2023

Numerical predictions of quantities of interest measured within physical systems rely on the use of mathematical models that should be validated, or at least, not invalidated. Model validation usually involves the comparison of experimental data (outputs from the system of interest) and model predictions, both obtained at a specific validation scenario. The design of this validation experiment should be directly relevant to the objective of the model, that of predicting a quantity of interest at a prediction scenario. In this paper, we address two specific issues arising when designing validation experiments. The first issue consists in determining an appropriate validation scenario in cases where the prediction scenario cannot be carried out in a controlled environment. The second issue concerns the selection of observations when the quantity of interest cannot be readily observed. The proposed methodology involves the computation of influence matrices that characterize the response surface of given model functionals. Minimizing the distance between influence matrices allows one to select the validation experiment most representative of the prediction scenario. We illustrate our approach on two numerical examples. The first example considers the validation of a simple model based on an ordinary differential equation governing an object in free fall, highlighting the importance of the choice of the validation experiment. The second numerical experiment focuses on the transport of a pollutant and demonstrates the impact that the choice of the quantity of interest has on the validation experiment to be performed.
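
The scenario-selection idea can be sketched with finite-difference parameter sensitivities standing in for the influence matrices: among candidate validation scenarios, pick the one whose sensitivity vector is closest to that of the prediction scenario. The model, parameter values, and scenarios below are hypothetical.

```python
def influence(model, theta, scenario, h=1e-6):
    """Finite-difference sensitivities of the model output with respect
    to each parameter at a given scenario (a 1-D 'influence matrix')."""
    base = model(theta, scenario)
    return [(model([t + (h if j == i else 0.0) for j, t in enumerate(theta)],
                   scenario) - base) / h
            for i in range(len(theta))]

def best_validation_scenario(model, theta, pred_scenario, candidates):
    """Pick the candidate scenario whose influence vector is closest
    (in Euclidean distance) to that of the prediction scenario."""
    target = influence(model, theta, pred_scenario)
    def dist(s):
        v = influence(model, theta, s)
        return sum((a - b) ** 2 for a, b in zip(target, v))
    return min(candidates, key=dist)
```

For a model that is linear in its parameters, e.g. `m(theta, s) = theta[0]*s + theta[1]*s**2`, the sensitivities are exactly `[s, s**2]`, so the selected scenario is the candidate closest to the prediction scenario in that feature space.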

When choosing estimands and estimators in randomized clinical trials, caution is warranted with respect to intercurrent events, such as patients switching treatment after disease progression. Statistical analyses may then easily lure one into making large implicit extrapolations, which often go unnoticed. We illustrate this problem of implicit extrapolations using a real oncology case study, with a right-censored time-to-event endpoint, in which patients can cross over from the control to the experimental treatment after disease progression, for ethical reasons. We resolve this by developing an estimator for the survival risk ratio contrasting the survival probabilities at each time t if all patients were to take the experimental treatment with the survival probabilities at those times t if all patients were to take the control treatment up to time t, using randomization as an instrumental variable to avoid reliance on no-unmeasured-confounders assumptions. This doubly robust estimator can handle time-varying treatment switches and right-censored survival times. Insight into the rationale behind the estimator is provided, and the approach is demonstrated by re-analyzing the oncology trial.

Optimization algorithms such as projected Newton's method, FISTA, mirror descent, and their variants enjoy near-optimal regret bounds and convergence rates, but suffer from a computational bottleneck of computing ``projections'' in potentially each iteration (e.g., $O(T^{1/2})$ regret of online mirror descent). On the other hand, conditional gradient variants solve a linear optimization in each iteration, but result in suboptimal rates (e.g., $O(T^{3/4})$ regret of online Frank-Wolfe). Motivated by this trade-off between runtime and convergence rate, we consider iterative projections of close-by points over widely-prevalent submodular base polytopes $B(f)$. We first give necessary and sufficient conditions for when two close points project to the same face of a polytope, and then show that points far away from the polytope project onto its vertices with high probability. We next use this theory and develop a toolkit to speed up the computation of iterative projections over submodular polytopes using both discrete and continuous perspectives. We subsequently adapt the away-step Frank-Wolfe algorithm to use this information and enable early termination. For the special case of cardinality-based submodular polytopes, we improve the runtime of computing certain Bregman projections by a factor of $\Omega(n/\log(n))$. Our theoretical results show orders of magnitude reduction in runtime in preliminary computational experiments.
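
For submodular base polytopes, the linear-optimization oracle used by conditional gradient methods is Edmonds' greedy algorithm: sort coordinates by the objective weights and assign marginal gains of $f$ along the resulting chain. A minimal sketch, with $f$ given as an explicit set function (practical only for illustration):

```python
def greedy_linear_oracle(w, f):
    """Edmonds' greedy algorithm: a maximizer of <w, x> over the base
    polytope B(f) of a submodular function f on ground set {0,...,n-1}.

    w : list of weights
    f : set function, called on frozensets; f(empty) is the base value
    """
    n = len(w)
    order = sorted(range(n), key=lambda i: -w[i])  # decreasing weights
    x = [0.0] * n
    prefix = set()
    prev = f(frozenset())
    for i in order:
        prefix.add(i)
        val = f(frozenset(prefix))
        x[i] = val - prev        # marginal gain along the chain
        prev = val
    return x
```

For a cardinality-based function such as `f(S) = min(len(S), 2)`, the oracle puts mass on the coordinates with the largest weights, i.e., it returns a vertex of the polytope.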

We consider the minimization of the cost of actuation error under resource constraints for real-time tracking in wireless autonomous systems. A transmitter monitors the state of a discrete random process and sends updates to the receiver over an unreliable wireless channel. The receiver takes actions according to the estimated state of the source. For each discrepancy between the real state of the source and the estimated one, we consider a different cost of actuation error. This models the case where some states, and consequently the corresponding actions to be taken, are more important than others. We provide two algorithms: one reaching an optimal solution but of high complexity, and one providing a suboptimal solution but with low complexity. The performance of the two algorithms is quite close, as shown by simulations.
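
Problems of this kind are naturally cast as Markov decision processes, and the high-complexity optimal route typically amounts to dynamic programming. Below is a generic discounted value-iteration sketch on a toy instance; the transition and cost arrays are placeholders, not the paper's model.

```python
def value_iteration(P, C, gamma=0.95, tol=1e-9):
    """Discounted value iteration for a finite MDP.

    P[a][s][t] : probability of moving from state s to t under action a
    C[s][a]    : one-step cost of taking action a in state s
    Returns the (approximate) optimal value function.
    """
    n_states, n_actions = len(C), len(C[0])
    V = [0.0] * n_states
    while True:
        V_new = [min(C[s][a] + gamma * sum(P[a][s][t] * V[t]
                                           for t in range(n_states))
                     for a in range(n_actions))
                 for s in range(n_states)]
        if max(abs(V_new[s] - V[s]) for s in range(n_states)) < tol:
            return V_new
        V = V_new
```

On a single self-looping state with action costs 1 and 2, the optimal value is the geometric sum 1/(1-gamma) = 20 for gamma = 0.95.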

This work considers Gaussian process interpolation with a periodized version of the Matérn covariance function introduced by Stein (22, Section 6.7). Convergence rates are studied for the joint maximum likelihood estimation of the regularity and the amplitude parameters when the data is sampled according to the model. The mean integrated squared error is also analyzed with fixed and estimated parameters, showing that maximum likelihood estimation yields asymptotically the same error as if the ground truth was known. Finally, the case where the observed function is a fixed deterministic element of a Sobolev space of continuous functions is also considered, suggesting that bounding assumptions on some parameters can lead to different estimates.

We present two methods for bounding the probabilities of benefit and harm under unmeasured confounding. The first method computes the (upper or lower) bound of either probability as a function of the observed data distribution and two intuitive sensitivity parameters which, then, can be presented to the analyst as a 2-D plot to assist her in decision making. The second method assumes the existence of a measured nondifferential proxy (i.e., direct effect) of the unmeasured confounder. Using this proxy, tighter bounds than the existing ones can be derived from just the observed data distribution.

In this paper, we not only propose a new optimal sequential test based on the sum of logarithmic likelihood ratios (SLR) but also present the CUSUM sequential test (control chart, stopping time) with observation-adjusted control limits (CUSUM-OAL) for quickly and adaptively monitoring changes in the distribution of a sequence of observations. Two limiting relationships between the optimal test and a series of the CUSUM-OAL tests are established. Moreover, we give estimates of the in-control and out-of-control average run lengths (ARLs) of the CUSUM-OAL test. The theoretical results are illustrated by numerical simulations in detecting mean shifts in the observation sequence.
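
For context, a classical CUSUM for a Gaussian mean shift with a fixed control limit `h` can be sketched as below; the CUSUM-OAL test discussed above replaces the fixed `h` with a limit that depends on the observations. Parameter values here are illustrative.

```python
def cusum_stopping_time(xs, mu0=0.0, mu1=1.0, sigma=1.0, h=4.0):
    """Classical CUSUM detecting a mean shift mu0 -> mu1 in i.i.d.
    Gaussian observations with standard deviation sigma.

    Returns (stop_index, statistic_path); stop_index is None if the
    statistic never reaches the control limit h.
    """
    w, path = 0.0, []
    for t, x in enumerate(xs):
        # Gaussian log-likelihood ratio of one observation
        llr = (mu1 - mu0) * (x - (mu0 + mu1) / 2.0) / sigma ** 2
        w = max(0.0, w + llr)   # reflected random walk
        path.append(w)
        if w >= h:
            return t, path
    return None, path
```

On a deterministic sequence that sits at 0 for 50 slots and then jumps to 1, the statistic stays at zero before the change and climbs by 0.5 per slot afterwards, raising an alarm 8 slots after the shift.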

We study a system in which two-state Markov sources send status updates to a common receiver over a slotted ALOHA random access channel. We characterize the performance of the system in terms of state estimation entropy (SEE), which measures the uncertainty at the receiver about the sources' state. Two channel access strategies are considered, a reactive policy that depends on the source behavior and a random one that is independent of it. We prove that the considered policies can be studied using two different hidden Markov models (HMM) and show through density evolution (DE) analysis that the reactive strategy outperforms the random one in terms of SEE, while the opposite is true for AoI. Furthermore, we characterize the probability of error in the state estimation at the receiver, considering a maximum a posteriori (MAP) estimator and a low-complexity (decode and hold) estimator. Our study provides useful insights into the design trade-offs that emerge when different performance metrics such as SEE, age of information (AoI), or state estimation error probability are adopted. Moreover, we show how the source statistics significantly impact the system performance.
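
The MAP estimator referred to above can be obtained from an HMM forward (filtering) recursion; a minimal sketch for a symmetric two-state source whose state is either revealed error-free or erased in each slot (the transition probabilities are illustrative, not from the paper):

```python
def filter_two_state(obs, p01=0.1, p10=0.1):
    """Forward recursion for a two-state Markov source observed through
    a slotted channel: each slot either reveals the true state (o in
    {0, 1}) or delivers nothing (o is None, e.g. a collision/erasure).

    Returns the posterior P(state = 1 | observations so far) per slot.
    A MAP estimator thresholds this at 0.5; 'decode and hold' instead
    simply keeps the last revealed state.
    """
    P = [[1 - p01, p01], [p10, 1 - p10]]   # transition matrix
    belief = [0.5, 0.5]                    # uniform prior
    out = []
    for o in obs:
        # predict step: push belief through the Markov chain
        b0 = belief[0] * P[0][0] + belief[1] * P[1][0]
        b1 = belief[0] * P[0][1] + belief[1] * P[1][1]
        if o is None:
            belief = [b0, b1]              # no update on erasure
        else:
            belief = [0.0, 0.0]
            belief[o] = 1.0                # error-free observation
        out.append(belief[1])
    return out
```

After a revealed state, the posterior decays geometrically toward the stationary distribution during erasures, which is exactly the regime where MAP and decode-and-hold can disagree.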

Indiscriminate data poisoning attacks aim to decrease a model's test accuracy by injecting a small amount of corrupted training data. Despite significant interest, existing attacks remain relatively ineffective against modern machine learning (ML) architectures. In this work, we introduce the notion of model poisonability as a technical tool to explore the intrinsic limits of data poisoning attacks. We derive an easily computable threshold to establish and quantify a surprising phase transition phenomenon among popular ML models: data poisoning attacks become effective only when the poisoning ratio exceeds our threshold. Building on existing parameter corruption attacks and refining the Gradient Canceling attack, we perform extensive experiments to confirm our theoretical findings, test the predictability of our transition threshold, and significantly improve existing data poisoning baselines over a range of datasets and models. Our work highlights the critical role played by the poisoning ratio, and sheds new light on existing empirical results, attacks, and mitigation strategies in data poisoning.

Human-in-the-loop aims to train an accurate prediction model with minimum cost by integrating human knowledge and experience. Humans can provide training data for machine learning applications and, with the help of machine-based approaches, directly accomplish tasks in the pipeline that are hard for computers. In this paper, we survey existing works on human-in-the-loop from a data perspective and classify them into three categories with a progressive relationship: (1) work that improves model performance through data processing, (2) work that improves model performance through interventional model training, and (3) the design of system-independent human-in-the-loop frameworks. Using this categorization, we summarize the major approaches in the field along with their technical strengths and weaknesses, and we provide a brief classification and discussion of applications in natural language processing, computer vision, and other areas. Besides, we present some open challenges and opportunities. This survey intends to provide a high-level summary of human-in-the-loop and to motivate interested readers to consider approaches for designing effective human-in-the-loop solutions.
