
The Implicitly Normalized Forecaster (INF) algorithm is considered an optimal solution for adversarial multi-armed bandit (MAB) problems. However, most existing complexity results for INF rely on restrictive assumptions, such as bounded rewards. Recently, a related algorithm was proposed that works for both adversarial and stochastic heavy-tailed MAB settings. However, this algorithm fails to fully exploit the available data. In this paper, we propose a new version of INF called the Implicitly Normalized Forecaster with clipping (INF-clip) for MAB problems with heavy-tailed reward distributions. We establish convergence results under mild assumptions on the reward distribution and demonstrate that INF-clip is optimal for linear heavy-tailed stochastic MAB problems and works well for non-linear ones. Furthermore, we show that INF-clip outperforms the best-of-both-worlds algorithm in cases where it is difficult to distinguish between different arms.
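
As a rough illustration of the idea, the sketch below combines reward clipping with a Tsallis-entropy INF update whose arm distribution is found by implicit normalization (bisection). The clipping level `clip_level`, the step-size schedule `eta0 / sqrt(t)`, and the loss construction are illustrative choices, not the tuned quantities from the paper.

```python
import numpy as np

def tsallis_inf_probs(cum_losses, eta, iters=60):
    """Arm probabilities p_i = 4 / (eta * (L_i - x))**2 with sum(p) = 1;
    the implicit normalization constant x is found by bisection
    (1/2-Tsallis-entropy INF)."""
    n = len(cum_losses)
    lo = cum_losses.min() - 2.0 * np.sqrt(n) / eta - 1.0  # here sum(p) < 1
    hi = cum_losses.min() - 1e-12                         # here sum(p) > 1
    for _ in range(iters):
        x = 0.5 * (lo + hi)
        s = np.sum(4.0 / (eta * (cum_losses - x)) ** 2)
        lo, hi = (lo, x) if s > 1.0 else (x, hi)
    p = 4.0 / (eta * (cum_losses - x)) ** 2
    return p / p.sum()

def inf_clip(pull_arm, n_arms, horizon, clip_level, eta0=1.0, seed=None):
    """Hedged sketch of INF with clipping for heavy-tailed rewards: clip each
    observed reward at `clip_level`, turn it into a nonnegative loss,
    importance-weight it, and feed it to the Tsallis-INF update above."""
    rng = np.random.default_rng(seed)
    cum_losses = np.zeros(n_arms)
    for t in range(1, horizon + 1):
        p = tsallis_inf_probs(cum_losses, eta0 / np.sqrt(t))
        arm = rng.choice(n_arms, p=p)
        reward = pull_arm(arm)                              # possibly heavy-tailed
        clipped = np.clip(reward, -clip_level, clip_level)
        cum_losses[arm] += (clip_level - clipped) / p[arm]  # importance-weighted loss
    return cum_losses
```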

Related Content

In teams of diverse robots, assigning an appropriate role to each robot and evaluating its performance is crucial. These roles define the specific characteristics of a robot within a given context. The stream of actions exhibited by a robot based on its assigned role is referred to as the process role. Our research addresses the representation of process roles using a multivariate probabilistic function. The main aim of this study is to develop a role engine for collaborative multi-robot systems and to optimize the behavior of the robots. The role engine is designed to assign suitable roles to each robot, generate approximately optimal process roles, keep them updated over time, and identify instances of robot malfunction or trigger replanning when necessary. The environment considered is dynamic, involving obstacles and other agents. The role engine operates in a hybrid manner, with centralized initiation and decentralized execution, and assigns unlabeled roles to agents. We employ Gaussian Process (GP) inference to optimize process roles based on local constraints and constraints related to other agents. Furthermore, we propose an innovative approach that utilizes the environment's skeleton to address initialization and feasibility-evaluation challenges. We demonstrate the feasibility and efficiency of the proposed approach through simulation studies and real-world experiments involving diverse mobile robots.
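
The paper's GP-based optimization of process roles is not spelled out in the abstract, so the snippet below is only a generic illustration of using a GP surrogate to pick a role parameterization under a constraint-derived cost; the cost function, the candidate parameterization, and the selection rule (posterior-mean minimization) are all assumptions of this sketch.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def propose_process_role(candidates, evaluated_roles, evaluated_costs):
    """Fit a GP surrogate to previously evaluated role parameter vectors and
    their constraint-violation costs, then return the candidate with the
    lowest posterior mean cost. Purely illustrative of GP-based role
    optimization; it does not reproduce the paper's constraint handling."""
    candidates = np.asarray(candidates, dtype=float)
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0) + WhiteKernel(1e-3),
                                  normalize_y=True)
    gp.fit(np.asarray(evaluated_roles, dtype=float),
           np.asarray(evaluated_costs, dtype=float))
    mean = gp.predict(candidates)
    return candidates[int(np.argmin(mean))]
```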

In this paper, we consider a general observation model for restless multi-armed bandit problems. The player must operate based on a feedback mechanism that is error-prone due to resource constraints or environmental and intrinsic noise. By establishing a general probabilistic model for the dynamics of feedback/observation, we formulate the problem as a restless bandit with a countable belief state space starting from an arbitrary initial belief (a priori information). We apply the achievable region method with partial conservation laws (PCL) to the infinite-state problem and analyze its indexability and priority index (Whittle index). Finally, we propose an approximation procedure that transforms the problem into one to which the AG algorithm of Ni\~no-Mora and Bertsimas for finite-state problems can be applied. Numerical experiments show that our algorithm has excellent performance.
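
To make the belief dynamics concrete, here is a minimal sketch of the standard Bayesian filter for an arm whose hidden state evolves as a Markov chain and whose feedback passes through a noisy observation channel. The finite state space, the confusion-matrix observation model, and the "no feedback when not played" convention are assumptions for illustration; the paper works with a countable belief space.

```python
import numpy as np

def belief_update(belief, transition, obs_matrix, played, obs=None):
    """One step of the belief dynamics under error-prone feedback.
    belief:      current distribution over the arm's hidden states, shape (S,)
    transition:  row-stochastic Markov kernel, shape (S, S)
    obs_matrix:  obs_matrix[s, o] = P(observe o | true state s)  (noisy channel)
    played:      whether the arm was pulled (and hence feedback was received)
    obs:         the noisy observation, used only if played is True."""
    predicted = belief @ transition          # prediction through the Markov kernel
    if not played:
        return predicted                     # no feedback: the belief just drifts
    posterior = predicted * obs_matrix[:, obs]
    return posterior / posterior.sum()       # Bayes correction with the noisy obs
```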

Both data ferrying with disruption-tolerant networking (DTN) and mobile cellular base stations constitute important techniques for UAV-aided communication in crisis situations where standard communication infrastructure is unavailable. For optimal use of a limited number of UAVs, we propose providing both DTN and a cellular base station on each UAV. Here, DTN is used for large amounts of low-priority data, while capacity-constrained cell coverage remains reserved for emergency calls or command and control. We optimize cell coverage via a novel optimal transport-based formulation using alternating minimization, while for data ferrying we periodically deliver data between dynamic clusters by solving quadratic assignment problems. In our evaluation, we consider different scenarios with varying mobility models and a wide range of flight patterns. Overall, we tractably achieve optimal cell coverage under quality-of-service costs with DTN-based data ferrying, enabling large-scale deployment of UAV swarms for crisis communication.
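
As a toy illustration of the alternating-minimization idea for cell coverage, the sketch below alternates between (i) a discrete optimal-transport step that assigns users to capacity-limited UAV "slots" via a balanced assignment problem and (ii) recentering each UAV on its assigned users. The quadratic cost, the hard per-UAV capacity, and the centroid update are assumptions; the paper's QoS cost model and the DTN data-ferrying component are not reproduced here.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def place_uav_cells(users, n_uavs, capacity, iters=20, seed=0):
    """Alternating minimization for capacity-constrained cell coverage
    (toy version): each UAV is replicated `capacity` times into 'slots',
    users are matched to slots by a minimum-cost assignment (a discrete
    optimal-transport step), and each UAV then moves to the centroid of
    its assigned users. Assumes n_uavs * capacity >= len(users)."""
    users = np.asarray(users, dtype=float)
    rng = np.random.default_rng(seed)
    uavs = users[rng.choice(len(users), n_uavs, replace=False)].copy()
    for _ in range(iters):
        slots = np.repeat(uavs, capacity, axis=0)
        _, cols = linear_sum_assignment(cdist(users, slots) ** 2)
        owner = cols // capacity                       # serving UAV per user
        for k in range(n_uavs):
            members = users[owner == k]
            if len(members):
                uavs[k] = members.mean(axis=0)         # recenter the cell
    return uavs, owner
```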

Causal inference for extreme events has many potential applications in fields such as climate science, medicine and economics. We study the extremal quantile treatment effect of a binary treatment on a continuous, heavy-tailed outcome. Existing methods are limited to the case where the quantile of interest is within the range of the observations. For applications in risk assessment, however, the most relevant cases relate to extremal quantiles that go beyond the data range. We introduce an estimator of the extremal quantile treatment effect that relies on asymptotic tail approximation, and use a new causal Hill estimator for the extreme value indices of potential outcome distributions. We establish asymptotic normality of the estimators and propose a consistent variance estimator to achieve valid statistical inference. We illustrate the performance of our method in simulation studies, and apply it to a real data set to estimate the extremal quantile treatment effect of college education on wage.
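
For orientation, the sketch below implements the classical ingredients the method builds on: a Hill estimate of the tail index from the k largest observations and a Weissman-type extrapolation of a quantile beyond the data range, applied separately to the treated and control outcomes. Contrasting the two observed groups directly is only valid under unconfoundedness; the paper's causal Hill estimator and its variance estimator are not reproduced here.

```python
import numpy as np

def hill_estimator(x, k):
    """Classical Hill estimate of the extreme value index from the k largest
    order statistics (assumes a heavy right tail with positive values)."""
    x = np.sort(np.asarray(x, dtype=float))
    return float(np.mean(np.log(x[-k:]) - np.log(x[-k - 1])))

def extremal_quantile(x, p, k):
    """Weissman-type extrapolation of the (1 - p)-quantile beyond the sample
    range: Q(1 - p) ~ X_(n-k) * (k / (n p))**gamma with gamma from Hill."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    gamma = hill_estimator(x, k)
    return float(x[-k - 1] * (k / (n * p)) ** gamma)

def naive_extremal_qte(y_treated, y_control, p, k):
    """Difference of extrapolated extreme quantiles between the two observed
    outcome groups -- a naive stand-in for the extremal quantile treatment
    effect, ignoring the causal adjustments developed in the paper."""
    return extremal_quantile(y_treated, p, k) - extremal_quantile(y_control, p, k)
```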

This paper establishes strong convergence rates for a spatio-temporal full discretization of the stochastic wave equation with nonlinear damping in dimensions one and two. We discretize the SPDE by applying a spectral Galerkin method in space and a modified implicit exponential Euler scheme in time. The presence of the super-linearly growing damping in the underlying model poses challenges for the error analysis. To address these difficulties, we first derive upper mean-square error bounds and then obtain mean-square convergence rates of the considered numerical solution. This is done without requiring moment bounds for the full approximations. The main result shows that, in dimension one, the scheme admits a convergence rate of order $\tfrac12$ in space and order $1$ in time. In dimension two, the error analysis is more subtle and can be done at the expense of an order reduction due to an infinitesimal factor. Numerical experiments are performed and confirm our theoretical findings.
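
To indicate what such a discretization looks like in the one-dimensional case, the sketch below combines a sine-basis spectral Galerkin truncation with a plain explicit exponential Euler step (the exact linear wave group applied to an Euler increment). This is not the modified implicit scheme whose rates the paper analyzes; the cubic damping $f(v) = v^3$, the noise model (independent mode-wise increments), and the initial data are illustrative assumptions.

```python
import numpy as np
from scipy.fft import dst, idst

def stochastic_wave_exp_euler(N=128, T=1.0, steps=2000, seed=0):
    """Toy spectral Galerkin / explicit exponential Euler discretization of
        u_tt = u_xx - u_t**3 + noise  on (0, pi), u(0) = u(pi) = 0,
    tracking the first N sine modes; the orthonormal DST-I maps between modal
    coefficients and grid values so the damping can be evaluated pointwise."""
    rng = np.random.default_rng(seed)
    tau = T / steps
    w = np.arange(1, N + 1, dtype=float)          # sqrt of the eigenvalues k^2
    x = np.pi * np.arange(1, N + 1) / (N + 1)     # interior grid points
    u = dst(np.sin(x), type=1, norm="ortho")      # initial position (mode 1)
    v = np.zeros(N)                               # initial velocity
    cos_t, sin_t = np.cos(w * tau), np.sin(w * tau)
    for _ in range(steps):
        damping = -dst(idst(v, type=1, norm="ortho") ** 3, type=1, norm="ortho")
        dW = np.sqrt(tau) * rng.standard_normal(N)
        v_half = v + tau * damping + dW           # Euler increment on the velocity
        u, v = cos_t * u + (sin_t / w) * v_half, -w * sin_t * u + cos_t * v_half
    return x, idst(u, type=1, norm="ortho")       # grid and final displacement
```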

Exploiting the computational heterogeneity of mobile devices and edge nodes, mobile edge computing (MEC) provides an efficient approach to supporting real-time applications that are sensitive to information freshness, by offloading tasks from mobile devices to edge nodes. We use the Age-of-Information (AoI) metric to evaluate information freshness. An efficient solution that minimizes the AoI for a multi-user MEC system is non-trivial to obtain due to the random computing time. In this paper, we consider multiple users offloading tasks to heterogeneous edge servers in a MEC system. We first reformulate the problem as a Restless Multi-Armed Bandit (RMAB) problem and establish a hierarchical Markov Decision Process (MDP) to characterize the updating of AoI for the MEC system. Based on the hierarchical MDP, we propose a nested index framework and design a nested index policy with provable asymptotic optimality. Finally, we obtain the closed form of the nested index, which enables a tradeoff between computational complexity and accuracy. Our algorithm leads to an optimality gap reduction of up to 40% compared to benchmarks, and it asymptotically approaches the lower bound as the system scale becomes large.
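
The nested index itself has a closed form in the paper that the abstract does not give, so the sketch below only illustrates the surrounding machinery: an index-based scheduler that offloads the tasks of the highest-index users each slot, here with the index taken to be the current AoI, and a memoryless (geometric) model of the random computing time. Both choices are assumptions for illustration, not the paper's nested index or service model.

```python
import numpy as np

def top_index_users(ages, n_servers, index_fn=lambda a: a):
    """Whittle-style priority rule: serve the n_servers users with the largest
    index; `index_fn` is a placeholder standing in for the nested index."""
    return np.argsort(index_fn(ages))[-n_servers:]

def simulate_average_aoi(n_users=10, n_servers=3, horizon=10_000,
                         mean_compute=2.0, seed=0):
    """Simulate AoI under the priority rule above: each slot, offloaded tasks
    finish with probability 1/mean_compute (a memoryless stand-in for the
    random computing time), which resets that user's age to one."""
    rng = np.random.default_rng(seed)
    ages = np.ones(n_users)
    total = 0.0
    for _ in range(horizon):
        served = top_index_users(ages, n_servers)
        finished = served[rng.random(n_servers) < 1.0 / mean_compute]
        ages += 1.0
        ages[finished] = 1.0
        total += ages.sum()
    return total / (horizon * n_users)            # time-average AoI per user
```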

Multi-armed bandit (MAB) algorithms have been increasingly used to complement or integrate with A/B tests and randomized clinical trials in e-commerce, healthcare, and policymaking. Recent developments incorporate possibly delayed feedback. While the existing MAB literature often focuses on maximizing the expected cumulative reward (or, equivalently, on regret minimization), few efforts have been devoted to establishing valid statistical inference approaches that quantify the uncertainty of learned policies. We attempt to fill this gap by providing a unified statistical inference framework for policy evaluation in which the target policy is allowed to differ from the data-collecting policy, and in which the delay is allowed to be associated with the treatment arms. We present an adaptively weighted estimator that, on the one hand, incorporates the arm-dependent delay mechanism to achieve consistency and, on the other hand, mitigates the variance inflation across stages caused by vanishing sampling probabilities. In particular, our estimator does not critically depend on the ability to estimate the unknown delay mechanism. Under appropriate conditions, we prove that our estimator converges to a normal distribution as the number of time points goes to infinity, which provides guarantees for large-sample statistical inference. We illustrate the finite-sample performance of our approach through Monte Carlo experiments.
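
The snippet below sketches the generic shape of an adaptively weighted off-policy value estimate: importance-weight each reward that has arrived by the analysis time and average the per-step scores with variance-stabilizing weights, here $h_t = \sqrt{p_t(A_t)}$, one common choice that is not necessarily the paper's. Dropping not-yet-observed rewards without a delay correction is a simplification; the paper's estimator additionally accounts for the arm-dependent delay mechanism.

```python
import numpy as np

def adaptively_weighted_value(arms, rewards, observed, logging_probs, target_probs):
    """Sketch of an adaptively weighted off-policy value estimate.
    arms[t]            : arm pulled at time t
    rewards[t]         : reward at time t (meaningful only if observed[t])
    observed[t]        : True if the (possibly delayed) feedback has arrived
    logging_probs[t][a]: probability the logging policy assigned to arm a
    target_probs[t][a] : probability the target policy assigns to arm a."""
    arms = np.asarray(arms)
    rewards = np.asarray(rewards, dtype=float)
    observed = np.asarray(observed, dtype=bool)
    p_log = np.array([logging_probs[t][a] for t, a in enumerate(arms)])
    p_tgt = np.array([target_probs[t][a] for t, a in enumerate(arms)])
    scores = np.where(observed, p_tgt / p_log * rewards, 0.0)  # IPW scores
    h = np.sqrt(p_log)                 # variance-stabilizing adaptive weights
    return float(np.sum(h * scores) / np.sum(h))
```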

The general sequential decision-making problem, which includes Markov decision processes (MDPs) and partially observable MDPs (POMDPs) as special cases, aims at maximizing a cumulative reward by making a sequence of decisions based on a history of observations and actions over time. Recent studies have shown that the sequential decision-making problem is statistically learnable if it admits a low-rank structure modeled by predictive state representations (PSRs). Despite these advancements, existing approaches typically involve oracles or steps that are not computationally efficient. On the other hand, upper confidence bound (UCB) based approaches, which have served successfully as computationally efficient methods in bandits and MDPs, have not been investigated for more general PSRs, due to the difficulty of optimistic bonus design in these more challenging settings. This paper proposes the first known UCB-type approach for PSRs, featuring a novel bonus term that upper bounds the total variation distance between the estimated and true models. We further characterize the sample complexity bounds for our designed UCB-type algorithms for both online and offline PSRs. In contrast to existing approaches for PSRs, our UCB-type algorithms enjoy computational efficiency, a guaranteed near-optimal last-iterate policy, and guaranteed model accuracy.

The Euclidean Steiner Minimal Tree problem takes as input a set $\mathcal P$ of points in the Euclidean plane and finds a minimum-length network interconnecting all the points of $\mathcal P$. In this paper, in continuation of the works of Du et al. and Weng et al., we study Euclidean Steiner Minimal Tree when $\mathcal P$ is formed by the vertices of a pair of regular, concentric and parallel $n$-gons. We restrict our attention to the cases where the two polygons are not very close to each other. In such cases, we show that Euclidean Steiner Minimal Tree is polynomial-time solvable, and we describe an explicit structure of a Euclidean Steiner minimal tree for $\mathcal P$. We also consider point sets $\mathcal P$ of size $n$ where the number of input points not on the convex hull of $\mathcal P$ is $f(n) \leq n$. We give an exact algorithm with running time $2^{\mathcal{O}(f(n)\log n)}$ for such input point sets $\mathcal P$. Note that when $f(n) = \mathcal{O}(\frac{n}{\log n})$, our algorithm runs in single-exponential time, and when $f(n) = o(n)$ the running time is $2^{o(n\log n)}$, which improves on the known algorithm stated in Hwang et al. It is known that no FPTAS exists for Euclidean Steiner Minimal Tree unless P=NP, as shown by Garey et al. On the other hand, FPTASes exist for Euclidean Steiner Minimal Tree on convex point sets, as given by Scott Provan. In this paper, we show that if the number of input points in $\mathcal P$ not belonging to the convex hull of $\mathcal P$ is $\mathcal{O}(\log n)$, then an FPTAS exists for Euclidean Steiner Minimal Tree. In contrast, we show that for any $\epsilon \in (0,1]$, when there are $\Omega(n^{\epsilon})$ points not belonging to the convex hull of the input set, no FPTAS can exist for Euclidean Steiner Minimal Tree unless P=NP.

Federated learning is a new distributed machine learning framework in which a set of heterogeneous clients collaboratively train a model without sharing their training data. In this work, we consider a practical and ubiquitous issue in federated learning: intermittent client availability, where the set of eligible clients may change during the training process. Such intermittent client availability significantly deteriorates the performance of the classical Federated Averaging algorithm (FedAvg for short). We propose a simple distributed non-convex optimization algorithm, called Federated Latest Averaging (FedLaAvg for short), which leverages the latest gradients of all clients, even those currently unavailable, to jointly update the global model in each iteration. Our theoretical analysis shows that FedLaAvg attains a convergence rate of $O(1/(N^{1/4} T^{1/2}))$, achieving a sublinear speedup with respect to the total number of clients. We implement and evaluate FedLaAvg on the CIFAR-10 dataset. The evaluation results demonstrate that FedLaAvg indeed achieves a sublinear speedup and attains 4.23% higher test accuracy than FedAvg.
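
A minimal sketch of the latest-averaging idea follows: the server caches the most recent gradient reported by every client and updates the global model with the average of all cached gradients, so clients that are currently unavailable still contribute through their latest gradient. The interfaces (`clients[i](model)` returning a stochastic gradient, `availability(t)` returning the indices of reachable clients) and the zero-initialized cache are assumptions of this sketch, not details taken from the paper.

```python
import numpy as np

def fedlaavg(clients, init_model, rounds, lr, availability):
    """Federated Latest Averaging, sketched: only available clients refresh
    their cached gradient in a round, but the model update averages the
    cached (latest) gradients of ALL clients."""
    model = np.array(init_model, dtype=float)
    latest = [np.zeros_like(model) for _ in clients]     # per-client gradient cache
    for t in range(rounds):
        for i in availability(t):                        # intermittently available
            latest[i] = clients[i](model)                # fresh stochastic gradient
        model -= lr * np.mean(latest, axis=0)            # average over all clients
    return model
```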
