
Multi-Access Point Coordination (MAPC) will be a key feature in next-generation Wi-Fi 8 networks. MAPC aims to improve overall network performance by allowing Access Points (APs) to share time, frequency, and/or spatial resources in a coordinated way, thus alleviating inter-AP contention and enabling new multi-AP channel access strategies. This paper introduces a framework to support periodic MAPC transmissions on top of current Wi-Fi operation. We first focus on the problem of creating multi-AP groups that can transmit simultaneously to leverage Spatial Reuse opportunities. Then, once these groups are created, we study different scheduling algorithms to determine which group will transmit at each MAPC transmission. Two types of algorithms are tested: per-AP and per-Group. While per-AP algorithms base their scheduling decisions on the buffer state of individual APs, per-Group algorithms use the aggregate buffer state of all APs in a group. The obtained results -- targeting worst-case delay -- show that per-AP algorithms outperform per-Group ones because they can guarantee that the AP with a) the most packets, or b) the oldest waiting packet in its buffer is selected.
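
As a toy illustration of the difference between the two scheduling families, the sketch below picks which pre-formed group transmits in the next MAPC transmission, either from the buffer state of a single AP (most packets, or oldest head-of-line packet) or from the aggregate buffer of each group. The data structures (`ap_buffers`, `groups`) and the tie-breaking are illustrative assumptions, not the paper's implementation.

```python
# Toy comparison of per-AP vs per-Group scheduling for MAPC transmissions.
# The buffers and groups below are hypothetical, for illustration only.

from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Packet:
    arrival_time: float  # when the packet entered the AP buffer

# Each AP holds a FIFO buffer of packets.
ap_buffers: Dict[str, List[Packet]] = {
    "AP1": [Packet(0.0), Packet(1.0), Packet(2.5)],
    "AP2": [Packet(0.5)],
    "AP3": [Packet(0.2), Packet(0.3)],
    "AP4": [],
}

# Candidate groups of APs that can transmit simultaneously (spatial reuse).
groups: List[List[str]] = [["AP1", "AP4"], ["AP2", "AP3"]]

def per_ap_most_packets() -> List[str]:
    """Pick the group containing the AP with the largest individual buffer."""
    best_ap = max(ap_buffers, key=lambda ap: len(ap_buffers[ap]))
    return next(g for g in groups if best_ap in g)

def per_ap_oldest_packet(now: float) -> List[str]:
    """Pick the group containing the AP whose head-of-line packet is oldest."""
    def hol_wait(ap: str) -> float:
        buf = ap_buffers[ap]
        return now - buf[0].arrival_time if buf else -1.0
    best_ap = max(ap_buffers, key=hol_wait)
    return next(g for g in groups if best_ap in g)

def per_group_aggregate() -> List[str]:
    """Pick the group with the largest aggregate buffer across its APs."""
    return max(groups, key=lambda g: sum(len(ap_buffers[ap]) for ap in g))

if __name__ == "__main__":
    now = 3.0
    print("per-AP (most packets):  ", per_ap_most_packets())
    print("per-AP (oldest packet): ", per_ap_oldest_packet(now))
    print("per-Group (aggregate):  ", per_group_aggregate())
```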

Related Content

GROUP has long been a premier venue for research on computer-supported cooperative work, human-computer interaction, computer-supported collaborative learning, and socio-technical studies. The conference brings together work from the social sciences, computer science, engineering, design, values, and many other topics related to group work, broadly conceived.
June 21, 2023

The sensitivity of loss reserving techniques to outliers in the data or deviations from model assumptions is a well-known challenge. It has been shown that the popular chain-ladder reserving approach is highly sensitive to such aberrant observations, in that reserve estimates can be shifted significantly in the presence of even one outlier. As a consequence, the chain-ladder reserving technique is non-robust. In this paper we investigate the sensitivity of reserves and mean squared errors of prediction under Mack's Model (Mack, 1993). This is done through the derivation of impact functions, which are calculated by taking the first derivative of the relevant statistic of interest with respect to an observation. We also provide and discuss the impact functions for quantiles when total reserves are assumed to be lognormally distributed. Additionally, comparisons are made between the impact functions for individual accident year reserves under Mack's Model and the Bornhuetter-Ferguson methodology. It is shown that the impact of incremental claims on these statistics of interest varies widely throughout a loss triangle and is heavily dependent on other cells in the triangle. Results are illustrated using data from a Belgian non-life insurer.
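
To make the impact-function idea concrete, the following sketch approximates the derivative numerically: it perturbs a single incremental claim in a toy run-off triangle and measures the resulting shift in the chain-ladder total reserve. The triangle and the finite-difference step are illustrative assumptions; the paper derives closed-form impact functions under Mack's Model rather than numerical ones.

```python
# Numerical sketch of an impact function: sensitivity of the chain-ladder
# total reserve to one incremental claim, via a finite difference.

import numpy as np

def chain_ladder_total_reserve(incremental: np.ndarray) -> float:
    """Total chain-ladder reserve from an incremental run-off triangle.
    Observed cells sit in the upper-left triangle; the rest are NaN."""
    n = incremental.shape[0]
    observed = ~np.isnan(incremental)
    cumulative = np.nancumsum(incremental, axis=1)
    cumulative[~observed] = np.nan
    # Volume-weighted development factors f_j.
    factors = []
    for j in range(n - 1):
        rows = slice(0, n - 1 - j)  # rows with columns j and j+1 both observed
        factors.append(np.sum(cumulative[rows, j + 1]) / np.sum(cumulative[rows, j]))
    # Project each accident year's latest diagonal to ultimate.
    total = 0.0
    for i in range(n):
        latest_col = n - 1 - i
        ultimate = cumulative[i, latest_col] * np.prod(factors[latest_col:])
        total += ultimate - cumulative[i, latest_col]
    return total

def impact(incremental: np.ndarray, cell: tuple, eps: float = 1e-4) -> float:
    """Finite-difference impact of incremental claim `cell` on the total reserve."""
    perturbed = incremental.copy()
    perturbed[cell] += eps
    return (chain_ladder_total_reserve(perturbed)
            - chain_ladder_total_reserve(incremental)) / eps

if __name__ == "__main__":
    nan = np.nan
    triangle = np.array([
        [100.0, 60.0, 30.0],
        [110.0, 70.0, nan],
        [120.0, nan, nan],
    ])
    for cell in [(0, 0), (1, 1), (2, 0)]:
        print(cell, round(impact(triangle, cell), 4))
```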

We present an approach to task scheduling in heterogeneous multi-robot systems. In our setting, the tasks to complete require diverse skills. We assume that each robot is multi-skilled, i.e., each robot offers a subset of the possible skills. This makes the formation of heterogeneous teams (\emph{coalitions}) a requirement for task completion. We present two centralized algorithms to schedule robots across tasks and to form suitable coalitions, assuming stochastic travel times across tasks. The coalitions are dynamic, in that the robots form and disband coalitions as the schedule is executed. The first algorithm we propose guarantees optimality, but its run-time is acceptable only for small problem instances. The second algorithm we propose can tackle large problems with short run-times, and is based on a heuristic approach that typically attains between 1x and 2x the optimal solution cost.
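
A minimal sketch of the coalition-formation ingredient is shown below: for each task, robots are greedily added until their combined skills cover the task's requirements. The robot and skill sets are made up, and this greedy covering heuristic stands in for, and is much simpler than, the optimal and heuristic schedulers proposed in the paper (it also ignores travel times and scheduling order).

```python
# Greedy skill-covering coalition formation (illustrative heuristic only).

from typing import Dict, List, Optional, Set

robots: Dict[str, Set[str]] = {
    "r1": {"lift", "weld"},
    "r2": {"weld", "inspect"},
    "r3": {"lift"},
}

tasks: List[Set[str]] = [{"lift", "weld"}, {"inspect"}]

def form_coalition(required: Set[str], available: Set[str]) -> Optional[Set[str]]:
    """Greedily add the robot covering the most missing skills until done."""
    coalition, missing = set(), set(required)
    while missing:
        best = max(
            (r for r in available - coalition),
            key=lambda r: len(robots[r] & missing),
            default=None,
        )
        if best is None or not (robots[best] & missing):
            return None  # remaining robots cannot cover the task
        coalition.add(best)
        missing -= robots[best]
    return coalition

if __name__ == "__main__":
    free = set(robots)
    for t in tasks:
        c = form_coalition(t, free)
        print(f"task {sorted(t)} -> coalition {sorted(c) if c else None}")
        if c:
            free -= c  # robots stay busy until the task completes (coalitions are dynamic)
```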

We consider Ramp Metering (RM) at the microscopic level subject to vehicle following safety constraints for a freeway with an arbitrary number of on- and off-ramps. The arrival times of vehicles to the on-ramps, as well as their destinations, are modeled by exogenous stochastic processes. Once a vehicle is released from an on-ramp, it accelerates towards the free flow speed if it is not obstructed by another vehicle; once it gets close to another vehicle, it adopts a safe gap vehicle following behavior. The vehicle exits the freeway once it reaches its destination off-ramp. We design traffic-responsive RM policies that maximize the throughput. For a given routing matrix, the throughput of a RM policy is characterized by the set of on-ramp arrival rates for which the expected queue sizes at all the on-ramps remain bounded. The proposed RM policies work in synchronous cycles during which an on-ramp does not release more vehicles than its queue size at the beginning of the cycle. Moreover, all the policies operate under vehicle following safety constraints, where new vehicles are released only if there is sufficient gap between vehicles on the mainline at the moment of release. We provide three mechanisms under which each on-ramp: (i) pauses release for a time interval at the end of a cycle, or (ii) adjusts the release rate during a cycle, or (iii) adopts a conservative safe gap criterion for release during a cycle. All the proposed policies are reactive, meaning that they only require real-time traffic measurements without the need for demand prediction. The throughput of these policies is characterized by studying stochastic stability of the induced Markov chains, and is proven to be maximized when the merging speed at all the on-ramps equals the free flow speed. Simulations are provided to illustrate the performance of our policies and compare with a well-known RM policy from the literature.
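
The sketch below illustrates the cycle-based release logic in its simplest form: an on-ramp freezes its release budget to the queue size at the start of the cycle and releases a vehicle only when the measured mainline gap exceeds a safety threshold. The gap threshold and bookkeeping are illustrative assumptions and do not reproduce the three throughput-maximizing mechanisms analyzed in the paper.

```python
# Simplified cycle-based, traffic-responsive ramp-metering rule (illustrative).

from dataclasses import dataclass

@dataclass
class OnRamp:
    queue: int = 0   # vehicles currently waiting
    quota: int = 0   # releases still allowed in this cycle

    def start_cycle(self) -> None:
        """Freeze the release budget to the queue size at the cycle start."""
        self.quota = self.queue

    def maybe_release(self, mainline_gap_m: float, safe_gap_m: float = 30.0) -> bool:
        """Release one vehicle if the quota and the safety gap allow it."""
        if self.quota > 0 and self.queue > 0 and mainline_gap_m >= safe_gap_m:
            self.queue -= 1
            self.quota -= 1
            return True
        return False

if __name__ == "__main__":
    ramp = OnRamp(queue=3)
    ramp.start_cycle()
    ramp.queue += 2  # arrivals during the cycle do not extend the release budget
    gaps = [40.0, 10.0, 35.0, 50.0, 45.0]  # measured mainline gaps (meters)
    released = sum(ramp.maybe_release(g) for g in gaps)
    print(f"released {released} vehicles, {ramp.queue} still queued")
```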

We consider a system of several collocated nodes sharing a time slotted wireless channel, and seek a MAC (medium access control) that (i) provides low mean delay, (ii) has distributed control (i.e., there is no central scheduler), and (iii) does not require explicit exchange of state information or control signals. The design of such MAC protocols must keep in mind the need for contention access at light traffic, and scheduled access in heavy traffic, leading to the long-standing interest in hybrid, adaptive MACs. Working in the discrete time setting, for the distributed MAC design, we consider a practical information structure where each node has local information and some common information obtained from overhearing. In this setting, "ZMAC" is an existing protocol that is hybrid and adaptive. We approach the problem via two steps: (1) We show that it is sufficient for the policy to be "greedy" and "exhaustive". Limiting the policy to this class reduces the problem to obtaining a queue switching policy at queue emptiness instants. (2) Formulating the delay optimal scheduling as a POMDP (partially observed Markov decision process), we show that the optimal switching rule is Stochastic Largest Queue (SLQ). Using this theory as the basis, we then develop a practical distributed scheduler, QZMAC, which is also tunable. We implement QZMAC on standard off-the-shelf TelosB motes and also use simulations to compare QZMAC with the full-knowledge centralized scheduler, and with ZMAC. We use our implementation to study the impact of false detection while overhearing the common information, and the efficiency of QZMAC. Our simulation results show that the mean delay with QZMAC is close to that of the full-knowledge centralized scheduler.
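
As a rough illustration of the switching rule, the sketch below picks the node with the largest expected backlog given only how long ago each queue was last overheard to be empty, in the spirit of Stochastic Largest Queue. The arrival-rate estimates and bookkeeping are assumptions for the example, not the QZMAC implementation.

```python
# Toy "largest expected queue" switching rule at queue-emptiness instants.

def expected_backlog(arrival_rate: float, slots_since_last_empty: int) -> float:
    """Expected queue length when only the time since the queue was last
    overheard to be empty (common information) is known."""
    return arrival_rate * slots_since_last_empty

def slq_switch(rates, last_empty_slot, current_slot):
    """Switch service to the node with the largest expected backlog."""
    return max(
        range(len(rates)),
        key=lambda i: expected_backlog(rates[i], current_slot - last_empty_slot[i]),
    )

if __name__ == "__main__":
    rates = [0.2, 0.5, 0.1]          # per-slot arrival probabilities (assumed known)
    last_empty_slot = [40, 70, 10]   # last slot each queue was overheard empty
    print("serve node", slq_switch(rates, last_empty_slot, current_slot=100))
```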

A growing line of work shows how learned predictions can be used to break through worst-case barriers to improve the running time of an algorithm. However, incorporating predictions into data structures with strong theoretical guarantees remains underdeveloped. This paper takes a step in this direction by showing that predictions can be leveraged in the fundamental online list labeling problem. In the problem, n items arrive over time and must be stored in sorted order in an array of size Theta(n). The array slot of an element is its label and the goal is to maintain sorted order while minimizing the total number of elements moved (i.e., relabeled). We design a new list labeling data structure and bound its performance in two models. In the worst-case learning-augmented model, we give guarantees in terms of the error in the predictions. Our data structure provides strong guarantees: it is optimal for any prediction error and guarantees the best-known worst-case bound even when the predictions are entirely erroneous. We also consider a stochastic error model and bound the performance in terms of the expectation and variance of the error. Finally, the theoretical results are demonstrated empirically. In particular, we show that our data structure has strong performance on real temporal data sets where predictions are constructed from elements that arrived in the past, as is typically done in a practical use case.
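
A minimal sketch of prediction-guided labeling is given below: each arriving item is placed at the free slot closest to its predicted position, and neighbouring items are relabeled only when the valid window for the item is full. The array is assumed large enough that a free slot always exists to the right when a shift is needed; the rebalancing machinery behind the paper's guarantees is omitted.

```python
# Prediction-guided list labeling sketch: None marks an empty array slot.

from typing import List, Optional

def insert(slots: List[Optional[float]], value: float, predicted_slot: int) -> int:
    """Insert `value` keeping index order = value order; return #relabels."""
    occupied = [i for i, v in enumerate(slots) if v is not None]
    lo = max((i for i in occupied if slots[i] < value), default=-1) + 1
    hi = min((i for i in occupied if slots[i] > value), default=len(slots))
    free = [i for i in range(lo, hi) if slots[i] is None]
    if free:  # a slot in the valid window is available: no relabels needed
        target = min(free, key=lambda i: abs(i - predicted_slot))
        slots[target] = value
        return 0
    # Window full: shift the occupied run starting at `hi` one slot rightward.
    run_end = hi
    while slots[run_end] is not None:
        run_end += 1
    slots[hi + 1:run_end + 1] = slots[hi:run_end]
    slots[hi] = value
    return run_end - hi  # each shifted item received a new label

if __name__ == "__main__":
    slots: List[Optional[float]] = [None] * 8
    arrivals = [(5.0, 4), (2.0, 1), (7.0, 6), (3.0, 1), (4.0, 2), (4.5, 3)]
    moves = sum(insert(slots, value, pred) for value, pred in arrivals)
    print(slots, "total relabels:", moves)
```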

We introduce variational sequential Optimal Experimental Design (vsOED), a new method for optimally designing a finite sequence of experiments under a Bayesian framework and with information-gain utilities. Specifically, we adopt a lower bound estimator for the expected utility through variational approximation to the Bayesian posteriors. The optimal design policy is solved numerically by simultaneously maximizing the variational lower bound and performing policy gradient updates. We demonstrate this general methodology for a range of OED problems targeting parameter inference, model discrimination, and goal-oriented prediction. These cases encompass explicit and implicit likelihoods, nuisance parameters, and physics-based partial differential equation models. Our vsOED results indicate substantially improved sample efficiency and reduced number of forward model simulations compared to previous sequential design algorithms.
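
For intuition, the sketch below scores candidate designs for a single linear-Gaussian experiment with a Monte Carlo estimate of a Barber-Agakov-style variational lower bound on the expected information gain. The model, the fixed Gaussian variational posterior, and the grid of designs are simplifications; the paper instead learns a design policy for a sequence of experiments via policy gradients.

```python
# Monte Carlo estimate of a variational lower bound on expected information
# gain for a one-experiment linear-Gaussian toy problem.

import numpy as np

rng = np.random.default_rng(0)

def eig_lower_bound(design: float, sigma: float = 1.0, n_mc: int = 20_000) -> float:
    """Estimate E[log q(theta|y)] + H(theta) for the model y = design*theta + noise."""
    theta = rng.normal(size=n_mc)                        # prior N(0, 1)
    y = design * theta + sigma * rng.normal(size=n_mc)   # simulated observations
    # Gaussian variational posterior q(theta | y) = N(a*y, s2) (moment-matched here).
    a = design / (design**2 + sigma**2)
    s2 = sigma**2 / (design**2 + sigma**2)
    log_q = -0.5 * np.log(2 * np.pi * s2) - (theta - a * y) ** 2 / (2 * s2)
    prior_entropy = 0.5 * np.log(2 * np.pi * np.e)       # entropy of N(0, 1)
    return float(np.mean(log_q) + prior_entropy)

if __name__ == "__main__":
    for d in [0.1, 0.5, 1.0, 2.0]:
        print(f"design={d:>4}: EIG lower bound ~ {eig_lower_bound(d):.3f}")
```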

Network interference, where the outcome of an individual is affected by the treatment assignment of those in their social network, is pervasive in real-world settings. However, it poses a challenge to estimating causal effects. We consider the task of estimating the total treatment effect (TTE), or the difference between the average outcomes of the population when everyone is treated versus when no one is, under network interference. Under a Bernoulli randomized design, we provide an unbiased estimator for the TTE when network interference effects are constrained to low order interactions among neighbors of an individual. We make no assumptions on the graph other than bounded degree, allowing for well-connected networks that may not be easily clustered. We derive a bound on the variance of our estimator and show in simulated experiments that it performs well compared with standard estimators for the TTE. We also derive a minimax lower bound on the mean squared error of our estimator which suggests that the difficulty of estimation can be characterized by the degree of interactions in the potential outcomes model. We also prove that our estimator is asymptotically normal under boundedness conditions on the network degree and potential outcomes model. Central to our contribution is a new framework for balancing model flexibility and statistical complexity as captured by this low order interactions structure.
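
The simulation sketch below illustrates the setting in its simplest, order-1 (linear) form: outcomes are linear in the treatments of a bounded set of neighbors, treatments are assigned by a Bernoulli(p) design, and a mean-zero exposure weighting yields an unbiased TTE estimate under that linear model. The graph, coefficients, and weighting are illustrative and correspond to a special case, not the paper's general low-order estimator.

```python
# TTE estimation under order-1 (linear) network interference, Bernoulli design.

import numpy as np

rng = np.random.default_rng(1)
n, p = 500, 0.3

# Bounded-degree neighborhoods (each unit influenced by itself plus a few others).
neighbors = [np.unique(np.append(rng.choice(n, size=3), i)) for i in range(n)]
c0 = rng.normal(1.0, 0.2, size=n)                              # baseline outcomes
c = [rng.uniform(0.0, 1.0, size=len(nb)) for nb in neighbors]  # interference effects

true_tte = sum(ci.sum() for ci in c) / n  # everyone treated vs no one treated

def estimate_tte(z: np.ndarray) -> float:
    """Unbiased estimator of the TTE under the linear interference model."""
    est = 0.0
    for i in range(n):
        y_i = c0[i] + c[i] @ z[neighbors[i]]                 # realized outcome
        w_i = np.sum(z[neighbors[i]] - p) / (p * (1 - p))    # mean-zero exposure weight
        est += y_i * w_i
    return est / n

estimates = [estimate_tte(rng.binomial(1, p, size=n)) for _ in range(200)]
print(f"true TTE = {true_tte:.3f}, mean estimate = {np.mean(estimates):.3f}")
```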

Learning interpretable representations of neural dynamics at a population level is a crucial first step to understanding how observed neural activity relates to perception and behavior. Models of neural dynamics often focus on either low-dimensional projections of neural activity, or on learning dynamical systems that explicitly relate to the neural state over time. We discuss how these two approaches are interrelated by considering dynamical systems as representative of flows on a low-dimensional manifold. Building on this concept, we propose a new decomposed dynamical system model that represents complex non-stationary and nonlinear dynamics of time series data as a sparse combination of simpler, more interpretable components. Our model is trained through a dictionary learning procedure, where we leverage recent results in tracking sparse vectors over time. The decomposed nature of the dynamics is more expressive than previous switched approaches for a given number of parameters and enables modeling of overlapping and non-stationary dynamics. In both continuous-time and discrete-time instructional examples we demonstrate that our model can well approximate the original system, learn efficient representations, and capture smooth transitions between dynamical modes, focusing on intuitive low-dimensional non-stationary linear and nonlinear systems. Furthermore, we highlight our model's ability to efficiently capture and demix population dynamics generated from multiple independent subnetworks, a task that is computationally impractical for switched models. Finally, we apply our model to neural "full brain" recordings of C. elegans data, illustrating a diversity of dynamics that is obscured when classified into discrete states.
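
The sketch below illustrates the decomposition idea on a toy two-dimensional system: the state update is a time-varying mixture of two fixed linear operators, and the mixing coefficients are recovered from the trajectory by per-step least squares. The operators and the smooth coefficient drift are made up, and the sparsity penalty and dictionary learning used in the paper are omitted.

```python
# Decomposed linear dynamics: x_{t+1} = sum_k c_k(t) * F_k x_t, with the
# mixing coefficients c_k(t) recovered by least squares from the trajectory.

import numpy as np

theta = 0.15
F1 = np.array([[np.cos(theta), -np.sin(theta)],   # slow rotation
               [np.sin(theta),  np.cos(theta)]])
F2 = np.array([[0.95, 0.0],                       # mild contraction
               [0.0, 0.95]])
ops = [F1, F2]

# Simulate with coefficients that drift smoothly between the two operators.
T, x = 200, np.array([1.0, 0.0])
states, coeffs = [x], []
for t in range(T):
    c = np.array([t / T, 1.0 - t / T])            # true mixing coefficients
    x = sum(ck * (Fk @ x) for ck, Fk in zip(c, ops))
    coeffs.append(c)
    states.append(x)

# Recover c(t) at each step by solving x_{t+1} ~= [F1 x_t, F2 x_t] c.
recovered = []
for t in range(T):
    A = np.column_stack([Fk @ states[t] for Fk in ops])   # 2x2 design matrix
    c_hat, *_ = np.linalg.lstsq(A, states[t + 1], rcond=None)
    recovered.append(c_hat)

err = np.max(np.abs(np.array(recovered) - np.array(coeffs)))
print(f"max coefficient recovery error: {err:.2e}")
```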

Communication plays a vital role in multi-agent systems, fostering collaboration and coordination. However, in real-world scenarios where communication is bandwidth-limited, existing multi-agent reinforcement learning (MARL) algorithms often provide agents with a binary choice: either transmitting a fixed number of bytes or no information at all. This limitation hinders the ability to effectively utilize the available bandwidth. To overcome this challenge, we present the Dynamic Size Message Scheduling (DSMS) method, which introduces a finer-grained approach to scheduling by considering the actual size of the information to be exchanged. Our contribution lies in adaptively adjusting message sizes using Fourier transform-based compression techniques, enabling agents to tailor their messages to match the allocated bandwidth while striking a balance between information loss and transmission efficiency. Receiving agents can reliably decompress the messages using the inverse Fourier transform. Experimental results demonstrate that DSMS significantly improves performance in multi-agent cooperative tasks by optimizing the utilization of bandwidth and effectively balancing information value.
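
A small sketch of the Fourier-based compression step is given below: the sender keeps only as many FFT coefficients as a bandwidth budget allows, and the receiver reconstructs the message with the inverse FFT. The test signal and the coefficient budgets are illustrative assumptions; the scheduling and MARL integration are not shown.

```python
# Fourier-based message compression: keep the largest rFFT coefficients,
# reconstruct with the inverse transform, and measure the fidelity loss.

import numpy as np

rng = np.random.default_rng(42)

def compress(message: np.ndarray, n_coeffs: int):
    """Keep the n_coeffs largest-magnitude rFFT coefficients (indices, values)."""
    spectrum = np.fft.rfft(message)
    top = np.argsort(np.abs(spectrum))[-n_coeffs:]
    return top, spectrum[top], len(message)

def decompress(indices, values, length: int) -> np.ndarray:
    """Rebuild the message from the transmitted coefficients."""
    spectrum = np.zeros(length // 2 + 1, dtype=complex)
    spectrum[indices] = values
    return np.fft.irfft(spectrum, n=length)

if __name__ == "__main__":
    # A smooth observation vector an agent might want to share.
    t = np.linspace(0, 1, 64)
    message = (np.sin(2 * np.pi * 3 * t) + 0.3 * np.sin(2 * np.pi * 7 * t)
               + 0.05 * rng.normal(size=64))
    for k in (2, 4, 8, 16):   # coefficients allowed by the bandwidth budget
        rebuilt = decompress(*compress(message, k))
        err = np.linalg.norm(message - rebuilt) / np.linalg.norm(message)
        print(f"{k:>2} coefficients -> relative error {err:.3f}")
```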

For ultra-reliable, low-latency communications (URLLC) applications such as mission-critical industrial control and extended reality (XR), it is important to ensure the communication quality of individual packets. Prior studies have considered Probabilistic Per-packet Real-time Communications (PPRC) guarantees for single-cell, single-channel networks, but they have not considered real-world complexities such as inter-cell interference in large-scale networks with multiple communication channels and heterogeneous real-time requirements. To fill the gap, we propose a real-time scheduling algorithm based on \emph{local-deadline-partition (LDP)}; the LDP algorithm ensures PPRC guarantees for large-scale, multi-channel networks with heterogeneous real-time constraints. We also address the associated challenge of schedulability testing. In particular, we propose the concept of \emph{feasible set}, identify a closed-form sufficient condition for the schedulability of PPRC traffic, and then propose an efficient distributed algorithm for the schedulability test. We numerically study the properties of the LDP algorithm and observe that it significantly improves the network capacity of URLLC, for instance, by a factor of 5-20 as compared with a typical method. Furthermore, the PPRC traffic supportable by the LDP algorithm is significantly higher than that of state-of-the-art comparison schemes. This demonstrates the potential of fine-grained scheduling algorithms for URLLC wireless systems operating under inter-cell interference.
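
As a generic point of reference (not the paper's method), the sketch below checks a standard utilization-based necessary condition for periodic per-link traffic on multiple channels: the total demand rate must not exceed the number of channels. It is intentionally much weaker than the paper's feasible-set construction and closed-form sufficient condition for PPRC traffic, and the flow parameters are made up.

```python
# Toy schedulability illustration: a utilization-based necessary condition
# for periodic per-link traffic sharing m channels.

from dataclasses import dataclass
from typing import List

@dataclass
class Flow:
    slots_needed: int   # transmissions required per period (reliability budget)
    period: int         # relative deadline, in slots

def passes_utilization_check(flows: List[Flow], n_channels: int) -> bool:
    """Necessary condition: total demand rate must not exceed channel supply."""
    utilization = sum(f.slots_needed / f.period for f in flows)
    return utilization <= n_channels

if __name__ == "__main__":
    flows = [Flow(8, 10), Flow(3, 20), Flow(1, 5), Flow(4, 40)]
    for m in (1, 2):
        verdict = "may be schedulable" if passes_utilization_check(flows, m) \
            else "definitely not schedulable"
        print(f"{m} channel(s): {verdict}")
```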
