
Age of Information minimization problems have been extensively studied in the framework of real-time monitoring applications. In this paper, we consider the problem of monitoring the state of an unknown remote source that evolves according to a Markov process. At each time slot, a central scheduler decides whether or not to schedule the source in order to receive a new status update, with the goal of minimizing the Mean Age of Incorrect Information (MAoII). When the scheduler knows the source parameters, we formulate the minimization problem as a Markov Decision Process (MDP) and prove that the optimal solution is a threshold-based policy. When the source's parameters are unknown, the difficulty lies in finding a strategy with a good trade-off between exploitation and exploration: the scheduler must jointly estimate the unknown parameters (exploration) and minimize the MAoII (exploitation). In our system model, however, the source can only be explored when the monitor decides to schedule it. A greedy approach therefore risks stopping exploration permanently: if at some point the estimated parameters of the Markovian source are such that the corresponding optimal policy is never to transmit, the estimates can no longer be improved, which may significantly degrade the performance of the algorithm. To avoid this, we develop a new learning algorithm that balances exploration and exploitation. We then analyze its performance theoretically against a genie solution, proving that the regret at time T is bounded by O(log T). Finally, we provide numerical results highlighting the performance of the derived policy compared to the greedy approach.
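
To make the exploration issue concrete, here is a minimal Python sketch of a threshold-style scheduler with forced exploration slots, assuming a binary Markov source with an unknown flip probability. The class name `MAoIIScheduler`, the fixed sampling cost `c`, and the crude parameter estimate are illustrative stand-ins, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

class MAoIIScheduler:
    """Threshold-style scheduler with forced exploration slots (illustrative)."""

    def __init__(self):
        self.n_obs = 1        # sampling events seen (starts at 1 for smoothing)
        self.n_mismatch = 0   # times the revealed state differed from the belief
        self.aoii = 0         # current age of incorrect information
        # Forced-exploration slots with logarithmic density, so exploration
        # never stops even if the threshold rule says "do not schedule".
        self.forced = {int(np.ceil(1.5 ** k)) for k in range(1, 60)}

    def p_hat(self):
        # Crude estimate of how often the source has drifted away from the
        # monitor's belief by the time it is sampled (a stand-in for a proper
        # transition-probability estimator).
        return self.n_mismatch / self.n_obs

    def threshold(self):
        # Placeholder threshold: schedule once the AoII exceeds a value that
        # trades a fixed sampling cost c against the estimated drift rate.
        c = 2.0
        return c * (1.0 - self.p_hat())

    def decide(self, t):
        return (t in self.forced) or (self.aoii >= self.threshold())

def run(p_flip=0.15, horizon=2000):
    sched = MAoIIScheduler()
    source, belief, total_aoii = 0, 0, 0
    for t in range(1, horizon + 1):
        source ^= int(rng.random() < p_flip)        # binary Markov source
        if sched.decide(t):
            sched.n_obs += 1
            sched.n_mismatch += int(source != belief)
            belief, sched.aoii = source, 0          # fresh update received
        else:
            sched.aoii = 0 if belief == source else sched.aoii + 1
        total_aoii += sched.aoii
    return total_aoii / horizon, sched.p_hat()

print(run())
```

Without the forced slots, a pessimistic early estimate can push the threshold so high that the scheduler never samples again, which is exactly the failure mode the abstract describes.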

Related content

Bayesian adaptive experimental design is a form of active learning, which chooses samples to maximize the information they give about uncertain parameters. Prior work has shown that other forms of active learning can suffer from active learning bias, where unrepresentative sampling leads to inconsistent parameter estimates. We show that active learning bias can also afflict Bayesian adaptive experimental design, depending on model misspecification. We analyze the case of estimating a linear model, and show that worse misspecification implies more severe active learning bias. At the same time, model classes incorporating more "noise" - i.e., specifying higher inherent variance in observations - suffer less from active learning bias. Finally, we demonstrate empirically that insights from the linear model can predict the presence and degree of active learning bias in nonlinear contexts, namely in a (simulated) preference learning experiment.
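
The linear-model analysis can be illustrated with a small simulation: a quadratic data-generating process is fit with a straight line, once with uniformly sampled inputs and once with an information-driven design that concentrates samples at the extremes. The quadratic ground truth, design region, and noise level below are assumptions for illustration, not the paper's experiment.

```python
import numpy as np

rng = np.random.default_rng(1)

def true_f(x):
    # Assumed misspecification: the data-generating process is quadratic,
    # while the analyst fits a straight line.
    return 1.0 + 0.5 * x + 2.0 * x ** 2

def fit_line(x, y):
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0]     # (intercept, slope)

def experiment(active, n=5000, noise=0.5):
    if active:
        # Information-gain-driven design for a *linear* model piles samples
        # onto the extremes of the design region, exactly where the quadratic
        # misspecification is largest.
        x = rng.choice([-1.0, 1.0], size=n)
    else:
        x = rng.uniform(-1.0, 1.0, size=n)          # representative sampling
    y = true_f(x) + rng.normal(0.0, noise, size=n)
    return fit_line(x, y)

# The slope estimate stays near 0.5 either way, but the intercept fitted under
# the active design drifts away from the best linear fit under the population
# (uniform) distribution -- the active learning bias discussed above.
print("passive fit:", experiment(False))
print("active  fit:", experiment(True))
```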

Reinforcement Learning (RL) algorithms are known to scale poorly to environments with many available actions, requiring numerous samples to learn an optimal policy. The traditional approach of considering the same fixed action space in every possible state implies that the agent, while learning to maximize its reward, must also learn to ignore irrelevant actions such as $\textit{inapplicable actions}$ (i.e., actions that have no effect on the environment when performed in a given state). Knowing this information can reduce the sample complexity of RL algorithms by masking the inapplicable actions out of the policy distribution, so that only actions relevant to finding an optimal policy are explored. This is typically done in an ad-hoc manner with hand-crafted domain logic added to the RL algorithm. In this paper, we propose a more systematic approach to introducing this knowledge into the algorithm. We (i) standardize the way knowledge can be manually specified to the agent; and (ii) present a new framework to autonomously learn these state-dependent action constraints jointly with the policy. We show experimentally that learning inapplicable actions greatly improves the sample efficiency of the algorithm by providing a reliable signal for masking out irrelevant actions. Moreover, we demonstrate that, thanks to the transferability of the acquired knowledge, it can be reused in other tasks to make the learning process more efficient.
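
A common way to realize such masking, sketched below under the assumption that applicability is given as a boolean vector per state, is to push the logits of inapplicable actions to negative infinity before the softmax; the function name `masked_policy` is illustrative, not the paper's API.

```python
import numpy as np

def masked_policy(logits, applicable):
    """Renormalize the policy over applicable actions only: inapplicable
    actions receive probability 0 by pushing their logits to -inf before
    the softmax."""
    masked = np.where(applicable, logits, -np.inf)
    z = np.exp(masked - masked.max())
    return z / z.sum()

# Toy example: four actions, action 2 has no effect in the current state.
logits = np.array([0.2, 1.3, 2.0, -0.5])
applicable = np.array([True, True, False, True])
print(masked_policy(logits, applicable))   # probability of action 2 is 0
```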

Image generation and image completion are rapidly evolving fields, thanks to machine learning algorithms that are able to realistically replace missing pixels. However, generating large high-resolution images with a high level of detail presents important computational challenges. In this work, we formulate the image generation task as the completion of an image in which one out of four corners is missing. We then extend this approach to iteratively build larger images with the same level of detail. Our goal is to obtain a scalable methodology to generate high-resolution samples typically found in satellite imagery data sets. We introduce a conditional progressive Generative Adversarial Network (GAN) that generates the missing tile in an image, using as input three initial adjacent tiles encoded in a latent vector by a Wasserstein auto-encoder. We focus on a set of images used by the United Nations Satellite Centre (UNOSAT) to train flood detection tools, and validate the quality of the synthetic images in a realistic setup.
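
The iterative growth idea can be sketched as follows. The `complete` function below is a trivial blending stand-in for the trained conditional GAN, and the tile size and boundary handling are assumptions made only so the growth loop is runnable.

```python
import numpy as np

TILE = 32   # tile side length (illustrative)

def complete(top_left, top_right, bottom_left):
    # Stand-in for the conditional GAN: blend the three known neighbours so
    # the growth loop below runs without a trained model.
    return (top_left + top_right + bottom_left) / 3.0

def grow(seed, n_rows, n_cols):
    """Iteratively extend a seed tile into an (n_rows x n_cols) grid of tiles,
    always synthesizing the tile whose top-left, top-right and bottom-left
    neighbours are already available."""
    grid = np.zeros((n_rows, n_cols, TILE, TILE))
    grid[0, 0] = seed
    # First row / first column: reuse the seed as a crude boundary condition.
    for j in range(1, n_cols):
        grid[0, j] = seed
    for i in range(1, n_rows):
        grid[i, 0] = seed
        for j in range(1, n_cols):
            grid[i, j] = complete(grid[i - 1, j - 1], grid[i - 1, j], grid[i, j - 1])
    return grid.transpose(0, 2, 1, 3).reshape(n_rows * TILE, n_cols * TILE)

image = grow(np.random.default_rng(2).random((TILE, TILE)), 4, 4)
print(image.shape)      # (128, 128)
```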

Pessimism is of great importance in offline reinforcement learning (RL). One broad category of offline RL algorithms fulfills pessimism by explicit or implicit behavior regularization. However, most of them only consider policy divergence as behavior regularization, ignoring how the offline state distribution differs from that of the learning policy, which may lead to under-pessimism for some states and over-pessimism for others. Taking this problem into account, we propose a principled algorithmic framework for offline RL, called \emph{State-Aware Proximal Pessimism} (SA-PP). The key idea of SA-PP is to leverage the discounted stationary state distribution ratios between the learning policy and the offline dataset to modulate the degree of behavior regularization in a state-wise manner, so that pessimism can be implemented in a more appropriate way. We first provide theoretical justification for the superiority of SA-PP over previous algorithms, demonstrating that SA-PP produces a lower suboptimality upper bound in a broad range of settings. Furthermore, we propose a new algorithm named \emph{State-Aware Conservative Q-Learning} (SA-CQL), which builds SA-PP upon the representative CQL algorithm with the help of DualDICE for estimating the discounted stationary state distribution ratios. Extensive experiments on a standard offline RL benchmark show that SA-CQL outperforms popular baselines on a large portion of the benchmark tasks and attains the highest average return.
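
As a rough illustration of state-wise modulation, the tabular sketch below scales a CQL-style conservative penalty by an estimated state-distribution ratio. The function, its arguments, and the log-sum-exp penalty form are simplified stand-ins, not the SA-CQL implementation.

```python
import numpy as np

def sa_penalty(q, data_actions, state_ratio, alpha=1.0):
    """Illustrative state-aware conservative penalty on a tabular Q-table.

    q:            (S, A) Q-values
    data_actions: (S,)   action recorded in the offline data for each state
    state_ratio:  (S,)   estimated d_pi(s) / d_data(s), e.g. from DualDICE
    The CQL-style term (logsumexp over actions minus the data action's value)
    is scaled per state by alpha * ratio, so pessimism is stronger in states
    the learning policy visits more often than the dataset covers.
    """
    m = q.max(axis=1, keepdims=True)
    lse = np.log(np.exp(q - m).sum(axis=1)) + m[:, 0]
    data_q = q[np.arange(q.shape[0]), data_actions]
    return alpha * state_ratio * (lse - data_q)

q = np.array([[1.0, 2.0], [0.5, 0.1]])
print(sa_penalty(q, np.array([0, 1]), np.array([2.0, 0.5])))
```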

Dynamic symbolic execution (DSE) is a powerful test generation approach based on an exploration of the path space of the program under test. Well adapted to path coverage, this approach is, however, less efficient for conditions, decisions, and advanced coverage criteria (such as multiple conditions, weak mutations, and boundary testing), or for user-provided test objectives. While theoretical solutions for adapting DSE to a large set of criteria have been proposed, they have never been integrated into publicly available testing tools. This paper presents a first integration of an optimized test generation strategy for advanced coverage criteria into a popular open-source testing tool based on DSE, namely Klee. The integration is performed in a fully black-box manner and can therefore inspire an easy integration into other similar tools. The resulting version of the tool is publicly available. We present the design of the proposed technique and evaluate it on several benchmarks. Our results confirm the benefits of the proposed tool for advanced coverage criteria.

This paper presents the Gatekeeper algorithm, a real-time method to guarantee the safety of a robotic system operating in environments that are unknown and dynamic. Given a nominal planner designed to meet mission objectives, Gatekeeper extends the nominal trajectories using backup controllers, and determines a control policy that is certified safe for all future time using the currently available information. We demonstrate the algorithm on a dynamic aerial firefighting mission, and show reduced conservatism relative to existing methods. The algorithm was also demonstrated onboard a quadrotor, where a map of the environment was built online, and the Gatekeeper algorithm prevented a human pilot from flying the quadrotor into obstacles and unknown regions.
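
A minimal sketch of the commit logic follows, assuming a 2D point robot, a hold-position backup controller, and a list of known obstacle centers; names such as `gatekeeper_step` and `backup_rollout` are illustrative, not the authors' code.

```python
import numpy as np

def backup_rollout(state, steps=50):
    # Stand-in backup controller: hold position (a trivially safe maneuver
    # once the current position is verified to be collision-free).
    return np.repeat(np.asarray(state, dtype=float)[None, :], steps, axis=0)

def is_safe(traj, obstacles, radius=1.0):
    # Safety check against the *currently known* obstacle map only.
    return all(np.linalg.norm(p - o) >= radius for p in traj for o in obstacles)

def gatekeeper_step(committed, nominal_prefix, obstacles):
    """Extend the nominal prefix with the backup rollout and commit the result
    only if the whole extended trajectory is verified safe; otherwise keep the
    previously committed (already certified) trajectory."""
    candidate = np.vstack([nominal_prefix, backup_rollout(nominal_prefix[-1])])
    return candidate if is_safe(candidate, obstacles) else committed

committed = np.zeros((1, 2))                      # initial state, assumed safe
nominal = np.array([[0.0, 0.0], [0.5, 0.0], [1.0, 0.0]])
committed = gatekeeper_step(committed, nominal, obstacles=[np.array([3.0, 0.0])])
print(committed.shape)
```

The key property mirrored here is that the robot always has a certified-safe trajectory to fall back on, even when the nominal planner's output cannot be verified with the current map.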

Real-world face recognition applications often deal with suboptimal image quality or resolution due to different capturing conditions such as various subject-to-camera distances, poor camera settings, or motion blur. This characteristic has a non-negligible effect on performance. Recent cross-resolution face recognition approaches used simple, arbitrary, and unrealistic down- and up-scaling techniques to measure robustness against real-world edge cases in image quality. We therefore propose a new standardized benchmark dataset and evaluation protocol derived from the famous Labeled Faces in the Wild (LFW). In contrast to previous derivatives, which focus on pose, age, similarity, and adversarial attacks, our Cross-Quality Labeled Faces in the Wild (XQLFW) maximizes the quality difference. It uses synthetically degraded images only when necessary, and these degradations are more realistic. The proposed dataset is then used to further investigate the influence of image quality on several state-of-the-art approaches. With XQLFW, we show that these models perform differently in cross-quality cases, and hence that their generalizing capability is not accurately predicted by their performance on LFW. Additionally, we report baseline accuracy with recent deep learning models explicitly trained for cross-resolution applications and evaluate their susceptibility to image quality. To encourage further research in cross-resolution face recognition and promote the assessment of image quality robustness, we publish the database and code for evaluation.
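
As an illustration of the kind of down- and up-scaling degradation being discussed, the snippet below block-averages an image down and upsamples it back with nearest-neighbour interpolation. This is a deliberately crude stand-in; the XQLFW pipeline applies more realistic degradations.

```python
import numpy as np

def degrade(img, factor=4):
    """Crude cross-quality degradation: block-average downscaling followed by
    nearest-neighbour upscaling back to the original resolution."""
    h, w = img.shape[:2]
    h2, w2 = h - h % factor, w - w % factor
    small = img[:h2, :w2].reshape(
        h2 // factor, factor, w2 // factor, factor, -1
    ).mean(axis=(1, 3))
    return np.repeat(np.repeat(small, factor, axis=0), factor, axis=1)

img = np.random.default_rng(3).random((112, 112, 3))   # placeholder face crop
low = degrade(img)
print(img.shape, low.shape)
```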

In this work, we consider the problem of jointly minimizing the average cost of sampling and transmitting status updates by users over a wireless channel subject to average Age of Information (AoI) constraints. Errors may occur during transmission, and a scheduling policy has to decide whether the users sample a new packet or attempt retransmission of the previously sampled packet. The cost consists of both sampling and transmission costs, and sampling a new packet after a failure imposes an additional cost on the system. We formulate a stochastic optimization problem with the average cost as the objective under average AoI constraints. To solve this problem, we propose three scheduling policies: a) a dynamic policy that is centralized and requires full knowledge of the state of the system, and b) two stationary randomized policies that require no knowledge of the state of the system. We utilize tools from Lyapunov optimization theory to derive the dynamic policy and prove that its solution is arbitrarily close to the optimal one. To derive the randomized policies, we model the system as a Discrete Time Markov Chain (DTMC) and provide closed-form and approximate expressions for the average AoI and its distribution under each randomized policy. Simulation results show the importance of providing the option to transmit an old packet in order to minimize the total average cost.
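
A per-slot decision of this flavor can be sketched with a drift-plus-penalty rule: a virtual queue Q tracks the average-AoI constraint (updated as Q ← max(Q + achieved AoI − constraint, 0)), and the action minimizing V·(expected cost) + Q·(expected AoI) is chosen. The cost values, success probability, and the treatment of the pending packet's age below are illustrative assumptions, not the paper's exact policy.

```python
def slot_decision(Q, aoi, pending_age, costs, p_succ, V=10.0):
    """Illustrative drift-plus-penalty rule for one user in one slot.

    Q:           virtual queue associated with the average-AoI constraint
    aoi:         current age of information at the receiver
    pending_age: age of the previously sampled (not yet delivered) packet
    costs:       dict with 'sample' and 'transmit' costs
    p_succ:      channel success probability
    """
    # On success, a freshly sampled packet resets the AoI to 1, while a
    # retransmitted (older) packet only brings it down to pending_age + 1.
    options = {
        "idle": Q * (aoi + 1),
        "retransmit": V * costs["transmit"]
        + Q * (p_succ * (pending_age + 1) + (1 - p_succ) * (aoi + 1)),
        "sample_new": V * (costs["sample"] + costs["transmit"])
        + Q * (p_succ * 1.0 + (1 - p_succ) * (aoi + 1)),
    }
    return min(options, key=options.get)

print(slot_decision(Q=5.0, aoi=4, pending_age=2,
                    costs={"sample": 1.0, "transmit": 0.5}, p_succ=0.7))
```

The retransmission option is cheaper but delivers staler information, which is exactly the trade-off the simulation results above refer to.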

We consider the problem of enhancing the delivery of real-time traffic in wireless networks using bandwidth sharing between operators. A key characteristic of real-time traffic is that a packet has to be delivered within a delay deadline for it to be useful. The abundance of real-time traffic is evident in the popularity of applications like video and audio conferencing, which increased significantly during the COVID-19 period. We propose a sharing and scheduling policy which involves dynamically sharing a portion of one operator's bandwidth with another operator. We provide strong theoretical guarantees for the policy. We also evaluate its performance via extensive simulations, which show significant improvements of up to 90% in the ability to carry real-time traffic when using the policy. We also explore how the improvements from bandwidth sharing depend on the amount of sharing, and on additional traffic characteristics.
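
For intuition only, here is a toy per-slot sharing rule in which one operator serves its own deadline-constrained packets earliest-deadline-first and then lends a capped fraction of its idle channels to the other operator. The parameter `share_fraction` and the queue representation are assumptions and do not reproduce the proposed policy.

```python
import heapq

def schedule_slot(own_queue, guest_queue, channels, share_fraction=0.3):
    """Serve own packets earliest-deadline-first, then lend up to
    share_fraction of the channels (if idle) to the other operator."""
    served_own, served_guest = [], []
    while own_queue and len(served_own) < channels:
        served_own.append(heapq.heappop(own_queue))
    idle = channels - len(served_own)
    lendable = int(share_fraction * channels)
    while guest_queue and len(served_guest) < min(idle, lendable):
        served_guest.append(heapq.heappop(guest_queue))
    return served_own, served_guest

# Packets are (deadline, id) pairs; heapq orders them by deadline.
own = [(2, "a1"), (5, "a2")]
guest = [(1, "b1"), (3, "b2")]
heapq.heapify(own)
heapq.heapify(guest)
print(schedule_slot(own, guest, channels=4))
```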

Recent advances in maximizing mutual information (MI) between the source and target have demonstrated its effectiveness in text generation. However, previous works paid little attention to modeling the backward network of MI (i.e., the dependency from the target to the source), which is crucial to the tightness of the variational information maximization lower bound. In this paper, we propose Adversarial Mutual Information (AMI): a text generation framework formulated as a novel saddle-point (min-max) optimization aiming to identify joint interactions between the source and target. Within this framework, the forward and backward networks are able to iteratively promote or demote each other's generated instances by comparing the real and synthetic data distributions. We also develop a latent noise sampling strategy that leverages random variations in the high-level semantic space to enhance long-term dependency in the generation process. Extensive experiments on different text generation tasks demonstrate that the proposed AMI framework can significantly outperform several strong baselines, and we also show that AMI has the potential to lead to a tighter lower bound of maximum mutual information for the variational information maximization problem.
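
The saddle-point structure can be illustrated on a toy objective with alternating descent/ascent steps. The quadratic function and learning rates below are placeholders: the sketch only mirrors the min-max optimization pattern, not AMI's actual losses over forward and backward text-generation networks.

```python
import torch

# Toy saddle-point objective f(x, y) = x*y + 0.1*x^2 - 0.1*y^2: strongly
# convex in x (the minimizing player) and strongly concave in y (the
# maximizing player), with its unique saddle point at (0, 0).
x = torch.tensor(1.0, requires_grad=True)   # stands in for one player's parameters
y = torch.tensor(1.0, requires_grad=True)   # stands in for the other player's parameters
opt_x = torch.optim.SGD([x], lr=0.05)
opt_y = torch.optim.SGD([y], lr=0.05)

def f(x, y):
    return x * y + 0.1 * x ** 2 - 0.1 * y ** 2

for _ in range(1000):
    opt_x.zero_grad(); f(x, y).backward(); opt_x.step()        # descend in x
    opt_y.zero_grad(); (-f(x, y)).backward(); opt_y.step()     # ascend in y

print(float(x), float(y))   # both approach 0
```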
