
This work discusses the benefits of having multiple simulated environments with different degrees of realism for the development of algorithms in scenarios populated by autonomous nodes capable of communication and mobility. This approach improves the development experience and yields more robust algorithms. It also proposes GrADyS-SIM NextGen as a solution that enables development in a single programming language and toolset over multiple environments with varying levels of realism. Finally, we illustrate the usefulness of this approach with a toy problem that makes use of the simulation framework, taking advantage of the proposed environments to iteratively develop a robust solution.
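To make the pattern concrete, here is a hypothetical sketch (this is not the GrADyS-SIM NextGen API; every name below is invented for illustration) of protocol logic written once against a shared interface and reused across environments of increasing realism:

```python
# Hypothetical illustration only: one protocol class, swappable environments.
from abc import ABC, abstractmethod

class Environment(ABC):
    """Anything that can deliver messages and report node positions."""
    @abstractmethod
    def send(self, msg: str) -> None: ...
    @abstractmethod
    def position(self) -> tuple[float, float, float]: ...

class PingProtocol:
    """Node logic written once; the environment underneath can be swapped."""
    def __init__(self, env: Environment):
        self.env = env
    def tick(self) -> None:
        x, y, z = self.env.position()
        self.env.send(f"ping from {x:.1f},{y:.1f},{z:.1f}")

class ToySimulator(Environment):
    """Fast, idealized environment for early iteration."""
    def send(self, msg): print("[toy]", msg)
    def position(self): return (0.0, 0.0, 10.0)

PingProtocol(ToySimulator()).tick()   # the same PingProtocol could later run on a
                                      # network-accurate or hardware-in-the-loop backend
```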

Related content

To overcome the sim-to-real gap in reinforcement learning (RL), learned policies must maintain robustness against environmental uncertainties. While robust RL has been widely studied in single-agent regimes, in multi-agent environments, the problem remains understudied -- despite the fact that the problems posed by environmental uncertainties are often exacerbated by strategic interactions. This work focuses on learning in distributionally robust Markov games (RMGs), a robust variant of standard Markov games, wherein each agent aims to learn a policy that maximizes its own worst-case performance when the deployed environment deviates within its own prescribed uncertainty set. This results in a set of robust equilibrium strategies for all agents that align with classic notions of game-theoretic equilibria. Assuming a non-adaptive sampling mechanism from a generative model, we propose a sample-efficient model-based algorithm (DRNVI) with finite-sample complexity guarantees for learning robust variants of various notions of game-theoretic equilibria. We also establish an information-theoretic lower bound for solving RMGs, which confirms the near-optimal sample complexity of DRNVI with respect to problem-dependent factors such as the size of the state space, the target accuracy, and the horizon length.
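For intuition, the sketch below runs distributionally robust value iteration on a single-agent MDP, where each (state, action) pair has a total-variation uncertainty set of radius `sigma` around the nominal kernel. This is a simplification under assumed parameters, not DRNVI itself, which additionally handles multiple agents, equilibrium notions, and generative-model sampling:

```python
import numpy as np

def worst_case_value(p, v, sigma):
    """Worst-case mean of v over the TV ball {q : ||q - p||_1 <= 2*sigma}.

    Greedy LP solution: shift up to sigma probability mass from the
    highest-value states onto the single lowest-value state."""
    q, lo, budget = p.copy(), np.argmin(v), sigma
    for s in np.argsort(v)[::-1]:            # highest-value states first
        if budget <= 0:
            break
        if s == lo:
            continue
        move = min(q[s], budget)
        q[s] -= move
        q[lo] += move
        budget -= move
    return q @ v

def robust_value_iteration(P, r, gamma=0.9, sigma=0.1, iters=200):
    """Robust Bellman backups with per-(s, a) TV uncertainty sets (single-agent)."""
    S, A = r.shape
    v = np.zeros(S)
    for _ in range(iters):
        v = np.array([max(r[s, a] + gamma * worst_case_value(P[s, a], v, sigma)
                          for a in range(A)) for s in range(S)])
    return v

rng = np.random.default_rng(0)
S, A = 4, 2
P = rng.dirichlet(np.ones(S), size=(S, A))   # nominal transition kernel, P[s, a] in the simplex
r = rng.uniform(size=(S, A))                 # rewards
print(robust_value_iteration(P, r))          # setting sigma = 0 recovers standard value iteration
```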

Quantum parameter estimation theory is an important component of quantum information theory and provides the statistical foundation that underpins important topics such as quantum system identification and quantum waveform estimation. When there is more than one parameter, the ultimate precision in the mean square error given by the quantum Cramér-Rao bound (QCRB) is not necessarily achievable. For non-full-rank quantum states, it was not known when this bound can be saturated (achieved) when only a single copy of the quantum state encoding the unknown parameters is available. This single-copy scenario is important because of its experimental and practical tractability. Recently, necessary and sufficient conditions for saturability of the QCRB in the multiparameter single-copy scenario have been established in terms of (i) the commutativity of a set of projected symmetric logarithmic derivatives and (ii) the existence of a unitary solution to a system of coupled nonlinear partial differential equations. New sufficient conditions were also obtained that depend only on properties of the symmetric logarithmic derivatives. In this paper, key structural properties of optimal measurements that saturate the QCRB are illuminated. These properties are exploited to (i) show that the sufficient conditions are in fact necessary and sufficient for an optimal measurement to be projective, (ii) give an alternative proof of previously established necessary conditions, and (iii) describe general POVMs, not necessarily projective, that saturate the multiparameter QCRB. Examples are given where a unitary solution to the system of nonlinear partial differential equations can be explicitly calculated when the required conditions are fulfilled.
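For reference, a minimal statement of the bound in question, in the notation of symmetric logarithmic derivatives (SLDs) $L_i$; with $m$ copies of the state, the single-copy scenario above is $m = 1$:

```latex
% SLDs L_i and the quantum Fisher information matrix F(\theta):
\partial_{\theta_i}\rho_\theta = \tfrac{1}{2}\left(\rho_\theta L_i + L_i \rho_\theta\right),
\qquad
[F(\theta)]_{ij} = \tfrac{1}{2}\operatorname{Tr}\!\left[\rho_\theta \{L_i, L_j\}\right],
\qquad
\operatorname{Cov}_\theta(\hat{\theta}) \succeq \frac{1}{m}\,F(\theta)^{-1}.
```

For a single parameter the bound is always saturable by a projective measurement in an eigenbasis of the SLD; the projected-SLD commutativity conditions above address exactly the multiparameter incompatibility that breaks this in general.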

We present a flexible, deterministic numerical method for computing left-tail rare-event probabilities of sums of non-negative, independent random variables (RVs). The method is based on iterative numerical integration of linear convolutions by means of Newton-Cotes rules. The periodicity properties of convolved densities combined with the trapezoidal rule are exploited to produce a robust and efficient method, and the method is flexible in the sense that it can be applied to all kinds of non-negative continuous RVs. We present an error analysis and study the benefits of utilizing Newton-Cotes rules versus the fast Fourier transform (FFT) for numerical integration, showing that although there can be efficiency benefits to using the FFT, Newton-Cotes rules tend to preserve the relative error better, and do so at an acceptable computational cost. Numerical studies on problems with both known and unknown rare-event probabilities showcase the method's performance and support our theoretical findings.
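A minimal sketch of the core building block, iterated trapezoidal convolution on a grid restricted to the left tail, estimating P(X1 + ... + Xd <= t) for Exp(1) summands and checking against the exact Gamma CDF. This is the plain variant only; the paper's exploitation of periodicity properties and its error control are not reproduced:

```python
import numpy as np
from scipy.stats import gamma

def trap_conv(f, g, h):
    """Linear convolution of two sampled densities on a uniform grid (trapezoidal rule)."""
    out = np.zeros(len(f))
    for j in range(1, len(f)):
        w = np.ones(j + 1)
        w[0] = w[-1] = 0.5                 # trapezoidal end-point weights
        out[j] = h * np.sum(w * f[:j + 1] * g[j::-1])
    return out

t, d, n = 0.5, 10, 1000                    # left-tail threshold, number of summands, grid size
x = np.linspace(0.0, t, n + 1)
h = x[1] - x[0]
f1 = np.exp(-x)                            # Exp(1) density; only [0, t] is needed for the left tail
fk = f1.copy()
for _ in range(d - 1):                     # density of X1 + ... + Xd by repeated convolution
    fk = trap_conv(fk, f1, h)
p = h * (0.5 * fk[0] + fk[1:-1].sum() + 0.5 * fk[-1])   # P(sum <= t) by one more trapezoid pass
print(p, gamma(d).cdf(t))                  # compare with the exact Gamma(d, 1) CDF
```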

Identifying partial differential equations (PDEs) from data is crucial for understanding the governing mechanisms of natural phenomena, yet it remains a challenging task. We present an extension to the ARGOS framework, ARGOS-RAL, which leverages sparse regression with the recurrent adaptive lasso to automatically identify PDEs with limited prior knowledge. Our method automates the calculation of partial derivatives, the construction of a candidate library, and the estimation of a sparse model. We rigorously evaluate the performance of ARGOS-RAL in identifying canonical PDEs under various noise levels and sample sizes, demonstrating its robustness in handling noisy and non-uniformly distributed data. We also test the algorithm's performance on datasets consisting solely of random noise to simulate scenarios with severely compromised data quality. Our results show that ARGOS-RAL effectively and reliably identifies the underlying PDEs from data, outperforming the sequential threshold ridge regression method in most cases. We highlight the potential of combining statistical methods, machine learning, and dynamical systems theory to automatically discover governing equations from collected data, streamlining the scientific modeling process.
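For orientation, a minimal sketch of the sequential threshold ridge regression baseline mentioned above (ARGOS-RAL's recurrent adaptive lasso is not reproduced here); the candidate library is a synthetic stand-in with an assumed ground truth:

```python
import numpy as np

def stridge(Theta, ut, lam=1e-5, tol=0.1, iters=10):
    """Sequentially thresholded ridge regression: ridge fit, zero small coefficients, refit."""
    m = Theta.shape[1]
    w = np.linalg.solve(Theta.T @ Theta + lam * np.eye(m), Theta.T @ ut)
    for _ in range(iters):
        small = np.abs(w) < tol
        w[small] = 0.0
        big = ~small
        if big.sum() == 0:
            break
        w[big] = np.linalg.solve(Theta[:, big].T @ Theta[:, big] + lam * np.eye(big.sum()),
                                 Theta[:, big].T @ ut)
    return w

# Synthetic demo: u_t = -2 u_x + 0.5 u_xx hidden in a 5-term candidate library.
rng = np.random.default_rng(0)
Theta = rng.normal(size=(500, 5))            # stand-ins for columns like [u, u_x, u_xx, u*u_x, u^2]
true_w = np.array([0.0, -2.0, 0.5, 0.0, 0.0])
ut = Theta @ true_w + 0.01 * rng.normal(size=500)
print(stridge(Theta, ut).round(3))           # should recover approximately [0, -2, 0.5, 0, 0]
```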

When solving systems of banded Toeplitz equations or calculating their inverses, it is necessary to determine the invertibility of the matrices beforehand. In this paper, we equate the invertibility of an $n$-order banded Toeplitz matrix with bandwidth $2k+1$ to that of a small $k \times k$ matrix. By utilizing a specially designed algorithm, we compute the invertibility sequence of a class of banded Toeplitz matrices with a time complexity of $5k^2n/2 + kn$ and a space complexity of $3k^2$, where $n$ is the size of the largest matrix. This enables efficient preprocessing when solving equation systems and inverses of banded Toeplitz matrices.
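For context, a small sketch that builds such a matrix and applies the naive O(n^3) invertibility check that the paper's k x k reduction is designed to avoid (the reduction itself is not reproduced here):

```python
import numpy as np
from scipy.linalg import toeplitz

def banded_toeplitz(n, coeffs):
    """n x n Toeplitz matrix with bandwidth 2k+1; coeffs = [c_{-k}, ..., c_0, ..., c_k],
    laid out so that T[i, j] = c_{j-i}."""
    k = (len(coeffs) - 1) // 2
    col, row = np.zeros(n), np.zeros(n)
    col[:k + 1] = coeffs[k::-1]   # first column: c_0, c_{-1}, ..., c_{-k}
    row[:k + 1] = coeffs[k:]      # first row:    c_0, c_1,  ..., c_k
    return toeplitz(col, row)

T = banded_toeplitz(8, [1.0, -2.0, 1.0])      # tridiagonal example, k = 1
print(np.linalg.matrix_rank(T) == 8)          # naive full-rank check; the paper's reduction
                                              # reaches the same verdict in O(k^2 n) overall
```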

Computing the rate-distortion function for continuous sources is commonly regarded as a standard continuous optimization problem. When numerically addressing this problem, a typical approach is to discretize the source space and solve the associated discrete problem. However, the existing literature has predominantly concentrated on the convergence analysis of solving discrete problems, usually neglecting the convergence relationship between the original continuous optimization and its discrete counterpart. This neglect is not rigorous, since solving a discrete problem does not guarantee convergence to the solution of the original continuous problem, especially for non-linear problems. To address this gap, our study employs rigorous mathematical analysis: we construct a sequence of finite-dimensional spaces approximating the infinite-dimensional space of probability measures and establish that solutions of the discrete schemes converge to those of the continuous problem.
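As context for the discretize-then-solve pipeline, here is a minimal Blahut-Arimoto sketch for the resulting discrete problem, on a Gaussian source discretized over an assumed grid with squared-error distortion. This is the standard discrete algorithm, not the paper's convergence analysis; the computed rate should roughly match Shannon's closed form (1/2) ln(sigma^2 / D):

```python
import numpy as np

def blahut_arimoto(p, d, beta, iters=500):
    """Blahut-Arimoto at slope parameter beta on a discretized source.

    p: source pmf over grid points; d: distortion matrix d[x, y].
    Returns (distortion, rate in nats) for one point on the R(D) curve."""
    q = np.full(d.shape[1], 1.0 / d.shape[1])       # reproduction marginal
    for _ in range(iters):
        W = q * np.exp(-beta * d)                   # unnormalized conditional p(y | x)
        W /= W.sum(axis=1, keepdims=True)
        q = p @ W
    D = np.sum(p[:, None] * W * d)
    R = np.sum(p[:, None] * W * np.log(np.maximum(W, 1e-300) / q))  # mutual information
    return D, R

x = np.linspace(-5.0, 5.0, 200)                     # discretized standard Gaussian source
p = np.exp(-x**2 / 2); p /= p.sum()
d = (x[:, None] - x[None, :])**2                    # squared-error distortion
D, R = blahut_arimoto(p, d, beta=2.0)
print(D, R, 0.5 * np.log(1.0 / D))                  # R vs. (1/2) ln(sigma^2 / D) with sigma^2 = 1
```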

The problem of optimizing discrete phases in a reconfigurable intelligent surface (RIS) to maximize the received power at a user equipment is addressed. Necessary and sufficient conditions to achieve this maximization are given. These conditions are employed in an algorithm to achieve the maximization. New versions of the algorithm are given and proven to converge in N or fewer steps, whether or not the direct link is completely blocked, where N is the number of RIS elements; previously published results achieve this in KN or 2N steps, where K is the number of discrete phases. Thus, for a discrete-phase RIS, the techniques presented in this paper achieve the optimum received power in the smallest number of steps published in the literature. In addition, in each of those N steps, the techniques presented in this paper determine only one or a small number of phase shifts with a simple elementwise update rule, which results in a substantial reduction of computation time compared to the algorithms in the literature. As a secondary result, we define the uniform polar quantization (UPQ) algorithm, an intuitive quantization algorithm that approximates the continuous solution with an approximation ratio of $\mathrm{sinc}^2(1/K)$ and has low time complexity, given perfect knowledge of the channel.
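One plausible reading of UPQ (the paper's exact definition may differ): round the continuous co-phasing solution for each element to the nearest of the K uniform phase levels. A hedged sketch with synthetic channels:

```python
import numpy as np

def upq(h0, h, g, K):
    """Uniform polar quantization (assumed form): quantize the co-phasing phases to K levels."""
    cont = np.angle(h0) - np.angle(h * g)   # continuous phases aligning each element to the direct link
    step = 2 * np.pi / K
    return np.round(cont / step) * step     # nearest discrete phase per element

rng = np.random.default_rng(1)
N, K = 64, 4
h0 = rng.normal() + 1j * rng.normal()                 # direct link
h = rng.normal(size=N) + 1j * rng.normal(size=N)      # transmitter -> RIS
g = rng.normal(size=N) + 1j * rng.normal(size=N)      # RIS -> user equipment
theta = upq(h0, h, g, K)
power = np.abs(h0 + np.sum(h * g * np.exp(1j * theta)))**2   # received power, unit transmit power
print(power)
```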

This work considers the question of how convenient access to copious data impacts our ability to learn causal effects and relations. In what ways is learning causality in the era of big data different from -- or the same as -- the traditional one? To answer this question, this survey provides a comprehensive and structured review of both traditional and frontier methods in learning causality and relations along with the connections between causality and machine learning. This work points out on a case-by-case basis how big data facilitates, complicates, or motivates each approach.

We propose a novel approach to multimodal sentiment analysis using deep neural networks combining visual analysis and natural language processing. Our goal is different from the standard sentiment analysis goal of predicting whether a sentence expresses positive or negative sentiment; instead, we aim to infer the latent emotional state of the user. Thus, we focus on predicting the emotion word tags attached by users to their Tumblr posts, treating these as "self-reported emotions." We demonstrate that our multimodal model combining both text and image features outperforms separate models based solely on either images or text. Our model's results are interpretable, automatically yielding sensible word lists associated with emotions. We explore the structure of emotions implied by our model, compare it to what has been posited in the psychology literature, and validate our model on a set of images that have been used in psychology studies. Finally, our work provides a useful tool for the growing academic study of images, both photographs and memes, on social networks.
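A schematic of the kind of late-fusion architecture this describes (not the authors' model; the feature dimensions, e.g. a 2048-d CNN pooling vector and 300-d averaged word vectors, and the 15 emotion tags are assumptions):

```python
import torch
import torch.nn as nn

class LateFusion(nn.Module):
    """Schematic late-fusion classifier: concatenate image and text features, then classify."""
    def __init__(self, img_dim=2048, txt_dim=300, hidden=256, n_emotions=15):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(img_dim + txt_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_emotions),
        )
    def forward(self, img_feat, txt_feat):
        return self.head(torch.cat([img_feat, txt_feat], dim=-1))  # emotion-tag logits

model = LateFusion()
logits = model(torch.randn(4, 2048), torch.randn(4, 300))  # e.g. CNN pool features + word vectors
print(logits.shape)                                        # torch.Size([4, 15])
```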

While it is nearly effortless for humans to quickly assess the perceptual similarity between two images, the underlying processes are thought to be quite complex. Despite this, the most widely used perceptual metrics today, such as PSNR and SSIM, are simple, shallow functions, and fail to account for many nuances of human perception. Recently, the deep learning community has found that features of the VGG network trained on the ImageNet classification task have proven remarkably useful as a training loss for image synthesis. But how perceptual are these so-called "perceptual losses"? What elements are critical for their success? To answer these questions, we introduce a new Full Reference Image Quality Assessment (FR-IQA) dataset of perceptual human judgments, orders of magnitude larger than previous datasets. We systematically evaluate deep features across different architectures and tasks and compare them with classic metrics. We find that deep features outperform all previous metrics by huge margins. More surprisingly, this result is not restricted to ImageNet-trained VGG features, but holds across different deep architectures and levels of supervision (supervised, self-supervised, or even unsupervised). Our results suggest that perceptual similarity is an emergent property shared across deep visual representations.
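A minimal sketch of the "deep features as a perceptual metric" recipe the paper evaluates: unit-normalize channel activations at several VGG depths and average squared differences. The layer picks below are assumptions, and the paper's learned per-channel linear weights are omitted, so this is the unweighted variant:

```python
import torch
import torchvision.models as models

vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).features.eval()
LAYERS = {3, 8, 15, 22, 29}   # assumed picks: relu1_2, relu2_2, relu3_3, relu4_3, relu5_3

def deep_feature_distance(x, y):
    """Unit-normalize channel activations at several depths; sum over channels, average spatially."""
    d = 0.0
    with torch.no_grad():
        for i, layer in enumerate(vgg):
            x, y = layer(x), layer(y)
            if i in LAYERS:
                xn = x / (x.norm(dim=1, keepdim=True) + 1e-10)
                yn = y / (y.norm(dim=1, keepdim=True) + 1e-10)
                d = d + ((xn - yn) ** 2).sum(dim=1).mean(dim=(1, 2))
            if i == max(LAYERS):
                break
    return d

a, b = torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64)
print(deep_feature_distance(a, b))   # larger value = more perceptually different (unweighted)
```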
