We develop an extension of posterior sampling for reinforcement learning (PSRL) that is suited for a continuing agent-environment interface and integrates naturally into agent designs that scale to complex environments. The approach, continuing PSRL, maintains a statistically plausible model of the environment and follows a policy that maximizes expected $\gamma$-discounted return in that model. At each time, with probability $1-\gamma$, the model is replaced by a sample from the posterior distribution over environments. For a choice of discount factor that suitably depends on the horizon $T$, we establish an $\tilde{O}(\tau S \sqrt{A T})$ bound on the Bayesian regret, where $S$ is the number of environment states, $A$ is the number of actions, and $\tau$ denotes the reward averaging time, which is a bound on the duration required to accurately estimate the average reward of any policy. Our work is the first to formalize and rigorously analyze the resampling approach with randomized exploration.
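
As a concrete illustration of the resampling scheme, here is a minimal tabular sketch in Python. The Dirichlet posterior over transitions, the running-mean reward proxy, and the value-iteration planner are illustrative assumptions, not the paper's exact construction:

```python
import numpy as np

rng = np.random.default_rng(0)
S, A, gamma, T = 5, 2, 0.99, 10_000

# Illustrative tabular environment: fixed transition tensor and reward table.
P_true = rng.dirichlet(np.ones(S), size=(S, A))   # P_true[s, a] = next-state dist.
R_true = rng.random((S, A))

# Dirichlet posterior over transitions; running reward means as a crude proxy.
counts = np.ones((S, A, S))                       # Dirichlet pseudo-counts
r_sum, r_n = np.zeros((S, A)), np.ones((S, A))

def sample_model():
    P = np.array([[rng.dirichlet(counts[s, a]) for a in range(A)] for s in range(S)])
    return P, r_sum / r_n

def solve_discounted(P, R, iters=500):
    """Value iteration for the gamma-discounted optimal policy in the sample."""
    V = np.zeros(S)
    for _ in range(iters):
        V = (R + gamma * P @ V).max(axis=1)
    return (R + gamma * P @ V).argmax(axis=1)

policy, s = solve_discounted(*sample_model()), 0
for t in range(T):
    a = policy[s]
    s_next = rng.choice(S, p=P_true[s, a])
    counts[s, a, s_next] += 1                     # posterior update
    r_sum[s, a] += R_true[s, a]; r_n[s, a] += 1
    s = s_next
    if rng.random() < 1 - gamma:                  # resample with probability 1 - gamma
        policy = solve_discounted(*sample_model())
```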

We consider a multi-process remote estimation system observing $K$ independent Ornstein-Uhlenbeck processes. In this system, a shared sensor samples the $K$ processes in such a way that the long-term average sum mean square error (MSE) is minimized. The sensor operates under a total sampling frequency constraint $f_{\max}$. The samples from all processes incur random processing delays in a shared queue and are then transmitted over a channel that erases each sample with probability $\epsilon$. We study two variants of the problem: first, when the samples are scheduled according to a Maximum-Age-First (MAF) policy and the receiver provides erasure status feedback; and second, when samples are scheduled according to a Round-Robin (RR) policy and there is no erasure status feedback from the receiver. Aided by structural results on the optimum, we show that the optimal sampling policy for both settings, under some conditions, is a \emph{threshold policy}. We characterize the optimal threshold and the corresponding optimal long-term average sum MSE as functions of $K$, $f_{\max}$, $\epsilon$, and the statistical properties of the observed processes. Our results show that, with exponentially distributed service times, the optimal threshold $\tau^*$ increases as the number of processes $K$ increases, for both settings. Additionally, we show that the optimal threshold is an \emph{increasing} function of $\epsilon$ when erasure status feedback is \emph{available}, while it exhibits the \emph{opposite behavior}, i.e., $\tau^*$ is a \emph{decreasing} function of $\epsilon$, when erasure status feedback is \emph{absent}.
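
To make the threshold structure concrete, the following sketch simulates a single OU process and takes a new sample whenever the age of the most recent sample exceeds a threshold $\tau^*$. All parameter values are illustrative, and the queueing delays and erasures analyzed in the paper are omitted:

```python
import numpy as np

rng = np.random.default_rng(1)
theta, sigma = 0.5, 1.0        # OU mean-reversion rate and volatility (illustrative)
tau_star = 2.0                 # threshold on the age of the most recent sample
dt, T = 0.01, 200.0

x, x_hat, age = 0.0, 0.0, 0.0
sq_err, n_steps = 0.0, 0

for _ in range(int(T / dt)):
    # OU dynamics: dX = -theta * X dt + sigma dW
    x += -theta * x * dt + sigma * np.sqrt(dt) * rng.normal()
    age += dt
    # Between samples the MMSE estimate decays toward the stationary mean (zero).
    estimate = x_hat * np.exp(-theta * age)
    if age >= tau_star:        # threshold policy: sample once the age exceeds tau*
        x_hat, age = x, 0.0    # ideal link assumed; delays and erasures omitted
        estimate = x_hat
    sq_err += (x - estimate) ** 2
    n_steps += 1

print(f"empirical average MSE: {sq_err / n_steps:.4f}")
```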

Novel class discovery (NCD) aims at learning a model that transfers common knowledge from a class-disjoint labelled dataset to another unlabelled dataset and discovers new classes (clusters) within it. Many methods, along with elaborate training pipelines and appropriate objectives, have been proposed and have considerably boosted performance on NCD tasks. Despite all this, we find that existing methods do not sufficiently exploit the essence of the NCD setting. To this end, in this paper, we propose to model both inter-class and intra-class constraints in NCD based on the symmetric Kullback-Leibler divergence (sKLD). Specifically, we propose an inter-class sKLD constraint to effectively exploit the disjoint relationship between labelled and unlabelled classes, enforcing separability between different classes in the embedding space. In addition, we present an intra-class sKLD constraint to explicitly constrain the relationship between a sample and its augmentations, which at the same time ensures the stability of the training process. We conduct extensive experiments on the popular CIFAR10, CIFAR100 and ImageNet benchmarks and demonstrate that our method establishes a new state of the art, achieving significant performance improvements over previous state-of-the-art methods, e.g., 3.5%/3.7% clustering accuracy improvements on the CIFAR100-50 dataset split under the task-aware/task-agnostic evaluation protocols. Code is available at //github.com/FanZhichen/NCD-IIC.
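
A minimal sketch of the sKLD quantity underlying both constraints follows; the hinge form of the inter-class term and the margin value are assumptions for illustration, not the paper's exact objective:

```python
import numpy as np

def kl(p, q, eps=1e-12):
    """Row-wise KL(p || q) for class-probability vectors."""
    p, q = np.clip(p, eps, 1.0), np.clip(q, eps, 1.0)
    return np.sum(p * np.log(p / q), axis=-1)

def skld(p, q):
    """Symmetric Kullback-Leibler divergence: KL(p||q) + KL(q||p)."""
    return kl(p, q) + kl(q, p)

# Intra-class constraint (illustrative): a sample and its augmentation should
# agree, so their sKLD is minimized directly.
p_sample    = np.array([[0.7, 0.2, 0.1]])
p_augmented = np.array([[0.6, 0.3, 0.1]])
intra_loss = skld(p_sample, p_augmented).mean()

# Inter-class constraint (illustrative): labelled and unlabelled classes are
# disjoint, so predictions across the two sets should disagree; a hinge on the
# sKLD (margin value assumed) pushes them apart in the embedding space.
p_labelled   = np.array([[0.90, 0.05, 0.05]])
p_unlabelled = np.array([[0.10, 0.10, 0.80]])
inter_loss = np.maximum(0.0, 4.0 - skld(p_labelled, p_unlabelled)).mean()

print(intra_loss, inter_loss)
```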

Crowd counting is a key aspect of crowd analysis and has typically been accomplished by estimating a crowd-density map and summing over the density values. However, this approach suffers from background-noise accumulation and loss of density due to the use of broad Gaussian kernels to create the ground-truth density maps. This issue can be overcome by narrowing the Gaussian kernel, but existing approaches perform poorly when trained with such ground-truth density maps. To overcome this limitation, we propose using conditional diffusion models to predict density maps, as diffusion models are known to model complex distributions well and show high fidelity to training data during crowd-density map generation. Furthermore, as the intermediate time steps of the diffusion process are noisy, we incorporate a regression branch for direct crowd estimation, used only during training, to improve feature learning. In addition, exploiting the stochastic nature of the diffusion model, we produce multiple density maps to improve counting performance, in contrast to existing crowd-counting pipelines. We also depart from plain density summation and instead use contour detection followed by summation within contours as the counting operation, which is more immune to background noise. We conduct extensive experiments on public datasets to validate the effectiveness of our method. Specifically, our novel crowd-counting pipeline reduces counting error by up to $6\%$ on JHU-CROWD++ and up to $7\%$ on UCF-QNRF.
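
The contour-based counting operation might look as follows with OpenCV; the binarization threshold and the per-contour mass integration are assumptions for illustration:

```python
import numpy as np
import cv2

def count_by_contours(density_map, thresh=0.05):
    """Count by detecting contours in a predicted density map and summing the
    density mass inside each contour, so diffuse background noise is ignored."""
    mask = (density_map > thresh).astype(np.uint8)
    # OpenCV >= 4 return signature: (contours, hierarchy).
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    total = 0.0
    for c in contours:
        blob = np.zeros_like(mask)
        cv2.drawContours(blob, [c], -1, 1, thickness=cv2.FILLED)
        total += float((density_map * blob).sum())   # mass inside this contour only
    return total

# Illustrative density map: two narrow Gaussian blobs, each integrating to ~1.
dm = np.zeros((64, 64), np.float32)
yy, xx = np.mgrid[0:64, 0:64]
for (y, x) in [(20, 20), (45, 50)]:
    g = np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / (2 * 1.5 ** 2))
    dm += g / g.sum()

print(count_by_contours(dm))   # ~2, with low-level background noise excluded
```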

Efficient and accurate estimation of multivariate empirical probability distributions is fundamental to the calculation of information-theoretic measures such as mutual information and transfer entropy. Common techniques include variations on histogram estimation which, whilst computationally efficient, are often unable to precisely capture the probability density of samples with high correlation, kurtosis or fine substructure, especially when sample sizes are small. Adaptive partitions, which adjust heuristically to the sample, can reduce the bias imparted from the geometry of the histogram itself, but these have commonly focused on the location, scale and granularity of the partition, the effects of which are limited for highly correlated distributions. In this paper, I reformulate the differential entropy estimator for the special case of an equiprobable histogram, using a k-d tree to partition the sample space into bins of equal probability mass. By doing so, I expose an implicit rotational orientation parameter, which is conjectured to be suboptimally specified in the typical marginal alignment. I propose that the optimal orientation minimises the variance of the bin volumes, and demonstrate that improved entropy estimates can be obtained by rotationally aligning the partition to the sample distribution accordingly. Such optimal partitions are observed to be more accurate than existing techniques in estimating entropies of correlated bivariate Gaussian distributions with known theoretical values, across varying sample sizes (99% CI).
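
A minimal sketch of the estimator follows, assuming median splits along alternating axes and the plug-in entropy formula $\hat{H} = \frac{1}{k}\sum_i \log(k V_i)$ over $k$ equiprobable leaves; the grid search over the rotational orientation parameter applies the minimum-bin-volume-variance criterion proposed above:

```python
import numpy as np

def leaf_volumes(x, leaf_size=8, lo=None, hi=None, depth=0):
    """Partition the sample into equiprobable bins via median k-d splits and
    return the bin volumes (bounding box of the sample taken as the support)."""
    n, d = x.shape
    if lo is None:
        lo, hi = x.min(axis=0), x.max(axis=0)
    if n <= leaf_size:
        return [np.prod(hi - lo)]
    axis = depth % d                                # alternate the split axis
    med = np.median(x[:, axis])
    hi_left, lo_right = hi.copy(), lo.copy()
    hi_left[axis] = med
    lo_right[axis] = med
    return (leaf_volumes(x[x[:, axis] <= med], leaf_size, lo, hi_left, depth + 1)
            + leaf_volumes(x[x[:, axis] > med], leaf_size, lo_right, hi, depth + 1))

rng = np.random.default_rng(2)
cov = np.array([[1.0, 0.9], [0.9, 1.0]])            # highly correlated Gaussian
sample = rng.multivariate_normal([0.0, 0.0], cov, size=4096)

# Search the rotational orientation parameter: keep the rotation whose partition
# has the minimum bin-volume variance (differential entropy is rotation invariant).
best = None
for ang in np.linspace(0.0, np.pi / 2, 16):
    R = np.array([[np.cos(ang), -np.sin(ang)], [np.sin(ang), np.cos(ang)]])
    vols = np.array(leaf_volumes(sample @ R.T))
    if best is None or vols.var() < best.var():
        best = vols

k = len(best)
H_hat = np.mean(np.log(k * best))                   # H ~ (1/k) * sum_i log(k * V_i)
H_true = 0.5 * np.log((2 * np.pi * np.e) ** 2 * np.linalg.det(cov))
print(H_hat, H_true)
```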

Many solid mechanics problems on complex geometries are conventionally solved using discrete boundary methods. However, such an approach can be cumbersome for problems involving evolving domain boundaries due to the need for boundary tracking and constant remeshing. In this work, we employ a robust smooth boundary method (SBM) that represents complex geometry implicitly, in a larger and simpler computational domain, as the support of a smooth indicator function. We present the resulting equations for mechanical equilibrium, in which inhomogeneous boundary conditions are replaced by source terms. The resulting mechanical equilibrium problem is semidefinite, making it difficult to solve. We therefore present a computational strategy for efficiently solving near-singular SBM elasticity problems. We use the block-structured adaptive mesh refinement (BSAMR) method to resolve evolving boundaries appropriately, coupled with a geometric multigrid solver for an efficient solution of mechanical equilibrium. We discuss some of the practical numerical strategies for implementing this method, notably the importance of grid- versus node-centered fields. We demonstrate the solver's accuracy and performance on three representative examples: a) plastic strain evolution around a void, b) crack nucleation and propagation in brittle materials, and c) structural topology optimization. In each case, we show that very good convergence of the solver is achieved, even with large near-singular areas, and that any convergence issues arise from other complexities, such as stress concentrations. We present this framework as a versatile tool for studying a wide variety of solid mechanics problems involving variable geometry.
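
The implicit representation is easy to sketch: a smooth indicator built from a signed distance function, here for a circular void, with a tanh regularization of assumed form and width $\epsilon$:

```python
import numpy as np

# Implicit geometry on a uniform grid: a circular void of radius r0 embedded
# in a larger, simpler square computational domain.
N, eps, r0 = 128, 0.03, 0.25
x = np.linspace(-0.5, 0.5, N)
X, Y = np.meshgrid(x, x)
signed_dist = np.sqrt(X**2 + Y**2) - r0      # signed distance to the void boundary

# Smooth indicator: ~1 in the solid, ~0 in the void, with a tanh-smoothed
# transition of width ~eps. Equilibrium is then solved with div(chi * sigma)
# plus source terms, so the stiffness vanishes smoothly inside the void rather
# than at an explicitly meshed boundary.
chi = 0.5 * (1.0 + np.tanh(signed_dist / eps))
print(chi.min(), chi.max())
```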

The success of a football team depends on the individual skills and performances of the selected players as well as on how cohesively they perform. This work proposes a two-stage process for selecting the optimal playing eleven of a football team from its pool of available players. In the first stage, for the reference team, a LASSO-induced modified trinomial logistic regression model is derived to analyze the probabilities of the three possible match outcomes (win, draw, loss). The model takes into account the strengths of the players in the team as well as those of the opponent, home advantage, and also the effects of individual players and player combinations beyond the recorded performances of these players. Careful use of the LASSO technique acts as an appropriate enabler of the player selection exercise while keeping the number of variables at a reasonable level. Then, in the second stage, a GRASP-type meta-heuristic is implemented for the team selection, maximizing the probability of a win for the team. The work is illustrated with English Premier League data from the 2008/09 to 2015/16 seasons. The application demonstrates that the first-stage model furnishes valuable insights about the deciding factors for different teams, whereas the optimization step can be effectively used to determine the best possible starting lineup under various circumstances. Based on the adopted model and methodology, we propose a measure of efficiency in team selection by team management and analyze the performance of EPL teams on this front.
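
The first-stage model family can be sketched with scikit-learn's L1-penalized multinomial logistic regression; the synthetic features and outcome labels below are placeholders for the player, combination, and home-advantage covariates described above:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n_matches, n_features = 600, 40                # placeholder player/lineup indicators,
X = rng.normal(size=(n_matches, n_features))   # opponent strength, home advantage
y = rng.integers(0, 3, size=n_matches)         # 0 = loss, 1 = draw, 2 = win (synthetic)

# The L1 penalty keeps the number of active player and combination effects
# at a reasonable level, as in the first stage described above.
model = LogisticRegression(penalty="l1", solver="saga", C=0.5, max_iter=5000)
model.fit(X, y)

# Second-stage objective for a candidate lineup x: maximize the win probability
# (evaluated here for one lineup; a GRASP meta-heuristic would search lineups).
x_candidate = rng.normal(size=(1, n_features))
p_loss, p_draw, p_win = model.predict_proba(x_candidate)[0]
print(p_win)
```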

We provide a unified approach to a method of estimation of the regression parameter in balanced linear models with a structured covariance matrix that combines a high breakdown point and bounded influence with high asymptotic efficiency at models with multivariate normal errors. Of main interest are linear mixed-effects models, but our approach also covers several other standard multivariate models, such as multiple regression, multivariate regression, and multivariate location and scatter. We provide sufficient conditions for the existence of the estimators and corresponding functionals, establish asymptotic properties such as consistency and asymptotic normality, and derive their robustness properties in terms of breakdown point and influence function. All results are obtained for general identifiable covariance structures and are established under mild conditions on the distribution of the observations, going far beyond models with elliptically contoured densities. Some of our results are new, and others are more general than existing results in the literature. In this way, the manuscript completes and improves upon results on high-breakdown, high-efficiency estimation in a wide variety of multivariate models.

Typical cooperative multi-agent systems (MASs) exchange information to coordinate their motion in proximity-based consensus control schemes to complete a common objective. However, in the event of faults or cyber attacks on the agents' on-board positioning sensors, global control performance may be compromised, resulting in a hijacking of the entire MAS. For systems that operate in unknown or landmark-free environments (e.g., open terrain, sea, or air) and beyond range/proximity sensing of nearby agents, compromised agents lose localization capabilities. To maintain resilience in these scenarios, we propose a method to recover compromised agents by utilizing the Received Signal Strength Indication (RSSI) from nearby agents (i.e., mobile landmarks) to provide reliable position measurements for localization. To minimize estimation error: i) a multilateration scheme is proposed to leverage RSSI and position information received from neighboring agents acting as mobile landmarks, and ii) a Kalman filtering method adaptively updates the unknown covariance matrix of the RSSI-based position measurements at runtime, remaining robust to unreliable state estimates. The proposed framework is demonstrated through simulations of MAS formations in the presence of faults and cyber attacks on on-board position sensors.
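
A sketch of the multilateration step, assuming a log-distance path-loss model for converting RSSI to range (the calibration constants are illustrative); the adaptive covariance update is only noted in a comment:

```python
import numpy as np

def rssi_to_distance(rssi, rssi_0=-40.0, n=2.0, d0=1.0):
    """Log-distance path-loss model; rssi_0, n, d0 are calibration assumptions."""
    return d0 * 10.0 ** ((rssi_0 - rssi) / (10.0 * n))

def multilaterate(anchors, dists):
    """Linearized least-squares 2-D position fix from >= 3 neighboring agents
    acting as mobile landmarks."""
    (x0, y0), d0 = anchors[0], dists[0]
    A, b = [], []
    for (xi, yi), di in zip(anchors[1:], dists[1:]):
        A.append([2 * (xi - x0), 2 * (yi - y0)])
        b.append(d0**2 - di**2 + xi**2 - x0**2 + yi**2 - y0**2)
    pos, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return pos

rng = np.random.default_rng(4)
true_pos = np.array([2.0, 3.0])
anchors = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0], [5.0, 5.0]])
true_d = np.linalg.norm(anchors - true_pos, axis=1)
rssi = -40.0 - 20.0 * np.log10(true_d) + rng.normal(0.0, 1.0, size=4)  # noisy RSSI

est = multilaterate(anchors, rssi_to_distance(rssi))
print(est)  # a noisy fix near (2, 3); an adaptive Kalman filter would update the
            # unknown measurement covariance of such fixes online
```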

Standard bandit algorithms that assume continual reallocation of measurement effort are challenging to implement due to delayed feedback and infrastructural or organizational difficulties. Motivated by practical instances involving a handful of reallocation epochs in which outcomes are measured in batches, we develop a new adaptive experimentation framework that can flexibly handle any batch size. Our main observation is that the normal approximations that are universal in statistical inference can also guide the design of scalable adaptive experiments. By deriving an asymptotic sequential experiment, we formulate a dynamic program that can leverage prior information on average rewards. State transitions of the dynamic program are differentiable with respect to the sampling allocations, allowing the use of gradient-based methods for planning and policy optimization. We propose a simple iterative planning method, Residual Horizon Optimization, which selects sampling allocations by optimizing a planning objective via stochastic gradient methods. Our method significantly improves statistical power over standard adaptive policies, even when compared to Bayesian bandit algorithms (e.g., Thompson sampling) that require full distributional knowledge of individual rewards. Overall, we expand the scope of adaptive experimentation to settings that are difficult for standard adaptive policies, including problems with a small number of reallocation epochs, low signal-to-noise ratios, and unknown reward distributions.
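
The normal-approximation backbone can be sketched as conjugate Gaussian batch updates per arm; the allocation rule below is a simple variance-proportional stub, whereas Residual Horizon Optimization would instead choose allocations by stochastic gradient optimization of a planning objective:

```python
import numpy as np

rng = np.random.default_rng(5)
K, n_batches, batch_size = 4, 5, 200        # only a handful of reallocation epochs
true_means, noise_sd = rng.normal(0.0, 0.3, K), 1.0

# Gaussian prior on each arm's average reward; batch means are approximately
# normal by the CLT, so the whole experiment admits conjugate updates.
mu, var = np.zeros(K), np.ones(K)

for _ in range(n_batches):
    # Allocation stub: proportional to posterior variance. RHO would instead
    # optimize a planning objective over allocations by stochastic gradients.
    n = np.maximum(1, (batch_size * var / var.sum()).astype(int))
    for k in range(K):
        batch_mean = true_means[k] + rng.normal(0.0, noise_sd / np.sqrt(n[k]))
        obs_var = noise_sd**2 / n[k]
        post_var = 1.0 / (1.0 / var[k] + 1.0 / obs_var)   # conjugate Gaussian update
        mu[k] = post_var * (mu[k] / var[k] + batch_mean / obs_var)
        var[k] = post_var

print("posterior means:", mu.round(3), "| best arm:", int(mu.argmax()))
```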

The conjoining of dynamical systems and deep learning has become a topic of great interest. In particular, neural differential equations (NDEs) demonstrate that neural networks and differential equations are two sides of the same coin. Traditional parameterised differential equations are a special case. Many popular neural network architectures, such as residual networks and recurrent networks, are discretisations. NDEs are suitable for tackling generative problems, dynamical systems, and time series (particularly in physics, finance, ...) and are thus of interest to both modern machine learning and traditional mathematical modelling. NDEs offer high-capacity function approximation, strong priors on model space, the ability to handle irregular data, memory efficiency, and a wealth of available theory on both sides. This doctoral thesis provides an in-depth survey of the field. Topics include: neural ordinary differential equations (e.g. for hybrid neural/mechanistic modelling of physical systems); neural controlled differential equations (e.g. for learning functions of irregular time series); and neural stochastic differential equations (e.g. to produce generative models capable of representing complex stochastic dynamics, or sampling from complex high-dimensional distributions). Further topics include: numerical methods for NDEs (e.g. reversible differential equation solvers, backpropagation through differential equations, Brownian reconstruction); symbolic regression for dynamical systems (e.g. via regularised evolution); and deep implicit models (e.g. deep equilibrium models, differentiable optimisation). We anticipate this thesis will be of interest to anyone interested in the marriage of deep learning with dynamical systems, and hope it will provide a useful reference for the current state of the art.
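
The residual-network connection is easy to exhibit: Euler integration of a neural vector field reproduces the residual update $z \leftarrow z + h f_\theta(z)$. A self-contained numpy sketch with fixed random weights (training omitted):

```python
import numpy as np

rng = np.random.default_rng(6)
d, hidden = 2, 16

# A small vector field f_theta(z): one-hidden-layer MLP with fixed random
# weights (training is out of scope for this sketch).
W1, b1 = rng.normal(0.0, 0.5, (hidden, d)), np.zeros(hidden)
W2, b2 = rng.normal(0.0, 0.5, (d, hidden)), np.zeros(d)

def f_theta(z):
    return W2 @ np.tanh(W1 @ z + b1) + b2

def neural_ode_euler(z0, t1=1.0, steps=100):
    """Euler integration of dz/dt = f_theta(z). Each step is exactly a residual
    layer, z <- z + h * f_theta(z): residual networks are discretised NDEs."""
    z, h = z0.copy(), t1 / steps
    for _ in range(steps):
        z = z + h * f_theta(z)
    return z

print(neural_ode_euler(np.array([1.0, -1.0])))
```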
