
In this paper we examine the effectiveness of several multi-armed bandit algorithms when used as a trust system to select agents to delegate tasks to. In contrast to existing work, we allow for recursive delegation to occur: a task delegated to one agent can be delegated onwards by that agent, with further delegation possible until some agent finally executes the task. We show that modifications to standard multi-armed bandit algorithms improve performance in such recursive delegation settings.
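
As an illustration of the setting, the sketch below models recursive delegation with a UCB1-style trust update at each delegating agent. The delegation topology, Bernoulli task outcomes, and the plain UCB1 rule are illustrative assumptions, not the paper's modified algorithms.

```python
import math
import random

class Agent:
    """Agent that either executes a task itself or delegates it onward.

    Minimal sketch of recursive delegation with a UCB1 trust update;
    illustrative only, not the paper's modified bandit algorithms.
    """

    def __init__(self, name, neighbours, execute_prob=None):
        self.name = name
        self.neighbours = neighbours            # agents this one may delegate to
        self.execute_prob = execute_prob        # None => this agent always delegates
        self.counts = {a.name: 0 for a in neighbours}
        self.rewards = {a.name: 0.0 for a in neighbours}

    def choose(self, t):
        # UCB1: pick the delegate with the highest upper confidence bound.
        def ucb(a):
            n = self.counts[a.name]
            if n == 0:
                return float("inf")
            return self.rewards[a.name] / n + math.sqrt(2 * math.log(t) / n)
        return max(self.neighbours, key=ucb)

    def handle(self, t):
        if self.execute_prob is not None:       # leaf agent: execute the task
            return 1.0 if random.random() < self.execute_prob else 0.0
        delegate = self.choose(t)
        outcome = delegate.handle(t)            # recursive delegation
        self.counts[delegate.name] += 1
        self.rewards[delegate.name] += outcome  # trust update from the observed outcome
        return outcome

# Example: a root delegates to a middle agent, which delegates to two workers.
workers = [Agent(f"w{i}", [], execute_prob=p) for i, p in enumerate([0.2, 0.9])]
middle = Agent("m", workers)
root = Agent("root", [middle])
for t in range(1, 1001):
    root.handle(t)
```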

Related content

Introducing a coupling framework reminiscent of FETI methods, but here in abstract form, we establish conditions for stability and minimal requirements for well-posedness at the continuous level, as well as conditions on local solvers for the approximation of subproblems. We then discuss the stability of the resulting Lagrange multiplier methods and show stability under a mesh condition relating the local discretizations and the mortar space. If this condition is not satisfied, we show how a stabilization, acting only on the multiplier, can be used to achieve stability. The design of preconditioners for the Schur complement system is discussed in the unstabilized case. Finally, we discuss some applications that fit within the framework.
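
For concreteness, a generic two-subdomain Lagrange multiplier coupling of the type described above can be written as follows; the notation is an illustrative sketch, not the paper's exact abstract formulation.

```latex
% Find (u_1, u_2, \lambda) such that, for all test functions (v_1, v_2, \mu),
\begin{aligned}
  a_1(u_1, v_1) + b_1(v_1, \lambda) &= f_1(v_1), \\
  a_2(u_2, v_2) - b_2(v_2, \lambda) &= f_2(v_2), \\
  b_1(u_1, \mu) - b_2(u_2, \mu) - s_h(\lambda, \mu) &= 0,
\end{aligned}
% where a_i are the local bilinear forms, b_i couple the subdomain traces to the
% multiplier (mortar) space, and s_h is a stabilization acting only on the
% multiplier (s_h = 0 in the unstabilized case, when the mesh condition holds).
% Eliminating u_1 and u_2 yields the Schur complement system in \lambda, whose
% preconditioning is discussed in the unstabilized case.
```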

Methods to generate realistic non-stationary demand scenarios are a key component for analyzing and optimizing decision policies in supply chains. Typical forecasting techniques recommended in standard inventory control textbooks use some form of exponential smoothing to estimate both the mean and the standard deviation of demand. We propose and study a class of demand generating processes (DGPs) that yield non-stationary demand scenarios and that are consistent with simple exponential smoothing (SES), meaning that SES yields unbiased estimates when applied to the generated demand scenarios. As demand in typical practical settings is discrete and non-negative, we study consistent DGPs on the non-negative integers and derive conditions under which the existence of such DGPs can be guaranteed. Our subsequent simulation study provides further insight into the proposed DGPs: starting from a given initial forecast, they yield a diverse set of demand scenarios with a wide range of properties. To demonstrate their applicability, we use the DGPs to generate demand in a standard inventory problem with full backlogging and a positive lead time. We find that appropriate dynamic base-stock levels can be obtained using a new and relatively simple algorithm, and we demonstrate that this algorithm outperforms relevant benchmarks.
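
A minimal sketch of one DGP that is consistent with SES in this sense: the conditional mean of each demand equals the current SES forecast, and the forecast is updated with the same smoothing recursion, so SES applied to the realised demands is conditionally unbiased. The Poisson observation model and the parameter values are illustrative assumptions, not the paper's full class of DGPs.

```python
import numpy as np

def ses_consistent_demand(l0=20.0, alpha=0.2, horizon=52, rng=None):
    """Illustrative non-stationary, non-negative integer demand generator.

    The conditional mean of D_t equals the current SES level, and the level
    is updated with the SES recursion, so SES forecasts are conditionally
    unbiased on the generated path. Poisson is an illustrative choice only.
    """
    rng = np.random.default_rng() if rng is None else rng
    level, demand = l0, []
    for _ in range(horizon):
        d = rng.poisson(level)                    # E[D_t | level] = level
        demand.append(d)
        level = (1 - alpha) * level + alpha * d   # SES recursion drives the level
    return np.array(demand)

demands = ses_consistent_demand(l0=20.0, alpha=0.2, horizon=52)
```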

A common approach to analyzing count time series is to fit models based on random sum operators. As an alternative, this paper introduces time series models based on a random multiplication operator, which multiplies a variable operand by an integer-valued random coefficient whose mean is the constant operand. This operation is embedded in auto-regressive-like models with integer-valued random inputs, referred to as RMINAR models. Two special variants are studied, namely the $\mathbb{N}_0$-valued random coefficient auto-regressive model and the $\mathbb{N}_0$-valued random coefficient multiplicative error model. Furthermore, $\mathbb{Z}$-valued extensions are considered. The dynamic structure of the proposed models is studied in detail. In particular, their solutions are everywhere strictly stationary and ergodic, a property that is uncommon both in the literature on integer-valued time series models and in that on real-valued random coefficient auto-regressive models. Exploiting this property, the parameters of the RMINAR models are estimated using a four-stage weighted least squares estimator, with consistency and asymptotic normality established everywhere in the parameter space. Finally, the new RMINAR models are illustrated with simulated and empirical examples.
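
The sketch below simulates an $\mathbb{N}_0$-valued recursion built from the random multiplication operator described above. The Poisson choices for the random integer coefficient and for the innovations are illustrative assumptions, not the paper's exact RMINAR specification or its four-stage estimator.

```python
import numpy as np

def rminar1_path(alpha=0.6, lam=2.0, n=200, x0=1, rng=None):
    """Sketch of an N0-valued RMINAR(1)-style recursion.

    The random multiplication operator is implemented as alpha * X -> K * X
    with K an integer-valued random coefficient whose mean is alpha; the
    Poisson(alpha) coefficient and Poisson(lam) innovations are illustrative.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = [x0]
    for _ in range(n - 1):
        k = rng.poisson(alpha)          # random integer coefficient, E[K] = alpha
        eps = rng.poisson(lam)          # non-negative integer innovation
        x.append(k * x[-1] + eps)
    return np.array(x)

path = rminar1_path()                   # stationary mean lam / (1 - alpha) = 5
```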

In this paper, we propose a new test for the equality of two population covariance matrices in the ultra-high-dimensional setting in which the dimension is much larger than both sample sizes. Our methodology relies on a data-splitting procedure and a comparison of a set of well-selected eigenvalues of the sample covariance matrices computed on the split data sets. Compared to existing methods, our methodology is adaptive in the sense that (i) it does not require specific assumptions (e.g., comparability or balance) on the two sample sizes; (ii) it does not require quantitative or structural assumptions on the population covariance matrices; and (iii) it does not require parametric distributions or detailed knowledge of the moments of the two populations. Theoretically, we establish the asymptotic distributions of the statistics used in our method and conduct a power analysis, showing that our method is powerful under very weak alternatives. Extensive numerical simulations show that our method significantly outperforms existing ones in terms of both size and power. Analyses of two real data sets are also carried out to demonstrate the usefulness and superior performance of the proposed methodology. An $\texttt{R}$ package, $\texttt{UHDtst}$, is developed for easy implementation of the proposed methodology.
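
To make the data-splitting idea concrete, the toy sketch below selects leading eigen-directions on one half of each sample and compares projected variances on the held-out halves. The specific statistic, the pooling of directions, and any calibration step are assumptions for illustration only; this is not the test implemented in the $\texttt{UHDtst}$ package.

```python
import numpy as np

def split_eigen_stat(X, Y, n_dirs=5):
    """Simplified illustration of data splitting for testing Sigma_X = Sigma_Y.

    Directions come from the pooled first halves; the statistic compares
    projected variances on the held-out halves. Toy instantiation only.
    """
    nx, ny = X.shape[0] // 2, Y.shape[0] // 2
    X1, X2 = X[:nx], X[nx:]
    Y1, Y2 = Y[:ny], Y[ny:]
    # Select leading eigenvectors of the pooled covariance on the first halves.
    pooled = np.cov(np.vstack([X1, Y1]), rowvar=False)
    _, vecs = np.linalg.eigh(pooled)
    V = vecs[:, -n_dirs:]                        # top n_dirs directions
    # Compare projected variances on the held-out halves.
    vx = (X2 @ V).var(axis=0, ddof=1)
    vy = (Y2 @ V).var(axis=0, ddof=1)
    return np.max(np.abs(np.log(vx) - np.log(vy)))
```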

In this article, we study nonparametric inference for a covariate-adjusted regression function. This parameter captures the average association between a continuous exposure and an outcome after adjusting for other covariates. Under certain causal conditions, it corresponds to the average outcome had all units been assigned to a specific exposure level, known as the causal dose-response curve. We propose a debiased local linear estimator of the covariate-adjusted regression function and demonstrate that it converges pointwise to a mean-zero normal limit distribution. We use this result to construct asymptotically valid confidence intervals for function values and differences thereof. In addition, we use approximation results for the distribution of the supremum of an empirical process to construct asymptotically valid uniform confidence bands. Our methods do not require undersmoothing and permit the use of data-adaptive estimators of nuisance functions, and our estimator attains the optimal rate of convergence for a twice-differentiable function. We illustrate the practical performance of the estimator using numerical studies and an analysis of the effect of air pollution exposure on cardiovascular mortality.
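
As background for the estimator, the snippet below implements a plain (neither debiased nor covariate-adjusted) local linear smoother at a point $x_0$; the Gaussian kernel and bandwidth are arbitrary illustrative choices, and the paper's debiasing and nuisance-estimation steps are not shown.

```python
import numpy as np

def local_linear_fit(x, y, x0, h=0.5):
    """Kernel-weighted local linear fit at x0 (illustrative building block only)."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)        # Gaussian kernel weights
    X = np.column_stack([np.ones_like(x), x - x0])
    W = np.diag(w)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    return beta[0]                                 # intercept = fitted value at x0

rng = np.random.default_rng(0)
x = rng.uniform(0, 3, 200)
y = np.sin(x) + rng.normal(0, 0.1, 200)
print(local_linear_fit(x, y, x0=1.5))              # close to sin(1.5)
```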

Many existing covariate shift adaptation methods estimate sample weights for use in risk estimation in order to mitigate the gap between the source and target distributions. However, non-parametrically estimating the optimal weights typically involves computationally expensive hyper-parameter tuning that is crucial to the final performance. In this paper, we propose a new non-parametric approach to covariate shift adaptation that avoids estimating weights and has no hyper-parameter to tune. Our basic idea is to label unlabeled target data according to the $k$-nearest neighbors in the source dataset, and our analysis indicates that setting $k = 1$ is an optimal choice. Thanks to this property, there is no need to tune any hyper-parameters, unlike other non-parametric methods. Moreover, to the best of our knowledge, our method is the first in the literature to achieve a running time quasi-linear in the sample size with a theoretical guarantee. Our results include sharp rates of convergence for estimating the joint probability distribution of the target data; in particular, the variance of our estimators has the same rate of convergence as in standard parametric estimation despite their non-parametric nature. Our numerical experiments show that the proposed method drastically reduces running time while achieving accuracy comparable to that of state-of-the-art methods.
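
A minimal sketch of the core idea, assuming scikit-learn is available: each unlabeled target point receives the label of its nearest neighbour in the source data ($k = 1$, the choice the analysis identifies as optimal), and a model is then fit on the pseudo-labelled target set. The downstream classifier and data handling are illustrative choices, not part of the paper's method.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.linear_model import LogisticRegression

def nn_covariate_shift_adapt(X_src, y_src, X_tgt):
    """Label target data by 1-nearest-neighbour lookup in the source set,
    then fit a model on the pseudo-labelled target data (illustrative sketch)."""
    nn = NearestNeighbors(n_neighbors=1).fit(X_src)
    _, idx = nn.kneighbors(X_tgt)                # index of nearest source point
    y_pseudo = y_src[idx[:, 0]]                  # k = 1 pseudo-labels
    clf = LogisticRegression(max_iter=1000).fit(X_tgt, y_pseudo)
    return clf
```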

In this paper, we introduce a new, simple approach to developing and establishing the convergence of splitting methods for a large class of stochastic differential equations (SDEs), including additive, diagonal and scalar noise types. The central idea is to view the splitting method as a replacement of the driving signal of an SDE, namely Brownian motion and time, with a piecewise linear path, yielding a sequence of ODEs that can be discretised to produce a numerical scheme. This new way of understanding splitting methods is inspired by, but does not use, rough path theory. We show that when the driving piecewise linear path matches certain iterated stochastic integrals of Brownian motion, a high order splitting method can be obtained. We propose a general proof methodology for establishing the strong convergence of these approximations that is akin to the general framework of Milstein and Tretyakov: once local error estimates are obtained for the splitting method, a global rate of convergence follows. This approach can then be readily applied in future research on SDE splitting methods. By incorporating recently developed approximations for iterated integrals of Brownian motion into these piecewise linear paths, we propose several high order splitting methods for SDEs satisfying a certain commutativity condition. In our experiments, which include the Cox-Ingersoll-Ross model and additive noise SDEs (noisy anharmonic oscillator, stochastic FitzHugh-Nagumo model, underdamped Langevin dynamics), the new splitting methods exhibit convergence rates of $O(h^{3/2})$ and outperform schemes previously proposed in the literature.
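
To illustrate the driving-path viewpoint (though not the paper's high-order schemes), the sketch below integrates an additive-noise SDE by replacing the Brownian driver on each step with the straight line through its increment and advancing the resulting ODE with a single RK4 step. The drift, noise level, and step count are illustrative assumptions.

```python
import numpy as np

def piecewise_linear_path_solver(f, sigma, x0, T=1.0, n_steps=1000, rng=None):
    """Integrate dX = f(X) dt + sigma dW by replacing W on each step with a
    piecewise linear path and solving the resulting ODE with one RK4 step.
    Minimal illustration of the driving-path viewpoint, not the paper's
    high-order splitting schemes."""
    rng = np.random.default_rng() if rng is None else rng
    h = T / n_steps
    x = np.array(x0, dtype=float)
    for _ in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(h), size=x.shape)
        g = lambda y: f(y) + sigma * dW / h       # ODE vector field on this step
        k1 = g(x)
        k2 = g(x + 0.5 * h * k1)
        k3 = g(x + 0.5 * h * k2)
        k4 = g(x + h * k3)
        x = x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return x

# Example: anharmonic-oscillator-style drift with additive noise (toy parameters).
x_T = piecewise_linear_path_solver(lambda y: -y**3, sigma=0.5, x0=[1.0])
```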

This paper presents an approach for efficiently approximating the inverse of the Fisher information, a key component in variational Bayes inference. A notable aspect of our approach is that it avoids analytically computing the Fisher information matrix and explicitly inverting it. Instead, we introduce an iterative procedure that generates a sequence of matrices converging to the inverse of the Fisher information. The resulting natural gradient variational Bayes algorithm without matrix inversion is provably convergent and achieves a convergence rate of order $O(\log s / s)$, where $s$ is the number of iterations. We also obtain a central limit theorem for the iterates. Our algorithm is versatile and applicable across a diverse array of variational Bayes settings, including Gaussian approximation and normalizing-flow variational Bayes. We offer a range of numerical examples to demonstrate the efficiency and reliability of the proposed variational Bayes method.
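
For intuition about inversion-free iterations, the snippet below uses the classical Newton-Schulz recursion, which converges to the inverse of a well-conditioned matrix without ever calling a matrix-inverse routine. This is a generic illustration of the idea only, not the paper's natural gradient variational Bayes recursion or its $O(\log s / s)$ analysis.

```python
import numpy as np

def newton_schulz_inverse(A, n_iter=30):
    """Iteratively approximate A^{-1} without explicit inversion
    (generic Newton-Schulz illustration, not the paper's algorithm)."""
    # Scale the initial guess so that the iteration contracts.
    X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
    I = np.eye(A.shape[0])
    for _ in range(n_iter):
        X = X @ (2 * I - A @ X)
    return X

F = np.array([[2.0, 0.3], [0.3, 1.0]])    # toy "Fisher information" matrix
print(newton_schulz_inverse(F) @ F)        # approximately the identity
```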

This paper proposes a specialized autonomous driving system that takes into account the unique constraints and characteristics of automotive systems. The proposed system systematically analyzes the intricate data flow in autonomous driving and provides functionality to dynamically adjust the various factors that influence deep learning models. For algorithms that do not rely on deep learning models, the system analyzes the flow to determine resource allocation priorities. In essence, the system optimizes data flow and scheduling to ensure real-time performance and safety. The proposed system was implemented in actual autonomous vehicles and experimentally validated across various driving scenarios. The experimental results demonstrate the system's stable inference and effective control of autonomous vehicles.

In recent years, object detection has experienced impressive progress. Despite these improvements, there is still a significant performance gap between the detection of small and large objects. We analyze the current state-of-the-art model, Mask-RCNN, on the challenging MS COCO dataset and show that the overlap between small ground-truth objects and the predicted anchors is much lower than the expected IoU threshold. We conjecture this is due to two factors: (1) only a few images contain small objects, and (2) even within images that contain them, small objects do not appear often enough. We therefore propose to oversample images with small objects and to augment each of these images by copy-pasting small objects many times. This allows us to trade off the quality of the detector on large objects against that on small objects. We evaluate different pasting augmentation strategies and ultimately achieve a 9.7\% relative improvement on instance segmentation and 7.1\% on object detection of small objects, compared to the current state-of-the-art method on MS COCO.
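
A minimal sketch of the copy-paste augmentation described above, assuming an instance mask is available as a boolean array. Placement is uniform at random with no overlap or blending checks, so this is an illustration rather than the paper's exact pasting strategies.

```python
import numpy as np

def paste_small_object(image, obj_patch, obj_mask, n_copies=3, rng=None):
    """Copy-paste a small object (patch + boolean mask) into random locations.

    Illustrative sketch of the oversampling/copy-paste augmentation; real use
    would also update the annotations and avoid pasting over other objects.
    """
    rng = np.random.default_rng() if rng is None else rng
    out = image.copy()
    H, W = out.shape[:2]
    h, w = obj_mask.shape
    boxes = []
    for _ in range(n_copies):
        top = rng.integers(0, H - h)
        left = rng.integers(0, W - w)
        region = out[top:top + h, left:left + w]
        region[obj_mask] = obj_patch[obj_mask]    # paste only the masked pixels
        boxes.append((left, top, left + w, top + h))
    return out, boxes
```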
