
Minimum variance controllers have been employed in a wide range of industrial applications. A key challenge experienced by many adaptive controllers is their poor empirical performance in the initial stages of learning. In this paper, we address the problem of initializing such controllers so that they provide acceptable transients, and we also provide an accompanying finite-time regret analysis for adaptive minimum variance control of an auto-regressive system with exogenous inputs (ARX). Following [3], we consider a modified version of the Certainty Equivalence (CE) adaptive controller, which we call PIECE, that utilizes probing inputs for exploration. We show that it has a $C \log T$ bound on the regret after $T$ time-steps for bounded noise, and $C\log^2 T$ in the case of sub-Gaussian noise. Simulation results demonstrate the advantage of PIECE over the algorithm proposed in [3] as well as the standard Certainty Equivalence controller, especially in the initial learning phase. To the best of our knowledge, this is the first work that provides finite-time regret bounds for an adaptive minimum variance controller.
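The certainty-equivalence idea can be sketched on a toy scalar ARX(1,1) system: estimate the parameters by least squares and apply the minimum-variance input implied by the estimates, with periodic probing for exploration. This is only an illustrative sketch, not the PIECE algorithm itself; the system parameters, noise levels, probing schedule, and safeguards are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
a_true, b_true = 0.7, 1.0             # unknown ARX(1,1) parameters (assumed for the demo)
T = 5000
y = 0.0
X, Y = [], []                          # regression data: regressors and next outputs
theta = np.array([0.0, 1.0])           # initial estimate [a_hat, b_hat]
cost = 0.0
for t in range(T):
    a_hat, b_hat = theta
    if abs(b_hat) < 0.1:               # guard against division blow-up early on
        b_hat = 0.1 if b_hat >= 0 else -0.1
    u = float(np.clip(-a_hat / b_hat * y, -5.0, 5.0))   # CE minimum-variance input
    if t % 10 == 0:                    # periodic probing input for exploration
        u += rng.normal(scale=0.5)
    X.append([y, u])
    y = a_true * y + b_true * u + rng.normal(scale=0.1)
    Y.append(y)
    cost += y * y
    if t >= 5 and t % 20 == 0:         # periodic batch refit; recursive LS would be incremental
        theta, *_ = np.linalg.lstsq(np.array(X), np.array(Y), rcond=None)
```

With perfect estimates, the input cancels the predictable part of the output, so the per-step cost approaches the noise variance plus the probing contribution.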


Reversible data hiding (RDH) has been extensively studied in the field of information security. In our previous work [1], an explicit implementation approaching the rate-distortion bound of RDH was proposed. However, two challenges remain in that method. First, it suffers from a computing-precision problem due to the use of arithmetic coding, which may make further embedding impossible. Second, it has to transmit the probability distribution of the host signals during the embedding/extraction process, incurring considerable additional overhead and limiting its applicability. In this paper, we first propose an RDH scheme that employs our recent asymmetric numeral systems (ANS) variant as the underlying coding framework to avoid the computing-precision problem. We then give a dynamic implementation that does not require transmitting the host distribution in advance. Simulation results show that the proposed static method provides slightly higher peak signal-to-noise ratio (PSNR) values than our previous work, and larger embedding capacity than some state-of-the-art methods on gray-scale images. In addition, the proposed dynamic method entirely avoids the explicit transmission of the host distribution and achieves data embedding at the cost of a small loss in image quality.
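The precision issue that an ANS-based coder sidesteps can be seen in a minimal range-ANS (rANS) sketch: with Python's unbounded integers, encoding and decoding are exact integer bijections, so no arithmetic-coding-style rounding can make further embedding impossible. This is a generic textbook rANS, not the specific ANS variant proposed in the paper.

```python
def rans_encode(symbols, freqs):
    """Range ANS encoder over exact (unbounded) integers."""
    total = sum(freqs)
    cum = [0]
    for f in freqs:
        cum.append(cum[-1] + f)
    x = 1
    for s in reversed(symbols):          # encode in reverse so decoding is in order
        f = freqs[s]
        x = (x // f) * total + (x % f) + cum[s]
    return x

def rans_decode(x, freqs, n):
    """Decode n symbols; inverts rans_encode exactly."""
    total = sum(freqs)
    cum = [0]
    for f in freqs:
        cum.append(cum[-1] + f)
    out = []
    for _ in range(n):
        slot = x % total
        s = next(i for i in range(len(freqs)) if cum[i] <= slot < cum[i + 1])
        out.append(s)
        x = freqs[s] * (x // total) + slot - cum[s]
    return out
```

A practical coder would renormalize the state to a fixed word size, but the exact-integer version shows why no precision is ever lost.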

We consider reinforcement learning for continuous-time Markov decision processes (MDPs) in the infinite-horizon, average-reward setting. In contrast to discrete-time MDPs, a continuous-time process moves to a state and stays there for a random holding time after an action is taken. With unknown transition probabilities and rates of exponential holding times, we derive instance-dependent regret lower bounds that are logarithmic in the time horizon. Moreover, we design a learning algorithm and establish a finite-time regret bound that achieves the logarithmic growth rate. Our analysis builds upon upper confidence reinforcement learning, a delicate estimation of the mean holding times, and stochastic comparison of point processes.
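One ingredient of such an analysis, estimating the rate of an exponential holding-time distribution from samples together with a confidence half-width, can be sketched numerically. The bonus form below is an assumed illustration for the sketch, not the bound derived in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
true_rate = 2.0                                   # assumed transition rate
holds = rng.exponential(1.0 / true_rate, size=2000)

mean_hold = holds.mean()                          # MLE of the mean holding time
rate_hat = 1.0 / mean_hold                        # plug-in estimate of the rate
delta = 0.05
# Illustrative confidence half-width for the mean holding time:
half = mean_hold * np.sqrt(2.0 * np.log(2.0 / delta) / len(holds))
```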

In this paper we derive tight lower bounds resolving the hardness status of several fundamental weighted matroid problems. One notable example is budgeted matroid independent set, for which we show there is no fully polynomial-time approximation scheme (FPTAS), indicating the Efficient PTAS of [Doron-Arad, Kulik and Shachnai, SOSA 2023] is the best possible. Furthermore, we show that there is no pseudo-polynomial time algorithm for exact weight matroid independent set, implying the algorithm of [Camerini, Galbiati and Maffioli, J. Algorithms 1992] for representable matroids cannot be generalized to arbitrary matroids. Similarly, we show there is no FPTAS for constrained minimum basis of a matroid or for knapsack cover with a matroid, implying the existing Efficient PTAS for the former is optimal. For all of the above problems, we obtain unconditional lower bounds in the oracle model, where the independent sets of the matroid can be accessed only via a membership oracle. We complement these results by showing that the same lower bounds hold under standard complexity assumptions, even if the matroid is encoded as part of the instance. All of our bounds are based on a specifically structured family of paving matroids.
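For intuition on the membership-oracle model in which the lower bounds are stated: the classic greedy algorithm below finds a maximum-weight independent set of any matroid while touching the matroid only through an independence oracle. The toy uniform matroid is an assumption for the example, standing in for the structured paving matroids of the paper.

```python
def max_weight_independent(elements, weight, is_independent):
    """Greedy over weights; correct for any matroid, and it accesses
    the matroid only through the membership (independence) oracle."""
    picked = []
    for e in sorted(elements, key=weight, reverse=True):
        if is_independent(picked + [e]):
            picked.append(e)
    return picked

# Toy example: uniform matroid of rank 2 (any set of size <= 2 is independent).
oracle = lambda S: len(S) <= 2
w = {1: 5, 2: 3, 3: 9, 4: 1}
best = max_weight_independent(list(w), w.get, oracle)
```

The oracle-query viewpoint is exactly what makes unconditional information-theoretic lower bounds possible: hardness is measured in oracle calls, not computation.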

In this paper, we extend the Generalized Finite Difference Method (GFDM) to unknown compact submanifolds of Euclidean space, identified by randomly sampled data that (almost surely) lie in the interior of the manifolds. Theoretically, we formalize GFDM by exploiting a representation of smooth functions on the manifolds via Taylor expansions of polynomials defined on the tangent bundles. We illustrate the approach by approximating the Laplace-Beltrami operator, where a stable approximation is achieved by combining the Generalized Moving Least-Squares algorithm with a novel linear program that relaxes the diagonal-dominance constraint on the estimator, allowing a feasible solution even when higher-order polynomials are employed. We establish the theoretical convergence of GFDM in solving Poisson PDEs and numerically demonstrate its accuracy on simple smooth manifolds of low and moderately high co-dimension as well as on unknown 2D surfaces. For the Dirichlet Poisson problem where no data points on the boundaries are available, we employ GFDM with the volume-constraint approach, which imposes the boundary conditions on data points close to the boundary. When the location of the boundary is unknown, we introduce a novel technique to detect points close to the boundary without needing to estimate the distance of the sampled data points to the boundary. We demonstrate the effectiveness of the volume constraint by imposing the boundary conditions on the data points detected by this new technique, compared with imposing them on all points within a certain distance from the boundary; the latter is sensitive to the choice of truncation distance and requires knowledge of the boundary location.
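A minimal GFDM-flavoured sketch on a known manifold: on the unit circle, the Laplace-Beltrami operator of f(θ) is f''(θ), so projecting each point's neighbors onto the tangent line and fitting a local quadratic, in the spirit of the GMLS step, recovers it. The neighborhood size and polynomial degree are choices made for the demo, and no linear-programming stabilization is included.

```python
import numpy as np

n = 400
theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
pts = np.stack([np.cos(theta), np.sin(theta)], axis=1)   # samples on the unit circle
f = np.cos(theta)                                        # test function, with Lap f = -cos(theta)

lap = np.empty(n)
k = 4                                                    # neighbors on each side (assumed)
for i in range(n):
    p = pts[i]
    tangent = np.array([-np.sin(theta[i]), np.cos(theta[i])])
    idx = [(i + j) % n for j in range(-k, k + 1)]
    t = (pts[idx] - p) @ tangent                         # tangent-plane coordinates
    coef = np.polyfit(t, f[idx], 2)                      # local quadratic fit (GMLS-style)
    lap[i] = 2.0 * coef[0]                               # second derivative at t = 0

err = np.max(np.abs(lap + np.cos(theta)))                # compare with the exact answer
```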

Granger causality is among the most widely used data-driven approaches for causal analysis of time series data, with applications in various areas including economics, molecular biology, and neuroscience. Two of the main challenges of this methodology are: 1) over-fitting as a result of limited data duration, and 2) correlated process noise as a confounding factor, both leading to errors in identifying the causal influences. Sparse estimation via the LASSO has successfully addressed these challenges for parameter estimation. However, the classical statistical tests for Granger causality resort to asymptotic analysis of ordinary least squares, which requires long data durations to be useful and is not immune to confounding effects. In this work, we address this disconnect by introducing a LASSO-based statistic and studying its non-asymptotic properties under the assumption that the true models admit sparse autoregressive representations. We establish fundamental limits for reliable identification of Granger causal influences using the proposed LASSO-based statistic. We further characterize the false positive error probability and test power of a simple thresholding rule for identifying Granger causal effects, and provide two methods to set the threshold in a data-driven fashion. We present simulation studies and an application to real data to compare the performance of our proposed method with ordinary least squares and existing LASSO-based methods in detecting Granger causal influences; the results corroborate our theoretical findings.
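A thresholded-LASSO detection pipeline can be sketched end to end with a plain ISTA solver on a bivariate VAR(1) where x2 drives x1 but not conversely. The penalty and threshold values below are assumptions made for the demo, not the data-driven choices studied in the paper.

```python
import numpy as np

def lasso_ista(X, y, lam, iters=500):
    """Minimise 0.5 * ||y - X b||^2 + lam * ||b||_1 via iterative
    soft-thresholding (ISTA)."""
    L = np.linalg.norm(X, 2) ** 2          # Lipschitz constant of the smooth part
    b = np.zeros(X.shape[1])
    for _ in range(iters):
        z = b - X.T @ (X @ b - y) / L      # gradient step
        b = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return b

rng = np.random.default_rng(2)
A = np.array([[0.5, 0.4],
              [0.0, 0.5]])                 # ground truth: x2 drives x1, not conversely
T = 1000
x = np.zeros((T, 2))
for t in range(1, T):
    x[t] = A @ x[t - 1] + rng.normal(scale=0.5, size=2)

X, lam, tau = x[:-1], 5.0, 0.1             # assumed penalty and detection threshold
B = np.array([lasso_ista(X, x[1:, i], lam) for i in range(2)])
detected = np.abs(B) > tau                 # Granger-causal edges (column j -> row i)
```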

In uncertainty quantification, variance-based global sensitivity analysis quantitatively determines the effect of each input random variable on the output by partitioning the total output variance into contributions from each input. However, computing conditional expectations can be prohibitively costly when working with expensive-to-evaluate models. Surrogate models can accelerate this, yet their accuracy depends on the quality and quantity of training data, which is expensive to generate (experimentally or computationally) for complex engineering systems. Thus, methods that work with limited data are desirable. We propose a diffeomorphic modulation under observable response preserving homotopy (D-MORPH) regression to train a polynomial dimensional decomposition surrogate of the output that minimizes the amount of training data required. The new method first computes a sparse Lasso solution and uses it to define the cost function. A subsequent D-MORPH regression minimizes the difference between the D-MORPH and Lasso solutions. The resulting D-MORPH surrogate is more robust to input variations and more accurate with limited training data. We illustrate the accuracy and computational efficiency of the new surrogate for global sensitivity analysis using mathematical functions and an expensive-to-simulate model of char combustion. The new method is highly efficient, requiring only 15% of the training data compared to conventional regression.
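The core linear-algebra step can be illustrated with its unweighted analogue: among all coefficient vectors that interpolate the data, pick the one closest to a reference (e.g. Lasso) solution via the pseudoinverse. The actual D-MORPH regression uses a modulated version of this idea; the dimensions and the reference vector below are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.normal(size=(5, 12))              # underdetermined design: 12 basis terms, 5 samples
y = rng.normal(size=5)
c0 = np.zeros(12)
c0[[1, 4]] = [2.0, -1.0]                  # stand-in for a sparse Lasso solution

# Orthogonal projection of c0 onto the affine solution set {c : A c = y}:
c = c0 + np.linalg.pinv(A) @ (y - A @ c0)
```

Because A⁺A projects onto the row space of A, c satisfies the interpolation constraints exactly while staying as close as possible to the reference c0 in Euclidean norm.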

The present investigation focuses on the application of deep neural network (DNN) models to predict the filtered density function (FDF) of mixture fraction in large eddy simulation (LES) of variable-density mixing layers with conserved scalar mixing. A systematic training method is proposed to select the DNN-FDF model training sample size and architecture via learning curves, thereby reducing bias and variance. Two DNN-FDF models are developed: one trained on the FDFs generated from direct numerical simulation (DNS), and another trained with low-fidelity simulations in a zero-dimensional pairwise mixing stirred reactor (PMSR). The accuracy and consistency of both DNN-FDF models are established by comparing their predicted scalar filtered moments with those of conventional LES, in which the transport equations corresponding to these moments are directly solved. Further, the DNN-FDF approach is shown to perform better than the widely used $\beta$-FDF method, particularly for multi-modal FDF shapes and higher variances. Additionally, the DNN-FDF results are assessed via comparison with data obtained by DNS and by the transported FDF method; the latter involves LES coupled with Monte Carlo (MC) methods that directly account for the mixture fraction FDF. The DNN-FDF results compare favorably with those of DNS and the transported FDF method. Furthermore, DNN-FDF models exhibit good predictive capabilities compared to filtered DNS for the filtering of highly non-linear functions, highlighting their potential for applications in turbulent reacting flow simulations. Overall, the DNN-FDF approach offers a more accurate alternative to the conventional presumed FDF method for describing turbulent scalar transport in a cost-effective manner.
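For reference, the presumed $\beta$-FDF baseline against which the DNN models are compared maps the filtered mean and variance of the mixture fraction to Beta shape parameters by moment matching. The standard formula can be sketched as follows (the particular mean/variance values are assumptions for the example):

```python
def beta_fdf_params(mean, var):
    """Presumed beta-FDF: map filtered mean and variance of the mixture
    fraction to Beta(a, b) shape parameters by moment matching.
    Requires 0 < var < mean * (1 - mean)."""
    s = mean * (1.0 - mean) / var - 1.0
    return mean * s, (1.0 - mean) * s

a, b = beta_fdf_params(0.3, 0.05)
```

A single Beta shape is what limits this baseline for the multi-modal FDFs mentioned above, which is where a learned model has room to improve.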

We present a combination technique based on mixed differences of both spatial approximations and quadrature formulae for the stochastic variables to efficiently solve a class of Optimal Control Problems (OCPs) constrained by random partial differential equations. The method requires solving the OCP for several low-fidelity spatial grids and quadrature formulae for the objective functional. All the computed solutions are then linearly combined to obtain a final approximation which, under suitable regularity assumptions, preserves the same accuracy as fine tensor-product approximations while drastically reducing the computational cost. The combination technique involves only tensor-product quadrature formulae, so the discretized OCPs preserve the convexity of the continuous OCP. Hence, the combination technique avoids the inconveniences of Multilevel Monte Carlo and/or sparse-grid approaches, yet remains suitable for high-dimensional problems. The manuscript presents an a priori procedure to choose the most important mixed differences and an asymptotic complexity analysis, which shows that the asymptotic complexity is determined exclusively by the spatial solver. Numerical experiments validate the results.
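The combination idea can be sketched for a plain 2D quadrature: evaluate cheap tensor rules along two diagonals of levels and combine them linearly, which matches the accuracy of a far finer tensor grid at a fraction of the points. Nested trapezoid rules and the test integrand are choices made for the demo, not the discretizations of the paper.

```python
import numpy as np

def trap(level):
    """Nested 1-D trapezoid rule with 2**level + 1 nodes on [0, 1]."""
    n = 2 ** level + 1
    x = np.linspace(0.0, 1.0, n)
    w = np.full(n, 1.0 / (n - 1))
    w[0] *= 0.5
    w[-1] *= 0.5
    return x, w

def tensor_quad(f, l1, l2):
    """Full tensor-product rule at levels (l1, l2)."""
    x1, w1 = trap(l1)
    x2, w2 = trap(l2)
    return w1 @ f(x1[:, None], x2[None, :]) @ w2

def combination_quad(f, L):
    """Classical combination technique: sum of tensor rules with
    l1 + l2 = L minus those with l1 + l2 = L - 1."""
    total = 0.0
    for l1 in range(L + 1):
        total += tensor_quad(f, l1, L - l1)
    for l1 in range(L):
        total -= tensor_quad(f, l1, L - 1 - l1)
    return total

f = lambda x, y: np.exp(x + y)
exact = (np.e - 1.0) ** 2
approx = combination_quad(f, 10)
```

At level 10 the full tensor rule needs 1025 x 1025 nodes, while the combination uses 21 small grids, the largest of which is 1025 x 2.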

This paper studies online convex optimization with stochastic constraints. We propose a variant of the drift-plus-penalty algorithm that guarantees $O(\sqrt{T})$ expected regret and zero constraint violation after a fixed number of iterations, improving upon the vanilla drift-plus-penalty method, which incurs $O(\sqrt{T})$ constraint violation. Our algorithm is oblivious to the length of the time horizon $T$, in contrast to the vanilla drift-plus-penalty method. This is based on our novel drift lemma, which provides time-varying bounds on the virtual queue drift and, as a result, leads to time-varying bounds on the expected virtual queue length. Moreover, we extend our framework to stochastic-constrained online convex optimization under two-point bandit feedback. We show that by adapting our algorithmic framework to the bandit feedback setting, we can still achieve $O(\sqrt{T})$ expected regret and zero constraint violation, improving upon the previous work for the case of identical constraint functions. Numerical experiments corroborate our theoretical results.
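A bare-bones drift-plus-penalty loop can be sketched on a toy problem: a scalar decision in [0, 1], a stochastic quadratic loss, and one constraint x ≤ 0.4. All constants below are assumptions for the sketch, and this is the vanilla method with a fixed penalty weight, not the time-varying variant proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(4)
T = 5000
V = np.sqrt(T)               # penalty weight; V ~ sqrt(T) trades off regret vs. violation
Q, x, viol = 0.0, 0.0, 0.0   # virtual queue, decision, cumulative constraint violation
for t in range(T):
    theta = 0.8 + 0.05 * rng.standard_normal()     # stochastic loss f_t(x) = (x - theta)^2
    # argmin over [0, 1] of V * (x - theta)^2 + Q * (x - 0.4), in closed form:
    x = float(np.clip(theta - Q / (2.0 * V), 0.0, 1.0))
    g = x - 0.4                                    # constraint g(x) = x - 0.4 <= 0
    Q = max(Q + g, 0.0)                            # virtual queue (drift) update
    viol += g
avg_viol = max(viol, 0.0) / T
```

The queue Q grows while the constraint is violated, which pushes subsequent decisions toward feasibility; the cumulative violation is essentially bounded by the final queue length.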

In this paper we present a layered approach to the multi-agent control problem, decomposed into three stages, each building upon the results of the previous one. First, a high-level plan for a coarse abstraction of the system is computed, relying on parametric timed automata augmented with stopwatches, as these allow simplified dynamics of such systems to be modeled efficiently. The second stage, based on an SMT formulation, refines the high-level plan, which mainly handles the combinatorial aspects of the problem, into a more dynamically accurate solution. These first two stages are collectively referred to as the SWA-SMT solver. They are correct by construction but lack a crucial feature: they cannot be executed in real time. To overcome this, we use SWA-SMT solutions as the initial training dataset for our last stage, which aims at obtaining a neural network control policy. We use reinforcement learning to train the policy, and show that the initial dataset is crucial for the overall success of the method.
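The hand-off from the offline solver to policy learning can be caricatured as behaviour cloning on solver traces before any RL fine-tuning. The linear policy, state dimension, and synthetic "expert" below are purely illustrative assumptions, not the SWA-SMT data or network of the paper.

```python
import numpy as np

rng = np.random.default_rng(5)
states = rng.normal(size=(200, 4))              # hypothetical states visited by solver plans
expert = np.array([0.5, -1.0, 0.2, 0.8])        # assumed feedback law behind the plans
actions = states @ expert + 0.01 * rng.normal(size=200)  # near-deterministic plan actions

# Behaviour cloning: fit an initial linear policy to the solver dataset;
# an RL stage would then fine-tune this warm-started policy online.
W, *_ = np.linalg.lstsq(states, actions, rcond=None)
```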
