
In this study, we generalize the problem of sampling a scalar Gauss-Markov process, namely the Ornstein-Uhlenbeck (OU) process, where samples are sent to a remote estimator that makes a causal estimate of the observed real-time signal. In recent years, this problem has been solved for stable OU processes. We present solutions for the optimal sampling policy, which achieves a smaller estimation error, for both the stable and unstable cases of the OU process, along with the special case in which the OU process reduces to a Wiener process. The obtained optimal sampling policy is a threshold policy; however, the thresholds differ across the three cases. We then consider samples corrupted by additive noise, where the sampling decision is made beforehand and the estimator uses the noisy samples to estimate the current signal value. The mean-square error (MSE) changes relative to the noiseless setting; we solve for the additional term in the MSE, which provides a performance upper bound and opens a direction for further investigation into sampling strategies that minimize the estimation error when the observed samples are noisy. Numerical results show the performance degradation caused by the additive noise.
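As a toy illustration of the threshold structure (with an arbitrary threshold, not the paper's optimal one), the following sketch simulates a stable OU process and runs a threshold sampling policy in which the remote estimator holds the MMSE prediction between samples. All parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ou(theta=0.5, sigma=1.0, dt=0.01, n=20000):
    """Exact-discretisation simulation of a zero-mean stable OU process
    dX_t = -theta X_t dt + sigma dW_t."""
    x = np.empty(n)
    x[0] = 0.0
    decay = np.exp(-theta * dt)
    sd = sigma * np.sqrt((1.0 - np.exp(-2.0 * theta * dt)) / (2.0 * theta))
    for k in range(1, n):
        x[k] = decay * x[k - 1] + sd * rng.standard_normal()
    return x

def threshold_sampler(x, theta=0.5, dt=0.01, beta=0.5):
    """Threshold policy: between samples the estimator holds the MMSE
    prediction E[X_t | last sample] = X_s * exp(-theta (t - s)); a new
    sample is taken as soon as the estimation error reaches beta."""
    last_val, last_k = x[0], 0
    sample_times = [0]
    sq_err = 0.0
    for k in range(1, len(x)):
        est = last_val * np.exp(-theta * (k - last_k) * dt)
        err = x[k] - est
        sq_err += err * err
        if abs(err) >= beta:
            last_val, last_k = x[k], k
            sample_times.append(k)
    return sample_times, sq_err / len(x)

x = simulate_ou()
sample_times, mse = threshold_sampler(x)
```

The time-average error stays below the squared threshold, while the sampling rate adapts to the signal's excursions.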

Related content

In many real-world applications, we are interested in approximating black-box, costly functions as accurately as possible with the smallest number of function evaluations. A complex computer code is an example of such a function. In this work, a Gaussian process (GP) emulator is used to approximate the output of complex computer code. We consider the problem of extending an initial experiment (set of model runs) sequentially to improve the emulator. A sequential sampling approach based on leave-one-out (LOO) cross-validation is proposed that can be easily extended to a batch mode. This is a desirable property since it saves the user time when parallel computing is available. After fitting a GP to training data points, the expected squared LOO (ES-LOO) error is calculated at each design point. ES-LOO is used as a measure to identify important data points. More precisely, when this quantity is large at a point, the quality of prediction depends a great deal on that point and adding more samples nearby could improve the accuracy of the GP. As a result, it is reasonable to select the next sample where ES-LOO is maximised. However, ES-LOO is only known at the experimental design and needs to be estimated at unobserved points. To do this, a second GP is fitted to the ES-LOO errors and where the maximum of the modified expected improvement (EI) criterion occurs is chosen as the next sample. EI is a popular acquisition function in Bayesian optimisation and is used to trade off between local and global search. However, it has a tendency towards exploitation, meaning that its maximum is close to the (current) "best" sample. To avoid clustering, a modified version of EI, called pseudo expected improvement, is employed which is more explorative than EI and allows us to discover unexplored regions. Our results show that the proposed sampling method is promising.
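The ES-LOO scores at the design points can be computed without refitting the GP n times, using the closed-form leave-one-out identity for GP regression. The sketch below uses a fixed RBF kernel and a hypothetical cheap test function; the full method would additionally fit a second GP to these scores and maximise pseudo expected improvement.

```python
import numpy as np

def rbf_kernel(a, b, lengthscale=0.2):
    """Squared-exponential kernel on 1-D inputs."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

def es_loo(X, y, noise=1e-6):
    """Squared leave-one-out errors of a GP fit, computed in closed form:
    the LOO residual at point i equals (K^{-1} y)_i / (K^{-1})_{ii},
    so no n refits are needed."""
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    Kinv = np.linalg.inv(K)
    loo_resid = (Kinv @ y) / np.diag(Kinv)
    return loo_resid ** 2

X = np.linspace(0.0, 1.0, 10)
y = np.sin(6.0 * X)                  # cheap stand-in for the expensive simulator
scores = es_loo(X, y)
x_next_region = X[np.argmax(scores)]  # design point whose neighbourhood needs more runs
```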

The problem of optimal estimation of linear functionals constructed from the unobserved values of a stochastic sequence with periodically stationary increments, based on observations of the sequence with stationary noise, is considered. For sequences with known spectral densities, we obtain formulas for calculating the mean-square errors and the spectral characteristics of the optimal estimates of the functionals. Formulas that determine the least favorable spectral densities and the minimax-robust spectral characteristics of the optimal linear estimates of the functionals are proposed for the case where the spectral densities of the sequence are not exactly known but some sets of admissible spectral densities are specified.
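For the simplest stationary special case (an AR(1) sequence observed in white noise, both spectral densities known), the classical spectral formulas can be checked numerically: the optimal linear estimate has spectral characteristic $f/(f+g)$ and mean-square error $\frac{1}{2\pi}\int_{-\pi}^{\pi} \frac{f(\lambda)g(\lambda)}{f(\lambda)+g(\lambda)}\,d\lambda$. This is only the textbook Wiener-filter case, not the periodically-stationary-increment setting of the abstract.

```python
import numpy as np

rng = np.random.default_rng(1)

# AR(1) signal x with known spectral density f, observed in unit white noise (g = 1)
T, a = 2**15, 0.8
x = np.zeros(T)
for t in range(1, T):
    x[t] = a * x[t - 1] + rng.standard_normal()
y = x + rng.standard_normal(T)

lam = 2.0 * np.pi * np.fft.fftfreq(T)                 # frequency grid in [-pi, pi)
f = 1.0 / np.abs(1.0 - a * np.exp(-1j * lam)) ** 2    # AR(1) spectral density
H = f / (f + 1.0)                                     # optimal (non-causal) spectral characteristic
x_hat = np.real(np.fft.ifft(np.fft.fft(y) * H))       # frequency-domain filtering

mse_emp = np.mean((x_hat - x) ** 2)
mse_theory = np.mean(f / (f + 1.0))   # Riemann sum of (1/2pi) * int f*g/(f+g), g = 1
```

The empirical error of the spectral filter matches the theoretical mean-square error formula and beats using the raw observations.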

Spatio-temporal processes in environmental applications are often assumed to follow a Gaussian model, possibly after some transformation. However, heterogeneity in space and time might have a pattern that will not be accommodated by transforming the data. In this scenario, modelling the variance laws is an appealing alternative. This work adds flexibility to the usual Multivariate Dynamic Gaussian model by defining the process as a scale mixture between Gaussian and log-Gaussian processes. The scale is represented by a process varying smoothly over space and time which is allowed to depend on covariates. State-space equations define the dynamics over time for both mean and variance processes, resulting in feasible inference and prediction. Analyses of artificial datasets show that the parameters are identifiable and that simpler models are well recovered by the general proposed model. The analyses of two important environmental processes, maximum temperature and maximum ozone, illustrate the effectiveness of our proposal in improving the uncertainty quantification in the prediction of spatio-temporal processes.
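A minimal generative sketch of the scale-mixture idea, with a hypothetical AR(1) dynamic for the log-scale and a simple linear spatial mean; the latent log-Gaussian scale process inflates the variance of an otherwise Gaussian field.

```python
import numpy as np

rng = np.random.default_rng(2)

# Latent log-scale process: smooth AR(1) dynamics over time at S sites
T, S = 400, 10
log_lam = np.zeros((T, S))
for t in range(1, T):
    log_lam[t] = 0.95 * log_lam[t - 1] + 0.3 * rng.standard_normal(S)
lam = np.exp(log_lam)                          # log-Gaussian scale process

mu = np.linspace(0.0, 1.0, S)                  # simple spatial mean surface
y = mu + np.sqrt(lam) * rng.standard_normal((T, S))   # scale-mixture observations

# Squared residuals should track the latent scale process
corr = np.corrcoef(lam.ravel(), ((y - mu) ** 2).ravel())[0, 1]
```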

We consider the phase retrieval problem, in which the observer wishes to recover an $n$-dimensional real or complex signal $\mathbf{X}^\star$ from the (possibly noisy) observation of $|\mathbf{\Phi} \mathbf{X}^\star|$, in which $\mathbf{\Phi}$ is a matrix of size $m \times n$. We consider a \emph{high-dimensional} setting where $n,m \to \infty$ with $m/n = \mathcal{O}(1)$, and a large class of (possibly correlated) random matrices $\mathbf{\Phi}$ and observation channels. Spectral methods are a powerful tool to obtain approximate estimates of the signal $\mathbf{X}^\star$, which can then be used as initialization for a subsequent algorithm, at a low computational cost. In this paper, we extend and unify previous results and approaches on spectral methods for the phase retrieval problem. More precisely, we combine the linearization of message-passing algorithms and the analysis of the \emph{Bethe Hessian}, a classical tool of statistical physics. Using this toolbox, we show how to derive optimal spectral methods for arbitrary channel noise and right-unitarily invariant matrix $\mathbf{\Phi}$, in an automated manner (i.e. with no optimization over any hyperparameter or preprocessing function).
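A minimal sketch of a (non-optimal) spectral method of this family, using the simplest preprocessing $T(y) = y^2$ and an i.i.d. Gaussian sensing matrix rather than the optimally derived preprocessing of the paper: the top eigenvector of $\frac{1}{m}\mathbf{\Phi}^\top \mathrm{diag}(T(y))\,\mathbf{\Phi}$ correlates with the signal.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 50, 600                                 # oversampling ratio m/n = 12
x_star = rng.standard_normal(n)
x_star /= np.linalg.norm(x_star)
Phi = rng.standard_normal((m, n)) / np.sqrt(n)
y = np.abs(Phi @ x_star)                       # noiseless magnitude observations

# Weighted covariance D = (1/m) Phi^T diag(y^2) Phi, preprocessing T(y) = y^2
D = (Phi.T * (y ** 2)) @ Phi / m

# Power iteration for the top eigenvector
v = rng.standard_normal(n)
for _ in range(200):
    v = D @ v
    v /= np.linalg.norm(v)

overlap = abs(v @ x_star)                      # cosine similarity up to global sign
```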

In this paper, we consider an information update system in which a wireless sensor sends timely updates to the destination over a randomly blocking terahertz channel, powered by harvested energy with a reliable energy backup. The paper aims to find the optimal information updating policy that minimizes the time-average weighted sum of the Age of Information (AoI) and the reliable-energy costs, formulated as an infinite-state Markov decision process (MDP). By deriving the monotonicity of the value function in each component, the optimal information updating policy is proved to have a threshold structure. Based on this special structure, an algorithm for efficiently computing the optimal policy is proposed. Numerical results show that the proposed optimal updating policy outperforms baseline policies.
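The threshold structure can be observed on a toy version of such an MDP (hypothetical costs and success probability, age capped for tractability): value iteration yields a policy that is monotone in the age, i.e. send once the age exceeds a threshold.

```python
import numpy as np

# Toy AoI model: state = age a in {1, ..., A_MAX}; sending an update costs c
# and succeeds w.p. p over the blocking channel (resetting the age to 1);
# staying silent lets the age grow.  Per-slot cost = w*a + c*action.
A_MAX, p, w, c = 30, 0.6, 1.0, 8.0
GAMMA = 0.99

ages = np.arange(A_MAX + 1)
nxt = np.minimum(ages + 1, A_MAX)          # age evolution without a reset

V = np.zeros(A_MAX + 1)
for _ in range(5000):                      # value iteration
    q_idle = w * ages + GAMMA * V[nxt]
    q_send = w * ages + c + GAMMA * (p * V[1] + (1.0 - p) * V[nxt])
    V = np.minimum(q_idle, q_send)

q_idle = w * ages + GAMMA * V[nxt]
q_send = w * ages + c + GAMMA * (p * V[1] + (1.0 - p) * V[nxt])
policy = (q_send < q_idle).astype(int)     # 1 = send an update
```

The computed policy is nondecreasing in the age, which is exactly the threshold structure the abstract exploits.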

In black-box optimization problems, we aim to maximize an unknown objective function, where the function is only accessible through feedback from an evaluation or simulation oracle. In real life, the feedback from such oracles is often noisy and available only after some unknown delay that may depend on the computation time of the oracle. Additionally, if exact evaluations are expensive but coarse approximations are available at a lower cost, the feedback can be multi-fidelity. To address this problem, we propose a generic extension of hierarchical optimistic tree search (HOO), called ProCrastinated Tree Search (PCTS), that flexibly accommodates a delay- and noise-tolerant bandit algorithm. We provide a generic proof technique to quantify the regret of PCTS under delayed, noisy, and multi-fidelity feedback. Specifically, we derive regret bounds of PCTS equipped with the delayed-UCB1 (DUCB1) and delayed-UCB-V (DUCBV) algorithms. Given a horizon $T$, PCTS retains the regret bound of non-delayed HOO for an expected delay of $O(\log T)$, and the bound worsens by $O(T^{\frac{1-\alpha}{d+2}})$ for expected delays of $O(T^{1-\alpha})$, $\alpha \in (0,1]$. We experimentally validate on multiple synthetic functions and hyperparameter tuning problems that PCTS outperforms state-of-the-art black-box optimization methods for feedback with different noise levels, delays, and fidelities.
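A minimal sketch of the delayed-feedback bandit ingredient: a UCB1 index computed only from delivered feedback, with a fixed delay and Bernoulli arms for illustration. The actual DUCB1 analysis allows random delays and runs inside the hierarchical tree search; everything below is a simplified stand-in.

```python
import math
import random
from collections import deque

def ducb1(means, horizon=3000, delay=10, seed=0):
    """UCB1 under delayed feedback: each pulled arm's reward only becomes
    visible `delay` rounds later; the index uses delivered feedback only."""
    rng = random.Random(seed)
    n_obs = [0] * len(means)        # delivered feedbacks per arm
    s_obs = [0.0] * len(means)      # delivered reward sums per arm
    n_issued = [0] * len(means)     # pulls issued (possibly still in flight)
    pending = deque()               # (arrival_time, arm, reward)
    picks = []
    for t in range(1, horizon + 1):
        # deliver feedback whose delay has elapsed
        while pending and pending[0][0] <= t:
            _, a, r = pending.popleft()
            n_obs[a] += 1
            s_obs[a] += r
        if min(n_issued) == 0:                # warm-up: issue each arm once
            arm = n_issued.index(0)
        elif min(n_obs) == 0:                 # nothing delivered yet: round robin
            arm = t % len(means)
        else:                                 # UCB1 index on observed statistics
            ucb = [s_obs[a] / n_obs[a] + math.sqrt(2.0 * math.log(t) / n_obs[a])
                   for a in range(len(means))]
            arm = max(range(len(means)), key=ucb.__getitem__)
        n_issued[arm] += 1
        picks.append(arm)
        reward = 1.0 if rng.random() < means[arm] else 0.0
        pending.append((t + delay, arm, reward))
    return picks

picks = ducb1([0.3, 0.5, 0.8])
best_share = picks.count(2) / len(picks)   # fraction of pulls on the best arm
```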

Decision-making often requires accurate estimation of treatment effects from observational data. This is challenging as outcomes of alternative decisions are not observed and have to be estimated. Previous methods estimate outcomes based on unconfoundedness but neglect any constraints that unconfoundedness imposes on the outcomes. In this paper, we propose a novel regularization framework for estimating average treatment effects that exploits unconfoundedness. To this end, we formalize unconfoundedness as an orthogonality constraint, which ensures that the outcomes are orthogonal to the treatment assignment. This orthogonality constraint is then included in the loss function via a regularization. Based on our regularization framework, we develop deep orthogonal networks for unconfounded treatments (DONUT), which learn outcomes that are orthogonal to the treatment assignment. Using a variety of benchmark datasets for estimating average treatment effects, we demonstrate that DONUT outperforms the state-of-the-art substantially.
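The orthogonality idea can be illustrated with a linear toy stand-in for DONUT (the paper uses deep networks): penalize the squared inner product between the centered treatment indicator and the outcome residuals. With a quadratic loss, the regularized problem stays linear and can be solved in closed form. All data-generating choices below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic observational data: the confounder x drives both the treatment
# assignment and the outcome; the true average treatment effect is 2.
n = 2000
x = rng.standard_normal(n)
t = (rng.random(n) < 1.0 / (1.0 + np.exp(-1.5 * x))).astype(float)
y = 1.0 + x + 2.0 * t + 0.1 * rng.standard_normal(n)

# Naive difference in means is confounded (biased upward here)
naive = y[t == 1].mean() - y[t == 0].mean()

# Linear outcome model y ~ Z beta with Z = [1, x, t]; the orthogonality
# penalty lam * ((t - tbar) . residual / n)^2 pushes the residuals to be
# orthogonal to the treatment assignment.
Z = np.column_stack([np.ones(n), x, t])
c_vec = t - t.mean()
lam = 1.0
zc = Z.T @ c_vec
A = Z.T @ Z / n + lam * np.outer(zc, zc) / n**2
b = Z.T @ y / n + lam * zc * (c_vec @ y) / n**2
beta = np.linalg.solve(A, b)
tau_hat = beta[-1]          # estimated average treatment effect
```

The regularized estimate recovers the true effect, while the naive contrast is visibly biased by the confounder.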

The problem of optimal estimation of linear functionals which depend on the unknown values of a periodically correlated stochastic sequence ${\zeta}(j)$ from observations of the sequence ${\zeta}(j)+{\theta}(j)$ at points $j\in\{\dots,-n,\dots,-2,-1,0\}\setminus S$, $S=\bigcup _{l=1}^{s-1}\{-M_l\cdot T+1,\dots,-M_{l-1}\cdot T-N_{l}\cdot T\}$, is considered, where ${\theta}(j)$ is a periodically correlated stochastic sequence uncorrelated with ${\zeta}(j)$. Formulas for calculating the mean-square error and the spectral characteristic of the optimal estimate of the functional $A\zeta$ are proposed in the case where the spectral densities of the sequences are exactly known. Formulas that determine the least favorable spectral densities and the minimax-robust spectral characteristics of the optimal estimates of the functionals are proposed in the case of spectral uncertainty, where the spectral densities are not exactly known but some sets of admissible spectral densities are specified.

Optimal $k$-thresholding algorithms are a class of sparse signal recovery algorithms that overcome the shortcomings of traditional hard thresholding algorithms caused by the oscillation of the residual function. In this paper, we provide a novel theoretical analysis of the data-time tradeoffs of optimal $k$-thresholding algorithms. Both the analysis and numerical results demonstrate that when the number of measurements is small, the algorithms cannot converge; when the number of measurements is suitably large, the number of measurements required for successful recovery has a negative correlation with the number of iterations and the algorithms can achieve linear convergence. Furthermore, the theory shows that the transition point of the number of measurements is on the order of $k \log({en}/{k})$, where $n$ is the dimension of the target signal.
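For contrast, the traditional hard-thresholding baseline mentioned above can be sketched as follows; the number of measurements is chosen comfortably above the $k\log(en/k)$ transition scale, which is roughly 24 in this hypothetical configuration.

```python
import numpy as np

rng = np.random.default_rng(4)

def iht(A, y, k, iters=300):
    """Iterative hard thresholding: a gradient step on ||y - Ax||^2 followed
    by keeping the k largest-magnitude entries (the traditional baseline
    whose residual oscillation optimal k-thresholding addresses)."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2      # safe step from the spectral norm
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = x + step * (A.T @ (y - A @ x))
        small = np.argsort(np.abs(x))[:-k]      # indices of all but the k largest
        x[small] = 0.0
    return x

n, k = 200, 5
m = 120                                         # well above k*log(e*n/k) ~ 24
A = rng.standard_normal((m, n)) / np.sqrt(m)
support = rng.choice(n, size=k, replace=False)
x_true = np.zeros(n)
x_true[support] = (1.0 + rng.random(k)) * rng.choice([-1.0, 1.0], size=k)
y = A @ x_true                                  # noiseless measurements

x_hat = iht(A, y, k)
rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
```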

Optimal transport distances have found many applications in machine learning for their capacity to compare non-parametric probability distributions. Yet their algorithmic complexity generally prevents their direct use on large scale datasets. Among the possible strategies to alleviate this issue, practitioners can rely on computing estimates of these distances over subsets of data, {\em i.e.} minibatches. While computationally appealing, we highlight in this paper some limits of this strategy, arguing it can lead to undesirable smoothing effects. As an alternative, we suggest that the same minibatch strategy coupled with unbalanced optimal transport can yield more robust behavior. We discuss the associated theoretical properties, such as unbiased estimators, existence of gradients and concentration bounds. Our experimental study shows that in challenging problems associated with domain adaptation, the use of unbalanced optimal transport leads to significantly better results, competing with or surpassing recent baselines.
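A minimal numpy sketch of the minibatch estimator with unbalanced (KL-relaxed) entropic OT, using the standard scaling iterations; all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

def unbalanced_sinkhorn(a, b, C, eps=0.1, rho=1.0, iters=200):
    """Entropic unbalanced OT with KL marginal relaxation, solved by the
    standard scaling iterations u <- (a / Kv)^(rho/(rho+eps)).  Returns the
    primal objective <P, C> + rho*KL(P1|a) + rho*KL(P^T 1|b)."""
    K = np.exp(-C / eps)
    u = np.ones_like(a)
    v = np.ones_like(b)
    fe = rho / (rho + eps)
    for _ in range(iters):
        u = (a / (K @ v)) ** fe
        v = (b / (K.T @ u)) ** fe
    P = u[:, None] * K * v[None, :]

    def gen_kl(p, q):  # generalized KL: sum p*log(p/q) - p + q
        terms = np.where(p > 0, p * np.log(np.where(p > 0, p / q, 1.0)), 0.0)
        return float(terms.sum() - p.sum() + q.sum())

    return float((P * C).sum()) + rho * (gen_kl(P.sum(1), a) + gen_kl(P.sum(0), b))

def minibatch_uot(X, Y, batch=64, reps=10):
    """Average unbalanced OT objective over random minibatch pairs."""
    vals = []
    for _ in range(reps):
        xb = X[rng.choice(len(X), batch, replace=False)]
        yb = Y[rng.choice(len(Y), batch, replace=False)]
        C = ((xb[:, None, :] - yb[None, :, :]) ** 2).sum(-1)   # squared distances
        w = np.full(batch, 1.0 / batch)
        vals.append(unbalanced_sinkhorn(w, w, C))
    return float(np.mean(vals))

X = rng.standard_normal((500, 2))
Y = rng.standard_normal((500, 2)) + np.array([3.0, 0.0])
d_same = minibatch_uot(X, X)    # minibatches from the same cloud
d_shift = minibatch_uot(X, Y)   # shifted cloud yields a larger divergence
```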
