
We consider the problem of mean-square optimal estimation of linear functionals that depend on the unknown values of a stochastic stationary sequence, based on observations of the sequence at special sets of points. Formulas for calculating the mean-square error and the spectral characteristic of the optimal linear estimate of the functionals are derived under the condition of spectral certainty, where the spectral density of the sequence is exactly known. The minimax (robust) method of estimation is applied in the case where the spectral density of the sequence is not known exactly but some sets of admissible spectral densities are given. Formulas that determine the least favourable spectral densities and the minimax spectral characteristics are derived for some special sets of admissible densities.
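As a concrete time-domain analogue of the spectral formulas above, the mean-square optimal linear estimate of a functional from a subset of observed points is the orthogonal projection onto the span of the observations. The sketch below uses an assumed AR(1) covariance and an assumed observation set; it illustrates the classical projection formula, not the paper's spectral-characteristic derivation.

```python
import numpy as np

def ar1_cov(n, phi=0.6, sigma2=1.0):
    """Covariance matrix of a stationary AR(1) sequence (an assumed toy model)."""
    idx = np.arange(n)
    return sigma2 / (1 - phi**2) * phi ** np.abs(idx[:, None] - idx[None, :])

n = 8
C = ar1_cov(n)
observed = [0, 1, 2, 5, 6, 7]          # the "special set" of observed points
a = np.zeros(n)
a[3] = a[4] = 1.0                      # functional A = x_3 + x_4 (unobserved values)

C_yy = C[np.ix_(observed, observed)]   # covariance of the observations
c_ay = a @ C[:, observed]              # cross-covariance of A with the observations
w = np.linalg.solve(C_yy, c_ay)        # optimal linear weights
mse = float(a @ C @ a - c_ay @ w)      # mean-square error of the optimal estimate
```

The weights `w` applied to the observed values give the optimal linear estimate; `mse` is never larger than the prior variance of the functional.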

Related content

This study proposes an efficient Newton-type method for the optimal control of switched systems under a given mode sequence. A mesh-refinement-based approach is utilized to discretize continuous-time optimal control problems (OCPs) and formulate a nonlinear program (NLP), which guarantees the local convergence of a Newton-type method. A dedicated structure-exploiting algorithm (Riccati recursion) is proposed to perform a Newton-type method for the NLP efficiently, because its sparsity structure differs from that of a standard OCP. The proposed method computes each Newton step with time complexity linear in the total number of discretization grid points, as in the standard Riccati recursion algorithm. Additionally, the computation is always successful if the solution is sufficiently close to a local minimum. Conversely, general quadratic programming (QP) solvers cannot accomplish this because the Hessian matrix is inherently indefinite. Moreover, a modification of the reduced Hessian matrix is proposed, exploiting the fact that the Riccati recursion acts as dynamic programming for a QP subproblem, to enhance convergence. A numerical comparison with off-the-shelf NLP solvers demonstrates that the proposed method is up to two orders of magnitude faster. Whole-body optimal control of quadrupedal gaits is also demonstrated, showing that the proposed method can achieve whole-body model predictive control (MPC) of robotic systems with rigid contacts.
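The structure-exploiting step above builds on the classical backward Riccati recursion, which computes LQR feedback gains in time linear in the horizon length. The sketch below shows that standard recursion on an assumed double-integrator model; the paper's algorithm extends this structure to the switched-system NLP.

```python
import numpy as np

def riccati_lqr(A, B, Q, R, Qf, N):
    """Backward Riccati recursion for discrete-time LQR: O(N) in the horizon.

    Returns the feedback gains K_0..K_{N-1} and the cost-to-go matrix at t=0.
    """
    P = Qf
    gains = []
    for _ in range(N):
        BtP = B.T @ P
        K = np.linalg.solve(R + BtP @ B, BtP @ A)   # K = (R + B'PB)^{-1} B'PA
        P = Q + A.T @ P @ (A - B @ K)               # Riccati update
        gains.append(K)
    return gains[::-1], P

# Assumed example system: a double integrator with time step 0.1.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q = np.eye(2); R = np.array([[1.0]]); Qf = np.eye(2)
gains, P0 = riccati_lqr(A, B, Q, R, Qf, N=50)
```

Each Newton step in the abstract's method has the same backward-then-forward sweep structure, which is what yields the linear time complexity.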

A common misconception is that the Oracle eigenvalue estimator of the covariance matrix yields the best realized portfolio performance. In reality, the Oracle estimator simply modifies the empirical covariance matrix eigenvalues so as to minimize the Frobenius distance between the filtered and the realized covariance matrices. This leads to the best portfolios only when the in-sample eigenvectors coincide with the out-of-sample ones. In all the other cases, the optimal eigenvalue correction can be obtained from the solution of a Quadratic-Programming problem. Solving it shows that the Oracle estimators only yield the best portfolios in the limit of infinite data points per asset and only in stationary systems.
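The Frobenius-minimizing correction described above has a simple closed form: keeping the in-sample eigenvectors $u_i$, the optimal diagonal entries are $u_i^\top C_{\text{real}} u_i$. A minimal sketch, with an assumed diagonal "realized" covariance standing in for the out-of-sample matrix:

```python
import numpy as np

rng = np.random.default_rng(0)

d, n = 5, 40
C_true = np.diag(np.linspace(1.0, 3.0, d))       # assumed "realized" covariance
X = rng.multivariate_normal(np.zeros(d), C_true, size=n)
C_sample = np.cov(X, rowvar=False)               # empirical covariance

# Oracle eigenvalues: keep sample eigenvectors U, set lambda_j = u_j' C_true u_j,
# which minimizes || U diag(lambda) U' - C_true ||_F over the diagonal.
_, U = np.linalg.eigh(C_sample)
oracle_vals = np.einsum('ij,ij->j', U, C_true @ U)
C_oracle = U @ np.diag(oracle_vals) @ U.T
```

Because `C_sample` itself belongs to the family `U diag(l) U'`, the oracle matrix is by construction at least as close to `C_true` in Frobenius norm; as the abstract notes, this need not translate into the best realized portfolio when the eigenvectors rotate out of sample.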

An accurate covariance matrix is essential for obtaining reliable cosmological results when using a Gaussian likelihood. In this paper we study the covariance of pseudo-$C_\ell$ estimates of tomographic cosmic shear power spectra. Using two existing publicly available codes in combination, we calculate the full covariance matrix, including mode-coupling contributions arising from both partial sky coverage and non-linear structure growth. For three different sky masks, we compare the theoretical covariance matrix to that estimated from publicly available N-body weak lensing simulations, finding good agreement. We find that as a more extreme sky cut is applied, a corresponding increase in both Gaussian off-diagonal covariance and non-Gaussian super-sample covariance is observed in both theory and simulations, in accordance with expectations. Studying the different contributions to the covariance in detail, we find that the Gaussian covariance dominates along the main diagonal and the closest off-diagonals, but further away from the main diagonal the super-sample covariance is dominant. Forming mock constraints in parameters describing matter clustering and dark energy, we find that neglecting non-Gaussian contributions to the covariance can lead to underestimating the true size of confidence regions by up to 70 per cent. The dominant non-Gaussian covariance component is the super-sample covariance, but neglecting the smaller connected non-Gaussian covariance can still lead to the underestimation of uncertainties by 10--20 per cent. A real cosmological analysis will require marginalisation over many nuisance parameters, which will decrease the relative importance of all cosmological contributions to the covariance, so these values should be taken as upper limits on the importance of each component.
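For intuition about the Gaussian (disconnected) part of the covariance that dominates near the diagonal, the common $f_{\rm sky}$ approximation gives $\mathrm{Cov}(C_\ell, C_{\ell'}) = \delta_{\ell\ell'}\, 2C_\ell^2 / ((2\ell+1) f_{\rm sky})$. A toy sketch with an assumed power-law spectrum; the mode-coupling, super-sample, and connected non-Gaussian terms studied in the paper are not modelled here:

```python
import numpy as np

ells = np.arange(2, 100)
C_ell = 1e-7 * (ells / 100.0) ** -2.0   # assumed toy shear-like power spectrum
f_sky = 0.3                              # fraction of sky retained by the mask

# Disconnected "Gaussian" covariance in the f_sky approximation: diagonal only.
gauss_cov = np.diag(2.0 * C_ell**2 / ((2 * ells + 1) * f_sky))
```

The $1/f_{\rm sky}$ factor already shows the trend seen in the paper: a more extreme sky cut (smaller $f_{\rm sky}$) inflates the variance, and the exact pseudo-$C_\ell$ treatment additionally couples neighbouring multipoles off the diagonal.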

This work provides a theoretical framework for the pose estimation problem using total least squares for vector observations from landmark features. First, the optimization framework is formulated with observation vectors extracted from point cloud features. Then, error-covariance expressions are derived. The attitude and position solutions obtained via the derived optimization framework are proven to reach the bounds defined by the Cram\'er-Rao lower bound under the small-angle approximation of attitude errors. The measurement data for the simulation of this problem is provided through a series of vector observation scans, and a fully populated observation noise-covariance matrix is assumed as the weight in the cost function to cover the most general case of the sensor uncertainty. Here, previous derivations are expanded for the pose estimation problem to include more generic correlations in the errors than previous cases involving an isotropic noise assumption. The proposed solution is simulated in a Monte-Carlo framework to validate the error-covariance analysis.
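As a baseline for the total-least-squares framework above, the classical SVD (Kabsch) solution recovers the rigid transform from matched vector observations under an isotropic-noise assumption; the paper generalizes this to fully populated noise covariances. A noise-free sketch with an assumed rotation and translation:

```python
import numpy as np

rng = np.random.default_rng(1)

def kabsch_pose(P, Q):
    """Rigid transform (R, t) minimizing sum_i ||R p_i + t - q_i||^2 via SVD."""
    p0, q0 = P.mean(axis=0), Q.mean(axis=0)
    H = (P - p0).T @ (Q - q0)                 # cross-covariance of centred clouds
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    return R, q0 - R @ p0

# Assumed ground-truth pose: rotation about z by 0.3 rad, plus a translation.
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.5, -0.2, 1.0])

P = rng.normal(size=(20, 3))                  # landmark positions (one "scan")
Q = P @ R_true.T + t_true                     # observed, noise-free for the sketch
R_est, t_est = kabsch_pose(P, Q)
```

With anisotropic, correlated errors in both point sets, this least-squares solution is no longer efficient, which is the motivation for the total-least-squares formulation and its Cram\'er-Rao analysis.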

A T-graph (a special case of a chordal graph) is the intersection graph of connected subtrees of a suitable subdivision of a fixed tree T. We deal with the isomorphism problem for T-graphs, which is GI-complete in general, i.e., when T is part of the input, even when T is a star. We prove that the T-graph isomorphism problem is in FPT when T is the fixed parameter of the problem. Equivalently, isomorphism is in FPT for chordal graphs of (so-called) bounded leafage. While the recognition problem for T-graphs is not known to be in FPT w.r.t. T, our algorithm does not need a T-representation to be given (a promise is enough). To obtain the result, we combine a suitable isomorphism-invariant decomposition of T-graphs with the classical tower-of-groups algorithm of Babai, and reuse some of the ideas of our isomorphism algorithm for S_d-graphs [MFCS 2020].

We study fractional variants of the quasi-norms introduced by Brezis, Van Schaftingen, and Yung in the study of the Sobolev space $\dot W^{1,p}$. The resulting spaces are identified as a special class of real interpolation spaces of Sobolev-Slobodecki\u{\i} spaces. We establish the equivalence between Fourier analytic definitions and definitions via difference operators acting on measurable functions. We prove various new results on embeddings and non-embeddings, and give applications to harmonic and caloric extensions. For suitable wavelet bases we obtain a characterization of the approximation spaces for best $n$-term approximation from a wavelet basis via smoothness conditions on the function; this extends a classical result by DeVore, Jawerth and Popov.

Let $P$ be a set of points in $\mathbb{R}^d$, where each point $p\in P$ has an associated transmission range $\rho(p)$. The range assignment $\rho$ induces a directed communication graph $\mathcal{G}_{\rho}(P)$ on $P$, which contains an edge $(p,q)$ iff $|pq| \leq \rho(p)$. In the broadcast range-assignment problem, the goal is to assign the ranges such that $\mathcal{G}_{\rho}(P)$ contains an arborescence rooted at a designated node and whose cost $\sum_{p \in P} \rho(p)^2$ is minimized. We study trade-offs between the stability of the solution -- the number of ranges that are modified when a point is inserted into or deleted from $P$ -- and its approximation ratio. We introduce $k$-stable algorithms, which are algorithms that modify the range of at most $k$ points when they update the solution. We also introduce the concept of a stable approximation scheme (SAS). A SAS is an update algorithm that, for any given fixed parameter $\varepsilon>0$, is $k(\varepsilon)$-stable and maintains a solution with approximation ratio $1+\varepsilon$, where the stability parameter $k(\varepsilon)$ depends only on $\varepsilon$ and not on the size of $P$. We study such trade-offs in three settings.

- In $\mathbb{R}^1$, we present a SAS with $k(\varepsilon)=O(1/\varepsilon)$, which we show is tight in the worst case. We also present a 1-stable $(6+2\sqrt{5})$-approximation algorithm, a 2-stable 2-approximation algorithm, and a 3-stable 1.97-approximation algorithm.
- In $\mathbb{S}^1$ (where the underlying space is a circle), we prove that no SAS exists, even though an optimal solution can always be obtained by cutting the circle at an appropriate point and solving the resulting problem in $\mathbb{R}^1$.
- In $\mathbb{R}^2$, we also prove that no SAS exists, and we present an $O(1)$-stable $O(1)$-approximation algorithm.
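The model in the opening sentences can be made concrete in a few lines: a range assignment induces the directed graph $\mathcal{G}_\rho(P)$, feasibility of broadcast from the root is plain reachability, and the cost is $\sum_p \rho(p)^2$. A sketch in $\mathbb{R}^1$ with an assumed point set and assignment; this only checks and costs a given assignment, it is not one of the paper's update algorithms:

```python
from collections import deque

def broadcast_ok(points, rho, root):
    """BFS reachability in G_rho(P): edge (p, q) exists iff |pq| <= rho[p]."""
    n = len(points)
    seen = {root}
    queue = deque([root])
    while queue:
        p = queue.popleft()
        for q in range(n):
            if q not in seen and abs(points[p] - points[q]) <= rho[p]:
                seen.add(q)
                queue.append(q)
    return len(seen) == n

points = [0.0, 1.0, 3.0, 4.0]       # assumed point set on the line
rho = [1.0, 2.0, 1.0, 0.0]          # assumed feasible assignment, root at index 0
cost = sum(r**2 for r in rho)       # 1 + 4 + 1 + 0 = 6
```

Shrinking the second range to 1.0 disconnects the point at 3.0, illustrating how a single range change can break the arborescence; bounding the number of such changes per update is exactly the stability notion studied.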

Stochastic gradient descent with momentum (SGDM) is the dominant algorithm in many optimization scenarios, including convex optimization instances and non-convex neural network training. Yet, in the stochastic setting, momentum interferes with gradient noise, often requiring specific step size and momentum choices in order to guarantee convergence, let alone acceleration. Proximal point methods, on the other hand, have gained much attention due to their numerical stability and robustness against imperfect tuning. Their stochastic accelerated variants, though, have received limited attention: how momentum interacts with the stability of (stochastic) proximal point methods remains largely unstudied. To address this, we focus on the convergence and stability of the stochastic proximal point algorithm with momentum (SPPAM), and show that, under proper hyperparameter tuning, SPPAM achieves faster linear convergence to a neighborhood than the stochastic proximal point algorithm (SPPA), with a better contraction factor. In terms of stability, we show that SPPAM depends on problem constants more favorably than SGDM, allowing a wider range of step sizes and momentum values that lead to convergence.
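One plausible form of the SPPAM iteration (the paper's exact placement of the momentum term may differ) applies the proximal operator of a sampled component at a heavy-ball extrapolation point. For least squares, the proximal step has a closed form via the Sherman-Morrison identity; the instance below is an assumed interpolation-regime example:

```python
import numpy as np

rng = np.random.default_rng(2)

def prox_ls(y, a, b, eta):
    """argmin_x 0.5*(a@x - b)^2 + ||x - y||^2 / (2*eta), in closed form."""
    return y - eta * (a @ y - b) / (1.0 + eta * (a @ a)) * a

d, n = 5, 200
A = rng.normal(size=(n, d))
x_star = rng.normal(size=d)
b = A @ x_star                              # consistent system (no noise)

x_prev = x = np.zeros(d)
eta, beta = 0.5, 0.3                        # assumed step size and momentum
for _ in range(3000):
    i = rng.integers(n)
    y = x + beta * (x - x_prev)             # heavy-ball extrapolation
    x_prev, x = x, prox_ls(y, A[i], b[i], eta)
```

Because each update solves a regularized subproblem exactly rather than taking a raw gradient step, the iteration tolerates a fairly large `eta`, which is the stability advantage the abstract contrasts with SGDM.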

We propose a new method of estimation in topic models that is not a variation on the existing simplex-finding algorithms and that estimates the number of topics K from the observed data. We derive new finite-sample minimax lower bounds for the estimation of A, as well as new upper bounds for our proposed estimator. We describe the scenarios in which our estimator is minimax adaptive. Our finite-sample analysis is valid for any number of documents (n), individual document length (N_i), dictionary size (p), and number of topics (K); both p and K are allowed to increase with n, a situation not handled well by previous analyses. We complement our theoretical results with a detailed simulation study. We illustrate that the new algorithm is faster and more accurate than the current ones, even though we start with the computational and theoretical disadvantage of not knowing the correct number of topics K, while we provide the competing methods with the correct value in our simulations.

In this paper, we study the optimal convergence rate for distributed convex optimization problems in networks. We model the communication restrictions imposed by the network as a set of affine constraints and provide optimal complexity bounds for four different setups, namely when the function $F(x) \triangleq \sum_{i=1}^{m}f_i(x)$ is (i) strongly convex and smooth, (ii) strongly convex, (iii) smooth, or (iv) just convex. Our results show that Nesterov's accelerated gradient descent on the dual problem can be executed in a distributed manner and obtains the same optimal rates as in the centralized version of the problem (up to constant or logarithmic factors), with an additional cost related to the spectral gap of the interaction matrix. Finally, we discuss some extensions of the proposed setup, such as proximal-friendly functions, time-varying graphs, and improvement of the condition numbers.
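The centralized building block referenced above is Nesterov's accelerated gradient method; the paper runs it on the dual of the affinely constrained problem. A minimal sketch of the constant-momentum variant for a $\mu$-strongly convex, $L$-smooth quadratic (the problem data are assumptions):

```python
import numpy as np

def nesterov(grad, x0, L, mu, iters):
    """Nesterov's method with the constant momentum (sqrt(k)-1)/(sqrt(k)+1)."""
    kappa = L / mu
    beta = (np.sqrt(kappa) - 1) / (np.sqrt(kappa) + 1)
    x = y = x0
    for _ in range(iters):
        x_new = y - grad(y) / L            # gradient step at the extrapolated point
        y = x_new + beta * (x_new - x)     # momentum extrapolation
        x = x_new
    return x

# Assumed quadratic: f(z) = 0.5 z'Qz - b'z with mu = 1, L = 10.
Q = np.diag([1.0, 10.0])
b = np.array([1.0, -2.0])
x_opt = np.linalg.solve(Q, b)
x = nesterov(lambda z: Q @ z - b, np.zeros(2), L=10.0, mu=1.0, iters=200)
```

The $(1 - 1/\sqrt{\kappa})$ contraction per iteration is the "optimal rate" the abstract refers to; in the distributed setting the effective condition number additionally involves the spectral gap of the interaction matrix.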
