
An algorithm is presented that, taking a sequence of independent Bernoulli random variables with parameter $1/2$ as inputs and using only rational arithmetic, simulates a Bernoulli random variable with possibly irrational parameter $\tau$. It requires a series representation of $\tau$ with positive, rational terms, and a rational bound on its truncation error that converges to $0$. The number of required inputs has an exponentially bounded tail, and its mean is at most $3$. The number of arithmetic operations has a tail that can be bounded in terms of the sequence of truncation error bounds. The algorithm is applied to two specific values of $\tau$, including Euler's constant, for which obtaining a simple simulation algorithm was an open problem.
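
The abstract does not spell out the algorithm itself; the following minimal sketch only illustrates the stated interface (rational partial sums of a series for $\tau$ together with rational truncation-error bounds, consumed with exact rational arithmetic and fair coin flips). The functions `partial_sum`, `err_bound` and `fair_bit`, and the choice $\tau = e - 2$, are illustrative assumptions, and the lazy-comparison scheme below is a generic construction, not the paper's algorithm with its mean-three-inputs guarantee.

```python
from fractions import Fraction
import math
import random

def fair_bit():
    """One fair coin flip (a Bernoulli(1/2) input)."""
    return random.getrandbits(1)

def sample_bernoulli(partial_sum, err_bound):
    """Return 1 with probability tau, where partial_sum(n) <= tau <= partial_sum(n) + err_bound(n).

    Illustrative idea (not the paper's algorithm): lazily build a uniform
    U = 0.b1 b2 b3 ... from fair bits and compare it against the shrinking
    rational bracket around tau, stopping once the comparison is decided.
    All arithmetic is exact rational arithmetic via Fraction.
    """
    lo, hi = Fraction(0), Fraction(1)      # current dyadic interval containing U
    n = 1
    tau_lo, tau_hi = partial_sum(n), partial_sum(n) + err_bound(n)
    while True:
        if hi <= tau_lo:                   # U < tau for sure
            return 1
        if lo >= tau_hi:                   # U >= tau for sure
            return 0
        # refine whichever bracket is currently wider
        if hi - lo >= tau_hi - tau_lo:
            mid = (lo + hi) / 2
            if fair_bit():
                lo = mid
            else:
                hi = mid
        else:
            n += 1
            tau_lo, tau_hi = partial_sum(n), partial_sum(n) + err_bound(n)

# Example: tau = e - 2 = sum_{k >= 2} 1/k!, positive rational terms,
# with the rational tail bound sum_{k >= n+2} 1/k! <= 2/(n+2)!.
def partial_sum(n):
    return sum(Fraction(1, math.factorial(k)) for k in range(2, n + 2))

def err_bound(n):
    return Fraction(2, math.factorial(n + 2))

draws = [sample_bernoulli(partial_sum, err_bound) for _ in range(10000)]
print(sum(draws) / len(draws), "vs", math.e - 2)
```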

Related content

We consider an elliptic linear-quadratic parameter estimation problem with a finite number of parameters. A novel a priori bound for the parameter error is proved and, based on this bound, an adaptive finite element method driven by an a posteriori error estimator is presented. Unlike prior results in the literature, our estimator, which is composed of standard energy error residual estimators for the state equation and suitable co-state problems, reflects the faster convergence of the parameter error compared to the (co)-state variables. We show optimal convergence rates of our method; in particular and unlike prior works, we prove that the estimator decreases with a rate that is the sum of the best approximation rates of the state and co-state variables. Experiments confirm that our method matches the convergence rate of the parameter error.

This paper considers the one-bit precoding problem for the multiuser downlink massive multiple-input multiple-output (MIMO) system with phase shift keying (PSK) modulation and focuses on the celebrated constructive interference (CI)-based problem formulation. The discrete one-bit constraint makes the problem generally hard to solve. In this paper, we propose an efficient negative $\ell_1$ penalty approach for finding a high-quality solution of the considered problem. Specifically, we first propose a novel negative $\ell_1$ penalty model, which penalizes the one-bit constraint into the objective with a negative $\ell_1$-norm term, and show the equivalence between (global and local) solutions of the original problem and those of the penalty problem when the penalty parameter is sufficiently large. We further transform the penalty model into an equivalent min-max problem and propose an efficient alternating optimization (AO) algorithm for solving it. The AO algorithm enjoys low per-iteration complexity and is guaranteed to converge to a stationary point of the min-max problem. Numerical results show that, compared with state-of-the-art CI-based algorithms, the proposed algorithm generally achieves better bit-error-rate (BER) performance at lower computational cost.
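
As a rough illustration of the penalty mechanism only (not the CI metric, the complex one-bit alphabet, or the AO algorithm of the paper), the toy sketch below adds a negative $\ell_1$-norm term $-\lambda\|x\|_1$ to a real-valued least-squares surrogate and minimizes over the box $[-1,1]^n$ by projected gradient descent; with a sufficiently large penalty parameter the iterate settles at a vertex of the box, i.e., at a one-bit-feasible point, which is the mechanism behind the stated exactness result. All problem data here are synthetic assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 8, 16
H = rng.standard_normal((m, n))          # synthetic "channel"
s = rng.standard_normal(m)               # synthetic target signal

def grad(x, lam):
    # gradient of ||Hx - s||^2 - lam * ||x||_1 (taking sign(0) = 0 in this sketch)
    return 2 * H.T @ (H @ x - s) - lam * np.sign(x)

def projected_gradient(lam, steps=3000, lr=1e-3):
    x = rng.uniform(-0.1, 0.1, size=n)
    for _ in range(steps):
        x = np.clip(x - lr * grad(x, lam), -1.0, 1.0)   # project back onto the box
    return x

for lam in (0.0, 500.0):
    x = projected_gradient(lam)
    print(f"lam = {lam:5.1f}   |x_i| = {np.round(np.abs(x), 3)}")
# With the large penalty parameter the iterate is pushed to the boundary
# |x_i| = 1, i.e. to a one-bit-feasible point, illustrating why the penalty
# problem can share its solutions with the original constrained problem.
```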

We define a complexity class $\mathsf{IB}$ as the class of functional problems reducible to computing $f^{(n)}(x)$ for inputs $n$ and $x$, where $f$ is a polynomial-time bijection. As we prove, the definition is robust to variations in the type of reduction used, and in whether we require $f$ to have a polynomial-time inverse or to be computable by a reversible logic circuit. We relate $\mathsf{IB}$ to other standard complexity classes, and demonstrate its applicability by finding natural $\mathsf{IB}$-complete problems in circuit complexity, cellular automata, graph algorithms, and the dynamical systems described by piecewise-linear transformations.
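
A toy instance of the underlying functional problem may help fix ideas: given $n$ and $x$, compute $f^{(n)}(x)$ for a polynomial-time bijection $f$. The sketch below uses an affine permutation of $\mathbb{Z}_m$ purely for illustration (this particular $f$ can of course be iterated in closed form, so it says nothing about $\mathsf{IB}$-completeness); note that $n$ is part of the input, so the naive loop costs time exponential in the length of $n$ written in binary, which is what makes iterated bijections interesting.

```python
# Toy illustration (not an IB-complete instance) of the functional problem
# behind IB: given n and x, compute f^(n)(x) for a polynomial-time bijection f.
# Here f(x) = a*x + b mod m with gcd(a, m) = 1, so f is a bijection on Z_m
# with a polynomial-time inverse.

M, A, B = 2**16, 75, 74                # gcd(75, 2^16) = 1

def f(x):
    return (A * x + B) % M

def f_inverse(y):
    a_inv = pow(A, -1, M)              # modular inverse exists since gcd(A, M) = 1
    return (a_inv * (y - B)) % M

def iterate(x, n):
    """Compute f^(n)(x) by naive repeated application (n steps)."""
    for _ in range(n):
        x = f(x)
    return x

x0 = 12345
assert f_inverse(f(x0)) == x0          # f is indeed a bijection
print(iterate(x0, 1000))
```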

In this paper we propose a deep learning based numerical scheme for strongly coupled FBSDEs stemming from stochastic control. It is a modification of the deep BSDE method in which the initial value of the backward equation is not a free parameter, and in which the new loss function is the weighted sum of the cost of the control problem and a variance term that coincides with the mean squared error in the terminal condition. We show by a numerical example that a direct extension of the classical deep BSDE method to FBSDEs fails for a simple linear-quadratic control problem, and we motivate why the new method works. Under regularity and boundedness assumptions on the exact controls of the time-continuous and time-discrete control problems, we provide an error analysis for our method. We show empirically that the method converges for three different problems, one of them being the problem for which the direct extension of the deep BSDE method fails.

In this paper we study the number $r_{bwt}$ of equal-letter runs produced by the Burrows-Wheeler transform ($BWT$) when it is applied to purely morphic finite words, which are words generated by iterating prolongable morphisms. The parameter $r_{bwt}$ is significant because it measures the performance of the $BWT$, in terms of both compressibility and indexing. In particular, we prove that, when the $BWT$ is applied to any purely morphic finite word over a binary alphabet, $r_{bwt}$ is $\mathcal{O}(\log n)$, where $n$ is the length of the word. Moreover, we prove that $r_{bwt}$ is $\Theta(\log n)$ for the binary words generated by a large class of prolongable binary morphisms. These bounds are proved by establishing new structural properties of the \emph{bispecial circular factors} of such words.
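
As a small, concrete illustration of the quantity studied (a sketch, not the paper's proof technique), the snippet below generates purely morphic binary words by iterating the Thue-Morse morphism $0\mapsto 01$, $1\mapsto 10$, computes the circular $BWT$ by naively sorting all rotations, and reports $r_{bwt}$ next to $\log_2 n$. The choice of morphism and the naive $\mathcal{O}(n^2\log n)$ BWT are illustrative assumptions.

```python
import math

def iterate_morphism(morphism, seed, k):
    """Apply a morphism (dict letter -> image) k times to the seed word."""
    w = seed
    for _ in range(k):
        w = "".join(morphism[c] for c in w)
    return w

def bwt(w):
    """Circular BWT: last column of the sorted rotations of w."""
    rotations = sorted(w[i:] + w[:i] for i in range(len(w)))
    return "".join(r[-1] for r in rotations)

def run_count(s):
    """Number of maximal equal-letter runs in s."""
    return 1 + sum(s[i] != s[i - 1] for i in range(1, len(s)))

thue_morse = {"0": "01", "1": "10"}
for k in range(3, 11):
    w = iterate_morphism(thue_morse, "0", k)
    r = run_count(bwt(w))
    print(f"n = {len(w):5d}   r_bwt = {r:3d}   log2(n) = {math.log2(len(w)):.1f}")
```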

In the present paper, we study the analyticity of the leftmost eigenvalue of a linear elliptic partial differential operator with random coefficient and analyze the convergence rate of the quasi-Monte Carlo method for approximating the expectation of this quantity. The random coefficient is assumed to be represented by an affine expansion $a_0(\boldsymbol{x})+\sum_{j\in \mathbb{N}}y_ja_j(\boldsymbol{x})$, where the elements of the parameter vector $\boldsymbol{y}=(y_j)_{j\in \mathbb{N}}\in U^\infty$ are independent and identically uniformly distributed on $U:=[-\frac{1}{2},\frac{1}{2}]$. Under the assumption $ \|\sum_{j\in \mathbb{N}}\rho_j|a_j|\|_{L_\infty(D)} <\infty$ for some positive sequence $(\rho_j)_{j\in \mathbb{N}}\in \ell_p(\mathbb{N})$ with $p\in (0,1]$, we show that for any $\boldsymbol{y}\in U^\infty$ the elliptic partial differential operator has a countably infinite number of eigenvalues $(\lambda_j(\boldsymbol{y}))_{j\in \mathbb{N}}$ which can be ordered non-decreasingly. Moreover, the spectral gap $\lambda_2(\boldsymbol{y})-\lambda_1(\boldsymbol{y})$ is uniformly positive in $U^\infty$. From this, we prove the holomorphic extension property of $\lambda_1(\boldsymbol{y})$ to a complex domain in $\mathbb{C}^\infty$ and estimate the mixed derivatives of $\lambda_1(\boldsymbol{y})$ with respect to the parameters $\boldsymbol{y}$ by using Cauchy's formula for analytic functions. Based on these bounds, we prove a dimension-independent convergence rate of the quasi-Monte Carlo method for approximating the expectation of $\lambda_1(\boldsymbol{y})$. In this case, the computational cost of the fast component-by-component algorithm for generating the quasi-Monte Carlo $N$-point sets scales linearly in the integration dimension.
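
A drastically simplified one-dimensional analogue (a sketch under many assumptions, not the setting or the lattice rules of the paper) may clarify the quantity being integrated: truncate the affine expansion to $s$ terms, discretize $-(a(\cdot,\boldsymbol{y})u')'$ on $(0,1)$ by finite differences, take the smallest eigenvalue $\lambda_1(\boldsymbol{y})$ of the resulting matrix, and average over quasi-Monte Carlo points mapped to $[-\frac{1}{2},\frac{1}{2}]^s$. Scrambled Sobol' points stand in for the lattice rules produced by the component-by-component construction.

```python
import numpy as np
from scipy.stats import qmc

s, n_grid = 8, 100                                   # truncation dimension, interior grid points
h = 1.0 / (n_grid + 1)
x_mid = (np.arange(n_grid + 1) + 0.5) * h            # midpoints where a(x, y) is evaluated

def coeff(y):
    # a(x, y) = 2 + sum_j y_j sin(j*pi*x) / j^2; the j^-2 decay is a stand-in
    # for the summability assumption on the expansion functions a_j
    a = np.full_like(x_mid, 2.0)
    for j in range(1, s + 1):
        a += y[j - 1] * np.sin(j * np.pi * x_mid) / j**2
    return a

def smallest_eigenvalue(y):
    a = coeff(y)
    # symmetric tridiagonal finite-difference matrix for u -> -(a u')'
    # with homogeneous Dirichlet boundary conditions
    main = (a[:-1] + a[1:]) / h**2
    off = -a[1:-1] / h**2
    A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.eigvalsh(A)[0]

N = 2**9
points = qmc.Sobol(d=s, scramble=True, seed=0).random(N) - 0.5   # map to [-1/2, 1/2]^s
estimate = np.mean([smallest_eigenvalue(y) for y in points])
print(f"QMC estimate of E[lambda_1] with N = {N} points: {estimate:.4f}")
```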

Models defined by moment conditions are at the center of structural econometric estimation, but economic theory is mostly agnostic about moment selection. While a large pool of valid moments can potentially improve estimation efficiency, a few invalid ones may at the same time undermine consistency. This paper investigates the empirical likelihood estimation of these moment-defined models in high-dimensional settings. We propose a penalized empirical likelihood (PEL) estimation and establish its oracle property with consistent detection of invalid moments. The PEL estimator is asymptotically normally distributed, and a projected PEL procedure further eliminates its asymptotic bias and provides a more accurate normal approximation to the finite-sample behavior. Simulation exercises demonstrate the excellent numerical performance of these methods in both estimation and inference.

Particle smoothers are SMC (Sequential Monte Carlo) algorithms designed to approximate the joint distribution of the states given the observations from a state-space model. We propose dSMC (de-Sequentialized Monte Carlo), a new particle smoother that is able to process $T$ observations in $\mathcal{O}(\log T)$ time on parallel architectures. This compares favourably with standard particle smoothers, the complexity of which is linear in $T$. We derive $\mathcal{L}_p$ convergence results for dSMC, with an explicit upper bound that is polynomial in $T$. We then discuss how to reduce the variance of the smoothing estimates computed by dSMC by (i) designing good proposal distributions for sampling the particles at the initialization of the algorithm, and (ii) using lazy resampling to increase the number of particles used in dSMC. Finally, we design a particle Gibbs sampler based on dSMC, which is able to perform parameter inference in a state-space model at an $\mathcal{O}(\log T)$ cost on parallel hardware.

Because it determines a center-outward ordering of observations in $\mathbb{R}^d$ with $d\geq 2$, the concept of statistical depth makes it possible to define quantiles and ranks for multivariate data and to use them for various statistical tasks (e.g., inference, hypothesis testing). Whereas many depth functions have been proposed \textit{ad hoc} in the literature since the seminal contribution of \cite{Tukey75}, not all of them possess the properties desirable to emulate the notion of quantile function for univariate probability distributions. In this paper, we propose an extension of the \textit{integrated rank-weighted} statistical depth (IRW depth in abbreviated form), originally introduced in \cite{IRW}, modified so as to satisfy the property of \textit{affine invariance}, thus fulfilling all four key axioms listed in the nomenclature elaborated by \cite{ZuoS00a}. The variant we propose, referred to as the Affine-Invariant IRW depth (AI-IRW for short), involves the covariance/precision matrices of the (supposedly square-integrable) $d$-dimensional random vector $X$ under study, so as to take into account the directions along which $X$ is most variable when assigning a depth value to any point $x\in \mathbb{R}^d$. The accuracy of the sampling version of the AI-IRW depth is investigated from a nonasymptotic perspective: namely, a concentration result for the statistical counterpart of the AI-IRW depth is proved. Beyond this theoretical analysis, applications to anomaly detection are considered and numerical results are displayed, providing strong empirical evidence of the relevance of the depth function we propose.
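
For readers who want to see the construction in miniature, the snippet below is a Monte Carlo sketch of an affine-invariant IRW depth: standardize the sample by an estimate of $\Sigma^{-1/2}$, project onto random directions drawn uniformly on the unit sphere, and average the univariate depths $\min(F_u(t), 1-F_u(t))$ of the projections. It is an illustrative approximation, not the exact estimator whose concentration is analyzed in the paper.

```python
import numpy as np

def ai_irw_depth(x, X, n_dirs=1000, rng=np.random.default_rng(0)):
    """Approximate AI-IRW depth of point x w.r.t. the sample X of shape (n, d)."""
    mean = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    # whitening transform Sigma^{-1/2} via the eigendecomposition of Sigma
    vals, vecs = np.linalg.eigh(cov)
    whiten = vecs @ np.diag(vals ** -0.5) @ vecs.T
    Z = (X - mean) @ whiten.T
    z = (x - mean) @ whiten.T
    d = X.shape[1]
    U = rng.standard_normal((n_dirs, d))
    U /= np.linalg.norm(U, axis=1, keepdims=True)    # uniform directions on the sphere
    proj = Z @ U.T                                   # (n, n_dirs) projected sample
    ranks = (proj <= z @ U.T).mean(axis=0)           # empirical cdf of each projection at z
    return np.mean(np.minimum(ranks, 1.0 - ranks))

rng = np.random.default_rng(1)
X = rng.multivariate_normal([0, 0], [[4.0, 1.5], [1.5, 1.0]], size=2000)
print("depth at the mean   :", ai_irw_depth(np.zeros(2), X))
print("depth at an outlier :", ai_irw_depth(np.array([10.0, 10.0]), X))
```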

We consider the exploration-exploitation trade-off in reinforcement learning and show that an agent imbued with a risk-seeking utility function is able to explore efficiently, as measured by regret. The parameter that controls how risk-seeking the agent is can be optimized exactly, or annealed according to a schedule. We call the resulting algorithm K-learning and show that the corresponding K-values are optimistic for the expected Q-values at each state-action pair. The K-values induce a natural Boltzmann exploration policy for which the `temperature' parameter is equal to the risk-seeking parameter. This policy achieves an expected regret bound of $\tilde O(L^{3/2} \sqrt{S A T})$, where $L$ is the time horizon, $S$ is the number of states, $A$ is the number of actions, and $T$ is the total number of elapsed time-steps. This bound is only a factor of $L$ larger than the established lower bound. K-learning can be interpreted as mirror descent in the policy space; it is similar to other well-known methods in the literature, including Q-learning, soft Q-learning, and maximum entropy policy gradient, and is closely related to optimism- and count-based exploration methods. K-learning is simple to implement, as it only requires adding a bonus to the reward at each state-action pair and then solving a Bellman equation. We conclude with a numerical example demonstrating that K-learning is competitive with other state-of-the-art algorithms in practice.
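
A tabular sketch of the two ingredients mentioned above (a bonus added to each state-action reward and a soft, log-sum-exp Bellman equation solved for the K-values, which then define a Boltzmann policy with temperature equal to the risk-seeking parameter $\tau$) is given below. It uses an infinite-horizon discounted toy MDP for brevity, and the bonus and the value of $\tau$ are illustrative placeholders rather than the optimized or annealed choices analyzed in the paper.

```python
import numpy as np
from scipy.special import logsumexp

S, A, gamma, tau = 4, 2, 0.9, 0.5               # toy MDP sizes, discount, risk-seeking parameter
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(S), size=(S, A))      # P[s, a] = next-state distribution
R = rng.uniform(0.0, 1.0, size=(S, A))          # mean rewards
counts = np.ones((S, A))                        # visit counts (all ones in this sketch)
bonus = 1.0 / np.sqrt(counts)                   # illustrative optimism bonus

K = np.zeros((S, A))
for _ in range(500):                            # fixed-point iteration on the soft Bellman equation
    soft_value = tau * logsumexp(K / tau, axis=1)        # risk-seeking state values
    K = R + bonus + gamma * P @ soft_value

policy = np.exp((K - K.max(axis=1, keepdims=True)) / tau)   # Boltzmann policy, temperature tau
policy /= policy.sum(axis=1, keepdims=True)
print(np.round(policy, 3))
```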
