
We propose a numerically efficient method for evaluating the random-coding union bound with parameter $s$ on the error probability achievable in the finite-blocklength regime by a pilot-assisted transmission scheme that employs Gaussian codebooks and operates over a memoryless block-fading channel. Our method relies on the saddlepoint approximation, which, differently from previous results reported for similar scenarios, is performed with respect to the number of fading blocks (a.k.a. diversity branches) spanned by each codeword rather than the number of channel uses per block. This approach avoids a costly numerical averaging of the error probability over the realizations of the fading process and of its pilot-based estimate at the receiver, and thus significantly reduces the number of channel realizations required to estimate the error probability accurately. Our numerical experiments, for both single-antenna communication links and massive multiple-input multiple-output (MIMO) networks, show that when two or more diversity branches are available, the saddlepoint approximation with respect to the number of fading blocks estimates the error probability accurately while requiring about two orders of magnitude fewer Monte Carlo samples than the saddlepoint approximation with respect to the number of channel uses per block.


We propose a finite element discretization for the steady, generalized Navier-Stokes equations for fluids with shear-dependent viscosity, completed with inhomogeneous Dirichlet boundary conditions and an inhomogeneous divergence constraint. We establish (weak) convergence of discrete solutions as well as a priori error estimates for the velocity vector field and the scalar kinematic pressure. Numerical experiments complement the theoretical findings.

We present a multigrid algorithm to solve efficiently the large saddle-point systems of equations that typically arise in PDE-constrained optimization under uncertainty. The algorithm is based on a collective smoother that, at each iteration, sweeps over the nodes of the computational mesh and solves a reduced saddle-point system whose size depends on the number $N$ of samples used to discretize the probability space. We show that this reduced system can be solved with optimal $O(N)$ complexity. We test the multigrid method on three problems: a linear-quadratic problem, possibly with a local or a boundary control, for which the multigrid method is used to solve the linear optimality system directly; a nonsmooth problem with box constraints and $L^1$-norm penalization on the control, for which the multigrid scheme is used within a semismooth Newton iteration; and a risk-averse problem with the smoothed CVaR risk measure, for which the multigrid method is called within a preconditioned Newton iteration. In all cases, the multigrid algorithm exhibits excellent performance and robustness with respect to the parameters of interest.

This paper develops a unified and computationally efficient method for change-point estimation along the time dimension in a non-stationary spatio-temporal process. By modeling a non-stationary spatio-temporal process as a piecewise stationary spatio-temporal process, we consider simultaneous estimation of the number and locations of change-points, together with the model parameters in each segment. A composite likelihood-based criterion is developed for change-point and parameter estimation. Under the increasing-domain asymptotics framework, theoretical results, including consistency and the distribution of the estimators, are derived under mild conditions. In contrast to the classical result for fixed-dimensional time series, where the localization error of the change-point estimator is $O_{p}(1)$, exact recovery of the true change-points can be achieved in the spatio-temporal setting. More surprisingly, consistency of the change-point estimation can be achieved without any penalty term in the criterion function. In addition, we establish consistency of the estimated number and locations of change-points under the infill asymptotics framework, in which the time domain grows while the spatial sampling domain is fixed. A computationally efficient pruned dynamic programming algorithm is developed for the challenging criterion optimization problem. Extensive simulation studies and an application to U.S. precipitation data demonstrate the effectiveness and practicality of the proposed method.
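As a minimal illustration of the segmentation problem behind such criteria (not the paper's composite-likelihood criterion, and without the pruning that makes the paper's algorithm efficient), the sketch below uses an exact, unpruned dynamic program with a squared-error segment cost on a one-dimensional series; all names are illustrative.

```python
import numpy as np

def segment_cost(x):
    # Within-segment sum of squared deviations from the segment mean.
    return float(np.sum((x - x.mean()) ** 2))

def best_changepoints(x, n_segments):
    """Exact dynamic program: split x into n_segments pieces minimizing
    total within-segment cost. Returns the change-point locations
    (segment start indices, excluding 0). O(K n^2) subproblems; the
    pruned algorithm in the paper avoids scanning all split points."""
    n = len(x)
    INF = float("inf")
    # dp[k][j] = best cost of splitting x[:j] into k segments
    dp = np.full((n_segments + 1, n + 1), INF)
    back = np.zeros((n_segments + 1, n + 1), dtype=int)
    dp[0][0] = 0.0
    for k in range(1, n_segments + 1):
        for j in range(k, n + 1):
            for i in range(k - 1, j):
                c = dp[k - 1][i] + segment_cost(x[i:j])
                if c < dp[k][j]:
                    dp[k][j] = c
                    back[k][j] = i
    # Backtrack the segment boundaries.
    cps, j = [], n
    for k in range(n_segments, 0, -1):
        j = back[k][j]
        cps.append(j)
    cps.reverse()
    return cps[1:]  # drop the leading 0

x = np.array([0.0] * 20 + [5.0] * 20 + [10.0] * 20)
print(best_changepoints(x, 3))  # -> [20, 40]
```

On this clean piecewise-constant signal the two true change-points are recovered exactly, matching the exact-recovery phenomenon described above.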

We consider Gibbs distributions, which are families of probability distributions over a discrete space $\Omega$ with probability mass function of the form $\mu^\Omega_\beta(\omega) \propto e^{\beta H(\omega)}$ for $\beta$ in an interval $[\beta_{\min}, \beta_{\max}]$ and $H( \omega ) \in \{0 \} \cup [1, n]$. The partition function is the normalization factor $Z(\beta)=\sum_{\omega \in\Omega}e^{\beta H(\omega)}$. Two important parameters of these distributions are the log partition ratio $q = \log \tfrac{Z(\beta_{\max})}{Z(\beta_{\min})}$ and the counts $c_x = |H^{-1}(x)|$. These are correlated with system parameters in a number of physical applications and sampling algorithms. Our first main result is to estimate the counts $c_x$ using roughly $\tilde O( \frac{q}{\varepsilon^2})$ samples for general Gibbs distributions and $\tilde O( \frac{n^2}{\varepsilon^2} )$ samples for integer-valued distributions (ignoring some second-order terms and parameters), and we show this is optimal up to logarithmic factors. We illustrate with improved algorithms for counting connected subgraphs, independent sets, and perfect matchings. As a key subroutine, we also develop algorithms to compute the partition function $Z$ using $\tilde O(\frac{q}{\varepsilon^2})$ samples for general Gibbs distributions and using $\tilde O(\frac{n^2}{\varepsilon^2})$ samples for integer-valued distributions.
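The quantities defined above can be checked directly on a toy integer-valued Gibbs distribution. The sketch below takes $\Omega = \{0,1\}^n$ with $H(\omega)$ the number of ones (an illustrative choice, not from the paper) and computes $Z(\beta)$, the counts $c_x$, and $q$ by brute-force enumeration rather than by the sampling algorithms the paper develops.

```python
import math
from itertools import product

# Toy integer-valued Gibbs distribution: Omega = {0,1}^n bit strings,
# H(omega) = number of ones, so H takes values in {0, 1, ..., n}.
n = 4
omega_space = list(product([0, 1], repeat=n))
H = lambda w: sum(w)

def Z(beta):
    # Partition function Z(beta) = sum over omega of exp(beta * H(omega)).
    return sum(math.exp(beta * H(w)) for w in omega_space)

beta_min, beta_max = 0.0, 1.0
q = math.log(Z(beta_max) / Z(beta_min))  # log partition ratio
counts = {x: sum(1 for w in omega_space if H(w) == x) for x in range(n + 1)}

print(round(Z(0.0)))  # -> 16  (Z(0) = |Omega|)
print(counts)         # -> {0: 1, 1: 4, 2: 6, 3: 4, 4: 1} (binomial counts)
```

Here $Z(1) = (1+e)^n$, so $q = n \log\frac{1+e}{2}$; the paper's contribution is estimating such quantities from samples alone, with $\tilde O(q/\varepsilon^2)$ or $\tilde O(n^2/\varepsilon^2)$ sample complexity.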

An $(a,b)$-coloring of a graph $G$ assigns to each vertex a $b$-subset of a set of $a$ colors in such a way that the color-sets of adjacent vertices are disjoint. We define general reduction tools for the $(a,b)$-coloring of graphs with $2\le a/b\le 3$. In particular, using necessary and sufficient conditions for the existence of an $(a,b)$-coloring of a path with prescribed color-sets on its end-vertices, more complex $(a,b)$-colorability reductions are presented. The utility of these tools is exemplified on finite triangle-free induced subgraphs of the triangular lattice, all of which are $(9,4)$-colorable according to a conjecture of McDiarmid and Reed. Computations on millions of such graphs generated randomly show that our tools allow us to find a $(9,4)$-coloring for each of them, except for one specific regular shape of graphs (which can be $(9,4)$-colored by an easy ad hoc process). We thus obtain computational evidence towards the conjecture of McDiarmid and Reed.
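The defining condition of an $(a,b)$-coloring is easy to verify programmatically; the following sketch (with illustrative names, unrelated to the paper's reduction tools) checks a candidate coloring against the definition above.

```python
def is_ab_coloring(edges, coloring, a, b):
    """Check the definition of an (a,b)-coloring: every vertex receives
    a b-subset of {0, ..., a-1}, and adjacent vertices receive disjoint
    color-sets."""
    colors = set(range(a))
    if any(len(s) != b or not s <= colors for s in coloring.values()):
        return False
    return all(coloring[u].isdisjoint(coloring[v]) for u, v in edges)

# Triangle K3 with a (3,1)-coloring, i.e. a/b = 3, the upper end of the
# range 2 <= a/b <= 3 studied in the paper.
edges = [(0, 1), (1, 2), (0, 2)]
coloring = {0: {0}, 1: {1}, 2: {2}}
print(is_ab_coloring(edges, coloring, 3, 1))  # -> True
```

Reusing a color on adjacent vertices, e.g. `{0: {0}, 1: {0}, 2: {1}}`, makes the check fail, as required by the disjointness condition.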

Bayesian optimization is an effective method for optimizing expensive-to-evaluate black-box functions. High-dimensional problems are particularly challenging, as the surrogate model of the objective suffers from the curse of dimensionality, which makes accurate modeling difficult. We propose a group testing approach to identify active variables and thereby facilitate efficient optimization in these domains. The proposed algorithm, Group Testing Bayesian Optimization (GTBO), first runs a testing phase in which groups of variables are systematically selected and tested for whether they influence the objective. To that end, we extend the well-established theory of group testing to functions over continuous domains. In the second phase, GTBO guides optimization by placing more importance on the active dimensions. By exploiting the axis-aligned subspace assumption, GTBO is competitive against state-of-the-art methods on several synthetic and real-world high-dimensional optimization tasks. Furthermore, GTBO aids in the discovery of active parameters in applications, thereby enhancing practitioners' understanding of the problem at hand.
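To convey the idea of the testing phase, the toy sketch below perturbs one group of coordinates at a time and flags a group as active if the objective value ever changes. GTBO's actual test is statistical and adaptive, so this is only a schematic illustration with hypothetical names.

```python
import numpy as np

def find_active_dims(f, dim, n_groups=4, n_probes=5, tol=1e-9, rng=None):
    """Toy group test: perturb one group of coordinates while holding the
    rest fixed; a group whose perturbation never changes f beyond tol is
    declared inactive, otherwise all its dimensions are flagged active."""
    rng = np.random.default_rng(rng)
    groups = np.array_split(np.arange(dim), n_groups)
    base = rng.uniform(-1, 1, dim)
    f0 = f(base)
    active = []
    for g in groups:
        for _ in range(n_probes):
            x = base.copy()
            x[g] = rng.uniform(-1, 1, len(g))
            if abs(f(x) - f0) > tol:
                active.extend(g.tolist())
                break
    return active

# Objective that depends only on the first two of 8 dimensions.
f = lambda x: x[0] ** 2 + 3 * x[1]
print(find_active_dims(f, dim=8, rng=0))  # flags dimensions 0 and 1
```

Real group testing refines this by adaptively re-grouping so that active variables inside a flagged group can be isolated with few tests; the axis-aligned subspace assumption is what makes such coordinate-wise perturbations informative.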

The property that the velocity $\boldsymbol{u}$ belongs to $L^\infty(0,T;L^2(\Omega)^d)$ is an essential requirement in the definition of energy solutions of models for incompressible fluids. It is, therefore, highly desirable that the solutions produced by discretisation methods are uniformly stable in the $L^\infty(0,T;L^2(\Omega)^d)$-norm. In this work, we establish that this is indeed the case for Discontinuous Galerkin (DG) discretisations (in time and space) of non-Newtonian models with $p$-structure, assuming that $p\geq \frac{3d+2}{d+2}$; the time discretisation is equivalent to the Radau IIA implicit Runge-Kutta method. We also prove (weak) convergence of the numerical scheme to the weak solution of the system; this type of convergence result for schemes based on quadrature seems to be new. As an auxiliary result, we also derive Gagliardo-Nirenberg-type inequalities on DG spaces, which may be of independent interest.

The greedy and nearest-neighbor TSP heuristics can both have approximation factors of order $\log n$ from optimal in the worst case, even for $n$ points in Euclidean space. In this note, we show that this approximation factor is realized only when the optimal tour is unusually short. In particular, for points from any fixed Ahlfors $d$-regular metric space (which includes any $d$-manifold, such as the $d$-cube $[0,1]^d$ when $d$ is an integer, but also fractals of dimension $d$ when $d$ is real-valued), our results imply that, for $d>1$, the greedy and nearest-neighbor heuristics have \emph{additive} errors from optimal on the order of the \emph{optimal} tour length through \emph{random} points in the same space.
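For reference, the nearest-neighbor heuristic discussed above can be sketched in a few lines (an illustrative implementation, not tied to the paper's analysis):

```python
import math

def nearest_neighbor_tour(points, start=0):
    """Classic nearest-neighbor heuristic: repeatedly hop to the closest
    unvisited point; the tour implicitly closes back to the start."""
    unvisited = set(range(len(points)))
    tour = [start]
    unvisited.remove(start)
    while unvisited:
        last = points[tour[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(last, points[i]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def tour_length(points, tour):
    # Total length of the closed tour, including the return edge.
    return sum(math.dist(points[tour[i]], points[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

pts = [(0, 0), (0, 1), (1, 1), (1, 0)]
t = nearest_neighbor_tour(pts)
print(t, tour_length(pts, t))  # -> [0, 1, 2, 3] 4.0
```

On this unit square the heuristic happens to find the optimal tour; the $\log n$ worst-case gap arises from carefully constructed point sets, which, per the result above, force the optimal tour to be unusually short.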

We present a priori error estimates for a multirate time-stepping scheme for coupled differential equations. The discretization is based on Galerkin methods in time, using two different time meshes for the two parts of the problem. We aim at surface-coupled multiphysics problems such as two-phase flows. Special focus is on the handling of the interface coupling to guarantee a coercive formulation, which is the key to optimal-order error estimates. In a sequence of increasing complexity, we consider the coupling of two ordinary differential equations, a coupled heat conduction equation, and finally a coupled Stokes problem. For the latter, we show optimal multirate estimates for the velocity and a suboptimal result for the pressure. The a priori estimates prove that the multirate method decouples the two subproblems exactly. This is the basis for adaptive methods, which can choose optimal time meshes for the respective subproblems.

Owing to their inherent capability of semantically aligning aspects with their context words, attention mechanisms and Convolutional Neural Networks (CNNs) are widely applied for aspect-based sentiment classification. However, these models lack a mechanism to account for relevant syntactical constraints and long-range word dependencies, and hence may mistakenly recognize syntactically irrelevant contextual words as clues for judging aspect sentiment. To tackle this problem, we propose to build a Graph Convolutional Network (GCN) over the dependency tree of a sentence to exploit syntactical information and word dependencies. On this basis, a novel aspect-specific sentiment classification framework is proposed. Experiments on three benchmark collections show that our proposed model has comparable effectiveness to a range of state-of-the-art models, and further demonstrate that both syntactical information and long-range word dependencies are properly captured by the graph convolution structure.
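A single graph-convolution layer over a dependency-tree adjacency matrix can be sketched as follows; the mean-neighbor normalization and ReLU activation shown here are one common choice, and the paper's exact variant may differ. All names are illustrative.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution layer, H' = ReLU(D^{-1} (A + I) H W):
    each word's representation is averaged with those of its
    dependency-tree neighbours (plus itself) before the linear
    transform, letting sentiment clues propagate along syntactic
    edges rather than mere positional proximity."""
    A_hat = A + np.eye(A.shape[0])                 # add self-loops
    D_inv = np.diag(1.0 / A_hat.sum(axis=1))       # row-normalize
    return np.maximum(D_inv @ A_hat @ H @ W, 0.0)  # ReLU

# Toy 4-word sentence whose dependency tree links word 1 (the head)
# to words 0, 2 and 3; word features are 3-dimensional.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 0],
              [0, 1, 0, 0]], dtype=float)
H = np.random.default_rng(0).normal(size=(4, 3))
W = np.eye(3)
print(gcn_layer(A, H, W).shape)  # -> (4, 3)
```

Stacking such layers lets information travel several dependency edges, which is how long-range word dependencies enter the aspect representation.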
