
Multiple antenna arrays play a key role in wireless networks, not only for communications but also for localization and sensing. The use of large antenna arrays pushes toward a propagation regime in which the wavefront is no longer planar but spherical. This makes it possible to infer the position and orientation of an arbitrary source from the received signal without the need for multiple anchor nodes. To understand the fundamental limits of large antenna arrays for localization, this paper fuses wave propagation theory with estimation theory and computes the Cram{\'e}r-Rao Bound (CRB) for the estimation of the three Cartesian coordinates of the source on the basis of the electromagnetic vector field, observed over a rectangular surface area. To simplify the analysis, we assume that the source is a dipole whose center is located on the line perpendicular to the surface center and whose orientation is known a priori. Numerical and asymptotic results are given to quantify the CRBs and to gain insight into the effect of various system parameters on the ultimate estimation accuracy. It turns out that surfaces of practical size may guarantee centimeter-level accuracy in the mmWave bands.
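In general, a CRB of this kind is obtained by inverting the Fisher information matrix of the observation model. The sketch below illustrates the computation on a deliberately simplified toy model (noisy range measurements to points on a square receiving surface, with Gaussian noise), not the paper's electromagnetic dipole-field model; the sensor grid, source position, and noise level are all illustrative assumptions:

```python
import numpy as np

def crb_position(sensors, source, sigma):
    """CRBs for the 3 Cartesian coordinates of a source.

    Toy Gaussian range model (NOT the paper's electromagnetic model):
    y_i = ||p - s_i|| + n_i,  n_i ~ N(0, sigma^2).
    """
    diffs = source - sensors                      # (N, 3)
    dists = np.linalg.norm(diffs, axis=1, keepdims=True)
    grads = diffs / dists                         # Jacobian rows d y_i / d p
    fim = grads.T @ grads / sigma**2              # Fisher information matrix
    return np.diag(np.linalg.inv(fim))            # CRBs for (x, y, z)

# Receiving surface: a square grid of observation points in the z = 0 plane
xs = np.linspace(-0.5, 0.5, 10)
sensors = np.array([(x, y, 0.0) for x in xs for y in xs])
source = np.array([0.0, 0.0, 2.0])                # on the boresight axis
crb = crb_position(sensors, source, sigma=0.01)
print(crb)                                        # variance lower bounds for (x, y, z)
```

By the x/y symmetry of the grid and the on-axis source, the bounds on the first two coordinates coincide.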

Related content

An evolving surface finite element discretisation is analysed for the evolution of a closed two-dimensional surface governed by a system coupling a generalised forced mean curvature flow and a reaction--diffusion process on the surface, inspired by a gradient flow of a coupled energy. Two algorithms are proposed, both based on a system coupling the diffusion equation to evolution equations for geometric quantities in the velocity law for the surface. One of the numerical methods is proved to be convergent in the $H^1$ norm with optimal order for finite elements of degree at least two. We present numerical experiments illustrating the convergence behaviour and demonstrating the qualitative properties of the flow: preservation of mean convexity, loss of convexity, weak maximum principles, and the occurrence of self-intersections.

We consider an elliptic linear-quadratic parameter estimation problem with a finite number of parameters. A novel a priori bound for the parameter error is proved and, based on this bound, an adaptive finite element method driven by an a posteriori error estimator is presented. Unlike prior results in the literature, our estimator, which is composed of standard energy error residual estimators for the state equation and suitable co-state problems, reflects the faster convergence of the parameter error compared to the (co)-state variables. We show optimal convergence rates of our method; in particular and unlike prior works, we prove that the estimator decreases with a rate that is the sum of the best approximation rates of the state and co-state variables. Experiments confirm that our method matches the convergence rate of the parameter error.

Online bipartite matching with one-sided arrival and its variants have been extensively studied since the seminal work of Karp, Vazirani, and Vazirani (STOC 1990). Motivated by real-life applications with dynamic market structures, e.g. ride-sharing, two generalizations of the classical one-sided arrival model have been proposed to allow non-bipartite graphs and to allow all vertices to arrive online: online matching with general vertex arrival, introduced by Wang and Wong (ICALP 2015), and fully online matching, introduced by Huang et al. (JACM 2020). In this paper, we study the fractional versions of the two models. We improve three out of the four state-of-the-art upper and lower bounds for the two models. For fully online matching, we design a $0.6$-competitive algorithm and prove that no algorithm can be $0.613$-competitive. For online matching with general vertex arrival, we prove that no algorithm can be $0.584$-competitive. Moreover, we give an arguably more intuitive algorithm for the general vertex arrival model, compared to the algorithm of Wang and Wong, while attaining the same competitive ratio of $0.526$.
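For context, fractional online matching algorithms are typically built on the classic water-filling (water-level) idea: each arriving vertex spreads its unit of "water" over the least-loaded available neighbors. The sketch below implements this baseline for the simple one-sided arrival model only; it is not the paper's algorithm for general vertex arrival or fully online matching, and the tolerance and instance are illustrative:

```python
# Classic fractional water-filling for online bipartite matching with
# one-sided arrival (the baseline model that the two generalizations
# extend); NOT the paper's general-vertex-arrival or fully online algorithm.
def water_filling(offline, arrivals, eps=1e-9):
    load = {u: 0.0 for u in offline}        # water level of each offline vertex
    for neighbors in arrivals:              # each online vertex brings 1 unit
        budget = 1.0
        nbrs = [u for u in neighbors if u in load]
        while budget > eps and nbrs:
            nbrs = [u for u in nbrs if load[u] < 1.0 - eps]
            if not nbrs:
                break
            lo = min(load[u] for u in nbrs)
            lowest = [u for u in nbrs if load[u] <= lo + eps]
            # raise the lowest levels together until the budget runs out
            # or they catch up with the next-lowest level
            others = [load[u] for u in nbrs if load[u] > lo + eps]
            target = min(others + [1.0])
            raise_by = min(target - lo, budget / len(lowest))
            for u in lowest:
                load[u] += raise_by
            budget -= raise_by * len(lowest)
    return load

# Two offline vertices; three online arrivals with the given neighborhoods
levels = water_filling(offline=["a", "b"],
                       arrivals=[["a"], ["a", "b"], ["b"]])
print(levels)
```

On this instance the first arrival saturates `a`, the second is routed entirely to `b`, and the third finds no capacity left.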

Computations of incompressible flows with velocity boundary conditions require the solution of a Poisson equation for pressure with all-Neumann boundary conditions. Discretization of such a Poisson equation results in a rank-deficient matrix of coefficients. When a non-conservative discretization method such as a finite difference, finite element, or spectral scheme is used, such a matrix also generates an inconsistency, which causes the residuals in the iterative solution to saturate at a threshold level that depends on the spatial resolution and the order of the discretization scheme. In this paper, we examine this inconsistency for a high-order meshless discretization scheme suitable for solving the equations on a complex domain. The high-order meshless method uses polyharmonic spline radial basis functions (PHS-RBF) with appended polynomials to interpolate scattered data and constructs the discrete equations by collocation. The PHS-RBF provides the flexibility to vary the order of discretization by increasing the degree of the appended polynomial. In this study, we examine the convergence of the inconsistency for different spatial resolutions and for different degrees of the appended polynomials by solving the Poisson equation for a manufactured solution, as well as the Navier-Stokes equations for several fluid flows. We observe that the inconsistency decreases faster than the error in the final solution and eventually becomes vanishingly small at sufficient spatial resolution. The rate of convergence of the inconsistency is observed to be similar to, or better than, the rate of convergence of the discretization errors. This beneficial observation makes it unnecessary to regularize the Poisson equation by fixing either the mean pressure or the pressure at an arbitrary point. A simple point solver such as SOR is seen to converge well, although it can be further accelerated using multilevel methods.
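The behaviour of a point solver on a singular all-Neumann system can be reproduced on a toy problem. The sketch below, which assumes a standard 1D second-order finite-difference discretization rather than the paper's meshless PHS-RBF scheme, solves a Neumann Poisson problem with SOR; the mean is subtracted only to remove the free additive constant when comparing against the exact solution:

```python
import numpy as np

# 1D Poisson u'' = f on [0, 1] with Neumann BCs u'(0) = u'(1) = 0: the
# discrete operator is singular (constants span its null space), mirroring
# the all-Neumann pressure Poisson system.  Illustrative finite-difference
# sketch; parameters (n, omega, sweep count) are arbitrary choices.
n = 64
h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)
f = np.cos(np.pi * x)              # compatible RHS: its integral over [0,1] is 0
u = np.zeros(n + 1)
omega = 1.8                        # SOR over-relaxation factor

for sweep in range(5000):
    # ghost-point Neumann boundaries: u[-1] = u[1] and u[n+1] = u[n-1]
    u[0] += omega * ((2 * u[1] - h * h * f[0]) / 2.0 - u[0])
    for i in range(1, n):
        u[i] += omega * ((u[i - 1] + u[i + 1] - h * h * f[i]) / 2.0 - u[i])
    u[n] += omega * ((2 * u[n - 1] - h * h * f[n]) / 2.0 - u[n])
    u -= u.mean()                  # remove the null-space (constant) drift

exact = -np.cos(np.pi * x) / np.pi**2   # u'' = cos(pi x), u'(0) = u'(1) = 0
exact -= exact.mean()
print(np.max(np.abs(u - exact)))        # O(h^2) discretization error
```

No pressure value is pinned anywhere; the iteration converges to the solution modulo the additive constant.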

We consider the problem of parameter estimation in a Bayesian setting and propose a general lower bound that includes part of the family of $f$-Divergences. The results are then applied to specific settings of interest and compared to other notable results in the literature. In particular, we show that the known bounds using Mutual Information can be improved by using, for example, Maximal Leakage, Hellinger divergence, or generalizations of the Hockey-Stick divergence.

In [1], the non-linear space-time Hasegawa-Mima plasma equation is formulated as a coupled system of two linear PDEs, a solution of which is a pair (u, w). The first equation is of hyperbolic type and the second of elliptic type. Variational frameworks for obtaining weak solutions to the initial value Hasegawa-Mima problem with periodic boundary conditions were also derived. In a more recent work [2], a numerical approach consisting of a finite element space-domain discretization combined with an Euler-implicit time scheme was used to discretize the coupled variational Hasegawa-Mima model. A semi-linear version of this implicit nonlinear scheme was tested for several types of initial conditions. This semi-linear scheme proved to lack efficiency over long times, necessitating a cap on the magnitude of the solution. To circumvent this difficulty, in this paper we use Newton-type methods (Newton, Chord, and an introduced Modified Newton method) to solve the fully implicit non-linear scheme numerically. Testing these methods in FreeFEM++ indicates significant improvements, as no cap needs to be imposed for long times. In the sequel, we demonstrate the validity of these methods by proving several results, in particular the convergence of the implemented methods.
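The trade-off between the Newton and Chord iterations can be seen on any nonlinear algebraic system of the kind a fully implicit time step produces: Newton refreshes the Jacobian every iteration and converges quadratically, while Chord freezes the Jacobian at the initial guess and converges only linearly but avoids repeated assembly. The toy system, dimensions, and tolerances below are illustrative and unrelated to the Hasegawa-Mima discretization (the paper's Modified Newton variant is not sketched):

```python
import numpy as np

def F(u, A, b):
    """Toy nonlinear residual: A u + u^3 - b (componentwise cube)."""
    return A @ u + u**3 - b

def jacobian(u, A):
    return A + np.diag(3.0 * u**2)

def solve_nonlinear(u, A, b, tol=1e-10, max_iter=100, chord=False):
    J = jacobian(u, A)                 # Chord: keep this initial Jacobian
    for it in range(max_iter):
        r = F(u, A, b)
        if np.linalg.norm(r) < tol:
            return u, it
        if not chord:
            J = jacobian(u, A)         # Newton: refresh every iteration
        u = u - np.linalg.solve(J, r)
    return u, max_iter

n = 20
A = 4.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # SPD tridiagonal
b = np.ones(n)
u0 = np.zeros(n)
u_newton, it_newton = solve_nonlinear(u0, A, b)
u_chord, it_chord = solve_nonlinear(u0, A, b, chord=True)
print(it_newton, it_chord)   # Chord trades Jacobian refreshes for more iterations
```

Both reach the same tolerance; Chord simply needs more (cheaper) iterations.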

Maximal parabolic $L^p$-regularity of linear parabolic equations on an evolving surface is shown by pulling back the problem to the initial surface and studying the maximal $L^p$-regularity on a fixed surface. By freezing the coefficients in the parabolic equations at a fixed time and utilizing a perturbation argument around the frozen time, it is shown that backward difference time discretizations of linear parabolic equations on an evolving surface along characteristic trajectories can preserve maximal $L^p$-regularity in the discrete setting. The result is applied to prove the stability and convergence of time discretizations of nonlinear parabolic equations on an evolving surface, with linearly implicit backward differentiation formulae along characteristic trajectories of the surface, for general locally Lipschitz nonlinearities. The discrete maximal $L^p$-regularity is used to prove the boundedness and stability of numerical solutions in the $L^\infty(0,T;W^{1,\infty})$ norm, which is used to bound the nonlinear terms in the stability analysis. Optimal-order error estimates for the time discretizations in the $L^\infty(0,T;W^{1,\infty})$ norm are obtained by combining the stability analysis with the consistency estimates.

Many functions have approximately known upper and/or lower bounds, which can potentially aid the modeling of such functions. In this paper, we introduce Gaussian process models for functions where such bounds are (approximately) known. More specifically, we propose the first use of such bounds to improve Gaussian process (GP) posterior sampling and Bayesian optimization (BO). That is, we transform a GP model so that it satisfies the given bounds, and then sample and weight functions from its posterior. To further exploit these bounds in BO settings, we present bounded entropy search (BES), which selects the point that gains the most information about the underlying function, as estimated by the GP samples, while satisfying the output constraints. We characterize the sample variance bounds and show that the decisions made by BES are explainable. Our proposed approach is conceptually straightforward and can be used as a plug-in extension to existing methods for GP posterior sampling and Bayesian optimization.
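The simplest way to see how output bounds interact with GP posterior sampling is rejection: draw posterior samples and keep only those that respect the known bounds. The sketch below uses this crude rejection step as a stand-in for the paper's transform-and-weight construction; the kernel, lengthscale, data, and bound values are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf_kernel(a, b, ell=0.3):
    """Squared-exponential kernel for 1D inputs."""
    return np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)

# Observations of a function known a priori to lie in [-1, 1]
x_tr = np.array([0.1, 0.5, 0.9])
y_tr = np.sin(2 * np.pi * x_tr)
x_te = np.linspace(0.0, 1.0, 50)

# Standard (noise-free, jittered) GP posterior
K = rbf_kernel(x_tr, x_tr) + 1e-8 * np.eye(len(x_tr))
Ks = rbf_kernel(x_te, x_tr)
mu = Ks @ np.linalg.solve(K, y_tr)
cov = rbf_kernel(x_te, x_te) - Ks @ np.linalg.solve(K, Ks.T)

# Exploit the known bounds: keep only posterior draws with f(x) in [-1, 1]
draws = rng.multivariate_normal(mu, cov + 1e-6 * np.eye(len(x_te)), size=500)
keep = draws[np.all((draws >= -1.0) & (draws <= 1.0), axis=1)]
print(len(keep), "of 500 draws satisfy the bounds")
```

The retained (or, in the paper's setting, reweighted) samples then feed the acquisition step.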

We consider the exploration-exploitation trade-off in reinforcement learning and we show that an agent imbued with a risk-seeking utility function is able to explore efficiently, as measured by regret. The parameter that controls how risk-seeking the agent is can be optimized exactly, or annealed according to a schedule. We call the resulting algorithm K-learning and show that the corresponding K-values are optimistic for the expected Q-values at each state-action pair. The K-values induce a natural Boltzmann exploration policy for which the `temperature' parameter is equal to the risk-seeking parameter. This policy achieves an expected regret bound of $\tilde O(L^{3/2} \sqrt{S A T})$, where $L$ is the time horizon, $S$ is the number of states, $A$ is the number of actions, and $T$ is the total number of elapsed time-steps. This bound is only a factor of $L$ larger than the established lower bound. K-learning can be interpreted as mirror descent in the policy space, and it is similar to other well-known methods in the literature, including Q-learning, soft-Q-learning, and maximum entropy policy gradient, and is closely related to optimism- and count-based exploration methods. K-learning is simple to implement, as it only requires adding a bonus to the reward at each state-action pair and then solving a Bellman equation. We conclude with a numerical example demonstrating that K-learning is competitive with other state-of-the-art algorithms in practice.
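The implementation recipe in the last step (add a bonus to each reward, solve a Bellman equation, act with a Boltzmann policy) can be sketched in tabular form. The bonus and temperature below are generic illustrative choices, not the exact K-learning quantities, and the random MDP exists only to exercise the code:

```python
import numpy as np

# Simplified tabular sketch: reward bonus + soft Bellman solve + Boltzmann
# policy.  S states, A actions; illustrative parameters throughout.
S, A, gamma, tau = 4, 2, 0.9, 0.5
rng = np.random.default_rng(1)
P = rng.dirichlet(np.ones(S), size=(S, A))    # P[s, a] = next-state distribution
R = rng.uniform(0.0, 1.0, size=(S, A))        # mean rewards
counts = np.ones((S, A))                      # visit counts n(s, a)
bonus = 1.0 / np.sqrt(counts)                 # generic count-based bonus

Q = np.zeros((S, A))
for _ in range(500):                          # fixed-point iteration on the
    V = tau * np.log(np.exp(Q / tau).sum(axis=1))   # soft (log-sum-exp) value
    Q = R + bonus + gamma * P @ V             # Bellman update with bonus

policy = np.exp(Q / tau)
policy /= policy.sum(axis=1, keepdims=True)   # Boltzmann policy, temperature tau
print(policy)
```

The temperature `tau` here plays the role the abstract assigns to the risk-seeking parameter.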

Detecting objects and estimating their pose remains one of the major challenges of the computer vision research community. There is a trade-off between localizing the objects and estimating their viewpoints: the detector ideally needs to be view-invariant, while the pose estimation process should be able to generalize to the category level. This work is an exploration of using deep learning models to solve both problems simultaneously. To do so, we propose three novel deep learning architectures that perform joint detection and pose estimation, in which we gradually decouple the two tasks. We also investigate whether the pose estimation problem should be solved as a classification or a regression problem, which is still an open question in the computer vision community. We detail a comparative analysis of all our solutions and of the methods that currently define the state of the art for this problem. We use the PASCAL3D+ and ObjectNet3D datasets for a thorough experimental evaluation and the main results. With the proposed models, we achieve state-of-the-art performance on both datasets.
