
A well-known challenge in beamforming is how to optimally utilize the degrees of freedom (DoF) of the array to design a robust beamformer, especially when the array DoF is limited. In this paper, we leverage the tools of constrained convex optimization and propose a penalized inequality-constrained minimum variance (P-ICMV) beamformer to address this challenge. Specifically, a well-targeted objective function and inequality constraints are proposed to achieve the design goals. By penalizing the maximum gain of the beamformer in any interfering direction, the total interference power can be efficiently mitigated with limited DoF. Multiple robust constraints on target protection and interference suppression can be introduced to increase the robustness of the beamformer against steering vector mismatch. By integrating noise reduction, interference suppression, and target protection, the proposed formulation can efficiently obtain a robust beamformer design while optimally trading off the various design goals. To solve this problem numerically, we formulate the P-ICMV beamformer design as a convex second-order cone program (SOCP) and propose a low-complexity iterative algorithm based on the alternating direction method of multipliers (ADMM). Three applications are simulated to demonstrate the effectiveness of the proposed beamformer.
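
As a rough illustration of the kind of convex program involved (not the paper's exact P-ICMV formulation, which penalizes the maximum interference gain rather than hard-capping it), the sketch below builds an MVDR-style beamformer with inequality constraints on the gain toward assumed interferer directions and solves it with CVXPY. The array geometry, angles, sample covariance, and gain bound are all illustrative placeholders.

```python
# Minimal sketch: constrained minimum-variance beamforming as an SOCP in CVXPY.
# Not the paper's P-ICMV design; angles, covariance, and bounds are illustrative.
import numpy as np
import cvxpy as cp

M = 8                                         # sensors in an assumed half-wavelength ULA
def steer(theta_deg):
    return np.exp(2j * np.pi * 0.5 * np.arange(M) * np.sin(np.deg2rad(theta_deg)))

a_tgt = steer(0.0)                            # target steering vector
a_int = [steer(th) for th in (-40.0, 25.0)]   # assumed interferer directions
rng = np.random.default_rng(0)
snap = (rng.standard_normal((M, 200)) + 1j * rng.standard_normal((M, 200))) / np.sqrt(2)
R = snap @ snap.conj().T / 200 + 1e-2 * np.eye(M)   # toy sample covariance
U = np.linalg.cholesky(R).conj().T                  # R = U^H U

w = cp.Variable(M, complex=True)
objective = cp.Minimize(cp.norm(U @ w, 2))          # minimizes sqrt(w^H R w), same argmin as output power
constraints = [cp.real(a_tgt.conj() @ w) >= 1,      # target protection (distortionless-type)
               cp.imag(a_tgt.conj() @ w) == 0]
constraints += [cp.abs(a.conj() @ w) <= 0.05 for a in a_int]   # cap gain toward interferers
cp.Problem(objective, constraints).solve()
print("beamformer weights:", np.round(w.value, 3))
```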

Related content

This paper considers reconfigurable intelligent surface (RIS)-assisted multi-user communications, where an RIS is used to assist the base station (BS) in serving multiple users. The RIS, consisting of passive reflecting elements, can manipulate the reflected direction of the incoming electromagnetic waves by adjusting the phase shifts of its reflecting elements. Alternating optimization (AO)-based approaches are commonly used to determine the phase shifts of the RIS elements. While AO-based approaches have demonstrated the significant gains offered by the RIS, their complexity is quite high due to the coupled structure of the cascaded channel from the BS through the RIS to the user. In addition, the sub-wavelength structure of the RIS introduces spatial correlation that may cause strong interference among users. To handle severe multi-user interference over correlated channels, we consider adaptive user grouping, previously proposed for massive multi-input multi-output (MIMO) systems, and propose two low-complexity beamforming design methods, depending on whether the grouping result is taken into account. Simulation results demonstrate that the proposed methods achieve a higher sum rate than designs without user grouping. Moreover, the proposed methods perform similarly to the AO-based approach but with much lower complexity.
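
The following sketch illustrates one simple form of correlation-based user grouping (not necessarily the adaptive grouping rule used in the paper): users are greedily assigned so that the pairwise correlation of their cascaded channels within a group stays below a threshold. The channels, threshold, and dimensions are synthetic.

```python
# Minimal sketch: greedy grouping so that strongly correlated users end up in
# different groups (served in different resource slots).  Illustrative only.
import numpy as np

rng = np.random.default_rng(1)
K, N = 6, 32                              # users, RIS elements (illustrative)
H = rng.standard_normal((K, N)) + 1j * rng.standard_normal((K, N))   # cascaded channels

def corr(h1, h2):
    return abs(h1.conj() @ h2) / (np.linalg.norm(h1) * np.linalg.norm(h2))

threshold, groups = 0.3, []
for k in range(K):
    placed = False
    for g in groups:                      # join the first group where user k is
        if all(corr(H[k], H[j]) < threshold for j in g):   # weakly correlated with all members
            g.append(k); placed = True; break
    if not placed:
        groups.append([k])                # otherwise open a new group
print("user groups:", groups)
```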

We study properties of confidence intervals (CIs) for the difference of two Bernoulli distributions' success parameters, $p_x - p_y$, where the goal is to obtain a CI of a given half-width at minimal sampling cost when the per-observation costs of the two distributions may differ. Assuming that we are provided with preliminary estimates of the success parameters, we propose three different methods for constructing fixed-width CIs: (i) a two-stage sampling procedure, (ii) a sequential method that carries out sampling in batches, and (iii) an $\ell$-stage "look-ahead" procedure. We use Monte Carlo simulation to show that, under diverse success probability and observation cost scenarios, our proposed algorithms achieve significant cost savings over their baseline counterparts (up to 50\% for the two-stage procedure and up to 15\% for the sequential methods). Furthermore, for the battery of scenarios under study, our sequential-batches and $\ell$-stage "look-ahead" procedures approximately attain the nominal coverage while also meeting the desired width requirement. Our sequential-batching method turned out to be more efficient than the "look-ahead" method from a computational standpoint, with average running times at least an order of magnitude faster across all the scenarios tested.
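
As a hedged illustration of the two-stage idea (not the paper's exact procedure), the sketch below draws a pilot sample from each arm, uses the pilot variance estimates and per-observation costs to choose cost-optimal second-stage sample sizes for a target Wald half-width, and reports the resulting CI and total cost. All probabilities, costs, and sizes are illustrative.

```python
# Minimal sketch: naive two-stage fixed-width Wald CI for p_x - p_y under unequal costs.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
px_true, py_true, cx, cy = 0.30, 0.55, 1.0, 4.0   # hypothetical truths and per-observation costs
h, z = 0.05, norm.ppf(0.975)                      # target half-width, 95% CI

n0 = 50                                           # stage-1 pilot size per arm
x0, y0 = rng.binomial(1, px_true, n0), rng.binomial(1, py_true, n0)
vx, vy = x0.mean() * (1 - x0.mean()), y0.mean() * (1 - y0.mean())

# cost-optimal allocation: minimize cx*nx + cy*ny s.t. vx/nx + vy/ny <= (h/z)^2
V0 = (h / z) ** 2
nx = int(np.ceil(np.sqrt(vx / cx) * (np.sqrt(vx * cx) + np.sqrt(vy * cy)) / V0))
ny = int(np.ceil(np.sqrt(vy / cy) * (np.sqrt(vx * cx) + np.sqrt(vy * cy)) / V0))
nx, ny = max(nx, n0), max(ny, n0)                 # never fewer samples than the pilot

x = np.concatenate([x0, rng.binomial(1, px_true, nx - n0)])
y = np.concatenate([y0, rng.binomial(1, py_true, ny - n0)])
diff = x.mean() - y.mean()
hw = z * np.sqrt(x.mean() * (1 - x.mean()) / nx + y.mean() * (1 - y.mean()) / ny)
print(f"CI for p_x - p_y: {diff:.3f} +/- {hw:.3f}, total cost = {cx*nx + cy*ny:.0f}")
```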

This work concerns developing communication- and computation-efficient methods for large-scale multiple testing over networks, which is of interest in many practical applications. We take an asymptotic approach and propose two methods, proportion-matching and greedy aggregation, tailored to distributed settings. The proportion-matching method achieves the global BH performance yet only requires a one-shot communication of the (estimated) proportion of true null hypotheses and the number of p-values at each node. By focusing on the asymptotic optimal power, we go beyond the BH procedure and provide an explicit characterization of the asymptotically optimal solution. This leads to the greedy aggregation method, which effectively approximates the optimal rejection region at each node, with computational efficiency arising naturally from its greedy structure. Extensive numerical results over a variety of challenging settings are provided to support our theoretical findings.
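
For context, the sketch below implements the standard Benjamini-Hochberg (BH) step-up procedure together with Storey's estimator of the null proportion, i.e., the kind of one-shot summary each node could communicate; it is not the paper's proportion-matching or greedy aggregation method. The p-values are synthetic.

```python
# Minimal sketch: BH step-up rejections and Storey's null-proportion estimate.
import numpy as np

def bh_rejections(pvals, alpha=0.1):
    """Indices rejected by the BH step-up procedure at level alpha."""
    p = np.sort(pvals)
    m = len(p)
    thresh = alpha * np.arange(1, m + 1) / m
    below = np.nonzero(p <= thresh)[0]
    if below.size == 0:
        return np.array([], dtype=int)
    cutoff = p[below[-1]]                 # largest p-value under the step-up line
    return np.nonzero(pvals <= cutoff)[0]

def storey_null_proportion(pvals, lam=0.5):
    """Storey's estimate of the proportion of true null hypotheses."""
    return min(1.0, (pvals > lam).mean() / (1 - lam))

rng = np.random.default_rng(3)
p_null = rng.uniform(size=900)                      # true nulls
p_alt = rng.beta(0.3, 4.0, size=100)                # non-nulls, concentrated near 0
pvals = np.concatenate([p_null, p_alt])
print("estimated null proportion:", round(storey_null_proportion(pvals), 3))
print("BH rejections at alpha=0.1:", bh_rejections(pvals, 0.1).size)
```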

We propose a new dynamic average consensus algorithm that is robust to persistent and independent information-sharing noise added for the purpose of differential-privacy protection. Dynamic average consensus is not only widely used in cooperative control and distributed tracking, but is also a fundamental building block in numerous distributed computation algorithms such as multi-agent optimization and distributed Nash equilibrium seeking. The proposed algorithm ensures both provable convergence to the exact average reference signal and rigorous epsilon-differential privacy (even when the number of iterations tends to infinity), which, to our knowledge, has not been achieved before in average consensus algorithms. Given that channel noise in communication can be viewed as a special case of differential-privacy noise, the algorithm can also be used to counteract communication imperfections. Numerical simulation results confirm the effectiveness of the proposed approach.
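
The sketch below shows a textbook first-order dynamic average consensus iteration on a ring graph in which each agent shares its state corrupted by Laplace noise, a stand-in for differential-privacy noise. It is not the authors' algorithm and does not enjoy the exact-convergence and privacy guarantees described above; it only illustrates the setting.

```python
# Minimal sketch: dynamic average consensus with noisy information sharing.
# Graph, weights, reference signals, and noise scale are illustrative.
import numpy as np

rng = np.random.default_rng(4)
n, T = 10, 400
W = np.zeros((n, n))                          # Metropolis weights on a ring graph
for i in range(n):
    for j in ((i - 1) % n, (i + 1) % n):
        W[i, j] = 1 / 3
    W[i, i] = 1 - W[i].sum()

t = np.arange(T)
r = np.array([np.sin(0.02 * t + i) for i in range(n)])   # time-varying reference signals
x = r[:, 0].copy()                                        # initialize at r_i(0)
for k in range(T - 1):
    shared = x + rng.laplace(scale=0.05, size=n)          # noisy information sharing
    x = W @ shared + (r[:, k + 1] - r[:, k])              # consensus step + signal tracking
print("final tracking error:", np.abs(x - r[:, -1].mean()).max())
```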

This paper is motivated by the need to quantify human immune responses to environmental challenges. Specifically, the genome of a selected cell population from a blood sample is amplified by the well-known PCR process of successive heating and cooling, producing a large number of reads, roughly 30,000 to 300,000. Each read corresponds to a particular rearrangement of so-called V(D)J sequences. In the end, the observation consists of a set of read counts corresponding to different V(D)J sequences. The underlying relative frequencies of distinct V(D)J sequences can be summarized by a probability vector, whose cardinality is the number of distinct V(D)J rearrangements present in the blood. The statistical question is to make inferences about a summary parameter of the probability vector based on a single multinomial-type observation of large dimension. Popular summaries of the diversity of a cell population include clonality and entropy or, more generally, a suitable function of the probability vector. A point estimator of clonality based on multiple replicates from the same blood sample has been proposed previously. After obtaining a point estimator of a particular function, the remaining challenge is to construct a confidence interval for the parameter that appropriately reflects its uncertainty. In this paper, we propose coupling the empirical Bayes method with a resampling-based calibration procedure to construct robust confidence intervals for various population diversity parameters. The method is illustrated via an extensive numerical study and real data examples.
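
For context, the sketch below computes plug-in estimates of entropy and clonality from synthetic V(D)J read counts and attaches a naive multinomial-bootstrap percentile CI, the kind of baseline the proposed empirical-Bayes calibration is meant to improve on; it is not the paper's method. The clonality definition (one minus normalized entropy over observed sequences) is a common convention assumed here.

```python
# Minimal sketch: plug-in diversity estimates with a naive bootstrap CI.
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -(p * np.log(p)).sum()

def clonality(p):                      # common convention: 1 - normalized entropy
    p = p[p > 0]
    return 1.0 - entropy(p) / np.log(len(p))

rng = np.random.default_rng(5)
true_p = rng.dirichlet(np.full(2000, 0.1))          # skewed clone frequencies
counts = rng.multinomial(100_000, true_p)           # one multinomial-type observation
p_hat = counts / counts.sum()

boot = []
for _ in range(500):                                # resample reads, recompute clonality
    c = rng.multinomial(counts.sum(), p_hat)
    boot.append(clonality(c / c.sum()))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"clonality estimate {clonality(p_hat):.4f}, naive bootstrap CI ({lo:.4f}, {hi:.4f})")
```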

This paper deals with the maximum independent set (M.I.S.) problem, also known as the stable set problem. The basic mathematical programming model that captures this problem is an Integer Program (I.P.) with zero-one variables and only the edge inequalities. We present an enhanced model obtained by adding a polynomial number of linear constraints, known as valid inequalities; the new model remains polynomial in the number of vertices of the graph. We carried out computational testing of the Linear Relaxation of the new Integer Program. We tested about 7000 instances of randomly generated (and connected) graphs with up to 64 vertices (as well as all 64-, 128-, and 256-vertex instances at the "challenge" website OEIS.org). In each of these instances, the Linear Relaxation returned an optimal solution in which (i) every variable took an integer value and (ii) the optimal value was the same as that of the original (basic) Integer Program. Our computational experience has been that a binary search on the objective function value is a powerful tool that yields a (weakly) polynomial algorithm.
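
The sketch below sets up the LP relaxation of the edge-inequality formulation on a tiny illustrative graph and adds triangle (3-clique) inequalities as one familiar example of valid inequalities; the paper's actual polynomial family of inequalities is not reproduced here.

```python
# Minimal sketch: LP relaxation of the maximum-independent-set IP with edge
# and triangle inequalities, solved with scipy.optimize.linprog.
from itertools import combinations
import numpy as np
from scipy.optimize import linprog

# small illustrative graph: a 5-cycle plus one chord
n = 5
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (0, 2)]

rows, rhs = [], []
for i, j in edges:                                    # edge inequalities x_i + x_j <= 1
    row = np.zeros(n); row[[i, j]] = 1
    rows.append(row); rhs.append(1)
eset = set(edges) | {(j, i) for i, j in edges}
for i, j, k in combinations(range(n), 3):             # triangle inequalities x_i + x_j + x_k <= 1
    if {(i, j), (j, k), (i, k)} <= eset:
        row = np.zeros(n); row[[i, j, k]] = 1
        rows.append(row); rhs.append(1)

res = linprog(c=-np.ones(n), A_ub=np.array(rows), b_ub=np.array(rhs),
              bounds=[(0, 1)] * n, method="highs")    # maximize sum(x) = minimize -sum(x)
print("LP optimum:", -res.fun, "solution:", np.round(res.x, 3))
```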

The combination of Doppler frequency shifts (due to mobility) and phase noise (due to the imperfections of oscillators operating at high carrier frequencies) poses serious challenges to Orthogonal Frequency Division Multiplexing (OFDM) wireless transmissions, both in terms of channel estimation and phase noise tracking performance and in terms of the pilot overhead required for that estimation and tracking. In this paper, we use separate sets of Basis Expansion Model (BEM) coefficients to model, over intervals of several OFDM symbols, the time variation of the channel paths and of the phase noise process. Based on this model, an efficient solution approximating the maximum-likelihood joint estimation of these BEM coefficients is derived and shown to outperform state-of-the-art phase noise compensation methods.
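
As a minimal illustration of the BEM idea (not the paper's joint ML estimator for channel and phase noise), the sketch below represents a single time-varying channel tap over a block of samples with a low-order Legendre polynomial basis and fits the BEM coefficients by least squares. The basis choice, block length, Doppler, and noise level are illustrative.

```python
# Minimal sketch: least-squares fit of BEM coefficients for one time-varying tap.
import numpy as np

rng = np.random.default_rng(6)
N, Q = 512, 4                                   # samples in the block, BEM order
t = np.linspace(-1, 1, N)
B = np.polynomial.legendre.legvander(t, Q - 1)  # N x Q Legendre basis matrix

fd = 0.3                                        # normalized Doppler (illustrative)
h_true = np.exp(1j * 2 * np.pi * fd * t) * (1 + 0.3 * t)   # toy time-varying tap
h_obs = h_true + 0.05 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

c, *_ = np.linalg.lstsq(B.astype(complex), h_obs, rcond=None)   # BEM coefficients
h_bem = B @ c
print("relative modelling error:",
      np.linalg.norm(h_bem - h_true) / np.linalg.norm(h_true))
```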

The purpose of this article is to develop a general parametric estimation theory that allows the derivation of the limit distribution of estimators in non-regular models, where the true parameter value may lie on the boundary of the parameter space or where even identifiability fails. To this end, we propose a more general local approximation of the parameter space (at the true value) than in previous studies. This estimation theory is comprehensive in that it can handle penalized estimation as well as quasi-maximum likelihood estimation under such non-regular models. Moreover, our results apply to so-called non-ergodic statistics, where the Fisher information is random in the limit, including the regular experiment that is locally asymptotically mixed normal. In penalized estimation, depending on the boundary constraint, even the Bridge estimator with $q<1$ does not necessarily yield selection consistency. We therefore describe a sufficient condition for selection consistency that precisely evaluates the balance between the boundary constraint and the form of the penalty. Examples handled in the paper are: (i) ML estimation of the generalized inverse Gaussian distribution, (ii) quasi-ML estimation of the diffusion parameter in a non-ergodic It\^o process whose parameter space consists of positive semi-definite symmetric matrices, while the drift parameter is treated as a nuisance, and (iii) penalized ML estimation of variance components of random effects in linear mixed models.
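
For concreteness, the Bridge-penalized estimator referred to above can be written, in a generic quasi-likelihood setting, as
$$ \hat\theta_n \in \arg\min_{\theta\in\Theta}\Big\{ -\ell_n(\theta) + \lambda_n \sum_{j} |\theta_j|^{q} \Big\}, \qquad 0 < q < 1, $$
where $\ell_n$ is the (quasi-)log-likelihood, $\lambda_n \ge 0$ is a tuning sequence, and the parameter space $\Theta$ may have a boundary (e.g., a positive semi-definite constraint); the precise conditions balancing the boundary constraint against the form of the penalty are those developed in the paper.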

It is well known that second order homogeneous linear ordinary differential equations with slowly varying coefficients admit slowly varying phase functions. This observation underlies the Liouville-Green method and many other techniques for the asymptotic approximation of the solutions of such equations. It is also the basis of a recently developed numerical algorithm that, in many cases of interest, runs in time independent of the magnitude of the equation's coefficients and achieves accuracy on par with that predicted by its condition number. Here we point out that a large class of second order inhomogeneous linear ordinary differential equations can be efficiently and accurately solved by combining phase function methods for second order homogeneous linear ordinary differential equations with a variant of the adaptive Levin method for evaluating oscillatory integrals.
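
As a small, self-contained illustration of the Levin idea for oscillatory integrals (the basic non-adaptive collocation scheme, not the adaptive variant referenced above), the sketch below computes $\int_{-1}^{1} f(x)e^{i\omega x}\,dx$ by solving the Levin ODE $p'(x)+i\omega p(x)=f(x)$ on a Chebyshev grid and evaluating the antiderivative at the endpoints; a brute-force fine-grid quadrature serves as a sanity check. The amplitude $f$ and frequency $\omega$ are illustrative.

```python
# Minimal sketch: basic Levin collocation for an oscillatory integral.
import numpy as np

def cheb(n):
    """Chebyshev differentiation matrix and points on [-1, 1] (Trefethen)."""
    x = np.cos(np.pi * np.arange(n + 1) / n)
    c = np.hstack([2.0, np.ones(n - 1), 2.0]) * (-1.0) ** np.arange(n + 1)
    X = np.tile(x, (n + 1, 1)).T
    D = np.outer(c, 1.0 / c) / (X - X.T + np.eye(n + 1))
    D -= np.diag(D.sum(axis=1))
    return D, x

n = 40
D, x = cheb(n)
omega = 200.0                                 # large frequency parameter (illustrative)
f = 1.0 / (1.0 + x ** 2)                      # smooth, non-oscillatory amplitude
A = D + 1j * omega * np.eye(n + 1)            # collocation of p' + i*omega*p = f
p = np.linalg.solve(A, f.astype(complex))
# integral = [p(x) exp(i*omega*x)] evaluated between the endpoints (x[0]=1, x[-1]=-1)
I_levin = p[0] * np.exp(1j * omega * x[0]) - p[-1] * np.exp(1j * omega * x[-1])

xx = np.linspace(-1, 1, 200001)               # brute-force trapezoid check on a fine grid
vals = np.exp(1j * omega * xx) / (1 + xx ** 2)
I_ref = np.sum((vals[:-1] + vals[1:]) / 2.0) * (xx[1] - xx[0])
print("Levin vs brute-force discrepancy:", abs(I_levin - I_ref))
```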

Offline reinforcement learning, which aims at optimizing sequential decision-making strategies with historical data, has been extensively applied in real-life applications. State-of-the-art algorithms usually leverage powerful function approximators (e.g., neural networks) to alleviate the sample complexity hurdle and achieve better empirical performance. Despite these successes, a more systematic understanding of the statistical complexity of function approximation remains lacking. Towards bridging this gap, we take a step by considering offline reinforcement learning with differentiable function class approximation (DFA). This function class naturally incorporates a wide range of models with nonlinear/nonconvex structures. Most importantly, we show that offline RL with differentiable function approximation is provably efficient by analyzing the pessimistic fitted Q-learning (PFQL) algorithm, and our results provide the theoretical basis for understanding a variety of practical heuristics that rely on Fitted Q-Iteration-style design. In addition, we further improve our guarantee with a tighter instance-dependent characterization. We hope our work will draw interest to the study of reinforcement learning with differentiable function approximation beyond the scope of current research.
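
As a hedged sketch of the pessimism principle (a linear-features analogue in the spirit of pessimistic value iteration, not the paper's PFQL with general differentiable function classes), the code below runs fitted Q-iteration on a toy offline dataset and subtracts an elliptical uncertainty bonus $\beta\sqrt{\phi^\top\Sigma^{-1}\phi}$ from the fitted Q-values. The MDP, features, and constants are illustrative.

```python
# Minimal sketch: pessimistic fitted Q-iteration with linear (one-hot) features.
import numpy as np

rng = np.random.default_rng(7)
nS, nA, gamma, beta, lam = 5, 2, 0.9, 0.1, 1.0

def phi(s, a):                                   # one-hot state-action features
    v = np.zeros(nS * nA); v[s * nA + a] = 1.0; return v

def step(s, a):                                  # toy chain: action 1 moves right, 0 stays;
    s2 = min(s + 1, nS - 1) if a == 1 else s     # reward 1 whenever the next state is the last
    return (1.0 if s2 == nS - 1 else 0.0), s2

data = []                                        # offline data from a uniformly random policy
for _ in range(2000):
    s, a = rng.integers(nS), rng.integers(nA)
    r, s2 = step(s, a)
    data.append((s, a, r, s2))

Phi = np.array([phi(s, a) for s, a, _, _ in data])
Sigma = Phi.T @ Phi + lam * np.eye(nS * nA)      # regularized design matrix
Sinv = np.linalg.inv(Sigma)

def q_pess(s, a, w):                             # pessimistic Q = fitted value - uncertainty bonus
    f = phi(s, a)
    return f @ w - beta * np.sqrt(f @ Sinv @ f)

w = np.zeros(nS * nA)
for _ in range(50):                              # fitted Q-iteration sweeps
    targets = np.array([r + gamma * max(q_pess(s2, a2, w) for a2 in range(nA))
                        for _, _, r, s2 in data])
    w = Sinv @ Phi.T @ targets                   # ridge regression of Bellman targets
print("greedy action per state:",
      [int(np.argmax([q_pess(s, a, w) for a in range(nA)])) for s in range(nS)])
```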
