
This paper investigates the mean square error (MSE)-optimal conditional mean estimator (CME) in one-bit quantized systems in the context of channel estimation with jointly Gaussian inputs. We analyze the relationship of the generally nonlinear CME to the linear Bussgang estimator, a well-known method based on Bussgang's theorem. We highlight the novel observation that the Bussgang estimator equals the CME in several special cases, including univariate Gaussian inputs and multiple observations in the absence of additive noise prior to quantization. For the general case, we conduct numerical simulations to quantify the gap between the Bussgang estimator and the CME; this gap grows with the input dimension and the pilot sequence length. We propose an optimal pilot sequence, motivated by insights from the CME, and derive a novel closed-form expression of the MSE for that case. We then derive a closed-form limit of the MSE in the regime of asymptotically many pilots, which also holds for the Bussgang estimator. Lastly, we present numerical experiments for various system parameters and performance metrics that illuminate the behavior of the optimal channel estimator in the quantized regime. In this context, the well-known stochastic resonance effect that appears in quantized systems can be quantified.
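
As a minimal illustration of the univariate special case mentioned above (where the Bussgang estimator coincides with the CME), the following Python sketch compares the closed-form scalar estimator against a Monte Carlo MSE. The variances and sample size are illustrative assumptions, not values from the paper.

```python
import numpy as np

# One-bit measurement model: y = sign(h + n), h ~ N(0, s_h^2), n ~ N(0, s_n^2).
# In the scalar Gaussian case the CME E[h | y] has the closed form
#   E[h | y] = y * sqrt(2/pi) * s_h^2 / sqrt(s_h^2 + s_n^2),
# which coincides with the linear Bussgang estimator (the univariate special
# case discussed in the abstract).

rng = np.random.default_rng(0)
s_h, s_n, N = 1.0, 0.5, 200_000           # illustrative parameters

h = rng.normal(0.0, s_h, N)               # channel realizations
y = np.sign(h + rng.normal(0.0, s_n, N))  # one-bit observations

h_hat = y * np.sqrt(2.0 / np.pi) * s_h**2 / np.sqrt(s_h**2 + s_n**2)

mse = np.mean((h - h_hat) ** 2)
# analytical MSE: s_h^2 - (2/pi) * s_h^4 / (s_h^2 + s_n^2)
mse_theory = s_h**2 - (2.0 / np.pi) * s_h**4 / (s_h**2 + s_n**2)
print(f"empirical MSE {mse:.4f}  vs. closed form {mse_theory:.4f}")
```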

Related content

Stability and optimal convergence analysis of a non-uniform implicit-explicit L1 finite element method (IMEX-L1-FEM) are studied for a class of time-fractional linear partial differential/integro-differential equations with a non-self-adjoint elliptic part having variable (space-time) coefficients. The non-uniform IMEX-L1-FEM is based on a combination of an IMEX-L1 method on a graded mesh in the temporal direction and a finite element method in the spatial direction. A discrete fractional Grönwall inequality is proposed, which enables us to derive optimal error estimates in the $L^2$- and $H^1$-norms. Numerical experiments are presented to validate our theoretical findings.
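
To make the temporal ingredient concrete, here is a minimal Python sketch of the L1 discretization of a Caputo derivative of order $\alpha$ on a graded mesh, applied to a scalar test problem. It shows only the L1/graded-mesh building block under illustrative parameters, not the paper's IMEX-L1-FEM.

```python
import numpy as np
from math import gamma

# L1 approximation of the Caputo derivative of order alpha on a graded mesh
# t_k = T * (k/N)**r, applied to the scalar test problem
#   D^alpha u(t) = -lam * u(t),  u(0) = 1,
# solved implicitly at each step.

alpha, lam, T, N = 0.5, 1.0, 1.0, 200
r = (2 - alpha) / alpha                   # typical grading exponent for L1
t = T * (np.arange(N + 1) / N) ** r

u = np.empty(N + 1)
u[0] = 1.0
g = gamma(2 - alpha)
for n in range(1, N + 1):
    # L1 weights b_k = ((t_n - t_{k-1})^{1-a} - (t_n - t_k)^{1-a}) / (G(2-a) * tau_k)
    tau = np.diff(t[: n + 1])
    b = ((t[n] - t[:n]) ** (1 - alpha)
         - (t[n] - t[1 : n + 1]) ** (1 - alpha)) / (g * tau)
    # history part: sum_{k<n} b_k (u_k - u_{k-1}), then solve for u_n
    hist = np.dot(b[:-1], np.diff(u[:n])) if n > 1 else 0.0
    u[n] = (b[-1] * u[n - 1] - hist) / (b[-1] + lam)

print("u(T) =", u[-1])
```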

We consider the problem of model selection when grouping structure is inherent within the regressors. Using a Bayesian approach, we model the mean vector by a one-group global-local shrinkage prior belonging to a broad class of such priors that includes the horseshoe prior; in the context of variable selection, this class was studied by Tang et al. (2018). We propose a modified form of the usual class of global-local shrinkage priors with polynomial tails on the group regression coefficients. The resulting threshold rule selects a group as active if the ratio of the $L_2$ norm of the posterior mean of its group coefficient to that of the corresponding ordinary least squares group estimate exceeds one half. In the theoretical part of this article, the global shrinkage parameter $\tau$ is used either as a tuning parameter or via an empirical Bayes estimate, depending on the knowledge of the underlying sparsity of the model. When the proportion of active groups is known and $\tau$ is used as a tuning parameter, we prove that our method enjoys variable selection consistency. When this proportion is unknown, we propose an empirical Bayes estimate of $\tau$ and show that the half-thresholding rule still captures the true sparse group structure. Although our theoretical results rely on a special form of the design matrix, our simulations show that for general design matrices the half-thresholding rule yields results similar to those of Yang and Narisetty (2020). Consequently, in a high-dimensional sparse group selection problem, instead of using the so-called `gold standard' spike-and-slab prior, one can use one-group global-local shrinkage priors with polynomial tails to obtain similar results.
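
The half-thresholding rule itself is simple to state in code. The sketch below assumes the posterior means and OLS estimates have already been computed (e.g., from an MCMC fit of the shrinkage prior); the numbers are made up for illustration.

```python
import numpy as np

def half_threshold_select(post_mean, ols_est, groups):
    """Select a group as active when the L2 norm of the posterior mean of its
    coefficients exceeds half the L2 norm of the corresponding OLS estimate.
    `groups` maps a group label to the indices of its coefficients."""
    active = []
    for g, idx in groups.items():
        ratio = np.linalg.norm(post_mean[idx]) / np.linalg.norm(ols_est[idx])
        if ratio > 0.5:
            active.append(g)
    return active

# toy usage with made-up estimates for three groups of coefficients
post = np.array([0.9, 1.1, 0.02, -0.01, 0.8])
ols  = np.array([1.0, 1.2, 0.70,  0.60, 0.9])
print(half_threshold_select(post, ols, {"g1": [0, 1], "g2": [2, 3], "g3": [4]}))
# -> ['g1', 'g3']
```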

The use of high-order fully implicit Runge-Kutta methods is of significant importance for the numerical solution of transient partial differential equations, in particular when solving large-scale problems with many millions of spatial degrees of freedom, due to fine space resolution, over long time intervals. In this study we consider strongly A-stable implicit Runge-Kutta methods of arbitrary order of accuracy, based on Radau quadratures, for which efficient preconditioners have been introduced. A refined spectral analysis of the corresponding matrices and matrix-sequences is presented, both in terms of localization and asymptotic global distribution of the eigenvalues. Specific expressions for the eigenvectors are also obtained. The analysis fully agrees with the numerically observed spectral behavior and substantially improves on the theoretical studies done in this direction so far. Concluding remarks and open problems close the work, with specific attention to potential generalizations of the suggested approach.
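
For concreteness, the following sketch performs steps of the two-stage (order-3) Radau IIA method on the Dahlquist test problem, which fixes the form of the implicit stage systems whose spectra such preconditioners target; it does not implement the preconditioners analyzed in the paper.

```python
import numpy as np

# Two-stage Radau IIA (order 3) Butcher tableau.
A = np.array([[5/12, -1/12],
              [3/4,   1/4]])
b = np.array([3/4, 1/4])

def radau_step(lam, h, y):
    """One implicit step for y' = lam*y: solve the stage system
    (I - h*lam*A) k = lam*y*1 for the stage derivatives k,
    then update y + h * b.k."""
    k = np.linalg.solve(np.eye(2) - h * lam * A, lam * y * np.ones(2))
    return y + h * b @ k

lam, h, y = -2.0, 0.1, 1.0
for _ in range(10):
    y = radau_step(lam, h, y)
print(y, np.exp(lam * 1.0))  # compare with exact e^{lam t} at t = 1
```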

We consider the differentially private estimation of multiple quantiles (MQ) of a distribution from a dataset, a key building block in modern data analysis. We apply the recent non-smoothed Inverse Sensitivity (IS) mechanism to this specific problem and establish that the resulting method is closely related to the recently published ad hoc algorithm JointExp; in particular, the two share the same computational complexity and a similar efficiency. We prove the statistical consistency of both algorithms for continuous distributions. Furthermore, we demonstrate both theoretically and empirically that this method suffers from an important lack of performance for peaked distributions, which can degrade to a potentially catastrophic extent in the presence of atoms. Its smoothed version (i.e., applying a max kernel to its output density) would solve this problem, but implementing it remains an open challenge. As a proxy, we propose a simple and numerically efficient method called Heuristically Smoothed JointExp (HSJointExp), which is endowed with performance guarantees for a broad class of distributions and achieves results that are orders of magnitude better on problematic datasets.
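
As background, the sketch below implements the standard exponential-mechanism release of a single quantile, the building block that JointExp extends jointly to multiple quantiles. It is a textbook-style baseline, not the paper's HSJointExp, and the data bounds lo/hi are assumed known.

```python
import numpy as np

def dp_quantile(x, q, eps, lo, hi, rng):
    """Release one q-quantile of x under eps-DP via the exponential mechanism.
    Interval i between consecutive sorted points has i data points below it;
    its utility is -|i - q*n| and its sampling weight width * exp(eps*u/2)."""
    x = np.clip(np.sort(x), lo, hi)
    edges = np.concatenate(([lo], x, [hi]))       # n+1 inter-point intervals
    widths = np.diff(edges)
    util = -np.abs(np.arange(len(widths)) - q * len(x))
    logw = np.log(np.maximum(widths, 1e-300)) + 0.5 * eps * util
    p = np.exp(logw - logw.max())
    p /= p.sum()
    i = rng.choice(len(p), p=p)
    return rng.uniform(edges[i], edges[i + 1])    # uniform within the interval

rng = np.random.default_rng(1)
data = rng.normal(0, 1, 1000)
print(dp_quantile(data, 0.5, eps=1.0, lo=-10, hi=10, rng=rng))
```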

Software is a great enabler for a number of projects that would otherwise be impossible to perform, including space exploration, weather modeling, and genome projects. It is critical that software aiding these projects does what it is expected to do. In the terminology of software engineering, software that meets its requirements, that is, does what it is expected to do, is called correct. Checking the correctness of software has been the focus of a great deal of research in software engineering, yet practitioners in the fields where software is applied often assign little value to such checking. As software systems become larger, potentially combining distributed subsystems written by different authors, verification becomes ever more important. Concurrent, distributed systems are prone to dangerous errors arising from the differing execution speeds of their components, such as deadlocks, race conditions, or violations of project-specific properties. This project describes an application of a static analysis method called model checking to the verification of a distributed system for a bioinformatics process. We evaluate the efficiency of the model checking approach for verifying combined processes with an increasing number of concurrently executed steps, show that our experimental results correspond to analytically derived expectations, and highlight the importance of static analysis for combined processes in the bioinformatics field.
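
As a toy illustration of what explicit-state model checking does, the sketch below explores the product state space of two processes that acquire two locks in opposite order and reports reachable deadlocks. It is a self-contained miniature of the technique, not the tooling or models used in the project.

```python
from collections import deque

# Two processes acquire two locks in opposite order and then release them --
# the classic recipe for deadlock.  We explore the product state space
# breadth-first and flag non-final states with no outgoing transition,
# a toy version of what an explicit-state model checker (e.g. SPIN) does.

# local program of each process: state -> (action, lock, next state)
P1 = {0: ("acq", "a", 1), 1: ("acq", "b", 2), 2: ("rel", "a", 3), 3: ("rel", "b", 4)}
P2 = {0: ("acq", "b", 1), 1: ("acq", "a", 2), 2: ("rel", "b", 3), 3: ("rel", "a", 4)}
DONE = (4, 4)

def successors(state):
    locs, held = state
    for pid, proc in ((0, P1), (1, P2)):
        loc = locs[pid]
        if loc not in proc:
            continue
        action, lock, nxt = proc[loc]
        if action == "acq" and lock in held:
            continue                                  # blocked on a taken lock
        new_held = held | {lock} if action == "acq" else held - {lock}
        new_locs = (nxt, locs[1]) if pid == 0 else (locs[0], nxt)
        yield (new_locs, frozenset(new_held))

init = ((0, 0), frozenset())
seen, frontier = {init}, deque([init])
while frontier:
    s = frontier.popleft()
    succ = list(successors(s))
    if not succ and s[0] != DONE:                     # stuck before completion
        print("deadlock: local states", s[0], "locks held:", set(s[1]))
    for t in succ:
        if t not in seen:
            seen.add(t)
            frontier.append(t)
```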

The $N$th power of a polynomial matrix of fixed size and degree can be computed by binary powering as fast as multiplying two polynomials of degree linear in $N$. When the Fast Fourier Transform (FFT) is available, the resulting arithmetic complexity is \emph{softly linear} in $N$, i.e. linear in $N$ with extra logarithmic factors. We show that it is possible to beat binary powering with an algorithm whose complexity is \emph{purely linear} in $N$, even in the absence of FFT. The key result making this improvement possible is that the entries of the $N$th power of a polynomial matrix satisfy linear differential equations with polynomial coefficients whose orders and degrees are independent of $N$. Similar algorithms are proposed for two related problems: computing the $N$th term of a C-recursive sequence of polynomials, and modular exponentiation to the power $N$ for bivariate polynomials.
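
The baseline the paper improves upon is easy to sketch: binary powering of a polynomial matrix, with entries stored as coefficient vectors. The ODE-based purely linear algorithm itself is not reproduced here.

```python
import numpy as np

# N-th power of a polynomial matrix by binary powering.  A polynomial matrix
# is stored as a 3-D array P[i, j, k] = coefficient of x^k in entry (i, j).

def polmat_mul(P, Q):
    n, dp, dq = P.shape[0], P.shape[2], Q.shape[2]
    R = np.zeros((n, n, dp + dq - 1))
    for i in range(n):
        for j in range(n):
            for k in range(n):
                R[i, j] += np.convolve(P[i, k], Q[k, j])  # polynomial product
    return R

def polmat_pow(P, N):
    n = P.shape[0]
    R = np.zeros((n, n, 1))
    R[np.arange(n), np.arange(n), 0] = 1.0                # identity matrix
    while N:                                              # square-and-multiply
        if N & 1:
            R = polmat_mul(R, P)
        P, N = polmat_mul(P, P), N >> 1
    return R

# example: [[x, 1], [0, x]] ** 3  ->  [[x^3, 3x^2], [0, x^3]]
P = np.zeros((2, 2, 2))
P[0, 0] = [0, 1]; P[0, 1] = [1, 0]; P[1, 1] = [0, 1]
print(polmat_pow(P, 3)[0, 1])  # coefficients of 3x^2: [0. 0. 3. 0.]
```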

Parallel-in-time integration has been the focus of intensive research efforts over the past two decades due to the advent of massively parallel computer architectures and the scaling limits of purely spatial parallelization. Various iterative parallel-in-time (PinT) algorithms have been proposed, like Parareal, PFASST, MGRIT, and Space-Time Multi-Grid (STMG). These methods have been described using different notations, and the convergence estimates that are available are difficult to compare. We describe Parareal, PFASST, MGRIT and STMG for the Dahlquist model problem using a common notation and give precise convergence estimates using generating functions. This allows us, for the first time, to directly compare their convergence. We prove that all four methods eventually converge super-linearly, and also compare them numerically. The generating function framework provides further opportunities to explore and analyze existing and new methods.
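
As a minimal common reference point, the following sketch runs Parareal on the Dahlquist problem with an implicit-Euler coarse propagator and an exact fine propagator; all parameters are illustrative.

```python
import numpy as np

# Parareal for the Dahlquist problem y' = lam * y on [0, T]:
# coarse propagator G = one backward-Euler step per time slice,
# fine propagator F = exact exponential.

lam, T, P = -1.0 + 2.0j, 5.0, 20          # P time slices
dT = T / P
F = lambda y: y * np.exp(lam * dT)        # fine (here: exact) propagator
G = lambda y: y / (1.0 - lam * dT)        # coarse: one implicit-Euler step

# zeroth iterate: sequential coarse sweep
U = np.empty(P + 1, dtype=complex)
U[0] = 1.0
for n in range(P):
    U[n + 1] = G(U[n])

exact = np.exp(lam * dT * np.arange(P + 1))
for k in range(1, 8):                     # Parareal corrections
    Fk = F(U[:-1])                        # fine solves, parallel across slices
    Gk_old = G(U[:-1])
    V = U.copy()
    for n in range(P):                    # sequential coarse correction sweep
        V[n + 1] = G(V[n]) + Fk[n] - Gk_old[n]
    U = V
    print(f"iter {k}: max error {np.abs(U - exact).max():.2e}")
```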

A confidence sequence (CS) is a sequence of confidence intervals that is valid at arbitrary data-dependent stopping times. These are useful in applications like A/B testing, multi-armed bandits, off-policy evaluation, election auditing, etc. We present three approaches to constructing a confidence sequence for the population mean, under the minimal assumption that only an upper bound $\sigma^2$ on the variance is known. While previous works rely on light-tail assumptions like boundedness or subGaussianity (under which all moments of a distribution exist), the confidence sequences in our work are able to handle data from a wide range of heavy-tailed distributions. The best among our three methods -- the Catoni-style confidence sequence -- performs remarkably well in practice, essentially matching the state-of-the-art methods for $\sigma^2$-subGaussian data, and provably attains the $\sqrt{\log \log t/t}$ lower bound due to the law of the iterated logarithm. Our findings have important implications for sequential experimentation with unbounded observations, since the $\sigma^2$-bounded-variance assumption is more realistic and easier to verify than $\sigma^2$-subGaussianity (which implies the former). We also extend our methods to data with infinite variance but a finite $p$-th central moment ($1<p<2$).
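
For intuition about the Catoni-style construction, the sketch below implements Catoni's influence function and the fixed-sample point estimate via bisection; the scale tuning is an illustrative assumption, and the time-uniform confidence sequence machinery of the paper is not reproduced.

```python
import numpy as np

# Catoni's influence function psi truncates the identity influence of the
# sample mean, giving heavy-tail robustness:
#   psi(x) =  log(1 + x + x^2/2)   for x >= 0,
#           = -log(1 - x + x^2/2)  for x <  0.
# A Catoni-style estimate m solves sum_i psi(a * (X_i - m)) = 0.

def psi(x):
    return np.where(x >= 0,
                    np.log1p(x + 0.5 * x**2),
                    -np.log1p(-x + 0.5 * x**2))

def catoni_mean(x, sigma2, n_iter=80):
    a = np.sqrt(2.0 / (len(x) * sigma2))       # illustrative scale tuning
    lo, hi = x.min(), x.max()
    for _ in range(n_iter):                    # bisection: sum is decreasing in m
        mid = 0.5 * (lo + hi)
        if psi(a * (x - mid)).sum() > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

rng = np.random.default_rng(2)
x = rng.standard_t(df=2.5, size=2000)          # heavy-tailed, true mean 0
print("Catoni:", catoni_mean(x, sigma2=5.0), " sample mean:", x.mean())
```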

Goal-conditioned reinforcement learning (GCRL) refers to learning general-purpose skills which aim to reach diverse goals. In particular, offline GCRL only requires purely pre-collected datasets to perform training tasks without additional interactions with the environment. Although offline GCRL has become increasingly prevalent and many previous works have demonstrated its empirical success, the theoretical understanding of efficient offline GCRL algorithms is not well established, especially when the state space is huge and the offline dataset only covers the policy we aim to learn. In this paper, we propose a novel provably efficient algorithm (the sample complexity is $\tilde{O}({\rm poly}(1/\epsilon))$ where $\epsilon$ is the desired suboptimality of the learned policy) with general function approximation. Our algorithm only requires nearly minimal assumptions of the dataset (single-policy concentrability) and the function class (realizability). Moreover, our algorithm consists of two uninterleaved optimization steps, which we refer to as $V$-learning and policy learning, and is computationally stable since it does not involve minimax optimization. To the best of our knowledge, this is the first algorithm with general function approximation and single-policy concentrability that is both statistically efficient and computationally stable.
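
Purely to make the two uninterleaved phases concrete, here is a schematic tabular toy: a $V$-learning pass over an offline goal-conditioned dataset followed by a separate greedy policy-extraction pass, with no minimax loop. The paper's algorithm uses general function approximation and pessimism; everything below (dataset, environment, update rule) is a made-up simplification.

```python
import numpy as np
from collections import defaultdict

gamma = 0.9
# hypothetical offline dataset of transitions (s, a, s', g, r)
# on a 4-state chain, goal g = state 3, reward 1 on reaching it
D = [(0, "R", 1, 3, 0.0), (1, "R", 2, 3, 0.0), (2, "R", 3, 3, 1.0),
     (1, "L", 0, 3, 0.0), (2, "L", 1, 3, 0.0)]

# ---- step 1: V-learning (value iteration on the empirical model) ----
V = defaultdict(float)
for _ in range(100):
    for s, a, s2, g, r in D:
        V[(s, g)] = max(V[(s, g)], r + gamma * V[(s2, g)])

# ---- step 2: policy learning (greedy extraction, no minimax loop) ----
pi = {}
for s, a, s2, g, r in D:
    q = r + gamma * V[(s2, g)]
    if q >= pi.get((s, g), (None, -np.inf))[1]:
        pi[(s, g)] = (a, q)

print({k: v[0] for k, v in pi.items()})  # -> 'R' everywhere: move toward goal
```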

This PhD thesis contains several contributions to the field of statistical causal modeling. Statistical causal models are statistical models embedded with causal assumptions that allow for the inference and reasoning about the behavior of stochastic systems affected by external manipulation (interventions). This thesis contributes to the research areas concerning the estimation of causal effects, causal structure learning, and distributionally robust (out-of-distribution generalizing) prediction methods. We present novel and consistent linear and non-linear causal effects estimators in instrumental variable settings that employ data-dependent mean squared prediction error regularization. Our proposed estimators show, in certain settings, mean squared error improvements compared to both canonical and state-of-the-art estimators. We show that recent research on distributionally robust prediction methods has connections to well-studied estimators from econometrics. This connection leads us to prove that general K-class estimators possess distributional robustness properties. We, furthermore, propose a general framework for distributional robustness with respect to intervention-induced distributions. In this framework, we derive sufficient conditions for the identifiability of distributionally robust prediction methods and present impossibility results that show the necessity of several of these conditions. We present a new structure learning method applicable in additive noise models with directed trees as causal graphs. We prove consistency in a vanishing identifiability setup and provide a method for testing substructure hypotheses with asymptotic family-wise error control that remains valid post-selection. Finally, we present heuristic ideas for learning summary graphs of nonlinear time-series models.
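
One of the econometric objects mentioned above, the general K-class estimator, is compact enough to sketch. The simulation below is an illustrative assumption (a single instrument and a hidden confounder), not an experiment from the thesis.

```python
import numpy as np

# General K-class estimator in a linear IV setting: for outcome y, endogenous
# regressors X, and instruments Z,
#   beta(kappa) = (X'(I - kappa*M_Z)X)^{-1} X'(I - kappa*M_Z)y,
# with M_Z = I - Z(Z'Z)^{-1}Z'.  kappa = 0 gives OLS and kappa = 1 gives
# TSLS; the thesis relates intermediate kappa to distributional robustness.

def k_class(y, X, Z, kappa):
    PzX = Z @ np.linalg.solve(Z.T @ Z, Z.T @ X)   # projection of X onto Z
    Xw = (1 - kappa) * X + kappa * PzX            # (I - kappa*M_Z) X
    return np.linalg.solve(Xw.T @ X, Xw.T @ y)

rng = np.random.default_rng(3)
n = 1000
Z = rng.normal(size=(n, 1))
H = rng.normal(size=n)                            # hidden confounder
X = (2 * Z[:, 0] + H + rng.normal(size=n)).reshape(-1, 1)
y = 1.0 * X[:, 0] + 2 * H + rng.normal(size=n)    # true causal effect: 1.0

for kappa in (0.0, 0.5, 1.0):                     # OLS -> intermediate -> TSLS
    print(f"kappa={kappa}: beta = {k_class(y, X, Z, kappa)[0]:+.3f}")
```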
