With the development of innovative applications that require high reliability and low latency, ultra-reliable and low-latency communication has become critical for wireless networks. In this paper, the second-order coding rate of the coherent quasi-static Rayleigh-product MIMO channel is investigated. We consider coding rates within $\mathcal{O}(1/\sqrt{Mn})$ of the capacity, where $M$ and $n$ denote the number of transmit antennas and the blocklength, respectively, and derive closed-form upper and lower bounds on the optimal average error probability. This analysis is achieved by setting up a central limit theorem (CLT) for the mutual information density (MID) under the assumption that the blocklength, the number of scatterers, and the number of antennas go to infinity at the same pace. To obtain more physical insight, high- and low-SNR approximations of the upper and lower bounds are also given. One interesting observation is that rank deficiency degrades the performance of MIMO systems in the finite-blocklength (FBL) regime, and that the fundamental limits of the Rayleigh-product channel approach those of the single Rayleigh case as the number of scatterers tends to infinity. Finally, the accuracy of the CLT and the gap between the derived bounds and the performance of practical LDPC coding are illustrated by simulations.
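
As a hedged numerical check of the CLT (a minimal sketch, not the paper's derivation), one can sample the Rayleigh-product channel $H = H_1 H_2/\sqrt{L}$ and inspect the Gaussian fluctuation of the mutual information $\log\det(I_N + \frac{\rho}{M} H H^H)$, which is the mean of the MID over noise realizations; the dimensions, number of scatterers $L$, and SNR below are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def rayleigh_product(N, L, M):
    """N x M Rayleigh-product channel H = H1 @ H2 / sqrt(L), with L scatterers."""
    H1 = (rng.standard_normal((N, L)) + 1j * rng.standard_normal((N, L))) / np.sqrt(2)
    H2 = (rng.standard_normal((L, M)) + 1j * rng.standard_normal((L, M))) / np.sqrt(2)
    return H1 @ H2 / np.sqrt(L)

def mutual_info(H, snr, M):
    """Coherent mutual information log det(I + (snr/M) H H^H), in nats."""
    G = np.eye(H.shape[0]) + (snr / M) * (H @ H.conj().T)
    return np.linalg.slogdet(G)[1]

M, N, L, snr, trials = 8, 8, 16, 10.0, 5000   # illustrative dimensions and SNR
samples = np.array([mutual_info(rayleigh_product(N, L, M), snr, M)
                    for _ in range(trials)])
z = (samples - samples.mean()) / samples.std()
# If the Gaussian approximation is accurate, both moments should be near 0.
print("skewness:", np.mean(z**3), "excess kurtosis:", np.mean(z**4) - 3)
```

Increasing $L$ in this sketch should move the statistics toward those of a single Rayleigh channel, consistent with the observation above.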

Related Content

We consider the adversarial linear contextual bandit setting, which allows for the loss functions associated with each of $K$ arms to change over time without restriction. Assuming the $d$-dimensional contexts are drawn from a fixed known distribution, the worst-case expected regret over the course of $T$ rounds is known to scale as $\tilde O(\sqrt{Kd T})$. Under the additional assumption that the density of the contexts is log-concave, we obtain a second-order bound of order $\tilde O(K\sqrt{d V_T})$ in terms of the cumulative second moment of the learner's losses $V_T$, and a closely related first-order bound of order $\tilde O(K\sqrt{d L_T^*})$ in terms of the cumulative loss of the best policy $L_T^*$. Since $V_T$ or $L_T^*$ may be significantly smaller than $T$, these improve over the worst-case regret whenever the environment is relatively benign. Our results are obtained using a truncated version of the continuous exponential weights algorithm over the probability simplex, which we analyse by exploiting a novel connection to the linear bandit setting without contexts.
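
The paper's algorithm is a truncated continuous exponential weights method over the probability simplex; the discrete sketch below (an EXP3-style scheme with importance-weighted loss estimates, not the authors' algorithm) only illustrates the shared core mechanism, and its constants are ad hoc:

```python
import numpy as np

def exp3(loss_fn, K, T, rng=None):
    rng = rng or np.random.default_rng(0)
    eta = np.sqrt(2 * np.log(K) / (T * K))   # standard EXP3 tuning
    cum_est = np.zeros(K)                    # cumulative importance-weighted losses
    total = 0.0
    for t in range(T):
        w = np.exp(-eta * (cum_est - cum_est.min()))  # shift for numerical stability
        p = w / w.sum()
        arm = rng.choice(K, p=p)
        loss = loss_fn(t, arm)               # adversary may pick any losses in [0, 1]
        total += loss
        cum_est[arm] += loss / p[arm]        # unbiased estimate of the loss vector
    return total

# Toy environment: arm 0 is slightly better on average.
rng = np.random.default_rng(1)
loss_fn = lambda t, a: float(rng.random() < (0.4 if a == 0 else 0.6))
print("EXP3 total loss:", exp3(loss_fn, K=5, T=20000), "best arm ~", 0.4 * 20000)
```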

The geometric optimization of crystal structures is a procedure widely used in chemistry that changes the geometrical placement of the particles inside a structure. It is called structural relaxation and constitutes a local minimization problem with a non-convex objective function whose domain complexity increases with the number of particles involved. In this work we study the performance of the two most popular first-order optimization methods in structural relaxation. Although frequently employed, these methods have received little study in this context from an algorithmic point of view. We run each algorithm with a constant step size, which provides a benchmark for analyzing and directly comparing the methods. We also design dynamic step-size rules and study how these improve the performance of the two algorithms. Our results show that there is a trade-off between convergence rate and the probability that an experiment succeeds, so we construct a function that assigns a utility to each method based on the respective preferences. The function is built according to a recently introduced model of preference indication for algorithms with a deadline and their run time. Finally, building on the insights from our experimental results, we provide algorithmic recipes that best correspond to each of the presented preferences and select one recipe as optimal for equally weighted preferences. Alongside our results we present our open-source Python software veltiCRYS, which was used to perform the geometric optimization experiments. Our implementation can be easily edited to accommodate other energy functions and is especially targeted at testing different methods in structural relaxation.
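
The sketch below illustrates the constant-versus-dynamic step-size trade-off on a standard non-convex test function (Rosenbrock) rather than a crystal energy; it is not the veltiCRYS implementation, only a toy showing how a fixed step can fail where a backtracking rule succeeds:

```python
import numpy as np

def f(x):       # Rosenbrock function; global minimum at (1, 1)
    return (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2

def grad(x):
    return np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
                     200 * (x[1] - x[0]**2)])

def descend(x0, step_rule, iters=20000):
    x = np.array(x0, float)
    for k in range(iters):
        g = grad(x)
        x_new = x - step_rule(x, g) * g
        if not np.all(np.isfinite(x_new)):
            return None, k              # run "failed": the iterates blew up
        x = x_new
    return x, iters

def backtracking(x, g, t=1.0, beta=0.5, c=1e-4):
    while f(x - t * g) > f(x) - c * t * (g @ g):    # Armijo condition
        t *= beta
    return t

print("constant 1e-2:", descend([-1.2, 1.0], lambda x, g: 1e-2))  # diverges
print("constant 1e-3:", descend([-1.2, 1.0], lambda x, g: 1e-3))  # stable but slow
print("backtracking :", descend([-1.2, 1.0], backtracking))       # approaches (1, 1)
```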

We introduce and analyze a new finite-difference scheme, relying on the theta-method, for solving monotone second-order mean field games. These games consist of a coupled system of the Fokker-Planck and Hamilton-Jacobi-Bellman equations. The theta-method is used for discretizing the diffusion terms: we approximate them with a convex combination of an implicit and an explicit term. In contrast, we use an explicit centered scheme for the first-order terms. Assuming that the running cost is strongly convex and regular, we first prove the monotonicity and stability of our theta-scheme under a CFL condition. Taking advantage of the regularity of the solution of the continuous problem, we estimate the consistency error of the theta-scheme. Our main result is a convergence rate of order $\mathcal{O}(h^r)$ for the theta-scheme, where $h$ is the step length of the space variable and $r \in (0,1)$ is related to the H\"older continuity of the solution of the continuous problem and some of its derivatives.
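
As a minimal, self-contained illustration of the diffusion building block (the 1D heat equation rather than the coupled MFG system), the theta-scheme below combines an implicit part with weight $\theta$ and an explicit part with weight $1-\theta$; the grid and step sizes are chosen so that the CFL-type ratio $\nu\,\Delta t/\Delta x^2$ also keeps the fully explicit case $\theta = 0$ stable:

```python
import numpy as np

def theta_solver(theta, J=49, nu=1.0, dt=4e-5, T=0.1):
    """Solve u_t = nu*u_xx on (0,1), Dirichlet BCs, with the theta-scheme."""
    dx = 1.0 / (J + 1)
    lam = nu * dt / dx**2       # CFL-type ratio; explicit part needs lam <= 1/2
    x = np.linspace(dx, 1 - dx, J)
    D = (np.diag(-2.0 * np.ones(J)) + np.diag(np.ones(J - 1), 1)
         + np.diag(np.ones(J - 1), -1))          # second-difference matrix
    A = np.eye(J) - theta * lam * D              # implicit part (weight theta)
    B = np.eye(J) + (1 - theta) * lam * D        # explicit part (weight 1-theta)
    u = np.sin(np.pi * x)                        # smooth initial condition
    for _ in range(round(T / dt)):
        u = np.linalg.solve(A, B @ u)
    exact = np.exp(-nu * np.pi**2 * T) * np.sin(np.pi * x)
    return np.abs(u - exact).max()

for theta in (0.0, 0.5, 1.0):   # explicit, Crank-Nicolson, fully implicit
    print(f"theta = {theta}: max error {theta_solver(theta):.2e}")
```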

In the open online dial-a-ride problem, a single server has to deliver transportation requests appearing over time in some metric space, with the objective of minimizing the completion time. We improve on the best known upper bounds on the competitive ratio on general metric spaces and on the half-line, for both the preemptive and non-preemptive version of the problem. We achieve this by revisiting the algorithm $\textsc{Lazy}$ recently suggested in [WAOA, 2022] and giving an improved and tight analysis. More precisely, we show that it has competitive ratio $2.457$ on general metric spaces and $2.366$ on the half-line. This is the first upper bound that beats the known lower bounds of $2.5$ for schedule-based algorithms as well as for the natural $\textsc{Replan}$ algorithm.

Non-stationary multi-armed bandit (NS-MAB) problems have recently received significant attention. NS-MAB problems are typically modelled in two scenarios: abruptly changing, where reward distributions remain constant for certain periods and change at unknown time steps, and smoothly changing, where reward distributions evolve smoothly based on unknown dynamics. In this paper, we propose Discounted Thompson Sampling (DS-TS) with Gaussian priors to address both non-stationary settings. Our algorithm passively adapts to changes by incorporating a discount factor into Thompson Sampling. DS-TS has been validated experimentally in prior work, but an analysis of its regret upper bound has been lacking. Under mild assumptions, we show that DS-TS with Gaussian priors can achieve nearly optimal regret bounds of order $\tilde{O}(\sqrt{TB_T})$ in the abruptly changing setting and $\tilde{O}(T^{\beta})$ in the smoothly changing setting, where $T$ is the number of time steps, $B_T$ is the number of breakpoints, $\beta$ is a parameter of the smoothly changing environment, and $\tilde{O}$ hides parameters independent of $T$ as well as logarithmic terms. Furthermore, empirical comparisons between DS-TS and other non-stationary bandit algorithms demonstrate its competitive performance. Specifically, when prior knowledge of the maximum expected reward is available, DS-TS has the potential to outperform state-of-the-art algorithms.
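
A hedged sketch of the discounting mechanism is given below for a two-armed Gaussian bandit with one breakpoint; the exact posterior-update constants are illustrative assumptions rather than the paper's specification:

```python
import numpy as np

def ds_ts(means_fn, K, T, gamma=0.99, sigma=1.0, rng=None):
    """Discounted Thompson Sampling sketch with Gaussian posteriors."""
    rng = rng or np.random.default_rng(0)
    n = np.zeros(K)      # discounted pull counts
    s = np.zeros(K)      # discounted reward sums
    total = 0.0
    for t in range(T):
        # Gaussian posterior sample per arm; stale arms get wide posteriors.
        mu = s / np.maximum(n, 1e-12)
        std = sigma / np.sqrt(np.maximum(n, 1e-12))
        arm = int(np.argmax(np.where(n > 0, rng.normal(mu, std), np.inf)))
        r = rng.normal(means_fn(t)[arm], sigma)
        n *= gamma; s *= gamma           # discount all arms each round
        n[arm] += 1.0; s[arm] += r       # full-weight update for the pulled arm
        total += r
    return total

# Abruptly changing environment: the best arm switches at t = 5000.
means_fn = lambda t: [1.0, 0.0] if t < 5000 else [0.0, 1.0]
print("total reward:", ds_ts(means_fn, K=2, T=10000))
```

With $\gamma < 1$, the effective count of an arm that stops being pulled decays, so its posterior widens after the breakpoint and the sampler re-explores it.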

Hamiltonian simulation is one of the most important problems in the field of quantum computing. There have been extensive efforts to design algorithms for faster simulation, and the evolution time $T$ turns out to largely determine algorithm runtime. While some specific types of Hamiltonians can be fast-forwarded, i.e., simulated within time $o(T)$, for large enough classes of Hamiltonians (e.g., all local/sparse Hamiltonians), existing simulation algorithms require running time at least linear in the evolution time $T$. On the other hand, while there exist $\Omega(T)$ circuit-size lower bounds for some large classes of Hamiltonians, these lower bounds do not rule out the possibility of simulating Hamiltonians with large but "low-depth" circuits by running things in parallel. It is therefore intriguing to ask whether fast Hamiltonian simulation can be achieved through parallelism. In this work, we give a negative answer to this open problem, showing that sparse Hamiltonians and (geometrically) local Hamiltonians cannot be parallelly fast-forwarded. In the oracle model, we prove that there are time-independent sparse Hamiltonians that cannot be simulated via an oracle circuit of depth $o(T)$. In the plain model, relying on the random oracle heuristic, we show that there exist time-independent local Hamiltonians and time-dependent geometrically local Hamiltonians that cannot be simulated via an oracle circuit of depth $o(T/n^c)$, where the Hamiltonians act on $n$ qubits and $c$ is a constant.
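
As a small classical analogy to fast-forwarding (not the paper's lower-bound construction), the snippet below contrasts a diagonal Hamiltonian, for which $e^{-iHT}$ can be formed in one shot for any $T$, with the generic product-formula route of $T/\Delta t$ sequential steps; for commuting terms both agree exactly, and the point is the step count, not the error:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6                                    # number of qubits
energies = rng.standard_normal(2**n)     # eigenvalues of a diagonal Hamiltonian
T = 50.0

# Fast-forwarded: a single diagonal phase, with cost independent of T.
U_fast = np.exp(-1j * energies * T)

# Generic route: r sequential short steps of e^{-iH dt}; depth grows with T.
r = 1000
U_seq = np.exp(-1j * energies * (T / r))**r

print("max deviation:", np.abs(U_fast - U_seq).max())   # agree up to rounding
```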

We present an upper bound for the Single Channel Speech Separation task, which is based on an assumption regarding the nature of short segments of speech. Using the bound, we are able to show that while recent methods have made significant progress for a few speakers, there is room for improvement for five and ten speakers. We then introduce a deep neural network, SepIt, that iteratively improves the estimates of the different speakers. At test time, SepIt uses a varying number of iterations per test sample, based on a mutual information criterion that arises from our analysis. In an extensive set of experiments, SepIt outperforms the state-of-the-art neural networks for 2, 3, 5, and 10 speakers.

We present an effective framework for improving the breakdown point of robust regression algorithms. Robust regression has attracted widespread attention due to the ubiquity of outliers, which significantly affect estimation results. However, many existing robust least-squares regression algorithms suffer from a low breakdown point, as they become stuck around local optima when facing severe attacks. Expanding on previous work, we propose a novel framework that enhances the breakdown point of these algorithms by inserting a prior distribution at each iteration step and adjusting the prior distribution according to historical information. We apply this framework to a specific algorithm and derive the consistent robust regression algorithm with iterative local search (CORALS). The relationship between CORALS and momentum gradient descent is described, and a detailed proof of the theoretical convergence of CORALS is presented. Finally, we demonstrate that the breakdown point of CORALS is indeed higher than that of the algorithm from which it is derived. We apply the proposed framework to other robust algorithms and show that the improved algorithms achieve better results than the originals, indicating the effectiveness of the proposed framework.
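
The sketch below shows the kind of local-search iteration such frameworks build on, namely a trimmed least-squares loop that alternates between fitting on an estimated inlier set and re-selecting the points with the smallest residuals; it does not implement CORALS's prior-distribution mechanism or its momentum interpretation:

```python
import numpy as np

def trimmed_ls(X, y, n_inliers, iters=50, rng=None):
    """Alternate least squares on an inlier set with residual-based reselection."""
    rng = rng or np.random.default_rng(0)
    inliers = rng.choice(len(y), n_inliers, replace=False)   # random start
    for _ in range(iters):
        beta, *_ = np.linalg.lstsq(X[inliers], y[inliers], rcond=None)
        resid = np.abs(y - X @ beta)
        new_inliers = np.argsort(resid)[:n_inliers]   # keep smallest residuals
        if set(new_inliers) == set(inliers):
            break                                     # local optimum reached
        inliers = new_inliers
    return beta

# Toy data: 30% of responses are grossly corrupted.
rng = np.random.default_rng(1)
n, d = 500, 5
X = rng.standard_normal((n, d))
beta_true = np.arange(1, d + 1, dtype=float)
y = X @ beta_true + 0.1 * rng.standard_normal(n)
bad = rng.choice(n, int(0.3 * n), replace=False)
y[bad] += 50 * rng.standard_normal(len(bad))
print(trimmed_ls(X, y, n_inliers=int(0.7 * n), rng=rng))  # ~ beta_true
```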

Consider $k$ independent random samples from $p$-dimensional multivariate normal distributions. We are interested in the limiting distribution of the log-likelihood ratio test statistic for testing the equality of the $k$ covariance matrices. It is well known from classical multivariate statistics that the limit is a chi-square distribution when $k$ and $p$ are fixed integers. Jiang and Yang~\cite{JY13} and Jiang and Qi~\cite{JQ15} obtained central limit theorems for the log-likelihood ratio test statistics when the dimension $p$ goes to infinity with the sample sizes. In this paper, we derive the central limit theorem when either $p$ or $k$ goes to infinity. We also propose adjusted test statistics which are well approximated by chi-square distributions regardless of the values of $p$ and $k$. Furthermore, we present numerical simulation results to evaluate the performance of our adjusted test statistics and of the log-likelihood ratio statistics based on the classical chi-square approximation and the normal approximation.
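
A Monte Carlo sketch of the classical approximation is given below (assuming SciPy is available): it simulates the unadjusted statistic $-2\log\Lambda$ under the null and compares its empirical rejection rate with the nominal level of the chi-square approximation with $(k-1)p(p+1)/2$ degrees of freedom; the sample sizes are illustrative:

```python
import numpy as np
from scipy import stats

def neg2_log_lambda(samples):
    """-2 log Lambda for H0: equal covariances; samples is a list of n_i x p arrays."""
    p = samples[0].shape[1]
    pooled, n_total, covs = np.zeros((p, p)), 0, []
    for X in samples:
        n_i = X.shape[0]
        S_i = np.cov(X, rowvar=False, bias=True)   # MLE covariance A_i / n_i
        covs.append((n_i, S_i))
        pooled += n_i * S_i
        n_total += n_i
    pooled /= n_total
    stat = n_total * np.linalg.slogdet(pooled)[1]
    return stat - sum(n_i * np.linalg.slogdet(S_i)[1] for n_i, S_i in covs)

rng = np.random.default_rng(0)
k, p, n_i, reps = 3, 5, 100, 2000
df = (k - 1) * p * (p + 1) // 2                    # classical chi-square df
sims = [neg2_log_lambda([rng.standard_normal((n_i, p)) for _ in range(k)])
        for _ in range(reps)]
size = np.mean(np.array(sims) > stats.chi2.ppf(0.95, df))
print(f"empirical size at nominal 5%: {size:.3f}")  # drifts from 0.05 as p, k grow
```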

We give query complexity lower bounds for convex optimization and the related feasibility problem. We show that quadratic memory is necessary to achieve the optimal oracle complexity for first-order convex optimization. In particular, this shows that center-of-mass cutting-plane algorithms in dimension $d$ which use $\tilde O(d^2)$ memory and $\tilde O(d)$ queries are Pareto-optimal for both convex optimization and the feasibility problem, up to logarithmic factors. Precisely, we prove that to minimize $1$-Lipschitz convex functions over the unit ball to $1/d^4$ accuracy, any deterministic first-order algorithm using at most $d^{2-\delta}$ bits of memory must make $\tilde\Omega(d^{1+\delta/3})$ queries, for any $\delta\in[0,1]$. For the feasibility problem, in which an algorithm only has access to a separation oracle, we show a stronger trade-off: with at most $d^{2-\delta}$ memory, the number of queries required is $\tilde\Omega(d^{1+\delta})$. This resolves a COLT 2019 open problem of Woodworth and Srebro.
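
To make the separation-oracle model concrete, the toy below runs the one-dimensional analogue of a center-of-mass cutting-plane method for feasibility, where the center of mass of an interval is just its midpoint; the feasible set is an illustrative choice, and the paper's memory/query trade-offs concern high dimension $d$:

```python
def separation_oracle(x, feasible_set=(0.6180, 0.6181)):
    """Return None if x is feasible, else a direction that cuts x off."""
    lo, hi = feasible_set
    if lo <= x <= hi:
        return None
    return -1.0 if x < lo else 1.0   # feasible set lies opposite this sign

def cutting_plane_1d(lo=-1.0, hi=1.0, max_queries=100):
    for q in range(max_queries):
        x = (lo + hi) / 2            # "center of mass" of the current interval
        g = separation_oracle(x)
        if g is None:
            return x, q + 1          # feasible point found
        if g > 0:                    # feasible set is to the left of x
            hi = x
        else:                        # feasible set is to the right of x
            lo = x
    return None, max_queries

x, queries = cutting_plane_1d()
print(f"feasible point {x} after {queries} oracle queries")
```

Each query halves the remaining interval, so the query count scales with $\log(1/\varepsilon)$ while the memory is a constant two endpoints; the tension between these two resources is what the paper quantifies in dimension $d$.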
