
This paper investigates the sum-capacity of two-user optical intensity multiple access channels under per-user peak- and/or average-intensity constraints. Leveraging decomposition results for certain maxentropic distributions, we derive several lower bounds on the sum-capacity. In the high signal-to-noise ratio (SNR) regime, some bounds asymptotically match or approach the sum-capacity, thus closing or reducing the existing gaps to the high-SNR asymptotic sum-capacity. At moderate SNR, some bounds also remain fairly close to the sum-capacity.
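As background for the high-SNR statements, a standard single-user result (for the peak-constrained optical intensity channel, not this paper's two-user bounds) illustrates the type of asymptotics involved: for $Y = x + Z$ with $Z \sim \mathcal{N}(0,\sigma^2)$ and inputs $0 \le x \le A$,

$$ C = \log\frac{A}{\sqrt{2\pi e}\,\sigma} + o(1), \qquad A/\sigma \to \infty, $$

since the maxentropic input under a peak constraint is uniform on $[0,A]$ with differential entropy $\log A$, while the noise contributes $\tfrac{1}{2}\log(2\pi e\sigma^2)$.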

Related content

This paper presents a boundary element method (BEM) for computing the energy transmittance of a singly periodic grating in 2D over a wide frequency band, a quantity of engineering interest in various fields, with possible applications to acoustic metamaterial design. The proposed method is based on Padé approximants of the frequency response. The high-order frequency derivatives of the sound pressure needed to evaluate the approximants are computed by a novel fast BEM accelerated by the fast-multipole and hierarchical-matrix methods combined with automatic differentiation. The target frequency band is divided adaptively, and the Padé approximation is applied in each subband so as to accurately estimate the transmittance over a wide frequency range. Through numerical examples, we confirm that the proposed method can efficiently and accurately evaluate the transmittance even when anomalies and stopbands exist in the target band.
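To make the Padé idea concrete, here is a minimal self-contained sketch (not the paper's BEM code) that builds a diagonal [L/M] Padé approximant from Taylor coefficients and compares it with the truncated Taylor series; all function names are illustrative.

```python
def solve_linear(A, b):
    """Tiny Gaussian elimination with partial pivoting for small dense systems."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def pade(c, L, M):
    """Numerator a[0..L] and denominator b[0..M] (b[0] = 1) of the [L/M]
    Pade approximant of the series sum_i c[i] x^i."""
    # Denominator from c_{L+j} + sum_{k=1}^{M} b_k c_{L+j-k} = 0, j = 1..M
    A = [[c[L + j - k] for k in range(1, M + 1)] for j in range(1, M + 1)]
    rhs = [-c[L + j] for j in range(1, M + 1)]
    b = [1.0] + solve_linear(A, rhs)
    a = [sum(b[k] * c[i - k] for k in range(min(i, M) + 1)) for i in range(L + 1)]
    return a, b

def horner(p, x):
    v = 0.0
    for coef in reversed(p):
        v = v * x + coef
    return v

# Taylor coefficients of 1/(1 - x) are 1, 1, 1, ...; the [1/1] Pade
# approximant reproduces 1/(1 - x) exactly, unlike the truncated series.
a, b = pade([1.0, 1.0, 1.0], 1, 1)
x = 0.5
pade_val = horner(a, x) / horner(b, x)   # exact: 2.0
taylor_val = 1 + x + x**2                # truncated series: 1.75
```

The rational form is what lets a single approximant track sharp resonances (anomalies) that a polynomial of the same data budget cannot.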

A bilateral (i.e., upper and lower) bound on the mean-square error (MSE) under a general model mismatch is developed. The bound, which is derived from the variational representation of the chi-square divergence, is applicable in both Bayesian and non-Bayesian frameworks, and to both biased and unbiased estimators. Unlike classical MSE bounds that depend only on the model, our bound is also estimator-dependent, and is thus applicable as a tool for characterizing the MSE of a specific estimator. The proposed bounding technique has a variety of applications, one of which is proving the consistency of estimators for a class of models. Furthermore, it provides insight into why certain estimators work well under general model mismatch conditions.
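For context, a standard variational representation of the chi-square divergence (a textbook identity, not necessarily the exact form used by the authors) is

$$ \chi^2(P\|Q) = \sup_{f} \frac{\big(\mathbb{E}_P[f(X)] - \mathbb{E}_Q[f(X)]\big)^2}{\mathrm{Var}_Q\big(f(X)\big)}, $$

which immediately yields the bilateral bound $\big|\mathbb{E}_P[f] - \mathbb{E}_Q[f]\big| \le \sqrt{\chi^2(P\|Q)\,\mathrm{Var}_Q(f)}$ for any test function $f$. Taking $f$ to be the squared error of a given estimator bounds its MSE under the mismatched model $P$ from above and below in terms of its MSE under the nominal model $Q$, which is why the resulting bound is estimator-dependent.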

Ultra-reliable low-latency communication (uRLLC) is adopted in fifth-generation (5G) mobile networks to better support mission-critical applications that demand a high level of reliability and low latency. With the aid of well-established multiple-input multiple-output (MIMO) information theory, uRLLC in future 6G networks is expected to provide enhanced capability towards extreme connectivity. Since the latency constraint can be represented equivalently by the blocklength, channel coding theory at finite blocklength plays an important role in the theoretical analysis of uRLLC. Building on the asymptotic results of Polyanskiy and Yang, we first derive exact closed-form expressions for the expectation and variance of the channel dispersion. Then, a bound on the average maximal achievable rate is given for massive MIMO systems in ideal independent and identically distributed fading channels. This is the first study to reveal the underlying connections among the fundamental parameters of MIMO transmission in a concise and complete closed-form formula. Most importantly, the inversely proportional law observed therein implies that latency can be further reduced at the expense of spatial degrees of freedom.
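For reference, the finite-blocklength normal approximation underlying such analyses (due to Polyanskiy, Poor, and Verdú) reads

$$ R^*(n,\epsilon) = C - \sqrt{\frac{V}{n}}\,Q^{-1}(\epsilon) + O\!\left(\frac{\log n}{n}\right), $$

where $n$ is the blocklength, $\epsilon$ the block error probability, $Q^{-1}$ the inverse Gaussian tail function, and $V$ the channel dispersion; for the real AWGN channel, $V = \frac{\mathrm{SNR}(\mathrm{SNR}+2)}{2(\mathrm{SNR}+1)^2}\log^2 e$. The $\sqrt{V/n}$ penalty term makes precise how a tighter latency budget (smaller $n$) costs achievable rate.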

Distributed tensor decomposition (DTD) is a fundamental data-analytics technique that extracts latent important properties from high-dimensional multi-attribute datasets distributed over edge devices. Conventionally, its wireless implementation follows a one-shot approach that first computes local results at the devices using local data and then aggregates them at a server, using communication-efficient techniques such as over-the-air computation (AirComp) for global computation. Such an implementation is confronted with the issues of limited storage-and-computation capacity and link interruption, which motivates us to propose a framework of on-the-fly communication-and-computing (FlyCom$^2$) in this work. The proposed framework enables low-complexity streaming computation by leveraging a random sketching technique, and achieves progressive global aggregation through the integration of progressive uploading and multiple-input multiple-output (MIMO) AirComp. To develop FlyCom$^2$, an on-the-fly subspace estimator is designed that takes the real-time sketches accumulated at the server and generates online estimates of the decomposition. Its performance is evaluated by deriving both deterministic and probabilistic error bounds using perturbation theory and the concentration of measure. Both results reveal that the decomposition error is inversely proportional to the population of sketching observations received by the server. To further rein in the noise effect on the error, we propose a threshold-based scheme that selects a subset of sufficiently reliable received sketches for DTD at the server. Experimental results validate the performance gain of the proposed selection algorithm and show that, compared to its one-shot counterparts, FlyCom$^2$ achieves comparable (and, in the case of large eigen-gaps, even better) decomposition accuracy while dramatically reducing the devices' complexity costs.
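The core property random sketching relies on can be illustrated with a minimal pure-Python example (a generic randomized range finder, not the FlyCom$^2$ estimator): a Gaussian sketch $Y = A\Omega$ captures the column space of a low-rank $A$, so projecting onto an orthonormal basis of $Y$ recovers $A$ almost exactly.

```python
import random
random.seed(1)

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def orthonormalize(Y):
    """Modified Gram-Schmidt basis for the column space of Y."""
    basis = []
    for col in transpose(Y):
        v = col[:]
        for q in basis:
            dot = sum(a * b for a, b in zip(q, v))
            v = [vi - dot * qi for vi, qi in zip(v, q)]
        norm = sum(vi * vi for vi in v) ** 0.5
        if norm > 1e-10:                 # drop linearly dependent directions
            basis.append([vi / norm for vi in v])
    return transpose(basis)              # columns are orthonormal

n, r, s = 6, 2, 4                        # ambient dim, true rank, sketch size
U = [[random.gauss(0, 1) for _ in range(r)] for _ in range(n)]
V = [[random.gauss(0, 1) for _ in range(n)] for _ in range(r)]
A = matmul(U, V)                         # random rank-2 matrix
Omega = [[random.gauss(0, 1) for _ in range(s)] for _ in range(n)]
Y = matmul(A, Omega)                     # sketch: s random mixtures of columns
Q = orthonormalize(Y)
Ahat = matmul(Q, matmul(transpose(Q), A))  # project A onto range(Y)
err = sum((a - b) ** 2 for ra, rb in zip(A, Ahat) for a, b in zip(ra, rb)) ** 0.5
```

Because the sketch is linear in the data, it can be accumulated progressively from streamed blocks, which is the property a progressive-aggregation design exploits.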

The accuracy of Earth system models is compromised by unknown and/or unresolved dynamics, making the quantification of systematic model errors essential. While model parameter estimation that allows parameters to vary spatio-temporally shows promise in quantifying and mitigating systematic model errors, estimating spatio-temporally distributed model parameters has been practically challenging. Here we present an efficient and practical method to estimate time-varying parameters in high-dimensional spaces. In the proposed method, Hybrid Offline and Online Parameter Estimation with ensemble Kalman filtering (HOOPE-EnKF), model parameters estimated by the EnKF are constrained by the results of offline batch optimization, in which the posterior distribution of the model parameters is obtained by comparing simulated and observed climatological variables. HOOPE-EnKF outperforms the original EnKF in a synthetic experiment using a two-scale Lorenz96 model. One advantage of HOOPE-EnKF over traditional EnKFs is that its performance is not greatly affected by the inflation factors for model parameters, thus eliminating the need for extensive tuning of inflation factors. We thoroughly discuss the potential of HOOPE-EnKF as a practical method for improving parameterizations of process-based models and prediction in real-world applications such as numerical weather prediction.
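As orientation for readers unfamiliar with ensemble Kalman filtering, here is a minimal sketch of a stochastic EnKF analysis step for a single scalar quantity (a toy illustration with an identity observation operator, not HOOPE-EnKF itself): the ensemble is pulled toward the observation with gain $K = P/(P+R)$.

```python
import random, statistics
random.seed(0)

def enkf_update(ensemble, y_obs, obs_var):
    """One perturbed-observation EnKF analysis step for a scalar state."""
    P = statistics.variance(ensemble)     # sample background variance
    K = P / (P + obs_var)                 # Kalman gain, identity obs operator
    # Perturbing the observation per member keeps the correct analysis spread.
    return [x + K * ((y_obs + random.gauss(0, obs_var ** 0.5)) - x)
            for x in ensemble]

prior = [random.gauss(0.0, 1.0) for _ in range(200)]  # prior ensemble: N(0, 1)
post = enkf_update(prior, 2.0, 1.0)                   # observe y = 2 with R = 1
# With P ~ R ~ 1, the analysis mean sits roughly halfway between prior and obs,
# and the analysis variance shrinks toward P*R/(P+R).
```

In a parameter-estimation setting, the same update is applied to augmented state-parameter vectors; the hybrid scheme described above additionally constrains the parameter part with an offline posterior.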

A Low-rank Spectral Optimization Problem (LSOP) minimizes a linear objective subject to multiple two-sided linear matrix inequalities intersected with a low-rank and spectrally constrained domain set. Although solving LSOP is, in general, NP-hard, its partial convexification (i.e., replacing the domain set by its convex hull), termed "LSOP-R," is often tractable and yields a high-quality solution. This motivates us to study the strength of LSOP-R. Specifically, we derive rank bounds for any extreme point of the feasible set of LSOP-R and prove their tightness for domain sets with different matrix spaces. The proposed rank bounds recover two well-known results in the literature from a fresh angle and also allow us to derive sufficient conditions under which the relaxation LSOP-R is equivalent to the original LSOP. To effectively solve LSOP-R, we develop a column generation algorithm with a vector-based convex pricing oracle, coupled with a rank-reduction algorithm, which ensures the output solution satisfies the theoretical rank bound. Finally, we numerically verify the strength of LSOP-R and the efficacy of the proposed algorithms.
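As a point of comparison, a classical rank bound of this flavor (possibly among the well-known results such bounds recover) is the Barvinok–Pataki bound: any extreme point $X$ of the spectrahedron $\{X \succeq 0 : \langle A_i, X\rangle = b_i,\ i=1,\dots,m\}$ has rank $r$ satisfying

$$ \frac{r(r+1)}{2} \le m, $$

i.e., $r = O(\sqrt{m})$ regardless of the ambient matrix dimension.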

Communication compression is an essential strategy for alleviating communication overhead by reducing the volume of information exchanged between computing nodes in large-scale distributed stochastic optimization. Although numerous algorithms with convergence guarantees have been developed, the optimal performance limit under communication compression remains unclear. In this paper, we investigate the performance limit of distributed stochastic optimization algorithms employing communication compression. We focus on two main types of compressors, unbiased and contractive, and address the best possible convergence rates obtainable with them. We establish lower bounds on the convergence rate of distributed stochastic optimization in six settings, combining strongly convex, generally convex, or non-convex objectives with unbiased or contractive compressors. To bridge the gap between these lower bounds and existing algorithms' rates, we propose NEOLITHIC, a nearly optimal algorithm with compression that achieves the established lower bounds up to logarithmic factors under mild conditions. Extensive experimental results support our theoretical findings. This work provides insight into the theoretical limitations of existing compressors and motivates further research into fundamentally new compressor properties.
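A standard example of an unbiased compressor (generic background, not NEOLITHIC itself) is rand-$k$ sparsification: keep $k$ of $d$ coordinates at random and rescale by $d/k$ so that $\mathbb{E}[C(x)] = x$. A minimal sketch:

```python
import random
random.seed(0)

def rand_k(x, k):
    """Unbiased rand-k compressor: keep k random coordinates, scale by d/k."""
    d = len(x)
    kept = set(random.sample(range(d), k))
    return [(d / k) * xi if i in kept else 0.0 for i, xi in enumerate(x)]

x = [1.0, -2.0, 3.0, 0.5]
trials = 20000
avg = [0.0] * len(x)
for _ in range(trials):
    for i, ci in enumerate(rand_k(x, 2)):
        avg[i] += ci / trials
# avg approaches x as trials grow, confirming unbiasedness empirically;
# the price is variance proportional to (d/k - 1) * ||x||^2.
```

A contractive compressor, by contrast (e.g., top-$k$ without rescaling), is biased but satisfies $\|C(x) - x\|^2 \le (1-\delta)\|x\|^2$; the two classes lead to the different lower bounds studied above.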

In conventional backscatter communication (BackCom) systems, time division multiple access (TDMA) and frequency division multiple access (FDMA) are generally adopted for multiuser backscattering due to their simplicity of implementation. However, as the number of backscatter devices (BDs) proliferates, traditional centralized control incurs a high overhead, and inter-user coordination is unaffordable for the passive BDs; these issues have received little attention in existing works and remain unresolved. To this end, in this paper, we propose a slotted-ALOHA-based random access scheme for BackCom systems, in which each BD is randomly chosen and allowed to coexist with one active device for hybrid multiple access. To evaluate the performance, a resource allocation problem maximizing the minimum transmission rate is formulated, in which transmit antenna selection, receive beamforming design, reflection coefficient adjustment, power control, and access probability determination are jointly considered. To deal with this intractable problem, we first transform the max-min objective function into an equivalent linear one, and then decompose the resulting problem into three sub-problems. Next, a block coordinate descent (BCD)-based greedy algorithm with a penalty function, successive convex approximation, and linear programming are designed to obtain tractable sub-optimal solutions. Simulation results demonstrate that the proposed algorithm outperforms benchmark algorithms in terms of transmission rate and fairness.
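The behavior of the underlying random access primitive can be illustrated with a toy simulation of classical slotted ALOHA (standard background, not the paper's BackCom model): with $N$ devices each transmitting independently with probability $p$, a slot succeeds iff exactly one device transmits, giving throughput $Np(1-p)^{N-1}$, maximized near $p = 1/N$ at roughly $1/e$.

```python
import random
random.seed(0)

def throughput(N, p, slots=20000):
    """Fraction of slots with exactly one transmitter."""
    ok = 0
    for _ in range(slots):
        transmitters = sum(random.random() < p for _ in range(N))
        ok += (transmitters == 1)
    return ok / slots

N = 50
sim = throughput(N, 1 / N)                          # simulate at the optimum p = 1/N
analytic = N * (1 / N) * (1 - 1 / N) ** (N - 1)     # ~ 1/e for large N
```

The $1/e$ ceiling is what motivates joint optimization of access probability with the physical-layer variables rather than using a fixed transmit probability.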

This paper introduces a novel Bayesian approach to detect changes in the variance of a Gaussian sequence model, focusing on quantifying the uncertainty in the change point locations and providing a scalable algorithm for inference. Such a measure of uncertainty is necessary when change point methods are deployed in sensitive applications, for example, when one is interested in determining whether an organ is viable for transplant. The key to our proposal is framing the problem as a product of multiple single changes in the scale parameter. We fit the model through an iterative procedure similar to what is done for additive models. The novelty is that each iteration returns a probability distribution on time instances, which captures the uncertainty in the change point location. Leveraging a recent result in the literature, we show that our proposal is a variational approximation of the exact model posterior distribution. We study the algorithm's convergence and the change point localization rate. Extensive experiments in simulation studies illustrate the performance of our method and the possibility of generalizing it to more complex data-generating mechanisms. We apply the new model to an experiment involving a novel technique to assess the viability of a liver, and to oceanographic data.
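The single-change building block admits exact posterior computation, which the following simplified toy illustrates (known variances and a uniform prior over the change location; the paper's variational model handles the general multi-change case):

```python
import math, random
random.seed(0)

def change_posterior(y, s1, s2):
    """Posterior over tau: y[0..tau] ~ N(0, s1^2), y[tau+1..] ~ N(0, s2^2)."""
    n = len(y)
    def loglik(x, s):                     # Gaussian log-density up to a constant
        return -0.5 * (x / s) ** 2 - math.log(s)
    logp = []
    for tau in range(n - 1):
        ll = sum(loglik(x, s1) for x in y[: tau + 1])
        ll += sum(loglik(x, s2) for x in y[tau + 1 :])
        logp.append(ll)
    mx = max(logp)                        # log-sum-exp normalization
    w = [math.exp(l - mx) for l in logp]
    z = sum(w)
    return [wi / z for wi in w]

# Variance jumps from 1 to 16 after index 59.
y = ([random.gauss(0, 1) for _ in range(60)]
     + [random.gauss(0, 4) for _ in range(60)])
post = change_posterior(y, 1.0, 4.0)
tau_hat = max(range(len(post)), key=lambda t: post[t])   # MAP change location
```

The full distribution `post`, not just `tau_hat`, is the object of interest: its spread directly quantifies localization uncertainty.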

The problem of reconstructing a sequence from the set of its length-$k$ substrings has received considerable attention due to its various applications in genomics. We study an uncoded version of this problem where multiple random sources are to be simultaneously reconstructed from the union of their $k$-mer sets. We consider an asymptotic regime where $m = n^\alpha$ i.i.d. source sequences of length $n$ are to be reconstructed from the set of their substrings of length $k=\beta \log n$, and seek to characterize the $(\alpha,\beta)$ pairs for which reconstruction is information-theoretically feasible. We show that, as $n \to \infty$, the source sequences can be reconstructed if $\beta > \max(2\alpha+1,\alpha+2)$ and cannot be reconstructed if $\beta < \max( 2\alpha+1, \alpha+ \tfrac32)$, characterizing the feasibility region almost completely. Interestingly, our result shows that there are feasible $(\alpha,\beta)$ pairs where repeats across the source strings abound, and non-trivial reconstruction algorithms are needed to achieve the fundamental limit.
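In the repeat-free regime, reconstruction from a $k$-mer set reduces to chaining unique overlaps, as the following toy (an illustration of the easy case, not the paper's achievability scheme) shows for a single string whose $(k-1)$-mers are all distinct:

```python
def reconstruct(kmers, k):
    """Recover a string from its k-mer set, assuming all (k-1)-mers are
    distinct (so every overlap extension is unique)."""
    succ = {km[:-1]: km for km in kmers}      # (k-1)-prefix -> k-mer
    suffixes = {km[1:] for km in kmers}
    # The starting (k-1)-mer is a prefix that is no k-mer's suffix.
    start = next(km[:-1] for km in kmers if km[:-1] not in suffixes)
    s = start
    while s[-(k - 1):] in succ:
        s += succ[s[-(k - 1):]][-1]           # append the one-letter extension
    return s

word = "reconstruct"
k = 4
kmers = {word[i : i + k] for i in range(len(word) - k + 1)}
recovered = reconstruct(kmers, k)
```

When $(k-1)$-mers repeat within or across source strings, the successor map is no longer a function, which is precisely where the non-trivial algorithms needed near the feasibility boundary come in.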
