
This paper investigates the Age of Incorrect Information (AoII) in a communication system whose channel is subject to random delay. We consider a slotted-time system in which a transmitter observes a dynamic source and decides when to send updates to a remote receiver over the communication channel. The transmitter's decisions are governed by a threshold policy, under which a transmission is initiated only when the AoII exceeds the threshold. We analyze and calculate the performance of the threshold policy in terms of the achieved AoII. Using a Markov chain to characterize the system evolution, the expected AoII is obtained precisely by solving a system of linear equations whose size is finite and depends on the threshold. We also give closed-form expressions for the expected AoII under two particular thresholds. Finally, numerical results show that there are better strategies than having the transmitter constantly transmit new updates.
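For intuition, the following is a minimal Monte-Carlo sketch of such a threshold policy, under illustrative assumptions that are not taken from the paper: a binary symmetric source that flips with probability p each slot, a channel whose random delay is geometric with per-slot delivery probability q, and at most one update in flight at a time.

```python
import random

def simulate_aoii(tau, p=0.2, q=0.5, slots=200_000, seed=0):
    """Monte-Carlo estimate of the expected AoII under a threshold policy.

    Illustrative assumptions (not from the paper): a binary symmetric
    source that flips with probability p per slot, and a channel that
    delivers an in-flight update each slot with probability q.
    """
    rng = random.Random(seed)
    source, estimate = 0, 0       # source state and receiver's estimate
    aoii, total = 0, 0            # current AoII and running sum
    in_flight = None              # value of the update being transmitted
    for _ in range(slots):
        # transmit only when the channel is idle and AoII reaches tau
        if in_flight is None and aoii >= tau:
            in_flight = source
        # source evolves
        if rng.random() < p:
            source ^= 1
        # random channel delay: delivery succeeds with probability q
        if in_flight is not None and rng.random() < q:
            estimate, in_flight = in_flight, None
        # AoII grows while the estimate is wrong, resets otherwise
        aoii = aoii + 1 if source != estimate else 0
        total += aoii
    return total / slots

for tau in (0, 1, 2, 5):
    print(f"threshold {tau}: expected AoII ~ {simulate_aoii(tau):.3f}")
```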

Related content

The journal 《計算機信息》 publishes high-quality papers that broaden the scope of operations research and computing, seeking original research papers on theory, methodology, experiments, systems, and applications, as well as novel survey and tutorial papers and papers describing new and useful software tools.
February 28, 2023

Reconfigurable intelligent surfaces (RISs) are recognized as having great potential to strengthen wireless security, yet the performance gain largely depends on where the RISs are deployed in the network topology. In this paper, we consider anti-eavesdropping communication established through a RIS at a fixed location, together with an aerial platform carrying another RIS and a friendly jammer to further improve secrecy. The aerial RIS helps enhance the legitimate signal, while the aerial cooperative jamming is strengthened through the fixed RIS. The security gain from aerial reflection and jamming is further improved by optimizing the deployment of the aerial platform. We explicitly account for imperfect channel state information and address the worst-case secrecy for robust performance. The formulated robust secrecy rate maximization problem is decomposed into two layers: the inner layer solves for reflection and jamming via robust optimization, and the outer layer tackles the aerial deployment through deep reinforcement learning. Simulation results show the resulting deployments under different network topologies and demonstrate the superiority of our proposal in terms of worst-case security provisioning compared with the baselines.
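The two-layer structure can be illustrated with a toy stand-in; all models, names, and numbers below are placeholders, not the paper's formulation. A closed-form surrogate plays the robust inner layer, and a simple epsilon-greedy search over candidate aerial positions stands in for the outer-layer deep reinforcement learning agent.

```python
import math, random

POSITIONS = [(x, y) for x in range(5) for y in range(5)]
BOB, EVE = (0, 0), (4, 4)        # legitimate receiver and eavesdropper

def inner_worst_case_secrecy(pos, eps=0.1):
    """Stand-in for the robust inner layer: path-loss-based legitimate
    and eavesdropper rates, with a CSI-uncertainty margin eps."""
    d_bob = math.dist(pos, BOB) + 1.0
    d_eve = math.dist(pos, EVE) + 1.0
    rate_bob = math.log2(1 + 10.0 / d_bob**2) - eps   # worst case for Bob
    rate_eve = math.log2(1 + 10.0 / d_eve**2) + eps   # best case for Eve
    return max(rate_bob - rate_eve, 0.0)

def outer_deployment(trials=2000, explore=0.1, seed=1):
    """Stand-in for the outer layer: epsilon-greedy position search whose
    reward is the inner layer's worst-case secrecy rate."""
    rng = random.Random(seed)
    value = {p: 0.0 for p in POSITIONS}
    for _ in range(trials):
        pos = (rng.choice(POSITIONS) if rng.random() < explore
               else max(value, key=value.get))
        reward = inner_worst_case_secrecy(pos)
        value[pos] += 0.1 * (reward - value[pos])     # incremental update
    return max(value, key=value.get)

print("chosen aerial position:", outer_deployment())
```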

Tennenbaum's theorem states that the only countable model of Peano arithmetic (PA) with computable arithmetical operations is the standard model of natural numbers. In this paper, we use constructive type theory as a framework to revisit, analyze and generalize this result. The chosen framework allows for a synthetic approach to computability theory, exploiting that, externally, all functions definable in constructive type theory can be shown computable. We then build on this viewpoint and furthermore internalize it by assuming a version of Church's thesis, which expresses that any function on natural numbers is representable by a formula in PA. This assumption provides for a conveniently abstract setup to carry out rigorous computability arguments, even in the theorem's mechanization. Concretely, we constructivize several classical proofs and present one inherently constructive rendering of Tennenbaum's theorem, all following arguments from the literature. Concerning the classical proofs in particular, the constructive setting allows us to highlight differences in their assumptions and conclusions which are not visible classically. All versions are accompanied by a unified mechanization in the Coq proof assistant.
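As the abstract describes it, the assumed version of Church's thesis says that every function on natural numbers is representable by a formula in PA. One conventional way to write such a statement is the following; the paper's precise formulation may differ.

```latex
% One conventional rendering of the Church's-thesis assumption described
% above (the paper's exact formulation may differ):
\[
  \mathsf{CT}_{\mathsf{PA}} :\equiv
  \forall f \colon \mathbb{N} \to \mathbb{N}.\;
  \exists \varphi(x, y).\;
  \forall n \in \mathbb{N}.\;
  \mathsf{PA} \vdash \forall y \,\bigl(\varphi(\overline{n}, y)
    \leftrightarrow y = \overline{f(n)}\bigr)
\]
```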

In this work, we study the real-time tracking and reconstruction of an information source for the purpose of actuation. A device monitors the state of the information source and transmits status updates to a receiver over a wireless erasure channel. We consider two models for the source, namely an $N$-state Markov chain and an $N$-state birth-death Markov process. We investigate several joint sampling and transmission policies, including a semantics-aware one, and study their performance with respect to a set of metrics. Specifically, we investigate the real-time reconstruction error and its variance, the cost of actuation error, the consecutive error, and the cost of memory error. These metrics capture different characteristics of the system performance, such as the impact of erroneous actions and the timing of errors. In addition, we propose a randomized stationary sampling and transmission policy and derive closed-form expressions for the aforementioned metrics. We then formulate two optimization problems. The first aims to minimize the time-averaged reconstruction error subject to a time-averaged sampling cost constraint. We compare the resulting optimal randomized stationary policy with uniform, change-aware, and semantics-aware sampling policies. Our results show that, under constrained sample generation, the optimal randomized stationary policy outperforms all other sampling policies when the source evolves rapidly; otherwise, the semantics-aware policy performs best. The objective of the second optimization problem is to obtain an optimal sampling policy that minimizes the average consecutive error subject to a constraint on the time-averaged sampling cost. Based on this, we propose a \emph{wait-then-generate} sampling policy which is simple to implement.
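A minimal simulation sketch of the randomized stationary policy follows, under illustrative assumptions not taken from the paper: a birth-death source with symmetric up/down probability p and an erasure channel with erasure probability eps; the policy samples and transmits with probability alpha in every slot.

```python
import random

def randomized_stationary(alpha, N=5, p=0.3, eps=0.2, slots=200_000, seed=0):
    """Time-averaged reconstruction error and sampling cost under a
    randomized stationary policy (illustrative source/channel models)."""
    rng = random.Random(seed)
    state, estimate, err_sum, samples = 0, 0, 0, 0
    for _ in range(slots):
        # birth-death dynamics with reflecting boundaries
        u = rng.random()
        if u < p and state < N - 1:
            state += 1
        elif u < 2 * p and state > 0:
            state -= 1
        # randomized stationary sampling over an erasure channel
        if rng.random() < alpha:
            samples += 1
            if rng.random() >= eps:        # update survives the erasure
                estimate = state
        err_sum += int(state != estimate)  # real-time reconstruction error
    return err_sum / slots, samples / slots

for alpha in (0.1, 0.5, 1.0):
    err, cost = randomized_stationary(alpha)
    print(f"alpha={alpha}: error~{err:.3f}, sampling cost~{cost:.3f}")
```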

Semantic communication is considered the future of mobile communication. It aims to move beyond the bit-centric view of Shannon's theory of communication by transmitting the semantic meaning of the data rather than reconstructing the data bit by bit at the receiver's end. The semantic communication paradigm aims to alleviate the limited-bandwidth problem in modern high-volume multimedia content transmission. Integrating AI technologies with 6G communication networks has paved the way for semantic communication-based end-to-end communication systems. In this study, we implement a semantic communication-based end-to-end image transmission system and discuss potential design considerations for semantic communication systems in conjunction with physical channel characteristics. A pre-trained GAN is used at the receiver to reconstruct a realistic image from the semantic segmentation map it receives. The semantic segmentation task at the transmitter (encoder) and the GAN at the receiver (decoder) are trained on a common knowledge base, the COCO-Stuff dataset. The study shows that the resource gain, in the form of bandwidth saving, is immense when the semantic segmentation map is transmitted through the physical channel instead of the ground-truth image, in contrast to conventional communication systems. Furthermore, we study the effect of physical channel distortions and quantization noise on semantic communication-based multimedia content transmission.
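A back-of-envelope comparison illustrates the bandwidth saving of sending a per-pixel class map instead of the RGB image. The counts below are uncompressed, with an assumed 640x480 resolution; real systems would compress both representations further.

```python
import math

# Illustrative, uncompressed comparison: a per-pixel class map versus
# raw 24-bit RGB. COCO-Stuff defines fewer than 256 labels, so one byte
# per pixel suffices for the segmentation map.
H, W = 480, 640                          # assumed resolution
num_classes = 172                        # COCO-Stuff: things + stuff + unlabeled
bits_rgb = H * W * 24                    # 8 bits per colour channel
bits_map = H * W * math.ceil(math.log2(num_classes))

print(f"raw RGB      : {bits_rgb / 8 / 1024:.0f} KiB")
print(f"semantic map : {bits_map / 8 / 1024:.0f} KiB")
print(f"saving factor: {bits_rgb / bits_map:.1f}x")
```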

In the setting of federated optimization, where a global model is aggregated periodically, step asynchronism occurs when participants fully utilize their heterogeneous computational resources and therefore perform different numbers of local training steps. It is well acknowledged that step asynchronism leads to objective inconsistency under non-i.i.d. data, which degrades the model's accuracy. To address this issue, we propose a new algorithm, FedaGrac, which calibrates the local direction toward a predictive global orientation. Taking advantage of the estimated orientation, we guarantee that the aggregated model does not excessively deviate from the global optimum while fully utilizing the local updates of faster nodes. We theoretically prove that FedaGrac achieves a better convergence rate than state-of-the-art approaches and eliminates the negative effect of step asynchronism. Empirical results show that our algorithm accelerates training and enhances the final accuracy.
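The calibration idea can be sketched as follows. This is our reading of the abstract with placeholder details, not the paper's exact FedaGrac algorithm: each worker corrects its local gradient by the gap between an estimated global orientation and its own average direction from the previous round, so the extra steps of fast workers do not drag the aggregate away from the global optimum.

```python
import numpy as np

def local_round(w, grad_fn, steps, lr, correction):
    directions = []
    for _ in range(steps):                  # fast nodes run more steps
        g = grad_fn(w)
        directions.append(g)
        w = w - lr * (g + correction)       # calibrated local update
    return w, np.mean(directions, axis=0)

def federated_training(w, workers, rounds=50, lr=0.05):
    # workers: list of (grad_fn, local_steps) with heterogeneous steps
    corrections = [np.zeros_like(w) for _ in workers]
    avg_dirs = [np.zeros_like(w) for _ in workers]
    for _ in range(rounds):
        results = []
        for i, (grad_fn, steps) in enumerate(workers):
            wi, avg_dirs[i] = local_round(w, grad_fn, steps, lr,
                                          corrections[i])
            results.append(wi)
        w = np.mean(results, axis=0)             # periodic aggregation
        global_dir = np.mean(avg_dirs, axis=0)   # predicted orientation
        corrections = [global_dir - d for d in avg_dirs]
    return w

# toy quadratic objectives with non-i.i.d. optima and unequal step counts
workers = [(lambda w, c=c: w - c, s)
           for c, s in [(np.array([1., 0.]), 10),
                        (np.array([0., 1.]), 2)]]
print(federated_training(np.zeros(2), workers))
```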

We analyze the performance of the least absolute shrinkage and selection operator (Lasso) for the linear model when the number of regressors $N$ grows large while the true support size $d$ stays finite, i.e., the ultra-sparse case. The result is based on a novel treatment of the non-rigorous replica method in statistical physics, which had previously been applied only to settings where $N$, $d$, and the number of observations $M$ tend to infinity at the same rate. Our analysis makes it possible to assess the average performance of Lasso with Gaussian sensing matrices without assumptions on the scaling of $N$ and $M$, the noise distribution, or the profile of the true signal. Under mild conditions on the noise distribution, the analysis also offers a lower bound on the sample complexity necessary for partial and perfect support recovery when $M$ diverges as $M = O(\log N)$. The obtained bound for perfect support recovery generalizes that given in the previous literature, which considers only the case of Gaussian noise and diverging $d$. Extensive numerical experiments strongly support our analysis.
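A small experiment in this ultra-sparse regime can be set up as follows; the constants, signal amplitude, and regularization strength are arbitrary illustrative choices, not taken from the paper.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
d = 3                                       # fixed true support size
for N in (200, 2000, 20000):
    M = int(25 * np.log(N))                 # M = O(log N) observations
    A = rng.standard_normal((M, N))         # Gaussian sensing matrix
    x_true = np.zeros(N)
    x_true[:d] = 5.0                        # d-sparse true signal
    y = A @ x_true + 0.1 * rng.standard_normal(M)
    x_hat = Lasso(alpha=0.1, fit_intercept=False).fit(A, y).coef_
    recovered = set(np.flatnonzero(x_hat)) == set(range(d))
    print(f"N={N:6d}, M={M:3d}: perfect support recovery = {recovered}")
```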

The deliberate introduction of controlled intersymbol interference (ISI) in Tukey signalling enables the recovery of signal amplitude and (in part) signal phase under direct detection, giving rise to significant data rate improvements compared to intensity modulation with direct detection (IMDD). The use of an integrate-and-dump detector makes precise waveform shaping unnecessary, thereby equipping the scheme with a high degree of robustness to nonlinear signal distortions introduced by practical modulators. Signal sequences drawn from star quadrature amplitude modulation (SQAM) formats admit an efficient trellis description that facilitates codebook design and low-complexity near-maximum-likelihood sequence detection in the presence of both shot noise and thermal noise. Under the practical (though suboptimal) allocation of a 50% duty cycle between ISI-free and ISI-present signalling segments, at a symbol rate of 50 Gbaud and a launch power of -10 dBm, the Tukey scheme has a maximum theoretically achievable throughput of 200 Gb/s with an (8,4)-SQAM constellation, while an IMDD scheme achieves about 145 Gb/s using PAM-8. Note that the two constellations have the same number of magnitude levels; the difference in throughput results from exploiting phase information through the use of a complex-valued signal constellation.
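For concreteness, here is a constellation sketch. We read "(8,4)-SQAM" as 8 amplitude rings with 4 phases per ring; this reading of the notation is an assumption, though it is consistent with the remark that the constellation has the same number of magnitude levels as PAM-8.

```python
import numpy as np

# Assumed reading: 8 equispaced amplitude rings, 4 phases per ring,
# giving 32 points (5 bits/symbol) versus 8 points (3 bits) for PAM-8.
rings, phases = 8, 4
radii = np.arange(1, rings + 1)                    # magnitude levels
angles = 2 * np.pi * np.arange(phases) / phases
sqam = np.array([r * np.exp(1j * a) for r in radii for a in angles])

pam8 = np.arange(8, dtype=float)                   # intensity-only levels

print(f"SQAM: {sqam.size} points, {np.log2(sqam.size):.0f} bit/symbol, "
      f"{len(set(np.round(np.abs(sqam), 9)))} magnitude levels")
print(f"PAM8: {pam8.size} points, {np.log2(pam8.size):.0f} bit/symbol")
```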

Data imbalance is a common problem in machine learning that can have a critical effect on the performance of a model. Various solutions exist, but their impact on the convergence of the learning dynamics is not understood. Here, we elucidate the significant negative impact of data imbalance on learning, showing that the learning curves for minority and majority classes follow sub-optimal trajectories when training with a gradient-based optimizer. This slowdown is related to the imbalance ratio and can be traced back to a competition between the optimization of different classes. Our main contribution is the analysis of the convergence of full-batch gradient descent (GD) and stochastic gradient descent (SGD), and of variants that renormalize the contribution of each per-class gradient. We find that GD is not guaranteed to decrease the loss for each class but that this problem can be addressed by performing a per-class normalization of the gradient. With SGD, class imbalance has an additional effect on the direction of the gradients: the minority class suffers from higher directional noise, which reduces the effectiveness of per-class gradient normalization. Our findings allow us to understand not only the potential and limitations of strategies involving per-class gradients, but also the reason for the effectiveness of previously used solutions for class imbalance, such as oversampling.
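A minimal sketch of per-class gradient normalization on a toy imbalanced logistic regression follows; the model and data are our illustration, not the paper's experimental setup.

```python
import numpy as np

def per_class_gd_step(w, X, y, lr=0.1):
    """One GD step where each class's averaged gradient is renormalized
    before being combined, so neither class dominates the update."""
    step = np.zeros_like(w)
    for c in np.unique(y):
        Xc, yc = X[y == c], y[y == c]
        # logistic-loss gradient averaged within class c
        p = 1.0 / (1.0 + np.exp(-Xc @ w))
        g = Xc.T @ (p - yc) / len(yc)
        step += g / (np.linalg.norm(g) + 1e-12)   # per-class normalization
    return w - lr * step / 2                      # average over classes

rng = np.random.default_rng(0)
n_maj, n_min = 950, 50                            # 19:1 class imbalance
X = np.vstack([rng.normal(-1, 1, (n_maj, 2)), rng.normal(+1, 1, (n_min, 2))])
y = np.r_[np.zeros(n_maj), np.ones(n_min)]
w = np.zeros(2)
for _ in range(200):
    w = per_class_gd_step(w, X, y)
print("weights:", w)
```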

Consider the problem of solving systems of linear algebraic equations $Ax=b$ with a real symmetric positive definite matrix $A$ using the conjugate gradient (CG) method. To stop the algorithm at the appropriate moment, it is important to monitor the quality of the approximate solution. One of the most relevant quantities for measuring this quality is the $A$-norm of the error. This quantity cannot be easily computed; however, it can be estimated. In this paper we discuss and analyze the behaviour of the Gauss-Radau upper bound on the $A$-norm of the error, based on viewing CG as a procedure for approximating a certain Riemann-Stieltjes integral. This upper bound depends on a prescribed underestimate $\mu$ of the smallest eigenvalue of $A$. We concentrate on explaining a phenomenon observed during computations: in later CG iterations, the upper bound loses its accuracy and becomes almost independent of $\mu$. We construct a model problem that is used to demonstrate and study the behaviour of the upper bound as a function of $\mu$, and we develop formulas that are helpful in understanding this behaviour. We show that the above-mentioned phenomenon is closely related to the convergence of the smallest Ritz value to the smallest eigenvalue of $A$: it occurs when the smallest Ritz value becomes a better approximation to the smallest eigenvalue than the prescribed underestimate $\mu$. We also suggest an adaptive strategy for improving the accuracy of the upper bounds in earlier iterations.
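The quadrature view can be made concrete with a dense-algebra sketch (for illustration only; practical implementations use cheap recurrences rather than re-solving with the Jacobi matrix at every step). The CG coefficients define a Jacobi matrix $T_k$; appending a row and column whose last diagonal entry is chosen so that the prescribed node $\mu \le \lambda_{\min}$ becomes an eigenvalue yields the $(k+1)$-point Gauss-Radau rule and the bound $\|x - x_k\|_A^2 \le \|r_0\|^2 \bigl(e_1^T \widehat{T}_{k+1}^{-1} e_1 - e_1^T T_k^{-1} e_1\bigr)$.

```python
import numpy as np

def cg_gauss_radau(A, b, mu, iters):
    """CG with the Gauss-Radau upper bound on the A-norm of the error
    (dense-algebra sketch of the quadrature construction)."""
    x_star = np.linalg.solve(A, b)        # reference solution (demo only)
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    r0norm2 = r @ r
    gammas, deltas = [], []               # CG coefficients
    for k in range(iters):
        Ap = A @ p
        gamma = (r @ r) / (p @ Ap)
        x = x + gamma * p
        r_new = r - gamma * Ap
        delta = (r_new @ r_new) / (r @ r)
        gammas.append(gamma)
        deltas.append(delta)
        r, p = r_new, r_new + delta * p
        # Jacobi matrix T from the CG coefficients
        n = k + 1
        T = np.zeros((n, n))
        for j in range(n):
            T[j, j] = 1 / gammas[j] + (deltas[j-1] / gammas[j-1] if j else 0.0)
            if j < n - 1:
                T[j, j+1] = T[j+1, j] = np.sqrt(deltas[j]) / gammas[j]
        # Gauss-Radau extension: fix the new diagonal entry so that mu
        # becomes an eigenvalue of the extended matrix (Golub, 1973)
        eta = np.sqrt(deltas[-1]) / gammas[-1]
        e_last = np.zeros(n); e_last[-1] = 1.0
        dvec = np.linalg.solve(T - mu * np.eye(n), eta**2 * e_last)
        T_hat = np.block([[T, eta * e_last[:, None]],
                          [eta * e_last[None, :], np.array([[mu + dvec[-1]]])]])
        quad_radau = np.linalg.inv(T_hat)[0, 0]   # (k+1)-pt Gauss-Radau rule
        quad_gauss = np.linalg.inv(T)[0, 0]       # k-pt Gauss rule
        bound = np.sqrt(r0norm2 * max(quad_radau - quad_gauss, 0.0))
        err = np.sqrt((x_star - x) @ A @ (x_star - x))
        print(f"k={k+1:2d}  error={err:.3e}  upper bound={bound:.3e}")

rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((30, 30)))
A = Q @ np.diag(np.linspace(0.1, 10.0, 30)) @ Q.T   # spd, lambda_min = 0.1
cg_gauss_radau(A, rng.standard_normal(30), mu=0.09, iters=12)
```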

We investigate the performance of concurrent remote sensing from independent strategic sources whose goal is to minimize a linear combination of the freshness of information and the updating cost. In the literature, this is often investigated from a static perspective, setting the update rates of the sources a priori, either in a centralized optimal way or with a distributed game-theoretic approach. However, we argue that truly rational sources would do better to make such decisions with full awareness of the current age of information, resulting in a more efficient implementation of the updating policies. To this end, we investigate the scenario where sources independently perform a stateful optimization of their objective. Their strategic character leads us to formalize this problem as a Markov game, for which we find the resulting Nash equilibrium. The equilibrium can be translated into practical smooth threshold policies for the sources' updates. The results are then tested in a sample scenario, comparing a centralized optimal approach with two distributed approaches that assign different objectives to the players.
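A smooth threshold policy can be illustrated with a single-source sketch; the dynamics below are an illustrative reading with placeholder parameters, not the paper's equilibrium policy. The source transmits with a probability that rises smoothly as its current age of information exceeds a threshold, and pays a linear combination of average age and update cost.

```python
import math, random

def smooth_threshold_cost(theta, s=1.0, c=5.0, slots=200_000, seed=0):
    """Average cost (mean AoI + c * update rate) of one source whose
    transmit probability each slot is sigmoid((age - theta) / s)."""
    rng = random.Random(seed)
    age, age_sum, updates = 0, 0, 0
    for _ in range(slots):
        p_tx = 1.0 / (1.0 + math.exp(-(age - theta) / s))
        if rng.random() < p_tx:       # stateful decision: depends on age
            updates += 1
            age = 0                   # successful update resets the age
        else:
            age += 1
        age_sum += age
    return age_sum / slots + c * updates / slots

for theta in (1, 3, 5, 8):
    print(f"theta={theta}: average cost ~ {smooth_threshold_cost(theta):.3f}")
```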
