We consider a generalized collision channel model for general multi-user communication systems, an extension of Massey and Mathys' collision channel without feedback for multiple access communications. In our model, there are multiple transmitters and receivers sharing the same communication channel. The transmitters are not synchronized, and arbitrary time offsets between transmitters and receivers are assumed. A "collision" occurs if two or more packets from different transmitters partially or completely overlap at a receiver. Our model includes the original collision channel as a special case. This paper focuses on reliable throughputs that are approachable for arbitrary time offsets. We consider both slot-synchronized and non-synchronized cases and characterize their reliable throughput regions for the generalized collision channel model. These two regions are proven to coincide. Moreover, it is shown that the protocol sequences constructed for multiple access communication remain "throughput optimal" in the generalized collision channel model. We also identify the protocol sequences that can approach the outer boundary of the reliable throughput region.
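
For reference, a minimal Python sketch of the per-user throughput in Massey and Mathys' original collision channel, which the generalized model above includes as a special case: a user transmitting with duty factor $q_i$ succeeds only when all other users are silent, giving $R_i = q_i \prod_{j \neq i} (1 - q_j)$. The duty factors below are illustrative.

```python
# A minimal sketch of the Massey-Mathys achievable throughput for the
# classical collision channel without feedback: a user transmitting with
# duty factor q_i succeeds only when all other users are silent. The
# generalized model in the paper is more involved; this illustrates only
# the baseline region that the paper extends.
from math import prod

def reliable_throughputs(duty_factors):
    """Per-user throughput R_i = q_i * prod_{j != i} (1 - q_j)."""
    return [
        q_i * prod(1.0 - q_j for j, q_j in enumerate(duty_factors) if j != i)
        for i, q_i in enumerate(duty_factors)
    ]

# Two symmetric users: q = 1/2 each gives R_i = 1/4, the symmetric optimum.
print(reliable_throughputs([0.5, 0.5]))       # [0.25, 0.25]
print(reliable_throughputs([0.4, 0.3, 0.2]))
```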

Related Content

We consider the problem of sequentially optimizing a time-varying objective function using time-varying Bayesian optimization (TVBO). Here, the key challenge is the exploration-exploitation trade-off under time variations. Current approaches to TVBO require prior knowledge of a constant rate of change. However, in practice, the rate of change is usually unknown. We propose an event-triggered algorithm, ET-GP-UCB, that treats the optimization problem as static until it detects changes in the objective function online and then resets the dataset. This allows the algorithm to adapt to realized temporal changes without the need for prior knowledge. The event-trigger is based on probabilistic uniform error bounds used in Gaussian process regression. We provide regret bounds for ET-GP-UCB and show in numerical experiments that it outperforms state-of-the-art algorithms on synthetic and real-world data. Furthermore, these results demonstrate that ET-GP-UCB is readily applicable to various settings without tuning hyperparameters.
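
A hedged sketch of the event-triggered reset idea, not the authors' implementation: run GP-UCB as if the objective were static, and reset the dataset whenever a fresh observation falls outside the GP's confidence band. The toy objective `f`, the candidate grid, and the confidence scaling `beta` are assumptions made for illustration.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)
grid = np.linspace(0.0, 1.0, 201).reshape(-1, 1)

def f(x, t):                       # toy time-varying objective (assumed)
    return np.sin(6 * x) + 0.3 * np.sin(0.05 * t)

X, y = [], []
gp = GaussianProcessRegressor(alpha=1e-2)
beta = 2.0                         # confidence scaling (hyperparameter)

for t in range(100):
    if X:
        gp.fit(np.array(X), np.array(y))
        mu, sd = gp.predict(grid, return_std=True)
        x_t = grid[np.argmax(mu + beta * sd)]    # UCB acquisition
    else:
        x_t = grid[rng.integers(len(grid))]      # no data yet: sample randomly
    y_t = f(x_t.item(), t) + 0.05 * rng.standard_normal()
    if X:
        mu_t, sd_t = gp.predict(x_t.reshape(1, -1), return_std=True)
        if abs(y_t - mu_t[0]) > beta * sd_t[0]:  # event trigger fired
            X, y = [], []                        # reset the dataset
    X.append(x_t.ravel()); y.append(y_t)
```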

This paper derives approximate outage probability (OP) expressions for uplink cell-free massive multiple-input multiple-output (CF-mMIMO) systems with and without pilot contamination. The system's access points (APs) are assumed to have imperfect channel state information (CSI). The signal-to-interference-plus-noise ratio (SINR) of the CF-mMIMO system is approximated by a Log-normal distribution using a two-step moment matching method. OP and ergodic rate expressions are derived with the help of the approximated Log-normal distribution. For the no-pilot-contamination scenario, an exact expression is first derived using conditional expectations, in terms of a multi-fold integral. Then, a novel dimension reduction method is used to approximate it by a sum of single-variable integrals. Both approximations derived for CF-mMIMO systems are also useful for single-cell collocated massive MIMO (mMIMO) systems, where they lead to closed-form expressions. The derived expressions closely match the simulated numerical values for the OP and ergodic rate.
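
For intuition, a minimal sketch of matching the first two moments of a positive random variable such as an SINR to a Log-normal distribution and reading off an outage probability; the paper's two-step matching for the CF-mMIMO SINR is more elaborate, and the mean, variance, and threshold below are illustrative.

```python
import numpy as np
from scipy.stats import norm

def lognormal_params(mean, var):
    """Match (mean, var) of a positive X to LogN(mu, sigma^2)."""
    sigma2 = np.log(1.0 + var / mean**2)
    mu = np.log(mean) - 0.5 * sigma2
    return mu, sigma2

def outage_probability(mean, var, gamma_th):
    """P(SINR < gamma_th) under the matched Log-normal approximation."""
    mu, sigma2 = lognormal_params(mean, var)
    return norm.cdf((np.log(gamma_th) - mu) / np.sqrt(sigma2))

# Example: SINR with mean 10 (~10 dB) and variance 25, threshold 3 dB.
print(outage_probability(10.0, 25.0, 10 ** (3 / 10)))
```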

We present a finite element scheme for fractional diffusion problems with varying diffusivity and fractional order. We consider a symmetric integral form of these nonlocal equations defined on general geometries and in arbitrary bounded domains. A number of challenges are encountered when discretizing these equations. The first comes from the heterogeneous kernel singularity in the fractional integral operator. The second comes from the dense discrete operator, whose memory footprint and arithmetic operations grow quadratically. An additional challenge comes from the need to handle volume conditions, the generalization of classical local boundary conditions to the nonlocal setting. Satisfying these conditions requires that the effect of the whole domain, including both the interior and exterior regions, be computed at every interior point in the discretization. Performed directly, this would result in quadratic complexity. To address these challenges, we propose a strategy that decomposes the stiffness matrix into three components. The first is a sparse matrix that handles the singular near-field separately; it is computed by adapting singular quadrature techniques available for the homogeneous case to the case of spatially variable order. The second component handles the remaining smooth part of the near-field as well as the far field, and is approximated by a hierarchical $\mathcal{H}^{2}$ matrix that maintains linear complexity in storage and operations. The third component handles the effect of the global mesh at every node and is written as a weighted mass matrix whose density is computed by a fast-multipole-type method. The resulting algorithm therefore has overall linear space and time complexity. An analysis of the consistency of the stiffness matrix is provided, and numerical experiments are conducted to illustrate the convergence and performance of the proposed algorithm.
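
A conceptual sketch, not the paper's implementation, of applying a stiffness matrix split into the three components described above: a sparse singular near-field part, a compressed smooth far-field part (a plain low-rank factorization stands in here for the $\mathcal{H}^{2}$ matrix), and a weighted mass term. Sizes, densities, and rank are illustrative.

```python
import numpy as np
import scipy.sparse as sp

n = 1000
S = sp.random(n, n, density=0.01, format="csr", random_state=0)  # near-field
U = np.random.default_rng(1).standard_normal((n, 8))             # low-rank
V = np.random.default_rng(2).standard_normal((8, n))             # far-field
w = np.random.default_rng(3).random(n)                           # mass weights

def apply_stiffness(x):
    # Each term costs O(n) or O(n * rank) rather than the O(n^2) of a
    # dense matvec, which is the source of the overall linear scaling.
    return S @ x + U @ (V @ x) + w * x

print(apply_stiffness(np.ones(n)).shape)
```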

Stochastic gradient descent with momentum (SGDM) is the dominant algorithm in many optimization scenarios, including convex optimization instances and non-convex neural network training. Yet, in the stochastic setting, momentum interferes with gradient noise, often requiring specific step size and momentum choices to guarantee convergence, let alone acceleration. Proximal point methods, on the other hand, have gained much attention due to their numerical stability and robustness against imperfect tuning. Their stochastic accelerated variants, though, have received limited attention: how momentum interacts with the stability of (stochastic) proximal point methods remains largely unstudied. To address this, we focus on the convergence and stability of the stochastic proximal point algorithm with momentum (SPPAM), and show that, under proper hyperparameter tuning, SPPAM converges linearly to a neighborhood of the solution faster than the stochastic proximal point algorithm (SPPA), with a better contraction factor. In terms of stability, we show that SPPAM depends on problem constants more favorably than SGDM, allowing a wider range of step sizes and momentum values that lead to convergence.
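
A minimal sketch of SPPAM on a least-squares problem, where the single-sample proximal step $x_{k+1} = \mathrm{prox}_{\eta f_{i_k}}(x_k + \beta(x_k - x_{k-1}))$ has a closed form; the step size `eta` and momentum `beta` below are illustrative, whereas the paper characterizes the $(\eta, \beta)$ ranges that guarantee convergence.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 10
A = rng.standard_normal((n, d))
x_star = rng.standard_normal(d)
b = A @ x_star + 0.01 * rng.standard_normal(n)

def prox_step(z, a, bi, eta):
    """prox of eta * 0.5*(a@x - bi)^2 at z, available in closed form."""
    return z - eta * a * (a @ z - bi) / (1.0 + eta * (a @ a))

eta, beta = 0.5, 0.3
x_prev = x = np.zeros(d)
for k in range(2000):
    i = rng.integers(n)
    z = x + beta * (x - x_prev)           # momentum extrapolation
    x_prev, x = x, prox_step(z, A[i], b[i], eta)

print(np.linalg.norm(x - x_star))         # converges to a noise ball
```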

In the classical communication setting, multiple senders with access to the same source of information, transmitting it over channel(s) to a receiver, generally lead to a decrease in estimation error at the receiver compared with the single-sender case. However, if the objectives of the information providers differ from that of the estimator, this can result in interesting strategic interactions and outcomes. In this work, we consider a hierarchical signaling game between multiple senders (information designers) and a single receiver (decision maker), each having their own, possibly misaligned, objectives. The senders lead the game by committing to individual information disclosure policies simultaneously, within the framework of a non-cooperative Nash game among themselves. This is followed by the receiver's action decision. With a Gaussian information structure and quadratic objectives (which depend on the underlying state and the receiver's action) for all players, we show that the equilibrium is in general not unique. We hence identify a set of equilibria and further show that linear noiseless policies can achieve a minimal element of this set. Additionally, we show that competition among the senders is beneficial to the receiver, as compared with cooperation among the senders. Further, we extend our analysis to a dynamic signaling game of finite horizon with Markovian information evolution. We show that linear memoryless policies can achieve equilibrium in this dynamic game. We also consider an extension to a game with multiple receivers having coupled objectives. We provide algorithms to compute the equilibrium strategies in all these cases. Finally, via extensive simulations, we analyze the effects of multiple senders with varying degrees of alignment among their objectives.
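
A small sketch of the building block behind linear noiseless policies: with a Gaussian state $x \sim \mathcal{N}(0, \Sigma)$ and a sender committed to a linear disclosure $y = Lx$, the receiver's optimal (MMSE) response reduces to the posterior mean. The covariance and policy below are illustrative; computing the equilibrium policies themselves is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)
Sigma = np.array([[2.0, 0.5], [0.5, 1.0]])   # state covariance (assumed)
L = np.array([[1.0, 1.0]])                   # a candidate linear policy

x = rng.multivariate_normal(np.zeros(2), Sigma)
y = L @ x
# Posterior mean E[x | y] = Sigma L^T (L Sigma L^T)^+ y.
gain = Sigma @ L.T @ np.linalg.pinv(L @ Sigma @ L.T)
print(gain @ y)
```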

The increasing significance of digital twin technology across engineering and industrial domains, such as aerospace, infrastructure, and automotive, is undeniable. However, the lack of detailed application-specific information poses challenges to its seamless implementation in practical systems. Data-driven models play a crucial role in digital twins, enabling real-time updates and predictions by leveraging data and computational models. Nonetheless, the fidelity of available data and the scarcity of accurate sensor data often hinder the efficient learning of surrogate models, which serve as the connection between physical systems and digital twin models. To address this challenge, we propose a novel framework that begins by developing a robust multi-fidelity surrogate model, subsequently applied for tracking digital twin systems. Our framework integrates polynomial correlated function expansion (PCFE) with the Gaussian process (GP) to create an effective surrogate model called H-PCFE. Going a step further, we introduce deep-HPCFE, a cascading arrangement of models with different fidelities, utilizing nonlinear auto-regression schemes. These auto-regressive schemes effectively address the issue of erroneous predictions from low-fidelity models by incorporating space-dependent cross-correlations among the models. To validate the efficacy of the multi-fidelity framework, we first assess its performance in uncertainty quantification using benchmark numerical examples. Subsequently, we demonstrate its applicability in the context of digital twin systems.
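
A hedged sketch of the nonlinear auto-regressive idea behind deep-HPCFE, shown here with a plain GP rather than the H-PCFE surrogate: the high-fidelity response is learned as a function of both the input and the low-fidelity prediction, which is what captures space-dependent cross-correlations between the models. The two toy fidelity models are assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def f_low(x):   # cheap, biased model (illustrative)
    return np.sin(8 * x)

def f_high(x):  # expensive ground truth (illustrative)
    return (x - 0.2) * np.sin(8 * x) ** 2

x_hi = np.linspace(0, 1, 8).reshape(-1, 1)     # few expensive samples
z_hi = np.hstack([x_hi, f_low(x_hi)])          # augmented input [x, f_low(x)]
gp = GaussianProcessRegressor(alpha=1e-6).fit(z_hi, f_high(x_hi).ravel())

x_test = np.linspace(0, 1, 5).reshape(-1, 1)
pred = gp.predict(np.hstack([x_test, f_low(x_test)]))
print(np.abs(pred - f_high(x_test).ravel()).max())
```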

We consider a source that monitors a stochastic process and uses a transmitter to send timely information through a wireless ON/OFF channel to a destination. We assume that once the source samples the data, the sampled data has to be processed to identify the state of the stochastic process. The processing can take place either at the source before transmission or at the destination after transmission. The objective is to minimize the distortion while keeping the age of information (AoI), which measures the timeliness of information, under a certain threshold. We use a stationary randomized policy (SRP) framework to solve the formulated problem. We show that the two-dimensional discrete-time Markov chain taking the AoI and the instantaneous distortion as its state is lumpable, and we obtain an expression for the expected AoI under the SRP.
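
A toy sketch of the kind of Markov chain analysis involved: suppose that under a stationary randomized policy a fresh update is delivered in each slot with probability p (channel ON and the source transmits), so the AoI either resets to 1 or grows by 1. The paper's chain is two-dimensional over (AoI, distortion); this is only its one-dimensional skeleton, with p chosen arbitrarily.

```python
import numpy as np

p, K = 0.3, 200                    # success prob (assumed), truncation
P = np.zeros((K, K))
P[:, 0] = p                        # delivery: age resets to 1 (state 0)
for a in range(K - 1):
    P[a, a + 1] = 1 - p            # otherwise age grows by 1
P[K - 1, K - 1] += 1 - p           # absorb the truncation tail

# Stationary distribution: left eigenvector of P for eigenvalue 1.
evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.argmin(np.abs(evals - 1))])
pi /= pi.sum()
ages = np.arange(1, K + 1)
print(pi @ ages, 1 / p)            # numeric vs closed form E[AoI] = 1/p
```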

This paper proposes and analyzes a novel fully discrete finite element scheme with an interpolation operator for stochastic Cahn-Hilliard equations with functional-type noise. The nonlinear term satisfies a one-sided Lipschitz condition and the diffusion term is globally Lipschitz continuous. The novelties of this paper are threefold. First, the $L^2$-stability ($L^\infty$ in time) and the discrete $H^2$-stability ($L^2$ in time) are proved for the proposed scheme. The idea is to utilize the special structure of the matrix assembled from the nonlinear term. Neither of these stability results has been proved for the fully implicit scheme in the existing literature, due to the difficulty arising from the interaction of the nonlinearity and the multiplicative noise. Second, higher-moment stability in the $L^2$-norm of the discrete solution is established based on the previous stability results. Third, Hölder continuity in time for the strong solution is established under minimal assumptions on the strong solution. Based on these results, strong convergence in the discrete $H^{-1}$-norm is discussed. Several numerical experiments covering stability and convergence are also presented to validate our theoretical results.
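
A minimal 1D sketch, using finite differences and a Fourier solve rather than the paper's finite element scheme, of a semi-implicit step for a stochastic Cahn-Hilliard equation $u_t = \Delta(u^3 - u) - \epsilon^2 \Delta^2 u + \text{noise}$: the one-sided Lipschitz nonlinearity $f(u) = u^3 - u$ is treated explicitly and the stiff bilaplacian implicitly. All parameters are illustrative.

```python
import numpy as np

N, eps, tau, sigma = 256, 0.05, 1e-4, 0.1
k = 2 * np.pi * np.fft.fftfreq(N, d=1.0 / N)     # angular wavenumbers on [0,1)
rng = np.random.default_rng(0)
u = 0.1 * rng.standard_normal(N)                 # random initial data

for n in range(500):
    f_hat = np.fft.fft(u ** 3 - u)               # explicit nonlinear term
    noise = sigma * np.sqrt(tau) * rng.standard_normal(N)
    # (1 + tau eps^2 k^4) u_hat_new = u_hat + noise_hat - tau k^2 f_hat
    u_hat = (np.fft.fft(u + noise) - tau * k**2 * f_hat) / (1 + tau * eps**2 * k**4)
    u = np.real(np.fft.ifft(u_hat))

print(u.min(), u.max())   # the solution begins separating toward the +/-1 phases
```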

The performance of distributed storage systems deployed on wide-area networks can be improved using weighted (majority) quorum systems instead of their regular variants, due to the heterogeneous performance of the nodes. A significant limitation of weighted majority quorum systems lies in their dependence on static weights, which are inappropriate for systems subject to the dynamic nature of networked environments. To overcome this limitation, such quorum systems require mechanisms for reassigning weights over time according to the performance variations. We study the problem of node weight reassignment in asynchronous systems with a static set of servers and a static fault threshold. We prove that solving such a problem is as hard as solving consensus, i.e., it cannot be implemented in asynchronous failure-prone distributed systems. This result is somewhat counter-intuitive, given the recent results showing that two related problems -- replica set reconfiguration and asset transfer -- can be solved in asynchronous systems. Inspired by these problems, we present two versions of the problem that contain restrictions on the weights of servers and the way they are reassigned. We propose a protocol to implement one of the restricted problems in asynchronous systems. As a case study, we construct a dynamic-weighted atomic storage based on such a protocol. We also discuss the relationship between weight reassignment and asset transfer problems and compare our dynamic-weighted atomic storage with reconfigurable atomic storage.
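
A small sketch of the weighted majority quorum rule underlying this setting: a set of replies forms a quorum when its total weight exceeds half of the system-wide weight, so any two quorums necessarily intersect. The servers and weights below are illustrative.

```python
def is_quorum(replied, weights):
    """replied: set of server ids; weights: dict of server id -> weight."""
    total = sum(weights.values())
    return sum(weights[s] for s in replied) > total / 2.0

weights = {"s1": 3.0, "s2": 1.0, "s3": 1.0, "s4": 1.0}   # illustrative
print(is_quorum({"s1"}, weights))          # False: 3 is not > 6/2
print(is_quorum({"s1", "s2"}, weights))    # True: 4 > 3
```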

In many important graph data processing applications, the acquired information includes both node features and observations of the graph topology. Graph neural networks (GNNs) are designed to exploit both sources of evidence, but they do not optimally trade off their utility or integrate them in a manner that is also universal. Here, universality refers to independence from homophily or heterophily assumptions on the graph. We address these issues by introducing a new Generalized PageRank (GPR) GNN architecture that adaptively learns the GPR weights so as to jointly optimize node feature and topological information extraction, regardless of the extent to which the node labels are homophilic or heterophilic. Learned GPR weights automatically adjust to the node label pattern, irrespective of the type of initialization, and thereby guarantee excellent learning performance for label patterns that are usually hard to handle. Furthermore, they allow one to avoid feature over-smoothing, a process which renders feature information nondiscriminative, without requiring the network to be shallow. Our accompanying theoretical analysis of the GPR-GNN method is facilitated by novel synthetic benchmark datasets generated by the so-called contextual stochastic block model. We also compare the performance of our GNN architecture with that of several state-of-the-art GNNs on the problem of node classification, using well-known benchmark homophilic and heterophilic datasets. The results demonstrate that GPR-GNN offers significant performance improvements compared to existing techniques on both synthetic and benchmark data.
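
A minimal numpy sketch of Generalized PageRank propagation as used by GPR-GNN: transformed node features are propagated as $Z = \sum_{k=0}^{K} \gamma_k \tilde{A}^k H$, with weights $\gamma_k$ on the $k$-th power of the symmetrically normalized adjacency. Here the gammas are fixed to a PPR-style initialization for brevity; in GPR-GNN they are learned jointly with the feature transform.

```python
import numpy as np

def gpr_propagate(A, H, gammas):
    """Z = sum_k gammas[k] * A_norm^k @ H, A_norm = D^-1/2 (A + I) D^-1/2."""
    A_hat = A + np.eye(A.shape[0])                 # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = d_inv_sqrt[:, None] * A_hat * d_inv_sqrt[None, :]
    Z, P = gammas[0] * H, H
    for g in gammas[1:]:
        P = A_norm @ P                             # next power, applied to H
        Z += g * P
    return Z

A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)  # toy graph
H = np.random.default_rng(0).standard_normal((3, 4))          # transformed feats
alpha, K = 0.1, 10
gammas = [alpha * (1 - alpha) ** k for k in range(K + 1)]     # PPR-style init
print(gpr_propagate(A, H, gammas).shape)
```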
