
In dynamic and resource-constrained environments, such as multi-hop wireless mesh networks, traditional routing protocols often falter by relying on predetermined paths that prove ineffective under unpredictable link conditions. Shortest Anypath routing offers a solution by adapting routing decisions based on real-time link conditions. However, the effectiveness of such routing is fundamentally dependent on the quality and reliability of the available links, and predicting these variables with certainty is challenging. This paper introduces a novel approach that leverages Deterministic Sequencing of Exploration and Exploitation (DSEE), a multi-armed bandit algorithm, to address the need for accurate, real-time estimation of link delivery probabilities. This approach augments the reliability and resilience of Shortest Anypath routing in the face of fluctuating link conditions. By coupling DSEE with Anypath routing, the algorithm continuously learns link delivery probabilities, ensures accurate estimation, and selects the most suitable forwarding choices to route packets efficiently, while maintaining a provable near-logarithmic regret bound. We also theoretically prove that our proposed scheme offers better regret scaling with respect to the network size than the previously proposed Thompson Sampling-based Opportunistic Routing (TSOR).
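
As a rough illustration of the DSEE flavor described above, the sketch below alternates deterministic exploration and exploitation phases to estimate per-link delivery probabilities. It is a minimal sketch, not the paper's algorithm: the link names, the exploration constant `d`, and the Bernoulli link model are assumptions made for the example.

```python
# Minimal DSEE-style sketch: deterministically scheduled exploration of links,
# exploitation of the empirically best link otherwise. Illustrative only.
import math
import random

def dsee_link_selection(true_delivery_prob, horizon=5000, d=2.0, seed=0):
    """true_delivery_prob: dict link id -> true (unknown) delivery probability.
    d: exploration constant; a link is explored while its exploration count
    lags d * ln(t); otherwise the empirically best link is exploited."""
    rng = random.Random(seed)
    links = list(true_delivery_prob)
    pulls = {l: 0 for l in links}          # total probes per link
    explore_pulls = {l: 0 for l in links}  # probes made during exploration
    successes = {l: 0 for l in links}      # observed deliveries

    for t in range(1, horizon + 1):
        under_explored = [l for l in links if explore_pulls[l] < d * math.log(t + 1)]
        if under_explored:
            link = min(under_explored, key=lambda l: explore_pulls[l])  # exploration phase
            explore_pulls[link] += 1
        else:
            link = max(links, key=lambda l: successes[l] / max(pulls[l], 1))  # exploitation
        delivered = rng.random() < true_delivery_prob[link]
        pulls[link] += 1
        successes[link] += int(delivered)

    return {l: successes[l] / max(pulls[l], 1) for l in links}

if __name__ == "__main__":
    estimates = dsee_link_selection({"link_A": 0.9, "link_B": 0.6, "link_C": 0.3})
    print(estimates)  # estimates concentrate around the true delivery probabilities
```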

Related Content

Networking: IFIP International Conferences on Networking. Explanation: international networking conference. Publisher: IFIP. SIT:

Sixth generation (6G) wireless networks are envisioned to include aspects of energy footprint reduction (sustainability), besides those of network capacity and connectivity, at the design stage. This paradigm change requires radically new physical layer technologies. Notably, the integration of large-aperture arrays and the transmission over high frequency bands, such as the sub-terahertz spectrum, are two promising options. In many communication scenarios of practical interest, the use of large antenna arrays in the sub-terahertz frequency range often results in short-range transmission distances that are characterized by line-of-sight channels, in which pairs of transmitters and receivers are located in the (radiating) near field of one another. These features make the traditional designs, based on the far-field approximation, for multiple-input multiple-output (MIMO) systems sub-optimal in terms of spatial multiplexing gains. To overcome these limitations, new designs for MIMO systems are required, which account for the spherical wavefront that characterizes the electromagnetic waves in the near field, in order to ensure the highest spatial multiplexing gain without increasing the power expenditure. In this paper, we introduce an analytical framework for optimizing the deployment of antenna arrays in line-of-sight channels, which can be applied to paraxial and non-paraxial network deployments. In the paraxial setting, we devise a simpler analytical framework, which, compared to those available in the literature, provides explicit information about the impact of key design parameters. In the non-paraxial setting, we introduce a novel analytical framework that allows us to identify a set of sufficient conditions to be fulfilled for achieving the highest spatial multiplexing gain. The proposed designs are validated with numerical simulations.
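
To make the near-field versus far-field distinction concrete, the following sketch builds a line-of-sight MIMO channel between two parallel uniform linear arrays and compares its singular values under the exact spherical-wavefront model and the boresight planar-wave approximation. Array sizes, element spacing, carrier frequency, and link distance are assumed values for illustration, not the paper's configuration.

```python
# Spherical-wavefront (near-field) vs. planar (far-field) LoS MIMO channel sketch.
import numpy as np

def los_channel(n_tx=8, n_rx=8, spacing=0.05, distance=5.0, freq=100e9, spherical=True):
    c = 3e8
    lam = c / freq
    k = 2 * np.pi / lam
    # Element coordinates of two parallel uniform linear arrays facing each other.
    tx = np.stack([np.zeros(n_tx), (np.arange(n_tx) - (n_tx - 1) / 2) * spacing], axis=1)
    rx = np.stack([np.full(n_rx, distance), (np.arange(n_rx) - (n_rx - 1) / 2) * spacing], axis=1)
    if spherical:
        # Exact element-wise distances capture the spherical wavefront.
        d = np.linalg.norm(rx[:, None, :] - tx[None, :, :], axis=2)
    else:
        # Boresight planar-wave approximation: all paths share the same length,
        # which yields a rank-one channel.
        d = np.full((n_rx, n_tx), distance)
    return np.exp(-1j * k * d) / distance

for label, sph in [("near-field (spherical)", True), ("far-field (planar)", False)]:
    H = los_channel(spherical=sph)
    sv = np.linalg.svd(H, compute_uv=False)
    print(label, "normalized singular values:", np.round(sv / sv[0], 3))
```

At these assumed parameters the link lies well inside the radiating near field, so the spherical model retains several comparable singular values (spatial multiplexing), while the planar approximation collapses to a single dominant mode.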

The typical training of neural networks using large stepsize gradient descent (GD) under the logistic loss often involves two distinct phases, where the empirical risk oscillates in the first phase but decreases monotonically in the second phase. We investigate this phenomenon in two-layer networks that satisfy a near-homogeneity condition. We show that the second phase begins once the empirical risk falls below a certain threshold, dependent on the stepsize. Additionally, we show that the normalized margin grows nearly monotonically in the second phase, demonstrating an implicit bias of GD in training non-homogeneous predictors. If the dataset is linearly separable and the derivative of the activation function is bounded away from zero, we show that the average empirical risk decreases, implying that the first phase must stop in finite steps. Finally, we demonstrate that by choosing a suitably large stepsize, GD that undergoes this phase transition is more efficient than GD that monotonically decreases the risk. Our analysis applies to networks of any width, beyond the well-known neural tangent kernel and mean-field regimes.
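
The toy experiment below sketches the setup: full-batch gradient descent with a deliberately large stepsize on a two-layer network with a near-homogeneous activation and the logistic loss, tracking the empirical risk and a normalized margin. The dataset, width, stepsize, and the particular margin normalization are illustrative assumptions; whether an oscillatory first phase actually appears depends on these choices.

```python
# Large-stepsize GD on a two-layer network with the logistic loss (sketch).
import torch

torch.manual_seed(0)
n, d, width = 40, 5, 64
X = torch.randn(n, d)
w_star = torch.randn(d)
y = torch.sign(X @ w_star)                                   # linearly separable labels in {-1, +1}

W = (torch.randn(width, d) / d ** 0.5).requires_grad_()
a = (torch.randn(width) / width ** 0.5).requires_grad_()

def risk_and_margin():
    out = torch.nn.functional.softplus(X @ W.T, beta=5.0) @ a   # near-homogeneous activation
    risk = torch.nn.functional.softplus(-y * out).mean()        # logistic loss log(1 + e^{-y f(x)})
    param_norm = W.norm() ** 2 + a.norm() ** 2
    margin = (y * out).min() / param_norm                       # one common normalization
    return risk, margin

stepsize = 2.0                                                   # deliberately large
for step in range(200):
    risk, margin = risk_and_margin()
    if step % 20 == 0:
        print(f"step {step:3d}  risk {risk.item():.4f}  normalized margin {margin.item():.4f}")
    grads = torch.autograd.grad(risk, [W, a])
    with torch.no_grad():
        W -= stepsize * grads[0]
        a -= stepsize * grads[1]
```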

Neuron-level interpretations aim to explain network behaviors and properties by investigating neurons responsive to specific perceptual or structural input patterns. Although there is emerging work in the vision and language domains, none is explored for acoustic models. To bridge the gap, we introduce $\textit{AND}$, the first $\textbf{A}$udio $\textbf{N}$etwork $\textbf{D}$issection framework that automatically establishes natural language explanations of acoustic neurons based on highly-responsive audio. $\textit{AND}$ features the use of LLMs to summarize mutual acoustic features and identities among audio. Extensive experiments are conducted to verify $\textit{AND}$'s precise and informative descriptions. In addition, we demonstrate a potential use of $\textit{AND}$ for audio machine unlearning by conducting concept-specific pruning based on the generated descriptions. Finally, we highlight two acoustic model behaviors with analysis by $\textit{AND}$: (i) models discriminate audio with a combination of basic acoustic features rather than high-level abstract concepts; (ii) training strategies affect model behaviors and neuron interpretability -- supervised training guides neurons to gradually narrow their attention, while self-supervised learning encourages neurons to be polysemantic for exploring high-level features.
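
The snippet below sketches the core recipe: for each neuron, collect its most highly-responsive audio clips and assemble a prompt asking an LLM to summarize their shared acoustic features. The activation matrix, clip metadata, and prompt wording are illustrative assumptions, and the actual LLM call is omitted.

```python
# Sketch of selecting highly-responsive audio per neuron and building an LLM prompt.
import numpy as np

rng = np.random.default_rng(0)
clip_tags = ["dog barking", "rain on window", "speech, female", "drum loop",
             "siren passing", "applause", "bird song", "engine idling"]
activations = rng.random((len(clip_tags), 4))   # shape: (num_clips, num_neurons), placeholder values

def describe_neuron(neuron_idx, top_k=3):
    top_clips = np.argsort(activations[:, neuron_idx])[::-1][:top_k]
    evidence = "; ".join(clip_tags[i] for i in top_clips)
    # Prompt that would be sent to an LLM to produce the natural-language explanation.
    return (f"The following audio clips maximally activate neuron {neuron_idx}: "
            f"{evidence}. Summarize the acoustic features they share.")

for n in range(activations.shape[1]):
    print(describe_neuron(n))
```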

Neural networks often operate in the overparameterized regime, in which there are far more parameters than training samples, allowing the training data to be fit perfectly. That is, training the network effectively learns an interpolating function, and properties of the interpolant affect predictions the network will make on new samples. This manuscript explores the properties of such functions learned by neural networks of depth greater than two layers. Our framework considers a family of networks of varying depths that all have the same capacity but different representation costs. The representation cost of a function induced by a neural network architecture is the minimum sum of squared weights needed for the network to represent the function; it reflects the function space bias associated with the architecture. Our results show that adding linear layers to the input side of a shallow ReLU network yields a representation cost favoring functions with low mixed variation - that is, it has limited variation in directions orthogonal to a low-dimensional subspace and can be well approximated by a single- or multi-index model. Such functions may be represented by the composition of a function with low two-layer representation cost and a low-rank linear operator. Our experiments confirm this behavior in standard network training regimes. They additionally show that linear layers can improve generalization and that the learned network is well-aligned with the true latent low-dimensional linear subspace when data is generated using a multi-index model.
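
The sketch below illustrates the setup: linear layers are prepended to a shallow ReLU network, the model is trained with a squared-weight penalty (here approximated by weight decay), and the product of the learned linear layers is compared against the true low-dimensional subspace of a multi-index data model. Dimensions, depth, the target function, and the optimizer are assumptions of this sketch, not the paper's exact experiment.

```python
# Linear layers prepended to a shallow ReLU network, with a subspace-alignment check.
import torch

torch.manual_seed(0)
d, k, n = 20, 2, 512
U = torch.linalg.qr(torch.randn(d, k)).Q          # true latent subspace (d x k)
X = torch.randn(n, d)
y = torch.sin(X @ U[:, 0]) + (X @ U[:, 1]) ** 2   # multi-index target: depends only on U^T x

model = torch.nn.Sequential(
    torch.nn.Linear(d, d, bias=False),            # extra linear layers on the input side
    torch.nn.Linear(d, d, bias=False),
    torch.nn.Linear(d, 128), torch.nn.ReLU(),     # shallow ReLU network
    torch.nn.Linear(128, 1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-2, weight_decay=1e-3)
for _ in range(2000):
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(model(X).squeeze(-1), y)
    loss.backward()
    opt.step()

# Alignment check: compare the top singular subspace of the learned linear map with U.
with torch.no_grad():
    A = model[1].weight @ model[0].weight         # composition of the two linear layers
    _, S, Vh = torch.linalg.svd(A)
    top = Vh[:k].T
    overlap = torch.linalg.svd(U.T @ top, compute_uv=False)  # cosines of principal angles
print("singular value decay:", [round(v, 3) for v in S[:5].tolist()])
print("subspace alignment (1.0 = perfect):", [round(v, 3) for v in overlap.tolist()])
```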

Existing studies analyzing electromagnetic field exposure (EMFE) in wireless networks have primarily considered downlink communications. In the uplink, the EMFE caused by the user's own smartphone is usually the only source of radiation considered, thereby ignoring contributions from other active neighboring devices. In addition, network coverage and EMFE are typically analyzed independently for the uplink and downlink, while a joint analysis is necessary to fully understand the network performance and answer various questions related to optimal network deployment. This paper bridges these gaps by presenting an enhanced stochastic geometry framework that includes the above aspects. The proposed topology features base stations modeled via a homogeneous Poisson point process. The users active during the same time slot are distributed according to a mixture of a Matérn cluster process and a Gauss-Poisson process, featuring groups of users possibly carrying several devices. In this paper, we derive the marginal and meta distributions of the downlink and uplink EMFE, and we characterize the uplink-to-downlink EMFE ratio. Moreover, we derive joint probability metrics considering the uplink and downlink coverage and EMFE. These metrics are evaluated in four scenarios considering base station (BS), cluster, and/or intra-cluster densification. Our numerical results highlight the existence of optimal node densities maximizing these joint probabilities.
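
The minimal Monte Carlo sketch below conveys the stochastic-geometry flavor of such an analysis: base stations are drawn from a homogeneous Poisson point process and the aggregate downlink power received at a typical user is sampled to estimate exposure statistics. The density, transmit power, path-loss exponent, and window size are assumed values, and fading, user clustering, and the uplink contribution are deliberately omitted.

```python
# Monte Carlo sketch of downlink exposure at the origin with PPP base stations.
import numpy as np

rng = np.random.default_rng(0)

def downlink_exposure_samples(density=1e-5, radius=2000.0, tx_power=10.0, alpha=3.5, trials=10000):
    """Total received power at the origin from all base stations (simple path loss)."""
    area = np.pi * radius ** 2
    samples = np.empty(trials)
    for i in range(trials):
        n_bs = rng.poisson(density * area)                 # PPP: Poisson number of base stations
        r = radius * np.sqrt(rng.uniform(size=n_bs))       # radii uniform in the disk
        samples[i] = np.sum(tx_power * np.maximum(r, 1.0) ** (-alpha))
    return samples

s = downlink_exposure_samples()
print("mean exposure:", s.mean())
print("95th percentile:", np.quantile(s, 0.95))
```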

The customization of services in Fifth-generation (5G) and Beyond 5G (B5G) networks relies heavily on network slicing, which creates multiple virtual networks on a shared physical infrastructure, tailored to the specific requirements of distinct applications, using Software Defined Networking (SDN) and Network Function Virtualization (NFV). It is imperative to ensure that network services meet the performance and reliability requirements of various applications and users; service assurance is therefore one of the critical components of network slicing. One of the key functionalities of network slicing is the ability to scale Virtualized Network Functions (VNFs) in response to changing resource demand and to meet customer Service Level Agreements (SLAs). In this paper, we introduce a proactive closed-loop algorithm for end-to-end network orchestration, designed to provide service assurance in 5G and B5G networks. The algorithm dynamically scales resources to meet key performance indicators (KPIs) specific to each network slice and operates in parallel across multiple slices, making it scalable and capable of fully automated, real-time service assurance. Through our experiments, we demonstrate that the proposed algorithm effectively fulfills the service assurance requirements of different network slice types while minimizing network resource utilization and reducing the over-provisioning of spare resources.
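
As a rough illustration of a proactive closed loop for a single slice, the sketch below forecasts the next interval's load from recent history and scales VNF replicas ahead of demand. The trend-based forecast, per-replica capacity, and utilization target are illustrative assumptions, not the paper's algorithm.

```python
# Proactive closed-loop scaler sketch for one network slice.
import math
from collections import deque

class ProactiveSliceScaler:
    def __init__(self, capacity_per_replica=100.0, target_utilization=0.7, history=5):
        self.capacity = capacity_per_replica       # requests/s one VNF replica can serve
        self.target = target_utilization           # KPI: keep per-replica utilization below this
        self.load_history = deque(maxlen=history)
        self.replicas = 1

    def forecast(self):
        """Simple trend-based forecast of the next interval's load."""
        h = list(self.load_history)
        if len(h) < 2:
            return h[-1] if h else 0.0
        trend = (h[-1] - h[0]) / (len(h) - 1)
        return max(h[-1] + trend, 0.0)

    def step(self, observed_load):
        """One closed-loop iteration: observe, forecast, and scale."""
        self.load_history.append(observed_load)
        predicted = self.forecast()
        self.replicas = max(1, math.ceil(predicted / (self.capacity * self.target)))
        return self.replicas

scaler = ProactiveSliceScaler()
for load in [80, 120, 180, 260, 350, 330, 200]:
    print(f"load {load:4d} -> replicas {scaler.step(load)}")
```

Running one such loop per slice in parallel is what makes the approach scale across slices; each instance only needs its own slice's KPI targets and load history.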

Modelling passenger assignments in public transport networks is a fundamental task for city planners, especially when deliberating network infrastructure decisions. A key aspect of a realistic model for passenger assignments is to integrate selfish routing behaviour of passengers on the one hand, and the limited vehicle capacities on the other hand. We formulate a side-constrained user equilibrium model in a schedule-based time-expanded transit network, where passengers are modelled via a continuum of non-atomic agents that want to travel with a fixed start time from a user-specific origin to a destination. An agent's route may comprise several rides along given lines, each using vehicles with hard loading capacities. We give a characterization of (side-constrained) user equilibria via a quasi-variational inequality and prove their existence by generalizing a well-known existence result of Bernstein and Smith (Transp. Sci., 1994). We further derive a polynomial time algorithm for single-commodity instances and an exact finite time algorithm for the multi-commodity case. Based on our quasi-variational characterization, we finally devise a fast heuristic computing user equilibria, which is tested on real-world instances based on data gained from the Hamburg S-Bahn system and the Swiss long-distance train network. It turns out that w.r.t. the total travel time, the computed user-equilibria are quite efficient compared to a system optimum, which neglects equilibrium constraints and only minimizes total travel time.
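
To make the schedule-based time-expanded network concrete, the toy sketch below turns a small timetable into a graph of (stop, time) events with ride and waiting arcs and computes an earliest-arrival route. It is an illustration of the network construction only: the timetable is invented, and vehicle capacities and equilibrium constraints, the actual subject of the paper, are ignored here.

```python
# Schedule-based time-expanded transit graph with earliest-arrival search (sketch).
import heapq
from collections import defaultdict

# Toy timetable: each trip is a list of (stop, time) events along a line.
trips = [
    [("A", 0), ("B", 10), ("C", 20)],
    [("A", 5), ("C", 18)],
    [("B", 12), ("D", 25)],
]

graph = defaultdict(list)          # (stop, time) -> list of ((stop, time), travel time)
events = defaultdict(set)          # stop -> set of event times

for trip in trips:
    for (s1, t1), (s2, t2) in zip(trip, trip[1:]):
        graph[(s1, t1)].append(((s2, t2), t2 - t1))          # ride arc
        events[s1].add(t1)
        events[s2].add(t2)

for stop, times in events.items():
    ordered = sorted(times)
    for t1, t2 in zip(ordered, ordered[1:]):
        graph[(stop, t1)].append(((stop, t2), t2 - t1))      # waiting arc at the stop

def earliest_arrival(origin, start_time, destination):
    """Dijkstra on the time-expanded graph from the first event at origin >= start_time."""
    starts = [t for t in events[origin] if t >= start_time]
    if not starts:
        return None
    source = (origin, min(starts))
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if node[0] == destination:
            return node[1]                                   # arrival time at destination
        for nxt, cost in graph[node]:
            nd = d + cost
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                heapq.heappush(heap, (nd, nxt))
    return None

print("earliest arrival A -> D starting at t=0:", earliest_arrival("A", 0, "D"))
```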

As telecommunications networks become increasingly complex, the integration of advanced technologies such as network digital twins and generative artificial intelligence (AI) emerges as a pivotal solution to enhance network operations and resilience. This paper explores the synergy between network digital twins, which provide a dynamic virtual representation of physical networks, and generative AI, particularly focusing on Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs). We propose a novel architectural framework that incorporates these technologies to significantly improve predictive maintenance, network scenario simulation, and real-time data-driven decision-making. Through extensive simulations, we demonstrate how generative AI can enhance the accuracy and operational efficiency of network digital twins, effectively handling real-world complexities such as unpredictable traffic loads and network failures. The findings suggest that this integration not only boosts the capability of digital twins in scenario forecasting and anomaly detection but also facilitates a more adaptive and intelligent network management system.

One principal approach to illuminating a black-box neural network is feature attribution, i.e., identifying the importance of input features for the network's prediction. The predictive information of features has recently been proposed as a proxy for the measure of their importance. So far, predictive information has only been identified for latent features, by placing an information bottleneck within the network. We propose a method to identify features with predictive information in the input domain. The method results in fine-grained identification of input features' information and is agnostic to network architecture. The core idea of our method is leveraging a bottleneck on the input that only lets input features associated with predictive latent features pass through. We compare our method with several feature attribution methods using mainstream feature attribution evaluation experiments. The code is publicly available.
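
The simplified sketch below conveys the idea of a bottleneck on the input: a per-feature mask mixes each input feature with noise and is optimized to preserve the frozen classifier's prediction while letting as little of the input through as possible. The toy classifier, the Gaussian noise model, and the mask-magnitude surrogate for the information term are assumptions of this sketch, not the paper's exact objective.

```python
# Input-bottleneck attribution sketch: optimize a per-feature mask for one input.
import torch

torch.manual_seed(0)
d = 10
clf = torch.nn.Sequential(torch.nn.Linear(d, 16), torch.nn.ReLU(), torch.nn.Linear(16, 2))
for p in clf.parameters():
    p.requires_grad_(False)                       # the classifier stays frozen

x = torch.randn(d)
target = clf(x).argmax()
mask_logits = torch.zeros(d, requires_grad=True)
opt = torch.optim.Adam([mask_logits], lr=0.05)
beta = 0.1                                        # trade-off between fidelity and compression

for _ in range(300):
    opt.zero_grad()
    m = torch.sigmoid(mask_logits)
    noise = torch.randn(32, d)                    # fresh noise samples each step
    z = m * x + (1 - m) * noise                   # features with m ~ 0 are replaced by noise
    pred_loss = torch.nn.functional.cross_entropy(clf(z), target.unsqueeze(0).expand(32))
    info_loss = m.mean()                          # crude surrogate for the information passed
    (pred_loss + beta * info_loss).backward()
    opt.step()

attribution = torch.sigmoid(mask_logits).detach()
print("per-feature attribution:", [round(v, 2) for v in attribution.tolist()])
```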

The prevalence of networked sensors and actuators in many real-world systems, such as smart buildings, factories, power plants, and data centers, generates substantial amounts of multivariate time series data for these systems. The rich sensor data can be continuously monitored for intrusion events through anomaly detection. However, conventional threshold-based anomaly detection methods are inadequate due to the dynamic complexities of these systems, while supervised machine learning methods are unable to exploit the large amounts of data due to the lack of labeled data. On the other hand, current unsupervised machine learning approaches have not fully exploited the spatial-temporal correlation and other dependencies amongst the multiple variables (sensors/actuators) in the system for detecting anomalies. In this work, we propose an unsupervised multivariate anomaly detection method based on Generative Adversarial Networks (GANs). Instead of treating each data stream independently, our proposed MAD-GAN framework considers the entire variable set concurrently to capture the latent interactions amongst the variables. We also fully exploit both the generator and the discriminator produced by the GAN, using a novel anomaly score called DR-score to detect anomalies through discrimination and reconstruction. We have tested MAD-GAN on two recent datasets collected from real-world cyber-physical systems (CPS): the Secure Water Treatment (SWaT) and the Water Distribution (WADI) datasets. Our experimental results show that the proposed MAD-GAN is effective in reporting anomalies caused by various cyber-intrusions in these complex real-world systems.
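
The sketch below illustrates the structure of a reconstruction-plus-discrimination anomaly score of the kind described above. The tiny generator and discriminator are untrained placeholders (in practice they would come from adversarial training on normal data), so the printed numbers only show how the score is computed, not a meaningful detection result; the latent-inversion procedure and the score weighting are illustrative assumptions rather than the exact MAD-GAN recipe.

```python
# Sketch of an anomaly score combining reconstruction error and discriminator output.
import torch

torch.manual_seed(0)
latent_dim, window = 4, 16            # latent size and length of a sensor window

generator = torch.nn.Sequential(torch.nn.Linear(latent_dim, 32), torch.nn.ReLU(),
                                torch.nn.Linear(32, window))
discriminator = torch.nn.Sequential(torch.nn.Linear(window, 32), torch.nn.ReLU(),
                                    torch.nn.Linear(32, 1), torch.nn.Sigmoid())

def dr_score(x, weight=0.5, steps=200, lr=0.05):
    """Anomaly score: weighted sum of reconstruction error and discrimination term."""
    z = torch.zeros(latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):                        # invert the generator: find z reconstructing x
        opt.zero_grad()
        loss = torch.nn.functional.mse_loss(generator(z), x)
        loss.backward()
        opt.step()
    with torch.no_grad():
        reconstruction_error = torch.nn.functional.mse_loss(generator(z), x)
        discrimination = 1.0 - discriminator(x)   # low D(x) = "looks fake" = more anomalous
    return (weight * reconstruction_error + (1 - weight) * discrimination).item()

normal_window = torch.sin(torch.linspace(0, 6.28, window))
anomalous_window = normal_window.clone()
anomalous_window[8:] += 3.0                        # injected fault in the second half
print("score (normal):   ", round(dr_score(normal_window), 4))
print("score (anomalous):", round(dr_score(anomalous_window), 4))
```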
