
Intelligent reflecting surfaces (IRSs) are a promising low-cost solution for achieving high spectral and energy efficiency in future communication systems by enabling the customization of wireless propagation environments. Despite the plethora of research on resource allocation design for IRS-assisted multiuser communication systems, the optimal design and the corresponding performance upper bound are still not fully understood. To bridge this gap in knowledge, in this paper, we investigate the optimal resource allocation design for IRS-assisted multiuser systems employing practical discrete IRS phase shifters. In particular, we jointly optimize the beamforming vector at the base station (BS) and the discrete IRS phase shifts to minimize the total transmit power for the cases of perfect and imperfect channel state information (CSI) knowledge. To this end, two novel algorithms based on the generalized Benders decomposition (GBD) method are developed to obtain the globally optimal solution for perfect and imperfect CSI, respectively. Moreover, to facilitate practical implementation, we propose two corresponding low-complexity suboptimal algorithms with guaranteed convergence by capitalizing on successive convex approximation (SCA). In particular, for imperfect CSI, we adopt a bounded error model to characterize the CSI uncertainty and propose a new transformation to convexify the robust quality-of-service (QoS) constraints. Our numerical results confirm the optimality of the proposed GBD-based algorithms for the considered system for both perfect and imperfect CSI. Furthermore, we unveil that both proposed SCA-based algorithms can achieve a close-to-optimal performance within a few iterations. Moreover, compared with the state-of-the-art solution based on the alternating optimization (AO) method, the proposed SCA-based scheme achieves a significant performance gain with low complexity.
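
For concreteness, a plausible mathematical form of the perfect-CSI design problem is the following (the notation and the SINR-based QoS model are assumptions made here for illustration; the paper's exact formulation may differ):
\[
\begin{aligned}
\underset{\{\mathbf{w}_k\},\,\boldsymbol{\theta}}{\text{minimize}} \quad & \sum_{k=1}^{K} \|\mathbf{w}_k\|_2^2 \\
\text{subject to} \quad & \frac{\big|(\mathbf{h}_k^H + \mathbf{g}_k^H\boldsymbol{\Theta}\mathbf{F})\mathbf{w}_k\big|^2}{\sum_{j\neq k} \big|(\mathbf{h}_k^H + \mathbf{g}_k^H\boldsymbol{\Theta}\mathbf{F})\mathbf{w}_j\big|^2 + \sigma_k^2} \ge \gamma_k \;\; \forall k, \qquad \theta_n \in \Big\{0, \tfrac{2\pi}{2^b}, \ldots, \tfrac{2\pi(2^b-1)}{2^b}\Big\} \;\; \forall n,
\end{aligned}
\]
where $\mathbf{w}_k$ is the BS beamforming vector for user $k$, $\boldsymbol{\Theta} = \mathrm{diag}(e^{j\theta_1},\ldots,e^{j\theta_N})$ collects the discrete IRS phase shifts with $b$-bit resolution, $\mathbf{h}_k$, $\mathbf{F}$, and $\mathbf{g}_k$ denote the direct, BS-IRS, and IRS-user channels, $\gamma_k$ is the QoS (SINR) target, and $\sigma_k^2$ is the noise power.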

Related Content

Given the ubiquity of non-separable optimization problems in the real world, in this paper we analyze and extend the large-scale version of the well-known cooperative coevolution (CC), a divide-and-conquer black-box optimization framework, on non-separable functions. First, we reveal empirical reasons why decomposition-based methods are, or are not, preferred in practice on some non-separable large-scale problems, which have not been clearly pointed out in many previous CC papers. Then, we formalize CC as a continuous-game model via simplification, without losing its essential property. Different from previous evolutionary game theory for CC, our new model provides a much simpler yet useful viewpoint for analyzing its convergence, since only the pure Nash equilibrium concept is needed and more general fitness landscapes can be explicitly considered. Based on the convergence analyses, we propose a hierarchical decomposition strategy for better generalization, since for any fixed decomposition there is a risk of getting trapped in a suboptimal Nash equilibrium. Finally, we use distributed computing to accelerate it under the recent multi-level learning framework, which combines the fine-tuning ability of decomposition with the invariance property of CMA-ES. Experiments on a set of high-dimensional test functions validate both its search performance and its scalability (w.r.t. CPU cores) on a computing cluster with 400 CPU cores.
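
As a concrete illustration of the divide-and-conquer idea behind CC, here is a minimal sketch in plain Python; the round-robin grouping, the random-search subcomponent optimizer, and all names are illustrative assumptions, not the authors' implementation:

    import numpy as np

    def cooperative_coevolution(f, dim, groups, n_cycles=50, n_trials=20, sigma=0.3, seed=0):
        """Minimize f over R^dim by optimizing one variable group at a time
        while holding the remaining coordinates fixed (round-robin CC)."""
        rng = np.random.default_rng(seed)
        x = rng.standard_normal(dim)          # shared context vector
        for _ in range(n_cycles):
            for group in groups:              # e.g. [[0, 1], [2, 3], ...]
                best = f(x)
                for _ in range(n_trials):     # simple random-search subcomponent optimizer
                    cand = x.copy()
                    cand[group] += sigma * rng.standard_normal(len(group))
                    val = f(cand)
                    if val < best:
                        best, x = val, cand
        return x, f(x)

    # Example: a non-separable quadratic, decomposed into two coordinate groups.
    f = lambda v: float(v @ v + 0.9 * v[0] * v[-1])
    x_best, f_best = cooperative_coevolution(f, dim=4, groups=[[0, 1], [2, 3]])
    print(f_best)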

The large untapped spectrum in the sub-THz band allows for ultra-high-throughput communication to realize many seemingly impossible applications in 6G. One of the challenges of radio communication in the sub-THz band is hardware impairments. Specifically, phase noise is a key hardware impairment that is accentuated as frequency and bandwidth increase. Furthermore, the modest output power of sub-THz power amplifiers demands signal designs with a low peak-to-average power ratio (PAPR). The single-carrier frequency-domain equalization (SC-FDE) waveform has been identified as a suitable candidate for sub-THz, although challenges such as phase noise and PAPR still remain to be tackled. In this work, we design a phase-noise-robust, low-PAPR SC-FDE waveform by geometrically shaping the constellation under practical conditions. We formulate the waveform optimization problem in its augmented Lagrangian form and use a back-propagation-inspired technique to obtain a constellation design that is numerically robust to phase noise while maintaining a low PAPR.
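
A minimal sketch of the back-propagation-inspired idea in PyTorch follows; the penalty-based loss, the random-rotation stand-in for phase noise, and the per-symbol power proxy for PAPR are all assumptions for illustration, not the paper's augmented-Lagrangian formulation:

    import torch

    M = 16                                        # constellation size (assumed)
    pts = torch.randn(M, 2, requires_grad=True)   # learnable constellation points (I/Q)
    opt = torch.optim.Adam([pts], lr=1e-2)

    def papr(x):
        p = (x ** 2).sum(dim=-1)                  # per-symbol power (crude PAPR proxy)
        return p.max() / p.mean()

    for step in range(2000):
        c = pts / (pts ** 2).sum(dim=-1).mean().sqrt()   # normalize average power to 1
        # Robustness surrogate: keep symbols apart even after a random common
        # phase rotation (a crude stand-in for phase noise).
        phi = 0.1 * torch.randn(())
        rot = torch.stack([torch.stack([torch.cos(phi), -torch.sin(phi)]),
                           torch.stack([torch.sin(phi),  torch.cos(phi)])])
        cr = c @ rot.T
        d = torch.cdist(cr, cr) + torch.eye(M) * 1e3     # mask self-distances
        loss = -d.min() + 0.5 * papr(c)           # maximize min distance, penalize PAPR
        opt.zero_grad()
        loss.backward()
        opt.step()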

Orthogonal time frequency space (OTFS) is a modulation technique that is robust against the disruptive effects of doubly-selective channels. In this paper, we perform an experimental study of OTFS using a real-time software-defined radio (SDR) setup. Our SDR consists of a Graphics Processing Unit (GPU) for signal processing, programmed using Sionna and TensorFlow, and Universal Software Radio Peripheral (USRP) devices for the air interface. We implement a low-latency transceiver structure for OTFS and investigate its performance under various Doppler values. By comparing the performance of OTFS with Orthogonal Frequency Division Multiplexing (OFDM), we demonstrate that OTFS is highly robust against the disruptive effects of doubly-selective channels in a real-time experimental setup.
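
To make the modulation step concrete, here is a minimal NumPy sketch of OTFS modulation and demodulation for rectangular pulses, where the ISFFT followed by the Heisenberg transform collapses to an IFFT along the Doppler axis; this is a textbook simplification, not the Sionna/GPU implementation used in the paper:

    import numpy as np

    def otfs_modulate(x_dd):
        """OTFS modulation with rectangular pulses.
        x_dd: (M, N) delay-Doppler symbols (M delay bins, N Doppler bins)."""
        # ISFFT + Heisenberg transform reduce to an IFFT along the Doppler axis.
        M, N = x_dd.shape
        x_time = np.fft.ifft(x_dd, axis=1) * np.sqrt(N)   # unitary scaling
        return x_time.T.reshape(-1)                       # serialize slot by slot

    def otfs_demodulate(r, M, N):
        """Matched receiver (Wigner transform + SFFT) for rectangular pulses."""
        r_grid = r.reshape(N, M).T
        return np.fft.fft(r_grid, axis=1) / np.sqrt(N)

    # Round-trip sanity check with random QPSK symbols on an 8x16 grid.
    M, N = 8, 16
    x = (np.random.choice([-1, 1], (M, N)) + 1j * np.random.choice([-1, 1], (M, N))) / np.sqrt(2)
    assert np.allclose(otfs_demodulate(otfs_modulate(x), M, N), x)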

The sixth-generation (6G) wireless technology recognizes the potential of reconfigurable intelligent surfaces (RIS) as an effective technique for intelligently manipulating channel paths through reflection to serve desired users. Full-duplex (FD) systems, enabling simultaneous transmission and reception at a base station (BS), offer the theoretical advantage of doubled spectral efficiency. However, the strong self-interference (SI) in FD systems significantly degrades performance; it can be mitigated by leveraging the capabilities of RIS. Moreover, accurately obtaining channel state information (CSI) for the RIS-reflected channels poses a critical challenge. Our objective is to maximize downlink (DL) user data rates while ensuring quality-of-service (QoS) for uplink (UL) users under imperfect CSI of the reflected channels. To address this, we propose a robust active BS and passive RIS beamforming (RAPB) scheme for RIS-FD, accounting for both SI and imperfect CSI. RAPB incorporates distributionally robust design, conditional value-at-risk (CVaR), and penalty convex-concave programming (PCCP) techniques. Simulation results demonstrate that UL/DL rate improvements are achieved under different levels of CSI imperfection. The proposed RAPB scheme proves effective across different RIS deployments and RIS/BS configurations. Benefiting from robust beamforming, RAPB outperforms existing baselines, including non-robust designs, deployment without RIS, conventional approximation methods, and half-duplex systems.
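
The CVaR ingredient admits a standard convex reformulation (the Rockafellar–Uryasev form); how exactly it enters the RAPB design is not spelled out in the abstract, but the generic construction is
\[
\mathrm{CVaR}_{\alpha}(X) = \min_{t\in\mathbb{R}} \Big\{\, t + \tfrac{1}{1-\alpha}\,\mathbb{E}\big[(X-t)^{+}\big] \Big\},
\]
so that a probabilistic QoS requirement of the form $\Pr\{X > 0\} \le 1-\alpha$ can be conservatively replaced by the convex constraint $\mathrm{CVaR}_{\alpha}(X) \le 0$.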

Categorical semantics of type theories are often characterized as structure-preserving functors. This is because in category theory both the syntax and the domain of interpretation are uniformly treated as structured categories, so that we can express interpretations as structure-preserving functors between them. This mathematical characterization of semantics makes it convenient to manipulate and to reason about relationships between interpretations. Motivated by this success of functorial semantics, we address the question of finding a functorial analogue in abstract interpretation, a general framework for comparing semantics, so that we can bring similar benefits of functorial semantics to semantic abstractions used in abstract interpretation. Major differences concern the notion of interpretation that is being considered. Indeed, conventional semantics are value-based whereas abstract interpretation typically deals with more complex properties. In this paper, we propose a functorial approach to abstract interpretation and study associated fundamental concepts therein. In our approach, interpretations are expressed as oplax functors in the category of posets, and abstraction relations between interpretations are expressed as lax natural transformations representing concretizations. We present examples of these formal concepts from monadic semantics of programming languages and discuss soundness.
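
As a down-to-earth instance of the soundness relation the paper generalizes, here is a toy sign-domain example in Python; note that the paper's formulation uses oplax functors and lax natural transformations rather than this simple concretization-based view, and all names below are illustrative:

    # Concrete semantics: integers; abstract semantics: the sign domain.
    NEG, ZERO, POS, TOP = "neg", "zero", "pos", "top"

    def gamma(a):
        """Concretization: which integers an abstract sign value stands for."""
        return {NEG: lambda n: n < 0, ZERO: lambda n: n == 0,
                POS: lambda n: n > 0, TOP: lambda n: True}[a]

    def abs_negate(a):
        """Abstract transformer for integer negation on the sign domain."""
        return {NEG: POS, ZERO: ZERO, POS: NEG, TOP: TOP}[a]

    # Soundness: negating any concrete value described by `a` yields a value
    # described by abs_negate(a), i.e. the abstract step over-approximates.
    for a in (NEG, ZERO, POS, TOP):
        assert all(gamma(abs_negate(a))(-n) for n in range(-50, 51) if gamma(a)(n))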

In this work, a set of motion primitives is defined for use in an energy-aware motion planning problem. The motion primitives are defined as sequences of control inputs to a simplified four-DOF dynamics model and are used to replace the traditional continuous control space used in many sampling-based motion planners. The primitives are implemented in a Stable Sparse Rapidly Exploring Random Tree (SST) motion planner and compared to an identical planner using a continuous control space. The planner using primitives was found to run 11.0\% faster, but it yielded solution paths that were on average worse and had higher variance; the solution path travel time, however, improved by about 50\%. Using motion primitives as the sampling space in SST can effectively reduce the run time of the algorithm, although at the cost of solution quality.
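
To illustrate how a primitive-based control space can replace continuous control sampling in a sampling-based planner such as SST, here is a minimal sketch; the four-DOF model, the primitive set, and the planner internals are placeholders, not the authors' implementation:

    import random
    from dataclasses import dataclass

    @dataclass
    class Primitive:
        """A motion primitive: a fixed sequence of control inputs plus a time step."""
        controls: list          # e.g. [(thrust, steer), ...]
        dt: float

    PRIMITIVES = [
        Primitive(controls=[(1.0, 0.0)] * 10, dt=0.1),    # coast straight
        Primitive(controls=[(1.0, 0.3)] * 10, dt=0.1),    # gentle left arc
        Primitive(controls=[(1.0, -0.3)] * 10, dt=0.1),   # gentle right arc
        Primitive(controls=[(0.2, 0.0)] * 5, dt=0.1),     # slow down
    ]

    def sample_primitive():
        """Replaces continuous control sampling: pick one primitive at random."""
        return random.choice(PRIMITIVES)

    def propagate(state, primitive, dynamics):
        """Roll the simplified dynamics model forward through the primitive."""
        for u in primitive.controls:
            state = dynamics(state, u, primitive.dt)
        return state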

We consider the problem of detecting causal relationships between discrete time series, in the presence of potential confounders. A hypothesis test is introduced for identifying the temporally causal influence of $(x_n)$ on $(y_n)$, causally conditioned on a possibly confounding third time series $(z_n)$. Under natural Markovian modeling assumptions, it is shown that the null hypothesis, corresponding to the absence of temporally causal influence, is equivalent to the underlying `causal conditional directed information rate' being equal to zero. The plug-in estimator for this functional is identified with the log-likelihood ratio test statistic for the desired test. This statistic is shown to be asymptotically normal under the alternative hypothesis and asymptotically $\chi^2$ distributed under the null, facilitating the computation of $p$-values when used on empirical data. The effectiveness of the resulting hypothesis test is illustrated on simulated data, validating the underlying theory. The test is also employed in the analysis of spike train data recorded from neurons in the V4 and FEF brain regions of behaving animals during a visual attention task. There, the test results are seen to identify interesting and biologically relevant information.
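
Under the stated Markovian assumptions, the quantity being tested is the causally conditioned directed information rate (the notation below is standard, though not necessarily the paper's):
\[
I(X \to Y \,\|\, Z) = \lim_{n\to\infty} \frac{1}{n} \sum_{i=1}^{n} I\big(X^{i};\, Y_i \,\big|\, Y^{i-1}, Z^{i}\big),
\]
with the null hypothesis of no temporally causal influence being $H_0 : I(X \to Y \,\|\, Z) = 0$. The plug-in estimate of this rate, scaled by the sample size, serves as the log-likelihood ratio statistic, whose asymptotic $\chi^2$ distribution under $H_0$ and asymptotic normality under the alternative yield the $p$-values.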

Learning-based vehicle planning is receiving increasing attention with the emergence of diverse driving simulators and large-scale driving datasets. While offline reinforcement learning (RL) is well suited for these safety-critical tasks, it still struggles to plan over extended horizons. In this work, we present a skill-based framework that enhances offline RL to overcome the long-horizon vehicle planning challenge. Specifically, we design a variational autoencoder (VAE) to learn skills from offline demonstrations. To mitigate the posterior collapse common in VAEs, we introduce a two-branch sequence encoder that captures both discrete options and continuous variations of the complex driving skills. The final policy treats the learned skills as actions and can be trained with any off-the-shelf offline RL algorithm. This facilitates a shift in focus from per-step actions to temporally extended skills, thereby enabling long-term reasoning about the future. Extensive results on CARLA show that our model consistently outperforms strong baselines in both training and new scenarios. Additional visualizations and experiments demonstrate the interpretability and transferability of the extracted skills.
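
A minimal sketch of the two-branch sequence encoder idea in PyTorch follows; the layer sizes, the Gumbel-softmax choice for the discrete branch, and all names are illustrative assumptions rather than the paper's exact model:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TwoBranchSkillEncoder(nn.Module):
        """Encodes a state-action sub-trajectory into a discrete option plus a
        continuous variation latent, as in an option-conditioned skill VAE."""
        def __init__(self, in_dim, hidden=128, n_options=8, z_dim=16):
            super().__init__()
            self.rnn = nn.GRU(in_dim, hidden, batch_first=True)
            self.to_logits = nn.Linear(hidden, n_options)    # discrete branch
            self.to_mu = nn.Linear(hidden, z_dim)             # continuous branch
            self.to_logvar = nn.Linear(hidden, z_dim)

        def forward(self, traj, tau=1.0):
            _, h = self.rnn(traj)                 # traj: (batch, T, in_dim)
            h = h.squeeze(0)
            option = F.gumbel_softmax(self.to_logits(h), tau=tau, hard=True)
            mu, logvar = self.to_mu(h), self.to_logvar(h)
            z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)   # reparameterize
            return option, z, mu, logvar

    # A decoder would reconstruct the action sequence from (option, z); the
    # downstream offline-RL policy then selects (option, z) as its "action".
    enc = TwoBranchSkillEncoder(in_dim=10)
    opt_code, z, mu, logvar = enc(torch.randn(4, 20, 10))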

Finite state machines (FSMs) are implemented with sequential circuits and are used to orchestrate the operation of hardware designs. Sequential obfuscation schemes aimed at preventing IP theft often operate by augmenting a design's FSM post-synthesis. Many such schemes rely on the ability to recover the FSM's topology from the synthesized design. In this paper, we present two tools that can improve the performance of topology extraction: RECUT, which extracts the FSM implementation from a netlist, and REFSM-SAT, which solves topology enumeration as a series of SAT problems. In some cases, these tools improve performance significantly over current methods, attaining up to a 99\% decrease in runtime.
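
A minimal sketch of the enumerate-transitions-as-SAT-problems idea (using the python-sat package) is shown below; the CNF encoding of the next-state logic, the variable naming, and this enumeration loop are illustrative assumptions, not REFSM-SAT's actual code:

    from pysat.solvers import Glucose3
    from pysat.formula import CNF

    def successors(cnf: CNF, cur_vars, next_vars, state_bits):
        """Enumerate all next states reachable from a given current state.
        cnf encodes the synthesized next-state logic; cur_vars/next_vars are the
        SAT variables of the current/next state registers; state_bits is the
        current state as a tuple of 0/1 values."""
        assumptions = [v if b else -v for v, b in zip(cur_vars, state_bits)]
        found = []
        with Glucose3(bootstrap_with=cnf.clauses) as solver:
            while solver.solve(assumptions=assumptions):
                model = set(solver.get_model())
                nxt = tuple(1 if v in model else 0 for v in next_vars)
                found.append(nxt)
                # Block this next state so the solver must find a different one.
                solver.add_clause([-v if b else v for v, b in zip(next_vars, nxt)])
        return found

Repeating this from the reset state over all newly discovered states yields the FSM's transition graph.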

Graph Convolutional Networks (GCNs) have been widely applied in various fields due to their significant power in processing graph-structured data. Typical GCNs and their variants work under a homophily assumption (i.e., nodes of the same class tend to connect to each other), while ignoring the heterophily that exists in many real-world networks (i.e., nodes of different classes tend to form edges). Existing methods deal with heterophily mainly by aggregating higher-order neighborhoods or combining immediate representations, which introduces noise and irrelevant information into the result. However, these methods do not change the propagation mechanism itself, which operates under the homophily assumption and is a fundamental part of GCNs. This makes it difficult to distinguish the representations of nodes from different classes. To address this problem, in this paper we design a novel propagation mechanism that can automatically adapt the propagation and aggregation process according to the homophily or heterophily between node pairs. To adaptively learn the propagation process, we introduce two measurements of the homophily degree between node pairs, which are learned from topological and attribute information, respectively. We then incorporate the learnable homophily degree into the graph convolution framework, which is trained in an end-to-end manner, enabling it to go beyond the assumption of homophily. More importantly, we theoretically prove that our model can constrain the similarity of representations between nodes according to their homophily degree. Experiments on seven real-world datasets demonstrate that this new approach outperforms the state-of-the-art methods under heterophily or low homophily, and achieves competitive performance under homophily.
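
A minimal sketch of what a learnable, homophily-aware propagation step could look like (PyTorch with a dense adjacency for readability) follows; the attribute-based scoring function and the signed aggregation are illustrative assumptions, not the paper's exact mechanism, which also uses a topology-based homophily measure:

    import torch
    import torch.nn as nn

    class HomophilyAwareConv(nn.Module):
        """Graph convolution whose per-edge weight is a learned homophily degree
        in [-1, 1]: positive edges aggregate normally, negative edges push the
        representations of the endpoints apart."""
        def __init__(self, in_dim, out_dim):
            super().__init__()
            self.lin = nn.Linear(in_dim, out_dim)
            self.score = nn.Linear(2 * in_dim, 1)   # attribute-based homophily score

        def forward(self, x, adj):
            n = x.size(0)
            pair = torch.cat([x.unsqueeze(1).expand(n, n, -1),
                              x.unsqueeze(0).expand(n, n, -1)], dim=-1)
            h = torch.tanh(self.score(pair)).squeeze(-1) * adj   # (n, n), masked by edges
            h = h / (adj.sum(dim=1, keepdim=True).clamp(min=1))  # degree normalization
            return torch.relu(self.lin(x + h @ x))               # signed propagation

    x = torch.randn(5, 8)                        # 5 nodes, 8 features
    adj = (torch.rand(5, 5) > 0.5).float()
    out = HomophilyAwareConv(8, 16)(x, adj)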
