
As the demand for wireless connectivity continues to soar, fifth-generation and beyond wireless networks are exploring new ways to efficiently utilize the wireless spectrum and reduce hardware costs. One such approach is the integrated sensing and communications (ISAC) paradigm, in which both functions jointly access the spectrum. Recent ISAC studies have focused on the upper millimeter-wave and low terahertz bands to exploit ultrawide bandwidths. At these frequencies, hybrid beamformers with few radio-frequency chains are used to offset expensive hardware, but at the cost of lower multiplexing gains. Wideband hybrid beamforming also suffers from the beam-split effect arising from subcarrier-independent (SI) analog beamformers. To overcome these limitations, this paper introduces a spatial path index modulation (SPIM) ISAC architecture, which transmits additional information bits by modulating the indices of the spatial paths between the base station and the communications users. We design the SPIM-ISAC beamformers by first estimating both radar and communications parameters with beam-split-aware algorithms. We then employ a family of hybrid beamforming techniques, including hybrid, SI and subcarrier-dependent analog-only, and beam-split-aware beamformers. Numerical experiments demonstrate that, in the presence of beam-split, the proposed SPIM-ISAC approach achieves significantly higher spectral efficiency than even fully digital non-SPIM beamformers.
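As a rough, self-contained illustration of the index-modulation idea above (the parameters L, K, and the bit-to-subset mapping below are hypothetical, not the paper's design), extra bits can be carried by the choice of which spatial paths are activated:

```python
import itertools
import numpy as np

# Hypothetical parameters for illustration: L candidate spatial paths,
# K of them activated per transmission (not the paper's exact design).
L, K = 5, 2
patterns = list(itertools.combinations(range(L), K))   # candidate path subsets
n_im_bits = int(np.floor(np.log2(len(patterns))))      # extra bits per channel use

def spim_select_paths(bits):
    """Map index-modulation bits to a subset of active spatial paths."""
    assert len(bits) == n_im_bits
    idx = int("".join(map(str, bits)), 2)
    return patterns[idx]

# 3 bits select one of 2^3 = 8 usable subsets out of C(5, 2) = 10; the analog
# beamformer would then be built from the steering vectors of these paths.
active_paths = spim_select_paths([1, 0, 1])
```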

Related content

Journal: Integration, the VLSI Journal. Publisher: Elsevier.

In conventional backscatter communication (BackCom) systems, time division multiple access (TDMA) and frequency division multiple access (FDMA) are generally adopted for multiuser backscattering because they are simple to implement. However, as the number of backscatter devices (BDs) proliferates, traditional centralized control incurs high overhead, and inter-user coordination is unaffordable for passive BDs; these issues have received little attention in existing works and remain unresolved. To this end, in this paper, we propose slotted ALOHA-based random access for BackCom systems, in which each BD is randomly chosen and allowed to coexist with one active device for hybrid multiple access. To evaluate the performance, a resource allocation problem maximizing the minimum transmission rate is formulated, in which transmit antenna selection, receive beamforming design, reflection coefficient adjustment, power control, and access probability determination are jointly considered. To deal with this intractable problem, we first transform the max-min objective function into an equivalent linear one, and then decompose the resulting problem into three sub-problems. Next, a block coordinate descent (BCD)-based greedy algorithm with a penalty function, successive convex approximation, and linear programming is designed to obtain sub-optimal solutions with tractable analysis. Simulation results demonstrate that the proposed algorithm outperforms benchmark algorithms in terms of transmission rate and fairness.
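For intuition on the first transformation step, here is a minimal sketch of how a max-min objective becomes a linear (epigraph) program; the linear per-user rate model and all parameters are toy assumptions, not the paper's formulation:

```python
import numpy as np
from scipy.optimize import linprog

# Toy epigraph reformulation: max_x min_k (a_k @ x) becomes
# max t  s.t.  a_k @ x >= t for all k  (here with x on a simplex).
rng = np.random.default_rng(0)
K, n = 4, 6
A = rng.uniform(0.5, 2.0, size=(K, n))   # illustrative per-user rate coefficients

# Stack variables z = [x (n entries), t]; linprog minimizes, so minimize -t.
c = np.zeros(n + 1)
c[-1] = -1.0
A_ub = np.hstack([-A, np.ones((K, 1))])  # -a_k @ x + t <= 0 for each user k
b_ub = np.zeros(K)
A_eq = np.hstack([np.ones((1, n)), np.zeros((1, 1))])   # sum(x) = 1
b_eq = np.array([1.0])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * n + [(None, None)])
x_opt, t_opt = res.x[:n], res.x[-1]      # t_opt is the achieved minimum rate
```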

Vision Transformers (ViTs) have been shown to be effective in various vision tasks. However, scaling them down to mobile-friendly sizes leads to significant performance degradation, so developing lightweight vision transformers has become a crucial area of research. This paper introduces CloFormer, a lightweight vision transformer that leverages context-aware local enhancement. CloFormer explores the relationship between the globally shared weights often used in vanilla convolutional operators and the token-specific context-aware weights appearing in attention, and proposes an effective and straightforward module to capture high-frequency local information. Specifically, we introduce AttnConv, a convolution operator in attention's style. The proposed AttnConv uses shared weights to aggregate local information and deploys carefully designed context-aware weights to enhance local features. In CloFormer, the combination of AttnConv and vanilla attention, which uses pooling to reduce FLOPs, enables the model to perceive both high-frequency and low-frequency information. Extensive experiments on image classification, object detection, and semantic segmentation demonstrate the superiority of CloFormer. The code is available at //github.com/qhfan/CloFormer.
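The following PyTorch sketch conveys the general flavor of a convolution "in attention's style": shared weights aggregate local context, and token-specific context-aware weights modulate the result. The gating choice and layer shapes here are illustrative and differ from CloFormer's actual AttnConv (see the repository for the real module):

```python
import torch
import torch.nn as nn

class AttnConvSketch(nn.Module):
    """Illustrative sketch only: shared-weight depthwise conv gathers local
    context, then token-specific weights (a sigmoid gate from a 1x1 conv)
    modulate it. Not CloFormer's exact AttnConv."""
    def __init__(self, dim, kernel_size=3):
        super().__init__()
        self.local = nn.Conv2d(dim, dim, kernel_size,
                               padding=kernel_size // 2,
                               groups=dim)        # globally shared weights
        self.ctx = nn.Conv2d(dim, dim, 1)         # generates per-token weights

    def forward(self, x):
        agg = self.local(x)                       # shared-weight aggregation
        w = torch.sigmoid(self.ctx(x))            # context-aware weights
        return w * agg                            # token-specific modulation

x = torch.randn(1, 64, 56, 56)
y = AttnConvSketch(64)(x)                         # same shape as the input
```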

Integrated sensing and communications (ISAC) is a spectrum-sharing paradigm that allows different users to jointly utilize and access the crowded electromagnetic spectrum. In this context, intelligent reflecting surfaces (IRSs) have lately emerged as an enabler for non-line-of-sight (NLoS) ISAC. Prior IRS-aided ISAC studies assume passive surfaces and rely on a continuous-valued phase-shift model; in practice, the phase-shifts are quantized. Moreover, recent research has shown substantial performance benefits with active IRS. In this paper, we include these characteristics in our IRS-aided ISAC model to maximize the receive radar and communications signal-to-noise ratios (SNRs) subject to a unimodular IRS phase-shift vector and a power budget. The resulting optimization is a highly non-convex unimodular quartic problem. We tackle it via a bi-quadratic transformation that splits the design into two quadratic sub-problems, which are solved using the power iteration method. The proposed approach employs M-ary unimodular sequence design via relaxed power method-like iteration (MaRLI) to design the quantized phase-shifts. Numerical experiments, with continuous-valued phase shifts as a benchmark, demonstrate that our active-IRS-aided ISAC design with MaRLI converges to a higher SNR as the number of IRS quantization bits increases.
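To illustrate the quantized power-method step at the heart of such designs, here is a minimal sketch of a power-method-like iteration with entrywise projection onto an M-ary unimodular alphabet; the matrix R is a stand-in for the paper's quadratic SNR form, not its actual objective:

```python
import numpy as np

rng = np.random.default_rng(1)
N, M = 32, 8                                    # IRS elements, 3-bit phase levels
A = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
R = A @ A.conj().T                              # illustrative Hermitian PSD matrix

alphabet = np.exp(2j * np.pi * np.arange(M) / M)   # quantized unit-modulus phases
v = rng.choice(alphabet, size=N)                   # unimodular initialization
for _ in range(100):
    u = R @ v                                      # power-method step
    # entrywise projection: pick the alphabet phase best aligned with u_n
    v = alphabet[np.argmax((u[:, None] * alphabet.conj()[None, :]).real, axis=1)]

snr_proxy = (v.conj() @ R @ v).real                # quadratic objective value
```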

As of today, 5G is rolling out across the world, but academia and industry have shifted their attention to sixth generation (6G) cellular technology for a fully digitalized, intelligent society in 2030 and beyond. 6G demands far more bandwidth to support extreme performance, exacerbating the spectrum shortage in mobile communications. In this context, this paper proposes a novel concept coined Full-Spectrum Wireless Communications (FSWC). It makes use of all communication-feasible spectral resources over the whole electromagnetic (EM) spectrum, from microwave, millimeter wave, terahertz (THz), and infrared light to visible and ultraviolet light. FSWC not only provides sufficient bandwidth but also enables new paradigms that take advantage of the peculiarities of different EM bands. This paper defines FSWC, justifies its necessity for 6G, and then discusses the opportunities and challenges of exploiting the THz and optical bands.

In the recent literature on estimating heterogeneous treatment effects, each proposed method makes its own set of restrictive assumptions about the intervention's effects and about which subpopulations to estimate explicitly. Moreover, most of the literature provides no mechanism to identify which subpopulations are the most affected--beyond manual inspection--and offers little guarantee on the correctness of the identified subpopulations. Therefore, we propose Treatment Effect Subset Scan (TESS), a new method for discovering which subpopulation in a randomized experiment is most significantly affected by a treatment. We frame this challenge as a pattern detection problem in which we efficiently maximize a nonparametric scan statistic (a measure of the conditional quantile treatment effect) over subpopulations: we identify the subpopulation that experiences the largest distributional change as a result of the intervention, while making minimal assumptions about the intervention's effects or the underlying data generating process. In addition to the algorithm, we demonstrate that under the sharp null hypothesis of no treatment effect the asymptotic Type I and Type II errors can be controlled, and we provide sufficient conditions for detection consistency--i.e., exact identification of the affected subpopulation. Finally, we validate the efficacy of the method by discovering heterogeneous treatment effects in simulations and in real-world data from a well-known program evaluation study.
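As a crude stand-in for the scanning idea (not TESS's actual scan statistic), the following sketch searches subsets of a categorical covariate for treated outcomes that exceed a control quantile more often than expected:

```python
import itertools
import numpy as np

rng = np.random.default_rng(2)
groups = rng.integers(0, 4, size=400)              # categorical covariate
treated = rng.integers(0, 2, size=400).astype(bool)
y = rng.standard_normal(400)
y[treated & (groups == 2)] += 1.0                  # planted effect in group 2

q = np.quantile(y[~treated], 0.8)                  # control 80th percentile
best_score, best_subset = -np.inf, None
for r in range(1, 5):                              # scan all covariate subsets
    for subset in itertools.combinations(range(4), r):
        mask = treated & np.isin(groups, subset)
        n = mask.sum()
        if n == 0:
            continue
        excess = (y[mask] > q).sum() - 0.2 * n     # observed minus expected count
        score = excess / np.sqrt(n)                # simple standardized excess
        if score > best_score:
            best_score, best_subset = score, subset
```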

This letter proposes a new user-cooperative offloading protocol, called user reciprocity, for backscatter communication (BackCom)-aided mobile edge computing systems with efficient computation. Its essence is that each user alternates between the active mode and the BackCom mode across time slots: in each slot, one user works in the active mode while the other works in the BackCom mode. In particular, the user in the BackCom mode can always exploit the signal transmitted by the user in the active mode for additional data transmission in a spectrum-sharing manner. To evaluate the proposed protocol, a computation efficiency (CE) maximization problem is formulated that jointly optimizes power control, time scheduling, reflection coefficient (RC) adjustment, and computing frequency allocation, subject to physical constraints on the maximum energy budget, the computing frequency threshold, the minimum number of computed bits, and the harvested energy threshold. To solve this non-convex problem, Dinkelbach's method and the quadratic transform are first employed to convert the fractional forms into linear ones. An iterative algorithm is then designed that decomposes the resulting problem to obtain a suboptimal solution. Closed-form solutions for the transmit power, the RC, and the local computing frequency are provided for further insight, and the analytical performance gain of the reciprocal mode is also derived. Simulation results demonstrate that the proposed scheme outperforms benchmark schemes in terms of CE.
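Dinkelbach's method, named above, reduces a fractional objective to a sequence of parametric subproblems. A minimal sketch, with a toy one-dimensional rate/energy model in place of the paper's structured subproblem:

```python
import numpy as np

def dinkelbach(N, D, xs, tol=1e-8, iters=100):
    """Maximize N(x)/D(x) over candidates xs (e.g. CE = bits / energy).
    The inner step here is a toy grid search, not the paper's convex solver."""
    lam = 0.0
    for _ in range(iters):
        vals = N(xs) - lam * D(xs)         # parametric subproblem F(lambda)
        x = xs[np.argmax(vals)]
        if abs(N(x) - lam * D(x)) < tol:   # F(lambda*) = 0 at the optimum
            break
        lam = N(x) / D(x)                  # Dinkelbach ratio update
    return x, lam

xs = np.linspace(0.01, 2.0, 2000)          # toy transmit power levels
x_opt, ce = dinkelbach(lambda p: np.log2(1 + 5 * p),   # toy computed bits
                       lambda p: 0.5 + p,              # toy energy cost
                       xs)
```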

To make indoor industrial cell-free massive multiple-input multiple-output (CF-mMIMO) networks free from wired fronthaul, this paper studies a multicarrier-division duplex (MDD)-enabled two-tier terahertz (THz) fronthaul scheme. More specifically, the two layers of fronthaul links rely on mutually orthogonal subcarrier sets in the same THz band, while the access links are implemented over the sub-6 GHz band. The proposed scheme leads to a complicated mixed-integer nonconvex optimization problem incorporating access point (AP) clustering, device selection, the assignment of subcarrier sets between the two fronthaul links, and the resource allocation at both the central processing unit (CPU) and the APs. To address the formulated problem, we first resort to low-complexity but efficient heuristic methods, thereby relaxing the binary variables. Then, the overall end-to-end rate is obtained by iteratively optimizing the assignment of subcarrier sets and the number of AP clusters. Furthermore, an advanced MDD frame structure consisting of three parallel data streams is tailored for the proposed scheme. Simulation results demonstrate the effectiveness of the proposed dynamic AP clustering approach in dealing with varying network sizes. Moreover, benefiting from the well-designed frame structure, MDD is capable of outperforming TDD in the two-tier fronthaul networks. Additionally, the effect of the THz bandwidth on system performance is analyzed, and it is shown that with sufficient frequency resources, our proposed two-tier fully-wireless fronthaul scheme can achieve performance comparable to fiber-optic-based systems. Finally, the superiority of the proposed MDD-enabled fronthaul scheme is verified in a practical scenario with realistic ray-tracing simulations.
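The outer loop described above can be pictured as a joint sweep over the number of AP clusters and the subcarrier split between the two fronthaul tiers. In this sketch, evaluate_rate is a hypothetical toy bottleneck model standing in for the paper's inner resource-allocation step:

```python
import numpy as np

def evaluate_rate(n_clusters, n_sub_tier1, n_sub_total=64):
    """Hypothetical stand-in: end-to-end rate limited by the weaker tier."""
    tier1 = n_sub_tier1 * np.log2(1 + 40 / n_clusters)
    tier2 = (n_sub_total - n_sub_tier1) * np.log2(1 + 10 * n_clusters)
    return min(tier1, tier2)

# Sweep cluster counts and tier-1 subcarrier allocations, keep the best rate.
best = max(((c, s, evaluate_rate(c, s))
            for c in range(1, 9) for s in range(1, 64)),
           key=lambda t: t[2])
n_clusters, n_sub_tier1, rate = best
```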

When estimating quantities and fields that are difficult to measure directly, such as the fluidity of ice, from point data sources, such as satellite altimetry, it is important to solve a numerical inverse problem that is formulated with Bayesian consistency. Otherwise, the resulting probability density function for the difficult-to-measure quantity or field will not be appropriately clustered around the truth. In particular, the inverse problem should be formulated by evaluating the numerical solution at the true point locations for direct comparison with the point data source. If the data are first fitted to a gridded or meshed field on the computational grid or mesh, and the inverse problem is formulated by comparing the numerical solution to the fitted field, the benefits of point data at a resolution finer than the grid density will be lost. We demonstrate, with examples in the fields of groundwater hydrology and glaciology, that a consistent formulation can increase the accuracy of results and aid discourse between modellers and observationalists. To do this, we bring point data into the finite element method ecosystem as discontinuous fields on meshes of disconnected vertices. Point evaluation can then be formulated as a finite element interpolation operation (dual-evaluation). This new abstraction is well-suited to automation, including automatic differentiation. We demonstrate this through an implementation in Firedrake, which generates highly optimised code for solving PDEs with the finite element method. Our solution integrates with dolfin-adjoint/pyadjoint, allowing PDE-constrained optimisation problems, such as data assimilation, to be solved through forward and adjoint mode automatic differentiation.
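A minimal sketch of this pattern in Firedrake, assuming its vertex-only-mesh point-data interface (the exact interpolation call varies across Firedrake versions):

```python
# Point evaluation as interpolation onto a mesh of disconnected vertices.
from firedrake import *

mesh = UnitSquareMesh(32, 32)
V = FunctionSpace(mesh, "CG", 1)
u = Function(V)                                  # e.g. a PDE solution

points = [[0.1, 0.2], [0.7, 0.4], [0.5, 0.9]]    # observation locations
vom = VertexOnlyMesh(mesh, points)               # mesh of disconnected vertices
P0DG = FunctionSpace(vom, "DG", 0)               # one DoF per observation point

u_at_points = Function(P0DG).interpolate(u)      # dual-evaluation at the points
obs = Function(P0DG)                             # would hold the observed values
misfit = assemble((u_at_points - obs) ** 2 * dx) # point-data misfit functional
```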

When is heterogeneity in the composition of an autonomous robotic team beneficial and when is it detrimental? We investigate and answer this question in the context of a minimally viable model that examines the role of heterogeneous speeds in perimeter defense problems, where defenders share a total allocated speed budget. We consider two distinct problem settings and develop strategies based on dynamic programming and on local interaction rules. We present a theoretical analysis of both approaches and our results are extensively validated using simulations. Interestingly, our results demonstrate that the viability of heterogeneous teams depends on the amount of information available to the defenders. Moreover, our results suggest a universality property: across a wide range of problem parameters the optimal ratio of the speeds of the defenders remains nearly constant.

Reasoning with knowledge expressed in natural language and Knowledge Bases (KBs) is a major challenge for Artificial Intelligence, with applications in machine reading, dialogue, and question answering. General neural architectures that jointly learn representations and transformations of text are very data-inefficient, and it is hard to analyse their reasoning process. These issues are addressed by end-to-end differentiable reasoning systems such as Neural Theorem Provers (NTPs), although they can only be used with small-scale symbolic KBs. In this paper we first propose Greedy NTPs (GNTPs), an extension to NTPs addressing their complexity and scalability limitations, thus making them applicable to real-world datasets. This result is achieved by dynamically constructing the computation graph of NTPs and including only the most promising proof paths during inference, thus obtaining orders of magnitude more efficient models. Then, we propose a novel approach for jointly reasoning over KBs and textual mentions, by embedding logic facts and natural language sentences in a shared embedding space. We show that GNTPs perform on par with NTPs at a fraction of their cost while achieving competitive link prediction results on large datasets, providing explanations for predictions, and inducing interpretable models. Source code, datasets, and supplementary material are available online at //github.com/uclnlp/gntp.
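The efficiency gain in GNTPs rests on expanding only the most promising proof paths, e.g., by a nearest-neighbor search over fact embeddings. A minimal sketch with illustrative random embeddings and a dot-product unification score (the real system uses learned embeddings and a different kernel):

```python
import numpy as np

rng = np.random.default_rng(4)
n_facts, dim, k = 10000, 64, 5
fact_emb = rng.standard_normal((n_facts, dim))   # illustrative KB fact embeddings
goal_emb = rng.standard_normal(dim)              # embedding of the current goal

# Unification score ~ similarity between goal and fact embeddings; keep only
# the top-k facts instead of unifying the goal against the entire KB.
scores = fact_emb @ goal_emb
top_k = np.argpartition(-scores, k)[:k]          # most promising proof paths
candidates = top_k[np.argsort(-scores[top_k])]   # ranked candidate facts
```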
