
In this paper, we study a secure integrated sensing and communication (ISAC) system in which a multi-antenna base station (BS) simultaneously serves a downlink communication user and senses, via reflected echo signals, the location of a target that may potentially act as an eavesdropper. Specifically, the location of the target is unknown and random, while its a priori distribution is available for exploitation. First, to characterize the sensing performance, we derive the posterior Cram\'er-Rao bound (PCRB), a lower bound on the mean squared error (MSE) of target sensing that exploits the prior distribution. Since the PCRB expression is intractable, we further derive a novel approximate upper bound on it in closed form. Next, under an artificial noise (AN) based beamforming structure at the BS, adopted to hinder information eavesdropping and to enhance the target's reflected signal power for sensing, we formulate a transmit beamforming optimization problem that maximizes the worst-case secrecy rate over all possible target (eavesdropper) locations, subject to a sensing accuracy threshold characterized by an upper bound on the PCRB. Despite the non-convexity of the formulated problem, we propose a two-stage approach that obtains its optimal solution by leveraging the semi-definite relaxation (SDR) technique. Numerical results validate the effectiveness of the proposed transmit beamforming design and demonstrate the non-trivial trade-off between secrecy performance and sensing performance in secure ISAC systems.
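
To make the SDR step concrete, the following is a minimal sketch of how a rank-relaxed beamforming subproblem of this kind can be posed with CVXPY. It is an illustrative simplification, not the paper's two-stage algorithm: the channels h and g, the power budget P, and the sensing threshold gamma are synthetic placeholders, and the objective is a surrogate (received signal power at the user) rather than the worst-case secrecy rate.

```python
# Illustrative SDR sketch for a transmit-beamforming subproblem.
# NOT the paper's two-stage algorithm: a minimal example of relaxing
# rank-one beamforming (w w^H -> PSD matrix W) with CVXPY.
# All channels and thresholds below are synthetic placeholders.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
N = 4                                    # number of BS antennas (assumed)
h = rng.standard_normal(N) + 1j * rng.standard_normal(N)  # user channel
g = rng.standard_normal(N) + 1j * rng.standard_normal(N)  # target/eavesdropper direction
P, gamma = 1.0, 0.5                      # power budget, sensing-power threshold

W = cp.Variable((N, N), hermitian=True)
constraints = [
    W >> 0,                              # PSD relaxation of W = w w^H
    cp.real(cp.trace(W)) <= P,           # transmit power budget
    cp.real(g.conj() @ W @ g) >= gamma,  # echo power toward the target for sensing
]
# Surrogate objective: maximize received signal power at the user.
prob = cp.Problem(cp.Maximize(cp.real(h.conj() @ W @ h)), constraints)
prob.solve()

# If the solution is (numerically) rank one, recover the beamformer w.
eigval, eigvec = np.linalg.eigh(W.value)
w = np.sqrt(eigval[-1]) * eigvec[:, -1]
print("rank-one gap (second/largest eigenvalue):", eigval[-2] / eigval[-1])
```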

Related Content


In backscatter communication (BC), a passive tag transmits information by just affecting an external electromagnetic field through load modulation. Thereby, the feed current of the excited tag antenna is modulated by adapting the passive termination load. This paper studies the achievable information rates with a freely adaptable passive load. As a prerequisite, we unify monostatic, bistatic, and ambient BC with circuit-based system modeling. We present the crucial insight that channel capacity is described by existing results on peak-power-limited quadrature Gaussian channels, because the steady-state tag current phasor lies on a disk. Consequently, we derive the channel capacity for the case of an unmodulated external field, for general passive, purely reactive, or purely resistive tag loads. We find that modulating both resistance and reactance is important for very high rates. We discuss the capacity-achieving load statistics, rate asymptotics, technical conclusions, and rate losses from value-range-constrained loads (which are found to be small for moderate constraints). We then demonstrate that near-capacity rates can be attained by more practical schemes: (i) amplitude-and-phase-shift keying on the reflection coefficient and (ii) simple load circuits of a few switched resistors and capacitors. Finally, we draw conclusions for the ambient BC channel capacity in important special cases.
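
As a companion illustration of load modulation, the sketch below maps passive termination loads to standard power-wave reflection coefficients and checks that the resulting phasors stay inside the unit disk; this loosely mirrors the disk constraint on the tag current phasor used above. The antenna impedance value is an assumed placeholder, and this is not the paper's circuit model.

```python
# Minimal sketch of load modulation in backscatter communication.
# Maps a passive termination load Z_L to the (power-wave) reflection
# coefficient Gamma = (Z_L - conj(Z_a)) / (Z_L + Z_a). For passive loads
# (Re{Z_L} >= 0) the phasor stays inside the unit disk, loosely mirroring
# the disk-constrained channel input discussed above.
import numpy as np

Z_a = 50.0 + 10.0j                       # assumed tag antenna impedance (placeholder)

def reflection_coefficient(Z_L: complex) -> complex:
    return (Z_L - np.conj(Z_a)) / (Z_L + Z_a)

# Sweep passive loads: resistances R >= 0 and reactances X of either sign.
R = np.linspace(0.0, 500.0, 101)
X = np.linspace(-500.0, 500.0, 201)
gammas = np.array([reflection_coefficient(r + 1j * x) for r in R for x in X])

print("max |Gamma| over passive loads:", np.abs(gammas).max())   # <= 1
# Purely reactive loads (R = 0) land on the boundary circle |Gamma| = 1:
print(abs(reflection_coefficient(1j * 75.0)))
```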

In observational studies, unobserved confounding is a major barrier to isolating the average causal effect (ACE). In these scenarios, two main approaches are often used: confounder adjustment for causality (CAC) and instrumental variable analysis for causation (IVAC). Nevertheless, both are subject to untestable assumptions, and it may therefore be unclear under which assumption-violation scenarios one method is superior to the other in terms of mitigating inconsistency for the ACE. Although general guidelines exist, direct theoretical comparisons of the trade-offs between the CAC and IVAC assumptions are limited. Using ordinary least squares (OLS) for CAC and two-stage least squares (2SLS) for IVAC, we analytically compare the relative inconsistency for the ACE of each approach under a variety of assumption-violation scenarios and discuss rules of thumb for practice. Additionally, a sensitivity framework is proposed to guide analysts in determining which approach may result in less inconsistency when estimating the ACE with a given dataset. We demonstrate our findings both through simulation and through an application examining whether maternal stress during pregnancy affects a neonate's birthweight. The implications of our findings for causal inference practice are discussed, providing guidance for analysts in judging whether CAC or IVAC may be more appropriate for a given situation.
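
A minimal simulation sketch of the OLS-versus-2SLS contrast described above is given below; the linear structural model and its coefficients are illustrative placeholders, not the paper's sensitivity framework.

```python
# Hedged simulation sketch contrasting OLS (with the confounder omitted)
# and 2SLS inconsistency for the ACE under unobserved confounding.
# The structural model and coefficients are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(1)
n, beta = 200_000, 2.0                   # sample size, true ACE

U = rng.standard_normal(n)               # unobserved confounder
Z = rng.standard_normal(n)               # instrument (valid: independent of U)
X = 1.0 * Z + 1.5 * U + rng.standard_normal(n)   # exposure
Y = beta * X + 2.0 * U + rng.standard_normal(n)  # outcome

C_xy = np.cov(X, Y)
ols = C_xy[0, 1] / C_xy[0, 0]                    # inconsistent: U is omitted
tsls = np.cov(Z, Y)[0, 1] / np.cov(Z, X)[0, 1]   # consistent under a valid IV

print(f"true ACE = {beta}, OLS = {ols:.3f}, 2SLS = {tsls:.3f}")
```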

The Densest Subgraph Problem (DSP) is an important primitive with a wide range of applications, including fraud detection, community detection, and DNA motif discovery. Edge-based density is one of the most common metrics in DSP. Although a maximum-flow algorithm can solve it exactly in polynomial time, the increasing amount of data and the high complexity of such algorithms motivate the search for approximation algorithms. Among these, the linear-programming dual of the problem gives rise to several iterative algorithms, including Greedy++, Frank-Wolfe, and FISTA, which redistribute edge weights to find the densest subgraph; however, these iterative algorithms oscillate around the optimal solution, which is unsatisfactory for fast convergence. We propose our main algorithm, Locally Optimal Weight Distribution (LOWD), which distributes the remaining edge weights via locally optimal operations and converges to the optimal solution monotonically. Theoretically, we show that it reaches the optimal state of a specific linear program known as the locally-dense decomposition. Moreover, we show that it is not necessary to consider most of the edges in the original graph. We therefore develop a pruning algorithm that uses a modified counting sort to remove unnecessary edges and nodes, so that the densest subgraph can be searched for in a much smaller graph.
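
For context, the sketch below implements the classical greedy peeling 2-approximation (Charikar's algorithm), the standard baseline that the iterative LP-based methods and LOWD improve upon; it is not LOWD itself, whose update rule is not reproduced here.

```python
# Baseline sketch: greedy peeling 2-approximation for the edge-density
# densest subgraph -- NOT the paper's LOWD algorithm, just the classical
# reference point that the iterative methods improve on.
import heapq
from collections import defaultdict

def densest_subgraph_peel(edges):
    """Repeatedly remove a minimum-degree node; track the best density seen."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    deg = {u: len(nbrs) for u, nbrs in adj.items()}
    heap = [(d, u) for u, d in deg.items()]
    heapq.heapify(heap)
    alive, m = set(adj), len(edges)
    best_density, best_size = 0.0, len(alive)
    removed = set()
    while alive:
        d, u = heapq.heappop(heap)
        if u in removed or d != deg[u]:
            continue                       # stale heap entry
        density = m / len(alive)           # edges / nodes of current subgraph
        if density > best_density:
            best_density, best_size = density, len(alive)
        removed.add(u)
        alive.remove(u)
        m -= deg[u]
        for v in adj[u]:
            if v in alive:
                deg[v] -= 1
                heapq.heappush(heap, (deg[v], v))
    # The peeled subgraph achieving best_density is a 2-approximation.
    return best_density, best_size

print(densest_subgraph_peel([(0, 1), (0, 2), (1, 2), (2, 3)]))
```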

Terahertz (THz) communication is one of the most promising candidates to accommodate high-speed mobile data services. This paper proposes a secure hybrid automatic repeat request with incremental redundancy (HARQ-IR) aided THz communication scheme, in which transmission secrecy is ensured by confusing the eavesdropper with dummy messages. The connection and secrecy outage probabilities are then derived in closed form. Besides, the tail behaviour of the connection outage probability in the high signal-to-noise ratio (SNR) regime is examined through asymptotic analysis, and an upper bound on the secrecy outage probability is obtained in a simple form by capitalizing on large deviations. With these results, we take a step further to investigate the secrecy long-term average throughput (LTAT). Noticing that HARQ-IR not only improves the reliability of the legitimate user but also increases the probability of being eavesdropped, we finally propose a robust rate adaptation policy that maximizes the LTAT while keeping the connection and secrecy outage probabilities within satisfactory limits.
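
The connection outage event described above lends itself to a simple Monte Carlo check; the sketch below estimates the probability that the mutual information accumulated over $K$ HARQ-IR rounds stays below the initial rate $R$, assuming Rayleigh fading, with rate and SNR values chosen arbitrarily rather than taken from the paper.

```python
# Monte Carlo sketch of the HARQ-IR connection outage probability:
# an outage occurs when the mutual information accumulated over K rounds
# stays below the initial rate R. Rayleigh fading is assumed; the rate
# and SNR values are placeholders, not the paper's settings.
import numpy as np

rng = np.random.default_rng(2)
K, R, snr_db, trials = 4, 4.0, 5.0, 1_000_000
snr = 10.0 ** (snr_db / 10.0)

gains = rng.exponential(scale=1.0, size=(trials, K))    # |h_k|^2 under Rayleigh fading
acc_info = np.log2(1.0 + snr * gains).sum(axis=1)       # accumulated mutual information
p_out = np.mean(acc_info < R)
print(f"connection outage after {K} rounds at R = {R}: {p_out:.2e}")
```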

Shared information is a measure of mutual dependence among multiple jointly distributed random variables with finite alphabets. For a Markov chain on a tree with a given joint distribution, we give a new proof of an explicit characterization of shared information. The Markov chain on a tree is shown to possess a global Markov property based on graph separation; this property plays a key role in our proofs. When the underlying joint distribution is not known, we exploit the special form of this characterization to provide a multiarmed bandit algorithm for estimating shared information, and analyze its error performance.
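
As we read it, the explicit characterization for a Markov chain on a tree reduces shared information to the minimum mutual information over the tree's edges; under that assumption, the sketch below computes this quantity from toy pairwise distributions.

```python
# Hedged sketch: assuming the characterization reduces shared information
# on a Markov tree to the minimum mutual information over tree edges, we
# compute edge MIs from given pairwise pmfs and take the minimum.
# The pmfs below are toy values, not from the paper.
import numpy as np

def mutual_information(p_xy: np.ndarray) -> float:
    """I(X;Y) in bits from a joint pmf matrix."""
    px = p_xy.sum(axis=1, keepdims=True)
    py = p_xy.sum(axis=0, keepdims=True)
    mask = p_xy > 0
    return float((p_xy[mask] * np.log2(p_xy[mask] / (px @ py)[mask])).sum())

# Toy Markov chain on a path (a special tree): edges (0,1) and (1,2),
# each edge specified by the joint pmf of its two endpoint variables.
edge_pmfs = {
    (0, 1): np.array([[0.4, 0.1], [0.1, 0.4]]),
    (1, 2): np.array([[0.3, 0.2], [0.2, 0.3]]),
}
edge_mi = {e: mutual_information(p) for e, p in edge_pmfs.items()}
print(edge_mi, "-> shared information estimate:", min(edge_mi.values()))
```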

Two autonomous mobile robots and a non-autonomous one, also called a bike, are placed at the origin of an infinite line. The autonomous robots can travel with maximum speed $1$. When a robot rides the bike, its speed increases to $v>1$; however, only one robot at a time can ride the bike, and the bike is non-autonomous in that it cannot move on its own. An Exit is placed on the line at an unknown location at distance $d$ from the origin. The robots have limited communication capabilities: one robot is a sender (denoted by S) in that it can send information wirelessly at any distance but receive messages only face-to-face (F2F), while the other robot is a receiver (denoted by R) in that it can receive information wirelessly but send information only F2F. The bike has no communication capabilities of its own. We refer to the resulting communication model of the ensemble of the two autonomous robots and the bike as S/R. Our general goal is to understand the impact of the non-autonomous robot in assisting the evacuation of the two autonomous robots. Our main contribution is a new evacuation algorithm that enables both robots to evacuate through the unknown Exit in the S/R model. We also analyze the resulting evacuation time as a function of the bike's speed $v$ and give upper and lower bounds on the competitive ratio of the resulting algorithm for the entire range of possible values of $v$.
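
For orientation, the sketch below implements the classical cow-path doubling strategy for a single unit-speed robot searching an unknown exit on a line; this is not the paper's S/R bike-evacuation algorithm, but it is the standard baseline whose competitive ratio (approaching 9) such algorithms are measured against.

```python
# Context sketch: the classical "cow-path" doubling strategy for one
# unit-speed robot searching an unknown exit on an infinite line.
# NOT the paper's S/R bike-evacuation algorithm -- just the standard
# single-robot baseline with competitive ratio approaching 9.

def cow_path_search_time(d: float, exit_right: bool) -> float:
    """Distance walked until the exit at distance d is reached, alternating
    directions with turn points doubled each sweep: 1, 2, 4, 8, ..."""
    assert d >= 1.0
    walked, turn, going_right = 0.0, 1.0, True
    while True:
        if going_right == exit_right and turn >= d:
            return walked + d            # exit found on this sweep
        walked += 2.0 * turn             # walk to the turn point and back
        turn *= 2.0
        going_right = not going_right

# Worst case: the exit lies just past a turn point in its direction.
d = 4.0 ** 5 + 1e-6
print(cow_path_search_time(d, exit_right=True) / d)   # ratio close to 9
```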

Optimization is offered as an objective approach to resolving complex, real-world decisions involving uncertainty and conflicting interests. It drives business strategies as well as public policies and, increasingly, lies at the heart of sophisticated machine learning systems. A paradigm used to approach potentially high-stakes decisions, optimization relies on abstracting the real world to a set of decision(s), objective(s) and constraint(s). Drawing from the modeling process and a range of actual cases, this paper describes the normative choices and assumptions that are necessarily part of using optimization. It then identifies six emergent problems that may be neglected: 1) Misspecified values can yield optimizations that omit certain imperatives altogether or incorporate them incorrectly as a constraint or as part of the objective, 2) Problematic decision boundaries can lead to faulty modularity assumptions and feedback loops, 3) Failing to account for multiple agents' divergent goals and decisions can lead to policies that serve only certain narrow interests, 4) Mislabeling and mismeasurement can introduce bias and imprecision, 5) Faulty use of relaxation and approximation methods, unaccompanied by formal characterizations and guarantees, can severely impede applicability, and 6) Treating optimization as a justification for action, without specifying the necessary contextual information, can lead to ethically dubious or faulty decisions. Suggestions are given to further understand and curb the harms that can arise when optimization is used wrongfully.

This paper proposes a fully automated method for recovering the location of a source and medium parameters in shallow waters. The scenario involves an unknown source emitting low-frequency sound waves in a shallow water environment, and a single hydrophone recording the signal. Firstly, theoretical tools are introduced to understand the robustness of the warping method and to propose and analyze an automated way to separate the modal components of the recorded signal. Secondly, using the spectrogram of each modal component, the paper investigates the best way to recover the modal travel times and provides stability estimates. Finally, a penalized minimization algorithm is presented to recover estimates of the source location and medium parameters. The proposed method is tested on experimental data of right whale gunshot and combustive sound sources, demonstrating its effectiveness in real-world scenarios.
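
A minimal sketch of the first processing step implied above, computing a spectrogram whose ridges carry the modal travel-time information, is given below; the synthetic chirp stands in for a modal arrival and is not the experimental gunshot data.

```python
# Minimal sketch of the first processing step implied above: computing a
# spectrogram of a recorded low-frequency pulse so that modal components
# can be separated in time-frequency. The chirp is a synthetic stand-in
# for a dispersive modal arrival, not experimental data.
import numpy as np
from scipy.signal import chirp, spectrogram

fs = 1000.0                              # sample rate in Hz (assumed)
t = np.arange(0, 2.0, 1.0 / fs)
# A downward chirp loosely mimics the dispersion of a single mode.
x = chirp(t, f0=120.0, t1=2.0, f1=30.0, method="hyperbolic")

f, tau, Sxx = spectrogram(x, fs=fs, nperseg=256, noverlap=192)
# The spectrogram ridge tracks the instantaneous frequency of the mode;
# modal travel-time estimates are read off such ridges.
ridge_freq = f[np.argmax(Sxx, axis=0)]
print(ridge_freq[:5], "...", ridge_freq[-5:])
```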

In this paper, we consider a full-duplex (FD) space shift keying (SSK) communication system, where information exchange between two users is assisted only by a reconfigurable intelligent surface (RIS). In particular, the impact of loop interference (LI) between the transmit and receive antennas, as well as residual self-interference (SI) from the RIS, is considered. Based on the maximum likelihood detector, we derive the conditional pairwise error probability and a numerical integration expression for the unconditional pairwise error probability (UPEP). Since a closed-form solution is difficult to obtain, we compute an accurate estimate via the Gauss-Chebyshev quadrature (GCQ) method. To gain further insight, we derive an expression for the UPEP in the high signal-to-noise ratio (SNR) region and then give the average bit error probability (ABEP) expression. Monte Carlo simulations are performed to validate the derived results. We find that SI and LI have severe impacts on system performance; fortunately, both disturbances can be effectively counteracted by increasing the number of RIS units.
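
The GCQ step mentioned above is standard and easy to illustrate; the sketch below approximates a finite integral by weighted samples at Chebyshev nodes, with a generic smooth integrand as a placeholder for the paper's exact UPEP integrand.

```python
# Sketch of the Gauss-Chebyshev quadrature (GCQ) step mentioned above:
# a finite integral lacking a closed form is approximated by weighted
# samples at Chebyshev nodes. The integrand is a generic placeholder,
# not the paper's exact UPEP integrand.
import numpy as np

def gcq(f, a: float, b: float, n: int = 50) -> float:
    r"""Approximate \int_a^b f(x) dx with n-point Gauss-Chebyshev nodes."""
    k = np.arange(1, n + 1)
    x = np.cos((2 * k - 1) * np.pi / (2 * n))     # nodes on (-1, 1)
    u = 0.5 * (b - a) * x + 0.5 * (b + a)         # map nodes to (a, b)
    w = (np.pi / n) * np.sqrt(1.0 - x**2)         # weights absorb 1/sqrt(1-x^2)
    return 0.5 * (b - a) * np.sum(w * f(u))

# Example: a smooth exponential integrand of the kind that appears when
# averaging a conditional pairwise error probability.
approx = gcq(lambda t: np.exp(-2.0 * np.sin(t) ** 2), 0.0, np.pi / 2)
print(approx)   # cross-check against scipy.integrate.quad if desired
```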

Autonomous vehicles and Advanced Driving Assistance Systems (ADAS) have the potential to radically change the way we travel. Many such vehicles currently rely on segmentation and object detection algorithms to detect and track objects in their surroundings. The data collected from the vehicles are often sent to cloud servers to facilitate continual/life-long learning of these algorithms. Considering the bandwidth constraints, the data are compressed before being sent to the servers, where they are typically decompressed for training and analysis. In this work, we propose the use of a learning-based compression codec to reduce the latency overhead incurred by the decompression operation in the standard pipeline. We demonstrate that the learned compressed representation can also be used to perform tasks such as semantic segmentation, in addition to decompression for recovering the images. We experimentally validate the proposed pipeline on the Cityscapes dataset, achieving a compression factor of up to $66 \times$ while preserving the information required to perform segmentation with a Dice coefficient of $0.84$, compared to $0.88$ achieved using decompressed images, while reducing the overall compute by $11\%$.
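
A toy sketch of the pipeline shape described above follows: a single learned encoder produces the compressed latent, and two heads consume it, one reconstructing the image and one predicting segmentation logits directly from the latent; the layer sizes and channel counts are illustrative assumptions, not the paper's codec.

```python
# Toy PyTorch sketch of the pipeline shape described above: one learned
# encoder produces a compressed latent; a decoder head reconstructs the
# image, while a segmentation head predicts masks directly from the
# latent, skipping decompression. Layer sizes are illustrative only.
import torch
import torch.nn as nn

class LatentCodecWithSegHead(nn.Module):
    def __init__(self, n_classes: int = 19):     # 19 = Cityscapes classes
        super().__init__()
        self.encoder = nn.Sequential(             # 3 x H x W -> 32 x H/8 x W/8
            nn.Conv2d(3, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 32, 5, stride=2, padding=2),
        )
        self.decoder = nn.Sequential(              # latent -> reconstruction
            nn.ConvTranspose2d(32, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1),
        )
        self.seg_head = nn.Sequential(             # latent -> class logits
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, n_classes, 1),
            nn.Upsample(scale_factor=8, mode="bilinear", align_corners=False),
        )

    def forward(self, x):
        z = self.encoder(x)                        # compressed representation
        return self.decoder(z), self.seg_head(z)

model = LatentCodecWithSegHead()
img = torch.randn(1, 3, 128, 256)
recon, seg_logits = model(img)
print(recon.shape, seg_logits.shape)               # (1,3,128,256), (1,19,128,256)
```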
