
Ambient backscatter communications have been identified as an enabler of ultra-low-energy wireless communications: a tag can send a message to a reader without emitting any wave of its own and without a battery, simply by backscattering the waves generated by an ambient source. In the simplest implementation of such a system, the tag sends a binary message by oscillating between two states, and the reader detects the bits by comparing the two distinct received powers. In this paper, for the first time, we study an ambient backscatter communication system in the presence of a diffusing surface, i.e., a simple flat panel that diffuses incident waves in all directions. We establish a closed-form analytical expression for the power contrast in the presence of the surface and show that the diffusing surface improves the power contrast. Moreover, our approach allows us to express the contrast-to-noise ratio and therefore to establish the BER performance. Furthermore, we derive the optimum source transmit power for a given target power contrast, which makes it possible to quantify the amount of energy that the diffusing surface saves at the source side.
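The detection principle described above can be illustrated with a minimal simulation (the state powers and noise level below are assumed for illustration, not taken from the paper): the reader decodes each bit by thresholding the received power halfway between the two state levels, so a larger power contrast directly lowers the BER.

```python
import random

random.seed(0)

# Assumed received powers (linear scale) for the tag's two backscatter
# states; a diffusing surface would enlarge the contrast P1 - P0.
P0, P1 = 1.0, 1.6
SIGMA = 0.1                      # std of the noise on the power measurement
THRESHOLD = (P0 + P1) / 2        # reader decision threshold

bits = [random.randint(0, 1) for _ in range(10_000)]
rx = [(P1 if b else P0) + random.gauss(0.0, SIGMA) for b in bits]

# Reader decision: received power above the midpoint means bit 1.
decoded = [1 if p > THRESHOLD else 0 for p in rx]
ber = sum(d != b for d, b in zip(decoded, bits)) / len(bits)
```

With a contrast of three noise standard deviations on each side of the threshold, the empirical BER sits near the Gaussian tail probability; shrinking the contrast degrades it, which is why the power contrast is the quantity to optimize.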


Valid online inference is an important problem in contemporary multiple testing research, to which various solutions have been proposed recently. It is well known that these methods can suffer from a significant loss of power if the null $p$-values are conservative. This occurs frequently, for instance, whenever discrete tests are performed. To reduce conservatism, we introduce the method of super-uniformity reward (SURE). This approach works by incorporating information about the individual null cumulative distribution functions (or upper bounds on them), which we assume to be available. Our approach yields several new "rewarded" procedures that provably control online error criteria based either on the family-wise error rate (FWER) or the marginal false discovery rate (mFDR). We prove that the rewarded procedures uniformly improve upon the non-rewarded ones, and illustrate their performance on simulated and real data.
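To make the reward idea concrete, here is a toy online spending rule (a simplification for illustration only, not the exact SURE procedures of the paper): when the null CDF $F$ satisfies $F(t) \le t$, a test run at nominal level $t$ only "spends" $F(t)$, and the saved difference can be handed to the next hypothesis.

```python
import math

def rewarded_spending(pvals, null_cdfs, alpha=0.1):
    """Bonferroni-type online spending: test i gets level alpha/2^i plus
    the level saved (t - F(t)) by the previous conservative test.
    Illustrative sketch only, not the exact SURE rules."""
    rejections, reward = [], 0.0
    for i, (p, F) in enumerate(zip(pvals, null_cdfs), start=1):
        t = alpha / 2 ** i + reward
        rejections.append(p <= t)
        reward = t - F(t)   # super-uniformity: F(t) <= t, so reward >= 0
    return rejections

# Discrete null: p uniform on {0.1, 0.2, ..., 1.0}, so F(t) = floor(10t)/10.
F = lambda t: math.floor(10 * t) / 10
hits = rewarded_spending([0.04, 0.06], [F, F])
```

The second test runs at level $0.025 + 0.05 = 0.075$ instead of $0.025$, so $p = 0.06$ is now rejected; with an exactly uniform null there is no reward and the rejection is lost.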

The use of large arrays might be the solution to the capacity problems of wireless communications. The signal-to-noise ratio (SNR) grows linearly with the number of array elements $N$ when using Massive MIMO receivers and half-duplex relays. Moreover, intelligent reflecting surfaces (IRSs) have recently attracted attention since they can relay signals to achieve an SNR that grows as $N^2$, which seems like a major benefit. In this paper, we use a deterministic propagation model for a planar array of arbitrary size to demonstrate that these SNR behaviors, and the associated power scaling laws, only apply in the far-field; they cannot be used to study the regime where $N\to\infty$. We derive an exact channel gain expression that captures three essential near-field behaviors and use it to revisit the power scaling laws. We derive new finite asymptotic SNR limits but also conclude that these are unlikely to be approached in practice. We further prove that an IRS-aided setup cannot achieve a higher SNR than an equal-sized Massive MIMO setup, despite its faster SNR growth, and we quantify analytically how much larger the IRS must be to achieve the same SNR. Finally, we show that an optimized IRS does not behave as an "anomalous" mirror but can vastly outperform that benchmark.
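The near-field saturation can be illustrated with elementary geometry (a lossless free-space toy model, not the paper's exact channel-gain expression): each element of a planar array facing an isotropic source captures the fraction $A\cos\theta/(4\pi r^2)$ of the radiated power, so the total gain grows linearly in $N$ while the array is small relative to the distance, yet is bounded by $1/2$ no matter how many elements are added, since even an infinite plane intercepts only half of the radiation.

```python
import math

def plane_capture_fraction(n_side, element_area=0.01, dist=1.0):
    """Fraction of an isotropic source's power captured by an
    n_side x n_side planar array at distance `dist`, with lossless
    elements of area `element_area` each (free-space toy model)."""
    a = math.sqrt(element_area)
    half = n_side / 2
    total = 0.0
    for i in range(n_side):
        for j in range(n_side):
            x = (i - half + 0.5) * a
            y = (j - half + 0.5) * a
            r = math.sqrt(x * x + y * y + dist * dist)
            # projected aperture: A * cos(theta) / (4 * pi * r^2)
            total += element_area * (dist / r) / (4 * math.pi * r * r)
    return total
```

Quadrupling $N$ in the far-field quadruples the captured power, but as the array keeps growing at fixed distance the total saturates below $1/2$: the power scaling laws break down exactly where the near-field geometry takes over.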

We provide a sufficient condition for solvability of a system of real quadratic equations $p_i(x)=y_i$, $i=1, \ldots, m$, where $p_i: {\mathbb R}^n \longrightarrow {\mathbb R}$ are quadratic forms. By solving a positive semidefinite program, one can reduce it to another system of the type $q_i(x)=\alpha_i$, $i=1, \ldots, m$, where $q_i: {\mathbb R}^n \longrightarrow {\mathbb R}$ are quadratic forms and $\alpha_i=\mathrm{tr\ } q_i$. We prove that the latter system has a solution $x \in {\mathbb R}^n$ if for some (equivalently, for any) orthonormal basis $A_1,\ldots, A_m$ of the space spanned by the matrices of the forms $q_i$, the operator norm of $A_1^2 + \ldots + A_m^2$ does not exceed $\eta/m$ for some absolute constant $\eta > 0$. The condition can be checked in polynomial time and is satisfied, for example, for random $q_i$ provided $m \leq \gamma \sqrt{n}$ for an absolute constant $\gamma >0$. We prove a similar sufficient condition for a system of homogeneous quadratic equations to have a non-trivial solution. While the condition we obtain is of an algebraic nature, the proof relies on analytic tools including Fourier analysis and measure concentration.
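The condition is straightforward to check numerically. The sketch below uses a hypothetical toy instance: $A_1, A_2$ are symmetric matrices orthonormal under the trace inner product $\langle A,B\rangle = \mathrm{tr}(AB)$, and power iteration (standing in for a proper eigensolver) estimates the operator norm of $A_1^2 + A_2^2$, to be compared against $\eta/m$ for the (unspecified) constant $\eta$.

```python
def op_norm(S, iters=500):
    """Largest eigenvalue of a symmetric positive semidefinite matrix S
    (given as a list of lists) via power iteration."""
    n = len(S)
    v = [1.0] * n
    lam = 0.0
    for _ in range(iters):
        w = [sum(S[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(abs(x) for x in w)
        if lam == 0.0:          # S v = 0: the norm along v is zero
            return 0.0
        v = [x / lam for x in w]
    return lam

def mat_square_sum(mats):
    """Return A_1^2 + ... + A_m^2 for square matrices given as lists."""
    n = len(mats[0])
    S = [[0.0] * n for _ in range(n)]
    for A in mats:
        for i in range(n):
            for j in range(n):
                S[i][j] += sum(A[i][k] * A[k][j] for k in range(n))
    return S

# Orthonormal pair under tr(AB): tr(A1*A2) = 0, tr(A1^2) = tr(A2^2) = 1.
A1 = [[1.0, 0.0, 0.0], [0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]
A2 = [[0.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 0.0]]
norm = op_norm(mat_square_sum([A1, A2]))   # compare against eta / m
```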

Current trends in the computer graphics community propose leveraging the massive parallel computational power of GPUs to accelerate physically based simulations. Collision detection and resolution is a fundamental part of this process, and it is also the most significant bottleneck: it easily becomes intractable as the number of vertices in the scene increases. Brute-force approaches exhibit quadratic growth in both computational time and memory footprint; while their parallelization on GPUs is trivial, their complexity discourages their use. Acceleration structures -- such as bounding volume hierarchies (BVHs) -- are often applied to increase performance, achieving logarithmic computational times for individual point queries. Nonetheless, their memory footprint also grows rapidly, and their parallelization on a GPU is problematic due to their branching nature. We propose using implicit surface representations learned through deep learning for collision handling in physically based simulations. Our proposed architecture has a complexity of O(n) -- or O(1) for a single point query -- and has no parallelization issues. We show how this permits accurate and efficient collision handling in physically based simulations, more specifically for cloth. In our experiments, we query up to 1M points in 300 milliseconds.
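The query interface can be sketched as follows, with an analytic sphere SDF standing in for the learned network (the paper's model is a trained implicit representation of the collider, which is not reproduced here). Each vertex is tested independently, which is what yields the O(n) batch / O(1) single-query cost and the branch-free parallelism.

```python
import math

def sphere_sdf(p, center=(0.0, 0.0, 0.0), radius=1.0):
    """Signed distance to a sphere: negative inside, positive outside.
    Analytic stand-in for a learned implicit surface."""
    return math.dist(p, center) - radius

def resolve_collisions(points, sdf, eps=1e-4):
    """Project every penetrating point back onto the surface along the
    numerical gradient of the SDF; O(n) over all cloth vertices."""
    out = []
    for p in points:
        d = sdf(p)
        if d >= 0.0:                 # no penetration: leave the vertex alone
            out.append(p)
            continue
        grad = []
        for k in range(3):           # finite-difference surface normal
            q = list(p)
            q[k] += eps
            grad.append((sdf(tuple(q)) - d) / eps)
        g = math.sqrt(sum(c * c for c in grad)) or 1.0
        out.append(tuple(p[k] - d * grad[k] / g for k in range(3)))
    return out

resolved = resolve_collisions([(0.5, 0.0, 0.0), (2.0, 0.0, 0.0)], sphere_sdf)
```

A trained network would additionally provide the gradient analytically via backpropagation, avoiding the finite differences used above.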

Information geometry is concerned with the application of differential geometry concepts to the study of the parametric spaces of statistical models. When the random variables are independent and identically distributed, the underlying parametric space exhibits constant curvature, which makes the geometry hyperbolic (negative curvature) or spherical (positive curvature). In this paper, we derive closed-form expressions for the components of the first and second fundamental forms of pairwise isotropic Gaussian-Markov random field manifolds, allowing the computation of the Gaussian, mean, and principal curvatures. Computational simulations using Markov chain Monte Carlo dynamics indicate that a change in the sign of the Gaussian curvature is related to the emergence of phase transitions in the field. Moreover, the curvatures are highly asymmetric for positive and negative displacements in the inverse temperature parameter, suggesting the existence of irreversible geometric properties in the parametric space along the dynamics. Furthermore, these asymmetric changes in the curvature of the space induce an intrinsic notion of time in the evolution of the random field.
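For reference, once the components $E, F, G$ of the first fundamental form and $e, f, g$ of the second fundamental form are available in closed form, the curvatures named above follow from the classical relations of surface theory:

$$K = \frac{eg - f^2}{EG - F^2}, \qquad H = \frac{eG - 2fF + gE}{2\left(EG - F^2\right)}, \qquad \kappa_{1,2} = H \pm \sqrt{H^2 - K},$$

where $K$ is the Gaussian curvature, $H$ the mean curvature, and $\kappa_{1,2}$ the principal curvatures.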

A reconfigurable intelligent surface (RIS) is a planar structure that is engineered to dynamically control the electromagnetic waves. In wireless communications, RISs have recently emerged as a promising technology for realizing programmable and reconfigurable wireless propagation environments through nearly passive signal transformations. With the aid of RISs, a wireless environment becomes part of the network design parameters that are subject to optimization. In this tutorial paper, we focus our attention on communication models for RISs. First, we review the communication models that are most often employed in wireless communications and networks for analyzing and optimizing RISs, and elaborate on their advantages and limitations. Then, we concentrate on models for RISs that are based on inhomogeneous sheets of surface impedance, and offer a step-by-step tutorial on formulating electromagnetically-consistent analytical models for optimizing the surface impedance. The differences between local and global designs are discussed and analytically formulated in terms of surface power efficiency and reradiated power flux through the Poynting vector. Finally, with the aid of numerical results, we discuss how approximate global designs can be realized by using locally passive RISs with zero electrical resistance (i.e., inhomogeneous reactance boundaries with no local power amplification), even for large angles of reflection and at high power efficiency.

We introduce a notion of "simulation" for labelled graphs, in which edges of the simulated graph are realized by regular expressions in the simulating graph, and prove that the tiling problem (aka "domino problem") for the simulating graph is at least as difficult as that for the simulated graph. We apply this to the Cayley graph of the "lamplighter group" $L=\mathbb Z/2\wr\mathbb Z$, and more generally to "Diestel-Leader graphs". We prove that these graphs simulate the plane, and thus deduce that the seeded tiling problem is unsolvable on the group $L$. We note that $L$ does not contain any plane in its Cayley graph, so our undecidability criterion by simulation covers cases not covered by Jeandel's criterion based on translation-like action of a product of finitely generated infinite groups. Our approach to tiling problems is strongly based on categorical constructions in graph theory.

We consider the problem of maximizing the Nash social welfare when allocating a set $\mathcal{G}$ of indivisible goods to a set $\mathcal{N}$ of agents. We study instances in which all agents have 2-value additive valuations: the value of every agent $i \in \mathcal{N}$ for every good $j \in \mathcal{G}$ is $v_{ij} \in \{p,q\}$, for $p,q \in \mathbb{N}$, $p \le q$. Perhaps surprisingly, we design an algorithm to compute an optimal allocation in polynomial time if $p$ divides $q$, i.e., when $p=1$ and $q \in \mathbb{N}$ after appropriate scaling. The problem is \classNP-hard whenever $p$ and $q$ are coprime and $p \ge 3$. In terms of approximation, we present positive and negative results for general $p$ and $q$. We show that our algorithm obtains an approximation ratio of at most 1.0345. Moreover, we prove that the problem is \classAPX-hard, with a lower bound of $1.000015$ achieved at $p/q = 4/5$.
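For intuition, the objective on a toy 2-value instance can be evaluated by exhaustive search (exponential in the number of goods, so a sanity-check tool only; this is not the polynomial-time algorithm of the paper, and the instance below is hypothetical):

```python
from itertools import product
from math import prod

def max_nash_welfare(values):
    """Exhaustive search for the allocation maximizing the Nash social
    welfare (geometric mean of agent utilities). values[i][j] is agent
    i's value for good j."""
    n_agents, n_goods = len(values), len(values[0])
    best, best_alloc = -1.0, None
    for alloc in product(range(n_agents), repeat=n_goods):
        util = [0] * n_agents
        for good, agent in enumerate(alloc):   # alloc[j] = owner of good j
            util[agent] += values[agent][good]
        nsw = prod(util) ** (1.0 / n_agents)
        if nsw > best:
            best, best_alloc = nsw, alloc
    return best, best_alloc

# 2-value instance with p = 1, q = 2: each agent values one good at 2.
values = [[2, 1, 1],
          [1, 2, 1]]
best, alloc = max_nash_welfare(values)
```

Here the optimum gives each agent its high-value good (utilities 3 and 2 in some order), for a Nash social welfare of $\sqrt{6}$.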

Sequence-to-sequence models are a powerful workhorse of NLP. Most variants employ a softmax transformation in both their attention mechanism and output layer, leading to dense alignments and strictly positive output probabilities. This density is wasteful, making models less interpretable and assigning probability mass to many implausible outputs. In this paper, we propose sparse sequence-to-sequence models, rooted in a new family of $\alpha$-entmax transformations, which includes softmax and sparsemax as particular cases, and is sparse for any $\alpha > 1$. We provide fast algorithms to evaluate these transformations and their gradients, which scale well for large vocabulary sizes. Our models are able to produce sparse alignments and to assign nonzero probability to a short list of plausible outputs, sometimes rendering beam search exact. Experiments on morphological inflection and machine translation reveal consistent gains over dense models.
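As a concrete member of the family, sparsemax (the $\alpha = 2$ case; the general $\alpha$-entmax evaluation in the paper is more involved) can be computed exactly with a sort-based threshold:

```python
def sparsemax(z):
    """Sparsemax: the Euclidean projection of a score vector onto the
    probability simplex (the alpha = 2 member of the entmax family).
    Exact sort-based algorithm, O(d log d) for d scores."""
    zs = sorted(z, reverse=True)
    csum = 0.0
    k, csum_k = 0, 0.0
    for j, zj in enumerate(zs, start=1):
        csum += zj
        if 1.0 + j * zj > csum:     # z_(j) is still above the threshold
            k, csum_k = j, csum
    tau = (csum_k - 1.0) / k        # threshold from the support of size k
    return [max(zi - tau, 0.0) for zi in z]
```

Unlike softmax, scores sufficiently far below the maximum receive exactly zero probability: `sparsemax([3.0, 0.0, -1.0])` puts all mass on the first entry, which is what makes sparse alignments and exact beam search possible.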

Degradation of image quality due to the presence of haze is a very common phenomenon. Existing methods such as DehazeNet [3] and MSCNN [11] address the drawbacks of hand-crafted haze-relevant features, but suffer from color distortion in gloomy (poorly illuminated) environments. In this paper, a cardinal (red, green, and blue) color fusion network for single-image haze removal is proposed. In the first stage, the network fuses the color information present in hazy images and generates multi-channel depth maps. The second stage estimates the scene transmission map from the generated dark channels using a multi-channel multi-scale convolutional neural network (McMs-CNN) to recover the original scene. To train the proposed network, we use two standard datasets, namely ImageNet [5] and D-HAZY [1]. Performance evaluation of the proposed approach is carried out using the structural similarity index (SSIM), mean squared error (MSE), and peak signal-to-noise ratio (PSNR), and shows that it outperforms existing state-of-the-art methods for single-image dehazing.
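The evaluation metrics are standard; for instance, PSNR derives directly from the MSE. A minimal sketch over flat pixel lists, assuming an 8-bit peak value of 255:

```python
import math

def mse(a, b):
    """Mean squared error between two equal-length pixel sequences."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means closer images.
    Identical images have zero MSE, hence infinite PSNR."""
    m = mse(a, b)
    return float("inf") if m == 0 else 10.0 * math.log10(peak ** 2 / m)
```

Real evaluations apply this per image over all channels (and SSIM additionally compares local luminance, contrast, and structure), but the MSE/PSNR relationship is exactly the one above.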
