
In this paper, the role of a finite-rate secret key in enhancing the secrecy performance of a system whose users operate in interference-limited scenarios is studied. To address this problem, a 2-user Gaussian Z-interference channel (Z-IC) with a secrecy constraint at the receiver is considered. One of the fundamental problems here is how to use the secret key as part of the encoding process. The paper proposes novel achievable schemes, which differ from each other in how the key is used in the encoding process. The first achievable scheme uses one part of the key as a one-time pad and the remaining part for wiretap coding. The encoding is performed such that the receiver experiencing interference can decode part of the interference without violating the secrecy constraint. As special cases of the derived result, one can obtain the secrecy rate region when the key is used entirely as a one-time pad or entirely as part of the wiretap coding. The second scheme uses the shared key to encrypt the message with a one-time pad and, in contrast to the previous case, no interference is decoded at the receiver. The paper also derives an outer bound on the sum rate and on the secrecy rate of the transmitter that causes interference. The main novelty in deriving the outer bound lies in the selection of the side information provided to the receiver and the use of the secrecy constraint at the receiver. The derived outer bounds are found to be tight depending on the channel conditions and the rate of the key. The scaling behaviour of the key rate is also explored for the different schemes using the notion of secure generalized degrees of freedom (GDOF). The optimality of the different schemes is characterized for some specific cases. The developed results show the importance of splitting the key rate in enhancing the secrecy performance of the system when users operate in an interference-limited environment.
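As a rough numerical illustration of why a shared key helps (a toy scalar Gaussian wiretap channel, not the Z-IC analysed in the paper), the sketch below combines a classical wiretap-coding rate with a one-time-pad contribution from a key of rate `key_rate`; the SNR values, the helper name and the capping by the main-channel capacity are illustrative assumptions rather than the paper's achievable scheme.

```python
import numpy as np

def awgn_capacity(snr):
    """Capacity of a real scalar Gaussian channel, in bits per channel use."""
    return 0.5 * np.log2(1.0 + snr)

def keyed_secrecy_rate(snr_main, snr_eve, key_rate):
    """Illustrative achievable secrecy rate when a shared key of rate `key_rate`
    is layered on top of wiretap coding: the one-time-pad part adds to the
    wiretap secrecy rate, but the total cannot exceed the main-channel capacity
    (a classical wiretap-with-shared-key style bound, not the paper's scheme)."""
    c_main = awgn_capacity(snr_main)
    c_eve = awgn_capacity(snr_eve)
    wiretap_rate = max(c_main - c_eve, 0.0)
    return min(c_main, wiretap_rate + key_rate)

for rk in [0.0, 0.25, 0.5, 1.0]:
    print(f"key rate {rk}: secrecy rate "
          f"{keyed_secrecy_rate(snr_main=10.0, snr_eve=3.0, key_rate=rk):.3f} bits/use")
```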

Related Content

The kernels of operating systems such as Windows, Linux, and MacOS are vulnerable to control-flow hijacking. Defenses exist, but many require efficient intra-address-space isolation. Execute-only memory, for example, requires read protection on code segments, and shadow stacks require protection from buffer overwrites. Intel's Protection Keys for Userspace (PKU) could, in principle, provide the intra-kernel isolation needed by such defenses, but, when used as designed, it applies only to user-mode application code. This paper presents an unconventional approach to memory protection, allowing PKU to be used within the operating system kernel on existing Intel hardware, replacing the traditional user/supervisor isolation mechanism and, simultaneously, enabling efficient intra-kernel isolation. We call the resulting mechanism Protection Keys for Kernelspace (PKK). To demonstrate its utility and efficiency, we present a system we call IskiOS: a Linux variant featuring execute-only memory (XOM) and the first-ever race-free shadow stacks for x86-64. Experiments with the LMBench kernel microbenchmarks display a geometric mean overhead of about 11% for PKK and no additional overhead for XOM. IskiOS's shadow stacks bring the total to 22%. For full applications, experiments with the system benchmarks of the Phoronix test suite display negligible overhead for PKK and XOM, and less than 5% geometric mean overhead for shadow stacks.

In this paper we consider the age of information (AoI) of a status updating system with a relay, where updates are delivered to the destination either over the direct link or over the two-hop link via the relay. An update packet generated at the source is sent to the receiver and the relay simultaneously. When the direct transmission fails, the relay takes over from the source and retransmits the packet until it is eventually received. Assuming that the propagation delay on each link is one time slot, we determine the stationary distribution of the AoI for three cases: (a) the relay has no buffer and packet delivery from the relay cannot be preempted by fresher updates from the source; (b) the relay has no buffer but packet substitution is allowed; (c) the relay has a buffer of size 1 and the packet in the buffer is replaced when a newer packet is obtained. The idea is to introduce a multi-dimensional state vector that contains the AoI as one component and to study the resulting multi-dimensional AoI stochastic process. We find the steady state of each multi-dimensional AoI process by solving the system of stationary equations. Once the steady-state distribution of the higher-dimensional AoI process is known, the stationary AoI distribution follows as one of its marginal distributions. For all three cases, we derive explicit expressions for the AoI distribution and calculate the mean and variance of the stationary AoI. All results are compared numerically, including the AoI performance of the status updating system without a relay. Numerical results show that adding the relay improves the system's timeliness dramatically, and that the no-buffer setting with preemption at the relay achieves both the minimal average AoI and the minimal AoI variance. Thus, for the system model discussed in this paper, there is no need to add a buffer at the relay in order to reduce the AoI at the receiver.
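A minimal Monte-Carlo sketch of case (a) may help fix intuition. The slot-by-slot bookkeeping, the link success probabilities `p_d`, `p_r`, `p_f` and the `simulate_aoi` helper are illustrative assumptions, not the paper's analytical model.

```python
import random

def simulate_aoi(p_d=0.5, p_r=0.8, p_f=0.6, slots=200_000, seed=1):
    """Sketch of case (a): the relay has no buffer and its retransmission
    cannot be preempted by fresher source packets.
    p_d: direct source->receiver success prob., p_r: source->relay success
    prob., p_f: relay->receiver success prob.  One-slot propagation delay."""
    random.seed(seed)
    aoi = 1                  # receiver AoI, in slots
    relay_age = None         # age of the packet the relay is forwarding, if any
    total = 0
    for _ in range(slots):
        delivered_age = None
        if relay_age is None:
            # source transmits a fresh packet; it arrives after one slot
            if random.random() < p_d:
                delivered_age = 1
            elif random.random() < p_r:
                relay_age = 1                # relay stores it for forwarding
        else:
            relay_age += 1                   # relay keeps retransmitting
            if random.random() < p_f:
                delivered_age = relay_age
                relay_age = None
        if delivered_age is not None and delivered_age < aoi + 1:
            aoi = delivered_age
        else:
            aoi += 1
        total += aoi
    return total / slots

print("mean AoI, case (a) sketch:", simulate_aoi())
```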

Gaussian processes with derivative information are useful in many settings where derivative information is available, including numerous Bayesian optimization and regression tasks that arise in the natural sciences. Incorporating derivative observations, however, comes with a dominating $O(N^3D^3)$ computational cost when training on $N$ points in $D$ input dimensions. This is intractable for even moderately sized problems. While recent work has addressed this intractability in the low-$D$ setting, the high-$N$, high-$D$ setting is still unexplored and of great value, particularly as machine learning problems increasingly become high dimensional. In this paper, we introduce methods to achieve fully scalable Gaussian process regression with derivatives using variational inference. Analogous to the use of inducing values to sparsify the labels of a training set, we introduce the concept of inducing directional derivatives to sparsify the partial derivative information of a training set. This enables us to construct a variational posterior that incorporates derivative information but whose size depends neither on the full dataset size $N$ nor the full dimensionality $D$. We demonstrate the full scalability of our approach on a variety of tasks, ranging from a high dimensional stellarator fusion regression task to training graph convolutional neural networks on Pubmed using Bayesian optimization. Surprisingly, we find that our approach can improve regression performance even in settings where only label data is available.
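To make the notion of inducing directional derivatives concrete, the sketch below writes out the cross-covariances such a variational posterior conditions on for an RBF kernel: the covariance between a function value and a directional derivative, and between two directional derivatives. The function names and the fixed lengthscale `ell` are illustrative; this is standard kernel algebra, not the paper's implementation.

```python
import numpy as np

def rbf(x, z, ell=1.0):
    """Squared-exponential kernel k(x, z) = exp(-||x - z||^2 / (2 ell^2))."""
    d = x - z
    return np.exp(-0.5 * np.dot(d, d) / ell**2)

def cov_f_dirderiv(x, z, v, ell=1.0):
    """Cov(f(x), D_v f(z)): directional derivative of k(x, z) w.r.t. z along v."""
    return rbf(x, z, ell) * np.dot(x - z, v) / ell**2

def cov_dirderiv_dirderiv(z1, u, z2, v, ell=1.0):
    """Cov(D_u f(z1), D_v f(z2)): mixed second derivative of the RBF kernel."""
    r = z1 - z2
    hess = rbf(z1, z2, ell) * (np.eye(len(r)) / ell**2 - np.outer(r, r) / ell**4)
    return u @ hess @ v

x, z = np.array([0.0, 0.0]), np.array([1.0, 0.5])
v = np.array([1.0, 0.0])                      # a single inducing direction
print(cov_f_dirderiv(x, z, v), cov_dirderiv_dirderiv(x, v, z, v))
```

The point of conditioning on a handful of such directional derivatives, rather than on all $ND$ partial derivatives, is that the posterior size becomes independent of both $N$ and $D$.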

We consider anomalous diffusion for molecular communication (MC) with a passive receiver. We first consider the probability density function of a molecule's location at a given time in a space of arbitrary dimension. The expected number of observed molecules inside the receptor space of the receiver at a given time is derived, taking into account the life expectancy of the molecules. In addition, an implicit solution for the time that maximizes the expected number of observed molecules is obtained in terms of Fox's H-function. Closed-form expressions for the bit error rate are derived for single-bit-interval and multi-bit-interval transmissions. It is shown that lifetime-limited molecules can reduce the inter-symbol interference while also enhancing the reliability of MC systems at a suitable observation time.
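For intuition, the sketch below evaluates the expected observation count in the much simpler Brownian (non-anomalous) special case with exponentially distributed molecule lifetimes, using the usual point-concentration approximation; the anomalous-diffusion result of the paper instead involves Fox's H-function. All parameter values and the `expected_observed` helper are illustrative assumptions.

```python
import numpy as np

def expected_observed(n_tx, d, t, D=1e-10, r_rx=1e-6, tau=1.0):
    """Expected number of molecules observed at time t inside a small spherical
    passive receiver of radius r_rx at distance d from the point source, for
    normal Brownian diffusion (coefficient D) with exponential molecule
    lifetime of mean tau, via the point-concentration approximation."""
    v_rx = 4.0 / 3.0 * np.pi * r_rx**3
    conc = (4.0 * np.pi * D * t) ** -1.5 * np.exp(-d**2 / (4.0 * D * t))
    return n_tx * v_rx * conc * np.exp(-t / tau)

t = np.linspace(1e-3, 2.0, 1000)
n_obs = expected_observed(n_tx=1e4, d=5e-6, t=t)
print("observation time maximizing the expected count ~", t[np.argmax(n_obs)], "s")
```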

In classical information theory, a causal relationship between two random variables is typically modelled by assuming that, for every possible state of one of the variables, there exists a particular distribution of states of the second variable. Let us call these two variables the causal and caused variables, respectively. We assume that both of these random variables are continuous and one-dimensional. Carrying out independent transformations on the causal and caused variables creates two new random variables. Here, we consider transformations that are differentiable and strictly increasing, and call these increasing transformations. If, for example, the mass of an object is a caused variable, a logarithmic transformation could be applied to produce a new caused variable. Any causal relationship (as defined here) is associated with a channel capacity, which is the maximum rate at which information could be sent if the causal relationship were used as a signalling system. Channel capacity is unaffected when the variables are changed by increasing transformations. For any causal relationship we show that there is always a way to transform the caused variable such that the entropy associated with the caused variable is independent of the value of the causal variable. Furthermore, the resulting universal entropy has an absolute value equal to the channel capacity associated with the causal relationship. This observation may be useful in statistical applications, and it implies that, for any causal relationship, there is a `natural' way to transform a continuous caused variable. With additional constraints on the causal relationship, we show that a natural transformation of both variables can be found such that the transformed system behaves like a good measuring device, with the expected value of the caused variable being approximately equal to the value of the causal variable.
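These claims rest on two standard change-of-variable identities for differential entropy together with the invariance of mutual information under increasing transformations; a brief reminder (not the paper's derivation) is:

```latex
% For a strictly increasing, differentiable transformation g of the caused
% variable Y, with causal variable X:
\begin{align}
  h\bigl(g(Y)\bigr)            &= h(Y) + \mathbb{E}\bigl[\log g'(Y)\bigr], \\
  h\bigl(g(Y)\mid X = x\bigr)  &= h(Y \mid X = x) + \mathbb{E}\bigl[\log g'(Y)\mid X = x\bigr], \\
  I\bigl(X; g(Y)\bigr)         &= I(X; Y).
\end{align}
```

The construction described in the abstract can be read as choosing $g$ so that the right-hand side of the second identity no longer depends on $x$, while the third identity guarantees that the channel capacity is unchanged by the transformation.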

This paper presents novel numerical approaches to finding the secrecy capacity of the multiple-input multiple-output (MIMO) wiretap channel subject to multiple linear transmit covariance constraints, including a sum power constraint, per-antenna power constraints and an interference power constraint. An analytical solution to this problem is not known, and existing numerical solutions suffer from slow convergence and/or high per-iteration complexity. Deriving computationally efficient solutions to the secrecy capacity problem is challenging since the secrecy rate is expressed as a difference of convex functions (DC) of the transmit covariance matrix, whose convexity properties are known only for some special cases. In this paper we propose two low-complexity methods to compute the secrecy capacity, along with a convex reformulation for degraded channels. In the first method we capitalize on the accelerated DC algorithm, which requires solving a sequence of convex subproblems, for which we propose an efficient iterative algorithm in which each iteration admits a closed-form solution. In the second method, we rely on the concave-convex equivalent reformulation of the secrecy capacity problem, which allows us to derive the so-called partial best response algorithm to obtain an optimal solution. Notably, each iteration of the second method can also be carried out in closed form. The simulation results demonstrate a faster convergence rate of our methods compared to other known solutions. We carry out extensive numerical experiments to evaluate the impact of various parameters on the achieved secrecy capacity.
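For readers who want to experiment, a plain (non-accelerated) DC iteration for the sum-power-constrained case can be prototyped in a few lines with CVXPY: the subtracted log-det term is linearized at the current iterate and the resulting convex subproblem is solved numerically. This is only a generic DCA baseline under a single sum power constraint, not the closed-form accelerated method or the partial best response algorithm proposed in the paper; the function name, real-valued channels and dimensions are illustrative.

```python
import numpy as np
import cvxpy as cp

def dca_secrecy_rate(Hb, He, P, iters=30):
    """Basic DC iteration for
        max_Q  logdet(I + Hb Q Hb^T) - logdet(I + He Q He^T)
        s.t.   Q >= 0, tr(Q) <= P,
    linearizing the subtracted concave term at the current iterate."""
    nt = Hb.shape[1]
    Q = np.eye(nt) * (P / nt)
    for _ in range(iters):
        M = np.eye(He.shape[0]) + He @ Q @ He.T
        grad = He.T @ np.linalg.inv(M) @ He            # gradient of logdet(I + He Q He^T)
        Qv = cp.Variable((nt, nt), symmetric=True)
        obj = cp.log_det(np.eye(Hb.shape[0]) + Hb @ Qv @ Hb.T) - cp.trace(grad @ Qv)
        cp.Problem(cp.Maximize(obj), [Qv >> 0, cp.trace(Qv) <= P]).solve()
        Q = Qv.value
    rate = (np.linalg.slogdet(np.eye(Hb.shape[0]) + Hb @ Q @ Hb.T)[1]
            - np.linalg.slogdet(np.eye(He.shape[0]) + He @ Q @ He.T)[1])
    return Q, rate / np.log(2)                         # bits per channel use

rng = np.random.default_rng(0)
Hb, He = rng.standard_normal((2, 4)), rng.standard_normal((2, 4))
Q, rate = dca_secrecy_rate(Hb, He, P=10.0)
print("secrecy rate estimate:", rate)
```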

This paper investigates the problem of model aggregation in federated learning systems aided by multiple reconfigurable intelligent surfaces (RISs). The effective integration of computation and communication is achieved by over-the-air computation (AirComp). Since all local parameters are transmitted over shared wireless channels, the undesirable propagation error inevitably deteriorates the performance of global aggregation. The objective of this work is to 1) reduce the signal distortion of AirComp and 2) enhance the convergence rate of federated learning. To this end, the mean-square error and the device set are optimized by designing the transmit power, controlling the receive scalar, tuning the phase shifts, and selecting the participants in the model uploading process. The formulated mixed-integer non-linear problem (P0) is decomposed into a non-convex problem (P1) with continuous variables and a combinatorial problem (P2) with integer variables. To solve subproblem (P1), closed-form expressions for the transceivers are first derived, and the multi-antenna cases are then addressed by semidefinite relaxation. Next, the phase-shift design problem is tackled by invoking a penalty-based successive convex approximation method. For subproblem (P2), difference-of-convex programming is adopted to optimize the device set for convergence acceleration while satisfying the aggregation error requirement. An alternating optimization algorithm is then proposed to find a suboptimal solution to problem (P0). Finally, simulation results demonstrate that i) the designed algorithm converges faster and aggregates the model more accurately than the baselines, and ii) the training loss and prediction accuracy of federated learning can be improved significantly with the aid of multiple RISs.
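As a small illustration of the AirComp distortion being minimized, the sketch below evaluates the aggregation MSE for single-antenna devices and a single-antenna receiver assisted by one RIS, using the common channel-inversion transmit scalars for which the residual MSE reduces to the noise term. The single-RIS, single-antenna setting, the helper name `aircomp_mse` and the channel statistics are simplifying assumptions relative to the multi-RIS, multi-antenna problem treated in the paper.

```python
import numpy as np

def aircomp_mse(h_d, H_r, g, phases, P, sigma2):
    """AirComp aggregation MSE for K single-antenna devices, one RIS with N
    elements and a single-antenna receiver.  With channel-inverting transmit
    scalars, the signal-misalignment error vanishes and the residual MSE is
    sigma2 / eta, where the receive scaling eta is set by the weakest
    effective link so that every device respects its power budget P.
    h_d: (K,) direct channels, H_r: (N, K) RIS-device channels,
    g: (N,) RIS-receiver channel, phases: (N,) RIS phase shifts."""
    theta = np.exp(1j * phases)
    h_eff = h_d + (g.conj() * theta) @ H_r          # (K,) effective channels
    eta = np.min(P * np.abs(h_eff) ** 2)            # receive scaling factor
    return sigma2 / eta

rng = np.random.default_rng(0)
K, N = 8, 32
h_d = (rng.standard_normal(K) + 1j * rng.standard_normal(K)) / np.sqrt(2)
H_r = (rng.standard_normal((N, K)) + 1j * rng.standard_normal((N, K))) / np.sqrt(2)
g = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
print("aggregation MSE:", aircomp_mse(h_d, H_r, g, phases=np.zeros(N), P=1.0, sigma2=0.1))
```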

In this paper, we consider the numerical approximation of the periodic measure of time-periodic stochastic differential equations (SDEs) under a weak dissipativity condition. For this, we first study the existence of the periodic measure $\rho_t$ and the large-time behaviour of $\mathcal{U}(t+s,s,x) := \mathbb{E}\phi(X_{t}^{s,x})-\int\phi d\rho_t,$ where $X_t^{s,x}$ is the solution of the SDE and $\phi$ is a smooth test function with polynomial growth at infinity. We prove that $\mathcal{U}$ and all its spatial derivatives decay to 0 at an exponential rate in time $t$, in the sense of an average over the initial time $s$. We also prove the existence and geometric ergodicity of the periodic measure of the discretized semi-flow obtained from the Euler-Maruyama scheme, as well as moment estimates of any order when the time step is sufficiently small (uniformly over all orders). We then show that the weak error of the numerical scheme over an infinite horizon is of order $1$ in the time step. We prove that the choice of step size can be uniform for all test functions $\phi$. Subsequently, we are able to estimate the average periodic measure with ergodic numerical schemes.
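To illustrate the numerical object being analysed, the sketch below applies the Euler-Maruyama scheme to a simple dissipative SDE with period-1 coefficients and estimates the mean of the periodic measure at each phase by averaging over many periods after a burn-in. The particular drift, the parameter values and the helper name are illustrative assumptions, not the general setting of the paper.

```python
import numpy as np

def euler_maruyama_periodic(a=1.0, sigma=0.5, h=0.01, periods=2000, seed=0):
    """Euler-Maruyama discretization of the time-periodic SDE
        dX_t = (a*sin(2*pi*t) - X_t) dt + sigma dW_t    (period 1),
    returning an ergodic estimate of the mean of the periodic measure rho_t
    at each phase of the period (an illustrative dissipative example)."""
    rng = np.random.default_rng(seed)
    steps_per_period = int(round(1.0 / h))
    burn_in = periods // 10
    x = 0.0
    phase_sums = np.zeros(steps_per_period)
    for p in range(periods):
        for m in range(steps_per_period):
            t = m * h
            x += (a * np.sin(2 * np.pi * t) - x) * h + sigma * np.sqrt(h) * rng.standard_normal()
            if p >= burn_in:                   # discard burn-in periods
                phase_sums[m] += x
    return phase_sums / (periods - burn_in)

mean_rho = euler_maruyama_periodic()
print("estimated mean of rho_t at phases 0, 0.25, 0.5:",
      mean_rho[0], mean_rho[25], mean_rho[50])
```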

We revisit the $k$-Hessian eigenvalue problem on a smooth, bounded, $(k-1)$-convex domain in $\mathbb R^n$. First, we obtain a spectral characterization of the $k$-Hessian eigenvalue as the infimum of the first eigenvalues of linear second-order elliptic operators whose coefficients belong to the dual of the corresponding G\r{a}rding cone. Second, we introduce a non-degenerate inverse iterative scheme to solve the eigenvalue problem for the $k$-Hessian operator. We show that the scheme converges, with a rate, to the $k$-Hessian eigenvalue for all $k$. When $2\leq k\leq n$, we also prove a local $L^1$ convergence of the Hessian of solutions of the scheme. Hyperbolic polynomials play an important role in our analysis.
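For $k = 1$ the $k$-Hessian operator reduces to the Laplacian and the eigenvalue problem becomes the first Dirichlet eigenvalue problem, for which inverse iteration is the classical inverse power method. The finite-difference sketch below illustrates only this linear special case (grid size and helper name are illustrative); the paper's scheme is a nonlinear, non-degenerate analogue for general $k$.

```python
import numpy as np

def smallest_dirichlet_eigenvalue(n=200, iters=100):
    """Inverse power iteration for -u'' = lam * u on (0, 1) with u(0) = u(1) = 0,
    i.e. the k = 1 (Laplacian) special case of the k-Hessian eigenvalue problem,
    discretized by second-order finite differences on n interior points."""
    h = 1.0 / (n + 1)
    A = (np.diag(np.full(n, 2.0))
         - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    u = np.ones(n)
    u /= np.linalg.norm(u)
    for _ in range(iters):
        w = np.linalg.solve(A, u)      # one inverse iteration step
        u = w / np.linalg.norm(w)      # renormalize
    return u @ A @ u                   # Rayleigh quotient at the converged vector

print(smallest_dirichlet_eigenvalue(), "vs pi^2 =", np.pi**2)
```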

Convolutional Neural Networks experience catastrophic forgetting when optimized on a sequence of learning problems: as they meet the objective of the current training examples, their performance on previous tasks drops drastically. In this work, we introduce a novel framework to tackle this problem with conditional computation. We equip each convolutional layer with task-specific gating modules that select which filters to apply to the given input. This way, we achieve two appealing properties. Firstly, the execution patterns of the gates allow us to identify and protect important filters, ensuring no loss in the performance of the model on previously learned tasks. Secondly, by using a sparsity objective, we can promote the selection of a limited set of kernels, allowing us to retain sufficient model capacity to digest new tasks. Existing solutions require, at test time, awareness of the task to which each example belongs. This knowledge, however, may not be available in many practical scenarios. Therefore, we additionally introduce a task classifier that predicts the task label of each example, to deal with settings in which a task oracle is not available. We validate our proposal on four continual learning datasets. Results show that our model consistently outperforms existing methods both in the presence and in the absence of a task oracle. Notably, on the Split SVHN and Imagenet-50 datasets, our model yields up to 23.98% and 17.42% improvement in accuracy w.r.t. competing methods.
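A minimal PyTorch sketch may help picture the gating mechanism: each convolutional layer holds per-task gate parameters over its output filters, the gates rescale the feature maps, and a sparsity term on the gates is added to the loss. The soft sigmoid gates, module name and hyperparameters below are simplifications; the paper learns (approximately) binary gates and additionally trains a task classifier for the oracle-free setting.

```python
import torch
import torch.nn as nn

class TaskGatedConv(nn.Module):
    """Simplified task-conditioned gated convolution: each task owns a vector
    of gate logits over the output filters, and the resulting sigmoid gates
    softly select which feature maps are passed on."""
    def __init__(self, in_ch, out_ch, n_tasks, kernel_size=3):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size, padding=kernel_size // 2)
        self.gate_logits = nn.Parameter(torch.zeros(n_tasks, out_ch))

    def forward(self, x, task_id):
        gates = torch.sigmoid(self.gate_logits[task_id])     # (out_ch,)
        y = self.conv(x) * gates.view(1, -1, 1, 1)            # gate each filter
        sparsity = gates.mean()                                # add to the loss
        return y, sparsity

layer = TaskGatedConv(in_ch=3, out_ch=16, n_tasks=5)
x = torch.randn(2, 3, 32, 32)
y, sparsity = layer(x, task_id=1)
loss = y.pow(2).mean() + 0.1 * sparsity    # toy task loss + sparsity objective
loss.backward()
```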
