
One of the most important technical challenges in designing Cognitive Radio Networks (CRNs) is spectrum sensing, which is responsible for recognizing the presence or absence of primary users in the frequency bands. A common technique for spectrum sensing is double-threshold energy detection, since it can operate without any prior information about the characteristics of the primary user signals. A double-threshold energy detection algorithm uses two thresholds to check the energy of the received signal and decide whether the spectrum is occupied. Because thresholds play a key role in the energy detection algorithm and the noise in this model is stochastic, calculating the optimal threshold is a crucial task. In this paper, the bisection algorithm is used to detect the optimum energy level in the fuzzy region, i.e., the area between the low and high energy thresholds. For this purpose, the decision threshold for cognitive users is determined with a bisection search. Numerical simulations show that the proposed method achieves better detection performance than conventional double-threshold energy-sensing schemes. Moreover, the presented technique increases the probability of detecting primary users and decreases the probability of collision between primary and secondary users.
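
As a concrete illustration of the threshold search (a minimal Python sketch, not the paper's implementation), the code below runs a bisection inside an assumed fuzzy region $[\lambda_{\mathrm{low}}, \lambda_{\mathrm{high}}]$ to find the threshold at which the false-alarm probability equals the missed-detection probability of an energy detector; the closed-form expressions use the standard Gaussian (CLT) approximation of the test statistic with $N$ samples, noise power $\sigma^2$, and an assumed primary-user SNR.

```python
import numpy as np
from scipy.stats import norm

def q(x):
    """Gaussian tail function Q(x)."""
    return norm.sf(x)

def pfa(lam, n, sigma2):
    """False-alarm probability of an energy detector (CLT approximation)."""
    return q((lam - n * sigma2) / (sigma2 * np.sqrt(2 * n)))

def pd(lam, n, sigma2, snr):
    """Detection probability under the same Gaussian approximation."""
    return q((lam - n * sigma2 * (1 + snr)) / (sigma2 * np.sqrt(2 * n * (1 + 2 * snr))))

def bisect_threshold(lam_low, lam_high, n, sigma2, snr, tol=1e-6):
    """Bisection search for the threshold inside the fuzzy region where the
    false-alarm probability equals the missed-detection probability."""
    f = lambda lam: pfa(lam, n, sigma2) - (1.0 - pd(lam, n, sigma2, snr))
    lo, hi = lam_low, lam_high
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        # f decreases in the threshold, so keep the half containing the root.
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

if __name__ == "__main__":
    n, sigma2, snr = 1000, 1.0, 0.05       # samples, noise power, assumed primary-user SNR
    lam_low, lam_high = 1000.0, 1100.0     # hypothetical fuzzy-region bounds
    lam = bisect_threshold(lam_low, lam_high, n, sigma2, snr)
    print(lam, pfa(lam, n, sigma2), pd(lam, n, sigma2, snr))
```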

Related content

Whilst lattice-based cryptosystems are believed to be resistant to quantum attack, they are often forced to pay for that security with inefficiencies in implementation. This problem is overcome by ring- and module-based schemes such as Ring-LWE or Module-LWE, whose key size can be reduced by exploiting their algebraic structure, allowing for faster computations. Many rings may be chosen to define such cryptoschemes, but cyclotomic rings, whose cyclic nature allows for easy multiplication, are the community standard. However, there is still much uncertainty as to whether this structure may be exploited to an adversary's benefit. In this paper, we show that the decomposition group of a cyclotomic ring of arbitrary conductor can be utilised to significantly decrease the dimension of the ideal (or module) lattice required to solve a given instance of SVP. Moreover, we show that there exists a large number of rational primes such that, if the prime ideal factors of an ideal lie over primes of this form, the ideal gives rise to an "easy" instance of SVP. It is important to note that this work on ideal SVP does not break Ring-LWE, since the security reduction is from worst-case ideal SVP to average-case Ring-LWE and is one way.
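
For intuition about why the decomposition group matters, the short sketch below (an illustration, not the paper's algorithm) computes, for an unramified rational prime $p$ and conductor $m$, the order of the decomposition group of $p$ in $\mathbb{Q}(\zeta_m)$, which equals the multiplicative order of $p$ modulo $m$; under one natural reading, $\varphi(m)$ divided by this order is the degree of the corresponding decomposition field and hints at how far the dimension $\varphi(m)$ could be reduced for ideals lying over such primes.

```python
from math import gcd

def euler_phi(m):
    """Euler's totient function phi(m)."""
    result, n, p = m, m, 2
    while p * p <= n:
        if n % p == 0:
            while n % p == 0:
                n //= p
            result -= result // p
        p += 1
    if n > 1:
        result -= result // n
    return result

def decomposition_group_order(p, m):
    """Order of the decomposition group of an unramified prime p in Q(zeta_m),
    i.e. the multiplicative order of p modulo m."""
    if gcd(p, m) != 1:
        raise ValueError("p must be unramified (coprime to m)")
    order, x = 1, p % m
    while x != 1:
        x = (x * p) % m
        order += 1
    return order

if __name__ == "__main__":
    m, p = 1024, 3                 # hypothetical conductor and rational prime
    f = decomposition_group_order(p, m)
    g = euler_phi(m) // f          # degree of the decomposition field over Q
    print(f"phi(m) = {euler_phi(m)}, |D_p| = {f}, decomposition field degree = {g}")
```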

Fiber metal laminates (FML) are composite structures consisting of metals and fiber-reinforced plastics (FRP) that have experienced increasing interest as a choice of material in the aerospace and automobile industries. Due to the sophisticated build-up of the material, not only the design and production of such structures is challenging, but also their damage detection. This research work focuses on damage identification in FML with guided ultrasonic waves (GUW) through an inverse approach based on the Bayesian paradigm. As the Bayesian inference approach involves multiple queries of the underlying system, a parameterized reduced-order model (ROM) is used to closely approximate the solution at considerably lower computational cost. The signals measured by the embedded sensors and the ROM forecasts are employed for the localization and characterization of damage in FML. In this paper, a Markov chain Monte Carlo (MCMC) based Metropolis-Hastings (MH) algorithm and an ensemble Kalman filtering (EnKF) technique are deployed to identify the damage. Numerical tests illustrate the approaches, and the results are compared in terms of accuracy and efficiency. It is found that both methods are successful in the multivariate characterization of the damage with high accuracy and are also able to quantify the associated uncertainties. The EnKF distinguishes itself from the MCMC-MH algorithm in terms of computational efficiency: in this damage identification application, the EnKF is approximately three times faster than the MCMC-MH.
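
The inference loop of the MCMC-MH variant can be sketched as follows (a minimal Python example with a Gaussian likelihood and a uniform box prior; the surrogate `rom_predict`, the noise level, and the proposal scaling are assumptions for illustration, not the paper's ROM or sensor model).

```python
import numpy as np

def log_likelihood(theta, y_obs, rom_predict, noise_std):
    """Gaussian likelihood of the measured sensor signals given damage parameters."""
    y_pred = rom_predict(theta)
    return -0.5 * np.sum((y_obs - y_pred) ** 2) / noise_std ** 2

def log_prior(theta, lower, upper):
    """Uniform prior over a box of admissible damage parameters (e.g. position, size)."""
    return 0.0 if np.all((theta >= lower) & (theta <= upper)) else -np.inf

def metropolis_hastings(y_obs, rom_predict, lower, upper,
                        n_samples=5000, step=0.02, noise_std=1e-3, seed=0):
    """Random-walk Metropolis-Hastings over the damage parameter vector."""
    rng = np.random.default_rng(seed)
    theta = 0.5 * (lower + upper)          # start at the prior mean
    log_post = log_prior(theta, lower, upper) + log_likelihood(theta, y_obs, rom_predict, noise_std)
    chain = []
    for _ in range(n_samples):
        proposal = theta + step * (upper - lower) * rng.standard_normal(theta.size)
        lp = log_prior(proposal, lower, upper)
        if np.isfinite(lp):
            lp += log_likelihood(proposal, y_obs, rom_predict, noise_std)
        # Accept with the usual MH ratio (in log form).
        if np.log(rng.uniform()) < lp - log_post:
            theta, log_post = proposal, lp
        chain.append(theta.copy())
    return np.array(chain)
```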

The free-form deformation model can represent a wide range of non-rigid deformations by manipulating a control point lattice over the image. However, due to the large number of parameters, it is challenging to fit the free-form deformation model directly to the deformed image for deformation estimation because of the complexity of the fitness landscape. In this paper, we cast the registration task as a multi-objective optimization problem (MOP), exploiting the fact that the regions affected by the individual control points overlap with each other. Specifically, by partitioning the template image into several regions and measuring the similarity of each region independently, multiple objectives are built, and deformation estimation can thus be realized by solving the MOP with off-the-shelf multi-objective evolutionary algorithms (MOEAs). In addition, a coarse-to-fine strategy is realized through an image pyramid combined with control point mesh subdivision. Specifically, the optimized candidate solutions of the current image level are inherited by the next level, which increases the ability to deal with large deformations. Also, a post-processing procedure is proposed to generate a single output from the Pareto optimal solutions. Comparative experiments on both synthetic and real-world images show the effectiveness and usefulness of our deformation estimation method.
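
A minimal sketch of the per-region objective construction is given below (assuming a coarse control-point lattice upsampled bilinearly in place of B-spline basis functions, and mean squared error as the similarity measure); each region of the partition contributes one objective that an off-the-shelf MOEA could then minimize.

```python
import numpy as np
from scipy.ndimage import map_coordinates, zoom

def warp_image(image, ctrl_dx, ctrl_dy):
    """Warp `image` with a dense displacement field obtained by upsampling a
    coarse control-point lattice (bilinear upsampling stands in for B-splines)."""
    h, w = image.shape
    dx = zoom(ctrl_dx, (h / ctrl_dx.shape[0], w / ctrl_dx.shape[1]), order=1)
    dy = zoom(ctrl_dy, (h / ctrl_dy.shape[0], w / ctrl_dy.shape[1]), order=1)
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    return map_coordinates(image, [yy + dy, xx + dx], order=1, mode='nearest')

def region_objectives(template, target, ctrl_dx, ctrl_dy, grid=(2, 2)):
    """Evaluate one similarity objective per image region (here: mean squared error),
    yielding the objective vector handed to a multi-objective evolutionary algorithm."""
    warped = warp_image(template, ctrl_dx, ctrl_dy)
    h, w = template.shape
    objs = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            rows = slice(i * h // grid[0], (i + 1) * h // grid[0])
            cols = slice(j * w // grid[1], (j + 1) * w // grid[1])
            objs.append(np.mean((warped[rows, cols] - target[rows, cols]) ** 2))
    return np.array(objs)
```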

A single-step high-order implicit time integration scheme with controllable numerical dissipation at high frequency is presented for the transient analysis of structural dynamics problems. The amount of numerical dissipation is controlled by a user-specified value of the spectral radius $\rho_\infty$ in the high-frequency limit. Using this parameter as a weight factor, a Pad\'e expansion of the matrix exponential solution of the equation of motion is constructed by mixing the diagonal and sub-diagonal expansions. An efficient time-stepping scheme is designed in which systems of equations similar in complexity to those of the standard Newmark method are solved recursively. It is shown that the proposed high-order scheme achieves high-frequency dissipation while minimizing low-frequency dissipation and period errors. The effectiveness of dissipation control and the efficiency of the scheme are demonstrated with numerical examples. A simple recommendation on the choice of the controlling parameter and time step size is provided. The source code, written in MATLAB and FORTRAN, is available for download at https://github.com/ChongminSong/HighOrderTimeIntegration.
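
One plausible reading of this construction (stated here as an assumption based on the abstract, not as the paper's exact formulation) is that the amplification function blends the diagonal and first sub-diagonal Pad\'e approximants of the exponential, with $\rho_\infty$ as the weight:

$$ R(z) = \rho_\infty\, R_{M/M}(z) + (1-\rho_\infty)\, R_{(M-1)/M}(z), \qquad e^{z} \approx R(z). $$

Since $|R_{M/M}(\mathrm{i}\omega)| = 1$ for all $\omega$ while $R_{(M-1)/M}(\mathrm{i}\omega) \to 0$ as $\omega \to \infty$, the spectral radius of such a blend tends to $\rho_\infty$ in the high-frequency limit, consistent with the controllable dissipation described above.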

Bilevel optimization has found extensive applications in modern machine learning problems such as hyperparameter optimization, neural architecture search, and meta-learning. While bilevel problems with a unique inner minimal point (e.g., where the inner function is strongly convex) are well understood, problems with multiple inner minimal points remain challenging and open. Existing algorithms designed for such problems are applicable only to restricted situations and do not come with a full guarantee of convergence. In this paper, we adopt a reformulation of bilevel optimization as constrained optimization and solve the problem via a primal-dual bilevel optimization (PDBO) algorithm. PDBO not only addresses the multiple-inner-minima challenge, but is also fully first-order, avoiding second-order Hessian and Jacobian computations, in contrast to most existing gradient-based bilevel algorithms. We further characterize the convergence rate of PDBO, which provides the first known non-asymptotic convergence guarantee for bilevel optimization with multiple inner minima. Our experiments demonstrate the desired performance of the proposed approach.
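
The constrained reformulation underlying this line of work can be illustrated on a toy problem (the quadratic objectives, step sizes, and update rule below are assumptions for illustration, not the PDBO algorithm itself): the inner problem is replaced by the value-function constraint $g(x,y) - \min_{y'} g(x,y') \le 0$, and a primal-descent / dual-ascent loop is run on the associated Lagrangian.

```python
import numpy as np

# Toy bilevel instance (assumed for illustration):
#   outer: f(x, y) = (x - 1)^2 + (y - 2)^2
#   inner: g(x, y) = (y - x)^2   -> inner minimizer y*(x) = x
f = lambda x, y: (x - 1) ** 2 + (y - 2) ** 2
g = lambda x, y: (y - x) ** 2
g_star = lambda x: 0.0          # min over y of g(x, y), known in closed form here

def pd_bilevel(steps=5000, eta=1e-2, rho=1e-2):
    """Primal-dual loop on L(x, y, lam) = f(x, y) + lam * (g(x, y) - g*(x))."""
    x, y, lam = 0.0, 0.0, 1.0
    for _ in range(steps):
        # Gradients of the Lagrangian, written out by hand for the toy objectives.
        dLdx = 2 * (x - 1) + lam * (-2 * (y - x))
        dLdy = 2 * (y - 2) + lam * (2 * (y - x))
        x -= eta * dLdx                                      # primal descent
        y -= eta * dLdy
        lam = max(0.0, lam + rho * (g(x, y) - g_star(x)))    # dual ascent, lam >= 0
    return x, y, lam

x, y, lam = pd_bilevel()
print(x, y, lam, f(x, y))   # approaches the bilevel solution x = y = 1.5
```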

Cell-free massive MIMO is a variant of multiuser MIMO and massive MIMO, in which the total number of antennas $LM$ is distributed among the $L$ remote radio units (RUs) in the system, enabling macrodiversity and joint processing. Due to pilot contamination and system scalability, each RU can only serve a limited number of users. Obtaining the number of users simultaneously served on one resource block (RB) by the $L$ RUs that maximizes the sum spectral efficiency (SE) is not a simple task, however, as many of the system parameters are intertwined. For example, the dimension $\tau_p$ of the orthogonal Demodulation Reference Signal (DMRS) pilots limits the number of users that an RU can serve. Thus, depending on $\tau_p$, the optimal user load yielding the maximum sum SE will vary. Another key parameter is the users' uplink transmit power $P^{\rm ue}_{\rm tx}$, for which there is a trade-off between users in outage, interference, and energy inefficiency. We study the effect of multiple parameters in cell-free massive MIMO on the sum SE and user outage, as well as the performance of different levels of RU antenna distribution. We provide extensive numerical investigations to illuminate the behavior of the system SE with respect to the various parameters, including the effect of the system load, i.e., the number of active users to be served on any RB. The results show that, in general, a system with many RUs and few antennas per RU yields the largest sum SE, although the benefits of distributed antennas diminish in very dense networks.

Even though auto-encoders (AEs) have the desirable property of learning compact representations without labels and have been widely applied to out-of-distribution (OoD) detection, they are generally still poorly understood and are used incorrectly for detecting outliers whose distribution strongly overlaps with the normal one. In general, the learned manifold is assumed to contain key information that is only important for describing samples within the training distribution, so that the reconstruction of outliers leads to high residual errors. However, recent work suggests that AEs are likely to be even better at reconstructing some types of OoD samples. In this work, we challenge this assumption and investigate what auto-encoders actually learn when they are tasked with solving two different problems. First, we propose two metrics based on the Fr\'echet inception distance (FID) and the confidence scores of a trained classifier to assess whether AEs can learn the training distribution and reliably recognize samples from other domains. Second, we investigate whether AEs are able to synthesize normal images from samples with abnormal regions on a more challenging lung pathology detection task. We find that state-of-the-art (SOTA) AEs either fail to constrain the latent manifold, allowing reconstruction of abnormal patterns, or fail to accurately restore the inputs from their latent distribution, resulting in blurred or misaligned reconstructions. We propose novel deformable auto-encoders (MorphAEus) that learn perceptually aware global image priors and locally adapt their morphometry based on estimated dense deformation fields. We demonstrate superior performance over unsupervised methods in detecting OoD samples and pathology.
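
For context, the conventional residual-based scoring that this line of work questions can be written in a few lines; the PyTorch sketch below is a generic baseline, not MorphAEus, and the trained model `ae`, the input batch `x`, and the calibration threshold are assumed.

```python
import torch

@torch.no_grad()
def ood_scores(ae, x):
    """Per-sample reconstruction error used as a conventional OoD score.
    High residuals are read as out-of-distribution; the discussion above
    explains why this assumption can fail when AEs reconstruct outliers well."""
    ae.eval()
    x_hat = ae(x)                                      # reconstruction from the trained AE
    return ((x - x_hat) ** 2).flatten(1).mean(dim=1)   # mean squared residual per sample

# usage (assumed shapes and model):
# scores = ood_scores(trained_ae, batch_of_images)
# is_ood = scores > threshold   # threshold calibrated on held-out in-distribution data
```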

End-to-End (E2E) network slicing enables wireless networks to provide diverse services on a common infrastructure. Each E2E slice, comprising resources of the radio access network (RAN) and the core network, is rented to mobile virtual network operators (MVNOs) to provide a specific service to end-users. RAN slicing, which is realized through wireless network virtualization, involves sharing the frequency spectrum and base station antennas in the RAN. Similarly, in core slicing, which is achieved through network function virtualization, data center resources such as commodity servers and physical links are shared among users of different MVNOs. In this paper, we study E2E slicing with the aim of minimizing the total energy consumption. The stated optimization problem is non-convex and is solved by a sub-optimal algorithm proposed here. The simulation results show that our proposed joint power control, server and link allocation (JPSLA) algorithm achieves a 30% improvement over the disjoint scheme, in which the RAN and core are sliced separately.

Much of the literature on optimal design of bandit algorithms is based on minimization of expected regret. It is well known that designs that are optimal over certain exponential families can achieve expected regret that grows logarithmically in the number of arm plays, at a rate governed by the Lai-Robbins lower bound. In this paper, we show that when one uses such optimized designs, the regret distribution of the associated algorithms necessarily has a very heavy tail, specifically, that of a truncated Cauchy distribution. Furthermore, for $p>1$, the $p$'th moment of the regret distribution grows much faster than poly-logarithmically, in particular as a power of the total number of arm plays. We show that optimized UCB bandit designs are also fragile in an additional sense, namely when the problem is even slightly mis-specified, the regret can grow much faster than the conventional theory suggests. Our arguments are based on standard change-of-measure ideas, and indicate that the most likely way that regret becomes larger than expected is when the optimal arm returns below-average rewards in the first few arm plays, thereby causing the algorithm to believe that the arm is sub-optimal. To alleviate the fragility issues exposed, we show that UCB algorithms can be modified so as to ensure a desired degree of robustness to mis-specification. In doing so, we also provide a sharp trade-off between the amount of UCB exploration and the tail exponent of the resulting regret distribution.
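
To observe the heavy-tailed regret empirically (an illustrative simulation, not the paper's change-of-measure analysis), one can run standard UCB1 on a two-armed Bernoulli bandit many times and inspect the upper quantiles of the realized pseudo-regret; the rare runs in which the optimal arm starts with below-average rewards dominate the tail.

```python
import numpy as np

def ucb1_regret(horizon, means, rng):
    """Realized pseudo-regret of UCB1 on a Bernoulli bandit after `horizon` plays."""
    k = len(means)
    counts = np.zeros(k)
    sums = np.zeros(k)
    for t in range(1, horizon + 1):
        if t <= k:
            arm = t - 1                                    # play each arm once to initialize
        else:
            ucb = sums / counts + np.sqrt(2 * np.log(t) / counts)
            arm = int(np.argmax(ucb))
        reward = rng.binomial(1, means[arm])
        counts[arm] += 1
        sums[arm] += reward
    return float(np.sum((max(means) - np.array(means)) * counts))

rng = np.random.default_rng(0)
means, horizon, runs = [0.6, 0.5], 5_000, 1_000            # assumed arm means and horizon
regrets = np.array([ucb1_regret(horizon, means, rng) for _ in range(runs)])
print("mean:", regrets.mean(),
      "95%:", np.quantile(regrets, 0.95),
      "99.9%:", np.quantile(regrets, 0.999))               # the far tail dwarfs the mean
```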

Bid optimization for online advertising from a single advertiser's perspective has been thoroughly investigated in both academic research and industrial practice. However, existing work typically assumes that competitors do not change their bids, i.e., that the winning price is fixed, leading to poor performance of the derived solution. Although a few studies use multi-agent reinforcement learning to set up a cooperative game, they still suffer from the following drawbacks: (1) they fail to avoid collusive solutions in which all the advertisers involved in an auction collude to bid an extremely low price on purpose; (2) previous works cannot handle the underlying complex bidding environment well, leading to poor model convergence. This problem is amplified when handling the multiple objectives of advertisers, which are practical demands not considered by previous work. In this paper, we propose a novel multi-objective cooperative bid optimization formulation called Multi-Agent Cooperative bidding Games (MACG). MACG sets up a carefully designed multi-objective optimization framework in which the different objectives of advertisers are incorporated. A global objective to maximize the overall profit of all advertisements is added to encourage better cooperation and to protect self-bidding advertisers. To avoid collusion, we also introduce an extra platform revenue constraint. We analyze the optimal functional form of the bidding formula theoretically and design a policy network accordingly to generate auction-level bids. We then design an efficient multi-agent evolutionary strategy for model optimization. Offline experiments and online A/B tests conducted on the Taobao platform indicate that both the single advertiser's objective and the global profit are significantly improved compared to state-of-the-art methods.
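
As a rough picture of what an evolutionary-strategy step for model optimization could look like in code (a generic sketch under assumed interfaces, not the MACG optimizer described above), the snippet below perturbs a policy network's flattened parameter vector `theta`, scores each perturbation with a user-supplied `objective` to be maximized, and moves along the score-weighted average of the perturbations.

```python
import numpy as np

def es_update(theta, objective, pop_size=50, sigma=0.1, lr=0.02, rng=None):
    """One generic evolution-strategy step: perturb the parameters with Gaussian
    noise, score each perturbation, and ascend along the score-weighted average
    of the perturbations (an OpenAI-ES style gradient estimator)."""
    rng = rng or np.random.default_rng()
    eps = rng.standard_normal((pop_size, theta.size))              # candidate perturbations
    scores = np.array([objective(theta + sigma * e) for e in eps]) # higher is better
    scores = (scores - scores.mean()) / (scores.std() + 1e-8)      # normalize for stability
    grad_est = eps.T @ scores / (pop_size * sigma)                 # estimated ascent direction
    return theta + lr * grad_est
```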
