
Large-scale multiple-input multiple-output (MIMO) code-domain non-orthogonal multiple access (CD-NOMA) techniques are among the potential candidates to address next-generation wireless needs such as massive connectivity and high reliability. This work focuses on two primary CD-NOMA techniques: sparse-code multiple access (SCMA) and dense-code multiple access (DCMA). One of the primary challenges in implementing MIMO-CD-NOMA systems is designing an optimal detector with affordable computational cost and complexity. This paper proposes an iterative linear detector based on the alternating direction method of multipliers (ADMM). First, the maximum likelihood (ML) detection problem is converted into a sharing optimization problem. The set constraint in the ML detection problem is relaxed into a box constraint in the sharing problem. An auxiliary variable is introduced via the penalty term, which compensates for the loss incurred by the constraint relaxation. The system models, i.e., the relations between the input signal and the received signal, are reformulated so that the proposed sharing optimization problem can be readily applied. ADMM is a robust algorithm for solving the sharing problem in a distributed manner. The proposed detector leverages this distributed structure to reduce per-iteration cost and time. An ADMM-based linear detector is designed for three MIMO-CD-NOMA systems: single-input multi-output CD-NOMA (SIMO-CD-NOMA), spatial multiplexing CD-NOMA (SMX-CD-NOMA), and spatial modulation CD-NOMA (SM-CD-NOMA). The impact of various system and ADMM parameters on computational complexity and symbol error rate (SER) is thoroughly examined through extensive Monte Carlo simulations.
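The relax-then-split recipe described above can be illustrated on a toy linear model. Below is a minimal numerical sketch (not the paper's sharing formulation or its CD-NOMA system model) of box-constrained ADMM detection for $\min_x \frac{1}{2}\|y - Hx\|^2$ with $x \in [-1,1]^n$; the function name, penalty parameter, and iteration count are illustrative choices.

```python
import numpy as np

def admm_box_detector(H, y, rho=1.0, iters=100):
    """ADMM for min (1/2)||y - Hx||^2  s.t.  x in [-1, 1]^n.

    The box constraint stands in for the relaxed symbol set; rho and
    iters are illustrative, not tuned values from the paper."""
    n = H.shape[1]
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    A = H.T @ H + rho * np.eye(n)   # x-update system matrix, fixed across iterations
    Hty = H.T @ y
    for _ in range(iters):
        x = np.linalg.solve(A, Hty + rho * (z - u))  # least-squares x-update
        z = np.clip(x + u, -1.0, 1.0)                # z-update: cheap box projection
        u = u + x - z                                # dual (scaled multiplier) update
    return z
```

The x-update is an unconstrained least-squares solve and the z-update a coordinate-wise clip, which is what makes each iteration inexpensive; the paper's detector additionally exploits the sharing structure to distribute these updates.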


Empirical detection of long range dependence (LRD) of a time series often consists of deciding whether an estimate of the memory parameter $d$ corresponds to LRD. Surprisingly, the literature offers numerous spectral domain estimators for $d$ but there are only a few estimators in the time domain. Moreover, the latter estimators are criticized for relying on visual inspection to determine an observation window $[n_1, n_2]$ for a linear regression to run on. Theoretically motivated choices of $n_1$ and $n_2$ are often missing for many time series models. In this paper, we take the well-known variance plot estimator and provide rigorous asymptotic conditions on $[n_1, n_2]$ to ensure the estimator's consistency under LRD. We establish these conditions for a large class of square-integrable time series models. This large class enables one to use the variance plot estimator to detect LRD for infinite-variance time series (after suitable transformation). Thus, detection of LRD for infinite-variance time series is another novelty of our paper. A simulation study indicates that the variance plot estimator can detect LRD better than the popular spectral domain GPH estimator.
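As a rough illustration of the time-domain idea (the naive window handling below is a simplification, not the paper's asymptotic conditions on $[n_1, n_2]$): the variance plot estimator regresses the log sample variance of block means on log block size, and under LRD the slope is approximately $2d - 1$.

```python
import numpy as np

def variance_plot_d(x, n1, n2):
    """Variance plot sketch: for each block size m in [n1, n2], compute
    the sample variance of disjoint block means, then regress log
    variance on log m. Under LRD the slope is about 2d - 1, so the
    memory parameter estimate is d = (slope + 1) / 2."""
    sizes = np.arange(n1, n2 + 1)
    log_var = []
    for m in sizes:
        k = len(x) // m                              # number of disjoint blocks
        means = x[:k * m].reshape(k, m).mean(axis=1)
        log_var.append(np.log(means.var()))
    slope = np.polyfit(np.log(sizes), log_var, 1)[0]
    return (slope + 1.0) / 2.0
```

For i.i.d. (short-memory) data the block-mean variance decays like $1/m$, so the slope is near $-1$ and the estimate is near $d = 0$.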

Respiration is a critical vital sign for infants, and continuous respiratory monitoring is particularly important for newborns. However, neonates are sensitive and contact-based sensors present challenges in comfort, hygiene, and skin health, especially for preterm babies. As a step toward fully automatic, continuous, and contactless respiratory monitoring, we develop a deep-learning method for estimating respiratory rate and waveform from plain video footage in natural settings. Our automated infant respiration flow-based network (AIRFlowNet) combines video-extracted optical flow input and spatiotemporal convolutional processing tuned to the infant domain. We support our model with the first public annotated infant respiration dataset with 125 videos (AIR-125), drawn from eight infant subjects, set varied pose, lighting, and camera conditions. We include manual respiration annotations and optimize AIRFlowNet training on them using a novel spectral bandpass loss function. When trained and tested on the AIR-125 infant data, our method significantly outperforms other state-of-the-art methods in respiratory rate estimation, achieving a mean absolute error of $\sim$2.9 breaths per minute, compared to $\sim$4.7--6.2 for other public models designed for adult subjects and more uniform environments.
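One way to picture a bandpass-style spectral loss is as the fraction of predicted-waveform power falling outside a plausible respiration band. The band edges and the exact form below are assumptions for illustration, not the paper's loss definition.

```python
import numpy as np

def spectral_bandpass_loss(pred, fs, lo=0.4, hi=1.5):
    """Sketch of a bandpass-style spectral loss: penalize the fraction
    of the predicted waveform's power lying outside an assumed infant
    respiration band [lo, hi] Hz. Loss is 0 when all power is in band
    and approaches 1 when none of it is."""
    spec = np.abs(np.fft.rfft(pred - pred.mean())) ** 2   # power spectrum, DC removed
    freqs = np.fft.rfftfreq(len(pred), d=1.0 / fs)
    in_band = (freqs >= lo) & (freqs <= hi)
    return 1.0 - spec[in_band].sum() / (spec.sum() + 1e-12)
```

A clean 1 Hz oscillation (60 breaths per minute, inside the band) incurs almost no loss, while a 3 Hz signal is penalized heavily.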

Explicit model-predictive control (MPC) is a widely used control design method that employs optimization tools to find control policies offline; commonly it is posed as a semi-definite program (SDP) or, for hybrid systems, as a mixed-integer SDP. However, mixed-integer SDPs are computationally expensive, motivating alternative formulations such as zonotope-based MPC (zonotopes are a special type of symmetric polytope). In this paper, we propose a robust explicit MPC method applicable to hybrid systems. More precisely, we extend existing zonotope-based MPC methods to account for multiplicative parametric uncertainty. Additionally, we propose a convex zonotope order reduction method that takes advantage of the iterative structure of the zonotope propagation problem to promote diagonal blocks in the zonotope generators and lower the number of decision variables. Furthermore, we develop a quasi-time-free policy choice algorithm, allowing the system to start from any point on the trajectory and avoid the chattering associated with discrete switching of linear control policies based on the current state's membership in state-space regions. Finally, we verify the validity of the proposed methods on two experimental setups, varying physical parameters between experiments.
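For context on what order reduction does, here is the classic box-style heuristic in the spirit of Girard's method, which the paper's convex program refines: keep the largest generator columns and over-approximate the rest by an axis-aligned box. Function and variable names are illustrative.

```python
import numpy as np

def reduce_zonotope(G, order):
    """Box-style zonotope order reduction sketch: given a generator
    matrix G (n x m) of a zonotope {c + G*b : b in [-1,1]^m}, keep the
    largest generators and replace the smallest ones by the interval
    hull (an axis-aligned box), so the result over-approximates the
    original set with at most n*order generators."""
    n, m = G.shape
    keep = n * order - n                 # generators kept alongside the n box columns
    if m <= keep + n:
        return G                         # already at or below the target order
    norms = np.linalg.norm(G, axis=0)
    idx = np.argsort(norms)
    small, large = G[:, idx[:m - keep]], G[:, idx[m - keep:]]
    box = np.diag(np.abs(small).sum(axis=1))   # interval hull of dropped generators
    return np.hstack([large, box])
```

The box term is what the reduced set "pays" in conservatism; the paper instead promotes diagonal blocks through a convex program tied to the propagation structure.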

Prognostic and Health Management (PHM) has been widely applied to hardware systems in the electronics and non-electronics domains but has not been explored for software. While software does not decay over time, it can degrade over release cycles. Software health management is confined to diagnostic assessments that identify problems, whereas prognostic assessment potentially indicates when in the future a problem will become detrimental. Relevant research areas exist, such as software defect prediction, software reliability prediction, predictive maintenance of software, software degradation, and software performance prediction, but all of these represent diagnostic models built upon historical data, none of which can predict a remaining useful life (RUL) for software. This paper addresses the application of PHM concepts to software systems for fault prediction and RUL estimation. Specifically, it addresses how PHM can be used to make decisions for software systems such as version updates and upgrades, module changes, system reengineering, rejuvenation, maintenance scheduling, budgeting, and total abandonment. This paper presents a method to prognostically and continuously predict the RUL of a software system based on usage parameters (e.g., the numbers and categories of releases) and performance parameters (e.g., response time). The developed model has been validated by comparing actual data with the results generated by the predictive models. Statistical validation (regression validation and k-fold cross-validation) has also been carried out. A case study based on publicly available data for the Bugzilla application is presented, demonstrating that PHM concepts can be applied to software systems and that an RUL can be calculated to support system management decisions.
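The core prognostic step can be pictured with a toy sketch (this is not the paper's validated model): fit a degradation trend to a performance parameter such as response time and extrapolate to a failure threshold to obtain an RUL estimate.

```python
import numpy as np

def predict_rul(times, response_times, threshold):
    """Toy RUL sketch: fit a linear degradation trend to a performance
    parameter (e.g., response time) and extrapolate to the time when it
    crosses a failure threshold. RUL is the gap between that predicted
    failure time and the latest observation."""
    slope, intercept = np.polyfit(times, response_times, 1)
    if slope <= 0:
        return float("inf")              # no degradation trend detected
    t_fail = (threshold - intercept) / slope
    return max(t_fail - times[-1], 0.0)
```

Real PHM models would also incorporate usage parameters (release counts and categories) and uncertainty bounds; this only shows the extrapolate-to-threshold mechanic.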

We study mechanism design when agents may have hidden secondary goals which will manifest as non-trivial preferences among outcomes for which their primary utility is the same. We show that in such cases, a mechanism is robust against strategic manipulation if and only if it is not only incentive-compatible, but also nonbossy -- a well-studied property in the context of matching and allocation mechanisms. We give complete characterizations of incentive-compatible and nonbossy mechanisms in various settings, including auctions with single-parameter agents and public decision settings where all agents share a common outcome. In particular, we show that in the single-item setting, a mechanism is incentive-compatible, individually rational, and nonbossy if and only if it is a sequential posted-price mechanism. In contrast, we show that in more general single-parameter environments, there exist mechanisms satisfying our characterization that significantly outperform sequential posted-price mechanisms in terms of revenue or efficiency (sometimes by an exponential factor).
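The sequential posted-price mechanism featured in the single-item characterization is simple enough to sketch directly (ordering and prices below are arbitrary illustrations):

```python
def sequential_posted_price(values, prices):
    """Single-item sequential posted-price sketch: approach agents in a
    fixed order and offer each a take-it-or-leave-it price; the first
    agent whose value meets the price wins and pays it. Reporting
    truthfully is optimal, and a losing agent's report never alters
    other agents' outcomes, which is the nonbossiness property in the
    characterization."""
    for i, (v, p) in enumerate(zip(values, prices)):
        if v >= p:
            return i, p                  # winner index and payment
    return None, 0.0                     # item remains unsold
```

Because each agent faces a fixed price independent of their own report, misreporting can only change whether they buy, never the price they pay.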

In this paper, we consider the one-bit precoding problem for the multiuser downlink massive multiple-input multiple-output (MIMO) system with phase shift keying (PSK) modulation. We focus on the celebrated constructive interference (CI)-based problem formulation. We first establish the NP-hardness of the problem (even in the single-user case), which reveals the intrinsic difficulty of globally solving the problem. Then, we propose a novel negative $\ell_1$ penalty model for the considered problem, which penalizes the one-bit constraint into the objective by a negative $\ell_1$-norm term, and show the equivalence between (global and local) solutions of the original problem and the penalty problem when the penalty parameter is sufficiently large. We further transform the penalty model into an equivalent min-max problem and propose an efficient alternating proximal/projection gradient descent ascent (APGDA) algorithm for solving it, which alternately performs a proximal gradient descent over one block of variables and a projection gradient ascent over the other block. The APGDA algorithm enjoys a low per-iteration complexity and is guaranteed to converge to a stationary point of the min-max problem and a local minimizer of the penalty problem. To further reduce the computational cost, we also propose a low-complexity implementation of the APGDA algorithm, where the values of the variables are fixed in later iterations once they satisfy the one-bit constraint. Numerical results show that, compared to the state-of-the-art CI-based algorithms, both of the proposed algorithms generally achieve better bit-error-rate (BER) performance with lower computational cost.
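The effect of the negative $\ell_1$ penalty can be demonstrated on a toy problem. The sketch below is a plain projected subgradient method, not the paper's APGDA min-max algorithm; it only shows how $f(x) - \lambda\|x\|_1$ over the box $[-1,1]^n$ drives every coordinate to $\pm 1$ once $\lambda$ is large enough.

```python
import numpy as np

def neg_l1_penalized(f_grad, n, lam=5.0, step=0.01, iters=500):
    """Toy projected (sub)gradient descent on f(x) - lam * ||x||_1 over
    the box [-1, 1]^n. The -lam*||x||_1 term rewards pushing each
    coordinate toward the box boundary, so for lam large enough the
    minimizer satisfies the one-bit constraint |x_i| = 1."""
    x = np.full(n, 0.1)                      # start strictly inside the box
    for _ in range(iters):
        g = f_grad(x) - lam * np.sign(x)     # subgradient of the penalized objective
        x = np.clip(x - step * g, -1.0, 1.0) # projection onto the box
    return x
```

With a smooth quadratic $f$ whose gradient is bounded on the box, a penalty weight exceeding that bound makes the penalty term dominate, so every coordinate ends at $\pm 1$.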

As deep neural networks (DNNs) continue to expand and become more complex, most edge devices are unable to handle their extensive processing requirements. Therefore, the concept of distributed inference is essential: the neural network is distributed among a cluster of nodes. However, distribution may lead to additional energy consumption and dependency among devices that suffer from unstable transmission rates. Unstable transmission rates harm the real-time performance of IoT devices, causing high latency, high energy usage, and potential failures. Hence, for dynamic systems, it is necessary to have a resilient DNN with an adaptive architecture that can downsize as per the available resources. This paper presents an empirical study that identifies the connections in ResNet that can be dropped without significantly impacting the model's performance, to enable distribution in case of resource shortage. Based on the results, a multi-objective optimization problem is formulated to minimize latency and maximize accuracy as per available resources. Our experiments demonstrate that an adaptive ResNet architecture can reduce shared data, energy consumption, and latency throughout the distribution while maintaining high accuracy.
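The reason residual connections tolerate dropping is visible in a minimal sketch (illustrative functions, not ResNet layers): each block computes $x + f(x)$, so a skipped block degrades gracefully to the identity instead of breaking the forward pass.

```python
import numpy as np

def residual_forward(x, blocks, active):
    """Adaptive residual network sketch: each active block contributes
    x + f(x); a dropped block (active[i] = False) falls back to the
    identity skip path, so the network still produces a usable output
    with less computation and less inter-node data transfer."""
    for f, on in zip(blocks, active):
        if on:
            x = x + f(x)
    return x
```

In a distributed setting, deactivating a block also removes the need to ship its activations to the node hosting it, which is the data/latency saving the experiments measure.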

Channel estimation (CE) plays a key role in reconfigurable intelligent surface (RIS)-aided multiple-input multiple-output (MIMO) communication systems, while it poses a challenging task due to the passive nature of the RIS and the cascaded channel structure. In this paper, a partially decoupled atomic norm minimization (PDANM) framework is proposed for CE of RIS-aided MIMO systems, which exploits the three-dimensional angular sparsity of the channel. In particular, PDANM partially decouples the differential angles at the RIS from the other angles at the base station and user equipment, reducing the computational complexity compared with existing methods. A reweighted PDANM (RPDANM) algorithm is proposed to further improve CE accuracy, which iteratively refines the CE through a specifically designed reweighting strategy. Building upon RPDANM, we propose an iterative approach named RPDANM with adaptive phase control (RPDANM-APC), which adaptively adjusts the RIS phases based on previously estimated channel parameters to facilitate CE, achieving superior CE accuracy while reducing training overhead. Numerical simulations demonstrate the superiority of our proposed approaches in terms of running time, CE accuracy, and training overhead. In particular, the RPDANM-APC approach can achieve higher CE accuracy than existing methods with less than 40 percent of the training overhead while reducing the running time by a factor of tens.
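The "reweight, then re-solve" loop underlying reweighted sparse estimators can be illustrated with a generic iteratively reweighted least squares (IRLS) routine on a finite dictionary. This is only an analogy: the paper applies its reweighting strategy to an atomic-norm (continuous-angle) problem, not to the discrete recovery below.

```python
import numpy as np

def irls_sparse(A, y, iters=30, eps=1e-6):
    """Generic IRLS for sparse recovery from y = A x: each pass solves
    a weighted minimum-norm problem, with weights taken from the
    magnitudes of the previous estimate so that large coefficients are
    penalized less and small ones are driven toward zero."""
    x = np.linalg.pinv(A) @ y                       # minimum-l2-norm initialization
    for _ in range(iters):
        W = np.diag(np.abs(x) + eps)                # eps keeps the inner solve well-posed
        x = W @ A.T @ np.linalg.solve(A @ W @ A.T, y)
    return x
```

Each iteration keeps the data constraint satisfied exactly while sharpening sparsity, which is the same refinement role the reweighting plays for the angular spectrum in RPDANM.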

Time series anomaly detection has applications in a wide range of research fields and industries, including manufacturing and healthcare. The presence of anomalies can indicate novel or unexpected events, such as production faults, system defects, or heart fluttering, and is therefore of particular interest. The large size and complex patterns of time series have led researchers to develop specialised deep learning models for detecting anomalous patterns. This survey provides a structured and comprehensive overview of state-of-the-art deep learning models for time series anomaly detection, together with a taxonomy based on the factors that divide anomaly detection models into different categories. Aside from describing the basic anomaly detection technique for each category, the advantages and limitations are also discussed. Furthermore, this study includes examples of deep anomaly detection in time series across various application domains in recent years. It finally summarises open research issues and the challenges faced when adopting deep anomaly detection models.
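One of the basic techniques such surveys categorize is prediction- or reconstruction-error scoring. The sketch below uses a moving-average forecast in place of a learned deep predictor (an illustrative simplification); the scoring-and-thresholding logic is the same.

```python
import numpy as np

def residual_anomalies(x, window=5, k=3.0):
    """Prediction-error anomaly scoring sketch: score each point by its
    deviation from a moving-average forecast and flag points whose
    error exceeds the mean error by k standard deviations. Deep models
    replace the moving average with a learned predictor or autoencoder
    reconstruction."""
    forecast = np.convolve(x, np.ones(window) / window, mode="same")
    err = np.abs(x - forecast)
    return np.where(err > err.mean() + k * err.std())[0]
```

A point anomaly injected into an otherwise smooth signal produces a large residual spike and is flagged, while the smooth background stays under the threshold.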

Recent advances in maximizing mutual information (MI) between the source and target have demonstrated its effectiveness in text generation. However, previous works paid little attention to modeling the backward network of MI (i.e., the dependency from the target to the source), which is crucial to the tightness of the variational information maximization lower bound. In this paper, we propose Adversarial Mutual Information (AMI): a text generation framework formulated as a novel saddle point (min-max) optimization aiming to identify joint interactions between the source and target. Within this framework, the forward and backward networks iteratively promote or demote each other's generated instances by comparing the real and synthetic data distributions. We also develop a latent noise sampling strategy that leverages random variations in the high-level semantic space to enhance long-term dependency in the generation process. Extensive experiments on different text generation tasks demonstrate that the proposed AMI framework significantly outperforms several strong baselines, and we also show that AMI has the potential to lead to a tighter lower bound of maximum mutual information for the variational information maximization problem.
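The role of the backward network in bound tightness can be checked numerically on a discrete toy distribution via the Barber-Agakov variational lower bound, $I(X;Y) \ge H(X) + \mathbb{E}_{p(x,y)}[\log q(x|y)]$, which is tight exactly when $q$ equals the true posterior $p(x|y)$. The setup below is a generic illustration, not the AMI objective itself.

```python
import numpy as np

def ba_lower_bound(p_xy, q_x_given_y):
    """Barber-Agakov variational lower bound on mutual information:
    I(X;Y) >= H(X) + E_{p(x,y)}[log q(x|y)], where q_x_given_y[x, y]
    is the variational backward model q(x|y). A better backward model
    (closer to the true posterior) yields a tighter bound."""
    p_x = p_xy.sum(axis=1)
    h_x = -np.sum(p_x * np.log(p_x))
    return h_x + np.sum(p_xy * np.log(q_x_given_y))
```

With the true posterior, the bound recovers $I(X;Y)$ exactly; with a uniform (uninformative) backward model, it collapses, which is why the abstract stresses modeling the target-to-source dependency well.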
