
A cloud service provider strives to provide a high Quality of Service (QoS) to client jobs. Such jobs vary in their computational and Service-Level-Agreement (SLA) obligations, and they differ in how well they tolerate delays and SLA violations. Job scheduling plays a critical role in servicing cloud demands by allocating appropriate resources to execute client jobs. The cloud provider optimizes the response to such jobs in a multi-tier cloud computing environment. Typically, the complex and dynamic nature of multi-tier environments makes such demands difficult to meet, because the tiers depend on each other, so bottlenecks in one tier shift to and escalate in subsequent tiers. However, the optimization process of existing approaches produces single-tier-driven schedules that do not account for the differential impact of SLA violations across client jobs. Furthermore, the impact of schedules optimized at one tier on the performance of schedules formulated in subsequent tiers tends to be ignored, resulting in less-than-optimal performance when measured at the multi-tier level. Failing to meet job obligations thus incurs SLA penalties, which often take the form of financial compensation or the loss of future interest and motivation of unsatisfied clients in the service provided. In this paper, a scheduling and allocation approach is proposed to formulate schedules that account for the differential impact of SLA violation penalties and thus produce schedules that are optimal in financial performance. A queue virtualization scheme is designed to facilitate the formulation of optimal schedules at the tier and multi-tier levels of the cloud environment. Because the scheduling problem is NP-hard, a biologically inspired approach is proposed to mitigate the complexity of finding optimal schedules.
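
As an illustration of penalty-aware scheduling, the following minimal sketch uses a generic genetic algorithm (one common biologically inspired method; the abstract does not commit to this particular one) to search for a job-to-VM assignment minimizing total differential SLA penalties. The job fields, VM speeds, and GA parameters are hypothetical.

```python
# Sketch: genetic-algorithm search for a job-to-VM assignment that minimizes
# the sum of per-job SLA violation penalties (each job has its own penalty rate).
import random

def penalty(schedule, jobs, vm_speed):
    """Total differential SLA penalty: each job pays its own rate per unit of lateness."""
    total, finish = 0.0, [0.0] * len(vm_speed)
    for job_id, vm in enumerate(schedule):
        finish[vm] += jobs[job_id]["work"] / vm_speed[vm]
        lateness = max(0.0, finish[vm] - jobs[job_id]["deadline"])
        total += jobs[job_id]["penalty_rate"] * lateness
    return total

def evolve(jobs, vm_speed, pop=40, gens=200, mut=0.1):
    n, m = len(jobs), len(vm_speed)
    population = [[random.randrange(m) for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda s: penalty(s, jobs, vm_speed))
        parents = population[: pop // 2]                 # elitist selection
        children = []
        while len(children) < pop - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n)                 # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < mut:                    # random-reset mutation
                child[random.randrange(n)] = random.randrange(m)
            children.append(child)
        population = parents + children
    return min(population, key=lambda s: penalty(s, jobs, vm_speed))
```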

Related Content

There is a growing need for authentication methodology in virtual reality applications. Current systems assume that the immersive experience technology is a collection of peripheral devices connected to a personal computer or mobile device; hence they rely entirely on the computing device and its traditional authentication mechanisms to handle authentication and authorization decisions. Using the virtual reality controllers and headset poses a different set of challenges: the interaction is subject to unauthorized observation that goes unnoticed by the user, since the headset completely covers the field of vision in order to provide an immersive experience. As the demand for virtual reality experiences in the commercial world increases, alternative mechanisms for secure authentication are needed. In this paper, we analyze several proposed authentication systems and conclude that a multidimensional approach to authentication is needed to address the granular nature of the authentication and authorization needs of commercial virtual reality applications.

To fully support vertical industries, 5G and its channel coding are expected to meet the requirements of diverse applications. However, for applications of 5G and beyond 5G (B5G) such as URLLC, the transmission latency is required to be much shorter than in eMBB, so the resulting channel code length shrinks drastically. In this regime, traditional 1-D channel coding suffers significant performance degradation and fails to deliver strong reliability at very low latency. To remove this bottleneck, a new channel coding scheme beyond the existing 1-D one is urgently needed. By making full use of the spatial freedom of massive MIMO systems, this paper proposes a spatiotemporal 2-D channel coding for very-low-latency reliable transmission. For a very short time-domain code length $N^{\text{time}}=16$, a $64 \times 128$ MIMO system employing the proposed spatiotemporal 2-D coding scheme shows more than $3$\,dB performance gain at $\text{FER}=10^{-3}$ compared to 1-D time-domain channel coding. Note that the proposed coding scheme is suitable for different channel codes and is flexible enough to adapt to different scenarios. By appropriately selecting the code rate, the code length, and the number of codewords in the time and space domains, the proposed scheme can achieve a good trade-off between transmission latency and reliability.
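
A hedged toy example of the 2-D idea, using a classic single-parity product code rather than the paper's actual scheme: rows play the role of time-domain codewords and columns the role of space-domain (antenna) codewords, so redundancy in one dimension can localize an error that the other dimension alone could only detect.

```python
# Toy 2-D product code: rows ~ time-domain codewords, columns ~ space-domain
# codewords, each protected by a single parity bit. Illustration only.
import numpy as np

def encode_2d(info_bits, k_time=7, k_space=7):
    """info_bits: (k_space, k_time) 0/1 array -> (k_space+1, k_time+1) codeword grid."""
    u = info_bits.reshape(k_space, k_time)
    row_par = u.sum(axis=1, keepdims=True) % 2      # parity per time-domain row
    top = np.hstack([u, row_par])
    col_par = top.sum(axis=0, keepdims=True) % 2    # parity per space-domain column
    return np.vstack([top, col_par]).astype(np.uint8)

def check_2d(grid):
    """Row and column syndromes; a single bit error sits at their crossing."""
    return grid.sum(axis=1) % 2, grid.sum(axis=0) % 2

u = np.random.randint(0, 2, (7, 7))
c = encode_2d(u)
c[2, 4] ^= 1                                        # flip one bit in the channel
rows, cols = check_2d(c)
print(np.argmax(rows), np.argmax(cols))             # -> 2 4: error position recovered
```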

Mobile banking applications have gained popularity and have significantly revolutionised the banking industry. Despite the convenience offered by M-Banking Apps, users are often distrustful of the security of the applications due to an increasing trend of cyber security compromises, cyber-attacks, and data breaches. Considering the upsurge in cyber security vulnerabilities of M-Banking Apps and the paucity of research in this domain, this study empirically measured user-perceived security of M-Banking Apps. A total of 315 responses from study participants were analysed using covariance-based structural equation modelling (CB-SEM). The results indicated that most constructs of the baseline Extended Unified Theory of Acceptance and Use of Technology (UTAUT2) structure were validated. Perceived security, institutional trust, and technology trust were confirmed as factors that affect users' intention to adopt and use M-Banking Apps, whereas perceived risk was not confirmed as a significant predictor. The study further revealed that in the context of M-Banking Apps, the effects of security and trust are complex: the impact of perceived security and institutional trust on behavioural intention was moderated by age, gender, experience, income, and education; the effect of perceived security on use behaviour was moderated by age, gender, and experience; and the effect of technology trust on behavioural intention was moderated by age, education, and experience. Overall, the proposed conceptual model achieved acceptable fit and explained 79% of the variance in behavioural intention and 54.7% of the variance in use behaviour of M-Banking Apps, higher than in the original UTAUT2. The guarantee of enhanced security, advanced privacy mechanisms, and trust should be considered paramount in future strategies aimed at promoting the adoption and use of M-Banking Apps.
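
A rough sketch of how the reported moderation effects could be probed with interaction terms; the column names and data file are hypothetical, and ordinary least squares with interactions only approximates the CB-SEM moderation analysis used in the study.

```python
# Sketch: moderation via interaction terms (e.g., age moderating the effect of
# perceived security on behavioural intention). Columns/file are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("mbanking_survey.csv")              # hypothetical survey data
model = smf.ols(
    "behavioural_intention ~ perceived_security * age + institutional_trust * age",
    data=df,
).fit()
print(model.summary())                               # significant ':age' terms suggest moderation
print("R-squared:", model.rsquared)                  # cf. the 79% variance reported above
```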

The ever-increasing gap between compute and I/O performance in HPC platforms, together with the development of novel NVMe storage devices (NVRAM), led to the emergence of the burst buffer concept - an intermediate persistent storage layer logically positioned between random-access main memory and a parallel file system. Despite the development of real-world architectures as well as research concepts, resource and job management systems, such as Slurm, provide only marginal support for scheduling jobs with burst buffer requirements, in particular ignoring burst buffers when backfilling. We investigate the impact of burst buffer reservations on the overall efficiency of online job scheduling for common algorithms: First-Come-First-Served (FCFS) and Shortest-Job-First (SJF) EASY-backfilling. We evaluate the algorithms in a detailed simulation with I/O side effects. Our results indicate that the lack of burst buffer reservations in backfilling may significantly degrade scheduling performance. We also show that these algorithms can be easily extended to support burst buffers. Finally, we propose a burst-buffer-aware plan-based scheduling algorithm with simulated annealing optimisation, which improves the mean waiting time by over 20% and mean bounded slowdown by 27% compared to the burst-buffer-aware SJF-EASY-backfilling.
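
A minimal sketch of the extension the abstract describes: EASY-backfilling that reserves burst buffer capacity alongside nodes, so a backfilled job cannot consume buffers needed when the head job starts. Field names are hypothetical, and Slurm's real plugin interfaces differ.

```python
# Sketch: conservative EASY-backfilling with a burst buffer (BB) capacity check.
def backfill(queue, free_nodes, free_bb_gb, now, head_start):
    """Start a queued job only if it fits in the idle nodes AND the idle burst
    buffer, and finishes before the head job's reserved start time."""
    started = []
    for job in list(queue):
        fits = job["nodes"] <= free_nodes and job["bb_gb"] <= free_bb_gb
        if fits and now + job["walltime"] <= head_start:
            free_nodes -= job["nodes"]
            free_bb_gb -= job["bb_gb"]    # without this line, plain EASY ignores BB
            queue.remove(job)
            started.append(job)
    return started

queue = [{"nodes": 4, "bb_gb": 512, "walltime": 3600},
         {"nodes": 2, "bb_gb": 0, "walltime": 600}]
print(backfill(queue, free_nodes=2, free_bb_gb=256, now=0, head_start=1800))
```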

An increasing number of mobile applications rely on Machine Learning (ML) routines for analyzing data. Executing such tasks at the user devices saves the energy spent on transmitting and processing large data volumes at distant cloud-deployed servers. However, due to memory and computing limitations, the devices often cannot support the required resource-intensive routines and fail to accurately execute the tasks. In this work, we address the problem of edge-assisted analytics in resource-constrained systems by proposing and evaluating a rigorous selective offloading framework. The devices execute their tasks locally and outsource them to cloudlet servers only when they predict a significant performance improvement. We consider the practical scenario where the offloading gain and resource costs are time-varying, and propose an online optimization algorithm that maximizes service performance without requiring this information in advance. Our approach relies on an approximate dual subgradient method combined with a primal-averaging scheme, and works under minimal assumptions about the system stochasticity. We fully implement the proposed algorithm in a wireless testbed and evaluate its performance using a state-of-the-art image recognition application, finding significant performance gains and cost savings.
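
A compact sketch, under simplifying assumptions, of the dual-subgradient flavour of such an online rule: offload a task only when its predicted gain exceeds the current dual price times its resource cost, then update the price by a subgradient step on the budget constraint. Gains and costs are synthetic, and the paper's primal-averaging analysis is omitted.

```python
# Sketch: price-based online offloading with a dual subgradient update.
import random

def online_offloading(T=1000, budget=0.3, step=0.05):
    lam, spent, decisions = 0.0, 0.0, []
    for _ in range(T):
        gain = random.uniform(0.0, 1.0)          # predicted improvement from offloading
        cost = random.uniform(0.1, 1.0)          # time-varying transmission/server cost
        x = 1 if gain - lam * cost > 0 else 0    # primal step: maximize the Lagrangian
        lam = max(0.0, lam + step * (x * cost - budget))  # dual subgradient step
        spent += x * cost
        decisions.append(x)
    return sum(decisions) / T, spent / T         # offload rate, avg cost vs budget

rate, avg_cost = online_offloading()
print(f"offloaded {rate:.0%} of tasks, average cost {avg_cost:.2f} (budget 0.3)")
```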

Understanding decision-making in dynamic and complex settings is a challenge yet essential for preventing, mitigating, and responding to adverse events (e.g., disasters, financial crises). Simulation games have shown promise in advancing our understanding of decision-making in such settings. However, how to extract useful information from these games remains an open question. We contribute an approach to model human-simulation interaction by leveraging existing methods to characterize: (1) system states of dynamic simulation environments (with Principal Component Analysis), (2) behavioral responses from human interaction with simulation (with Hidden Markov Models), and (3) behavioral responses across system states (with Sequence Analysis). We demonstrate this approach with our game simulating drug shortages in a supply chain context. Results from our experimental study with 135 participants show different player types (hoarders, reactors, followers), how behavior changes in different system states, and how sharing information impacts behavior. We discuss how our findings challenge existing literature.
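
The three-step pipeline can be sketched compactly with off-the-shelf tools (scikit-learn and hmmlearn); the shapes and data below are synthetic placeholders, and the sequence-analysis step is reduced to a simple cross-tabulation.

```python
# Sketch: PCA for system states, HMM for behavioral modes, then a cross-tab
# of modes against state regimes (a stand-in for full sequence analysis).
import numpy as np
from sklearn.decomposition import PCA
from hmmlearn import hmm

sim_states = np.random.rand(500, 12)     # 500 time steps x 12 simulation variables
actions = np.random.rand(500, 4)         # player actions logged per time step

states_2d = PCA(n_components=2).fit_transform(sim_states)         # step 1: system states
model = hmm.GaussianHMM(n_components=3, n_iter=50).fit(actions)   # step 2: behavior modes
modes = model.predict(actions)

regime = (states_2d[:, 0] > 0).astype(int)   # step 3: mode frequency per state regime
print(np.histogram2d(regime, modes, bins=(2, 3))[0])
```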

When assessing the performance of wireless communication systems operating over fading channels, one often encounters the problem of computing expectations of some functional of sums of independent random variables (RVs). The outage probability (OP) at the output of Equal Gain Combining (EGC) and Maximum Ratio Combining (MRC) receivers is among the most important performance metrics that fall within this framework. In general, closed-form expressions for expectations of functionals applied to sums of RVs are out of reach. A naive Monte Carlo (MC) simulation is an alternative approach, but it requires a large number of samples for rare-event problems (small OP values, for instance). It is therefore of paramount importance to use variance reduction techniques to develop fast and efficient estimation methods. In this work, we use importance sampling (IS), known for requiring fewer computations to achieve the same accuracy. Along this line, we propose a state-dependent IS scheme based on a stochastic optimal control (SOC) formulation to estimate rare-event quantities that can be written as an expectation of some functional of sums of independent RVs. The proposed algorithm is generic and applicable without any restriction on the univariate distributions of the fading envelopes/gains or on the functional applied to the sum. We apply our approach to the Log-Normal distribution to compute the OP at the output of diversity receivers with and without co-channel interference. For each case, we show numerically that the proposed state-dependent IS algorithm compares favorably with most of the well-known estimators for similar problems.
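
For intuition, here is a much simpler mean-shifting IS estimator (not the state-dependent SOC scheme proposed in the paper) for the left-tail outage probability of a sum of Log-Normal envelopes; the shift parameter and threshold are illustrative.

```python
# Sketch: estimate OP = P(sum of N lognormal gains < gamma) by shifting the
# underlying Gaussians left so that outage samples become frequent, then
# reweighting by the likelihood ratio exp(-theta*sum(Z) + N*theta^2/2).
import numpy as np

rng = np.random.default_rng(0)
N, mu, sigma, gamma, M = 4, 0.0, 1.0, 0.4, 100_000
theta = -2.0                                  # shift: tune so the event is common

Z = rng.normal(theta, 1.0, size=(M, N))       # biased sampling from N(theta, 1)
S = np.exp(mu + sigma * Z).sum(axis=1)        # EGC-style sum of lognormal envelopes
weights = np.exp(-theta * Z.sum(axis=1) + N * theta**2 / 2)
op_is = np.mean((S < gamma) * weights)

Z_mc = rng.normal(0.0, 1.0, size=(M, N))      # naive MC for comparison
op_mc = np.mean(np.exp(mu + sigma * Z_mc).sum(axis=1) < gamma)
print(f"IS: {op_is:.3e}   naive MC: {op_mc:.3e}")
```

With an OP this small, the naive estimate is typically zero at this sample size, while the reweighted IS estimate remains informative; this is the variance-reduction effect the abstract refers to.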

Cellular networks are expected to be the main communication infrastructure supporting the expanding applications of Unmanned Aerial Vehicles (UAVs). As these networks are deployed to serve ground User Equipment (UEs), several issues need to be addressed to enhance cellular UAV services. In this paper, we propose a realistic communication model for the downlink, and we show that the Quality of Service (QoS) for users is affected by the number of interfering Base Stations (BSs) and the severity of the interference they cause. We therefore address the joint problem of sub-carrier and power allocation. Given its complexity, which is known to be NP-hard, we introduce a solution based on game theory. First, we argue that separating UAVs and UEs in terms of the assigned sub-carriers reduces the interference impact on users; this separation is materialized through a matching game. Moreover, to improve the resulting partition, we propose a coalitional game that takes the outcome of the first game and enables users to change coalitions and enhance their QoS. Furthermore, a power optimization solution is introduced and used in both games. Performance evaluations are conducted, and the obtained results demonstrate the effectiveness of the proposed schemes.
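
A toy sketch of the separation idea only: sub-carriers are split into disjoint UAV and UE pools, and each user greedily takes its best in-pool sub-carrier. The channel gains are synthetic, and this greedy matching merely stands in for the paper's matching and coalitional games.

```python
# Sketch: disjoint sub-carrier pools for UAVs and UEs, then greedy matching.
import numpy as np

rng = np.random.default_rng(1)
n_sc, uavs, ues = 8, ["uav0", "uav1"], ["ue0", "ue1", "ue2"]
gain = {u: rng.rayleigh(1.0, n_sc) for u in uavs + ues}   # per-user channel gains

uav_pool, ue_pool = set(range(0, 3)), set(range(3, n_sc)) # disjoint pools: no cross-tier interference
assignment = {}
for user, pool in [(u, uav_pool) for u in uavs] + [(u, ue_pool) for u in ues]:
    best = max(pool, key=lambda sc: gain[user][sc])       # greedy one-to-one matching
    assignment[user] = best
    pool.discard(best)
print(assignment)
```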

The collective attention on online items such as web pages, search terms, and videos reflects trends that are of social, cultural, and economic interest. Moreover, attention trends of different items exhibit mutual influence via mechanisms such as hyperlinks or recommendations. Many visualisation tools exist for time series, network evolution, or network influence; however, few systems connect all three. In this work, we present AttentionFlow, a new system to visualise networks of time series and the dynamic influence they have on one another. Centred around an ego node, our system simultaneously presents the time series on each node using two visual encodings: a tree ring for an overview and a line chart for details. AttentionFlow supports interactions such as overlaying time series of influence and filtering neighbours by time or flux. We demonstrate AttentionFlow using two real-world datasets, VevoMusic and WikiTraffic. We show that attention spikes in songs can be explained by external events such as major awards, or changes in the network such as the release of a new song. Separate case studies also demonstrate how an artist's influence changes over their career, and that correlated Wikipedia traffic is driven by cultural interests. More broadly, AttentionFlow can be generalised to visualise networks of time series on physical infrastructures such as road networks, or natural phenomena such as weather and geological measurements.
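
The underlying data model can be sketched as an ego network whose nodes each carry a time series, with neighbours filtered by the attention "flux" along an edge in a chosen window; the names and the flux proxy below are hypothetical, not AttentionFlow's internals.

```python
# Sketch: ego network of time series with a flux-based neighbour filter.
import numpy as np

series = {"ego": np.array([5, 9, 40, 12, 7]),
          "nbr_a": np.array([1, 2, 30, 8, 3]),
          "nbr_b": np.array([4, 4, 5, 4, 4])}
edges = {("nbr_a", "ego"): 0.8, ("nbr_b", "ego"): 0.1}   # influence weights

def flux(src, dst, w, t0, t1):
    """Proxy: influence weight times the source's attention in the window."""
    return w * series[src][t0:t1].sum()

window = (2, 4)
kept = [e for e, w in edges.items() if flux(*e, w, *window) > 10]
print(kept)     # nbr_a's spike at t=2 drives its flux over the threshold
```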

In recent years, with the rise of Cloud Computing (CC), many companies providing services in the cloud have added a new series of services, such as data mining (DM) and data processing, to their catalogues, taking advantage of the vast computing resources available to them. Several service definition proposals have addressed the problem of describing CC services in a comprehensive way. Bearing in mind that each provider has its own definition of the logic of its services, and specifically of its DM services, the ability to describe services in a flexible way across providers is fundamental to maintaining the usability and portability of this type of CC service. Using semantic technologies based on the Linked Data (LD) proposal to define services allows DM services to be designed and modelled with a high degree of interoperability. In this article, a schema for the definition of DM services on CC is presented that considers all key aspects of a CC service, such as prices, interfaces, Service Level Agreements, instances, and experimentation workflows, among others. The proposal is based on LD, so it reuses other schemata to obtain a better definition of the service. To validate the schema, a series of DM services has been created in which some of the best-known algorithms, such as \textit{Random Forest} and \textit{KMeans}, are modelled as services.
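
A hedged rdflib sketch of what such an LD service description might look like; the namespace, class, and property names here are invented for illustration and are not the article's actual schema.

```python
# Sketch: describing a DM service as Linked Data with rdflib (invented vocabulary).
from rdflib import Graph, Literal, Namespace, RDF
from rdflib.namespace import XSD

DMS = Namespace("http://example.org/dm-services#")   # hypothetical namespace
g = Graph()
g.bind("dms", DMS)

svc = DMS["RandomForestService"]
g.add((svc, RDF.type, DMS.DataMiningService))
g.add((svc, DMS.algorithm, Literal("Random Forest")))
g.add((svc, DMS.pricePerHour, Literal(0.12, datatype=XSD.decimal)))
g.add((svc, DMS.interface, Literal("REST")))
g.add((svc, DMS.slaUptime, Literal(0.999, datatype=XSD.decimal)))

print(g.serialize(format="turtle"))
```

Because the description is plain RDF, it can reuse terms from existing schemata, which is what gives the LD approach its cross-provider interoperability.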
