
Scheduling a sports tournament is a complex optimization problem that requires satisfying a large number of hard constraints. Although many such constraints are available in the literature, a gap remains: most new sports events pose their own unique requirements and demand novel constraints. For strictly time-bound events in particular, ensuring fairness between teams in terms of rest days, travel, and the number of successive games played is a difficult task that demands attention. In this work, we examine such a situation in a recently played sports event, where a suboptimal schedule favored some sides more than others. We introduce various competitive parameters to draw a fairness comparison between the sides and propose a weighting criterion to identify the sides that benefited from the schedule more than others. Furthermore, we use the root mean squared error (RMSE) between an ideal schedule and each side's actual schedule to quantify unfairness in the distribution of rest days. The latter is crucial, since playing a large number of games in quick succession may lead to player burnout, which must be prevented.
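
As a concrete illustration of the rest-day metric described above, here is a minimal sketch (all names and numbers are hypothetical, not taken from the paper) that computes the RMSE between a side's actual rest-day gaps and a uniform ideal:

```python
import numpy as np

def rest_day_rmse(actual_gaps, ideal_gap):
    """RMSE between a team's actual rest-day gaps and a uniform ideal gap.

    actual_gaps: rest days between consecutive matches for one team.
    ideal_gap:   the gap every team would get under a perfectly even schedule.
    """
    actual = np.asarray(actual_gaps, dtype=float)
    return float(np.sqrt(np.mean((actual - ideal_gap) ** 2)))

# Hypothetical example: a 5-match schedule that is compressed at the end.
team_a = [3, 3, 1, 1]   # rest days between matches 1-2, 2-3, 3-4, 4-5
team_b = [2, 2, 2, 2]   # evenly spread schedule
ideal = 2.0             # total rest budget spread uniformly

print(rest_day_rmse(team_a, ideal))  # higher value -> less fair to team A
print(rest_day_rmse(team_b, ideal))  # 0.0 -> matches the ideal exactly
```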


SARS-CoV-2, like any other virus, continues to mutate as it spreads, according to an evolutionary process. Unlike any other virus, the number of currently available SARS-CoV-2 sequences in public databases such as GISAID is already several million. This amount of data has the potential to uncover the evolutionary dynamics of a virus like never before. However, a million sequences is already several orders of magnitude beyond what can be processed by the traditional methods designed to reconstruct a virus's evolutionary history, such as those that build a phylogenetic tree. Hence, new and scalable methods must be devised to make use of the ever-increasing number of viral sequences being collected. Since identifying variants is an important part of understanding the evolution of a virus, in this paper we propose an approach based on clustering sequences to identify the current major SARS-CoV-2 variants. Using $k$-mer-based feature vector generation and efficient feature selection methods, our approach is effective in identifying variants, as well as efficient and scalable to millions of sequences. Such a clustering method allows us to show the relative proportion of each variant over time, giving the rate of spread of each variant in different locations -- something which is important for vaccine development and distribution. We also compute the importance of each amino acid position of the spike protein in identifying a given variant in terms of information gain. Positions of high variant-specific importance tend to agree with those reported by the USA's Centers for Disease Control and Prevention (CDC), further validating our approach.
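
To make the feature construction concrete, the following is a minimal sketch of $k$-mer-based feature vector generation over the amino-acid alphabet; the sequence fragment and parameters are illustrative, and the paper's exact pipeline (including its feature selection step) is not reproduced here:

```python
from itertools import product
import numpy as np

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # 20 standard residues

def kmer_feature_vector(sequence, k=3, alphabet=AMINO_ACIDS):
    """Fixed-length count vector over all possible k-mers of the alphabet."""
    index = {"".join(p): i for i, p in enumerate(product(alphabet, repeat=k))}
    vec = np.zeros(len(index), dtype=np.int32)
    for i in range(len(sequence) - k + 1):
        kmer = sequence[i:i + k]
        if kmer in index:          # skip k-mers containing ambiguous characters
            vec[index[kmer]] += 1
    return vec

spike_fragment = "MFVFLVLLPLVSSQCVNLT"   # toy fragment, not real data
v = kmer_feature_vector(spike_fragment, k=3)
print(v.shape, v.sum())  # (8000,) and the number of k-mers counted
```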

With the proliferation of the Internet of Things (IoT) and the wide penetration of wireless networks, the surging demand for data communications and computing calls for the emerging edge computing paradigm. By moving services and functions from the cloud to the proximity of users, edge computing can provide powerful computing, storage, networking, and communication capacity. Resource scheduling in edge computing, which is key to the success of edge computing systems, has attracted increasing research interest. In this paper, we survey state-of-the-art research findings to chart the progress in this field. Specifically, we present the architecture of edge computing, under which different collaborative manners for resource scheduling are discussed. In particular, we introduce a unified model before summarizing current work on resource scheduling around three research issues: computation offloading, resource allocation, and resource provisioning. Based on the two modes of operation, i.e., centralized and distributed, different techniques for resource scheduling are discussed and compared. We also summarize the main performance indicators based on the surveyed literature. To shed light on the significance of resource scheduling in real-world scenarios, we discuss several typical application scenarios involved in the research of resource scheduling in edge computing. Finally, we highlight some open research challenges yet to be addressed and outline several open issues as future research directions.
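
As one textbook example of the computation-offloading decisions surveyed here (a deliberately simplified latency-only rule, not a model from any specific surveyed work), a device offloads a task when transmission plus edge execution finishes before local execution would:

```python
def should_offload(cycles, data_bits, f_local_hz, f_edge_hz, uplink_bps):
    """Latency-only offloading rule: offload when edge execution plus
    uplink transmission beats local execution."""
    t_local = cycles / f_local_hz
    t_edge = data_bits / uplink_bps + cycles / f_edge_hz
    return t_edge < t_local

# Hypothetical task: 1e9 CPU cycles, 2 MB of input data.
print(should_offload(cycles=1e9, data_bits=16e6,
                     f_local_hz=1e9, f_edge_hz=10e9, uplink_bps=50e6))
# True: 0.42 s at the edge vs. 1.0 s locally
```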

Risk prediction models are a crucial tool in healthcare. Risk prediction models with a binary outcome (i.e., binary classification models) are often constructed using methodology which assumes the costs of different classification errors are equal. In many healthcare applications this assumption is not valid, and the differences between misclassification costs can be quite large. For instance, in a diagnostic setting, the cost of misdiagnosing a person with a life-threatening disease as healthy may be larger than the cost of misdiagnosing a healthy person as a patient. In this work, we present Tailored Bayes (TB), a novel Bayesian inference framework which "tailors" model fitting to optimise predictive performance with respect to unbalanced misclassification costs. We use simulation studies to showcase when TB is expected to outperform standard Bayesian methods in the context of logistic regression. We then apply TB to three real-world applications, a cardiac surgery task, a breast cancer prognostication task, and a breast cancer tumour classification task, and demonstrate the improvement in predictive performance over standard methods.
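
TB's inference machinery is beyond the scope of an abstract, but the effect of unbalanced misclassification costs can be sketched with the classical Bayes-optimal decision threshold; this is a generic illustration with hypothetical costs, not the TB framework itself:

```python
def bayes_threshold(cost_fp, cost_fn):
    """Bayes-optimal threshold on P(disease | x) under asymmetric costs:
    predict 'disease' whenever p >= cost_fp / (cost_fp + cost_fn)."""
    return cost_fp / (cost_fp + cost_fn)

# Missing a life-threatening disease is 20x worse than a false alarm.
t = bayes_threshold(cost_fp=1.0, cost_fn=20.0)
print(t)  # ~0.048: flag patients even at low predicted risk

p_patient = 0.10  # predicted risk from some logistic regression model
print("diagnose" if p_patient >= t else "clear")
```

The point of the sketch is that under equal costs the threshold sits at 0.5, while realistic cost asymmetries can push it far lower, changing which model fits best for the downstream decision.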

One of the most important and challenging problems in coding theory is to construct codes with the best possible parameters and properties. The class of quasi-cyclic (QC) codes is known to be a fertile source of such codes. Focusing on QC codes over the binary field, we have found 113 binary QC codes that are new among the class of QC codes, using an implementation of a fast cyclic partitioning algorithm and the highly effective ASR algorithm. Moreover, these codes have the following additional properties: a) they have the same parameters as the best known linear codes, and b) many of them have additional desirable properties, such as being reversible, LCD, self-orthogonal, or dual-containing. Additionally, we present an algorithm for generating new codes from QC codes using Construction X, and introduce 35 new record-breaking linear codes produced by this method.
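
To illustrate the quasi-cyclic structure these searches exploit, here is a minimal sketch of a 1-generator QC generator matrix built from binary circulant blocks; the defining polynomials below are arbitrary examples, not one of the 113 new codes:

```python
import numpy as np

def circulant_gf2(first_row):
    """Binary circulant matrix whose rows are successive cyclic shifts."""
    return np.array([np.roll(first_row, s) for s in range(len(first_row))],
                    dtype=np.int8) % 2

def qc_generator(first_rows):
    """A 1 x L block row of circulants: generator of a 1-generator QC code."""
    return np.concatenate([circulant_gf2(r) for r in first_rows], axis=1) % 2

# Hypothetical index-2 QC code of length 14: two 7x7 circulant blocks.
G = qc_generator([[1, 1, 0, 1, 0, 0, 0],    # defining vector of block 1
                  [1, 0, 1, 1, 1, 0, 0]])   # defining vector of block 2
msg = np.array([1, 0, 1, 1, 0, 0, 1], dtype=np.int8)
codeword = msg @ G % 2
print(codeword)
```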

We consider the problem of online job scheduling on a single machine or multiple unrelated machines with general job/machine-dependent cost functions. In this model, each job $j$ has a processing requirement (length) $v_{ij}$ and arrives with a nonnegative nondecreasing cost function $g_{ij}(t)$ if it has been dispatched to machine $i$; this information is revealed to the system upon arrival of job $j$ at time $r_j$. The goal is to dispatch the jobs to the machines in an online fashion and process them preemptively on the machines so as to minimize the generalized completion time $\sum_{j}g_{i(j)j}(C_j)$. Here $i(j)$ refers to the machine to which job $j$ is dispatched, and $C_j$ is the completion time of job $j$ on that machine. It is assumed that jobs cannot migrate between machines and that each machine can work on at most one job at any time instant. In particular, we are interested in finding an online scheduling policy whose objective cost is competitive with respect to a slower optimal offline benchmark, i.e., one that knows all the job specifications a priori and is slower than the online algorithm. We first show that for the case of a single machine and special cost functions $g_j(t)=w_jg(t)$, with nonnegative nondecreasing $g(t)$, the highest-density-first rule is optimal for the generalized fractional completion time. We then extend this result by giving a speed-augmented competitive algorithm for general nondecreasing cost functions $g_j(t)$ by utilizing a novel optimal control framework. This approach provides a principled method for identifying dual variables in different settings of online job scheduling with general cost functions. Using this method, we also provide a speed-augmented competitive algorithm for multiple unrelated machines with convex functions $g_{ij}(t)$, where the competitive ratio depends on the curvature of the cost functions $g_{ij}(t)$.
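
The highest-density-first (HDF) rule shown optimal above can be sketched as a simple time-stepped, preemptive simulation; the instance below is hypothetical, and the densities $w_j/v_j$ correspond to the special case $g_j(t)=w_jg(t)$:

```python
def hdf_schedule(jobs, dt=0.01, horizon=100.0):
    """Preemptive highest-density-first on one unit-speed machine.
    jobs: list of dicts with release r, length v, weight w (density = w / v).
    Returns an approximate completion time C_j per job."""
    remaining = {j: job["v"] for j, job in enumerate(jobs)}
    completion = {}
    t = 0.0
    while remaining and t < horizon:
        avail = [j for j in remaining if jobs[j]["r"] <= t]
        if avail:
            # Always run the released, unfinished job of highest density.
            j = max(avail, key=lambda j: jobs[j]["w"] / jobs[j]["v"])
            remaining[j] -= dt
            if remaining[j] <= 1e-9:
                completion[j] = t + dt
                del remaining[j]
        t += dt
    return completion

jobs = [{"r": 0.0, "v": 2.0, "w": 1.0},   # density 0.5
        {"r": 0.5, "v": 1.0, "w": 2.0}]   # density 2.0, preempts job 0
print(hdf_schedule(jobs))  # job 1 finishes ~1.5, job 0 resumes and finishes ~3.0
```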

In this paper, we investigate dynamic resource scheduling (i.e., joint user, subchannel, and power scheduling) for downlink multi-channel non-orthogonal multiple access (MC-NOMA) systems over time-varying fading channels. Specifically, we address the weighted average sum-rate maximization problem with quality-of-service (QoS) constraints. To facilitate fast resource scheduling, we focus on developing a very low-complexity algorithm. To this end, by leveraging Lagrangian duality and stochastic optimization theory, we first develop an opportunistic MC-NOMA scheduling algorithm whereby the original problem is decomposed into a series of subproblems, one for each time slot. Accordingly, resource scheduling works in an online manner by solving one subproblem per time slot, making it more applicable to practical systems. We then develop a low-complexity heuristic joint subchannel assignment and power allocation (Joint-SAPA) algorithm, called Joint-SAPA-LCC, to solve each subproblem. Finally, through simulation, we show that our Joint-SAPA-LCC algorithm provides performance comparable to existing Joint-SAPA algorithms despite requiring much lower computational complexity. We also demonstrate that our opportunistic MC-NOMA scheduling algorithm, in which the Joint-SAPA-LCC algorithm is embedded, works well while satisfying the given QoS requirements.
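
The per-slot decomposition can be illustrated with a heavily simplified opportunistic scheduler: one user served per slot, one dual price per QoS constraint, and a stochastic subgradient update. This is a generic sketch of the Lagrangian-duality idea under assumed channel and rate models, not the Joint-SAPA-LCC algorithm itself:

```python
import numpy as np

rng = np.random.default_rng(0)
K, T, step = 4, 1000, 0.01        # users, slots, dual step size
r_min = 0.5                       # per-user QoS: minimum average rate
lam = np.zeros(K)                 # dual prices of the QoS constraints
avg_rate = np.zeros(K)

for t in range(1, T + 1):
    h = rng.exponential(1.0, K)              # fading channel gains this slot
    rate = np.log2(1.0 + 5.0 * h)            # achievable rate per user
    k = int(np.argmax((1.0 + lam) * rate))   # per-slot subproblem: serve the
                                             # user with best price-weighted rate
    served = np.zeros(K)
    served[k] = rate[k]
    avg_rate += (served - avg_rate) / t
    lam = np.maximum(0.0, lam + step * (r_min - served))  # dual subgradient

print(np.round(avg_rate, 2), np.round(lam, 2))
```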

In our previous work, we designed a systematic policy for prioritizing sampling locations that leads to significant accuracy improvements in spatial interpolation, using the prediction uncertainty of Gaussian Process Regression (GPR) as an "attraction force" guiding deployed robots in path planning. Although integration with Traveling Salesman Problem (TSP) solvers was also shown to produce relatively short travel distances, we hypothesise that several factors could nevertheless decrease the overall prediction precision, because sub-optimal locations may eventually be included in the planned paths. To address this issue, in this paper we first explore "local planning" approaches that prioritize next sampling locations within various spatial ranges, and investigate their effects on prediction performance and incurred travel distance. We then train Reinforcement Learning (RL)-based high-level controllers to adaptively produce blended plans from a particular set of local planners, inheriting the unique strengths of each depending on the latest prediction state. Our experiments on use cases of temperature-monitoring robots demonstrate that dynamic mixtures of planners can not only generate sophisticated, informative plans that no single planner could create alone, but also significantly reduce travel distances at no cost to prediction reliability, without the assistance of additional modules for shortest-path calculation.
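
The "attraction force" idea, choosing the next sampling location by GPR predictive uncertainty, can be sketched as follows; the field, kernel, and candidate set are toy assumptions, and a full planner would additionally weigh travel distance:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(1)

def field(x):
    """Toy temperature field over the unit square."""
    return np.sin(3 * x[:, 0]) + np.cos(2 * x[:, 1])

visited = rng.uniform(0, 1, size=(5, 2))            # locations sampled so far
gpr = GaussianProcessRegressor(kernel=RBF(0.2), alpha=1e-3)
gpr.fit(visited, field(visited))

candidates = rng.uniform(0, 1, size=(200, 2))       # reachable next locations
_, std = gpr.predict(candidates, return_std=True)
next_loc = candidates[np.argmax(std)]               # most uncertain point "attracts"
print(next_loc)
```

A local planner in the sense above would simply restrict `candidates` to points within a given radius of the robot's current position before taking the argmax.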

Recent years have witnessed a rapid growth of distributed machine learning (ML) frameworks, which exploit the massive parallelism of computing clusters to expedite ML training. However, the proliferation of distributed ML frameworks also introduces many unique technical challenges in computing system design and optimization. In a networked computing cluster that supports a large number of training jobs, a key question is how to design efficient scheduling algorithms to allocate workers and parameter servers across different machines to minimize the overall training time. Toward this end, in this paper, we develop an online scheduling algorithm that jointly optimizes resource allocation and locality decisions. Our main contributions are three-fold: i) We develop a new analytical model that considers both resource allocation and locality; ii) Based on an equivalent reformulation and observations on the worker-parameter server locality configurations, we transform the problem into a mixed packing and covering integer program, which enables approximation algorithm design; iii) We propose a meticulously designed approximation algorithm based on randomized rounding and rigorously analyze its performance. Collectively, our results contribute to the state of the art of distributed ML system optimization and algorithm design.
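
Step iii)'s randomized rounding can be illustrated generically: given a fractional LP solution, each coordinate is rounded independently with an inflated probability so that covering constraints survive with high probability. This is a textbook sketch under assumed inputs, not the paper's rigorously analyzed algorithm:

```python
import numpy as np

def randomized_round(x_frac, scale=2.0, rng=None):
    """Round a fractional assignment x in [0,1]^n by sampling each coordinate
    independently with inflated probability min(1, scale * x_i)."""
    rng = rng if rng is not None else np.random.default_rng()
    p = np.minimum(1.0, scale * np.asarray(x_frac))
    return (rng.random(len(p)) < p).astype(int)

# Hypothetical fractional LP solution: worker-placement intensities.
x = [0.5, 0.3, 0.9, 0.1]
rng = np.random.default_rng(42)
for _ in range(3):  # a few trials; infeasible rounds would be resampled
    print(randomized_round(x, rng=rng))
```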

Airline disruption management traditionally addresses three problem dimensions: aircraft scheduling, crew scheduling, and passenger scheduling, in that order. However, current efforts have, at most, addressed only the first two problem dimensions concurrently, and do not account for the propagative effects that uncertain scheduling outcomes in one dimension can have on another. In addition, existing approaches to airline disruption management rely on human specialists who decide on the necessary corrective actions for airline schedule disruptions on the day of operation. However, human specialists are limited in their ability to process the copious amounts of information needed to make robust decisions that simultaneously address all problem dimensions during disruption management. There is therefore a need to augment the decision-making capabilities of a human specialist with quantitative and qualitative tools that can rationalize complex interactions amongst all dimensions in airline disruption management, and provide objective insights to the specialists in the airline operations control center. To that end, we provide a discussion and demonstration of an agnostic and systematic paradigm for enabling expeditious simultaneously-integrated recovery of all problem dimensions during airline disruption management, through an intelligent multi-agent system that employs principles from artificial intelligence and distributed ledger technology. Results indicate that our paradigm for simultaneously-integrated recovery is effective when all flights in the airline route network are disrupted.

ESports tournaments, such as Dota 2's The International (TI), attract millions of spectators who watch broadcasts on online streaming platforms, communicate, and share their experience and emotions. Unlike traditional streams, tournament broadcasts lack a streamer figure to whom spectators can appeal directly. Using topic modelling and cross-correlation analysis of more than three million messages from 86 games of TI7, we uncover the main topical and temporal patterns of communication. First, we disentangle the contextual meanings of emotes and memes, which play a salient role in communication, and present a meta-topic semantic map of streaming slang. Second, our analysis shows a prevalence of event-driven game communication during tournament broadcasts, along with particular topics associated with event peaks. Third, we show that "copypasta" cascades and other related practices, while occupying a significant share of messages, are strongly associated with periods of lower in-game activity. Based on this analysis, we propose design ideas to support different modes of spectator communication.
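
A minimal sketch of the topic-modelling step, using off-the-shelf LDA on toy chat messages (the study analyzed over three million real messages; the tokens and component count below are illustrative):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy chat messages standing in for Twitch-style broadcast chat.
messages = ["gg wp pog", "pog pog champ", "throw feed gg",
            "kappa copypasta kappa", "wp gg ez", "copypasta spam kappa"]

vec = CountVectorizer(token_pattern=r"\b\w+\b")   # keep short emote tokens
X = vec.fit_transform(messages)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

terms = vec.get_feature_names_out()
for t, comp in enumerate(lda.components_):
    top = comp.argsort()[-3:][::-1]               # three most likely tokens
    print(f"topic {t}:", [terms[i] for i in top])
```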
