
In the Hospitals/Residents problem, every hospital has an upper quota that limits the number of residents assigned to it. In some applications, each hospital also has a lower quota on the number of residents it must receive. In this setting, a stable matching may not exist. Envy-freeness is a relaxation of stability that permits blocking pairs involving a resident and an empty position of a hospital. However, when lower quotas are introduced, an envy-free matching might not exist either. We consider the problem of finding a feasible matching that satisfies both lower and upper quotas and minimizes envy, measured either by the number of envy-pairs or by the number of envy-residents, in the Hospitals/Residents problem with Lower Quotas (HRLQ). We show that the problem is NP-hard under both envy measures. We also give a simple exponential-time algorithm for the Minimum-Envy-Pair HRLQ problem.
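
As a concrete illustration of the first envy measure, the sketch below counts envy-pairs in a given assignment; the instance encoding (preference lists as ranked sequences) and all names such as `count_envy_pairs` are illustrative assumptions, not taken from the paper.

```python
def count_envy_pairs(matching, resident_pref, hospital_pref):
    """matching maps each resident to its hospital (or None if unassigned).
    Resident r envies resident r' matched to hospital h when r strictly
    prefers h to r's own assignment and h strictly prefers r to r'."""
    # rank[x][y] = position of y in x's preference list (lower is better)
    r_rank = {r: {h: i for i, h in enumerate(prefs)}
              for r, prefs in resident_pref.items()}
    h_rank = {h: {r: i for i, r in enumerate(prefs)}
              for h, prefs in hospital_pref.items()}
    envy = 0
    for r, h_own in matching.items():
        for r2, h2 in matching.items():
            if r == r2 or h2 is None:
                continue
            prefers = h2 in r_rank[r] and (
                h_own is None or r_rank[r][h2] < r_rank[r][h_own])
            if prefers and h_rank[h2].get(r, float("inf")) < h_rank[h2][r2]:
                envy += 1
    return envy
```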

Related Content

An instance $I$ of the Stable Matching Problem (SMP) is given by a bipartite graph with a preference list of neighbors for every vertex. A swap in $I$ is the exchange of two consecutive vertices in a preference list, and can be viewed as a smallest perturbation of $I$. Boehmer et al. (2021) designed a polynomial-time algorithm to find the minimum number of swaps required to turn a given maximal matching into a stable matching. To generalize this result to the many-to-many version of SMP, we introduce a new representation of SMP as an extended bipartite graph and reduce the problem to submodular minimization. It is natural to ask for the computational complexity of deciding whether at most $k$ swaps are enough to turn $I$ into an instance where one of the maximum matchings is stable. Using a hardness result of Gupta et al. (2020), we prove that it is NP-hard to decide whether at most $k$ swaps are enough to turn $I$ into an instance with a stable perfect matching; moreover, this problem is W[1]-hard when parameterized by $k$. We also obtain a lower bound on the running time for solving the problem under the Exponential Time Hypothesis.
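
For readers unfamiliar with the two primitives the abstract manipulates, here is a minimal sketch, assuming strict one-to-one preferences, of applying a swap and testing stability; all names are illustrative.

```python
def apply_swap(pref_list, i):
    """Exchange the consecutive entries at positions i and i+1."""
    pref_list[i], pref_list[i + 1] = pref_list[i + 1], pref_list[i]

def is_stable(matching, men_pref, women_pref):
    """matching: dict man -> woman. A pair (m, w) blocks it when both
    strictly prefer each other to their current partners (being unmatched
    is worst; agents absent from a list are never preferred)."""
    rank = lambda prefs, x: prefs.index(x) if x in prefs else len(prefs)
    partner = dict(matching)
    partner.update({w: m for m, w in matching.items()})
    for m, m_prefs in men_pref.items():
        for w in m_prefs:
            if (rank(m_prefs, w) < rank(m_prefs, partner.get(m)) and
                    rank(women_pref[w], m) < rank(women_pref[w], partner.get(w))):
                return False
    return True
```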

This paper studies optimal motion planning subject to motion and environment uncertainties. By modeling the system as a probabilistic labeled Markov decision process (PL-MDP), the control objective is to synthesize a finite-memory policy under which the agent satisfies complex high-level tasks expressed as linear temporal logic (LTL) formulas with a desired satisfaction probability. In particular, we consider the cost optimization of trajectories that satisfy infinite-horizon tasks, and analyze the trade-off between reducing the expected mean cost and maximizing the probability of task satisfaction. Instead of using traditional Rabin automata, the LTL formulas are converted to limit-deterministic B\"uchi automata (LDBA), which have a reachability acceptance condition and a compact graph structure. The novelty of this work lies in considering cases where the LTL specification may be infeasible, and in developing a relaxed product MDP between the PL-MDP and the LDBA. The relaxed product MDP allows the agent to revise its motion plan whenever the task is not fully feasible, and quantifies how much the revised plan violates the original task. A multi-objective optimization problem is then formulated to jointly consider the probability of task satisfaction, the violation of the original task constraints, and the implementation cost of the policy execution; the formulated problem can be solved via coupled linear programs. To the best of our knowledge, this is the first work to bridge the gap between the probabilistic revision of potentially infeasible LTL specifications and the optimal control synthesis of both the plan prefix and the plan suffix of a trajectory over the infinite horizon. Experimental results demonstrate the effectiveness of the proposed framework.
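
The relaxed product construction is the technical core here. The sketch below builds an ordinary MDP-automaton product and marks, via a `violation_cost` hook, where the relaxation (permitting otherwise-disallowed moves at a measured cost) would enter; this is an illustrative reading of the construction with a deterministic transition function, not the paper's exact definition.

```python
import itertools

def product_mdp(mdp_states, actions, mdp_trans, label,
                aut_states, aut_delta, violation_cost):
    """Build the product transition table over states (s, q).

    mdp_trans(s, a)   -> iterable of (s_next, prob)
    label(s_next)     -> set of atomic propositions observed at s_next
    aut_delta(q, L)   -> next automaton state, or None if L is disallowed
    violation_cost(L) -> cost charged for taking a disallowed label
    """
    prod_trans = {}
    for (s, q), a in itertools.product(
            itertools.product(mdp_states, aut_states), actions):
        edges = []
        for s_next, p in mdp_trans(s, a):
            L = label(s_next)
            q_next = aut_delta(q, L)
            if q_next is not None:
                edges.append(((s_next, q_next), p, 0.0))
            else:
                # relaxed move: keep the automaton state but pay a
                # violation cost, so infeasible tasks remain plannable
                edges.append(((s_next, q), p, violation_cost(L)))
        prod_trans[((s, q), a)] = edges
    return prod_trans
```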

We propose a simple and efficient clustering method for high-dimensional data with a large number of clusters. Our algorithm achieves high performance by evaluating distances of data points to only a subset of the cluster centres. Our approach is substantially more efficient than k-means, as it does not require an all-to-all comparison of data points and clusters. We show that the optimal solutions of our approximation coincide with those of the exact formulation. However, our approach is considerably more efficient at extracting these clusters compared to the state-of-the-art. We compare our approximation with exact k-means and alternative approximation approaches on a series of standardised clustering tasks. For the evaluation, we consider the algorithmic complexity, including the number of operations to convergence, and the stability of the results.
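
A minimal NumPy sketch of the core idea follows: each point is compared against only a small candidate subset of centres (here, its current centre plus a random sample) rather than all k. The candidate-selection rule and all parameter names are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def fast_kmeans_step(X, centres, assign, n_candidates=8, rng=None):
    """One assignment+update pass; `assign` is an int array of length len(X)."""
    rng = np.random.default_rng(rng)
    k = len(centres)
    for i, x in enumerate(X):
        cand = rng.choice(k, size=min(n_candidates, k), replace=False)
        cand = np.union1d(cand, [assign[i]])      # always keep current centre
        d = np.linalg.norm(centres[cand] - x, axis=1)
        assign[i] = cand[np.argmin(d)]
    for j in range(k):                            # standard centre update
        pts = X[assign == j]
        if len(pts):
            centres[j] = pts.mean(axis=0)
    return centres, assign
```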

The stochastic dynamic matching problem has recently drawn attention in the stochastic-modeling community due to its numerous applications, ranging from supply-chain management to kidney exchange programs. In this paper, we consider a matching problem in which items of different classes arrive according to independent Poisson processes. Unmatched items are stored in a queue, and compatibility constraints are described by a simple graph on the classes, so that two items can be matched if their classes are neighbors in the graph. We analyze the efficiency of matching policies, not only in terms of system stability, but also in terms of matching rates between different classes. Our results rely on the observation that, under any stable policy, the matching rates satisfy a conservation equation that equates the arrival and departure rates of each item class. Our main contributions are threefold. First, we introduce a mapping between the dimension of the solution set of this conservation equation, the structure of the compatibility graph, and the existence of a stable policy. In particular, this allows us to derive a necessary and sufficient stability condition that is verifiable in polynomial time. Second, we describe the convex polytope of non-negative solutions of the conservation equation. When this polytope reduces to a single point, we give a closed-form expression for the solution; in general, we characterize the vertices of the polytope, again using the graph structure. Lastly, we show that greedy policies cannot, in general, achieve every point in the polytope. In contrast, non-greedy policies can reach any point in the interior of the polytope, and we give a condition under which these policies also reach the boundary.
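
To make the conservation equation concrete: for each class $i$, the arrival rate $\lambda_i$ must equal the total matching rate over edges incident to $i$. The sketch below assembles this as a linear system $A\mu = \lambda$ with $A$ the unoriented incidence matrix of the compatibility graph; the encoding is an illustrative assumption.

```python
import numpy as np

def conservation_system(edges, n_classes, lam):
    """edges: list of (i, j) class pairs; lam: vector of arrival rates."""
    A = np.zeros((n_classes, len(edges)))
    for e, (i, j) in enumerate(edges):
        A[i, e] = 1.0
        A[j, e] = 1.0
    return A, np.asarray(lam, dtype=float)

# Example: the path graph 0-1-2. Non-negative solutions of A mu = lam form
# the polytope whose vertices the paper characterizes; here it is a point.
A, lam = conservation_system([(0, 1), (1, 2)], 3, [1.0, 3.0, 2.0])
mu, *_ = np.linalg.lstsq(A, lam, rcond=None)
print(mu)   # [1.0, 2.0]
```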

The sketch-and-project method, as a general archetype for solving linear systems, unifies a variety of randomized iterative methods such as randomized Kaczmarz and randomized coordinate descent. However, since it aims to find a least-norm solution of a linear system, it cannot accommodate the randomized sparse Kaczmarz method. This motivates us to propose a more general framework, called the sketched Bregman projection (SBP) method, which can find solutions with certain structures from linear systems. To generalize the concept of adaptive sampling to the SBP method, we show how the progress of a single step, measured by the Bregman distance, depends directly on a sketched loss function. Theoretically, we provide detailed global convergence results for the SBP method with different adaptive sampling rules. Finally, for the (sparse) Kaczmarz methods, we report a set of numerical experiments verifying that the methods using the sampling Kaczmarz-Motzkin rule require the lowest computational cost to achieve a given error bound, compared to the corresponding methods with other sampling rules.
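
As a reference point for the framework, here is a minimal sketch of the randomized sparse Kaczmarz method that SBP generalizes: a dual iterate is updated by a Kaczmarz step and the primal iterate is recovered by soft-thresholding, corresponding to the Bregman setup for $f(x) = \lambda\|x\|_1 + \frac{1}{2}\|x\|_2^2$. Function and parameter names are illustrative.

```python
import numpy as np

def soft_threshold(z, lam):
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def sparse_kaczmarz(A, b, lam=0.1, iters=5000, rng=None):
    rng = np.random.default_rng(rng)
    m, _ = A.shape
    row_norms = (A * A).sum(axis=1)
    probs = row_norms / row_norms.sum()          # sample rows ~ ||a_i||^2
    z = np.zeros(A.shape[1])
    for _ in range(iters):
        i = rng.choice(m, p=probs)
        x = soft_threshold(z, lam)               # primal iterate
        z -= ((A[i] @ x - b[i]) / row_norms[i]) * A[i]   # dual Kaczmarz step
    return soft_threshold(z, lam)
```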

This work studies the $K$-user broadcast channel with $\Lambda$ caches, when the association between users and caches is random, i.e., each user appears within the coverage area of -- and is subsequently assisted by -- a specific cache according to a given probability distribution. Caches are subject to a cumulative memory constraint equal to $t$ times the size of the library. We provide a scheme consisting of three phases: storage allocation, content placement, and delivery. We show that an optimized storage allocation across the caches, together with a modified uncoded cache placement and delivery strategy, alleviates the adverse effect of cache-load imbalance by significantly reducing the multiplicative performance deterioration due to randomness. In a nutshell, our scheme substantially mitigates the impact of cache-load imbalance in stochastic networks and, compared to the best-known state of the art, also mitigates the well-known subpacketization bottleneck: in deterministic settings it achieves the same delivery time -- proven to be close to optimal for bounded values of $t$ -- with an exponential reduction in subpacketization.
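
To illustrate the storage-allocation phase only, the sketch below distributes a cumulative budget of $t$ library sizes across caches in proportion to their expected user load, capping each cache at one full library. This proportional rule is an illustrative heuristic, not the paper's optimized allocation.

```python
import numpy as np

def allocate_storage(p, t):
    """p: probability a user falls in each cache's coverage (sums to 1);
    t: total memory in units of the library size. Caps each cache at one
    full library and redistributes the excess to the remaining caches."""
    m = t * np.asarray(p, dtype=float)
    for _ in range(len(m)):
        excess = np.clip(m - 1.0, 0.0, None).sum()
        m = np.minimum(m, 1.0)
        room = 1.0 - m
        if excess <= 1e-12 or room.sum() <= 1e-12:
            break
        m += excess * room / room.sum()
    return m

print(allocate_storage([0.5, 0.3, 0.2], t=2.0))   # -> [1.0, 0.6, 0.4]
```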

Motivated by real-world applications in logistics and online advertising, we consider an online allocation problem subject to lower and upper resource constraints, where requests arrive sequentially, sampled i.i.d. from an unknown distribution, and we must promptly make decisions given limited resources and lower-bound requirements. First, assuming knowledge of the measure of feasibility $\alpha$, we propose a new algorithm that obtains a $1-O(\frac{\epsilon}{\alpha-\epsilon})$ competitive ratio against the offline optimum that knows all requests ahead of time. Inspired by previous studies, this algorithm adopts an innovative technique that dynamically updates a threshold price vector for making decisions. Moreover, we propose an optimization method, with a theoretical guarantee, for estimating the optimal measure of feasibility. Based on this method, if we tolerate a slight violation of the lower-bound constraints with parameter $\eta$, the proposed algorithm naturally extends to settings without the strong feasibility assumption, which covers the largely unexplored infeasible scenarios.
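
The threshold-price idea can be sketched as follows: maintain a dual price per resource, accept a request only when its reward beats the priced resource consumption, and raise prices as resources are consumed. The additive update rule and all names are illustrative assumptions; handling of the lower-bound constraints is omitted.

```python
import numpy as np

def serve(requests, capacity, eta=0.05):
    """requests: iterable of (reward, cost_vector); capacity: resource budgets."""
    capacity = np.asarray(capacity, dtype=float)
    price = np.zeros_like(capacity)
    used = np.zeros_like(capacity)
    decisions = []
    for reward, cost in requests:
        cost = np.asarray(cost, dtype=float)
        accept = reward >= price @ cost and np.all(used + cost <= capacity)
        if accept:
            used += cost
            price += eta * cost / capacity   # consumed resources get pricier
        decisions.append(accept)
    return decisions
```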

We study the learning of a matching model for response selection in retrieval-based dialogue systems. This problem is as important as designing the architecture of the model, yet it is less explored in the existing literature. To learn a robust matching model from noisy training data, we propose a general co-teaching framework with three specific teaching strategies that cover both teaching with loss functions and teaching with a data curriculum. Under the framework, we simultaneously learn two matching models with independent training sets. In each iteration, one model transfers the knowledge learned from its training set to the other model, and at the same time receives guidance from the other model on how to overcome noise in training. By acting as both a teacher and a student, the two models learn from each other and improve together. Evaluation results on two public data sets indicate that the proposed learning approach can generally and significantly improve the performance of existing matching models.
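
A minimal PyTorch-flavoured sketch of one common co-teaching strategy (the small-loss criterion) follows: each model selects the examples it finds easiest, presuming them clean, and hands them to its peer for the gradient step. Model and optimizer construction are omitted, and this specific selection rule is an assumption rather than the paper's exact strategies.

```python
import torch
import torch.nn.functional as F

def co_teaching_step(model_a, model_b, opt_a, opt_b, x, y, keep_ratio=0.8):
    """Each model picks its k smallest-loss examples and its peer trains
    on them, so the two networks filter noise for each other."""
    k = max(1, int(keep_ratio * len(y)))
    with torch.no_grad():
        loss_a = F.cross_entropy(model_a(x), y, reduction="none")
        loss_b = F.cross_entropy(model_b(x), y, reduction="none")
        idx_for_b = torch.topk(-loss_a, k).indices   # a's cleanest teach b
        idx_for_a = torch.topk(-loss_b, k).indices   # b's cleanest teach a
    for model, opt, idx in ((model_a, opt_a, idx_for_a),
                            (model_b, opt_b, idx_for_b)):
        opt.zero_grad()
        F.cross_entropy(model(x[idx]), y[idx]).backward()
        opt.step()
```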

Developing classification algorithms that are fair with respect to sensitive attributes of the data has become an important problem due to the growing deployment of classification algorithms in various social contexts. Several recent works have focused on fairness with respect to a specific metric, modeled the corresponding fair classification problem as a constrained optimization problem, and developed tailored algorithms to solve it. Despite this, there remain important metrics for which we have no fair classifiers, and many of the aforementioned algorithms do not come with theoretical guarantees, perhaps because the resulting optimization problem is non-convex. The main contribution of this paper is a new meta-algorithm for classification that takes as input a large class of fairness constraints, with respect to multiple non-disjoint sensitive attributes, and comes with provable guarantees. This is achieved by first developing a meta-algorithm for a large family of classification problems with convex constraints, and then showing that classification problems with general types of fairness constraints can be reduced to problems in this family. We present empirical results showing that our algorithm can achieve near-perfect fairness with respect to various fairness metrics, and that the loss in accuracy due to the imposed fairness constraints is often small. Overall, this work unifies several prior works on fair classification, presents a practical algorithm with theoretical guarantees, and handles fairness metrics that previous methods could not.
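
As one concrete example of a fairness constraint such a meta-algorithm can take as input, the sketch below evaluates statistical parity: positive-prediction rates across the groups of a sensitive attribute should differ by at most a tolerance tau. All names here are illustrative.

```python
import numpy as np

def statistical_parity_gap(y_pred, group):
    """Largest difference in positive-prediction rate across groups."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

def satisfies_parity(y_pred, group, tau=0.05):
    return statistical_parity_gap(y_pred, group) <= tau
```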

In this paper, we propose a novel conditional generative adversarial network (GAN) based image captioning framework as an extension of the traditional reinforcement learning (RL) based encoder-decoder architecture. To deal with the inconsistency between objective language metrics and subjective human judgements, we design "discriminator" networks that automatically and progressively determine whether a generated caption is human-written or machine-generated. Two kinds of discriminator architectures (CNN- and RNN-based) are introduced, since each has its own advantages. The proposed algorithm is generic, so it can enhance any existing encoder-decoder based image captioning model, and we show that the conventional RL training method is a special case of our framework. Empirically, we show consistent improvements over all language evaluation metrics for several state-of-the-art image captioning models.
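
The training signal can be pictured as follows: the discriminator's human-vs-machine probability is mixed with a language-metric score and fed to the usual policy-gradient update. This reward-mixing sketch, including the weight alpha, is an illustrative assumption about how the two signals combine, not the paper's exact objective.

```python
def caption_reward(discriminator_prob, metric_score, alpha=0.7):
    """discriminator_prob: P(caption is human-written), in [0, 1];
    metric_score: an objective language metric (e.g. CIDEr) rescaled
    to [0, 1]. With alpha = 0 this degenerates to conventional RL
    training on the metric alone, mirroring the special-case claim."""
    return alpha * discriminator_prob + (1.0 - alpha) * metric_score
```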
