
We study the fair division problem for divisible heterogeneous resources (the cake cutting problem) with strategic agents, where each agent can misreport his/her private valuation in order to receive a better allocation. A (direct-revelation) mechanism takes the agents' reported valuations as input and outputs an allocation that satisfies a given fairness requirement. A natural and fundamental open problem, first raised by [Chen et al., 2010] and subsequently by [Procaccia, 2013], [Aziz and Ye, 2014], [Branzei and Miltersen, 2015], [Menon and Larson, 2017], [Bei et al., 2017], [Bei et al., 2020], among others, is whether there exists a deterministic, truthful and envy-free (or even proportional) cake cutting mechanism. In this paper, we resolve this open problem by proving that no deterministic, truthful and proportional cake cutting mechanism exists, even in the special case where all of the following hold: 1. there are only two agents; 2. each agent's valuation is a piecewise-constant function; 3. each agent is hungry: each agent has a strictly positive value for every part of the cake. The impossibility result extends to the case where the mechanism is allowed to leave some part of the cake unallocated. To circumvent this impossibility, we aim to design mechanisms that possess a certain degree of truthfulness. Motivated by the kind of truthfulness possessed by the classical I-cut-you-choose protocol, we propose a weaker notion: proportional risk-averse truthfulness. We show that the well-known moving-knife (Dubins-Spanier) procedure and the Even-Paz algorithm do not have this property. We then propose a mechanism that is proportionally risk-averse truthful and envy-free, and another that is proportionally risk-averse truthful and always outputs allocations with connected pieces.
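To make the setting concrete, below is a minimal sketch (not from the paper; all function names are illustrative) of the classical I-cut-you-choose protocol for two agents with piecewise-constant valuations over [0, 1]. The "hungry" assumption corresponds to strictly positive densities, which makes the halving cut well defined; for two agents the protocol is proportional and envy-free, and the kind of partial truthfulness it exhibits is what the paper's proportional risk-averse truthfulness formalizes.

```python
def piece_value(breakpoints, densities, a, b):
    """Integrate a piecewise-constant value density over [a, b].

    breakpoints: sorted cut points 0 = t_0 < t_1 < ... < t_k = 1
    densities:   densities[i] is the constant density on [t_i, t_{i+1}]
    """
    total = 0.0
    for i, d in enumerate(densities):
        lo, hi = breakpoints[i], breakpoints[i + 1]
        total += d * max(0.0, min(b, hi) - max(a, lo))
    return total

def halving_cut(breakpoints, densities):
    """Return x such that [0, x] and [x, 1] have equal value.

    Well defined here because a hungry agent's densities are strictly
    positive, so the cumulative value is strictly increasing.
    """
    target = piece_value(breakpoints, densities, 0.0, 1.0) / 2.0
    acc = 0.0
    for i, d in enumerate(densities):
        lo, hi = breakpoints[i], breakpoints[i + 1]
        piece = d * (hi - lo)
        if acc + piece >= target:
            return lo + (target - acc) / d  # interpolate inside this piece
        acc += piece
    return 1.0

def i_cut_you_choose(v1, v2):
    """v1, v2: (breakpoints, densities) for agents 1 and 2.

    Agent 1 cuts the cake into two pieces she values equally; agent 2
    takes the piece he prefers. Returns (agent 1's piece, agent 2's piece).
    """
    x = halving_cut(*v1)
    left, right = (0.0, x), (x, 1.0)
    if piece_value(*v2, *left) >= piece_value(*v2, *right):
        return right, left
    return left, right
```

Note the asymmetry that motivates the paper's weaker truthfulness notion: agent 2 can never gain by misreporting, while agent 1, if risk-averse, cuts truthfully because any other cut risks receiving less than half of her value.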

Related content

In this paper, we study the estimation of Generalized Linear Models (GLMs) in the case where the agents (individuals) are strategic or self-interested and are concerned about their privacy when reporting data. Compared with the classical setting, here we aim to design mechanisms that can both incentivize most agents to truthfully report their data and preserve the privacy of individuals' reports, while their outputs should also be close to the underlying parameter. In the first part of the paper, we consider the case where the covariates are sub-Gaussian and the responses are heavy-tailed, having only finite fourth moments. Motivated by the stationarity condition of the maximizer of the likelihood function, we first derive a novel private, closed-form estimator. Based on this estimator, we propose a mechanism with the following properties, via appropriate design of the computation and payment scheme, for several canonical models such as linear regression, logistic regression and Poisson regression: (1) the mechanism is $o(1)$-jointly differentially private (with probability at least $1-o(1)$); (2) it is an $o(\frac{1}{n})$-approximate Bayes Nash equilibrium for a $(1-o(1))$-fraction of agents to truthfully report their data, where $n$ is the number of agents; (3) the output achieves an error of $o(1)$ with respect to the underlying parameter; (4) the mechanism is individually rational for a $(1-o(1))$-fraction of agents; (5) the payment budget required from the analyst to run the mechanism is $o(1)$. In the second part, we consider the linear regression model in a more general setting where both covariates and responses are heavy-tailed and have only finite fourth moments. Using an $\ell_4$-norm shrinkage operator, we propose a private estimator and payment scheme with properties similar to those in the sub-Gaussian case.
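The paper's estimator is model-specific; as a generic stand-in (not the paper's construction), the sketch below shows the shape of a private closed-form estimator for linear regression: solve a regularized least-squares system in closed form, then add Gaussian output perturbation. The noise scale `sigma` is a placeholder and would need to be calibrated to the estimator's sensitivity for any formal (joint) differential privacy guarantee.

```python
import numpy as np

def private_linear_estimator(X, y, reg=1.0, sigma=0.1, rng=None):
    """Output-perturbed ridge estimator (illustrative sketch only).

    X: (n, d) covariate matrix, y: (n,) response vector.
    reg: ridge regularization strength; sigma: noise scale
    (a placeholder, not a calibrated privacy parameter).
    """
    rng = np.random.default_rng(rng)
    n, d = X.shape
    # Closed-form solution of (X^T X / n + reg * I) theta = X^T y / n.
    theta = np.linalg.solve(X.T @ X / n + reg * np.eye(d), X.T @ y / n)
    # Gaussian output perturbation for privacy.
    return theta + rng.normal(scale=sigma, size=d)
```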

Automatically evaluating the coherence of summaries is of great significance, both to enable cost-efficient summarizer evaluation and as a tool for improving coherence by selecting high-scoring candidate summaries. While many different approaches have been suggested for modelling summary coherence, they are often evaluated on disparate datasets and metrics, which makes it difficult to understand their relative performance and to identify ways towards better summary coherence modelling. In this work, we conduct a large-scale investigation of various methods for summary coherence modelling on a level playing field. Additionally, we introduce two novel analysis measures, intra-system correlation and bias matrices, that help identify biases in coherence measures and provide robustness against system-level confounders. While none of the currently available automatic coherence measures are able to assign reliable coherence scores to system summaries across all evaluation metrics, large-scale language models fine-tuned on self-supervised tasks show promising results, as long as fine-tuning takes into account that they need to generalize across different summary lengths.
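One plausible reading of the intra-system correlation measure (the paper's exact definition may differ) is sketched below: correlate an automatic coherence measure with human ratings within each summarization system, then average. Correlating within systems removes system-level confounders, since a measure that merely ranks systems well can still fail at ranking individual summaries.

```python
import numpy as np
from scipy.stats import kendalltau

def intra_system_correlation(auto_scores, human_scores, systems):
    """Mean within-system rank correlation (illustrative sketch).

    auto_scores, human_scores: dicts mapping system id -> list of
    per-summary scores, aligned index-by-index.
    """
    taus = []
    for sys_id in systems:
        tau, _ = kendalltau(auto_scores[sys_id], human_scores[sys_id])
        taus.append(tau)
    return float(np.mean(taus))
```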

Any approach aimed at characterizing and quantifying a particular phenomenon must include the use of robust statistical methodologies for data analysis. With this in mind, the purpose of this study is to present statistical approaches that may be employed in nonparametric, nonhomogeneous data frameworks, and to examine their application to natural language processing and language clustering. Furthermore, this paper discusses the many uses of nonparametric approaches in linguistic data mining and processing. The data depth idea allows for a centre-outward ordering of points in any dimension, yielding a nonparametric multivariate statistical analysis that does not require distributional assumptions. The concept of hierarchy is used in historical language categorisation and structuring, aiming to organise and cluster languages into subfamilies on the same premise. In this regard, the current study presents a novel approach to language family structuring based on nonparametric methods derived from a typological structure of words in various languages, which is then converted into a Cartesian framework using multidimensional scaling (MDS). This depth-based architecture allows for robust outlier detection, which is extremely useful for understanding the categorisation of diverse borderline languages and allows for the re-evaluation of existing classification systems. Other depth-based approaches are also applied to tasks such as unsupervised and supervised clustering. This paper therefore provides an overview of procedures that can be applied to nonhomogeneous language classification systems in a nonparametric framework.
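As a minimal illustration of the pipeline described above (not the paper's exact method), the sketch below embeds a precomputed pairwise typological-distance matrix into Cartesian coordinates with MDS and then ranks points by Mahalanobis depth, one of the simplest depth notions; the paper may use other depths such as halfspace or projection depth. Low-depth points are outlier candidates, e.g., borderline languages.

```python
import numpy as np
from sklearn.manifold import MDS

def mahalanobis_depth(X):
    """Mahalanobis data depth: 1 / (1 + squared Mahalanobis distance
    to the sample mean). Larger depth means closer to the centre."""
    mu = X.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
    diff = X - mu
    d2 = np.einsum('ij,jk,ik->i', diff, cov_inv, diff)
    return 1.0 / (1.0 + d2)

def embed_and_rank(D, dim=2):
    """D: square matrix of pairwise typological distances between
    languages (assumed precomputed from word-level features).
    Returns MDS coordinates and per-language depth values."""
    coords = MDS(n_components=dim, dissimilarity='precomputed',
                 random_state=0).fit_transform(D)
    return coords, mahalanobis_depth(coords)
```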

We study the problem of dynamically allocating indivisible items to a group of agents in a fair manner. We assume that the items are goods and that the valuation functions are additive, unless specified otherwise. Because of negative results on achieving fairness in the dynamic setting, we allow adjustments that reallocate previously assigned items, with the objective of minimizing the number of adjustments. We obtain positive results for achieving EF1 for (1) two agents with mixed manna, (2) restricted additive or general identical valuations, and (3) the default setting. We further impose a contiguity constraint on the items and require that each agent obtain a consecutive block of items. Here we obtain both positive and negative results for achieving either EF1 or proportionality up to an additive approximation factor. In particular, we establish matching lower and upper bounds for achieving approximate proportionality with identical valuations. Our results exhibit a large discrepancy between the identical and nonidentical models in both the contiguous and noncontiguous settings. All our positive results are computationally efficient.
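For concreteness, here is a minimal sketch (illustrative, not from the paper) of the EF1 condition being targeted: under additive valuations over goods, agent i does not envy agent j up to one good if i's value for her own bundle is at least i's value for j's bundle minus j's single most valuable item (from i's perspective).

```python
def is_ef1(bundles, valuations):
    """Check envy-freeness up to one good (EF1) for additive valuations.

    bundles:    list of lists of item indices, one bundle per agent.
    valuations: valuations[i][g] = agent i's (nonnegative) value for item g.
    """
    n = len(bundles)
    for i in range(n):
        v = valuations[i]
        own = sum(v[g] for g in bundles[i])
        for j in range(n):
            if i == j or not bundles[j]:
                continue
            other = sum(v[g] for g in bundles[j])
            # Envy must vanish after removing j's best item (in i's eyes).
            if own < other - max(v[g] for g in bundles[j]):
                return False
    return True
```

A dynamic procedure as in the abstract would run such a check after each arriving item and trigger a (counted) adjustment whenever the invariant breaks.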

Responsible AI has been widely considered one of the greatest scientific challenges of our time and the key to increasing the adoption of AI. A number of AI ethics principles frameworks have been published recently. However, without further best-practice guidance, practitioners are left with little beyond truisms. Also, significant effort has been devoted to the algorithm level rather than the system level, mainly focusing on a subset of mathematics-amenable ethical principles (such as fairness). Nevertheless, ethical issues can occur at any step of the development lifecycle, cutting across many AI and non-AI components of systems beyond AI algorithms and models. To operationalize responsible AI from a system perspective, in this paper we present a Responsible AI Pattern Catalogue based on the results of a Multivocal Literature Review (MLR). Rather than staying at the principle or algorithm level, we focus on patterns that AI system stakeholders can apply in practice to ensure that the developed AI systems are responsible throughout the entire governance and engineering lifecycle. The Responsible AI Pattern Catalogue classifies the patterns into three groups: multi-level governance patterns, trustworthy process patterns, and responsible-AI-by-design product patterns. These patterns provide systematic and actionable guidance for stakeholders to implement responsible AI.

Voice trigger detection is an important task, which enables activating a voice assistant when a target user speaks a keyword phrase. A detector is typically trained on speech data independent of speaker information and used for the voice trigger detection task. However, such a speaker-independent voice trigger detector typically suffers from performance degradation on speech from underrepresented groups, such as accented speakers. In this work, we propose a novel voice trigger detector that can use a small number of utterances from a target speaker to improve detection accuracy. Our proposed model employs an encoder-decoder architecture. While the encoder performs speaker-independent voice trigger detection, similar to the conventional detector, the decoder predicts a personalized embedding for each utterance. A personalized voice trigger score is then obtained as a similarity score between the embeddings of enrollment utterances and a test utterance. The personalized embedding allows adapting to the target speaker's speech when computing the voice trigger score, thereby improving voice trigger detection accuracy. Experimental results show that the proposed approach achieves a 38% relative reduction in the false rejection rate (FRR) compared to a baseline speaker-independent voice trigger model.
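The abstract describes the score as a similarity between enrollment embeddings and a test embedding; a plausible minimal version (the embedding extraction itself is the model-specific part and is omitted) is cosine similarity against the centroid of the enrollment embeddings. The combination weight in the comment is illustrative, not from the paper.

```python
import numpy as np

def personalized_trigger_score(enroll_embs, test_emb):
    """Cosine similarity between the centroid of enrollment utterance
    embeddings and a test utterance embedding (illustrative sketch).

    enroll_embs: (k, d) array of embeddings from k enrollment utterances.
    test_emb:    (d,) embedding of the test utterance.
    """
    centroid = enroll_embs.mean(axis=0)
    num = float(centroid @ test_emb)
    den = np.linalg.norm(centroid) * np.linalg.norm(test_emb)
    return num / den

# A final decision could combine this with the speaker-independent
# detector score, e.g. alpha * si_score + (1 - alpha) * personalized.
```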

Session types provide a flexible programming style for structuring interaction, and are used to guarantee the safe and consistent composition of distributed processes. Traditional session types include only one-directional input (external) and output (internal) guarded choices. This prevents session processes from exploiting the full expressive power of the pi-calculus, where mixed choices are provably more expressive than (non-mixed) guarded choices. To address this issue, Casal, Mordido, and Vasconcelos recently proposed binary session types with mixed choices (CMV+). This paper presents a surprising, unfortunate result about CMV+: despite its inclusion of unrestricted channels with mixed choice, CMV+'s mixed choice is in fact separate rather than mixed. We prove this negative result using two methodologies (using either the leader election problem or a synchronisation pattern as the distinguishing feature), showing that there exists no good encoding from the pi-calculus into CMV+ that preserves distribution. We then close their open problem on the encoding from CMV+ into CMV (without mixed choice), proving its soundness and thereby that the encoding is good up to coupled similarity.

Trust has emerged as a key factor in people's interactions with AI-infused systems. Yet, little is known about which models of trust have been used and for which systems: robots, virtual characters, smart vehicles, decision aids, or others. Moreover, there is as yet no standard approach to measuring trust in AI. This scoping review maps out the state of affairs on trust in human-AI interaction (HAII) from the perspectives of models, measures, and methods. Findings suggest that trust is an important and multi-faceted topic of study within HAII contexts. However, most work is under-theorized and under-reported: it generally does not use established trust models and omits details about methods, especially for Wizard of Oz studies. We offer several targets for systematic review work, as well as a research agenda for combining the strengths and addressing the weaknesses of the current literature.

With the rise and development of deep learning, computer vision has been tremendously transformed and reshaped. As an important research area in computer vision, scene text detection and recognition has inevitably been influenced by this wave of revolution, consequently entering the era of deep learning. In recent years, the community has witnessed substantial advancements in mindset, methodology and performance. This survey aims to summarize and analyze the major changes and significant progress in scene text detection and recognition in the deep learning era. Through this article, we aim to: (1) introduce new insights and ideas; (2) highlight recent techniques and benchmarks; (3) look ahead to future trends. Specifically, we emphasize the dramatic differences brought about by deep learning and the grand challenges that still remain. We expect this review to serve as a reference for researchers in this field. Related resources are also collected and compiled in our GitHub repository: //github.com/Jyouhou/SceneTextPapers.
