
In order to coordinate successfully, individuals must first identify a target pattern of behaviour. In this paper we investigate the difficulty of identifying prominent outcomes in two kinds of binary-action coordination problems in social networks: pure coordination games and anti-coordination games. For both environments, we determine the computational complexity of finding a strategy profile that (i) maximises welfare, (ii) maximises welfare subject to being an equilibrium, and (iii) maximises potential. We show that the complexity of these objectives can vary with the type of coordination problem. Objectives (i) and (iii) are tractable in pure coordination games but NP-hard in anti-coordination games. Objective (ii), finding the best Nash equilibrium, is NP-hard for both. Our results support the idea that environments in which actions are strategic complements facilitate successful coordination more readily than those in which actions are strategic substitutes.
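To make the contrast concrete, here is a minimal, self-contained sketch (the toy graph and all names are invented for illustration) of welfare computation for both game types: each edge pays both endpoints when they match (pure coordination) or mismatch (anti-coordination). Brute force is only feasible for tiny networks; for anti-coordination, welfare maximisation amounts to MAX-CUT, which is exactly why objective (i) becomes NP-hard there.

    import itertools

    def welfare(edges, profile, anti=False):
        # Each edge pays 1 to both endpoints when the endpoints match
        # (pure coordination) or, with anti=True, when they mismatch
        # (anti-coordination).
        return sum(2 for u, v in edges if (profile[u] != profile[v]) == anti)

    # Toy graph: a 4-cycle.
    edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
    n = 4

    for anti in (False, True):
        best = max(itertools.product((0, 1), repeat=n),
                   key=lambda p: welfare(edges, p, anti))
        label = "anti-coordination" if anti else "pure coordination"
        print(label, best, welfare(edges, best, anti))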

Related content

In this paper, we construct a winning condition $W$ over a finite set of colors such that, first, every finite arena has a strategy with 2 states of general memory which is optimal w.r.t. $W$, and second, there exists no $k$ such that every finite arena has a strategy with $k$ states of chromatic memory which is optimal w.r.t. $W$.
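The distinction at stake can be phrased operationally: a general memory structure updates on the full edge just traversed, while a chromatic one sees only that edge's color. The sketch below fixes that interface; the types, the example edge, and the parity automaton are our illustrative inventions, not the paper's construction.

    from dataclasses import dataclass
    from typing import Callable, Hashable

    @dataclass
    class GeneralMemory:
        # Updates on the full edge just traversed.
        init: Hashable
        step: Callable[[Hashable, tuple], Hashable]

    @dataclass
    class ChromaticMemory:
        # Updates only on the color of the edge traversed.
        init: Hashable
        step: Callable[[Hashable, Hashable], Hashable]

    def run(memory, play, observe=lambda edge: edge):
        # Feed a play (sequence of edges) through a memory structure;
        # for chromatic memory, observe projects an edge to its color.
        state = memory.init
        for edge in play:
            state = memory.step(state, observe(edge))
        return state

    # A 2-state general memory tracking the parity of visits to one
    # distinguished edge e0. A chromatic memory sees only "red" for
    # both edges below, so it cannot track e0 specifically.
    e0 = ("u", "v", "red")
    mem = GeneralMemory(init=0, step=lambda s, e: s ^ (e == e0))
    print(run(mem, [e0, ("v", "w", "red"), e0]))  # -> 0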

Error bounds are derived for sampling and estimation using a discretization of an intrinsically defined Langevin diffusion with invariant measure $d\mu_\phi \propto e^{-\phi} \mathrm{dvol}_g$ on a compact Riemannian manifold. Two estimators of linear functionals of $\mu_\phi$ based on the discretized Markov process are considered: a time-averaging estimator based on a single trajectory and an ensemble-averaging estimator based on multiple independent trajectories. Imposing no restrictions beyond a nominal level of smoothness on $\phi$, first-order error bounds, in the discretization step size, on the bias and variances of both estimators are derived. The order of error matches the optimal rate in Euclidean and flat spaces, and leads to a first-order bound on the distance between the invariant measure $\mu_\phi$ and a stationary measure of the discretized Markov process. The generality of the proof techniques, which exploit links between two partial differential equations and the semigroup of operators corresponding to the Langevin diffusion, renders them amenable to the study of a broader class of sampling algorithms related to the Langevin diffusion. Conditions for extending the analysis to the case of non-compact manifolds are discussed. Numerical illustrations with distributions, log-concave and otherwise, on manifolds of positive and negative curvature elucidate the derived bounds and demonstrate the practical utility of the sampling algorithm.
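As a concrete (and much simplified) instance of the kind of algorithm analysed, the sketch below runs a projected Euler-Maruyama discretization of the Langevin diffusion on the unit sphere and forms the single-trajectory time-averaging estimator. The target density, step size, and retraction-by-normalisation are our illustrative choices, not the paper's exact construction.

    import numpy as np

    rng = np.random.default_rng(0)

    def tangent(x, v):
        # Project v onto the tangent space of the unit sphere at x.
        return v - np.dot(v, x) * x

    def langevin_step(x, grad_phi, h):
        # Drift -grad(phi) and Gaussian noise taken in the tangent space,
        # followed by retraction back to the sphere by normalisation.
        xi = rng.standard_normal(x.shape)
        y = x + h * tangent(x, -grad_phi(x)) + np.sqrt(2 * h) * tangent(x, xi)
        return y / np.linalg.norm(y)

    # Illustrative target on the 2-sphere: phi(x) = -k * x[0], i.e. a
    # von Mises-Fisher-like density concentrated around (1, 0, 0).
    k = 3.0
    grad_phi = lambda x: np.array([-k, 0.0, 0.0])
    f = lambda x: x[0]                      # linear functional to estimate

    h, n_steps, burn = 1e-3, 50_000, 5_000
    x, acc = np.array([0.0, 0.0, 1.0]), 0.0
    for t in range(n_steps):
        x = langevin_step(x, grad_phi, h)
        if t >= burn:
            acc += f(x)                     # time-averaging estimator
    print("time-average estimate of E[x_0]:", acc / (n_steps - burn))
    # An ensemble-averaging estimator would instead average f over the
    # endpoints of many independent trajectories.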

Advances in large language models (LLMs) have driven an explosion of interest in their societal impacts. Much of the discourse around how they will affect social equity has been cautionary or negative, focusing on questions like "how might LLMs be biased and how would we mitigate those biases?" This is a vital discussion: the ways in which AI generally, and LLMs specifically, can entrench biases have been well documented. But equally vital, and much less discussed, is the more opportunity-focused counterpoint: "what promising applications do LLMs enable that could promote equity?" If LLMs are to enable a more equitable world, it is not enough just to play defense against their biases and failure modes. We must also go on offense, applying them positively to equity-enhancing use cases to increase opportunities for underserved groups and reduce societal discrimination. Many choices determine the impact of AI, and a fundamental choice, made very early in the pipeline, is which problems we apply it to. If we focus only later in the pipeline -- making LLMs marginally more fair as they facilitate use cases which intrinsically entrench power -- we will miss an important opportunity to guide them toward equitable impacts. Here, we highlight the emerging potential of LLMs to promote equity by presenting four newly possible, promising research directions, while keeping risks and cautionary points in clear view.

Dependence is undoubtedly a central concept in statistics. Yet it proves difficult to locate in the literature a formal definition that goes beyond the self-evident 'dependence = non-independence'. This absence has allowed the term 'dependence' and its derivatives to be used vaguely and indiscriminately for qualifying a variety of disparate notions, leading to numerous incongruities. For example, the classical Pearson, Spearman and Kendall correlations are widely regarded as 'dependence measures' of major interest, in spite of returning 0 in some cases of deterministic relationships between the variables at play, thus evidently not measuring dependence at all. Arguing that research on such a fundamental topic would benefit from a slightly more rigorous framework, this paper suggests a general definition of the dependence between two random variables defined on the same probability space. Natural enough to align with intuition, that definition is still sufficiently precise to allow unequivocal identification of a 'universal' representation of the dependence structure of any bivariate distribution. Links between this representation and familiar concepts are highlighted, and ultimately, the idea of a dependence measure based on that universal representation is explored and shown to satisfy Rényi's postulates.
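The opening claim is easy to verify numerically: take $Y = X^2$ with $X$ symmetric around 0, a fully deterministic relationship for which all three classical coefficients are (asymptotically) zero because the relationship is symmetric rather than monotone. A minimal check:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    x = rng.uniform(-1, 1, 20_000)
    y = x ** 2          # Y is a deterministic function of X

    # All three classical "dependence measures" come out ~0 here.
    print("Pearson :", round(stats.pearsonr(x, y)[0], 3))
    print("Spearman:", round(stats.spearmanr(x, y)[0], 3))
    print("Kendall :", round(stats.kendalltau(x, y)[0], 3))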

Text normalization is a crucial technology for low-resource languages that lack rigid spelling conventions or that have undergone multiple spelling reforms. Low-resource text normalization has so far relied upon hand-crafted rules, which are perceived to be more data-efficient than neural methods. In this paper we examine the case of text normalization for Ligurian, an endangered Romance language. We collect 4,394 Ligurian sentences paired with their normalized versions, as well as the first open-source monolingual corpus for Ligurian. We show that, in spite of the small amount of data available, a compact transformer-based model can be trained to achieve very low error rates through the use of backtranslation and appropriate tokenization.
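The backtranslation step can be pictured as follows: a reverse model (normalized to unnormalized) is applied to the monolingual normalized corpus to manufacture synthetic parallel pairs for the raw-to-normalized training direction. In this sketch the reverse model is replaced by a hand-written noiser, and all sentences and rules are invented placeholders, not actual Ligurian data.

    import random

    random.seed(0)

    def noisy_denormalize(sentence):
        # Stand-in for a trained normalized->unnormalized reverse model:
        # stochastically re-introduce spelling variation.
        rules = [("ò", "o"), ("æ", "e"), ("ç", "s")]
        for a, b in rules:
            if random.random() < 0.5:
                sentence = sentence.replace(a, b)
        return sentence

    monolingual = ["a lèngoa", "o çê"]     # invented "normalized" text
    parallel = [("u mar", "o mâ")]         # invented (raw, normalized) pair

    # Synthetic pairs point in the training direction raw -> normalized.
    synthetic = [(noisy_denormalize(s), s) for s in monolingual]
    train_set = parallel + synthetic
    print(train_set)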

In this paper, we study the stability and convergence of a fully discrete finite difference scheme for the initial value problem associated with the Korteweg-de Vries (KdV) equation. We employ the Crank-Nicolson method for temporal discretization and establish that the scheme is $L^2$-conservative. The convergence analysis reveals that, by utilizing Kato's inherent local smoothing effect, the proposed scheme converges to a classical solution for sufficiently regular initial data $u_0 \in H^{3}(\mathbb{R})$ and to a weak solution in $L^2(0,T;L^2_{\text{loc}}(\mathbb{R}))$ for non-smooth initial data $u_0 \in L^2(\mathbb{R})$. Optimal convergence rates in both time and space are derived for the devised scheme. The theoretical results are corroborated through several numerical illustrations.
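To show the structure being analysed, here is a minimal Crank-Nicolson step for the linear (Airy) part $u_t + u_{xxx} = 0$ on a periodic grid; because the central third-difference stencil is skew-symmetric, the step is an exact discrete $L^2$ isometry, which the last line checks. The paper's scheme additionally treats the nonlinear term $u u_x$; grid sizes and data here are illustrative only.

    import numpy as np

    N, L = 256, 2 * np.pi
    dx, dt = L / N, 1e-3
    x = np.arange(N) * dx

    I = np.eye(N)
    # Central 5-point periodic approximation of the third derivative:
    # (u_{j+2} - 2 u_{j+1} + 2 u_{j-1} - u_{j-2}) / (2 dx^3).
    D3 = (np.roll(I, 2, 1) - 2 * np.roll(I, 1, 1)
          + 2 * np.roll(I, -1, 1) - np.roll(I, -2, 1)) / (2 * dx ** 3)

    # Crank-Nicolson: (I + dt/2 D3) u^{n+1} = (I - dt/2 D3) u^n.
    step = np.linalg.solve(I + 0.5 * dt * D3, I - 0.5 * dt * D3)

    u = np.sin(x)
    norm0 = np.sqrt(dx) * np.linalg.norm(u)
    for _ in range(1000):
        u = step @ u
    norm1 = np.sqrt(dx) * np.linalg.norm(u)
    print("relative L2 drift:", abs(norm1 - norm0) / norm0)  # ~ machine eps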

Query-focused summarization (QFS) aims to provide a summary of a single document or of multiple documents that satisfies the information needs of a given query. It is useful for various real-world applications, such as abstractive snippet generation or the more recent retrieval-augmented generation (RAG). A prototypical QFS pipeline consists of a retriever (sparse or dense) and a generator (usually a large language model). However, applying large language models (LLMs) potentially leads to hallucinations, especially when the evidence contradicts their prior beliefs. There has been growing interest in developing new decoding methods that improve generation quality and reduce hallucination. In this work, we conduct a large-scale reproducibility study of one recently proposed decoding method -- Context-aware Decoding (CAD). In addition to replicating CAD's experiments on news summarization datasets, we include experiments on QFS datasets and conduct a more rigorous analysis of computational complexity and hyperparameter sensitivity. Experiments with eight different language models show that, performance-wise, CAD improves QFS quality by (1) reducing factuality errors/hallucinations while (2) mostly retaining the match of lexical patterns, measured by ROUGE scores, at the cost of increased inference-time FLOPs and reduced decoding speed. Our implementation, based on the Huggingface library, is available at https://github.com/zhichaoxu-shufe/context-aware-decoding-qfs
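For reference, CAD's core operation (from the original proposal by Shi et al.) is a contrastive combination of two forward passes, one with and one without the retrieved context: $p(y) \propto p(y \mid c, x)^{1+\alpha} \, / \, p(y \mid x)^{\alpha}$. The second forward pass per token is also where the extra inference-time FLOPs come from. A toy sketch with invented logits:

    import numpy as np

    def softmax(z):
        z = z - z.max()
        e = np.exp(z)
        return e / e.sum()

    def cad_logits(logits_with_ctx, logits_without_ctx, alpha=0.5):
        # Amplify what the model says *with* the evidence in context
        # relative to its parametric prior alone; in logit space this
        # is a simple contrastive combination.
        return (1 + alpha) * logits_with_ctx - alpha * logits_without_ctx

    # Toy next-token logits over a 4-word vocabulary (invented numbers):
    # the context supports token 2, the prior prefers token 0.
    with_ctx = np.array([2.0, 0.5, 3.0, 0.1])
    no_ctx = np.array([4.0, 0.5, 1.0, 0.1])
    print("vanilla:", softmax(with_ctx).round(3))
    print("CAD    :", softmax(cad_logits(with_ctx, no_ctx)).round(3))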

In this paper we address how complex social communities emerge from local decisions made by individuals with limited attention and knowledge. This problem is critical: if we understand the mechanisms of community formation, it may be possible to intervene to improve social welfare. We propose a novel, interpretable model of attributed community formation driven by the strategic, selfish behavior of resource-bounded individuals. In our stylized model, attributed individuals act strategically in two dimensions: attributes and network structure. Agents are endowed with limited attention, and communication costs limit the number of active connections. In each time step, each agent proposes a new friendship; agents then accept proposals, decline proposals, or remove friends, consistent with their strategy to maximize payoff. We identify a criterion (the number of stable triads) for convergence to some community structure and prove that our community formation model converges to a stable network. Ablations justify the ecological validity of our model and show that each of its aspects is essential. Our empirical results on a real-world microfinance community demonstrate excellent model fit compared to baseline models.
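A toy rendering of the propose/accept/remove dynamic may help fix ideas. The payoff function, attributes, and attention cap below are our inventions, not the paper's specification; in particular, payoffs here are static, so friend removal never triggers, whereas in the paper payoffs change with the evolving network.

    import random

    random.seed(0)

    N, CAP = 8, 3                    # agents, max active connections
    attr = [random.random() for _ in range(N)]
    friends = {i: set() for i in range(N)}

    def payoff(i, j):
        # Invented payoff: attribute similarity minus a fixed link cost.
        return 1.0 - abs(attr[i] - attr[j]) - 0.3

    for step in range(100):
        changed = False
        for i in range(N):
            # Each agent proposes its best profitable non-friend; the
            # proposal is accepted only if it also profits the target and
            # neither side exceeds its attention cap.
            cands = [j for j in range(N) if j != i and j not in friends[i]]
            j = max(cands, key=lambda c: payoff(i, c), default=None)
            if (j is not None and payoff(i, j) > 0 and payoff(j, i) > 0
                    and len(friends[i]) < CAP and len(friends[j]) < CAP):
                friends[i].add(j); friends[j].add(i); changed = True
        if not changed:              # no proposals left: a stable network
            break
    print("stable after step", step,
          {i: sorted(s) for i, s in friends.items()})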

In this paper we develop a novel neural network model for predicting the implied volatility surface. Prior financial domain knowledge is taken into account. A new activation function that incorporates the volatility smile is proposed and used for the hidden nodes that process the underlying asset price. In addition, financial conditions, such as the absence of arbitrage, boundary conditions, and the asymptotic slope, are embedded into the loss function. This is one of the first studies to propose a methodological framework that incorporates prior financial domain knowledge into neural network architecture design and model training. The proposed model outperforms the benchmark models on S&P 500 index option data spanning 20 years. More importantly, the domain knowledge is satisfied empirically, showing that the model is consistent with existing financial theories and conditions related to the implied volatility surface.
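The two ideas can be sketched as follows; the U-shaped activation, the calendar-spread penalty, and all data below are illustrative stand-ins and do not reproduce the paper's exact activation or penalty terms. The pattern is a smile-like nonlinearity on the moneyness input plus a soft no-arbitrage penalty added to the fitting loss.

    import torch
    import torch.nn as nn

    class SmileActivation(nn.Module):
        def forward(self, x):
            # Convex, U-shaped response: a stand-in "smile" nonlinearity.
            return torch.sqrt(1.0 + x ** 2)

    class IVSurfaceNet(nn.Module):
        def __init__(self, hidden=32):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(2, hidden),   # inputs: (moneyness, maturity)
                SmileActivation(),
                nn.Linear(hidden, 1),
                nn.Softplus(),          # implied vol must stay positive
            )
        def forward(self, m, t):
            return self.net(torch.stack([m, t], dim=-1)).squeeze(-1)

    def loss_fn(model, m, t, iv_obs, lam=1.0):
        iv = model(m, t)
        fit = ((iv - iv_obs) ** 2).mean()
        # Soft calendar no-arbitrage penalty: total variance iv^2 * t
        # should be non-decreasing in maturity.
        w = iv ** 2 * t
        dw_dt = torch.autograd.grad(w.sum(), t, create_graph=True)[0]
        return fit + lam * torch.relu(-dw_dt).mean()

    m = torch.linspace(-0.5, 0.5, 64)
    t = torch.linspace(0.1, 2.0, 64).requires_grad_(True)
    iv_obs = 0.2 + 0.1 * m ** 2          # toy smile data
    model = IVSurfaceNet()
    print(loss_fn(model, m, t, iv_obs).item())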

This paper does not describe a working system. Instead, it presents a single idea about representation that allows advances made by several different groups to be combined into an imaginary system called GLOM. The advances include transformers, neural fields, contrastive representation learning, distillation, and capsules. GLOM answers the question: how can a neural network with a fixed architecture parse an image into a part-whole hierarchy that has a different structure for each image? The idea is simply to use islands of identical vectors to represent the nodes in the parse tree. If GLOM can be made to work, it should significantly improve the interpretability of the representations produced by transformer-like systems when applied to vision or language.
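A toy rendering of the islands idea, not GLOM itself (the 1-D "image", the update rule, and all parameters are invented): embeddings at nearby locations repeatedly average with neighbours they already agree with, so locations covered by the same part settle on nearly identical vectors, forming the islands that would represent parse-tree nodes.

    import numpy as np

    rng = np.random.default_rng(0)
    n, d, tau = 12, 4, 0.2
    emb = rng.normal(size=(n, d))
    emb[:6] += np.array([3, 0, 0, 0])   # two latent "parts"
    emb[6:] += np.array([0, 3, 0, 0])

    for _ in range(50):
        # Attention favours locations whose embeddings already agree.
        sim = emb @ emb.T / tau
        att = np.exp(sim - sim.max(axis=1, keepdims=True))
        att /= att.sum(axis=1, keepdims=True)
        emb = 0.5 * emb + 0.5 * att @ emb   # move toward agreeing neighbours

    # Cosine-similarity matrix shows two blocks of ~1.0: two islands.
    norms = np.linalg.norm(emb, axis=1)
    print(np.round(emb @ emb.T / np.outer(norms, norms), 2))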
