
For a complexity class $C$ and language $L$, a constructive separation of $L \notin C$ gives an efficient algorithm (also called a refuter) that finds counterexamples (bad inputs) for every $C$-algorithm attempting to decide $L$. We study the questions: Which lower bounds can be made constructive? What are the consequences of constructive separations? We build a case that "constructiveness" serves as a dividing line between many weak lower bounds we know how to prove and strong lower bounds against $P$, $ZPP$, and $BPP$. Put another way, constructiveness is the opposite of a complexity barrier: it is a property we want lower bounds to have. Our results fall into three broad categories.

1. Our first set of results shows that, for many well-known lower bounds against streaming algorithms, one-tape Turing machines, and query complexity, as well as lower bounds for the Minimum Circuit Size Problem, making these lower bounds constructive would imply breakthrough separations, ranging from $EXP \neq BPP$ to even $P \neq NP$.

2. Our second set of results shows that for most major open problems in lower bounds against $P$, $ZPP$, and $BPP$, including $P \neq NP$, $P \neq PSPACE$, $P \neq PP$, $ZPP \neq EXP$, and $BPP \neq NEXP$, any proof of the separation would further imply a constructive separation. Our results generalize earlier results for $P \neq NP$ [Gutfreund, Shaltiel, and Ta-Shma, CCC 2005] and $BPP \neq NEXP$ [Dolev, Fandina, and Gutfreund, CIAC 2013].

3. Our third set of results shows that certain complexity separations cannot be made constructive. We observe that for all super-polynomially growing functions $t$, there are unconditionally no constructive separations for detecting high $t$-time Kolmogorov complexity (a task known not to be in $P$) from any complexity class.
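To fix notation for the refuter notion above (the notation here is ours, for illustration; the paper's definition may differ in quantifier details): a separation $L \notin C$ is \emph{constructive} if for every $C$-algorithm $A$ there is an efficient refuter $R$ such that

\[ R(1^n) \text{ outputs some } x \in \{0,1\}^{\le n} \text{ with } A(x) \neq L(x) \]

for infinitely many $n$; that is, $R$ efficiently produces counterexamples witnessing that $A$ fails to decide $L$.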

Related content

The dynamical formulation of optimal transport can be extended through various choices of the underlying geometry ($\textit{kinetic energy}$) and of the regularization of density paths ($\textit{potential energy}$). These combinations yield different variational problems ($\textit{Lagrangians}$), encompassing many variants of the optimal transport problem, such as the Schr\"odinger bridge, unbalanced optimal transport, and optimal transport with physical constraints, among others. In general, the optimal density path is unknown, and solving these variational problems can be computationally challenging. Leveraging the dual formulation of the Lagrangians, we propose a novel deep-learning-based framework that approaches all of these problems from a unified perspective. Our method does not require simulating or backpropagating through the trajectories of the learned dynamics, and does not need access to optimal couplings. We showcase the versatility of the proposed framework by outperforming previous approaches on single-cell trajectory inference, where incorporating prior knowledge into the dynamics is crucial for correct predictions.
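For concreteness, the classical Benamou-Brenier formulation with an added potential term is one standard instance of this kinetic/potential decomposition (the paper's Lagrangians generalize it; the notation here is ours):

\[ \inf_{(\rho_t, v_t)} \int_0^1 \!\!\int \tfrac{1}{2}\|v_t(x)\|^2 \rho_t(x)\, dx\, dt \;+\; \int_0^1 \mathcal{F}(\rho_t)\, dt, \]

subject to the continuity equation $\partial_t \rho_t + \nabla\cdot(\rho_t v_t) = 0$ with marginals $\rho_0 = \mu$ and $\rho_1 = \nu$, where the first term is the kinetic energy and the potential $\mathcal{F}$ regularizes the density path.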

Recent literature has seen a significant focus on building machine learning models with specific properties such as fairness, i.e., being non-biased with respect to a given set of attributes; calibration, i.e., model confidence being aligned with its predictive accuracy; and explainability, i.e., the ability to be understandable to humans. While there has been work focusing on each of these aspects individually, researchers have shied away from simultaneously addressing more than one of these dimensions. In this work, we address the problem of building models that are both fair and calibrated. We work with a specific definition of fairness, which closely matches [Biswas et al. 2019] and has the nice property that the Bayes-optimal classifier has the maximum possible fairness under our definition. We show that an existing negative result on achieving fair and calibrated models [Kleinberg et al. 2017] does not hold for our definition of fairness. Further, we show that ensuring group-wise calibration with respect to the sensitive attributes automatically results in a fair model under our definition. Using this result, we provide a first-cut approach for achieving fair and calibrated models via a simple post-processing technique based on temperature scaling. We then propose modifications of existing calibration losses to perform group-wise calibration, as a way of achieving fair and calibrated models in a variety of settings. Finally, we perform extensive experimentation with these techniques on a diverse benchmark of datasets, and present insights on the Pareto-optimality of the resulting solutions.
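As a minimal sketch of the group-wise temperature-scaling idea (our own illustrative implementation; the paper's losses and procedure may differ), one can fit a separate temperature per sensitive group on held-out data:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def nll(logits, labels, T):
    """Negative log-likelihood of labels under temperature-scaled softmax."""
    labels = np.asarray(labels)
    z = logits / T
    z = z - z.max(axis=1, keepdims=True)                      # numerical stability
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

def groupwise_temperatures(logits, labels, groups):
    """Fit one temperature per sensitive group on held-out data."""
    groups = np.asarray(groups)
    temps = {}
    for g in np.unique(groups):
        mask = groups == g
        res = minimize_scalar(lambda T: nll(logits[mask], np.asarray(labels)[mask], T),
                              bounds=(0.05, 10.0), method="bounded")
        temps[g] = res.x
    return temps

def calibrated_probs(logits, groups, temps):
    """Apply each example's group temperature before the softmax."""
    T = np.array([temps[g] for g in groups])[:, None]
    z = logits / T
    z = z - z.max(axis=1, keepdims=True)
    p = np.exp(z)
    return p / p.sum(axis=1, keepdims=True)
```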

In text generation, a large language model (LM) chooses each new word based only on the previously generated context, via the softmax function. However, co-occurrence statistics of words drawn from a scene-specific corpus are valuable in choosing the next word, and can help keep the topic of the generated text aligned with the current task. To fully exploit this co-occurrence information, we propose a graphmax function for task-specific text generation. Using graph-based regularization, graphmax enables the final word choice to be determined by both the global knowledge from the LM and the local knowledge from the scene-specific corpus. The traditional softmax function is regularized with a graph total variation (GTV) term, which incorporates the local knowledge into the LM and encourages the model to consider the statistical relationships between words in the scene-specific corpus. The proposed graphmax is versatile and can be readily plugged into any large pre-trained LM for text generation and machine translation. Through extensive experiments, we demonstrate that the new GTV-based regularization improves performance on various natural language processing tasks in comparison with existing methods. Moreover, through human experiments, we observe that participants can easily distinguish text generated with graphmax from text generated with softmax.
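One plausible reading of graphmax (our assumption about the objective's exact shape; the paper's formulation may differ) is the variational form of softmax, $\max_{p \in \Delta} z^\top p + H(p)$, augmented with an $\ell_1$ graph total variation penalty over co-occurrence edges. A projected-subgradient sketch:

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection onto the probability simplex (Duchi et al., 2008)."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u + (1.0 - css) / (np.arange(len(v)) + 1.0) > 0)[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def graphmax(logits, edges, lam=0.1, lr=0.1, steps=200):
    """Maximize  z.p + H(p) - lam * sum_{(i,j) in E} |p_i - p_j|  over the
    simplex by projected subgradient ascent (lam = 0 recovers softmax)."""
    z = logits - logits.max()
    p = np.exp(z) / np.exp(z).sum()          # start from the softmax solution
    for _ in range(steps):
        grad = logits - np.log(np.clip(p, 1e-12, None)) - 1.0  # grad of z.p + H(p)
        for i, j in edges:                                     # GTV subgradient
            s = lam * np.sign(p[i] - p[j])
            grad[i] -= s
            grad[j] += s
        p = project_simplex(p + lr * grad)
    return p
```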

We consider overlap splines, which are defined by connecting the patches of piecewise functions via common values at given finite sets of nodes, without using any partition of the computational domain. It is shown that some classical finite difference methods may be interpreted as collocation with overlap splines. Moreover, several versions of the meshless finite difference method, such as the RBF-FD method, are equivalent to collocation or discrete least squares with appropriately chosen spaces of overlap splines.
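For reference, here is the standard RBF-FD weight computation that the equivalence refers to (the generic construction, not the overlap-spline machinery itself): the weights for a differential operator at a center point are obtained by solving a small linear system over the stencil.

```python
import numpy as np

def rbf_fd_weights(nodes, center, eps=1.0):
    """Standard RBF-FD weights for the 1D Laplacian d^2/dx^2 at `center`,
    using a Gaussian RBF phi(r) = exp(-(eps*r)^2) on the given stencil nodes."""
    nodes = np.asarray(nodes, dtype=float)
    r = nodes[:, None] - nodes[None, :]
    A = np.exp(-(eps * r) ** 2)                       # RBF interpolation matrix
    d = center - nodes
    # d^2/dx^2 of exp(-(eps*(x - xi))^2), evaluated at x = center:
    b = (4 * eps**4 * d**2 - 2 * eps**2) * np.exp(-(eps * d) ** 2)
    return np.linalg.solve(A, b)

# On the stencil {-h, 0, h}, the weights approach the classical
# finite-difference stencil [1, -2, 1] / h^2 for small eps*h:
h = 0.1
w = rbf_fd_weights([-h, 0.0, h], 0.0, eps=1.0)
print(w * h**2)   # approximately [1, -2, 1]
```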

According to Chv\'{a}tal's classic lemma from 1977, a graph of minimum degree $\delta(G)$ contains every tree on $\delta(G)+1$ vertices. Our main result is the following algorithmic "extension" of Chv\'{a}tal's Lemma: for any $n$-vertex graph $G$, integer $k$, and tree $T$ on at most $\delta(G)+k$ vertices, deciding whether $G$ contains a subgraph isomorphic to $T$ can be done in time $f(k)\cdot n^{\mathcal{O}(1)}$ for some function $f$ of $k$ only. The proof of our main result is based on an interplay between extremal graph theory and parameterized algorithms.

With the emergence of more powerful large language models (LLMs), such as ChatGPT and GPT-4, in-context learning (ICL) has gained significant prominence in leveraging these models for specific tasks by utilizing data-label pairs as precondition prompts. While incorporating demonstrations can greatly enhance the performance of LLMs across various tasks, it may introduce a new security concern: attackers can manipulate only the demonstrations, without changing the input, to perform an attack. In this paper, we investigate the security concern of ICL from an adversarial perspective, focusing on the impact of demonstrations. We propose a novel attack method named advICL, which aims to manipulate only the demonstrations, without changing the input, in order to mislead the models. Our results demonstrate that as the number of demonstrations increases, the robustness of in-context learning decreases. Additionally, we identify an intrinsic property of demonstrations: they can be reused (prepended) with different inputs. As a result, this introduces a more practical threat model in which an attacker can attack a test input example even without knowing or manipulating it. To achieve this, we propose a transferable version of advICL, named Transferable-advICL. Our experiments show that the adversarial demonstrations generated by Transferable-advICL can successfully attack previously unseen test input examples. We hope that our study reveals the critical security risks associated with ICL and underscores the need for extensive research on the robustness of ICL, particularly given its increasing significance in the advancement of LLMs.

In many fault-detection problems, the objective is to identify all the defective items in a set of $n$ items with a binary characteristic while using the minimum number of tests. Group testing (i.e., testing a subset of items with a single test) plays a pivotal role in classifying a large population of items. We study a central problem in the combinatorial group testing model for the situation where the number $d$ of defectives is unknown in advance. Let $M(d,n)=\min_{\alpha}M_\alpha(d|n)$, where $M_\alpha(d|n)$ is the maximum number of tests required by an algorithm $\alpha$ for the problem. An algorithm $\alpha$ is called a $c$\emph{-competitive algorithm} if there exists a constant $a$ such that for all $0\le d < n$, $M_{\alpha}(d|n)\le cM(d,n)+a$. We design a new adaptive algorithm with competitive constant $c \le 1.431$, pushing the competitive ratio below the best previously known bound of $1.452$. The new algorithm rests on a novel solution framework that combines a strongly competitive algorithm with an up-zig-zag strategy, which has not been investigated before.
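To make the testing model concrete, the following sketch implements the classic adaptive strategy of testing a pool and binary-searching for a defective whenever the pool tests positive. This only illustrates the model; it is not the paper's $1.431$-competitive algorithm:

```python
def find_defectives(items, is_defective):
    """Identify all defectives using pooled tests: a pooled test on a subset
    returns True iff the subset contains at least one defective item."""
    def test(pool):
        return any(is_defective(x) for x in pool)

    remaining = list(items)
    defectives = []
    while remaining and test(remaining):
        # Binary-search one defective inside the positive pool.
        lo, hi = 0, len(remaining)
        while hi - lo > 1:
            mid = (lo + hi) // 2
            if test(remaining[lo:mid]):
                hi = mid
            else:
                lo = mid
        defectives.append(remaining.pop(lo))
    return defectives

# Example: items 0..15 with defectives {3, 11}.
print(find_defectives(range(16), lambda x: x in {3, 11}))   # -> [3, 11]
```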

The Graphical House Allocation problem asks: how can $n$ houses (each with a fixed non-negative value) be assigned to the vertices of an undirected graph $G$, so as to minimize the "aggregate local envy", i.e., the sum of absolute differences along the edges of $G$? This problem generalizes the classical Minimum Linear Arrangement problem, as well as the well-known House Allocation Problem from Economics, the latter of which has notable practical applications in organ exchanges. Recent work has studied the computational aspects of Graphical House Allocation and observed that the problem is NP-hard and inapproximable even on particularly simple classes of graphs, such as vertex-disjoint unions of paths. However, the dependence of any approximations on the structural properties of the underlying graph had not been studied. In this work, we give a complete characterization of the approximability of the Graphical House Allocation problem. We present algorithms to approximate the optimal envy on general graphs, trees, planar graphs, bounded-degree graphs, bounded-degree planar graphs, and bounded-degree trees. For each of these graph classes, we then prove matching lower bounds, showing that in each case, no significant improvement can be attained unless P = NP. We also present general approximation ratios as a function of structural parameters of the underlying graph, such as treewidth; these match the aforementioned tight upper bounds in general, and are significantly better approximations for many natural subclasses of graphs. Finally, we present constant-factor approximation schemes for the special classes of complete binary trees and random graphs.
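For intuition about the objective, here is a brute-force sketch that evaluates the aggregate local envy over all assignments (feasible only for tiny instances; the paper's contribution is approximation algorithms, not enumeration):

```python
from itertools import permutations

def min_aggregate_envy(values, edges):
    """Exhaustively assign house values to vertices 0..n-1 and return the
    minimum sum of |v_u - v_w| over the edges (only practical for small n)."""
    best = float("inf")
    for perm in permutations(values):
        envy = sum(abs(perm[u] - perm[w]) for u, w in edges)
        best = min(best, envy)
    return best

# A path on 4 vertices: the optimum places the values in sorted order.
values = [4, 1, 3, 2]
path_edges = [(0, 1), (1, 2), (2, 3)]
print(min_aggregate_envy(values, path_edges))   # -> 3
```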

In the large language model (LLM) revolution, embedding is a key component of various systems. For example, it is used to retrieve knowledge or memories for LLMs, and to build content moderation filters, among other uses. As such use cases span from English to other natural or programming languages, and from retrieval to classification and beyond, it is desirable to build a unified embedding model rather than a dedicated one for each scenario. In this work, we make an initial step towards this goal, demonstrating that transformer decoders pre-trained on multiple languages (both natural and programming) can embed universally when fine-tuned on limited English data. We provide a comprehensive practice with thorough evaluations. On English MTEB, our models achieve competitive performance on different embedding tasks with minimal training data. On other benchmarks, such as multilingual classification and code search, our models (without any supervision) perform comparably to, or even surpass, heavily supervised baselines and/or APIs. These results provide evidence for a promising path towards building powerful unified embedders that can be applied across tasks and languages.
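A minimal sketch of the generic recipe of using a pre-trained decoder as an embedder via mean pooling (standard Hugging Face usage; the checkpoint name and pooling choice here are placeholders, not the paper's recipe):

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Placeholder checkpoint; any pre-trained decoder-style model can be substituted.
name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(name)
tokenizer.pad_token = tokenizer.eos_token        # gpt2 has no pad token by default
model = AutoModel.from_pretrained(name).eval()

def embed(texts):
    """Mean-pool the last hidden states over non-padding tokens."""
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state          # (B, T, D)
    mask = batch["attention_mask"].unsqueeze(-1)           # (B, T, 1)
    summed = (hidden * mask).sum(dim=1)
    return summed / mask.sum(dim=1)                        # (B, D)

a, b = embed(["sort a list in python", "how do I order an array?"])
print(torch.cosine_similarity(a, b, dim=0).item())
```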

Dynamic programming (DP) solves a variety of structured combinatorial problems by iteratively breaking them down into smaller subproblems. In spite of their versatility, DP algorithms are usually non-differentiable, which hampers their use as a layer in neural networks trained by backpropagation. To address this issue, we propose to smooth the max operator in the dynamic programming recursion, using a strongly convex regularizer. This allows us to relax both the optimal value and the optimal solution of the original combinatorial problem, and turns a broad class of DP algorithms into differentiable operators. Theoretically, we provide a new probabilistic perspective on backpropagating through these DP operators, and relate them to inference in graphical models. We derive two particular instantiations of our framework, a smoothed Viterbi algorithm for sequence prediction and a smoothed DTW algorithm for time-series alignment. We showcase these instantiations on two structured prediction tasks and on structured and sparse attention for neural machine translation.
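A minimal sketch of the smoothed-operator idea instantiated for DTW, using the negentropy regularizer, which turns the recursion's hard min into a soft one (simplified; in practice gradients would come from autodiff or an analytical backward pass):

```python
import numpy as np

def softmin(values, gamma):
    """Negentropy-smoothed min: -gamma * log(sum(exp(-v / gamma)))."""
    v = np.asarray(values) / -gamma
    m = v.max()
    return -gamma * (m + np.log(np.exp(v - m).sum()))

def soft_dtw(x, y, gamma=1.0):
    """Smoothed DTW between 1-D series x and y with squared-error cost.
    As gamma -> 0, this recovers the classical (hard) DTW recursion."""
    n, m = len(x), len(y)
    R = np.full((n + 1, m + 1), np.inf)
    R[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (x[i - 1] - y[j - 1]) ** 2
            R[i, j] = cost + softmin(
                [R[i - 1, j], R[i, j - 1], R[i - 1, j - 1]], gamma)
    return R[n, m]

print(soft_dtw([0.0, 1.0, 2.0], [0.0, 1.1, 2.1], gamma=0.1))
```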
