
When translating a term calculus into a graphical formalism, many inessential details are abstracted away. In the case of the $\lambda$-calculus translated to proof-nets, these inessential details are captured by a notion of equivalence on $\lambda$-terms known as $\simeq_\sigma$-equivalence, in both the intuitionistic (due to Regnier) and classical (due to Laurent) cases. The purpose of this paper is to uncover a strong bisimulation behind $\simeq_\sigma$-equivalence, as formulated by Laurent for Parigot's $\lambda\mu$-calculus. This is achieved by introducing a relation $\simeq$, defined over a revised presentation of the $\lambda\mu$-calculus which we dub $\Lambda M$. More precisely, we first identify the reasons why Laurent's $\simeq_\sigma$-equivalence on $\lambda\mu$-terms fails to be a strong bisimulation. Inspired by Laurent's \emph{Polarized Proof-Nets}, this leads us to distinguish multiplicative from exponential reduction steps on terms. Second, we enrich the syntax of $\lambda\mu$ so as to track the exponential operations. These technical ingredients pave the way towards a strong bisimulation for the classical case. We introduce a calculus $\Lambda M$ and a relation $\simeq$ that we show to be a strong bisimulation with respect to reduction in $\Lambda M$, i.e. two $\simeq$-equivalent terms have exactly the same reduction semantics, a result which fails for Regnier's $\simeq_\sigma$-equivalence in the $\lambda$-calculus as well as for Laurent's $\simeq_\sigma$-equivalence in $\lambda\mu$. Although $\simeq$ is formulated over an enriched syntax and hence is not strictly included in Laurent's $\simeq_\sigma$, we show how it can be seen as a restriction of it.
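For reference (this schematic is not taken from the paper), Regnier's two $\sigma$-rules on plain $\lambda$-terms, and the strong bisimulation property at stake, can be stated as follows, with the usual freshness side conditions $y \notin \mathrm{fv}(u)$ and $x \notin \mathrm{fv}(v)$:
\[
(\lambda x.\lambda y.t)\,u \;\simeq_\sigma\; \lambda y.((\lambda x.t)\,u)
\qquad\qquad
((\lambda x.t)\,u)\,v \;\simeq_\sigma\; (\lambda x.(t\,v))\,u
\]
A relation $\simeq$ is a strong bisimulation for a reduction relation $\to$ when $t \simeq u$ and $t \to t'$ imply that there exists $u'$ with $u \to u'$ and $t' \simeq u'$ (and symmetrically); in particular, $\simeq$-equivalent terms have reduction sequences of exactly the same length.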

Related content

This paper presents an $O^{*}(1.42^{n})$ time algorithm for the Maximum Cut problem on split graphs, along with a subexponential time algorithm for its decision variant.

In Path Set Packing, the input is an undirected graph $G$, a collection $\mathcal{P}$ of simple paths in $G$, and a positive integer $k$. The problem is to decide whether there exist $k$ pairwise edge-disjoint paths in $\mathcal{P}$. We study the parameterized complexity of Path Set Packing with respect to both natural and structural parameters. We show that the problem is $W[1]$-hard with respect to vertex cover number, and $W[1]$-hard with respect to pathwidth plus maximum degree plus solution size. These results answer an open question raised at COCOON 2018. On the positive side, we present an FPT algorithm parameterized by feedback vertex number plus maximum degree, and an FPT algorithm parameterized by treewidth plus maximum degree plus the maximum length of a path in $\mathcal{P}$. These positive results complement the hardness of Path Set Packing with respect to any subset of the parameters used in the FPT algorithms. We also give a $4$-approximation algorithm for the maximum Path Set Packing problem which runs in FPT time when parameterized by feedback edge number.
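To pin down the decision problem, a brute-force check (exponential in $|\mathcal{P}|$ and purely illustrative) can enumerate $k$-subsets of the given paths and test pairwise edge-disjointness; the encoding of paths as vertex sequences below is our assumption, not the paper's.

```python
# Brute-force reference check for Path Set Packing (illustrative only).
from itertools import combinations

def path_edges(path):
    # path: sequence of vertices; its edges are the unordered consecutive pairs
    return {frozenset(e) for e in zip(path, path[1:])}

def has_edge_disjoint_packing(paths, k):
    edge_sets = [path_edges(p) for p in paths]
    for subset in combinations(range(len(paths)), k):
        union, ok = set(), True
        for i in subset:
            if union & edge_sets[i]:   # shares an edge with an already chosen path
                ok = False
                break
            union |= edge_sets[i]
        if ok:
            return True
    return False

# Example: two edge-disjoint paths can be chosen from these three paths in a 4-cycle.
print(has_edge_disjoint_packing([[1, 2, 3], [3, 4, 1], [2, 3, 4]], 2))  # True
```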

Given a graph $G$ and sets $\{\alpha_v~|~v \in V(G)\}$ and $\{\beta_v~|~v \in V(G)\}$ of non-negative integers, it is known that deciding whether $G$ contains a spanning tree $T$ such that $\alpha_v \le d_T (v) \le \beta_v$ for all $v \in V(G)$ is $NP$-complete. In this article, we relax the problem by demanding that the degree restrictions apply only to vertices $v\in U$, where $U$ is a stable set of $G$. In this case, the problem becomes tractable. A. Frank presented a result characterizing the positive instances of that relaxed problem. Using matroid intersection, as developed by J. Edmonds, we give a new and short proof of Frank's result and show that if $U$ is stable and the edges of $G$ are weighted by arbitrary real numbers, then even a minimum-cost tree $T$ with $\alpha_v \le d_T (v) \le \beta_v$ for all $v \in U$ can be found in polynomial time, if such a tree exists.

In this paper, we propose the generalized mixed reduced rank regression method, GMR$^3$ for short. GMR$^3$ is a regression method for a mix of numeric, binary, and ordinal response variables. The predictor variables can be a mix of binary, nominal, ordinal, and numeric variables. To deal with the categorical predictors, we use optimal scaling. A majorization-minimization algorithm is derived for maximum likelihood estimation under a local independence assumption. We discuss in detail model selection for the dimensionality (rank) and the selection of predictor variables. We show an application of GMR$^3$ using the Eurobarometer Surveys data set of 2023.
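As a point of comparison only, the following NumPy sketch implements plain reduced-rank regression for purely numeric responses with identity weighting; it is not the GMR$^3$ estimator (no optimal scaling, no majorization-minimization, no mixed response types), and the function name and synthetic example are ours.

```python
# Plain reduced-rank regression: rank-constrained coefficient matrix via SVD of fitted values.
import numpy as np

def reduced_rank_regression(X, Y, rank):
    # X: [n, p] predictors, Y: [n, q] numeric responses, rank <= min(p, q)
    B_ols = np.linalg.lstsq(X, Y, rcond=None)[0]      # unrestricted coefficients, [p, q]
    Y_hat = X @ B_ols
    _, _, Vt = np.linalg.svd(Y_hat, full_matrices=False)
    V_r = Vt[:rank].T                                 # top-`rank` right singular vectors, [q, rank]
    return B_ols @ V_r @ V_r.T                        # rank-constrained coefficient matrix

# Synthetic example with a true rank-2 coefficient matrix.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))
B_true = rng.normal(size=(6, 2)) @ rng.normal(size=(2, 4))
Y = X @ B_true + 0.1 * rng.normal(size=(200, 4))
B_hat = reduced_rank_regression(X, Y, rank=2)
print(np.linalg.matrix_rank(B_hat))  # expected: 2
```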

We study the densest subgraph problem and give algorithms via multiplicative weights update (MWU) and area convexity that converge in $O\left(\frac{\log m}{\epsilon^{2}}\right)$ and $O\left(\frac{\log m}{\epsilon}\right)$ iterations, respectively, both with nearly-linear time per iteration. Compared with the work by Bahmani et al. (2014), our MWU algorithm uses a very different and much simpler procedure for recovering the dense subgraph from the fractional solution and does not employ a binary search. Compared with the work by Boob et al. (2019), our algorithm via area convexity improves the iteration complexity by a factor of $\Delta$, the maximum degree in the graph, and matches the fastest theoretical total runtime currently known via flows (Chekuri et al., 2022). Next, we study the dense subgraph decomposition problem and give the first practical iterative algorithm with linear convergence rate, with total running time $O\left(mn\log\frac{1}{\epsilon}\right)$, via accelerated random coordinate descent. This significantly improves over the $O\left(\frac{m\sqrt{mn\Delta}}{\epsilon}\right)$ time of the FISTA-based algorithm by Harb et al. (2022). In the high-precision regime $\epsilon\ll\frac{1}{n}$, where we can even recover the exact solution, our algorithm has a total runtime of $O\left(mn\log n\right)$, matching the exact algorithm via parametric flows (Gallo et al., 1989). Empirically, we show that this algorithm is very practical and scales to very large graphs, and that its performance is competitive with widely used methods that have significantly weaker theoretical guarantees.
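To make the objective $|E(S)|/|S|$ concrete, here is the classical greedy peeling $1/2$-approximation of Charikar (2000) in Python; it is not the MWU or area-convexity algorithm of the abstract, and the edge-list input format is an assumption.

```python
# Greedy peeling 1/2-approximation for densest subgraph (Charikar, 2000), illustrative only.
def densest_subgraph_greedy(edges):
    # edges: iterable of (u, v) pairs over hashable vertex ids, u != v (no self-loops)
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    remaining = set(adj)
    m = sum(len(nbrs) for nbrs in adj.values()) // 2
    best_density, best_set = 0.0, set(remaining)
    while remaining:
        density = m / len(remaining)
        if density > best_density:
            best_density, best_set = density, set(remaining)
        v = min(remaining, key=lambda x: len(adj[x]))   # peel a minimum-degree vertex
        for u in adj[v]:
            adj[u].discard(v)
        m -= len(adj[v])
        adj.pop(v)
        remaining.discard(v)
    return best_density, best_set
```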

We consider the problem of computing tight privacy guarantees for the composition of subsampled differentially private mechanisms. Recent algorithms can numerically compute the privacy parameters to arbitrary precision but must be carefully applied. Our main contribution is to address two common points of confusion. First, some privacy accountants assume that the privacy guarantees for the composition of a subsampled mechanism are determined by self-composing the worst-case datasets for the uncomposed mechanism. We show that this is not true in general. Second, Poisson subsampling is sometimes assumed to have privacy guarantees similar to those of sampling without replacement. We show that the privacy guarantees may in fact differ significantly between the two sampling schemes. In particular, we give an example of hyperparameters that result in $\varepsilon \approx 1$ for Poisson subsampling and $\varepsilon > 10$ for sampling without replacement. This occurs for some parameters that could realistically be chosen for DP-SGD.
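For orientation only, here is a minimal Rényi-DP accountant for the *unsubsampled* Gaussian mechanism; it shows what "numerically computing the privacy parameters" amounts to in the simplest case, while deliberately leaving out the subsampling amplification step whose pitfalls (Poisson vs. without replacement) the abstract is about. The function name and the order grid are our choices.

```python
# RDP accounting for composing an unsubsampled, sensitivity-1 Gaussian mechanism.
import numpy as np

def gaussian_rdp_epsilon(noise_multiplier, steps, delta, orders=None):
    """(eps, delta)-DP bound for `steps` compositions of a Gaussian mechanism
    with noise std `noise_multiplier` and sensitivity 1, via Renyi DP.

    One step has RDP alpha / (2 * sigma^2) at order alpha; RDP composes additively,
    and (alpha, rho)-RDP implies (rho + log(1/delta)/(alpha - 1), delta)-DP.
    """
    if orders is None:
        orders = np.arange(1.25, 64.0, 0.25)
    orders = np.asarray(orders, dtype=float)
    rdp = steps * orders / (2.0 * noise_multiplier ** 2)
    eps = rdp + np.log(1.0 / delta) / (orders - 1.0)
    return eps.min()

# Example: 1000 steps, sigma = 4.0, delta = 1e-5.
print(gaussian_rdp_epsilon(4.0, 1000, 1e-5))
```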

The success of AI models relies on the availability of large, diverse, and high-quality datasets, which can be challenging to obtain due to data scarcity, privacy concerns, and high costs. Synthetic data has emerged as a promising solution by generating artificial data that mimics real-world patterns. This paper provides an overview of synthetic data research, discussing its applications, challenges, and future directions. We present empirical evidence from prior art to demonstrate its effectiveness and highlight the importance of ensuring its factuality, fidelity, and unbiasedness. We emphasize the need for responsible use of synthetic data to build more powerful, inclusive, and trustworthy language models.

We address the task of automatically scoring candidate competency from textual features, derived from automatic speech recognition (ASR) transcriptions of asynchronous video job interviews (AVIs). The key challenge is how to model the dependency relation between questions and answers and capture semantic-level interaction within each question-answer (QA) pair. Most recent studies on AVI, however, focus on better representing questions and answers in isolation, and ignore the dependency information and interaction between them, which are critical for QA evaluation. In this work, we propose a Hierarchical Reasoning Graph Neural Network (HRGNN) for the automatic assessment of question-answer pairs. Specifically, we construct a sentence-level relational graph neural network to capture the dependency information of sentences within and between the question and the answer. Based on these graphs, we employ a semantic-level reasoning graph attention network to model the interaction states of the current QA session. Finally, we propose a gated recurrent unit (GRU) encoder to represent the temporal sequence of question-answer pairs for the final prediction. Empirical results on CHNAT (a real-world dataset) show that our proposed model significantly outperforms text-matching-based benchmark models. Ablation studies and experiments with 10 random seeds further demonstrate the effectiveness and stability of our model.
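For orientation only, the sketch below (assuming PyTorch and PyTorch Geometric) wires together the three ingredients named above in a much-simplified form: a graph-attention layer over a sentence graph, mean pooling per QA pair, and a GRU over the sequence of QA pairs. It is not the authors' HRGNN; in particular, the relational (typed-edge) sentence graph is replaced by a plain graph, and all module names and dimensions are our assumptions.

```python
# Much-simplified QA-pair scorer: GAT over sentences -> pool per QA pair -> GRU -> score.
import torch
import torch.nn as nn
from torch_geometric.nn import GATConv, global_mean_pool

class QAPairScorer(nn.Module):
    def __init__(self, sent_dim=768, hid=256):
        super().__init__()
        self.gat = GATConv(sent_dim, hid, heads=4, concat=False)  # sentence-level graph attention
        self.gru = nn.GRU(hid, hid, batch_first=True)             # temporal encoder over QA pairs
        self.head = nn.Linear(hid, 1)                             # competency score

    def forward(self, sent_x, edge_index, pair_id, num_pairs):
        # sent_x: [num_sentences, sent_dim] sentence embeddings from the ASR transcript
        # edge_index: [2, num_edges] dependency edges between sentences
        # pair_id: [num_sentences] index of the QA pair each sentence belongs to
        h = torch.relu(self.gat(sent_x, edge_index))
        pair_vecs = global_mean_pool(h, pair_id, size=num_pairs)  # [num_pairs, hid]
        out, _ = self.gru(pair_vecs.unsqueeze(0))                 # the interview as one sequence
        return self.head(out[:, -1]).squeeze(-1)                  # score from the last QA state
```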

Neural machine translation (NMT) is a deep learning based approach to machine translation that yields state-of-the-art translation performance in scenarios where large-scale parallel corpora are available. Although high-quality, domain-specific translation is crucial in the real world, domain-specific corpora are usually scarce or nonexistent, and thus vanilla NMT performs poorly in such scenarios. Domain adaptation, which leverages both out-of-domain parallel corpora and monolingual corpora for in-domain translation, is therefore very important for domain-specific translation. In this paper, we give a comprehensive survey of the state-of-the-art domain adaptation techniques for NMT.

Dynamic programming (DP) solves a variety of structured combinatorial problems by iteratively breaking them down into smaller subproblems. In spite of their versatility, DP algorithms are usually non-differentiable, which hampers their use as a layer in neural networks trained by backpropagation. To address this issue, we propose to smooth the max operator in the dynamic programming recursion using a strongly convex regularizer. This allows us to relax both the optimal value and the optimal solution of the original combinatorial problem, and turns a broad class of DP algorithms into differentiable operators. Theoretically, we provide a new probabilistic perspective on backpropagating through these DP operators and relate them to inference in graphical models. We derive two particular instantiations of our framework: a smoothed Viterbi algorithm for sequence prediction and a smoothed DTW algorithm for time-series alignment. We showcase these instantiations on two structured prediction tasks and on structured and sparse attention for neural machine translation.
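As a concrete instance of the smoothing idea, the NumPy sketch below replaces the hard min of the DTW recursion with $\operatorname{softmin}_\gamma(a) = -\gamma \log \sum_i e^{-a_i/\gamma}$, the negentropy-regularized smoothed operator, yielding a value that is differentiable in the pairwise costs. It shows only the forward value of soft-DTW, not the backward pass or the probabilistic interpretation developed in the paper; function names are ours.

```python
# Soft-DTW forward pass: DTW recursion with a smoothed (differentiable) min.
import numpy as np

def softmin(values, gamma):
    # -gamma * log(sum(exp(-v / gamma))), computed stably
    z = -np.asarray(values, dtype=float) / gamma
    zmax = z.max()
    return -gamma * (zmax + np.log(np.exp(z - zmax).sum()))

def soft_dtw(x, y, gamma=1.0):
    # x: [n, d], y: [m, d]; squared Euclidean pairwise costs
    x, y = np.atleast_2d(x), np.atleast_2d(y)
    n, m = len(x), len(y)
    cost = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    R = np.full((n + 1, m + 1), np.inf)
    R[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            R[i, j] = cost[i - 1, j - 1] + softmin(
                [R[i - 1, j], R[i, j - 1], R[i - 1, j - 1]], gamma)
    return R[n, m]

# Example: small gamma approaches the classical DTW cost.
print(soft_dtw(np.array([[0.0], [1.0], [2.0]]), np.array([[0.0], [2.0]]), gamma=0.1))
```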
