In 2023, Kuznetsov and Speranski introduced infinitary action logic with multiplexing, $!^m\nabla \mathrm{ACT}_\omega$, and proved that its derivability problem lies between the $\omega$ and $\omega^\omega$ levels of the hyperarithmetical hierarchy. We prove that this problem is $\Delta^0_{\omega^\omega}$-complete under Turing reductions; namely, we show that it is recursively isomorphic to the satisfaction predicate for computable infinitary formulas of rank less than $\omega^\omega$ in the language of arithmetic. As a consequence, we prove that the closure ordinal for $!^m\nabla \mathrm{ACT}_\omega$ equals $\omega^\omega$. We also prove that the fragment of $!^m\nabla \mathrm{ACT}_\omega$ in which the Kleene star may not occur in the scope of the subexponential is $\Delta^0_{\omega^\omega}$-complete. Finally, we present a family of fragments of $!^m\nabla \mathrm{ACT}_\omega$ such that the complexity of the $k$-th logic lies between $\Delta^0_{\omega^k}$ and $\Delta^0_{\omega^{k+1}}$.
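
What makes derivability in this family infinitary, and hence hyperarithmetically hard, is the $\omega$-rule for iteration: the left rule for the Kleene star in $\mathrm{ACT}_\omega$ has one premise for every finite power of $A$. Below is a minimal LaTeX rendering of this standard rule (our notation, not the paper's exact presentation):

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% The omega-rule for the Kleene star on the left: infinitely many premises,
% one for each finite power of A, so a derivation is an infinite
% (well-founded) tree rather than a finite one.
\[
\frac{\Gamma, \Delta \to C \quad\; \Gamma, A, \Delta \to C \quad\;
      \Gamma, A, A, \Delta \to C \quad\; \cdots}
     {\Gamma, A^{*}, \Delta \to C}\;(*\mathrm{L}_{\omega})
\]
\end{document}
```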

Vertex splitting is a graph operation that replaces a vertex $v$ with two nonadjacent new vertices and makes each neighbor of $v$ adjacent to one or both of the introduced vertices. Vertex splitting has been used in contexts ranging from circuit design to statistical analysis. In this work, we explore the computational complexity of achieving a given graph property $\Pi$ by a limited number of vertex splits, formalized as the problem $\Pi$ Vertex Splitting ($\Pi$-VS). We focus on hereditary graph properties and contribute four groups of results: First, we classify the classical complexity of $\Pi$-VS for graph properties characterized by forbidden subgraphs of size at most 3. Second, we provide a framework that allows one to show NP-completeness whenever one can construct a combination of a forbidden subgraph and prescribed vertex splits satisfying certain conditions. Leveraging this framework, we show NP-completeness when $\Pi$ is characterized by forbidden subgraphs that are sufficiently well connected. In particular, we show that $F$-Free-VS is NP-complete for each biconnected graph $F$. Third, we study infinite families of forbidden subgraphs, obtaining NP-hardness for Bipartite-VS and Perfect-VS. Finally, we touch upon the parameterized complexity of $\Pi$-VS with respect to the number of allowed splits, showing para-NP-hardness for $K_3$-Free-VS and deriving an XP algorithm for the case where each vertex may be split at most once.
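
To make the operation concrete, here is a minimal sketch of a single vertex split on an undirected graph stored as an adjacency dict; the function name and the labels for the two copies are our own conventions, not notation from the paper.

```python
# A minimal sketch of one vertex split: v is replaced by two nonadjacent
# copies, and every neighbor of v is reattached to one or both copies.
def split_vertex(adj, v, to_first, to_second):
    """Replace v by the two new vertices (v, 'a') and (v, 'b').

    to_first / to_second may overlap; together they must cover all of v's
    neighbors, mirroring the definition of the operation.
    """
    assert to_first | to_second == adj[v], "every neighbor must be covered"
    v1, v2 = (v, "a"), (v, "b")
    adj[v1] = set(to_first)
    adj[v2] = set(to_second)
    for u in adj.pop(v):          # rewire each former neighbor of v
        adj[u].discard(v)
        if u in to_first:
            adj[u].add(v1)
        if u in to_second:
            adj[u].add(v2)
    return adj

# Splitting the center of a triangle with a pendant vertex:
adj = {1: {2, 3, 4}, 2: {1, 3}, 3: {1, 2}, 4: {1}}
split_vertex(adj, 1, to_first={2, 3}, to_second={4})
```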

In this paper, we investigate the problem of deciding whether two standard normal random vectors $\mathsf{X}\in\mathbb{R}^{n}$ and $\mathsf{Y}\in\mathbb{R}^{n}$ are correlated or not. This is formulated as a hypothesis testing problem: under the null hypothesis, the vectors are statistically independent, while under the alternative, $\mathsf{X}$ and a randomly and uniformly permuted version of $\mathsf{Y}$ are correlated with correlation $\rho$. We analyze the thresholds at which optimal testing is information-theoretically impossible and possible, as a function of $n$ and $\rho$. To derive our information-theoretic lower bounds, we develop a novel technique for evaluating the second moment of the likelihood ratio using an orthogonal polynomial expansion, which, among other things, reveals a surprising connection to integer partition functions. We also study a multi-dimensional generalization of the above setting, where rather than two vectors we observe two databases/matrices, and furthermore allow for partial correlations between them.
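
For tiny $n$, the likelihood ratio of this hidden-permutation alternative can be written out by brute force: it averages, over all $n!$ permutations, the product of pairwise bivariate-normal density ratios. The sketch below (our illustration, not the paper's analysis) samples from both hypotheses and evaluates this ratio directly; its second moment is the object the paper handles via orthogonal polynomial expansions.

```python
# Brute-force likelihood ratio for the permuted-correlation testing problem,
# feasible only for tiny n since it enumerates all n! permutations.
import itertools
import numpy as np

def pair_ratio(x, u, rho):
    """f_rho(x, u) / (f(x) f(u)) for a standard bivariate normal pair."""
    c = 1.0 - rho**2
    return np.exp(rho * (2*x*u - rho*x**2 - rho*u**2) / (2*c)) / np.sqrt(c)

def likelihood_ratio(x, y, rho):
    n = len(x)
    return np.mean([np.prod([pair_ratio(x[i], y[p[i]], rho) for i in range(n)])
                    for p in itertools.permutations(range(n))])

rng = np.random.default_rng(1)
n, rho = 6, 0.8
x, z = rng.standard_normal(n), rng.standard_normal(n)
pi = rng.permutation(n)                      # hidden alignment
y = np.empty(n)
y[pi] = rho * x + np.sqrt(1 - rho**2) * z    # corr(x[i], y[pi[i]]) = rho
print(likelihood_ratio(x, y, rho))                       # typically large under H1
print(likelihood_ratio(x, rng.standard_normal(n), rho))  # typically O(1) under H0
```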

The Sibson and Arimoto capacities, which are based on the Sibson and Arimoto mutual information (MI) of order $\alpha$, respectively, are well-known generalizations of the channel capacity $C$. In this study, we derive novel alternating optimization algorithms for computing these capacities by providing new variational characterizations of the Sibson MI and the Arimoto MI. Moreover, we prove that all iterative algorithms for computing these capacities are equivalent under appropriate conditions imposed on their initial distributions.
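
As a grounded point of reference for this family of alternating-optimization schemes, the sketch below implements the classical Blahut-Arimoto iteration for the ordinary capacity $C$, the $\alpha \to 1$ special case: it alternates a closed-form multiplicative update of the input distribution against the divergences induced by the current output distribution. It is not the paper's generalized algorithm for the order-$\alpha$ capacities.

```python
# Classical Blahut-Arimoto alternating optimization for channel capacity C.
import numpy as np

def blahut_arimoto(W, iters=300):
    """W[x, y]: channel transition matrix. Returns (capacity in nats, input pmf)."""
    p = np.full(W.shape[0], 1.0 / W.shape[0])
    for _ in range(iters):
        q = p @ W                                # induced output distribution
        ratio = np.divide(W, q, out=np.ones_like(W), where=W > 0)
        d = np.sum(W * np.log(ratio), axis=1)    # D( W(.|x) || q ) per input x
        p = p * np.exp(d)                        # closed-form update of p
        p /= p.sum()
    q = p @ W
    ratio = np.divide(W, q, out=np.ones_like(W), where=W > 0)
    d = np.sum(W * np.log(ratio), axis=1)
    return float(p @ d), p

# Binary symmetric channel with crossover 0.1: C = log 2 - h(0.1) in nats.
W = np.array([[0.9, 0.1], [0.1, 0.9]])
cap, p = blahut_arimoto(W)
print(cap, np.log(2) + 0.1*np.log(0.1) + 0.9*np.log(0.9))
```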

In this work, we construct $4$-phase Golay complementary sequence (GCS) sets of cardinality $2^{3+\lceil \log_2 r \rceil}$ with arbitrary sequence length $n$, where the base-$10^{13}$ expansion of $n$ has $r$ nonzero digits. In particular, the GCS octets (sets of eight sequences) cover all lengths no greater than $10^{13}$. Moreover, based on the representation theory of the signed symmetric group, we construct Hadamard matrices from certain special GCS to improve their asymptotic existence: there exist Hadamard matrices of order $2^t m$ for any odd number $m$, where $t = 6\lfloor \frac{1}{40}\log_{2}m\rfloor + 10$.
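
The defining property is easy to test numerically: a set is complementary when the aperiodic autocorrelations of its members cancel at every nonzero shift. The sketch below checks a classical binary pair and one step of Golay's doubling construction; it only illustrates the property and is not the paper's $4$-phase construction.

```python
# Verify the Golay complementarity property: autocorrelations cancel at
# every nonzero shift.
import numpy as np

def acf(s, u):
    """Aperiodic autocorrelation of s at shift u >= 1."""
    return np.sum(s[:len(s) - u] * np.conj(s[u:]))

a, b = np.array([1, 1]), np.array([1, -1])   # classical length-2 Golay pair
assert acf(a, 1) + acf(b, 1) == 0

# Golay's doubling construction: (a|b, a|-b) is a pair of twice the length.
a2, b2 = np.concatenate([a, b]), np.concatenate([a, -b])
for u in range(1, 4):
    assert acf(a2, u) + acf(b2, u) == 0
print("Golay property verified for lengths 2 and 4")
```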

We prove that to each real singularity $f: (\mathbb{R}^{n+1}, 0) \to (\mathbb{R}, 0)$ one can associate two systems of differential equations $\mathfrak{g}^{k\pm}_f$ which are pushforwards, in the category of $\mathcal{D}$-modules over $\mathbb{R}^{\pm}$, of the sheaf of real analytic functions on the total space of the positive, respectively negative, Milnor fibration. We prove that for $k=0$, if $f$ is an isolated singularity, then $\mathfrak{g}^{\pm}$ determines the $n$-th homology groups of the positive, respectively negative, Milnor fibre. We then calculate $\mathfrak{g}^{+}$ for ordinary quadratic singularities and prove that, under certain conditions on the choice of morsification, one recovers the top homology groups of the Milnor fibres of any isolated singularity $f$. As an application, we construct a public-key encryption scheme based on morsification of singularities.
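
For orientation, here is a minimal LaTeX rendering (our notation) of the positive and negative Milnor fibres that the two systems $\mathfrak{g}^{k\pm}_f$ see, together with the ordinary quadratic case treated in the paper.

```latex
\documentclass{article}
\usepackage{amsmath, amssymb}
\begin{document}
% For 0 < \delta \ll \epsilon, the positive and negative Milnor fibres of f:
\[
F^{+} = f^{-1}(\delta) \cap B_{\epsilon}, \qquad
F^{-} = f^{-1}(-\delta) \cap B_{\epsilon}.
\]
% For the ordinary quadratic singularity
\[
f(x) = x_1^2 + \dots + x_p^2 - x_{p+1}^2 - \dots - x_{n+1}^2,
\]
% F^+ deformation-retracts onto the sphere S^{p-1} and F^- onto S^{n-p},
% so the top homology groups in question are those of spheres.
\end{document}
```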

We study the discrete bin covering problem, where a multiset of items from a fixed set $S \subseteq (0,1]$ must be split into disjoint subsets while maximizing the number of subsets whose contents sum to at least $1$. We study the online discrete variant, where $S$ is finite and items arrive sequentially. In the purely online setting, we show that the competitive ratios of the best deterministic (and randomized) algorithms converge to $\frac{1}{2}$ for large $S$, as in the continuous setting. We therefore consider the problem in the prediction setting, where algorithms may access a vector predicting the frequency of items of each size in the instance. In this setting, we introduce a family of online algorithms that perform near-optimally when the predictions are correct. Further, we introduce a second family of more robust algorithms that present a tradeoff between the performance guarantees when the predictions are perfect and when they are adversarial. Finally, we consider a stochastic setting where items are drawn independently from a fixed but unknown distribution over $S$. Using results on the PAC-learnability of probabilities in discrete distributions, we introduce a purely online algorithm whose average-case performance is near-optimal with high probability for all finite sets $S$ and all distributions over $S$.
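
For intuition about the $\frac{1}{2}$ barrier, the classical purely online strategy (often called Dual Next Fit) simply pours each arriving item into the current bin and closes it once the contents reach $1$; overflow is wasted, which is exactly what an adversary exploits. A minimal sketch:

```python
# Dual Next Fit for online bin covering: keep one open bin, close it as soon
# as its content reaches 1. Overflow beyond 1 is wasted coverage.
def dual_next_fit(items):
    covered, level = 0, 0.0
    for x in items:            # items arrive one by one, x in (0, 1]
        level += x
        if level >= 1.0:       # bin covered: close it, start a fresh one
            covered += 1
            level = 0.0
    return covered

# DNF covers 2 bins here, while the optimum covers 3
# ({1.0}, {1.0}, {0.5, 0.5}) -- the overflow of 0.5 is wasted.
print(dual_next_fit([1.0, 0.5, 1.0, 0.5]))
```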

Consider the supervised learning setting where the goal is to learn to predict labels $\mathbf y$ given points $\mathbf x$ from a distribution. An \textit{omnipredictor} for a class $\mathcal L$ of loss functions and a class $\mathcal C$ of hypotheses is a predictor whose predictions incur less expected loss than the best hypothesis in $\mathcal C$ for every loss in $\mathcal L$. Since the work of [GKR+21], which introduced the notion, there has been a large body of work in the setting of binary labels, where $\mathbf y \in \{0, 1\}$, but much less is known about the regression setting, where $\mathbf y \in [0,1]$ can be continuous. Our main conceptual contribution is the notion of \textit{sufficient statistics} for loss minimization over a family of loss functions: these are statistics about a distribution such that knowing them allows one to take actions that minimize the expected loss for any loss in the family. The notion of sufficient statistics relates directly to the approximate rank of the family of loss functions. Our key technical contribution is a bound of $O(1/\varepsilon^{2/3})$ on the $\varepsilon$-approximate rank of convex, Lipschitz functions on the interval $[0,1]$, which we show is tight up to a factor of $\mathrm{polylog}(1/\varepsilon)$. This yields improved runtimes for learning omnipredictors for the class of all convex, Lipschitz loss functions under weak learnability assumptions about the class $\mathcal C$. We also give efficient omnipredictors when the loss families have low-degree polynomial approximations or arise from generalized linear models (GLMs). This translation from sufficient statistics to faster omnipredictors is made possible by lifting the technique of loss outcome indistinguishability, introduced by [GKH+23] for Boolean labels, to the regression setting.
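
The notion of $\varepsilon$-approximate rank can be probed numerically: discretize a family of losses as a matrix $L[t, y] = \ell_t(y)$ and ask for the smallest SVD truncation whose entrywise error is at most $\varepsilon$ (this yields an upper bound on the $\varepsilon$-approximate rank, since SVD truncation need not be the best low-rank approximation in the entrywise norm). The sketch below does this for the illustrative sub-family $\ell_t(y) = |y - t|$ of convex $1$-Lipschitz losses; it is a toy probe of the quantity the $O(1/\varepsilon^{2/3})$ bound controls, not the paper's construction.

```python
# Empirical upper bound on the epsilon-approximate rank of a discretized
# family of convex 1-Lipschitz losses ell_t(y) = |y - t| on [0, 1].
import numpy as np

m = 400
grid = np.linspace(0.0, 1.0, m)
L = np.abs(grid[:, None] - grid[None, :])     # L[i, j] = ell_{t_i}(y_j)

U, s, Vt = np.linalg.svd(L)

def approx_rank(eps):
    """Smallest SVD truncation rank with entrywise error <= eps."""
    for k in range(1, m + 1):
        Lk = (U[:, :k] * s[:k]) @ Vt[:k]
        if np.max(np.abs(L - Lk)) <= eps:
            return k
    return m

for eps in (0.1, 0.05, 0.01):
    print(eps, approx_rank(eps))
```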

Calibrating simulation models that take large quantities of multi-dimensional data as input is a hard simulation optimization problem. Existing adaptive sampling strategies offer a methodological solution. However, due to extreme noise levels and heteroskedasticity of system responses, they may not sufficiently reduce the computational cost of estimation, and the solution algorithm may make little progress within a limited budget. We propose integrating stratification with adaptive sampling to improve optimization efficiency. Stratification can exploit local dependence in the simulation inputs and outputs, yet the state of the art does not provide a full capability to adaptively stratify the data as different solution alternatives are evaluated. We devise two procedures for data-driven calibration problems that involve a large dataset with multiple covariates, calibrating models within a fixed overall simulation budget. The first approach dynamically stratifies the input data using binary trees, while the second uses closed-form solutions based on linearity assumptions between the objective function and concomitant variables. We find that dynamic adjustment of the stratification structure accelerates optimization and reduces run-to-run variability in the generated solutions. Our case study on calibrating a wind power simulation model widely used in the wind industry shows that the proposed stratified adaptive sampling yields better-calibrated parameters under a limited budget.
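
As a baseline for what stratification buys, the sketch below implements the classical Neyman allocation, which minimizes the variance of a stratified estimator for a fixed budget. The adaptive procedures in the paper re-estimate the within-stratum noise and the strata themselves as the search progresses; this static snippet does not.

```python
# Classical Neyman allocation: given stratum sizes N_h and estimated
# within-stratum standard deviations s_h, allocate the replication budget
# proportionally to N_h * s_h.
import numpy as np

def neyman_allocation(N, s, budget):
    w = N * s
    n = np.floor(budget * w / w.sum()).astype(int)
    n[np.argmax(w)] += budget - n.sum()      # hand leftovers to largest weight
    return np.maximum(n, 1)                  # at least one sample per stratum

N = np.array([5000, 3000, 2000])             # stratum sizes (covariate bins)
s = np.array([4.0, 1.0, 0.5])                # estimated response std devs
print(neyman_allocation(N, s, budget=500))   # most effort to the noisy stratum
```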

In Linear Logic ($\mathsf{LL}$), the exponential modality $!$ brings forth a distinction between non-linear proofs and linear proofs, where linear means using an argument exactly once. Differential Linear Logic ($\mathsf{DiLL}$) is an extension of Linear Logic with additional rules for $!$ which encode differentiation and the ability to linearize proofs. Graded Linear Logic ($\mathsf{GLL}$), on the other hand, is a variation of Linear Logic in which $!$ is indexed over a semiring $R$. This $R$-grading allows for non-linear proofs of degree $r \in R$, with linear proofs being those of degree $1 \in R$. There has been recent interest in combining these two variations of $\mathsf{LL}$ and developing Graded Differential Linear Logic ($\mathsf{GDiLL}$). In this paper we present a sequent calculus for $\mathsf{GDiLL}$ and introduce its categorical semantics, which we call graded differential categories, using both coderelictions and deriving transformations. We prove that symmetric powers always give graded differential categories, and we provide further examples of graded differential categories. We also discuss graded versions of (monoidal) coalgebra modalities, additive bialgebra modalities, and the Seely isomorphisms, as well as their implementations in the sequent calculus of $\mathsf{GDiLL}$.
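
As a small illustration of how the grading interacts with (co)dereliction, here is our hedged rendering of the rules at grade $1 \in R$; the paper's sequent calculus is the authoritative formulation.

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Dereliction identifies a linear use with grade 1 of the semiring R; the
% codereliction (the differential direction) dualizes it. This is our
% rendering, not necessarily the paper's exact rules.
\[
\frac{\Gamma, A \vdash B}{\Gamma, !_{1}A \vdash B}\;(\mathsf{der})
\qquad\qquad
\frac{\Gamma \vdash A}{\Gamma \vdash !_{1}A}\;(\overline{\mathsf{der}})
\]
% Categorically these correspond to natural transformations
% d_A : !_1 A -> A and \bar{d}_A : A -> !_1 A, part of the data of a
% graded differential category.
\end{document}
```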

Neural machine translation (NMT) is a deep learning based approach to machine translation which yields state-of-the-art translation performance in scenarios where large-scale parallel corpora are available. Although high-quality, domain-specific translation is crucial in the real world, domain-specific corpora are usually scarce or nonexistent, and vanilla NMT therefore performs poorly in such scenarios. Domain adaptation, which leverages both out-of-domain parallel corpora and in-domain monolingual corpora for in-domain translation, is thus very important for domain-specific translation. In this paper, we give a comprehensive survey of the state-of-the-art domain adaptation techniques for NMT.
