In this article, we propose a new classification of $\Sigma^0_2$ formulas under the realizability interpretation of many-one reducibility (i.e., Levin reducibility). For example, ${\sf Fin}$, the problem of deciding whether a sequence is eventually zero, is many-one/Levin complete among $\Sigma^0_2$ formulas of the form $\exists n\forall m\geq n.\varphi(m,x)$, where $\varphi$ is decidable. The boundedness problems for sequences, ${\sf BddSeq}$, and for posets, ${\sf PO}_{\sf top}$, are many-one/Levin complete among $\Sigma^0_2$ formulas of the form $\exists n\forall m\geq n\forall k.\varphi(m,k,x)$, where $\varphi$ is decidable. However, unlike the case of classical many-one reducibility, none of the above is $\Sigma^0_2$-complete. The problem of deciding non-density of a linear order, ${\sf NonDense}$, is truly $\Sigma^0_2$-complete.
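
To make the three normal forms concrete, here is a hedged rendering of the instances named above in the abstract's notation (the dummy quantifier in the second line and the pairing $\langle p,q\rangle$ are our own illustrative choices):

```latex
% Fin fits the first normal form; BddSeq fits the second (the quantifier
% over m is a dummy there); NonDense is genuinely \Sigma^0_2-complete.
\begin{align*}
x \in \mathsf{Fin}      &\iff \exists n\,\forall m\ge n\;\bigl(x(m)=0\bigr),\\
x \in \mathsf{BddSeq}   &\iff \exists n\,\forall m\ge n\,\forall k\;\bigl(x(k)\le n\bigr),\\
L \in \mathsf{NonDense} &\iff \exists \langle p,q\rangle\,\forall r\;\bigl(p<_L q \wedge \neg(p<_L r \wedge r<_L q)\bigr).
\end{align*}
```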

Related content

In this paper we extend to two-dimensional data two recently introduced one-dimensional compressibility measures: the $\gamma$ measure, defined in terms of the smallest string attractor, and the $\delta$ measure, defined in terms of the number of distinct substrings of the input string. Concretely, we introduce the two-dimensional measures $\gamma_{2D}$ and $\delta_{2D}$ as natural generalizations of $\gamma$ and $\delta$, and we initiate the study of their properties. Among other things, we prove that $\delta_{2D}$ is monotone and can be computed in linear time, and we show that, although it is still true that $\delta_{2D} \leq \gamma_{2D}$, the gap between the two measures can be $\Omega(\sqrt{n})$ for families of $n\times n$ matrices, and therefore asymptotically larger than the gap between $\gamma$ and $\delta$. To complete the picture of two-dimensional compressibility measures, we introduce the measure $b_{2D}$, which generalizes to two dimensions the notion of optimal parsing. We prove that, somewhat surprisingly, the relationship between $b_{2D}$ and $\gamma_{2D}$ is significantly different from that in the one-dimensional case. As an application of our results we provide the first analysis of the space usage of the two-dimensional block tree introduced in [Brisaboa et al., Two-dimensional block trees, The Computer Journal, 2023]. Our analysis shows that the space usage can be bounded in terms of both $\gamma_{2D}$ and $\delta_{2D}$, providing a theoretical justification for the use of this data structure. Finally, using insights from our analysis, we design the first linear-time and linear-space algorithm for constructing the two-dimensional block tree for arbitrary matrices. Our algorithm is asymptotically faster than the best known solution, which is probabilistic and works only for binary matrices.
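
To make the definitions concrete, here is a minimal brute-force Python sketch under the natural reading of $\delta$ and $\delta_{2D}$ (the function names and the naive enumeration are ours; the point of the paper is that $\delta_{2D}$ can in fact be computed in linear time):

```python
# delta(S)    = max over k of (# distinct length-k substrings) / k
# delta_2D(M) = max over (k1, k2) of (# distinct k1 x k2 submatrices) / (k1 * k2)

def delta_1d(s: str) -> float:
    n = len(s)
    best = 0.0
    for k in range(1, n + 1):
        distinct = len({s[i:i + k] for i in range(n - k + 1)})
        best = max(best, distinct / k)
    return best

def delta_2d(m: list[list[int]]) -> float:
    rows, cols = len(m), len(m[0])
    best = 0.0
    for k1 in range(1, rows + 1):
        for k2 in range(1, cols + 1):
            blocks = {
                tuple(tuple(m[i + a][j:j + k2]) for a in range(k1))
                for i in range(rows - k1 + 1)
                for j in range(cols - k2 + 1)
            }
            best = max(best, len(blocks) / (k1 * k2))
    return best

print(delta_1d("abracadabra"))       # 5.0: five distinct characters
print(delta_2d([[0, 1], [1, 0]]))    # 2.0: two distinct 1x1 submatrices
```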

In the mixture-of-experts model, a common assumption is linearity between the response variable and the covariates. While this assumption has theoretical and computational benefits, it may lead to suboptimal estimates by overlooking potential nonlinear relationships among the variables. To address this limitation, we propose a partially linear structure that incorporates unspecified functions to capture nonlinear relationships. We establish the identifiability of the proposed model under mild conditions and introduce a practical estimation algorithm. We demonstrate the performance of our approach through numerical studies, including simulations and a real data analysis.
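
For concreteness, one hedged instance of such a partially linear mixture, with Gaussian experts and softmax gating (an illustrative specification, not necessarily the paper's exact model):

```latex
% J Gaussian experts, each with a linear part x^T beta_j and an
% unspecified smooth nonlinear part g_j(t); softmax gating pi_j(x).
f(y \mid x, t) \;=\; \sum_{j=1}^{J} \pi_j(x)\,
  \mathcal{N}\!\bigl(y;\; x^{\top}\beta_j + g_j(t),\; \sigma_j^{2}\bigr),
\qquad
\pi_j(x) \;=\; \frac{\exp(x^{\top}\alpha_j)}{\sum_{l=1}^{J}\exp(x^{\top}\alpha_l)}.
```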

In reinsurance, the Poisson and negative binomial distributions are commonly employed to model claim frequency. However, the incompleteness of data on reported incurred claims above a priority level presents challenges for estimation. This paper focuses on frequency estimation using Schnieper's framework for claim counts. We demonstrate that Schnieper's model is consistent with a Poisson distribution for the total number of claims above the priority at each year of development, providing a robust basis for parameter estimation. Additionally, we explain how to build an alternative assumption based on a negative binomial distribution, which yields similar results. The study includes a bootstrap procedure to quantify the uncertainty in the parameter estimates and a case study comparing the two assumptions and evaluating the impact of the bootstrap approach.
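
As a hedged illustration of the bootstrap step only, here is a generic parametric bootstrap for a Poisson frequency parameter in Python (the data and the reduction to i.i.d. yearly counts are illustrative; Schnieper's development-year structure is not reproduced):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative yearly counts of claims exceeding the priority.
counts = np.array([4, 7, 5, 6, 3, 8])
lam_hat = counts.mean()          # Poisson MLE of the frequency parameter

# Parametric bootstrap: resample counts from Poisson(lam_hat), re-estimate.
B = 10_000
boot = rng.poisson(lam_hat, size=(B, counts.size)).mean(axis=1)
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"lambda_hat = {lam_hat:.2f}, 95% bootstrap CI = ({lo:.2f}, {hi:.2f})")
```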

We show that the $n$th digit of the base-$b$ representation of the golden ratio is a finite-state function of the Zeckendorf representation of $b^n$, and hence can be computed by a finite automaton. Similar results can be proved for any quadratic irrational. We use a satisfiability (SAT) solver to prove, in some cases, that the automata we construct are minimal.
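
Since the result is phrased in terms of Zeckendorf representations, a small Python sketch of the standard greedy algorithm computing them may help orient the reader (the automaton construction itself is not reproduced here):

```python
def zeckendorf(n: int) -> list[int]:
    """Greedy Zeckendorf representation: n as a sum of non-consecutive
    Fibonacci numbers (1, 2, 3, 5, 8, ...)."""
    fibs = [1, 2]
    while fibs[-1] < n:
        fibs.append(fibs[-1] + fibs[-2])
    rep = []
    for f in reversed(fibs):
        if f <= n:          # greedily take the largest Fibonacci <= n
            rep.append(f)
            n -= f
    return rep

print(zeckendorf(100))   # [89, 8, 3], i.e. 100 = 89 + 8 + 3
```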

The classical Heawood inequality states that if the complete graph $K_n$ on $n$ vertices is embeddable in the sphere with $g$ handles, then $g \ge \dfrac{(n-3)(n-4)}{12}$. A higher-dimensional analogue of the Heawood inequality is the K\"uhnel conjecture. In a simplified form it states that for every integer $k>0$ there is $c_k>0$ such that if the union of the $k$-faces of the $n$-simplex embeds into the connected sum of $g$ copies of the Cartesian product $S^k\times S^k$ of two $k$-dimensional spheres, then $g\ge c_k n^{k+1}$. For $k>1$ only linear estimates were known. We present a quadratic estimate $g\ge c_k n^2$. The proof is based on a beautiful and fruitful interplay between geometric topology, combinatorics and linear algebra.
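
For orientation, a worked instance of the classical bound (a standard fact, not part of this paper's contribution):

```latex
% Worked instance of the Heawood bound: for K_7,
g \;\ge\; \frac{(7-3)(7-4)}{12} \;=\; \frac{4\cdot 3}{12} \;=\; 1,
% and K_7 does embed in the torus (g = 1), so the bound is attained.
```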

Distance correlation is a class of multivariate dependence measures, taking values between 0 and 1, and applicable to random vectors of arbitrary dimensions, not necessarily equal. It offers several advantages over the well-known Pearson correlation coefficient, the most important being that distance correlation equals zero if and only if the random vectors are independent. Two different estimators of the distance correlation are available in the literature. The first one, proposed by Sz\'ekely et al. (2007), is based on an asymptotically unbiased estimator of the distance covariance which turns out to be a V-statistic. The second one builds on an unbiased estimator of the distance covariance proposed in Sz\'ekely et al. (2014), proved to be a U-statistic by Sz\'ekely and Huo (2016). This study evaluates the efficiency (mean squared error) of both estimators and compares their computational times under different dependence structures. Under independence or near-independence, the V-estimates are biased, while the U-estimator frequently cannot be computed due to negative values. To address this challenge, a convex linear combination of the two estimators is proposed and studied, yielding good results regardless of the level of dependence.
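
For reference, a minimal NumPy sketch of the first (V-statistic) estimator via double-centered distance matrices; the U-statistic variant and the proposed convex combination are not reproduced here:

```python
import numpy as np

def dcor_v(x: np.ndarray, y: np.ndarray) -> float:
    """V-statistic (biased) distance correlation of Szekely et al. (2007).
    x, y: (n, p) and (n, q) samples; p and q may differ."""
    def centered(z):
        d = np.linalg.norm(z[:, None, :] - z[None, :, :], axis=-1)
        return d - d.mean(0) - d.mean(1)[:, None] + d.mean()
    A, B = centered(x), centered(y)
    dcov2 = (A * B).mean()                              # squared dCov
    dvar2 = np.sqrt((A * A).mean() * (B * B).mean())    # sqrt(dVar_x^2 dVar_y^2)
    return np.sqrt(dcov2 / dvar2) if dvar2 > 0 else 0.0

rng = np.random.default_rng(1)
x = rng.normal(size=(200, 2))
print(dcor_v(x, x @ np.array([[1.0], [0.5]])))  # strongly dependent -> large
print(dcor_v(x, rng.normal(size=(200, 3))))     # independent -> small, but
                                                # positively biased (V-statistic)
```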

We examine the consequences of having a total division operation $\frac{x}{y}$ on commutative rings. We consider two forms of binary division, one derived from a unary inverse, the other defined directly as a general operation; each is made total by setting $1/0$ equal to an error value $\bot$, which is added to the ring. Such totalised divisions we call common divisions. In a field the two forms are equivalent, and we have a finite equational axiomatisation $E$ that is complete for the equational theory of fields equipped with common division, called common meadows. These equational axioms $E$ turn out to be true of commutative rings with common division, but only when division is defined via inverses. We explore these axioms $E$ and their role in seeking a completeness theorem for the conditional equational theory of common meadows. We prove that they are complete for the conditional equational theory of commutative rings with inverse-based common division. By adding a new proof rule, we can prove a completeness theorem for the conditional equational theory of common meadows. Although the equational axioms $E$ fail with common division defined directly, we observe that direct division does satisfy the equations in $E$ under a new congruence for partial terms called eager equality.
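
A minimal executable sketch of the totalisation, as a toy Python model over the rationals (it illustrates only that $1/0 = \bot$ and that $\bot$ propagates through the ring operations; it does not capture the axioms $E$ or eager equality):

```python
from fractions import Fraction

BOT = object()   # the error value "bottom", adjoined to the ring

# BOT is absorptive: every operation with a BOT argument returns BOT.
def add(x, y): return BOT if BOT in (x, y) else x + y
def mul(x, y): return BOT if BOT in (x, y) else x * y
def div(x, y):
    if BOT in (x, y) or y == 0:   # totalise division: x/0 = BOT
        return BOT
    return Fraction(x) / Fraction(y)

print(div(1, 0) is BOT)                     # True: 1/0 = bottom
print(add(Fraction(2), div(1, 0)) is BOT)   # True: bottom propagates
print(div(Fraction(2), Fraction(4)))        # 1/2: ordinary division
```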

Optimization over the set of matrices $X$ that satisfy $X^\top B X = I_p$, referred to as the generalized Stiefel manifold, appears in many applications involving sampled covariance matrices, such as canonical correlation analysis (CCA), independent component analysis (ICA), and the generalized eigenvalue problem (GEVP). Solving these problems is typically done by iterative methods, such as Riemannian approaches, which require a computationally expensive eigenvalue decomposition involving the fully formed matrix $B$. We propose a cheap stochastic iterative method that solves the optimization problem while having access only to a random estimate of the feasible set. Our method does not enforce the constraint exactly in every iteration; instead, it produces iterates that converge to a critical point on the generalized Stiefel manifold defined in expectation. The method has lower per-iteration cost, requires only matrix multiplications, and has the same convergence rates as its Riemannian counterparts that involve the full matrix $B$. Experiments demonstrate its effectiveness in various machine learning applications involving generalized orthogonality constraints, including CCA, ICA, and the GEVP.
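
To fix notation, a small NumPy sketch of the feasible set, constructed by the standard whitening map $X = B^{-1/2}Q$ (this full decomposition of $B$ is precisely the expensive step the proposed stochastic method avoids; the sketch is not the paper's algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 6, 2

# A sampled covariance matrix B (symmetric positive definite).
Z = rng.normal(size=(100, n))
B = Z.T @ Z / 100

# A point on {X : X^T B X = I_p} via the whitening map X = B^(-1/2) Q,
# which requires a full eigenvalue decomposition of B.
w, V = np.linalg.eigh(B)
B_inv_sqrt = V @ np.diag(w ** -0.5) @ V.T
Q, _ = np.linalg.qr(rng.normal(size=(n, p)))
X = B_inv_sqrt @ Q

print(np.allclose(X.T @ B @ X, np.eye(p)))        # True: constraint holds
# With only a mini-batch estimate B_k of B, the constraint holds only
# in expectation; a fixed X violates it on any single estimate:
B_k = Z[:10].T @ Z[:10] / 10
print(np.linalg.norm(X.T @ B_k @ X - np.eye(p)))  # nonzero violation
```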

Out-of-distribution (OOD) detection, crucial for reliable pattern classification, discerns whether a sample originates outside the training distribution. This paper concentrates on the high-dimensional features output by the final convolutional layer, which contain rich image features. Our key idea is to project these high-dimensional features into two specific feature subspaces, leveraging the dimensionality reduction capacity of the network's linear layers, trained with the Predefined Evenly-Distributed Class Centroids (PEDCC) loss. This involves calculating the cosines of three projection angles and the norms of the features, thereby identifying distinctive information for in-distribution (ID) and OOD data, which assists in OOD detection. Building upon this, we modify the batch normalization (BN) and ReLU layers preceding the fully connected layer, diminishing their impact on the output feature distributions and thereby widening the gap between the ID and OOD feature distributions. Our method requires only the training of the classification network model, eschewing any need for input pre-processing or specific OOD data pre-tuning. Extensive experiments on several benchmark datasets demonstrate that our approach delivers state-of-the-art performance. Our code is available at //github.com/Hewell0/ProjOOD.
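
As a hedged illustration of the kind of statistics involved, the following NumPy sketch computes the cosine of the angle between a feature and its orthogonal projection onto the subspace spanned by a linear layer's weight rows, together with the feature norm (the subspaces, layer sizes, and scoring rule here are illustrative, not the paper's exact construction):

```python
import numpy as np

def proj_cosine(f: np.ndarray, W: np.ndarray) -> float:
    """Cosine between feature f and its projection onto the row space of W."""
    Q, _ = np.linalg.qr(W.T)           # orthonormal basis of span(W^T)
    p = Q @ (Q.T @ f)                  # orthogonal projection of f
    return float(f @ p / (np.linalg.norm(f) * np.linalg.norm(p)))

rng = np.random.default_rng(0)
W = rng.normal(size=(10, 512))         # a linear layer: 512 -> 10
f_id = W.T @ rng.normal(size=10)       # a feature lying in the subspace
f_ood = rng.normal(size=512)           # a generic feature
print(proj_cosine(f_id, W), np.linalg.norm(f_id))    # cosine ~ 1.0
print(proj_cosine(f_ood, W), np.linalg.norm(f_ood))  # cosine << 1
```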

We generalize McDiarmid's inequality to functions with bounded differences on a high-probability set, using an extension argument. Such functions concentrate around their conditional expectations. We further extend the results to concentration in general metric spaces.
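
For context, the classical inequality being relaxed, which requires the bounded-differences condition to hold everywhere rather than only on a high-probability set:

```latex
% McDiarmid: if changing the i-th coordinate changes f by at most c_i
% (for all inputs), and X_1, ..., X_n are independent, then for t > 0,
\Pr\bigl(f(X_1,\dots,X_n) - \mathbb{E} f(X_1,\dots,X_n) \ge t\bigr)
  \;\le\; \exp\!\Bigl(-\frac{2t^2}{\sum_{i=1}^n c_i^2}\Bigr).
```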
