
A poset $I=(\{1,\ldots, n\}, \leq_I)$ is called non-negative if the symmetric Gram matrix $G_I:=\frac{1}{2}(C_I + C_I^{tr})\in\mathbb{M}_n(\mathbb{R})$ is positive semi-definite, where $C_I\in\mathbb{M}_n(\mathbb{Z})$ is the $(0,1)$-matrix encoding the relation $\leq_I$. Every such connected poset $I$ is determined, up to the $\mathbb{Z}$-congruence of the matrix $G_I$, by a unique simply-laced Dynkin diagram $\mathrm{Dyn}_I\in\{\mathbb{A}_m, \mathbb{D}_m,\mathbb{E}_6,\mathbb{E}_7,\mathbb{E}_8\}$. We show that $\mathrm{Dyn}_I=\mathbb{A}_n$ implies that the matrix $G_I$ has rank $n$ or $n-1$. Moreover, we depict explicit shapes of the Hasse digraphs $\mathcal{H}(I)$ of all such posets $I$ and devise formulae for their number.
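As a small illustration of these definitions (a toy chain poset of our choosing, not an example from the paper), the non-negativity test via the symmetric Gram matrix can be checked numerically:

```python
import numpy as np

# Incidence (0,1)-matrix C_I of the chain poset 1 <= 2 <= 3:
# C[i, j] = 1 iff i <=_I j (in particular the diagonal is 1).
C = np.array([[1, 1, 1],
              [0, 1, 1],
              [0, 0, 1]], dtype=float)

# Symmetric Gram matrix G_I = (C_I + C_I^tr) / 2.
G = (C + C.T) / 2

# The poset is non-negative iff G_I is positive semi-definite,
# i.e. all eigenvalues are >= 0 (up to numerical tolerance).
eigvals = np.linalg.eigvalsh(G)
print(np.all(eigvals >= -1e-12))  # True: the chain is non-negative
```

For this chain the eigenvalues are $\{2, \tfrac12, \tfrac12\}$, so $G_I$ even has full rank $n$, consistent with the stated rank dichotomy for type $\mathbb{A}_n$.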

Related content

Many real-world datasets live on high-dimensional Stiefel and Grassmannian manifolds, $V_k(\mathbb{R}^N)$ and $Gr(k, \mathbb{R}^N)$ respectively, and benefit from projection onto lower-dimensional Stiefel (respectively, Grassmannian) manifolds. In this work, we propose an algorithm called Principal Stiefel Coordinates (PSC) to reduce data dimensionality from $ V_k(\mathbb{R}^N)$ to $V_k(\mathbb{R}^n)$ in an $O(k)$-equivariant manner ($k \leq n \ll N$). We begin by observing that each element $\alpha \in V_n(\mathbb{R}^N)$ defines an isometric embedding of $V_k(\mathbb{R}^n)$ into $V_k(\mathbb{R}^N)$. Next, we optimize for such an embedding map that minimizes data fit error by warm-starting with the output of principal component analysis (PCA) and applying gradient descent. Then, we define a continuous and $O(k)$-equivariant map $\pi_\alpha$ that acts as a ``closest point operator'' to project the data onto the image of $V_k(\mathbb{R}^n)$ in $V_k(\mathbb{R}^N)$ under the embedding determined by $\alpha$, while minimizing distortion. Because this dimensionality reduction is $O(k)$-equivariant, these results extend to Grassmannian manifolds as well. Lastly, we show that the PCA output globally minimizes projection error in a noiseless setting, but that our algorithm achieves a meaningfully different and improved outcome when the data does not lie exactly on the image of a linearly embedded lower-dimensional Stiefel manifold as above. Multiple numerical experiments using synthetic and real-world data are performed.
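The opening observation, that any $\alpha \in V_n(\mathbb{R}^N)$ induces an isometric embedding $Y \mapsto \alpha Y$ of $V_k(\mathbb{R}^n)$ into $V_k(\mathbb{R}^N)$, is easy to sanity-check numerically. The sketch below uses random toy matrices and is not the PSC algorithm itself:

```python
import numpy as np

rng = np.random.default_rng(0)
N, n, k = 20, 5, 2  # k <= n << N

# A point alpha in V_n(R^N): an N x n matrix with orthonormal columns,
# obtained from the reduced QR factorization of a Gaussian matrix.
alpha, _ = np.linalg.qr(rng.standard_normal((N, n)))

# A point Y in V_k(R^n): an n x k matrix with orthonormal columns.
Y, _ = np.linalg.qr(rng.standard_normal((n, k)))

# The map Y -> alpha @ Y lands in V_k(R^N), since
# (alpha Y)^T (alpha Y) = Y^T (alpha^T alpha) Y = Y^T Y = I_k.
Z = alpha @ Y
print(np.allclose(Z.T @ Z, np.eye(k)))  # True
```

The same computation also shows why the construction is $O(k)$-equivariant: for $Q \in O(k)$, $\alpha(YQ) = (\alpha Y)Q$.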

We compute the weight distribution of the Reed-Muller code ${\mathcal R}(4,9)$ by combining the approach described in D. V. Sarwate's Ph.D. thesis from 1973 with knowledge of the affine equivalence classification of Boolean functions. To solve this problem, posed, e.g., in the book of MacWilliams and Sloane [p. 447], we apply a refined approach based on the classification of Boolean quartic forms in $8$ variables due to Ph. Langevin and G. Leander, and on recent results on the classification of the quotient space ${\mathcal R}(4,7)/{\mathcal R}(2,7)$ due to V. Gillot and Ph. Langevin.
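For intuition about what a weight distribution is (though ${\mathcal R}(4,9)$ itself is far beyond brute force), the distribution of a tiny Reed-Muller code such as ${\mathcal R}(1,3)$ can be enumerated directly. This toy computation is ours, not the paper's method:

```python
from itertools import product
from collections import Counter

m = 3  # R(1, 3): first-order Reed-Muller code of length 2^3 = 8
points = list(product([0, 1], repeat=m))

# Codewords are evaluations of affine Boolean functions
# f(x) = a0 + a1 x1 + ... + am xm over GF(2).
dist = Counter()
for coeffs in product([0, 1], repeat=m + 1):
    a0, lin = coeffs[0], coeffs[1:]
    word = [(a0 + sum(a * x for a, x in zip(lin, p))) % 2 for p in points]
    dist[sum(word)] += 1  # Hamming weight of the codeword

print(sorted(dist.items()))  # [(0, 1), (4, 14), (8, 1)]
```

The output matches the classical fact that ${\mathcal R}(1,m)$ has one all-zero word, one all-one word, and $2^{m+1}-2$ words of weight $2^{m-1}$.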

A non-zero $\mathbb{F}$-valued $\mathbb{F}$-linear map on a finite-dimensional $\mathbb{F}$-algebra is called an $\mathbb{F}$-valued trace if its kernel does not contain any non-zero ideal. However, such a map need not exist for every $\mathbb{F}$-algebra. We find an infinite class of finite-dimensional commutative $\mathbb{F}$-algebras that admit an $\mathbb{F}$-valued trace, and in these cases we explicitly construct a trace map. The existence of an $\mathbb{F}$-valued trace on a finite-dimensional commutative $\mathbb{F}$-algebra induces a non-degenerate bilinear form on the algebra, which can be useful both theoretically and computationally. In this article, we suggest a couple of applications of an $\mathbb{F}$-valued trace map of an $\mathbb{F}$-algebra to algebraic coding theory.
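A minimal toy example (ours, not the paper's construction): on $A=\mathbb{F}[x]/(x^2)$ the map $t(a+bx)=b$ is an $\mathbb{F}$-valued trace, since its kernel $\mathrm{span}\{1\}$ contains none of the non-zero ideals $(x)=\mathrm{span}\{x\}$ and $A$. The induced bilinear form $B(u,v)=t(uv)$ is then non-degenerate:

```python
import numpy as np

# A = F[x]/(x^2) with basis {1, x}; elements are pairs (a, b) = a + b x.
# Multiplication: (a + b x)(c + d x) = ac + (ad + bc) x, since x^2 = 0.

def mul(u, v):
    """Multiply u = (a, b) and v = (c, d) in F[x]/(x^2)."""
    a, b = u
    c, d = v
    return (a * c, a * d + b * c)

def trace(u):
    return u[1]  # candidate trace t(a + b x) = b

basis = [(1, 0), (0, 1)]  # the elements 1 and x
# Gram matrix of the induced bilinear form B(u, v) = t(uv).
gram = np.array([[trace(mul(u, v)) for v in basis] for u in basis], dtype=float)
print(gram, np.linalg.det(gram) != 0)  # [[0, 1], [1, 0]], non-degenerate
```

Note that the "obvious" choice, the trace of the multiplication operator, would give $u \mapsto 2a$ here, whose kernel contains the ideal $(x)$; the point of the definition is precisely to rule that out.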

Stabilizer-free $P_k$ virtual elements are constructed on polygonal and polyhedral meshes. Here the interpolating space is the space of continuous $P_k$ polynomials on a triangular subdivision of each polygon, or a tetrahedral subdivision of each polyhedron. With such an accurate and proper interpolation, the stabilizer of the virtual elements is eliminated while the system remains positive-definite. We show that the stabilizer-free virtual elements converge at the optimal order in 2D and 3D. Numerical examples validate the theory.

We provide a simple $(1-O(\frac{1}{\sqrt{k}}))$-selectable Online Contention Resolution Scheme for $k$-uniform matroids against a fixed-order adversary. If $A_i$ and $G_i$ denote the set of selected elements and the set of realized active elements among the first $i$ (respectively), our algorithm selects with probability $1-\frac{1}{\sqrt{k}}$ any active element $i$ such that $|A_{i-1}| + 1 \leq (1-\frac{1}{\sqrt{k}})\cdot \mathbb{E}[|G_i|]+\sqrt{k}$. This implies a $(1-O(\frac{1}{\sqrt{k}}))$ prophet inequality against fixed-order adversaries for $k$-uniform matroids that is considerably simpler than previous algorithms [Ala14, AKW14, JMZ22]. We also prove that no OCRS can be $(1-\Omega(\sqrt{\frac{\log k}{k}}))$-selectable for $k$-uniform matroids against an almighty adversary. This guarantee is matched by the (known) simple greedy algorithm that accepts every active element with probability $1-\Theta(\sqrt{\frac{\log k}{k}})$ [HKS07].
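The selection rule stated above can be sketched as a simulation. The Bernoulli activity model, the uniform probabilities $p_i$, and the explicit capacity check are our illustrative assumptions, added for a self-contained and safe toy run:

```python
import math
import random

def ocrs_k_uniform(p, k, rng):
    """One run of the simple OCRS sketch for a k-uniform matroid.

    p[i] is the (known) probability that element i is active; elements
    arrive in a fixed order. Returns the list of selected indices.
    """
    q = 1 - 1 / math.sqrt(k)   # selection probability for eligible elements
    expected_active = 0.0      # running E[|G_i|] = sum of p_j for j <= i
    selected = []
    for i, pi in enumerate(p):
        expected_active += pi
        active = rng.random() < pi
        # Eligibility rule from the abstract:
        # |A_{i-1}| + 1 <= (1 - 1/sqrt(k)) * E[|G_i|] + sqrt(k).
        eligible = len(selected) + 1 <= q * expected_active + math.sqrt(k)
        # len(selected) < k is an explicit feasibility cap for this sketch.
        if active and eligible and len(selected) < k and rng.random() < q:
            selected.append(i)
    return selected

rng = random.Random(0)
k, n = 25, 500
p = [k / n] * n  # probabilities summing to k
runs = [ocrs_k_uniform(p, k, rng) for _ in range(200)]
print(max(len(s) for s in runs) <= k)  # True: every run is feasible
```

The point of the eligibility threshold is to spread selections evenly, so that no element is starved by earlier arrivals; the $\sqrt{k}$ slack absorbs stochastic fluctuations of $|G_i|$ around its mean.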

Let $X$ be a set of $n$ items, which may contain some defective items denoted by $I$, where $I \subseteq X$. In group testing, a {\it test} refers to a subset of items $Q \subset X$. The test outcome is $1$ (positive) if $Q$ contains at least one defective item, i.e., $Q\cap I \neq \emptyset$, and $0$ (negative) otherwise. We give a novel approach to obtaining tight lower bounds in non-adaptive randomized group testing. Employing this new method, we prove the following result: any non-adaptive randomized algorithm that, for any set of defective items $I$, with probability at least $2/3$ returns an estimate of the number of defective items $|I|$ to within a constant factor requires $\Omega(\log n)$ tests. Our result matches the upper bound of $O(\log n)$ and solves the open problem posed by Damaschke and Sheikh Muhammad.

Two recent lower bounds on the compressibility of repetitive sequences, $\delta \le \gamma$, have received much attention. It has been shown that a length-$n$ string $S$ over an alphabet of size $\sigma$ can be represented within the optimal $O(\delta\log\tfrac{n\log \sigma}{\delta \log n})$ space, and further, that within that space one can find all the $occ$ occurrences in $S$ of any length-$m$ pattern in time $O(m\log n + occ \log^\epsilon n)$ for any constant $\epsilon>0$. Instead, the near-optimal search time $O(m+({occ+1})\log^\epsilon n)$ has been achieved only within $O(\gamma\log\frac{n}{\gamma})$ space. Both results are based on considerably different locally consistent parsing techniques. The question of whether the better search time could be supported within the $\delta$-optimal space remained open. In this paper, we prove that both techniques can indeed be combined to obtain the best of both worlds: $O(m+({occ+1})\log^\epsilon n)$ search time within $O(\delta\log\tfrac{n\log \sigma}{\delta \log n})$ space. Moreover, the number of occurrences can be computed in $O(m+\log^{2+\epsilon}n)$ time within $O(\delta\log\tfrac{n\log \sigma}{\delta \log n})$ space. We also show that an extra sublogarithmic factor on top of this space enables optimal $O(m+occ)$ search time, whereas an extra logarithmic factor enables optimal $O(m)$ counting time.
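The measure $\delta$ itself has a simple definition that can be computed naively: $\delta = \max_k d_k/k$, where $d_k$ counts the distinct length-$k$ substrings. A brute-force sketch (quadratic time, fine for short strings; practical implementations use suffix structures):

```python
def delta(s):
    """Compressibility measure delta = max_k d_k / k, where d_k is the
    number of distinct length-k substrings of s (computed naively here)."""
    n = len(s)
    best = 0.0
    for k in range(1, n + 1):
        d_k = len({s[i:i + k] for i in range(n - k + 1)})
        best = max(best, d_k / k)
    return best

# A highly repetitive string keeps delta tiny as its length grows,
# while a string of distinct characters has delta equal to its length-1 count.
print(delta("ab" * 32), delta("abcdefgh"))  # 2.0 8.0
```

For `"ab" * 32` every length-$k$ window is one of at most two strings, so $\delta = d_1/1 = 2$ regardless of $n$, which is exactly the repetitiveness that the $O(\delta\log\tfrac{n\log \sigma}{\delta \log n})$ space bound exploits.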

The field of category theory seeks to unify and generalize concepts and constructions across different areas of mathematics, from algebra to geometry to topology and also to logic and theoretical computer science. Formalized $1$-category theory forms a core component of various libraries of mathematical proofs. However, more sophisticated results in fields from algebraic topology to theoretical physics, where objects have "higher structure", rely on infinite-dimensional categories in place of $1$-dimensional categories, and $\infty$-category theory has thus far proven resistant to computer formalization. Using a new proof assistant called Rzk, which is designed to support Riehl-Shulman's simplicial extension of homotopy type theory for synthetic $\infty$-category theory, we provide the first formalizations of results from $\infty$-category theory. In particular, this includes a formalization of the Yoneda lemma, often regarded as the fundamental theorem of category theory, which roughly states that an object of a category is determined by its relationships to all the other objects of the category. A key feature of our framework is that, thanks to the synthetic theory, many constructions are automatically natural or functorial. We plan to use Rzk to formalize further results from $\infty$-category theory, such as the theory of limits, colimits, and adjunctions.
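For reference, the classical $1$-categorical statement that the synthetic $\infty$-categorical formalization generalizes can be written as follows:

```latex
% Yoneda lemma (classical 1-categorical form): for a locally small
% category C, an object A of C, and a functor F : C -> Set, evaluation
% at the identity gives a bijection, natural in both A and F:
\[
  \mathrm{Nat}\bigl(\mathrm{Hom}_{\mathcal{C}}(A,-),\, F\bigr)
  \;\cong\; F(A),
  \qquad
  \eta \;\longmapsto\; \eta_A(\mathrm{id}_A).
\]
```

Taking $F = \mathrm{Hom}_{\mathcal{C}}(B,-)$ recovers the informal statement above: natural transformations $\mathrm{Hom}(A,-) \Rightarrow \mathrm{Hom}(B,-)$ correspond exactly to morphisms $B \to A$, so an object is determined by its relationships to all other objects.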

Standard multiparameter eigenvalue problems (MEPs) are systems of $k\ge 2$ linear $k$-parameter square matrix pencils. Recently, a new form of multiparameter eigenvalue problem has emerged: the rectangular MEP (RMEP), with only one multivariate rectangular matrix pencil, where we look for combinations of the parameters at which the pencil loses full rank. Applications include finding the optimal least squares autoregressive moving average (ARMA) model and the optimal least squares realization of an autonomous linear time-invariant (LTI) dynamical system. For linear and polynomial RMEPs, we give the number of solutions and show how these problems can be solved numerically by a transformation into a standard MEP. For the transformation we provide new linearizations for quadratic multivariate matrix polynomials with a specific structure of monomials, and we consider mixed systems of rectangular and square multivariate matrix polynomials. This numerical approach appears computationally considerably more attractive than the block Macaulay method, the only other numerical method currently available for polynomial RMEPs.
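The transformation idea can be illustrated in the simplest case, a linear one-parameter rectangular pencil: compressing with a random square projection yields a standard eigenvalue problem whose eigenvalues are candidates, which are then verified on the original rectangular pencil. This is a toy sketch with synthetic data, not the paper's full construction:

```python
import numpy as np
from scipy.linalg import eig

# Rectangular pencil A + lam*B with A, B of size (n+1) x n. If
# (A + lam*B) x = 0 for some x != 0, then W (A + lam*B) x = 0 for any
# n x (n+1) compression W, so lam is an eigenvalue of the square pencil
# (W A, -W B); spurious candidates are filtered on the original pencil.
rng = np.random.default_rng(3)
n = 5
B = rng.standard_normal((n + 1, n))
x = rng.standard_normal(n)
A0 = rng.standard_normal((n + 1, n))
# Plant lam = 2.0 as a solution: modify A0 so that (A + 2B) x = 0.
A = A0 - np.outer((A0 + 2.0 * B) @ x, x) / (x @ x)

W = rng.standard_normal((n, n + 1))        # random compression
lams = eig(W @ A, -(W @ B), right=False)   # candidate eigenvalues
genuine = [l for l in lams
           if np.linalg.svd(A + l * B, compute_uv=False)[-1] < 1e-8]
print(np.isclose(genuine[0], 2.0))  # True: only the planted root survives
```

Generically a one-parameter rectangular pencil has no rank-dropping values at all (the system is overdetermined by one equation), which is why the planted solution is the only one the verification step keeps.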

We consider the problem of locating a nearest descriptor system of prescribed reduced order to a descriptor system of large order with respect to the ${\mathcal L}_\infty$ norm. Widely employed approaches for this ${\mathcal L}_\infty$ model reduction problem, such as balanced truncation and best Hankel norm approximation, are usually expensive and yield solutions that are not optimal, not even locally. We propose approaches based on the minimization of the ${\mathcal L}_\infty$ objective by means of smooth optimization techniques. As we illustrate, direct applications of such techniques are not feasible: they converge at best at a linear rate, requiring too many evaluations of the costly ${\mathcal L}_\infty$-norm objective to be practical. We therefore replace the original large-scale system with a system of smaller order that interpolates the original system at points on the imaginary axis, and minimize the ${\mathcal L}_\infty$ objective after this replacement. The smaller system is refined by interpolating at additional imaginary points determined from the local minimizer of the ${\mathcal L}_\infty$ objective, and the optimization is repeated. We argue that the framework converges at a quadratic rate under smoothness and nondegeneracy assumptions, and describe how asymptotic stability constraints on the sought reduced system can be incorporated into our approach. Numerical experiments on benchmark examples illustrate that the approach leads to locally optimal solutions of the ${\mathcal L}_\infty$ model reduction problem, and that convergence occurs quickly for descriptor systems whose order is in the tens of thousands.
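For orientation, the costly objective itself, the ${\mathcal L}_\infty$ norm $\sup_\omega \sigma_{\max}(H(i\omega))$ of the transfer function $H(s)=C(sE-A)^{-1}B$, can be crudely lower-bounded by sampling the imaginary axis. The toy system below is our illustrative assumption (a stable standard state space with $E=I$), not a benchmark from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, p = 8, 2, 2
A = -np.eye(n) + 0.1 * rng.standard_normal((n, n))  # well inside the left half-plane
E = np.eye(n)                                       # standard (non-singular) E
B = rng.standard_normal((n, m))
C = rng.standard_normal((p, n))

def H(w):
    """Transfer function H(iw) = C (iw E - A)^{-1} B at frequency w."""
    return C @ np.linalg.solve(1j * w * E - A, B)

# Grid-based lower bound on the L-infinity norm: max over sampled
# frequencies of the largest singular value of H(iw).
freqs = np.logspace(-3, 3, 2000)
linf_estimate = max(np.linalg.svd(H(w), compute_uv=False)[0] for w in freqs)
print(linf_estimate)
```

Each sample costs a dense linear solve, which is exactly why repeated ${\mathcal L}_\infty$-norm evaluations on a large-scale system are prohibitive and why the interpolation-based surrogate above is attractive.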
