
Lattices defined as modules over algebraic rings or orders have garnered interest recently, particularly in the fields of cryptography and coding theory. Whilst many attempts have been made to generalise the conditions for LLL reduction to such lattices, there do not seem to be any attempts so far to generalise stronger notions of reduction such as Minkowski, HKZ and BKZ reduction. Moreover, most lattice reduction methods for modules over algebraic rings involve applying traditional techniques to the embedding of the module into real space, which distorts the structure of the algebra. In this paper, we generalise some classical notions of reduction theory to free modules defined over an order. Moreover, we extend the definitions of Minkowski, HKZ and BKZ reduction to such modules and show that bases reduced in this manner have vector lengths that can be bounded above by the successive minima of the lattice multiplied by a constant that depends on the algebra and the dimension of the module. In particular, we show that HKZ-reduced bases are polynomially close to the successive minima of the lattice in terms of the module dimension. None of our definitions require the module to be embedded, and thus they preserve the structure of the module.
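
For context, the classical bound over $\mathbb{Z}$-lattices that results of this kind generalise (a result of Lagarias, Lenstra and Schnorr; the constants for modules over an order differ and depend on the algebra): if $b_1,\dots,b_n$ is an HKZ-reduced basis of a lattice $L$ with successive minima $\lambda_1(L),\dots,\lambda_n(L)$, then

$$\frac{4}{i+3} \;\le\; \frac{\|b_i\|^2}{\lambda_i(L)^2} \;\le\; \frac{i+3}{4}, \qquad 1\le i\le n.$$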

Related content

We prove a lower bound on the probability of Shor's order-finding algorithm successfully recovering the order $r$ in a single run. The bound implies that by performing two limited searches in the classical post-processing part of the algorithm, a high success probability can be guaranteed, for any $r$, without re-running the quantum part or increasing the exponent length compared to Shor. Asymptotically, in the limit as $r$ tends to infinity, the probability of successfully recovering $r$ in a single run tends to one. Already for moderate $r$, a high success probability exceeding e.g. $1 - 10^{-4}$ can be guaranteed. As corollaries, we prove analogous results for the probability of completely factoring any integer $N$ in a single run of the order-finding algorithm.
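
For reference, a minimal sketch of the standard continued-fraction post-processing for order finding (not the two limited searches proposed above, which strengthen it); here `j` is the measurement outcome, `m` the control-register length, and `a` the base whose order modulo `N` is sought:

```python
def convergents(num: int, den: int):
    """Yield the convergents p/q of the continued fraction of num/den."""
    p_m2, p_m1 = 0, 1      # p_{-2}, p_{-1}
    q_m2, q_m1 = 1, 0      # q_{-2}, q_{-1}
    while den:
        a = num // den
        num, den = den, num % den
        p_m2, p_m1 = p_m1, a * p_m1 + p_m2
        q_m2, q_m1 = q_m1, a * q_m1 + q_m2
        yield p_m1, q_m1

def recover_order(j: int, m: int, a: int, N: int):
    """Try to recover the order r of a mod N from outcome j, using j/2^m ~ k/r."""
    for _, q in convergents(j, 2 ** m):
        if q >= N:
            break                       # denominators only grow beyond this point
        if q > 0 and pow(a, q, N) == 1:
            return q
    return None                         # fails e.g. when gcd(k, r) > 1
```

This basic variant can fail when the sampled $k$ shares a factor with $r$; the limited searches analysed in the abstract are aimed precisely at removing such failure cases.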

We study the problem of {\sl certification}: given queries to a function $f : \{0,1\}^n \to \{0,1\}$ with certificate complexity $\le k$ and an input $x^\star$, output a size-$k$ certificate for $f$'s value on $x^\star$. This abstractly models a central problem in explainable machine learning, where we think of $f$ as a black-box model whose predictions we seek to explain. For monotone functions, a classic local search algorithm of Angluin accomplishes this task with $n$ queries, which we show is optimal for local search algorithms. Our main result is a new algorithm for certifying monotone functions with $O(k^8 \log n)$ queries, which comes close to matching the information-theoretic lower bound of $\Omega(k \log n)$. The design and analysis of our algorithm are based on a new connection to threshold phenomena in monotone functions. We further prove exponential-in-$k$ lower bounds when $f$ is non-monotone, and when $f$ is monotone but the algorithm is only given random examples of $f$. These lower bounds show that assumptions on the structure of $f$ and query access to it are both necessary for the polynomial dependence on $k$ that we achieve.
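
For comparison, a sketch of the classical local-search idea attributed to Angluin above, for the case $f(x^\star)=1$ (the $0$ case is symmetric); it uses at most $n$ queries, in contrast to the $O(k^8 \log n)$ queries targeted by the new algorithm:

```python
def certify_monotone_one(f, x_star):
    """Local search for a 1-certificate of a monotone f with f(x_star) == 1.

    Greedily flip 1-coordinates down to 0 as long as f stays 1.  The point
    reached is a minimal 1-input, so by monotonicity any input agreeing with
    x_star on its remaining 1-coordinates evaluates to 1: those coordinates
    form the certificate."""
    y = list(x_star)
    for i, bit in enumerate(y):
        if bit == 1:
            y[i] = 0
            if f(tuple(y)) == 0:      # this coordinate is needed, restore it
                y[i] = 1
    return [i for i, bit in enumerate(y) if bit == 1]
```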

In numerical linear algebra, a well-established practice is to choose a norm that exploits the structure of the problem at hand in order to optimize accuracy or computational complexity. In numerical polynomial algebra, a single norm (attributed to Weyl) dominates the literature. This article initiates the use of $L_p$ norms for numerical algebraic geometry, with an emphasis on $L_{\infty}$. This classical idea yields strong improvements in the analysis of the number of steps performed by numerous iterative algorithms. In particular, we exhibit three algorithms where, despite the complexity of computing the $L_{\infty}$-norm, the use of $L_p$-norms substantially reduces computational complexity: a subdivision-based algorithm in real algebraic geometry for computing the homology of semialgebraic sets, a well-known meshing algorithm in computational geometry, and the computation of zeros of systems of complex quadratic polynomials (a particular case of Smale's 17th problem).
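
To fix notation for the two norms contrasted above (our reading, for a homogeneous polynomial $f=\sum_{|\alpha|=d}c_\alpha x^\alpha$ of degree $d$):

$$\|f\|_{W}=\Bigl(\sum_{|\alpha|=d}\binom{d}{\alpha}^{-1}|c_\alpha|^2\Bigr)^{1/2}, \qquad \|f\|_{\infty}=\max_{\|x\|_2=1}|f(x)|,$$

where $\binom{d}{\alpha}$ is the multinomial coefficient. The Weyl norm is computed directly from the coefficients, whereas evaluating $\|f\|_{\infty}$ exactly is itself expensive, which is the trade-off the abstract refers to.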

Graph analytics attracts much attention from both the research and industry communities. Due to its linear time complexity, the $k$-core decomposition is widely used in many real-world applications such as biology, social networks, community detection, ecology, and information spreading. In many such applications, the data graphs continuously change over time, with the changes corresponding to edge insertions and removals. Instead of recomputing the $k$-core, which is time-consuming, we study how to maintain the $k$-core efficiently; that is, when an edge is inserted or removed, we need to identify the affected vertices by searching a limited set of candidate vertices. The state-of-the-art order-based method maintains an order, the so-called $k$-order, among all vertices, which can significantly reduce the search space. However, this order-based method is complicated to understand and implement, and its correctness has not been formally discussed. In this work, we propose a simplified order-based approach by introducing the classical Order Data Structure to maintain the $k$-order, which significantly improves the worst-case time complexity of both the edge insertion and removal algorithms. Also, our simplified method is intuitive to understand and implement, and its correctness is easy to argue formally. Additionally, we discuss a simplified batch insertion approach. Our experiments evaluate the simplified method over 12 real and synthetic graphs with billions of vertices. Compared with the existing method, our simplified approach achieves speedups of up to 7.7x and 9.7x for edge insertion and removal, respectively.
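
For background, a minimal sketch of the static peeling algorithm that computes all core numbers from scratch (in the spirit of Batagelj and Zaversnik); the maintenance problem above is precisely about avoiding this full recomputation after every edge update:

```python
import heapq

def core_numbers(adj):
    """Peeling: repeatedly remove a minimum-degree vertex; a vertex's core
    number is the largest removal degree seen so far.  adj maps each vertex
    to the set of its neighbours."""
    deg = {v: len(ns) for v, ns in adj.items()}
    heap = [(d, v) for v, d in deg.items()]
    heapq.heapify(heap)
    core, removed, k = {}, set(), 0
    while heap:
        d, v = heapq.heappop(heap)
        if v in removed or d != deg[v]:
            continue                    # stale heap entry, skip it
        k = max(k, d)
        core[v] = k
        removed.add(v)
        for u in adj[v]:
            if u not in removed:
                deg[u] -= 1
                heapq.heappush(heap, (deg[u], u))
    return core
```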

We show that the solution to the Hermite-Pad\'{e} type I approximation problem leads in a natural way to a subclass of solutions of the Hirota (discrete Kadomtsev-Petviashvili) system and of its adjoint linear problem. Our result explains the appearance of various ingredients of integrable systems theory in applications to multiple orthogonal polynomials, numerical algorithms, random matrices, and other branches of mathematical physics and applied mathematics where the Hermite-Pad\'{e} approximation problem is relevant. We also present a geometric algorithm, based on the notion of Desargues maps, for constructing solutions of the problem in the projective space over the field of rational functions. As a byproduct we obtain the corresponding generalization of the Wynn recurrence. We isolate the boundary data of the Hirota system that provide solutions to the Hermite-Pad\'{e} problem, showing that the corresponding reduction lowers the dimensionality of the system. In particular, we obtain certain equations which, in addition to the known ones given by Paszkowski, can be considered as direct analogs of the Frobenius identities. We study the place of the reduced system within the integrability theory, which results in a multidimensional (in the sense of the number of variables) extension of the discrete-time Toda chain equations.
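
For the reader's convenience, one common normalisation of the type I problem referred to above: given formal power series $f_1,\dots,f_m$ at $x=0$ and a multi-index $\vec{n}=(n_1,\dots,n_m)$, find polynomials $A_1,\dots,A_m$, not all zero, with $\deg A_j\le n_j-1$, such that

$$A_1(x)f_1(x)+\cdots+A_m(x)f_m(x)=O\bigl(x^{|\vec{n}|-1}\bigr),\qquad |\vec{n}|=n_1+\cdots+n_m,$$

where the conventions for the degree bounds and the order of vanishing vary between authors.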

A priori error bounds have been derived for different balancing-related model reduction methods. The most classical result is a bound for balanced truncation and singular perturbation approximation that is applicable to asymptotically stable linear time-invariant systems with homogeneous initial conditions. Recently, there have been a few attempts to generalize the balancing-related reduction methods to the case of inhomogeneous initial conditions, but the existing error bounds for these generalizations are quite restrictive. In particular, the initial conditions must be restricted to a low-dimensional subspace, which has to be chosen before the reduced model is constructed. In this paper, we propose an estimator that circumvents this hard constraint completely. Our estimator is applicable to a large class of reduction methods, whereas the former results were only derived for certain specific methods. Moreover, our approach yields significantly more effective error estimation, as we also demonstrate numerically.
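
The classical result alluded to above is, in its usual form, the a priori bound for balanced truncation of an asymptotically stable system $G$ with Hankel singular values $\sigma_1\ge\cdots\ge\sigma_n$: truncating to order $r$ yields

$$\|G-G_r\|_{\mathcal{H}_\infty}\;\le\;2\sum_{k=r+1}^{n}\sigma_k$$

(with repeated singular values counted once), valid only for homogeneous initial conditions, which is precisely the restriction that the inhomogeneous setting and the proposed estimator address.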

A systematic theory of structural limits for finite models has been developed by Nesetril and Ossona de Mendez. It is based on the insight that the collection of finite structures can be embedded, via a map they call the Stone pairing, into a space of measures, where the desired limits can be computed. We show that a closely related but finer-grained space of (finitely additive) measures arises -- via Stone-Priestley duality and the notion of types from model theory -- by enriching the expressive power of first-order logic with certain "probabilistic operators". We provide a sound and complete calculus for this extended logic and expose the functorial nature of this construction. The consequences are two-fold. On the one hand, we identify the logical gist of the theory of structural limits. On the other hand, our construction shows that the duality-theoretic variant of the Stone pairing captures the addition of a layer of quantifiers, thus making a strong link to recent work on semiring quantifiers in logic on words. In the process, we identify the model-theoretic notion of types as the unifying concept behind this link. These results contribute to bridging the strands of logic in computer science which focus on semantics and on more algorithmic and complexity-related areas, respectively.
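
For concreteness, the Stone pairing of Nesetril and Ossona de Mendez mentioned above sends a finite structure $A$ and a first-order formula $\phi(\bar{x})$ to the fraction of tuples that satisfy it,

$$\langle\phi,A\rangle\;=\;\frac{\bigl|\{\bar{a}\in A^{|\bar{x}|}: A\models\phi(\bar{a})\}\bigr|}{|A|^{|\bar{x}|}},$$

so that each finite structure is represented by a finitely additive, probability-like assignment on formulas, and structural limits are limits of such assignments.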

The Gromov-Hausdorff distance $(d_{GH})$ proves to be a useful distance measure between shapes. In order to approximate $d_{GH}$ for compact subsets $X,Y\subset\mathbb{R}^d$, we look into its relationship with $d_{H,iso}$, the infimum Hausdorff distance under Euclidean isometries. As is already known, for dimension $d\geq 2$ the distance $d_{H,iso}$ cannot be bounded above by a constant factor times $d_{GH}$. For $d=1$, however, we prove that $d_{H,iso}\leq\frac{5}{4}d_{GH}$, and we show that this bound is tight. In effect, this gives rise to an $O(n\log{n})$-time algorithm to approximate $d_{GH}$ with an approximation factor of $\left(1+\frac{1}{4}\right)$.
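
As an illustration of one ingredient (not the paper's algorithm itself): the plain Hausdorff distance between two finite subsets of the line can be computed in $O(n\log n)$ time after sorting, as sketched below; $d_{H,iso}$ additionally minimises this quantity over translations and reflections of the line.

```python
import bisect

def hausdorff_1d(X, Y):
    """Two-sided Hausdorff distance between finite subsets of the real line."""
    Xs, Ys = sorted(X), sorted(Y)

    def directed(A, B):
        worst = 0.0
        for a in A:
            i = bisect.bisect_left(B, a)
            nearest = min(abs(a - B[j]) for j in (i - 1, i) if 0 <= j < len(B))
            worst = max(worst, nearest)
        return worst

    return max(directed(Xs, Ys), directed(Ys, Xs))
```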

Rule-based classifiers, which extract a subset of induced rules so as to learn/mine efficiently while preserving the discernibility information, play a crucial role in human-explainable artificial intelligence. However, in this era of big data, rule induction on whole datasets is computationally intensive. To the best of our knowledge, no method focusing on accelerating rule induction has been reported so far; this is the first study to consider acceleration techniques that reduce the scale of computation in rule induction. We propose an accelerator for rule induction based on fuzzy rough theory; the accelerator avoids redundant computation and speeds up the building of a rule classifier. First, a rule induction method based on consistence degree, called Consistence-based Value Reduction (CVR), is proposed and used as the basis for acceleration. Second, we introduce a compacted search space termed the Key Set, which contains only the key instances required to update the induced rules, to conduct value reduction. The monotonicity of the Key Set ensures the feasibility of our accelerator. Third, a rule-induction accelerator is designed based on the Key Set, and it is theoretically guaranteed to produce the same results as the unaccelerated version. Specifically, the rank preservation property of the Key Set ensures consistency between the rule induction achieved by the accelerator and by the unaccelerated method. Finally, extensive experiments demonstrate that the proposed accelerator runs remarkably faster than the unaccelerated rule-based classifier methods, especially on datasets with numerous instances.

In this paper we introduce a covariance framework for the analysis of EEG and MEG data that takes into account observed temporal stationarity on small time scales and trial-to-trial variations. We formulate a model for the covariance matrix, which is a Kronecker product of three components that correspond to space, time and epochs/trials, and consider maximum likelihood estimation of the unknown parameter values. An iterative algorithm that finds approximations of the maximum likelihood estimates is proposed. We perform a simulation study to assess the performance of the estimator and investigate the influence of different assumptions about the covariance factors on the estimated covariance matrix and on its components. In addition, we illustrate our method on real EEG and MEG data sets. The proposed covariance model is applicable in a variety of cases where spontaneous EEG or MEG acts as a source of noise and realistic noise covariance estimates are needed for accurate dipole localization, such as in evoked activity studies, or where the properties of spontaneous EEG or MEG are themselves the topic of interest, such as in combined EEG/fMRI experiments in which the correlation between EEG and fMRI signals is investigated.
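
Schematically, the model assumes that the covariance of the vectorised data factorises into a trial, a temporal and a spatial component; a minimal sketch is given below (the factor ordering assumes trials vary slowest and sensors fastest, which is a convention of this sketch rather than of the paper):

```python
import numpy as np

def kronecker_covariance(sigma_trial, sigma_time, sigma_space):
    """Covariance of vec(data) under the three-factor Kronecker model.
    Each argument is a symmetric positive-definite matrix for its factor."""
    return np.kron(sigma_trial, np.kron(sigma_time, sigma_space))
```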
