
Rueppel's conjecture on the linear complexity of the first $n$ terms of the sequence $(1,1,0,1,0^3,1,0^7,1,0^{15},\ldots)$ was first proved by Dai using the Euclidean algorithm. We have previously shown that we can attach a homogeneous (annihilator) ideal of $F[x,z]$ to the first $n$ terms of a sequence over a field $F$ and construct a pair of generating forms for it. This approach gives another proof of Rueppel's conjecture. We also prove additional properties of these forms and deduce the outputs of the LFSR synthesis algorithm applied to the first $n$ terms. Further, dehomogenising the leading generators yields the minimal polynomials of Dai.
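A quick empirical check of the conjecture (our sketch; it uses the standard Berlekamp-Massey LFSR synthesis over $\mathbb{F}_2$, not the paper's ideal-theoretic machinery): Rueppel's conjecture asserts the perfect linear complexity profile $L_n = \lfloor (n+1)/2 \rfloor$ for this sequence, which can be verified for small $n$:

```python
def linear_complexity(s):
    """Berlekamp-Massey over GF(2): length of the shortest LFSR generating s."""
    n = len(s)
    c, b = [1] + [0] * n, [1] + [0] * n   # connection polynomials C(x), B(x)
    L, m = 0, -1
    for i in range(n):
        d = s[i]
        for j in range(1, L + 1):          # discrepancy of the next term
            d ^= c[j] & s[i - j]
        if d:
            t, shift = c[:], i - m
            for j in range(n + 1 - shift):
                c[j + shift] ^= b[j]
            if 2 * L <= i:
                L, m, b = i + 1 - L, i, t
    return L

# Rueppel's sequence: s_i = 1 iff i + 1 is a power of two (positions 0,1,3,7,15,...).
s = [1 if (i + 1) & i == 0 else 0 for i in range(64)]
for n in range(1, 65):
    assert linear_complexity(s[:n]) == (n + 1) // 2   # perfect profile
```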

Related content

The combinatorial pure exploration (CPE) problem in the stochastic multi-armed bandit (MAB) setting is a well-studied online decision-making problem: a player wants to find the optimal \emph{action} $\boldsymbol{\pi}^*$ from an \emph{action class} $\mathcal{A}$, which is a collection of subsets of arms with a certain combinatorial structure. Although CPE can represent many combinatorial structures such as paths, matchings, and spanning trees, most existing works focus only on binary action classes $\mathcal{A}\subseteq\{0, 1\}^d$ for some positive integer $d$. This binary formulation excludes important problems such as optimal transport, knapsack, and production planning. To overcome this limitation, we extend the binary formulation to real-valued action classes $\mathcal{A}\subseteq\mathbb{R}^d$ (R-CPE-MAB) and propose a new algorithm. The only assumption we make is that the number of actions in $\mathcal{A}$ is polynomial in $d$. We show an upper bound on the sample complexity of our algorithm and an action-class-dependent lower bound for R-CPE-MAB, by introducing a quantity that characterizes the problem's difficulty and generalizes the notion of \emph{width} introduced in Chen et al. [2014].
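As a concrete illustration of the R-CPE-MAB setup (not the paper's algorithm, which the abstract does not spell out), here is a minimal sketch of ours in which the reward of an action is assumed linear in the arm means and a naive uniform-sampling baseline estimates the means and then maximizes over $\mathcal{A}$; the `pull` interface and all parameters are illustrative assumptions:

```python
import numpy as np

def uniform_sampling_baseline(pull, actions, d, budget):
    """Naive baseline for R-CPE-MAB: sample every arm equally often,
    estimate the mean-reward vector, then maximize the (assumed linear)
    reward pi . mu_hat over the finite action class."""
    pulls_per_arm = budget // d
    mu_hat = np.array([np.mean([pull(i) for _ in range(pulls_per_arm)])
                       for i in range(d)])
    return max(actions, key=lambda a: float(a @ mu_hat))

# Toy usage: 3 arms with Gaussian noise and a small real-valued action class.
mu = np.array([0.2, 0.5, 0.9])
pull = lambda i: mu[i] + 0.3 * np.random.randn()
A = [np.array([1.0, 0.0, 2.0]),
     np.array([0.0, 1.5, 1.0]),
     np.array([2.0, 0.5, 0.0])]
print(uniform_sampling_baseline(pull, A, d=3, budget=3000))
```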

Classical learning theory suggests that proper regularization is the key to good generalization and robustness. In classification, however, current training schemes only target the complexity of the classifier itself, which can be misleading and ineffective. Instead, we advocate directly measuring the complexity of the decision boundary. Existing literature on this question is limited, with few well-established definitions of boundary complexity. As a proof of concept, we start by analyzing ReLU neural networks, whose boundary complexity can be conveniently characterized by the number of affine pieces. With the help of tropical geometry, we develop a novel method that can explicitly count the exact number of boundary pieces and, as a by-product, the exact number of total affine pieces. Numerical experiments are conducted and distinctive properties of our boundary complexity are uncovered. First, the boundary piece count appears largely independent of other measures, e.g., the total piece count and the $\ell_2$ norm of the weights, during the training process. Second, the boundary piece count is negatively correlated with robustness: popular robust training techniques, e.g., adversarial training and random noise injection, are found to reduce the number of boundary pieces.
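A crude empirical proxy for these counts (ours, not the paper's exact tropical method) is to enumerate ReLU activation patterns on a fine input grid: distinct patterns observed witness distinct affine pieces, and patterns seen where the predicted sign flips witness boundary pieces. The tiny one-hidden-layer architecture and grid resolution below are our choices:

```python
import numpy as np

def activation_patterns(W1, b1, w2, b2, grid):
    """One-hidden-layer ReLU net f(x) = w2 . relu(W1 x + b1) + b2.
    Returns the hidden activation pattern and the sign of f at each grid point."""
    pre = grid @ W1.T + b1                      # (n_points, n_hidden)
    patterns = pre > 0
    out = np.maximum(pre, 0) @ w2 + b2
    return patterns, np.sign(out)

rng = np.random.default_rng(0)
h = 8
W1, b1 = rng.standard_normal((h, 2)), rng.standard_normal(h)
w2, b2 = rng.standard_normal(h), 0.1

xs = np.linspace(-2, 2, 400)
grid = np.stack(np.meshgrid(xs, xs), axis=-1).reshape(-1, 2)
patterns, signs = activation_patterns(W1, b1, w2, b2, grid)

# Lower bound on the number of affine pieces: distinct patterns observed.
total_pieces = len({p.tobytes() for p in patterns})

# Boundary-piece proxy: patterns seen next to a sign change along grid rows.
signs2d = signs.reshape(400, 400)
pat2d = patterns.reshape(400, 400, h)
flips = signs2d[:, 1:] != signs2d[:, :-1]
boundary_pieces = len({pat2d[i, j].tobytes() for i, j in zip(*np.where(flips))})

print(total_pieces, boundary_pieces)
```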

The equation $x^m = 0$ defines a fat point on a line. The algebra of regular functions on the arc space of this scheme is the quotient of $k[x, x', x^{(2)}, \ldots]$ by all differential consequences of $x^m = 0$. This infinite-dimensional algebra admits a natural filtration by finite-dimensional algebras corresponding to the truncations of arcs. We show that the generating series for their dimensions equals $\frac{m}{1 - mt}$. We also determine the lexicographic initial ideal of the defining ideal of the arc space. These results are motivated by a nonreduced version of the geometric motivic Poincar\'e series, by multiplicities in differential algebra, and by connections between arc spaces and the Rogers-Ramanujan identities. We also prove a recent conjecture put forth by Afsharijoo in the latter context.
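For the record, the series pins down the truncated dimensions explicitly. Expanding it as a geometric series (assuming, as we read the statement, that the dimension of the level-$n$ truncation is the coefficient of $t^n$):

$$\frac{m}{1 - mt} = m \sum_{n \ge 0} (mt)^n = \sum_{n \ge 0} m^{n+1} t^n,$$

so the level-$n$ algebra has dimension $m^{n+1}$; for $m = 2$ the dimensions are $2, 4, 8, \ldots$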

We show that the sum of a sequence of integers can be computed in linear time on a Turing machine. In particular, the most obvious algorithm for this problem, which appears to require quadratic time due to carry propagation, actually runs in linear time by amortized analysis.
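A minimal sketch of the amortized argument in code (ours, not the paper's Turing-machine construction): adding each binary number into a running accumulator propagates carries, but a propagating carry turns an accumulator 1 into a 0, so the total carry work is bounded by the total number of digits ever written, which is linear in the input length.

```python
def add_into(acc, x):
    """Add x into acc in place; both are little-endian base-2 digit lists.
    Carry propagation past the end of x flips accumulator 1s to 0s, so the
    potential function Phi = (number of 1s in acc) pays for all carry steps."""
    carry, i = 0, 0
    while i < len(x) or carry:
        d = carry + (x[i] if i < len(x) else 0) + (acc[i] if i < len(acc) else 0)
        if i < len(acc):
            acc[i] = d & 1
        else:
            acc.append(d & 1)
        carry = d >> 1
        i += 1

def sum_all(nums):
    """Sum nonnegative ints via explicit digit lists; total work is
    O(total number of input bits), amortized."""
    acc = []
    for n in nums:
        digits = [int(b) for b in bin(n)[:1:-1]] if n else []  # little-endian bits
        add_into(acc, digits)
    return sum(b << i for i, b in enumerate(acc))

assert sum_all([13, 7, 255, 1]) == 276
```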

Given a set $P$ of $n$ points in $\mathbb{R}^2$ and an input line $\gamma$ in $\mathbb{R}^2$, we present an algorithm that runs in optimal $\Theta(n\log n)$ time and $\Theta(n)$ space to solve a restricted version of the $1$-Steiner tree problem. Our algorithm returns a minimum-weight tree interconnecting $P$ using at most one Steiner point $s \in \gamma$, where edges are weighted by the Euclidean distance between their endpoints. We then extend the result to $j$ input lines. Following this, we show how the algorithm of Brazil et al. ("Generalised k-Steiner Tree Problems in Normed Planes", arXiv:1111.1464), which solves the $k$-Steiner tree problem in $\mathbb{R}^2$ in $O(n^{2k})$ time, can be adapted to our setting: for $k>1$, restricting the (at most) $k$ Steiner points to lie on an input line brings the runtime down to $O(n^{k})$. Next we show how the results of the same work allow us to maintain the same time and space bounds while extending to some non-Euclidean norms and to different tree cost functions. Lastly, we extend the result to $j$ input curves.
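To make the problem concrete, here is a naive brute-force baseline of ours (nowhere near the paper's $\Theta(n\log n)$ algorithm): sample candidate Steiner points along a segment of $\gamma$ and recompute a Euclidean MST for each, keeping the cheapest tree. The segment endpoints and sample count are illustrative parameters.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

def mst_weight(points):
    """Weight of a Euclidean minimum spanning tree on the given points."""
    return minimum_spanning_tree(squareform(pdist(points))).sum()

def naive_restricted_1steiner(P, a, b, samples=200):
    """Brute force for the restricted 1-Steiner problem: try `samples`
    candidate Steiner points s on the segment from a to b (on the line
    gamma); since "at most one" Steiner point is allowed, the plain MST
    of P is also a candidate."""
    best = mst_weight(P)
    for t in np.linspace(0.0, 1.0, samples):
        s = (1 - t) * a + t * b
        best = min(best, mst_weight(np.vstack([P, s])))
    return best

P = np.array([[0.0, 1.0], [2.0, 1.0], [1.0, 3.0]])
print(naive_restricted_1steiner(P, a=np.array([-1.0, 0.0]), b=np.array([3.0, 0.0])))
```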

Graph theory is an interdisciplinary field of study that has various applications in mathematical modeling and computer science. Research in graph theory depends on the creation of not only theorems but also conjectures. Conjecture-refuting algorithms attempt to refute conjectures by searching for counterexamples to those conjectures, often by maximizing certain score functions on graphs. This study proposes a novel conjecture-refuting algorithm, referred to as the adaptive Monte Carlo search (AMCS) algorithm, obtained by modifying the Monte Carlo tree search algorithm. Evaluated based on its success in finding counterexamples to several graph theory conjectures, AMCS outperforms existing conjecture-refuting algorithms. The algorithm is further utilized to refute six open conjectures, two of which were chemical graph theory conjectures formulated by Liu et al. in 2021 and four of which were formulated by the AutoGraphiX computer system in 2006. Finally, four of the open conjectures are strongly refuted by generalizing the counterexamples obtained by AMCS to produce a family of counterexamples. It is expected that the algorithm can help researchers test graph-theoretic conjectures more effectively.
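In the same spirit, here is a generic greedy baseline of our own (not the AMCS algorithm itself, which adapts its search using Monte Carlo tree search ideas): a conjecture of the form $f(G) \le 0$ over connected graphs can be attacked by maximizing the score $f$ via random edge toggles, and any graph found with a positive score is a counterexample.

```python
import random
import networkx as nx

def perturb(G):
    """Toggle one random (non-)edge, keeping the graph connected."""
    H = G.copy()
    u, v = random.sample(list(H.nodes), 2)
    if H.has_edge(u, v):
        H.remove_edge(u, v)
        if not nx.is_connected(H):
            H.add_edge(u, v)          # undo a disconnecting deletion
    else:
        H.add_edge(u, v)
    return H

def local_search(score, n=10, steps=5000, restarts=20):
    """Maximize `score` over connected graphs on n vertices by greedy
    random edge toggles with restarts; returns the best graph found."""
    best_G, best_s = None, float("-inf")
    for _ in range(restarts):
        G = nx.gnp_random_graph(n, 0.5)
        while not nx.is_connected(G):
            G = nx.gnp_random_graph(n, 0.5)
        s = score(G)
        for _ in range(steps):
            H = perturb(G)
            if score(H) >= s:        # accept improvements and plateaus
                G, s = H, score(H)
        if s > best_s:
            best_G, best_s = G, s
    return best_G, best_s
```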

The subset cover problem for $k \geq 1$ hash functions, which can be seen as an extension of the collision problem, was introduced in 2002 by Reyzin and Reyzin to analyse the security of their hash-based signature scheme HORS. The security of many hash-based signature schemes (e.g. HORS, SPHINCS, SPHINCS+, $\dots$) relies on this problem or a variant of it. Recently, Yuan, Tibouchi and Abe (2022) introduced a variant of the subset cover problem, called restricted subset cover, and proposed a quantum algorithm for it. In this work, we prove that any quantum algorithm needs to make $\Omega\left((k+1)^{-\frac{2^{k}}{2^{k+1}-1}}\cdot N^{\frac{2^{k}-1}{2^{k+1}-1}}\right)$ queries to the underlying hash functions with codomain size $N$ to solve the restricted subset cover problem, which essentially matches the query complexity of the algorithm proposed by Yuan, Tibouchi and Abe. We also analyze the security of the general $(r,k)$-subset cover problem, the underlying problem whose hardness implies the unforgeability of HORS under an $r$-chosen-message attack (for $r \geq 1$). We prove that a generic quantum algorithm needs to make $\Omega\left(N^{k/5}\right)$ queries to the underlying hash functions to find a $(1,k)$-subset cover. We also propose a quantum algorithm that finds an $(r,k)$-subset cover making $O\left(N^{k/(2+2r)}\right)$ queries to the $k$ hash functions.
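For intuition, here is a classical brute-force search for a $(1,k)$-subset cover under our paraphrase of the definition (find distinct messages $x_0 \neq x_1$ with $\{h_i(x_0)\}_{i \le k} \subseteq \{h_i(x_1)\}_{i \le k}$, unions taken over the $k$ functions); the SHA-256-derived toy hash family and the parameters are illustrative assumptions only:

```python
import hashlib

def h(i, x, N):
    """Toy family of k hash functions with codomain {0, ..., N-1}."""
    return int.from_bytes(hashlib.sha256(f"{i}|{x}".encode()).digest(), "big") % N

def find_1k_subset_cover(k, N, max_tries=20000):
    """Classical O(n^2) brute force: collect image sets and look for one
    message whose k hash values are covered by another message's values."""
    images = []                          # list of (message, frozenset of values)
    for x in range(max_tries):
        S = frozenset(h(i, x, N) for i in range(k))
        for y, T in images:
            if S <= T:
                return x, y              # x0 = x is covered by x1 = y
            if T <= S:
                return y, x
        images.append((x, S))
    return None

print(find_1k_subset_cover(k=2, N=64))
```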

Conjectures have historically played an important role in the development of pure mathematics. We propose a systematic approach to finding abstract patterns in mathematical data, in order to generate conjectures about mathematical inequalities, using machine intelligence. We focus on strict inequalities of the form $f < g$ and associate them with a vector space. By geometrising this space, which we refer to as a conjecture space, we prove that it is isomorphic to a Banach manifold. We develop a structural understanding of this conjecture space by studying linear automorphisms of this manifold and show that it admits several free group actions. Based on these insights, we propose an algorithmic pipeline to generate novel conjectures using geometric gradient descent, where the metric is informed by the invariances of the conjecture space. As proof of concept, we give a toy algorithm to generate novel conjectures about the prime counting function and the diameters of Cayley graphs of non-abelian simple groups. We also report private communications with colleagues in which some of these conjectures were proved, and highlight that others generated by this procedure remain unproven. Finally, we propose a pipeline of mathematical discovery in this space and highlight the importance of domain expertise in this pipeline.

This paper presents a new distance metric to compare two continuous probability density functions. The main advantage of this metric is that, unlike other statistical measurements, it can provide an analytic, closed-form expression for mixtures of Gaussian distributions while satisfying all metric properties. These characteristics enable fast, stable, and efficient calculations, which are highly desirable in real-world signal processing applications. The application we have in mind is Gaussian Mixture Reduction (GMR), which is widely used in density estimation, recursive tracking, and belief propagation. To address this problem, we developed a novel algorithm dubbed the Optimization-based Greedy GMR (OGGMR), which employs our metric as a criterion to approximate a high-order Gaussian mixture with a lower-order one. Experimental results show that the OGGMR algorithm is significantly faster and more efficient than state-of-the-art GMR algorithms while retaining the geometric shape of the original mixture.
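To illustrate the greedy GMR template (the abstract does not specify the new metric, so this sketch of ours substitutes the classical closed-form $L_2$, i.e. integrated-squared-error, distance between Gaussian mixtures, which also admits an analytic expression; the moment-preserving merge is the standard one):

```python
import numpy as np

def gauss_inner(m1, v1, m2, v2):
    """Closed form of \\int N(x; m1, v1) N(x; m2, v2) dx."""
    v = v1 + v2
    return np.exp(-(m1 - m2) ** 2 / (2 * v)) / np.sqrt(2 * np.pi * v)

def l2_dist2(mix_a, mix_b):
    """Squared L2 distance between two 1-D Gaussian mixtures, each a list
    of (weight, mean, variance) triples."""
    def cross(p, q):
        return sum(wp * wq * gauss_inner(mp, vp, mq, vq)
                   for wp, mp, vp in p for wq, mq, vq in q)
    return cross(mix_a, mix_a) + cross(mix_b, mix_b) - 2 * cross(mix_a, mix_b)

def merge(c1, c2):
    """Moment-preserving merge of two weighted Gaussian components."""
    (w1, m1, v1), (w2, m2, v2) = c1, c2
    w = w1 + w2
    m = (w1 * m1 + w2 * m2) / w
    v = (w1 * (v1 + m1 ** 2) + w2 * (v2 + m2 ** 2)) / w - m ** 2
    return (w, m, v)

def greedy_gmr(mix, target):
    """Greedily merge the pair whose merge keeps the mixture L2-closest
    to the original, until only `target` components remain."""
    orig, mix = list(mix), list(mix)
    while len(mix) > target:
        best = None
        for i in range(len(mix)):
            for j in range(i + 1, len(mix)):
                cand = [c for k, c in enumerate(mix) if k not in (i, j)]
                cand.append(merge(mix[i], mix[j]))
                d = l2_dist2(orig, cand)
                if best is None or d < best[0]:
                    best = (d, cand)
        mix = best[1]
    return mix

mix = [(0.3, -2.0, 0.5), (0.3, -1.8, 0.6), (0.4, 3.0, 1.0)]
print(greedy_gmr(mix, target=2))   # reduce 3 -> 2 components
```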

The notion of graph covers is a discretization of covering spaces introduced and deeply studied in topology. In discrete mathematics and theoretical computer science, they have attracted a lot of attention from both the structural and complexity perspectives. Nonetheless, disconnected graphs were usually omitted from the considerations with the explanation that it is sufficient to understand coverings of the connected components of the target graph by components of the source one. However, different (but equivalent) versions of the definition of covers of connected graphs generalize to non-equivalent definitions for disconnected graphs. The aim of this paper is to summarize this issue and to compare three different approaches to covers of disconnected graphs: 1) locally bijective homomorphisms, 2) globally surjective locally bijective homomorphisms (which we call \emph{surjective covers}), and 3) locally bijective homomorphisms which cover every vertex the same number of times (which we call \emph{equitable covers}). The standpoint of our comparison is the complexity of deciding if an input graph covers a fixed target graph. We show that both surjective and equitable covers satisfy what certainly is a natural and welcome property: covering a disconnected graph is polynomial-time decidable if it is polynomial-time decidable for every connected component of the graph, and NP-complete if it is NP-complete for at least one of its components. We further argue that the third variant, equitable covers, is the most natural one, in particular when considering covers of colored graphs. Moreover, surjective and equitable covers differ from the fixed-parameter-complexity point of view. In line with the current trends in topological graph theory, as well as its applications in mathematical physics, we consider graphs in a very general sense[...]
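The three definitions are easy to state in code; here is a minimal checker of ours (graphs as adjacency dictionaries, $f$ a vertex map from the source $G$ to the target $H$):

```python
def is_locally_bijective(G, H, f):
    """G, H: adjacency dicts {vertex: set of neighbours}; f: dict V(G) -> V(H).
    Checks that f maps the neighbourhood of each u bijectively onto the
    neighbourhood of f(u)."""
    for u, nbrs in G.items():
        image = [f[x] for x in nbrs]
        if len(set(image)) != len(image):        # injective on N(u)
            return False
        if set(image) != H[f[u]]:                # onto N(f(u))
            return False
    return True

def is_surjective_cover(G, H, f):
    """Variant 2: additionally, f hits every vertex of H."""
    return is_locally_bijective(G, H, f) and set(f.values()) == set(H)

def is_equitable_cover(G, H, f):
    """Variant 3: additionally, every vertex of H is covered equally often."""
    if not is_locally_bijective(G, H, f):
        return False
    counts = {v: 0 for v in H}
    for u in G:
        counts[f[u]] += 1
    return len(set(counts.values())) == 1

# C6 is an equitable (double) cover of the triangle C3 via f(i) = i mod 3.
G = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}
H = {i: {(i - 1) % 3, (i + 1) % 3} for i in range(3)}
f = {i: i % 3 for i in range(6)}
print(is_equitable_cover(G, H, f))   # True
```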
