
In 1953, Carlitz~\cite{Car53} showed that all permutation polynomials over $\F_q$, where $q>2$ is a power of a prime, are generated by the special permutation polynomials $x^{q-2}$ (the inversion) and $ax+b$ (affine functions, where $0\neq a, b\in \F_q$). Recently, Nikova, Nikov and Rijmen~\cite{NNR19} proposed an algorithm (NNR) to find a decomposition of the inverse function into quadratics, and computationally covered all dimensions $n\leq 16$. Petrides~\cite{P23} found a class of integers for which it is easy to decompose the inverse into quadratics, and improved the NNR algorithm, thereby extending the computation up to $n\leq 32$. Here, we extend Petrides' result and propose a number-theoretic approach, which allows us to easily cover all (necessarily odd) exponents up to at least~$250$.
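Carlitz's generators are easy to experiment with numerically. The sketch below is illustrative only (the prime $p=7$ and the particular affine map are arbitrary choices, not taken from the paper): it checks that the inversion $x^{p-2}$ and an affine map are permutations, and that their composition is again a permutation.

```python
p = 7  # an illustrative small prime with p > 2

def inv_map(x):
    # x -> x^(p-2) mod p: agrees with field inversion for x != 0, fixes 0,
    # hence permutes {0, ..., p-1} (Carlitz's "inversion" generator)
    return pow(x, p - 2, p)

def affine(a, b):
    # x -> a*x + b mod p with a != 0, the other generator in Carlitz's theorem
    return lambda x: (a * x + b) % p

# both building blocks are permutations of {0, ..., p-1}
assert sorted(inv_map(x) for x in range(p)) == list(range(p))
assert sorted(affine(3, 5)(x) for x in range(p)) == list(range(p))

# composing them yields further permutations, e.g. x -> (3x + 5)^(p-2)
comp = [inv_map(affine(3, 5)(x)) for x in range(p)]
assert sorted(comp) == list(range(p))
```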


We compute the weight distribution of the Reed--Muller code ${\mathcal R}(4,9)$ by combining the approach described in D. V. Sarwate's 1973 Ph.D. thesis with knowledge of the affine equivalence classification of Boolean functions. To solve this problem, posed, e.g., in the book of MacWilliams and Sloane [p. 447], we apply a refined approach based on the classification of Boolean quartic forms in $8$ variables due to Ph. Langevin and G. Leander, and on recent results on the classification of the quotient space ${\mathcal R}(4,7)/{\mathcal R}(2,7)$ due to V. Gillot and Ph. Langevin.

We show that for any non-real algebraic number $q$ such that $|q-1|>1$ or $\Re(q)>\frac{3}{2}$ it is \textsc{\#P}-hard to compute a multiplicative (resp. additive) approximation to the absolute value (resp. argument) of the chromatic polynomial evaluated at $q$ on planar graphs. This implies \textsc{\#P}-hardness for all non-real algebraic $q$ on the family of all graphs. We moreover prove several hardness results for $q$ such that $|q-1|\leq 1$. Our hardness results are obtained by showing that a polynomial-time algorithm for approximately computing the chromatic polynomial of a planar graph at non-real algebraic $q$ (satisfying some properties) leads to a polynomial-time algorithm for \emph{exactly} computing it, which is known to be hard by a result of Vertigan. In fact, many of our results extend to the more general partition function of the random cluster model, a well-known reparametrization of the Tutte polynomial.
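For intuition, exact evaluation of the chromatic polynomial at an arbitrary point, including a non-real algebraic one, can be carried out by deletion-contraction. The sketch below is the generic textbook recursion (not the paper's reduction); it illustrates the exact computation whose hardness, due to Vertigan, the approximation results are reduced to.

```python
# Evaluate P(G; q) by deletion-contraction: P(G) = P(G - e) - P(G / e).
# Graphs are given as a vertex count n and a list of edges (a, b) with a < b.
def chromatic(n, edges, q):
    if not edges:
        return q ** n                       # n isolated vertices: q^n colorings
    (u, v), rest = edges[0], edges[1:]
    deleted = chromatic(n, rest, q)         # P(G - e; q): drop the edge
    # contract v into u: relabel v -> u, shift higher labels down by one
    def relabel(w):
        return u if w == v else (w - 1 if w > v else w)
    merged = {(min(relabel(a), relabel(b)), max(relabel(a), relabel(b)))
              for a, b in rest if relabel(a) != relabel(b)}
    contracted = chromatic(n - 1, sorted(merged), q)   # P(G / e; q)
    return deleted - contracted

triangle = [(0, 1), (1, 2), (0, 2)]
assert chromatic(3, triangle, 3) == 6       # q(q-1)(q-2) at q = 3
z = 1.5 + 1j                                # a non-real evaluation point
assert abs(chromatic(3, triangle, z) - z * (z - 1) * (z - 2)) < 1e-12
```

The recursion takes exponential time in general, which is consistent with the hardness picture above.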

The fractional differential equation $L^\beta u = f$ posed on a compact metric graph is considered, where $\beta>0$ and $L = \kappa^2 - \nabla(a\nabla)$ is a second-order elliptic operator equipped with certain vertex conditions and sufficiently smooth and positive coefficients $\kappa, a$. We demonstrate the existence of a unique solution for a general class of vertex conditions and derive the regularity of the solution in the specific case of Kirchhoff vertex conditions. These results are extended to the stochastic setting when $f$ is replaced by Gaussian white noise. For the deterministic and stochastic settings under generalized Kirchhoff vertex conditions, we propose a numerical solution based on a finite element approximation combined with a rational approximation of the fractional power $L^{-\beta}$. For the resulting approximation, the strong error is analyzed in the deterministic case, and the strong mean squared error as well as the $L_2(\Gamma\times \Gamma)$-error of the covariance function of the solution are analyzed in the stochastic setting. Explicit rates of convergences are derived for all cases. Numerical experiments for ${L = \kappa^2 - \Delta, \kappa>0}$ are performed to illustrate the results.

Learning nonparametric systems of Ordinary Differential Equations (ODEs) $\dot{x} = f(t,x)$ from noisy data is an emerging machine learning topic. We use the well-developed theory of Reproducing Kernel Hilbert Spaces (RKHS) to define candidates for $f$ for which the solution of the ODE exists and is unique. Learning $f$ consists of solving a constrained optimization problem in an RKHS. We propose a penalty method that iteratively uses the Representer theorem and Euler approximations to provide a numerical solution. We prove a generalization bound for the $L^2$ distance between $x$ and its estimator, and provide experimental comparisons with the state of the art.
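The flavor of the approach can be illustrated with a toy one-dimensional example: fit $f$ by kernel ridge regression on Euler difference quotients, so that the Representer theorem gives a finite kernel expansion of the estimate. This is a deliberate simplification (autonomous ODE, a plain ridge penalty instead of the paper's constrained formulation; all parameter values are illustrative).

```python
import numpy as np

rng = np.random.default_rng(0)
dt, n = 0.05, 200
x = np.empty(n)
x[0] = 1.0
for i in range(n - 1):                      # simulate dot x = -x by Euler steps
    x[i + 1] = x[i] + dt * (-x[i])
# noisy Euler difference quotients serve as regression targets for f
y = (x[1:] - x[:-1]) / dt + 0.01 * rng.standard_normal(n - 1)

def k(a, b, h=0.5):                         # Gaussian (RBF) kernel matrix
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * h ** 2))

K = k(x[:-1], x[:-1])
# ridge-regularized fit; by the Representer theorem f_hat = sum_i alpha_i k(., x_i)
alpha = np.linalg.solve(K + 1e-3 * np.eye(n - 1), y)

def f_hat(z):
    return k(np.atleast_1d(np.asarray(z, dtype=float)), x[:-1]) @ alpha

# the estimate should be close to the true field f(x) = -x on the data range
assert abs(f_hat(0.5)[0] + 0.5) < 0.1
```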

This paper develops the notion of \emph{Word Linear Complexity} ($WLC$) of vector-valued sequences over finite fields $\ff$ as an extension of the Linear Complexity ($LC$) of sequences and their ensembles. This notion extends the concept of the minimal polynomial of an ensemble (vector-valued) sequence to that of a matrix minimal polynomial, and shows that the matrix minimal polynomial can be applied to vector-valued sequences generated by iterating a map $F:\ff^n\rightarrow\ff^n$ at a given $y$ in $\ff^n$, in order to solve for the unique local inverse $x$ of the equation $y=F(x)$ when the sequence is periodic. The idea of solving for a local inverse of a map over a finite field when the iterative sequence is periodic, and its application to various problems of cryptanalysis, was developed in previous papers \cite{sule322, sule521, sule722,suleCAM22} using the well-known notion of $LC$, the degree of the minimal polynomial associated with the sequence. The generalization of $LC$ to $WLC$ considers vector-valued (word-oriented) sequences whose recurrence relation is obtained by matrix-vector multiplication instead of the scalar multiplication used in the definition of $LC$. Hence the associated minimal polynomial is matrix-valued, and its degree is called the $WLC$. A condition is derived for the existence of a nontrivial matrix polynomial associated with the word-oriented recurrence relation when the sequence is periodic, and it is shown that when the matrix minimal polynomial exists, $n(WLC)=LC$. Finally, it is shown that the local inversion problem can be solved using the matrix minimal polynomial whenever this polynomial exists, which leads to a word-oriented approach to local inversion.
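The local-inversion principle underlying this line of work is simple to demonstrate: if the orbit of $y$ under $F$ is periodic with period $N$, then $x=F^{N-1}(y)$ satisfies $F(x)=y$. The toy sketch below uses an arbitrary invertible linear map on $\ff_2^3$ chosen only for illustration; the paper's matrix-minimal-polynomial machinery is not reproduced here.

```python
def F(v):
    # an illustrative invertible linear map on F_2^3 (3-bit tuples);
    # over F_2, addition is XOR
    a, b, c = v
    return (b ^ c, a, a ^ b)

y = (1, 0, 1)
orbit = [y]
v = F(y)
while v != y:                      # walk the orbit until it closes (F is a
    orbit.append(v)                # bijection on a finite set, so it must)
    v = F(v)
N = len(orbit)                     # the period of y under F

x = orbit[-1]                      # x = F^(N-1)(y) is the local inverse at y
assert F(x) == y
```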

We consider inverse problems where the conditional distribution of the observation ${\bf y}$ given the latent variable of interest ${\bf x}$ (also known as the forward model) is known, and we have access to a data set in which multiple instances of ${\bf x}$ and ${\bf y}$ are both observed. In this context, algorithm unrolling has become a very popular approach for designing state-of-the-art deep neural network architectures that effectively exploit the forward model. We analyze the statistical complexity of the gradient descent network (GDN), an algorithm unrolling architecture driven by proximal gradient descent. We show that the unrolling depth needed for the optimal statistical performance of GDNs is of order $\log(n)/\log(\varrho_n^{-1})$, where $n$ is the sample size, and $\varrho_n$ is the convergence rate of the corresponding gradient descent algorithm. We also show that when the negative log-density of the latent variable ${\bf x}$ has a simple proximal operator, then a GDN unrolled at depth $D'$ can solve the inverse problem at the parametric rate $O(D'/\sqrt{n})$. Our results thus also suggest that algorithm unrolling models are prone to overfitting as the unrolling depth $D'$ increases. We provide several examples to illustrate these results.
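A generic instance of such an architecture is unrolled proximal gradient descent (ISTA) for a sparsity prior, whose negative log-density has soft-thresholding as its proximal operator. The sketch below is illustrative only: the forward model, step size, regularization weight, and depth are arbitrary choices, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 60, 40
A = rng.standard_normal((m, n)) / np.sqrt(m)    # known forward model
x_true = np.zeros(n)
x_true[:4] = 1.0                                # sparse latent variable
y = A @ x_true + 0.01 * rng.standard_normal(m)  # noisy observation

def soft(z, t):
    # proximal operator of t * ||.||_1 (soft-thresholding)
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def gdn(y, depth, eta=0.3, lam=0.02):
    # gradient descent network: `depth` unrolled proximal gradient steps
    x = np.zeros(n)
    for _ in range(depth):
        x = soft(x - eta * (A.T @ (A @ x - y)), eta * lam)
    return x

x_hat = gdn(y, depth=300)
assert np.linalg.norm(x_hat - x_true) < 0.3     # recovers the sparse signal
```

In a learned GDN the step size, threshold, or even the operator applied at each layer would be trained from data; the depth plays the role of $D'$ in the statistical analysis above.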

Sketch-and-precondition techniques are efficient and popular for solving large least squares (LS) problems of the form $Ax=b$ with $A\in\mathbb{R}^{m\times n}$ and $m\gg n$. Here, $A$ is ``sketched'' to a smaller matrix $SA$ with $S\in\mathbb{R}^{\lceil cn\rceil\times m}$ for some constant $c>1$, before an iterative LS solver computes the solution to $Ax=b$ with a right preconditioner $P$ constructed from $SA$. Prominent sketch-and-precondition LS solvers are Blendenpik and LSRN. We show that the sketch-and-precondition technique in its most commonly used form is not numerically stable for ill-conditioned LS problems. For provable and practical backward stability and optimal residuals, we suggest using an unpreconditioned iterative LS solver on $(AP)z=b$ with $x=Pz$. Provided the condition number of $A$ is smaller than the reciprocal of the unit round-off, we show that this modification ensures that the computed solution has a backward error comparable to that of the iterative LS solver applied to a well-conditioned matrix. Using smoothed analysis, we model floating-point rounding errors to argue that our modification is expected to compute a backward-stable solution even for arbitrarily ill-conditioned LS problems. Additionally, we provide experimental evidence that the sketch-and-solve solution should be highly preferred over the zero vector as a starting vector in sketch-and-precondition algorithms (as suggested by Rokhlin and Tygert in 2008); this initialization often results in much more accurate solutions, albeit not always backward-stable ones.
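The basic mechanism can be sketched in a few lines: sketch $A$, take the QR factorization $SA=QR$, and use $R^{-1}$ as the right preconditioner. The demo below uses a Gaussian sketch and a dense LS solve standing in for an iterative solver; all sizes are illustrative and none of this reproduces the stability analysis of the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, c = 2000, 50, 4
# an ill-conditioned LS problem: column scales span six orders of magnitude
A = rng.standard_normal((m, n)) @ np.diag(10.0 ** np.linspace(0, 6, n))
b = rng.standard_normal(m)

S = rng.standard_normal((c * n, m)) / np.sqrt(c * n)   # Gaussian sketch
_, R = np.linalg.qr(S @ A)                             # SA = QR
AP = A @ np.linalg.inv(R)                              # right-preconditioned A

# the preconditioned matrix is well conditioned even though A is not
assert np.linalg.cond(A) > 1e5
assert np.linalg.cond(AP) < 10

# solve (AP) z = b, then recover x = R^{-1} z
z, *_ = np.linalg.lstsq(AP, b, rcond=None)
x = np.linalg.solve(R, z)
res = np.linalg.norm(A @ x - b)                        # LS residual of x
```

With an actual iterative solver (e.g. LSQR) in place of `lstsq`, the small condition number of $AP$ is what yields fast convergence.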

We extend our formulation of Merge and Minimalism in terms of Hopf algebras to an algebraic model of a syntactic-semantic interface. We show that methods adopted in the formulation of renormalization (the extraction of meaningful physical values) in theoretical physics are relevant to describing the extraction of meaning from syntactic expressions. We show how this formulation relates to computational models of semantics, and we address some recent controversies about the implications of the current functioning of large language models for generative linguistics.

We present a space-time ultra-weak discontinuous Galerkin discretization of the linear Schr\"odinger equation with variable potential. The proposed method is well-posed and quasi-optimal in mesh-dependent norms for very general discrete spaces. Optimal $h$-convergence error estimates are derived for the method when test and trial spaces are chosen either as piecewise polynomials, or as a novel quasi-Trefftz polynomial space. The latter allows for a substantial reduction of the number of degrees of freedom and admits piecewise-smooth potentials. Several numerical experiments validate the accuracy and advantages of the proposed method.

We present a method for finding large fixed-size primes of the form $X^2+c$. We study the density of primes in the sets $E_c = \{N(X,c)=X^2+c,\ X \in (2\mathbb{Z}+(c-1))\}$, $c \in \mathbb{N}^*$. We describe an algorithm for generating values of $c$ such that a given prime $p$ is the minimum of the union of the prime divisors of all elements of $E_c$. We also present quadratic forms generating divisors of $E_c$ and study the prime divisors of its terms. We use Dirichlet's theorem on arithmetic progressions [1] and the results of [6] to rewrite a conjecture of Shanks [2] on the density of primes in $E_c$. Finally, based on these results, we discuss heuristics for the occurrence of large primes in the search set of our algorithm.
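As a small illustration of the sets $E_c$ (not the paper's algorithm), take $c=1$: the parity constraint makes $X$ range over the even integers (an odd $X$ would make $X^2+1$ even), and the first few primes of the form $X^2+1$ can be enumerated by trial division.

```python
def is_prime(n):
    # trial division, sufficient for this small-scale illustration
    if n < 2:
        return False
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False
    return True

c = 1
# X in 2Z + (c - 1) = the even integers when c = 1
primes = [x * x + c for x in range(2, 100, 2) if is_prime(x * x + c)]
assert primes[:5] == [5, 17, 37, 101, 197]
```

Whether infinitely many such primes exist (the $c=1$ case is Landau's problem) is open, which is why the paper's density statements are conjectural, in the spirit of Shanks.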
