
We present a construction of partial spread bent functions using subspaces generated by linear recurring sequences (LRS). We first show that the kernels of the linear mappings defined by two LRS have a trivial intersection if and only if their feedback polynomials are relatively prime. Then, we characterize the appropriate parameters for a family of pairwise coprime polynomials to generate a partial spread required for the support of a bent function, showing that such families exist if and only if the degree of the underlying polynomials is either $1$ or $2$. We then count the resulting sets of polynomials and prove that, for degree $1$, our LRS construction coincides with the Desarguesian partial spread. Finally, we perform a computer search of all $\mathcal{PS}^-$ and $\mathcal{PS}^+$ bent functions of $n=8$ variables generated by our construction and compute their 2-ranks. The results show that many of these functions defined by polynomials of degree $b=2$ are not EA-equivalent to any Maiorana-McFarland or Desarguesian partial spread function.
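The first result above can be checked directly for small parameters. The following is a minimal brute-force sketch (not the paper's code, and with an illustrative degree-$2$ example): the set of length-$n$ binary sequences satisfying a linear recurrence with a monic feedback polynomial of degree $b$ is a $b$-dimensional subspace of $\mathbb{F}_2^n$, and two such subspaces meet only in the zero sequence exactly when the feedback polynomials are coprime.

```python
# Brute-force illustration for small n: solution spaces of two linear recurrences
# over GF(2) intersect trivially iff their feedback polynomials are coprime.
from itertools import product

def solutions(coeffs, n):
    """All length-n sequences over GF(2) with s_{t+b} = sum_i coeffs[i]*s_{t+i} (mod 2)."""
    b = len(coeffs)
    sols = set()
    for init in product((0, 1), repeat=b):      # the initial state determines the sequence
        s = list(init)
        while len(s) < n:
            s.append(sum(c * x for c, x in zip(coeffs, s[-b:])) % 2)
        sols.add(tuple(s[:n]))
    return sols

def poly_gcd(a, b):
    """gcd in GF(2)[x]; polynomials encoded as bitmasks (bit i = coefficient of x^i)."""
    while b:
        while a and a.bit_length() >= b.bit_length():
            a ^= b << (a.bit_length() - b.bit_length())
        a, b = b, a
    return a

# f = x^2 + x + 1 and g = x^2 + 1 are coprime; g and h = x^2 + x share the factor x + 1.
f_mask, f_coeffs = 0b111, [1, 1]   # x^2 + x + 1  ->  s_{t+2} = s_t + s_{t+1}
g_mask, g_coeffs = 0b101, [1, 0]   # x^2 + 1      ->  s_{t+2} = s_t
h_mask, h_coeffs = 0b110, [0, 1]   # x^2 + x      ->  s_{t+2} = s_{t+1}

n = 4  # n = 2b variables
for (m1, c1), (m2, c2) in [((f_mask, f_coeffs), (g_mask, g_coeffs)),
                           ((g_mask, g_coeffs), (h_mask, h_coeffs))]:
    common = solutions(c1, n) & solutions(c2, n)
    print("coprime:", poly_gcd(m1, m2) == 1, " common solutions:", len(common))
# Expected: the coprime pair shares only the zero sequence; the non-coprime pair does not.
```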

Related content

Depending on the node ordering, an adjacency matrix can highlight distinct characteristics of a graph. Deriving a "proper" node ordering is thus a critical step in visualizing a graph as an adjacency matrix. Users often try multiple matrix reorderings using different methods until they find one that meets the analysis goal. However, this trial-and-error approach is laborious and disorganized, which is especially challenging for novices. This paper presents a technique that enables users to effortlessly find a matrix reordering they want. Specifically, we design a generative model that learns a latent space of diverse matrix reorderings of the given graph. We also construct an intuitive user interface from the learned latent space by creating a map of various matrix reorderings. We demonstrate our approach through quantitative and qualitative evaluations of the generated reorderings and learned latent spaces. The results show that our model is capable of learning a latent space of diverse matrix reorderings. Most existing research in this area has focused on developing algorithms that compute "better" matrix reorderings for particular circumstances. In contrast, this paper introduces a fundamentally new approach to matrix visualization of a graph, in which a machine learning model learns to generate diverse matrix reorderings of a graph.
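To make "matrix reordering" concrete, here is a toy sketch using one classical method, reverse Cuthill-McKee, on a small random graph; this is not the paper's generative model, only an illustration of how a single permutation applied to rows and columns changes the matrix view.

```python
# One classical reordering (reverse Cuthill-McKee) applied to a small adjacency matrix.
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import reverse_cuthill_mckee

rng = np.random.default_rng(0)
A = (rng.random((12, 12)) < 0.2).astype(int)
A = np.triu(A, 1); A = A + A.T                      # random undirected graph

perm = reverse_cuthill_mckee(csr_matrix(A), symmetric_mode=True)
A_reordered = A[np.ix_(perm, perm)]                 # same permutation on rows and columns
print(perm)
print(A_reordered)
```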

This paper presents an inverse reinforcement learning~(IRL) framework for Bayesian stopping time problems. By observing the actions of a Bayesian decision maker, we provide a necessary and sufficient condition to identify if these actions are consistent with optimizing a cost function. In a Bayesian (partially observed) setting, the inverse learner can at best identify optimality wrt the observed actions. Our IRL algorithm identifies optimality and then constructs set valued estimates of the cost function. To achieve this IRL objective, we use novel ideas from Bayesian revealed preferences stemming from microeconomics. We illustrate the proposed IRL scheme using two important examples of stopping time problems, namely, sequential hypothesis testing and Bayesian search. Finally, for finite datasets, we propose an IRL detection algorithm and give finite sample bounds on its error probabilities.
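For readers unfamiliar with the first example, the following is a minimal sketch of the forward stopping problem only, written as a standard sequential probability ratio test with illustrative thresholds and Gaussian observations; the paper's IRL step, which inverts observed stopping actions into set-valued cost estimates, is not reproduced here.

```python
# Sequential hypothesis testing as a stopping time problem: a plain SPRT sketch.
import numpy as np

def sprt(samples, mu0=0.0, mu1=1.0, sigma=1.0, a=np.log(19), b=-np.log(19)):
    """Stop when the log-likelihood ratio leaves (b, a); return (decision, stopping time)."""
    llr = 0.0
    for t, x in enumerate(samples, start=1):
        # log N(x; mu1, sigma) - log N(x; mu0, sigma)
        llr += ((x - mu0) ** 2 - (x - mu1) ** 2) / (2 * sigma ** 2)
        if llr >= a:
            return "accept H1", t
        if llr <= b:
            return "accept H0", t
    return "undecided", len(samples)

rng = np.random.default_rng(1)
print(sprt(rng.normal(1.0, 1.0, size=200)))   # data generated under H1
print(sprt(rng.normal(0.0, 1.0, size=200)))   # data generated under H0
```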

We consider a graph-structured change point problem in which we observe a random vector with piecewise constant but unknown mean and whose independent, sub-Gaussian coordinates correspond to the $n$ nodes of a fixed graph. We are interested in the localisation task of recovering the partition of the nodes associated with the constancy regions of the mean vector. When the partition $\mathcal{S}$ consists of only two elements, we characterise the difficulty of the localisation problem in terms of four key parameters: the maximal noise variance $\sigma^2$, the size $\Delta$ of the smaller element of the partition, the magnitude $\kappa$ of the difference in the signal values across contiguous elements of the partition and the sum of the effective resistance edge weights $|\partial_r(\mathcal{S})|$ of the corresponding cut -- a graph theoretic quantity quantifying the size of the partition boundary. In particular, we demonstrate an information theoretical lower bound implying that, in the low signal-to-noise ratio regime $\kappa^2 \Delta \sigma^{-2} |\partial_r(\mathcal{S})|^{-1} \lesssim 1$, no consistent estimator of the true partition exists. On the other hand, when $\kappa^2 \Delta \sigma^{-2} |\partial_r(\mathcal{S})|^{-1} \gtrsim \zeta_n \log\{r(|E|)\}$, with $r(|E|)$ being the sum of effective resistance weighted edges and $\zeta_n$ being any diverging sequence in $n$, we show that a polynomial-time, approximate $\ell_0$-penalised least squares estimator delivers a localisation error -- measured by the symmetric difference between the true and estimated partition -- of order $ \kappa^{-2} \sigma^2 |\partial_r(\mathcal{S})| \log\{r(|E|)\}$. Aside from the $\log\{r(|E|)\}$ term, this rate is minimax optimal. Finally, we provide discussions on the localisation error for more general partitions of unknown sizes.
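The central graph quantity can be computed directly from the Laplacian pseudoinverse. Below is a small numpy sketch (a toy path graph, not the paper's estimator) making the effective-resistance weight of a cut, denoted $|\partial_r(\mathcal{S})|$ above, concrete.

```python
# Effective-resistance weight of a cut, via the pseudoinverse of the graph Laplacian.
import numpy as np

def effective_resistance_cut(adj, S):
    """Sum of effective resistances r(u,v) over edges (u,v) crossing the cut (S, complement)."""
    deg = np.diag(adj.sum(axis=1))
    L_pinv = np.linalg.pinv(deg - adj)              # pseudoinverse of the Laplacian
    total = 0.0
    n = adj.shape[0]
    for u in range(n):
        for v in range(u + 1, n):
            if adj[u, v] and ((u in S) != (v in S)):
                total += L_pinv[u, u] + L_pinv[v, v] - 2 * L_pinv[u, v]
    return total

# Path graph on 6 nodes, cut between {0,1,2} and {3,4,5}: a single crossing edge
# with effective resistance 1, so the cut weight equals 1.
n = 6
adj = np.zeros((n, n))
for i in range(n - 1):
    adj[i, i + 1] = adj[i + 1, i] = 1
print(effective_resistance_cut(adj, S={0, 1, 2}))
```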

We show that corner polyhedra and 3-connected Schnyder labelings join the growing list of planar structures that can be set in exact correspondence with (weighted) models of quadrant walks via a bijection due to Kenyon, Miller, Sheffield and Wilson. Our approach leads to the first polynomial-time algorithm to count these structures, and to the determination of their exact asymptotic growth constants: the number $p_n$ of corner polyhedra and $s_n$ of 3-connected Schnyder woods of size $n$ respectively satisfy $(p_n)^{1/n}\to 9/2$ and $(s_n)^{1/n}\to 16/3$ as $n$ goes to infinity. While the growth rates are rational, as in previously known instances of such correspondences, the exponent of the asymptotic polynomial correction to the exponential growth does not appear to follow from the now standard Denisov-Wachtel approach, due to a bimodal behavior of the step set of the underlying tandem walk. However, a heuristic argument suggests that these exponents are $-1-\pi/\arccos(9/16)\approx -4.23$ for $p_n$ and $-1-\pi/\arccos(22/27)\approx -6.08$ for $s_n$, which would imply that the associated series are not D-finite.
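To illustrate how such counting proceeds in practice, here is a generic dynamic-programming counter for quadrant walks with a user-supplied step set; the actual weighted, bimodal tandem step sets arising from the Kenyon-Miller-Sheffield-Wilson bijection are not reproduced, and the placeholder steps below (Kreweras-type) are purely illustrative.

```python
# Generic DP counter for walks confined to the nonnegative quadrant.
from collections import defaultdict

def quadrant_walk_counts(steps, length, start=(0, 0)):
    """Number of quadrant walks of each length <= `length`, starting at `start`."""
    counts = []
    state = defaultdict(int)
    state[start] = 1
    for _ in range(length):
        counts.append(sum(state.values()))
        new_state = defaultdict(int)
        for (x, y), c in state.items():
            for dx, dy in steps:
                nx, ny = x + dx, y + dy
                if nx >= 0 and ny >= 0:
                    new_state[(nx, ny)] += c
        state = new_state
    counts.append(sum(state.values()))
    return counts

steps = [(1, 0), (0, 1), (-1, -1)]          # placeholder step set, not the paper's
c = quadrant_walk_counts(steps, 30)
print(c[-1] / c[-2])                         # crude estimate of the exponential growth rate
```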

We consider the existence of fixed points of nonnegative neural networks, i.e., neural networks that take as an input nonnegative vectors and process them using nonnegative parameters. We first show that nonnegative neural networks can be recognized as monotonic and (weakly) scalable functions within the framework of nonlinear Perron-Frobenius theory. This fact enables us to provide conditions for the existence of fixed points of nonnegative neural networks, and these conditions are weaker than those obtained recently using arguments in convex analysis. Furthermore, we prove that the shape of the fixed point set of nonnegative neural networks is often an interval, which degenerates to a point for the case of scalable networks. The results of this paper contribute to the understanding of the behavior of autoencoders, because the fixed point set of an autoencoder is precisely the set of points that can be perfectly reconstructed. Moreover, they provide insight into neural networks designed using the loop-unrolling technique, which can be seen as a fixed point searching algorithm. The chief theoretical results of this paper are verified in numerical simulations, where we consider an autoencoder that first compresses angular power spectra in massive MIMO systems and then reconstructs the input spectra from the compressed signals.
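A toy sketch of the object of study: a fixed-point (Picard) iteration $x_{k+1} = f(x_k)$ on a small network with nonnegative weights and a monotone activation. Convergence in this particular toy run is only illustrative; the existence conditions in the paper come from nonlinear Perron-Frobenius theory, not from this iteration.

```python
# Fixed-point iteration on a small two-layer network with nonnegative parameters.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.uniform(0, 0.5, size=(4, 4))       # nonnegative weights
W2 = rng.uniform(0, 0.5, size=(4, 4))
b1 = rng.uniform(0, 0.1, size=4)
b2 = rng.uniform(0, 0.1, size=4)

def f(x):
    """A two-layer nonnegative network mapping nonnegative vectors to nonnegative vectors."""
    h = np.tanh(W1 @ x + b1)                 # monotone activation, nonnegative on [0, inf)
    return np.tanh(W2 @ h + b2)

x = np.zeros(4)
for _ in range(200):
    x = f(x)
print(x, np.linalg.norm(f(x) - x))           # approximate fixed point and its residual
```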

Let $\mathbb{F}_q$ be a finite field of size $q$ and $\mathbb{F}_q^*$ the set of non-zero elements of $\mathbb{F}_q$. In this paper, we study a class of twisted generalized Reed-Solomon codes $C_\ell(D, k, \eta, \vec{v})\subset \mathbb{F}_q^n$ generated by the following matrix \[ \left(\begin{array}{cccc} v_{1} & v_{2} & \cdots & v_{n} \\ v_{1} \alpha_{1} & v_{2} \alpha_{2} & \cdots & v_{n} \alpha_{n} \\ \vdots & \vdots & \ddots & \vdots \\ v_{1} \alpha_{1}^{\ell-1} & v_{2} \alpha_{2}^{\ell-1} & \cdots & v_{n} \alpha_{n}^{\ell-1} \\ v_{1} \alpha_{1}^{\ell+1} & v_{2} \alpha_{2}^{\ell+1} & \cdots & v_{n} \alpha_{n}^{\ell+1} \\ \vdots & \vdots & \ddots & \vdots \\ v_{1} \alpha_{1}^{k-1} & v_{2} \alpha_{2}^{k-1} & \cdots & v_{n} \alpha_{n}^{k-1} \\ v_{1}\left(\alpha_{1}^{\ell}+\eta\alpha_{1}^{q-2}\right) & v_{2}\left(\alpha_{2}^{\ell}+ \eta \alpha_{2}^{q-2}\right) &\cdots & v_{n}\left(\alpha_{n}^{\ell}+\eta\alpha_{n}^{q-2}\right) \end{array}\right) \] where $0\leq \ell\leq k-1,$ the evaluation set $D=\{\alpha_{1},\alpha_{2},\cdots, \alpha_{n}\}\subseteq \mathbb{F}_q^*$, the scaling vector $\vec{v}=(v_1,v_2,\cdots,v_n)\in (\mathbb{F}_q^*)^n$ and $\eta\in\mathbb{F}_q^*$. The minimum distance and dual code of $C_\ell(D, k, \eta, \vec{v})$ are determined. For the special case $\ell=k-1,$ a necessary and sufficient condition for $C_{k-1}(D, k, \eta, \vec{v})$ to be self-dual is given. We also show that the code is MDS or near-MDS, and present a complete classification of when it is MDS and when it is near-MDS.
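The generator matrix above is easy to instantiate numerically. Below is a sketch that builds it over a small prime field with plain modular arithmetic; the parameters ($q=7$, $k=3$, $\ell=1$, $\eta=2$, $D=\mathbb{F}_7^*$, $\vec{v}=(1,\dots,1)$) are illustrative choices, and no MDS/near-MDS property is checked here.

```python
# Build the twisted GRS generator matrix over GF(q) for a small prime q.
import numpy as np

q, k, ell, eta = 7, 3, 1, 2
alphas = [1, 2, 3, 4, 5, 6]                  # evaluation set D, a subset of F_q^*
v = [1, 1, 1, 1, 1, 1]                       # scaling vector in (F_q^*)^n

rows = []
for j in list(range(ell)) + list(range(ell + 1, k)):
    rows.append([(vi * pow(a, j, q)) % q for vi, a in zip(v, alphas)])
# Twisted row: v_i * (alpha_i^ell + eta * alpha_i^(q-2)).
rows.append([(vi * (pow(a, ell, q) + eta * pow(a, q - 2, q))) % q for vi, a in zip(v, alphas)])

G = np.array(rows)
print(G.shape)                               # (k, n)
print(G)
```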

Given a graph function, defined on an arbitrary set of edge weights and node features, does there exist a Graph Neural Network (GNN) whose output is identical to the graph function? In this paper, we fully answer this question and characterize the class of graph problems that can be represented by GNNs. We identify an algebraic condition, in terms of the permutation of edge weights and node features, which proves to be necessary and sufficient for a graph problem to lie within the reach of GNNs. Moreover, we show that this condition can be efficiently verified by checking quadratically many constraints. Note that our refined characterization of the expressive power of GNNs is orthogonal to existing theoretical results showing equivalence between GNNs and the Weisfeiler-Lehman graph isomorphism heuristic. For instance, our characterization implies that many natural graph problems, such as min-cut value, max-flow value, and max-clique size, can be represented by a GNN. In contrast, and rather surprisingly, there exist very simple graphs for which no GNN can correctly find the length of the shortest paths between all nodes. Note that finding shortest paths is one of the most classical problems in Dynamic Programming (DP). Thus, the aforementioned negative example highlights the misalignment between DP and GNN, even though (conceptually) they follow very similar iterative procedures. Finally, we support our theoretical results by experimental simulations.
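The symmetry the algebraic condition is phrased around is permutation equivariance. The sketch below defines one generic message-passing layer in numpy and verifies numerically that permuting the nodes of the input graph permutes the output in the same way; it illustrates only this underlying symmetry, not the paper's condition or its verification procedure.

```python
# One message-passing layer plus a permutation-equivariance check.
import numpy as np

rng = np.random.default_rng(0)
W_self, W_nbr = rng.normal(size=(3, 3)), rng.normal(size=(3, 3))

def gnn_layer(A, X):
    """One layer: combine each node's features with the sum of its neighbours' features."""
    return np.maximum(X @ W_self + A @ X @ W_nbr, 0.0)

n = 5
A = (rng.random((n, n)) < 0.4).astype(float); A = np.triu(A, 1); A = A + A.T
X = rng.normal(size=(n, 3))

perm = rng.permutation(n)
P = np.eye(n)[perm]
lhs = gnn_layer(P @ A @ P.T, P @ X)          # permute the graph first, then apply the layer
rhs = P @ gnn_layer(A, X)                    # apply the layer first, then permute the output
print(np.allclose(lhs, rhs))                 # True: the layer is permutation-equivariant
```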

It is currently known how to characterize functions that neural networks can learn with SGD for two extremal parametrizations: neural networks in the linear regime, and neural networks with no structural constraints. However, for the main parametrization of interest (non-linear but regular networks) no tight characterization has yet been achieved, despite significant developments. We take a step in this direction by considering depth-2 neural networks trained by SGD in the mean-field regime. We consider functions on binary inputs that depend on a latent low-dimensional subspace (i.e., a small number of coordinates). This regime is of interest since it is poorly understood how neural networks routinely tackle high-dimensional datasets and adapt to latent low-dimensional structure without suffering from the curse of dimensionality. Accordingly, we study SGD-learnability with $O(d)$ sample complexity in a large ambient dimension $d$. Our main results characterize a hierarchical property, the "merged-staircase property", that is both necessary and nearly sufficient for learning in this setting. We further show that non-linear training is necessary: for this class of functions, linear methods on any feature map (e.g., the NTK) are not capable of learning efficiently. The key tools are a new "dimension-free" dynamics approximation result that applies to functions defined on a latent space of low-dimension, a proof of global convergence based on polynomial identity testing, and an improvement of lower bounds against linear methods for non-almost orthogonal functions.
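As a rough illustration of the flavour of the merged-staircase property (paraphrased; the precise definition should be taken from the paper): in
\[
f(z) = z_1 + z_1 z_2 + z_1 z_2 z_3,
\]
each successive monomial introduces at most one coordinate not already appearing in the earlier ones, whereas the single monomial
\[
g(z) = z_1 z_2 z_3
\]
introduces three new coordinates at once and so fails this staircase-type structure; functions of the latter kind are the ones ruled out of $O(d)$-sample SGD-learnability by the necessity part of the result.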

We consider Gaussian measures $\mu, \tilde{\mu}$ on a separable Hilbert space, with fractional-order covariance operators $A^{-2\beta}$ resp. $\tilde{A}^{-2\tilde{\beta}}$, and derive necessary and sufficient conditions on $A, \tilde{A}$ and $\beta, \tilde{\beta} > 0$ for I. equivalence of the measures $\mu$ and $\tilde{\mu}$, and II. uniform asymptotic optimality of linear predictions for $\mu$ based on the misspecified measure $\tilde{\mu}$. These results hold, e.g., for Gaussian processes on compact metric spaces. As an important special case, we consider the class of generalized Whittle-Mat\'ern Gaussian random fields, where $A$ and $\tilde{A}$ are elliptic second-order differential operators, formulated on a bounded Euclidean domain $\mathcal{D}\subset\mathbb{R}^d$ and augmented with homogeneous Dirichlet boundary conditions. Our results explain why the predictive performance of stationary and non-stationary models in spatial statistics is often comparable, and provide a crucial first step in deriving consistency results for parameter estimation of generalized Whittle-Mat\'ern fields.
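The phenomenon in II. can be seen in a toy one-dimensional computation, far from the paper's operator-theoretic setting: simple kriging with an exponential covariance (Mat\'ern with $\nu = 1/2$) under a misspecified range parameter typically produces predictions very close to those under the true range once observations are reasonably dense. The ranges 0.3 (true) and 0.6 (misspecified) below are arbitrary illustrative choices.

```python
# Kriging under a correctly specified vs. misspecified exponential covariance.
import numpy as np

def kriging_predict(x_obs, y_obs, x_new, rho, sigma2=1.0, nugget=1e-8):
    """Simple kriging predictor under cov(s, t) = sigma2 * exp(-|s - t| / rho)."""
    K = sigma2 * np.exp(-np.abs(x_obs[:, None] - x_obs[None, :]) / rho)
    k_star = sigma2 * np.exp(-np.abs(x_new[:, None] - x_obs[None, :]) / rho)
    return k_star @ np.linalg.solve(K + nugget * np.eye(len(x_obs)), y_obs)

rng = np.random.default_rng(0)
x_obs = np.sort(rng.uniform(0, 1, 40))
x_new = np.linspace(0, 1, 11)

# Data drawn from the "true" model with range 0.3.
K_true = np.exp(-np.abs(x_obs[:, None] - x_obs[None, :]) / 0.3)
y_obs = np.linalg.cholesky(K_true + 1e-8 * np.eye(40)) @ rng.normal(size=40)

pred_true = kriging_predict(x_obs, y_obs, x_new, rho=0.3)
pred_miss = kriging_predict(x_obs, y_obs, x_new, rho=0.6)   # misspecified range
print(np.max(np.abs(pred_true - pred_miss)))                 # typically small
```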

This paper presents a safety-aware learning framework that employs an adaptive model learning method together with barrier certificates for systems with possibly nonstationary agent dynamics. To extract the dynamic structure of the model, we use a sparse optimization technique, and the resulting model is then used in combination with control barrier certificates, which constrain feedback controllers only when safety is about to be violated. Under some mild assumptions, solutions to the constrained feedback-controller optimization are guaranteed to be globally optimal, and the monotonic improvement of a feedback controller is thus ensured. In addition, we reformulate the (action-)value function approximation to make any kernel-based nonlinear function estimation method applicable. We then employ a state-of-the-art kernel adaptive filtering technique for the (action-)value function approximation. The resulting framework is verified experimentally on a brushbot, whose dynamics are unknown and highly complex.
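For intuition on the barrier-certificate constraint, here is a minimal "safety filter" sketch for a single-integrator toy system ($\dot{x} = u$), with the one-constraint quadratic program solved in closed form; the obstacle, radius, and gain below are illustrative, and the paper's sparse model learning and kernel adaptive filtering components are not reproduced.

```python
# Control-barrier safety filter for a single integrator: minimally modify the nominal input.
import numpy as np

def safety_filter(x, u_nom, obstacle, radius, alpha=1.0):
    """Keep h(x) = ||x - obstacle||^2 - radius^2 nonnegative by enforcing dh/dt + alpha*h >= 0."""
    h = np.dot(x - obstacle, x - obstacle) - radius ** 2
    grad_h = 2 * (x - obstacle)
    # Constraint: grad_h . u + alpha * h >= 0; project u_nom onto it only if violated.
    if grad_h @ u_nom + alpha * h >= 0:
        return u_nom
    return u_nom + (-(alpha * h) - grad_h @ u_nom) / (grad_h @ grad_h) * grad_h

x = np.array([1.2, 0.0])                     # current state, near an obstacle at the origin
u_nom = np.array([-1.0, 0.0])                # nominal controller pushes toward the obstacle
print(safety_filter(x, u_nom, obstacle=np.zeros(2), radius=1.0))
```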
