
There are exactly two non-commutative rings of order $4$, namely $E = \langle a, b ~\vert ~ 2a = 2b = 0, a^2 = a, b^2 = b, ab= a, ba = b\rangle$ and its opposite ring $F$; both are non-unital. A subset $D$ of $E^m$ is defined with the help of simplicial complexes and used to construct linear left-$E$-codes $C^L_D=\{(v\cdot d)_{d\in D} : v\in E^m\}$ and right-$E$-codes $C^R_D=\{(d\cdot v)_{d\in D} : v\in E^m\}$. We study the binary codes obtained from these via a Gray map and compute the weight distributions of all of them. We obtain infinite families of codes that are optimal with respect to the Griesmer bound. Most of the binary codes constructed here satisfy the Ashikhmin-Barg sufficient condition for minimality. All the binary codes in this article are few-weight codes, and they are self-orthogonal under certain mild conditions. This is the first attempt to study the structure of linear codes over non-unital, non-commutative rings using simplicial complexes.
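As a quick sanity check of these defining relations, the following Python sketch (the encoding of elements is ours, purely for illustration) builds the multiplication rule of $E$ and verifies that the ring is non-commutative, idempotent on the generators, and non-unital.

```python
# Sanity check (ours, for illustration) of the defining relations of the
# non-unital, non-commutative ring E = <a, b | 2a = 2b = 0, a^2 = a,
# b^2 = b, ab = a, ba = b> of order 4.  The element x*a + y*b is stored
# as the pair (x, y); characteristic 2 makes addition coordinate-wise XOR.
from itertools import product

def mul(u, v):
    # (x1*a + y1*b)(x2*a + y2*b) expands, via the relations, to
    # (x2 + y2) * (x1*a + y1*b): the product only sees v modulo (a + b).
    s = (v[0] + v[1]) % 2
    return (u[0] * s, u[1] * s)

E = list(product((0, 1), repeat=2))         # (0,0)=0, (1,0)=a, (0,1)=b, (1,1)=a+b
a, b = (1, 0), (0, 1)

assert mul(a, b) == a and mul(b, a) == b    # ab = a, ba = b: non-commutative
assert mul(a, a) == a and mul(b, b) == b    # a and b are idempotent
# Non-unital: no element of E acts as a two-sided identity.
assert not any(all(mul(e, x) == x and mul(x, e) == x for x in E) for e in E)
print("E is non-commutative and non-unital, as claimed.")
```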

Related content

The interpretability of neural networks and their underlying theoretical behavior remain an open field of study even after the great success of their practical applications, particularly with the emergence of deep learning. In this work, NN2Poly is proposed: a theoretical approach for obtaining an explicit polynomial model that accurately represents an already trained fully-connected feed-forward artificial neural network (a multilayer perceptron or MLP). This approach extends a previous idea in the literature, which was limited to single-hidden-layer networks, to arbitrarily deep MLPs in both regression and classification tasks. It works by applying a Taylor expansion to the activation function at each layer and then using several combinatorial properties to calculate the coefficients of the desired polynomials. The main computational challenges of the method are discussed, together with ways to overcome them by imposing certain constraints during the training phase. Finally, simulation experiments as well as an application to a real data set demonstrate the effectiveness of the proposed method.
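The following minimal sketch (ours; it illustrates the core idea on a single tanh neuron rather than the NN2Poly algorithm itself) shows how a Taylor expansion of the activation, combined with combinatorial expansion of the pre-activation, yields explicit polynomial coefficients.

```python
# Minimal illustration (ours, not the NN2Poly algorithm itself) of the
# core idea: Taylor-expand the activation and expand the composition
# to obtain an explicit polynomial surrogate of a trained neuron.
import math
import sympy as sp

x1, x2 = sp.symbols("x1 x2")
w1, w2, b = 0.8, -0.5, 0.1                  # "trained" weights (made up)
z = w1 * x1 + w2 * x2 + b                   # pre-activation

# Degree-3 Taylor polynomial of tanh around 0, then expand in x1, x2.
t = sp.symbols("t")
taylor = sp.series(sp.tanh(t), t, 0, 4).removeO()   # t - t**3/3
poly = sp.expand(taylor.subs(t, z))
print(sp.Poly(poly, x1, x2).as_dict())      # explicit polynomial coefficients

# Near the expansion point the polynomial tracks the neuron closely.
for p in [(0.1, 0.2), (-0.2, 0.1)]:
    exact = math.tanh(w1 * p[0] + w2 * p[1] + b)
    approx = float(poly.subs({x1: p[0], x2: p[1]}))
    print(p, exact, approx)
```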

For various $2\leq n,m \leq 6$, we propose new algorithms for multiplying an $n\times m$ matrix by an $m \times 6$ matrix over a possibly noncommutative coefficient ring.
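The paper's new schemes are not reproduced here, but Strassen's classical $2\times 2$ scheme illustrates the key constraint such algorithms must respect: every one of the seven products keeps its left factor from the first matrix and its right factor from the second, so the scheme remains valid over a noncommutative coefficient ring. A small verification (ours), with $2\times 2$ integer matrices standing in for noncommutative ring elements:

```python
# Strassen's classical 2x2 scheme (not one of the paper's new schemes):
# every product keeps the left factor from A and the right factor from B,
# so it is correct over any, possibly noncommutative, coefficient ring.
import numpy as np

rng = np.random.default_rng(0)
A = [[rng.integers(-3, 4, (2, 2)) for _ in range(2)] for _ in range(2)]
B = [[rng.integers(-3, 4, (2, 2)) for _ in range(2)] for _ in range(2)]

def strassen2x2(A, B):
    (a11, a12), (a21, a22) = A
    (b11, b12), (b21, b22) = B
    m1 = (a11 + a22) @ (b11 + b22)
    m2 = (a21 + a22) @ b11
    m3 = a11 @ (b12 - b22)
    m4 = a22 @ (b21 - b11)
    m5 = (a11 + a12) @ b22
    m6 = (a21 - a11) @ (b11 + b12)
    m7 = (a12 - a22) @ (b21 + b22)
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4,           m1 - m3 + m6]]

naive = [[sum(A[i][k] @ B[k][j] for k in range(2)) for j in range(2)]
         for i in range(2)]
fast = strassen2x2(A, B)
assert all(np.array_equal(naive[i][j], fast[i][j])
           for i in range(2) for j in range(2))
print("7-multiplication scheme agrees with the naive product.")
```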

The concept class of low-degree polynomial threshold functions (PTFs) plays a fundamental role in machine learning. In this paper, we study PAC learning of $K$-sparse degree-$d$ PTFs on $\mathbb{R}^n$, where any such concept depends only on $K$ out of $n$ attributes of the input. Our main contribution is a new algorithm that runs in time $({nd}/{\epsilon})^{O(d)}$ and, under the Gaussian marginal distribution, PAC learns the class up to error rate $\epsilon$ with $O(\frac{K^{4d}}{\epsilon^{2d}} \cdot \log^{5d} n)$ samples, even when an $\eta \leq O(\epsilon^d)$ fraction of them are corrupted by the nasty noise of Bshouty et al. (2002), possibly the strongest corruption model. Prior to this work, attribute-efficient robust algorithms had been established only for the special case of sparse homogeneous halfspaces. Our key ingredients are: 1) a structural result that translates the attribute sparsity to a sparsity pattern of the Chow vector under the basis of Hermite polynomials, and 2) a novel attribute-efficient robust Chow vector estimation algorithm which uses exclusively a restricted Frobenius norm to either certify a good approximation or to validate a sparsity-induced degree-$2d$ polynomial as a filter to detect corrupted samples.
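A small numerical illustration (ours, not the paper's algorithm) of the first ingredient: for a sparse PTF under the Gaussian distribution, the Chow vector in the Hermite basis is itself sparse, supported on the few relevant attributes.

```python
# Structural idea, numerically: for the 2-sparse degree-2 PTF
# f = sign(x0 * x1) on R^10, the Chow coefficients E[f(x) x_i x_j]
# under the Gaussian are ~2/pi for (i,j) = (0,1) and ~0 elsewhere.
import numpy as np

rng = np.random.default_rng(1)
n, N = 10, 200_000
X = rng.standard_normal((N, n))
f = np.sign(X[:, 0] * X[:, 1])              # 2-sparse degree-2 PTF

for i in range(4):
    for j in range(i + 1, 4):
        c = np.mean(f * X[:, i] * X[:, j])  # degree-2 cross Chow coefficient
        print(f"chow[{i},{j}] = {c:+.3f}")  # ~ 0.637 for (0,1), ~0 otherwise
```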

Structural balance theory studies stability in networks. Given an $n$-vertex complete graph $G=(V,E)$ whose edges are labeled positive or negative, the graph is considered \emph{balanced} if every triangle either consists of three positive edges (three mutual ``friends''), or one positive edge and two negative edges (two ``friends'' with a common ``enemy''). From a computational perspective, structural balance turns out to be a special case of correlation clustering with the number of clusters at most two. The two main algorithmic problems of interest are: $(i)$ detecting whether a given graph is balanced, or $(ii)$ finding a partition that approximates the \emph{frustration index}, i.e., the minimum number of edge flips that make the graph balanced. We study these problems in the streaming model, where edges arrive one by one, and focus on \emph{memory efficiency}. We provide randomized single-pass algorithms for: $(i)$ determining whether an input graph is balanced with $O(\log{n})$ memory, and $(ii)$ finding a partition that induces a $(1 + \varepsilon)$-approximation to the frustration index with $O(n \cdot \text{polylog}(n))$ memory. We further provide several new lower bounds, complementing different aspects of our algorithms such as the need for randomization or approximation. To obtain our main results, we develop a method using pseudorandom generators (PRGs) to sample edges between independently-chosen \emph{vertices} in graph streaming. Furthermore, our algorithm that approximates the frustration index improves the running time of the state-of-the-art correlation clustering with two clusters (Giotis-Guruswami algorithm [SODA 2006]) from $n^{O(1/\varepsilon^2)}$ to $O(n^2\log^3{n}/\varepsilon^2 + n\log n \cdot (1/\varepsilon)^{O(1/\varepsilon^4)})$ time for $(1+\varepsilon)$-approximation. These results may be of independent interest.
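The $O(\log n)$-memory streaming test is beyond a short snippet, but the underlying offline characterization is simple: a signed complete graph is balanced iff its vertices split into two camps with positive edges inside camps and negative edges across, which a signed BFS $2$-colouring detects. A sketch (ours, for intuition):

```python
# Offline balance check (ours, for intuition; not the streaming algorithm):
# 2-colour vertices so that positive edges stay within a camp and negative
# edges cross camps; a colouring contradiction means the graph is unbalanced.
from collections import deque

def is_balanced(n, signed_edges):
    adj = [[] for _ in range(n)]
    for u, v, sign in signed_edges:         # sign is +1 or -1
        adj[u].append((v, sign))
        adj[v].append((u, sign))
    camp = [None] * n
    for s in range(n):
        if camp[s] is not None:
            continue
        camp[s] = 0
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v, sign in adj[u]:
                want = camp[u] if sign > 0 else 1 - camp[u]
                if camp[v] is None:
                    camp[v] = want
                    queue.append(v)
                elif camp[v] != want:
                    return False            # some triangle violates balance
    return True

# Two friends with a common enemy: balanced.
print(is_balanced(3, [(0, 1, +1), (0, 2, -1), (1, 2, -1)]))   # True
# Three mutual enemies: unbalanced.
print(is_balanced(3, [(0, 1, -1), (0, 2, -1), (1, 2, -1)]))   # False
```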

How to better reduce measurement variability and bias introduced by subjectivity in crowdsourced labelling remains an open question. We introduce a theoretical framework for understanding how random error and measurement bias enter into crowdsourced annotations of subjective constructs. We then propose a pipeline that combines pairwise comparison labelling with Elo scoring, and demonstrate that it outperforms the ubiquitous majority-voting method in reducing both types of measurement error. To assess the performance of the labelling approaches, we constructed an agent-based model of crowdsourced labelling that lets us introduce different types of subjectivity into the tasks. We find that under most conditions with task subjectivity, the comparison approach produced higher $F_1$ scores. Further, the comparison approach is less susceptible to inflating bias, which majority voting tends to do. To facilitate applications, we show with simulated and real-world data that the number of random comparisons required for the same classification accuracy scales log-linearly, $O(N \log N)$, in the number of labelled items. We also implemented the Elo system as an open-source Python package.
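A minimal sketch (ours; parameter choices such as $K=32$ and the initial rating are conventional Elo defaults, not the paper's) of the comparison-plus-Elo stage: noisy pairwise judgements update item scores, and roughly $O(N \log N)$ random comparisons already recover the latent ranking well.

```python
# Sketch (ours) of the pairwise-comparison + Elo pipeline: annotators make
# noisy "which item ranks higher?" judgements, Elo aggregates them into
# item scores, and the scores are then ranked or thresholded into labels.
import math, random

def elo_update(r_winner, r_loser, k=32.0):
    expected = 1.0 / (1.0 + 10 ** ((r_loser - r_winner) / 400.0))
    return r_winner + k * (1 - expected), r_loser - k * (1 - expected)

random.seed(0)
truth = {f"item{i}": i for i in range(20)}  # latent quality 0..19
ratings = {item: 1000.0 for item in truth}
items = list(truth)

# A multiple of N log N random comparisons, with subjectivity as noise.
for _ in range(int(20 * math.log(20)) * 10):
    a, b = random.sample(items, 2)
    noisy_a = truth[a] + random.gauss(0, 3)
    noisy_b = truth[b] + random.gauss(0, 3)
    winner, loser = (a, b) if noisy_a > noisy_b else (b, a)
    ratings[winner], ratings[loser] = elo_update(ratings[winner], ratings[loser])

print(sorted(ratings, key=ratings.get)[:5])  # lowest-quality items first
```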

Graph Neural Networks (GNNs) typically operate by message-passing, where the state of a node is updated based on the information received from its neighbours. Most message-passing models act as graph convolutions, where features are mixed by a shared, linear transformation before being propagated over the edges. On node-classification tasks, graph convolutions have been shown to suffer from two limitations: poor performance on heterophilic graphs, and over-smoothing. It is a common belief that both phenomena occur because such models behave as low-pass filters, meaning that the Dirichlet energy of the features decreases along the layers, incurring a smoothing effect that ultimately makes the features indistinguishable. In this work, we rigorously prove that simple graph-convolutional models can actually enhance high frequencies and even lead to an asymptotic behaviour we refer to as over-sharpening, the opposite of over-smoothing. We do so by showing that linear graph convolutions with symmetric weights minimize a multi-particle energy that generalizes the Dirichlet energy; in this setting, the weight matrices induce edge-wise attraction (repulsion) through their positive (negative) eigenvalues, thereby controlling whether the features are being smoothed or sharpened. We also extend the analysis to non-linear GNNs, and demonstrate that some existing time-continuous GNNs are instead always dominated by the low frequencies. Finally, we validate our theoretical findings through ablations and real-world experiments.
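A numerical companion (ours) to this dichotomy: for the residual linear convolution $X \leftarrow X + \tau \hat{A} X W$ with symmetric $W = wI$, a positive $w$ lets the lowest graph frequency dominate (the Dirichlet energy tends to $0$), while a negative $w$ amplifies the highest frequency (over-sharpening).

```python
# Over-smoothing vs. over-sharpening (ours): residual linear graph
# convolution X <- X + tau * A_hat @ X * w on a cycle graph. The sign of
# the (scalar, symmetric) weight decides which graph frequency dominates.
import numpy as np

n = 10                                      # cycle graph C_10 (bipartite)
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
A_hat = A / 2.0                             # degree-normalised adjacency
L = np.eye(n) - A_hat                       # normalised Laplacian

def rayleigh(X):                            # normalised Dirichlet energy
    return np.trace(X.T @ L @ X) / np.trace(X.T @ X)

rng = np.random.default_rng(0)
for w in (+1.0, -1.0):
    X = rng.standard_normal((n, 3))
    for _ in range(200):
        X = X + 0.2 * (A_hat @ X) * w       # W = w * I, tau = 0.2
        X /= np.linalg.norm(X)              # rescale for numerical stability
    print(f"w = {w:+}: energy -> {rayleigh(X):.3f}")
# w = +1 smooths (energy -> 0); w = -1 sharpens (energy -> 2, the top of
# the normalised-Laplacian spectrum on a bipartite graph).
```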

Predictive parity (PP), also known as sufficiency, is a core definition of algorithmic fairness, essentially stating that model outputs must have the same interpretation of expected outcomes regardless of group. Testing and satisfying PP is especially important in settings where model scores are interpreted by humans or directly gate access to opportunity, such as healthcare or banking. Solutions for PP violations have primarily been studied through the lens of model calibration. However, we find that existing calibration-based tests and mitigation methods are designed for independent data, an assumption that often fails in large-scale applications such as social media or medical testing. In this work, we address this issue by developing a statistically rigorous non-parametric regression-based test for PP with dependent observations. We then apply our test to illustrate that PP testing can vary significantly under the two assumptions. Lastly, we provide a mitigation method: a minimally biased post-processing transformation that achieves PP.
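A toy check (ours; it assumes i.i.d. data, precisely the assumption the paper's test removes) makes the definition concrete: PP requires $\mathbb{E}[Y \mid \text{score}=s, \text{group}]$ to be the same function of $s$ for every group, so one can compare binned calibration curves across groups.

```python
# Naive i.i.d. PP check (ours, not the paper's dependent-data test):
# compare binned calibration curves E[Y | score in bin] across groups.
import numpy as np

rng = np.random.default_rng(0)
N = 50_000
group = rng.integers(0, 2, N)
score = rng.uniform(0, 1, N)
# Group 1's outcomes run hotter at the same score: a built-in PP violation.
p_true = np.where(group == 0, score, np.clip(score + 0.10, 0, 1))
y = rng.uniform(0, 1, N) < p_true

bins = np.linspace(0, 1, 11)
for g in (0, 1):
    mask = group == g
    idx = np.digitize(score[mask], bins) - 1
    curve = [y[mask][idx == b].mean() for b in range(10)]
    print(f"group {g}:", np.round(curve, 2))
# Divergent curves in the same score bin signal a PP violation.
```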

The Tensor Isomorphism problem (TI) has recently emerged as having connections to multiple areas of research within complexity and beyond, but the current best upper bound is essentially the brute-force algorithm. Being an algebraic problem, TI (or rather, proving that two tensors are non-isomorphic) lends itself very naturally to algebraic and semi-algebraic proof systems, such as the Polynomial Calculus (PC) and Sum of Squares (SoS). For its combinatorial cousin Graph Isomorphism, essentially optimal lower bounds are known for approaches based on PC and SoS (Berkholz & Grohe, SODA '17). Our main results are an $\Omega(n)$ lower bound on PC degree or SoS degree for Tensor Isomorphism, and a nontrivial upper bound for testing isomorphism of tensors of bounded rank. We also show that PC cannot perform basic linear algebra in sub-linear degree, such as comparing the rank of two matrices, or deriving $BA=I$ from $AB=I$. As linear algebra is a key tool for understanding tensors, we introduce a strictly stronger proof system, PC+Inv, which allows as derivation rules all substitution instances of the implication $AB=I \rightarrow BA=I$. We conjecture that even PC+Inv cannot solve TI in polynomial time, but we leave open the problem of proving lower bounds on PC+Inv for any system of equations, let alone those arising from TI. We also highlight many other open questions about proof complexity approaches to TI.
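To make the brute-force upper bound concrete: for tiny tensors the group action can be enumerated exactly. A toy instance (ours) over $\mathbb{F}_2$:

```python
# Brute-force tensor isomorphism (ours, toy scale): two 2x2x2 tensors over
# F_2 are isomorphic iff some (A, B, C) in GL(2,F_2)^3 maps one to the other.
import itertools
import numpy as np

def invertibles():                          # GL(2, F_2): 6 matrices
    for bits in itertools.product((0, 1), repeat=4):
        M = np.array(bits).reshape(2, 2)
        if (M[0, 0] * M[1, 1] - M[0, 1] * M[1, 0]) % 2 == 1:
            yield M

GL2 = list(invertibles())

def isomorphic(S, T):
    for A, B, C in itertools.product(GL2, repeat=3):   # 216 group elements
        moved = np.einsum("ai,bj,ck,ijk->abc", A, B, C, S) % 2
        if np.array_equal(moved, T):
            return True
    return False

S = np.zeros((2, 2, 2), dtype=int); S[0, 0, 0] = 1             # rank 1
T = np.zeros((2, 2, 2), dtype=int); T[1, 1, 1] = 1             # rank 1
W = np.zeros((2, 2, 2), dtype=int); W[0, 0, 0] = W[1, 1, 1] = 1  # rank 2
print(isomorphic(S, T))   # True: both rank-1 tensors lie in one orbit
print(isomorphic(S, W))   # False: tensor rank is an isomorphism invariant
```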

A long line of work over the past two decades has established close connections between several different pseudorandom objects and applications. These connections essentially show that an asymptotically optimal construction of one central object leads to asymptotically optimal solutions for all the others. However, despite considerable effort, previous works could get close but still lacked one final step to achieve truly asymptotically optimal constructions. In this paper we provide the last missing link, thus simultaneously achieving explicit, asymptotically optimal constructions and solutions for various well-studied extractors and applications that have been the subjects of long lines of research. Our results include: $(i)$ asymptotically optimal seeded non-malleable extractors, which in turn give two-source extractors for asymptotically optimal min-entropy $O(\log n)$, explicit constructions of $K$-Ramsey graphs on $N$ vertices with $K=\log^{O(1)} N$, and truly optimal privacy amplification protocols with an active adversary; $(ii)$ two-source non-malleable extractors and affine non-malleable extractors for some linear min-entropy with exponentially small error, which in turn give the first explicit constructions of non-malleable codes against $2$-split-state tampering and affine tampering with constant rate and \emph{exponentially} small error; $(iii)$ explicit extractors for affine sources, sumset sources, interleaved sources, and small-space sources that achieve asymptotically optimal min-entropy $O(\log n)$ or $2s+O(\log n)$ (for space-$s$ sources); and $(iv)$ an explicit function that requires strongly linear read-once branching programs of size $2^{n-O(\log n)}$, which is optimal up to the constant in $O(\cdot)$. Previously, even for standard read-once branching programs, the best known size lower bound for an explicit function was $2^{n-O(\log^2 n)}$.
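For context, a classical baseline (ours, not the paper's construction): the inner-product function is a two-source extractor for min-entropy rate above $1/2$, whereas the results above reach the asymptotically optimal $O(\log n)$. A quick empirical check on bit-fixing sources:

```python
# Classical baseline (ours, not the paper's construction): the inner
# product mod 2 extracts from two independent weak sources. Here each
# source has half its bits adversarially fixed and half uniform, and the
# output bit comes out essentially unbiased.
import numpy as np

rng = np.random.default_rng(0)
n, trials = 32, 20_000
fixed_x = rng.integers(0, 2, n // 2)        # adversarially fixed halves
fixed_y = rng.integers(0, 2, n // 2)

bits = []
for _ in range(trials):
    x = np.concatenate([fixed_x, rng.integers(0, 2, n - n // 2)])
    y = np.concatenate([rng.integers(0, 2, n - n // 2), fixed_y])
    bits.append(int(np.dot(x, y)) % 2)
print("bias of <x,y> mod 2:", abs(np.mean(bits) - 0.5))   # close to 0
```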

By using the notions of a $d$-embedding $\Gamma$ of a (canonical) subgeometry $\Sigma$ and of an exterior set with respect to the $h$-secant variety $\Omega_{h}(\mathcal{A})$ of a subset $\mathcal{A}$, $0 \leq h \leq n-1$, of the finite projective space $\mathrm{PG}(n-1,q^n)$, $n \geq 3$, in this article we construct a class of non-linear $(n,n,q;d)$-MRD codes for any $2 \leq d \leq n-1$. A code $\mathcal{C}_{\sigma,T}$ of this class, where $1\in T \subset \mathbb{F}_q^*$ and $\sigma$ is a generator of $\mathrm{Gal}(\mathbb{F}_{q^n}|\mathbb{F}_q)$, arises from a cone of $\mathrm{PG}(n-1,q^n)$ with vertex an $(n-d-2)$-dimensional subspace and base a maximum exterior set $\mathcal{E}$ with respect to $\Omega_{d-2}(\Gamma)$. We prove that the codes introduced in [Cossidente, A., Marino, G., Pavese, F.: Non-linear maximum rank distance codes. Des. Codes Cryptogr. 79, 597--609 (2016); Durante, N., Siciliano, A.: Non-linear maximum rank distance codes in the cyclic model for the field reduction of finite geometries. Electron. J. Comb. (2017); Donati, G., Durante, N.: A generalization of the normal rational curve in $\mathrm{PG}(d,q^n)$ and its associated non-linear MRD codes. Des. Codes Cryptogr. 86, 1175--1184 (2018)] are suitable punctured codes of $\mathcal{C}_{\sigma,T}$, and we completely settle the equivalence issue for this class by showing that $\mathcal{C}_{\sigma,T}$ is neither equivalent nor adjointly equivalent to the non-linear MRD codes $\mathcal{C}_{n,k,\sigma,I}$, $I \subseteq \mathbb{F}_q$, obtained in [Otal, K., \"Ozbudak, F.: Some new non-additive maximum rank distance codes. Finite Fields and Their Applications 50, 293--303 (2018)].
