
Subspace codes have important applications in random network coding. It is of interest to construct subspace codes whose sizes and minimum distances are simultaneously as large as possible. In particular, cyclic constant dimension subspace codes carry additional structure that can be exploited to make encoding and decoding more efficient. In this paper, we construct large cyclic constant dimension subspace codes with minimum distances $2k-2$ and $2k$. These codes are contained in $\mathcal{G}_q(n, k)$, where $\mathcal{G}_q(n, k)$ denotes the set of all $k$-dimensional subspaces of $\mathbb{F}_{q^n}$. Consequently, some results in \cite{FW}, \cite{NXG}, and \cite{ZT} are extended.
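
As background for the distance values $2k-2$ and $2k$: the subspace distance is $d(U,V) = \dim U + \dim V - 2\dim(U \cap V)$, so two distinct $k$-dimensional codewords are at distance $2k$ exactly when they intersect trivially, and at distance $2k-2$ when they meet in a line. A minimal Python sketch over $\mathbb{F}_2$ (the matrices below are illustrative, not from the paper):

```python
# Subspace distance d(U, V) = dim U + dim V - 2 dim(U ∩ V)
# for row spaces of binary matrices (q = 2).
import numpy as np

def rank_gf2(A):
    """Rank of a 0/1 matrix over GF(2) by Gaussian elimination."""
    A = np.array(A, dtype=np.uint8) % 2
    rank, rows, cols = 0, A.shape[0], A.shape[1]
    for col in range(cols):
        pivot = next((r for r in range(rank, rows) if A[r, col]), None)
        if pivot is None:
            continue
        A[[rank, pivot]] = A[[pivot, rank]]      # move pivot row up
        for r in range(rows):
            if r != rank and A[r, col]:
                A[r] ^= A[rank]                  # eliminate column entries
        rank += 1
    return rank

def subspace_distance(U, V):
    """Uses dim(U ∩ V) = dim U + dim V - dim(U + V)."""
    dU, dV = rank_gf2(U), rank_gf2(V)
    dUplusV = rank_gf2(np.vstack([U, V]))        # dim(U + V)
    return 2 * dUplusV - dU - dV

# Two 2-dimensional subspaces of F_2^4 meeting in a line: distance 2k-2 = 2.
U = [[1, 0, 0, 0], [0, 1, 0, 0]]
V = [[1, 0, 0, 0], [0, 0, 1, 0]]
print(subspace_distance(U, V))  # -> 2
```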

Related Content

In this paper we consider algebraic geometry (AG) codes: a class of codes constructed from algebraic curves (equivalently, from function fields), introduced by Goppa. These codes can be list-decoded using the famous Guruswami-Sudan (GS) list-decoder, but the genus $g$ of the underlying function field gives rise to a negative term in the decoding radius, which we call the genus penalty. In this article, we present a GS-like list-decoding algorithm for arbitrary AG codes, which we call the \emph{inseparable GS list-decoder}. Apart from the multiplicity parameter $s$ and designed list size $\ell$, common to the GS list-decoder, we introduce an inseparability exponent $e$. Choosing this exponent to be positive yields a list-decoder whose genus penalty is reduced by a factor of $1/p^e$ compared to the usual GS list-decoder, where $p$ is the characteristic. Our list-decoder can be executed in $\tilde{\mathcal{O}}(s\ell^{\omega}\mu^{\omega-1}p^e(n+g))$ field operations, where $n$ is the code length.
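
For orientation, a standard rough form of where the genus penalty enters (our notation, not the paper's): for a one-point AG code of length $n$, dimension $k$, and genus $g$, the designed distance is $d^* = n - k - g + 1$, and over all choices of $s$ and $\ell$ the GS decoding radius behaves like

\[
\tau_{\mathrm{GS}} \;\approx\; n - \sqrt{n\,(n - d^*)} \;=\; n - \sqrt{n\,(k + g - 1)},
\]

so the genus sits as a penalty under the square root; setting $g = 0$ recovers the Reed-Solomon (Johnson) radius $n - \sqrt{n(k-1)}$. The inseparable list-decoder shrinks this penalty by the factor $1/p^e$.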

In distributed storage systems, locally repairable codes (LRCs) are designed to reduce disk I/O and repair costs by enabling recovery of each code symbol from a small number of other symbols. To handle multiple node failures, $(r,\delta)$-LRCs were introduced to enable local recovery in the event of up to $\delta-1$ failed nodes. Constructing optimal $(r,\delta)$-LRCs has been a significant research topic over the past decade. In \cite{Luo2022}, Luo \emph{et al.} proposed a construction of linear codes using unions of projective subspaces within a projective space. Several new classes of Griesmer codes and distance-optimal codes were constructed, and some of them were proved to be alphabet-optimal $2$-LRCs. In this paper, we first modify the method of constructing linear codes in \cite{Luo2022} by considering a more general situation of intersecting projective subspaces. This modification enables us to construct good codes with more flexible parameters. Additionally, we present conditions under which the constructed linear codes qualify as Griesmer codes or achieve distance optimality. Next, we explore the locality of linear codes constructed by eliminating elements from a complete projective space. The novelty of our work lies in establishing $(2,p-2)$-, $(2,p-1)$-, or $(2,p)$-locality, in contrast to the previous literature, which only considered $2$-locality. Moreover, by combining an analysis of code parameters with the Cadambe-Mazumdar-like (C-M-like) bound for $(r,\delta)$-LRCs, we construct some alphabet-optimal $(2,\delta)$-LRCs which may or may not be Griesmer codes. Finally, we investigate the availability and alphabet-optimality of $(r,\delta)$-LRCs constructed from our modified framework.
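
Two standard benchmarks referenced above, stated here for the reader's convenience: the Griesmer bound for an $[n,k,d]_q$ linear code, and the Singleton-like bound for $(r,\delta)$-LRCs,

\[
n \;\ge\; \sum_{i=0}^{k-1} \left\lceil \frac{d}{q^i} \right\rceil,
\qquad
d \;\le\; n - k + 1 - \left( \left\lceil \frac{k}{r} \right\rceil - 1 \right)(\delta - 1).
\]

A code meeting the first bound with equality is a Griesmer code; a code meeting the second with equality is an optimal $(r,\delta)$-LRC.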

The Transposition Distance Problem (TDP) is a classical problem in genome rearrangements, which seeks the minimum number of transpositions needed to transform one linear chromosome into another, the two being represented by permutations $\pi$ and $\sigma$, respectively. This paper focuses on the equivalent problem of Sorting By Transpositions (SBT), where $\sigma$ is the identity permutation $\iota$. Specifically, we investigate palisades, a family of permutations that are "hard" to sort, as they require many transpositions above the celebrated lower bound devised by Bafna and Pevzner. By determining the transposition distance of palisades, we were able to provide the exact transposition diameter of $3$-permutations (TD3), a special subset of the symmetric group $S_n$ essential for the study of approximate solutions for SBT using the simplification technique. The exact value of TD3 had remained unknown since Elias and Hartman showed an upper bound for it. Another consequence of determining the transposition distance of palisades is that, using the Bafna-Pevzner lower bound, it is impossible to guarantee approximation ratios lower than $1.375$ when approximating SBT. This finding has significant implications for the study of SBT, as the problem has been the subject of intense research efforts for the past 25 years.
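
To fix ideas, a transposition $t(i,j,k)$ exchanges the two adjacent blocks $\pi_i \cdots \pi_{j-1}$ and $\pi_j \cdots \pi_{k-1}$ of a permutation, and the Bafna-Pevzner lower bound referred to above is $d(\pi) \ge (n + 1 - c_{\mathrm{odd}}(G(\pi)))/2$, where $c_{\mathrm{odd}}(G(\pi))$ counts the odd cycles of the cycle graph. A minimal Python sketch of the operation itself (indices and the example are illustrative):

```python
# Apply the transposition t(i, j, k), 1 <= i < j < k <= n+1, which swaps
# the adjacent blocks pi[i..j-1] and pi[j..k-1] (1-based positions).
def transposition(pi, i, j, k):
    """Return pi with blocks pi[i..j-1] and pi[j..k-1] exchanged."""
    p = list(pi)
    return p[:i-1] + p[j-1:k-1] + p[i-1:j-1] + p[k-1:]

pi = [4, 5, 1, 2, 3]
# Moving the block [4, 5] past [1, 2, 3] sorts pi in one transposition.
print(transposition(pi, 1, 3, 6))  # -> [1, 2, 3, 4, 5]
```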

We consider the problem of uncertainty quantification in change point regressions, where the signal can be piecewise polynomial of arbitrary but fixed degree. That is, we seek disjoint intervals which, uniformly at a given confidence level, must each contain a change point location. We propose a procedure based on performing local tests at a number of scales and locations on a sparse grid, which adapts to the choice of grid in the sense that choosing a sparser grid explicitly lowers the price paid for multiple testing. The procedure is fast, with computational complexity always of order $\mathcal{O}(n \log n)$ where $n$ is the length of the data, and optimal in the sense that, under certain mild conditions, every change point is detected with high probability and the widths of the returned intervals match the minimax localisation rates for the associated change point problem up to log factors. A detailed simulation study shows our procedure is competitive against state-of-the-art algorithms for similar problems. Our procedure is implemented in the R package ChangePointInference, available at https://github.com/gaviosha/ChangePointInference.
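
For the simplest (piecewise-constant, degree $0$) case, a local test on an interval typically amounts to thresholding a CUSUM statistic. A minimal sketch of such a test, with illustrative names and no claim to match the package's internals:

```python
# Max absolute CUSUM of x[s:e] over all split points s < b < e; a large
# value is evidence that the interval (s, e] contains a change point.
import numpy as np

def cusum(x, s, e):
    n = e - s
    best = 0.0
    for b in range(s + 1, e):
        nl, nr = b - s, e - b
        stat = np.sqrt(nl * nr / n) * abs(x[s:b].mean() - x[b:e].mean())
        best = max(best, stat)
    return best

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0, 1, 100), rng.normal(2, 1, 100)])
print(cusum(x, 0, 200))   # large -> the interval must contain a change
```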

We study the problem of decomposing a polynomial $p$ into a sum of $r$ squares by minimizing a quadratically penalized objective $f_p(\mathbf{u}) = \left\lVert \sum_{i=1}^r u_i^2 - p\right\rVert^2$. This objective is nonconvex and is equivalent to the rank-$r$ Burer-Monteiro factorization of a semidefinite program (SDP) encoding the sum of squares decomposition. We show that for all univariate polynomials $p$, if $r \ge 2$ then $f_p(\mathbf{u})$ has no spurious second-order critical points, showing that all local optima are also global optima. This is in contrast to previous work showing that for general SDPs, in addition to genericity conditions, $r$ has to be roughly the square root of the number of constraints (the degree of $p$) for there to be no spurious second-order critical points. Our proof uses tools from computational algebraic geometry and can be interpreted as constructing a certificate using the first- and second-order necessary conditions. We also show that by choosing a norm based on sampling equally-spaced points on the circle, the gradient $\nabla f_p$ can be computed in nearly linear time using fast Fourier transforms. Experimentally we demonstrate that this method has very fast convergence using first-order optimization algorithms such as L-BFGS, with near-linear scaling to million-degree polynomials.
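
The FFT trick can be sketched as follows: with the norm defined by averaging squared magnitudes over $N \ge 2d+1$ equally spaced points on the unit circle, $f_p$ is computable from $r+1$ FFTs. A minimal illustration (names are ours, not the paper's):

```python
# Evaluate f_p(u) = || sum_i u_i^2 - p ||^2, with the norm taken as the
# mean of squared magnitudes at N equally spaced points on the unit circle.
import numpy as np

def f_p(U, p):
    """U: r x (d+1) coefficient rows for u_1..u_r; p: length 2d+1 coeffs."""
    N = len(p)                       # enough samples to represent degree 2d
    Uh = np.fft.fft(U, n=N, axis=1)  # evaluate each u_i at N circle points
    resid = (Uh ** 2).sum(axis=0) - np.fft.fft(p, n=N)
    return np.mean(np.abs(resid) ** 2)

d, r = 3, 2
rng = np.random.default_rng(0)
u = rng.normal(size=(r, d + 1))
# Build p exactly as a sum of squares, so the objective vanishes.
p = sum(np.convolve(ui, ui) for ui in u)   # degree 2d, length 2d+1
print(f_p(u, p))   # ~ 0 up to floating point error
```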

The $\boldsymbol{\beta}$-model for random graphs is commonly used for representing pairwise interactions in a network with degree heterogeneity. Going beyond pairwise interactions, Stasi et al. (2014) introduced the hypergraph $\boldsymbol{\beta}$-model for capturing degree heterogeneity in networks with higher-order (multi-way) interactions. In this paper we initiate the rigorous study of the hypergraph $\boldsymbol{\beta}$-model with multiple layers, which allows for hyperedges of different sizes across the layers. To begin with, we derive the rates of convergence of the maximum likelihood (ML) estimate and establish their minimax rate optimality. We also derive the limiting distribution of the ML estimate and construct asymptotically valid confidence intervals for the model parameters. Next, we consider the goodness-of-fit problem in the hypergraph $\boldsymbol{\beta}$-model. Specifically, we establish the asymptotic normality of the likelihood ratio (LR) test under the null hypothesis, derive its detection threshold, and also its limiting power at the threshold. Interestingly, the detection threshold of the LR test turns out to be minimax optimal, that is, all tests are asymptotically powerless below this threshold. The theoretical results are further validated in numerical experiments. In addition to developing the theoretical framework for estimation and inference for hypergraph $\boldsymbol{\beta}$-models, the above results fill a number of gaps in the graph $\boldsymbol{\beta}$-model literature, such as the minimax optimality of the ML estimates and the non-null properties of the LR test, which, to the best of our knowledge, have not been studied before.
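
For reference, in the $m$-uniform hypergraph $\boldsymbol{\beta}$-model of Stasi et al., each potential hyperedge $e = \{i_1, \ldots, i_m\}$ is included independently with logistic probability (our notation; the multilayer version allows different sizes $m$ across layers):

\[
\mathbb{P}(e \text{ present}) \;=\; \frac{\exp\left(\beta_{i_1} + \cdots + \beta_{i_m}\right)}{1 + \exp\left(\beta_{i_1} + \cdots + \beta_{i_m}\right)},
\]

so the degree sequence is a sufficient statistic, and the ML estimate solves the moment equations matching expected degrees to observed degrees.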

For all the successes in verifying low-level, efficient, security-critical code, little has been said or studied about the structure, architecture and engineering of such large-scale proof developments. We present the design, implementation and evaluation of a set of language-based techniques that allow the programmer to modularly write and verify code at a high level of abstraction, while retaining control over the compilation process and producing high-quality, zero-overhead, low-level code suitable for integration into mainstream software. We implement our techniques within the F* proof assistant, and specifically its shallowly-embedded Low* toolchain that compiles to C. Through our evaluation, we establish that our techniques were critical in scaling the popular HACL* library past 100,000 lines of verified source code, and brought about significant gains in proof engineer productivity. The exposition of our methodology converges on one final, novel case study: the streaming API, a finicky API that has historically caused many bugs in high-profile software. Using our approach, we manage to capture the streaming semantics in a generic way, and apply it ``for free'' to over a dozen use-cases. Six of those have made it into the reference implementation of the Python programming language, replacing the previous CVE-ridden code.
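
The streaming API referred to above follows the classic init/update/finish discipline; Python's hashlib, whose reference implementation now incorporates verified HACL* code for several hash functions, illustrates the semantics that must be preserved: incremental feeding must agree with one-shot hashing, regardless of how the input is split. A short usage example:

```python
# The streaming invariant: hashing data incrementally, split at arbitrary
# boundaries, must produce the same digest as hashing it in one shot.
import hashlib

one_shot = hashlib.sha256(b"hello world").hexdigest()

h = hashlib.sha256()      # init
h.update(b"hello ")       # update with an arbitrary prefix...
h.update(b"world")        # ...then the rest
streamed = h.hexdigest()  # finish

assert one_shot == streamed
print(streamed)
```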

We consider the problem of discovering $K$ related Gaussian directed acyclic graphs (DAGs), where the involved graph structures share a consistent causal order and sparse unions of supports. Under the multi-task learning setting, we propose an $l_1/l_2$-regularized maximum likelihood estimator (MLE) for learning $K$ linear structural equation models. We theoretically show that the joint estimator, by leveraging data across related tasks, achieves better sample complexity for recovering the causal order (or topological order) than separate estimation. Moreover, the joint estimator is able to recover non-identifiable DAGs by estimating them together with some identifiable DAGs. Lastly, our analysis also establishes the consistency of union support recovery of the structures. To allow practical implementation, we design a continuous optimization problem whose optimizer coincides with the joint estimator and can be approximated efficiently by an iterative algorithm. We validate the theoretical analysis and the effectiveness of the joint estimator in experiments.
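
A hedged sketch of the typical shape of such a joint objective (notation is ours, not necessarily the paper's): with $B^{(k)}$ the coefficient matrix of the $k$-th linear SEM and $\ell_k$ its negative log-likelihood, the $l_1/l_2$ penalty couples each candidate edge across tasks,

\[
\widehat{B} \;=\; \arg\min_{B^{(1)}, \ldots, B^{(K)}} \; \sum_{k=1}^{K} \ell_k\!\left(B^{(k)}\right) \;+\; \lambda \sum_{i \neq j} \left\lVert \left( B^{(1)}_{ij}, \ldots, B^{(K)}_{ij} \right) \right\rVert_2,
\]

so an edge $(i,j)$ is either active somewhere in the union support or zeroed out in all $K$ tasks simultaneously, which is what drives the shared-support recovery.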

This book develops an effective theory approach to understanding deep neural networks of practical relevance. Beginning from a first-principles component-level picture of networks, we explain how to determine an accurate description of the output of trained networks by solving layer-to-layer iteration equations and nonlinear learning dynamics. A main result is that the predictions of networks are described by nearly-Gaussian distributions, with the depth-to-width aspect ratio of the network controlling the deviations from the infinite-width Gaussian description. We explain how these effectively-deep networks learn nontrivial representations from training and more broadly analyze the mechanism of representation learning for nonlinear models. From a nearly-kernel-methods perspective, we find that the dependence of such models' predictions on the underlying learning algorithm can be expressed in a simple and universal way. To obtain these results, we develop the notion of representation group flow (RG flow) to characterize the propagation of signals through the network. By tuning networks to criticality, we give a practical solution to the exploding and vanishing gradient problem. We further explain how RG flow leads to near-universal behavior and lets us categorize networks built from different activation functions into universality classes. Altogether, we show that the depth-to-width ratio governs the effective model complexity of the ensemble of trained networks. By using information-theoretic techniques, we estimate the optimal aspect ratio at which we expect the network to be practically most useful and show how residual connections can be used to push this scale to arbitrary depths. With these tools, we can learn in detail about the inductive bias of architectures, hyperparameters, and optimizers.
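
The criticality prescription has a one-line empirical check: in a deep ReLU network with weight variance $C_W/\text{width}$, the preactivation second moment is multiplied by $C_W/2$ per layer, so $C_W = 2$ (He initialization) is the critical choice. A toy sketch illustrating the phenomenon (not the book's code):

```python
# Signals vanish for C_W < 2, stay O(1) at criticality C_W = 2, and
# explode for C_W > 2, exponentially in depth.
import numpy as np

def signal_norm(C_W, depth=50, width=512, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.normal(size=width)
    for _ in range(depth):
        W = rng.normal(0, np.sqrt(C_W / width), size=(width, width))
        x = np.maximum(W @ x, 0.0)            # ReLU layer
    return np.sqrt(np.mean(x ** 2))

for C_W in (1.0, 2.0, 4.0):
    print(C_W, signal_norm(C_W))
# C_W=1 -> vanishes, C_W=2 -> stays O(1), C_W=4 -> explodes
```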

Multimodal sentiment analysis is a very active and growing field of research. A promising direction in this field is improving the multimodal fusion mechanism. We present a novel feature fusion strategy that proceeds hierarchically: it first fuses the modalities two at a time and only then fuses all three modalities. On multimodal sentiment analysis of individual utterances, our strategy outperforms conventional concatenation of features by 1%, which amounts to a 5% reduction in error rate. On utterance-level multimodal sentiment analysis of multi-utterance video clips, for which current state-of-the-art techniques incorporate contextual information from other utterances of the same clip, our hierarchical fusion gives up to a 2.4% improvement (almost 10% error rate reduction) over the currently used concatenation. The implementation of our method is publicly available as open-source code.
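
A minimal sketch of the hierarchical fusion idea: fuse the modality features pairwise first, then fuse the three bimodal vectors. The layer sizes and the use of simple dense layers are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def dense(x, out_dim):
    """Stand-in for a learned fully connected layer with tanh."""
    W = rng.normal(0, 0.1, size=(out_dim, x.size))
    return np.tanh(W @ x)

text, audio, video = rng.normal(size=100), rng.normal(size=73), rng.normal(size=100)

# Level 1: bimodal fusion, two modalities at a time.
ta = dense(np.concatenate([text, audio]), 64)
tv = dense(np.concatenate([text, video]), 64)
av = dense(np.concatenate([audio, video]), 64)

# Level 2: trimodal fusion of the three bimodal representations.
trimodal = dense(np.concatenate([ta, tv, av]), 64)
print(trimodal.shape)  # (64,) -- fed to the sentiment classifier
```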
