
Motivated by applications to noncoherent network coding, we study subspace codes defined by sets of linear cellular automata (CA). As a first remark, we show that a family of linear CA where the local rules have the same diameter -- and thus the associated polynomials have the same degree -- induces a Grassmannian code. Then, we prove that the minimum distance of such a code is determined by the maximum degree occurring among the pairwise greatest common divisors (GCD) of the polynomials in the family. Finally, we consider the setting where all such polynomials have the same GCD, and determine the cardinality of the corresponding Grassmannian code. As a particular case, we show that if all polynomials in the family are pairwise coprime, the resulting Grassmannian code has the highest possible minimum distance.
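
As a quick illustration (ours, not the paper's), the quantity the abstract identifies as controlling the minimum distance, namely the maximum degree among the pairwise GCDs of the local-rule polynomials, is straightforward to compute. The sketch below uses sympy over GF(2) on a hypothetical family of degree-3 rule polynomials; by the result stated above, a pairwise-coprime family (maximum GCD degree zero) attains the best minimum distance.

```python
# A minimal sketch (ours, not the paper's): given the local rules of a family
# of linear CA as polynomials over GF(2), compute the maximum degree among
# the pairwise GCDs, the quantity that, per the result above, governs the
# minimum distance of the induced Grassmannian code.
from itertools import combinations
from sympy import GF, Poly, gcd
from sympy.abc import x

# Hypothetical family of local-rule polynomials of equal degree (equal
# diameter), with coefficients in GF(2).
family = [
    Poly(x**3 + x + 1, x, domain=GF(2)),
    Poly(x**3 + x**2 + 1, x, domain=GF(2)),
    Poly(x**3 + x**2 + x + 1, x, domain=GF(2)),
]

max_gcd_deg = max(gcd(f, g).degree() for f, g in combinations(family, 2))
print("max degree among pairwise GCDs:", max_gcd_deg)
print("pairwise coprime:", max_gcd_deg == 0)  # coprime => best minimum distance
```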

Related Content

Most link prediction methods return estimates of the connection probability of missing edges in a graph. Such output can be used to rank the missing edges from most to least likely to be a true edge, but it does not directly provide a classification into true and non-existent edges. In this work, we consider the problem of identifying a set of true edges with a control of the false discovery rate (FDR). We propose a novel method based on high-level ideas from the literature on conformal inference. The graph structure induces intricate dependence in the data, which we carefully take into account, as this makes the setup different from the usual one in conformal inference, where exchangeability is assumed. The FDR control is empirically demonstrated for both simulated and real data.
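
For intuition, here is a generic sketch (ours, not the paper's method) of the pipeline the abstract refines: conformal-style p-values for candidate edges followed by Benjamini-Hochberg. It naively assumes exchangeable scores, which is precisely the assumption that the graph-induced dependence breaks and that the proposed method is designed to handle; all names and score distributions below are hypothetical.

```python
# Generic conformal p-values + Benjamini-Hochberg, assuming exchangeable
# scores (the paper's contribution is handling the graph dependence that
# breaks this assumption).
import numpy as np

def conformal_pvalues(cal_scores, test_scores):
    """p-value = (1 + #{calibration scores >= test score}) / (n_cal + 1)."""
    cal = np.sort(np.asarray(cal_scores))
    n = len(cal)
    ge = n - np.searchsorted(cal, test_scores, side="left")  # #{cal >= t}
    return (1.0 + ge) / (n + 1.0)

def benjamini_hochberg(pvals, alpha=0.1):
    """Return indices of rejected hypotheses at FDR level alpha."""
    p = np.asarray(pvals)
    m = len(p)
    order = np.argsort(p)
    below = p[order] <= alpha * np.arange(1, m + 1) / m
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    return order[:k]

# Toy usage: scores from a link predictor, higher = more likely a true edge.
rng = np.random.default_rng(0)
cal_scores = rng.normal(0, 1, 500)        # scores of known non-edges
test_scores = rng.normal(1.5, 1, 50)      # scores of candidate edges
rejected = benjamini_hochberg(conformal_pvalues(cal_scores, test_scores))
print(len(rejected), "edges declared true")
```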

In 1991 H\'ebrard introduced a factorization of words that turned out to be a powerful tool for the investigation of a word's scattered factors (also known as (scattered) subwords or subsequences). Based on this, Karandikar and Schnoebelen first introduced the notion of $k$-richness, and Barker et al. later introduced the notion of $k$-universality. In 2022 Fleischmann et al. presented a generalization of the arch factorization, obtained by intersecting the arch factorizations of a word and of its reverse. While those authors used this factorization merely for the investigation of shortest absent scattered factors, in this work we investigate the new $\alpha$-$\beta$-factorization in its own right. We characterize the famous Simon congruence of $k$-universal words in terms of $1$-universal words. Moreover, we apply these results to binary words. In this special case, we obtain a full characterization of the classes and calculate the index of the congruence. Lastly, we start investigating the ternary case, present a full list of possibilities for $\alpha\beta\alpha$-factors, and characterize their congruence.
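
For readers unfamiliar with the underlying notions, the following minimal sketch implements H\'ebrard's arch factorization (standard definitions, not the paper's $\alpha$-$\beta$-factorization): an arch is the shortest prefix of the remaining suffix that contains every letter of the alphabet, and a word is $k$-universal exactly when its arch factorization has at least $k$ arches.

```python
# Hebrard's arch factorization (standard definitions). The number of arches
# is the universality index: every length-k word over sigma occurs as a
# scattered factor iff there are at least k arches.
def arch_factorization(w, sigma):
    arches, seen, start = [], set(), 0
    for i, c in enumerate(w):
        seen.add(c)
        if seen == set(sigma):        # shortest factor seeing all letters
            arches.append(w[start:i + 1])
            start, seen = i + 1, set()
    return arches, w[start:]          # (arches, remainder)

arches, rest = arch_factorization("abaabbba", "ab")
print(arches, repr(rest))             # ['ab', 'aab', 'bba'] '' -> 3-universal
```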

We study variable-length codes for point-to-point discrete memoryless channels with noiseless unlimited-rate feedback that occurs in $L$ bursts. We term such codes variable-length bursty-feedback (VLBF) codes. Unlike classical codes with feedback after each transmitted code symbol, bursty feedback fits better with protocols that employ sparse feedback after a packet is sent and also with half-duplex end devices that cannot transmit and listen to the channel at the same time. We present a novel non-asymptotic achievability bound for VLBF codes with $L$ bursts of feedback over any discrete memoryless channel. We numerically evaluate the bound over the binary symmetric channel (BSC). We perform optimization over the time instances at which feedback occurs for both our own bound and Yavas et al.'s non-asymptotic achievability bound for variable-length stop-feedback (VLSF) codes, where only a single bit is sent at each feedback instance. Our results demonstrate the advantages of richer feedback: VLBF codes significantly outperform VLSF codes at short blocklengths, especially as the error probability $\epsilon$ decreases. Remarkably, for the BSC(0.11) and error probability $10^{-10}$, our VLBF code with $L=5$ and expected decoding time $N\leq 400$ outperforms both the achievability bound given by Polyanskiy et al. for VLSF codes with $L=\infty$ and our own VLBF code with $L=3$.

The small size, high dexterity, and intrinsic compliance of continuum robots (CRs) make them well suited for constrained environments. Solving the inverse kinematics (IK) problem, that is, finding robot joint configurations that satisfy desired position or pose queries, is a fundamental challenge in motion planning, control, and calibration for any robot structure. For CRs, the need to avoid obstacles in tightly confined workspaces greatly complicates the search for feasible IK solutions. Without an accurate initialization or multiple re-starts, existing algorithms often fail to find a solution. We present CIDGIKc (Convex Iteration for Distance-Geometric Inverse Kinematics for Continuum Robots), an algorithm that solves these nonconvex feasibility problems with a sequence of semidefinite programs whose objectives are designed to encourage low-rank minimizers. CIDGIKc is enabled by a novel distance-geometric parameterization of constant curvature segment geometry for CRs with extensible segments. The resulting IK formulation involves only quadratic expressions and can efficiently incorporate a large number of collision avoidance constraints. Our experimental results demonstrate solve success rates above 98% within complex, highly cluttered environments that existing algorithms cannot handle.
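
For context, the sketch below shows standard constant-curvature segment kinematics (a Webster-and-Jones-style parameterization, not CIDGIKc's distance-geometric or SDP machinery): the tip position of one extensible segment given its curvature, bending-plane angle, and arc length. CIDGIKc replaces such angle-based coordinates with inter-point distances so that every constraint becomes quadratic.

```python
# Standard constant-curvature segment kinematics (illustrative only): tip
# position of a single extensible segment with curvature kappa, bending-plane
# angle phi, and arc length ell, expressed in the segment's base frame.
import numpy as np

def segment_tip(kappa, phi, ell):
    if abs(kappa) < 1e-12:                       # straight-segment limit
        return np.array([0.0, 0.0, ell])
    r = (1.0 - np.cos(kappa * ell)) / kappa      # in-plane offset
    return np.array([r * np.cos(phi),
                     r * np.sin(phi),
                     np.sin(kappa * ell) / kappa])

print(segment_tip(kappa=1.0, phi=0.0, ell=np.pi / 2))  # quarter circle: [1, 0, 1]
```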

Entropic optimal transport (EOT) presents an effective and computationally viable alternative to unregularized optimal transport (OT), offering diverse applications for large-scale data analysis. In this work, we derive novel statistical bounds for empirical plug-in estimators of the EOT cost and show that their statistical performance, as a function of the entropy regularization parameter $\epsilon$ and the sample size $n$, depends only on the simpler of the two probability measures. For instance, under sufficiently smooth costs this yields the parametric rate $n^{-1/2}$ with factor $\epsilon^{-d/2}$, where $d$ is the minimum dimension of the two population measures. This confirms that empirical EOT also adheres to the lower complexity adaptation principle, a hallmark feature only recently identified for unregularized OT. As a consequence of our theory, we show that the empirical entropic Gromov-Wasserstein distance and its unregularized version for measures on Euclidean spaces also obey this principle. Additionally, we comment on computational aspects and complement our findings with Monte Carlo simulations. Our techniques employ empirical process theory and rely on a dual formulation of EOT over a single function class. Crucial to our analysis is the observation that the entropic cost-transformation of a function class does not increase its uniform metric entropy by much.
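
As a concrete reference point (ours, not the paper's code), the empirical plug-in estimator that the bounds apply to can be computed with plain Sinkhorn iterations; the sketch below returns the transport-cost part $\langle P, C \rangle$ of the entropic objective for two empirical measures under a squared Euclidean cost.

```python
# Empirical plug-in EOT via Sinkhorn iterations (a minimal sketch): squared
# Euclidean cost, regularization eps, uniform empirical weights.
import numpy as np

def entropic_ot_cost(X, Y, eps=1.0, iters=500):
    n, m = len(X), len(Y)
    a, b = np.full(n, 1.0 / n), np.full(m, 1.0 / m)     # uniform weights
    C = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)  # pairwise sq. distances
    K = np.exp(-C / eps)
    v = np.ones(m)
    for _ in range(iters):                              # Sinkhorn updates
        u = a / (K @ v)
        v = b / (K.T @ u)
    P = u[:, None] * K * v[None, :]                     # entropic plan
    return (P * C).sum()                                # transport-cost part

rng = np.random.default_rng(1)
X, Y = rng.normal(size=(200, 2)), rng.normal(size=(200, 2))  # d = 2
print(entropic_ot_cost(X, Y, eps=0.5))
```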

We solve a long-standing open problem about the optimal codebook structure of codes in $n$-dimensional Euclidean space that consist of $n+1$ codewords subject to a codeword energy constraint, in terms of minimizing the average decoding error probability. The conjecture states that optimal codebooks are formed by the $n+1$ vertices of a regular simplex (the $n$-dimensional generalization of a regular tetrahedron) inscribed in the unit sphere. A self-contained proof of this conjecture is provided that hinges on symmetry arguments and leverages a relaxation approach that consists of jointly optimizing the codebook and the decision regions, rather than the codeword locations alone.
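
The conjectured-optimal codebook is easy to construct explicitly: centering and normalizing the standard basis of $\mathbb{R}^{n+1}$ yields $n+1$ unit-energy codewords, lying in an $n$-dimensional subspace, with all pairwise inner products equal to $-1/n$. A minimal sketch:

```python
# Regular simplex codebook: n+1 vertices of a regular simplex inscribed in
# the unit sphere, built by centering and normalizing the standard basis of
# R^{n+1}. The vertices span an n-dimensional subspace, and all pairwise
# inner products equal -1/n.
import numpy as np

def simplex_codebook(n):
    E = np.eye(n + 1)
    V = E - E.mean(axis=0)                        # center the n+1 vertices
    return V / np.linalg.norm(V, axis=1, keepdims=True)

V = simplex_codebook(4)
print(np.round(V @ V.T, 6))                       # 1 on diagonal, -0.25 off it
```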

We investigate the effect of higher-order continuity in the solution field, obtained by using NURBS basis functions in isogeometric analysis (IGA), on an efficient mixed finite element formulation for elastostatic beams. The formulation is based on the Hu-Washizu variational principle and accounts for geometrical and material nonlinearities. We present a reduced degree for the basis functions of the additional fields of the stress resultants and strains of the beam, which are allowed to be discontinuous across elements. This approach turns out to significantly improve both the computational efficiency and the accuracy of the results. We consider a beam formulation with extensible directors, where cross-sectional strains are enriched to avoid Poisson locking by an enhanced assumed strain method. In numerical examples, we show the superior per-degree-of-freedom accuracy of IGA over conventional finite element analysis, due to the higher-order continuity in the displacement field. We further verify the efficient rotational coupling between beams, as well as the path independence of the results.
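
As background for the continuity claim (an illustrative sketch, not the paper's formulation), B-spline basis functions of degree $p$, and hence NURBS with unit weights, are $C^{p-1}$ across element boundaries when interior knots are not repeated. The Cox-de Boor recursion below evaluates them on a hypothetical open knot vector:

```python
# Cox-de Boor evaluation of B-spline basis functions (illustrative).
def bspline_basis(i, p, knots, t):
    """Value of the i-th degree-p B-spline basis function at t."""
    if p == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + p] > knots[i]:
        left = ((t - knots[i]) / (knots[i + p] - knots[i])
                * bspline_basis(i, p - 1, knots, t))
    if knots[i + p + 1] > knots[i + 1]:
        right = ((knots[i + p + 1] - t) / (knots[i + p + 1] - knots[i + 1])
                 * bspline_basis(i + 1, p - 1, knots, t))
    return left + right

knots = [0, 0, 0, 1, 2, 3, 3, 3]      # open knot vector, p = 2, three elements
vals = [bspline_basis(i, 2, knots, 1.5) for i in range(5)]
print(vals, "sum =", sum(vals))       # partition of unity inside the patch
```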

Dimension reduction is crucial in functional data analysis (FDA). The key tool to reduce the dimension of the data is functional principal component analysis. Existing approaches for functional principal component analysis usually involve the diagonalization of the covariance operator. With the increasing size and complexity of functional datasets, estimating the covariance operator has become more challenging. Therefore, there is a growing need for efficient methodologies to estimate the eigencomponents. Using the duality of the space of observations and the space of functional features, we propose to use the inner products between the curves to estimate the eigenelements of multivariate and multidimensional functional datasets. The relationship between the eigenelements of the covariance operator and those of the inner-product matrix is established. We explore the application of these methodologies in several FDA settings and provide general guidance on their usability.
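
A minimal numerical sketch of this duality (ours; quadrature weights for irregular grids are omitted): for $n$ centered curves sampled on a $p$-point grid and stacked in a matrix $X$, the $n \times n$ inner-product matrix $G = XX^\top/n$ shares its nonzero eigenvalues with the $p \times p$ covariance, and each eigenfunction is recovered from the matching eigenvector, which is cheap when $n \ll p$.

```python
# Duality trick: eigendecompose the n x n inner-product (Gram) matrix
# instead of the p x p covariance; nonzero eigenvalues agree, and
# eigenfunctions are recovered from the eigenvectors.
import numpy as np

rng = np.random.default_rng(2)
n, p = 50, 2000
grid = np.linspace(0, 1, p)
scores = rng.normal(size=(n, 2)) * np.array([2.0, 0.5])   # latent components
X = scores @ np.vstack([np.sin(2 * np.pi * grid),          # smooth modes
                        np.cos(2 * np.pi * grid)])
X -= X.mean(axis=0)                                        # center the curves

G = X @ X.T / n                                            # n x n Gram matrix
eigval, U = np.linalg.eigh(G)
eigval, U = eigval[::-1], U[:, ::-1]                       # descending order
eigfun = X.T @ U[:, :2] / np.sqrt(n * eigval[:2])          # eigenfunctions
print(np.round(eigval[:3], 4))                             # two dominant modes
```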

In many modern statistical problems, the limited available data must be used both to develop the hypotheses to test and to test these hypotheses; that is, both for exploratory and confirmatory data analysis. Reusing the same dataset for both exploration and testing can lead to massive selection bias, leading to many false discoveries. Selective inference is a framework that allows for performing valid inference even when the same data is reused for exploration and testing. In this work, we are interested in the problem of selective inference for data clustering, where a clustering procedure is used to hypothesize a separation of the data points into a collection of subgroups, and we then wish to test whether these data-dependent clusters in fact represent meaningful differences within the data. Recent work by Gao et al. [2022] provides a framework for selective inference in this setting, where a hierarchical clustering algorithm is used to produce the cluster assignments; this framework was then extended to k-means clustering by Chen and Witten [2022]. Both of these works rely on assuming a known covariance structure for the data, but in practice the noise level needs to be estimated, and this is particularly challenging when the true cluster structure is unknown. In our work, we extend this line of work to the setting of noise with unknown variance and provide a selective inference method for this more general setting. Empirical results show that our new method is better able to maintain high power while controlling Type I error when the true noise level is unknown.
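
A quick Monte Carlo makes the motivating problem concrete (this demonstrates the bias; it is not the authors' corrected method): clustering pure-noise data with k-means and then naively t-testing the difference between the discovered clusters rejects far more often than the nominal 5% level.

```python
# Naive "cluster, then test" on pure noise: the double use of the data
# inflates the Type I error far above the nominal 5% level.
import numpy as np
from scipy import stats
from scipy.cluster.vq import kmeans2

rng = np.random.default_rng(3)
rejections, trials = 0, 500
for _ in range(trials):
    x = rng.normal(size=(40, 1))                 # no true cluster structure
    _, labels = kmeans2(x, 2, minit="++", seed=rng)
    a, b = x[labels == 0], x[labels == 1]
    if len(a) < 2 or len(b) < 2:                 # skip degenerate splits
        continue
    rejections += stats.ttest_ind(a, b).pvalue[0] < 0.05
print("naive Type I error:", rejections / trials)  # close to 1, not 0.05
```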

This paper presents a novel, efficient, high-order accurate, and stable spectral element-based model for computing the complete three-dimensional linear radiation and diffraction problem for floating offshore structures. We present a solution to a pseudo-impulsive formulation in the time domain, where the frequency-dependent quantities, such as added mass, radiation damping, and wave excitation force for arbitrary heading angle, $\beta$, are evaluated using Fourier transforms from the tailored time-domain responses. The spatial domain is tessellated by an unstructured high-order hybrid configured mesh and represented by piece-wise polynomial basis functions in the spectral element space. Fourth-order accurate time integration is employed through an explicit four-stage Runge-Kutta method and complemented by fourth-order finite difference approximations for time differentiation. To reduce the computational burden, the model can make use of symmetry boundaries in the domain representation. The key piece of the numerical model -- the discrete Laplace solver -- is validated through $p$- and $h$-convergence studies. Moreover, to highlight the capabilities of the proposed model, we present proof-of-concept examples of simple floating bodies (a sphere and a box). Lastly, we consider a much more involved case of an oscillating water column, including generalized modes resembling the piston motion and wave sloshing effects inside the wave energy converter chamber. In this case, the spectral element model trivially computes the infinite-frequency added mass, which is a singular problem for conventional boundary element type solvers.
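
For intuition about the pseudo-impulsive formulation (an illustrative sketch with made-up signals, not the solver itself): frequency-dependent quantities are recovered as the transfer function between the Fourier transforms of the recorded response and the Gaussian pseudo-impulsive input.

```python
# Pseudo-impulsive post-processing, illustrated with synthetic signals:
# frequency-domain quantity = R(omega) / G(omega), where g(t) is the
# Gaussian pseudo-impulsive input and r(t) the time-domain response.
import numpy as np

dt, T = 0.01, 40.0
t = np.arange(0.0, T, dt)
g = np.exp(-((t - 2.0) ** 2) / 0.1)              # Gaussian pseudo-impulse
kernel = np.exp(-0.3 * t) * np.sin(2.0 * t)      # stand-in system memory
r = np.convolve(g, kernel)[: len(t)] * dt        # synthetic response

G = np.fft.rfft(g)
R = np.fft.rfft(r)
omega = 2.0 * np.pi * np.fft.rfftfreq(len(t), dt)
H = R / G                                        # frequency-domain response
print(np.round(np.abs(H[:5]), 4))                # low-frequency magnitudes
```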
