In a recent paper, Tang and Ding introduced a class of binary cyclic codes of rate close to one half with a designed lower bound on their minimum distance. The definition involves the base-$2$ expansion of the integers in their defining set. In this paper we propose an analogue for quaternary codes. In addition, the performance of the subfield subcode and of the trace code (two binary cyclic codes) is investigated.
We present explicit formulae for parameterized families of distributions of the number of nonoverlapping words and of increasing nonoverlapping words in independent and identically distributed (i.i.d.) finite-valued random variables, respectively. We then provide an explicit formula for a parameterized family of distributions of the number of runs, which generalizes the \(\mu\)-overlapping distributions for \(\mu\geq 0\) in i.i.d.~binary-valued random variables. We also demonstrate that this family covers the distributions of the number of runs whose sizes are exactly given numbers (Mood 1940). The number of arithmetic operations required to compute our formula for the generalized distributions of runs, for a fixed number of parameters and a fixed range, is linear in the sample size.
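To make the counting statistic concrete: a greedy left-to-right scan counts nonoverlapping occurrences of a word, and Monte Carlo sampling approximates the distribution that the paper computes exactly. The Python sketch below is our illustration only, not the authors' linear-time formula.

```python
import random
from collections import Counter

def count_nonoverlapping(seq, word):
    """Greedy left-to-right count of nonoverlapping occurrences of `word` in `seq`."""
    count, i, k = 0, 0, len(word)
    while i + k <= len(seq):
        if seq[i:i + k] == word:
            count += 1
            i += k  # jump past the match so occurrences cannot overlap
        else:
            i += 1
    return count

# Monte Carlo estimate of the distribution of the count for i.i.d. fair coin flips
random.seed(0)
n, word, trials = 50, [1, 1, 0], 100_000
dist = Counter(
    count_nonoverlapping([random.getrandbits(1) for _ in range(n)], word)
    for _ in range(trials)
)
for value in sorted(dist):
    print(value, dist[value] / trials)
```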
At least since the work of Abbe and Rayleigh, the difficulty of super-resolution, where one wants to recover a collection of point sources from low-resolution microscopy measurements, has been thought to depend on whether the distance between the sources is below or above a certain resolution or diffraction limit. Even though there have been a number of approaches to define this limit more rigorously, there is still a gap between situations where the task is known to be hard and scenarios where the task is provably simpler. For instance, an interesting approach for the univariate case using the size of the Cram\'er-Rao lower bound was introduced in a recent work by Ferreira Da Costa and Mitra. In this paper, we prove their conjecture on the transition point between good and poor tractability of super-resolution and extend it to higher dimensions. Specifically, the bivariate statistical analysis allows us to link the findings based on the Cram\'er-Rao lower bound to the classical Rayleigh limit.
We propose a data-driven approach to explicitly learn the progressive encoding of a continuous source, which is successively decoded with increasing levels of quality and with the aid of correlated side information. This setup refers to the successive refinement of the Wyner-Ziv coding problem. Assuming ideal Slepian-Wolf coding, our approach employs recurrent neural networks (RNNs) to learn layered encoders and decoders for the quadratic Gaussian case. The models are trained by minimizing a variational bound on the rate-distortion function of the successively refined Wyner-Ziv coding problem. We demonstrate that RNNs can explicitly retrieve layered binning solutions akin to scalable nested quantization. Moreover, the rate-distortion performance of the scheme is on par with the corresponding monolithic Wyner-Ziv coding approach and is close to the rate-distortion bound.
We present a hierarchical Bayesian pipeline, BP3M, that measures positions, parallaxes, and proper motions (PMs) for cross-matched sources between Hubble~Space~Telescope (HST) images and Gaia -- even for sparse fields ($N_*<10$ per image) -- expanding on the recent GaiaHub tool. This technique uses Gaia-measured astrometry as priors to predict the locations of sources in HST images, and is therefore able to put the HST images onto a global reference frame without the use of background galaxies/QSOs. Testing our publicly available code in the Fornax and Draco dSphs, we measure accurate PMs that are a median of 8--13 times more precise than Gaia DR3 alone for $20.5<G<21~\mathrm{mag}$. We explore the effect of observation strategies on BP3M astrometry using synthetic data, finding an optimal strategy that improves parallax and position precision at no cost to the PM uncertainty. Using 1619 HST images in the sparse COSMOS field (median 9 Gaia sources per HST image), we measure BP3M PMs for 2640 unique sources in the $16<G<21.5~\mathrm{mag}$ range, 25% of which have no Gaia PMs; the median BP3M PM uncertainty for $20.25<G<20.75~\mathrm{mag}$ sources is $0.44~\mathrm{mas/yr}$, compared to $1.03~\mathrm{mas/yr}$ from Gaia, while the median BP3M PM uncertainty for sources without Gaia-measured PMs ($20.75<G<21.5~\mathrm{mag}$) is $1.16~\mathrm{mas/yr}$. The statistics that underpin the BP3M pipeline are a generalized way of combining position measurements from different images, epochs, and telescopes, which allows information to be shared between surveys and archives to achieve higher astrometric precision than that of each catalog alone.
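The closing point rests on a standard Gaussian identity: independent measurements of the same quantity combine through a precision-weighted mean whose uncertainty is smaller than that of every single input. A minimal numpy sketch of this building block (our illustration with a hypothetical function name, not the BP3M code):

```python
import numpy as np

def combine_positions(positions, sigmas):
    """Precision-weighted (inverse-variance) mean of independent measurements
    of the same coordinate, with the uncertainty of the combination."""
    positions = np.asarray(positions, dtype=float)
    weights = 1.0 / np.asarray(sigmas, dtype=float) ** 2
    mean = np.sum(weights * positions) / np.sum(weights)
    sigma = np.sqrt(1.0 / np.sum(weights))  # always <= min(sigmas)
    return mean, sigma

# Three measurements of one coordinate (in mas) from different images/epochs
mean, sigma = combine_positions([10.2, 9.8, 10.5], [0.5, 0.3, 1.0])
print(f"combined: {mean:.3f} +/- {sigma:.3f} mas")
```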
Quadratic NURBS-based discretizations of the Galerkin method suffer from volumetric locking when applied to nearly-incompressible linear elasticity. Volumetric locking causes not only smaller displacements than expected, but also large-amplitude spurious oscillations of normal stresses. Continuous-assumed-strain (CAS) elements were recently introduced to remove membrane locking in quadratic NURBS-based discretizations of linear plane curved Kirchhoff rods (Casquero et al., CMAME, 2022). In this work, we propose two generalizations of CAS elements (named CAS1 and CAS2 elements) to overcome volumetric locking in quadratic NURBS-based discretizations of nearly-incompressible linear elasticity. CAS1 elements linearly interpolate the strains at the knots in each direction for the term in the variational form involving the first Lam\'e parameter, while CAS2 elements linearly interpolate the dilatational strains at the knots in each direction. For both element types, a displacement vector with $C^1$ continuity across element boundaries results in assumed strains with $C^0$ continuity across element boundaries. In addition, the implementation of the two locking treatments proposed in this work does not require any additional global or element matrix operations such as matrix inversions or matrix multiplications. The locking treatments are applied at the element level and the nonzero pattern of the global stiffness matrix is preserved. The numerical examples solved in this work show that CAS1 and CAS2 elements, using either two or three Gauss-Legendre quadrature points per direction, are effective locking treatments since they not only result in more accurate displacements for coarse meshes, but also remove the spurious oscillations of normal stresses.
We numerically investigate the generalized Steklov problem for the modified Helmholtz equation and focus on the relation between its spectrum and the geometric structure of the domain. We address three distinct aspects: (i) the asymptotic behavior of eigenvalues for polygonal domains; (ii) the dependence of the integrals of eigenfunctions on the domain symmetries; and (iii) the localization and exponential decay of Steklov eigenfunctions away from the boundary for smooth shapes and in the presence of corners. For this purpose, we implemented two complementary numerical methods to compute the eigenvalues and eigenfunctions of the associated Dirichlet-to-Neumann operator for various simply connected planar domains. We also discuss applications of the obtained results in the theory of diffusion-controlled reactions and formulate several conjectures with relevance in spectral geometry.
Remotely sensed data are dominated by mixed Land Use and Land Cover (LULC) types. Spectral unmixing is a technique to decompose mixed pixels into their constituent LULC types and the corresponding abundance fractions. Traditionally, solving this task has relied either on classical methods that require prior knowledge of endmembers or on machine learning methods that avoid explicit endmember calculation, also known as blind spectral unmixing (BSU). Most BSU studies based on Deep Learning (DL) focus on single-time-step hyperspectral or multispectral data. To our knowledge, we provide the first study on BSU of LULC classes using MODIS multispectral time series, in the presence of missing data, with end-to-end DL models. We further boost the performance of a Long Short-Term Memory (LSTM)-based model by incorporating geographic plus topographic (geo-topographic) and climatic ancillary information. Our experiments show that combining spectral-temporal input data with geo-topographic and climatic information substantially improves the abundance estimation of LULC classes in mixed pixels. To carry out this study, we built a new labeled dataset of the region of Andalusia (Spain) with monthly multispectral time series of pixels for the year 2013 from MODIS at 460~m resolution, for two hierarchical levels of LULC classes, named Andalusia MultiSpectral MultiTemporal Unmixing (Andalusia-MSMTU). This dataset provides, at the pixel level, a multispectral time series plus ancillary information annotated with the abundance of each LULC class inside each pixel. The dataset (https://zenodo.org/record/7752348#.ZBmkkezMLdo) and code (https://github.com/jrodriguezortega/MSMTU) are available to the public.
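A minimal sketch of the kind of model described, assuming a PyTorch implementation with placeholder sizes (our illustration, not the authors' released code): an LSTM encodes the monthly multispectral sequence, ancillary geo-topographic and climatic features are concatenated to its final hidden state, and a softmax head outputs abundance fractions that sum to one.

```python
import torch
import torch.nn as nn

class UnmixingLSTM(nn.Module):
    """Sketch of an LSTM unmixing model: the time series is encoded by an LSTM,
    ancillary features join the last hidden state, and a softmax head yields
    per-pixel LULC abundance fractions."""
    def __init__(self, n_bands, n_ancillary, n_classes, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_bands, hidden, batch_first=True)
        self.head = nn.Linear(hidden + n_ancillary, n_classes)

    def forward(self, spectra, ancillary):
        # spectra: (batch, months, bands); ancillary: (batch, n_ancillary)
        _, (h_n, _) = self.lstm(spectra)
        features = torch.cat([h_n[-1], ancillary], dim=1)
        return torch.softmax(self.head(features), dim=1)  # abundances sum to 1

# Placeholder sizes: 12 months, 7 bands, 5 ancillary features, 10 LULC classes
model = UnmixingLSTM(n_bands=7, n_ancillary=5, n_classes=10)
abundances = model(torch.randn(4, 12, 7), torch.randn(4, 5))
print(abundances.sum(dim=1))  # each row sums to 1
```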
We determine the minimum possible column multiplicity of even, doubly-, and triply-even codes of a given length. This refines a classification result for the possible lengths of $q^r$-divisible codes over $\mathbb{F}_q$. We also give a few computational results for field sizes $q>2$. Non-existence results for divisible codes with restricted column multiplicities at a given length have applications, e.g., in Galois geometry, and can be used to derive upper bounds on the maximum cardinality of subspace codes.
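For readers new to the terminology: a binary code is even, doubly-even, or triply-even when every codeword weight is divisible by 2, 4, or 8, respectively. A brute-force Python check of this property, illustrated on the extended $[8,4,4]$ Hamming code, a classical doubly-even example:

```python
import itertools
import numpy as np

def is_divisible(G, divisor):
    """Brute-force check that every codeword weight of the binary code
    generated by G is divisible by `divisor` (2: even, 4: doubly-even,
    8: triply-even). Feasible only for small dimension k."""
    k = G.shape[0]
    for m in itertools.product([0, 1], repeat=k):
        weight = int((np.dot(m, G) % 2).sum())
        if weight % divisor != 0:
            return False
    return True

# Generator matrix [I | J-I] of the extended [8,4,4] Hamming code
G = np.array([[1, 0, 0, 0, 0, 1, 1, 1],
              [0, 1, 0, 0, 1, 0, 1, 1],
              [0, 0, 1, 0, 1, 1, 0, 1],
              [0, 0, 0, 1, 1, 1, 1, 0]])
print(is_divisible(G, 4))  # True: the code is doubly-even
```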
Quantum error correction is crucial for scalable quantum information processing applications. Traditional discrete-variable quantum codes that use multiple two-level systems to encode logical information can be hardware-intensive. An alternative approach is provided by bosonic codes, which use the infinite-dimensional Hilbert space of harmonic oscillators to encode quantum information. Two promising features of bosonic codes are that syndrome measurements are natively analog and that they can be concatenated with discrete-variable codes. In this work, we propose novel decoding methods that explicitly exploit the analog syndrome information obtained from the bosonic qubit readout in a concatenated architecture. Our methods are versatile and can be generally applied to any bosonic code concatenated with a quantum low-density parity-check (QLDPC) code. Furthermore, we introduce the concept of quasi-single-shot protocols as a novel approach that significantly reduces the number of repeated syndrome measurements required when decoding under phenomenological noise. To realize the protocol, we present a first implementation of time-domain decoding with the overlapping window method for general QLDPC codes, and a novel analog single-shot decoding method. Our results lay the foundation for general decoding algorithms using analog information and demonstrate promising results in the direction of fault-tolerant quantum computation with concatenated bosonic-QLDPC codes.
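For background on how analog information can enter a decoder: in soft-decision decoding over a Gaussian channel, an analog readout maps to a log-likelihood ratio whose sign is the hard decision and whose magnitude is its reliability; a belief-propagation decoder can consume these directly instead of hard syndrome bits. A minimal sketch of this textbook conversion (our illustration, not the paper's decoder):

```python
import numpy as np

def analog_to_llr(y, sigma):
    """Log-likelihood ratio log P(bit=0|y)/P(bit=1|y) for an analog readout y
    of a +/-1-encoded bit under Gaussian noise of standard deviation sigma.
    The sign gives the hard decision; the magnitude its reliability."""
    return 2.0 * np.asarray(y) / sigma**2

# Two readouts with the same sign but very different reliability
print(analog_to_llr([0.9, 0.1], sigma=0.5))  # [7.2 0.8]
```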
This paper presents the workspace optimization of one-translational two-rotational (1T2R) parallel manipulators using a dimensionally homogeneous constraint-embedded Jacobian. The mixed degrees of freedom of 1T2R parallel manipulators, which cause dimensional inconsistency, make it difficult to optimize their architectural parameters. To solve this problem, a point-based approach with a shifting property, a selection matrix, and a constraint-embedded inverse Jacobian is proposed. A simplified formulation is provided, eliminating the complex partial differentiation required in previous approaches. The dimensional homogeneity of the proposed method was analytically proven, and its validity was confirmed by comparison with the conventional point-based method using a 3-PRS manipulator. Furthermore, the approach was applied to an asymmetric 2-RRS/RRRU manipulator with no parasitic motion. This mechanism has a T-shaped combination of limbs with different kinematic parameters, making it challenging to derive a dimensionally homogeneous Jacobian using the conventional method. Finally, optimization was performed, and the results show that the proposed method is more efficient than the conventional approach; its efficiency and simplicity were thus verified on two distinct parallel manipulators.