N. G. de Bruijn (1958) studied the asymptotic expansion of iterates of $\sin(x)$ with $0 < x \leq \pi/2$. Bencherif & Robin (1994) generalized this result to increasing analytic functions $f(x)$ with an attractive fixed point at 0 and $x > 0$ suitably small. Mavecha & Laohakosol (2013) formulated an algorithm for explicitly deriving the required parameters. We review their method, testing it initially on the logistic function $\ell(x)$ and a certain radical function $r(x)$, and later on several transcendental functions. Along the way, we show that $\ell(x)$ and $r(x)$ are kindred functions; the same is true of $\sin(x)$ and $\operatorname{arcsinh}(x)$.
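As a quick numerical illustration of the leading term in de Bruijn's expansion (a sketch only; the cited works derive the full higher-order expansion), iterates of $\sin(x)$ satisfy $x_n \sim \sqrt{3/n}$:
\begin{verbatim}
import math

# Iterate x_{n+1} = sin(x_n) from x_0 in (0, pi/2] and compare with the
# leading-order asymptotic x_n ~ sqrt(3/n) (de Bruijn, 1958).
x = 1.0
for n in range(1, 10**6 + 1):
    x = math.sin(x)
    if n in (10**2, 10**4, 10**6):
        print(f"n={n:>8}  x_n={x:.8f}  sqrt(3/n)={math.sqrt(3/n):.8f}")
\end{verbatim}
The two columns approach each other, with the remaining gap of logarithmic order captured by the higher-order terms of the expansion.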
A convergent numerical method for $\alpha$-dissipative solutions of the Hunter-Saxton equation is derived. The method is based on applying a tailor-made projection operator to the initial data, and then solving exactly using the generalized method of characteristics. The projection step is the only step that introduces any approximation error. It is therefore crucial that its design ensures not only a good approximation of the initial data, but also that errors due to the energy dissipation at later times remain small. Furthermore, it is shown that the main quantity of interest, the wave profile, converges in $L^{\infty}$ for all $t \geq 0$, while a subsequence of the energy density converges weakly for almost every time.
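As background to the exact-solution step, here is a minimal sketch of the generalized method of characteristics in the conservative case $\alpha = 0$, under the common normalization $u_t + u u_x = \tfrac{1}{4}\big(\int_{-\infty}^x - \int_x^{\infty}\big) u_x^2 \, dz$; it is not the paper's projection operator or $\alpha$-dissipation mechanism. In Lagrangian variables the characteristics are explicit quadratic polynomials in time:
\begin{verbatim}
import numpy as np

# Conservative (alpha = 0) Hunter-Saxton evolution in Lagrangian variables:
# y_t = U, U_t = (1/4)(2 H - C), with the cumulative energy H constant along
# characteristics, so y and U are explicit quadratics / linear in t.
N = 2001
xi = np.linspace(-10.0, 10.0, N)          # Lagrangian labels
y0 = xi.copy()                             # initial characteristic positions
u0 = np.exp(-xi**2)                        # initial wave profile u_0
ux0 = -2.0 * xi * np.exp(-xi**2)           # u_0'
dxi = xi[1] - xi[0]
H0 = np.cumsum(ux0**2) * dxi               # cumulative energy H_0(xi)
C = H0[-1]                                 # total energy

def profile(t):
    """Return (y, U): characteristic positions and velocity at time t."""
    U = u0 + 0.25 * t * (2.0 * H0 - C)
    y = y0 + t * u0 + 0.125 * t**2 * (2.0 * H0 - C)
    return y, U

y, U = profile(0.5)   # wave profile u(0.5, .) sampled along characteristics
print(U[::250])
\end{verbatim}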
We present polynomial-time SDP-based algorithms for the following problem: for fixed $k \leq \ell$, given a real number $\epsilon>0$ and a graph $G$ that admits a $k$-colouring with a $\rho$-fraction of the edges coloured properly, the algorithm returns an $\ell$-colouring of $G$ with an $(\alpha \rho - \epsilon)$-fraction of the edges coloured properly, in time polynomial in the size of $G$ and in $1 / \epsilon$. Our algorithms are based on the algorithms of Frieze and Jerrum [Algorithmica'97] and of Karger, Motwani and Sudan [JACM'98]. When $k$ is fixed and $\ell$ grows large, our algorithm achieves an approximation ratio of $\alpha = 1 - o(1 / \ell)$. When $k, \ell$ are both large, our algorithm achieves an approximation ratio of $\alpha = 1 - 1 / \ell + 2 \ln \ell / k \ell - o(\ln \ell / k \ell) - O(1 / k^2)$; if we fix $d = \ell - k$ and allow $k, \ell$ to grow large, this is $\alpha = 1 - 1 / \ell + 2 \ln \ell / k \ell - o(\ln \ell / k \ell)$. By extending the results of Khot, Kindler, Mossel and O'Donnell [SICOMP'07] to the promise setting, we show that for large $k$ and $\ell$, assuming Khot's Unique Games Conjecture (\UGC), it is \NP-hard to achieve an approximation ratio $\alpha$ greater than $1 - 1 / \ell + 2 \ln \ell / k \ell + o(\ln \ell / k \ell)$, provided that $\ell$ is bounded by a function that is $o(\exp(\sqrt[3]{k}))$. For the case where $d = \ell - k$ is fixed, this bound matches the performance of our algorithm up to $o(\ln \ell / k \ell)$. Furthermore, by extending the results of Guruswami and Sinop [ToC'13] to the promise setting, we prove that it is \NP-hard to achieve an approximation ratio greater than $1 - 1 / \ell + 8 \ln \ell / k \ell + o(\ln \ell / k \ell)$, provided again that $\ell$ is bounded as before (but this time without assuming the \UGC).
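For calibration (a sketch of the trivial baseline only, not of the SDP rounding of Frieze and Jerrum), a uniformly random $\ell$-colouring already colours a $1 - 1/\ell$ fraction of the edges properly in expectation, so the quoted ratios measure the improvement beyond this baseline:
\begin{verbatim}
import random

def random_colouring_fraction(edges, n, ell, trials=1000):
    """Average fraction of properly coloured edges under a uniformly
    random ell-colouring; converges to 1 - 1/ell as trials grow."""
    total = 0.0
    for _ in range(trials):
        colour = [random.randrange(ell) for _ in range(n)]
        good = sum(1 for u, v in edges if colour[u] != colour[v])
        total += good / len(edges)
    return total / trials

# Example: 5-cycle with ell = 3; the expectation is 1 - 1/3 = 0.666...
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
print(random_colouring_fraction(edges, 5, 3))
\end{verbatim}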
We propose and justify a matrix reduction method for calculating the optimal approximation of an observed matrix $A \in {\mathbb C}^{m \times n}$ by a sum $\sum_{i=1}^p \sum_{j=1}^q B_iX_{ij}C_j$ of matrix products, where each $B_i \in {\mathbb C}^{m \times g_i}$ and $C_j \in {\mathbb C}^{h_j \times n}$ is known and where the unknown matrix kernels $X_{ij}$ are determined by minimizing the Frobenius norm of the error. The sum can be represented as a bounded linear mapping $BXC$ with unknown kernel $X$ from a prescribed subspace ${\mathcal T} \subseteq {\mathbb C}^n$ onto a prescribed subspace ${\mathcal S} \subseteq {\mathbb C}^m$, defined respectively by the collective domains and ranges of the given matrices $C_1,\ldots,C_q$ and $B_1,\ldots,B_p$. We show that the optimal kernel is $X = B^{\dag}AC^{\dag}$ and that the optimal approximation $BB^{\dag}AC^{\dag}C$ is the projection of the observed mapping $A$ onto a mapping from ${\mathcal T}$ to ${\mathcal S}$. If $A$ is large, then $B$ and $C$ may also be large, and direct calculation of $B^{\dag}$ and $C^{\dag}$ becomes unwieldy and inefficient. The proposed method avoids this difficulty by reducing the solution process to finding the pseudo-inverses of a collection of much smaller matrices, which significantly reduces the computational burden.
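A minimal numerical check of the optimality claim, in the single-block case $p = q = 1$ and with hypothetical random data rather than the paper's reduction:
\begin{verbatim}
import numpy as np

# Check the optimal kernel X = pinv(B) @ A @ pinv(C): the approximation
# B @ X @ C minimizes ||A - B X C||_F over all kernels X.
rng = np.random.default_rng(0)
m, n, g, h = 8, 7, 3, 4
A = rng.standard_normal((m, n))
B = rng.standard_normal((m, g))
C = rng.standard_normal((h, n))

X_opt = np.linalg.pinv(B) @ A @ np.linalg.pinv(C)
err_opt = np.linalg.norm(A - B @ X_opt @ C)

# Random perturbations of the optimal kernel never do better.
for _ in range(5):
    X = X_opt + 0.1 * rng.standard_normal((g, h))
    assert np.linalg.norm(A - B @ X @ C) >= err_opt - 1e-12
print(f"optimal Frobenius error: {err_opt:.6f}")
\end{verbatim}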
We study the finite element approximation of problems involving the weighted $p$-Laplacian for $p \in (1,\infty)$ and weights belonging to the Muckenhoupt class $A_1$. In particular, we consider an equation and an obstacle problem for the weighted $p$-Laplacian and derive error estimates in both cases. The analysis is based on the language of weighted Orlicz and Orlicz--Sobolev spaces.
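For concreteness, here is a toy one-dimensional sketch only, with an assumed $A_1$ weight $w(x) = |x - 1/2|^{-0.3}$, a regularized gradient, and a frozen-coefficient Picard iteration; the paper's error analysis concerns the genuine finite element setting:
\begin{verbatim}
import numpy as np

# Toy 1D illustration: P1 finite elements for -(w |u'|^{p-2} u')' = 1 on
# (0,1), u(0) = u(1) = 0, with the A_1 weight w(x) = |x - 1/2|^(-0.3),
# solved by a damped Picard (frozen-coefficient) iteration. A small eps
# keeps the nonlinear coefficient finite where u' vanishes.
p, eps, n = 1.5, 1e-8, 200
x = np.linspace(0.0, 1.0, n + 1)
h = x[1] - x[0]
mid = 0.5 * (x[:-1] + x[1:])
w = np.abs(mid - 0.5) ** (-0.3)

u = np.zeros(n + 1)
for _ in range(200):
    s = np.diff(u) / h                       # elementwise gradient
    a = w * (eps + s**2) ** ((p - 2) / 2)    # frozen nonlinear coefficient
    # assemble the tridiagonal stiffness matrix for interior nodes
    K = np.zeros((n - 1, n - 1))
    for e in range(n):                       # element e couples nodes e, e+1
        for i in (e, e + 1):
            for j in (e, e + 1):
                if 1 <= i <= n - 1 and 1 <= j <= n - 1:
                    K[i - 1, j - 1] += a[e] / h * (1 if i == j else -1)
    f = np.full(n - 1, h)                    # load f = 1, lumped
    u_new = np.concatenate(([0.0], np.linalg.solve(K, f), [0.0]))
    u = 0.5 * u + 0.5 * u_new                # damping
print(u[n // 2])                             # midpoint value of the iterate
\end{verbatim}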
$\ell_1$ regularization is used to preserve edges or enforce sparsity in a solution to an inverse problem. We investigate the Split Bregman and Majorization-Minimization iterative methods that turn this non-smooth minimization problem into a sequence of steps, each of which includes solving an $\ell_2$-regularized minimization problem. We consider selecting the regularization parameter in the inner generalized Tikhonov regularization problems that occur at each iteration of these $\ell_1$ iterative methods. The generalized cross validation and $\chi^2$ degrees of freedom methods are extended to these inner problems. In particular, for the $\chi^2$ method this includes extending the $\chi^2$ result to problems in which the regularization operator has more rows than columns, and showing how to use the $A$-weighted generalized inverse to estimate prior information at each inner iteration. Numerical experiments for image deblurring problems demonstrate that it is more effective to select the regularization parameter automatically within the iterative schemes than to keep it fixed for all iterations. Moreover, an appropriate regularization parameter can be estimated in the early iterations and then held fixed through convergence.
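A sketch of the generalized cross validation idea for a single standard-form inner problem $\min_x \|Ax - b\|^2 + \lambda^2 \|x\|^2$, evaluated cheaply via the SVD; the inner problems above are generalized Tikhonov, so this simplified form is for illustration only:
\begin{verbatim}
import numpy as np

# GCV for standard-form Tikhonov: pick lambda minimizing
# G(lam) = ||(I - A A_lam^#) b||^2 / trace(I - A A_lam^#)^2.
def gcv_lambda(A, b, lams):
    U, s, _ = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ b
    m = A.shape[0]
    best, best_g = None, np.inf
    for lam in lams:
        f = s**2 / (s**2 + lam**2)            # Tikhonov filter factors
        resid2 = np.sum(((1 - f) * beta) ** 2) + (b @ b - beta @ beta)
        trace = m - np.sum(f)                  # trace(I - A A_lam^#)
        g = resid2 / trace**2
        if g < best_g:
            best, best_g = lam, g
    return best

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 30))
x_true = rng.standard_normal(30)
b = A @ x_true + 0.05 * rng.standard_normal(50)
print(gcv_lambda(A, b, np.logspace(-4, 1, 60)))
\end{verbatim}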
This work introduces a novel approach to constructing DNA codes from linear codes over a non-chain extension of $\mathbb{Z}_4$. We study $(\text{\textbaro},\mathfrak{d}, \gamma)$-constacyclic codes over the ring $\mathfrak{R}=\mathbb{Z}_4+\omega\mathbb{Z}_4, \omega^2=\omega,$ with an $\mathfrak{R}$-automorphism $\text{\textbaro}$ and a $\text{\textbaro}$-derivation $\mathfrak{d}$ over $\mathfrak{R}.$ Further, we determine the generators of the $(\text{\textbaro},\mathfrak{d}, \gamma)$-constacyclic codes of arbitrary length over the ring $\mathfrak{R}$ and establish the reverse constraint for these codes. Besides a necessary and sufficient criterion for deriving reverse-complement codes, we present a construction for obtaining DNA codes from these reversible codes. Moreover, we use another construction on the $(\text{\textbaro},\mathfrak{d},\gamma)$-constacyclic codes to generate additional optimal and new classical codes. Finally, we provide several examples of $(\text{\textbaro},\mathfrak{d}, \gamma)$-constacyclic codes and construct DNA codes from the established results. Several of the resulting linear codes over $\mathbb{Z}_4$ are optimal or have better parameters than the codes available at \cite{z4codes}.
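A small illustration of the DNA-code interface, under one common $\mathbb{Z}_4$-to-base correspondence assumed here for concreteness; the paper's construction over $\mathfrak{R}$ is richer:
\begin{verbatim}
# One common Z4 <-> DNA correspondence: 0 -> A, 1 -> G, 2 -> T, 3 -> C,
# under which the Watson-Crick complement is a |-> a + 2 (mod 4), so that
# A <-> T and G <-> C.
BASE = {0: 'A', 1: 'G', 2: 'T', 3: 'C'}

def to_dna(codeword):
    return ''.join(BASE[c % 4] for c in codeword)

def reverse_complement(codeword):
    return tuple((c + 2) % 4 for c in reversed(codeword))

# toy code: the Z4-multiples of the all-ones word of length 4
code = {tuple((k * g) % 4 for g in [1, 1, 1, 1]) for k in range(4)}
closed = all(reverse_complement(c) in code for c in code)
print([to_dna(c) for c in sorted(code)],
      "reverse-complement closed:", closed)
\end{verbatim}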
Efficient algorithms for solving the Smallest Enclosing Sphere (SES) problem, such as Welzl's algorithm, often fail to handle degenerate subsets of points in 3D space. Degeneracies and ill-posed configurations present significant challenges, leading to failures in convergence, inaccuracies, or increased computational cost. Existing improvements to these algorithms, while addressing some of these issues, are either computationally expensive or only partially effective. In this paper, we propose a hybrid algorithm designed to mitigate degeneracy while maintaining an overall computational complexity of $O(N)$. By combining robust preprocessing steps with efficient core computations, our approach avoids the pitfalls of degeneracy without sacrificing scalability. The proposed method is validated through theoretical analysis and experimental results, demonstrating its efficacy in addressing degenerate configurations and achieving high efficiency in practice.
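To make the setting concrete (a sketch of the classical Welzl baseline, not of the proposed hybrid algorithm), the recursion below computes the support sphere by a minimum-norm least-squares solve, which stays well defined on duplicated, collinear, and coplanar supports, although minimality can degrade in exactly the rank-deficient cases the paper targets:
\begin{verbatim}
import numpy as np

def sphere_from(support):
    """Sphere through (enclosing) the support points."""
    P = np.asarray(support, dtype=float)
    if len(P) == 0:
        return np.zeros(3), -1.0
    if len(P) == 1:
        return P[0], 0.0
    # circumcentre within the affine hull of the support: minimum-norm
    # least-squares solution of 2 (p_i - p_0) . (c - p_0) = |p_i - p_0|^2
    A = 2.0 * (P[1:] - P[0])
    b = np.sum((P[1:] - P[0]) ** 2, axis=1)
    c = P[0] + np.linalg.lstsq(A, b, rcond=None)[0]
    return c, float(np.max(np.linalg.norm(P - c, axis=1)))

def welzl(points, support=()):
    if not points or len(support) == 4:
        return sphere_from(support)
    p, rest = points[0], points[1:]
    c, r = welzl(rest, support)
    if np.linalg.norm(np.asarray(p) - c) <= r + 1e-9:
        return c, r
    return welzl(rest, support + (p,))   # p must lie on the boundary

rng = np.random.default_rng(2)
pts = [tuple(x) for x in rng.standard_normal((60, 3))]
# inject degeneracies: duplicated and collinear points
pts += [pts[0]] * 3 + [(0.0, 0.0, z) for z in np.linspace(-1.0, 1.0, 5)]
c, r = welzl(pts)
print("centre:", np.round(c, 4), "radius:", round(r, 4))
\end{verbatim}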
We consider two-dimensional $(\lambda_1, \lambda_2)$-constacyclic codes over $\mathbb{F}_{q}$ of area $M N$, where $q$ is a power of a prime $p$ with $\gcd(M,p)=1$ and $\gcd(N,p)=1$. With the help of common zero (CZ) sets, we characterize 2-D constacyclic codes. Further, we provide an algorithm to construct an ideal basis of these codes by using their essential common zero (ECZ) sets. We describe the duals of 2-D constacyclic codes. Finally, we provide an encoding scheme for generating 2-D constacyclic codes. We present an example illustrating that 2-D constacyclic codes can have a better minimum distance than their cyclic counterparts of the same code size and code rate.
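The defining symmetry, in a minimal sketch with assumed parameters $q = 5$, $\lambda_1 = 2$, $\lambda_2 = 3$: a 2-D $(\lambda_1, \lambda_2)$-constacyclic code is a linear code in $\mathbb{F}_q^{M \times N}$ invariant under both of the shifts below.
\begin{verbatim}
import numpy as np

# Row shift: (a_0, ..., a_{N-1}) -> (lam1 * a_{N-1}, a_0, ..., a_{N-2})
# along each row; the column shift acts analogously with lam2.
q, lam1, lam2 = 5, 2, 3

def row_shift(A):
    B = np.roll(A, 1, axis=1)
    B[:, 0] = (lam1 * B[:, 0]) % q    # wrapped column picks up lambda_1
    return B % q

def col_shift(A):
    B = np.roll(A, 1, axis=0)
    B[0, :] = (lam2 * B[0, :]) % q    # wrapped row picks up lambda_2
    return B % q

A = np.arange(12).reshape(3, 4) % q   # a 3 x 4 array over F_5
print(row_shift(A))
print(col_shift(A))
# the two constacyclic shifts commute:
print(np.array_equal(row_shift(col_shift(A)), col_shift(row_shift(A))))
\end{verbatim}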
We propose a $C^0$ interior penalty method for the fourth-order stream function formulation of the surface Stokes problem. The scheme utilizes continuous, piecewise polynomial spaces defined on an approximate surface. We show that the resulting discretization is positive definite and derive error estimates in various norms in terms of the polynomial degree of the finite element space as well as the polynomial degree used to define the geometry approximation. A notable feature of the scheme is that it does not explicitly depend on the Gauss curvature of the surface. This is achieved via a novel integration-by-parts formula for the surface biharmonic operator.
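To fix ideas, here is a one-dimensional toy of the $C^0$ interior penalty mechanism for a fourth-order problem, with an assumed, untuned penalty parameter; it is not the surface Stokes scheme. $C^1$-continuity is imposed weakly by penalizing jumps of the first derivative at element interfaces:
\begin{verbatim}
import numpy as np

# C0-IP toy: continuous P2 elements for u'''' = f on (0,1), clamped ends.
# Test problem: u = sin^2(pi x), so f = -8 pi^4 cos(2 pi x).
n = 64; h = 1.0 / n; ndof = 2 * n + 1; sigma = 10.0  # assumed penalty
K = np.zeros((ndof, ndof)); F = np.zeros(ndof)
dofs = lambda e: [2 * e, 2 * e + 1, 2 * e + 2]   # left, midpoint, right
g = np.array([4.0, -8.0, 4.0]) / h**2            # u'' (constant per element)
dl = np.array([-3.0, 4.0, -1.0]) / h             # u' at element's left end
dr = np.array([1.0, -4.0, 3.0]) / h              # u' at element's right end

for e in range(n):                               # volume terms + load
    idx = dofs(e)
    K[np.ix_(idx, idx)] += h * np.outer(g, g)    # sum_e int u'' v''
    xm = (e + 0.5) * h                           # midpoint quadrature
    F[2 * e + 1] += h * (-8 * np.pi**4 * np.cos(2 * np.pi * xm))

for i in range(n + 1):                           # jump/average node terms
    J = np.zeros(ndof); avg = np.zeros(ndof); sides = 0
    if i > 0:                                    # trace from left element
        J[dofs(i - 1)] += dr; avg[dofs(i - 1)] += g; sides += 1
    if i < n:                                    # trace from right element
        J[dofs(i)] -= dl; avg[dofs(i)] += g; sides += 1
    avg /= sides
    K += -np.outer(avg, J) - np.outer(J, avg) + (sigma / h) * np.outer(J, J)

free = np.arange(1, ndof - 1)                    # u(0) = u(1) = 0 strongly
u = np.zeros(ndof)
u[free] = np.linalg.solve(K[np.ix_(free, free)], F[free])
xs = np.linspace(0.0, 1.0, ndof)
print("max nodal error:", np.abs(u - np.sin(np.pi * xs) ** 2).max())
\end{verbatim}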
In this paper, we study the differential properties of $x^d$ over $\mathbb{F}_{p^n}$ with $d=p^{2l}-p^{l}+1$. By studying the differential equation of $x^d$ and the number of rational points on some curves over finite fields, we completely determine the differential spectrum of $x^{d}$. Then we investigate the $c$-differential uniformity of $x^{d}$. We also calculate the value distribution of a class of exponential sums related to $x^d$. In addition, we obtain a class of six-weight constacyclic codes, whose weight distribution is explicitly determined. Some of our results complement the works in [\ref{H1}, \ref{H2}], which mainly focus on cross correlations.
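The differential spectrum can be checked by brute force on small instances. Below is a sketch for the smallest member of the family, $p = 2$, $l = 1$, i.e. $d = 3$, where $x^3$ is almost perfect nonlinear; the paper's treatment of the general case is analytic:
\begin{verbatim}
# Brute-force differential spectrum of x^d over GF(2^n) for d = 3
# (the instance p = 2, l = 1 of d = p^(2l) - p^l + 1). GF(2^5) is
# realized as binary polynomials modulo x^5 + x^2 + 1.
MOD, N = 0b100101, 5

def gf_mul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a >> N & 1:      # reduce modulo the field polynomial
            a ^= MOD
        b >>= 1
    return r

def f(x):                   # x^3 in GF(2^N)
    return gf_mul(gf_mul(x, x), x)

size = 1 << N
spectrum = {}
for a in range(1, size):    # each nonzero input difference a
    counts = [0] * size
    for x in range(size):
        counts[f(x ^ a) ^ f(x)] += 1
    for c in counts:
        spectrum[c] = spectrum.get(c, 0) + 1
print(spectrum)             # x^3 is APN: every count is 0 or 2
\end{verbatim}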