In this paper, we develop a new weak Galerkin finite element scheme for the Stokes interface problem with curved interfaces. We introduce a single vector-valued function at the interface and incorporate the interface conditions into the variational problem. Theoretical analysis and numerical experiments show that the errors reach the optimal convergence order in both the energy norm and the $L^2$ norm.
Because public-key cryptosystems are vulnerable to quantum computers, the need for a replacement has emerged. The McEliece cryptosystem and its security-equivalent counterpart, the Niederreiter cryptosystem, both based on Goppa codes, are among the proposed solutions, but they are impractical due to their long key lengths. Several prior attempts to shorten the public key in code-based cryptosystems substituted other code families for the Goppa codes; however, these efforts ultimately proved insecure. In 2016, the National Institute of Standards and Technology (NIST) called for proposals from around the world to standardize post-quantum cryptography (PQC) schemes. Among the various proposals received, the Classic McEliece cryptosystem, together with Hamming Quasi-Cyclic (HQC) and Bit Flipping Key Encapsulation (BIKE), was chosen in the code-based encryption category and progressed to the final stage. This article proposes a method for developing a code-based public-key cryptography scheme that is both simple and implementable. The proposed scheme has a much shorter public key than the NIST finalist cryptosystems: for the primary parameters of the McEliece cryptosystem (n=1024, k=524, t=50), the key length ranges from 18 to 500 bits. The security of the system is at least as strong as that of the Niederreiter cryptosystem. The proposed structure is based on the Niederreiter cryptosystem and exhibits a set of highly advantageous properties that make it a suitable candidate for implementation in existing systems.
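As a rough illustration of the Niederreiter principle underlying the scheme above: the ciphertext is the syndrome of a low-weight error vector under a public parity-check matrix. The sketch below is a hypothetical toy using the [7,4] Hamming code, not a secure Goppa-code instance, and omits the matrix scrambling of the real cryptosystem.

```python
# Toy Niederreiter-style encryption sketch (illustrative only): the
# ciphertext is the syndrome s = H * e^T over GF(2) of a weight-t
# error vector e that encodes the plaintext.

def syndrome(H, e):
    """Compute s = H * e^T over GF(2)."""
    return [sum(h * x for h, x in zip(row, e)) % 2 for row in H]

# Parity-check matrix of the [7,4] Hamming code: a toy stand-in for
# the scrambled Goppa-code matrix used in the real scheme.
H = [
    [1, 0, 1, 0, 1, 0, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

# Plaintext encoded as an error vector of weight t = 1.
e = [0, 0, 0, 0, 1, 0, 0]
s = syndrome(H, e)  # ciphertext
```

Decryption requires the private decoding algorithm for the underlying code; recovering e from s and H alone is the syndrome decoding problem on which the security rests.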
We present a coordination method for multiple mobile manipulators to sort objects in clutter. We consider the object rearrangement problem in which the objects must be sorted into different groups in a particular order. In clutter, the order constraints cannot be satisfied easily, since some objects occlude others, leaving the occluded objects inaccessible to the robots. Objects occluding others may need to be moved more than once to make the occluded objects accessible. Such rearrangement problems fall into the class of nonmonotone rearrangement problems, which are computationally intractable. While nonmonotone problems with order constraints are already hard, involving multiple robots additionally requires computing a task allocation. The proposed method first finds a sequence of objects to be sorted using a search such that the order constraint within each group is satisfied. The search can solve nonmonotone instances that require temporary relocation of some objects to access the next object to be sorted. Once a complete sorting sequence is found, the objects in the sequence are assigned to the mobile manipulators using a greedy allocation method. We develop four versions of the method with different search strategies. In the experiments, we show that our method finds a sorting sequence quickly (e.g., 4.6 sec with 20 objects sorted into five groups) even though the solved instances include hard nonmonotone ones. Extensive tests and simulation experiments demonstrate the ability of the method to solve real-world sorting problems using multiple mobile manipulators.
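The greedy allocation step can be sketched as follows: each object in the precomputed sorting sequence is assigned to the robot that can finish it earliest. The costs below are hypothetical travel times; the actual method also accounts for accessibility and temporary relocations.

```python
# Minimal greedy task-allocation sketch: assign tasks in sequence order
# to the robot with the smallest resulting finish time.

def greedy_allocate(sequence, cost, n_robots):
    """cost[obj][i] is the (hypothetical) time for robot i to sort obj."""
    load = [0.0] * n_robots          # accumulated work per robot
    assignment = {}
    for obj in sequence:
        # Pick the robot whose finish time for this object is smallest.
        r = min(range(n_robots), key=lambda i: load[i] + cost[obj][i])
        load[r] += cost[obj][r]
        assignment[obj] = r
    return assignment, load

# Hypothetical instance: three objects, two robots.
cost = {"a": [2.0, 3.0], "b": [1.0, 1.0], "c": [4.0, 2.0]}
assignment, load = greedy_allocate(["a", "b", "c"], cost, n_robots=2)
```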
This paper presents two new augmented flexible (AF) Krylov subspace methods, AF-GMRES and AF-LSQR, to compute solutions of large-scale linear discrete ill-posed problems whose solutions can be modeled as the sum of two independent random variables, exhibiting smooth and sparse stochastic characteristics, respectively. Following a Bayesian modelling approach, this corresponds to adding a covariance-weighted quadratic term and a sparsity-enforcing $\ell_1$ term to the original least-squares minimization scheme. To handle the $\ell_1$ regularization term, the proposed approach constructs a sequence of approximating quadratic problems that are partially solved using augmented flexible Krylov-Tikhonov methods. Compared to other traditional methods used to solve this minimization problem, such as those based on iteratively reweighted norm schemes, the new algorithms build a single (augmented, flexible) approximation (Krylov) subspace that encodes information about the different regularization terms through adaptable "preconditioning", and the solution space is expanded as soon as a new problem within the sequence is defined. This also allows the regularization parameters to be chosen on-the-fly at each iteration. Compared to the most recent work on generalized flexible Krylov methods, our methods offer theoretical assurance of convergence and more stable numerical performance. The efficiency of the new methods is shown through a variety of experiments, including a synthetic image deblurring problem, a synthetic atmospheric transport problem, and fluorescence molecular tomography reconstructions using both synthetic and real experimental data.
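For contrast with the flexible-Krylov approach above, the underlying $\ell_1$-regularized objective $\min_x \|Ax-b\|^2 + \lambda\|x\|_1$ can also be attacked by simple proximal-gradient iteration (ISTA), one of the traditional alternatives. The tiny dense problem below is hypothetical and only illustrates the objective, not the paper's algorithms.

```python
# Minimal ISTA sketch for min_x ||A x - b||^2 + lam * ||x||_1:
# a gradient step on the least-squares term followed by the
# soft-thresholding proximal operator of the l1 term.

def ista(A, b, lam, step, iters):
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        # Residual r = A x - b and gradient g = 2 A^T r.
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(m)]
        g = [2 * sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        # Gradient step, then soft-threshold at step * lam.
        for j in range(n):
            z = x[j] - step * g[j]
            x[j] = max(abs(z) - step * lam, 0.0) * (1 if z > 0 else -1)
    return x

# Hypothetical 2x2 problem: the small second component is driven to zero.
A = [[1.0, 0.0], [0.0, 1.0]]
b = [1.0, 0.05]
x = ista(A, b, lam=0.2, step=0.4, iters=200)
```

For this separable example the minimizer is the soft-thresholded data, x = (0.9, 0), which the iteration reaches geometrically.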
With observational data alone, causal structure learning is a challenging problem. The task becomes easier with access to data collected from perturbations of the underlying system, even when the nature of these perturbations is unknown. Existing methods either do not allow for the presence of latent variables or assume that the latent variables remain unperturbed. However, such assumptions are hard to justify if the nature of the perturbations is unknown. We provide results that enable scoring causal structures in the setting of additive, but unknown, interventions. Specifically, we propose a maximum-likelihood estimator in a structural equation model that exploits system-wide invariances to output an equivalence class of causal structures from perturbation data. Furthermore, under certain structural assumptions on the population model, we provide a simple graphical characterization of all the DAGs in the interventional equivalence class. We illustrate the utility of our framework on synthetic data as well as real data involving California reservoirs and protein expressions. The software implementation is available as the Python package \emph{utlvce}.
This paper introduces a novel method for the automatic detection and handling of nonlinearities in a generic transformation. A nonlinearity index that exploits second-order Taylor expansions and polynomial bounding techniques is first introduced to rigorously estimate the Jacobian variation of a nonlinear transformation. This index is then embedded into a low-order automatic domain splitting algorithm that accurately describes the mapping of an initial uncertainty set through a generic nonlinear transformation by splitting the domain whenever the imposed linearity constraints are not met. The algorithm is illustrated in the critical case of orbital uncertainty propagation, and it is coupled with a tailored merging algorithm that limits the growth in the number of domains over time by recombining them when nonlinearities decrease. The low-order automatic domain splitting algorithm is then combined with Gaussian mixture models to accurately describe the propagation of a probability density function. A detailed analysis of the proposed method is presented, and the impact of the different available degrees of freedom on the accuracy and performance of the method is studied.
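The split-until-linear idea can be sketched in one dimension: a second-order estimate of the relative Jacobian variation over an interval serves as the nonlinearity index, and the interval is bisected recursively until the index falls below a tolerance. The map $f(x)=x^2$ and the tolerance are hypothetical stand-ins for the paper's orbital transformations and polynomial bounds.

```python
# 1-D analogue of nonlinearity-index-driven domain splitting.

def nonlinearity_index(df, d2f, c, r):
    """Second-order estimate of the relative Jacobian variation of f
    over [c - r, c + r]: |f''(c)| * r / |f'(c)|."""
    return abs(d2f(c)) * r / abs(df(c))

def split_domain(df, d2f, lo, hi, tol):
    """Bisect recursively until the linearity constraint is met."""
    c, r = 0.5 * (lo + hi), 0.5 * (hi - lo)
    if nonlinearity_index(df, d2f, c, r) <= tol:
        return [(lo, hi)]
    mid = 0.5 * (lo + hi)
    return (split_domain(df, d2f, lo, mid, tol)
            + split_domain(df, d2f, mid, hi, tol))

# f(x) = x^2: f'(x) = 2x, f''(x) = 2. Split [1, 2] until the index
# drops below 0.1; smaller |x| (stronger relative curvature) gets
# finer subdomains.
subdomains = split_domain(lambda x: 2 * x, lambda x: 2.0, 1.0, 2.0, tol=0.1)
```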
Permutation pattern-avoidance is a central concept of both enumerative and extremal combinatorics. In this paper we study the effect of permutation pattern-avoidance on the complexity of optimization problems. In the context of the dynamic optimality conjecture (Sleator, Tarjan, STOC 1983), Chalermsook, Goswami, Kozma, Mehlhorn, and Saranurak (FOCS 2015) conjectured that the amortized access cost of an optimal binary search tree (BST) is $O(1)$ whenever the access sequence avoids some fixed pattern. They showed a bound of $2^{\alpha{(n)}^{O(1)}}$, which was recently improved to $2^{\alpha{(n)}(1+o(1))}$ by Chalermsook, Pettie, and Yingchareonthawornchai (2023); here $n$ is the BST size and $\alpha(\cdot)$ the inverse-Ackermann function. In this paper we resolve the conjecture, showing a tight $O(1)$ bound. This indicates a barrier to dynamic optimality: any candidate online BST (e.g., splay trees or greedy trees) must match this optimum, but current analysis techniques only give superconstant bounds. More broadly, we argue that the easiness of pattern-avoiding input is a general phenomenon, not limited to BSTs or even to data structures. To illustrate this, we show that when the input avoids an arbitrary, fixed, a priori unknown pattern, one can efficiently compute a $k$-server solution of $n$ requests from a unit interval, with total cost $n^{O(1/\log k)}$, in contrast to the worst-case $\Theta(n/k)$ bound; and a traveling salesman tour of $n$ points from a unit box, of length $O(\log{n})$, in contrast to the worst-case $\Theta(\sqrt{n})$ bound; similar results hold for the Euclidean minimum spanning tree, Steiner tree, and nearest-neighbor graphs. We show both results to be tight. Our techniques build on the Marcus-Tardos proof of the Stanley-Wilf conjecture, and on the recently emerging concept of twin-width; we believe our techniques to be more generally applicable.
In this work we study different Implicit-Explicit (IMEX) schemes for incompressible flow problems with variable viscosity. Unlike most previous work on IMEX schemes, which focuses on the convective part, we here focus on treating parts of the diffusive term explicitly to reduce the coupling between the velocity components. We present different, both monolithic and fractional-step, IMEX alternatives for the variable-viscosity Navier--Stokes system, analysing their theoretical and algorithmic properties. Stability results are proven for all the methods presented, with all these results being unconditional, except for one of the discretisations using a fractional-step scheme, where a CFL condition (in terms of the problem data) is required for showing stability. Our analysis is supported by a series of numerical experiments.
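The IMEX principle can be illustrated on a scalar model problem $u' = -au + g(u)$: the stiff linear part is treated implicitly and the remainder explicitly, so each step requires only a (here trivial) linear solve. This is a generic first-order IMEX Euler sketch, not the paper's Navier-Stokes schemes.

```python
# First-order IMEX Euler for u' = -a*u + g(u):
#   (u_new - u) / dt = -a * u_new + g(u)
# => u_new = (u + dt * g(u)) / (1 + dt * a)

def imex_euler(u0, a, g, dt, steps):
    u = u0
    for _ in range(steps):
        u = (u + dt * g(u)) / (1.0 + dt * a)
    return u

# Stiff decay with a mild explicit source: the scheme stays stable for
# dt = 0.5 even though fully explicit Euler would need dt < 2/a = 0.02.
u_final = imex_euler(u0=1.0, a=100.0, g=lambda u: 0.1 * u, dt=0.5, steps=20)
```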
This paper addresses the problem of designing {\it continuous-discrete} unscented Kalman filter (UKF) implementation methods. More precisely, the aim is to propose MATLAB-based UKF algorithms for {\it accurate} and {\it robust} state estimation of stochastic dynamic systems. The accuracy of {\it continuous-discrete} nonlinear filters heavily depends on how the implementation method manages the discretization error arising at the filter prediction step. We suggest an elegant and accurate implementation framework for tracking the hidden states that utilizes the MATLAB built-in numerical integration schemes developed for solving ordinary differential equations (ODEs). The accuracy is boosted by the discretization error control involved in all MATLAB ODE solvers, which automatically keeps the discretization error below the tolerance value provided by the user. Meanwhile, the robustness of the UKF filtering methods is examined in terms of their stability with respect to roundoff errors. In contrast to the pseudo-square-root UKF implementations established in the engineering literature, which are based on one-rank Cholesky updates, we derive stable square-root methods by utilizing $J$-orthogonal transformations for calculating the Cholesky square-root factors.
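The continuous-discrete prediction step can be sketched for a scalar system $\dot{x}=f(x)$: sigma points drawn from the current mean and variance are propagated through the ODE and recombined. The fixed-step RK4 integrator below is a hypothetical stand-in for a MATLAB ODE solver with built-in error control; the weights use the standard unscented scaling with $\kappa=2$.

```python
import math

# Scalar continuous-discrete UKF prediction: propagate sigma points
# through the dynamics with RK4, then recombine mean and variance.

def rk4(f, x, t_end, steps):
    h = t_end / steps
    for _ in range(steps):
        k1 = f(x)
        k2 = f(x + 0.5 * h * k1)
        k3 = f(x + 0.5 * h * k2)
        k4 = f(x + h * k3)
        x += (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return x

def ukf_predict(f, mean, var, t_end, kappa=2.0, steps=50):
    n = 1  # state dimension
    spread = math.sqrt((n + kappa) * var)
    sigmas = [mean, mean + spread, mean - spread]
    weights = [kappa / (n + kappa), 0.5 / (n + kappa), 0.5 / (n + kappa)]
    # Each sigma point is integrated through the ODE individually.
    prop = [rk4(f, s, t_end, steps) for s in sigmas]
    m = sum(w * x for w, x in zip(weights, prop))
    p = sum(w * (x - m) ** 2 for w, x in zip(weights, prop))
    return m, p

# Linear dynamics dx/dt = -x: the prediction should match
# mean * exp(-t) and var * exp(-2t) up to integration error.
m, p = ukf_predict(lambda x: -x, mean=1.0, var=0.1, t_end=1.0)
```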
This paper proposes a strategy to fundamentally resolve the problems of the conventional s-version of the finite element method (SFEM). Because SFEM can flexibly model an analysis domain by superimposing meshes with different spatial resolutions, it has intrinsic advantages of locally high accuracy, low computation time, and a simple meshing procedure. However, it suffers from disadvantages such as inaccurate numerical integration and matrix singularity. Although several additional techniques have been proposed to mitigate these limitations, they are computationally expensive or ad hoc and detract from its strengths. To resolve these issues, we propose a novel strategy called B-spline-based SFEM. To improve the accuracy of numerical integration, we employ cubic B-spline basis functions with $C^2$-continuity across element boundaries as the global basis functions. To avoid matrix singularity, we apply different basis functions to different meshes; specifically, we employ Lagrange basis functions as the local basis functions. The numerical results indicate that, with the proposed method, the numerical integration can be performed with sufficient accuracy without any of the additional techniques used in conventional SFEM. Furthermore, the proposed method avoids matrix singularity and outperforms conventional methods in terms of convergence when solving the linear equations. Therefore, the proposed method has the potential to reduce computation time while maintaining accuracy comparable to conventional SFEM.
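The cubic B-spline basis functions with $C^2$-continuity mentioned above can be evaluated with the standard Cox-de Boor recursion. The uniform knot vector below is a hypothetical toy setup, unrelated to the paper's actual meshes; the check exploits that the cubic basis forms a partition of unity in the interior of the knot span.

```python
# Cox-de Boor evaluation of B-spline basis functions.

def bspline_basis(i, k, t, knots):
    """Value of the i-th B-spline of degree k at parameter t."""
    if k == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = 0.0
    if knots[i + k] != knots[i]:
        left = ((t - knots[i]) / (knots[i + k] - knots[i])
                * bspline_basis(i, k - 1, t, knots))
    right = 0.0
    if knots[i + k + 1] != knots[i + 1]:
        right = ((knots[i + k + 1] - t) / (knots[i + k + 1] - knots[i + 1])
                 * bspline_basis(i + 1, k - 1, t, knots))
    return left + right

# Uniform knot vector: five cubic (k = 3) basis functions whose values
# sum to 1 on the interior span [knots[3], knots[5]] = [3, 5].
knots = [0, 1, 2, 3, 4, 5, 6, 7, 8]
total = sum(bspline_basis(i, 3, 4.5, knots) for i in range(len(knots) - 4))
```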
The emergence of complex structures in systems governed by a simple set of rules is among the most fascinating aspects of Nature. A particularly powerful and versatile model for investigating this phenomenon is provided by cellular automata, with the Game of Life being one of the most prominent examples. However, this simplified model can be too limiting as a tool for modelling real systems. To address this, we introduce and study an extended version of the Game of Life in which a dynamical process governs the rule selection at each step. We show that this modification significantly alters the behaviour of the game. We also demonstrate that the choice of synchronization policy can be used to control the trade-off between stability and growth in the system.
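The extension can be sketched as a Game of Life whose (birth, survival) rule is selected anew at every step. Here the selecting "dynamical process" is simply a caller-supplied sequence of rules, which is an assumption for illustration; the paper's actual selection dynamics differ. With the classic rule B3/S23 on every step, the familiar behaviour is recovered.

```python
# Game of Life on a small toroidal grid with a per-step rule choice.

def step(grid, birth, survive):
    n = len(grid)
    new = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            # Count the eight neighbours with wrap-around.
            c = sum(grid[(i + di) % n][(j + dj) % n]
                    for di in (-1, 0, 1) for dj in (-1, 0, 1)
                    if (di, dj) != (0, 0))
            if grid[i][j]:
                new[i][j] = 1 if c in survive else 0
            else:
                new[i][j] = 1 if c in birth else 0
    return new

def run(grid, rules):
    for birth, survive in rules:   # one (birth, survival) rule per step
        grid = step(grid, birth, survive)
    return grid

# A blinker under the classic rule B3/S23 returns to its initial
# configuration after two steps.
blinker = [[0, 0, 0, 0, 0],
           [0, 0, 1, 0, 0],
           [0, 0, 1, 0, 0],
           [0, 0, 1, 0, 0],
           [0, 0, 0, 0, 0]]
classic = ({3}, {2, 3})
result = run(blinker, [classic, classic])
```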