P-splines represent an unknown univariate function with uniform B-splines on equidistant knots and penalize the coefficients with a simple difference matrix for smoothness. But for non-uniform B-splines on unevenly spaced knots, such a difference penalty fails, and the conventional derivative penalty has hitherto been the only choice. We propose a general P-spline estimator that lifts this restriction by deriving a general difference penalty for non-uniform B-splines. We also establish a sandwich formula between the derivative and general difference penalties for a better understanding of their connection. Simulations show that the two P-spline variants have comparable MSE performance in general, but in practice one can yield a more satisfactory fit than the other. For example, the bone mineral content (BMC) data favor the general P-spline, while the fossil shell data favor the standard P-spline. We therefore believe both variants to be useful tools for practical modeling. To implement our general P-spline, we have developed two new R packages: gps and gps.mgcv. The latter creates a new "gps" smooth class for mgcv, so that a general P-spline can be specified as s(x, bs = "gps") in a model formula and estimated in the framework of generalized additive models.
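As a point of reference for the standard construction the abstract generalizes, here is a minimal sketch (in Python with NumPy, not the authors' gps package) of the simple difference penalty used by standard P-splines on equidistant knots; the function name is illustrative only.

```python
import numpy as np

def difference_penalty(K, m=2):
    """Order-m difference penalty matrix S = D'D for K B-spline coefficients.

    On equidistant knots this simple D suffices; on unevenly spaced knots it
    does not, which is the restriction the general P-spline removes.
    """
    D = np.diff(np.eye(K), n=m, axis=0)   # (K-m) x K difference matrix
    return D.T @ D                        # K x K penalty matrix

S = difference_penalty(6, m=2)
# Penalized least squares then solves (B'B + lambda * S) beta = B'y.
```

Note that for m = 2 the penalty annihilates constant and linear coefficient sequences, which is exactly the null space of a second-derivative penalty.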
Isogeometric analysis with the boundary element method (IGABEM) has recently gained interest. In this paper, we investigate the approximability of IGABEM for 3D acoustic scattering problems and present a new, improved BeTSSi submarine as a benchmark example. Both Galerkin and collocation formulations are considered in combination with several boundary integral equations (BIEs): the conventional BIE, regularized versions of it, the hyper-singular BIE, and the Burton--Miller formulation. A new adaptive integration routine is presented, and the numerical examples show the importance of the integration procedure in the boundary element method. The numerical examples also include a comparison between standard BEM and IGABEM, which again verifies the higher accuracy obtained from the increased inter-element continuity of the spline basis functions. One of the main objectives of this paper is the benchmarking of acoustic scattering problems, and the method of manufactured solutions is used frequently for this purpose.
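To illustrate the method of manufactured solutions in a much simpler setting than the paper's IGABEM (a 1D Helmholtz problem with a finite difference scheme, chosen only for brevity): pick an exact solution, derive the matching source term, and verify that the numerical error converges at the expected rate.

```python
import numpy as np

def solve_helmholtz_mms(n, k=2.0):
    """Solve -u'' - k^2 u = f on (0,1), u(0)=u(1)=0, with n interior points,
    using the manufactured solution u(x) = sin(pi x)."""
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1 - h, n)                 # interior grid points
    u_exact = np.sin(np.pi * x)                  # manufactured solution
    f = (np.pi**2 - k**2) * np.sin(np.pi * x)    # source chosen to match it
    A = (np.diag(np.full(n, 2.0)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2 - k**2 * np.eye(n)
    u = np.linalg.solve(A, f)
    return np.max(np.abs(u - u_exact))

e1, e2 = solve_helmholtz_mms(50), solve_helmholtz_mms(100)
# Second-order scheme: halving h should cut the error by roughly a factor 4.
```

The same verification logic carries over to BEM discretizations, where the manufactured solution is prescribed on the scatterer boundary.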
Two novel parallel Newton-Krylov solvers, based on Balancing Domain Decomposition by Constraints (BDDC) and Dual-Primal Finite Element Tearing and Interconnecting (FETI-DP), are constructed, analyzed, and tested numerically for implicit time discretizations of the three-dimensional Bidomain system of equations. This model is the most advanced mathematical description of cardiac bioelectrical activity: a degenerate system of two non-linear reaction-diffusion partial differential equations (PDEs) coupled with a stiff system of ordinary differential equations (ODEs). A finite element discretization in space and a segregated implicit discretization in time, based on decoupling the PDEs from the ODEs, require the solution of a non-linear algebraic system at each time step. The Jacobian linear system at each Newton iteration is solved by a Krylov method, accelerated by BDDC or FETI-DP preconditioners, both augmented with the recently introduced {\em deluxe} scaling of the dual variables. A polylogarithmic convergence rate bound is proven for the resulting parallel Bidomain solvers. Extensive numerical experiments on Linux clusters with up to two thousand processors confirm the theoretical estimates, showing that the proposed parallel solvers are scalable and quasi-optimal.
The perfectly matched layer (PML) formulation is a prominent way of handling radiation problems in unbounded domains and has gained interest due to its simple implementation in finite element codes. However, its simplicity can be taken further within the isogeometric framework. This work presents a spline-based PML formulation that avoids an additional coordinate transformation, as the formulation is based on the same spline space in which the numerical solution is sought. The procedure can be automated for any convex artificial boundary. This removes restrictions on domain construction when using PML and can therefore reduce computational cost and improve mesh quality. The use of spline basis functions with higher continuity also improves the accuracy of the numerical solution.
We propose a real-time vision-based teleoperation approach for robotic arms that employs a single depth camera, freeing the user from the need for any wearable devices. Through a natural user interface, this novel approach replaces conventional fine-tuning control with a direct body pose capture process. The proposed approach comprises two main parts. The first is a nonlinear, customizable pose mapping based on Thin-Plate Splines (TPS) that directly transfers human body motion to robotic arm motion in a nonlinear fashion, thus allowing the matching of dissimilar bodies with different workspace shapes and kinematic constraints. The second is a deep neural network hand-state classifier based on Long-term Recurrent Convolutional Networks (LRCN) that exploits the temporal coherence of the acquired depth data. We validate, evaluate, and compare our approach through classical cross-validation experiments on the proposed hand-state classifier, and through user studies on a set of practical experiments involving variants of pick-and-place and manufacturing tasks. Results revealed that LRCN networks outperform single-image Convolutional Neural Networks, and that users' learning curves were steep, allowing the successful completion of the proposed tasks. Compared to a previous approach, the TPS approach showed no increase in task complexity and similar completion times, while providing more precise operation in regions close to the workspace boundaries.
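The TPS-based pose mapping idea can be sketched with SciPy's thin-plate-spline interpolator (the calibration points below are hypothetical, not the authors' actual setup): paired human/robot workspace positions define a smooth nonlinear map that transfers captured body positions into robot coordinates.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Hypothetical paired calibration points (in meters).
human_pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0],
                      [1.0, 1.0], [0.5, 0.5]])        # human workspace
robot_pts = np.array([[0.0, 0.0], [0.6, 0.1], [0.1, 0.7],
                      [0.7, 0.8], [0.35, 0.4]])       # robot workspace

# Thin-plate-spline map: exact at calibration points, smooth in between.
tps = RBFInterpolator(human_pts, robot_pts, kernel="thin_plate_spline")
target = tps(np.array([[0.2, 0.8]]))   # map a captured hand position
```

Because the map is fitted per user, dissimilar bodies and differently shaped workspaces can be matched simply by choosing different calibration pairs.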
Publicly traded companies are required to submit periodic reports with eXtensible Business Reporting Language (XBRL) word-level tags. Manually tagging these reports is tedious and costly. We therefore introduce XBRL tagging as a new entity extraction task for the financial domain and release FiNER-139, a dataset of 1.1M sentences with gold XBRL tags. Unlike typical entity extraction datasets, FiNER-139 uses a much larger label set of 139 entity types. Most annotated tokens are numeric, and the correct tag per token depends mostly on context rather than on the token itself. We show that subword fragmentation of numeric expressions harms BERT's performance, allowing word-level BiLSTMs to perform better. To improve BERT's performance, we propose two simple and effective solutions that replace numeric expressions with pseudo-tokens reflecting the original token shapes and numeric magnitudes. We also experiment with FIN-BERT, an existing BERT model for the financial domain, and release our own BERT model (SEC-BERT), pre-trained on financial filings, which performs best. Through data and error analysis we identify possible limitations, to inspire future work on XBRL tagging.
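The pseudo-token idea can be sketched in a few lines (the exact token formats used by the paper are an assumption here): numeric expressions are replaced either by their digit shape or by a magnitude marker, so that the subword tokenizer no longer fragments each number differently.

```python
import re

def shape_token(tok):
    """Replace every digit with 'X', keeping the token's shape.
    Hypothetical format: '1,234.56' -> 'X,XXX.XX'."""
    return re.sub(r"\d", "X", tok)

def magnitude_token(tok):
    """Encode only the number of integer digits.
    Hypothetical format: '1,234.56' -> '[NUM:4]'."""
    digits = re.sub(r"\D", "", tok.split(".")[0])
    return f"[NUM:{len(digits)}]"

print(shape_token("1,234.56"))      # X,XXX.XX
print(magnitude_token("1,234.56"))  # [NUM:4]
```

Either replacement maps the unbounded vocabulary of numbers onto a small, frequent set of pseudo-tokens while keeping the contextual cues the tagger relies on.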
While the use of digital agents to support crucial decision making is increasing, trust in their suggestions is hard to achieve. Such trust is, however, essential for profiting from their application, resulting in a need for explanations of both the decision-making process and the model itself. For many systems, such as common black-box models, achieving even partial explainability requires complex post-processing, while other systems profit from being, to a reasonable extent, inherently interpretable. We propose a rule-based learning system specifically conceptualised for, and thus especially suited to, these scenarios. Its models are inherently transparent and easily interpretable by design. One key innovation of our system is that the rules' conditions and the composition of rules into a problem's solution are evolved separately. We utilise independent rule fitnesses, which allows users to tailor the model structure to the given requirements for explainability.
This paper studies the application of reconfigurable intelligent surfaces (RIS) to cooperative non-orthogonal multiple access (C-NOMA) networks with simultaneous wireless information and power transfer (SWIPT). We aim to maximize the rate of the strong user while guaranteeing the weak user's quality of service (QoS) by jointly optimizing the power-splitting factors, beamforming coefficients, and RIS reflection coefficients in two transmission phases. The formulated problem is difficult to solve due to its complex, non-convex constraints. To tackle it, we first use an alternating optimization (AO) framework to decompose it into three subproblems, and then solve them with the penalty-based arithmetic-geometric mean approximation (PBAGM) algorithm and the successive convex approximation (SCA) method. Numerical results verify the superiority of the proposed algorithm over baseline schemes.
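The AO framework used above can be illustrated on a toy biconvex problem (this is a generic sketch, not the paper's RIS/SWIPT formulation): fix one block of variables and solve the other in closed form, just as AO alternates among the power-splitting, beamforming, and reflection subproblems.

```python
# Toy biconvex objective: f(x, y) = (x*y - 1)^2 + 0.1*(x^2 + y^2).
def f(x, y):
    return (x * y - 1.0)**2 + 0.1 * (x**2 + y**2)

x, y = 2.0, 0.5                 # arbitrary initial point
for _ in range(100):
    x = y / (y * y + 0.1)       # argmin over x with y fixed (closed form)
    y = x / (x * x + 0.1)       # argmin over y with x fixed (closed form)
# Each block update can only decrease f, so AO converges to a stationary point.
```

In the paper, the block updates are not closed-form: each subproblem is convexified via PBAGM or SCA before being solved, but the outer alternating structure is the same.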
We study the numerical approximation by space-time finite element methods of a multi-physics system coupling hyperbolic elastodynamics with parabolic transport, which models poro- and thermoelasticity. The equations are rewritten as a first-order system in time. Discretizations by continuous Galerkin methods in space and time, with inf-sup stable pairs of finite elements for the spatial approximation of the unknowns, are investigated. Optimal-order error estimates of energy type are proven. Superconvergence at the time nodes is addressed briefly. The error analysis extends to discontinuous and enriched Galerkin space discretizations. The error estimates are confirmed by numerical experiments.
We introduce a novel methodology for particle filtering in dynamical systems where the evolution of the signal of interest is described by an SDE and observations are collected instantaneously at prescribed time instants. The new approach includes the discretisation of the SDE and the design of efficient particle filters for the resulting discrete-time state-space model. The discretisation scheme converges with weak order 1 and is devised to create a sequential dependence structure along the coordinates of the discrete-time state vector. We introduce a class of space-sequential particle filters that exploits this structure to improve performance when the system dimension is large. This is illustrated numerically by a set of computer simulations for a stochastic Lorenz 96 system with additive noise. The new space-sequential particle filters attain approximately constant estimation errors as the dimension of the Lorenz 96 system increases, with a computational cost that grows polynomially, rather than exponentially, with the system dimension. Besides the new numerical scheme and particle filters, we provide a general framework for discrete-time filtering in continuous-time dynamical systems described by an SDE and instantaneous observations. Provided that the SDE is discretised using a weakly convergent scheme, we prove that the marginal posterior laws of the resulting discrete-time state-space model converge, under a suitably defined metric, to the marginal posterior laws of the original continuous-time state-space model. This result is general and not restricted to the numerical scheme or particle filters specifically studied in this manuscript.
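The overall pipeline of discretising an SDE and filtering the resulting discrete-time model can be sketched in a toy 1D example (a standard Euler-Maruyama scheme and bootstrap particle filter, not the paper's space-sequential construction): the SDE dx = -x dt + dW is discretised with weak order 1, and the same kernel drives the filter's particles.

```python
import numpy as np

rng = np.random.default_rng(0)
dt, T, N, sig_obs = 0.05, 200, 500, 0.3

# Simulate the "true" signal and instantaneous noisy observations.
x = np.zeros(T)
for t in range(1, T):
    x[t] = x[t-1] - x[t-1]*dt + np.sqrt(dt)*rng.standard_normal()
y = x + sig_obs * rng.standard_normal(T)

# Bootstrap particle filter built on the same Euler-Maruyama kernel.
p = rng.standard_normal(N)                                # initial particles
est = np.zeros(T)
for t in range(T):
    p = p - p*dt + np.sqrt(dt)*rng.standard_normal(N)     # propagate
    w = np.exp(-0.5*((y[t] - p)/sig_obs)**2)              # Gaussian likelihood
    w /= w.sum()
    est[t] = np.dot(w, p)                                 # posterior mean
    p = rng.choice(p, size=N, p=w)                        # multinomial resample
```

The paper's contribution lies in how the discretisation orders the state coordinates so that, in high dimension, the particles can be weighted and resampled coordinate by coordinate instead of all at once.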
We demonstrate that analog transmissions and matched filtering alone can realize the function of an edge server in federated learning (FL). A network with massively distributed user equipments (UEs) can therefore achieve large-scale FL without an edge server. We also develop a training algorithm that allows UEs to continuously perform local computing without being interrupted by the global parameter uploading, exploiting the full potential of the UEs' processing power. We derive convergence rates for the proposed schemes to quantify their training efficiency. The analysis reveals that when the interference obeys a Gaussian distribution, the proposed algorithm retrieves the convergence rate of server-based FL. If the interference distribution is heavy-tailed, however, then the heavier the tail, the slower the algorithm converges. Nonetheless, the system run time can be largely reduced by enabling computation in parallel with communication, and the gain is particularly pronounced when communication latency is high. These findings are corroborated by extensive simulations.
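The over-the-air aggregation idea behind this server-free scheme can be sketched in a toy model (a simplified illustration, not the paper's full system): when K UEs transmit their local gradients simultaneously over the same channel, the superposition recovered by matched filtering is already the sum of the gradients, so dividing by K yields the global average, with channel noise as the only distortion.

```python
import numpy as np

rng = np.random.default_rng(1)
K, d = 100, 10                          # number of UEs, model dimension
grads = rng.standard_normal((K, d))     # local gradients held by each UE

noise = 0.01 * rng.standard_normal(d)   # additive channel noise at the receiver
received = grads.sum(axis=0) + noise    # superposition "on the air"
global_grad = received / K              # aggregation without an edge server

true_avg = grads.mean(axis=0)           # what a server would have computed
```

In this Gaussian-noise toy model the aggregation error shrinks as 1/K; the heavy-tailed interference case analyzed in the paper is precisely where this clean averaging behaviour degrades.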