Isogeometric analysis with the boundary element method (IGABEM) has recently gained interest. In this paper, the approximability of IGABEM on 3D acoustic scattering problems is investigated, and a new, improved BeTSSi submarine is presented as a benchmark example. Both Galerkin and collocation formulations are considered in combination with several boundary integral equations (BIE): in addition to the conventional BIE, regularized versions of it are considered, as are the hyper-singular BIE and the Burton--Miller formulation. A new adaptive integration routine is presented, and the numerical examples show the importance of the integration procedure in the boundary element method. The numerical examples also include comparisons between standard BEM and IGABEM, which again verify the higher accuracy obtained from the increased inter-element continuity of the spline basis functions. One of the main objectives of this paper is the benchmarking of acoustic scattering problems, and the method of manufactured solutions is used frequently in this regard.
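For reference, the Burton--Miller formulation mentioned above is the standard linear combination of the conventional and hyper-singular BIEs; the schematic form below uses generic operator notation (not the paper's) together with the commonly chosen coupling parameter $\alpha = \mathrm{i}/k$. Writing the conventional BIE as $\mathcal{C}[p](x) = p_{\mathrm{inc}}(x)$ and the hyper-singular BIE as $\mathcal{H}[p](x) = \partial p_{\mathrm{inc}}(x)/\partial n_x$, the combined equation reads
\[
\mathcal{C}[p](x) + \alpha\, \mathcal{H}[p](x) \;=\; p_{\mathrm{inc}}(x) + \alpha\, \frac{\partial p_{\mathrm{inc}}}{\partial n_x}(x), \qquad \alpha = \frac{\mathrm{i}}{k},
\]
which is uniquely solvable at all real wavenumbers and thus avoids the fictitious eigenfrequencies that affect either equation on its own.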
Empirical likelihood is a popular nonparametric statistical tool that does not require any distributional assumptions. In this paper, we explore the possibility of conducting variable selection via Bayesian empirical likelihood. We show theoretically that when the prior distribution satisfies certain mild conditions, the corresponding Bayesian empirical likelihood estimators are posteriorly consistent and variable selection consistent. As special cases, we show that the priors of the Bayesian empirical likelihood LASSO and SCAD satisfy these conditions and can thus identify the non-zero elements of the parameter with probability tending to 1. In addition, it is easy to verify that those conditions are met for other widely used priors such as ridge, elastic net and adaptive LASSO. Empirical likelihood depends on a parameter that needs to be obtained by numerically solving a non-linear equation. Thus, there exists no conjugate prior for the posterior distribution, which causes slow convergence of the MCMC sampling algorithm in some cases. To solve this problem, we propose a novel approach that uses an approximating distribution as the proposal. The computational results demonstrate quick convergence for the examples used in the paper. We use both simulation and real data analyses to illustrate the advantages of the proposed methods.
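To make the non-linear equation mentioned above concrete, recall the standard Owen form of the profile empirical likelihood (generic notation, not specific to this paper): with estimating functions $g(X_i,\theta)$ satisfying $\mathrm{E}[g(X,\theta_0)] = 0$, the log empirical likelihood ratio is
\[
\ell(\theta) = -\sum_{i=1}^{n} \log\bigl(1 + \lambda(\theta)^{\top} g(X_i,\theta)\bigr), \qquad
p_i = \frac{1}{n\bigl(1 + \lambda(\theta)^{\top} g(X_i,\theta)\bigr)},
\]
where the Lagrange multiplier $\lambda(\theta)$ solves $\sum_{i=1}^{n} g(X_i,\theta)\big/\bigl(1 + \lambda(\theta)^{\top} g(X_i,\theta)\bigr) = 0$. It is this root-finding step, performed for every value of $\theta$, that precludes a conjugate prior.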
Fluid-structure systems occur in a range of scientific and engineering applications. The immersed boundary (IB) method is a widely recognized and effective modeling paradigm for simulating fluid-structure interaction (FSI) in such systems, but a difficulty of the IB formulation is that the pressure and viscous stress are generally discontinuous at the interface. The conventional IB method regularizes these discontinuities, which typically yields low-order accuracy at these interfaces. The immersed interface method (IIM) is an IB-like approach to FSI that sharply imposes stress jump conditions, enabling higher-order accuracy, but prior applications of the IIM have been largely restricted to methods that rely on smooth representations of the interface geometry. This paper introduces an IIM that uses only a $C^0$ representation of the interface, such as those provided by standard nodal Lagrangian FE methods. Verification examples for models with prescribed motion demonstrate that the method sharply resolves stress discontinuities along the IB while avoiding the need for analytic information about the interface geometry. We demonstrate that only the lowest-order jump conditions for the pressure and velocity gradient are required to realize global 2nd-order accuracy. Specifically, we show a 2nd-order global convergence rate along with nearly 2nd-order local convergence in the Eulerian velocity, and between 1st- and 2nd-order global convergence rates along with 1st-order local convergence for the Eulerian pressure. We also show 2nd-order local convergence in the interfacial displacement and velocity along with 1st-order local convergence in the fluid traction. As a demonstration of the method's ability to tackle complex geometries, this approach is also used to simulate flow in an anatomical model of the inferior vena cava.
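For orientation, the lowest-order jump conditions referred to above take the following standard IIM form for a singular interfacial force density $\mathbf{F}$ (sign and normal-orientation conventions may differ from those in the paper):
\[
[\![\mathbf{u}]\!] = \mathbf{0}, \qquad
[\![p]\!] = \mathbf{F}\cdot\mathbf{n}, \qquad
\Bigl[\!\Bigl[\mu\,\frac{\partial \mathbf{u}}{\partial n}\Bigr]\!\Bigr] = -\bigl(\mathbf{F} - (\mathbf{F}\cdot\mathbf{n})\,\mathbf{n}\bigr),
\]
i.e., the pressure jumps by the normal component of the interfacial force density, while the normal derivative of the velocity jumps by its tangential part.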
Wireless sensor networks are among the most promising technologies of the current era because of their small size, low cost, and ease of deployment. With the increasing number of wireless sensors, the probability of generating missing data also rises. This incomplete data could lead to disastrous consequences if used for decision-making. There is rich literature dealing with this problem. However, most approaches show performance degradation when a sizable amount of data is lost. Inspired by the emerging field of graph signal processing, this paper performs a new study of a Sobolev reconstruction algorithm in wireless sensor networks. Experimental comparisons on several publicly available datasets demonstrate that the algorithm outperforms multiple state-of-the-art techniques by margins of up to 54%. We further show that this algorithm consistently recovers the missing data even under massive data loss.
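As a minimal illustration of Sobolev-smoothness reconstruction of a graph signal (a static, single-snapshot sketch in Python; the function name and the hyperparameters beta, eps and penalty are illustrative and not taken from the paper):

import numpy as np

def sobolev_reconstruct(L, sampled_idx, y, beta=1.0, eps=0.1, penalty=1e-2):
    # Recover a full graph signal from samples y observed at the vertices in
    # sampled_idx by solving
    #   min_x ||M x - y||^2 + penalty * x^T (L + eps*I)^beta x,
    # where M selects the sampled vertices and L is the graph Laplacian.
    n = L.shape[0]
    M = np.zeros((len(sampled_idx), n))
    M[np.arange(len(sampled_idx)), sampled_idx] = 1.0
    # Matrix power of the shifted Laplacian via eigendecomposition (symmetric PSD).
    w, V = np.linalg.eigh(L + eps * np.eye(n))
    S = V @ np.diag(w ** beta) @ V.T
    # Normal equations of the regularized least-squares problem.
    return np.linalg.solve(M.T @ M + penalty * S, M.T @ y)

The (L + eps*I)^beta term is the Sobolev penalty; for beta = 1 and eps = 0 it reduces to the usual graph Laplacian smoothness prior.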
Interpolating between measures supported by polygonal or polyhedral domains is a problem that has recently been addressed by the semi-discrete optimal transport framework. Within this framework, one of the domains is discretized with a set of samples, while the other one remains continuous. In this paper we present a method to introduce some symmetry into the solution using coupled power diagrams. This symmetry is key to capturing the discontinuities of the transport map reflected in the geometry of the power cells. We design our method as a fixed-point algorithm alternating between computations of semi-discrete transport maps and recentering of the sites. The resulting objects are coupled power diagrams with identical geometry, allowing us to approximate displacement interpolation through linear interpolation of the mesh vertices. Through these coupled power diagrams, we obtain a natural way of jointly sampling measures.
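In the generic notation of McCann's displacement interpolation (not necessarily the paper's notation), the construction approximates
\[
\mu_t = \bigl((1-t)\,\mathrm{Id} + t\,T\bigr)_{\#}\,\mu_0, \qquad t \in [0,1],
\]
where $T$ is the optimal transport map from $\mu_0$ to $\mu_1$, by linearly interpolating the matched vertices of the two coupled power diagrams, $x_t^{(i)} = (1-t)\,x_0^{(i)} + t\,x_1^{(i)}$.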
We consider the problem of kernel classification. Works on kernel regression have shown that the rate of decay of the prediction error with the number of samples for a large class of data-sets is well characterized by two quantities: the capacity and source of the data-set. In this work, we compute the decay rates for the misclassification (prediction) error under the Gaussian design, for data-sets satisfying source and capacity assumptions. We derive the rates as a function of the source and capacity coefficients for two standard kernel classification settings, namely margin-maximizing Support Vector Machines (SVM) and ridge classification, and contrast the two methods. As a consequence, we find that the known worst-case rates are loose for this class of data-sets. Finally, we show that the rates presented in this work are also observed on real data-sets.
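One common formalization of these assumptions from the kernel learning literature (the naming of the exponents may differ slightly from the paper's) is
\[
\lambda_k \asymp k^{-\alpha} \ \ (\alpha > 1) \quad \text{(capacity)}, \qquad
f^{\star} = T^{r} g \ \text{ for some square-integrable } g \quad \text{(source)},
\]
where the $\lambda_k$ are the eigenvalues of the kernel integral operator $T$ and $f^{\star}$ is the target function: a larger capacity coefficient $\alpha$ means a faster-decaying spectrum, and a larger source coefficient $r$ means a smoother target.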
We propose a fourth-order unfitted characteristic finite element method to solve the advection-diffusion equation on time-varying domains. Based on a characteristic-Galerkin formulation, our method combines the cubic MARS method for interface tracking, the fourth-order backward differentiation formula for temporal integration, and an unfitted finite element method for spatial discretization. Our convergence analysis accounts for the errors of discretely representing the moving boundary, of tracing the boundary markers, and of the spatial discretization and temporal integration of the governing equation. Numerical experiments are performed on a rotating domain and a severely deformed domain to verify our theoretical results and to demonstrate the optimal convergence of the proposed method.
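For reference, the fourth-order backward differentiation formula (BDF4) used for the temporal integration reads, for a generic evolution equation $\dot{u} = f(t, u)$ with time step $\Delta t$,
\[
\tfrac{25}{12}\,u^{n+1} - 4\,u^{n} + 3\,u^{n-1} - \tfrac{4}{3}\,u^{n-2} + \tfrac{1}{4}\,u^{n-3} = \Delta t\, f\bigl(t^{n+1}, u^{n+1}\bigr),
\]
an implicit four-step scheme.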
The computation of the partial generalized singular value decomposition (GSVD) of large-scale matrix pairs can be approached by means of iterative methods based on expanding subspaces, particularly Krylov subspaces. We consider the joint Lanczos bidiagonalization method and analyze the feasibility of adapting the thick restart technique that has been used successfully in the context of other linear algebra problems. Numerical experiments illustrate the effectiveness of the proposed method. We also compare the new method with an alternative solution via equivalent eigenvalue problems, considering accuracy as well as computational performance. The analysis is done using a parallel implementation in the SLEPc library.
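As a reminder of the decomposition being computed (in the standard form, assuming for simplicity that the stacked matrix $[A; B]$ has full column rank): for $A \in \mathbb{R}^{m \times n}$ and $B \in \mathbb{R}^{p \times n}$ there exist $U$ and $V$ with orthonormal columns and a nonsingular $X$ such that
\[
U^{\top} A X = C = \operatorname{diag}(c_i), \qquad
V^{\top} B X = S = \operatorname{diag}(s_i), \qquad
C^{\top} C + S^{\top} S = I,
\]
with generalized singular values $\sigma_i = c_i / s_i$; a partial GSVD seeks only a selected subset of these triplets.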
The fusion of multi-modal sensors has become increasingly popular in autonomous driving and intelligent robots since it can provide richer information than any single sensor and enhance reliability in complex environments. Multi-sensor extrinsic calibration is one of the key prerequisites for sensor fusion. However, such calibration is difficult due to the variety of sensor modalities and the need for calibration targets and human labor. In this paper, we demonstrate a new targetless cross-modal calibration framework focusing on the extrinsic transformations among stereo cameras, thermal cameras, and laser sensors. Specifically, the calibration between the stereo and laser sensors is conducted in 3D space by minimizing the registration error, while the extrinsics between the thermal camera and the other two sensors are estimated by optimizing the alignment of edge features. Our method requires no dedicated targets and performs the multi-sensor calibration in a single shot without human interaction. Experimental results show that the calibration framework is accurate and applicable in general scenes.
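As a schematic of the stereo-laser step (an ICP-style point-to-point objective is shown; the paper's exact registration error may differ), the rigid extrinsic transformation is estimated as
\[
(\hat{R}, \hat{t}) = \arg\min_{R \in SO(3),\, t \in \mathbb{R}^3} \sum_{i} \bigl\| R\, p_i + t - q_i \bigr\|_2^2,
\]
where the $p_i$ are 3D points reconstructed from the stereo camera and the $q_i$ are the corresponding laser points.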
We analyze and validate the virtual element method combined with a projection approach similar to the one in [1, 2] to solve problems on two-dimensional domains with curved boundaries, approximated by polygonal domains obtained as unions of square elements from a uniform structured mesh, such as those that naturally arise when the domain is derived from an image. We show, both theoretically and numerically, that resorting to polygonal elements makes it possible to satisfy the assumptions required for the stability of the projection approach, thus allowing us to fully exploit the potential of higher-order methods and making the resulting approach an effective alternative to the finite element method.
In linear regression we wish to estimate the optimum linear least squares predictor for a distribution over $d$-dimensional input points and real-valued responses, based on a small sample. Under standard random design analysis, where the sample is drawn i.i.d. from the input distribution, the least squares solution for that sample can be viewed as the natural estimator of the optimum. Unfortunately, this estimator almost always incurs an undesirable bias coming from the randomness of the input points, which is a significant bottleneck in model averaging. In this paper we show that it is possible to draw a non-i.i.d. sample of input points such that, regardless of the response model, the least squares solution is an unbiased estimator of the optimum. Moreover, this sample can be produced efficiently by augmenting a previously drawn i.i.d. sample with an additional set of $d$ points, drawn jointly according to a certain determinantal point process constructed from the input distribution rescaled by the squared volume spanned by the points. Motivated by this, we develop a theoretical framework for studying volume-rescaled sampling, and in the process prove a number of new matrix expectation identities. We use them to show that for any input distribution and $\epsilon>0$ there is a random design consisting of $O(d\log d+ d/\epsilon)$ points from which an unbiased estimator can be constructed whose expected square loss over the entire distribution is bounded by $1+\epsilon$ times the loss of the optimum. We provide efficient algorithms for generating such unbiased estimators in a number of practical settings and support our claims experimentally.
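The key identity is easiest to state in the finite, discrete analogue (notation illustrative): for a fixed design matrix $X \in \mathbb{R}^{n \times d}$ with responses $y \in \mathbb{R}^{n}$, size-$d$ volume sampling draws a subset $S \subseteq \{1, \dots, n\}$ with $|S| = d$ according to
\[
\Pr(S) = \frac{\det(X_S)^2}{\det(X^{\top} X)}, \qquad \text{which yields} \qquad \mathbb{E}\bigl[X_S^{-1} y_S\bigr] = (X^{\top} X)^{-1} X^{\top} y,
\]
so the least squares solution computed from only the $d$ sampled rows is an unbiased estimator of the full least squares solution; volume-rescaled sampling extends this idea from a fixed matrix to an arbitrary input distribution.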