A method is presented for the evaluation of integrals on tetrahedra where the integrand has a singularity at one vertex. The approach uses a transformation to spherical polar coordinates which explicitly eliminates the singularity and facilitates the evaluation of integration limits. The method can also be implemented in an adaptive form which gives convergence to a required tolerance. Results from the method are compared to the output from an exact analytical method and show high accuracy. In particular, when the adaptive algorithm is used, highly accurate results are found for poorly conditioned tetrahedra which normally present difficulties for numerical quadrature techniques.
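As a rough, self-contained illustration of the underlying idea (not the paper's algorithm for general tetrahedra), the sketch below integrates a function with a $1/r$ singularity at one vertex of the reference tetrahedron with vertices $(0,0,0)$, $(1,0,0)$, $(0,1,0)$, $(0,0,1)$. After mapping to spherical polar coordinates centred at the singular vertex, the Jacobian $r^2\sin\theta$ cancels the singularity and the radial limit follows from the opposite face $x+y+z=1$; all function and variable names are illustrative.

```python
import numpy as np

def tet_vertex_singular_quad(g, n=16):
    """Integrate g(x, y, z) over the reference tetrahedron
    {x, y, z >= 0, x + y + z <= 1} using spherical polar coordinates
    (r, theta, phi) centred at the singular vertex (0, 0, 0).

    The Jacobian r**2 * sin(theta) absorbs an O(1/r) singularity of g
    at the origin, so standard Gauss-Legendre quadrature applies.
    """
    # Gauss-Legendre nodes/weights mapped to [0, 1]
    t, w = np.polynomial.legendre.leggauss(n)
    t, w = 0.5 * (t + 1.0), 0.5 * w

    total = 0.0
    for ti, wi in zip(t, w):              # theta in [0, pi/2]
        theta = 0.5 * np.pi * ti
        for tj, wj in zip(t, w):          # phi in [0, pi/2]
            phi = 0.5 * np.pi * tj
            # radial limit: intersection with the face x + y + z = 1
            rmax = 1.0 / (np.sin(theta) * (np.cos(phi) + np.sin(phi))
                          + np.cos(theta))
            for tk, wk in zip(t, w):      # r in [0, rmax]
                r = rmax * tk
                x = r * np.sin(theta) * np.cos(phi)
                y = r * np.sin(theta) * np.sin(phi)
                z = r * np.cos(theta)
                # Jacobian of the spherical map times the scaling of
                # (theta, phi, r) onto the unit cube of quadrature nodes
                jac = r**2 * np.sin(theta) * rmax * (0.5 * np.pi) ** 2
                total += wi * wj * wk * g(x, y, z) * jac
    return total

# sanity checks: the volume of the tetrahedron (= 1/6), and a 1/r-singular integrand
print(tet_vertex_singular_quad(lambda x, y, z: 1.0))
print(tet_vertex_singular_quad(lambda x, y, z: 1.0 / np.sqrt(x*x + y*y + z*z)))
```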
Let $\bx_j = \btheta +\bep_j$, $j=1,\ldots,n$, be observations of an unknown parameter $\btheta$ in a Euclidean or separable Hilbert space $\scrH$, where the $\bep_j$ are noise terms, modeled as random elements in $\scrH$ from a general distribution. We study the estimation of $f(\btheta)$ for a given functional $f:\scrH\rightarrow \RR$ based on the $\bx_j$'s. The key element of our approach is a new method which we call the High-Order Degenerate Statistical Expansion. It leverages the classical multivariate Taylor expansion and degenerate $U$-statistics and yields an elegant explicit formula. In the univariate case $\scrH=\RR$, the formula expresses the error of the proposed estimator as a sum of order-$k$ degenerate $U$-products of the noises with coefficients $f^{(k)}(\btheta)/k!$ and an explicit remainder term in the form of a Riemann-Liouville integral, as in the Taylor expansion around the true $\btheta$. For general $\scrH$, the formula expresses the estimation error in terms of the inner product of $f^{(k)}(\btheta)/k!$ with the average of the tensor products of $k$ noises with distinct indices, plus a parallel extension of the remainder term from the univariate case. This makes the proposed method a natural statistical version of the classical Taylor expansion. The proposed estimator can be viewed as a jackknife estimator of an ideal degenerate expansion of $f(\cdot)$ around the true $\btheta$ with the degenerate $U$-products of the noises, and can be approximated by the bootstrap. Thus, the jackknife, bootstrap and Taylor expansion approaches all converge to the proposed estimator. We develop risk bounds for the proposed estimator and a central limit theorem under a second moment condition (even for expansions of order higher than two). We apply this new method to generalize several existing results with smooth and nonsmooth $f$ to general $\bep_j$'s under only minimal moment constraints.
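For concreteness, a schematic of the univariate ($\scrH=\RR$) expansion described above might read as follows; the notation $\widehat{f}$ for the estimator, the normalization of the degenerate $U$-products, and the exact form of the remainder $R_K$ are stated here only as illustrative assumptions, not as the paper's precise result:
\[
  \widehat{f} - f(\btheta)
  \;=\; \sum_{k=1}^{K} \frac{f^{(k)}(\btheta)}{k!}
        \binom{n}{k}^{-1}
        \sum_{1 \le j_1 < \cdots < j_k \le n}
        \bep_{j_1} \cdots \bep_{j_k}
  \;+\; R_K ,
\]
where the inner sum is the order-$k$ degenerate $U$-product of the noises with distinct indices, and $R_K$ collects the Riemann-Liouville-type remainder.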
This paper develops a lowest-order conforming virtual element method for planar linear elasticity in the displacement/traction formulation, which can be viewed as an extension of the idea in Brenner \& Sung (1992) to the virtual element method, with the family of polygonal meshes satisfying a very general geometric assumption. The method is shown to be uniformly convergent with respect to the Lam\'{e} constant, with optimal rates of convergence.
We develop a lowest-order nonconforming virtual element method for planar linear elasticity, which can be viewed as an extension of the idea in Falk (1991) to the virtual element method (VEM), with the family of polygonal meshes satisfying a very general geometric assumption. The method is shown to be uniformly convergent in the nearly incompressible case with optimal rates of convergence. The crucial step is to establish a discrete Korn's inequality, which yields the coercivity of the discrete bilinear form. We also provide a unified locking-free scheme for both the conforming and nonconforming VEMs in the lowest-order case. Numerical results validate the feasibility and effectiveness of the proposed numerical algorithms.
Support vector machine (SVM) is a powerful classification method that has achieved great success in many fields. Since its performance can be seriously impaired by redundant covariates, model selection techniques are widely used for SVM with high-dimensional covariates. As an alternative to model selection, significant progress has been made in the area of model averaging over the past decades, yet no frequentist model averaging method has been considered for SVM. This work fills that gap by proposing a frequentist model averaging procedure for SVM which selects the optimal weights by cross-validation. Even when the number of covariates diverges at an exponential rate in the sample size, we show that the proposed method is asymptotically optimal in the sense that the ratio of its hinge loss to the lowest possible loss converges to one. We also derive the convergence rate, which provides more insight into model averaging. Compared with model selection methods for SVM, which require the tedious but critical task of tuning parameter selection, the model averaging method avoids this task and shows promising performance in empirical studies.
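As a purely illustrative sketch (not the paper's estimator), one way to average candidate SVMs with weights chosen by cross-validation is to minimize the cross-validated hinge loss of the weighted decision function over the probability simplex. The candidate covariate subsets, the linear-SVM solver, and the helper names below are assumptions made for the example.

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.model_selection import KFold
from sklearn.svm import LinearSVC

def cv_model_average_svm(X, y, candidate_features, n_splits=5):
    """Average linear SVMs fit on different covariate subsets.

    Weights on the probability simplex are chosen to minimize the
    cross-validated hinge loss of the averaged decision function.
    Labels y are assumed to take values in {-1, +1}.
    """
    n, M = len(y), len(candidate_features)
    scores = np.zeros((n, M))            # out-of-fold decision values
    for train, test in KFold(n_splits=n_splits, shuffle=True, random_state=0).split(X):
        for m, cols in enumerate(candidate_features):
            clf = LinearSVC(C=1.0).fit(X[np.ix_(train, cols)], y[train])
            scores[test, m] = clf.decision_function(X[np.ix_(test, cols)])

    def cv_hinge(w):                     # hinge loss of the weighted combination
        return np.mean(np.maximum(0.0, 1.0 - y * (scores @ w)))

    w0 = np.full(M, 1.0 / M)
    res = minimize(cv_hinge, w0, method="SLSQP",
                   bounds=[(0.0, 1.0)] * M,
                   constraints={"type": "eq", "fun": lambda w: w.sum() - 1.0})
    # refit each candidate model on the full data and return it with its weight
    models = [LinearSVC(C=1.0).fit(X[:, cols], y) for cols in candidate_features]
    return res.x, models
```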
Efficient channel estimation is challenging in full-dimensional multiple-input multiple-output communication systems, particularly in those with hybrid digital-analog architectures. Under a compressive sensing framework, this letter first designs a uniform dictionary based on a spherical Fibonacci grid to represent channels in a sparse domain, yielding smaller angular errors in three-dimensional beamspace than traditional dictionaries. Then, a Bayesian inference-aided greedy pursuit algorithm is developed to estimate channels in the frequency domain. Finally, simulation results demonstrate that both the designed dictionary and the proposed Bayesian channel estimation outperform the benchmark schemes and attain a lower normalized mean squared error of channel estimation.
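For reference, a spherical Fibonacci grid (the nearly uniform point set underlying such a dictionary) can be generated as in the sketch below; mapping grid points to dictionary steering vectors is system-specific and is omitted here, and the function name is illustrative.

```python
import numpy as np

def fibonacci_sphere(n):
    """Return n nearly uniformly distributed unit vectors on the sphere
    using the Fibonacci (golden-angle) lattice."""
    golden_angle = np.pi * (3.0 - np.sqrt(5.0))        # ~2.39996 rad
    k = np.arange(n)
    z = 1.0 - (2.0 * k + 1.0) / n                      # uniform in [-1, 1]
    r = np.sqrt(1.0 - z**2)
    phi = golden_angle * k
    return np.stack([r * np.cos(phi), r * np.sin(phi), z], axis=1)

points = fibonacci_sphere(1024)   # candidate directions for the sparse dictionary
```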
In this paper, we study deep neural networks (DNNs) for solving high-dimensional evolution equations with oscillatory solutions. Unlike deep least-squares methods that treat the time and space variables simultaneously, we propose a deep adaptive basis Galerkin (DABG) method which employs a spectral-Galerkin method with a tensor-product basis for the time variable, well suited to oscillatory solutions, and deep neural networks for the high-dimensional space variables. The proposed method leads to a linear system of differential equations whose unknowns are DNNs, which can be trained via the loss function. We establish an a posteriori estimate of the solution error that is bounded by the minimal loss function plus a term $O(N^{-m})$, where $N$ is the number of basis functions and $m$ characterizes the regularity of the equation, and we show that if the true solution is a Barron-type function, the error bound converges to zero as $M=O(N^p)$ tends to infinity, where $M$ is the width of the networks used and $p$ is a positive constant. Numerical examples, including high-dimensional linear parabolic and hyperbolic equations and the nonlinear Allen-Cahn equation, demonstrate that the proposed DABG method outperforms existing DNN-based methods.
A unified construction of div-conforming finite element tensors, including the vector div element, the symmetric div matrix element, the traceless div matrix element, and, in general, tensors with constraints, is developed in this work. It is based on the geometric decomposition of Lagrange elements into bubble functions on each sub-simplex. The tensor at each sub-simplex is then decomposed into tangential and normal components: the tangential component forms the bubble function space and the normal component characterizes the trace. A thorough exploration of boundary degrees of freedom is presented for discovering various finite elements. The developed finite element spaces are div-conforming and satisfy the discrete inf-sup condition. An explicit basis of the constraint tensor space is also established.
Content-delivery applications can achieve scalability and reduce wide-area network traffic using geographically distributed caches. However, each deployed cache has an associated cost, and under time-varying request rates (e.g., a daily cycle) there may be long periods when the request rate from the local region is not high enough to justify this cost. Cloud computing offers a solution to problems of this kind, by supporting dynamic allocation and release of resources. In this paper, we analyze the potential benefits from dynamically instantiating caches using resources from cloud service providers. We develop novel analytic caching models that accommodate time-varying request rates, transient behavior as a cache fills following instantiation, and selective cache insertion policies. Within the context of a simple cost model, we then develop bounds and compare policies with optimized parameter selections to obtain insights into key cost/performance tradeoffs. We find that dynamic cache instantiation can provide substantial cost reductions, that the potential reductions depend strongly on the object popularity skew, and that selective cache insertion can be even more beneficial in this context than with conventional edge caches. Finally, our contributions also include accurate and easy-to-compute approximations that are shown to be applicable to LRU caches under time-varying workloads.
In this work we propose a deep adaptive sampling (DAS) method for solving partial differential equations (PDEs), where deep neural networks are utilized to approximate the solutions of PDEs and deep generative models are employed to generate new collocation points that refine the training set. The overall procedure of DAS consists of two components: solving the PDEs by minimizing the residual loss on the collocation points in the training set, and generating a new training set to further improve the accuracy of the current approximate solution. In particular, we treat the residual as a probability density function and approximate it with a deep generative model, called KRnet. The new samples from KRnet are consistent with the distribution induced by the residual, i.e., more samples are located in regions of large residual and fewer samples in regions of small residual. Analogous to classical adaptive methods such as the adaptive finite element method, KRnet acts as an error indicator that guides the refinement of the training set. Compared with the neural network approximation obtained with uniformly distributed collocation points, the developed algorithms can significantly improve the accuracy, especially for low-regularity and high-dimensional problems. We present a theoretical analysis showing that the proposed DAS method can reduce the error bound, and we demonstrate its effectiveness with numerical experiments.
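The adaptive loop can be illustrated with a much simpler stand-in for KRnet: draw a large pool of uniform candidate points, evaluate the PDE residual there, and resample candidates with probability proportional to the residual magnitude so that new collocation points concentrate where the residual is large. The sketch below assumes a user-supplied `residual_fn` and only schematizes this sampling step; it is not the paper's KRnet-based generative model.

```python
import numpy as np

def residual_weighted_resample(residual_fn, n_new, dim, n_candidates=100_000,
                               low=0.0, high=1.0, rng=None):
    """Draw new collocation points with density roughly proportional to the
    squared PDE residual, via self-normalized resampling of uniform candidates.

    residual_fn : maps an (N, dim) array of points to an (N,) array of residuals.
    """
    rng = np.random.default_rng(rng)
    candidates = rng.uniform(low, high, size=(n_candidates, dim))
    weights = residual_fn(candidates) ** 2
    weights = weights / weights.sum()
    idx = rng.choice(n_candidates, size=n_new, replace=True, p=weights)
    return candidates[idx]

# usage: new_points = residual_weighted_resample(residual_fn, n_new=2000, dim=4)
```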
Recent studies on mobile network design have demonstrated the remarkable effectiveness of channel attention (e.g., the Squeeze-and-Excitation attention) for lifting model performance, but they generally neglect positional information, which is important for generating spatially selective attention maps. In this paper, we propose a novel attention mechanism for mobile networks by embedding positional information into channel attention, which we call "coordinate attention". Unlike channel attention, which transforms a feature tensor into a single feature vector via 2D global pooling, coordinate attention factorizes channel attention into two 1D feature encoding processes that aggregate features along the two spatial directions, respectively. In this way, long-range dependencies can be captured along one spatial direction while precise positional information is preserved along the other. The resulting feature maps are then encoded separately into a pair of direction-aware and position-sensitive attention maps that can be complementarily applied to the input feature map to augment the representations of the objects of interest. Our coordinate attention is simple and can be flexibly plugged into classic mobile networks, such as MobileNetV2, MobileNeXt, and EfficientNet, with nearly no computational overhead. Extensive experiments demonstrate that our coordinate attention is not only beneficial to ImageNet classification but, more interestingly, also behaves better in downstream tasks such as object detection and semantic segmentation. Code is available at //github.com/Andrew-Qibin/CoordAttention.
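A minimal PyTorch-style sketch of the mechanism described above (pooling along the two spatial directions, a shared transform, and per-direction attention maps) is given below; the layer sizes, reduction ratio, and activation choices are assumptions rather than the exact configuration in the linked repository.

```python
import torch
import torch.nn as nn

class CoordinateAttentionSketch(nn.Module):
    """Sketch of coordinate attention: factorize channel attention into
    two 1D pooling/encoding paths along height and width."""
    def __init__(self, channels, reduction=32):
        super().__init__()
        mid = max(8, channels // reduction)
        self.conv1 = nn.Conv2d(channels, mid, kernel_size=1)
        self.bn1 = nn.BatchNorm2d(mid)
        self.act = nn.ReLU(inplace=True)
        self.conv_h = nn.Conv2d(mid, channels, kernel_size=1)
        self.conv_w = nn.Conv2d(mid, channels, kernel_size=1)

    def forward(self, x):
        n, c, h, w = x.shape
        # 1D global pooling along each spatial direction
        x_h = x.mean(dim=3, keepdim=True)                       # (n, c, h, 1)
        x_w = x.mean(dim=2, keepdim=True).permute(0, 1, 3, 2)   # (n, c, w, 1)
        # shared 1x1 transform over the concatenated directional features
        y = self.act(self.bn1(self.conv1(torch.cat([x_h, x_w], dim=2))))
        y_h, y_w = torch.split(y, [h, w], dim=2)
        # direction-aware attention maps, applied multiplicatively to the input
        a_h = torch.sigmoid(self.conv_h(y_h))                        # (n, c, h, 1)
        a_w = torch.sigmoid(self.conv_w(y_w.permute(0, 1, 3, 2)))    # (n, c, 1, w)
        return x * a_h * a_w
```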