The approximation of integral-type functionals is studied for discrete observations of a continuous It\^o semimartingale. Based on novel approximations in the Fourier domain, central limit theorems are proved for $L^2$-Sobolev functions with fractional smoothness. An explicit $L^2$-lower bound shows that even low-order quadrature rules, such as the trapezoidal rule and the classical Riemann estimator, are rate-optimal, but that only the trapezoidal rule is efficient, attaining the minimal asymptotic variance.
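As a minimal, purely illustrative sketch of the two quadrature rules compared above (the simulated Brownian path and the choice of $f$ are assumptions, not the paper's setting), the Riemann and trapezoidal estimators of $\int_0^1 f(X_t)\,dt$ from $n+1$ equidistant observations can be written as:

```python
import numpy as np

def riemann_estimator(x, f):
    """Left-point Riemann sum approximating int_0^1 f(X_t) dt from n+1 equidistant samples."""
    n = len(x) - 1
    return np.sum(f(x[:-1])) / n

def trapezoidal_estimator(x, f):
    """Trapezoidal rule: averages f at consecutive observation times."""
    n = len(x) - 1
    fx = f(x)
    return np.sum(0.5 * (fx[:-1] + fx[1:])) / n

# Toy check on a simulated Brownian path (illustrative only).
rng = np.random.default_rng(0)
n = 1000
x = np.concatenate(([0.0], np.cumsum(rng.normal(scale=np.sqrt(1.0 / n), size=n))))
f = lambda y: y ** 2
print(riemann_estimator(x, f), trapezoidal_estimator(x, f))
```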
The integral fractional Laplacian of order $s \in (0,1)$ is a nonlocal operator. It is known that solutions to the Dirichlet problem involving such an operator exhibit an algebraic boundary singularity regardless of the domain regularity. This, in turn, degrades the global regularity of solutions and, as a result, the global convergence rate of numerical approximations. For finite element discretizations, we derive local error estimates in the $H^s$-seminorm and show optimal convergence rates in the interior of the domain, assuming only that the meshes are shape-regular. These estimates quantify the fact that the loss of approximation accuracy is concentrated near the boundary of the domain. We illustrate our theoretical results with several numerical examples.
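For orientation, a standard way to state the model problem and the boundary behaviour referred to above (recalled here only for context, not as the paper's precise setting) is
\[
(-\Delta)^s u = f \ \text{in } \Omega, \qquad u = 0 \ \text{in } \mathbb{R}^d \setminus \Omega, \qquad u(x) \sim \operatorname{dist}(x,\partial\Omega)^s \ \text{near } \partial\Omega,
\]
so that, even for smooth data and domains, the global Sobolev regularity of $u$ is limited to roughly $H^{s+1/2-\varepsilon}(\Omega)$, which caps the global convergence rate of standard finite element approximations.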
The private collection of multiple statistics from a population is a fundamental statistical problem. One possible approach to realizing this is to rely on the local model of differential privacy (LDP). Numerous LDP protocols have been developed for the task of frequency estimation of single and multiple attributes. These studies have mainly focused on improving the utility of the algorithms so that the server can perform the estimations accurately. In this paper, we investigate privacy threats (re-identification and attribute inference attacks) against LDP protocols for multidimensional data, following two state-of-the-art solutions for frequency estimation of multiple attributes. To broaden the scope of our study, we have also experimentally assessed five widely used LDP protocols, namely generalized randomized response, optimal local hashing, subset selection, RAPPOR, and optimal unary encoding. Finally, we propose a countermeasure that improves both utility and robustness against the identified threats. Our contributions can help practitioners aiming to collect users' statistics privately to decide which LDP mechanism best fits their needs.
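As a concrete, purely illustrative sketch of one of the protocols assessed, generalized randomized response over a domain of $k$ values reports the true value with probability $e^{\epsilon}/(e^{\epsilon}+k-1)$ and any other value uniformly at random otherwise; the function names below are assumptions made for the sketch:

```python
import math
import random

def grr_perturb(true_value, domain, epsilon):
    """Generalized randomized response: epsilon-LDP report for a single attribute."""
    k = len(domain)
    p = math.exp(epsilon) / (math.exp(epsilon) + k - 1)
    if random.random() < p:
        return true_value
    # Otherwise report one of the k-1 other values uniformly at random.
    return random.choice([v for v in domain if v != true_value])

def grr_estimate(reports, domain, epsilon):
    """Unbiased frequency estimates for each value from the perturbed reports."""
    n, k = len(reports), len(domain)
    p = math.exp(epsilon) / (math.exp(epsilon) + k - 1)
    q = 1.0 / (math.exp(epsilon) + k - 1)
    counts = {v: sum(r == v for r in reports) for v in domain}
    return {v: (counts[v] - n * q) / (p - q) for v in domain}
```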
We present an efficient method for propagating the time-dependent Kohn-Sham equations in free space, based on the recently introduced Fourier contour deformation (FCD) approach. For potentials which are constant outside a bounded domain, FCD yields a high-order accurate numerical solution of the time-dependent Schr\"odinger equation directly in free space, without the need for artificial boundary conditions. Of the many existing artificial boundary condition schemes, FCD is most similar to an exact nonlocal transparent boundary condition, but it works directly on Cartesian grids in any dimension, and runs on top of the fast Fourier transform rather than fast algorithms for the application of nonlocal history integral operators. We adapt FCD to time-dependent density functional theory (TDDFT), and describe a simple algorithm to smoothly and automatically truncate long-range Coulomb-like potentials to a time-dependent constant outside of a bounded domain of interest, so that FCD can be used. This approach eliminates errors originating from the use of artificial boundary conditions, leaving only the error of the potential truncation, which is controlled and can be systematically reduced. The method enables accurate simulations of ultrastrong nonlinear electronic processes in molecular complexes in which the interference between bound and continuum states is of paramount importance. We demonstrate results for many-electron TDDFT calculations of absorption and strong-field photoelectron spectra for one- and two-dimensional models, and observe a significant reduction in the size of the computational domain required to achieve high-quality results, as compared with the popular method of complex absorbing potentials.
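A minimal sketch of the kind of smooth potential truncation described above; the smoothstep mask, the radii, and the constant value are illustrative assumptions, not the paper's automatic algorithm:

```python
import numpy as np

def truncate_potential(v, x, r_in, r_out, v_const):
    """Blend a long-range potential v(x) into the constant v_const for |x| > r_out,
    leaving it unchanged for |x| < r_in, via a smoothstep transition."""
    r = np.abs(x)
    t = np.clip((r - r_in) / (r_out - r_in), 0.0, 1.0)
    mask = t * t * (3.0 - 2.0 * t)
    return (1.0 - mask) * v + mask * v_const

x = np.linspace(-50.0, 50.0, 2001)
v = -1.0 / np.sqrt(x ** 2 + 1.0)  # 1D soft-core Coulomb-like model potential
v_trunc = truncate_potential(v, x, r_in=20.0, r_out=30.0,
                             v_const=-1.0 / np.sqrt(30.0 ** 2 + 1.0))
```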
Existing surveys in mobile learning have analysed, in a global way, the effects of using mobile devices with general-purpose or previously developed apps. However, more and more teachers are developing their own apps to address issues not covered by existing m-learning apps. In this article, we analyse the use of specific m-learning apps by means of a systematic literature review covering 62 publications from the early-adopter period of teacher-created m-learning apps (2012 to 2017) and the usage of 71 apps. Our results show that apps have been used both outside the classroom, to support autonomous learning or field trips, and inside the classroom, mainly for collaborative activities. The experiences analysed only target low-level learning outcomes, and the reported results are positive in terms of improved learning, learning performance, and attitude. We conclude that the results obtained with purpose-built apps are quite similar to those of previous general surveys, and that long-term experiences are required to determine the real effect of instructional designs based on mobile devices. Such designs should also be oriented towards evaluating higher-level skills and should take advantage of the mobility of devices to support learning activities that can be carried out anytime and anywhere, taking context and realistic situations into account. Furthermore, the role of educational mobile development frameworks in helping teachers develop m-learning apps deserves further study.
The Hilbert spaces $H(\mathrm{curl})$ and $H(\mathrm{div})$ are needed for variational problems formulated in the context of the de Rham complex in order to guarantee well-posedness. Consequently, the construction of conforming subspaces is a crucial step in the formulation of viable numerical solutions. As an alternative to the standard definition of a finite element as per Ciarlet, given by the triplet of a domain, a polynomial space and degrees of freedom, this work aims to introduce a novel, simple method of directly constructing semi-continuous vectorial basis functions on the reference element via polytopal templates and an underlying $H^1$-conforming polynomial subspace. The basis functions are then mapped from the reference element to the element in the physical domain via consistent Piola transformations. The method is defined in such a way that the underlying $H^1$-conforming subspace can be chosen independently, thus allowing for constructions of arbitrary polynomial order. The basis functions arise by multiplication of the basis with template vectors defined for each polytope of the reference element. We prove a unisolvent construction of N\'ed\'elec elements of the first and second type, Brezzi-Douglas-Marini elements, and Raviart-Thomas elements. An application of the method is demonstrated with two examples in the relaxed micromorphic model.
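For reference, the consistent Piola transformations mentioned above take their textbook forms (recalled here only for context), with $F$ the map from the reference element to the physical element and $J = \mathrm{D}F$ its Jacobian:
\[
\text{covariant, for } H(\mathrm{curl}): \quad \boldsymbol{v} = J^{-\mathsf{T}} (\hat{\boldsymbol{v}} \circ F^{-1}), \qquad
\text{contravariant, for } H(\mathrm{div}): \quad \boldsymbol{v} = \frac{1}{\det J}\, J\, (\hat{\boldsymbol{v}} \circ F^{-1}).
\]
These mappings preserve tangential and normal traces, respectively, which is precisely what the corresponding conforming subspaces require.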
Traffic splitting is a required functionality in networks, for example for load balancing over paths or servers, or for dividing traffic according to sources' access restrictions. The capacities of the servers (or the number of users with particular access restrictions) determine the sizes of the parts into which traffic should be split. A recent approach implements traffic splitting within the ternary content addressable memory (TCAM), which is often available in switches. It is important to reduce the amount of memory allocated for this task, since TCAMs are power-consuming and are often also required for other tasks such as classification and routing. In the longest-prefix model (LPM), Draves et al. (INFOCOM 1999) find a minimal representation of a function, and Sadeh et al. (INFOCOM 2019) find a minimal representation of a partition. In certain situations, range functions, in which all the addresses with the same target, or action, are consecutive, are of special interest. In this paper we show that minimizing the number of TCAM entries needed to represent a partition comes at the cost of fragmentation, such that for some partitions some actions must be assigned multiple ranges. We also study the case where each target must have a single segment of addresses.
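To make the trade-off between prefix entries and contiguous ranges concrete, the following illustrative sketch (not the paper's construction; names and the toy address width are assumptions) performs the standard decomposition of a single address range into a minimal set of prefixes in the LPM model:

```python
def range_to_prefixes(lo, hi, width=8):
    """Cover the inclusive address range [lo, hi] with a minimal set of prefixes,
    each returned as (starting address, prefix length)."""
    prefixes = []
    while lo <= hi:
        # Largest aligned power-of-two block starting at lo that still fits in [lo, hi].
        size = lo & -lo if lo > 0 else 1 << width
        while size > hi - lo + 1:
            size //= 2
        prefixes.append((lo, width - size.bit_length() + 1))
        lo += size
    return prefixes

# Example over 8-bit addresses: the single range [3, 8] already requires
# three prefix entries: (3, /8), (4, /6) and (8, /8).
print(range_to_prefixes(3, 8))
```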
Forecasting has always been at the forefront of decision making and planning. The uncertainty that surrounds the future is both exciting and challenging, with individuals and organisations seeking to minimise risks and maximise utilities. The large number of forecasting applications calls for a diverse set of forecasting methods to tackle real-life challenges. This article provides a non-systematic review of the theory and the practice of forecasting. We provide an overview of a wide range of theoretical, state-of-the-art models, methods, principles, and approaches to prepare, produce, organise, and evaluate forecasts. We then demonstrate how such theoretical concepts are applied in a variety of real-life contexts. We do not claim that this review is an exhaustive list of methods and applications. We hope, however, that our encyclopedic presentation will offer a point of reference for the rich work that has been undertaken over the last decades, along with some key insights for the future of forecasting theory and practice. Given its encyclopedic nature, the intended mode of reading is non-linear. We offer cross-references to allow readers to navigate through the various topics. We complement the theoretical concepts and applications covered with large lists of free or open-source software implementations and publicly available databases.
As soon as abstract mathematical computations were adapted to computation on digital computers, the problem of efficient representation, manipulation, and communication of the numerical values in those computations arose. Strongly related to the problem of numerical representation is the problem of quantization: in what manner should a set of continuous real-valued numbers be distributed over a fixed discrete set of numbers to minimize the number of bits required and also to maximize the accuracy of the attendant computations? This perennial problem of quantization is particularly relevant whenever memory and/or computational resources are severely restricted, and it has come to the forefront in recent years due to the remarkable performance of Neural Network models in computer vision, natural language processing, and related areas. Moving from floating-point representations to low-precision fixed integer values represented in four bits or less holds the potential to reduce the memory footprint and latency by a factor of 16x; and, in fact, reductions of 4x to 8x are often realized in practice in these applications. Thus, it is not surprising that quantization has emerged recently as an important and very active sub-area of research in the efficient implementation of computations associated with Neural Networks. In this article, we survey approaches to the problem of quantizing the numerical values in deep Neural Network computations, covering the advantages/disadvantages of current methods. With this survey and its organization, we hope to have presented a useful snapshot of the current research in quantization for Neural Networks and to have given an intelligent organization to ease the evaluation of future research in this area.
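As an illustrative sketch of one of the basic schemes surveyed (uniform affine quantization; the function names are assumptions made for the sketch), mapping floating-point values to low-bit integers and back works as follows:

```python
import numpy as np

def quantize(x, num_bits=4):
    """Uniform affine quantization of a float array to num_bits-bit unsigned integers."""
    qmin, qmax = 0, 2 ** num_bits - 1
    scale = (x.max() - x.min()) / (qmax - qmin)
    zero_point = int(round(qmin - x.min() / scale))
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.int32)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Map quantized integers back to approximate floating-point values."""
    return scale * (q.astype(np.float32) - zero_point)

w = np.random.randn(1024).astype(np.float32)
q, s, z = quantize(w, num_bits=4)
print("max abs reconstruction error:", np.max(np.abs(w - dequantize(q, s, z))))
```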
Medical image segmentation requires consensus ground-truth segmentations to be derived from multiple expert annotations. A novel approach is proposed that obtains consensus segmentations from experts using graph cuts (GC) and semi-supervised learning (SSL). Popular approaches use iterative Expectation Maximization (EM) to estimate the final annotation and quantify each annotator's performance. Such techniques pose the risk of getting trapped in local minima. We propose a self-consistency (SC) score to quantify annotator consistency using low-level image features. SSL is used to predict missing annotations by considering global features and local image consistency. The SC score also serves as the penalty cost in a second-order Markov random field (MRF) cost function, which is optimized using graph cuts to derive the final consensus label. Graph cuts obtain a global optimum without an iterative procedure. Experimental results on synthetic images, real data from Crohn's disease patients, and retinal images show that our final segmentations are accurate and more consistent than those of competing methods.
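For context, the second-order MRF cost optimized by graph cuts typically has the generic form (shown here schematically, not with the paper's exact terms)
\[
E(L) \;=\; \sum_{p} D_p(L_p) \;+\; \lambda \sum_{(p,q) \in \mathcal{N}} V_{pq}(L_p, L_q),
\]
where the unary term $D_p$ encodes the per-pixel penalty (here driven by the self-consistency score), the pairwise term $V_{pq}$ enforces smoothness over neighbouring pixels $\mathcal{N}$, and, for submodular binary labellings, the exact optimum is obtained by a single max-flow/min-cut computation rather than an iterative EM scheme.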
With the rapid development of artificial satellites, remote sensing images have attracted extensive attention. Recently, noticeable progress has been made in scene classification and target detection. However, it is still not clear how to describe the content of a remote sensing image with accurate and concise sentences. In this paper, we investigate how to describe remote sensing images with accurate and flexible sentences. First, annotation instructions are presented to better describe the remote sensing images, considering their special characteristics. Second, in order to exhaustively exploit the contents of remote sensing images, a large-scale aerial image data set is constructed for remote sensing image captioning. Finally, a comprehensive review is presented on the proposed data set to fully advance the task of remote sensing image captioning. Extensive experiments on the proposed data set demonstrate that the content of a remote sensing image can be completely described by generating language descriptions. The data set is available at //github.com/2051/RSICD_optimal