The discrete Fourier transform (DFT) is the numerical implementation of the Fourier transform (FT), and it takes many forms. The ordinary DFT (ODFT) and the symmetric DFT (SDFT) are the two main forms. The ODFT is the most widely used, and its phase spectrum is widely applied in engineering. However, the ODFT is found to suffer from phase aliasing. Moreover, the ODFT lacks many FT properties, such as symmetry, integration, and interpolation. Compared with the ODFT, the SDFT possesses more FT properties. Theoretically, the more properties a transform has, the wider its range of application. Hence, the SDFT is more suitable as the discrete form of the FT. To promote the SDFT, its unique nature is demonstrated. The time domain of the even-point SDFT is not symmetric about zero, and this study corrects it. The author raises a new question: should the signal length be odd or even when performing the SDFT? The answer is odd. However, scientists and engineers are accustomed to using even-length sequences. At the end of this study, the reasons for advocating the odd-point SDFT are given. In addition, the even sampling function, the discrete-frequency Fourier transform, and the Gibbs phenomenon of the SDFT are introduced.
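To make the odd-length, zero-symmetric indexing concrete, here is a minimal sketch of an SDFT in which both the time and frequency indices run over -(N-1)/2, ..., (N-1)/2. The function name and the test signal are illustrative choices for this sketch, not the paper's reference implementation.

```python
import numpy as np

def sdft(x):
    """Symmetric DFT for an odd-length sequence: both time and frequency
    indices run over -(N-1)/2 ... (N-1)/2, so the grid is symmetric about zero.
    Illustrative sketch only."""
    N = len(x)
    assert N % 2 == 1, "the abstract advocates odd-length sequences"
    m = (N - 1) // 2
    n = np.arange(-m, m + 1)          # symmetric time indices
    k = n.reshape(-1, 1)              # symmetric frequency indices
    return (x * np.exp(-2j * np.pi * k * n / N)).sum(axis=1)

# Example: a real, even signal sampled symmetrically about t = 0
t = np.arange(-3, 4)                  # N = 7 (odd)
x = np.cos(2 * np.pi * t / 7)
X = sdft(x)
print(np.round(X, 6))                 # the spectrum is real and even
```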
Strategic term rewriting and attribute grammars are two powerful programming techniques widely used in language engineering. The former relies on strategies to apply term rewrite rules when defining language transformations, while the latter is well suited to expressing context-dependent language processing algorithms. Each of these techniques, however, is usually implemented by its own powerful and large language processor system. As a result, such systems are harder to extend and to combine. In this paper, we present the embedding of both strategic tree rewriting and attribute grammars in a zipper-based, purely functional setting. Zippers provide a simple but generic tree-walk mechanism that is the building-block technique we use to express the purely functional embedding of both techniques. Embedding the two techniques in the same setting has several advantages. First, we can easily combine/zip attribute grammars and strategies, thus giving language engineers the best of both worlds. Second, the combined embedding is easier to maintain and extend since it is written in a concise and uniform setting. The result is a very small library that is able to express advanced (static) analysis and transformation tasks. We show the expressive power of our library by optimizing Haskell let expressions, expressing several Haskell refactorings, and solving several language processing tasks of the LDTA Tool Challenge.
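To give a flavour of the zipper idea underlying the embedding, the sketch below implements a tiny zipper for binary trees: a focused subtree plus a list of "crumbs" that lets a walk move down, rewrite the focus, and zip back up. The paper's library is Haskell; the Python encoding, the Node/Leaf types, and the crumb representation here are assumptions made only for illustration.

```python
# A minimal, illustrative zipper for binary trees (not the paper's API).

class Leaf:
    def __init__(self, value): self.value = value

class Node:
    def __init__(self, left, right): self.left, self.right = left, right

# A zipper is the focused subtree plus the path of "crumbs" back to the root.
def down_left(tree, crumbs):  return tree.left,  crumbs + [("L", tree.right)]
def down_right(tree, crumbs): return tree.right, crumbs + [("R", tree.left)]

def up(tree, crumbs):
    side, sibling = crumbs[-1]
    parent = Node(tree, sibling) if side == "L" else Node(sibling, tree)
    return parent, crumbs[:-1]

def modify(tree, crumbs, f):
    return f(tree), crumbs

# Example: navigate to the left leaf, rewrite it, and zip back up.
t = Node(Leaf(1), Leaf(2))
focus, crumbs = down_left(t, [])
focus, crumbs = modify(focus, crumbs, lambda leaf: Leaf(leaf.value * 10))
root, _ = up(focus, crumbs)
print(root.left.value, root.right.value)   # 10 2
```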
We present a new approach to detecting projective equivalences and symmetries of rational parametric 3D curves. To detect projective equivalences, we first derive two projective differential invariants that are also invariant with respect to the changes of parameters known as M\"obius transformations. Given two rational curves, we use the projective differential invariants to form a system consisting of two homogeneous polynomials in four variables. The solutions of the system yield the M\"obius transformations, each of which corresponds to a projective equivalence. If the two input curves are the same, our method detects the projective symmetries of the input curve. Our method is substantially faster than methods addressing a similar problem and provides solutions even for curves of degree up to 24 with coefficients of up to 78 digits.
We propose a new variant of Chubanov's method for solving the feasibility problem over the symmetric cone by extending Roos's method (2018) for the feasibility problem over the nonnegative orthant. The proposed method considers a feasibility problem associated with a norm induced by the maximum eigenvalue of an element and uses a rescaling that focuses on the upper bound of the sum of eigenvalues of any feasible solution to the problem. Its computational bound is (i) equivalent to that of Roos's original method (2018) and superior to that of Louren\c{c}o et al.'s method (2019) when the symmetric cone is the nonnegative orthant, (ii) superior to that of Louren\c{c}o et al.'s method (2019) when the symmetric cone is a Cartesian product of second-order cones, and (iii) equivalent to that of Louren\c{c}o et al.'s method (2019) when the symmetric cone is the simple positive semidefinite cone, under the assumption that the costs of computing the spectral decomposition and the minimum eigenvalue are of the same order for any given symmetric matrix. We also conduct numerical experiments that compare the performance of our method with existing methods on instances of three types: (i) strongly feasible (but ill-conditioned) instances, (ii) weakly feasible instances, and (iii) infeasible instances. For all of these instances, the proposed method is more efficient than the existing methods in terms of accuracy and execution time.
For a sample of Exponentially distributed durations, we aim at point estimation of, and a confidence interval for, its parameter. A duration is only observed if it has ended within a certain time interval, determined by a Uniform distribution. Hence, the data form a truncated empirical process that we can approximate by a Poisson process when only a small portion of the sample is observed, as is the case for our applications. We derive the likelihood from standard arguments for point processes, treating the size of the latent sample as a second parameter, and derive the maximum likelihood estimator for both. Consistency and asymptotic normality of the estimator for the Exponential parameter follow from standard results on M-estimation. We compare the design with a simple-random-sample assumption for the observed durations. Theoretically, the derivative of the log-likelihood is less steep in the truncation design for small parameter values, indicating a larger computational effort for root finding and a larger standard error. In applications from the social and economic sciences and in simulations, we indeed find a moderately increased standard error when acknowledging truncation.
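As a rough numerical illustration of estimating an Exponential rate from durations that are only observed when they end within a window, the sketch below fits a right-truncated Exponential by numerical maximum likelihood. It is a simplification for illustration only: it uses a fixed horizon T rather than the Uniform-determined interval of the paper, and it does not treat the latent sample size as a second parameter.

```python
# Toy sketch: MLE for Exponential durations observed only if they end before T.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
lam_true, T, n_latent = 0.5, 1.0, 10_000
durations = rng.exponential(1 / lam_true, n_latent)
observed = durations[durations <= T]          # right-truncated sample

def neg_loglik(lam):
    # density of a right-truncated Exponential: lam*exp(-lam*d)/(1-exp(-lam*T))
    return -(np.log(lam) - lam * observed - np.log1p(-np.exp(-lam * T))).sum()

fit = minimize_scalar(neg_loglik, bounds=(1e-6, 10), method="bounded")
print(fit.x)   # MLE of the rate from the truncated sample
```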
Music structure analysis (MSA) methods traditionally search for musically meaningful patterns in audio: homogeneity, repetition, novelty, and segment-length regularity. Hand-crafted audio features such as MFCCs or chromagrams are often used to elicit these patterns. However, with more annotations of section labels (e.g., verse, chorus, and bridge) becoming available, one can use supervised feature learning to make these patterns even clearer and improve MSA performance. To this end, we take a supervised metric learning approach: we train a deep neural network to output embeddings that are near each other for two spectrogram inputs if both have the same section type (according to an annotation), and otherwise far apart. We propose a batch sampling scheme to ensure the labels in a training pair are interpreted meaningfully. The trained model extracts features that can be used in existing MSA algorithms. In evaluations with three datasets (HarmonixSet, SALAMI, and RWC), we demonstrate that using the proposed features can improve a traditional MSA algorithm significantly in both intra- and cross-dataset scenarios.
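As an illustration of the metric-learning objective described above, the following sketch trains an embedding so that spectrogram excerpts from the same section type are pulled together and others pushed apart. The toy MLP, the random tensors standing in for spectrogram patches, and the use of a standard triplet margin loss are assumptions for this sketch; they are not the paper's architecture or batch-sampling scheme.

```python
import torch
import torch.nn as nn

# Toy embedding network over flattened 128x64 spectrogram patches.
embed = nn.Sequential(nn.Flatten(), nn.Linear(128 * 64, 256), nn.ReLU(), nn.Linear(256, 32))
loss_fn = nn.TripletMarginLoss(margin=1.0)
opt = torch.optim.Adam(embed.parameters(), lr=1e-3)

# anchor/positive share a section label (e.g. both "chorus"); negative does not.
anchor   = torch.randn(8, 1, 128, 64)   # batch of spectrogram patches (random stand-ins)
positive = torch.randn(8, 1, 128, 64)
negative = torch.randn(8, 1, 128, 64)

opt.zero_grad()
loss = loss_fn(embed(anchor), embed(positive), embed(negative))
loss.backward()
opt.step()
```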
A new statistical method, Independent Approximates (IAs), is defined and proven to enable closed-form estimation of the parameters of heavy-tailed distributions. Given independent, identically distributed samples from a one-dimensional distribution, IAs are formed by partitioning the samples into pairs, triplets, or nth-order groupings and retaining the median of those groupings whose elements are approximately equal. The pdf of the IAs is proven to be the normalized nth power of the original density. From this property, heavy-tailed distributions are proven to have well-defined means for their IA pairs, finite second moments for their IA triplets, and a finite, well-defined (n-1)th moment for the nth grouping. Estimation of the location, scale, and shape (the inverse of the degree of freedom) of the generalized Pareto and Student's t distributions is possible via a system of three equations. Performance analysis of the IA estimation methodology is conducted for the Student's t distribution using between 1,000 and 100,000 samples. Closed-form estimates of the location and scale are determined from the mean of the IA pairs and the variance of the IA triplets, respectively. For the Student's t distribution, the geometric mean of the original samples provides a third equation to determine the shape, though its nonlinear solution requires an iterative solver. With 10,000 samples the relative bias of the parameter estimates is less than 0.01 and the relative precision is within +/-0.1. The theoretical precision is finite for a limited range of the shape but can be extended by using higher-order groupings for a given moment.
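The grouping step is simple to sketch. The snippet below partitions a heavy-tailed sample into triplets and keeps the median of triplets whose values are close together; the specific closeness criterion (relative spread below a tolerance) and the tolerance value are assumptions made for this sketch, not the paper's rule.

```python
import numpy as np

def independent_approximates(samples, n=3, tol=0.2, seed=None):
    """Partition the sample into groups of size n and keep the median of groups
    whose values are approximately equal (here: relative spread below tol)."""
    rng = np.random.default_rng(seed)
    x = rng.permutation(samples)                      # random partition into groups
    groups = x[: len(x) // n * n].reshape(-1, n)
    spread = (groups.max(axis=1) - groups.min(axis=1)) / np.abs(np.median(groups, axis=1))
    keep = spread < tol                               # "approximately equal" groups
    return np.median(groups[keep], axis=1)

rng = np.random.default_rng(1)
heavy_tailed = rng.standard_t(df=1.5, size=100_000)   # raw sample has infinite variance
ia_triplets = independent_approximates(heavy_tailed, n=3, seed=1)
print(len(ia_triplets), ia_triplets.var())            # IA triplets have a finite second moment
```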
The recently introduced polar codes constitute a breakthrough in coding theory due to their capacity-achieving property. This goes hand in hand with quasi-linear construction, encoding, and successive cancellation list decoding procedures based on the Plotkin construction. The decoding algorithm can be applied with slight modifications to Reed-Muller or eBCH codes, which both achieve the capacity of erasure channels, although the list size needed for good performance grows too fast to make the decoding practical even for moderate block lengths. The key ingredient for proving the capacity-achieving property of Reed-Muller and eBCH codes is their group of symmetries. It can be plugged into the concept of Plotkin decomposition to design various permutation decoding algorithms. Although such techniques allow one to outperform straightforward polar-like decoding, the complexity remains impractical. In this paper, we show that although invariance under a large automorphism group is valuable in a theoretical sense, it also ensures that the list size needed for good performance grows exponentially. We further establish the bounds that arise if we sacrifice some of the symmetries. Although the theoretical analysis of the list decoding algorithm remains an open problem, our result provides insight into the factors that impact decoding complexity.
A novel confidence interval estimator is proposed for the risk difference in noninferiority binomial trials. The confidence interval is consistent with an exact unconditional test that preserves the type-I error, and it has improved power, particularly for smaller sample sizes, compared to the confidence interval of Chan & Zhang (1999). The improved performance of the proposed confidence interval is justified theoretically and demonstrated with simulations and examples. An R package that implements the proposed method along with other confidence interval estimators is also distributed.
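For orientation only, the snippet below computes the naive Wald interval for a risk difference from two binomial samples. It is a baseline for comparison, not the exact unconditional interval proposed in the paper (which requires inverting an exact test and is not reproduced here), and the counts are made up for illustration.

```python
# Naive Wald 95% interval for a risk difference (baseline only).
import numpy as np
from scipy.stats import norm

x1, n1, x2, n2 = 18, 60, 12, 60         # illustrative event counts
p1, p2 = x1 / n1, x2 / n2
diff = p1 - p2
se = np.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
z = norm.ppf(0.975)
print(diff - z * se, diff + z * se)     # 95% Wald CI for the risk difference
```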
Despite the remarkable development of parametric modeling methods for architectural design, a significant problem remains: the lack of knowledge and skill regarding the professional implementation of parametric design in architectural modeling. Considering the numerous advantages of digital/parametric modeling in rapid prototyping and simulation, most instructors encourage students to use digital modeling even from the early stages of design; however, an appropriate context in which to learn the basics of digital design thinking is rarely provided in architectural pedagogy. This paper presents an educational tool, specifically an Augmented Reality (AR) intervention, to help students understand the fundamental concepts of parametric modeling before diving into complex parametric modeling platforms. The goal of the AR intervention is to illustrate geometric transformations and the associated math functions so that students learn the mathematical logic behind the algorithmic thinking of parametric modeling. We have developed BRICKxAR_T, an educational AR prototype that is intended to help students learn geometric transformations in an immersive spatial AR environment. A LEGO set is used within the AR intervention as a physical manipulative to support physical interaction and improve spatial skills through body gestures.
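As a small example of the kind of math the intervention visualizes, the sketch below applies a homogeneous 4x4 transform (a rotation about the z-axis followed by a translation) to the corner points of a brick-like box; the matrix, the corner coordinates, and the angle are purely illustrative and are not taken from BRICKxAR_T.

```python
import numpy as np

theta = np.radians(30)
# Rotation about z by 30 degrees, then translation by (2, 0, 1), in homogeneous form.
T = np.array([[np.cos(theta), -np.sin(theta), 0, 2],
              [np.sin(theta),  np.cos(theta), 0, 0],
              [0,              0,             1, 1],
              [0,              0,             0, 1]])

# Four corners of a 2x1 brick footprint, as homogeneous column vectors.
corners = np.array([[0, 0, 0, 1], [2, 0, 0, 1], [2, 1, 0, 1], [0, 1, 0, 1]]).T
print((T @ corners).T[:, :3])   # transformed brick corners
```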
Feature attribution is often loosely presented as the process of selecting a subset of relevant features as the rationale for a prediction. This lack of clarity stems from the fact that we usually do not have access to any notion of ground-truth attribution, and from a more general debate on what good interpretations are. In this paper we propose to formalise feature selection/attribution based on the concept of relaxed functional dependence. In particular, we extend our notions to the instance-wise setting and derive necessary properties for candidate selection solutions, while leaving room for task-dependence. By computing ground-truth attributions on synthetic datasets, we evaluate many state-of-the-art attribution methods and show that, even when optimised, some fail to satisfy the proposed properties and provide wrong solutions.
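To illustrate the general idea of evaluating attribution against a known ground truth on synthetic data, the sketch below builds a dataset in which the target depends only on the first two features and checks whether an attribution method concentrates on them. Permutation importance is used here as a simple stand-in; it is a global method and is not one of the instance-wise methods or criteria studied in the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 6))
y = 3 * X[:, 0] - 2 * X[:, 1]                   # ground truth: only features 0 and 1 matter

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
imp = permutation_importance(model, X, y, n_repeats=5, random_state=0)
print(np.argsort(imp.importances_mean)[::-1])   # features 0 and 1 should rank first
```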