In this paper, a finite free-form beam element is formulated via the isogeometric approach based on Timoshenko beam theory to investigate the free vibration behavior of beams. The non-uniform rational B-spline (NURBS) functions that define the geometry of the beam are used as the basis functions for the finite element analysis. To enrich the basis functions and increase the accuracy of the solution fields, h-, p-, and k-refinement techniques are implemented. The geometry and curvature of the beams are modelled in a unique way based on NURBS. The effects of shear deformation and rotary inertia are taken into account by the present isogeometric model. Non-dimensional frequencies of the beams are compared with other available results to demonstrate the accuracy and efficiency of the present isogeometric approach. The numerical results show that the present element produces very accurate natural frequencies and mode shapes owing to the exact definition of the geometry. With higher-order basis functions, no shear locking occurs even for very thin beams. Finally, the benchmark tests described in this study are provided as reference solutions for future Timoshenko beam vibration problems.
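As a rough illustration of the rational basis underlying such elements, the sketch below (our own, not code from the paper) evaluates B-spline basis functions by the Cox-de Boor recursion and normalizes their weighted combination into NURBS basis values; all function names are ours.

```python
import numpy as np

def bspline_basis(i, p, u, knots):
    """Value of the i-th B-spline basis function of degree p at u
    (Cox-de Boor recursion; half-open convention, so u < knots[-1])."""
    if p == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + p] > knots[i]:
        left = (u - knots[i]) / (knots[i + p] - knots[i]) \
               * bspline_basis(i, p - 1, u, knots)
    if knots[i + p + 1] > knots[i + 1]:
        right = (knots[i + p + 1] - u) / (knots[i + p + 1] - knots[i + 1]) \
                * bspline_basis(i + 1, p - 1, u, knots)
    return left + right

def nurbs_basis(p, u, knots, weights):
    """Rational (NURBS) basis: weighted B-splines normalized to sum to one."""
    N = np.array([bspline_basis(i, p, u, knots) for i in range(len(weights))])
    W = np.asarray(weights) * N
    return W / W.sum()

knots = [0, 0, 0, 1, 1, 1]                     # open knot vector, degree p = 2
vals = nurbs_basis(2, 0.5, knots, np.ones(3))  # equal weights -> Bernstein case
# vals is [0.25, 0.5, 0.25]
```

With all weights equal, the rational basis reduces to the polynomial B-spline (here Bernstein) basis, which is why NURBS geometry can be reused directly as analysis basis functions.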
The multiple scattering theory (MST) is a Green's function method that has been widely used in electronic structure calculations for crystalline and disordered systems. The key quantity of the MST method is the scattering path matrix (SPM), which characterizes the Green's function within a local solution representation. This paper studies various approximations of the SPM, under the condition that an appropriate reference is used for the perturbation. In particular, we justify the convergence of the SPM approximations with respect to the size of the scattering region and the scattering length of the reference, which are the central numerical parameters for achieving a linear-scaling MST method. We also present numerical experiments on several typical systems to support the theory.
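For orientation, in standard MST notation (not taken from this paper): with single-site scattering matrices $t^i$ and free-propagator structure constants $g^{ij}$, the scattering path matrix $\tau$ collects all multiple-scattering sequences between sites,
$$
\tau^{ij} \;=\; t^i\,\delta_{ij} \;+\; t^i \sum_{k \neq i} g^{ik}\,\tau^{kj},
\qquad\text{i.e.}\qquad
\tau \;=\; \bigl(t^{-1} - g\bigr)^{-1},
$$
and truncating the sites entering this sum to a finite scattering region around each atom is what makes linear-scaling approximations of the kind studied here possible.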
Neural collapse provides an elegant mathematical characterization of learned last-layer representations (a.k.a. features) and classifier weights in deep classification models. Such results not only provide insights but also motivate new techniques for improving practical deep models. However, most existing empirical and theoretical studies of neural collapse focus on the case where the number of classes is small relative to the dimension of the feature space. This paper extends neural collapse to cases where the number of classes is much larger than the dimension of the feature space, which occur broadly in language models, retrieval systems, and face recognition applications. We show that the features and classifier exhibit a generalized neural collapse phenomenon in which the minimum one-vs-rest margin is maximized. We provide an empirical study to verify the occurrence of generalized neural collapse in practical deep neural networks. Moreover, we provide a theoretical study showing that generalized neural collapse provably occurs under the unconstrained feature model with a spherical constraint, under certain technical conditions on the feature dimension and number of classes.
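The one-vs-rest margin referred to above can be computed directly; the following numpy sketch (our illustration, assuming plain linear logits without bias) returns the minimum margin over a batch.

```python
import numpy as np

def min_one_vs_rest_margin(H, W, labels):
    """Minimum one-vs-rest margin over a batch.

    H: (n, d) features, W: (K, d) classifier weights, labels: (n,) ints.
    The margin of sample i is w_{y_i}.h_i - max_{k != y_i} w_k.h_i.
    """
    logits = H @ W.T                      # (n, K) class scores
    idx = np.arange(len(labels))
    correct = logits[idx, labels].copy()  # score of the true class
    logits[idx, labels] = -np.inf         # mask out the true class
    return float((correct - logits.max(axis=1)).min())
```

Generalized neural collapse says training drives the configuration of features and weights toward one that maximizes this quantity.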
In the recently emerging field of nonabelian group-based cryptography, a prominently used one-way function is the Conjugacy Search Problem (CSP), and two important classes of platform groups are polycyclic and matrix groups. In this paper, we discuss the complexity of the CSP in these two classes of platform groups, using the three protocols in [10], [26], and [29] as our starting point. We produce a polynomial-time solution for the CSP in a finite polycyclic group with two generators, and show that a restricted CSP is reducible to a DLP. In matrix groups over finite fields, we use the Jordan decomposition of a matrix to produce a polynomial-time reduction of an A-restricted CSP, where A is a cyclic subgroup of the general linear group, to a set of DLPs over an extension of F_q. In particular, we show that in the group of invertible matrices over finite fields and in polycyclic groups with two generators, a CSP whose conjugators are restricted to a cyclic subgroup is reducible to a set of O(n^2) discrete logarithm problems. Using these general methods and results, we describe concrete cryptanalysis algorithms for each of the three schemes. We believe that our methods and findings are likely to enable several other heuristic attacks in the general case.
Discrete particle simulations have become the standard tool in science and industrial applications exploring the properties of particulate systems. Most such simulations rely on the concept of interacting spherical particles, although the correct representation of nonspherical particle shape is crucial for a number of applications. In this work we describe the implementation of clumps, i.e. assemblies of rigidly connected spherical particles that can approximate given nonspherical shapes, within the \textit{MercuryDPM} particle dynamics code. The \textit{MercuryDPM} contact detection algorithm is particularly efficient for polydisperse particle systems, which is essential for multilevel clumps approximating complex surfaces. We employ the existing open-source \texttt{CLUMP} library to generate clump particles. We detail the pre-processing tools that provide the necessary initial data, as well as the necessary adjustments to the algorithms for contact detection, collision/migration handling, and numerical time integration. The capabilities of our implementation are illustrated on a variety of examples.
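As an illustration of the kind of pre-processing data a clump needs (aggregate mass, center of mass, inertia tensor), here is a minimal sketch under the simplifying assumption that sphere overlaps are double-counted; it is our own illustration, not the \texttt{CLUMP} or \textit{MercuryDPM} API.

```python
import numpy as np

def clump_properties(centers, radii, density=1.0):
    """Mass, center of mass, and inertia tensor of a rigid clump of spheres.
    Simplifying assumption: overlap regions are double-counted."""
    centers = np.asarray(centers, dtype=float)
    radii = np.asarray(radii, dtype=float)
    m = density * (4.0 / 3.0) * np.pi * radii ** 3      # individual sphere masses
    M = m.sum()
    com = (m[:, None] * centers).sum(axis=0) / M        # mass-weighted centroid
    I = np.zeros((3, 3))
    for mi, ci, ri in zip(m, centers, radii):
        I += (2.0 / 5.0) * mi * ri ** 2 * np.eye(3)     # sphere about its center
        d = ci - com                                    # parallel-axis shift
        I += mi * ((d @ d) * np.eye(3) - np.outer(d, d))
    return M, com, I
```

Production codes correct for overlap volume when computing these properties; the double-counting shortcut above is only adequate for lightly overlapping clumps.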
This paper presents an alternative approach to the dehomogenisation of elastic Rank-N laminate structures, based on the computer graphics technique of phasor noise. The proposed methodology improves on existing methods in that high-quality single-scale designs can be obtained efficiently without solving any least-squares problem or relying on pre-trained models. By utilising a continuous and periodic representation of the translation at each intermediate step, appropriate length-scales and thicknesses can be obtained. Numerical tests verify the performance of the proposed methodology against state-of-the-art alternatives, and the dehomogenised designs achieve structural performance within a few percent of the optimised homogenised solution. Phasor-based dehomogenisation is inherently mesh-independent and highly parallelisable, allowing for further efficient implementations and future extensions to 3D problems on unstructured meshes.
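A minimal sketch of the phasor-noise idea (our illustration, not the paper's implementation): summing Gabor-like complex kernels gives a field whose argument is a smooth, periodic phase, and thresholding that phase yields an oriented lamination with a prescribed relative thickness.

```python
import numpy as np

def phasor_field(x, y, kernels, freq, bandwidth):
    """Complex phasor field: a sum of Gaussian-windowed plane waves.
    kernels is a list of (cx, cy, theta) kernel centers and orientations."""
    G = np.zeros_like(x, dtype=complex)
    for (cx, cy, theta) in kernels:
        dx, dy = x - cx, y - cy
        envelope = np.exp(-bandwidth * (dx ** 2 + dy ** 2))
        kx, ky = freq * np.cos(theta), freq * np.sin(theta)
        G += envelope * np.exp(1j * (kx * dx + ky * dy))
    return G

x, y = np.meshgrid(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
G = phasor_field(x, y, kernels=[(0.3, 0.5, 0.0), (0.7, 0.5, 0.1)],
                 freq=40.0, bandwidth=30.0)
phase = np.angle(G) % (2 * np.pi)        # smooth periodic phase in [0, 2*pi)
solid = (phase / (2 * np.pi)) < 0.5      # lamination at 50% relative thickness
```

Because the phase is evaluated pointwise, the construction is mesh-independent and trivially parallel, which is the property the paper exploits.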
This paper develops an updatable inverse probability weighting (UIPW) estimation procedure for generalized linear models with responses missing at random in streaming data sets. A two-step online updating algorithm is provided for the proposed method. In the first step, we construct an updatable estimator of the parameter in the propensity function, and hence an updatable estimator of the propensity function itself; in the second step, we propose a UIPW estimator that weights each observation by the inverse of the updated propensity function value when estimating the parameter of interest. The UIPW estimation is universally applicable because it relaxes the constraint on the number of data batches. We show that the proposed estimator is consistent and asymptotically normal, with the same asymptotic variance as the oracle estimator, so the oracle property holds. The finite-sample performance of the proposed estimator is illustrated through simulations and a real data analysis. All numerical studies confirm that the UIPW estimator performs as well as the batch learner.
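The inverse-probability-weighting step can be illustrated with a toy simulation (ours, using the true propensity for clarity, whereas the paper estimates it online in batches): the complete-case mean is biased under missingness at random, while the IPW-reweighted mean is not.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000

# Simulate responses missing at random: missingness depends on x only.
x = rng.normal(size=n)
y = 2.0 + 1.5 * x + rng.normal(size=n)      # outcome, E[y] = 2
p = 1.0 / (1.0 + np.exp(-(0.5 + x)))        # true propensity P(observed | x)
observed = rng.random(n) < p

# Complete-case mean is biased upward (large x is observed more often);
# weighting each observed y by 1/p corrects it (Hajek-style IPW estimate).
naive = y[observed].mean()
ipw = np.sum(observed * y / p) / np.sum(observed / p)
```

The paper's contribution is doing the propensity estimation and the weighted estimation in an online, batch-by-batch updatable fashion rather than from the full sample at once.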
We present an incomplete proof synthesis method for the Calculus of Constructions that is always terminating, and a complete Vernacular for the Calculus of Constructions based on this method.
New criteria for energy stability of multi-step, multi-stage, and mixed schemes are introduced in the context of evolution equations that arise as gradient flow with respect to a metric. These criteria are used to exhibit second and third order consistent, energy stable schemes, which are then demonstrated on several partial differential equations that arise as gradient flow with respect to the 2-Wasserstein metric.
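As a first-order point of comparison (our illustration, in a plain Euclidean rather than Wasserstein metric), the backward Euler / minimizing-movement scheme is unconditionally energy stable: each step can only decrease the energy.

```python
def energy(x):                 # double-well energy E(x) = (x^2 - 1)^2 / 4
    return (x ** 2 - 1.0) ** 2 / 4.0

def grad(x):                   # E'(x) = x^3 - x
    return x ** 3 - x

def implicit_euler_step(xk, tau, newton_iters=30):
    """One backward Euler (minimizing-movement) step of the gradient flow
    x' = -grad E: solve x + tau * grad(x) = xk for x by Newton's method."""
    x = xk
    for _ in range(newton_iters):
        g = x + tau * grad(x) - xk
        gp = 1.0 + tau * (3.0 * x ** 2 - 1.0)
        x -= g / gp
    return x

x, tau = 2.0, 0.2
energies = [energy(x)]
for _ in range(50):
    x = implicit_euler_step(x, tau)
    energies.append(energy(x))
# energies is non-increasing and x approaches the minimizer x = 1
```

The paper's criteria extend this kind of per-step energy decrease to second- and third-order multi-step and multi-stage schemes, and to flows in the 2-Wasserstein metric.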
Information geometry is the study of statistical manifolds, that is, spaces of probability distributions, from a geometric perspective. Its classical information-theoretic applications relate to statistical concepts such as Fisher information, sufficient statistics, and efficient estimators. Today, information geometry has emerged as an interdisciplinary field with applications in diverse areas such as radar sensing, array signal processing, quantum physics, deep learning, and optimal transport. This article presents an overview of the essentials of information geometry for an information theorist who may be unfamiliar with this exciting area of research. We explain the concepts of divergences on statistical manifolds and generalized notions of distance, orthogonality, and geodesics, thereby paving the way for concrete applications and novel theoretical investigations. We also highlight some recent information-geometric developments that are of interest to the broader information theory community.
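A small worked example of the Fisher-information viewpoint (ours, for the Bernoulli family): locally, the KL divergence behaves like half the squared distance in the Fisher metric.

```python
import numpy as np

def kl_bernoulli(p, q):
    """KL divergence between Bernoulli(p) and Bernoulli(q)."""
    return p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))

def fisher_bernoulli(p):
    """Fisher information of the Bernoulli family: I(p) = 1 / (p(1-p))."""
    return 1.0 / (p * (1.0 - p))

# Second-order expansion: KL(p || p + d) ~ 0.5 * I(p) * d^2 for small d,
# which is why the Fisher information serves as a Riemannian metric.
p, d = 0.3, 1e-3
kl = kl_bernoulli(p, p + d)
quad = 0.5 * fisher_bernoulli(p) * d ** 2
```

This local quadratic behavior is the entry point to the divergences, geodesics, and orthogonality notions surveyed in the article.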
While it is nearly effortless for humans to quickly assess the perceptual similarity between two images, the underlying processes are thought to be quite complex. Despite this, the most widely used perceptual metrics today, such as PSNR and SSIM, are simple, shallow functions that fail to account for many nuances of human perception. Recently, the deep learning community has found that features of the VGG network trained on the ImageNet classification task have been remarkably useful as a training loss for image synthesis. But how perceptual are these so-called "perceptual losses"? What elements are critical for their success? To answer these questions, we introduce a new Full Reference Image Quality Assessment (FR-IQA) dataset of perceptual human judgments, orders of magnitude larger than previous datasets. We systematically evaluate deep features across different architectures and tasks and compare them with classic metrics. We find that deep features outperform all previous metrics by large margins. More surprisingly, this result is not restricted to ImageNet-trained VGG features, but holds across different deep architectures and levels of supervision (supervised, self-supervised, or even unsupervised). Our results suggest that perceptual similarity is an emergent property shared across deep visual representations.
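For concreteness, one of the shallow metrics named above is easily stated; the PSNR sketch below (ours) also shows how a mild, perceptually benign brightness shift sharply lowers the score.

```python
import numpy as np

def psnr(a, b, data_range=1.0):
    """Peak signal-to-noise ratio: a purely pixelwise similarity measure."""
    mse = np.mean((a - b) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(data_range ** 2 / mse)

# A uniform +0.1 brightness shift is barely noticeable to a human viewer,
# yet it caps PSNR at roughly 20 dB regardless of image content.
rng = np.random.default_rng(0)
img = rng.random((32, 32))
shifted = np.clip(img + 0.1, 0.0, 1.0)
score = psnr(img, shifted)
```

Because PSNR depends only on pixelwise error, it cannot distinguish perceptually harmless distortions from destructive ones, which is the gap the deep-feature metrics studied here aim to close.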