Quadratic NURBS-based discretizations of the Galerkin method suffer from volumetric locking when applied to nearly-incompressible linear elasticity. Volumetric locking causes not only smaller displacements than expected, but also large-amplitude spurious oscillations of normal stresses. Continuous-assumed-strain (CAS) elements were recently introduced to remove membrane locking in quadratic NURBS-based discretizations of linear plane curved Kirchhoff rods (Casquero et al., CMAME, 2022). In this work, we propose two generalizations of CAS elements (named CAS1 and CAS2 elements) to overcome volumetric locking in quadratic NURBS-based discretizations of nearly-incompressible linear elasticity. CAS1 elements linearly interpolate the strains at the knots in each direction for the term of the variational form involving the first Lam\'e parameter, while CAS2 elements linearly interpolate the dilatational strains at the knots in each direction. For both element types, a displacement vector with $C^1$ continuity across element boundaries results in assumed strains with $C^0$ continuity across element boundaries. In addition, the implementation of the two locking treatments proposed in this work does not require any additional global or element matrix operations such as matrix inversions or matrix multiplications. The locking treatments are applied at the element level and the nonzero pattern of the global stiffness matrix is preserved. The numerical examples solved in this work show that CAS1 and CAS2 elements, using either two or three Gauss-Legendre quadrature points per direction, are effective locking treatments since they not only result in more accurate displacements for coarse meshes, but also remove the spurious oscillations of normal stresses.
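The core CAS idea, stripped to one dimension, can be sketched as follows. Everything here is illustrative: the placeholder strain field, knot vector, and sample points are made up, and stand in for the compatible strains computed from a $C^1$ quadratic NURBS displacement on a real patch.

```python
import numpy as np

# Interior knot lines of a hypothetical 1D quadratic B-spline patch.
knots = np.array([0.0, 0.25, 0.5, 0.75, 1.0])

def strain(x):
    # Placeholder for the compatible strain computed from a C^1
    # quadratic NURBS displacement field (made up for the demo).
    return np.sin(np.pi * x)

strain_at_knots = strain(knots)

def assumed_strain(x):
    # CAS-style assumed strain: the piecewise-linear (C^0) interpolant
    # of the strain values sampled at the knots.
    return np.interp(x, knots, strain_at_knots)

# Between two knots, the assumed strain is a linear blend of the two
# adjacent knot values; e.g. at an element midpoint it is their average.
mid = 0.375
blend = 0.5 * (strain_at_knots[1] + strain_at_knots[2])
```

The assumed strain matches the sampled strain exactly at the knots and replaces the higher-order in-element variation by a linear one, which is what relieves the over-constraint responsible for locking.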
In this paper, we formulate and analyse a symmetric low-regularity integrator for solving the nonlinear Klein-Gordon equation in $d$-dimensional space with $d=1,2,3$. The integrator is constructed from a two-step trigonometric method and has a simple form. Error estimates are rigorously presented to show that the integrator achieves second-order time accuracy in the energy space under the regularity requirement $H^{1+\frac{d}{4}}\times H^{\frac{d}{4}}$. Moreover, the time symmetry of the scheme ensures good long-time energy conservation, which is rigorously proved by the technique of modulated Fourier expansions. A numerical test is presented and the results demonstrate the advantages of the new integrator over existing methods.
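To fix ideas, here is a scalar toy version of a symmetric two-step trigonometric (Gautschi-type) recursion for $u'' + \omega^2 u = f(u)$. The paper's integrator acts on the Fourier modes of the Klein-Gordon equation in Sobolev spaces; this sketch only shows the shape of the two-step recursion, and the particular filter function used below is an assumption, not necessarily the one from the paper.

```python
import math

def trig_two_step(w, h, u0, u1, f, n_steps):
    """Two-step trigonometric recursion for u'' + w**2 * u = f(u)."""
    prev, cur = u0, u1
    c = 2.0 * math.cos(h * w)
    # One common filter choice, sinc(h*w/2)**2 (an assumption for this demo).
    x = 0.5 * h * w
    psi = (math.sin(x) / x) ** 2 if x != 0 else 1.0
    traj = [prev, cur]
    for _ in range(n_steps):
        nxt = c * cur - prev + h * h * psi * f(cur)
        prev, cur = cur, nxt
        traj.append(cur)
    return traj

# For the linear problem (f = 0) the recursion is exact, since
# cos((n+1)t) = 2*cos(t)*cos(n*t) - cos((n-1)*t).
w, h = 2.0, 0.1
traj = trig_two_step(w, h, 1.0, math.cos(h * w), lambda u: 0.0, 50)
```

The symmetry of the scheme is visible in the recursion itself: exchanging `prev` and `nxt` and negating the step leaves it unchanged, which is the structural property behind the long-time energy conservation.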
Fisher's fiducial argument is widely viewed as a failed version of Neyman's theory of confidence limits. But Fisher's goal -- Bayesian-like probabilistic uncertainty quantification without priors -- was more ambitious than Neyman's, and it's not out of reach. I've recently shown that reliable, prior-free probabilistic uncertainty quantification must be grounded in the theory of imprecise probability, and I've put forward a possibility-theoretic solution that achieves it. This has been met with resistance, however, in part due to statisticians' singular focus on confidence limits. Indeed, if imprecision isn't needed to perform confidence-limit-related tasks, then what's the point? In this paper, for a class of practically useful models, I explain specifically why the fiducial argument gives valid confidence limits, i.e., it's the "best probabilistic approximation" of the possibilistic solution I recently advanced. This sheds new light on what the fiducial argument is doing and on what's lost in terms of reliability when imprecision is ignored and the fiducial argument is pushed for more than just confidence limits.
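The "fiducial quantiles are confidence limits" point is easiest to see in the simplest case: the normal location model with known variance, where the fiducial distribution for the mean is $N(\bar{x}, \sigma^2/n)$ and its quantiles coincide with Neyman's confidence limits. The numbers below are made up purely for the demonstration.

```python
from statistics import NormalDist

# Toy data for a normal location model with known sigma (made up).
data = [4.1, 5.3, 4.8, 5.0, 4.6]
sigma = 1.0
n = len(data)
xbar = sum(data) / n

# Fiducial distribution of the mean: N(xbar, sigma^2 / n).
fiducial = NormalDist(mu=xbar, sigma=sigma / n ** 0.5)
lo, hi = fiducial.inv_cdf(0.025), fiducial.inv_cdf(0.975)

# Neyman's 95% confidence interval, built the classical way.
z = NormalDist().inv_cdf(0.975)
ci_lo, ci_hi = xbar - z * sigma / n ** 0.5, xbar + z * sigma / n ** 0.5
```

In this model the two constructions agree exactly, which is the "confidence-limit" face of the fiducial argument; the paper's point is about what happens when the fiducial output is asked to do more than this.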
A posteriori reduced-order models, e.g. proper orthogonal decomposition, are essential to affordably tackle realistic parametric problems. They rely on a trustworthy training set, that is, a family of full-order solutions (snapshots) representative of all possible outcomes of the parametric problem. Having such a rich collection of snapshots is not, in many cases, computationally viable. A strategy for data augmentation, designed for parametric laminar incompressible flows, is proposed to enrich poorly populated training sets. The goal is to include, in the new artificial snapshots, emerging features that are not present in the original basis and that enhance the quality of the reduced-order solution. The methodologies are based on exploiting basic physical principles, such as mass and momentum conservation, to construct physically relevant artificial snapshots at a fraction of the cost of additional full-order solutions. Interestingly, the numerical results show that the ideas exploiting only mass conservation (i.e., incompressibility) do not produce significant added value with respect to standard linear combinations of snapshots. Conversely, accounting for the linearized momentum balance via the Oseen equation does improve the quality of the resulting approximation and therefore is an effective data augmentation strategy in the framework of viscous incompressible laminar flows.
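The baseline the abstract argues against can be shown in a few lines: augmenting a snapshot matrix with plain linear combinations of existing snapshots does not enlarge the POD basis, because the new columns lie in the span of the old ones. The snapshot sizes and numbers below are arbitrary stand-ins for full-order solutions.

```python
import numpy as np

rng = np.random.default_rng(0)
S = rng.standard_normal((100, 5))   # 5 stand-in full-order snapshots, size 100

# "Augmentation" by linear combinations of existing snapshots: the three
# artificial columns add nothing beyond the span of the originals.
augmented = np.hstack([S, S @ rng.standard_normal((5, 3))])

# POD basis = leading left singular vectors of the snapshot matrix.
U, s, _ = np.linalg.svd(augmented, full_matrices=False)
basis = U[:, :5]                    # rank of the augmented set is still 5

# Every original snapshot is reproduced exactly by this 5-vector basis.
recon = basis @ (basis.T @ S)
```

Physics-based augmentation (e.g. via the Oseen equation) is valuable precisely because it produces columns *outside* this span, enlarging the reduced space with genuinely new features.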
Microring resonators (MRRs) are promising devices for time-delay photonic reservoir computing, but the impact of the different physical effects taking place in the MRRs on reservoir computing performance is yet to be fully understood. We numerically analyze the impact of linear losses, as well as of the relaxation times of thermo-optic and free-carrier effects, on the prediction error of the time-series task NARMA-10. We demonstrate the existence of three regions, defined by the input power and the frequency detuning between the optical source and the microring resonance, that reveal the cavity transition from linear to nonlinear regimes. One of these regions offers very low error in time-series prediction under relatively low input power and number of nodes, while the other regions either lack nonlinearity or become unstable. This study provides insight into the design of the MRR and the optimization of its physical properties for improving the prediction performance of time-delay reservoir computing.
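For reference, the NARMA-10 benchmark series used to score the reservoir follows the standard definition below (with i.i.d. input drawn uniformly from [0, 0.5]); the series length and seed are arbitrary choices for the demo.

```python
import numpy as np

def narma10(T, seed=0):
    """Generate the standard NARMA-10 benchmark series of length T."""
    rng = np.random.default_rng(seed)
    u = rng.uniform(0.0, 0.5, T)          # i.i.d. input in [0, 0.5]
    y = np.zeros(T)
    for t in range(9, T - 1):
        y[t + 1] = (0.3 * y[t]
                    + 0.05 * y[t] * y[t - 9:t + 1].sum()  # 10-step memory
                    + 1.5 * u[t] * u[t - 9]
                    + 0.1)
    return u, y

u, y = narma10(2000)
```

The task couples a 10-step memory requirement with a mild nonlinearity, which is why it simultaneously probes the fading memory and the nonlinear transformation supplied by the MRR cavity.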
We provide a non-unit disk framework to solve combinatorial optimization problems such as Maximum Cut (Max-Cut) and Maximum Independent Set (MIS) on a Rydberg quantum annealer. Our setup consists of a many-body interacting Rydberg system where locally controllable light shifts are applied to individual qubits in order to map the graph problem onto the Ising spin model. Exploiting the flexibility that optical tweezers offer in terms of spatial arrangement, our numerical simulations implement the local-detuning protocol while globally driving the Rydberg annealer to the desired many-body ground state, which is also the solution to the optimization problem. Using optimal control methods, these solutions are obtained for prototype graphs with varying sizes at time scales well within the system lifetime and with approximation ratios close to one. The non-blockade approach facilitates the encoding of graph problems with specific topologies that can be realized in two-dimensional Rydberg configurations and is applicable to both unweighted as well as weighted graphs. A comparative analysis with fast simulated annealing is provided which highlights the advantages of our scheme in terms of system size, hardness of the graph, and the number of iterations required to converge to the solution.
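The Max-Cut-to-Ising mapping that the annealer exploits is compact enough to state in code: with spins $s_i = \pm 1$, the cut size is $\sum_{(i,j)\in E}(1 - s_i s_j)/2$, so maximizing the cut is equivalent to minimizing the antiferromagnetic Ising energy $H(s) = \sum_{(i,j)\in E} s_i s_j$ (local light shifts add the field terms for weighted instances). The triangle graph below is a toy stand-in for the prototype graphs in the paper.

```python
from itertools import product

edges = [(0, 1), (1, 2), (0, 2)]   # a triangle: the best cut has size 2

def cut_size(s):
    # Edge (i, j) is cut exactly when the spins disagree.
    return sum((1 - s[i] * s[j]) // 2 for i, j in edges)

def ising_energy(s):
    # Antiferromagnetic Ising energy; its ground state maximizes the cut.
    return sum(s[i] * s[j] for i, j in edges)

best = max(product((-1, 1), repeat=3), key=cut_size)
```

Brute force suffices for a triangle; the quantum annealer's job is to reach the Ising ground state for graphs where exhaustive search is no longer viable.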
A Cahn-Hilliard-Allen-Cahn phase-field model coupled with a heat transfer equation, particularly with full non-diagonal mobility matrices, is studied. After reformulating the problem with respect to the inverse temperature, we propose and analyse a structure-preserving approximation for the semi-discretisation in space and then a fully discrete approximation using conforming finite elements and time-stepping methods. We prove the structure-preserving property and discrete stability using relative entropy methods for both the semi-discrete and fully discrete cases. The theoretical results are illustrated by numerical experiments.
Deep neural networks (DNNs) often fail silently with over-confident predictions on out-of-distribution (OOD) samples, posing risks in real-world deployments. Existing techniques predominantly emphasize either the feature representation space or the gradient norms computed with respect to DNN parameters, yet they overlook the intricate gradient distribution and the topology of classification regions. To address this gap, we introduce GRadient-aware Out-Of-Distribution detection in interpolated manifolds (GROOD), a novel framework that relies on the discriminative power of gradient space to distinguish between in-distribution (ID) and OOD samples. To build this space, GROOD relies on class prototypes together with a prototype that specifically captures OOD characteristics. Uniquely, our approach incorporates a targeted mix-up operation at an early intermediate layer of the DNN to refine the separation of gradient spaces between ID and OOD samples. We quantify OOD detection efficacy using the distance to the nearest neighbor gradients derived from the training set, yielding a robust OOD score. Experimental evaluations substantiate that the introduction of targeted input mix-up amplifies the separation between ID and OOD in the gradient space, yielding impressive results across diverse datasets. Notably, when benchmarked against ImageNet-1k, GROOD surpasses the established robustness of state-of-the-art baselines. Through this work, we establish the utility of leveraging gradient spaces and class prototypes for enhanced OOD detection for DNNs in image classification.
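The scoring step alone can be sketched in a few lines. The vectors below are random placeholders: in GROOD they would be prototype-based gradients extracted after the mix-up layer, but the nearest-neighbor scoring logic is the same.

```python
import numpy as np

rng = np.random.default_rng(1)
train = rng.standard_normal((200, 16))   # stand-in training-set gradient vectors

def ood_score(g, bank):
    # Distance to the nearest training gradient; larger => more likely OOD.
    return np.min(np.linalg.norm(bank - g, axis=1))

# A point near the training gradients vs. one far from them.
id_like = train[0] + 0.01 * rng.standard_normal(16)
ood_like = train[0] + 10.0 * np.ones(16)
```

Thresholding this score gives the detector; the contribution of the mix-up operation is to make the gap between the two populations of scores wider than it would be in raw feature space.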
Partitioned neural network functions are used to approximate the solution of partial differential equations. The problem domain is partitioned into non-overlapping subdomains and the partitioned neural network functions are defined on the given non-overlapping subdomains. Each neural network function then approximates the solution in each subdomain. To obtain a convergent neural network solution, certain continuity conditions on the partitioned neural network functions across the subdomain interfaces must be included in the loss function that is used to train the parameters in the neural network functions. In our work, by introducing suitable interface values, the loss function is reformulated into a sum of localized loss functions, and each localized loss function is used to train the corresponding local neural network parameters. In addition, to accelerate the convergence of the neural network solution, the localized loss function is enriched with an augmented Lagrangian term, where the interface condition and the boundary condition are enforced as constraints on the local solutions by using Lagrange multipliers. The local neural network parameters and Lagrange multipliers are then found by optimizing the localized loss function. To take advantage of the localized loss functions for parallel computation, an iterative algorithm is also proposed. The training performance and convergence of the proposed algorithms are studied numerically for various test examples.
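The structure of one localized loss with its augmented Lagrangian term can be sketched as follows. All ingredients are stand-ins: `residual` plays the role of the local PDE loss of one subdomain network, `trace` its values on the interface, and `g` the prescribed interface values; the numbers are made up for the demo.

```python
import numpy as np

def localized_loss(residual, trace, g, lam, rho):
    """Local PDE residual + Lagrange-multiplier term + quadratic penalty."""
    mismatch = trace - g
    return (residual                           # local PDE residual term
            + lam @ mismatch                   # Lagrange-multiplier term
            + 0.5 * rho * mismatch @ mismatch) # augmentation (penalty) term

def update_multipliers(lam, trace, g, rho):
    # Dual-ascent update applied between local optimization sweeps.
    return lam + rho * (trace - g)

trace = np.array([1.0, 2.0])   # subdomain solution on the interface (made up)
g = np.array([1.5, 1.5])       # prescribed interface values (made up)
lam = np.zeros(2)              # Lagrange multipliers, initially zero
loss = localized_loss(0.25, trace, g, lam, rho=10.0)
```

Because each such loss depends only on its own subdomain's parameters and the shared interface data, the local optimizations can run in parallel, with the multiplier updates providing the coupling between sweeps.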
This study introduces a force-based higher-order shear deformable beam finite element model that incorporates a rational shear stress distribution, designed for the precise analysis of functionally graded sandwich beams. Unlike conventional higher-order shear beam finite elements that regard generalized displacements as unknown fields, this model considers the distributions of stress resultants along the beam axis as the unknown fields. The specific forms of these stress resultants and the generalized displacements are analytically determined, based on the differential equilibrium equations of the higher-order shear beam. This approach effectively circumvents numerical errors that can arise from finite element discretization. Furthermore, the model introduces a stress equilibrium equation to accurately depict the distribution of transverse shear stress across the beam thickness. A corrected shear stiffness, which takes into account rational shear stress, is derived and incorporated into the proposed beam element. Numerical examples underscore the accuracy and efficacy of the proposed higher-order beam element model in the static analysis of functionally graded sandwich beams, particularly in terms of true transverse shear stress distribution.
The conventional computing paradigm struggles to fulfill the rapidly growing demands of emerging applications, especially those for machine intelligence, because much of the power and energy is consumed by constant data transfers between logic and memory modules. A new paradigm, called "computational random-access memory (CRAM)", has emerged to address this fundamental limitation. CRAM performs logic operations directly using the memory cells themselves, without the data ever leaving the memory. The energy and performance benefits of CRAM for both conventional and emerging applications have been well established by prior numerical studies. However, an experimental demonstration and study of CRAM evaluating its computation accuracy, a realistic and application-critical metric for its technological feasibility and competitiveness, has been lacking. In this work, a CRAM array based on magnetic tunnel junctions (MTJs) is experimentally demonstrated. First, basic memory operations as well as 2-, 3-, and 5-input logic operations are studied. Then, a 1-bit full adder with two different designs is demonstrated. Based on the experimental results, a suite of models has been developed to characterize the accuracy of CRAM computation. Further analysis of scalar addition, multiplication, and matrix multiplication shows promising results. These results are then applied to a complete application, a neural-network-based handwritten digit classifier, as an example to show the connection between application performance and further MTJ development. The classifier achieved almost-perfect classification accuracy under reasonable projections of future MTJ development. With the confirmation of MTJ-based CRAM's accuracy, there is a strong case that this technology will have a significant impact on power- and energy-demanding applications of machine intelligence.
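As a reference point for the demonstrated circuit, the truth table a 1-bit full adder must reproduce is checked below. MTJ-based CRAM typically realizes logic through majority-style gates, so the carry bit is written as a 3-input majority; this is only the Boolean specification, not either of the paper's two hardware designs.

```python
from itertools import product

def maj(a, b, c):
    # 3-input majority: true when at least two inputs are 1.
    return (a + b + c) >= 2

def full_adder(a, b, cin):
    s = a ^ b ^ cin            # sum bit
    cout = maj(a, b, cin)      # carry bit = 3-input majority
    return int(s), int(cout)
```

Any CRAM realization, whichever gate decomposition it uses, must satisfy `sum + 2*carry == a + b + cin` over all eight input combinations; deviations from this table are exactly what the accuracy characterization in the paper quantifies.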