This paper presents a general, nonlinear isogeometric finite element formulation for rotation-free shells with embedded fibers that captures anisotropy in stretching, shearing, twisting and bending -- both in-plane and out-of-plane. These capabilities allow for the simulation of large sheets of heterogeneous and fibrous materials, either with or without a matrix, such as textiles, composites, and pantographic structures. The work is a computational extension of our earlier theoretical work [1], which generalizes existing Kirchhoff-Love shell theory to incorporate the in-plane bending resistance of initially straight or curved fibers. The formulation requires only displacement degrees of freedom to capture all mentioned modes of deformation. To this end, isogeometric shape functions are used to satisfy the $C^1$-continuity required for bending across element boundaries. The proposed formulation admits a wide range of material models, such as surface hyperelasticity, which does not require explicit thickness integration. To deal with possible material instability due to fiber compression, a stabilization scheme is added. Several benchmark examples demonstrate the robustness and accuracy of the proposed computational formulation.
Nowadays, data is commonly represented by vectors. Retrieving, among millions or billions of vectors, those that are similar to a given query is a ubiquitous problem known as similarity search, with relevance to a wide range of applications. Graph-based indices are currently the best-performing techniques for billion-scale similarity search, but their random memory-access pattern makes it challenging to realize their full potential. In this work, we present new techniques and systems for creating faster and smaller graph-based indices. To this end, we introduce a novel vector compression method, Locally-adaptive Vector Quantization (LVQ), that uses per-vector scaling and scalar quantization to improve search performance with fast similarity computations and a reduced effective bandwidth, while decreasing memory footprint and barely impacting accuracy. Combined with a new high-performance computing system for graph-based similarity search, LVQ establishes the new state of the art in performance and memory footprint. For billions of vectors, LVQ outperforms the second-best alternatives: (1) in the low-memory regime, by up to 20.7x in throughput with up to a 3x reduction in memory footprint, and (2) in the high-throughput regime, by 5.8x with 1.4x less memory.
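As a rough illustration of the per-vector scaling idea behind LVQ, consider the following minimal sketch; the bit width, storage layout, and function names are our own illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def lvq_quantize(x, bits=8):
    """Per-vector scalar quantization sketch (illustrative, not the paper's exact LVQ).

    Each vector is scaled by its own min/max, so the quantization grid
    adapts to the local dynamic range of that individual vector.
    """
    lo, hi = float(x.min()), float(x.max())
    levels = 2 ** bits - 1
    delta = max((hi - lo) / levels, 1e-12)          # per-vector step size
    codes = np.round((x - lo) / delta).astype(np.uint8)
    return codes, lo, delta                         # codes + two scalars per vector

def lvq_reconstruct(codes, lo, delta):
    return lo + codes.astype(np.float32) * delta

x = np.random.randn(128).astype(np.float32)
codes, lo, delta = lvq_quantize(x)
x_hat = lvq_reconstruct(codes, lo, delta)
print(np.max(np.abs(x - x_hat)))                    # error bounded by delta / 2
```

Storing one byte per dimension plus two scalars per vector is what shrinks the memory footprint and effective bandwidth relative to full-precision floats.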
This paper addresses the problem of solving nonlinear systems in the context of symmetric quantum signal processing (QSP), a powerful technique for implementing matrix functions on quantum computers. Symmetric QSP focuses on representing target polynomials as products of matrices in SU(2) that possess symmetry properties. We present a novel Newton's method tailored to efficiently solving the nonlinear system that determines the phase factors within the symmetric QSP framework. Our method demonstrates rapid and robust convergence in all parameter regimes, including the challenging scenario of ill-conditioned Jacobian matrices, using standard double precision arithmetic. For instance, solving symmetric QSP for the highly oscillatory target function $\alpha \cos(1000 x)$ (polynomial degree $\approx 1433$) takes $6$ iterations to converge to machine precision when $\alpha=0.9$, and the iteration count only increases to $18$ when $\alpha=1-10^{-9}$, where the Jacobian matrix is highly ill-conditioned. Leveraging the matrix product state structure of symmetric QSP, the computation of the Jacobian matrix incurs a cost comparable to a single function evaluation. Moreover, we introduce a reformulation of symmetric QSP using real-number arithmetic, further enhancing the method's efficiency. Extensive numerical tests validate the effectiveness and robustness of our approach, which is implemented in the QSPPACK software package.
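To fix ideas, the outer loop of such a solver is a standard Newton iteration. The sketch below is generic; the residual `F` mapping phase factors to the deviation from the target polynomial, and the Jacobian `J` with its efficient matrix-product-state evaluation, are the paper's contribution and are only assumed here as callables:

```python
import numpy as np

def newton_solve(F, J, phi0, tol=1e-12, max_iter=50):
    """Generic Newton iteration for F(phi) = 0 (sketch only).

    In symmetric QSP, F(phi) would be the mismatch between the achieved
    and target polynomial coefficients for phase factors phi, and J(phi)
    the Jacobian, computable at roughly the cost of one evaluation of F.
    """
    phi = phi0.copy()
    for it in range(max_iter):
        r = F(phi)
        if np.linalg.norm(r) < tol:
            return phi, it
        phi -= np.linalg.solve(J(phi), r)   # Newton step
    return phi, max_iter
```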
This paper considers the Cauchy problem for the nonlinear dynamic string equation of Kirchhoff type with time-varying coefficients. The objective of this work is to develop a temporal discretization algorithm capable of approximating the solution of this initial-boundary value problem. To this end, a symmetric three-layer semi-discrete scheme is employed with respect to the temporal variable, wherein the nonlinear term is evaluated at the middle node point. This approach enables the numerical solution at each temporal step to be obtained by inverting linear operators, yielding a system of second-order linear ordinary differential equations. Local convergence of the proposed scheme is established: on the local temporal interval it converges quadratically with respect to the time step. We have conducted several numerical experiments using the proposed algorithm on various test problems to validate its performance; the obtained numerical results are in accordance with the theoretical findings.
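For concreteness, one plausible instance of such a three-layer scheme reads as follows; the notation is ours and purely illustrative, with constant coefficients $a, b$ standing in for the paper's time-varying ones, $u^n \approx u(\cdot, t_n)$, and time step $\tau$:
\[
\frac{u^{n+1} - 2u^{n} + u^{n-1}}{\tau^{2}}
- \Big( a + b \int_0^L |u_x^{\,n}|^2 \, dx \Big)\,
\frac{\partial^2}{\partial x^2}\!\left( \frac{u^{n+1} + u^{n-1}}{2} \right)
= f(x, t_n).
\]
Freezing the Kirchhoff coefficient at the middle node $u^n$ makes the equation for $u^{n+1}$ a linear second-order differential equation in the spatial variable, which is the structure the abstract describes.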
In this paper, we study the weighted $k$-server problem on the uniform metric in both the offline and online settings. We start with the offline setting. In contrast to the (unweighted) $k$-server problem, which has a polynomial-time solution using min-cost flows, there are strong computational lower bounds for the weighted $k$-server problem, even on the uniform metric. Specifically, we show that, assuming the unique games conjecture, there are no polynomial-time algorithms with a sub-polynomial approximation factor, even if we use $c$-resource augmentation for $c < 2$. Furthermore, if we consider the natural LP relaxation of the problem, then obtaining a bounded integrality gap requires at least $\ell$-resource augmentation, where $\ell$ is the number of distinct server weights. We complement these results by obtaining a constant-approximation algorithm via LP rounding, with a resource augmentation of $(2+\epsilon)\ell$ for any constant $\epsilon > 0$. In the online setting, an $\exp(k)$ lower bound is known for the competitive ratio of any randomized algorithm for the weighted $k$-server problem on the uniform metric. In contrast, we show that $2\ell$-resource augmentation brings the competitive ratio down by an exponential factor, to only $O(\ell^2 \log \ell)$. Our online algorithm uses a two-stage approach: first obtaining a fractional solution using the online primal-dual framework, and then rounding it online.
Quantum neural networks (QNNs) use parameterized quantum circuits with data-dependent inputs and generate outputs through the evaluation of expectation values. Calculating these expectation values necessitates repeated circuit evaluations, thus introducing fundamental finite-sampling noise even on error-free quantum computers. We reduce this noise by introducing variance regularization, a technique for reducing the variance of the expectation value during quantum model training. This technique requires no additional circuit evaluations if the QNN is properly constructed. Our empirical findings demonstrate that the reduced variance speeds up training, lowers the output noise, and decreases the number of necessary gradient-circuit evaluations. The regularization method is benchmarked on the regression of multiple functions. We show that in our examples it lowers the variance by an order of magnitude on average and leads to a significantly reduced noise level of the QNN. Finally, we demonstrate QNN training on a real quantum device and evaluate the impact of error mitigation; here, the optimization is feasible only because of the reduced number of shots needed in the gradient evaluation, a direct result of the reduced variance.
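A minimal sketch of the idea, using a toy single-qubit model in which the Pauli-$Z$ variance $1 - \langle Z \rangle^2$ comes for free from the expectation value itself; the model, names, and regularization weight are our illustrative choices, not the paper's QNN:

```python
import numpy as np

# Toy single-qubit "QNN": f(theta, x) = <Z> after RY(theta * x) applied to |0>.
# For the Pauli-Z observable, Var(Z) = 1 - <Z>^2, so the variance penalty
# needs no extra circuit evaluations in this single-qubit case.

def expectation(theta, x):
    return np.cos(theta * x)

def variance(theta, x):
    return 1.0 - expectation(theta, x) ** 2

def regularized_loss(theta, xs, ys, alpha=0.1):
    preds = expectation(theta, xs)
    mse = np.mean((preds - ys) ** 2)          # data-fitting term
    var_penalty = np.mean(variance(theta, xs))  # variance regularization term
    return mse + alpha * var_penalty
```

Minimizing the penalized loss pushes the model toward parameter regions where the observable's shot noise is small, which is what reduces the number of shots needed per gradient evaluation.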
We study partially linear models in settings where observations are arranged in independent groups but may exhibit within-group dependence. Existing approaches estimate linear model parameters through weighted least squares, with optimal weights (given by the inverse covariance of the response, conditional on the covariates) typically estimated by maximising a (restricted) likelihood from random effects modelling or by using generalised estimating equations. We introduce a new 'sandwich loss' whose population minimiser coincides with the weights of these approaches when the parametric forms for the conditional covariance are well-specified, but can yield arbitrarily large improvements in linear parameter estimation accuracy when they are not. Under relatively mild conditions, our estimated coefficients are asymptotically Gaussian and enjoy minimal variance among estimators with weights restricted to a given class of functions, when user-chosen regression methods are used to estimate nuisance functions. We further expand the class of functional forms for the weights that may be fitted beyond parametric models by leveraging the flexibility of modern machine learning methods within a new gradient boosting scheme for minimising the sandwich loss. We demonstrate the effectiveness of both the sandwich loss and what we call 'sandwich boosting' in a variety of settings with simulated and real-world data.
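In the simplest special case of a fully linear model with grouped responses, the role of the weights can be seen from the well-known sandwich identity (a textbook fact, not the paper's new loss): the weighted least squares estimator $\hat\beta_W = (X^\top W X)^{-1} X^\top W Y$ has conditional covariance
\[
\mathrm{Var}(\hat\beta_W \mid X) = (X^\top W X)^{-1} X^\top W \Sigma W X \,(X^\top W X)^{-1},
\qquad \Sigma = \mathrm{Cov}(Y \mid X),
\]
which is minimized over $W$ at $W = \Sigma^{-1}$. This suggests why a loss built around the sandwich form can recover the optimal weights when the covariance model is well-specified, yet remain meaningful under misspecification.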
Micro Aerial Vehicles (MAVs) often face a high risk of collision during autonomous flight, particularly in cluttered and unstructured environments. To mitigate the impact of collisions on sensitive onboard devices, resilient MAVs with mechanical protective cages and reinforced frames are commonly used. Compliant, impact-resilient MAVs offer a promising alternative that reduces the potential damage caused by impacts. In this study, we present novel findings on the impact-resilient capabilities of MAVs equipped with passive springs in their compliant arms. We analyze the effect of compliance through dynamic modeling and demonstrate that the inclusion of passive springs enhances impact resilience. This impact resilience is extensively tested by stabilizing the MAV after wall collisions under high-speed and large-angle conditions. Additionally, we provide comprehensive comparisons with rigid MAVs to better characterize the in-flight tradeoffs introduced by embedding compliance into the robot's frame.
We study the Landau-de Gennes Q-tensor model of liquid crystals subjected to an electric field and develop a fully discrete numerical scheme for its solution. The scheme uses a convex splitting of the bulk potential, and we introduce a truncation operator for the Q-tensors to ensure well-posedness of the problem. We prove the stability and well-posedness of the scheme. Finally, under a restriction on the admissible parameters of the scheme, we show that, up to a subsequence, solutions of the fully discrete scheme converge to weak solutions of the Q-tensor model as the time step and mesh are refined. We then present numerical results computed by the scheme; in particular, we show that the Fr\'eedericksz transition can be simulated with this scheme.
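As a generic illustration of the convex-splitting idea (our notation; the paper's scheme additionally involves the electric-field coupling, the truncation operator, and the spatial discretization), write the bulk potential as a difference of convex functionals, $F = F_c - F_e$, and treat the convex part implicitly and the concave part explicitly:
\[
\frac{Q^{n+1} - Q^{n}}{\tau}
= -\left( \frac{\delta F_c}{\delta Q}(Q^{n+1}) - \frac{\delta F_e}{\delta Q}(Q^{n}) \right).
\]
For gradient flows, this splitting is the classical route to unconditional energy stability of the time stepping.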
Understanding dynamics in complex systems is challenging because there are many degrees of freedom, and those that are most important for describing events of interest are often not obvious. The leading eigenfunctions of the transition operator are useful for visualization, and they can provide an efficient basis for computing statistics such as the likelihood and average time of events (predictions). Here we develop inexact iterative linear algebra methods for computing these eigenfunctions (spectral estimation) and making predictions from a data set of short trajectories sampled at finite intervals. We demonstrate the methods on a low-dimensional model that facilitates visualization and a high-dimensional model of a biomolecular system. Implications for the prediction problem in reinforcement learning are discussed.
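For orientation, a standard Galerkin (EDMD/VAC-style) estimate of such eigenfunctions from short-trajectory pairs looks as follows; this is a common baseline construction under our own naming, not the paper's inexact iterative method:

```python
import numpy as np

def leading_eigenfunctions(phi_t, phi_tau, k=3):
    """Galerkin estimate of transition-operator eigenfunctions (sketch).

    phi_t, phi_tau: (n_samples, n_features) arrays of basis functions
    evaluated at the start and end of each short trajectory segment.
    Solves the generalized eigenproblem C1 v = lam C0 v.
    """
    n = len(phi_t)
    C0 = phi_t.T @ phi_t / n        # overlap (mass) matrix
    C1 = phi_t.T @ phi_tau / n      # time-lagged correlation matrix
    vals, vecs = np.linalg.eig(np.linalg.solve(C0, C1))
    order = np.argsort(-vals.real)[:k]   # largest eigenvalues first
    return vals.real[order], vecs.real[:, order]
```

The coordinates returned in `vecs` define the leading eigenfunctions in the chosen basis, which can then be used for visualization or as a basis for prediction statistics.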
Multivariate time series forecasting has been extensively studied over the years, with ubiquitous applications in areas such as finance, traffic, and the environment. Still, concerns have been raised that traditional methods are incapable of modeling the complex patterns and dependencies in real-world data. To address such concerns, various deep learning models, mainly Recurrent Neural Network (RNN) based methods, have been proposed. Nevertheless, capturing extremely long-term patterns while effectively incorporating information from other variables remains a challenge for time-series forecasting. Furthermore, lack of explainability remains a serious drawback of deep neural network models. Inspired by Memory Networks proposed for solving the question-answering task, we propose a deep learning based model named Memory Time-series network (MTNet) for time series forecasting. MTNet consists of a large memory component, three separate encoders, and an autoregressive component that are trained jointly. Additionally, the designed attention mechanism makes MTNet highly interpretable: we can easily tell which part of the historical data is referenced the most.
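A minimal sketch of the attention readout over memory encodings that underlies this interpretability; the encoder outputs, shapes, and names here are our assumptions, not MTNet's exact architecture:

```python
import numpy as np

def memory_attention(query_enc, memory_encs):
    """Attention over encoded historical memory blocks (illustrative sketch).

    query_enc:   (d,)   encoding of the recent time-series window
    memory_encs: (m, d) encodings of m historical memory blocks
    Returns the attention weights (which historical block is referenced
    the most -- the source of interpretability) and the weighted readout.
    """
    scores = memory_encs @ query_enc         # similarity of query to each block
    weights = np.exp(scores - scores.max())  # numerically stable softmax
    weights /= weights.sum()
    context = weights @ memory_encs          # weighted memory readout
    return weights, context
```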