The virtual element method (VEM) allows discretization of elasticity and plasticity problems with polygons in 2D and polyhedra in 3D. The polygons (and polyhedra) can have an arbitrary number of sides and can be concave or convex. These features, among others, make the method attractive for meshing complex geometries. However, to the author's knowledge, axisymmetric virtual elements have not appeared before in the literature. Hence, in this work a novel first-order-consistent axisymmetric virtual element method is applied to problems of elasticity and plasticity. The VEM-specific implementation details and adjustments needed to solve axisymmetric simulations are presented. Representative benchmark problems, including pressure vessels and circular plates, are illustrated. Examples also show that problems of near incompressibility are solved successfully. Consequently, this research demonstrates that the axisymmetric VEM formulation successfully solves certain classes of solid mechanics problems. The work concludes with a discussion of results for the current formulation and future research directions.
We present an analysis of total variation (TV) on non-Euclidean parameterized surfaces, a natural representation of the shapes used in 3D graphics. Our work explains recent experimental findings in shape spectral TV [Fumero et al., 2020] and adaptive anisotropic spectral TV [Biton and Gilboa, 2022]. A new way to generalize set convexity from the plane to surfaces is derived by characterizing the TV eigenfunctions on surfaces. Relationships between TV, area, eigenvalues, eigenfunctions, and their discontinuities are discovered. Further, we expand the shape spectral TV toolkit to include versatile zero-homogeneous flows, demonstrated through smoothing and exaggerating filters. Last but not least, we propose the first TV-based method for shape deformation, characterized by deformations along geometric bottlenecks. We show these bottlenecks to be aligned with eigenfunction discontinuities. This research advances the field of spectral TV on surfaces and its application in 3D graphics, offering new perspectives for shape filtering and deformation.
Most formal methods treat the correctness of a software system as a binary decision. However, proving the correctness of complex systems completely is difficult because they are composed of multiple components, usage scenarios, and environments. We present QuAC, a modular approach for quantifying the correctness of service-oriented software systems by combining software architecture modeling with deductive verification. Our approach is based on a model of the service-oriented architecture and the probabilistic usage scenarios of the system. The correctness of a single service is approximated by a coverage region, a formula describing which inputs for that service are proven not to lead to an erroneous execution. The coverage regions can be determined by a combination of various analyses, e.g., formal verification, expert estimation, or testing. The coverage regions and the software model are then combined into a probabilistic program. From this, we compute the probability that, under a given usage profile, no service is called outside its coverage region. If the coverage region is large enough, then instead of attempting to reach 100% coverage, which may be prohibitively expensive, run-time verification or testing approaches may be used to deal with inputs outside the coverage region. We also present an implementation of QuAC for Java using the modeling tool Palladio and the deductive verification tool KeY. We demonstrate its usability by applying it to a software simulation of an energy system.
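As a rough illustration of the quantification step only, the sketch below estimates, by Monte Carlo sampling, the probability that a run of service calls stays inside all coverage regions. The service names, input distributions, and coverage predicates are invented for this sketch; it does not stand in for the Palladio/KeY toolchain or the probabilistic-program encoding described above.

```python
import random

# Hypothetical coverage regions: predicates over a service's input, standing in
# for the formulas obtained from deductive verification or other analyses.
coverage = {
    "withdraw": lambda amount: 0 <= amount <= 10_000,   # proven input range
    "deposit":  lambda amount: amount >= 0,             # proven input range
}

def sample_call(rng):
    """Hypothetical usage profile: which service is called and with what input."""
    service = rng.choices(["withdraw", "deposit"], weights=[0.7, 0.3])[0]
    amount = rng.gauss(2_000, 4_000)
    return service, amount

def estimate_correctness(n_runs=100_000, calls_per_run=5, seed=0):
    """Monte Carlo estimate of the probability that an entire usage run stays
    inside the coverage regions of all invoked services."""
    rng = random.Random(seed)
    ok = sum(
        all(coverage[s](x) for s, x in (sample_call(rng) for _ in range(calls_per_run)))
        for _ in range(n_runs)
    )
    return ok / n_runs

if __name__ == "__main__":
    print(f"estimated P(no call leaves its coverage region) = {estimate_correctness():.3f}")
```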
Though a core element of the digital age, numerical difference algorithms are notoriously susceptible to noise. This stems from a key disconnect between the infinitesimal quantities of continuous differentiation and the finite intervals of its discrete counterpart, a disconnect that violates the fundamental definition of differentiation (Leibniz and Cauchy). To bridge this gap, we develop a novel general difference, the Tao General Difference (TGD). Departing from derivative-by-integration, TGD generalizes differentiation to finite intervals in continuous domains through three key constraints. This allows us to calculate the general difference of a sequence in the discrete domain via a continuous step function constructed from the sequence. Two construction methods, rotational construction and orthogonal construction, are proposed to build TGD operators. The constructed TGD operators use the same convolution mode of calculation for continuous functions, discrete sequences, and arrays of any dimension. Our analysis with example operations showcases TGD's capability in both continuous and discrete domains, paving the way for accurate and noise-resistant differentiation in the digital era.
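To make the convolution-based view concrete, the following sketch contrasts a classical finite difference with a difference computed by convolving a noisy signal with a smooth, finite-window derivative kernel. The Gaussian-derivative kernel used here is a generic stand-in chosen for this sketch; it is not the rotational or orthogonal TGD construction, but it illustrates the shared convolution mode and the gain in noise robustness.

```python
import numpy as np

def naive_difference(x, dt):
    """Classical forward difference: the textbook discrete derivative,
    which amplifies high-frequency measurement noise."""
    return np.diff(x) / dt

def smooth_difference(x, dt, sigma=5, radius=20):
    """Difference computed as a single convolution with a smooth, odd kernel
    over a finite window. The Gaussian-derivative shape is a generic stand-in
    (an assumption of this sketch), not a TGD rotational/orthogonal operator."""
    t = (np.arange(2 * radius + 1) - radius) * dt
    kernel = -t * np.exp(-t ** 2 / (2 * (sigma * dt) ** 2))
    # Normalize so that a unit-slope ramp is differentiated to exactly 1.
    ramp = np.arange(2 * radius + 1) * dt
    kernel /= np.convolve(ramp, kernel, mode="same")[radius]
    return np.convolve(x, kernel, mode="same")

if __name__ == "__main__":
    dt, r = 1e-2, 20
    t = np.arange(0.0, 10.0, dt)
    noisy = np.sin(t) + 0.05 * np.random.default_rng(0).standard_normal(t.size)
    err_naive = np.abs(naive_difference(noisy, dt) - np.cos(t[:-1]))[r:-r].mean()
    err_smooth = np.abs(smooth_difference(noisy, dt, radius=r) - np.cos(t))[r:-r].mean()
    print(f"mean error  naive: {err_naive:.3f}   finite-window convolution: {err_smooth:.3f}")
```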
We study the complexity-theoretic boundaries of tractability for three classical problems in the context of Hierarchical Task Network Planning: the validation of a provided plan, whether an executable plan exists, and whether a given state can be reached by some plan. We show that all three problems can be solved in polynomial time on primitive task networks of constant partial order width (and a generalization thereof), whereas for the latter two problems this holds only under a provably necessary restriction to the state space. Next, we obtain an algorithmic meta-theorem along with corresponding lower bounds to identify tight conditions under which general polynomial-time solvability results can be lifted from primitive to general task networks. Finally, we enrich our investigation by analyzing the parameterized complexity of the three considered problems, and show that (1) fixed-parameter tractability for all three problems can be achieved by replacing the partial order width with the vertex cover number of the network as the parameter, and (2) other classical graph-theoretic parameters of the network (including treewidth, treedepth, and the aforementioned partial order width) do not yield fixed-parameter tractability for any of the three problems.
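For readers unfamiliar with the plan-validation problem on primitive task networks, the sketch below checks that a given plan is a linearization of the network's partial order and is executable from an initial state. The STRIPS-style model of primitive tasks (precondition/add/delete sets) and the toy tasks are assumptions of this sketch, not the formalism used in the paper.

```python
def validate_plan(plan, network, tasks, init_state):
    """Check that 'plan' (a sequence of task ids) uses exactly the tasks of a
    primitive task network, respects its partial order, and is executable
    from the initial state. The STRIPS-style pre/add/delete model of
    primitive tasks is an assumption of this sketch."""
    task_ids, order = network                     # order: set of (before, after) pairs
    if sorted(plan) != sorted(task_ids):
        return False                              # must be a permutation of the network's tasks
    position = {t: i for i, t in enumerate(plan)}
    if any(position[a] >= position[b] for a, b in order):
        return False                              # must respect the partial order
    state = set(init_state)
    for t in plan:
        pre, add, delete = tasks[t]
        if not pre <= state:
            return False                          # a precondition is violated
        state = (state - delete) | add
    return True

# Tiny hypothetical example: two primitive tasks that must occur in order.
tasks = {
    "pick":  ({"at_table"}, {"holding"}, set()),
    "place": ({"holding"}, {"placed"}, {"holding"}),
}
network = (["pick", "place"], {("pick", "place")})
print(validate_plan(["pick", "place"], network, tasks, {"at_table"}))  # True
print(validate_plan(["place", "pick"], network, tasks, {"at_table"}))  # False
```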
Background and purpose: The unanticipated detection by magnetic resonance imaging (MRI) in the brain of asymptomatic subjects of white matter lesions suggestive of multiple sclerosis (MS) has been named radiologically isolated syndrome (RIS). As the difference between early MS [i.e. clinically isolated syndrome (CIS)] and RIS is the occurrence of a clinical event, it is logical to improve detection of the subclinical form without interfering with MRI, as radiological diagnostic criteria for that already exist. Our objective was to use machine-learning classification methods to identify morphometric measures that help to discriminate patients with RIS from those with CIS. Methods: We used a multimodal 3-T MRI approach by combining MRI biomarkers (cortical thickness, cortical and subcortical grey matter volume, and white matter integrity) of a cohort of 17 patients with RIS and 17 patients with CIS for single-subject-level classification. Results: The best models to predict the diagnosis of CIS and RIS were based on the Naive Bayes, Bagging, and Multilayer Perceptron classifiers using only three features: the left rostral middle frontal gyrus volume and the fractional anisotropy values in the right amygdala and right lingual gyrus. Naive Bayes obtained the highest accuracy [overall classification accuracy, 0.765; area under the receiver operating characteristic curve (AUROC), 0.782]. Conclusions: A machine-learning approach applied to multimodal MRI data may differentiate between the earliest clinical expressions of MS (CIS and RIS) with an accuracy of 78%. Keywords: Bagging; Multilayer Perceptron; Naive Bayes classifier; clinically isolated syndrome; diffusion tensor imaging; machine-learning; magnetic resonance imaging; multiple sclerosis; radiologically isolated syndrome.
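For orientation, a minimal scikit-learn pipeline in the spirit of the described experiment is sketched below: the three listed features feed Naive Bayes, Bagging, and Multilayer Perceptron classifiers evaluated with leave-one-out cross-validation. The feature matrix is synthetic placeholder data (so the printed scores are meaningless), and the leave-one-out protocol and standardization step are assumptions of this sketch rather than the authors' exact procedure.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import BaggingClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Placeholder data standing in for the 34-subject cohort (17 RIS, 17 CIS):
# columns = [left rostral middle frontal volume, FA right amygdala, FA right lingual].
rng = np.random.default_rng(0)
X = rng.normal(size=(34, 3))
y = np.repeat([0, 1], 17)          # 0 = RIS, 1 = CIS (labels are illustrative)

classifiers = {
    "Naive Bayes": GaussianNB(),
    "Bagging": BaggingClassifier(random_state=0),
    "Multilayer Perceptron": MLPClassifier(max_iter=2000, random_state=0),
}

for name, clf in classifiers.items():
    model = make_pipeline(StandardScaler(), clf)
    # Leave-one-out cross-validation is a common choice for cohorts this small.
    proba = cross_val_predict(model, X, y, cv=LeaveOneOut(), method="predict_proba")[:, 1]
    print(f"{name:>22}: accuracy={accuracy_score(y, (proba > 0.5).astype(int)):.3f}  "
          f"AUROC={roc_auc_score(y, proba):.3f}")
```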
In this paper, a comparative analysis of geometric impedance controls (GICs) derived from two different potential functions on SE(3) for robotic manipulators is presented. The first potential function is defined on the Lie group, utilizing the Frobenius norm of the configuration error matrix. The second potential function is defined utilizing the Lie algebra, i.e., the log-map of the configuration error. Using a differential-geometric approach, the detailed derivation of the distance metric and potential function on SE(3) is introduced. The GIC laws are then derived from the two potential functions, respectively, followed by extensive comparative analyses. In the qualitative analysis, the properties of the error functions and control laws are analyzed, while the performances of the controllers are compared quantitatively using numerical simulation.
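To make the two error notions tangible, the sketch below evaluates a Frobenius-norm-based potential and a log-map (Lie-algebra) potential for a sample configuration error on SE(3). The quadratic position terms, gain values, and helper functions are assumptions of this sketch; they are not the exact potential functions or control laws derived in the paper.

```python
import numpy as np
from scipy.linalg import logm

def frobenius_potential(g_d, g, kR=1.0, kp=1.0):
    """Potential built from the Frobenius norm of the rotational configuration
    error plus a quadratic position term (weighting assumed for this sketch)."""
    Rd, pd = g_d[:3, :3], g_d[:3, 3]
    R,  p  = g[:3, :3],  g[:3, 3]
    return 0.5 * kR * np.linalg.norm(np.eye(3) - Rd.T @ R, "fro") ** 2 \
         + 0.5 * kp * np.linalg.norm(p - pd) ** 2

def logmap_potential(g_d, g, kR=1.0, kp=1.0):
    """Potential built from the Lie-algebra (log-map) coordinates of the
    rotational configuration error; the weighting is again illustrative."""
    Rd, pd = g_d[:3, :3], g_d[:3, 3]
    R,  p  = g[:3, :3],  g[:3, 3]
    phi = logm(Rd.T @ R).real                              # 3x3 skew-symmetric matrix
    omega = np.array([phi[2, 1], phi[0, 2], phi[1, 0]])    # rotation error vector
    return 0.5 * kR * omega @ omega + 0.5 * kp * np.linalg.norm(p - pd) ** 2

def se3(axis, angle, translation):
    """Helper: homogeneous transform from an axis-angle rotation and a translation."""
    axis = np.asarray(axis, float) / np.linalg.norm(axis)
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    R = np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * K @ K  # Rodrigues formula
    g = np.eye(4)
    g[:3, :3], g[:3, 3] = R, translation
    return g

g_desired = se3([0, 0, 1], 0.0, [0.0, 0.0, 0.0])
g_current = se3([0, 1, 0], 0.8, [0.1, -0.2, 0.05])
print(frobenius_potential(g_desired, g_current), logmap_potential(g_desired, g_current))
```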
We propose a predictor-corrector adaptive method for the study of hyperbolic partial differential equations (PDEs) under uncertainty. Constructed around the framework of stochastic finite volume (SFV) methods, our approach circumvents sampling schemes or simulation ensembles while also preserving fundamental properties, in particular hyperbolicity of the resulting systems and conservation of the discrete solutions. Furthermore, we augment the existing SFV theory with a priori convergence results for statistical quantities, in particular push-forward densities, which we demonstrate through numerical experiments. By linking refinement indicators to regions of the physical and stochastic spaces, we drive anisotropic refinements of the discretizations, introducing new degrees of freedom (DoFs) where deemed profitable. To illustrate our proposed method, we consider a series of numerical examples for non-linear hyperbolic PDEs based on Burgers' and Euler's equations.
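As a simplified illustration of variance-driven refinement in the stochastic space, the sketch below approximates per-cell statistics of a Burgers' solution with an uncertain initial condition by quadrature over each stochastic cell and greedily splits the cell with the largest variance indicator. This non-intrusive stand-in is an assumption of the sketch; it is not the intrusive, conservative predictor-corrector SFV scheme described above.

```python
import numpy as np

def godunov_flux(ul, ur):
    """Exact Godunov flux for Burgers' equation, f(u) = u^2 / 2."""
    f = lambda u: 0.5 * u * u
    rarefaction = np.where((ul < 0.0) & (ur > 0.0), 0.0, np.minimum(f(ul), f(ur)))
    return np.where(ul > ur, np.maximum(f(ul), f(ur)), rarefaction)

def burgers_fv(u0, dx, t_end, cfl=0.45):
    """First-order finite volume solver on a periodic domain: the deterministic
    building block evaluated at each quadrature node."""
    u, t = u0.copy(), 0.0
    while t < t_end:
        dt = min(cfl * dx / max(np.max(np.abs(u)), 1e-12), t_end - t)
        flux = godunov_flux(u, np.roll(u, -1))            # fluxes at right cell faces
        u = u - dt / dx * (flux - np.roll(flux, 1))       # conservative update
        t += dt
    return u

def stochastic_cell_stats(cell, x, dx, t_end, n_quad=3):
    """Mean and variance of the solution over one stochastic cell for a uniformly
    distributed parameter xi, approximated by Gauss quadrature (a non-intrusive
    stand-in for the SFV cell averages, assumed here)."""
    nodes, weights = np.polynomial.legendre.leggauss(n_quad)
    a, b = cell
    xi = 0.5 * (b - a) * nodes + 0.5 * (a + b)
    w = 0.5 * weights                                     # conditional weights on [a, b]
    sols = np.array([burgers_fv(np.sin(x) + s, dx, t_end) for s in xi])
    mean = w @ sols
    return mean, w @ (sols - mean) ** 2

def refine(cells, x, dx, t_end, n_refine=3):
    """Greedy adaptivity: split the stochastic cell whose integrated variance
    (the refinement indicator of this sketch) is largest."""
    for _ in range(n_refine):
        stats = [stochastic_cell_stats(c, x, dx, t_end) for c in cells]
        indicator = [np.sum(v) * (c[1] - c[0]) for (_, v), c in zip(stats, cells)]
        a, b = cells.pop(int(np.argmax(indicator)))
        cells += [(a, 0.5 * (a + b)), (0.5 * (a + b), b)]
    return sorted(cells)

x = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
print(refine([(0.0, 1.0)], x, x[1] - x[0], t_end=1.0))
```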
We propose model-free (nonparametric) estimators of the volatility of volatility and leverage effect using high-frequency observations of short-dated options. At each point in time, we integrate available options into estimates of the conditional characteristic function of the price increment until the options' expiration and we use these estimates to recover spot volatility. Our volatility of volatility estimator is then formed from the sample variance and first-order autocovariance of the spot volatility increments, with the latter correcting for the bias in the former due to option observation errors. The leverage effect estimator is the sample covariance between price increments and the estimated volatility increments. The rate of convergence of the estimators depends on the diffusive innovations in the latent volatility process as well as on the observation error in the options with strikes in the vicinity of the current spot price. Feasible inference is developed in a way that does not require prior knowledge of the source of estimation error that is asymptotically dominating.
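A stylized numerical sketch of the two estimators is given below, applied to simulated noisy spot-volatility estimates rather than option-implied ones. The scaling by the sampling interval and the exact debiasing via the first-order autocovariance follow the verbal description above and are assumptions of this sketch, not the paper's formal definitions or asymptotic theory.

```python
import numpy as np

def vol_of_vol(spot_vol, dt):
    """Sample variance of spot-volatility increments plus twice their first-order
    autocovariance; the autocovariance term offsets the bias induced by
    measurement error in the volatility estimates. The scaling by dt is a
    convention assumed for this sketch."""
    dv = np.diff(spot_vol)
    return (np.mean(dv ** 2) + 2.0 * np.mean(dv[1:] * dv[:-1])) / dt

def leverage(log_price, spot_vol, dt):
    """Sample covariance of price increments and estimated spot-volatility
    increments, scaled by the sampling interval."""
    dx, dv = np.diff(log_price), np.diff(spot_vol)
    return np.mean(dx * dv) / dt

# Toy stochastic-volatility path plus observation noise in the spot-vol estimates.
rng = np.random.default_rng(1)
n, dt = 5000, 1.0 / (252 * 78)
eta, rho = 0.5, -0.7                       # true vol-of-vol scale and correlation
v, x = np.empty(n), np.zeros(n)
v[0] = 0.02
for i in range(1, n):
    dw1, dw2 = rng.standard_normal(2) * np.sqrt(dt)
    v[i] = abs(v[i - 1] + 5.0 * (0.02 - v[i - 1]) * dt + eta * np.sqrt(v[i - 1]) * dw1)
    x[i] = x[i - 1] + np.sqrt(v[i - 1]) * (rho * dw1 + np.sqrt(1.0 - rho ** 2) * dw2)
v_hat = v + 5e-4 * rng.standard_normal(n)  # spot-vol estimates with observation error

# Under this toy model the two quantities target eta^2 * E[v] and rho * eta * E[v].
print("vol-of-vol estimate:", vol_of_vol(v_hat, dt))
print("leverage estimate:  ", leverage(x, v_hat, dt))
```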
When using ordinal patterns, which describe the ordinal structure within a data vector, the problem of ties arises persistently. So far, model classes that do not allow for ties have been used; randomization has been another attempt to overcome this problem. Often, time periods with constant values have even been counted as times of monotone increase. To overcome this, a new approach is proposed: it explicitly allows for ties and, hence, considers more patterns than before. Ties are no longer seen as a nuisance, but as carrying valuable information. Limit theorems in the new framework are provided, both for a single time series and for the dependence between two time series. The methods are applied to hydrological data sets. It is common to distinguish five flood classes (plus 'absence of flood'). Considering data vectors of these classes at a certain gauge in a river basin, one will usually encounter several ties. Co-monotonic behavior (increasing, constant, decreasing) between the data sets of two gauges can be detected by the method, as can spatial patterns. Thus, it helps to analyze the strength of dependence between different gauges in an intuitive way. This knowledge can be used to assess risk and to plan future construction projects.
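The sketch below shows one way to encode tie-aware ordinal patterns (equal values receive equal ranks), estimate their distribution in a flood-class series, and compare two gauges through the fraction of windows with identical patterns. The encoding convention, window length, and toy gauge data are assumptions of this sketch, not necessarily the paper's definitions.

```python
from collections import Counter
import numpy as np

def ordinal_pattern_with_ties(window):
    """Encode the ordinal structure of a data vector while keeping ties:
    equal values receive equal ranks ('dense' ranking). This encoding is one
    reasonable convention, assumed for this sketch."""
    levels = {v: r for r, v in enumerate(sorted(set(window)))}
    return tuple(levels[v] for v in window)

def pattern_distribution(series, order=3):
    """Relative frequencies of tie-aware ordinal patterns of a given order."""
    series = np.asarray(series)
    patterns = [ordinal_pattern_with_ties(series[i:i + order])
                for i in range(len(series) - order + 1)]
    counts = Counter(patterns)
    total = sum(counts.values())
    return {p: c / total for p, c in counts.items()}

def comonotonic_rate(series_a, series_b, order=3):
    """Fraction of time windows in which two gauges show the same tie-aware
    ordinal pattern, a simple indicator of co-monotonic behavior."""
    a, b = np.asarray(series_a), np.asarray(series_b)
    n = min(len(a), len(b)) - order + 1
    same = sum(ordinal_pattern_with_ties(a[i:i + order]) ==
               ordinal_pattern_with_ties(b[i:i + order]) for i in range(n))
    return same / n

# Flood classes 0..5 (0 = no flood) at two hypothetical gauges of one river basin.
gauge_1 = [0, 0, 1, 1, 2, 3, 3, 2, 1, 0, 0, 0, 1, 2, 2]
gauge_2 = [0, 1, 1, 2, 2, 3, 3, 3, 1, 1, 0, 0, 1, 2, 3]
print(pattern_distribution(gauge_1))
print("co-monotonic rate:", comonotonic_rate(gauge_1, gauge_2))
```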
Large Language Models (LLMs) have shown excellent generalization capabilities that have led to the development of numerous models. These models propose various new architectures, tweak existing architectures with refined training strategies, increase context length, use higher-quality training data, and increase training time to outperform baselines. Analyzing new developments is crucial for identifying changes that enhance training stability and improve generalization in LLMs. This survey paper comprehensively analyses LLM architectures and their categorization, training strategies, training datasets, and performance evaluations, and discusses future research directions. Moreover, the paper discusses the basic building blocks and concepts behind LLMs, followed by a complete overview of LLMs, including their important features and functions. Finally, the paper summarizes significant findings from LLM research and consolidates essential architectural and training strategies for developing advanced LLMs. Given the continuous advancements in LLMs, we intend to regularly update this paper by incorporating new sections and featuring the latest LLM models.