In this paper, we demonstrate for the first time how the Integrated Finite Element Neural Network (I-FENN) framework, previously proposed by the authors, can efficiently simulate the entire loading history of non-local gradient damage propagation. To achieve this goal, we first adopt a Temporal Convolutional Network (TCN) as the neural network of choice to capture the history-dependent evolution of the non-local strain in a coarsely meshed domain. The quality of the network predictions governs the computational performance of I-FENN, and we therefore conduct an extended investigation aimed at enhancing them. We compare data-driven and physics-informed TCN setups to arrive at an optimal network training, evaluating the network against a coherent set of relevant performance metrics. We address the crucial issue of training a physics-informed network on input data that span vastly different length scales by proposing a systematic scheme for input normalization and output un-normalization. We then integrate the trained TCN within the nonlinear iterative FEM solver and apply I-FENN to the damage propagation analysis. I-FENN is always applied on mesh idealizations different from the one used for TCN training, showcasing the framework's ability to operate at progressively refined mesh resolutions. We illustrate several cases in which I-FENN completes the simulation using either a modified or a full Newton-Raphson scheme, and we showcase its computational savings compared to both the classical monolithic and staggered FEM solvers. We underline that very strict convergence criteria are satisfied for every increment across the entire simulation, providing clear evidence of the robustness and accuracy of I-FENN. All the code and data used in this work will be made publicly available upon publication of the article.
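As a minimal sketch of the modeling ingredient above, the following shows one way a causal TCN can map a local-strain history to a non-local-strain history without leaking future load increments; the layer sizes, channel counts, and variable names are illustrative assumptions, not the architecture used in the paper.

    import torch
    import torch.nn as nn

    class CausalBlock(nn.Module):
        """Dilated 1D convolution with left padding, so each load increment
        only sees past increments (causality along the loading history)."""
        def __init__(self, channels, dilation):
            super().__init__()
            self.pad = 2 * dilation  # (kernel_size - 1) * dilation
            self.conv = nn.Conv1d(channels, channels, kernel_size=3,
                                  dilation=dilation)

        def forward(self, x):  # x: (batch, channels, load increments)
            return torch.relu(self.conv(nn.functional.pad(x, (self.pad, 0))))

    # Stack blocks with growing dilation to cover long loading histories.
    tcn = nn.Sequential(nn.Conv1d(1, 8, 1),
                        CausalBlock(8, 1), CausalBlock(8, 2), CausalBlock(8, 4),
                        nn.Conv1d(8, 1, 1))

    local_strain = torch.randn(32, 1, 50)   # 32 points, 50 load increments
    nonlocal_strain = tcn(local_strain)     # same temporal length, causal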
In this paper, we present a comparative analysis of various self-supervised Vision Transformers (ViTs), focusing on their local representative power. Inspired by large language models, we examine the ability of ViTs to perform various computer vision tasks with little to no fine-tuning. We design an evaluation framework to analyze the quality of local, i.e., patch-level, representations in the context of few-shot semantic segmentation, instance identification, object retrieval, and tracking. We discover that methods based on contrastive learning, such as DINO, produce more universal patch representations that can be immediately applied to downstream tasks with no parameter tuning, compared to masked image modeling. The embeddings learned with the latter approach, e.g., in masked autoencoders, have high-variance features that harm distance-based algorithms, such as k-NN, and do not contain useful information for most downstream tasks. Furthermore, we demonstrate that removing these high-variance features enhances k-NN for MAE, as well as for its recent extension Scale-MAE. Finally, we find an object instance retrieval setting where DINOv2, a model pretrained on two orders of magnitude more data, falls short of its less compute-intensive counterpart DINO.
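A minimal sketch of the feature ablation described above: drop the highest-variance embedding dimensions (estimated on the training split) before running k-NN. The number of dropped dimensions and the use of cosine similarity are illustrative assumptions, not the paper's exact protocol.

    import numpy as np

    def drop_high_variance(train_emb, test_emb, n_drop=8):
        """Remove the n_drop dimensions with the largest variance from
        both splits, keeping the remaining ones."""
        keep = np.argsort(train_emb.var(axis=0))[:-n_drop]
        return train_emb[:, keep], test_emb[:, keep]

    def knn_labels(train_emb, train_y, test_emb, k=20):
        """Cosine-similarity k-NN with majority vote (train_y: int labels)."""
        a = train_emb / np.linalg.norm(train_emb, axis=1, keepdims=True)
        b = test_emb / np.linalg.norm(test_emb, axis=1, keepdims=True)
        nn_idx = np.argsort(-(b @ a.T), axis=1)[:, :k]
        return np.array([np.bincount(train_y[i]).argmax() for i in nn_idx])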
In this paper we present a new "external verifier" for the Lean theorem prover, written in Lean itself. This is the first complete verifier for Lean 4 other than the reference implementation in C++ used by Lean itself. Our new verifier is competitive with the original, running between 20% and 50% slower, and is usable to verify all of Lean's mathlib library, forming an additional step in Lean's aim to self-host the full elaborator and compiler. Moreover, because the verifier is written in a language that admits formal verification, it is possible to state and prove properties about the kernel itself. We report on some initial steps taken in this direction: formalizing the Lean type theory abstractly and showing that the kernel correctly implements this theory, in order to eliminate the possibility of implementation bugs in the kernel and increase the trustworthiness of proofs conducted in it. This work is still ongoing, but we plan to use this project to help justify any future changes to the kernel and type theory and to ensure that unsoundness does not sneak in through either the abstract theory or implementation bugs.
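To make the shape of such a correctness statement concrete, here is a heavily simplified Lean 4 sketch with hypothetical names: a toy expression type, an abstract typing judgment, an executable inference function standing in for the kernel, and a soundness theorem relating the two. It illustrates the proof obligation only; it is not the project's actual development.

    -- Toy sketch (hypothetical names), not the actual formalization.
    inductive Expr where
      | sort  : Nat → Expr
      | const : String → Expr
      | app   : Expr → Expr → Expr

    -- Abstract, declarative typing judgment of the theory.
    inductive HasType : Expr → Expr → Prop where
      | sortTy (n : Nat) : HasType (.sort n) (.sort (n + 1))

    -- Executable "kernel": infer a type or fail.
    def infer : Expr → Option Expr
      | .sort n => some (.sort (n + 1))
      | _       => none

    -- Soundness: anything the kernel infers is derivable abstractly.
    theorem infer_sound (e t : Expr) (h : infer e = some t) : HasType e t := by
      cases e with
      | sort n  => simp [infer] at h; subst h; exact .sortTy n
      | const s => simp [infer] at h
      | app f a => simp [infer] at h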
We present a general refinement of the Cauchy-Schwarz inequality over complete inner product spaces and show that it can be of interest for some statistical applications. This generalizes and simplifies previous results on the same subject.
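For reference, the classical inequality being refined reads as follows; in this notation, a refinement is any intermediate quantity $R(x,y)$ wedged between the two sides, and the specific $R$ constructed in the paper is its contribution and is not reproduced here.

\[
  |\langle x, y \rangle|^{2} \;\le\; R(x, y) \;\le\; \langle x, x \rangle \, \langle y, y \rangle ,
  \qquad x, y \in H,
\]

where $H$ is a complete inner product space and dropping $R$ recovers the classical Cauchy-Schwarz inequality.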
In this paper, we present PARAMANU-AYN, a language model based exclusively on case documents of the Supreme Court of India, the Constitution of India, and the Indian Penal Code. The novel autoregressive (AR) decoder-based model is pretrained from scratch with a context size of 8192. We evaluated our pretrained legal model on perplexity metrics. We also instruction-tuned the pretrained model on a set of 10,763 instructions covering various legal tasks such as legal reasoning, judgement explanation, legal clause generation, legal drafting, legal contract drafting, case summarization, and constitutional question-answering, among others. We also had GPT-3.5-Turbo evaluate the instruction-tuned model's responses on clarity, relevance, completeness, and legal reasoning, each on a scale of 10. Our model can be run on a CPU and achieves an inference speed of 42.46 tokens/sec. We found that our models, despite not being pretrained on legal books, various legal contracts, and legal documents, were able to learn the domain knowledge required for drafting various legal contracts and legal clauses, and generalized to drafting legal contracts and clauses with limited instruction tuning. Hence, we conclude that for a strong domain-specialized generative language model (such as a legal one), very large amounts of data are not required to develop models from scratch. We believe that this work is the first attempt to make a dedicated generative legal language model from scratch for the Indian Supreme Court jurisdiction, or in legal NLP overall. We plan to release our Paramanu-Ayn model at //www.bharatgpts.com.
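As a pointer to the perplexity evaluation mentioned above: perplexity is the exponential of the mean per-token negative log-likelihood. The minimal sketch below assumes natural-log likelihoods and placeholder values; it is not tied to the released model.

    import math

    def perplexity(token_nlls):
        """token_nlls: per-token negative log-likelihoods (natural log)."""
        return math.exp(sum(token_nlls) / len(token_nlls))

    print(perplexity([2.3, 1.9, 2.1]))  # ~8.17 for a mean NLL of 2.1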
In this article we present Enhanced Rhetorical Structure Theory (eRST), a new theoretical framework for computational discourse analysis, based on an expansion of Rhetorical Structure Theory (RST). The framework encompasses discourse relation graphs with tree-breaking, nonprojective and concurrent relations, as well as implicit and explicit signals which give explainable rationales to our analyses. We survey shortcomings of RST and other existing frameworks, such as Segmented Discourse Representation Theory (SDRT), the Penn Discourse Treebank (PDTB) and Discourse Dependencies, and address these using constructs in the proposed theory. We provide annotation, search and visualization tools for data, and present and evaluate a freely available corpus of English annotated according to our framework, encompassing 12 spoken and written genres with over 200K tokens. Finally, we discuss automatic parsing, evaluation metrics and applications for data in our framework.
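One way to read the data model above is as a relation graph that is deliberately not constrained to a projective tree, with each relation carrying its signals as an explainable rationale. The sketch below uses hypothetical field names, not the released corpus schema.

    from dataclasses import dataclass, field

    @dataclass
    class Relation:
        source: int                    # id of the source discourse unit
        target: int                    # id of the target discourse unit
        label: str                     # e.g. "causal-cause"
        signals: list = field(default_factory=list)  # e.g. ["dm:because"]

    @dataclass
    class DiscourseGraph:
        units: dict                    # unit id -> text span
        relations: list                # crossing/concurrent relations allowed

        def concurrent(self, a, b):
            """All relations holding between the same pair of units."""
            return [r for r in self.relations
                    if {r.source, r.target} == {a, b}]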
This paper presents, for the first time, the Mediterraneous protocol. It is designed to support the development of an Internet of digital services that are owned by their creators and consumed by users, who present their decentralised digital identity together with a proof of service purchase. Mediterraneous is Self-Sovereign Identity (SSI) native, integrating the SSI model at the core of its working principles to overcome the limitations of existing Web3 solutions that stem from pseudonyms and centralised access control.
In this paper, we propose a simple and strong framework for Tracking Any Point with TRansformers (TAPTR). Based on the observation that point tracking bears a great resemblance to object detection and tracking, we borrow designs from DETR-like algorithms to address the task of TAP. In the proposed framework, each tracking point in each video frame is represented as a point query, which consists of a positional part and a content part. As in DETR, each query (its position and content feature) is naturally updated layer by layer, and its visibility is predicted from its updated content feature. Queries belonging to the same tracking point exchange information through self-attention along the temporal dimension. As all of these operations are well established in DETR-like algorithms, the model is conceptually very simple. We also adopt some useful designs, such as the cost volume from optical flow models, and develop simple mechanisms to provide long temporal information while mitigating the feature-drifting issue. Our framework demonstrates state-of-the-art performance on various TAP datasets with faster inference speed.
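A minimal sketch of the query mechanics above (hypothetical sizes and names, not the paper's implementation): one decoder-style step in which a point query's content features exchange information across frames via temporal self-attention, after which position and visibility are re-predicted.

    import torch
    import torch.nn as nn

    d = 64                                  # content feature dim (illustrative)
    temporal_attn = nn.MultiheadAttention(d, num_heads=4, batch_first=True)
    pos_head = nn.Linear(d, 2)              # (dx, dy) position refinement
    vis_head = nn.Linear(d, 1)              # visibility logit

    content = torch.randn(1, 24, d)         # one point query over 24 frames
    content, _ = temporal_attn(content, content, content)
    delta_xy = pos_head(content)            # per-frame position update
    visible = vis_head(content).sigmoid()   # per-frame visibility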
In this paper, we propose a generic approach to performing global sensitivity analysis (GSA) for compartmental models based on continuous-time Markov chains (CTMCs). This approach enables a complete GSA for epidemic models, in which not only the effects of uncertain parameters such as epidemic parameters (transmission rate, mean sojourn duration in compartments) are quantified, but also those of intrinsic randomness and of interactions between the two. The main step in our approach is to build a deterministic representation of the underlying continuous-time Markov chain by controlling the latent variables that model intrinsic randomness. The model output can then be written as a deterministic function of both the uncertain parameters and the controlled latent variables, so that it becomes possible to compute standard variance-based sensitivity indices, e.g. the so-called Sobol' indices. However, different simulation algorithms lead to different representations. In this work, we exhibit three different representations for CTMC stochastic compartmental models and discuss the results obtained by implementing and comparing GSAs based on each of these representations on a SARS-CoV-2 epidemic model.
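To make the last step concrete, here is a minimal pick-freeze sketch of a first-order Sobol' index for a generic deterministic representation f(theta, z), where theta stands for the uncertain parameters and z for the controlled latent variables; the toy f, the uniform sampling, and the sample size are placeholder assumptions, not the paper's setup.

    import numpy as np

    rng = np.random.default_rng(0)

    def sobol_first_order(f, n=100_000):
        """Pick-freeze estimate of S_theta = Cov(Y, Y') / Var(Y), where Y'
        reuses ("freezes") theta but resamples the latent variables z."""
        theta = rng.random(n)
        z, z2 = rng.random(n), rng.random(n)
        y, y_frozen = f(theta, z), f(theta, z2)
        return np.cov(y, y_frozen)[0, 1] / np.var(y, ddof=1)

    # Toy model standing in for a CTMC representation; prints ~0.82.
    print(sobol_first_order(lambda t, z: np.sin(np.pi * t) + 0.5 * z))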
In this paper, we study parameter identification for solutions to (possibly non-linear) SDEs driven by an additive Rosenblatt process, and the singularity of the induced laws on the path space. We propose a joint estimator of the drift parameter, the diffusion intensity, and the Hurst index that can be computed from discrete-time observations with a bounded time horizon, and we prove its strong consistency (as well as its speed of convergence) under in-fill asymptotics with a fixed time horizon. As a consequence of this strong consistency, the singularity of the measures generated by solutions with different drifts is shown. This results in the invalidity of a Girsanov-type theorem for Rosenblatt processes.
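For orientation, the additive setting referred to above has the generic shape below (our notation, not necessarily the paper's): $\theta$ is the drift parameter, $\sigma$ the diffusion intensity, and $H$ the Hurst index of the driving Rosenblatt process, all three estimated jointly.

\[
  X_t = X_0 + \theta \int_0^t b(X_s)\,\mathrm{d}s + \sigma Z^H_t ,
  \qquad t \in [0, T],
\]

where $(Z^H_t)_{t \in [0,T]}$ is a Rosenblatt process with Hurst index $H \in (1/2, 1)$ and $b$ is a (possibly non-linear) drift function; "additive" refers to the noise entering without a state-dependent coefficient.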
We present alternative approaches to routing and scheduling in Answer Set Programming (ASP), and explore them in the context of Multi-agent Path Finding. The idea is to capture the flow of time in terms of partial orders rather than time steps attached to actions and fluents. This also abolishes the need for fixed upper bounds on the length of plans. The trade-off for this avoidance is that (parts of) temporal trajectories must be acyclic, since multiple occurrences of the same action or fluent can no longer be distinguished. While this approach provides an interesting alternative for modeling routing, it is without alternative for scheduling, since fine-grained timings cannot feasibly be represented in ASP directly; partial orders, in contrast, can be handled efficiently by external means such as acyclicity and difference constraints. We formally elaborate upon this idea and present several resulting ASP encodings. Finally, we demonstrate their effectiveness via an empirical analysis.
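As a flavor of such encodings, the toy sketch below (hypothetical predicates, not one of the paper's encodings) guesses a partial order among conflicting agents, delegates cycle detection to clingo's built-in acyclicity constraint, and leaves fine-grained start times to clingo-dl's difference constraints.

    % Guess an order between agents that compete for the same vertex V.
    1 { before(A,B,V) ; before(B,A,V) } 1 :- conflict(A,B,V).
    % The induced order must be acyclic (checked by a dedicated propagator).
    #edge (A,B) : before(A,B,V).
    % Scheduling: start times respect the order, via difference constraints.
    &diff { start(A) - start(B) } <= -D :- before(A,B,_), duration(A,D).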