Here we merge the two fields of Cops and Robbers and Graph Pebbling to introduce the new topic of Cops and Robbers Pebbling. Both paradigms can be described by moving tokens (the cops) along the edges of a graph to capture a special token (the robber). In Cops and Robbers, all tokens move freely, whereas, in Graph Pebbling, some of the chasing tokens disappear with movement while the robber is stationary. In Cops and Robbers Pebbling, some of the chasing tokens (cops) disappear with movement, while the robber moves freely. We define the cop pebbling number of a graph to be the minimum number of cops necessary to capture the robber in this context, and present upper and lower bounds and exact values, some involving various domination parameters, for an array of graph classes. We also offer several interesting problems and conjectures.
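For readers new to pebbling, a minimal sketch of the move behind "tokens disappear with movement", written in the standard graph-pebbling convention (the paper's exact cop-move rule may differ): a move removes two cops from a vertex u and places one cop on a neighbouring vertex v, so that, with c(x) denoting the number of cops on a vertex x,

    \[ (c(u),\, c(v)) \;\longmapsto\; (c(u)-2,\; c(v)+1), \qquad uv \in E(G),\ c(u) \ge 2, \]

i.e. one of the two moving cops is lost with every step along an edge.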
This work is concerned with the analysis of a space-time finite element discontinuous Galerkin method on polytopal meshes (XT-PolydG) for the numerical discretization of wave propagation in coupled poroelastic-elastic media. The mathematical model consists of the low-frequency Biot equations in the poroelastic medium and the elastodynamics equation in the elastic one. To realize the coupling, suitable transmission conditions on the interface between the two domains are (weakly) embedded in the formulation. The proposed PolydG discretization in space is then coupled with a dG time integration scheme, resulting in a full space-time dG discretization. We present the stability analysis for both the continuous and the semidiscrete formulations, and we derive error estimates for the semidiscrete formulation in a suitable energy norm. The method is applied to a wide set of numerical test cases to verify the theoretical bounds. Examples of physical interest are also presented to investigate the capability of the proposed method in relevant geophysical scenarios.
In this note we use the State of the Union Address dataset from Kaggle to make some surprising (and some not so surprising) observations pertaining to the general timeline of American history, and the character and nature of the addresses themselves. Our main approach uses vector embeddings, such as BERT (DistilBERT) and GPT-2. While it is widely believed that BERT (and its variants) is most suitable for NLP classification tasks, we find that GPT-2, in conjunction with nonlinear dimension-reduction methods such as UMAP, provides better separation and stronger clustering. This makes GPT-2 + UMAP an interesting alternative. In our case, no model fine-tuning is required, and the pre-trained, out-of-the-box GPT-2 model suffices. We also used a fine-tuned DistilBERT model for classification (detecting which president delivered which address), with very good results (accuracy of 93%-95%, depending on the run). All computations can be replicated using the accompanying code on GitHub.
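As a rough illustration of the embedding-plus-UMAP pipeline described above (a sketch, not the authors' exact settings), the following assumes the Hugging Face transformers and umap-learn packages; the mean-pooling step, the cosine metric, and the placeholder list of addresses are illustrative choices.

    import numpy as np
    import torch
    import umap
    from transformers import GPT2Model, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2Model.from_pretrained("gpt2")
    model.eval()

    def embed(text):
        # Mean-pool the last hidden states over all tokens (GPT-2 has no [CLS] token).
        inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024)
        with torch.no_grad():
            hidden = model(**inputs).last_hidden_state  # shape (1, seq_len, 768)
        return hidden.mean(dim=1).squeeze(0).numpy()

    # addresses: list of speech texts from the Kaggle dataset (loading not shown).
    addresses = ["...first address...", "...second address..."]
    X = np.stack([embed(t) for t in addresses])

    # Nonlinear dimension reduction to 2D for visualisation and clustering.
    coords = umap.UMAP(n_components=2, metric="cosine", random_state=0).fit_transform(X)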
Reduced-order models have been widely adopted in fluid mechanics, particularly in the context of Newtonian fluid flows. These models offer the ability to predict complex dynamics, such as instabilities and oscillations, at a considerably reduced computational cost. In contrast, the reduced-order modeling of non-Newtonian viscoelastic fluid flows remains relatively unexplored. This work leverages the sparse identification of nonlinear dynamics (SINDy) algorithm to develop interpretable reduced-order models for viscoelastic flows. In particular, we explore a benchmark oscillatory viscoelastic flow in the four-roll mill geometry using the classical Oldroyd-B fluid. This flow exemplifies many canonical challenges associated with non-Newtonian flows, including transitions, asymmetries, instabilities, and bifurcations arising from the interplay of viscous and elastic forces, all of which require expensive computations to resolve the fast timescales and long transients characteristic of such flows. First, we demonstrate the effectiveness of our data-driven surrogate model in predicting the transient evolution and accurately reconstructing the spatial flow field for fixed flow parameters. We then develop a fully parametric, nonlinear model capable of capturing the dynamic variations as a function of the Weissenberg number (Wi). While the training data are predominantly concentrated on a limit-cycle regime at moderate Wi, we show that the parameterized model can be used to extrapolate, accurately predicting the dominant dynamics at high Wi. The proposed methodology is an initial step toward reduced-order modeling of viscoelastic flows, with the potential to be further refined and enhanced for the design, optimization, and control of a wide range of non-Newtonian fluid flows using machine learning and reduced-order modeling techniques.
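To make the SINDy step concrete, here is a minimal sketch assuming the PySINDy package: X is a hypothetical array of a few reduced (e.g. modal) coordinates extracted from simulation snapshots, and the library, threshold, time step, and file name are illustrative placeholders rather than the paper's exact setup.

    import numpy as np
    import pysindy as ps

    dt = 1e-2                                 # assumed sampling interval
    X = np.load("reduced_coords.npy")         # shape (n_times, n_modes); placeholder file

    model = ps.SINDy(
        optimizer=ps.STLSQ(threshold=0.05),           # sparsity-promoting regression
        feature_library=ps.PolynomialLibrary(degree=3),
    )
    model.fit(X, t=dt)
    model.print()                             # inspect the identified sparse ODEs

    # Predict the transient evolution from an initial condition.
    t_eval = np.arange(0, 50, dt)
    X_pred = model.simulate(X[0], t_eval)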
As the Internet of Things (IoT) has become truly ubiquitous, so has the surrounding threat landscape. However, while the security of classical computing systems has matured significantly over the last decades, IoT cybersecurity typically remains weak or is neglected entirely. This paper provides a classification of IoT malware. The major targets and the exploits used in attacks are identified and related to the specific malware. The lack of standard definitions of IoT devices, and therefore of security goals, was identified during this research as a profound barrier to advancing IoT cybersecurity. Furthermore, standardized reporting of IoT malware by trustworthy sources is required in the field. The majority of current IoT attacks continue to require comparatively little effort and sophistication and could be mitigated by existing technical measures.
In this contribution, we provide a new mass lumping scheme for explicit dynamics in isogeometric analysis (IGA). To this end, an element formulation based on the idea of dual functionals is developed. Non-Uniform Rational B-splines (NURBS) are used as shape functions, and their corresponding dual basis functions are used as test functions in the variational form, where two kinds of dual basis functions are compared. The first kind are approximate dual basis functions (AD) with varying degrees of reproduction, resulting in banded mass matrices. The second kind are dual basis functions derived from the inversion of the Gram matrix (IG), which already yield diagonal mass matrices. We show that the dual scheme can be applied as a transformation of the system of equations obtained with NURBS as both shape and test functions; hence, it can easily be implemented in existing IGA routines. Treating the application of dual test functions as a preconditioner reduces the additional computational effort but cannot eliminate it entirely, and the stiffness matrix remains denser than in standard Bubnov-Galerkin formulations. In return, additional row-sum lumping of the mass matrices is either unnecessary (IG) or the resulting loss of accuracy is reduced to a reasonable magnitude (AD). Numerical examples show that the dual lumping scheme approximates the dynamic behavior significantly better than standard NURBS approaches relying on row-sum lumping. Applying IG yields accurate numerical results without additional lumping, but the global support of the IG dual basis functions leads to fully populated stiffness matrices, which are entirely unsuitable for explicit dynamic simulations. Combining AD with row-sum lumping leads to a computation that is efficient in terms of both effort and accuracy.
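A small sketch of the two lumping ingredients mentioned above, acting on a consistent mass matrix M and stiffness matrix K assembled by a standard NURBS Bubnov-Galerkin routine (not shown). Interpreting the IG dual transform as a left-multiplication by the inverse Gram (here: consistent mass) matrix is an assumption made for illustration only.

    import numpy as np

    def row_sum_lumping(M):
        # Classical row-sum lumping: collapse each row onto the diagonal.
        return np.diag(M.sum(axis=1))

    def ig_dual_transform(M, K):
        # Dual test functions from Gram-matrix inversion, applied as a transformation
        # of the assembled system: the mass becomes the identity (diagonal), while the
        # transformed stiffness is in general fully populated.
        Minv = np.linalg.inv(M)
        return np.eye(M.shape[0]), Minv @ K

    # Explicit dynamics then uses the diagonal mass for cheap updates, e.g. in a
    # central-difference step a_n = M_lumped^{-1} (f_n - K u_n).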
In real-world tasks, there are usually large amounts of unlabeled data alongside labeled data. The task of learning from a combination of the two is known as semi-supervised learning. Experts can use logical rules to label unlabeled data, but this operation is costly. Combining perception and reasoning works well for such semi-supervised tasks with domain knowledge. However, acquiring domain knowledge, as well as correcting, reducing, and generating rules, remain complex open problems. Rough set theory is an important method for knowledge processing in information systems. In this paper, we propose rule general abductive learning by rough set (RS-ABL). By transforming the target concept and the sub-concepts of rules into information tables, rough set theory is used to acquire domain knowledge and to correct, reduce, and generate rules at lower cost. The framework can also generate more extensive negative rules to enhance the breadth of the knowledge base. Compared with traditional semi-supervised learning methods, RS-ABL achieves higher accuracy on semi-supervised tasks.
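For readers unfamiliar with rough sets, the following sketch computes the basic lower and upper approximations that such information-table processing builds on; the toy table, attributes, and target set are made up for illustration and are not the paper's data.

    from collections import defaultdict

    # Information table: object -> tuple of condition-attribute values.
    table = {
        "x1": ("a", 0), "x2": ("a", 0), "x3": ("b", 1),
        "x4": ("b", 1), "x5": ("a", 1),
    }
    target = {"x1", "x2", "x3"}   # target concept X (e.g. objects labelled positive)

    # Indiscernibility relation: group objects with identical attribute values.
    classes = defaultdict(set)
    for obj, values in table.items():
        classes[values].add(obj)

    # Lower approximation: equivalence classes entirely contained in X (certain members).
    lower = set().union(*[c for c in classes.values() if c <= target])
    # Upper approximation: equivalence classes intersecting X (possible members).
    upper = set().union(*[c for c in classes.values() if c & target])

    print("lower:", sorted(lower))             # {'x1', 'x2'}
    print("boundary:", sorted(upper - lower))  # {'x3', 'x4'}: uncertain region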
Low Reynolds number fluid flows are governed by the Stokes equations. In two dimensions, Stokes flows can be described by two analytic functions, known as Goursat functions. Brubeck and Trefethen (2022) recently introduced a lightning Stokes solver that uses rational functions to approximate the Goursat functions in polygonal domains. In this paper, we present the "LARS" algorithm (Lightning-AAA Rational Stokes) for computing 2D Stokes flows in domains with smooth boundaries and in multiply connected domains using lightning and AAA rational approximation (Nakatsukasa et al., 2018). After validating our solver against known analytical solutions, we solve a variety of 2D Stokes flow problems with physical and engineering applications. Using these examples, we show that rational approximation can now be used to compute 2D Stokes flows in general domains. The computations take less than a second and give solutions with at least 6-digit accuracy.
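As background, here is a minimal sketch of evaluating a rational approximant in the barycentric form used by AAA-type methods, r(z) = (sum_j w_j f_j / (z - z_j)) / (sum_j w_j / (z - z_j)); the support points, values, and weights are assumed to come from an AAA fit (not shown), and this is only the evaluation step, not the LARS solver itself.

    import numpy as np

    def bary_eval(z, zj, fj, wj):
        # Evaluate the barycentric rational r(z) at an array of points z.
        z = np.asarray(z, dtype=complex)
        C = 1.0 / (z[:, None] - zj[None, :])       # Cauchy matrix 1/(z - z_j)
        r = (C @ (wj * fj)) / (C @ wj)
        # At a support point the formula is 0/0; the value is f_j by construction.
        hit, col = np.nonzero(z[:, None] == zj[None, :])
        r[hit] = fj[col]
        return r

    # Tiny usage example with made-up data (weights would normally come from AAA):
    zj = np.array([0.0, 1.0, 1j])
    fj = np.exp(zj)                       # pretend we are approximating exp(z)
    wj = np.array([1.0, -2.0, 1.0])
    print(bary_eval(np.array([0.5 + 0.2j]), zj, fj, wj))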
This paper examines inverse Cauchy problems governed by a class of elliptic partial differential equations. These inverse problems involve recovering missing data on an inaccessible boundary from measured data on an accessible boundary, a task that is severely ill-posed. Using the coupled complex boundary method (CCBM), which integrates both Dirichlet and Neumann data into a single Robin boundary condition, we reformulate the underlying problem as an operator equation. Based on this new formulation, we study the existence of solutions of the reduced problem with noisy data. A Golub-Kahan bidiagonalization (GKB) process together with Givens rotations is employed to solve the proposed operator equation iteratively. The regularizing property of the developed method, called CCBM-GKB, and its convergence rate results are proved under an a posteriori stopping rule. Finally, a linear finite element method is used for the numerical realization of CCBM-GKB. Various numerical experiments demonstrate that CCBM-GKB is an accelerated iterative regularization method, being much faster than the classic Landweber method.
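GKB with Givens rotations is the engine of the classical LSQR algorithm, so an early-stopped LSQR run conveys the flavour of this kind of iterative regularization; in the sketch below the toy operator, noise level, and fixed iteration count are placeholders for the CCBM operator equation and its a posteriori stopping rule.

    import numpy as np
    from scipy.sparse.linalg import lsqr

    rng = np.random.default_rng(0)
    n = 200
    A = np.triu(np.ones((n, n))) / n                    # a discrete ill-posed toy operator
    x_true = np.sin(np.linspace(0, np.pi, n))
    b = A @ x_true + 1e-3 * rng.standard_normal(n)      # noisy data

    # The number of GKB iterations acts as the regularization parameter: stop early
    # (a fixed small count here, standing in for a discrepancy-type a posteriori rule).
    x_k = lsqr(A, b, iter_lim=15)[0]
    print("relative error:", np.linalg.norm(x_k - x_true) / np.linalg.norm(x_true))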
Positron Emission Tomography (PET) enables functional imaging of deep brain structures, but the bulk and weight of current systems preclude their use during many natural human activities, such as locomotion. The proposed long-term solution is to construct a robotic system that can support an imaging system surrounding the subject's head and move it to accommodate natural head motion. This requires a system to measure the motion of the head with respect to the imaging ring, for use by both the robotic system and the image reconstruction software. We report here the design, calibration, and experimental evaluation of a parallel string encoder mechanism for sensing this motion. Our results indicate that, with kinematic calibration, the measurement system can achieve accuracy within 0.5 mm, especially for small motions.
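To illustrate the basic principle of a string-encoder measurement, the sketch below recovers a point's position from string lengths to known anchor points via nonlinear least squares; the anchor coordinates and measurements are made-up placeholders, and the actual system estimates a full, calibrated rigid-body pose rather than a single point.

    import numpy as np
    from scipy.optimize import least_squares

    anchors = np.array([[0.0, 0.0, 0.0],      # string attachment points on the ring (metres)
                        [0.4, 0.0, 0.0],
                        [0.0, 0.4, 0.0],
                        [0.0, 0.0, 0.4]])
    p_true = np.array([0.15, 0.12, 0.20])
    lengths = np.linalg.norm(anchors - p_true, axis=1)   # ideal encoder readings

    def residuals(p):
        # Difference between predicted string lengths and measured ones.
        return np.linalg.norm(anchors - p, axis=1) - lengths

    sol = least_squares(residuals, x0=np.zeros(3))
    print("estimated position:", sol.x)                  # should recover p_true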