Fluid flow simulation is a highly active area with applications in a wide range of engineering problems and interactive systems. Meshless methods such as Moving Particle Semi-implicit (MPS) are an attractive alternative for dealing efficiently with large deformations and free-surface flow, whereas mesh-based approaches can achieve higher numerical precision than particle-based techniques, albeit at a performance cost. This paper presents a numerically stable and parallelized system that benefits from advances in the literature and in parallel computing to obtain an adaptable MPS method. The proposed technique can simulate liquids using different approaches, including two ways of calculating the particles' pressure, turbulent flow, and multiphase interaction. The method is evaluated on traditional test cases and presents results comparable to recent techniques. This work integrates the previously mentioned advances into a single solution, in which improvements such as better momentum conservation and fewer spurious pressure oscillations can be switched on through a graphical interface. The code is entirely open-source under the GPLv3 free software license. The GPU-accelerated code reached speedups ranging from 3 to 43 times, depending on the total number of particles, and the simulation runs at one frame per second for a case with approximately 200,000 particles. Code: //github.com/andreluizbvs/VoxarMPS
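As standard background for the particle model named above (textbook MPS, not anything specific to this system), particles interact through a kernel weight and a particle number density,

w(r) =
\begin{cases}
  \dfrac{r_e}{r} - 1, & 0 < r < r_e, \\
  0, & r \ge r_e,
\end{cases}
\qquad
n_i = \sum_{j \ne i} w\big(\lVert \mathbf{r}_j - \mathbf{r}_i \rVert\big),

where r_e is the interaction radius; incompressibility is enforced by keeping n_i close to a reference value n^0, commonly either through an explicit equation of state or by solving a pressure Poisson equation.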
In many real-world applications, we are interested in approximating black-box, costly functions as accurately as possible with the smallest number of function evaluations. A complex computer code is an example of such a function. In this work, a Gaussian process (GP) emulator is used to approximate the output of complex computer code. We consider the problem of extending an initial experiment (set of model runs) sequentially to improve the emulator. A sequential sampling approach based on leave-one-out (LOO) cross-validation is proposed that can be easily extended to a batch mode. This is a desirable property since it saves the user time when parallel computing is available. After fitting a GP to the training data points, the expected squared LOO (ES-LOO) error is calculated at each design point. ES-LOO is used as a measure to identify important data points. More precisely, when this quantity is large at a point, the quality of prediction depends a great deal on that point, and adding more samples nearby could improve the accuracy of the GP. As a result, it is reasonable to select the next sample where ES-LOO is maximised. However, ES-LOO is only known at the experimental design points and needs to be estimated at unobserved points. To do this, a second GP is fitted to the ES-LOO errors, and the location where a modified expected improvement (EI) criterion is maximised is chosen as the next sample. EI is a popular acquisition function in Bayesian optimisation and is used to trade off between local and global search. However, it has a tendency towards exploitation, meaning that its maximum is close to the (current) "best" sample. To avoid clustering, a modified version of EI, called pseudo expected improvement, is employed; it is more explorative than EI and thus allows us to discover unexplored regions. Our results show that the proposed sampling method is promising.
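To make the workflow concrete, below is a minimal sketch of one iteration of such a LOO-based sequential design, written against scikit-learn's GaussianProcessRegressor; the toy objective, the candidate grid, and the simple distance penalty standing in for the pseudo expected improvement criterion are illustrative assumptions, not the exact estimator or acquisition function of the paper.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def f(x):                                    # expensive black-box code (toy stand-in)
    return np.sin(3 * x[:, 0]) + 0.5 * x[:, 0] ** 2

rng = np.random.default_rng(0)
X = rng.uniform(0, 2, size=(8, 1))           # initial experimental design
y = f(X)

# 1) Fit the primary GP emulator to the training data.
gp = GaussianProcessRegressor(kernel=RBF(), normalize_y=True).fit(X, y)

# 2) Leave-one-out: refit without each point and record the squared LOO error.
es_loo = np.empty(len(X))
for i in range(len(X)):
    mask = np.arange(len(X)) != i
    gp_i = GaussianProcessRegressor(kernel=RBF(), normalize_y=True).fit(X[mask], y[mask])
    es_loo[i] = (gp_i.predict(X[i:i + 1])[0] - y[i]) ** 2

# 3) Fit a second GP to the (log) LOO errors so they can be predicted anywhere.
gp_err = GaussianProcessRegressor(kernel=RBF(), normalize_y=True).fit(X, np.log(es_loo + 1e-12))

# 4) Pick the next run where the predicted error is large but far from existing
#    points (a crude exploration penalty standing in for pseudo expected improvement).
cand = np.linspace(0, 2, 201).reshape(-1, 1)
pred_err = gp_err.predict(cand)
dist = np.min(np.abs(cand - X.T), axis=1)
x_next = cand[np.argmax(pred_err + np.log(dist + 1e-3))]
print("next design point:", x_next)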
We present symbolic and numerical methods for computing Poisson brackets on the spaces of measures with positive densities on the plane, the 2-torus, and the 2-sphere. We apply our methods to compute symplectic areas of finite regions for the case of the 2-sphere, including an explicit example for Gaussian measures with positive densities.
This paper explores the role of multiple antennas in mitigating jamming attacks for the Rayleigh fading environment with exogenous random traffic arrival. The jammer is assumed to have energy harvesting ability, where energy arrives according to a Bernoulli process. The outage probabilities are derived with different assumptions on the number of antennas at the transmitter and receiver. The outage probability for the Alamouti space-time code is also derived. The work characterizes the average service rate for different antenna configurations, taking into account the random arrival of data and energy at the transmitter and the jammer, respectively. In many practical applications, latency and timely updates are important; thus, delay and Average Age of Information (AAoI) are meaningful metrics to consider. The work characterizes these metrics under jamming attack. The impact of finite and infinite battery capacity at the jammer on various performance metrics is also explored. Two optimization problems are considered to explore the interplay between AAoI and delay under jamming attack. Furthermore, our results show that the Alamouti code can significantly improve the performance of the system even under a jamming attack, with a smaller power budget. The paper also demonstrates how the developed results can be useful for multiuser scenarios.
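For reference, the age of information of a source at time t and its long-run average (the AAoI referred to above) are conventionally defined as

\Delta(t) = t - u(t), \qquad \overline{\Delta} = \lim_{T \to \infty} \frac{1}{T} \int_{0}^{T} \Delta(t)\, dt,

where u(t) is the generation time of the most recently received status update.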
The Poisson pressure solve resulting from the spectral element discretization of the incompressible Navier-Stokes equations requires fast, robust, and scalable preconditioning. In the current work, a parallel scaling study of Chebyshev-accelerated Schwarz and Jacobi preconditioning schemes is presented, with special focus on GPU architectures, such as OLCF's Summit. Convergence properties of the Chebyshev-accelerated schemes are compared with alternative methods, such as low-order preconditioners combined with algebraic multigrid. Performance and scalability results are presented for a variety of preconditioner and solver settings. The authors demonstrate that Chebyshev-accelerated Schwarz methods provide a robust and effective smoothing strategy when using $p$-multigrid as a preconditioner in a Krylov-subspace projector. At the same time, optimal preconditioning parameters can vary for different geometries, problem sizes, and processor counts. This variance motivates the development of an autotuner to optimize solver parameters online, during the course of production simulations.
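To illustrate the kind of smoother being accelerated, below is a minimal sketch of a Chebyshev-accelerated Jacobi (diagonal) smoother in its standard textbook form; the eigenvalue bounds, iteration count, and toy operator are illustrative assumptions and do not reproduce the Schwarz smoothers or tuned settings studied in the paper, though bounds like these are among the parameters that typically need tuning.

import numpy as np

def chebyshev_jacobi_smooth(A, b, x, lam_min, lam_max, iters=3):
    """Standard Chebyshev semi-iteration with a Jacobi (diagonal) preconditioner.

    lam_min/lam_max bound the eigenvalues of D^{-1} A targeted by the smoother
    (in multigrid practice only the upper part of the spectrum is smoothed).
    """
    d_inv = 1.0 / np.diag(A)
    theta = 0.5 * (lam_max + lam_min)          # center of the target interval
    delta = 0.5 * (lam_max - lam_min)          # half-width of the interval
    sigma = theta / delta
    rho = 1.0 / sigma
    r = b - A @ x
    p = (d_inv * r) / theta
    x = x + p
    for _ in range(iters - 1):
        rho_new = 1.0 / (2.0 * sigma - rho)
        r = b - A @ x
        p = rho_new * rho * p + (2.0 * rho_new / delta) * (d_inv * r)
        x = x + p
        rho = rho_new
    return x

# Toy usage on a 1D Poisson matrix; in practice A is the spectral element operator.
n = 64
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
x = chebyshev_jacobi_smooth(A, np.ones(n), np.zeros(n), lam_min=0.3, lam_max=2.0, iters=3)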
The time domain linear sampling method (TD-LSM) solves inverse scattering problems using time domain data by creating an indicator function for the support of the unknown scatterer. It involves only solving a linear integral equation, called the near-field equation, for each sampling point probing the domain where the scatterer is located. To date, the method has been used for the acoustic wave equation and has been tested for several different types of scatterers, i.e. sound-hard, impedance, and penetrable, and for waveguides. In this paper, we extend the TD-LSM to the time dependent Maxwell's system with impedance boundary conditions; a similar analysis handles the case of a perfectly electrically conducting (PEC) body. We provide an analysis that supports the use of the TD-LSM for this problem, and preliminary numerical tests of the algorithm. Our analysis relies on the Laplace transform approach previously used for the acoustic wave equation. This is the first application of the TD-LSM in electromagnetism.
Potts models, which can be used to analyze dependent observations on a lattice, have seen widespread application in a variety of areas, including statistical mechanics, neuroscience, and quantum computing. To address the intractability of Potts likelihoods for large spatial fields, we propose fast ordered conditional approximations that enable rapid inference for observed and hidden Potts models. Our methods can be used to directly obtain samples from the approximate joint distribution of an entire Potts field. The computational complexity of our approximation methods is linear in the number of spatial locations; in addition, some of the necessary computations are naturally parallel. We illustrate the advantages of our approach using simulated data and a satellite image.
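To fix notation, the Potts distribution over labels x_i \in \{1, \dots, q\} on a lattice and the generic form of an ordered conditional (Vecchia-type) approximation to its joint density are

p(x) = \frac{1}{Z(\beta)} \exp\Big( \beta \sum_{i \sim j} \mathbf{1}\{x_i = x_j\} \Big),
\qquad
p(x) \approx \prod_{i=1}^{n} p\big(x_i \mid x_{c(i)}\big),

where i \sim j ranges over neighbouring lattice sites, Z(\beta) is the intractable normalizing constant, and c(i) \subset \{1, \dots, i-1\} is a small conditioning set of previously ordered locations; sampling from the approximation amounts to drawing each x_i from its conditional in order. The particular orderings and conditioning sets used by the proposed methods are not spelled out here.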
For underwater creature studies, compared with propeller-powered ROVs and servo-motor-actuated robotic fish, novel biomimetic fish robot designs with soft actuation structures can interact closely with aquatic creatures and record authentic habitats and behaviours. This final project report presents the detailed design process of a hydraulic-soft-actuator-powered robotic fish for aquatic creature study, capable of swimming along a 3D trajectory. The robotic fish is designed based on an analysis of the pros and cons of existing designs. In addition to the mechanical and electronic designs and the manufacturing method of crucial components, a simplified open-loop control algorithm was designed to check the functionality of the application board and microcontroller in the Proteus simulation environment. For the soft actuator, the key component of the robotic fish, Finite Element Method (FEM) simulations were conducted to visualise its deformation under different pressures and validate the design. Computational Fluid Dynamics (CFD) simulations were also conducted to improve the hydrodynamic efficiency of the robotic fish's shape. Although physical manufacturing was not possible due to the pandemic, the simulations show overall good performance in terms of control, actuation, and hydrodynamic efficiency.
Node clustering is a powerful tool in the analysis of networks. We introduce a graph neural network framework, DIGRAC, to obtain node embeddings for directed networks in a self-supervised manner, including a novel probabilistic imbalance loss, which can be used for network clustering. Here, we propose directed flow imbalance measures, which are tightly related to directionality, to reveal clusters in the network even when there is no density difference between clusters. In contrast to standard approaches in the literature, in this paper, directionality is not treated as a nuisance, but rather contains the main signal. DIGRAC optimizes directed flow imbalance for clustering without requiring label supervision, unlike existing GNN methods, and can naturally incorporate node features, unlike existing spectral methods. Experimental results on synthetic data, in the form of directed stochastic block models, and real-world data at different scales, demonstrate that our method, based on flow imbalance, attains state-of-the-art results on directed graph clustering, for a wide range of noise and sparsity levels, graph structures, and topologies.
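As a simple illustration of the notion of flow imbalance (a generic example, not necessarily the probabilistic loss optimized by DIGRAC), one can compare, for a pair of clusters C_k and C_l, the total edge weight flowing in each direction:

I(C_k, C_l) = \frac{\big| W(C_k, C_l) - W(C_l, C_k) \big|}{W(C_k, C_l) + W(C_l, C_k)},
\qquad
W(A, B) = \sum_{u \in A} \sum_{v \in B} w_{uv},

where w_{uv} is the weight of the directed edge from u to v; the quantity is near 1 when nearly all flow between the two clusters goes one way and 0 when it is balanced, so clusters that are invisible to density-based methods can still be recovered from strongly imbalanced flow.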
As data collection and analysis become critical functions for many cloud applications, proper data sharing with approved parties is required. However, traditional data-sharing schemes based on centralized data escrow servers may sacrifice owners' privacy and are weak in security. In particular, the servers physically hold all data, while the original data owners retain only virtual ownership and lose actual access control. We therefore propose a 3-layer SSE-ABE-AES (3LSAA) cryptography-based, privacy-protected data-sharing protocol under the assumption that servers are honest-but-curious. The 3LSAA protocol realizes automatic access-control management and convenient file search even if the server is not trustworthy. Besides achieving data self-sovereignty, our approach also improves system usability, eliminates defects of the traditional SSE and ABE approaches, and provides a local AES key recovery method to ensure availability for users.
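As a rough illustration of the layering idea (a sketch only; the ABE and SSE layers below are hypothetical placeholders and do not follow the 3LSAA construction in detail), the innermost layer encrypts the file with AES, the file key is then protected under an attribute-based policy, and an encrypted keyword index lets the server search without seeing plaintext:

import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def aes_encrypt_file(plaintext: bytes):
    """Innermost layer: symmetric AES-GCM encryption of the file itself."""
    key = AESGCM.generate_key(bit_length=256)
    nonce = os.urandom(12)
    ciphertext = AESGCM(key).encrypt(nonce, plaintext, None)
    return key, nonce, ciphertext

def abe_wrap_key(aes_key: bytes, policy: str) -> bytes:
    """Middle layer (hypothetical placeholder): wrap the AES key under an
    attribute-based encryption policy so that only users whose attributes
    satisfy `policy` can recover it. A real system would call a CP-ABE library."""
    raise NotImplementedError("stand-in for an ABE encryption call")

def sse_index(keywords: list[str]) -> dict:
    """Outer layer (hypothetical placeholder): build a searchable symmetric
    encryption index so the honest-but-curious server can match encrypted
    queries without learning the keywords themselves."""
    raise NotImplementedError("stand-in for an SSE index construction")

# Owner-side flow: encrypt the data locally, then wrap the key and publish
# the index alongside the ciphertext.
key, nonce, blob = aes_encrypt_file(b"sensor readings ...")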
Recent demand for distributed software has led to a surge in the popularity of actor-based frameworks. However, even with the stylized message-passing model of actors, writing correct distributed software is still difficult. We present our work on linearizability checking in DS2, an integrated framework for specifying, synthesizing, and testing distributed actor systems. The key insight of our approach is that subcomponents of distributed actor systems often implement common algorithms or data structures (e.g., a distributed hash table or tree) that can be validated against a simple sequential model of the system. This makes it easy for developers to validate their concurrent actor systems without complex specifications. DS2 automatically explores the concurrent schedules the system could arrive at and compares the observed output of the system to ensure it is equivalent to what the sequential implementation could have produced. We describe DS2's linearizability checking and test it on several concurrent replication algorithms from the literature. We explore in detail how different algorithms for enumerating the model's schedule space fare in finding bugs in actor systems, and we present our own refinements to these schedule-exploration algorithms that we show are effective in finding bugs.
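The core check can be pictured as follows (a simplified sketch, not DS2's actual engine or APIs; it also ignores real-time ordering constraints between operations): enumerate interleavings of per-actor operation sequences that respect each actor's program order, replay each one against a simple sequential model, and accept the concurrent run only if its observed results match some sequential replay.

def consistent(order):
    """Replay one global order on a sequential key-value model and check that
    every operation returns exactly the result observed in the concurrent run."""
    store = {}
    for op, key, val, observed in order:
        if op == "put":
            store[key] = val
            result = None
        else:  # "get"
            result = store.get(key)
        if result != observed:
            return False
    return True

def interleavings(per_actor_ops):
    """All global orders preserving each actor's local order (exhaustive and
    exponential; a real checker such as DS2 prunes this space aggressively)."""
    if all(not ops for ops in per_actor_ops):
        yield []
        return
    for a, ops in enumerate(per_actor_ops):
        if ops:
            rest = per_actor_ops[:a] + [ops[1:]] + per_actor_ops[a + 1:]
            for tail in interleavings(rest):
                yield [ops[0]] + tail

def linearizable(per_actor_ops):
    return any(consistent(order) for order in interleavings(per_actor_ops))

# Two actors racing on one key; each operation records the result it observed.
history = [[("put", "x", 1, None), ("get", "x", None, 2)],
           [("put", "x", 2, None)]]
print(linearizable(history))   # True: put(2) can be ordered before the get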