Poroelasticity is an example of the coupled processes that are crucial for many applications, including the safety assessment of radioactive waste repositories. Numerical solution of poroelasticity problems discretized with a finite volume--virtual element scheme leads to systems of algebraic equations that may be solved simultaneously or iteratively. In this work, we examine the parallel scalability of the monolithic strategy and of the fixed-strain splitting strategy, which depends mostly on the performance of the linear solver. We expected the splitting strategy to scale better, since a black-box linear solver performs better on systems with a simpler structure; however, this is not always the case.
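To make the contrast concrete, the following is a minimal schematic sketch of a splitting iteration for a two-field poroelastic system (all names and block matrices, such as `A_pp` and `A_uu`, are placeholders of ours; in practice they come from the FV--VEM discretization, and the monolithic strategy would instead solve the full block system at once):

```python
# Schematic splitting loop for a discretized poroelastic system
#   [A_pp  A_pu] [p]   [b_p]
#   [A_up  A_uu] [u] = [b_u]
# where p is the pressure (flow) unknown and u the displacement (mechanics).
import numpy as np

def split_solve(A_pp, A_pu, A_up, A_uu, b_p, b_u, tol=1e-8, max_iter=100):
    p = np.zeros(A_pp.shape[0])
    u = np.zeros(A_uu.shape[0])
    for it in range(max_iter):
        # Flow step: the strain (displacement) contribution is frozen
        # at the previous iterate, as in a fixed-strain split.
        p_new = np.linalg.solve(A_pp, b_p - A_pu @ u)
        # Mechanics step: solve with the freshly updated pressure.
        u_new = np.linalg.solve(A_uu, b_u - A_up @ p_new)
        if max(np.linalg.norm(p_new - p), np.linalg.norm(u_new - u)) < tol:
            return p_new, u_new, it + 1
        p, u = p_new, u_new
    return p, u, max_iter
```

Each sub-solve involves only one physics block, which is why a black-box linear solver is often expected to handle it more efficiently than the full coupled system.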
A key barrier to using reinforcement learning (RL) in many real-world applications is the large number of system interactions required to learn a good control policy. Off-policy and offline RL methods have been proposed to reduce the number of interactions with the physical environment by learning control policies from historical data. However, their performance suffers from the lack of exploration and from distributional shifts in trajectories once controllers are updated. Moreover, most RL methods require that all states be directly observed, which is difficult to satisfy in many settings. To overcome these challenges, we propose a trajectory generation algorithm that adaptively generates new trajectories as if the system were being operated and explored under the updated control policies. Motivated by the fundamental lemma for linear systems, and assuming sufficient excitation, we generate trajectories from linear combinations of historical trajectories. For linear feedback control, we prove that the algorithm generates trajectories with exactly the distribution they would have if sampled from the real system under the updated control policy. In particular, the algorithm extends to systems where the states are not directly observed. Experiments show that the proposed method significantly reduces the number of samples needed by RL algorithms.
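A minimal sketch of the underlying construction, in the spirit of the fundamental lemma (all names are ours; we assume a deterministic LTI system with persistently exciting historical inputs, whereas the paper's algorithm additionally covers the stochastic, closed-loop setting):

```python
# Generate a new trajectory as a linear combination of columns of Hankel
# matrices built from one historical input/output trajectory.
import numpy as np

def hankel(w, L):
    """Stack length-L windows of the signal w (shape T x d) as columns."""
    T, d = w.shape
    return np.column_stack([w[i:i + L].reshape(-1) for i in range(T - L + 1)])

def generate_trajectory(u_hist, y_hist, u_new, u_init, y_init):
    """u_new: inputs from the updated policy; (u_init, y_init): a short
    recent window that pins down the (unobserved) internal state."""
    L_init, L_new = len(u_init), len(u_new)
    H_u = hankel(u_hist, L_init + L_new)
    H_y = hankel(y_hist, L_init + L_new)
    du, dy = u_hist.shape[1], y_hist.shape[1]
    # Constraints: match the initial window and the new input sequence;
    # solve for the combination weights g in least squares.
    A = np.vstack([H_u[:L_init * du], H_y[:L_init * dy], H_u[L_init * du:]])
    b = np.concatenate([u_init.reshape(-1), y_init.reshape(-1),
                        u_new.reshape(-1)])
    g, *_ = np.linalg.lstsq(A, b, rcond=None)
    # The corresponding outputs are the same combination of output windows.
    return (H_y[L_init * dy:] @ g).reshape(L_new, dy)
```

Because the initial window is expressed through input/output data only, no direct state measurement is needed, which is what allows the extension to partially observed systems.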
We study the problem of learning unknown parameters in stochastic interacting particle systems with polynomial drift, interaction, and diffusion functions from the path of a single particle in the system. Our estimator is obtained by solving a linear system constructed by imposing appropriate conditions on the moments of the invariant distribution of the mean-field limit and on the quadratic variation of the process. The approach is easy to implement, as it only requires approximating the moments via the ergodic theorem and solving a low-dimensional linear system. Moreover, we prove that our estimator is asymptotically unbiased in the limits of infinite data and an infinite number of particles (the mean-field limit). In addition, we present several numerical experiments that validate the theoretical analysis and show the effectiveness of our methodology in accurately inferring parameters in systems of interacting particles.
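As an illustration of the moment-matching idea, here is a toy one-dimensional analogue of ours, without the interaction term: for $dX_t = -(\theta_1 X_t + \theta_3 X_t^3)\,dt + \sqrt{2\sigma}\,dW_t$, stationarity of the invariant law gives linear equations in $(\theta_1, \theta_3)$ once the diffusion is read off the quadratic variation (parameterization and names are ours, not the paper's):

```python
import numpy as np

def estimate_drift(x, dt):
    """x: one long sample path (1-D array) observed at spacing dt."""
    # Diffusion from the quadratic variation: sum of squared increments ~ 2*sigma*T.
    sigma = np.sum(np.diff(x) ** 2) / (2 * dt * (len(x) - 1))
    # Moments of the invariant law via the ergodic theorem (time averages).
    m2, m4, m6 = (np.mean(x ** k) for k in (2, 4, 6))
    # Generator (stationarity) conditions with test functions x^2 and x^4:
    #   theta1*m2 + theta3*m4 = sigma
    #   theta1*m4 + theta3*m6 = 3*sigma*m2
    A = np.array([[m2, m4], [m4, m6]])
    b = np.array([sigma, 3 * sigma * m2])
    theta1, theta3 = np.linalg.solve(A, b)
    return theta1, theta3, sigma
```

The full method replaces these moments with those of the mean-field invariant distribution and includes the interaction and diffusion parameters, but the structure, ergodic averages feeding a small linear system, is the same.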
This paper describes three methods for carrying out non-asymptotic inference on partially identified parameters that are solutions to a class of optimization problems. Applications in which the optimization problems arise include estimation under shape restrictions, estimation of models of discrete games, and estimation based on grouped data. The partially identified parameters are characterized by restrictions that involve the unknown population means of observed random variables in addition to structural parameters. Inference consists of finding confidence intervals for functions of the structural parameters. Our theory provides finite-sample lower bounds on the coverage probabilities of the confidence intervals under three sets of assumptions of increasing strength. With the moderate sample sizes found in most economics applications, the bounds become tighter as the assumptions strengthen. We discuss estimation of the population parameters on which the bounds depend and contrast our methods with alternative methods for obtaining confidence intervals for partially identified parameters. The results of Monte Carlo experiments and empirical examples illustrate the usefulness of our methods.
In this paper, we propose an inverse-kinematics controller for a class of multi-robot systems under sampled communication. The goal is to make a group of robots perform coordinated trajectory tracking when the sampling time of communication is much larger than that of the low-level controllers, which disrupts the theoretical convergence guarantees of standard continuous-time control design. Given a desired trajectory in configuration space, precomputed offline, the proposed controller receives configuration measurements, possibly over wireless channels, and recomputes velocity references for the robots, which are then tracked by a low-level controller. We propose the joint design of a sampled proportional feedback and a novel continuous-time feedforward that linearizes the dynamics around the reference trajectory; this method is amenable to a distributed implementation in which only one broadcast transmission is needed per sampling period. We also provide closed-form expressions for the stability and instability regions and for the convergence rate in terms of the proportional gain $k$ and the sampling period $T$. We test the proposed control strategy in numerical simulations of cooperative aerial manipulation of a cable-suspended load using a realistic simulator (Fly-Crane). Finally, we compare our controller with centralized approaches that adapt the feedback gain online through smart heuristics and show that it achieves comparable performance.
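The following toy single-integrator simulation (a simplification of ours, not the paper's model) illustrates the structure of the control law: the feedback error is measured only every $T$ seconds and held in between, while the feedforward velocity is applied continuously by the low-level loop:

```python
# Sampled proportional feedback + continuous feedforward on q_dot = v.
import numpy as np

def simulate(q_ref, v_ff, k, T, dt, q0, t_end):
    """q_ref, v_ff: callables t -> reference configuration / feedforward
    velocity; T: communication period; dt: low-level control period."""
    q = np.array(q0, dtype=float)
    fb = np.zeros_like(q)
    traj = [q.copy()]
    ratio = int(round(T / dt))
    for i in range(int(t_end / dt)):
        t = i * dt
        if i % ratio == 0:                # a new measurement arrives
            fb = k * (q_ref(t) - q)       # recompute the held feedback term
        q = q + dt * (v_ff(t) + fb)       # low-level velocity tracking
        traj.append(q.copy())
    return np.array(traj)
```

Sweeping `k` and `T` in such a simulation reproduces the qualitative picture of the analysis: for a fixed gain, increasing the sampling period eventually crosses from the stability region into instability.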
Large convolutional neural networks (CNNs) can be difficult to train in the differentially private (DP) regime, since the optimization algorithms require a computationally expensive operation known as per-sample gradient clipping. We propose an efficient and scalable implementation of this clipping for convolutional layers, termed mixed ghost clipping, that significantly eases private training in terms of both time and space complexity, without affecting accuracy. The efficiency improvement is rigorously studied through the first complexity analysis of mixed ghost clipping and existing DP training algorithms. Extensive experiments on vision classification tasks with large ResNets, VGGs, and Vision Transformers demonstrate that DP training with mixed ghost clipping adds $1\sim 10\%$ memory overhead and $<2\times$ slowdown relative to standard non-private training. Specifically, when training VGG19 on CIFAR10, mixed ghost clipping is $3\times$ faster than the state-of-the-art Opacus library, with an $18\times$ larger maximum batch size. To emphasize the significance of efficient DP training on convolutional layers, we achieve 96.7\% accuracy on CIFAR10 and 83.0\% on CIFAR100 at $\epsilon=1$ using BEiT, while the previous best results are 94.8\% and 67.4\%, respectively. We open-source a privacy engine (\url{https://github.com/woodyx218/private_vision}) that implements DP training of CNNs in a few lines of code.
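The building block behind ghost clipping is a classical identity, shown here for a single linear layer (a sketch of ours only; the mixed strategy chooses per layer between this ghost norm and instantiating per-sample gradients, and handles convolutional layers, as implemented in the linked library): the per-sample weight gradient is the outer product $g_i a_i^\top$, so its Frobenius norm is $\|g_i\|\,\|a_i\|$ and never needs to be materialized.

```python
import torch

B, d_in, d_out = 32, 512, 256
a = torch.randn(B, d_in)    # saved layer inputs (activations)
g = torch.randn(B, d_out)   # gradients w.r.t. layer outputs

# Ghost norms: per-sample gradient norms without per-sample gradients.
per_sample_norm = a.norm(dim=1) * g.norm(dim=1)        # shape (B,)
C = 1.0                                                # clipping threshold
clip = (C / (per_sample_norm + 1e-6)).clamp(max=1.0)

# Clipped summed gradient, again without forming B outer products:
grad_W = (g * clip[:, None]).T @ a                     # shape (d_out, d_in)
```

In full DP-SGD the squared ghost norms are summed across all layers before a single per-sample clipping factor is computed; the sketch omits this aggregation for brevity.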
The rising use of deep neural networks for decision making in critical applications such as medical diagnosis and financial analysis has raised concerns about their reliability and trustworthiness. As automated systems become more mainstream, it is important that their decisions be transparent, reliable, and understandable by humans to foster trust and confidence. To this end, concept-based models such as Concept Bottleneck Models (CBMs) and Self-Explaining Neural Networks (SENN) have been proposed, which constrain the latent space of a model to represent high-level concepts easily understood by domain experts. Although concept-based models promise to increase both explainability and reliability, it has yet to be shown whether they are robust and output consistent concepts under systematic perturbations of their inputs. To better understand the performance of concept-based models on curated malicious samples, in this paper we study their robustness to adversarial perturbations: imperceptible changes to the input data crafted by an attacker to fool a well-trained concept-based model. Specifically, we first propose and analyze different malicious attacks to evaluate the security vulnerability of concept-based models. Subsequently, we propose a general adversarial training-based defense mechanism to increase the robustness of these systems to the proposed attacks. Extensive experiments on one synthetic and two real-world datasets demonstrate the effectiveness of the proposed attacks and the defense approach.
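For intuition, here is a generic PGD-style attack on the concept predictions (an illustrative sketch of ours, not one of the paper's specific attacks; `model.concepts` is an assumed interface returning concept logits):

```python
import torch

def concept_attack(model, x, target_concepts, eps=8/255, alpha=2/255, steps=10):
    """Perturb x within an L-inf ball of radius eps to push the predicted
    concepts toward target_concepts (e.g., flipping a single concept)."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = torch.nn.functional.binary_cross_entropy_with_logits(
            model.concepts(x_adv), target_concepts)   # assumed interface
        grad, = torch.autograd.grad(loss, x_adv)
        # Descend on the loss so the concepts move toward the target.
        x_adv = (x_adv - alpha * grad.sign()).detach()
        x_adv = x + (x_adv - x).clamp(-eps, eps)      # project into eps-ball
        x_adv = x_adv.clamp(0, 1)                     # keep a valid image
    return x_adv
```

Attacks of this form are what adversarial training defends against: the defense retrains the model on such perturbed inputs paired with the correct concept labels.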
Mobility systems often suffer from a high price of anarchy due to the uncontrolled behavior of selfish users. This may result in societal costs that are significantly higher than what a centralized system-optimal controller could achieve. Monetary tolling schemes can effectively align the behavior of selfish users with the system optimum; yet, they inevitably discriminate among users on the basis of income. Artificial currencies were recently presented as an effective alternative that can achieve the same performance while guaranteeing fairness among the population. However, those studies were based on behavioral models that may differ from practical implementations. This paper presents a data-driven approach to automatically adapt artificial-currency tolls in repeated-game settings. We first consider a parallel-arc setting in which users commute daily from a single origin to a single destination, choosing a route in exchange for an artificial-currency price or reward while accounting for the impact of other users' choices on travel discomfort. Second, we devise a model-based reinforcement learning controller that autonomously learns the optimal pricing policy by interacting with the proposed framework, using the closeness of the observed aggregate flows to a desired system-optimal distribution as the reward. Our numerical results show that the proposed data-driven pricing scheme can effectively align users' flows with the system optimum, significantly reducing societal costs relative to uncontrolled flows (by about 15% to 25%, depending on the scenario), and can respond to environmental changes in a robust and efficient manner.
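A minimal sketch of such a closeness-based reward (names and the choice of the $\ell_1$ distance are ours; any distance between flow distributions would fit the description above):

```python
import numpy as np

def reward(observed_flows, optimal_flows):
    """Negative L1 distance between per-arc flow fractions (each summing to 1):
    the reward is maximal when the observed flows match the system optimum."""
    return -np.linalg.norm(
        np.asarray(observed_flows) - np.asarray(optimal_flows), ord=1)
```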
Standard solvers exist for tackling discrete optimization problems. In practice, however, it is uncommon to apply them directly to the large input spaces typical of this class of problems. Rather, the input is preprocessed to look for simplifications and to extract the core subset of the problem space, called the kernel. This preprocessing procedure is known in parameterized complexity theory as kernelization. In this thesis, I implement parallel versions of some kernelization algorithms and evaluate their performance. The performance of a kernelization algorithm is measured either by the size of the output kernel or by the time it takes to compute the kernel. Sometimes the kernel is the same as the original input, and it is desirable to detect this as soon as possible. The scope is limited to a particular discrete optimization problem: a version of the $k$-clique problem in which the nodes of the given graph are legally pre-coloured using $k$ colours. The final evaluation shows that my parallel implementations achieve over 50% improvement in efficiency for at least one of these algorithms, not only in terms of speed: they also produce a smaller kernel.
Sampling methods (e.g., node-wise, layer-wise, or subgraph sampling) have become an indispensable strategy for speeding up the training of large-scale Graph Neural Networks (GNNs). However, existing sampling methods are mostly based on graph structural information and ignore the dynamics of optimization, which leads to high variance in estimating the stochastic gradients. The high-variance issue can be very pronounced in extremely large graphs, where it results in slow convergence and poor generalization. In this paper, we theoretically analyze the variance of sampling methods and show that, due to the composite structure of the empirical risk, the variance of any sampling method can be decomposed into \textit{embedding approximation variance} in the forward stage and \textit{stochastic gradient variance} in the backward stage, and that mitigating both types of variance is necessary to obtain a faster convergence rate. We propose a decoupled variance reduction strategy that employs (approximate) gradient information to adaptively sample nodes with minimal variance and explicitly reduces the variance introduced by embedding approximation. We show theoretically and empirically that the proposed method, even with smaller mini-batch sizes, enjoys a faster convergence rate and achieves better generalization than existing methods.
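Schematically (in our notation, not the paper's: $\hat{g}$ is the sampled gradient and $\tilde{g}$ the stochastic gradient computed with exact embeddings), the error splits by the triangle inequality as
\[
\mathbb{E}\,\|\hat{g} - \nabla \mathcal{L}\|^2 \;\lesssim\; \underbrace{\mathbb{E}\,\|\hat{g} - \tilde{g}\|^2}_{\text{embedding approximation variance (forward)}} \;+\; \underbrace{\mathbb{E}\,\|\tilde{g} - \nabla \mathcal{L}\|^2}_{\text{stochastic gradient variance (backward)}},
\]
so reducing either term alone leaves the other as a bottleneck, which is why the strategy treats the two in a decoupled fashion.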
Since hardware resources are limited, the objective of training deep learning models is typically to maximize accuracy subject to the time and memory constraints of training and inference. We study the impact of model size in this setting, focusing on Transformer models for NLP tasks that are limited by compute: self-supervised pretraining and high-resource machine translation. We first show that even though smaller Transformer models execute faster per iteration, wider and deeper models converge in significantly fewer steps. Moreover, this acceleration in convergence typically outpaces the additional computational overhead of using larger models. Therefore, the most compute-efficient training strategy is, counterintuitively, to train extremely large models and stop after a small number of iterations. This leads to an apparent trade-off between the training efficiency of large Transformer models and the inference efficiency of small Transformer models. However, we show that large models are more robust than small models to compression techniques such as quantization and pruning. Consequently, one can get the best of both worlds: heavily compressed large models achieve higher accuracy than lightly compressed small models.