Logit dynamics are evolution equations that describe transitions to equilibria of actions among many players. We formulate a pair-wise logit dynamic in a continuous action space with a generalized exponential function, which we call a generalized pair-wise logit dynamic, described by a new evolution equation that is nonlocal in space. We prove the well-posedness and approximability of the generalized pair-wise logit dynamic, showing that it is computationally implementable. We also show that this dynamic has an explicit connection to a mean field game of a controlled pure-jump process, through which the two different mathematical models can be understood in a unified way. In particular, we show that the generalized pair-wise logit dynamic arises as a myopic version of the corresponding mean field game, and that the conditions guaranteeing the existence of unique solutions differ between the two. The key step in this procedure is identifying the objective function to be optimized in the mean field game based on the logit function. Monotonicity of the utility is unnecessary for the generalized pair-wise logit dynamic but crucial for the mean field game. Finally, we present applications of the two approaches to fisheries management problems with collected data.
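For orientation, the following display is a schematic sketch (our notation, not the paper's exact formulation) of a pairwise logit revision protocol on a continuous action set $A \subset \mathbb{R}$, with the standard exponential replaced by a generalized exponential $\exp_q$ (e.g., of Tsallis type, with $\exp_1 = \exp$). Writing $\mu_t$ for the population distribution over actions and $U(a,\mu_t)$ for the utility of action $a$,
\[
\partial_t \mu_t(a) = \int_A \big[ \rho_q(b \to a, \mu_t)\, \mu_t(b) - \rho_q(a \to b, \mu_t)\, \mu_t(a) \big]\, \mathrm{d}b,
\qquad
\rho_q(b \to a, \mu_t) = \frac{\exp_q\!\big(\beta\, U(a,\mu_t)\big)}{\exp_q\!\big(\beta\, U(a,\mu_t)\big) + \exp_q\!\big(\beta\, U(b,\mu_t)\big)},
\]
with inverse temperature $\beta > 0$; the nonlocality in space referred to above stems from the integral over the compared action $b$.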
We consider the tasks of learning quantum states, measurements, and channels generated by continuous-variable (CV) quantum circuits. This family of circuits is suited to describing optical quantum technologies and, in particular, includes state-of-the-art photonic processors capable of demonstrating quantum advantage. We define classes of functions that map classical variables, encoded into the CV circuit parameters, to outcome probabilities evaluated on those circuits. We then establish efficient learnability guarantees for such classes by computing bounds on their pseudo-dimension or covering numbers, showing that CV quantum circuits can be learned with a sample complexity that scales polynomially with the circuit's size, i.e., the number of modes. Our results show that CV circuits can be trained efficiently using a number of training samples that, unlike for their finite-dimensional counterparts, does not scale with the circuit depth.
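For context, a hedged sketch of how a pseudo-dimension bound is typically converted into a sample-complexity guarantee (a standard uniform-convergence argument; the paper's precise constants and function classes may differ): if the class $\mathcal{F}$ of parameter-to-probability maps satisfies $\mathrm{Pdim}(\mathcal{F}) \le \mathrm{poly}(m)$ in the number of modes $m$, then for a bounded loss $\ell$ and i.i.d. samples $z_1, \dots, z_N$,
\[
\sup_{f \in \mathcal{F}} \left| \frac{1}{N} \sum_{i=1}^{N} \ell(f; z_i) - \mathbb{E}\,\ell(f; z) \right| \le \varepsilon
\quad \text{with probability at least } 1 - \delta
\quad \text{once} \quad
N = O\!\left( \frac{\mathrm{Pdim}(\mathcal{F}) \log(1/\varepsilon) + \log(1/\delta)}{\varepsilon^{2}} \right),
\]
so a pseudo-dimension polynomial in $m$ yields a sample complexity polynomial in the circuit's size.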
Missing values are prevalent across various fields, posing challenges for training and deploying predictive models. In this context, imputation is a common practice, driven by the hope that accurate imputations will enhance predictions. However, recent theoretical and empirical studies indicate that simple constant imputation can be consistent and competitive. This empirical study aims to clarify whether and when investing in advanced imputation methods yields significantly better predictions. Relating imputation and predictive accuracies across combinations of imputation and predictive models on 20 datasets, we show that imputation accuracy matters less i) when using expressive models and ii) when incorporating missingness indicators as complementary inputs, and that it matters much more iii) for generated linear outcomes than for real-data outcomes. Interestingly, we also show that using the missingness indicator benefits prediction performance, even in MCAR scenarios. Overall, on real data with powerful models, improving imputation has only a minor effect on prediction performance. Thus, investing in better imputations for improved predictions often offers limited benefits.
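As an illustration of point ii), the following minimal sketch (toy data and parameter choices of ours, not the paper's pipeline) shows constant imputation combined with a missingness indicator feeding an expressive model in scikit-learn:

```python
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.ensemble import HistGradientBoostingRegressor
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# Toy data with entries missing completely at random (MCAR).
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))
y = X[:, 0] + 0.5 * X[:, 1] ** 2 + rng.normal(scale=0.1, size=1000)
X[rng.random(X.shape) < 0.2] = np.nan  # roughly 20% missing entries

# Constant imputation; add_indicator=True appends binary missingness masks
# as complementary input features for the downstream model.
model = make_pipeline(
    SimpleImputer(strategy="constant", fill_value=0.0, add_indicator=True),
    HistGradientBoostingRegressor(random_state=0),
)
print(cross_val_score(model, X, y, cv=5, scoring="r2").mean())
```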
Sequential excavation is common in shallow tunnel engineering, especially for large-span tunnels. However, existing complex variable solutions cannot handle sequential shallow tunnelling effectively. This paper proposes a new complex variable solution for sequential shallow tunnelling in gravitational geomaterial with reasonable far-field displacement, obtained in a non-iterative manner by incorporating a bidirectional stepwise conformal mapping that combines the Charge Simulation Method and the Complex Dipole Simulation Method. The non-iterative manner ensures that the mechanical models of the sequential excavation stages share a similar mathematical formulation with non-successive mixed boundary conditions, which are respectively transformed into corresponding homogeneous Riemann-Hilbert problems and solved to obtain the stress and displacement fields of sequential shallow tunnelling. The proposed solution is subsequently validated by extensive comparisons with an equivalent finite element solution, with good agreement. The comparisons also suggest that the proposed solution is more accurate than the finite element one. A parametric investigation is finally conducted to illustrate possible practical applications of the proposed solution, together with several engineering recommendations. Additionally, the theoretical improvements and limitations of the proposed solution are discussed for objectivity.
Numerically solving high-dimensional random parametric PDEs poses a challenging computational problem. It is well known that numerical methods can greatly benefit from adaptive refinement algorithms, in particular when functional approximations in polynomials are computed, as in stochastic Galerkin finite element methods. This work investigates a residual-based adaptive algorithm, akin to classical adaptive FEM, used to approximate the solution of the stationary diffusion equation with lognormal coefficients, i.e., with a non-affine parameter dependence of the data. It is known that the refinement procedure is reliable, but the theoretical convergence of the scheme for this class of unbounded coefficients remains a challenging open question. This paper advances the theoretical state of the art by providing a quasi-error reduction result for the adaptive solution of the lognormal stationary diffusion problem. The presented analysis generalizes previous results in that guaranteed convergence for uniformly bounded coefficients follows directly as a corollary. Moreover, it highlights the fundamental challenges posed by unbounded coefficients that cannot be overcome with common techniques. A computational benchmark example illustrates the main theoretical statement.
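For readers unfamiliar with the terminology, a quasi-error reduction statement in adaptive FEM typically takes the following schematic form (a generic template in our notation, not the paper's precise statement): with $u$ the exact solution, $u_\ell$ the Galerkin approximation at step $\ell$, $\eta_\ell$ the residual error estimator, and a weight $\gamma > 0$,
\[
\Delta_\ell := \| u - u_\ell \|^2 + \gamma\, \eta_\ell^2,
\qquad
\Delta_{\ell+1} \le \alpha\, \Delta_\ell + \text{perturbation terms}, \qquad 0 < \alpha < 1,
\]
where, for uniformly bounded coefficients, the perturbation can be absorbed and a genuine contraction (hence convergence) follows.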
Modern approaches to computer vision tasks rely significantly on machine learning, which requires a large number of quality images. While there is a plethora of image datasets containing a single type of image, there is a lack of datasets collected from multiple cameras. In this thesis, we introduce Paired Image and Video data from three CAMeraS, namely PIV3CAMS, aimed at multiple computer vision tasks. The PIV3CAMS dataset consists of 8385 pairs of images and 82 pairs of videos taken with three different cameras: a Canon D5 Mark IV, a Huawei P20, and a ZED stereo camera. The dataset includes various indoor and outdoor scenes from different locations in Zurich (Switzerland) and Cheonan (South Korea). Computer vision applications that can benefit from the PIV3CAMS dataset include image/video enhancement, view interpolation, image matching, and many more. We provide a careful explanation of the data collection process and a detailed analysis of the data. The second part of this thesis studies the use of depth information in the view synthesis task. In addition to reproducing a current state-of-the-art algorithm, we investigate several alternative models that integrate depth information geometrically. Through extensive experiments, we show that the effect of depth is crucial for small view changes. Finally, we apply our model to the introduced PIV3CAMS dataset to synthesize novel target views as an example application of PIV3CAMS.
We consider a model of learning and evolution in games whose action sets are endowed with a partition-based similarity structure intended to capture exogenous similarities between strategies. In this model, revising agents have a higher probability of comparing their current strategy with other strategies that they deem similar, and they switch to the observed strategy with probability proportional to its payoff excess. Because of this implicit bias toward similar strategies, the resulting dynamics - which we call the nested replicator dynamics - do not satisfy any of the standard monotonicity postulates for imitative game dynamics; nonetheless, we show that they retain the main long-run rationality properties of the replicator dynamics, albeit at quantitatively different rates. We also show that the induced dynamics can be viewed as a stimulus-response model in the spirit of Erev & Roth (1998), with choice probabilities given by the nested logit choice rule of Ben-Akiva (1973) and McFadden (1978). This result generalizes an existing relation between the replicator dynamics and the exponential weights algorithm in online learning, and provides an additional layer of interpretation to our analysis and results.
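For concreteness, the nested logit choice rule referenced above can be sketched as follows (a schematic two-level form with the similarity classes playing the role of nests; notation is ours, not the paper's): if the action set is partitioned into classes $\{A_k\}$ and $u_a$ denotes the payoff or propensity of action $a$, then the probability of choosing $a \in A_k$ factors as
\[
P(a) = \underbrace{\frac{\Big(\sum_{b \in A_k} e^{u_b/\lambda_k}\Big)^{\lambda_k}}{\sum_{k'} \Big(\sum_{b \in A_{k'}} e^{u_b/\lambda_{k'}}\Big)^{\lambda_{k'}}}}_{\text{choice of class } A_k}
\;\cdot\;
\underbrace{\frac{e^{u_a/\lambda_k}}{\sum_{b \in A_k} e^{u_b/\lambda_k}}}_{\text{choice within } A_k},
\]
where $\lambda_k \in (0,1]$ controls the degree of within-class similarity; taking $\lambda_k = 1$ for all $k$ recovers the standard logit rule.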
We discuss a connection between diffusion models, a class of generative models, and stochastic thermodynamics, the nonequilibrium thermodynamics of the Fokker-Planck equation. Based on techniques from stochastic thermodynamics, we derive a speed-accuracy trade-off for diffusion models, i.e., a trade-off relationship between the speed and the accuracy of data generation. Our result implies that the entropy production rate in the forward process affects the errors in data generation. From a stochastic thermodynamic perspective, our results provide quantitative insight into how best to generate data with diffusion models. The optimal learning protocol is given by the conservative force in stochastic thermodynamics and by the geodesic of the space induced by the 2-Wasserstein distance in optimal transport theory. We numerically illustrate the validity of the speed-accuracy trade-off for diffusion models with different noise schedules, such as the cosine schedule, conditional optimal transport, and optimal transport.
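For reference, a minimal sketch of one of the noise schedules mentioned above, the cosine schedule, in the discrete-time parameterization popularized by Nichol & Dhariwal (the paper's continuous-time setup may differ):

```python
import numpy as np

def cosine_alpha_bar(T: int, s: float = 0.008) -> np.ndarray:
    """Cumulative signal level alpha_bar(t) under the cosine schedule."""
    t = np.arange(T + 1) / T
    f = np.cos((t + s) / (1 + s) * np.pi / 2) ** 2
    return f / f[0]

alpha_bar = cosine_alpha_bar(T=1000)
# Per-step noise levels beta_t derived from the cumulative schedule.
betas = np.clip(1 - alpha_bar[1:] / alpha_bar[:-1], 0.0, 0.999)
```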
The standard mathematical approach to fourth-down decision making in American football is to make the decision that maximizes estimated win probability. Win probability estimates arise from machine learning models fit to historical data. These models attempt to capture a nuanced relationship between a noisy binary outcome variable and game-state variables replete with interactions and non-linearities, from a finite dataset of just a few thousand games. Thus, it is imperative to knit uncertainty quantification into the fourth-down decision procedure; we do so using bootstrapping. We find that uncertainty in the estimated optimal fourth-down decision is far greater than that currently expressed by sports analysts in popular sports media.
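A minimal sketch of such a bootstrap procedure (column names, feature set, and the mapping from decisions to game states are hypothetical placeholders, not the authors' code):

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

# Hypothetical game-state features plus a binary "win" label per play.
FEATURES = ["yardline", "score_diff", "time_remaining", "yards_to_go"]

def bootstrap_decision(plays: pd.DataFrame,
                       candidate_states: dict[str, np.ndarray],
                       n_boot: int = 200) -> pd.Series:
    """Resample games, refit the win-probability model, and record which
    candidate fourth-down decision maximizes estimated win probability.

    `candidate_states` maps each decision ("go", "punt", "field_goal") to the
    post-decision game-state feature vector (a modelling simplification).
    """
    game_ids = plays["game_id"].unique()
    picks = []
    for _ in range(n_boot):
        # Resample at the game level to respect within-game dependence.
        sampled_ids = np.random.choice(game_ids, size=len(game_ids), replace=True)
        boot = pd.concat([plays[plays["game_id"] == g] for g in sampled_ids])
        wp = GradientBoostingClassifier().fit(boot[FEATURES], boot["win"])
        probs = {d: wp.predict_proba(x.reshape(1, -1))[0, 1]
                 for d, x in candidate_states.items()}
        picks.append(max(probs, key=probs.get))
    # Fraction of bootstrap refits on which each decision was estimated optimal.
    return pd.Series(picks).value_counts(normalize=True)
```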
The use of variable-grid BDF methods for parabolic equations leads to matrix structures known as variable (coefficient) Toeplitz. Here, we consider a more general class of matrix-sequences and prove that they belong to the maximal $*$-algebra of generalized locally Toeplitz (GLT) matrix-sequences. We then identify the associated GLT symbols both in the general setting and in the specific case, providing a spectral and singular value analysis in both cases. More specifically, we use GLT tools to study the asymptotic behaviour of the eigenvalues and singular values of the considered BDF matrix-sequences, in connection with the given non-uniform grids. Numerical examples, visualizations, and open problems conclude the present work.
This dissertation studies a fundamental open challenge in deep learning theory: why do deep networks generalize well even while being overparameterized, unregularized, and fitting the training data to zero error? In the first part of the thesis, we will empirically study how training deep networks via stochastic gradient descent implicitly controls the networks' capacity. Subsequently, to show how this leads to better generalization, we will derive {\em data-dependent}, {\em uniform-convergence-based} generalization bounds with improved dependencies on the parameter count. Uniform convergence has in fact been the most widely used tool in the deep learning literature, thanks to its simplicity and generality. Given its popularity, in this thesis, we will also take a step back to identify the fundamental limits of uniform convergence as a tool to explain generalization. In particular, we will show that in some example overparameterized settings, {\em any} uniform convergence bound will provide only a vacuous generalization bound. With this realization in mind, in the last part of the thesis, we will change course and introduce an {\em empirical} technique to estimate generalization using unlabeled data. Our technique does not rely on any notion of uniform-convergence-based complexity and is remarkably precise. We will theoretically show why our technique enjoys such precision. We will conclude by discussing how future work could explore novel ways to incorporate distributional assumptions in generalization bounds (such as in the form of unlabeled data) and explore other tools to derive bounds, perhaps by modifying uniform convergence or by developing completely new tools altogether.