The three-dimensional effect of the tunnel face and gravitational excavation generally occur in shallow tunnelling, yet neither is adequately considered in existing complex variable solutions. In this paper, a new time-dependent complex variable solution for quasi three-dimensional shallow tunnelling in gravitational geomaterial is derived, in which the far-field displacement singularity is eliminated by fixing the far-field ground surface over the whole excavation time span. With an equivalent coefficient of the three-dimensional effect, the quasi three-dimensional shallow tunnelling is transformed into a plane strain problem with a time-dependent virtual traction along the tunnel periphery. The mixed boundary of a fixed far-field ground surface and a nearby free segment forms a homogeneous Riemann-Hilbert problem with extra constraints from the virtual traction along the tunnel periphery, which is solved simultaneously using an iterative linear system with good numerical stability. The mixed boundary conditions along the ground surface are well satisfied over the whole excavation time span in a numerical case, which is further examined by comparison with a corresponding finite element solution. The results are in good agreement, and the proposed solution demonstrates high efficiency. Further discussion addresses the excavation rate, viscosity, and solution convergence. A latent paradox is disclosed for objectivity.
We discuss a connection between a generative model, called the diffusion model, and the nonequilibrium thermodynamics of the Fokker-Planck equation, known as stochastic thermodynamics. Using techniques from stochastic thermodynamics, we derive a speed-accuracy trade-off for diffusion models, a trade-off relationship between the speed and accuracy of data generation. Our result implies that the entropy production rate in the forward process affects the errors in data generation. From a stochastic thermodynamic perspective, our results provide quantitative insight into how best to generate data in diffusion models. The optimal learning protocol is characterized by the conservative force in stochastic thermodynamics and the geodesic of the space induced by the 2-Wasserstein distance in optimal transport theory. We numerically illustrate the validity of the speed-accuracy trade-off for diffusion models with different noise schedules, such as the cosine schedule, conditional optimal transport, and optimal transport.
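As a minimal, hedged sketch of the noise schedules named above (the cosine schedule, plus a standard linear variance-preserving schedule as a stand-in; the paper's conditional-optimal-transport and optimal-transport schedules are not reproduced here), one can tabulate the signal coefficient $\bar{\alpha}(t)$ of the forward perturbation kernel:

```python
import numpy as np

def cosine_alpha_bar(t, s=0.008):
    """Cosine schedule (Nichol & Dhariwal): alpha_bar(t) for t in [0, 1]."""
    f = lambda u: np.cos((u + s) / (1.0 + s) * np.pi / 2.0) ** 2
    return f(t) / f(0.0)

def linear_vp_alpha_bar(t, beta_min=0.1, beta_max=20.0):
    """Variance-preserving SDE with linear beta(t): alpha_bar = exp(-int_0^t beta)."""
    return np.exp(-(beta_min * t + 0.5 * (beta_max - beta_min) * t ** 2))

# Forward perturbation kernel: x_t = sqrt(alpha_bar) * x_0 + sqrt(1 - alpha_bar) * noise.
t = np.linspace(0.0, 1.0, 6)
print("t        ", np.round(t, 2))
print("cosine   ", np.round(cosine_alpha_bar(t), 4))
print("linear VP", np.round(linear_vp_alpha_bar(t), 4))
```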
We describe a randomized algorithm for producing a near-optimal hierarchical off-diagonal low-rank (HODLR) approximation to an $n\times n$ matrix $\mathbf{A}$, accessible only through matrix-vector products with $\mathbf{A}$ and $\mathbf{A}^{\mathsf{T}}$. We prove that, for the rank-$k$ HODLR approximation problem, our method achieves a $(1+\beta)^{\log(n)}$-optimal approximation in expected Frobenius norm using $O(k\log(n)/\beta^3)$ matrix-vector products. In particular, the algorithm obtains a $(1+\varepsilon)$-optimal approximation with $O(k\log^4(n)/\varepsilon^3)$ matrix-vector products, and for any constant $c$, an $n^c$-optimal approximation with $O(k \log(n))$ matrix-vector products. Apart from matrix-vector products, the additional computational cost of our method is just $O(n \operatorname{poly}(\log(n), k, \beta))$. We complement the upper bound with a lower bound, which shows that any matrix-vector query algorithm requires at least $\Omega(k\log(n) + k/\varepsilon)$ queries to obtain a $(1+\varepsilon)$-optimal approximation. Our algorithm can be viewed as a robust version of widely used "peeling" methods for recovering HODLR matrices and is, to the best of our knowledge, the first matrix-vector query algorithm to enjoy theoretical worst-case guarantees for approximation by any hierarchical matrix class. To control the propagation of error between levels of hierarchical approximation, we introduce a new perturbation bound for low-rank approximation, which shows that the widely used Generalized Nystr\"om method enjoys inherent stability when implemented with noisy matrix-vector products. We also introduce a novel randomly perforated matrix sketching method to further control the error in the peeling algorithm.
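The Generalized Nystr\"om method named in the abstract admits a compact sketch. The following assumes the standard formulation $\mathbf{A} \approx (\mathbf{A}\Omega)(\Psi^{\mathsf{T}}\mathbf{A}\Omega)^{+}(\Psi^{\mathsf{T}}\mathbf{A})$ built from matrix-vector products alone; it is an illustration of that building block, not the paper's full peeling algorithm:

```python
import numpy as np

def generalized_nystrom(matvec, rmatvec, n, k, oversample=5, rng=None):
    """Rank-(at most k) approximation A ~ L @ R from k matvecs with A
    and k + oversample matvecs with A^T (Generalized Nystrom)."""
    rng = np.random.default_rng(rng)
    ell = k + oversample                      # slightly larger left sketch
    Omega = rng.standard_normal((n, k))       # right sketching matrix
    Psi = rng.standard_normal((n, ell))       # left sketching matrix
    Y = matvec(Omega)                         # A @ Omega
    Z = rmatvec(Psi)                          # A^T @ Psi, so Z.T = Psi^T A
    W = Psi.T @ Y                             # small core matrix Psi^T A Omega
    L = Y @ np.linalg.pinv(W)                 # apply the pseudoinverse
    return L, Z.T                             # approximation is L @ Z.T

# Usage on an explicit matrix (the matvecs would normally be black boxes):
n, k = 200, 10
rng = np.random.default_rng(0)
A = np.linalg.qr(rng.standard_normal((n, n)))[0] @ np.diag(0.5 ** np.arange(n))
L, R = generalized_nystrom(lambda X: A @ X, lambda X: A.T @ X, n, k)
print(np.linalg.norm(A - L @ R) / np.linalg.norm(A))
```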
This work extends the well-known high-order WENO finite-difference methods for systems of conservation laws to nonconservative hyperbolic systems. The main difficulty of these systems, from both the theoretical and the numerical points of view, comes from the fact that the definition of weak solution is not unique: according to the theory developed by Dal Maso, LeFloch, and Murat in 1995, it depends on the choice of a family of paths. A general strategy is proposed here in which WENO operators are used to reconstruct not only the fluxes but also the nonconservative products of the system. Moreover, if a Roe linearization is available, the nonconservative products can be computed through matrix-vector operations instead of path integrals. The methods are extended to problems with source terms, and two different strategies are introduced to obtain well-balanced schemes. These numerical schemes are then applied to the two-layer shallow water equations in one and two space dimensions to obtain high-order methods that preserve water-at-rest steady states.
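A minimal sketch of the classical fifth-order WENO reconstruction operator underlying such schemes (standard Jiang-Shu smoothness indicators and weights on a scalar periodic problem; the paper's application of these operators to nonconservative products is not reproduced):

```python
import numpy as np

def weno5_reconstruct(v):
    """Left-biased fifth-order WENO value at x_{i+1/2} from cell averages."""
    eps = 1e-6
    vm2, vm1, v0, vp1, vp2 = (np.roll(v, 2), np.roll(v, 1), v,
                              np.roll(v, -1), np.roll(v, -2))
    # Three third-order candidate reconstructions on sub-stencils.
    p0 = (2*vm2 - 7*vm1 + 11*v0) / 6
    p1 = (-vm1 + 5*v0 + 2*vp1) / 6
    p2 = (2*v0 + 5*vp1 - vp2) / 6
    # Jiang-Shu smoothness indicators.
    b0 = 13/12*(vm2 - 2*vm1 + v0)**2 + 1/4*(vm2 - 4*vm1 + 3*v0)**2
    b1 = 13/12*(vm1 - 2*v0 + vp1)**2 + 1/4*(vm1 - vp1)**2
    b2 = 13/12*(v0 - 2*vp1 + vp2)**2 + 1/4*(3*v0 - 4*vp1 + vp2)**2
    # Nonlinear weights built from the ideal weights (1/10, 6/10, 3/10).
    a0, a1, a2 = 0.1/(eps + b0)**2, 0.6/(eps + b1)**2, 0.3/(eps + b2)**2
    return (a0*p0 + a1*p1 + a2*p2) / (a0 + a1 + a2)

h = 2*np.pi/64
x = np.arange(64) * h
vbar = np.sin(x) * (2*np.sin(h/2)/h)          # exact cell averages of sin
print(np.abs(weno5_reconstruct(vbar) - np.sin(x + h/2)).max())
```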
In the field of autonomous driving research, the use of immersive virtual reality (VR) techniques is widespread, enabling a variety of studies under safe and controlled conditions. However, this methodology is only valid and consistent if the conduct of participants in the simulated setting mirrors their actions in an actual environment. In this paper, we present a first, innovative approach to evaluating what we term the behavioural gap, a concept that captures the disparity in a participant's conduct when engaging in a VR experiment compared to an equivalent real-world situation. To this end, we developed a digital twin of a pre-existing crosswalk and carried out a field experiment (N=18) to investigate pedestrian-autonomous vehicle interaction in both real and simulated driving conditions. In the experiment, pedestrians attempt to cross the road in the presence of different driving styles and an external Human-Machine Interface (eHMI). By combining survey-based and behavioural analysis methodologies, we develop a quantitative approach to empirically assessing the behavioural gap as a mechanism to validate data obtained from real subjects interacting in a simulated VR-based environment. Results show that participants are more cautious and curious in VR, affecting their speed and decisions, and that VR interfaces significantly influence their actions.
We propose a new class of models for variable clustering called Asymptotic Independent block (AI-block) models, which define population-level clusters based on the independence of the maxima of a multivariate stationary mixing random process among clusters. This class of models is identifiable, meaning that there exists a maximal element under a partial order between partitions, which allows for statistical inference. We also present an algorithm, depending on a tuning parameter, that recovers the clusters of variables without specifying the number of clusters \emph{a priori}. Our work provides theoretical insight into the consistency of this algorithm, demonstrating that under certain conditions it can effectively identify clusters in the data with a computational complexity that is polynomial in the dimension. A data-driven selection method for the tuning parameter is also proposed. To further illustrate the significance of our work, we apply our method to real datasets from neuroscience and environmental science. These applications highlight the potential and versatility of the proposed approach.
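A hedged sketch in the spirit of the algorithm described above, with assumed details: variables are grouped by thresholding an empirical pairwise tail-dependence coefficient at a tuning parameter $\tau$ and taking connected components. The coefficient, the value $\tau = 0.2$, and the simulated example are illustrative choices, not the paper's estimator:

```python
import numpy as np

def chi_matrix(X, q=0.95):
    """Empirical pairwise tail dependence chi[i, j] ~ P(X_i extreme | X_j extreme)."""
    n, d = X.shape
    U = np.argsort(np.argsort(X, axis=0), axis=0) / (n + 1.0)  # ranks in (0, 1)
    E = (U > q).astype(float)                                  # exceedance indicators
    counts = E.sum(axis=0)
    return (E.T @ E) / np.maximum(counts, 1.0)

def cluster_by_threshold(chi, tau):
    """Connected components of the graph {chi > tau} via depth-first search."""
    d = chi.shape[0]
    adj = np.maximum(chi, chi.T) > tau
    labels, cur = -np.ones(d, dtype=int), 0
    for s in range(d):
        if labels[s] >= 0:
            continue
        stack = [s]
        while stack:
            i = stack.pop()
            if labels[i] >= 0:
                continue
            labels[i] = cur
            stack.extend(np.flatnonzero(adj[i] & (labels < 0)))
        cur += 1
    return labels

# Two blocks of three variables, each sharing a heavy-tailed factor.
rng = np.random.default_rng(1)
n = 20000
Z1, Z2 = rng.pareto(2, n), rng.pareto(2, n)
X = np.column_stack([Z1 + rng.pareto(2, n) for _ in range(3)]
                    + [Z2 + rng.pareto(2, n) for _ in range(3)])
print(cluster_by_threshold(chi_matrix(X), tau=0.2))  # the two blocks should separate
```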
Highly resolved finite element simulations of a laser beam welding process are considered. The thermomechanical behavior of this process is modeled by a set of thermoelasticity equations, resulting in a nonlinear, nonsymmetric saddle point system. Newton's method is used to solve the nonlinear problem, and suitable domain decomposition preconditioners are applied to accelerate the convergence of the iterative method used to solve all linearized systems. Since a one-level Schwarz preconditioner is in general not scalable, a second level has to be added. Therefore, the construction and numerical analysis of a monolithic, two-level overlapping Schwarz approach with the GDSW (Generalized Dryja-Smith-Widlund) coarse space, and an economic variant thereof, are presented here.
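A minimal sketch of why the second level matters, on a 1D Poisson toy problem rather than the paper's thermoelasticity system: the hat-function coarse space below is a simple stand-in for the GDSW coarse space, and the condition number of the preconditioned operator illustrates the scalability gap between one and two levels:

```python
import numpy as np

def poisson1d(n):
    """Tridiagonal stiffness matrix for -u'' on n interior nodes."""
    return (np.diag(2.0 * np.ones(n))
            - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1))

def schwarz_inverse(A, n, nsub, overlap=2, coarse=True):
    """Additive Schwarz: sum of overlapping local inverses, optionally
    plus a coarse correction on piecewise-linear hat functions."""
    Minv = np.zeros_like(A)
    size = n // nsub
    for s in range(nsub):
        lo, hi = max(0, s * size - overlap), min(n, (s + 1) * size + overlap)
        idx = np.arange(lo, hi)
        Minv[np.ix_(idx, idx)] += np.linalg.inv(A[np.ix_(idx, idx)])
    if coarse:
        xs = np.linspace(0.0, 1.0, n + 2)[1:-1]          # interior mesh nodes
        nodes = np.linspace(0.0, 1.0, nsub + 1)[1:-1]    # interior coarse nodes
        Z = np.maximum(0.0, 1.0 - np.abs(xs[:, None] - nodes[None, :]) * nsub)
        Minv += Z @ np.linalg.inv(Z.T @ A @ Z) @ Z.T
    return Minv

n = 512
A = poisson1d(n)
for nsub in (4, 8, 16, 32):
    for coarse in (False, True):
        ev = np.real(np.linalg.eigvals(schwarz_inverse(A, n, nsub, coarse=coarse) @ A))
        print(f"subdomains={nsub:2d} coarse={coarse!s:5} cond~{ev.max() / ev.min():9.1f}")
```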
Efficiently enumerating all the extreme points of a polytope described by a system of linear inequalities is a well-known and challenging problem. We consider a special case and present an algorithm that enumerates all the extreme points of a bisubmodular polyhedron in $\mathcal{O}(n^4|V|)$ time and $\mathcal{O}(n^2)$ space, where $n$ is the dimension of the underlying space and $V$ is the set of outputs. We use reverse search and the signed poset associated with extreme points to avoid redundant search. Our algorithm generalizes the enumeration of all the extreme points of a base polyhedron, which encompasses several combinatorial enumeration problems.
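The base-polyhedron special case mentioned in the last sentence admits a short brute-force illustration via Edmonds' greedy rule, under which each permutation of the ground set yields an extreme point; this exponential sweep over permutations is exactly the redundancy the reverse-search approach avoids:

```python
from itertools import permutations

def base_polyhedron_extreme_points(f, ground):
    """Brute-force enumeration via Edmonds' greedy rule: for each
    permutation, x[e] = f(S_prev + {e}) - f(S_prev) is an extreme point."""
    points = set()
    for order in permutations(ground):
        x, prev, S = {}, 0.0, set()
        for e in order:
            S.add(e)
            val = f(frozenset(S))
            x[e] = val - prev
            prev = val
        points.add(tuple(round(x[e], 10) for e in ground))
    return points

# Example: the submodular function f(S) = sqrt(|S|) on a 3-element ground set.
f = lambda S: len(S) ** 0.5
for p in sorted(base_polyhedron_extreme_points(f, (0, 1, 2))):
    print(p)
```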
Threshold selection is a fundamental problem in any threshold-based extreme value analysis. While models are asymptotically motivated, selecting an appropriate threshold for finite samples is difficult and, with standard methods, highly subjective. Inference for high quantiles can also be highly sensitive to the choice of threshold. Too low a threshold choice leads to bias in the fit of the extreme value model, while too high a choice leads to unnecessary additional uncertainty in the estimation of model parameters. We develop a novel methodology for automated threshold selection that directly tackles this bias-variance trade-off. We also develop a method to account for the uncertainty in the threshold estimation and propagate this uncertainty through to high quantile inference. Through a simulation study, we demonstrate the effectiveness of our method for threshold selection and subsequent extreme quantile estimation, relative to the leading existing methods, and show that the method's effectiveness is not sensitive to the tuning parameters. We apply our method to the well-known, troublesome example of the River Nidd dataset.
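A minimal sketch of the threshold sensitivity being addressed (a standard generalized Pareto fit over candidate thresholds, not the paper's automated method): the shape estimate and the implied 0.999 quantile can vary appreciably with the chosen threshold:

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(0)
x = rng.standard_t(df=4, size=5000)                # heavy-tailed sample
p = 0.999                                          # target high quantile
for q in (0.80, 0.90, 0.95, 0.99):
    u = np.quantile(x, q)
    exc = x[x > u] - u                             # threshold exceedances
    xi, _, sigma = genpareto.fit(exc, floc=0)      # MLE with location fixed at 0
    zeta = exc.size / x.size                       # empirical exceedance rate
    qp = u + sigma / xi * (((1 - p) / zeta) ** (-xi) - 1.0)
    print(f"u={u:6.3f}  xi={xi:5.2f}  sigma={sigma:5.2f}  q_0.999={qp:6.2f}")
```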
The use of operator-splitting methods to solve differential equations is widespread, but the methods are generally only defined for a given number of operators, most commonly two. Most operator-splitting methods do not generalize to problems with $N$ operators for arbitrary $N$. In fact, only two known methods can be applied to general $N$-split problems: the first-order Lie--Trotter (or Godunov) method and the second-order Strang (or Strang--Marchuk) method. In this paper, we derive two second-order operator-splitting methods that also generalize to $N$-split problems. These methods are complex-valued but have coefficients with positive real parts, giving them favorable stability properties, and they require few sub-integrations per stage, making them computationally inexpensive. They can also be used as base methods from which to construct higher-order $N$-split operator-splitting methods with positive real parts. We verify the orders of accuracy of these new $N$-split methods and demonstrate their favorable efficiency properties against well-known real-valued operator-splitting methods on both real-valued and complex-valued differential equations.
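A minimal sketch of the two known $N$-split baselines named above, applied to a linear test problem $u' = (A + B + C)u$ with exact sub-flows ($N = 3$ here); halving $h$ should reduce the Lie--Trotter error by about a factor of two and the Strang error by about four:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
A, B, C = (rng.standard_normal((4, 4)) for _ in range(3))   # N = 3 operators
u0, T = rng.standard_normal(4), 1.0
exact = expm(T * (A + B + C)) @ u0

def lie_step(h):
    """First-order Lie--Trotter: compose the three exact sub-flows once."""
    return expm(h * C) @ expm(h * B) @ expm(h * A)

def strang_step(h):
    """Second-order Strang: palindromic composition of half-steps."""
    Ea, Eb, Ec = (expm(0.5 * h * M) for M in (A, B, C))
    return Ea @ Eb @ Ec @ Ec @ Eb @ Ea

for n in (16, 32, 64):
    h = T / n
    for name, step in (("Lie", lie_step), ("Strang", strang_step)):
        u = np.linalg.matrix_power(step(h), n) @ u0
        print(f"n={n:3d}  {name:6s} error = {np.linalg.norm(u - exact):.2e}")
```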