To perform isogeometric analysis with increased smoothness on complex domains, trimming, variational coupling, or unstructured spline methods can be used. The latter two classes of methods require a multi-patch segmentation of the domain and provide continuous bases along patch interfaces. In the context of shell modeling, variational methods are widely used, whereas the application of unstructured spline methods to shell problems is rather scarce. In this paper, we therefore provide a qualitative and a quantitative comparison of a selection of unstructured spline constructions, in particular the D-Patch, Almost-$C^1$, Analysis-Suitable $G^1$, and Approximate $C^1$ constructions. Using this comparison, we aim to provide insight into the selection of methods for practical problems, as well as directions for future research. In the qualitative comparison, the properties of each method are evaluated and compared. In the quantitative comparison, a selection of numerical examples is used to highlight the advantages and disadvantages of each method. In the latter, a comparison with weak coupling methods such as Nitsche's method or penalty methods is made as well. In brief, it is concluded that the Approximate $C^1$ and Analysis-Suitable $G^1$ constructions converge optimally in the analysis of a bi-harmonic problem, without the need for special refinement procedures. Furthermore, these methods provide accurate stress fields. On the other hand, the Almost-$C^1$ and D-Patch constructions allow a relatively simple construction on complex geometries. The Almost-$C^1$ method does not have limitations on the valence of boundary vertices, unlike the D-Patch, but is only applicable to biquadratic local bases. Following from these conclusions, future research directions are proposed, for example towards making the Approximate $C^1$ and Analysis-Suitable $G^1$ constructions applicable to more complex geometries.
We present algorithms and a C code to decide quantum contextuality and evaluate the contextuality degree (a way to quantify contextuality) for a variety of point-line geometries located in binary symplectic polar spaces of small rank. With this code we were not only able to recover, in a more efficient way, all the results of a recent paper by de Boutray et al (J. Phys. A: Math. Theor. 55 475301, 2022), but also to arrive at a number of new noteworthy results. The paper first describes the algorithms and the C code. Then it illustrates their power on a number of subspaces of symplectic polar spaces whose rank ranges from two to seven. The most interesting new results include: (i) non-contextuality of configurations whose contexts are subspaces of dimension two and higher, (ii) non-existence of negative subspaces of dimension three and higher, (iii) considerably improved bounds for the contextuality degree of both elliptic and hyperbolic quadrics of rank four, as well as for a particular subgeometry of the three-qubit space whose contexts are the lines of this space, (iv) a proof of the non-contextuality of perpsets and, last but not least, (v) the contextual nature of a distinguished subgeometry of a multi-qubit doily, called a two-spread, and the computation of its contextuality degree.
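For illustration, the decision problem underlying these computations can be phrased over GF(2): a geometry whose contexts are sets of mutually commuting multi-qubit observables multiplying to $\pm I$ admits a noncontextual classical $\pm 1$ assignment exactly when the linear system "incidence matrix times assignment equals sign vector" is solvable over GF(2), and the contextuality degree is the minimum number of violated contexts over all assignments. The Python sketch below (not the paper's C code; the function names and the brute-force degree search are illustrative assumptions) demonstrates this on the Mermin-Peres magic square.

```python
import numpy as np
from itertools import product

def gf2_rank(M):
    """Rank of a binary matrix over GF(2) (Gaussian elimination with XOR)."""
    M = M.copy() % 2
    rows, cols = M.shape
    rank = 0
    for c in range(cols):
        pivot = next((r for r in range(rank, rows) if M[r, c]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]
        for r in range(rows):
            if r != rank and M[r, c]:
                M[r] ^= M[rank]
        rank += 1
    return rank

def is_contextual(incidence, signs):
    """Contextual iff  incidence @ x = signs  has no solution over GF(2)."""
    A = np.array(incidence, dtype=np.uint8)
    s = np.array(signs, dtype=np.uint8)
    return gf2_rank(A) != gf2_rank(np.column_stack([A, s]))

def contextuality_degree(incidence, signs):
    """Minimum number of violated contexts over all +/-1 assignments
    (brute force over the observables, so only for small geometries)."""
    A = np.array(incidence, dtype=np.uint8)
    s = np.array(signs, dtype=np.uint8)
    return min(int(((A @ np.array(x, dtype=np.uint8) + s) % 2).sum())
               for x in product((0, 1), repeat=A.shape[1]))

# Mermin-Peres magic square: 9 observables, 6 contexts (3 rows + 3 columns),
# exactly one context multiplying to -I.
incidence = [[1, 1, 1, 0, 0, 0, 0, 0, 0],
             [0, 0, 0, 1, 1, 1, 0, 0, 0],
             [0, 0, 0, 0, 0, 0, 1, 1, 1],
             [1, 0, 0, 1, 0, 0, 1, 0, 0],
             [0, 1, 0, 0, 1, 0, 0, 1, 0],
             [0, 0, 1, 0, 0, 1, 0, 0, 1]]
signs = [0, 0, 0, 0, 0, 1]
print(is_contextual(incidence, signs), contextuality_degree(incidence, signs))  # True 1
```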
Discovery of mathematical descriptors of physical phenomena from observational and simulated data, as opposed to from first principles, is a rapidly evolving research area. Two factors, time-dependence of the inputs and hidden translation invariance, are known to complicate this task. To ameliorate these challenges, we combine Lagrangian dynamic mode decomposition with a locally time-invariant approximation of the Koopman operator. The former component of our method yields the best linear estimator of the system's dynamics, while the latter deals with the system's nonlinearity and non-autonomous behavior. We provide theoretical estimates (bounds) of prediction accuracy and perturbation error to guide the selection of both the rank truncation and the temporal discretization. We demonstrate the performance of our approach on several non-autonomous problems, including the two-dimensional Navier-Stokes equations.
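As a reference point for the linear-estimator component, below is a minimal Python sketch of standard rank-truncated (exact) dynamic mode decomposition. It is not the paper's Lagrangian, locally time-invariant variant, which would instead apply such a fit in a moving frame and over local time windows; the function name and interface are illustrative.

```python
import numpy as np

def dmd(X, Xprime, rank):
    """Rank-truncated exact DMD.

    X, Xprime : snapshot matrices (states x snapshots), with Xprime equal to
                X advanced by one time step.  Returns the DMD eigenvalues,
                modes, the reduced operator A_tilde (the best rank-r linear
                estimator of the one-step dynamics in the leading POD
                subspace), and the POD basis Ur.
    """
    U, S, Vh = np.linalg.svd(X, full_matrices=False)
    Ur, Sr, Vr = U[:, :rank], S[:rank], Vh[:rank].conj().T
    A_tilde = Ur.conj().T @ Xprime @ Vr / Sr          # r x r reduced operator
    eigvals, W = np.linalg.eig(A_tilde)
    modes = Xprime @ Vr / Sr @ W / eigvals            # exact DMD modes
    return eigvals, modes, A_tilde, Ur

# One-step prediction for a full state vector x:
#   x_next ≈ Ur @ (A_tilde @ (Ur.conj().T @ x))
```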
Confounder selection, namely choosing a set of covariates to control for confounding between a treatment and an outcome, is arguably the most important step in the design of observational studies. Previous methods, such as Pearl's celebrated back-door criterion, typically require pre-specifying a causal graph, which can often be difficult in practice. We propose an interactive procedure for confounder selection that does not require pre-specifying the graph or the set of observed variables. This procedure iteratively expands the causal graph by finding what we call "primary adjustment sets" for a pair of possibly confounded variables. This can be viewed as inverting a sequence of latent projections of the underlying causal graph. Structural information in the form of primary adjustment sets is elicited from the user, bit by bit, until either a set of covariates is found to control for confounding or it can be determined that no such set exists. Other information, such as the causal relations between confounders, is not required by the procedure. We show that if the user correctly specifies the primary adjustment sets at every step, our procedure is both sound and complete.
Meta-analysis is the aggregation of data from multiple studies to identify patterns across a broad range of studies on a particular subject. It is becoming increasingly useful for summarizing the studies being done across various fields. In meta-analysis, it is common to use the mean and standard deviation from each study as the basis for comparison. While many studies report the mean and standard deviation as their summary statistics, some report other values, including the minimum, maximum, median, and first and third quartiles. Often, the quartiles and median are reported when the data are skewed and do not follow a normal distribution. To correctly summarize the data and draw conclusions from multiple studies, it is necessary to estimate the mean and standard deviation of each study while accounting for the variation and skewness within that study. In past literature, methods have been proposed to estimate the mean and standard deviation, but they do not accommodate negative values. Data that include negative values are common, and accommodating them would increase the accuracy and impact of the meta-analysis. We propose a method that implements a generalized Box-Cox transformation to estimate the mean and standard deviation while accounting for such negative values and maintaining comparable accuracy.
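A minimal Python sketch of the general idea follows: shift the reported quartiles to a positive range, apply a Box-Cox transform, estimate mean and standard deviation on the approximately normal transformed scale, and back-transform by sampling. The shift constant, the fixed Box-Cox exponent, and the use of the three-quartile formulas of Wan et al. (2014) are illustrative assumptions, not the calibrated procedure proposed in the paper.

```python
import numpy as np
from scipy.stats import norm

def boxcox(x, lam):
    """Box-Cox transform for positive x."""
    return np.log(x) if lam == 0 else (x ** lam - 1.0) / lam

def inv_boxcox(y, lam):
    if lam == 0:
        return np.exp(y)
    base = np.maximum(lam * y + 1.0, 1e-12)   # crude guard against out-of-domain draws
    return base ** (1.0 / lam)

def estimate_mean_sd(q1, med, q3, n, lam=0.5, seed=0):
    """Estimate mean/SD from reported quartiles of skewed data that may
    include negative values: shift to a positive range, transform,
    estimate on the normal scale, and back-transform by sampling."""
    shift = max(0.0, -q1) + 1.0                      # make all quartiles positive
    tq1, tmed, tq3 = (boxcox(q + shift, lam) for q in (q1, med, q3))

    # Three-quartile estimators on the (roughly normal) transformed scale
    # (Wan et al., 2014).
    z = norm.ppf((0.75 * n - 0.125) / (n + 0.25))
    mu_t = (tq1 + tmed + tq3) / 3.0
    sd_t = (tq3 - tq1) / (2.0 * z)

    # Back-transform by Monte Carlo on the transformed scale.
    draws = np.random.default_rng(seed).normal(mu_t, sd_t, size=100_000)
    sample = inv_boxcox(draws, lam) - shift
    return float(sample.mean()), float(sample.std(ddof=1))

# Example: a skewed study reporting q1 = -1.2, median = 0.4, q3 = 3.1, n = 85.
print(estimate_mean_sd(-1.2, 0.4, 3.1, 85))
```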
We develop a provably efficient importance sampling scheme that estimates exit probabilities of solutions to small-noise stochastic reaction-diffusion equations from scaled neighborhoods of a stable equilibrium. The moderate deviation scaling allows for a local approximation of the nonlinear dynamics by their linearized version. In addition, we identify a finite-dimensional subspace where exits take place with high probability. Using stochastic control and variational methods we show that our scheme performs well both in the zero noise limit and pre-asymptotically. Simulation studies for stochastically perturbed bistable dynamics illustrate the theoretical results.
Symmetry is a cornerstone of much of mathematics, and many probability distributions possess symmetries characterized by their invariance to a collection of group actions. Thus, many mathematical and statistical methods rely on such symmetry holding and ostensibly fail if symmetry is broken. This work considers under what conditions a sequence of probability measures asymptotically gains such symmetry or invariance to a collection of group actions. Considering the many symmetries of the Gaussian distribution, this work effectively proposes a non-parametric type of central limit theorem. That is, a Lipschitz function of a high dimensional random vector will be asymptotically invariant to the actions of certain compact topological groups. Applications of this include a partial law of the iterated logarithm for uniformly random points in an $\ell_p^n$-ball and an asymptotic equivalence between classical parametric statistical tests and their randomization counterparts even when invariance assumptions are violated.
Models of complex technological systems inherently contain interactions and dependencies among their input variables that affect their joint influence on the output. Such models are often computationally expensive, and few sensitivity analysis methods can effectively process such complexities. Moreover, the sensitivity analysis field as a whole pays limited attention to the nature of interaction effects, whose understanding can prove critical for the design of safe and reliable systems. In this paper, we introduce and extensively test a simple binning approach for computing sensitivity indices and demonstrate how complementing it with a smart visualization method, simulation decomposition (SimDec), can yield important insights into the behavior of complex engineering models. The simple binning approach computes first- and second-order effects as well as a combined sensitivity index, and is considerably more computationally efficient than Sobol' indices. The resulting sensitivity analysis framework provides an efficient and intuitive way to analyze the behavior of complex systems containing interactions and dependencies.
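The core of a binning approach can be illustrated in a few lines: the first-order effect of an input is the variance of the bin-wise conditional means of the output divided by the total output variance, and a second-order effect subtracts the two first-order effects from the closed effect computed on a joint grid of bins. The Python sketch below uses quantile bins and illustrative function names; the paper's combined index and the SimDec visualization are not reproduced here.

```python
import numpy as np

def first_order_binning(x, y, bins=20):
    """First-order effect of x on y: Var_bins(E[y | x in bin]) / Var(y)."""
    edges = np.quantile(x, np.linspace(0, 1, bins + 1))
    idx = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, bins - 1)
    means, weights = [], []
    for b in range(bins):
        mask = idx == b
        if mask.any():
            means.append(y[mask].mean())
            weights.append(mask.mean())
    means, weights = np.array(means), np.array(weights)
    grand = np.sum(weights * means)
    return float(np.sum(weights * (means - grand) ** 2) / np.var(y))

def second_order_binning(xi, xj, y, bins=10):
    """Interaction effect of (xi, xj): closed joint effect minus the two
    first-order effects, with all conditional means taken over quantile bins."""
    ei = np.quantile(xi, np.linspace(0, 1, bins + 1))
    ej = np.quantile(xj, np.linspace(0, 1, bins + 1))
    bi = np.clip(np.searchsorted(ei, xi, side="right") - 1, 0, bins - 1)
    bj = np.clip(np.searchsorted(ej, xj, side="right") - 1, 0, bins - 1)
    joint = bi * bins + bj
    grand = y.mean()
    var_joint = sum((joint == b).mean() * (y[joint == b].mean() - grand) ** 2
                    for b in np.unique(joint))
    closed = var_joint / np.var(y)
    return closed - first_order_binning(xi, y, bins) - first_order_binning(xj, y, bins)

# Toy model with an interaction between x1 and x2.
rng = np.random.default_rng(0)
x1, x2 = rng.uniform(-np.pi, np.pi, size=(2, 100_000))
y = np.sin(x1) + 0.3 * x1 * x2
print(first_order_binning(x1, y), second_order_binning(x1, x2, y))
```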
Differential geometric approaches are ubiquitous in several fields of mathematics, physics, and engineering, and their discretizations enable the development of network-based mathematical and computational frameworks, which are essential for large-scale data science. The Forman-Ricci curvature (FRC) - a statistical measure based on Riemannian geometry and designed for networks - is known for its high capacity for extracting geometric information from complex networks. However, extracting information from dense networks is still challenging due to the combinatorial explosion of high-order network structures. Motivated by this challenge, we develop a set-theoretic representation theory for high-order network cells and FRC, as well as their associated concepts and properties, which together provide an alternative and efficient formulation for computing high-order FRC in complex networks. We provide pseudo-code, a software implementation coined FastForman, and a benchmark comparison with alternative implementations. Crucially, our representation theory reveals previous computational bottlenecks and accelerates the computation of FRC. As a consequence, our findings open new research possibilities for complex systems where higher-order geometric computations are required.
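For the lowest-order case, the FRC of an edge in an unweighted network reduces to a degree count, optionally augmented with the triangles (2-cells) that contain the edge; higher-order cells, which are the focus of the representation theory above, extend this sum further. A minimal sketch (using networkx; the function name is illustrative, not the FastForman API):

```python
import networkx as nx

def forman_ricci_edge(G, u, v, augmented=True):
    """Forman-Ricci curvature of edge (u, v) in an unweighted graph.

    Combinatorial form:        F(e)  = 4 - deg(u) - deg(v)
    Triangle-augmented form:   F#(e) = 4 - deg(u) - deg(v) + 3 * #triangles(e)
    Only 2-cells (triangles) are included here; higher-order cells are omitted
    in this sketch.
    """
    f = 4 - G.degree(u) - G.degree(v)
    if augmented:
        f += 3 * len(set(G.neighbors(u)) & set(G.neighbors(v)))
    return f

G = nx.karate_club_graph()
curvature = {(u, v): forman_ricci_edge(G, u, v) for u, v in G.edges()}
```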
Since its introduction in 2019, the whole end-to-end neural diarization (EEND) line of work has addressed speaker diarization as a frame-wise multi-label classification problem with permutation-invariant training. Despite EEND showing great promise, a few recent works took a step back and studied the possible combination of (local) supervised EEND diarization with (global) unsupervised clustering. Yet, these hybrid contributions did not question the original multi-label formulation. We propose to switch from multi-label classification (where any two speakers can be active at the same time) to powerset multi-class classification (where dedicated classes are assigned to pairs of overlapping speakers). Through extensive experiments on 9 different benchmarks, we show that this formulation leads to significantly better performance (mostly on overlapping speech) and robustness to domain mismatch, while eliminating the detection threshold hyperparameter that is critical to the multi-label formulation.
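A minimal Python sketch of the label mapping implied by this formulation: assuming at most two simultaneous speakers, every frame is mapped from its multi-label vector to a single powerset class (silence, one class per speaker, one class per speaker pair). The class ordering and the handling of frames with more than two active speakers (omitted here) are illustrative assumptions, not the paper's recipe.

```python
from itertools import combinations

def powerset_classes(num_speakers=3, max_overlap=2):
    """Silence, every single speaker, and every pair of overlapping speakers."""
    classes = [frozenset()]
    for k in range(1, max_overlap + 1):
        classes += [frozenset(c) for c in combinations(range(num_speakers), k)]
    return classes

def multilabel_to_powerset(frame_labels, classes):
    """Map one frame's 0/1-per-speaker vector to its powerset class index."""
    active = frozenset(i for i, on in enumerate(frame_labels) if on)
    return classes.index(active)   # raises if more than max_overlap speakers are active

classes = powerset_classes()                         # 7 classes for 3 speakers
print(multilabel_to_powerset([1, 0, 1], classes))    # class for speakers {0, 2}
```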
We derive information-theoretic generalization bounds for supervised learning algorithms based on the information contained in predictions rather than in the output of the training algorithm. These bounds improve over the existing information-theoretic bounds, are applicable to a wider range of algorithms, and address two key challenges: (a) they give meaningful results for deterministic algorithms and (b) they are significantly easier to estimate. We show experimentally that the proposed bounds closely follow the generalization gap in practical scenarios for deep learning.