Base polytopes of polymatroids, also known as generalized permutohedra, are polytopes whose edges are parallel to vectors of the form $\mathbf{e}_i - \mathbf{e}_j$. We consider the following computational problem: given two vertices of a generalized permutohedron $P$, find a shortest path between them on the skeleton of $P$. This captures many known flip distance problems, such as computing the minimum number of exchanges between two spanning trees of a graph, the rotation distance between binary search trees, and the flip distance between acyclic orientations of a graph or between rectangulations of a square. We prove that this problem is $NP$-hard, even when restricted to very simple polymatroids in $\mathbb{R}^n$ defined by $O(n)$ inequalities. Assuming $P \not= NP$, this rules out the existence of an efficient simplex pivoting rule that performs a minimum number of nondegenerate pivoting steps to an optimal solution of a linear program, even when the latter defines a polymatroid. We also prove that the shortest path problem is inapproximable when the polymatroid is specified via an evaluation oracle for a corresponding submodular function, strengthening a recent result by Ito et al. (ICALP'23). More precisely, we prove the $APX$-hardness of the shortest path problem when the polymatroid is a hypergraphic polytope, whose vertices are in bijection with the acyclic orientations of a given hypergraph. The shortest path problem then amounts to computing the flip distance between two acyclic orientations of the hypergraph. On the positive side, we provide a polynomial-time algorithm that approximates this flip distance within a factor equal to the maximum codegree of the hypergraph. Our result implies an exact polynomial-time algorithm for the flip distance between two acyclic orientations of any linear hypergraph.
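As a toy illustration of the graph case, the following brute-force sketch (our own code, not an algorithm from the paper, and exponential in general) computes the exact flip distance between two acyclic orientations of a small graph, where two orientations are adjacent when they differ in the direction of a single edge:

```python
# Brute-force flip distance between acyclic orientations of a tiny graph,
# i.e. a shortest path on the skeleton of the graphical zonotope.
from collections import deque

def is_acyclic(n, arcs):
    # Kahn's algorithm: acyclic iff all n vertices can be removed in
    # topological order.
    indeg = [0] * n
    for u, v in arcs:
        indeg[v] += 1
    stack = [v for v in range(n) if indeg[v] == 0]
    removed = 0
    while stack:
        u = stack.pop()
        removed += 1
        for a, b in arcs:
            if a == u:
                indeg[b] -= 1
                if indeg[b] == 0:
                    stack.append(b)
    return removed == n

def flip_distance(n, edges, src, dst):
    # Orientations are bit tuples: bit i = 1 reverses edges[i].
    def orient(bits):
        return [(v, u) if b else (u, v) for (u, v), b in zip(edges, bits)]
    dist, queue = {src: 0}, deque([src])
    while queue:
        cur = queue.popleft()
        if cur == dst:
            return dist[cur]
        for i in range(len(edges)):
            nxt = cur[:i] + (1 - cur[i],) + cur[i + 1:]
            if nxt not in dist and is_acyclic(n, orient(nxt)):
                dist[nxt] = dist[cur] + 1
                queue.append(nxt)
    return None

# Triangle: reversing all three edges needs 3 flips, each step acyclic.
print(flip_distance(3, [(0, 1), (1, 2), (0, 2)], (0, 0, 0), (1, 1, 1)))
```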
We present an isogeometric collocation method for solving the biharmonic equation over planar bilinearly parameterized multi-patch domains. The approach employs the globally $C^4$-smooth isogeometric spline space [34] to approximate the solution of the considered partial differential equation, and studies two different choices of collocation points, namely the Greville points and the so-called superconvergent points. Several examples demonstrate the potential of our collocation method for solving the biharmonic equation over planar multi-patch domains, and numerically study the convergence behavior of the two types of collocation points with respect to the $L^2$-norm as well as to equivalents of the $H^s$-seminorms for $1 \leq s \leq 4$. In the studied case of spline degree $p=9$, the numerical results indicate for the Greville points a convergence of order $\mathcal{O}(h^{p-3})$ independent of the considered (semi)norm, and for the superconvergent points an improved convergence of order $\mathcal{O}(h^{p-2})$ for all (semi)norms except the equivalent of the $H^4$-seminorm, for which the order $\mathcal{O}(h^{p-3})$ is already optimal.
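For concreteness, the Greville points of a degree-$p$ spline space have the closed form $\xi_i = (t_{i+1} + \dots + t_{i+p})/p$; a minimal sketch (our own helper, with a made-up example knot vector):

```python
# Greville abscissae of a degree-p B-spline space, one of the two
# collocation-point choices discussed above (the superconvergent points
# have no similar closed form and are tabulated per degree).
import numpy as np

def greville(knots, p):
    """Greville points xi_i = (t_{i+1} + ... + t_{i+p}) / p."""
    knots = np.asarray(knots, dtype=float)
    n = len(knots) - p - 1          # number of B-spline basis functions
    return np.array([knots[i + 1 : i + p + 1].mean() for i in range(n)])

# Open knot vector for cubic splines (p = 3) on [0, 1]:
print(greville([0, 0, 0, 0, 0.5, 1, 1, 1, 1], p=3))
# -> [0.  0.1667  0.5  0.8333  1. ]
```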
A posteriori reduced-order models, e.g. proper orthogonal decomposition, are essential to affordably tackle realistic parametric problems. They rely on a reliable training set, that is, a family of full-order solutions (snapshots) representative of all possible outcomes of the parametric problem. Assembling such a rich collection of snapshots is, in many cases, not computationally viable. A data augmentation strategy, designed for parametric laminar incompressible flows, is proposed to enrich poorly populated training sets. The goal is to include in the new, artificial snapshots emerging features, not present in the original basis, that enhance the quality of the reduced-order solution. The proposed methodologies exploit basic physical principles, such as mass and momentum conservation, to construct physically relevant artificial snapshots at a fraction of the cost of additional full-order solutions. Interestingly, the numerical results show that the ideas exploiting only mass conservation (i.e., incompressibility) do not produce significant added value with respect to standard linear combinations of snapshots. Conversely, accounting for the linearized momentum balance via the Oseen equation does improve the quality of the resulting approximation and is therefore an effective data augmentation strategy in the framework of viscous incompressible laminar flows.
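For reference, the Oseen equations linearize the convective term around a given velocity field $\bar{\boldsymbol{u}}$ (standard form; the precise discrete setting in the paper may differ):
$$ -\nu \Delta \boldsymbol{u} + (\bar{\boldsymbol{u}} \cdot \nabla)\boldsymbol{u} + \nabla p = \boldsymbol{f}, \qquad \nabla \cdot \boldsymbol{u} = 0. $$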
Microring resonators (MRRs) are promising devices for time-delay photonic reservoir computing, but the impact of the different physical effects taking place in MRRs on reservoir computing performance is yet to be fully understood. We numerically analyze the impact of linear losses, as well as of the relaxation times of the thermo-optic and free-carrier effects, on the prediction error for the NARMA-10 time-series task. We demonstrate the existence of three regions, defined by the input power and the frequency detuning between the optical source and the microring resonance, that reveal the cavity transition from linear to nonlinear regimes. One of these regions offers very low time-series prediction error at relatively low input power and node count, while the other regions either lack nonlinearity or become unstable. This study provides insight into the design of the MRR and the optimization of its physical properties for improving the prediction performance of time-delay reservoir computing.
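For reference, the NARMA-10 target series has a standard definition; a sketch with the usual i.i.d. input $u_t \sim \mathcal{U}[0, 0.5]$ (the paper's exact preprocessing may differ):

```python
# Standard NARMA-10 benchmark series: the reservoir is trained to
# predict y from the input sequence u.
import numpy as np

def narma10(T, seed=0):
    rng = np.random.default_rng(seed)
    u = rng.uniform(0.0, 0.5, size=T)
    y = np.zeros(T)
    for t in range(9, T - 1):
        y[t + 1] = (0.3 * y[t]
                    + 0.05 * y[t] * y[t - 9 : t + 1].sum()  # order-10 memory
                    + 1.5 * u[t - 9] * u[t]
                    + 0.1)
    return u, y
```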
Many interesting physical problems described by systems of hyperbolic conservation laws are stiff, and thus impose a very small time-step because of the restrictive CFL stability condition. In this case, one can exploit the superior stability properties of implicit time integration, which allows the time-step to be chosen from accuracy requirements alone, avoiding the use of small time-steps. We discuss an efficient framework for devising high order implicit schemes for stiff hyperbolic systems without tailoring them to a specific problem. The nonlinearity of high order schemes, due to the space- and time-limiting procedures which control nonphysical oscillations, makes implicit time integration difficult, e.g.~because the discrete system is nonlinear even for linear problems. This nonlinearity of the scheme is circumvented as proposed in (Puppo et al., Comm.~Appl.~Math.~\& Comput., 2023) for scalar conservation laws, where a first order implicit predictor is computed to freeze the nonlinear coefficients of the essentially non-oscillatory space reconstruction, and also to assist limiting in time. In addition, we propose a novel conservative flux-centered a-posteriori time-limiting procedure that uses numerical entropy indicators to detect troubled cells. The numerical tests involve classical and artificially devised stiff problems based on the Euler system of gas dynamics.
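Schematically (our paraphrase of the predictor idea, not the authors' exact formulation), a first-order implicit predictor $u^{*}$ is obtained from a backward Euler step with a low order flux,
$$ u^{*} = u^{n} - \Delta t\, \partial_x F^{\mathrm{low}}(u^{*}), $$
and the nonlinear reconstruction weights are then evaluated at $u^{*}$ and frozen, so that the high order implicit update becomes linear in the unknown $u^{n+1}$ whenever the underlying problem is linear.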
As a crossover frontier of physics and mechanics, quantum computing is showing great potential in computational mechanics. However, quantum hardware noise remains a critical barrier to accurate simulation results, given the limitations of current hardware. In this paper, we integrate error-mitigated quantum computing into data-driven computational mechanics, employing the zero-noise extrapolation (ZNE) technique to improve the accuracy of quantum computing. Numerical examples, including a multiscale simulation of a composite L-shaped beam, are conducted with the quantum computer simulator Qpanda, and the results validate the effectiveness of the proposed method. We believe this work presents a promising step towards harnessing the power of quantum computing in computational mechanics.
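A minimal sketch of the ZNE idea (generic Richardson-style extrapolation; `run_at_noise_scale` is a hypothetical stand-in for executing a noise-amplified version of the circuit):

```python
# Zero-noise extrapolation: evaluate the same observable at amplified
# noise levels, then extrapolate the expectation value back to scale 0.
import numpy as np

def zne_estimate(run_at_noise_scale, scales=(1.0, 2.0, 3.0)):
    values = [run_at_noise_scale(s) for s in scales]
    # Fit a low-degree polynomial in the noise scale, evaluate at zero.
    coeffs = np.polyfit(scales, values, deg=len(scales) - 1)
    return np.polyval(coeffs, 0.0)
```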
Partitioned neural network functions are used to approximate the solution of partial differential equations. The problem domain is partitioned into non-overlapping subdomains, and the partitioned neural network functions are defined on these subdomains, each approximating the solution in its own subdomain. To obtain a convergent neural network solution, certain continuity conditions on the partitioned neural network functions across the subdomain interfaces need to be included in the loss function used to train the parameters of the neural network functions. In our work, by introducing suitable interface values, the loss function is reformulated into a sum of localized loss functions, and each localized loss function is used to train the corresponding local neural network parameters. In addition, to accelerate the convergence of the neural network solution, the localized loss function is enriched with an augmented Lagrangian term, in which the interface condition and the boundary condition are enforced as constraints on the local solutions by means of Lagrange multipliers. The local neural network parameters and Lagrange multipliers are then found by optimizing the localized loss function. To take advantage of the localized loss functions for parallel computation, an iterative algorithm is also proposed. The training performance and convergence of the proposed algorithms are studied numerically on various test examples.
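A conceptual sketch of one such localized loss in PyTorch (our own notation, not the paper's code: `u_k` is the local network on subdomain $k$, `g` the prescribed interface or boundary values, `lam` the Lagrange multipliers, `rho` the penalty weight):

```python
import torch

def localized_loss(u_k, pde_residual, x_int, x_iface, x_bdry,
                   g_iface, g_bdry, lam_iface, lam_bdry, rho=10.0):
    # PDE residual in the interior of subdomain k.
    loss = (pde_residual(u_k, x_int) ** 2).mean()
    # Augmented Lagrangian: multiplier term plus quadratic penalty,
    # enforcing interface values and boundary condition as constraints.
    for x, g, lam in ((x_iface, g_iface, lam_iface),
                      (x_bdry, g_bdry, lam_bdry)):
        c = u_k(x).squeeze() - g          # constraint violation
        loss = loss + (lam * c).mean() + 0.5 * rho * (c ** 2).mean()
    return loss

# Multipliers are updated by ascent steps: lam <- lam + rho * c.detach()
```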
Black-box variational inference is widely used in situations where there is no proof that its stochastic optimization succeeds. We suggest this is due to a theoretical gap in existing stochastic optimization proofs: namely the challenge of gradient estimators with unusual noise bounds and of a composite non-smooth objective. For dense Gaussian variational families, we observe that existing gradient estimators based on reparameterization satisfy a quadratic noise bound, and we give novel convergence guarantees for proximal and projected stochastic gradient descent using this bound. This provides rigorous guarantees that methods similar to those used in practice converge on realistic inference problems.
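As a rough sketch of the projected variant (our illustrative feasible set for the scale matrix; the paper's precise domain may differ), each step descends along the stochastic gradient and then projects back:

```python
# Projected SGD on the parameters (m, C) of a dense Gaussian family
# q = N(m, C C^T): descend, then project C onto lower-triangular
# matrices with diagonal bounded below by eps.
import numpy as np

def projected_sgd_step(m, C, grad_m, grad_C, step, eps=1e-3):
    m = m - step * grad_m
    C = C - step * grad_C
    C = np.tril(C)                               # zero upper triangle
    d = np.clip(np.diag(C), eps, None)           # keep diagonal >= eps
    C[np.diag_indices_from(C)] = d
    return m, C
```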
We obtain error approximation bounds between expected suprema of canonical processes that are generated by random vectors with independent coordinates and expected suprema of Gaussian processes. In particular, we obtain a sharper proximity estimate for Rademacher and Gaussian complexities. Our estimates are dimension-free, and depend only on the geometric parameters and the numerical complexity of the underlying index set.
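In standard notation (our gloss), the quantities being compared are
$$ \mathbb{E}\sup_{t \in T} \sum_{i=1}^{n} t_i X_i \qquad \text{and} \qquad \mathbb{E}\sup_{t \in T} \sum_{i=1}^{n} t_i g_i, $$
where the $X_i$ are independent centered random variables, the $g_i$ are i.i.d. standard Gaussians, and $T \subset \mathbb{R}^n$ is the index set; taking the $X_i$ to be Rademacher signs recovers the comparison of Rademacher and Gaussian complexities.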
We hypothesize that, due to the greedy nature of learning in multi-modal deep neural networks, these models tend to rely on just one modality while under-fitting the other modalities. Such behavior is counter-intuitive and hurts the models' generalization, as we observe empirically. To estimate a model's dependence on each modality, we compute the gain in accuracy when the model has access to that modality in addition to another one. We refer to this gain as the conditional utilization rate. In our experiments, we consistently observe an imbalance in conditional utilization rates between modalities, across multiple tasks and architectures. Since the conditional utilization rate cannot be computed efficiently during training, we introduce a proxy for it based on the pace at which the model learns from each modality, which we refer to as the conditional learning speed. We propose an algorithm to balance the conditional learning speeds between modalities during training and demonstrate that it indeed addresses the issue of greedy learning. The proposed algorithm improves the model's generalization on three datasets: Colored MNIST, Princeton ModelNet40, and NVIDIA Dynamic Hand Gesture.
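In pseudocode, the conditional utilization rate of modality `m1` given modality `m2` reduces to an accuracy difference (a sketch; `accuracy` is a hypothetical evaluation helper that masks out unlisted modalities):

```python
def conditional_utilization_rate(accuracy, model, m1, m2, data):
    # Gain in accuracy from having m1 available in addition to m2 alone.
    return accuracy(model, {m1, m2}, data) - accuracy(model, {m2}, data)
```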
Graph representation learning for hypergraphs can be used to extract patterns among the higher-order interactions that are critically important in many real-world problems. Current approaches designed for hypergraphs, however, are unable to handle different types of hypergraphs and are typically not generic enough to cover various learning tasks. Indeed, models that can predict variable-sized heterogeneous hyperedges have not been available. Here we develop a new self-attention based graph neural network called Hyper-SAGNN, applicable to homogeneous and heterogeneous hypergraphs with variable hyperedge sizes. We perform extensive evaluations on multiple datasets, including four benchmark network datasets and two single-cell Hi-C datasets in genomics. We demonstrate that Hyper-SAGNN significantly outperforms state-of-the-art methods on traditional tasks while also achieving strong performance on a new task called outsider identification. Hyper-SAGNN will be useful for graph representation learning to uncover complex higher-order interactions in different applications.
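A minimal sketch of scoring a variable-sized candidate hyperedge with self-attention (illustrative only; the actual Hyper-SAGNN combines per-node "static" and "dynamic" embeddings in a more elaborate way):

```python
import torch
import torch.nn as nn

class HyperedgeScorer(nn.Module):
    def __init__(self, dim, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.head = nn.Linear(dim, 1)

    def forward(self, node_emb):            # (batch, edge_size, dim)
        # Self-attention lets each node attend to the other members of
        # the candidate hyperedge, regardless of the hyperedge size.
        ctx, _ = self.attn(node_emb, node_emb, node_emb)
        score = self.head(ctx.mean(dim=1))  # pool over the hyperedge
        return torch.sigmoid(score)         # probability the edge exists
```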