We introduce a high-order spline geometric approach for the initial boundary value problem for Maxwell's equations. The method is geometric in the sense that it discretizes, in a structure-preserving fashion, the two de Rham sequences of differential forms involved in the formulation of the continuous system. Both the Ampere--Maxwell and the Faraday equations are required to hold strongly, while two discrete Hodge star operators are used to make the system solvable. By exploiting the properties of the chosen spline spaces and concepts from exterior calculus, a non-standard explicit-in-time formulation is introduced, based on the solution of linear systems whose matrices have Kronecker product structure, rather than on mass matrices as in the standard literature. These matrices arise from the application of the exterior (wedge) product in the discrete setting, and they retain their Kronecker product structure independently of the geometry of the domain and the material parameters. The resulting scheme preserves the desirable energy conservation properties of the known approaches. The computational advantages of the newly proposed scheme are studied both through a complexity analysis and through numerical experiments in three dimensions.
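As a minimal illustration of why Kronecker structure pays off computationally (a generic sketch, not the paper's actual solver; A and B stand in for hypothetical small per-direction factors), a system with a Kronecker product matrix can be solved through two small dense solves instead of one large one:

```python
import numpy as np

def kron_solve(A, B, b):
    # Solve (A kron B) y = b using only the small factors A (n x n) and
    # B (m x m), via the identity kron(A, B) @ vec(X) = vec(A @ X @ B.T)
    # for row-major vectorization: two dense solves of size n and m
    # replace one solve of size n*m.
    n, m = A.shape[0], B.shape[0]
    C = b.reshape(n, m)              # un-vectorize the right-hand side
    X = np.linalg.solve(A, C)        # left solve:  A X1 = C
    X = np.linalg.solve(B, X.T).T    # right solve: X = X1 B^{-T}
    return X.reshape(-1)

# Sanity check against the explicitly assembled Kronecker matrix.
rng = np.random.default_rng(0)
A, B = rng.standard_normal((4, 4)), rng.standard_normal((3, 3))
b = rng.standard_normal(12)
assert np.allclose(np.kron(A, B) @ kron_solve(A, B, b), b)
```

The same reduction applies direction by direction in higher dimensions, which is what makes the cost per time step competitive with, or better than, a standard mass-matrix solve.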
The logistics industry in Japan is facing a severe labor shortage. There is therefore an increasing need for joint transportation, which allows large amounts of cargo to be moved with fewer trucks. In recent years, the use of artificial intelligence and other new technologies has gained wide attention as a way to improve matching efficiency. However, it is difficult to build a system that responds to requests instantly, because searching through the enormous number of combinations of transport lanes is time-consuming. In this study, we focus on a form of joint transportation called triangular transportation and enumerate the combinations with high cooperation effects. The proposed algorithm makes good use of hidden inequalities, such as the distance axioms, to narrow the search range without sacrificing accuracy. Numerical experiments show that the proposed algorithm is thousands of times faster than simple brute force. With this technology as the core engine, we developed a joint transportation matching system. The system was already in use by over 150 companies as of October 2022 and was featured in a collection of logistics digital transformation cases published by Japan's Ministry of Land, Infrastructure, Transport and Tourism.
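The pruning principle can be illustrated with a toy cost model (entirely hypothetical: lanes as (origin, destination) pairs and savings measured purely in avoided empty mileage; the paper's actual bounds are sharper, but the idea of exact, lossless pruning from the distance axioms is the same):

```python
from itertools import combinations

def triangle_saving(a, b, c, dist):
    # Empty return legs each truck drives when its lane operates alone.
    solo_empty = dist(a[1], a[0]) + dist(b[1], b[0]) + dist(c[1], c[0])
    # Empty connecting legs of the combined tour a -> b -> c -> a.
    joint_empty = dist(a[1], b[0]) + dist(b[1], c[0]) + dist(c[1], a[0])
    return solo_empty - joint_empty

def enumerate_triangles(lanes, dist, threshold):
    hits = []
    for a, b, c in combinations(lanes, 3):
        # Since dist() is non-negative, solo_empty bounds the saving from
        # above; failing the bound rules the triple out exactly.
        if dist(a[1], a[0]) + dist(b[1], b[0]) + dist(c[1], c[0]) < threshold:
            continue
        best = max(triangle_saving(x, y, z, dist)
                   for x, y, z in [(a, b, c), (a, c, b)])  # both orientations
        if best >= threshold:
            hits.append(((a, b, c), best))
    return hits
```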
Classical model order reduction (MOR) for parametric problems may become computationally inefficient due to the large size of the required projection bases, especially for problems with slowly decaying Kolmogorov n-widths. Additionally, the Hamiltonian structure of a dynamical system may be known and should then be preserved during the reduction. In the current presentation, we address these two aspects by proposing a corresponding dictionary-based, online-adaptive MOR approach. The method requires dictionaries for the state variable, the nonlinearities, and the discrete empirical interpolation (DEIM) points. During the online simulation, local basis extensions and simplifications are performed in an online-efficient way, i.e., the runtime complexity of the basis modifications and of the online simulation of the reduced models does not depend on the full state dimension. Experiments on a linear wave equation and a nonlinear sine-Gordon example demonstrate the efficiency of the approach.
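For context, here is a sketch of the classic static DEIM point selection that dictionary-based, online-adaptive variants typically build on (this is the textbook greedy algorithm, not the adaptive scheme of this work):

```python
import numpy as np

def deim_points(U):
    # Greedy DEIM selection: U is an (n x m) basis (e.g., POD modes of
    # nonlinearity snapshots) with linearly independent columns.
    p = [int(np.argmax(np.abs(U[:, 0])))]
    for j in range(1, U.shape[1]):
        # Interpolate the j-th mode at the points selected so far ...
        c = np.linalg.solve(U[np.ix_(p, list(range(j)))], U[p, j])
        # ... and place the next point where the residual is largest.
        r = U[:, j] - U[:, :j] @ c
        p.append(int(np.argmax(np.abs(r))))
    return np.array(p)
```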
We propose a non-intrusive, reduced-basis, data-driven method for approximating both eigenvalues and eigenvectors in parametric eigenvalue problems. We generate the basis of the reduced space by applying proper orthogonal decomposition (POD) to a collection of pre-computed full-order snapshots at a chosen set of parameters. Then, in the online phase, we use Bayesian linear regression (a.k.a. Gaussian process regression, GPR) to predict both eigenvalues and eigenvectors at new parameters. Following standard practice in supervised machine learning, the data generated in the offline phase are split into training and test sets for the numerical experiments. Furthermore, we discuss the connection between Gaussian process regression and spline methods, and compare the performance of the GPR method against linear and cubic spline methods. We show that GPR outperforms the other methods for functions with a certain regularity, and we discuss various covariance functions that influence the performance of GPR. The proposed method is shown to be accurate and efficient for the approximation of multiple 1D and 2D affine and non-affine parameter-dependent eigenvalue problems that exhibit crossing of eigenvalues.
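A minimal sketch of the offline/online split, under simplifying assumptions not taken from the paper (one eigenpair tracked, a scalar parameter, an RBF covariance via scikit-learn; the choice of covariance function matters, as noted above):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def fit_pod_gpr(S, lam, params, r):
    # Offline: POD basis from eigenvector snapshots S (n x N) at the N
    # training parameters; one GPR per reduced coordinate and one GPR for
    # the eigenvalue samples lam (N,).
    U = np.linalg.svd(S, full_matrices=False)[0][:, :r]
    coeffs = U.T @ S                                   # (r, N) reduced coordinates
    X = np.asarray(params).reshape(-1, 1)
    gp_c = [GaussianProcessRegressor(kernel=RBF()).fit(X, c) for c in coeffs]
    gp_l = GaussianProcessRegressor(kernel=RBF()).fit(X, lam)
    return U, gp_c, gp_l

def predict_pod_gpr(U, gp_c, gp_l, mu):
    # Online: regress reduced coordinates and eigenvalue at a new parameter,
    # then lift the eigenvector back to full order.
    x = np.array([[mu]])
    a = np.array([g.predict(x)[0] for g in gp_c])
    return gp_l.predict(x)[0], U @ a
```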
Data-driven offline model-based optimization (MBO) is an established practical approach to black-box computational design problems in which the true objective function is unknown and expensive to query. However, the standard approach, which optimizes designs against a learned proxy model of the ground-truth objective, can suffer from distribution shift. Specifically, in high-dimensional design spaces where valid designs lie on a narrow manifold, the standard approach is susceptible to producing out-of-distribution, invalid designs that "fool" the learned proxy model into outputting a high value. Using an ensemble rather than a single model as the learned proxy can help mitigate distribution shift, but naive rules for combining gradient information from the ensemble, such as the minimum or mean gradient, are still suboptimal and often hampered by non-convergent behavior. In this work, we explore alternative approaches for combining gradient information from the ensemble that are robust to distribution shift without compromising the optimality of the produced designs. More specifically, we explore two combination rules, each formulated as a convex optimization problem: the multiple gradient descent algorithm (MGDA) and conflict-averse gradient descent (CAGrad). We evaluate these algorithms on a diverse set of five computational design tasks, and compare ensemble MBO with MGDA and ensemble MBO with CAGrad against three naive baselines: (a) standard single-model MBO, (b) ensemble MBO with the mean gradient, and (c) ensemble MBO with the minimum gradient. Our results suggest that MGDA and CAGrad strike a desirable balance between conservatism and optimality and can help robustify data-driven offline MBO without compromising the optimality of designs.
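To make the MGDA combination rule concrete, here is a hedged sketch that finds the minimum-norm point in the convex hull of the ensemble gradients via Frank-Wolfe iterations (one common way to solve the MGDA subproblem; the authors' exact implementation may differ):

```python
import numpy as np

def min_norm_direction(grads, iters=200):
    # grads: (m, d) per-model gradients from the proxy ensemble. MGDA takes
    # as common direction the minimum-norm point of their convex hull,
    # i.e. argmin_w 0.5 * w^T G w over the probability simplex.
    m = grads.shape[0]
    G = grads @ grads.T                  # Gram matrix of the gradients
    w = np.full(m, 1.0 / m)              # start from the uniform combination
    for t in range(iters):
        i = int(np.argmin(G @ w))        # simplex vertex minimizing the
                                         # linearized objective
        gamma = 2.0 / (t + 2.0)          # standard Frank-Wolfe step size
        w *= 1.0 - gamma
        w[i] += gamma
    return w @ grads                     # shared direction for the update

# Usage inside MBO: x_new = x + step * min_norm_direction(ensemble_grads(x))
```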
Completely random measures (CRMs) provide a broad, and arguably the most popular, class of priors for Bayesian nonparametric (BNP) analysis of trait allocations. As a peculiar property, CRM priors lead to predictive distributions that share the following common structure: for fixed prior parameters, a new data point exhibits a Poisson (random) number of ``new'' traits, i.e., traits not appearing in the sample, and this number depends on the sampling information only through the sample size. While the Poisson posterior distribution is appealing for its analytical tractability and ease of interpretation, its independence from the sampling information is a critical drawback, as it makes the posterior distribution of ``new'' traits completely determined by the estimation of the unknown prior parameters. In this paper, we introduce the class of transform-scaled process (T-SP) priors as a tool to enrich the posterior distribution of ``new'' traits arising from CRM priors, while maintaining the same analytical tractability and ease of interpretation. In particular, we present a framework for posterior analysis of trait allocations under T-SP priors, showing that Stable T-SP priors, i.e., T-SP priors built from Stable CRMs, lead to predictive distributions such that, for fixed prior parameters, a new data point displays a negative-Binomial (random) number of ``new'' traits, which depends on the sampling information through both the number of distinct traits and the sample size. Then, by relying on a hierarchical version of T-SP priors, we extend our analysis to the more general setting of trait allocations with multiple groups of data or subpopulations. The empirical effectiveness of our methods is demonstrated through numerical experiments and applications to real data.
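Schematically, with $\lambda(n)$, $r(k_n,n)$, and $p(k_n,n)$ standing in for the exact quantities derived in the paper (this is shorthand notation, not the paper's), the contrast reads
\[
K^{\mathrm{new}}_{n+1} \mid X_1,\dots,X_n \sim \mathrm{Poisson}\big(\lambda(n)\big) \ \ \text{(CRM priors)},
\qquad
K^{\mathrm{new}}_{n+1} \mid X_1,\dots,X_n \sim \mathrm{NegBinomial}\big(r(k_n,n),\, p(k_n,n)\big) \ \ \text{(Stable T-SP priors)},
\]
where $K^{\mathrm{new}}_{n+1}$ is the number of ``new'' traits displayed by the $(n+1)$-st data point and $k_n$ is the number of distinct traits observed in the first $n$ data points.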
We describe a fast method for solving elliptic partial differential equations (PDEs) with uncertain coefficients using kernel interpolation at a lattice point set. By representing the input random field of the system using the model proposed by Kaarnioja, Kuo, and Sloan (SIAM J.~Numer.~Anal.~2020), in which a countable number of independent random variables enter the random field as periodic functions, it was shown by Kaarnioja, Kazashi, Kuo, Nobile, and Sloan (Numer.~Math.~2022) that the lattice-based kernel interpolant of the PDE solution, as a function of the stochastic variables, can be constructed in a highly efficient manner using the fast Fourier transform (FFT). In this work, we discuss the connection between our model and the popular ``affine and uniform model'' studied widely in the literature on uncertainty quantification for PDEs with uncertain coefficients. We also propose a new class of weights entering the construction of the kernel interpolant -- \emph{serendipitous weights} -- which dramatically improve the computational performance of the kernel interpolant for PDE problems with uncertain coefficients and allow us to tackle function approximation problems in very high dimensions. Numerical experiments are presented to showcase the performance of the serendipitous weights.
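The FFT speedup rests on the fact that, for lattice points and a shift-invariant periodic kernel, the kernel matrix is circulant (in one dimension directly; for rank-1 lattices in higher dimensions, after a reordering of the points). A toy 1D sketch with a stand-in Korobov-type kernel, not the weighted kernels of the paper:

```python
import numpy as np

def solve_circulant(first_col, y):
    # A circulant system C x = y diagonalizes under the DFT, so it is
    # solved with three FFTs in O(n log n) time.
    return np.real(np.fft.ifft(np.fft.fft(y) / np.fft.fft(first_col)))

n = 256
t = np.arange(n) / n                                  # lattice points i/n
# Stand-in periodic kernel with positive Fourier coefficients:
kappa = lambda x: 1.0 + 2 * np.pi**2 * ((x % 1.0)**2 - (x % 1.0) + 1.0 / 6.0)
y = np.sin(2 * np.pi * t)                             # values to interpolate
coef = solve_circulant(kappa(t), y)                   # interpolation weights
K = kappa(t[:, None] - t[None, :])                    # full matrix, check only
assert np.allclose(K @ coef, y)
```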
We investigate the linear stability of a pathway-based diffusion model (PBDM), which characterizes the dynamics of engineered Escherichia coli populations [X. Xue, C. Xue, and M. Tang, PLoS Computational Biology, 14 (2018), e1006178]. The stability analysis considers small perturbations of the density and the chemical concentration around two non-trivial steady states, and the linearized equations are transformed into a generalized eigenvalue problem. A formal analysis shows that, when the internal variable responds to the outside signal fast enough, the PBDM converges to an anisotropic diffusion model, for which the probability density distribution in the internal variable becomes a delta function. We introduce an asymptotic preserving (AP) scheme for the PBDM that converges to a stable limit scheme consistent with the anisotropic diffusion model. Further numerical simulations demonstrate the theoretical results of the linear stability analysis, i.e., the pattern formation, as well as the convergence of the AP scheme.
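The final step of such an analysis reduces to a generalized eigenvalue computation; a generic sketch (A and B are placeholders for the discretized linearization around a steady state, not the paper's specific operators):

```python
import numpy as np
from scipy.linalg import eig

def is_linearly_stable(A, B, tol=0.0):
    # Solve the generalized eigenvalue problem A v = omega B v arising from
    # the linearization; the steady state is linearly stable when every
    # finite growth rate satisfies Re(omega) < 0 (instability of a mode
    # signals the onset of pattern formation).
    omega = eig(A, B, right=False)
    omega = omega[np.isfinite(omega)]   # B may be singular: drop infinite modes
    return bool(np.max(omega.real) < tol)
```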
Balanced hypergraph partitioning is an NP-hard problem with many applications, e.g., optimizing communication in distributed data placement problems. The goal is to place all nodes across $k$ different blocks of bounded size, such that hyperedges span as few blocks as possible. This problem is well-studied in sequential and distributed settings, but not in the shared-memory setting. We close this gap by devising efficient and scalable shared-memory algorithms for all components employed in the best sequential solvers, without compromising solution quality. This work presents the scalable and high-quality hypergraph partitioning framework Mt-KaHyPar. Its most important components are parallel improvement algorithms based on the FM algorithm and on maximum flows, as well as a parallel clustering algorithm for coarsening, which are used in a multilevel scheme with $\log(n)$ levels. As additional components, we parallelize the $n$-level partitioning scheme, devise a deterministic version of our algorithm, and present optimizations for plain graphs. We evaluate our solver on more than 800 graphs and hypergraphs and compare it with 25 different algorithms from the literature. Our fastest configuration outperforms almost all existing hypergraph partitioners with regard to both solution quality and running time. Our highest-quality configuration achieves the same solution quality as the best sequential partitioner KaHyPar, while being an order of magnitude faster with ten threads. Thus, two of our configurations occupy all fronts of the Pareto curve for hypergraph partitioning. Furthermore, our solvers exhibit good speedups, e.g., 29.6x in the geometric mean on 64 cores (deterministic), 22.3x ($\log(n)$-level), and 25.9x ($n$-level).
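For reference, the standard $(\lambda - 1)$ connectivity objective that partitioners of this kind minimize can be stated in a few lines (illustrative data layout; the names are not Mt-KaHyPar's API):

```python
def connectivity_objective(hyperedges, part):
    # hyperedges: iterable of (weight, pins) pairs; part maps node -> block.
    total = 0
    for weight, pins in hyperedges:
        spanned = len({part[v] for v in pins})   # lambda(e): blocks e touches
        total += weight * (spanned - 1)          # (lambda - 1) metric
    return total

# One hyperedge of weight 2 spanning blocks {0, 1} contributes 2 * (2 - 1):
assert connectivity_objective([(2, [0, 1, 2])], {0: 0, 1: 0, 2: 1}) == 2
```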
A core capability of intelligent systems is the ability to quickly learn new tasks by drawing on prior experience. Gradient-based (or optimization-based) meta-learning has recently emerged as an effective approach for few-shot learning. In this formulation, meta-parameters are learned in the outer loop, while task-specific models are learned in the inner loop using only a small amount of data from the current task. A key challenge in scaling these approaches is the need to differentiate through the inner-loop learning process, which can impose considerable computational and memory burdens. By drawing upon implicit differentiation, we develop the implicit MAML algorithm, which depends only on the solution to the inner-level optimization and not on the path taken by the inner-loop optimizer. This effectively decouples the meta-gradient computation from the choice of inner-loop optimizer; as a result, our approach is agnostic to that choice and can gracefully handle many gradient steps without vanishing gradients or memory constraints. Theoretically, we prove that implicit MAML can compute accurate meta-gradients with a memory footprint that is, up to small constant factors, no more than that required to compute a single inner-loop gradient, and with no overall increase in the total computational cost. Experimentally, we show that these benefits of implicit MAML translate into empirical gains on few-shot image recognition benchmarks.
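A sketch of the resulting meta-gradient computation at a high level: with a proximal inner objective regularized by strength lam, the meta-gradient solves $(I + H/\lambda)x = g$, which conjugate gradient handles using only Hessian-vector products (the hvp callable below is an assumed user-supplied routine, e.g., via double backpropagation; this is an illustration, not the authors' code):

```python
import numpy as np

def implicit_meta_gradient(hvp, g, lam, iters=20):
    # Solve (I + H / lam) x = g by conjugate gradient, where H is the inner
    # training-loss Hessian at the inner solution and g the outer-loss
    # gradient there. Only Hessian-vector products are needed, so neither
    # H nor the inner optimization path is ever materialized.
    Av = lambda v: v + hvp(v) / lam
    x = np.zeros_like(g)
    r = g - Av(x)
    p = r.copy()
    for _ in range(iters):
        Ap = Av(p)
        alpha = (r @ r) / (p @ Ap)
        x = x + alpha * p
        r_new = r - alpha * Ap
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    return x
```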
In this paper, we propose a conceptually simple and geometrically interpretable objective function, the additive margin Softmax (AM-Softmax), for deep face verification. In general, the face verification task can be viewed as a metric learning problem, so learning large-margin face features whose intra-class variation is small and whose inter-class difference is large is of great importance for achieving good performance. Recently, Large-margin Softmax and Angular Softmax have been proposed to incorporate the angular margin in a multiplicative manner. In this work, we introduce a novel additive angular margin for the Softmax loss, which is intuitively appealing and more interpretable than the existing works. We also emphasize and discuss the importance of feature normalization. Most importantly, our experiments on LFW BLUFR and MegaFace show that our additive margin Softmax loss consistently performs better than the current state-of-the-art methods using the same network architecture and training dataset. Our code has also been made available at //github.com/happynear/AMSoftmax.
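A compact sketch of the loss on normalized features and class weights (NumPy for clarity rather than a deep-learning framework; s = 30 and m = 0.35 are representative hyperparameter choices, not prescriptions):

```python
import numpy as np

def am_softmax_loss(features, weights, labels, s=30.0, m=0.35):
    # L2-normalize features (rows) and class weight vectors (columns),
    # so the logits become cosine similarities cos(theta_j).
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    W = weights / np.linalg.norm(weights, axis=0, keepdims=True)
    cos = f @ W                                   # (batch, num_classes)
    rows = np.arange(len(labels))
    cos[rows, labels] -= m                        # additive margin on target cosine
    logits = s * cos                              # rescale after normalization
    logits -= logits.max(axis=1, keepdims=True)   # stabilize the softmax
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_prob[rows, labels].mean()
```

Subtracting m from the target-class cosine before the softmax is what makes the margin additive, in contrast to the multiplicative angular margins of the earlier losses mentioned above.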