We present a Lagrangian-Eulerian scheme to solve the shallow water equations in the case of spatially variable bottom geometry. Using a local curvilinear reference system anchored on the bottom surface, we develop effective first-order and high-resolution space-time discretizations of the no-flow surfaces and solve a Lagrangian initial value problem describing the evolution of the balance laws that govern the geometrically intrinsic shallow water equations. The evolved solution set is then projected back onto the original surface grid to complete the proposed Lagrangian-Eulerian formulation. The resulting scheme maintains monotonicity and captures shocks without introducing excessive numerical dissipation, even in the presence of non-autonomous fluxes such as those arising from the geometrically intrinsic shallow water equations on variable topographies. We provide a representative set of numerical examples to illustrate the accuracy and robustness of the proposed Lagrangian-Eulerian formulation for two-dimensional surfaces with general curvatures and discontinuous initial conditions.
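The evolve-then-project structure of a Lagrangian-Eulerian method can be illustrated on the simplest possible example: constant-speed linear advection on a periodic 1D grid, where the solution is evolved along characteristics and then projected (interpolated) back onto the fixed Eulerian grid. This is a toy analogue of the idea, not the paper's intrinsic shallow water solver:

```python
import numpy as np

def semi_lagrangian_step(u, a, dx, dt):
    """One step of a simple semi-Lagrangian (Lagrangian-Eulerian) update for
    constant-speed advection u_t + a u_x = 0 on a periodic grid: evolve along
    characteristics, then project (linearly interpolate) back onto the fixed
    Eulerian grid. Toy sketch, not the paper's scheme."""
    n = len(u)
    x = np.arange(n) * dx
    xd = (x - a * dt) % (n * dx)          # departure points of characteristics
    i = (xd // dx).astype(int) % n        # left grid neighbour of each departure point
    w = xd / dx - (xd // dx)              # linear interpolation weight
    return (1 - w) * u[i] + w * u[(i + 1) % n]
```

When `a * dt` equals `dx` exactly, the step reduces to an exact shift of the grid values, which makes the projection error easy to inspect.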
This paper proposes a decoupled numerical scheme for the time-dependent Ginzburg-Landau equations under the temporal gauge. For spatial discretization, the scheme adopts the second-type N\'ed\'elec element for the order parameter and the linear element for the magnetic potential; for time discretization, it uses a fully linearized backward Euler method and a first-order exponential time differencing method, respectively. The maximum bound principle for the order parameter and the energy dissipation law are proved in the discrete sense for this finite element-based scheme. This permits adaptive time stepping, which can significantly speed up long-time simulations compared to existing numerical schemes, especially for superconductors with complicated shapes. The error estimate is rigorously established in the fully discrete sense. Numerical examples verify the theoretical results of the proposed scheme and demonstrate the vortex motion of superconductors in an external magnetic field.
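The first-order exponential time differencing (ETD1) step integrates the stiff linear part of an equation exactly and freezes the nonlinearity over the step. A minimal scalar sketch for a toy ODE $u' = -\lambda u + f(u)$ (our own illustration, not the paper's finite element system):

```python
import math

def etd1_step(u, dt, lam, f):
    """One ETD1 step for u' = -lam*u + f(u): the linear stiff part is
    integrated exactly via exp(-lam*dt); the nonlinear term is frozen
    at the current value u_n over the step."""
    e = math.exp(-lam * dt)
    return e * u + (1.0 - e) / lam * f(u)

def integrate(u0, dt, n, lam, f):
    """Advance n ETD1 steps from the initial value u0."""
    u = u0
    for _ in range(n):
        u = etd1_step(u, dt, lam, f)
    return u
```

For constant $f$ the ETD1 step reproduces the exact solution regardless of the step size, which is the property that makes large adaptive steps attractive for stiff problems.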
Graph neural networks (GNNs) have shown superiority in many prediction tasks over graphs due to their impressive capability of capturing nonlinear relations in graph-structured data. However, for node classification tasks, only marginal improvement of GNNs over their linear counterparts has often been observed. Previous works offer little insight into this phenomenon. In this work, we resort to Bayesian learning to investigate in depth the functions of non-linearity in GNNs for node classification tasks. Given a graph generated from the statistical model CSBM, we observe that the maximum a posteriori estimate of a node label given its own and its neighbors' attributes consists of two types of non-linearity: a possibly non-linear transformation of node attributes and a ReLU-activated feature aggregation from neighbors. The latter surprisingly matches the type of non-linearity used in many GNN models. By further imposing a Gaussian assumption on node attributes, we prove that the superiority of these ReLU activations is significant only when the node attributes are far more informative than the graph structure, which matches many previous empirical observations well. A similar argument holds when there is a distribution shift of node attributes between the training and testing datasets. Finally, we verify our theory on both synthetic and real-world networks.
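A ReLU-activated feature aggregation from neighbors can be sketched in a couple of lines. This is an illustrative toy with hypothetical names, not the exact MAP estimator derived in the paper:

```python
import numpy as np

def relu_aggregate(X, adj):
    """ReLU-activated neighbour aggregation: each node sums the ReLU of its
    neighbours' feature vectors. X is the (n_nodes, n_features) attribute
    matrix and adj the binary adjacency matrix. Illustrative sketch of the
    kind of non-linearity the abstract refers to."""
    return adj @ np.maximum(X, 0.0)
```

With a negative-valued attribute on one node, the aggregation zeroes out that node's contribution to its neighbours, which is exactly what a linear aggregation cannot do.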
Real-time scheduling theory assists developers of embedded systems in verifying that the timing constraints required by critical software tasks can be feasibly met on a given hardware platform. Fundamental problems in the theory are often formulated as searches for fixed points of functions and are solved by fixed-point iterations. These fixed-point methods are used widely because they are simple to understand, simple to implement, and seem to work well in practice. The same problems can also be formulated as integer programs and solved with algorithms based on theories of linear programming and cutting planes, among others. However, such algorithms are harder to understand and implement than fixed-point iterations. In this research, we show that ideas such as linear programming duality and cutting planes can be used to develop algorithms that are as easy to implement as existing fixed-point iteration schemes but have better convergence properties. We evaluate the algorithms on synthetically generated problem instances to demonstrate that the new algorithms are faster than the existing ones.
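A classic instance of the fixed-point formulation is worst-case response-time analysis under fixed-priority scheduling, where $R_i = C_i + \sum_{j<i} \lceil R_i / T_j \rceil C_j$ is solved by iterating upward from $R_i = C_i$. A textbook sketch of that baseline iteration (not the new LP-based algorithms this work proposes):

```python
import math

def response_time(C, T, i, limit=1000):
    """Fixed-point iteration for the worst-case response time of task i.
    Tasks are assumed sorted by priority (index 0 = highest), with
    worst-case execution times C and periods T (deadline = period).
    Returns None if the response time exceeds T[i] (deadline miss)."""
    R = C[i]
    for _ in range(limit):
        # Interference from every higher-priority task j released during R.
        R_new = C[i] + sum(math.ceil(R / T[j]) * C[j] for j in range(i))
        if R_new == R:
            return R          # fixed point reached
        if R_new > T[i]:
            return None       # unschedulable
        R = R_new
    return None
```

The iteration is monotonically non-decreasing and bounded by the deadline, which is why it terminates; the convergence-rate weakness the abstract alludes to is that each step may raise `R` by only a small amount.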
Fractional vector calculus is the building block of the fractional partial differential equations that model non-local or long-range phenomena, e.g., anomalous diffusion, fractional electromagnetism, and fractional advection-dispersion. In this work, we reformulate a type of fractional vector calculus that uses Caputo fractional partial derivatives and discretize this reformulation using discrete exterior calculus on a cubical complex in a structure-preserving way, meaning that the continuous-level properties $\operatorname{curl}^\alpha \operatorname{grad}^\alpha = \mathbf{0}$ and $\operatorname{div}^\alpha \operatorname{curl}^\alpha = 0$ hold exactly at the discrete level. We discuss important properties of our fractional discrete exterior derivatives and numerically verify their second-order convergence in the root mean square error. Our proposed discretization has the potential to provide accurate and stable numerical solutions to fractional partial differential equations and to preserve fundamental physical laws exactly at the discrete level, regardless of the mesh size.
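In the classical integer-order case ($\alpha = 1$), the identity $\operatorname{curl}\operatorname{grad} = \mathbf{0}$ on a 2D cubical complex reduces to the matrix identity $d_1 d_0 = 0$ for the incidence matrices of discrete exterior calculus. A minimal sketch of that classical building block (our own illustration; the paper's fractional operators replace these sparse matrices with fractional-derivative analogues while keeping the same identity):

```python
import numpy as np

def grid_exterior_derivatives(nx, ny):
    """Incidence matrices d0 (discrete grad, vertices -> edges) and
    d1 (discrete curl, edges -> faces) on an nx-by-ny cubical grid."""
    v = lambda i, j: i * ny + j                   # vertex index
    nh = (nx - 1) * ny                            # number of horizontal edges
    he = lambda i, j: i * ny + j                  # edge (i,j) -> (i+1,j)
    ve = lambda i, j: nh + i * (ny - 1) + j       # edge (i,j) -> (i,j+1)
    ne = nh + nx * (ny - 1)
    d0 = np.zeros((ne, nx * ny), dtype=int)
    for i in range(nx - 1):
        for j in range(ny):
            d0[he(i, j), v(i, j)], d0[he(i, j), v(i + 1, j)] = -1, 1
    for i in range(nx):
        for j in range(ny - 1):
            d0[ve(i, j), v(i, j)], d0[ve(i, j), v(i, j + 1)] = -1, 1
    d1 = np.zeros(((nx - 1) * (ny - 1), ne), dtype=int)
    for i in range(nx - 1):
        for j in range(ny - 1):
            f = i * (ny - 1) + j
            d1[f, he(i, j)] += 1       # bottom edge, traversed in +x
            d1[f, ve(i + 1, j)] += 1   # right edge, traversed in +y
            d1[f, he(i, j + 1)] -= 1   # top edge, traversed in -x
            d1[f, ve(i, j)] -= 1       # left edge, traversed in -y
    return d0, d1
```

The product `d1 @ d0` vanishes identically because each face's boundary loop visits every vertex once with sign `+1` and once with sign `-1`.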
Existing NFTs face the restrictions of \textit{one-time incentives} and \textit{product isolation}. Creators cannot obtain further benefits after selling their NFT products because relationships across different NFTs are lacking, which results in controversy over possible profit sharing. This work proposes a referable NFT scheme to extend the incentive sustainability of NFTs. We construct the referable NFT (rNFT) network to increase exposure and strengthen the referring relationships among the included items. We adopt a DAG topology to generate directed edges between pairs of NFTs, with corresponding weights and labels for advanced usage. We accordingly implement and propose the scheme under Ethereum Improvement Proposal (EIP) standards, indexed in EIP-1155. Further, we provide a mathematical formulation to analyze the utility of each rNFT participant, and the discussion gives general guidance across the multi-dimensional parameters. To our knowledge, this is the first study to build a referable NFT network that explicitly shows the virtual connections among NFTs.
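The DAG of referring relationships can be sketched with a small in-memory structure; the names and API below are our own hypothetical illustration, not the EIP implementation:

```python
class RNFTNetwork:
    """Toy sketch of an rNFT referral DAG: nodes are token ids, directed
    edges carry a weight and a label describing the referring relationship.
    Acyclicity is enforced on insertion so the graph stays a DAG."""

    def __init__(self):
        self.edges = {}   # token_id -> list of (referred_id, weight, label)

    def refer(self, src, dst, weight, label):
        """Record that token src refers to token dst."""
        if self._reachable(dst, src):
            raise ValueError("edge would create a cycle; rNFT graph is a DAG")
        self.edges.setdefault(src, []).append((dst, weight, label))

    def _reachable(self, a, b):
        """Depth-first search: is b reachable from a along referral edges?"""
        stack, seen = [a], set()
        while stack:
            n = stack.pop()
            if n == b:
                return True
            if n in seen:
                continue
            seen.add(n)
            stack.extend(d for d, _, _ in self.edges.get(n, []))
        return False
```

Rejecting cycle-forming edges keeps referral chains well-founded, so profit can be propagated along them without circular payouts.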
We introduce an $r$-adaptive algorithm to solve Partial Differential Equations using a Deep Neural Network. The proposed method is restricted to tensor-product meshes and optimizes the boundary node locations in one dimension, from which we build two- or three-dimensional meshes. The method allows the definition of fixed interfaces to design conforming meshes, and enables changes in the topology, i.e., some nodes can jump across fixed interfaces. The method simultaneously optimizes the node locations and the PDE solution values over the resulting mesh. To numerically illustrate the performance of our proposed $r$-adaptive method, we apply it in combination with a collocation method, a Least Squares Method, and a Deep Ritz Method. We focus on the latter to solve one- and two-dimensional problems whose solutions are smooth, singular, and/or exhibit strong gradients.
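Building a higher-dimensional mesh as the tensor product of 1D node sets can be sketched as follows; here the 1D locations are plain arrays, whereas in an $r$-adaptive method they would be trainable parameters updated together with the solution:

```python
import numpy as np

def tensor_product_mesh(x_nodes, y_nodes):
    """Assemble the 2D tensor-product mesh of vertex coordinates from two
    1D node-location arrays. Sorting keeps the mesh valid even if the
    optimizer reorders nodes. Returns an (nx, ny, 2) coordinate array."""
    X, Y = np.meshgrid(np.sort(x_nodes), np.sort(y_nodes), indexing="ij")
    return np.stack([X, Y], axis=-1)
```

Because only the 1D node locations are free parameters, an `nx * ny` mesh is controlled by just `nx + ny` degrees of freedom, which keeps the node-location optimization cheap.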
Physical law learning is the ambitious attempt to automate the derivation of governing equations using machine learning techniques. However, the current literature focuses solely on developing methods to achieve this goal, and a theoretical foundation is at present missing. This paper shall thus serve as a first step toward building a comprehensive theoretical framework for learning physical laws, aiming to provide reliability guarantees for such algorithms. One key problem is that the governing equations might not be uniquely determined by the given data. We study this problem in the common situation where a physical law is described by an ordinary or partial differential equation. For various classes of differential equations, we provide both necessary and sufficient conditions for a function from a given function class to uniquely determine the differential equation governing the phenomenon. We then use our results to devise numerical algorithms that determine whether a function solves a differential equation uniquely. Finally, we provide extensive numerical experiments showing that our algorithms, in combination with common approaches for learning physical laws, indeed make it possible to guarantee that a unique governing differential equation is learnt, without assuming any knowledge about the function, thereby ensuring reliability.
In supersingular isogeny-based cryptography, the path-finding problem reduces to the endomorphism ring problem. Can path-finding be reduced to knowing just one endomorphism? It is known that a small endomorphism enables polynomial-time path-finding and endomorphism ring computation (Love-Boneh [36]). An endomorphism gives an explicit orientation of a supersingular elliptic curve. In this paper, we use the volcano structure of the oriented supersingular isogeny graph to take ascending, descending, and horizontal steps on the graph and deduce path-finding algorithms to an initial curve. Each altitude of the volcano corresponds to a unique quadratic order, called the primitive order. We introduce a new hard problem, computing the primitive order given an arbitrary endomorphism on the curve, and we provide a sub-exponential quantum algorithm for solving it. In concurrent work (Wesolowski [54]), it was shown that the endomorphism ring problem in the presence of one endomorphism with known primitive order reduces to a vectorization problem, implying path-finding algorithms. Our path-finding algorithms are more general in that we do not assume knowledge of the primitive order associated with the endomorphism.
Bid optimization for online advertising from a single advertiser's perspective has been thoroughly investigated in both academic research and industrial practice. However, existing work typically assumes that competitors do not change their bids, i.e., that the winning price is fixed, leading to poor performance of the derived solution. Although a few studies use multi-agent reinforcement learning to set up a cooperative game, they still suffer from the following drawbacks: (1) they fail to avoid collusive solutions in which all the advertisers involved in an auction deliberately bid an extremely low price; (2) they cannot handle the underlying complex bidding environment well, leading to poor model convergence. This problem can be amplified when handling the multiple objectives of advertisers, which are practical demands not considered by previous work. In this paper, we propose a novel multi-objective cooperative bid optimization formulation called Multi-Agent Cooperative bidding Games (MACG). MACG sets up a carefully designed multi-objective optimization framework in which the different objectives of advertisers are incorporated. A global objective maximizing the overall profit of all advertisements is added to encourage better cooperation and to protect self-bidding advertisers. To avoid collusion, we also introduce an extra platform revenue constraint. We theoretically analyze the optimal functional form of the bidding formula and design a policy network accordingly to generate auction-level bids. We then design an efficient multi-agent evolutionary strategy for model optimization. Offline experiments and online A/B tests conducted on the Taobao platform show that both the single advertiser's objective and the global profit are significantly improved compared to state-of-the-art methods.
While existing work in robust deep learning has focused on small pixel-level $\ell_p$ norm-based perturbations, these may not account for perturbations encountered in several real-world settings. In many such cases, although test data might not be available, broad specifications of the types of perturbations (such as an unknown degree of rotation) may be known. We consider a setup in which robustness is expected over an unseen test domain that is not i.i.d. but deviates from the training domain. While this deviation may not be known exactly, its broad characterization is specified a priori in terms of attributes. We propose an adversarial training approach that learns to generate new samples so as to maximize the classifier's exposure to the attribute space, without having access to data from the test domain. Our adversarial training solves a min-max optimization problem, with the inner maximization generating adversarial perturbations and the outer minimization finding model parameters by optimizing the loss on the adversarial perturbations generated by the inner maximization. We demonstrate the applicability of our approach on three types of naturally occurring perturbations: object-related shifts, geometric transformations, and common image corruptions. Our approach enables deep neural networks to be robust against a wide range of naturally occurring perturbations. We demonstrate the usefulness of the proposed approach by showing the robustness gains of deep neural networks trained with our adversarial training on MNIST, CIFAR-10, and a new variant of the CLEVR dataset.
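The inner maximization over a low-dimensional attribute space (here a single scalar such as a rotation angle) can be sketched as projected gradient ascent; the function names and the finite-difference gradient are our own simplifications to keep the sketch framework-free, not the paper's implementation:

```python
import numpy as np

def inner_maximization(loss, theta0, step, n_steps, bound):
    """Projected gradient ascent over a 1-D attribute: repeatedly step
    in the sign of the loss gradient and clip the attribute back into
    the admissible range [-bound, bound]. `loss` maps the attribute
    value to the classifier loss on the correspondingly perturbed input."""
    theta, eps = theta0, 1e-4
    for _ in range(n_steps):
        # Central finite-difference estimate of d(loss)/d(theta).
        g = (loss(theta + eps) - loss(theta - eps)) / (2 * eps)
        theta = float(np.clip(theta + step * np.sign(g), -bound, bound))
    return theta
```

The outer minimization would then update the model parameters on inputs perturbed with the returned worst-case attribute value, closing the min-max loop.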