The ability to deal with complex geometries and to go to higher orders is the main advantage of space-time finite element methods. Therefore, we want to develop a solid background from which we can construct appropriate space-time methods. In this paper, we will treat time as another space direction, which is the main idea of space-time methods. First, we will briefly discuss how the vectorial wave equation is derived from Maxwell's equations in a space-time structure, taking into account Ohm's law. Then we will derive a space-time variational formulation for the vectorial wave equation using different trial and test spaces. This paper has two main goals. First, we prove unique solvability for the resulting Galerkin--Petrov variational formulation. Second, we analyze the discrete equivalent of the equation in a tensor-product setting and show conditional stability, i.e. a CFL condition. Understanding the vectorial wave equation and the corresponding space-time finite element methods is crucial for improving the existing theory of Maxwell's equations and paves the way to computations of more complicated electromagnetic problems.
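For orientation, and under the standard assumptions of linear, time-independent material parameters $\varepsilon$, $\mu$, $\sigma$ and an impressed current $\mathbf{j}_i$ (notation chosen here, not taken from the paper), combining Faraday's and Amp\`ere's laws with Ohm's law $\mathbf{J} = \sigma \mathbf{E} + \mathbf{j}_i$ yields the vectorial wave equation for the electric field $\mathbf{E}$:
\[
  \varepsilon\, \partial_{tt} \mathbf{E} + \sigma\, \partial_t \mathbf{E} + \nabla \times \bigl( \mu^{-1}\, \nabla \times \mathbf{E} \bigr) = -\, \partial_t \mathbf{j}_i .
\]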
In this paper, we derive explicit second-order necessary and sufficient optimality conditions for a local minimizer of an optimal control problem for a quasilinear second-order partial differential equation with a piecewise smooth but not differentiable nonlinearity in the leading term. The key argument rests on the analysis of level sets of the state. Specifically, we show that if a function vanishes on the boundary and its gradient is different from zero on a level set, then this set decomposes into finitely many closed simple curves. Moreover, the level sets depend continuously on the functions defining these sets. We also prove the continuity of the integrals on the level sets. In particular, Green's first identity is shown to be applicable on an open set determined by two functions with nonvanishing gradients. In the second part of this paper, the explicit sufficient second-order conditions will be used to derive error estimates for a finite-element discretization of the control problem.
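For reference, Green's first identity, whose applicability on the level-set-bounded domains is one of the technical ingredients above, reads for sufficiently smooth $u$, $v$ on a domain $\Omega$ with outward unit normal $n$:
\[
  \int_\Omega \nabla u \cdot \nabla v \, dx + \int_\Omega v\, \Delta u \, dx = \int_{\partial \Omega} v\, \frac{\partial u}{\partial n} \, ds .
\]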
Traveling phenomena are prevalent in a variety of fields, from atmospheric science to seismography and oceanography. However, there are two main shortcomings in the current literature: the lack of realistic modeling tools and the prohibitive computational cost at grid resolutions useful for data applications. We propose a flexible simulation method for traveling phenomena. To our knowledge, ours is the first method able to simulate extensions of the classical frozen field, which involves only one deterministic velocity, to a combination of velocities with random components, in translation, rotation, or both, as well as to velocity fields varying pointwise in space and time. We also study extensions of the frozen field obtained by relaxing constraints on its spectrum, giving rise to traveling phenomena that are still stationary but more realistic. Moreover, our proposed method has a lower computational complexity than that of circulant embedding, one of the most commonly employed simulation methods for Gaussian random fields, in $\mathbb{R}^{2+1}$.
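As a point of reference only (this is not the proposed method), the following toy sketch illustrates the classical frozen-field construction in one spatial dimension, with the single velocity drawn at random to mimic the random-translation extension mentioned above; all names and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stationary Gaussian random field on a periodic 1-D grid via spectral synthesis
n = 512
k = np.fft.rfftfreq(n, d=1.0 / n)               # wavenumbers
spectrum = np.exp(-(k / 20.0) ** 2)             # illustrative spectral density
noise = rng.normal(size=k.size) + 1j * rng.normal(size=k.size)
g = np.fft.irfft(np.sqrt(spectrum) * noise, n)  # one realization of the field

# Frozen field Z(x, t) = g(x - v t), here with a randomly drawn velocity v
v = rng.normal(loc=0.1, scale=0.02)

def frozen_field(t):
    shift = int(round(v * t * n)) % n           # periodic shift in grid cells
    return np.roll(g, shift)

snapshots = np.stack([frozen_field(t) for t in range(100)])
print(snapshots.shape)                          # (100, 512) space-time realization
```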
This paper introduces a novel deep neural network architecture for solving the inverse scattering problem in the frequency domain with wide-band data, by directly approximating the inverse map and thus avoiding the expensive optimization loop of classical methods. The architecture is motivated by the filtered back-projection formula in the full-aperture regime with homogeneous background, and it leverages the underlying equivariance of the problem and the compressibility of the integral operator. This drastically reduces the number of training parameters, and therefore the computational and sample complexity of the method. In particular, we obtain an architecture whose number of parameters scales sub-linearly with respect to the dimension of the inputs, while its inference complexity scales super-linearly but with very small constants. We provide several numerical tests showing that the current approach results in better reconstructions than optimization-based techniques such as full-waveform inversion, at a fraction of the cost, while being competitive with state-of-the-art machine learning methods.
In physics, there is a scalar function called the action which behaves like a cost function. When minimized, it yields the "path of least action" which represents the path a physical system will take through space and time. This function is crucial in theoretical physics and is usually minimized analytically to obtain equations of motion for various problems. In this paper, we propose a different approach: instead of minimizing the action analytically, we discretize it and then minimize it directly with gradient descent. We use this approach to obtain dynamics for six different physical systems and show that they are nearly identical to ground-truth dynamics. We discuss failure modes such as the unconstrained energy effect and show how to address them. Finally, we use the discretized action to construct a simple but novel quantum simulation.
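A minimal sketch of the general idea, not the authors' implementation: discretize the action of a one-dimensional harmonic oscillator on a time grid, fix the path's endpoints, and update the interior points by plain gradient descent on the discretized action (all constants and names here are illustrative).

```python
import numpy as np

m, k_spring = 1.0, 1.0             # mass and spring constant (illustrative)
T, N = 3.0, 20                     # time horizon and number of intervals
dt = T / N
t = np.linspace(0.0, T, N + 1)
x = np.linspace(0.0, 1.0, N + 1)   # initial guess; endpoints x(0)=0, x(T)=1 fixed

def action(path):
    v = np.diff(path) / dt                        # velocity on each interval
    kinetic = 0.5 * m * v**2
    potential = 0.5 * k_spring * path[:-1]**2     # left-endpoint rule
    return np.sum((kinetic - potential) * dt)     # S = sum (T - V) dt

def grad_action(path):
    g = np.zeros_like(path)
    v = np.diff(path) / dt
    g[1:-1] += m * (v[:-1] - v[1:])               # derivative of kinetic terms
    g[1:-1] -= k_spring * path[1:-1] * dt         # derivative of potential terms
    return g

lr = 0.05
for _ in range(20000):                            # plain gradient descent
    x[1:-1] -= lr * grad_action(x)[1:-1]          # update interior points only

exact = np.sin(t) / np.sin(T)                     # analytic path for these endpoints
print(np.max(np.abs(x - exact)))                  # agrees up to discretization error
```

For the chosen horizon (shorter than half an oscillation period) the classical path is a genuine minimum of the discretized action, so plain gradient descent converges to it.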
We study the problem of binary classification from the point of view of learning convex polyhedra in Hilbert spaces, to which one can reduce any binary classification problem. The problem of learning convex polyhedra in finite-dimensional spaces is well studied in the literature. We generalize this problem to the Hilbert space setting and propose an algorithm that learns a polyhedron which correctly classifies at least $1 - \varepsilon$ of the distribution, with probability at least $1 - \delta$, where $\varepsilon$ and $\delta$ are given parameters. As a corollary, we also improve some previous bounds for polyhedral classification in finite-dimensional spaces.
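For concreteness, a convex-polyhedron classifier in a Hilbert space $H$ is an intersection of finitely many halfspaces (standard definition, notation not taken from the paper): a point $x \in H$ is labeled positive if and only if
\[
  \langle w_i, x \rangle \le b_i \quad \text{for all } i = 1, \dots, t,
\]
for some $w_1, \dots, w_t \in H$ and $b_1, \dots, b_t \in \mathbb{R}$.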
The time-fractional porous medium equation is an important model of many hydrological, physical, and chemical flows. We study its self-similar solutions, which describe the profiles arising in many important experimentally measured settings. We prove that there is a unique solution to the general initial-boundary value problem in the one-dimensional setting. When supplemented with boundary conditions from the physical models, the problem exhibits a self-similar solution described with the use of the Erd\'elyi-Kober fractional operator. Using a backward shooting method, we show that there exists a unique solution to our problem. The shooting method is not only useful for deriving the theoretical results: we utilize it to devise an efficient numerical scheme to solve the governing problem, along with two ways of discretizing the Erd\'elyi-Kober fractional derivative. Since the latter is a nonlocal operator, its numerical realization has to include some truncation. We find the correct truncation regime and prove several error estimates. Furthermore, the backward shooting method can be used to solve the main problem, and we provide a convergence proof. The main difficulty lies in the degeneracy of the diffusivity, which we overcome with a suitable regularization. Our findings are supplemented with numerical simulations that verify the theoretical results.
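For reference, one common normalization of the Erd\'elyi-Kober fractional integral operator is (conventions vary, and the paper's exact definition may differ)
\[
  I_\beta^{\gamma,\delta} f(x) = \frac{1}{\Gamma(\delta)} \int_0^1 (1-u)^{\delta-1}\, u^{\gamma}\, f\bigl(x\, u^{1/\beta}\bigr)\, du, \qquad \delta > 0,
\]
with the associated fractional derivative built from it by composition with first-order differential operators.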
A new and efficient neural-network and finite-difference hybrid method is developed for solving the Poisson equation in a regular domain with jump discontinuities on embedded irregular interfaces. Since the solution has low regularity across the interface, applying a finite difference discretization to this problem requires an additional treatment accounting for the jump discontinuities. Here, we aim to alleviate this extra effort and ease the implementation by means of machine learning. The key idea is to decompose the solution into singular and regular parts. The neural network learning machinery, incorporating the given jump conditions, finds the singular solution, while the standard five-point Laplacian discretization is used to obtain the regular solution with associated boundary conditions. Regardless of the interface geometry, these two tasks only require supervised learning for function approximation and a fast direct solver for the Poisson equation, making the hybrid method easy to implement and efficient. Two- and three-dimensional numerical results show that the present hybrid method preserves second-order accuracy for the solution and its derivatives, and that it is comparable with the traditional immersed interface method in the literature. As an application, we solve the Stokes equations with singular forces to demonstrate the robustness of the present method.
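A minimal sketch of the regular/singular splitting on a uniform grid, with the network-learned singular part replaced by a placeholder and a generic sparse direct solver standing in for the fast Poisson solver (the geometry, names, and solver choice below are illustrative assumptions, not the paper's code):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 64                                    # interior grid points per direction
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)
X, Y = np.meshgrid(x, x, indexing="ij")

def singular_part(X, Y):
    # Placeholder for the network-learned singular component carrying the jump
    # conditions across the interface; identically zero here for illustration.
    return np.zeros_like(X)

f = np.ones_like(X)                       # right-hand side of -Laplace(u) = f
u_s = singular_part(X, Y)

# Standard five-point Laplacian on the unit square, homogeneous Dirichlet BC
I = sp.identity(n)
T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
A = (sp.kron(I, T) + sp.kron(T, I)) / h**2

# In the hybrid method, the data of this regular problem would be corrected by
# the singular part; with the zero placeholder it reduces to a plain solve.
u_r = spla.spsolve(A.tocsr(), f.ravel()).reshape(n, n)

u = u_r + u_s                             # hybrid solution: regular + singular
print(u.shape, float(u.max()))
```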
As the next generation of diverse workloads like autonomous driving and augmented/virtual reality evolves, computation is shifting from cloud-based services to the edge, leading to the emergence of a cloud-edge compute continuum. This continuum promises a wide spectrum of deployment opportunities for workloads that can leverage the strengths of the cloud (scalable infrastructure, high reliability) and the edge (energy efficiency, low latency). Despite its promise, the continuum has only been studied in silos of various computing models, thus lacking strong end-to-end theoretical and engineering foundations for computing and resource management across the continuum. Consequently, developers resort to ad hoc approaches to reason about performance and resource utilization of workloads in the continuum. In this work, we conduct a first-of-its-kind systematic study of various computing models, identify salient properties, and make a case to unify them under a compute continuum reference architecture. This architecture provides an end-to-end analysis framework for developers to reason about resource management, workload distribution, and performance analysis. We demonstrate the utility of the reference architecture by analyzing two popular continuum workloads, deep learning and industrial IoT. We have developed an accompanying deployment and benchmarking framework and a first-order analytical model for quantitative reasoning about continuum workloads. The framework is open-sourced and available at //github.com/atlarge-research/continuum.
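As a purely hypothetical illustration of the kind of first-order analytical reasoning such a model enables (none of the names or numbers below are taken from the paper or the repository):

```python
# Hypothetical first-order latency model for an offloaded inference task.
# All parameter names and values are illustrative.

def end_to_end_latency_ms(input_kb, bandwidth_mbps, rtt_ms, infer_ms):
    transfer_ms = rtt_ms + input_kb * 8 / bandwidth_mbps   # data transfer time
    return transfer_ms + infer_ms                          # network + compute

# Keep the workload at the nearby but slower edge vs. the remote but faster cloud
edge = end_to_end_latency_ms(input_kb=500, bandwidth_mbps=100, rtt_ms=5, infer_ms=40)
cloud = end_to_end_latency_ms(input_kb=500, bandwidth_mbps=100, rtt_ms=50, infer_ms=10)
print(f"edge: {edge:.1f} ms, cloud: {cloud:.1f} ms")
```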
We prove the first unconditional consistency result for superpolynomial circuit lower bounds with a relatively strong theory of bounded arithmetic. Namely, we show that the theory V$^0_2$ is consistent with the conjecture that NEXP $\not\subseteq$ P/poly, i.e., some problem that is solvable in non-deterministic exponential time does not have polynomial size circuits. We suggest this is the best currently available evidence for the truth of the conjecture. The same techniques establish the same results with NEXP replaced by the class of problems that are decidable in non-deterministic barely superpolynomial time such as NTIME$(n^{O(\log\log\log n)})$. Additionally, we establish a magnification result on the hardness of proving circuit lower bounds.
Knowledge graph reasoning (KGR), which aims to deduce new facts from existing facts based on mined logic rules underlying knowledge graphs (KGs), has become a fast-growing research direction. It has been proven to significantly benefit the usage of KGs in many AI applications, such as question answering and recommendation systems. According to the graph types, existing KGR models can be roughly divided into three categories, \textit{i.e.,} static models, temporal models, and multi-modal models. Early works in this domain mainly focus on static KGR and tend to directly apply general knowledge graph embedding models to the reasoning task. However, these models are not suitable for more complex but practical tasks, such as inductive static KGR, temporal KGR, and multi-modal KGR. To this end, multiple works have been developed recently, but no survey paper or open-source repository comprehensively summarizes and discusses models in this important direction. To fill the gap, we conduct a survey of knowledge graph reasoning, tracing it from static to temporal and then to multi-modal KGs. Concretely, the preliminaries, summaries of KGR models, and typical datasets are introduced and discussed in turn. Moreover, we discuss the challenges and potential opportunities. The corresponding open-source repository is shared on GitHub: //github.com/LIANGKE23/Awesome-Knowledge-Graph-Reasoning.