We derive new boundary conditions and implementation procedures for nonlinear initial boundary value problems (IBVPs) with non-zero boundary data that lead to bounded solutions. The new boundary procedure is applied to nonlinear IBVPs in skew-symmetric form, including dissipative terms. The complete procedure has two main ingredients. In the first part (published in [1, 2]), the energy and entropy rates were derived in terms of a surface integral with boundary terms for problems with first derivatives. In this second part we complement that analysis by adding second-derivative dissipative terms and bounding the boundary terms. We develop a new nonlinear boundary procedure which generalises the characteristic boundary procedure for linear problems. Both strong and weak imposition of the nonlinear boundary conditions with non-zero boundary data are considered, and we prove that the solution is bounded. The boundary procedure is applied to four important IBVPs in computational fluid dynamics: the incompressible Euler, incompressible Navier-Stokes, shallow water and compressible Euler equations. Finally, we show that stable discrete approximations follow by using summation-by-parts operators combined with weak boundary conditions.
We consider fair division of a set of indivisible goods among $n$ agents with additive valuations using the desirable fairness notion of maximin share (MMS). MMS is the most popular share-based notion, in which an agent finds an allocation fair to her if she receives goods worth at least her MMS value. An allocation is called MMS if all agents receive at least their MMS values. However, since MMS allocations do not always exist, the focus has shifted to investigating ordinal and multiplicative approximations of MMS. In the ordinal approximation, the goal is to show the existence of $1$-out-of-$d$ MMS allocations (for the smallest possible $d>n$). A series of works led to the state-of-the-art factor of $d=\lfloor 3n/2 \rfloor$ [HSSH21]. We show that $1$-out-of-$\lceil 4n/3\rceil$ MMS allocations always exist. In the multiplicative approximation, the goal is to show the existence of $\alpha$-MMS allocations (for the largest possible $\alpha < 1$), which guarantee each agent at least $\alpha$ times her MMS value. A series of works in the last decade led to the state-of-the-art factor of $\alpha = \frac{3}{4} + \frac{3}{3836}$ [AG23]. We introduce a general framework of $(\alpha, \beta, \gamma)$-MMS that guarantees an $\alpha$ fraction of the agents $\beta$ times their MMS values and the remaining $(1-\alpha)$ fraction $\gamma$ times their MMS values. The $(\alpha, \beta, \gamma)$-MMS framework captures both the ordinal and multiplicative approximations as special cases. We show that $(2(1-\beta)/\beta, \beta, 3/4)$-MMS allocations always exist. Furthermore, since the $2(1-\beta)/\beta$ fraction of agents can be chosen arbitrarily in our algorithm, this implies (using $\beta=\sqrt{3}/2$) the existence of a randomized allocation that gives each agent at least $3/4$ times her MMS value (ex-post) and at least $(17\sqrt{3} - 24)/(4\sqrt{3}) > 0.785$ times her MMS value in expectation (ex-ante).
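The ex-post and ex-ante guarantees above can be checked with a few lines of arithmetic. The sketch below assumes, as the abstract's uniform randomization permits, that each agent independently lands in the $\beta$-tier with probability $2(1-\beta)/\beta$; the variable names are illustrative, not from the paper.

```python
import math

beta = math.sqrt(3) / 2          # guarantee for the beta-tier of agents
gamma = 3 / 4                    # guarantee for the remaining agents
p = 2 * (1 - beta) / beta        # fraction of agents in the beta-tier

# Ex-post: every agent gets at least min(beta, gamma) = 3/4 of her MMS value.
ex_post = min(beta, gamma)

# Ex-ante: with the beta-tier chosen uniformly at random, each agent's
# expected guarantee is p * beta + (1 - p) * gamma.
ex_ante = p * beta + (1 - p) * gamma

# Closed form stated in the abstract.
closed_form = (17 * math.sqrt(3) - 24) / (4 * math.sqrt(3))

print(ex_post, ex_ante, closed_form)   # ex_ante matches the closed form, > 0.785
```

Expanding $p\beta + (1-p)\gamma$ with $\beta=\sqrt{3}/2$ gives $(2-\sqrt{3}) + (9\sqrt{3}-12)/(4\sqrt{3}) = (17\sqrt{3}-24)/(4\sqrt{3}) \approx 0.7859$.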
This paper describes a technique to avoid order reduction when integrating reaction-diffusion initial boundary value problems with explicit exponential Rosenbrock methods. The technique is valid for any such method, without having to impose any stiff order conditions, and for general time-dependent boundary values. A thorough analysis of the global error is performed, and numerical experiments are presented which corroborate the theoretical results and show a substantial gain in efficiency with respect to the standard method of lines.
Melt pool dynamics in metal additive manufacturing (AM) are critical to process stability, microstructure formation, and the final properties of the printed materials. Physics-based simulation, including computational fluid dynamics (CFD), is the dominant approach for predicting melt pool dynamics, but it suffers from inherently high computational cost. This paper presents a physics-informed machine learning (PIML) method that integrates neural networks with the governing physical laws to predict melt pool quantities such as temperature, velocity, and pressure without using any training data on velocity. The approach avoids solving the highly nonlinear Navier-Stokes equations numerically, which significantly reduces the computational cost. Difficult-to-determine model constants of the governing equations of the melt pool can also be inferred through data-driven discovery. In addition, the physics-informed neural network (PINN) architecture has been optimized for efficient model training. The data efficiency of the PINN model is attributed to the soft penalty that incorporates the governing partial differential equations (PDEs), initial conditions, and boundary conditions into the loss function.
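To illustrate the soft-penalty idea in general terms (a toy stand-in, not the paper's melt-pool model), the sketch below assembles a PINN-style loss for the 1-D heat equation $u_t = u_{xx}$, combining a PDE-residual term with initial- and boundary-condition penalties; finite differences replace automatic differentiation, and all names and the choice of equation are illustrative assumptions.

```python
import math, random

def pinn_loss(u, n_pts=200, h=1e-4):
    """Soft-penalty loss: PDE residual + initial- and boundary-condition
    terms for u_t = u_xx on (0,1) with u(x,0) = sin(pi x), u(0,t) = u(1,t) = 0."""
    random.seed(0)
    pde = ic = bc = 0.0
    for _ in range(n_pts):
        x, t = random.uniform(h, 1 - h), random.uniform(h, 1.0)
        u_t = (u(x, t + h) - u(x, t - h)) / (2 * h)              # central differences
        u_xx = (u(x + h, t) - 2 * u(x, t) + u(x - h, t)) / h**2
        pde += (u_t - u_xx) ** 2                                  # PDE residual penalty
        ic += (u(x, 0.0) - math.sin(math.pi * x)) ** 2            # initial condition
        bc += u(0.0, t) ** 2 + u(1.0, t) ** 2                     # Dirichlet boundaries
    return (pde + ic + bc) / n_pts

exact = lambda x, t: math.exp(-math.pi**2 * t) * math.sin(math.pi * x)
wrong = lambda x, t: math.exp(-t) * math.sin(math.pi * x)         # wrong decay rate

print(pinn_loss(exact), pinn_loss(wrong))   # near zero vs. order-ten loss
```

In an actual PINN, `u` would be a neural network and this composite loss would be minimized by gradient descent; the point here is only that the soft penalty drives the candidate toward satisfying the PDE, initial, and boundary conditions simultaneously.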
We introduce a Fourier-Bessel-based spectral solver for Cauchy problems featuring Laplacians in polar coordinates under homogeneous Dirichlet boundary conditions. We use FFTs in the azimuthal direction to isolate angular modes, then perform a discrete Hankel transform (DHT) on each mode along the radial direction to obtain spectral coefficients. The two transforms are connected via numerical and cardinal interpolations. We analyze the boundary-dependent error bound of the DHT: in the worst case it is $\sim N^{-3/2}$ and governs the method, while in the best case it is $\sim e^{-N}$ and the numerical interpolation governs the error instead. The complexity is $O(N^3)$. Taking advantage of the fact that Bessel functions are eigenfunctions of the Laplacian operator, we solve linear equations exactly for all times. For nonlinear equations, we use a time-splitting method to integrate the solutions. We validate the method on the two-dimensional wave equation, which is linear, and on two nonlinear problems: a time-dependent Poiseuille flow and the flow of a Bose-Einstein condensate on a disk.
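The spectral fact the solver exploits can be checked directly for the $m=0$ azimuthal mode on the unit disk: $J_0(\lambda r)$ is an eigenfunction of the radial Laplacian $u'' + u'/r$ with eigenvalue $-\lambda^2$, where $J_0(\lambda)=0$ enforces the Dirichlet condition, so each modal coefficient of a linear equation evolves in closed form. The sketch below (a pure-Python illustration, not the paper's implementation) builds $J_0$ from its power series, locates its first zero by bisection, and verifies the eigenrelation by finite differences.

```python
import math

def j0(x):
    """Bessel J0 via its power series: sum_k (-1)^k (x/2)^{2k} / (k!)^2."""
    term, total = 1.0, 1.0
    for k in range(1, 40):
        term *= -(x / 2) ** 2 / k**2
        total += term
    return total

# First zero of J0 by bisection: the Dirichlet condition J0(lam * R) = 0 on R = 1.
a, b = 2.0, 3.0                      # J0 changes sign on [2, 3]
for _ in range(80):
    m = (a + b) / 2
    if j0(a) * j0(m) <= 0:
        b = m
    else:
        a = m
lam = (a + b) / 2                    # ~2.404825557695773

# Radial Laplacian (m = 0 mode), u'' + u'/r, applied to u(r) = J0(lam * r).
r, h = 0.5, 1e-4
u = lambda s: j0(lam * s)
lap = ((u(r + h) - 2 * u(r) + u(r - h)) / h**2
       + (u(r + h) - u(r - h)) / (2 * h) / r)

print(lam, lap / u(r))               # the ratio is close to -lam**2
```

For the heat equation, say, this eigenrelation means the $m=0$ modal coefficient at time $t$ is simply the initial coefficient times $e^{-\lambda^2 t}$, with no time stepping required.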
In this paper, we study the weighted $k$-server problem on the uniform metric in both the offline and online settings. We start with the offline setting. In contrast to the (unweighted) $k$-server problem, which has a polynomial-time solution using min-cost flows, there are strong computational lower bounds for the weighted $k$-server problem, even on the uniform metric. Specifically, we show that, assuming the unique games conjecture, there are no polynomial-time algorithms with a sub-polynomial approximation factor, even with $c$-resource augmentation for $c < 2$. Furthermore, if we consider the natural LP relaxation of the problem, then obtaining a bounded integrality gap requires at least $\ell$-fold resource augmentation, where $\ell$ is the number of distinct server weights. We complement these results by obtaining a constant-approximation algorithm via LP rounding, with a resource augmentation of $(2+\epsilon)\ell$ for any constant $\epsilon > 0$. In the online setting, an $\exp(k)$ lower bound is known for the competitive ratio of any randomized algorithm for the weighted $k$-server problem on the uniform metric. In contrast, we show that $2\ell$-resource augmentation brings the competitive ratio down by an exponential factor, to only $O(\ell^2 \log \ell)$. Our online algorithm uses a two-stage approach: first obtaining a fractional solution using the online primal-dual framework, and then rounding it online.
Recently, Chen and Gao~\cite{ChenGao2017} proposed a new quantum algorithm for Boolean polynomial system solving, motivated by the cryptanalysis of some post-quantum cryptosystems. The key idea of their approach is to apply a Quantum Linear System (QLS) algorithm to a Macaulay linear system over $\mathbb{C}$ derived from the Boolean polynomial system. The efficiency of their algorithm depends on the condition number of the Macaulay matrix. In this paper, we give a strong lower bound on the condition number as a function of the Hamming weight of the Boolean solution, and show that in many (if not all) cases a Grover-based exhaustive search algorithm outperforms their algorithm. We then improve upon Chen and Gao's algorithm by introducing the Boolean Macaulay linear system over $\mathbb{C}$, obtained by reducing the original Macaulay linear system. This improved algorithm could significantly outperform the brute-force algorithm when the Hamming weight of the solution is logarithmic in the number of Boolean variables. Furthermore, we provide a simpler and more elementary proof of correctness for our improved algorithm using a reduction based on the Valiant-Vazirani affine hashing method, and extend the result to polynomial systems over $\mathbb{F}_q$, improving on subsequent work by Chen, Gao and Yuan~\cite{ChenGao2018}. We also suggest a new approach for extracting the solution of the Boolean polynomial system via a generalization of the quantum coupon collector problem~\cite{arunachalam2020QuantumCouponCollector}.
Low-rank matrix approximations appear in a number of scientific computing applications. We consider the Nystr\"{o}m method for approximating a positive semidefinite matrix $A$. In the case that $A$ is very large or its entries can only be accessed once, a single-pass version may be necessary. In this work, we perform a complete rounding error analysis of the single-pass Nystr\"{o}m method in two precisions, where the expensive matrix product with $A$ is assumed to be computed in the lower of the two precisions. Our analysis gives insight into how the sketching matrix and shift should be chosen to ensure stability, implementation aspects which have been commented on in the literature but not yet rigorously justified. We further develop a heuristic to determine how to pick the lower precision, which confirms the general intuition that the lower the desired rank of the approximation, the lower the precision that can be used without detriment. We also demonstrate that our mixed precision Nystr\"{o}m method can be used to inexpensively construct limited-memory preconditioners for the conjugate gradient method, and we derive a bound on the condition number of the resulting preconditioned coefficient matrix. Finally, we present numerical experiments on a set of matrices with various spectral decays and demonstrate the utility of our mixed precision approach.
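For intuition about the single-pass structure (in exact arithmetic, without the shift or the mixed-precision machinery analyzed in the paper), the Nystr\"{o}m method touches $A$ only once through the product $Y = A\Omega$ and then forms $\hat{A} = Y(\Omega^\top Y)^{-1}Y^\top$; on an exactly rank-2 PSD matrix with a 2-column sketch, $\hat{A}$ reproduces $A$ exactly. The tiny matrices and fixed sketch below are illustrative assumptions.

```python
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def transpose(X):
    return [list(row) for row in zip(*X)]

def inv2(M):
    """Inverse of a 2x2 matrix."""
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

# Rank-2 PSD test matrix A = B B^T.
B = [[1, 0], [0, 1], [1, 1]]
A = matmul(B, transpose(B))              # [[1,0,1],[0,1,1],[1,1,2]]

# Single-pass Nystrom: one product with A, then A_hat = Y (Omega^T Y)^{-1} Y^T.
Omega = [[1, 1], [0, 1], [1, 0]]         # sketch matrix (fixed for illustration)
Y = matmul(A, Omega)                     # the only access to A
C = matmul(transpose(Omega), Y)          # core matrix Omega^T A Omega
A_hat = matmul(matmul(Y, inv2(C)), transpose(Y))

print(A_hat)                             # reproduces A: rank(A) equals the sketch size
```

In finite precision the core matrix $C$ can be ill-conditioned, which is precisely why the choice of sketching matrix and shift matters in the paper's analysis.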
We introduce a relational semantics based on poset products, and provide sufficient conditions guaranteeing its soundness and completeness for various substructural logics. We also demonstrate that our relational semantics unifies and generalizes two semantics already appearing in the literature: Aguzzoli, Bianchi, and Marra's temporal flow semantics for H\'ajek's basic logic, and Lewis-Smith, Oliva, and Robinson's semantics for intuitionistic \L{}ukasiewicz logic. As a consequence of our general theory, we recover the soundness and completeness results of these prior studies in a uniform fashion, and extend them to infinitely many other substructural logics.
Imaging problems such as those in nanoCT require the solution of an inverse problem, where it is often taken for granted that the forward operator, i.e., the underlying physical model, is known exactly. In the present work we address the setting where the forward model is inexact due to stochastic or deterministic deviations during the measurement process. In particular, we investigate the performance of non-learned iterative reconstruction methods that account for this inexactness, as well as learned reconstruction schemes based on U-Nets and conditional invertible neural networks; the latter also provide the opportunity for uncertainty quantification. A large synthetic data set in line with a typical nanoCT setting is provided, and extensive numerical experiments are conducted to evaluate the proposed methods.
Regularization promotes well-posedness when solving an inverse problem with incomplete measurement data. The regularization term is typically designed based on an a priori characterization of the unknown signal, such as sparsity or smoothness. The standard inhomogeneous regularization incorporates a spatially varying exponent $p$ in the $\ell_p$ norm-based regularization to recover a signal whose characteristics vary spatially. This study proposes a weighted inhomogeneous regularization that extends the standard approach through a new exponent design and spatially varying weights. The new exponent design avoids misclassification when regions of different characteristics lie close to each other. The weights address a second issue: a region of one characteristic may be too small to be recovered effectively by the $\ell_p$ norm-based regularization even after it is identified correctly. A suite of numerical tests, including synthetic image experiments and the recovery of real sea ice from incomplete wave measurements, shows the efficacy of the proposed weighted inhomogeneous regularization.
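To make the exponent-design intuition concrete (a toy evaluation, not the paper's algorithm), the sketch below scores a signal with a sparse region of large spikes and a smooth region of small values under a regularizer $R(x) = \sum_i w_i |x_i|^{p_i}$. A matched exponent map ($p=1$ on the sparse region, $p=2$ on the smooth one) assigns the true signal a lower penalty than a mismatched map, which is what favors its recovery; all names and parameter values are illustrative assumptions.

```python
import math

def inhom_reg(x, p, w):
    """Weighted inhomogeneous regularizer R(x) = sum_i w_i * |x_i|^{p_i},
    with spatially varying exponents p_i and weights w_i."""
    return sum(wi * abs(xi) ** pi for xi, pi, wi in zip(x, p, w))

n = 40
sparse = [2.0 if i % 10 == 0 else 0.0 for i in range(n)]          # few large spikes
smooth = [0.3 * math.sin(2 * math.pi * i / n) for i in range(n)]  # small, smooth values
x = sparse + smooth

matched = [1.0] * n + [2.0] * n     # p = 1 on the sparse region, p = 2 on the smooth one
swapped = [2.0] * n + [1.0] * n     # exponents assigned to the wrong regions
w = [1.0] * (2 * n)                 # uniform weights; in practice small regions get larger w

print(inhom_reg(x, matched, w), inhom_reg(x, swapped, w))         # matched < swapped
```

The gap arises because $|x_i|^1 < |x_i|^2$ for the large spikes while $|x_i|^2 < |x_i|^1$ for the small smooth values, so a correctly classified exponent map penalizes the true signal least.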