The distributed task allocation problem, one of the most interesting distributed optimization challenges, has received considerable research attention recently. Previous works mainly focused on the task allocation problem in a population of individuals where there are no constraints on the amount of task each individual can afford. This assumption, however, does not always hold. In this paper, we study the task allocation problem with allocation constraints in a game-theoretic framework. We assume that each individual can afford a different amount of task and that its cost function is convex. To investigate the problem in the framework of population games, we construct a potential game and calculate the fitness function for each individual. We prove that when the Nash equilibrium of the potential game lies within the feasible set of the limited task allocation problem, it is the unique globally optimal solution; otherwise, we derive the unique globally optimal solution analytically. In addition, to confirm our theoretical results, we consider exponential and quadratic cost functions for each agent. Two algorithms based on these representative cost functions are proposed to numerically seek the optimal solution of the limited task allocation problem. We further perform Monte Carlo simulations whose results agree with our analytical calculations.
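As a concrete illustration of the quadratic-cost case, the following minimal Python sketch computes the capacity-constrained allocation that minimizes a separable quadratic cost, using the KKT conditions and a bisection on the Lagrange multiplier; the cost coefficients, capacities, and total task amount are illustrative assumptions, not data or code from the paper.

# Minimal sketch: minimise sum_i c_i * x_i**2 subject to sum_i x_i = T
# and 0 <= x_i <= u_i. The KKT conditions give x_i(lam) = clip(lam / (2 c_i), 0, u_i),
# and the multiplier lam is found by bisection.
import numpy as np

def allocate_quadratic(c, u, T, tol=1e-10):
    """Optimal allocation for convex quadratic costs with capacities u."""
    c, u = np.asarray(c, float), np.asarray(u, float)
    if T > u.sum():
        raise ValueError("infeasible: total task exceeds total capacity")
    x_of = lambda lam: np.clip(lam / (2.0 * c), 0.0, u)
    lo, hi = 0.0, 2.0 * c.max() * u.max() + 1.0   # bracket for the multiplier
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if x_of(mid).sum() < T:
            lo = mid
        else:
            hi = mid
    return x_of(0.5 * (lo + hi))

c = np.array([1.0, 2.0, 4.0])       # convex cost: c_i * x_i**2 (illustrative)
u = np.array([3.0, 3.0, 3.0])       # individual capacities (illustrative)
x = allocate_quadratic(c, u, T=5.0)
print(x, x.sum())                    # allocation sums to T; cheaper agents take more task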
We consider rather general structural equation models (SEMs) between a target and its covariates in several shifted environments. Given $k\in\mathbb{N}$ shifts, we consider the set of shifts that are at most $\gamma$ times as strong as a given weighted linear combination of these $k$ shifts, together with the worst (quadratic) risk over this entire set. This worst risk has a convenient decomposition, which we refer to as the "worst risk decomposition". We then find an explicit arg-min solution that minimizes the worst risk and consider its corresponding plug-in estimator, which is the main object of this paper. This plug-in estimator is (almost surely) consistent, and we first prove a concentration in measure result for it. The worst risk minimizer is reminiscent of the corresponding ordinary least squares solution in that it is the product of the inverse of a Gram matrix and a vector. Because of this, the central moments of the plug-in estimator are not well-defined in general; we instead consider these moments conditioned on the inverse Gram matrix being bounded by a given constant. We also study the conditional variance of the estimator with respect to a natural filtration for the incoming data. Similarly, we consider the conditional covariance matrix with respect to this filtration and prove a bound on the determinant of this matrix. This SEM generalizes the linear models studied previously, for instance in the settings of causal inference and anchor regression, but the concentration in measure result and the moment bounds are new even in the linear setting.
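The following minimal Python sketch illustrates the OLS-like structure referred to above: an estimator formed by applying an inverse empirical Gram matrix to a cross-moment vector, with moments considered only on the event that the inverse Gram matrix is bounded by a constant; the data-generating model and the bound are illustrative assumptions, not the paper's estimator.

# Minimal sketch of an OLS-style plug-in estimator with a bounded-Gram-inverse event.
import numpy as np

rng = np.random.default_rng(0)
n, p = 500, 3
X = rng.normal(size=(n, p))                    # covariates (illustrative data)
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(size=n)

gram = X.T @ X / n                             # empirical Gram matrix
cross = X.T @ y / n                            # empirical cross-moment vector
gram_inv = np.linalg.inv(gram)

c_max = 1e3                                    # bound defining the conditioning event
if np.linalg.norm(gram_inv, 2) <= c_max:       # event: inverse Gram matrix is bounded
    beta_hat = gram_inv @ cross                # plug-in, OLS-like estimator
    print(beta_hat)
else:
    print("conditioning event violated; moments not controlled")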
The moderate deviation regime concerns the finite block length trade-off between communication cost and error for information processing tasks in the asymptotic regime where the communication cost approaches a capacity-like quantity while the error vanishes at the same time. We find exact characterisations of these trade-offs for a variety of fully quantum communication tasks, including quantum source coding, quantum state splitting, entanglement-assisted quantum channel coding, and entanglement-assisted quantum channel simulation. The main technical tool we derive is a tight relation between the partially smoothed max-information and the hypothesis testing relative entropy. This allows us to obtain the expansion of the partially smoothed max-information for i.i.d. states in the moderate deviation regime.
Controlling spurious oscillations is crucial for designing reliable numerical schemes for hyperbolic conservation laws. This paper proposes a novel, robust, and efficient oscillation-eliminating discontinuous Galerkin (OEDG) method on general meshes, motivated by the damping technique in [Lu, Liu, and Shu, SIAM J. Numer. Anal., 59:1299-1324, 2021]. The OEDG method incorporates an OE procedure after each Runge-Kutta stage, devised by alternately evolving the conventional semidiscrete DG scheme and a damping equation. A novel damping operator is carefully designed to possess scale-invariant and evolution-invariant properties. We rigorously prove optimal error estimates of the fully discrete OEDG method for linear scalar conservation laws. These might be the first generic fully discrete error estimates for nonlinear DG schemes with an automatic oscillation control mechanism. The OEDG method exhibits many notable advantages. It effectively eliminates spurious oscillations for challenging problems across various scales and wave speeds, without problem-specific parameters. It obviates the need for characteristic decomposition in hyperbolic systems. It retains key properties of the conventional DG method, such as conservation, optimal convergence rates, and superconvergence. Moreover, it remains stable under the normal CFL condition. The OE procedure is non-intrusive, facilitating integration into existing DG codes as an independent module. Its implementation is easy and efficient, involving only simple multiplications of modal coefficients by scalars. The OEDG approach provides new insights into the damping mechanism for oscillation control. It reveals the role of the damping operator as a modal filter and establishes close relations between the damping and spectral viscosity techniques. Extensive numerical results confirm the theoretical analysis and validate the effectiveness and advantages of the OEDG method.
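To illustrate the flavour of such a damping step acting as a modal filter, the following Python sketch multiplies the higher modal coefficients in each cell by exponential damping factors driven by a cheap interface-jump indicator, while keeping the cell averages (and hence conservation) untouched; the indicator, the damping rates, and the array layout are illustrative assumptions rather than the paper's exact OE operator.

# Minimal sketch of a damping-style modal filter applied after a Runge-Kutta stage.
import numpy as np

def oe_damping_step(coeffs, dt, h):
    """coeffs: (n_cells, k+1) modal coefficients per cell; returns a damped copy."""
    n_cells, n_modes = coeffs.shape
    damped = coeffs.copy()
    # Cheap smoothness indicator: jumps of the cell averages across the two
    # interfaces of each cell (periodic boundary for simplicity).
    avg = coeffs[:, 0]
    jump = np.abs(avg - np.roll(avg, 1)) + np.abs(np.roll(avg, -1) - avg)
    # Normalise by the overall solution magnitude so the step is scale-invariant.
    scale = np.max(np.abs(avg)) + 1e-14
    sigma = jump / (scale * h)                      # damping rate per cell (assumed form)
    for m in range(1, n_modes):                     # the cell average (m = 0) is kept
        damped[:, m] *= np.exp(-m * (m + 1) * sigma * dt)
    return damped

# Usage inside a Runge-Kutta loop (schematic):
#   u_modal = rk_stage(u_modal, dt)      # conventional semidiscrete DG update
#   u_modal = oe_damping_step(u_modal, dt, h)
coeffs = np.random.randn(10, 4)                     # 10 cells, P3 elements (illustrative)
print(oe_damping_step(coeffs, dt=1e-3, h=0.1).shape)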
Electrical circuits are present in a variety of technologies, making their design an important part of computer-aided engineering. The growing number of tunable parameters that affect the final design leads to a need for new approaches to quantifying their impact. Machine learning may play a key role in this regard; however, current approaches often make suboptimal use of existing knowledge about the system at hand. For circuits, the description via modified nodal analysis is well understood. This particular formulation leads to systems of differential-algebraic equations (DAEs), which bring with them a number of peculiarities, e.g. hidden constraints that the solution needs to fulfill. We aim to use the recently introduced dissection concept for DAEs, which decouples a given system into ordinary differential equations depending only on the differential variables, and purely algebraic equations describing the relations between the differential and algebraic variables. The idea is then to learn only the differential variables and to reconstruct the algebraic ones using the relations obtained from the decoupling. This approach guarantees that the algebraic constraints are fulfilled up to the accuracy of the nonlinear system solver, which is the main benefit highlighted in this article.
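A minimal Python sketch of this decoupling idea is given below: a simple surrogate (a polynomial fit standing in for a learned model) predicts only the differential variable, and the algebraic variable is reconstructed by solving the algebraic relation with a nonlinear solver, so the constraint holds up to solver accuracy; the toy relation and the data are illustrative assumptions, not the circuit models considered in the article.

# Minimal sketch: learn the differential variable y(t), reconstruct the algebraic
# variable z(t) from the algebraic relation g(y, z) = 0.
import numpy as np
from scipy.optimize import brentq

def g(y, z):
    # Toy algebraic relation between differential and algebraic variables,
    # e.g. a nonlinear (diode-like) branch equation: z - (exp(y) - 1) = 0.
    return z - (np.exp(y) - 1.0)

# "Learned" differential variable: a cubic polynomial fit to samples of y(t).
t_train = np.linspace(0.0, 1.0, 20)
y_train = np.sin(2 * np.pi * t_train)          # stand-in for simulation data
y_model = np.poly1d(np.polyfit(t_train, y_train, deg=3))

def reconstruct(t):
    y = y_model(t)                             # predicted differential variable
    # Solve g(y, z) = 0 for the algebraic variable z (bracketing root solver).
    z = brentq(lambda zz: g(y, zz), -10.0, 10.0)
    return y, z

for t in (0.1, 0.3, 0.7):
    y, z = reconstruct(t)
    print(f"t={t:.1f}  y={y:+.3f}  z={z:+.3f}  |g|={abs(g(y, z)):.1e}")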
The problem of answering logical queries over incomplete knowledge graphs is receiving significant attention in the machine learning community. Neuro-symbolic models are a promising recent approach, showing good performance while offering good interpretability properties. These models rely on trained architectures to execute atomic queries and combine them with modules that simulate the symbolic operators in queries. Unfortunately, most neuro-symbolic query processors are limited to the so-called tree-like logical queries that admit a bottom-up execution, where the leaves are constant values or anchors and the root is the target variable. Tree-like queries, while expressive, fall short of expressing properties of knowledge graphs that are important in practice, such as the existence of multiple edges between entities or the presence of triangles. We propose a framework for answering arbitrary conjunctive queries over incomplete knowledge graphs. The main idea of our method is to approximate a cyclic query by an infinite family of tree-like queries and then leverage existing models for the latter. Our approximations achieve strong guarantees: they are complete, i.e. there are no false negatives, and optimal, i.e. they provide the best possible approximation using tree-like queries. Our method requires the approximations to be tree-like queries whose leaves are anchors or existentially quantified variables. Hence, we also show how some of the existing neuro-symbolic models can handle these queries, which is of independent interest. Experiments show that our approximation strategy achieves competitive results, and that including queries with existentially quantified variables tends to improve the general performance of these models, both on tree-like queries and on our approximation strategy.
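As a schematic illustration (not an example taken from the paper), consider the cyclic triangle query $Q(x) \leftarrow r(x,y) \wedge s(y,z) \wedge t(z,x)$, which is not tree-like. Cutting the cycle at $x$ and replacing its closing occurrence by a fresh existentially quantified variable yields the tree-like query $Q_1(x) \leftarrow \exists y\, \exists z\, \exists x'\, \big( r(x,y) \wedge s(y,z) \wedge t(z,x') \big)$; since $Q_1$ maps homomorphically into $Q$, every answer of $Q$ is an answer of $Q_1$ (no false negatives), and unrolling the cycle further gives an infinite family of successively tighter tree-like approximations.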
A long-standing issue in the parallel-in-time community is the poor convergence of standard iterative parallel-in-time methods for hyperbolic partial differential equations (PDEs), and for advection-dominated PDEs more broadly. Here, a local Fourier analysis (LFA) convergence theory is derived for the two-level variant of the iterative parallel-in-time method of multigrid reduction-in-time (MGRIT). This closed-form theory allows for new insights into the poor convergence of MGRIT for advection-dominated PDEs when using the standard approach of rediscretizing the fine-grid problem on the coarse grid. Specifically, we show that this poor convergence arises, at least in part, from inadequate coarse-grid correction of certain smooth Fourier modes known as characteristic components, a deficiency previously identified as causing poor convergence of classical spatial multigrid for steady-state advection-dominated PDEs. We apply this convergence theory to show that, for certain semi-Lagrangian discretizations of advection problems, MGRIT convergence using rediscretized coarse-grid operators cannot be robust with respect to the CFL number or coarsening factor. A consequence of this analysis is that techniques developed for improving convergence in the spatial multigrid context can be repurposed in the MGRIT context to develop more robust parallel-in-time solvers. This strategy has been used in recent work to great effect; here, we provide further theoretical evidence supporting the effectiveness of this approach.
A central challenge in the verification of quantum computers is benchmarking their performance as a whole and demonstrating their computational capabilities. In this work, we find a universal model of quantum computation, Bell sampling, that can be used for both of these tasks and thus provides an ideal stepping stone towards fault tolerance. In Bell sampling, we measure two copies of a state prepared by a quantum circuit in the transversal Bell basis. We show that the Bell samples are classically intractable to produce and, at the same time, constitute what we call a circuit shadow: from the Bell samples we can efficiently extract information about the quantum circuit preparing the state, as well as diagnose circuit errors. In addition to known properties that can be efficiently extracted from Bell samples, we give two new and efficient protocols: a test for the depth of the circuit and an algorithm to estimate a lower bound on the number of T gates in the circuit. With some additional measurements, our algorithm learns a full description of states prepared by circuits with low T-count.
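The following minimal NumPy sketch simulates transversal Bell sampling on two copies of a small state: a CNOT from each qubit of the first copy to the matching qubit of the second copy, a Hadamard on the first-copy qubit, and a computational-basis readout, so that each bit pair labels one of the four Bell states; the gate helpers and the two-qubit example state are illustrative assumptions, not the paper's implementation.

# Minimal sketch of transversal Bell sampling via a plain state-vector simulation.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

def apply_1q(state, gate, q):
    """Apply a single-qubit gate to qubit axis q of a (2,)*m tensor."""
    state = np.moveaxis(state, q, 0)
    shape = state.shape
    state = (gate @ state.reshape(2, -1)).reshape(shape)
    return np.moveaxis(state, 0, q)

def apply_2q(state, gate, q1, q2):
    """Apply a two-qubit gate (4x4, q1 is the control axis) to a (2,)*m tensor."""
    state = np.moveaxis(state, (q1, q2), (0, 1))
    shape = state.shape
    state = (gate @ state.reshape(4, -1)).reshape(shape)
    return np.moveaxis(state, (0, 1), (q1, q2))

def bell_sample(psi, n, shots=5, seed=0):
    """Measure |psi> tensor |psi> in the transversal Bell basis."""
    rng = np.random.default_rng(seed)
    two_copies = np.kron(psi, psi).reshape((2,) * (2 * n))
    # Transversal Bell measurement: CNOT from qubit i (copy 1) to qubit n+i (copy 2),
    # then H on qubit i, then a computational-basis readout of all 2n qubits.
    for i in range(n):
        two_copies = apply_2q(two_copies, CNOT, i, n + i)
        two_copies = apply_1q(two_copies, H, i)
    probs = np.abs(two_copies.reshape(-1)) ** 2
    outcomes = rng.choice(len(probs), size=shots, p=probs / probs.sum())
    # The bit pair (b_i, b_{n+i}) labels one of the four Bell states on pair i.
    return [np.binary_repr(o, width=2 * n) for o in outcomes]

# Example: two copies of a 2-qubit state prepared by H on qubit 0 and a CNOT.
n = 2
psi = np.zeros(2 ** n); psi[0] = 1.0
psi = apply_1q(psi.reshape(2, 2), H, 0)
psi = apply_2q(psi, CNOT, 0, 1).reshape(-1)
print(bell_sample(psi, n))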
The prediction of system responses to a given fatigue test bench drive signal is a challenging task, for which linear frequency response function models are commonly used. To account for non-linear phenomena, a novel hybrid model is suggested, which augments existing approaches with Long Short-Term Memory (LSTM) networks. Additional virtual sensing applications of this method are demonstrated. The approach is tested on non-linear experimental data from a servo-hydraulic test rig, and this dataset is made publicly available. A variety of metrics in the time and frequency domains, as well as fatigue strength under variable amplitudes, are employed in the evaluation.
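One plausible realisation of such a hybrid, sketched below in PyTorch, combines a linear baseline (here a causal FIR filter standing in for a frequency response function model) with an LSTM that learns the non-linear residual from the drive signal; the layer sizes, the FIR baseline, and the smoke test are illustrative assumptions and not the architecture used in the paper.

# Minimal sketch of a linear-plus-LSTM hybrid response model.
import torch
import torch.nn as nn

class HybridResponseModel(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        # Linear baseline: a causal FIR filter acting on the drive signal,
        # standing in for a frequency response function model.
        self.fir = nn.Conv1d(1, 1, kernel_size=64, padding=63)
        # LSTM that models the non-linear residual from the drive signal.
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, drive):               # drive: (batch, time, 1)
        lin = self.fir(drive.transpose(1, 2))[..., :drive.shape[1]]
        lin = lin.transpose(1, 2)           # (batch, time, 1)
        res, _ = self.lstm(drive)
        return lin + self.head(res)         # linear part + learned correction

# Tiny smoke test on random data.
model = HybridResponseModel()
x = torch.randn(8, 256, 1)                  # drive signals (illustrative)
y = torch.randn(8, 256, 1)                  # measured responses (illustrative)
loss = nn.functional.mse_loss(model(x), y)
loss.backward()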
We propose a finite element discretization for the steady, generalized Navier-Stokes equations for fluids with shear-dependent viscosity, completed with inhomogeneous Dirichlet boundary conditions and an inhomogeneous divergence constraint. We establish (weak) convergence of discrete solutions as well as a priori error estimates for the velocity vector field and the scalar kinematic pressure. Numerical experiments complement the theoretical findings.
This paper proposes a strategy to fundamentally resolve the problems of the conventional s-version of the finite element method (SFEM). Because SFEM models an analysis domain by superimposing meshes with different spatial resolutions, it has the intrinsic advantages of high local accuracy, low computation time, and a simple meshing procedure. However, it suffers from drawbacks such as inaccurate numerical integration and matrix singularity. Although several additional techniques have been proposed to mitigate these limitations, they are computationally expensive or ad hoc, and they detract from its strengths. To resolve these issues, we propose a novel strategy called B-spline-based SFEM. To improve the accuracy of numerical integration, we employ cubic B-spline basis functions with $C^2$ continuity across element boundaries as the global basis functions. To avoid matrix singularity, we apply different basis functions to different meshes; specifically, we employ Lagrange basis functions as the local basis functions. The numerical results indicate that, with the proposed method, the numerical integration can be performed with sufficient accuracy without any of the additional techniques used in conventional SFEM. Furthermore, the proposed method avoids matrix singularity and is superior to conventional methods in terms of convergence when solving the linear equations. Therefore, the proposed method has the potential to reduce computation time while maintaining accuracy comparable to conventional SFEM.
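The following Python sketch shows how a cubic B-spline basis with $C^2$ continuity across element boundaries can be constructed and checked with SciPy on a uniform clamped knot vector; the knot layout and the continuity check are illustrative assumptions, not the paper's global basis construction.

# Minimal sketch: cubic B-spline basis on [0, 1] with C^2 continuity at interior knots.
import numpy as np
from scipy.interpolate import BSpline

degree = 3                                  # cubic => C^2 across interior knots
n_elems = 4                                 # knot spans ("elements") on [0, 1]
interior = np.linspace(0.0, 1.0, n_elems + 1)
knots = np.concatenate(([0.0] * degree, interior, [1.0] * degree))  # clamped knot vector
n_basis = len(knots) - degree - 1

def basis_matrix(x):
    """Rows: evaluation points; columns: the cubic B-spline basis functions."""
    B = np.empty((len(x), n_basis))
    for j in range(n_basis):
        coeffs = np.zeros(n_basis)
        coeffs[j] = 1.0
        B[:, j] = BSpline(knots, coeffs, degree)(x)
    return B

x = np.linspace(0.0, 1.0, 9)
B = basis_matrix(x)
print(B.sum(axis=1))    # partition of unity: all ones (up to round-off)

# C^2 continuity check at an interior element boundary: the second derivative of
# each basis function matches from the left and from the right.
xb = interior[1]
for j in range(n_basis):
    coeffs = np.zeros(n_basis); coeffs[j] = 1.0
    d2 = BSpline(knots, coeffs, degree).derivative(2)
    assert abs(d2(xb - 1e-8) - d2(xb + 1e-8)) < 1e-4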