We describe a family of decidable propositional dynamic logics in which atomic modalities satisfy additional conditions (for example, the axioms of K5, S5, or K45 for different atomic modalities). It follows from recent results (Kikot, Shapirovsky, Zolin, 2014; 2020) that if a modal logic $L$ admits a special type of filtration (so-called definable filtration), then its enrichments with modalities for transitive closure and converse relations also admit definable filtration. We use these results to show that if logics $L_1, \ldots, L_n$ admit definable filtration, then the propositional dynamic logic with converse extended by the fusion $L_1 * \ldots * L_n$ has the finite model property.
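For orientation, recall the classical filtration construction that "definable filtration" refines (a textbook sketch; the definability requirement of Kikot, Shapirovsky, and Zolin additionally asks the objects below to be definable by formulas, which we do not spell out here). Given a model $M = (W, R, V)$ and a finite, subformula-closed set $\Sigma$, put
$$ w \sim_\Sigma v \;\iff\; \forall \varphi \in \Sigma \,\big( M, w \models \varphi \Leftrightarrow M, v \models \varphi \big), $$
and let $\widehat{W} = W/{\sim_\Sigma}$ with $\widehat{V}(p) = \{\,[w] : w \in V(p)\,\}$ for $p \in \Sigma$. Any relation $\widehat{R}$ on $\widehat{W}$ satisfying the usual filtration conditions (for instance the minimal one, $[w]\,\widehat{R}\,[v]$ iff $w' R v'$ for some representatives $w' \in [w]$, $v' \in [v]$) preserves the truth of all $\Sigma$-formulas, and $|\widehat{W}| \le 2^{|\Sigma|}$, which is how filtration yields the finite model property.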
This paper considers the Cauchy problem for a nonlinear dynamic string equation of Kirchhoff type with time-varying coefficients. The objective is to develop a temporal discretization algorithm capable of approximating the solution of this initial-boundary value problem. To this end, a symmetric three-layer semi-discrete scheme is employed with respect to the temporal variable, in which the nonlinear term is evaluated at the middle node point. This approach allows the numerical solution at each temporal step to be obtained by inverting linear operators, yielding a system of second-order linear ordinary differential equations. Local convergence of the proposed scheme is established: it converges quadratically with respect to the time step on the local temporal interval. Several numerical experiments with the proposed algorithm on various test problems validate its performance; the numerical results obtained are in accordance with the theoretical findings.
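To make the structure of such a step concrete, the following sketch (our illustration, not the authors' exact scheme; the spatial discretization, the coefficient $q$, and all parameters are assumptions) advances $u_{tt} = q\big(\int_0^L u_x^2\,dx\big)\,u_{xx} + f$ by one symmetric three-layer step, with the Kirchhoff coefficient frozen at the middle time level so that only a linear system is solved:

```python
import numpy as np

# Symmetric three-layer step for u_tt = q(int_0^L u_x^2 dx) u_xx + f
# on (0, L) with homogeneous Dirichlet BCs (all names/parameters illustrative).
L_len, N, tau = 1.0, 100, 1e-3
h = L_len / N

# Second-difference matrix on the interior nodes.
A = (np.diag(-2.0 * np.ones(N - 1)) +
     np.diag(np.ones(N - 2), 1) +
     np.diag(np.ones(N - 2), -1)) / h**2
I = np.eye(N - 1)

def q(s):                          # hypothetical Kirchhoff coefficient
    return 1.0 + s

def grad_norm_sq(u):               # left-difference quadrature of int u_x^2 dx
    return np.sum(np.diff(u)**2) / h

def step(u_prev, u_mid, f_mid):
    """One step of (u_next - 2 u_mid + u_prev)/tau^2
       = q_n * A (u_next + u_prev)/2 + f_mid,
    with q_n frozen at the middle layer, so u_next solves a *linear* system."""
    qn = q(grad_norm_sq(u_mid))
    lhs = I - 0.5 * tau**2 * qn * A
    rhs = (2.0 * u_mid[1:-1] - u_prev[1:-1]
           + 0.5 * tau**2 * qn * (A @ u_prev[1:-1])
           + tau**2 * f_mid[1:-1])
    u_next = np.zeros(N + 1)       # boundary values stay zero
    u_next[1:-1] = np.linalg.solve(lhs, rhs)
    return u_next
```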
This paper addresses the computational problem of deciding invertibility (injectivity) of a Boolean map $F$ in $n$ Boolean variables. A special case of this problem is deciding invertibility of a map $F:\mathbb{F}_{2}^n\rightarrow\mathbb{F}_{2}^n$ over the binary field $\mathbb{F}_2$; the problem can further be stated over an arbitrary finite field $\mathbb{F}$ instead of $\mathbb{F}_2$. In this special case over a finite field, an algebraic condition for invertibility of $F$ is known to be equivalent to invertibility of the Koopman operator of $F$, as shown in \cite{RamSule}. In this paper a condition for invertibility is derived for Boolean maps $F:B_0^n\rightarrow B_0^n$, where $B_0$ is the two-element Boolean algebra, in terms of \emph{implicants} of Boolean equations. This condition is then extended to general maps in $n$ variables. It thus answers the question of invertibility of a map $F$ over the binary field $\mathbb{F}_2$ in an alternative way, in terms of implicants instead of the Koopman operator. The problem of deciding invertibility of a map $F$ (or that of finding its $GOE$) over finite fields appears to be distinct from the satisfiability problem (SAT) and from the problem of deciding consistency of polynomial equations over finite fields. Hence the well-known algorithms for deciding SAT, or for checking membership in an ideal generated by polynomials via Gr\"obner bases, are not known to answer the question of invertibility of a map. Similarly, algorithms for satisfiability or polynomial solvability do not appear to be useful for computing $GOE(F)$, even for maps over the binary field $\mathbb{F}_2$.
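For small $n$ the underlying definitions can be checked by brute force; the sketch below (illustrative only and exponential in $n$, in contrast to the implicant-based condition developed in the paper; it also assumes the common reading of $GOE(F)$ as the Garden-of-Eden set, the points without preimage) decides invertibility and computes $GOE(F)$ directly:

```python
from itertools import product

def is_invertible(F, n):
    """Brute-force check that F: {0,1}^n -> {0,1}^n is a bijection.
    Exponential in n -- illustration only."""
    image = {F(x) for x in product((0, 1), repeat=n)}
    return len(image) == 2 ** n   # on a finite set, injective iff surjective

def goe(F, n):
    """Garden-of-Eden set: points of {0,1}^n with no preimage under F."""
    image = {F(x) for x in product((0, 1), repeat=n)}
    return [y for y in product((0, 1), repeat=n) if y not in image]

# Example: a linear 3-bit map (XOR is addition over F_2); it is invertible,
# so its Garden-of-Eden set is empty.
F = lambda x: (x[0] ^ x[1], x[1] ^ x[2], x[2])
print(is_invertible(F, 3), goe(F, 3))   # True []
```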
The criticality problem in nuclear engineering asks for the principal eigen-pair of a Boltzmann operator describing neutron transport in a reactor core. Reliably designing and controlling such reactors requires assessing these quantities within quantifiable accuracy tolerances. In this paper we propose a paradigm that deviates from the common practice of approximately solving the corresponding spectral problem with a fixed, presumably sufficiently fine discretization. Instead, the present approach first devises iterative schemes, formulated in function space, that are shown to converge at a quantifiable rate without assuming any a priori excess regularity, exploiting only properties of the optical parameters in the underlying radiative transfer model. We develop the analytical and numerical tools for approximately realizing each iteration step within judiciously chosen accuracy tolerances, verified by a posteriori estimates, so as to still warrant quantifiable convergence to the exact eigen-pair. This is carried out in full first for a Newton scheme. Since Newton's method is only locally convergent, we additionally analyze the convergence of a power iteration in function space to produce sufficiently accurate initial guesses. Here we have to deal with intrinsic difficulties posed by compact but non-symmetric operators, which prevent the standard arguments used in the finite-dimensional case. Our main point is that we avoid any requirement that an initial guess already lie in a small neighborhood of the exact solution. We close with a discussion of the remaining intrinsic obstructions to a certifiable numerical implementation, mainly related to not knowing the gap between the principal eigenvalue and the next smaller one in modulus.
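In the finite-dimensional analogue the two ingredients of the proposed strategy can be sketched as follows (a schematic illustration with a generic matrix standing in for the discretized transport operator; the stopping criterion and the bordered Newton formulation are our assumptions):

```python
import numpy as np

def power_iteration(A, tol=1e-6, max_it=1000):
    """Power iteration for the (simple) principal eigenpair of A.
    Its rate is governed by the gap |lambda_2 / lambda_1| -- precisely
    the quantity that is unknown a priori in the transport setting."""
    x = np.ones(A.shape[0]); x /= np.linalg.norm(x)
    lam = x @ A @ x
    for _ in range(max_it):
        y = A @ x
        x_new = y / np.linalg.norm(y)
        lam_new = x_new @ A @ x_new
        if np.linalg.norm(A @ x_new - lam_new * x_new) < tol:
            return lam_new, x_new
        x, lam = x_new, lam_new
    return lam, x

def newton_refine(A, lam, x, it=5):
    """Newton on F(x, lam) = (A x - lam x, (x.x - 1)/2): locally quadratic,
    hence it needs the power-iteration output as a good initial guess."""
    n = A.shape[0]
    for _ in range(it):
        J = np.block([[A - lam * np.eye(n), -x[:, None]],
                      [x[None, :], np.zeros((1, 1))]])
        F = np.concatenate([A @ x - lam * x, [(x @ x - 1.0) / 2.0]])
        d = np.linalg.solve(J, -F)
        x, lam = x + d[:n], lam + d[n]
    return lam, x
```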
For the first time, a fully coupled Harmonic Balance method is developed for the forced response of turbomachinery blades. The method is applied to a state-of-the-art model of a turbine bladed disk with interlocked shrouds subjected to wake-induced loading. The recurrent opening and closing of the pre-loaded shroud contact causes a softening effect, leading to turning points in the amplitude-frequency curve near resonance. The coupled solver is therefore embedded into a numerical path continuation framework. Two variants are developed: coupled continuation of the solution path, and coupled re-iteration of selected solution points. While the re-iteration variant is slightly more costly per solution point, it has the important advantage that it can be run completely in parallel, which substantially reduces the wall-clock time. It is shown that the wake- and vibration-induced flow fields do not superimpose linearly, so that the state-of-the-art influence-coefficient-based methods, which rely on this linearity assumption, severely underestimate the resonant vibration level.
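The role of continuation is easiest to see on a toy problem: at a turning point the branch cannot be parameterized by the frequency-like parameter alone, so pseudo-arclength continuation parameterizes it by arclength instead (a generic sketch, not the paper's harmonic-balance solver; residual and step size are placeholders):

```python
import numpy as np

# Toy residual with a fold (turning point) at (x, lam) = (0, 1):
# R(x, lam) = x**2 + lam - 1. Natural continuation in lam fails there.
def R(x, lam):  return x**2 + lam - 1.0
def dR(x, lam): return np.array([2.0 * x, 1.0])       # [dR/dx, dR/dlam]

def correct(y, y_pred, t, it=8):
    """Newton on the bordered system [R(y); (y - y_pred) . t] = 0."""
    for _ in range(it):
        F = np.array([R(*y), (y - y_pred) @ t])
        J = np.vstack([dR(*y), t])
        y = y + np.linalg.solve(J, -F)
    return y

y = np.array([1.0, 0.0])                              # on the branch: R(1,0)=0
t = np.array([-1.0, 2.0]); t /= np.linalg.norm(t)     # initial tangent
ds, path = 0.1, []
for _ in range(30):
    y_pred = y + ds * t                               # predictor
    y = correct(y_pred, y_pred, t)                    # corrector
    # New tangent: solve [dR; t_old] t_new = [0; 1] to keep orientation.
    t = np.linalg.solve(np.vstack([dR(*y), t]), np.array([0.0, 1.0]))
    t /= np.linalg.norm(t)
    path.append(y)                                    # traverses the fold
```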
Computational methods for thermal radiative transfer problems exhibit high computational costs and a prohibitive memory footprint when the spatial and directional domains are finely resolved. A strategy to reduce such computational costs is dynamical low-rank approximation (DLRA), which represents and evolves the solution on a low-rank manifold, thereby significantly decreasing computational and memory requirements. Efficient discretizations for the DLRA evolution equations need to be carefully constructed to guarantee stability while enabling mass conservation. In this work, we focus on the Su-Olson closure and derive a stable discretization through an implicit coupling of energy and radiation density. Moreover, we propose a rank-adaptive strategy to preserve local mass conservation. Numerical results are presented that showcase the accuracy and efficiency of the proposed method.
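As a generic illustration of a DLRA time step (not the specific stable discretization derived in the paper; the right-hand side and all parameters are placeholders), a first-order projector-splitting step evolves only the low-rank factors:

```python
import numpy as np

def F(Y, A, B):
    """Illustrative right-hand side dY/dt = A Y + Y B, a stand-in for the
    discretized radiative-transfer operator."""
    return A @ Y + Y @ B

def ksl_step(U, S, V, dt, A, B):
    """One first-order projector-splitting (KSL) step of DLRA.
    Only the factors U (m x r), S (r x r), V (n x r) are ever stored."""
    # K-step: update U with S absorbed.
    K = U @ S + dt * F(U @ S @ V.T, A, B) @ V
    U1, S_hat = np.linalg.qr(K)
    # S-step: run S backward in time (the characteristic minus sign).
    S_tilde = S_hat - dt * U1.T @ F(U1 @ S_hat @ V.T, A, B) @ V
    # L-step: update V with S absorbed.
    L = V @ S_tilde.T + dt * F(U1 @ S_tilde @ V.T, A, B).T @ U1
    V1, S1T = np.linalg.qr(L)
    return U1, S1T.T, V1

# Memory: m*r + r*r + n*r numbers instead of m*n for the full solution.
m, n, r = 200, 200, 10
A = -np.eye(m); B = -np.eye(n)            # toy dissipative dynamics
U, _ = np.linalg.qr(np.random.randn(m, r))
V, _ = np.linalg.qr(np.random.randn(n, r))
S = np.diag(np.logspace(0, -6, r))
U, S, V = ksl_step(U, S, V, 1e-2, A, B)
```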
We give a comprehensive account of the parameterized complexity of model checking and satisfiability for propositional inclusion and independence logic. We discover that for most parameterizations the problems are either in FPT or paraNP-complete.
Simulation and emulation are popular approaches for experimentation in computer networks. However, due to their respective inherent drawbacks, existing solutions cannot perform control plane experiments that are both fast and realistic. To close this gap, we introduce Horse, a hybrid solution with an emulated control plane (for realism) and a simulated data plane (for speed). Decoupling the control and data planes allows us to speed up experiments without sacrificing control plane realism.
Epistemic logics model how agents reason about their beliefs and the beliefs of other agents. Existing logics typically assume the ability of agents to reason perfectly about propositions of unbounded modal depth. We present DBEL, an extension of S5 that models agents that can reason about epistemic formulas only up to a specific modal depth. To support explicit reasoning about agent depths, DBEL includes depth atoms $E_a^d$ (agent $a$ has depth exactly $d$) and $P_a^d$ (agent $a$ has depth at least $d$). We provide a sound and complete axiomatization of DBEL. We extend DBEL to support public announcements for bounded depth agents and show how the resulting DPAL logic generalizes standard axioms from public announcement logic. We present two alternate extensions and identify two undesirable properties, amnesia and knowledge leakage, that these extensions have but DPAL does not. We provide axiomatizations of these logics as well as complexity results for satisfiability and model checking. Finally, we use these logics to illustrate how agents with bounded modal depth reason in the classical muddy children problem, including upper and lower bounds on the depth knowledge necessary for agents to successfully solve the problem.
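As an illustration of depth-bounded reasoning (our simplified sketch, not the paper's DBEL semantics; the `can_evaluate` gate is our reading of the depth budget), a formula's modal depth can be computed syntactically and compared against an agent's depth before standard Kripke evaluation:

```python
from dataclasses import dataclass

@dataclass
class Model:
    R: dict    # agent -> {world: set of accessible worlds} (S5: equivalence)
    V: dict    # world -> set of atomic propositions true there

# Formulas as tuples: ('p',), ('not', f), ('and', f, g), ('K', agent, f).
def depth(f):
    """Modal depth: nesting level of K operators."""
    if f[0] == 'K':
        return 1 + depth(f[2])
    return max((depth(g) for g in f[1:] if isinstance(g, tuple)), default=0)

def holds(m, w, f):
    """Standard Kripke evaluation at world w."""
    if f[0] == 'not':
        return not holds(m, w, f[1])
    if f[0] == 'and':
        return holds(m, w, f[1]) and holds(m, w, f[2])
    if f[0] == 'K':
        return all(holds(m, v, f[2]) for v in m.R[f[1]][w])
    return f[0] in m.V[w]

def can_evaluate(agent_depth, f):
    """Simplified reading of the depth atoms: an agent of depth d only
    evaluates formulas of modal depth <= d."""
    return depth(f) <= agent_depth

M = Model(R={'a': {1: {1, 2}, 2: {1, 2}}}, V={1: {'p'}, 2: set()})
f = ('K', 'a', ('p',))                      # "agent a knows p", depth 1
print(can_evaluate(1, f), holds(M, 1, f))   # True False
```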
Advances in artificial intelligence often stem from the development of new environments that abstract real-world situations into a form where research can be done conveniently. This paper contributes such an environment based on ideas inspired by elementary Microeconomics. Agents learn to produce resources in a spatially complex world, trade them with one another, and consume those that they prefer. We show that the emergent production, consumption, and pricing behaviors respond to environmental conditions in the directions predicted by supply and demand shifts in Microeconomics. We also demonstrate settings where the agents' emergent prices for goods vary over space, reflecting the local abundance of goods. After the price disparities emerge, some agents then discover a niche of transporting goods between regions with different prevailing prices -- a profitable strategy because they can buy goods where they are cheap and sell them where they are expensive. Finally, in a series of ablation experiments, we investigate how choices in the environmental rewards, bartering actions, agent architecture, and ability to consume tradable goods can either aid or inhibit the emergence of this economic behavior. This work is part of the environment development branch of a research program that aims to build human-like artificial general intelligence through multi-agent interactions in simulated societies. By exploring which environment features are needed for the basic phenomena of elementary microeconomics to emerge automatically from learning, we arrive at an environment that differs from those studied in prior multi-agent reinforcement learning work along several dimensions. For example, the model incorporates heterogeneous tastes and physical abilities, and agents negotiate with one another as a grounded form of communication.
Since their inception, deep neural networks have made substantial contributions to everyday life, and machine-learning systems now offer advice in many aspects of daily life that rivals human judgment. However, despite these achievements, the design and training of neural networks remain challenging and unpredictable procedures. To lower the technical barrier for non-expert users, automated hyper-parameter optimization (HPO) has become a popular topic in both academia and industry. This paper provides a review of the most essential topics in HPO. The first section introduces the key hyper-parameters related to model training and structure and discusses their importance and methods for defining appropriate value ranges. The paper then focuses on the major optimization algorithms and their applicability, covering their efficiency and accuracy, especially for deep learning networks. It next reviews the major services and toolkits for HPO, comparing their support for state-of-the-art search algorithms, their compatibility with major deep learning frameworks, and their extensibility to user-designed modules. The paper concludes with open problems in applying HPO to deep learning, a comparison of the optimization algorithms, and prominent approaches for model evaluation under limited computational resources.
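As a minimal example of the kind of search loop such services automate (a generic sketch, not tied to any particular toolkit; the search space and objective are placeholders), random search over a small hyper-parameter space looks like this:

```python
import random

# Hypothetical search space with sensible value ranges
# (learning rate sampled on a log scale, as is conventional).
space = {
    "lr":         lambda: 10 ** random.uniform(-5, -1),
    "batch_size": lambda: random.choice([16, 32, 64, 128]),
    "layers":     lambda: random.randint(1, 6),
}

def objective(cfg):
    """Stand-in for 'train the network, return validation score'.
    A real HPO service would invoke the training loop here."""
    return -((cfg["lr"] - 1e-3) ** 2) - 0.01 * cfg["layers"]

def random_search(n_trials=50, seed=0):
    random.seed(seed)
    best_cfg, best_score = None, float("-inf")
    for _ in range(n_trials):
        cfg = {k: sample() for k, sample in space.items()}
        score = objective(cfg)
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score

print(random_search())
```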