In this paper we present a general theory of $\Pi_{2}$-rules for systems of intuitionistic and modal logic. We introduce the notions of $\Pi_{2}$-rule system and of an inductive class, and provide model-theoretic and algebraic completeness theorems, which serve as our basic tools. As an illustration of the general theory, we analyse the structure of inductive classes of G\"{o}del algebras from a structure-theoretic and logical point of view. We show that, unlike in other well-studied settings (such as logics or single-conclusion rule systems), there are continuum many $\Pi_{2}$-rule systems extending $\mathsf{LC}=\mathsf{IPC}+(p\rightarrow q)\vee (q\rightarrow p)$, and show how our methods allow easy proofs of the admissibility of the well-known Takeuti-Titani rule. Our final results concern general questions of admissibility in $\mathsf{LC}$: (1) we present a full classification of those inductive classes which are inductively complete, i.e., in which all admissible $\Pi_{2}$-rules are derivable, and (2) we show that the problem of admissibility of $\Pi_{2}$-rules over $\mathsf{LC}$ is decidable.
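For orientation, the Takeuti-Titani rule mentioned above is the density rule of G\"{o}del logic, a paradigmatic $\Pi_{2}$-rule. In its usual propositional formulation (recalled here from the standard literature, not in the paper's own notation) it reads
\[
\frac{\vdash (\varphi \rightarrow p) \vee (p \rightarrow \psi) \vee \chi}{\vdash (\varphi \rightarrow \psi) \vee \chi}\,,
\]
where the propositional variable $p$ does not occur in $\varphi$, $\psi$, or $\chi$; the freshness condition on the "witness" variable $p$ is what gives the rule its $\Pi_{2}$ (universal-existential) character.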
Deep generative models have been shown to be problematic in the unsupervised out-of-distribution (OOD) detection task, where they tend to assign higher likelihoods to OOD samples. Previous studies of this issue are usually not applicable to the Variational Autoencoder (VAE). As a popular subclass of generative models, the VAE can be effective with a relatively small model size and is more stable and faster in training and inference, which can be advantageous in real-world applications. In this paper, we propose a novel VAE-based score called Error Reduction (ER) for OOD detection, which is based on a VAE that takes a lossy version of the training set as inputs and the original set as targets. Experiments are carried out on various datasets to show the effectiveness of our method; we also present the effect of design choices with ablation experiments. Our code is available at: //github.com/ZJLAB-AMMI/VAE4OOD.
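To make the setup concrete, the following is a minimal PyTorch sketch of how such a score could be computed. It assumes a trained VAE \verb|vae| mapping a lossy input to a reconstruction of the original, and it takes the score to be the reduction in reconstruction error; both the corruption \verb|lossy| and the exact score definition are illustrative assumptions, not the paper's specification (see the linked repository for the actual implementation).
\begin{verbatim}
import torch
import torch.nn.functional as F

def lossy(x, factor=4):
    # One plausible lossy transform (downsample then upsample); the paper
    # defines its own corruption, so treat this choice as a placeholder.
    small = F.avg_pool2d(x, factor)
    return F.interpolate(small, size=x.shape[-2:], mode="nearest")

def er_score(vae, x):
    # Assumed form of the Error Reduction score: how much the VAE, trained
    # to map lossy inputs back to the originals, reduces the reconstruction
    # error of lossy(x). In-distribution inputs should score higher.
    x_hat = vae(lossy(x))
    err_lossy = F.mse_loss(lossy(x), x, reduction="none").flatten(1).sum(-1)
    err_vae = F.mse_loss(x_hat, x, reduction="none").flatten(1).sum(-1)
    return err_lossy - err_vae
\end{verbatim}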
Diffusion models have emerged as powerful generative tools, rivaling GANs in sample quality and matching the likelihood scores of autoregressive models. A subset of these models, exemplified by DDIMs, exhibits an inherent asymmetry: they are trained over $T$ steps but sample from only a subset of those steps during generation. This selective sampling approach, though optimized for speed, inadvertently misses vital information from the unsampled steps, leading to potential compromises in sample quality. To address this issue, we present S$^{2}$-DMs, a new training method that uses an innovative loss term, $L_{skip}$, meticulously designed to reintegrate the information omitted during the selective sampling phase. The benefits of this approach are manifold: it notably enhances sample quality, is exceptionally simple to implement, requires minimal code modifications, and is flexible enough to be compatible with various sampling algorithms. On the CIFAR10 dataset, models trained using our algorithm showed an improvement of 3.27% to 14.06% over models trained with traditional methods across various sampling algorithms (DDIMs, PNDMs, DEIS) and different numbers of sampling steps (10, 20, ..., 1000). On the CELEBA dataset, the improvement ranged from 8.97% to 27.08%. Access to the code and additional resources is provided in the GitHub repository.
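For context, the asymmetry concerns samplers such as DDIM, which denoise along a sub-sequence $\tau_{1}<\dots<\tau_{S}$ of the $T$ training steps. With $\bar{\alpha}_{t}$ the cumulative noise schedule and $\epsilon_{\theta}$ the trained noise predictor, the deterministic DDIM update is
\[
x_{\tau_{i-1}} \;=\; \sqrt{\bar{\alpha}_{\tau_{i-1}}}\left(\frac{x_{\tau_{i}}-\sqrt{1-\bar{\alpha}_{\tau_{i}}}\,\epsilon_{\theta}(x_{\tau_{i}},\tau_{i})}{\sqrt{\bar{\alpha}_{\tau_{i}}}}\right)+\sqrt{1-\bar{\alpha}_{\tau_{i-1}}}\,\epsilon_{\theta}(x_{\tau_{i}},\tau_{i}),
\]
so the network is never queried at the steps skipped between $\tau_{i-1}$ and $\tau_{i}$; $L_{skip}$ is designed to feed that omitted information back in during training.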
We propose a method for estimating a log-concave density on $\mathbb R^d$ from samples, under the assumption that there exists an orthogonal transformation that makes the components of the random vector independent. While log-concave density estimation is hard both computationally and statistically, the independent components assumption alleviates both issues, while still maintaining a large non-parametric class. We prove that under mild conditions, at most $\tilde{\mathcal{O}}(\epsilon^{-4})$ samples (suppressing constants and log factors) suffice for our proposed estimator to be within $\epsilon$ of the original density in squared Hellinger distance. On the computational front, while the usual log-concave maximum likelihood estimate can be obtained via a finite-dimensional convex program, it is slow to compute -- especially in higher dimensions. We demonstrate through numerical experiments that our estimator can be computed efficiently, making it more practical to use.
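Concretely, the model class can be written as follows: assuming an orthogonal matrix $W$ whose rows give the independent directions, the density takes the product form
\[
f(x) \;=\; \prod_{j=1}^{d} f_{j}\big((Wx)_{j}\big), \qquad f_{j}=e^{\varphi_{j}} \text{ with } \varphi_{j}\colon\mathbb{R}\to[-\infty,\infty) \text{ concave},
\]
where no Jacobian factor appears because $|\det W|=1$. Estimation thus splits into recovering the orthogonal transformation $W$ and fitting $d$ univariate log-concave densities, which is where the computational and statistical savings over the general $d$-dimensional problem come from.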
Kinematic analysis is a crucial part of designing multi-joint robots. A multi-joint robot requires extensive mathematical calculation so that the end effector's position can be determined with respect to the other connected joints and their respective frames in a specific coordinate system. For a locomotive quadruped robot, two types of kinematics must be determined for the position of the robot's leg in the coordinate system: forward kinematics gives the end-effector position from the joint angles, and inverse kinematics computes the joint angles from a desired position. The joint angles are derived mathematically here, and a Python-based simulation has been carried out to verify and simulate the robot's locomotion. This approach has proven beneficial over other methods because the Python-based code simplifies serial communication, and it could therefore be deployed on a microcontroller unit to interact with a prototype leg.
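As an illustration of the two computations, the following sketch implements forward and inverse kinematics for a hypothetical two-link planar leg with link lengths \verb|l1|, \verb|l2|; an actual quadruped leg may add an abduction joint, but the pattern is the same.
\begin{verbatim}
from math import sin, cos, acos, atan2, isclose

def forward(t1, t2, l1, l2):
    # Foot position from joint angles (hip angle t1, knee angle t2).
    x = l1 * cos(t1) + l2 * cos(t1 + t2)
    y = l1 * sin(t1) + l2 * sin(t1 + t2)
    return x, y

def inverse(x, y, l1, l2):
    # Joint angles from a desired foot position, via the law of
    # cosines; returns one of the two elbow configurations.
    c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    t2 = acos(max(-1.0, min(1.0, c2)))
    t1 = atan2(y, x) - atan2(l2 * sin(t2), l1 + l2 * cos(t2))
    return t1, t2

# Round-trip check: IK followed by FK recovers the target position.
t1, t2 = inverse(0.15, -0.25, l1=0.2, l2=0.2)
x, y = forward(t1, t2, 0.2, 0.2)
assert isclose(x, 0.15, abs_tol=1e-9) and isclose(y, -0.25, abs_tol=1e-9)
\end{verbatim}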
In deep learning, classification tasks are formalized as optimization problems solved via the minimization of the cross-entropy. However, recent advancements in the design of objective functions allow the $f$-divergence measure to generalize the formulation of the optimization problem for classification. With this goal in mind, we adopt a Bayesian perspective and formulate the classification task as a maximum a posteriori probability problem. We propose a class of objective functions based on the variational representation of the $f$-divergence, from which we extract a list of five posterior probability estimators leveraging well-known $f$-divergences. In addition, driven by the challenge of improving the state-of-the-art approach, we propose a bottom-up method that leads us to the formulation of a new objective function (and posterior probability estimator) corresponding to a novel $f$-divergence referred to as shifted log (SL). First, we theoretically prove the convergence property of the posterior probability estimators. Then, we numerically test the set of proposed objective functions in three application scenarios: toy examples, image data sets, and signal detection/decoding problems. The analyzed tasks demonstrate the effectiveness of the proposed estimators and that the SL divergence achieves the highest classification accuracy in almost all the scenarios.
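For reference, the variational representation underlying the proposed objectives is the standard one due to Nguyen, Wainwright, and Jordan: for a convex generator $f$ with convex conjugate $f^{*}$,
\[
D_{f}(P\,\|\,Q) \;=\; \sup_{T}\; \mathbb{E}_{x\sim P}\left[T(x)\right] - \mathbb{E}_{x\sim Q}\left[f^{*}(T(x))\right],
\]
where the supremum ranges over measurable functions $T$. Parameterizing $T$ by a network and maximizing the right-hand side yields a trainable objective, and a posterior probability estimator can be read off from the maximizer; different choices of $f$ (KL recovering cross-entropy, or the proposed shifted log) give the different estimators in the list.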
We establish the unique ergodicity of the Markov chain generated by the stochastic theta method (STM) with $\theta \in [1/2, 1]$ for monotone SODEs, without growth restriction on the coefficients, driven by nondegenerate multiplicative noise. The main ingredient of the arguments lies in the construction of new Lyapunov functions, involving the coefficients, the stepsize, and $\theta$, and the irreducibility and the strong Feller property for the STM. We also generalize the arguments to the STM and its Galerkin-based full discretizations for a class of monotone SPDEs driven by infinite-dimensional nondegenerate multiplicative trace-class noise. Applying these results to the stochastic Allen--Cahn equation indicates that its drift-implicit Euler scheme is uniquely ergodic for any interface thickness, which gives an affirmative answer to a question proposed in (J. Cui, J. Hong, and L. Sun, Stochastic Process. Appl. (2021): 55--93). Numerical experiments verify our theoretical results.
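Recall that for an SODE $dX_{t}=b(X_{t})\,dt+\sigma(X_{t})\,dW_{t}$ with stepsize $h$, the stochastic theta method reads
\[
X_{n+1} \;=\; X_{n} + (1-\theta)\,b(X_{n})\,h + \theta\, b(X_{n+1})\,h + \sigma(X_{n})\,\Delta W_{n},
\]
so $\theta=1$ recovers the drift-implicit Euler scheme appearing in the application to the stochastic Allen--Cahn equation; roughly speaking, the implicit treatment of the drift for $\theta\in[1/2,1]$ is what makes monotone coefficients without growth restrictions tractable.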
We analyze a stochastic approximation algorithm for decision-dependent problems, wherein the data distribution used by the algorithm evolves along the iterate sequence. The primary examples of such problems appear in performative prediction and its multiplayer extensions. We show that under mild assumptions, the deviation between the average iterate of the algorithm and the solution is asymptotically normal, with a covariance that clearly decouples the effects of the gradient noise and the distributional shift. Moreover, building on the work of H\'ajek and Le Cam, we show that the asymptotic performance of the algorithm with averaging is locally minimax optimal.
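In symbols, with $\bar{x}_{n}=\frac{1}{n}\sum_{k=1}^{n}x_{k}$ the Polyak--Ruppert average of the iterates and $x^{\star}$ the solution, the main result takes the form of a central limit theorem
\[
\sqrt{n}\,(\bar{x}_{n}-x^{\star}) \;\xrightarrow{d}\; \mathcal{N}(0,\Sigma), \qquad \Sigma \;=\; \Sigma_{\mathrm{noise}} + \Sigma_{\mathrm{shift}},
\]
where the additive split of $\Sigma$ into a gradient-noise term and a distributional-shift term is the decoupling described above; the displayed decomposition is schematic, and the paper gives the precise covariances.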
We initiate a study of a new model of property testing that is a hybrid of testing properties of distributions and testing properties of strings. Specifically, the new model refers to testing properties of distributions, but these are distributions over huge objects (i.e., very long strings). Accordingly, the model accounts for the total number of local probes into these objects (resp., queries to the strings) as well as for the distance between objects (resp., strings), and the distance between distributions is defined as the earth mover's distance with respect to the relative Hamming distance between strings. We study the query complexity of testing in this new model, focusing on three directions. First, we try to relate the query complexity of testing properties in the new model to the sample complexity of testing these properties in the standard distribution testing model. Second, we consider the complexity of testing properties that arise naturally in the new model (e.g., distributions that capture random variations of fixed strings). Third, we consider the complexity of testing properties that were extensively studied in the standard distribution testing model: Two such cases are uniform distributions and pairs of identical distributions.
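To fix the metric: for distributions $P,Q$ over strings of length $n$, the distance used is
\[
d(P,Q) \;=\; \min_{\gamma}\; \mathbb{E}_{(x,y)\sim\gamma}\left[\delta(x,y)\right], \qquad \delta(x,y)=\frac{|\{i : x_{i}\neq y_{i}\}|}{n},
\]
where $\gamma$ ranges over couplings of $P$ and $Q$ (joint distributions with marginals $P$ and $Q$); this is the earth mover's distance with the relative Hamming distance as the ground metric.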
We study the following combinatorial problem. Given a set of $n$ y-monotone curves, which we call wires, a tangle determines the order of the wires on a number of horizontal layers such that any two consecutive layers differ only in swaps of neighboring wires. Given a multiset $L$ of swaps (that is, unordered pairs of wires) and an initial order of the wires, a tangle realizes $L$ if each pair of wires changes its order exactly as many times as specified by $L$. Deciding whether a given multiset of swaps admits a realizing tangle is known to be NP-hard [Yamanaka et al., CCCG 2018]. We prove that this problem remains NP-hard if every pair of wires swaps only a constant number of times. On the positive side, we improve the runtime of a previous exponential-time algorithm. We also show that the problem is in NP and fixed-parameter tractable with respect to the number of wires.
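For concreteness, the following sketch checks whether a candidate tangle realizes a given multiset $L$, under hypothetical data conventions (a layer is encoded as the list of top positions of its disjoint neighboring swaps); it is a verifier for the problem, not the decision algorithm itself.
\begin{verbatim}
from collections import Counter

def realizes(initial, layers, swaps):
    # initial: wire order on the first layer (list of wire ids).
    # layers: each layer lists positions i, meaning the wires currently
    #         at positions i and i+1 swap when passing to the next layer.
    # swaps: Counter mapping frozenset({a, b}) to the required swap count.
    order = list(initial)
    seen = Counter()
    for layer in layers:
        pos = sorted(layer)
        # Swaps within one layer must involve disjoint neighboring pairs.
        if any(j - i < 2 for i, j in zip(pos, pos[1:])):
            return False
        for i in pos:
            seen[frozenset((order[i], order[i + 1]))] += 1
            order[i], order[i + 1] = order[i + 1], order[i]
    return seen == swaps

# Wires 1 and 2 swap twice (there and back); wire 3 never moves.
L = Counter({frozenset((1, 2)): 2})
assert realizes([1, 2, 3], [[0], [0]], L)
\end{verbatim}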
We propose a new representation of functions in Sobolev spaces on an $N$-dimensional hyper-rectangle, expressing such functions in terms of their admissible derivatives, evaluated along lower-boundaries of the domain. These boundary values are either finite-dimensional or exist in the space $L_{2}$ of square-integrable functions -- free of the continuity constraints inherent to Sobolev space. Moreover, we show that the map from this space of boundary values to the Sobolev space is given by an integral operator with polynomial kernel, and we prove that this map is invertible. Using this result, we propose a method for polynomial approximation of functions in Sobolev space, reconstructing such an approximation from polynomial projections of the boundary values. We prove that this approximation is optimal with respect to a discrete-continuous Sobolev norm, and show through numerical examples that it exhibits better convergence behavior than direct projection of the function. Finally, we show that this approach may also be adapted to use a basis of step functions, to construct accurate piecewise polynomial approximations that do not suffer from, e.g., the Gibbs phenomenon.
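The one-dimensional case illustrates the representation: for $u\in H^{1}(a,b)$, the boundary value $u(a)$ (finite-dimensional) and the derivative $\partial_{x}u\in L_{2}(a,b)$ (free of continuity constraints) determine $u$ via
\[
u(x) \;=\; u(a) + \int_{a}^{x} (\partial_{s}u)(s)\,ds \;=\; u(a) + \int_{a}^{b} \mathbf{1}_{[s\leq x]}\,(\partial_{s}u)(s)\,ds,
\]
an integral operator whose kernel is polynomial (indeed constant) on its support, with inverse $u\mapsto(u(a),\partial_{x}u)$; the representation above extends this picture to admissible mixed derivatives on $N$-dimensional hyper-rectangles and higher-order Sobolev spaces.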