In this article, we show that the completion problem, i.e., the decision problem of whether a partial structure can be completed to a full structure, is NP-complete for many combinatorial structures. While the gadgets for most reductions in the literature are found by hand, we present an algorithm that constructs gadgets in a fully automated way. Using our SAT-based framework, we present the first thorough study of the completion problem on sign mappings with forbidden substructures, classifying thousands of structures for which the completion problem is NP-complete. Our list in particular includes interior triple systems, which were introduced by Knuth towards an axiomatization of planar point configurations. Last but not least, we give an infinite family of structures, generalizing interior triple systems to higher dimensions, for which the completion problem is NP-complete.
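To make the SAT connection concrete, the following minimal sketch (assuming the python-sat package) phrases a tiny, hypothetical completion question as satisfiability under assumptions: each undetermined sign becomes a Boolean variable, each forbidden substructure contributes a clause ruling out one sign pattern, and the partially fixed signs are passed as assumptions. The instance is illustrative only and is not one of the paper's gadgets.

```python
# A toy completion-as-SAT instance (hypothetical, for illustration only).
from pysat.solvers import Glucose3

# Variables 1..4 are the signs of four triples (True = '+', False = '-').
# Each clause forbids one full sign pattern on a substructure; e.g. the first
# clause says "triples 1, 2, 3 must not all be '+'".
forbidden = [
    [-1, -2, -3],
    [1, -2, 4],
    [2, 3, -4],
]

# Partial structure: triple 1 is already fixed to '+', triple 4 to '-'.
partial = [1, -4]

with Glucose3(bootstrap_with=forbidden) as solver:
    completable = solver.solve(assumptions=partial)
    print("completable:", completable)
    if completable:
        print("completion:", solver.get_model())
```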
Over the last two decades, a growing body of experimental research has provided evidence that linguistic frames influence human behaviour in economic games, beyond the economic consequences of the available actions. This article proposes a novel framework, the LENS model, that transcends the traditional confines of outcome-based preference models. According to the LENS model, the Linguistic description of the decision problem triggers Emotional responses and suggests potential Norms of behaviour, which then interact to shape an individual's Strategic choice. The article reviews experimental evidence that supports each path of the LENS model. Furthermore, it identifies and discusses several critical research questions that arise from this model, pointing towards avenues for future inquiry.
Reinforcement Learning is the premier technique for approaching sequential decision problems, including complex tasks such as driving cars and landing spacecraft. Among software validation and verification practices, testing for functional fault detection is a convenient way to build trust in the learned decision model. While recent works seek to maximise the number of detected faults, none consider fault characterisation during the search to promote diversity. We argue that policy testing should not aim to find as many failures as possible (e.g., inputs that trigger similar car crashes) but rather to reveal faults in the model that are as informative and diverse as possible. In this paper, we explore the use of quality diversity optimisation to address fault diversity in policy testing. Quality diversity (QD) optimisation is a family of evolutionary algorithms for hard combinatorial optimisation problems in which high-quality, diverse solutions are sought. We define and address the underlying challenges of adapting QD optimisation to the testing of action policies. Furthermore, we compare classical QD optimisers to state-of-the-art frameworks dedicated to policy testing, both in terms of search efficiency and fault diversity. We show that QD optimisation, while conceptually simple and generally applicable, finds markedly more diverse faults in the decision model, and we conclude that QD-based policy testing is a promising approach.
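As a rough illustration of how a QD optimiser can be adapted to policy testing, the sketch below runs a minimal MAP-Elites loop over test inputs (initial environment states), keeping the best-quality input per behaviour-descriptor cell. The environment, quality measure, and descriptor function are hypothetical stand-ins, not the frameworks compared in the paper.

```python
# Minimal MAP-Elites sketch for fault-diversity policy testing (illustrative only).
import random
import numpy as np

GRID = (10, 10)    # resolution of the behaviour-descriptor archive
N_EVALS = 2_000    # total evaluation budget
SIGMA = 0.1        # mutation scale for test inputs (initial states)

def evaluate(x):
    """Run the policy under test from initial state x (placeholder simulator).

    Returns (quality, descriptor): quality is assumed to be the negative episode
    return (higher = closer to a failure); the descriptor is a 2-D
    characterisation of *how* the episode failed (e.g. crash position)."""
    quality = -np.sum(x ** 2)                     # toy surrogate for -return
    descriptor = (np.tanh(x[0]), np.tanh(x[1]))   # toy fault characterisation
    return quality, descriptor

def to_cell(descriptor):
    """Map a descriptor in [-1, 1]^2 onto a discrete archive cell."""
    return tuple(
        min(int((d + 1.0) / 2.0 * n), n - 1) for d, n in zip(descriptor, GRID)
    )

archive = {}  # cell -> (quality, test input)

for _ in range(N_EVALS):
    if archive and random.random() < 0.9:
        _, parent = random.choice(list(archive.values()))          # pick an elite
        x = parent + np.random.normal(0.0, SIGMA, size=parent.shape)
    else:
        x = np.random.uniform(-1.0, 1.0, size=4)                   # random initial state
    q, d = evaluate(x)
    cell = to_cell(d)
    if cell not in archive or q > archive[cell][0]:
        archive[cell] = (q, x)

# Each occupied cell is a behaviourally distinct (potential) fault.
print(f"{len(archive)} distinct behaviour cells covered")
```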
In this paper, we consider the physical layer security of an RIS-assisted multiple-antenna communication system with randomly located eavesdroppers. The exact distributions of the received signal-to-noise ratios (SNRs) at the legitimate user and at the eavesdroppers, whose locations follow a Poisson point process (PPP), are derived, and a closed-form expression for the secrecy outage probability (SOP) is obtained. It is revealed that the secrecy performance is mainly determined by the number of RIS reflecting elements, whereas the impact of the number of transmit antennas and the transmit power at the base station is marginal. In addition, when the locations of the randomly located eavesdroppers are unknown, deploying the RIS closer to the legitimate user rather than to the base station is shown to be more efficient. We also perform an analytical study demonstrating that the secrecy diversity order depends on the path loss exponent of the RIS-to-ground links. Finally, numerical simulations are conducted to verify the accuracy of these theoretical observations.
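For readers who want to see the quantities involved, the sketch below estimates an SOP by Monte Carlo with PPP-distributed eavesdroppers. The channel model (Rayleigh fading with distance-based path loss) and all parameter values are deliberately simplified placeholders and do not reproduce the paper's exact RIS-assisted channel model or its closed-form expression.

```python
# Monte Carlo SOP sketch with PPP eavesdroppers (toy channel model, not the paper's).
import numpy as np

rng = np.random.default_rng(3)

lam, radius = 1e-3, 50.0   # eavesdropper density [1/m^2] and disc radius [m]
alpha = 3.0                # path-loss exponent of the eavesdropper links
snr_tx = 1e4               # transmit SNR (linear)
d_user = 20.0              # legitimate user distance [m]
r_th = 1.0                 # target secrecy rate [bit/s/Hz]
n_mc = 50_000

outage = 0
for _ in range(n_mc):
    # Legitimate link SNR (toy Rayleigh/path-loss model).
    snr_b = snr_tx * rng.exponential() * d_user ** (-alpha)
    # PPP eavesdroppers in a disc: Poisson count, uniform positions.
    k = rng.poisson(lam * np.pi * radius ** 2)
    if k == 0:
        snr_e = 0.0
    else:
        r = radius * np.sqrt(rng.uniform(size=k))                      # radial distances
        snr_e = np.max(snr_tx * rng.exponential(size=k) * r ** (-alpha))
    c_s = max(np.log2(1 + snr_b) - np.log2(1 + snr_e), 0.0)            # secrecy rate
    outage += c_s < r_th

print("estimated SOP:", outage / n_mc)
```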
In this work, we introduce a three-step semiparametric methodology for the estimation of production frontiers. We consider a model inspired by the well-known Cobb-Douglas production function, in which the input factors enter multiplicatively. Efficiency in the proposed model is assumed to follow a continuous univariate one-parameter distribution on $(0,1)$, referred to as Matsuoka's distribution, which is discussed in detail. After linearizing the model, the first step is to estimate the regression function semiparametrically with a local linear smoother. The second step focuses on the estimation of the efficiency parameter. Finally, we estimate the production frontier through a plug-in methodology. We develop a rigorous asymptotic theory for the proposed three-step estimation, including consistency and asymptotic normality, and derive the associated convergence rates. We also study Matsuoka's distribution, deriving its main properties; it exhibits a versatile array of shapes capable of effectively capturing the typical behavior of efficiency in production frontier models. To complement the large-sample results, a Monte Carlo simulation study is conducted to assess the finite-sample performance of the proposed three-step methodology. An empirical application to a dataset of Danish milk producers is also presented.
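The sketch below illustrates only the first step on simulated data: a local linear smoother applied to the log-linearised model, where log Y equals the log-frontier plus the log-efficiency. The Gaussian kernel, bandwidth, and the Beta efficiency law are illustrative placeholders (in particular, Matsuoka's distribution is not implemented here); the remaining steps would shift the fitted curve upward using the estimated efficiency parameter.

```python
# Step-1 sketch: local linear smoothing of log Y on X (illustrative choices throughout).
import numpy as np

def local_linear(x0, X, Y, h):
    """Local linear estimate of E[Y | X = x0] with a Gaussian kernel and bandwidth h."""
    u = (X - x0) / h
    w = np.exp(-0.5 * u ** 2)                       # kernel weights
    Z = np.column_stack([np.ones_like(X), X - x0])  # local linear design
    WZ = Z * w[:, None]
    beta = np.linalg.solve(Z.T @ WZ, WZ.T @ Y)      # weighted least squares
    return beta[0]                                  # intercept = fit at x0

# Toy multiplicative frontier data: Y = frontier(X) * efficiency, efficiency in (0, 1).
rng = np.random.default_rng(0)
X = rng.uniform(1.0, 5.0, size=300)
eff = rng.beta(5.0, 1.0, size=300)                  # placeholder efficiency law
Y = (2.0 * X ** 0.5) * eff

logY = np.log(Y)
grid = np.linspace(1.2, 4.8, 50)
m_hat = np.array([local_linear(x0, X, logY, h=0.4) for x0 in grid])
# m_hat estimates log(frontier) + E[log(efficiency)]; steps 2 and 3 correct this shift.
```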
We present a new framework for the non-convex robust hypothesis testing problem, in which the goal is to find the optimal detector that minimizes the maximum of the worst-case type-I and type-II risk functions. The distributional uncertainty sets are constructed to be centered around the empirical distribution derived from samples, based on the Sinkhorn discrepancy. Given that the objective involves non-convex, non-smooth probabilistic functions that are often intractable to optimize, existing methods resort to approximations rather than exact solutions. To tackle this challenge, we introduce an exact mixed-integer exponential conic reformulation of the problem, which can be solved to global optimality with a moderate amount of input data. Subsequently, we propose a convex approximation and demonstrate its superiority over current state-of-the-art methodologies in the literature. Furthermore, we establish connections between robust hypothesis testing and regularized formulations of the non-robust risk functions, offering insightful interpretations. Our numerical study highlights the satisfactory testing performance and computational efficiency of the proposed framework.
With the advancement of medicine, alternative exposures or interventions are emerging with respect to a common outcome, and there is a need to formally test the difference in the associations of multiple exposures. We propose a duplication-method-based multivariate Wald test in Cox proportional hazards regression to test the difference in the associations of multiple exposures with the same outcome. The proposed method applies to exposures modeled as linear or categorical terms. To illustrate our method, we applied it to compare the associations between alignment with two different dietary patterns, modeled either as continuous or quartile exposures, and incident chronic disease, defined as a composite of CVD, cancer, and diabetes, in the Health Professionals Follow-up Study. Relevant sample code in R that implements the proposed approach is provided. The proposed duplication-method-based approach offers a flexible, formal statistical test of multiple exposures for a common outcome with minimal assumptions.
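The paper's sample code is in R; as a language-agnostic illustration of the final testing step, the Python sketch below computes the multivariate Wald statistic for H0: the two exposure coefficients are equal. The coefficient estimates and their covariance matrix are placeholders; in practice they would come from the Cox model fitted to the duplicated data set with variance clustered on the duplicated subjects.

```python
# Wald test sketch for H0: beta_1 = beta_2 (placeholder estimates, not study results).
import numpy as np
from scipy.stats import chi2

# Placeholder estimates for two exposure coefficients (e.g. two dietary patterns).
beta = np.array([0.18, 0.07])
cov = np.array([[0.0025, 0.0008],
                [0.0008, 0.0030]])   # robust covariance of (beta_1, beta_2)

# Contrast matrix testing beta_1 - beta_2 = 0 (more rows for categorical exposures).
C = np.array([[1.0, -1.0]])

diff = C @ beta
W = diff.T @ np.linalg.solve(C @ cov @ C.T, diff)   # Wald statistic
p_value = chi2.sf(W, df=C.shape[0])
print(f"Wald statistic = {W:.3f}, p = {p_value:.3f}")
```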
We propose a material design method based on gradient-based optimization of compositions, overcoming the limitations of traditional approaches such as exhaustive database searches and conditional generative models. The method optimizes the inputs via backpropagation, aligning the model's output closely with the target property and facilitating the discovery of unlisted materials and precise property determination. It is also capable of adaptive optimization under new conditions without retraining. Applying it to the search for high-Tc superconductors, we identified potential compositions beyond existing databases and discovered new hydrogen superconductors via conditional optimization. This method is versatile and significantly advances material design by enabling efficient, extensive searches and adaptability to new constraints.
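The core mechanism, backpropagating through a frozen property predictor to update the input composition rather than the model weights, can be sketched as follows. The tiny MLP and the 5-element composition vector are illustrative stand-ins for a trained predictor and a real composition encoding, not the paper's actual model.

```python
# Input-optimization sketch: gradient descent on a composition through a frozen predictor.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Frozen surrogate property predictor (in practice: a model trained on a database).
model = nn.Sequential(nn.Linear(5, 32), nn.ReLU(), nn.Linear(32, 1))
for p in model.parameters():
    p.requires_grad_(False)

target = torch.tensor([1.0])                 # desired property value (e.g. high Tc)
logits = torch.zeros(5, requires_grad=True)  # unconstrained composition parameters
opt = torch.optim.Adam([logits], lr=0.05)

for step in range(500):
    comp = torch.softmax(logits, dim=0)      # fractions constrained to sum to 1
    pred = model(comp)
    loss = (pred - target).pow(2).mean()     # match the target property
    opt.zero_grad()
    loss.backward()                          # gradients flow to the *input*, not the weights
    opt.step()

print("optimised composition:", torch.softmax(logits, dim=0).detach().numpy())
```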
In this paper, we introduce a novel approach to centroidal state estimation, which plays a crucial role in predictive model-based control strategies for dynamic legged locomotion. Our approach uses Koopman operator theory to transform the robot's complex nonlinear dynamics into a linear system, employing dynamic mode decomposition and deep learning for model construction. We evaluate both models on their linearization accuracy and their capability to capture both fast and slow dynamic system responses. We then select the most suitable model for estimation purposes and integrate it within a moving horizon estimator. This estimator is formulated as a convex quadratic program to facilitate robust, real-time centroidal state estimation. In extensive simulation experiments on a quadruped robot executing various dynamic gaits, our data-driven framework outperforms conventional filtering techniques based on nonlinear dynamics. Our estimator addresses the challenges posed by force/torque measurement noise in highly dynamic motions and accurately recovers the centroidal states, demonstrating the adaptability and effectiveness of the Koopman-based linear representation for complex locomotion behaviors. Importantly, our model based on dynamic mode decomposition, trained on two locomotion patterns (trot and jump), successfully estimates the centroidal states for a different motion (bound) without retraining.
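As a pointer to the model-construction step, the sketch below fits a linear one-step predictor by dynamic mode decomposition from snapshot data, i.e. a least-squares fit of x_{k+1} ≈ A x_k. The random stable trajectory is a placeholder for logged (possibly lifted) centroidal states, and no rank truncation or deep lifting is performed here.

```python
# DMD sketch: least-squares fit of a linear one-step model from snapshots (toy data).
import numpy as np

rng = np.random.default_rng(1)

# Placeholder trajectory of n-dimensional states over T steps from a stable linear system.
n, T = 6, 300
A_true = 0.95 * np.eye(n) + 0.01 * rng.standard_normal((n, n))
states = np.zeros((n, T))
states[:, 0] = rng.standard_normal(n)
for k in range(T - 1):
    states[:, k + 1] = A_true @ states[:, k] + 0.01 * rng.standard_normal(n)

X, X_next = states[:, :-1], states[:, 1:]

# DMD without rank truncation: A_hat = X_next X^+ (Moore-Penrose pseudoinverse).
A_hat = X_next @ np.linalg.pinv(X)

print("relative one-step prediction error:",
      np.linalg.norm(X_next - A_hat @ X) / np.linalg.norm(X_next))
```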
Spatially misaligned data can be fused using a Bayesian melding model that assumes that underlying all observations there is a spatially continuous Gaussian random field. This model can be used, for example, to predict air pollution levels by combining point data from monitoring stations and areal data from satellite imagery. However, if the data exhibit preferential sampling, that is, if the observed point locations are not independent of the underlying spatial process, inference from models that ignore such a dependence structure may not be valid. In this paper, we present a Bayesian spatial model for the fusion of point and areal data that takes preferential sampling into account. The model combines the Bayesian melding specification with a model for the stochastically dependent sampling and underlying spatial processes. Fast Bayesian inference is performed using the integrated nested Laplace approximation (INLA) and stochastic partial differential equation (SPDE) approaches. The performance of the model is assessed using simulated data under a range of scenarios and sampling strategies that can appear in real settings. The model is also applied to predict air pollution in the USA.
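To make the notion of preferential sampling concrete, the simulation sketch below draws point observation sites with a probability that increases with the latent field itself (via a log-linear link), rather than independently of it. The grid-based field, the exponential covariance, and the link parameter are illustrative choices, and the sketch does not perform the paper's INLA/SPDE inference.

```python
# Simulating preferentially sampled point data from a latent spatial field (illustrative).
import numpy as np

rng = np.random.default_rng(2)

# Latent spatial field on a grid (exponential covariance, unit square).
n = 40
xs = np.linspace(0.0, 1.0, n)
xx, yy = np.meshgrid(xs, xs)
coords = np.column_stack([xx.ravel(), yy.ravel()])
d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
cov = np.exp(-d / 0.2)
field = np.linalg.cholesky(cov + 1e-6 * np.eye(n * n)) @ rng.standard_normal(n * n)

# Preferential sampling: site-selection probability increases with the field value.
alpha = 2.0                                  # strength of the preferentiality
p = np.exp(alpha * field)
p /= p.sum()
sites = rng.choice(n * n, size=100, replace=False, p=p)

# Noisy point observations at the (preferentially chosen) sites; ignoring how the
# sites were chosen would bias predictions of the field upward.
obs = field[sites] + 0.1 * rng.standard_normal(sites.size)
```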
In this paper we define a class of polynomial functors suited for constructing coalgebras that represent processes in which uncertainty plays an important role. Among these polynomial functors we include upper and lower probability measures, finitely additive probability measures, plausibility measures (and their duals, belief functions), and possibility measures. We give axioms and inference rules for the associated system of coalgebraic modal logic, and we construct the canonical coalgebras to prove a completeness result.