To succeed in their objectives, groups of individuals must be able to make quick and accurate collective decisions on the best option among a set of alternatives with different qualities. Group-living animals face this problem all the time, and plants and fungi are thought to do so too. Swarms of autonomous robots can also be programmed to make best-of-n decisions to solve tasks collaboratively. Humans, too, critically depend on such decisions and often ought to be better at making them. Thanks to their mathematical tractability, simple models such as the voter model and the local majority rule model have proven useful for describing the dynamics of these collective decision-making processes. To reach a consensus, individuals change their opinion by interacting with neighbors in their social network. At least among animals and robots, higher-quality options are exchanged more often and therefore spread faster than lower-quality options, leading to the collective selection of the best option. In this work, we study the impact of individuals making errors when pooling others' opinions, caused, for example, by the need to reduce cognitive load. Our analysis is grounded in a model that generalizes the two existing models (the local majority rule and the voter model), revealing a speed-accuracy trade-off regulated by the cognitive effort of individuals. We also investigate the impact of the interaction network topology on the collective dynamics. To do so, we extend our model and, using the heterogeneous mean-field approach, show the presence of another speed-accuracy trade-off regulated by network connectivity. Interestingly, reduced network connectivity corresponds to increased collective decision accuracy.
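To give a concrete feel for this class of dynamics, the following is a minimal sketch of a quality-weighted voter model on a ring lattice, where neighbors holding the higher-quality option are copied proportionally more often. The qualities, network, and update rule are illustrative assumptions, not the authors' exact model.

```python
import random

# Minimal sketch, not the authors' model: a quality-weighted voter model on a
# ring lattice. Opinion "A" is assumed to have higher quality than "B", so
# neighbors holding "A" are copied proportionally more often.
N = 100                                  # number of agents (assumption)
QUALITY = {"A": 1.0, "B": 0.7}           # hypothetical option qualities
opinions = [random.choice("AB") for _ in range(N)]

def neighbors(i, k=2):
    """Indices of the k nearest neighbors on each side of the ring."""
    return [(i + d) % N for d in range(-k, k + 1) if d != 0]

for step in range(50000):
    i = random.randrange(N)
    nbrs = neighbors(i)
    # Quality-weighted sampling: better options are advertised more often.
    weights = [QUALITY[opinions[j]] for j in nbrs]
    j = random.choices(nbrs, weights=weights, k=1)[0]
    opinions[i] = opinions[j]            # voter-model update: copy one neighbor
    if len(set(opinions)) == 1:          # consensus reached
        break

print(f"after {step + 1} updates: {opinions.count('A')} agents hold option A")
```

Replacing the copy-one-neighbor update with a majority vote over the sampled neighbors recovers a local-majority-rule variant of the same sketch.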
Linear structural vector autoregressive models can be identified statistically, without imposing restrictions on the model, if the shocks are mutually independent and at most one of them is Gaussian. We show that this result extends to structural threshold and smooth transition vector autoregressive models incorporating a time-varying impact matrix defined as a weighted sum of the impact matrices of the regimes. Our empirical application studies the effects of the climate policy uncertainty shock on the U.S. macroeconomy. In a structural logistic smooth transition vector autoregressive model with two regimes, we find that a positive climate policy uncertainty shock decreases production in times of low economic policy uncertainty but slightly increases it in times of high economic policy uncertainty. The introduced methods are implemented in the accompanying R package sstvars.
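To fix ideas, the time-varying impact matrix described above can be written as follows; the notation (regime impact matrices $B_m$, transition weights $\alpha_{m,t}$) is illustrative and not necessarily the paper's.

```latex
% Illustrative notation, not necessarily the paper's: M regimes with impact
% matrices B_1, ..., B_M and transition weights \alpha_{m,t} (logistic functions
% of a switching variable in the smooth transition case). The reduced-form
% errors u_t are driven by mutually independent structural shocks \varepsilon_t,
% at most one of which is Gaussian.
\[
  u_t = B_t \varepsilon_t, \qquad
  B_t = \sum_{m=1}^{M} \alpha_{m,t} B_m, \qquad
  \alpha_{m,t} \ge 0, \quad \sum_{m=1}^{M} \alpha_{m,t} = 1 .
\]
```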
In contrast to conventional procedures, the proposed solution advocates a groundbreaking paradigm for water quality monitoring through the integration of satellite Remote Sensing (RS) data, Artificial Intelligence (AI) techniques, and onboard processing. The objective is to offer near-real-time detection of contaminants in coastal waters, addressing a significant gap in the existing literature. The expected outcomes include substantial advancements in environmental monitoring, public health protection, and resource conservation. The specific focus of our study is on the estimation of turbidity and pH, chosen for their implications for human and aquatic health. Nevertheless, the designed framework can be extended to other parameters of interest in the water environment and beyond. Originating from our participation in the European Space Agency (ESA) OrbitalAI Challenge, this article describes the distinctive opportunities and issues of contaminant monitoring on the Phisat-2 mission. The specific characteristics of this mission and the tools it makes available are presented, together with the methodology proposed by the authors for onboard monitoring of water contaminants in near real time. Promising preliminary results are discussed, and ongoing and future work is introduced.
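As a simplified illustration of the kind of lightweight, onboard-friendly estimation involved (not the authors' actual pipeline), the sketch below fits a linear model from a few hypothetical band reflectances to turbidity and pH values; the band count, synthetic data, and model choice are assumptions for illustration only.

```python
import numpy as np

# Simplified illustration, not the authors' pipeline: a lightweight linear model
# mapping per-pixel band reflectances to water quality parameters. Band count,
# training data, and the linear form are all assumptions.
rng = np.random.default_rng(0)
n_pixels, n_bands = 500, 8                              # hypothetical bands
X = rng.uniform(0.0, 0.3, size=(n_pixels, n_bands))     # stand-in reflectances
y = np.column_stack([                                   # stand-in ground truth
    5.0 + 40.0 * X[:, 2] - 15.0 * X[:, 5] + rng.normal(0, 0.5, n_pixels),  # turbidity (NTU)
    7.8 + 2.0 * X[:, 1] - 1.0 * X[:, 6] + rng.normal(0, 0.05, n_pixels),   # pH
])

# Least-squares fit with an intercept; small enough to run onboard.
A = np.hstack([X, np.ones((n_pixels, 1))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

new_pixel = rng.uniform(0.0, 0.3, size=(1, n_bands))
turbidity, ph = (np.hstack([new_pixel, [[1.0]]]) @ coef)[0]
print(f"estimated turbidity ~ {turbidity:.1f} NTU, pH ~ {ph:.2f}")
```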
Ensuring intelligible speech communication for hearing assistive devices in low-latency scenarios presents significant challenges in terms of speech enhancement, coding, and transmission. In this paper, we propose novel solutions for low-latency joint speech transmission and enhancement, leveraging deep neural networks (DNNs). Our approach integrates two state-of-the-art DNN architectures for low-latency speech enhancement and low-latency analog joint source-channel-based transmission, creating a combined low-latency system whose two components are trained jointly in an end-to-end fashion. Due to the computational demands of the enhancement system, this ordering is suitable when high computational power is unavailable at the decoder, as in hearing assistive devices. The proposed system enables the configuration of the total latency, achieving high performance even at latencies as low as 3 ms, which is typically challenging to attain. The simulation results provide compelling evidence that a joint enhancement and transmission system is superior to a simply concatenated system across diverse settings, encompassing various wireless channel conditions, latencies, and background noise scenarios.
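A minimal sketch of such end-to-end joint training, under assumed architectures rather than the paper's exact DNNs: enhancement is placed at the transmitter (the compute-limited hearing device only runs a lightweight decoder), followed by an analog joint source-channel encoder, a simulated AWGN channel, and a small receiver, with the whole chain trained to reconstruct the clean speech frame.

```python
import torch
import torch.nn as nn

# Minimal sketch under assumed layer sizes, not the paper's exact DNNs.
FRAME, SYMBOLS = 48, 24       # e.g. 3 ms frames at 16 kHz; channel symbol count (assumptions)

enhancer = nn.Sequential(nn.Linear(FRAME, 256), nn.ReLU(), nn.Linear(256, FRAME))
jscc_enc = nn.Sequential(nn.Linear(FRAME, 128), nn.ReLU(), nn.Linear(128, SYMBOLS))
jscc_dec = nn.Sequential(nn.Linear(SYMBOLS, 64), nn.ReLU(), nn.Linear(64, FRAME))

params = list(enhancer.parameters()) + list(jscc_enc.parameters()) + list(jscc_dec.parameters())
opt = torch.optim.Adam(params, lr=1e-3)

def channel(z, snr_db=10.0):
    """Power-normalize the analog symbols and add white Gaussian noise."""
    z = z / (z.pow(2).mean(dim=-1, keepdim=True).sqrt() + 1e-8)
    return z + (10 ** (-snr_db / 20)) * torch.randn_like(z)

for step in range(200):                              # toy training loop
    clean = torch.randn(32, FRAME)                   # stand-in for clean frames
    noisy = clean + 0.3 * torch.randn_like(clean)    # background noise
    decoded = jscc_dec(channel(jscc_enc(enhancer(noisy))))
    loss = nn.functional.mse_loss(decoded, clean)    # joint end-to-end objective
    opt.zero_grad(); loss.backward(); opt.step()
```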
We randomize the implicit two-stage Runge-Kutta scheme in order to improve the rate of convergence (relative to the deterministic scheme) and the stability of the approximate solution (relative to the solution generated by the explicit scheme). For the stability analysis, we use Dahlquist's concept of A-stability, adapted to randomized schemes by considering three notions of stability: asymptotic, mean-square, and in probability. The randomized implicit RK2 scheme proves to be A-stable asymptotically and in probability but not in the mean-square sense.
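For reference, the stability notions can be phrased on the Dahlquist test equation as follows; the one-step notation below is illustrative rather than the paper's.

```latex
% Illustrative notation: the Dahlquist test equation and a randomized one-step
% scheme with i.i.d. random parameters \theta_n and amplification factor R.
\[
  y'(t) = \lambda y(t), \quad \operatorname{Re}\lambda < 0, \qquad
  y_{n+1} = R(h\lambda, \theta_n)\, y_n .
\]
% The three notions of stability considered are
\begin{align*}
  \text{asymptotic:}     &\quad y_n \to 0 \ \text{almost surely},\\
  \text{mean-square:}    &\quad \mathbb{E}\,|y_n|^2 \to 0,\\
  \text{in probability:} &\quad \mathbb{P}(|y_n| > \epsilon) \to 0 \ \text{for every } \epsilon > 0,
\end{align*}
% and A-stability requires the respective property for every step size h > 0
% and every \lambda with negative real part.
```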
Realizing computationally complex quantum circuits in the presence of noise and imperfections is a challenging task. While fault-tolerant quantum computing provides a route to reducing noise, it requires a large overhead for generic algorithms. Here, we develop and analyze a hardware-efficient, fault-tolerant approach to realizing complex sampling circuits. We co-design the circuits with the appropriate quantum error correcting codes for efficient implementation in a reconfigurable neutral atom array architecture, constituting what we call a fault-tolerant compilation of the sampling algorithm. Specifically, we consider a family of $[[2^D , D, 2]]$ quantum error detecting codes whose transversal and permutation gate set can realize arbitrary degree-$D$ instantaneous quantum polynomial (IQP) circuits. Using native operations of the code and the atom array hardware, we compile a fault-tolerant and fast-scrambling family of such IQP circuits in a hypercube geometry, realized recently in the experiments by Bluvstein et al. [Nature 626, 7997 (2024)]. We develop a theory of second-moment properties of degree-$D$ IQP circuits for analyzing hardness and verification of random sampling by mapping to a statistical mechanics model. We provide evidence that sampling from hypercube IQP circuits is hard to simulate classically and analyze the linear cross-entropy benchmark (XEB) in comparison to the average fidelity. To realize a fully scalable approach, we first show that Bell sampling from degree-$4$ IQP circuits is classically intractable and can be efficiently validated. We further devise new families of $[[O(d^D),D,d]]$ color codes of increasing distance $d$, permitting exponential error suppression for transversal IQP sampling. Our results highlight fault-tolerant compiling as a powerful tool in co-designing algorithms with specific error-correcting codes and realistic hardware.
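As a small illustration of the circuit family being compiled (not the fault-tolerant hardware implementation), an IQP circuit applies a diagonal phase sandwiched between Hadamard layers; the brute-force sketch below samples from a random degree-2 instance on a few qubits, with the gate choices and angles as assumptions.

```python
import itertools
import numpy as np

# Brute-force sketch of sampling from a small IQP circuit (illustration only).
# An IQP circuit is H^{(x)n} D H^{(x)n} with D diagonal; here the diagonal phase
# is a random degree-2 polynomial in the bit values, i.e. single- and two-qubit
# Z-type rotations with random angles.
rng = np.random.default_rng(1)
n = 4
theta1 = rng.uniform(0, 2 * np.pi, size=n)            # linear terms
theta2 = rng.uniform(0, 2 * np.pi, size=(n, n))       # quadratic terms (i < j used)

bits = np.array(list(itertools.product([0, 1], repeat=n)))   # all 2^n bitstrings

def phase(z):
    quad = sum(theta2[i, j] * z[i] * z[j] for i in range(n) for j in range(i + 1, n))
    return theta1 @ z + quad

diag = np.exp(1j * np.array([phase(z) for z in bits]))       # phases applied by D
# Final Hadamards: amplitude of x is 2^{-n} * sum_z (-1)^{x.z} e^{i phase(z)}
signs = (-1.0) ** (bits @ bits.T)                            # (-1)^{x.z} matrix
amps = signs @ diag / 2 ** n
probs = np.abs(amps) ** 2
probs /= probs.sum()                                         # guard against rounding

sample = bits[rng.choice(len(bits), p=probs)]
print("sampled bitstring:", sample)
```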
Modern regression applications can involve hundreds or thousands of variables, which motivates the use of variable selection methods. Bayesian variable selection defines a posterior distribution on the possible subsets of the variables (usually termed models) to express uncertainty about which variables are strongly linked to the response. This can be used to provide Bayesian model-averaged predictions or inference, and to understand the relative importance of different variables. However, there has been little work on meaningful representations of this uncertainty beyond first-order summaries. We introduce Cartesian credible sets to address this gap. The elements of these sets are formed by concatenating sub-models defined on each block of a partition of the variables. Investigating these sub-models allows us to understand whether the models in the Cartesian credible set always/never/sometimes include a particular variable or group of variables, providing a useful summary of model uncertainty. We introduce methods to find these sets that emphasize ease of understanding. The potential of the method is illustrated on regression problems with both small and large numbers of variables.
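A minimal sketch of how such a set could be assembled from posterior model samples, assuming a given partition of the variables into blocks; the coverage rule, threshold, and toy data are illustrative and this is not the authors' exact algorithm.

```python
from collections import Counter
from itertools import product

# Minimal sketch, not the authors' exact algorithm: given posterior samples of
# models (frozensets of included variables) and a partition of the variables
# into blocks, keep for each block the smallest set of sub-models covering at
# least `level` posterior mass, then form the Cartesian (concatenated) set.
def cartesian_credible_set(model_samples, blocks, level=0.9):
    per_block_sets = []
    for block in blocks:
        # Restrict each sampled model to this block of variables.
        counts = Counter(frozenset(m & block) for m in model_samples)
        total, kept, mass = len(model_samples), [], 0.0
        for sub_model, c in counts.most_common():
            kept.append(sub_model)
            mass += c / total
            if mass >= level:
                break
        per_block_sets.append(kept)
    # Each element of the credible set concatenates one sub-model per block.
    return [frozenset().union(*combo) for combo in product(*per_block_sets)]

# Toy usage: variables 0-3, two blocks, and a handful of "posterior" samples.
blocks = [frozenset({0, 1}), frozenset({2, 3})]
samples = [frozenset({0, 2}), frozenset({0, 2, 3}), frozenset({0}), frozenset({0, 2})]
print(cartesian_credible_set(samples, blocks, level=0.8))
```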
Combined experiments and computational modelling are used to increase understanding of the suitability of the Single-Edge Notch Tension (SENT) test for assessing hydrogen embrittlement susceptibility. The SENT tests were designed to provide the mode I threshold stress intensity factor ($K_{\text{th}}$) for hydrogen-assisted cracking of a C110 steel in two corrosive environments. These were accompanied by hydrogen permeation experiments to relate the environments to the absorbed hydrogen concentrations. A coupled phase-field-based deformation-diffusion-fracture model is then employed to simulate the SENT tests, predicting $K_{\text{th}}$ in good agreement with the experimental results and providing insights into the hydrogen absorption-diffusion-cracking interactions. The suitability of SENT testing and its optimal characteristics (e.g., test duration) are discussed in terms of the various simultaneously active time-dependent phenomena, triaxiality dependencies, and regimes of hydrogen embrittlement susceptibility.
We introduce the notion of multi-patterns, a combinatorial abstraction of polyphonic musical phrases. The interest of encoding musical phrases in this way lies in the fact that multi-patterns can be composed in order to produce new ones. This composition is parameterized by a monoid structure on the scale degrees. It embeds the set of musical phrases into an algebraic framework, since the set of multi-patterns is endowed with the structure of an operad. Operads are algebraic structures offering a formalization and an abstraction of the notion of operators and their compositions. Seeing musical phrases as operators allows us to perform computations on phrases and has applications in generative music. Indeed, given a set of initial multi-patterns, we propose various algorithms to randomly generate a new and longer phrase emulating the style suggested by the input multi-patterns. The designed algorithms use types of grammars working with operads and colored operads, known as bud generating systems.
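To give a flavour of operad-style composition on patterns, here is a heavily simplified one-voice sketch in which a pattern is a sequence of scale degrees and rests, the assumed monoid on degrees is integer addition, and composition substitutes a pattern into one position of another while translating its degrees; this is an illustration, not the paper's full multi-pattern formalism.

```python
# Heavily simplified sketch of operad-style pattern composition (one voice only,
# not the paper's full multi-pattern formalism). A pattern is a tuple whose
# entries are scale degrees (ints) or None for rests; the assumed monoid on
# degrees is integer addition.
REST = None

def compose(p, i, q):
    """Substitute pattern q into the i-th degree (0-based, rests skipped) of p,
    translating every degree of q by that degree via the addition monoid."""
    out, degree_index = [], 0
    for entry in p:
        if entry is REST:
            out.append(REST)
        elif degree_index == i:
            out.extend(REST if e is REST else e + entry for e in q)
            degree_index += 1
        else:
            out.append(entry)
            degree_index += 1
    return tuple(out)

motif = (0, 2, REST, 4)             # a small phrase in scale degrees
ornament = (0, 1, 0)                # an ornament grafted onto one degree
print(compose(motif, 1, ornament))  # -> (0, 2, 3, 2, None, 4)
```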
We study the problem of bivariate discrete or continuous probability density estimation under low-rank constraints. For discrete distributions, we assume that the two-dimensional array to estimate is a low-rank probability matrix. In the continuous case, we assume that the density with respect to the Lebesgue measure satisfies a generalized multi-view model, meaning that it is $\beta$-Hölder and can be decomposed as a sum of $K$ components, each of which is a product of one-dimensional functions. In both settings, we propose estimators that achieve, up to logarithmic factors, the minimax optimal convergence rates under such low-rank constraints. In the discrete case, the proposed estimator is adaptive to the rank $K$. In the continuous case, our estimator converges with the $L_1$ rate $\min((K/n)^{\beta/(2\beta+1)}, n^{-\beta/(2\beta+2)})$ up to logarithmic factors, and it is adaptive to the unknown support as well as to the smoothness $\beta$ and to the unknown number of separable components $K$. We present efficient algorithms for computing our estimators.
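Concretely, the generalized multi-view assumption in the continuous case can be written as below; the notation for the one-dimensional factors is illustrative.

```latex
% Illustrative notation: in the continuous case the bivariate density is assumed
% to admit the separable decomposition
\[
  f(x_1, x_2) \;=\; \sum_{k=1}^{K} f_k(x_1)\, g_k(x_2),
\]
% with f being \beta-Hölder; the number K of separable components plays the role
% of the rank, mirroring the low-rank probability matrix assumed in the
% discrete case.
```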
We present an unsupervised 3D shape co-segmentation method which learns a set of deformable part templates from a shape collection. To accommodate structural variations in the collection, our network composes each shape from a selected subset of template parts which are affine-transformed. To maximize the expressive power of the part templates, we introduce a per-part deformation network to enable the modeling of diverse parts with substantial geometry variations, while imposing constraints on the deformation capacity to ensure fidelity to the originally represented parts. We also propose a training scheme to effectively overcome local minima. Architecturally, our network is a branched autoencoder, with a CNN encoder taking a voxel shape as input and producing per-part transformation matrices, latent codes, and part existence scores, and a decoder outputting point occupancies to define the reconstruction loss. Our network, coined DAE-Net for Deforming Auto-Encoder, achieves unsupervised 3D shape co-segmentation that yields fine-grained, compact, and meaningful parts consistent across diverse shapes. We conduct extensive experiments on the ShapeNet Part dataset, DFAUST, and an animal subset of Objaverse to show superior performance over prior methods. Code and data are available at //github.com/czq142857/DAE-Net.
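A schematic sketch of the kind of branched autoencoder described, with assumed layer sizes, a shared per-part implicit decoder, and a max-composition of parts; this is an illustration, not the released DAE-Net code.

```python
import torch
import torch.nn as nn

# Schematic sketch with assumed sizes, not the released DAE-Net code: a 3D CNN
# encoder predicts, for each of K part templates, an affine transform, a latent
# code, and an existence score; a shared per-part decoder maps a transformed
# query point plus the part latent to an occupancy, and parts are combined by max.
K, LATENT = 8, 32

class BranchedAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(                     # voxel grid 1x32x32x32
            nn.Conv3d(1, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(), nn.Linear(64 * 4 * 4 * 4, 256), nn.ReLU(),
        )
        self.affine_head = nn.Linear(256, K * 12)         # 3x4 affine per part
        self.latent_head = nn.Linear(256, K * LATENT)     # per-part latent code
        self.exist_head = nn.Linear(256, K)               # per-part existence
        self.part_decoder = nn.Sequential(                # shared implicit decoder
            nn.Linear(3 + LATENT, 128), nn.ReLU(), nn.Linear(128, 1),
        )

    def forward(self, voxels, points):
        """voxels: (B,1,32,32,32); points: (B,N,3) queries -> (B,N) occupancies."""
        B, N, _ = points.shape
        feat = self.encoder(voxels)
        affine = self.affine_head(feat).view(B, K, 3, 4)
        latent = self.latent_head(feat).view(B, K, LATENT)
        exist = torch.sigmoid(self.exist_head(feat)).view(B, K, 1)
        homog = torch.cat([points, torch.ones(B, N, 1)], dim=-1)          # (B,N,4)
        # Transform the query points into each part's local frame.
        local = torch.einsum('bkij,bnj->bkni', affine, homog)             # (B,K,N,3)
        codes = latent[:, :, None, :].expand(B, K, N, LATENT)
        occ = torch.sigmoid(self.part_decoder(torch.cat([local, codes], -1)))  # (B,K,N,1)
        return (occ * exist[:, :, :, None]).amax(dim=1).squeeze(-1)       # max over parts

net = BranchedAutoencoder()
pred = net(torch.rand(2, 1, 32, 32, 32), torch.rand(2, 512, 3))
print(pred.shape)   # torch.Size([2, 512])
```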