In the realm of cost-sharing mechanisms, the vulnerability to Sybil strategies, in which agents create fake identities to manipulate outcomes, has not yet been studied. In this paper, we delve into the intricacies of different cost-sharing mechanisms proposed in the literature, highlighting their lack of Sybil resistance. Furthermore, we prove that under mild conditions, a Sybil-proof cost-sharing mechanism for public excludable goods is at least $(n/2+1)$-approximate. This finding reveals an exponential increase in the worst-case social cost relative to environments in which agents cannot employ Sybil strategies. To circumvent these negative results, we introduce the concept of $\textit{Sybil Welfare Invariant}$ mechanisms, in which a mechanism maintains its worst-case welfare under Sybil strategies for every set of prior beliefs, even when the mechanism is not Sybil-proof. Finally, we prove that the Shapley value mechanism for public excludable goods has this property, and thus deduce that the worst-case social cost of this mechanism is the $n$th harmonic number $\mathcal H_n$ even in equilibria of the game with Sybil strategies, matching the worst-case social cost bound for cost-sharing mechanisms. This finding carries important implications for decentralized autonomous organizations (DAOs), indicating that they can fund public excludable goods efficiently even when the total number of agents is unknown.
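For intuition about the final bound, recall the standard Shapley value (cost-sharing) mechanism for a public excludable good with cost $C$ (a textbook formulation, recalled here only for context; the paper's setting may add details): every agent in the served set $S$ pays an equal share, and the worst-case social cost is governed by the harmonic number,
\[
\xi_i(S) \;=\; \frac{C}{|S|} \quad \text{for each } i \in S,
\qquad
\mathcal H_n \;=\; \sum_{k=1}^{n} \frac{1}{k} \;\approx\; \ln n .
\]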
Symplectic integrators are widely used numerical integrators for Hamiltonian mechanics, which preserve the Hamiltonian structure (symplecticity) of the system. Although a symplectic integrator does not conserve the energy of the system, it is well known that there exists a conserved modified Hamiltonian, called the shadow Hamiltonian. For Nambu mechanics, which is a generalization of Hamiltonian mechanics, we can also construct structure-preserving integrators by the same procedure used to construct symplectic integrators. For such structure-preserving integrators, however, the existence of shadow Hamiltonians is non-trivial, because Nambu mechanics is driven by multiple Hamiltonians and it is not obvious whether the time evolution generated by the integrator can be cast as a Nambu mechanical time evolution driven by multiple shadow Hamiltonians. In the present paper we construct structure-preserving integrators for a simple Nambu mechanical system and derive the shadow Hamiltonians in two ways. This is the first attempt to derive shadow Hamiltonians of structure-preserving integrators for Nambu mechanics. We show that the fundamental identity, which corresponds to the Jacobi identity in Hamiltonian mechanics, plays an important role in calculating the shadow Hamiltonians via the Baker-Campbell-Hausdorff formula. It turns out that the resulting shadow Hamiltonians take indefinite forms, depending on how the fundamental identities are used. This is not a technical artifact, because the exact shadow Hamiltonians obtained independently exhibit the same indefiniteness.
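As background for the shadow-Hamiltonian idea invoked above, a minimal sketch of the standard Hamiltonian case (a well-known result, not the paper's Nambu construction): for a splitting $H = A + B$ with Liouville operators $\hat A = \{A,\cdot\}$ and $\hat B = \{B,\cdot\}$, the symmetric Baker-Campbell-Hausdorff formula gives
\[
e^{\frac{h}{2}\hat A}\, e^{h\hat B}\, e^{\frac{h}{2}\hat A}
\;=\;
\exp\!\Bigl( h(\hat A + \hat B) \;-\; \tfrac{h^{3}}{24}\,[\hat A,[\hat A,\hat B]] \;+\; \tfrac{h^{3}}{12}\,[\hat B,[\hat B,\hat A]] \;+\; \mathcal O(h^{5}) \Bigr),
\]
and rewriting the nested commutators as nested Poisson brackets (via the Jacobi identity) identifies the exponent as the Liouville operator of a shadow Hamiltonian $\tilde H = A + B + \mathcal O(h^2)$. In the Nambu case, the analogous rewriting is exactly where the fundamental identity enters.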
Leveraging large language models (LLMs), autonomous agents have significantly improved, gaining the ability to handle a variety of tasks. In open-ended settings, optimizing collaboration for efficiency and effectiveness demands flexible adjustments. Despite this, current research mainly emphasizes fixed, task-oriented workflows and overlooks agent-centric organizational structures. Drawing inspiration from human organizational behavior, we introduce a self-organizing agent system (S-Agents) with a "tree of agents" structure for dynamic workflow, an "hourglass agent architecture" for balancing information priorities, and a "non-obstructive collaboration" method to allow asynchronous task execution among agents. This structure can autonomously coordinate a group of agents, efficiently addressing the challenges of open and dynamic environments without human intervention. Our experiments demonstrate that S-Agents proficiently execute collaborative building tasks and resource collection in the Minecraft environment, validating their effectiveness.
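Purely as an illustration of what an agent tree with non-blocking task execution might look like, here is a hypothetical Python sketch (the names and structure are our own and are not taken from the S-Agents implementation):

```python
import asyncio
from dataclasses import dataclass, field

@dataclass
class Agent:
    """A node in a hypothetical 'tree of agents': a root agent delegates to child agents."""
    name: str
    children: list["Agent"] = field(default_factory=list)

    async def execute(self, task: str) -> str:
        # Placeholder for an LLM call that plans or performs the task.
        await asyncio.sleep(0)  # yield control so sibling agents can run concurrently
        return f"{self.name} handled: {task}"

    async def delegate(self, tasks: list[str]) -> list[str]:
        # Non-blocking collaboration: children work on their subtasks asynchronously.
        jobs = [child.execute(t) for child, t in zip(self.children, tasks)]
        return list(await asyncio.gather(*jobs))

async def main():
    root = Agent("planner", children=[Agent("builder"), Agent("gatherer")])
    print(await root.delegate(["build a shelter", "collect wood"]))

asyncio.run(main())
```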
Entanglement-assisted quantum error-correcting (EAQEC) codes were introduced as a generalization of stabilizer quantum error-correcting codes; they can be derived from any classical linear code by relaxing the self-orthogonality conditions with the aid of pre-shared entanglement between the sender and the receiver. In this paper, three classes of entanglement-assisted quantum error-correcting maximum-distance-separable (EAQMDS) codes are constructed from generalized Reed-Solomon codes. Under our constructions, the minimum distances of our EAQMDS codes are much larger than those of the known EAQMDS codes of the same lengths that consume the same number of ebits. Furthermore, some of the lengths of our EAQMDS codes are not divisors of $q^2-1$; such lengths are completely new and differ from all previously known ones.
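For context, EAQMDS codes are the $[[n,k,d;c]]_q$ EAQEC codes attaining the entanglement-assisted quantum Singleton bound, recalled here in its standard form for $d \le (n+2)/2$ (background, not a result of the paper):
\[
2(d-1) \;\le\; n - k + c ,
\]
where $n$ is the length, $k$ the number of logical qudits, $d$ the minimum distance, and $c$ the number of pre-shared ebits; codes meeting the bound with equality are called EAQMDS.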
Dislocations are the primary carriers of plasticity in metallic materials. Understanding the basic mechanisms of dislocation motion is paramount to predicting the mechanical response of the material. Relying on atomistic simulations, we observe a transition from non-Arrhenius to Arrhenius behavior in the rate at which an edge dislocation overcomes the elastic interaction with a prismatic loop in tungsten. Beyond the critical resolved shear stress, the process shows non-Arrhenius behavior at low temperatures. However, as the temperature increases, the activation entropy starts to dominate, leading to traditional Arrhenius behavior. We have computed the activation entropy analytically along the minimum energy path following Schoeck's method [1], which captures the cross-over between the anti-Arrhenius and Arrhenius domains. The Projected Average Force Integrator (PAFI) [2], another simulation method for computing free energies along an initial transition path, also shows considerable agreement with Schoeck's formalism. We conclude that entropic effects need to be considered to understand processes in which dislocations bypass elastic barriers close to the critical resolved shear stress. More work is needed to fully understand the discrepancies between the results of Schoeck's formalism and PAFI and those of molecular dynamics.
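For reference, the textbook transition-state-theory rate that makes the role of the activation entropy explicit (quoted only to fix notation; it is not a result of the paper) is
\[
k(T) \;=\; \nu_{0}\, e^{\Delta S/k_{B}}\, e^{-\Delta E/(k_{B}T)}
\;=\; \nu_{0}\, e^{-\Delta F(T)/(k_{B}T)},
\qquad
\Delta F(T) \;=\; \Delta E - T\,\Delta S ,
\]
so an Arrhenius plot of $\ln k$ versus $1/T$ is linear only when $\Delta E$ and $\Delta S$ are effectively temperature independent; a strongly temperature-dependent activation entropy produces the non-Arrhenius regime described above.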
This paper develops an in-depth treatment of the problem of approximating Gaussian smoothing and Gaussian derivative computations in scale-space theory for application to discrete data. With close connections to previous axiomatic treatments of continuous and discrete scale-space theory, we consider three main ways of discretizing these scale-space operations in terms of explicit discrete convolutions, based on either (i) sampling the Gaussian kernels and the Gaussian derivative kernels, (ii) locally integrating the Gaussian kernels and the Gaussian derivative kernels over each pixel support region, or (iii) basing the scale-space analysis on the discrete analogue of the Gaussian kernel and then computing derivative approximations by applying small-support central difference operators to the spatially smoothed image data. We study the properties of these three main discretization methods both theoretically and experimentally, and characterize their performance by quantitative measures, including the results they give rise to with respect to the task of scale selection, investigated for four different use cases, and with emphasis on the behaviour at fine scales. The results show that the sampled Gaussian kernels and derivatives, as well as the integrated Gaussian kernels and derivatives, perform very poorly at very fine scales, where the discrete analogue of the Gaussian kernel with its corresponding discrete derivative approximations performs substantially better. The sampled Gaussian kernel and the sampled Gaussian derivatives do, on the other hand, lead to numerically very good approximations of the corresponding continuous results when the scale parameter is sufficiently large; in the experiments presented in the paper, when the scale parameter is greater than about 1 in units of the grid spacing.
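As a concrete illustration of two of the discretizations discussed above, here is a minimal NumPy/SciPy sketch (our own, not the paper's code) of the sampled Gaussian kernel and the discrete analogue of the Gaussian kernel $T(n,t) = e^{-t} I_n(t)$, with derivative approximations obtained by applying a central difference after smoothing; the function names are ours:

```python
import numpy as np
from scipy.special import ive  # ive(n, t) = exp(-t) * I_n(t) for real t > 0

def sampled_gaussian(sigma, radius):
    """Truncated, sampled Gaussian kernel, renormalized to unit sum."""
    n = np.arange(-radius, radius + 1)
    g = np.exp(-n**2 / (2.0 * sigma**2))
    return g / g.sum()

def discrete_gaussian(sigma, radius):
    """Discrete analogue of the Gaussian kernel T(n, t) = exp(-t) I_n(t), with t = sigma**2."""
    t = sigma**2
    n = np.arange(-radius, radius + 1)
    return ive(np.abs(n), t)

def smoothed_derivative(signal, kernel):
    """First-order derivative approximation: smooth first, then apply a central difference."""
    smoothed = np.convolve(signal, kernel, mode="same")
    return np.convolve(smoothed, [0.5, 0.0, -0.5], mode="same")  # (f[i+1] - f[i-1]) / 2
```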
In logistic regression modeling, Firth's modified estimator is widely used to address the issue of data separation, which results in the nonexistence of the maximum likelihood estimate. Firth's modified estimator can be formulated as a penalized maximum likelihood estimator in which Jeffreys' prior is adopted as the penalty term. Despite its widespread use in practice, the existence of the corresponding estimate has not been formally verified. In this study, we establish an existence theorem for Firth's modified estimate in binomial logistic regression models, assuming only that the design matrix has full column rank. We also discuss other binomial regression models obtained through alternative link functions and prove the existence of similar penalized maximum likelihood estimates for such models.
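For concreteness, the penalized log-likelihood underlying Firth's modified estimator in the binomial logistic model can be written as follows (a standard formulation, recalled for context; the paper's notation may differ):
\[
\ell^{*}(\beta) \;=\; \ell(\beta) + \tfrac{1}{2}\log\det I(\beta),
\qquad
I(\beta) = X^{\top} W(\beta)\, X,
\quad
W(\beta) = \operatorname{diag}\bigl(m_i\,\pi_i(\beta)\bigl(1-\pi_i(\beta)\bigr)\bigr),
\]
where $\ell(\beta)$ is the binomial log-likelihood, $X$ is the design matrix, $m_i$ the number of trials, and $\pi_i(\beta) = \operatorname{logit}^{-1}(x_i^{\top}\beta)$; the penalty $\tfrac12\log\det I(\beta)$ is the logarithm of Jeffreys' prior.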
Principal stratification is a popular framework for causal inference in the presence of an intermediate outcome. While the principal average treatment effects have traditionally been the default target of inference, they may not be sufficient when the interest lies in the relative favorability of one potential outcome over the other within the principal stratum. We thus introduce principal generalized causal effect estimands, which extend the principal average causal effects to accommodate nonlinear contrast functions. Under principal ignorability, we extend the theoretical results in Jiang et al. (2022) to a much wider class of causal estimands in the presence of a binary intermediate variable. We develop identification formulas and derive the efficient influence functions of the generalized estimands for principal stratification analyses. These efficient influence functions motivate a set of multiply robust estimators and lay the groundwork for obtaining efficient debiased machine learning estimators via cross-fitting based on $U$-statistics. The proposed methods are illustrated through simulations and the analysis of a data example.
Conversation demands attention. Speakers must call words to mind, listeners must make sense of them, and both together must negotiate this flow of information, all in fractions of a second. We used large language models to study how this works in a large-scale dataset of English-language conversation, the CANDOR corpus. We provide a new estimate of the information density of unstructured conversation, approximately 13 bits/second, and find significant effects associated with the cognitive load of both retrieving and presenting that information. We also reveal a role for backchannels -- the brief yeahs, uh-huhs, and mhmms that listeners provide -- in regulating the production of novelty: the lead-up to a backchannel is associated with a declining information rate, while speech downstream rebounds to previous rates. Our results provide new insights into long-standing theories of how we respond to fluctuating demands on cognitive resources, and how we negotiate those demands in partnership with others.
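As an illustration of how such an information-rate estimate can be obtained from a language model, here is a minimal sketch (our own, using the Hugging Face transformers API with GPT-2 as a stand-in; the paper's actual models and preprocessing of the CANDOR corpus will differ): per-token surprisal is summed over an utterance and divided by its duration.

```python
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def bits_per_second(text, duration_seconds, model_name="gpt2"):
    """Estimate an utterance's information rate as total LM surprisal (bits) / duration (s)."""
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)
    model.eval()
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    # log-probability of each token given its left context (natural log)
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    token_log_probs = log_probs.gather(1, ids[0, 1:].unsqueeze(1)).squeeze(1)
    total_bits = -token_log_probs.sum().item() / math.log(2.0)  # nats -> bits
    return total_bits / duration_seconds
```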
We propose an operator learning approach to accelerate geometric Markov chain Monte Carlo (MCMC) for solving infinite-dimensional nonlinear Bayesian inverse problems. While geometric MCMC employs high-quality proposals that adapt to posterior local geometry, it requires computing local gradient and Hessian information of the log-likelihood, incurring a high cost when the parameter-to-observable (PtO) map is defined through expensive model simulations. We consider a delayed-acceptance geometric MCMC method driven by a neural operator surrogate of the PtO map, where the proposal is designed to exploit fast surrogate approximations of the log-likelihood and, simultaneously, its gradient and Hessian. To achieve a substantial speedup, the surrogate needs to be accurate in predicting both the observable and its parametric derivative (the derivative of the observable with respect to the parameter). Training such a surrogate via conventional operator learning using input--output samples often demands a prohibitively large number of model simulations. In this work, we present an extension of derivative-informed operator learning [O'Leary-Roseberry et al., J. Comput. Phys., 496 (2024)] using input--output--derivative training samples. Such a learning method leads to derivative-informed neural operator (DINO) surrogates that accurately predict the observable and its parametric derivative at a significantly lower training cost than the conventional method. Cost and error analyses for reduced-basis DINO surrogates are provided. Numerical studies on PDE-constrained Bayesian inversion demonstrate that DINO-driven MCMC generates effective posterior samples 3--9 times faster than geometric MCMC and 60--97 times faster than prior geometry-based MCMC. Furthermore, the training cost of DINO surrogates breaks even after collecting merely 10--25 effective posterior samples compared to geometric MCMC.
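A schematic form of the derivative-informed training objective (our own paraphrase of the general idea; the paper works with reduced-basis representations and specific norms) is
\[
\min_{\theta}\;
\frac{1}{N}\sum_{i=1}^{N}
\Bigl(
\bigl\|\,\mathcal O_{\theta}(m_i) - \mathcal O(m_i)\,\bigr\|^{2}
\;+\;
\bigl\|\, D\mathcal O_{\theta}(m_i) - D\mathcal O(m_i)\,\bigr\|_{F}^{2}
\Bigr),
\]
where $\mathcal O$ is the PtO map, $\mathcal O_\theta$ the neural operator surrogate, and $D\mathcal O$ the parametric derivative (Jacobian); the second term is what distinguishes derivative-informed training from conventional input--output operator learning.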
Although measuring held-out accuracy has been the primary approach to evaluating generalization, it often overestimates the performance of NLP models, while alternative approaches for evaluating models either focus on individual tasks or on specific behaviors. Inspired by principles of behavioral testing in software engineering, we introduce CheckList, a task-agnostic methodology for testing NLP models. CheckList includes a matrix of general linguistic capabilities and test types that facilitate comprehensive test ideation, as well as a software tool to generate a large and diverse number of test cases quickly. We illustrate the utility of CheckList with tests for three tasks, identifying critical failures in both commercial and state-of-the-art models. In a user study, a team responsible for a commercial sentiment analysis model found new and actionable bugs in an extensively tested model. In another user study, NLP practitioners with CheckList created twice as many tests and found almost three times as many bugs as users without it.
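To make the idea of template-driven behavioral tests concrete, here is a minimal, hypothetical sketch in plain Python (it does not use the authors' checklist library; predict_sentiment is a placeholder for whatever model is under test, and all names are our own):

```python
from itertools import product

# A Minimum Functionality Test (MFT) for sentiment analysis built from a simple template.
TEMPLATE = "The {noun} was {adjective}."
NOUNS = ["food", "service", "flight", "movie"]
NEGATIVE_ADJECTIVES = ["awful", "terrible", "disappointing"]

def generate_cases():
    """Fill the template with all noun/adjective combinations; every case should be negative."""
    return [TEMPLATE.format(noun=n, adjective=a) for n, a in product(NOUNS, NEGATIVE_ADJECTIVES)]

def run_mft(predict_sentiment):
    """Report the model's failure rate on the generated cases and return the failing inputs."""
    cases = generate_cases()
    failures = [s for s in cases if predict_sentiment(s) != "negative"]
    rate = len(failures) / len(cases)
    print(f"failure rate: {rate:.1%} ({len(failures)}/{len(cases)})")
    return failures
```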