Discrete and periodic contact switching is a key characteristic of steady-state legged locomotion. This paper introduces a framework for modeling and analyzing this contact-switching behavior through the lens of geometric mechanics, using a toy robot model that can make continuous limb swings and discrete contact switches. The kinematics of this model form a hybrid shape space, and by extending the generalized Stokes' theorem to compute discrete curvature functions called \textit{stratified panels}, we determine the average locomotion generated by gaits spanning multiple contact modes. Using this tool, we also demonstrate the ability to optimize gaits subject to the system's locomotion constraints and to perform gait reduction on a complex gait spanning multiple contact modes, highlighting the method's scalability to multilegged systems.
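To make the averaging idea concrete, the following schematic uses illustrative notation rather than the paper's exact formulas. In standard geometric mechanics, the net displacement of a gait tracing a closed curve $\phi$ enclosing a region $S$ of shape space is approximated by applying Stokes' theorem to the local connection $A(r)$:
\[
\Delta g \;\approx\; \oint_{\phi} -A(r)\,\mathrm{d}r \;=\; \iint_{S} -\mathrm{d}A(r).
\]
For hybrid gaits, the surface integral splits across contact modes $i$, with discrete \textit{stratified panel} terms $P_j$ accounting for the contact switches:
\[
\Delta g \;\approx\; \sum_i \iint_{S_i} -\mathrm{d}A_i(r) \;+\; \sum_j P_j.
\]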
In this paper, we comprehensively analyze the vertical and horizontal extensions of the existing memory hierarchy and characterize the differences between conventional memory and big memory. We survey state-of-the-art studies of big memory systems, together with their design methodologies and implementations. We argue that persistence is the first principle of big memory systems, and further examine full-stack persistence and moving persistence.
This paper introduces a theory for assessing and optimizing the multiple-input multiple-output performance of multi-port cluster antennas in terms of efficiency, channel correlation, and power distribution. A method based on convex optimization of feeding coefficients is extended with additional constraints that allow the user to control the ratio of power radiated by the clusters. The formulation makes it possible to simultaneously optimize total efficiency and channel correlation with a fixed ratio between the power radiated by the clusters, thus examining the trade-off between these parameters. It is shown that channel correlation, total efficiency, and the allocation of radiated power are mutually conflicting parameters. The resulting trade-offs are presented and discussed. The theory is demonstrated on a four-element antenna array and on a mobile terminal antenna.
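As a rough illustration of this kind of formulation (not the paper's exact one), the Python sketch below poses a semidefinite relaxation of the feeding-coefficient problem with cvxpy: dissipated power is minimized while two clusters are constrained to radiate prescribed shares of unit power. The operators R1, R2, and L are random stand-ins for the matrices a full-wave antenna solver would supply.

# Illustrative sketch, not the paper's exact formulation: semidefinite
# relaxation of feeding-coefficient optimization for a two-cluster antenna.
import numpy as np
import cvxpy as cp

n = 4                      # hypothetical number of ports
rng = np.random.default_rng(0)

def random_psd(n):
    """Random Hermitian PSD matrix (stand-in for radiation/loss operators)."""
    A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return A @ A.conj().T

R1 = random_psd(n)         # radiated-power operator of cluster 1 (assumed)
R2 = random_psd(n)         # radiated-power operator of cluster 2 (assumed)
L  = random_psd(n)         # dissipated-power (loss) operator (assumed)
rho = 0.5                  # prescribed share of power radiated by cluster 1

# Lift x x^H -> X to obtain a convex SDP in X.
X = cp.Variable((n, n), hermitian=True)
constraints = [
    X >> 0,
    cp.real(cp.trace(R1 @ X)) == rho,        # cluster 1 radiates rho
    cp.real(cp.trace(R2 @ X)) == 1 - rho,    # cluster 2 radiates 1 - rho
]
# Maximizing total efficiency ~ minimizing dissipated power at fixed
# total radiated power.
prob = cp.Problem(cp.Minimize(cp.real(cp.trace(L @ X))), constraints)
prob.solve()

# A (near-)rank-one X recovers feeding coefficients from its top eigenvector.
w, V = np.linalg.eigh(X.value)
x_feed = V[:, -1] * np.sqrt(max(w[-1], 0))
print("efficiency proxy:", 1 / (1 + prob.value))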
The success of Bayesian persuasion relies on the key assumption that the sender will commit to a predetermined information disclosure policy (signaling scheme). However, in practice, it is usually difficult for the receiver to monitor whether the sender sticks to the disclosure policy, which makes the credibility of the sender's disclosure policy questionable. The sender's credibility is particularly tenuous when there are obvious deviations that benefit the sender. In this work, we identify such a deviation: the sender may be unwilling to send a signal that leads to a less desirable outcome than no information disclosure. We thus propose the notion of ex-post individually rational (ex-post IR) Bayesian persuasion: after observing the state, the sender is never required to send a signal that makes the outcome worse than under no information disclosure. An ex-post IR Bayesian persuasion policy is more likely to be truthfully followed by the sender, and is thus more credible for the receiver. Our contribution is threefold. First, we demonstrate that the optimal ex-post IR Bayesian persuasion policy can be efficiently computed through a linear program, and we offer geometric characterizations of this optimal policy. Second, we show that, surprisingly, for non-trivial classes of games, the imposition of ex-post IR constraints does not affect the sender's expected utility. Finally, we compare ex-post IR Bayesian persuasion to other information disclosure models that ensure different notions of credibility.
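For intuition about the linear program, here is a hedged sketch for a small finite setting: signals are action recommendations (via the revelation principle), obedience constraints keep the receiver compliant, and the ex-post IR requirement is encoded by forbidding, in each state, recommendations the sender likes less than the no-disclosure outcome. The utilities, prior, and this particular IR encoding are illustrative assumptions, not the paper's exact construction.

# Hedged sketch of an LP for an ex-post IR persuasion policy.
import numpy as np
from scipy.optimize import linprog

n_states, n_actions = 3, 3
rng = np.random.default_rng(1)
mu = np.full(n_states, 1 / n_states)          # prior over states (assumed)
u_S = rng.random((n_states, n_actions))       # sender utility (assumed)
u_R = rng.random((n_states, n_actions))       # receiver utility (assumed)

# Receiver's default action under no disclosure.
a0 = int(np.argmax(mu @ u_R))

# Variables: pi[w, a] = Pr(recommend action a | state w), flattened.
idx = lambda w, a: w * n_actions + a
n_var = n_states * n_actions

# Objective: maximize expected sender utility (linprog minimizes).
c = np.array([-mu[w] * u_S[w, a]
              for w in range(n_states) for a in range(n_actions)])

# Obedience: each recommended action a must beat every deviation a2.
A_ub, b_ub = [], []
for a in range(n_actions):
    for a2 in range(n_actions):
        if a2 == a:
            continue
        row = np.zeros(n_var)
        for w in range(n_states):
            row[idx(w, a)] = -mu[w] * (u_R[w, a] - u_R[w, a2])
        A_ub.append(row); b_ub.append(0.0)

# Each state's recommendations form a probability distribution.
A_eq = np.zeros((n_states, n_var)); b_eq = np.ones(n_states)
for w in range(n_states):
    A_eq[w, idx(w, 0):idx(w, 0) + n_actions] = 1.0

# Ex-post IR: in state w, never recommend an action the sender likes
# less than the no-disclosure outcome a0.
bounds = [(0.0, 0.0) if u_S[w, a] < u_S[w, a0] else (0.0, 1.0)
          for w in range(n_states) for a in range(n_actions)]

res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub,
              A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print("optimal ex-post IR sender utility:", -res.fun)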
This paper studies the problem of encoding messages into sequences that can be uniquely recovered from noisy observations of their substrings. The observed reads comprise consecutive substrings with a given minimum overlap. This coded reconstruction problem has applications to DNA storage. We consider both single-strand reconstruction codes and multi-strand reconstruction codes, where the message is encoded into a single strand or a set of multiple strands, respectively. Various parameter regimes are studied, and new codes are constructed, some of whose rates asymptotically attain the upper bounds.
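As a toy illustration of the read model (not a reconstruction code), the following sketch generates consecutive substrings with a fixed overlap and merges them back greedily. Real codes must guarantee unique recovery even when overlaps are ambiguous or noisy, which this naive merge does not.

# Toy illustration of reads as consecutive substrings with a given overlap.
def reads(strand, length, overlap):
    """All consecutive substrings of `length` with the given overlap."""
    step = length - overlap
    return [strand[i:i + length]
            for i in range(0, len(strand) - length + 1, step)]

def merge(read_list, overlap):
    """Greedy merge assuming reads arrive in order with exact overlaps."""
    out = read_list[0]
    for r in read_list[1:]:
        assert out.endswith(r[:overlap])  # fails when overlaps are ambiguous
        out += r[overlap:]
    return out

strand = "ACGTACGGTTCA"
rs = reads(strand, length=6, overlap=3)
assert merge(rs, 3) == strand  # holds only when the step divides evenly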
In computational social choice, the distortion of a voting rule quantifies the degree to which the rule overcomes limited preference information to select a socially desirable outcome. This concept has been investigated extensively, but only through a worst-case lens. Instead, we study the expected distortion of voting rules with respect to an underlying distribution over voter utilities. Our main contribution is the design and analysis of a novel and intuitive rule, binomial voting, which provides strong distribution-independent guarantees for both expected distortion and expected welfare.
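To make the notion of expected distortion concrete, here is an illustrative Monte Carlo estimate under an assumed utility distribution, using plurality as a stand-in rule that sees only rankings; the paper's binomial voting rule is not reproduced here, and the averaging convention below is one of several possible definitions.

# Monte Carlo estimate of a ranking-based rule's expected distortion.
import numpy as np

rng = np.random.default_rng(2)
n_voters, n_alts, n_trials = 50, 5, 2000

def plurality(rankings):
    """Pick the alternative ranked first by the most voters."""
    first = rankings[:, 0]
    return np.bincount(first, minlength=n_alts).argmax()

ratios = []
for _ in range(n_trials):
    # Utilities drawn i.i.d. uniform (an assumed distribution).
    U = rng.random((n_voters, n_alts))
    # The rule observes only each voter's ranking, not the utilities.
    rankings = np.argsort(-U, axis=1)
    chosen = plurality(rankings)
    welfare = U.sum(axis=0)
    ratios.append(welfare.max() / welfare[chosen])  # per-profile distortion

print("expected distortion (plurality, uniform utilities):", np.mean(ratios))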
The Image Captioning (IC) technique is widely used to describe images in natural language. Recently, several IC system testing methods have been proposed. However, these methods still rely on pre-annotated information and hence cannot truly alleviate the oracle problem in testing. Moreover, they artificially manipulate objects, which may generate unrealistic images as test cases and thus lead to less meaningful testing results. Finally, existing methods impose various requirements on the eligibility of source test cases, and hence cannot fully utilize the given images for testing. To tackle these issues, in this paper we propose REIC, which performs metamorphic testing for IC systems with image-level reduction transformations such as image cropping and stretching. Instead of relying on pre-annotated information, REIC uses a localization method to align objects in the caption with the corresponding objects in the image, and checks whether each object is correctly described or deleted in the caption after transformation. Because the transformations are image-level reductions, REIC does not artificially manipulate any objects and hence avoids generating unrealistic follow-up images. Moreover, it eliminates the requirement on the eligibility of source test cases in the metamorphic transformation process, reduces ambiguity, and boosts diversity among the follow-up test cases, which consequently enables testing to be performed on any test image and reveals more distinct valid violations. We apply REIC to test five popular IC systems. The results demonstrate that REIC can sufficiently leverage the provided test images to generate realistic follow-up cases, and can effectively detect a great number of distinct violations, without the need for any pre-annotated information.
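A minimal sketch of the reduction-based metamorphic relation is given below; the caption function and the object extractor are hypothetical placeholders, and REIC's localization-based alignment is considerably more involved.

# Minimal sketch of a reduction-based metamorphic test for an IC system.
from PIL import Image

def caption(image):
    """Placeholder for the IC system under test."""
    raise NotImplementedError("plug in the captioner here")

def objects_in(text, vocabulary):
    """Crude object extraction: vocabulary words appearing in the caption."""
    words = set(text.lower().split())
    return {obj for obj in vocabulary if obj in words}

VOCAB = {"person", "dog", "car", "bicycle", "bench"}  # assumed object names

def test_crop(path):
    src = Image.open(path)
    w, h = src.size
    follow_up = src.crop((0, 0, w // 2, h))   # image-level reduction: crop

    src_objs = objects_in(caption(src), VOCAB)
    fol_objs = objects_in(caption(follow_up), VOCAB)

    # Metamorphic relation: cropping cannot introduce new objects, so any
    # object described only in the follow-up caption signals a violation.
    return fol_objs - src_objs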
Large Language Models (LLMs) have shown excellent generalization capabilities that have led to the development of numerous models. These models propose new architectures, tweak existing architectures with refined training strategies, increase context length, use higher-quality training data, and increase training time to outperform baselines. Analyzing new developments is crucial for identifying changes that enhance training stability and improve generalization in LLMs. This survey paper comprehensively analyzes LLM architectures and their categorization, training strategies, training datasets, and performance evaluations, and discusses future research directions. Moreover, the paper discusses the basic building blocks and concepts behind LLMs, followed by a complete overview of LLMs, including their important features and functions. Finally, the paper summarizes significant findings from LLM research and consolidates essential architectural and training strategies for developing advanced LLMs. Given the continuous advancements in LLMs, we intend to regularly update this paper by incorporating new sections and featuring the latest LLM models.
Residual networks (ResNets) have displayed impressive results in pattern recognition and, recently, have garnered considerable theoretical interest due to a perceived link with neural ordinary differential equations (neural ODEs). This link relies on the convergence of network weights to a smooth function as the number of layers increases. We investigate the properties of weights trained by stochastic gradient descent and their scaling with network depth through detailed numerical experiments. We observe the existence of scaling regimes markedly different from those assumed in the neural ODE literature. Depending on certain features of the network architecture, such as the smoothness of the activation function, one may obtain an alternative ODE limit, a stochastic differential equation, or neither of these. These findings cast doubt on the validity of the neural ODE model as an adequate asymptotic description of deep ResNets and point to an alternative class of differential equations as a better description of the deep network limit.
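Schematically, and in illustrative notation, the scaling question can be stated as follows. A ResNet with $L$ layers updates hidden states as
\[
h_{k+1} \;=\; h_k + \frac{1}{L}\, f(h_k, \theta_k), \qquad k = 0, \dots, L-1.
\]
If the trained weights behave as $\theta_k \approx \theta(k/L)$ for a smooth function $\theta$, the large-$L$ limit is the neural ODE
\[
\frac{\mathrm{d}h(t)}{\mathrm{d}t} \;=\; f\big(h(t), \theta(t)\big),
\]
whereas weights of magnitude $L^{-1/2}$ with uncorrelated fluctuations across layers lead instead to a diffusion (SDE) limit.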
Multi-relation question answering is a challenging task, as it requires elaborate analysis of the question and reasoning over multiple fact triples in a knowledge base. In this paper, we present a novel model, the Interpretable Reasoning Network, that employs an interpretable, hop-by-hop reasoning process for question answering. At each hop, the model dynamically decides which part of the input question should be analyzed; predicts a relation that corresponds to the currently parsed results; uses the predicted relation to update the question representation and the state of the reasoning process; and then drives the next-hop reasoning. Experiments show that our model yields state-of-the-art results on two datasets. More interestingly, the model offers traceable and observable intermediate predictions for reasoning analysis and failure diagnosis, thereby allowing manual intervention in predicting the final answer.
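The hop-by-hop loop can be sketched as follows in PyTorch; the module names, the attention mechanism, and the greedy relation choice are illustrative, not the paper's exact architecture.

# Schematic of one hop of an interpretable, hop-by-hop reasoning process.
import torch
import torch.nn as nn

class ReasoningHop(nn.Module):
    def __init__(self, dim, n_relations):
        super().__init__()
        self.attend = nn.Linear(dim, dim)          # selects question part
        self.rel_clf = nn.Linear(dim, n_relations) # predicts a relation
        self.rel_emb = nn.Embedding(n_relations, dim)
        self.update = nn.GRUCell(dim, dim)         # updates reasoning state

    def forward(self, q, state):
        # q: (batch, seq, dim) question tokens; state: (batch, dim).
        # 1. Decide which part of the question to analyze at this hop.
        weights = torch.softmax(q @ self.attend(state).unsqueeze(-1), dim=1)
        focus = (weights * q).sum(dim=1)
        # 2. Predict the relation for the currently parsed part (greedy
        #    argmax here for illustration; training would use a soft choice).
        rel = self.rel_clf(focus).argmax(dim=-1)
        # 3. Use the predicted relation to update the reasoning state,
        #    which drives the next hop.
        state = self.update(self.rel_emb(rel), state)
        return rel, state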
Image segmentation is an important component of many image understanding systems. It aims to group pixels in a spatially and perceptually coherent manner. Typically, these algorithms have a collection of parameters that control the degree of over-segmentation produced, and it remains a challenge to select such parameters properly for human-like perceptual grouping. In this work, we exploit the diversity of segments produced by different choices of parameters: we scan the segmentation parameter space and generate a collection of image segmentation hypotheses (from highly over-segmented to under-segmented). These are fed into a cost-minimization framework that produces the final segmentation by selecting segments that (1) better describe the natural contours of the image, and (2) are more stable and persistent across all the segmentation hypotheses. We compare our algorithm's performance with state-of-the-art algorithms, showing improved results. We also show that our framework is robust to the choice of segmentation kernel that produces the initial set of hypotheses.
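A rough sketch of the hypothesis-generation and persistence idea follows, using scikit-image's Felzenszwalb segmentation as the kernel; the cost minimization is simplified here to a per-hypothesis boundary vote, whereas the paper's objective also scores contour quality.

# Sweep a segmentation parameter and keep boundaries that persist
# across the resulting hypotheses (illustrative simplification).
import numpy as np
from skimage import data, segmentation

image = data.astronaut()

# Scan the parameter space: from over- to under-segmentation.
hypotheses = [segmentation.felzenszwalb(image, scale=s, min_size=20)
              for s in (10, 50, 100, 300, 800)]

def boundary_map(labels):
    """Binary map of segment boundaries for one hypothesis."""
    return segmentation.find_boundaries(labels, mode="thick")

# Persistence: boundaries that recur across many hypotheses are stable.
stability = np.mean([boundary_map(h) for h in hypotheses], axis=0)

# Keep only boundaries supported by a majority of hypotheses.
stable_boundaries = stability >= 0.5
print("stable boundary pixels:", int(stable_boundaries.sum()))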