Modern blockchains face a scalability challenge due to the intrinsic throughput limitations of consensus protocols. Layer 2 optimistic rollups (L2) are a faster alternative that offers the same interface for smart contract development and user interaction. Optimistic rollups perform most computations offchain and make light use of an underlying blockchain (L1) to guarantee correct behavior, implementing a cheaper blockchain-on-a-blockchain solution. With optimistic rollups, a sequencer computes batches of L2 transactions offchain and commits the batches (compressed or hashed) to the L1 blockchain. The use of hashes requires a data service to translate hashes into their corresponding batches. Current L2 implementations consist of a centralized sequencer (central authority) and an optional data availability committee (DAC). In this paper, we propose a decentralized L2 optimistic rollup based on Setchain, a decentralized Byzantine-tolerant implementation of sets. The main contribution is a fully decentralized "arranger", which we define formally as a component combining sequencers and DACs. We prove our implementation correct and show empirical evidence that our solution scales. A final contribution is a system of incentives (payments) for servers that correctly implement the sequencer and data availability committee protocols, and a fraud-proof mechanism to detect violations of the protocol.
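To make the batch-hash-and-commit workflow concrete, here is a minimal Python sketch of a centralized sequencer that commits only batch hashes onchain, together with a data service that resolves hashes back into batches. All names (Sequencer, DataService, commit_batch) are hypothetical illustrations of the workflow described above, not the paper's decentralized arranger.

```python
import hashlib
import json

class DataService:
    """Maps batch hashes to the full batch data (the DAC's role)."""
    def __init__(self):
        self._store = {}

    def publish(self, batch_hash, batch):
        self._store[batch_hash] = batch

    def resolve(self, batch_hash):
        return self._store[batch_hash]

class Sequencer:
    """Collects L2 transactions offchain and commits only a hash to L1."""
    def __init__(self, l1_commitments, data_service):
        self.pending = []
        self.l1 = l1_commitments          # stands in for the L1 contract
        self.data_service = data_service

    def submit(self, tx):
        self.pending.append(tx)

    def commit_batch(self):
        batch = json.dumps(self.pending, sort_keys=True).encode()
        batch_hash = hashlib.sha256(batch).hexdigest()
        self.data_service.publish(batch_hash, batch)  # keep the data available
        self.l1.append(batch_hash)                    # cheap onchain footprint
        self.pending = []
        return batch_hash

l1_commitments = []
seq = Sequencer(l1_commitments, DataService())
seq.submit({"from": "alice", "to": "bob", "value": 5})
print(seq.commit_batch(), l1_commitments)
```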
Camera-based 3D occupancy prediction has recently garnered increasing attention in outdoor driving scenes. However, indoor scenes remain relatively unexplored. The core differences in indoor scenes lie in the complexity of scene scale and the variance in object size. In this paper, we propose a novel method, named ISO, for predicting indoor scene occupancy from monocular images. ISO harnesses the advantages of a pretrained depth model to achieve accurate depth predictions. Furthermore, we introduce the Dual Feature Line of Sight Projection (D-FLoSP) module within ISO, which enhances the learning of 3D voxel features. To foster further research in this domain, we introduce Occ-ScanNet, a large-scale occupancy benchmark for indoor scenes. With a dataset 40 times larger than NYUv2, it facilitates future scalable research in indoor scene analysis. Experimental results on both NYUv2 and Occ-ScanNet demonstrate that our method achieves state-of-the-art performance. The dataset and code are made publicly available at //github.com/hongxiaoy/ISO.git.
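The D-FLoSP module builds on the idea of projecting 2D image features into 3D voxels along camera lines of sight. The sketch below is a generic, hypothetical line-of-sight projection (nearest-neighbor sampling, no depth weighting), not the authors' D-FLoSP design.

```python
import numpy as np

def flosp_like_projection(feat2d, K, voxel_centers):
    """
    Scatter 2D image features to 3D voxels along lines of sight: each voxel
    center (in camera coordinates) is projected with the intrinsics K and
    receives the feature at the corresponding pixel.
    feat2d:        (H, W, C) image feature map
    K:             (3, 3) camera intrinsic matrix
    voxel_centers: (N, 3) voxel centers in camera coordinates
    Returns (N, C) per-voxel features (zeros for voxels projecting off-image).
    """
    H, W, C = feat2d.shape
    uvw = voxel_centers @ K.T                  # perspective projection, (N, 3)
    u = uvw[:, 0] / uvw[:, 2]
    v = uvw[:, 1] / uvw[:, 2]
    ui, vi = np.round(u).astype(int), np.round(v).astype(int)
    valid = (ui >= 0) & (ui < W) & (vi >= 0) & (vi < H) & (voxel_centers[:, 2] > 0)
    out = np.zeros((voxel_centers.shape[0], C), dtype=feat2d.dtype)
    out[valid] = feat2d[vi[valid], ui[valid]]  # nearest-neighbor feature lookup
    return out
```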
Product bundling provides clients with a strategic combination of individual items, and it has gained significant attention in recent years as a fundamental prerequisite for online services. Recent methods utilize multimodal information through sophisticated extractors for bundling, but remain limited by inferior semantic understanding, the restricted scope of knowledge, and an inability to handle cold-start issues. Despite the extensive knowledge and complex reasoning capabilities of large language models (LLMs), their direct utilization fails to process multiple modalities and exploit their knowledge for multimodal product bundling. Adapting LLMs for this purpose involves demonstrating the synergies among different modalities and designing an effective optimization strategy for bundling, which remains challenging. To this end, we introduce Bundle-LLM to bridge the gap between LLMs and product bundling tasks. Specifically, we utilize a hybrid item tokenization to integrate multimodal information, where a simple yet powerful multimodal fusion module followed by a trainable projector embeds all non-textual features into a single token. This module not only explicitly exhibits the interplay among modalities but also shortens the prompt length, thereby boosting efficiency. By designing a prompt template, we formulate product bundling as a multiple-choice question over candidate items. Furthermore, we adopt a progressive optimization strategy to fine-tune the LLMs for disentangled objectives, achieving effective product bundling capability with comprehensive multimodal semantic understanding. Extensive experiments on four datasets from two application domains show that our approach outperforms a range of state-of-the-art (SOTA) methods.
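As an illustration of the "embed all non-textual features into a single token" idea, here is a hypothetical PyTorch sketch of a fusion module followed by a trainable projector. The dimensions, module names, and activation are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class FusionToSingleToken(nn.Module):
    """Fuse non-textual item features and project them to one LLM token."""
    def __init__(self, img_dim=512, media_dim=128, llm_dim=4096):
        super().__init__()
        self.fuse = nn.Linear(img_dim + media_dim, llm_dim)   # simple fusion
        self.projector = nn.Linear(llm_dim, llm_dim)          # trainable projector

    def forward(self, img_feat, media_feat):
        # img_feat: (B, img_dim), media_feat: (B, media_dim)
        fused = torch.cat([img_feat, media_feat], dim=-1)
        token = self.projector(torch.relu(self.fuse(fused)))
        return token.unsqueeze(1)   # (B, 1, llm_dim): one extra "item token"

module = FusionToSingleToken()
token = module(torch.randn(2, 512), torch.randn(2, 128))
print(token.shape)   # torch.Size([2, 1, 4096])
```

Keeping one token per item keeps the multiple-choice prompt short, which is the efficiency argument made above.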
Index modulation (IM) significantly enhances the spectral efficiency of fluid antenna (FA)-enabled multiple-input multiple-output (MIMO) systems, a combination referred to as FA-IM. However, due to the dense distribution of ports on fluid antennas, the wireless channel exhibits high spatial correlation, resulting in severe performance degradation in the existing FA-IM scheme. This paper proposes a novel fluid antenna grouping index modulation (FA-GIM) scheme to mitigate the spatial correlation of the FA-IM channel and further enhance system performance. Based on the spatial correlation model of two-dimensional (2D) fluid antenna surfaces, the proposed scheme adopts a block grouping method in which adjacent ports are allocated to the same group. Numerical results demonstrate that the proposed scheme achieves superior bit error rate (BER) performance compared to the state-of-the-art scheme, enhancing the robustness of FA-assisted MIMO systems.
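The block grouping rule (adjacent ports share a group) can be sketched directly; the helper below is an illustrative Python function, with the port-grid and block sizes chosen arbitrarily rather than taken from the paper.

```python
import numpy as np

def block_grouping(n_rows, n_cols, block_rows, block_cols):
    """
    Assign each port of an n_rows x n_cols fluid antenna surface to a group,
    placing spatially adjacent ports in the same block (group).
    Returns an (n_rows, n_cols) array of group indices.
    """
    assert n_rows % block_rows == 0 and n_cols % block_cols == 0
    groups = np.zeros((n_rows, n_cols), dtype=int)
    groups_per_row = n_cols // block_cols
    for r in range(n_rows):
        for c in range(n_cols):
            groups[r, c] = (r // block_rows) * groups_per_row + (c // block_cols)
    return groups

# A 4x4 port surface split into 2x2 blocks yields four contiguous groups.
print(block_grouping(4, 4, 2, 2))
```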
Quantum repeater chains will form the backbone of future quantum networks that distribute entanglement between network nodes. Therefore, it is important to understand the entanglement distribution performance of quantum repeater chains, especially their throughput and latency. By using Markov chains to model the stochastic dynamics in quantum repeater chains, we offer analytical estimations for long-run throughput and on-demand latency of continuous entanglement distribution. We first study single-link entanglement generation using general multiheralded protocols. We then model entanglement distribution with entanglement swapping over two links, using either a single- or a double-heralded entanglement generation protocol. We also demonstrate how the two-link results offer insights into the performance of general $2^k$-link nested repeater chains. Our results enrich the quantitative understanding of quantum repeater network performance, especially the dependence on system parameters. The analytical formulae themselves are valuable reference resources for the quantum networking community. They can serve as benchmarks for quantum network simulation validation or as examples of quantum network dynamics modeling using the Markov chain formalism.
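As a minimal example of the Markov chain formalism referred to above, the sketch below computes the expected delivery time, and hence the long-run throughput, of a toy two-link repeater with single-heralded link generation. The per-slot success probability, swap success probability, and discard-both-pairs-on-swap-failure rule are illustrative assumptions, not the paper's exact protocol model.

```python
import numpy as np

def expected_slots_two_link(p, q):
    """
    Expected number of time slots to deliver one end-to-end pair over a
    two-link repeater, modeled as an absorbing Markov chain.
    Transient states: 0 = no link ready, 1 = one link ready, 2 = both ready.
    Per slot, each missing link is generated with probability p; when both
    links are ready, swapping succeeds with probability q, otherwise both
    pairs are discarded (illustrative assumption).
    """
    Q = np.array([
        [(1 - p) ** 2, 2 * p * (1 - p), p ** 2],   # from state 0
        [0.0,          1 - p,           p],        # from state 1
        [1 - q,        0.0,             0.0],      # from state 2 (swap fails)
    ])
    # Expected hitting times of the absorbing state: t = (I - Q)^{-1} * 1
    t = np.linalg.solve(np.eye(3) - Q, np.ones(3))
    return t[0]

p, q = 0.1, 0.5
t = expected_slots_two_link(p, q)
print(f"expected slots: {t:.1f}, long-run throughput: {1 / t:.4f} pairs/slot")
```

By the renewal argument, restarting from the empty state after each delivery makes the long-run throughput the reciprocal of the expected delivery time.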
We study the $d$-dimensional knapsack problem. We are given a set of items, each with a $d$-dimensional cost vector and a profit, along with a $d$-dimensional budget vector. The goal is to select a subset of items that does not exceed the budget in any dimension and maximizes the total profit. A PTAS with running time $n^{\Theta(d/\varepsilon)}$ has long been known for this problem, where $\varepsilon$ is the error parameter and $n$ is the encoding size. Despite decades of active research, the best running time of a PTAS has remained $O(n^{\lceil d/\varepsilon \rceil - d})$. Unfortunately, existing lower bounds only cover the special case of two dimensions ($d = 2$), and do not answer whether there is an $n^{o(d/\varepsilon)}$-time PTAS for larger values of $d$. The status of exact algorithms is similar: there is a simple $O(n \cdot W^d)$-time (exact) dynamic programming algorithm, where $W$ is the maximum budget, but there is no lower bound that explains the strong exponential dependence on $d$. In this work, we show that the running times of the best-known PTAS and exact algorithm cannot be improved, up to a polylogarithmic factor, assuming Gap-ETH. Our techniques are based on a robust reduction from 2-CSP, which embeds 2-CSP constraints into a desired number of dimensions, exhibiting a tight trade-off between $d$ and $\varepsilon$ for most regimes of the parameters. Informally, we obtain the following main results for $d$-dimensional knapsack: no $n^{o(d/\varepsilon \cdot 1/(\log(d/\varepsilon))^2)}$-time $(1-\varepsilon)$-approximation for every $\varepsilon = O(1/\log d)$; no $(n+W)^{o(d/\log d)}$-time exact algorithm (assuming ETH); no $n^{o(\sqrt{d})}$-time $(1-\varepsilon)$-approximation for constant $\varepsilon$; and a $(d \cdot \log W)^{O(d^2)} + n^{O(1)}$-time $\Omega(1/\sqrt{d})$-approximation with a matching $n^{O(1)}$-time lower bound.
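For concreteness, the $O(n \cdot W^d)$ dynamic program mentioned above can be implemented directly; the sketch below iterates over all budget vectors, using a dictionary keyed by budget tuples for readability rather than a $d$-dimensional array.

```python
from itertools import product

def knapsack_dd(items, budget):
    """
    Exact dynamic program for the d-dimensional knapsack problem.
    items:  list of (cost_vector, profit) with non-negative integer costs
    budget: tuple of d non-negative integer budget entries
    Runs in O(n * (W+1)^d * d) time, where W is the maximum budget entry.
    """
    d = len(budget)
    grid = [range(b + 1) for b in budget]
    # dp[b] = best profit achievable with the processed items within budget b
    dp = {b: 0 for b in product(*grid)}
    for cost, profit in items:
        new_dp = dict(dp)
        for b in dp:
            if all(b[i] >= cost[i] for i in range(d)):
                rem = tuple(b[i] - cost[i] for i in range(d))
                new_dp[b] = max(new_dp[b], dp[rem] + profit)
        dp = new_dp
    return dp[tuple(budget)]

items = [((2, 1), 3), ((1, 2), 2), ((3, 3), 5)]
print(knapsack_dd(items, (4, 4)))   # best subset within budget (4, 4) -> 5
```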
The emergence of new spoofing attacks poses an increasing challenge to audio security. Current detection methods often falter when faced with unseen spoofing attacks. Traditional strategies, such as retraining with new data, are not always feasible due to the extensive storage they require. This paper introduces a novel continual learning method, the Continual Audio Defense Enhancer (CADE). First, by utilizing a fixed-size memory to store randomly selected samples from previous datasets, our approach conserves resources and adheres to privacy constraints. We also apply two distillation losses in CADE. Through distillation in the classifiers, CADE ensures that the student model's outputs closely resemble those of the teacher model. This resemblance helps the model retain old information while facing unseen data. We further refine the model's performance with a novel embedding similarity loss that extends across multiple depth layers, facilitating superior positive sample alignment. Experiments conducted on the ASVspoof2019 dataset show that our proposed method outperforms the baseline methods.
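The sketch below is a hypothetical PyTorch rendering of the two loss families described above: a classifier distillation loss that makes the student mimic the teacher's logits, and an embedding similarity loss applied at several depths. The temperature, layer count, and tensor shapes are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, T=2.0):
    # KL divergence between softened teacher and student class distributions
    p_teacher = F.softmax(teacher_logits / T, dim=-1)
    log_p_student = F.log_softmax(student_logits / T, dim=-1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (T * T)

def embedding_similarity_loss(student_feats, teacher_feats):
    # Encourage per-layer student embeddings to align with the teacher's
    loss = 0.0
    for s, t in zip(student_feats, teacher_feats):
        loss = loss + (1.0 - F.cosine_similarity(s, t.detach(), dim=-1)).mean()
    return loss / len(student_feats)

# Example shapes: logits (B, num_classes); per-layer features (B, dim)
s_logits, t_logits = torch.randn(8, 2), torch.randn(8, 2)
s_feats = [torch.randn(8, 64) for _ in range(3)]
t_feats = [torch.randn(8, 64) for _ in range(3)]
total = distillation_loss(s_logits, t_logits) + embedding_similarity_loss(s_feats, t_feats)
print(total.item())
```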
Sharding is essential for improving blockchain scalability, but existing protocols overlook diverse adversarial attacks, limiting transaction throughput. This paper presents Reticulum, a sharding protocol that addresses this issue and boosts blockchain scalability. Reticulum employs a two-phase approach that adapts transaction throughput to the adversarial attacks observed at runtime. It comprises "control" and "process" shards arranged in two layers: process shards contain at least one trustworthy node, while control shards have a majority of trusted nodes. In the first phase, transactions are written to blocks and voted on by nodes in process shards, and unanimously accepted blocks are confirmed. In the second phase, blocks without unanimous acceptance are voted on by control shards; a block is accepted if the majority votes in favor, eliminating first-phase opponents and silent voters. Because unanimous voting in the first phase involves fewer nodes, Reticulum can run more process shards in parallel, while control shards finalize decisions and resolve disputes. Experiments confirm that Reticulum's design provides high transaction throughput and robustness against various network attacks, outperforming existing sharding protocols for blockchain networks.
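The two-phase decision rule can be illustrated with a toy Python function. This is a deliberately simplified sketch (no networking, shard assignment, or dispute handling), not the full Reticulum protocol.

```python
def decide_block(process_votes, control_votes=None):
    """
    Toy sketch of the two-phase decision rule described above.
    process_votes: list of booleans from nodes in a process shard (phase 1)
    control_votes: list of booleans from the control shard (phase 2),
                   only consulted if phase 1 is not unanimous
    """
    # Phase 1: a block is confirmed only if the process shard accepts it
    # unanimously.
    if all(process_votes):
        return "confirmed in phase 1"
    # Phase 2: otherwise the control shard (majority trusted) takes a
    # majority vote to finalize the decision.
    if control_votes is not None and sum(control_votes) > len(control_votes) / 2:
        return "confirmed in phase 2"
    return "rejected"

print(decide_block([True, True, True]))                              # unanimous
print(decide_block([True, False, True], [True] * 5 + [False] * 2))   # majority
```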
This paper investigates two fundamental problems that arise when utilizing Intrinsic Motivation (IM) for reinforcement learning in Reward-Free Pre-Training (RFPT) tasks and Exploration with Intrinsic Motivation (EIM) tasks: 1) how to design an effective intrinsic objective in RFPT tasks, and 2) how to reduce the bias introduced by the intrinsic objective in EIM tasks. Existing IM methods suffer from static skills, limited state coverage, sample inefficiency in RFPT tasks, and suboptimality in EIM tasks. To tackle these problems, we propose \emph{Constrained Intrinsic Motivation (CIM)} for RFPT and EIM tasks, respectively: 1) CIM for RFPT maximizes the lower bound of the conditional state entropy subject to an alignment constraint on the state encoder network for efficient dynamic and diverse skill discovery and state coverage maximization; 2) CIM for EIM leverages constrained policy optimization to adaptively adjust the coefficient of the intrinsic objective to mitigate the distraction from the intrinsic objective. In various MuJoCo robotics environments, we empirically show that CIM for RFPT greatly surpasses fifteen IM methods for unsupervised skill discovery in terms of skill diversity, state coverage, and fine-tuning performance. Additionally, we showcase the effectiveness of CIM for EIM in redeeming intrinsic rewards when task rewards are exposed from the beginning. Our code is available at //github.com/x-zheng16/CIM.
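A hypothetical sketch, in the spirit of constrained policy optimization, of adapting the intrinsic-reward coefficient so that exploration does not distract from the task objective. The target return, step size, cap, and the specific update rule are illustrative assumptions, not the paper's algorithm.

```python
def update_intrinsic_coef(beta, extrinsic_return, target_return,
                          lr=0.01, beta_max=1.0):
    """Dual-style multiplier update: grow the intrinsic weight while the
    task-return constraint (extrinsic_return >= target_return) holds,
    and shrink it when the constraint is violated."""
    slack = extrinsic_return - target_return        # < 0 means violated
    beta = min(beta_max, max(0.0, beta + lr * slack))
    return beta

beta = 0.5
for extrinsic_return in [40.0, 80.0, 120.0, 150.0]:   # task return improving
    beta = update_intrinsic_coef(beta, extrinsic_return, target_return=100.0)
    print(f"extrinsic={extrinsic_return:5.1f}  intrinsic coef={beta:.3f}")
```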
Several applications in time series forecasting require predicting multiple steps ahead. Despite the vast amount of literature on the topic, both classical and recent deep learning based approaches have mostly focused on optimising performance averaged over the predicted window. We observe that this can lead to disparate distributions of errors across forecasting steps, especially for recent transformer architectures trained on popular forecasting benchmarks. That is, optimising performance on average can lead to undesirably large errors at specific time-steps. In this work, we present a Constrained Learning approach for long-term time series forecasting that aims to find the best model in terms of average performance while respecting a user-defined upper bound on the loss at each time-step. We call our approach loss shaping constraints because it imposes constraints on the loss at each time step, and we leverage recent duality results to show that, despite its non-convexity, the resulting problem has a bounded duality gap. We propose a practical Primal-Dual algorithm to tackle it, and demonstrate that the proposed approach exhibits competitive average performance on time series forecasting benchmarks, while shaping the distribution of errors across the predicted window.
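Below is a minimal sketch of the primal-dual idea, assuming one dual variable per forecasting step and a squared-error loss; the linear model, bound eps, and step sizes are illustrative, not the authors' exact algorithm.

```python
import torch

def primal_dual_step(model, optimizer, duals, x, y, eps, dual_lr=0.01):
    preds = model(x)                                   # (B, horizon)
    step_loss = ((preds - y) ** 2).mean(dim=0)         # per-step loss, (horizon,)
    # Primal update: minimize average loss plus dual-weighted constraint terms
    lagrangian = step_loss.mean() + (duals * (step_loss - eps)).sum()
    optimizer.zero_grad()
    lagrangian.backward()
    optimizer.step()
    # Dual update: gradient ascent on constraint violations, projected to >= 0
    with torch.no_grad():
        duals.add_(dual_lr * (step_loss.detach() - eps)).clamp_(min=0.0)
    return step_loss.detach()

horizon = 24
model = torch.nn.Linear(16, horizon)
optimizer = torch.optim.SGD(model.parameters(), lr=0.05)
duals = torch.zeros(horizon)                # one dual variable per time-step
x, y = torch.randn(32, 16), torch.randn(32, horizon)
for _ in range(100):
    step_loss = primal_dual_step(model, optimizer, duals, x, y, eps=1.0)
```

Steps whose loss exceeds the bound accumulate larger dual weights, so the primal updates are pushed to reduce exactly those errors, which is the loss-shaping effect described above.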
Multi-relation Question Answering is a challenging task, as it requires elaborate analysis of the question and reasoning over multiple fact triples in the knowledge base. In this paper, we present a novel model called the Interpretable Reasoning Network that employs an interpretable, hop-by-hop reasoning process for question answering. The model dynamically decides which part of an input question should be analyzed at each hop; predicts a relation that corresponds to the current parsed results; utilizes the predicted relation to update the question representation and the state of the reasoning process; and then drives the next-hop reasoning. Experiments show that our model yields state-of-the-art results on two datasets. More interestingly, the model can offer traceable and observable intermediate predictions for reasoning analysis and failure diagnosis.
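In the spirit of the hop-by-hop process described above, here is a schematic PyTorch sketch of a reasoning loop that analyzes part of the question, predicts a relation, and updates the question representation and state at each hop. The module names, the subtraction-based question update, and the dimensions are hypothetical stand-ins rather than the paper's exact architecture.

```python
import torch
import torch.nn as nn

class HopReasoner(nn.Module):
    def __init__(self, dim, num_relations, num_hops=3):
        super().__init__()
        self.num_hops = num_hops
        self.analyze = nn.Linear(2 * dim, dim)            # pick what to parse next
        self.rel_classifier = nn.Linear(dim, num_relations)
        self.rel_embed = nn.Embedding(num_relations, dim)
        self.state_update = nn.GRUCell(dim, dim)

    def forward(self, question_repr, state):
        relations = []                                     # traceable hop outputs
        for _ in range(self.num_hops):
            # 1) decide which part of the question to analyze at this hop
            focus = torch.tanh(self.analyze(torch.cat([question_repr, state], dim=-1)))
            # 2) predict the relation for the current parse
            rel = self.rel_classifier(focus).argmax(dim=-1)
            relations.append(rel)
            # 3) use the predicted relation to update question repr and state
            rel_vec = self.rel_embed(rel)
            question_repr = question_repr - rel_vec        # crude "consume" step
            state = self.state_update(rel_vec, state)
        return relations

model = HopReasoner(dim=64, num_relations=10)
q, s = torch.randn(2, 64), torch.zeros(2, 64)
print([r.tolist() for r in model(q, s)])   # per-hop relation predictions
```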