In this paper, the SHIP4LLRF (Scalable Hardware Integrated Platform for LLRF), based on the 6U VPX standard, was preliminarily designed; it includes a 6U mother board and two HPC FPGA mezzanine cards (FMCs). The ADC and DAC FMCs are based on the ADS54J60 from TI and the LTC2000Y-16 from ADI, respectively. The system mother board is based on a Xilinx Kintex UltraScale KU060 and also features 64-bit DDR4 SDRAM, QSFP, and USB 3.0 interfaces. Each FMC connector is assigned 58 pairs of LVDS-standard I/Os and 8 pairs of GTH high-speed serial lanes. In addition, the mother board is equipped with the self-developed ZYNQBee2 module, based on the ZYNQ7010, for slow control such as EPICS. All ADC and DAC raw data in each SHIP4LLRF are losslessly compressed without triggering and transmitted to the process board. A scalar quantization method, currently in development, is used for lossless compression of the ADC raw data; the process board decompresses the ADC data and performs a digital algorithm to measure the amplitude and phase of the high-frequency signal. This design is scalable for testing and upgradability; meanwhile, the trigger-less data transmission enables the system to participate in both local (rack-scale) and accelerator-wide communication networks.
This paper proposes a delay mechanism to mitigate the impact of cross-language latency differences in the gRPC framework (a high-performance, open-source, universal remote procedure call (RPC) framework) on the performance of agents in DareFightingICE, a fighting-game research platform. The study finds that gRPC latency differences between Java and Python can significantly affect real-time decision-making. Without a delay mechanism, Java-based agents outperform Python-based ones because of the lower gRPC latency on the Java platform. With the proposed delay mechanism, however, Java-based and Python-based agents exhibit similar performance, enabling a fair comparison between agents developed in different programming languages. This work thus underscores the importance of accounting for gRPC latency when developing and evaluating agents in DareFightingICE, and the insights gained could extend to other gRPC-based applications.
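The core idea of such a delay mechanism can be sketched as follows: pad each RPC round trip up to a common wall-clock budget so the platform with lower native gRPC latency gains no timing advantage. This is an illustrative sketch only; the function name and the fixed budget are assumptions, not the paper's implementation.

```python
import time

TARGET_LATENCY_S = 0.004  # assumed common per-call latency budget (illustrative)

def equalize_latency(rpc_call, *args, **kwargs):
    """Invoke an RPC and sleep so every call takes at least a fixed wall-clock time.

    The faster platform (e.g. Java-side gRPC) is padded up to the common budget,
    so agents on slower platforms (e.g. Python) compete on equal footing.
    """
    start = time.perf_counter()
    result = rpc_call(*args, **kwargs)
    elapsed = time.perf_counter() - start
    if elapsed < TARGET_LATENCY_S:
        time.sleep(TARGET_LATENCY_S - elapsed)
    return result
```

In practice the budget would be chosen at or above the worst observed per-language latency, so that all agents see the same effective response time.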
The updated version of this paper has been published in The Australasian Journal of Logic. You can access the paper at the following link: //ojs.victoria.ac.nz/ajl/article/view/7696. This paper shows that the Hilbert system $(\mathbf{C+J})^{-}$, given by del Cerro and Herzig (1996), is semantically incomplete. The system was proposed as a proof theory for a Kripke semantics combining intuitionistic and classical propositional logic, obtained by adding the natural semantic clause for classical implication to intuitionistic Kripke semantics. Although the Hilbert system $(\mathbf{C+J})^{-}$ contains intuitionistic modus ponens as a rule, it does not contain classical modus ponens. This paper gives an argument showing that the system $(\mathbf{C+J})^{-}$ is semantically incomplete precisely because of the absence of classical modus ponens. Our method is based on the logic of paradox, a paraconsistent logic proposed by Priest (1979).
This paper presents HandFi, which constructs hand skeletons with practical WiFi devices. Unlike previous WiFi hand-sensing systems, which primarily employ predefined gestures for pattern matching, HandFi constructs the hand skeleton and can thereby enable a variety of downstream WiFi-based hand-sensing applications in gaming, healthcare, and smart homes. Deriving the skeleton from WiFi signals is challenging, especially because the palm is a dominant reflector compared with the fingers. HandFi develops a novel multi-task learning neural network with a series of customized loss functions to capture low-level hand information from WiFi signals. During offline training, HandFi takes raw WiFi signals as input and uses a Leap Motion device to provide supervision. During online use, with commercial WiFi alone, HandFi is capable of producing 2D hand masks as well as 3D hand poses. We demonstrate that HandFi can serve as a foundation model enabling developers to build various applications, such as finger tracking and sign language recognition, and that it outperforms existing WiFi-based solutions. Artifacts can be found at: //github.com/SIJIEJI/HandFi
In this paper, we present our finding that prepending a Task-Agnostic Prefix Prompt (TAPP) to the input improves the instruction-following ability of various Large Language Models (LLMs) during inference. TAPP differs from canonical prompts for LLMs in that it is a fixed prompt prepended to the beginning of every input, regardless of the target task, for zero-shot generalization. We observe that both base LLMs (i.e., LLMs not fine-tuned to follow instructions) and instruction-tuned models benefit from TAPP, with average improvements of 34.58% and 12.26%, respectively. This implies that the instruction-following ability of LLMs can be improved at inference time with a fixed prompt constructed with simple heuristics. We hypothesize that TAPP helps language models better estimate the output distribution by focusing more on the instruction of the target task during inference. In other words, this ability does not seem to be sufficiently activated not only in base LLMs but also in many instruction-fine-tuned LLMs. All experiments are reproducible from //github.com/seonghyeonye/TAPP.
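Mechanically, the approach amounts to concatenating one fixed string to every input. The sketch below illustrates this; the prefix text is a placeholder, not the prompt actually used in the paper.

```python
# Minimal sketch of Task-Agnostic Prefix Prompting: the same fixed prefix is
# prepended to every input, regardless of the target task. The demonstration
# text below is an illustrative placeholder.
TAPP = (
    "Definition: Answer the following question.\n"
    "Input: What is the capital of France?\n"
    "Output: Paris\n\n"
)

def build_prompt(task_input: str) -> str:
    """Prepend the fixed task-agnostic prefix to any task input."""
    return TAPP + task_input
```

Because the prefix is task-agnostic, it is constructed once and reused verbatim across all evaluation tasks, which is what distinguishes it from per-task prompt engineering.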
This paper addresses the challenging problem of scheduling coflows with release times, with the objective of minimizing the total weighted completion time. Previous literature has predominantly concentrated on establishing the scheduling order of coflows. Advancing this research, we optimize performance by also determining the scheduling order of individual flows. The proposed approximation algorithm achieves approximation ratios of $3$ and $2+\frac{1}{LB}$ for arbitrary and zero release times, respectively, where $LB$ is the minimum lower bound on coflow completion time. To further improve time complexity, we streamline the linear program by employing an interval-indexed relaxation, thereby reducing the number of variables. As a result, for any $\epsilon>0$, the approximation algorithm achieves approximation ratios of $3 + \epsilon$ and $2 + \epsilon$ for arbitrary and zero release times, respectively. Notably, these results surpass the previously best-known approximation ratios of 5 and 4 for arbitrary and zero release times, respectively, established by Shafiee and Ghaderi.
This paper presents a Gaussian Process (GP) framework, a non-parametric technique widely used for regression and classification tasks, to address inverse problems in mean-field games (MFGs). By leveraging GPs, we aim to recover agents' strategic actions and the environment's configuration from partial and noisy observations of the population of agents and the setup of the environment. Our method is a probabilistic tool for inferring the behaviors of agents in MFGs from data in scenarios where a comprehensive dataset is either inaccessible or contaminated by noise.
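As a minimal illustration of the GP machinery underlying such an approach, the toy example below recovers a 1-D function from noisy observations via the GP posterior mean with an RBF kernel. It is a generic regression sketch under assumed hyperparameters, not the MFG inverse solver itself.

```python
import numpy as np

def gp_posterior_mean(X_train, y_train, X_test, length=0.5, noise=0.1):
    """Posterior mean of a GP with an RBF kernel, given noisy 1-D observations."""
    def rbf(A, B):
        # squared-exponential kernel between two 1-D point sets
        d2 = (A[:, None] - B[None, :]) ** 2
        return np.exp(-d2 / (2 * length ** 2))

    # K + sigma_n^2 I accounts for observation noise on the training targets
    K = rbf(X_train, X_train) + noise ** 2 * np.eye(len(X_train))
    K_s = rbf(X_test, X_train)
    return K_s @ np.linalg.solve(K, y_train)
```

In the inverse-problem setting, the same posterior machinery conditions unknown quantities (e.g. agents' cost or environment parameters) on whatever partial, noisy observations are available.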
This paper proposes quality of experience (QoE) models, based on the expectations and/or perceptions of 5G users, for estimating the mean opinion score (MOS) of real-time or interactive services/applications with high reliability. To this end, based on the fundamental QoE concept, the analytic hierarchy process (AHP) decision-making technique is applied.
We suggest applying the Dual Use Research of Concern (DURC) framework, originally designed for the life sciences, to the domain of generative AI, with a specific focus on Large Language Models (LLMs). Given its demonstrated advantages and drawbacks in biological research, we believe the DURC criteria can be effectively redefined for LLMs, potentially contributing to improved AI governance. Acknowledging the balance that must be struck when employing the DURC framework, we highlight its important political role in enhancing societal awareness of the impact of generative AI. Finally, we offer a series of specific recommendations for applying the DURC approach to LLM research.
In this paper, we propose a novel Feature Decomposition and Reconstruction Learning (FDRL) method for effective facial expression recognition. We view expression information as the combination of shared information (expression similarities) across different expressions and unique information (expression-specific variations) for each expression. More specifically, FDRL consists of two crucial networks: a Feature Decomposition Network (FDN) and a Feature Reconstruction Network (FRN). FDN first decomposes the basic features extracted by a backbone network into a set of facial action-aware latent features to model expression similarities. FRN then captures the intra-feature and inter-feature relationships among the latent features to characterize expression-specific variations and reconstructs the expression feature. To this end, two modules, an intra-feature relation modeling module and an inter-feature relation modeling module, are developed in FRN. Experimental results on both in-the-lab databases (CK+, MMI, and Oulu-CASIA) and in-the-wild databases (RAF-DB and SFEW) show that the proposed FDRL method consistently achieves higher recognition accuracy than several state-of-the-art methods, clearly highlighting the benefit of feature decomposition and reconstruction for classifying expressions.
In this paper, we propose applying a meta-learning approach to low-resource automatic speech recognition (ASR). We formulate ASR for different languages as different tasks and meta-learn the initialization parameters from many pretraining languages to achieve fast adaptation to an unseen target language, via the recently proposed model-agnostic meta-learning algorithm (MAML). We evaluate the proposed approach using six languages as pretraining tasks and four languages as target tasks. Preliminary results show that the proposed method, MetaASR, significantly outperforms the state-of-the-art multitask pretraining approach on all target languages, across different combinations of pretraining languages. In addition, owing to MAML's model-agnostic property, this work also opens a new research direction of applying meta-learning to more speech-related applications.
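The MAML idea of meta-learning an initialization that adapts quickly per task can be sketched on a toy problem. The example below is a first-order MAML step on a family of linear-regression tasks; it illustrates the inner (per-task adaptation) and outer (meta-update) loops only, and is not the MetaASR model or training setup.

```python
import numpy as np

def maml_step(theta, tasks, inner_lr=0.01, outer_lr=0.001):
    """One first-order MAML meta-update on a toy linear-regression family.

    Each task is a pair (X, y); the per-task loss is the mean squared error
    of the linear model X @ theta. Illustrative sketch only.
    """
    meta_grad = np.zeros_like(theta)
    for X, y in tasks:
        # inner loop: one gradient step adapting the initialization to this task
        grad = 2 * X.T @ (X @ theta - y) / len(y)
        adapted = theta - inner_lr * grad
        # outer loop: gradient of the post-adaptation loss
        # (first-order approximation, ignoring second derivatives)
        meta_grad += 2 * X.T @ (X @ adapted - y) / len(y)
    return theta - outer_lr * meta_grad / len(tasks)
```

In the ASR setting, each "task" would instead be a language-specific recognition problem, and the meta-learned initialization is what enables fast adaptation to an unseen target language.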