In this paper, we focus on testing multivariate normality using the BHEP test with data that are missing completely at random. Our objective is twofold: first, to gain insight into the asymptotic behavior of BHEP test statistics under two widely used approaches for handling missing data, namely complete-case analysis and imputation, and second, to compare the power of the test statistics under these approaches. We observe that under the imputation approach, the affine invariance of the test statistics is not preserved. To address this issue, we propose an appropriate bootstrap algorithm for approximating p-values. Extensive simulation studies demonstrate that both mean and median imputation exhibit greater power than complete-case analysis, and they open some questions for further research.
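To make the test statistic concrete, below is a minimal sketch of the classical BHEP statistic computed on complete cases, assuming the standard formulation with smoothing parameter beta and scaled residuals built from the sample mean and covariance; it is an illustration, not the authors' implementation, and the function name and defaults are our own.

```python
import numpy as np

def bhep_statistic(X, beta=1.0):
    """Classical BHEP statistic for multivariate normality on an (n, d) sample.

    Rows containing missing values are dropped first (complete-case analysis).
    """
    X = np.asarray(X, dtype=float)
    X = X[~np.isnan(X).any(axis=1)]                   # keep complete cases only
    n, d = X.shape
    S = np.cov(X, rowvar=False, bias=True)            # ML estimate of the covariance
    L = np.linalg.cholesky(S)
    Y = np.linalg.solve(L, (X - X.mean(axis=0)).T).T  # scaled residuals (Mahalanobis scaling)
    sq = (Y ** 2).sum(axis=1)
    pair = np.maximum(sq[:, None] + sq[None, :] - 2.0 * Y @ Y.T, 0.0)  # ||Y_j - Y_k||^2
    term1 = np.exp(-0.5 * beta**2 * pair).sum() / n
    term2 = 2.0 * (1 + beta**2) ** (-d / 2) * np.exp(-0.5 * beta**2 * sq / (1 + beta**2)).sum()
    term3 = n * (1 + 2 * beta**2) ** (-d / 2)
    return term1 - term2 + term3
```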
Architectural simulators play a vital role in RISC-V research, providing a crucial platform for workload evaluation without the need for costly physical prototypes. They serve as a dynamic environment for exploring innovative architectural concepts, enabling swift iteration and thorough analysis of performance metrics. As deep learning algorithms become increasingly pervasive, it is essential to benchmark new architectures with machine learning workloads. The diverse computational kernels used in deep learning algorithms highlight the necessity of a comprehensive compilation toolchain to map them onto target hardware platforms. This study evaluates the performance of a wide array of machine learning workloads on RISC-V architectures using gem5, an open-source architectural simulator. Leveraging an open-source compilation toolchain based on Multi-Level Intermediate Representation (MLIR), the research presents benchmarking results specifically focused on deep learning inference workloads. Additionally, the study sheds light on current limitations of gem5 when simulating RISC-V architectures, offering insights for future development and refinement.
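As a pointer to what such a simulation setup can look like, here is a minimal sketch using gem5's standard-library Python API to run a RISC-V binary in syscall-emulation mode; the class and resource names follow the public gem5 tutorial examples and may differ between gem5 versions, and this is not the configuration used in the study.

```python
# Minimal gem5 standard-library configuration for a RISC-V SE-mode run
# (names follow the public gem5 tutorial examples; versions may differ).
from gem5.components.boards.simple_board import SimpleBoard
from gem5.components.cachehierarchies.classic.no_cache import NoCache
from gem5.components.memory import SingleChannelDDR3_1600
from gem5.components.processors.cpu_types import CPUTypes
from gem5.components.processors.simple_processor import SimpleProcessor
from gem5.isas import ISA
from gem5.resources.resource import obtain_resource
from gem5.simulate.simulator import Simulator

board = SimpleBoard(
    clk_freq="1GHz",
    processor=SimpleProcessor(cpu_type=CPUTypes.TIMING, isa=ISA.RISCV, num_cores=1),
    memory=SingleChannelDDR3_1600(size="512MB"),
    cache_hierarchy=NoCache(),
)
# A statically linked RISC-V workload (e.g., an MLIR-compiled inference kernel)
# would be supplied here; "riscv-hello" is the stock tutorial resource.
board.set_se_binary_workload(obtain_resource("riscv-hello"))
Simulator(board=board).run()
```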
This work concerns the enrichment of Discontinuous Galerkin (DG) bases, so that the resulting scheme provides a much better approximation of steady solutions to hyperbolic systems of balance laws. The basis enrichment leverages a prior, an approximation of the steady solution, which we propose to compute using a Physics-Informed Neural Network (PINN). To that end, after presenting the classical DG scheme, we show how to enrich its basis with a prior. Convergence results and error estimates follow, in which we prove that the prior-enriched basis does not change the order of convergence and that the error constant is improved. To construct the prior, we elect to use parametric PINNs, which we introduce, along with the algorithms to construct a prior from them. We finally perform several validation experiments on four different hyperbolic balance laws to highlight the properties of the scheme. Namely, we show that the DG scheme with prior is much more accurate on steady solutions than the DG scheme without prior, while retaining the same approximation quality on unsteady solutions.
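To illustrate the prior-construction ingredient, the following is a minimal PyTorch sketch of a (non-parametric) PINN trained on the residual of a steady 1D balance law, here a Burgers-type flux with a source term and boundary condition chosen purely for demonstration; it is not the authors' parametric-PINN code.

```python
import torch

# Illustrative steady balance law: d/dx (u^2 / 2) = s(x) on (0, 1), with u(0) = u0.
u0, n_colloc = 1.0, 256
source = lambda x: torch.cos(2 * torch.pi * x)      # made-up source term for the demo

net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(5000):
    x = torch.rand(n_colloc, 1, requires_grad=True)  # random collocation points
    flux = 0.5 * net(x) ** 2
    dflux_dx = torch.autograd.grad(flux.sum(), x, create_graph=True)[0]
    residual = dflux_dx - source(x)                  # steady balance-law residual
    bc = net(torch.zeros(1, 1)) - u0                 # Dirichlet condition u(0) = u0
    loss = (residual ** 2).mean() + (bc ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

# net now provides a cheap approximation of the steady solution, usable as a prior.
```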
Triply periodic minimal surfaces (TPMS) are emerging as an important way of designing microstructures. However, there has been limited use of commercial CAD/CAM/CAE software packages for TPMS design and manufacturing. This is mainly because TPMS are typically described in the functional representation (F-rep) format, while modern CAD/CAM/CAE tools are built upon the boundary representation (B-rep) format. One possible solution to this gap is translating TPMS to STEP, the standard data exchange format of CAD/CAM/CAE. Following this direction, this paper proposes a new translation method with error-controlling and $C^2$ continuity-preserving features. It is based on an approximation-error-driven TPMS sampling algorithm and a constrained-PIA algorithm. The sampling algorithm controls the deviation between the original and translated models; with it, an error bound of $2\epsilon$ on the deviation can be ensured if two conditions called $\epsilon$-density and $\epsilon$-approximation are satisfied. The constrained-PIA algorithm enforces $C^2$ continuity constraints during TPMS approximation while attaining high efficiency. A theoretical convergence proof of this algorithm is also given. The effectiveness of the translation method has been demonstrated by a series of examples and comparisons.
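For orientation, here is a minimal sketch of evaluating and sampling a gyroid-type TPMS implicit (F-rep) function on a uniform grid; the tolerance below is only a crude stand-in for the paper's $\epsilon$-density and $\epsilon$-approximation conditions, and the error-driven sampling and constrained-PIA fitting themselves are not reproduced.

```python
import numpy as np

def gyroid(x, y, z):
    """Classic gyroid-type TPMS level-set function; the zero level set is the surface."""
    return np.sin(x) * np.cos(y) + np.sin(y) * np.cos(z) + np.sin(z) * np.cos(x)

def sample_near_surface(n=64, tol=0.05):
    """Naive uniform sampling: keep grid points whose implicit value is close to zero.

    `tol` is a crude density/closeness control for illustration only, not the
    epsilon-density / epsilon-approximation conditions defined in the paper.
    """
    t = np.linspace(0.0, 2.0 * np.pi, n)
    X, Y, Z = np.meshgrid(t, t, t, indexing="ij")
    F = gyroid(X, Y, Z)
    mask = np.abs(F) < tol
    return np.stack([X[mask], Y[mask], Z[mask]], axis=1)

pts = sample_near_surface()
print(pts.shape)   # points lying approximately on one period of the gyroid
```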
In this paper, we propose a weak Galerkin (WG) finite element method for solving singularly perturbed convection-diffusion problems on a Bakhvalov-type mesh in 2D. Our method is flexible and allows the use of discontinuous approximation functions on the mesh. An error estimate is established in a suitable norm, and the optimal convergence order is obtained. Finally, numerical experiments are given to support the theory and to show the efficiency of the proposed method.
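For readers unfamiliar with the mesh, the sketch below generates one common Bakhvalov-type grading on [0, 1] for a boundary layer at x = 0 (mirror it for a layer at x = 1 and take tensor products for 2D); the exact generating function and parameter conventions in the paper may differ.

```python
import numpy as np

def bakhvalov_type_mesh(N, eps, sigma=2.0, beta=1.0):
    """One common Bakhvalov-type mesh on [0, 1] for a layer at x = 0 (illustrative).

    x_i = chi(i/N), with chi logarithmic (layer-resolving) on [0, 1/2] and linear
    on [1/2, 1]. sigma is tied to the scheme's order and beta to a lower bound on
    the convection coefficient in the usual convention; the paper's exact grading
    may differ in details.
    """
    t = np.arange(N + 1) / N
    tau = -(sigma * eps / beta) * np.log(eps)        # chi(1/2): transition point
    x = np.empty_like(t)
    left = t <= 0.5
    x[left] = -(sigma * eps / beta) * np.log(1.0 - 2.0 * (1.0 - eps) * t[left])
    x[~left] = tau + 2.0 * (1.0 - tau) * (t[~left] - 0.5)
    return x

# Tensor-product 2D mesh from two 1D gradings:
x = bakhvalov_type_mesh(64, eps=1e-4)
y = bakhvalov_type_mesh(64, eps=1e-4)
```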
Adapting pre-trained foundation models for various downstream tasks has become prevalent in artificial intelligence. Due to the vast number of tasks and high costs, adjusting all parameters becomes unfeasible. To mitigate this, several fine-tuning techniques have been developed to update the pre-trained model weights in a more resource-efficient manner, such as through low-rank adjustments. Yet, almost all of these methods focus on linear weights, neglecting the intricacies of parameter spaces in higher dimensions, such as 4D. Alternatively, some methods can be adapted to high-dimensional parameter spaces by compressing changes in the original space into two dimensions and then employing low-rank matrix decomposition. However, these approaches destroy the structural integrity of the involved high-dimensional spaces. To tackle the diversity of dimensional spaces across different foundation models and provide a more precise representation of the changes within these spaces, this paper introduces a generalized parameter-efficient fine-tuning framework, FLoRA, designed for parameter spaces of various dimensions. Specifically, using Tucker decomposition, FLoRA posits that the change in each parameter space is based on a low-rank core space that maintains a topological structure consistent with the original space. It then models the change through this core space, together with corresponding weights, to reconstruct alterations in the original space. FLoRA effectively preserves the structural integrity of the change in the original N-dimensional parameter space while decomposing it via low-rank tensor decomposition. Extensive experiments on computer vision, natural language processing, and multi-modal tasks validate FLoRA's effectiveness. Codes are available at //github.com/SJTU-DeepVisionLab/FLoRA.
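To give a flavor of Tucker-style low-rank updates for a 4D convolution kernel, here is a simplified PyTorch sketch; the module and parameter names are ours, the core/factor initialization is illustrative, and the official implementation is the linked FLoRA repository.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TuckerDeltaConv2d(nn.Module):
    """Illustrative Tucker-style low-rank update for a frozen 4D conv kernel.

    delta_W = core x_1 U1 x_2 U2 x_3 U3 x_4 U4, reconstructed to the kernel's
    shape (out_ch, in_ch, kH, kW). A sketch of the general idea only.
    """
    def __init__(self, conv: nn.Conv2d, ranks=(4, 4, 2, 2)):
        super().__init__()
        self.conv = conv
        for p in self.conv.parameters():
            p.requires_grad_(False)                    # frozen pre-trained weights
        dims = conv.weight.shape                       # (out_ch, in_ch, kH, kW)
        self.core = nn.Parameter(torch.zeros(*ranks))  # low-rank core tensor (zero init)
        self.factors = nn.ParameterList(
            [nn.Parameter(torch.randn(d, r) * 0.01) for d, r in zip(dims, ranks)]
        )

    def delta_weight(self):
        g = self.core
        g = torch.einsum('abcd,ia->ibcd', g, self.factors[0])
        g = torch.einsum('ibcd,jb->ijcd', g, self.factors[1])
        g = torch.einsum('ijcd,kc->ijkd', g, self.factors[2])
        g = torch.einsum('ijkd,ld->ijkl', g, self.factors[3])
        return g

    def forward(self, x):
        w = self.conv.weight + self.delta_weight()     # frozen kernel + low-rank change
        return F.conv2d(x, w, self.conv.bias, self.conv.stride,
                        self.conv.padding, self.conv.dilation, self.conv.groups)
```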
Contraction coefficients give a quantitative strengthening of the data processing inequality. As such, they have many natural applications whenever closer analysis of information processing is required. However, it is often challenging to calculate these coefficients. As a remedy, we discuss a quantum generalization of Doeblin coefficients. These give an efficiently computable upper bound on many contraction coefficients. We prove several properties and discuss generalizations and applications. In particular, we give additional, stronger bounds: one specifically for PPT channels and one for general channels based on a constraint relaxation. Additionally, we introduce reverse Doeblin coefficients that bound certain expansion coefficients.
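For reference, the classical Doeblin coefficient of a (classical) channel $W$ and the standard contraction bound it provides can be stated as follows; the quantum generalization studied in the paper extends this minorization idea to quantum channels.

$$
\alpha(W) \;=\; \sum_{y} \min_{x} W(y \mid x),
\qquad
\eta_{\mathrm{TV}}(W) \;=\; \sup_{P \neq Q} \frac{\lVert W P - W Q \rVert_{\mathrm{TV}}}{\lVert P - Q \rVert_{\mathrm{TV}}} \;\le\; 1 - \alpha(W).
$$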
Latent variable models serve as powerful tools for inferring underlying dynamics from observed neural activity. However, due to the absence of ground truth data, prediction benchmarks are often employed as proxies. In this study, we reveal the limitations of the widely used 'co-smoothing' prediction framework and propose an improved few-shot prediction approach that encourages more accurate latent dynamics. Using a student-teacher setup with Hidden Markov Models, we demonstrate that the space of high co-smoothing models can encompass models with arbitrary extraneous dynamics within their latent representations. To address this, we introduce a secondary metric, a few-shot version of co-smoothing, which involves performing regression from the latent variables to held-out channels in the data using fewer trials. Our results indicate that among models with near-optimal co-smoothing, those with extraneous dynamics underperform in few-shot co-smoothing compared to 'minimal' models devoid of such dynamics. We also provide analytical insights into the origin of this phenomenon. We further validate our findings on real neural data using two state-of-the-art methods: LFADS and STNDT. In the absence of ground truth, we suggest a proxy measure to quantify extraneous dynamics. By cross-decoding the latent variables of all model pairs with high co-smoothing, we identify models with minimal extraneous dynamics. We find a correlation between few-shot co-smoothing performance and this new measure. In summary, we present a novel prediction metric designed to yield latent variables that more accurately reflect the ground truth, offering a significant improvement for latent dynamics inference.
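As a rough illustration of the few-shot step, the sketch below regresses from a model's inferred latents to held-out channels using only k trials and scores on the remaining trials; it uses ridge regression and $R^2$ as a simple stand-in for the likelihood-based co-smoothing metric, and the variable names and defaults are ours.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score

def few_shot_cosmoothing_score(latents, heldout, k_trials, alpha=1.0, seed=0):
    """Illustrative few-shot co-smoothing proxy.

    latents: (n_trials, n_bins, n_latent) model-inferred latent trajectories
    heldout: (n_trials, n_bins, n_heldout) held-out channel activity
    Fits a regression on k_trials trials and scores prediction on the rest.
    """
    rng = np.random.default_rng(seed)
    n_trials = latents.shape[0]
    train = rng.choice(n_trials, size=k_trials, replace=False)
    test = np.setdiff1d(np.arange(n_trials), train)

    flat = lambda a, idx: a[idx].reshape(-1, a.shape[-1])  # stack trials and time bins
    reg = Ridge(alpha=alpha).fit(flat(latents, train), flat(heldout, train))
    pred = reg.predict(flat(latents, test))
    return r2_score(flat(heldout, test), pred)
```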
In this paper, we address the problem of designing an experimental plan with both discrete and continuous factors under fairly general parametric statistical models. We propose a new algorithm, named ForLion, to search for locally optimal approximate designs under the D-criterion. The algorithm performs an exhaustive search in a design space with mixed factors while maintaining high efficiency and reducing the number of distinct experimental settings. Its optimality is guaranteed by the general equivalence theorem. We present the relevant theoretical results for multinomial logit models (MLM) and generalized linear models (GLM), and demonstrate the superiority of our algorithm over state-of-the-art design algorithms using real-life experiments under MLM and GLM. Our simulation studies show that the ForLion algorithm can reduce the number of experimental settings by 25% or improve the relative efficiency of the designs by 17.5% on average. Our algorithm can help experimenters reduce the time cost, the usage of experimental devices, and thus the total cost of their experiments, while preserving the high efficiency of the designs.
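To make the optimization target concrete, here is a small sketch of the D-criterion objective for an approximate design under a logistic GLM with a logit link; the model, names, and intercept convention are illustrative, and this is not the ForLion implementation.

```python
import numpy as np

def logistic_info_matrix(design_points, weights, theta):
    """Fisher information of an approximate design for a logistic GLM (illustrative).

    design_points: (k, d) candidate settings; weights: (k,) design weights summing to 1;
    theta: (d + 1,) coefficients including an intercept.
    """
    X = np.hstack([np.ones((len(design_points), 1)), np.asarray(design_points, dtype=float)])
    p = 1.0 / (1.0 + np.exp(-(X @ theta)))
    nu = np.asarray(weights) * p * (1.0 - p)          # GLM weights under the logit link
    return X.T @ (nu[:, None] * X)

def log_d_criterion(design_points, weights, theta):
    """log det of the information matrix: the objective a D-optimal search maximizes."""
    sign, logdet = np.linalg.slogdet(logistic_info_matrix(design_points, weights, theta))
    return logdet if sign > 0 else -np.inf
```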
By comparing the constructions of block encodings given in [1-4], we propose a way to extract dequantizability from advancements in dequantization techniques led by Tang, as in [5]. We then apply this notion to the sparse-access input model, which is known to be BQP-complete in general and is therefore believed to be un-dequantizable. Our goal is to challenge this belief by examining instances of the sparse-access input model, particularly their input matrices. In conclusion, this paper develops a scheme for verifying dequantizability that can be applied whenever an input is given.
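For reference, the standard notion of block encoding underlying such constructions is the following: a unitary $U$ acting on $a$ ancilla qubits plus the system is an $(\alpha, a, \epsilon)$-block-encoding of a matrix $A$ if

$$
\left\lVert A - \alpha \left( \langle 0 |^{\otimes a} \otimes I \right) U \left( | 0 \rangle^{\otimes a} \otimes I \right) \right\rVert \;\le\; \epsilon .
$$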
In this paper, we propose a novel multi-task learning architecture, which incorporates recent advances in attention mechanisms. Our approach, the Multi-Task Attention Network (MTAN), consists of a single shared network containing a global feature pool, together with task-specific soft-attention modules, which are trainable in an end-to-end manner. These attention modules allow for learning of task-specific features from the global pool, whilst simultaneously allowing for features to be shared across different tasks. The architecture can be built upon any feed-forward neural network, is simple to implement, and is parameter efficient. Experiments on the CityScapes dataset show that our method outperforms several baselines in both single-task and multi-task learning, and is also more robust to the various weighting schemes in the multi-task loss function. We further explore the effectiveness of our method through experiments over a range of task complexities, and show how our method scales well with task complexity compared to baselines.
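As an illustration of the soft-attention idea, the PyTorch sketch below applies a learned per-task mask to a shared feature map; it is a simplified, hypothetical rendering of a single attention block and omits details such as how masks are chained across backbone stages.

```python
import torch
import torch.nn as nn

class TaskAttentionBlock(nn.Module):
    """Simplified task-specific soft-attention over a shared feature map (MTAN-style).

    The shared backbone produces `shared`; each task learns a mask in [0, 1] that
    selects task-relevant features from the global pool.
    """
    def __init__(self, channels):
        super().__init__()
        self.attend = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.BatchNorm2d(channels), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.BatchNorm2d(channels), nn.Sigmoid(),
        )

    def forward(self, shared):
        mask = self.attend(shared)        # per-task soft-attention mask
        return mask * shared              # task-specific features from the shared pool

# One block per task, all reading the same shared features:
shared = torch.randn(2, 64, 32, 32)
task_blocks = nn.ModuleList([TaskAttentionBlock(64) for _ in range(3)])
task_feats = [blk(shared) for blk in task_blocks]
```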