The aim of this paper is twofold. We first provide a new orientation theorem that yields a natural and simple proof of a result of Gao and Yang \cite{GY} on matroid-reachability-based packings of mixed arborescences in mixed graphs by reducing it to the corresponding theorem of Cs. Kir\'aly \cite{cskir} on directed graphs. Moreover, we extend another result of Gao and Yang \cite{GY2} by providing a new theorem on mixed hypergraphs that admit a packing of mixed hyperarborescences such that their number is at least $\ell$ and at most $\ell'$, each vertex belongs to exactly $k$ of them, and each vertex $v$ is the root of at least $f(v)$ and at most $g(v)$ of them.
In recent years a great deal of attention has been paid to discretizations of the incompressible Stokes equations that exactly preserve the incompressibility constraint. Such discretizations are of substantial interest because they are pressure-robust, i.e. the error estimates for the velocity do not depend on the error in the pressure. Similar considerations arise in nearly incompressible linear elastic solids. Conforming discretizations with this property are now well understood in two dimensions, but remain poorly understood in three dimensions. In this work we state two conjectures on this subject. The first is that the Scott-Vogelius element pair is inf-sup stable on uniform meshes for velocity degree $k \ge 4$; the best result available in the literature is for $k \ge 6$. The second is that there exists a stable space decomposition of the kernel of the divergence for $k \ge 5$. We present numerical evidence supporting our conjectures.
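For reference, the first conjecture concerns the standard inf-sup (LBB) condition; in the notation used here (ours, not necessarily the paper's), a velocity-pressure pair $(V_h, Q_h)$ is inf-sup stable when
\[
\inf_{0 \neq q_h \in Q_h} \; \sup_{0 \neq v_h \in V_h} \frac{\int_\Omega (\nabla \cdot v_h)\, q_h \, dx}{\|v_h\|_{H^1(\Omega)} \, \|q_h\|_{L^2(\Omega)}} \;\geq\; \beta > 0,
\]
with $\beta$ independent of the mesh size. For the Scott-Vogelius pair, $V_h$ consists of continuous piecewise polynomials of degree $k$ and $Q_h$ of discontinuous piecewise polynomials of degree $k-1$, so that $\nabla \cdot V_h \subseteq Q_h$ and discretely divergence-free velocity fields are exactly divergence-free.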
In order to investigate the relationships among Shannon information measures of random variables, scholars such as Yeung utilized information diagrams to explore structured representations of information measures, establishing correspondences with sets. However, this method has limitations when studying information measures of five or more random variables. In this paper, we employ algebraic methods to study the relationships among information measures of random variables. By introducing a semiring generated by random variables, we establish correspondences between sets and elements of the semiring. Utilizing a Gr\"obner-Shirshov basis, we present the structure of the semiring and its standard form. Furthermore, we delve into the structure of the semiring generated under Markov chain conditions (referred to as the Markov semiring), obtaining its standard form.
We study the problem of testing whether the missing values of a potentially high-dimensional dataset are Missing Completely at Random (MCAR). We relax the problem of testing MCAR to the problem of testing the compatibility of a sequence of covariance matrices, motivated by the fact that this procedure is feasible when the dimension grows with the sample size. Tests of compatibility can be used to test the feasibility of positive semi-definite matrix completion problems with noisy observations, and thus our results may be of independent interest. Our first contributions are to define a natural measure of the incompatibility of a sequence of correlation matrices, which can be characterised as the optimal value of a Semi-definite Programming (SDP) problem, and to establish a key duality result allowing its practical computation and interpretation. By studying the concentration properties of the natural plug-in estimator of this measure, we introduce novel hypothesis tests that we prove have power against all distributions with incompatible covariance matrices. The choice of critical values for our tests relies on a new concentration inequality for the Pearson sample correlation matrix, which may be of interest more widely. By considering key examples of missingness structures, we demonstrate that our procedures are minimax rate optimal in certain cases. We further validate our methodology with numerical simulations that provide evidence of validity and power, even when data are heavy-tailed.
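To see what compatibility means in the simplest case, consider a hypothetical three-variable illustration (the names and numbers are ours, not from the paper): each missingness pattern reveals one pairwise correlation, and the patterns are compatible precisely when the revealed entries admit a positive semi-definite completion. When all off-diagonal entries are observed, this reduces to a PSD check on the assembled correlation matrix.

```python
def is_psd_3x3(r12, r13, r23, tol=1e-12):
    """Check PSD of [[1,r12,r13],[r12,1,r23],[r13,r23,1]] via principal minors.

    For a symmetric matrix, nonnegativity of all principal minors is
    equivalent to positive semi-definiteness.
    """
    m12 = 1 - r12 * r12                      # 2x2 principal minor on {1,2}
    m13 = 1 - r13 * r13                      # 2x2 principal minor on {1,3}
    m23 = 1 - r23 * r23                      # 2x2 principal minor on {2,3}
    det = 1 - r12**2 - r13**2 - r23**2 + 2 * r12 * r13 * r23  # full determinant
    return min(m12, m13, m23, det) >= -tol

# r12 = r23 = 0.9 forces r13 to be large and positive, so r13 = -0.9 is
# incompatible with the other two correlations, while r13 = 0.7 is compatible.
print(is_psd_3x3(0.9, -0.9, 0.9))  # False
print(is_psd_3x3(0.9, 0.7, 0.9))   # True
```

The incompatibility measure in the abstract can be thought of as quantifying how far such a system of observed entries is from admitting a PSD completion, which is what makes it an SDP.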
In this paper, we characterize absolutely continuous symmetric copulas with square-integrable densities. This characterization is used to create new copula families that are perturbations of the independence copula. A full study of the mixing properties of Markov chains generated by these copula families is conducted. An extension that includes the Farlie-Gumbel-Morgenstern family of copulas is proposed. We propose examples of copulas that generate non-mixing Markov chains, but whose convex combinations generate $\psi$-mixing Markov chains. Some general results on $\psi$-mixing are given. Spearman's correlation $\rho_S$ and Kendall's $\tau$ are provided for the created copula families, together with some general remarks on $\rho_S$ and $\tau$. A central limit theorem is provided for parameter estimators in one example. A simulation study is conducted to support the derived asymptotic distributions in some examples.
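As a concrete, hedged illustration of the Farlie-Gumbel-Morgenstern family mentioned above (a sketch of ours, not the paper's code): the FGM copula $C(u,v) = uv[1 + \theta(1-u)(1-v)]$ can be sampled by conditional inversion, and the classical identity $\rho_S = \theta/3$ can be checked empirically.

```python
import math
import random

def fgm_sample(theta, n, seed=0):
    """Sample n pairs from the FGM copula with parameter theta in [-1, 1]."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        u, p = rng.random(), rng.random()
        a = theta * (1 - 2 * u)       # conditional CDF: C(v|u) = v + a*v*(1-v)
        if abs(a) < 1e-12:
            v = p
        else:
            # solve a*v**2 - (1 + a)*v + p = 0 for the root lying in [0, 1]
            v = ((1 + a) - math.sqrt((1 + a) ** 2 - 4 * a * p)) / (2 * a)
        out.append((u, v))
    return out

def spearman_rho(pairs):
    """Spearman's rho as the Pearson correlation of the ranks."""
    def ranks(xs):
        order = sorted(range(len(xs)), key=lambda i: xs[i])
        r = [0.0] * len(xs)
        for rank, i in enumerate(order):
            r[i] = rank + 1.0
        return r
    ru = ranks([u for u, _ in pairs])
    rv = ranks([v for _, v in pairs])
    n = len(pairs)
    mean = (n + 1) / 2.0
    num = sum((a - mean) * (b - mean) for a, b in zip(ru, rv))
    den = sum((a - mean) ** 2 for a in ru)
    return num / den

theta = 0.6
rho_hat = spearman_rho(fgm_sample(theta, 20000))
print(round(rho_hat, 3), "vs", theta / 3)  # empirical rho should be near 0.2
```

The conditional-inversion step works because $\partial C/\partial u$ is a quadratic in $v$, which is a feature specific to the FGM family.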
This paper proposes a new approach to fitting a linear regression for symbolic interval-valued variables, which improves both the Center Method suggested by Billard and Diday in \cite{BillardDiday2000} and the Center and Range Method suggested by Lima-Neto and De Carvalho in \cite{Lima2008, Lima2010}. As in the Center Method and the Center and Range Method, the proposed methods fit the linear regression model on the midpoints and on the half-lengths (ranges) of the intervals assumed by the predictor variables in the training data set; however, these regressions are fitted with the shrinkage methods Ridge Regression, Lasso, and Elastic Net proposed by Tibshirani, Hastie, and Zou in \cite{Tib1996, HastieZou2005}. The lower and upper bounds of the interval response (dependent) variable are predicted from its midpoints and ranges, which are estimated by the shrinkage regression models fitted on the midpoints and ranges of the interval-valued predictors. The methods presented in this document are applied to three real interval data sets (Cardiological, Prostate, and US Murder) to compare their performance and ease of interpretation with respect to the Center Method and the Center and Range Method. For this evaluation, the root-mean-squared error and the correlation coefficient are used. In addition, the reader may apply all the methods presented herein and verify the results using the {\tt RSDA} package written in the {\tt R} language, which can be downloaded and installed directly from {\tt CRAN} \cite{Rod2014}.
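A minimal sketch of the center-and-range idea with shrinkage (ours; the toy data and univariate closed-form ridge are illustrative assumptions, not the paper's {\tt RSDA} implementation): fit one shrunken regression on interval midpoints and another on interval half-lengths, then reassemble the predicted interval.

```python
def ridge_1d(xs, ys, lam):
    """Closed-form ridge fit for a univariate regression on centered data."""
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    b = sxy / (sxx + lam)            # lam > 0 shrinks the slope toward 0
    return b, my - b * mx            # slope, intercept

# toy interval data: predictor and response given as (lower, upper) pairs
X = [(1, 3), (2, 6), (4, 8), (5, 11)]
Y = [(2, 4), (3, 7), (6, 10), (8, 14)]
xm = [(l + u) / 2 for l, u in X]     # predictor midpoints
xr = [(u - l) / 2 for l, u in X]     # predictor half-lengths (ranges)
ym = [(l + u) / 2 for l, u in Y]
yr = [(u - l) / 2 for l, u in Y]

bm, am = ridge_1d(xm, ym, lam=0.5)   # midpoint model
br, ar = ridge_1d(xr, yr, lam=0.5)   # range model

x_new = (3, 7)
mid = am + bm * (x_new[0] + x_new[1]) / 2
rng = max(0.0, ar + br * (x_new[1] - x_new[0]) / 2)  # keep range nonnegative
print((mid - rng, mid + rng))        # predicted interval for x_new
```

Lasso or Elastic Net would replace the closed-form slope with a penalized solver, but the reassembly of the interval from predicted midpoint and range is the same.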
Regularization promotes well-posedness in solving an inverse problem with incomplete measurement data. The regularization term is typically designed based on an a priori characterization of the unknown signal, such as sparsity or smoothness. Standard inhomogeneous regularization incorporates a spatially varying exponent $p$ in the $\ell_p$ norm-based regularization to recover a signal whose characteristics vary spatially. This study proposes a weighted inhomogeneous regularization that extends the standard inhomogeneous regularization through a new exponent design and spatially varying weights. The new exponent design avoids misclassification when regions with different characteristics lie close to each other. The weights handle a second issue, namely when the region of one characteristic is too small to be recovered effectively by the $\ell_p$ norm-based regularization even after it is identified correctly. A suite of numerical tests, including synthetic image experiments and the recovery of real sea ice fields from incomplete wave measurements, shows the efficacy of the proposed weighted inhomogeneous regularization.
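Schematically (in our notation, not the paper's), the weighted inhomogeneous penalty assigns each pixel $i$ its own exponent $p_i$ (e.g. near 1 on sparse regions, near 2 on smooth ones) and a weight $w_i$ that boosts small regions, giving $R(x) = \sum_i w_i |x_i|^{p_i}$:

```python
def weighted_inhomogeneous_penalty(x, p, w):
    """R(x) = sum_i w[i] * |x[i]|**p[i], with per-pixel exponents and weights."""
    assert len(x) == len(p) == len(w)
    return sum(wi * abs(xi) ** pi for xi, pi, wi in zip(x, p, w))

x = [0.0, 0.5, -2.0, 0.0, 1.0]
p = [1.0, 1.0, 2.0, 2.0, 2.0]   # sparse-like region first, smooth region after
w = [1.0, 4.0, 1.0, 1.0, 1.0]   # up-weight the small sparse region
print(weighted_inhomogeneous_penalty(x, p, w))  # 4*0.5 + 4.0 + 1.0 = 7.0
```

The standard inhomogeneous regularization is the special case $w_i \equiv 1$; the weights are what rescue characteristics whose support is too small to dominate the penalty on their own.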
We investigate the problem of producing diverse solutions to an image super-resolution problem. From a probabilistic perspective, this can be done by sampling from the posterior distribution of an inverse problem, which requires the definition of a prior distribution on the high-resolution images. In this work, we propose to use a pretrained hierarchical variational autoencoder (HVAE) as a prior. We train a lightweight stochastic encoder to encode low-resolution images in the latent space of a pretrained HVAE. At inference, we combine the low-resolution encoder and the pretrained generative model to super-resolve an image. We demonstrate on the task of face super-resolution that our method provides an advantageous trade-off between the computational efficiency of conditional normalizing flow techniques and the sample quality of diffusion-based methods.
It is shown how mixed finite element methods for symmetric positive definite eigenvalue problems related to partial differential operators can provide guaranteed lower eigenvalue bounds. The method is based on a classical compatibility condition (inclusion of kernels) of the mixed scheme and on local constants related to compact embeddings, which are often known explicitly. Applications include scalar second-order elliptic operators, linear elasticity, and the Steklov eigenvalue problem.
Supercomputers have revolutionized how industries and scientific fields process large amounts of data. These machines group hundreds or thousands of computing nodes that work together to execute time-consuming programs requiring a large amount of computational resources. Over the years, supercomputers have expanded to include new and different technologies, making them heterogeneous. However, executing a program in a heterogeneous environment demands attention to a specific source of performance degradation: load imbalance. In this research, we address the challenges associated with load imbalance when scheduling many homogeneous tasks in a heterogeneous environment. To address this issue, we introduce the concept of adaptive asynchronous work-stealing. This approach collects information about the nodes and uses it to improve work-stealing decisions, such as victim selection and task offloading. Additionally, the proposed approach eliminates the need for extra threads to communicate information, thereby reducing the overhead of implementing a fully asynchronous approach. Our experimental results demonstrate a performance improvement of approximately 10.1\% over other conventional and state-of-the-art implementations.
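A simplified sketch of the adaptive victim-selection idea (ours, not the paper's implementation; node names and the work metric are hypothetical): each node piggybacks its queue length and relative speed on regular messages, and a thief picks its victim with probability proportional to estimated leftover work rather than uniformly at random.

```python
import random

def pick_victim(nodes, me, rng=random):
    """Pick a steal victim weighted by estimated leftover work.

    nodes: {name: (queue_len, speed)}; leftover work ~ queue_len / speed,
    so a slow node with a long queue is the most attractive victim.
    """
    cands = [(n, q / s) for n, (q, s) in nodes.items() if n != me and q > 0]
    if not cands:
        return None                  # nothing to steal anywhere
    total = sum(work for _, work in cands)
    r = rng.uniform(0, total)        # roulette-wheel selection
    for name, work in cands:
        r -= work
        if r <= 0:
            return name
    return cands[-1][0]

# a fast idle GPU node steals, preferring the heavily loaded slow CPU node
nodes = {"cpu0": (0, 1.0), "cpu1": (8, 1.0), "gpu0": (2, 4.0)}
print(pick_victim(nodes, me="cpu0", rng=random.Random(1)))
```

Because the load information rides on existing task messages, no dedicated communication thread is needed, which matches the fully asynchronous design described above.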
Confounding remains one of the major challenges to causal inference with observational data. This problem is paramount in medicine, where we would like to answer causal questions from large observational datasets like electronic health records (EHRs) and administrative claims. Modern medical data typically contain tens of thousands of covariates. Such a large set carries hope that many of the confounders are directly measured, and further hope that others are indirectly measured through their correlation with measured covariates. How can we exploit these large sets of covariates for causal inference? To help answer this question, this paper examines the performance of the large-scale propensity score (LSPS) approach on causal analysis of medical data. We demonstrate that LSPS may adjust for indirectly measured confounders by including tens of thousands of covariates that may be correlated with them. We present conditions under which LSPS removes bias due to indirectly measured confounders, and we show that LSPS may avoid bias when inadvertently adjusting for variables (like colliders) that otherwise can induce bias. We demonstrate the performance of LSPS with both simulated medical data and real medical data.
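As a toy illustration of the propensity-score mechanism behind LSPS (a sketch of ours with simulated data, not the paper's method or its tens of thousands of covariates): fit $P(\text{treatment} \mid \text{covariates})$ by logistic regression, then inverse-propensity-weight the outcomes to remove the confounding bias that a naive comparison suffers from.

```python
import math
import random

rng = random.Random(0)
n = 2000
# one measured confounder X drives both treatment assignment and outcome;
# the true treatment effect is 2.0
X = [rng.gauss(0, 1) for _ in range(n)]
T = [1 if rng.random() < 1 / (1 + math.exp(-1.5 * x)) else 0 for x in X]
Y = [2.0 * t + x + rng.gauss(0, 0.5) for t, x in zip(T, X)]

# logistic regression for the propensity score, by plain gradient ascent
w0 = w1 = 0.0
for _ in range(400):
    g0 = g1 = 0.0
    for x, t in zip(X, T):
        p = 1 / (1 + math.exp(-(w0 + w1 * x)))
        g0 += (t - p) / n
        g1 += (t - p) * x / n
    w0 += 0.5 * g0
    w1 += 0.5 * g1

# clip propensities away from 0 and 1, then form the IPW (Horvitz-Thompson)
# estimate of the average treatment effect
ps = [min(max(1 / (1 + math.exp(-(w0 + w1 * x))), 0.05), 0.95) for x in X]
ipw = (sum(t * y / p for t, y, p in zip(T, Y, ps)) / n
       - sum((1 - t) * y / (1 - p) for t, y, p in zip(T, Y, ps)) / n)
naive = (sum(y for t, y in zip(T, Y) if t) / sum(T)
         - sum(y for t, y in zip(T, Y) if not t) / (n - sum(T)))
print(round(naive, 2), round(ipw, 2))  # naive is biased upward; IPW is near 2
```

LSPS applies the same principle at scale: with tens of thousands of covariates in the propensity model, confounders that are only indirectly measured can still be adjusted for through the covariates correlated with them.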