We investigate the effect of the well-known Mycielski construction on the Shannon capacity of graphs and on one of its most prominent upper bounds, the (complementary) Lov\'asz theta number. We prove that if the Shannon capacity of a graph, the distinguishability graph of a noisy channel, is attained by some finite power, then its Mycielskian has strictly larger Shannon capacity than the graph itself. For the complementary Lov\'asz theta function we show that its value on the Mycielskian of a graph is completely determined by its value on the original graph, a phenomenon similar to the one discovered for the fractional chromatic number by Larsen, Propp and Ullman. We also consider the possibility of generalizing our results to the Sperner capacity of directed graphs and to the generalized Mycielski construction. Possible connections with what Zuiddam calls the asymptotic spectrum of graphs are discussed as well.
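For concreteness, recall the construction itself: from a graph $G$ on vertices $v_1, \dots, v_n$, the Mycielskian $M(G)$ is obtained by adding shadow vertices $u_1, \dots, u_n$, joining each $u_i$ to the neighbours of $v_i$, and adding an apex $w$ adjacent to every $u_i$. A minimal Python sketch follows; the adjacency-dictionary representation is an illustrative choice, not notation from the paper.

# Build the Mycielskian of a graph given as an adjacency-set dict on 0..n-1.
def mycielskian(adj):
    n = len(adj)
    # Vertices 0..n-1 are the originals, n..2n-1 their shadows, 2n the apex.
    m = {v: set() for v in range(2 * n + 1)}
    for v, nbrs in adj.items():
        for x in nbrs:
            m[v].add(x)          # original edge v_i v_j
            m[v].add(x + n)      # shadow edge v_i u_j
            m[x + n].add(v)
    for u in range(n, 2 * n):
        m[u].add(2 * n)          # apex w joined to every shadow u_i
        m[2 * n].add(u)
    return m

# Example: the Mycielskian of the 5-cycle is the Grötzsch graph (20 edges).
c5 = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
assert sum(len(s) for s in mycielskian(c5).values()) // 2 == 20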
Based on the expectile loss function and the adaptive LASSO penalty, this paper proposes and studies estimation methods for the accelerated failure time (AFT) model. In this approach, the survival function of the censoring variable is estimated by the Kaplan-Meier estimator. The AFT model parameters are first estimated by the expectile method and then, when the number of explanatory variables may be large, by the adaptive LASSO expectile method, which automatically performs variable selection. We obtain the convergence rate and asymptotic normality of both estimators, and establish the sparsity property of the censored adaptive LASSO expectile estimator. A numerical study using Monte Carlo simulations confirms the theoretical results and demonstrates the competitive performance of the two proposed estimators. The usefulness of these estimators is illustrated by applying them to three survival data sets.
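For reference, the expectile loss at level $\tau \in (0,1)$ is the asymmetrically weighted squared loss (written here in standard notation, which need not match the paper's):
\[
\rho_\tau(u) \;=\; \bigl| \tau - \mathbf{1}_{\{u < 0\}} \bigr|\, u^2,
\]
which reduces to the ordinary squared loss up to a factor $1/2$ at $\tau = 1/2$ and weights positive and negative residuals asymmetrically otherwise.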
We consider finite element approximations to the optimal constant for the Hardy inequality with exponent $p=2$ in bounded domains of dimension $n=1$ or $n \geq 3$. For finite element spaces of piecewise linear and continuous functions on a mesh of size $h$, we prove that the approximate Hardy constant converges to the optimal Hardy constant at a rate proportional to $1/| \log h |^2$. This result holds in dimension $n=1$, in any dimension $n \geq 3$ if the domain is the unit ball and the finite element discretization exploits the rotational symmetry of the problem, and in dimension $n=3$ for general finite element discretizations of the unit ball. In the first two cases, our estimates show excellent quantitative agreement with values of the discrete Hardy constant obtained computationally.
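For context, the inequality in question reads, in standard notation (which may differ from the paper's): for $u \in H_0^1(\Omega)$, with $d(x)$ the distance from $x$ to $\partial \Omega$,
\[
C \int_\Omega \frac{|u(x)|^2}{d(x)^2} \, dx \;\le\; \int_\Omega |\nabla u(x)|^2 \, dx,
\]
the optimal Hardy constant being the largest admissible $C$, i.e. the infimum of the corresponding Rayleigh quotient; the discrete constant is obtained by taking this infimum over the finite element space instead.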
The approach to giving a proof-theoretic semantics for IMLL taken by Gheorghiu, Gu and Pym is an interesting adaptation of the work presented by Sandqvist in his 2015 paper for IPL. What is particularly interesting is how naturally the move to the substructural setting provided a semantics for the multiplicative fragment of intuitionistic linear logic. Whilst the authors ultimately used their foundations to provide a semantics for bunched implication logic, this raises the question: what of the rest of intuitionistic linear logic? In this paper, I present a semantics for intuitionistic linear logic, treating first the multiplicative-additive fragment and then the modality "of-course", thus obtaining a proof-theoretic semantics for the whole of intuitionistic linear logic.
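For orientation, the "of-course" modality is governed in the sequent calculus by four standard rules (the usual presentation, stated independently of the semantics developed in the paper):
\[
\frac{\Gamma \vdash B}{\Gamma, {!}A \vdash B}\ (\text{weakening}) \qquad
\frac{\Gamma, {!}A, {!}A \vdash B}{\Gamma, {!}A \vdash B}\ (\text{contraction}) \qquad
\frac{\Gamma, A \vdash B}{\Gamma, {!}A \vdash B}\ (\text{dereliction}) \qquad
\frac{{!}\Gamma \vdash A}{{!}\Gamma \vdash {!}A}\ (\text{promotion})
\]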
This manuscript bridges the divide between causal inference and spatial statistics, presenting novel insights for causal inference in spatial data analysis and establishing how tools from spatial statistics can be used to draw causal inferences. We introduce spatial causal graphs to highlight that spatial confounding and interference can be entangled, in that investigating the presence of one can lead to wrongful conclusions in the presence of the other. Moreover, we show that spatial dependence in the exposure variable can render standard analyses invalid. To remedy these issues, we propose a Bayesian parametric approach based on tools commonly used in spatial statistics. This approach simultaneously accounts for interference and mitigates bias resulting from local and neighborhood unmeasured spatial confounding. From a Bayesian perspective, we show that incorporating an exposure model is necessary, and we prove that all model parameters are identifiable even in the presence of unmeasured confounding. To illustrate the approach's effectiveness, we present results from a simulation study and from a case study on the impact of sulfur dioxide emissions from power plants on cardiovascular mortality.
A rectangulation is a decomposition of a rectangle into finitely many rectangles. Via natural equivalence relations, rectangulations can be seen as combinatorial objects with a rich structure, with links to lattice congruences, flip graphs, polytopes, lattice paths, Hopf algebras, etc. In this paper, we first revisit the structure of the two resulting equivalence classes: weak rectangulations, which preserve rectangle-segment adjacencies, and strong rectangulations, which preserve rectangle-rectangle adjacencies. We thoroughly investigate posets defined by adjacency in rectangulations of both kinds, and unify and simplify known bijections between rectangulations and permutation classes. This yields a uniform treatment of mappings between permutations and rectangulations that emphasizes the parallelism and the differences between the weak and the strong cases. Then, we consider the special case of guillotine rectangulations and prove that they can be characterized - under all known mappings between permutations and rectangulations - by the avoidance of two mesh patterns that correspond to "windmills" in rectangulations. This yields new permutation classes in bijection with weak guillotine rectangulations, and the first known permutation class in bijection with strong guillotine rectangulations. Finally, we address enumerative questions and prove asymptotic bounds for several families of strong rectangulations.
This study examines whether income distribution in Thailand exhibits scale invariance or self-similarity across years. Using data on income shares by quintile and by decile of Thailand from 1988 to 2021, the results from 306 pairwise Kolmogorov-Smirnov tests indicate that income distribution in Thailand is statistically scale-invariant or self-similar across years, with p-values ranging between 0.988 and 1.000. Based on these empirical findings, this study proposes that, in order to change an income distribution whose pattern has persisted for over three decades, the change cannot be gradual but must resemble a phase transition in physics.
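A hedged sketch of the testing scheme described above: each year's quintile (or decile) income shares are treated as a sample, and a two-sample Kolmogorov-Smirnov test is run on every pair of years. The shares below are illustrative dummy values, not the actual Thai series.

from itertools import combinations
from scipy.stats import ks_2samp

# Hypothetical quintile income shares (%) per year; replace with real data.
shares = {
    1988: [4.6, 8.1, 12.4, 20.6, 54.3],
    2021: [5.1, 9.0, 13.3, 20.9, 51.7],
}

for (y1, s1), (y2, s2) in combinations(shares.items(), 2):
    stat, pval = ks_2samp(s1, s2)  # D statistic and p-value for this pair
    print(f"{y1} vs {y2}: D = {stat:.3f}, p = {pval:.3f}")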
Most existing Mendelian randomization (MR) methods are limited by the assumption of linear causality between exposure and outcome, and the development of new non-linear MR methods is highly desirable. We introduce two-stage prediction estimation and control function estimation from econometrics to MR and extend them to non-linear causality. We give conditions for parameter identification and prove the consistency and asymptotic normality of the estimators. We compare the two methods theoretically under both linear and non-linear causality. We also extend the control function estimation to a more flexible semi-parametric framework that does not require a detailed parametric specification of the causal relationship. Extensive simulations numerically corroborate our theoretical results. An application to UK Biobank data reveals non-linear causal relationships between sleep duration and systolic/diastolic blood pressure.
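To make the contrast between the two estimators concrete, here is a minimal linear sketch under a simulated data-generating process with an unmeasured confounder; the coefficients and setup are illustrative assumptions, not the paper's specification. In the linear case both approaches recover the causal effect; they differ once causality is non-linear.

import numpy as np

rng = np.random.default_rng(0)
n = 10_000
g = rng.normal(size=n)                  # instrument (e.g., a genetic variant)
u = rng.normal(size=n)                  # unmeasured confounder
x = 0.8 * g + u + rng.normal(size=n)    # exposure
y = 0.5 * x + u + rng.normal(size=n)    # outcome; true causal effect is 0.5

# Stage 1: regress the exposure on the instrument.
G = np.column_stack([np.ones(n), g])
b1 = np.linalg.lstsq(G, x, rcond=None)[0]
xhat = G @ b1
res = x - xhat                          # first-stage residual

# Two-stage prediction: replace x by its fitted value in the outcome model.
X2sp = np.column_stack([np.ones(n), xhat])
print("two-stage prediction:", np.linalg.lstsq(X2sp, y, rcond=None)[0][1])

# Control function: keep x and add the residual as an extra regressor.
Xcf = np.column_stack([np.ones(n), x, res])
print("control function:    ", np.linalg.lstsq(Xcf, y, rcond=None)[0][1])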
This paper develops a flexible and computationally efficient multivariate volatility model that allows for dynamic conditional correlations and volatility spillover effects among financial assets. The new model has desirable properties such as identifiability and computational tractability for many assets. A sufficient condition for strict stationarity of the new process is derived. Two quasi-maximum likelihood estimation methods are proposed for the new model, with and without low-rank constraints on the coefficient matrices, and the asymptotic properties of both estimators are established. Moreover, a Bayesian information criterion with selection consistency is developed for order selection, and testing for volatility spillover effects is carefully discussed. The finite-sample performance of the proposed methods is evaluated in simulation studies for small and moderate dimensions. The usefulness of the new model and its inference tools is illustrated by two empirical examples involving 5 stock markets and 17 industry portfolios, respectively.
High-order Hadamard-form entropy stable multidimensional summation-by-parts discretizations of the Euler and compressible Navier-Stokes equations are considerably more expensive than the standard divergence-form discretization. In search of a more efficient entropy stable scheme, we extend the entropy-split method to unstructured grids and investigate its properties. The main ingredients of the scheme are Harten's entropy functions, diagonal-$ \mathsf{E} $ summation-by-parts operators with diagonal norm matrix, and entropy conservative simultaneous approximation terms (SATs). We show that the scheme is high-order accurate and entropy conservative on periodic curvilinear unstructured grids for the Euler equations. An entropy stable matrix-type interface dissipation operator is constructed, which can be added to the SATs to obtain an entropy stable semi-discretization. Fully discrete entropy conservation is achieved using a relaxation Runge-Kutta method. Entropy stable viscous SATs, applicable to both the Hadamard-form and entropy-split schemes, are developed for the compressible Navier-Stokes equations. In the absence of heat fluxes, the entropy-split scheme is entropy stable for the compressible Navier-Stokes equations. Local conservation in the vicinity of discontinuities is enforced using an entropy stable hybrid scheme. Several numerical problems involving both smooth and discontinuous solutions are investigated to support the theoretical results. Computational cost comparisons suggest that the entropy-split scheme offers substantial efficiency benefits relative to Hadamard-form multidimensional SBP-SAT discretizations.
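For readers unfamiliar with relaxation Runge-Kutta methods, the idea, in the standard formulation of Ketcheson and of Ranocha et al. (the paper's precise variant may differ), is to rescale the update of an $s$-stage method by a relaxation parameter $\gamma_n$:
\[
u^{n+1} = u^n + \gamma_n \Delta t \sum_{i=1}^{s} b_i f(y_i),
\]
where $\gamma_n$ solves the scalar equation $\eta(u^{n+1}) - \eta(u^n) = \gamma_n \Delta t \sum_{i=1}^{s} b_i \langle \eta'(y_i), f(y_i) \rangle$ for the entropy $\eta$, so that the fully discrete solution inherits the entropy conservation (or dissipation) of the semi-discretization.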
Numerical solutions of flows in partially saturated porous media pose challenges related to the non-linearity and elliptic-parabolic degeneracy of the governing Richards' equation. Iterative methods are therefore required to manage the complexity of the flow problem. Norms of successive corrections in the iterative procedure form sequences of positive numbers; definitions of computational orders of convergence and theoretical results for abstract convergent sequences can thus be used to evaluate and compare different iterative methods. In this framework, we analyze Newton's method and the $L$-scheme for an implicit finite element method (FEM), as well as the $L$-scheme for an explicit finite difference method (FDM). We also investigate the effect of Anderson Acceleration (AA) on both the implicit and the explicit $L$-schemes. For a two-dimensional test problem, we find that AA halves the number of iterations and makes the FEM scheme converge twice as fast. For the FDM approach, AA does not reduce the number of iterations and even increases the computational effort. Nevertheless, being explicit, the FDM $L$-scheme without AA is faster than, and as accurate as, the FEM $L$-scheme with AA.
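Since AA plays a central role in the comparison, a compact sketch of Anderson Acceleration AA($m$) for a generic fixed-point iteration $x = g(x)$ may help; the toy contraction below stands in for one linearized Richards solve and is an illustrative assumption, not the solver studied here.

import numpy as np

def anderson(g, x0, m=5, tol=1e-10, maxit=100):
    x = np.asarray(x0, dtype=float)
    X, F = [], []                        # histories of g-values and residuals
    for k in range(maxit):
        gx = g(x)
        f = gx - x                       # fixed-point residual
        if np.linalg.norm(f) < tol:
            return x, k
        X.append(gx); F.append(f)
        X, F = X[-(m + 1):], F[-(m + 1):]   # keep at most m+1 history entries
        if len(F) > 1:
            # Least squares over residual differences, then mix the g-values.
            dF = np.column_stack([F[-1] - Fi for Fi in F[:-1]])
            dX = np.column_stack([X[-1] - Xi for Xi in X[:-1]])
            gamma = np.linalg.lstsq(dF, F[-1], rcond=None)[0]
            x = X[-1] - dX @ gamma
        else:
            x = gx                       # plain Picard step to start
    return x, maxit

# Toy test: g(x) = cos(x) componentwise; AA reaches the fixed point
# (~0.739085) in far fewer iterations than plain Picard iteration.
x, its = anderson(np.cos, np.zeros(3))
print(its, x)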