Suppose we want to construct some structure on a bounded-degree graph, e.g., an almost maximum matching, and we want each decision about an edge to depend only on its constant-radius neighborhood. We examine and compare the strengths of different extensions of such local algorithms. A common extension is preprocessing: we may first perform some computation on the whole graph, and each local decision may also depend on its outcome. In this paper, we show that preprocessing is unnecessary: whatever a nearly optimal local algorithm achieves with preprocessing can also be achieved by a local algorithm without preprocessing, using global randomization instead.
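The kind of local algorithm with shared global randomness that this result points to can be illustrated with a standard example: a minimal Python sketch, in the spirit of the random-greedy maximal matching simulations of Nguyen and Onak (the graph encoding, priority hashing, and function names here are our own assumptions, not the paper's construction).

```python
import random

def edge_priority(u, v, seed):
    # Shared global randomness: a deterministic pseudo-random priority per edge.
    return random.Random(f"{seed}:{min(u, v)}:{max(u, v)}").random()

def in_matching(graph, u, v, seed, _memo=None):
    """Decide locally whether edge (u, v) is in the greedy maximal matching
    obtained by inserting edges in order of increasing random priority."""
    if _memo is None:
        _memo = {}
    e = (min(u, v), max(u, v))
    if e not in _memo:
        p = edge_priority(u, v, seed)
        # e is matched iff no adjacent edge of smaller priority is matched;
        # the recursion only follows paths of strictly decreasing priority,
        # so it explores a small (typically constant-size) neighborhood.
        _memo[e] = not any(
            in_matching(graph, x, y, seed, _memo)
            for x in e for y in graph[x]
            if (min(x, y), max(x, y)) != e
            and edge_priority(x, y, seed) < p
        )
    return _memo[e]

# Example: a path on four vertices; each decision uses only local exploration.
g = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print([(u, v) for u in g for v in g[u] if u < v and in_matching(g, u, v, 42)])
```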
In this contribution we investigate the application of phase-field fracture models to non-linear multiscale computational homogenization schemes. In particular, we introduce distinct phase-fields for a two-scale problem and develop a thermodynamically consistent model. On the one hand, this allows for the prediction of local micro-fracture patterns, which effectively act as an anisotropic damage model on the macroscale. On the other hand, the macro-fracture phase-field model allows us to predict complex fracture patterns that reflect the local microstructures. Both phase-fields are introduced in a common framework, such that a joint consistent linearization for the Newton-Raphson iteration can be developed. Finally, the limits as well as the applicability of both models are demonstrated in different numerical examples.
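For orientation, the single-scale phase-field fracture energies that such two-scale models couple typically take the standard AT2 form (our notation, stated here only as background; the paper's two-scale functional and its thermodynamically consistent coupling may differ):

$$\Psi(\boldsymbol{u},d)=\int_{\Omega}(1-d)^2\,\psi_e\big(\boldsymbol{\varepsilon}(\boldsymbol{u})\big)\,\mathrm{d}V+G_c\int_{\Omega}\Big(\frac{d^2}{2\ell}+\frac{\ell}{2}\,\lvert\nabla d\rvert^2\Big)\,\mathrm{d}V,$$

where $d\in[0,1]$ is the phase-field, $\ell$ the regularization length, $G_c$ the fracture toughness, and $\psi_e$ the elastic energy density; in the two-scale setting one such field lives on each scale.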
In this contribution, we derive a consistent variational formulation for computational homogenization methods and show that traditional FE2 and IGA2 approaches are special discretization and solution techniques within this most general framework. This allows us to dramatically improve both the numerical analysis and the solution of the arising algebraic system. In particular, we expand the dimension of the continuous system, discretize the higher-dimensional problem consistently, and afterwards apply a discrete null-space matrix to remove the additional dimensions. A benchmark problem, for which an analytical solution is available, demonstrates the superiority of the proposed approach, which reduces the immense computational costs of traditional FE2 and IGA2 formulations to a fraction of the original requirements. Finally, we demonstrate a further reduction of the computational costs for the solution of general non-linear problems.
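The null-space reduction that removes the additional dimensions can be sketched generically; the following numpy toy uses our own random matrices, not the paper's homogenization operators.

```python
import numpy as np

# Toy constrained system: minimize 0.5 x^T K x - f^T x  subject to  B x = 0.
rng = np.random.default_rng(0)
n, m = 8, 3
A = rng.standard_normal((n, n))
K = A @ A.T + n * np.eye(n)          # SPD stiffness-like matrix
B = rng.standard_normal((m, n))      # constraint matrix (full row rank a.s.)
f = rng.standard_normal(n)

# Discrete null-space matrix N with B @ N = 0, here obtained via SVD.
_, _, Vt = np.linalg.svd(B)
N = Vt[m:].T                         # columns span ker(B)

# Reduced system: the extra dimensions and constraints are eliminated,
# instead of solving the full saddle-point system.
y = np.linalg.solve(N.T @ K @ N, N.T @ f)
x = N @ y

assert np.allclose(B @ x, 0, atol=1e-10)   # constraint satisfied exactly
```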
Empirical Bayes provides a powerful approach to learning and adapting to latent structure in data. Theory and algorithms for empirical Bayes have a rich literature for sequence models, but are less understood in settings where latent variables and data interact through more complex designs. In this work, we study empirical Bayes estimation of an i.i.d. prior in Bayesian linear models, via the nonparametric maximum likelihood estimator (NPMLE). We introduce and study a system of gradient flow equations for optimizing the marginal log-likelihood, jointly over the prior and posterior measures in its Gibbs variational representation, using a smoothed reparametrization of the regression coefficients. A diffusion-based implementation yields a Langevin dynamics MCEM algorithm, where the prior law evolves continuously over time to optimize a sequence-model log-likelihood defined by the coordinates of the current Langevin iterate. We show consistency of the NPMLE as $n, p \rightarrow \infty$ under mild conditions, including settings of random sub-Gaussian designs when $n \asymp p$. In high noise, we prove a uniform log-Sobolev inequality for the mixing of Langevin dynamics, for possibly misspecified priors and non-log-concave posteriors. We then establish polynomial-time convergence of the joint gradient flow to a near-NPMLE if the marginal negative log-likelihood is convex in a sub-level set of the initialization.
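A minimal numpy sketch of one such Langevin-dynamics MCEM loop follows. This is our own toy instantiation: the grid of atoms, the bandwidth tau, and the step size are illustrative assumptions, and the paper's smoothed reparametrization and theory are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, sigma = 200, 100, 1.0
X = rng.standard_normal((n, p)) / np.sqrt(n)        # random sub-Gaussian design
beta_true = rng.choice([-2.0, 0.0, 2.0], size=p)    # draws from an unknown prior
y = X @ beta_true + sigma * rng.standard_normal(n)

atoms = np.linspace(-4.0, 4.0, 41)   # fixed grid of candidate prior atoms
w = np.full(atoms.size, 1.0 / atoms.size)
tau = 0.25                           # smoothing bandwidth for the prior density

def prior_score_and_resp(beta, w):
    """Gradient of the log prior density (a tau-smoothed atomic mixture) at
    each coordinate, plus the EM responsibilities of the atoms."""
    z = (beta[:, None] - atoms[None, :]) / tau
    kern = w * np.exp(-0.5 * z**2)               # unnormalized mixture kernels
    resp = kern / kern.sum(axis=1, keepdims=True)
    return -(resp * z).sum(axis=1) / tau, resp

beta, eta = np.zeros(p), 1e-3
for t in range(2000):
    score, resp = prior_score_and_resp(beta, w)
    grad_lik = X.T @ (y - X @ beta) / sigma**2
    # Langevin step on the posterior of beta under the current prior ...
    beta = beta + eta * (grad_lik + score) \
        + np.sqrt(2 * eta) * rng.standard_normal(p)
    # ... then an EM step on the sequence-model likelihood of the coordinates.
    w = resp.mean(axis=0)
```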
Shamir and Spencer proved in the 1980s that the chromatic number of the binomial random graph G(n,p) is concentrated in an interval of length at most \omega\sqrt{n}, and in the 1990s Alon showed that an interval of length \omega\sqrt{n}/\log n suffices for constant edge-probabilities p \in (0,1). We prove a similar logarithmic improvement of the Shamir-Spencer concentration results for the sparse case p=p(n) \to 0, and uncover a surprising concentration `jump' of the chromatic number in the very dense case p=p(n) \to 1.
Introducing a coupling framework reminiscent of FETI methods, but here in abstract form, we establish conditions for stability and minimal requirements for well-posedness on the continuous level, as well as conditions on local solvers for the approximation of subproblems. We then discuss the stability of the resulting Lagrange multiplier methods and prove stability under a mesh condition relating the local discretizations to the mortar space. If this condition is not satisfied, we show how a stabilization, acting only on the multiplier, can be used to achieve stability. The design of preconditioners for the Schur complement system is discussed in the unstabilized case. Finally, we discuss some applications that fit into the framework.
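A numpy sketch of the Lagrange-multiplier coupling and the Schur complement system on the multiplier (our toy saddle-point problem; in the FETI-like setting K would be block-diagonal over subdomains and B the mortar coupling):

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 10, 4
A = rng.standard_normal((n, n))
K = A @ A.T + n * np.eye(n)       # SPD "local solver" block
B = rng.standard_normal((m, n))   # multiplier/mortar coupling matrix
f, g = rng.standard_normal(n), np.zeros(m)

# Schur complement on the multiplier: S lam = B K^{-1} f - g
Kinv_f = np.linalg.solve(K, f)
Kinv_Bt = np.linalg.solve(K, B.T)
S = B @ Kinv_Bt
lam = np.linalg.solve(S, B @ Kinv_f - g)
u = Kinv_f - Kinv_Bt @ lam

# The recovered pair solves both blocks of the saddle-point system.
assert np.allclose(K @ u + B.T @ lam, f)
assert np.allclose(B @ u, g)
```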
In this paper we consider the numerical solution of fractional differential equations. In particular, we study a step-by-step graded-mesh procedure based on an expansion of the vector field in orthonormal Jacobi polynomials. Under mild hypotheses, the proposed procedure achieves spectral accuracy. A few numerical examples are reported to confirm the theoretical findings.
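A minimal sketch of the basic building block, expanding a smooth function in Jacobi polynomials via Gauss-Jacobi quadrature (our own toy with scipy; the paper's step-by-step graded-mesh procedure for fractional equations is not reproduced here):

```python
import numpy as np
from scipy.special import roots_jacobi, eval_jacobi, gammaln

a, b, N = 0.0, 0.0, 16            # Jacobi parameters (a, b > -1) and degree

def jacobi_norm2(n):
    # ||P_n^{(a,b)}||^2 under the weight (1-x)^a (1+x)^b on [-1, 1]
    return np.exp((a + b + 1) * np.log(2)
                  + gammaln(n + a + 1) + gammaln(n + b + 1)
                  - gammaln(n + a + b + 1) - gammaln(n + 1)) / (2*n + a + b + 1)

# Gauss-Jacobi nodes/weights: exact for polynomials up to degree 2N + 1.
x, wq = roots_jacobi(N + 1, a, b)

f = np.sin   # the "vector field" on one mesh step, mapped to [-1, 1]
coef = np.array([(wq * f(x) * eval_jacobi(n, a, b, x)).sum() / jacobi_norm2(n)
                 for n in range(N + 1)])

# Spectral accuracy: for smooth f the truncation error decays faster
# than any power of N; at the nodes the expansion is exact to roundoff.
approx = sum(c * eval_jacobi(n, a, b, x) for n, c in enumerate(coef))
print(np.max(np.abs(approx - f(x))))   # ~1e-16
```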
Motivated by previous work on moment varieties for Gaussian distributions and their mixtures, we study moment varieties for two other statistically important two-parameter distributions: the inverse Gaussian and gamma distributions. In particular, we realize the moment varieties as determinantal varieties and find their degrees and singularities. We also provide computational evidence for algebraic identifiability of mixtures, and study the identifiability degree and Euclidean distance degree.
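As a standard illustration of where determinantal structure on moments comes from (our example; the paper's precise determinantal representations may differ): for a gamma distribution with shape $\alpha$ and scale $\theta$,

$$m_k=\mathbb{E}[X^k]=\theta^k\,\frac{\Gamma(\alpha+k)}{\Gamma(\alpha)},\qquad m_{k+1}=\theta(\alpha+k)\,m_k,$$

so eliminating $\alpha$ and $\theta$ forces the matrix with rows $(m_{k+1},\,m_k,\,k\,m_k)$ to have rank at most two, and its $3\times 3$ minors give determinantal equations on the moments.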
A major technique in learning-augmented online algorithms is combining multiple algorithms or predictors. Since the performance of each predictor may vary over time, it is desirable to use not the single best predictor as a benchmark, but rather a dynamic combination which follows different predictors at different times. We design algorithms that combine predictions and are competitive against such dynamic combinations for a wide class of online problems, namely, metrical task systems. Against the best (in hindsight) unconstrained combination of $\ell$ predictors, we obtain a competitive ratio of $O(\ell^2)$, and show that this is best possible. However, for a benchmark with a slightly constrained number of switches between different predictors, we can obtain a $(1+\epsilon)$-competitive algorithm. Moreover, our algorithms can be adapted to access predictors in a bandit-like fashion, querying only one predictor at a time. An unexpected implication of one of our lower bounds is a new structural insight about covering formulations for the $k$-server problem.
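The paper's metrical task system algorithms are more involved, but the flavor of competing with a switching (dynamic) benchmark is visible in the classical fixed-share combiner from prediction with expert advice; the sketch below is this generic technique under our own assumptions (losses in $[0,1]$, illustrative parameters eta and alpha), not the paper's method.

```python
import numpy as np

def fixed_share(losses, eta=0.5, alpha=0.01):
    """Fixed-share combiner: competitive against the best *sequence* of
    predictors with few switches, not just the single best predictor.
    losses: (T, l) array of per-round predictor losses in [0, 1]."""
    T, l = losses.shape
    w = np.full(l, 1.0 / l)
    total = 0.0
    for t in range(T):
        total += w @ losses[t]                # expected loss of the combiner
        w = w * np.exp(-eta * losses[t])      # multiplicative-weights update
        w /= w.sum()
        w = (1 - alpha) * w + alpha / l       # share step: allow for switches
    return total
```

The share step keeps every predictor's weight bounded away from zero, which is what lets the combiner re-attach to a different predictor after a benchmark switch; alpha trades off against the number of switches in the benchmark.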
Spatial data can come in a variety of different forms, but two of the most common generating models for such observations are random fields and point processes. Whilst it is known that spectral analysis can unify these two different data forms, specific methodology for the related estimation is yet to be developed. In this paper, we solve this problem by extending multitaper estimation to estimate the spectral density matrix function for multivariate spatial data, where the processes can be any combination of point processes and random fields. We discuss finite-sample and asymptotic theory for the proposed estimators, as well as specific implementation details, including how to perform estimation on non-rectangular domains and how to correctly implement multitapering for processes sampled in different ways, e.g. continuously vs. on a regular grid.
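A minimal numpy sketch of the multitaper idea in the simplest case, a univariate field on a regular 1-D grid with sine tapers (our illustrative choices; the paper's estimator handles multivariate point-process/random-field combinations and irregular domains):

```python
import numpy as np

def multitaper_psd(x, K=5):
    """Multitaper spectral estimate on a regular 1-D grid using sine tapers;
    averaging K tapered periodograms trades a little bias for less variance."""
    N = x.size
    t = np.arange(1, N + 1)
    psd = np.zeros(N)
    for k in range(1, K + 1):
        taper = np.sqrt(2.0 / (N + 1)) * np.sin(np.pi * k * t / (N + 1))
        psd += np.abs(np.fft.fft(taper * x)) ** 2   # k-th eigenspectrum
    return psd / K   # frequencies at np.fft.fftfreq(N)

# Example: unit-variance white noise should give a roughly flat spectrum ~1.
x = np.random.default_rng(3).standard_normal(512)
S = multitaper_psd(x)
```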
In this article, we study nonparametric inference for a covariate-adjusted regression function. This parameter captures the average association between a continuous exposure and an outcome after adjusting for other covariates. In particular, under certain causal conditions, this parameter corresponds to the average outcome had all units been assigned to a specific exposure level, known as the causal dose-response curve. We propose a debiased local linear estimator of the covariate-adjusted regression function, and demonstrate that our estimator converges pointwise to a mean-zero normal limit distribution. We use this result to construct asymptotically valid confidence intervals for function values and differences thereof. In addition, we use approximation results for the distribution of the supremum of an empirical process to construct asymptotically valid uniform confidence bands. Our methods do not require undersmoothing, permit the use of data-adaptive estimators of the nuisance functions, and attain the optimal rate of convergence for a twice-differentiable function. We illustrate the practical performance of our estimator using numerical studies and an analysis of the effect of air pollution exposure on cardiovascular mortality.
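A minimal sketch of the local linear core of such an estimator (our toy version: plain kernel-weighted least squares, without the covariate adjustment, nuisance estimation, or debiasing step that the paper develops):

```python
import numpy as np

def local_linear(x0, X, Y, h):
    """Local linear estimate of E[Y | X = x0] with a Gaussian kernel:
    fit a weighted line around x0 and read off the intercept."""
    w = np.exp(-0.5 * ((X - x0) / h) ** 2)            # kernel weights
    D = np.column_stack([np.ones_like(X), X - x0])    # local design: 1, (x - x0)
    WD = D * w[:, None]
    theta = np.linalg.solve(D.T @ WD, WD.T @ Y)       # weighted least squares
    return theta[0]                                   # fitted value at x0

# Example: recover a smooth dose-response-like curve from noisy observations.
rng = np.random.default_rng(4)
X = rng.uniform(0, 1, 500)
Y = np.sin(2 * np.pi * X) + 0.3 * rng.standard_normal(500)
est = [local_linear(x0, X, Y, h=0.1) for x0 in np.linspace(0.1, 0.9, 9)]
```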