We propose a randomized lattice algorithm for approximating multivariate periodic functions over the $d$-dimensional unit cube from the weighted Korobov space with mixed smoothness $\alpha > 1/2$ and product weights $\gamma_1,\gamma_2,\ldots\in [0,1]$. Building upon the deterministic lattice algorithm by Kuo, Sloan, and Wo\'{z}niakowski (2006), we incorporate a randomized quadrature rule by Dick, Goda, and Suzuki (2022) to accelerate the convergence rate. This randomization involves drawing the number of points for function evaluations randomly, and selecting a good generating vector for rank-1 lattice points using the randomized component-by-component algorithm. We prove that our randomized algorithm achieves a worst-case root mean squared $L_2$-approximation error of order $M^{-\alpha/2 - 1/8 + \varepsilon}$ for an arbitrarily small $\varepsilon > 0$, where $M$ denotes the maximum number of function evaluations, and that the error bound is independent of the dimension $d$ if the weights satisfy $\sum_{j=1}^\infty \gamma_j^{1/\alpha} < \infty$. Our upper bound converges faster than a lower bound on the worst-case $L_2$-approximation error for deterministic rank-1 lattice-based approximation proved by Byrenheid, K\"{a}mmerer, Ullrich, and Volkmer (2017). We also show a lower error bound of order $M^{-\alpha/2-1/2}$ for our randomized algorithm, leaving a slight gap between the upper and lower bounds open for future research.
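To make the construction concrete, here is a minimal Python sketch of rank-1 lattice points together with the two sources of randomness described above; the generating vector would come from the randomized component-by-component (CBC) construction, which is replaced here by a placeholder random draw, and the range for the random number of points is illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

def rank1_lattice(z, N):
    # Rank-1 lattice points x_k = frac(k * z / N), k = 0, ..., N-1.
    k = np.arange(N)[:, None]
    return (k * np.asarray(z)[None, :] % N) / N

M = 64                                # maximum number of function evaluations
N = int(rng.integers(M // 2, M + 1))  # number of points drawn at random (illustrative range)
z = rng.integers(1, N, size=3)        # placeholder for the randomized CBC vector
points = rank1_lattice(z, N)          # shape (N, 3): N points in [0, 1)^3
```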
The $d$-independence number of a graph $G$ is the largest possible size of an independent set $I$ in $G$ where each vertex of $I$ has degree at least $d$ in $G$. Upper bounds for the $d$-independence number in planar graphs are well-known for $d=3,4,5$, and can in fact be matched with constructions that actually have minimum degree $d$. In this paper, we explore the same questions for 1-planar graphs, i.e., graphs that can be drawn in the plane with at most one crossing per edge. We give upper bounds for the $d$-independence number for all $d$. Then we give constructions that match the upper bound, and (for small $d$) also have minimum degree $d$.
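As a concrete, if exponential-time, illustration of the quantity being bounded, the following Python sketch computes the $d$-independence number of a tiny graph by brute force; it is only meant to make the definition explicit, not to be an efficient algorithm.

```python
from itertools import combinations

def d_independence_number(n, edges, d):
    # Largest independent set whose vertices all have degree >= d in G.
    # Brute force over vertex subsets; only usable on tiny graphs.
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    eligible = [v for v in range(n) if len(adj[v]) >= d]
    for r in range(len(eligible), 0, -1):
        for S in combinations(eligible, r):
            if all(v not in adj[u] for u, v in combinations(S, 2)):
                return r
    return 0

# K4 minus one edge: vertices 2 and 3 are non-adjacent, all degrees >= 2.
print(d_independence_number(4, [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3)], 2))  # -> 2
```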
Time-variant standard Sylvester-conjugate matrix equations are presented as early time-variant versions of the complex conjugate matrix equations. Current solving methods include the Con-CZND1 and Con-CZND2 models, both of which rely on ode45 to solve the continuous models. Given practical computational considerations, discretizing these models is also important. Based on the Euler-forward discretization formula, the Con-DZND1-2i and Con-DZND2-2i models are proposed. Numerical experiments are conducted with step sizes of 0.1 and 0.001. These experiments show that the Con-DZND1-2i and Con-DZND2-2i models exhibit different neural dynamics from their continuous counterparts, such as trajectory correction in the Con-DZND2-2i model and the swallowing phenomenon in the Con-DZND1-2i model, with convergence affected by the step size. The experiments also highlight the differences between optimizing sampling discretization errors and space compressive approximation errors in neural dynamics.
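For reference, below is a minimal Python sketch of the Euler-forward step that underlies such discrete models, applied to a scalar toy problem rather than the Sylvester-conjugate equation itself; it illustrates how the step size influences the discrete trajectory, using the two step sizes quoted in the abstract.

```python
import numpy as np

def euler_forward(f, x0, t0, t_end, h):
    # Euler-forward discretization: x_{k+1} = x_k + h * f(t_k, x_k).
    # Generic sketch; the Con-DZND models apply this step to the
    # neural-dynamics error equation of the matrix problem.
    t, x = t0, np.asarray(x0, dtype=float)
    while t < t_end:
        x = x + h * f(t, x)
        t += h
    return x

# Toy illustration of the step-size effect on a decaying dynamic.
for h in (0.1, 0.001):
    print(h, euler_forward(lambda t, x: -5.0 * x, 1.0, 0.0, 2.0, h))
```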
Logistic regression is a classical model for describing the probabilistic dependence of binary responses on multivariate covariates. We consider the predictive performance of the maximum likelihood estimator (MLE) for logistic regression, assessed in terms of logistic risk. We consider two questions: first, that of the existence of the MLE (which occurs when the dataset is not linearly separated), and second, that of its accuracy when it exists. These properties depend on both the dimension of the covariates and the signal strength. In the case of Gaussian covariates and a well-specified logistic model, we obtain sharp non-asymptotic guarantees for the existence and excess logistic risk of the MLE. We then generalize these results in two ways: first, to non-Gaussian covariates satisfying a certain two-dimensional margin condition, and second, to the general case of statistical learning with a possibly misspecified logistic model. Finally, we consider the case of a Bernoulli design, where the behavior of the MLE is highly sensitive to the parameter direction.
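The existence question can be checked computationally: the MLE fails to exist exactly when the data are linearly separated, and complete separation is a linear feasibility problem. Below is a minimal Python sketch for the intercept-free model (quasi-separation, the borderline case, is ignored); the function name is ours, not the paper's.

```python
import numpy as np
from scipy.optimize import linprog

def is_linearly_separated(X, y):
    # y in {-1, +1}.  The data are completely separated iff some w
    # satisfies y_i * <x_i, w> >= 1 for all i (any strict separator can
    # be rescaled to this form).  Feasibility is checked with a
    # zero-objective LP.  Separation <=> the logistic MLE does not exist.
    n, d = X.shape
    A_ub = -(y[:, None] * X)          # encodes y_i <x_i, w> >= 1 as A_ub w <= -1
    res = linprog(np.zeros(d), A_ub=A_ub, b_ub=-np.ones(n),
                  bounds=[(None, None)] * d, method="highs")
    return res.status == 0            # feasible => separated
```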
We propose a tamed-adaptive Milstein scheme for stochastic differential equations in which the first-order derivatives of the coefficients are locally H\"older continuous of order $\alpha$. We show that the scheme converges in the $L_2$-norm with a rate of $(1+\alpha)/2$ over both finite intervals $[0, T]$ and the infinite interval $(0, +\infty)$, under certain growth conditions on the coefficients.
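For orientation, the classical Milstein step for a scalar SDE $dX_t = a(X_t)\,dt + b(X_t)\,dW_t$ is sketched below in Python; note that this is the plain fixed-step, untamed scheme, whereas the paper's method additionally tames the coefficients and adapts the step size.

```python
import numpy as np

def milstein(a, b, db, x0, T, n, rng):
    # Classical Milstein scheme:
    #   X_{k+1} = X_k + a(X_k) h + b(X_k) dW + 0.5 b(X_k) b'(X_k) (dW^2 - h)
    # db is the derivative b'.  No taming or adaptivity, unlike the paper.
    h = T / n
    x = x0
    for _ in range(n):
        dW = rng.normal(0.0, np.sqrt(h))
        x += a(x) * h + b(x) * dW + 0.5 * b(x) * db(x) * (dW ** 2 - h)
    return x

# Geometric Brownian motion: a(x) = mu*x, b(x) = sigma*x, b'(x) = sigma.
rng = np.random.default_rng(0)
print(milstein(lambda x: 0.05 * x, lambda x: 0.2 * x, lambda x: 0.2,
               1.0, 1.0, 1000, rng))
```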
Many models require integrals of high-dimensional functions: for instance, to obtain marginal likelihoods. Such integrals may be intractable, or too expensive to compute numerically. Instead, we can use the Laplace approximation (LA). The LA is exact if the function is proportional to a normal density; its effectiveness therefore depends on the function's true shape. Here, we propose the use of the probabilistic numerical framework to develop a diagnostic for the LA and its underlying shape assumptions, modelling the function and its integral as a Gaussian process and devising a "test" by conditioning on a finite number of function values. The test is decidedly non-asymptotic and is not intended as a full substitute for numerical integration; rather, it is simply intended to test the validity of the assumptions underpinning the LA with as little computation as possible. We discuss approaches to optimizing and designing the test, apply it to known sample functions, and highlight the challenges of high dimensions.
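As a reminder of the object being diagnosed, a one-dimensional Laplace approximation of $\int e^{g(x)}\,dx$ can be sketched in a few lines of Python: locate the mode of $g$, then integrate the matched Gaussian. The finite-difference Hessian below is purely illustrative.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def laplace_approx_1d(log_f):
    # LA: integral of exp(log_f) ~ exp(log_f(x*)) * sqrt(2*pi / -log_f''(x*)),
    # where x* is the mode.  Second derivative by central finite differences.
    x_star = minimize_scalar(lambda x: -log_f(x)).x
    eps = 1e-5
    h2 = (log_f(x_star + eps) - 2 * log_f(x_star) + log_f(x_star - eps)) / eps**2
    return np.exp(log_f(x_star)) * np.sqrt(2 * np.pi / -h2)

# Exact for a Gaussian integrand: integral of exp(-x^2/2) is sqrt(2*pi).
print(laplace_approx_1d(lambda x: -0.5 * x**2))
```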
We present and analyze two stabilized finite element methods for the numerical solution of the Poisson--Nernst--Planck equations. The stabilization we consider uses a shock detector and a discrete graph Laplacian operator for the ion equations, whereas the discrete equation for the electric potential need not be stabilized. Discrete solutions stemming from the first algorithm preserve both the discrete maximum and minimum principles. The discrete solutions of the second algorithm are designed so that they satisfy the discrete principles and obey an entropy law, provided that an acuteness condition is imposed on the meshes. Remarkably, the latter algorithm is found to be unconditionally stable. We validate our methodology through numerical experiments.
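The graph-Laplacian ingredient can be illustrated in isolation. The Python sketch below applies standard discrete upwinding, adding artificial diffusion so that the off-diagonal entries of the system matrix become non-positive, which is the typical algebraic mechanism behind discrete maximum principles; it is a generic sketch, not the paper's precise stabilization, which also involves a shock detector.

```python
import numpy as np

def graph_laplacian_stabilization(A):
    # Discrete upwinding: build a graph-Laplacian artificial-diffusion
    # matrix D with d_ij = -max(a_ij, a_ji, 0) off the diagonal and zero
    # row sums, so that A + D has non-positive off-diagonal entries.
    D = -np.maximum(np.maximum(A, A.T), 0.0)
    np.fill_diagonal(D, 0.0)
    np.fill_diagonal(D, -D.sum(axis=1))
    return A + D
```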
QAC$^0$ is the class of constant-depth quantum circuits with polynomially many ancillary qubits, where Toffoli gates on arbitrarily many qubits are allowed. In this work, we show that the parity function cannot be computed in QAC$^0$, resolving an open problem in quantum circuit complexity that has stood for more than twenty years. As a result, this proves ${\rm QAC}^0 \subsetneqq {\rm QAC}_{\rm wf}^0$. We also show that any QAC circuit of depth $d$ that approximately computes parity on $n$ bits requires $2^{\widetilde{\Omega}(n^{1/d})}$ ancillary qubits, which is close to tight. This implies a similar lower bound on approximately preparing cat states using QAC circuits. Finally, we prove a quantum analog of the Linial-Mansour-Nisan theorem for QAC$^0$. This implies that, for any QAC$^0$ circuit $U$ of depth $d$ with $a={\rm poly}(n)$ ancillary qubits, and for any $x\in\{0,1\}^n$, the correlation between $Q(x)$ and the parity function is bounded by ${1}/{2} + 2^{-\widetilde{\Omega}(n^{1/d})}$, where $Q(x)$ denotes the output of measuring the output qubit of $U|x,0^a\rangle$. All of the above consequences rely on the following technical result. If $U$ is a QAC$^0$ circuit with $a={\rm poly}(n)$ ancillary qubits, then there is a distribution $\mathcal{D}$ of bounded polynomials of degree ${\rm polylog}(n)$ such that, with high probability, a random polynomial from $\mathcal{D}$ approximates the function $\langle x,0^a| U^\dag Z_{n+1} U |x,0^a\rangle$ for a large fraction of $x\in \{0,1\}^n$. This result is analogous to the Razborov-Smolensky result on the approximation of AC$^0$ circuits by random low-degree polynomials.
We introduce an algebraic concept of the frame for abstract conditional independence (CI) models, together with basic operations with respect to which such a frame should be closed: copying and marginalization. Three standard examples of such frames are (discrete) probabilistic CI structures, semi-graphoids and structural semi-graphoids. We concentrate on those frames which are closed under the operation of set-theoretical intersection because, for these, the respective families of CI models are lattices. This allows one to apply results from lattice theory and formal concept analysis to describe such families in terms of implications among CI statements. The central concept of this paper is that of self-adhesivity, defined in algebraic terms, which is a combinatorial reflection of the self-adhesivity concept studied earlier in the context of polymatroids and information theory. The generalization also leads to a self-adhesivity operator defined on the hyper-level of CI frames. We answer some of the questions related to this approach and raise other open questions. The core of the paper lies in computations. The combinatorial approach to computation might overcome some memory and space limitations of software packages based on polyhedral geometry, in particular if SAT solvers are utilized. We characterize some basic CI families over 4 variables in terms of canonical implications among CI statements. We apply our method in an information-theoretic context to the task of entropic region demarcation over 5 variables.
An efficient algorithm, Apriori_Goal, is proposed for constructing association rules for a relational database with a given classification. The algorithm's features are tied to the specifics of the database and the method of encoding its records. The algorithm provides five criteria that characterize the quality of the constructed rules. Additional criteria are proposed for filtering the sets used when constructing the association rules. The proposed method of encoding records allows for an efficient implementation of the basic operation underlying the computation of rule characteristics. The algorithm works with a relational database whose columns can be of different types, both continuous and discrete. Among the columns, a discrete target column is distinguished, which defines the classification of the records. This allows the original database to be divided into $n$ subsets according to the number of categories of the target parameter. A classical example of such databases is medical databases, where the target parameter is the diagnosis established by doctors. A preprocessor, which is an important part of the algorithm, converts the properties of the objects represented by the columns of the original database into binary properties and encodes each record as a single integer. In addition to saving memory, the proposed format completely preserves the information about the binary properties representing the original record. More importantly, the computationally intensive operations on records required for calculating rule characteristics are performed almost instantly in this format, using a pair of logical operations on integers.
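The encoding idea can be sketched as follows: each record becomes one integer bitmask over its binary properties, so testing whether a record contains a candidate set reduces to a bitwise AND and a comparison. The function and variable names in this Python sketch are hypothetical, not the paper's.

```python
def encode(record_properties, property_index):
    # Encode a record's binary properties as a single integer bitmask.
    mask = 0
    for p in record_properties:
        mask |= 1 << property_index[p]
    return mask

def support(itemset_mask, encoded_records):
    # A record supports the itemset iff all of the itemset's bits are set,
    # i.e. one AND plus one comparison per record.
    return sum(1 for r in encoded_records if r & itemset_mask == itemset_mask)

# Tiny illustration with three records and three binary properties.
props = {"fever": 0, "cough": 1, "age>60": 2}
records = [encode(r, props)
           for r in [{"fever", "cough"}, {"cough"}, {"fever", "age>60"}]]
print(support(encode({"fever"}, props), records))  # -> 2
```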
Accurate computation of robust estimates for extremal quantiles of empirical distributions is an essential task for a wide range of applied fields, including economic policymaking and the financial industry. Such estimates are particularly critical in calculating risk measures, such as Growth-at-Risk (GaR). This work proposes a conformal prediction (CP) framework to estimate calibrated quantiles, and presents an extensive simulation study and a real-world analysis of GaR to examine its benefits with respect to the state of the art. Our findings show that CP methods consistently improve the calibration and robustness of quantile estimates at all levels. The calibration gains are especially pronounced at extremal quantiles, which are critical for risk assessment and where traditional methods tend to fall short. In addition, we introduce a novel property that guarantees coverage under the exchangeability assumption, providing a valuable tool for managing risk by quantifying and controlling the likelihood of future extreme observations.
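A minimal sketch of the kind of one-sided split-conformal adjustment involved, here for a lower-tail quantile such as a 5% GaR bound, is given below in Python; it follows the generic conformalized-quantile-regression recipe under exchangeability and is not the paper's exact procedure.

```python
import numpy as np

def conformal_lower_quantile(q_cal, y_cal, q_test, alpha):
    # One-sided split-conformal calibration of a lower-tail quantile
    # estimate q(x) at tail level alpha (e.g. alpha = 0.05 for GaR).
    # Under exchangeability, the returned bound L satisfies
    # P(Y_test >= L) >= 1 - alpha.
    scores = q_cal - y_cal                   # > 0 when the estimate exceeds the outcome
    n = len(scores)
    k = int(np.ceil((n + 1) * (1 - alpha)))  # finite-sample-valid rank
    q_adj = np.sort(scores)[min(k, n) - 1]   # empirical (1 - alpha)-quantile of scores
    return q_test - q_adj                    # calibrated lower bound
```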