We investigate shift-invariant vectorial Boolean functions on $n$ bits that are induced from Boolean functions on $k$ bits, for $k\leq n$. We consider such functions that are not necessarily permutations but are, in some sense, almost bijective, and we study their cryptographic properties. In this context, we define an almost lifting as a Boolean function for which there is an upper bound on the number of collisions of its induced functions that does not depend on $n$. We show that if a Boolean function with diameter $k$ is an almost lifting, then the maximum number of collisions of its induced functions is $2^{k-1}$ for any $n$. Moreover, we search for functions in the class of almost liftings that have good cryptographic properties and for which the non-bijectivity does not cause major security weaknesses. These functions generalize the well-known map $\chi$ used in the Keccak hash function.
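For concreteness, the map $\chi$ mentioned above is the shift-invariant function induced by the public 3-bit local rule $b_i = a_i \oplus (\lnot a_{i+1} \wedge a_{i+2})$ (indices mod $n$). A minimal sketch, useful for checking bijectivity and counting collisions exhaustively for small $n$ (not part of the paper's constructions):

```python
def chi(bits):
    """Shift-invariant map induced by the Keccak chi local rule:
    b[i] = a[i] XOR (NOT a[i+1] AND a[i+2]), indices taken mod n."""
    n = len(bits)
    return [bits[i] ^ ((1 - bits[(i + 1) % n]) & bits[(i + 2) % n])
            for i in range(n)]
```

Enumerating all inputs shows the well-known behaviour: for odd $n$ (e.g. $n=5$) the induced map is a permutation, while for even $n$ (e.g. $n=4$) collisions appear, illustrating the "almost bijective" phenomenon studied above.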
We address the decision problem for a fragment of real analysis involving differentiable functions with continuous first derivatives. The proposed theory, besides the operators of Tarski's theory of reals, includes predicates for comparisons, monotonicity, convexity, and derivatives of functions over bounded closed intervals or unbounded intervals. Our decision algorithm is obtained by showing that satisfiable formulae of our theory admit canonical models in which functional variables are interpreted as piecewise exponential functions. These can be implicitly described within Tarski's decidable theory of reals. Our satisfiability test generalizes previous decidability results not involving derivative operators.
In a Jacobi--Davidson (JD) type method for singular value decomposition (SVD) problems, called JDSVD, a large symmetric and generally indefinite correction equation is solved iteratively at each outer iteration; these inner iterations dominate the overall efficiency of JDSVD. In this paper, a convergence analysis of the minimal residual (MINRES) method for the correction equation is carried out. Motivated by the results obtained, at each outer iteration a new correction equation is derived that extracts useful information from the current subspaces to construct effective preconditioners for the correction equation, and it is proven to retain the same outer-iteration convergence as JDSVD. The resulting method, called the inner-preconditioned JDSVD (IPJDSVD) method, is also a new JDSVD method, and any viable preconditioner for the correction equations in JDSVD is straightforwardly applicable to those in IPJDSVD. Convergence results show that MINRES for the new correction equation can converge much faster when there is a cluster of singular values closest to a given target. A new thick-restart IPJDSVD algorithm with deflation and purgation is proposed that simultaneously accelerates the outer and inner convergence of the standard thick-restart JDSVD and computes several singular triplets. Numerical experiments justify the theory, illustrate the considerable superiority of IPJDSVD over JDSVD, and demonstrate that a similar two-stage IPJDSVD algorithm substantially outperforms the state-of-the-art PRIMME\_SVDS software for computing the smallest singular triplets.
Accelerated failure time (AFT) models are frequently used to model survival data, providing a direct quantification of the relationship between event times and covariates. These models allow for the acceleration or deceleration of failure times through a multiplicative factor that accounts for the effect of covariates. While existing literature provides numerous methods for fitting AFT models with time-fixed covariates, adapting these approaches to scenarios involving both time-varying covariates and partly interval-censored data remains challenging. Motivated by a randomised clinical trial dataset on advanced melanoma patients, we propose a maximum penalised likelihood approach for fitting a semiparametric AFT model to survival data with partly interval-censored failure times. This method also accommodates both time-fixed and time-varying covariates. We utilise Gaussian basis functions to construct a smooth approximation of the non-parametric baseline hazard and fit the model using a constrained optimisation approach. The effectiveness of our method is demonstrated through extensive simulations. Finally, we illustrate the relevance of our approach by applying it to a dataset from a randomised clinical trial involving patients with advanced melanoma.
We address the regression problem for a general function $f:[-1,1]^d\to \mathbb R$ when the learner selects the training points $\{x_i\}_{i=1}^n$ to achieve a uniform error bound across the entire domain. In this setting, known historically as nonparametric regression, we aim to establish a sample complexity bound that depends solely on the function's degree of smoothness. Assuming periodicity at the domain boundaries, we introduce PADUA, an algorithm that, with high probability, provides performance guarantees optimal up to constant or logarithmic factors across all problem parameters. Notably, PADUA is the first parametric algorithm with optimal sample complexity for this setting. Due to this feature, we prove that, unlike the nonparametric state of the art, PADUA enjoys optimal space complexity in the prediction phase. To validate these results, we perform numerical experiments on functions derived from real audio data, where PADUA shows performance comparable to state-of-the-art methods while requiring only a fraction of the computational time.
We develop a class of functions Omega_N(x; mu, nu) in N-dimensional space concentrated around a spherical shell of radius mu and such that, when convolved with an isotropic Gaussian function, these functions do not change their analytic form but only the value of their 'width' parameter, nu. Isotropic Gaussian functions are a particular case of Omega_N(x; mu, nu) corresponding to mu = 0. Due to these features, such functions are an efficient tool for building approximations to smooth and continuous spherically symmetric functions, including oscillating ones. Atomic images in limited-resolution maps of the electron density, the electrostatic scattering potential, and other scalar fields studied in physics, chemistry, biology, and other natural sciences are examples of such functions. We give simple analytic expressions for Omega_N(x; mu, nu) for N = 1, 2, 3 and analyze the properties of these functions. Representing oscillating functions by a sum of Omega_N(x; mu, nu) allows calculating distorted maps at the same cost as the respective theoretical fields. We give practical examples of such representations for the interference functions of the uniform unit spheres for N = 1, 2, 3, which define the resolution of the respective images. Using the chain rule together with analytic expressions for the derivatives of Omega_N(x; mu, nu) makes refining the parameters of the models that describe these fields straightforward.
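The mu = 0 special case of the convolution property is the classical fact that convolving two isotropic Gaussians yields a Gaussian whose variance is the sum of the two. A quick one-dimensional numerical sanity check (a plain Riemann sum, not the paper's Omega_N machinery; grid parameters are illustrative):

```python
import math

def gauss(x, var):
    # Normalized 1-D Gaussian with variance `var`
    return math.exp(-x * x / (2 * var)) / math.sqrt(2 * math.pi * var)

def conv_at_zero(var1, var2, h=0.01, half_width=10.0):
    # Riemann-sum approximation of the convolution (g1 * g2)(0)
    n = int(half_width / h)
    return h * sum(gauss(i * h, var1) * gauss(-i * h, var2)
                   for i in range(-n, n + 1))
```

Here `conv_at_zero(1.0, 2.0)` agrees with `gauss(0.0, 3.0)` to high accuracy, since the variances add under convolution.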
We present a novel class of projected gradient (PG) methods for minimizing a smooth but not necessarily convex function over a convex compact set. We first provide a novel analysis of the "vanilla" PG method, achieving the best-known iteration complexity for finding an approximate stationary point of the problem. We then develop an "auto-conditioned" projected gradient (AC-PG) variant that achieves the same iteration complexity without requiring the input of the Lipschitz constant of the gradient or any line search procedure. The key idea is to estimate the Lipschitz constant using first-order information gathered from the previous iterations, and to show that the error caused by underestimating the Lipschitz constant can be properly controlled. We then generalize the PG methods to the stochastic setting, by proposing a stochastic projected gradient (SPG) method and a variance-reduced stochastic gradient (VR-SPG) method, achieving new complexity bounds in different oracle settings. We also present auto-conditioned stepsize policies for both stochastic PG methods and establish comparable convergence guarantees.
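For context, the "vanilla" PG iteration referred to above is $x_{k+1} = \Pi_C(x_k - \gamma \nabla f(x_k))$, where $\Pi_C$ is the Euclidean projection onto the feasible set. A minimal sketch on a toy quadratic over the unit ball (the problem, step size, and iteration count are illustrative assumptions, not the paper's AC-PG policy):

```python
import math

def project_ball(x, radius=1.0):
    # Euclidean projection onto the ball {x : ||x||_2 <= radius}
    norm = math.sqrt(sum(v * v for v in x))
    if norm <= radius:
        return list(x)
    return [radius * v / norm for v in x]

def projected_gradient(grad, x0, step, iters=500, project=project_ball):
    # Vanilla PG: x_{k+1} = Pi_C(x_k - step * grad(x_k))
    x = list(x0)
    for _ in range(iters):
        g = grad(x)
        x = project([xi - step * gi for xi, gi in zip(x, g)])
    return x
```

For $f(x) = (x_1-2)^2 + (x_2+2)^2$ constrained to the unit ball, the iterates converge to the projection of the unconstrained minimizer $(2,-2)$ onto the ball, i.e. $(1/\sqrt{2}, -1/\sqrt{2})$.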
The notion of a non-deterministic logical matrix (where connectives are interpreted as multi-functions) extends the traditional semantics for propositional logics based on logical matrices (where connectives are interpreted as functions). This extension allows for finitely characterizing a much wider class of logics, and has proven decisive in a myriad of recent compositionality results. In this paper we show that the added expressivity brought by non-determinism also has its drawbacks, and in particular that the problem of determining whether two given finite non-deterministic matrices are equivalent, in the sense that they induce the same logic, becomes undecidable. We also discuss some workable sufficient conditions and particular cases, namely regarding rexpansion homomorphisms and bridges to calculi.
We incorporate strong negation in the theory of computable functionals TCF, a common extension of Plotkin's PCF and G\"{o}del's system $\mathbf{T}$, by defining simultaneously the strong negation $A^{\mathbf{N}}$ of a formula $A$ and the strong negation $P^{\mathbf{N}}$ of a predicate $P$ in TCF. As a special case of the latter, we get strong negation of inductive and coinductive predicates of TCF. We prove appropriate versions of the Ex falso quodlibet and of double negation elimination for strong negation in TCF. We introduce the so-called tight formulas of TCF, i.e., formulas implied by the weak negation of their strong negation, and the relative tight formulas. We present various case studies and examples, which reveal the naturality of our definition of strong negation in TCF and justify the use of TCF as a formal system for a large part of Bishop-style constructive mathematics.
We prove, for stably computably enumerable formal systems, direct analogues of the first and second incompleteness theorems of G\"odel. A typical stably computably enumerable set is the set of Diophantine equations with no integer solutions; in particular, such sets are generally not computably enumerable. This gives the first extension of the second incompleteness theorem to formal systems that are not classically computable. Let us motivate this with a somewhat physical application. Let $\mathcal{H}$ be the suitable infinite-time limit (stabilization in the sense of the paper) of the mathematical output of humanity, specialized to first-order sentences in the language of arithmetic (for simplicity) and understood as a formal system. Suppose that all the relevant physical processes in the formation of $\mathcal{H}$ are Turing computable. Then $\mathcal{H}$ as defined may \emph{not} be computably enumerable, but it is stably computably enumerable. Thus, the classical G\"odel disjunction applied to $\mathcal{H}$ is meaningless, but applying our incompleteness theorems to $\mathcal{H}$ we get a sharper version of G\"odel's disjunction: assuming $\mathcal{H} \vdash PA$, either $\mathcal{H}$ is not stably computably enumerable, or $\mathcal{H}$ is not 1-consistent (in particular, not sound), or $\mathcal{H}$ cannot prove a certain true statement of arithmetic (and cannot disprove it if in addition $\mathcal{H}$ is 2-consistent).
We consider the quantum query complexity of local search as a function of graph geometry. Given a graph $G = (V,E)$ with $n$ vertices and black-box access to a function $f : V \to \mathbb{R}$, the goal is to find a vertex $v$ that is a local minimum, i.e., with $f(v) \leq f(u)$ for all $(u,v) \in E$, using as few oracle queries as possible. We show that the quantum query complexity of local search on $G$ is $\Omega\bigl( \frac{n^{\frac{3}{4}}}{\sqrt{g}} \bigr)$, where $g$ is the vertex congestion of the graph. For a $\beta$-expander with maximum degree $\Delta$, this implies a lower bound of $ \Omega\bigl(\frac{\sqrt{\beta} \; n^{\frac{1}{4}}}{\sqrt{\Delta} \; \log{n}} \bigr)$. We obtain these bounds by applying the strong weighted adversary method to a construction by Br\^anzei, Choo, and Recker (2024). As a corollary, on constant-degree expanders, we derive a lower bound of $\Omega\bigl(\frac{n^{\frac{1}{4}}}{ \sqrt{\log{n}}} \bigr)$. This improves upon the best prior quantum lower bound of $\Omega\bigl( \frac{n^{\frac{1}{8}}}{\log{n}}\bigr) $ by Santha and Szegedy (2004). In contrast to the classical setting, a gap remains in the quantum case between our lower bound and the best-known upper bound of $O\bigl( n^{\frac{1}{3}} \bigr)$ for such graphs.
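For reference, the local-minimum condition in the query model above, together with the naive classical greedy descent baseline (each $f$-evaluation counts as one oracle query; this sketch illustrates the problem setup only, not the quantum algorithms or adversary bounds discussed above):

```python
def is_local_min(adj, f, v):
    # v is a local minimum iff f(v) <= f(u) for every neighbour u of v
    return all(f(v) <= f(u) for u in adj[v])

def greedy_local_search(adj, f, start):
    # Classical baseline: move to any strictly improving neighbour until
    # none exists; `queries` counts one oracle query per f-evaluation.
    v, fv, queries = start, f(start), 1
    while True:
        best = None
        for u in adj[v]:
            fu = f(u)
            queries += 1
            if fu < fv:
                best, fv = u, fu
                break
        if best is None:
            return v, queries
        v = best
```

On the path graph 0-1-2-3 with $f(v) = (v-2)^2$, starting from vertex 0, the greedy walk reaches the local (here global) minimum at vertex 2.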