Working in a variant of the intersection type assignment system of Coppo, Dezani-Ciancaglini and Veneri [1981], we prove several facts about sets of terms having a given intersection type. One of our results is that every strongly normalizing term $M$ admits a *uniqueness typing*, which is a pair $(\Gamma, A)$ such that (1) $\Gamma \vdash M : A$, and (2) $\Gamma \vdash N : A \Longrightarrow M =_{\beta\eta} N$. We discuss several presentations of intersection type algebras and the corresponding choices of type assignment rules. We also prove that the set of closed terms having a given intersection type is separable and, if infinite, forms an adequate numeral system.
In this paper, we design and analyze a Hybrid High-Order (HHO) approximation for a class of quasilinear elliptic problems of nonmonotone type. The proposed method has several advantages; for instance, it supports arbitrary orders of approximation and general polytopal meshes. The key ingredients are local reconstruction and high-order stabilization terms. Existence and uniqueness of the discrete solution are shown by Brouwer's fixed point theorem and a contraction argument. An a priori error estimate is established in the discrete energy norm and shows an optimal order of convergence. Numerical experiments are performed to substantiate the theoretical results.
In this paper we present results on asymptotic characteristics of multivariate function classes in the uniform norm. Our main interest is the approximation of functions with mixed smoothness parameter not larger than $1/2$. We focus on the behavior of the best $m$-term trigonometric approximation as well as the decay of Kolmogorov and entropy numbers in the uniform norm. It turns out that these quantities share a few fundamental abstract properties, such as their behavior under real interpolation, so that they can be treated simultaneously. We start by proving estimates for finite rank convolution operators with range in a step hyperbolic cross. These results imply bounds for the corresponding function space embeddings via a well-known decomposition technique. The decay of the Kolmogorov numbers has direct implications for the problem of sampling recovery in $L_2$ in situations where recent results in the literature are not applicable because the corresponding approximation numbers are not square summable.
In this paper, by examining the modularity function that measures the quality of a community division, a new algorithm for dividing communities is proposed, called the Connect Intensity Iteration Algorithm, or CIIA for short. In this algorithm, a new indicator is proposed: the difference between the actual number of edges between two nodes and the expected number of edges if the edges were placed at random. This indicator reflects more information about the relation between the nodes. The larger its value, the greater the possibility that the two nodes belong to the same community, and vice versa. This paper also verifies the algorithm through numerical simulations and real cases, and the results show the feasibility of the algorithm.
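To make the indicator concrete, here is a minimal Python sketch (not the authors' implementation; the helper name `connect_intensity` is hypothetical) computing, for a pair of nodes in an undirected graph, the difference between the observed adjacency and the expected number of edges under random placement, i.e. the standard modularity term $A_{ij} - k_i k_j / (2m)$.

```python
import networkx as nx

def connect_intensity(G, i, j):
    """Difference between the actual adjacency A_ij and the expected
    number of edges k_i * k_j / (2m) under random edge placement.
    Hypothetical helper illustrating the indicator; not the paper's code."""
    m = G.number_of_edges()
    a_ij = 1.0 if G.has_edge(i, j) else 0.0
    return a_ij - G.degree(i) * G.degree(j) / (2.0 * m)

# Example on Zachary's karate club graph.
G = nx.karate_club_graph()
print(connect_intensity(G, 0, 1))   # positive: more connected than expected
print(connect_intensity(G, 0, 33))  # negative: less connected than expected
```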
Let a polytope $\mathcal{P}$ be defined in one of the following ways: (i) $\mathcal{P} = \{x \in \mathbb{R}^n \colon A x \leq b\}$, where $A \in \mathbb{Z}^{(n+m) \times n}$, $b \in \mathbb{Z}^{n+m}$, and $\operatorname{rank}(A) = n$; (ii) $\mathcal{P} = \{x \in \mathbb{R}_+^n \colon A x = b\}$, where $A \in \mathbb{Z}^{m \times n}$, $b \in \mathbb{Z}^{m}$, and $\operatorname{rank}(A) = m$; and let all rank minors of $A$ be bounded by $\Delta$ in absolute value. We show that $|\mathcal{P} \cap \mathbb{Z}^n|$ can be computed by an algorithm with arithmetic complexity $$ O\bigl( \nu(d,m,\Delta) \cdot d^3 \cdot \Delta^4 \cdot \log(\Delta) \bigr), $$ where $d = \dim(\mathcal{P})$ and $\nu(d,m,\Delta)$ is the maximal possible number of vertices of a $d$-dimensional polytope defined by one of the systems above. Using this result, we obtain the following arithmetic complexity bounds for computing $|\mathcal{P} \cap \mathbb{Z}^n|$: 1) the bound $O\bigl(\frac{d}{m}+1\bigr)^m \cdot d^3 \cdot \Delta^4 \cdot \log(\Delta)$, which is polynomial in $d$ and $\Delta$ for any fixed $m$; 2) the bound $O\bigl(\frac{m}{d}+1\bigr)^{\frac{d}{2}} \cdot d^4 \cdot \Delta^4 \cdot \log(\Delta)$, which is polynomial in $m$ and $\Delta$ for any fixed $d$; 3) the bound $O(d)^{4 + \frac{d}{2}} \cdot \Delta^{4+d} \cdot \log(\Delta)$, which is polynomial in $\Delta$ for any fixed $d$. These bounds can be used to obtain faster algorithms for the ILP feasibility problem and for counting integer points in a simplex or in an unbounded Subset-Sum polytope. Unbounded and parametric versions of the above problem are also considered.
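For intuition about the quantity $|\mathcal{P} \cap \mathbb{Z}^n|$ itself (not about the algorithm achieving the bounds above), the following naive Python sketch enumerates the integer points of a polytope in form (i) by scanning an integer bounding box; it is exponential in $n$ and only meant to define the counting problem.

```python
import itertools
import numpy as np

def count_integer_points(A, b, box):
    """Count |P ∩ Z^n| for P = {x : A x <= b} by brute force over the
    integer box given as a list of (lo, hi) ranges per coordinate.
    Illustrative only: exponential in n, unlike the paper's algorithm."""
    A, b = np.asarray(A), np.asarray(b)
    ranges = [range(lo, hi + 1) for lo, hi in box]
    return sum(1 for x in itertools.product(*ranges)
               if np.all(A @ np.asarray(x) <= b))

# Example: the simplex {x >= 0, x1 + x2 <= 3} contains 10 integer points.
A = [[-1, 0], [0, -1], [1, 1]]
b = [0, 0, 3]
print(count_integer_points(A, b, box=[(0, 3), (0, 3)]))  # -> 10
```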
What is the information leakage of an iterative randomized learning algorithm about its training data, when the internal state of the algorithm is \emph{private}? How much does each specific training epoch contribute to the information leakage through the released model? We study this problem for noisy gradient descent algorithms, and model the \emph{dynamics} of R\'enyi differential privacy loss throughout the training process. Our analysis traces a provably \emph{tight} bound on the R\'enyi divergence between the pair of probability distributions over parameters of models trained on neighboring datasets. We prove that, for smooth and strongly convex loss functions, the privacy loss converges exponentially fast, which is a significant improvement over composition theorems, which overestimate the privacy loss by upper-bounding its total value over all intermediate gradient computations. For Lipschitz, smooth, and strongly convex loss functions, we prove optimal utility with a small gradient complexity for noisy gradient descent algorithms.
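For reference, the following Python sketch shows a generic noisy gradient descent loop of the kind analyzed here (a minimal illustration with assumed step size and noise scale; the R\'enyi privacy accounting itself is the subject of the paper and is not reproduced).

```python
import numpy as np

def noisy_gd(grad, w0, eta=0.1, sigma=1.0, epochs=100, rng=None):
    """Noisy gradient descent: each update adds isotropic Gaussian noise,
    the mechanism whose privacy-loss dynamics the paper analyzes.
    eta (step size) and sigma (noise scale) are illustrative choices."""
    rng = np.random.default_rng() if rng is None else rng
    w = np.asarray(w0, dtype=float)
    for _ in range(epochs):
        w = w - eta * (grad(w) + sigma * rng.standard_normal(w.shape))
    return w

# Example: a smooth, strongly convex quadratic loss f(w) = ||w - 1||^2 / 2.
w_hat = noisy_gd(grad=lambda w: w - 1.0, w0=np.zeros(5))
print(w_hat)  # hovers around the minimizer (1, ..., 1)
```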
We present a method for producing unbiased parameter estimates and valid confidence intervals under the constraints of differential privacy, a formal framework for limiting individual information leakage from sensitive data. Prior work in this area is limited in that it is tailored to calculating confidence intervals for specific statistical procedures, such as mean estimation or simple linear regression. Other recent work can produce confidence intervals for more general sets of procedures, but it either yields only approximately unbiased estimates, is designed for one-dimensional outputs, or assumes significant user knowledge about the data-generating distribution. Our method induces distributions of mean and covariance estimates via the bag of little bootstraps (BLB) and uses them to privately estimate the parameters' sampling distribution via a generalized version of the CoinPress estimation algorithm. If the user can bound the parameters of the BLB-induced distributions and provide heavier-tailed families, the algorithm produces unbiased parameter estimates and valid confidence intervals that hold with arbitrarily high probability. These results hold in high dimensions and for any estimation procedure that behaves well under the bootstrap.
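To fix ideas, here is a minimal Python sketch of the bag of little bootstraps (BLB) stage that induces the distribution of estimates; the private CoinPress-style estimation step is omitted, and the subset and resample counts are illustrative choices.

```python
import numpy as np

def blb_estimates(data, estimator, n_subsets=10, n_resamples=50, rng=None):
    """Bag of little bootstraps: split the data into small subsets, then
    resample each subset up to the full sample size n (via multinomial
    counts) and apply the estimator. Returns the induced estimates, from
    which mean and covariance estimates can be formed."""
    rng = np.random.default_rng() if rng is None else rng
    n = len(data)
    subsets = np.array_split(rng.permutation(data), n_subsets)
    estimates = []
    for subset in subsets:
        b = len(subset)
        for _ in range(n_resamples):
            counts = rng.multinomial(n, np.full(b, 1.0 / b))
            estimates.append(estimator(np.repeat(subset, counts)))
    return np.array(estimates)

# Example: induced distribution of the sample mean for skewed data.
data = np.random.default_rng(0).exponential(size=2000)
ests = blb_estimates(data, estimator=np.mean)
print(ests.mean(), ests.std())
```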
In the present paper, we study the limit sets of almost periodic functions $f(x)$. It is interesting that the values $r=\inf|f(x)|$ and $R=\sup|f(x)|$ can be expressed in exact form. We show that the annulus $r\leq |z|\leq R$ is the limit set of the almost periodic function $f(x)$ (under some natural conditions on $f$). The exact expression for $r$ coincides with the well-known partition problem formula and gives a new analytical method for solving the corresponding partition problem. Several interesting examples are considered. For instance, in an example with five numbers, the well-known Karmarkar--Karp algorithm gives the value $m=2$ as the solution of the partition problem, while our method gives the correct answer $m=0$. The figures presented in the Appendix illustrate our results.
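For comparison, the Karmarkar--Karp (largest differencing) heuristic mentioned above is easy to state in Python: it repeatedly replaces the two largest numbers by their difference, and the last remaining value is the partition discrepancy it achieves. The instance below is a classic five-number example (not necessarily the one from the paper) on which the heuristic returns 2 even though a perfect partition with discrepancy 0 exists.

```python
import heapq

def karmarkar_karp(numbers):
    """Largest differencing method: repeatedly replace the two largest
    numbers by their difference; the final value is the partition
    discrepancy achieved by the heuristic (not always the optimum)."""
    heap = [-x for x in numbers]  # max-heap via negation
    heapq.heapify(heap)
    while len(heap) > 1:
        a, b = -heapq.heappop(heap), -heapq.heappop(heap)
        heapq.heappush(heap, -(a - b))
    return -heap[0]

# Optimal split: {4, 5, 6} vs {7, 8}, both summing to 15 (discrepancy 0).
print(karmarkar_karp([4, 5, 6, 7, 8]))  # -> 2
```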
This paper studies Makespan Minimization in the secretary model. Formally, jobs, specified by their processing times, are presented in uniformly random order. An online algorithm has to assign each job permanently and irrevocably to one of m parallel and identical machines such that the expected time it takes to process them all, the makespan, is minimized. We give two deterministic algorithms. First, a straightforward adaptation of the semi-online strategy LightLoad provides a very simple algorithm that retains its competitive ratio of 1.75. A new and sophisticated algorithm is 1.535-competitive. These competitive ratios are obtained not only in expectation but, in fact, for all but a very tiny fraction of job orders. Classically, online makespan minimization only considers the worst-case order; there, no competitive ratio below 1.885 for deterministic algorithms and 1.581 using randomization is possible, and the best randomized algorithm so far is 1.916-competitive. Our results show that classical worst-case orders are quite rare and pessimistic for many applications. They also demonstrate the power of randomization when compared to much stronger deterministic reordering models. We complement our results by providing the first lower bounds: a competitive ratio obtained on nearly all possible job orders must be at least 1.257, which implies a lower bound of 1.043 for both deterministic and randomized algorithms in the general model.
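As a baseline for the model, here is a minimal Python sketch of greedy list scheduling, which assigns each arriving job to a currently least-loaded machine; this is neither LightLoad nor the new 1.535-competitive algorithm, just the simplest online strategy for the same setting.

```python
import heapq
import random

def greedy_makespan(jobs, m):
    """Assign each job, in arrival order, to a currently least-loaded of
    m identical machines and return the makespan. A simple baseline,
    not the LightLoad-based or 1.535-competitive algorithms above."""
    loads = [0.0] * m
    heapq.heapify(loads)
    for p in jobs:
        heapq.heappush(loads, heapq.heappop(loads) + p)
    return max(loads)

# Secretary model: job processing times arrive in uniformly random order.
jobs = [random.uniform(0, 1) for _ in range(100)]
random.shuffle(jobs)
print(greedy_makespan(jobs, m=10))
```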
In this work, we consider the distributed optimization of non-smooth convex functions using a network of computing units. We investigate this problem under two regularity assumptions: (1) the Lipschitz continuity of the global objective function, and (2) the Lipschitz continuity of local individual functions. Under the local regularity assumption, we provide the first optimal first-order decentralized algorithm, called multi-step primal-dual (MSPD), together with its corresponding optimal convergence rate. A notable aspect of this result is that, for non-smooth functions, while the dominant term of the error is in $O(1/\sqrt{t})$, the structure of the communication network only impacts a second-order term in $O(1/t)$, where $t$ is time. In other words, the error due to limits in communication resources decreases at a fast rate even in the case of non-strongly-convex objective functions. Under the global regularity assumption, we provide a simple yet efficient algorithm called distributed randomized smoothing (DRS), based on a local smoothing of the objective function, and show that DRS is within a $d^{1/4}$ multiplicative factor of the optimal convergence rate, where $d$ is the underlying dimension.
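The local smoothing underlying DRS can be illustrated as follows: a non-smooth convex $f$ is replaced by the Gaussian smoothing $f_\gamma(x) = \mathbb{E}[f(x + \gamma Z)]$ with $Z \sim \mathcal{N}(0, I)$, whose gradient admits a sampling-based estimate. The Python sketch below uses an assumed smoothing parameter and sample count and omits the distributed protocol.

```python
import numpy as np

def smoothed_grad(f, x, gamma=0.1, n_samples=100, rng=None):
    """Monte Carlo estimate of the gradient of the Gaussian smoothing
    f_gamma(x) = E[f(x + gamma * Z)], Z ~ N(0, I), via the identity
    grad f_gamma(x) = E[f(x + gamma * Z) * Z] / gamma.
    gamma and n_samples are illustrative choices."""
    rng = np.random.default_rng() if rng is None else rng
    z = rng.standard_normal((n_samples, x.size))
    vals = np.array([f(x + gamma * zi) for zi in z])
    return (vals[:, None] * z).mean(axis=0) / gamma

# Example: f(x) = ||x||_1 is non-smooth, yet its smoothed gradient is
# well defined even at the kink x = 0 (exact value there: the zero vector).
f = lambda x: np.abs(x).sum()
print(smoothed_grad(f, x=np.zeros(3)))
```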
In this paper, we study the optimal convergence rates for distributed convex optimization problems in networks. We model the communication restrictions imposed by the network as a set of affine constraints and provide optimal complexity bounds for four different setups, namely: the function $F(\mathbf{x}) \triangleq \sum_{i=1}^{m} f_i(\mathbf{x})$ is strongly convex and smooth, strongly convex only, smooth only, or just convex. Our results show that Nesterov's accelerated gradient descent on the dual problem can be executed in a distributed manner and obtains the same optimal rates as in the centralized version of the problem (up to constant or logarithmic factors), with an additional cost related to the spectral gap of the interaction matrix. Finally, we discuss some extensions of the proposed setup, such as proximal-friendly functions, time-varying graphs, and improvements in the condition numbers.
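The centralized building block, Nesterov's accelerated gradient descent for a smooth and strongly convex function, is sketched below in Python with the standard constant momentum determined by the condition number; the dual, distributed execution discussed above is not reproduced.

```python
import numpy as np

def nesterov_agd(grad, x0, L, mu, iters=200):
    """Nesterov's accelerated gradient descent for an L-smooth,
    mu-strongly convex function, using the constant momentum
    (sqrt(L) - sqrt(mu)) / (sqrt(L) + sqrt(mu)). Minimal illustration of
    the centralized method that the paper runs on the dual problem."""
    beta = (np.sqrt(L) - np.sqrt(mu)) / (np.sqrt(L) + np.sqrt(mu))
    x = y = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x_next = y - grad(y) / L   # gradient step from the extrapolated point
        y = x_next + beta * (x_next - x)  # momentum extrapolation
        x = x_next
    return x

# Example: quadratic f(x) = 0.5 * x^T D x with D = diag(1, ..., 10),
# so L = 10 and mu = 1.
D = np.arange(1.0, 11.0)
x_star = nesterov_agd(grad=lambda x: D * x, x0=np.ones(10), L=10.0, mu=1.0)
print(np.linalg.norm(x_star))  # close to 0, the unique minimizer
```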