An interesting thread in the research on Boolean functions for cryptography and coding theory is the study of secondary constructions: given a known function with a good cryptographic profile, the aim is to extend it to a (usually larger) function with analogous properties. In this work, we continue the investigation of a secondary construction based on cellular automata (CA), focusing on the classes of bent and semi-bent functions. We prove that our construction preserves the algebraic degree of the local rule, and we narrow our attention to the subclass of quadratic functions, performing several experiments based on exhaustive combinatorial search and heuristic optimization through Evolutionary Strategies (ES). Finally, we classify the obtained results up to permutation equivalence, remarking that the number of equivalence classes that our CA-XOR construction can successfully extend grows very quickly with respect to the CA diameter.
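As a rough illustration of the cellular-automaton viewpoint only (this is not the CA-XOR construction itself, whose definition is not reproduced here; the function names and the example rule below are assumptions), a no-boundary CA evaluates a Boolean local rule of diameter d on every length-d window of the input cells:

def ca_global_rule(local_rule, cells, diameter):
    # No-boundary CA: slide a window of length `diameter` over the cells
    # and apply the Boolean local rule to each window.
    return [local_rule(tuple(cells[i:i + diameter]))
            for i in range(len(cells) - diameter + 1)]

# Example local rule of diameter 3: the XOR of the three cells in the window.
rule = lambda w: w[0] ^ w[1] ^ w[2]
print(ca_global_rule(rule, [1, 0, 1, 1, 0], 3))  # -> [0, 0, 0]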
We propose a measure of product substitutability based on correlation of common purchases, which is fast to compute and easy to interpret. In an empirical study of a drugstore retail chain, we demonstrate its properties, compare it to a similarly simple measure of product complementarity, and use it to find small clusters of substitutes.
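Purely as an illustration of how such a co-purchase correlation could be computed (the basket-item incidence matrix, the use of the Pearson/phi coefficient, and all names below are assumptions made for this sketch, not necessarily the exact definition used in the study):

import numpy as np

def copurchase_correlation(baskets, items):
    # Binary basket-by-item incidence matrix: X[b, j] = 1 if item j is in basket b.
    X = np.array([[1.0 if item in basket else 0.0 for item in items]
                  for basket in baskets])
    # Pearson correlation between the purchase-indicator columns; under the usual
    # intuition, substitutes are rarely bought together and correlate negatively.
    return np.corrcoef(X, rowvar=False)

corr = copurchase_correlation(
    baskets=[{"shampoo_A"}, {"shampoo_B"}, {"shampoo_A", "soap"}, {"shampoo_B", "soap"}],
    items=["shampoo_A", "shampoo_B", "soap"],
)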
Several div-conforming and divdiv-conforming finite elements for symmetric tensors on simplexes in arbitrary dimension are constructed in this work. The shape function space is first split into the trace space and the bubble space. The latter is further decomposed into the null space of the differential operator and its orthogonal complement. Instead of characterizing these subspaces of the shape function space, we provide characterizations of the dual spaces. Vector div-conforming finite elements are first constructed as an introductory example. Then new symmetric div-conforming finite elements are constructed. The dual subspaces are then used as building blocks to construct divdiv-conforming finite elements.
Isogeometric Analysis generalizes classical finite element analysis and intends to integrate it with the field of Computer-Aided Design. A central problem in achieving this objective is the reconstruction of analysis-suitable models from Computer-Aided Design models, which is in general a non-trivial and time-consuming task. In this article, we present a novel spline construction that enables model reconstruction as well as simulation of high-order PDEs on the reconstructed models. The proposed almost-$C^1$ splines are biquadratic splines on fully unstructured quadrilateral meshes (without restrictions on the placement or number of extraordinary vertices). They are $C^1$ smooth almost everywhere, that is, at all vertices and across most edges, and in addition almost (i.e. approximately) $C^1$ smooth across all other edges. Thus, the splines form $H^2$-nonconforming analysis-suitable discretization spaces. This is the lowest-degree unstructured spline construction that can be used to solve fourth-order problems. The associated spline basis is non-singular and has several B-spline-like properties (e.g., partition of unity, non-negativity, local support). The almost-$C^1$ splines are described in an explicit B\'ezier-extraction-based framework that can be easily implemented. Numerical tests suggest that the basis is well-conditioned and exhibits optimal approximation behavior.
We propose several heuristics for mitigating one of the main causes of combinatorial explosion in rank-based complementation of B\"{u}chi automata (BAs): unnecessarily high bounds on the ranks of states. First, we identify elevator automata, a large class of BAs (generalizing semi-deterministic BAs) occurring often in practice, in which the ranks of states are bounded according to the structure of strongly connected components. The bounds for elevator automata also carry over to general BAs that contain elevator automata as a sub-structure. Second, we introduce two techniques for refining bounds on the ranks of BA states using data-flow analysis of the automaton. We implement our techniques as an extension of the tool Ranker for BA complementation and show that they indeed greatly prune the generated state space, obtaining significantly better results and outperforming other state-of-the-art tools on a large set of benchmarks.
The quadratic Wasserstein metric has shown its power in measuring the difference between probability densities: it gives the optimization objective function better convexity and is insensitive to data noise. Nevertheless, an important question is how to make seismic signals suitable for comparison under the quadratic Wasserstein metric. The squaring scaling is worth exploring since it guarantees convexity with respect to data shifts. However, as mentioned in [Commun. Inf. Syst., 2019, 19:95-145], the squaring scaling may lose uniqueness and introduce more local minima into the misfit function. In our previous work [J. Comput. Phys., 2018, 373:188-209], the quadratic Wasserstein metric with squaring scaling was successfully applied to the earthquake location problem, but only for an inverse problem with few degrees of freedom. In this work, we present a more in-depth study of the combination of the squaring scaling technique and the quadratic Wasserstein metric. By discarding some inapplicable data, picking seismic phases, and developing a new normalization method, we successfully invert for the seismic velocity structure based on the squaring scaling technique and the quadratic Wasserstein metric. The numerical experiments suggest that this newly proposed method is an efficient approach to obtaining more accurate inversion results.
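For concreteness, a minimal one-dimensional sketch of comparing two sampled signals with the quadratic Wasserstein metric after squaring scaling (the uniform time grid, the simple mass normalization, and all names below are illustrative assumptions; this is not the normalization method proposed in this work):

import numpy as np

def w2_squared_1d(f, g, t):
    # Squaring scaling: make the signals non-negative, then normalize to unit mass.
    dt = t[1] - t[0]                      # assumes a uniform time grid
    p = f**2; p = p / (p.sum() * dt)
    q = g**2; q = q / (q.sum() * dt)
    # Approximate cumulative distribution functions on the common grid t.
    P = np.cumsum(p) * dt
    Q = np.cumsum(q) * dt
    # In 1D, W2^2 equals the squared L2 distance between the quantile functions
    # (assumes P and Q are strictly increasing so the interpolation is well defined).
    s = np.linspace(1e-6, 1.0 - 1e-6, 2000)
    ds = s[1] - s[0]
    return np.sum((np.interp(s, P, t) - np.interp(s, Q, t))**2) * ds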
Functional constrained optimization is becoming more and more important in machine learning and operations research. Such problems have potential applications in risk-averse machine learning, semi-supervised learning, and robust optimization, among others. In this paper, we first present a novel Constraint Extrapolation (ConEx) method for solving convex functional constrained problems, which utilizes linear approximations of the constraint functions to define the extrapolation (or acceleration) step. We show that this method is a unified algorithm that achieves the best-known rate of convergence for solving different functional constrained convex composite problems, including convex or strongly convex, and smooth or nonsmooth problems with a stochastic objective and/or stochastic constraints. Many of these rates of convergence are in fact obtained for the first time in the literature. In addition, ConEx is a single-loop algorithm that does not involve any penalty subproblems. In contrast to existing primal-dual methods, it does not require the projection of Lagrange multipliers onto a (possibly unknown) bounded set. Second, for nonconvex functional constrained problems, we introduce a new proximal point method that transforms the initial nonconvex problem into a sequence of convex problems by adding quadratic terms to both the objective and the constraints. Under a certain MFCQ-type assumption, we establish the convergence and rate of convergence of this method to KKT points when the convex subproblems are solved exactly or inexactly. For large-scale and stochastic problems, we present a more practical proximal point method in which the approximate solutions of the subproblems are computed by the aforementioned ConEx method. To the best of our knowledge, most of these convergence and complexity results of the proximal point method for nonconvex problems are also new in the literature.
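As a schematic illustration of the proximal point idea for the nonconvex case (the common proximal parameter $\rho$ and the exact form of the subproblem below are assumptions made for illustration, not the precise scheme analyzed in the paper), each iterate could be obtained from a convexified subproblem of the form
\[
x_{k+1} \approx \operatorname*{arg\,min}_{x \in X} \; f(x) + \rho \|x - x_k\|^2
\quad \text{s.t.} \quad g_i(x) + \rho \|x - x_k\|^2 \le 0, \quad i = 1, \dots, m,
\]
where adding the quadratic terms to both the objective $f$ and the constraints $g_i$ renders the subproblem convex for a sufficiently large $\rho$ when $f$ and the $g_i$ are weakly convex; in the large-scale and stochastic variant, each such subproblem would be solved approximately by ConEx.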
We consider the problem of finding nearly optimal solutions of optimization problems with random objective functions. Two concrete problems we consider are (a) optimizing the Hamiltonian of a spherical or Ising $p$-spin glass model, and (b) finding a large independent set in a sparse Erd\H{o}s-R\'{e}nyi graph. The following families of algorithms are considered: (a) low-degree polynomials of the input; (b) low-depth Boolean circuits; (c) the Langevin dynamics algorithm. We show that these families of algorithms fail to produce nearly optimal solutions with high probability. For the case of Boolean circuits, our results improve the state-of-the-art bounds known in circuit complexity theory (although we consider the search problem as opposed to the decision problem). Our proof uses the fact that these models are known to exhibit a variant of the overlap gap property (OGP) of near-optimal solutions. Specifically, for both models, every two solutions whose objectives are above a certain threshold are either close to or far from each other. The crux of our proof is that the classes of algorithms we consider exhibit a form of stability. We show by an interpolation argument that stable algorithms cannot overcome the OGP barrier. The stability of Langevin dynamics is an immediate consequence of the well-posedness of stochastic differential equations. The stability of low-degree polynomials and Boolean circuits is established using tools from Gaussian and Boolean analysis -- namely hypercontractivity and total influence, as well as a novel lower bound for random walks avoiding certain subsets. In the case of Boolean circuits, the result also makes use of the classical Linial-Mansour-Nisan theorem. Our techniques apply more broadly to low-influence functions and may extend to other settings.
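To make the overlap gap property referenced above concrete (the objective threshold $\mu$ and the overlap parameters $0 \le \nu_1 < \nu_2 \le 1$ below are generic placeholders rather than the constants established for these specific models), a hedged formulation reads
\[
H(\sigma_1) \ge \mu \ \text{and} \ H(\sigma_2) \ge \mu
\quad \Longrightarrow \quad
\frac{|\langle \sigma_1, \sigma_2 \rangle|}{n} \in [0, \nu_1] \cup [\nu_2, 1],
\]
i.e., any two near-optimal solutions are either close to each other or far apart, with no pairs at intermediate overlap; roughly speaking, the interpolation argument shows that a sufficiently stable algorithm would be forced to produce such a forbidden intermediate-overlap pair.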
This paper investigates achievability for interference channel coding. It is clarified that the rate-splitting technique is unnecessary to achieve the Han-Kobayashi and Jian-Xin-Garg inner regions. Codes are constructed by using sparse matrices (with logarithmic column degree) and constrained-random-number generators. By extending the problem, we establish a possible extension of the known inner regions.
In this paper we propose a method that estimates the continuous $SE(3)$ trajectories (orientation and translation) of the dynamic rigid objects present in a scene from multiple RGB-D views. Specifically, we fit the object trajectories to cumulative B-spline curves, which allow us to interpolate, at any intermediate time stamp, not only their poses but also their linear and angular velocities and accelerations. Additionally, we derive in this work the analytical $SE(3)$ Jacobians needed by the optimization, which are applicable to any other approach that uses this type of curve. To the best of our knowledge, this is the first work that proposes 6-DoF continuous-time object tracking, and our analytical derivations yield a significant reduction in computational cost. We evaluate our proposal on synthetic data and on a public benchmark, showing competitive results in localization and significant improvements in velocity estimation in comparison to discrete-time approaches.
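For reference, the standard cumulative cubic B-spline parametrization of an $SE(3)$ trajectory (the notation follows the common continuous-time trajectory formulation and is not necessarily the exact parametrization used in this work) is
\[
T(t) = T_{i-1} \prod_{j=1}^{3} \exp\!\big(\tilde{B}_j(u(t))\, \Omega_{i+j-1}\big),
\qquad
\Omega_k = \log\!\big(T_{k-1}^{-1} T_k\big),
\]
where the $T_k \in SE(3)$ are control poses, the $\tilde{B}_j$ are the cumulative basis functions evaluated at the normalized time $u(t)$, and $\exp$/$\log$ denote the matrix exponential and logarithm on $SE(3)$; linear and angular velocities and accelerations follow by differentiating this expression with respect to $t$, which is where analytical Jacobians become useful.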
We consider the task of learning the parameters of a {\em single} component of a mixture model in the case when we are given {\em side information} about that component; we call this the "search problem" in mixture models. We would like to solve this with computational and sample complexity lower than that of solving the overall original problem, where one learns the parameters of all components. Our main contributions are the development of a simple but general model for the notion of side information, and a corresponding simple matrix-based algorithm for solving the search problem in this general setting. We then specialize this model and algorithm to four common scenarios: Gaussian mixture models, LDA topic models, subspace clustering, and mixed linear regression. For each of these we show that if (and only if) the side information is informative, we obtain parameter estimates with greater accuracy and improved computational complexity compared to existing moment-based mixture model algorithms (e.g., tensor methods). We also illustrate several natural ways one can obtain such side information for specific problem instances. Our experiments on real data sets (NY Times, Yelp, BSDS500) further demonstrate the practicality of our algorithms, showing significant improvements in runtime and accuracy.