This paper describes a class of shape optimization problems for optical metamaterials composed of periodic microscale inclusions of a dielectric, low-dimensional material suspended in a non-magnetic bulk dielectric. The shape optimization approach is based on a homogenization theory for time-harmonic Maxwell's equations that describes effective material parameters for the propagation of electromagnetic waves through the metamaterial. The control parameter of the optimization is a deformation field representing the deviation of the microscale geometry from a reference configuration of the cell problem. This allows for describing the homogenized effective permittivity tensor as a function of the deformation field. We show that the underlying deformed cell problem is well-posed and regular. This, in turn, proves that the shape optimization problem is well-posed. In addition, a numerical scheme is formulated that utilizes an adjoint formulation with either gradient descent or BFGS as the optimization algorithm. The developed algorithm is tested numerically on a number of prototypical shape optimization problems with a prescribed effective permittivity tensor as the target.
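As a schematic illustration of the optimization loop described above (and not of the paper's actual cell problem), the following sketch minimizes the mismatch to a prescribed target permittivity over a single scalar deformation parameter; `eps_eff` is a hypothetical toy surrogate, and the analytic gradient stands in for the adjoint-based gradient of the full formulation:

```python
import numpy as np

# Hypothetical scalar surrogate for the homogenized effective permittivity
# as a function of a deformation parameter d (NOT the paper's cell problem).
def eps_eff(d):
    return 2.0 + np.sin(d)

eps_target = 2.5  # prescribed effective permittivity (toy target)

def objective(d):
    # squared mismatch to the target
    return 0.5 * (eps_eff(d) - eps_target) ** 2

def gradient(d):
    # analytic gradient; in the paper's setting this comes from an adjoint solve
    return (eps_eff(d) - eps_target) * np.cos(d)

d = 0.0
for _ in range(200):  # plain gradient descent; BFGS is the usual alternative
    d -= 0.5 * gradient(d)
```

After convergence, `eps_eff(d)` matches the target to machine precision; in the full problem, each gradient evaluation requires one forward and one adjoint cell-problem solve.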
This paper studies the fusogenicity of cationic liposomes in relation to their surface distribution of cationic lipids and utilizes membrane phase separation to control this surface distribution. It is found that concentrating the cationic lipids into small surface patches on liposomes, through phase separation, can enhance the liposomes' fusogenicity. Further concentrating these lipids into smaller patches on the surface of liposomes led to an increased level of fusogenicity. These experimental findings are supported by numerical simulations using a mathematical model for phase-separated charged liposomes. Findings of this study may be used for the design and development of highly fusogenic liposomes with a minimal level of toxicity.
We propose a generalization of nonlinear stability of numerical one-step integrators to Riemannian manifolds in the spirit of Butcher's notion of B-stability. Taking inspiration from Simpson-Porco and Bullo, we introduce non-expansive systems on such manifolds and define B-stability of integrators. In this first exposition, we provide concrete results for a geodesic version of the Implicit Euler (GIE) scheme. We prove that the GIE method is B-stable on Riemannian manifolds with non-positive sectional curvature. We show through numerical examples that the GIE method is expansive when applied to a certain non-expansive vector field on the 2-sphere, and that the GIE method does not necessarily possess a unique solution for large enough step sizes. Finally, we derive a new improved global error estimate for general Lie group integrators.
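To make the geodesic implicit Euler idea concrete, here is a minimal sketch on the unit 2-sphere: one implicit step is solved by fixed-point iteration through the sphere's exponential map. The helper names (`sphere_exp`, `project_tangent`) are illustrative, and the crude projection-based transport of the vector field into the tangent space at the current point is a simplification, not the paper's precise GIE scheme:

```python
import numpy as np

def sphere_exp(x, v):
    # exponential map on the unit 2-sphere: geodesic from x with velocity v
    n = np.linalg.norm(v)
    if n < 1e-14:
        return x
    return np.cos(n) * x + np.sin(n) * v / n

def project_tangent(x, w):
    # orthogonal projection of w onto the tangent space at x
    return w - np.dot(x, w) * x

def gie_step(f, x, h, iters=50):
    """One implicit Euler-type step y = exp_x(h * v(y)), solved by
    fixed-point iteration; v(y) is f(y) crudely transported to T_x."""
    y = x
    for _ in range(iters):
        v = project_tangent(x, f(y))
        y = sphere_exp(x, h * v)
    return y

# example: rotation field f(x) = a x x, which is non-expansive on the sphere
a = np.array([0.0, 0.0, 1.0])
f = lambda x: np.cross(a, x)
x0 = np.array([1.0, 0.0, 0.0])
x1 = gie_step(f, x0, 0.1)
```

By construction the step stays exactly on the sphere, and for this rotation field the iterate remains on the equator, rotating by roughly the step size.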
Discrete latent space models have recently achieved performance on par with their continuous counterparts in deep variational inference. While they still face various implementation challenges, these models offer the opportunity for a better interpretation of latent spaces, as well as a more direct representation of naturally discrete phenomena. Most recent approaches propose to separately train very high-dimensional prior models on the discrete latent data, which is a challenging task in its own right. In this paper, we introduce a latent data model where the discrete state is a Markov chain, which allows fast end-to-end training. The performance of our generative model is assessed on a building management dataset and on the publicly available Electricity Transformer Dataset.
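The Markov chain prior over discrete latent states can be sketched as follows; the two-state transition matrix and function name are purely illustrative, not the paper's fitted model:

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_latent_chain(pi0, P, T):
    """Sample a length-T discrete latent trajectory z_1..z_T from a Markov
    chain prior with initial distribution pi0 and transition matrix P."""
    K = len(pi0)
    z = [rng.choice(K, p=pi0)]
    for _ in range(T - 1):
        z.append(rng.choice(K, p=P[z[-1]]))  # next state depends on current
    return np.array(z)

pi0 = np.array([0.5, 0.5])
P = np.array([[0.9, 0.1],
              [0.1, 0.9]])  # sticky two-state chain (illustrative numbers)
z = sample_latent_chain(pi0, P, T=50)
```

Because the prior factorizes over one-step transitions, its log-likelihood is a sum of `log P[z_t, z_{t+1}]` terms, which is what makes end-to-end training tractable.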
Uniform error estimates of a bi-fidelity method for a kinetic-fluid coupled model with random initial inputs in the fine particle regime are proved in this paper. Such a model is a system coupling the incompressible Navier-Stokes equations to the Vlasov-Fokker-Planck equations for a mixture of flows with distinct particle sizes. The main analytic tool is the hypocoercivity analysis for the multi-phase Navier-Stokes-Vlasov-Fokker-Planck system with uncertainties, considering solutions in a perturbative setting near the global equilibrium. This allows us to obtain the error estimates in both kinetic and hydrodynamic regimes.
We study the problem of enumerating Tarski fixed points, focusing on the relational lattices of equivalences, quasiorders and binary relations. We present a polynomial space enumeration algorithm for Tarski fixed points on these lattices and other lattices of polynomial height. It achieves polynomial delay when enumerating fixed points of increasing isotone maps on all three lattices, as well as decreasing isotone maps on the lattice of binary relations. For the remaining cases on the three relational lattices, in which the enumeration algorithm does not guarantee polynomial delay, we prove exponential lower bounds for deciding the existence of three fixed points when the isotone map is given as an oracle, and we show that it is NP-hard to find three or more Tarski fixed points. More generally, we show that any deterministic or bounded-error randomized algorithm must perform a number of queries asymptotically at least as large as the lattice width to decide the existence of three fixed points when the isotone map is given as an oracle. Finally, we demonstrate that our findings yield a polynomial delay and space algorithm for listing bisimulations and instances of some related models of behavioral or role equivalence.
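The basic Knaster-Tarski iteration underlying such enumeration can be sketched on the lattice of binary relations (ordered by inclusion): iterating an isotone map from the bottom element reaches the least fixed point in at most lattice-height many steps. The concrete map below, which adds base edges and one composition step, is an illustrative example whose least fixed point is a transitive closure, not one of the paper's algorithms:

```python
def least_fixed_point(F, bottom):
    """Iterate an isotone map from the lattice bottom; on a lattice of
    polynomial height this reaches the least Tarski fixed point quickly."""
    R = bottom
    while True:
        R_next = F(R)
        if R_next == R:
            return R
        R = R_next

# lattice element: a binary relation on {0, 1, 2} as a frozenset of pairs
base = {(0, 1), (1, 2)}

def F(R):
    # isotone map: keep base edges and compose R with itself (transitive step)
    comp = {(a, d) for (a, b) in R for (c, d) in R if b == c}
    return frozenset(R) | base | comp

lfp = least_fixed_point(F, frozenset())
# lfp is the transitive closure of base: {(0,1), (1,2), (0,2)}
```

Enumerating *all* fixed points, rather than just the least one, is exactly where the delay and hardness results above come into play.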
We develop the no-propagate algorithm for sampling the linear response of random dynamical systems, which are non-uniformly hyperbolic deterministic systems perturbed by noise with smooth density. We first derive a Monte Carlo-type formula and then the algorithm, which is different from the ensemble (stochastic gradient) algorithms, finite-element algorithms, and fast-response algorithms; it does not involve the propagation of vectors or covectors, and only the density of the noise is differentiated, so the formula is not cursed by gradient explosion, dimensionality, or non-hyperbolicity. We demonstrate our algorithm on a tent map perturbed by noise and a chaotic neural network with 51 layers $\times$ 9 neurons. By itself, this algorithm approximates the linear response of non-hyperbolic deterministic systems, with an additional error proportional to the noise. We also discuss the potential of using this algorithm as part of a larger algorithm with smaller error.
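The key point that only the noise density is differentiated can be illustrated with a generic likelihood-ratio (score-function) estimator for a noisy tent map; this one-step sketch uses standard identities and is not the paper's full multi-step formula:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 0.1  # noise scale

def tent(x, gamma):
    # tent map of height gamma on [0, 1]
    return gamma * np.minimum(x, 1.0 - x)

def one_step_response(x0, gamma, n=200_000):
    """Likelihood-ratio estimate of d/dgamma E[Phi(x1)] for
    x1 = tent(x0, gamma) + sigma * xi, xi ~ N(0, 1), with Phi(x) = x^2.
    Only the Gaussian transition density is differentiated:
    d/dgamma log p(x1 | x0) = (x1 - tent(x0, gamma)) * (d tent/d gamma) / sigma^2."""
    noise = sigma * rng.standard_normal(n)
    x1 = tent(x0, gamma) + noise
    dfdg = np.minimum(x0, 1.0 - x0)   # d tent / d gamma at x0
    score = noise * dfdg / sigma**2   # no propagation of tangent vectors
    return np.mean(x1**2 * score)

est = one_step_response(x0=0.3, gamma=1.8)
# exact value: d/dgamma (tent(x0)^2 + sigma^2) = 2 * tent(x0) * 0.3 = 0.324
```

Note that the map itself is only evaluated, never differentiated through compositions, which is what avoids gradient explosion along chaotic trajectories.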
We provide a non-unit disk framework to solve combinatorial optimization problems such as Maximum Cut (Max-Cut) and Maximum Independent Set (MIS) on a Rydberg quantum annealer. Our setup consists of a many-body interacting Rydberg system where locally controllable light shifts are applied to individual qubits in order to map the graph problem onto the Ising spin model. Exploiting the flexibility that optical tweezers offer in terms of spatial arrangement, our numerical simulations implement the local-detuning protocol while globally driving the Rydberg annealer to the desired many-body ground state, which is also the solution to the optimization problem. Using optimal control methods, these solutions are obtained for prototype graphs with varying sizes at time scales well within the system lifetime and with approximation ratios close to one. The non-blockade approach facilitates the encoding of graph problems with specific topologies that can be realized in two-dimensional Rydberg configurations and is applicable to both unweighted and weighted graphs. A comparative analysis with fast simulated annealing is provided that highlights the advantages of our scheme in terms of system size, hardness of the graph, and the number of iterations required to converge to the solution.
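The Max-Cut-to-Ising mapping at the heart of such annealing schemes can be sketched classically: a ground state of the antiferromagnetic Ising Hamiltonian anti-aligns as much edge weight as possible and therefore encodes a maximum cut. The brute-force search below is a classical stand-in for the annealer, shown on a 4-cycle:

```python
import itertools

def ising_energy(spins, edges, weights=None):
    """Ising energy H = sum over edges (i,j) of w_ij * s_i * s_j; a ground
    state anti-aligns as much edge weight as possible, i.e. maximizes the cut."""
    if weights is None:
        weights = [1.0] * len(edges)
    return sum(w * spins[i] * spins[j] for (i, j), w in zip(edges, weights))

def brute_force_ground_state(n, edges):
    # classical stand-in for the annealer: exhaustive search over spin states
    return min(itertools.product([-1, 1], repeat=n),
               key=lambda s: ising_energy(s, edges))

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]   # 4-cycle graph
s = brute_force_ground_state(4, edges)
cut_value = sum(1 for (i, j) in edges if s[i] != s[j])  # maximum cut = 4
```

On the Rydberg platform, the local light shifts play the role of the on-site fields needed for weighted instances, while the drive steers the system toward this ground state.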
We investigate non-wellfounded proof systems based on parsimonious logic, a weaker variant of linear logic where the exponential modality ! is interpreted as a constructor for streams over finite data. Logical consistency is maintained at a global level by adapting a standard progressing criterion. We present an infinitary version of cut-elimination based on finite approximations, and we prove that, in the presence of the progressing criterion, it returns well-defined non-wellfounded proofs at its limit. Furthermore, we show that cut-elimination preserves the progressing criterion and various regularity conditions internalizing degrees of proof-theoretical uniformity. Finally, we provide a denotational semantics for our systems based on the relational model.
Bayesian Optimization (BO) links Gaussian Process (GP) surrogates with sequential design toward optimizing expensive-to-evaluate black-box functions. Example design heuristics, or so-called acquisition functions, like expected improvement (EI), balance exploration and exploitation to furnish global solutions under stringent evaluation budgets. However, they fall short when solving for robust optima, meaning a preference for solutions in a wider domain of attraction. Robust solutions are useful when inputs are imprecisely specified, or where a series of solutions is desired. A common mathematical programming technique in such settings involves an adversarial objective, biasing a local solver away from ``sharp'' troughs. Here we propose a surrogate modeling and active learning technique called robust expected improvement (REI) that ports adversarial methodology into the BO/GP framework. After describing the methods, we illustrate and draw comparisons to several competitors on benchmark synthetic exercises and real problems of varying complexity.
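For reference, the standard EI acquisition that REI builds on has a closed form in terms of the GP posterior mean and standard deviation at a candidate point; this sketch states it for minimization (the function name and argument convention are ours):

```python
import math

def expected_improvement(mu, sigma, f_best):
    """Closed-form EI for minimization: with z = (f_best - mu) / sigma,
    EI = (f_best - mu) * Phi(z) + sigma * phi(z), where Phi and phi are the
    standard normal CDF and PDF. The first term rewards exploitation
    (low posterior mean), the second exploration (high posterior variance)."""
    if sigma <= 0.0:
        return max(f_best - mu, 0.0)
    z = (f_best - mu) / sigma
    Phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    phi = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    return (f_best - mu) * Phi + sigma * phi
```

The robust variant replaces the plain posterior at a point with an adversarial objective over a neighborhood, biasing the search away from sharp troughs.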
Hashing has been widely used in approximate nearest neighbor search for large-scale database retrieval because of its computation and storage efficiency. Deep hashing, which devises convolutional neural network architectures to exploit and extract the semantic information or features of images, has received increasing attention recently. In this survey, several deep supervised hashing methods for image retrieval are evaluated, and I identify three main directions for deep supervised hashing methods. Several comments are made at the end. Moreover, to break through the bottleneck of existing hashing methods, I propose a Shadow Recurrent Hashing (SRH) method as an attempt. Specifically, I devise a CNN architecture to extract the semantic features of images and design a loss function that encourages similar images to be projected close to each other. To this end, I propose a concept: the shadow of the CNN output. During the optimization process, the CNN output and its shadow guide each other so as to approach the optimal solution as closely as possible. Several experiments on the CIFAR-10 dataset show the satisfactory performance of SRH.
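The retrieval side common to all such hashing methods can be sketched in a few lines: real-valued network features are binarized by sign, and candidates are ranked by Hamming distance. The feature values below are made-up stand-ins for CNN outputs, not SRH's actual codes:

```python
import numpy as np

def binarize(features):
    # sign-based hashing: one bit per feature dimension
    return (features > 0).astype(np.uint8)

def hamming_rank(query_code, db_codes):
    """Rank database items by Hamming distance to the query's binary code."""
    dists = np.count_nonzero(db_codes != query_code, axis=1)
    order = np.argsort(dists, kind="stable")
    return order, dists[order]

# made-up feature vectors standing in for CNN outputs
db = binarize(np.array([[ 0.9, -0.2,  0.4,  0.7],
                        [-0.1, -0.5,  0.3,  0.2],
                        [-0.8,  0.6, -0.4, -0.3]]))
query = binarize(np.array([0.5, -0.3, 0.2, 0.1]))
order, dists = hamming_rank(query, db)
# order ranks item 0 first (distance 0), then 1 (distance 1), then 2 (distance 4)
```

Because Hamming distances reduce to XOR and popcount on packed bits, this ranking is what makes hashing-based retrieval cheap at database scale.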