
We implement Genetic Algorithms for triangulations of four-dimensional reflexive polytopes which induce Calabi-Yau threefold hypersurfaces via Batyrev's construction. We demonstrate that such algorithms efficiently optimize physical observables such as axion decay constants or axion-photon couplings in string theory compactifications. For our implementation, we choose a parameterization of triangulations that yields homotopy inequivalent Calabi-Yau threefolds by extending fine, regular triangulations of two-faces, thereby eliminating exponentially large redundancy factors in the map from polytope triangulations to Calabi-Yau hypersurfaces. In particular, we discuss how this encoding renders the entire Kreuzer-Skarke list amenable to a variety of optimization strategies, including but not limited to Genetic Algorithms. To achieve optimal performance, we tune the hyperparameters of our Genetic Algorithm using Bayesian optimization. We find that our implementation vastly outperforms other sampling and optimization strategies like Markov Chain Monte Carlo or Simulated Annealing. Finally, we showcase that our Genetic Algorithm efficiently performs optimization even for the maximal polytope with Hodge numbers $h^{1,1} = 491$, where we use it to maximize axion-photon couplings.
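To make the optimization loop concrete, here is a minimal genetic-algorithm sketch in Python. The bitstring encoding, population size, and `fitness` function are hypothetical stand-ins, not the paper's implementation: in the setting above, a genome would encode the triangulation choices on two-faces and the fitness would be a physical observable such as an axion-photon coupling.

```python
# A minimal genetic-algorithm sketch. The genome encoding and fitness() are
# hypothetical placeholders; the paper's fitness would map a genome to a
# triangulation and evaluate a physical observable on the hypersurface.
import random

GENOME_LEN = 64       # hypothetical: one gene per two-face triangulation choice
POP_SIZE = 100
GENERATIONS = 200
MUT_RATE = 0.02

def fitness(genome):
    # Stand-in objective: maximize the number of 1-bits.
    return sum(genome)

def crossover(a, b):
    # Single-point crossover at a random cut.
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

def mutate(g):
    # Flip each bit independently with probability MUT_RATE.
    return [bit ^ 1 if random.random() < MUT_RATE else bit for bit in g]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[:POP_SIZE // 2]           # truncation selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

print("best fitness:", fitness(max(population, key=fitness)))
```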

Related Content

Metropolis-Hastings estimates intractable expectations; can differentiating the algorithm estimate their gradients? The challenge is that Metropolis-Hastings trajectories are not conventionally differentiable due to the discrete accept/reject steps. Using a technique based on recoupling chains, our method differentiates through the Metropolis-Hastings sampler itself, allowing us to estimate gradients of otherwise intractable expectations with respect to a parameter. Our main contribution is a proof of strong consistency and a central limit theorem for our estimator under assumptions that hold in common Bayesian inference problems. The proofs augment the sampler chain with latent information and formulate the estimator as a stopping tail functional of this augmented chain. We demonstrate our method on examples of Bayesian sensitivity analysis and of optimizing a random walk Metropolis proposal.
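For reference, this is the kind of sampler the method differentiates through: a plain random-walk Metropolis chain (a sketch, not the paper's estimator). The proposal step size `theta` plays the role of the parameter one would like gradients with respect to; the accept/reject branch is what breaks conventional differentiability.

```python
# Plain random-walk Metropolis sampler, the non-differentiable object the
# recoupling technique differentiates through. Sketch only.
import numpy as np

def log_target(x):
    return -0.5 * x**2   # standard normal, for illustration

def metropolis(theta, n_steps=10_000, seed=0):
    rng = np.random.default_rng(seed)
    x, chain = 0.0, []
    for _ in range(n_steps):
        prop = x + theta * rng.standard_normal()
        # Discrete accept/reject step: this is why the trajectory is not
        # conventionally differentiable with respect to theta.
        if np.log(rng.uniform()) < log_target(prop) - log_target(x):
            x = prop
        chain.append(x)
    return np.array(chain)

samples = metropolis(theta=2.4)
print("estimated E[x^2]:", np.mean(samples**2))   # intractable in general
```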

Efficient computation of the optimal transport distance between two distributions serves as an algorithmic subroutine that empowers various applications. This paper develops a scalable first-order optimization-based method that computes optimal transport to within $\varepsilon$ additive accuracy with runtime $\widetilde{O}( n^2/\varepsilon)$, where $n$ denotes the dimension of the probability distributions of interest. Our algorithm achieves the state-of-the-art computational guarantees among all first-order methods, while exhibiting favorable numerical performance compared to classical algorithms like Sinkhorn and Greenkhorn. Underlying our algorithm designs are two key elements: (a) converting the original problem into a bilinear minimax problem over probability distributions; (b) exploiting the extragradient idea, in conjunction with entropy regularization and adaptive learning rates, to accelerate convergence.
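As a point of comparison, here is the classical Sinkhorn iteration the paper benchmarks against (a standard textbook sketch, not the paper's extragradient method); `mu` and `nu` are the two probability vectors and `C` the cost matrix.

```python
# Classical Sinkhorn iteration for entropy-regularized optimal transport.
# This is the baseline method, not the extragradient algorithm above.
import numpy as np

def sinkhorn(mu, nu, C, reg=0.05, n_iters=500):
    K = np.exp(-C / reg)                  # Gibbs kernel
    u = np.ones_like(mu)
    for _ in range(n_iters):              # alternating marginal projections
        v = nu / (K.T @ u)
        u = mu / (K @ v)
    P = u[:, None] * K * v[None, :]       # transport plan
    return np.sum(P * C)                  # regularized OT cost

n = 50
rng = np.random.default_rng(0)
x, y = np.sort(rng.uniform(size=n)), np.sort(rng.uniform(size=n))
C = (x[:, None] - y[None, :])**2          # squared-distance cost
mu = nu = np.full(n, 1.0 / n)
print("approx OT cost:", sinkhorn(mu, nu, C))
```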

We consider the method of mappings for performing shape optimization for unsteady fluid-structure interaction (FSI) problems. In this work, we focus on the numerical implementation. We model the optimization problem such that it takes several theoretical results into account, such as regularity requirements on the transformations and a differential geometrical point of view on the manifold of shapes. Moreover, we discretize the problem such that we can compute exact discrete gradients. This allows for the use of general purpose optimization solvers. We focus on an FSI benchmark problem to validate our numerical implementation. The method is used to optimize parts of the outer boundary and the interface. The numerical simulations build on FEniCS, dolfin-adjoint and IPOPT. Moreover, as an additional theoretical result, we show that for a linear special case the adjoint attains the same structure as the forward problem but reverses the temporal flow of information.
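The "exact discrete gradients" point can be illustrated on a deliberately tiny toy problem, far simpler than the FSI setting and entirely hypothetical: once the discretized objective and its exact gradient are coded, a general-purpose solver such as SciPy's L-BFGS-B applies off the shelf.

```python
# Toy analogue (not the paper's method of mappings): optimize the radii of a
# star-shaped polygon so its area matches a target, using the exact gradient
# of the discretized objective with a general-purpose solver.
import numpy as np
from scipy.optimize import minimize

n = 60
dtheta = 2 * np.pi / n
A_target = np.pi * 1.5**2                 # target enclosed area

def signed_area(r):
    # Area of the star-shaped polygon with radii r at equally spaced angles.
    return 0.5 * np.sin(dtheta) * np.sum(r * np.roll(r, -1))

def J(r):
    # Area mismatch plus a small smoothing regularizer on the radii.
    return (signed_area(r) - A_target)**2 + 1e-3 * np.sum((r - np.roll(r, -1))**2)

def grad_J(r):
    # Exact gradient of the discrete objective (chain rule, by hand).
    dA = 0.5 * np.sin(dtheta) * (np.roll(r, 1) + np.roll(r, -1))
    smooth = 2e-3 * (2 * r - np.roll(r, 1) - np.roll(r, -1))
    return 2 * (signed_area(r) - A_target) * dA + smooth

res = minimize(J, np.ones(n), jac=grad_J, method="L-BFGS-B")
print("final mean radius:", res.x.mean(), "(target radius ~1.5)")
```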

In symmetric cryptography, maximum distance separable (MDS) matrices with computationally simple inverses have wide applications. Many block ciphers like AES, SQUARE, SHARK, and hash functions like PHOTON use an MDS matrix in the diffusion layer. In this article, we first characterize all $3 \times 3$ irreducible semi-involutory matrices over finite fields of characteristic $2$. Using this characterization, we provide a necessary and sufficient condition for constructing MDS semi-involutory matrices using only their diagonal entries and the entries of an associated diagonal matrix. Finally, we count the number of $3 \times 3$ semi-involutory MDS matrices over any finite field of characteristic $2$.
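For concreteness, the standard MDS test can be sketched over $GF(2^3)$ (with irreducible polynomial $x^3 + x + 1$): a square matrix is MDS precisely when every square submatrix is nonsingular. The example matrix below is illustrative and not taken from the article.

```python
# Minimal MDS test over GF(2^3): a matrix is MDS iff all square submatrices
# are nonsingular. The example matrix is illustrative only.
from itertools import combinations

def gf8_mul(a, b):
    # Carry-less multiply, then reduce modulo x^3 + x + 1 (0b1011).
    r = 0
    for i in range(3):
        if (b >> i) & 1:
            r ^= a << i
    for i in (4, 3):
        if (r >> i) & 1:
            r ^= 0b1011 << (i - 3)
    return r

def det(M):
    # Cofactor expansion; in characteristic 2, no signs are needed.
    n = len(M)
    if n == 1:
        return M[0][0]
    d = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        d ^= gf8_mul(M[0][j], det(minor))
    return d

def is_mds(M):
    n = len(M)
    for k in range(1, n + 1):
        for rows in combinations(range(n), k):
            for cols in combinations(range(n), k):
                if det([[M[i][j] for j in cols] for i in rows]) == 0:
                    return False
    return True

M = [[1, 1, 2], [1, 2, 1], [2, 1, 1]]     # an illustrative 3x3 matrix over GF(8)
print("MDS:", is_mds(M))
```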

We present a new, inductive construction of the Vietoris-Rips complex, in which we take advantage of a small amount of unexploited combinatorial structure in the $k$-skeleton of the complex in order to avoid unnecessary comparisons when identifying its $(k+1)$-simplices. In doing so, we achieve a significant reduction in the number of comparisons required to construct the Vietoris-Rips complex compared to state-of-the-art algorithms, as seen here by examining the computational complexity of the critical step in the algorithms. In experiments comparing a C/C++ implementation of our algorithm to the GUDHI v3.9.0 software package, this results in an observed $5$-$10$-fold improvement in speed on sufficiently sparse Erd\H{o}s-R\'enyi graphs, with the best advantages as the graphs become sparser, as well as for higher dimensional Vietoris-Rips complexes. We further clarify that the algorithm described in Boissonnat and Maria (//doi.org/10.1007/978-3-642-33090-2_63) for the construction of the Vietoris-Rips complex is exactly the Incremental Algorithm from Zomorodian (//doi.org/10.1016/j.cag.2010.03.007), albeit with the additional requirement that the result be stored in a tree structure, and we explain how these techniques differ from the algorithm presented here.
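As background for the comparison, here is a short sketch of Zomorodian's Incremental Algorithm mentioned above, which grows each simplex by intersecting sets of "lower neighbors" in the neighborhood graph; the paper's contribution lies in pruning comparisons beyond this baseline.

```python
# Zomorodian's Incremental Algorithm for the Vietoris-Rips complex (the
# baseline discussed above, not this paper's algorithm).
import random
from itertools import combinations  # not needed by the algorithm itself

def vietoris_rips(n_vertices, edge_list, max_dim):
    edges = {tuple(sorted(e)) for e in edge_list}

    def lower_nbrs(u):
        # Neighbors of u with a smaller index.
        return {v for v in range(u) if (v, u) in edges}

    simplices = []

    def add_cofaces(tau, N):
        simplices.append(tau)
        if len(tau) - 1 >= max_dim:        # dim(tau) = |tau| - 1
            return
        for v in N:
            # Prepend v (smaller than all of tau) and intersect neighbor sets.
            add_cofaces((v,) + tau, N & lower_nbrs(v))

    for u in range(n_vertices):
        add_cofaces((u,), lower_nbrs(u))
    return simplices

random.seed(1)
n = 30
edges = [(i, j) for i in range(n) for j in range(i + 1, n)
         if random.random() < 0.3]          # sparse Erdos-Renyi graph
vr = vietoris_rips(n, edges, max_dim=3)
print(len(vr), "simplices up to dimension 3")
```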

We obtain a new universal approximation theorem for continuous operators on arbitrary Banach spaces using the Leray-Schauder mapping. Moreover, we introduce and study a method for operator learning in Banach spaces $L^p$ of functions with multiple variables, based on orthogonal projections on polynomial bases. We derive a universal approximation result for operators where we learn a linear projection and a finite dimensional mapping under some additional assumptions. For the case of $p=2$, we give some sufficient conditions for the approximation results to hold. This article serves as the theoretical framework for a deep learning methodology whose implementation will be provided in subsequent work.
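A finite-dimensional caricature of the projection idea can be sketched as follows (illustrative only; the basis size, training setup, and the target operator are assumptions, with the target chosen linear so the linear-map ansatz is exact): project inputs and outputs onto a Legendre basis of $L^2([-1,1])$ and fit the map between coefficient vectors by least squares.

```python
# Caricature of projection-based operator learning: Legendre coefficients in,
# Legendre coefficients out, linear map fitted by least squares.
import numpy as np
from numpy.polynomial.legendre import legval, leggauss
from numpy.polynomial.legendre import Legendre

K = 8                                      # number of basis functions
xq, wq = leggauss(64)                      # Gauss-Legendre quadrature nodes

def coeffs(f):
    # L^2 projection: c_k = (2k+1)/2 * integral of f * P_k over [-1, 1].
    return np.array([(2 * k + 1) / 2 * np.sum(wq * f(xq) * legval(xq, np.eye(K)[k]))
                     for k in range(K)])

# Training data for a known linear target operator G: u -> u', chosen only so
# that the linear-map ansatz is exact in coefficient space.
rng = np.random.default_rng(0)
U, V = [], []
for _ in range(200):
    u = Legendre(rng.standard_normal(K))   # random polynomial input
    U.append(coeffs(u))
    V.append(coeffs(u.deriv()))
A, _, _, _ = np.linalg.lstsq(np.array(U), np.array(V), rcond=None)

# A now represents G in coefficient space: coeffs(G u) ~ coeffs(u) @ A.
test = Legendre(rng.standard_normal(K))
print(np.allclose(coeffs(test) @ A, coeffs(test.deriv()), atol=1e-8))
```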

We provide both a theoretical and empirical analysis of the Mean-Median Difference (MM) and Partisan Bias (PB), which are both symmetry metrics intended to detect gerrymandering. We consider vote-share, seat-share pairs $(V, S)$ for which one can construct election data having vote share $V$ and seat share $S$, and turnout is equal in each district. We calculate the range of values that MM and PB can achieve on that constructed election data. In the process, we find the range of vote-share, seat-share pairs $(V, S)$ for which there is constructed election data with vote share $V$, seat share $S$, and $MM=0$, and see that the corresponding range for PB is the same set of $(V,S)$ pairs. We show how the set of such $(V,S)$ pairs allowing for $MM=0$ (and $PB=0$) changes when turnout in each district is allowed to be different. Although the set of $(V,S)$ pairs for which there is election data with $MM=0$ is the same as the set of $(V,S)$ pairs for which there is election data with $PB=0$, the range of possible values for MM and PB on a fixed $(V, S)$ is different. Additionally, for a fixed constructed election outcome, the values of the Mean-Median Difference and Partisan Bias can theoretically be as large as 0.5. We show empirically that these two metric values can differ by as much as 0.33 in US congressional map data. We use both neutral ensemble analysis and the short-burst method to show that neither the Mean-Median Difference nor the Partisan Bias can reliably detect when a districting map has an extreme number of districts won by a particular party. Finally, we give additional empirical and logical arguments in an attempt to explain why other metrics are better at detecting when a districting map has an extreme number of districts won by a particular party.
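Under one common convention with equal turnout in every district (sign conventions and tie handling vary in the literature), the two metrics can be computed from district-level vote shares as follows; the five-district example is made up for illustration.

```python
# Mean-Median Difference (MM) and Partisan Bias (PB) under one common
# convention: equal turnout in every district, from one party's perspective.
import numpy as np

def mean_median(vote_shares):
    v = np.asarray(vote_shares)
    return np.median(v) - np.mean(v)       # MM: median minus mean vote share

def partisan_bias(vote_shares):
    v = np.asarray(vote_shares)
    shifted = v + (0.5 - v.mean())         # uniform swing to a 50% statewide share
    return np.mean(shifted > 0.5) - 0.5    # seat share at V = 0.5, minus one half

districts = [0.43, 0.45, 0.47, 0.61, 0.79]  # illustrative 5-district map
print("MM:", mean_median(districts), "PB:", partisan_bias(districts))
```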

The moments of the coefficients of elliptic curve L-functions are related to numerous arithmetic problems. Rosen and Silverman proved a conjecture of Nagao relating the first moment of one-parameter families satisfying Tate's conjecture to the rank of the corresponding elliptic surface over $\mathbb{Q}(T)$; one can also construct families of moderate rank by finding families with large first moments. Michel proved that if $j(T)$ is not constant, then the second moment of the family is of size $p^2 + O(p^{3/2})$; these two moments show that, for suitably small support, the behavior of zeros near the central point agrees with that of eigenvalues from random matrix ensembles, with the higher moments impacting the rate of convergence. In his thesis, Miller noticed a negative bias in the second moment of every one-parameter family of elliptic curves over the rationals whose second moment had a calculable closed-form expression: specifically, the first lower order term which does not average to zero is on average negative. This Bias Conjecture has been confirmed for many families; however, these are highly non-generic families whose resulting Legendre sums can be determined. Inspired by the recent successes of Yang-Hui He, Kyu-Hwan Lee, Thomas Oliver, Alexey Pozdnyakov and others in investigating murmurations of elliptic curve coefficients with machine learning techniques, we pose a similar problem for understanding the Bias Conjecture. As a start to this program, we numerically investigate the Bias Conjecture for a family whose bias is positive for half the primes. Since the numerics do not offer conclusive evidence that the negative bias for the other half is enough to overwhelm the positive bias, the Bias Conjecture cannot yet be verified for this family.
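A numerical experiment of the kind described can be sketched in a few lines; the family $y^2 = x^3 + Tx + 1$ below is illustrative and not necessarily the family studied here. The second moment is $A_2(p) = \sum_t a_t(p)^2$, computed from Legendre sums, and the lower-order terms $A_2(p) - p^2$ carry the bias.

```python
# Second-moment sketch for an illustrative family y^2 = x^3 + T x + 1.
# Singular fibers are not treated specially in this quick sketch.
from sympy import primerange, legendre_symbol

def a_t(p, t):
    # a_t(p) = -sum_x ((x^3 + t x + 1) / p), via the Legendre symbol.
    return -sum(legendre_symbol((x**3 + t * x + 1) % p, p) for x in range(p))

for p in primerange(5, 40):
    A2 = sum(a_t(p, t)**2 for t in range(p))
    print(p, A2 - p**2)                    # lower-order terms; bias ~ their sign
```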

We are concerned with the arithmetic of solutions to ordinary or partial nonlinear differential equations which are algebraic in the indeterminates and their derivatives. We call these solutions D-algebraic functions, and their equations are algebraic (ordinary or partial) differential equations (ADEs). The general purpose is to find ADEs whose solutions contain specified rational expressions of solutions to given ADEs. For univariate D-algebraic functions, we show how to derive an ADE of smallest possible order. In the multivariate case, we introduce a general algorithm for these computations and derive conclusions on the order bound of the resulting algebraic PDE. Using our accompanying Maple software, we discuss applications in physics, statistics, and symbolic integration.
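A tiny worked instance of the problem (independent of the accompanying Maple software): $f = \tan$ satisfies the ADE $f' = 1 + f^2$, and eliminating $f$ from $g = f^2$, $g' = 2ff' = 2f(1+f^2)$ gives $(g')^2 = 4f^2(1+f^2)^2 = 4g(1+g)^2$, a first-order ADE for the rational expression $g$ that sympy can confirm symbolically.

```python
# Worked D-algebraic example: g = tan(x)^2 satisfies (g')^2 = 4 g (1 + g)^2,
# an ADE of smallest possible order obtained by eliminating f = tan(x).
import sympy as sp

x = sp.symbols('x')
g = sp.tan(x)**2
ade = sp.diff(g, x)**2 - 4 * g * (1 + g)**2
print(sp.simplify(ade))                    # prints 0, so g satisfies the ADE
```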

Residual networks (ResNets) have displayed impressive results in pattern recognition and, recently, have garnered considerable theoretical interest due to a perceived link with neural ordinary differential equations (neural ODEs). This link relies on the convergence of network weights to a smooth function as the number of layers increases. We investigate the properties of weights trained by stochastic gradient descent and their scaling with network depth through detailed numerical experiments. We observe the existence of scaling regimes markedly different from those assumed in the neural ODE literature. Depending on certain features of the network architecture, such as the smoothness of the activation function, one may obtain an alternative ODE limit, a stochastic differential equation, or neither of these. These findings cast doubt on the validity of the neural ODE model as an adequate asymptotic description of deep ResNets and point to an alternative class of differential equations as a better description of the deep network limit.
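An experiment in the spirit described above can be sketched with PyTorch (a hypothetical toy setup, not the paper's protocol): train a deep residual multilayer perceptron and inspect the layer-to-layer weight increments $\|W_{k+1} - W_k\|$, whose scaling with depth determines whether an ODE-like limit is plausible.

```python
# Toy probe of weight smoothness across depth in a trained residual network.
import torch
import torch.nn as nn

depth, width = 64, 32
blocks = nn.ModuleList([nn.Linear(width, width) for _ in range(depth)])

def forward(x):
    for blk in blocks:
        x = x + torch.tanh(blk(x)) / depth   # 1/depth scaling, as in neural ODEs
    return x

opt = torch.optim.Adam(blocks.parameters(), lr=1e-3)
X = torch.randn(256, width)
y = torch.randn(256, width)
for _ in range(500):                          # toy regression task
    opt.zero_grad()
    loss = ((forward(X) - y)**2).mean()
    loss.backward()
    opt.step()

# Layer-to-layer weight increments: their scaling with depth is the quantity
# of interest when asking whether weights converge to a smooth function.
increments = [torch.norm(blocks[k + 1].weight - blocks[k].weight).item()
              for k in range(depth - 1)]
print("max increment:", max(increments))
```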
