
Karppa & Kaski (2019) proposed a novel type of ``broken'' or ``opportunistic'' multiplication algorithm, based on a variant of Strassen's algorithm, and used it to develop new algorithms for Boolean matrix multiplication, among other tasks. For instance, their algorithm computes Boolean matrix multiplication in $O(n^{\log_2(6+6/7)} \log n) = O(n^{2.778})$ time. While asymptotically faster matrix multiplication algorithms exist, most of them are infeasible at problem sizes arising in practice. In this note, we describe an alternate way to use the broken matrix multiplication algorithm to approximately compute matrix multiplication, for either real-valued or Boolean matrices. In brief, instead of running multiple iterations of the broken algorithm on the original input matrix, we form a new, larger matrix by sampling and run a single iteration of the broken algorithm on it. Asymptotically, the resulting algorithm has runtime $O(n^{\frac{3 \log 6}{\log 7}} \log n) \leq O(n^{2.763})$, a slight improvement over Karppa and Kaski's algorithm. Since the goal is to obtain new practical matrix-multiplication algorithms, these asymptotic runtime bounds are not directly useful. We estimate the runtime of our algorithm on some sample problems at the upper limits of practical algorithms; unfortunately, for these parameters, the new algorithm does not appear to be beneficial.
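For orientation, below is a minimal sketch (Python/NumPy) of the standard Strassen recursion for power-of-two matrix sizes. A ``broken'' variant in the spirit of Karppa and Kaski omits one of the seven recursive products, which is where the $6+6/7$ base of the exponent comes from; this sketch shows only the exact recursion, not the authors' randomized compensation scheme.

```python
import numpy as np

def strassen(A, B, leaf=64):
    """One level of Strassen's recursion (square, power-of-two sizes).

    A "broken" variant would drop one of the seven products below
    (e.g. M7), trading exactness for speed; the full broken algorithm
    must then compensate for the missing term.
    """
    n = A.shape[0]
    if n <= leaf:
        return A @ B
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    M1 = strassen(A11 + A22, B11 + B22, leaf)
    M2 = strassen(A21 + A22, B11, leaf)
    M3 = strassen(A11, B12 - B22, leaf)
    M4 = strassen(A22, B21 - B11, leaf)
    M5 = strassen(A11 + A12, B22, leaf)
    M6 = strassen(A21 - A11, B11 + B12, leaf)
    M7 = strassen(A12 - A22, B21 + B22, leaf)  # the product a broken variant might omit
    C = np.empty_like(A)
    C[:h, :h] = M1 + M4 - M5 + M7
    C[:h, h:] = M3 + M5
    C[h:, :h] = M2 + M4
    C[h:, h:] = M1 - M2 + M3 + M6
    return C
```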

Related content

We disprove Tokareva's conjecture that any balanced Boolean function of appropriate degree is a derivative of some bent function. The result is based on new upper bounds on the numbers of bent and plateaued functions.
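For context, the derivative of a Boolean function $f$ in direction $a$ is $D_a f(x) = f(x \oplus a) \oplus f(x)$, and a defining property of bent functions is that $D_a f$ is balanced for every $a \neq 0$. A minimal sketch verifying this for the classical 4-variable bent function $f(x) = x_1 x_2 \oplus x_3 x_4$:

```python
# Check that every nonzero-direction derivative of a bent function is balanced.
def f(x):  # x is a 4-bit integer; f = x1*x2 XOR x3*x4, a standard bent function
    b = [(x >> i) & 1 for i in range(4)]
    return (b[0] & b[1]) ^ (b[2] & b[3])

for a in range(1, 16):  # all nonzero directions a
    deriv = [f(x ^ a) ^ f(x) for x in range(16)]
    assert sum(deriv) == 8, "a balanced function is 1 on exactly half the inputs"
print("all 15 nonzero derivatives are balanced")
```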

The semi-empirical nature of the best-estimate models closing the balance equations of thermal-hydraulic (TH) system codes is a well-known and significant source of uncertainty in the accuracy of output predictions. This uncertainty, called model uncertainty, is usually represented by multiplicative (log-)Gaussian variables whose estimation requires solving an inverse problem based on a set of adequately chosen real experiments. One method from the TH field, called CIRCE, addresses this problem. In this paper we present a generalization of the method to several groups of experiments, each with its own properties, including different ranges of input conditions and different geometries. An individual (log-)Gaussian distribution is therefore estimated for each group in order to investigate whether the model uncertainty is homogeneous across the groups or should depend on the group. To this end, a multi-group CIRCE is proposed in which a variance parameter is estimated for each group jointly with a mean parameter common to all the groups, so as to preserve the uniqueness of the best-estimate model. The ECME algorithm for maximum likelihood estimation is adapted to this context and applied to relevant demonstration cases. Finally, it is tested on a practical case to assess the uncertainty of the critical mass flow, assuming two groups due to the difference in geometry between the experimental setups.
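As a purely illustrative sketch (not the CIRCE implementation itself), the following alternating maximization estimates a mean shared across groups jointly with one variance per group for the simple Gaussian model $y_{gi} \sim \mathcal{N}(\mu, \sigma_g^2)$; the multi-group CIRCE applies the same idea, via ECME, to the (log-)Gaussian multiplicative model-uncertainty variables.

```python
import numpy as np

def multigroup_mle(groups, iters=100):
    """Alternating MLE: common mean mu, one variance per group.

    Illustrative stand-in for the ECME updates of a multi-group estimation.
    """
    mu = np.mean(np.concatenate(groups))          # initial guess for the common mean
    sig2 = np.array([np.var(g) for g in groups])  # per-group variances
    n = np.array([len(g) for g in groups])
    ybar = np.array([np.mean(g) for g in groups])
    for _ in range(iters):
        w = n / sig2                              # precision weights per group
        mu = np.sum(w * ybar) / np.sum(w)         # precision-weighted mean update
        sig2 = np.array([np.mean((g - mu) ** 2) for g in groups])
    return mu, sig2

rng = np.random.default_rng(0)
groups = [rng.normal(1.0, s, size=200) for s in (0.1, 0.5)]  # two synthetic groups
print(multigroup_mle(groups))
```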

The HEat modulated Infinite DImensional Heston (HEIDIH) model and its numerical approximation are introduced and analyzed. This model falls into the general framework of infinite dimensional Heston stochastic volatility models of (F.E. Benth, I.C. Simonsen '18), introduced for the pricing of forward contracts. The HEIDIH model consists of a one-dimensional stochastic advection equation coupled with a stochastic volatility process, defined as a Cholesky-type decomposition of the tensor product of a Hilbert-space valued Ornstein-Uhlenbeck process, namely the mild solution to the stochastic heat equation on the real half-line. The advection and heat equations are driven by independent space-time Gaussian processes which are white in time and colored in space, with the latter covariance structure expressed by two different kernels. First, a class of weight-stationary kernels is given under which regularity results for the HEIDIH model in fractional Sobolev spaces are formulated. In particular, the class includes weighted Mat\'ern kernels. Second, numerical approximation of the model is considered. An error decomposition formula, pointwise in space and time, is proven for a finite-difference scheme. For a special case, essentially sharp convergence rates are obtained when this scheme is combined with a fully discrete finite element approximation of the stochastic heat equation. The analysis accounts for a localization error, a pointwise-in-space finite element discretization error, and an error stemming from the noise being sampled pointwise in space. The rates obtained are higher than what a standard Sobolev embedding technique would yield. Numerical simulations illustrate the results.
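To illustrate one ingredient, here is a hedged sketch of sampling a centered Gaussian field with a Matérn covariance kernel on a spatial grid via a Cholesky factorization; the spatial noise in such models is built from kernels of this kind (the HEIDIH model itself uses weighted variants in an infinite-dimensional setting, which this sketch does not capture).

```python
import numpy as np

def matern_half(x, ell=0.3, sigma=1.0):
    """Matern covariance with smoothness nu = 1/2: k(s,t) = sigma^2 exp(-|s-t|/ell)."""
    d = np.abs(x[:, None] - x[None, :])
    return sigma**2 * np.exp(-d / ell)

x = np.linspace(0.0, 1.0, 200)                # spatial grid on [0, 1]
K = matern_half(x) + 1e-10 * np.eye(len(x))   # small jitter for numerical stability
L = np.linalg.cholesky(K)                     # Cholesky factor of the covariance
rng = np.random.default_rng(1)
sample = L @ rng.standard_normal(len(x))      # one draw of noise colored in space
```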

A novel overlapping domain decomposition splitting algorithm based on a Crank-Nicolson method is developed for the stochastic nonlinear Schr\"odinger equation driven by multiplicative noise with non-periodic boundary conditions. The proposed algorithm significantly reduces the computational cost while maintaining similar conservation laws. Numerical experiments illustrate the capability of the algorithm for different spatial dimensions as well as various initial conditions. In particular, we compare the performance of the overlapping domain decomposition splitting algorithm with the stochastic multi-symplectic method in [S. Jiang, L. Wang and J. Hong, Commun. Comput. Phys., 2013] and the finite difference splitting scheme in [J. Cui, J. Hong, Z. Liu and W. Zhou, J. Differ. Equ., 2019]. We observe that the proposed algorithm has excellent computational efficiency and is highly competitive. It provides a useful tool for solving stochastic partial differential equations.
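For orientation, a minimal Strang-splitting sketch for the deterministic cubic Schrödinger equation $i u_t = -u_{xx} + |u|^2 u$ on a periodic domain follows: the nonlinear substep is solved exactly and the linear substep via FFT. This only illustrates the splitting idea; it is not the authors' overlapping domain decomposition scheme, nor the stochastic, non-periodic setting.

```python
import numpy as np

def strang_step(u, dt, k2):
    """One Strang step for i u_t = -u_xx + |u|^2 u (periodic, Fourier in space)."""
    u = u * np.exp(-0.5j * dt * np.abs(u) ** 2)              # half nonlinear flow (exact)
    u = np.fft.ifft(np.exp(-1j * dt * k2) * np.fft.fft(u))   # full linear flow via FFT
    u = u * np.exp(-0.5j * dt * np.abs(u) ** 2)              # half nonlinear flow
    return u

N, L_dom = 256, 2 * np.pi
x = L_dom * np.arange(N) / N
k = np.fft.fftfreq(N, d=L_dom / N) * 2 * np.pi               # angular wavenumbers
u = np.exp(1j * x) / np.cosh(x - np.pi)                      # smooth initial condition
for _ in range(1000):
    u = strang_step(u, 1e-3, k**2)
print("mass:", np.sum(np.abs(u) ** 2) * (L_dom / N))         # splitting conserves mass
```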

The nonlocality of the fractional operator causes numerical difficulties for long-time computation of time-fractional evolution equations. This paper develops a high-order fast time-stepping discontinuous Galerkin finite element method for the time-fractional diffusion equations which saves storage and computational time. The optimal error estimate $O(N^{-p-1} + h^{m+1} + \varepsilon N^{r\alpha})$ of the proposed time-stepping discontinuous Galerkin method is rigorously proven, where $N$ denotes the number of time intervals, $p$ is the degree of polynomial approximation on each time subinterval, $h$ is the maximum space step, $r\ge1$, $m$ is the order of the finite element space, $\alpha\in(0,1)$ is the order of the time-fractional derivative, and $\varepsilon>0$ can be arbitrarily small. Numerical simulations verify the theoretical analysis.
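To give a flavor of time-fractional discretizations, here is a hedged sketch of the classical first-order L1 scheme (not the paper's high-order DG method) for the fractional relaxation equation $\partial_t^\alpha u = -\lambda u$ with a Caputo derivative of order $\alpha \in (0,1)$. Note that the full solution history enters every step; this is exactly the nonlocality that fast methods are designed to tame.

```python
import numpy as np
from math import gamma

def l1_fractional_relaxation(alpha, lam, u0, T, N):
    """L1 scheme for Caputo D_t^alpha u = -lam * u on [0, T], uniform step tau.

    Classical first-order method, shown only to exhibit the nonlocal
    history term; it is not the fast high-order DG scheme of the paper.
    """
    tau = T / N
    c = tau ** (-alpha) / gamma(2.0 - alpha)
    b = (np.arange(1, N + 1) ** (1.0 - alpha)
         - np.arange(0, N) ** (1.0 - alpha))      # L1 weights b_k = (k+1)^{1-a} - k^{1-a}
    u = np.empty(N + 1)
    u[0] = u0
    for n in range(1, N + 1):
        # history term: sum_{k=1}^{n-1} b_k (u_{n-k} - u_{n-k-1})
        hist = sum(b[k] * (u[n - k] - u[n - k - 1]) for k in range(1, n))
        u[n] = c * (b[0] * u[n - 1] - hist) / (c * b[0] + lam)
    return u

u = l1_fractional_relaxation(alpha=0.5, lam=1.0, u0=1.0, T=1.0, N=200)
print(u[-1])  # decays toward the Mittag-Leffler solution E_alpha(-lam * t^alpha)
```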

This paper introduces a formulation of the variable density incompressible Navier-Stokes equations obtained by modifying the nonlinear terms in a consistent way. For Galerkin discretizations, the formulation leads to fully discrete conservation of mass, squared density, momentum, angular momentum and kinetic energy without the divergence-free constraint being strongly enforced. In addition to these favorable conservation properties, the formulation is shown to make the density field invariant to global shifts. The effect of viscous regularizations on the conservation properties is also investigated. Numerical tests validate the theory developed in this work. The new formulation shows superior performance compared to other formulations from the literature, both in terms of accuracy for smooth problems and in terms of robustness.
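As a point of reference (a standard example of "modifying the nonlinear terms in a consistent way", not necessarily this paper's exact formulation), a classical trick is to write the convective terms in skew-symmetric form:

```latex
% Skew-symmetric treatment of convection; a generic example, not the
% paper's formulation. The added terms vanish identically when the
% velocity is exactly divergence-free.
\[
  (\mathbf{u}\cdot\nabla)\mathbf{u}
  \;\longrightarrow\;
  (\mathbf{u}\cdot\nabla)\mathbf{u} + \tfrac{1}{2}(\nabla\cdot\mathbf{u})\,\mathbf{u},
  \qquad
  \mathbf{u}\cdot\nabla\rho
  \;\longrightarrow\;
  \tfrac{1}{2}\bigl(\mathbf{u}\cdot\nabla\rho + \nabla\cdot(\rho\,\mathbf{u})\bigr).
\]
```

The point is that the modified operators are skew-symmetric in the $L^2$ inner product, so discrete kinetic energy and squared density are conserved even when $\nabla\cdot\mathbf{u}=0$ holds only weakly.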

In Bayesian statistics, posterior contraction rates (PCRs) quantify the speed at which the posterior distribution concentrates, in a suitable way, on arbitrarily small neighborhoods of a true model as the sample size goes to infinity. In this paper, we develop a new approach to PCRs with respect to strong norm distances on parameter spaces of functions. Critical to our approach is the combination of a local Lipschitz continuity of the posterior distribution with a dynamic formulation of the Wasserstein distance, which allows us to set forth an interesting connection between PCRs and some classical problems arising in mathematical analysis, probability and statistics, e.g., Laplace methods for approximating integrals, Sanov's large deviation principles in the Wasserstein distance, rates of convergence of mean Glivenko-Cantelli theorems, and estimates of weighted Poincar\'e-Wirtinger constants. We first present a theorem on PCRs for a model in the regular infinite-dimensional exponential family, which exploits sufficient statistics of the model, and then extend the theorem to a general dominated model. These results rely on novel techniques, of independent interest, for evaluating Laplace integrals and weighted Poincar\'e-Wirtinger constants in infinite dimension. The proposed approach is applied to the regular parametric model, the multinomial model, the finite-dimensional and infinite-dimensional logistic-Gaussian models, and the infinite-dimensional linear regression. In general, our approach leads to optimal PCRs in finite-dimensional models, whereas for infinite-dimensional models it is shown explicitly how the prior distribution affects PCRs.
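To make the notion concrete in the simplest case: in the normal mean model with a conjugate prior, the posterior mass outside a ball of radius $M_n n^{-1/2}$ around the true mean vanishes as $n \to \infty$ for any $M_n \to \infty$, i.e. the PCR is the parametric rate $n^{-1/2}$. A hedged numerical check (all names and the choice $M_n = \log n$ are illustrative):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
theta0 = 0.3                                   # true mean
for n in (10**2, 10**4, 10**6):
    y = rng.normal(theta0, 1.0, size=n)
    # conjugate posterior for N(theta, 1) data with N(0, 1) prior:
    post_var = 1.0 / (n + 1.0)
    post_mean = y.sum() * post_var
    r = np.log(n) / np.sqrt(n)                 # radius M_n * n^{-1/2}, M_n = log n
    mass_out = (norm.cdf(theta0 - r, post_mean, np.sqrt(post_var))
                + norm.sf(theta0 + r, post_mean, np.sqrt(post_var)))
    print(n, mass_out)                         # posterior mass outside the ball -> 0
```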

In this paper, we are concerned with symmetric integrators for the nonlinear relativistic Klein--Gordon (NRKG) equation with a dimensionless parameter $0<\varepsilon\ll 1$ which is inversely proportional to the speed of light. The solution of this model is highly oscillatory in time for small $\varepsilon$, and the equation then also has strong nonlinearity. These two aspects impose significant burdens on the design of numerical methods. We propose and analyze a novel class of symmetric integrators based on a reformulation of the problem, a Fourier pseudo-spectral method, and exponential integrators. Two practical integrators, of orders up to four, are constructed by using the proposed symmetry property and the stiff order conditions of implicit exponential integrators. The convergence of the obtained integrators is rigorously studied, and it is shown that the accuracy in time is improved to $\mathcal{O}(\varepsilon^{3} h^2)$ and $\mathcal{O}(\varepsilon^{4} h^4)$ for the time stepsize $h$. Near energy conservation over long times is established for the multi-stage integrators by means of modulated Fourier expansions. These theoretical results hold even when large stepsizes are used in the schemes. Numerical results on an NRKG equation show that the proposed integrators have improved uniform error bounds, excellent long-time energy conservation and competitive efficiency.
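As a generic illustration of a symmetric trigonometric/exponential integrator (not the authors' NRKG scheme), here is a Gautschi-type two-step method for the highly oscillatory model problem $u'' + \omega^2 u = g(u)$: it is symmetric and integrates the linear oscillation exactly.

```python
import numpy as np

def gautschi(omega, g, u0, v0, h, steps):
    """Symmetric Gautschi-type two-step method for u'' + omega^2 u = g(u).

    Generic trigonometric integrator shown for illustration; the paper's
    integrators are tailored to the NRKG equation and its epsilon-scaling.
    """
    psi = np.sinc(h * omega / (2 * np.pi)) ** 2       # filter sinc^2(h*omega/2)
    u_prev = u0
    # starting step consistent with u'(0) = v0:
    u_curr = (np.cos(h * omega) * u0 + np.sin(h * omega) / omega * v0
              + 0.5 * h**2 * psi * g(u0))
    for _ in range(steps - 1):
        u_next = 2 * np.cos(h * omega) * u_curr - u_prev + h**2 * psi * g(u_curr)
        u_prev, u_curr = u_curr, u_next
    return u_curr

# stiff test: large frequency omega = 1/eps, weak cubic nonlinearity
print(gautschi(omega=100.0, g=lambda u: -u**3, u0=1.0, v0=0.0, h=0.01, steps=2000))
```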

Recently, quantum computing experiments have for the first time exceeded the capability of classical computers to perform certain computations -- a milestone termed "quantum computational advantage." However, verifying the output of the quantum device in these experiments required extremely large classical computations. An exciting next step toward demonstrating quantum capability would be to implement tests of quantum computational advantage with efficient classical verification, so that larger system sizes can be tested and verified. One of the first proposals for an efficiently-verifiable test of quantumness consists of hiding a secret classical bitstring inside a circuit of the class IQP, in such a way that samples from the circuit's output distribution are correlated with the secret (arXiv:0809.0847). The classical hardness of this protocol has been supported by evidence that directly simulating IQP circuits is hard, but the security of the protocol against other (non-simulating) classical attacks has remained an open question. In this work we demonstrate that the protocol is not secure against classical forgery. We describe a classical algorithm that can not only convince the verifier that the (classical) prover is quantum, but can in fact extract the secret key underlying a given protocol instance. Furthermore, we show that the key extraction algorithm is efficient in practice for problem sizes of hundreds of qubits. Finally, we provide an implementation of the algorithm, and give the secret vector underlying the "$25 challenge" posted online by the authors of the original paper.
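For intuition about "samples correlated with the secret": verification in such protocols amounts to measuring the empirical bias of the samples along the secret vector over GF(2). A minimal sketch of that check follows; the expected bias and the acceptance threshold are protocol-specific and chosen arbitrarily here.

```python
import numpy as np

def verify(samples, secret, threshold=0.75):
    """Accept iff the empirical bias of x . s (mod 2) exceeds the threshold.

    Sketch of a correlation test only; the actual IQP protocol fixes the
    expected bias and threshold, which are assumptions in this sketch.
    """
    dots = (samples @ secret) % 2              # inner products over GF(2)
    bias = np.mean(dots == 0)                  # fraction correlated with the secret
    return bias, bias >= threshold

rng = np.random.default_rng(0)
n, m = 100, 5000                               # qubits, number of samples
secret = rng.integers(0, 2, size=n)
samples = rng.integers(0, 2, size=(m, n))      # uniform samples: no correlation
print(verify(samples, secret))                 # bias ~ 0.5 -> reject
```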

The goal of explainable Artificial Intelligence (XAI) is to generate human-interpretable explanations, but there are no computationally precise theories of how humans interpret AI-generated explanations. The lack of theory means that validation of XAI must be done empirically, on a case-by-case basis, which prevents systematic theory-building in XAI. We propose a psychological theory of how humans draw conclusions from saliency maps, the most common form of XAI explanation, which for the first time allows for precise prediction of explainee inference conditioned on explanation. Our theory posits that, absent an explanation, humans expect the AI to make decisions similar to their own, and that they interpret an explanation by comparing it to the explanations they themselves would give. Comparison is formalized via Shepard's universal law of generalization in a similarity space, a classic theory from cognitive science. A pre-registered user study on AI image classifications with saliency map explanations demonstrates that our theory quantitatively matches participants' predictions of the AI.
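For reference, Shepard's universal law states that the probability of generalizing from one stimulus to another decays exponentially with their distance in psychological similarity space. A minimal sketch of a similarity-weighted prediction built on it follows; the feature space, scale parameter, and function names are illustrative assumptions, not the paper's model.

```python
import numpy as np

def shepard_similarity(x, y, scale=1.0):
    """Shepard's universal law: generalization decays exponentially with distance."""
    return np.exp(-np.linalg.norm(x - y) / scale)

def predict_agreement(explanation, own_explanations, labels, scale=1.0):
    """Similarity-weighted vote: compare the AI's explanation to the explanations
    the participant would give themselves (illustrative formalization only)."""
    sims = np.array([shepard_similarity(explanation, e, scale) for e in own_explanations])
    return np.sum(sims * labels) / np.sum(sims)   # weighted prediction in [0, 1]

rng = np.random.default_rng(0)
own = rng.normal(size=(5, 16))                    # hypothetical saliency embeddings
labels = np.array([1, 1, 0, 1, 0], dtype=float)   # would the human agree with each?
print(predict_agreement(rng.normal(size=16), own, labels))
```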
