
We prove that a variant of the classical Sobolev space of first-order dominating mixed smoothness is equivalent (under a certain condition) to the unanchored ANOVA space on $\mathbb{R}^d$, for $d \geq 1$. Both spaces are Hilbert spaces involving weight functions, which determine the behaviour as different variables tend to $\pm \infty$, and weight parameters, which represent the influence of different subsets of variables. The unanchored ANOVA space on $\mathbb{R}^d$ was initially introduced by Nichols & Kuo in 2014 to analyse the error of quasi-Monte Carlo (QMC) approximations for integrals on unbounded domains; whereas the classical Sobolev space of dominating mixed smoothness was used as the setting in a series of papers by Griebel, Kuo & Sloan on the smoothing effect of integration, in an effort to develop a rigorous theory on why QMC methods work so well for certain non-smooth integrands with kinks or jumps coming from option pricing problems. In this same setting, Griewank, Kuo, Leövey & Sloan in 2018 subsequently extended these ideas by developing a practical smoothing by preintegration technique to approximate integrals of such functions with kinks or jumps. We first prove the equivalence in one dimension (itself a non-trivial task), before following a similar, but more complicated, strategy to prove the equivalence for general dimensions. As a consequence of this equivalence, we analyse applying QMC combined with a preintegration step to approximate the fair price of an Asian option, and prove that the error of such an approximation using $N$ points converges at a rate close to $1/N$.
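To make the smoothing-by-preintegration idea concrete, here is a minimal sketch in Python: a toy payoff $\max(x_1 + \dots + x_d - K, 0)$ with a kink (a crude stand-in for an Asian option payoff under i.i.d. standard normal inputs) is preintegrated over its first variable by Gauss-Hermite quadrature, and the resulting smooth function is integrated by QMC. The payoff, strike K, dimension d, and point counts are illustrative assumptions, not the paper's setup.

```python
# Minimal sketch of smoothing by preintegration followed by QMC.
# The payoff, strike, and dimension are toy stand-ins, not the paper's setup.
import numpy as np
from scipy.stats import norm, qmc

d, K, N = 8, 1.0, 2**12

# Preintegration: integrate out x_1 with Gauss-Hermite quadrature.
# The resulting g(x_2, ..., x_d) is smooth even though the payoff
# max(x_1 + ... + x_d - K, 0) has a kink.
nodes, weights = np.polynomial.hermite_e.hermegauss(64)
weights = weights / np.sqrt(2 * np.pi)       # quadrature for E over N(0,1)

def g(x_rest):                               # x_rest has shape (n, d-1)
    tail = x_rest.sum(axis=-1, keepdims=True)
    vals = np.maximum(nodes[None, :] + tail - K, 0.0)
    return vals @ weights

# QMC over the remaining d-1 variables: scrambled Sobol points mapped
# to standard normals by the inverse CDF.
u = qmc.Sobol(d - 1, scramble=True, seed=0).random(N)
z = norm.ppf(u)
print("preintegrated QMC estimate:", g(z).mean())
```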

Related content

We prove hypercontractive inequalities on high dimensional expanders. As in the settings of the p-biased hypercube, the symmetric group, and the Grassmann scheme, our inequalities are effective for global functions, which are functions that are not significantly affected by a restriction of a small set of coordinates. As applications, we obtain Fourier concentration, small-set expansion, and Kruskal-Katona theorems for high dimensional expanders. Our techniques rely on a new approximate Efron-Stein decomposition for high dimensional link expanders.

The diagonalization technique was invented by Cantor to show that the real numbers are uncountable, and it is very important in computer science. In this work, we enumerate all polynomial-time deterministic Turing machines and diagonalize over all of them by a universal nondeterministic Turing machine. As a result, we obtain a language $L_d$ that is not accepted by any polynomial-time deterministic Turing machine but is accepted by a nondeterministic Turing machine running within time $O(n^k)$ for any $k \in \mathbb{N}_1$, i.e., $L_d \in \mathcal{NP}$. That is, we present a proof that $\mathcal{P}$ and $\mathcal{NP}$ differ.
Key words: diagonalization, polynomial-time deterministic Turing machine, universal nondeterministic Turing machine
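For readers unfamiliar with the technique itself, the following toy Python sketch illustrates Cantor's diagonal argument in its simplest form; it is only an illustration of diagonalizing over an enumeration, not the paper's claimed P vs NP construction, and the example enumeration is hypothetical.

```python
# Toy illustration of Cantor's diagonal argument: given the first n rows
# of any enumeration of infinite binary sequences, the diagonal sequence
# flips the k-th bit of the k-th row, so it differs from every row.
def diagonal(rows):
    """rows[k] is a function index -> bit for the k-th sequence."""
    return [1 - rows[k](k) for k in range(len(rows))]

# Hypothetical enumeration: row k lists the bits of k's binary expansion.
rows = [lambda i, k=k: (k >> i) & 1 for k in range(8)]
d = diagonal(rows)
for k in range(8):
    assert d[k] != rows[k](k)   # d disagrees with row k at position k
print("diagonal prefix:", d)
```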

Relating animal behaviors to brain activity is a fundamental goal in neuroscience, with practical applications in building robust brain-machine interfaces. However, the domain gap between individuals is a major issue that prevents the training of general models that work on unlabeled subjects. Since 3D pose data can now be reliably extracted from multi-view video sequences without manual intervention, we propose to use it to guide the encoding of neural action representations, together with a set of neural and behavioral augmentations that exploit the properties of microscopy imaging. To reduce the domain gap, during training we swap neural and behavioral data across animals that appear to be performing similar actions. We test our method on three very different multimodal datasets: one featuring flies and their neural activity, one containing human neural Electrocorticography (ECoG) data, and one containing RGB video of human activities from different viewpoints.
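A minimal sketch of what such a swap could look like, assuming per-frame neural features and pose features have already been extracted; the nearest-pose matching rule below is an illustrative simplification, not the paper's actual action-matching criterion.

```python
# Sketch of the across-animal swap augmentation. Features are random
# stand-ins; nearest pose distance is a proxy for "similar action".
import numpy as np

rng = np.random.default_rng(0)
neural_a, pose_a = rng.normal(size=(100, 32)), rng.normal(size=(100, 6))
neural_b, pose_b = rng.normal(size=(100, 32)), rng.normal(size=(100, 6))

def swap_augment(neural_a, pose_a, neural_b, pose_b):
    """For each frame of animal A, find the frame of animal B with the
    closest pose and pair A's behavioral data with B's neural data."""
    d = np.linalg.norm(pose_a[:, None, :] - pose_b[None, :, :], axis=-1)
    match = d.argmin(axis=1)            # nearest-pose frame index in B
    return neural_b[match], pose_a      # swapped (neural, behavior) pair

swapped_neural, kept_pose = swap_augment(neural_a, pose_a, neural_b, pose_b)
print(swapped_neural.shape, kept_pose.shape)   # (100, 32) (100, 6)
```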

Optimal zero-delay coding (quantization) of $\mathbb{R}^d$-valued linearly generated Markov sources is studied under quadratic distortion. The structure and existence of deterministic and stationary coding policies that are optimal for the infinite horizon average cost (distortion) problem are established. Prior results studying the optimality of zero-delay codes for Markov sources for infinite horizons either considered finite alphabet sources or, for the $\mathbb{R}^d$-valued case, only showed the existence of deterministic and non-stationary Markov coding policies or those which are randomized. In addition to existence results, for finite blocklength (horizon) $T$ the performance of an optimal coding policy is shown to approach the infinite time horizon optimum at a rate $O(\frac{1}{T})$. This gives an explicit rate of convergence that quantifies the near-optimality of finite window (finite-memory) codes among all optimal zero-delay codes.
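As a toy illustration of the objects involved (not the paper's optimal policies), the following sketch quantizes a scalar Gauss-Markov source causally, with no lookahead, and measures the average quadratic distortion over a horizon T; the AR(1) coefficient and quantizer step are arbitrary choices.

```python
# Toy zero-delay coding of a scalar Gauss-Markov source: each sample is
# quantized immediately (no lookahead) by a fixed uniform quantizer, and
# we track the average quadratic distortion over a horizon T.
import numpy as np

rng = np.random.default_rng(1)
a, T, step = 0.9, 10_000, 0.5      # AR(1) coefficient, horizon, bin width

x, dist = 0.0, 0.0
for t in range(T):
    x = a * x + rng.normal()           # linearly generated Markov source
    xhat = step * np.round(x / step)   # zero-delay: encode x_t on arrival
    dist += (x - xhat) ** 2
print("average quadratic distortion:", dist / T)
```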

This study proposes a method for efficient delivery of liquefied petroleum gas cylinders based on demand forecasts of gas usage. To maintain a liquefied petroleum gas service, gas providers visit each customer to check the gas meter and replace the gas cylinder depending on the remaining amount of gas. These visits fall into three patterns: a non-replacement visit, replacement before the customer runs out of gas, and replacement after the customer runs out of gas. The last pattern is a crucial problem in sustaining a liquefied petroleum gas service and must be reduced. By contrast, frequent non-replacement visits are needed to prevent customers from running out of gas, but they require considerable effort. One of the most severe difficulties of this problem is that, traditionally, the gas usage of each customer can only be determined by visiting. With the recent spread of smart sensors, however, the daily gas consumption of each house can be monitored without visiting customers. Our main idea is to categorize all customers into three groups, high-risk, moderate-risk, and low-risk, according to the urgency of cylinder replacement implied by the demand forecast. Based on this idea, we construct an algorithm that maximizes deliveries to moderate-risk customers while guaranteeing delivery to high-risk customers. Long-term optimal delivery planning is realized by balancing the workload across days. A verification experiment in Chiba prefecture, Japan, showed that our algorithm is effective in reducing the number of out-of-gas incidents. Moreover, the proposed model is a generic framework for building an optimal vehicle routing plan, combining a complementary algorithm, demand forecasting, delivery-list optimization, and delivery-route optimization to realize a long-term optimal delivery plan through priority setting.
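A minimal sketch of the triage idea, assuming forecast daily consumption and remaining gas are available per customer; the risk thresholds, capacity, and greedy fill rule are illustrative assumptions rather than the paper's calibrated algorithm.

```python
# Sketch of risk-based delivery triage: bucket customers by forecast
# days-until-empty, always serve high-risk, fill leftover capacity with
# the most urgent moderate-risk customers. Thresholds are illustrative.
def days_until_empty(remaining_kg, forecast_kg_per_day):
    return remaining_kg / max(forecast_kg_per_day, 1e-9)

def plan_day(customers, capacity, high=3, moderate=10):
    """customers: list of (id, remaining_kg, forecast_kg_per_day)."""
    scored = sorted(customers, key=lambda c: days_until_empty(c[1], c[2]))
    plan = [c for c in scored if days_until_empty(c[1], c[2]) <= high]
    for c in scored:
        if len(plan) >= capacity:
            break
        if high < days_until_empty(c[1], c[2]) <= moderate and c not in plan:
            plan.append(c)
    return [c[0] for c in plan[:capacity]]

customers = [("A", 2.0, 1.0), ("B", 8.0, 1.0), ("C", 40.0, 1.0), ("D", 4.0, 2.0)]
print(plan_day(customers, capacity=3))    # ['A', 'D', 'B']
```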

We propose a general and scalable approximate sampling strategy for probabilistic models with discrete variables. Our approach uses gradients of the likelihood function with respect to its discrete inputs to propose updates in a Metropolis-Hastings sampler. We show empirically that this approach outperforms generic samplers in a number of difficult settings including Ising models, Potts models, restricted Boltzmann machines, and factorial hidden Markov models. We also demonstrate the use of our improved sampler for training deep energy-based models on high dimensional discrete data. This approach outperforms variational auto-encoders and existing energy-based models. Finally, we give bounds showing that our approach is near-optimal in the class of samplers which propose local updates.
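The following sketch conveys the flavour of such a gradient-informed proposal on a small binary pairwise (Ising-style) model: a first-order Taylor estimate of the log-likelihood change for each single-bit flip, obtained from the gradient, weights the proposal, and a standard Metropolis-Hastings correction keeps the chain exact. The model, the 1/2 tempering factor, and the step count are illustrative, not the paper's exact algorithm.

```python
# Sketch of a gradient-informed Metropolis-Hastings sampler for a binary
# pairwise model: propose single-bit flips with probability proportional
# to a softmax of the Taylor-estimated log-likelihood change.
import numpy as np

rng = np.random.default_rng(0)
n = 20
W = rng.normal(scale=0.1, size=(n, n)); W = (W + W.T) / 2
np.fill_diagonal(W, 0.0)
b = rng.normal(scale=0.1, size=n)

def logp(x):                       # unnormalized log-probability
    return 0.5 * x @ W @ x + b @ x

def flip_scores(x):                # Taylor estimate of logp change per flip
    grad = W @ x + b
    return -(2 * x - 1) * grad

def softmax(s):
    e = np.exp(s - s.max())
    return e / e.sum()

x = rng.integers(0, 2, size=n).astype(float)
for _ in range(1000):
    q = softmax(flip_scores(x) / 2)          # proposal over bits to flip
    i = rng.choice(n, p=q)
    x_new = x.copy(); x_new[i] = 1 - x_new[i]
    q_rev = softmax(flip_scores(x_new) / 2)  # reverse-move proposal prob
    log_alpha = logp(x_new) - logp(x) + np.log(q_rev[i]) - np.log(q[i])
    if np.log(rng.uniform()) < log_alpha:    # Metropolis-Hastings accept
        x = x_new
print("final state:", x.astype(int))
```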

Matter evolved under the influence of gravity from minuscule density fluctuations. Non-perturbative structure formed hierarchically over all scales and developed the non-Gaussian features known as the Cosmic Web. Fully understanding the structure formation of the Universe is one of the holy grails of modern astrophysics. Astrophysicists survey large volumes of the Universe and employ large ensembles of computer simulations to compare with the observed data in order to extract the full information of our own Universe. However, evolving trillions of galaxies over billions of years, even with the simplest physics, is a daunting task. We build a deep neural network, the Deep Density Displacement Model (hereafter D$^3$M), to predict the non-linear structure formation of the Universe from simple linear perturbation theory. Our extensive analysis demonstrates that D$^3$M outperforms second-order perturbation theory (hereafter 2LPT), the commonly used fast approximate simulation method, in point-wise comparison, 2-point correlation, and 3-point correlation. We also show that D$^3$M is able to accurately extrapolate far beyond its training data, and predict structure formation for significantly different cosmological parameters. Our study proves, for the first time, that deep learning is a practical and accurate alternative to approximate simulations of the gravitational structure formation of the Universe.

Importance sampling is one of the most widely used variance reduction strategies in Monte Carlo rendering. In this paper, we propose a novel importance sampling technique that uses a neural network to learn how to sample from a desired density represented by a set of samples. Our approach considers an existing Monte Carlo rendering algorithm as a black box. During a scene-dependent training phase, we learn to generate samples with a desired density in the primary sample space of the rendering algorithm using maximum likelihood estimation. We leverage a recent neural network architecture that was designed to represent real-valued non-volume preserving ('Real NVP') transformations in high dimensional spaces. We use Real NVP to non-linearly warp primary sample space and obtain desired densities. In addition, Real NVP efficiently computes the determinant of the Jacobian of the warp, which is required to implement the change of integration variables implied by the warp. A main advantage of our approach is that it is agnostic of underlying light transport effects, and can be combined with many existing rendering techniques by treating them as a black box. We show that our approach leads to effective variance reduction in several practical scenarios.
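To show the two properties the method relies on, an invertible warp and a cheap log-determinant of its Jacobian, here is a single affine coupling layer of the Real NVP form in numpy; the scale and translation maps are random linear stand-ins for trained networks, so this is a sketch of the transform, not the paper's trained sampler.

```python
# One Real NVP affine coupling layer: half the coordinates pass through
# unchanged, the other half are scaled and shifted by functions of the
# first half, making the Jacobian triangular and its log-det trivial.
import numpy as np

rng = np.random.default_rng(0)
d = 6
S = rng.normal(scale=0.1, size=(d // 2, d // 2))   # stand-in for s(.)
T = rng.normal(scale=0.1, size=(d // 2, d // 2))   # stand-in for t(.)

def coupling_forward(x):
    """y1 = x1; y2 = x2 * exp(s(x1)) + t(x1).  log|det J| = sum s(x1)."""
    x1, x2 = x[:, :d // 2], x[:, d // 2:]
    s, t = x1 @ S, x1 @ T
    y = np.concatenate([x1, x2 * np.exp(s) + t], axis=1)
    log_det_jac = s.sum(axis=1)
    return y, log_det_jac

x = rng.uniform(size=(4, d))         # points in primary sample space
y, log_det = coupling_forward(x)
# Change of variables: density of y = density of x times exp(-log_det).
print(y.shape, log_det)
```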

We present the problem of selecting relevant premises for a proof of a given statement. When stated as a binary classification task for pairs (conjecture, axiom), it can be efficiently solved using artificial neural networks. The key difference between our approach and previous ones is the use of just the functional signatures of premises. To further improve the performance of the model, we use a dimensionality reduction technique to replace long, sparse signature vectors with compact, dense embedded versions. These are obtained by first defining the concept of a context for each functor symbol, and then training a simple neural network to predict the distribution of other functor symbols in the context of this functor. After training the network, the output of its hidden layer is used to construct a lower-dimensional embedding of a functional signature (for each premise) with a distributed representation of features. This allows us to use 512-dimensional embeddings for conjecture-axiom pairs that contain enough information about the original statements to reach an accuracy of 76.45% on the premise selection task, using only simple two-layer densely connected neural networks.
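The embedding step is close in spirit to skip-gram training; the following toy sketch trains a one-hidden-layer network to predict context functors and reads the hidden layer off as the embedding. The vocabulary, training pairs, and sizes are toy assumptions (the paper's embeddings are 512-dimensional).

```python
# Skip-gram-style sketch: train a one-hidden-layer softmax network on
# (functor, context functor) pairs; the hidden layer weights become the
# functor embeddings.
import numpy as np

rng = np.random.default_rng(0)
vocab, dim = 50, 16                    # functor symbols, embedding size
pairs = rng.integers(0, vocab, size=(2000, 2))   # toy co-occurrence data

W_in = rng.normal(scale=0.1, size=(vocab, dim))  # hidden layer = embedding
W_out = rng.normal(scale=0.1, size=(dim, vocab))

lr = 0.1
for f, c in pairs:
    h = W_in[f]                                  # hidden activation
    logits = h @ W_out
    p = np.exp(logits - logits.max()); p /= p.sum()
    p[c] -= 1.0                                  # softmax cross-entropy grad
    grad_h = W_out @ p
    W_out -= lr * np.outer(h, p)
    W_in[f] -= lr * grad_h

embedding = W_in                                 # rows: functor embeddings
print(embedding.shape)                           # (50, 16)
```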

In this paper, we study the optimal convergence rate for distributed convex optimization problems in networks. We model the communication restrictions imposed by the network as a set of affine constraints and provide optimal complexity bounds for four different setups, namely when the function $F(\mathbf{x}) \triangleq \sum_{i=1}^{m} f_i(\mathbf{x})$ is (i) strongly convex and smooth, (ii) strongly convex, (iii) smooth, or (iv) just convex. Our results show that Nesterov's accelerated gradient descent on the dual problem can be executed in a distributed manner and obtains the same optimal rates as in the centralized version of the problem (up to constant or logarithmic factors), with an additional cost related to the spectral gap of the interaction matrix. Finally, we discuss some extensions of the proposed setup, such as proximal-friendly functions, time-varying graphs, and improvement of the condition numbers.
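As a reminder of the centralized building block, here is a short sketch of Nesterov's accelerated gradient method on a smooth, strongly convex quadratic; the distributed and dual aspects of the paper are omitted, and the problem instance is an arbitrary example.

```python
# Nesterov's accelerated gradient on a strongly convex quadratic
# F(x) = 0.5 x'Qx - c'x, the centralized primitive the paper runs on
# the dual problem. Q, c, and iteration count are illustrative.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(20, 10))
Q = A.T @ A + np.eye(10)               # strong convexity constant mu = 1
c = rng.normal(size=10)
L, mu = np.linalg.norm(Q, 2), 1.0      # smoothness and strong convexity

grad = lambda x: Q @ x - c
x = y = np.zeros(10)
beta = (np.sqrt(L) - np.sqrt(mu)) / (np.sqrt(L) + np.sqrt(mu))
for _ in range(200):
    x_new = y - grad(y) / L            # gradient step at lookahead point
    y = x_new + beta * (x_new - x)     # momentum extrapolation
    x = x_new
print("gradient norm at final iterate:", np.linalg.norm(grad(x)))
```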
