
A new fixed (non-adaptive) recursive scheme for multigrid algorithms is introduced. Governed by a positive parameter $\kappa$ called the cycle counter, this scheme generates a family of multigrid cycles dubbed $\kappa$-cycles. The well-known $V$-cycle, $F$-cycle, and $W$-cycle are shown to be particular members of this rich $\kappa$-cycle family, which satisfies the property that the total number of recursive calls in a single cycle is a polynomial of degree $\kappa$ in the number of levels of the cycle. This broadening of the scope of fixed multigrid cycles is shown to be potentially significant for the solution of some large problems on platforms, such as GPU processors, where the overhead induced by recursive calls may be relatively significant. In cases of problems for which the convergence of standard $V$-cycles or $F$-cycles (corresponding to $\kappa=1$ and $\kappa=2$, respectively) is particularly slow, and yet the cost of $W$-cycles is very high due to the large number of recursive calls (which is exponential in the number of levels), intermediate values of $\kappa$ may prove to yield significantly faster run-times. This is demonstrated in examples where $\kappa$-cycles are used for the solution of rotated anisotropic diffusion problems, both as a stand-alone solver and as a preconditioner. Moreover, a simple model is presented for predicting the approximate run-time of the $\kappa$-cycle, which is useful in pre-selecting an appropriate cycle counter for a given problem on a given platform. Implementing the $\kappa$-cycle requires making just a small change in the classical multigrid cycle.
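As a concrete illustration of the recursive structure, the following minimal Python sketch implements a cycle whose first recursive call keeps the counter $\kappa$ and whose second call (taken only when $\kappa>1$) decrements it. This reproduces the stated special cases ($\kappa=1$ gives a V-cycle, $\kappa=2$ an F-cycle, and $\kappa$ at least the number of levels a W-cycle) and a total call count that is a polynomial of degree $\kappa$ in the number of levels. It is a sketch consistent with these properties, not necessarily the authors' exact formulation; the operator list `A` and the routines `smooth`, `restrict`, `prolong`, and `coarse_solve` are assumed placeholder interfaces.

```python
import numpy as np

def kappa_cycle(A, b, x, level, kappa,
                smooth, restrict, prolong, coarse_solve):
    """One kappa-cycle on level `level` (0 = finest).

    kappa = 1 reproduces a V-cycle, kappa = 2 an F-cycle, and
    kappa >= number of levels a W-cycle: each level issues one
    recursive call with counter kappa and, if kappa > 1, a second
    call with counter kappa - 1.
    """
    if level == len(A) - 1:                       # coarsest level
        return coarse_solve(A[level], b)
    x = smooth(A[level], b, x)                    # pre-smoothing
    r = b - A[level] @ x                          # fine-level residual
    rc = restrict(r, level)                       # restrict to coarse grid
    ec = np.zeros_like(rc)
    ec = kappa_cycle(A, rc, ec, level + 1, kappa,
                     smooth, restrict, prolong, coarse_solve)
    if kappa > 1:                                 # extra coarse-level visit
        ec = kappa_cycle(A, rc, ec, level + 1, kappa - 1,
                         smooth, restrict, prolong, coarse_solve)
    x = x + prolong(ec, level)                    # coarse-grid correction
    x = smooth(A[level], b, x)                    # post-smoothing
    return x
```

Note that, in this sketch, the only change relative to a classical two-call (W-type) cycle is the counter passed to the second recursive call, consistent with the claim that implementing the $\kappa$-cycle requires only a small change in the classical multigrid cycle.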

Related Content

CASES: International Conference on Compilers, Architecture, and Synthesis for Embedded Systems. Publisher: ACM.

We analyze the complexity of learning directed acyclic graphical models from observational data in general settings without specific distributional assumptions. Our approach is information-theoretic and uses a local Markov boundary search procedure in order to recursively construct ancestral sets in the underlying graphical model. Perhaps surprisingly, we show that for certain graph ensembles, a simple forward greedy search algorithm (i.e. without a backward pruning phase) suffices to learn the Markov boundary of each node. This substantially improves the sample complexity, which we show is at most polynomial in the number of nodes. This is then applied to learn the entire graph under a novel identifiability condition that generalizes existing conditions from the literature. As a matter of independent interest, we establish finite-sample guarantees for the problem of recovering Markov boundaries from data. Moreover, we apply our results to the special case of polytrees, for which the assumptions simplify, and provide explicit conditions under which polytrees are identifiable and learnable in polynomial time. We further illustrate the performance of the algorithm, which is easy to implement, in a simulation study. Our approach is general, works for discrete or continuous distributions without distributional assumptions, and as such sheds light on the minimal assumptions required to efficiently learn the structure of directed graphical models from data.
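To make the search procedure concrete, here is a minimal Python sketch of a forward greedy Markov boundary search without a backward pruning phase, as described above; the conditional mutual information estimator `cond_mi` and the threshold `eps` are assumed placeholders, not specifics from the paper.

```python
def markov_boundary(target, candidates, cond_mi, eps=1e-3):
    """Forward greedy Markov boundary search (no backward pruning).

    Repeatedly adds the candidate variable with the largest
    conditional mutual information with `target` given the set
    selected so far, and stops when no candidate adds more than
    `eps`.  `cond_mi(x, y, z)` is an assumed estimator of I(x; y | z).
    """
    boundary = []
    remaining = list(candidates)
    while remaining:
        gains = [cond_mi(target, v, boundary) for v in remaining]
        best = max(range(len(remaining)), key=lambda i: gains[i])
        if gains[best] <= eps:      # approximate conditional independence
            break
        boundary.append(remaining.pop(best))
    return boundary
```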

Analytical conditions are available for the optimum design of impact absorbers for the case where the host structure is well described as a rigid body. Accordingly, the analysis relies on the assumption that the impacts cause immediate dissipation in the contact region, which is modeled in terms of a known coefficient of restitution. When a flexible host structure is considered instead, the impact absorber not only dissipates energy at the time instants of impact but also induces nonlinear energy scattering between structural modes. Hence, it is crucial to account for such nonlinear energy transfers, which redistribute energy within the modal space of the structure. In the present work, we develop a design approach for resonantly driven, flexible host structures. We demonstrate decoupling of the time scales of the impact and the resonant vibration. On the long time scale, the dynamics can be properly reduced to the fundamental harmonic of the resonant mode. A light impact absorber responds to this enforced motion, and we recover the Slow Invariant Manifold of the dynamics for the regime of two impacts per period. On the short time scale, the contact mechanics and elastodynamics must be finely resolved. We show that it is sufficient to run a numerical simulation of a single impact event with adequate pre-impact velocity; from this simulation, we derive a modal coefficient of restitution and the properties of the contact force pulse needed to approximate the behavior on the long time scale. We establish that the design problem can be reduced to four dimensionless parameters and demonstrate the approach for the numerical example of a cantilevered beam with an impact absorber. We conclude that the proposed semi-analytical procedure enables deep qualitative understanding of the problem and, at the same time, yields a quantitatively accurate prediction of the optimum design.
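As a toy illustration of the short-time-scale step (simulating a single impact event and extracting an effective coefficient of restitution from the pre- and post-impact velocities), the following Python sketch integrates a Hertzian contact spring with a viscous damper; the contact law and all parameter values are illustrative assumptions, not taken from the paper.

```python
def restitution_from_single_impact(v_in, k=1e8, c=50.0, m=0.01,
                                   dt=1e-7, n_max=200_000):
    """Toy single-impact simulation (Hertzian spring + viscous damper).

    Integrates m*x'' = -k*x**1.5 - c*x' while in contact (x > 0),
    with x the penetration depth and v_in > 0 the pre-impact velocity,
    and returns the effective coefficient of restitution |v_out| / v_in.
    All parameter values are illustrative placeholders.
    """
    x, v = 0.0, v_in
    for _ in range(n_max):
        f = -k * x ** 1.5 - c * v if x > 0.0 else 0.0
        v += (f / m) * dt            # semi-implicit Euler step
        x += v * dt
        if x <= 0.0 and v < 0.0:     # contact has ended
            return abs(v) / v_in
    raise RuntimeError("impact did not separate within n_max steps")

print(restitution_from_single_impact(1.0))
```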

R\'enyi's information provides a theoretical foundation for tractable and data-efficient non-parametric density estimation, based on pair-wise evaluations in a reproducing kernel Hilbert space (RKHS). This paper extends this framework to parametric probabilistic modeling, motivated by the fact that R\'enyi's information can be estimated in closed form for Gaussian mixtures. Based on this special connection, a novel generative model framework called the structured generative model (SGM) is proposed that makes straightforward optimization possible, because the costs are scale-invariant, avoiding high gradient variance while imposing fewer restrictions on absolute continuity, which is a major advantage in parametric information-theoretic optimization. The implementation employs a single neural network driven by an orthonormal input appended to a single white noise source, adapted to learn an infinite Gaussian mixture model (IMoG), which provides an empirically tractable model distribution in low dimensions. To train SGM, we provide three novel variational cost functions, based on R\'enyi's second-order entropy and divergence, that implement minimization of cross-entropy, minimization of variational representations of $f$-divergence, and maximization of the evidence lower bound (conditional probability). We test the framework on estimation of mutual information, where we compare the results with mutual information neural estimation (MINE), on density estimation, on conditional probability estimation in Markov models, and on training adversarial networks. Our preliminary results show that SGM improves on MINE in terms of data efficiency and variance, outperforms conventional and variational Gaussian mixture models, and improves the performance of generative adversarial networks.
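The closed-form estimability mentioned above rests on a standard Gaussian identity, $\int \mathcal{N}(x;\mu_i,\Sigma_i)\,\mathcal{N}(x;\mu_j,\Sigma_j)\,dx = \mathcal{N}(\mu_i;\mu_j,\Sigma_i+\Sigma_j)$, so R\'enyi's second-order entropy $H_2(p) = -\log \int p(x)^2\,dx$ of a Gaussian mixture is available exactly. A short Python sketch of this computation (independent of the paper's implementation):

```python
import numpy as np
from scipy.stats import multivariate_normal

def renyi2_entropy_gmm(weights, means, covs):
    """Closed-form Renyi second-order entropy of a Gaussian mixture.

    Uses  int N(x; m_i, S_i) N(x; m_j, S_j) dx = N(m_i; m_j, S_i + S_j),
    so H_2(p) = -log sum_{i,j} w_i w_j N(m_i; m_j, S_i + S_j).
    """
    ip = 0.0
    for wi, mi, Si in zip(weights, means, covs):
        for wj, mj, Sj in zip(weights, means, covs):
            ip += wi * wj * multivariate_normal.pdf(mi, mean=mj, cov=Si + Sj)
    return -np.log(ip)

# Example: two-component mixture in two dimensions
w = [0.5, 0.5]
mu = [np.zeros(2), np.ones(2)]
S = [np.eye(2), 0.5 * np.eye(2)]
print(renyi2_entropy_gmm(w, mu, S))
```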

We study bivariate stochastic recurrence equations with triangular matrix coefficients, and we characterize the tail behavior of their stationary solutions ${\bf W} =(W_1,W_2)$. Recently it has been observed that $W_1,W_2$ may exhibit regularly varying tails with different indices, which is in contrast to well-known Kesten-type results; however, only partial results have been derived. Under typical "Kesten-Goldie" and "Grey" conditions, we completely characterize the tail behavior of $W_1,W_2$. The tail asymptotics we obtain have not been observed in previous settings of stochastic recurrence equations.
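For concreteness, the setting can be written (in notation assumed here for illustration) as the recursion
\[
\mathbf{W}_t = A_t \mathbf{W}_{t-1} + \mathbf{B}_t, \qquad
A_t = \begin{pmatrix} A_{11,t} & A_{12,t} \\ 0 & A_{22,t} \end{pmatrix},
\]
with $(A_t, \mathbf{B}_t)$ i.i.d., whose stationary solution satisfies $\mathbf{W} \stackrel{d}{=} A\mathbf{W} + \mathbf{B}$ with $\mathbf{W}$ independent of $(A,\mathbf{B})$. The triangular structure means that $W_2$ solves a one-dimensional Kesten-Goldie equation on its own, while $W_1$ is driven both by its own coefficient and by $W_2$; this asymmetry is the mechanism allowing the two components to have different regular-variation indices.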

Stochastic gradient Markov chain Monte Carlo (SGMCMC) is considered the gold standard for Bayesian inference in large-scale models, such as Bayesian neural networks. Since practitioners face speed versus accuracy tradeoffs in these models, variational inference (VI) is often the preferable option. Unfortunately, VI makes strong assumptions on both the factorization and functional form of the posterior. In this work, we propose a new non-parametric variational approximation that makes no assumptions about the approximate posterior's functional form and allows practitioners to specify the exact dependencies the algorithm should respect or break. The approach relies on a new Langevin-type algorithm that operates on a modified energy function, where parts of the latent variables are averaged over samples from earlier iterations of the Markov chain. This way, statistical dependencies can be broken in a controlled way, allowing the chain to mix faster. This scheme can be further modified in a "dropout" manner, leading to even more scalability. By implementing the scheme on a ResNet-20 architecture, we obtain better predictive likelihoods and larger effective sample sizes than full SGMCMC.
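A toy Python sketch of a Langevin-type update on a modified energy, where a chosen block of latent variables is evaluated at an average over earlier chain samples (a simplified reading of the scheme above; the function names and the plain averaging rule are assumptions, not the authors' exact algorithm):

```python
import numpy as np

def modified_energy_langevin(grad_U, z0, idx_break, n_iter=1000,
                             step=1e-3, hist_len=50, rng=None):
    """Toy Langevin sampler on a modified energy (illustrative sketch).

    For coordinates in `idx_break`, the energy gradient is evaluated at
    the running average of earlier chain samples instead of the current
    state, breaking statistical dependencies between those coordinates
    and the rest in a controlled way.  `grad_U(z)` is an assumed
    function returning the gradient of the energy at z.
    """
    rng = np.random.default_rng() if rng is None else rng
    z = z0.copy()
    history = [z.copy()]
    samples = []
    for _ in range(n_iter):
        z_eval = z.copy()
        # substitute the average of earlier samples for the "broken" block
        z_eval[idx_break] = np.mean([h[idx_break] for h in history], axis=0)
        g = grad_U(z_eval)
        z = z - step * g + np.sqrt(2 * step) * rng.standard_normal(z.shape)
        history.append(z.copy())
        if len(history) > hist_len:
            history.pop(0)
        samples.append(z.copy())
    return np.array(samples)
```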

The connectivity of a graph is an important parameter to measure its reliability. Structure and substructure connectivity are two novel generalizations of the connectivity. In this paper, we characterize the complexity of determining structure and substructure connectivity of graphs, showing that they are both NP-complete.

Unpaired image-to-image translation has been applied successfully to natural images but has received very little attention for manifold-valued data such as in diffusion tensor imaging (DTI). The non-Euclidean nature of DTI prevents current generative adversarial networks (GANs) from generating plausible images and has mainly limited their application to diffusion MRI scalar maps, such as fractional anisotropy (FA) or mean diffusivity (MD). While these scalar maps are clinically useful, they mostly ignore fiber orientations and therefore have limited use for analyzing brain fibers. Here, we propose a manifold-aware CycleGAN that learns the generation of high-resolution DTI from unpaired T1w images. We formulate the objective as a Wasserstein distance minimization problem of data distributions on a Riemannian manifold of symmetric positive definite 3x3 matrices SPD(3), using adversarial and cycle-consistency losses. To ensure that the generated diffusion tensors lie on the SPD(3) manifold, we exploit the theoretical properties of the exponential and logarithm maps of the Log-Euclidean metric. We demonstrate that, unlike standard GANs, our method is able to generate realistic high-resolution DTI that can be used to compute diffusion-based metrics and potentially run fiber tractography algorithms. To evaluate our model's performance, we compute the cosine similarity between the generated tensors' principal orientations and their ground-truth orientations, the mean squared error (MSE) of their derived FA values, and the Log-Euclidean distance between the tensors. We demonstrate that our method produces 2.5 times better FA MSE than a standard CycleGAN and up to 30% better cosine similarity than a manifold-aware Wasserstein GAN while synthesizing sharp high-resolution DTI.
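A minimal Python sketch of the Log-Euclidean machinery referred to above: any symmetric network output is mapped onto SPD(3) with the matrix exponential, and distances are measured after the matrix logarithm. These are the standard exponential/logarithm maps of the Log-Euclidean metric; the surrounding GAN pipeline is omitted.

```python
import numpy as np

def spd_exp(sym):
    """Symmetric 3x3 matrix -> SPD(3) via the matrix exponential:
    exp(S) = V diag(exp(w)) V^T with (w, V) the eigendecomposition."""
    w, V = np.linalg.eigh(sym)
    return (V * np.exp(w)) @ V.T

def spd_log(spd):
    """Inverse map: matrix logarithm of an SPD matrix."""
    w, V = np.linalg.eigh(spd)
    return (V * np.log(w)) @ V.T

def log_euclidean_distance(P, Q):
    """Log-Euclidean distance: ||log(P) - log(Q)||_F."""
    return np.linalg.norm(spd_log(P) - spd_log(Q))

# Any symmetric output maps to a valid diffusion tensor:
S = np.random.randn(3, 3); S = 0.5 * (S + S.T)
D = spd_exp(S)                       # guaranteed SPD
assert np.all(np.linalg.eigvalsh(D) > 0)
```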

Graph neural networks (GNNs) are a popular class of machine learning models whose major advantage is their ability to incorporate a sparse and discrete dependency structure between data points. Unfortunately, GNNs can only be used when such a graph structure is available. In practice, however, real-world graphs are often noisy and incomplete or might not be available at all. With this work, we propose to jointly learn the graph structure and the parameters of graph convolutional networks (GCNs) by approximately solving a bilevel program that learns a discrete probability distribution on the edges of the graph. This allows one to apply GCNs not only in scenarios where the given graph is incomplete or corrupted but also in those where a graph is not available. We conduct a series of experiments that analyze the behavior of the proposed method and demonstrate that it outperforms related methods by a significant margin.
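As an illustration of the inner building block (not the bilevel optimization itself), the following Python sketch samples a graph from learned Bernoulli edge probabilities and applies one GCN propagation step with the normalized sampled adjacency; all names here are assumptions for the sketch.

```python
import numpy as np

def sampled_gcn_layer(theta, H, W, rng=None):
    """One GCN propagation with a sampled graph (illustrative sketch).

    theta : (n, n) Bernoulli edge probabilities (the learned discrete
            distribution over graph edges)
    H     : (n, d) node features;  W : (d, d') layer weights
    """
    rng = np.random.default_rng() if rng is None else rng
    A = (rng.random(theta.shape) < theta).astype(float)
    A = np.maximum(A, A.T)                  # symmetrize the sample
    A_hat = A + np.eye(A.shape[0])          # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt
    return np.maximum(A_norm @ H @ W, 0.0)  # ReLU(A_norm H W)
```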

Matter evolved under the influence of gravity from minuscule density fluctuations. Non-perturbative structure formed hierarchically over all scales and developed the non-Gaussian features of the Universe known as the Cosmic Web. Fully understanding the structure formation of the Universe is one of the holy grails of modern astrophysics. Astrophysicists survey large volumes of the Universe and employ a large ensemble of computer simulations to compare with the observed data in order to extract the full information of our own Universe. However, evolving trillions of galaxies over billions of years, even with the simplest physics, is a daunting task. We build a deep neural network, the Deep Density Displacement Model (hereafter D$^3$M), to predict the non-linear structure formation of the Universe from simple linear perturbation theory. Our extensive analysis demonstrates that D$^3$M outperforms second-order perturbation theory (hereafter 2LPT), the commonly used fast approximate simulation method, in point-wise comparison, 2-point correlation, and 3-point correlation. We also show that D$^3$M is able to accurately extrapolate far beyond its training data and predict structure formation for significantly different cosmological parameters. Our study proves, for the first time, that deep learning is a practical and accurate alternative to approximate simulations of the gravitational structure formation of the Universe.

Recurrent neural networks (RNNs) provide state-of-the-art performance in processing sequential data but are memory intensive to train, limiting the flexibility of the RNN models that can be trained. Reversible RNNs---RNNs for which the hidden-to-hidden transition can be reversed---offer a path to reduce the memory requirements of training, as hidden states need not be stored and can instead be recomputed during backpropagation. We first show that perfectly reversible RNNs, which require no storage of the hidden activations, are fundamentally limited because they cannot forget information from their hidden state. We then provide a scheme for storing a small number of bits in order to allow perfect reversal with forgetting. Our method achieves comparable performance to traditional models while reducing the activation memory cost by a factor of 10--15. We extend our technique to attention-based sequence-to-sequence models, where it maintains performance while reducing activation memory cost by a factor of 5--10 in the encoder, and a factor of 10--15 in the decoder.
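For intuition, here is a generic additive-coupling construction of a perfectly reversible transition in Python, of the kind the abstract argues cannot forget: the previous hidden state is recoverable exactly, so no activations need to be stored. This is an illustrative scheme, not the paper's specific architecture (which additionally stores a small number of bits to permit forgetting).

```python
import numpy as np

def tanh_mlp(x, W, b):
    return np.tanh(x @ W + b)

def rev_forward(h1, h2, x, params):
    """Reversible additive-coupling transition (sketch).

    h1' = h1 + f([h2, x]);  h2' = h2 + g([h1', x]).
    Because each update is additive, the previous hidden state can be
    recomputed exactly from (h1', h2') during backpropagation.
    Shapes: h1, h2 are (d,); x is (k,); W1, W2 are (d + k, d).
    """
    W1, b1, W2, b2 = params
    h1n = h1 + tanh_mlp(np.concatenate([h2, x]), W1, b1)
    h2n = h2 + tanh_mlp(np.concatenate([h1n, x]), W2, b2)
    return h1n, h2n

def rev_backward(h1n, h2n, x, params):
    """Exact inverse of rev_forward: reconstructs (h1, h2)."""
    W1, b1, W2, b2 = params
    h2 = h2n - tanh_mlp(np.concatenate([h1n, x]), W2, b2)
    h1 = h1n - tanh_mlp(np.concatenate([h2, x]), W1, b1)
    return h1, h2
```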
