
We present an algorithm for compressing the radiosity view factor model commonly used in radiation heat transfer and computer graphics. We use a format inspired by the hierarchical off-diagonal low rank format, where elements are recursively partitioned using a quadtree or octree and blocks are compressed using a sparse singular value decomposition -- the hierarchical matrix is assembled using dynamic programming. The motivating application is time-dependent thermal modeling on vast planetary surfaces, with a focus on permanently shadowed craters which receive energy through indirect irradiance. In this setting, shape models are composed of a large number of triangular facets which conform to a rough surface. At each time step, a quadratic number of triangle-to-triangle scattered fluxes must be summed; that is, as the sun moves through the sky, we must solve the same view factor system of equations for a potentially unlimited number of time-varying right-hand sides. We first conduct numerical experiments with a synthetic spherical cap-shaped crater, where the equilibrium temperature is analytically available. We also test our implementation with triangle meshes of planetary surfaces derived from digital elevation models recovered by orbiting spacecraft. Our results indicate that the compressed view factor matrix can be assembled in quadratic time, which is comparable to the time it takes to assemble the full view factor matrix itself. Memory requirements during assembly are reduced by a large factor. Finally, for a range of compression tolerances, the size of the compressed view factor matrix and the speed of the resulting matrix-vector product both scale linearly (as opposed to quadratically for the full matrix), resulting in orders of magnitude savings in processing time and memory space.
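The block-compression step described above can be sketched with a truncated SVD: an off-diagonal block coupling two well-separated surface patches is numerically low-rank, so only the leading singular triplets need to be stored. This is a minimal illustration of the idea, not the paper's implementation; the kernel below is a synthetic smooth stand-in for a real view factor block.

```python
import numpy as np

def compress_block(F_block, tol=1e-8):
    """Truncated-SVD compression of one off-diagonal block: keep singular
    values above a relative tolerance, returning factors A (m x k), B (k x n)."""
    U, s, Vt = np.linalg.svd(F_block, full_matrices=False)
    k = int(np.sum(s > tol * s[0]))          # retained rank
    return U[:, :k] * s[:k], Vt[:k, :]

rng = np.random.default_rng(0)
# Synthetic smooth kernel between two separated patches (illustrative,
# not an actual view factor): smoothness implies rapid singular value decay.
x, y = rng.random(200), rng.random(300)
F = 1.0 / (1.0 + np.abs(x[:, None] - y[None, :] + 2.0)) ** 2
A, B = compress_block(F)
rel_err = np.linalg.norm(F - A @ B) / np.linalg.norm(F)
print(A.shape[1])      # retained rank, far below min(200, 300)
print(rel_err < 1e-6)  # reconstruction error controlled by the tolerance
```

Storing `A` and `B` instead of the full block reduces the cost of one block matrix-vector product from O(mn) to O((m+n)k), which is the source of the linear scaling reported for the compressed matrix.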

Related Content

Besov priors are nonparametric priors that can model spatially inhomogeneous functions. They are routinely used in inverse problems and imaging, where they exhibit attractive sparsity-promoting and edge-preserving features. A recent line of work has initiated the study of their asymptotic frequentist convergence properties. In the present paper, we consider the theoretical recovery performance of the posterior distributions associated to Besov-Laplace priors in the density estimation model, under the assumption that the observations are generated by a possibly spatially inhomogeneous true density belonging to a Besov space. We improve on existing results and show that carefully tuned Besov-Laplace priors attain optimal posterior contraction rates. Furthermore, we show that hierarchical procedures involving a hyper-prior on the regularity parameter lead to adaptation to any smoothness level.

Recently developed iterative and deep learning-based approaches to computer-generated holography (CGH) have been shown to achieve high-quality photorealistic 3D images with spatial light modulators (SLMs). However, such approaches remain overly cumbersome for patterning sparse collections of target points across a photoresponsive volume in applications including biological microscopy and material processing. Specifically, in addition to requiring heavy computation that cannot accommodate real-time operation in mobile or hardware-light settings, existing sampling-dependent 3D CGH methods preclude the ability to place target points with arbitrary precision, limiting accessible depths to a handful of planes. Accordingly, we present a non-iterative point cloud holography algorithm that employs fast deterministic calculations in order to efficiently allocate patches of SLM pixels to different target points in the 3D volume and spread the patterning of all points across multiple time frames. Compared to a matched-performance implementation of the iterative Gerchberg-Saxton algorithm, our algorithm's relative computation speed advantage was found to increase with SLM pixel count, exceeding 100,000x for a 512x512 array format.

For a singular integral equation on an interval of the real line, we study the behavior of the error of a delta-delta discretization. We show that the convergence is non-uniform: it is of order $O(h^{2})$ in the interior of the interval, while in a boundary layer near the endpoints the consistency error does not tend to zero.
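Interior rates like the $O(h^{2})$ above are typically verified numerically by comparing errors on two grid sizes. A generic sketch of that observed-order computation, with illustrative error values rather than data from the paper:

```python
import math

def observed_order(e1, e2, h1, h2):
    """Estimated convergence order p from errors e ~ C * h^p on two grids:
    p = log(e1/e2) / log(h1/h2)."""
    return math.log(e1 / e2) / math.log(h1 / h2)

# Errors behaving like C * h^2 (halving h quarters the error) report p near 2.
p = observed_order(4e-4, 1e-4, 0.02, 0.01)
print(p)  # -> 2.0
```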

In this article we analyze the error produced by the removal of an arbitrary knot from a spline function. When a knot has multiplicity greater than one, this implies a reduction of its multiplicity by one unit. In particular, we deduce a very simple formula to compute the error in terms of some neighboring knots and a few control points of the considered spline. Furthermore, we show precisely how this error is related to the jump of a derivative of the spline at the knot. We then use the developed theory to propose efficient and very low-cost local error indicators and adaptive coarsening algorithms. Finally, we present some numerical experiments to illustrate their performance and show some applications.
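The relation between the removal error and a derivative jump is easiest to see in the degree-one (piecewise-linear) case, sketched below; the general-degree formula in the article is analogous but involves neighboring knots and control points. This is an illustrative special case, not the paper's formula.

```python
def knot_removal_error(tm, tj, tp, fm, fj, fp):
    """Exact error at interior knot tj of a piecewise-linear spline after
    the two segments are replaced by one chord (the knot is removed)."""
    chord = fm + (fp - fm) * (tj - tm) / (tp - tm)
    return fj - chord

def derivative_jump(tm, tj, tp, fm, fj, fp):
    """Jump of the first derivative across the knot tj."""
    return (fp - fj) / (tp - tj) - (fj - fm) / (tj - tm)

tm, tj, tp = 0.0, 0.4, 1.0   # neighboring knots and the knot to remove
fm, fj, fp = 1.0, 2.0, 1.5   # spline values there
err = knot_removal_error(tm, tj, tp, fm, fj, fp)
jump = derivative_jump(tm, tj, tp, fm, fj, fp)
# In this degree-1 case: err == -jump * (tp - tj)*(tj - tm)/(tp - tm),
# i.e. the removal error is a local quantity driven by the derivative jump.
print(abs(err + jump * (tp - tj) * (tj - tm) / (tp - tm)) < 1e-12)
```

Since the error depends only on nearby knots and values, it can serve as a very low-cost local indicator for adaptive coarsening, in the spirit of the algorithms described above.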

Atmospheric retrievals (AR) of exoplanets typically rely on a combination of a Bayesian inference technique and a forward simulator to estimate atmospheric properties from an observed spectrum. A key component in simulating spectra is the pressure-temperature (PT) profile, which describes the thermal structure of the atmosphere. Current AR pipelines commonly use ad hoc fitting functions here that limit the retrieved PT profiles to simple approximations, but still use a relatively large number of parameters. In this work, we introduce a conceptually new, data-driven parameterization scheme for physically consistent PT profiles that does not require explicit assumptions about the functional form of the PT profiles and uses fewer parameters than existing methods. Our approach consists of a latent variable model (based on a neural network) that learns a distribution over functions (PT profiles). Each profile is represented by a low-dimensional vector that can be used to condition a decoder network that maps $P$ to $T$. When training and evaluating our method on two publicly available datasets of self-consistent PT profiles, we find that our method achieves, on average, better fit quality than existing baseline methods, despite using fewer parameters. In an AR based on existing literature, our model (using two parameters) produces a tighter, more accurate posterior for the PT profile than the five-parameter polynomial baseline, while also speeding up the retrieval by more than a factor of three. By providing parametric access to physically consistent PT profiles, and by reducing the number of parameters required to describe a PT profile (thereby reducing computational cost or freeing resources for additional parameters of interest), our method can help improve AR and thus our understanding of exoplanet atmospheres and their habitability.
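The conditioning idea — a low-dimensional latent vector parameterizing a decoder that maps pressure to temperature — can be sketched with a tiny feed-forward network. The weights below are random placeholders standing in for a trained model; the architecture and dimensions are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy decoder weights (a trained model would supply these): input is
# (log P, z1, z2), i.e. pressure plus a 2-dimensional latent vector z.
W1, b1 = rng.normal(size=(16, 3)), rng.normal(size=16)
W2, b2 = rng.normal(size=(1, 16)), rng.normal(size=1)

def pt_profile(z, log_p):
    """Evaluate the decoder at every pressure level, conditioned on z.
    One z vector thus selects one smooth function P -> T."""
    x = np.stack([log_p,
                  np.full_like(log_p, z[0]),
                  np.full_like(log_p, z[1])])
    h = np.tanh(W1 @ x + b1[:, None])
    return (W2 @ h + b2[:, None]).ravel()   # temperature at each level

log_p = np.linspace(-6, 2, 100)             # illustrative log-pressure grid
T = pt_profile(np.array([0.3, -0.5]), log_p)
print(T.shape)  # (100,) -- a full profile from just two latent parameters
```

In a retrieval, the sampler would explore the two latent components instead of five or more ad hoc fitting parameters, which is where the reported speed-up and tighter posteriors come from.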

This paper introduces a formulation of the variable density incompressible Navier-Stokes equations by modifying the nonlinear terms in a consistent way. For Galerkin discretizations, the formulation leads to full discrete conservation of mass, squared density, momentum, angular momentum and kinetic energy without the divergence-free constraint being strongly enforced. In addition to favorable conservation properties, the formulation is shown to make the density field invariant to global shifts. The effect of viscous regularizations on conservation properties is also investigated. Numerical tests validate the theory developed in this work. The new formulation shows superior performance compared to other formulations from the literature, both in terms of accuracy for smooth problems and in terms of robustness.

An efficient method of computing power expansions of algebraic functions is that of Kung and Traub, which is based on exact arithmetic. This paper shows that a numeric approach is both feasible and accurate, while also introducing a performance improvement to Kung and Traub's method based on the ramification extent of the expansions. A new method is then described for computing radii of convergence using a series comparison test. Series accuracies are then fitted to a simple log-linear function in their domain of convergence and found to have low variance. Algebraic functions up to degree 50 were analyzed and timed. A byproduct of this work is a simple method of computing the genus of the Riemann surface, which was used as a cycle checksum. Mathematica ver. 13.2 was used to acquire and analyze the data on a 4.0 GHz quad-core desktop computer.
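As background for the radius-of-convergence step, a standard coefficient-based estimate (the root test) is sketched below on a series with a known radius; this is a baseline illustration, not the paper's series comparison test. The example uses $\sqrt{1+x}=\sum_n \binom{1/2}{n}x^n$, whose radius is 1.

```python
def binom_half(n):
    """Generalized binomial coefficient binom(1/2, n)."""
    c = 1.0
    for k in range(n):
        c *= (0.5 - k) / (k + 1)
    return c

def radius_estimate(coeffs):
    """Root-test estimate R ~ |a_n|^(-1/n) from the last coefficient."""
    n = len(coeffs) - 1
    return abs(coeffs[-1]) ** (-1.0 / n)

coeffs = [binom_half(n) for n in range(80)]
R = radius_estimate(coeffs)
print(R)  # slowly approaches the true radius 1 (algebraic branch point)
```

The slow, algebraic approach to the limit for branch-point singularities is one motivation for more refined tests such as the comparison test described above.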

We give a categorical treatment, in the spirit of Baez and Fritz, of relative entropy for probability distributions defined on standard Borel spaces. We define a category suitable for reasoning about statistical inference on standard Borel spaces. We define relative entropy as a functor into Lawvere's category and we show convexity, lower semicontinuity and uniqueness.
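For concreteness, the quantity the functor assigns on finite probability distributions is the usual relative entropy (Kullback-Leibler divergence), computed below; the categorical content above concerns its functoriality and characterization, not this formula itself.

```python
import math

def relative_entropy(p, q):
    """D(p || q) = sum_i p_i * log(p_i / q_i), with the 0*log(0) = 0 convention."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p, q = [0.5, 0.5], [0.9, 0.1]
print(relative_entropy(p, q) >= 0.0)  # nonnegativity (Gibbs' inequality)
print(relative_entropy(p, p) == 0.0)  # vanishes iff the distributions agree
```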

Hashing has been widely used in approximate nearest neighbor search for large-scale database retrieval because of its computational and storage efficiency. Deep hashing, which devises convolutional neural network architectures to exploit and extract the semantic information or features of images, has received increasing attention recently. In this survey, several deep supervised hashing methods for image retrieval are evaluated, and I identify three main directions for deep supervised hashing methods. Several comments are made at the end. Moreover, to break through the bottleneck of existing hashing methods, I propose a Shadow Recurrent Hashing (SRH) method as an attempt. Specifically, I devise a CNN architecture to extract the semantic features of images and design a loss function to encourage similar images to be projected close together. To this end, I propose a concept: the shadow of the CNN output. During the optimization process, the CNN output and its shadow guide each other so as to approach the optimal solution as closely as possible. Several experiments on the CIFAR-10 dataset show the satisfying performance of SRH.
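The kind of objective described above — pulling codes of similar images together while keeping codes binarizable — can be sketched with a standard pairwise hashing loss. This is a generic illustration of the family, not the SRH loss; the margin and weight values are illustrative.

```python
import numpy as np

def pairwise_hash_loss(u, v, similar, margin=4.0, lam=0.1):
    """Contrastive-style loss on continuous codes u, v: similar pairs are
    pulled together, dissimilar pairs pushed beyond a margin, and a
    quantization term pulls entries toward +/-1 for later binarization."""
    d = np.sum((u - v) ** 2)
    sim_term = d if similar else max(0.0, margin - d)
    quant = lam * (np.sum((np.abs(u) - 1) ** 2) + np.sum((np.abs(v) - 1) ** 2))
    return sim_term + quant

u = np.array([0.9, -0.8, 0.7])   # nearly binary codes of two similar images
v = np.array([1.0, -0.9, 0.6])
loss_sim = pairwise_hash_loss(u, v, similar=True)
loss_dis = pairwise_hash_loss(u, v, similar=False)
print(loss_sim)  # small: codes agree and are close to +/-1
```

Final binary codes are then obtained by taking the sign of each entry, which the quantization term keeps nearly lossless.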

Graph representation learning for hypergraphs can be used to extract patterns among higher-order interactions that are critically important in many real world problems. Current approaches designed for hypergraphs, however, are unable to handle different types of hypergraphs and are typically not generic for various learning tasks. Indeed, models that can predict variable-sized heterogeneous hyperedges have not been available. Here we develop a new self-attention based graph neural network called Hyper-SAGNN applicable to homogeneous and heterogeneous hypergraphs with variable hyperedge sizes. We perform extensive evaluations on multiple datasets, including four benchmark network datasets and two single-cell Hi-C datasets in genomics. We demonstrate that Hyper-SAGNN significantly outperforms the state-of-the-art methods on traditional tasks while also achieving great performance on a new task called outsider identification. Hyper-SAGNN will be useful for graph representation learning to uncover complex higher-order interactions in different applications.
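The core mechanism — a position-free "static" embedding per node paired with a "dynamic" embedding computed by self-attention over the other hyperedge members, with the edge score built from their differences — can be sketched as follows. This is a simplified reading of the architecture with random placeholder weights, omitting details such as the multi-head attention and the final classifier.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))  # placeholder weights

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def edge_score(X):
    """Score a candidate hyperedge given node features X of shape
    (n_nodes, d); n_nodes may vary, which is the point of the design."""
    static = np.tanh(X)                    # per-node, context-free embedding
    Q, K, V = X @ Wq.T, X @ Wk.T, X @ Wv.T
    A = softmax(Q @ K.T / np.sqrt(d))      # attention among edge members
    dynamic = np.tanh(A @ V)               # context-dependent embedding
    return float(np.mean((dynamic - static) ** 2))  # fed to a classifier head

s3 = edge_score(rng.normal(size=(3, d)))   # a size-3 hyperedge ...
s5 = edge_score(rng.normal(size=(5, d)))   # ... and a size-5 one, same model
print(s3 > 0 and s5 > 0)
```

Because attention is permutation-invariant and indifferent to the number of members, the same parameters handle homogeneous or heterogeneous hyperedges of any size.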
