
We propose an innovative isogeometric space-time method for the heat equation, with smooth spline approximation in both space and time. To enhance the stability of the method, we add a stabilizing term based on a linear combination of high-order artificial diffusions. This term is designed to make the linear system lower block-triangular, that is, lower triangular with respect to time. To retain optimal accuracy, the stabilization terms are further weighted according to the residual. Through a series of numerical experiments, we validate the method, demonstrating its stability and accuracy.
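As a rough illustration (not the exact formulation of the paper), a stabilized space-time Galerkin formulation of the heat equation on $\Omega \times (0,T)$ can be written as: find $u_h$ in a space-time spline space $V_h$ such that

\[
\int_0^T\!\!\int_\Omega \partial_t u_h\, v_h \,dx\,dt
\;+\; \int_0^T\!\!\int_\Omega \nabla u_h \cdot \nabla v_h \,dx\,dt
\;+\; s(u_h, v_h)
\;=\; \int_0^T\!\!\int_\Omega f\, v_h \,dx\,dt
\qquad \text{for all } v_h \in V_h,
\]

with a stabilization of the schematic form

\[
s(u_h, v_h) \;=\; \sum_{k \ge 1} \delta_k \int_0^T\!\!\int_\Omega \partial_t^{\,k} u_h \,\partial_t^{\,k} v_h \,dx\,dt ,
\]

where the coefficients $\delta_k$ are placeholders for the residual-weighted coefficients of the high-order artificial diffusions described above.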

Related Content

Bayesian optimization (BO) has developed as an approach for the efficient optimization of expensive black-box functions without gradient information. A typical BO paper introduces a new approach and compares it to some alternatives on simulated and possibly real examples to show its efficacy; yet on a different example, the new algorithm might not be as effective as the alternatives. This paper looks at a broader family of approaches to explain the strengths and weaknesses of algorithms in the family, with guidance on which choices are likely to work best on different classes of problems.
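Since the paper surveys a family of BO approaches rather than a single algorithm, the following is a generic, minimal sketch of the common loop the family builds on (a Gaussian-process surrogate plus an expected-improvement acquisition); all modelling choices and hyperparameters here are illustrative assumptions, not the specific variants compared in the paper.

# Minimal sketch of a Bayesian optimization loop: a Gaussian-process surrogate
# with an RBF kernel and an expected-improvement acquisition optimized by
# random candidate search. Length scale, jitter, and candidate count are
# illustrative assumptions.
import numpy as np
from scipy.stats import norm

def rbf_kernel(A, B, length_scale=0.2):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length_scale ** 2)

def gp_posterior(X, y, Xstar, jitter=1e-5):
    K = rbf_kernel(X, X) + jitter * np.eye(len(X))
    Ks = rbf_kernel(X, Xstar)
    Kss = rbf_kernel(Xstar, Xstar)
    alpha = np.linalg.solve(K, y)
    mu = Ks.T @ alpha
    cov = Kss - Ks.T @ np.linalg.solve(K, Ks)
    return mu, np.sqrt(np.clip(np.diag(cov), 1e-12, None))

def expected_improvement(mu, sigma, best):
    z = (best - mu) / sigma            # minimization convention
    return (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

def objective(x):                       # stand-in for an expensive black box
    return np.sin(3 * x[0]) + x[0] ** 2

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(3, 1))     # initial design
y = np.array([objective(x) for x in X])
for _ in range(20):
    cand = rng.uniform(-2, 2, size=(512, 1))
    mu, sigma = gp_posterior(X, y, cand)
    x_next = cand[np.argmax(expected_improvement(mu, sigma, y.min()))]
    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next))
print("best x:", X[np.argmin(y)], "best value:", y.min())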

We develop a Bayesian inference method for discretely-observed stochastic differential equations (SDEs). Inference is challenging for most SDEs, due to the analytical intractability of the likelihood function. Nevertheless, forward simulation via numerical methods is straightforward, motivating the use of approximate Bayesian computation (ABC). We propose a conditional simulation scheme for SDEs that is based on lookahead strategies for sequential Monte Carlo (SMC) and particle smoothing using backward simulation. This leads to the simulation of trajectories that are consistent with the observed trajectory, thereby increasing the ABC acceptance rate. We additionally employ an invariant neural network, previously developed for Markov processes, to learn the summary statistics function required in ABC. The neural network is incrementally retrained by exploiting an ABC-SMC sampler, which provides new training data at each round. Since the SDE simulation scheme differs from standard forward simulation, we propose a suitable importance sampling correction, which has the added advantage of guiding the parameters towards regions of high posterior density, especially in the first ABC-SMC round. Our approach achieves accurate inference and is about three times faster than standard (forward-only) ABC-SMC. We illustrate our method in four simulation studies, including three examples from the Chan-Karolyi-Longstaff-Sanders SDE family.
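As background on how forward simulation enters ABC, the following is a minimal sketch of Euler-Maruyama simulation of a CKLS-type SDE together with a basic rejection-ABC step. It uses plain forward simulation and hand-picked summaries, not the paper's conditional (lookahead) scheme or neural summary statistics; the summaries, prior ranges, and tolerance are illustrative assumptions.

# Minimal sketch: Euler-Maruyama simulation of a CKLS-type SDE
#   dX_t = kappa*(theta - X_t) dt + sigma * X_t^gamma dW_t
# and a rejection-ABC step over (kappa, sigma) with illustrative summaries.
import numpy as np

def simulate_ckls(kappa, theta, sigma, gamma, x0=0.1, T=1.0, n=200, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    dt = T / n
    x = np.empty(n + 1); x[0] = x0
    for i in range(n):
        drift = kappa * (theta - x[i])
        diff = sigma * max(x[i], 0.0) ** gamma
        x[i + 1] = x[i] + drift * dt + diff * np.sqrt(dt) * rng.standard_normal()
    return x

def summaries(x):
    xc = x - x.mean()
    acf1 = (xc[:-1] * xc[1:]).sum() / (xc * xc).sum()   # lag-1 autocorrelation
    return np.array([x.mean(), acf1])

rng = np.random.default_rng(1)
x_obs = simulate_ckls(kappa=3.0, theta=0.1, sigma=0.2, gamma=0.5, rng=rng)
s_obs = summaries(x_obs)

accepted = []
for _ in range(5000):                        # rejection ABC over (kappa, sigma)
    kappa, sigma = rng.uniform(0.5, 6.0), rng.uniform(0.05, 0.5)
    s_sim = summaries(simulate_ckls(kappa, 0.1, sigma, 0.5, rng=rng))
    if np.linalg.norm(s_sim - s_obs) < 0.05:  # illustrative tolerance
        accepted.append((kappa, sigma))
print(len(accepted), "accepted draws")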

The modeling of time-varying graph signals as stationary time-vertex stochastic processes permits the inference of missing signal values by efficiently exploiting the correlation patterns of the process across different graph nodes and time instants. In this study, we propose an algorithm for learning graph autoregressive moving average (graph ARMA) processes from incomplete realizations, based on estimating the joint time-vertex power spectral density of the process, for the task of signal interpolation. Our solution relies on first roughly estimating the joint spectrum of the process from the partially observed realizations and then refining this estimate by projecting it onto the spectrum manifold of the graph ARMA process through convex relaxations. The initially missing signal values are then estimated from the learned model. Experimental results show that the proposed approach achieves high accuracy in time-vertex signal estimation problems.
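To make the "joint time-vertex power spectral density" concrete, here is a small sketch of how an empirical joint spectrum can be formed from complete realizations (the rough initial estimate; the convex projection onto the graph ARMA spectrum manifold is not reproduced). The random graph and signals below are synthetic placeholders.

# Minimal sketch: empirical joint time-vertex power spectrum of a time-varying
# graph signal, using the graph Fourier transform (Laplacian eigenvectors)
# along the vertex dimension and the DFT along the time dimension.
import numpy as np

rng = np.random.default_rng(0)
N, T, R = 20, 64, 50                          # nodes, time samples, realizations

# Random undirected graph and its combinatorial Laplacian
A = (rng.random((N, N)) < 0.2).astype(float)
A = np.triu(A, 1); A = A + A.T
L = np.diag(A.sum(1)) - A
lam, U = np.linalg.eigh(L)                    # graph Fourier basis U

# Synthetic realizations X of shape (R, N, T) with some temporal correlation
X = rng.standard_normal((R, N, T))
X = np.cumsum(X, axis=2) * 0.1

# Joint spectrum: GFT over vertices, DFT over time, then average over realizations
Xhat = np.fft.fft(U.T @ X, axis=2) / np.sqrt(T)    # shape (R, N, T)
psd = np.mean(np.abs(Xhat) ** 2, axis=0)           # (N, T): graph freq x time freq
print("joint PSD shape:", psd.shape)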

In rank-metric cryptography, a vector from a finite-dimensional linear space over a finite field is viewed as the linear space spanned by its entries. The rank decoding problem, which is the analogue of the problem of decoding a random linear code, consists in recovering a basis of a random noise vector that was used to perturb a set of random linear equations sharing a secret solution. Assuming the intractability of this problem, we introduce a new construction of injective one-way trapdoor functions. Our solution departs from the usual way of building public-key primitives from error-correcting codes where, to establish security, ad hoc assumptions about a hidden structure are made. Our method produces a hard-to-distinguish linear code together with low-weight vectors which constitute the secret that helps recover the inputs. The key idea is to focus on trapdoor functions that take sufficiently many input vectors sharing the same support. Then, applying the error-correcting algorithm designed for Low Rank Parity Check (LRPC) codes, we obtain an inverting algorithm that recovers the inputs with overwhelming probability.
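For readers unfamiliar with the rank metric, the support and rank weight referred to above are the standard notions (not specific to this paper): for $\mathbf{x} = (x_1, \dots, x_n) \in \mathbb{F}_{q^m}^{\,n}$,

\[
\mathrm{Supp}(\mathbf{x}) \;=\; \langle x_1, \dots, x_n \rangle_{\mathbb{F}_q} \subseteq \mathbb{F}_{q^m},
\qquad
\|\mathbf{x}\|_{\mathrm{R}} \;=\; \dim_{\mathbb{F}_q} \mathrm{Supp}(\mathbf{x}).
\]

In its syndrome form, the rank decoding problem asks to recover a basis of $\mathrm{Supp}(\mathbf{e})$ given $\mathbf{H} \in \mathbb{F}_{q^m}^{(n-k)\times n}$ and $\mathbf{s} = \mathbf{H}\mathbf{e}^{\top}$ for a random error $\mathbf{e}$ of small rank weight.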

We establish an efficient approximation algorithm for the partition functions of a class of quantum spin systems at low temperature, which can be viewed as stable quantum perturbations of classical spin systems. Our algorithm is based on combining the contour representation of quantum spin systems of this type due to Borgs, Kotecký, and Ueltschi with the algorithmic framework developed by Helmuth, Perkins, and Regts, and Borgs et al.
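For context, the quantity being approximated is the standard quantum partition function, which in the perturbative regime described above takes the schematic form

\[
Z_\Lambda(\beta) \;=\; \mathrm{Tr}\, e^{-\beta H_\Lambda},
\qquad
H_\Lambda \;=\; H_\Lambda^{\mathrm{cl}} + \lambda\, V_\Lambda ,
\]

where $H_\Lambda^{\mathrm{cl}}$ is diagonal in a product basis (the classical part) and $\lambda V_\Lambda$ is a quantum perturbation; the splitting shown here is an illustrative way to express "stable quantum perturbations of classical spin systems", not the paper's precise hypotheses.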

In color spaces where the chromatic term is given in polar coordinates, the shortest path between colors of the same value is circular. By converting such a space into a complex polar form with a real-valued value axis, a color algebra for combining colors becomes immediately available. In this work, we introduce two complex-space operations that exploit this observation: circular average filtering and circular linear interpolation. These operations produce Archimedean spirals and therefore operate along the shortest paths. We demonstrate that these operations provide an intuitive way to work in certain color spaces and are particularly useful for obtaining better filtering and interpolation results. We present a set of examples based on the perceptually uniform color space CIELAB (L*a*b*) with its polar form CIEHLC. We conclude that representing colors in a complex space with circular operations can provide better visual results by exploiting the strong algebraic properties of the complex space C.
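A small sketch of the circular linear interpolation idea in an LCh-style polar representation, with the lightness kept real and the chromatic part treated as a complex number C*exp(i*h). The inputs are assumed to already be (L, C, h) triples; conversion to and from CIELAB, and the paper's filtering operation, are not shown.

# Minimal sketch: circular linear interpolation between two colors in a polar
# (L, C, h) form. Lightness L and chroma C are interpolated linearly while the
# hue h follows the shorter circular arc, found by wrapping the hue difference
# through the complex exponential.
import numpy as np

def circular_lerp(color0, color1, t):
    L0, C0, h0 = color0
    L1, C1, h1 = color1
    dh = np.angle(np.exp(1j * (h1 - h0)))   # shortest signed hue difference
    L = (1 - t) * L0 + t * L1
    C = (1 - t) * C0 + t * C1
    h = h0 + t * dh
    return L, C, np.mod(h, 2 * np.pi)

# Interpolate from a reddish hue to a bluish hue of equal lightness and chroma
a = (60.0, 40.0, np.deg2rad(20.0))
b = (60.0, 40.0, np.deg2rad(300.0))
for t in np.linspace(0.0, 1.0, 5):
    L, C, h = circular_lerp(a, b, t)
    print(f"t={t:.2f}  L={L:.1f}  C={C:.1f}  h={np.rad2deg(h):6.1f} deg")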

Representing and rendering dynamic scenes has been an important but challenging task. In particular, it is usually hard to maintain high efficiency while accurately modeling complex motions. We introduce 4D Gaussian Splatting (4D-GS) to achieve real-time dynamic scene rendering while also enjoying high training and storage efficiency. An efficient deformation field is constructed to model both Gaussian motions and shape deformations, and adjacent Gaussians are connected via a HexPlane to produce more accurate position and shape deformations. Our 4D-GS method achieves real-time rendering at high resolutions, 70 FPS at 800×800 on an RTX 3090 GPU, while maintaining quality comparable to or higher than previous state-of-the-art methods. More demos and code are available at //guanjunwu.github.io/4dgs/.
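The deformation-field design above can be illustrated with a rough, hypothetical sketch: six axis-aligned feature planes queried at each Gaussian's (x, y, z, t), fused and decoded by a small MLP into position/scale/rotation offsets. This follows the general HexPlane idea but is not the authors' implementation; the grid sizes, feature dimensions, and multiplicative fusion are assumptions for illustration only.

# Illustrative sketch (PyTorch): a HexPlane-style deformation field. Each of
# the six coordinate-pair planes (xy, xz, yz, xt, yt, zt) stores a learnable
# 2D feature grid; features sampled at (x, y, z, t) are fused and decoded by
# a tiny MLP into per-Gaussian offsets.
import torch
import torch.nn as nn
import torch.nn.functional as F

class HexPlaneDeform(nn.Module):
    def __init__(self, feat_dim=16, res=32):
        super().__init__()
        self.pairs = [(0, 1), (0, 2), (1, 2), (0, 3), (1, 3), (2, 3)]
        self.grids = nn.ParameterList(
            [nn.Parameter(0.01 * torch.randn(1, feat_dim, res, res))
             for _ in self.pairs])
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim, 64), nn.ReLU(),
            nn.Linear(64, 3 + 3 + 4))          # d_position, d_scale, d_rotation

    def forward(self, xyzt):                   # xyzt: (N, 4), coordinates in [-1, 1]
        feats = 1.0
        for plane, (i, j) in zip(self.grids, self.pairs):
            coords = xyzt[:, (i, j)].view(1, -1, 1, 2)          # (1, N, 1, 2)
            sampled = F.grid_sample(plane, coords, align_corners=True)
            feats = feats * sampled.view(plane.shape[1], -1).T  # multiplicative fusion
        out = self.mlp(feats)
        return out[:, :3], out[:, 3:6], out[:, 6:]              # dx, ds, dq

model = HexPlaneDeform()
xyzt = torch.rand(128, 4) * 2 - 1              # 128 Gaussian centers and times
d_pos, d_scale, d_rot = model(xyzt)
print(d_pos.shape, d_scale.shape, d_rot.shape)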

We consider the variable selection problem for two-sample tests, aiming to select the most informative variables for distinguishing samples from two groups. To solve this problem, we propose a framework based on the kernel maximum mean discrepancy (MMD). Our approach seeks a group of variables of a pre-specified size that maximizes the variance-regularized MMD statistic. This formulation also corresponds to minimizing the asymptotic type-II error while controlling the type-I error, as studied in the literature. We present mixed-integer programming formulations and develop exact and approximation algorithms with performance guarantees for different choices of kernel functions. Furthermore, we provide a statistical testing power analysis of the proposed framework. Experimental results on synthetic and real datasets demonstrate the superior performance of our approach.
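To make the selection objective concrete, here is a small sketch of the quantity evaluated per candidate subset: the unbiased MMD² estimate with a Gaussian kernel restricted to the chosen coordinates. The variance regularization and the mixed-integer formulations from the paper are not reproduced; the bandwidth and synthetic data are illustrative assumptions.

# Minimal sketch: unbiased MMD^2 between two samples using a Gaussian kernel
# restricted to a selected subset of variables.
import numpy as np

def gaussian_kernel(X, Y, bandwidth):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * bandwidth ** 2))

def mmd2_unbiased(X, Y, selected, bandwidth=1.0):
    Xs, Ys = X[:, selected], Y[:, selected]
    n, m = len(Xs), len(Ys)
    Kxx = gaussian_kernel(Xs, Xs, bandwidth); np.fill_diagonal(Kxx, 0.0)
    Kyy = gaussian_kernel(Ys, Ys, bandwidth); np.fill_diagonal(Kyy, 0.0)
    Kxy = gaussian_kernel(Xs, Ys, bandwidth)
    return (Kxx.sum() / (n * (n - 1))
            + Kyy.sum() / (m * (m - 1))
            - 2 * Kxy.mean())

rng = np.random.default_rng(0)
d = 10
X = rng.standard_normal((200, d))
Y = rng.standard_normal((200, d))
Y[:, :2] += 1.0                       # the groups differ only in variables 0 and 1

print("informative subset  :", mmd2_unbiased(X, Y, [0, 1]))
print("uninformative subset:", mmd2_unbiased(X, Y, [7, 8]))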

Traffic flow prediction is one of the most fundamental tasks of intelligent transportation systems. The complex and dynamic spatial-temporal dependencies make traffic flow prediction quite challenging. Although existing spatial-temporal graph neural networks have achieved prominent results, they often encounter challenges such as (1) relying on a fixed graph, which limits the predictive performance of the model, (2) insufficiently capturing complex spatial-temporal dependencies simultaneously, and (3) paying little attention to spatial-temporal information at different time lengths. In this paper, we propose a Multi-Scale Spatial-Temporal Recurrent Network for traffic flow prediction, namely MSSTRN, which consists of two different recurrent neural networks, a single-step gated recurrent unit and a multi-step gated recurrent unit, to fully capture the complex spatial-temporal information in the traffic data under different time windows. Moreover, we propose a spatial-temporal synchronous attention mechanism that integrates adaptive position graph convolutions into the self-attention mechanism to achieve synchronous capture of spatial-temporal dependencies. We conducted extensive experiments on four real traffic datasets and demonstrated that our model achieves the best prediction accuracy with non-trivial margins over all twenty baseline methods.
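As general background on how recurrence and graph structure can be combined (not the MSSTRN architecture itself), the following is a minimal sketch of a graph-convolutional GRU cell of the kind such spatial-temporal recurrent networks build on; the dimensions and the fixed normalized adjacency are arbitrary assumptions.

# Illustrative sketch (PyTorch): a graph-convolutional GRU cell where each
# gate replaces the dense transform of a GRU with a one-hop aggregation
# A_hat @ X followed by a linear layer.
import torch
import torch.nn as nn

class GraphConvGRUCell(nn.Module):
    def __init__(self, in_dim, hid_dim, A_hat):
        super().__init__()
        self.register_buffer("A_hat", A_hat)        # normalized adjacency (N, N)
        self.W_zr = nn.Linear(in_dim + hid_dim, 2 * hid_dim)
        self.W_h = nn.Linear(in_dim + hid_dim, hid_dim)

    def forward(self, x, h):                         # x: (B, N, in_dim), h: (B, N, hid_dim)
        xh = self.A_hat @ torch.cat([x, h], dim=-1)  # one-hop neighborhood aggregation
        z, r = torch.chunk(torch.sigmoid(self.W_zr(xh)), 2, dim=-1)
        xh_r = self.A_hat @ torch.cat([x, r * h], dim=-1)
        h_tilde = torch.tanh(self.W_h(xh_r))
        return (1 - z) * h + z * h_tilde             # gated state update

N, B, Fin, H = 8, 4, 2, 16
A = torch.rand(N, N); A = (A + A.T) / 2
A_hat = A / A.sum(dim=1, keepdim=True)               # simple row normalization
cell = GraphConvGRUCell(Fin, H, A_hat)
h = torch.zeros(B, N, H)
for t in range(12):                                   # unroll over 12 time steps
    h = cell(torch.randn(B, N, Fin), h)
print(h.shape)                                        # torch.Size([4, 8, 16])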

Sparse CANDECOMP/PARAFAC (CP) decomposition, a generalization of the matrix singular value decomposition to higher-dimensional tensors, is a popular tool for analyzing diverse datasets. On tensors with billions of nonzero entries, computing a CP decomposition is a computationally intensive task. We propose the first distributed-memory implementations of two randomized CP decomposition algorithms, CP-ARLS-LEV and STS-CP, which offer nearly an order-of-magnitude speedup at high decomposition ranks over well-tuned non-randomized decomposition packages. Both algorithms rely on leverage score sampling and enjoy strong theoretical guarantees, each with different time-accuracy tradeoffs. We tailor the communication schedule for our random sampling algorithms, eliminating expensive reduction collectives and forcing communication costs to scale with the random sample count. Finally, we optimize the local storage format for our methods, switching between analogues of the compressed sparse column and compressed sparse row formats to facilitate both random sampling and efficient parallelization of sparse-dense matrix multiplication. Experiments show that our methods are fast and scalable, producing an 11x speedup over SPLATT when computing a decomposition of the billion-scale Reddit tensor on 512 CPU cores in under two minutes.
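As background on the shared building block of the two algorithms, here is a toy sketch of leverage score sampling for an overdetermined least-squares solve. In randomized CP-ALS the design matrix would be a Khatri-Rao product of factor matrices and the leverage scores would only be estimated rather than computed exactly; neither is shown here, and the matrix sizes are arbitrary.

# Minimal sketch: randomized least squares via leverage score sampling.
# Rows of the tall matrix A are sampled with probability proportional to their
# leverage scores, reweighted, and the small sketched problem is solved in
# place of the full one.
import numpy as np

rng = np.random.default_rng(0)
m, n, s = 20000, 32, 2000              # tall system, sample size s << m
A = rng.standard_normal((m, n))
b = A @ rng.standard_normal(n) + 0.01 * rng.standard_normal(m)

Q, _ = np.linalg.qr(A)                  # exact leverage scores = squared row norms of Q
lev = (Q ** 2).sum(axis=1)
p = lev / lev.sum()

idx = rng.choice(m, size=s, replace=True, p=p)
w = 1.0 / np.sqrt(s * p[idx])           # importance-sampling reweighting
x_sketch, *_ = np.linalg.lstsq(w[:, None] * A[idx], w * b[idx], rcond=None)
x_exact, *_ = np.linalg.lstsq(A, b, rcond=None)

print("relative error vs exact solve:",
      np.linalg.norm(x_sketch - x_exact) / np.linalg.norm(x_exact))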
