
In color spaces where the chromatic term is given in polar coordinates, the shortest distance between colors of the same value is circular. By converting such a space into a complex polar form with a real-valued value axis, a color algebra for combining colors is immediately available. In this work, we introduce two complex space operations utilizing this observation: circular average filtering and circular linear interpolation. These operations produce Archimedean spirals, thus guaranteeing that they operate along the shortest paths. We demonstrate that these operations provide an intuitive way to work in certain color spaces and that they are particularly useful for obtaining better filtering and interpolation results. We present a set of examples based on the perceptually uniform color space CIELAB or L*a*b* with its polar form CIEHLC. We conclude that representing colors in a complex space with circular operations can provide better visual results by exploiting the strong algebraic properties of the complex space C.
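As a concrete illustration of circular linear interpolation in a complex polar representation, the sketch below interpolates two colors given as (lightness, chroma, hue) triples, moving the hue along the shorter arc while chroma and lightness vary linearly; the function name and the generic hue/chroma/lightness parameterization are illustrative assumptions, not code from the paper.

```python
import numpy as np

def circular_lerp(L1, c1, h1, L2, c2, h2, t):
    """Interpolate two colors given as (lightness L, chroma c, hue h in radians).

    The chromatic part is treated as a complex number c*exp(i*h); chroma and
    lightness are interpolated linearly, while the hue moves along the shorter
    circular arc, so the path traced in the complex plane is a segment of an
    Archimedean spiral."""
    # shortest signed angular difference, in (-pi, pi]
    dh = np.angle(np.exp(1j * (h2 - h1)))
    h = h1 + t * dh
    c = (1.0 - t) * c1 + t * c2
    L = (1.0 - t) * L1 + t * L2
    return L, c * np.exp(1j * h)          # value axis + complex chromatic term

# midpoint between a hue near 350 deg and one near 10 deg stays near 0 deg,
# not near 180 deg as naive averaging of the raw angles would give
L, z = circular_lerp(60.0, 40.0, np.deg2rad(350), 55.0, 40.0, np.deg2rad(10), 0.5)
print(L, np.abs(z), np.rad2deg(np.angle(z)) % 360)
```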

Related Content

Modern time series forecasting methods, such as Transformer and its variants, have shown strong ability in sequential data modeling. To achieve high performance, they usually rely on redundant or unexplainable structures to model complex relations between variables and tune the parameters with large-scale data. Many real-world data mining tasks, however, lack sufficient variables for relation reasoning, and therefore these methods may not properly handle such forecasting problems. With insufficient data, time series appear to be affected by many exogenous variables, and thus the modeling becomes unstable and unpredictable. To tackle this critical issue, in this paper, we develop a novel algorithmic framework for inferring the intrinsic latent factors implied by the observable time series. The inferred factors are used to form multiple independent and predictable signal components that enable not only sparse relation reasoning for long-term efficiency but also reconstruction of future temporal data for accurate prediction. To achieve this, we introduce three characteristics, i.e., predictability, sufficiency, and identifiability, and model them via powerful deep latent dynamics models to infer the predictable signal components. Empirical results on multiple real datasets show the effectiveness of our method for different kinds of time series forecasting. The statistical analysis validates the predictability of the learned latent factors.
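The paper's deep latent dynamics model is not reproduced here; as a rough, didactic illustration of the underlying idea of forecasting through a few independent, predictable latent components, the sketch below projects a multivariate series onto principal components, fits an independent AR(1) model to each, and maps the component forecasts back to the observation space.

```python
import numpy as np

def forecast_via_latent_components(X, n_components=3, horizon=10):
    """Toy illustration: project a (T, d) series onto a few latent components,
    forecast each component independently with AR(1), and map back to the
    observation space.  (A didactic simplification, not the paper's deep
    latent dynamics model.)"""
    Xc = X - X.mean(axis=0)
    # latent directions = top right singular vectors (PCA)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    W = Vt[:n_components]                     # (k, d) projection
    Z = Xc @ W.T                              # (T, k) latent components
    # independent AR(1) coefficient per component via least squares
    phi = np.array([np.dot(Z[:-1, j], Z[1:, j]) / (np.dot(Z[:-1, j], Z[:-1, j]) + 1e-12)
                    for j in range(n_components)])
    preds, z_last = [], Z[-1].copy()
    for _ in range(horizon):
        z_last = phi * z_last
        preds.append(z_last @ W + X.mean(axis=0))
    return np.array(preds)                    # (horizon, d)

# usage on synthetic data
rng = np.random.default_rng(0)
t = np.arange(200)
X = np.stack([np.sin(0.1 * t), np.cos(0.1 * t), 0.5 * np.sin(0.1 * t)], axis=1)
X += 0.05 * rng.standard_normal(X.shape)
print(forecast_via_latent_components(X, n_components=2, horizon=5).shape)
```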

For solving linear inverse problems, particularly of the type that appears in tomographic imaging and compressive sensing, this paper develops two new approaches. The first approach is an iterative algorithm that minimizes a regularized least squares objective function where the regularization is based on a compound Gaussian prior distribution. The compound Gaussian prior subsumes many of the commonly used priors in image reconstruction, including those of sparsity-based approaches. The developed iterative algorithm gives rise to the paper's second new approach, which is a deep neural network that corresponds to an "unrolling" or "unfolding" of the iterative algorithm. Unrolled deep neural networks have interpretable layers and outperform standard deep learning methods. This paper includes a detailed computational theory that provides insight into the construction and performance of both algorithms. The conclusion is that both algorithms outperform other state-of-the-art approaches to tomographic image formation and compressive sensing, especially in the difficult regime of limited training data.
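To illustrate what "unrolling" an iterative algorithm means, the sketch below writes a fixed number of ISTA iterations for a sparse-recovery problem as a feed-forward pipeline; in a learned unrolled network the per-layer step size and threshold would become trainable parameters. ISTA is used only as a stand-in here, since the paper's compound-Gaussian iteration is not reproduced.

```python
import numpy as np

def soft_threshold(x, lam):
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def unrolled_ista(y, A, n_layers=10, step=None, lam=0.1):
    """Illustration of 'unrolling': a fixed number of ISTA iterations for
    min_x 0.5*||Ax - y||^2 + lam*||x||_1, written as a feed-forward pipeline.
    In a learned unrolled network, 'step' and 'lam' would be trainable
    per-layer parameters; here they are fixed scalars."""
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant of A^T A
    x = np.zeros(A.shape[1])
    for _ in range(n_layers):                    # each iteration == one "layer"
        x = soft_threshold(x - step * A.T @ (A @ x - y), step * lam)
    return x

# usage: recover a sparse vector from compressive measurements
rng = np.random.default_rng(1)
A = rng.standard_normal((40, 100)) / np.sqrt(40)
x_true = np.zeros(100)
x_true[[3, 17, 60]] = [1.0, -2.0, 0.5]
y = A @ x_true + 0.01 * rng.standard_normal(40)
x_hat = unrolled_ista(y, A, n_layers=200, lam=0.02)
print(np.argsort(-np.abs(x_hat))[:3])            # indices of the recovered support
```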

Structured matrices with symbolic sizes appear frequently in the literature, especially in the description of algorithms for linear algebra. Recent work has treated these symbolic structured matrices themselves as computational objects, showing how to add matrices with blocks of different symbolic sizes in a general way while avoiding a combinatorial explosion of cases. The present article introduces the concept of hybrid intervals, in which points may have negative multiplicity. Various operations on hybrid intervals have compact and elegant formulations that do not require cases to handle different orders of the end points. This makes them useful to represent symbolic block matrix structures and to express arithmetic on symbolic block matrices compactly. We use these ideas to formulate symbolic block matrix addition and multiplication in a compact and uniform way.
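A minimal illustration of the "negative multiplicity" idea: encoding the hybrid interval [a, b) through the multiplicity function chi(x) = [x >= a] - [x >= b] makes concatenation a plain sum with no case analysis on the order of the end points. The encoding below is an assumption made for illustration, not the article's own data structure.

```python
import numpy as np

def hybrid_indicator(a, b):
    """Multiplicity function of the hybrid interval [a, b):
    +1 on [a, b) when a <= b, and -1 on [b, a) when the end points are
    reversed, with no case analysis: chi(x) = [x >= a] - [x >= b]."""
    return lambda x: (x >= a).astype(int) - (x >= b).astype(int)

# concatenation [a, b) + [b, c) works for any ordering of a, b, c:
# multiplicities simply add and the shared end point b cancels.
a, b, c = 5.0, 2.0, 8.0              # note b < a: the first piece is "reversed"
x = np.linspace(0, 10, 11)
total = hybrid_indicator(a, b)(x) + hybrid_indicator(b, c)(x)
print(total)                          # equals the multiplicity of [a, c) = [5, 8)
print(hybrid_indicator(a, c)(x))
```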

The self-random number generation (SRNG) problem is considered in a general setting. In the literature, the optimum SRNG rate with respect to the variational distance has been discussed. In this paper, we first try to characterize the optimum SRNG rate with respect to a subclass of $f$-divergences. The subclass of $f$-divergences considered in this paper includes typical distance measures such as the variational distance, the KL divergence, the Hellinger distance, and so on. Hence, our result can be considered as a generalization of the previous result with respect to the variational distance. Next, we consider the obtained optimum SRNG rate from several viewpoints. The $\varepsilon$-fixed-length source coding problem is closely related to the SRNG problem. Our results reveal how the SRNG problem with $f$-divergences relates to the $\varepsilon$-fixed-length source coding problem. We also apply our results to the rate-distortion-perception (RDP) function. As a result, we establish a lower bound for the RDP function with respect to $f$-divergences using our findings. Finally, we discuss the representation of the optimum SRNG rate using the smooth R\'enyi entropy.
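For reference, the standard definition of an $f$-divergence, which specializes to the distance measures named above; the particular subclass conditions assumed in the paper are not reproduced here.

\[
  D_f(P \,\|\, Q) \;=\; \sum_{x} Q(x)\, f\!\left(\frac{P(x)}{Q(x)}\right),
  \qquad f \text{ convex},\ f(1) = 0,
\]

with $f(t) = |t-1|$ (up to a factor of $1/2$, depending on the normalization) giving the variational distance, $f(t) = t\log t$ the KL divergence, and $f(t) = (\sqrt{t}-1)^2$ the squared Hellinger distance.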

The Support Vector Machine (SVM) algorithm requires a high computational cost (both in memory and time) to solve a complex quadratic programming (QP) optimization problem during the training process. Consequently, SVM necessitates high-performance computing hardware. The central processing unit (CPU) clock frequency cannot be increased due to physical limitations in the miniaturization process. However, the parallel processing potential available in both multi-core CPUs and highly scalable GPUs emerges as a promising solution to enhance algorithm performance. Therefore, there is an opportunity to reduce the high computational time required by SVM for solving the QP optimization problem. This paper presents a comparative study that implements the SVM algorithm on different parallel architecture frameworks. The experimental results show that the SVM MPI-CUDA implementation achieves a speedup over the SVM TensorFlow implementation on different datasets. Moreover, the SVM TensorFlow implementation provides a cross-platform solution that can be migrated to alternative hardware components, which reduces development time.
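The sketch below shows the two computations that dominate SVM training cost, the kernel (Gram) matrix and the dual QP objective, in plain vectorized NumPy; the paper's MPI-CUDA and TensorFlow implementations are not reproduced, and all names and parameter values are illustrative.

```python
import numpy as np

def rbf_gram(X, gamma=0.5):
    """Kernel (Gram) matrix K[i, j] = exp(-gamma * ||x_i - x_j||^2).
    This O(n^2 d) computation, and the QP it feeds, is the part that parallel
    CPU/GPU implementations accelerate; only a vectorized NumPy version is
    shown here."""
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * np.maximum(d2, 0.0))

def dual_objective(alpha, K, y):
    """SVM dual objective W(alpha) = sum(alpha) - 0.5 * alpha^T (yy^T * K) alpha,
    maximized subject to 0 <= alpha_i <= C and sum_i alpha_i y_i = 0."""
    Q = (y[:, None] * y[None, :]) * K
    return alpha.sum() - 0.5 * alpha @ Q @ alpha

# usage on random data
rng = np.random.default_rng(2)
X = rng.standard_normal((200, 5))
y = np.sign(X[:, 0] + 0.1 * rng.standard_normal(200))
K = rbf_gram(X)
print(dual_objective(np.full(200, 0.01), K, y))
```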

Enforcing orthonormal or isometric property for the weight matrices has been shown to enhance the training of deep neural networks by mitigating gradient exploding/vanishing and increasing the robustness of the learned networks. However, despite its practical performance, the theoretical analysis of orthonormality in neural networks is still lacking; for example, how orthonormality affects the convergence of the training process. In this letter, we aim to bridge this gap by providing convergence analysis for training orthonormal deep linear neural networks. Specifically, we show that Riemannian gradient descent with an appropriate initialization converges at a linear rate for training orthonormal deep linear neural networks with a class of loss functions. Unlike existing works that enforce orthonormal weight matrices for all the layers, our approach excludes this requirement for one layer, which is crucial to establish the convergence guarantee. Our results shed light on how increasing the number of hidden layers can impact the convergence speed. Experimental results validate our theoretical analysis.
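A generic sketch of one Riemannian gradient descent step on the Stiefel manifold of orthonormal matrices, using the standard tangent-space projection and a QR retraction; the paper's specific initialization scheme and loss class are not reproduced here.

```python
import numpy as np

def stiefel_rgd_step(W, euclid_grad, lr=0.1):
    """One step of Riemannian gradient descent on the Stiefel manifold
    {W : W^T W = I}: project the Euclidean gradient onto the tangent space
    at W, take a step, then retract back to the manifold with a QR
    decomposition."""
    # tangent-space projection: G - W * sym(W^T G)
    WtG = W.T @ euclid_grad
    riem_grad = euclid_grad - W @ (WtG + WtG.T) / 2.0
    Q, R = np.linalg.qr(W - lr * riem_grad)
    return Q * np.sign(np.diag(R))   # fix column signs so R has a positive diagonal

# usage: keep a 6x3 weight matrix orthonormal while reducing ||W X - Y||^2
rng = np.random.default_rng(3)
W = np.linalg.qr(rng.standard_normal((6, 3)))[0]
X, Y = rng.standard_normal((3, 50)), rng.standard_normal((6, 50))
for _ in range(100):
    W = stiefel_rgd_step(W, (W @ X - Y) @ X.T, lr=0.01)
print(np.allclose(W.T @ W, np.eye(3), atol=1e-8))   # orthonormality is preserved
```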

The optimization of expensive-to-evaluate black-box functions is prevalent in various scientific disciplines. Bayesian optimization is an automatic, general and sample-efficient method to solve these problems with minimal knowledge of the underlying function dynamics. However, the ability of Bayesian optimization to incorporate prior knowledge or beliefs about the function at hand in order to accelerate the optimization is limited, which reduces its appeal for knowledgeable practitioners with tight budgets. To allow domain experts to customize the optimization routine, we propose ColaBO, the first Bayesian-principled framework for incorporating prior beliefs beyond the typical kernel structure, such as the likely location of the optimizer or the optimal value. The generality of ColaBO makes it applicable across different Monte Carlo acquisition functions and types of user beliefs. We empirically demonstrate ColaBO's ability to substantially accelerate optimization when the prior information is accurate, and to retain approximately default performance when it is misleading.
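ColaBO's Monte Carlo formulation is not reproduced here; as a much simpler illustration of injecting a user belief about the optimizer's location, the sketch below multiplies an expected-improvement acquisition by a prior density over the optimum before selecting the next query point. The toy objective, the prior, and all hyperparameters are assumptions made for the example.

```python
import numpy as np
from scipy.stats import norm

def gp_posterior(X, y, Xs, ls=0.2, noise=1e-6):
    """Posterior mean/std of a zero-mean GP with an RBF kernel (minimal sketch)."""
    k = lambda A, B: np.exp(-0.5 * ((A[:, None] - B[None, :]) / ls) ** 2)
    K = k(X, X) + noise * np.eye(len(X))
    Ks, Kss = k(X, Xs), k(Xs, Xs)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.clip(np.diag(Kss) - np.sum(v ** 2, axis=0), 1e-12, None)
    return mu, np.sqrt(var)

def prior_weighted_ei(X, y, Xs, prior_pdf, beta=1.0):
    """Expected improvement (for minimization) multiplied by a user prior over
    the optimizer's location -- an illustration of injecting beliefs into the
    acquisition, not ColaBO's Monte Carlo formulation."""
    mu, sd = gp_posterior(X, y, Xs)
    best = y.min()
    z = (best - mu) / sd
    ei = (best - mu) * norm.cdf(z) + sd * norm.pdf(z)
    return ei * prior_pdf(Xs) ** beta

# usage: minimize a 1-D function with a belief that the optimum lies near 0.9
f = lambda x: np.sin(5 * x) + 0.5 * x
X = np.array([0.1, 0.5, 0.9]); y = f(X)
Xs = np.linspace(0, 1, 200)
acq = prior_weighted_ei(X, y, Xs, prior_pdf=norm(0.9, 0.1).pdf)
print("next query:", Xs[np.argmax(acq)])
```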

The integration of experimental data into mathematical and computational models is crucial for enhancing their predictive power in real-world scenarios. However, the performance of data assimilation algorithms can be significantly degraded when measurements are corrupted by biased noise, altering the signal magnitude, or when the system dynamics lack smoothness, such as in the presence of fast oscillations or discontinuities. This paper focuses on variational state estimation using the so-called Parameterized Background Data Weak (PBDW) method, which combines a parameterized, reduced-order background model with a set of measurement constraints: the state is estimated by solving a minimization problem over the reduced-order background, subject to constraints imposed by the input measurements. To address biased noise in the observations, a modified formulation incorporating a correction mechanism is proposed; rapid oscillations are handled by treating them as slowly decaying modes through a two-scale splitting of the classical reconstruction algorithm. The effectiveness of the proposed algorithms is demonstrated through various examples, including discontinuous signals and simulated Doppler ultrasound data.
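A heavily simplified, fully discrete analogue of a PBDW-type reconstruction, assuming noise-free pointwise measurements: the estimate is a background component from a reduced basis plus a minimum-norm correction in the measurement space, obtained from the usual saddle-point system. The basis choices and sensor layout are illustrative; the paper's bias-correction and two-scale variants are not reproduced.

```python
import numpy as np

def pbdw_estimate(Z, W, y):
    """Discrete sketch of a PBDW-type estimate.

    Z : (N, n) reduced-order background basis.
    W : (N, m) basis of the measurement (observation) space, so that the data
        are y = W.T @ u_true.
    Returns u = Z @ c + W @ eta, where the correction W @ eta has minimal norm
    among all choices that, together with some background Z @ c, reproduce the
    measurements."""
    A, B = W.T @ W, W.T @ Z                      # (m, m) Gram, (m, n) coupling
    m, n = B.shape
    KKT = np.block([[A, B], [B.T, np.zeros((n, n))]])
    sol = np.linalg.solve(KKT, np.concatenate([y, np.zeros(n)]))
    eta, c = sol[:m], sol[m:]
    return Z @ c + W @ eta

# usage on a toy 1-D field: background = low-frequency sines, pointwise sensors
x = np.linspace(0, 1, 200)
Z = np.stack([np.sin((k + 1) * np.pi * x) for k in range(4)], axis=1)
sensors = np.linspace(10, 190, 12, dtype=int)
W = np.eye(200)[:, sensors]                      # pointwise measurement functionals
u_true = np.sin(np.pi * x) + 0.3 * np.sin(3 * np.pi * x) + 0.05 * np.cos(20 * np.pi * x)
u_hat = pbdw_estimate(Z, W, W.T @ u_true)
print(np.linalg.norm(u_hat - u_true) / np.linalg.norm(u_true))   # relative error
```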

The solutions of Hamiltonian equations are known to describe the underlying phase space of the mechanical system. Hamiltonian Monte Carlo has so far been the sole use of the properties of solutions to the Hamiltonian equations in Bayesian statistics. In this article, we propose a novel spatio-temporal model using a strategic modification of the Hamiltonian equations, incorporating appropriate stochasticity via Gaussian processes. The resultant spatio-temporal process, continuously varying with time, turns out to be nonparametric, nonstationary, nonseparable and non-Gaussian. Additionally, as the spatio-temporal lag goes to infinity, the lagged correlations converge to zero. We investigate the theoretical properties of the new spatio-temporal process, including its continuity and smoothness properties. In the Bayesian paradigm, we derive methods for complete Bayesian inference using MCMC techniques. The performance of our method has been compared with that of a non-stationary Gaussian process (GP) in two simulation studies, where our method shows a significant improvement over the non-stationary GP. Further, application of our new model to two real data sets revealed encouraging performance.
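Only to convey the general idea of injecting stochasticity into the Hamiltonian equations, the toy sketch below integrates a harmonic oscillator with additive Gaussian noise on the momentum via Euler-Maruyama; the paper's Gaussian-process construction and the resulting spatio-temporal covariance structure are not reproduced.

```python
import numpy as np

def noisy_hamiltonian_path(q0, p0, dH_dq, dH_dp, dt=0.01, steps=1000, sigma=0.1, seed=0):
    """Euler-Maruyama discretization of
        dq = dH/dp dt,   dp = -dH/dq dt + sigma dW,
    i.e. Hamiltonian dynamics with additive Gaussian noise on the momentum.
    A toy illustration only, not the paper's construction."""
    rng = np.random.default_rng(seed)
    q, p = float(q0), float(p0)
    path = np.empty((steps, 2))
    for k in range(steps):
        q += dH_dp(q, p) * dt
        p += -dH_dq(q, p) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        path[k] = q, p
    return path

# usage: harmonic oscillator H(q, p) = 0.5 * (q^2 + p^2)
path = noisy_hamiltonian_path(1.0, 0.0,
                              dH_dq=lambda q, p: q,
                              dH_dp=lambda q, p: p)
print(path[-1])
```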

The exploration of the lunar poles and the collection of samples from the Martian surface are characterized by shorter time windows demanding increased autonomy and speeds. Autonomous mobile robots must intrinsically cope with a wider range of disturbances. Faster off-road navigation has been explored for terrestrial applications, but the combined effects of increased speeds and reduced gravity fields are yet to be fully studied. In this paper, we design and demonstrate a novel fully passive suspension design for wheeled planetary robots, which couples for the first time a high-range passive rocker with elastic in-wheel coil-over shock absorbers. The design was initially conceived and verified in a reduced-gravity (1.625 m/s$^2$) simulated environment, where three different passive suspension configurations were evaluated against steep slopes and unexpected obstacles, and later prototyped and validated in a series of field tests. The proposed mechanically-hybrid suspension proves to mitigate more effectively the negative effects (high-frequency/high-amplitude vibrations and impact loads) of faster locomotion (~1 m/s) over unstructured terrains under varied gravity fields.
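As a deliberately crude, back-of-the-envelope illustration of why faster driving in a weaker gravity field stresses a suspension, the sketch below computes how long a wheel kicked upward by an obstacle stays off the ground under Earth and lunar gravity; the vertical kick speed is made up, and the paper's rocker plus in-wheel coil-over design is not modeled.

```python
def airborne_time(v_kick, g):
    """Time a wheel spends off the ground after an obstacle launches it upward
    with vertical speed v_kick, assuming purely ballistic flight: longer flight
    in weaker gravity means longer loss of contact and harder landings."""
    return 2.0 * v_kick / g

v_kick = 0.4                                   # vertical kick from an obstacle, m/s (made up)
speed = 1.0                                    # forward speed, m/s
for name, g in [("Earth", 9.81), ("Moon", 1.625)]:
    t = airborne_time(v_kick, g)
    print(f"{name}: airborne for {t:.2f} s, covering {speed * t:.2f} m with no wheel contact")
```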
