
The Multiscale Hierarchical Decomposition Method (MHDM) was introduced as an iterative method for total variation regularization, with the aim of recovering details at various scales from images corrupted by additive or multiplicative noise. Given its success beyond image restoration, we extend the MHDM iterates in order to solve larger classes of linear ill-posed problems in Banach spaces. Thus, we define the MHDM for more general convex or even non-convex penalties, and provide convergence results for the data fidelity term. We also propose a flexible version of the method using adaptive convex functionals for regularization, and show an interesting multiscale decomposition of the data. This decomposition result is highlighted for the Bregman iteration method, which can be expressed as an adaptive MHDM. Furthermore, we state necessary and sufficient conditions under which the MHDM iteration agrees with variational Tikhonov regularization, which is the case, for instance, for one-dimensional total variation denoising. Finally, we investigate several particular instances and perform numerical experiments that demonstrate the robust behavior of the MHDM.
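As background, the classical total variation instance of the MHDM (in the spirit of the original Tadmor–Nezzar–Vese construction, with the dyadic schedule $\lambda_{k+1} = 2\lambda_k$) can be sketched as follows; the paper's generalization replaces the TV penalty and the $L^2$ fidelity by more general convex or non-convex functionals on Banach spaces.

$$ u_0 \in \arg\min_u \Big\{ \mathrm{TV}(u) + \lambda_0 \|f - u\|_{L^2}^2 \Big\}, \qquad v_0 = f - u_0, $$
$$ u_{k+1} \in \arg\min_u \Big\{ \mathrm{TV}(u) + \lambda_{k+1} \|v_k - u\|_{L^2}^2 \Big\}, \qquad v_{k+1} = v_k - u_{k+1}, \qquad \lambda_{k+1} = 2\lambda_k, $$

so that after $N$ steps the data decomposes as $f = u_0 + u_1 + \dots + u_N + v_N$, with coarse scales extracted first and finer details added at each iteration.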

Related content

Designing capacity-achieving coding schemes for the band-limited additive colored Gaussian noise (ACGN) channel has been, and remains, a challenge. In this paper, the capacity of the band-limited ACGN channel is studied from a fundamental algorithmic point of view by addressing the question of whether or not the capacity can be algorithmically computed. To this end, the concept of Turing machines is used, which provides fundamental performance limits of digital computers. It is shown that there are band-limited ACGN channels with computable continuous spectral densities whose capacities are non-computable numbers. Moreover, it is demonstrated that for those channels it is impossible to find computable sequences of asymptotically sharp upper bounds on their capacities.
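For orientation only (this is standard background, not part of the paper's computability argument), the capacity in question is the classical water-filling expression for a band-limited ACGN channel with one-sided noise power spectral density $N(f)$ on the band $[0, W]$ and input power budget $P$; the normalization below is one common convention:

$$ C \;=\; \int_{0}^{W} \log_2\!\left(\frac{\max\{\nu,\, N(f)\}}{N(f)}\right) \mathrm{d}f, \qquad \text{with } \nu \text{ chosen so that } \int_{0}^{W} \big(\nu - N(f)\big)^{+}\,\mathrm{d}f \;=\; P. $$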

In this work, we propose a numerical method to compute the Wasserstein Hamiltonian flow (WHF), which is a Hamiltonian system on the probability density manifold. Many well-known PDE systems can be reformulated as WHFs. We use a parameterized function as a push-forward map to characterize the solution of the WHF, and convert the PDE to a finite-dimensional ODE system, which is a Hamiltonian system in the phase space of the parameter manifold. We establish error analysis results for the continuous-time approximation scheme in the Wasserstein metric. For the numerical implementation, we use neural networks as push-forward maps. We apply an effective symplectic scheme to solve the derived Hamiltonian ODE system so that the method preserves important quantities such as the total energy. The computation is carried out by a fully deterministic symplectic integrator without any neural network training. Thus, our method does not involve direct optimization over network parameters and hence avoids the error introduced by stochastic gradient descent (SGD) methods, which is usually hard to quantify and measure. The proposed algorithm is a sampling-based approach that scales well to higher-dimensional problems. In addition, the method provides an alternative connection between the Lagrangian and Eulerian perspectives of the original WHF through the parameterized ODE dynamics.
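To make the "deterministic symplectic integrator" step concrete, here is a minimal sketch of a Störmer–Verlet (leapfrog) update on the parameter phase space, assuming for illustration a separable Hamiltonian $H(\theta_q, \theta_p) = T(\theta_p) + V(\theta_q)$; the Hamiltonian induced by the push-forward parameterization is generally not of this simple form, and the gradient callables grad_V, grad_T are placeholders rather than functions from the paper.

```python
import numpy as np

def stormer_verlet(theta_q, theta_p, grad_V, grad_T, dt, n_steps):
    """Stormer-Verlet (leapfrog) integration of the parameter-space system
    d(theta_q)/dt = dT/d(theta_p),  d(theta_p)/dt = -dV/d(theta_q).
    grad_V and grad_T are user-supplied gradient callables (assumptions here)."""
    traj = [(theta_q.copy(), theta_p.copy())]
    for _ in range(n_steps):
        theta_p = theta_p - 0.5 * dt * grad_V(theta_q)   # half kick
        theta_q = theta_q + dt * grad_T(theta_p)         # full drift
        theta_p = theta_p - 0.5 * dt * grad_V(theta_q)   # half kick
        traj.append((theta_q.copy(), theta_p.copy()))
    return traj

# Toy usage with the quadratic Hamiltonian H = 0.5*|p|^2 + 0.5*|q|^2.
traj = stormer_verlet(np.ones(4), np.zeros(4),
                      grad_V=lambda q: q, grad_T=lambda p: p,
                      dt=0.01, n_steps=1000)
```

The scheme is symplectic and second-order accurate, which is why the total energy stays nearly conserved over long integration times.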

Understanding how convolutional neural networks (CNNs) can efficiently learn high-dimensional functions remains a fundamental challenge. A popular belief is that these models harness the local and hierarchical structure of natural data such as images. Yet, we lack a quantitative understanding of how such structure affects performance, e.g., the rate of decay of the generalisation error with the number of training samples. In this paper, we study infinitely-wide deep CNNs in the kernel regime. First, we show that the spectrum of the corresponding kernel inherits the hierarchical structure of the network, and we characterise its asymptotics. Then, we use this result together with generalisation bounds to prove that deep CNNs adapt to the spatial scale of the target function. In particular, we find that if the target function depends on low-dimensional subsets of adjacent input variables, then the decay of the error is controlled by the effective dimensionality of these subsets. Conversely, if the target function depends on the full set of input variables, then the error decay is controlled by the input dimension. We conclude by computing the generalisation error of a deep CNN trained on the output of another deep CNN with randomly-initialised parameters. Interestingly, we find that, despite their hierarchical structure, the functions generated by infinitely-wide deep CNNs are too rich to be efficiently learnable in high dimension.

Massive machine-type communications (mMTC) in 6G requires supporting a massive number of devices with limited resources, posing challenges for efficient random access. Grant-free random access and uplink non-orthogonal multiple access (NOMA) are introduced to increase the overload factor and reduce transmission latency and signaling overhead in mMTC. Sparse code multiple access (SCMA) and multi-user shared access (MUSA) are introduced as advanced code-domain NOMA schemes. In grant-free NOMA, machine-type devices (MTDs) transmit information to the base station (BS) without a grant, so the BS faces the challenging task of identifying the active MTDs among all potentially active devices. In this paper, a novel pre-activated residual neural network-based multi-user detection (MUD) scheme for the grant-free SCMA and MUSA system in an mMTC uplink framework is proposed to jointly identify the number of active MTDs and their respective messages by exploiting the sparsity of the received signal, without channel state information. A novel residual unit is designed to learn the properties of multi-dimensional SCMA codebooks, MUSA spreading sequences, and the corresponding combinations of active devices under diverse settings. The proposed scheme learns from a labeled dataset of received signals and identifies the active MTDs without any prior knowledge of the device sparsity level. A calibration curve is evaluated to verify the model's calibration. The application of the proposed MUD scheme is investigated in an indoor factory setting using four different mmWave channel models. Numerical results show that when the number of active MTDs in the system is large, the proposed MUD achieves a significantly higher probability of detection than existing approaches over the signal-to-noise ratio range of interest.
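The paper's exact architecture is not specified here; as a rough illustration of what a pre-activation residual unit looks like, the following PyTorch sketch follows the generic BN–ReLU–weight ordering of pre-activation ResNets, with all layer sizes, the feature dimension, and the number of devices K chosen arbitrarily for the example.

```python
import torch
import torch.nn as nn

class PreActResidualUnit(nn.Module):
    """Generic pre-activation residual block: BN -> ReLU -> Linear, twice,
    with an identity skip connection (a sketch, not the paper's exact design)."""
    def __init__(self, dim: int, hidden: int):
        super().__init__()
        self.bn1 = nn.BatchNorm1d(dim)
        self.fc1 = nn.Linear(dim, hidden)
        self.bn2 = nn.BatchNorm1d(hidden)
        self.fc2 = nn.Linear(hidden, dim)

    def forward(self, x):
        out = self.fc1(torch.relu(self.bn1(x)))
        out = self.fc2(torch.relu(self.bn2(out)))
        return x + out  # residual connection

# Example: score the activity of each of K devices from a received-signal feature vector.
K, feat_dim = 12, 64                                 # hypothetical device count and feature size
backbone = nn.Sequential(PreActResidualUnit(feat_dim, 128),
                         PreActResidualUnit(feat_dim, 128),
                         nn.Linear(feat_dim, K))     # one activity logit per device
logits = backbone(torch.randn(32, feat_dim))         # batch of 32 received-signal features
active = torch.sigmoid(logits) > 0.5                 # multi-label activity detection
```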

We investigate the expressive power of depth-2 bandlimited random neural networks. A random net is a neural network whose hidden-layer parameters are frozen at random values, and only the output-layer parameters are trained by loss minimization. Using random weights for the hidden layer is an effective way to avoid the non-convex optimization of standard gradient descent learning, and it has also been adopted in recent deep learning theories. Despite the well-known fact that a neural network is a universal approximator, in this study we mathematically show that when the hidden parameters are distributed in a bounded domain, the network may not achieve zero approximation error. In particular, we derive a new nontrivial lower bound on the approximation error. The proof uses ridgelet analysis, a harmonic analysis technique designed for neural networks, and is inspired by a fundamental principle of classical signal processing: a band-limited representation cannot, in general, perfectly reconstruct the original signal. We corroborate our theoretical results with various simulation studies and offer two main take-home messages: (i) not every distribution for selecting random weights yields a universal approximator; (ii) a suitable assignment of random weights exists, but it is to some degree tied to the complexity of the target function.
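A minimal numerical illustration of the setting (not the paper's construction): hidden weights and biases are frozen random draws from a bounded domain, and only the output layer is fitted, here with a tanh activation and ridge-regularized least squares; all of these concrete choices are assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_net_fit(X, y, width=500, weight_scale=1.0, ridge=1e-6):
    """Depth-2 random net: hidden weights/biases are frozen random draws from a
    bounded support (the 'bandlimited' setting); only the output layer is fit,
    here by ridge-regularized least squares."""
    d = X.shape[1]
    W = rng.uniform(-weight_scale, weight_scale, size=(d, width))  # frozen
    b = rng.uniform(-weight_scale, weight_scale, size=width)       # frozen
    H = np.tanh(X @ W + b)                                         # hidden features
    beta = np.linalg.solve(H.T @ H + ridge * np.eye(width), H.T @ y)
    return lambda Xnew: np.tanh(Xnew @ W + b) @ beta

# Toy 1-D target: approximation quality depends on weight_scale vs. target complexity.
X = rng.uniform(-1, 1, size=(1000, 1))
y = np.sin(8 * np.pi * X[:, 0])                 # fairly high-frequency target
f = random_net_fit(X, y, width=800, weight_scale=2.0)
print(np.mean((f(X) - y) ** 2))                 # training mean squared error
```

Varying weight_scale in this toy experiment gives a hands-on feel for take-home message (ii): whether the frozen weights suffice depends on how oscillatory the target is.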

Domination problems in general can capture situations in which some entities have an effect on other entities (and sometimes on themselves). The usual goal is to select a minimum number of entities that can influence a target group of entities or to influence a maximum number of target entities with a certain number of available influencers. In this work, we focus on the distinction between \textit{internal} and \textit{external} domination in the respective maximization problem. In particular, a dominator can dominate its entire neighborhood in a graph, internally dominating itself, while those of its neighbors which are not dominators themselves are externally dominated. We study the problem of maximizing the external domination that a given number of dominators can yield and we present a 0.5307-approximation algorithm for this problem. Moreover, our methods provide a framework for approximating a number of problems that can be cast in terms of external domination. In particular, we observe that an interesting interpretation of the maximum coverage problem can capture a new problem in elections, in which we want to maximize the number of \textit{externally represented} voters. We study this problem in two different settings, namely Non-Secrecy and Rational-Candidate, and provide approximability analysis for two alternative approaches; our analysis reveals, among other contributions, that an earlier resource allocation algorithm is, in fact, a 0.462-approximation algorithm for maximum external domination in directed graphs.
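As background for the coverage-style formulation, the sketch below shows the textbook greedy $(1 - 1/e)$-approximation for maximum coverage; it is included only to make the problem concrete and is not the paper's 0.5307- or 0.462-approximation algorithm.

```python
def greedy_max_coverage(sets, k):
    """Classical greedy (1 - 1/e)-approximation for maximum coverage:
    pick k sets, each time choosing the one covering the most uncovered elements."""
    covered, chosen = set(), []
    for _ in range(k):
        best = max(range(len(sets)), key=lambda i: len(sets[i] - covered))
        if not (sets[best] - covered):
            break  # nothing new can be covered
        chosen.append(best)
        covered |= sets[best]
    return chosen, covered

# Toy instance: neighborhoods of potential dominators, budget of k = 2 dominators.
neighborhoods = [{1, 2, 3}, {3, 4}, {4, 5, 6}, {1, 6}]
print(greedy_max_coverage([set(s) for s in neighborhoods], k=2))
```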

Anomaly detection (AD) plays a crucial role in many safety-critical application domains. The challenge of adapting an anomaly detector to drift in the normal data distribution, especially when no training data is available for the "new normal", has led to the development of zero-shot AD techniques. In this paper, we propose a simple yet effective method called Adaptive Centered Representations (ACR) for zero-shot batch-level AD. Our approach trains off-the-shelf deep anomaly detectors (such as deep SVDD) to adapt to a set of inter-related training data distributions in combination with batch normalization, enabling automatic zero-shot generalization to unseen AD tasks. This simple recipe, batch normalization plus meta-training, is a highly effective and versatile tool. Our experiments provide the first zero-shot AD results for tabular data and show that ACR outperforms existing methods for zero-shot anomaly detection and segmentation on image data from specialized domains.
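A rough sketch of the batch-level scoring idea, under several assumptions not taken from the paper: a deep SVDD-style distance-to-center score, a small fully connected encoder, and batch normalization configured to always use current-batch statistics; the meta-training loop over inter-related distributions is omitted.

```python
import torch
import torch.nn as nn

class BNDeepSVDD(nn.Module):
    """Batch-normalized encoder with a deep SVDD-style anomaly score: squared
    distance of the embedding to a fixed center c. Because batch normalization
    re-centers each test batch, the majority ('normal') samples of an unseen
    distribution land near c, which is what permits zero-shot batch-level scoring."""
    def __init__(self, in_dim, emb_dim=32):
        super().__init__()
        # track_running_stats=False -> always use current-batch statistics,
        # including at test time (needed for the zero-shot adaptation effect).
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 128),
            nn.BatchNorm1d(128, track_running_stats=False), nn.ReLU(),
            nn.Linear(128, emb_dim),
            nn.BatchNorm1d(emb_dim, track_running_stats=False),
        )
        self.register_buffer("center", torch.zeros(emb_dim))

    def score(self, x):
        z = self.encoder(x)
        return ((z - self.center) ** 2).sum(dim=1)  # higher = more anomalous

model = BNDeepSVDD(in_dim=20)
scores = model.score(torch.randn(256, 20))  # one anomaly score per sample in the batch
```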

Risk-sensitive reinforcement learning (RL) has become a popular tool to control the risk of uncertain outcomes and ensure reliable performance in various sequential decision-making problems. While policy gradient methods have been developed for risk-sensitive RL, it remains unclear if these methods enjoy the same global convergence guarantees as in the risk-neutral case. In this paper, we consider a class of dynamic time-consistent risk measures, called Expected Conditional Risk Measures (ECRMs), and derive policy gradient updates for ECRM-based objective functions. Under both constrained direct parameterization and unconstrained softmax parameterization, we provide global convergence and iteration complexities of the corresponding risk-averse policy gradient algorithms. We further test risk-averse variants of REINFORCE and actor-critic algorithms to demonstrate the efficacy of our method and the importance of risk control.
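For orientation, the standard risk-neutral REINFORCE policy-gradient identity that such risk-averse updates generalize is recalled below; the ECRM-specific gradient derived in the paper modifies the return term and is not reproduced here.

$$ \nabla_\theta J(\theta) \;=\; \mathbb{E}_{\tau \sim \pi_\theta}\!\left[ \sum_{t=0}^{T} \nabla_\theta \log \pi_\theta(a_t \mid s_t)\, G_t \right], \qquad G_t = \sum_{t'=t}^{T} \gamma^{\,t'-t} r_{t'} . $$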

In this paper, we evaluate the performance of novel numerical methods for solving one-dimensional nonlinear fractional dispersive and dissipative evolution equations. The methods are based on affine combinations of time-splitting integrators and pseudo-spectral discretizations using Hermite and Fourier expansions. We show the effectiveness of the proposed methods by numerically computing the dynamics of soliton solutions of the standard and fractional variants of the nonlinear Schr\"odinger equation (NLSE) and the complex Ginzburg-Landau equation (CGLE), and by comparing the results with those obtained by standard splitting integrators. An exhaustive numerical investigation shows that the new technique is competitive with traditional composition-splitting schemes for Hamiltonian problems, both in terms of accuracy and computational cost. Moreover, it applies straightforwardly to irreversible models, outperforming high-order symplectic integrators, which can become unstable due to their need for negative time steps. Finally, we discuss potential improvements of the numerical methods aimed at increasing their efficiency, as well as possible applications to the investigation of dissipative solitons arising in nonlinear optical systems of contemporary interest. Overall, our method offers a promising alternative for solving a wide range of evolutionary partial differential equations.
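As a point of reference for the splitting approach, here is a minimal sketch of the standard Strang split-step Fourier scheme for the cubic NLSE $i u_t + \tfrac12 u_{xx} + |u|^2 u = 0$ on a periodic domain; the affine combinations of splittings, the Hermite discretization, and the fractional and dissipative variants studied in the paper are not reproduced here.

```python
import numpy as np

def split_step_nls(u0, L, dt, n_steps):
    """Strang split-step Fourier scheme for i u_t + 0.5 u_xx + |u|^2 u = 0
    on the periodic domain [-L/2, L/2). Returns the solution after n_steps."""
    n = u0.size
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)      # spectral wavenumbers
    lin = np.exp(-0.5j * k**2 * dt)                   # exact linear propagator over dt
    u = u0.astype(complex)
    for _ in range(n_steps):
        u = u * np.exp(0.5j * np.abs(u)**2 * dt)      # half nonlinear step
        u = np.fft.ifft(lin * np.fft.fft(u))          # full linear step in Fourier space
        u = u * np.exp(0.5j * np.abs(u)**2 * dt)      # half nonlinear step
    return u

# Propagate the bright soliton u(x, 0) = sech(x); the exact solution is sech(x) e^{i t / 2}.
L, n = 40.0, 512
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
u = split_step_nls(1.0 / np.cosh(x), L, dt=1e-3, n_steps=1000)
print(np.max(np.abs(u - np.exp(0.5j) / np.cosh(x))))  # error at t = 1
```

The Strang composition is second-order accurate and time-reversible, which is the baseline the affine-combination schemes in the paper are compared against.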

GAN inversion aims to invert a given image back into the latent space of a pretrained GAN model, so that the image can be faithfully reconstructed from the inverted code by the generator. As an emerging technique to bridge the real and fake image domains, GAN inversion plays an essential role in enabling pretrained GAN models such as StyleGAN and BigGAN to be used for real image editing applications. Meanwhile, GAN inversion also provides insights into the interpretation of the GAN latent space and how realistic images can be generated. In this paper, we provide an overview of GAN inversion with a focus on its recent algorithms and applications. We cover important techniques of GAN inversion and their applications to image restoration and image manipulation. We further elaborate on some trends and challenges for future directions.
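As an illustration of the optimization-based branch of inversion methods covered by such overviews, the following sketch gradient-descends a latent code under a pixel-wise loss; the generator G is a hypothetical pretrained model, and practical StyleGAN/BigGAN pipelines typically add perceptual losses, extended latent spaces, or learned encoders.

```python
import torch

def invert(G, target, latent_dim=512, steps=500, lr=0.05):
    """Optimization-based GAN inversion (sketch): gradient-descend a latent code z
    so that G(z) reconstructs the target image under a pixel-wise L2 loss.
    G is assumed to be a differentiable pretrained generator (hypothetical here)."""
    z = torch.randn(1, latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = torch.mean((G(z) - target) ** 2)   # perceptual terms are added in practice
        loss.backward()
        opt.step()
    return z.detach()

# Usage sketch: z_inv = invert(pretrained_G, real_image); edited = pretrained_G(z_inv + direction)
```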
