
In this paper, we study color image inpainting as a pure quaternion matrix completion problem. In the literature, theoretical guarantees for quaternion matrix completion are not well established. Our main aim is to propose a new minimization problem whose objective combines the nuclear norm and a quadratic loss weighted among the three color channels. To fill this theoretical gap, we obtain error bounds in both the clean and the corrupted regimes, which rely on some new results for quaternion matrices. General Gaussian noise is considered in the robust completion setting, where all observations are corrupted. Motivated by the error bound, we propose to handle unbalanced or correlated noise via a cross-channel weight in the quadratic loss, with the main purpose of rebalancing the noise levels or removing the noise correlation. Extensive experimental results on synthetic and color image data are presented to confirm and demonstrate our theoretical findings.
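For concreteness, one plausible form of such a channel-weighted objective is sketched below; the trade-off parameter $\lambda$, the normalization, and the exact role of the weight are assumptions, since the abstract does not give the formulation. Writing $\mathbf{x}_{ij}, \mathbf{y}_{ij} \in \mathbb{R}^3$ for the three channel values of entry $(i,j)$ of the pure quaternion matrices $\mathbf{X}$ and the observation $\mathbf{Y}$, $\Omega$ for the observed index set, and $W \in \mathbb{R}^{3\times 3}$ for the cross-channel weight:

$$
\min_{\mathbf{X}} \; \|\mathbf{X}\|_* \;+\; \frac{\lambda}{2} \sum_{(i,j)\in\Omega} \left\| W^{1/2}\left(\mathbf{x}_{ij} - \mathbf{y}_{ij}\right) \right\|_2^2 .
$$

A diagonal $W$ would rebalance per-channel noise levels, while a non-diagonal $W$ (e.g., an inverse noise covariance) would additionally remove cross-channel noise correlation, matching the two uses described above.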

Related Content

In this paper, we revisit the class of iterative shrinkage-thresholding algorithms (ISTA) for solving linear inverse problems with sparse representation, which arise in signal and image processing. A numerical image-deblurring experiment shows that the convergence behavior, plotted on a logarithmic-scale ordinate, tends to be linear rather than flattening out as a logarithmic rate would. From this observation, we find that the previous assumption that the smooth part is merely convex understates the structure of the least-squares model. Specifically, assuming the smooth part to be strongly convex is more reasonable for the least-squares model, even though the image matrix is probably ill-conditioned. Furthermore, we tighten the pivotal inequality for composite optimization, first found in [Li et al., 2022], by assuming the smooth part to be strongly convex instead of generally convex. Based on this tighter pivotal inequality, we establish linear convergence for composite optimization, in both the objective value and the squared proximal subgradient norm. Meanwhile, instead of the original blur matrix, we use a simple ill-conditioned matrix whose singular values are easy to compute. The new numerical experiment shows that the proximal generalization of Nesterov's accelerated gradient descent (NAG) for strongly convex functions has a faster linear convergence rate than ISTA. Based on the tighter pivotal inequality, we also extend this faster linear convergence rate to composite optimization, in both the objective value and the squared proximal subgradient norm, by taking advantage of a slightly modified, well-constructed Lyapunov function and the phase-space representation based on the high-resolution differential equation framework from the implicit-velocity scheme.
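A minimal sketch of the two methods being compared, not the paper's exact experimental setup: ISTA versus a proximal NAG step with the constant momentum used for strongly convex objectives, on $\min_x \tfrac12\|Ax-b\|^2 + \lambda\|x\|_1$ with a full-rank test matrix whose singular values are prescribed (so the condition number is known, as the abstract suggests).

```python
import numpy as np

def soft_threshold(z, t):
    # Proximal operator of t * ||.||_1
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista(A, b, lam, steps=2000):
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        x = soft_threshold(x - A.T @ (A @ x - b) / L, lam / L)
    return x

def prox_nag_strongly_convex(A, b, lam, steps=2000):
    s = np.linalg.svd(A, compute_uv=False)
    L, mu = s[0] ** 2, s[-1] ** 2            # smoothness / strong-convexity moduli
    beta = (np.sqrt(L) - np.sqrt(mu)) / (np.sqrt(L) + np.sqrt(mu))
    x = y = np.zeros(A.shape[1])
    for _ in range(steps):
        x_new = soft_threshold(y - A.T @ (A @ y - b) / L, lam / L)
        y = x_new + beta * (x_new - x)       # constant momentum for strong convexity
        x = x_new
    return x

rng = np.random.default_rng(0)
n = 50
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = U @ np.diag(np.logspace(0, -2, n)) @ U.T   # known spectrum, condition number 1e2
x_true = rng.standard_normal(n) * (rng.random(n) < 0.2)
b = A @ x_true
print(np.linalg.norm(ista(A, b, 1e-5) - x_true),
      np.linalg.norm(prox_nag_strongly_convex(A, b, 1e-5) - x_true))
```

On such a well-posed (strongly convex) instance, the accelerated iterate typically reaches a given error in noticeably fewer iterations, which is the linear-rate gap the abstract describes.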

Transfer learning aims to improve the performance of a target model by leveraging data from related source populations, which is known to be especially helpful when target data are insufficient. In this paper, we study the problem of how to train a high-dimensional ridge regression model using limited target data and existing regression models trained in heterogeneous source populations. We consider a practical setting where only the parameter estimates of the fitted source models are accessible, instead of the individual-level source data. Under the setting with only one source model, we propose a novel flexible angle-based transfer learning (angleTL) method, which leverages the concordance between the source and the target model parameters. We show that angleTL unifies several benchmark methods by construction, including the target-only model trained using target data alone, the source model fitted on source data, and the distance-based transfer learning method that incorporates the source parameter estimates and the target data under a distance-based similarity constraint. We also provide algorithms to effectively incorporate multiple source models, accounting for the fact that some source models may be more helpful than others. Our high-dimensional asymptotic analysis provides interpretations and insights regarding when a source model can be helpful to the target model, and demonstrates the superiority of angleTL over other benchmark methods. We perform extensive simulation studies to validate our theoretical conclusions and show the feasibility of applying angleTL to transfer existing genetic risk prediction models across multiple biobanks.
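A minimal sketch of the distance-based benchmark that angleTL is said to unify: ridge regression shrunk toward a source parameter estimate rather than toward zero (angleTL's angle-based criterion itself is not given in the abstract and is not reproduced here). All names and data below are illustrative.

```python
import numpy as np

def distance_transfer_ridge(X, y, w, lam):
    """Closed-form solution of min_b ||y - X b||^2 + lam * ||b - w||^2."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y + lam * w)

rng = np.random.default_rng(1)
n, p = 50, 200                                  # limited, high-dimensional target data
beta = rng.standard_normal(p)                   # true target parameters
w = beta + 0.1 * rng.standard_normal(p)         # concordant source estimate
X = rng.standard_normal((n, p))
y = X @ beta + 0.5 * rng.standard_normal(n)
for lam in (0.1, 10.0):
    err = np.linalg.norm(distance_transfer_ridge(X, y, w, lam) - beta)
    print(lam, err)                             # larger lam leans harder on the source
```

When the source estimate is concordant with the target, larger shrinkage toward it reduces the estimation error; when it is not, the penalty degrades to a harmful bias, which is exactly the regime where an angle-based similarity measure is motivated.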

Federated learning allows multiple clients to collaboratively train a model without exchanging their data, thus preserving data privacy. Unfortunately, it suffers significant performance degradation when client data are heterogeneous. Common solutions in local training involve designing a specific auxiliary loss to regularize weight divergence or feature inconsistency. However, we discover that these approaches fall short of the expected performance because they ignore a vicious cycle between classifier divergence and feature-mapping inconsistency across clients, whereby client models are updated in inconsistent feature spaces with divergent classifiers. We then propose a simple yet effective framework named Federated learning with Feature Anchors (FedFA) to align the feature mappings and calibrate the classifiers across clients during local training, which allows client models to be updated in a shared feature space with consistent classifiers. We demonstrate that this modification yields similar classifiers and a virtuous cycle between feature consistency and classifier similarity across clients. Extensive experiments show that FedFA significantly outperforms state-of-the-art federated learning algorithms on various image classification datasets under label and feature distribution skews.
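A hypothetical sketch of the feature-anchor idea during local training: features are pulled toward per-class anchor vectors shared by all clients, so local updates stay in a common feature space. The names, the penalty weight, and the anchor update rule are assumptions for illustration, not FedFA's published implementation.

```python
import torch
import torch.nn.functional as F

def local_step(model, classifier, anchors, x, y, optimizer, mu=0.1):
    """One local SGD step with an anchor-alignment penalty added to the CE loss."""
    feats = model(x)                             # (batch, d) feature embeddings
    logits = classifier(feats)
    ce = F.cross_entropy(logits, y)
    anchor_loss = F.mse_loss(feats, anchors[y])  # pull features toward class anchors
    loss = ce + mu * anchor_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Tiny usage example with stand-in modules.
model = torch.nn.Sequential(torch.nn.Linear(32, 16), torch.nn.ReLU())
classifier = torch.nn.Linear(16, 10)
anchors = torch.randn(10, 16)                    # shared class anchors from the server
opt = torch.optim.SGD(list(model.parameters()) + list(classifier.parameters()), lr=0.01)
x, y = torch.randn(8, 32), torch.randint(0, 10, (8,))
print(local_step(model, classifier, anchors, x, y, opt))
```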

Quantum secret sharing is an important cryptographic primitive for network applications ranging from secure money transfer to multiparty quantum computation. Currently, most progress on quantum secret sharing suffers from the rate-distance bound, so the key rates are limited and impractical for large-scale deployment. Furthermore, the performance of most existing protocols is analyzed in the asymptotic regime without considering participant attacks. Here we report a measurement-device-independent quantum secret sharing protocol with improved key rate and transmission distance. Based on spatial multiplexing, our protocol can break the rate-distance bound over a network with at least ten communication parties. Compared with other protocols, our work improves the secret key rate by more than two orders of magnitude and achieves a longer transmission distance. We analyze the security of our protocol in the composable framework, taking participant attacks into account, and on this basis evaluate its performance in the finite-size regime. In addition, we investigate applying our protocol to digital signatures, where the signature rate is improved by more than $10^7$ times compared with existing protocols. Based on our results, we anticipate that our quantum secret sharing protocol will provide a solid foundation for multiparty applications on quantum networks.

We propose a deep learning method for solving the American options model with a free-boundary feature. To extract the free boundary, known as the early exercise boundary, from our proposed method, we introduce the Landau transformation. For efficient implementation, we further construct a dual-solution framework consisting of a novel auxiliary function and free-boundary equations. The auxiliary function is formulated to include the feed-forward deep neural network (DNN) output and to mimic the far-boundary behaviour, the smooth-pasting condition, and the remaining boundary conditions arising from the second-order space derivative and the first-order time derivative. Because the early exercise boundary and its derivative are not known a priori, the boundary values mimicked by the auxiliary function are approximate. Concurrently, we establish equations that approximate the early exercise boundary and its derivative directly from the DNN output, based on some linear relationships at the left boundary. Furthermore, the option Greeks are obtained from the derivatives of this auxiliary function. We test our implementation on several examples and compare the results with existing numerical methods. All indicators show that our proposed deep learning method offers an efficient alternative for pricing options with early exercise features.
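For reference, one standard form of the Landau transformation from the free-boundary literature (the paper's exact variables and conventions may differ): with time to maturity $\tau = T - t$ and early exercise boundary $B(\tau)$, the change of variables $x = \ln\!\big(S/B(\tau)\big)$ pins the free boundary at $x = 0$ and turns the moving-domain Black-Scholes problem into a fixed-domain one with a boundary-dependent drift. Writing $V$ for the transformed value,

$$
\frac{\partial V}{\partial \tau} \;=\; \frac{\sigma^2}{2}\,\frac{\partial^2 V}{\partial x^2} \;+\; \left(r - \frac{\sigma^2}{2} + \frac{B'(\tau)}{B(\tau)}\right)\frac{\partial V}{\partial x} \;-\; rV .
$$

The appearance of $B'(\tau)/B(\tau)$ in the drift is why the boundary and its derivative must both be recovered, which the auxiliary-function and free-boundary equations above are designed to handle.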

Classifying logo images is a challenging task, as they contain elements such as text or shapes that can represent anything from known objects to abstract shapes. While the current state of the art for logo classification addresses the problem as a multi-class task focusing on a single characteristic, logos can have several simultaneous labels, such as different colors. This work proposes a method that allows visually similar logos to be classified and searched from a set of data according to their shape, color, commercial sector, semantics, general characteristics, or a combination of features selected by the user. Unlike previous approaches, the proposal employs a series of multi-label deep neural networks, each specialized in a specific attribute, and combines the obtained features to perform the similarity search. To examine the classification system in depth, different existing logo topologies are compared and some of their problems are analyzed, such as the incomplete labeling that trademark registration databases usually contain. The proposal is evaluated on 76,000 logos (7 times more than previous approaches) from the European Union Trademarks dataset, which is organized hierarchically using the Vienna ontology. Overall, the experiments yield reliable quantitative and qualitative results, reducing the normalized average rank error of the state of the art from 0.040 to 0.018 on the Trademark Image Retrieval task. Finally, given that the semantics of logos can often be subjective, graphic design students and professionals were surveyed. Results show that the proposed methodology provides better labeling than a human expert operator, improving the label ranking average precision from 0.53 to 0.68.
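A hypothetical sketch of the combined similarity search: each attribute-specific network yields an embedding, and the embeddings for the user-selected attributes are L2-normalized, concatenated, and queried with a nearest-neighbor search. The attribute names and dimensions below are illustrative, not the paper's architecture.

```python
import numpy as np

def combined_embedding(per_attribute_feats, selected):
    """Concatenate L2-normalized features of the user-selected attributes."""
    parts = []
    for name in selected:
        f = per_attribute_feats[name]
        parts.append(f / (np.linalg.norm(f, axis=1, keepdims=True) + 1e-12))
    return np.concatenate(parts, axis=1)

rng = np.random.default_rng(2)
n = 1000                                        # stand-in database of logo features
feats = {"shape": rng.standard_normal((n, 64)),
         "color": rng.standard_normal((n, 16)),
         "semantics": rng.standard_normal((n, 128))}
db = combined_embedding(feats, ["shape", "color"])   # user picks shape + color
query = db[0]
dists = np.linalg.norm(db - query, axis=1)
print(np.argsort(dists)[:5])                    # indices of the five most similar logos
```

Normalizing each attribute block before concatenation keeps any single attribute from dominating the distance, which is one simple way to realize the user-weighted combination described above.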

Federated Learning (FL) is a decentralized machine-learning paradigm in which a global server iteratively averages the model parameters of local users without accessing their data. User heterogeneity has imposed significant challenges on FL, as it can produce drifted global models that are slow to converge. Knowledge distillation has recently emerged to tackle this issue by refining the server model using aggregated knowledge from heterogeneous users, rather than directly averaging their model parameters. This approach, however, depends on a proxy dataset, making it impractical unless such a prerequisite is satisfied. Moreover, the ensemble knowledge is not fully utilized to guide local model learning, which may in turn affect the quality of the aggregated model. Inspired by the prior art, we propose a data-free knowledge distillation approach to address heterogeneous FL, where the server learns a lightweight generator to ensemble user information in a data-free manner, which is then broadcast to users, regulating local training with the learned knowledge as an inductive bias. Empirical studies, supported by theoretical implications, show that our approach facilitates FL with better generalization performance using fewer communication rounds, compared with the state of the art.
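A hedged sketch of the server-side step in data-free distillation: a light generator is trained so that its fake (feature-level) samples, tagged with sampled labels, are classified as those labels by the ensemble of uploaded user classifiers; the generator is then broadcast to regularize local training. All names, shapes, and the exact ensemble loss are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

latent_dim, feat_dim, num_classes = 16, 32, 10
generator = torch.nn.Sequential(
    torch.nn.Linear(latent_dim + num_classes, 64),
    torch.nn.ReLU(),
    torch.nn.Linear(64, feat_dim),
)

def server_generator_step(user_classifiers, opt, batch=64):
    """Train the generator so its fake features are classified as the sampled
    labels by every user classifier in the ensemble."""
    y = torch.randint(0, num_classes, (batch,))
    z = torch.randn(batch, latent_dim)
    fake = generator(torch.cat([z, F.one_hot(y, num_classes).float()], dim=1))
    loss = sum(F.cross_entropy(clf(fake), y) for clf in user_classifiers)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# Stand-in uploaded classifier heads from three users.
user_classifiers = [torch.nn.Linear(feat_dim, num_classes) for _ in range(3)]
opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
print(server_generator_step(user_classifiers, opt))
```

Because the generator works on compact feature-level inputs rather than raw images, no proxy dataset is needed, which is the point the abstract emphasizes.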

It has been shown that deep neural networks are prone to overfitting on biased training data. To address this issue, meta-learning employs a meta model to correct the training bias. Despite the promising performance, extremely slow training is currently the bottleneck of meta-learning approaches. In this paper, we introduce a novel Faster Meta Update Strategy (FaMUS) to replace the most expensive step in the meta gradient computation with a faster layer-wise approximation. We empirically find that FaMUS yields a reasonably accurate as well as low-variance approximation of the meta gradient. We conduct extensive experiments to verify the proposed method on two tasks. We show that our method saves two-thirds of the training time while maintaining comparable, or even achieving better, generalization performance. In particular, our method achieves state-of-the-art performance on both synthetic and realistic noisy labels, and obtains promising performance on long-tailed recognition on standard benchmarks.

Image-to-image translation aims to learn the mapping between two visual domains. There are two main challenges for many applications: 1) the lack of aligned training pairs and 2) multiple possible outputs from a single input image. In this work, we present an approach based on disentangled representation for producing diverse outputs without paired training images. To achieve diversity, we propose to embed images onto two spaces: a domain-invariant content space capturing shared information across domains and a domain-specific attribute space. Our model takes the encoded content features extracted from a given input and attribute vectors sampled from the attribute space to produce diverse outputs at test time. To handle unpaired training data, we introduce a novel cross-cycle consistency loss based on disentangled representations. Qualitative results show that our model can generate diverse and realistic images on a wide range of tasks without paired training data. For quantitative comparisons, we measure realism with a user study and diversity with a perceptual distance metric. We apply the proposed model to domain adaptation and show competitive performance when compared to the state of the art on the MNIST-M and the LineMod datasets.
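An illustrative sketch of the cross-cycle consistency idea: swap the domain-specific attribute codes between two unpaired images, translate, swap back, and require reconstruction of the originals. The encoders and generators below are stand-in linear modules (the real model uses domain-specific attribute encoders and convolutional networks); only the loss structure is the point.

```python
import torch

d = 8
Ec = torch.nn.Linear(d, d)        # shared content encoder (stand-in)
Ea = torch.nn.Linear(d, d)        # attribute encoder (stand-in; per-domain in the paper)
G_x = torch.nn.Linear(2 * d, d)   # generator for domain X
G_y = torch.nn.Linear(2 * d, d)   # generator for domain Y

def cross_cycle_loss(x, y):
    cx, ax = Ec(x), Ea(x)
    cy, ay = Ec(y), Ea(y)
    # First translation: swap attributes across domains.
    u = G_x(torch.cat([cy, ax], dim=1))   # content of y with attribute of x
    v = G_y(torch.cat([cx, ay], dim=1))   # content of x with attribute of y
    # Second translation: swap back and compare with the inputs.
    x_rec = G_x(torch.cat([Ec(v), Ea(u)], dim=1))
    y_rec = G_y(torch.cat([Ec(u), Ea(v)], dim=1))
    return (x - x_rec).abs().mean() + (y - y_rec).abs().mean()

x, y = torch.randn(4, d), torch.randn(4, d)
print(cross_cycle_loss(x, y).item())
```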
