
Many recent state-of-the-art (SOTA) optical flow models use finite-step recurrent update operations to emulate traditional algorithms by encouraging iterative refinements toward a stable flow estimation. However, these RNNs impose large computation and memory overheads, and are not directly trained to model such stable estimation. They can converge poorly and thereby suffer from performance degradation. To combat these drawbacks, we propose deep equilibrium (DEQ) flow estimators, an approach that directly solves for the flow as the infinite-level fixed point of an implicit layer (using any black-box solver), and differentiates through this fixed point analytically (thus requiring $O(1)$ training memory). This implicit-depth approach is not predicated on any specific model, and thus can be applied to a wide range of SOTA flow estimation model designs. The use of these DEQ flow estimators allows us to compute the flow faster using, e.g., fixed-point reuse and inexact gradients, consumes $4\sim6\times$ less training memory than the recurrent counterpart, and achieves better results with the same computation budget. In addition, we propose a novel, sparse fixed-point correction scheme to stabilize our DEQ flow estimators, which addresses a longstanding challenge for DEQ models in general. We test our approach in various realistic settings and show that it improves SOTA methods on the Sintel and KITTI datasets with substantially better computational and memory efficiency.
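
As a rough illustration of the core mechanism, the sketch below (in PyTorch) solves for a fixed point with a simple iterative solver run outside of autograd, then backpropagates with the inexact one-step gradient mentioned above; the toy update function `f` and the class name `DEQFixedPoint` are illustrative stand-ins for a real flow-update operator, not the paper's implementation.

```python
import torch

class DEQFixedPoint(torch.nn.Module):
    """Toy deep-equilibrium layer: solves z* = f(z, x) by fixed-point
    iteration and backpropagates with O(1) memory via an inexact gradient."""
    def __init__(self, f, max_iter=50, tol=1e-4):
        super().__init__()
        self.f, self.max_iter, self.tol = f, max_iter, tol

    def forward(self, x):
        z = torch.zeros_like(x)            # could be a reused fixed point instead
        with torch.no_grad():              # solver runs outside of autograd
            for _ in range(self.max_iter):
                z_next = self.f(z, x)
                if (z_next - z).norm() < self.tol:
                    z = z_next
                    break
                z = z_next
        # One differentiable application of f at the (detached) fixed point:
        # this is the inexact-gradient shortcut; the exact implicit gradient
        # would instead solve a linear system involving the Jacobian of f.
        return self.f(z.detach(), x)

# A contractive toy update standing in for a recurrent flow-update operator:
f = lambda z, x: 0.5 * torch.tanh(z) + x
layer = DEQFixedPoint(f)
x = torch.randn(8, requires_grad=True)
z_star = layer(x)
z_star.sum().backward()   # training memory does not grow with solver iterations
print(x.grad.shape)
```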

Related Content

Non-rigid registration, which deforms a source shape in a non-rigid way to align with a target shape, is a classical problem in computer vision. Such problems can be challenging because of imperfect data (noise, outliers and partial overlap) and high degrees of freedom. Existing methods typically adopt the $\ell_{p}$ type robust norm to measure the alignment error and regularize the smoothness of deformation, and use a proximal algorithm to solve the resulting non-smooth optimization problem. However, the slow convergence of such algorithms limits their wide applications. In this paper, we propose a formulation for robust non-rigid registration based on a globally smooth robust norm for alignment and regularization, which can effectively handle outliers and partial overlaps. The problem is solved using the majorization-minimization algorithm, which reduces each iteration to a convex quadratic problem with a closed-form solution. We further apply Anderson acceleration to speed up the convergence of the solver, enabling the solver to run efficiently on devices with limited compute capability. Extensive experiments demonstrate the effectiveness of our method for non-rigid alignment between two shapes with outliers and partial overlaps, with quantitative evaluation showing that it outperforms state-of-the-art methods in terms of registration accuracy and computational speed. The source code is available at //github.com/yaoyx689/AMM_NRR.
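
To make the acceleration step concrete, here is a minimal Anderson-acceleration routine for a generic fixed-point map `g` (a stand-in for one majorization-minimization iteration of the registration solver); the function name and the damping-free mixing rule are illustrative assumptions, not the paper's exact solver.

```python
import numpy as np

def anderson_accelerate(g, x0, m=5, max_iter=100, tol=1e-8):
    """Anderson acceleration of the fixed-point iteration x <- g(x):
    mixes the last m iterates to minimize the residual norm."""
    x = x0.copy()
    X, F = [x], [g(x)]                                 # iterates and their images
    for _ in range(max_iter):
        k = len(X)
        R = np.stack([F[i] - X[i] for i in range(k)], axis=1)  # residual columns
        # Solve min ||R a|| subject to sum(a) = 1 via the normal equations.
        ones = np.ones(k)
        try:
            alpha = np.linalg.solve(R.T @ R + 1e-10 * np.eye(k), ones)
        except np.linalg.LinAlgError:
            alpha = ones
        alpha /= alpha.sum()
        x_new = sum(a * f for a, f in zip(alpha, F))   # mixed image
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
        X.append(x); F.append(g(x))
        if len(X) > m:                                 # sliding memory window
            X.pop(0); F.pop(0)
    return x

# Example: accelerate a slowly converging contraction (fixed point [1, 2]).
g = lambda x: 0.9 * x + np.array([0.1, 0.2])
print(anderson_accelerate(g, np.zeros(2)))
```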

We consider a scalable user-centric cell-free massive MIMO network with distributed remote radio units (RUs), enabling macrodiversity and joint processing. Due to the limited uplink (UL) pilot dimension, multiuser interference in the UL pilot transmission phase makes channel estimation a non-trivial problem. We make use of two types of UL pilot signals, sounding reference signal (SRS) and demodulation reference signal (DMRS) pilots, for the estimation of the channel subspace and its instantaneous realization, respectively. The SRS pilots are transmitted over multiple time slots and resource blocks according to a Latin-squares-based hopping scheme, which aims at averaging out the interference of different SRS co-pilot users. We propose a robust principal component analysis approach for channel subspace estimation from the SRS signal samples, employed at the RUs for each associated user. The estimated subspace is further used at the RUs for DMRS pilot decontamination and instantaneous channel estimation. We provide numerical simulations to compare the system performance using our subspace and channel estimation scheme with the cases of ideal partial subspace/channel knowledge and pilot matching channel estimation. The results show that a system with a properly designed SRS pilot hopping scheme can closely approximate the performance of a genie-aided system.
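
As a simplified illustration of the subspace-estimation step, the sketch below recovers a channel subspace from noisy samples via plain PCA on the sample covariance; the paper's robust PCA approach and the SRS hopping structure are not reproduced here, and all dimensions are toy values.

```python
import numpy as np

rng = np.random.default_rng(0)
M, r, N = 16, 3, 200          # RU antennas, subspace dimension, # SRS samples

# Ground-truth r-dimensional channel subspace at the RU.
U_true, _ = np.linalg.qr(rng.standard_normal((M, r)) + 1j * rng.standard_normal((M, r)))
# Channel samples lie in span(U_true); interference/noise perturbs the observations.
H = U_true @ (rng.standard_normal((r, N)) + 1j * rng.standard_normal((r, N)))
Y = H + 0.3 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))

C = (Y @ Y.conj().T) / N               # sample covariance of the SRS observations
eigval, eigvec = np.linalg.eigh(C)     # eigenvalues in ascending order
U_hat = eigvec[:, -r:]                 # dominant eigenvectors = subspace estimate

# Subspace distance via principal angles (0 = perfect recovery).
s = np.linalg.svd(U_true.conj().T @ U_hat, compute_uv=False)
print("largest principal angle (rad):", np.arccos(np.clip(s.min(), -1, 1)))
```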

Scene flow represents the motion of points in 3D space and is the counterpart of optical flow, which represents the motion of pixels in the 2D image. However, it is difficult to obtain the ground truth of scene flow in real scenes, and recent studies therefore train on synthetic data. How to train a scene flow network with unsupervised methods on real-world data is thus of crucial significance. This paper proposes a novel unsupervised learning method for scene flow, which trains on the images of two consecutive frames taken by a monocular camera, without scene flow ground truth. Our method realizes the goal of training a scene flow network with real-world data, which bridges the gap between training data and test data and broadens the scope of data available for training. Unsupervised learning of scene flow in this paper consists of two main parts: (i) depth estimation and camera pose estimation, and (ii) scene flow estimation based on four different loss functions. Depth estimation and camera pose estimation produce the depth maps and the camera pose between two consecutive frames, which provide the inputs for the subsequent scene flow estimation. We then use a depth consistency loss, a dynamic-static consistency loss, a Chamfer loss, and a Laplacian regularization loss to train the scene flow network without supervision. To our knowledge, this is the first paper to realize unsupervised learning of 3D scene flow from a monocular camera. Experimental results on KITTI show that our method achieves strong performance compared to the traditional methods Iterative Closest Point (ICP) and Fast Global Registration (FGR). The source code is available at: //github.com/IRMVLab/3DUnMonoFlow.
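
Of the four losses above, the Chamfer loss is the easiest to illustrate in isolation; the PyTorch sketch below is a generic implementation of the symmetric Chamfer distance between a warped source cloud and a target cloud, not the paper's exact formulation.

```python
import torch

def chamfer_loss(p, q):
    """Symmetric Chamfer distance between point clouds p (N, 3) and q (M, 3):
    each point is matched to its nearest neighbor in the other cloud."""
    d = torch.cdist(p, q)                         # (N, M) pairwise distances
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()

# Source points warped by the predicted scene flow vs. points of the next frame:
p = torch.randn(1024, 3, requires_grad=True)      # warped source cloud
q = torch.randn(1024, 3)                          # target cloud
loss = chamfer_loss(p, q)
loss.backward()                                   # gradients reach the flow prediction
print(loss.item())
```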

We study the problem of high-dimensional sparse mean estimation in the presence of an $\epsilon$-fraction of adversarial outliers. Prior work obtained sample and computationally efficient algorithms for this task for identity-covariance subgaussian distributions. In this work, we develop the first efficient algorithms for robust sparse mean estimation without a priori knowledge of the covariance. For distributions on $\mathbb R^d$ with "certifiably bounded" $t$-th moments and sufficiently light tails, our algorithm achieves error of $O(\epsilon^{1-1/t})$ with sample complexity $m = (k\log(d))^{O(t)}/\epsilon^{2-2/t}$. For the special case of the Gaussian distribution, our algorithm achieves near-optimal error of $\tilde O(\epsilon)$ with sample complexity $m = O(k^4 \mathrm{polylog}(d))/\epsilon^2$. Our algorithms follow the Sum-of-Squares-based proofs-to-algorithms approach. We complement our upper bounds with Statistical Query and low-degree polynomial testing lower bounds, providing evidence that the sample-time-error tradeoffs achieved by our algorithms are qualitatively the best possible.

Multiple hypothesis testing has been widely applied to problems dealing with high-dimensional data, e.g., selecting significant variables and controlling the selection error rate. The most prevalent measure of error rate in multiple hypothesis testing is the false discovery rate (FDR). In recent years, the local false discovery rate (fdr) has drawn much attention due to its advantage of assessing the confidence of individual hypotheses. However, most methods estimate the fdr through p-values or statistics with known null distributions, which are sometimes unavailable or unreliable. Adopting the innovative methodology of competition-based procedures, e.g., the knockoff filter, this paper proposes a new approach, named TDfdr, to local false discovery rate estimation, which is free of p-values or known null distributions. Simulation results demonstrate that TDfdr can accurately estimate the fdr with two competition-based procedures. In real data analysis, the power of TDfdr for variable selection is verified on two biological datasets.
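
The competition idea can be caricatured as follows: knockoff (decoy) scores stand in for the unknown null distribution, so the local fdr can be estimated from score densities alone, with no p-values. The sketch below is a simplified stand-in for TDfdr, not its exact algorithm; the null proportion `pi0` is assumed known here.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
null_scores = rng.normal(0, 1, 900)               # true nulls among the targets
signal_scores = rng.normal(3, 1, 100)             # true signals
target = np.concatenate([null_scores, signal_scores])
decoy = rng.normal(0, 1, 1000)                    # knockoff scores ~ null distribution

f_all = gaussian_kde(target)                      # mixture density estimate
f_null = gaussian_kde(decoy)                      # null density learned from decoys
pi0 = 0.9                                         # null proportion (assumed known here)

def local_fdr(s):
    """fdr(s) = pi0 * f_null(s) / f(s), clipped to [0, 1]."""
    return np.clip(pi0 * f_null(s) / f_all(s), 0, 1)

print(local_fdr(np.array([0.0, 2.0, 4.0])))       # high, medium, low fdr
```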

When a neural network model and data are outsourced to a cloud server for inference, it is desirable to preserve the confidentiality of both model and data, as the involved parties (i.e., the cloud server, the model-providing client, and the data-providing client) may not mutually trust one another. Solutions based on multi-party computation, trusted execution environments (TEE), and leveled or fully homomorphic encryption (LHE/FHE) have been proposed, but their limitations hamper practical application. We propose a new framework based on the synergistic integration of LHE and TEE, which enables collaboration among the three mutually untrusted parties while minimizing the involvement of the (relatively) resource-constrained TEE and fully utilizing the untrusted but more resource-rich part of the server. We also propose a generic and efficient LHE-based inference scheme as an important performance-determining component of the framework. We implement and evaluate the proposed system on a moderate platform and show that our scheme is more applicable and scalable to various settings and performs better than state-of-the-art LHE-based solutions.
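
A toy version of the LHE/TEE split can be sketched with the python-paillier library (`phe`) as an additively homomorphic stand-in for the leveled HE scheme: the untrusted server evaluates the linear layer on ciphertexts, and the key holder plays the role of the TEE by decrypting, applying the non-linearity, and (in a full pipeline) re-encrypting. This is an assumption-laden illustration, not the paper's protocol.

```python
import numpy as np
from phe import paillier   # additively homomorphic scheme as an LHE stand-in

pub, priv = paillier.generate_paillier_keypair(n_length=1024)

x = np.array([0.5, -1.2, 2.0])                 # data client's private input
enc_x = [pub.encrypt(float(v)) for v in x]     # encrypted before leaving the client

W = np.array([[0.3, -0.1, 0.7],
              [0.2, 0.4, -0.5]])               # model weights, plaintext at the server
b = np.array([0.1, -0.2])

# Untrusted server: only ciphertext-plaintext multiplications and additions.
enc_y = [sum(float(W[i, j]) * enc_x[j] for j in range(3)) + float(b[i])
         for i in range(2)]

# "TEE" (simulated by the key holder): decrypt, apply ReLU, re-encrypt if needed.
y = np.maximum([priv.decrypt(c) for c in enc_y], 0.0)
print(y)   # matches np.maximum(W @ x + b, 0) up to encoding precision
```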

The optimal power flow (OPF) problem, a critical component of power system operations, becomes increasingly difficult to solve due to the variability, intermittency, and unpredictability that renewable energy brings to the power system. Although traditional optimization techniques, such as stochastic and robust optimization approaches, can address the OPF problem in the face of renewable energy uncertainty, their effectiveness on large-scale problems remains limited. As a result, deep learning techniques, such as neural networks, have recently been developed to improve computational efficiency in solving large-scale OPF problems. However, the feasibility and optimality of the solution may not be guaranteed. In this paper, we propose an optimization model-informed generative adversarial network (MI-GAN) framework to solve OPF under uncertainty. The main contributions are threefold: (1) to ensure feasibility and improve the optimality of generated solutions, three important layers are proposed: a feasibility filter layer, a comparison layer, and a gradient-guided layer; (2) within the GAN-based framework, an efficient model-informed selector incorporating these three new layers is established; and (3) a new recursive iteration algorithm is proposed to further improve solution optimality. Numerical results on IEEE test systems show that the proposed method is effective and promising.
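
The feasibility-filter and comparison layers can be caricatured on a toy dispatch problem: candidate solutions are screened against box limits and a power-balance constraint, and the cheapest feasible candidate is kept. All constraint and cost functions below are illustrative, not the paper's IEEE-test-system formulation, and the gradient-guided layer is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
p_min = np.array([0.0, 0.0, 0.0])                       # generator lower limits
p_max = np.array([2.0, 1.5, 1.0])                       # generator upper limits
demand, tol = 2.25, 0.05
cost = lambda p: p @ np.array([10.0, 15.0, 20.0])       # linear generation cost

candidates = rng.uniform(p_min, p_max, size=(256, 3))   # simulated GAN outputs

def feasible(p):
    in_bounds = np.all((p >= p_min) & (p <= p_max))     # generator limits
    balanced = abs(p.sum() - demand) < tol              # power balance
    return in_bounds and balanced

kept = [p for p in candidates if feasible(p)]           # feasibility filter layer
best = min(kept, key=cost) if kept else None            # comparison layer
print(f"{len(kept)} feasible candidates")
if best is not None:
    print("best dispatch:", np.round(best, 3), "cost:", round(cost(best), 2))
```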

From a model-building perspective, we propose in this paper a paradigm shift for fitting over-parameterized models. Philosophically, the mindset is to fit models to future observations rather than to the observed sample. Technically, after choosing an imputation model for generating future observations, we fit over-parameterized models to these future observations by optimizing an approximation of the desired expected loss function, based on its sample counterpart and an adaptive simplicity-preference function. This technique is discussed in detail for both creating bootstrap imputations and obtaining the final estimate from them. The method is illustrated on the many-normal-means problem, $n < p$ linear regression, and deep convolutional neural networks for image classification of MNIST digits. The numerical results demonstrate superior performance across these three different types of applications. For example, on the many-normal-means problem, our method uniformly dominates James-Stein and Efron's $g$-modeling, and for MNIST image classification, it performs better than all existing methods and reaches arguably the best possible result. While this paper is largely expository because of the ambitious task of looking at over-parameterized models from this new perspective, fundamental theoretical properties are also investigated. We conclude the paper with a few remarks.
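
A much-simplified caricature of fitting to future observations, on the many-normal-means problem: a parametric bootstrap plays the role of the imputation model, and a single shrinkage factor is tuned against the imputed future data rather than the observed sample. This toy omits the adaptive simplicity-preference function and is only meant to convey the mindset.

```python
import numpy as np

rng = np.random.default_rng(0)
n, B = 200, 200
theta = rng.normal(0, 1, n)                  # true means (unknown in practice)
y = theta + rng.normal(0, 1, n)              # observed sample, y_i ~ N(theta_i, 1)

# Imputation model (parametric bootstrap): re-observations and future data are
# drawn independently around the observed sample.
y_boot = y[None, :] + rng.normal(0, 1, (B, n))     # bootstrap re-observations
y_future = y[None, :] + rng.normal(0, 1, (B, n))   # imputed future observations

# Tune a shrinkage factor c against the imputed future data, not the sample:
cs = np.linspace(0, 1, 101)
future_loss = [((c * y_boot - y_future) ** 2).mean() for c in cs]
c_hat = cs[int(np.argmin(future_loss))]

print("chosen shrinkage:", c_hat)
print("risk of c*y:", round(((c_hat * y - theta) ** 2).mean(), 3))
print("risk of raw y:", round(((y - theta) ** 2).mean(), 3))
```

On this toy problem the tuned shrinkage beats the raw estimate, illustrating why targeting future observations induces regularization even without an explicit penalty.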

We propose a novel and unified framework for change-point estimation in multivariate time series. The proposed method is fully nonparametric, enjoys effortless tuning and is robust to temporal dependence. One salient and distinct feature of the proposed method is its versatility, where it allows change-point detection for a broad class of parameters (such as mean, variance, correlation and quantile) in a unified fashion. At the core of our method, we couple the self-normalization (SN) based tests with a novel nested local-window segmentation algorithm, which seems new in the growing literature of change-point analysis. Due to the presence of an inconsistent long-run variance estimator in the SN test, non-standard theoretical arguments are further developed to derive the consistency and convergence rate of the proposed SN-based change-point detection method. Extensive numerical experiments and relevant real data analysis are conducted to illustrate the effectiveness and broad applicability of our proposed method in comparison with state-of-the-art approaches in the literature.
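
To show the self-normalization ingredient in isolation, the sketch below computes a single-change-point SN statistic for the mean (in the spirit of Shao and Zhang, 2010), which requires no long-run variance estimate or bandwidth tuning; the nested local-window segmentation that handles multiple change points is not reproduced here.

```python
import numpy as np

def sn_mean_change_stat(x):
    """Self-normalized statistic for a single mean change: the CUSUM contrast
    is normalized by within-segment CUSUM sums instead of a long-run variance
    estimate. Returns (max statistic, estimated change location)."""
    n = len(x)
    S = np.concatenate([[0.0], np.cumsum(x)])            # S[i] = sum of x[:i]
    best, k_hat = -np.inf, None
    for k in range(2, n - 1):                            # candidate change point
        D = (S[k] - k / n * S[n]) / np.sqrt(n)           # CUSUM contrast
        i = np.arange(1, k + 1)
        left = ((S[i] - i / k * S[k]) ** 2).sum()
        j = np.arange(k + 1, n + 1)
        right = ((S[n] - S[j - 1] - (n - j + 1) / (n - k) * (S[n] - S[k])) ** 2).sum()
        V = (left + right) / n ** 2                      # self-normalizer
        if D ** 2 / V > best:
            best, k_hat = D ** 2 / V, k
    return best, k_hat

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1, 150), rng.normal(1, 1, 150)])
print(sn_mean_change_stat(x))   # large statistic, location near 150
```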

Multiclass probability estimation is the problem of estimating the conditional probability of a data point belonging to a class given its covariate information. It has broad applications in statistical analysis and data science. Recently, a class of weighted Support Vector Machines (wSVMs) has been developed to estimate class probabilities through ensemble learning for $K$-class problems (Wu, Zhang and Liu, 2010; Wang, Zhang and Wu, 2019), where $K$ is the number of classes. The estimators are robust and achieve high accuracy for probability estimation, but their learning is implemented through pairwise coupling, which demands time polynomial in $K$. In this paper, we propose two new learning schemes, baseline learning and One-vs-All (OVA) learning, to further improve wSVMs in terms of computational efficiency and estimation accuracy. In particular, the baseline learning has optimal computational complexity in the sense that it is linear in $K$. Though not the most computationally efficient, the OVA learning offers the best estimation accuracy among all the procedures under comparison. The resulting estimators are distribution-free and shown to be consistent. We further conduct extensive numerical experiments to demonstrate finite-sample performance.
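
A simplified sketch of the OVA scheme: for each class, a series of weighted binary SVMs is trained over a grid of weights $\pi$, and the class probability at a point is read off from where the weighted decision flips sign (the weighted SVM with weight $\pi$ targets $\mathrm{sign}(p(x)-\pi)$). The weighting convention and grid below are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=300, n_classes=3, n_informative=5,
                           random_state=0)
x_new = X[:5]                                  # points to score
grid = np.linspace(0.05, 0.95, 19)             # grid of weights pi
probs = np.zeros((len(x_new), 3))

for k in range(3):
    yk = np.where(y == k, 1, -1)               # one-vs-all labels for class k
    preds = []
    for pi in grid:
        # Weight positives by 1 - pi and negatives by pi, so the weighted
        # decision approximates sign(p_k(x) - pi).
        w = np.where(yk == 1, 1 - pi, pi)
        clf = SVC(kernel="rbf", C=1.0).fit(X, yk, sample_weight=w)
        preds.append(clf.predict(x_new))       # stays +1 while pi < p_k(x)
    preds = np.array(preds)                    # shape (len(grid), len(x_new))
    # p_k(x) ~ largest pi at which the weighted prediction is still +1.
    probs[:, k] = np.where(preds == 1, grid[:, None], 0).max(axis=0)

probs /= probs.sum(axis=1, keepdims=True)      # normalize across classes
print(np.round(probs, 2))
```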
