
We present a massively parallel Lagrange decomposition method for solving 0--1 integer linear programs occurring in structured prediction. We propose a new iterative update scheme for solving the Lagrangean dual and a perturbation technique for decoding primal solutions. For representing subproblems we follow Lange et al. (2021) and use binary decision diagrams (BDDs). Our primal and dual algorithms require little synchronization between subproblems and optimization over BDDs needs only elementary operations without complicated control flow. This allows us to exploit the parallelism offered by GPUs for all components of our method. We present experimental results on combinatorial problems from MAP inference for Markov Random Fields, quadratic assignment and cell tracking for developmental biology. Our highly parallel GPU implementation improves upon the running times of the algorithms from Lange et al. (2021) by up to an order of magnitude. In particular, we come close to or outperform some state-of-the-art specialized heuristics while being problem agnostic. Our implementation is available at //github.com/LPMP/BDD.
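
As a toy illustration of the decomposition scheme (and not the authors' BDD-based GPU solver), the sketch below splits a tiny 0-1 problem into two subproblems that share both variables and maximizes the Lagrangean dual by subgradient ascent; the exhaustive `solve_subproblem` is a stand-in for a BDD oracle, and all costs are hypothetical.

```python
import numpy as np

def solve_subproblem(costs):
    """Exhaustive minimization over {0,1}^2; a stand-in for a BDD solve."""
    best, best_x = np.inf, None
    for x1 in (0, 1):
        for x2 in (0, 1):
            v = costs[0] * x1 + costs[1] * x2
            if v < best:
                best, best_x = v, np.array([x1, x2])
    return best, best_x

c_a = np.array([1.0, -2.0])   # hypothetical costs of subproblem A
c_b = np.array([-1.5, 0.5])   # hypothetical costs of subproblem B
lam = np.zeros(2)             # multipliers enforcing agreement x_A = x_B

for it in range(100):
    lb_a, x_a = solve_subproblem(c_a + lam)   # min over x_A of (c_a + lam) . x_A
    lb_b, x_b = solve_subproblem(c_b - lam)   # min over x_B of (c_b - lam) . x_B
    dual_bound = lb_a + lb_b                  # lower bound on the joint optimum
    g = x_a - x_b                             # subgradient of the dual at lam
    if not g.any():
        break                                 # copies agree: consistent primal found
    lam += (1.0 / (1 + it)) * g               # diminishing-step dual ascent
```

In the paper it is the per-subproblem optimizations and dual updates that run in parallel on the GPU; the subgradient loop here is only the simplest such update scheme.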

Related content

Interpolating between measures supported by polygonal or polyhedral domains is a problem that has recently been addressed by the semi-discrete optimal transport framework. Within this framework, one of the domains is discretized with a set of samples, while the other remains continuous. In this paper we present a method to introduce some symmetry into the solution using coupled power diagrams. This symmetry is key to capturing the discontinuities of the transport map reflected in the geometry of the power cells. We design our method as a fixed-point algorithm alternating between computations of semi-discrete transport maps and recentering of the sites. The resulting objects are coupled power diagrams with identical geometry, allowing us to approximate displacement interpolation through linear interpolation of the mesh vertices. Through these coupled power diagrams, we obtain a natural way of jointly sampling measures.
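
The fixed-point alternation can be sketched at a high level as follows; `semi_discrete_ot` and `power_cell_centroids` are hypothetical helpers (a real implementation would solve the semi-discrete dual, e.g. by a damped Newton method, and integrate over power cells), and recentering each site at the midpoint of its two cell barycenters is an assumption of this sketch.

```python
import numpy as np

def coupled_power_diagrams(sites, mu, nu, n_iters=20, tol=1e-7):
    """Alternate semi-discrete OT solves with recentering of the sites."""
    for _ in range(n_iters):
        w_mu = semi_discrete_ot(sites, mu)              # power weights for measure mu
        w_nu = semi_discrete_ot(sites, nu)              # power weights for measure nu
        c_mu = power_cell_centroids(sites, w_mu, mu)    # barycenters of mu's cells
        c_nu = power_cell_centroids(sites, w_nu, nu)    # barycenters of nu's cells
        new_sites = 0.5 * (c_mu + c_nu)                 # recenter between both diagrams
        if np.linalg.norm(new_sites - sites) < tol:
            break                                       # fixed point reached
        sites = new_sites
    return sites, w_mu, w_nu

# Displacement interpolation at time t is then approximated by linearly
# interpolating the matched geometry: x_t = (1 - t) * x_mu + t * x_nu.
```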

Hybridisation and ensemble learning are popular model fusion techniques for improving the predictive power of forecasting methods. With limited research investigating the combination of these two promising approaches, this paper focuses on the utility of the Exponential-Smoothing-Recurrent Neural Network (ES-RNN) in the pool of base models for different ensembles. We compare against some state-of-the-art ensembling techniques, with arithmetic model averaging as a benchmark. We experiment with the M4 forecasting data set of 100,000 time series, and the results show that Feature-based Forecast Model Averaging (FFORMA) is, on average, the best technique for late data fusion with the ES-RNN. However, on the M4 Daily subset, stacking was the only ensemble that successfully dealt with the case where all base models perform similarly. Our experimental results indicate that we attain state-of-the-art forecasting results relative to the N-BEATS benchmark. We conclude that model averaging is a more robust ensemble than model selection and stacking strategies. Further, the results show that gradient boosting is superior for implementing ensemble learning strategies.
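
For concreteness, here is a minimal sketch of the two simplest late-fusion strategies discussed above, on synthetic stand-in forecasts (the base models, data, and meta-learner configuration are all hypothetical):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
y = rng.normal(size=200)                                   # stand-in target values
preds = y[:, None] + rng.normal(scale=[0.5, 0.8, 1.0],
                                size=(200, 3))             # 3 base models' forecasts

# 1) Arithmetic model averaging: equal weights, no training required.
avg_forecast = preds.mean(axis=1)

# 2) Stacking: a gradient-boosting meta-learner maps base forecasts to
#    the target, trained on a held-out split.
train, test = slice(0, 150), slice(150, None)
meta = GradientBoostingRegressor().fit(preds[train], y[train])
stacked_forecast = meta.predict(preds[test])
```

FFORMA additionally weights the base models per series using time-series features; that machinery is omitted here.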

In this paper, we study an active IRS-aided simultaneous wireless information and power transfer (SWIPT) system. Specifically, an active IRS is deployed to assist a multi-antenna access point (AP) to convey information and energy simultaneously to multiple single-antenna information users (IUs) and energy users (EUs). Two joint transmit and reflect beamforming optimization problems are investigated with different practical objectives. The first problem maximizes the weighted sum-power harvested by the EUs subject to individual signal-to-interference-plus-noise ratio (SINR) constraints at the IUs, while the second problem maximizes the weighted sum-rate of the IUs subject to individual energy harvesting (EH) constraints at the EUs. The optimization problems are non-convex and difficult to solve optimally. To tackle these two problems, we first rigorously prove that dedicated energy beams are not required for their corresponding semidefinite relaxation (SDR) reformulations and that the SDR is tight for the first problem, thus greatly simplifying the AP precoding design. Then, by capitalizing on the techniques of alternating optimization (AO), SDR, and successive convex approximation (SCA), computationally efficient algorithms are developed to obtain suboptimal solutions of the resulting optimization problems. Simulation results demonstrate that, given the same total system power budget, significant performance gains in terms of operating range of wireless power transfer (WPT), total harvested energy, as well as achievable rate can be obtained by our proposed designs over benchmark schemes (especially the one adopting a passive IRS). Moreover, it is advisable to deploy an active IRS in the proximity of the users for the effective operation of WPT/SWIPT.
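
A hedged sketch of the SDR step for the first problem (weighted sum-power maximization under per-IU SINR constraints) is given below using CVXPY. It covers only the AP precoding subproblem with randomly generated stand-in channels; the active-IRS reflection design and the AO/SCA outer loop are omitted, and extraction of the rank-one solution whose existence the paper proves is not shown.

```python
import numpy as np
import cvxpy as cp

n_t, n_iu, n_eu = 4, 2, 2                  # antennas, info users, energy users
rng = np.random.default_rng(1)
h = [rng.normal(size=n_t) + 1j * rng.normal(size=n_t) for _ in range(n_iu)]  # IU channels
g = [rng.normal(size=n_t) + 1j * rng.normal(size=n_t) for _ in range(n_eu)]  # EU channels
gamma, sigma2, P, alpha = 1.0, 1.0, 10.0, np.ones(n_eu)  # illustrative parameters

W = [cp.Variable((n_t, n_t), hermitian=True) for _ in range(n_iu)]  # one W_k per IU
W_sum = sum(W)
H = [np.outer(hk, hk.conj()) for hk in h]
G = [np.outer(gm, gm.conj()) for gm in g]

# Weighted sum-power harvested by the EUs (linear EH model assumed here).
harvested = sum(alpha[m] * cp.real(cp.trace(G[m] @ W_sum)) for m in range(n_eu))

cons = [Wk >> 0 for Wk in W] + [cp.real(cp.trace(W_sum)) <= P]
for k in range(n_iu):
    interf = sum(cp.real(cp.trace(H[k] @ W[j])) for j in range(n_iu) if j != k)
    cons.append(cp.real(cp.trace(H[k] @ W[k])) >= gamma * (interf + sigma2))

cp.Problem(cp.Maximize(harvested), cons).solve()
```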

We initiate a comprehensive experimental study of objective-based hierarchical clustering methods on massive datasets consisting of deep embedding vectors from computer vision and NLP applications. This includes a large variety of image embedding (ImageNet, ImageNetV2, NaBirds), word embedding (Twitter, Wikipedia), and sentence embedding (SST-2) vectors from several popular recent models (e.g. ResNet, ResNext, Inception V3, SBERT). Our study includes datasets with up to $4.5$ million entries with embedding dimensions up to $2048$. In order to address the challenge of scaling up hierarchical clustering to such large datasets we propose a new practical hierarchical clustering algorithm B++&C. It gives a 5%/20% improvement on average for the popular Moseley-Wang (MW) / Cohen-Addad et al. (CKMM) objectives (normalized) compared to a wide range of classic methods and recent heuristics. We also introduce a theoretical algorithm B2SAT&C which achieves a $0.74$-approximation for the CKMM objective in polynomial time. This is the first substantial improvement over the trivial $2/3$-approximation achieved by a random binary tree. Prior to this work, the best poly-time approximation of $\approx 2/3 + 0.0004$ was due to Charikar et al. (SODA'19).
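
As a concrete reference point, the sketch below evaluates the (unnormalized) Moseley-Wang objective of a hierarchy given as nested tuples of leaf indices; this is only the objective being optimized, not the B++&C algorithm itself, and the similarity matrix is a toy stand-in.

```python
import itertools
import numpy as np

def leaves(tree):
    return [tree] if isinstance(tree, int) else leaves(tree[0]) + leaves(tree[1])

def mw_objective(tree, w):
    """Sum over leaf pairs of w_ij * (n - |leaves under LCA(i, j)|)."""
    n = len(leaves(tree))
    total = 0.0
    def recurse(t):
        nonlocal total
        if isinstance(t, int):
            return [t]
        left, right = recurse(t[0]), recurse(t[1])
        size = len(left) + len(right)
        for i, j in itertools.product(left, right):   # pairs whose LCA is t
            total += w[i, j] * (n - size)
        return left + right
    recurse(tree)
    return total

w = np.array([[0, .9, .1, .1],
              [.9, 0, .1, .1],
              [.1, .1, 0, .8],
              [.1, .1, .8, 0]])
print(mw_objective(((0, 1), (2, 3)), w))   # 3.4: similar leaves merged first
```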

In linear regression we wish to estimate the optimum linear least squares predictor for a distribution over $d$-dimensional input points and real-valued responses, based on a small sample. Under standard random design analysis, where the sample is drawn i.i.d. from the input distribution, the least squares solution for that sample can be viewed as the natural estimator of the optimum. Unfortunately, this estimator almost always incurs an undesirable bias coming from the randomness of the input points, which is a significant bottleneck in model averaging. In this paper we show that it is possible to draw a non-i.i.d. sample of input points such that, regardless of the response model, the least squares solution is an unbiased estimator of the optimum. Moreover, this sample can be produced efficiently by augmenting a previously drawn i.i.d. sample with an additional set of $d$ points, drawn jointly according to a certain determinantal point process constructed from the input distribution rescaled by the squared volume spanned by the points. Motivated by this, we develop a theoretical framework for studying volume-rescaled sampling, and in the process prove a number of new matrix expectation identities. We use them to show that for any input distribution and $\epsilon>0$ there is a random design consisting of $O(d\log d+ d/\epsilon)$ points from which an unbiased estimator can be constructed whose expected square loss over the entire distribution is bounded by $1+\epsilon$ times the loss of the optimum. We provide efficient algorithms for generating such unbiased estimators in a number of practical settings and support our claims experimentally.
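
The unbiasedness claim can be checked numerically in a small discrete setting where all ordered size-$d$ draws can be enumerated; the sketch below reweights i.i.d. draws by the squared volume $\det(X_S^\top X_S)$ and verifies that the expected least squares solution matches the population optimum (the data here are arbitrary stand-ins).

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
d, m = 2, 6
X = rng.normal(size=(m, d))              # support points of the input distribution
p = np.full(m, 1 / m)                    # their probabilities
y = rng.normal(size=m)                   # arbitrary fixed responses

Sigma = X.T @ (p[:, None] * X)
w_star = np.linalg.solve(Sigma, X.T @ (p * y))   # population optimum

# Enumerate ordered size-d draws; reweight the i.i.d. probability by the
# squared volume det(X_S^T X_S), then renormalize.
est, Z = np.zeros(d), 0.0
for S in itertools.product(range(m), repeat=d):
    XS, yS = X[list(S)], y[list(S)]
    prob = np.prod(p[list(S)]) * np.linalg.det(XS.T @ XS)
    if prob <= 1e-15:
        continue                                     # repeated points: zero volume
    w_S = np.linalg.lstsq(XS, yS, rcond=None)[0]     # least squares on the sample
    est += prob * w_S
    Z += prob
print(np.allclose(est / Z, w_star))                  # True: the estimator is unbiased
```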

Diffusion models have recently shown great promise for generative modeling, outperforming GANs on perceptual quality and autoregressive models at density estimation. A remaining downside is their slow sampling time: generating high quality samples takes many hundreds or thousands of model evaluations. Here we make two contributions to help eliminate this downside: First, we present new parameterizations of diffusion models that provide increased stability when using few sampling steps. Second, we present a method to distill a trained deterministic diffusion sampler, using many steps, into a new diffusion model that takes half as many sampling steps. We then keep progressively applying this distillation procedure to our model, halving the number of required sampling steps each time. On standard image generation benchmarks like CIFAR-10, ImageNet, and LSUN, we start out with state-of-the-art samplers taking as many as 8192 steps, and are able to distill down to models taking as few as 4 steps without losing much perceptual quality; achieving, for example, a FID of 3.0 on CIFAR-10 in 4 steps. Finally, we show that the full progressive distillation procedure does not take more time than it takes to train the original model, thus representing an efficient solution for generative modeling using diffusion at both train and test time.
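
A schematic sketch of one distillation round follows: the student's one-step target is constructed so that a single deterministic step reproduces two teacher steps. The `model(z, t)` x-prediction interface and the `alpha`/`sigma` schedule arrays are simplifying assumptions of this sketch, not the paper's exact parameterization.

```python
import torch

def ddim_step(model, z_t, t, t_next, alpha, sigma):
    """One deterministic (DDIM-style) step with an x-predicting model."""
    x_pred = model(z_t, t)
    eps = (z_t - alpha[t] * x_pred) / sigma[t]
    return alpha[t_next] * x_pred + sigma[t_next] * eps

def distill_loss(student, teacher, z_t, t, alpha, sigma):
    with torch.no_grad():                           # two teacher steps...
        z_mid = ddim_step(teacher, z_t, t, t - 1, alpha, sigma)
        z_tgt = ddim_step(teacher, z_mid, t - 1, t - 2, alpha, sigma)
        # the x that makes a single student step from z_t land exactly on z_tgt
        x_tgt = (z_tgt - (sigma[t - 2] / sigma[t]) * z_t) / (
            alpha[t - 2] - (sigma[t - 2] / sigma[t]) * alpha[t])
    return ((student(z_t, t) - x_tgt) ** 2).mean()  # ...matched by one student step
```

After training, the student becomes the teacher for the next halving round.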

The crude Monte Carlo approximates the integral $$S(f)=\int_a^b f(x)\,\mathrm dx$$ with expected error (deviation) $\sigma(f)N^{-1/2},$ where $\sigma(f)^2$ is the variance of $f$ and $N$ is the number of random samples. If $f\in C^r$ then special variance reduction techniques can lower this error to the level $N^{-(r+1/2)}.$ In this paper, we consider methods of the form $$\overline M_{N,r}(f)=S(L_{m,r}f)+M_n(f-L_{m,r}f),$$ where $L_{m,r}$ is the piecewise polynomial interpolation of $f$ of degree $r-1$ using a partition of the interval $[a,b]$ into $m$ subintervals, $M_n$ is a Monte Carlo approximation using $n$ samples of $f,$ and $N$ is the total number of function evaluations used. We derive asymptotic error formulas for the methods $\overline M_{N,r}$ that use nonadaptive as well as adaptive partitions. Although the convergence rate $N^{-(r+1/2)}$ cannot be beaten, the asymptotic constants make a huge difference. For example, for $\int_0^1(x+d)^{-1}\mathrm dx$ and $r=4$ the best adaptive methods outperform the nonadaptive ones by a factor of roughly $10^{12}$ if $d=10^{-4},$ and $10^{29}$ if $d=10^{-8}.$ In addition, the proposed adaptive methods are easily implementable and well suited for automatic integration. We believe that the obtained results can be generalized to multivariate integration.
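
For $r=2$ the method is particularly simple: the integral of the piecewise linear interpolant is the trapezoidal rule, computed exactly, and plain Monte Carlo handles only the residual. A minimal sketch with a nonadaptive, uniform partition:

```python
import numpy as np

def mc_with_interpolation(f, a, b, m, n, rng):
    """S(L_{m,2} f) computed exactly + Monte Carlo on f - L_{m,2} f."""
    knots = np.linspace(a, b, m + 1)
    fk = f(knots)
    exact_part = ((fk[:-1] + fk[1:]) / 2 * np.diff(knots)).sum()  # trapezoidal rule
    u = rng.uniform(a, b, size=n)
    residual = f(u) - np.interp(u, knots, fk)      # residual after interpolation
    return exact_part + (b - a) * residual.mean()  # N = m + 1 + n evaluations total

rng = np.random.default_rng(0)
f = lambda x: 1.0 / (x + 1e-4)                     # the d = 1e-4 example above
# True value is ln(10001) ~ 9.21; a uniform partition handles the spike at 0
# poorly, which is exactly what motivates the adaptive variants in the paper.
print(mc_with_interpolation(f, 0.0, 1.0, 64, 1000, rng))
```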

Estimation of a conditional mean (linking a set of features to an outcome of interest) is a fundamental statistical task. While flexible nonparametric procedures are appealing, effective estimation in many classical nonparametric function spaces (e.g., multivariate Sobolev spaces) can be prohibitively difficult -- both statistically and computationally -- especially when the number of features is large. In this paper, we present (penalized) sieve estimators for regression in nonparametric tensor product spaces: these spaces are more amenable to multivariate regression and allow us, in part, to avoid the curse of dimensionality. Our estimators can be easily applied to multivariate nonparametric problems and have appealing statistical and computational properties. Moreover, they can effectively leverage additional structure such as feature sparsity. In this manuscript, we give theoretical guarantees indicating that the predictive performance of our estimators scales favorably in dimension. In addition, we present numerical examples comparing the finite-sample performance of the proposed estimators with several popular machine learning methods.
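
A hedged sketch of one such estimator: a tensor-product cosine basis whose truncation level $J$ plays the role of the sieve, fitted with a simple ridge penalty as a stand-in for the paper's penalization (basis choice, penalty, and data are all illustrative assumptions):

```python
import itertools
import numpy as np

def tensor_cosine_features(X, J):
    """phi_k(x) = prod_j cos(pi * k_j * x_j) over multi-indices with max k_j < J."""
    cols = []
    for k in itertools.product(range(J), repeat=X.shape[1]):
        cols.append(np.prod(np.cos(np.pi * np.array(k) * X), axis=1))
    return np.stack(cols, axis=1)

rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 2))                               # features on [0, 1]^2
y = np.sin(2 * np.pi * X[:, 0]) * X[:, 1] + 0.1 * rng.normal(size=500)

Phi = tensor_cosine_features(X, J=6)                         # J^d = 36 basis functions
lam = 1e-3                                                   # ridge penalty level
beta = np.linalg.solve(Phi.T @ Phi + lam * np.eye(Phi.shape[1]), Phi.T @ y)
y_hat = Phi @ beta                                           # fitted conditional mean
```

Feature sparsity would correspond to keeping only multi-indices supported on a small subset of coordinates, shrinking the basis dramatically.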

Correlation plays a critical role in the tracking field, especially in recent popular Siamese-based trackers. The correlation operation is a simple fusion method for computing the similarity between the template and the search region. However, the correlation operation itself is a local linear matching process that tends to lose semantic information and fall into local optima, which may be the bottleneck in designing high-accuracy tracking algorithms. Is there a better feature fusion method than correlation? To address this issue, inspired by the Transformer, this work presents a novel attention-based feature fusion network, which effectively combines the template and search region features using attention alone. Specifically, the proposed method includes an ego-context augment module based on self-attention and a cross-feature augment module based on cross-attention. Finally, we present a Transformer tracking method (named TransT) based on a Siamese-like feature extraction backbone, the designed attention-based fusion mechanism, and a classification and regression head. Experiments show that our TransT achieves very promising results on six challenging datasets, especially on the large-scale LaSOT, TrackingNet, and GOT-10k benchmarks. Our tracker runs at approximately 50 fps on a GPU. Code and models are available at //github.com/chenxin-dlut/TransT.
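
In the spirit of the two modules above (though with illustrative dimensions and a single fusion layer rather than the TransT architecture), here is a minimal PyTorch sketch of attention-only template/search fusion:

```python
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)

    def forward(self, search, template):
        # ego-context augment: self-attention over the search features
        s, _ = self.self_attn(search, search, search)
        search = self.norm1(search + s)
        # cross-feature augment: search queries attend to template keys/values
        c, _ = self.cross_attn(search, template, template)
        return self.norm2(search + c)

fuse = AttentionFusion()
search = torch.randn(2, 1024, 256)    # flattened search-region features
template = torch.randn(2, 64, 256)    # flattened template features
fused = fuse(search, template)        # (2, 1024, 256), fed to cls/reg heads
```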

Object detection is considered one of the most challenging problems in computer vision, since it requires correctly predicting both the classes and the locations of objects in images. In this study, we define a more difficult scenario, namely zero-shot object detection (ZSD), where no visual training data is available for some of the target object classes. We present a novel approach to tackle this ZSD problem, in which a convex combination of embeddings is used in conjunction with a detection framework. For the evaluation of ZSD methods, we propose a simple dataset constructed from Fashion-MNIST images and a custom zero-shot split for the Pascal VOC detection challenge. The experimental results suggest that our method yields promising results for ZSD.
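
A minimal sketch of the convex-combination idea for a single detected box (all embeddings and scores below are random stand-ins, and the detection framework producing the seen-class scores is omitted): seen-class posteriors serve as convex weights over seen-class word embeddings, and the resulting embedding is matched to unseen classes by cosine similarity.

```python
import numpy as np

rng = np.random.default_rng(0)
n_seen, n_unseen, e_dim = 10, 3, 300
seen_emb = rng.normal(size=(n_seen, e_dim))      # word embeddings of seen classes
unseen_emb = rng.normal(size=(n_unseen, e_dim))  # word embeddings of unseen classes

logits = rng.normal(size=n_seen)                 # detector's seen-class scores for a box
p = np.exp(logits) / np.exp(logits).sum()        # softmax -> convex weights

box_emb = p @ seen_emb                           # convex combination of embeddings

def cosine(a, B):
    return (B @ a) / (np.linalg.norm(B, axis=1) * np.linalg.norm(a))

scores = cosine(box_emb, unseen_emb)             # zero-shot class scores for the box
pred = scores.argmax()                           # predicted unseen class
```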
