
Matrix perturbation bounds play an essential role in the design and analysis of spectral algorithms. In this paper, we introduce a new method to deduce matrix perturbation bounds, which we call "contour bootstrapping". As applications, we work out several new bounds for eigensubspace computation and low rank approximation. Next, we use these bounds to study utility problems in the area of differential privacy.
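The abstract does not spell out the method, but contour-based perturbation arguments for eigensubspaces typically start from the Riesz integral representation of a spectral projector; the following standard identities (our notation, not the paper's derivation) show where a contour enters:

```latex
% Spectral projector of a symmetric matrix A onto the eigenspaces whose
% eigenvalues lie inside a simple closed contour \Gamma:
P_\Gamma(A) = \frac{1}{2\pi i} \oint_\Gamma (zI - A)^{-1}\, dz ,
% so that, for a perturbation E, projectors can be compared via
P_\Gamma(A+E) - P_\Gamma(A) = \frac{1}{2\pi i} \oint_\Gamma
  \left[ (zI - A - E)^{-1} - (zI - A)^{-1} \right] dz .
```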

Related content

In this paper, we introduce the problem of Online Matching with Delays and Size-based Costs (OMDSC). The OMDSC problem involves $m$ requests arriving online. At any time, a group can be formed by matching any number of requests that have arrived but are still unmatched. The cost associated with each group is determined by the waiting time of each request within the group plus a size-dependent cost. Our goal is to partition all incoming requests into groups while minimizing the total associated cost. The problem extends the TCP acknowledgment problem of Dooly et al. (JACM 2001) by generalizing the cost model for sending acknowledgments. We determine the competitive ratios for a fundamental case in which the range of the penalty function is limited to $0$ and $1$. We classify such penalty functions into three distinct cases: (i) a fixed penalty of $1$ regardless of group size, (ii) a penalty of $0$ if and only if the group size is a multiple of a specific integer $k$, and (iii) all other situations. Case (i) is equivalent to the TCP acknowledgment problem, for which Dooly et al. proposed a $2$-competitive algorithm (sketched below). For case (ii), we first show that natural algorithms that match all remaining requests are $\Omega(\sqrt{k})$-competitive. We then propose an $O(\log k / \log \log k)$-competitive deterministic algorithm that carefully manages match size and timing, and we prove its optimality. For case (iii), we show that no competitive online algorithm exists. Additionally, we discuss competitive ratios for other typical penalty functions.
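A minimal event-driven sketch of the classic balancing rule for case (i), the TCP acknowledgment setting: flush all pending requests as soon as their accumulated waiting cost reaches the fixed penalty. This is our own illustration with discretized time, not the paper's algorithm for cases (ii) and (iii):

```python
# Balancing rule for case (i): match everything pending once the total
# waiting cost equals the penalty of 1. Time is discretized for simplicity.
def balance_matching(arrival_times, dt=0.01, horizon=10.0, penalty=1.0):
    pending = []            # arrival times of unmatched requests
    groups = []             # list of (match_time, group_size)
    arrivals = sorted(arrival_times)
    i, t = 0, 0.0
    while t <= horizon:
        while i < len(arrivals) and arrivals[i] <= t:
            pending.append(arrivals[i])
            i += 1
        waiting_cost = sum(t - a for a in pending)
        if pending and waiting_cost >= penalty:
            groups.append((t, len(pending)))    # pay penalty, flush the queue
            pending = []
        t += dt
    if pending:
        groups.append((horizon, len(pending)))  # flush leftovers at the end
    return groups

print(balance_matching([0.0, 0.1, 2.0, 2.05, 2.1]))
```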

When computing the probability of a query from a Probabilistic Answer Set Program, some parts of the program may not influence the probability of the query, yet they still inflate the size of the grounding. Identifying and removing them is crucial for speeding up the computation. Algorithms for SLG resolution offer the possibility of returning the residual program, which can be used to compute answer sets for normal programs that do not have a total well-founded model. The residual program does not contain the parts of the program that do not influence the probability. In this paper, we propose to exploit the residual program for performing inference. Empirical results on graph datasets show that the approach leads to significantly faster inference.
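To make the pruning idea concrete, here is a purely syntactic relevance filter, our own simplification, that mimics one effect of the residual program: dropping rules that cannot influence the query. Real SLG resolution is far more precise (it also evaluates the well-founded part); this sketch only follows predicate dependencies:

```python
# Keep only the rules whose head predicate is reachable from the query
# predicate through body dependencies; everything else cannot affect it.
from collections import defaultdict

def relevant_rules(rules, query_pred):
    # rules: list of (head_predicate, [body_predicates])
    deps = defaultdict(set)
    for head, body in rules:
        deps[head].update(body)
    reachable, stack = set(), [query_pred]
    while stack:
        p = stack.pop()
        if p in reachable:
            continue
        reachable.add(p)
        stack.extend(deps[p])
    return [(h, b) for h, b in rules if h in reachable]

program = [("path", ["edge", "path"]), ("path", ["edge"]),
           ("edge", []), ("color", ["node"]), ("node", [])]
print(relevant_rules(program, "path"))  # the 'color'/'node' rules are dropped
```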

In this paper, we consider a class of discontinuous Galerkin (DG) methods for one-dimensional nonlocal diffusion (ND) problems. Nonlocal models, which are integral equations, are widely used to describe physical phenomena with long-range interactions. The ND problem is the nonlocal analog of the classic diffusion problem: as the interaction radius (horizon) vanishes, the nonlocality disappears and the ND problem converges to the classic diffusion problem. Under certain conditions, the exact solution of the ND problem may exhibit discontinuities, setting it apart from the classic diffusion problem. Since the DG method has shown great advantages in resolving problems with discontinuities in computational fluid dynamics over the past several decades, it is natural to adopt it for ND problems. Building on [Du-Ju-Lu-Tian-CAMC2020], we develop DG methods with different penalty terms, ensuring that the proposed methods have local counterparts as the horizon vanishes. This guarantees that the proposed methods converge to existing DG schemes as the horizon vanishes, which is crucial for achieving asymptotic compatibility. Rigorous proofs are provided for the stability, error estimates, and asymptotic compatibility of the proposed DG schemes. To observe the effect of nonlocal diffusion, we also consider time-dependent convection-diffusion problems with nonlocal diffusion. We conduct several numerical experiments, including accuracy tests and Burgers' equation with nonlocal diffusion, over a range of horizons, to show the good performance of the proposed algorithms and validate the theoretical findings.
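For reference, a standard form of the one-dimensional nonlocal diffusion operator and its local limit (our notation; the paper follows [Du-Ju-Lu-Tian-CAMC2020]):

```latex
% Nonlocal diffusion operator with horizon \delta:
\mathcal{L}_\delta u(x) = \int_{B_\delta(x)} \gamma_\delta(x, y)\,
   \bigl(u(y) - u(x)\bigr)\, dy ,
% with the kernel \gamma_\delta normalized so that, for smooth u,
\lim_{\delta \to 0} \mathcal{L}_\delta u(x) = u''(x) ,
% the local limit that asymptotically compatible schemes must reproduce.
```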

Given a family of pretrained models and a hold-out set, how can we construct a valid conformal prediction set while selecting the model that minimizes the width of the set? If we use the same hold-out data both to select a model (the one that yields the smallest conformal prediction sets) and to construct a conformal prediction set based on that selected model, we suffer a loss of coverage due to selection bias. Alternatively, we could further split the data to perform selection and calibration separately, but this comes at a steep cost if the size of the dataset is limited. In this paper, we address the challenge of constructing a valid prediction set after efficiency-oriented model selection. Our methods can be implemented efficiently and admit finite-sample validity guarantees without additional sample splitting. We show that our methods yield prediction sets with asymptotically optimal size under a certain notion of continuity for the model class. The improved efficiency of the prediction sets constructed by our methods is further demonstrated through applications to synthetic datasets in various settings and a real-data example.
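For context, here is the standard split conformal baseline the paper builds on (not the paper's new procedure; `model` is assumed to expose a scikit-learn-style `predict`). Naively picking the model with the smallest calibrated quantile on the same hold-out set and reusing that set for calibration is exactly the selection bias the paper addresses:

```python
# Split conformal prediction: calibrate a residual quantile on hold-out
# data, then form symmetric intervals around the point predictions.
import numpy as np

def split_conformal_interval(model, X_cal, y_cal, X_test, alpha=0.1):
    scores = np.abs(y_cal - model.predict(X_cal))    # nonconformity scores
    n = len(scores)
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    q = np.quantile(scores, level, method="higher")  # finite-sample quantile
    preds = model.predict(X_test)
    return preds - q, preds + q                      # [lower, upper] bounds
```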

Explaining decisions of black-box classifiers is both important and computationally challenging. In this paper, we scrutinize explainers that generate feature-based explanations from samples or datasets. We start by presenting a set of desirable properties that explainers would ideally satisfy, delve into their relationships, and highlight incompatibilities of some of them. We identify the entire family of explainers that satisfy two key properties which are compatible with all the others. Its instances provide sufficient reasons, called weak abductive explanations. We then unravel its various subfamilies that satisfy subsets of compatible properties. Indeed, we fully characterize all the explainers that satisfy any subset of compatible properties. In particular, we introduce the first (broad family of) explainers that guarantee the existence of explanations and their global consistency. We discuss some of its instances, including the irrefutable explainer and the surrogate explainer, whose explanations can be found in polynomial time.
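As a rough illustration of a dataset-based sufficient reason (our own simplified notion, coarser than the paper's weak abductive explanations): a feature subset S is sufficient for an instance x, relative to a dataset, if every sample agreeing with x on S receives the same prediction as x. Greedy deletion keeps S small:

```python
# Greedily drop features while the remaining subset stays sufficient on the
# dataset; `predict` is the black-box classifier being explained.
def greedy_explanation(x, X, predict):
    target = predict(x)
    preds = [predict(row) for row in X]
    S = list(range(len(x)))
    for j in range(len(x)):                 # try to drop each feature in turn
        cand = [k for k in S if k != j]
        if all(p == target for row, p in zip(X, preds)
               if all(row[k] == x[k] for k in cand)):
            S = cand                        # still sufficient without j
    return S    # indices of a (dataset-relative) sufficient reason for x
```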

This paper addresses the Graph Matching problem, which consists of finding the best possible alignment between two input graphs, and has many applications in computer vision, network deanonymization and protein alignment. A common approach to tackle this problem is through convex relaxations of the NP-hard \emph{Quadratic Assignment Problem} (QAP). Here, we introduce a new convex relaxation onto the unit simplex and develop an efficient mirror descent scheme with closed-form iterations for solving this problem. Under the correlated Gaussian Wigner model, we show that the simplex relaxation admits a unique solution with high probability. In the noiseless case, this is shown to imply exact recovery of the ground truth permutation. Additionally, we establish a novel sufficiency condition for the input matrix in standard greedy rounding methods, which is less restrictive than the commonly used `diagonal dominance' condition. We use this condition to show exact one-step recovery of the ground truth (holding almost surely) via the mirror descent scheme, in the noiseless setting. We also use this condition to obtain significantly improved conditions for the GRAMPA algorithm [Fan et al. 2019] in the noiseless setting.
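On the simplex, mirror descent with the entropy mirror map reduces to the exponentiated-gradient update, which is the closed form the abstract alludes to. A generic sketch with a placeholder quadratic objective (not the paper's graph-matching relaxation):

```python
# Entropic mirror descent on the probability simplex: multiply by the
# exponentiated negative gradient, then renormalize.
import numpy as np

def mirror_descent_simplex(grad, x0, eta=0.1, iters=200):
    x = x0 / x0.sum()
    for _ in range(iters):
        g = grad(x)
        x = x * np.exp(-eta * (g - g.max()))  # shift for numerical stability
        x = x / x.sum()                       # closed-form simplex projection
    return x

# Example: minimize ||x - c||^2 over the simplex.
c = np.array([0.7, 0.2, 0.1, -0.3])
x = mirror_descent_simplex(lambda x: 2 * (x - c), np.ones(4))
print(x.round(3))
```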

In this paper, we propose a procedure for robust estimation in generalized linear models based on the maximum Lq-likelihood method. Alongside it, we present an estimation algorithm that naturally extends the usual iteratively weighted least squares method for generalized linear models. We discuss the asymptotic distribution of the proposed estimator and a set of statistics for testing linear hypotheses, which makes it possible to define standardized residuals using the mean-shift outlier model. In addition, robust versions of the deviance function and the Akaike information criterion are defined to provide tools for model selection. Finally, the performance of the proposed methodology is illustrated through a simulation study and the analysis of a real dataset.
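A minimal sketch of the general idea for the logistic case, assuming the maximum Lq-likelihood estimating equations weight each observation's score by its likelihood contribution to the power 1 - q (the standard formulation of Lq-likelihood estimation); this is our simplified illustration, not the paper's algorithm:

```python
# Lq-weighted IRLS for logistic regression: poorly fitted (likely outlying)
# observations have small likelihood f_i, hence small weight f_i**(1-q)
# when q < 1, which is the source of robustness. Sketch only; no step-size
# control or convergence check.
import numpy as np

def lq_irls_logistic(X, y, q=0.9, iters=25):
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(iters):
        mu = 1.0 / (1.0 + np.exp(-X @ beta))   # fitted probabilities
        f = np.where(y == 1, mu, 1 - mu)       # likelihood contributions
        u = f ** (1 - q)                       # robustness weights
        W = u * mu * (1 - mu)                  # IRLS working weights
        H = X.T @ (W[:, None] * X)             # weighted information matrix
        g = X.T @ (u * (y - mu))               # weighted score
        beta = beta + np.linalg.solve(H, g)    # one weighted Newton step
    return beta
```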

The Learning Parity with Noise (LPN) problem underlies several classic cryptographic primitives. Researchers have endeavored to demonstrate the algorithmic difficulty of this problem by attempting to find a reduction from the decoding problem for linear codes, for which several hardness results exist. Earlier studies used code smoothing as a technical tool to achieve such reductions, showing that they are possible for codes of vanishing rate. This left open the question of attaining a reduction with positive-rate codes. Addressing this case, we characterize the efficiency of the reduction in terms of the parameters of the decoding and LPN problems. As a conclusion, we isolate the parameter regimes for which a meaningful reduction is possible and the regimes for which its existence is unlikely.
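For reference, the textbook statement of the LPN problem (not specific to this paper):

```latex
% Search LPN with secret s \in \mathbb{F}_2^n and noise rate \tau: given
% samples (a_i, b_i) with a_i drawn uniformly from \mathbb{F}_2^n and
b_i = \langle a_i, s \rangle + e_i \pmod 2, \qquad e_i \sim \mathrm{Ber}(\tau),
% recover s; the decisional version asks to distinguish such samples from
% uniformly random pairs.
```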

In this paper, we focus on the self-supervised learning of visual correspondence using unlabeled videos in the wild. Our method simultaneously considers intra- and inter-video representation associations for reliable correspondence estimation. The intra-video learning transforms the image contents across frames within a single video via the frame pair-wise affinity. To obtain the discriminative representation for instance-level separation, we go beyond the intra-video analysis and construct the inter-video affinity to facilitate the contrastive transformation across different videos. By forcing the transformation consistency between intra- and inter-video levels, the fine-grained correspondence associations are well preserved and the instance-level feature discrimination is effectively reinforced. Our simple framework outperforms the recent self-supervised correspondence methods on a range of visual tasks including video object tracking (VOT), video object segmentation (VOS), pose keypoint tracking, etc. It is worth mentioning that our method also surpasses the fully-supervised affinity representation (e.g., ResNet) and performs competitively against the recent fully-supervised algorithms designed for the specific tasks (e.g., VOT and VOS).
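A sketch (notation ours) of the frame pairwise affinity underlying such methods: features of two frames are compared by dot products, and a softmax over source positions turns frame-1 content into a reconstruction of frame 2. The intra- and inter-video variants differ only in where the frame pairs come from:

```python
# Transform content from frame 1 to frame 2 through a soft affinity matrix.
import torch
import torch.nn.functional as F

def transform_via_affinity(feat1, feat2, content1, temperature=0.07):
    # feat1, feat2: (C, H, W) L2-normalized features; content1: (D, H, W)
    C, H, W = feat1.shape
    f1 = feat1.reshape(C, H * W)                  # source positions
    f2 = feat2.reshape(C, H * W)                  # target positions
    affinity = F.softmax(f1.t() @ f2 / temperature, dim=0)   # (HW1, HW2)
    out = content1.reshape(-1, H * W) @ affinity  # copy content along matches
    return out.reshape(-1, H, W)                  # reconstructed frame-2 content
```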

BERT, a pre-trained Transformer model, has achieved ground-breaking performance on multiple NLP tasks. In this paper, we describe BERTSUM, a simple variant of BERT, for extractive summarization. Our system is the state of the art on the CNN/Dailymail dataset, outperforming the previous best-performing system by 1.65 on ROUGE-L. The code to reproduce our results is available at https://github.com/nlpyang/BertSum
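A minimal illustration of the BERTSUM idea (not the authors' code; see the linked repo): insert a [CLS] token before every sentence and score each sentence's [CLS] vector for extractive selection. The linear scorer here is untrained and for shape only; BERTSUM also uses alternating segment embeddings and summarization layers omitted here:

```python
import torch
from transformers import BertTokenizer, BertModel

tok = BertTokenizer.from_pretrained("bert-base-uncased")
bert = BertModel.from_pretrained("bert-base-uncased")
scorer = torch.nn.Linear(bert.config.hidden_size, 1)   # untrained placeholder

sents = ["The cat sat on the mat.", "It was warm.", "Dogs barked outside."]
ids, cls_pos = [], []
for s in sents:
    cls_pos.append(len(ids))                           # remember each [CLS]
    ids += [tok.cls_token_id] + tok.encode(s, add_special_tokens=False) \
           + [tok.sep_token_id]
input_ids = torch.tensor([ids])
hidden = bert(input_ids).last_hidden_state             # (1, seq_len, hidden)
scores = torch.sigmoid(scorer(hidden[0, cls_pos]))     # one score per sentence
print(scores.squeeze(-1))
```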
