
The zeros of type II multiple orthogonal polynomials can be used for quadrature formulas that approximate $r$ integrals of the same function $f$ with respect to $r$ measures $\mu_1,\ldots,\mu_r$, in the spirit of Gaussian quadrature. This was first suggested by Borges in 1994. We give a method, extending the Golub-Welsch approach, that computes the quadrature nodes and weights from the eigenvalues and the left and right eigenvectors of a banded Hessenberg matrix. This method was already described by Coussement and Van Assche in 2005, but it seems to have gone unnoticed. We describe the result in detail for $r=2$ and give some examples.
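
To make the recipe concrete, here is a minimal numerical sketch in Python. It assumes the banded Hessenberg matrix $H$ of recurrence coefficients is already available, takes its eigenvalues as the nodes, and biorthogonalizes the left and right eigenvectors; the weight rule at the end (products of eigenvector entries scaled by the zeroth moments) is a plausible reading of the abstract for $r=2$, not a verified transcription of the Coussement-Van Assche formulas.

```python
import numpy as np
from scipy.linalg import eig

def simultaneous_quadrature(H, m0):
    """Nodes and weights for simultaneous Gaussian quadrature (r = 2).

    H  : banded (lower) Hessenberg matrix of recurrence coefficients
    m0 : (mu1_0, mu2_0), the zeroth moments of the two measures
    The weight rule below is a hypothetical illustration, not a
    verified transcription of the Coussement-Van Assche formulas.
    """
    nodes, U, V = eig(H, left=True, right=True)   # eigenvalues = nodes
    order = np.argsort(nodes.real)                # nodes assumed real here
    nodes, U, V = nodes.real[order], U[:, order], V[:, order]
    # biorthogonal normalization: u_j^H v_j = 1 for every eigenpair j
    scale = np.einsum('ij,ij->j', U.conj(), V)
    U = U / scale.conj()
    # hypothetical weights: first right-eigenvector entry paired with
    # the first two left-eigenvector entries, one per measure
    w1 = (V[0, :] * U[0, :].conj()).real * m0[0]
    w2 = (V[0, :] * U[1, :].conj()).real * m0[1]
    return nodes, w1, w2
```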

Related content

We consider the problem of solving a family of parametric mixed-integer linear optimization problems where some entries in the input data change. We introduce the concept of a cutting-plane layer (CPL), i.e., a differentiable cutting-plane generator mapping the problem data and previous iterates to cutting planes. We propose a CPL implementation to generate split cuts, and by combining several CPLs, we devise a differentiable cutting-plane algorithm that exploits the repeated nature of parametric instances. In an offline phase, we train our algorithm by updating the parameters controlling the CPLs, thus altering cut generation. Once trained, our algorithm computes, with predictable execution times and a fixed number of cuts, solutions with low integrality gaps. Preliminary computational tests show that our algorithm generalizes on unseen instances and captures underlying parametric structures.
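
As a rough illustration of what a CPL could look like, the sketch below implements one layer as a small PyTorch module mapping the instance data $(A, b, c)$ and the current fractional iterate to the coefficients of a single cut $\alpha^\top x \le \beta$. The MLP architecture and the input featurization are hypothetical stand-ins for the paper's split-cut generator.

```python
import torch
import torch.nn as nn

class CuttingPlaneLayer(nn.Module):
    """Hypothetical differentiable cutting-plane layer (CPL).

    Maps problem data (A, b, c) and the current fractional iterate
    x_k to the coefficients (alpha, beta) of one cut alpha^T x <= beta.
    """
    def __init__(self, n_vars, n_cons, hidden=64):
        super().__init__()
        in_dim = n_cons * n_vars + n_cons + n_vars + n_vars  # A, b, c, x_k
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_vars + 1),     # -> (alpha, beta)
        )

    def forward(self, A, b, c, x_k):
        z = torch.cat([A.flatten(), b, c, x_k])
        out = self.net(z)
        return out[:-1], out[-1]               # alpha, beta
```

Several such layers would then be composed into the full algorithm and trained offline over the parametric family, with gradients flowing through the generated cut coefficients.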

Estimating a prediction function is a fundamental component of many data analyses. The Super Learner ensemble, a particular implementation of stacking, has desirable theoretical properties and has been used successfully in many applications. Dimension reduction can be accomplished by using variable screening algorithms, including the lasso, within the ensemble prior to fitting other prediction algorithms. However, the performance of a Super Learner using the lasso for dimension reduction has not been fully explored in cases where the lasso is known to perform poorly. We provide empirical results that suggest that a diverse set of candidate screening algorithms should be used to protect against poor performance of any one screen, similar to the guidance for choosing a library of prediction algorithms for the Super Learner.
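
The guidance translates directly into code with scikit-learn, used here as a stand-in for the usual SuperLearner implementation in R: each candidate pipeline pairs a different screening algorithm with a downstream learner, so the stacked ensemble can down-weight any screen that discards signal. The library below is a toy example; the paper's experiments use a richer set of screens and learners.

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor, StackingRegressor
from sklearn.feature_selection import SelectFromModel, SelectKBest, f_regression
from sklearn.linear_model import LassoCV, LinearRegression
from sklearn.pipeline import make_pipeline

candidates = [
    # lasso screen, then a flexible learner
    ("lasso_rf", make_pipeline(SelectFromModel(LassoCV()), RandomForestRegressor())),
    # a correlation-based screen protects against lasso failure modes
    ("corr_rf", make_pipeline(SelectKBest(f_regression, k=20), RandomForestRegressor())),
    # and one candidate with no screening at all
    ("ols", LinearRegression()),
]
X, y = make_regression(n_samples=300, n_features=100, n_informative=10, random_state=0)
ensemble = StackingRegressor(estimators=candidates, final_estimator=LinearRegression())
ensemble.fit(X, y)
```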

The problem of determining whether a graph $G$ contains another graph $H$ as a minor, referred to as the minor containment problem, is a fundamental problem in the field of graph algorithms. While it is NP-complete when $G$ and $H$ are general graphs, it is sometimes tractable on more restricted graph classes. This study focuses on the case where both $G$ and $H$ are trees, known as the tree minor containment problem. Even in this case, the problem is known to be NP-complete. In contrast, polynomial-time algorithms are known when both trees are caterpillars or when the maximum degree of $H$ is a constant. Our research aims to clarify the boundary between tractability and intractability for the tree minor containment problem. Specifically, we provide dichotomies for the computational complexity of the problem based on three structural parameters: the diameter, the pathwidth, and the path eccentricity.
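
One easy corner of this landscape: when $H$ is a path $P_k$, a tree contains $P_k$ as a minor exactly when its longest path has at least $k$ vertices, so the classic double-BFS diameter computation decides containment in linear time. The sketch below covers only this special case, not the paper's dichotomy algorithms.

```python
from collections import deque

def contains_path_minor(adj, k):
    """Decide whether a tree contains the path P_k as a minor.

    A tree has a P_k minor iff its longest path has >= k vertices,
    found by two breadth-first searches. `adj` maps vertices to
    neighbor lists.
    """
    def farthest(src):
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        far = max(dist, key=dist.get)
        return far, dist[far]

    start = next(iter(adj))
    u, _ = farthest(start)          # one endpoint of a longest path
    _, diameter = farthest(u)       # its length in edges
    return diameter + 1 >= k        # vertices on the longest path
```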

We extend classical methods of computational complexity to the setting of distributed computing, where they are sometimes more effective than in their original context. Our focus is on distributed decision in the LOCAL model, where multiple networked computers communicate via synchronous message-passing to collectively answer a question about their network topology. Rather unusually, we impose two orthogonal constraints on the running time of this model: the number of communication rounds is bounded by a constant, and the number of computation steps of each computer is polynomially bounded by the size of its local input and the messages it receives. By letting two players take turns assigning certificates to all computers in the network, we obtain a generalization of the polynomial hierarchy (and hence of the complexity classes $\mathbf{P}$ and $\mathbf{NP}$). We then extend some key results of complexity theory to this setting, in particular the Cook-Levin theorem (which identifies Boolean satisfiability as a complete problem for $\mathbf{NP}$), and Fagin's theorem (which characterizes $\mathbf{NP}$ as the problems expressible in existential second-order logic). The original results can be recovered as the special case where the network consists of a single computer. But perhaps more surprisingly, the task of separating complexity classes becomes easier in the general case: we can show that our hierarchy is infinite, while it remains notoriously open whether the same is true in the case of a single computer. (By contrast, a collapse of our hierarchy would have implied a collapse of the polynomial hierarchy.) As an application, we propose quantifier alternation as a new approach to measuring the locality of problems in distributed computing.
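
For readers unfamiliar with the model, the toy simulator below captures the deterministic core of constant-round distributed decision in LOCAL: in $t$ synchronous rounds every node learns exactly its radius-$t$ neighborhood and then applies a local decision rule, with the network accepting iff all nodes accept. The certificate-alternation layers that generate the hierarchy are omitted here.

```python
def local_decide(adj, inputs, t, decide):
    """Toy simulator for constant-round decision in the LOCAL model.

    Each node starts from its own input and, in t synchronous rounds,
    forwards everything it has heard to its neighbors; after round t
    it applies the local predicate `decide` to its radius-t view.
    """
    knowledge = {v: {v: inputs[v]} for v in adj}
    for _ in range(t):                               # t synchronous rounds
        messages = {v: dict(knowledge[v]) for v in adj}
        for v in adj:
            for u in adj[v]:
                knowledge[v].update(messages[u])     # receive neighbor views
    return all(decide(v, knowledge[v]) for v in adj)  # global accept
```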

Pre-trained generative models such as BART and T5 have gained prominence as a preferred method for text generation in various natural language processing tasks, including abstractive long-form question answering (QA) and summarization. However, the potential of generative models in extractive QA tasks, where discriminative models are commonly employed, remains largely unexplored. Discriminative models often encounter challenges associated with label sparsity, particularly when only a small portion of the context contains the answer. The challenge is more pronounced for multi-span answers. In this work, we introduce a novel approach that harnesses pre-trained generative models for extractive QA tasks by generating the indexes of the context tokens or sentences that form part of the answer. Through comprehensive evaluations on multiple extractive QA datasets, including MultiSpanQA, BioASQ, MASHQA, and WikiQA, we demonstrate the superior performance of our proposed approach compared to existing state-of-the-art models.
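
The sketch below illustrates one plausible encoding of this idea with Hugging Face Transformers: context tokens are numbered in the input, and a seq2seq model would be fine-tuned to emit index spans rather than answer text. Off the shelf, t5-small will of course not produce meaningful indexes; the snippet only demonstrates the input/output format, which is our assumption rather than the paper's exact scheme.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tok = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

# Number the context tokens so the model can refer to them by index.
context = "Insulin is produced in the pancreas . It lowers blood sugar ."
indexed = " ".join(f"[{i}] {w}" for i, w in enumerate(context.split()))
prompt = f"question: Where is insulin produced? context: {indexed}"

ids = tok(prompt, return_tensors="pt").input_ids
# After fine-tuning, the target would be an index span such as "5 5";
# without fine-tuning the decoded output is meaningless.
out = model.generate(ids, max_new_tokens=16)
print(tok.decode(out[0], skip_special_tokens=True))
```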

Conditional copulas are useful tools for modeling the dependence between multiple response variables that may vary with a given set of predictor variables. Conditional dependence measures such as conditional Kendall's tau and Spearman's rho, which can be expressed as functionals of the conditional copula, are often used to evaluate the strength of dependence conditioning on the covariates. In general, semiparametric estimation methods for conditional copulas rely on an assumed parametric copula family whose parameter is taken to be a function of the covariates. The functional relationship can be estimated nonparametrically using different techniques, but one must still choose an appropriate copula model from among various candidate families. In this paper, by employing the empirical checkerboard Bernstein copula (ECBC) estimator, we propose a fully nonparametric approach for estimating conditional copulas that does not require any selection of a parametric copula model. Closed-form estimates of the conditional dependence measures are derived directly from the proposed ECBC-based conditional copula estimator. We establish the large-sample consistency of the proposed estimator as well as of the resulting estimates of the conditional dependence measures. The finite-sample performance of the proposed estimator, and a comparison with semiparametric methods, are investigated through simulation studies. An application to real case studies is also provided.
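
For intuition, here is a simplified, unconditional version of such an estimator in Python: the empirical copula of the pseudo-observations is evaluated on an $m \times m$ grid and smoothed with Bernstein polynomials. The paper's ECBC-based conditional estimator additionally smooths over the covariate space, which is omitted here; conditional dependence measures would then follow in closed form by integrating the estimator.

```python
import numpy as np
from scipy.stats import binom, rankdata

def bernstein_copula(x, y, m=10):
    """Bivariate empirical Bernstein (checkerboard) copula estimator.

    Unconditional sketch of the ECBC idea: evaluate the empirical
    copula on an m x m grid, then smooth with Bernstein polynomials.
    Returns a callable estimate C_hat(s, t) for (s, t) in [0, 1]^2.
    """
    n = len(x)
    u = rankdata(x) / (n + 1)            # pseudo-observations
    v = rankdata(y) / (n + 1)

    # empirical copula at the grid points (j/m, k/m)
    grid = np.arange(m + 1) / m
    C_emp = np.array([[np.mean((u <= gj) & (v <= gk)) for gk in grid]
                      for gj in grid])

    def C_hat(s, t):
        # Bernstein weights: Binomial(m, s) and Binomial(m, t) masses
        ws = binom.pmf(np.arange(m + 1), m, s)
        wt = binom.pmf(np.arange(m + 1), m, t)
        return ws @ C_emp @ wt

    return C_hat
```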

The expressivity of Graph Neural Networks (GNNs) can be entirely characterized by appropriate fragments of first-order logic. Namely, any query of the two-variable fragment of graded modal logic (GC2) interpreted over labeled graphs can be expressed using a GNN whose size depends only on the depth of the query. As pointed out by [Barceló et al., 2020; Grohe, 2021], this description holds for a family of activation functions, leaving open the possibility of a hierarchy of logics expressible by GNNs depending on the chosen activation function. In this article, we show that such a hierarchy indeed exists by proving that GC2 queries cannot be expressed by GNNs with polynomial activation functions. This implies a separation between polynomial activations and popular non-polynomial activations (such as ReLU, sigmoid, and hyperbolic tangent) and answers an open question formulated by [Grohe, 2021].
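
To see what is at stake, consider a single aggregate-combine layer. With ReLU, a layer can implement the integer thresholding that GC2 queries need (e.g. "at least two blue neighbors") via relu(x-1) - relu(x-2); the result says no polynomial activation can replicate such steps. A minimal numpy illustration:

```python
import numpy as np

# one aggregate-combine GNN layer: H' = act(H @ W_self + (A @ H) @ W_nei)
def gnn_layer(H, A, W_self, W_nei, act):
    return act(H @ W_self + (A @ H) @ W_nei)

# With ReLU, integer neighbor counts can be thresholded:
# relu(x - 1) - relu(x - 2) equals 1 exactly when x >= 2.
relu = lambda x: np.maximum(x, 0)
at_least_two = lambda x: relu(x - 1) - relu(x - 2)

A = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], float)  # star on 3 nodes
H = np.array([[0.0], [1.0], [1.0]])       # feature = 1 iff node is "blue"
W_self, W_nei = np.zeros((1, 1)), np.eye(1)
print(gnn_layer(H, A, W_self, W_nei, at_least_two))  # center: 1, leaves: 0
```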

The evaluation of the fidelity of eXplainable Artificial Intelligence (XAI) methods to their underlying models is a challenging task, primarily due to the absence of a ground truth for explanations. However, assessing fidelity is a necessary step for ensuring a correct XAI methodology. In this study, we conduct a fair and objective comparison of the current state-of-the-art XAI methods by introducing three novel image datasets with reliable ground truth for explanations. The primary objective of this comparison is to identify methods with low fidelity and eliminate them from further research, thereby promoting the development of more trustworthy and effective XAI techniques. Our results demonstrate that XAI methods based on the backpropagation of output information to the input yield higher accuracy and reliability than methods relying on sensitivity analysis or Class Activation Maps (CAM). However, the backpropagation methods tend to generate noisier saliency maps. These findings have significant implications for the advancement of XAI methods, enabling the elimination of erroneous explanations and fostering the development of more robust and reliable XAI.
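
With ground-truth explanation masks available, fidelity can be scored directly. The snippet below shows one simple such score, the precision of the top-$k$ most salient pixels against the mask; this metric is illustrative and not necessarily the one used in the paper.

```python
import numpy as np

def saliency_fidelity(saliency, gt_mask, k=None):
    """Precision of the top-k most salient pixels against a
    ground-truth explanation mask (one simple fidelity score)."""
    k = k if k is not None else int(gt_mask.sum())   # budget = mask size
    top = np.argsort(saliency.ravel())[-k:]          # k most salient pixels
    return gt_mask.ravel()[top].mean()               # fraction inside mask

gt = np.zeros((8, 8)); gt[2:5, 2:5] = 1              # toy ground-truth mask
noisy = gt + 0.3 * np.random.default_rng(0).random((8, 8))
print(saliency_fidelity(noisy, gt))                  # 1.0 for this toy map
```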

The objective of a two-stage submodular maximization problem is to reduce the ground set using provided training functions that are submodular, with the aim of ensuring that optimizing new objective functions over the reduced ground set yields results comparable to those obtained over the original ground set. This problem has applications in various domains, including data summarization. Existing studies often assume the monotonicity of the objective function, whereas our work pioneers the extension of this research to accommodate non-monotone submodular functions. We introduce the first constant-factor approximation algorithms for this more general case.
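
For orientation, the sketch below gives a simple greedy reduction for the monotone setting, in the spirit of replacement greedy: it grows a reduced set $S$ while tracking, for each training function, an inner solution of size at most $k$ found inside $S$. The non-monotone case treated in the paper requires a more careful (randomized) selection rule. Here `fns` is assumed to be a list of set functions given as Python callables.

```python
def reduce_ground_set(ground, fns, l, k):
    """Greedy sketch for two-stage submodular maximization (monotone).

    ground : set of elements; fns : list of submodular set functions;
    l : size of reduced set S; k : size budget of each inner solution.
    """
    S, sols = set(), [set() for _ in fns]    # per-function inner solutions
    for _ in range(l):
        def total_gain(e):
            return sum(f(sol | {e}) - f(sol)
                       for f, sol in zip(fns, sols) if len(sol) < k)
        e = max(ground - S, key=total_gain)  # best aggregate marginal gain
        S.add(e)
        for i, (f, sol) in enumerate(zip(fns, sols)):
            if len(sol) < k and f(sol | {e}) - f(sol) > 0:
                sols[i] = sol | {e}
        # (a full replacement greedy would also consider swapping e in)
    return S
```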

Most deep learning-based models for speech enhancement have mainly focused on estimating the magnitude of the spectrogram while reusing the phase from the noisy speech for reconstruction, owing to the difficulty of estimating the phase of clean speech. To improve speech enhancement performance, we tackle the phase estimation problem in three ways. First, we propose Deep Complex U-Net, an advanced U-Net structured model incorporating well-defined complex-valued building blocks to deal with complex-valued spectrograms. Second, we propose a polar coordinate-wise complex-valued masking method to reflect the distribution of complex ideal ratio masks. Third, we define a novel loss function, the weighted source-to-distortion ratio (wSDR) loss, which is designed to correlate directly with a quantitative evaluation measure. Our model was evaluated on a mixture of the Voice Bank corpus and the DEMAND database, which has been widely used by many deep learning models for speech enhancement. Ablation experiments on the mixed dataset show that all three proposed approaches are empirically valid. Experimental results show that the proposed method achieves state-of-the-art performance on all metrics, outperforming previous approaches by a large margin.
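
For reference, a commonly cited PyTorch formulation of the wSDR loss is sketched below: a negative-cosine SDR on the speech and on the residual noise, weighted by their relative energies. Signs and normalization should be checked against the original paper before reuse.

```python
import torch

def wsdr_loss(x, y, y_hat, eps=1e-8):
    """Weighted source-to-distortion ratio (wSDR) loss.

    x: noisy mixture, y: clean target, y_hat: estimate, all (B, T).
    """
    z, z_hat = x - y, x - y_hat                 # true / estimated noise
    def neg_cos(a, b):                          # negative cosine SDR term
        return -(a * b).sum(-1) / (a.norm(dim=-1) * b.norm(dim=-1) + eps)
    # energy-based weight between the speech and noise terms
    alpha = y.pow(2).sum(-1) / (y.pow(2).sum(-1) + z.pow(2).sum(-1) + eps)
    return (alpha * neg_cos(y, y_hat) + (1 - alpha) * neg_cos(z, z_hat)).mean()
```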
