
In this paper, we show that Graph Isomorphism (GI) is not $\textsf{AC}^{0}$-reducible to several problems, including the Latin Square Isotopy problem, isomorphism testing of several families of Steiner designs, and isomorphism testing of conference graphs. As a corollary, we obtain that GI is not $\textsf{AC}^{0}$-reducible to isomorphism testing of Latin square graphs and strongly regular graphs arising from special cases of Steiner $2$-designs. We accomplish this by showing that the generator-enumeration technique for each of these problems can be implemented in $\beta_{2}\textsf{FOLL}$, which cannot compute Parity (Chattopadhyay, Tor\'an, & Wagner, ACM Trans. Comp. Theory, 2013).

Related Content

Many problems in robotics, such as estimating the state from noisy sensor data or aligning two LiDAR point clouds, can be posed and solved as least-squares problems. Unfortunately, vanilla nonminimal solvers for least-squares problems are notoriously sensitive to outliers. As such, various robust loss functions have been proposed to reduce the sensitivity to outliers; examples include pseudo-Huber, Cauchy, and Geman-McClure. Recently, these loss functions have been generalized into a single loss function that enables the best loss function to be found adaptively based on the distribution of the residuals. However, even with the generalized robust loss function, most nonminimal solvers can only find a local solution, and only when given a prior state estimate, due to the nonconvexity of the problem. The first contribution of this paper is to combine graduated nonconvexity (GNC) with the generalized robust loss function to solve least-squares problems without a prior state estimate and without the need to specify a loss function. Moreover, existing loss functions, including the generalized loss function, are based on Gaussian-like distributions. However, residuals are often defined as the squared norm of a multivariate error and are distributed in a Chi-like fashion. The second contribution of this paper is to apply a norm-aware adaptive robust loss function within a GNC framework. This leads to additional robustness compared with state-of-the-art methods. Simulations and experiments demonstrate that the proposed approach is more robust and yields faster convergence times than other GNC formulations.
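
As a rough illustration of how GNC pairs with a generalized (Barron-type) adaptive loss, the sketch below solves a linear least-squares problem by annealing the shape parameter alpha from the convex quadratic case toward a redescending loss, re-solving a weighted least-squares problem at each stage. This is a minimal sketch under our own assumptions (a scalar residual model and a fixed alpha schedule); the paper's method is norm-aware and adapts alpha to the residual distribution, and all function names here are ours.

```python
import numpy as np

def general_robust_loss(r, alpha, c):
    """Barron-style generalized robust loss rho(r; alpha, c) (sketch).
    alpha = 2 recovers the quadratic loss, alpha = 0 a Cauchy-like loss,
    and alpha = -2 a Geman-McClure-like loss."""
    x = (r / c) ** 2
    if alpha == 2.0:
        return 0.5 * x
    if alpha == 0.0:                      # limit of the general form
        return np.log1p(0.5 * x)
    b = abs(alpha - 2.0)
    return (b / alpha) * ((x / b + 1.0) ** (alpha / 2.0) - 1.0)

def irls_weights(r, alpha, c):
    """IRLS weights w(r) = rho'(r) / r for the loss above."""
    x = (r / c) ** 2
    if alpha == 2.0:
        return np.full_like(r, 1.0 / c ** 2)
    if alpha == 0.0:
        return (1.0 / c ** 2) / (1.0 + 0.5 * x)
    b = abs(alpha - 2.0)
    return (1.0 / c ** 2) * (x / b + 1.0) ** (alpha / 2.0 - 1.0)

def gnc_irls(A, y, alphas=(2.0, 1.0, 0.0, -2.0), c=1.0, inner=5):
    """Graduated nonconvexity for the linear model y = A @ theta + noise:
    start from the convex quadratic loss and anneal alpha toward a
    redescending loss, re-solving a weighted least-squares problem."""
    theta = np.linalg.lstsq(A, y, rcond=None)[0]   # convex initialization
    for alpha in alphas:                           # graduation schedule
        for _ in range(inner):
            r = y - A @ theta
            w = irls_weights(r, alpha, c)
            Aw = A * w[:, None]                    # row-weighted design
            theta = np.linalg.solve(A.T @ Aw, Aw.T @ y)
    return theta
```

The convex first stage plays the role of the missing prior state estimate: the quadratic solve is outlier-sensitive but global, and each subsequent stage only needs to track the solution as the loss is gradually made nonconvex.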

This paper presents Squid, a new conjunctive query synthesis algorithm for searching code with target patterns. Given positive and negative examples along with a natural language description, Squid analyzes the relations derived from the examples by a Datalog-based program analyzer and synthesizes a conjunctive query expressing the search intent. The synthesized query can be further used to search for desired grammatical constructs in the editor. To achieve high efficiency, we prune the huge search space by removing unnecessary relations and enumerating query candidates via refinement. We also introduce two quantitative metrics for query prioritization to select the queries from multiple candidates, yielding desired queries for code search. We have evaluated Squid on over thirty code search tasks. It is shown that Squid successfully synthesizes the conjunctive queries for all the tasks, taking only 2.56 seconds on average.
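
To make "conjunctive query" concrete: such a query is a conjunction of relational atoms over the relations derived by the Datalog-based analyzer. A hypothetical example (the relation names are illustrative, not Squid's actual schema) that searches for a call to free whose argument is used again afterwards:

```latex
Q(c, u) \;\leftarrow\; \mathrm{CallSite}(c, \texttt{free})
  \,\wedge\, \mathrm{Argument}(c, 0, v)
  \,\wedge\, \mathrm{Use}(u, v)
  \,\wedge\, \mathrm{Reaches}(c, u)
```

Synthesis then amounts to searching over such conjunctions, pruned and refined as described above, for one that matches all positive examples and none of the negative ones.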

In this paper, we propose two new algorithms for maximum-likelihood estimation (MLE) of high-dimensional sparse covariance matrices. Unlike most state-of-the-art methods, which either use regularization techniques or penalize the likelihood to impose sparsity, we solve the MLE problem based on an estimated covariance graph. More specifically, we propose a two-stage procedure: in the first stage, we determine the sparsity pattern of the target covariance matrix (in other words, the marginal independence structure of the covariance graph under a Gaussian graphical model) using multiple hypothesis testing with false discovery rate (FDR) control, and in the second stage we use either a block coordinate descent approach to estimate the non-zero values or a proximal distance approach that penalizes the distance between the estimated covariance graph and the target covariance matrix. Doing so gives rise to two different methods, each with its own advantage: the coordinate descent approach does not require tuning of any hyper-parameters, whereas the proximal distance approach is computationally fast but requires careful tuning of the penalty parameter. Both methods are effective even when the number of observed samples is less than the dimension of the data. For performance evaluation, we test the proposed methods on both simulated and real-world data and show that they provide more accurate estimates of the sparse covariance matrix than two state-of-the-art methods.
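
As a sketch of the first stage, the snippet below recovers a candidate sparsity pattern by testing all pairwise correlations with a Fisher z-test and applying Benjamini-Hochberg FDR control. This is a common stand-in, not necessarily the exact test statistic used in the paper.

```python
import numpy as np
from scipy import stats

def covariance_support_fdr(X, q=0.10):
    """Stage 1 (sketch): estimate the sparsity pattern of the covariance
    matrix by testing each pairwise correlation (Fisher z-test) and
    controlling the false discovery rate with Benjamini-Hochberg."""
    n, p = X.shape
    R = np.corrcoef(X, rowvar=False)
    iu = np.triu_indices(p, k=1)
    z = np.sqrt(n - 3) * np.arctanh(np.clip(R[iu], -0.999, 0.999))
    pvals = 2.0 * stats.norm.sf(np.abs(z))
    # Benjamini-Hochberg step-up: reject the k smallest p-values, where k
    # is the largest index with p_(k) <= q * k / m.
    m = pvals.size
    order = np.argsort(pvals)
    passed = pvals[order] <= q * np.arange(1, m + 1) / m
    k = (np.nonzero(passed)[0].max() + 1) if passed.any() else 0
    keep = np.zeros(m, dtype=bool)
    keep[order[:k]] = True
    support = np.eye(p, dtype=bool)
    support[iu] = keep
    return support | support.T   # True where sigma_ij may be nonzero

# Stage 2 (not shown) fits the MLE restricted to this support, e.g. by
# block coordinate descent over the nonzero entries or via the proximal
# distance penalty described in the paper.
```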

We are interested in the discretisation of a drift-diffusion system in the framework of hybrid finite volume (HFV) methods on general polygonal/polyhedral meshes. The system under study is composed of two anisotropic and nonlinear convection-diffusion equations with nonsymmetric tensors, coupled with a Poisson equation, and describes, in particular, semiconductor devices immersed in a magnetic field. We introduce a new scheme based on an entropy-dissipation relation and prove that the scheme admits solutions with values in admissible sets; in particular, the computed densities remain positive. Moreover, we show that the discrete solutions to the scheme converge exponentially fast in time towards the associated discrete thermal equilibrium. Several numerical tests confirm our theoretical results. To the best of our knowledge, this scheme is the first one able to discretise anisotropic drift-diffusion systems while preserving the bounds on the densities.
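
For concreteness, a prototypical instance of such a system (a Van Roosbroeck-type model; the paper's exact setting, boundary conditions, and tensors may differ) reads:

```latex
\begin{aligned}
\partial_t N - \operatorname{div}\big(\Lambda_N (\nabla N - N \nabla \phi)\big) &= 0,\\
\partial_t P - \operatorname{div}\big(\Lambda_P (\nabla P + P \nabla \phi)\big) &= 0,\\
- \operatorname{div}(\Lambda \, \nabla \phi) &= C + P - N,
\end{aligned}
```

where $N$ and $P$ are the carrier densities, $\phi$ the electrostatic potential, $C$ the doping profile, and the possibly nonsymmetric tensors $\Lambda_N, \Lambda_P$ encode the anisotropy, e.g. that induced by a magnetic field.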

The United Nations (UN) Sustainable Development Goals (SDGs) challenge the global community to build a world where no one is left behind. Recognizing that research plays a fundamental part in supporting these goals, attempts have been made to classify research publications according to their relevance in supporting each of the UN's SDGs. In this paper, we outline the methodology that we followed when mapping research articles to SDGs, which is adopted by Times Higher Education in their Social Impact rankings. We compare our solution with other existing queries and models that map research papers to SDGs. We also discuss various aspects in which the methodology can be improved and generalized to other types of content apart from research articles. The results presented in this paper are the outcome of the SDG Research Mapping Initiative, which was established as a partnership between the University of Southern Denmark, the Aurora European Universities Alliance (represented by Vrije Universiteit Amsterdam), the University of Auckland, and Elsevier to bring together broad expertise and share best practices on identifying research contributions to the UN's Sustainable Development Goals.

Incrementality experiments compare customers exposed to a marketing action designed to increase sales to those randomly assigned to a control group. These experiments suffer from noisy responses, which make precise estimation of the average treatment effect (ATE) and marketing ROI difficult. We develop a model that improves the precision by estimating separate treatment effects for three latent strata defined by potential outcomes in the experiment: customers who would buy regardless of ad exposure, those who would buy only if exposed to ads, and those who would not buy regardless. The overall ATE is estimated by averaging the strata-level effects, and this produces a more precise estimator of the ATE over a wide range of conditions typical of marketing experiments. Analytical results and simulations show that the method decreases the sampling variance of the ATE most when (1) there are large differences in the treatment effect between latent strata and (2) the model used to estimate the strata-level effects is well-identified. Applying the procedure to 5 catalog experiments shows a reduction of 30-60% in the variance of the overall ATE estimate. This leads to a substantial decrease in decision errors when the estimator is used to determine whether ads should be continued or discontinued.
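
The variance-reduction mechanism can be seen in a toy simulation (ours, not the paper's model). Here the strata are observed rather than latent, which isolates the effect: within each stratum the outcomes are deterministic, so the stratified estimator is far less noisy than the raw difference in means.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(n=100_000, p_always=0.05, p_persuadable=0.02):
    """Toy setup: three strata with deterministic binary purchase outcomes.
    0: buys regardless, 1: buys only if exposed, 2: never buys."""
    strata = rng.choice(3, size=n,
                        p=[p_always, p_persuadable, 1 - p_always - p_persuadable])
    treated = rng.random(n) < 0.5
    y = ((strata == 0) | ((strata == 1) & treated)).astype(float)
    return y, treated, strata

y, t, s = simulate()
ate_naive = y[t].mean() - y[~t].mean()          # noisy difference in means

# Stratified estimator: average the strata-level effects weighted by the
# strata shares. With strata observed, the within-stratum effects carry
# essentially no sampling noise, illustrating the variance reduction.
shares = np.bincount(s, minlength=3) / s.size
effects = [y[(s == k) & t].mean() - y[(s == k) & ~t].mean() for k in range(3)]
ate_strata = float(shares @ effects)
print(f"naive: {ate_naive:.4f}, stratified: {ate_strata:.4f}")  # both ~ 0.02
```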

A code of length $n$ is said to be (combinatorially) $(\rho,L)$-list decodable if the Hamming ball of radius $\rho n$ around any vector in the ambient space does not contain more than $L$ codewords. We study a recently introduced class of higher order MDS codes, which are closely related (via duality) to codes that achieve a generalized Singleton bound for list decodability. For some $\ell\geq 1$, higher order MDS codes of length $n$, dimension $k$, and order $\ell$ are denoted as $(n,k)$-MDS($\ell$) codes. We present a number of results on the structure of these codes, identifying the `extend-ability' of their parameters in various scenarios. Specifically, for some parameter regimes, we identify conditions under which $(n_1,k_1)$-MDS($\ell_1$) codes can be obtained from $(n_2,k_2)$-MDS($\ell_2$) codes, via various techniques. We believe that these results will aid in efficient constructions of higher order MDS codes. We also obtain a new field size upper bound for the existence of such codes, which arguably improves over the best known existing bound, in some parameter regimes.
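
For context, the generalized Singleton bound in question (due to Shangguan and Tamo) states, roughly and ignoring integrality, that any $(n,k)$ code that is combinatorially $(\rho,L)$-list decodable must satisfy

```latex
\rho n \;\le\; \frac{L}{L+1}\,(n - k),
```

which recovers the classical unique-decoding radius $(n-k)/2$ at $L=1$; codes attaining this bound are the ones related by duality to the MDS($\ell$) hierarchy studied here.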

A distance estimator for a graph property $\mathcal{P}$ is an algorithm that, given $G$ and $\alpha, \varepsilon >0$, distinguishes between the case that $G$ is $(\alpha-\varepsilon)$-close to $\mathcal{P}$ and the case that $G$ is $\alpha$-far from $\mathcal{P}$ (in edit distance). We say that $\mathcal{P}$ is estimable if it has a distance estimator whose query complexity depends only on $\varepsilon$. Every estimable property is also testable, since testing corresponds to estimating with $\alpha=\varepsilon$. A central result in the area of property testing, the Fischer--Newman theorem, gives an inverse statement: every testable property is in fact estimable. The proof of Fischer and Newman was highly ineffective, since it incurred a tower-type loss when transforming a testing algorithm for $\mathcal{P}$ into a distance estimator. This raised the natural problem, studied recently by Fiat--Ron and by Hoppen--Kohayakawa--Lang--Lefmann--Stagni, of whether one can find a transformation with a polynomial loss. We obtain the following results. 1. If $\mathcal{P}$ is hereditary, then one can turn a tester for $\mathcal{P}$ into a distance estimator with an exponential loss. This is an exponential improvement over the result of Hoppen et al., who obtained a transformation with a double exponential loss. 2. For every $\mathcal{P}$, one can turn a testing algorithm for $\mathcal{P}$ into a distance estimator with a double exponential loss. This improves over the transformation of Fischer--Newman, which incurred a tower-type loss. Our main conceptual contribution in this work is that we manage to turn the approach of Fischer--Newman, which was inherently ineffective, into an efficient one. On the technical level, our main contribution is in establishing certain properties of Frieze--Kannan weak regular partitions that are of independent interest.

Graph machine learning has been extensively studied in both academia and industry. However, as the literature on graph learning booms with a vast number of emerging methods and techniques, it becomes increasingly difficult to manually design the optimal machine learning algorithm for different graph-related tasks. To tackle this challenge, automated graph machine learning, which aims at discovering the best hyper-parameter and neural architecture configuration for different graph tasks/data without manual design, is attracting increasing attention from the research community. In this paper, we extensively discuss automated graph machine learning approaches, covering hyper-parameter optimization (HPO) and neural architecture search (NAS) for graph machine learning. We briefly overview existing libraries designed for either graph machine learning or automated machine learning, and then introduce in depth AutoGL, the world's first dedicated open-source library for automated graph machine learning. Last but not least, we share our insights on future research directions for automated graph machine learning. This paper is the first systematic and comprehensive discussion of approaches, libraries, and directions for automated graph machine learning.
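
As a minimal illustration of the HPO component, the sketch below performs random search over a small GNN hyper-parameter space. This is a generic sketch of ours, not AutoGL's actual API; `train_and_eval` is a user-supplied placeholder that trains a model with the given configuration and returns a validation score.

```python
import random

# Hypothetical search space over common GNN hyper-parameters.
search_space = {
    "hidden_dim": [16, 64, 256],
    "num_layers": [2, 3, 4],
    "lr": [1e-3, 5e-3, 1e-2],
    "dropout": [0.0, 0.5],
}

def sample_config(space):
    """Draw one configuration uniformly from the discrete search space."""
    return {name: random.choice(choices) for name, choices in space.items()}

def hpo(train_and_eval, space, trials=20):
    """Random-search HPO: keep the configuration with the best score."""
    best_cfg, best_score = None, float("-inf")
    for _ in range(trials):
        cfg = sample_config(space)
        score = train_and_eval(cfg)
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score
```

NAS follows the same loop with architectural choices (aggregation function, layer type, skip connections) in place of, or alongside, the scalar hyper-parameters.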

In recent years, DBpedia, Freebase, OpenCyc, Wikidata, and YAGO have been published as noteworthy large, cross-domain, and freely available knowledge graphs. Although extensively used, these knowledge graphs are hard to compare against each other in a given setting. Thus, it is a challenge for researchers and developers to pick the best knowledge graph for their individual needs. In our recent survey, we devised and applied data quality criteria to the above-mentioned knowledge graphs. Furthermore, we proposed a framework for finding the most suitable knowledge graph for a given setting. With this paper, we intend to ease access to our in-depth survey by presenting simplified rules that map individual data quality requirements to specific knowledge graphs. However, this paper does not intend to replace our previously introduced decision-support framework. For an informed decision on which knowledge graph is best suited to a particular use case, we still refer the reader to our in-depth survey.
