
In this paper we demonstrate that anyone who claims to have an algorithm that solves CSAT in polynomial time on a deterministic Turing machine (DTM) must also admit that there is a counterexample which invalidates the correctness of that algorithm. The reason is as follows: if we suppose that the algorithm can prove that an elenkhos formula (a formula that lists the negated codes of all models) is a contradiction, and we then change exactly one specific Boolean variable of that formula, we can show that the algorithm will always fail on the modified formula.
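
As a concrete illustration of the construction just described, the following minimal sketch (our own encoding of the idea; the paper's exact encoding may differ) builds an elenkhos-style formula over $n$ Boolean variables: one clause per assignment, each clause negating that assignment's code, so the full conjunction is a contradiction. Flipping a single variable occurrence then readmits exactly one model.

```python
# Build an elenkhos-style CNF: one clause per assignment, where clause i is
# falsified only by assignment i, so the conjunction of all 2^n clauses is
# a contradiction. Literals are integers: +v means x_v, -v means NOT x_v.
from itertools import product

def elenkhos_cnf(n):
    return [[-v if bits[v - 1] else v for v in range(1, n + 1)]
            for bits in product([0, 1], repeat=n)]

def satisfiable(cnf, n):
    # Brute-force SAT check; fine for the tiny n used in this sketch.
    return any(all(any(bits[abs(l) - 1] == (l > 0) for l in clause)
                   for clause in cnf)
               for bits in product([False, True], repeat=n))

n = 3
cnf = elenkhos_cnf(n)
print(satisfiable(cnf, n))   # False: every assignment is ruled out

cnf[0][0] = -cnf[0][0]       # change exactly one variable occurrence
print(satisfiable(cnf, n))   # True: one model has been readmitted
```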

Related Content

Most of the popular dependence measures for two random variables $X$ and $Y$ (such as Pearson's and Spearman's correlation, Kendall's $\tau$ and Gini's $\gamma$) vanish whenever $X$ and $Y$ are independent. However, neither does a vanishing dependence measure necessarily imply independence, nor does a measure equal to $1$ imply that one variable is a measurable function of the other. Yet, both properties are natural desiderata for a convincing dependence measure. In this paper, we present a general approach to transforming a given dependence measure into a new one which exactly characterizes independence as well as functional dependence. Our approach uses the concept of monotone rearrangements as introduced by Hardy and Littlewood and is applicable to a broad class of measures. In particular, we are able to define a rearranged Spearman's $\rho$ and a rearranged Kendall's $\tau$ which do attain the value $1$ if, and only if, one variable is a measurable function of the other. We also present simple estimators for the rearranged dependence measures, prove their consistency and illustrate their finite sample properties by means of a simulation study.
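
The following toy computation (ours, not from the paper) illustrates the gap the abstract describes: $Y = X^2$ is a measurable function of a symmetric $X$, yet Spearman's $\rho$ and Kendall's $\tau$ come out close to zero, far from the value $1$ that a rearranged measure would attain under functional dependence.

```python
# Y = X^2 is perfectly (but non-monotonically) determined by X, yet the
# classical rank correlations essentially vanish for symmetric X.
import numpy as np
from scipy.stats import spearmanr, kendalltau

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=10_000)
y = x ** 2

rho, _ = spearmanr(x, y)
tau, _ = kendalltau(x, y)
print(rho, tau)   # both approximately 0, despite Y being a function of X
```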

Deciding whether a given function is quasiconvex is generally a difficult task. Here, we discuss a number of numerical approaches that can be used in the search for a counterexample to the quasiconvexity of a given function $W$. We will demonstrate these methods using the planar isotropic rank-one convex function \[ W_{\rm magic}^+(F)=\frac{\lambda_{\rm max}}{\lambda_{\rm min}}-\log\frac{\lambda_{\rm max}}{\lambda_{\rm min}}+\log\det F=\frac{\lambda_{\rm max}}{\lambda_{\rm min}}+2\log\lambda_{\rm min}\,, \] where $\lambda_{\rm max}\geq\lambda_{\rm min}$ are the singular values of $F$, as our main example. In a previous contribution, we have shown that quasiconvexity of this function would imply quasiconvexity for all rank-one convex isotropic planar energies $W:\operatorname{GL}^+(2)\rightarrow\mathbb{R}$ with an additive volumetric-isochoric split of the form \[ W(F)=W_{\rm iso}(F)+W_{\rm vol}(\det F)=\widetilde W_{\rm iso}\bigg(\frac{F}{\sqrt{\det F}}\bigg)+W_{\rm vol}(\det F) \] with a concave volumetric part. This example is therefore of particular interest with regard to Morrey's open question whether or not rank-one convexity implies quasiconvexity in the planar case.
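
For readers who wish to experiment numerically, evaluating $W_{\rm magic}^+$ is straightforward from the displayed formula; the sketch below (ours, not the authors' code) is the kind of building block such a counterexample search would use.

```python
# Evaluate W_magic^+(F) from the singular values of a planar deformation
# gradient F with det F > 0, using both equivalent forms as a sanity check
# (they agree because det F = lmax * lmin in the planar case).
import numpy as np

def W_magic_plus(F):
    assert np.linalg.det(F) > 0, "F must lie in GL+(2)"
    lmax, lmin = np.linalg.svd(F, compute_uv=False)  # sorted descending
    t = lmax / lmin
    w1 = t - np.log(t) + np.log(np.linalg.det(F))
    w2 = t + 2.0 * np.log(lmin)
    assert np.isclose(w1, w2)
    return w1

F = np.array([[1.2, 0.3],
              [0.0, 0.9]])
print(W_magic_plus(F))
```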

Most software companies have extensive test suites and re-run parts of them continuously to ensure recent changes have no adverse effects. Since test suites are costly to execute, industry needs methods for test case prioritisation (TCP). Recently, TCP methods have begun to use machine learning (ML) to exploit the information known about the system under test (SUT) and its test cases. However, the value added by ML-based TCP methods should be critically assessed with respect to the cost of collecting the information. This paper analyses two decades of TCP research and presents a taxonomy of 91 information attributes that have been used. The attributes are classified with respect to their information sources and the characteristics of their extraction process. Based on this taxonomy, TCP methods validated with industrial data and those applying ML are analysed in terms of information availability, attribute combination and definition of data features suitable for ML. Reliance on a large number of information attributes, the assumption of easy access to SUT code, and simplified testing environments are identified as factors that might hamper the industrial applicability of ML-based TCP. The TePIA taxonomy provides a reference framework to unify terminology and evaluate alternatives considering the cost-benefit of the information attributes.

The commonly quoted error rate for QMC integration with an infinite low-discrepancy sequence is $O(n^{-1}\log(n)^r)$, with $r=d$ for extensible sequences and $r=d-1$ otherwise. Such rates hold uniformly over all $d$-dimensional integrands of Hardy-Krause variation one when using $n$ evaluation points. Implicit in those bounds is that for any sequence of QMC points, the integrand can be chosen to depend on $n$. In this paper we show that rates with any $r<(d-1)/2$ can hold when $f$ is held fixed as $n\to\infty$. This is accomplished following a suggestion of Erich Novak to use some unpublished results of Trojan from the 1980s as given in the information-based complexity monograph of Traub, Wasilkowski and Wo\'zniakowski. The proof applies a technique of Roth together with the theorem of Trojan. The proof is non-constructive, and we do not know of any integrand of bounded variation in the sense of Hardy and Krause for which the QMC error exceeds $(\log n)^{1+\epsilon}/n$ for infinitely many $n$ when using a digital sequence such as one of Sobol's. An empirical search when $d=2$ for integrands designed to exploit known weaknesses in certain point sets showed no evidence that $r>1$ is needed. An example with $d=3$ and $n$ up to $2^{100}$ might possibly require $r>1$.
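
In the same spirit as the $d=2$ search mentioned above, a basic empirical check is easy to set up; the sketch below (with a toy integrand of our own choosing, not one from the paper) estimates the QMC error of an unscrambled Sobol' sequence as $n$ grows.

```python
# Estimate the QMC error on [0,1]^2 for a smooth toy integrand using a
# Sobol' sequence; n * error growing only slowly with n is consistent
# with at most a logarithmic factor on top of the n^{-1} rate.
import numpy as np
from scipy.stats import qmc

f = lambda u: np.prod(u, axis=1)      # integral of x*y over [0,1]^2 is 1/4

for m in range(4, 14):
    pts = qmc.Sobol(d=2, scramble=False).random_base2(m)  # n = 2^m points
    n = 2 ** m
    err = abs(f(pts).mean() - 0.25)
    print(f"n = {n:6d}   error = {err:.3e}   n * error = {n * err:.3f}")
```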

A strict bramble of a graph $G$ is a collection of pairwise-intersecting connected subgraphs of $G.$ The order of a strict bramble ${\cal B}$ is the minimum size of a set of vertices intersecting all sets of ${\cal B}.$ The strict bramble number of $G,$ denoted by ${\sf sbn}(G),$ is the maximum order of a strict bramble in $G.$ The strict bramble number of $G$ can be seen as a way to extend the notion of acyclicity, departing from the fact that (non-empty) acyclic graphs are exactly the graphs where every strict bramble has order one. We initiate the study of this graph parameter by providing three alternative definitions, each revealing different structural characteristics. The first is a min-max theorem asserting that ${\sf sbn}(G)$ is equal to the minimum $k$ for which $G$ is a minor of the lexicographic product of a tree and a clique on $k$ vertices (also known as the lexicographic tree product number). The second characterization is in terms of a new variant of a tree decomposition called lenient tree decomposition. We prove that ${\sf sbn}(G)$ is equal to the minimum $k$ for which there exists a lenient tree decomposition of $G$ of width at most $k.$ The third characterization is in terms of extremal graphs. For this, we define, for each $k,$ the concept of a $k$-domino-tree and we prove that every edge-maximal graph of strict bramble number at most $k$ is a $k$-domino-tree. We also identify three graphs that constitute the minor-obstruction set of the class of graphs with strict bramble number at most two. We complete our results by proving that, given some $G$ and $k,$ deciding whether ${\sf sbn}(G) \leq k$ is an ${\sf NP}$-complete problem.
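
The definitions above are easy to check directly on small graphs; the brute-force sketch below (illustration only, and exponential in general, consistent with the NP-completeness result stated at the end) verifies that a family of vertex sets is a strict bramble and computes its order.

```python
# Check the strict-bramble definition on a small graph and compute the
# order of the bramble as a minimum hitting set (brute force).
from itertools import combinations
import networkx as nx

def is_strict_bramble(G, family):
    # Every member must induce a connected subgraph...
    if not all(nx.is_connected(G.subgraph(S)) for S in family):
        return False
    # ...and the members must pairwise intersect (strictness).
    return all(A & B for A, B in combinations(family, 2))

def order(G, family):
    # Minimum size of a vertex set meeting every member of the family.
    for k in range(1, G.number_of_nodes() + 1):
        for hitters in combinations(G.nodes, k):
            if all(set(hitters) & S for S in family):
                return k

G = nx.cycle_graph(4)                    # C4 contains a cycle, so sbn >= 2
family = [{0, 1}, {1, 2}, {0, 2, 3}]     # connected, pairwise intersecting
print(is_strict_bramble(G, family))      # True
print(order(G, family))                  # 2: e.g. {0, 2} meets all members
```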

We consider the problem of computing an $(s,d)$-hypernetwork in an acyclic F-hypergraph. This is a fundamental computational problem arising in directed hypergraphs, and a foundational step in tackling problems of reachability and redundancy. This problem was previously explored in the context of general directed hypergraphs (containing cycles), where it is NP-hard, and acyclic B-hypergraphs, where a linear-time algorithm is achievable. In a surprising contrast, we find that for acyclic F-hypergraphs the problem is NP-hard, which also implies that the problem is hard in BF-hypergraphs. This is a striking complexity boundary, given that F-hypergraphs and B-hypergraphs would at first seem to be symmetrical to one another. We provide the proof of complexity and explain why there is a fundamental asymmetry between the two classes of directed hypergraphs.

Given polynomials $f_0,\dots, f_k$ the Ideal Membership Problem, IMP for short, asks if $f_0$ belongs to the ideal generated by $f_1,\dots, f_k$. In the search version of this problem the task is to find a proof of this fact. The IMP is a well-known fundamental problem with numerous applications; for instance, it underlies many proof systems based on polynomials such as Nullstellensatz, Polynomial Calculus, and Sum-of-Squares. Although the IMP is in general intractable, in many important cases it can be efficiently solved. Mastrolilli [SODA'19] initiated a systematic study of IMPs for ideals arising from Constraint Satisfaction Problems (CSPs), parameterized by constraint languages, denoted IMP($\Gamma$). The ultimate goal of this line of research is to classify all such IMPs according to their complexity. Mastrolilli achieved this goal for IMPs arising from CSP($\Gamma$) where $\Gamma$ is a Boolean constraint language, while Bulatov and Rafiey [ArXiv'21] extended these results to several cases of CSPs over finite domains. In this paper we consider IMPs arising from CSPs over `affine' constraint languages, in which constraints are subgroups (or their cosets) of direct products of Abelian groups. This kind of CSP includes systems of linear equations and is considered one of the most important types of tractable CSPs. Some special cases of the problem have been considered before, by Bharathi and Mastrolilli [MFCS'21] for linear equations modulo 2, and by Bulatov and Rafiey [ArXiv'21] for systems of linear equations over $GF(p)$, $p$ prime. Here we prove that if $\Gamma$ is an affine constraint language then IMP($\Gamma$) is solvable in polynomial time, assuming the input polynomial has bounded degree.
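
For context, the generic approach to the IMP reduces $f_0$ modulo a Groebner basis of the ideal, which is doubly exponential in the worst case; the point of the tractability results above is that this generic machinery can be bypassed for affine languages. A small sympy sketch of the generic route:

```python
# Generic ideal-membership test: f0 is in the ideal iff its normal form
# modulo a Groebner basis of (f1, ..., fk) is zero. The quotients returned
# by reduce() constitute the membership proof sought in the search version.
from sympy import groebner, symbols

x, y = symbols('x y')
f1, f2 = x**2 + y**2 - 1, x - y       # ideal generators
f0 = 2*x**2 - 1                        # query polynomial

G = groebner([f1, f2], x, y, order='lex')
quotients, remainder = G.reduce(f0)
print(remainder == 0)                  # True: f0 = sum_i quotients[i]*G[i]
```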

Clustering is one of the most fundamental and widespread techniques in exploratory data analysis. Yet, the basic approach to clustering has not really changed: a practitioner hand-picks a task-specific clustering loss to optimize and fits the given data to reveal the underlying cluster structure. Some losses, such as k-means and its non-linear kernelized variant (centroid-based) or DBSCAN (density-based), are popular choices due to their good empirical performance on a range of applications. However, every so often the clustering output under these standard losses fails to reveal the underlying structure, and the practitioner has to custom-design their own variation. In this work we take an intrinsically different approach to clustering: rather than fitting a dataset to a specific clustering loss, we train a recurrent model that learns how to cluster. The model uses as training pairs examples of datasets (as input) and their corresponding cluster identities (as output). By providing multiple types of training datasets as inputs, our model has the ability to generalize well on unseen datasets (new clustering tasks). Our experiments reveal that by training on simple synthetically generated datasets, or on existing real datasets, we can achieve better clustering performance on unseen real-world datasets when compared with standard benchmark clustering techniques. Our meta-clustering model works well even for small datasets, where the usual deep learning models tend to perform worse.
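
As a hedged sketch of the training-pair setup (our illustration; the authors' generators and recurrent architecture are not reproduced here), each meta-training example is an entire dataset paired with its cluster identities:

```python
# Build (dataset, cluster-identities) training pairs from a simple synthetic
# generator, varying cluster count and dataset size across examples.
import numpy as np
from sklearn.datasets import make_blobs

def sample_training_pair(rng):
    k = int(rng.integers(2, 5))        # number of clusters varies per example
    n = int(rng.integers(50, 200))     # so does the dataset size
    X, y = make_blobs(n_samples=n, centers=k,
                      random_state=int(rng.integers(1 << 31)))
    return X, y                        # input: point cloud; target: labels

rng = np.random.default_rng(0)
meta_train = [sample_training_pair(rng) for _ in range(1000)]
```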

Both generative adversarial network models and variational autoencoders have been widely used to approximate probability distributions of datasets. Although they both use parametrized distributions to approximate the underlying data distribution, whose exact inference is intractable, their behaviors are very different. In this report, we summarize our experimental results comparing these two categories of models in terms of fidelity and mode collapse. We provide a hypothesis to explain their different behaviors and propose a new model based on this hypothesis. We further tested our proposed model on the MNIST and CelebA datasets.

This paper presents a safety-aware learning framework that employs an adaptive model learning method together with barrier certificates for systems with possibly nonstationary agent dynamics. To extract the dynamic structure of the model, we use a sparse optimization technique, and the resulting model will be used in combination with control barrier certificates which constrain feedback controllers only when safety is about to be violated. Under some mild assumptions, solutions to the constrained feedback-controller optimization are guaranteed to be globally optimal, and the monotonic improvement of a feedback controller is thus ensured. In addition, we reformulate the (action-)value function approximation to make any kernel-based nonlinear function estimation method applicable. We then employ a state-of-the-art kernel adaptive filtering technique for the (action-)value function approximation. The resulting framework is verified experimentally on a brushbot, whose dynamics is unknown and highly complex.
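
As an illustration of how barrier certificates constrain a feedback controller only near the safety boundary, here is a generic control-barrier-function quadratic program (a standard formulation sketched by us; the paper's learned dynamics and certificates are not reproduced):

```python
# Barrier-constrained controller: stay as close as possible to the nominal
# input u_nom while enforcing dh/dt + alpha * h >= 0 for dynamics x' = f + g u.
import cvxpy as cp
import numpy as np

def safe_input(u_nom, h, grad_h, f, g, alpha=1.0):
    u = cp.Variable(len(u_nom))
    barrier = grad_h @ f + grad_h @ g @ u + alpha * h >= 0
    cp.Problem(cp.Minimize(cp.sum_squares(u - u_nom)), [barrier]).solve()
    return u.value

# Single-integrator toy example: keep h(x) = 1 - x1 >= 0 (stay left of x1 = 1).
x = np.array([0.9, 0.0])
u = safe_input(u_nom=np.array([1.0, 0.0]),            # nominal: push right
               h=1.0 - x[0], grad_h=np.array([-1.0, 0.0]),
               f=np.zeros(2), g=np.eye(2))
print(u)   # u1 clipped to alpha * h = 0.1; u2 untouched
```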
