
SARRIGUREN, a new complete algorithm for SAT based on counting clauses (also valid for Unique-SAT and #SAT), is described, analyzed and tested. Whereas existing complete algorithms for SAT become slower on clauses with many literals, this is an advantage for SARRIGUREN: the more literals the clauses contain, the higher the probability of overlap among clauses, a property that makes the clause-counting process more efficient. In fact, it achieves $O(m^2 \times n/k)$ time complexity for random $k$-SAT instances of $n$ variables and $m$ relatively dense clauses, where the density level is relative to the number of variables $n$; that is, clauses are relatively dense when $k\geq7\sqrt{n}$. Although in theory there could be worst cases with exponential complexity, the probability of such cases occurring in random $k$-SAT with relatively dense clauses is practically zero. The algorithm has been tested empirically, and the polynomial time complexity also holds for $k$-SAT instances with less dense clauses ($k\geq5\sqrt{n}$). That density can, for example, be as low as 0.049 when working with $n=20000$ variables and $k=989$ literals. In addition, two complementary algorithms are presented that provide the solutions to $k$-SAT instances and valuable information about the number of solutions for each literal. Although this algorithm does not settle the NP=P question (it is not a polynomial algorithm for 3-SAT), it broadens the knowledge on that subject, because $k$-SAT with $k>3$ and dense clauses is no harder than 3-SAT. Moreover, the Python implementation of the algorithms, together with all input datasets and results obtained in the experiments, is made available.
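As a point of reference for the clause-counting idea, below is a minimal, generic #SAT sketch based on inclusion-exclusion over clauses. It is not the SARRIGUREN algorithm itself (it is exponential in the number of clauses in general), but it illustrates why clause overlap helps: clause subsets containing complementary literals contribute nothing, and subsets sharing variables cover fewer free variables. The function name and clause encoding are illustrative.

```python
from itertools import combinations

def count_models(clauses, n):
    """Count satisfying assignments of a CNF formula by inclusion-exclusion
    over clauses.  A clause is a set of non-zero ints: +v / -v for variable v.

    For a subset S of clauses, the assignments falsifying every clause in S
    fix all literals appearing in S to false; their count is
    2**(n - #variables touched), or 0 if S contains complementary literals.
    """
    m = len(clauses)
    unsat = 0  # signed count of assignments falsifying at least one clause
    for r in range(1, m + 1):
        for subset in combinations(clauses, r):
            lits = set().union(*subset)
            if any(-l in lits for l in lits):   # complementary literals: empty set
                continue
            unsat += (-1) ** (r + 1) * 2 ** (n - len({abs(l) for l in lits}))
    return 2 ** n - unsat

# Example: (x1 or x2) and (not x1 or x3) over 3 variables has 4 models.
print(count_models([{1, 2}, {-1, 3}], 3))
```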

Related content

SAT is the premier annual conference for researchers working on the theory and applications of the propositional satisfiability problem. Beyond plain propositional satisfiability, it covers Boolean optimization (such as MaxSAT and pseudo-Boolean (PB) constraints), quantified Boolean formulas (QBF), satisfiability modulo theories (SMT), and constraint programming (CP) for problems with a clear connection to Boolean-level reasoning.
February 28, 2024

The approach to analysing compositional data has been dominated by the use of logratio transformations, which ensure exact subcompositional coherence and, in some situations, exact isometry as well. A problem with this approach is that data zeros, found in most applications, have to be replaced to allow the logarithmic transformation. An alternative new approach, called the `chiPower' transformation, which allows data zeros, is to combine the standardization inherent in the chi-square distance of correspondence analysis with the essential elements of the Box-Cox power transformation. The chiPower transformation is justified because it defines between-sample distances that tend to logratio distances for strictly positive data as the power parameter tends to zero, and are then equivalent to transforming to logratios. For data with zeros, a value of the power can be identified that brings the chiPower transformation as close as possible to a logratio transformation, without having to substitute the zeros. Especially for high-dimensional data, this alternative approach can achieve such a high level of coherence and isometry as to be a valid approach to the analysis of compositional data. Furthermore, in a supervised learning context, if the compositional variables serve as predictors of a response in a modelling framework, for example generalized linear models, then the power can be used as a tuning parameter in optimizing prediction accuracy through cross-validation. The chiPower-transformed variables have a straightforward interpretation, since each is identified with a single compositional part, not a ratio.
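The general recipe can be sketched as follows: power-transform the closed compositions with a Box-Cox transform and standardize columns as in the chi-square metric of correspondence analysis. The exact chiPower scaling is defined in the paper and may differ in detail from this illustration; the function name is hypothetical.

```python
import numpy as np

def chipower_like(X, lam):
    """Illustrative power/chi-square transform of a compositional data matrix
    X (rows = samples, columns = parts), power parameter lam > 0.

    Steps (generic, not necessarily the paper's exact scaling):
      1. close each row to proportions,
      2. apply the Box-Cox power transform (p**lam - 1) / lam,
      3. divide each column by the square root of its mean proportion,
         as in the chi-square standardization of correspondence analysis.
    Data zeros cause no problem because no logarithm is taken; for strictly
    positive entries the power transform tends to log(p) as lam -> 0.
    """
    P = X / X.sum(axis=1, keepdims=True)   # row closure
    B = (P ** lam - 1.0) / lam             # Box-Cox power transform
    c = P.mean(axis=0)                     # column masses
    return B / np.sqrt(c)                  # chi-square-style standardization

# Toy compositional data containing a zero entry
X = np.array([[10.0, 5.0, 0.0], [3.0, 4.0, 3.0]])
print(chipower_like(X, lam=0.25))
```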

In this note we consider the problem of ParaTuck-2 decomposition of a three-way tensor. We provide an algebraic algorithm for finding the ParaTuck-2 decomposition in the case when the ParaTuck-2 ranks are smaller than the frontal dimensions of the tensor. Our approach relies only on linear algebra operations and is based on finding the kernel of a structured matrix constructed from the tensor.
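For context, the following sketch only builds the frontal slices of a ParaTuck-2 model in its standard form (it does not implement the kernel-based recovery algorithm of the note); the dimensions and variable names are illustrative.

```python
import numpy as np

def paratuck2_slices(A, H, B, CA, CB):
    """Frontal slices of a ParaTuck-2 model:
        T_k = A @ diag(CA[k]) @ H @ diag(CB[k]) @ B.T
    with A: I x R1, H: R1 x R2, B: J x R2, CA: K x R1, CB: K x R2.
    """
    return np.stack([A @ np.diag(CA[k]) @ H @ np.diag(CB[k]) @ B.T
                     for k in range(CA.shape[0])])

# Random example with ParaTuck-2 ranks (R1, R2) = (2, 3) smaller than the
# frontal dimensions (I, J) = (5, 6), the regime considered in the note.
rng = np.random.default_rng(0)
I, J, K, R1, R2 = 5, 6, 4, 2, 3
A = rng.standard_normal((I, R1))
H = rng.standard_normal((R1, R2))
B = rng.standard_normal((J, R2))
CA = rng.standard_normal((K, R1))
CB = rng.standard_normal((K, R2))
T = paratuck2_slices(A, H, B, CA, CB)   # shape (K, I, J)
print(T.shape)
```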

We introduce novel finite element schemes for curve diffusion and elastic flow in arbitrary codimension. The schemes are based on a variational form of a system that includes a specifically chosen tangential motion. We derive optimal $L^2$- and $H^1$-error bounds for continuous-in-time semidiscrete finite element approximations that use piecewise linear elements. In addition, we consider fully discrete schemes and, in the case of curve diffusion, prove unconditional stability. Finally, we present several numerical simulations, including convergence experiments that confirm the derived error bounds. The presented simulations suggest that the tangential motion leads to equidistribution in practice.

Data depth functions have been intensively studied for normed vector spaces. However, a discussion of depth functions for data where one specific data structure cannot be presupposed is lacking. In this article, we introduce a notion of depth functions for data types that are not given in standard statistical data formats and therefore lack one specific data structure. We call such data non-standard data. To achieve this, we represent the data via formal concept analysis, which leads to a unified data representation. Besides introducing depth functions for non-standard data using formal concept analysis, we provide a systematic basis by introducing structural properties. Furthermore, we embed the generalised Tukey depth into our concept of data depth and analyse it using the introduced structural properties. Thus, this article provides a mathematical formalisation of centrality and outlyingness for non-standard data and thereby broadens the range of spaces in which centrality can be discussed. In particular, it gives a basis for defining further depth functions and statistical inference methods for non-standard data.

A family of symmetric matrices $A_1,\ldots, A_d$ is SDC (simultaneously diagonalizable by congruence) if there is an invertible matrix $X$ such that every $X^T A_k X$ is diagonal. In this work, a novel randomized SDC (RSDC) algorithm is proposed that reduces SDC to a generalized eigenvalue problem by considering two (random) linear combinations of the family. We establish exact recovery: RSDC achieves diagonalization with probability $1$ if the family is exactly SDC. Under a mild regularity assumption, robust recovery is also established: given a family that is $\epsilon$-close to SDC, RSDC diagonalizes, with high probability, the family up to an error of norm $\mathcal{O}(\epsilon)$. Under a positive definiteness assumption, which often holds in applications, stronger results are established, including a bound on the condition number of the transformation matrix. For practical use, we suggest combining RSDC with an optimization algorithm. The performance of the resulting method is verified for synthetic data, image separation and EEG analysis tasks. It turns out that our newly developed method outperforms existing optimization-based methods in terms of efficiency while achieving a comparable level of accuracy.
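A minimal sketch of the reduction described in the abstract: form two random linear combinations of the family, solve the resulting generalized eigenvalue problem, and use the eigenvector matrix as a candidate congruence transformation. This is only the core step, not the paper's full RSDC method (no treatment of near-SDC families or conditioning); the function name is illustrative.

```python
import numpy as np
from scipy.linalg import eig

def rsdc_candidate(As, rng=None):
    """Form two random linear combinations B1, B2 of the symmetric matrices
    in As, solve the generalized eigenvalue problem B1 x = lambda B2 x, and
    return the eigenvector matrix X as a candidate congruence transform."""
    rng = np.random.default_rng(rng)
    B1 = sum(w * A for w, A in zip(rng.standard_normal(len(As)), As))
    B2 = sum(w * A for w, A in zip(rng.standard_normal(len(As)), As))
    _, X = eig(B1, B2)              # columns are generalized eigenvectors
    return np.real_if_close(X)

# Exactly SDC family: congruence transforms of two diagonal matrices.
rng = np.random.default_rng(1)
P = rng.standard_normal((3, 3))
As = [P.T @ np.diag(d) @ P for d in ([1.0, 2.0, 3.0], [4.0, -1.0, 0.5])]
X = rsdc_candidate(As, rng=1)
print([np.round(X.T @ A @ X, 6) for A in As])   # near-diagonal matrices
```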

The solution approximation for partial differential equations (PDEs) can be substantially improved using smooth basis functions. The recently introduced mollified basis functions are constructed through mollification, or convolution, of cell-wise defined piecewise polynomials with a smooth mollifier of certain characteristics. The properties of the mollified basis functions are governed by the order of the piecewise functions and the smoothness of the mollifier. In this work, we exploit the high-order and high-smoothness properties of the mollified basis functions for solving PDEs through the point collocation method. The basis functions are evaluated at a set of collocation points in the domain. In addition, boundary conditions are imposed at a set of boundary collocation points distributed over the domain boundaries. To ensure the stability of the resulting linear system of equations, the number of collocation points is chosen to be larger than the total number of basis functions. The resulting overdetermined linear system is solved using the least-squares technique. The presented numerical examples confirm the convergence of the proposed approximation scheme for Poisson, linear elasticity, and biharmonic problems. We study in particular the influence of the mollifier and the spatial distribution of the collocation points.
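The overdetermined-collocation idea can be illustrated on a 1D Poisson problem. In this sketch, Gaussian basis functions stand in for the mollified basis (whose construction follows the paper); basis widths, point counts and names are illustrative assumptions.

```python
import numpy as np

# Smooth (Gaussian) basis functions as a stand-in for the mollified basis.
def phi(x, c, s):
    r = x[:, None] - c[None, :]
    return np.exp(-(r / s) ** 2)

def phi_xx(x, c, s):
    r = x[:, None] - c[None, :]
    return (4 * r**2 / s**4 - 2 / s**2) * np.exp(-(r / s) ** 2)

n_basis, s = 15, 0.15
centers = np.linspace(0.0, 1.0, n_basis)
x_in = np.linspace(0.0, 1.0, 40)[1:-1]     # interior collocation points
x_bc = np.array([0.0, 1.0])                # boundary collocation points

f = lambda x: np.pi**2 * np.sin(np.pi * x)  # -u'' = f, exact u = sin(pi x)

# Overdetermined system: PDE rows (-phi'' = f) plus boundary rows (phi = 0),
# with more collocation points than basis functions, solved by least squares.
A = np.vstack([-phi_xx(x_in, centers, s), phi(x_bc, centers, s)])
b = np.concatenate([f(x_in), np.zeros(2)])
coef, *_ = np.linalg.lstsq(A, b, rcond=None)

x_test = np.linspace(0.0, 1.0, 11)
print(np.max(np.abs(phi(x_test, centers, s) @ coef - np.sin(np.pi * x_test))))
```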

We propose a high-order numerical scheme for time-dependent first order Hamilton--Jacobi--Bellman equations. In particular, we propose to combine a semi-Lagrangian scheme with a Central Weighted Essentially Non-Oscillatory (CWENO) reconstruction. We prove a convergence result in the case of state- and time-independent Hamiltonians. Numerical simulations are presented in space dimensions one and two, also for more general state- and time-dependent Hamiltonians, demonstrating superior performance in terms of CPU time gain compared with a semi-Lagrangian scheme coupled with Weighted Essentially Non-Oscillatory (WENO) reconstructions.
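For orientation, here is a minimal first-order semi-Lagrangian step for the model equation $u_t + |u_x| = 0$, using simple linear interpolation where the paper would use a high-order CWENO reconstruction; it is purely illustrative and not the proposed scheme.

```python
import numpy as np

# Model problem u_t + |u_x| = 0, u(x,0) = u0(x).  First-order semi-Lagrangian
# step based on the Hopf-Lax formula u^{n+1}(x) = min_{|y-x| <= dt} u^n(y),
# with linear interpolation in place of a high-order CWENO reconstruction.
def sl_step(u, x, dt, n_ctrl=11):
    controls = np.linspace(-dt, dt, n_ctrl)           # sampled feet y = x + a
    vals = np.stack([np.interp(x + a, x, u) for a in controls])
    return vals.min(axis=0)

x = np.linspace(-2.0, 2.0, 401)
u = np.abs(x)                       # initial condition
dt, T = 0.01, 0.5
for _ in range(int(T / dt)):
    u = sl_step(u, x, dt)

# Exact solution with u0 = |x| is max(|x| - t, 0); print the max error.
print(np.max(np.abs(u - np.maximum(np.abs(x) - T, 0.0))))
```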

We consider the problem of the exact computation of the marginal eigenvalue distributions in the Laguerre and Jacobi $\beta$ ensembles. In the case $\beta=1$ this is a question of long standing in the mathematical statistics literature. A recursive procedure to accomplish this task is given for $\beta$ a positive integer and the parameter $\lambda_1$ a non-negative integer. This case is special due to a finite basis of elementary functions, with coefficients which are polynomials. In the Laguerre case with $\beta = 1$ and $\lambda_1 + 1/2$ a non-negative integer, some evidence is given of there again being a finite basis, now consisting of elementary functions and the error function multiplied by elementary functions. Moreover, it follows that the corresponding distributions in the fixed trace case admit a finite basis of power functions, as is also true for $\lambda_1$ a non-negative integer. The fixed trace case in this setting is relevant to quantum information theory and quantum transport problems, allowing in particular the exact determination of Landauer conductance distributions in a previously intractable parameter regime. Our findings also aid in analyzing zeros of the generating function for specific gap probabilities, supporting the validity of an associated large $N$ local central limit theorem.
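For orientation, one common convention for the joint eigenvalue probability density of the Laguerre $\beta$ ensemble (the exponent parametrization may differ slightly from the one used in the paper) is
$$
p(x_1,\ldots,x_N) \;\propto\; \prod_{l=1}^{N} x_l^{\lambda_1}\, e^{-\beta x_l/2} \prod_{1\le j<k\le N} |x_k - x_j|^{\beta}, \qquad x_l > 0,
$$
from which the marginal distribution of a single eigenvalue is obtained by integrating out the remaining $N-1$ variables; the fixed trace case further conditions on $\sum_l x_l$ being fixed.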

Quantum-inspired classical algorithms provide us with a new way to understand the computational power of quantum computers for practically relevant problems, especially in machine learning. In the past several years, numerous efficient algorithms for various tasks have been found, while an analysis of lower bounds has been missing. Using communication complexity, in this work we propose the first method to study lower bounds for these tasks. We mainly focus on lower bounds for solving linear regressions, supervised clustering, principal component analysis, recommendation systems, and Hamiltonian simulations. More precisely, we show that for linear regressions, in the row-sparse case, the lower bound is quadratic in the Frobenius norm of the underlying matrix, which is tight. In the dense case, with an extra assumption on the accuracy, we obtain a lower bound that is quartic in the Frobenius norm, which matches the upper bound. For supervised clustering, we obtain a tight lower bound that is quartic in the Frobenius norm. For the other three tasks, we obtain a lower bound that is quadratic in the Frobenius norm, while the known upper bound is quartic in the Frobenius norm. Through this research, we find that large quantum speedups can exist for sparse, high-rank, well-conditioned matrix-related problems. Finally, we extend our method to the lower bound analysis of quantum query algorithms for matrix-related problems. Some applications are given.

We propose a new randomized method for solving systems of nonlinear equations, which can find sparse solutions or solutions under certain simple constraints. The scheme only takes gradients of component functions and uses Bregman projections onto the solution space of a Newton equation. In the special case of Euclidean projections, the method is known as the nonlinear Kaczmarz method. Furthermore, if the component functions are nonnegative, we are in the setting of optimization under the interpolation assumption and the method reduces to SGD with the recently proposed stochastic Polyak step size. For general Bregman projections, our method is a stochastic mirror descent with a novel adaptive step size. We prove that, in the convex setting, each iteration of our method results in a smaller Bregman distance to exact solutions than the standard Polyak step. Our generalization to Bregman projections comes at the price that a convex one-dimensional optimization problem needs to be solved in each iteration. This can typically be done with globalized Newton iterations. Convergence is proved in two classical settings of nonlinearity: for convex nonnegative functions and locally for functions which fulfill the tangential cone condition. Finally, we show examples in which the proposed method outperforms similar methods with the same memory requirements.
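A minimal sketch of the Euclidean special case mentioned in the abstract (the nonlinear Kaczmarz method); the general method would replace this Euclidean projection by a Bregman projection with an adaptive step size. Function names and the toy system are illustrative.

```python
import numpy as np

def nonlinear_kaczmarz(fs, grads, x0, iters=2000, rng=None):
    """At each step pick a random component f_i and project onto the solution
    space of its linearization (the Newton equation):
        x <- x - f_i(x) / ||grad f_i(x)||^2 * grad f_i(x)."""
    rng = np.random.default_rng(rng)
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        i = rng.integers(len(fs))
        g = grads[i](x)
        denom = g @ g
        if denom > 0:
            x = x - fs[i](x) / denom * g
    return x

# Toy system: x0^2 + x1^2 - 4 = 0 and x0 - x1 = 0, solutions +-(sqrt(2), sqrt(2)).
fs = [lambda x: x[0]**2 + x[1]**2 - 4.0, lambda x: x[0] - x[1]]
grads = [lambda x: np.array([2*x[0], 2*x[1]]), lambda x: np.array([1.0, -1.0])]
print(nonlinear_kaczmarz(fs, grads, x0=[1.0, 2.0], rng=0))
```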
