
In the MAXSAT problem, we are given a set $V$ of $m$ variables and a collection $C$ of $n$ clauses over $V$, and we seek a truth assignment that maximizes the number of satisfied clauses. This problem is $\textit{NP}$-hard even in its restricted version, the 2-MAXSAT problem, in which every clause contains at most 2 literals. In this paper, we discuss a polynomial time algorithm to solve this problem. Its time complexity is bounded by $O(n^2m^3)$. Hence, we provide a proof of $P$ = $\textit{NP}$.
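
To make the objective concrete (this is an illustration of the problem only, not the paper's algorithm), here is a minimal brute-force Python sketch that evaluates all $2^m$ assignments of a small 2-CNF instance and returns the maximum number of satisfied clauses; the clause encoding is an assumption of this example:

    from itertools import product

    def max_sat_2cnf(num_vars, clauses):
        # Brute-force 2-MAXSAT: each clause is a tuple of nonzero ints,
        # +i meaning variable i and -i its negation (variables are 1-indexed).
        best = 0
        for bits in product([False, True], repeat=num_vars):
            satisfied = sum(
                any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
                for clause in clauses
            )
            best = max(best, satisfied)
        return best

    # Example: (x1 or x2), (not x1 or x2), (not x2) -- at most 2 are satisfiable.
    print(max_sat_2cnf(2, [(1, 2), (-1, 2), (-2,)]))  # -> 2

The sketch takes time exponential in $m$; the paper's claim is that an $O(n^2m^3)$ algorithm achieves the same maximum.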

Related Content

Algorithms for solving the linear classification problem have a long history, dating back at least to 1936 with linear discriminant analysis. For linearly separable data, many algorithms can obtain the exact solution to the corresponding 0-1 loss classification problem efficiently, but for data which is not linearly separable, this problem has been shown to be NP-hard in full generality. Alternative approaches all involve approximations of some kind, including the use of surrogates for the 0-1 loss (for example, the hinge or logistic loss) or approximate combinatorial search, none of which can be guaranteed to solve the problem exactly. Finding efficient algorithms that obtain an exact, i.e. globally optimal, solution for the 0-1 loss linear classification problem in fixed dimension remains an open problem. In the research we report here, we detail the construction of a new algorithm, incremental cell enumeration (ICE), that can solve the 0-1 loss classification problem exactly in polynomial time. To our knowledge, this is the first rigorously proven polynomial-time algorithm for this long-standing problem.
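
For concreteness (this is not the ICE algorithm, whose construction is not reproduced here), a short Python sketch contrasting the exact 0-1 loss of a linear classifier with the hinge and logistic surrogates mentioned above; the data and weights are illustrative only:

    import numpy as np

    def classification_losses(w, b, X, y):
        # X: (n, d) feature matrix, y: labels in {-1, +1}, (w, b): linear classifier.
        margins = y * (X @ w + b)
        zero_one = np.mean(margins <= 0)                 # exact 0-1 loss
        hinge = np.mean(np.maximum(0.0, 1.0 - margins))  # hinge surrogate
        logistic = np.mean(np.log1p(np.exp(-margins)))   # logistic surrogate
        return zero_one, hinge, logistic

    X = np.array([[0.0, 1.0], [2.0, 0.5], [-1.0, -1.0]])
    y = np.array([1, -1, -1])
    print(classification_losses(np.array([1.0, -1.0]), 0.0, X, y))

Minimizing the surrogates is a convex problem, whereas minimizing the 0-1 loss itself is the combinatorial problem the paper addresses exactly.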

The algebraic degree is an important parameter of Boolean functions used in cryptography. When a function in a large number of variables is not given explicitly in algebraic normal form, it might not be feasible to compute its degree. Instead, one can try to estimate the degree using probabilistic tests. We propose a probabilistic test for deciding whether the algebraic degree of a Boolean function $f$ is below a certain value $k$. The test involves picking an affine space of dimension $k$ and testing whether the values of $f$ on that space sum up to zero. If $\deg(f)<k$, then $f$ will always pass the test; otherwise it will sometimes pass and sometimes fail the test, depending on which affine space was chosen. The probability of failing the proposed test is closely related to the number of monomials of degree $k$ in a polynomial $g$, averaged over all the polynomials $g$ which are affine equivalent to $f$. We initiate the study of the probability of failing the proposed ``$\deg(f)<k$'' test. We show that in the particular case when the degree of $f$ is actually equal to $k$, the probability lies in the interval $(0.288788, 0.5]$, and therefore a small number of runs of the test is sufficient to give, with very high probability, the correct answer. Exact values of this probability for all the polynomials in 8 variables were computed using the representatives listed by Hou and by Langevin and Leander.
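
A single run of the described test can be sketched in Python as follows; representing points of $GF(2)^n$ as $n$-bit integers is an implementation choice of this illustration:

    import random

    def passes_degree_test(f, n, k):
        # One run of the "deg(f) < k" test: pick a random k-dimensional affine
        # subspace of GF(2)^n and XOR the values of f over all of its 2^k points.
        # f maps an n-bit integer (a point of GF(2)^n) to 0 or 1.
        while True:  # redraw the k directions until they are linearly independent
            dirs = [random.getrandbits(n) for _ in range(k)]
            if _independent(dirs):
                break
        offset = random.getrandbits(n)
        acc = 0
        for mask in range(1 << k):
            point = offset
            for i in range(k):
                if (mask >> i) & 1:
                    point ^= dirs[i]
            acc ^= f(point)
        return acc == 0  # always True when deg(f) < k

    def _independent(vectors):
        # Linear independence over GF(2) via elimination on leading bits.
        basis = {}  # leading-bit position -> stored vector
        for v in vectors:
            while v:
                lead = v.bit_length() - 1
                if lead not in basis:
                    basis[lead] = v
                    break
                v ^= basis[lead]
            if v == 0:
                return False
        return True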

The Plackett--Luce model is a popular approach for ranking data analysis, where a utility vector is employed to determine the probability of each outcome based on Luce's choice axiom. In this paper, we investigate the asymptotic theory of utility vector estimation by maximizing different types of likelihood, such as the full-, marginal-, and quasi-likelihood. We provide a rank-matching interpretation for the estimating equations of these estimators and analyze their asymptotic behavior as the number of items being compared tends to infinity. In particular, we establish the uniform consistency of these estimators under conditions characterized by the topology of the underlying comparison graph sequence and demonstrate that the proposed conditions are sharp for common sampling scenarios such as the nonuniform random hypergraph model and the hypergraph stochastic block model; we also obtain the asymptotic normality of these estimators and discuss the trade-off between statistical efficiency and computational complexity for practical uncertainty quantification. Both results allow for nonuniform and inhomogeneous comparison graphs with varying edge sizes and different asymptotic orders of edge probabilities. We verify our theoretical findings by conducting detailed numerical experiments.
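
As background, the full likelihood under the Plackett--Luce model factors the probability of a ranking into a sequence of choices via Luce's axiom; a minimal Python sketch of the resulting log-likelihood (the marginal- and quasi-likelihood variants studied in the paper are not reproduced here) is:

    import numpy as np

    def plackett_luce_loglik(u, rankings):
        # u: utility vector, one entry per item.
        # rankings: list of item-index sequences, most preferred first.
        total = 0.0
        for ranking in rankings:
            r = np.asarray(ranking)
            for t in range(len(r) - 1):
                # log-probability of choosing item r[t] among the items not yet placed
                total += u[r[t]] - np.log(np.sum(np.exp(u[r[t:]])))
        return total

    u = np.array([0.5, 0.0, -0.5])
    print(plackett_luce_loglik(u, [[0, 1, 2], [1, 0, 2]]))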

We establish globally optimal solutions to a class of fractional optimization problems over a class of constraint sets, whose key characteristics are as follows: 1) the numerator and the denominator of the objective function are both convex, semi-algebraic, Lipschitz continuous, and differentiable with Lipschitz continuous gradients on the constraint set; 2) the constraint set is closed, convex, and semi-algebraic. Compared with Dinkelbach's approach, our novelty lies in the following aspects: 1) Dinkelbach's approach has to solve a concave maximization problem in each iteration, for which a solution is nontrivial to obtain, while ours only needs to perform one proximal gradient operation per iteration; 2) Dinkelbach's approach requires at least one point at which the numerator is nonnegative in order to proceed, but ours does not, making it applicable to a much wider class of situations; 3) Dinkelbach's approach requires a closed and bounded constraint set, while ours only needs closedness, not necessarily boundedness. Therefore, our approach is viable for many more practical models, such as optimizing the Sharpe ratio (SR) or the information ratio in mathematical finance. Numerical experiments show that our approach achieves the ground-truth solutions in two simple examples. For real-world financial data, it outperforms several existing approaches for SR maximization.
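
For reference, the classical Dinkelbach iteration for maximizing a ratio $f(x)/g(x)$ with $g(x)>0$ over a feasible set $X$ (the scheme the paper compares against; the paper's own single proximal gradient update is not reproduced here) reads

$$ x_{k+1} \in \arg\max_{x \in X} \big\{ f(x) - \lambda_k\, g(x) \big\}, \qquad \lambda_{k+1} = \frac{f(x_{k+1})}{g(x_{k+1})}, $$

so every iteration requires solving the inner maximization subproblem to optimality, which is precisely the per-iteration cost the proposed method avoids.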

We consider the Bidder Selection Problem (BSP) in position auctions, motivated by practical concerns of online advertising platforms. In this problem, the platform sells ad slots via an auction to a large pool of $n$ potential buyers with independent values drawn from known prior distributions. Due to communication and computation restrictions, the seller can invite only $k<n$ of the advertisers to the auction, and she wishes to maximize either the social welfare or her revenue by selecting the set of invited bidders. We study BSP in a classic multi-winner model of position auctions for the welfare and revenue objectives, using the optimal format for the selected set of bidders (respectively, the VCG mechanism or Myerson's auction). We propose a novel Poisson-Chernoff relaxation of the problem that immediately implies that 1) BSP is polynomial-time solvable up to a vanishingly small error as the problem size $k$ grows; 2) combining our relaxation with a trivial brute-force algorithm yields a PTAS for position auctions; the algorithm is in fact an efficient PTAS (EPTAS) under the mild assumption $k\ge\log n$, with a much better running time than previous PTASes for the single-item auction. Our approach yields simple and practically relevant algorithms, unlike all previous, complex PTASes, whose running times had at least doubly exponential dependency on $\varepsilon$. In contrast, our algorithms are even faster than popular algorithms such as greedy for submodular maximization. Furthermore, we conducted extensive numerical experiments, which demonstrate the high efficiency and practical applicability of our solution. Our experiments corroborate the experimental findings of [Mehta, Nadav, Psomas, Rubinstein 2020] that many simple heuristics perform surprisingly well, which indicates the importance of using small $\varepsilon$ for the BSP and the practical irrelevance of all previous PTAS approaches.
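
As a point of reference only, here is a hedged Python sketch of the simple greedy baseline alluded to above for the welfare objective, specialized to a single slot so that the expected welfare of an invited set $S$ is $\mathbb{E}[\max_{i\in S} v_i]$; the Poisson-Chernoff relaxation itself is not reproduced, and all names and the single-slot simplification are assumptions of this illustration:

    import numpy as np

    rng = np.random.default_rng(0)

    def greedy_select(samplers, k, num_samples=10_000):
        # Greedy bidder selection for a single-slot welfare objective.
        # samplers[i](m) returns m i.i.d. draws of bidder i's value.
        # Expected welfare of a set S is estimated as the mean of max_{i in S} v_i.
        draws = np.stack([s(num_samples) for s in samplers])  # shape (n, num_samples)
        chosen, current_max = [], np.zeros(num_samples)
        for _ in range(k):
            gains = [np.maximum(current_max, draws[i]).mean() - current_max.mean()
                     if i not in chosen else -np.inf
                     for i in range(len(samplers))]
            best = int(np.argmax(gains))
            chosen.append(best)
            current_max = np.maximum(current_max, draws[best])
        return chosen, current_max.mean()

    # Example: five bidders with lognormal values of increasing spread, invite two.
    samplers = [lambda m, s=s: rng.lognormal(sigma=s, size=m)
                for s in (0.2, 0.4, 0.6, 0.8, 1.0)]
    print(greedy_select(samplers, k=2))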

In this work, we propose a simple numerical scheme based on a fast front-tracking approach for solving a fluid-structure interaction (FSI) problem of plaque growth in blood vessels. A rigorous error analysis is carried out for the temporal semi-discrete scheme to show that it is first-order accurate for all choices of the macro time step $\Delta T$, the micro time step $\Delta t$, and the scale parameter $\epsilon$. A numerical example is presented to verify the theoretical results and demonstrate the excellent performance of the proposed multiscale algorithm.

In a simple connected graph $G=(V,E)$, a subset of vertices $S \subseteq V$ is a dominating set if every vertex $v \in V\setminus S$ is adjacent to some vertex $x \in S$. A number of real-life problems can be modeled with this problem, which is known to be among the difficult NP-hard problems in its class. We formulate the problem as an integer linear program (ILP) and compare its performance with two existing state-of-the-art exact algorithms, as well as with the exact implicit-enumeration and heuristic algorithms that we propose here. Our exact algorithm found optimal solutions much faster than the ILP and the above two exact algorithms on medium-density instances. For graphs of considerable size, our heuristic algorithm was much faster than both the ILP and our exact algorithm. It found an optimal solution for more than half of the tested instances and improved the previously known state-of-the-art solutions for almost all the tested benchmark instances. Among the instances where the optimum was not found, it gave an average approximation error of $1.18$.
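
For concreteness, the standard ILP formulation of the minimum dominating set problem introduces a binary variable $x_v$ for each vertex (the paper's exact formulation may differ in details):

$$ \min \sum_{v \in V} x_v \quad \text{subject to} \quad x_v + \sum_{u \in N(v)} x_u \ge 1 \;\; \forall v \in V, \qquad x_v \in \{0,1\} \;\; \forall v \in V, $$

where $N(v)$ denotes the set of neighbors of $v$; the constraints force every vertex to be chosen or to have a chosen neighbor.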

In this study, we examine numerical approximations for second-order linear and nonlinear differential equations with diverse boundary conditions, followed by residual corrections of the first approximations. We first obtain numerical results using the Galerkin weighted residual approach with Bernstein polynomials. Residuals arise because this first approximation is computed numerically. To minimize these residuals, we use a compact finite difference scheme of fourth-order convergence to solve the error differential equations subject to the error boundary conditions. We also introduce the formulation of the fourth-order compact finite difference method for nonlinear BVPs. The improved approximations are produced by adding the error values derived from the approximations of the error differential equation to the weighted residual values. Numerical results are compared to the exact solutions and to solutions available in the published literature to validate the proposed scheme, and high accuracy is achieved in all cases.
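
As background, the classical fourth-order compact (Numerov-type) discretization of a second-order problem $u''(x)=f(x,u)$ on a uniform grid with spacing $h$ is (the paper's scheme for the error equations and for nonlinear BVPs may generalize this):

$$ \frac{u_{i-1} - 2u_i + u_{i+1}}{h^2} = \frac{f_{i-1} + 10 f_i + f_{i+1}}{12} + O(h^4), $$

which couples only three neighboring grid points while attaining fourth-order accuracy.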

Determining the optimal sample complexity of PAC learning in the realizable setting was a central open problem in learning theory for decades. Finally, the seminal work by Hanneke (2016) gave an algorithm with a provably optimal sample complexity. His algorithm carefully constructs a structured family of sub-samples of the training data and then returns a majority vote among hypotheses trained on each of the sub-samples. While this is a very exciting theoretical result, it has not had much impact in practice, in part due to inefficiency, since it constructs a polynomial number of sub-samples of the training data, each of linear size. In this work, we prove the surprising result that the practical and classic heuristic bagging (a.k.a. bootstrap aggregation), due to Breiman (1996), is in fact also an optimal PAC learner. Bagging pre-dates Hanneke's algorithm by twenty years and is taught in most undergraduate machine learning courses. Moreover, we show that it only requires a logarithmic number of sub-samples to reach optimality.
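
A minimal Python sketch of bagging as described above, with the number of bootstrap sub-samples taken logarithmic in the training set size in the spirit of the stated result; the base learner and the specific choice of $\lceil\log_2 n\rceil$ bags are placeholders of this illustration:

    import math
    import random
    from collections import Counter

    def bagged_predict(train, base_learner, x, num_bags=None):
        # Bagging (Breiman 1996): train the base learner on bootstrap sub-samples
        # of `train` and return the majority vote of their predictions at `x`.
        n = len(train)
        if num_bags is None:
            num_bags = max(1, math.ceil(math.log2(n)))  # logarithmically many bags
        votes = []
        for _ in range(num_bags):
            boot = [random.choice(train) for _ in range(n)]  # sample with replacement
            h = base_learner(boot)  # base_learner returns a predictor: point -> label
            votes.append(h(x))
        return Counter(votes).most_common(1)[0][0]  # majority vote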

Reliable probabilistic primality tests are fundamental in public-key cryptography. In adversarial scenarios, a composite with a high probability of passing a specific primality test could be chosen deliberately, and in such cases we need worst-case error estimates for the test. However, in many scenarios the numbers are chosen at random and thus have a significantly smaller error probability; therefore, we are interested in average-case error estimates. In this paper, we establish such bounds for the strong Lucas primality test, for which only worst-case, but no average-case, error bounds are currently available. This allows us to use the test with more confidence. We examine an algorithm that draws odd $k$-bit integers uniformly and independently, runs $t$ independent iterations of the strong Lucas test with randomly chosen parameters, and outputs the first number that passes all $t$ consecutive rounds. We obtain numerical upper bounds on the probability of returning a composite. Furthermore, we consider a modified version of this algorithm that excludes integers divisible by small primes, resulting in improved bounds. Additionally, we classify the numbers that contribute most to our estimate.
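
The sampling procedure analyzed above can be sketched as follows; `strong_lucas_round` is a hypothetical placeholder for a single strong Lucas test iteration with randomly chosen parameters, not a real library call:

    import random

    def first_accepted(k, t, strong_lucas_round):
        # Draw odd k-bit integers uniformly and independently and return the first
        # one that passes t independent rounds of the strong Lucas test.
        # strong_lucas_round(n) is assumed to run one round with randomly chosen
        # parameters and return True if n passes that round.
        while True:
            n = random.getrandbits(k - 1) | (1 << (k - 1)) | 1  # odd, exactly k bits
            if all(strong_lucas_round(n) for _ in range(t)):
                return n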
