
The Strong Exponential Time Hypothesis (SETH) asserts that for every $\varepsilon>0$ there exists $k$ such that $k$-SAT cannot be solved in time $(2-\varepsilon)^n$. The field of fine-grained complexity has leveraged SETH to prove quite tight conditional lower bounds for dozens of problems in various domains and complexity classes, including Edit Distance, Graph Diameter, Hitting Set, Independent Set, and Orthogonal Vectors. Yet, it has been repeatedly asked in the literature whether SETH-hardness results can be proven for other fundamental problems such as Hamiltonian Path, Independent Set, Chromatic Number, MAX-$k$-SAT, and Set Cover. In this paper, we show that fine-grained reductions implying even $\lambda^n$-hardness of these problems from SETH, for any $\lambda>1$, would imply new circuit lower bounds: super-linear lower bounds for Boolean series-parallel circuits or polynomial lower bounds for arithmetic circuits (each of which has been open for four decades). We also extend this barrier result to the class of parameterized problems. Namely, for every $\lambda>1$ we conditionally rule out fine-grained reductions implying SETH-based lower bounds of $\lambda^k$ for a number of problems parameterized by the solution size $k$. Our main technical tool is a new concept called polynomial formulations. In particular, we show that many problems can be represented by relatively succinct low-degree polynomials, and that any problem with such a representation cannot be proven SETH-hard (without proving new circuit lower bounds).
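To give a flavor of what a low-degree polynomial representation of a problem can look like, here is a toy Python sketch for Orthogonal Vectors: the product $\prod_i (1 - a_i b_i)$ is a standard multilinear polynomial that evaluates to 1 exactly when two 0/1 vectors are orthogonal. This is only an illustration of the general idea, not the paper's formal notion of polynomial formulations.

```python
import itertools

def ov_polynomial(a, b):
    """Multilinear polynomial that evaluates to 1 exactly when the
    0/1 vectors a and b are orthogonal, and to 0 otherwise."""
    prod = 1
    for ai, bi in zip(a, b):
        prod *= (1 - ai * bi)   # the factor vanishes iff ai = bi = 1
    return prod

# Brute-force Orthogonal Vectors via the polynomial:
# is there a pair (a, b) with ov_polynomial(a, b) == 1?
A = [(1, 0, 1), (0, 1, 1)]
B = [(0, 1, 0), (1, 1, 0)]
print(any(ov_polynomial(a, b) == 1 for a, b in itertools.product(A, B)))  # True
```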

Related Content

In the Colored Clustering problem, one is asked to cluster edge-colored (hyper-)graphs whose colors represent interaction types. More specifically, the goal is to select as many edges as possible without choosing two edges that share an endpoint and are colored differently. Equivalently, the goal can also be described as assigning colors to the vertices in a way that fits the edge-coloring as well as possible. As this problem is NP-hard, we build on previous work by studying its parameterized complexity. We give a $2^{\mathcal O(k)} \cdot n^{\mathcal O(1)}$-time algorithm, where $k$ is the number of edges to be selected and $n$ the number of vertices. We also prove the existence of a problem kernel of size $\mathcal O(k^{5/2})$, resolving an open problem posed in the literature. Furthermore, we consider parameters smaller than both $k$ and $r$, the number of edges that may be deleted; such smaller parameters are obtained by taking the difference between $k$ or $r$ and some lower bound on these values. We give both algorithms and lower bounds for Colored Clustering with such parameterizations. Finally, we settle the parameterized complexity of Colored Clustering with respect to structural graph parameters by showing that it is $W[1]$-hard with respect to both vertex cover number and tree-cut width, but fixed-parameter tractable with respect to slim tree-cut width.
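The vertex-coloring reformulation stated above lends itself to a direct brute-force check on tiny instances. The sketch below (helper name and instance are illustrative) enumerates all vertex colorings and counts the edges whose endpoints both receive the edge's color; it is exponential in the number of vertices and is meant only to illustrate the problem, not the paper's $2^{\mathcal O(k)}$ algorithm.

```python
from itertools import product

def colored_clustering_bruteforce(n, edges, colors):
    """Brute-force the vertex-coloring view of Colored Clustering:
    assign each of the n vertices one color and count the edges whose
    endpoints both receive the edge's color. Exponential in n, so
    only suitable for tiny illustrative instances.

    edges: list of (u, v, color) triples with 0 <= u, v < n.
    """
    best = 0
    for assignment in product(colors, repeat=n):
        fit = sum(1 for u, v, c in edges
                  if assignment[u] == c and assignment[v] == c)
        best = max(best, fit)
    return best

# Two red edges sharing vertex 1 with a blue edge: at most 2 edges fit.
edges = [(0, 1, "red"), (1, 2, "red"), (1, 3, "blue")]
print(colored_clustering_bruteforce(4, edges, ["red", "blue"]))  # 2
```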

Isogeometric analysis is a powerful paradigm that exploits the high smoothness of splines for the numerical solution of high-order partial differential equations. However, the tensor-product structure of standard multivariate B-spline models is not well suited for the representation of complex geometries, and to maintain high continuity on general domains, special constructions on multi-patch geometries must be used. In this paper we focus on adaptive isogeometric methods with hierarchical splines, and extend the construction of $C^1$ isogeometric spline spaces on multi-patch planar domains to the hierarchical setting. We introduce a new abstract framework for the definition of hierarchical splines, which replaces the hypothesis of local linear independence for the basis of each level by a weaker assumption. We also develop a refinement algorithm that guarantees that the assumption is fulfilled by $C^1$ splines on certain suitably graded hierarchical multi-patch mesh configurations, and prove that it has linear complexity. The performance of the adaptive method is tested by solving the Poisson and the biharmonic problems.
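As a minimal illustration of the smoothness that makes splines attractive here, the following sketch (using scipy, an assumption of this example rather than anything from the paper) builds a quadratic B-spline on a single interval and checks numerically that it is $C^1$ but not $C^2$ across an interior knot. The paper's hierarchical multi-patch $C^1$ construction is, of course, far more involved.

```python
import numpy as np
from scipy.interpolate import BSpline

# Quadratic (degree-2) B-spline on an open knot vector: C^1 across the
# simple interior knot 0.5, the smoothness that the C^1 isogeometric
# spaces in the paper provide across patch interfaces.
knots = np.array([0, 0, 0, 0.5, 1, 1, 1], dtype=float)
coeffs = np.array([0.0, 1.0, 0.5, 0.2])   # illustrative control values
spline = BSpline(knots, coeffs, k=2)

eps = 1e-6
for name, f in [("value", spline), ("1st deriv", spline.derivative(1))]:
    jump = abs(f(0.5 - eps) - f(0.5 + eps))
    print(f"{name} jump across knot 0.5: {jump:.2e}")   # ~0: C^1

d2 = spline.derivative(2)
print(abs(d2(0.5 - eps) - d2(0.5 + eps)))  # O(1) jump: not C^2
```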

We study the SAM (Sharpness-Aware Minimization) optimizer, which has recently attracted a lot of interest due to its improved performance over more classical variants of stochastic gradient descent. Our main contribution is the derivation of continuous-time models (in the form of SDEs) for SAM and two of its variants, both for the full-batch and mini-batch settings. We demonstrate that these SDEs are rigorous approximations of the real discrete-time algorithms (in a weak sense, scaling linearly with the step size). Using these models, we then explain why SAM prefers flat minima over sharp ones: it minimizes an implicitly regularized loss with a Hessian-dependent noise structure. Finally, we prove that, perhaps unexpectedly, SAM is attracted to saddle points under some realistic conditions. Our theoretical results are supported by detailed experiments.
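For reference, the discrete-time update that these SDEs approximate is simple to state. Below is a minimal full-batch sketch of the standard SAM step (Foret et al., 2021): an ascent step to the worst-case point within an $L_2$ ball of radius $\rho$, then a descent step using the gradient taken there. The toy quadratic loss is an illustrative choice; this is the base algorithm only, not the paper's continuous-time analysis.

```python
import numpy as np

def sam_step(w, loss_grad, lr=0.1, rho=0.05):
    """One full-batch SAM step:
    1. ascend to the worst-case point within an L2 ball of radius rho,
    2. apply the gradient computed there to the original weights."""
    g = loss_grad(w)
    w_adv = w + rho * g / (np.linalg.norm(g) + 1e-12)  # inner ascent
    return w - lr * loss_grad(w_adv)                   # outer descent

# Toy quadratic loss L(w) = 0.5 * ||w||^2, so grad L(w) = w.
w = np.array([1.0, -2.0])
for _ in range(100):
    w = sam_step(w, lambda w: w)
print(w)  # converges toward the unique minimum at 0
```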

Pin fins are essential in the cooling of turbine blades, and their design has therefore seen significant research in the past. With the developments in metal additive manufacturing, novel design approaches toward complex geometries are now feasible. To that end, this article presents a Bayesian optimization approach for designing inline pins that can achieve low pressure loss. The pin-fin shape is defined using featurized (parametrized) piecewise cubic splines in 2D, so the complexity of the shape depends on the number of splines used. For method development, the study is performed with three splines; owing to this piecewise modeling, a unique pin-fin design is defined by five features. After specifying the design, a computational fluid dynamics-based model is developed that computes the pressure drop during the flow. Bayesian optimization is carried out on a Gaussian process-based surrogate to obtain an optimal combination of pin-fin features that minimizes the pressure drop. The results show that the optimization tends toward an aerodynamic design with low pressure drop, corroborating existing knowledge. Furthermore, multiple optimization runs are conducted with varying amounts of input data. They reveal that convergence to a similar optimal design is achieved with as few as twenty-five initial design-of-experiments data points for the surrogate. A sensitivity analysis shows that the distance between the rows of pin fins is the most dominant feature influencing the pressure drop. In summary, the newly developed automated framework demonstrates remarkable capabilities in designing pin fins with superior performance characteristics.
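A minimal sketch of the kind of Gaussian-process-based Bayesian optimization loop described here, with a cheap analytic stand-in for the CFD pressure-drop solver and a generic expected-improvement acquisition. The objective, kernel, and random candidate sampling below are illustrative assumptions, not the article's actual pipeline.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)
# Stand-in objective for the CFD pressure-drop computation (1 feature).
f = lambda x: (x - 0.3) ** 2 + 0.05 * np.sin(20 * x)

# Initial design-of-experiments points for the surrogate.
X = rng.uniform(0, 1, size=(5, 1))
y = f(X).ravel()

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
for _ in range(20):
    gp.fit(X, y)
    cand = rng.uniform(0, 1, size=(256, 1))        # candidate designs
    mu, sd = gp.predict(cand, return_std=True)
    best = y.min()
    z = (best - mu) / np.maximum(sd, 1e-9)
    ei = (best - mu) * norm.cdf(z) + sd * norm.pdf(z)  # expected improvement
    x_next = cand[np.argmax(ei)]
    X = np.vstack([X, x_next])
    y = np.append(y, f(x_next))

print(X[np.argmin(y)], y.min())  # best design found and its "pressure drop"
```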

It is known that the mutual information, in the sense of Kolmogorov complexity, of any pair of strings $x$ and $y$ is equal to the length of the longest shared secret key that two parties can establish via a probabilistic protocol with interaction on a public channel, assuming that the parties hold as their inputs $x$ and $y$ respectively. We determine the worst-case communication complexity of this problem for the setting where the parties can use private sources of random bits. We show that for some $x$, $y$ the communication complexity of secret key agreement does not decrease even if the parties have to agree on a secret key whose size is much smaller than the mutual information between $x$ and $y$. On the other hand, we discuss examples of $x$, $y$ such that the communication complexity of the protocol declines gradually with the size of the derived secret key. The proof of the main result uses spectral properties of appropriate graphs and the expander mixing lemma, as well as information-theoretic techniques.

We consider constraint satisfaction problems whose relations are defined in first-order logic over any uniform hypergraph satisfying certain weak abstract structural conditions. Our main result is a P/NP-complete complexity dichotomy for such CSPs. Surprisingly, the large class of structures under consideration falls into a mixed regime: the classical complexity reduction to finite-domain CSPs cannot be used as a black box, yet the class does not exhibit the order properties known to prevent the application of this reduction. We introduce an algorithmic technique inspired by classical notions from the theory of finite-domain CSPs, and prove its correctness based on symmetries that depend on a linear order that is external to the structures under consideration.

Summation-by-parts (SBP) operators are popular building blocks for systematically developing stable and high-order accurate numerical methods for time-dependent differential equations. The main idea behind existing SBP operators is that the solution is assumed to be well approximated by polynomials up to a certain degree, and the SBP operator should therefore be exact for them. However, polynomials might not provide the best approximation for some problems, and other approximation spaces may be more appropriate. In this paper, a theory for SBP operators based on general function spaces is developed. We demonstrate that most of the established results for polynomial-based SBP operators carry over to this general class of SBP operators. Our findings imply that the concept of SBP operators can be applied to a significantly larger class of methods than currently known. We exemplify the general theory by considering trigonometric, exponential, and radial basis functions.
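For concreteness, the classical second-order polynomial-based SBP first-derivative operator can be written down and verified in a few lines. The sketch below checks the defining SBP property $Q + Q^\top = B$ (a discrete mimic of integration by parts) and exactness on polynomials of degree at most one; the paper's generalization replaces this polynomial exactness by exactness on other function spaces such as trigonometric or radial basis functions.

```python
import numpy as np

n, h = 11, 0.1                       # uniform grid on [0, 1]
x = np.linspace(0.0, 1.0, n)

# Classical 2nd-order SBP first-derivative operator D = P^{-1} Q.
P = h * np.diag([0.5] + [1.0] * (n - 2) + [0.5])   # diagonal norm (quadrature)
Q = 0.5 * (np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1))
Q[0, 0], Q[-1, -1] = -0.5, 0.5
D = np.linalg.solve(P, Q)

# SBP property: Q + Q^T = B = diag(-1, 0, ..., 0, 1).
B = np.zeros((n, n))
B[0, 0], B[-1, -1] = -1.0, 1.0
print(np.allclose(Q + Q.T, B))       # True

# Exact for the approximation space: polynomials of degree <= 1.
print(np.allclose(D @ np.ones(n), 0), np.allclose(D @ x, np.ones(n)))
```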

We study the repeated balls-into-bins process introduced by Becchetti, Clementi, Natale, Pasquale and Posta (2019). This process starts with $m$ balls arbitrarily distributed across $n$ bins. At each round $t=1,2,\ldots$, one ball is selected from each non-empty bin and placed into a bin chosen independently and uniformly at random. We prove the following results:
• For any $n \leq m \leq \mathrm{poly}(n)$, we prove a lower bound of $\Omega(m/n \cdot \log n)$ on the maximum load. For the special case $m=n$, this matches the upper bound of $O(\log n)$ shown in [BCNPP19]. It also provides a positive answer to the conjecture in [BCNPP19] that for $m=n$ the maximum load is $\omega(\log n/ \log \log n)$ at least once in a polynomially large time interval. For $m\in [\omega(n), n\log n]$, our new lower bound disproves the conjecture in [BCNPP19] that the maximum load remains $O(\log n)$.
• For any $n\leq m\leq\mathrm{poly}(n)$, we prove an upper bound of $O(m/n\cdot\log n)$ on the maximum load for all steps of a polynomially large time interval. This matches our lower bound up to multiplicative constants.
• For any $m\geq n$, our analysis also implies an $O(m^2/n)$ waiting time to reach a configuration with an $O(m/n\cdot\log m)$ maximum load, even for worst-case initial distributions.
• For any $m \geq n$, we show that every ball visits every bin in $O(m\log m)$ rounds. For $m = n$, this improves the previous upper bound of $O(n \log^2 n)$ in [BCNPP19]. We also prove that this bound is tight up to multiplicative constants for any $n \leq m \leq \mathrm{poly}(n)$.
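The process itself is straightforward to simulate directly from its definition, which gives a quick sanity check on the scale of these bounds. A minimal sketch follows; the worst-case start with all balls in one bin is an illustrative choice.

```python
import numpy as np

def simulate(m, n, rounds, seed=0):
    """Repeated balls-into-bins: each round, remove one ball from every
    non-empty bin and reassign each removed ball to a bin chosen
    independently and uniformly at random. Returns the maximum load
    observed over all rounds."""
    rng = np.random.default_rng(seed)
    load = np.zeros(n, dtype=int)
    load[0] = m                      # worst-case start: all balls in one bin
    max_load = 0
    for _ in range(rounds):
        movers = int((load > 0).sum())
        load[load > 0] -= 1          # one ball leaves each non-empty bin
        dest = rng.integers(0, n, size=movers)
        np.add.at(load, dest, 1)     # uniform random reassignment
        max_load = max(max_load, int(load.max()))
    return max_load

n = 100
print(simulate(m=n, n=n, rounds=10 * n))  # expect Theta(log n) scale
```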

Predictive algorithms, such as deep neural networks (DNNs), are used in many domain sciences to directly estimate internal parameters of interest in simulator-based models, especially in settings where the observations include images or other complex high-dimensional data. In parallel, modern neural density estimators, such as normalizing flows, are becoming increasingly popular for uncertainty quantification, especially when both parameters and observations are high-dimensional. However, parameter inference is an inverse problem and not a prediction task; thus, an open challenge is to construct conditionally valid and precise confidence regions, with a guaranteed probability of covering the true parameters of the data-generating process, no matter what the (unknown) parameter values are, and without relying on large-sample theory. Many simulator-based inference (SBI) methods are indeed known to produce biased or overly confident parameter regions, yielding misleading uncertainty estimates. This paper presents WALDO, a novel method for constructing confidence regions with finite-sample conditional validity by leveraging prediction algorithms or posterior estimators that are currently widely adopted in SBI. WALDO reframes the well-known Wald test statistic and uses a computationally efficient regression-based machinery for classical Neyman inversion of hypothesis tests. We apply our method to a recent high-energy physics problem, where DNN-based prediction has previously led to biased estimates. We also illustrate how our approach can correct overly confident posterior regions computed with normalizing flows.
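For intuition, the classical machinery that WALDO builds on (the Wald test statistic and Neyman inversion of hypothesis tests) can be shown in a Gaussian toy problem where everything is exact. The sketch below is not WALDO itself, which replaces the point estimate, variance, and critical values with prediction- or posterior-based, regression-calibrated counterparts.

```python
import numpy as np
from scipy.stats import chi2

# Gaussian toy: n i.i.d. draws x ~ N(theta, sigma^2), sigma known.
rng = np.random.default_rng(1)
theta_true, sigma, n = 2.0, 1.0, 50
x = rng.normal(theta_true, sigma, size=n)

theta_hat, var_hat = x.mean(), sigma**2 / n

def wald(theta0):
    """Classical Wald statistic (theta_hat - theta0)^2 / Var(theta_hat)."""
    return (theta_hat - theta0) ** 2 / var_hat

# Neyman inversion: the 95% confidence set is every theta0 whose
# Wald test is NOT rejected at level 0.05.
grid = np.linspace(0, 4, 2001)
cutoff = chi2.ppf(0.95, df=1)
region = grid[np.array([wald(t) <= cutoff for t in grid])]
print(region.min(), region.max())   # ~ theta_hat +/- 1.96 * sigma / sqrt(n)
```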

We consider Group Control by Adding Individuals (GCAI) in the setting of group identification for two procedural rules -- the consensus-start-respecting rule and the liberal-start-respecting rule. It is known that GCAI is NP-hard for both rules, but whether it is fixed-parameter tractable with respect to the number of distinguished individuals remained open for each of them. We resolve both open problems in the affirmative. In addition, we strengthen the NP-hardness of GCAI by showing that, with respect to the natural parameter given by the number of added individuals, GCAI for both rules is W[2]-hard. Notably, the W[2]-hardness for the liberal-start-respecting rule holds even in the very special case where the qualifications of individuals satisfy the so-called consecutive ones property, whereas for the consensus-start-respecting rule the problem becomes polynomial-time solvable in this case. We also study a dual restriction where the disqualifications of individuals fulfill the consecutive ones property, and show that under this restriction GCAI for both rules turns out to be polynomial-time solvable. Our reductions for showing W[2]-hardness also imply several lower bounds concerning kernelization and exact algorithms.
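The consecutive ones property invoked above is a classical matrix property: a 0/1 matrix has it if its columns can be permuted so that the 1s in every row appear consecutively. A brute-force checker might look as follows (for illustration only; Booth and Lueker's PQ-tree algorithm decides this in linear time, and the function name and examples here are hypothetical).

```python
from itertools import permutations

def has_consecutive_ones(matrix):
    """Brute-force check of the consecutive ones property: can the
    columns be permuted so that the 1s in every row are consecutive?
    Factorial in the number of columns -- illustration only."""
    cols = range(len(matrix[0]))
    for perm in permutations(cols):
        ok = True
        for row in matrix:
            ones = [i for i, c in enumerate(perm) if row[c] == 1]
            if ones and ones[-1] - ones[0] + 1 != len(ones):
                ok = False
                break
        if ok:
            return True
    return False

print(has_consecutive_ones([[1, 1, 0], [0, 1, 1]]))             # True
print(has_consecutive_ones([[1, 1, 0], [0, 1, 1], [1, 0, 1]]))  # False
```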
