
In this paper, we initiate the study of the computational power of adaptive and non-adaptive monotone decision trees, i.e., decision trees where each query is a monotone function of the input bits. In the most general setting, the monotone decision tree height (or size) can be viewed as a measure of the non-monotonicity of a given Boolean function. We also study restrictions of the model in which the monotone functions that can be queried at each node are constrained in terms of their circuit complexity. This naturally leads to complexity classes of the form DT(mon-C) for any circuit complexity class C, where the height of the tree is O(log n) and the query functions can be computed by monotone circuits in the class C. In this context, we prove the following characterizations and bounds. For any Boolean function f, we show that the minimum monotone decision tree height can be exactly characterized (in both the adaptive and non-adaptive versions of the model) in terms of its alternation, where alt(f) is defined as the maximum number of times the function value changes along any chain in the Boolean lattice. We also characterize the non-adaptive decision tree height by a natural generalization of the certificate complexity of a function. Similarly, we determine the complexity of non-deterministic and randomized variants of monotone decision trees in terms of alt(f). We show that DT(mon-C) = C when C contains monotone circuits for the threshold functions (e.g., when C = TC0). For C = AC0, we show that any function in AC0 can be computed by a sub-linear-height monotone decision tree whose queries have monotone AC0 circuits. To understand the logarithmic-height case for AC0, i.e., DT(mon-AC0), we show that functions in DT(mon-AC0) have AC0 circuits with few negation gates.
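
For reference, the alternation measure used throughout this abstract can be written formally as follows; this is a standard formalization of the prose definition above, not notation taken from the paper itself.

```latex
% Alternation of a Boolean function f : {0,1}^n -> {0,1}:
% the maximum number of value changes of f along any chain in the Boolean lattice.
\[
  \operatorname{alt}(f) \;=\;
  \max_{x^{(0)} \,<\, x^{(1)} \,<\, \dots \,<\, x^{(m)}}
  \bigl|\{\, i \in \{0,\dots,m-1\} \;:\; f(x^{(i)}) \neq f(x^{(i+1)}) \,\}\bigr|
\]
% In particular, non-constant monotone functions have alt(f) = 1,
% while the parity function on n bits has alt(f) = n.
```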

Related content

We present a Newton-type method that converges fast from any initialization and for arbitrary convex objectives with Lipschitz Hessians. We achieve this by merging the ideas of cubic regularization with a certain adaptive Levenberg--Marquardt penalty. In particular, we show that the iterates given by $x^{k+1}=x^k - \bigl(\nabla^2 f(x^k) + \sqrt{H\|\nabla f(x^k)\|} \mathbf{I}\bigr)^{-1}\nabla f(x^k)$, where $H>0$ is a constant, converge globally with a $\mathcal{O}(\frac{1}{k^2})$ rate. Our method is the first variant of Newton's method that has both cheap iterations and provably fast global convergence. Moreover, we prove that locally our method converges superlinearly when the objective is strongly convex. To boost the method's performance, we present a line search procedure that does not need prior knowledge of $H$ and is provably efficient.
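
As a concrete illustration, the update rule above can be sketched in a few lines of Python; the quadratic test objective, the choice $H=1$, and the stopping rule are illustrative assumptions, not part of the paper.

```python
import numpy as np

def regularized_newton(grad, hess, x0, H=1.0, tol=1e-8, max_iter=100):
    """Sketch of x_{k+1} = x_k - (hess(x_k) + sqrt(H*||grad(x_k)||) I)^{-1} grad(x_k)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        # Levenberg--Marquardt-style penalty proportional to sqrt(H * ||gradient||).
        lam = np.sqrt(H * np.linalg.norm(g))
        x = x - np.linalg.solve(hess(x) + lam * np.eye(x.size), g)
    return x

# Toy usage on a convex quadratic f(x) = 0.5 x^T A x - b^T x (illustrative only).
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])
x_star = regularized_newton(lambda x: A @ x - b, lambda x: A, np.zeros(2))
print(x_star)  # approximately [0.6, -0.8], the minimizer A^{-1} b
```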

Model attribution is a critical component of deep neural networks (DNNs), as it provides interpretability for complex models. Recent studies have drawn attention to the security of attribution methods, which are vulnerable to attribution attacks that generate similar images with dramatically different attributions. Existing works have investigated empirically improving the robustness of DNNs against such attacks; however, none of them explicitly quantifies the actual deviation of attributions. In this work, for the first time, a constrained optimization problem is formulated to derive an upper bound that measures the largest dissimilarity of attributions after the samples are perturbed by any noise within a certain region while the classification results remain the same. Based on this formulation, different practical approaches are introduced to bound the attributions from above using Euclidean distance and cosine similarity under both $\ell_2$- and $\ell_\infty$-norm perturbation constraints. The bounds developed in our theoretical study are validated on various datasets and two different types of attacks (the PGD attack and the IFIA attribution attack). Over 10 million attacks in the experiments indicate that the proposed upper bounds effectively quantify the robustness of models based on the worst-case attribution dissimilarities.
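
The paper derives these bounds analytically; the toy sketch below only illustrates the quantity being bounded, i.e., the largest attribution dissimilarity observed over perturbations inside an $\ell_\infty$ ball that preserve the prediction. The gradient-times-input attribution and the linear scorer are placeholder assumptions, not the paper's setup.

```python
import numpy as np

def attribution(w, x):
    """Placeholder attribution: input-times-gradient for a linear scorer f(x) = w.x."""
    return w * x  # the gradient of w.x with respect to x is w

def cosine_dissimilarity(a, b):
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

def empirical_worst_case(w, x, eps=0.1, trials=1000, seed=0):
    """Largest attribution change over random l_inf perturbations that keep the prediction."""
    rng = np.random.default_rng(seed)
    base, label, worst = attribution(w, x), np.sign(w @ x), 0.0
    for _ in range(trials):
        x_pert = x + rng.uniform(-eps, eps, size=x.shape)
        if np.sign(w @ x_pert) != label:   # only count perturbations that keep the prediction
            continue
        worst = max(worst, cosine_dissimilarity(base, attribution(w, x_pert)))
    return worst

w = np.array([1.0, -2.0, 0.5])
x = np.array([0.3, -0.1, 0.8])
print(empirical_worst_case(w, x))
```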

3D object detection with point clouds and images plays an important role in perception tasks such as autonomous driving. Current methods show great performance on detection and pose estimation of standard-shaped vehicles but lag behind on more complex shapes such as semi-trailer truck combinations. Determining the shape and motion of such special vehicles accurately is crucial in yard operation and maneuvering and in industrial automation applications. This work introduces several new methods to improve and measure performance for such classes. State-of-the-art methods are based on predefined anchor grids or heatmaps for ground truth targets. However, the underlying representations do not take the shapes of differently sized objects into account. Our main contribution, AdaptiveShape, uses shape-aware anchor distributions and heatmaps to improve the detection capabilities. For large vehicles we achieve +10.9% AP in comparison to current shape-agnostic methods. Furthermore, we introduce a new fast LiDAR-camera fusion based on 2D bounding box camera detections, which are available in many processing pipelines. This fusion method does not rely on perfectly calibrated or temporally synchronized systems and is therefore applicable to a broad range of robotic applications. We extend a standard point pillar network to account for temporal data and to improve the learning of complex object movements. In addition, we extend ground truth augmentation to use grouped object pairs, which further improves truck AP by +2.2% compared to conventional augmentation.

We adapt the construction of algebraic circuits over the reals introduced by Cucker and Meer to arbitrary infinite integral domains and generalize the $\mathrm{AC}_{\mathbb{R}}$ and $\mathrm{NC}_{\mathbb{R}}$ classes to this setting. We prove a theorem in the style of Immerman's theorem, showing that for these adapted formalisms, the sets decided by circuits of constant depth and polynomial size coincide with the sets definable in a suitable adaptation of first-order logic. Additionally, we discuss a generalization of the guarded predicative logic of Durand, Haak and Vollmer, and we show characterizations for the $\mathrm{AC}_{R}$ and $\mathrm{NC}_{R}$ hierarchies. These generalizations apply to the Boolean $\mathrm{AC}$ and $\mathrm{NC}$ hierarchies as well. Furthermore, we introduce a formalism that makes it possible to compare some of the aforementioned complexity classes over different underlying integral domains.

We design an adaptive virtual element method (AVEM) of lowest order over triangular meshes with hanging nodes in 2D, which are treated as polygons. AVEM hinges on the stabilization-free a posteriori error estimators recently derived in [8]. The crucial property, which also plays a central role in this paper, is that the stabilization term can be made arbitrarily small relative to the a posteriori error estimators by increasing the stabilization parameter. Our AVEM concatenates two modules, GALERKIN and DATA. The former deals with piecewise constant data and is shown in [8] to be a contraction between consecutive iterates. The latter approximates general data by piecewise constants to a desired accuracy. AVEM is shown to be convergent and quasi-optimal, in terms of error decay versus degrees of freedom, for solutions and data belonging to appropriate approximation classes. Numerical experiments illustrate the interplay between the two modules and provide computational evidence of optimality.

In this paper we consider the generalized inverse iteration for computing ground states of the Gross-Pitaevskii eigenvector problem (GPE). We prove explicit linear convergence rates that depend on the eigenvalue of largest magnitude of a weighted linear eigenvalue problem. Furthermore, we show that this eigenvalue can be bounded by the first spectral gap of a linearized Gross-Pitaevskii operator, recovering the same rates as for linear eigenvector problems. With this we establish the first local convergence result for the basic inverse iteration for the GPE without damping. We also show how our findings directly generalize to extended inverse iterations, such as the Gradient Flow Discrete Normalized (GFDN) iteration proposed in [W. Bao, Q. Du, SIAM J. Sci. Comput., 25 (2004)] or the damped inverse iteration suggested in [P. Henning, D. Peterseim, SIAM J. Numer. Anal., 53 (2020)]. Our analysis also reveals why the inverse iteration for the GPE does not react favourably to spectral shifts. This empirical observation can now be explained by the blow-up of a weighting function that crucially contributes to the convergence rates. Our findings are illustrated by numerical experiments.
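
To fix ideas, here is a minimal 1D finite-difference sketch of the undamped inverse iteration $u^{n+1} \propto A(u^n)^{-1} u^n$ with $A(u) = -\Delta + V + \beta |u|^2$; the grid, trapping potential, and interaction strength are illustrative assumptions rather than the discretization analyzed in the paper.

```python
import numpy as np

# Grid, harmonic trap, and interaction strength (illustrative choices).
N, L, beta = 200, 10.0, 50.0
x = np.linspace(-L / 2, L / 2, N)
h = x[1] - x[0]
lap = (np.diag(np.full(N - 1, 1.0), -1) - 2 * np.eye(N)
       + np.diag(np.full(N - 1, 1.0), 1)) / h**2
V = 0.5 * x**2

u = np.exp(-x**2)                    # initial guess
u /= np.sqrt(h) * np.linalg.norm(u)  # normalize in the discrete L2 norm

for _ in range(200):
    A = -lap + np.diag(V + beta * u**2)   # operator frozen at the current iterate
    u = np.linalg.solve(A, u)             # inverse iteration step (no spectral shift)
    u /= np.sqrt(h) * np.linalg.norm(u)   # project back to the unit L2 sphere

energy = h * (0.5 * u @ (-lap) @ u + V @ u**2 + 0.5 * beta * np.sum(u**4))
print("approximate ground-state energy:", energy)
```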

Most of the popular dependence measures for two random variables $X$ and $Y$ (such as Pearson's and Spearman's correlation, Kendall's $\tau$ and Gini's $\gamma$) vanish whenever $X$ and $Y$ are independent. However, a vanishing dependence measure does not necessarily imply independence, nor does a measure equal to 1 imply that one variable is a measurable function of the other. Yet both are natural properties for a convincing dependence measure. In this paper, we present a general approach to transforming a given dependence measure into a new one which exactly characterizes independence as well as functional dependence. Our approach uses the concept of monotone rearrangements introduced by Hardy and Littlewood and is applicable to a broad class of measures. In particular, we are able to define a rearranged Spearman's $\rho$ and a rearranged Kendall's $\tau$ which attain the value $0$ if and only if the two variables are independent, and the value $1$ if and only if one variable is a measurable function of the other. We also present simple estimators for the rearranged dependence measures, prove their consistency, and illustrate their finite sample properties by means of a simulation study and a data example.

We investigate how to efficiently compute the difference of the results of two (or multiple) conjunctive queries, which is the last operator in relational algebra to be unraveled. The standard approach in practical database systems is to materialize the result of every input query as a separate set and then compute the difference of the two (or multiple) sets. This approach is bottlenecked by the complexity of evaluating every input query individually, which can be very expensive, particularly when there are only a few results in the difference. In this paper, we introduce a new approach that exploits the structural properties of the input queries and rewrites the original query by pushing the difference operator down as far as possible. We show that for a large class of difference queries, this approach leads to a linear-time algorithm in terms of the input size and (final) output size, i.e., the number of query results that survive the difference operator. We complement this result by showing the hardness of computing the remaining difference queries in linear time. Although a linear-time algorithm is hard to achieve in general, we also provide heuristics that provably improve the standard approach. Finally, we compare our approach with standard SQL engines over graph and benchmark datasets. The experimental results demonstrate order-of-magnitude speedups achieved by our approach over vanilla SQL.
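
To illustrate the idea of pushing the difference operator down on a toy instance (not the paper's general algorithm): for Q1(x,y) = R(x,y) ∧ S(y) and Q2(x,y) = R(x,y) ∧ T(y), which share the atom R(x,y), the difference Q1 - Q2 equals R(x,y) ∧ S(y) ∧ ¬T(y), so the set difference can be taken on S and T before a single join with R.

```python
# Toy relations (illustrative data).
R = {(1, "a"), (2, "a"), (3, "b"), (4, "c")}
S = {"a", "b"}
T = {"b", "c"}

# Standard approach: materialize both query results, then subtract the sets.
q1 = {(x, y) for (x, y) in R if y in S}
q2 = {(x, y) for (x, y) in R if y in T}
standard = q1 - q2

# Pushed-down approach: difference on the differing atoms first, then a single join.
pushed = {(x, y) for (x, y) in R if y in (S - T)}

assert standard == pushed
print(sorted(standard))   # [(1, 'a'), (2, 'a')]
```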

We analyze the convergence of the $k$-opinion Undecided State Dynamics (USD) in the population protocol model. For $k=2$ opinions it is well known that the USD reaches consensus with high probability within $O(n \log n)$ interactions. Proving that the process also quickly solves the consensus problem for $k>2$ opinions has remained open, despite analogous results for larger $k$ in the related parallel gossip model. In this paper we prove such convergence: under mild assumptions on $k$ and on the initial number of undecided agents we prove that the USD achieves plurality consensus within $O(k n \log n)$ interactions with high probability, regardless of the initial bias. Moreover, if there is an initial additive bias of at least $\Omega(\sqrt{n} \log n)$ we prove that the initial plurality opinion wins with high probability, and if there is a multiplicative bias the convergence time is further improved. Note that this is the first result for $k > 2$ for the USD in the population protocol model. Furthermore, it is the first result for the unsynchronized variant of the USD with $k>2$ which does not need any initial bias.
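
A minimal simulation of one common formulation of the dynamics may help fix ideas; the interaction rule below (undecided initiators adopt the responder's opinion, clashing opinions make the initiator undecided) and the initial configuration are assumptions for illustration, not the exact protocol analyzed in the paper.

```python
import random

def usd_consensus(counts, seed=0):
    """Simulate until every agent holds the same opinion; return (winner, interactions).

    `counts` maps opinion -> number of agents; 0 denotes the undecided state.
    """
    rng = random.Random(seed)
    agents = [op for op, c in counts.items() for _ in range(c)]
    n, interactions = len(agents), 0
    while len({op for op in agents if op != 0}) > 1 or 0 in agents:
        i, j = rng.sample(range(n), 2)   # random ordered pair: initiator i, responder j
        interactions += 1
        if agents[i] == 0 and agents[j] != 0:
            agents[i] = agents[j]        # undecided initiator adopts the responder's opinion
        elif agents[i] != 0 and agents[j] != 0 and agents[i] != agents[j]:
            agents[i] = 0                # clash of opinions: the initiator becomes undecided
    return agents[0], interactions

winner, steps = usd_consensus({1: 600, 2: 300, 3: 100})
print(winner, steps)
```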

Vessel transit in ice-covered waters poses unique challenges in safe and efficient motion planning. When the concentration of ice is high, it may not be possible to find collision-free trajectories. Instead, ice can be pushed out of the way if it is small or if contact occurs near the edge of the ice. In this work, we propose a real-time navigation framework that minimizes collisions with ice and distance travelled by the vessel. We exploit a lattice-based planner with a cost that captures the ship interaction with ice. To address the dynamic nature of the environment, we plan motion in a receding horizon manner based on updated vessel and ice state information. Further, we present a novel planning heuristic for evaluating the cost-to-go, which is applicable to navigation in a channel without a fixed goal location. The performance of our planner is evaluated across several levels of ice concentration both in simulated and in real-world experiments.
