
Branch-and-bound-based consensus maximization stands out due to its ability to retrieve the globally optimal solution to outlier-affected geometric problems. However, while the discovery of such solutions carries high scientific value, their application in practical scenarios is often prohibited by a computational complexity that grows exponentially with the dimensionality of the problem at hand. In this work, we present a novel, general technique that allows us to branch over an $(n-1)$-dimensional space for an $n$-dimensional problem. The remaining degree of freedom can be solved globally optimally within each bound calculation by applying the efficient interval stabbing technique. While each individual bound derivation is harder to compute owing to the additional need to solve a sorting problem, the reduced number of intervals and the tighter bounds obtained in practice lead to a significant reduction in the overall number of required iterations. Besides an abstract introduction of the approach, we present applications to three fundamental geometric computer vision problems: camera resectioning, relative camera pose estimation, and point set registration. Through our exhaustive tests, we demonstrate significant speed-up factors, at times exceeding two orders of magnitude, thereby increasing the viability of globally optimal consensus maximizers in online application scenarios.
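
As an aside, the bound computation above relies on interval stabbing: given a set of 1-D intervals, find a value contained in ("stabbing") the largest possible number of them. The following is a minimal sketch of that generic primitive via endpoint sorting and a sweep; it illustrates the sorting step mentioned above and is not the paper's actual bound derivation.

```python
# Minimal sketch of 1-D interval stabbing: find a value that lies inside
# the maximum number of closed intervals.  Generic illustration only.

def max_interval_stabbing(intervals):
    """intervals: iterable of (lo, hi) pairs with lo <= hi.
    Returns (best_count, stabbing_point)."""
    events = []
    for lo, hi in intervals:
        events.append((lo, 0))   # interval opens (opens sort before closes at ties)
        events.append((hi, 1))   # interval closes
    events.sort()                # the sorting step dominates the cost: O(k log k)

    best, cur, best_point = 0, 0, None
    for x, kind in events:
        if kind == 0:
            cur += 1
            if cur > best:
                best, best_point = cur, x
        else:
            cur -= 1
    return best, best_point

# Example: the point 2 stabs three of the four intervals below.
print(max_interval_stabbing([(0, 3), (1, 2), (2, 5), (4, 6)]))  # (3, 2)
```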

Related Content

With the recent focus on autonomous aerial refueling (AAR), it has become increasingly urgent to design efficient methods and algorithms that solve AAR problems in complicated aerial environments. Apart from complex aerodynamic disturbances, another problem is the pose estimation error caused by camera calibration, installation, or 3D object modeling errors, which may preclude highly accurate docking. The main objective of the effort described in this paper is the implementation of an image-based visual servo (IBVS) control method, comprising the establishment of an IBVS model involving the receiver's dynamics and the design of the corresponding controller. Simulation results indicate that the proposed method enables successful docking under complicated conditions and improves robustness against pose estimation error.
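
For context, the classical image-based visual servo law for point features computes a camera velocity command $v = -\lambda L^{+} e$ from the stacked interaction matrix $L$ and the feature error $e$. The sketch below shows this textbook law only; it is not the receiver-dynamics-aware controller designed in the paper, and the gain and feature/depth values are placeholders.

```python
import numpy as np

# Minimal sketch of the classical IBVS law v = -lambda * L^+ * e for point
# features.  Textbook control law; the gain and the values below are placeholders.

def interaction_matrix(x, y, Z):
    """2x6 interaction (image Jacobian) matrix of one normalized point feature."""
    return np.array([
        [-1.0 / Z, 0.0,      x / Z,  x * y,      -(1.0 + x**2), y],
        [0.0,      -1.0 / Z, y / Z,  1.0 + y**2, -x * y,        -x],
    ])

def ibvs_velocity(features, desired, depths, lam=0.5):
    """Camera velocity command (vx, vy, vz, wx, wy, wz) driving e = s - s* to zero."""
    e = (features - desired).reshape(-1)                       # stacked feature error
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])    # stacked Jacobian
    return -lam * np.linalg.pinv(L) @ e

# Placeholder example: three point features slightly off their desired locations.
s      = np.array([[0.10, 0.05], [-0.12, 0.04], [0.02, -0.11]])
s_star = np.array([[0.08, 0.03], [-0.10, 0.02], [0.00, -0.10]])
print(ibvs_velocity(s, s_star, depths=[2.0, 2.0, 2.0]))
```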

Graph neural networks (GNNs) are powerful tools for exploring and learning from graph structures and features. As such, achieving high-performance execution for GNNs becomes crucially important. Prior works have proposed to exploit the sparsity (i.e., low density) of the input graph to accelerate GNNs, using full-graph-level or block-level sparsity formats. We show that they fail to balance the sparsity benefit and kernel execution efficiency. In this paper, we propose a novel system, referred to as AdaptGear, that addresses the challenge of optimizing GNN performance by leveraging kernels tailored to the density characteristics at the subgraph level. We also propose a method that dynamically chooses the optimal set of kernels for a given input graph. Our evaluation shows that AdaptGear can achieve a significant performance improvement, up to $6.49 \times$ ($1.87 \times$ on average), over the state-of-the-art works on two mainstream NVIDIA GPUs across various datasets.
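
To make the subgraph-level idea concrete, the sketch below dispatches each row block of the adjacency matrix to a dense or a sparse (CSR) kernel depending on its density during a single aggregation step $AH$. The 25% threshold, the block size, and the kernels themselves are placeholders and do not reflect AdaptGear's tuned kernels or its dynamic selection method.

```python
import numpy as np
import scipy.sparse as sp

# Density-aware kernel dispatch for one SpMM (A @ H) step of GNN aggregation:
# each row-block of the adjacency matrix uses a dense or a sparse (CSR) kernel
# depending on its density.  Threshold and block size are placeholders.

def blockwise_spmm(adj_csr, features, block_rows=1024, dense_threshold=0.25):
    n = adj_csr.shape[0]
    out = np.zeros((n, features.shape[1]), dtype=features.dtype)
    for start in range(0, n, block_rows):
        stop = min(start + block_rows, n)
        block = adj_csr[start:stop]                       # row-block of the graph
        density = block.nnz / (block.shape[0] * block.shape[1])
        if density >= dense_threshold:
            out[start:stop] = block.toarray() @ features  # dense kernel
        else:
            out[start:stop] = block @ features            # CSR SpMM kernel
    return out

# Placeholder example: a random sparse graph with 16-dimensional node features.
adj = sp.random(4096, 4096, density=0.01, format="csr")
h = np.random.rand(4096, 16)
print(blockwise_spmm(adj, h).shape)  # (4096, 16)
```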

In many applications, the process of identifying a specific feature of interest often involves testing multiple hypotheses for their joint statistical significance. Examples include mediation analysis, which simultaneously examines the existence of the exposure-mediator and the mediator-outcome effects, and replicability analysis, which aims to identify simultaneous signals that exhibit statistical significance across multiple independent experiments. In this study, we present a new approach, called the joint mirror (JM) procedure, that effectively detects such features while maintaining false discovery rate (FDR) control in finite samples. The JM procedure employs an iterative method that gradually shrinks the rejection region based on progressively revealed information until a conservative estimate of the false discovery proportion (FDP) falls below the target FDR level. Additionally, we introduce a more stringent error measure, the modified FDR (mFDR), which weights each false discovery by its number of null components. We demonstrate that, under appropriate assumptions, the JM procedure controls the mFDR in finite samples. To implement the JM procedure, we propose an efficient algorithm that can incorporate partial ordering information. Through extensive simulations, we demonstrate that our procedure effectively controls the mFDR and enhances statistical power across various scenarios. Finally, we showcase the utility of our method by applying it to real-world mediation and replicability analyses.
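
For intuition, the sketch below shows a generic one-dimensional mirror-statistic selection rule in the spirit of knockoff/mirror methods: statistics that are symmetric under the null are thresholded at the smallest level whose conservative FDP estimate falls below the target. This only illustrates the mirror idea; the JM procedure itself operates on several hypothesis components jointly and shrinks a multivariate rejection region.

```python
import numpy as np

# Generic one-dimensional mirror-statistic selection rule: reject W_j >= t at
# the smallest t with (1 + #{W_j <= -t}) / max(1, #{W_j >= t}) <= q.
# NOT the joint mirror (JM) procedure itself.

def mirror_select(W, q=0.1):
    W = np.asarray(W, dtype=float)
    thresholds = np.sort(np.abs(W[W != 0]))
    for t in thresholds:
        fdp_hat = (1 + np.sum(W <= -t)) / max(1, np.sum(W >= t))
        if fdp_hat <= q:
            return np.flatnonzero(W >= t)   # indices of rejected hypotheses
    return np.array([], dtype=int)          # no threshold meets the target level

# Placeholder example: 900 null statistics (symmetric around 0) and 100 signals.
rng = np.random.default_rng(0)
W = np.concatenate([rng.normal(0, 1, 900), rng.normal(4, 1, 100)])
print(len(mirror_select(W, q=0.1)), "rejections")
```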

In this article, we propose a novel formulation for the resource allocation problem of a sliced and disaggregated Radio Access Network (RAN) and its transport network. Our proposal assures an end-to-end delay bound for the Ultra-Reliable and Low-Latency Communication (URLLC) use case while jointly considering the number of admitted users, the transmission rate allocation per slice, the functional split of RAN nodes, and the routing paths in the transport network. We use deterministic network calculus theory to calculate the delay along the transport network connecting disaggregated RANs that deploy network functions at the Radio Unit (RU), Distributed Unit (DU), and Central Unit (CU) nodes. The maximum end-to-end delay is a constraint in the optimization-based formulation that aims to maximize Mobile Network Operator (MNO) profit, considering a cash flow analysis to model revenue and operational costs using data from one of the world's leading MNOs. The optimization model leverages a Flexible Functional Split (FFS) approach to provide a new degree of freedom to the resource allocation strategy. Simulation results reveal that, due to its non-linear nature, there is no trivial solution to the proposed optimization problem formulation. Our proposal guarantees a maximum delay for URLLC services while satisfying minimal bandwidth requirements for enhanced Mobile BroadBand (eMBB) services and maximizing the MNO's profit.
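
For reference, deterministic network calculus yields closed-form delay bounds such as the following: a flow with token-bucket arrival curve $\alpha(t) = b + rt$ crossing a chain of hops, each offering a rate-latency service curve $\beta_i(t) = R_i (t - T_i)^+$, experiences a delay of at most $\sum_i T_i + b/\min_i R_i$ ("pay bursts only once"). The sketch below evaluates this standard bound; the RU/DU/CU chain and all numbers are placeholders rather than the paper's model.

```python
# Deterministic network calculus end-to-end delay bound for a token-bucket flow
# over a chain of rate-latency servers: D <= sum(T_i) + b / min(R_i).
# Illustration only; the actual RU/DU/CU model and numbers are placeholders.

def e2e_delay_bound(burst_b, rate_r, hops):
    """hops: list of (R_i, T_i) with R_i in Mbit/s and T_i in ms; burst in Mbit."""
    min_rate = min(R for R, _ in hops)
    assert rate_r <= min_rate, "flow rate must not exceed the bottleneck service rate"
    total_latency = sum(T for _, T in hops)
    return total_latency + 1000.0 * burst_b / min_rate   # result in ms

# Placeholder example: a 0.1 Mbit burst, 50 Mbit/s flow over three hops (e.g. RU->DU->CU).
print(e2e_delay_bound(burst_b=0.1, rate_r=50.0,
                      hops=[(1000.0, 0.05), (1000.0, 0.10), (500.0, 0.20)]))
```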

We study a family of adversarial multiclass classification problems and provide equivalent reformulations in terms of: 1) a family of generalized barycenter problems introduced in the paper and 2) a family of multimarginal optimal transport (MOT) problems where the number of marginals is equal to the number of classes in the original classification problem. These new theoretical results reveal a rich geometric structure of adversarial learning problems in multiclass classification and extend recent results restricted to the binary classification setting. A direct computational implication of our results is that, by solving either the barycenter problem and its dual or the MOT problem and its dual, we can recover the optimal robust classification rule and the optimal adversarial strategy for the original adversarial problem. Examples with synthetic and real data illustrate our results.

The prefix aggregation operation (also called scan), and its particular case, prefix summation, is an important parallel primitive that enjoys considerable attention in the research literature. It also appears as a step in many algorithms. Aggregation over dominated points in $\mathbb{R}^m$ is a multidimensional generalisation of prefix aggregation. It is also intensively researched, both as a parallel primitive and as a practical problem encountered in computational geometry, spatial databases and data warehouses. In this paper we show that, for a constant dimension $m$, aggregation over dominated points in $\mathbb{R}^m$ can be computed by $O(1)$ basic operations that include sorting the whole dataset, zipping sorted lists of elements, computing prefix aggregations of lists of elements, and flat maps, which expand the data size from the initial $n$ to $n\log^{m-1}n$. Thereby we establish that prefix aggregation suffices to express aggregation over dominated points in more dimensions, even though the latter is a far-reaching generalisation of the former. Many problems known to be expressible by aggregation over dominated points become expressible by prefix aggregation, too. We rely on a small set of primitive operations which guarantees an easy transfer to various distributed architectures and some desired properties of the implementation.
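
The base case $m = 1$ of this reduction is easy to make explicit: after sorting by the single coordinate, aggregating over all dominated points is exactly an inclusive prefix aggregation with an associative operator. The sketch below shows only this base case; the zipping and flat-map steps used for higher dimensions, which expand the data to $n\log^{m-1}n$ elements, are not reproduced.

```python
from operator import add

# m = 1 base case: "aggregate over all points dominated by p" is a sort
# followed by an inclusive prefix aggregation (scan).  Sketch only.

def prefix_aggregate(values, op=add):
    """Inclusive scan: out[i] = values[0] op values[1] op ... op values[i]."""
    out, acc = [], None
    for v in values:
        acc = v if acc is None else op(acc, v)
        out.append(acc)
    return out

def dominated_aggregation_1d(points, op=add):
    """points: list of (coordinate, value).  Returns, per point, the aggregate
    of the values of all points whose coordinate is <= its own."""
    order = sorted(range(len(points)), key=lambda i: points[i][0])
    sums = prefix_aggregate([points[i][1] for i in order], op)
    result = [None] * len(points)
    for rank, i in enumerate(order):
        result[i] = sums[rank]
    return result

pts = [(3.0, 10), (1.0, 1), (2.0, 5)]
print(dominated_aggregation_1d(pts))  # [16, 1, 6]
```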

Two-player graph games have found numerous applications, most notably the synthesis of reactive systems from temporal specifications, where they are successfully used to generate finite-state systems. Due to their relevance for practical applications, reactive synthesis of infinite-state systems, and hence the need for techniques to solve infinite-state games, have attracted attention in recent years. We propose novel semi-algorithms for solving infinite-state games with $\omega$-regular winning conditions. The novelty lies in the utilization of an acceleration technique that helps avoid divergence. The key idea is to enhance the game-solving algorithm with the ability to use combinations of simple inductive arguments in order to summarize unbounded iterations of strategic decisions in the game. This enables our method to solve games on which state-of-the-art techniques diverge, as we demonstrate in the evaluation of a prototype implementation.

Constrained $k$-submodular maximization is a general framework that captures many discrete optimization problems such as ad allocation, influence maximization, personalized recommendation, and many others. In many of these applications, datasets are large or decisions need to be made in an online manner, which motivates the development of efficient streaming and online algorithms. In this work, we develop single-pass streaming and online algorithms for constrained $k$-submodular maximization with both monotone and general (possibly non-monotone) objectives subject to cardinality and knapsack constraints. Our algorithms achieve provable constant-factor approximation guarantees which improve upon the state of the art in almost all settings. Moreover, they are combinatorial and very efficient, and have optimal space and running time. We experimentally evaluate our algorithms on instances for ad allocation and other applications, where we observe that our algorithms are efficient and scalable, and construct solutions that are comparable in value to offline greedy algorithms.
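
To illustrate the streaming flavour of such algorithms, the sketch below shows a generic single-pass threshold rule for monotone submodular (not $k$-submodular) maximization under a cardinality constraint: an arriving element is kept if its marginal gain clears a threshold and the budget is not exhausted. The threshold is a placeholder; in practice it is guessed from a running estimate of the optimum, as in sieve-streaming, and this is not the authors' algorithm.

```python
# Generic single-pass threshold rule for monotone submodular maximization under
# a cardinality constraint.  Illustration only; not the authors' k-submodular
# algorithm, and tau is a placeholder threshold.

def threshold_streaming(stream, f, k, tau):
    """stream: iterable of elements; f: monotone set function; k: budget."""
    S, value = set(), f(set())
    for e in stream:
        if len(S) >= k:
            break
        gain = f(S | {e}) - value          # marginal gain of e w.r.t. current S
        if gain >= tau:
            S.add(e)
            value += gain
    return S, value

# Placeholder example: coverage, a classic monotone submodular function.
sets = {"a": {1, 2}, "b": {2, 3}, "c": {4}, "d": {1, 2, 3, 4}}

def coverage(S):
    covered = set()
    for e in S:
        covered |= sets[e]
    return len(covered)

print(threshold_streaming(stream="abcd", f=coverage, k=2, tau=1.0))
```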

Deep neural networks (DNNs) have demonstrated extraordinary capabilities and are an integral part of modern software systems. However, they also suffer from various vulnerabilities such as adversarial attacks and unfairness. Testing deep learning (DL) systems to detect and mitigate those vulnerabilities is therefore an important task. Motivated by the success of traditional software testing, which often employs diversity heuristics, various diversity measures on DNNs have been proposed to help efficiently expose their buggy behavior. In this work, we argue that many DNN testing tasks should be treated as directed testing problems rather than general-purpose testing tasks, because these tasks are specific and well-defined, and hence the diversity-based approach is less effective for them. Following our argument based on the semantics of DNNs and the testing goal, we derive $6$ metrics that can be used for DNN testing and carefully analyze their application scopes. We empirically show their efficacy in exposing bugs in DNNs compared to recent diversity-based metrics. Moreover, we notice discrepancies between the practices of the software engineering (SE) community and the DL community. We point out some of these gaps in the hope that this leads to bridging SE practice and DL findings.

The current approach to connected and autonomous driving function development and evaluation uses model-in-the-loop simulation, hardware-in-the-loop simulation, and limited proving ground work, followed by public-road deployment of beta versions of the software and technology. In this approach, the rest of the road users are involuntarily forced to take part in the development and evaluation of these connected and autonomous driving functions. This is an unsafe, costly and inefficient method. Motivated by these shortcomings, this paper introduces the Vehicle-in-Virtual-Environment (VVE) method for safe, efficient and low-cost connected and autonomous driving function development, evaluation and demonstration. The VVE method is compared to the existing state of the art. Its basic implementation for a path-following task is used to explain the method: the actual autonomous vehicle operates in a large empty area while its sensor feeds are replaced by realistic sensor feeds corresponding to its location and pose in the virtual environment. The development virtual environment can easily be changed, and rare and difficult events can be injected and tested very safely. Vehicle-to-Pedestrian (V2P) communication-based pedestrian safety is chosen as the application use case for VVE, and corresponding experimental results are presented and discussed. It is noted that actual pedestrians and other vulnerable road users can be used very safely in this approach.
