
The dispersion relation describes the dependence of wave frequency on the wave vector as a wave passes through a given material. It reveals the properties of that material and is therefore of critical importance. However, reconstructing the dispersion relation is time consuming and expensive. To address this bottleneck, we propose in this paper an efficient dispersion relation reconstruction scheme based on global polynomial interpolation for the approximation of 2D photonic band functions. Our method relies on the fact that the band functions are piecewise analytic with respect to the wave vector in the first Brillouin zone. We select suitable sampling points in the first Brillouin zone, solve the eigenvalue problem involved in the band function calculation at these points, and then employ Lagrange interpolation to approximate the band functions over the whole first Brillouin zone. Numerical results show that the proposed method significantly improves computational efficiency.
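
As a rough illustration of the interpolation step (not the paper's full 2D scheme), the sketch below approximates a hypothetical smooth band function along one direction of the Brillouin zone from a handful of Chebyshev samples, using SciPy's barycentric form of Lagrange interpolation; in practice each sample would come from solving the photonic eigenvalue problem at that wave vector.

```python
import numpy as np
from scipy.interpolate import BarycentricInterpolator

# Hypothetical smooth band function along one segment of the first Brillouin
# zone; in practice each sample is an expensive eigenvalue solve.
def band_function(k):
    return np.sqrt(1.0 + 0.5 * np.cos(np.pi * k))

# Chebyshev sampling points on [-1, 1] keep global polynomial interpolation stable.
n = 12
k_nodes = np.cos(np.pi * (2 * np.arange(n + 1) + 1) / (2 * (n + 1)))
omega_nodes = band_function(k_nodes)

# Lagrange interpolant (barycentric form) evaluated on a fine wave-vector grid.
interp = BarycentricInterpolator(k_nodes, omega_nodes)
k_fine = np.linspace(-1.0, 1.0, 1000)
max_err = np.max(np.abs(interp(k_fine) - band_function(k_fine)))
print(f"max interpolation error: {max_err:.2e}")
```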

Related Content

Low Reynolds number fluid flows are governed by the Stokes equations. In two dimensions, Stokes flows can be described by two analytic functions, known as Goursat functions. Brubeck and Trefethen (2022) recently introduced a lightning Stokes solver that uses rational functions to approximate the Goursat functions in polygonal domains. In this paper, we present a solver for computing 2D Stokes flows in domains with smooth boundaries, as well as in multiply connected domains, using lightning and AAA rational approximation (Nakatsukasa et al., 2018). This leads to a new rational approximation algorithm, "LARS", that is suitable for a wide range of bounded 2D Stokes flow problems. After validating our solver against known analytical solutions, we solve a variety of 2D Stokes flow problems with physical and engineering applications. The computations take less than a second and give solutions with at least 6-digit accuracy.
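
For context, the following is a minimal sketch of the core AAA iteration of Nakatsukasa et al. (2018), not the LARS solver itself: support points are chosen greedily and the barycentric weights come from the smallest singular vector of the Loewner matrix, demonstrated here on a simple real function.

```python
import numpy as np

def aaa(F, Z, tol=1e-13, mmax=100):
    """Greedy AAA rational approximation of samples F at points Z.

    Returns support points zj, values fj and barycentric weights wj.
    """
    Z = np.asarray(Z, dtype=complex)
    F = np.asarray(F, dtype=complex)
    J = np.arange(len(Z))                 # indices of points not yet used as support
    zj, fj = [], []
    wj = np.array([], dtype=complex)
    R = np.full_like(F, F.mean())
    for _ in range(mmax):
        j = J[np.argmax(np.abs(F[J] - R[J]))]            # point of largest residual
        zj.append(Z[j]); fj.append(F[j])
        J = J[J != j]
        C = 1.0 / (Z[J, None] - np.array(zj)[None, :])   # Cauchy matrix
        A = F[J, None] * C - C * np.array(fj)[None, :]   # Loewner matrix
        _, _, Vh = np.linalg.svd(A, full_matrices=False)
        wj = Vh[-1].conj()                               # weights: smallest singular vector
        R = F.copy()
        R[J] = (C @ (wj * np.array(fj))) / (C @ wj)      # barycentric rational at other points
        if np.max(np.abs(F[J] - R[J])) <= tol * np.max(np.abs(F)):
            break
    return np.array(zj), np.array(fj), wj

def reval(z, zj, fj, wj):
    """Evaluate the barycentric rational approximant at points z."""
    z = np.asarray(z, dtype=complex)
    C = 1.0 / (z[:, None] - zj[None, :])
    r = (C @ (wj * fj)) / (C @ wj)
    for k, zk in enumerate(zj):                          # patch exact support-point hits (0/0)
        r[np.isclose(z, zk, rtol=0.0, atol=1e-14)] = fj[k]
    return r

Z = np.linspace(-1.0, 1.0, 500)
zj, fj, wj = aaa(np.tanh(5 * Z), Z)
Zt = np.linspace(-0.97, 0.97, 731)
print("support points:", len(zj))
print("max error:", np.max(np.abs(reval(Zt, zj, fj, wj) - np.tanh(5 * Zt))))
```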

Gaussianization is a simple generative model that can be trained without backpropagation. It has shown compelling performance on low-dimensional data. As the dimension increases, however, it has been observed that the convergence speed slows down. We show analytically that the number of required layers scales linearly with the dimension for Gaussian input. We argue that this is because the model is unable to capture dependencies between dimensions. Empirically, we find the same linear increase in cost for arbitrary input $p(x)$, but observe favorable scaling for some distributions. We explore potential speed-ups and formulate challenges for further research.
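
A minimal sketch of one Gaussianization layer is shown below, assuming a rank-based marginal transform (rather than a trainable parametric CDF) composed with a random rotation; repeated layers push a toy 2D distribution toward a standard normal.

```python
import numpy as np
from scipy import stats

def gaussianize_layer(x, rng):
    """One Gaussianization layer: random rotation, then marginal Gaussianization.

    Each marginal is mapped through its empirical CDF and the standard normal
    inverse CDF, so every dimension becomes approximately N(0, 1).
    """
    n, d = x.shape
    q, _ = np.linalg.qr(rng.standard_normal((d, d)))   # random orthogonal rotation
    x = x @ q
    ranks = np.argsort(np.argsort(x, axis=0), axis=0) + 1
    u = ranks / (n + 1)                                # empirical CDF values in (0, 1)
    return stats.norm.ppf(u)                           # probit transform per dimension

rng = np.random.default_rng(0)
# toy non-Gaussian 2D data: a noisy ring
theta = rng.uniform(0, 2 * np.pi, 5000)
x = np.stack([np.cos(theta), np.sin(theta)], axis=1) + 0.1 * rng.standard_normal((5000, 2))

for _ in range(10):
    x = gaussianize_layer(x, rng)

# after several layers the sample should look roughly standard normal
print("mean:", x.mean(axis=0))
print("covariance:\n", np.cov(x.T))
```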

In this paper, we focus on non-conservative obstacle avoidance between robots with control-affine dynamics and strictly convex or polytopic shapes. The core challenge in this obstacle avoidance problem is that the minimum distance between strictly convex regions or polytopes is generally implicit and non-smooth, so distance constraints cannot be enforced directly in the optimization problem. To handle this challenge, we employ non-smooth control barrier functions to reformulate the avoidance problem in the dual space, with the positivity of the minimum distance between robots expressed equivalently through a quadratic program. Our approach is proven to guarantee system safety. We theoretically analyze the smoothness properties of the minimum-distance quadratic program and its KKT conditions. We validate our approach by demonstrating computationally efficient obstacle avoidance for multi-agent robotic systems with strictly convex and polytopic shapes. To the best of our knowledge, this is the first time a real-time QP problem has been formulated for general non-conservative avoidance between strictly convex shapes and polytopes.
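
To make the minimum-distance quadratic program concrete, here is a small sketch using CVXPY (an assumption; any QP solver works) for two hypothetical polytopes described by half-space constraints; the constraint duals are the quantities a dual-space formulation reasons about.

```python
import cvxpy as cp
import numpy as np

# Hypothetical polytopes {x : A x <= b} for two planar robots.
A1 = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
b1 = np.array([1.0, 1.0, 1.0, 1.0])        # unit box centred at the origin
A2 = A1.copy()
b2 = np.array([4.0, -2.0, 1.0, 1.0])       # unit box centred at x = 3

x = cp.Variable(2)   # closest point in robot 1
y = cp.Variable(2)   # closest point in robot 2
prob = cp.Problem(
    cp.Minimize(cp.sum_squares(x - y)),
    [A1 @ x <= b1, A2 @ y <= b2],
)
prob.solve()
print(f"minimum distance: {np.sqrt(prob.value):.3f}")   # 1.0 for these boxes
# The dual variables of the two constraints are what a dual-space CBF
# formulation would reason about when enforcing positivity of the distance.
print("duals:", prob.constraints[0].dual_value, prob.constraints[1].dual_value)
```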

Multi-object tracking algorithms have made significant advancements due to recent developments in object detection. However, most existing methods primarily focus on tracking pedestrians or vehicles, which exhibit relatively simple and regular motion patterns. Consequently, there is a scarcity of algorithms that address the tracking of targets with irregular or non-linear motion, such as multi-athlete tracking. Furthermore, popular tracking algorithms often rely on the Kalman filter for object motion modeling, which fails to track objects whose motion contradicts the linear motion assumption of the Kalman filter. For this reason, we propose a novel, robust online multi-object tracking approach named Iterative Scale-Up ExpansionIoU and Deep Features for multi-object tracking. Unlike conventional methods, we abandon the Kalman filter and instead use the iterative scale-up expansion IoU. This approach achieves superior tracking performance without requiring additional training data or a more robust detector, all while maintaining a lower computational cost than other appearance-based methods. Our proposed method demonstrates remarkable effectiveness in tracking objects with irregular motion, achieving a HOTA score of 75.3%. It outperforms all state-of-the-art online tracking algorithms on the SportsMOT dataset, which covers a variety of sports scenarios.
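
The snippet below sketches one plausible form of an expansion IoU and an iterative scale-up loop; the box-expansion rule and the scale schedule are illustrative assumptions, not necessarily the paper's exact choices.

```python
import numpy as np

def expand_box(box, scale):
    """Expand a [x1, y1, x2, y2] box about its centre by `scale`."""
    x1, y1, x2, y2 = box
    cx, cy = (x1 + x2) / 2, (y1 + y2) / 2
    w, h = (x2 - x1) * scale, (y2 - y1) * scale
    return np.array([cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2])

def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def expansion_iou(track_box, det_box, scale=1.2):
    """IoU between expanded boxes, tolerating fast, non-linear motion."""
    return iou(expand_box(track_box, scale), expand_box(det_box, scale))

# An iterative scale-up loop might retry unmatched tracks with a larger expansion.
track, det = np.array([100, 100, 150, 200]), np.array([160, 105, 210, 205])
for scale in (1.0, 1.2, 1.4, 1.6):
    print(scale, round(expansion_iou(track, det, scale), 3))
```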

Video frame interpolation (VFI) is one of the fundamental research areas in video processing, and there has been extensive research on novel and enhanced interpolation algorithms. The same is not true for quality assessment of the interpolated content. In this paper, we describe a subjective quality study for VFI based on a newly developed video database, BVI-VFI. BVI-VFI contains 36 reference sequences at three different frame rates and 180 distorted videos generated using five conventional and learning-based VFI algorithms. Subjective opinion scores have been collected from 60 human participants and then employed to evaluate eight popular quality metrics, including PSNR, SSIM and LPIPS, which are all commonly used for assessing VFI methods. The results indicate that none of these metrics provide acceptable correlation with the perceived quality of interpolated content, with the best-performing metric, LPIPS, offering an SROCC value below 0.6. Our findings show that there is an urgent need to develop a bespoke perceptual quality metric for VFI. The BVI-VFI dataset is publicly available and can be accessed at //danier97.github.io/BVI-VFI/.
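
Evaluating a metric in such a study typically reduces to correlating objective scores with mean opinion scores; the sketch below uses SciPy's Spearman and Pearson correlations on hypothetical score arrays (for lower-is-better metrics such as LPIPS, the sign of the SROCC is usually dropped).

```python
import numpy as np
from scipy.stats import spearmanr, pearsonr

# Hypothetical arrays with one entry per distorted video in a BVI-VFI-style study:
# `mos` would hold the mean opinion scores from the subjective test and `metric`
# the objective scores (e.g. PSNR, SSIM or LPIPS) for the same clips.
rng = np.random.default_rng(0)
mos = rng.uniform(1, 5, size=180)
metric = 0.6 * mos + rng.normal(0, 1.0, size=180)   # a weakly correlated metric

srocc, _ = spearmanr(metric, mos)   # rank correlation: monotonic agreement
plcc, _ = pearsonr(metric, mos)     # linear correlation (often after a logistic fit)
print(f"SROCC = {srocc:.3f}, PLCC = {plcc:.3f}")
```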

The assessment of iris uniqueness plays a crucial role in analyzing the capabilities and limitations of iris recognition systems. Among the various methodologies proposed, Daugman's approach to iris uniqueness stands out as one of the most widely accepted. According to Daugman, uniqueness refers to the iris recognition system's ability to enroll an increasing number of classes while maintaining a near-zero probability of collision between new and enrolled classes. Daugman's approach involves creating distinct IrisCode templates for each iris class within the system and evaluating the sustainable population under a fixed Hamming distance between codewords. In our previous work [23], we utilized Rate-Distortion Theory (as it pertains to the limits of error-correction codes) to establish bounds on the maximum possible population of iris classes supported by Daugman's IrisCode, given the constraint of a fixed Hamming distance between codewords. Building upon that research, we propose a novel methodology to evaluate the scalability of an iris recognition system while also measuring iris quality. We achieve this by employing a sphere-packing bound for Gaussian codewords and adopting an approach similar to Daugman's, which utilizes relative entropy as a distance measure between iris classes. To demonstrate the efficacy of our methodology, we illustrate its application on two small datasets of iris images. We determine the sustainable maximum population for each dataset based on the quality of the images. By providing these illustrations, we aim to assist researchers in comprehending the limitations inherent in their recognition systems, depending on the quality of their iris databases.
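
As a small illustration of relative entropy as a distance between iris classes (not the full sphere-packing analysis), the sketch below computes the closed-form KL divergence between two Gaussian models fitted to hypothetical feature vectors.

```python
import numpy as np

def gaussian_kl(mu0, cov0, mu1, cov1):
    """Relative entropy D(N(mu0, cov0) || N(mu1, cov1)) in nats."""
    d = mu0.shape[0]
    cov1_inv = np.linalg.inv(cov1)
    diff = mu1 - mu0
    _, logdet0 = np.linalg.slogdet(cov0)
    _, logdet1 = np.linalg.slogdet(cov1)
    return 0.5 * (np.trace(cov1_inv @ cov0) + diff @ cov1_inv @ diff - d
                  + logdet1 - logdet0)

# Hypothetical per-class Gaussian models fitted to iris feature vectors.
rng = np.random.default_rng(1)
d = 8
class_a = rng.normal(0.0, 1.0, (200, d))
class_b = rng.normal(0.5, 1.2, (200, d))
kl = gaussian_kl(class_a.mean(0), np.cov(class_a.T),
                 class_b.mean(0), np.cov(class_b.T))
print(f"relative entropy between the two class models: {kl:.2f} nats")
```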

We present a novel approach based on sparse Gaussian processes (SGPs) to address the sensor placement problem for monitoring spatially (or spatiotemporally) correlated phenomena such as temperature and precipitation. Existing Gaussian process (GP) based sensor placement approaches use GPs with known kernel function parameters to model a phenomenon and subsequently optimize the sensor locations in a discretized representation of the environment. In our approach, we fit an SGP with known kernel function parameters to randomly sampled unlabeled locations in the environment and show that the learned inducing points of the SGP inherently solve the sensor placement problem in continuous spaces. Using SGPs avoids discretizing the environment and reduces the computation cost from cubic to linear complexity. When restricted to a candidate set of sensor placement locations, we can use greedy sequential selection algorithms on the SGP's optimization bound to find good solutions. We also present an approach to efficiently map our continuous space solutions to discrete solution spaces using the assignment problem, which gives us discrete sensor placements optimized in unison. Moreover, we generalize our approach to model sensors with non-point field-of-view and integrated observations by leveraging the inherent properties of GPs and SGPs. Our experimental results on three real-world datasets show that our approaches generate sensor placements whose reconstruction quality is consistently on par with or better than the prior state-of-the-art approach, while being significantly faster. Our computationally efficient approaches will enable both large-scale sensor placement and fast sensor placement for informative path planning problems.
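
A minimal sketch of the inducing-point idea is given below, assuming GPflow; the paper fits the SGP to unlabeled locations with fixed kernel parameters, so here the kernel and likelihood are frozen and synthetic observations of a toy field stand in for that step.

```python
import numpy as np
import gpflow

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 10.0, (500, 2))                   # random environment locations
Y = np.sin(X[:, :1]) * np.cos(X[:, 1:]) + 0.05 * rng.standard_normal((500, 1))

num_sensors = 10
Z0 = X[rng.choice(len(X), num_sensors, replace=False)]  # initial inducing points

model = gpflow.models.SGPR(
    data=(X, Y),
    kernel=gpflow.kernels.SquaredExponential(lengthscales=1.0),
    inducing_variable=Z0.copy(),
)
# Keep the kernel and likelihood fixed ("known" hyperparameters) so that only
# the inducing point locations are optimized.
gpflow.utilities.set_trainable(model.kernel, False)
gpflow.utilities.set_trainable(model.likelihood, False)

# Optimizing the evidence lower bound moves the inducing points to informative
# locations; those are read off as the continuous-space sensor placements.
gpflow.optimizers.Scipy().minimize(model.training_loss, model.trainable_variables)
sensor_locations = model.inducing_variable.Z.numpy()
print(sensor_locations)
```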

In this paper, we propose an efficient quadratic interpolation formula utilizing solution gradients computed and stored at nodes, and demonstrate its application to a third-order cell-centered finite-volume discretization on tetrahedral grids. The proposed quadratic formula is constructed based on an efficient formula for computing a projected derivative. It is efficient in that it completely eliminates the need to compute and store second derivatives of solution variables or any other quantities, which are typically required when upgrading a second-order cell-centered unstructured-grid finite-volume discretization to third-order accuracy. Moreover, the high-order flux quadrature formula required for third-order accuracy can also be simplified by utilizing the efficient projected-derivative formula, resulting in a numerical flux at a face centroid plus a curvature correction that does not involve second derivatives of the flux. Similarly, a source term can be integrated over a cell to high order in the form of a source term evaluated at the cell centroid plus a curvature correction, again without requiring second derivatives of the source term. The discretization is defined as an approximation to an integral form of a conservation law, but the numerical solution is defined as a point value at a cell center, leading to another feature: there is no need to compute and store geometric moments for a quadratic polynomial to preserve a cell average. Third-order accuracy and improved second-order accuracy are demonstrated and investigated for simple but illustrative test cases in three dimensions.
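
The following generic sketch (not the paper's specific formula) shows the flavor of interpolating along an edge using endpoint values and gradients projected onto the edge direction; the Hermite-type midpoint formula used here is exact for quadratic solutions.

```python
import numpy as np

def projected_derivative(grad, edge_vec):
    """Directional derivative of the solution along the (non-unit) edge vector."""
    t = edge_vec / np.linalg.norm(edge_vec)
    return grad @ t

def midpoint_value(u1, u2, g1, g2, length):
    """Hermite-type midpoint value from endpoint values and projected derivatives."""
    return 0.5 * (u1 + u2) + length * (g1 - g2) / 8.0

# Test on u(x, y, z) = x^2 + 2 y^2 + 3 z^2 along an arbitrary edge.
u = lambda p: p[0] ** 2 + 2 * p[1] ** 2 + 3 * p[2] ** 2
grad = lambda p: np.array([2 * p[0], 4 * p[1], 6 * p[2]])

x1, x2 = np.array([0.2, 0.1, 0.4]), np.array([0.9, 0.5, 0.3])
e = x2 - x1
um = midpoint_value(u(x1), u(x2),
                    projected_derivative(grad(x1), e),
                    projected_derivative(grad(x2), e),
                    np.linalg.norm(e))
print(um, u(0.5 * (x1 + x2)))   # the two values agree for quadratic solutions
```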

Human-robot collaboration (HRC) is a key component of flexible manufacturing that meets the different needs of customers. However, it is difficult to build intelligent robots that can proactively assist humans in a safe and efficient way, due to several challenges. First, it is challenging to achieve efficient collaboration due to diverse human behaviors and data scarcity. Second, it is difficult to ensure interactive safety due to uncertainty in human behaviors. This paper presents an integrated framework for proactive HRC. A robust intention prediction module, which leverages prior task information and human-in-the-loop training, is learned to guide the robot toward efficient collaboration. The proposed framework also uses robust safe control to ensure interactive safety under uncertainty. The developed framework is applied to a co-assembly task using a Kinova Gen3 robot. The experiment demonstrates that our solution is robust to environmental changes as well as different human preferences and behaviors. In addition, it improves task efficiency by approximately 15-20%. Moreover, the experiment demonstrates that our solution can guarantee interactive safety during proactive collaboration.
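
As a rough sketch of the safe-control idea (a generic control-barrier-style filter, not the paper's robust safe controller, which additionally accounts for uncertainty in human motion), the following projects a nominal velocity command onto a distance-keeping constraint.

```python
import numpy as np

def safety_filter(u_nom, p_robot, p_human, v_human, d_min=0.3, alpha=2.0):
    """Minimal safety filter for a velocity-controlled robot.

    Enforces hdot + alpha * h >= 0 for the barrier h = ||p_r - p_h||^2 - d_min^2,
    modifying the nominal command only when the constraint would be violated.
    """
    diff = p_robot - p_human
    h = diff @ diff - d_min ** 2
    a = 2.0 * diff                            # constraint: a @ u >= b
    b = -alpha * h + 2.0 * diff @ v_human
    if a @ u_nom >= b:
        return u_nom                          # nominal command is already safe
    # closed-form projection onto the half-space {u : a @ u >= b}
    return u_nom + (b - a @ u_nom) / (a @ a) * a

u_safe = safety_filter(u_nom=np.array([0.5, 0.0]),
                       p_robot=np.array([0.0, 0.0]),
                       p_human=np.array([0.6, 0.0]),
                       v_human=np.array([-0.2, 0.0]))
print(u_safe)   # the command toward the human is scaled back
```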

Semantic reconstruction of indoor scenes refers to both scene understanding and object reconstruction. Existing works either address one part of this problem or focus on independent objects. In this paper, we bridge the gap between understanding and reconstruction, and propose an end-to-end solution to jointly reconstruct room layout, object bounding boxes and meshes from a single image. Instead of separately resolving scene understanding and object reconstruction, our method builds upon a holistic scene context and proposes a coarse-to-fine hierarchy with three components: 1. room layout with camera pose; 2. 3D object bounding boxes; 3. object meshes. We argue that understanding the context of each component can assist the task of parsing the others, which enables joint understanding and reconstruction. The experiments on the SUN RGB-D and Pix3D datasets demonstrate that our method consistently outperforms existing methods in indoor layout estimation, 3D object detection and mesh reconstruction.
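
A structural sketch of such a coarse-to-fine, multi-head design is given below, with hypothetical module names and output sizes; it only illustrates the three-component decomposition, not the paper's architecture.

```python
import torch
import torch.nn as nn

class JointSceneModel(nn.Module):
    """Structural sketch of a pipeline that jointly predicts room layout with
    camera pose, 3D object bounding boxes, and object meshes from one image."""

    def __init__(self, feat_dim=256, max_objects=10):
        super().__init__()
        self.backbone = nn.Sequential(                 # stand-in image encoder
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.layout_head = nn.Linear(feat_dim, 7 + 6)              # layout box + camera pose
        self.box_head = nn.Linear(feat_dim, max_objects * 7)       # per-object 3D boxes
        self.mesh_head = nn.Linear(feat_dim, max_objects * 3 * 642)  # coarse vertex offsets

    def forward(self, image):
        f = self.backbone(image)
        layout = self.layout_head(f)
        boxes = self.box_head(f)
        meshes = self.mesh_head(f)                     # finer stages would refine these
        return layout, boxes, meshes

model = JointSceneModel()
layout, boxes, meshes = model(torch.randn(1, 3, 256, 256))
print(layout.shape, boxes.shape, meshes.shape)
```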
