
In the context of simulation-based methods, multiple challenges arise, two of which are considered in this work. The first challenge is to appropriately handle time-dependent phenomena with complex domain deformations, potentially even with changes in the domain topology. The second challenge arises when computational resources and the time for evaluating the model become critical in so-called many-query scenarios for parametric problems, which occur, for example, in optimization, uncertainty quantification (UQ), or automatic control; there, using highly resolved full-order models (FOMs) may become impractical. To address both types of complexity, we present a novel projection-based model order reduction (MOR) approach for deforming-domain problems that takes advantage of the time-continuous space-time formulation. We apply it to two examples relevant to engineering and biomedical applications and conduct an error and performance analysis. In both cases, we are able to drastically reduce the computational expense of a model evaluation while maintaining an adequate level of accuracy. All in all, this work demonstrates the effectiveness of the presented MOR approach for deforming-domain problems in a time-continuous space-time setting.
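For context, here is a minimal POD-Galerkin sketch of projection-based MOR on a steady parametric toy problem, with an offline snapshot/basis stage and a cheap online stage. This is not the authors' space-time formulation; the operators, right-hand side, and sizes are illustrative assumptions.

```python
# Generic sketch of projection-based MOR (POD-Galerkin) for a steady
# parametric toy problem. NOT the authors' space-time method; A0, A1,
# b, and all sizes are illustrative assumptions.
import numpy as np

def pod_basis(snapshots, r):
    """Left singular vectors of the snapshot matrix give the POD basis."""
    U, _, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :r]

n = 400
A0 = np.diag(np.linspace(1.0, 10.0, n))   # toy full-order operators
A1 = np.eye(n)
b = np.ones(n)

# Offline: solve the FOM for a few training parameters, build the basis.
train = np.linspace(0.1, 2.0, 10)
snaps = np.column_stack([np.linalg.solve(A0 + mu * A1, b) for mu in train])
V = pod_basis(snaps, r=5)

def rom_solve(mu):
    """Online: Galerkin-project onto V and solve the small r x r system."""
    Ar = V.T @ (A0 + mu * A1) @ V
    return V @ np.linalg.solve(Ar, V.T @ b)

u_approx = rom_solve(0.7)   # cheap many-query evaluation
```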

Related content

Clustering multidimensional points is a fundamental data mining task, with applications in many fields, such as astronomy, neuroscience, bioinformatics, and computer vision. The goal of clustering algorithms is to group similar objects together. Density-based clustering is a clustering approach that defines clusters as dense regions of points. It has the advantage of being able to detect clusters of arbitrary shapes, rendering it useful in many applications. In this paper, we propose fast parallel algorithms for Density Peaks Clustering (DPC), a popular version of density-based clustering. Existing exact DPC algorithms suffer from low parallelism both in theory and in practice, which limits their application to large-scale data sets. Our most performant algorithm, which is based on priority search kd-trees, achieves $O(\log n\log\log n)$ span (parallel time complexity) for a data set of $n$ points. Our algorithm is also work-efficient, achieving a work complexity matching the best existing sequential exact DPC algorithm. In addition, we present another DPC algorithm based on a Fenwick tree that makes fewer assumptions for its average-case complexity to hold. We provide optimized implementations of our algorithms and evaluate their performance via extensive experiments. On a 30-core machine with two-way hyperthreading, we find that our best algorithm achieves a 10.8--13169x speedup over the previous best parallel exact DPC algorithm. Compared to the state-of-the-art parallel approximate DPC algorithm, our best algorithm achieves a 1.5--4206x speedup, while being exact.
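For readers unfamiliar with DPC, the following brute-force sketch computes the two exact quantities the algorithm needs per point: the density rho and the distance delta to the nearest higher-density (dependent) point. The paper's contribution is computing these with work-efficient parallel kd-tree and Fenwick-tree algorithms; this $O(n^2)$ version, with an assumed cutoff radius, only illustrates the definitions.

```python
# Brute-force sketch of the exact DPC quantities rho (density) and
# delta (distance to the nearest higher-density point). O(n^2), for
# illustration only; the cutoff radius d_cut is an assumed parameter.
import numpy as np

def dpc_rho_delta(points, d_cut):
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    rho = (d < d_cut).sum(axis=1) - 1            # neighbors within d_cut
    delta = np.full(len(points), np.inf)
    parent = np.full(len(points), -1)
    for i in range(len(points)):
        higher = np.where(rho > rho[i])[0]       # higher-density candidates
        if higher.size:
            j = higher[np.argmin(d[i, higher])]  # nearest one = dependent point
            delta[i], parent[i] = d[i, j], j
    return rho, delta, parent

pts = np.random.default_rng(1).random((200, 2))
rho, delta, parent = dpc_rho_delta(pts, d_cut=0.1)
# Cluster centers are points with both large rho and large delta.
```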

Cloud, fog, and edge computing integration with future mobile Internet-of-Things (IoT) devices and related applications in 5G/6G networks will become more practical in the coming years. Containers have become the de facto virtualization technique, largely replacing virtual machines (VMs). Mobile IoT applications incorporating fog-edge computing, e.g., intelligent transportation and augmented reality, have increased the demand for millisecond-scale response and processing times. Edge computing reduces remote network traffic and latency, but these services must run on edge nodes that are physically close to devices. However, classical migration techniques may not meet the requirements of future mission-critical IoT applications. IoT mobile devices have limited resources for running multiple services, and client-server latency worsens when fog-edge services must migrate to maintain proximity in light of device mobility. This study analyzes the performance of the MiGrror migration method and the pre-copy live migration method when the migration of multiple VMs/containers is considered. This paper presents mathematical models for the stated methods and provides migration guidelines and comparisons for services implemented as multiple containers, as in microservice-based environments. Experiments demonstrate that MiGrror outperforms the pre-copy technique and, unlike conventional live migration, can maintain less than 10 milliseconds of downtime and reduce migration time with minimal bandwidth overhead. The results show that MiGrror can improve service continuity and availability for users. Most significantly, the model can use average as well as non-average values for different parameters during migration, achieving improved and more accurate results, whereas other research typically uses only average values; the paper shows that relying only on average parameter values can lead to inaccurate results.
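To make the role of per-round (non-average) parameters concrete, here is a hedged, textbook-style sketch of an iterative pre-copy model in which each round retransmits the memory dirtied during the previous round. The formulas, rates, and sizes are illustrative assumptions, not the paper's MiGrror or pre-copy models.

```python
# Hedged sketch of an iterative pre-copy live-migration model: each
# round resends the memory dirtied during the previous round, and the
# downtime is the final stop-and-copy transfer. Textbook-style
# assumption, not the paper's exact models. Units: MB and MB/s.
def precopy_migration(mem_mb, dirty_rates, bandwidth):
    total_time, to_send = 0.0, mem_mb
    for rate in dirty_rates:                  # one entry per pre-copy round
        round_time = to_send / bandwidth
        total_time += round_time
        to_send = rate * round_time           # memory dirtied this round
    downtime = to_send / bandwidth            # final stop-and-copy
    return total_time + downtime, downtime

# Per-round (non-average) rates vs. their average give different predictions:
rates = [80.0, 200.0, 40.0]
avg = [sum(rates) / len(rates)] * len(rates)
print(precopy_migration(1024, rates, 1000.0))
print(precopy_migration(1024, avg, 1000.0))
```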

In this work, we explore the application of multilinear algebra in reducing the order of multidimensional linear time-invariant (MLTI) systems. We use tensor Krylov subspace methods as key tools, which involve approximating the system solution within a low-dimensional subspace. We introduce the tensor extended block and global Krylov subspaces and the corresponding Arnoldi-based processes. Using these methods, we develop a model order reduction approach based on projection techniques. We also show how these methods can be used to solve the large-scale Lyapunov tensor equations needed in the balanced truncation method, another order-reduction technique. We demonstrate how to extract approximate solutions via the Einstein product using the tensor extended block Arnoldi and the extended global Arnoldi processes.
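Since the tensor processes mirror their matrix counterparts, the following matrix-case block Arnoldi sketch shows the underlying recursion; the tensor extended block and global variants replace matrix products with the Einstein product. The test matrices are illustrative assumptions.

```python
# Matrix-case sketch of a block Arnoldi process; the paper's tensor
# variants generalize this recursion under the Einstein product.
import numpy as np

def block_arnoldi(A, V1, m):
    """Orthonormal basis of span{V1, A V1, ..., A^(m-1) V1}."""
    Q, _ = np.linalg.qr(V1)
    blocks = [Q]
    for _ in range(m - 1):
        W = A @ blocks[-1]
        for Qj in blocks:                 # block Gram-Schmidt step
            W = W - Qj @ (Qj.T @ W)
        Q, _ = np.linalg.qr(W)
        blocks.append(Q)
    return np.hstack(blocks)              # n x (m*s) orthonormal basis

A = np.diag(np.arange(1.0, 101.0))                        # toy operator
V1 = np.random.default_rng(2).standard_normal((100, 2))   # starting block
V = block_arnoldi(A, V1, m=5)             # basis for projection-based MOR
```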

Interpolation-based methods are well-established and effective approaches for the efficient generation of accurate reduced-order surrogate models. Common challenges for such methods are the automatic selection of good or even optimal interpolation points and the appropriate size of the reduced-order model. An approach that addresses the first problem for linear, unstructured systems is the Iterative Rational Krylov Algorithm (IRKA), which computes optimal interpolation points through iterative updates by solving linear eigenvalue problems. However, in the case of preserving internal system structures, optimal interpolation points are unknown, and heuristics based on nonlinear eigenvalue problems result in numbers of potential interpolation points that typically exceed the reasonable size of reduced-order systems. In our work, we propose a projection-based iterative interpolation method inspired by IRKA for generally structured systems to adaptively compute near-optimal interpolation points as well as an appropriate size for the reduced-order system. Additionally, the iterative updates of the interpolation points can be chosen such that the reduced-order model provides an accurate approximation in specified frequency ranges of interest. For such applications, our new approach outperforms the established methods in terms of accuracy and computational effort. We show this in numerical examples with different structures.
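As background for the interpolation-point updates, here is a sketch of the classic unstructured IRKA fixed-point iteration for a SISO system (A, b, c): build interpolation bases at the current points, project, and re-center the points at the mirrored reduced poles. The structured, adaptive method proposed in the paper differs; the symmetric test system below is an illustrative assumption.

```python
# Sketch of the classic (unstructured) IRKA fixed point iteration for a
# SISO system (A, b, c). The stable symmetric test system keeps the
# interpolation points real; it is an illustrative assumption.
import numpy as np

def irka(A, b, c, r, iters=50):
    n = len(A)
    sigma = np.linspace(1.0, 10.0, r).astype(complex)   # initial points
    for _ in range(iters):
        V = np.column_stack([np.linalg.solve(s * np.eye(n) - A, b) for s in sigma])
        W = np.column_stack([np.linalg.solve(s * np.eye(n) - A.T, c) for s in sigma])
        V, _ = np.linalg.qr(V)
        W, _ = np.linalg.qr(W)
        Ar = np.linalg.solve(W.T @ V, W.T @ A @ V)      # Petrov-Galerkin
        sigma_new = np.sort_complex(-np.linalg.eigvals(Ar))
        if np.allclose(sigma_new, np.sort_complex(sigma)):
            break                                       # points converged
        sigma = sigma_new
    return Ar

A = -np.diag(np.arange(1.0, 51.0))   # stable, symmetric toy system
Ar = irka(A, np.ones(50), np.ones(50), r=4)
```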

Motion planning seeks a collision-free path in a configuration space (C-space), representing all possible robot configurations in the environment. As it is challenging to construct a C-space explicitly for a high-dimensional robot, we generally build a graph structure called a roadmap, a discrete approximation of a complex continuous C-space, to reason about connectivity. Checking collision-free connectivity in the roadmap requires expensive edge-evaluation computations, and thus, reducing the number of evaluations has become a significant research objective. However, in practice, we often face infeasible problems: those in which there is no collision-free path in the roadmap between the start and the goal locations. Existing studies often overlook the possibility of infeasibility, becoming highly inefficient by performing many edge evaluations. In this work, we address this oversight in scenarios where a prior roadmap is available; that is, each edge of the roadmap is annotated with the probability, learned from past experience, that it is collision-free. To this end, we propose an algorithm called iterative path and cut finding (IPC) that iteratively searches for a path and a cut in a prior roadmap to detect infeasibility while reducing expensive edge evaluations as much as possible. We further improve the efficiency of IPC by introducing a second algorithm, iterative decomposition and path and cut finding (IDPC), that leverages the fact that cut-finding algorithms partition the roadmap into smaller subgraphs. We analyze the theoretical properties of IPC and IDPC, such as completeness and computational complexity, and evaluate their performance in terms of completion time and the number of edge evaluations in large-scale simulations.
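Here is a minimal sketch of the path-then-evaluate loop that underlies such lazy roadmap planning: repeatedly extract the most-probable path under the prior, evaluate only its unchecked edges, and prune blocked ones; if no path remains, the removed edges contain a cut certifying infeasibility. IPC's explicit cut search is omitted, and the toy graph and collision checker are assumptions.

```python
# Lazy roadmap-planning sketch on a prior roadmap whose edges carry
# 'p', the prior probability of being collision-free. IPC additionally
# searches for cuts to certify infeasibility early; that is omitted.
import math
import networkx as nx

def lazy_plan(G, source, target, collision_check):
    for u, v, d in G.edges(data=True):
        d["w"] = -math.log(d["p"])        # most-probable path = shortest path
    while True:
        try:
            path = nx.shortest_path(G, source, target, weight="w")
        except nx.NetworkXNoPath:
            return None                   # blocked edges form a separating cut
        blocked = [(u, v) for u, v in zip(path, path[1:])
                   if not collision_check(u, v)]
        if not blocked:
            return path                   # every edge verified collision-free
        G.remove_edges_from(blocked)

G = nx.Graph()
G.add_edge("s", "a", p=0.9); G.add_edge("a", "t", p=0.6)
G.add_edge("s", "t", p=0.2)
print(lazy_plan(G, "s", "t", lambda u, v: (u, v) != ("a", "t")))
```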

Uncertainty quantification for inverse problems in imaging has drawn much attention lately. Existing approaches towards this task define uncertainty regions based on probable values per pixel, while ignoring spatial correlations within the image, resulting in an exaggerated volume of uncertainty. In this paper, we propose PUQ (Principal Uncertainty Quantification) -- a novel definition and corresponding analysis of uncertainty regions that takes into account spatial relationships within the image, thus providing reduced-volume regions. Using recent advancements in stochastic generative models, we derive uncertainty intervals around principal components of the empirical posterior distribution, forming an ambiguity region that guarantees the inclusion of true unseen values with a user-specified confidence probability. To improve computational efficiency and interpretability, we also guarantee the recovery of true unseen values using only a few principal directions, resulting in ultimately more informative uncertainty regions. Our approach is verified through experiments on image colorization, super-resolution, and inpainting; its effectiveness is shown through comparison to baseline methods, demonstrating significantly tighter uncertainty regions.
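The following is an illustrative sketch of uncertainty intervals along principal directions of posterior samples: project draws onto the top-k principal components and take per-direction quantiles. PUQ's calibrated coverage guarantees are not reproduced, and the random draws merely stand in for generative-model samples.

```python
# Sketch: per-direction quantile intervals along the top principal
# components of posterior draws. Illustrative only; no calibrated
# coverage guarantee as in PUQ, and the draws are stand-in data.
import numpy as np

def principal_intervals(samples, k, alpha=0.1):
    """samples: (m, d) flattened posterior draws for one image."""
    mean = samples.mean(axis=0)
    X = samples - mean
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    P = Vt[:k]                          # top-k principal directions
    coords = X @ P.T                    # (m, k) coefficients
    lo = np.quantile(coords, alpha / 2, axis=0)
    hi = np.quantile(coords, 1 - alpha / 2, axis=0)
    return mean, P, lo, hi              # axis-aligned box in PC space

draws = np.random.default_rng(3).standard_normal((256, 64))
mean, P, lo, hi = principal_intervals(draws, k=5)
```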

In this work, we propose a new stochastic domain decomposition method for solving steady-state partial differential equations (PDEs) with random inputs. Based on the efficiency of the Variable-separation (VS) method in simulating stochastic partial differential equations (SPDEs), we extend it to stochastic algebraic systems and apply it to stochastic domain decomposition. The resulting Stochastic Domain Decomposition based on the Variable-separation method (SDD-VS) effectively addresses the "curse of dimensionality" by leveraging the explicit representation of stochastic functions derived from physical systems. The SDD-VS method aims to obtain a separated representation of the solution for the stochastic interface problem. To enhance efficiency, an offline-online computational decomposition is introduced. In the offline phase, the affine representation of stochastic algebraic systems is obtained through the successive application of the VS method. This serves as a crucial foundation for the SDD-VS method. In the online phase, the interface unknowns of SPDEs are estimated using a quasi-optimal separated representation, enabling the construction of efficient surrogate models for subproblems. The effectiveness of the proposed method is demonstrated via the numerical results of three concrete examples.
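Here is a hedged sketch of the offline-online idea for an affinely parametrized system $A(\xi)=\sum_q \theta_q(\xi) A_q$: the terms $A_q$ are projected once offline, so each online sample only assembles small matrices. The VS method constructs such separated representations; the SDD-VS interface treatment is not reproduced, and all matrices below are illustrative.

```python
# Offline-online split for an affinely parametrized stochastic system
# A(xi) = sum_q theta_q(xi) * A_q. Illustrative matrices; not the
# paper's SDD-VS interface construction.
import numpy as np

rng = np.random.default_rng(4)
n, r, Q = 400, 8, 3
A_terms = [np.diag(rng.random(n) + q + 1.0) for q in range(Q)]  # affine terms A_q
V, _ = np.linalg.qr(rng.standard_normal((n, r)))                # reduced basis (offline)
A_red = [V.T @ Aq @ V for Aq in A_terms]                        # projected once, offline
b_red = V.T @ np.ones(n)

def online_solve(xi):
    """Assemble and solve the small reduced system for one sample xi."""
    Ar = sum(theta * Aq for theta, Aq in zip(xi, A_red))        # cheap: r x r
    return V @ np.linalg.solve(Ar, b_red)

u = online_solve(rng.random(Q))   # one of many cheap online queries
```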

Answer Set Programming with Quantifiers, ASP(Q), extends Answer Set Programming (ASP) to allow for declarative and modular modeling of problems from the entire polynomial hierarchy. The first implementation of ASP(Q), called qasp, was based on a translation to Quantified Boolean Formulae (QBF) with the aim of exploiting the well-developed and mature QBF-solving technology. However, the QBF encoding employed in qasp is very general and might produce formulas that are hard to evaluate for existing QBF solvers because of the large number of symbols and sub-clauses. In this paper, we present a new implementation that builds on the ideas of qasp and features both a more efficient encoding procedure and new optimized encodings of ASP(Q) programs in QBF. The new encodings produce smaller formulas (in terms of the number of quantifiers, variables, and clauses) and result in a more efficient evaluation process. An algorithm selection strategy automatically combines several QBF-solving back-ends to further increase performance. An experimental analysis, conducted on known benchmarks, shows that the new system outperforms qasp.
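To make concrete why formula size matters, here is a tiny expansion-based QBF evaluator: each quantified variable doubles the work, so encodings with fewer quantifiers, variables, and clauses are cheaper to evaluate. This is a didactic sketch only, unrelated to the qasp encoding itself; the example formula is an assumption.

```python
# Tiny recursive (expansion-based) QBF evaluator: every quantified
# variable doubles the work, which is why compact encodings help.
# Didactic sketch only; real QBF back-ends are far more sophisticated.
def eval_qbf(prefix, matrix, assignment=None):
    """prefix: list of ('forall'|'exists', var); matrix: CNF as a list
    of clauses, each a list of signed ints; assignment: var -> bool."""
    assignment = dict(assignment or {})
    if not prefix:
        return all(any(assignment[abs(l)] == (l > 0) for l in clause)
                   for clause in matrix)
    (q, v), rest = prefix[0], prefix[1:]
    results = (eval_qbf(rest, matrix, {**assignment, v: val})
               for val in (False, True))
    return all(results) if q == "forall" else any(results)

# forall x exists y: (x or y) and (not x or not y)  ->  True (y = not x)
print(eval_qbf([("forall", 1), ("exists", 2)],
               [[1, 2], [-1, -2]]))
```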

This paper proposes a method for learning continuous control policies for active landmark localization and exploration using an information-theoretic cost. We consider a mobile robot detecting landmarks within a limited sensing range, and tackle the problem of learning a control policy that maximizes the mutual information between the landmark states and the sensor observations. We employ a Kalman filter to convert the partially observable problem in the landmark state into a Markov decision process (MDP), a differentiable field of view to shape the reward, and an attention-based neural network to represent the control policy. The approach is further unified with active volumetric mapping to promote exploration in addition to landmark localization. The performance is demonstrated in several simulated landmark localization tasks in comparison with benchmark methods.
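As a hedged illustration of the information-theoretic cost: for a linear-Gaussian landmark model, the mutual information between the landmark state and a measurement equals the entropy drop across a Kalman update, $I = \frac{1}{2}\log(\det \Sigma_{\text{prior}} / \det \Sigma_{\text{post}})$. The matrices below are illustrative assumptions; the paper's differentiable field of view and learned policy are not reproduced.

```python
# Kalman update plus a mutual-information reward for a linear-Gaussian
# landmark model. Illustrative matrices; not the paper's full pipeline.
import numpy as np

def kalman_update(mu, Sigma, H, R, z):
    S = H @ Sigma @ H.T + R                       # innovation covariance
    K = Sigma @ H.T @ np.linalg.inv(S)            # Kalman gain
    mu_new = mu + K @ (z - H @ mu)
    Sigma_new = (np.eye(len(mu)) - K @ H) @ Sigma
    return mu_new, Sigma_new

def mutual_information(Sigma_prior, Sigma_post):
    """Entropy reduction: 0.5 * log(det(prior) / det(post))."""
    return 0.5 * (np.linalg.slogdet(Sigma_prior)[1]
                  - np.linalg.slogdet(Sigma_post)[1])

Sigma = np.eye(2) * 4.0                           # uncertain 2-D landmark
H = np.eye(2); R = np.eye(2) * 0.5                # in-range, noisy sensor
_, Sigma_post = kalman_update(np.zeros(2), Sigma, H, R, np.array([0.3, -0.1]))
print(mutual_information(Sigma, Sigma_post))      # information gained
```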

Unsupervised domain adaptation has recently emerged as an effective paradigm for generalizing deep neural networks to new target domains. However, there is still enormous potential to be tapped to reach fully supervised performance. In this paper, we present a novel active learning strategy to assist knowledge transfer in the target domain, dubbed active domain adaptation. We start from the observation that energy-based models exhibit free-energy biases when training (source) and test (target) data come from different distributions. Inspired by this inherent mechanism, we empirically reveal that a simple yet efficient energy-based sampling strategy is more effective at selecting the most valuable target samples than existing approaches that require particular architectures or distance computations. Our algorithm, Energy-based Active Domain Adaptation (EADA), queries groups of target data that incorporate both domain characteristics and instance uncertainty into every selection round. Meanwhile, by aligning the free energy of target data compactly around that of the source domain via a regularization term, the domain gap can be implicitly diminished. Through extensive experiments, we show that EADA surpasses state-of-the-art methods on well-known challenging benchmarks with substantial improvements, making it a useful option in the open world. Code is available at //github.com/BIT-DA/EADA.
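A simplified sketch of energy-based query selection: the free energy of a classifier output is $-T\log\sum_k \exp(f_k(x)/T)$, and the target samples with the highest free energy (least like confident source predictions) are queried. EADA's actual score combines domain and instance terms; the logits below are stand-ins.

```python
# Simplified energy-based query selection: score target samples by the
# free energy of their logits and query the highest-scoring ones.
# Stand-in logits; not EADA's exact combined domain/instance score.
import numpy as np

def free_energy(logits, T=1.0):
    """F(x) = -T * logsumexp(logits / T), computed stably."""
    g = logits / T
    m = g.max(axis=1, keepdims=True)
    return -T * (m.squeeze(1) + np.log(np.exp(g - m).sum(axis=1)))

def select_queries(target_logits, budget):
    scores = free_energy(target_logits)
    return np.argsort(scores)[-budget:]   # highest free energy first

logits = np.random.default_rng(5).standard_normal((1000, 10))
query_idx = select_queries(logits, budget=32)   # indices to annotate
```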
