
We present a distributed algebra system for efficient and compact implementation of numerical time integration schemes on parallel computers and graphics processing units (GPUs). The software implementation combines the time integration library Odeint from Boost with the OpenFPM framework for scalable scientific computing. Implementing multi-stage, multi-step, or adaptive time integration methods in distributed-memory parallel codes or on GPUs is challenging. The present algebra system addresses this by making the time integration methods from Odeint available in a concise template-expression language for numerical simulations distributed and parallelized using OpenFPM. This allows state-of-the-art time integration schemes to be used, or switched between, by changing a single line of code, while maintaining parallel scalability. The result is scalable time integration with compact code that facilitates rapid rewriting and deployment of simulation algorithms. We benchmark the present software on exponential and sigmoidal dynamics and demonstrate an application to the 3D Gray-Scott reaction-diffusion problem on both CPUs and GPUs in only 60 lines of code.
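As a minimal sketch of what the one-line scheme switch looks like with plain Boost.Odeint on a serial `std::vector` state (the OpenFPM-distributed vector type and algebra from the paper are not shown here; this is only the underlying Odeint pattern):

```cpp
#include <boost/numeric/odeint.hpp>
#include <vector>

using state_type = std::vector<double>;

// Right-hand side of dx/dt = -x (exponential decay test problem).
void rhs(const state_type &x, state_type &dxdt, double /*t*/) {
    dxdt[0] = -x[0];
}

int main() {
    namespace ode = boost::numeric::odeint;
    state_type x{1.0};

    // Switching the time integration scheme means changing this single
    // line, e.g. to ode::runge_kutta_cash_karp54<state_type>.
    ode::runge_kutta4<state_type> stepper;

    // Fixed-step integration from t = 0 to t = 10 with dt = 0.01.
    ode::integrate_const(stepper, rhs, x, 0.0, 10.0, 0.01);
    return 0;
}
```

The paper's contribution is to make exactly this pattern work when `state_type` is a domain-decomposed OpenFPM data structure on a cluster or GPU.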

Related Content

Integration, the VLSI Journal. Publisher: Elsevier.

In numerical linear algebra, considerable effort has been devoted to obtaining faster algorithms for linear systems whose underlying matrices exhibit structural properties. A prominent success story is the method of generalized nested dissection~[Lipton-Rose-Tarjan'79] for separable matrices. On the other hand, the majority of recent developments in the design of efficient linear program (LP) solvers do not leverage the ideas underlying these faster linear system solvers, nor do they consider the separable structure of the constraint matrix. We give a faster algorithm for separable linear programs. Specifically, we consider LPs of the form $\min_{\mathbf{A}\mathbf{x}=\mathbf{b}, \mathbf{l}\leq\mathbf{x}\leq\mathbf{u}} \mathbf{c}^\top\mathbf{x}$, where the graphical support of the constraint matrix $\mathbf{A} \in \mathbb{R}^{n\times m}$ is $O(n^\alpha)$-separable. These include flow problems on planar graphs and LPs with low-treewidth constraint matrices, among others. We present an $\tilde{O}((m+m^{1/2 + 2\alpha}) \log(1/\epsilon))$ time algorithm for these LPs, where $\epsilon$ is the relative accuracy of the solution. Our new solver has two important implications: for the $k$-multicommodity flow problem on planar graphs, we obtain an algorithm running in $\tilde{O}(k^{5/2} m^{3/2})$ time in the high-accuracy regime; and when the support of $\mathbf{A}$ is $O(n^\alpha)$-separable with $\alpha \leq 1/4$, our algorithm runs in $\tilde{O}(m)$ time, which is nearly optimal. The latter significantly improves upon the natural approach of combining interior point methods and nested dissection, whose time complexity is lower bounded by $\Omega(\sqrt{m}(m+m^{\alpha\omega}))=\Omega(m^{3/2})$, where $\omega$ is the matrix multiplication exponent. Lastly, in the setting of low-treewidth LPs, we recover the results of [DLY, STOC'21] and [GS'22] with significantly simpler data structure machinery.
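The near-linear claim follows directly from the stated bound: for $\alpha \leq 1/4$,

$$ \frac{1}{2} + 2\alpha \leq 1 \quad\Longrightarrow\quad m^{1/2+2\alpha} \leq m, $$

so the running time $\tilde{O}((m+m^{1/2+2\alpha})\log(1/\epsilon))$ collapses to $\tilde{O}(m\log(1/\epsilon))$, i.e. $\tilde{O}(m)$ with the accuracy factor absorbed.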

The ability to rapidly test a variety of algorithms for an arbitrary state estimation task is valuable in the prototyping phase of navigation systems. Lie group theory is now mainstream in the robotics community, and hence estimation prototyping tools should allow state definitions that belong to manifolds. A new package, called navlie, provides a framework that allows a user to model a large class of problems by implementing a set of classes complying with a generic interface. Once accomplished, navlie provides a variety of on-manifold estimation algorithms that can run directly on these classes. The package also provides a built-in library of common models, as well as many useful utilities. The open-source project can be found at //github.com/decargroup/navlie.
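As a point of reference (this is the standard on-manifold pattern, not taken from the navlie documentation), on-manifold estimators replace the additive state update of a vector-space Kalman filter with a retraction through the Lie group exponential map:

$$ \mathbf{x} \leftarrow \mathbf{x} + \delta\mathbf{x} \qquad\text{becomes}\qquad \mathbf{X} \leftarrow \mathbf{X}\exp\!\left(\delta\boldsymbol{\xi}^{\wedge}\right), $$

where $\delta\boldsymbol{\xi}$ is a correction expressed in the Lie algebra and $(\cdot)^{\wedge}$ maps it to the corresponding matrix Lie algebra element; a generic interface over this operation is what lets estimation algorithms run on arbitrary manifold-valued states.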

Inferring the parameters of ordinary differential equations (ODEs) from noisy observations is an important problem in many scientific fields. Currently, most parameter estimation methods that bypass numerical integration tend to rely on basis functions or Gaussian processes to approximate the ODE solution and its derivatives. Due to the sensitivity of the ODE solution to its derivatives, these methods can be hindered by estimation error, especially when only sparse time-course observations are available. We present a Bayesian collocation framework that operates on the integrated form of the ODEs and also avoids the expensive use of numerical solvers. Our methodology has the capability to handle general nonlinear ODE systems. We demonstrate the accuracy of the proposed method through simulation studies, where the estimated parameters and recovered system trajectories are compared with other recent methods. A real data example is also provided.
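For an ODE system $\dot{\mathbf{x}}(t) = f(\mathbf{x}(t), \boldsymbol{\theta})$, the integrated form referred to here is, in general,

$$ \mathbf{x}(t) = \mathbf{x}(t_0) + \int_{t_0}^{t} f(\mathbf{x}(s), \boldsymbol{\theta})\, ds, $$

which constrains the trajectory through an integral rather than through its derivatives, and is therefore less sensitive to approximation error when only sparse time-course observations are available.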

Creating large-scale, well-annotated datasets to train AI algorithms is crucial for automated tumor detection and localization. However, with limited resources, it is challenging to determine the best type of annotation when annotating massive amounts of unlabeled data. To address this issue, we focus on polyps in colonoscopy videos and pancreatic tumors in abdominal CT scans; both applications require significant effort and time for pixel-wise annotation due to the high-dimensional nature of the data, involving either temporal or spatial dimensions. In this paper, we develop a new annotation strategy, termed Drag&Drop, which simplifies the annotation process to a drag and a drop. This annotation strategy is more efficient, particularly for temporal and volumetric imaging, than other types of weak annotations, such as bounding boxes, scribbles, ellipses, and points. Furthermore, to exploit our Drag&Drop annotations, we develop a novel weakly supervised learning method based on the watershed algorithm. Experimental results show that our method achieves better detection and localization performance than alternative weak annotations and, more importantly, performance similar to that of models trained on detailed per-pixel annotations. Interestingly, we find that, with limited resources, allocating weak annotations across a diverse patient population yields models more robust to unseen images than allocating per-pixel annotations to a small set of images. In summary, this research proposes an annotation strategy for tumor detection and localization that is less precise than per-pixel annotation but far more efficient, and thus useful for creating large-scale datasets for screening tumors across medical imaging modalities.
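The paper's learning method is not reproduced here, but as a rough illustration of how a single dragged box can seed a classical watershed segmentation, the following hypothetical sketch uses OpenCV's `cv::watershed`; the seed-placement heuristic (background outside the box, object seed at its center) is an assumption for illustration, not the authors' procedure.

```cpp
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>

// Hypothetical helper: derive a rough object mask from one dragged box.
// Seeds: label 1 = background (everywhere outside the box), label 2 =
// object (small disk at the box center); pixels left 0 are "unknown"
// and get assigned by the watershed flooding along image gradients.
cv::Mat maskFromDragDropBox(const cv::Mat &bgr, const cv::Rect &box) {
    cv::Mat markers(bgr.size(), CV_32S, cv::Scalar(1)); // background label
    markers(box).setTo(0);                              // unknown region
    cv::Point center = (box.tl() + box.br()) / 2;
    cv::circle(markers, center, 3, cv::Scalar(2), cv::FILLED); // object seed
    cv::watershed(bgr, markers);
    return markers == 2; // 8-bit binary mask of the object region
}
```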

This article discusses uncertainty quantification (UQ) for time-independent linear and nonlinear partial differential equation (PDE)-based systems with random model parameters, carried out using a sampling-free, intrusive stochastic Galerkin method that leverages multilevel scalable solvers constructed by combining a two-grid Schwarz method with algebraic multigrid (AMG). High-resolution spatial meshes together with a large number of stochastic expansion terms increase the system size, leading to significant memory consumption and computational cost. To this end, domain decomposition (DD)-based parallel scalable solvers are developed for linear and nonlinear stochastic PDEs. A generalized minimal residual (GMRES) iterative solver equipped with a multilevel preconditioner, consisting of restricted additive Schwarz (RAS) on the fine grid and AMG on the coarse grid, is constructed to improve scalability. Numerical experiments illustrate the scalability of the proposed solver for stochastic linear and nonlinear Poisson problems.
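For context, the standard stochastic Galerkin discretization expands the solution in orthogonal polynomials of the random parameters $\boldsymbol{\xi}$,

$$ u(x, \boldsymbol{\xi}) \approx \sum_{i=0}^{P} u_i(x)\, \Psi_i(\boldsymbol{\xi}), $$

and a Galerkin projection onto the $\Psi_i$ yields one large, coupled deterministic system for the coefficients $u_i(x)$. It is this system, whose size grows with both the spatial mesh resolution and the number $P+1$ of expansion terms, that the proposed multilevel solvers target.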

Most existing neural methods for multi-objective combinatorial optimization (MOCO) problems rely solely on decomposition, which often leads to repetitive solutions for the respective subproblems and thus a limited Pareto set. Going beyond decomposition, we propose a novel neural heuristic with diversity enhancement (NHDE) to produce more Pareto solutions from two perspectives. On the one hand, to prevent duplicated solutions across different subproblems, we propose an indicator-enhanced deep reinforcement learning method to guide the model, and design a heterogeneous graph attention mechanism to capture the relations between the instance graph and the Pareto front graph. On the other hand, to discover more solutions in the neighborhood of each subproblem, we present a multiple Pareto optima strategy to sample and preserve desirable solutions. Experimental results on classic MOCO problems show that our NHDE is able to generate a Pareto front with higher diversity, thereby achieving superior overall performance. Moreover, NHDE is generic and can be applied to different neural methods for MOCO.

In numerical simulations of cardiac mechanics, coupling the heart to a model of the circulatory system is essential for capturing physiological cardiac behavior. A popular and efficient technique is to use an electrical circuit analogy, known as a lumped parameter network or zero-dimensional (0D) fluid model, to represent blood flow throughout the cardiovascular system. Due to the strong physical interaction between the heart and the blood circulation, developing accurate and efficient numerical coupling methods remains an active area of research. In this work, we present a modular framework for implicitly coupling three-dimensional (3D) finite element simulations of cardiac mechanics to 0D models of blood circulation. The framework is modular in that the circulation model can be modified independently of the 3D finite element solver, and vice versa. The numerical scheme builds upon a previous work that combines 3D blood flow models with 0D circulation models (3D fluid - 0D fluid). Here, we extend it to couple 3D cardiac tissue mechanics models with 0D circulation models (3D structure - 0D fluid), showing that both mathematical problems can be solved within a unified coupling scheme. The effectiveness, temporal convergence, and computational cost of the algorithm are assessed through multiple examples relevant to the cardiovascular modeling community. Importantly, in an idealized left ventricle example, we show that the coupled model yields physiological pressure-volume loops and naturally recapitulates the isovolumic contraction and relaxation phases of the cardiac cycle without any additional numerical techniques. Furthermore, we provide a new derivation of the scheme inspired by the Approximate Newton Method of Chan (1985), explaining how the proposed numerical scheme combines the stability of monolithic approaches with the modularity and flexibility of partitioned approaches.
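As a concrete minimal instance of a 0D lumped-parameter model (a standard building block of such circulation models, not necessarily the one used in the paper), the two-element Windkessel represents the arterial system as a capacitor $C$ (vessel compliance) in parallel with a resistor $R$ (peripheral resistance):

$$ C\,\frac{dp(t)}{dt} + \frac{p(t)}{R} = Q(t), $$

where $p$ is the arterial pressure and $Q$ the inflow delivered by the 3D model; a 3D-0D coupling scheme of the kind described exchanges exactly such pressures and flows across the interface at each time step.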

We propose a novel non-negative spherical relaxation for optimization problems over binary matrices with injectivity constraints, which in particular has applications in multi-matching and clustering. We relax the respective binary matrix constraints to the (high-dimensional) non-negative sphere. To optimize our relaxed problem, we use a conditional power iteration method to iteratively improve the objective function, while at the same time sweeping over a continuous scalar parameter that is (indirectly) related to the universe size (or number of clusters). Unlike existing procedures that require fixing the integer universe size before optimization, our method automatically adjusts the analogous continuous parameter. Furthermore, while our approach shares similarities with spectral multi-matching and spectral clustering, our formulation has the strong advantage that we do not rely on additional post-processing procedures to obtain binary results. Our method shows compelling results in various multi-matching and clustering settings, even when compared to methods that use the ground-truth universe size (or number of clusters).
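The authors' conditional variant and parameter sweep are not reproduced here, but the basic ingredient of such a scheme can be sketched as a power step followed by projection onto the non-negative sphere:

$$ \mathbf{x}_{k+1} = \frac{[\mathbf{A}\mathbf{x}_k]_+}{\big\| [\mathbf{A}\mathbf{x}_k]_+ \big\|_2}, \qquad [\,\cdot\,]_+ = \max(\cdot,\, 0), $$

which keeps every iterate feasible for the relaxed constraint set; the conditioning and the sweep over the universe-size parameter are what distinguish the proposed method from this generic template.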

We introduce two new stochastic conjugate gradient frameworks for a class of nonconvex and possibly also nonsmooth optimization problems. These frameworks are built upon the Stochastic Recursive Gradient Algorithm (SARAH), and we thus refer to them as Acc-Prox-CG-SARAH and Acc-Prox-CG-SARAH-RS, respectively. They are efficiently accelerated, easy to implement, tuning-free, and can be smoothly extended and modified. We devise a deterministic restart scheme for stochastic optimization and apply it in our second stochastic conjugate framework; this scheme constitutes the key difference between the two approaches. In addition, we apply the ProbAbilistic Gradient Estimator (PAGE) and develop a practical variant, denoted Acc-Prox-CG-SARAH-ST, in order to reduce potential computational overhead. We provide comprehensive and rigorous convergence analyses for all three approaches and establish linear convergence rates for unconstrained minimization problems with nonconvex and nonsmooth objective functions. Experiments demonstrate that Acc-Prox-CG-SARAH and Acc-Prox-CG-SARAH-RS consistently outperform state-of-the-art methods, and that Acc-Prox-CG-SARAH-ST achieves comparable convergence speed. Both in theory and in experiments, we verify the strong computational efficiency of the deterministic restart scheme in stochastic optimization methods.
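For reference, the SARAH estimator that these frameworks build on maintains a recursive gradient estimate

$$ \mathbf{v}_t = \nabla f_{i_t}(\mathbf{x}_t) - \nabla f_{i_t}(\mathbf{x}_{t-1}) + \mathbf{v}_{t-1}, \qquad \mathbf{x}_{t+1} = \mathbf{x}_t - \eta\, \mathbf{v}_t, $$

with $i_t$ sampled uniformly at random and a full gradient computed at the start of each outer loop. The proposed frameworks combine this estimator with conjugate-gradient-style directions, proximal steps, and (in the -RS variant) the deterministic restart scheme.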

Data augmentation, the artificial creation of training data for machine learning through transformations, is a widely studied research field across machine learning disciplines. While it is useful for increasing the generalization capabilities of a model, it can also address many other challenges and problems, from overcoming a limited amount of training data, to regularizing the objective, to limiting the amount of data used in order to protect privacy. Based on a precise description of the goals and applications of data augmentation (C1) and a taxonomy of existing works (C2), this survey is concerned with data augmentation methods for textual classification and aims to provide a concise and comprehensive overview for researchers and practitioners (C3). Derived from the taxonomy, we divide more than 100 methods into 12 different groupings and provide state-of-the-art references identifying which methods are highly promising (C4). Finally, research perspectives that may constitute a building block for future work are given (C5).
