
This paper deals with the derivation of Non-Intrusive Reduced Basis (NIRB) techniques for sensitivity analysis, more specifically the direct and adjoint state methods. For highly complex parametric problems, these two approaches may become too costly. To reduce computational times, Proper Orthogonal Decomposition (POD) and Reduced Basis Methods (RBMs) have already been investigated. Most of these algorithms are, however, intrusive in the sense that the High-Fidelity (HF) code must be modified. To address this issue, non-intrusive strategies are employed. The NIRB two-grid method uses the HF code solely as a ``black-box'', requiring no code modification. Like other RBMs, it is based on an offline-online decomposition. The offline stage is time-consuming, but it is only executed once, whereas the online stage is significantly less expensive than an HF evaluation. In this paper, we propose new NIRB two-grid algorithms for both the direct and adjoint state methods. For the direct method, we prove on a classical model problem, the heat equation, that the HF sensitivity evaluations reach an optimal convergence rate in $L^{\infty}(0,T;H^1(\Omega))$, and then establish that these rates are recovered by the proposed NIRB approximation. These results are supported by numerical simulations. We then demonstrate numerically that a Gaussian process regression can be used to approximate the projection coefficients of the NIRB two-grid method, which further reduces the computational cost of the online step while requiring only a coarse solution of the initial problem. All numerical results are obtained on the model problem as well as on a more complex problem, namely the Brusselator system.
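
As a point of reference for the two-grid mechanics described above, here is a minimal Python sketch of the offline/online split. The helpers `solve_coarse` and `interpolate_to_fine` are hypothetical stand-ins for a black-box HF call on the coarse grid and a grid-transfer operator; the paper's post-processing of the coefficients is omitted.

```python
import numpy as np

def build_pod_basis(snapshots, tol=1e-8):
    """Offline: POD basis from fine-grid HF snapshots (one column per snapshot)."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    n_modes = int(np.searchsorted(energy, 1.0 - tol)) + 1
    return U[:, :n_modes]                       # orthonormal reduced basis

def nirb_online(mu, basis, solve_coarse, interpolate_to_fine):
    """Online: one cheap coarse HF solve, lifted and projected onto the basis."""
    u_coarse = solve_coarse(mu)                 # black-box HF call, coarse grid
    u_lift = interpolate_to_fine(u_coarse)      # grid transfer coarse -> fine
    coeffs = basis.T @ u_lift                   # projection coefficients
    return basis @ coeffs                       # NIRB approximation, fine grid
```

In the Gaussian-process variant mentioned at the end of the abstract, the projection coefficients would instead be approximated by a regression trained offline, further cheapening the online stage.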

Related Content

Generally, to apply the MUltiple SIgnal Classification (MUSIC) algorithm for the rapid imaging of small inhomogeneities, the complete elements of the multi-static response (MSR) matrix must be collected. However, in real-world applications such as microwave imaging or bistatic measurement configurations, the diagonal elements of the MSR matrix are unknown. It is nevertheless possible to obtain imaging results with the traditional approach, but the theoretical reason for its applicability has not yet been investigated. In this paper, we establish mathematical structures of the MUSIC imaging function from an MSR matrix without diagonal elements in both transverse magnetic (TM) and transverse electric (TE) polarizations. The established structures demonstrate why the shape and location of small inhomogeneities can be retrieved via MUSIC without the diagonal elements of the MSR matrix. In addition, they reveal the intrinsic properties of the imaging and its fundamental limitations. Results of numerical simulations are provided to support the identified structures.
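
As context, a minimal sketch of the standard MUSIC imaging function (for the full MSR matrix; the paper's setting would instead start from `K` with its diagonal removed):

```python
import numpy as np

def music_imaging(K, steering, n_signal):
    """MUSIC imaging function: reciprocal norm of the projection of each
    steering vector g(r) onto the noise subspace of the MSR matrix K."""
    U, _, _ = np.linalg.svd(K)
    noise = U[:, n_signal:]                    # noise-subspace basis
    proj = noise.conj().T @ steering           # steering: (n_antennas, n_points)
    return 1.0 / np.linalg.norm(proj, axis=0)  # peaks at the inhomogeneities
```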

Principal component analysis (PCA) is a longstanding and well-studied approach for dimension reduction. It rests upon the assumption that the underlying signal in the data has low rank, and thus can be well-summarized using a small number of dimensions. The output of PCA is typically represented using a scree plot, which displays the proportion of variance explained (PVE) by each principal component. While the PVE is extensively reported in routine data analyses, to the best of our knowledge the notion of inference on the PVE remains unexplored. In this paper, we consider inference on the PVE. We first introduce a new population quantity for the PVE with respect to an unknown matrix mean. Critically, our interest lies in the PVE of the sample principal components (as opposed to unobserved population principal components); thus, the population PVE that we introduce is defined conditional on the sample singular vectors. We show that it is possible to conduct inference, in the sense of confidence intervals, p-values, and point estimates, on this population quantity. Furthermore, we can conduct valid inference on the PVE of a subset of the principal components, even when the subset is selected using a data-driven approach such as the elbow rule. We demonstrate the proposed approach in simulation and in an application to a gene expression dataset.
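
For reference, the sample PVE displayed in a scree plot, i.e., the classical point estimate rather than the conditional population quantity and confidence intervals introduced in the paper, is computed as:

```python
import numpy as np

def pve(X):
    """Sample PVE: proportion of variance explained by each principal
    component of the column-centered data matrix X (the scree plot values)."""
    s = np.linalg.svd(X - X.mean(axis=0), compute_uv=False)
    return s**2 / np.sum(s**2)
```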

In this work, the high-order accuracy and the well-balanced (WB) properties of some novel continuous interior penalty (CIP) stabilizations for the Shallow Water (SW) equations are investigated. The underlying arbitrarily high-order numerical framework is a Residual Distribution (RD)/continuous Galerkin (CG) finite element method (FEM) for the space discretization, coupled with a Deferred Correction (DeC) time integration to obtain a fully explicit scheme. On the one hand, the introduced CIP stabilizations are all specifically designed to guarantee the exact preservation of the lake-at-rest steady state; on the other hand, some of them make use of general structures to tackle the preservation of general steady states whose explicit analytical expression is not known. Several basis functions have been considered in the numerical experiments and, in all cases, the numerical results confirm the high-order accuracy and the ability of the novel stabilizations to exactly preserve the lake-at-rest steady state and to capture small perturbations of this equilibrium. Moreover, some of them, based on the notions of space residual and global flux, have shown very good performance and superconvergence for general steady solutions not known in closed form. Many elements introduced here can be extended to other hyperbolic systems, e.g., the Euler equations with gravity.
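
Scheme-agnostically, "exact preservation of the lake at rest" can be tested by initializing the discrete equilibrium and checking that an update leaves it unchanged to machine precision. A minimal 1D sketch, where `step` is a hypothetical user-supplied update of water height `h` and discharge `q` over bathymetry `b`:

```python
import numpy as np

def lake_at_rest(b, eta=1.0):
    """Lake-at-rest equilibrium: flat surface h + b = eta, zero discharge."""
    h = np.maximum(eta - b, 0.0)
    return h, np.zeros_like(h)

def is_well_balanced(step, b, eta=1.0, n_steps=100, tol=1e-12):
    """True if the update `step(h, q, b) -> (h, q)` preserves the equilibrium."""
    h0, q0 = lake_at_rest(b, eta)
    h, q = h0.copy(), q0.copy()
    for _ in range(n_steps):
        h, q = step(h, q, b)
    return np.max(np.abs(h - h0)) < tol and np.max(np.abs(q)) < tol
```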

This essay provides a comprehensive analysis of the optimization and performance evaluation of various routing algorithms within the context of computer networks. Routing algorithms are critical for determining the most efficient path for data transmission between nodes in a network. The efficiency, reliability, and scalability of a network heavily rely on the choice and optimization of its routing algorithm. This paper begins with an overview of fundamental routing strategies, including shortest path, flooding, distance vector, and link state algorithms, and extends to more sophisticated techniques.
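
Among the strategies listed, the shortest-path family is the simplest to make concrete; below is a minimal sketch of Dijkstra's algorithm, the computation at the heart of link-state protocols such as OSPF, over an adjacency-list graph:

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path distances from `source`; graph: {node: [(nbr, w), ...]}."""
    dist = {source: 0}
    pq = [(0, source)]                           # (distance, node) min-heap
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue                             # stale heap entry
        for v, w in graph.get(u, ()):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

# Example: dijkstra({"a": [("b", 1), ("c", 4)], "b": [("c", 2)], "c": []}, "a")
# returns {"a": 0, "b": 1, "c": 3}.
```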

Quantum-inspired classical algorithms provide us with a new way to understand the computational power of quantum computers for practically relevant problems, especially in machine learning. In the past several years, numerous efficient algorithms for various tasks have been found, while a corresponding analysis of lower bounds is still missing. Using communication complexity, in this work we propose the first method to study lower bounds for these tasks. We mainly focus on lower bounds for solving linear regressions, supervised clustering, principal component analysis, recommendation systems, and Hamiltonian simulations. More precisely, we show that for linear regressions, in the row-sparse case, the lower bound is quadratic in the Frobenius norm of the underlying matrix, which is tight. In the dense case, with an extra assumption on the accuracy, we obtain that the lower bound is quartic in the Frobenius norm, which matches the upper bound. For supervised clustering, we obtain a tight lower bound that is quartic in the Frobenius norm. For the other three tasks, we obtain a lower bound that is quadratic in the Frobenius norm, while the known upper bound is quartic in the Frobenius norm. Through this research, we find that large quantum speedups can exist for sparse, high-rank, well-conditioned matrix-related problems. Finally, we extend our method to the analysis of lower bounds for quantum query algorithms for matrix-related problems, and give some applications.
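
The results above concern lower bounds, but for context, the access model underlying quantum-inspired classical algorithms is length-squared sampling, and the Frobenius-norm dependence of the stated bounds enters through this sampling distribution. A minimal sketch of that primitive:

```python
import numpy as np

def length_squared_sample(A, rng=None):
    """Sample a row index i with probability ||A_i||^2 / ||A||_F^2, the
    access model assumed by quantum-inspired classical algorithms."""
    rng = rng or np.random.default_rng()
    row_norms2 = np.einsum("ij,ij->i", A, A)     # squared row norms
    return rng.choice(A.shape[0], p=row_norms2 / row_norms2.sum())
```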

Digital credentials represent a cornerstone of digital identity on the Internet. To achieve privacy, certain functionalities in credentials should be implemented. One is selective disclosure, which allows users to disclose only the claims or attributes they want. This paper presents a novel approach to selective disclosure that combines Merkle hash trees and Boneh-Lynn-Shacham (BLS) signatures. Combining these approaches, we achieve selective disclosure of claims in a single credential and creation of a verifiable presentation containing selectively disclosed claims from multiple credentials signed by different parties. Besides selective disclosure, we enable issuing credentials signed by multiple issuers using this approach.
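
A minimal sketch of the Merkle-tree half of the construction: claims are hashed into a tree, the issuer signs the root (BLS signing and its aggregation across issuers are omitted here), and a claim is disclosed by revealing only its sibling path:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_levels(claims):
    """All tree levels, leaves first; odd-sized levels duplicate the last node."""
    levels = [[h(c) for c in claims]]
    while len(levels[-1]) > 1:
        lvl = levels[-1] + ([levels[-1][-1]] if len(levels[-1]) % 2 else [])
        levels.append([h(lvl[i] + lvl[i + 1]) for i in range(0, len(lvl), 2)])
    return levels

def disclosure_proof(claims, index):
    """Sibling path revealing only claims[index] (selective disclosure)."""
    path = []
    for lvl in merkle_levels(claims)[:-1]:
        if len(lvl) % 2:
            lvl = lvl + [lvl[-1]]
        path.append((lvl[index ^ 1], index % 2))  # (sibling hash, own side)
        index //= 2
    return path

def verify(claim, path, root):
    """Recompute the root from one disclosed claim and its sibling path."""
    node = h(claim)
    for sibling, side in path:
        node = h(sibling + node) if side else h(node + sibling)
    return node == root
```

Here the issuer would BLS-sign `merkle_levels(claims)[-1][0]`; verifying a presentation then amounts to checking the signature on each root and `verify` for each disclosed claim.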

The availability of digital twins for the cardiovascular system will enable insightful computational tools both for research and clinical practice. This, however, demands robust and well-defined models and methods for the different steps involved in the process. We present a vessel coordinate system (VCS) that enables the unambiguous definition of locations in a vessel section by adapting the idea of cylindrical coordinates to the vessel geometry. Using the VCS model, point correspondence can be defined among different samples of a cohort, allowing data transfer, quantitative comparison, shape coregistration, or population analysis. Furthermore, the VCS model allows for the generation of specific meshes (e.g., cylindrical grids, O-grids) necessary for an accurate reconstruction of the geometries used in fluid simulations. We provide the technical details for the computation of the coordinates and discuss the assumptions taken to guarantee that they are well defined. The VCS model is tested in a series of applications. We present a robust, low-dimensional, patient-specific vascular model and use it to study phenotype variability of the thoracic aorta within a cohort of patients. Point correspondence is exploited to build a haemodynamics atlas of the aorta for the same cohort. The atlas originates from fluid simulations (Navier-Stokes with the Finite Volume Method) conducted using OpenFOAMv10. We conclude with a discussion of the VCS model, covering its impact on important areas such as shape modeling and computational fluid dynamics (CFD).
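
To fix ideas, a deliberately crude sketch of the cylindrical-coordinate idea: map a point to (arc length, angle, radius) relative to a polyline centerline. This uses a nearest-node projection and a fixed angular reference, not the paper's carefully constructed, well-defined coordinates:

```python
import numpy as np

def vessel_coords(p, centerline):
    """Crude vessel coordinates (s, theta, r) of point p w.r.t. a polyline
    centerline (n x 3 array): arc length, angle, and radial distance."""
    diffs = centerline - p
    i = int(np.argmin(np.einsum("ij,ij->i", diffs, diffs)))   # nearest node
    seg = np.diff(centerline, axis=0)
    seg_len = np.linalg.norm(seg, axis=1)
    s = seg_len[:i].sum()                         # arc length up to node i
    j = min(i, len(seg) - 1)
    t = seg[j] / seg_len[j]                       # local unit tangent
    v = p - centerline[i]
    v_perp = v - (v @ t) * t                      # component normal to the axis
    r = np.linalg.norm(v_perp)
    # Fixed angular reference (assumes the tangent is not parallel to x-axis);
    # a proper VCS would transport the reference along the centerline.
    ref = np.array([1.0, 0.0, 0.0])
    ref = ref - (ref @ t) * t
    ref /= np.linalg.norm(ref)
    theta = np.arctan2(np.cross(ref, v_perp) @ t, ref @ v_perp)
    return s, theta, r
```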

We propose a novel algorithm for the support estimation of partially known Gaussian graphical models that incorporates prior information about the underlying graph. In contrast to classical approaches that provide a point estimate based on a maximum likelihood or a maximum a posteriori criterion using (simple) priors on the precision matrix, we consider a prior on the graph and rely on annealed Langevin diffusion to generate samples from the posterior distribution. Since the Langevin sampler requires access to the score function of the underlying graph prior, we use graph neural networks to effectively estimate the score from a graph dataset (either available beforehand or generated from a known distribution). Numerical experiments demonstrate the benefits of our approach.
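
A minimal sketch of annealed Langevin sampling, where `score` is the plug-in estimate of the score function (obtained in the paper from a graph neural network trained on a graph dataset):

```python
import numpy as np

def annealed_langevin(score, x0, sigmas, steps=100, eps=1e-4, rng=None):
    """Annealed Langevin dynamics: at each noise level sigma (decreasing),
    take Langevin steps x <- x + a * score(x, sigma) + sqrt(2a) * z."""
    rng = rng or np.random.default_rng()
    x = x0.copy()
    for sigma in sigmas:                          # decreasing noise levels
        a = eps * (sigma / sigmas[-1]) ** 2       # per-level step size
        for _ in range(steps):
            z = rng.standard_normal(x.shape)
            x = x + a * score(x, sigma) + np.sqrt(2 * a) * z
    return x
```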

Finite-sample inference for Cox models is an important problem in many settings, such as clinical trials. Bayesian procedures provide a means for finite-sample inference and incorporation of prior information if MCMC algorithms and posteriors are well behaved. On the other hand, estimation procedures should also retain inferential properties in high-dimensional settings. In addition, estimation procedures should be able to incorporate constraints and multilevel modeling, such as cure models and frailty models, in a straightforward manner. In order to tackle these modeling challenges, we propose a uniformly ergodic Gibbs sampler for a broad class of convex set constrained multilevel Cox models. We develop two key strategies. First, we exploit a connection between Cox models and negative binomial processes through the Poisson process to reduce Bayesian computation to iterative Gaussian sampling. Next, we appeal to sufficient dimension reduction to address the difficult computation of nonparametric baseline hazards, allowing for the collapse of the Markov transition operator within the Gibbs sampler based on sufficient statistics. We demonstrate our approach using open-source data and simulations.
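
The paper's negative-binomial/Poisson-process augmentation for Cox models is specialized, but the "augment, then draw Gaussians" pattern it exploits can be shown self-contained with the classical Albert-Chib sampler for probit regression, a different model used here purely to illustrate how data augmentation reduces each Gibbs sweep to Gaussian sampling:

```python
import numpy as np
from scipy.stats import truncnorm

def probit_gibbs(y, X, n_iter=2000, tau2=10.0, rng=None):
    """Albert-Chib Gibbs sampler for probit regression with a N(0, tau2 I)
    prior: truncated-normal latents make beta conditionally Gaussian."""
    rng = rng or np.random.default_rng()
    n, p = X.shape
    cov = np.linalg.inv(X.T @ X + np.eye(p) / tau2)   # fixed posterior covariance
    chol = np.linalg.cholesky(cov)
    beta, draws = np.zeros(p), []
    for _ in range(n_iter):
        eta = X @ beta
        # Latent z_i ~ N(eta_i, 1), truncated to (0, inf) if y_i = 1, else (-inf, 0)
        lo = np.where(y == 1, -eta, -np.inf)
        hi = np.where(y == 1, np.inf, -eta)
        z = eta + truncnorm.rvs(lo, hi, random_state=rng)
        # Conditionally Gaussian coefficient draw
        beta = cov @ (X.T @ z) + chol @ rng.standard_normal(p)
        draws.append(beta)
    return np.array(draws)
```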

We derive information-theoretic generalization bounds for supervised learning algorithms based on the information contained in predictions rather than in the output of the training algorithm. These bounds improve over the existing information-theoretic bounds, are applicable to a wider range of algorithms, and solve two key challenges: (a) they give meaningful results for deterministic algorithms and (b) they are significantly easier to estimate. We show experimentally that the proposed bounds closely follow the generalization gap in practical scenarios for deep learning.
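
For context, the classical input-output bound of Xu and Raginsky (2017), which the prediction-based bounds improve on: for a $\sigma$-sub-Gaussian loss and $n$ training samples,

$$ \left|\mathbb{E}\left[\operatorname{gen}(S, W)\right]\right| \le \sqrt{\frac{2\sigma^{2}}{n}\, I(W; S)}, $$

where $W$ is the output of the training algorithm and $S$ is the training set. For deterministic algorithms on continuous hypothesis spaces, $I(W; S)$ can be infinite, which is precisely the failure mode that measuring the information contained in predictions avoids.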
