
This paper presents a study of the large linear systems resulting from the regular $B$-spline finite element discretization of the $\bm{curl}$-$\bm{curl}$ and $\bm{grad}$-$\bm{div}$ elliptic problems on unit square/cube domains. We consider systems subject to both homogeneous essential and natural boundary conditions. Our objective is to develop a preconditioning strategy that is optimal and robust, based on the Auxiliary Space Preconditioning method proposed by Hiptmair and Xu \cite{hiptmair2007nodal}. Our approach is demonstrated to be robust with respect to the mesh size, and we also show how it can be combined with the Generalized Locally Toeplitz (GLT) sequence analysis presented in \cite{mazza2019isogeometric} to obtain an algorithm that is optimal and stable with respect to the spline degree. Numerical tests illustrate the effectiveness of our approach.

Related content

Second-order polynomials generalize classical first-order ones by allowing additional variables that range over functions rather than values. We are motivated by their applications in higher-order computational complexity theory, for example in extending classical classes like P or PSPACE to operators in Analysis [doi:10.1137/S0097539794263452, doi:10.1145/2189778.2189780]. The degree subclassifies ordinary polynomial growth into linear, quadratic, cubic, etc. In order to similarly classify second-order polynomials, we define their degree to be an 'arctic' first-order polynomial (namely a term/expression over a variable $D$ and the operations $+$, $\cdot$, and $\max$). Our normal form and semantic uniqueness results for second-order polynomials assert that this second-order degree is well-defined; and it turns out to transform well under (now two kinds of) polynomial composition. More generally, we define the degree of a third-order polynomial to be an arctic second-order polynomial, and establish how it transforms under three kinds of composition.
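To make the notion of an 'arctic' first-order polynomial concrete, here is a minimal Python sketch of such a degree: a term built from a variable $D$, constants, and the operations $+$, $\cdot$, and $\max$, together with a straightforward evaluator. The class names and the example term are illustrative choices, not notation from the paper.

```python
# Illustrative sketch (not from the paper): an "arctic" first-order polynomial,
# i.e. a term over a variable D built from +, * and max, with an evaluator.
from dataclasses import dataclass
from typing import Union

Term = Union["Var", "Const", "Add", "Mul", "Max"]

@dataclass
class Var:          # the variable D
    pass

@dataclass
class Const:
    value: int

@dataclass
class Add:
    left: Term
    right: Term

@dataclass
class Mul:
    left: Term
    right: Term

@dataclass
class Max:
    left: Term
    right: Term

def evaluate(t: Term, d: int) -> int:
    """Evaluate an arctic term at D = d."""
    if isinstance(t, Var):
        return d
    if isinstance(t, Const):
        return t.value
    if isinstance(t, Add):
        return evaluate(t.left, d) + evaluate(t.right, d)
    if isinstance(t, Mul):
        return evaluate(t.left, d) * evaluate(t.right, d)
    if isinstance(t, Max):
        return max(evaluate(t.left, d), evaluate(t.right, d))
    raise TypeError(f"unknown term: {t!r}")

# Example: the term max(D*D, D + 2), evaluated at D = 3, gives 9.
degree = Max(Mul(Var(), Var()), Add(Var(), Const(2)))
assert evaluate(degree, 3) == 9
```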

We propose a least-squares formulation for parabolic equations in the natural $L^2(0,T;V^*)\times H$ norm which avoids regularity assumptions on the data of the problem. For the abstract heat equation the resulting bilinear form is symmetric, continuous, and coercive. Among other things, this paves the way for classical space-time a priori and a posteriori Galerkin frameworks for the numerical approximation of the solution of the abstract heat equation. Moreover, the approach is applicable, e.g., to optimal control problems with (parametrized) parabolic equations and to the certification of reduced basis methods with parabolic equations.

We study the problem of testing whether an unknown set $S$ in $n$ dimensions is convex or far from convex, using membership queries. The simplest high-dimensional discrete domain where the problem of testing convexity is non-trivial is the domain $\{-1,0,1\}^n$. Our main results are nearly-tight upper and lower bounds of $3^{\widetilde \Theta( \sqrt n)}$ for one-sided error testing of convex sets over this domain with non-adaptive queries. Together with our $3^{\Omega(n)}$ lower bound on one-sided error testing with samples, this shows that non-adaptive queries are significantly more powerful than samples for this problem.

Grade of Membership (GoM) models are popular individual-level mixture models for multivariate categorical data. A GoM model allows each subject to have mixed memberships in multiple extreme latent profiles; GoM models therefore have a richer modeling capacity than latent class models, which restrict each subject to belong to a single profile. This flexibility comes at the cost of more challenging identifiability and estimation problems. In this work, we propose a singular value decomposition (SVD) based spectral approach to GoM analysis with multivariate binary responses. Our approach hinges on the observation that the expectation of the data matrix has a low-rank decomposition under a GoM model. For identifiability, we develop sufficient and almost necessary conditions for a notion of expectation identifiability. For estimation, we extract only a few leading singular vectors of the observed data matrix and exploit the simplex geometry of these vectors to estimate the mixed membership scores and other parameters. Our spectral method has a substantial computational advantage over Bayesian or likelihood-based methods and is scalable to large-scale and high-dimensional data. Extensive simulation studies demonstrate the superior efficiency and accuracy of our method. We also illustrate the method by applying it to a personality test dataset.
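The first step of the spectral approach described above, extracting a few leading singular vectors of the observed data matrix, can be sketched as follows. This is an illustrative NumPy snippet under toy assumptions (random binary data, a hypothetical number of profiles K), not the authors' implementation; the subsequent simplex-geometry step that recovers the mixed membership scores is only indicated in a comment.

```python
# Minimal sketch (illustrative, not the authors' code): extract the leading
# singular vectors of an observed binary response matrix, as in the spectral
# approach described above.
import numpy as np

def leading_singular_vectors(R: np.ndarray, K: int):
    """R: n_subjects x n_items binary (0/1) response matrix, K: number of profiles."""
    # A full SVD is fine for moderate sizes; use a truncated/randomized SVD for large data.
    U, s, Vt = np.linalg.svd(R.astype(float), full_matrices=False)
    return U[:, :K], s[:K], Vt[:K, :]

rng = np.random.default_rng(0)
R = (rng.random((500, 40)) < 0.3).astype(int)   # toy binary data
U_K, s_K, Vt_K = leading_singular_vectors(R, K=3)
# The rows of U_K (suitably scaled) would then be passed to a vertex-hunting /
# simplex-corner-finding routine to estimate the mixed membership scores.
print(U_K.shape, s_K)
```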

This paper presents a scalable multigrid preconditioner targeting large-scale systems arising from discontinuous Petrov-Galerkin (DPG) discretizations of high-frequency wave operators. This work builds on previously developed multigrid preconditioning techniques of Petrides and Demkowicz (Comput. Math. Appl. 87 (2021) pp. 12-26) and extends the convergence results from $\mathcal{O}(10^7)$ degrees of freedom (DOFs) to $\mathcal{O}(10^9)$ DOFs using a new scalable parallel MPI/OpenMP implementation. Novel contributions of this paper include an alternative definition of coarse-grid systems based on restriction of fine-grid operators, yielding superior convergence results. In the uniform refinement setting, a detailed convergence study is provided, demonstrating $h$- and $p$-robust convergence and linear dependence on the wave frequency. The paper concludes with numerical results on $hp$-adaptive simulations, including a large-scale seismic modeling benchmark problem with high material contrast.

The aim of this paper is to study the shape optimization approach to the Bernoulli free boundary problem, a well-known ill-posed problem that seeks the unknown free boundary from Cauchy data. Different formulations have been proposed in the literature, differing in the choice of the objective functional. Specifically, it was shown in [14] and [16], respectively, that tracking Neumann data is well-posed but tracking Dirichlet data is not. In this paper we propose a new well-posed objective functional that tracks Dirichlet data at the free boundary. By calculating the Euler derivative and the shape Hessian of the objective functional, we show that the new formulation is well-posed, i.e., the shape Hessian is coercive at the minimizers. The coercivity of the shape Hessian may ensure the existence of optimal solutions for the nonlinear Ritz-Galerkin approximation method and its convergence, and is thus crucial for the formulation. In summary, we conclude that tracking Dirichlet or Neumann data in its energy norm is not sufficient, but tracking it in a norm half an order higher is well-posed. To support our theoretical results we carry out extensive numerical experiments.

Accurately estimating the probability of failure for safety-critical systems is important for certification. Estimation is often challenging due to high-dimensional input spaces, dangerous test scenarios, and computationally expensive simulators; thus, efficient estimation techniques are important to study. This work reframes the problem of black-box safety validation as a Bayesian optimization problem and introduces an algorithm, Bayesian safety validation, that iteratively fits a probabilistic surrogate model to efficiently predict failures. The algorithm is designed to search for failures, compute the most-likely failure, and estimate the failure probability over an operating domain using importance sampling. We introduce a set of three acquisition functions that focus on reducing uncertainty by covering the design space, optimizing the analytically derived failure boundaries, and sampling the predicted failure regions. Mainly concerned with systems that only output a binary indication of failure, we show that our method also works well in cases where more output information is available. Results show that Bayesian safety validation achieves a better estimate of the probability of failure using orders of magnitude fewer samples and performs well across various safety validation metrics. We demonstrate the algorithm on three test problems with access to ground truth and on a real-world safety-critical subsystem common in autonomous flight: a neural network-based runway detection system. This work is open sourced and currently being used to supplement the FAA certification process of the machine learning components for an autonomous cargo aircraft.
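The overall workflow described above, fit a probabilistic surrogate to binary failure observations and then estimate the failure probability over an operating domain via importance sampling, can be sketched as follows. This is a hedged toy illustration, not the authors' implementation: the black-box system, operating distribution, proposal, and design points below are all placeholder assumptions, and the paper's three acquisition functions are replaced here by a simple random design.

```python
# Hedged sketch (not the paper's code): surrogate-based failure-probability
# estimation with importance sampling, mirroring the workflow described above.
import numpy as np
from scipy.stats import multivariate_normal
from sklearn.gaussian_process import GaussianProcessClassifier

rng = np.random.default_rng(0)

def system_fails(x):                      # placeholder black-box failure indicator
    return float(np.linalg.norm(x - 1.2) < 0.4)

# 1) Evaluate the (expensive) system at a set of design points.
X = np.vstack([rng.uniform(0.0, 2.0, size=(60, 2)), [[1.2, 1.2]], [[0.2, 0.2]]])
y = np.array([system_fails(x) for x in X])

# 2) Fit the probabilistic surrogate to the binary failure labels.
surrogate = GaussianProcessClassifier().fit(X, y)

# 3) Importance-sampling estimate of P(failure) under the operating distribution p,
#    using a uniform proposal q over the 2-D operating domain [0, 2]^2.
p = multivariate_normal(mean=[1.0, 1.0], cov=0.25 * np.eye(2))
Xq = rng.uniform(0.0, 2.0, size=(5000, 2))
q_density = 1.0 / 4.0                     # uniform density on a domain of area 4
weights = p.pdf(Xq) / q_density
p_fail = np.mean(surrogate.predict_proba(Xq)[:, 1] * weights)
print(f"estimated failure probability: {p_fail:.4f}")
```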

In this paper, we design and analyze an iterative two-grid algorithm for finite element discretizations of strongly nonlinear elliptic boundary value problems. In the proposed algorithm, a nonlinear problem is first solved on the coarse space, and then a symmetric positive definite problem is solved on the fine space. The main novelty of this paper is the first convergence analysis of such an algorithm, which requires the simultaneous derivation of four interconnected error estimates. We also present numerical experiments to confirm the efficiency of the proposed algorithm.
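The coarse-nonlinear / fine-SPD structure described above can be illustrated on a toy problem. The following sketch uses an assumed 1-D finite-difference model problem $-u'' + u^3 = f$ with homogeneous Dirichlet conditions; it only illustrates the two-step pattern (coarse Newton solve, prolongation, one SPD linearized solve on the fine grid) and is not the paper's finite element method or analysis.

```python
# Schematic toy sketch of the two-grid pattern described above (illustrative only).
import numpy as np

def laplacian(n):
    """Dirichlet finite-difference Laplacian on n interior points of (0,1)."""
    h = 1.0 / (n + 1)
    A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    return A, h

def solve_coarse_newton(n, f, tol=1e-10, maxit=50):
    """Step 1: solve the nonlinear problem on the coarse grid by Newton's method."""
    A, h = laplacian(n)
    x = np.arange(1, n + 1) * h
    u = np.zeros(n)
    for _ in range(maxit):
        r = A @ u + u**3 - f(x)
        if np.linalg.norm(r) < tol:
            break
        J = A + np.diag(3.0 * u**2)      # SPD Jacobian
        u -= np.linalg.solve(J, r)
    return x, u

def two_grid_solve(nH, nh, f):
    """Step 2: prolongate and solve one SPD linearized problem on the fine grid."""
    xH, uH = solve_coarse_newton(nH, f)
    Ah, h = laplacian(nh)
    xh = np.arange(1, nh + 1) * h
    u0 = np.interp(xh, np.concatenate(([0.0], xH, [1.0])),   # prolongation
                   np.concatenate(([0.0], uH, [0.0])))
    J = Ah + np.diag(3.0 * u0**2)                              # SPD fine-grid operator
    r = Ah @ u0 + u0**3 - f(xh)
    return xh, u0 - np.linalg.solve(J, r)                      # one fine-grid SPD solve

f = lambda x: np.pi**2 * np.sin(np.pi * x) + np.sin(np.pi * x)**3   # exact u = sin(pi x)
xh, uh = two_grid_solve(nH=15, nh=127, f=f)
print("max error on fine grid:", np.max(np.abs(uh - np.sin(np.pi * xh))))
```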

Leveraging medical record information in the era of big data and machine learning comes with the caveat that data must be cleaned and deidentified. Facilitating data sharing and harmonization for multi-center collaborations is particularly difficult when protected health information (PHI) is contained or embedded in image meta-data. We propose a novel Python library, called PyLogik, to help alleviate this issue for ultrasound images, which are particularly challenging because PHI is frequently included directly on the images. PyLogik processes image volumes through a series of text detection/extraction, filtering, thresholding, and morphological and contour comparison steps. This methodology deidentifies the images, reduces file sizes, and prepares image volumes for applications in deep learning and data sharing. To evaluate its effectiveness in identifying regions of interest (ROI), a random sample of 50 cardiac ultrasounds (echocardiograms) was processed through PyLogik, and the outputs were compared with manual segmentations by an expert user. The Dice coefficient of the two approaches achieved an average value of 0.976. Next, we investigated the degree of information compression achieved by the algorithm. The resulting data were on average approximately 72% smaller after processing by PyLogik. Our results suggest that PyLogik is a viable methodology for ultrasound data cleaning and deidentification, ROI determination, and file compression, which will facilitate efficient storage, use, and dissemination of ultrasound data.
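The Dice coefficient reported above is the standard overlap score between two binary masks, $2|A \cap B| / (|A| + |B|)$. The following snippet is an illustrative computation on toy masks, not code from PyLogik; the mask names and shapes are placeholders.

```python
# Illustrative only (not part of PyLogik): Dice coefficient between two binary masks.
import numpy as np

def dice_coefficient(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0          # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

# Toy example with two overlapping square ROIs.
auto = np.zeros((100, 100), dtype=bool);   auto[20:80, 20:80] = True
manual = np.zeros((100, 100), dtype=bool); manual[25:85, 25:85] = True
print(f"Dice = {dice_coefficient(auto, manual):.3f}")
```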

The space of $C^1$ cubic Clough-Tocher splines is a classical finite element approximation space over triangulations for solving partial differential equations. However, for such a space there is no B-spline basis available, which is a preferred choice in computer aided geometric design and isogeometric analysis. A B-spline basis is a locally supported basis that forms a convex partition of unity. In this paper, we explore several alternative $C^1$ cubic spline spaces over triangulations equipped with a B-spline basis. They are defined over a Powell-Sabin refined triangulation and present different types of $C^2$ super-smoothness. The super-smooth B-splines are obtained through an extraction process, i.e., they are expressed in terms of less smooth basis functions. These alternative spline spaces maintain the same optimal approximation power as Clough-Tocher splines. This is illustrated with a selection of numerical examples in the context of least squares approximation and finite element approximation for second and fourth order boundary value problems.
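The extraction process mentioned above, expressing each super-smooth basis function as a fixed linear combination of less smooth basis functions, can be sketched generically. The extraction matrix and basis values below are random placeholders (the actual coefficients come from the smoothness conditions derived in the paper); the snippet only shows how extraction is applied and how a partition of unity is inherited.

```python
# Minimal sketch of the extraction idea (illustrative; coefficients are placeholders):
# each super-smooth basis function is a fixed linear combination of less smooth
# basis functions, encoded by an extraction matrix C.
import numpy as np

def evaluate_supersmooth_basis(C: np.ndarray, less_smooth_values: np.ndarray) -> np.ndarray:
    """
    C: (n_supersmooth, n_less_smooth) extraction matrix.
    less_smooth_values: (n_less_smooth, n_points) values of the less smooth basis.
    Returns the (n_supersmooth, n_points) values of the super-smooth basis.
    """
    return C @ less_smooth_values

# Toy placeholder data: 4 super-smooth functions extracted from 6 less smooth ones,
# evaluated at 5 points. Columns of C are nonnegative and sum to 1, so the
# (convex) partition-of-unity property of the finer basis is inherited.
rng = np.random.default_rng(1)
C = rng.random((4, 6)); C /= C.sum(axis=0, keepdims=True)
N = rng.random((6, 5)); N /= N.sum(axis=0, keepdims=True)   # fake partition of unity
B = evaluate_supersmooth_basis(C, N)
print(B.sum(axis=0))   # ≈ 1 at every point
```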
