
Dual-Primal Finite Element Tearing and Interconnecting (FETI-DP) algorithms are developed for a 2D Biot model. The model is formulated with mixed finite elements as a saddle-point problem. The displacement $\mathbf{u}$ and the Darcy flux $\mathbf{z}$ are represented with $P_1$ piecewise continuous elements and the pore pressure $p$ with $P_0$ piecewise constant elements, {\it i.e.}, three fields in total, with a stabilizing term. We have tested the functionality of FETI-DP with Dirichlet preconditioners. Numerical experiments indicate scalability of the resulting parallel algorithm for compressible elasticity with permeable Darcy flow, as well as for almost incompressible elasticity.
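For orientation, here is a schematic of the kind of three-field weak form described above; the notation, scalings, and signs are ours, and the stabilizing bilinear form $s(\cdot,\cdot)$ and the time discretization follow the paper. Find $(\mathbf{u},\mathbf{z},p)$ such that, for all test functions $(\mathbf{v},\mathbf{w},q)$,
\[
\begin{aligned}
2\mu\,(\varepsilon(\mathbf{u}),\varepsilon(\mathbf{v})) + \lambda\,(\nabla\!\cdot\mathbf{u},\nabla\!\cdot\mathbf{v}) - (p,\nabla\!\cdot\mathbf{v}) &= (\mathbf{f},\mathbf{v}),\\
(\kappa^{-1}\mathbf{z},\mathbf{w}) - (p,\nabla\!\cdot\mathbf{w}) &= 0,\\
(\nabla\!\cdot\mathbf{u},q) + \Delta t\,(\nabla\!\cdot\mathbf{z},q) + s(p,q) &= (g,q).
\end{aligned}
\]
The near-zero diagonal block in the pressure equation (up to stabilization) is what makes this a saddle-point system.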

Related content

The 23rd ACM/IEEE International Conference on Model Driven Engineering Languages and Systems (MODELS) is the premier conference series for model-driven software and systems engineering, organized with the support of ACM SIGSOFT and IEEE TCSE. Since 1998, MODELS has covered all aspects of modeling, from languages and methods to tools and applications. MODELS attendees come from diverse backgrounds, including researchers, academics, engineers, and industry professionals. MODELS 2019 is a forum for participants to exchange cutting-edge research results and innovative practical experience around modeling and model-driven software and systems. This year's edition will provide the modeling community with opportunities to further advance the foundations of modeling, and to present innovative applications of modeling in emerging areas such as cyber-physical systems, embedded systems, socio-technical systems, cloud computing, big data, machine learning, security, open source, and sustainability.
August 15, 2023

This work is the first exploration of proof-theoretic semantics for a substructural logic. It focuses on the base-extension semantics (B-eS) for intuitionistic multiplicative linear logic (IMLL). The starting point is a review of Sandqvist's B-eS for intuitionistic propositional logic (IPL), for which we propose an alternative treatment of conjunction that takes the form of the generalized elimination rule for the connective. The resulting semantics is shown to be sound and complete. This motivates our main contribution, a B-eS for IMLL, in which the definitions of the logical constants all take the form of their elimination rule and for which soundness and completeness are established.
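In schematic form (our notation, not necessarily the paper's), the alternative clause for conjunction mirrors its generalized elimination rule: support of a conjunction in a base $\mathcal{B}$ is defined by quantifying over atomic conclusions derivable from its conjuncts,
\[
\Vdash_{\mathcal{B}} \varphi \wedge \psi \quad\text{iff}\quad \text{for every base } \mathcal{C} \supseteq \mathcal{B} \text{ and every atom } p,\ \text{if } \varphi, \psi \Vdash_{\mathcal{C}} p \text{ then } \Vdash_{\mathcal{C}} p,
\]
in contrast to Sandqvist's original clause, which requires support of each conjunct separately.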

We derive a Bernstein--von Mises theorem in the context of misspecified, non-i.i.d., hierarchical models parametrized by a finite-dimensional parameter of interest. We apply our results to hierarchical models containing non-linear operators, including the squared integral operator, and to PDE-constrained inverse problems. More specifically, we consider the elliptic, time-independent Schr\"odinger equation with parametric boundary condition and general parabolic PDEs with parametric potential and boundary constraints. Our theoretical results are complemented by numerical analysis on synthetic data sets, considering both the squared integral operator and the Schr\"odinger equation.
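Schematically, a Bernstein--von Mises theorem of this type states that the posterior contracts to a Gaussian around a pseudo-true parameter (the Kullback--Leibler minimizer under misspecification); the exact centering sequence $\hat\theta_n$ and covariance $V$ depend on assumptions detailed in the paper:
\[
\bigl\|\,\Pi\bigl(\cdot \mid X^{(n)}\bigr) - \mathcal{N}\bigl(\hat\theta_n,\, n^{-1} V^{-1}\bigr) \bigr\|_{\mathrm{TV}} \;\longrightarrow\; 0 \quad \text{in probability as } n \to \infty.
\]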

Solving high-dimensional random parametric PDEs poses a challenging computational problem. It is well known that numerical methods can greatly benefit from adaptive refinement algorithms, in particular when functional approximations in polynomials are computed, as in stochastic Galerkin and stochastic collocation methods. This work investigates a residual-based adaptive algorithm used to approximate the solution of the stationary diffusion equation with lognormal coefficients. It is known that the refinement procedure is reliable, but the theoretical convergence of the scheme for this class of unbounded coefficients remains a challenging open question. This paper advances the theoretical state of the art by proving a quasi-error reduction result for the adaptive solution of the lognormal stationary diffusion problem. A computational example supports the theoretical statement.
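To make the solve-estimate-mark-refine structure concrete, here is a minimal, self-contained sketch on a 1D deterministic analogue ($-u''=f$ with $P_1$ elements and D\"orfler marking). The paper's stochastic Galerkin setting adds parametric enrichment on top of the same skeleton; all names, indicators, and tolerances below are illustrative.
\begin{verbatim}
import numpy as np

def solve_p1(nodes, f):
    """P1 finite elements for -u'' = f on (0,1) with u(0) = u(1) = 0."""
    n = len(nodes) - 2                       # number of interior unknowns
    h = np.diff(nodes)
    A = np.zeros((n, n))
    b = np.zeros(n)
    for i in range(1, len(nodes) - 1):       # row for interior node i
        A[i-1, i-1] = 1/h[i-1] + 1/h[i]
        if i > 1:
            A[i-1, i-2] = -1/h[i-1]
        if i < len(nodes) - 2:
            A[i-1, i] = -1/h[i]
        # midpoint-rule load on the two elements touching node i
        b[i-1] = (f(0.5*(nodes[i-1] + nodes[i]))*h[i-1]
                  + f(0.5*(nodes[i] + nodes[i+1]))*h[i]) / 2
    u = np.zeros(len(nodes))
    u[1:-1] = np.linalg.solve(A, b)
    return u

def estimate(nodes, f):
    """Elementwise residual indicators eta_K ~ h_K ||f||_{L2(K)} (P1, 1D)."""
    h = np.diff(nodes)
    mid = 0.5*(nodes[:-1] + nodes[1:])
    return h*np.abs(f(mid))*np.sqrt(h)

f = lambda x: np.exp(-200*(x - 0.7)**2)      # a localized source term
nodes = np.linspace(0.0, 1.0, 5)
for it in range(12):                         # solve -> estimate -> mark -> refine
    u = solve_p1(nodes, f)
    eta = estimate(nodes, f)
    if np.sqrt((eta**2).sum()) < 1e-4:
        break
    order = np.argsort(eta)[::-1]            # Doerfler marking, theta = 0.5
    cum = np.cumsum(eta[order]**2)
    marked = order[:np.searchsorted(cum, 0.5*cum[-1]) + 1]
    mids = 0.5*(nodes[marked] + nodes[marked + 1])   # bisect marked elements
    nodes = np.sort(np.concatenate([nodes, mids]))
print(len(nodes), "nodes after adaptive refinement")
\end{verbatim}
Running the sketch concentrates new nodes near the localized source, which is exactly the behavior a reliable residual estimator is supposed to drive.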

Bayesian linear mixed-effects models and Bayesian ANOVA are increasingly being used in the cognitive sciences to perform null hypothesis tests, where a null hypothesis that an effect is zero is compared with an alternative hypothesis that the effect exists and is different from zero. While software tools for Bayes factor null hypothesis tests are easily accessible, how to specify the data and the model correctly is often not clear. In Bayesian approaches, many authors use data aggregation at the by-subject level and estimate Bayes factors on aggregated data. Here, we use simulation-based calibration for model inference applied to several example experimental designs to demonstrate that, as with frequentist analysis, such null hypothesis tests on aggregated data can be problematic in Bayesian analysis. Specifically, when random slope variances differ (i.e., violated sphericity assumption), Bayes factors are too conservative for contrasts where the variance is small and they are too liberal for contrasts where the variance is large. Running Bayesian ANOVA on aggregated data can - if the sphericity assumption is violated - likewise lead to biased Bayes factor results. Moreover, Bayes factors for by-subject aggregated data are biased (too liberal) when random item slope variance is present but ignored in the analysis. These problems can be circumvented or reduced by running Bayesian linear mixed-effects models on non-aggregated data such as on individual trials, and by explicitly modeling the full random effects structure. Reproducible code is available from \url{//osf.io/mjf47/}.
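As a small illustration of why aggregation matters, the following self-contained simulation (with illustrative numbers, not taken from the paper) generates trial-level data with random slope variance and then aggregates to by-subject condition means; after aggregation, the slope variance and the averaged trial noise are no longer separable, which is the mechanism behind the biased Bayes factors described above.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n_subj, n_trials = 30, 50
beta = 0.3                                 # true condition effect
slope_sd, trial_sd = 0.5, 2.0              # random slope SD vs. residual SD

subj_slope = rng.normal(beta, slope_sd, n_subj)      # per-subject effects
cond = np.tile([-0.5, 0.5], n_trials)                # sum-coded condition
# trial-level data: one row per subject, 2*n_trials trials each
y = subj_slope[:, None]*cond + rng.normal(0, trial_sd, (n_subj, 2*n_trials))

# by-subject aggregation: one mean per subject and condition
d_agg = y[:, cond > 0].mean(axis=1) - y[:, cond < 0].mean(axis=1)
se_agg = d_agg.std(ddof=1)/np.sqrt(n_subj)
print(f"aggregated effect: {d_agg.mean():.3f} +/- {se_agg:.3f}")
# The aggregated analysis sees only the between-subject spread of d_agg;
# how much of it is slope variance versus averaged trial noise is no
# longer identifiable, unlike in a trial-level mixed-effects model.
\end{verbatim}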

We show that any application of the technique of unbiased simulation becomes perfect simulation when coalescence of the two coupled Markov chains can be practically assured in advance. This happens when a fixed number of iterations is high enough that the probability of needing any more to achieve coalescence is negligible; we suggest a value of $10^{-20}$. This finding enormously increases the range of problems for which perfect simulation, which exactly follows the target distribution, can be implemented. We design a new algorithm to make practical use of the high number of iterations by producing extra perfect sample points with little extra computational effort, at a cost of a small, controllable amount of serial correlation within sample sets of about 20 points. Different sample sets remain completely independent. The algorithm includes maximal coupling for continuous processes, to bring together chains that are already close. We illustrate the methodology on a simple, two-state Markov chain and on standard normal distributions up to 20 dimensions. Our technical formulation involves a nonzero probability, which can be made arbitrarily small, that a single perfect sample point may have its place taken by a "string" of many points which are assigned weights, each equal to $\pm 1$, that sum to~$1$. A point with a weight of $-1$ is a "hole", which is an object that can be cancelled by an equivalent point that has the same value but opposite weight $+1$.
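The coalescence ingredient can be shown in a few lines on the two-state example mentioned above. This sketch (our own, with an illustrative transition matrix) demonstrates only that a fixed horizon can make coalescence practically certain; it does not implement the weighting scheme that handles the residual probability.
\begin{verbatim}
import numpy as np

P = np.array([[0.9, 0.1],        # two-state transition matrix; stationary
              [0.2, 0.8]])       # distribution is (2/3, 1/3)

def step(state, u):
    """Advance one step using a shared uniform u (monotone coupling)."""
    return 0 if u < P[state, 0] else 1

# With a shared u, both chains jump to 0 when u < 0.2 and to 1 when
# u >= 0.9, so they coalesce with probability 0.3 per step:
# P(no coalescence in N = 200 steps) = 0.7^200 ~ 1e-31 << 1e-20.
rng = np.random.default_rng(0)
N = 200
samples = []
for _ in range(10_000):
    x, y = 0, 1                  # start one chain in each state
    for _ in range(N):
        u = rng.random()
        x, y = step(x, u), step(y, u)
    assert x == y                # coalescence is practically assured
    samples.append(x)
print("empirical P(state 0):", samples.count(0)/len(samples))  # ~ 2/3
\end{verbatim}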

Delaunay triangulation (DT) is an important geometric problem used in many fields, such as computer vision, terrain modeling, spatial clustering, and networking. Kinetic data structures have become important in computational geometry for dealing with moving objects, but maintaining a Delaunay triangulation of moving points is challenging: the triangulation must be updated repeatedly, and if the points move far enough it is better to rebuild it from scratch. One approach to handling moving points is an incremental algorithm; when the points move slowly, this is faster than rebuilding. Furthermore, sequential algorithms can be computationally expensive for large datasets, so parallelism is a natural way to accelerate the computation. In this paper, we propose a parallel algorithm for moving points: the dataset is divided into equal partitions, and each partition is assigned to one block. After each time step, every block restores the Delaunay constraints using delete and insert operations. We show that this algorithm is faster than its serial counterparts.
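A much-simplified sketch of the partitioning idea (not the paper's algorithm): points are split into equal blocks, each block re-triangulates its own points in parallel after a time step, and a global rebuild is triggered once cumulative motion grows too large. scipy's Delaunay stands in for the per-block delete/insert kernels, and the cross-partition consistency the paper must enforce is ignored here.
\begin{verbatim}
import numpy as np
from concurrent.futures import ProcessPoolExecutor
from scipy.spatial import Delaunay

def triangulate_block(block_points):
    """Per-block kernel: returns the triangle list for one partition."""
    return Delaunay(block_points).simplices

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pts = rng.random((4000, 2))
    vel = 0.001*rng.standard_normal(pts.shape)       # slow per-step motion
    blocks = np.array_split(np.arange(len(pts)), 8)  # 8 equal partitions
    moved = 0.0
    with ProcessPoolExecutor() as pool:
        for step in range(10):
            pts += vel                               # advance all points
            moved += np.abs(vel).max()
            if moved > 0.05:                         # moved too far: rebuild
                global_simplices = Delaunay(pts).simplices
                moved = 0.0
                continue
            block_tris = list(pool.map(triangulate_block,
                                       [pts[b] for b in blocks]))
\end{verbatim}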

We describe a `discretize-then-relax' strategy to globally minimize integral functionals over functions $u$ in a Sobolev space satisfying prescribed Dirichlet boundary conditions. The strategy applies whenever the integral functional depends polynomially on $u$ and its derivatives, even if it is nonconvex. The `discretize' step uses a bounded finite-element scheme to approximate the integral minimization problem with a convergent hierarchy of polynomial optimization problems over a compact feasible set, indexed by the decreasing size $h$ of the finite-element mesh. The `relax' step employs sparse moment-SOS relaxations to approximate each polynomial optimization problem with a hierarchy of convex semidefinite programs, indexed by an increasing relaxation order $\omega$. We prove that, as $\omega\to\infty$ and $h\to 0$, solutions of such semidefinite programs provide approximate minimizers that converge in $L^p$ to the global minimizer of the original integral functional if this is unique. We also report computational experiments that show our numerical strategy works well even when technical conditions required by our theoretical analysis are not satisfied.
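In our own schematic notation, the strategy replaces the original problem by a doubly indexed hierarchy,
\[
\min_{u\in W^{1,p}_g(\Omega)} \int_\Omega f\bigl(x,u(x),\nabla u(x)\bigr)\,dx
\;\;\longrightarrow\;\;
\min_{\mathbf{c}\in K_h} F_h(\mathbf{c})
\;\;\longrightarrow\;\;
\mathrm{SDP}_{h,\omega},
\]
where $\mathbf{c}$ collects the finite-element coefficients, $F_h$ is the polynomial obtained by evaluating the functional on the mesh of size $h$, and $\mathrm{SDP}_{h,\omega}$ is the sparse moment-SOS relaxation of order $\omega$; solutions of the relaxations approach the global minimizer as $h\to 0$ and $\omega\to\infty$.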

In classical logic, "P implies Q" is equivalent to "not-P or Q". It is well known that this equivalence is problematic. In fact, from "P implies Q" one can infer "not-P or Q" (Implication-to-disjunction is valid), while from "not-P or Q" one cannot in general infer "P implies Q" (Disjunction-to-implication is not valid), so the two are not equivalent. This work aims to remove exactly the incorrect Disjunction-to-implication principle from classical logic (CL). The paper proposes a logical system (IRL) with the properties that (1) adding Disjunction-to-implication to IRL yields exactly CL, and (2) Disjunction-to-implication is independent of IRL, i.e., neither Disjunction-to-implication nor its negation can be derived in IRL. In other words, IRL is precisely the subsystem of CL with Disjunction-to-implication removed.
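In sequent form, the asymmetry that motivates IRL is
\[
P \to Q \;\vdash\; \neg P \lor Q \qquad\text{(Implication-to-disjunction, retained)},
\]
\[
\neg P \lor Q \;\nvdash\; P \to Q \qquad\text{(Disjunction-to-implication, removed)}.
\]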

The Helmholtz equation is related to seismic exploration, sonar, antennas, and medical imaging applications. It is one of the most challenging problems to solve in terms of accuracy and convergence due to the scalability issues of the numerical solvers. For 3D large-scale applications, high-performance parallel solvers are also needed. In this paper, a matrix-free parallel iterative solver is presented for the three-dimensional (3D) heterogeneous Helmholtz equation. We consider the preconditioned Krylov subspace methods for solving the linear system obtained from finite-difference discretization. The Complex Shifted Laplace Preconditioner (CSLP) is employed since it results in a linear increase in the number of iterations as a function of the wavenumber. The preconditioner is approximately inverted using one parallel 3D multigrid cycle. For parallel computing, the global domain is partitioned blockwise. The matrix-vector multiplication and preconditioning operator are implemented in a matrix-free way instead of constructing large, memory-consuming coefficient matrices. Numerical experiments of 3D model problems demonstrate the robustness and outstanding strong scaling of our matrix-free parallel solution method. Moreover, the weak parallel scalability indicates our approach is suitable for realistic 3D heterogeneous Helmholtz problems with minimized pollution error.
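As a minimal illustration of the matrix-free idea, the sketch below applies a 7-point finite-difference Helmholtz operator and a CSLP-shifted variant without ever assembling a matrix. Constant wavenumber, zero Dirichlet boundaries, and the shift $(\beta_1,\beta_2)=(1,0.5)$ are our simplifying assumptions, not necessarily the paper's; in the solver described above the CSLP inverse is approximated by one parallel multigrid cycle.
\begin{verbatim}
import numpy as np

def laplacian(u, h):
    """Matrix-free 7-point -Laplacian; the zero halo encodes Dirichlet BCs."""
    p = np.pad(u, 1)
    out = 6.0*p[1:-1, 1:-1, 1:-1]
    out -= p[:-2, 1:-1, 1:-1] + p[2:, 1:-1, 1:-1]
    out -= p[1:-1, :-2, 1:-1] + p[1:-1, 2:, 1:-1]
    out -= p[1:-1, 1:-1, :-2] + p[1:-1, 1:-1, 2:]
    return out/h**2

def helmholtz_matvec(u, k, h):
    """A u = (-Laplace - k^2) u, applied without assembling A."""
    return laplacian(u, h) - k**2*u

def cslp_matvec(u, k, h, beta=(1.0, 0.5)):
    """CSLP operator M u = (-Laplace - (beta1 - i beta2) k^2) u; in practice
    M^{-1} is approximated by one multigrid cycle, not applied exactly."""
    return laplacian(u, h) - (beta[0] - 1j*beta[1])*k**2*u

n, h, k = 32, 1.0/33, 20.0
u = np.random.default_rng(0).standard_normal((n, n, n)).astype(complex)
v = helmholtz_matvec(u, k, h)    # one matrix-free operator application
w = cslp_matvec(u, k, h)         # one matrix-free preconditioner stencil
\end{verbatim}
Only a few grid-sized work arrays are needed per application, which is what makes the blockwise-partitioned parallel version memory-scalable.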

Blumer et al. (1987, 1989) showed that any concept class that is learnable by Occam algorithms is PAC learnable. Board and Pitt (1990) showed a partial converse of this theorem: for concept classes that are closed under exception lists, any class that is PAC learnable is learnable by an Occam algorithm. However, their Occam algorithm outputs a hypothesis whose complexity is $\delta$-dependent, which is an important limitation. In this paper, we show that their partial converse applies to Occam algorithms with $\delta$-independent complexities as well. Thus, we provide a posteriori justification of various theoretical results and algorithm design methods which use the partial converse as a basis for their work.
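For reference, in Blumer et al.'s sense an $(\alpha,\beta)$-Occam algorithm, given $m$ examples of a target concept $c$ over instances of size $n$, outputs a consistent hypothesis $h$ with
\[
\mathrm{size}(h) \;\le\; \bigl(n \cdot \mathrm{size}(c)\bigr)^{\alpha}\, m^{\beta}, \qquad \alpha \ge 1,\ 0 \le \beta < 1;
\]
in the $\delta$-dependent variant arising from Board and Pitt's construction, this size bound may additionally depend on the confidence parameter $\delta$, and it is that dependence which the present paper removes.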
