
In this paper, I present three closed-form approximations of the two-sample Pearson Bayes factor. The techniques rely on classical asymptotic results about gamma functions. These approximations permit simple closed-form calculation of the Pearson Bayes factor in cases where only the summary statistics are available (i.e., the t-score and degrees of freedom).
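As a quick illustration of the kind of gamma-function asymptotics such approximations rest on, the sketch below checks the classical ratio estimate Gamma(x + a)/Gamma(x) ~ x^a as x grows. This is not the paper's Bayes-factor formula; the half-integer arguments are only meant to mimic the degrees-of-freedom dependence of t-based factors.

```python
# Classical gamma-ratio asymptotic: Gamma(x + a) / Gamma(x) ~ x**a as
# x -> infinity. Illustrative check only, not the paper's formula.
import numpy as np
from scipy.special import gammaln

def gamma_ratio_exact(x, a):
    """Gamma(x + a) / Gamma(x), computed stably via log-gamma."""
    return np.exp(gammaln(x + a) - gammaln(x))

def gamma_ratio_approx(x, a):
    """First-order asymptotic approximation x**a."""
    return x ** a

for df in [10, 50, 200]:       # degrees of freedom as the large parameter
    x, a = df / 2.0, 0.5       # half-integer arguments typical of t-based factors
    exact, approx = gamma_ratio_exact(x, a), gamma_ratio_approx(x, a)
    print(df, exact, approx, abs(exact - approx) / exact)
```

The relative error shrinks as the degrees of freedom grow, which is what makes summary-statistic-only approximations of this type accurate for moderate to large samples.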

Related content

We consider the problem of using SciML to predict solutions of high-Mach fluid flows over irregular geometries. In this setting, data are limited, so it is desirable for models to perform well in the low-data regime. We show that Neural Basis Functions (NBF), which learns a basis of behavior modes from the data and then uses this basis to make predictions, is more effective than a basis-unaware baseline model. In addition, we identify continuing challenges in predicting solutions for this type of problem.
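A minimal sketch of the basis-learning idea, with a linear SVD (POD) basis standing in for the paper's neural model and synthetic placeholder snapshots in place of flow data:

```python
# Learn a basis of behavior modes from training snapshots via SVD, then
# represent new solutions as combinations of those modes. NBF itself is
# a neural model; this linear stand-in only illustrates the mechanism.
import numpy as np

rng = np.random.default_rng(0)
snapshots = rng.standard_normal((500, 40)) @ rng.standard_normal((40, 1000))
# rows: training solutions on a 1000-point grid (synthetic, rank-40 data)

U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
r = 40                       # number of retained modes
basis = Vt[:r]               # (r, n_grid) learned behavior modes

def project(u):
    """Coefficients of a solution in the learned basis."""
    return basis @ u

def reconstruct(coeffs):
    """Prediction as a linear combination of modes."""
    return basis.T @ coeffs

u_new = snapshots[0]
print(np.linalg.norm(u_new - reconstruct(project(u_new))))  # near zero: rank-40 data
```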

In this paper, two novel classes of implicit exponential Runge-Kutta (ERK) methods are studied for solving highly oscillatory systems. First, we analyze the symplectic conditions of two kinds of exponential integrators and present a first-order symplectic method. To solve highly oscillatory problems, highly accurate implicit ERK integrators (up to order four) are formulated by comparing the Taylor expansions of the numerical and exact solutions; it is shown that the order conditions of the two new kinds of exponential methods are identical to those of classical Runge-Kutta (RK) methods. Moreover, we investigate the linear stability properties of these exponential methods. Finally, numerical results not only demonstrate the long-time energy preservation of the first-order symplectic method, but also illustrate the accuracy and efficiency of the formulated methods in comparison with standard ERK methods.
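For readers unfamiliar with exponential integrators, here is the simplest member of the family (exponential Euler) applied to a semilinear oscillatory system y' = Ay + g(y). It is a sketch of the general ERK idea only, not one of the paper's new symplectic or fourth-order schemes; the test system and step size are arbitrary choices.

```python
# Exponential Euler: y_{n+1} = e^{hA} y_n + h * phi_1(hA) g(y_n), where
# the stiff oscillatory linear part is propagated exactly.
import numpy as np
from scipy.linalg import expm, solve

omega = 50.0
A = np.array([[0.0, 1.0], [-omega**2, 0.0]])        # stiff oscillatory linear part
g = lambda y: np.array([0.0, 0.1 * np.sin(y[0])])   # weak nonlinearity

h, n_steps = 0.01, 1000
E = expm(h * A)                          # exact flow of the linear part
phi1 = solve(h * A, E - np.eye(2))       # phi_1(hA) = (hA)^{-1}(e^{hA} - I)

y = np.array([1.0, 0.0])
for _ in range(n_steps):
    y = E @ y + h * (phi1 @ g(y))        # exponential Euler step
print(y)
```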

This paper makes the first bridge between the classical differential/boomerang uniformity and the newly introduced $c$-differential uniformity. We show that the boomerang uniformity of an odd APN function is given by the maximum of the entries (except for the first row/column) of the function's $(-1)$-Difference Distribution Table. In fact, the boomerang uniformity of an odd permutation APN function equals its $(-1)$-differential uniformity. We next apply this result to easily compute the boomerang uniformity of several odd APN functions. In the second part we give two classes of differentially low-uniform functions obtained by modifying the inverse function. The first class of permutations (CCZ-inequivalent to the inverse) over a finite field $\mathbb{F}_{p^n}$ ($p$, an odd prime) is obtained from the composition of the inverse function with an order-$3$ cycle permutation, with differential uniformity $3$ if $p=3$ and $n$ is odd; $5$ if $p=13$ and $n$ is even; and $4$ otherwise. The second class is a family of binomials and we show that their differential uniformity equals~$4$. We next complete the open case of $p=3$ in the investigation started by G\"{o}lo\u{g}lu and McGuire (2014), for $p\geq 5$, and continued by K\"olsch (2021), for $p=2$, $n\geq 5$, on the characterization of $L_1(X^{p^n-2})+L_2(X)$ (with linearized $L_1,L_2$) being a permutation polynomial. Finally, we extend to odd characteristic a result of Charpin and Kyureghyan (2010) providing an upper bound for the differential uniformity of the function and its switched version via a trace function.
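The $(-1)$-Difference Distribution Table is easy to tabulate by brute force for small fields. The sanity-check sketch below does this over a prime field (the $n=1$ case, whereas the paper works over general $\mathbb{F}_{p^n}$); the map $F$ and the prime $p$ are arbitrary choices for illustration.

```python
# Brute-force (-1)-DDT over F_p: entry (a, b) counts solutions of
# F(x + a) - c*F(x) = b with c = -1, i.e. F(x + a) + F(x) = b.
p = 13
F = lambda x: pow(x, 3, p)           # an example power map over F_13

ddt = [[0] * p for _ in range(p)]
for a in range(p):
    for x in range(p):
        b = (F((x + a) % p) + F(x)) % p
        ddt[a][b] += 1

# Maximum entry outside the first row/column, matching the convention
# used in the paper's boomerang-uniformity comparison.
print(max(ddt[a][b] for a in range(1, p) for b in range(1, p)))
```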

In this paper, we consider the two-sample location shift model, a classic semiparametric model introduced by Stein (1956). This model is known for its adaptive nature, enabling nonparametric estimation with full parametric efficiency. Existing nonparametric estimators of the location shift often depend on external tuning parameters, which restricts their practical applicability (Van der Vaart and Wellner, 2021). We demonstrate that introducing an additional assumption of log-concavity on the underlying density can alleviate the need for tuning parameters. We propose a one-step estimator for location shift estimation, utilizing log-concave density estimation techniques to facilitate tuning-free estimation of the efficient influence function. While we employ a truncated version of the one-step estimator for theoretical adaptivity, our simulations indicate that the one-step estimator performs best with zero truncation, eliminating the need for tuning during practical implementation.
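The one-step principle itself is simple: start from a cheap root-n-consistent estimate and add a single Newton-type correction based on the score. The sketch below shows it in the easiest setting, a one-sample location problem with a known logistic density (Fisher information 1/3), so the closed-form score stands in for the paper's tuning-free log-concave score estimate.

```python
# One-step estimator: theta_1 = theta_0 + I^{-1} * mean(score(x; theta_0)).
# Known logistic score here; the paper estimates the score via a
# log-concave density fit instead.
import numpy as np

rng = np.random.default_rng(1)
theta_true = 2.0
x = rng.logistic(loc=theta_true, size=2000)

theta0 = np.median(x)                    # initial root-n consistent estimate
score = np.tanh((x - theta0) / 2.0)      # location score of the logistic density
theta1 = theta0 + 3.0 * score.mean()     # one-step update (Fisher info = 1/3)
print(theta0, theta1)
```

One correction step already pushes the crude median toward full parametric efficiency, which is exactly the appeal of the one-step construction in the semiparametric setting.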

In this paper, we introduce a new Bayesian approach for analyzing task fMRI data that simultaneously detects activation signatures and background connectivity. Our modeling involves a new hybrid tensor spatial-temporal basis strategy that enables scalable computing yet captures nearby and distant intervoxel correlation and long-memory temporal correlation. The spatial basis involves a composite hybrid transform with two levels: the first accounts for within-ROI correlation, and the second for distant between-ROI correlation. We demonstrate in simulations how our basis space regression modeling strategy increases sensitivity for identifying activation signatures, partly driven by the induced background connectivity that itself can be summarized to reveal biological insights. This strategy leads to computationally scalable fully Bayesian inference at the voxel or ROI level that adjusts for multiple testing. We apply this model to Human Connectome Project data to reveal insights into brain activation patterns and background connectivity related to working memory tasks.
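A schematic of the basis-space regression mechanics: project the data onto a spatial basis, fit the task regression on the far fewer basis coefficients, and map effects back to voxel space. The paper's composite hybrid tensor basis and fully Bayesian inference are replaced here by a plain SVD basis, least squares, and synthetic data, purely to show why working in basis space scales.

```python
# Basis-space GLM: regress basis coefficients (not voxels) on the task
# design, then lift the fitted effects back to voxel space.
import numpy as np

rng = np.random.default_rng(2)
n_vox, n_time = 5000, 200
design = rng.standard_normal((n_time, 3))          # task regressors
Y = rng.standard_normal((n_vox, n_time))           # synthetic "fMRI" data

U, s, Vt = np.linalg.svd(Y, full_matrices=False)
B = U[:, :50]                                      # spatial basis (n_vox, 50)
C = B.T @ Y                                        # basis-space time series

beta_basis, *_ = np.linalg.lstsq(design, C.T, rcond=None)  # GLM in basis space
beta_voxel = B @ beta_basis.T                      # map effects back to voxels
print(beta_voxel.shape)                            # (n_vox, 3) activation maps
```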

The present article proposes a partitioned Dirichlet-Neumann algorithm that addresses the unique challenges arising from a novel mixed-dimensional coupling of very slender fibers embedded in fluid flow using a regularized mortar-type finite element discretization. The fibers are modeled via one-dimensional (1D) partial differential equations based on geometrically exact nonlinear beam theory, while the flow is described by the three-dimensional (3D) incompressible Navier-Stokes equations. The resulting truly mixed-dimensional 1D-3D coupling scheme constitutes a novel approximate model and numerical strategy that naturally necessitates specifically tailored solution schemes to ensure an accurate and efficient computational treatment. In particular, we present a strongly coupled partitioned solution algorithm based on a Quasi-Newton method for applications involving fibers with high slenderness ratios, which usually present a challenge with regard to the well-known added mass effect. The influence of all algorithmic and numerical parameters, namely the acceleration technique, the constraint regularization parameter, and the shape functions, on the efficiency and results of the solution procedure is studied through appropriate examples. Finally, the convergence of the two-way coupled mixed-dimensional problem solution under uniform mesh refinement is demonstrated, a comparison to a 3D reference solution is performed, and the method's capabilities in capturing flow phenomena at large geometric scale separation are illustrated by the example of a submersed vegetation canopy.
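To make the role of acceleration in strongly coupled partitioned schemes concrete, here is a fixed-point interface iteration with Aitken dynamic relaxation, a simpler acceleration than the paper's quasi-Newton scheme. The operator `interface_op` is a hypothetical stand-in for one fluid solve followed by one structural solve; a contractive linear map replaces the real solvers.

```python
# Partitioned coupling loop x = S(F(x)) with Aitken dynamic relaxation:
# the relaxation factor omega adapts from successive interface residuals.
import numpy as np

rng = np.random.default_rng(3)
M = rng.standard_normal((20, 20))
M *= 0.8 / np.linalg.norm(M, 2)             # contractive model interface operator
interface_op = lambda x: M @ x + 1.0        # hypothetical fluid+solid round trip

x = np.zeros(20)
omega, r_old = 0.5, None
for k in range(200):
    r = interface_op(x) - x                 # interface residual
    if r_old is not None:
        dr = r - r_old
        omega = -omega * (r_old @ dr) / (dr @ dr)   # Aitken update
    x = x + omega * r                       # relaxed fixed-point step
    r_old = r
    if np.linalg.norm(r) < 1e-10:
        break
print(k, np.linalg.norm(r))
```

Quasi-Newton methods such as the one used in the paper replace the scalar omega with a low-rank approximation of the inverse interface Jacobian, which is what makes them robust against the added mass effect at high slenderness ratios.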

Of all the possible projection methods for solving large-scale Lyapunov matrix equations, Galerkin approaches remain much more popular than Petrov-Galerkin ones. This is mainly due to the different nature of the projected problems stemming from these two families of methods. While a Galerkin approach leads to the solution of a low-dimensional matrix equation per iteration, a matrix least-squares problem needs to be solved per iteration in a Petrov-Galerkin setting. The significant computational cost of these least-squares problems has steered researchers towards Galerkin methods in spite of the appealing minimization properties of Petrov-Galerkin schemes. In this paper we introduce a framework that allows for modifying the Galerkin approach by low-rank, additive corrections to the projected matrix equation problem, with the two-fold goal of attaining monotonic convergence rates similar to those of Petrov-Galerkin schemes while maintaining essentially the same computational cost as the original Galerkin method. We analyze the well-posedness of our framework and determine possible scenarios where we expect the residual norm attained by two low-rank-modified variants to behave similarly to the one computed by a Petrov-Galerkin technique. A panel of diverse numerical examples shows the behavior and potential of our new approach.
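For orientation, the standard Galerkin projection step that such corrections build on looks as follows for $AX + XA^T + BB^T = 0$: project onto a block Krylov basis, solve the small projected Lyapunov equation, lift, and check the residual. The paper's low-rank corrections themselves are not reproduced; the test matrices are arbitrary.

```python
# Galerkin projection for the Lyapunov equation A X + X A^T + B B^T = 0.
import numpy as np
from scipy.linalg import qr, solve_continuous_lyapunov

rng = np.random.default_rng(4)
n = 400
A = -2.0 * np.eye(n) + 0.05 * rng.standard_normal((n, n))  # comfortably stable
B = rng.standard_normal((n, 2))

blocks = [B]
for _ in range(9):                      # block Krylov space K_10(A, B)
    blocks.append(A @ blocks[-1])
V, _ = qr(np.hstack(blocks), mode='economic')

Am, Bm = V.T @ A @ V, V.T @ B
Y = solve_continuous_lyapunov(Am, -Bm @ Bm.T)   # small projected equation
X = V @ Y @ V.T                                 # lifted low-rank approximation

R = A @ X + X @ A.T + B @ B.T
print(np.linalg.norm(R) / np.linalg.norm(B @ B.T))   # relative residual
```

A Petrov-Galerkin scheme would instead minimize this residual over the search space at the cost of a matrix least-squares solve per iteration, which is the trade-off the paper's corrected Galerkin framework is designed to avoid.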

The trace plot is seldom used in meta-analysis, yet it is a very informative plot. In this article we define and illustrate what the trace plot is, and discuss why it is important. The Bayesian version of the plot combines the posterior density of tau, the between-study standard deviation, and the shrunken estimates of the study effects as a function of tau. With a small or moderate number of studies, tau is not estimated with much precision, and parameter estimates and shrunken study effect estimates can vary widely depending on the value of tau. The trace plot allows visualization of the sensitivity to tau along with a plot that shows which values of tau are plausible and which are implausible. A comparable frequentist or empirical Bayes version provides similar results. The concepts are illustrated using examples in meta-analysis and meta-regression; implementation in R is facilitated in a Bayesian or frequentist framework using the bayesmeta and metafor packages, respectively.
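A minimal frequentist version of the trace plot is easy to compute by hand: for a grid of tau values, form the usual random-effects weights, the pooled estimate, and the shrunken study effects, and plot the latter against tau. The bayesmeta and metafor R packages mentioned in the article provide polished versions; the study data below are made up for illustration.

```python
# Trace plot: shrunken study effects as a function of tau.
import numpy as np
import matplotlib.pyplot as plt

y = np.array([0.30, 0.10, 0.55, -0.05, 0.25])   # study effect estimates
v = np.array([0.04, 0.02, 0.09, 0.03, 0.05])    # within-study variances

taus = np.linspace(0.0, 0.8, 200)
shrunken = np.empty((len(taus), len(y)))
for i, tau in enumerate(taus):
    w = 1.0 / (v + tau**2)
    mu = np.sum(w * y) / np.sum(w)               # pooled mean given tau
    b = v / (v + tau**2)                         # shrinkage factors
    shrunken[i] = b * mu + (1.0 - b) * y         # shrunken study effects

plt.plot(taus, shrunken)
plt.xlabel('tau (between-study SD)')
plt.ylabel('shrunken study effects')
plt.show()
```

At tau = 0 every study is shrunk fully to the pooled mean, while for large tau the estimates return to the raw study effects; the plot makes visible how strongly conclusions depend on where in that range the true tau lies.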

This work introduces a novel cause-effect relation in Markov decision processes using the probability-raising principle. Initially, sets of states as causes and effects are considered, which is subsequently extended to regular path properties as effects and then as causes. The paper lays the mathematical foundations and analyzes the algorithmic properties of these cause-effect relations. This includes algorithms for checking cause conditions given an effect and deciding the existence of probability-raising causes. As the definition allows for sub-optimal coverage properties, quality measures for causes inspired by concepts of statistical analysis are studied. These include recall, coverage ratio and f-score. The computational complexity for finding optimal causes with respect to these measures is analyzed.
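In its simplest form, the probability-raising principle for a candidate cause state c and effect set E compares Pr_c(reach E) against the probability from the initial state; reachability probabilities in a finite Markov chain solve a linear system. The sketch below checks this simplified state-based condition on a made-up chain and is far from the paper's full treatment of MDPs and regular path properties.

```python
# Strict probability-raising check in a finite Markov chain.
import numpy as np

P = np.array([[0.0, 0.5, 0.0, 0.5],
              [0.0, 0.0, 0.9, 0.1],
              [0.0, 0.0, 1.0, 0.0],    # state 2: effect (absorbing)
              [0.0, 0.0, 0.0, 1.0]])   # state 3: safe sink (absorbing)
effect, init = 2, 0

# Reachability probabilities x = P x with x[effect] = 1, x[sink] = 0,
# solved on the transient states.
trans = [0, 1]
A = np.eye(len(trans)) - P[np.ix_(trans, trans)]
b = P[np.ix_(trans, [effect])].ravel()
x = np.zeros(P.shape[0])
x[effect] = 1.0
x[trans] = np.linalg.solve(A, b)

for c in trans:
    if c != init and x[c] > x[init]:
        print(f'state {c} is a probability-raising candidate: '
              f'{x[c]:.3f} > {x[init]:.3f}')
```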

The Gearhart-Koshy acceleration for the Kaczmarz method for linear systems is a line search with the unusual property that it does not minimize the residual, but the error. Recently one of the authors generalized this acceleration from a line search to a search in affine subspaces. In this paper, we demonstrate that the affine search is a Krylov space method that is neither a CG-type nor a MINRES-type method, and we prove that it is mathematically equivalent to a more canonical Gram-Schmidt-based method. We also investigate what abstract property of the Kaczmarz method enables this type of algorithm, and we conclude with a simple numerical example.
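The error-minimizing property is easiest to see in the homogeneous case Ax = 0, where the Kaczmarz sweep T preserves the null-space component of the iterate and the optimal step along the line x + t(Tx - x) has the closed form t* = <x, x - Tx> / ||x - Tx||^2. The sketch below implements only this line-search case, not the paper's affine subspace generalization, on an arbitrary underdetermined test system.

```python
# Gearhart-Koshy accelerated Kaczmarz for the homogeneous system A x = 0.
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((30, 60))       # underdetermined: nontrivial null space
x = rng.standard_normal(60)

def sweep(x):
    """One full Kaczmarz sweep through the rows of A (right-hand side 0)."""
    y = x.copy()
    for a in A:
        y -= (a @ y) / (a @ a) * a
    return y

def error(x):
    """Distance from x to the solution set, i.e. to the null space of A."""
    return np.linalg.norm(np.linalg.pinv(A) @ (A @ x))

for _ in range(20):
    Tx = sweep(x)
    d = x - Tx
    t = (x @ d) / (d @ d)               # error-minimizing Gearhart-Koshy step
    x = x + t * (Tx - x)
print(error(x))
```

The step t* is computable without knowing the solution because x - Tx lies in the row space of A, so the unknown null-space component of the error drops out of the inner product.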
