
We present a computationally efficient algorithm that is suitable for GPU implementation. This algorithm enables the identification, within a given input set, of all weak pseudo-manifolds that meet specific facet conditions. We employ this approach to enumerate toric colorable seed PL-spheres. Consequently, we achieve a comprehensive characterization of PL-spheres of dimension $n-1$ with $n+4$ vertices that possess a maximal Buchstaber number. A primary focus of this research is the classification of non-singular complete toric varieties of Picard number $4$. This classification serves as a valuable tool for addressing questions related to toric manifolds of Picard number $4$. Notably, we have determined which of these manifolds satisfy equality in an inequality concerning the number of minimal components of their space of rational curves. This answers a question posed by Chen-Fu-Hwang in 2014 for this specific case.
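
To make the combinatorial condition concrete, the hedged sketch below (function name and example are illustrative, not taken from the paper) checks whether a pure simplicial complex, given by its facets, is a weak pseudo-manifold in the sense that every ridge lies in exactly two facets; the actual GPU enumeration and its additional facet conditions are considerably more involved.

```python
from itertools import combinations
from collections import Counter

def is_weak_pseudomanifold(facets):
    """Check the weak pseudo-manifold condition for a pure simplicial
    complex given by its facets (iterables of vertex labels).

    Convention used here: every ridge (codimension-one face) must be
    contained in exactly two facets; some authors relax this to
    'at most two'.
    """
    facets = [frozenset(f) for f in facets]
    sizes = {len(f) for f in facets}
    if len(sizes) != 1:
        return False  # not pure
    d = sizes.pop()
    ridge_count = Counter()
    for f in facets:
        for ridge in combinations(sorted(f), d - 1):
            ridge_count[ridge] += 1
    return all(c == 2 for c in ridge_count.values())

# The boundary of a tetrahedron (a 2-sphere) passes the test.
print(is_weak_pseudomanifold([(1, 2, 3), (1, 2, 4), (1, 3, 4), (2, 3, 4)]))  # True
```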

Related content

We introduce and analyse a family of hash and predicate functions that are more likely to produce collisions for small reducible configurations of vectors. These may offer practical improvements to lattice sieving for short vectors. In particular, in one asymptotic regime the family exhibits convergence behaviour significantly different from that of existing hash functions and predicates.
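
As a point of reference only, the sketch below shows a classical sign-based locality-sensitive hash (SimHash), which illustrates how hash functions are typically used to bucket nearby vectors in sieving; it is not the family introduced here, and all names and parameters are illustrative.

```python
import numpy as np

def simhash(v, directions):
    """Sign-based locality-sensitive hash: vectors at small angles are
    likely to collide.  Shown only to illustrate how hash functions
    drive bucketing in lattice sieving."""
    return tuple(int(np.dot(d, v) >= 0) for d in directions)

rng = np.random.default_rng(0)
dim, n_hashes = 32, 8
directions = rng.standard_normal((n_hashes, dim))

v = rng.standard_normal(dim)
w = v + 0.05 * rng.standard_normal(dim)   # a small perturbation of v
u = rng.standard_normal(dim)              # an unrelated vector

print(simhash(v, directions) == simhash(w, directions))  # likely True
print(simhash(v, directions) == simhash(u, directions))  # likely False
```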

Adaptive importance sampling (AIS) algorithms are widely used to approximate expectations with respect to complicated target probability distributions. When the target has heavy tails, existing AIS algorithms can provide inconsistent estimators or exhibit slow convergence, as they often neglect the target's tail behaviour. To avoid this pitfall, we propose an AIS algorithm that approximates the target by Student-t proposal distributions. We adapt location and scale parameters by matching the escort moments - which are defined even for heavy-tailed distributions - of the target and the proposal. These updates minimize the $\alpha$-divergence between the target and the proposal, thereby connecting with variational inference. We then show that the $\alpha$-divergence can be approximated by a generalized notion of effective sample size and leverage this new perspective to adapt the tail parameter with Bayesian optimization. We demonstrate the efficacy of our approach through applications to synthetic targets and a Bayesian Student-t regression task on real clinical trial data.
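
A loose illustration of the moment-matching idea, under simplifying assumptions: the sketch below performs one-dimensional importance sampling with a Student-t proposal and adapts its location and scale from self-normalized weights tempered by a power $\alpha$, an escort-style reweighting. It is not the paper's algorithm (the tail adaptation, the ESS-based $\alpha$-divergence estimate, and multivariate updates are omitted), and all function names are hypothetical.

```python
import numpy as np
from scipy import stats

def adapt_student_t(log_target, mu, sigma, nu, alpha=0.5, n=5000, rng=None):
    """One illustrative adaptation step: draw from a Student-t proposal,
    temper the self-normalized importance weights by alpha (an
    escort-style reweighting), and refit location and scale from the
    weighted samples.  Not the paper's exact update."""
    rng = np.random.default_rng(rng)
    proposal = stats.t(df=nu, loc=mu, scale=sigma)
    x = proposal.rvs(size=n, random_state=rng)
    logw = log_target(x) - proposal.logpdf(x)
    w = np.exp(alpha * (logw - logw.max()))
    w /= w.sum()
    new_mu = np.sum(w * x)                                # weighted location
    new_sigma = np.sqrt(np.sum(w * (x - new_mu) ** 2))    # weighted scale
    return new_mu, new_sigma

# Heavy-tailed target: a standard Cauchy shifted to 3.
log_target = stats.cauchy(loc=3.0).logpdf
mu, sigma = 0.0, 1.0
for _ in range(10):
    mu, sigma = adapt_student_t(log_target, mu, sigma, nu=2.0)
print(mu, sigma)  # the location parameter should drift towards 3
```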

In this paper we propose a local projector for truncated hierarchical B-splines (THB-splines). The local THB-spline projector is an adaptation of the B\'ezier projector proposed by Thomas et al. (Comput. Methods Appl. Mech. Eng. 284, 2015) for B-splines and analysis-suitable T-splines (AS T-splines). For THB-splines, there are elements on which the restrictions of THB-splines are linearly dependent, contrary to B-splines and AS T-splines. Therefore, we cluster certain local mesh elements together such that the THB-splines with support over these clusters are linearly independent, and the B\'ezier projector is adapted to use these clusters. We introduce general extensions for which optimal convergence is shown theoretically and numerically. In addition, a simple adaptive refinement scheme is introduced and compared to that of Giust et al. (Comput. Aided Geom. Des. 80, 2020), and we find that our approach shows promise.
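
For orientation only, the hedged sketch below performs a plain discrete least-squares projection onto a univariate B-spline space; the projector proposed in the paper is instead local, cluster-based, and designed for THB-splines, none of which is reproduced here. The setup (knot vector, sample points) is hypothetical.

```python
import numpy as np
from scipy.interpolate import BSpline  # design_matrix requires a recent SciPy (>= 1.8)

degree = 2
# Open knot vector on [0, 1] with uniform interior spans (hypothetical setup).
knots = np.r_[np.zeros(degree), np.linspace(0.0, 1.0, 9), np.ones(degree)]

x = np.linspace(0.01, 0.99, 500)                        # sample points inside the domain
B = BSpline.design_matrix(x, knots, degree).toarray()   # B-spline collocation matrix
f = np.sin(2.0 * np.pi * x)                              # function to project

coeffs, *_ = np.linalg.lstsq(B, f, rcond=None)           # discrete least-squares fit
spline = BSpline(knots, coeffs, degree)
print(np.max(np.abs(spline(x) - f)))                     # small approximation error
```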

We construct a monotone continuous $Q^1$ finite element method on a uniform mesh for the anisotropic diffusion problem with a diagonally dominant diffusion coefficient matrix. The monotonicity implies the discrete maximum principle. Convergence of the new scheme is rigorously proven. On quadrilateral meshes, the conditions on the coefficient matrix translate into a specific mesh constraint.
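
Monotonicity here means that the discrete system matrix is inverse-positive, which yields the discrete maximum principle. The minimal sketch below (not the paper's scheme) checks inverse positivity numerically for the standard 1-D stiffness matrix, which is an M-matrix and hence inverse-positive.

```python
import numpy as np

def is_inverse_positive(A, tol=1e-12):
    """Numerically check inverse positivity (all entries of A^{-1} >= 0),
    the discrete analogue of monotonicity that implies the discrete
    maximum principle for the associated scheme."""
    return bool(np.all(np.linalg.inv(A) >= -tol))

# 1-D Laplacian stiffness matrix on a uniform mesh with Dirichlet BCs:
# tridiagonal with 2 on the diagonal and -1 off-diagonal.
n = 8
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
print(is_inverse_positive(A))  # True
```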

Recently it has become common for applied works to combine widely used survival analysis modeling methods, such as the multivariable Cox model, with propensity score weighting, with the intention of forming a doubly robust estimator that is unbiased in large samples when either the Cox model or the propensity score model is correctly specified. This combination does not, in general, produce a doubly robust estimator, even after regression standardization, when there is truly a causal effect. We demonstrate this lack of double robustness via simulation for the semiparametric Cox model, the Weibull proportional hazards model, and a simple proportional hazards flexible parametric model, with the latter two models fit via maximum likelihood. We provide a novel proof that the combination of propensity score weighting and a proportional hazards survival model, fit via either full or partial likelihood, is consistent under the null of no causal effect of the exposure on the outcome, under particular censoring mechanisms, if either the propensity score or the outcome model is correctly specified and contains all confounders. Given that our results suggest double robustness holds only under the null, we outline two simple alternative estimators that are doubly robust (in the above sense) for the survival difference at a given time point, provided the censoring mechanism can be correctly modeled, as well as one doubly robust method for estimating the full survival curve. R code for estimation and inference with these estimators is provided in the supplementary materials.
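
To illustrate the generic doubly robust (AIPW) combination at a fixed time point, the hedged sketch below treats survival beyond $t$ as a fully observed binary outcome, i.e. it assumes no censoring before $t$; the paper's estimators additionally require a model for the censoring mechanism, and all names and the simulated data here are illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def aipw_survival_difference(X, A, Y):
    """AIPW estimate of the difference in P(T > t) between exposure
    groups, with Y = 1{T > t} assumed fully observed (no censoring
    before t).  Illustrative only, not one of the paper's estimators."""
    ps = LogisticRegression(max_iter=1000).fit(X, A).predict_proba(X)[:, 1]
    out1 = LogisticRegression(max_iter=1000).fit(X[A == 1], Y[A == 1])
    out0 = LogisticRegression(max_iter=1000).fit(X[A == 0], Y[A == 0])
    m1, m0 = out1.predict_proba(X)[:, 1], out0.predict_proba(X)[:, 1]
    mu1 = np.mean(A * (Y - m1) / ps + m1)            # augmented IPW, treated
    mu0 = np.mean((1 - A) * (Y - m0) / (1 - ps) + m0)  # augmented IPW, control
    return mu1 - mu0

rng = np.random.default_rng(1)
n = 2000
X = rng.standard_normal((n, 2))
A = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))        # confounded exposure
T = rng.exponential(scale=np.exp(0.5 * A - 0.3 * X[:, 1]))
Y = (T > 1.0).astype(int)                              # survival beyond t = 1
print(aipw_survival_difference(X, A, Y))
```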

Boundary value problems involving elliptic PDEs such as the Laplace and the Helmholtz equations are ubiquitous in physics and engineering. Many such problems have alternative formulations as integral equations that are mathematically more tractable than their PDE counterparts. However, the integral equation formulation poses a challenge in solving the dense linear systems that arise upon discretization. In cases where iterative methods converge rapidly, existing methods that draw on fast summation schemes such as the Fast Multipole Method are highly efficient and well established. More recently, linear complexity direct solvers that sidestep convergence issues by directly computing an invertible factorization have been developed. However, storage and compute costs are high, which limits their ability to solve large-scale problems in practice. In this work, we introduce a distributed-memory parallel algorithm based on an existing direct solver named ``strong recursive skeletonization factorization.'' The analysis of its parallel scalability applies generally to a class of existing methods that exploit the so-called strong admissibility. Specifically, we apply low-rank compression to certain off-diagonal matrix blocks in a way that minimizes data movement. Given a compression tolerance, our method constructs an approximate factorization of a discretized integral operator (dense matrix), which can be used to solve linear systems efficiently in parallel. Compared to iterative algorithms, our method is particularly suitable for problems involving ill-conditioned matrices or multiple right-hand sides. Large-scale numerical experiments are presented to demonstrate the performance of our implementation using the Julia language.
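
The compressibility that strong admissibility exploits can be seen directly: the sketch below (illustrative only, not the solver itself) forms the interaction block of a 2-D Laplace kernel between two well-separated point clusters and reports its numerical rank at a fixed tolerance.

```python
import numpy as np

rng = np.random.default_rng(0)
src = rng.uniform(0.0, 1.0, size=(400, 2))          # source cluster in [0, 1]^2
trg = rng.uniform(0.0, 1.0, size=(400, 2)) + 3.0    # well-separated target cluster

# 2-D Laplace kernel log|x - y| evaluated between the two clusters.
diff = trg[:, None, :] - src[None, :, :]
K = np.log(np.linalg.norm(diff, axis=-1))

# Singular values decay rapidly, so the block admits low-rank compression.
tol = 1e-8
s = np.linalg.svd(K, compute_uv=False)
rank = int(np.sum(s > tol * s[0]))
print(f"numerical rank at tol {tol:g}: {rank} out of {K.shape[0]}")
```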

For a hypergraph $H$, a transversal is a subset of vertices whose intersection with every edge is nonempty. The cardinality of a minimum transversal is the transversal number of $H$, denoted by $\tau(H)$. The Tuza constant $c_k$ is defined as $\sup \tau(H)/(m+n)$, where the supremum is taken over all $k$-uniform hypergraphs $H$, with $m$ and $n$ being the number of edges and vertices, respectively. We give an upper bound and a lower bound on $c_k$. The upper bound improves the known bounds for $k\geq 7$, and the lower bound improves the known bounds for $k\in\{7, 8, 10, 11, 13, 14, 17\}$.
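
A minimal sketch making $\tau(H)$ and the ratio $\tau(H)/(m+n)$ concrete on a small example (brute force, feasible only for tiny hypergraphs; unrelated to the bounds proved in the paper):

```python
from itertools import combinations

def transversal_number(vertices, edges):
    """Brute-force minimum transversal: smallest vertex subset meeting
    every edge.  Only feasible for small hypergraphs."""
    vertices = list(vertices)
    edges = [set(e) for e in edges]
    for size in range(len(vertices) + 1):
        for cand in combinations(vertices, size):
            s = set(cand)
            if all(s & e for e in edges):
                return size
    return len(vertices)

# The Fano plane: a 3-uniform hypergraph with n = 7 vertices and m = 7 edges.
fano = [(1, 2, 3), (1, 4, 5), (1, 6, 7), (2, 4, 6), (2, 5, 7), (3, 4, 7), (3, 5, 6)]
tau = transversal_number(range(1, 8), fano)
print(tau, tau / (7 + 7))  # tau = 3, ratio 3/14
```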

We propose a method for computing the Lyapunov exponents of renewal equations (delay equations of Volterra type) and of coupled systems of renewal and delay differential equations. The method consists in the reformulation of the delay equation as an abstract differential equation, the reduction of the latter to a system of ordinary differential equations via pseudospectral collocation, and the application of the standard discrete QR method. The effectiveness of the method is shown experimentally and a MATLAB implementation is provided.
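
The final step of this pipeline, the discrete QR method, is standard; the hedged sketch below applies it directly to the Lorenz equations rather than to the pseudospectral ODE reduction of a renewal equation, purely to illustrate the mechanism of repeated re-orthonormalization.

```python
import numpy as np
from scipy.integrate import solve_ivp

def lorenz_var(t, u, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Lorenz system together with its variational equations dQ/dt = J(x) Q."""
    x, y, z = u[:3]
    J = np.array([[-sigma, sigma, 0.0],
                  [rho - z, -1.0, -x],
                  [y, x, -beta]])
    Q = u[3:].reshape(3, 3)
    rhs = [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]
    return np.concatenate([rhs, (J @ Q).ravel()])

def lyapunov_exponents(t_step=0.5, n_steps=400):
    u = np.concatenate([[1.0, 1.0, 1.0], np.eye(3).ravel()])
    logs = np.zeros(3)
    for _ in range(n_steps):
        sol = solve_ivp(lorenz_var, (0.0, t_step), u, rtol=1e-8, atol=1e-8)
        u = sol.y[:, -1]
        Q, R = np.linalg.qr(u[3:].reshape(3, 3))
        logs += np.log(np.abs(np.diag(R)))   # accumulate local growth factors
        u[3:] = Q.ravel()                    # re-orthonormalize the tangent basis
    return logs / (n_steps * t_step)

print(lyapunov_exponents())  # roughly (0.9, 0.0, -14.6) for the Lorenz system
```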

The monotonicity of a discrete Laplacian implies a discrete maximum principle, which in general does not hold for high-order schemes. The $Q^2$ spectral element method has been proven monotone on a uniform rectangular mesh. In this paper we prove the monotonicity of the $Q^2$ spectral element method on quasi-uniform rectangular meshes under certain mesh constraints. In particular, we propose a relaxed Lorenz condition for proving monotonicity.

The goal of explainable Artificial Intelligence (XAI) is to generate human-interpretable explanations, but there are no computationally precise theories of how humans interpret AI-generated explanations. The lack of theory means that validation of XAI must be done empirically, on a case-by-case basis, which prevents systematic theory-building in XAI. We propose a psychological theory of how humans draw conclusions from saliency maps, the most common form of XAI explanation, which for the first time allows for precise prediction of explainee inference conditioned on explanation. Our theory posits that, absent an explanation, humans expect the AI to make decisions similar to their own, and that they interpret an explanation by comparison to the explanations they themselves would give. Comparison is formalized via Shepard's universal law of generalization in a similarity space, a classic theory from cognitive science. A pre-registered user study on AI image classifications with saliency map explanations demonstrates that our theory quantitatively matches participants' predictions of the AI.
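
A minimal sketch of the formal ingredient, Shepard's universal law of generalization, under made-up feature vectors and decay rate; the paper's empirically estimated similarity space and its link to saliency maps are not reproduced here.

```python
import numpy as np

def shepard_generalization(x, y, decay=1.0):
    """Shepard's universal law of generalization: the probability of
    treating two stimuli alike decays exponentially with their distance
    in a psychological similarity space."""
    return np.exp(-decay * np.linalg.norm(np.asarray(x) - np.asarray(y)))

# Hypothetical embeddings of the AI's saliency-map explanation and of
# the explanation a participant would give themselves.
ai_explanation = [0.2, 0.8, 0.1]
own_explanation = [0.3, 0.7, 0.2]
print(shepard_generalization(ai_explanation, own_explanation))  # high similarity (~0.84)
```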
