
Geometrically continuous splines are piecewise polynomial functions defined on a collection of patches which are stitched together through transition maps. They are called $G^{r}$-splines if, after composition with the transition maps, they are continuously differentiable functions to order $r$ on each pair of patches with stitched boundaries. Splines of this type have been used to represent smooth shapes of complex topology for which (parametric) spline functions on fixed partitions do not suffice. In this article, we develop new algebraic tools to analyze $G^r$-spline spaces. We define $G^{r}$-domains and transition maps algebraically, and establish an algebraic criterion for deciding whether a piecewise function is $G^r$-continuous on a given domain. Within the proposed framework, we construct a chain complex whose top homology is isomorphic to the $G^{r}$-spline space. This complex generalizes the Billera-Schenck-Stillman homological complex used to study parametric splines. Additionally, we show how previous constructions of $G^r$-splines fit into the new algebraic framework, and we present an algorithm for constructing bases of $G^r$-spline spaces. We illustrate the algebraic approach on concrete examples and prove a dimension formula for the $G^{r}$-spline space in terms of invariants of the chain complex. In some special cases, we also give explicit dimension formulas in terms of the degree of the splines.
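
For reference, the smoothness condition described above can be stated pointwise as follows (the notation here is ours; the paper replaces this analytic condition with an algebraic criterion). Given patch functions $f_i$ and $f_j$ glued along a common face $\tau$ by a transition map $\phi_{ji}$, the pair is $G^r$-smooth across $\tau$ if
\[
\partial^{\alpha}\big( f_j \circ \phi_{ji} \big)\big|_{\tau} \;=\; \partial^{\alpha} f_i \big|_{\tau} \qquad \text{for all } |\alpha| \le r,
\]
that is, $f_j \circ \phi_{ji}$ and $f_i$ agree to order $r$ along $\tau$.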

Related content

A new feature that lets you switch seamlessly between iOS 8 and OS X Yosemite. > Apple products have always been designed to work together beautifully. But now they may really surprise you. With iOS 8 and OS X Yosemite, you’ll be able to do more wonderful things than ever before.

Source:

We develop a flexible stochastic approximation framework for analyzing the long-run behavior of learning in games (both continuous and finite). The proposed analysis template covers a wide array of popular learning algorithms, including gradient-based methods, the exponential/multiplicative weights algorithm for learning in finite games, and optimistic and bandit variants thereof. In addition to providing an integrated view of these algorithms, our framework allows us to obtain several new convergence results, both asymptotic and in finite time, in both continuous and finite games. Specifically, we provide a range of criteria for identifying classes of Nash equilibria and sets of action profiles that are attracting with high probability, and we introduce the notion of coherence, a game-theoretic property that includes strict and sharp equilibria and leads to convergence in finite time. Importantly, our analysis applies to both oracle-based and bandit, payoff-based methods, that is, methods in which players only observe their realized payoffs.
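
As one concrete member of the family the template covers, here is a minimal sketch (ours, not the paper's code) of the exponential/multiplicative weights algorithm with oracle feedback in a finite two-player game:

    import numpy as np

    def exponential_weights(A, B, steps=5000, eta=0.1):
        """Exponential weights for both players of a bimatrix game.

        A, B: payoff matrices of players 1 and 2 (player 1 picks rows).
        Returns the final mixed strategies (x, y).
        """
        sx = np.zeros(A.shape[0])  # cumulative payoff scores, player 1
        sy = np.zeros(B.shape[1])  # cumulative payoff scores, player 2
        for _ in range(steps):
            # Logit/softmax choice map applied to cumulative scores.
            x = np.exp(eta * (sx - sx.max()))
            x /= x.sum()
            y = np.exp(eta * (sy - sy.max()))
            y /= y.sum()
            # Oracle feedback: expected payoff of each pure action.
            sx += A @ y
            sy += B.T @ x
        return x, y

    # Matching pennies: a zero-sum game where the time-averaged play,
    # rather than the iterates themselves, approaches equilibrium.
    A = np.array([[1.0, -1.0], [-1.0, 1.0]])
    x, y = exponential_weights(A, -A)
    print(x, y)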

Randomized subspace approximation with "matrix sketching" is an effective approach for constructing approximate partial singular value decompositions (SVDs) of large matrices. The performance of such techniques has been extensively analyzed, and very precise estimates on the distribution of the residual errors have been derived. However, our understanding of the accuracy of the computed singular vectors (measured in terms of the canonical angles between the spaces spanned by the exact and the computed singular vectors, respectively) remains relatively limited. In this work, we present bounds and estimates for the canonical angles of randomized subspace approximation that can be computed efficiently either a priori or a posteriori. Under moderate oversampling in the randomized SVD, our prior probabilistic bounds are asymptotically tight and can be computed efficiently, while offering clear insight into the trade-off between oversampling and power iterations given a fixed budget of matrix-vector multiplications. Numerical experiments demonstrate the empirical effectiveness of these canonical-angle bounds and estimates on different matrices under various algorithmic choices for the randomized SVD.
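
For context, a minimal sketch of the randomized SVD with oversampling and power iterations, with the canonical angles computed exactly via scipy (the paper's a priori/a posteriori estimates are not reproduced here):

    import numpy as np
    from scipy.linalg import subspace_angles

    def randomized_svd(A, k, oversample=10, power_iters=2, rng=None):
        """Gaussian sketch, power iterations with re-orthonormalization,
        then an exact SVD of the small projected matrix."""
        rng = np.random.default_rng(rng)
        Omega = rng.standard_normal((A.shape[1], k + oversample))
        Y = A @ Omega
        for _ in range(power_iters):
            Y, _ = np.linalg.qr(Y)
            Y = A @ (A.T @ Y)
        Q, _ = np.linalg.qr(Y)
        U_small, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
        return Q @ U_small[:, :k], s[:k], Vt[:k, :]

    # Exact vs. computed left singular subspaces, measured by canonical angles.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((500, 200)) * (0.9 ** np.arange(200))  # decaying spectrum
    U_exact = np.linalg.svd(A, full_matrices=False)[0]
    U_hat, s, Vt = randomized_svd(A, k=10)
    print(np.rad2deg(subspace_angles(U_exact[:, :10], U_hat)))

Increasing `oversample` or `power_iters` shrinks the printed angles, which is exactly the oversampling-versus-power-iterations balance the bounds are meant to illuminate.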

The chain graph model admits both undirected and directed edges in one graph, where symmetric conditional dependencies are encoded via undirected edges and asymmetric causal relations are encoded via directed edges. Though frequently encountered in practice, the chain graph model has been largely underinvestigated in the literature, possibly due to the lack of identifiability conditions between undirected and directed edges. In this paper, we first establish a set of novel identifiability conditions for the Gaussian chain graph model, exploiting a low-rank-plus-sparse decomposition of the precision matrix. We then build an efficient learning algorithm on these identifiability conditions to fully recover the chain graph structure. Theoretical analysis establishes the asymptotic consistency of the proposed method in recovering the exact chain graph structure. The advantages of the proposed method are also supported by numerical experiments on both simulated examples and a real application to Standard & Poor's 500 index data.
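
The decomposition splits the precision matrix into a sparse part (undirected structure) plus a low-rank part (contribution of directed edges). A generic alternating-projection heuristic for such a split, shown purely for illustration and not the paper's identifiability-based algorithm, might look like:

    import numpy as np

    def sparse_plus_lowrank(M, rank, lam, iters=100):
        """Heuristic split M ~ S + L, with S sparse and rank(L) <= rank.

        Alternates a truncated SVD for L (Eckart-Young step) with
        soft-thresholding of the residual for S.
        """
        S = np.zeros_like(M)
        for _ in range(iters):
            U, s, Vt = np.linalg.svd(M - S, full_matrices=False)
            L = (U[:, :rank] * s[:rank]) @ Vt[:rank, :]
            R = M - L
            S = np.sign(R) * np.maximum(np.abs(R) - lam, 0.0)
        return S, L

    # Toy precision matrix: sparse banded part plus a rank-1 perturbation.
    rng = np.random.default_rng(1)
    n = 30
    S_true = np.eye(n) + 0.3 * (np.eye(n, k=1) + np.eye(n, k=-1))
    u = rng.standard_normal((n, 1))
    L_true = (u @ u.T) / n
    S_hat, L_hat = sparse_plus_lowrank(S_true + L_true, rank=1, lam=0.05)
    print(np.linalg.norm(S_hat - S_true), np.linalg.norm(L_hat - L_true))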

While meta-analyzing retrospective cancer patient cohorts, an investigation of differences in the expression of target oncogenes across cancer subtypes is of substantial interest because the results may uncover novel tumorigenesis mechanisms and improve screening and treatment strategies. Weighting methods facilitate unconfounded comparisons of multigroup potential outcomes across multiple observational studies. For example, Guha et al. (2022) introduced concordant weights, allowing integrative analyses of survival outcomes by maximizing the effective sample size. However, it remains unclear how to use this or other weighting approaches to analyze a variety of continuous, categorical, ordinal, or multivariate outcomes, especially when research interests prioritize uncommon or unplanned estimands suggested by post hoc analyses; examples include percentiles and moments of group potential outcomes and pairwise correlations of multivariate outcomes. This paper proposes a unified meta-analytical approach that accommodates various types of endpoints and fosters new estimators compatible with most weighting frameworks. Asymptotic properties of the estimators are investigated under mild assumptions. For undersampled groups, we devise small-sample procedures for quantifying estimation uncertainty. We meta-analyze multi-site TCGA breast cancer data, shedding light on the differential mRNA expression patterns of eight targeted genes for the subtypes infiltrating ductal carcinoma and infiltrating lobular carcinoma.
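
Once per-subject weights are available from some weighting framework, estimands such as percentiles and moments of group potential outcomes reduce to weighted summaries. A minimal sketch (the weights below are placeholders, not the concordant or any other specific weights from the literature):

    import numpy as np

    def weighted_quantile(y, w, q):
        """Weighted quantile via the weighted empirical CDF."""
        order = np.argsort(y)
        y, w = np.asarray(y)[order], np.asarray(w)[order]
        cdf = np.cumsum(w) / np.sum(w)
        return np.interp(q, cdf, y)

    def weighted_moment(y, w, k=1):
        """k-th weighted raw moment."""
        w = np.asarray(w) / np.sum(w)
        return np.sum(w * np.asarray(y) ** k)

    # Hypothetical usage: y = outcomes of one group, w = balancing weights.
    rng = np.random.default_rng(2)
    y = rng.lognormal(size=200)
    w = rng.uniform(0.5, 1.5, size=200)
    print(weighted_quantile(y, w, 0.5), weighted_moment(y, w, 2))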

We derive an explicit formula, valid for all integers $r,d\ge 0$, for the dimension of the vector space $C^r_d(\Delta)$ of piecewise polynomial functions continuously differentiable to order $r$ and whose constituents have degree at most $d$, where $\Delta$ is a planar triangulation that has a single totally interior edge. This extends previous results of Tohăneanu, Mináč, and Sorokina. Our result is a natural successor to Schumaker's 1979 dimension formula for splines on a planar vertex star. Indeed, no dimension formula at this level of generality (valid for all integers $d,r\ge 0$ and any vertex coordinates) has appeared since Schumaker's result. We derive our results using commutative algebra.
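
For intuition about such dimension counts, recall the much simpler univariate analogue, stated here only as classical background and not as part of the paper: for piecewise polynomials of degree at most $d$ that are $C^r$-smooth, with $0\le r\le d$, on an interval subdivided by $e$ interior knots,
\[
\dim C^r_d = (d+1) + e\,(d-r),
\]
since each knot introduces $d+1$ new coefficients but imposes $r+1$ matching conditions. The planar case is far subtler: the dimension can in general depend on the coordinates of the vertices, which is why a formula valid for all $d,r\ge 0$ and all vertex coordinates is notable.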

In this paper, we study the maximum clique problem on hyperbolic random graphs. The hyperbolic random graph is a mathematical model for analyzing scale-free networks, as it effectively explains their power-law degree distribution. We propose a simple algorithm for finding a maximum clique in a hyperbolic random graph. We first analyze the running time of our algorithm theoretically: we can compute a maximum clique of a hyperbolic random graph $G$ in $O(m + n^{4.5(1-\alpha)})$ expected time if a geometric representation is given, or in $O(m + n^{6(1-\alpha)})$ expected time if it is not, where $n$ and $m$ denote the numbers of vertices and edges of $G$, respectively, and $\alpha$ is a parameter controlling the power-law exponent of the degree distribution of $G$. We also implemented and evaluated our algorithm empirically; it outperforms the previous algorithm [BFK18] both theoretically and in practice. Beyond hyperbolic random graphs, we also experiment on real-world networks, where for most instances we efficiently obtain large cliques close to the optimal solutions.
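
A minimal sketch of the setting (our own sampler plus a generic exact clique routine from networkx, not the paper's algorithm): sample points in the hyperbolic disk of radius R, connect pairs at hyperbolic distance at most R, and search for a maximum clique.

    import numpy as np
    import networkx as nx

    def hyperbolic_random_graph(n, alpha=0.75, R=None, rng=None):
        """Threshold hyperbolic random graph.

        Radii have density proportional to sinh(alpha * r) on [0, R];
        angles are uniform; u, v are adjacent iff their hyperbolic
        distance is at most R. alpha controls the power-law exponent.
        """
        rng = np.random.default_rng(rng)
        R = 2 * np.log(n) if R is None else R
        # Inverse-CDF sampling: F(r) = (cosh(alpha r) - 1) / (cosh(alpha R) - 1).
        u = rng.uniform(size=n)
        r = np.arccosh(1 + u * (np.cosh(alpha * R) - 1)) / alpha
        theta = rng.uniform(0, 2 * np.pi, size=n)
        G = nx.Graph()
        G.add_nodes_from(range(n))
        for i in range(n):
            for j in range(i + 1, n):
                dtheta = np.pi - abs(np.pi - abs(theta[i] - theta[j]))
                # Hyperbolic law of cosines.
                cosh_d = (np.cosh(r[i]) * np.cosh(r[j])
                          - np.sinh(r[i]) * np.sinh(r[j]) * np.cos(dtheta))
                if cosh_d <= np.cosh(R):
                    G.add_edge(i, j)
        return G

    G = hyperbolic_random_graph(300, alpha=0.75, rng=0)
    best = max(nx.find_cliques(G), key=len)  # exact, but exponential in general
    print(len(best))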

Integrating multiple observational studies to make unconfounded causal or descriptive comparisons of group potential outcomes in a large natural population is challenging. Moreover, retrospective cohorts, being convenience samples, are usually unrepresentative of the natural population of interest and have groups with unbalanced covariates. We propose a general covariate-balancing framework based on pseudo-populations that extends established weighting methods to the meta-analysis of multiple retrospective cohorts with multiple groups. Additionally, by maximizing the effective sample sizes of the cohorts, we propose a FLEXible, Optimized, and Realistic (FLEXOR) weighting method appropriate for integrative analyses. We develop new weighted estimators for unconfounded inferences on wide-ranging population-level features and estimands relevant to group comparisons of quantitative, categorical, or multivariate outcomes. The asymptotic properties of these estimators are examined, and accurate small-sample procedures are devised for quantifying estimation uncertainty. Through simulation studies and meta-analyses of TCGA datasets, we discover the differential biomarker patterns of the two major breast cancer subtypes in the United States and demonstrate the versatility and reliability of the proposed weighting strategy, especially for the FLEXOR pseudo-population.
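
The quantity being maximized is the effective sample size of a weighted cohort: for weights w it is (Σw)²/Σw², which equals n for uniform weights and degrades as the weights become uneven. A quick illustration with placeholder inverse-probability-style weights rather than the FLEXOR solution:

    import numpy as np

    def effective_sample_size(w):
        """Kish effective sample size: (sum w)^2 / sum(w^2)."""
        w = np.asarray(w, dtype=float)
        return w.sum() ** 2 / (w ** 2).sum()

    rng = np.random.default_rng(3)
    n = 1000
    print(effective_sample_size(np.ones(n)))    # uniform weights: ESS = n
    e = rng.uniform(0.05, 0.95, size=n)         # hypothetical propensities
    print(effective_sample_size(1.0 / e))       # uneven weights: ESS < n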

We propose a causal framework for decomposing a group disparity in an outcome in terms of an intermediate treatment variable. Our framework captures the contributions of group differences in baseline potential outcome, treatment prevalence, average treatment effect, and selection into treatment. This framework is counterfactually formulated and readily informs policy interventions. The decomposition component for differential selection into treatment is particularly novel, revealing a new mechanism for explaining and ameliorating disparities. This framework reformulates the classic Kitagawa-Blinder-Oaxaca decomposition in causal terms, supplements causal mediation analysis by explaining group disparities instead of group effects, and resolves conceptual difficulties of recent random equalization decompositions. We also provide a conditional decomposition that allows researchers to incorporate covariates in defining the estimands and corresponding interventions. We develop nonparametric estimators based on efficient influence functions of the decompositions. We show that, under mild conditions, these estimators are $\sqrt{n}$-consistent, asymptotically normal, semiparametrically efficient, and doubly robust. We apply our framework to study the causal role of education in intergenerational income persistence. We find that both differential prevalence of and differential selection into college graduation significantly contribute to the disparity in income attainment between income origin groups.
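
To make the flavor of such a decomposition concrete, here is one algebraically exact variant on simulated data; the parameterization, reference choices, and names below are ours, and the paper's covariate conditioning and influence-function estimators are not reproduced. Writing p_g = P(D=1 | G=g) and ATT_g, ATE_g for the group-specific effects, the in-sample disparity splits exactly into baseline, prevalence, effect, and selection pieces. Note the code reads the simulated potential outcomes directly; a real analysis must estimate these under identification assumptions.

    import numpy as np

    rng = np.random.default_rng(4)
    n = 200_000
    G = rng.integers(0, 2, size=n)                  # group indicator
    Y0 = 1.0 + 0.5 * G + rng.standard_normal(n)     # baseline potential outcome
    tau = 0.8 + 0.4 * G + rng.standard_normal(n)    # individual treatment effect
    # Selection on gains: larger tau -> more likely treated (group-dependent).
    D = (rng.uniform(size=n) < 1 / (1 + np.exp(-(tau - 1.5 + 0.5 * G)))).astype(int)
    Y = Y0 + D * tau                                # observed outcome

    # In-sample identity: mean(Y | G=g) = mean(Y0 | G=g) + p_g * ATT_g,
    # so the disparity decomposes exactly (one reference convention of several).
    p   = [D[G == g].mean() for g in (0, 1)]
    att = [tau[(G == g) & (D == 1)].mean() for g in (0, 1)]
    ate = [tau[G == g].mean() for g in (0, 1)]
    parts = {
        "baseline":   Y0[G == 1].mean() - Y0[G == 0].mean(),
        "prevalence": (p[1] - p[0]) * att[0],
        "effect":     p[1] * (ate[1] - ate[0]),
        "selection":  p[1] * ((att[1] - ate[1]) - (att[0] - ate[0])),
    }
    total = Y[G == 1].mean() - Y[G == 0].mean()
    print(parts, sum(parts.values()), total)        # the pieces sum to the total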

Evaluating the quality of learned representations without relying on a downstream task remains one of the challenges in representation learning. In this work, we present the Geometric Component Analysis (GeomCA) algorithm, which evaluates representation spaces based on their geometric and topological properties. GeomCA can be applied to representations of any dimension, independently of the model that generated them. We demonstrate its applicability by analyzing representations obtained from a variety of scenarios, such as contrastive learning models, generative models, and supervised learning models.
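
The flavor of such geometric evaluation can be illustrated by building an ε-neighborhood graph over the union of reference and evaluated representations and scoring connected components by how the two sets mix; this is only our loose sketch of the idea, not the GeomCA algorithm itself:

    import numpy as np
    import networkx as nx
    from scipy.spatial.distance import cdist

    def component_mixing_scores(R, E, eps):
        """Eps-graph over R (reference) and E (evaluated) points; per
        connected component, report its size and the share of E points."""
        X = np.vstack([R, E])
        labels = np.array([0] * len(R) + [1] * len(E))  # 0 = R, 1 = E
        D = cdist(X, X)
        G = nx.from_numpy_array((D < eps) & (D > 0))
        scores = []
        for comp in nx.connected_components(G):
            idx = np.fromiter(comp, dtype=int)
            scores.append((len(idx), labels[idx].mean()))
        return scores

    rng = np.random.default_rng(5)
    R = rng.standard_normal((200, 16))           # e.g., real-data embeddings
    E = rng.standard_normal((200, 16)) + 0.25    # e.g., model embeddings
    print(component_mixing_scores(R, E, eps=5.0)[:5])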

This paper focuses on the expected difference in borrowers' repayment when there is a change in the lender's credit decisions. Classical estimators overlook confounding effects, and hence their estimation error can be substantial. We therefore propose an alternative approach to constructing estimators that greatly reduces this error. The proposed estimators are shown to be unbiased, consistent, and robust through a combination of theoretical analysis and numerical testing. Moreover, we compare the classical and proposed estimators in their power to estimate the causal quantities. The comparison is tested across a wide range of models, including linear regression models, tree-based models, and neural-network-based models, under simulated datasets that exhibit different levels of causality, degrees of nonlinearity, and distributional properties. Most importantly, we apply our approaches to a large observational dataset provided by a global technology firm that operates in both the e-commerce and the lending business. We find that the relative reduction in estimation error is strikingly substantial when the causal effects are accounted for correctly.
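
A minimal sketch of the phenomenon on simulated data (our own toy model, not the firm's data or the paper's estimators): a confounder drives both the credit decision and repayment, the naive difference in means is biased, and inverse-propensity weighting, shown here with the true propensity for clarity, removes the bias.

    import numpy as np

    rng = np.random.default_rng(6)
    n = 100_000
    risk = rng.standard_normal(n)                   # confounder: credit risk
    p = 1 / (1 + np.exp(-(-1.0 - 1.5 * risk)))      # safer applicants get credit
    T = (rng.uniform(size=n) < p).astype(int)       # lender's decision
    y = 2.0 + 1.0 * T - 2.0 * risk + rng.standard_normal(n)  # true effect = 1

    naive = y[T == 1].mean() - y[T == 0].mean()     # biased upward by confounding

    # Inverse-propensity weighting (Horvitz-Thompson form).
    ipw = np.mean(T * y / p) - np.mean((1 - T) * y / (1 - p))

    print(f"naive: {naive:.3f}  ipw: {ipw:.3f}  truth: 1.000")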
