
We evaluate the Lebesgue constants of polynomial interpolation for three types of Cantor sets. In all cases, the sequences of Lebesgue constants are unbounded, which disproves a statement of Mergelyan.
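As a minimal numerical sketch (not the construction analyzed in the paper), the Lebesgue constant $\Lambda_n = \max_x \sum_i |\ell_i(x)|$ can be estimated by sampling the Lebesgue function on a fine grid; the nodes below, the endpoints of the first-level middle-thirds Cantor intervals, are purely illustrative.

```python
import numpy as np

def lebesgue_constant(nodes, num_eval=10_000):
    """Estimate the Lebesgue constant Lambda_n = max_x sum_i |l_i(x)| of
    Lagrange interpolation at `nodes`, by sampling the Lebesgue function
    on a fine grid over [min(nodes), max(nodes)]."""
    nodes = np.asarray(nodes, dtype=float)
    xs = np.linspace(nodes.min(), nodes.max(), num_eval)
    lebesgue_fn = np.zeros_like(xs)
    for i in range(len(nodes)):
        li = np.ones_like(xs)                 # Lagrange basis polynomial l_i
        for j in range(len(nodes)):
            if j != i:
                li *= (xs - nodes[j]) / (nodes[i] - nodes[j])
        lebesgue_fn += np.abs(li)
    return lebesgue_fn.max()

# Illustrative nodes: endpoints of the first-level middle-thirds Cantor
# intervals (not the paper's node construction).
print(lebesgue_constant([0.0, 1/3, 2/3, 1.0]))
```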

Related content

CASES: International Conference on Compilers, Architectures, and Synthesis for Embedded Systems. Publisher: ACM. SIT:

We study the selection of adjustment sets for estimating the interventional mean under an individualized treatment rule. We assume a non-parametric causal graphical model with, possibly, hidden variables and at least one adjustment set comprised of observable variables. Moreover, we assume that observable variables have positive costs associated with them. We define the cost of an observable adjustment set as the sum of the costs of the variables that comprise it. We show that in this setting there exist adjustment sets that are minimum cost optimal, in the sense that they yield non-parametric estimators of the interventional mean with the smallest asymptotic variance among those that control for observable adjustment sets that have minimum cost. Our results are based on the construction of a special flow network associated with the original causal graph. We show that a minimum cost optimal adjustment set can be found by computing a maximum flow on the network, and then finding the set of vertices that are reachable from the source by augmenting paths. The optimaladj Python package implements the algorithms introduced in this paper.
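The flow step of the algorithm can be sketched generically. The paper's special flow network is not reproduced here; assuming a directed capacitated graph is already in hand, the following networkx sketch computes a maximum flow and then the source side of a minimum cut, i.e., the vertices still reachable from the source by augmenting paths (the optimaladj package implements the actual algorithm).

```python
import networkx as nx

def min_cut_source_side(G, s, t):
    """Compute a maximum s-t flow on a DiGraph with 'capacity' attributes,
    then return the vertices still reachable from s through edges with
    positive residual capacity (the source side of a minimum cut). The
    paper finds a minimum cost optimal adjustment set by applying this
    step to its special flow network, whose construction is not
    reproduced here."""
    R = nx.algorithms.flow.edmonds_karp(G, s, t)
    reachable, stack = {s}, [s]
    while stack:
        u = stack.pop()
        for v, attrs in R[u].items():
            if v not in reachable and attrs["capacity"] - attrs["flow"] > 0:
                stack.append(v)
                reachable.add(v)
    return reachable

# Toy network (not the paper's construction): the maximum flow saturates
# a -> t and s -> t, leaving s -> a with residual capacity, hence {s, a}.
G = nx.DiGraph()
G.add_edge("s", "a", capacity=3)
G.add_edge("a", "t", capacity=1)
G.add_edge("s", "t", capacity=2)
print(min_cut_source_side(G, "s", "t"))  # {'s', 'a'}
```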

A new clustering accuracy measure is proposed to determine the unknown number of clusters and to assess the quality of a clustering of a data set given in a space of any dimension. Our validity index applies the classical nonparametric univariate kernel density estimation method to the interpoint distances computed between the members of the data set. Being based on interpoint distances, it is free of the curse of dimensionality and therefore efficiently computable in high-dimensional situations where the number of study variables can be larger than the sample size. The proposed measure is compatible with any clustering algorithm and with every kind of data set for which the interpoint distance measure can be defined to have a density function. A simulation study demonstrates its superiority over widely used cluster validity indices such as the average silhouette width and the Dunn index, and its applicability is shown in a high-dimensional biostatistical study of the Alon data set and a large astrostatistical application to time series of light curves of new variable stars.
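The building block of the index, a univariate KDE fitted to the interpoint distances, can be sketched as follows. How the index combines such densities is not spelled out in the abstract, so this shows only the dimension-free core step.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import gaussian_kde

def interpoint_distance_density(X):
    """Fit a classical univariate kernel density estimate to the pairwise
    (interpoint) Euclidean distances of X, shape (n_samples, n_features).
    Only n*(n-1)/2 scalars enter the KDE, so the ambient dimension never
    appears: this is the dimension-free core of the proposed index."""
    return gaussian_kde(pdist(X))

# Two well-separated clusters in 100 dimensions produce a clearly bimodal
# distance density (within-cluster vs. between-cluster distances).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 100)), rng.normal(5, 1, (50, 100))])
kde = interpoint_distance_density(X)
grid = np.linspace(0.0, 80.0, 400)
print(grid[np.argmax(kde(grid))])  # location of the dominant distance mode
```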

For a graph class ${\cal H}$, the graph parameters elimination distance to ${\cal H}$ (denoted by ${\bf ed}_{\cal H}$) [Bulian and Dawar, Algorithmica, 2016], and ${\cal H}$-treewidth (denoted by ${\bf tw}_{\cal H}$) [Eiben et al., JCSS, 2021] aim to minimize the treedepth and treewidth, respectively, of the "torso" of the graph induced on a modulator to the graph class ${\cal H}$. Here, the torso of a vertex set $S$ in a graph $G$ is the graph with vertex set $S$ and an edge between two vertices $u, v \in S$ if there is a path between $u$ and $v$ in $G$ whose internal vertices all lie outside $S$. In this paper, we show that from the perspective of (non-uniform) fixed-parameter tractability (FPT), the three parameters described above give equally powerful parameterizations for every hereditary graph class ${\cal H}$ that satisfies mild additional conditions. In fact, we show that for every hereditary graph class ${\cal H}$ satisfying mild additional conditions, with the exception of ${\bf tw}_{\cal H}$ parameterized by ${\bf ed}_{\cal H}$, for every pair of these parameters, computing one parameterized by itself or any of the others is FPT-equivalent to the standard vertex-deletion (to ${\cal H}$) problem. As an example, we prove that an FPT algorithm for the vertex-deletion problem implies a non-uniform FPT algorithm for computing ${\bf ed}_{\cal H}$ and ${\bf tw}_{\cal H}$. Since non-uniform FPT algorithms are somewhat unsatisfactory, we essentially prove that if ${\cal H}$ is hereditary, union-closed, CMSO-definable, and (a) the canonical equivalence relation (or any refinement thereof) for membership in the class can be efficiently computed, or (b) the class admits a "strong irrelevant vertex rule", then there exists a uniform FPT algorithm for ${\bf ed}_{\cal H}$.
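The torso operation defined above is concrete enough to sketch. The following (hypothetical, networkx-based) implementation uses the equivalent characterization that, for each connected component of $G - S$, the neighbourhood of the component inside $S$ becomes a clique.

```python
import networkx as nx
from itertools import combinations

def torso(G, S):
    """Torso of the vertex set S in G: the graph on S with an edge {u, v}
    whenever G has a u-v path whose internal vertices all lie outside S.
    Equivalently: take G[S] and, for every connected component C of G - S,
    turn the neighbourhood of C inside S into a clique."""
    S = set(S)
    T = nx.Graph(G.subgraph(S))                     # mutable copy of G[S]
    for comp in nx.connected_components(G.subgraph(set(G) - S)):
        boundary = {u for c in comp for u in G.neighbors(c)} & S
        T.add_edges_from(combinations(boundary, 2))
    return T

# Path a-b-c-d with S = {a, d}: removing {b, c} leaves one component whose
# boundary in S is {a, d}, so the torso acquires the edge a-d.
G = nx.path_graph(["a", "b", "c", "d"])
print(list(torso(G, {"a", "d"}).edges()))  # one edge joining a and d
```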

We first introduce a family of binary $pq^2$-periodic sequences based on the Euler quotients modulo $pq$, where $p$ and $q$ are two distinct odd primes and $p$ divides $q-1$. The minimal polynomials and linear complexities are determined for the proposed sequences provided that $2^{q-1} \not\equiv 1 \pmod{q^2}$. The results show that the proposed sequences have high linear complexities.
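A hedged sketch of the underlying quantity: the Euler quotient modulo $n = pq$ is $Q_n(w) \equiv (w^{\varphi(n)} - 1)/n \pmod{n}$ for $\gcd(w, n) = 1$, with $\varphi(pq) = (p-1)(q-1)$. The abstract does not give the exact binarization rule, so the parity-based rule below is a hypothetical stand-in.

```python
from math import gcd

def euler_quotient(w, p, q):
    """Euler quotient of w modulo n = pq: Q_n(w) = ((w**phi(n) - 1) // n) % n
    for gcd(w, n) = 1, where phi(pq) = (p - 1) * (q - 1). Computed via
    modular exponentiation modulo n**2 to keep the numbers small."""
    n = p * q
    assert gcd(w, n) == 1
    phi = (p - 1) * (q - 1)
    return (pow(w, phi, n * n) - 1) // n % n

def binary_sequence(p, q, length):
    """Hypothetical binarization: the abstract does not give the paper's
    rule, so the parity of the quotient stands in; terms with
    gcd(w, pq) > 1 are set to 0 by convention."""
    n = p * q
    return [euler_quotient(w, p, q) % 2 if gcd(w, n) == 1 else 0
            for w in range(1, length + 1)]

print(binary_sequence(3, 7, 20))  # p = 3 divides q - 1 = 6, as required
```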

The problem of selecting optimal backdoor adjustment sets to estimate causal effects in graphical models with hidden and conditioned variables is addressed. Previous work has defined optimality as achieving the smallest asymptotic estimation variance and derived an optimal set for the case without hidden variables. For the case with hidden variables there can be settings where no optimal set exists, and currently only a sufficient graphical optimality criterion of limited applicability has been derived. In the present work optimality is characterized as maximizing a certain adjustment information, which allows us to derive a necessary and sufficient graphical criterion for the existence of an optimal adjustment set, together with a definition and an algorithm to construct it. Further, the optimal set is valid if and only if a valid adjustment set exists, and it has higher (or equal) adjustment information than the Adjust-set proposed in Perkovi{\'c} et al. [Journal of Machine Learning Research, 18: 1--62, 2018] for any graph. The results translate to minimal asymptotic estimation variance for a class of estimators whose asymptotic variance follows a certain information-theoretic relation. Numerical experiments indicate that the asymptotic results also hold for relatively small sample sizes, and that the optimal adjustment set or minimized variants thereof often yield lower variance even beyond that estimator class. Surprisingly, more than 90\% of the randomly created setups fulfill the optimality conditions, indicating that graphical optimality may also hold in many real-world scenarios. Code is available as part of the Python package \url{https://github.com/jakobrunge/tigramite}.
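As background, the estimator that such adjustment sets feed can be sketched with the textbook backdoor formula $E[Y \mid do(X = x)] = \sum_z P(Z = z)\, E[Y \mid X = x, Z = z]$. The column names below are placeholders, and the paper's optimality machinery (implemented in tigramite) is not reproduced.

```python
import pandas as pd

def backdoor_mean(df, x_value, treatment="X", outcome="Y", adjust=("Z",)):
    """Plug-in estimate of the interventional mean E[Y | do(X = x)] via the
    backdoor formula sum_z P(Z = z) * E[Y | X = x, Z = z]. Column names are
    placeholders; `adjust` must be a valid adjustment set for the effect."""
    adjust = list(adjust)
    pz = df.groupby(adjust).size() / len(df)                  # P(Z = z)
    cond = df[df[treatment] == x_value].groupby(adjust)[outcome].mean()
    return (pz * cond).dropna().sum()                         # sum over strata

# Toy data: stratum-wise means 2 (Z=0) and 4 (Z=1), each with weight 1/2.
df = pd.DataFrame({"Z": [0, 0, 1, 1, 0, 1],
                   "X": [0, 1, 0, 1, 1, 0],
                   "Y": [1, 2, 3, 4, 2, 3]})
print(backdoor_mean(df, x_value=1))  # 0.5 * 2 + 0.5 * 4 = 3.0
```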

The aim of this thesis is to develop a theoretical framework to study parameter estimation of quantum channels. We study the task of estimating unknown parameters encoded in a channel in the sequential setting. A sequential strategy is the most general way to use a channel multiple times. Our goal is to establish lower bounds (called Cramér-Rao bounds) on the estimation error. The bounds we develop are universally applicable; i.e., they apply to all permissible quantum dynamics. We consider the use of catalysts to enhance the power of a channel estimation strategy; this is termed amortization. The power of a channel for parameter estimation is determined by its Fisher information. Thus, we study how much a catalyst quantum state can enhance the Fisher information of a channel by defining the amortized Fisher information. We establish our bounds by proving that for certain Fisher information quantities, catalyst states do not improve the performance of a sequential estimation protocol compared to a parallel one; the technical term for this is an amortization collapse. We use this to establish bounds when estimating one parameter, or multiple parameters simultaneously. Our bounds apply universally, and we also cast them as optimization problems. For the single-parameter case, we establish bounds for general quantum channels using both the symmetric logarithmic derivative (SLD) Fisher information and the right logarithmic derivative (RLD) Fisher information. The task of estimating multiple parameters simultaneously is more involved than the single-parameter case, because the Cramér-Rao bounds take the form of matrix inequalities. We establish a scalar Cramér-Rao bound for multiparameter channel estimation using the RLD Fisher information. For both single- and multiparameter estimation, we provide a no-go condition for the so-called Heisenberg scaling using our RLD-based bound.
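For reference, the single-parameter quantum Cramér-Rao bound underlying results of this type takes the standard textbook form (stated here for states; the thesis extends such bounds to channels via amortization): for an unbiased estimator $\hat{\theta}$ built from $n$ independent uses of a state family $\rho_\theta$,
\[
\operatorname{Var}(\hat{\theta}) \;\ge\; \frac{1}{n\, F_Q(\theta)},
\qquad
F_Q(\theta) = \operatorname{Tr}\!\bigl[\rho_\theta L_\theta^2\bigr],
\]
where the SLD operator $L_\theta$ is defined implicitly by $\partial_\theta \rho_\theta = \tfrac{1}{2}\bigl(L_\theta \rho_\theta + \rho_\theta L_\theta\bigr)$.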

We estimate best-approximation errors using vector-valued finite elements for fields with low regularity in the scale of fractional-order Sobolev spaces. By assuming additionally that the target field has a curl or divergence property, we establish upper bounds on these errors that can be localized to the mesh cells. These bounds are derived using the quasi-interpolation errors with or without boundary prescription derived in [A. Ern and J.-L. Guermond, ESAIM Math. Model. Numer. Anal., 51 (2017), pp.~1367--1385]. By using the face-to-cell lifting operators analyzed in [A. Ern and J.-L. Guermond, Found. Comput. Math., (2021)], and exploiting the additional assumption made on the curl or the divergence of the target field, a localized upper bound on the quasi-interpolation error is derived. As an illustration, we show how to apply these results to the error analysis of the curl-curl problem associated with Maxwell's equations.

Reaction networks are often used to model interacting species in fields such as biochemistry and ecology. When the counts of the species are sufficiently large, the dynamics of their concentrations are typically modeled via a system of differential equations. However, when the counts of some species are small, the dynamics of the counts are typically modeled stochastically via a discrete-state, continuous-time Markov chain. A key quantity of interest for such models is the probability mass function of the process at some fixed time. Since paths of such models are relatively straightforward to simulate, we can estimate the probabilities by constructing an empirical distribution. However, the support of the distribution is often diffuse across a high-dimensional state space, where the dimension is equal to the number of species. Therefore, generating an accurate empirical distribution can come with a large computational cost. We present a new Monte Carlo estimator that fundamentally improves on the "classical" Monte Carlo estimator described above, while preserving much of its simplicity. The idea is essentially one of conditional Monte Carlo. Our conditional Monte Carlo estimator has two parameters, and their choice critically affects the performance of the algorithm. Hence, a key contribution of the present work is to demonstrate how to approximate optimal values for these parameters in an efficient manner. Moreover, we provide a central limit theorem for our estimator, which leads to approximate confidence intervals for its error.
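The "classical" estimator that the new method improves on is easy to sketch: simulate i.i.d. paths with the Gillespie algorithm and tabulate an empirical distribution at time $T$. The birth-death model below is an illustrative stand-in for a general reaction network; the conditional Monte Carlo estimator itself is not reproduced here.

```python
import numpy as np

def gillespie_birth_death(birth, death, x0, T, rng):
    """Simulate one path of the reaction network 0 -> S (rate `birth`),
    S -> 0 (rate `death * x`) with the Gillespie algorithm and return the
    state X(T)."""
    t, x = 0.0, x0
    while True:
        rate = birth + death * x
        t += rng.exponential(1.0 / rate)
        if t > T:
            return x
        x += 1 if rng.random() < birth / rate else -1

# Classical Monte Carlo: i.i.d. paths -> empirical pmf of X(T). This is the
# estimator the conditional Monte Carlo method of the paper improves on.
rng = np.random.default_rng(1)
samples = [gillespie_birth_death(2.0, 0.1, 0, 5.0, rng) for _ in range(10_000)]
values, counts = np.unique(samples, return_counts=True)
print(dict(zip(values.tolist(), (counts / len(samples)).round(4).tolist())))
```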

The idea of slicing divergences has proven successful for comparing two probability measures in various machine learning applications, including generative modeling, and consists in computing the expected value of a `base divergence' between one-dimensional random projections of the two measures. However, the topological, statistical, and computational consequences of this technique have not yet been well established. In this paper, we aim to bridge this gap and derive various theoretical properties of sliced probability divergences. First, we show that slicing preserves the metric axioms and the weak continuity of the divergence, implying that the sliced divergence shares similar topological properties. We then sharpen these results in the case where the base divergence belongs to the class of integral probability metrics. We also establish that, under mild conditions, the sample complexity of a sliced divergence does not depend on the problem dimension. We finally apply our general results to several base divergences, and illustrate our theory on both synthetic and real-data experiments.
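The slicing construction itself is simple to sketch. The code below assumes the base divergence is the one-dimensional Wasserstein-1 distance (the paper treats general base divergences) and averages it over random projection directions.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def sliced_wasserstein(X, Y, n_projections=100, rng=None):
    """Monte Carlo estimate of the sliced Wasserstein-1 distance between two
    samples X, Y of shape (n, d): average the 1-D Wasserstein distance
    between random one-dimensional projections of the two samples."""
    if rng is None:
        rng = np.random.default_rng()
    total = 0.0
    for _ in range(n_projections):
        theta = rng.normal(size=X.shape[1])
        theta /= np.linalg.norm(theta)      # uniform direction on the sphere
        total += wasserstein_distance(X @ theta, Y @ theta)
    return total / n_projections

# The dimension-independent sample complexity is visible in practice: the
# estimate remains stable even for moderately large d.
rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, (500, 50))
Y = rng.normal(0.5, 1.0, (500, 50))
print(sliced_wasserstein(X, Y, rng=rng))
```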

We introduce a new metric (the $\nu$-based Wasserstein metric $W_\nu$) on the set of probability measures on $X \subseteq \mathbb{R}^m$, based on a slight refinement of the notion of generalized geodesics with respect to a base measure $\nu$, relevant in particular when $\nu$ is singular with respect to $m$-dimensional Lebesgue measure. $W_\nu$ is defined in terms of an iterated variational problem involving optimal transport to $\nu$; we also characterize it in terms of integrals of the classical Wasserstein distance between the conditional probabilities with respect to $\nu$, and through limits of certain multi-marginal optimal transport problems. We also introduce a class of metrics which are dual in a certain sense to $W_\nu$ on the set of measures which are absolutely continuous with respect to a second fixed base measure $\sigma$. As we vary the base measure $\nu$, $W_\nu$ interpolates between the usual quadratic Wasserstein distance and a metric associated with the uniquely defined generalized geodesics obtained when $\nu$ is sufficiently regular. When $\nu$ concentrates on a lower-dimensional submanifold of $\mathbb{R}^m$, we prove that the variational problem in the definition of the $\nu$-based Wasserstein distance has a unique solution. We establish geodesic convexity of the usual class of functionals, and of the set of source measures $\mu$ such that optimal transport between $\mu$ and $\nu$ satisfies a strengthening of the generalized nestedness condition introduced in \cite{McCannPass20}. We also present two applications of the ideas introduced here. First, our dual metric is used to prove convergence of an iterative scheme to solve a variational problem arising in game theory. Second, we use the multi-marginal formulation to characterize solutions to the multi-marginal problem by an ordinary differential equation, yielding a new numerical method for it.
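For reference, the classical quadratic Wasserstein distance that $W_\nu$ interpolates with is the standard optimal-transport metric
\[
W_2(\mu, \rho)^2 = \min_{\gamma \in \Pi(\mu, \rho)} \int_{X \times X} |x - y|^2 \, d\gamma(x, y),
\]
where $\Pi(\mu, \rho)$ denotes the set of couplings of $\mu$ and $\rho$, i.e., probability measures on $X \times X$ with marginals $\mu$ and $\rho$.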
