
We design an algorithm for computing the $L$-series associated to an Anderson $t$-motive, exhibiting quasilinear complexity with respect to the target precision. Based on experiments, we conjecture that the order of vanishing at $T=1$ of the $v$-adic $L$-series of a given Anderson $t$-motive with good reduction does not depend on the finite place $v$.


The $k$-principal component analysis ($k$-PCA) problem is a fundamental algorithmic primitive that is widely used in data analysis and dimensionality reduction applications. In statistical settings, the goal of $k$-PCA is to identify a top eigenspace of the covariance matrix of a distribution, which we only have implicit access to via samples. Motivated by these implicit settings, we analyze black-box deflation methods as a framework for designing $k$-PCA algorithms, where we model access to the unknown target matrix via a black-box $1$-PCA oracle which returns an approximate top eigenvector, under two popular notions of approximation. Despite being arguably the most natural reduction-based approach to $k$-PCA algorithm design, such black-box methods, which recursively call a $1$-PCA oracle $k$ times, were previously poorly understood. Our main contribution is significantly sharper bounds on the approximation parameter degradation of deflation methods for $k$-PCA. For a quadratic form notion of approximation we term ePCA (energy PCA), we show deflation methods suffer no parameter loss. For an alternative well-studied approximation notion we term cPCA (correlation PCA), we tightly characterize the parameter regimes where deflation methods are feasible. Moreover, we show that in all feasible regimes, $k$-cPCA deflation algorithms suffer no asymptotic parameter loss for any constant $k$. We apply our framework to obtain state-of-the-art $k$-PCA algorithms robust to dataset contamination, improving upon prior work in both sample complexity and approximation quality.
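
To make the recursive structure concrete, here is a minimal Python sketch of a deflation method, assuming a power-iteration stand-in for the $1$-PCA oracle; the function names and parameters are hypothetical, and the sketch carries none of the paper's approximation guarantees.

```python
import numpy as np

def one_pca_oracle(M, iters=500, seed=0):
    """Stand-in 1-PCA oracle: power iteration on a symmetric PSD matrix."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(M.shape[0])
    for _ in range(iters):
        v = M @ v
        v /= np.linalg.norm(v)
    return v

def deflation_kpca(M, k):
    """Recursively call the 1-PCA oracle k times, deflating after each call."""
    n = M.shape[0]
    vecs = []
    for i in range(k):
        v = one_pca_oracle(M, seed=i)
        vecs.append(v)
        P = np.eye(n) - np.outer(v, v)  # project out the direction found
        M = P @ M @ P                   # projection deflation
    return np.column_stack(vecs)

# Toy usage: approximate the top-2 eigenspace of a 5x5 sample covariance.
A = np.random.default_rng(1).standard_normal((100, 5))
V = deflation_kpca(A.T @ A / 100, k=2)
```

Projection deflation is used in the sketch so that residual error in an approximate direction is projected out rather than amplified in later oracle calls.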

In order to compute the Fourier transform of a function $f$ on the real line numerically, one samples $f$ on a grid and then takes the discrete Fourier transform. We derive exact error estimates for this procedure in terms of the decay and smoothness of $f$. The analysis provides a new recipe for relating the number of samples, the sampling interval, and the grid size.
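
As an illustration of the procedure (not of the paper's error estimates), the following Python sketch approximates the Fourier transform of a Gaussian, which is its own transform under the convention $\hat f(\xi) = \int f(x)\, e^{-2\pi i x \xi}\, dx$, so the error is directly observable; the grid parameters are arbitrary choices.

```python
import numpy as np

# Target: F(xi) = integral of f(x) exp(-2*pi*1j*x*xi) dx for f(x) = exp(-pi x^2),
# whose transform is exp(-pi xi^2) under this convention.
N = 1024                                  # grid size (number of samples)
h = 0.05                                  # sampling interval
x = (np.arange(N) - N // 2) * h           # symmetric grid around 0
f = np.exp(-np.pi * x**2)

# Riemann-sum approximation F(xi_k) ~ h * sum_m f(m h) exp(-2*pi*1j*m*k/N)
# at xi_k = k / (N h); ifftshift/fftshift handle the grid's negative half.
F = h * np.fft.fftshift(np.fft.fft(np.fft.ifftshift(f)))
xi = np.fft.fftshift(np.fft.fftfreq(N, d=h))

exact = np.exp(-np.pi * xi**2)
print("max error:", np.max(np.abs(F - exact)))  # near machine precision here
```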

The permanent of a non-negative square matrix can be well approximated by finding the minimum of the Bethe free energy function associated with a suitably defined factor graph; the resulting approximation to the permanent is called the Bethe permanent. Vontobel gave a combinatorial characterization of the Bethe permanent via degree-$M$ Bethe permanents, which is based on degree-$M$ covers of the underlying factor graph. In this paper, we prove a degree-$M$-Bethe-permanent-based lower bound on the permanent of a non-negative matrix, which resolves a conjecture proposed by Vontobel in [IEEE Trans. Inf. Theory, Mar. 2013]. We also prove a degree-$M$-Bethe-permanent-based upper bound on the permanent of a non-negative matrix. In the limit $M \to \infty$, these lower and upper bounds yield known Bethe-permanent-based lower and upper bounds on the permanent of a non-negative matrix. Moreover, we prove similar results for an approximation to the permanent known as the (scaled) Sinkhorn permanent.
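
The degree-$M$ bounds themselves require minimizing the Bethe free energy, but the classical objects they refine are easy to exhibit. The following Python sketch (helper names hypothetical) computes an exact permanent and a Sinkhorn scaling, and checks the classical Van der Waerden-type lower bound $\mathrm{perm}(B) \geq n!/n^n$ for the resulting doubly stochastic matrix; this is not the paper's degree-$M$ machinery, only the surrounding setup.

```python
import itertools
import math

import numpy as np

def permanent(A):
    """Exact permanent by summing over all permutations (tiny n only)."""
    n = A.shape[0]
    return sum(np.prod(A[np.arange(n), list(p)])
               for p in itertools.permutations(range(n)))

def sinkhorn(A, iters=1000):
    """Alternately normalize rows and columns of a positive matrix."""
    B = A.astype(float).copy()
    for _ in range(iters):
        B /= B.sum(axis=1, keepdims=True)  # rows sum to 1
        B /= B.sum(axis=0, keepdims=True)  # columns sum to 1
    return B

n = 5
A = np.random.default_rng(0).random((n, n)) + 0.1  # strictly positive entries
B = sinkhorn(A)                                    # (nearly) doubly stochastic
print(permanent(B), ">=", math.factorial(n) / n**n)
```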

A convergent numerical method for $\alpha$-dissipative solutions of the Hunter--Saxton equation is derived. The method is based on applying a tailor-made projection operator to the initial data, and then evolving the projected data exactly using the generalized method of characteristics. The projection step is the only step that introduces any approximation error. It is therefore crucial that its design ensures not only a good approximation of the initial data, but also that errors due to the energy dissipation at later times remain small. Furthermore, it is shown that the main quantity of interest, the wave profile, converges in $L^{\infty}$ for all $t \geq 0$, while a subsequence of the energy density converges weakly for almost every time.

We consider the equivalence between the two main categorical models for the type-theoretical operation of context comprehension, namely P. Dybjer's categories with families and B. Jacobs' comprehension categories, and generalise it to the non-discrete case. The classical equivalence can be summarised in the slogan: "terms as sections". By recognising "terms as coalgebras", we show how to use the structure-semantics adjunction to prove that a 2-category of comprehension categories is biequivalent to a 2-category of (non-discrete) categories with families. The biequivalence restricts to the classical one proved by Hofmann in the discrete case. It also provides a framework in which to compare the different morphisms of these structures that have appeared in the literature, which vary in the degree to which they preserve the relevant structure. We consider in particular morphisms defined by Clairambault-Dybjer, Jacobs, Larrea, and Uemura.

Let $D$ be a digraph. Its acyclic number $\vec{\alpha}(D)$ is the maximum order of an acyclic induced subdigraph, and its dichromatic number $\vec{\chi}(D)$ is the least integer $k$ such that $V(D)$ can be partitioned into $k$ subsets inducing acyclic subdigraphs. We study $\vec{a}(n)$ and $\vec{t}(n)$, which are the minimum of $\vec{\alpha}(D)$ and the maximum of $\vec{\chi}(D)$, respectively, over all oriented triangle-free graphs $D$ of order $n$. For every $\epsilon>0$ and $n$ large enough, we show $(1/\sqrt{2} - \epsilon) \sqrt{n\log n} \leq \vec{a}(n) \leq \frac{107}{8} \sqrt{n} \log n$ and $\frac{8}{107} \sqrt{n}/\log n \leq \vec{t}(n) \leq (\sqrt{2} + \epsilon) \sqrt{n/\log n}$. We also construct an oriented triangle-free graph on 25 vertices with dichromatic number 3, and show that every oriented triangle-free graph of order at most 17 has dichromatic number at most 2.
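
For small instances, claims like the dichromatic-number bounds above can be checked by brute force. The following Python sketch (helper names hypothetical, feasible only for tiny digraphs) tests whether a digraph admits a partition into $k$ parts, each inducing an acyclic subdigraph.

```python
import itertools

def is_acyclic(vertices, arcs):
    """Kahn's algorithm: the induced subdigraph is acyclic iff it empties."""
    vs = set(vertices)
    indeg = {v: 0 for v in vs}
    out = {v: [] for v in vs}
    for u, v in arcs:
        if u in vs and v in vs:   # restrict to the induced subdigraph
            out[u].append(v)
            indeg[v] += 1
    queue = [v for v in vs if indeg[v] == 0]
    seen = 0
    while queue:
        u = queue.pop()
        seen += 1
        for w in out[u]:
            indeg[w] -= 1
            if indeg[w] == 0:
                queue.append(w)
    return seen == len(vs)

def dichromatic_at_most(vertices, arcs, k):
    """Try every k-colouring; accept if each colour class is acyclic."""
    for colouring in itertools.product(range(k), repeat=len(vertices)):
        classes = [[v for v, c in zip(vertices, colouring) if c == i]
                   for i in range(k)]
        if all(is_acyclic(cls, arcs) for cls in classes):
            return True
    return False

# The directed triangle needs 2 colours: one vertex alone breaks the cycle.
print(dichromatic_at_most([0, 1, 2], [(0, 1), (1, 2), (2, 0)], 1))  # False
print(dichromatic_at_most([0, 1, 2], [(0, 1), (1, 2), (2, 0)], 2))  # True
```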

Iterated conditional expectation (ICE) g-computation is an estimation approach for addressing time-varying confounding in both longitudinal and time-to-event data. Unlike other g-computation implementations, ICE avoids the need to specify models for each time-varying covariate. For variance estimation, previous work has suggested the bootstrap. However, bootstrapping can be computationally intensive and sensitive to the number of resamples used. Here, we present ICE g-computation as a set of stacked estimating equations, so the variance of the ICE g-computation estimator can be consistently estimated using the empirical sandwich variance estimator. The performance of the variance estimator was evaluated empirically in a simulation study, where the empirical sandwich variance estimator appropriately estimated the variance. The proposed approach is also demonstrated with an illustrative example on the effect of cigarette smoking on the prevalence of hypertension. When comparing runtimes between the sandwich variance estimator and the bootstrap for the applied example, the sandwich estimator was substantially faster, even when bootstraps were run in parallel. The empirical sandwich variance estimator is a viable option for variance estimation with ICE g-computation.
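
To illustrate the stacked estimating-equation mechanics (not the ICE g-computation stack itself), here is a generic Python sketch of the empirical sandwich variance estimator, applied to a toy mean-and-variance stack; all names are hypothetical.

```python
import numpy as np

def psi(o, theta):
    """Toy stacked estimating functions: theta = (mean, variance)."""
    mu, s2 = theta
    return np.array([o - mu, (o - mu) ** 2 - s2])

def sandwich_variance(data, theta_hat, eps=1e-6):
    """Var(theta_hat) ~ A^{-1} B A^{-T} / n with A = -E[dpsi/dtheta],
    B = E[psi psi^T]; the Jacobian is taken by forward differences."""
    n, p = len(data), len(theta_hat)
    bread = np.zeros((p, p))
    meat = np.zeros((p, p))
    for o in data:
        base = psi(o, theta_hat)
        meat += np.outer(base, base)
        for j in range(p):
            th = np.array(theta_hat, dtype=float)
            th[j] += eps
            bread[:, j] -= (psi(o, th) - base) / eps
    bread /= n
    meat /= n
    Ainv = np.linalg.inv(bread)
    return Ainv @ meat @ Ainv.T / n

data = np.random.default_rng(0).normal(2.0, 1.5, size=500)
theta_hat = np.array([data.mean(), data.var()])
print(np.sqrt(np.diag(sandwich_variance(data, theta_hat))))  # standard errors
```

The same bread-and-meat computation applies to any stack of estimating equations once psi is replaced; this generality is what makes the sandwich estimator attractive relative to re-running the full estimator on bootstrap resamples.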

We present a study of two-variable hypergeometric series, namely the Appell $F_{1}$ and $F_{3}$ series, and obtain a comprehensive list of their analytic continuations, enough to cover the whole real $(x,y)$ plane except on their singular loci. We also derive analytic continuations of their three-variable generalizations, the Lauricella $F_{D}^{(3)}$ series and the Lauricella-Saran $F_{S}^{(3)}$ series, leveraging the analytic continuations of $F_{1}$ and $F_{3}$, which ensures that the whole real $(x,y,z)$ space is covered, except on the singular loci of these functions. While these studies are motivated by the frequent occurrence of these multivariable hypergeometric functions in Feynman integral evaluation, they can also be used wherever these functions appear in other branches of mathematical physics. To facilitate their practical use, we provide four $\textit{MATHEMATICA}$ packages: $\texttt{AppellF1.wl}$, $\texttt{AppellF3.wl}$, $\texttt{LauricellaFD.wl}$, and $\texttt{LauricellaSaranFS.wl}$. These packages handle generic as well as non-generic values of the parameters, keeping in mind their utility in the evaluation of Feynman integrals. We explicitly present various physical applications of these packages in the context of Feynman integral evaluation and compare the results against other methods. Various $\textit{MATHEMATICA}$ notebooks demonstrating different numerical results are also provided along with this paper.
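
The packages themselves are MATHEMATICA code whose interface is not reproduced here. As one cross-checking idea in Python, mpmath ships an independent numerical Appell $F_{1}$, against which continuation results can be compared inside the convergence region $|x|, |y| < 1$; the parameter values below are arbitrary.

```python
from mpmath import appellf1, hyp2f1, mp

mp.dps = 30                        # working precision (decimal digits)
a, b1, b2, c = 0.5, 0.3, 0.7, 1.9  # arbitrary generic parameters
x, y = 0.2, 0.4                    # inside the convergence region
print(appellf1(a, b1, b2, c, x, y))

# Consistency check: F1 degenerates to a Gauss 2F1 when y = 0.
print(appellf1(a, b1, b2, c, x, 0), hyp2f1(a, b1, c, x))
```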

An $(n,m)$-graph is characterised by having $n$ types of arcs and $m$ types of edges. A homomorphism of an $(n,m)$-graph $G$ to an $(n,m)$-graph $H$ is a vertex mapping that preserves adjacency, direction, and type. The $(n,m)$-chromatic number of $G$, denoted by $\chi_{n,m}(G)$, is the minimum value of $|V(H)|$ such that there exists a homomorphism of $G$ to $H$. The theory of homomorphisms of $(n,m)$-graphs has connections with graph-theoretic concepts like harmonious coloring and nowhere-zero flows; with other mathematical topics like binary predicate logic and Coxeter groups; and has applications to the Query Evaluation Problem (QEP) in graph databases. In this article, we show that the arboricity of $G$ is bounded by a function of $\chi_{n,m}(G)$, but not the other way around. Additionally, we show that the acyclic chromatic number of $G$ is bounded by a function of $\chi_{n,m}(G)$, a result already known in the reverse direction. Furthermore, we prove that the $(n,m)$-chromatic number for the family of graphs with maximum average degree less than $2+ \frac{2}{4(2n+m)-1}$, including the subfamily of planar graphs with girth at least $8(2n+m)$, equals $2(2n+m)+1$. This improves upon previous findings, which proved that the $(n,m)$-chromatic number for planar graphs with girth at least $10(2n+m)-4$ is $2(2n+m)+1$. It is established that the $(n,m)$-chromatic number for the family $\mathcal{T}_2$ of partial $2$-trees is bounded both below and above by quadratic functions of $(2n+m)$, with the lower bound being tight when $(2n+m)=2$. We prove $14 \leq \chi_{0,3}(\mathcal{T}_2) \leq 15$ and $14 \leq \chi_{1,1}(\mathcal{T}_2) \leq 21$, which improves both known lower bounds and the former upper bound; moreover, for the latter upper bound, to the best of our knowledge, we provide the first theoretical proof.
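
To make the homomorphism definition concrete, the following Python sketch (all names hypothetical, exponential-time, for tiny examples only) tests whether a vertex mapping from $G$ to $H$ preserving adjacency, direction, and type exists, with arcs stored as typed ordered pairs and edges as typed unordered pairs.

```python
import itertools

def is_hom(phi, arcs_G, edges_G, arcs_H, edges_H):
    """Check that phi sends every typed arc/edge of G onto one of H."""
    arc_set = set(arcs_H)
    edge_set = {(min(u, v), max(u, v), t) for u, v, t in edges_H}
    ok_arcs = all((phi[u], phi[v], t) in arc_set for u, v, t in arcs_G)
    ok_edges = all((min(phi[u], phi[v]), max(phi[u], phi[v]), t) in edge_set
                   for u, v, t in edges_G)
    return ok_arcs and ok_edges

def exists_hom(VG, arcs_G, edges_G, VH, arcs_H, edges_H):
    """Try every vertex mapping G -> H (brute force)."""
    for image in itertools.product(VH, repeat=len(VG)):
        phi = dict(zip(VG, image))
        if is_hom(phi, arcs_G, edges_G, arcs_H, edges_H):
            return True
    return False

# chi_{n,m}(G) is then the least |V(H)| over all H with exists_hom(...) True.
```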

We study the problem of adaptive variable selection in a Gaussian white noise model of intensity $\varepsilon$ under certain sparsity and regularity conditions on an unknown regression function $f$. The $d$-variate regression function $f$ is assumed to be a sum of functions, each depending on a smaller number $k$ of variables ($1 \leq k \leq d$). These functions are unknown to us, and only a few of them are nonzero. We assume that $d=d_\varepsilon \to \infty$ as $\varepsilon \to 0$ and consider both the case where $k$ is fixed and the case where $k=k_\varepsilon \to \infty$ with $k=o(d)$ as $\varepsilon \to 0$. In this work, we introduce an adaptive selection procedure that, under some model assumptions, identifies exactly all nonzero $k$-variate components of $f$. In addition, we establish conditions under which exact identification of the nonzero components is impossible. These conditions ensure that the proposed selection procedure is the best possible in the asymptotically minimax sense with respect to the Hamming risk.
