Despite numerous years of research into the merits and trade-offs of various model selection criteria, obtaining robust results that elucidate the behavior of cross-validation remains a challenging endeavor. In this paper, we highlight the inherent limitations of cross-validation when employed to discern the structure of a Gaussian graphical model. We provide finite-sample bounds on the probability that the Lasso estimator for the neighborhood of a node within a Gaussian graphical model, optimized using a prediction oracle, misidentifies the neighborhood. Our results pertain to both undirected and directed acyclic graphs, encompassing general, sparse covariance structures. To support our theoretical findings, we conduct an empirical investigation of this inconsistency by contrasting our outcomes with other commonly used information criteria through an extensive simulation study. Given that many algorithms designed to learn the structure of graphical models require hyperparameter selection, the precise calibration of this hyperparameter is paramount for accurately estimating the inherent structure. Consequently, our observations shed light on this widely recognized practical challenge.
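To make the setting concrete, the following is a minimal sketch of neighborhood selection with a cross-validation-tuned Lasso in the spirit of the abstract; the chain-graph precision matrix, the sample sizes, and the use of scikit-learn's LassoCV are illustrative choices, not the paper's experimental setup.

```python
# Minimal sketch: neighborhood selection for a Gaussian graphical model
# with a cross-validation-tuned Lasso (Meinshausen-Buhlmann style).
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)
p, n = 10, 200
# Toy sparse precision matrix: a chain graph, so node j neighbors j-1, j+1.
Theta = np.eye(p) + 0.3 * (np.eye(p, k=1) + np.eye(p, k=-1))
X = rng.multivariate_normal(np.zeros(p), np.linalg.inv(Theta), size=n)

def cv_neighborhood(X, j, tol=1e-8):
    """Estimate the neighborhood of node j by Lasso-regressing X[:, j]
    on all remaining columns, with the penalty chosen by 5-fold CV."""
    others = np.delete(np.arange(X.shape[1]), j)
    fit = LassoCV(cv=5).fit(X[:, others], X[:, j])
    return others[np.abs(fit.coef_) > tol]

print("estimated neighborhood of node 0:", cv_neighborhood(X, 0))
```

A prediction-optimal penalty chosen this way tends to over-select, returning the true neighbors plus spurious ones; that misidentification is the phenomenon the paper's finite-sample bounds quantify.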

In this paper, we design a new kind of high-order inverse Lax-Wendroff (ILW) boundary treatment for solving hyperbolic conservation laws with a finite difference method on a Cartesian mesh. This new ILW method decomposes the construction of ghost-point values near the inflow boundary into two steps: interpolation and extrapolation. First, we impose values at artificial auxiliary points through a polynomial interpolating the interior points near the boundary. Then, we construct a Hermite extrapolation based on those auxiliary point values and the spatial derivatives at the boundary obtained via the ILW procedure. This polynomial gives the approximation at the ghost points. With an appropriate selection of the artificial auxiliary points, high-order accuracy and stable results can be achieved. Moreover, theoretical analysis indicates that, compared with the original ILW method, the proposed one requires fewer terms from the relatively complicated ILW procedure, especially at higher orders of accuracy, and thus improves computational efficiency while maintaining accuracy and stability. We perform numerical experiments on several benchmarks, including one- and two-dimensional scalar equations and systems. The robustness and efficiency of the proposed scheme are numerically verified.
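To fix ideas, here is a toy one-dimensional sketch of the two-step construction under simplifying assumptions: the solution data are synthetic, the interpolation degree and auxiliary-point locations are arbitrary illustrative choices, and the boundary derivative is supplied by hand where the actual method would obtain it from the PDE via the ILW procedure.

```python
# Illustrative two-step ghost-point construction (not the full ILW method):
# (1) interpolate interior grid values to a few auxiliary points;
# (2) Hermite-extrapolate using those values plus a boundary derivative.
import numpy as np

h = 0.1
x_int = h * np.arange(1, 5)            # interior points near boundary x = 0
u_int = np.sin(x_int)                  # stand-in for interior solution values

# Step 1: interpolate interior data, then sample artificial auxiliary points.
interp = np.polynomial.Polynomial.fit(x_int, u_int, deg=3)
x_aux = h * np.array([0.5, 1.0, 1.5])  # illustrative auxiliary locations
u_aux = interp(x_aux)

# Step 2: Hermite extrapolation matching the auxiliary values and a first
# derivative at the boundary (here u'(0) = 1 by hand; in the method it comes
# from the ILW procedure). Solve for the cubic coefficients c.
du0 = 1.0
A = np.vstack([x_aux[:, None] ** np.arange(4),   # value conditions
               [0.0, 1.0, 0.0, 0.0]])            # derivative condition at x = 0
c = np.linalg.solve(A, np.append(u_aux, du0))

x_ghost = -h * np.arange(1, 3)         # ghost points outside the domain
u_ghost = (x_ghost[:, None] ** np.arange(4)) @ c
print(u_ghost, "vs exact", np.sin(x_ghost))
```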

We give a fully polynomial-time randomized approximation scheme (FPRAS) for two-terminal reliability in directed acyclic graphs (DAGs). In contrast, we also show that the complementary problem of approximating two-terminal unreliability in DAGs is #BIS-hard.
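For concreteness, the sketch below spells out the quantity in question and estimates it by naive Monte Carlo; this is not the paper's FPRAS, and the naive estimator needs prohibitively many samples precisely when the reliability (or unreliability) is very small.

```python
# Two-terminal reliability of a DAG: Pr[t is reachable from s] when each
# edge survives independently. Crude Monte Carlo estimate for illustration.
import random

def reachable(n, edges, s, t):
    """DFS reachability from s to t over the surviving edge set."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
    stack, seen = [s], {s}
    while stack:
        u = stack.pop()
        if u == t:
            return True
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return False

def mc_reliability(n, edges, probs, s, t, samples=100_000):
    hits = 0
    for _ in range(samples):
        alive = [e for e, p in zip(edges, probs) if random.random() < p]
        hits += reachable(n, alive, s, t)
    return hits / samples

# Diamond DAG: 0 -> {1, 2} -> 3, every edge alive with probability 0.9.
edges = [(0, 1), (0, 2), (1, 3), (2, 3)]
print(mc_reliability(4, edges, [0.9] * 4, s=0, t=3))  # exact value: 0.9639
```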

Motivated by the important statistical role of sparsity, the paper uncovers four reparametrizations for covariance matrices in which sparsity is associated with conditional independence graphs in a notional Gaussian model. The intimate relationship between the Iwasawa decomposition of the general linear group and the open cone of positive definite matrices allows a unifying perspective. Specifically, the positive definite cone can be reconstructed without loss or redundancy from the exponential map applied to four Lie subalgebras determined by the Iwasawa decomposition of the general linear group. This accords geometric interpretations to the reparametrizations and the corresponding notion of sparsity. Conditions that ensure legitimacy of the reparametrizations for statistical models are identified. While the focus of this work is on understanding population-level structure, there are strong methodological implications. In particular, since the population-level sparsity manifests in a vector space, imposition of sparsity on relevant sample quantities produces a covariance estimate that respects the positive definite cone constraint.
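As a loose illustration of the population-level idea (one plausible reading of the construction, not the paper's exact maps), the sketch below sends an unconstrained sparse pair, a diagonal part and a nilpotent strictly triangular part, through matrix exponentials to a positive definite matrix: sparsity is imposed in a vector space, while the cone constraint holds automatically.

```python
# Illustrative reparametrization: map (d, N) with d a real vector and N
# strictly upper triangular to a positive definite covariance matrix.
import numpy as np
from scipy.linalg import expm

def pd_from_sparse(d, N):
    """Sigma^{-1} = U^T D U with U = expm(N) unit upper triangular
    (N is nilpotent, so the series is finite) and D = diag(exp(d))."""
    U = expm(np.triu(N, k=1))          # unit upper triangular factor
    D = np.diag(np.exp(d))
    K = U.T @ D @ U                    # a precision matrix, always PD
    return np.linalg.inv(K)            # the covariance it parametrizes

p = 4
N = np.zeros((p, p))
N[0, 1] = 0.7                          # a single nonzero "edge" entry
Sigma = pd_from_sparse(np.zeros(p), N)
print(np.linalg.eigvalsh(Sigma))       # all positive: PD by construction
```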

Inner products of neural network feature maps arise in a wide variety of machine learning frameworks as a method of modeling relations between inputs. This work studies the approximation properties of inner products of neural networks. It is shown that the inner product of a multi-layer perceptron with itself is a universal approximator for symmetric positive-definite relation functions. In the case of asymmetric relation functions, it is shown that the inner product of two different multi-layer perceptrons is a universal approximator. In both cases, a bound is obtained on the number of neurons required to achieve a given accuracy of approximation. In the symmetric case, the function class can be identified with kernels of reproducing kernel Hilbert spaces, whereas in the asymmetric case the function class can be identified with kernels of reproducing kernel Banach spaces. Finally, these approximation results are applied to analyzing the attention mechanism underlying Transformers, showing that any retrieval mechanism defined by an abstract preorder can be approximated by attention through its inner product relations. This result uses the Debreu representation theorem in economics to represent preference relations in terms of utility functions.
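A minimal PyTorch sketch of the object under study; the dimensions and architectures are illustrative.

```python
# An asymmetric relation r(x, y) modeled as the inner product
# <phi(x), psi(y)> of two different MLP feature maps.
import torch
import torch.nn as nn

def mlp(d_in, d_hidden, d_out):
    return nn.Sequential(nn.Linear(d_in, d_hidden), nn.ReLU(),
                         nn.Linear(d_hidden, d_out))

phi = mlp(8, 64, 16)   # feature map for the first argument
psi = mlp(8, 64, 16)   # a different map => asymmetric relation

x = torch.randn(32, 8)
y = torch.randn(32, 8)
relation = (phi(x) * psi(y)).sum(dim=-1)   # batched inner products
print(relation.shape)                      # torch.Size([32])
# Symmetric positive-definite case: reuse one network, <phi(x), phi(y)>.
```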

The exponential growth in scientific publications poses a severe challenge for human researchers. It forces attention to ever narrower sub-fields, which makes it challenging to discover new impactful research ideas and collaborations outside one's own field. While there are ways to predict a scientific paper's future citation counts, they require the research to be finished and the paper written, usually assessing impact long after the idea was conceived. Here we show how to predict the impact of ideas at their inception, before any paper about them has been published. For that, we developed a large evolving knowledge graph built from more than 21 million scientific papers. It combines a semantic network created from the content of the papers with an impact network created from their historic citations. Using machine learning, we can predict the dynamics of the evolving network into the future with high accuracy, and thereby the impact of new research directions. We envision that the ability to predict the impact of new ideas will be a crucial component of future artificial muses that can inspire new impactful and interesting scientific ideas.

In this paper, we use the Bayesian inversion approach to study the data assimilation problem for a family of tumor growth models described by porous-medium-type equations. The models contain uncertain parameters and are indexed by a physical parameter $m$, which characterizes the constitutive relation between density and pressure. Based on these models, we employ the Bayesian inversion framework to infer parametric and nonparametric unknowns that affect tumor growth from noisy observations of tumor cell density. We establish well-posedness and stability theory for the Bayesian inverse problem and further prove convergence of the posterior distribution in the so-called incompressible limit, $m \rightarrow \infty$. Since the posterior distributions across the index regime $m\in[2,\infty)$ can thus be treated in a unified manner, these theoretical results also guide the design of the numerical inference for the unknowns. We propose a generic computational framework for such inverse problems, consisting of a typical sampling algorithm and an asymptotic-preserving solver for the forward problem. With extensive numerical tests, we demonstrate that the proposed method achieves satisfactory accuracy in the Bayesian inference of the tumor growth models, uniformly with respect to the constitutive relation.
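The generic framework can be pictured as a standard sampling loop wrapped around a black-box forward solver. The sketch below uses random-walk Metropolis with a toy linear forward map standing in for the asymptotic-preserving PDE solver; all names and numbers are illustrative.

```python
# Random-walk Metropolis around a placeholder forward model.
import numpy as np

rng = np.random.default_rng(1)

def forward(theta):
    """Placeholder forward model mapping a parameter to predicted density
    observations. The real solver would integrate the tumor-growth PDE."""
    return theta * np.array([1.0, 0.5, 0.25])

y_obs = forward(2.0) + 0.05 * rng.standard_normal(3)  # synthetic data
sigma, n_steps, step = 0.05, 5000, 0.1

def log_post(theta):
    """Gaussian likelihood with noise level sigma and a N(0, 1) prior."""
    misfit = y_obs - forward(theta)
    return -0.5 * np.sum(misfit**2) / sigma**2 - 0.5 * theta**2

theta, chain = 0.0, []
lp = log_post(theta)
for _ in range(n_steps):
    prop = theta + step * rng.standard_normal()
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:      # Metropolis accept/reject
        theta, lp = prop, lp_prop
    chain.append(theta)
print("posterior mean ~", np.mean(chain[1000:]))
```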

The goal of this paper is to investigate a family of optimization problems arising from list homomorphisms, and to understand what the best possible algorithms are if we restrict the problem to bounded-treewidth graphs. For a fixed $H$, the input of the optimization problem LHomVD($H$) is a graph $G$ with lists $L(v)$, and the task is to find a set $X$ of vertices having minimum size such that $(G-X,L)$ has a list homomorphism to $H$. We define analogously the edge-deletion variant LHomED($H$). This expressive family of problems includes members that are essentially equivalent to fundamental problems such as Vertex Cover, Max Cut, Odd Cycle Transversal, and Edge/Vertex Multiway Cut. For both variants, we first characterize those graphs $H$ that make the problem polynomial-time solvable and show that the problem is NP-hard for every other fixed $H$. Second, as our main result, we determine, for every graph $H$ for which the problem is NP-hard, the smallest possible constant $c_H$ such that the problem can be solved in time $c_H^t\cdot n^{O(1)}$ if a tree decomposition of $G$ having width $t$ is given in the input. Let $i(H)$ be the maximum size of a set of vertices in $H$ that have pairwise incomparable neighborhoods. For the vertex-deletion variant LHomVD($H$), we show that the smallest possible constant is $i(H)+1$ for every $H$. The situation is more complex for the edge-deletion version. For every $H$, one can solve LHomED($H$) in time $i(H)^t\cdot n^{O(1)}$ if a tree decomposition of width $t$ is given. However, the existence of a specific type of decomposition of $H$ shows that there are graphs $H$ where LHomED($H$) can be solved significantly more efficiently and the best possible constant can be arbitrarily smaller than $i(H)$. Nevertheless, we determine this best possible constant and (assuming the SETH) prove tight bounds for every fixed $H$.
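For concreteness, here is a brute-force reference implementation of LHomVD($H$) on tiny instances; it enumerates deletion sets and assignments exhaustively and does not implement the treewidth-based $c_H^t \cdot n^{O(1)}$ algorithms that are the paper's subject.

```python
# Brute-force LHomVD(H): smallest vertex set X with (G - X, L) -> H.
from itertools import combinations, product

def has_list_hom(G_edges, verts, lists, H_edges):
    """Does (G, L) admit a list homomorphism to H? Brute force over
    all assignments consistent with the lists (H taken undirected)."""
    verts = list(verts)
    H = set(H_edges) | {(b, a) for a, b in H_edges}
    for img in product(*(lists[v] for v in verts)):
        f = dict(zip(verts, img))
        if all((f[u], f[v]) in H for u, v in G_edges):
            return True
    return False

def lhom_vd(G_verts, G_edges, lists, H_edges):
    """Try deletion sets in order of increasing size."""
    for k in range(len(G_verts) + 1):
        for X in combinations(G_verts, k):
            rest = [v for v in G_verts if v not in X]
            edges = [(u, v) for u, v in G_edges if u not in X and v not in X]
            if has_list_hom(edges, rest, lists, H_edges):
                return set(X)

# Example: with H = K2 (one edge a-b, full lists) this is essentially
# "delete fewest vertices to make G bipartite"; a triangle needs one deletion.
G_verts = [0, 1, 2]
G_edges = [(0, 1), (1, 2), (0, 2)]
lists = {v: ["a", "b"] for v in G_verts}
print(lhom_vd(G_verts, G_edges, lists, H_edges=[("a", "b")]))  # e.g. {0}
```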

Strong spatial mixing (SSM) is an important quantitative notion of correlation decay for Gibbs distributions arising in statistical physics, probability theory, and theoretical computer science. A longstanding conjecture is that the uniform distribution on proper $q$-colorings of a $\Delta$-regular tree exhibits SSM whenever $q \ge \Delta+1$. Moreover, it is widely believed that as long as SSM holds on bounded-degree trees with $q$ colors, one would obtain an efficient sampler for $q$-colorings on all bounded-degree graphs via simple Markov chain algorithms. It is surprising that such a basic question is still open, even on trees, but then again it also highlights how much we still have to learn about random colorings. In this paper, we show the following: (1) For any $\Delta \ge 3$, SSM holds for random $q$-colorings on trees of maximum degree $\Delta$ whenever $q \ge \Delta + 3$. Thus we almost fully resolve the aforementioned conjecture. Our result substantially improves upon the previous best bound, which requires $q \ge 1.59\Delta+\gamma^*$ for an absolute constant $\gamma^* > 0$. (2) For any $\Delta\ge 3$ and girth $g = \Omega_\Delta(1)$, we establish optimal mixing of the Glauber dynamics for $q$-colorings on graphs of maximum degree $\Delta$ and girth $g$ whenever $q \ge \Delta+3$. Our approach is based on a new general reduction from spectral independence on large-girth graphs to SSM on trees that is of independent interest. Using the same techniques, we also prove near-optimal bounds on weak spatial mixing (WSM), a notion closely related to SSM, for the antiferromagnetic Potts model on trees.
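The Glauber dynamics appearing in result (2) is the following elementary chain: repeatedly pick a uniform random vertex and recolor it uniformly among the colors absent from its neighborhood. A minimal sketch, with an illustrative greedy initialization that is valid whenever $q \ge \Delta + 1$:

```python
# Glauber dynamics for proper q-colorings of a graph given by adjacency lists.
import random

def glauber_colorings(adj, q, steps, seed=0):
    random.seed(seed)
    n = len(adj)
    # Greedy initial proper coloring (works whenever q >= max degree + 1).
    color = [0] * n
    for v in range(n):
        used = {color[u] for u in adj[v] if u < v}
        color[v] = min(c for c in range(q) if c not in used)
    for _ in range(steps):
        v = random.randrange(n)
        allowed = set(range(q)) - {color[u] for u in adj[v]}
        color[v] = random.choice(sorted(allowed))  # stays proper by design
    return color

# 5-cycle (Delta = 2) with q = 5 = Delta + 3 colors.
adj = [[4, 1], [0, 2], [1, 3], [2, 4], [3, 0]]
print(glauber_colorings(adj, q=5, steps=10_000))
```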

Combinatorial optimization - a field of research addressing problems that feature strongly in a wealth of scientific and industrial contexts - has been identified as one of the core potential fields of applicability of quantum computers. It is still unclear, however, to what extent quantum algorithms can actually outperform classical algorithms for problems of this type. In this work, by resorting to computational learning theory and cryptographic notions, we prove that quantum computers feature an in-principle super-polynomial advantage over classical computers in approximating solutions to combinatorial optimization problems. Specifically, building on seminal work by Kearns and Valiant and introducing a new reduction, we identify special types of problems that are hard for classical computers to approximate up to polynomial factors. At the same time, we give a quantum algorithm that can efficiently approximate the optimal solution within a polynomial factor. The core of the quantum advantage discovered in this work is ultimately borrowed from Shor's quantum algorithm for factoring. Concretely, we prove a super-polynomial advantage for approximating special instances of the so-called integer programming problem. In doing so, we provide an explicit end-to-end construction for advantage-bearing instances. This result shows that quantum devices have, in principle, the power to approximate combinatorial optimization solutions beyond the reach of classical efficient algorithms. Our results also give clear guidance on how to construct such advantage-bearing problem instances.

We investigate pointwise estimation of the function-valued velocity field of a second-order linear SPDE. Based on multiple spatially localised measurements, we construct a weighted augmented MLE and study its convergence properties as the spatial resolution of the observations tends to zero and the number of measurements increases. By imposing Hölder smoothness conditions, we recover the pointwise convergence rate known to be minimax-optimal in the linear regression framework. The optimality of the rate in the current setting is verified by adapting the lower bound ansatz based on the RKHS of local measurements to the nonparametric situation.
