
There has recently been much interest in Gaussian fields on linear networks and, more generally, on compact metric graphs. One proposed strategy for defining such fields on a metric graph $\Gamma$ is through a covariance function that is isotropic in a metric on the graph. Another is through a fractional-order differential equation $L^{\alpha/2} (\tau u) = \mathcal{W}$ on $\Gamma$, where $L = \kappa^2 - \nabla(a\nabla)$ for (sufficiently nice) functions $\kappa, a$, and $\mathcal{W}$ is Gaussian white noise. We study Markov properties of these two types of fields. First, we show that no Gaussian random fields exist on general metric graphs that are both isotropic and Markov. Then, we show that the second type of fields, the generalized Whittle--Mat\'ern fields, are Markov if and only if $\alpha\in\mathbb{N}$. Further, if $\alpha\in\mathbb{N}$, a generalized Whittle--Mat\'ern field $u$ is Markov of order $\alpha$, which means that the field $u$ in one region $S\subset\Gamma$ is conditionally independent of $u$ in $\Gamma\setminus S$ given the values of $u$ and its $\alpha-1$ derivatives on $\partial S$. Finally, we provide two results as consequences of the theory developed: first, we prove that the Markov property implies an explicit characterization of $u$ on a fixed edge $e$, revealing that the conditional distribution of $u$ on $e$ given the values at the two vertices connected to $e$ is independent of the geometry of $\Gamma$; second, we show that the solution to $L^{1/2}(\tau u) = \mathcal{W}$ on $\Gamma$ can be obtained by conditioning independent generalized Whittle--Mat\'ern processes on the edges, with $\alpha=1$ and Neumann boundary conditions, on being continuous at the vertices.
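As a minimal illustration of the covariance underlying the $\alpha = 1$ case: on the real line with constant coefficients ($a = 1$), the generalized Whittle--Mat\'ern field reduces to a Gaussian process with exponential (Mat\'ern $\nu = 1/2$) covariance and marginal variance $1/(2\kappa\tau^2)$. The sketch below samples such a process on one edge; it is a constant-coefficient stand-in, not the paper's metric-graph construction.

```python
import numpy as np

# Exponential (Matern nu = 1/2) covariance on a single unit-length edge;
# marginal variance is 1/(2 * kappa * tau^2) in this constant-coefficient case.
kappa, tau = 1.0, 1.0
t = np.linspace(0.0, 1.0, 200)
C = np.exp(-kappa * np.abs(t[:, None] - t[None, :])) / (2.0 * kappa * tau**2)

# Sample one path via a Cholesky factor (small jitter for numerical stability).
rng = np.random.default_rng(0)
chol = np.linalg.cholesky(C + 1e-10 * np.eye(len(t)))
u = chol @ rng.standard_normal(len(t))
```

On an actual metric graph the vertex conditions couple the edges, which is exactly the structure the paper's last result exploits.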


Here we merge the two fields of Cops and Robbers and Graph Pebbling to introduce the new topic of Cops and Robbers Pebbling. Both paradigms can be described by moving tokens (the cops) along the edges of a graph to capture a special token (the robber). In Cops and Robbers, all tokens move freely, whereas, in Graph Pebbling, some of the chasing tokens disappear with movement while the robber is stationary. In Cops and Robbers Pebbling, some of the chasing tokens (cops) disappear with movement, while the robber moves freely. We define the cop pebbling number of a graph to be the minimum number of cops necessary to capture the robber in this context, and present upper and lower bounds and exact values, some involving various domination parameters, for an array of graph classes. We also offer several interesting problems and conjectures.
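The "disappearing token" mechanic borrowed from Graph Pebbling can be made concrete: a pebbling move removes two pebbles from a vertex and places one on a neighbor. The sketch below (a hypothetical helper, not from the paper) shows how four pebbles on one end of a path of three vertices suffice to reach the other end.

```python
def pebbling_move(pebbles, u, v, adj):
    """A pebbling move: remove two pebbles from u, place one on a
    neighbor v -- the 'disappearing token' cost described above."""
    assert v in adj[u] and pebbles.get(u, 0) >= 2
    pebbles[u] = pebbles[u] - 2
    pebbles[v] = pebbles.get(v, 0) + 1
    return pebbles

# Path graph 0 - 1 - 2: four pebbles on vertex 0 can reach vertex 2.
adj = {0: [1], 1: [0, 2], 2: [1]}
p = pebbling_move({0: 4}, 0, 1, adj)
p = pebbling_move(p, 0, 1, adj)
p = pebbling_move(p, 1, 2, adj)
```

In Cops and Robbers Pebbling the cops pay this cost while the robber moves freely, which is why the cop pebbling number can greatly exceed the classical cop number.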

Engineers are often faced with the decision to select the most appropriate model for simulating the behavior of engineered systems, among a candidate set of models. Experimental monitoring data can generate significant value by supporting engineers in such decisions. Such data can be leveraged within a Bayesian model updating process, enabling the uncertainty-aware calibration of any candidate model. The model selection task can subsequently be cast into a problem of decision-making under uncertainty, where one seeks to select the model that yields an optimal balance between the reward associated with model precision, in terms of recovering target Quantities of Interest (QoI), and the cost of each model, in terms of complexity and compute time. In this work, we examine the model selection task by means of Bayesian decision theory, through the prism of availability of models of various refinements, and thus varying levels of fidelity. In doing so, we offer an exemplary application of this framework on the IMAC-MVUQ Round-Robin Challenge. Numerical investigations show various outcomes of model selection depending on the target QoI.
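The precision-versus-cost trade-off can be sketched as an expected-utility maximization. The form of the utility below (a linear reward-minus-cost combination) is a hypothetical simplification for illustration, not the paper's decision rule.

```python
import numpy as np

def select_model(expected_errors, costs, reward_scale=1.0):
    """Pick the candidate model maximizing a (hypothetical) expected
    utility: precision reward minus complexity/compute cost."""
    utilities = -reward_scale * np.asarray(expected_errors) - np.asarray(costs)
    return int(np.argmax(utilities))

# Three candidate models of increasing fidelity and increasing cost:
# the mid-fidelity model wins the trade-off here.
best = select_model(expected_errors=[0.50, 0.10, 0.08], costs=[0.0, 0.1, 1.0])
```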

This work addresses the approximation of the mean curvature flow of thin structures for which classical phase field methods are not suitable. By thin structures we mean either structures of higher codimension, typically filaments, or surfaces (including non-orientable surfaces) that are not boundaries of a set. We propose a novel approach which consists in plugging into the classical Allen--Cahn equation a penalization term localized around the skeleton of the evolving set. This ensures that a minimal thickness is preserved during the evolution process. The numerical efficacy of our approach is illustrated with accurate approximations of the evolution by mean curvature flow of filament structures. Furthermore, we show the seamless adaptability of our approach to compute numerical approximations of solutions to the Steiner and Plateau problems in three dimensions.
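The base scheme being modified is the classical Allen--Cahn equation $\partial_t u = \Delta u - W'(u)/\varepsilon^2$ with double-well potential $W$. Below is a minimal 1D explicit step with an additional penalization term passed in as an array; the actual localization of that term around the skeleton is the paper's contribution and is not reproduced here.

```python
import numpy as np

def allen_cahn_step(u, dt, dx, eps, penalty):
    """One explicit Euler step of Allen--Cahn with an extra penalization
    term (here an arbitrary array, standing in for the paper's term
    localized around the skeleton). Periodic boundary conditions."""
    lap = (np.roll(u, 1) - 2.0 * u + np.roll(u, -1)) / dx**2
    w_prime = u * (u**2 - 1.0)  # derivative of the double-well W(u) = (u^2-1)^2/4
    return u + dt * (lap - w_prime / eps**2 + penalty)

# A 1D phase-field profile of an interval, evolved one step without penalty.
x = np.linspace(0.0, 1.0, 100, endpoint=False)
u = np.tanh((0.25 - np.abs(x - 0.5)) / 0.05)
u = allen_cahn_step(u, dt=1e-5, dx=x[1] - x[0], eps=0.05,
                    penalty=np.zeros_like(x))
```

Without the penalization, thin structures pinch off and vanish under this flow; the penalty enforces the minimal thickness mentioned above.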

General log-linear models are widely used to express the association in multivariate frequency data on contingency tables. The paper focuses on the power analysis for testing the goodness-of-fit hypothesis for these models. Conventionally, for the power-related sample size calculations a deviation from the null hypothesis, aka effect size, is specified by means of the chi-square goodness-of-fit index. It is argued that the odds ratio is a more natural measure of effect size, with the advantage of having a data-relevant interpretation. Therefore, a class of log-affine models that are specified by odds ratios whose values deviate from those of the null by a small amount can be chosen as an alternative. Being expressed as sets of constraints on odds ratios, both hypotheses are represented by smooth surfaces in the probability simplex, and thus, the power analysis can be given a geometric interpretation as well. A concept of geometric power is introduced and a Monte-Carlo algorithm for its estimation is proposed. The framework is applied to the power analysis of goodness-of-fit in the context of multinomial sampling. An iterative scaling procedure for generating distributions from a log-affine model is described and its convergence is proved. To illustrate, the geometric power analysis is carried out for data from a clinical study.
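The iterative scaling procedure mentioned above is, in its simplest form, classical iterative proportional fitting (IPF): rescaling rows and columns of a seed table to match target margins. A key fact that makes it relevant here is that IPF preserves the odds ratios of the seed table, so the fitted distribution keeps the effect size one starts from. The 2x2 sketch below illustrates this; it is a textbook IPF, not the paper's full algorithm for log-affine models.

```python
import numpy as np

def iterative_scaling(seed, row_marg, col_marg, iters=200):
    """Iterative proportional fitting: alternately rescale rows and
    columns toward the target margins. Odds ratios of the seed table
    are preserved at every step."""
    p = seed.astype(float).copy()
    for _ in range(iters):
        p *= (row_marg / p.sum(axis=1))[:, None]
        p *= (col_marg / p.sum(axis=0))[None, :]
    return p

seed = np.array([[0.4, 0.1],
                 [0.1, 0.4]])  # odds ratio (0.4*0.4)/(0.1*0.1) = 16
p = iterative_scaling(seed, row_marg=np.array([0.3, 0.7]),
                      col_marg=np.array([0.6, 0.4]))
```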

We consider two classes of natural stochastic processes on finite unlabeled graphs. These are Euclidean stochastic optimization algorithms on the adjacency matrix of weighted graphs and a modified version of the Metropolis MCMC algorithm on stochastic block models over unweighted graphs. In both cases we show that, as the size of the graph goes to infinity, the random trajectories of the stochastic processes converge to deterministic curves on the space of measure-valued graphons. Measure-valued graphons, introduced by Lov\'{a}sz and Szegedy in \cite{lovasz2010decorated}, are a refinement of the concept of graphons that can distinguish between two infinite exchangeable arrays that give rise to the same graphon limit. We introduce new metrics on this space which provide us with a natural notion of convergence for our limit theorems. This notion is equivalent to the convergence of infinite-exchangeable arrays. Under suitable assumptions and a specified time-scaling, the Metropolis chain admits a diffusion limit as the number of vertices goes to infinity. We then demonstrate that, in an appropriately formulated zero-noise limit, the stochastic process of adjacency matrices of this diffusion converges to a deterministic gradient flow curve on the space of graphons introduced in \cite{Oh2023}. A novel feature of this approach is that it provides a precise exponential convergence rate for the Metropolis chain in a certain limiting regime. The connection between a natural Metropolis chain commonly used in exponential random graph models and gradient flows on graphons, to the best of our knowledge, is new in the literature as well.
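A Metropolis chain on unweighted graphs of the kind used in exponential random graph models can be sketched as follows: propose toggling a uniformly random vertex pair and accept with the usual Metropolis ratio. The density below is a toy edge-count density for illustration; the paper's chain on stochastic block models is a modified version of this idea.

```python
import numpy as np

def metropolis_edge_flip(A, log_density, rng):
    """One Metropolis step on simple graphs: toggle a random edge,
    accept with probability min(1, density(B)/density(A))."""
    n = A.shape[0]
    i, j = rng.choice(n, size=2, replace=False)
    B = A.copy()
    B[i, j] = B[j, i] = 1 - A[i, j]
    if np.log(rng.random()) < log_density(B) - log_density(A):
        return B
    return A

def dens(G):
    return 0.5 * G.sum()  # toy edge-count log-density, for illustration only

rng = np.random.default_rng(1)
A = np.zeros((5, 5), dtype=int)
for _ in range(50):
    A = metropolis_edge_flip(A, dens, rng)
```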

This article proposes entropy stable discontinuous Galerkin (DG) schemes for two-fluid relativistic plasma flow equations. These equations couple the flow of relativistic fluids via electromagnetic quantities evolved using Maxwell's equations. The proposed schemes are based on the Gauss-Lobatto quadrature rule, which has the summation by parts (SBP) property. We exploit the structure of the equations, whose flux has three independent parts coupled via nonlinear source terms. We design entropy stable DG schemes for each flux part, which, combined with the fact that the source terms do not affect entropy, results in an entropy stable scheme for the complete system. The proposed schemes are then tested on various test problems in one and two dimensions to demonstrate their accuracy and stability.
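The SBP property named above can be verified concretely: a difference operator $D$ is SBP if $D = P^{-1}Q$ with $P$ a positive-definite norm matrix and $Q + Q^T = B = \mathrm{diag}(-1, 0, \dots, 0, 1)$, so that discrete integration by parts mimics the continuous one. The check below uses a simple second-order finite-difference operator as a stand-in for the paper's Gauss-Lobatto DG operators.

```python
import numpy as np

# Second-order SBP operator on a uniform grid of n points, spacing h.
n, h = 5, 0.25
P = h * np.diag([0.5] + [1.0] * (n - 2) + [0.5])  # diagonal norm matrix
D = np.zeros((n, n))
D[0, :2] = [-1.0, 1.0]          # one-sided at the left boundary
D[-1, -2:] = [-1.0, 1.0]        # one-sided at the right boundary
for i in range(1, n - 1):
    D[i, i - 1], D[i, i + 1] = -0.5, 0.5  # central in the interior
D /= h

Q = P @ D
B = np.zeros((n, n))
B[0, 0], B[-1, -1] = -1.0, 1.0  # boundary terms of integration by parts
```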

Microring resonators (MRRs) are promising devices for time-delay photonic reservoir computing, but the impact of the different physical effects taking place in MRRs on reservoir computing performance is yet to be fully understood. We numerically analyze the impact of linear losses, as well as of the relaxation times of thermo-optic and free-carrier effects, on the prediction error for the time-series task NARMA-10. We demonstrate the existence of three regions, defined by the input power and the frequency detuning between the optical source and the microring resonance, that reveal the cavity transition from linear to nonlinear regimes. One of these regions offers very low error in time-series prediction under relatively low input power and number of nodes, while the other regions either lack nonlinearity or become unstable. This study provides insight into the design of the MRR and the optimization of its physical properties for improving the prediction performance of time-delay reservoir computing.
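For reference, NARMA-10 is a standard benchmark series defined by the recurrence $y(t+1) = 0.3\,y(t) + 0.05\,y(t)\sum_{i=0}^{9} y(t-i) + 1.5\,u(t-9)u(t) + 0.1$, with inputs $u$ typically drawn i.i.d. uniform on $[0, 0.5]$. A direct implementation:

```python
import numpy as np

def narma10(u):
    """Generate the NARMA-10 target series from an input sequence u."""
    y = np.zeros_like(u)
    for t in range(9, len(u) - 1):
        y[t + 1] = (0.3 * y[t]
                    + 0.05 * y[t] * y[t - 9:t + 1].sum()
                    + 1.5 * u[t - 9] * u[t]
                    + 0.1)
    return y

u = np.random.default_rng(0).uniform(0.0, 0.5, size=500)
y = narma10(u)
```

The reservoir's task is then to predict $y(t+1)$ from the input history, and the prediction error on this series is the figure of merit analyzed in the abstract above.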

Linear logic has provided new perspectives on proof theory, denotational semantics and the study of programming languages. One of its main successes is proof-nets, canonical representations of proofs that lie at the intersection between logic and graph theory. In the case of the minimalist proof-system of multiplicative linear logic without units (MLL), these two aspects are completely fused: proof-nets for this system are graphs satisfying a correctness criterion that can be fully expressed in the language of graphs. For more expressive logical systems (containing logical constants, quantifiers and exponential modalities), this is not completely the case. The purely graphical approach of proof-nets deprives them of any sequential structure that is crucial to represent the order in which arguments are presented, which is necessary for these extensions. Rebuilding this order of presentation - sequentializing the graph - is thus a requirement for a graph to be logical. Presentations and study of the artifacts ensuring that sequentialization can be done, such as boxes or jumps, are an integral part of research on linear logic. Jumps, extensively studied by Faggian and di Giamberardino, can express intermediate degrees of sequentialization between a sequent calculus proof and a fully desequentialized proof-net. We propose to analyze the logical strength of jumps by internalizing them in an extension of MLL where axioms on a specific formula, the jumping formula, introduce constraints on the possible sequentializations. The jumping formula needs to be treated non-linearly, which we do either axiomatically, or by embedding it in a very controlled fragment of multiplicative-exponential linear logic, uncovering the exponential logic of sequentialization.

Spectral independence is a recently-developed framework for obtaining sharp bounds on the convergence time of the classical Glauber dynamics. This new framework has yielded optimal $O(n \log n)$ sampling algorithms on bounded-degree graphs for a large class of problems throughout the so-called uniqueness regime, including, for example, the problems of sampling independent sets, matchings, and Ising-model configurations. Our main contribution is to relax the bounded-degree assumption that has so far been important in establishing and applying spectral independence. Previous methods for avoiding degree bounds rely on using $L^p$-norms to analyse contraction on graphs with bounded connective constant (Sinclair, Srivastava, Yin; FOCS'13). The non-linearity of $L^p$-norms is an obstacle to applying these results to bound spectral independence. Our solution is to capture the $L^p$-analysis recursively by amortising over the subtrees of the recurrence used to analyse contraction. Our method generalises previous analyses that applied only to bounded-degree graphs. As a main application of our techniques, we consider the random graph $G(n,d/n)$, where the previously known algorithms run in time $n^{O(\log d)}$ or applied only to large $d$. We refine these algorithmic bounds significantly, and develop fast $n^{1+o(1)}$ algorithms based on Glauber dynamics that apply to all $d$, throughout the uniqueness regime.
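The Glauber dynamics analyzed above is, for independent sets (the hardcore model at fugacity $\lambda$), a simple single-site chain: pick a uniform vertex; if a neighbor is occupied the vertex must be unoccupied, otherwise occupy it with probability $\lambda/(1+\lambda)$. A minimal sketch on a path graph:

```python
import numpy as np

def glauber_step(ind, adj, lam, rng):
    """One Glauber update for the hardcore model at fugacity lam.
    Starting from an independent set, the chain stays in the space
    of independent sets."""
    v = rng.integers(len(adj))
    if any(ind[u] for u in adj[v]):
        ind[v] = 0  # a neighbor is occupied: v must be unoccupied
    else:
        ind[v] = int(rng.random() < lam / (1.0 + lam))
    return ind

# Path graph on 6 vertices, run from the empty independent set.
adj = {i: [j for j in (i - 1, i + 1) if 0 <= j < 6] for i in range(6)}
ind = [0] * 6
rng = np.random.default_rng(2)
for _ in range(200):
    ind = glauber_step(ind, adj, lam=1.0, rng=rng)
```

The abstract's contribution is about how fast chains of this kind mix on unbounded-degree graphs such as $G(n, d/n)$, not about the update rule itself.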

Hashing has been widely used in approximate nearest neighbor search for large-scale database retrieval, owing to its computation and storage efficiency. Deep hashing, which devises convolutional neural network architectures to exploit and extract the semantic information or features of images, has received increasing attention recently. In this survey, several deep supervised hashing methods for image retrieval are evaluated, and I identify three main directions for deep supervised hashing methods. Several comments are made at the end. Moreover, to break through the bottleneck of existing hashing methods, I propose a Shadow Recurrent Hashing (SRH) method as an attempt. Specifically, I devise a CNN architecture to extract the semantic features of images and design a loss function to encourage similar images to be projected close together. To this end, I propose a concept: the shadow of the CNN output. During the optimization process, the CNN output and its shadow guide each other so as to approach the optimal solution as closely as possible. Several experiments on the CIFAR-10 dataset show the satisfactory performance of SRH.
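A loss that "encourages similar images to be projected close together" is typically a pairwise contrastive objective on the (continuous, pre-binarization) codes. The sketch below is a generic example of such a loss, not the SRH objective itself: similar pairs are pulled together, dissimilar pairs pushed beyond a margin.

```python
import numpy as np

def pairwise_hash_loss(codes, sim, margin=4.0):
    """Generic contrastive loss on continuous hash codes (illustrative,
    not the SRH loss): squared distance for similar pairs, hinge on a
    margin for dissimilar pairs."""
    d = ((codes[:, None, :] - codes[None, :, :]) ** 2).sum(-1)
    return float(np.where(sim, d, np.maximum(0.0, margin - d)).mean())

# Three 2-bit continuous codes: the first two are similar, the third is not.
codes = np.array([[1.0, 1.0], [1.0, 0.9], [-1.0, -1.0]])
sim = np.array([[1, 1, 0],
                [1, 1, 0],
                [0, 0, 1]], dtype=bool)
loss = pairwise_hash_loss(codes, sim)
```

Binarizing the learned codes (e.g. by taking signs) then yields the hash used at retrieval time.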
