
In 2020, Behr defined the problem of edge coloring of signed graphs and showed that every signed graph $(G, \sigma)$ can be colored using exactly $\Delta(G)$ or $\Delta(G) + 1$ colors, where $\Delta(G)$ is the maximum degree of the graph $G$. In this paper, we focus on products of signed graphs. We recall the definitions of the Cartesian, tensor, strong, and corona products of signed graphs and prove results for them. In particular, we show that $(1)$ the Cartesian product of $\Delta$-edge-colorable signed graphs is $\Delta$-edge-colorable, $(2)$ the tensor product of a $\Delta$-edge-colorable signed graph and a signed tree requires only $\Delta$ colors, and $(3)$ the corona product of almost any two signed graphs is $\Delta$-edge-colorable. We also prove results on the coloring of products of signed paths and cycles.
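As a minimal sketch of the product construction, the Cartesian product of two signed graphs can be built from edge dictionaries. The representation and the sign convention used here (an edge inherited from a factor keeps that factor's sign) are illustrative assumptions, one common choice in the literature, not code from the paper:

```python
def cartesian_product(g1, g2):
    """Cartesian product of two signed graphs.

    A signed graph is given as a dict mapping a frozenset edge {u, v}
    to its sign (+1 or -1). Assumed convention: an edge copied from a
    factor keeps that factor's sign. Isolated vertices are ignored.
    """
    v1 = {x for e in g1 for x in e}
    v2 = {x for e in g2 for x in e}
    edges = {}
    # one copy of g1 for each vertex of g2
    for e, sign in g1.items():
        u, up = tuple(e)
        for w in v2:
            edges[frozenset({(u, w), (up, w)})] = sign
    # one copy of g2 for each vertex of g1
    for e, sign in g2.items():
        w, wp = tuple(e)
        for u in v1:
            edges[frozenset({(u, w), (u, wp)})] = sign
    return edges
```

For two single-edge graphs (one negative, one positive edge) the product is a 4-cycle whose edges carry two negative and two positive signs.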

Related Content

The Sibson and Arimoto capacities, which are based on the Sibson and Arimoto mutual information (MI) of order $\alpha$, respectively, are well-known generalizations of the channel capacity $C$. In this study, we derive novel alternating optimization algorithms for computing these capacities by providing new variational characterizations of the Sibson MI and the Arimoto MI. Moreover, we prove that all iterative algorithms for computing these capacities are equivalent under appropriate conditions imposed on their initial distributions.
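For orientation, the classical Blahut-Arimoto alternating optimization computes the Shannon capacity $C$, which is the $\alpha \to 1$ special case that the Sibson and Arimoto capacities generalize. The sketch below is only this classical iteration, not the paper's order-$\alpha$ algorithms:

```python
import math

def blahut_arimoto(W, tol=1e-12, max_iter=10000):
    """Classical Blahut-Arimoto iteration for the capacity C (in bits)
    of a discrete memoryless channel with W[x][y] = P(y | x).

    Alternates between the output law q induced by the input law p and
    a multiplicative update of p; log2(z) converges to C.
    """
    n, m = len(W), len(W[0])
    p = [1.0 / n] * n  # input distribution, uniform start
    z = 1.0
    for _ in range(max_iter):
        q = [sum(p[x] * W[x][y] for x in range(n)) for y in range(m)]
        # d[x] = exp( D( W(.|x) || q ) ), KL divergence in nats
        d = [math.exp(sum(W[x][y] * math.log(W[x][y] / q[y])
                          for y in range(m) if W[x][y] > 0))
             for x in range(n)]
        z = sum(p[x] * d[x] for x in range(n))
        new_p = [p[x] * d[x] / z for x in range(n)]
        if max(abs(a - b) for a, b in zip(new_p, p)) < tol:
            break
        p = new_p
    return math.log2(z)
```

For a binary symmetric channel with crossover probability 0.1 this returns $1 - H_2(0.1) \approx 0.5310$ bits.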

The four-parameter generalized beta distribution of the second kind (GBII) has been proposed for modelling insurance losses with heavy-tailed features. The aim of this paper is to present a parametric composite GBII regression model obtained by splicing two GBII distributions using the mode-matching method. It is designed for the simultaneous modelling of small and large claims, and it captures policyholder heterogeneity by introducing covariates into the location parameter. In this setting, the threshold that splits the two GBII distributions varies across individual policyholders according to their risk features. The proposed regression model also accommodates a wide range of insurance loss distributions as the head and the tail, respectively, and provides closed-form expressions for parameter estimation and model prediction. A simulation study is conducted to show the accuracy of the proposed estimation method and the flexibility of the regression models. The applicability of the new class of distributions and regression models is illustrated with a Danish fire losses data set and a Chinese medical insurance claims data set, and the results are compared with those of competing models from the literature.
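A minimal sketch of the two ingredients: the GBII density (one common parameterization is assumed below; conventions vary) and a generic spliced density. In the paper the threshold and weight come from mode matching; here they are left free for illustration:

```python
import math

def gb2_pdf(x, a, b, p, q):
    """GBII density, assumed parameterization:
        f(x) = a x^{ap-1} / ( b^{ap} B(p,q) (1 + (x/b)^a)^{p+q} ),  x > 0.
    Computed in log space via lgamma for numerical safety."""
    if x <= 0:
        return 0.0
    log_beta = math.lgamma(p) + math.lgamma(q) - math.lgamma(p + q)
    log_f = (math.log(a) + (a * p - 1) * math.log(x)
             - a * p * math.log(b) - log_beta
             - (p + q) * math.log1p((x / b) ** a))
    return math.exp(log_f)

def composite_pdf(x, theta, w, f1, F1_theta, f2, F2_theta):
    """Spliced density: truncated head f1 below theta with weight w,
    truncated tail f2 above theta with weight 1 - w. F1_theta and
    F2_theta are the head/tail CDFs evaluated at theta."""
    if x <= theta:
        return w * f1(x) / F1_theta
    return (1 - w) * f2(x) / (1 - F2_theta)
```

For $a=2$, $b=1$, $p=q=2$ the GBII density reduces to $12x^3/(1+x^2)^4$, which integrates to one; a crude Riemann sum confirms this.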

We consider the community detection problem in a sparse $q$-uniform hypergraph $G$, assuming that $G$ is generated according to the Hypergraph Stochastic Block Model (HSBM). We prove that a spectral method based on the non-backtracking operator for hypergraphs works with high probability down to the generalized Kesten-Stigum detection threshold conjectured by Angelini et al. (2015). We characterize the spectrum of the non-backtracking operator for the sparse HSBM and provide an efficient dimension reduction procedure using the Ihara-Bass formula for hypergraphs. As a result, community detection for the sparse HSBM on $n$ vertices can be reduced to an eigenvector problem of a $2n\times 2n$ non-normal matrix constructed from the adjacency matrix and the degree matrix of the hypergraph. To the best of our knowledge, this is the first provable and efficient spectral algorithm that achieves the conjectured threshold for HSBMs with $r$ blocks generated according to a general symmetric probability tensor.

In this work, a Generalized Finite Difference (GFD) scheme is presented for effectively computing the numerical solution of a parabolic-elliptic system modelling a bacterial strain with density-suppressed motility. The GFD method is a meshless method known for its simplicity in solving non-linear boundary value problems over irregular geometries. The paper first introduces the basic elements of the GFD method, and then an explicit-implicit scheme is derived. The convergence of the method is proven under a bound on the time step, and an algorithm is provided for its computational implementation. Finally, several examples are presented, comparing the results obtained on a regular mesh with those on an irregular cloud of points.
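The core of a GFD scheme is a derivative stencil obtained by fitting a truncated Taylor expansion to a scattered cloud of points by weighted least squares. A minimal sketch follows; the $1/\mathrm{dist}^2$ weight is one common choice and an assumption here, as the paper may use a different weighting kernel:

```python
import numpy as np

def gfd_derivatives(center, neighbors, u_center, u_neighbors):
    """GFD stencil at a star point in 2D.

    Fits the second-order Taylor expansion
        u_j - u_0 ~ hx*ux + hy*uy + hx^2/2*uxx + hy^2/2*uyy + hx*hy*uxy
    over the neighbor cloud by weighted least squares.
    Returns the array [ux, uy, uxx, uyy, uxy].
    """
    rows, rhs, wts = [], [], []
    for (x, y), uj in zip(neighbors, u_neighbors):
        hx, hy = x - center[0], y - center[1]
        rows.append([hx, hy, 0.5 * hx * hx, 0.5 * hy * hy, hx * hy])
        rhs.append(uj - u_center)
        wts.append(1.0 / (hx * hx + hy * hy))
    sw = np.sqrt(np.array(wts))
    A = np.array(rows) * sw[:, None]
    b = np.array(rhs) * sw
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coeffs
```

Because the fitted model contains all quadratic terms, the stencil is exact for quadratic fields: for $u = x^2 + y^2$ it recovers $u_{xx} + u_{yy} = 4$ on an irregular cloud.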

We study cut finite element discretizations of a Darcy interface problem based on the mixed finite element pairs $\textbf{RT}_k\times Q_k$, $k\geq 0$. Here $Q_k$ is the space of discontinuous polynomial functions of degree less than or equal to $k$ and $\textbf{RT}$ is the Raviart-Thomas space. We show that the standard ghost penalty stabilization, often added in the weak forms of cut finite element methods for stability and control of the condition number of the linear system matrix, destroys the divergence-free property of the considered element pairs. Therefore, we propose new stabilization terms for the pressure and show that we recover the optimal approximation of the divergence without losing control of the condition number of the linear system matrix. We prove that the method with the new stabilization term has pointwise divergence-free approximations of solenoidal velocity fields. We derive a priori error estimates for the proposed unfitted finite element discretization based on $\textbf{RT}_k\times Q_k$, $k\geq 0$. In addition, by decomposing the mesh into macro-elements and applying ghost penalty terms only on interior edges of macro-elements, stabilization is applied very restrictively and only where needed. Numerical experiments with element pairs $\textbf{RT}_0\times Q_0$, $\textbf{RT}_1\times Q_1$, and $\textbf{BDM}_1\times Q_0$ (where $\textbf{BDM}$ is the Brezzi-Douglas-Marini space) indicate that we have 1) optimal rates of convergence of the approximate velocity and pressure; 2) well-posed linear systems where the condition number of the system matrix scales as it does for fitted finite element discretizations; 3) optimal rates of convergence of the approximate divergence with pointwise divergence-free approximations of solenoidal velocity fields. All three properties hold independently of how the interface is positioned relative to the computational mesh.

Join-preserving maps on the discrete time scale $\omega^+$, referred to as time warps, have been proposed as graded modalities that can be used to quantify the growth of information in the course of program execution. The set of time warps forms a simple distributive involutive residuated lattice -- called the time warp algebra -- that is equipped with residual operations relevant to potential applications. In this paper, we show that although the time warp algebra generates a variety that lacks the finite model property, it nevertheless has a decidable equational theory. We also describe an implementation of a procedure for deciding equations in this algebra, written in the OCaml programming language, that makes use of the Z3 theorem prover.

Let $\mu$ be a probability measure on $\mathbb{R}^d$ and $\mu_N$ its empirical measure with sample size $N$. We prove a concentration inequality for the optimal transport cost between $\mu$ and $\mu_N$ for radial cost functions with polynomial local growth, that can have superpolynomial global growth. This result generalizes and improves upon estimates of Fournier and Guillin. The proof combines ideas from empirical process theory with known concentration rates for compactly supported $\mu$. By partitioning $\mathbb{R}^d$ into annuli, we infer a global estimate from local estimates on the annuli and conclude that the global estimate can be expressed as a sum of the local estimate and a mean-deviation probability for which efficient bounds are known.
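For intuition about the objects being compared, in one dimension the optimal transport cost between two empirical measures of equal sample size has a closed form: the optimal coupling matches sorted samples. This is only the 1D special case; the paper's setting is $\mathbb{R}^d$ with general radial costs:

```python
def wasserstein1_1d(xs, ys):
    """Order-1 optimal transport cost between two empirical measures
    on the real line with equal sample size N. With cost |x - y| the
    optimal coupling pairs order statistics, so
        W1 = (1/N) * sum_i | x_(i) - y_(i) |.
    """
    assert len(xs) == len(ys)
    return sum(abs(a - b) for a, b in zip(sorted(xs), sorted(ys))) / len(xs)
```

Two identical samples have cost zero, and shifting every sample by a constant $c$ gives cost exactly $|c|$.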

In modern neuroscience, functional magnetic resonance imaging (fMRI) has been a crucial and irreplaceable tool that provides a non-invasive window into the dynamics of whole-brain activity. Nevertheless, fMRI is limited by hemodynamic blurring as well as high cost, immobility, and incompatibility with metal implants. Electroencephalography (EEG) is complementary to fMRI and can directly record the cortical electrical activity at high temporal resolution, but has more limited spatial resolution and is unable to recover information about deep subcortical brain structures. The ability to obtain fMRI information from EEG would enable cost-effective imaging across a wider set of brain regions. Further, beyond augmenting the capabilities of EEG, cross-modality models would facilitate the interpretation of fMRI signals. However, as both EEG and fMRI are high-dimensional and prone to artifacts, it is currently challenging to model fMRI from EEG. To address this challenge, we propose a novel architecture that can predict fMRI signals directly from multi-channel EEG without explicit feature engineering. Our model achieves this by implementing a Sinusoidal Representation Network (SIREN) to learn frequency information in brain dynamics from EEG, which serves as the input to a subsequent encoder-decoder to effectively reconstruct the fMRI signal from a specific brain region. We evaluate our model using a simultaneous EEG-fMRI dataset with 8 subjects and investigate its potential for predicting subcortical fMRI signals. The present results reveal that our model outperforms a recent state-of-the-art model and indicate the potential of leveraging periodic activation functions in deep neural networks to model functional neuroimaging data.
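A sketch of the periodic-activation building block: one SIREN layer with the initialization from the original SIREN paper ($\omega_0 = 30$ is that paper's default). How the EEG-to-fMRI model stacks such layers and feeds the encoder-decoder is not reproduced here and remains an assumption:

```python
import numpy as np

def siren_init(fan_in, fan_out, first=False, omega0=30.0, seed=0):
    """SIREN weight initialization: U(-1/fan_in, 1/fan_in) for the
    first layer, otherwise U(-sqrt(6/fan_in)/omega0, +...)."""
    rng = np.random.default_rng(seed)
    bound = 1.0 / fan_in if first else np.sqrt(6.0 / fan_in) / omega0
    return rng.uniform(-bound, bound, (fan_out, fan_in)), np.zeros(fan_out)

def siren_layer(x, w, b, omega0=30.0):
    """One SIREN layer: y = sin(omega0 * (x W^T + b)).
    The sine activation lets the network represent high-frequency
    structure that ReLU networks fit poorly."""
    return np.sin(omega0 * (x @ w.T + b))
```

The output is bounded in $[-1, 1]$ by construction, which keeps activations well-scaled through deep stacks of such layers.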

In this note we consider the approximation of the Greeks Delta and Gamma of American-style options through the numerical solution of time-dependent partial differential complementarity problems (PDCPs). This approach is very attractive as it can yield accurate approximations to these Greeks at essentially no additional computational cost during the numerical solution of the PDCP for the pertinent option value function. For the temporal discretization, the Crank-Nicolson method is arguably the most popular method in computational finance. It is well-known, however, that this method can have an undesirable convergence behaviour in the approximation of the Greeks Delta and Gamma for American-style options, even when backward Euler damping (Rannacher smoothing) is employed. In this note we study for the temporal discretization an interesting family of diagonally implicit Runge-Kutta (DIRK) methods together with the two-stage Lobatto IIIC method. Through extensive numerical experiments for one- and two-asset American-style options, it is shown that these methods can yield a regular second-order convergence behaviour for the option value as well as for the Greeks Delta and Gamma. A mutual comparison reveals that the DIRK method with suitably chosen parameter $\theta$ is preferable.
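To illustrate the DIRK idea on the simplest possible problem, here is the classical two-stage SDIRK2 method ($\gamma = 1 - 1/\sqrt{2}$, second order, L-stable) applied to the scalar linear test equation $y' = ay$. Whether this coincides with the paper's $\theta$-parameterized family is an assumption; in the option-pricing setting each stage would require a linear or LCP solve instead of the closed-form stage solutions used below:

```python
import math

GAMMA = 1.0 - 1.0 / math.sqrt(2.0)  # SDIRK2 stage coefficient

def sdirk2_linear(a, y0, t_end, n_steps):
    """Two-stage SDIRK2 for y' = a*y on [0, t_end].

    Stage equations k_i = f(y + ... + GAMMA*h*k_i) are implicit; for
    the linear test problem they are solved in closed form.
    """
    h = t_end / n_steps
    y = y0
    for _ in range(n_steps):
        k1 = a * y / (1.0 - a * h * GAMMA)
        k2 = a * (y + (1.0 - GAMMA) * h * k1) / (1.0 - a * h * GAMMA)
        y = y + h * ((1.0 - GAMMA) * k1 + GAMMA * k2)
    return y
```

Halving the step size reduces the error by roughly a factor of four, the regular second-order behaviour discussed above.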

A backward stable numerical calculation of a function with condition number $\kappa$ will have a relative accuracy of $\kappa\epsilon_{\text{machine}}$. Standard formulations and software implementations of finite-strain elastic materials models make use of the deformation gradient $\boldsymbol F = I + \partial \boldsymbol u/\partial \boldsymbol X$ and Cauchy-Green tensors. These formulations are not numerically stable, leading to loss of several digits of accuracy when used in the small strain regime, and often precluding the use of single precision floating point arithmetic. We trace the source of this instability to specific points of numerical cancellation, interpretable as ill-conditioned steps. We show how to compute various strain measures in a stable way and how to transform common constitutive models to their stable representations, formulated in either initial or current configuration. The stable formulations all provide accuracy of order $\epsilon_{\text{machine}}$. In many cases, the stable formulations have elegant representations in terms of appropriate strain measures and offer geometric intuition that is lacking in their standard representation. We show that algorithmic differentiation can stably compute stresses so long as the strain energy is expressed stably, and give principles for stable computation that can be applied to inelastic materials.
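The cancellation is already visible in one dimension, where $F = 1 + h$ and the Green strain $e = (F^2 - 1)/2$ subtracts two nearby numbers when the displacement gradient $h$ is small; rewriting the same quantity in terms of $h$ alone removes the subtraction. A minimal demonstration (this 1D toy stands in for the tensor-valued formulations in the text):

```python
from fractions import Fraction

def green_strain_naive(h):
    """e = (F*F - 1)/2 with F = 1 + h: for small h, F*F is close to 1
    and the subtraction cancels the leading digits of the result."""
    f = 1.0 + h
    return (f * f - 1.0) / 2.0

def green_strain_stable(h):
    """Algebraically identical form e = h + h*h/2, computed directly
    from the displacement gradient: no cancellation, accuracy of
    order machine epsilon."""
    return h + 0.5 * h * h

def green_strain_exact(h):
    """Reference value in exact rational arithmetic."""
    hq = Fraction(h)
    return float(hq + hq * hq / 2)
```

At $h = 10^{-8}$ the naive form loses roughly half its significant digits, while the stable form agrees with the exact rational result to machine precision.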
