
We explore the maximum likelihood degree of a homogeneous polynomial $F$ on a projective variety $X$, $\mathrm{MLD}_F(X)$, which generalizes the concept of Gaussian maximum likelihood degree. We show that $\mathrm{MLD}_F(X)$ is equal to the count of critical points of a rational function on $X$, and give different geometric characterizations of it via topological Euler characteristic, dual varieties, and Chern classes.
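A toy way to see the critical-point characterization in action: for the simplest ambient case $X = \mathbb{P}^2$, one can count the complex critical points of the degree-zero rational function $F/\ell^{\deg F}$ in an affine chart with a computer algebra system. The specific quadric $F$ and linear form $\ell$ below are illustrative choices, not taken from the paper.

```python
import sympy as sp

x, y = sp.symbols('x y')
F = x**2 + 2*y**2 + 3        # chart z = 1 of F = x^2 + 2y^2 + 3z^2 (illustrative)
l = x + y + 1                # chart z = 1 of a linear form l = x + y + z
phi = F / l**2               # degree-0 rational function on P^2

# Critical equations: numerators of the partial derivatives (common factors cancelled)
eqs = [sp.numer(sp.cancel(sp.diff(phi, v))) for v in (x, y)]
sols = sp.solve(eqs, [x, y], dict=True)

# Keep only critical points away from the zeros and poles of phi
crit = [s for s in sols
        if sp.simplify(F.subs(s)) != 0 and sp.simplify(l.subs(s)) != 0]
print(len(crit), "critical points found in this chart")
```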

Related content

We consider the simultaneously fast and in-place computation of the Euclidean polynomial modular remainder $R(X) \equiv A(X) \mod B(X)$, with $A$ and $B$ of respective degrees $n$ and $m \le n$. Fast algorithms for this usually come at the expense of (potentially large) extra temporary space. To remain in-place, a further issue is to avoid storing the whole quotient $Q(X)$ such that $A = BQ + R$. Suppose the multiplication of two polynomials of degree $k$ can be performed with $M(k)$ operations and $O(k)$ extra space, and that the input space of $A$ or $B$ may be used for intermediate computations, provided $A$ and $B$ are restored to their initial states once the remainder computation completes. Under these assumptions, we propose an in-place algorithm (that is, one whose extra required space is reduced to $O(1)$ only) using at most $O((n/m)M(m)\log(m))$ arithmetic operations if $M(m)$ is quasi-linear, or $O((n/m)M(m))$ otherwise. We also propose variants that compute -- still in-place and with the same kind of complexity bounds -- the over-place remainder $A(X) \equiv A(X) \mod B(X)$, the accumulated remainder $R(X) += A(X) \mod B(X)$, and the accumulated modular multiplication $R(X) += A(X)C(X) \mod B(X)$. To achieve this, we develop techniques for Toeplitz matrix operations whose output is also part of the input. Fast in-place accumulating versions are obtained for the latter, and thus for convolutions, and are then used for polynomial remaindering. This is realized via further reductions to accumulated polynomial multiplication, for which fast in-place algorithms have recently been developed.
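For orientation, here is a minimal sketch of the one idea that survives even in the schoolbook $O(nm)$ algorithm: quotient coefficients can be produced one at a time and discarded, so $Q$ is never stored. The paper's actual contribution, achieving this simultaneously in-place and fast via in-place Toeplitz and accumulated-multiplication routines, is not reproduced here.

```python
def mod_inplace(A, B):
    """Overwrite the low part of A with A mod B.  A, B are coefficient
    lists (A[i] is the coefficient of X^i) with B[-1] != 0.  Quotient
    coefficients are used once and discarded, never stored; A's high
    coefficients serve as workspace (the paper additionally restores
    the inputs, which this sketch does not do)."""
    n, m = len(A) - 1, len(B) - 1
    inv_lead = 1.0 / B[-1]
    for i in range(n - m, -1, -1):        # quotient coefficients, high to low
        q = A[i + m] * inv_lead           # one coefficient of Q
        for j in range(m + 1):
            A[i + j] -= q * B[j]          # eliminate the degree-(i+m) term
    return A[:m]                          # remainder R = A mod B, deg R < m

A = [1.0, 2.0, 0.0, 1.0, 3.0]             # 3X^4 + X^3 + 2X + 1
B = [1.0, 0.0, 1.0]                       # X^2 + 1
print(mod_inplace(A, B))                  # [4.0, 1.0], i.e. R = X + 4
```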

We propose a new alignment-free algorithm that constructs a compact vector representation in $\mathbb{R}^{24}$ of a DNA sequence of arbitrary length. Each component of this vector is obtained from a representative sequence whose elements are the values realized by a function $\Gamma$. $\Gamma$ acts on neighborhoods of arbitrary radius located at strategic positions within the DNA sequence, and it carries complete information about the local distribution of nucleotide frequencies as a consequence of the uniqueness of prime factorization of integers. The algorithm exhibits linear time complexity and consumes remarkably little memory. The two natural parameters characterizing the radius and location of the neighbourhoods are fixed by comparing the resulting phylogenetic tree with the benchmark for full genome sequences of fish mtDNA datasets. Using these fitted parameters, the method is applied to analyze a number of genome sequences from benchmark and other standard datasets. The algorithm proves to be computationally efficient compared to Co-phylog and CD-MAWS when applied over a certain range of a simulated dataset.
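The abstract does not spell out $\Gamma$ or the choice of the 24 components, so the sketch below only illustrates the prime-factorization mechanism it alludes to: mapping A, C, G, T to distinct primes makes the product over a neighborhood a lossless encoding of its nucleotide counts. The sampling of 24 evenly spaced centers and the log-scaling are guesses for illustration, not the paper's construction.

```python
from math import log

PRIME = {'A': 2, 'C': 3, 'G': 5, 'T': 7}       # distinct primes per nucleotide

def gamma(seq, center, r):
    """Encode the nucleotide counts of the radius-r neighborhood of
    seq[center] as a single number: the product of primes determines the
    counts uniquely by prime factorization (log-scaled for range)."""
    window = seq[max(0, center - r): center + r + 1]
    prod = 1
    for base in window:
        prod *= PRIME[base]
    return log(prod)

def feature_vector(seq, r, k=24):
    """A guessed R^k representation: gamma sampled at k evenly spaced centers."""
    centers = [round(i * (len(seq) - 1) / (k - 1)) for i in range(k)]
    return [gamma(seq, c, r) for c in centers]

print(feature_vector("ACGTACGTGGTTAACCACGTACGTAC", r=3))
```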

Multiscale problems are challenging for neural network-based discretizations of differential equations, such as physics-informed neural networks (PINNs). This can be (partly) attributed to the so-called spectral bias of neural networks. To improve the performance of PINNs for time-dependent problems, a combination of multifidelity stacking PINNs and domain decomposition-based finite basis PINNs is employed. In particular, to learn the high-fidelity part of the multifidelity model, a domain decomposition in time is employed. The performance is investigated for a pendulum problem and a two-frequency problem, as well as for the Allen-Cahn equation. It can be observed that the domain decomposition approach clearly improves upon the PINN and stacking PINN approaches.
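A minimal sketch of the time domain-decomposition ingredient, on a toy ODE standing in for the problems above: one small network per time window, with each window's initial value handed over from its predecessor. The multifidelity stacking and the finite-basis partition of unity are omitted; all architectural choices below are illustrative.

```python
import torch

torch.manual_seed(0)
w, T, n_win = 8.0, 2.0, 4                        # toy setup: u' = cos(w t), u(0) = 0
edges = torch.linspace(0.0, T, n_win + 1)

def make_net():                                   # small per-window network
    return torch.nn.Sequential(
        torch.nn.Linear(1, 32), torch.nn.Tanh(),
        torch.nn.Linear(32, 32), torch.nn.Tanh(),
        torch.nn.Linear(32, 1))

u_left = torch.tensor(0.0)                        # initial condition
nets = []
for k in range(n_win):                            # train window by window
    a, b = float(edges[k]), float(edges[k + 1])
    net = make_net()
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    t = torch.linspace(a, b, 64).reshape(-1, 1).requires_grad_(True)
    t0 = torch.tensor([[a]])
    for _ in range(2000):
        opt.zero_grad()
        u = net(t)
        du = torch.autograd.grad(u.sum(), t, create_graph=True)[0]
        pde = ((du - torch.cos(w * t)) ** 2).mean()      # ODE residual
        ic = ((net(t0) - u_left) ** 2).squeeze()         # hand-over condition
        (pde + ic).backward()
        opt.step()
    u_left = net(torch.tensor([[b]])).detach().squeeze()  # pass value onward
    nets.append(net)
print("windows trained:", len(nets))
```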

We present a new and straightforward derivation of a family $\mathcal{F}(h,\tau)$ of exponential splittings of Strang type for the general linear evolutionary equation with two linear components. One component is assumed to be a time-independent, unbounded operator, while the other is a bounded one with explicit time dependence. The family $\mathcal{F}(h,\tau)$ is characterized by the length of the time step $h$ and a continuous parameter $\tau$, which defines each member of the family. It is shown that the derivation and error analysis follow from two elementary arguments: the variation-of-constants formula and specific quadratures for integrals over simplices. For these Strang-type splittings, we prove convergence which, depending on certain commutators of the relevant operators, may be of first or second order. As a result, error bounds appear in terms of commutator bounds. Based on the explicit form of the error terms, we establish the influence of $\tau$ on the accuracy of $\mathcal{F}(h,\tau)$, allowing us to investigate its optimal value. This simple yet powerful approach establishes the connection between exponential integrators and splitting methods. Furthermore, the present approach can easily be applied to the derivation of higher-order splitting methods under similar considerations. Needless to say, the obtained results also apply to Strang-type splittings in the case of time-independent operators. To complement the rigorous results, we present numerical experiments with various values of $\tau$ based on the linear Schr\"odinger equation.
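A small numerical check of the advertised setting, with matrices standing in for the operators: $A$ time-independent and $B(t) = \cos(t)B_0$ bounded with explicit time dependence, stepped by a Strang-type composition with midpoint quadrature for the time-dependent factor. The general $\tau$-member of the family is not reproduced; the observed global error decays like $h^2$, consistent with second order.

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

rng = np.random.default_rng(2)
A, B0 = rng.standard_normal((4, 4)), rng.standard_normal((4, 4))
u0, T = rng.standard_normal(4), 1.0

# tight-tolerance reference solution of u' = (A + cos(t) B0) u
ref = solve_ivp(lambda t, u: (A + np.cos(t) * B0) @ u, (0, T), u0,
                rtol=1e-12, atol=1e-12).y[:, -1]

for n in [10, 20, 40]:
    h, u = T / n, u0.copy()
    for k in range(n):
        t_mid = (k + 0.5) * h                 # midpoint quadrature for B(t)
        u = expm(h / 2 * A) @ expm(h * np.cos(t_mid) * B0) @ expm(h / 2 * A) @ u
    print(f"n={n:<3} error={np.linalg.norm(u - ref):.3e}")  # halving h -> ~1/4 error
```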

Given a graph $G$, the optimization version of the graph burning problem seeks a sequence of vertices, $(u_1,u_2,...,u_k) \in V(G)^k$, with minimum $k$ and such that every $v \in V(G)$ has distance at most $k-i$ to some vertex $u_i$. The length $k$ of the optimal solution is known as the burning number and is denoted by $b(G)$, an invariant that helps quantify a graph's vulnerability to contagion. This paper explores the advantages and limitations of an $\mathcal{O}(mn + kn^2)$ deterministic greedy heuristic for this problem, where $n$ is the graph's order, $m$ is the graph's size, and $k$ is a guess on $b(G)$. The heuristic is based on the relationship between the graph burning problem and the clustered maximum coverage problem; despite having limitations on paths and cycles, it found most of the optimal and best-known solutions of benchmark and synthetic graphs with up to 102400 vertices.
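A compressed sketch of the greedy idea: guess $k$, pick each $u_i$ to cover as many still-uncovered vertices as possible within radius $k-i$, and shrink $k$ until the greedy pass fails. This mirrors the coverage-based heuristic only in spirit; the paper's $\mathcal{O}(mn + kn^2)$ bookkeeping is not reproduced.

```python
import networkx as nx

def greedy_burning(G):                         # G assumed connected
    dist = dict(nx.all_pairs_shortest_path_length(G))  # BFS from every vertex
    k = nx.diameter(G) + 1                     # trivially feasible first guess
    best = None
    while True:
        uncovered, seq = set(G), []
        for i in range(1, k + 1):              # u_i covers radius k - i
            r = k - i
            u = max(G, key=lambda v: sum(1 for w in uncovered if dist[v][w] <= r))
            seq.append(u)
            uncovered -= {w for w in uncovered if dist[u][w] <= r}
            if not uncovered:
                break
        if uncovered:                          # greedy failed: previous k was best
            return best, k + 1
        best, k = seq, k - 1

G = nx.path_graph(10)
seq, b = greedy_burning(G)
print(b, seq)                                  # b is an upper bound on b(G)
```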

We present algorithms for computing the reduced Gr\"{o}bner basis of the vanishing ideal of a finite set of points in the frame of ideal interpolation. Ideal interpolation is defined by a linear projector whose kernel is a polynomial ideal. In this paper, we translate the interpolation condition functionals into formal power series via Taylor expansion; the reduced Gr\"{o}bner basis is then read off from these formal power series by Gaussian elimination. Our algorithm has polynomial time complexity. It compares favorably with the MMM algorithm in single-point ideal interpolation and in some several-point ideal interpolation instances.
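As a cross-check on the object being computed, the reduced Gr\"{o}bner basis of a vanishing ideal can also be obtained by plain linear algebra on an evaluation matrix followed by a Gr\"{o}bner call, as sketched below. This is the classical route, shown for orientation only; the paper's power-series/Taylor elimination and its complexity advantages are not reproduced.

```python
import itertools
import sympy as sp

def vanishing_ideal_gb(points, xs, order='grevlex'):
    """Reduced Groebner basis of I(points): the nullspace of the evaluation
    matrix over all monomials of degree <= #points (which bounds the degrees
    of needed generators) gives vanishing polynomials; sp.groebner reduces."""
    n, r = len(xs), len(points)
    monos = [sp.Mul(*[v**e for v, e in zip(xs, exps)])
             for exps in itertools.product(range(r + 1), repeat=n)
             if sum(exps) <= r]
    M = sp.Matrix([[m.subs(list(zip(xs, p))) for m in monos] for p in points])
    gens = [sum(c * m for c, m in zip(vec, monos)) for vec in M.nullspace()]
    return sp.groebner(gens, *xs, order=order)

x, y = sp.symbols('x y')
pts = [(0, 0), (1, 0), (0, 1), (2, 3)]
print(vanishing_ideal_gb(pts, [x, y]))
```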

We describe a new dependent-rounding algorithmic framework for bipartite graphs. Given a fractional assignment $y$ of values to edges of a graph $G = (U \cup V, E)$, the algorithms return an integral solution $Y$ such that each right-node $v \in V$ has at most one neighboring edge $f$ with $Y_f = 1$, and the variables $Y_e$ also satisfy broad nonpositive-correlation properties. In particular, for any edges $e_1, e_2$ sharing a left-node $u \in U$, the variables $Y_{e_1}, Y_{e_2}$ have strong negative-correlation properties, i.e., the expectation of $Y_{e_1} Y_{e_2}$ is significantly below $y_{e_1} y_{e_2}$. The algorithm is based on generating negatively correlated Exponential random variables and using them in a contention-resolution scheme inspired by an algorithm of Im & Shadloo (2020). Our algorithm gives stronger and much more flexible negative-correlation properties. Dependent-rounding schemes with negative-correlation properties have been used in approximation algorithms for job scheduling on unrelated machines to minimize weighted completion times (Bansal, Srinivasan, & Svensson (2021); Im & Shadloo (2020); Im & Li (2023)). Using our new dependent-rounding algorithm, among other improvements, we obtain a $1.398$-approximation for this problem, significantly improving over the prior $1.45$-approximation ratio of Im & Li (2023).
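A toy contention-resolution sketch in the spirit described, assuming the fractional values incident to each node sum to at most 1: one uniform per left-node activates at most one incident edge (so $Y_{e_1} Y_{e_2} = 0$ for edges sharing $u$, an extreme form of negative correlation), and each right-node resolves contention by an exponential race. The paper's scheme is more refined, with correlated Exponentials and controlled marginals; this only shows the moving parts.

```python
import random
from collections import defaultdict

def round_assignment(y):
    """y: dict mapping edge (u, v) -> fractional value in (0, 1]."""
    by_left = defaultdict(list)
    for (u, v), val in y.items():
        by_left[u].append(((u, v), val))

    offers = defaultdict(list)                 # right-node -> edges offered to it
    for u, edges in by_left.items():
        U, acc = random.random(), 0.0
        for e, val in edges:                   # systematic sampling at u:
            if acc <= U < acc + val:           # at most one incident edge activates
                offers[e[1]].append(e)
            acc += val

    Y = {}
    for v, cand in offers.items():             # exponential race resolves each v
        winner = min(cand, key=lambda e: random.expovariate(y[e]))
        Y[winner] = 1                          # at most one edge chosen per v
    return Y

random.seed(7)
y = {('u1', 'v1'): 0.5, ('u1', 'v2'): 0.5,
     ('u2', 'v1'): 0.4, ('u2', 'v2'): 0.6}
print(round_assignment(y))
```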

Using elementary means, we derive the three most popular splittings of $e^{t(A+B)}$ and their error bounds in the case when $A$ and $B$ are (possibly unbounded) operators in a Hilbert space, generating strongly continuous semigroups $e^{tA}$, $e^{tB}$ and $e^{t(A+B)}$. The error of these splittings is bounded in terms of the norms of the commutators $[A, B]$, $[A, [A, B]]$ and $[B, [A, B]]$.
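A finite-dimensional sanity check of the commutator-driven bounds: for random matrices, the Lie splitting error tracks $\tfrac{h^2}{2}\|[A,B]\|$ and the Strang error decays one order faster, in line with the nested-commutator bounds. The operator-theoretic content of the result (unbounded generators on a Hilbert space) is of course invisible in this toy.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
A, B = rng.standard_normal((5, 5)), rng.standard_normal((5, 5))
comm = np.linalg.norm(A @ B - B @ A)           # ||[A, B]||

for h in [0.2, 0.1, 0.05]:
    exact = expm(h * (A + B))
    lie = expm(h * A) @ expm(h * B)            # first-order splitting
    strang = expm(h / 2 * A) @ expm(h * B) @ expm(h / 2 * A)  # second order
    print(f"h={h:<5} Lie err={np.linalg.norm(exact - lie):.2e}"
          f"  (h^2/2)|[A,B]|={h**2 / 2 * comm:.2e}"
          f"  Strang err={np.linalg.norm(exact - strang):.2e}")
```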

Nurmuhammad et al. developed the Sinc-Nystr\"{o}m methods for initial value problems in which the solutions exhibit exponential decay end behavior. In these methods, the Single-Exponential (SE) transformation or the Double-Exponential (DE) transformation is combined with the Sinc approximation. Hara and Okayama improved on these transformations to attain a better convergence rate, which was later supported by theoretical error analyses. However, these methods have a computational drawback owing to the inclusion of a special function in the basis functions. To address this issue, Okayama and Hara proposed Sinc-collocation methods, which do not include any special function in the basis functions. This study conducts error analyses of these methods.
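Not the Sinc-collocation schemes themselves, but a minimal demonstration of the SE/DE transformation mechanism they build on: the trapezoidal rule applied after a Single-Exponential ($t = \tanh(x/2)$) or Double-Exponential ($t = \tanh(\tfrac{\pi}{2}\sinh x)$) change of variables, on a smooth integrand over $(-1,1)$. The step sizes below are ad hoc; DE reaches high accuracy with markedly fewer effective nodes.

```python
import numpy as np

f = lambda t: 1.0 / (1.0 + t**2)         # smooth test integrand on (-1, 1)
exact = 2 * np.arctan(1.0)               # = pi/2

def se_quad(n, h):                       # Single-Exponential: t = tanh(x/2)
    x = h * np.arange(-n, n + 1)
    t = np.tanh(x / 2)
    w = 0.5 / np.cosh(x / 2) ** 2        # dt/dx
    return h * np.sum(f(t) * w)

def de_quad(n, h):                       # Double-Exponential: t = tanh(pi/2 sinh x)
    x = h * np.arange(-n, n + 1)
    with np.errstate(over='ignore'):     # far tails overflow harmlessly to weight 0
        t = np.tanh(np.pi / 2 * np.sinh(x))
        w = np.pi / 2 * np.cosh(x) / np.cosh(np.pi / 2 * np.sinh(x)) ** 2
    return h * np.sum(f(t) * w)

for n in [8, 16, 32]:
    print(n, abs(se_quad(n, 0.5) - exact), abs(de_quad(n, 0.25) - exact))
```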

The starting point for much of multivariate analysis (MVA) is an $n\times p$ data matrix whose $n$ rows represent observations and whose $p$ columns represent variables. Some multivariate data sets, however, may be best conceptualized not as $n$ discrete $p$-variate observations, but as $p$ curves or functions defined on a common time interval. We introduce a framework for extending techniques of multivariate analysis to such settings. The proposed framework rests on the assumption that the curves can be represented as linear combinations of basis functions such as B-splines. This is formally identical to the Ramsay-Silverman representation of functional data; but whereas functional data analysis extends MVA to the case of observations that are curves rather than vectors -- heuristically, $n\times p$ data with $p$ infinite -- we are instead concerned with what happens when $n$ is infinite. We describe how to translate the classical MVA methods of covariance and correlation estimation, principal component analysis, Fisher's linear discriminant analysis, and $k$-means clustering to the continuous-time setting. We illustrate the methods with a novel perspective on a well-known Canadian weather data set, and with applications to neurobiological and environmetric data. The methods are implemented in the publicly available R package \texttt{ctmva}.
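A sketch of the continuous-time covariance and PCA steps under the stated basis-representation assumption: $p$ "variables" are curves expanded in a common B-spline basis, and integrals over time replace sums over $n$ observations. This follows the framework as described, not the internals of the \texttt{ctmva} package; all sizes and data below are synthetic.

```python
import numpy as np
from scipy.interpolate import BSpline
from scipy.integrate import trapezoid

T, K, degree = 10.0, 12, 3                     # time span and basis size (synthetic)
knots = np.concatenate([[0.0] * degree,
                        np.linspace(0, T, K - degree + 1),
                        [T] * degree])          # clamped B-spline knot vector

rng = np.random.default_rng(3)
C = rng.standard_normal((3, K))                # p = 3 curves, rows = coefficients

# Evaluate all K basis functions on a fine grid and form Gram integrals
t = np.linspace(0, T, 2001)
B = np.column_stack([BSpline(knots, np.eye(K)[k], degree)(t) for k in range(K)])
J = trapezoid(B[:, :, None] * B[:, None, :], t, axis=0) / T  # (1/T) int B_k B_l
m = trapezoid(B, t, axis=0) / T                               # (1/T) int B_k

means = C @ m                                  # time-averages of the curves
Sigma = C @ J @ C.T - np.outer(means, means)   # continuous-time covariance (p x p)

evals, evecs = np.linalg.eigh(Sigma)           # continuous-time PCA
print("principal variances:", evals[::-1])
```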
