
This paper addresses the computational problem of deciding invertibility (or one-to-one-ness) of a Boolean map $F$ in $n$ Boolean variables. A special case of this problem is deciding invertibility of a map $F:\mathbb{F}_{2}^n\rightarrow\mathbb{F}_{2}^n$ over the binary field $\mathbb{F}_2$, and the problem can be stated more generally over a finite field $\mathbb{F}$ instead of $\mathbb{F}_2$. An algebraic condition for invertibility of $F$ in this special case over a finite field is well known to be equivalent to invertibility of the Koopman operator of $F$, as shown in \cite{RamSule}. In this paper a condition for invertibility is derived in the special case of Boolean maps $F:B_0^n\rightarrow B_0^n$, where $B_0$ is the two-element Boolean algebra, in terms of \emph{implicants} of Boolean equations. This condition is then extended to the case of general maps in $n$ variables. Hence this condition answers the special case of invertibility of the map $F$ defined over the binary field $\mathbb{F}_2$ alternatively, in terms of implicants instead of the Koopman operator. The problem of deciding invertibility of a map $F$ (or that of finding its $GOE$) over finite fields appears to be distinct from the satisfiability problem (SAT) and from the problem of deciding consistency of polynomial equations over finite fields. Hence the well-known algorithms for deciding SAT, or for deciding solvability via Gr\"obner-basis tests for membership in an ideal generated by polynomials, are not known to answer the question of invertibility of a map. Similarly, algorithms for satisfiability or polynomial solvability do not appear to be useful for computing $GOE(F)$, even for maps over the binary field $\mathbb{F}_2$.
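For small $n$ the decision problem is easy to state operationally. The following minimal sketch decides invertibility by exhaustive enumeration over $\mathbb{F}_2^n$, which costs $O(2^n)$; it is emphatically not the implicant-based condition derived in the paper, only an illustration of the question being decided.

```python
from itertools import product

def is_invertible(F, n):
    """Decide invertibility of F: F_2^n -> F_2^n by checking that
    the image has full size 2^n (brute force, O(2^n))."""
    images = {F(x) for x in product((0, 1), repeat=n)}
    return len(images) == 2 ** n

# Example: F(x1, x2) = (x1 XOR x2, x2) is invertible (it is its own inverse).
F = lambda x: (x[0] ^ x[1], x[1])
print(is_invertible(F, 2))  # True
```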


This paper introduces a prognostic method called FLASH that addresses the problem of joint modelling of longitudinal data and censored durations when a large number of both longitudinal and time-independent features are available. In the literature, standard joint models are either of the shared random effect or joint latent class type. Combining ideas from both worlds and using appropriate regularisation techniques, we define a new model with the ability to automatically identify significant prognostic longitudinal features in a high-dimensional context, which is of increasing importance in many areas such as personalised medicine or churn prediction. We develop an estimation methodology based on the EM algorithm and provide an efficient implementation. The statistical performance of the method is demonstrated both in extensive Monte Carlo simulation studies and on publicly available real-world datasets. Our method significantly outperforms the state-of-the-art joint models in predicting the latent class membership probability in terms of the C-index in a so-called ``real-time'' prediction setting, with a computational speed that is orders of magnitude faster than competing methods. In addition, our model automatically identifies significant features that are relevant from a practical perspective, making it interpretable.
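To illustrate the alternating structure of the estimation methodology, here is a toy EM iteration for a two-component one-dimensional Gaussian mixture. FLASH's actual E and M steps target a far richer joint likelihood over longitudinal features and censored durations, so this is only a structural sketch with synthetic data.

```python
import numpy as np

def em_gmm(x, n_iter=50):
    """Toy EM for a two-component 1-D Gaussian mixture: the E-step computes
    posterior class-membership probabilities, the M-step re-estimates the
    parameters by weighted maximum likelihood."""
    rng = np.random.default_rng(0)
    mu = rng.choice(x, 2, replace=False)
    var = np.array([x.var(), x.var()])
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibilities r_ik proportional to pi_k * N(x_i; mu_k, var_k).
        dens = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        resp = pi * dens
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: weighted updates of mixing weights, means and variances.
        nk = resp.sum(axis=0)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
        pi = nk / len(x)
    return pi, mu, var

# Usage on synthetic data with two latent classes.
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-2, 1, 200), rng.normal(3, 1, 300)])
print(em_gmm(x))
```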

The Burling sequence is a sequence of triangle-free graphs of increasing chromatic number. Each of them is isomorphic to the intersection graph of a set of axis-parallel boxes in $\mathbb{R}^3$. These graphs have also been proved to have other geometrical representations: intersection graphs of line segments in the plane, and intersection graphs of frames, where a frame is the boundary of an axis-aligned rectangle in the plane. We call a Burling graph any graph that is an induced subgraph of some graph in the Burling sequence. We give five new equivalent ways to define Burling graphs. Three of them are geometrical, one is of a more graph-theoretical flavour, and one is more axiomatic.
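As an illustration of the first representation mentioned above, the following sketch builds the intersection graph of a set of axis-parallel boxes in $\mathbb{R}^3$, each given as three coordinate intervals. It constructs a generic intersection graph, not the Burling sequence itself.

```python
from itertools import combinations

def boxes_intersect(a, b):
    """Axis-parallel boxes given as ((xlo, xhi), (ylo, yhi), (zlo, zhi));
    they intersect iff their projections overlap on every axis."""
    return all(lo1 <= hi2 and lo2 <= hi1
               for (lo1, hi1), (lo2, hi2) in zip(a, b))

def intersection_graph(boxes):
    """Return the edge set of the intersection graph of the boxes."""
    return {(i, j) for i, j in combinations(range(len(boxes)), 2)
            if boxes_intersect(boxes[i], boxes[j])}

boxes = [((0, 2), (0, 2), (0, 2)), ((1, 3), (1, 3), (1, 3)), ((5, 6), (0, 1), (0, 1))]
print(intersection_graph(boxes))  # {(0, 1)}
```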

We study $L_2$-approximation problems $\text{APP}_d$ in the worst case setting in the weighted Korobov spaces $H_{d,\bm{\alpha},\bm{\gamma}}$ with parameter sequences $\bm{\gamma}=\{\gamma_j\}$ and $\bm{\alpha}=\{\alpha_j\}$ of positive real numbers satisfying $1\ge \gamma_1\ge \gamma_2\ge \cdots\ge 0$ and $\frac12<\alpha_1\le \alpha_2\le \cdots$. We consider the minimal worst case error $e(n,\text{APP}_d)$ of algorithms that use $n$ arbitrary continuous linear functionals for functions of $d$ variables. We study polynomial convergence of the minimal worst case error, which means that $e(n,\text{APP}_d)$ converges to zero polynomially fast with increasing $n$. We recall the notions of polynomial, strongly polynomial, weak and $(t_1,t_2)$-weak tractability. In particular, polynomial tractability means that the number of arbitrary continuous linear functionals needed is polynomial in $d$ and $\varepsilon^{-1}$, where $\varepsilon$ is the accuracy of the approximation. We obtain that the matching necessary and sufficient condition on the sequences $\bm{\gamma}$ and $\bm{\alpha}$ for strongly polynomial tractability or polynomial tractability is $$\delta:=\liminf_{j\to\infty}\frac{\ln \gamma_j^{-1}}{\ln j}>0,$$ and the exponent of strongly polynomial tractability is $$p^{\text{str}}=2\max\Big\{\frac{1}{\delta},\,\frac{1}{2\alpha_1}\Big\}.$$
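As a worked instance of the stated criterion, take the hypothetical sequences $\gamma_j=j^{-2}$ and $\alpha_j=2$ for all $j$: then $\delta=\liminf_{j\to\infty}\ln(j^2)/\ln j=2>0$, so the problem is strongly polynomially tractable with $p^{\text{str}}=2\max\{1/2,\,1/4\}=1$. The snippet below checks this arithmetic numerically.

```python
import numpy as np

# Hypothetical weight sequences: gamma_j = j**-2 and alpha_j = 2 for all j.
j = np.arange(2, 10**4)
delta = np.min(np.log(j**2.0) / np.log(j))    # ln(gamma_j^{-1}) / ln j, constant = 2
alpha1 = 2.0
p_str = 2 * max(1 / delta, 1 / (2 * alpha1))  # = 2 * max(1/2, 1/4) = 1
print(delta, p_str)                            # 2.0 1.0
```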

This paper gives a self-contained introduction to the Hilbert projective metric $\mathcal{H}$ and its fundamental properties, with a particular focus on the space of probability measures. We start by defining the Hilbert pseudo-metric on convex cones, focusing mainly on dual formulations of $\mathcal{H}$. We show that linear operators on convex cones contract in the distance given by the hyperbolic tangent of $\mathcal{H}$, which in particular implies Birkhoff's classical contraction result for $\mathcal{H}$. Turning to spaces of probability measures, where $\mathcal{H}$ is a metric, we analyse the dual formulation of $\mathcal{H}$ in the general setting, and explore the geometry of the probability simplex under $\mathcal{H}$ in the special case of discrete probability measures. Throughout, we compare $\mathcal{H}$ with other distances between probability measures. In particular, we show how convergence in $\mathcal{H}$ implies convergence in total variation, $p$-Wasserstein distance, and any $f$-divergence. Furthermore, we derive a novel sharp bound for the total variation between two probability measures in terms of their Hilbert distance.
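For discrete probability measures the metric admits a simple closed form. The sketch below evaluates the standard formula $\mathcal{H}(p,q)=\max_i\log(p_i/q_i)-\min_i\log(p_i/q_i)$ for strictly positive probability vectors, i.e. the Hilbert metric of the positive cone restricted to the simplex; it is an illustration, not the paper's general dual formulation.

```python
import numpy as np

def hilbert_metric(p, q):
    """Hilbert projective metric between strictly positive probability
    vectors: H(p, q) = max_i log(p_i/q_i) - min_i log(p_i/q_i)."""
    r = np.log(p) - np.log(q)
    return r.max() - r.min()

p = np.array([0.5, 0.3, 0.2])
q = np.array([0.4, 0.4, 0.2])
print(hilbert_metric(p, q))  # finite since p and q have full support
```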

Challenges with data in the big-data era include (i) the dimension $p$ is often larger than the sample size $n$, and (ii) outliers or contaminated points are frequently hidden and more difficult to detect. Challenge (i) renders most conventional methods inapplicable and has thus attracted tremendous attention from the statistics, computer science, and bio-medical communities; numerous penalized regression methods have been introduced as modern methods for analyzing high-dimensional data. Challenge (ii), by contrast, has received disproportionately little attention. Penalized regression methods do their job for challenge (i) very well and are often expected to handle challenge (ii) simultaneously. Most of them, however, can break down in the presence of a single outlier (or a single adversarially contaminated point), as revealed in this article. This article systematically examines leading penalized regression methods in the literature in terms of their robustness, provides a quantitative assessment, and reveals that most of them can break down at a single outlier. Consequently, a novel robust penalized regression method based on the least sum of squares of depth-trimmed residuals is proposed and studied carefully. Experiments with simulated and real data reveal that the newly proposed method can outperform some leading competitors in estimation and prediction accuracy in the cases considered.
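The breakdown phenomenon is easy to reproduce. In the sketch below, Lasso stands in for the penalized methods examined in the article, and the data are synthetic: a single adversarially contaminated response ruins an otherwise accurate high-dimensional fit.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
n, p = 50, 100                        # high-dimensional regime: p > n
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:3] = [2.0, -1.5, 1.0]           # sparse true coefficients
y = X @ beta + 0.1 * rng.standard_normal(n)

clean = Lasso(alpha=0.1).fit(X, y).coef_[:3]
y[0] = 1e6                            # one adversarially contaminated response
dirty = Lasso(alpha=0.1).fit(X, y).coef_[:3]
print(clean)                          # close to (2, -1.5, 1)
print(dirty)                          # estimates destroyed by the single outlier
```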

Recently, a stability theory has been developed to study the linear stability of modified Patankar--Runge--Kutta (MPRK) schemes. This theory provides sufficient conditions both for a fixed point of an MPRK scheme to be stable and for the convergence of an MPRK scheme towards the steady state of the corresponding initial value problem, under the main assumption that the initial value is sufficiently close to the steady state. Initially, numerical experiments in several publications indicated that these linear stability properties are not only local but even global, as is the case for general linear methods. Recently, however, it was discovered that the linear stability of the MPDeC(8) scheme is indeed only local in nature. Our conjecture is that this is a result of the negative Runge--Kutta (RK) parameters of MPDeC(8), and that linear stability is indeed global if the RK parameters are nonnegative. To support this conjecture, we examine the family of MPRK22($\alpha$) methods with negative RK parameters and show that even among these methods there are some for which the stability properties are only local. However, this local linear stability is not observed for MPRK22($\alpha$) schemes with nonnegative RK parameters.
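For concreteness, the sketch below implements the first-order modified Patankar--Euler scheme, the simplest relative of the MPRK22($\alpha$) family, on a linear production-destruction test problem; both the scheme choice and the test problem are illustrative assumptions, not the MPDeC(8) or MPRK22($\alpha$) schemes analysed in the paper. The Patankar weights $y_i^{n+1}/y_i^n$ on the destruction terms make each step a linear solve and keep the iterates positive.

```python
import numpy as np

def mpe_step(y, dt, P):
    """One modified Patankar-Euler step for a production-destruction
    system y_i' = sum_j (p_ij - p_ji); P(y) returns the production matrix,
    with p_ij the rate at which constituent i is produced from j."""
    p = P(y)
    n = len(y)
    M = np.eye(n)
    for i in range(n):
        for j in range(n):
            if i != j:
                M[i, i] += dt * p[j, i] / y[i]  # destruction, weighted y_i^{n+1}/y_i^n
                M[i, j] -= dt * p[i, j] / y[j]  # production, weighted y_j^{n+1}/y_j^n
    return np.linalg.solve(M, y)

# Linear test problem y1' = y2 - 5*y1, y2' = 5*y1 - y2; steady state (1/6, 5/6).
P = lambda y: np.array([[0.0, y[1]], [5.0 * y[0], 0.0]])
y = np.array([0.9, 0.1])
for _ in range(100):
    y = mpe_step(y, 0.5, P)
print(y)  # approaches (1/6, 5/6); positivity and conservation hold for any dt > 0
```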

Given an image $u_0$, the aim of minimising the Mumford--Shah functional is to find a decomposition of the image domain into sub-domains and a piecewise smooth approximation $u$ of $u_0$ such that $u$ varies smoothly within each sub-domain. Since the Mumford--Shah functional is highly non-smooth, regularisations such as the Ambrosio--Tortorelli approximation, one of the most computationally efficient approximations of the Mumford--Shah functional for image segmentation, are often considered instead. While very impressive numerical results have been achieved in a large range of applications by minimising the functional, no analytical results have so far been available for minimisers of the functional in the piecewise smooth setting, and providing such results is the goal of this work. Our main result is the $\Gamma$-convergence of the Ambrosio--Tortorelli approximation of the Mumford--Shah functional for piecewise smooth approximations. This requires the introduction of an appropriate function space. As a consequence of our $\Gamma$-convergence result, we can infer the convergence of minimisers of the respective functionals.
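For orientation, one standard form of the Ambrosio--Tortorelli approximation (the paper's exact functional and function space may differ) couples $u$ with a phase field $v:\Omega\to[0,1]$ that vanishes near edges:
$$AT_\varepsilon(u,v)=\int_\Omega \big(v^2+\eta_\varepsilon\big)\,|\nabla u|^2\,dx+\alpha\int_\Omega (u-u_0)^2\,dx+\beta\int_\Omega\Big(\varepsilon\,|\nabla v|^2+\frac{(1-v)^2}{4\varepsilon}\Big)\,dx,$$
where $\eta_\varepsilon>0$ is a small ellipticity parameter and $\alpha,\beta>0$ weight fidelity and edge length; as $\varepsilon\to 0$ the phase-field term concentrates on the edge set, which is the mechanism behind the $\Gamma$-convergence statement above.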

We consider the minimal thermodynamic cost of an individual computation, where a single input $x$ is mapped to a single output $y$. In prior work, Zurek proposed that this cost was given by $K(x\vert y)$, the conditional Kolmogorov complexity of $x$ given $y$ (up to an additive constant which does not depend on $x$ or $y$). However, this result was derived from an informal argument, applied only to deterministic computations, and had an arbitrary dependence on the choice of protocol (via the additive constant). Here we use stochastic thermodynamics to derive a generalized version of Zurek's bound from a rigorous Hamiltonian formulation. Our bound applies to all quantum and classical processes, whether noisy or deterministic, and it explicitly captures the dependence on the protocol. We show that $K(x\vert y)$ is a minimal cost of mapping $x$ to $y$ that must be paid using some combination of heat, noise, and protocol complexity, implying a tradeoff between these three resources. Our result is a kind of "algorithmic fluctuation theorem" with implications for the relationship between the Second Law and the Physical Church-Turing thesis.
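Since $K(x\vert y)$ is uncomputable, any numerical illustration must replace it by a computable upper bound. The sketch below uses a real compressor as a crude stand-in, via the common trick of compressing $x$ after priming with $y$; this is only loosely connected to the rigorous bound and is purely illustrative.

```python
import zlib

def compressed_cost_bits(x: bytes, y: bytes) -> int:
    """Crude computable upper bound on K(x|y) in bits, via the standard
    compression trick len(C(y + x)) - len(C(y)). K itself is uncomputable;
    zlib is only a stand-in for a universal code."""
    c = lambda s: len(zlib.compress(s, 9))
    return 8 * max(c(y + x) - c(y), 0)

x = b"abab" * 1000
print(compressed_cost_bits(x, x))    # ~0 bits: mapping x to itself is cheap
print(compressed_cost_bits(x, b""))  # larger: erasing x must pay for all of it
```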

The history of the seemingly simple problem of straight line fitting in the presence of both $x$ and $y$ errors has been fraught with misadventure, with statistically ad hoc and poorly tested methods abounding in the literature. The problem stems from the emergence of latent variables describing the "true" values of the independent variables, the priors on which have a significant impact on the regression result. By analytic calculation of maximum a posteriori values and biases, and comprehensive numerical mock tests, we assess the quality of possible priors. In the presence of intrinsic scatter, the only prior that we find to give reliably unbiased results in general is a mixture of one or more Gaussians with means and variances determined as part of the inference. We find that a single Gaussian is typically sufficient and dub this model Marginalised Normal Regression (MNR). We illustrate the necessity for MNR by comparing it to alternative methods on an important linear relation in cosmology, and extend it to nonlinear regression and an arbitrary covariance matrix linking $x$ and $y$. We publicly release a Python/Jax implementation of MNR and its Gaussian mixture model extension that is coupled to Hamiltonian Monte Carlo for efficient sampling, which we call ROXY (Regression and Optimisation with X and Y errors).
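A minimal sketch of the marginalised likelihood underlying MNR in the single-Gaussian case: with latent $x_t\sim\mathcal{N}(\mu,w^2)$, observations $\hat{x}=x_t+\epsilon_x$ and $\hat{y}=mx_t+c+\epsilon_y$ plus intrinsic scatter $\sigma_{\rm int}$, each observed pair is bivariate normal after marginalising $x_t$, giving the closed-form likelihood below. Function and variable names are hypothetical; the released ROXY package is the proper implementation.

```python
import numpy as np

def mnr_loglike(theta, xhat, yhat, sigx, sigy):
    """Marginalised log-likelihood for y = m*x + c with Gaussian x- and
    y-errors, intrinsic scatter sig_int, and a single Gaussian prior
    N(mu, w^2) on the latent true x values."""
    m, c, sig_int, mu, w = theta
    mean = np.array([mu, m * mu + c])
    cov = np.array([[w**2,       m * w**2],
                    [m * w**2,   m**2 * w**2 + sig_int**2]])
    lp = 0.0
    for xo, yo, sx, sy in zip(xhat, yhat, sigx, sigy):
        S = cov + np.diag([sx**2, sy**2])   # add per-point measurement noise
        r = np.array([xo, yo]) - mean
        lp += -0.5 * (r @ np.linalg.solve(S, r)
                      + np.log(np.linalg.det(2 * np.pi * S)))
    return lp
```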

This paper addresses the estimation of the second-order structure of a manifold cross-time random field (RF) displaying spatially varying Long Range Dependence (LRD), adopting the functional time series framework introduced in Ruiz-Medina (2022). Conditions for the asymptotic unbiasedness of the integrated periodogram operator in the Hilbert-Schmidt operator norm are derived beyond structural assumptions. Weak-consistent estimation of the long-memory operator is achieved under a semiparametric functional spectral framework in the Gaussian context. The case where the projected manifold process can display Short Range Dependence (SRD) and LRD at different manifold scales is also analyzed. The performance of both estimation procedures is illustrated in a simulation study, in the context of multifractionally integrated spherical functional autoregressive-moving average (SPHARMA(p,q)) processes.
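As a pointer to the basic object involved, the following sketch computes the classical scalar periodogram at the Fourier frequencies. The paper's integrated periodogram is operator-valued over a manifold cross time, so this is only its simplest scalar building block.

```python
import numpy as np

def periodogram(x):
    """Classical periodogram I(w_k) = |sum_t x_t e^{-i w_k t}|^2 / (2*pi*n)
    at the Fourier frequencies w_k = 2*pi*k/n (mean removed first)."""
    n = len(x)
    I = np.abs(np.fft.fft(x - x.mean())) ** 2 / (2 * np.pi * n)
    freqs = 2 * np.pi * np.arange(n) / n
    return freqs[1:n // 2], I[1:n // 2]

rng = np.random.default_rng(0)
w, I = periodogram(rng.standard_normal(512))
print(I.mean())  # roughly 1/(2*pi) for white noise
```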
