
We assume that we observe $N$ independent copies of a diffusion process on a time interval $[0,2T]$. For a given time $t$, we estimate the transition density $p_t(x,y)$, namely the conditional density of $X_{t+s}$ given $X_s = x$, under conditions on the diffusion coefficients ensuring that this quantity exists. We use a least-squares projection method on a product of finite-dimensional spaces, prove risk bounds for the estimator, and propose an anisotropic model selection method relying on several reference norms. A simulation study illustrates the theoretical results for Ornstein-Uhlenbeck and square-root (Cox-Ingersoll-Ross) processes.
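As a rough illustration of the projection idea, the following sketch simulates i.i.d. pairs $(X_0, X_t)$ of an Ornstein-Uhlenbeck process and fits the least-squares projection estimator on a tensor product of cosine bases. The window $[-L,L]$, the basis dimension $m$, and all parameter values are assumptions for the example; the paper's anisotropic model selection step is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ornstein-Uhlenbeck: dX = -theta*X dt + sigma dW, stationary N(0, sigma^2/(2*theta)).
theta, sigma, t = 1.0, 1.0, 0.5
N = 20000                          # number of observed pairs (X_0, X_t)
L = 3.0                            # estimation window [-L, L] (assumed)
m = 8                              # basis dimension per coordinate (isotropic here)

# Exact simulation: X_0 stationary, X_t | X_0 Gaussian.
x0 = rng.normal(0.0, sigma / np.sqrt(2 * theta), N)
mu = x0 * np.exp(-theta * t)
sd = sigma * np.sqrt((1 - np.exp(-2 * theta * t)) / (2 * theta))
xt = rng.normal(mu, sd)

def basis(x):
    """Orthonormal cosine basis of L^2([-L, L]), evaluated at points x."""
    x = np.asarray(x, dtype=float)
    cols = [np.full_like(x, 1 / np.sqrt(2 * L))]
    for k in range(1, m):
        cols.append(np.cos(k * np.pi * (x + L) / (2 * L)) / np.sqrt(L))
    return np.stack(cols, axis=-1)          # shape (..., m)

# Minimizing the least-squares contrast
#   (1/N) sum_i ||f(X_0^i, .)||^2_{L^2} - 2 f(X_0^i, X_t^i)
# over f = sum_{j,k} a_{jk} phi_j(x) phi_k(y) gives  G A = Z.
Phi0, Phit = basis(x0), basis(xt)
G = Phi0.T @ Phi0 / N                       # empirical Gram matrix in x
Z = Phi0.T @ Phit / N                       # empirical cross moments
A = np.linalg.solve(G, Z)                   # coefficient matrix a_{jk}

def p_hat(x, y):
    return basis(x) @ A @ basis(y)

# Compare with the true OU transition density at one point; the two values
# should roughly agree for points in the bulk of the stationary law.
x, y = 0.5, 0.0
true = np.exp(-(y - x * np.exp(-theta * t))**2 / (2 * sd**2)) / (sd * np.sqrt(2 * np.pi))
print(f"estimate {p_hat(x, y):.4f} vs true {true:.4f}")
```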

Related content

In 1982, Tuza conjectured that the size $\tau(G)$ of a minimum set of edges that intersects every triangle of a graph $G$ is at most twice the size $\nu(G)$ of a maximum set of edge-disjoint triangles of $G$. This conjecture has been proved for several graph classes. In this paper, we present three results regarding Tuza's conjecture for dense graphs. Using a probabilistic argument, Tuza proved his conjecture for graphs on $n$ vertices with minimum degree at least $\frac{7n}{8}$. We extend this technique to show that Tuza's conjecture holds for split graphs with minimum degree at least $\frac{3n}{5}$, and that $\tau(G) < \frac{28}{15}\nu(G)$ for every tripartite graph with minimum degree greater than $\frac{33n}{56}$. Finally, we show that $\tau(G)\leq \frac{3}{2}\nu(G)$ when $G$ is a complete 4-partite graph; moreover, this bound is tight.
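The two quantities are easy to check by brute force on small dense graphs. The sketch below computes $\tau$ and $\nu$ for $K_5$ by exhaustive search (this is only an illustration of the definitions, not the probabilistic argument used in the paper); it reports $\tau(K_5)=4$ and $\nu(K_5)=2$, consistent with $\tau \le 2\nu$.

```python
from itertools import combinations

def triangles(V, E):
    Es = set(map(frozenset, E))
    return [t for t in combinations(V, 3)
            if all(frozenset(p) in Es for p in combinations(t, 2))]

def tau(V, E):
    """Minimum number of edges meeting every triangle (brute force)."""
    tris = triangles(V, E)
    for k in range(len(E) + 1):
        for S in combinations(E, k):
            Ss = set(map(frozenset, S))
            if all(any(frozenset(p) in Ss for p in combinations(t, 2))
                   for t in tris):
                return k

def nu(V, E):
    """Maximum number of edge-disjoint triangles (brute force)."""
    tris = triangles(V, E)
    for k in range(len(tris), 0, -1):
        for S in combinations(tris, k):
            edges = [frozenset(p) for t in S for p in combinations(t, 2)]
            if len(edges) == len(set(edges)):       # pairwise edge-disjoint
                return k
    return 0

V = range(5)                       # K_5: a small dense example
E = list(combinations(V, 2))
print(tau(V, E), nu(V, E))         # 4 and 2, so tau <= 2 * nu holds here
```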

In this work we study the rate-distortion function (RDF) for lossy compression of asynchronously-sampled continuous-time (CT) wide-sense cyclostationary (WSCS) Gaussian processes with memory. As the case of synchronous sampling, i.e., when the sampling interval is commensurate with the period of the cyclostationary statistics, has already been studied, we focus on discrete-time (DT) processes obtained by asynchronous sampling, i.e., when the sampling interval is incommensurate with the period of the cyclostationary statistics of the CT WSCS source process. It is further assumed that the sampling interval is smaller than the maximal autocorrelation length of the CT source process, which implies that the DT process possesses memory. Thus, the sampled process is a DT wide-sense almost cyclostationary (WSACS) process with memory. This problem is motivated by the fact that man-made communications signals are modelled as CT WSCS processes; hence, applications of such sampling include, e.g., compress-and-forward relaying and recording systems. The main challenge arises because, with asynchronous sampling, the DT sampled process is not information-stable, and hence the characterization of its RDF must be carried out within the information-spectrum framework instead of using conventional information-theoretic arguments. This work expands upon our previous work, which addressed the special case in which the DT process is independent across time. The dependence between the samples requires new tools to characterize the RDF.
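For orientation, the information-stable baseline that this setting generalizes is the classical RDF of a stationary DT Gaussian source with memory, computed by reverse water-filling over the power spectral density. The sketch below evaluates that baseline numerically for an assumed AR(1)-type PSD; it is not the information-spectrum characterization derived in the paper.

```python
import numpy as np

# Reverse water-filling for a *stationary* DT Gaussian source with PSD S(w):
#   D(theta) = (1/2pi) int min(theta, S(w)) dw
#   R(theta) = (1/2pi) int max(0, 0.5*log2(S(w)/theta)) dw
w = np.linspace(-np.pi, np.pi, 4001)
dw = w[1] - w[0]
S = 1.0 / (1.25 - np.cos(w))       # PSD of an AR(1) source with coefficient 0.5

def rd_point(theta):
    D = np.sum(np.minimum(theta, S)) * dw / (2 * np.pi)
    R = np.sum(np.maximum(0.0, 0.5 * np.log2(S / theta))) * dw / (2 * np.pi)
    return D, R

for theta in [0.05, 0.2, 0.8]:     # water levels tracing the R(D) curve
    D, R = rd_point(theta)
    print(f"theta={theta:.2f}: D={D:.3f}, R={R:.3f} bits/sample")
```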

We propose a new notion of uniqueness for the adversarial Bayes classifier in the setting of binary classification. Analyzing this concept produces a simple procedure for computing all adversarial Bayes classifiers for a well-motivated family of one-dimensional data distributions. This characterization is then leveraged to show that, as the perturbation radius increases, the regularity of adversarial Bayes classifiers improves. Various examples demonstrate that the boundary of the adversarial Bayes classifier frequently lies near the boundary of the Bayes classifier.
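A minimal one-dimensional sketch of the last phenomenon: for two Gaussian classes and threshold classifiers $\{x > t\}$ (which suffice in such monotone likelihood-ratio settings), the adversarially optimal boundary can be found numerically and compared with the Bayes boundary at $\varepsilon = 0$. The priors and class means below are assumptions for the example.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize_scalar

# Classifier A_t = {x > t} predicts class 1.  Under sup-norm perturbations of
# radius eps, a class-1 point x is attackable iff x <= t + eps, and a class-0
# point iff x >= t - eps, giving the adversarial risk below.
pi0, pi1 = 0.7, 0.3                     # assumed class priors
f0, f1 = norm(-1.0, 1.0), norm(1.0, 1.0)

def adv_risk(t, eps):
    return pi1 * f1.cdf(t + eps) + pi0 * f0.sf(t - eps)

for eps in [0.0, 0.3, 0.6]:             # eps=0 recovers the Bayes classifier
    res = minimize_scalar(adv_risk, bounds=(-5, 5), args=(eps,), method="bounded")
    print(f"eps={eps:.1f}: boundary t*={res.x:+.3f}, adversarial risk={res.fun:.3f}")
```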

In this work, we investigate a numerical procedure for recovering a space-dependent diffusion coefficient in a (sub)diffusion model from the given terminal data, and provide a rigorous numerical analysis of the procedure. By exploiting the decay behavior of the observation in time, we establish a novel H{\"o}lder type stability estimate for a large terminal time $T$. This is achieved by novel decay estimates of the (fractional) time derivative of the solution. To numerically recover the diffusion coefficient, we employ the standard output least-squares formulation with an $H^1(\Omega)$-seminorm penalty, and discretize the regularized problem by the Galerkin finite element method with continuous piecewise linear finite elements in space and backward Euler convolution quadrature in time. Further, we provide an error analysis of discrete approximations, and prove a convergence rate that matches the stability estimate. The derived $L^2(\Omega)$ error bound depends explicitly on the noise level, regularization parameter and discretization parameter(s), which gives a useful guideline for the \textsl{a priori} choice of discretization parameters with respect to the noise level in practical implementation. The error analysis is achieved using the conditional stability argument and discrete maximum-norm resolvent estimates. Several numerical experiments are also given to illustrate and complement the theoretical analysis.
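To make the ingredients concrete, here is a stripped-down 1D sketch of the forward solver (P1 finite elements plus backward Euler) together with the output least-squares functional with a discrete $H^1$-seminorm penalty. It treats only the non-fractional case $\alpha=1$; the paper handles the fractional case via convolution quadrature, and a full inversion would minimize this functional over $q$ with, e.g., scipy.optimize. All problem data are assumed for illustration.

```python
import numpy as np

# Model problem on (0,1): u_t - (q(x) u_x)_x = f, u = 0 on the boundary.
n, nt, T = 50, 100, 1.0
h, tau = 1.0 / n, T / nt
xm = (np.arange(n) + 0.5) * h            # element midpoints (q lives here)
xs = np.arange(1, n) * h                 # interior nodes

def solve_forward(q_mid, f_nodes, u0_nodes):
    """FEM solution at t = T for piecewise-constant diffusion q_mid."""
    A = np.zeros((n - 1, n - 1))         # stiffness over interior nodes
    for e in range(n):                   # element e joins nodes e and e+1
        i, j = e - 1, e                  # their interior indices
        if i >= 0: A[i, i] += q_mid[e] / h
        if j <= n - 2: A[j, j] += q_mid[e] / h
        if i >= 0 and j <= n - 2:
            A[i, j] -= q_mid[e] / h
            A[j, i] -= q_mid[e] / h
    M = (h / 6) * (4 * np.eye(n - 1)     # consistent P1 mass matrix
                   + np.diag(np.ones(n - 2), 1) + np.diag(np.ones(n - 2), -1))
    u, b = u0_nodes.copy(), M @ f_nodes
    for _ in range(nt):                  # backward Euler steps
        u = np.linalg.solve(M + tau * A, M @ u + tau * b)
    return u

def misfit(q_mid, z_obs, gamma):
    """Output least-squares functional with discrete H^1-seminorm penalty."""
    u_T = solve_forward(q_mid, f_nodes, u0_nodes)
    return h * np.sum((u_T - z_obs) ** 2) + gamma * np.sum(np.diff(q_mid)**2) / h

# Synthetic terminal data from a "true" coefficient, then one evaluation.
f_nodes = np.ones(n - 1)
u0_nodes = np.sin(np.pi * xs)
z_obs = solve_forward(1.0 + 0.5 * xm, f_nodes, u0_nodes)
print(misfit(np.ones(n), z_obs, gamma=1e-6))   # constant initial guess
```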

We introduce and characterize the operational diversity order (ODO) in fading channels, as a proxy for the classical notion of diversity order at an arbitrary operational signal-to-noise ratio (SNR). This definition yields relevant insights in a number of cases: (i) we quantify that in line-of-sight scenarios an increased diversity order is attainable compared to that achieved asymptotically; (ii) this effect is attenuated, but still visible, in the presence of an additional dominant specular component; (iii) we confirm that the decay slope in Rayleigh product channels increases very slowly and never fully attains unit slope for finite values of the SNR.
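Insight (i) can be checked numerically: interpreting the operational diversity as the local slope $-\mathrm{d}\log P_{\mathrm{out}}/\mathrm{d}\log \mathrm{SNR}$, a Rician channel with a strong line-of-sight component shows a local slope well above its asymptotic order of one at moderate SNR. The $K$-factor and threshold below are assumptions for the example, and the exact ODO definition in the paper may differ in detail.

```python
import numpy as np
from scipy.stats import ncx2

# Rician fading with factor K and unit mean power:
# |h|^2 ~ (1/(2(K+1))) * noncentral-chi2(df=2, nc=2K).
K, gamma_th = 10.0, 1.0
rv = ncx2(df=2, nc=2 * K, scale=1.0 / (2 * (K + 1)))

snr_db = np.linspace(0, 40, 81)
snr = 10 ** (snr_db / 10)
p_out = rv.cdf(gamma_th / snr)                 # P(snr * |h|^2 < gamma_th)
slope = -np.gradient(np.log(p_out), np.log(snr))

for i in [0, 20, 40, 80]:
    print(f"SNR {snr_db[i]:5.1f} dB: local diversity slope {slope[i]:.2f}")
# The asymptotic slope is 1 (single antenna), but with a strong LoS component
# the local slope at moderate SNR is noticeably larger.
```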

Strong data processing inequalities (SDPIs) are an important object of study in information theory and have been well studied for $f$-divergences. Universal upper and lower bounds have been provided along with several applications, connecting them to impossibility (converse) results, concentration of measure, hypercontractivity, and so on. In this paper, we study the R\'enyi divergence and the corresponding SDPI constant, whose behavior seems to deviate from that of ordinary $\Phi$-divergences. In particular, one can find examples showing that the universal upper bound relating its SDPI constant to that of total variation does not hold in general. We prove, however, that the universal lower bound involving the SDPI constant of the chi-square divergence does indeed hold. Furthermore, we provide a characterization of the distribution that achieves the supremum when $\alpha$ is equal to $2$, and consequently compute the SDPI constant for the R\'enyi divergence of the general binary channel.
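The chi-square SDPI constant appearing in the lower bound is easy to approximate numerically for a binary channel by a brute-force search over input distributions; for a BSC($\delta$) the known value is $(1-2\delta)^2$, which the grid search below recovers. This is only a sanity-check illustration of the quantity, not the paper's analysis.

```python
import numpy as np

# eta(W) = sup_{P != Q} chi2(PW || QW) / chi2(P || Q) for a binary channel W.
delta = 0.1
W = np.array([[1 - delta, delta],
              [delta, 1 - delta]])      # rows: W(.|x), here a BSC(0.1)

def chi2(p, q):
    return np.sum((p - q) ** 2 / q)

grid = np.linspace(0.01, 0.99, 99)
eta = 0.0
for p in grid:
    for q in grid:
        if abs(p - q) < 1e-12:
            continue
        P, Q = np.array([p, 1 - p]), np.array([q, 1 - q])
        eta = max(eta, chi2(P @ W, Q @ W) / chi2(P, Q))

print(f"grid estimate {eta:.4f} vs (1-2*delta)^2 = {(1 - 2*delta)**2:.4f}")
```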

The airplane refueling problem is a nonlinear unconstrained optimization problem with $n!$ feasible solutions. Given a fleet of $n$ airplanes capable of mid-air refueling, the question is to find the refueling policy that lets the last remaining airplane travel the farthest. To deal with large-scale airplane refueling instances, we propose the notion of a sequential feasible solution, derived from the refueling properties of the underlying data structure. We prove that if an airplane refueling instance has feasible solutions, it must have sequential feasible solutions, and that the optimal feasible solution is the optimal sequential feasible solution. We then propose a sequential search algorithm consisting of two steps. The first step seeks out all sequential feasible solutions; when the input size $n$ exceeds an index number, we prove that the number of sequential feasible solutions grows at a polynomial rate. The second step searches for the maximal sequential feasible solution by bubble-sorting all sequential feasible solutions. Moreover, we build an efficient computability scheme by which one can forecast, within polynomial time, the computational complexity of the sequential search algorithm on any given airplane refueling instance. We thus provide a computational strategy for decision makers or algorithm users in light of their available computing resources.
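For very small instances, the $n!$ search space can be enumerated directly. The sketch below does so under one common formulation of the problem, in which airplane $i$ carries fuel $a_i$, burns $c_i$ per unit distance, and drop-out order $\pi$ yields distance $d(\pi)=\sum_j a_{\pi(j)} / \sum_{k\ge j} c_{\pi(k)}$; the exact model and the sequential-solution pruning of the paper are not reproduced, and the data are hypothetical.

```python
from itertools import permutations

def distance(order, a, c):
    """Distance of the last plane for a given drop-out order."""
    d, burn = 0.0, sum(c)              # whole fleet burns fuel together
    for i in order:
        d += a[i] / burn               # segment flown before plane i drops out
        burn -= c[i]
    return d

def best_policy(a, c):
    """Naive O(n!) baseline; the paper's sequential search prunes this space."""
    return max(permutations(range(len(a))), key=lambda o: distance(o, a, c))

a = [10.0, 8.0, 7.0, 5.0]              # fuel loads (hypothetical)
c = [2.0, 1.5, 1.0, 1.0]               # consumption rates (hypothetical)
order = best_policy(a, c)
print(order, distance(order, a, c))
```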

The Ulam distance of two permutations on $[n]$ is $n$ minus the length of their longest common subsequence. In this paper, we show that for every $\varepsilon>0$ there exist some $\alpha>0$ and an infinite set $\Gamma\subseteq \mathbb{N}$ such that for all $n\in\Gamma$, there is an explicit set $C_n$ of $(n!)^{\alpha}$ permutations on $[n]$ in which every pair of permutations has Ulam distance at least $(1-\varepsilon)\cdot n$. Moreover, we can compute the $i^{\text{th}}$ permutation in $C_n$ in poly$(n)$ time, and can also decode in poly$(n)$ time a permutation $\pi$ on $[n]$ to its closest permutation $\pi^*$ in $C_n$, provided the Ulam distance of $\pi$ and $\pi^*$ is less than $\frac{(1-\varepsilon)\cdot n}{4}$. Previously, it was implicitly known, by combining works of Goldreich and Wigderson [Israel Journal of Mathematics'23] and Farnoud, Skachek, and Milenkovic [IEEE Transactions on Information Theory'13] in a black-box manner, that one can explicitly construct $(n!)^{\Omega(1)}$ permutations on $[n]$ such that every pair of them has Ulam distance at least $\frac{n}{6}\cdot (1-\varepsilon)$, for any $\varepsilon>0$; the bound on the distance improves to $\frac{n}{4}\cdot (1-\varepsilon)$ if the construction of Goldreich and Wigderson is directly analyzed in the Ulam metric.
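The Ulam distance itself is cheap to compute: the longest common subsequence of two permutations equals the longest increasing subsequence of one permutation relabeled through the other's inverse, giving an $O(n \log n)$ algorithm via patience sorting. A short sketch (with 0-indexed permutations) follows; it computes the metric, not the paper's code construction.

```python
from bisect import bisect_left

def ulam(p, q):
    """Ulam distance of two 0-indexed permutations of {0, ..., n-1}:
    n - LCS(p, q), with LCS(p, q) = LIS(q relabeled by p's inverse)."""
    n = len(p)
    pos = [0] * n
    for idx, v in enumerate(p):
        pos[v] = idx                   # position of value v in p
    seq = [pos[v] for v in q]
    tails = []                         # patience-sorting LIS in O(n log n)
    for x in seq:
        i = bisect_left(tails, x)
        if i == len(tails):
            tails.append(x)
        else:
            tails[i] = x
    return n - len(tails)

print(ulam([0, 1, 2, 3], [0, 1, 2, 3]))   # 0: identical permutations
print(ulam([0, 1, 2, 3], [3, 2, 1, 0]))   # 3: LCS of a reversal has length 1
```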

DNA strands serve as a storage medium for $4$-ary data over the alphabet $\{A,T,G,C\}$. DNA data storage promises formidable information density, long-term durability, and ease of replicability. However, information in this intriguing storage technology might be corrupted. Experiments have revealed that DNA sequences with long homopolymers and/or low $GC$-content are notably more prone to errors upon storage. This paper investigates the utilization of the recently-introduced method for designing lexicographically-ordered constrained (LOCO) codes in DNA data storage. We introduce DNA LOCO (D-LOCO) codes over the alphabet $\{A,T,G,C\}$ with limited runs of identical symbols. These codes come with an encoding-decoding rule we derive, which yields affordable encoding-decoding algorithms. In terms of storage overhead, the proposed encoding-decoding algorithms outperform those in the existing literature. Our algorithms are readily reconfigurable. D-LOCO codes are intrinsically balanced, which allows us to achieve balancing over the entire DNA strand with minimal rate penalty. Moreover, we propose four schemes to bridge consecutive codewords, three of which guarantee single-substitution error detection per codeword. We analyze the probability of undetected errors. We also show that D-LOCO codes are capacity-achieving and that they offer remarkably high rates at moderate lengths.
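To make the constraints concrete, the sketch below checks a strand against a homopolymer-run limit and a $GC$-content window, and counts run-length-limited quaternary strings by a simple recurrence, whose normalized logarithm indicates the achievable rate. The specific limits are assumptions; the D-LOCO encoding-decoding rule itself is in the paper.

```python
from functools import lru_cache
from math import log2

MAX_RUN = 3                        # homopolymer-run limit (assumed)
GC_BAND = (0.4, 0.6)               # admissible GC-content window (assumed)

def valid(strand):
    """Check the run-length and GC-content constraints on a DNA strand."""
    run = 1
    for a, b in zip(strand, strand[1:]):
        run = run + 1 if a == b else 1
        if run > MAX_RUN:
            return False
    gc = sum(s in "GC" for s in strand) / len(strand)
    return GC_BAND[0] <= gc <= GC_BAND[1]

@lru_cache(maxsize=None)
def count(n):
    """Number of quaternary strings of length n with all runs <= MAX_RUN."""
    if n == 0:
        return 1
    total = 4 if n <= MAX_RUN else 0          # one run filling the whole string
    for j in range(1, min(MAX_RUN, n - 1) + 1):
        total += 3 * count(n - j)             # final run of length j, new symbol
    return total

n = 20
print(valid("ACGTACGGTACC"), f"rate ~ {log2(count(n)) / n:.3f} bits/symbol")
```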

Object detection typically assumes that training and test data are drawn from an identical distribution, which, however, does not always hold in practice. Such a distribution mismatch can lead to a significant performance drop. In this work, we aim to improve the cross-domain robustness of object detection. We tackle the domain shift on two levels: 1) the image-level shift, such as image style and illumination, and 2) the instance-level shift, such as object appearance and size. We build our approach on the recent state-of-the-art Faster R-CNN model, and design two domain adaptation components, on the image level and the instance level, to reduce the domain discrepancy. The two domain adaptation components are based on H-divergence theory and are implemented by learning a domain classifier in an adversarial training manner. The domain classifiers on different levels are further reinforced with a consistency regularization to learn a domain-invariant region proposal network (RPN) in the Faster R-CNN model. We evaluate our newly proposed approach on multiple datasets, including Cityscapes, KITTI, and SIM10K. The results demonstrate the effectiveness of our proposed approach for robust object detection in various domain shift scenarios.
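The standard way to train such a domain classifier adversarially is a gradient reversal layer (DANN-style): the forward pass is the identity, while the backward pass negates the gradient so the backbone learns domain-invariant features. The PyTorch sketch below shows an image-level classifier of this kind on fake backbone features; it is a generic illustration of the mechanism, not the authors' released code, and the layer sizes are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    """Identity forward; multiplies the gradient by -lambda on the way back."""
    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad_out):
        return -ctx.lamb * grad_out, None

class ImageLevelDomainClassifier(nn.Module):
    """Patch-wise domain classifier on backbone feature maps."""
    def __init__(self, in_ch=512):
        super().__init__()
        self.conv1 = nn.Conv2d(in_ch, 256, 1)
        self.conv2 = nn.Conv2d(256, 1, 1)
    def forward(self, feat, lamb=1.0):
        x = GradReverse.apply(feat, lamb)
        x = F.relu(self.conv1(x))
        return self.conv2(x)               # per-location domain logit

# Usage sketch: feat would come from the detector backbone.
feat = torch.randn(2, 512, 38, 50)         # fake backbone features
domain = torch.tensor([0.0, 1.0])          # 0 = source image, 1 = target image
logits = ImageLevelDomainClassifier()(feat, lamb=0.1)
target = domain.view(-1, 1, 1, 1).expand_as(logits)
loss = F.binary_cross_entropy_with_logits(logits, target)
loss.backward()                            # reversed gradients reach the backbone
print(loss.item())
```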
