
We consider a pure-jump stable Cox-Ingersoll-Ross ($\alpha$-stable CIR) process driven by a non-symmetric stable L{\'e}vy process with jump activity $\alpha \in (1, 2)$, and we address the joint estimation of the drift, scaling, and jump activity parameters from high-frequency observations of the process on a fixed time period. We first prove the existence of a consistent, rate-optimal, and asymptotically conditionally Gaussian estimator based on an approximation of the likelihood function. Moreover, uniqueness of the drift estimators is established assuming that the scaling coefficient and the jump activity are known or consistently estimated. Next, we propose easy-to-implement preliminary estimators of all parameters and improve them by a one-step procedure.
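
For orientation, a common parametrization of such a process (a sketch only; the paper's exact drift and scaling parametrization may differ) is the stochastic differential equation $dX_t = (a - b X_t)\,dt + \sigma X_{t-}^{1/\alpha}\,dL_t$, $X_0 = x_0 \geq 0$, where $(L_t)_{t \geq 0}$ is a non-symmetric (e.g., spectrally positive) $\alpha$-stable L{\'e}vy process, $(a, b)$ are the drift parameters, $\sigma > 0$ is the scaling coefficient, and $\alpha \in (1, 2)$ is the jump activity; all of these are estimated jointly from observations $X_{i\Delta_n}$, $i = 0, \dots, n$, on a fixed time horizon $n\Delta_n = T$.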


Lax extensions of set functors play a key role in various areas including topology, concurrent systems, and modal logic, while predicate liftings provide a generic semantics of modal operators. We take a fresh look at the connection between lax extensions and predicate liftings from the point of view of quantale-enriched relations. Using this perspective, we show in particular that various fundamental concepts and results arise naturally and their proofs become very elementary. Ultimately, we prove that every lax extension is induced by a class of predicate liftings; we discuss several implications of this result.
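
As a standard illustration (not the paper's notation): for the powerset functor $\mathcal{P}$, the predicate lifting $\llbracket \Box \rrbracket_X(A) = \{ B \subseteq X : B \subseteq A \}$, together with its dual $\llbracket \Diamond \rrbracket$, induces the familiar Egli-Milner lax extension: for a relation $R \subseteq X \times Y$, sets $A$ and $B$ are related iff every $a \in A$ is $R$-related to some $b \in B$ and every $b \in B$ is $R$-related to some $a \in A$. The main result stated above generalizes this picture to arbitrary lax extensions.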

Semi-unification is the combination of first-order unification and first-order matching. The undecidability of semi-unification was proven by Kfoury, Tiuryn, and Urzyczyn in the 1990s by Turing reduction from Turing machine immortality (the existence of a diverging configuration). That particular Turing reduction is intricate, uses non-computational principles, and involves various intermediate models of computation. The present work gives a constructive many-one reduction from the Turing machine halting problem to semi-unification. This establishes RE-completeness of semi-unification under many-one reductions. Computability of the reduction function, constructivity of the argument, and correctness of the argument are witnessed by an axiom-free mechanization in the Coq proof assistant. Arguably, this serves as comprehensive, precise, and surveyable evidence for the result at hand. The mechanization is incorporated into the existing, well-maintained Coq library of undecidability proofs. Notably, a variant of Hooper's argument for the undecidability of Turing machine immortality is part of the mechanization.
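
To fix the problem statement (the standard definition, sketched): a semi-unification instance is a finite set of pairs of first-order terms $(s_1, t_1), \dots, (s_n, t_n)$, and it is solvable iff there exist substitutions $\sigma, \rho_1, \dots, \rho_n$ such that $\rho_i(\sigma(s_i)) = \sigma(t_i)$ for all $i$. For example, the single pair $(f(x, y), f(g(y), z))$ is solved by $\sigma = \mathrm{id}$ and $\rho_1 = \{ x \mapsto g(y),\; y \mapsto z \}$.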

We numerically demonstrate a silicon add-drop microring-based reservoir computing scheme that combines parallel delayed inputs and wavelength division multiplexing. The scheme solves memory-demanding tasks like time-series prediction with good performance without requiring external optical feedback.
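
For concreteness, here is a minimal sketch of the only trained part of such a reservoir computer, the linear readout; the photonic reservoir itself (the add-drop microring with delayed inputs and WDM channels) is abstracted into a generic random recurrent map, since its numerical model does not fit in a few lines:

```python
# Minimal reservoir-computing sketch: a generic stand-in reservoir plus a
# ridge-regression readout trained on a 1-step memory task (hypothetical
# dimensions; the microring physics is not modeled here).
import numpy as np

rng = np.random.default_rng(0)
N, T = 50, 1000
W_in = rng.normal(size=N)                             # input weights
W_res = rng.normal(size=(N, N)) * (0.9 / np.sqrt(N))  # fixed recurrent weights

u = rng.uniform(-1, 1, size=T)                        # input time series
target = np.roll(u, 1)                                # task: recall previous input

states = np.zeros((T, N))
x = np.zeros(N)
for t in range(T):
    x = np.tanh(W_res @ x + W_in * u[t])              # generic reservoir update
    states[t] = x

lam = 1e-6                                            # ridge regularization
W_out = np.linalg.solve(states.T @ states + lam * np.eye(N), states.T @ target)
pred = states @ W_out
print("NMSE:", np.mean((pred - target) ** 2) / np.var(target))
```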

We consider a distributed coding for computing problem with constant decoding locality, i.e., with a vanishing error probability, any single sample of the function can be approximately recovered by probing only a constant number of compressed bits. We establish an achievable rate region by designing an efficient coding scheme. The scheme reduces the required rate by introducing auxiliary random variables while supporting local decoding at the same time. We then show that the rate region is optimal under mild regularity conditions on the source distributions. A coding for computing problem with side information is studied analogously. These results indicate that a higher rate has to be paid in order to achieve lower coding complexity in distributed computing settings. Moreover, useful graph characterizations are developed to simplify the computation of the achievable rate region.
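
For context, a classical benchmark without the locality constraint (not the new rate region established here): in the side-information setting of Orlitsky and Roche, the minimum rate at which the decoder can compute $f(X, Y)$ given side information $Y$ is the conditional graph entropy $H_{G}(X \mid Y)$ of the characteristic graph $G$ of $X$ with respect to $f$ and $Y$; the graph characterizations developed in this paper play an analogous simplifying role.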

In 2023, Kuznetsov and Speranski introduced infinitary action logic with multiplexing $!^m\nabla \mathrm{ACT}_\omega$ and proved that the derivability problem for it lies between the $\omega$ and $\omega^\omega$ levels of the hyperarithmetical hierarchy. We prove that this problem is $\Delta^0_{\omega^\omega}$-complete under Turing reductions. Namely, we prove that it is recursively isomorphic to the satisfaction predicate for computable infinitary formulas of rank less than $\omega^\omega$ in the language of arithmetic. We also prove this result for the fragment of $!^m\nabla \mathrm{ACT}_\omega$ where Kleene star is not allowed to be in the scope of the subexponential. Finally, we present a family of logics, which are fragments of $!^m\nabla \mathrm{ACT}_\omega$, such that the complexity of the $k$-th logic is between $\Delta^0_{\omega^k}$ and $\Delta^0_{\omega^{k+1}}$.

This paper presents $\textbf{R}$epresentation-$\textbf{C}$onditioned image $\textbf{G}$eneration (RCG), a simple yet effective image generation framework which sets a new benchmark in class-unconditional image generation. RCG does not condition on any human annotations. Instead, it conditions on a self-supervised representation distribution which is mapped from the image distribution using a pre-trained encoder. During generation, RCG samples from this representation distribution using a representation diffusion model (RDM), and employs a pixel generator to craft image pixels conditioned on the sampled representation. Such a design provides substantial guidance during the generative process, resulting in high-quality image generation. Tested on ImageNet 256$\times$256, RCG achieves a Fr{\'e}chet Inception Distance (FID) of 3.31 and an Inception Score (IS) of 253.4. These results not only significantly improve the state-of-the-art of class-unconditional image generation but also rival the current leading methods in class-conditional image generation, bridging the long-standing performance gap between these two tasks. Code is available at //github.com/LTH14/rcg.
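
Schematically, sampling in RCG is a two-stage pipeline. The sketch below uses hypothetical stand-ins for the two learned components; see //github.com/LTH14/rcg for the actual interfaces:

```python
# Two-stage RCG-style sampling, schematically. The function bodies are
# placeholders, not the real models.
import numpy as np

rng = np.random.default_rng(0)
REP_DIM = 256  # assumed dimensionality of the self-supervised representation

def sample_representation_rdm(n):
    """Stand-in for the representation diffusion model (RDM), which would
    iteratively denoise Gaussian noise in representation space."""
    return rng.normal(size=(n, REP_DIM))

def pixel_generator(reps):
    """Stand-in for the pixel generator, which crafts image pixels
    conditioned on the sampled representations."""
    return rng.normal(size=(reps.shape[0], 256, 256, 3))

reps = sample_representation_rdm(4)   # stage 1: sample representations
images = pixel_generator(reps)        # stage 2: generate pixels given them
print(images.shape)                   # (4, 256, 256, 3)
```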

We present TIGERScore, a \textbf{T}rained metric that follows \textbf{I}nstruction \textbf{G}uidance to perform \textbf{E}xplainable and \textbf{R}eference-free evaluation over a wide spectrum of text generation tasks. Unlike other automatic evaluation methods that only provide arcane scores, TIGERScore is guided by natural language instructions to provide an error analysis pinpointing the mistakes in the generated text. Our metric is based on LLaMA-2, trained on our meticulously curated instruction-tuning dataset MetricInstruct, which covers 6 text generation tasks and 23 text generation datasets. The dataset consists of 42K quadruples of the form (instruction, input, system output $\rightarrow$ error analysis). We collected the `system outputs' from a large variety of models to cover different types of errors. To quantitatively assess our metric, we evaluate its correlation with human ratings on 5 held-in datasets and 2 held-out datasets, and show that TIGERScore achieves the open-source state-of-the-art correlation with human ratings across these datasets, almost approaching the GPT-4 evaluator. As a reference-free metric, its correlation can even surpass that of the best existing reference-based metrics. To further qualitatively assess the rationales generated by our metric, we conduct a human evaluation of the generated explanations and find that the explanations are 70.8\% accurate. Through these experimental results, we believe TIGERScore demonstrates the possibility of building universal explainable metrics to evaluate any text generation task.
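
The MetricInstruct quadruples have the shape (instruction, input, system output $\rightarrow$ error analysis); a hypothetical example, with field names and contents invented purely for illustration:

```python
# Illustrative shape of one MetricInstruct training example (keys and text
# are made up; the released dataset may use different field names).
example = {
    "instruction": "Summarize the following article in one sentence.",
    "input": "The city council approved the new transit budget on Tuesday ...",
    "system_output": "The council rejected the transit budget on Tuesday.",
    "error_analysis": (
        "Error 1 (factual inconsistency): the output claims the budget was "
        "rejected, whereas the input states it was approved."
    ),
}
```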

We develop two unfitted cut finite element methods for the Stokes equations based on $\mathbf{H}^{\text{div}}$-conforming finite elements which exhibit optimal convergence order for the velocity, pointwise divergence-free velocity fields, and well-posed linear systems, independently of the position of the boundary relative to the computational mesh. The first method is based on the Brezzi-Douglas-Marini (BDM) elements and involves interior penalty terms to enforce tangential continuity of the velocity at interior edges of the mesh. The second method is a 3-field formulation involving the vorticity, velocity, and pressure, and uses the Raviart-Thomas (RT) space for the velocity. We present mixed ghost penalty stabilization terms for both methods so that the resulting discrete problems are stable and the divergence-free property of the $\mathbf{H}^{\text{div}}$-conforming elements is preserved also on unfitted meshes. In both methods boundary conditions are imposed weakly. We show that imposing Dirichlet boundary conditions weakly introduces additional challenges: (1) the divergence-free property of the RT and BDM finite elements may be lost depending on how the normal component of the velocity field at the boundary is imposed; (2) pressure robustness is affected by how well the boundary conditions are satisfied and may not hold even if the incompressibility condition holds pointwise. We study two approaches to weakly imposing the normal component of the velocity at the boundary: either a penalty parameter and Nitsche's method, or a Lagrange multiplier method. We show that appropriate conditions on the velocity space have to be imposed when Nitsche's method is used. Pressure robustness can hold with both approaches by reducing the error at the boundary, but this impacts the condition numbers of the linear systems, independently of whether the mesh is fitted or unfitted to the boundary.
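
Schematically (generic forms only, not the paper's exact terms), the two routes for imposing the normal condition $\mathbf{u} \cdot \mathbf{n} = g$ on the boundary $\Gamma$ are: the Nitsche route, which adds a penalty term of the form $\frac{\gamma}{h} \int_\Gamma (\mathbf{u} \cdot \mathbf{n} - g)(\mathbf{v} \cdot \mathbf{n}) \, ds$ (together with the usual consistency and symmetrization terms), with penalty parameter $\gamma$ and mesh size $h$; and the Lagrange multiplier route, which instead seeks $\lambda$ in a multiplier space $\Lambda$ such that $\int_\Gamma (\mathbf{u} \cdot \mathbf{n}) \mu \, ds = \int_\Gamma g \mu \, ds$ for all $\mu \in \Lambda$.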

Recent successes of massively overparameterized models have inspired a new line of work investigating the underlying conditions that enable overparameterized models to generalize well. This paper considers a framework where the possibly overparameterized model includes fake features, i.e., features that are present in the model but not in the data. We present a non-asymptotic high-probability bound on the generalization error of the ridge regression problem under the model misspecification of having fake features. Our high-probability results provide insights into the interplay between the implicit regularization provided by the fake features and the explicit regularization provided by the ridge parameter. Numerical results illustrate this trade-off and show how the optimal ridge parameter may depend heavily on the number of fake features.
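
The setting is easy to reproduce in a toy simulation; the sketch below (arbitrary dimensions and noise level, chosen only for illustration) regresses on a design matrix whose last columns are fake features carrying no signal, and sweeps the ridge parameter:

```python
# Toy ridge regression with "fake features": columns present in the model
# but absent from the data-generating process (hypothetical setup).
import numpy as np

rng = np.random.default_rng(0)
n, d_true, d_fake = 100, 20, 80
X_true = rng.normal(size=(n, d_true))
X_fake = rng.normal(size=(n, d_fake))        # fake features: pure noise
X = np.hstack([X_true, X_fake])
beta = rng.normal(size=d_true)
y = X_true @ beta + 0.5 * rng.normal(size=n)

for lam in [1e-4, 1e-2, 1.0, 10.0]:
    w = np.linalg.solve(X.T @ X + lam * np.eye(d_true + d_fake), X.T @ y)
    Xt_true = rng.normal(size=(1000, d_true))            # fresh test data
    Xt = np.hstack([Xt_true, rng.normal(size=(1000, d_fake))])
    err = np.mean((Xt @ w - Xt_true @ beta) ** 2)
    print(f"lambda={lam:g}: test MSE={err:.3f}")
```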

Several recent works have studied the convergence \textit{in high probability} of stochastic gradient descent (SGD) and its clipped variant. Compared to vanilla SGD, clipped SGD is practically more stable and has the additional theoretical benefit of a logarithmic dependence on the failure probability. However, the convergence of other practical nonlinear variants of SGD, e.g., sign SGD, quantized SGD, and normalized SGD, which achieve improved communication efficiency or accelerated convergence, is much less understood. In this work, we study the convergence bounds \textit{in high probability} of a broad class of nonlinear SGD methods. For strongly convex loss functions with Lipschitz continuous gradients, we prove a logarithmic dependence on the failure probability, even when the noise is heavy-tailed. Strictly more general than the results for clipped SGD, our results hold for any nonlinearity with bounded (component-wise or joint) outputs, such as clipping, normalization, and quantization. Further, existing results with heavy-tailed noise assume bounded $\eta$-th central moments, with $\eta \in (1,2]$. In contrast, our refined analysis works even for $\eta=1$, strictly relaxing the noise moment assumptions in the literature.
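
A sketch of the bounded-output nonlinearities covered by this class, applied to the stochastic gradient inside a generic SGD step (minimal forms, for illustration only):

```python
# Generic nonlinear SGD step x <- x - eta * Psi(g), for several bounded
# nonlinearities Psi (clipping, sign, normalization).
import numpy as np

def clip(g, c=1.0):
    n = np.linalg.norm(g)
    return g if n <= c else g * (c / n)      # jointly bounded by c

def sign(g):
    return np.sign(g)                        # component-wise bounded by 1

def normalize(g, eps=1e-12):
    return g / (np.linalg.norm(g) + eps)     # jointly bounded by 1

def nsgd_step(x, g, psi, eta=0.1):
    return x - eta * psi(g)

g = np.array([3.0, -0.5, 0.2, -2.0, 1.0])    # a stochastic gradient sample
for psi in (clip, sign, normalize):
    print(psi.__name__, nsgd_step(np.zeros(5), g, psi))
```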
