
Mack's distribution-free chain ladder justified the use of the chain ladder predictor and enabled Mack to derive an estimator of the conditional mean squared error of prediction for that predictor. Classical insurance loss models, i.e. those of compound Poisson type, are not consistent with Mack's distribution-free chain ladder. However, for a sequence of compound Poisson loss models indexed by exposure (e.g. the number of contracts), we show that the chain ladder predictor and Mack's estimator of the conditional mean squared error of prediction can be derived by considering large-exposure asymptotics. Hence, chain ladder prediction uncertainty can be quantified with Mack's estimator without relying on the validity of the model assumptions of the distribution-free chain ladder.
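For concreteness, here is a minimal Python sketch of the classical chain ladder predictor together with Mack's formula for the conditional mean squared error of prediction; the cumulative claims triangle is hypothetical illustrative data, not taken from the paper.

```python
import numpy as np

# Hypothetical cumulative claims triangle C[i, j]; np.nan marks future cells.
C = np.array([
    [1001.0, 1855.0, 2423.0, 2988.0],
    [1113.0, 2103.0, 2774.0, np.nan],
    [1265.0, 2433.0, np.nan, np.nan],
    [1490.0, np.nan, np.nan, np.nan],
])
I, J = C.shape  # accident years x development years (here I == J)

f = np.empty(J - 1)       # chain ladder development factors
sigma2 = np.empty(J - 1)  # Mack's variance parameters
for j in range(J - 1):
    rows = np.arange(I - 1 - j)  # accident years observed at both j and j + 1
    f[j] = C[rows, j + 1].sum() / C[rows, j].sum()
    if len(rows) > 1:
        dev = C[rows, j + 1] / C[rows, j] - f[j]
        sigma2[j] = (C[rows, j] * dev**2).sum() / (len(rows) - 1)
# Mack's extrapolation for the last variance parameter (valid for this size):
sigma2[-1] = min(sigma2[-2] ** 2 / sigma2[-3], sigma2[-3], sigma2[-2])

# Chain ladder predictor for the lower (future) triangle.
C_hat = C.copy()
for i in range(1, I):
    for j in range(J - i, J):
        C_hat[i, j] = C_hat[i, j - 1] * f[j - 1]

# Mack's estimator of conditional MSEP per accident year.
for i in range(1, I):
    s = sum(
        sigma2[j] / f[j] ** 2 * (1 / C_hat[i, j] + 1 / C[: I - 1 - j, j].sum())
        for j in range(J - 1 - i, J - 1)
    )
    print(f"year {i}: ultimate {C_hat[i, -1]:.0f}, "
          f"rmsep {np.sqrt(C_hat[i, -1] ** 2 * s):.0f}")
```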

Related content

We characterize the strength, in terms of Weihrauch degrees, of certain problems related to Ramsey-like theorems concerning colourings of the rationals and of the natural numbers. The theorems we are chiefly interested in assert the existence of almost-homogeneous sets for colourings of pairs of rationals or of natural numbers satisfying properties determined by some additional algebraic structure on the set of colours. In the context of reverse mathematics, most of the principles we study are equivalent to $\Sigma^0_2$-induction over $\mathrm{RCA}_0$. The associated problems in the Weihrauch lattice are related to $\mathrm{TC}_\mathbb{N}^*$, $(\mathrm{LPO}')^*$, or their product, depending on the precise formalization.

In this paper, two novel classes of implicit exponential Runge-Kutta (ERK) methods are studied for solving highly oscillatory systems. First, we analyze the symplectic conditions of two kinds of exponential integrators and present a first-order symplectic method. To solve highly oscillatory problems, highly accurate implicit ERK integrators (up to order four) are formulated by comparing the Taylor expansions of the numerical and exact solutions; it is shown that the order conditions of the two new kinds of exponential methods are identical to those of classical Runge-Kutta (RK) methods. Moreover, we investigate the linear stability properties of these exponential methods. Finally, numerical results not only demonstrate the long-time energy preservation of the first-order symplectic method, but also illustrate the accuracy and efficiency of the formulated methods in comparison with standard ERK methods.
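As background for the class of methods studied here, the following sketch implements the simplest member of the family, the first-order explicit exponential Euler method for y' = Ay + g(y), applied to a hypothetical weakly nonlinear oscillator; the implicit symplectic ERK methods constructed in the paper are more elaborate.

```python
import numpy as np
from scipy.linalg import expm, solve

omega = 50.0                                     # high oscillation frequency
A = np.array([[0.0, 1.0], [-omega**2, 0.0]])     # stiff/oscillatory linear part
g = lambda y: np.array([0.0, 0.025 * y[0]**3])   # weak nonlinearity (illustrative)

def exponential_euler(y0, h, n_steps):
    E = expm(h * A)                     # exp(hA) propagates the linear part exactly
    phi1 = solve(h * A, E - np.eye(2))  # phi_1(hA) = (hA)^{-1}(exp(hA) - I); A invertible here
    y = y0.copy()
    for _ in range(n_steps):
        y = E @ y + h * (phi1 @ g(y))   # y_{n+1} = e^{hA} y_n + h phi_1(hA) g(y_n)
    return y

y = exponential_euler(np.array([1.0, 0.0]), h=1e-3, n_steps=10_000)
print(y)
```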

We consider the Ensemble Kalman Inversion, which has recently been introduced as an efficient, gradient-free optimisation method for estimating unknown parameters in an inverse setting. In the case of large data sets, the Ensemble Kalman Inversion becomes computationally infeasible, as the data misfit needs to be evaluated for each particle in each iteration. Randomised algorithms such as stochastic gradient descent have been shown to overcome this issue by using only a random subset of the data in each iteration, so-called subsampling techniques. Based on a recent analysis of a continuous-time representation of stochastic gradient methods, we propose, analyse, and apply subsampling techniques within Ensemble Kalman Inversion. Specifically, we propose two different subsampling techniques: either every particle observes the same data subset (single subsampling) or every particle observes a different data subset (batch subsampling).
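The following sketch illustrates the single-subsampling variant on a hypothetical linear inverse problem: in each iteration, one shared random data subset enters the usual ensemble Kalman update. The forward map, dimensions, and noise level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_data, J = 5, 1000, 50           # parameter dim, data size, ensemble size
A = rng.normal(size=(n_data, d))     # linear forward operator G(u) = A u
u_true = rng.normal(size=d)
gamma = 0.1                          # observation noise std
y = A @ u_true + gamma * rng.normal(size=n_data)

U = rng.normal(size=(J, d))          # initial ensemble, one particle per row
batch = 64                           # subsample size per iteration

for it in range(200):
    idx = rng.choice(n_data, size=batch, replace=False)  # one subset for all particles
    A_s, y_s = A[idx], y[idx]
    G = U @ A_s.T                                        # forward evaluations on subset
    u_bar, g_bar = U.mean(axis=0), G.mean(axis=0)
    Cup = (U - u_bar).T @ (G - g_bar) / J                # cross-covariance C^{up}
    Cpp = (G - g_bar).T @ (G - g_bar) / J                # forecast covariance C^{pp}
    K = Cup @ np.linalg.inv(Cpp + gamma**2 * np.eye(batch))
    Y = y_s + gamma * rng.normal(size=(J, batch))        # perturbed observations
    U = U + (Y - G) @ K.T                                # Kalman-type particle update

print("error:", np.linalg.norm(U.mean(axis=0) - u_true))
```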

Many asymptotically minimax procedures for function estimation rely on somewhat arbitrary and restrictive assumptions such as isotropy or spatial homogeneity. This work enhances the theoretical understanding of Bayesian additive regression trees under substantially relaxed smoothness assumptions. We provide a comprehensive study of the asymptotic optimality and posterior contraction of Bayesian forests when the regression function has anisotropic smoothness that possibly varies over the function domain; the regression function may also be discontinuous. We introduce a new class of sparse {\em piecewise heterogeneous anisotropic} H\"{o}lder functions and derive the minimax lower bound for their estimation in high-dimensional scenarios under the $L_2$-loss. We then find that Bayesian tree priors, coupled with a Dirichlet subset selection prior for sparse estimation in high-dimensional scenarios, adapt to unknown heterogeneous smoothness, discontinuity, and sparsity. These results show that Bayesian forests are uniquely suited for more general estimation problems that would render other default machine learning tools, such as Gaussian processes, suboptimal. Our numerical study shows that Bayesian forests often outperform competitors such as random forests and deep neural networks, which are believed to work well for discontinuous or complicated smooth functions. Beyond nonparametric regression, we also examine the posterior contraction of Bayesian forests for density estimation and binary classification, using the techniques developed in this study.

We consider the application of the generalized Convolution Quadrature (gCQ) to approximate the solution of an important class of sectorial problems. The gCQ is a generalization of Lubich's Convolution Quadrature (CQ) that allows for variable steps. The available stability and convergence theory for the gCQ requires unrealistic regularity assumptions on the data, which do not hold in many applications of interest, such as the approximation of subdiffusion equations. It is well known that for insufficiently smooth data the original CQ, with uniform steps, exhibits order reduction close to the singularity. We generalize the analysis of the gCQ to data satisfying realistic regularity assumptions and provide sufficient conditions for stability and convergence on arbitrary sequences of time points. We consider the particular case of graded meshes and show how to choose them optimally, according to the behaviour of the data. An important advantage of the gCQ method is that it allows for a fast, memory-efficient implementation. We describe how the fast and oblivious gCQ can be implemented and illustrate our theoretical results with several numerical experiments.
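For illustration, a graded mesh of the kind considered here can be built as t_n = T(n/N)^gamma, concentrating steps near t = 0 where subdiffusion-type solutions are typically non-smooth. In the sketch below the grading exponent gamma is a free parameter; its optimal value, per the analysis, depends on the behaviour of the data.

```python
import numpy as np

def graded_mesh(T, N, gamma):
    """Return N + 1 time points on [0, T], graded towards t = 0."""
    return T * (np.arange(N + 1) / N) ** gamma

t = graded_mesh(T=1.0, N=16, gamma=3.0)
print(np.diff(t))  # step sizes grow away from the singularity at t = 0
```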

We introduce the new setting of open-vocabulary object 6D pose estimation, in which a textual prompt is used to specify the object of interest. In contrast to existing approaches, in our setting (i) the object of interest is specified solely through the textual prompt, (ii) no object model (e.g. CAD or video sequence) is required at inference, (iii) the object is imaged from two different viewpoints of two different scenes, and (iv) the object was not observed during the training phase. To operate in this setting, we introduce a novel approach that leverages a Vision-Language Model to segment the object of interest from two distinct scenes and to estimate its relative 6D pose. The key to our approach is a carefully devised strategy to fuse object-level information provided by the prompt with local image features, resulting in a feature space that can generalize to novel concepts. We validate our approach on a new benchmark based on two popular datasets, REAL275 and Toyota-Light, which collectively encompass 39 object instances appearing in four thousand image pairs. The results demonstrate that our approach outperforms both a well-established hand-crafted method and a recent deep learning-based baseline in estimating the relative 6D pose of objects in different scenes. Project website: //jcorsetti.github.io/oryon-website/.
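As a rough illustration of prompt-feature fusion (the paper's exact strategy may differ), the sketch below lets a text embedding attend over local image features and scores each pixel by similarity to the fused object vector; all dimensions and module choices are assumptions.

```python
import torch
import torch.nn as nn

class PromptFeatureFusion(nn.Module):
    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, text_emb, feat_map):
        # text_emb: (B, dim) prompt embedding from a vision-language model
        # feat_map: (B, dim, H, W) local image features
        B, D, H, W = feat_map.shape
        tokens = feat_map.flatten(2).transpose(1, 2)   # (B, H*W, D) pixel tokens
        q = text_emb.unsqueeze(1)                      # (B, 1, D) object-level query
        obj, _ = self.attn(q, tokens, tokens)          # query attends to all pixels
        # Cosine similarity between fused object vector and each pixel token.
        sim = torch.einsum(
            "bd,bnd->bn",
            nn.functional.normalize(obj.squeeze(1), dim=-1),
            nn.functional.normalize(tokens, dim=-1),
        )
        return sim.view(B, H, W)                       # per-pixel segmentation logits

fusion = PromptFeatureFusion()
mask_logits = fusion(torch.randn(2, 256), torch.randn(2, 256, 32, 32))
print(mask_logits.shape)  # torch.Size([2, 32, 32])
```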

Ductile damage models and cohesive laws incorporate material plasticity, entailing the growth of irrecoverable deformations even after complete failure. This unrealistic growth remains concealed until the unilateral effects arising from crack closure emerge. We address this issue by proposing a new strategy that covers the entire process of failure, from its inception in the form of diffuse damage to the final stage, i.e. the emergence of sharp cracks. To this end, we introduce a new strain field, termed the discontinuity strain, into the conventional additive strain decomposition to account for discontinuities in a continuous sense, so that the standard principle of virtual work applies. We treat this strain field like a strong discontinuity, yet without introducing new kinematic variables and nonlinear boundary conditions. In this paper, we demonstrate the effectiveness of this new strategy on a simple ductile damage constitutive model. The model uses a scalar damage index to control the degradation process. The discontinuity strain field is injected into the strain decomposition once this damage index exceeds a certain threshold. The threshold corresponds to the limit at which the induced imperfections merge and form a discrete crack. With three-point bending tests under pure mode I and mixed-mode conditions, we demonstrate that this augmentation does not show the early crack closure artifact that plastic damage formulations wrongly predict at load reversal. We also use the concrete damaged plasticity model provided in the Abaqus commercial finite element program for comparison. Lastly, a high-intensity low-cycle fatigue test demonstrates the unilateral effects resulting from the complete closure of the induced crack.
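A deliberately simplified 1D illustration of the idea (our own reduction, not the paper's finite element formulation): once the damage index exceeds the threshold, the inelastic strain is reinterpreted as a discontinuity strain, so that on load reversal the crack closes exactly at that strain and full stiffness is recovered. All parameters are hypothetical.

```python
E   = 200e3   # Young's modulus [MPa] (illustrative)
d_c = 0.95    # damage threshold at which diffuse damage becomes a crack

def stress_1d(eps, d, eps_d):
    """Stress for the additive split eps = eps_elastic + eps_d (1D sketch)."""
    if d < d_c:
        return (1.0 - d) * E * eps   # diffuse damage, no discontinuity strain yet
    if eps > eps_d:
        return 0.0                   # crack open: traction-free response
    return E * (eps - eps_d)         # crack closed at eps_d: full stiffness restored

# Tension with a fully formed crack (zero stress), then reversal past closure:
print(stress_1d(eps=2e-3, d=0.99, eps_d=1e-3))   # 0.0 (open crack)
print(stress_1d(eps=0.5e-3, d=0.99, eps_d=1e-3)) # compressive, stiff contact
```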

Quantum-key-distribution (QKD) networks are gaining importance, and it has become necessary to analyze the most appropriate methods for their long-distance interconnection. In this paper, four different methods of interconnecting remote QKD networks are proposed. The methods are used to link three different QKD testbeds in Europe, located in Berlin, Madrid, and Poznan. Although the long-distance QKD links are only emulated, the methods used can serve as a blueprint for the secure interconnection of distant QKD networks in the future. Specifically, the presented approaches combine, in a transparent way, different fiber and satellite physical media, as well as common standards of key-delivery interfaces. The testbed interconnections are designed to increase security by utilizing multipath techniques and multiple hybridizations of QKD and post-quantum cryptography (PQC) algorithms.
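One common way to hybridize keys from independent sources such as a QKD link and a PQC key exchange is to feed both secrets through a key derivation function, so that the session key remains secure if either source is compromised. The HKDF-style combiner sketched below is a generic assumption, not necessarily the combiner used in the testbeds.

```python
import hashlib, hmac, os

def hybrid_key(qkd_key: bytes, pqc_key: bytes, context: bytes = b"session-1") -> bytes:
    """HKDF-style extract-then-expand over the concatenated secrets (one block)."""
    prk = hmac.new(b"\x00" * 32, qkd_key + pqc_key, hashlib.sha256).digest()  # extract
    return hmac.new(prk, context + b"\x01", hashlib.sha256).digest()          # expand

# Demo with random stand-ins for the QKD-delivered and PQC-negotiated keys.
k = hybrid_key(os.urandom(32), os.urandom(32))
print(k.hex())
```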

We present ParrotTTS, a modularized text-to-speech synthesis model leveraging disentangled self-supervised speech representations. It can train a multi-speaker variant effectively using transcripts from a single speaker. ParrotTTS adapts to a new language in a low-resource setup and generalizes to languages not seen while training the self-supervised backbone. Moreover, without training on bilingual or parallel examples, ParrotTTS can transfer voices across languages while preserving speaker-specific characteristics, e.g., synthesizing fluent Hindi speech using a French speaker's voice and accent. We present extensive results in monolingual and multilingual scenarios. ParrotTTS outperforms state-of-the-art multilingual TTS models using only a fraction of the paired data required by the latter.

Model sparsification in deep learning promotes simpler, more interpretable models with fewer parameters. This not only reduces the model's memory footprint and computational needs but also shortens inference time. This work focuses on creating sparse models optimized for multiple tasks with fewer parameters; such parsimonious models also have the potential to match or outperform their dense counterparts. In this work, we introduce channel-wise l1/l2 group sparsity in the parameters (weights) of the shared convolutional layers of a multi-task learning model. This approach facilitates the removal of extraneous groups, i.e., channels (due to the l1 regularization), and also imposes a penalty on the weights, further enhancing the learning of all tasks (due to the l2 regularization). We analyze the results of group sparsity in both single-task and multi-task settings on two widely used Multi-Task Learning (MTL) datasets: NYU-v2 and CelebAMask-HQ. On both datasets, each of which comprises three different computer vision tasks, multi-task models with approximately 70% sparsity outperform their dense equivalents. We also investigate how changing the degree of sparsification influences the model's performance, the overall sparsity percentage, the patterns of sparsity, and the inference time.
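A minimal PyTorch sketch of such a channel-wise group penalty (the coefficient and architecture are illustrative assumptions): the l2 norm is taken over each output channel of the shared convolutional layers, and the l1 sum of these norms drives entire channels to zero.

```python
import torch
import torch.nn as nn

def group_sparsity_penalty(model: nn.Module, lam: float = 1e-4) -> torch.Tensor:
    penalty = torch.zeros(())
    for m in model.modules():
        if isinstance(m, nn.Conv2d):
            # weight shape (out_ch, in_ch, kH, kW) -> one l2 norm per output channel
            channel_norms = m.weight.flatten(1).norm(p=2, dim=1)
            penalty = penalty + channel_norms.sum()  # l1 sum over channel norms
    return lam * penalty

# Add the penalty to the sum of task losses during training.
shared_backbone = nn.Sequential(nn.Conv2d(3, 16, 3), nn.ReLU(), nn.Conv2d(16, 32, 3))
loss = group_sparsity_penalty(shared_backbone)
loss.backward()  # gradients flow into the shared conv weights
print(loss.item())
```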
