
In this paper we consider the numerical approximation of infinite horizon problems via the dynamic programming approach. The value function of the problem solves a Hamilton-Jacobi-Bellman (HJB) equation that is approximated by a fully discrete method. It is well known that such numerical problems are difficult to handle because of the so-called curse of dimensionality. To mitigate this issue we apply a model-order reduction by means of a new proper orthogonal decomposition (POD) method based on time derivatives. We carry out the error analysis of the method using recently proved optimal bounds for the fully discrete approximations. Moreover, the use of snapshots based on time derivatives allows us to bound some terms of the error that could not be bounded in a standard POD approach. Numerical experiments show the good performance of the method in practice.
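
As a rough illustration of the snapshot construction described above, the following sketch (our own, not the paper's algorithm) enriches a snapshot matrix with finite-difference approximations of the time derivatives before extracting the POD basis; the variable names and the uniform time step are assumptions.

```python
import numpy as np

# Illustrative sketch (not the paper's exact algorithm): build a POD basis
# from solution snapshots together with finite-difference approximations of
# their time derivatives, as suggested by the abstract.

def pod_basis_with_time_derivatives(snapshots, dt, ell):
    """snapshots: array of shape (n_dof, n_times); returns the first `ell` POD modes."""
    # Finite-difference time derivatives of the snapshots (assumption: uniform dt)
    derivatives = np.gradient(snapshots, dt, axis=1)
    # Enriched snapshot set: states and their time derivatives
    S = np.hstack([snapshots, derivatives])
    # POD modes = left singular vectors of the enriched snapshot matrix
    U, sigma, _ = np.linalg.svd(S, full_matrices=False)
    return U[:, :ell], sigma

# Example usage with synthetic data
rng = np.random.default_rng(0)
Y = rng.standard_normal((200, 50))        # 200 degrees of freedom, 50 time steps
modes, sigma = pod_basis_with_time_derivatives(Y, dt=0.01, ell=10)
Y_reduced = modes.T @ Y                   # reduced-order coordinates
```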

Related Content

The trace plot is seldom used in meta-analysis, yet it is a very informative plot. In this article we define and illustrate the trace plot and discuss why it is important. The Bayesian version of the plot combines the posterior density of tau, the between-study standard deviation, with the shrunken estimates of the study effects as a function of tau. With a small or moderate number of studies, tau is not estimated with much precision, and parameter estimates and shrunken study effect estimates can vary widely depending on the value of tau. The trace plot allows visualization of this sensitivity to tau alongside a display of which values of tau are plausible and which are implausible. A comparable frequentist or empirical Bayes version provides similar results. The concepts are illustrated using examples in meta-analysis and meta-regression; implementation in R is facilitated in a Bayesian or frequentist framework using the bayesmeta and metafor packages, respectively.
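
To make the frequentist/empirical-Bayes version concrete, the following sketch (with made-up data, not from the article) plots shrunken study effects as a function of tau under the usual normal-normal random-effects model; the bayesmeta and metafor packages mentioned above provide ready-made implementations.

```python
import numpy as np
import matplotlib.pyplot as plt

# Illustrative empirical-Bayes trace plot for a normal-normal random-effects
# meta-analysis: shrunken study effects as a function of the between-study
# standard deviation tau. The data below are invented for the sketch.

y = np.array([0.10, 0.35, -0.05, 0.48, 0.22])   # observed study effects (assumed)
s = np.array([0.15, 0.20, 0.10, 0.25, 0.18])    # within-study standard errors (assumed)

taus = np.linspace(0.0, 0.6, 200)
shrunken = np.empty((len(taus), len(y)))
for j, tau in enumerate(taus):
    w = 1.0 / (s**2 + tau**2)                    # inverse-variance weights
    mu_hat = np.sum(w * y) / np.sum(w)           # pooled mean given tau
    B = s**2 / (s**2 + tau**2)                   # shrinkage factors
    shrunken[j] = B * mu_hat + (1.0 - B) * y     # shrunken study effects

for i in range(len(y)):
    plt.plot(taus, shrunken[:, i], label=f"study {i + 1}")
plt.xlabel("tau (between-study SD)")
plt.ylabel("shrunken study effect")
plt.legend()
plt.show()
```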

In this paper, we propose a path re-planning algorithm that enables robots to work in scenarios with moving obstacles. The algorithm switches between a set of pre-computed paths to avoid collisions with moving obstacles, and it also improves the current path in an anytime fashion. The use of informed sampling enhances the search speed. Numerical results show the effectiveness of the strategy in different simulation scenarios.
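
As a simplified illustration of the switching idea (not the paper's algorithm), the sketch below keeps the current path while it remains collision-free and otherwise selects the shortest pre-computed alternative that clears all moving obstacles; the collision rule and data layout are assumptions.

```python
import numpy as np

# Simplified sketch of path switching: keep the current path while it is
# collision-free, otherwise switch to the shortest pre-computed alternative
# that clears all moving obstacles. Clearance rule and layout are illustrative.

def path_length(path):
    return float(np.sum(np.linalg.norm(np.diff(path, axis=0), axis=1)))

def is_collision_free(path, obstacles, radius):
    # path: (N, 2) waypoints; obstacles: (M, 2) current obstacle centres
    for p in path:
        if np.any(np.linalg.norm(obstacles - p, axis=1) < radius):
            return False
    return True

def select_path(current, candidates, obstacles, radius=0.5):
    if is_collision_free(current, obstacles, radius):
        return current
    free = [p for p in candidates if is_collision_free(p, obstacles, radius)]
    return min(free, key=path_length) if free else current

# Example: two candidate paths and one moving obstacle blocking the first
path_a = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
path_b = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.0]])
obstacles = np.array([[1.0, 0.1]])
chosen = select_path(path_a, [path_a, path_b], obstacles)   # switches to path_b
```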

In this paper, we consider the two-sample location shift model, a classic semiparametric model introduced by Stein (1956). This model is known for its adaptive nature, enabling nonparametric estimation with full parametric efficiency. Existing nonparametric estimators of the location shift often depend on external tuning parameters, which restricts their practical applicability (Van der Vaart and Wellner, 2021). We demonstrate that introducing an additional assumption of log-concavity on the underlying density can alleviate the need for tuning parameters. We propose a one-step estimator of the location shift, utilizing log-concave density estimation techniques to facilitate tuning-free estimation of the efficient influence function. While we employ a truncated version of the one-step estimator to establish theoretical adaptivity, our simulations indicate that the estimator performs best with zero truncation, eliminating the need for tuning in practical implementation.
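
For reference, a one-step estimator of this kind typically takes the generic form (our notation, not the paper's) $$\hat{\Delta}_{\mathrm{OS}} \;=\; \hat{\Delta}_{\mathrm{init}} \;+\; \frac{1}{n}\sum_{i=1}^{n} \hat{\psi}\big(Z_i;\hat{\Delta}_{\mathrm{init}}\big),$$ where $\hat{\Delta}_{\mathrm{init}}$ is an initial $\sqrt{n}$-consistent estimate of the shift and $\hat{\psi}$ is an estimate of the efficient influence function, here obtained from a log-concave density estimate rather than from a kernel estimator with a tuning parameter.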

This paper examines eight measures of skewness and the Mardia measure of kurtosis for skew-elliptical distributions. The multivariate measures of skewness considered are the Mardia, Malkovich-Afifi, Isogai, Song, Balakrishnan-Brito-Quiroz, Móri, Rohatgi and Székely, Kollo and Srivastava measures. We first study the canonical form of skew-elliptical distributions, and then derive exact expressions for all measures of skewness and kurtosis for the family of skew-elliptical distributions, except for Song's measure. Specifically, the formulas of these measures for the skew normal, skew $t$, skew logistic, skew Laplace, skew Pearson type II and skew Pearson type VII distributions are obtained. Next, as in Malkovich and Afifi (1973), test statistics based on a random sample are constructed to illustrate the usefulness of the established results. In a Monte Carlo simulation study, different measures of skewness and kurtosis for $2$-dimensional skewed distributions are calculated and compared. Finally, a real data set is analyzed to demonstrate the results.
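
For reference, the classical Mardia measures for a $d$-dimensional random vector $X$ with mean $\mu$ and covariance $\Sigma$ (with $Y$ an independent copy of $X$) are $$\beta_{1,d} = E\big[\{(X-\mu)^{\top}\Sigma^{-1}(Y-\mu)\}^{3}\big], \qquad \beta_{2,d} = E\big[\{(X-\mu)^{\top}\Sigma^{-1}(X-\mu)\}^{2}\big];$$ the remaining measures listed above are alternative multivariate generalizations of univariate skewness.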

Platform trials are a more efficient way of testing multiple treatments than running separate trials. In this paper we consider platform trials where, if a treatment is found to be superior to the control, it becomes the new standard of care (and the control in the platform). The remaining treatments are then tested against this new control. In such a setting, one can either keep the information collected on both the new standard of care and the other active treatments before the control is changed, or one can discard this information when testing for benefit of the remaining treatments. We show analytically and numerically that retaining the information collected before the change in control can be detrimental to the power of the study. Specifically, we consider the overall power, the probability that the active treatment with the greatest treatment effect is found during the trial, as well as the conditional power of the active treatments, the probability that a given treatment is found superior to the current control. We prove when, in a multi-arm multi-stage trial where no arms are added, retaining the information is detrimental to both the overall and conditional power of the remaining treatments. This loss of power is studied for a motivating example. We then discuss the effect on platform trials in which arms are added later. On the basis of these observations, we discuss the aspects to consider when deciding whether to run a continuous platform trial or whether one may be better off running a new trial.

In this paper, we prove the strong consistency of an estimator based on the truncated singular value decomposition for a multivariate errors-in-variables linear regression model with collinearity. This result extends Gleser's proof of the strong consistency of total least squares (TLS) solutions to the case with rank constraints. While the usual discussion of consistency in the absence of solution uniqueness deals with the minimal-norm solution, the contribution of this study is a theory that establishes the strong consistency of a set of solutions. The proof is based on properties of orthogonal projections, specifically properties of the Rayleigh-Ritz procedure for computing eigenvalues. This makes the approach suitable for problems in which some row vectors of the matrices contain no noise. Accordingly, this paper gives a proof for the regression model under this condition on the row vectors, resulting in a natural generalization of the strong consistency of the standard TLS estimator.
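
As a point of reference, the standard (untruncated) TLS estimator mentioned above can be computed from the SVD of the augmented data matrix; the sketch below shows this classical formula and omits the rank constraints and noise-free rows treated in the paper.

```python
import numpy as np

# Standard total least squares (TLS) estimate via the SVD of the augmented
# matrix [A B]; the paper's truncated-SVD estimator generalizes this to
# explicit rank constraints and noise-free rows, which this sketch omits.

def tls(A, B):
    """A: (m, n) noisy regressors, B: (m, d) noisy responses."""
    m, n = A.shape
    d = B.shape[1]
    C = np.hstack([A, B])
    _, _, Vt = np.linalg.svd(C, full_matrices=False)
    V = Vt.T
    V12 = V[:n, n:n + d]   # top-right block of V
    V22 = V[n:, n:n + d]   # bottom-right block of V
    return -V12 @ np.linalg.inv(V22)

# Example with synthetic noisy data
rng = np.random.default_rng(1)
X_true = np.array([[2.0], [-1.0], [0.5]])
A0 = rng.standard_normal((100, 3))
B0 = A0 @ X_true
A = A0 + 0.05 * rng.standard_normal(A0.shape)
B = B0 + 0.05 * rng.standard_normal(B0.shape)
X_hat = tls(A, B)
```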

We analyse the geometric instability of embeddings produced by graph neural networks (GNNs). Existing methods are applicable only to small graphs and lack context in the graph domain. We propose a simple, efficient and graph-native Graph Gram Index (GGI) to measure such instability; it is invariant to permutation, orthogonal transformation, translation and order of evaluation. This allows us to study the varying instability behaviour of GNN embeddings on large graphs for both node classification and link prediction.
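
The definition of GGI is not reproduced here; the sketch below only illustrates the underlying fact that makes Gram matrices attractive for such an index: the Gram matrix of centred embeddings is unchanged by translations and orthogonal transformations of the embedding space.

```python
import numpy as np

# Illustration only: this is NOT the paper's Graph Gram Index, just a
# demonstration that the Gram matrix of centred embeddings is invariant to
# translation and orthogonal transformation of the embedding space.

def centred_gram(Z):
    Zc = Z - Z.mean(axis=0, keepdims=True)   # removes translations
    return Zc @ Zc.T                          # invariant to Z -> Z Q for orthogonal Q

rng = np.random.default_rng(0)
Z = rng.standard_normal((6, 4))              # 6 nodes, 4-dimensional embeddings

# Random orthogonal matrix Q and a translation t
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))
t = rng.standard_normal(4)

G1 = centred_gram(Z)
G2 = centred_gram(Z @ Q + t)
print(np.allclose(G1, G2))                   # True: identical Gram matrices
```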

A novel method for detecting faults in power grids using a graph neural network (GNN) has been developed, aimed at enhancing intelligent fault diagnosis in network operation and maintenance. This GNN-based approach identifies faulty nodes within the power grid through a specialized electrical feature extraction model coupled with a knowledge graph. Incorporating temporal data, the method leverages the status of nodes from preceding and subsequent time periods to aid in current fault detection. To validate the effectiveness of the GNN in extracting node features, a correlation analysis of the output features from each node within the neural network layer was conducted. Experimental results show that the method locates fault nodes in simulated scenarios with 99.53% accuracy. Additionally, the graph neural network's feature modeling allows for a qualitative examination of how faults spread across nodes, providing valuable insight for fault analysis.
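
The electrical feature extraction model and knowledge graph are specific to the paper and are not detailed in the abstract; the sketch below only shows the generic graph-convolution step on which a GNN fault locator of this kind could be built, with node features that may stack measurements from neighbouring time periods (all names and shapes are illustrative assumptions).

```python
import numpy as np

# Generic GNN building block (illustrative, not the paper's model): one round of
# normalized-adjacency message passing followed by a per-node fault score.

def gcn_layer(A, H, W):
    """A: (n, n) adjacency, H: (n, f) node features, W: (f, f_out) weights."""
    A_hat = A + np.eye(A.shape[0])                 # add self-loops
    deg = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt       # symmetric normalization
    return np.maximum(A_norm @ H @ W, 0.0)         # ReLU(A_norm H W)

def fault_scores(A, H, W1, w2):
    Z = gcn_layer(A, H, W1)                        # node embeddings
    return 1.0 / (1.0 + np.exp(-(Z @ w2)))         # per-node fault probability

# Toy grid with 4 buses; features could stack current and neighbouring time steps
rng = np.random.default_rng(0)
A = np.array([[0, 1, 0, 0], [1, 0, 1, 1], [0, 1, 0, 1], [0, 1, 1, 0]], dtype=float)
H = rng.standard_normal((4, 8))                    # e.g. electrical features over time
scores = fault_scores(A, H, rng.standard_normal((8, 16)), rng.standard_normal(16))
```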

In this paper we develop a novel neural network model for predicting the implied volatility surface. Prior financial domain knowledge is taken into account. A new activation function that incorporates the volatility smile is proposed and used for the hidden nodes that process the underlying asset price. In addition, financial conditions, such as the absence of arbitrage, the boundaries and the asymptotic slope, are embedded into the loss function. This is one of the very first studies to discuss a methodological framework that incorporates prior financial domain knowledge into neural network architecture design and model training. The proposed model outperforms the benchmark models on S&P 500 index option data spanning 20 years. More importantly, the domain knowledge is satisfied empirically, showing that the model is consistent with the existing financial theories and conditions related to the implied volatility surface.
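
The paper's activation function and loss terms are not given in the abstract; as one hedged example of embedding a no-arbitrage condition into the loss, the sketch below penalizes calendar-spread violations, i.e. decreases of the total implied variance $w(k,T)=\sigma^{2}(k,T)\,T$ in maturity.

```python
import numpy as np

# Illustrative soft-constraint penalty (not the paper's exact loss): absence of
# calendar-spread arbitrage requires the total implied variance w(k, T) = sigma^2 * T
# to be non-decreasing in maturity T at fixed moneyness k. A hinge penalty on
# violations can be added to the data-fitting loss during training.

def calendar_arbitrage_penalty(sigma, maturities):
    """sigma: (n_k, n_T) predicted implied vols on a grid; maturities: (n_T,)."""
    w = sigma**2 * maturities[np.newaxis, :]              # total variance surface
    violations = np.maximum(w[:, :-1] - w[:, 1:], 0.0)    # decreases in T
    return violations.mean()

def total_loss(sigma_pred, sigma_market, maturities, lam=10.0):
    mse = np.mean((sigma_pred - sigma_market) ** 2)       # data-fitting term
    return mse + lam * calendar_arbitrage_penalty(sigma_pred, maturities)
```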

Nowadays, Convolutional Neural Networks (CNNs) have achieved impressive performance on many computer vision tasks, such as object detection, image recognition and image retrieval. These achievements stem from the CNNs' outstanding capability to learn input features with deep layers of neurons and an iterative training process. However, the learned features are hard to identify and interpret from a human vision perspective, leading to a limited understanding of the CNNs' internal working mechanism. To improve CNN interpretability, CNN visualization is widely used as a qualitative analysis method that translates internal features into visually perceptible patterns. Many CNN visualization works have been proposed in the literature to interpret the CNN in terms of network structure, operation and semantic concept. In this paper, we provide a comprehensive survey of several representative CNN visualization methods, including Activation Maximization, Network Inversion, Deconvolutional Neural Networks (DeconvNet) and Network Dissection based visualization. These methods are presented in terms of motivations, algorithms and experimental results. Based on these visualization methods, we also discuss their practical applications to demonstrate the significance of CNN interpretability in areas such as network design, optimization and security enhancement.
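
As a concrete example of the first of these methods, the sketch below shows a bare-bones Activation Maximization loop in PyTorch: gradient ascent on the input so that a chosen unit responds maximally. The model, unit index and hyper-parameters are placeholders, and practical versions add stronger image priors and regularization.

```python
import torch

# Sketch of Activation Maximization: optimize the input image so that a chosen
# unit (here, one output logit of `model`) responds maximally. Model, unit and
# hyper-parameters are placeholders; real uses add jitter, blurring or priors.

def activation_maximization(model, unit, steps=200, lr=0.1, weight_decay=1e-4):
    model.eval()
    x = torch.zeros(1, 3, 224, 224, requires_grad=True)   # start from a blank image
    optimizer = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        activation = model(x)[0, unit]
        # Maximize the activation; a small L2 term keeps the image bounded
        loss = -activation + weight_decay * x.pow(2).sum()
        loss.backward()
        optimizer.step()
    return x.detach()

# Example usage (assumes a torchvision classifier is available):
# from torchvision.models import resnet18
# image = activation_maximization(resnet18(weights="IMAGENET1K_V1"), unit=130)
```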
