
In this paper, we introduce a new class of models for spatial data obtained from max-convolution processes based on indicator kernels with random shape. We show that this class of models has appealing dependence properties, including tail dependence at short distances and independence at long distances. We further consider max-convolutions between such processes and processes with tail independence, in order to separately control the bulk and tail dependence behaviors and to increase the flexibility of the model at longer distances, in particular to capture intermediate tail dependence. We show how parameters can be estimated using a weighted pairwise likelihood approach, and we conduct an extensive simulation study to show that the proposed inference approach is feasible in high dimensions and that it yields accurate parameter estimates in most cases. We apply the proposed methodology to analyse daily temperature maxima measured at 100 monitoring stations in the state of Oklahoma, US. Our results indicate that our proposed model provides a good fit to the data, and that it captures both the bulk and the tail dependence structures accurately.
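The weighted pairwise likelihood idea can be sketched in a few lines. This is a minimal illustration, not the paper's model: it uses a stand-in Gaussian bivariate density with exponential correlation `rho(h) = exp(-h/theta)` in place of the max-convolution model's bivariate distribution, and binary distance weights that zero out long-range pairs.

```python
import numpy as np

def bvn_logpdf(x, y, rho):
    """Log-density of a standard bivariate normal with correlation rho."""
    q = (x**2 - 2 * rho * x * y + y**2) / (1 - rho**2)
    return -np.log(2 * np.pi) - 0.5 * np.log(1 - rho**2) - 0.5 * q

def weighted_pairwise_loglik(theta, data, coords, cutoff=0.7):
    """Weighted pairwise log-likelihood: pairs farther apart than `cutoff`
    get weight zero, which cuts cost and focuses the fit on the
    informative short-range dependence."""
    total = 0.0
    n = coords.shape[0]
    for i in range(n):
        for j in range(i + 1, n):
            h = np.linalg.norm(coords[i] - coords[j])
            if h > cutoff:
                continue  # weight w_ij = 0
            rho = np.exp(-h / theta)  # stand-in bivariate dependence
            total += bvn_logpdf(data[:, i], data[:, j], rho).sum()
    return total
```

Maximizing this objective over `theta` (e.g. on a grid or with a generic optimizer) gives the pairwise-likelihood estimate; the cutoff plays the role of the weighting scheme.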

Related content

Processing is the name of an open-source programming language and its accompanying integrated development environment (IDE). Processing is used in the electronic arts and visual design communities to teach the fundamentals of programming, and it is employed in a large number of new-media and interactive art works.

In many practical control applications, the performance level of a closed-loop system degrades over time due to the change of plant characteristics. Thus, there is a strong need for redesigning a controller without going through the system modeling process, which is often difficult for closed-loop systems. Reinforcement learning (RL) is one of the promising approaches that enable model-free redesign of optimal controllers for nonlinear dynamical systems based only on the measurement of the closed-loop system. However, the learning process of RL usually requires a considerable number of trial-and-error experiments using the poorly controlled system that may accumulate wear on the plant. To overcome this limitation, we propose a model-free two-step design approach that improves the transient learning performance of RL in an optimal regulator redesign problem for unknown nonlinear systems. Specifically, we first design a linear control law that attains some degree of control performance in a model-free manner, and then, train the nonlinear optimal control law with online RL by using the designed linear control law in parallel. We introduce an offline RL algorithm for the design of the linear control law and theoretically guarantee its convergence to the LQR controller under mild assumptions. Numerical simulations show that the proposed approach improves the transient learning performance and efficiency in hyperparameter tuning of RL.
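The paper's first step, obtaining a linear control law model-free, can be caricatured with least-squares Q-function policy iteration on a toy plant. This is a sketch under hypothetical numbers, not the paper's algorithm: a scalar system `x' = a*x + b*u` with stage cost `x^2 + u^2`, where the learner only sees sampled transitions, never `a` or `b`.

```python
import numpy as np

# Hypothetical unstable scalar plant and discount factor.
a, b, gamma = 1.2, 1.0, 0.9

def phi(x, u):
    """Quadratic Q-function features [x^2, x*u, u^2]."""
    return np.stack([x * x, x * u, u * u], axis=-1)

rng = np.random.default_rng(1)
x = rng.uniform(-2, 2, 2000)   # exploratory states
u = rng.uniform(-2, 2, 2000)   # exploratory inputs
xn = a * x + b * u             # measured next states (model unknown to learner)

k = 0.0                        # linear gain, policy u = k*x
Phi = phi(x, u)
cost = x * x + u * u
for _ in range(20):            # policy iteration via LSTD-Q
    Phin = phi(xn, k * xn)     # next-state features under current policy
    # Solve theta . (phi(x,u) - gamma*phi(x', k x')) = cost in least squares.
    theta = np.linalg.solve(Phi.T @ (Phi - gamma * Phin), Phi.T @ cost)
    k = -theta[1] / (2.0 * theta[2])  # greedy (minimizing) linear gain
```

The learned gain stabilizes the plant and approximates the discounted LQR gain; in the paper's scheme such a gain would then run in parallel while online RL trains the nonlinear controller.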

Creating a design from modular components necessitates three steps: acquiring knowledge about available components, conceiving an abstract design concept, and implementing that concept in a concrete design. The third step entails many repetitive and menial tasks, such as inserting parts and creating joints between them, and this issue is compounded when comparing and implementing design alternatives. We propose a use-case agnostic knowledge-driven framework to automate the implementation step. In particular, the framework catalogues the acquired knowledge and the design concept, and utilizes Combinatory Logic Synthesis to synthesize concrete design alternatives. This minimizes the effort required to create designs, allowing the design space to be thoroughly explored. We implemented the framework as a plugin for the CAD software Autodesk Fusion 360. We conducted a case study in which robotic arms were synthesized from a set of 28 modular components. Based on the case study, the applicability of the framework is analyzed and discussed.

The trace plot is seldom used in meta-analysis, yet it is a very informative plot. In this article we define and illustrate what the trace plot is, and discuss why it is important. The Bayesian version of the plot combines the posterior density of tau, the between-study standard deviation, and the shrunken estimates of the study effects as a function of tau. With a small or moderate number of studies, tau is not estimated with much precision, and parameter estimates and shrunken study effect estimates can vary widely depending on the true value of tau. The trace plot allows visualization of the sensitivity to tau along with a plot that shows which values of tau are plausible and which are implausible. A comparable frequentist or empirical Bayes version provides similar results. The concepts are illustrated using examples in meta-analysis and meta-regression; implementation in R is facilitated in a Bayesian or frequentist framework using the bayesmeta and metafor packages, respectively.
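The quantities plotted in a trace plot are easy to compute. A minimal sketch, using made-up study effects `y` and within-study variances `s2`: for each candidate tau, each study's shrunken effect is a weighted average of its own estimate and the pooled mean, with shrinkage factor `B_i = s_i^2 / (s_i^2 + tau^2)`.

```python
import numpy as np

def shrunken_effects(y, s2, tau):
    """Empirical-Bayes shrunken study effects at a given tau."""
    w = 1.0 / (s2 + tau**2)
    mu = np.sum(w * y) / np.sum(w)     # pooled mean at this tau
    B = s2 / (s2 + tau**2)             # shrinkage factor toward mu
    return (1 - B) * y + B * mu, mu

# Hypothetical meta-analysis data: effects and within-study variances.
y  = np.array([0.10, 0.35, -0.05, 0.52, 0.21])
s2 = np.array([0.04, 0.09, 0.06, 0.12, 0.05])

# Trace: shrunken effects over a grid of tau values (the x-axis of the plot).
taus = np.linspace(1e-6, 1.0, 50)
trace = np.array([shrunken_effects(y, s2, t)[0] for t in taus])
```

At tau near zero every study collapses onto the common mean; as tau grows each shrunken effect returns to its raw estimate `y_i`, which is exactly the sensitivity the trace plot visualizes.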

Many approaches have been proposed to use diffusion models to augment training datasets for downstream tasks, such as classification. However, diffusion models are themselves trained on large datasets, often with noisy annotations, and it remains an open question to what extent these models contribute to downstream classification performance. In particular, it remains unclear if they generalize enough to improve over directly using the additional data of their pre-training process for augmentation. We systematically evaluate a range of existing methods to generate images from diffusion models and study new extensions to assess their benefit for data augmentation. Personalizing diffusion models towards the target data outperforms simpler prompting strategies. However, using the pre-training data of the diffusion model alone, via a simple nearest-neighbor retrieval procedure, leads to even stronger downstream performance. Our study explores the potential of diffusion models in generating new training data, and surprisingly finds that these sophisticated models are not yet able to beat a simple and strong image retrieval baseline on simple downstream vision tasks.
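The retrieval baseline is conceptually simple. A minimal sketch, assuming precomputed image embeddings (e.g. from any pretrained encoder; the shapes here are illustrative): for each target image, take the k pool images with the highest cosine similarity and use the deduplicated union as the augmentation set.

```python
import numpy as np

def retrieve_augmentation(target_emb, pool_emb, k=2):
    """For each target embedding, retrieve the k nearest pool items by
    cosine similarity; return deduplicated pool indices as the
    augmentation set."""
    t = target_emb / np.linalg.norm(target_emb, axis=1, keepdims=True)
    p = pool_emb / np.linalg.norm(pool_emb, axis=1, keepdims=True)
    sims = t @ p.T                           # (n_target, n_pool)
    nn = np.argsort(-sims, axis=1)[:, :k]    # top-k per target
    return sorted(set(nn.ravel().tolist()))
```

The retrieved pool images are then simply added to the training set of the downstream classifier, with no generative model in the loop.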

In this paper, we consider the two-sample location shift model, a classic semiparametric model introduced by Stein (1956). This model is known for its adaptive nature, enabling nonparametric estimation with full parametric efficiency. Existing nonparametric estimators of the location shift often depend on external tuning parameters, which restricts their practical applicability (Van der Vaart and Wellner, 2021). We demonstrate that introducing an additional assumption of log-concavity on the underlying density can alleviate the need for tuning parameters. We propose a one-step estimator of the location shift, utilizing log-concave density estimation techniques to facilitate tuning-free estimation of the efficient influence function. While we employ a truncated version of the one-step estimator for theoretical adaptivity, our simulations indicate that the one-step estimators perform best with zero truncation, eliminating the need for tuning during practical implementation.
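The one-step construction is easy to illustrate in a simplified one-sample analog with a known log-concave density (the two-sample shift works along the same lines, with the efficient influence function estimated from the data). For the standard logistic, the location score is `tanh((x - theta)/2)` and the Fisher information is 1/3, so a single efficient update from any root-n-consistent pilot estimate (such as the median) reads:

```python
import numpy as np

def one_step_location(x, theta0):
    """One-step efficient update for the location of a logistic density:
    theta1 = theta0 + I^{-1} * mean(score), with score(u) = tanh(u/2)
    and Fisher information I = 1/3 for the standard logistic."""
    score = np.tanh((x - theta0) / 2.0)
    return theta0 + 3.0 * score.mean()
```

In the paper's setting the score and information are not known but estimated via log-concave density estimation, which is what makes the procedure tuning-free.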

We characterize the Schr\"odinger bridge problems by a family of McKean-Vlasov stochastic control problems with no terminal time distribution constraint. In doing so, we use the theory of Hilbert space embeddings of probability measures and then describe the constraint as penalty terms defined by the maximum mean discrepancy in the control problems. A sequence of the probability laws of the state processes resulting from $\epsilon$-optimal controls converges to a unique solution of the Schr\"odinger problem under mild conditions on given initial and terminal time distributions and an underlying diffusion process. We propose a neural-SDE-based deep learning algorithm for the McKean-Vlasov stochastic control problems. Several numerical experiments validate our methods.
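The maximum mean discrepancy penalty has a simple empirical form. A minimal sketch with a Gaussian RBF kernel (the kernel choice and bandwidth are illustrative): the biased squared-MMD estimate between two samples is a combination of three kernel means, and it is this quantity that replaces the hard terminal-distribution constraint in the control objective.

```python
import numpy as np

def mmd2(X, Y, bandwidth=1.0):
    """Biased estimate of squared maximum mean discrepancy between
    samples X and Y under a Gaussian RBF kernel."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * bandwidth ** 2))
    return k(X, X).mean() + k(Y, Y).mean() - 2.0 * k(X, Y).mean()
```

In training, `Y` would be a sample from the target terminal distribution and `X` the terminal states of the controlled neural SDE, with `mmd2` added to the running cost as a penalty.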

In this paper, we introduce the quantum adaptive distribution search (QuADS), a quantum continuous optimization algorithm that integrates Grover adaptive search (GAS) with the covariance matrix adaptation - evolution strategy (CMA-ES), a classical technique for continuous optimization. QuADS utilizes the quantum-based search capabilities of GAS and enhances them with the principles of CMA-ES for more efficient optimization. It employs a multivariate normal distribution for the initial state of the quantum search and repeatedly updates it throughout the optimization process. Our numerical experiments show that QuADS outperforms both GAS and CMA-ES. This is achieved through adaptive refinement of the initial state distribution rather than consistently using a uniform state, resulting in fewer oracle calls. This study presents an important step toward exploiting the potential of quantum computing for continuous optimization.
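The interplay of the two ingredients can be mimicked classically. This is a caricature, not QuADS itself: candidates are drawn from a multivariate normal search state, the best ones are kept (standing in for the threshold search GAS performs with its oracle), and the normal's mean and covariance are refit from those elites (the CMA-ES-style adaptation). All hyperparameters below are illustrative.

```python
import numpy as np

def quads_like_minimize(f, m0, sigma0=3.0, iters=30, pop=200, elite=20, seed=0):
    """Classical caricature of the QuADS loop: sample from a normal
    search state, select improving candidates, refit mean/covariance."""
    rng = np.random.default_rng(seed)
    m = np.asarray(m0, dtype=float)
    C = sigma0 ** 2 * np.eye(m.size)
    for _ in range(iters):
        X = rng.multivariate_normal(m, C, size=pop)
        vals = np.array([f(xi) for xi in X])
        E = X[np.argsort(vals)[:elite]]               # elite candidates
        m = E.mean(axis=0)                            # adapted mean
        C = np.cov(E, rowvar=False) + 1e-12 * np.eye(m.size)  # adapted covariance
    return m
```

The adaptive narrowing of the sampling distribution is what reduces the number of (oracle) evaluations relative to repeatedly searching from a uniform state.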

In this paper, we provide bounds in Wasserstein and total variation distances between the distributions of the successive iterates of two functional autoregressive processes with isotropic Gaussian noise of the form $Y_{k+1} = \mathrm{T}_\gamma(Y_k) + \sqrt{\gamma\sigma^2} Z_{k+1}$ and $\tilde{Y}_{k+1} = \tilde{\mathrm{T}}_\gamma(\tilde{Y}_k) + \sqrt{\gamma\sigma^2} \tilde{Z}_{k+1}$. More precisely, we give non-asymptotic bounds on $\rho(\mathcal{L}(Y_{k}),\mathcal{L}(\tilde{Y}_k))$, where $\rho$ is an appropriate weighted Wasserstein distance or a $V$-distance, uniformly in the parameter $\gamma$, and on $\rho(\pi_{\gamma},\tilde{\pi}_{\gamma})$, where $\pi_{\gamma}$ and $\tilde{\pi}_{\gamma}$ are the respective stationary measures of the two processes. The class of considered processes encompasses the Euler-Maruyama discretization of Langevin diffusions and its variants. The bounds we derive are of order $\gamma$ as $\gamma \to 0$. To obtain our results, we rely on the construction of a discrete sticky Markov chain $(W_k^{(\gamma)})_{k \in \mathbb{N}}$ which bounds the distance between an appropriate coupling of the two processes. We then establish stability and quantitative convergence results for this process uniformly in $\gamma$. In addition, we show that it converges in distribution to the continuous sticky process studied in previous work. Finally, we apply our results to Bayesian inference of ODE parameters and illustrate them numerically on two particular problems.
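The prototypical member of this class of processes is the unadjusted Langevin iteration, which in the paper's notation takes $\mathrm{T}_\gamma(y) = y - \gamma \nabla U(y)$. A minimal sketch targeting a standard normal (so $U(y) = y^2/2$ and $\sigma^2 = 2$); step size and chain length are illustrative:

```python
import numpy as np

def ula_chain(grad_U, y0, gamma, sigma2, n_steps, seed=0):
    """Euler-Maruyama / unadjusted Langevin iteration
    Y_{k+1} = Y_k - gamma * grad_U(Y_k) + sqrt(gamma * sigma2) * Z_{k+1},
    i.e. T_gamma(y) = y - gamma * grad_U(y)."""
    rng = np.random.default_rng(seed)
    y, out = y0, np.empty(n_steps)
    for k in range(n_steps):
        y = y - gamma * grad_U(y) + np.sqrt(gamma * sigma2) * rng.standard_normal()
        out[k] = y
    return out
```

The stationary law $\pi_\gamma$ of this chain differs from the diffusion's invariant measure by a discretization bias of order $\gamma$, which is exactly the kind of discrepancy the paper's bounds quantify.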

Nonparametric estimation of nonlocal interaction kernels is crucial in various applications involving interacting particle systems. The inference challenge, situated at the nexus of statistical learning and inverse problems, comes from the nonlocal dependency. A central question is whether the optimal minimax rate of convergence for this problem aligns with the rate of $M^{-\frac{2\beta}{2\beta+1}}$ in classical nonparametric regression, where $M$ is the sample size and $\beta$ represents the smoothness exponent of the radial kernel. Our study confirms this alignment for systems with a finite number of particles. We introduce a tamed least squares estimator (tLSE) that attains the optimal convergence rate for a broad class of exchangeable distributions. The tLSE bridges the smallest eigenvalue of random matrices and Sobolev embedding. This estimator relies on nonasymptotic estimates for the left tail probability of the smallest eigenvalue of the normal matrix. The lower minimax rate is derived using the Fano-Tsybakov hypothesis testing method. Our findings reveal that, provided the inverse problem in the large-sample limit satisfies a coercivity condition, the left tail probability does not alter the bias-variance tradeoff, and the optimal minimax rate remains intact. Our tLSE method offers a straightforward approach for establishing the optimal minimax rate for models with either local or nonlocal dependency.
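The estimation setup can be sketched on a toy first-order system $\dot X_i = \frac{1}{N}\sum_j \phi(|X_j - X_i|)(X_j - X_i)$. This is an illustration, not the paper's estimator: the kernel is expanded in a small polynomial basis, velocities are regressed on the induced features, and "taming" is caricatured as falling back to the zero estimator when the smallest eigenvalue of the normal matrix is too small. All sizes and the true kernel `phi(r) = 1 + r^2` are hypothetical, and the data are noiseless.

```python
import numpy as np

rng = np.random.default_rng(3)
N, M = 8, 200                                  # particles per configuration, configurations
basis = [lambda r: np.ones_like(r), lambda r: r, lambda r: r * r]
c_true = np.array([1.0, 0.0, 1.0])             # phi(r) = 1 + r^2

rows, targets = [], []
for _ in range(M):
    X = rng.uniform(0, 1, N)                   # 1-D particle positions
    diff = X[None, :] - X[:, None]             # x_j - x_i
    r = np.abs(diff)
    # regression features: g_k(X)_i = (1/N) sum_j b_k(r_ij) (x_j - x_i)
    G = np.stack([(b(r) * diff).mean(axis=1) for b in basis], axis=1)
    rows.append(G)
    targets.append(G @ c_true)                 # noiseless observed velocities
A = np.vstack(rows)
v = np.concatenate(targets)

# Taming caricature: refuse to invert an ill-conditioned normal matrix.
normal = A.T @ A / len(v)
lam_min = np.linalg.eigvalsh(normal)[0]
c_hat = np.zeros(3) if lam_min < 1e-10 else np.linalg.lstsq(A, v, rcond=None)[0]
```

The nonlocal dependency is visible in the features: each row of `A` mixes all pairwise distances within a configuration, which is why the smallest eigenvalue of `normal` (rather than classical design conditions) governs the estimator's behavior.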

A novel method for detecting faults in power grids using a graph neural network (GNN) has been developed, aimed at enhancing intelligent fault diagnosis in network operation and maintenance. This GNN-based approach identifies faulty nodes within the power grid through a specialized electrical feature extraction model coupled with a knowledge graph. Incorporating temporal data, the method leverages the status of nodes from preceding and subsequent time periods to aid in current fault detection. To validate the effectiveness of this GNN in extracting node features, a correlation analysis of the output features from each node within the neural network layer was conducted. The results from experiments show that this method can accurately locate fault nodes in simulated scenarios with a remarkable 99.53% accuracy. Additionally, the graph neural network's feature modeling allows for a qualitative examination of how faults spread across nodes, providing valuable insights for analyzing fault nodes.
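The feature-propagation step at the heart of such a GNN can be sketched in a few lines. This is a generic graph-convolution layer, not the paper's electrical feature extraction model; the 4-bus line grid and the choice of per-node features (e.g. voltage and current readings) are hypothetical.

```python
import numpy as np

def gcn_layer(adj, H, W):
    """One graph-convolution step: average each node's features with its
    neighbors' (self-loop included), then apply a linear map plus ReLU."""
    A_hat = adj + np.eye(adj.shape[0])             # add self-loops
    A_hat /= A_hat.sum(axis=1, keepdims=True)      # row-normalized propagation
    return np.maximum(0.0, A_hat @ H @ W)

# Hypothetical 4-bus line grid with features like [voltage, current].
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
rng = np.random.default_rng(0)
H = rng.normal(size=(4, 2))    # node features at one time step
W = rng.normal(size=(2, 3))    # layer weights (random here, learned in practice)
out = gcn_layer(adj, H, W)     # per-node hidden features
```

Stacking such layers lets a node's representation absorb the status of electrically adjacent nodes, which is what enables both fault localization and the qualitative analysis of how faults spread.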
