
In this article we consider the filtering problem associated with partially observed diffusions, where the observations follow a marked point process. In the model, the data form a point process whose observation-time intensity is driven by a diffusion, and the associated marks also depend on the diffusion process. We assume that one must resort to time-discretizing the diffusion process, and we develop particle and multilevel particle filters to recursively approximate the filter. In particular, we prove that our multilevel particle filter can achieve a mean square error (MSE) of $\mathcal{O}(\epsilon^2)$ ($\epsilon>0$ and arbitrary) at a cost of $\mathcal{O}(\epsilon^{-2.5})$, whereas a particle filter requires a cost of $\mathcal{O}(\epsilon^{-3})$ to achieve the same MSE. We then show how this methodology can be extended to give unbiased (that is, free of time-discretization error) estimators of the filter, which are proved to have finite variance and, with high probability, finite cost. Finally, we extend our methodology to the problem of online static-parameter estimation.
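The cost gain comes from coupling a fine and a coarse time discretization of the same diffusion so that their difference has small variance. Below is a minimal, self-contained sketch of this coupling for a scalar diffusion using Euler-Maruyama; the drift, diffusion coefficient, and step counts are illustrative assumptions, and the paper's filters additionally handle marked point-process observations and resampling.

```python
import numpy as np

def em_coupled(mu, sigma, x0, T, n_coarse, rng):
    """One coarse/fine pair of Euler-Maruyama paths for the scalar diffusion
    dX_t = mu(X_t) dt + sigma(X_t) dW_t, driven by the same Brownian
    increments so that the two discretization levels are tightly coupled."""
    dt = T / (2 * n_coarse)                  # fine step size
    xf = xc = x0
    for _ in range(n_coarse):
        dw1 = rng.normal(0.0, np.sqrt(dt))
        dw2 = rng.normal(0.0, np.sqrt(dt))
        xf += mu(xf) * dt + sigma(xf) * dw1  # two fine steps
        xf += mu(xf) * dt + sigma(xf) * dw2
        xc += mu(xc) * 2 * dt + sigma(xc) * (dw1 + dw2)  # one coarse step
    return xf, xc

rng = np.random.default_rng(0)
pairs = [em_coupled(lambda x: -x, lambda x: 0.5, 1.0, 1.0, 64, rng)
         for _ in range(1000)]
fine = np.array([f for f, _ in pairs])
coarse = np.array([c for _, c in pairs])
print("var of fine:", fine.var(), "var of correction:", (fine - coarse).var())
```

Since the fine-minus-coarse correction has much smaller variance than the fine path itself, most samples can be allocated to cheap coarse levels, which is where the $\mathcal{O}(\epsilon^{-2.5})$ versus $\mathcal{O}(\epsilon^{-3})$ saving comes from.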


Processing is an open-source programming language and the name of its accompanying integrated development environment (IDE). Processing is used in the electronic arts and visual design communities to teach programming fundamentals, and it has been used in a large number of new media and interactive art works.

During Percutaneous Nephrolithotomy (PCNL) operations, the surgeon must define the incision point on the patient's back, align the needle to a pre-planned path, and then perform the puncture. The procedure is currently performed manually using ultrasound or fluoroscopy imaging for needle orientation, which implies limited accuracy and low reproducibility. This work combines Augmented Reality (AR) visualization on an optical see-through head-mounted display (OST-HMD) with a Human-Robot Collaboration (HRC) framework to improve the surgeon's task-completion performance. In detail, Eye-to-Hand calibration, system registration, and hologram model registration are performed to realize visual guidance. A Cartesian impedance controller guides the operator during execution of the needle puncture. Experiments are conducted to compare the system with conventional manual puncture and with a 2D monitor-based visualization interface. The results show that the proposed framework achieves the lowest median error and the lowest standard deviation across all experimental groups. Furthermore, the NASA-TLX user evaluation indicates that the proposed framework requires the lowest workload score for task completion compared to the other experimental setups. The proposed framework exhibits significant potential for clinical application in the PCNL task, as it enhances the surgeon's perception capability, facilitates collision-free needle insertion path planning, and minimizes errors in task completion.
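For intuition, a Cartesian impedance controller renders the tool as a spring-damper anchored to the planned path, so guidance is compliant rather than rigid. The sketch below shows the basic control law with made-up gains; it is a simplification, not the paper's controller.

```python
import numpy as np

# Hypothetical gains; real values depend on the robot and the task.
K = np.diag([400.0, 400.0, 400.0])   # Cartesian stiffness [N/m]
D = np.diag([40.0, 40.0, 40.0])      # Cartesian damping  [N*s/m]

def impedance_wrench(x, x_dot, x_des):
    """Restoring force pulling the tool toward the planned needle path.
    The commanded wrench behaves like a spring-damper, so the operator
    can still deviate by pushing against a compliant constraint."""
    return K @ (x_des - x) - D @ x_dot

# Example: tool 2 cm off the path and slowly drifting further away.
f = impedance_wrench(np.array([0.02, 0.0, 0.0]),
                     np.array([0.005, 0.0, 0.0]),
                     np.zeros(3))
print(f)   # spring force toward the path minus a damping term
```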

In this paper, we first extend the result of FL93 and prove universal consistency for a classification rule based on wide and deep ReLU neural networks trained on the logistic loss. Unlike the approach in FL93, which decomposes the estimation and empirical error, we directly analyze the classification risk, based on the observation that a sufficiently wide neural network can interpolate an arbitrary number of points. Secondly, we give sufficient conditions on a class of probability measures under which classifiers based on neural networks achieve minimax optimal rates of convergence. Our result is motivated by the practical observation that neural networks are often trained to zero training error, which is the case for our proposed neural network classifiers. Our proofs hinge on recent developments in empirical risk minimization and on approximation rates of deep ReLU neural networks for various function classes of interest. Applications to classical function classes defined by smoothness illustrate the usefulness of our result.
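The interpolation property driving the analysis is easy to observe numerically. The sketch below is not the authors' construction: for simplicity it fixes a wide random ReLU layer and trains only the output weights on the logistic loss, which typically drives the training error on arbitrary labels to zero once the width far exceeds the sample size.

```python
import numpy as np

rng = np.random.default_rng(0)
n, width = 20, 512                          # far more hidden units than samples
X = np.linspace(-1.0, 1.0, n).reshape(-1, 1)
y = np.where(rng.random(n) < 0.5, 1.0, -1.0)     # arbitrary labels to memorize

# Wide random ReLU layer, kept fixed for simplicity; only w2 is trained.
W1 = rng.normal(size=(width, 1))
b1 = rng.normal(size=width)
H = np.maximum(X @ W1.T + b1, 0.0) / np.sqrt(width)   # (n, width) features

w2 = np.zeros(width)
for _ in range(3000):                        # gradient descent on logistic loss
    margins = y * (H @ w2)
    grad = -(H.T @ (y / (1.0 + np.exp(margins)))) / n
    w2 -= 2.0 * grad

print("training error:", np.mean(np.sign(H @ w2) != y))   # typically 0.0
```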

Digital interactions in which users share textual content published by others are naturally represented by a network in which the individuals are associated with the nodes and the exchanged texts with the edges. To understand such heterogeneous and complex data structures, both clustering the nodes into homogeneous groups and rendering a comprehensible visualisation of the data are essential. To address both issues, we introduce Deep-LPTM, a model-based clustering strategy relying on a variational graph auto-encoder together with a probabilistic model that characterises the topics of discussion. Deep-LPTM builds a joint representation of the nodes and of the edges in two embedding spaces. The parameters are inferred using a variational inference algorithm. We also introduce IC2L, a model selection criterion specifically designed to choose models with relevant clustering and visualisation properties. An extensive benchmark study on synthetic data is provided. In particular, we find that Deep-LPTM recovers the partitions of the nodes better than the state-of-the-art ETSBM and STBM. Finally, the emails of the Enron company are analysed, and visualisations of the results are presented, with meaningful highlights of the graph structure.
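Deep-LPTM itself couples a graph auto-encoder with a topic model, but the generic pipeline it builds on, embedding nodes and then clustering them in the embedding space, can be illustrated with off-the-shelf tools. The sketch below uses a spectral embedding and k-means on a planted two-block graph; this is a simple baseline, not the authors' method.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.manifold import SpectralEmbedding

# Toy graph: two noisy 30-node blocks, a stand-in for a message network.
rng = np.random.default_rng(0)
n = 60
labels_true = np.repeat([0, 1], n // 2)
p = np.where(labels_true[:, None] == labels_true[None, :], 0.2, 0.02)
A = (rng.random((n, n)) < p).astype(float)
A = np.triu(A, 1); A = A + A.T                     # symmetric, no self-loops

# Embed the nodes, then cluster them in the embedding space.
Z = SpectralEmbedding(n_components=2, affinity="precomputed").fit_transform(A)
labels_hat = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(Z)
print(labels_hat)   # should largely recover the two planted blocks
```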

In this manuscript, we propose an efficient manifold denoiser based on landmark diffusion and optimal shrinkage, under a setup with complicated high-dimensional noise and a compact manifold. It flexibly handles several settings, including a high ambient space dimension with a manifold embedding occupying a subspace of high or low dimension, and noise that may be colored and dependent. A systematic comparison with existing algorithms on both simulated and real datasets is provided. This manuscript is mainly algorithmic; we report on several existing tools and numerical results. Theoretical guarantees and further comparisons will be reported in the full version of this work.
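As a flavor of shrinkage-based spectral denoising, the sketch below hard-thresholds singular values at the Marchenko-Pastur bulk edge of the noise. This is a simplified stand-in for the finer optimal shrinkage used in the manuscript, applied to a synthetic rank-one signal with known noise level sigma.

```python
import numpy as np

def denoise_hard_threshold(Y, sigma):
    """Simplified spectral denoiser: keep only singular values above the
    Marchenko-Pastur bulk edge of the noise (a stand-in for the finer
    'optimal shrinkage' of singular values)."""
    n, p = Y.shape
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    beta = min(n, p) / max(n, p)
    edge = sigma * np.sqrt(max(n, p)) * (1.0 + np.sqrt(beta))  # noise bulk edge
    return U @ np.diag(np.where(s > edge, s, 0.0)) @ Vt

# Rank-one signal plus white noise.
rng = np.random.default_rng(0)
n, p, sigma = 200, 100, 1.0
signal = 50.0 * np.outer(rng.normal(size=n), rng.normal(size=p)) / np.sqrt(n * p)
Y = signal + sigma * rng.normal(size=(n, p))
print(np.linalg.norm(denoise_hard_threshold(Y, sigma) - signal) /
      np.linalg.norm(Y - signal))   # typically < 1: denoising helped
```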

With the aim of providing teachers with more specific, frequent, and actionable feedback about their teaching, we explore how Large Language Models (LLMs) can be used to estimate ``Instructional Support'' domain scores of the CLassroom Assessment Scoring System (CLASS), a widely used observation protocol. We design a machine learning architecture that uses zero-shot prompting of Meta's Llama2, a classic Bag of Words (BoW) model, or both, to classify individual utterances of teachers' speech (transcribed automatically using OpenAI's Whisper) for the presence of Instructional Support. These utterance-level judgments are then aggregated over an entire 15-minute observation session to estimate a global CLASS score. Experiments on two CLASS-coded datasets of toddler and pre-kindergarten classrooms indicate that (1) automatic CLASS Instructional Support estimation accuracy using the proposed method (Pearson $R$ up to $0.47$) approaches human inter-rater reliability (up to $R=0.55$); (2) LLMs yield slightly greater accuracy than BoW for this task, though the best models often combine features extracted from both the LLM and BoW; and (3) for classifying individual utterances, automated methods still leave room for improvement compared to human-level judgments. Finally, (4) we illustrate how the model's outputs can be visualized at the utterance level to provide teachers with explainable feedback on which utterances were most positively or negatively correlated with specific CLASS dimensions.
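The aggregation step can be made concrete with a small sketch. All data and the pooling rule below are hypothetical (the paper's exact aggregator may differ): utterance-level probabilities are pooled into a session score, which is then evaluated against human CLASS scores via Pearson correlation.

```python
import numpy as np

def session_score(utterance_probs, threshold=0.5):
    """Aggregate utterance-level classifier outputs into one session-level
    estimate: here, the fraction of utterances flagged as showing
    Instructional Support (one plausible aggregator, chosen for illustration)."""
    return np.mean(np.asarray(utterance_probs) >= threshold)

# Hypothetical data: per-session utterance probabilities vs. human CLASS scores.
rng = np.random.default_rng(0)
human = rng.uniform(1, 7, size=30)                      # CLASS scores, 1-7 scale
sessions = [np.clip(rng.normal(h / 7, 0.2, size=200), 0, 1) for h in human]
auto = np.array([session_score(s) for s in sessions])
r = np.corrcoef(auto, human)[0, 1]                      # evaluation metric
print(f"Pearson R = {r:.2f}")
```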

Compared to the generations up to 4G, whose main focus was on broadband and coverage aspects, 5G has expanded the scope of wireless cellular systems towards embracing two new types of connectivity: massive machine-type communication (mMTC) and ultra-reliable low-latency communication (URLLC). This paper discusses the possible evolution of these two types of connectivity within the umbrella of 6G wireless systems. The paper consists of three parts. The first part deals with connectivity for a massive number of devices. While mMTC research in 5G was predominantly focused on the problem of uncoordinated uplink access for a large number of devices, the traffic patterns in 6G may become more symmetric, leading to closed-loop massive connectivity. One of the drivers for this is distributed learning/inference. The second part discusses the evolution of wireless connectivity for critical services. While latency and reliability are tightly coupled in 5G, 6G will support a variety of safety-critical control applications with different types of timing requirements, as evidenced by the emergence of metrics related to information freshness and information value. Additionally, ensuring ultra-high reliability for safety-critical control applications requires modeling and estimating the tail statistics of the wireless channel, queue length, and delay. Fulfilling these stringent requirements calls for the development of novel AI-based techniques that incorporate optimization theory, explainable AI, generative AI, and digital twins. The third part analyzes the coexistence of massive connectivity and critical services. We consider scenarios in which a massive number of devices must support traffic patterns of mixed criticality, followed by a discussion of the management of wireless resources shared by services with different criticality levels.
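As one concrete instance of tail-statistics estimation, extreme value theory fits a generalized Pareto distribution to excesses over a high threshold (peaks over threshold) and extrapolates rare-event quantiles. The sketch below uses simulated delays and illustrative parameters; it is a generic technique, not a method proposed in this paper.

```python
import numpy as np
from scipy.stats import genpareto

# Peaks-over-threshold sketch: fit a generalized Pareto tail to simulated
# delays, then read off the delay exceeded with probability 1e-5.
rng = np.random.default_rng(0)
delays = rng.lognormal(mean=0.0, sigma=0.5, size=200_000)  # stand-in delays
u = np.quantile(delays, 0.99)                              # tail threshold
excess = delays[delays > u] - u
c, loc, scale = genpareto.fit(excess, floc=0.0)
p_exceed_u = np.mean(delays > u)
target = 1e-5
quantile = u + genpareto.ppf(1 - target / p_exceed_u, c, loc=0.0, scale=scale)
print(f"estimated 1-in-100000 delay: {quantile:.2f}")
```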

In this paper we prove convergence rates for time discretisation schemes for semi-linear stochastic evolution equations with additive or multiplicative Gaussian noise, where the leading operator $A$ is the generator of a strongly continuous semigroup $S$ on a Hilbert space $X$, and the focus is on non-parabolic problems. The main results are optimal bounds for the uniform strong error $$\mathrm{E}_{k}^{\infty} := \Big(\mathbb{E} \sup_{j\in \{0, \ldots, N_k\}} \|U(t_j) - U^j\|^p\Big)^{1/p},$$ where $p \in [2,\infty)$, $U$ is the mild solution, $U^j$ is obtained from a time discretisation scheme, $k$ is the step size, and $N_k = T/k$. Standard schemes such as exponential Euler, implicit Euler, and Crank-Nicolson are included as special cases. Under conditions on the nonlinearity and the noise, we show, for a large class of time discretisation schemes:
- $\mathrm{E}_{k}^{\infty}\lesssim k \log(T/k)$ (linear equation, additive noise, general $S$);
- $\mathrm{E}_{k}^{\infty}\lesssim \sqrt{k} \log(T/k)$ (nonlinear equation, multiplicative noise, contractive $S$);
- $\mathrm{E}_{k}^{\infty}\lesssim k \log(T/k)$ (nonlinear wave equation, multiplicative noise).
The logarithmic factor can be removed if the exponential Euler method is used with a (quasi-)contractive $S$. The obtained bounds coincide with the optimal bounds for SDEs. Most of the existing literature is concerned with bounds for the simpler pointwise strong error $$\mathrm{E}_k:=\bigg(\sup_{j\in \{0,\ldots,N_k\}}\mathbb{E} \|U(t_j) - U^{j}\|^p\bigg)^{1/p}.$$ Applications to Maxwell equations, Schrödinger equations, and wave equations are included. For these equations, our results improve and reprove several existing results with a unified method, and they provide the first known results for implicit Euler and Crank-Nicolson.
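For concreteness, here is a minimal sketch of one common variant of the exponential Euler scheme, applied to a finite-dimensional (spectrally truncated) semilinear system with diagonal $A$; the coefficients are illustrative assumptions, and the sketch does not reproduce the paper's infinite-dimensional setting.

```python
import numpy as np

# Exponential Euler (one common variant) for the semilinear system
#   dU = (A U + F(U)) dt + G(U) dW,  with A diagonal (spectral truncation).
rng = np.random.default_rng(0)
d, T, N = 50, 1.0, 200
k = T / N                                # step size
lam = -np.arange(1, d + 1) ** 2          # eigenvalues of A (heat-like spectrum)
E = np.exp(lam * k)                      # exp(A k) acting on the diagonal

F = lambda u: np.sin(u)                  # illustrative Lipschitz nonlinearity
G = lambda u: 0.1 / (1.0 + u ** 2)       # illustrative diffusion coefficient

U = np.ones(d)
for _ in range(N):
    dW = rng.normal(0.0, np.sqrt(k), size=d)
    U = E * (U + k * F(U) + G(U) * dW)   # semigroup applied after the increment
print(U[:5])
```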

In real life, we frequently come across data sets in which one or more independent explanatory variables generate a set of ordinal responses. These ordinal responses may correspond to an underlying continuous latent variable, which is linearly related to the covariate(s) and takes a particular (ordinal) label depending on whether the latent variable falls in an interval specified by a pair of (unknown) cut-offs. The most efficient way of estimating the unknown parameters (i.e., the regression coefficients and the cut-offs) is the method of maximum likelihood (ML). However, contamination in the data set, either in the form of misspecified ordinal responses or unbounded covariate values, can destabilize the likelihood to such an extent that ML-based methodology leads to completely unreliable inferences. In this paper, we explore a minimum distance estimation procedure based on the popular density power divergence (DPD) to yield robust parameter estimates for the ordinal response model. This paper highlights how the resulting estimator, namely the minimum DPD estimator (MDPDE), can be used as a practical robust alternative to the classical ML-based procedures. We rigorously develop several theoretical properties of this estimator and provide extensive simulations to substantiate the theory.
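The MDPDE minimizes the empirical density power divergence $\int f_\theta^{1+\alpha}\,dx - (1+1/\alpha)\,n^{-1}\sum_i f_\theta(X_i)^\alpha$, which downweights observations in low-density regions. The paper applies this to the ordinal-response likelihood; the sketch below illustrates the same estimator on a simpler Gaussian location-scale model with 5% contamination (all settings illustrative).

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def dpd_objective(theta, x, alpha=0.5):
    """Empirical density power divergence for an N(mu, s^2) model:
    int f^(1+a) dx - (1 + 1/a) * mean(f(x)^a), with the integral in
    closed form for the Gaussian."""
    mu, log_s = theta
    s = np.exp(log_s)                      # parametrize s > 0 via log-scale
    integral = (2 * np.pi * s ** 2) ** (-alpha / 2) / np.sqrt(1 + alpha)
    return integral - (1 + 1 / alpha) * np.mean(norm.pdf(x, mu, s) ** alpha)

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1, 95), rng.normal(10, 1, 5)])  # 5% outliers
theta_hat = minimize(dpd_objective, x0=[np.median(x), 0.0], args=(x,)).x
print("MLE mean:", x.mean())               # dragged toward the outliers
print("MDPDE mean:", theta_hat[0])         # stays near 0
```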

This article presents the affordances that Generative Artificial Intelligence can have in the disinformation context, one of the major threats to our digitalized society. We present a research framework for generating customized agent-based social networks for disinformation simulations that enables understanding and evaluation of the phenomenon, and we discuss open challenges.

We consider the problem of explaining the predictions of graph neural networks (GNNs), which are otherwise treated as black boxes. Existing methods invariably focus on explaining the importance of graph nodes or edges but ignore the substructures of graphs, which are more intuitive and human-intelligible. In this work, we propose a novel method, SubgraphX, to explain GNNs by identifying important subgraphs. Given a trained GNN model and an input graph, SubgraphX explains the model's predictions by efficiently exploring different subgraphs with Monte Carlo tree search. To make the tree search more effective, we propose to use Shapley values as a measure of subgraph importance, which can also capture the interactions among different subgraphs. To expedite computation, we propose efficient approximation schemes for computing Shapley values on graph data. Our work represents the first attempt to explain GNNs by identifying subgraphs explicitly and directly. Experimental results show that SubgraphX achieves significantly improved explanations while keeping computation at a reasonable level.
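Shapley values are typically approximated by sampling permutations and averaging marginal contributions. The sketch below shows this generic Monte Carlo estimator on a toy value function with an interaction between two players (standing in for nodes of a candidate subgraph); it is not the paper's GNN-specific approximation scheme.

```python
import numpy as np

def shapley_mc(value_fn, players, n_samples=2000, rng=None):
    """Monte Carlo Shapley values: average each player's marginal
    contribution over random permutations. Here 'players' stand in for
    candidate nodes of a subgraph and value_fn(coalition) for the model's
    prediction with only that coalition present."""
    if rng is None:
        rng = np.random.default_rng(0)
    phi = np.zeros(len(players))
    for _ in range(n_samples):
        coalition, prev = set(), value_fn(set())
        for i in rng.permutation(len(players)):
            coalition.add(players[i])
            cur = value_fn(coalition)
            phi[i] += cur - prev
            prev = cur
    return phi / n_samples

# Toy value function: the coalition {0, 1} unlocks most of the value, so the
# estimated Shapley values for players 0 and 1 reflect their interaction.
v = lambda S: 1.0 if {0, 1} <= S else 0.1 * len(S)
print(shapley_mc(v, players=[0, 1, 2, 3]))
```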
