
In this paper, we study the error in first order Sobolev norm in the approximation of solutions to linear parabolic PDEs. We use a Monte Carlo Euler scheme obtained from combining the Feynman--Kac representation with an Euler discretization of the underlying stochastic process. We derive approximation rates depending on the time-discretization, the number of Monte Carlo simulations, and the dimension. In particular, we show that the Monte Carlo Euler scheme breaks the curse of dimensionality with respect to the first order Sobolev norm. Our argument is based on new estimates on the weak error of the Euler approximation of a diffusion process together with its derivative with respect to the initial condition. As a consequence, we obtain that neural networks are able to approximate solutions of linear parabolic PDEs in first order Sobolev norm without the curse of dimensionality if the coefficients of the PDEs admit an efficient approximation with neural networks.
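As a toy illustration of such a scheme (not the paper's construction), the sketch below combines the Feynman--Kac representation u(0, x) = E[g(X_T)] with an Euler--Maruyama discretization, on a heat-equation example where the exact value is known; the coefficients `mu`, `sigma` and the payoff `g` are illustrative choices.

```python
import numpy as np

def mc_euler(x0, g, mu, sigma, T, n_steps, n_paths, rng):
    """Monte Carlo Euler estimate of u(0, x0) = E[g(X_T)] via Feynman--Kac,
    for dX = mu(X) dt + sigma(X) dW with diagonal noise."""
    d = len(x0)
    dt = T / n_steps
    X = np.tile(np.asarray(x0, dtype=float), (n_paths, 1))
    for _ in range(n_steps):
        dW = rng.normal(scale=np.sqrt(dt), size=(n_paths, d))
        X = X + mu(X) * dt + sigma(X) * dW  # one Euler--Maruyama step
    return g(X).mean()

# Sanity check on the heat equation in d = 10 (mu = 0, sigma = 1):
# u(0, x) = E[||x + W_T||^2] = ||x||^2 + d*T exactly.
rng = np.random.default_rng(0)
d, T = 10, 1.0
est = mc_euler(np.zeros(d), lambda X: (X**2).sum(axis=1),
               lambda X: 0.0, lambda X: 1.0, T, 50, 20000, rng)
print(est)  # close to d*T = 10
```

The cost grows only linearly in the dimension d, which is the mechanism behind the curse-of-dimensionality statements above.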


In this paper, we address the problem of modeling data with periodic autoregressive (PAR) time series and additive noise. In most cases, the data are processed assuming a noise-free model (i.e., without additive noise), which is not a realistic assumption in real life. The first two steps in PAR model identification are order selection and period estimation, so the main focus is on these issues. Finally, the model should be validated, so a procedure for analyzing the residuals, which are considered here as multidimensional vectors, is proposed. Both order and period selection, as well as model validation, are addressed by using the characteristic function (CF) of the residual series. The CF is used to obtain the probability density function, which is utilized in the information criterion and for residual distribution testing. To complete the PAR model analysis, a procedure for estimating the coefficients is necessary. However, this issue is only mentioned here as it is a separate task (under consideration in parallel). The presented methodology can be considered as the general framework for analyzing data with periodically non-stationary characteristics disturbed by finite-variance external noise. The original contribution is in the selection of the optimal model order and period identification, as well as the analysis of residuals. All these findings have been inspired by our previous work on machine condition monitoring that used PAR modeling.
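For readers unfamiliar with the PAR structure, the deferred coefficient-estimation step can be sketched for the simplest case, a PAR(1) model with known period; `simulate_par1` and `fit_par1` are hypothetical helper names, and this does not implement the paper's CF-based order and period selection.

```python
import numpy as np

def simulate_par1(phi, n, sigma, rng):
    """Simulate a periodic AR(1): X_t = phi[t % P] * X_{t-1} + eps_t."""
    P = len(phi)
    x = np.zeros(n)
    eps = rng.normal(scale=sigma, size=n)
    for t in range(1, n):
        x[t] = phi[t % P] * x[t - 1] + eps[t]
    return x

def fit_par1(x, P):
    """Least-squares estimate of the P phase-dependent AR(1) coefficients."""
    phi_hat = np.zeros(P)
    t = np.arange(1, len(x))
    for s in range(P):
        mask = (t % P) == s
        xt, xp = x[t[mask]], x[t[mask] - 1]
        phi_hat[s] = (xp @ xt) / (xp @ xp)   # per-phase regression slope
    return phi_hat

rng = np.random.default_rng(1)
phi_true = np.array([0.8, -0.3, 0.5])
x = simulate_par1(phi_true, 30000, 1.0, rng)
phi_hat = fit_par1(x, 3)
print(phi_hat)  # approximately [0.8, -0.3, 0.5]
```

With the period P known, estimation decouples into P ordinary regressions; the paper's contribution lies precisely in choosing P and the order when they are unknown.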

To minimize the average of a set of log-convex functions, the stochastic Newton method iteratively updates its estimate using subsampled versions of the full objective's gradient and Hessian. We contextualize this optimization problem as sequential Bayesian inference on a latent state-space model with a discriminatively-specified observation process. Applying Bayesian filtering then yields a novel optimization algorithm that considers the entire history of gradients and Hessians when forming an update. We establish matrix-based conditions under which the effect of older observations diminishes over time, in a manner analogous to Polyak's heavy ball momentum. We illustrate various aspects of our approach with an example and review other relevant innovations for the stochastic Newton method.
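For orientation, here is the plain subsampled Newton baseline that the filtering view generalizes, sketched for regularized logistic loss (an illustrative choice); the paper's algorithm additionally carries the entire history of gradients and Hessians, which this sketch omits.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30.0, 30.0)))

def stochastic_newton(A, y, n_iters, batch, lam, rng):
    """theta <- theta - H_S^{-1} g_S, with gradient and Hessian of the
    regularized logistic loss computed on a fresh minibatch S each step."""
    n, d = A.shape
    theta = np.zeros(d)
    for _ in range(n_iters):
        S = rng.choice(n, size=batch, replace=False)
        As, ys = A[S], y[S]
        p = sigmoid(As @ theta)
        g = As.T @ (p - ys) / batch + lam * theta          # subsampled gradient
        W = p * (1.0 - p)
        H = (As * W[:, None]).T @ As / batch + lam * np.eye(d)  # subsampled Hessian
        theta = theta - np.linalg.solve(H, g)
    return theta

rng = np.random.default_rng(2)
n, d = 5000, 5
theta_star = np.array([1.0, -2.0, 0.5, 0.0, 1.5])
A = rng.normal(size=(n, d))
y = (rng.random(n) < sigmoid(A @ theta_star)).astype(float)
theta = stochastic_newton(A, y, 50, 1000, 1e-3, rng)
print(theta)
```

The iterate roughly recovers `theta_star`, but keeps bouncing within minibatch noise; downweighting old curvature information, as in the filtering view above, is one way to control that noise.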

We report some results regarding the mechanization of normative (preference-based) conditional reasoning. Our focus is on Åqvist's system E for conditional obligation (and its extensions). Our mechanization is achieved via a shallow semantical embedding in Isabelle/HOL. We consider two possible uses of the framework. The first one is as a tool for meta-reasoning about the considered logic. We employ it for the automated verification of deontic correspondences (broadly conceived) and related matters, analogous to what has been previously achieved for the modal logic cube. The second use is as a tool for assessing ethical arguments. We provide a computer encoding of a well-known paradox in population ethics, Parfit's repugnant conclusion. Whether the presented encoding increases or decreases the attractiveness and persuasiveness of the repugnant conclusion is a question we would like to pass on to philosophy and ethics.
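The preference-based truth condition for conditional obligation can be made concrete with a toy evaluator, under the "opt" reading (the best antecedent-worlds all satisfy the consequent); this is a hypothetical illustration only, not the Isabelle/HOL embedding.

```python
def best(worlds, geq, A):
    """'opt' rule: the A-worlds that are at least as good as every A-world."""
    Aw = {w for w in worlds if A(w)}
    return {w for w in Aw if all(geq(w, v) for v in Aw)}

def obligation(worlds, geq, B, A):
    """O(B|A): every best A-world satisfies B."""
    return all(B(w) for w in best(worlds, geq, A))

# Toy model: three worlds ranked by an integer utility (hypothetical example).
worlds = ["w1", "w2", "w3"]
rank = {"w1": 2, "w2": 1, "w3": 0}
geq = lambda w, v: rank[w] >= rank[v]
A = lambda w: w in {"w2", "w3"}       # antecedent restricts to w2, w3
B = lambda w: w == "w2"               # consequent
res = obligation(worlds, geq, B, A)
print(res)  # True: the single best A-world is w2, and it satisfies B
```

A shallow embedding, as in the paper, encodes such truth conditions directly as higher-order formulas, so the proof assistant's automation applies out of the box.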

We present SCULPT, a novel 3D generative model for clothed and textured 3D meshes of humans. Specifically, we devise a deep neural network that learns to represent the geometry and appearance distribution of clothed human bodies. Training such a model is challenging, as datasets of textured 3D meshes for humans are limited in size and accessibility. Our key observation is that there exist medium-sized 3D scan datasets like CAPE, as well as large-scale 2D image datasets of clothed humans, and that multiple appearances can be mapped to a single geometry. To effectively learn from the two data modalities, we propose an unpaired learning procedure for pose-dependent clothed and textured human meshes. Specifically, we learn a pose-dependent geometry space from 3D scan data. We represent this as per-vertex displacements w.r.t. the SMPL model. Next, we train a geometry-conditioned texture generator in an unsupervised way using the 2D image data. We use intermediate activations of the learned geometry model to condition our texture generator. To alleviate entanglement between pose and clothing type, and pose and clothing appearance, we condition both the texture and geometry generators with attribute labels such as clothing types for the geometry, and clothing colors for the texture generator. We automatically generate these conditioning labels for the 2D images using the visual question answering model BLIP and CLIP. We validate our method on the SCULPT dataset, and compare to state-of-the-art 3D generative models for clothed human bodies. We will release the codebase for research purposes.
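The geometry representation, per-vertex displacements on a template conditioned on pose and an attribute label, can be sketched with untrained random weights; `template` is a stand-in for the SMPL mesh and every name below is illustrative, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical sizes: a tiny template mesh and a linear "generator" that maps
# (pose code, clothing-type label) to per-vertex displacements added onto the
# template. Random weights: this illustrates the representation only.
n_verts, d_pose, n_types = 100, 8, 4
template = rng.normal(size=(n_verts, 3))              # template vertex positions
W_pose = rng.normal(size=(d_pose, n_verts * 3)) * 0.01
W_type = rng.normal(size=(n_types, n_verts * 3)) * 0.01

def clothed_geometry(pose_code, clothing_type):
    """Template + displacements conditioned on pose and an attribute label."""
    one_hot = np.eye(n_types)[clothing_type]
    disp = (pose_code @ W_pose + one_hot @ W_type).reshape(n_verts, 3)
    return template + disp

pc = rng.normal(size=d_pose)
verts = clothed_geometry(pc, 2)
print(verts.shape)  # (100, 3)
```

Because the displacement field, not the absolute geometry, is generated, any output stays anchored to the template topology, which is what lets a separate texture generator be conditioned on it.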

In this paper we develop a numerical method for efficiently approximating solutions of certain Zakai equations in high dimensions. The key idea is to transform a given Zakai SPDE into a PDE with random coefficients. We show that under suitable regularity assumptions on the coefficients of the Zakai equation, the corresponding random PDE admits a solution random field which, for almost all realizations of the random coefficients, can be written as a classical solution of a linear parabolic PDE. This makes it possible to apply the Feynman--Kac formula to obtain an efficient Monte Carlo scheme for computing approximate solutions of Zakai equations. The approach achieves good results in up to 25 dimensions with fast run times.
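The two-level structure, freezing one realization of the random coefficients and then running a Feynman--Kac Monte Carlo for the resulting deterministic-coefficient PDE, can be sketched with a toy random constant drift in d = 25; this is an illustrative stand-in, not the Zakai setting.

```python
import numpy as np

def inner_mc(x0, b, g, T, n_steps, n_paths, rng):
    """Feynman--Kac Monte Carlo for one frozen realization b of the random
    drift: dX = b dt + dW, estimating u_b(0, x0) = E[g(X_T)]."""
    d = len(x0)
    dt = T / n_steps
    X = np.tile(x0, (n_paths, 1))
    for _ in range(n_steps):
        X = X + b * dt + rng.normal(scale=np.sqrt(dt), size=(n_paths, d))
    return g(X).mean()

rng = np.random.default_rng(4)
d, T = 25, 1.0
x0 = np.zeros(d)
g = lambda X: np.cos(X.sum(axis=1))
# Outer loop over realizations of the random coefficient (here a random
# constant drift, a toy stand-in for the random coefficients of the PDE).
vals = [inner_mc(x0, rng.normal(scale=0.1, size=d), g, T, 20, 2000, rng)
        for _ in range(5)]
print(vals)
```

Each inner estimate is an ordinary Monte Carlo average, so the per-realization cost scales linearly in the dimension, consistent with the d = 25 experiments mentioned above.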

Conway's Game of Life is a two-dimensional cellular automaton. As a dynamical system, it is well-known to be computationally universal, i.e.\ capable of simulating an arbitrary Turing machine. We show that in a sense taking a single backwards step of Game of Life is a computationally universal process, by constructing patterns whose preimage computation encodes an arbitrary circuit-satisfaction problem, or (equivalently) any tiling problem. As a corollary, we obtain for example that the set of orphans is coNP-complete, exhibit a $6210 \times 37800$-periodic configuration that admits a preimage but no periodic one, and prove that the existence of a preimage for a periodic point is undecidable. Our constructions were obtained by a combination of computer searches and manual design.
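The preimage computation can be made concrete by brute force at toy scale; the hypothetical sketch below searches all 4x4 predecessors (dead outside the grid) of a 2x2 target, the same existence question that the constructions above encode as circuit satisfaction.

```python
from itertools import product

def life_step_cell(grid, i, j):
    """Next state of cell (i, j) under Life rules; off-grid cells are dead."""
    n = sum(grid[a][b]
            for a in range(i - 1, i + 2) for b in range(j - 1, j + 2)
            if (a, b) != (i, j) and 0 <= a < len(grid) and 0 <= b < len(grid[0]))
    return 1 if (n == 3 or (grid[i][j] and n == 2)) else 0

def has_preimage(target):
    """Brute-force: is there a 4x4 grid whose central 2x2 evolves to target?"""
    for bits in product((0, 1), repeat=16):
        grid = [list(bits[4 * r:4 * r + 4]) for r in range(4)]
        if all(life_step_cell(grid, 1 + r, 1 + c) == target[r][c]
               for r in range(2) for c in range(2)):
            return True
    return False

found = has_preimage([[1, 1], [1, 1]])
print(found)  # True: the 2x2 block is a still life, hence its own predecessor
```

The search space is 2^16 here but grows as 2^(area) in general, and the paper's point is that no essentially better behavior is possible: preimage existence encodes NP-hard circuit satisfaction.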

The phenomenon of seat occupancy in university libraries is a prevalent issue. However, existing solutions, such as software-based seat reservations and sensor-based occupancy detection, have proven to be inadequate in effectively addressing this problem. In this study, we propose a novel approach: a serial dual-channel object detection model based on Faster R-CNN. This model is designed to discern all instances of occupied seats within the library and continuously update real-time information regarding seat occupancy status. To train the neural network, a distinctive dataset is utilized, which blends virtual images generated using Unreal Engine 5 (UE5) with real-world images. Notably, our test results underscore the remarkable performance uplift attained through the application of self-generated virtual datasets in training Convolutional Neural Networks (CNNs), particularly within specialized scenarios. Furthermore, this study introduces a pioneering detection model that seamlessly amalgamates the Faster R-CNN-based object detection framework with a transfer learning-based object classification algorithm. This amalgamation not only significantly curtails the computational resources and time investments needed for neural network training but also considerably heightens the efficiency of single-frame detection rates. Additionally, a user-friendly web interface and a mobile application have been developed, constituting a computer vision-driven platform for detecting seat occupancy within library premises. The improvement in seat occupancy recognition accuracy, together with the reduced computational resources required for neural network training, considerably increases the overall efficiency of library seat management.
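The serial two-stage architecture can be sketched with stubbed models; a real system would plug in Faster R-CNN for stage one and a fine-tuned CNN classifier for stage two, and all names and boxes below are illustrative.

```python
# Stage 1 localizes seat regions (run once, since seats are static);
# stage 2 runs a lightweight occupancy classifier per region every frame.

def detect_seats(frame):
    """Stub detector: returns fixed seat bounding boxes as (x, y, w, h)."""
    return [(0, 0, 50, 50), (60, 0, 50, 50)]

def classify_occupancy(frame, box):
    """Stub classifier: a seat region counts as occupied if it has any signal."""
    x, y, w, h = box
    return any(row[x:x + w].count(1) for row in frame[y:y + h])

def seat_status(frame, seat_boxes):
    return {i: classify_occupancy(frame, b) for i, b in enumerate(seat_boxes)}

# Toy 50x120 binary "frame": 1s mark a person sitting in the first seat only.
frame = [[0] * 120 for _ in range(50)]
for r in range(10, 40):
    for c in range(10, 40):
        frame[r][c] = 1

boxes = detect_seats(frame)          # run the heavy detector once
status = seat_status(frame, boxes)
print(status)  # {0: True, 1: False}
```

Running the heavy detector once and the cheap classifier per frame is what yields the claimed savings in training cost and per-frame latency.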

Artificial intelligence (AI) models are prevalent today and provide a valuable tool for artists. However, a lesser-known artifact that comes with AI models that is not always discussed is the glitch. Glitches occur for various reasons; sometimes, they are known, and sometimes they are a mystery. Artists who use AI models to generate art might not understand the reason for the glitch but often want to experiment and explore novel ways of augmenting the output of the glitch. This paper discusses some of the questions artists have when leveraging the glitch in AI art production. It explores the unexpected positive outcomes produced by glitches in the specific context of motion capture and performance art.

BERT, a pre-trained Transformer model, has achieved ground-breaking performance on multiple NLP tasks. In this paper, we describe BERTSUM, a simple variant of BERT, for extractive summarization. Our system is the state of the art on the CNN/Dailymail dataset, outperforming the previous best-performing system by 1.65 on ROUGE-L. The code to reproduce our results is available at //github.com/nlpyang/BertSum
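ROUGE-L, the metric quoted above, is based on longest common subsequence; a minimal sentence-level variant is easy to state (the official summarization evaluation uses summary-level ROUGE with its own tokenization, so scores differ).

```python
def lcs_len(a, b):
    """Length of the longest common subsequence of two token lists."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            dp[i + 1][j + 1] = dp[i][j] + 1 if x == y else max(dp[i][j + 1], dp[i + 1][j])
    return dp[-1][-1]

def rouge_l_f1(candidate, reference):
    """Sentence-level ROUGE-L F1 between whitespace-tokenized strings."""
    c, r = candidate.split(), reference.split()
    lcs = lcs_len(c, r)
    if lcs == 0:
        return 0.0
    p, rec = lcs / len(c), lcs / len(r)
    return 2 * p * rec / (p + rec)

score = rouge_l_f1("the cat sat on the mat", "the cat is on the mat")
print(score)  # 5/6: the LCS "the cat on the mat" has 5 of 6 tokens
```

Because ROUGE-L rewards in-order token overlap rather than exact n-grams, a 1.65-point gain on it reflects better sentence selection, not just surface wording.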

Within the rapidly developing Internet of Things (IoT), numerous and diverse physical devices, Edge devices, Cloud infrastructure, and their quality of service (QoS) requirements need to be represented within a unified specification in order to enable rapid IoT application development, monitoring, and dynamic reconfiguration. However, heterogeneities among different configuration knowledge representation models pose limitations for the acquisition, discovery, and curation of configuration knowledge for coordinated IoT applications. This paper proposes a unified data model to represent IoT resource configuration knowledge artifacts. It also proposes IoT-CANE (Context-Aware recommendatioN systEm) to facilitate incremental knowledge acquisition and declarative context-driven knowledge recommendation.
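A minimal sketch of what such a unified resource model might look like, with illustrative field names that are not the paper's actual IoT-CANE schema:

```python
from dataclasses import dataclass, field

# Hypothetical unified configuration-knowledge artifact: one record type
# covering devices, Edge nodes, and Cloud resources, with attached QoS
# requirements. Field names are illustrative only.

@dataclass
class QoSRequirement:
    metric: str      # e.g. "latency"
    operator: str    # e.g. "<="
    value: float
    unit: str

@dataclass
class IoTResource:
    name: str
    layer: str                                    # "device" | "edge" | "cloud"
    capabilities: list = field(default_factory=list)
    qos: list = field(default_factory=list)

sensor = IoTResource(
    name="temp-sensor-01",
    layer="device",
    capabilities=["temperature"],
    qos=[QoSRequirement("latency", "<=", 100.0, "ms")],
)
print(sensor.layer, sensor.qos[0].metric)  # device latency
```

Representing all three layers with one record type is what makes cross-layer discovery and recommendation possible without per-model translation.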
