
The temporal analysis of products (TAP) technique produces extensive transient kinetic data sets, but it is challenging to translate the large quantity of raw data into physically interpretable kinetic models, largely due to the computational scaling of existing numerical methods for fitting TAP data. In this work, we utilize kinetics-informed neural networks (KINNs), which are artificial feedforward neural networks designed to solve ordinary differential equations constrained by micro-kinetic models, to model the TAP data. We demonstrate that, under the assumption that all concentrations are known in the thin catalyst zone, KINNs can simultaneously fit the transient data, retrieve the kinetic model parameters, and interpolate unseen pulse behavior for multi-pulse experiments. We further demonstrate that, by modifying the loss function, KINNs maintain these capabilities even when precise thin-zone information is unavailable, as would be the case with real experimental TAP data. We also compare the approach to existing optimization techniques, which reveals improved noise tolerance and performance in extracting kinetic parameters. The KINNs approach offers an efficient alternative for TAP analysis and can assist in interpreting transient kinetics in complex systems over long timescales.
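To illustrate the core idea, here is a minimal numpy sketch of the kind of physics-informed residual a KINN minimizes. The micro-kinetic model (a hypothetical first-order reaction A → B with rate constant k) and the finite-difference stand-in for the network's time derivative are illustrative assumptions, not the paper's actual architecture; we plug in the analytic solution to show the residual vanishes when the kinetics are matched.

```python
import numpy as np

def microkinetic_rhs(c, k):
    """dC/dt for the assumed model A -> B: d[A]/dt = -k[A], d[B]/dt = +k[A]."""
    c_a, c_b = c
    return np.array([-k * c_a, k * c_a])

def ode_residual_loss(t, c_pred, k):
    """Mean squared mismatch between dC/dt (finite differences here, autodiff
    in a real KINN) and the micro-kinetic right-hand side."""
    dc_dt = np.gradient(c_pred, t, axis=1)                     # shape (2, nt)
    rhs = np.stack([microkinetic_rhs(c, k) for c in c_pred.T], axis=1)
    return np.mean((dc_dt - rhs) ** 2)

k = 1.5
t = np.linspace(0.0, 2.0, 400)
c_exact = np.stack([np.exp(-k * t), 1.0 - np.exp(-k * t)])     # [A], [B]

loss_good = ode_residual_loss(t, c_exact, k)        # correct rate constant
loss_bad = ode_residual_loss(t, c_exact, 3.0 * k)   # wrong rate constant
```

In a KINN this residual is one term of the loss; a data-fit term over the measured pulse responses is added to it, and k becomes a trainable parameter recovered jointly with the network weights.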

Related Content

ACM Transactions on Applied Perception (TAP) aims to strengthen the synergy between computer science and psychology/perception by publishing high-quality papers that help unify research across these fields. The journal publishes interdisciplinary work of significant and lasting value in any topic area spanning computer science and perceptual psychology. All papers must include both a perception component and a computer science component. Topics include, but are not limited to: visual perception: computer graphics, scientific/data/information visualization, digital imaging, computer vision, stereoscopic and 3D display technologies; auditory perception: auditory displays and interfaces, auditory coding, spatial sound, speech synthesis and recognition; haptics: haptic rendering, haptic input and perception; sensorimotor perception: gesture input, body-movement input; sensory perception: sensory integration, multimodal rendering and interaction. Official website:

Applying differential privacy (DP) by means of the DP-SGD algorithm to protect individual data points during training is becoming increasingly popular in NLP. However, the choice of granularity at which DP is applied is often neglected. For example, neural machine translation (NMT) typically operates on the sentence-level granularity. From the perspective of DP, this setup assumes that each sentence belongs to a single person and any two sentences in the training dataset are independent. This assumption is however violated in many real-world NMT datasets, e.g. those including dialogues. For proper application of DP we thus must shift from sentences to entire documents. In this paper, we investigate NMT at both the sentence and document levels, analyzing the privacy/utility trade-off for both scenarios, and evaluating the risks of not using the appropriate privacy granularity in terms of leaking personally identifiable information (PII). Our findings indicate that the document-level NMT system is more resistant to membership inference attacks, emphasizing the significance of using the appropriate granularity when working with DP.
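The mechanics of a DP-SGD step can be sketched in a few lines: per-example gradients are clipped to a norm bound C, summed, and perturbed with Gaussian noise calibrated to C. This is the standard recipe (for a toy linear model with squared loss, all parameter values hypothetical), not the paper's NMT setup; the granularity question in the abstract is about what counts as one "example" here — a sentence or a whole document.

```python
import numpy as np

def dp_sgd_step(w, X, y, clip_norm=1.0, noise_mult=1.1, lr=0.1, rng=None):
    rng = np.random.default_rng(rng)
    # per-example gradients of 0.5*(x.w - y)^2:  g_i = (x_i.w - y_i) * x_i
    residuals = X @ w - y
    grads = residuals[:, None] * X                        # shape (n, d)
    norms = np.linalg.norm(grads, axis=1)
    scale = np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    clipped = grads * scale[:, None]                      # each row norm <= C
    # Gaussian noise with std = noise_mult * C on the summed gradient
    noise = rng.normal(0.0, noise_mult * clip_norm, size=w.shape)
    noisy_mean = (clipped.sum(axis=0) + noise) / len(X)
    return w - lr * noisy_mean

rng = np.random.default_rng(0)
X = rng.normal(size=(32, 4))
y = rng.normal(size=32)
w_new = dp_sgd_step(np.zeros(4), X, y, rng=1)
```

At document-level granularity, each "row" in the clipping step would be the aggregated gradient of one whole document, so one person's entire contribution is bounded by C.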

Modern approaches to computer vision tasks rely heavily on machine learning, which requires a large number of high-quality images. While there is a plethora of image datasets containing a single type of image, datasets collected from multiple cameras are scarce. In this thesis, we introduce Paired Image and Video data from three CAMeraS, namely PIV3CAMS, aimed at multiple computer vision tasks. The PIV3CAMS dataset consists of 8385 pairs of images and 82 pairs of videos captured by three different cameras: a Canon D5 Mark IV, a Huawei P20, and a ZED stereo camera. The dataset includes various indoor and outdoor scenes from different locations in Zurich (Switzerland) and Cheonan (South Korea). Computer vision applications that can benefit from the PIV3CAMS dataset include image/video enhancement, view interpolation, and image matching, among others. We provide a careful explanation of the data collection process and a detailed analysis of the data. The second part of this thesis studies the use of depth information in the view synthesis task. In addition to reimplementing a current state-of-the-art algorithm, we investigate several proposed alternative models that integrate depth information geometrically. Through extensive experiments, we show that the effect of depth is crucial for small view changes. Finally, we apply our model to the introduced PIV3CAMS dataset to synthesize novel target views as an example application of PIV3CAMS.
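A multi-camera paired dataset like this is typically indexed by grouping files that share a scene identifier across camera subfolders. The sketch below shows one way to build such an index; the file-naming scheme (`scene_0001_canon.jpg` etc.) is purely hypothetical and not the actual PIV3CAMS layout.

```python
import re

CAMERAS = ("canon", "huawei", "zed")

def pair_by_scene(filenames):
    """Group files into {scene_id: {camera: filename}} and keep only scenes
    observed by all three cameras, so every sample is a complete triplet."""
    pattern = re.compile(r"scene_(\d+)_(canon|huawei|zed)\.jpg")
    scenes = {}
    for name in filenames:
        m = pattern.fullmatch(name)
        if m:
            scene_id, cam = m.groups()
            scenes.setdefault(scene_id, {})[cam] = name
    return {s: cams for s, cams in scenes.items() if len(cams) == len(CAMERAS)}

files = ["scene_0001_canon.jpg", "scene_0001_huawei.jpg",
         "scene_0001_zed.jpg", "scene_0002_canon.jpg"]  # 0002 is incomplete
pairs = pair_by_scene(files)
```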

Advection-dominated problems are commonly encountered in nature, engineering systems, and a wide range of industrial processes. For these problems, linear approximation methods (proper orthogonal decomposition and the reduced basis method) are not suitable, as the Kolmogorov $n$-width decays slowly, leading to inefficient and inaccurate reduced order models. Few non-linear approaches exist to accelerate the Kolmogorov $n$-width decay. In this work, we use a neural-network shift augmented transformation technique that employs automatic shift detection to find the optimal non-linear transformation of the full-order model solution manifold $\mathcal{M}$. We exploit a deep-learning framework to derive a parameter-dependent bijective mapping between the manifold $\mathcal{M}$ and the transformed manifold $\tilde{\mathcal{M}}$. It consists of two neural networks: 1) ShiftNet, which performs automatic shift detection by learning the shift operator that finds the optimal shifts for numerous snapshots of the full-order solution manifold, in order to accelerate the Kolmogorov $n$-width decay; and 2) InterpNet, which learns the reference configuration and can reconstruct its field values for each shifted grid distribution. We construct non-intrusive reduced order models on the resulting transformed linear subspaces and employ automatic shift detection for predictions. We test our methodology on advection-dominated problems such as 1D travelling waves, a 2D isentropic convective vortex, and 2D two-phase flow test cases. This work leads to the complete NNsPOD-ROM algorithm for model reduction of advection-dominated problems, comprising both offline and online stages.
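The effect the shift transformation is after can be seen in a few lines of numpy: snapshots of a travelling wave span many POD modes, but shifting each snapshot back to a reference frame makes the snapshot matrix essentially rank one, so the singular values (and hence the $n$-width surrogate) collapse. In the sketch below the shift is known analytically from the advection speed, an idealization of what ShiftNet would learn.

```python
import numpy as np

x = np.linspace(-10.0, 10.0, 512)
c = 1.0                                     # advection speed (assumed known)
times = np.linspace(0.0, 5.0, 40)

def wave(x, t):
    """Travelling Gaussian pulse, a standard 1D advection test profile."""
    return np.exp(-((x - c * t) ** 2))

raw = np.stack([wave(x, t) for t in times], axis=1)              # (512, 40)
shifted = np.stack([wave(x + c * t, t) for t in times], axis=1)  # aligned

s_raw = np.linalg.svd(raw, compute_uv=False)
s_shift = np.linalg.svd(shifted, compute_uv=False)

decay_raw = s_raw[1] / s_raw[0]        # slow decay: ratio stays large
decay_shift = s_shift[1] / s_shift[0]  # fast decay: ratio near machine zero
```

A linear ROM built on the `shifted` subspace needs one mode instead of many, which is precisely why the method pairs the shift with a non-intrusive ROM on the transformed subspace.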

Machine learning techniques have recently been of great interest for solving differential equations. Training these models is classically a data-fitting task, but knowledge of the expression of the differential equation can be used to supplement the training objective, leading to the development of physics-informed scientific machine learning. In this article, we focus on one class of models called nonlinear vector autoregression (NVAR) to solve ordinary differential equations (ODEs). Motivated by connections to numerical integration and physics-informed neural networks, we explicitly derive the physics-informed NVAR (piNVAR) which enforces the right-hand side of the underlying differential equation regardless of NVAR construction. Because NVAR and piNVAR completely share their learned parameters, we propose an augmented procedure to jointly train the two models. Then, using both data-driven and ODE-driven metrics, we evaluate the ability of the piNVAR model to predict solutions to various ODE systems, such as the undamped spring, a Lotka-Volterra predator-prey nonlinear model, and the chaotic Lorenz system.
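For readers unfamiliar with NVAR, the sketch below builds the standard feature vector (a constant, time-delayed states, and their quadratic products) and fits the next-step map by ridge regression on the scalar ODE $y' = -y$. This is the plain NVAR read-out, not the piNVAR variant; all sizes and the ridge parameter are illustrative choices.

```python
import numpy as np

def nvar_features(history):
    """Constant + linear delay terms + upper-triangular quadratic products."""
    lin = np.asarray(history, dtype=float)
    quad = np.outer(lin, lin)[np.triu_indices(len(lin))]
    return np.concatenate(([1.0], lin, quad))

dt, k = 0.05, 2                         # step size and number of delays
t = np.arange(0.0, 4.0, dt)
y = np.exp(-t)                          # training trajectory of y' = -y

# rows: features of the k most recent states; targets: the next increment
X = np.stack([nvar_features(y[i - k + 1:i + 1])
              for i in range(k - 1, len(y) - 1)])
target = y[k:] - y[k - 1:-1]

ridge = 1e-8
W = np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ target)

# one-step prediction beyond the training data (true value: exp(-4.0))
pred = y[-1] + nvar_features(y[-2:]) @ W
```

piNVAR reuses exactly these learned weights W but adds a loss term forcing the read-out to agree with the ODE right-hand side, which is what makes joint training of the two models possible.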

Poisson distributed measurements in inverse problems often stem from Poisson point processes that are observed through discretized or finite-resolution detectors, one of the most prominent examples being positron emission tomography (PET). These inverse problems are typically reconstructed via Bayesian methods. A natural question then is whether and how the reconstruction converges as the signal-to-noise ratio tends to infinity and how this convergence interacts with other parameters such as the detector size. In this article we carry out a corresponding variational analysis for the exemplary Bayesian reconstruction functional from [arXiv:2311.17784,arXiv:1902.07521], which considers dynamic PET imaging (i.e.\ the object to be reconstructed changes over time) and uses an optimal transport regularization.
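The Poisson likelihood underlying such reconstructions can be made concrete with the classical MLEM iteration for $y \sim \mathrm{Pois}(Ax)$, the data-fit model used in PET. This toy sketch omits the dynamic setting and the optimal transport regularizer discussed in the text; the system matrix and sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.uniform(0.1, 1.0, size=(20, 8))       # toy detector sensitivity matrix
x_true = rng.uniform(0.5, 2.0, size=8)
y = rng.poisson(A @ x_true).astype(float)     # Poisson-distributed counts

def mlem(A, y, n_iter=200):
    """Multiplicative EM update: x <- x / (A^T 1) * A^T (y / (A x)).
    Preserves non-negativity and monotonically improves the likelihood."""
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)                      # A^T 1
    for _ in range(n_iter):
        ratio = y / np.maximum(A @ x, 1e-12)
        x = x / sens * (A.T @ ratio)
    return x

def neg_log_lik(A, x, y):
    lam = A @ x
    return np.sum(lam - y * np.log(np.maximum(lam, 1e-12)))

x0 = np.ones(A.shape[1])
x_rec = mlem(A, y)
```

The variational question raised in the abstract concerns what happens to such reconstructions as counts grow (signal-to-noise ratio tends to infinity) and detector bins shrink.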

The insight that causal parameters are particularly suitable for out-of-sample prediction has sparked a lot of development of causal-like predictors. However, the insistence on strict causal targets has limited the development of predictors with good risk-minimization properties but without a direct causal interpretation. In this manuscript we derive the optimal out-of-sample risk-minimizing predictor of a target $Y$ in a non-linear system $(X,Y)$ that has been trained in several within-sample environments. We consider data from an observational environment and several shifted environments. Each environment corresponds to a structural equation model (SEM) with random coefficients and its own shift and noise vector, both in $L^2$. Unlike previous approaches, we also allow shifts in the target value. We define a sieve of out-of-sample environments, consisting of all shifts $\tilde{A}$ that are at most $\gamma$ times as strong as any weighted average of the observed shift vectors. For each $\beta\in\mathbb{R}^p$ we show that the supremum of the risk functions $R_{\tilde{A}}(\beta)$ admits a worst-risk decomposition into a (positive) non-linear combination of risk functions, depending on $\gamma$. We then define the set $\mathcal{B}_\gamma$ of minimizers of this risk. The main result of the paper is that, outside a set of zero Lebesgue measure in the parameter space, there is a unique minimizer ($|\mathcal{B}_\gamma|=1$) that can be consistently estimated by an explicit estimator. A practical obstacle for the initial estimation method is that it involves solving a general-degree polynomial. We therefore prove that an approximate estimator based on the bisection method is also consistent.
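The bisection step mentioned at the end is elementary but worth making concrete: given a sign change of a continuous function on an interval, the root is bracketed and halved until a tolerance is met. The polynomial below is an arbitrary stand-in, not the paper's worst-risk first-order condition.

```python
def bisect(f, lo, hi, tol=1e-10, max_iter=200):
    """Find a root of f on [lo, hi], assuming f(lo) and f(hi) differ in sign."""
    flo = f(lo)
    assert flo * f(hi) < 0, "need a sign change on [lo, hi]"
    for _ in range(max_iter):
        mid = 0.5 * (lo + hi)
        fm = f(mid)
        if abs(fm) < tol or hi - lo < tol:
            return mid
        if flo * fm < 0:          # root lies in the left half
            hi = mid
        else:                     # root lies in the right half
            lo, flo = mid, fm
    return 0.5 * (lo + hi)

# p(b) = b^5 - 2b - 1 changes sign on [1, 2], so a root is bracketed there
root = bisect(lambda b: b**5 - 2*b - 1, 1.0, 2.0)
```

Because each iteration only halves an interval, the method sidesteps the numerical difficulty of solving a general-degree polynomial in closed form, at the cost of linear convergence.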

We propose an experimental study of adaptive time-stepping methods for efficient modeling of aggregation-fragmentation kinetics. Precise modeling of these phenomena usually requires large systems of nonlinear ordinary differential equations and intensive computations. We concentrate on the performance of three explicit Runge-Kutta time-integration methods and provide simulations for two types of problems: finding equilibrium solutions and simulating kinetics with periodic solutions. The first class of problems may be analyzed through relaxation of the solution to the stationary state after a long time. In this case, adaptive time-stepping can reach the stationary state using large steps, reducing the cost of the calculations without loss of accuracy. In the second case, the problem becomes numerically unstable at certain points of the phase space and may require tiny steps, making the simulations very time-consuming. Adaptive criteria allow larger steps at most points and speed up the simulations significantly.
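The accept/reject logic behind adaptive time-stepping can be sketched with the simplest embedded pair, Euler/Heun (orders 1/2): the difference between the two estimates serves as the local error estimate that drives the step size. This is illustrated on $y' = -y$, far simpler than aggregation-fragmentation systems, but the controller is the same in spirit.

```python
import numpy as np

def adaptive_heun(f, y0, t0, t_end, tol=1e-6, h0=0.1):
    """Embedded Euler/Heun integrator with a standard step-size controller."""
    t, y, h = t0, y0, h0
    n_steps = 0
    while t_end - t > 1e-12:
        h = min(h, t_end - t)
        k1 = f(t, y)
        k2 = f(t + h, y + h * k1)
        y_euler = y + h * k1                  # order 1
        y_heun = y + 0.5 * h * (k1 + k2)      # order 2
        err = abs(y_heun - y_euler)           # local error estimate
        if err <= tol:
            t, y = t + h, y_heun              # accept the step
            n_steps += 1
        # controller: safety factor 0.9, exponent 1/2 for an order-1 estimate
        h *= 0.9 * min(2.0, max(0.2, np.sqrt(tol / max(err, 1e-16))))
    return y, n_steps

y_end, n = adaptive_heun(lambda t, y: -y, 1.0, 0.0, 5.0)
```

As the text notes, for relaxation problems the controller grows h once the solution flattens, while near numerically unstable regions the error estimate forces tiny steps automatically.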

Areas of computational mechanics such as uncertainty quantification and optimization usually involve repeated evaluation of numerical models that represent the behavior of engineering systems. For complex nonlinear systems, however, these models tend to be expensive to evaluate, making surrogate models quite valuable. Artificial neural networks approximate such systems very well by exploiting the information inherent in their training data. In this context, this paper investigates improving the training process by including sensitivity information, i.e. partial derivatives with respect to the inputs, as outlined by Sobolev training. In computational mechanics, sensitivities can be incorporated into neural network training by expanding the loss function with additional loss terms, improving training convergence and thereby lowering the generalisation error. This improvement is shown in two examples of linear and non-linear material behavior. More specifically, the Sobolev-designed loss function is expanded with residual weights that adjust the effect of each loss term on the training step. Residual weights are the scaling applied to the different training data, which in this case are responses and sensitivities. These residual weights are optimized by an adaptive scheme, whereby varying objective functions are explored, some of which improve the accuracy and precision of the overall training convergence.
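For a linear-in-parameters surrogate, a Sobolev-style loss with residual weights can be solved in closed form, which makes the idea easy to see: one block of equations matches responses, another matches sensitivities, and each block is scaled by its residual weight. The polynomial surrogate, target function, and weight values below are illustrative assumptions; a neural network would minimize the same weighted loss by gradient descent rather than least squares.

```python
import numpy as np

x = np.linspace(-1.0, 1.0, 30)
y = np.sin(np.pi * x)                    # response data
dy = np.pi * np.cos(np.pi * x)           # sensitivity data dy/dx

deg = 7
V = np.vander(x, deg + 1, increasing=True)          # features x^0 .. x^deg
dV = np.zeros_like(V)
dV[:, 1:] = V[:, :-1] * np.arange(1, deg + 1)       # d/dx of each feature

w_resp, w_sens = 1.0, 0.5                # residual weights (assumed values)
# weighted least squares: sqrt-weights fold the loss weights into the rows
A = np.vstack([np.sqrt(w_resp) * V, np.sqrt(w_sens) * dV])
b = np.concatenate([np.sqrt(w_resp) * y, np.sqrt(w_sens) * dy])
coef, *_ = np.linalg.lstsq(A, b, rcond=None)

x_test = np.linspace(-1.0, 1.0, 200)
fit = np.vander(x_test, deg + 1, increasing=True) @ coef
err = np.max(np.abs(fit - np.sin(np.pi * x_test)))
```

Adapting `w_resp` and `w_sens` during training, rather than fixing them, is the adaptive residual-weighting scheme the paper explores.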

A spatiotemporal deep learning framework is proposed that is capable of 2D full-field prediction of fracture in concrete mesostructures. This framework not only predicts fractures but also captures the entire history of the fracture process, from crack initiation in the interfacial transition zone to the subsequent propagation of cracks in the mortar matrix. In addition, a convolutional neural network is developed which can predict the averaged stress-strain curve of the mesostructures. The UNet modeling framework, which comprises an encoder-decoder section with skip connections, is used as the deep learning surrogate model. Training and test data are generated from high-fidelity fracture simulations of randomly generated concrete mesostructures. These mesostructures include geometric variabilities such as different aggregate particle geometrical features, spatial distribution, and the total volume fraction of aggregates. The fracture simulations are carried out in Abaqus, utilizing the cohesive phase-field fracture modeling technique as the fracture modeling approach. In this work, to reduce the number of training datasets, the spatial distribution of three sets of material properties for three-phase concrete mesostructures, along with the spatial phase-field damage index, are fed to the UNet to predict the corresponding stress and spatial damage index at the subsequent step. It is shown that, after training with this methodology on 470 datasets, the UNet model accurately predicts damage on the unseen test dataset. Moreover, another novel aspect of this work is the conversion of irregular finite element data into regular grids using a developed pipeline. This approach allows for a less complex UNet architecture and facilitates the integration of phase-field fracture equations into surrogate models in future developments.
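The irregular-to-regular conversion step can be sketched as resampling nodal values at scattered finite-element coordinates onto a uniform pixel grid. The nearest-neighbor scheme below is only one simple way to do this and may differ from the paper's actual pipeline; the smooth test field and grid size are illustrative.

```python
import numpy as np

def to_regular_grid(coords, values, n=16, bounds=(0.0, 1.0)):
    """Resample scattered nodal data onto an n-by-n uniform grid.
    coords: (m, 2) nodal positions; values: (m,) nodal field values."""
    lo, hi = bounds
    xs = np.linspace(lo, hi, n)
    gx, gy = np.meshgrid(xs, xs, indexing="ij")
    grid_pts = np.stack([gx.ravel(), gy.ravel()], axis=1)     # (n*n, 2)
    # nearest FE node for every pixel (brute force; fine for small m)
    d2 = ((grid_pts[:, None, :] - coords[None, :, :]) ** 2).sum(-1)
    nearest = d2.argmin(axis=1)
    return values[nearest].reshape(n, n)

rng = np.random.default_rng(0)
coords = rng.uniform(0.0, 1.0, size=(200, 2))    # scattered "FE nodes"
values = coords[:, 0] + coords[:, 1]             # smooth test field f = x + y
grid = to_regular_grid(coords, values, n=16)
```

Once the fields live on a regular grid, a standard convolutional UNet can consume them directly, which is the architectural simplification the text refers to.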

We discuss a connection between a generative model, called the diffusion model, and nonequilibrium thermodynamics for the Fokker-Planck equation, called stochastic thermodynamics. Based on the techniques of stochastic thermodynamics, we derive the speed-accuracy trade-off for diffusion models, a trade-off relationship between the speed and accuracy of data generation. Our result implies that the entropy production rate in the forward process affects the errors in data generation. From a stochastic thermodynamic perspective, our results provide quantitative insight into how best to generate data in diffusion models. The optimal learning protocol is introduced by the conservative force in stochastic thermodynamics and the geodesic of the space induced by the 2-Wasserstein distance in optimal transport theory. We numerically illustrate the validity of the speed-accuracy trade-off for diffusion models with different noise schedules, such as the cosine schedule, conditional optimal transport, and optimal transport.
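One of the compared schedules, the cosine schedule, is easy to write down: $\bar\alpha(t)$ sets the fraction of signal that survives the forward noising at time $t$. The sketch below shows only the schedule and the forward noising rule for a variance-preserving diffusion (in the Nichol-Dhariwal form, with their small offset s); no score network, sampler, or entropy-production computation is included.

```python
import numpy as np

def alpha_bar_cosine(t, s=0.008):
    """Cumulative signal fraction at t in [0, 1] for the cosine schedule."""
    f = lambda u: np.cos((u + s) / (1 + s) * np.pi / 2) ** 2
    return f(t) / f(0.0)

t = np.linspace(0.0, 1.0, 100)
ab = alpha_bar_cosine(t)      # decreases from 1 (clean data) toward 0 (noise)

# forward noising of a sample x0 at time t:
#   x_t = sqrt(alpha_bar) * x0 + sqrt(1 - alpha_bar) * eps,  eps ~ N(0, 1)
rng = np.random.default_rng(0)
x0 = 2.0
abar = alpha_bar_cosine(0.5)
x_mid = np.sqrt(abar) * x0 + np.sqrt(1.0 - abar) * rng.normal()
```

In the thermodynamic picture of the text, the choice of how fast $\bar\alpha(t)$ decays controls the entropy production rate of the forward process, which is what enters the speed-accuracy trade-off.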
