
The acoustic wave equation is solved in the time domain with a boundary element formulation. The time discretisation is performed with the generalised convolution quadrature (gCQ) method, and standard lowest-order elements are used for the spatial approximation. Both collocation and Galerkin methods are applied. To increase the efficiency of the boundary element method, a low-rank approximation, the adaptive cross approximation (ACA), is carried out. We discuss a generalisation of the ACA that approximates a three-dimensional array of data, i.e., the usual boundary element matrices at several complex frequencies. This method is used within the gCQ method to obtain a real time-domain formulation. The behaviour of the proposed method is studied with three examples: a unit cube, a unit cube with a re-entrant corner, and a unit ball. The properties of the method are preserved in the data-sparse representation, and a significant reduction in storage is obtained.
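As a rough illustration of the compression idea, the following is a minimal sketch of a partially pivoted ACA for a single dense matrix. It uses real arithmetic for brevity (the boundary element matrices at complex frequencies would be complex-valued), `get_row`/`get_col` are hypothetical callbacks that evaluate single matrix rows and columns, and the paper's generalisation to a three-dimensional array across frequencies is not reproduced here.

```python
import numpy as np

def aca(get_row, get_col, m, n, tol=1e-6, max_rank=50):
    """Partially pivoted adaptive cross approximation: builds A ~ U @ V
    from individual rows/columns without assembling the full m-by-n matrix."""
    U, V = [], []
    used_rows = {0}
    i = 0
    frob2 = 0.0                                    # running estimate of ||U V||_F^2
    for _ in range(min(m, n, max_rank)):
        r = np.asarray(get_row(i), dtype=float)    # residual of row i
        for u, v in zip(U, V):
            r = r - u[i] * v
        j = int(np.argmax(np.abs(r)))              # pivot column
        if abs(r[j]) < 1e-14:
            break                                  # row already well approximated
        v = r / r[j]
        c = np.asarray(get_col(j), dtype=float)    # residual of column j
        for u, w in zip(U, V):
            c = c - w[j] * u
        u = c
        U.append(u); V.append(v)
        frob2 += float(u @ u) * float(v @ v)
        if np.linalg.norm(u) * np.linalg.norm(v) <= tol * np.sqrt(frob2):
            break                                  # relative stopping criterion
        score = np.abs(u)
        score[list(used_rows)] = -1.0              # never reuse a pivot row
        i = int(np.argmax(score))
        used_rows.add(i)
    return np.array(U).T, np.array(V)              # A ~ U @ V, rank = len(U)

# example: compress a smooth (hence numerically low-rank) kernel matrix
n = 200
row = lambda i: 1.0 / (1.0 + (i - np.arange(n)) ** 2 / n ** 2)
col = lambda k: 1.0 / (1.0 + (np.arange(n) - k) ** 2 / n ** 2)
U, V = aca(row, col, n, n, tol=1e-8)
```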

Related content

Image restoration is typically addressed through non-convex inverse problems, which are often solved using first-order block-wise splitting methods. In this paper, we consider a general type of non-convex optimisation model that captures many image inverse problems and present an inertial block proximal linearised minimisation (iBPLM) algorithm. Our new method unifies the Jacobi-type parallel and Gauss-Seidel-type alternating update rules, and extends beyond both. The inertial technique is also incorporated into each block-wise subproblem update, which can accelerate numerical convergence. Furthermore, we extend this framework with a plug-and-play variant (PnP-iBPLM) that integrates deep gradient denoisers, offering a flexible and robust solution for complex imaging tasks. We provide a comprehensive theoretical analysis, demonstrating both subsequential and global convergence of the proposed algorithms. To validate our methods, we apply them to multi-block dictionary learning problems in image denoising and deblurring. Experimental results show that both iBPLM and PnP-iBPLM significantly enhance numerical performance and robustness in these applications.
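To make the update rule concrete, here is a simplified Python sketch of one inertial block proximal linearised sweep. It is our own schematic, not the paper's exact scheme: `grad_f` and `prox_g` are hypothetical callbacks for the partial gradient of the smooth coupling term and the proximal map of each block's nonsmooth term, and the extrapolation weight `beta` is held fixed rather than adapted.

```python
def ibplm_sweep(x, x_prev, grad_f, prox_g, steps, beta=0.5, gauss_seidel=True):
    """One block-wise sweep of an inertial proximal linearised update.
    x, x_prev : lists of per-block arrays (current and previous iterates)
    grad_f(blocks, b): partial gradient of the smooth term w.r.t. block b
    prox_g(z, b, t):   proximal map of block b's nonsmooth term with step t"""
    x_new = [xb.copy() for xb in x]
    base = x_new if gauss_seidel else x       # fresh blocks (alternating) vs. old blocks (parallel)
    for b in range(len(x)):
        y = x[b] + beta * (x[b] - x_prev[b])  # inertial extrapolation
        g = grad_f(base, b)                   # linearise the smooth coupling term
        x_new[b] = prox_g(y - steps[b] * g, b, steps[b])
    return x_new
```

The `gauss_seidel` flag is what lets one code path cover both update rules: with it on, block b sees the already-updated blocks before it; with it off, all blocks are updated from the same old iterate.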

Autoregressive moving average (ARMA) models are frequently used to analyze time series data. Despite the popularity of these models, likelihood-based inference for ARMA models has subtleties that have been previously identified but continue to cause difficulties in widely used data analysis strategies. We provide a summary of parameter estimation via maximum likelihood and discuss common pitfalls that may lead to sub-optimal parameter estimates. We propose a random-initialization algorithm for parameter estimation that frequently yields higher likelihoods than traditional maximum likelihood estimation procedures. We then investigate the parameter uncertainty of maximum likelihood estimates and propose profile confidence intervals as a superior alternative to intervals derived from the Fisher information matrix. Through a series of simulation studies, we demonstrate the efficacy of our proposed algorithm and the improved nominal coverage of profile confidence intervals compared to the normal approximation based on the Fisher information.
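A minimal version of the multiple-restart idea can be expressed with statsmodels. This is our own sketch, not the paper's algorithm: it draws AR/MA start values uniformly, assumes the parameter layout `[const, AR..., MA..., sigma^2]` that statsmodels' ARIMA uses when a constant is included, and simply discards non-convergent starts.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

def fit_arma_restarts(y, p, q, n_starts=20, seed=0):
    """Fit ARMA(p, q) by maximum likelihood from several random initialisations
    and keep the fit with the highest log-likelihood."""
    rng = np.random.default_rng(seed)
    best = ARIMA(y, order=(p, 0, q)).fit()             # default starting values
    for _ in range(n_starts):
        coefs = rng.uniform(-0.95, 0.95, size=p + q)   # random AR/MA coefficients
        start = np.r_[np.mean(y), coefs, np.var(y)]    # [const, ar..., ma..., sigma2]
        try:
            fit = ARIMA(y, order=(p, 0, q)).fit(start_params=start)
        except Exception:
            continue                                   # skip failed optimisations
        if fit.llf > best.llf:
            best = fit
    return best
```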

Applications such as uncertainty quantification and optical tomography require solving the radiative transfer equation (RTE) many times for various parameters, so efficient solvers for the RTE are highly desirable. Source Iteration with Synthetic Acceleration (SISA) is a popular and successful iterative solver for the RTE. Synthetic Acceleration (SA) acts as a preconditioning step that accelerates the convergence of Source Iteration (SI). After each source iteration, classical SA strategies introduce a correction to the macroscopic particle density by solving a low-order approximation to a kinetic correction equation; for example, Diffusion Synthetic Acceleration (DSA) uses the diffusion limit. However, these strategies may become less effective when the underlying low-order approximations are not accurate enough. Furthermore, they do not exploit low-rank structures with respect to the parameters of parametric problems. To address these issues, we propose enhancing SISA with data-driven reduced-order models (ROMs) for the parametric problem and the corresponding kinetic correction equation. First, the ROM for the parametric problem can be utilized to obtain an improved initial guess. Second, the ROM for the kinetic correction equation can be utilized to design a novel SA strategy called ROMSAD. In the early stage, ROMSAD adopts a ROM-based approximation, which builds on the kinetic description of the correction equation and leverages low-rank structures with respect to the parameters; this ROM-based approximation is more efficient than DSA in the early stage of SI. In the later stage, ROMSAD automatically switches to DSA to leverage its robustness. Additionally, we propose an approach to construct the ROM for the kinetic correction equation without directly solving it. In a series of numerical tests, we compare the performance of the proposed methods with SI-DSA and a DSA-preconditioned GMRES solver.
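For orientation, the SI-with-synthetic-acceleration loop has the generic shape below. This is our schematic only: `sweep` and `diffusion_solve` are problem-specific stand-ins for the transport sweep and the low-order correction solve, and ROMSAD as described above would replace `diffusion_solve` with a ROM-based correction in the early iterations before falling back to DSA.

```python
import numpy as np

def si_dsa(sweep, diffusion_solve, phi0, tol=1e-8, max_iter=200):
    """Source iteration with a synthetic-acceleration correction, schematically.
    sweep(phi):            one transport sweep given the current scalar flux
    diffusion_solve(res):  low-order (e.g. diffusion) estimate of the correction"""
    phi = phi0
    for k in range(max_iter):
        phi_half = sweep(phi)                    # source iteration step
        delta = diffusion_solve(phi_half - phi)  # correction to the particle density
        phi_new = phi_half + delta
        if np.linalg.norm(phi_new - phi) <= tol * np.linalg.norm(phi_new):
            return phi_new, k + 1
        phi = phi_new
    return phi, max_iter
```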

Contemporary practices in instruction tuning often hinge on scaling up data without a clear strategy for ensuring data quality, inadvertently introducing noise that may compromise model performance. To address this challenge, we introduce \textsc{Nuggets}, a novel and efficient methodology that leverages one-shot learning to discern and select high-quality instruction data from extensive datasets. \textsc{Nuggets} assesses the potential of individual instruction examples to act as effective one-shot learning instances, thereby identifying those that can significantly improve performance across diverse tasks. \textsc{Nuggets} utilizes a scoring system based on the impact of candidate examples on the perplexity of a diverse anchor set, facilitating the selection of the most advantageous data for instruction tuning. Through comprehensive evaluations on two benchmarks, MT-Bench and Alpaca-Eval, we show that instruction tuning with the top 1\% of examples curated by \textsc{Nuggets} substantially outperforms conventional methods employing the entire dataset.
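The scoring idea can be caricatured as follows. Here `log_ppl` is a hypothetical model call returning the log-perplexity of a target given a prompt, and the exact scoring rule, prompt format, and aggregation used by \textsc{Nuggets} may differ from this sketch.

```python
def one_shot_score(candidate, anchors, log_ppl):
    """Score a candidate instruction example by how often it lowers the
    perplexity of a fixed anchor set when prepended as a one-shot demo.
    log_ppl(prompt, target) -> log-perplexity of `target` given `prompt`."""
    wins = 0
    for a in anchors:
        zero_shot = log_ppl(a["instruction"], a["response"])
        one_shot = log_ppl(candidate + "\n\n" + a["instruction"], a["response"])
        wins += one_shot < zero_shot          # the candidate helped on this anchor
    return wins / len(anchors)                # fraction of anchor tasks improved
```

Candidates would then be ranked by this score and only the top fraction kept for instruction tuning.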

We develop a Bayesian inference method for discretely observed stochastic differential equations (SDEs). Inference is challenging for most SDEs due to the analytical intractability of the likelihood function. Nevertheless, forward simulation via numerical methods is straightforward, motivating the use of approximate Bayesian computation (ABC). We propose a conditional simulation scheme for SDEs that is based on lookahead strategies for sequential Monte Carlo (SMC) and particle smoothing using backward simulation. This leads to the simulation of trajectories that are consistent with the observed trajectory, thereby increasing the ABC acceptance rate. We additionally employ an invariant neural network, previously developed for Markov processes, to learn the summary statistics function required in ABC. The neural network is incrementally retrained by exploiting an ABC-SMC sampler, which provides new training data at each round. Since the SDE simulation scheme differs from standard forward simulation, we propose a suitable importance sampling correction, which has the added advantage of guiding the parameters towards regions of high posterior density, especially in the first ABC-SMC round. Our approach achieves accurate inference and is about three times faster than standard (forward-only) ABC-SMC. We illustrate our method in five simulation studies, including three examples from the Chan-Karolyi-Longstaff-Sanders SDE family, a stochastic bi-stable model (Schl{\"o}gl) that is notoriously challenging for ABC methods, and a two-dimensional biochemical reaction network.
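Forward simulation, the cheap ingredient that makes ABC attractive here, can be as simple as an Euler-Maruyama scheme. Below is a sketch for a CKLS-type SDE, dX_t = kappa(theta - X_t) dt + sigma X_t^gamma dW_t; clipping the state at zero inside the diffusion term is our own numerical guard, not part of the model.

```python
import numpy as np

def ckls_euler_maruyama(x0, kappa, theta, sigma, gamma, T, n, rng):
    """Euler-Maruyama forward simulation of a CKLS SDE on [0, T] with n steps."""
    dt = T / n
    x = np.empty(n + 1)
    x[0] = x0
    for k in range(n):
        dw = rng.normal(0.0, np.sqrt(dt))                       # Brownian increment
        drift = kappa * (theta - x[k]) * dt
        diffusion = sigma * max(x[k], 0.0) ** gamma * dw        # clip to keep real
        x[k + 1] = x[k] + drift + diffusion
    return x

path = ckls_euler_maruyama(0.1, 0.5, 0.1, 0.2, 0.5, T=1.0, n=500,
                           rng=np.random.default_rng(0))
```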

The machine learning community has shown increasing interest in addressing the domain adaptation problem on symmetric positive definite (SPD) manifolds. This interest is primarily driven by the complexities of neuroimaging data generated from brain signals, which often exhibit shifts in data distribution across recording sessions. These neuroimaging data, represented by signal covariance matrices, possess the mathematical properties of symmetry and positive definiteness. However, applying conventional domain adaptation methods is challenging because these mathematical properties can be disrupted when operating on covariance matrices. In this study, we introduce a novel geometric deep learning-based approach utilizing optimal transport on SPD manifolds to manage discrepancies in both marginal and conditional distributions between the source and target domains. We evaluate the effectiveness of this approach in three cross-session brain-computer interface scenarios and provide visualized results for further insights. The GitHub repository of this study can be accessed at //github.com/GeometricBCI/Deep-Optimal-Transport-for-Domain-Adaptation-on-SPD-Manifolds.
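To illustrate the geometry involved, the affine-invariant Riemannian distance between two SPD matrices, the kind of structure that plain Euclidean operations on covariance matrices would break, can be computed as below. This is standard SPD-manifold machinery for context, not the paper's optimal-transport method itself.

```python
import numpy as np
from scipy.linalg import logm, fractional_matrix_power

def airm_distance(A, B):
    """Affine-invariant Riemannian distance between SPD matrices A and B:
    d(A, B) = || logm(A^{-1/2} B A^{-1/2}) ||_F."""
    A_inv_sqrt = fractional_matrix_power(A, -0.5)
    M = A_inv_sqrt @ B @ A_inv_sqrt
    return np.linalg.norm(logm(M), "fro")

# example with two random SPD (covariance-like) matrices
rng = np.random.default_rng(0)
X, Y = rng.normal(size=(2, 50, 4))
A = X.T @ X / 50 + 1e-6 * np.eye(4)
B = Y.T @ Y / 50 + 1e-6 * np.eye(4)
print(airm_distance(A, B))
```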

The generalization bound is a crucial theoretical tool for assessing the generalizability of learning methods, and there is a vast literature on the generalizability of normal learning, adversarial learning, and data poisoning. Unlike other data poisoning attacks, the backdoor attack has the special property that the poisoned triggers are contained in both the training set and the test set, and the purpose of the attack is twofold. To our knowledge, a generalization bound for the backdoor attack has not been established. In this paper, we fill this gap by deriving algorithm-independent generalization bounds in the clean-label backdoor attack scenario. Precisely, based on the goals of the backdoor attack, we give upper bounds for the clean-sample population error and the poison population error in terms of the empirical error on the poisoned training dataset. Furthermore, based on the theoretical result, a new clean-label backdoor attack is proposed that computes the poisoning trigger by combining adversarial noise and indiscriminate poison. We show its effectiveness in a variety of settings.

Conventional diffusion models typically rely on a fixed forward process, which implicitly defines complex marginal distributions over latent variables. This can often complicate the reverse process's task of learning generative trajectories, and results in costly inference for diffusion models. To address these limitations, we introduce Neural Flow Diffusion Models (NFDM), a novel framework that enhances diffusion models by supporting a broader range of forward processes beyond the standard Gaussian. We also propose a novel parameterization technique for learning the forward process. Our framework provides an end-to-end, simulation-free optimization objective, effectively minimizing a variational upper bound on the negative log-likelihood. Experimental results demonstrate NFDM's strong performance, evidenced by state-of-the-art likelihood estimation. Furthermore, we investigate NFDM's capacity for learning generative dynamics with specific characteristics, such as deterministic straight-line trajectories, and demonstrate how the framework can be adapted for learning bridges between two distributions. The results underscore NFDM's versatility and its potential for a wide range of applications.

Accurately modeling the correlation structure of errors is essential for reliable uncertainty quantification in probabilistic time series forecasting. Recent deep learning models for multivariate time series have introduced efficient parameterizations of time-varying contemporaneous covariance, but they often assume temporal independence of errors for simplicity. However, real-world data frequently exhibit significant error autocorrelation and cross-lag correlation due to factors such as missing covariates. In this paper, we present a plug-and-play method that learns the covariance structure of errors over multiple steps for autoregressive models with Gaussian-distributed errors. To achieve scalable inference and computational efficiency, we model the contemporaneous covariance using a low-rank-plus-diagonal parameterization and characterize the cross-covariance through a group of independent latent temporal processes. The learned covariance matrix can be used to calibrate predictions based on observed residuals. We evaluate our method on probabilistic models built on RNN and Transformer architectures, and the results confirm the effectiveness of our approach in enhancing predictive accuracy and uncertainty quantification without significantly increasing the parameter count.
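The contemporaneous part of such a parameterization is easy to express with PyTorch's built-in low-rank Gaussian distribution. This sketch covers only the low-rank-plus-diagonal covariance term; the latent temporal processes that the paper uses for cross-covariance are not modeled here.

```python
import torch

def lowrank_gaussian_nll(y, mu, V, d):
    """Negative log-likelihood under N(mu, Sigma) with Sigma = V V^T + diag(d).
    Shapes: y, mu, d are (..., n); V is (..., n, r) with rank r << n."""
    dist = torch.distributions.LowRankMultivariateNormal(mu, cov_factor=V, cov_diag=d)
    return -dist.log_prob(y)

# example: batch of 32 error vectors of dimension 64 with a rank-4 factor
y = torch.randn(32, 64)
mu = torch.zeros(32, 64)
V = 0.1 * torch.randn(32, 64, 4)   # low-rank factor
d = torch.ones(32, 64)             # positive diagonal
print(lowrank_gaussian_nll(y, mu, V, d).shape)   # torch.Size([32])
```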

Sequential recommendation, as an emerging topic, has attracted increasing attention due to its important practical significance. Models based on deep learning and attention mechanisms have achieved good performance in sequential recommendation. Recently, generative models based on the Variational Autoencoder (VAE) have shown a unique advantage in collaborative filtering. In particular, the sequential VAE model, a recurrent version of the VAE, can effectively capture temporal dependencies among items in a user sequence and perform sequential recommendation. However, VAE-based models suffer from a common limitation: the representational ability of the obtained approximate posterior distribution is limited, resulting in lower-quality generated samples; this is especially true when generating sequences. To solve this problem, we propose a novel method called Adversarial and Contrastive Variational Autoencoder (ACVAE) for sequential recommendation. Specifically, we first introduce adversarial training for sequence generation under the Adversarial Variational Bayes (AVB) framework, which enables our model to generate high-quality latent variables. Then, we employ a contrastive loss, whose minimization enables the latent variables to learn more personalized and salient characteristics. Besides, when encoding the sequence, we apply a recurrent and convolutional structure to capture global and local relationships in the sequence. Finally, we conduct extensive experiments on four real-world datasets. The experimental results show that our proposed ACVAE model outperforms other state-of-the-art methods.
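As a generic stand-in for the contrastive term, an InfoNCE-style loss over latent variables could look like the sketch below; ACVAE's actual contrastive objective and its construction of positive and negative pairs may differ.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(z, z_pos, temperature=0.1):
    """InfoNCE-style contrastive loss over latent variables: each z[i] is pulled
    toward its positive z_pos[i] and pushed away from other samples in the batch."""
    z = F.normalize(z, dim=-1)
    z_pos = F.normalize(z_pos, dim=-1)
    logits = z @ z_pos.t() / temperature                   # (B, B) similarity matrix
    labels = torch.arange(z.size(0), device=z.device)      # diagonal entries are positives
    return F.cross_entropy(logits, labels)

# example: batch of 16 latent vectors and their positives
z, z_pos = torch.randn(16, 32), torch.randn(16, 32)
print(contrastive_loss(z, z_pos))
```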
