
The main goal of this paper is to propose a new quaternion total variation regularization model for solving linear ill-posed quaternion inverse problems, which arise in three-dimensional signal filtering and color image processing. The quaternion total variation term in the model is represented by collaborative total variation regularization and approximated by a quaternion iteratively reweighted norm. A novel flexible quaternion generalized minimal residual method is presented to quickly solve this model. An improved convergence theory is established to obtain a sharp upper bound on the residual norm of the quaternion generalized minimal residual method (QGMRES). The convergence theory is also presented for preconditioned QGMRES. Numerical experiments indicate the superiority of the proposed model and algorithms over state-of-the-art methods in terms of iteration steps, CPU time, and the quality criteria of restored color images.
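As a rough sketch only (not the paper's exact formulation), a total-variation-regularized inverse problem of this kind, with the TV term smoothed by an iteratively reweighted norm (IRN), can be written generically as

```latex
\min_{\mathbf{x}} \ \frac{1}{2}\,\|\mathbf{A}\mathbf{x}-\mathbf{b}\|_{2}^{2}
+ \mu\,\mathrm{TV}(\mathbf{x}),
\qquad
\mathrm{TV}(\mathbf{x}) \;\approx\; \frac{1}{2}\,\bigl\|\mathbf{W}^{1/2}\mathbf{D}\mathbf{x}\bigr\|_{2}^{2},
```

where $\mathbf{D}$ is a discrete gradient operator and $\mathbf{W}$ is a diagonal weight matrix recomputed at each IRN iteration from the current gradient magnitudes; in the quaternion setting all quantities live in the quaternion algebra and the collaborative TV couples the color channels. The symbols $\mu$, $\mathbf{D}$, and $\mathbf{W}$ here are generic placeholders, not the paper's notation.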

Related content

The ACM/IEEE 23rd International Conference on Model Driven Engineering Languages and Systems (MODELS) is the premier conference series for model-driven software and systems engineering, organized with the support of ACM SIGSOFT and IEEE TCSE. Since 1998, MODELS has covered all aspects of modeling, from languages and methods to tools and applications. Its participants come from diverse backgrounds, including researchers, academics, engineers, and industry professionals. MODELS 2019 is a forum where participants can exchange cutting-edge research results and innovative practical experience around modeling and model-driven software and systems. This year's edition offers the modeling community an opportunity to further advance the foundations of modeling, and to propose innovative applications of modeling in emerging areas such as cyber-physical systems, embedded systems, socio-technical systems, cloud computing, big data, machine learning, security, open source, and sustainability.

October 1, 2024

Knowledge distillation (KD) is known as a promising solution to compress large language models (LLMs) via transferring their knowledge to smaller models. During this process, white-box KD methods usually minimize the distance between the output distributions of the two models so that more knowledge can be transferred. However, in the current white-box KD framework, the output distributions are from the respective output spaces of the two models, using their own prediction heads. We argue that the space discrepancy will lead to low similarity between the teacher model and the student model on both representation and distribution levels. Furthermore, this discrepancy also hinders the KD process between models with different vocabularies, which is common for current LLMs. To address these issues, we propose a dual-space knowledge distillation (DSKD) framework that unifies the output spaces of the two models for KD. On the basis of DSKD, we further develop a cross-model attention mechanism, which can automatically align the representations of the two models with different vocabularies. Thus, our framework is not only compatible with various distance functions for KD (e.g., KL divergence) like the current framework, but also supports KD between any two LLMs regardless of their vocabularies. Experiments on task-agnostic instruction-following benchmarks show that DSKD significantly outperforms the current white-box KD framework with various distance functions, and also surpasses existing KD methods for LLMs with different vocabularies.
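As a hedged illustration of the *current* white-box KD framework the abstract contrasts itself against (not of DSKD itself), the distillation loss is typically a distance, such as KL divergence, between the teacher's and student's token-level output distributions. A minimal NumPy sketch, with all function names chosen here for illustration:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def kd_kl_loss(teacher_logits, student_logits):
    """Token-level KL(teacher || student), averaged over sequence positions.

    Both logit arrays are (positions, vocab); in the standard white-box
    framework both come from the models' own prediction heads, so the two
    vocabularies must match -- the limitation DSKD is designed to remove.
    """
    p = softmax(teacher_logits)   # teacher output distribution
    q = softmax(student_logits)   # student output distribution
    kl = (p * (np.log(p + 1e-12) - np.log(q + 1e-12))).sum(axis=-1)
    return kl.mean()

rng = np.random.default_rng(0)
t = rng.normal(size=(4, 10))      # 4 positions, toy vocabulary of 10
print(kd_kl_loss(t, t))           # identical distributions -> ~0
```

Minimizing this quantity with respect to the student's parameters transfers the teacher's distributional knowledge; other distance functions (e.g., reverse KL or JS divergence) slot into the same place.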

We introduce a new task -- language-driven video inpainting, which uses natural language instructions to guide the inpainting process. This approach overcomes the limitations of traditional video inpainting methods that depend on manually labeled binary masks, a process often tedious and labor-intensive. We present the Remove Objects from Videos by Instructions (ROVI) dataset, containing 5,650 videos and 9,091 inpainting results, to support training and evaluation for this task. We also propose a novel diffusion-based language-driven video inpainting framework, the first end-to-end baseline for this task, integrating Multimodal Large Language Models to understand and execute complex language-based inpainting requests effectively. Our comprehensive results showcase the dataset's versatility and the model's effectiveness in various language-instructed inpainting scenarios. We will make datasets, code, and models publicly available.

This paper deals with asymptotic errors, that is, limit theorems for the errors between numerical and exact solutions of stochastic differential equations (SDEs) driven by one-dimensional fractional Brownian motion (fBm). The Euler-Maruyama, higher-order Milstein, and Crank-Nicolson schemes are among the most studied numerical schemes for SDEs driven by fBm (fSDEs). Most previous studies of asymptotic errors have derived specific asymptotic errors for these schemes as main theorems or corollaries. Even in the one-dimensional case, the asymptotic error was not determined for the Milstein or the Crank-Nicolson method when the Hurst exponent is less than or equal to $1/3$ and a drift term is present. We obtain a new evaluation method for convergence and asymptotic errors. This evaluation method improves the conditions under which convergence of a numerical scheme can be proved, and yields the asymptotic error under the same conditions. We completely determine the asymptotic error of the Milstein method for arbitrary orders. In addition, we newly determine the asymptotic error of the Crank-Nicolson method for $1/4<H\leq 1/3$.
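To make the setting concrete, here is a hedged sketch of the Euler-Maruyama scheme for a scalar fSDE, with fBm increments generated exactly via a Cholesky factorization of the fBm covariance (fine for small step counts; the drift/diffusion functions and parameters below are illustrative, not the paper's examples):

```python
import numpy as np

def fbm_increments(n, H, T=1.0, seed=0):
    """Exact fBm sample path increments via Cholesky of the covariance
    Cov(B_s, B_t) = (s^{2H} + t^{2H} - |s - t|^{2H}) / 2.  O(n^3) cost."""
    t = T * np.arange(1, n + 1) / n
    cov = 0.5 * (t[:, None]**(2 * H) + t[None, :]**(2 * H)
                 - np.abs(t[:, None] - t[None, :])**(2 * H))
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(n))  # jitter for safety
    B = L @ np.random.default_rng(seed).normal(size=n)
    return np.diff(np.concatenate(([0.0], B)))

def euler_maruyama(b, sigma, x0, n, H, T=1.0, seed=0):
    """X_{k+1} = X_k + b(X_k) dt + sigma(X_k) dB^H_k on a uniform grid."""
    dt, dB = T / n, fbm_increments(n, H, T, seed)
    x = np.empty(n + 1)
    x[0] = x0
    for k in range(n):
        x[k + 1] = x[k] + b(x[k]) * dt + sigma(x[k]) * dB[k]
    return x

# hypothetical test equation: dX = -X dt + 0.2 X dB^H with H = 0.4
path = euler_maruyama(lambda x: -x, lambda x: 0.2 * x, 1.0, 200, H=0.4)
```

The Milstein and Crank-Nicolson schemes studied in the paper replace the update line with higher-order or implicit variants; the asymptotic-error question is how the difference between `path` and the exact solution, suitably rescaled in `n`, converges in law.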

Multiphase flows are an important class of fluid flow and their study facilitates the development of diverse applications in industrial, natural and biomedical systems. Simulating such flows requires significant computational resources, making it prudent to devise an adaptive mesh refinement (AMR) method to mitigate this burden. We use a mathematical model that takes a continuum mechanical approach to describe multiphase mixture flows. The resulting system of equations poses numerical challenges due to the presence of multiple non-linear terms and a co-incompressibility condition, while the resulting fluid dynamics necessitate the development of an adaptive mesh refinement technique to accurately capture regions of interest while keeping computational costs low. We present an accurate, robust, and efficient computational method for simulating multiphase mixtures on adaptive grids, and utilize a multigrid solver to precondition the saddle-point system. We demonstrate that the AMR solver asymptotically approaches second order accuracy in $L^1$, $L^2$ and $L^\infty$ norms for all solution variables of the Newtonian and non-Newtonian models. All experiments demonstrate the solver is stable provided the time step size satisfies the imposed CFL condition. The solver can accurately resolve sharp gradients in the solution and, with the multigrid preconditioner, the solver behavior is independent of grid spacing. Our AMR solver offers a major cost savings benefit, providing up to 10x speedup in the numerical experiments presented here, with greater speedup possible depending on the problem set-up.
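As a small hedged aside on the stability statement above: an advective CFL restriction on the time step is commonly computed as below, though the precise condition imposed by the paper's discretization may differ.

```python
import numpy as np

def cfl_dt(u, v, dx, dy, cfl=0.5):
    """Largest time step allowed by a standard advective CFL restriction,
    dt <= cfl / (|u|max/dx + |v|max/dy).  A common choice, not necessarily
    the exact condition used by this solver."""
    umax = max(np.abs(u).max(), 1e-14)  # guard against zero velocity
    vmax = max(np.abs(v).max(), 1e-14)
    return cfl / (umax / dx + vmax / dy)
```

On an adaptively refined grid the restriction is evaluated on the finest level, since the smallest cells set the binding constraint.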

This paper presents a novel framework for tensor eigenvalue analysis in the context of multi-modal data fusion, leveraging topological invariants such as Betti numbers. Traditional approaches to tensor eigenvalue analysis often extend matrix theory, whereas this work introduces a topological perspective to enhance the understanding of tensor structures. By establishing new theorems that link eigenvalues to topological features, the proposed framework provides deeper insights into the latent structure of data, improving both interpretability and robustness. Applications in data fusion demonstrate the theoretical and practical significance of this approach, with potential for broad impact in machine learning and data science.

In this paper we develop second kind integral formulations for flexural wave scattering problems involving the clamped, free, and supported plate boundary conditions. While the clamped plate problem can be solved with layer potentials previously developed for the biharmonic equation [1], the free plate problem is more difficult due to the complex nature of the boundary conditions. In this paper we describe a representation for the free plate problem that uses the Hilbert transform to cancel singularities of certain layer potentials, ultimately leading to a Fredholm integral equation of the second kind. Additionally, for the supported plate problem, we improve on an existing representation to obtain a second kind integral equation. With these representations, it is possible to solve flexural wave scattering problems with high-order-accurate methods, examine the far-field patterns of scattering objects, and solve large problems involving multiple scatterers.
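For orientation (a standard formulation, hedged in that the paper's precise conventions may differ): time-harmonic flexural waves on a thin Kirchhoff plate satisfy the biharmonic wave equation, and the clamped conditions are the simplest of the three boundary conditions considered,

```latex
\Delta^{2} u - k^{4} u = 0 \quad \text{in } \mathbb{R}^{2}\setminus\overline{\Omega},
\qquad
u = \partial_{n} u = 0 \quad \text{on } \partial\Omega \quad \text{(clamped)}.
```

The free-plate conditions instead require the vanishing of the bending moment and the effective shear on $\partial\Omega$, which involve higher tangential derivatives and the Poisson ratio; it is this structure that produces the troublesome singularities the Hilbert-transform representation is built to cancel.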

This note presents an approach for estimating the spatial distribution of static properties in reservoir modeling using a nearest-neighbor neural network. The method leverages the strengths of neural networks in approximating complex, non-linear functions, particularly for tasks involving spatial interpolation. It incorporates a nearest-neighbor algorithm to capture local spatial relationships between data points and introduces randomization to quantify the uncertainty inherent in the interpolation process. This approach addresses the limitations of traditional geostatistical methods, such as Inverse Distance Weighting (IDW) and Kriging, which often fail to model the complex non-linear dependencies in reservoir data. By integrating spatial proximity and uncertainty quantification, the proposed method can improve the accuracy of static property predictions like porosity and permeability.
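To illustrate the two ingredients named above, spatial proximity and uncertainty quantification, here is a minimal hedged sketch: the note feeds nearest-neighbor information into a neural network, whereas this toy version uses a plain k-nearest-neighbor average with bootstrap resampling of the neighbors to produce an uncertainty estimate (all names and parameters are illustrative):

```python
import numpy as np

def knn_interpolate(xy_train, z_train, xy_query, k=5, n_boot=50, seed=0):
    """Mean prediction and bootstrap std at each query location, using the
    k nearest data points (e.g., well measurements of porosity)."""
    rng = np.random.default_rng(seed)
    # pairwise distances: (n_query, n_train)
    d = np.linalg.norm(xy_query[:, None, :] - xy_train[None, :, :], axis=-1)
    idx = np.argsort(d, axis=1)[:, :k]             # k nearest neighbors per query
    preds = np.empty((n_boot, len(xy_query)))
    for b in range(n_boot):
        pick = rng.integers(0, k, size=idx.shape)  # resample neighbors with replacement
        chosen = np.take_along_axis(idx, pick, axis=1)
        preds[b] = z_train[chosen].mean(axis=1)
    return preds.mean(axis=0), preds.std(axis=0)
```

The spread of the bootstrap replicates gives a crude per-location uncertainty band; the proposed method replaces the simple neighbor average with a learned non-linear mapping, which is where it aims to beat IDW and Kriging.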

This paper gives an elementary proof for the following theorem: a renewal process can be represented by a doubly-stochastic Poisson process (DSPP) if and only if the Laplace-Stieltjes transform of the inter-arrival times is of the following form: $$\phi(\theta)=\lambda\left[\lambda+\theta+k\int_0^\infty\left(1-e^{-\theta z}\right)\,dG(z)\right]^{-1},$$ for some positive real numbers $\lambda, k$, and some distribution function $G$ with $G(\infty)=1$. The intensity process $\Lambda(t)$ of the corresponding DSPP jumps between $\lambda$ and $0$, with the time spent at $\lambda$ being independent random variables that are exponentially distributed with mean $1/k$, and the time spent at $0$ being independent random variables with distribution function $G$.
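The intensity process described in the theorem is easy to simulate, which makes the structure concrete: it alternates between level $\lambda$ for exponential durations of mean $1/k$ and level $0$ for $G$-distributed durations. A short sketch (the choice of $G$ below is an arbitrary example, not from the paper):

```python
import numpy as np

def simulate_intensity(lam, k, sample_G, T, seed=0):
    """Piecewise-constant on/off intensity of the DSPP: stays at `lam` for
    Exp(mean 1/k) durations and at 0 for durations drawn from G."""
    rng = np.random.default_rng(seed)
    t, segments, on = 0.0, [], True   # segments: (start, duration, level)
    while t < T:
        dur = rng.exponential(1.0 / k) if on else sample_G(rng)
        segments.append((t, dur, lam if on else 0.0))
        t += dur
        on = not on
    return segments

# example with G exponential of mean 2 (hypothetical choice of G)
seg = simulate_intensity(lam=1.5, k=0.5, sample_G=lambda r: r.exponential(2.0), T=10.0)
```

Conditioning a Poisson process on such a path $\Lambda(t)$ and thinning accordingly yields the DSPP; the theorem characterizes exactly which renewal processes arise this way via the stated Laplace-Stieltjes transform.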

This paper surveys research in the quickly advancing field of instruction tuning (IT), a crucial technique for enhancing the capabilities and controllability of large language models (LLMs). Instruction tuning refers to the process of further training LLMs on a dataset of \textsc{(instruction, output)} pairs in a supervised fashion, which bridges the gap between the next-word prediction objective of LLMs and the users' objective of having LLMs adhere to human instructions. In this work, we make a systematic review of the literature, including the general methodology of IT, the construction of IT datasets, the training of IT models, and applications to different modalities and domains, along with an analysis of aspects that influence the outcome of IT (e.g., generation of instruction outputs, size of the instruction dataset, etc.). We also review the potential pitfalls of IT and criticism against it, point out current deficiencies of existing strategies, and suggest some avenues for fruitful research.
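The supervised setup described above can be sketched concretely: each \textsc{(instruction, output)} pair is concatenated, and the loss is computed only on the output tokens. The snippet below is a hedged illustration; masking prompt labels with -100 is a common convention (e.g., in Hugging Face-style trainers), and the whitespace "tokenizer" is a stand-in for a real subword tokenizer.

```python
def build_example(instruction, output, tokenize):
    """Build (input_ids, labels) for one instruction-tuning example.
    Instruction tokens are masked out of the loss with -100, so the model
    is trained only to produce the response."""
    prompt_ids = tokenize(instruction)
    output_ids = tokenize(output)
    input_ids = prompt_ids + output_ids
    labels = [-100] * len(prompt_ids) + output_ids   # loss on response only
    return input_ids, labels

# toy whitespace tokenizer (hypothetical; real IT pipelines use subword vocabularies)
vocab = {}
def tok(text):
    return [vocab.setdefault(w, len(vocab)) for w in text.split()]

ids, labels = build_example("Translate to French : hello", "bonjour", tok)
```

Design choices surveyed in the paper, such as whether to also compute loss on the instruction, how outputs are generated, and how large the dataset should be, all manifest as variations of this basic construction.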

Multi-relation question answering is a challenging task due to the need for elaborate analysis of questions and reasoning over multiple fact triples in a knowledge base. In this paper, we present a novel model called the Interpretable Reasoning Network that employs an interpretable, hop-by-hop reasoning process for question answering. The model dynamically decides which part of an input question should be analyzed at each hop; predicts a relation that corresponds to the current parsed results; utilizes the predicted relation to update the question representation and the state of the reasoning process; and then drives the next-hop reasoning. Experiments show that our model yields state-of-the-art results on two datasets. More interestingly, the model can offer traceable and observable intermediate predictions for reasoning analysis and failure diagnosis.
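The hop loop described above can be sketched schematically; this is a hedged toy version with attention over question token vectors, a relation prediction, and a state update, and it is not the paper's exact network (all shapes and update rules here are illustrative):

```python
import numpy as np

def softmax(z):
    z = z - z.max()  # stabilize
    e = np.exp(z)
    return e / e.sum()

def hop_reasoning(q_tokens, rel_emb, n_hops=2):
    """Schematic hop-by-hop loop.
    q_tokens: (n_words, d) question token vectors.
    rel_emb:  (n_rels, d) relation embeddings.
    Each hop: attend over the question with the current state, predict a
    relation from the attended focus, then update the state."""
    state = np.zeros(q_tokens.shape[1])
    relations = []
    for _ in range(n_hops):
        if state.any():
            att = softmax(q_tokens @ state)            # which words to analyze now
        else:
            att = np.full(len(q_tokens), 1.0 / len(q_tokens))
        focus = att @ q_tokens                         # attended question part
        rel = int(np.argmax(rel_emb @ focus))          # relation for this hop
        relations.append(rel)
        state = state + focus - rel_emb[rel]           # "consume" the predicted relation
    return relations
```

The list of per-hop relations is exactly the kind of traceable intermediate prediction the abstract highlights for reasoning analysis and failure diagnosis.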
