
In this contribution, we consider a wave equation with a time-dependent variable-order fractional damping term and a nonlinear source. Since nonlinear variable-order fractional wave equations can rarely be expressed in closed form in terms of special functions, we investigate the existence and uniqueness of solutions to this problem by Rothe's method. First, the weak formulation of the wave problem is proposed. Then, uniqueness of a solution is established by employing Gr\"onwall's lemma. The basic idea of the Rothe scheme is to use Rothe functions to extend the single-time-step solutions over the entire time frame. Accordingly, we next introduce a uniform-mesh time-discrete scheme based on a backward discrete convolution approximation. Under reasonable assumptions on the given data, we derive a priori estimates for the time-discrete solution. Combining these estimates with the Rothe functions yields a proof of the solution's existence over the whole time interval. Finally, the full discretisation of the problem is obtained by invoking Galerkin spectral techniques in the spatial direction, and numerical examples are given.
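The core construction can be sketched generically as follows (this omits the paper's discrete convolution approximation of the variable-order fractional damping term, which is specific to that operator):

```latex
% Uniform mesh: t_i = i\tau, \tau = T/n, with time-discrete solutions
% u_i \approx u(t_i). The Rothe function interpolates them piecewise
% linearly over the whole interval [0, T]:
u_n(t) = u_{i-1} + (t - t_{i-1})\,\frac{u_i - u_{i-1}}{\tau},
\qquad t \in [t_{i-1}, t_i],\quad i = 1, \dots, n.
% A priori estimates uniform in n give compactness, and a subsequence
% of (u_n) converges to a weak solution as n \to \infty.
```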

Related content

This paper introduces a formulation of the variable density incompressible Navier-Stokes equations obtained by modifying the nonlinear terms in a consistent way. For Galerkin discretizations, the formulation leads to fully discrete conservation of mass, squared density, momentum, angular momentum, and kinetic energy without the divergence-free constraint being strongly enforced. In addition to favorable conservation properties, the formulation is shown to make the density field invariant to global shifts. The effect of viscous regularizations on conservation properties is also investigated. Numerical tests validate the theory developed in this work. The new formulation shows superior performance compared to other formulations from the literature, both in terms of accuracy for smooth problems and in terms of robustness.
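As a point of reference (not necessarily the formulation of this paper), a classical consistent modification of the convective term in the variable-density setting is the skew-symmetric form:

```latex
C(\rho, u) = (\rho\, u \cdot \nabla)\, u
           \;+\; \tfrac{1}{2}\,\big(\nabla \cdot (\rho u)\big)\, u ,
% which, after integration by parts with suitable boundary conditions,
% satisfies
\int_\Omega C(\rho, u) \cdot u \,\mathrm{d}x = 0 ,
% so the convective term contributes nothing to the kinetic-energy
% balance even when the divergence-free constraint is not strongly
% enforced.
```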

We examine the behaviour of the Laplace and saddlepoint approximations in the high-dimensional setting, where the dimension of the model is allowed to increase with the number of observations. Approximations to the joint density, the marginal posterior density and the conditional density are considered. Our results show that under the mildest assumptions on the model, the error of the joint density approximation is $O(p^4/n)$ if $p = o(n^{1/4})$ for the Laplace approximation and saddlepoint approximation, and $O(p^3/n)$ if $p = o(n^{1/3})$ under additional assumptions on the second derivative of the log-likelihood. Stronger results are obtained for the approximation to the marginal posterior density.
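For intuition, a minimal one-dimensional instance of the Laplace approximation (the fixed-dimension case $p = 1$, far simpler than the growing-$p$ regime analysed above; all names here are illustrative) can be checked numerically:

```python
import numpy as np

def trapezoid(y, x):
    """Composite trapezoidal rule."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def laplace_approx_1d(h, h2, theta_hat, n):
    """One-dimensional Laplace approximation to I(n) = ∫ exp(-n h(θ)) dθ
    around the minimiser θ̂ of h:  I(n) ≈ exp(-n h(θ̂)) sqrt(2π/(n h''(θ̂)))."""
    return np.exp(-n * h(theta_hat)) * np.sqrt(2.0 * np.pi / (n * h2(theta_hat)))

# toy non-Gaussian negative log-density, minimised at θ̂ = 0
h = lambda t: t**2 / 2.0 + t**4 / 4.0
h2 = lambda t: 1.0 + 3.0 * t**2   # second derivative of h

def exact(n):
    """Reference value of the integral by fine quadrature."""
    grid = np.linspace(-3.0, 3.0, 200001)
    return trapezoid(np.exp(-n * h(grid)), grid)

for n in (100, 400):
    rel_err = abs(laplace_approx_1d(h, h2, 0.0, n) - exact(n)) / exact(n)
    print(n, rel_err)   # the relative error shrinks roughly like 1/n
```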

Task arithmetic has recently emerged as a cost-effective and scalable approach to edit pre-trained models directly in weight space: By adding the fine-tuned weights of different tasks, the model's performance can be improved on these tasks, while negating them leads to task forgetting. Yet, our understanding of the effectiveness of task arithmetic and its underlying principles remains limited. We present a comprehensive study of task arithmetic in vision-language models and show that weight disentanglement is the crucial factor that makes it effective. This property arises during pre-training and manifests when distinct directions in weight space govern separate, localized regions in function space associated with the tasks. Notably, we show that fine-tuning models in their tangent space by linearizing them amplifies weight disentanglement. This leads to substantial performance improvements across multiple task arithmetic benchmarks and diverse models. Building on these findings, we provide theoretical and empirical analyses of the neural tangent kernel (NTK) of these models and establish a compelling link between task arithmetic and the spatial localization of the NTK eigenfunctions. Overall, our work uncovers novel insights into the fundamental mechanisms of task arithmetic and offers a more reliable and effective approach to edit pre-trained models through the NTK linearization.
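The basic weight-space arithmetic can be sketched in a few lines (a toy illustration with hypothetical two-parameter "models"; real task vectors span millions of weights):

```python
import numpy as np

def task_vector(pretrained, finetuned):
    """Task vector: element-wise difference of fine-tuned and pre-trained weights."""
    return {k: finetuned[k] - pretrained[k] for k in pretrained}

def apply_task_vectors(pretrained, vectors, alpha=1.0):
    """Edit the pre-trained model by adding a scaled sum of task vectors.
    A negative alpha negates the vectors and induces task forgetting."""
    edited = {k: v.copy() for k, v in pretrained.items()}
    for tv in vectors:
        for k in edited:
            edited[k] += alpha * tv[k]
    return edited

# toy two-parameter "model" and two hypothetical fine-tunes
pre  = {"w": np.array([1.0, 2.0])}
ft_a = {"w": np.array([1.5, 2.0])}   # fine-tuned on task A
ft_b = {"w": np.array([1.0, 3.0])}   # fine-tuned on task B

merged = apply_task_vectors(pre, [task_vector(pre, ft_a), task_vector(pre, ft_b)])
print(merged["w"])   # both task edits combined: [1.5, 3.0]
```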

This paper deals with variable selection in the multivariate linear regression model when the data are observations on a spatial domain, namely a grid of sites in $\mathbb{Z}^d$ with $d\geqslant 2$. We use a criterion that allows us to characterize the subset of relevant variables through two parameters, and we propose estimators of these parameters based on spatially dependent observations. We prove the consistency of the proposed method under specified assumptions. A simulation study, conducted to assess the finite-sample behaviour of the proposed method in comparison with existing ones, is presented.

In this paper, we present a new methodology for developing arbitrarily high-order structure-preserving methods for solving the quantum Zakharov system. The key ingredients of our method are: (i) the original Hamiltonian energy is reformulated into a quadratic form by introducing a new quadratic auxiliary variable; (ii) based on the energy variational principle, the original system is then rewritten into a new equivalent system which inherits the mass conservation law and a quadratic energy; (iii) the resulting system is discretized by a symplectic Runge-Kutta method in time combined with the Fourier pseudo-spectral method in space. The proposed method achieves arbitrarily high-order accuracy in time and preserves the discrete mass and the original Hamiltonian energy exactly. Moreover, an efficient iterative solver is presented for the resulting discrete nonlinear equations. Finally, ample numerical examples are presented to demonstrate the theoretical claims and illustrate the efficiency of our methods.
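A generic version of step (i), in the spirit of invariant energy quadratization (the actual auxiliary variable chosen for the quantum Zakharov system may differ), reads:

```latex
% Energy with a non-quadratic part F(u), with F(u) + C > 0:
E[u] = \tfrac{1}{2}(\mathcal{L} u, u) + \int_\Omega F(u)\,\mathrm{d}x ,
\qquad q := \sqrt{F(u) + C} ,
% so that the energy becomes quadratic in the pair (u, q):
E[u, q] = \tfrac{1}{2}(\mathcal{L} u, u) + \int_\Omega \big(q^2 - C\big)\,\mathrm{d}x ,
\qquad q_t = \frac{F'(u)}{2\sqrt{F(u) + C}}\, u_t .
% Symplectic Runge-Kutta methods then conserve the quadratic invariant
% E[u, q] exactly at the discrete level.
```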

Self-supervised speech representations are known to encode both speaker and phonetic information, but how they are distributed in the high-dimensional space remains largely unexplored. We hypothesize that they are encoded in orthogonal subspaces, a property that lends itself to simple disentanglement. Applying principal component analysis to representations of two predictive coding models, we identify two subspaces that capture speaker and phonetic variances, and confirm that they are nearly orthogonal. Based on this property, we propose a new speaker normalization method which collapses the subspace that encodes speaker information, without requiring transcriptions. Probing experiments show that our method effectively eliminates speaker information and outperforms a previous baseline in phone discrimination tasks. Moreover, the approach generalizes and can be used to remove information of unseen speakers.
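A minimal sketch of this normalization idea, assuming (as a toy stand-in) that the top principal components capture the speaker subspace; the data and function names are illustrative:

```python
import numpy as np

def speaker_subspace(reps, k):
    """Top-k principal directions of the representations, taken here as a
    stand-in for the speaker subspace (an assumption of this sketch)."""
    centered = reps - reps.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[:k]                       # (k, dim) orthonormal rows

def collapse_subspace(reps, basis):
    """Remove the component of each representation lying in the subspace."""
    return reps - (reps @ basis.T) @ basis

rng = np.random.default_rng(0)
# toy data: a strong 1-D "speaker" direction plus isotropic "phonetic" noise
speaker_dir = np.array([1.0, 0.0, 0.0])
reps = rng.normal(size=(500, 3)) * 0.1 + rng.normal(size=(500, 1)) * speaker_dir

basis = speaker_subspace(reps, k=1)
normed = collapse_subspace(reps, basis)
# variance along the speaker direction is (almost) entirely removed
print(np.var(normed @ speaker_dir))
```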

Low-rank matrix completion consists of computing a matrix of minimal complexity that recovers a given set of observations as accurately as possible, and has numerous applications such as product recommendation. Unfortunately, existing methods for solving low-rank matrix completion are heuristics that, while highly scalable and often identifying high-quality solutions, do not possess any optimality guarantees. We reexamine matrix completion with an optimality-oriented eye, by reformulating low-rank problems as convex problems over the non-convex set of projection matrices and implementing a disjunctive branch-and-bound scheme that solves them to certifiable optimality. Further, we derive a novel and often tight class of convex relaxations by decomposing a low-rank matrix as a sum of rank-one matrices and incentivizing, via a Shor relaxation, that each two-by-two minor in each rank-one matrix has determinant zero. In numerical experiments, our new convex relaxations decrease the optimality gap by two orders of magnitude compared to existing attempts. Moreover, we showcase the performance of our disjunctive branch-and-bound scheme and demonstrate that it solves matrix completion problems over 150x150 matrices to certifiable optimality in hours, constituting an order of magnitude improvement on the state-of-the-art for certifiably optimal methods.
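The minor condition behind the relaxation rests on a classical fact: a matrix has rank at most one exactly when all of its two-by-two minors vanish. A small check (illustrative only; the Shor relaxation itself is not reproduced here):

```python
import numpy as np

def two_by_two_minors(X):
    """All 2x2 minors X[i,j]*X[k,l] - X[i,l]*X[k,j] of a matrix."""
    m, n = X.shape
    minors = []
    for i in range(m):
        for k in range(i + 1, m):
            for j in range(n):
                for l in range(j + 1, n):
                    minors.append(X[i, j] * X[k, l] - X[i, l] * X[k, j])
    return np.array(minors)

u = np.array([1.0, -2.0, 3.0])
v = np.array([4.0, 5.0])
R1 = np.outer(u, v)            # rank one: every 2x2 minor is zero
R2 = R1 + np.eye(3, 2)         # perturbed: rank > 1, nonzero minors

print(np.max(np.abs(two_by_two_minors(R1))))   # 0.0
print(np.max(np.abs(two_by_two_minors(R2))))   # strictly positive
```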

Discretizing a solution in the Fourier domain rather than the time domain presents a significant advantage in solving transport problems that vary smoothly and periodically in time, such as cardiorespiratory flows. The finite element solution of the resulting time-spectral formulation is investigated here for the convection-diffusion equations. In addition to the baseline Galerkin method, we consider stabilized approaches inspired by the streamline upwind Petrov-Galerkin (SUPG), least-squares (LSQ), and variational multiscale (VMS) methods. We also introduce a new augmented SUPG (ASU) method that, by design, produces a nodally exact solution in one dimension for piecewise linear interpolation functions. Comparing these five methods using 1D, 2D, and 3D canonical test cases shows that, while the ASU method is the most accurate overall, it exhibits convergence issues in extremely oscillatory flows with a high Womersley number in 3D. The VMS method presents an attractive alternative due to its excellent convergence characteristics and reasonable accuracy.
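For reference, the classical 1-D SUPG stabilization parameter, which for linear elements yields the nodally exact solution in the steady convection-diffusion case (the ASU method above modifies this design for the time-spectral setting; the function name is illustrative):

```python
import numpy as np

def supg_tau(a, kappa, h):
    """Classical 1-D SUPG stabilization parameter for convection speed a,
    diffusivity kappa, and mesh size h:
        tau = h/(2a) * (coth(Pe) - 1/Pe),  Pe = a*h/(2*kappa)."""
    pe = a * h / (2.0 * kappa)
    return h / (2.0 * a) * (1.0 / np.tanh(pe) - 1.0 / pe)

# convection-dominated limit: tau -> h/(2a)
print(supg_tau(1.0, 1e-6, 0.1))
# diffusion-dominated limit: coth(Pe) - 1/Pe ≈ Pe/3, so tau -> h^2/(12*kappa)
print(supg_tau(1.0, 10.0, 0.1))
```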

In this work we begin a theoretical and numerical investigation of the spectra of evolution operators of neutral renewal equations, with the stability of equilibria and periodic orbits in mind. We start from the simplest form of linear periodic equation with one discrete delay and fully characterize the spectrum of its monodromy operator. We perform numerical experiments, discretizing the evolution operators via pseudospectral collocation, which confirm the theoretical results and give perspectives on the generalization to systems and to multiple delays. Although we do not attempt a rigorous numerical analysis of the method, we offer some considerations on a possible approach to the problem.

Knowledge graph embedding (KGE) is an increasingly popular technique that aims to represent the entities and relations of knowledge graphs in low-dimensional semantic spaces for a wide spectrum of applications such as link prediction, knowledge reasoning and knowledge completion. In this paper, we provide a systematic review of existing KGE techniques based on representation spaces. Particularly, we build a fine-grained classification to categorise the models based on three mathematical perspectives of the representation spaces: (1) Algebraic perspective, (2) Geometric perspective, and (3) Analytical perspective. We introduce the rigorous definitions of fundamental mathematical spaces before diving into KGE models and their mathematical properties. We further discuss different KGE methods over the three categories, as well as summarise how the spatial advantages serve different embedding needs. By collating the experimental results from downstream tasks, we also explore the advantages of mathematical spaces in different scenarios and the reasons behind them. We further state some promising research directions from a representation space perspective, with which we hope to inspire researchers to design their KGE models, as well as their related applications, with more consideration of their mathematical space properties.
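As a concrete instance of the algebraic/geometric perspective, the classical TransE model embeds entities and relations in a real vector space and scores a triple by how well the relation translates the head onto the tail (the embeddings below are toy values, not trained):

```python
import numpy as np

def transe_score(h, r, t):
    """TransE plausibility score: relations act as translations,
    so a true triple (h, r, t) should satisfy h + r ≈ t."""
    return -np.linalg.norm(h + r - t)

# hypothetical toy embeddings
ent = {"paris":  np.array([1.0, 0.0]),
       "france": np.array([1.0, 1.0]),
       "berlin": np.array([0.0, 0.0])}
rel = {"capital_of": np.array([0.0, 1.0])}

true_triple  = transe_score(ent["paris"],  rel["capital_of"], ent["france"])
false_triple = transe_score(ent["berlin"], rel["capital_of"], ent["france"])
print(true_triple, false_triple)   # the true triple scores higher: 0.0 > -1.0
```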
