
The Shallow Ice Approximation (SIA) model in strong form is commonly used for inferring the flow dynamics of grounded ice sheets. The solution to the SIA model is a closed-form expression for the velocity field. When that velocity field is used to advance the ice surface in time, the time steps must remain small because the stability limit scales quadratically with the horizontal mesh size. In this paper we write the SIA model in weak form and add the Free Surface Stabilization Algorithm (FSSA) terms. We find numerically that the time step restriction improves from quadratic to linear scaling, but only for large horizontal mesh sizes. We then extend the weak form by adding the initially neglected normal stress terms. This allows a linear time step restriction across the whole range of horizontal mesh sizes, leading to improved efficiency. Theoretical analysis shows that the FSSA stabilization terms shift the time stepping treatment of the second-derivative surface terms from explicit to implicit. Moreover, a computational cost analysis, combined with numerical results on stability and accuracy, favors the weak-form SIA models over the standard SIA model.
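The practical difference between the quadratic and linear restrictions can be illustrated with a toy scaling sketch; only the exponents come from the abstract, while the stability constants and mesh sizes below are hypothetical:

```python
# Toy comparison of time step restrictions (constants are hypothetical;
# only the scaling exponents reflect the abstract's claim).

def dt_max_standard_sia(h, c_quad=0.5):
    """Explicit strong-form SIA: dt restriction scales with h**2."""
    return c_quad * h**2

def dt_max_fssa_weak(h, c_lin=0.5):
    """FSSA-stabilized weak form: dt restriction scales with h."""
    return c_lin * h

# Halving the mesh size quarters the quadratic limit but only halves
# the linear one, which is where the efficiency gain comes from.
for h in [10_000.0, 1_000.0, 100.0]:  # horizontal mesh sizes (illustrative)
    print(f"h={h:>8.0f}  dt_quadratic={dt_max_standard_sia(h):>12.1f}  "
          f"dt_linear={dt_max_fssa_weak(h):>8.1f}")
```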


We investigate the parameter estimation and prediction of two forms of the stochastic SIR model driven by small L\'{e}vy noise with time-dependent periodic transmission. We present consistency and rate of convergence results for the least-squares estimators. We include simulation studies using the method of projected gradient descent.
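As a rough illustration of the estimation procedure, the sketch below fits the parameters of a time-periodic transmission rate in a deterministic SIR skeleton by least squares with projected gradient descent. The model form $\beta(t) = b_0(1 + b_1 \sin(2\pi t/T))$, the box constraint, and all constants are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def simulate(theta, n=200, dt=0.01, gamma=0.1, T=1.0):
    """Euler discretization of a deterministic SIR skeleton with a
    periodic transmission rate beta(t) = b0*(1 + b1*sin(2*pi*t/T))."""
    b0, b1 = theta
    S, I = 0.99, 0.01
    traj = np.empty(n)
    for k in range(n):
        beta = b0 * (1.0 + b1 * np.sin(2*np.pi*k*dt/T))
        dS = -beta * S * I
        dI = beta * S * I - gamma * I
        S, I = S + dt*dS, I + dt*dI
        traj[k] = I
    return traj

def loss(theta, data):
    """Least-squares objective on the observed infected trajectory."""
    return np.sum((simulate(theta) - data)**2)

def pgd(data, theta0, lr=0.05, iters=300, eps=1e-5):
    """Projected gradient descent with a finite-difference gradient
    and projection onto the (illustrative) box [0, 5]^2."""
    theta = np.asarray(theta0, dtype=float)
    for _ in range(iters):
        g = np.array([(loss(theta + eps*e, data) - loss(theta - eps*e, data))
                      / (2*eps) for e in np.eye(2)])
        theta = np.clip(theta - lr*g, 0.0, 5.0)  # projection step
    return theta

true_theta = np.array([1.5, 0.3])
data = simulate(true_theta)          # synthetic "observations"
est = pgd(data, theta0=[1.0, 0.1])   # recover parameters from data
```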

This note continues and extends the study from Spokoiny (2023a) about estimation for parametric models with possibly large or even infinite parameter dimension. We consider a special class of stochastically linear smooth (SLS) models satisfying three major conditions: the stochastic component of the log-likelihood is linear in the model parameter, while the expected log-likelihood is a smooth and concave function. For the penalized maximum likelihood estimators (pMLE), we establish several finite sample bounds about its concentration and large deviations as well as the Fisher and Wilks expansions and risk bounds. In all results, the remainder is given explicitly and can be evaluated in terms of the effective sample size $ n $ and effective parameter dimension $ \mathbb{p} $, which allows us to identify the so-called \emph{critical parameter dimension}. The results are also dimension and coordinate-free. Despite their generality, all the presented bounds are nearly sharp and the classical asymptotic results can be obtained as simple corollaries. Our results indicate that the use of advanced fourth-order expansions allows us to relax the critical dimension condition $ \mathbb{p}^{3} \ll n $ from Spokoiny (2023a) to $ \mathbb{p}^{3/2} \ll n $. Examples for classical models like logistic regression, log-density and precision matrix estimation illustrate the applicability of the general results.

We provide full theoretical guarantees for the convergence behaviour of diffusion-based generative models under the assumption of strongly log-concave data distributions, while our approximating class of functions used for score estimation is made of Lipschitz continuous functions. We demonstrate the power of our approach via a motivating example: sampling from a Gaussian distribution with unknown mean. In this case, explicit estimates are provided for the associated optimization problem, i.e. score approximation, while these are combined with the corresponding sampling estimates. As a result, we obtain the best known upper bound estimates in terms of key quantities of interest, such as the dimension and rates of convergence, for the Wasserstein-2 distance between the data distribution (Gaussian with unknown mean) and our sampling algorithm. Beyond the motivating example and in order to allow for the use of a diverse range of stochastic optimizers, we present our results using an $L^2$-accurate score estimation assumption, which crucially is formed under an expectation with respect to the stochastic optimizer and our novel auxiliary process that uses only known information. This approach yields the best known convergence rate for our sampling algorithm.

In this paper, we develop novel uniform/variable-time-step weighted and shifted BDF2 (WSBDF2) methods for the anisotropic Cahn-Hilliard (CH) model, combining the scalar auxiliary variable (SAV) approach with two types of stabilization techniques. Using the concept of $G$-stability, the uniform-time-step WSBDF2 method is theoretically proved to be energy-stable. Because the relevant $G$-stability properties do not apply to the variable-time-step case, another technique is adopted in this work to demonstrate the energy stability of the variable-time-step WSBDF2 method. In addition, both numerical schemes are mass-conservative. Finally, numerous numerical simulations are presented to demonstrate the stability and accuracy of these schemes.
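For intuition about the two-step structure that the weighted-and-shifted variant builds on, here is plain uniform-step BDF2 applied to the scalar test equation $u' = \lambda u$ (a deliberate simplification, not the paper's WSBDF2/SAV scheme):

```python
import math

# Standard uniform-time-step BDF2 on u' = lam*u. This is NOT the
# paper's WSBDF2/SAV method; it only shows the underlying two-step
# implicit recurrence (3u^{n+1} - 4u^n + u^{n-1})/(2*tau) = lam*u^{n+1}.

def bdf2(lam, u0, tau, n_steps):
    u_prev = u0
    u_curr = u0 * math.exp(lam * tau)  # bootstrap the first step exactly
    for _ in range(n_steps - 1):
        # solve the linear implicit relation for u^{n+1}
        u_next = (4*u_curr - u_prev) / (3 - 2*tau*lam)
        u_prev, u_curr = u_curr, u_next
    return u_curr

# Second-order accuracy: with tau = 0.01 over t in [0, 1] the result is
# close to the exact decay exp(-1) ~ 0.3679.
approx = bdf2(-1.0, 1.0, 0.01, 100)
```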

Alternative formulations for the optimization of chemical process flowsheets are presented that leverage surrogate models and implicit functions to replace and remove, respectively, the algebraic equations that describe a difficult-to-converge Gibbs reactor unit operation. Convergence reliability, solve time, and solution quality of an optimization problem are compared among full-space, ALAMO surrogate, neural network surrogate, and implicit function formulations. Both surrogate and implicit formulations lead to better convergence reliability, with low sensitivity to process parameters. The surrogate formulations are faster at the cost of minor solution error, while the implicit formulation provides exact solutions with similar solve time. In a parameter sweep on an autothermal reformer flowsheet optimization problem, the full space formulation solves 33 out of 64 instances, while the implicit function formulation solves 52 out of 64 instances, the ALAMO polynomial formulation solves 64 out of 64 instances, and the neural network formulation solves 48 out of 64 instances. This work demonstrates the trade-off between accuracy and solve time that exists in current methods for improving convergence reliability of chemical process flowsheet optimization problems.

This work presents GAL{\AE}XI as a novel, energy-efficient flow solver for the simulation of compressible flows on unstructured meshes leveraging the parallel computing power of modern Graphics Processing Units (GPUs). GAL{\AE}XI implements the high-order Discontinuous Galerkin Spectral Element Method (DGSEM) using shock capturing with a finite-volume subcell approach to ensure the stability of the high-order scheme near shocks. This work provides details on the general code design, the parallelization strategy, and the implementation approach for the compute kernels with a focus on the element local mappings between volume and surface data due to the unstructured mesh. GAL{\AE}XI exhibits excellent strong scaling properties up to 1024 GPUs if each GPU is assigned a minimum of one million degrees of freedom. To verify its implementation, a convergence study is performed that recovers the theoretical order of convergence of the implemented numerical schemes. Moreover, the solver is validated using both the incompressible and compressible formulation of the Taylor-Green-Vortex at Mach numbers of 0.1 and 1.25, respectively. A mesh convergence study shows that the results converge to the high-fidelity reference solution and that the results match the original CPU implementation. Finally, GAL{\AE}XI is applied to a large-scale wall-resolved large eddy simulation of a linear cascade of the NASA Rotor 37. Here, the supersonic region and shocks at the leading edge are captured accurately and robustly by the implemented shock-capturing approach. It is demonstrated that GAL{\AE}XI requires less than half of the energy to carry out this simulation in comparison to the reference CPU implementation. This renders GAL{\AE}XI a potent tool for accurate and efficient simulations of compressible flows in the realm of exascale computing and the associated new HPC architectures.

Vision-Language Pre-training (VLP) models like CLIP have achieved remarkable success in computer vision and particularly demonstrated superior robustness to distribution shifts of 2D images. However, their robustness under 3D viewpoint variations is still limited, which can hinder their deployment in real-world applications. This paper successfully addresses this concern while keeping VLPs' original performance by breaking through two primary obstacles: 1) the scarcity of training data and 2) the suboptimal fine-tuning paradigms. To combat data scarcity, we build the Multi-View Caption (MVCap) dataset -- a comprehensive collection of over four million multi-view image-text pairs across more than 100K objects, providing more potential for VLP models to develop generalizable viewpoint-invariant representations. To address the limitations of existing paradigms in performance trade-offs and training efficiency, we design a novel fine-tuning framework named Omniview-Tuning (OVT). Specifically, OVT introduces a Cross-Viewpoint Alignment objective through a minimax-like optimization strategy, which effectively aligns representations of identical objects from diverse viewpoints without causing overfitting. Additionally, OVT fine-tunes VLP models in a parameter-efficient manner, leading to minimal computational cost. Extensive experiments on various VLP models with different architectures validate that OVT significantly improves the models' resilience to viewpoint shifts while keeping their original performance, establishing a pioneering standard for boosting the viewpoint invariance of VLP models.

In the past few years, the emergence of pre-training models has brought uni-modal fields such as computer vision (CV) and natural language processing (NLP) to a new era. Substantial work has shown that such models are beneficial for downstream uni-modal tasks and avoid training a new model from scratch. So can such pre-trained models be applied to multi-modal tasks? Researchers have explored this problem and made significant progress. This paper surveys recent advances and new frontiers in vision-language pre-training (VLP), including image-text and video-text pre-training. To give readers a better overall grasp of VLP, we first review its recent advances from five aspects: feature extraction, model architecture, pre-training objectives, pre-training datasets, and downstream tasks. Then, we summarize the specific VLP models in detail. Finally, we discuss the new frontiers in VLP. To the best of our knowledge, this is the first survey on VLP. We hope that this survey can shed light on future research in the VLP field.

Recently pre-trained language representation models such as BERT have shown great success when fine-tuned on downstream tasks including information retrieval (IR). However, pre-training objectives tailored for ad-hoc retrieval have not been well explored. In this paper, we propose Pre-training with Representative wOrds Prediction (PROP) for ad-hoc retrieval. PROP is inspired by the classical statistical language model for IR, specifically the query likelihood model, which assumes that the query is generated as the piece of text representative of the "ideal" document. Based on this idea, we construct the representative words prediction (ROP) task for pre-training. Given an input document, we sample a pair of word sets according to the document language model, where the set with higher likelihood is deemed as more representative of the document. We then pre-train the Transformer model to predict the pairwise preference between the two word sets, jointly with the Masked Language Model (MLM) objective. By further fine-tuning on a variety of representative downstream ad-hoc retrieval tasks, PROP achieves significant improvements over baselines without pre-training or with other pre-training methods. We also show that PROP can achieve exciting performance under both the zero- and low-resource IR settings. The code and pre-trained models are available at //github.com/Albert-Ma/PROP.
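The ROP data construction described in the abstract can be sketched as follows; the unigram document language model with add-alpha smoothing and the set size `k` are illustrative assumptions on my part, not details from the paper:

```python
import math
import random
from collections import Counter

# Sketch of ROP pre-training pair construction: sample two word sets
# from a document's unigram language model and label the set with the
# higher likelihood as the "more representative" one. The smoothing
# scheme and set size are illustrative choices.

def doc_language_model(doc_tokens, alpha=0.5):
    """Unigram document LM with add-alpha smoothing over the doc vocab."""
    counts = Counter(doc_tokens)
    total = sum(counts.values())
    vocab = list(counts)
    probs = {w: (counts[w] + alpha) / (total + alpha * len(vocab))
             for w in vocab}
    return vocab, probs

def sample_word_set(vocab, probs, k, rng):
    """Draw k words (with replacement) according to the document LM."""
    return rng.choices(vocab, weights=[probs[w] for w in vocab], k=k)

def log_likelihood(word_set, probs):
    return sum(math.log(probs[w]) for w in word_set)

def make_rop_pair(doc_tokens, k=4, seed=0):
    """Return (preferred, dispreferred) word sets for pairwise training."""
    rng = random.Random(seed)
    vocab, probs = doc_language_model(doc_tokens)
    s1 = sample_word_set(vocab, probs, k, rng)
    s2 = sample_word_set(vocab, probs, k, rng)
    if log_likelihood(s1, probs) >= log_likelihood(s2, probs):
        return s1, s2
    return s2, s1

doc = "the cat sat on the mat the cat sat".split()
pos, neg = make_rop_pair(doc, k=4, seed=1)
```

A Transformer would then be pre-trained to score `pos` above `neg`, jointly with MLM, as the abstract describes.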

Recent work pre-training Transformers with self-supervised objectives on large text corpora has shown great success when fine-tuned on downstream NLP tasks including text summarization. However, pre-training objectives tailored for abstractive text summarization have not been explored. Furthermore, there is a lack of systematic evaluation across diverse domains. In this work, we propose pre-training large Transformer-based encoder-decoder models on massive text corpora with a new self-supervised objective. In PEGASUS, important sentences are removed/masked from an input document and are generated together as one output sequence from the remaining sentences, similar to an extractive summary. We evaluated our best PEGASUS model on 12 downstream summarization tasks spanning news, science, stories, instructions, emails, patents, and legislative bills. Experiments demonstrate it achieves state-of-the-art performance on all 12 downstream datasets measured by ROUGE scores. Our model also shows surprising performance on low-resource summarization, surpassing previous state-of-the-art results on 6 datasets with only 1000 examples. Finally, we validated our results using human evaluation and show that our model summaries achieve human performance on multiple datasets.
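The gap-sentence idea (mask important sentences, generate them as the target) can be sketched as below; the word-overlap importance score is a simplified stand-in for the ROUGE-based selection strategies used in the actual paper:

```python
# Sketch of PEGASUS-style gap-sentence masking: score each sentence by
# its word overlap with the rest of the document (a crude ROUGE-1-like
# proxy), mask the top-scoring sentences in the input, and concatenate
# them as the generation target.

def gap_sentence_mask(sentences, n_mask=1, mask_token="<MASK>"):
    def overlap(i):
        sent = set(sentences[i].lower().split())
        rest = set(w for j, s in enumerate(sentences) if j != i
                   for w in s.lower().split())
        return len(sent & rest) / max(len(sent), 1)

    ranked = sorted(range(len(sentences)), key=overlap, reverse=True)
    masked_ids = set(ranked[:n_mask])
    inputs = [mask_token if i in masked_ids else s
              for i, s in enumerate(sentences)]
    target = " ".join(sentences[i] for i in sorted(masked_ids))
    return " ".join(inputs), target

sentences = ["The cat sat on the mat.",
             "Dogs bark loudly.",
             "The cat likes the mat."]
inp, target = gap_sentence_mask(sentences)
# The last sentence has the highest overlap with the rest of the
# document, so it becomes the generation target.
```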
