
We introduce a new concept of the locally conservative flux and investigate its relationship with the compatible discretization pioneered by Dawson, Sun and Wheeler [11]. We then demonstrate how this new concept can play a crucial role in obtaining the L2-norm stability of the discontinuous Galerkin finite element scheme for transport in a coupled flow-and-transport system. In particular, the lowest-order discontinuous Galerkin finite element scheme for transport is shown to inherit positivity and the maximum principle when the locally conservative flux is used, a result that has been elusive for many years in the literature. The theoretical results established in this paper are based on the equivalence between the Lesaint-Raviart and Brezzi-Marini-Suli discontinuous Galerkin schemes for linear hyperbolic systems, as well as on the relationship between the Lesaint-Raviart scheme and the characteristic method along the streamline. Sample numerical experiments are also presented to support our theoretical findings.
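For orientation, the standard textbook statement of local conservation (the paper's precise definition of the locally conservative flux may differ) is the following: for a flow equation $\nabla \cdot \mathbf{u} = f$, a numerical flux $\hat{\mathbf{u}}$ is locally conservative if on every mesh element $K$

```latex
\int_{\partial K} \hat{\mathbf{u}} \cdot \mathbf{n}_K \, ds \;=\; \int_{K} f \, dx ,
```

together with single-valuedness of $\hat{\mathbf{u}}$ across interior faces, so that the flux leaving one element exactly equals the flux entering its neighbour.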



Computational problems concerning the orbit of a point under the action of a matrix group occur in numerous subfields of computer science, including complexity theory, program analysis, quantum computation, and automata theory. In many cases the focus extends beyond orbits proper to orbit closures under a suitable topology. Typically one starts from a group and several points and asks questions about the orbit closure of the points under the action of the group, e.g., whether two given orbit closures intersect. In this paper we consider a collection of what we call determination problems concerning groups and orbit closures. These problems begin with a given variety and seek to understand whether and how it arises either as an algebraic group or as an orbit closure. The how question asks whether the underlying group is $s$-generated, meaning it is topologically generated by $s$ matrices for a given number $s$. Among other applications, problems of this type have recently been studied in the context of synthesising loops subject to certain specified invariants on program variables. Our main result is a polynomial-space procedure that inputs a variety $V$ and a number $s$ and determines whether $V$ arises as an orbit closure of a point under an $s$-generated commutative matrix group. The main tools in our approach are rooted in structural properties of commutative algebraic matrix groups and lattice theory. We leave open the question of determining whether a variety is an orbit closure of a point under an algebraic matrix group (without the requirement of commutativity). In this regard, we note that a recent paper by Nosan et al. [NPSHW2021] gives an elementary procedure to compute the orbit closure of a point under finitely many matrices.
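For intuition about the underlying objects, when the orbit of a point under a finitely generated matrix semigroup happens to be finite it can be enumerated directly by closure (the orbit-closure problems the paper studies are much harder, since orbits are typically infinite and one works with their Zariski closures). A minimal sketch:

```python
import numpy as np

def orbit(point, generators, max_size=10_000):
    """Enumerate the orbit of `point` under the semigroup generated by the
    integer matrices in `generators`, giving up (returning None) if the
    orbit exceeds max_size and is therefore possibly infinite."""
    seen = {tuple(point)}
    frontier = [np.array(point)]
    while frontier:
        nxt = []
        for p in frontier:
            for g in generators:
                q = tuple(g @ p)
                if q not in seen:
                    if len(seen) >= max_size:
                        return None  # orbit too large; possibly infinite
                    seen.add(q)
                    nxt.append(np.array(q))
        frontier = nxt
    return seen

# Rotation by 90 degrees generates a cyclic group of order 4,
# so the orbit of (1, 0) is the four axis points.
R = np.array([[0, -1], [1, 0]])
print(sorted(orbit((1, 0), [R])))
```

For infinite orbits (e.g. a generator with an eigenvalue off the unit circle) this enumeration never terminates, which is exactly why the paper's algebraic-geometric machinery is needed.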

We propose a new statistical reduced complexity climate model. The centerpiece of the model consists of a set of physical equations for the global climate system which we show how to cast in non-linear state space form. The parameters in the model are estimated using the method of maximum likelihood with the likelihood function being evaluated by the extended Kalman filter. Our statistical framework is based on well-established methodology and is computationally feasible. In an empirical analysis, we estimate the parameters for a data set comprising the period 1959-2022. A likelihood ratio test sheds light on the most appropriate equation for converting the level of atmospheric concentration of carbon dioxide into radiative forcing. Using the estimated model, and different future paths of greenhouse gas emissions, we project global mean surface temperature until the year 2100. Our results illustrate the potential of combining statistical modelling with physical insights to arrive at rigorous statistical analyses of the climate system.
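The mechanics of evaluating the likelihood by Kalman filtering can be illustrated on a scalar linear-Gaussian state-space model (the paper uses the extended Kalman filter, which additionally linearises the non-linear climate equations around the current state; the prediction-error decomposition of the likelihood is the same). A minimal sketch, with all model choices here being illustrative:

```python
import numpy as np

def kf_loglik(y, phi, q, r, a0=0.0, p0=10.0):
    """Gaussian log-likelihood of the scalar state-space model
        a_t = phi * a_{t-1} + eta_t,  eta_t ~ N(0, q)   (state)
        y_t = a_t + eps_t,            eps_t ~ N(0, r)   (observation)
    computed via the Kalman filter's prediction-error decomposition."""
    a, p, ll = a0, p0, 0.0
    for yt in y:
        # predict the state and its variance one step ahead
        a_pred = phi * a
        p_pred = phi * p * phi + q
        # innovation (one-step prediction error) and its variance
        v = yt - a_pred
        f = p_pred + r
        ll += -0.5 * (np.log(2 * np.pi * f) + v * v / f)
        # measurement update
        k = p_pred / f
        a = a_pred + k * v
        p = (1 - k) * p_pred
    return ll

rng = np.random.default_rng(0)
y = np.cumsum(rng.normal(size=200)) + rng.normal(size=200)  # random walk + noise
print(kf_loglik(y, phi=1.0, q=1.0, r=1.0))
```

Maximum-likelihood estimation then amounts to maximising `kf_loglik` over the model parameters with a numerical optimiser, and a likelihood ratio test compares the maximised log-likelihoods of two nested specifications.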

Evaluating the generalisation capabilities of multimodal models based solely on their performance on out-of-distribution data fails to capture their true robustness. This work introduces a comprehensive evaluation framework that systematically examines the role of instructions and inputs in the generalisation abilities of such models, considering architectural design, input perturbations across language and vision modalities, and increased task complexity. The proposed framework uncovers the resilience of multimodal models to extreme instruction perturbations and their vulnerability to observational changes, raising concerns about overfitting to spurious correlations. By employing this evaluation framework on current Transformer-based multimodal models for robotic manipulation tasks, we uncover limitations and suggest future advancements should focus on architectural and training innovations that better integrate multimodal inputs, enhancing a model's generalisation prowess by prioritising sensitivity to input content over incidental correlations.

Performance evaluation of particular channel codes has long been a significant topic in coding theory, often involving the use of bounding techniques. This paper focuses on Spinal codes, a new family of capacity-achieving codes, and provides a comprehensive analysis framework to tightly upper bound the block error rate (BLER) of Spinal codes in the finite block length (FBL) regime. First, we resort to a variant of the Gallager random coding bound to upper bound the BLER of Spinal codes over fading channels. Then, we derive a new bound without resorting to the Gallager random coding bound, achieving provable tightness over a wide range of signal-to-noise ratios (SNRs). The derived BLER upper bounds are general, facilitating performance evaluation of Spinal codes over different types of fast fading channels. As case studies, we explicitly derive the BLER upper bounds for Spinal codes over the Rayleigh, Nakagami-m, and Rician fading channels. Based on the bounds, we theoretically show that the tail transmission pattern (TTP) for ML-decoded Spinal codes remains optimal in terms of reliability performance. Simulations verify the tightness of the bounds and the insights obtained.
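For reference, the classical form of the Gallager random coding bound (the textbook statement for a discrete memoryless channel, not the paper's fading-channel variant) bounds the error probability of a random code of block length $N$ and rate $R$ (in bits per symbol, base-2 logarithms) under input distribution $Q$ as

```latex
P_e \;\le\; 2^{-N E_r(R)}, \qquad
E_r(R) \;=\; \max_{0 \le \rho \le 1} \bigl[ E_0(\rho, Q) - \rho R \bigr],
```

```latex
E_0(\rho, Q) \;=\; -\log_2 \sum_{y} \Bigl( \sum_{x} Q(x)\, P(y \mid x)^{1/(1+\rho)} \Bigr)^{1+\rho}.
```

Bounds of this type are the starting point that the paper adapts to Spinal codes and then improves upon with its direct derivation.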

Practical optimization problems may contain different kinds of difficulties that are often not tractable if one relies on a single optimization method. Different optimization approaches offer different strengths, each good at tackling one or more difficulties in an optimization problem. For instance, evolutionary algorithms have a niche in handling complexities like discontinuity, non-differentiability, discreteness and non-convexity. However, evolutionary algorithms may become computationally expensive for mathematically well-behaved problems with a large number of variables, for which classical mathematical programming approaches are better suited. In this paper, we demonstrate a decomposition strategy that allows us to synergistically apply two complementary approaches to a complex optimization problem at the same time. Evolutionary algorithms are useful in this context, as their flexibility makes pairing with other solution approaches easy. The decomposition idea is a special case of bilevel optimization that separates the difficulties into two levels and assigns to each level the approach better equipped to handle them. We demonstrate the benefits of the proposed decomposition idea on a wide range of test problems.
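The bilevel decomposition can be sketched on a toy problem: the "difficult" discrete variable is handled at the upper level (here by simple enumeration, standing in for an evolutionary search), while the smooth continuous subproblem is delegated to a classical gradient-based solver. The problem and the split are illustrative, not the paper's test suite:

```python
import numpy as np
from scipy.optimize import minimize

# Toy problem: minimise f(z, x) = (x0 - z)^2 + (x1 + z)^2 + (z - 3)^2
# over an integer z in {0,...,5} and a real vector x in R^2.

def lower_level(z):
    """Classical solver for the smooth continuous subproblem at fixed z."""
    res = minimize(lambda x: (x[0] - z) ** 2 + (x[1] + z) ** 2 + (z - 3) ** 2,
                   x0=np.zeros(2), method="BFGS")
    return res.fun, res.x

def upper_level(candidates):
    """Stand-in for an evolutionary search over the difficult variable z:
    each candidate is scored by the optimum of its lower-level problem."""
    best = min(candidates, key=lambda z: lower_level(z)[0])
    return best, lower_level(best)

z_star, (f_star, x_star) = upper_level(range(6))
print(z_star, round(f_star, 6))
```

For the toy problem the upper level recovers z = 3 with the continuous optimum x = (3, -3); in a realistic setting the enumeration would be replaced by a population-based evolutionary loop over the non-convex, discrete, or non-differentiable variables.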

We consider the problem of parameter estimation in a high-dimensional generalized linear model. Spectral methods obtained via the principal eigenvector of a suitable data-dependent matrix provide a simple yet surprisingly effective solution. However, despite their wide use, a rigorous performance characterization, as well as a principled way to preprocess the data, are available only for unstructured (i.i.d. Gaussian and Haar orthogonal) designs. In contrast, real-world data matrices are highly structured and exhibit non-trivial correlations. To address the problem, we consider correlated Gaussian designs capturing the anisotropic nature of the features via a covariance matrix $\Sigma$. Our main result is a precise asymptotic characterization of the performance of spectral estimators. This allows us to identify the optimal preprocessing that minimizes the number of samples needed for parameter estimation. Surprisingly, such preprocessing is universal across a broad set of designs, which partly addresses a conjecture on optimal spectral estimators for rotationally invariant models. Our principled approach vastly improves upon previous heuristic methods, including for designs common in computational imaging and genetics. The proposed methodology, based on approximate message passing, is broadly applicable and opens the way to the precise characterization of spiked matrices and of the corresponding spectral methods in a variety of settings.
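The basic spectral estimator is easy to state in code. A minimal sketch for noiseless phase retrieval under the simplest (i.i.d. Gaussian, identity-covariance) design, with the plain preprocessing T(y) = y rather than the paper's optimal choice:

```python
import numpy as np

rng = np.random.default_rng(1)
d, n = 30, 3000
theta = rng.normal(size=d)
theta /= np.linalg.norm(theta)          # ground-truth parameter, unit norm

X = rng.normal(size=(n, d))             # i.i.d. Gaussian design
y = (X @ theta) ** 2                    # phase retrieval observations

# Spectral estimator: principal eigenvector of
#   D = (1/n) * sum_i T(y_i) x_i x_i^T,   here with T(y) = y.
D = (X * y[:, None]).T @ X / n
eigvals, eigvecs = np.linalg.eigh(D)
theta_hat = eigvecs[:, -1]              # eigenvector of the largest eigenvalue

overlap = abs(theta_hat @ theta)        # |cosine| between estimate and truth
print(round(overlap, 3))
```

Since E[y x x^T] = I + 2 theta theta^T for this model, the top eigenvector aligns with theta once n is large enough; the paper's contribution is to characterise exactly how the sample requirement and the optimal preprocessing T change under a general covariance $\Sigma$.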

We conduct a systematic study of the approximation properties of the Transformer for sequence modeling with long, sparse and complicated memory. We investigate the mechanisms through which different components of the Transformer, such as the dot-product self-attention, positional encoding and feed-forward layer, affect its expressive power, and we study their combined effects by establishing explicit approximation rates. Our study reveals the roles of critical parameters in the Transformer, such as the number of layers and the number of attention heads. These theoretical insights are validated experimentally and offer natural suggestions for alternative architectures.
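The two components whose interplay the analysis targets, dot-product self-attention and absolute positional encoding, can be written down in a few lines. A minimal single-head sketch (dimensions and weights are illustrative):

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention over a sequence X (T x d)."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[1])        # (T x T) pairwise similarities
    A = np.exp(scores - scores.max(axis=1, keepdims=True))
    A /= A.sum(axis=1, keepdims=True)             # row-wise softmax
    return A @ V                                  # attention-weighted mixing of values

def sinusoidal_pe(T, d):
    """Absolute sinusoidal positional encoding (T x d), d even."""
    pos = np.arange(T)[:, None]
    freq = 10000.0 ** (-np.arange(0, d, 2) / d)
    pe = np.zeros((T, d))
    pe[:, 0::2] = np.sin(pos * freq)
    pe[:, 1::2] = np.cos(pos * freq)
    return pe

rng = np.random.default_rng(0)
T, d = 8, 16
X = rng.normal(size=(T, d)) + sinusoidal_pe(T, d)
W = [rng.normal(size=(d, d)) / np.sqrt(d) for _ in range(3)]
out = self_attention(X, *W)
print(out.shape)
```

Approximation-rate results of the kind the paper establishes quantify how depth, head count, and the choice of positional encoding limit which memory structures such a map can represent.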

Legged robots are physically capable of navigating a wide variety of environments and overcoming a wide range of obstructions. For example, in a search and rescue mission, a legged robot could climb over debris, crawl through gaps, and navigate out of dead ends. However, the robot's controller needs to respond intelligently to such varied obstacles, and this requires handling unexpected and unusual scenarios successfully. This presents an open challenge to current learning methods, which often struggle with generalization to the long tail of unexpected situations without heavy human supervision. To address this issue, we investigate how to leverage the broad knowledge about the structure of the world and commonsense reasoning capabilities of vision-language models (VLMs) to aid legged robots in handling difficult, ambiguous situations. We propose a system, VLM-Predictive Control (VLM-PC), combining two key components that we find to be crucial for eliciting on-the-fly, adaptive behavior selection with VLMs: (1) in-context adaptation over previous robot interactions and (2) planning multiple skills into the future and replanning. We evaluate VLM-PC on several challenging real-world obstacle courses, involving dead ends and climbing and crawling, on a Go1 quadruped robot. Our experiments show that by reasoning over the history of interactions and future plans, VLMs enable the robot to autonomously perceive, navigate, and act in a wide range of complex scenarios that would otherwise require environment-specific engineering or human guidance.
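The two components can be sketched as a control loop. Everything below is a hypothetical skeleton: the function `query_vlm`, the skill names, and the history format are illustrative assumptions, not the paper's actual interface; a real system would send camera images and the interaction history to a VLM and parse its multi-skill plan.

```python
from collections import deque

# Illustrative skill set for a quadruped; names are assumptions.
SKILLS = {"walk_forward", "turn_left", "turn_right", "crawl", "climb", "back_up"}

def query_vlm(observation, history):
    """Stub standing in for a real VLM call: returns a multi-step skill plan
    given the current observation and the interaction history."""
    if observation == "blocked":
        return ["back_up", "turn_left", "walk_forward"]
    return ["walk_forward"]

def vlm_pc_step(observation, history, plan):
    # Replan whenever the current plan is exhausted or the situation changed:
    # this realises component (2), multi-skill planning with replanning.
    if not plan or observation == "blocked":
        plan = deque(query_vlm(observation, list(history)))
    skill = plan.popleft()
    # Component (1): the (observation, skill) history is fed back to the VLM
    # as in-context memory of previous interactions.
    history.append((observation, skill))
    return skill, plan

history, plan = deque(maxlen=10), deque()
for obs in ["clear", "blocked", "clear", "clear"]:
    skill, plan = vlm_pc_step(obs, history, plan)
    print(obs, "->", skill)
```

The stub makes the loop runnable end to end; swapping `query_vlm` for a real VLM client (and `observation` for sensor input) is where all the actual difficulty lives.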

The success of AI models relies on the availability of large, diverse, and high-quality datasets, which can be challenging to obtain due to data scarcity, privacy concerns, and high costs. Synthetic data has emerged as a promising solution by generating artificial data that mimics real-world patterns. This paper provides an overview of synthetic data research, discussing its applications, challenges, and future directions. We present empirical evidence from prior art to demonstrate its effectiveness and highlight the importance of ensuring its factuality, fidelity, and unbiasedness. We emphasize the need for responsible use of synthetic data to build more powerful, inclusive, and trustworthy language models.

Molecular design and synthesis planning are two critical steps in the process of molecular discovery that we propose to formulate as a single shared task of conditional synthetic pathway generation. We report an amortized approach to generate synthetic pathways as a Markov decision process conditioned on a target molecular embedding. This approach allows us to conduct synthesis planning in a bottom-up manner and design synthesizable molecules by decoding from optimized conditional codes, demonstrating the potential to solve both problems of design and synthesis simultaneously. The approach leverages neural networks to probabilistically model the synthetic trees, one reaction step at a time, according to reactivity rules encoded in a discrete action space of reaction templates. We train these networks on hundreds of thousands of artificial pathways generated from a pool of purchasable compounds and a list of expert-curated templates. We validate our method with (a) the recovery of molecules using conditional generation, (b) the identification of synthesizable structural analogs, and (c) the optimization of molecular structures given oracle functions relevant to drug discovery.
