
We prove precise rates of convergence for monotone approximation schemes of fractional and nonlocal Hamilton-Jacobi-Bellman (HJB) equations. We consider diffusion-corrected difference-quadrature schemes from the literature and new approximations based on powers of discrete Laplacians; the latter are (formally) fractional-order and second-order methods. It is well known in numerical analysis that convergence rates depend on the regularity of solutions, and here we consider cases with varying solution regularity: (i) strongly degenerate problems with Lipschitz solutions, and (ii) weakly non-degenerate problems where we show that solutions have bounded fractional derivatives of order between 1 and 2. Our main results are optimal error estimates with convergence rates that capture precisely both the fractional order of the schemes and the fractional regularity of the solutions. For strongly degenerate equations, these rates improve earlier results. For weakly non-degenerate problems of order greater than one, the results are new. Here we show improved rates compared to the strongly degenerate case, rates that are always better than 1/2.
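To make the "powers of discrete Laplacians" construction concrete, here is a minimal numerical sketch, assuming a 1D grid with Dirichlet boundary conditions and a direct spectral computation of the matrix power; the grid size, the fractional order $s$, and the boundary treatment are illustrative choices, not the paper's schemes.

```python
import numpy as np

# Minimal sketch: a fractional power of the standard second-order discrete
# Laplacian on a 1D Dirichlet grid. Grid size and s are illustrative assumptions.
n, s = 200, 0.5          # number of interior grid points; fractional order s in (0, 1)
h = 1.0 / (n + 1)        # mesh size on (0, 1)

main = 2.0 * np.ones(n)
off = -np.ones(n - 1)
L = (np.diag(main) + np.diag(off, 1) + np.diag(off, -1)) / h**2

# L is symmetric positive definite, so L^s is well defined via its eigensystem.
eigvals, eigvecs = np.linalg.eigh(L)
L_s = eigvecs @ np.diag(eigvals**s) @ eigvecs.T   # discrete analogue of (-Delta)^s
```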

Related content

We provide a variety of lower bounds for the well-known shortcut set problem: how much can one decrease the diameter of a directed graph on $n$ vertices and $m$ edges by adding $O(n)$ or $O(m)$ shortcuts from the transitive closure of the graph? Our results are based on a vast simplification of the recent construction of Bodwin and Hoppenworth [FOCS 2023] which was used to show an $\widetilde{\Omega}(n^{1/4})$ lower bound for the $O(n)$-sized shortcut set problem. We highlight that our simplification completely removes the use of the convex sets of B\'ar\'any and Larman [Math. Ann. 1998] used in all previous lower bound constructions. Our simplification also removes the need for randomness and shaves off some log factors. This allows us to generalize the construction to higher dimensions, which in turn can be used to show the following results. For $O(m)$-sized shortcut sets, we show an $\Omega(n^{1/5})$ lower bound, improving on the previous best $\Omega(n^{1/8})$ lower bound. For all $\varepsilon > 0$, we show that there exists a $\delta > 0$ such that there are $n$-vertex $O(n)$-edge graphs $G$ where adding any shortcut set of size $O(n^{2-\varepsilon})$ keeps the diameter of $G$ at $\Omega(n^\delta)$. This improves the sparsity of the constructed graph compared to a known similar result by Hesse [SODA 2003]. We also consider the sourcewise setting for shortcut sets: given a graph $G=(V,E)$ and a set $S\subseteq V$, how much can we decrease the sourcewise diameter of $G$, defined as $\max_{(s, v) \in S \times V, \text{dist}(s, v) < \infty} \text{dist}(s,v)$, by adding a set of edges $H$ from the transitive closure of $G$? We show that for any integer $d \ge 2$, there exists a graph $G=(V, E)$ on $n$ vertices and $S \subseteq V$ with $|S| = \widetilde{\Theta}(n^{3/(d+3)})$, such that when adding $O(n)$ or $O(m)$ shortcuts, the sourcewise diameter is $\widetilde{\Omega}(|S|^{1/3})$.
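For readers unfamiliar with the sourcewise diameter, the following hypothetical helper makes the definition above concrete for unweighted digraphs; the adjacency format and the function name are ours, not from the paper.

```python
from collections import deque

def sourcewise_diameter(adj, sources):
    """max over s in S and v reachable from s of dist(s, v), by BFS from each
    source. `adj` maps a vertex to an iterable of its out-neighbours.
    Illustrative helper only, to make the definition concrete."""
    best = 0
    for s in sources:
        dist = {s: 0}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in adj.get(u, ()):
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        best = max(best, max(dist.values()))
    return best
```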

The success of AI assistants based on large language models (LLMs) hinges crucially on Reinforcement Learning from Human Feedback (RLHF), which enables the generation of responses more aligned with human preferences. Since these systems are deployed as universal AI assistants, there is a growing expectation that they perform consistently across various domains. However, previous work shows that Reinforcement Learning (RL) often exploits shortcuts to attain high rewards and overlooks challenging samples. This focus on quick reward gains undermines both the stability of training and the model's ability to generalize to new, unseen data. In this work, we propose a novel approach that can learn a consistent policy via RL across various data groups or domains. Given the challenges associated with acquiring group annotations, our method automatically classifies data into different groups, deliberately maximizing performance variance. Then, we optimize the policy to perform well on the challenging groups. Lastly, leveraging the established groups, our approach adaptively adjusts the exploration space, allocating more learning capacity to more challenging data and preventing the model from over-optimizing on simpler data. Experimental results indicate that our approach significantly enhances training stability and model generalization.
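As a rough illustration of "optimize the policy to perform well on the challenging groups", here is a Group-DRO-style exponentiated-gradient reweighting sketch; it conveys the worst-group emphasis but is not the paper's exact objective, and the function name, hyperparameters, and tensor layout are assumptions.

```python
import torch

def worst_group_weighted_loss(losses, group_ids, num_groups, weights, eta=0.1):
    """Sketch of worst-group-focused training (Group-DRO style), not the paper's
    exact method. `losses` is a 1D per-sample loss tensor, `group_ids` a LongTensor
    of group labels, `weights` the current group weights summing to one."""
    group_losses = torch.stack([
        losses[group_ids == g].mean() if (group_ids == g).any()
        else losses.new_zeros(())
        for g in range(num_groups)
    ])
    # Up-weight the groups that are currently hardest, then renormalize.
    weights = weights * torch.exp(eta * group_losses.detach())
    weights = weights / weights.sum()
    return (weights * group_losses).sum(), weights
```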

We consider the problem of designing sample-efficient learning algorithms for infinite-horizon discounted-reward Markov Decision Processes. Specifically, we propose the Accelerated Natural Policy Gradient (ANPG) algorithm that utilizes an accelerated stochastic gradient descent process to obtain the natural policy gradient. ANPG achieves $\mathcal{O}(\epsilon^{-2})$ sample complexity and $\mathcal{O}(\epsilon^{-1})$ iteration complexity with general parameterization, where $\epsilon$ denotes the optimality error. This improves the state-of-the-art sample complexity by a $\log(\frac{1}{\epsilon})$ factor. ANPG is a first-order algorithm and, unlike some existing literature, does not require the unverifiable assumption that the variance of importance sampling (IS) weights is upper bounded. In the class of Hessian-free and IS-free algorithms, ANPG beats the best-known sample complexity by a factor of $\mathcal{O}(\epsilon^{-\frac{1}{2}})$ and simultaneously matches their state-of-the-art iteration complexity.
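The inner step, obtaining the natural gradient direction by accelerated gradient descent, can be sketched deterministically as follows; ANPG itself works with stochastic, sample-based estimates, so the explicit Fisher matrix, step size, and iteration count below are illustrative assumptions.

```python
import numpy as np

def natural_gradient_direction(F, g, steps=500, lr=1e-2, momentum=0.9):
    """Sketch: the natural gradient solves F w = g; approximate it by accelerated
    (Nesterov-style) gradient descent on the quadratic 0.5 * w'Fw - g'w.
    F (Fisher matrix), lr, and steps are illustrative assumptions."""
    w = np.zeros_like(g)
    v = np.zeros_like(g)
    for _ in range(steps):
        grad = F @ (w + momentum * v) - g   # look-ahead gradient
        v = momentum * v - lr * grad
        w = w + v
    return w
```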

Multi-modal Large Language Models (MLLMs) have made significant strides in expanding the capabilities of Large Language Models (LLMs) through the incorporation of visual perception interfaces. Despite the emergence of exciting applications and the availability of diverse instruction tuning data, existing approaches often rely on CLIP or its variants as the visual branch, and merely extract features from the deep layers. However, these methods lack a comprehensive analysis of the visual encoders in MLLMs. In this paper, we conduct an extensive investigation into the effectiveness of different vision encoders within MLLMs. Our findings reveal that the shallow-layer features of CLIP offer particular advantages for fine-grained tasks such as grounding and region understanding. Surprisingly, the vision-only model DINO, which is not pretrained with text-image alignment, demonstrates promising performance as a visual branch within MLLMs. By simply equipping it with an MLP layer for alignment, DINO surpasses CLIP on fine-grained perception tasks. Building upon these observations, we propose a simple yet effective feature merging strategy, named COMM, that integrates CLIP and DINO with Multi-level features Merging to enhance the visual capabilities of MLLMs. We evaluate COMM through comprehensive experiments on a wide range of benchmarks, including image captioning, visual question answering, visual grounding, and object hallucination. Experimental results demonstrate the superior performance of COMM compared to existing methods, showcasing its enhanced visual capabilities within MLLMs. Code will be made available at //github.com/YuchenLiu98/COMM.
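A minimal sketch of the multi-level merging idea, assuming learnable per-level weights and an MLP fusion head; the dimensions, number of levels, and fusion layout are our assumptions, not COMM's published architecture.

```python
import torch
import torch.nn as nn

class MultiLevelMerge(nn.Module):
    """Sketch of a COMM-style visual branch: learnable weighted sums over
    multi-level CLIP and DINO features, fused by an MLP. All dimensions and
    the fusion layout are illustrative assumptions."""
    def __init__(self, dim_clip, dim_dino, dim_out, n_levels):
        super().__init__()
        self.w_clip = nn.Parameter(torch.zeros(n_levels))
        self.w_dino = nn.Parameter(torch.zeros(n_levels))
        self.proj = nn.Sequential(
            nn.Linear(dim_clip + dim_dino, dim_out),
            nn.GELU(),
            nn.Linear(dim_out, dim_out),
        )

    def forward(self, clip_feats, dino_feats):
        # Each argument is a list of per-layer features of shape [B, N, D].
        c = sum(w * f for w, f in zip(self.w_clip.softmax(0), clip_feats))
        d = sum(w * f for w, f in zip(self.w_dino.softmax(0), dino_feats))
        return self.proj(torch.cat([c, d], dim=-1))
```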

We introduce a univariate signal deconvolution method based on the principles of an approach to Artificial General Intelligence, in order to build a general-purpose model of models independent of any arbitrarily assumed prior probability distribution. We investigate how non-random data may encode information about the physical properties, such as dimensions and length scales, of the space in which a signal or message may have been originally encoded, embedded, or generated. Our multidimensional space reconstruction method is based on information theory and algorithmic probability, so it is proven to be agnostic vis-a-vis the arbitrarily chosen encoding-decoding scheme, the computable or semi-computable method of approximation to algorithmic complexity, and the computational model. The results presented in this paper are useful for applications in coding theory, particularly in zero-knowledge one-way communication channels, such as in deciphering messages from unknown generating sources about which no prior knowledge is available and to which no return message can be sent. We argue that this method has the potential to be of great value in cryptography, signal processing, causal deconvolution, and life and technosignature detection.
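As a toy stand-in for the reconstruction idea, one can score candidate multidimensional shapes of a signal by how compressible their different readings are; note that zlib below is a crude proxy we substitute for illustration, whereas the method itself relies on algorithmic-probability-based estimators.

```python
import zlib

def best_grid_shape(bits, candidate_shapes):
    """Toy sketch: among candidate (rows, cols) reshapings of a bit string,
    prefer the one whose row-wise and column-wise readings are jointly most
    compressible. zlib is a crude stand-in for the algorithmic-complexity
    estimators the method actually uses."""
    def cost(rows, cols):
        grid = [bits[r * cols:(r + 1) * cols] for r in range(rows)]
        by_rows = "".join(grid)
        by_cols = "".join(row[c] for c in range(cols) for row in grid)
        return (len(zlib.compress(by_rows.encode()))
                + len(zlib.compress(by_cols.encode())))
    return min(candidate_shapes, key=lambda shape: cost(*shape))

# e.g. best_grid_shape(signal_bits, [(4, 16), (8, 8), (16, 4)]) for a 64-bit signal
```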

Partially observable Markov decision processes (POMDPs) provide a flexible representation for real-world decision and control problems. However, POMDPs are notoriously difficult to solve, especially when the state and observation spaces are continuous or hybrid, which is often the case for physical systems. While recent online sampling-based POMDP algorithms that plan with observation likelihood weighting have shown practical effectiveness, a general theory characterizing the approximation error of the particle filtering techniques that these algorithms use has not previously been proposed. Our main contribution is bounding the error between any POMDP and its corresponding finite sample particle belief MDP (PB-MDP) approximation. This fundamental bridge between PB-MDPs and POMDPs allows us to adapt any sampling-based MDP algorithm to a POMDP by solving the corresponding particle belief MDP, thereby extending the convergence guarantees of the MDP algorithm to the POMDP. Practically, this is implemented by using the particle filter belief transition model as the generative model for the MDP solver. While this requires access to the observation density model from the POMDP, it only increases the transition sampling complexity of the MDP solver by a factor of $\mathcal{O}(C)$, where $C$ is the number of particles. Thus, when combined with sparse sampling MDP algorithms, this approach can yield algorithms for POMDPs that have no direct theoretical dependence on the size of the state and observation spaces. In addition to our theoretical contribution, we perform five numerical experiments on benchmark POMDPs to demonstrate that a simple MDP algorithm adapted using PB-MDP approximation, Sparse-PFT, achieves performance competitive with other leading continuous observation POMDP solvers.
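The particle belief transition that serves as the MDP solver's generative model can be sketched as follows, with the POMDP's generative, observation-sampling, and observation-density models passed in as assumed callables; the function signatures are ours, for illustration only.

```python
import numpy as np

def pb_mdp_step(particles, weights, action, gen, sample_obs, obs_density, rng):
    """Sketch of one particle belief MDP transition: propagate all C particles,
    sample one observation, reweight by its likelihood. `gen(s, a, rng) -> (s', r)`,
    `sample_obs(s', a, rng) -> o`, and `obs_density(o, s', a)` stand in for the
    POMDP's models (assumed available)."""
    successors = [gen(s, action, rng) for s in particles]        # O(C) model calls
    next_states = [sp for sp, _ in successors]
    rewards = np.array([r for _, r in successors])

    # Draw the observation from one propagated particle, chosen by weight.
    i = rng.choice(len(particles), p=weights / weights.sum())
    obs = sample_obs(next_states[i], action, rng)

    # Particle-filter reweighting by observation likelihood.
    new_w = weights * np.array([obs_density(obs, sp, action) for sp in next_states])
    belief_reward = float(weights @ rewards) / float(weights.sum())
    return next_states, new_w / new_w.sum(), belief_reward
```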

The rapid growth of decentralized energy resources, and especially of Electric Vehicles (EVs), whose numbers are expected to increase sharply over the next decade, will put further stress on existing power distribution networks, increasing the need for higher system reliability and flexibility. In an attempt to avoid unnecessary network investments and to increase their control over distribution networks, network operators develop demand response (DR) programs that incentivize end users to shift their consumption in return for financial or other benefits. Artificial intelligence (AI) methods are at the research forefront for residential load scheduling applications, mainly due to their high accuracy, high computational speed, and lower dependence on the physical characteristics of the models under development. The aim of this work is to identify a cost-reducing EV charging policy for households under a Time-of-Use tariff scheme, with the use of Deep Reinforcement Learning, and more specifically Deep Q-Networks (DQN). A novel end-user flexibility potential reward is inferred from historical data analysis, and data from households with solar power generation are used to train and test the designed algorithm. The suggested DQN EV charging policy can lead to savings of more than 20% in end users' electricity bills.
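A per-step reward under a Time-of-Use tariff might look like the sketch below; the price schedule, penalty weight, and units are assumptions for illustration, not the flexibility reward inferred in the paper.

```python
def charging_reward(power_kw, hour, soc, tou_price, dt_h=1.0):
    """Illustrative per-step reward for ToU-aware EV charging: negative energy
    cost plus a penalty for charging an already full battery. Prices, penalty
    weight, and state layout are assumptions, not the paper's exact design."""
    cost = tou_price[hour] * power_kw * dt_h            # cost of energy this step
    penalty = 10.0 if soc >= 1.0 and power_kw > 0 else 0.0
    return -(cost + penalty)

# e.g. a two-level tariff: cheap at night, expensive during the day
tou_price = {h: 0.10 if h < 7 or h >= 23 else 0.25 for h in range(24)}
```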

A globally convergent convexification numerical method is constructed for a Coefficient Inverse Problem for the Mean Field Games System. A coefficient characterizing the global interaction term is recovered from single measurement data. In particular, a new Carleman estimate for the Volterra integral operator is proven, and it is stronger than the previously known one. Numerical results demonstrate accurate reconstructions from noisy data.

Large Language Models (LLMs) have shown excellent generalization capabilities that have led to the development of numerous models. These models propose new architectures, tweak existing architectures with refined training strategies, increase context length, use higher-quality training data, and increase training time to outperform baselines. Analyzing new developments is crucial for identifying changes that enhance training stability and improve generalization in LLMs. This survey paper comprehensively analyses LLM architectures and their categorization, training strategies, training datasets, and performance evaluations, and discusses future research directions. Moreover, the paper also discusses the basic building blocks and concepts behind LLMs, followed by a complete overview of LLMs, including their important features and functions. Finally, the paper summarizes significant findings from LLM research and consolidates essential architectural and training strategies for developing advanced LLMs. Given the continuous advancements in LLMs, we intend to regularly update this paper by incorporating new sections and featuring the latest LLM models.

Graph Neural Networks (GNNs) have recently become increasingly popular due to their ability to learn complex systems of relations or interactions arising in a broad spectrum of problems ranging from biology and particle physics to social networks and recommendation systems. Despite the plethora of different models for deep learning on graphs, few approaches have been proposed thus far for dealing with graphs that present some sort of dynamic nature (e.g. evolving features or connectivity over time). In this paper, we present Temporal Graph Networks (TGNs), a generic, efficient framework for deep learning on dynamic graphs represented as sequences of timed events. Thanks to a novel combination of memory modules and graph-based operators, TGNs significantly outperform previous approaches while also being more computationally efficient. We furthermore show that several previous models for learning on dynamic graphs can be cast as specific instances of our framework. We perform a detailed ablation study of the different components of our framework and devise the best configuration, which achieves state-of-the-art performance on several transductive and inductive prediction tasks for dynamic graphs.
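The memory mechanism at the heart of TGNs can be sketched as a GRU cell applied to interaction messages; the message layout below omits time encodings and message aggregation, so it is a simplified illustration rather than the full model.

```python
import torch
import torch.nn as nn

class TGNMemorySketch(nn.Module):
    """Simplified sketch of TGN's per-node memory, updated by a GRU cell on
    event messages. No time encoding or message aggregation; dimensions and
    message contents are illustrative assumptions."""
    def __init__(self, num_nodes, mem_dim, edge_feat_dim):
        super().__init__()
        self.register_buffer("memory", torch.zeros(num_nodes, mem_dim))
        self.cell = nn.GRUCell(2 * mem_dim + edge_feat_dim, mem_dim)

    @torch.no_grad()  # memory is typically maintained outside the autograd graph
    def update(self, src, dst, edge_feats):
        # Message for each event: sender memory, receiver memory, event features.
        msg = torch.cat([self.memory[src], self.memory[dst], edge_feats], dim=-1)
        self.memory[src] = self.cell(msg, self.memory[src])
```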
