
For the numerical solution of Dirichlet-type boundary value problems associated with nonlinear fractional differential equations of order $\alpha \in (1,2)$ involving Caputo derivatives, we suggest employing shooting methods. In particular, we demonstrate that the so-called proportional secting technique for selecting the required initial values leads to numerical schemes that converge to high accuracy in a very small number of shooting iterations, and we explain the analytical background of this favourable numerical behaviour.
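
The shooting idea can be illustrated with a short sketch. The fractional (Caputo) solver and the proportional secting selection rule are not reproduced here; instead the classical integer-order analogue $y'' = f(t,y)$ is integrated with a plain secant update on the unknown initial slope, purely to show how the Dirichlet condition at the right endpoint is matched by iterating on initial values. The nonlinearity, boundary data, and tolerances are placeholders.

```python
# Shooting for a two-point Dirichlet BVP, illustrated on the classical
# (integer-order) analogue y'' = f(t, y), y(0) = ya, y(T) = yb.
# Sketch only: the fractional Caputo solver and the "proportional secting"
# update from the abstract are replaced by a plain secant iteration.
import numpy as np
from scipy.integrate import solve_ivp

def f(t, y):
    return -np.sin(y)            # example nonlinearity (assumed, not from the paper)

def endpoint(s, T=1.0, ya=0.0):
    """Integrate the IVP with initial slope s and return y(T)."""
    sol = solve_ivp(lambda t, u: [u[1], f(t, u[0])], (0.0, T), [ya, s],
                    rtol=1e-10, atol=1e-12)
    return sol.y[0, -1]

def shoot(ya=0.0, yb=0.5, T=1.0, s0=0.0, s1=1.0, tol=1e-10, maxit=20):
    """Secant iteration on the miss distance F(s) = y(T; s) - yb."""
    F0, F1 = endpoint(s0, T, ya) - yb, endpoint(s1, T, ya) - yb
    for _ in range(maxit):
        s2 = s1 - F1 * (s1 - s0) / (F1 - F0)   # secant step on the initial slope
        F2 = endpoint(s2, T, ya) - yb
        if abs(F2) < tol:
            return s2
        s0, F0, s1, F1 = s1, F1, s2, F2
    return s1

print(shoot())   # initial slope whose trajectory hits the right boundary value
```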

Related content

Program synthesis aims to create accurate, executable programs from problem specifications, specifically from natural language descriptions in our context. Recent studies have leveraged the power of reinforcement learning (RL) in conjunction with large language models (LLMs), significantly enhancing code generation capabilities. The application of RL focuses on directly optimizing for functional correctness, offering an advantage over conventional supervised methods. Although policy-based RL methods dominate the literature on RL for program synthesis, the nature of program synthesis tasks suggests a natural alignment with value-based methods. This stems from the rich collection of off-policy programs, including programs written by human programmers as well as historical samples, coupled with the straightforward verification of generated programs through automated unit testing, which makes rewards easy to obtain. Diverging from the dominant use of policy-based algorithms, our work explores the feasibility of value-based approaches, leading to the development of our $\mathcal{B}$-Coder (pronounced Bellman coder). Yet, training value-based methods presents challenges due to the enormous search space inherent to program synthesis. To this end, we introduce an initialization protocol for RL agents utilizing pre-trained LMs and a conservative Bellman operator to reduce training complexity. Moreover, we demonstrate how to leverage the learned value functions as a dual strategy to post-process generated programs. Our empirical evaluations demonstrate that $\mathcal{B}$-Coder achieves state-of-the-art performance compared to policy-based methods. Remarkably, this is achieved with minimal reward engineering effort, highlighting the effectiveness of value-based RL independent of reward design.
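
As a rough illustration of what a value-based, off-policy treatment can look like, the toy sketch below performs fitted Q-iteration over a fixed transition dataset and restricts the bootstrap to actions that actually occur in the data. This is only one generic way to make a Bellman backup conservative; it is not the operator used by $\mathcal{B}$-Coder, which the abstract does not specify, and the states, actions and rewards are placeholders.

```python
# Illustrative tabular sketch of a "conservative" Bellman backup for
# offline Q-learning. NOT the B-Coder operator (unspecified in the
# abstract); it only shows one common way to be conservative: bootstrap
# through actions present in the off-policy data instead of a full max.
import numpy as np

n_states, n_actions, gamma = 5, 3, 0.99
Q = np.zeros((n_states, n_actions))

# Off-policy transitions (s, a, r, s'), e.g. historical programs scored
# by unit tests (reward 1 if all tests pass, else 0) -- toy values here.
data = [(0, 1, 0.0, 1), (1, 2, 1.0, 4), (1, 0, 0.0, 2), (2, 1, 0.0, 3)]
seen = {}                                   # actions observed in each state
for s, a, r, s2 in data:
    seen.setdefault(s, set()).add(a)

for _ in range(200):                        # fitted value iteration over the dataset
    for s, a, r, s2 in data:
        acts = sorted(seen.get(s2, []))
        boot = max(Q[s2, b] for b in acts) if acts else 0.0   # restricted max
        Q[s, a] += 0.5 * (r + gamma * boot - Q[s, a])
print(Q)
```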

We study the distributions of waiting times in variations of the negative binomial distribution of order $k$. One variation applies a different enumeration scheme to the runs of successes. Another considers binary trials for which the probability of ones varies geometrically. We investigate the exact distribution of the waiting time for the $r$-th occurrence of a success run of a specified length (non-overlapping, overlapping, at least, exactly, $\ell$-overlapping) in a $q$-sequence of binary trials. The main theorems give the Type $1$, $2$, $3$ and $4$ $q$-negative binomial distributions of order $k$ and the $q$-negative binomial distribution of order $k$ in the $\ell$-overlapping case. In the present work, we consider a sequence of independent binary zero-one trials with not necessarily identical distributions, the probability of ones varying according to a geometric rule. Exact formulae for the distributions are obtained by means of enumerative combinatorics.
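
A small Monte Carlo simulation makes the waiting-time quantity concrete. The geometric rule for the success probabilities is assumed here to be $p_j = p\,q^{j-1}$ for the $j$-th trial, which is one common convention and may differ from the paper's exact parametrisation; runs are counted in the non-overlapping sense.

```python
# Monte Carlo estimate of the waiting time for the r-th non-overlapping
# success run of length k in independent binary trials whose success
# probability varies geometrically (assumed convention: p_j = p * q**(j-1)).
import random

def waiting_time(r, k, p, q, max_trials=10_000):
    runs, streak = 0, 0
    for j in range(1, max_trials + 1):
        if random.random() < p * q ** (j - 1):
            streak += 1
            if streak == k:               # a completed run (non-overlapping count)
                runs += 1
                streak = 0
                if runs == r:
                    return j
        else:
            streak = 0
    return None                           # r runs never occurred within the horizon

random.seed(0)
samples = [waiting_time(r=2, k=3, p=0.9, q=0.995) for _ in range(10_000)]
samples = [t for t in samples if t is not None]
print(sum(samples) / len(samples))        # empirical mean waiting time
```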

We consider the gradient descent flow widely used for the minimization of the $\mathcal{L}^2$ cost function in Deep Learning networks, and introduce two modified versions: one adapted for the overparametrized setting, and the other for the underparametrized setting. Both have a clear and natural invariant geometric meaning, taking into account the pullback vector bundle structure in the overparametrized setting and the pushforward vector bundle structure in the underparametrized setting. In the overparametrized case, we prove that, provided a rank condition holds, all orbits of the modified gradient descent drive the $\mathcal{L}^2$ cost to its global minimum at a uniform exponential convergence rate; one thereby obtains an a priori stopping time for any prescribed proximity to the global minimum. We point out relations of the latter to sub-Riemannian geometry.
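
For orientation, the unmodified flow and the type of estimate meant by a uniform exponential convergence rate are of the form (the modified flows, the precise constants, and the rank condition are as in the paper and not reproduced here):
$$\dot\theta(t) = -\nabla_\theta\,\mathcal{C}[\theta(t)], \qquad \mathcal{C}[\theta(t)] - \mathcal{C}_{\min} \;\le\; \bigl(\mathcal{C}[\theta(0)] - \mathcal{C}_{\min}\bigr)\, e^{-c t}, \quad c > 0 \text{ uniform over orbits}.$$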

A unified construction of $H(\textrm{div})$-conforming finite element tensors, including the vector div element, the symmetric div matrix element, the traceless div matrix element, and, in general, tensors with linear constraints, is developed in this work. It is based on the geometric decomposition of Lagrange elements into bubble functions on each sub-simplex. The tensor at each sub-simplex is then decomposed into tangential and normal components. The tangential component forms the bubble function space and the normal component characterizes the trace. A deep exploration of boundary degrees of freedom is presented for discovering various finite elements. The developed finite element spaces are $H(\textrm{div})$-conforming and satisfy the discrete inf-sup condition. An explicit basis of the constraint tensor space is also established.
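
In the simplest (vector-valued) case, the tangential-normal split at a codimension-one sub-simplex (face) $F$ with unit normal $n$ reads
$$v|_F = (v\cdot n)\,n + \Pi_F v, \qquad \Pi_F v := v - (v\cdot n)\,n,$$
where the normal component $(v\cdot n)$ carries the $H(\textrm{div})$ trace and the tangential part $\Pi_F v$ contributes to the bubble space; the matrix- and constrained-tensor-valued cases are handled analogously, with details as in the paper.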

Sparse index tracking is a prominent passive portfolio management strategy that constructs a sparse portfolio to track a financial index. A sparse portfolio is preferable to a full portfolio in terms of reducing transaction costs and avoiding illiquid assets. To achieve portfolio sparsity, conventional studies have utilized $\ell_p$-norm regularizations as a continuous surrogate of the $\ell_0$-norm regularization. Although these formulations can construct sparse portfolios, their practical application is challenging due to the intricate and time-consuming process of tuning parameters to achieve a desired upper limit on the number of assets in the portfolio. In this paper, we propose a new problem formulation of sparse index tracking using an $\ell_0$-norm constraint that enables easy control of the upper bound on the number of assets in the portfolio. Moreover, our approach offers a choice between constraints on portfolio sparsity and turnover sparsity, further reducing transaction costs by limiting asset updates at each rebalancing interval. Furthermore, we develop an efficient algorithm for solving this problem based on a primal-dual splitting method. Finally, we illustrate the effectiveness of the proposed method through experiments on the S&P500 and Russell3000 index datasets.
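
A plausible instance of the $\ell_0$-constrained formulation (illustrative only; the paper's exact tracking-error measure and the turnover-sparsity variant may differ in detail) is
$$\min_{w\in\mathbb{R}^N}\ \frac{1}{T}\,\bigl\lVert R\,w - r^{\mathrm{idx}} \bigr\rVert_2^2 \quad \text{subject to} \quad \lVert w\rVert_0 \le K,\quad w \ge 0,\quad \mathbf{1}^{\top} w = 1,$$
where $R \in \mathbb{R}^{T\times N}$ collects asset returns, $r^{\mathrm{idx}}$ the index returns, and $K$ is the maximum number of assets held; $K$ is exactly the quantity that is easy to set under an $\ell_0$ constraint and hard to control through $\ell_p$ regularization weights.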

Differentiable physics simulation provides an avenue to tackle previously intractable challenges through gradient-based optimization, thereby greatly improving the efficiency of solving robotics-related problems. To apply differentiable simulation in diverse robotic manipulation scenarios, a key challenge is to integrate various materials in a unified framework. We present SoftMAC, a differentiable simulation framework that couples soft bodies with articulated rigid bodies and cloth. SoftMAC simulates soft bodies with the continuum-mechanics-based Material Point Method (MPM). We provide a novel forecast-based contact model for MPM, which effectively reduces penetration without introducing other artifacts such as unnatural rebound. To couple MPM particles with deformable and non-volumetric cloth meshes, we also propose a penetration tracing algorithm that reconstructs the signed distance field in a local area. Diverging from previous works, SoftMAC simulates the complete dynamics of each modality and incorporates them into a cohesive system with an explicit and differentiable coupling mechanism. This feature enables SoftMAC to handle a broader spectrum of interactions, such as soft bodies serving as manipulators and engaging with underactuated systems. We conducted comprehensive experiments to validate the effectiveness and accuracy of the proposed differentiable pipeline in downstream robotic manipulation applications. Supplementary materials and videos are available on our project website at //sites.google.com/view/softmac.
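
The following toy particle-versus-floor sketch only conveys the generic idea of a forecast-based contact response (correct the velocity when the predicted next position would penetrate, rather than after penetration has occurred); SoftMAC's actual MPM contact model, its penetration tracing for cloth, and its differentiable coupling are substantially more involved, and the SDF, friction handling and parameters below are assumptions for illustration.

```python
# Hedged illustration of a forecast-based contact response for a particle
# against a signed distance field (SDF). Generic idea only, not SoftMAC's
# contact model: react to *predicted* penetration instead of penetration
# after the fact.
import numpy as np

def sdf_plane(x):                 # SDF of the floor plane z = 0 (toy obstacle)
    return x[2]

def sdf_normal(x):
    return np.array([0.0, 0.0, 1.0])

def step(x, v, dt=1e-3, friction=0.2):
    x_pred = x + dt * v                           # forecast the next position
    if sdf_plane(x_pred) < 0.0:                   # forecast penetrates: correct velocity now
        n = sdf_normal(x_pred)
        v_n = np.dot(v, n)
        v_t = v - v_n * n
        v = max(0.0, 1.0 - friction) * v_t        # drop normal component, damp tangential part
    return x + dt * v, v

x, v = np.array([0.0, 0.0, 0.01]), np.array([0.5, 0.0, -20.0])
for _ in range(100):
    x, v = step(x, v)
print(x)                                          # particle stays above the floor
```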

Estimating the region of attraction (${\tt RoA}$) for a robot controller is essential for safe application and controller composition. Many existing methods require a closed-form expression, which limits applicability to data-driven controllers. Methods that operate only over trajectory rollouts tend to be data-hungry. In prior work, we have demonstrated that topological tools based on ${\it Morse Graphs}$ (directed acyclic graphs that combinatorially represent the underlying nonlinear dynamics) offer data-efficient ${\tt RoA}$ estimation without needing an analytical model. They struggle, however, with high-dimensional systems because they operate over a state-space discretization. This paper presents ${\it Mo}$rse Graph-aided discovery of ${\it R}$egions of ${\it A}$ttraction in a learned ${\it L}$atent ${\it S}$pace (${\tt MORALS}$). The approach combines auto-encoding neural networks with Morse Graphs. ${\tt MORALS}$ shows promising predictive capabilities in estimating attractors and their ${\tt RoA}$s for data-driven controllers operating over high-dimensional systems, including a 67-dim humanoid robot and a 96-dim 3-fingered manipulator. It first projects the dynamics of the controlled system into a learned latent space. Then, it constructs a reduced form of the Morse Graph representing the bistability of the underlying dynamics, i.e., detecting when the controller results in a desired versus an undesired behavior. The evaluation on high-dimensional robotic datasets indicates data efficiency in ${\tt RoA}$ estimation.
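
The sketch below conveys the flavour of the latent-space abstraction, not the authors' implementation: trajectories are encoded (here by a stand-in linear map instead of a trained auto-encoder), the latent space is discretized into cells, consecutive cells are connected in a digraph, and terminal strongly connected components are read off as candidate attractors. The grid size, encoder, and rollouts are all placeholders.

```python
# Rough sketch of a Morse-graph-like abstraction in a learned latent space
# (not the MORALS pipeline): discretize encoded trajectories into cells,
# connect consecutively visited cells, and treat terminal strongly
# connected components as candidate attractors.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
trajectories = [np.cumsum(rng.normal(size=(200, 6)) * 0.05, axis=0)
                for _ in range(20)]                       # placeholder rollouts

W = rng.normal(size=(6, 2))                               # stand-in for a trained encoder
def encode(x):
    return x @ W

def cell(z, h=0.5):
    return tuple(np.floor(z / h).astype(int))             # uniform grid cell index

G = nx.DiGraph()
for traj in trajectories:
    cells = [cell(z) for z in encode(traj)]
    G.add_edges_from(zip(cells[:-1], cells[1:]))          # edge: cell(t) -> cell(t+1)

cond = nx.condensation(G)                                 # DAG of strongly connected components
attractors = [cond.nodes[n]["members"] for n in cond
              if cond.out_degree(n) == 0]                 # terminal SCCs = candidate attractors
print(len(attractors), "candidate attractor(s)")
```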

Based on the theory of homogeneous spaces, we derive geometrically optimal edge attributes to be used within the flexible message-passing framework. We formalize the notion of weight sharing in convolutional networks as the sharing of message functions over point-pairs that should be treated equally. We define equivalence classes of point-pairs that are identical up to a transformation in the group and derive attributes that uniquely identify these classes. Weight sharing is then obtained by conditioning message functions on these attributes. As an application of the theory, we develop an efficient equivariant group convolutional network for processing 3D point clouds. The theory of homogeneous spaces tells us how to do group convolutions with feature maps over the homogeneous space of positions $\mathbb{R}^3$, positions and orientations $\mathbb{R}^3 {\times} S^2$, and the group $SE(3)$ itself. Among these, $\mathbb{R}^3 {\times} S^2$ is an optimal choice: it can represent directional information, which $\mathbb{R}^3$ methods cannot, and it is significantly more computationally efficient than indexing features on the full $SE(3)$ group. We support this claim with state-of-the-art results -- in accuracy and speed -- on five different benchmarks in 2D and 3D, including interatomic potential energy prediction, trajectory forecasting in N-body systems, and generating molecules via equivariant diffusion models.
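
One natural set of $SE(3)$-invariant attributes for a pair of position-orientation points on $\mathbb{R}^3 {\times} S^2$ consists of the components of the relative position parallel and orthogonal to the source orientation, together with the angle between the two orientations; whether these coincide exactly with the attributes derived in the paper should be checked against it. A minimal sketch:

```python
# Hedged sketch: candidate SE(3)-invariant attributes for a pair of points
# on R^3 x S^2, i.e. (position, orientation) tuples. The three scalars are
# unchanged by a global rotation+translation applied to both points.
import numpy as np

def pair_attributes(x_i, n_i, x_j, n_j):
    d = x_j - x_i                                    # relative position
    along = float(n_i @ d)                           # component along the source orientation
    perp = float(np.linalg.norm(d - along * n_i))    # component orthogonal to it
    cos_angle = float(n_i @ n_j)                     # cosine of the angle between orientations
    return np.array([along, perp, cos_angle])

x_i, n_i = np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0])
x_j, n_j = np.array([1.0, 1.0, 2.0]), np.array([0.0, 1.0, 0.0])
print(pair_attributes(x_i, n_i, x_j, n_j))
# A message function conditioned on these attributes is then shared across all
# point-pairs related by a global SE(3) transformation (the equivalence classes above).
```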

The monotonicity of the discrete Laplacian, i.e., the inverse positivity of the stiffness matrix, implies a discrete maximum principle, which is in general not true for high order accurate schemes on unstructured meshes. On the other hand, it is possible to construct high order accurate monotone schemes on structured meshes. All previously known high order accurate inverse positive schemes are fourth order accurate schemes whose stiffness matrices are either an M-matrix or a product of two M-matrices. For the $Q^3$ spectral element method for the two-dimensional Laplacian, we prove that its stiffness matrix is a product of four M-matrices and is therefore monotone. Such a scheme can be regarded as a fifth order accurate finite difference scheme.
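
The step from the factorization to monotonicity is the standard one: each nonsingular M-matrix has an entrywise nonnegative inverse, so
$$S = M_1 M_2 M_3 M_4 \;\Longrightarrow\; S^{-1} = M_4^{-1} M_3^{-1} M_2^{-1} M_1^{-1} \ge 0 \ \text{(entrywise)},$$
and entrywise nonnegativity of $S^{-1}$ is precisely the inverse positivity, hence the monotonicity, claimed above.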

We consider a distributed setup for reinforcement learning, where each agent has a copy of the same Markov Decision Process but transitions are sampled from the corresponding Markov chain independently by each agent. We show that in this setting, we can achieve a linear speedup for TD($\lambda$), a family of popular methods for policy evaluation, in the sense that $N$ agents can evaluate a policy $N$ times faster provided the target accuracy is small enough. Notably, this speedup is achieved by ``one shot averaging,'' a procedure where the agents run TD($\lambda$) with Markov sampling independently and only average their results after the final step. This significantly reduces the amount of communication required to achieve a linear speedup relative to previous work.
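
A minimal tabular sketch of the one shot averaging procedure follows; the chain, rewards, and step size are arbitrary placeholders, and tabular TD($\lambda$) with accumulating traces stands in for whatever function class the agents actually use.

```python
# Minimal sketch of "one shot averaging" for tabular TD(lambda): N agents run
# TD(lambda) independently with Markov sampling on copies of the same chain,
# and only their final value estimates are averaged. Toy MDP as placeholder.
import numpy as np

rng = np.random.default_rng(0)
n, gamma, lam, alpha, steps, N = 6, 0.9, 0.8, 0.05, 5000, 8
P = rng.dirichlet(np.ones(n), size=n)        # transition matrix of the policy's chain
r = rng.normal(size=n)                       # expected reward in each state

def td_lambda(seed):
    rng_a = np.random.default_rng(seed)
    V, e, s = np.zeros(n), np.zeros(n), 0
    for _ in range(steps):
        s2 = rng_a.choice(n, p=P[s])         # Markov (not i.i.d.) sampling
        delta = r[s] + gamma * V[s2] - V[s]  # TD error
        e = gamma * lam * e
        e[s] += 1.0                          # accumulating eligibility trace
        V += alpha * delta * e
        s = s2
    return V

V_avg = np.mean([td_lambda(seed) for seed in range(N)], axis=0)  # one shot averaging
V_true = np.linalg.solve(np.eye(n) - gamma * P, r)               # exact values for comparison
print(np.linalg.norm(V_avg - V_true))
```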
