
Continuous 2-dimensional space is often discretized by considering a mesh of weighted cells. In this work we study how well a weighted mesh approximates the space with respect to shortest paths. We consider a shortest path $ \mathit{SP_w}(s,t) $ from $ s $ to $ t $ in the continuous 2-dimensional space; a shortest vertex path $ \mathit{SVP_w}(s,t) $ (or any-angle path), which is a shortest path whose vertices are vertices of the mesh; and a shortest grid path $ \mathit{SGP_w}(s,t) $, which is a shortest path in a graph associated with the weighted mesh. We provide upper and lower bounds on the ratios $ \frac{\lVert \mathit{SGP_w}(s,t)\rVert}{\lVert \mathit{SP_w}(s,t)\rVert} $, $ \frac{\lVert \mathit{SVP_w}(s,t)\rVert}{\lVert \mathit{SP_w}(s,t)\rVert} $, and $ \frac{\lVert \mathit{SGP_w}(s,t)\rVert}{\lVert \mathit{SVP_w}(s,t)\rVert} $ in square and hexagonal meshes, extending previous results for triangular grids. These ratios determine the effectiveness of existing algorithms that compute shortest paths on the graphs obtained from the grids. Our main results are that the ratio $ \frac{\lVert \mathit{SGP_w}(s,t)\rVert}{\lVert \mathit{SP_w}(s,t)\rVert} $ is at most $ \frac{2}{\sqrt{2+\sqrt{2}}} \approx 1.08 $ and $ \frac{2}{\sqrt{2+\sqrt{3}}} \approx 1.04 $ in a square and a hexagonal mesh, respectively.
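
For intuition, the square-mesh bound can be checked numerically in the simplest special case of uniform weights, where a shortest 8-connected grid path for a displacement $(d_x, d_y)$ with $d_x \ge d_y \ge 0$ has length $(d_x - d_y) + \sqrt{2}\, d_y$ (the "octile" distance). The sketch below is an illustration of the bound, not the paper's proof: it sweeps directions and recovers the worst-case ratio $\frac{2}{\sqrt{2+\sqrt{2}}}$ near $22.5^\circ$.

```python
# Numerical check of the square-mesh bound in the uniform-weight special case,
# assuming an 8-connected square grid with octile shortest-path lengths.
import math

def grid_path_length(dx: float, dy: float) -> float:
    """Length of a shortest 8-connected grid path for displacement (dx, dy)."""
    dx, dy = abs(dx), abs(dy)
    dx, dy = max(dx, dy), min(dx, dy)
    return (dx - dy) + math.sqrt(2) * dy

# Sweep directions and record the worst ratio SGP/SP against the straight line.
worst = max(
    grid_path_length(math.cos(t), math.sin(t))  # SP has unit length here
    for t in (k * math.pi / 4000 for k in range(1, 2000))
)
print(worst)                             # ~1.08239, attained near 22.5 degrees
print(2 / math.sqrt(2 + math.sqrt(2)))   # the stated bound, also ~1.08239
```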

Related content

Transformers have a remarkable ability to learn and execute tasks based on examples provided within the input itself, without explicit prior training. It has been argued that this capability, known as in-context learning (ICL), is a cornerstone of Transformers' success, yet questions about the necessary sample complexity, pretraining task diversity, and context length for successful ICL remain unresolved. Here, we provide a precise answer to these questions in an exactly solvable model of ICL of a linear regression task by linear attention. We derive sharp asymptotics for the learning curve in a phenomenologically rich scaling regime where the token dimension is taken to infinity; the context length and pretraining task diversity scale proportionally with the token dimension; and the number of pretraining examples scales quadratically. We demonstrate a double-descent learning curve with increasing pretraining examples, and uncover a phase transition in the model's behavior between low and high task diversity regimes: In the low diversity regime, the model tends toward memorization of training tasks, whereas in the high diversity regime, it achieves genuine in-context learning and generalization beyond the scope of pretrained tasks. These theoretical insights are empirically validated through experiments with both linear attention and full nonlinear Transformer architectures.
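
As a rough illustration of the task setup (not the paper's exact model), the following sketch draws Gaussian regression tasks and applies the simple averaging readout $\hat{w} = \frac{1}{L}\sum_i y_i x_i$, which a single linear attention head can implement; its query-point error shrinks as the context length grows relative to the token dimension. All dimensions and distributions below are illustrative choices.

```python
# In-context linear regression with a linear-attention-style readout:
# each "prompt" contains L labelled examples of a hidden task w, plus a query.
import numpy as np

rng = np.random.default_rng(0)
d, L, n_tasks = 32, 256, 1000          # token dim, context length, test tasks

errs = []
for _ in range(n_tasks):
    w = rng.normal(size=d) / np.sqrt(d)          # hidden regression task
    X = rng.normal(size=(L, d))                  # in-context examples
    y = X @ w                                    # noiseless labels
    x_q = rng.normal(size=d)                     # query token
    w_hat = (y[:, None] * X).mean(axis=0)        # averaging readout (1/L) sum y_i x_i
    errs.append((x_q @ w_hat - x_q @ w) ** 2)

print(np.mean(errs))   # shrinks as L/d grows, illustrating ICL of the task
```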

Reduced order models based on the transport of a lower dimensional manifold representation of the thermochemical state, such as Principal Component (PC) transport and Machine Learning (ML) techniques, have been developed to reduce the computational cost associated with the Direct Numerical Simulations (DNS) of reactive flows. Both PC transport and ML normally require an abundance of data to exhibit sufficient predictive accuracy, which might not be available due to the prohibitive cost of DNS or experimental data acquisition. To alleviate such difficulties, similar data from an existing dataset or domain (source domain) can be used to train ML models, potentially resulting in adequate predictions in the domain of interest (target domain). This study presents a novel probabilistic transfer learning (TL) framework to enhance the trust in ML models in correctly predicting the thermochemical state in a lower dimensional manifold and a sparse data setting. The framework uses Bayesian neural networks and autoencoders to reduce the dimensionality of the state space and diffuse the knowledge from the source to the target domain. The new framework is applied to one-dimensional freely-propagating flame solutions under different data sparsity scenarios. The results reveal that there is an optimal amount of knowledge to be transferred, which depends on the amount of data available in the target domain and the similarity between the domains. TL can reduce the reconstruction error by one order of magnitude for cases with large sparsity. The new framework requires 10 times less data for the target domain to reproduce the same error as in the abundant data scenario. Furthermore, comparisons with a state-of-the-art deterministic TL strategy show that the probabilistic method can require four times less data to achieve the same reconstruction error.
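
The following is a minimal sketch of the pretrain-then-transfer flow described above, with synthetic stand-ins for the source and target data; the paper's framework is Bayesian, whereas plain deterministic networks are used here purely to illustrate the freeze-and-fine-tune pattern.

```python
# Pretrain an autoencoder on abundant source-domain data, then transfer the
# frozen encoder and fine-tune only the decoder on sparse target-domain data.
import torch
import torch.nn as nn

d_state, d_latent = 16, 3   # illustrative state and manifold dimensions

encoder = nn.Sequential(nn.Linear(d_state, 32), nn.Tanh(), nn.Linear(32, d_latent))
decoder = nn.Sequential(nn.Linear(d_latent, 32), nn.Tanh(), nn.Linear(32, d_state))
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

x_source = torch.randn(4096, d_state)   # abundant source-domain samples (synthetic)
x_target = torch.randn(64, d_state)     # sparse target-domain samples (synthetic)

# 1) Pretrain the autoencoder on the source domain.
for _ in range(500):
    opt.zero_grad()
    loss = nn.functional.mse_loss(decoder(encoder(x_source)), x_source)
    loss.backward()
    opt.step()

# 2) Transfer: freeze the encoder, fine-tune only the decoder on target data.
for p in encoder.parameters():
    p.requires_grad_(False)
opt_ft = torch.optim.Adam(decoder.parameters(), lr=1e-4)
for _ in range(200):
    opt_ft.zero_grad()
    loss = nn.functional.mse_loss(decoder(encoder(x_target)), x_target)
    loss.backward()
    opt_ft.step()
print("target reconstruction MSE:", loss.item())
```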

Quantum hypothesis testing (QHT) has been traditionally studied from the information-theoretic perspective, wherein one is interested in the optimal decay rate of error probabilities as a function of the number of samples of an unknown state. In this paper, we study the sample complexity of QHT, wherein the goal is to determine the minimum number of samples needed to reach a desired error probability. By making use of the wealth of knowledge that already exists in the literature on QHT, we characterize the sample complexity of binary QHT in the symmetric and asymmetric settings, and we provide bounds on the sample complexity of multiple QHT. In more detail, we prove that the sample complexity of symmetric binary QHT depends logarithmically on the inverse error probability and inversely on the negative logarithm of the fidelity. As a counterpart of the quantum Stein's lemma, we also find that the sample complexity of asymmetric binary QHT depends logarithmically on the inverse type II error probability and inversely on the quantum relative entropy, provided that the type II error probability is sufficiently small. We then provide lower and upper bounds on the sample complexity of multiple QHT; improving these bounds remains an intriguing open question. The final part of our paper outlines and reviews how the sample complexity of QHT is relevant to a broad swathe of research areas and can enhance understanding of many fundamental concepts, including quantum algorithms for simulation and search, quantum learning and classification, and foundations of quantum mechanics. As such, we view our paper as an invitation to researchers coming from different communities to study and contribute to the problem of sample complexity of QHT, and we outline a number of open directions for future research.
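
Schematically, the stated characterizations of binary QHT read as follows, where $F$ denotes the fidelity, $D$ the quantum relative entropy, and the constants and precise smallness conditions are as in the paper:

$$
n^*_{\mathrm{sym}}(\varepsilon) \;=\; \Theta\!\left(\frac{\log(1/\varepsilon)}{-\log F(\rho,\sigma)}\right),
\qquad
n^*_{\mathrm{asym}}(\varepsilon_{\mathrm{II}}) \;\approx\; \frac{\log(1/\varepsilon_{\mathrm{II}})}{D(\rho\,\|\,\sigma)}
\quad \text{for sufficiently small } \varepsilon_{\mathrm{II}}.
$$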

In the setting of homotopy type theory, each type can be interpreted as a space. Moreover, given an element of a type, i.e. a point in the corresponding space, one can define another type which encodes the space of loops based at this point. In particular, when the type we started with is a groupoid, this loop space is always a group. Conversely, to every group we can associate a type (more precisely, a pointed connected groupoid) whose loop space is this group: this operation is called delooping. The generic procedures for constructing such deloopings of groups (based on torsors, or on descriptions of Eilenberg-MacLane spaces as higher inductive types) are unfortunately equipped with elimination principles which do not directly allow eliminating to untruncated types, and are thus difficult to work with in practice. Here, we construct deloopings of the cyclic groups $\mathbb{Z}_m$ which are cellular, and thus do not suffer from this shortcoming. In order to do so, we provide type-theoretic implementations of lens spaces, which constitute an important family of spaces in algebraic topology. Our definition is based on the computation of an iterative join of suitable maps from the circle to an arbitrary delooping of $\mathbb{Z}_m$. In some sense, this work generalizes the construction of real projective spaces by Buchholtz and Rijke, which handles the case $m=2$, although the general setting requires more involved tools. Finally, we use this construction to also provide cellular descriptions of dihedral groups, and explain how we can hope to use those to compute the cohomology and higher actions of such groups.
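
Classically, the picture behind this construction is the following (a schematic summary of standard algebraic topology, not the type-theoretic definitions themselves): the join of $n$ circles is the sphere $S^{2n-1}$, the quotient by a free $\mathbb{Z}_m$-action gives the lens space, and the infinite-dimensional lens space deloops $\mathbb{Z}_m$:

$$
\underbrace{S^1 * \cdots * S^1}_{n} \;\simeq\; S^{2n-1},
\qquad
L^{2n-1}(m) \;=\; S^{2n-1}/\mathbb{Z}_m,
\qquad
L^{\infty}(m) \;\simeq\; B\mathbb{Z}_m,
\quad
\Omega\, B\mathbb{Z}_m \;\simeq\; \mathbb{Z}_m.
$$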

Neuro-evolutionary methods have proven effective in addressing a wide range of tasks. However, the study of the robustness and generalizability of evolved artificial neural networks (ANNs) has remained limited. This has immense implications in fields like robotics, where such controllers are used in control tasks. Unexpected morphological or environmental changes during operation can risk failure if the ANN controllers are unable to handle these changes. This paper proposes an algorithm that aims to enhance the robustness and generalizability of the controllers. This is achieved by introducing morphological variations during the evolutionary training process. As a result, it is possible to discover generalist controllers that can handle a wide range of morphological variations sufficiently well, without needing information about their morphologies or adapting their parameters. We perform an extensive experimental analysis in simulation that demonstrates the trade-off between specialist and generalist controllers. The results show that generalists are able to control a range of morphological variations at the cost of underperforming on a specific morphology relative to a specialist. This research contributes to the field by addressing the limited understanding of robustness and generalizability and proposes a method to improve these properties.
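
A minimal sketch of the core idea, evaluating each candidate controller across sampled morphological variations and selecting on mean fitness, is given below; the simulator, controller encoding, and fitness function are illustrative placeholders rather than the paper's setup.

```python
# Evolving generalist controllers: fitness is averaged over sampled morphologies.
import numpy as np

rng = np.random.default_rng(0)
n_params, pop_size, n_morphs = 8, 32, 5

def fitness(params: np.ndarray, morphology: np.ndarray) -> float:
    """Placeholder for a rollout of the controller on one morphology."""
    return -np.sum((params - morphology) ** 2)   # toy objective, for illustration

pop = rng.normal(size=(pop_size, n_params))
for gen in range(100):
    morphs = rng.normal(size=(n_morphs, n_params))           # sampled variations
    # Generalist objective: mean fitness across all sampled morphologies.
    scores = np.array([np.mean([fitness(p, m) for m in morphs]) for p in pop])
    elite = pop[np.argsort(scores)[-pop_size // 4:]]          # truncation selection
    pop = np.repeat(elite, 4, axis=0) + 0.1 * rng.normal(size=(pop_size, n_params))

print("best generalist score:", scores.max())
```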

Maximal regularity is a kind of a priori estimate for parabolic-type equations, and it plays an important role in the theory of nonlinear differential equations. The aim of this paper is to investigate the temporally discrete counterpart of maximal regularity for the discontinuous Galerkin (DG) time-stepping method. We establish such an estimate, without logarithmic factor, over a quasi-uniform temporal mesh. To show the main result, we introduce a temporally regularized Green's function and then reduce the discrete maximal regularity to a weighted error estimate for its DG approximation. Our results should be useful for the investigation of DG approximations of nonlinear parabolic problems.
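
For context, the continuous estimate in question has the schematic form below for the abstract parabolic problem $u'(t) + Au(t) = f(t)$, $u(0)=0$, in a Banach space $X$; the paper's discrete counterpart bounds the analogous quantities for the DG time-stepping solution (with the derivative understood piecewise and jump terms included), with a constant independent of the quasi-uniform time mesh and without a logarithmic factor:

$$
\|u'\|_{L^p(0,T;X)} + \|Au\|_{L^p(0,T;X)} \;\le\; C\,\|f\|_{L^p(0,T;X)}.
$$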

Finite discrete-time dynamical systems (FDDS) model phenomena that evolve deterministically in discrete time. It is possible to define sum and product operations on these systems (disjoint union and direct product, respectively), giving a commutative semiring. This algebraic structure has led to several works employing polynomial equations to express hypotheses about phenomena modelled using FDDS. Solving these equations requires algorithms for performing division and computing $k$-th roots. In this paper, we propose two polynomial-time algorithms for these tasks, under the condition that the result is a connected FDDS. This ultimately leads to an efficient solution to equations of the form $AX^k=B$ for connected $X$. These results are important final steps toward solving more general polynomial equations on FDDS.
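
The two semiring operations named above are straightforward to state concretely; in the sketch below an FDDS is represented as a dictionary mapping each state to its successor (a representation chosen purely for illustration).

```python
# FDDS semiring operations: sum is disjoint union, product is direct product.
def fdds_sum(f: dict, g: dict) -> dict:
    """Disjoint union: tag states so the two systems evolve side by side."""
    h = {(0, s): (0, f[s]) for s in f}
    h.update({(1, s): (1, g[s]) for s in g})
    return h

def fdds_product(f: dict, g: dict) -> dict:
    """Direct product: pairs of states evolve componentwise."""
    return {(s, t): (f[s], g[t]) for s in f for t in g}

# Example: a 3-cycle times a fixed point is again a 3-cycle.
cycle3 = {0: 1, 1: 2, 2: 0}
point = {"a": "a"}
print(fdds_product(cycle3, point))
# {(0, 'a'): (1, 'a'), (1, 'a'): (2, 'a'), (2, 'a'): (0, 'a')}
```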

We study the numerical approximation of the stochastic heat equation with a distributional reaction term $b$. Under a condition on the Besov regularity of $b$, it was recently proven that a strong solution exists and is unique in the pathwise sense, in a class of H\"older continuous processes. For a suitable choice of a sequence $(b^k)_{k\in \mathbb{N}}$ approximating $b$, we prove that the error between the solution $u$ of the SPDE with reaction term $b$ and its tamed Euler finite-difference scheme with mollified drift $b^k$ converges to $0$ in $L^m(\Omega)$ with a rate that depends on the Besov regularity of $b$. In particular, two interesting cases arise: first, even when $b$ is only a (finite) measure, a rate of convergence is obtained; on the other hand, when $b$ is a bounded measurable function, the (almost) optimal rates of convergence $(\frac{1}{2}-\varepsilon)$ in space and $(\frac{1}{4}-\varepsilon)$ in time are achieved. Stochastic sewing techniques are used in the proofs, in particular to deduce new regularising properties of the discrete Ornstein-Uhlenbeck process.
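
For concreteness, a tamed Euler finite-difference scheme for the one-dimensional stochastic heat equation has the shape sketched below; the mollified drift $b^k$ and the particular taming used here are illustrative stand-ins, and the paper's precise scheme and assumptions differ in detail.

```python
# Explicit finite-difference Euler scheme for du = (u_xx + b_k(u)) dt + dW
# on [0,1] with Dirichlet boundaries and space-time white noise.
import numpy as np

rng = np.random.default_rng(0)
nx, T = 64, 0.1
dx = 1.0 / nx
dt = 0.25 * dx**2                       # explicit-scheme stability constraint
nt = int(T / dt)

def b_k(u):                             # a smooth mollified drift (placeholder)
    return np.tanh(10.0 * u)

u = np.zeros(nx + 1)                    # zero initial condition
for _ in range(nt):
    lap = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2
    drift = b_k(u) / (1.0 + dt * np.abs(b_k(u)))          # one common taming
    noise = rng.normal(size=u.shape) * np.sqrt(dt / dx)   # discretized white noise
    u = u + dt * (lap + drift) + noise
    u[0] = u[-1] = 0.0                  # Dirichlet boundary conditions
print("max |u| at T:", np.abs(u).max())
```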

In traditional topology optimization, the computing time required to iteratively update the material distribution within a design domain depends strongly on the complexity or size of the problem, limiting its application in real engineering contexts. This work proposes a multi-stage machine learning strategy that aims to predict an optimal topology and the related stress fields of interest, either in 2D or 3D, without resorting to any iterative analysis and design process. The overall topology optimization is treated as a regression task in a low-dimensional latent space that encodes the variability of the target designs. First, a fully-connected model is employed to surrogate the functional link between the parametric input space characterizing the design problem and the latent-space representation of the corresponding optimal topology. The decoder branch of an autoencoder is then exploited to reconstruct the desired optimal topology from its latent representation. The deep learning models are trained on a dataset generated through a standard method of topology optimization implementing the solid isotropic material with penalization (SIMP) approach, for varying boundary and loading conditions. The underlying hypothesis is that optimal topologies share enough common patterns to be compressed into small latent-space representations without significant information loss. Results for a 2D Messerschmitt-B\"olkow-Blohm beam and a 3D bridge case demonstrate the capability of the proposed framework to provide accurate optimal topology predictions in a fraction of a second.
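
The two-stage inference pipeline described above can be sketched as follows; all dimensions and layer sizes are illustrative, and the point is that prediction is a single forward pass rather than an iterative optimization loop.

```python
# Two-stage pipeline: FC surrogate maps design parameters to a latent code,
# and a pretrained decoder reconstructs the optimal topology from that code.
import torch
import torch.nn as nn

d_params, d_latent, n_pixels = 4, 16, 64 * 32   # loads/BCs, latent dim, 2D grid

surrogate = nn.Sequential(                       # design parameters -> latent code
    nn.Linear(d_params, 128), nn.ReLU(), nn.Linear(128, d_latent))
decoder = nn.Sequential(                         # latent code -> density field
    nn.Linear(d_latent, 256), nn.ReLU(), nn.Linear(256, n_pixels), nn.Sigmoid())

# After both models are trained on SIMP-generated optimal topologies,
# prediction is a single forward pass:
params = torch.tensor([[0.3, 0.5, 1.0, 0.0]])   # e.g. load position/direction, BCs
topology = decoder(surrogate(params)).reshape(64, 32)
print(topology.shape)                            # densities in [0, 1] per cell
```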

Variable-exponent fractional models attract increasing attention in various applications, while their rigorous analysis is far from well developed. This work provides general tools to address these models. Specifically, we first develop a convolution method to study the well-posedness, regularity, an inverse problem, and numerical approximation for subdiffusion of variable exponent. For models such as the variable-exponent two-sided space-fractional boundary value problem (including the variable-exponent fractional Laplacian equation as a special case) and the distributed variable-exponent model, for which the convolution method does not apply, we develop a perturbation method to prove their well-posedness. The relation between the convolution method and the perturbation method is discussed, and we further apply the latter to prove the well-posedness of the variable-exponent Abel integral equation and discuss the constraint on the data under different initial values of the variable exponent.
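
As a point of reference, a representative variable-exponent Abel integral equation takes the form below; kernel conventions vary across this literature, so this is one common choice rather than necessarily the paper's exact formulation:

$$
\frac{1}{\Gamma(\alpha(t))}\int_0^t (t-s)^{\alpha(t)-1}\, u(s)\,\mathrm{d}s \;=\; f(t),
\qquad 0 < \alpha(t) < 1.
$$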
