
In this article, we study the effect of small cut elements on the critical time-step size in an immersogeometric context. We analyze different formulations for second-order (membrane) and fourth-order (shell-type) equations, and derive scaling relations between the critical time-step size and the cut-element size for various types of cuts. In particular, we focus on different approaches for the weak imposition of Dirichlet conditions: by penalty enforcement and with Nitsche's method. The stability requirement for Nitsche's method necessitates either a cut-size-dependent penalty parameter or an additional ghost-penalty stabilization term. Our findings show that both techniques suffer from cut-size-dependent critical time-step sizes, but that adding a ghost-penalty term to the mass matrix mitigates this issue. We confirm that this form of `mass-scaling' does not adversely affect the error and convergence characteristics for a transient membrane example, and that it has the potential to increase the critical time-step size by orders of magnitude. Finally, for a prototypical simulation of a Kirchhoff-Love shell, our stabilized Nitsche formulation reduces the solution error by well over an order of magnitude compared to a penalty formulation at equal time-step size.
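
As a rough illustration of the mechanism (not the paper's actual discretization), the sketch below estimates the critical time-step size of an explicit central-difference scheme from the largest generalized eigenvalue of a stiffness/mass pair, and shows how augmenting the mass matrix with a small stabilization term enlarges it; the matrices `K`, `M`, and `G` are hypothetical stand-ins.

```python
# Minimal sketch: critical time step from the generalized eigenvalue problem
# K v = lambda M v, and the effect of adding a (ghost-penalty-like) term to M.
# K, M, and G are hypothetical stand-ins, not the paper's actual operators.
import numpy as np
from scipy.linalg import eigh

def critical_time_step(K, M):
    """Return dt_crit = 2 / sqrt(lambda_max) for central differences."""
    lam_max = eigh(K, M, eigvals_only=True)[-1]
    return 2.0 / np.sqrt(lam_max)

rng = np.random.default_rng(0)
n = 20
K = rng.standard_normal((n, n)); K = K @ K.T + n * np.eye(n)   # SPD "stiffness"
M = np.diag(rng.uniform(1e-6, 1.0, n))                          # "mass" with tiny cut entries
G = 1e-2 * np.eye(n)                                            # stand-in stabilization term

print(critical_time_step(K, M))        # small dt_crit caused by the tiny mass entries
print(critical_time_step(K, M + G))    # augmented mass matrix enlarges dt_crit
```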

Related content

There has recently been much interest in Gaussian processes on linear networks and more generally on compact metric graphs. One proposed strategy for defining such processes on a metric graph $\Gamma$ is through a covariance function that is isotropic in a metric on the graph. Another is through a fractional order differential equation $L^\alpha (\tau u) = \mathcal{W}$ on $\Gamma$, where $L = \kappa^2 - \nabla(a\nabla)$ for (sufficiently nice) functions $\kappa, a$, and $\mathcal{W}$ is Gaussian white noise. We study Markov properties of these two types of fields. We first show that there are no Gaussian random fields on general metric graphs that are both isotropic and Markov. We then show that the second type of fields, the generalized Whittle--Mat\'ern fields, are Markov if and only if $\alpha\in\mathbb{N}$, and that in this case the field is Markov of order $\alpha$, which essentially means that the process in one region $S\subset\Gamma$ is conditionally independent of the process in $\Gamma\setminus S$ given the values of the process and its $\alpha-1$ derivatives on $\partial S$. Finally, we show that the Markov property implies an explicit characterization of the process on a fixed edge $e$, which in particular shows that the conditional distribution of the process on $e$ given the values at the two vertices connected to $e$ is independent of the geometry of $\Gamma$.
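
As a minimal illustration of the Markov-order-1 case ($\alpha = 1$), the sketch below conditions a Gaussian field with exponential-type covariance on a single edge on its two vertex values via standard Gaussian conditioning; the covariance, parameters, and vertex values are illustrative assumptions, not the paper's general construction.

```python
# Minimal sketch (illustrative assumptions): a stationary Gaussian field with
# covariance sigma2 * exp(-kappa*|s-t|) on a single edge [0, ell] is Markov of
# order 1, so conditioning on the two vertex values decouples the edge from the
# rest of the graph.  Standard Gaussian conditioning:
import numpy as np

kappa, sigma2, ell, m = 2.0, 1.0, 1.0, 101
s = np.linspace(0.0, ell, m)
C = sigma2 * np.exp(-kappa * np.abs(s[:, None] - s[None, :]))   # covariance on the edge

idx_b = np.array([0, m - 1])                  # the two vertices of the edge
idx_i = np.arange(1, m - 1)                   # interior points
C_bb, C_ib = C[np.ix_(idx_b, idx_b)], C[np.ix_(idx_i, idx_b)]
u_b = np.array([0.3, -0.5])                   # hypothetical vertex values

# Conditional mean and covariance of the interior given the two endpoints
A = C_ib @ np.linalg.inv(C_bb)
mean_cond = A @ u_b
cov_cond = C[np.ix_(idx_i, idx_i)] - A @ C_ib.T
print(mean_cond[:3], np.diag(cov_cond)[:3])
```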

This study is devoted to two of the oldest known manuscripts in which the oeuvre of the medieval mystical author Hadewijch has been preserved: Brussels, KBR, 2879-2880 (ms. A) and Brussels, KBR, 2877-2878 (ms. B). On the basis of codicological and contextual arguments, it is assumed that the scribe who produced B used A as an exemplar. While the similarities in both layout and content between the two manuscripts are striking, the present article seeks to identify the differences. After all, regardless of the intention to produce a copy that closely follows the exemplar, subtle linguistic variation is apparent. Divergences relate to spelling conventions, but also to the way in which words are abbreviated (and the extent to which abbreviations occur). The present study investigates the spelling profiles of the scribes who produced mss. A and B in a computational way. In the first part of this study, we present both manuscripts in more detail, after which we consider prior research on scribal profiling. The current study both builds on and expands Kestemont (2015). Next, we outline the methodology used to analyse and measure the degree of scribal appropriation that took place when ms. B was copied from the exemplar ms. A. After this, we discuss the results obtained, focusing on the scribal variation that can be found both at the level of individual words and of n-grams. To this end, we use machine learning to identify the most distinctive features that separate manuscript A from B. Finally, we look at possible diachronic trends in the appropriation by B's scribe of his exemplar. We argue that scribal takeovers in the exemplar impact the practice of the copying scribe, whereas transitions to different content matter have little to no effect.
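
A minimal sketch of the kind of character-n-gram classification alluded to above is given below; the text snippets, labels, and the choice of a logistic-regression classifier are hypothetical stand-ins for the study's actual data and pipeline.

```python
# Minimal sketch (hypothetical data and model, not the study's actual pipeline):
# character n-gram features plus a linear classifier, whose largest coefficients
# point to the spellings that most strongly separate two scribes.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

samples_A = ["ende si seide dat", "ghewaerlike minne"]    # placeholder snippets "scribe A"
samples_B = ["ende sij seyde dat", "gewarlike mynne"]     # placeholder snippets "scribe B"
texts = samples_A + samples_B
labels = [0] * len(samples_A) + [1] * len(samples_B)

vec = CountVectorizer(analyzer="char_wb", ngram_range=(2, 4))
X = vec.fit_transform(texts)
clf = LogisticRegression(max_iter=1000).fit(X, labels)

# n-grams with the largest absolute weight are the most distinctive features
order = np.argsort(np.abs(clf.coef_[0]))[::-1][:10]
print([vec.get_feature_names_out()[i] for i in order])
```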

We consider the classic 1-center problem: given a set $P$ of $n$ points in a metric space, find the point in $P$ that minimizes the maximum distance to the other points of $P$. We study the complexity of this problem in $d$-dimensional $\ell_p$-metrics and in edit and Ulam metrics over strings of length $d$. Our results for the 1-center problem may be classified based on $d$ as follows. $\bullet$ Small $d$: Assuming the hitting set conjecture (HSC), we show that when $d=\omega(\log n)$, no subquadratic algorithm can solve the 1-center problem in any of the $\ell_p$-metrics, or in the edit or Ulam metrics. $\bullet$ Large $d$: When $d=\Omega(n)$, we extend our conditional lower bound to rule out subquartic algorithms for the 1-center problem in the edit metric (assuming the Quantified SETH). On the other hand, we give a $(1+\epsilon)$-approximation for 1-center in the Ulam metric with running time $\tilde{O}_{\epsilon}(nd+n^2\sqrt{d})$. We also strengthen some of the above lower bounds by allowing approximations or by reducing the dimension $d$, but only against a weaker class of algorithms that list all requisite solutions. Moreover, we extend one of our hardness results to rule out subquartic algorithms for the well-studied 1-median problem in the edit metric, where, given a set of $n$ strings each of length $n$, the goal is to find a string in the set that minimizes the sum of the edit distances to the rest of the strings in the set.
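
For reference, the quadratic-time brute-force baseline that these lower bounds address looks as follows in an $\ell_p$-metric (a sketch on randomly generated points).

```python
# Minimal sketch: the O(n^2 d) brute-force 1-center baseline in an l_p metric,
# i.e. the algorithm that the subquadratic-hardness results say is essentially
# optimal for d = omega(log n).
import numpy as np

def one_center(P, p=2.0):
    """Return the index and radius of the point of P minimizing its max l_p distance to P."""
    best_idx, best_radius = -1, np.inf
    for i, x in enumerate(P):
        radius = np.max(np.linalg.norm(P - x, ord=p, axis=1))
        if radius < best_radius:
            best_idx, best_radius = i, radius
    return best_idx, best_radius

rng = np.random.default_rng(1)
P = rng.standard_normal((200, 16))
print(one_center(P, p=1.0))
```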

The propagation of charged particles through a scattering medium in the presence of a magnetic field can be described by a Fokker-Planck equation with Lorentz force. This model is studied from both a theoretical and a numerical point of view. A particular trace estimate is derived for the relevant function spaces to clarify the meaning of boundary values. Existence of a weak solution is then proven by the Rothe method. In the second step of our investigations, a fully practicable discretization scheme is proposed, based on implicit time-stepping through the energy levels and a spherical-harmonics finite-element discretization with respect to the remaining variables. A full error analysis of the resulting scheme is given, and numerical results are presented to illustrate the theoretical findings and the performance of the proposed method.
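
As a generic illustration of the algebraic structure behind implicit stepping (not the paper's actual operators or energy-level discretization), one backward-Euler-type step for a semi-discretized evolution equation $M\,du/d\epsilon + Au = f$ can be sketched as follows.

```python
# Minimal sketch (generic stand-in operators): one backward-Euler step for a
# semi-discretized evolution equation M du/de + A u = f, the algebraic structure
# underlying implicit stepping through the energy levels.
import numpy as np

def implicit_step(M, A, f, u_old, de):
    """Solve (M + de*A) u_new = M u_old + de*f for one implicit step of size de."""
    return np.linalg.solve(M + de * A, M @ u_old + de * f)

n, de = 50, 0.01
M = np.eye(n)
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # stand-in discrete operator
f = np.ones(n)
u = np.zeros(n)
for _ in range(100):
    u = implicit_step(M, A, f, u, de)
print(u[:5])
```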

We propose the use of machine learning techniques to find optimal quadrature rules for the construction of stiffness and mass matrices in isogeometric analysis (IGA). We initially consider 1D spline spaces of arbitrary degree spanned over uniform and non-uniform knot sequences; the generated optimal rules are then used for integration over higher-dimensional spaces in a tensor-product sense. The quadrature-rule search is posed as an optimization problem and solved by a machine learning strategy based on gradient descent. However, since the optimization space is highly non-convex, the success of the search strongly depends on the number of quadrature points and on the parameter initialization. Thus, we use a dynamic programming strategy that initializes the parameters from the optimal solution over the spline space with a lower number of knots. With this method, we found optimal quadrature rules for spline spaces when using IGA discretizations with up to 50 uniform elements and polynomial degrees up to 8, showing the generality of the approach in this scenario. For non-uniform partitions, the method also finds an optimal rule in a reasonable number of test cases. We also assess the generated optimal rules in two practical case studies, namely the eigenvalue problem of the Laplace operator and the eigenfrequency analysis of freeform curved beams, where the latter problem shows the applicability of the method to curved geometries. In particular, the proposed method yields savings with respect to traditional Gaussian integration of up to 44% in 1D, 68% in 2D, and 82% in 3D spaces.
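
A minimal sketch of posing a quadrature-rule search as gradient-descent optimization is shown below; exactness on monomials over $[0,1]$ is used as a stand-in target instead of the paper's spline spaces, and the learning rate and initialization are illustrative.

```python
# Minimal sketch: quadrature-rule search as gradient-descent optimization.
# As a stand-in for the paper's spline spaces, an m-point rule on [0,1] is asked
# to integrate the monomials x^0,...,x^q exactly by minimizing the squared
# exactness residuals.  The landscape is non-convex, so the outcome depends on
# the initialization (mirroring the point made in the abstract).
import numpy as np

q, m, lr, steps = 5, 3, 0.02, 20000
exact = 1.0 / (np.arange(q + 1) + 1.0)            # integrals of x^k over [0,1]

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0.1, 0.9, m))             # quadrature points (optimized)
w = np.full(m, 1.0 / m)                           # quadrature weights (optimized)
k = np.arange(q + 1)[:, None]

for _ in range(steps):
    V = x[None, :] ** k                           # exactness matrix, shape (q+1, m)
    r = V @ w - exact                             # residuals of the exactness conditions
    dV = k * x[None, :] ** np.clip(k - 1, 0, None)
    w -= lr * (V.T @ r)                           # gradient w.r.t. weights
    x -= lr * ((dV * w[None, :]).T @ r)           # gradient w.r.t. points

V = x[None, :] ** k
print(np.round(x, 4), np.round(w, 4), float(np.linalg.norm(V @ w - exact)))
```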

In this work, we study non-asymptotic bounds on the correlation between two time realizations of stable linear systems with isotropic Gaussian noise. Consequently, by sampling from a sub-trajectory and using \emph{Talagrand's} inequality, we show that empirical averages of the reward concentrate with high probability around the steady-state reward (to which the dynamical system mixes when the closed-loop system is stable under a linear feedback policy). Contrary to the common belief that a larger spectral radius implies stronger correlation between samples, we show that a \emph{large discrepancy between the algebraic and geometric multiplicity of the system eigenvalues leads to large invariant subspaces of the system-transition matrix}; once the system enters a large invariant subspace, it travels away from the origin for a while before coming close to the unit ball centered at the origin, where isotropic Gaussian noise can, with high probability, allow it to escape the invariant subspace it currently resides in. This creates \emph{bottlenecks} between the different invariant subspaces that span $\mathbb{R}^{n}$. To be precise, a system initiated in a large invariant subspace will be stuck there for a long time: log-linear in the dimension of the invariant subspace and inversely proportional to the logarithm of the inverse of the eigenvalue's magnitude. In the problem of Ordinary Least Squares estimation of the system-transition matrix from a single trajectory, this phenomenon is even more evident when the spectrum of the transition matrix associated with the large invariant subspace is explosive and the small invariant subspaces correspond to stable eigenvalues. Our analysis provides a first interpretable and geometric explanation of the intricacies of learning and concentration for random dynamical systems on continuous, high-dimensional state spaces, exposing surprises in high dimensions.
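
The following toy simulation (illustrative parameters, not the paper's setting) reproduces the described phenomenon: a stable transition matrix consisting of a single Jordan block, so that algebraic and geometric multiplicities differ maximally, produces long excursions away from the origin when driven by isotropic Gaussian noise.

```python
# Minimal sketch (illustrative parameters): a stable transition matrix whose
# eigenvalue 0.95 has algebraic multiplicity n but geometric multiplicity 1
# (a single Jordan block).  Trajectories driven by isotropic Gaussian noise
# show long excursions far from the origin despite a spectral radius below 1.
import numpy as np

n, T, lam = 10, 2000, 0.95
A = lam * np.eye(n) + np.eye(n, k=1)          # Jordan block: one large invariant subspace
rng = np.random.default_rng(0)

x = np.zeros(n)
norms = []
for _ in range(T):
    x = A @ x + rng.standard_normal(n)        # isotropic Gaussian noise
    norms.append(np.linalg.norm(x))

norms = np.array(norms)
print("spectral radius:", lam)
print("max ||x_t||:", norms.max(), "fraction of steps outside unit ball:", (norms > 1).mean())
```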

In this paper, we investigate the stability properties of the so-called gBBKS and GeCo methods, which belong to the class of nonstandard schemes and preserve the positivity as well as all linear invariants of the underlying system of ordinary differential equations for any step size. A stability investigation for these methods, which lie outside the class of general linear methods, is challenging since the iterates are always generated by a nonlinear map, even for linear problems. Recently, a stability theorem was derived presenting criteria for analyzing such schemes. For the analysis, the schemes are applied to general linear equations and proven to be generated by $\mathcal C^1$-maps with locally Lipschitz continuous first derivatives. As a result, the above-mentioned stability theorem can be applied to investigate the Lyapunov stability of non-hyperbolic fixed points of the numerical method by analyzing the spectrum of the corresponding Jacobian of the generating map. In addition, if a fixed point is proven to be stable, the theorem guarantees the local convergence of the iterates towards it. In the case of the first- and second-order gBBKS schemes, the stability domain coincides with that of the underlying Runge--Kutta method. Furthermore, while the first-order GeCo scheme converts steady states to stable fixed points for all step sizes and all linear test problems of finite size, the second-order GeCo scheme has a bounded stability region for the considered test problems. Finally, all theoretical predictions from the stability analysis are validated numerically.
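
A generic sketch of the analysis step described above, with a placeholder generating map rather than the gBBKS/GeCo updates: approximate the Jacobian of the one-step map at a fixed point by finite differences and inspect its spectrum.

```python
# Minimal sketch (placeholder generating map, not gBBKS/GeCo): approximate the
# Jacobian of a one-step map Phi at a fixed point by finite differences and
# inspect its spectrum; a linear invariant forces an eigenvalue equal to 1,
# i.e. a non-hyperbolic fixed point as discussed in the abstract.
import numpy as np

def numerical_jacobian(phi, x_star, eps=1e-7):
    n = x_star.size
    J = np.zeros((n, n))
    for j in range(n):
        e = np.zeros(n); e[j] = eps
        J[:, j] = (phi(x_star + e) - phi(x_star - e)) / (2.0 * eps)
    return J

# Placeholder: explicit Euler for a linear test problem y' = L y with a linear invariant
L = np.array([[-2.0, 1.0], [2.0, -1.0]])     # columns sum to 0, so y_1 + y_2 is conserved
dt = 0.1
phi = lambda y: y + dt * (L @ y)

y_star = np.array([1.0, 2.0])                # steady state: L @ y_star = 0
J = numerical_jacobian(phi, y_star)
print(np.abs(np.linalg.eigvals(J)))          # spectrum of the Jacobian at the fixed point
```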

We derive high-dimensional scaling limits and fluctuations for the online least-squares Stochastic Gradient Descent (SGD) algorithm by explicitly taking the properties of the data-generating model into consideration. Our approach treats the SGD iterates as an interacting particle system, where the expected interaction is characterized by the covariance structure of the input. Assuming smoothness conditions on moments up to order eight, and without explicitly assuming Gaussianity, we establish the high-dimensional scaling limits and fluctuations in the form of infinite-dimensional Ordinary Differential Equations (ODEs) or Stochastic Differential Equations (SDEs). Our results reveal a precise three-step phase transition of the iterates: they go from ballistic, to diffusive, and finally to purely random behavior as the noise variance goes from low, to moderate, and finally to very high. In the low-noise setting, we further characterize the precise fluctuations of the (scaled) iterates as infinite-dimensional SDEs. We also show the existence and uniqueness of solutions to the derived limiting ODEs and SDEs. Our results have several applications, including the characterization of the limiting mean-square estimation or prediction errors and their fluctuations, which can be obtained by solving the limiting equations analytically or numerically.
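
For concreteness, the online least-squares SGD iteration in question can be sketched as follows; the data-generating model below (isotropic Gaussian inputs, linear teacher) is a stand-in chosen only to make the snippet self-contained.

```python
# Minimal sketch (stand-in data model): online least-squares SGD,
# theta <- theta + eta * (y - <x, theta>) x, one fresh sample per step,
# whose high-dimensional scaling limits the abstract studies.
import numpy as np

d, steps, eta, noise_std = 200, 5000, 1.0 / 200, 0.5
rng = np.random.default_rng(0)
theta_true = rng.standard_normal(d) / np.sqrt(d)
theta = np.zeros(d)

errors = []
for _ in range(steps):
    x = rng.standard_normal(d)                      # input with identity covariance
    y = x @ theta_true + noise_std * rng.standard_normal()
    theta += eta * (y - x @ theta) * x              # one online SGD step
    errors.append(np.linalg.norm(theta - theta_true) ** 2)

print(errors[0], errors[-1])                        # squared estimation error, start vs end
```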

A species that, coming from a source population, appears in a new environment where it was not present before is called alien. Due to the harm it poses to biodiversity and the expenses associated with its control, the phenomenon of alien-species invasions is currently under careful examination. Despite a considerable literature on the subject, the formulation of a dedicated statistical model has been deemed essential. The objective is to overcome current computational constraints while correctly accounting for the dynamics behind the spread of alien species. A first record can be seen as a relational event, where the species (the sender) reaches a region (the receiver) for the first time in a certain year. As a result, whenever an alien species is introduced, the relational event graph gains a time-stamped edge. Besides potentially time-varying exogenous and endogenous covariates, our smooth relational event model (REM) also incorporates time-varying and random effects to explain the invasion rate. In particular, we aim to track temporal variations in the direction and magnitude of the impacts of the ecological, socioeconomic, historical, and cultural forces at work. Network structures of particular interest (such as species' co-invasion affinity) are inspected as well. Our inference procedure relies on case-control sampling, yielding the same likelihood as that of a logistic regression. Due to the smooth nature of the incorporated effects, we can fit a generalised additive model in which random effects are also estimated as 0-dimensional splines. The consequent computational advantage makes it possible to examine many taxonomies simultaneously. We explore how vascular plants and insects behave together. The goodness of fit of the smooth REM can be evaluated by means of test statistics computed as region-specific sums of martingale residuals.
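
A minimal sketch of the case-control device on simulated placeholder data: each observed event (case) is paired with a few sampled non-events (controls), and the resulting likelihood is that of a logistic regression on the covariates.

```python
# Minimal sketch (simulated placeholder data, not the study's records): case-control
# sampling for a relational event model.  Every observed first record (case) is
# paired with a few sampled non-events (controls), and the likelihood reduces to
# that of a logistic regression on the dyadic covariates.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_events, n_controls, n_cov = 500, 5, 3
shift = np.array([1.0, -0.5, 0.25])           # how event covariates differ from non-events

rows, labels = [], []
for _ in range(n_events):
    rows.append(rng.standard_normal(n_cov) + shift); labels.append(1)   # the case
    for _ in range(n_controls):                                         # sampled non-events
        rows.append(rng.standard_normal(n_cov)); labels.append(0)

X, y = np.vstack(rows), np.array(labels)
clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.coef_)    # approximately recovers `shift` for this Gaussian placeholder
```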

The design of cable-stayed bridges requires determining the values of several design variables. Civil engineers usually perform this task by hand, as an iteration of steps that stops when the engineer is satisfied with both the cost of the solution and its compliance with the structural constraints. The problem's difficulty arises from the fact that changing one variable may affect other variables, meaning that they are not independent and suggesting that we are facing a deceptive landscape. In this work, we compare two approaches against a baseline solution: a Genetic Algorithm and CMA-ES. There are two objectives when designing the bridges: minimizing the cost and keeping the structural constraints within values considered safe. These are conflicting objectives, meaning that decreasing the cost often results in a bridge that is not structurally safe. The results suggest that CMA-ES is the better option for finding good solutions in the search space, beating the baseline with the same number of evaluations, while the Genetic Algorithm could not. Concretely, the CMA-ES approach is able to design bridges that are cheaper and structurally safe.
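
The sketch below illustrates how cost and structural constraints can be folded into a single penalized objective for an evolutionary optimizer; the cost, constraint, and the plain evolution strategy used here are toy stand-ins for the study's bridge model, GA, and CMA-ES.

```python
# Minimal sketch (toy cost and constraint, plain evolution strategy as a stand-in
# for the GA / CMA-ES of the study): constraint violations are folded into the
# cost as a penalty, turning the design task into one objective to minimize.
import numpy as np

def cost(x):                       # toy stand-in for bridge cost
    return np.sum(x ** 2)

def violation(x):                  # toy stand-in for structural constraints g(x) <= 0
    return np.maximum(0.0, 1.0 - np.sum(x)) ** 2

def penalized(x, weight=100.0):
    return cost(x) + weight * violation(x)

rng = np.random.default_rng(0)
dim, pop, sigma, generations = 10, 20, 0.3, 200
mean = rng.uniform(-1.0, 1.0, dim)

for _ in range(generations):                       # simple (mu, lambda) evolution strategy
    offspring = mean + sigma * rng.standard_normal((pop, dim))
    fitness = np.array([penalized(x) for x in offspring])
    elite = offspring[np.argsort(fitness)[: pop // 4]]
    mean = elite.mean(axis=0)
    sigma *= 0.99                                  # slow step-size decay

print(penalized(mean), violation(mean))
```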
