
This paper introduces a cable finite element, based on an accurate description of the tension field, for the static nonlinear analysis of cable structures. The proposed cable element is derived from the geometrically exact beam model, which adequately accounts for large displacements. By neglecting flexural stiffness and shear deformation, formulations of the cable finite element are presented for two scenarios: a given unstrained length and an undetermined unstrained length. Solution procedures based on the complete tangent matrix and on element-internal iteration are also described. Numerical examples validate the accuracy of the formulation for cable analysis under various conditions and demonstrate the computational efficiency of the proposed element and solution method. The results indicate that the proposed cable finite element not only exhibits extremely high accuracy but also effectively addresses the problem of determining the cable state when the unstrained length is unknown, demonstrating its wide applicability. Combined with an arc-length-controlled iteration algorithm and additional control conditions, the proposed element can further be used to solve complex practical engineering problems.
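The geometrically exact element itself is beyond a short sketch, but the simpler classical problem it generalizes — finding the shape of a hanging cable of known length — gives a feel for the "given length" scenario. The sketch below assumes an inextensible catenary with supports at equal height (a simplification; the paper's element is elastic and fully nonlinear), and all numerical values are illustrative:

```python
import numpy as np
from scipy.optimize import brentq

# Illustrative values: supports at equal height, cable longer than the span.
span, length = 100.0, 110.0
d = span / 2.0

# For y = a*cosh(x/a), the arc length over [-d, d] is 2*a*sinh(d/a).
residual = lambda a: 2.0 * a * np.sinh(d / a) - length

# Small a gives a very long cable and large a a nearly straight one,
# so the root is bracketed between the two endpoints.
a = brentq(residual, 1.0, 1.0e4)
sag = a * (np.cosh(d / a) - 1.0)
print(f"catenary parameter a = {a:.2f}, mid-span sag = {sag:.2f}")
```

The unknown-unstrained-length case inverts this logic: a target configuration is prescribed and the length becomes the unknown of the root find.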

Related Content

While score-based generative models (SGMs) have achieved remarkable success across a wide range of image generation tasks, their mathematical foundations remain limited. In this paper, we analyze the approximation and generalization of SGMs in learning a family of sub-Gaussian probability distributions. We introduce a notion of complexity for probability distributions in terms of their relative density with respect to the standard Gaussian measure. We prove that if the log-relative density can be locally approximated by a neural network whose parameters can be suitably bounded, then the distribution generated by empirical score matching approximates the target distribution in total variation with a dimension-independent rate. We illustrate our theory through examples, which include certain mixtures of Gaussians. An essential ingredient of our proof is to derive a dimension-free deep neural network approximation rate for the true score function associated with the forward process, which is interesting in its own right.
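For intuition on the "true score function associated with the forward process": under an Ornstein-Uhlenbeck (variance-preserving) forward process, a Gaussian mixture target remains a Gaussian mixture at every time, so its perturbed score has a closed form. A minimal 1D sketch (weights, means, and variances are illustrative choices, not from the paper):

```python
import numpy as np

# Illustrative 1D Gaussian mixture target (weights, means, variances).
w = np.array([0.3, 0.7])
mu = np.array([-2.0, 2.0])
s2 = np.array([1.0, 1.0])

def score(x, t):
    """Score of the OU-perturbed mixture: X_t | X_0 ~ N(e^{-t} X_0, 1 - e^{-2t})."""
    a = np.exp(-t)
    m, v = a * mu, a**2 * s2 + (1.0 - a**2)   # each component stays Gaussian
    comp = w * np.exp(-(x - m)**2 / (2.0 * v)) / np.sqrt(2.0 * np.pi * v)
    return np.sum(comp * (m - x) / v) / np.sum(comp)

def log_density(x, t):
    a = np.exp(-t)
    m, v = a * mu, a**2 * s2 + (1.0 - a**2)
    return np.log(np.sum(w * np.exp(-(x - m)**2 / (2.0 * v)) / np.sqrt(2.0 * np.pi * v)))

# Sanity check: the score matches a finite difference of the log-density.
h = 1e-5
fd = (log_density(1.0 + h, 0.5) - log_density(1.0 - h, 0.5)) / (2.0 * h)
print(score(1.0, 0.5), fd)
```

Score matching trains a network to approximate this function when, unlike here, no closed form is available.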

Mendelian randomization uses genetic variants as instrumental variables to make causal inferences about the effects of modifiable risk factors on diseases from observational data. One of the major challenges in Mendelian randomization is that many genetic variants are only modestly or even weakly associated with the risk factor of interest, a setting known as many weak instruments. Many existing methods, such as the popular inverse-variance weighted (IVW) method, can be biased when instrument strength is weak. To address this issue, the debiased IVW (dIVW) estimator, which is robust to many weak instruments, was recently proposed. However, this estimator still has non-ignorable bias when the effective sample size is small. In this paper, we propose a modified debiased IVW (mdIVW) estimator by multiplying the original dIVW estimator by a modification factor. After this simple correction, we show that the bias of the mdIVW estimator converges to zero at a faster rate than that of the dIVW estimator under some regularity conditions. Moreover, the mdIVW estimator has smaller variance than the dIVW estimator. We further extend the proposed method to account for instrumental variable selection and balanced horizontal pleiotropy. We demonstrate the improvement of the mdIVW estimator over the dIVW estimator through extensive simulation studies and real data analysis.
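The IVW/dIVW contrast under many weak instruments can be reproduced in a few lines of simulation. The sketch below uses standard summary-statistic forms of the two estimators; the modification factor that defines mdIVW is specific to the paper and is not reproduced here, and all simulation settings are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
p = 5000                          # many instruments
beta = 0.4                        # true causal effect (illustrative)
gamma = rng.normal(0.0, 0.05, p)  # weak SNP-exposure effects
se_x = np.full(p, 0.05)           # SEs in the exposure GWAS
se_y = np.full(p, 0.05)           # SEs in the outcome GWAS

# Observed two-sample summary statistics.
gamma_hat = gamma + se_x * rng.standard_normal(p)
Gamma_hat = beta * gamma + se_y * rng.standard_normal(p)

w = 1.0 / se_y**2
# Classical IVW ignores the measurement error in gamma_hat, which
# inflates the denominator and biases the estimate toward zero.
ivw = np.sum(w * gamma_hat * Gamma_hat) / np.sum(w * gamma_hat**2)
# dIVW subtracts E[gamma_hat^2 - gamma^2] = se_x^2 in the denominator.
divw = np.sum(w * gamma_hat * Gamma_hat) / np.sum(w * (gamma_hat**2 - se_x**2))
print(f"IVW = {ivw:.3f}, dIVW = {divw:.3f} (truth {beta})")
```

With uniformly weak instruments, the IVW estimate is attenuated toward zero while the debiased version recovers the truth.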

This paper delves into a nonparametric estimation approach for the interaction function within diffusion-type particle system models. We introduce two estimation methods based on empirical risk minimization. Our study encompasses an analysis of the stochastic and approximation errors associated with both procedures, along with an examination of certain minimax lower bounds. In particular, we show that there is a natural metric under which the corresponding minimax estimation error of the interaction function converges to zero at a parametric rate. This result is rather surprising, given the complexity of the underlying estimation problem and the rather large classes of interaction functions for which the above parametric rate holds.
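To fix ideas, the sketch below simulates a mean-field particle system with a linear attraction kernel phi(r) = -theta * r and recovers theta by least squares on the observed increments. This is a parametric stand-in for the paper's nonparametric empirical risk minimization (kernel form, parameter values, and the Euler-Maruyama discretization are all illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, dt, sigma, theta = 100, 5.0, 0.01, 0.5, 1.0
steps = int(T / dt)

X = rng.normal(0.0, 1.0, N)        # initial particle positions
Z, dY = [], []                     # regression design and responses
for _ in range(steps):
    # interaction drift: (1/N) * sum_j phi(X_j - X_i) = theta * (mean(X) - X_i)
    drift = theta * (X.mean() - X)
    noise = sigma * np.sqrt(dt) * rng.standard_normal(N)
    Z.append((X.mean() - X) * dt)
    dY.append(drift * dt + noise)
    X = X + drift * dt + noise

Z, dY = np.concatenate(Z), np.concatenate(dY)
theta_hat = np.sum(Z * dY) / np.sum(Z * Z)   # least squares through the origin
print(f"theta_hat = {theta_hat:.3f}")
```

The nonparametric problem replaces the single coefficient by a function expanded over a basis, with the same least-squares structure.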

The classical approach to analyzing extreme value data is the generalized Pareto distribution (GPD). When the GPD is used to explain a target variable with a large number of covariates, the shape and scale functions of the covariates included in the GPD are sometimes modeled using generalized additive models (GAMs). Despite many applied results, there are no theoretical results on this hybrid of GAM and GPD, which motivates us to develop its asymptotic theory. We provide the rate of convergence of the estimators of the shape and scale functions, as well as their local asymptotic normality.
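As a constant-parameter baseline (the hybrid approach replaces the constants below with smooth functions of covariates, which is not reproduced here), a GPD can be fitted to exceedances by maximum likelihood via scipy; the true parameter values are illustrative:

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(42)
xi_true, sigma_true = 0.2, 1.0     # shape and scale (illustrative)
exceed = genpareto.rvs(xi_true, scale=sigma_true, size=5000, random_state=rng)

# Constant-parameter ML fit with the location pinned at zero; in the
# GAM+GPD hybrid, xi and sigma would instead vary smoothly with covariates.
xi_hat, loc_hat, sigma_hat = genpareto.fit(exceed, floc=0.0)
print(f"xi_hat = {xi_hat:.3f}, sigma_hat = {sigma_hat:.3f}")
```

The paper's asymptotic theory concerns the covariate-dependent version of exactly these estimators.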

Several mixed-effects models for longitudinal data have been proposed to accommodate the non-linearity of late-life cognitive trajectories and assess the putative influence of covariates on them. No prior research provides a side-by-side examination of these models to offer guidance on their proper application and interpretation. In this work, we examined five statistical approaches previously used to answer research questions related to non-linear changes in cognitive aging: the linear mixed model (LMM) with a quadratic term, the LMM with splines, the functional mixed model, the piecewise linear mixed model, and the sigmoidal mixed model. We first describe the models theoretically. Next, using data from two prospective cohorts with annual cognitive testing, we compared the interpretation of the models by investigating associations of education with cognitive change before death. Lastly, we performed a simulation study to empirically evaluate the models and provide practical recommendations. Except for the LMM with a quadratic term, the fit of all models was generally adequate to capture the non-linearity of cognitive change, and the models were relatively robust. Although spline-based models have no interpretable non-linearity parameters, their convergence was easier to achieve, and they allow graphical interpretation. In contrast, piecewise and sigmoidal models, with interpretable non-linear parameters, may require more data to achieve convergence.
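A minimal, population-level sketch of the piecewise linear idea: with a known, fixed change point (the actual piecewise mixed model estimates the knot and includes random effects — both omitted here as simplifications), a linear-spline least-squares fit recovers the pre- and post-knot slopes. All values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
# 50 "subjects", annual testing over the 10 years before death (t = years to death).
t = np.tile(np.linspace(-10.0, 0.0, 11), 50)
knot = -4.0                                  # assumed known change point
slope1, slope2 = -0.05, -0.40                # gradual decline, then terminal decline
y = slope1 * t + (slope2 - slope1) * np.maximum(t - knot, 0.0) \
    + rng.normal(0.0, 0.1, t.size)

# Linear-spline design: intercept, t, and the hinge (t - knot)_+.
X = np.column_stack([np.ones_like(t), t, np.maximum(t - knot, 0.0)])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"pre-knot slope = {coef[1]:.3f}, post-knot slope = {coef[1] + coef[2]:.3f}")
```

The interpretability noted in the abstract is visible here: the fitted coefficients are directly the rates of decline before and after the change point.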

We introduce an algebraic concept of the frame for abstract conditional independence (CI) models, together with basic operations with respect to which such a frame should be closed: copying and marginalization. Three standard examples of such frames are (discrete) probabilistic CI structures, semi-graphoids and structural semi-graphoids. We concentrate on those frames which are closed under the operation of set-theoretical intersection because, for these, the respective families of CI models are lattices. This allows one to apply the results from lattice theory and formal concept analysis to describe such families in terms of implications among CI statements. The central concept of this paper is that of self-adhesivity defined in algebraic terms, which is a combinatorial reflection of the self-adhesivity concept studied earlier in the context of polymatroids and information theory. The generalization also leads to a self-adhesivity operator defined on the hyper-level of CI frames. We answer some of the questions related to this approach and raise other open questions. The core of the paper lies in its computations. The combinatorial approach to computation might overcome some memory and space limitations of software packages based on polyhedral geometry, in particular if SAT solvers are utilized. We characterize some basic CI families over 4 variables in terms of canonical implications among CI statements. We apply our method in an information-theoretical context to the task of entropic region demarcation over 5 variables.
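For a concrete instance of a probabilistic CI structure and a semi-graphoid property, the sketch below checks the contraction axiom on the CI statements of a Gaussian Markov chain (AR(1) covariance, chosen for illustration); for a Gaussian, A is independent of B given C exactly when the conditional cross-covariance vanishes:

```python
import numpy as np

# Gaussian Markov chain X0 - X1 - X2 - X3 with AR(1) covariance (illustrative).
rho = 0.6
n = 4
Sigma = rho ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))

def ci(A, B, C, tol=1e-8):
    """A _||_ B | C for the Gaussian: the conditional cross-covariance is zero."""
    cross = Sigma[np.ix_(A, B)]
    if C:
        cross = cross - Sigma[np.ix_(A, C)] @ np.linalg.solve(
            Sigma[np.ix_(C, C)], Sigma[np.ix_(C, B)])
    return bool(np.max(np.abs(cross)) < tol)

# Contraction: (A _||_ B | C u D) and (A _||_ D | C)  =>  (A _||_ B u D | C).
violations = 0
for a in range(n):
    for b in range(n):
        for d in range(n):
            if len({a, b, d}) < 3:
                continue
            rest = [r for r in range(n) if r not in (a, b, d)]
            for C in ([], rest):
                if ci([a], [b], C + [d]) and ci([a], [d], C):
                    if not ci([a], [b, d], C):
                        violations += 1
print("contraction violations:", violations)  # 0 for any probabilistic CI structure
```

Every probabilistic CI structure satisfies the semi-graphoid axioms, so no violations arise; the paper's frames axiomatize such families abstractly.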

In the context of interactive theorem provers based on a dependent type theory, automation tactics (dedicated decision procedures, calls to automated solvers, ...) are often limited to goals which lie exactly in some expected logical fragment. This very often prevents users from applying these tactics in other, even similar, contexts. This paper discusses the design and implementation of pre-processing operations for automating formal proofs in the Coq proof assistant. It presents the implementation of a wide variety of predictable, atomic goal transformations, which can be composed in various ways to target different backends. A gallery of examples illustrates how this helps to significantly expand the power of automation engines.

The scale function holds significant importance within the fluctuation theory of Lévy processes, particularly in addressing exit problems. However, it is defined through its Laplace transform and therefore lacks an explicit representation in general. This paper introduces a novel series representation for the scale function, employing Laguerre polynomials to construct a uniformly convergent approximating sequence. Additionally, we develop statistical inference based on discrete observations of the process, presenting estimators of the scale function that are asymptotically normal.
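To see the flavor of a Laguerre series approximation, one can use the one case with a simple closed form: for Brownian motion with drift mu and volatility sigma, the scale function is W(x) = (1 - e^{-2 mu x / sigma^2}) / mu. The sketch below expands this W in the Laguerre basis, which is orthonormal with respect to the weight e^{-x}; the coefficients are computed here by quadrature, whereas the paper derives the series analytically and estimates it from data:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import eval_laguerre

mu, sigma = 1.0, 1.0
W = lambda x: (1.0 - np.exp(-2.0 * mu * x / sigma**2)) / mu  # known scale function

# Laguerre coefficients c_k = int_0^inf W(x) L_k(x) e^{-x} dx
# (truncating the integral at 50, where the weight is negligible).
K = 25
c = [quad(lambda x, k=k: W(x) * eval_laguerre(k, x) * np.exp(-x),
          0.0, 50.0, limit=200)[0] for k in range(K)]

W_approx = lambda x: sum(ck * eval_laguerre(k, x) for k, ck in enumerate(c))
xs = np.linspace(0.0, 3.0, 50)
err = np.max(np.abs(W_approx(xs) - W(xs)))
print(f"max error on [0, 3] with {K} terms: {err:.2e}")
```

The statistical step of the paper replaces the exact coefficients by estimates built from discretely observed increments.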

Accurate triangulation of the domain plays a pivotal role in computing numerical approximations of differential operators. A good triangulation is one that helps reduce discretization errors. In a standard collocation technique, a smooth curved domain is typically triangulated by taking points on the boundary, so that the curved boundary is approximated by a polygon. However, such an approach often leads to geometrical errors that directly affect the accuracy of the numerical approximation. To limit such geometrical errors, \textit{isoparametric}, \textit{subparametric}, and \textit{iso-geometric} methods were introduced, which allow the approximation of curved surfaces (or curved line segments). In this paper, we present an efficient finite element method to approximate the solution of the elliptic boundary value problem (BVP) that governs the response of an elastic solid containing a v-notch and inclusions. The algebraically nonlinear constitutive equation, together with the balance of linear momentum, reduces to a second-order quasi-linear elliptic partial differential equation. Our approach represents complex curved boundaries by a smooth \textit{one-of-its-kind} point transformation. The main idea is to obtain higher-order shape functions which enable us to accurately compute the entries of the finite element matrices and vectors. A Picard-type linearization is used to handle the nonlinearities in the governing differential equation. The numerical results for the test cases show considerable improvement in accuracy.
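The Picard-type linearization can be illustrated on a model 1D quasi-linear problem: freeze the solution-dependent coefficient at the previous iterate, solve the resulting linear system, and repeat until the iterates stop changing. The equation, coefficient, and finite-difference discretization below are illustrative stand-ins for the paper's 2D finite element setting:

```python
import numpy as np

# Model 1D quasi-linear BVP: -(a(u) u')' = f on (0, 1), u(0) = u(1) = 0,
# with an illustrative solution-dependent coefficient a(u) = 1 + u^2.
n = 101
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
f = np.ones(n)
a = lambda u: 1.0 + u**2

u = np.zeros(n)                          # initial guess
for it in range(50):
    am = a(0.5 * (u[:-1] + u[1:]))       # coefficient frozen at the previous iterate
    A = np.zeros((n, n))
    b = np.zeros(n)
    for i in range(1, n - 1):
        A[i, i - 1] = -am[i - 1] / h**2
        A[i, i] = (am[i - 1] + am[i]) / h**2
        A[i, i + 1] = -am[i] / h**2
        b[i] = f[i]
    A[0, 0] = A[-1, -1] = 1.0            # homogeneous Dirichlet conditions
    u_new = np.linalg.solve(A, b)        # linear solve per Picard step
    diff = np.max(np.abs(u_new - u))
    u = u_new
    if diff < 1e-10:
        break

print(f"Picard converged in {it + 1} iterations, u(0.5) = {u[n // 2]:.5f}")
```

Because the nonlinearity here is mild, the fixed-point iteration contracts quickly; stiffer constitutive laws may require damping or a Newton scheme instead.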

This manuscript develops edge-averaged virtual element (EAVE) methodologies to address convection-diffusion problems effectively in the convection-dominated regime. It introduces a variant of EAVE that ensures monotonicity (producing an $M$-matrix) on Voronoi polygonal meshes, provided their duals are Delaunay triangulations with acute angles. Furthermore, the study outlines a comprehensive framework for EAVE methodologies, introducing another variant that integrates with the stiffness matrix derived from the lowest-order virtual element method for the Poisson equation. Numerical experiments confirm the theoretical advantages of the monotonicity property and demonstrate an optimal convergence rate across various mesh configurations.
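The monotonicity property can be made concrete with a much simpler scheme than EAVE: a matrix is a (nonsingular) M-matrix when its off-diagonal entries are nonpositive and its inverse is entrywise nonnegative, which yields a discrete maximum principle. The 1D finite-difference sketch below (not the virtual element method itself; all values illustrative) shows how central differencing of the convection term loses the M-matrix property at a high Péclet number while upwinding preserves it:

```python
import numpy as np

def is_m_matrix(A, tol=1e-12):
    """Nonsingular M-matrix test: Z-matrix (off-diagonal <= 0) with A^{-1} >= 0."""
    off = A - np.diag(np.diag(A))
    if np.any(off > tol):
        return False
    return bool(np.all(np.linalg.inv(A) >= -tol))

# Interior stencil for -eps u'' + b u' with convection-dominated parameters.
eps, b, h, m = 0.01, 1.0, 0.1, 9

def assemble(upwind):
    A = np.zeros((m, m))
    for i in range(m):
        if upwind:   # upwind convection: monotone, but only first-order accurate
            lo, di, hi = -eps/h**2 - b/h, 2*eps/h**2 + b/h, -eps/h**2
        else:        # central convection: second-order, loses monotonicity here
            lo, di, hi = -eps/h**2 - b/(2*h), 2*eps/h**2, -eps/h**2 + b/(2*h)
        A[i, i] = di
        if i > 0:
            A[i, i - 1] = lo
        if i < m - 1:
            A[i, i + 1] = hi
    return A

print(is_m_matrix(assemble(upwind=True)), is_m_matrix(assemble(upwind=False)))
```

Edge-averaged schemes are designed to retain this M-matrix structure without sacrificing accuracy, which is the property the manuscript establishes on suitable Voronoi meshes.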
