
We present a theoretical and computational framework based on fractional calculus for the analysis of the nonlocal static response of cylindrical shell panels. The differintegral nature of fractional derivatives permits an efficient and accurate treatment of the effect of long-range (nonlocal) interactions in curved structures. More specifically, the use of frame-invariant fractional-order kinematic relations enables a physically, mathematically, and thermodynamically consistent formulation of the nonlocal elastic interactions. In order to evaluate the response of these nonlocal shells under practical scenarios involving generalized loads and boundary conditions, the fractional Finite Element Method (f-FEM) is extended to incorporate shell elements based on the first-order shear-deformable displacement theory. Finally, numerical studies are performed exploring both the linear and the geometrically nonlinear static response of nonlocal cylindrical shell panels. This study is intended to provide a general foundation for investigating the nonlocal behavior of curved structures by means of fractional-order models.
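The abstract does not reproduce the kinematic relations; for orientation, frame-invariant fractional-order kinematics of this kind are typically built from symmetric combinations of left- and right-sided Caputo derivatives over truncated nonlocal horizons. A sketch of the basic building blocks, assuming the standard Caputo definitions with order $0 < \alpha < 1$:

$$ {}^{C}_{a}D^{\alpha}_{x}\,u(x) = \frac{1}{\Gamma(1-\alpha)} \int_{a}^{x} \frac{u'(s)}{(x-s)^{\alpha}}\,ds, \qquad {}^{C}_{x}D^{\alpha}_{b}\,u(x) = \frac{-1}{\Gamma(1-\alpha)} \int_{x}^{b} \frac{u'(s)}{(s-x)^{\alpha}}\,ds, $$

with a weighted symmetric combination of the two, taken over horizons $(x-l_A,\, x+l_B)$, recovering the classical local derivative (and hence the classical shell theory) in the limit $\alpha \to 1$.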

Related Content

Isogeometric analysis with the boundary element method (IGABEM) has recently gained interest. In this paper, the approximability of IGABEM for 3D acoustic scattering problems is investigated, and a new, improved BeTSSi submarine is presented as a benchmark example. Both Galerkin and collocation schemes are considered, in combination with several boundary integral equations (BIEs). In addition to the conventional BIE, regularized versions of it are considered, as are the hyper-singular BIE and the Burton--Miller formulation. A new adaptive integration routine is presented, and the numerical examples show the importance of the integration procedure in the boundary element method. The numerical examples also include comparisons between standard BEM and IGABEM, which again verify the higher accuracy obtained from the increased inter-element continuity of the spline basis functions. As one of the main objectives of this paper is the benchmarking of acoustic scattering problems, the method of manufactured solutions is used extensively in this regard.
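The abstract does not specify which manufactured solutions are employed; a common choice for exterior Helmholtz benchmarking is a point-source field placed inside the scatterer, which satisfies the PDE exactly in the exterior domain, so that its trace supplies boundary data for a problem with known solution. A minimal sketch of the verification step, in Python with SymPy:

```python
# Sketch: verifying a manufactured solution for the exterior Helmholtz
# problem. u(x) = exp(ikr)/r (a point source placed inside the scatterer)
# satisfies the Helmholtz equation exactly away from the source, so its
# trace on the scatterer surface gives boundary data with a known solution.
import sympy as sp

x, y, z, k = sp.symbols('x y z k', real=True, positive=True)
r = sp.sqrt(x**2 + y**2 + z**2)
u = sp.exp(sp.I * k * r) / r

laplacian = sum(sp.diff(u, v, 2) for v in (x, y, z))
residual = sp.simplify(laplacian + k**2 * u)
print(residual)  # 0: the Helmholtz equation holds for r > 0
```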

Given its status as a classic problem and its importance to both theoreticians and practitioners, edit distance provides an excellent lens through which to understand how the theoretical analysis of algorithms impacts practical implementations. From an applied perspective, the goals of theoretical analysis are to predict the empirical performance of an algorithm and to serve as a yardstick for designing novel algorithms that perform well in practice. In this paper, we systematically survey the types of theoretical analysis techniques that have been applied to edit distance and evaluate the extent to which each one has achieved these two goals. These techniques include traditional worst-case analysis, worst-case analysis parametrized by edit distance, entropy, or compressibility, average-case analysis, semi-random models, and advice-based models. We find that the track record is mixed. On the one hand, two algorithms widely used in practice were born of theoretical analysis, and their empirical performance is captured well by theoretical predictions. On the other hand, none of the algorithms since developed using theoretical analysis as a yardstick has had practical relevance. We conclude by discussing the remaining open problems and how they can be tackled.
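For readers outside the area, the baseline against which all of the surveyed analyses are measured is the textbook $O(nm)$ dynamic program (Wagner--Fischer). A minimal sketch with the standard one-row space optimization:

```python
# The classical Wagner-Fischer dynamic program: O(nm) time, the worst-case
# baseline the surveyed parametrized and average-case analyses improve on.
# Space is reduced to O(min(n, m)) by keeping only one row of the table.
def edit_distance(a: str, b: str) -> int:
    if len(a) < len(b):
        a, b = b, a
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution or match
        prev = curr
    return prev[-1]

assert edit_distance("kitten", "sitting") == 3
```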

Isogeometric analysis (IGA) has proven to be an improvement on the classical finite element method (FEM) in several fields, including structural mechanics and fluid dynamics. In this paper, the performance of IGA coupled with the infinite element method (IEM) is investigated for some acoustic scattering problems. In particular, we consider the simple problem of acoustic scattering by a rigid sphere, and the scattering of acoustic waves by an elastic spherical shell with fluid domains both inside and outside, representing a full acoustic-structure interaction (ASI) problem. Finally, a mock shell and a simplified submarine benchmark are investigated. The numerical examples include comparisons between IGA and the FEM. Our main finding is that IGA significantly increases the accuracy over $C^0$ FEM due to the increased inter-element continuity of the spline basis functions.
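The rigid-sphere problem is a useful benchmark precisely because it has a classical separation-of-variables solution. A sketch of the reference series, assuming plane-wave incidence $e^{ikz}$ and the $e^{-i\omega t}$ time convention (normalizations may differ from the paper's):

```python
# Sketch of the separation-of-variables reference solution for plane-wave
# scattering by a rigid (sound-hard) sphere of radius a -- the analytic
# benchmark that IGA/FEM results are typically compared against.
import numpy as np
from scipy.special import spherical_jn, spherical_yn, eval_legendre

def scattered_pressure(k, a, r, theta, n_terms=60):
    ka, kr = k * a, k * r
    total = 0j
    for n in range(n_terms):
        # h_n^{(1)} = j_n + i y_n and its derivative
        djn_a = spherical_jn(n, ka, derivative=True)
        dhn_a = djn_a + 1j * spherical_yn(n, ka, derivative=True)
        hn_r = spherical_jn(n, kr) + 1j * spherical_yn(n, kr)
        cn = -djn_a / dhn_a  # rigid sphere: du/dr = 0 on r = a
        total += (2 * n + 1) * 1j**n * cn * hn_r * eval_legendre(n, np.cos(theta))
    return total

# e.g. backscattered pressure at r = 5a for ka = 1:
print(scattered_pressure(k=1.0, a=1.0, r=5.0, theta=np.pi))
```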

This work presents a numerical formulation to model isotropic viscoelastic material behavior for membranes and thin shells. The surface and shell theories are formulated within a curvilinear coordinate system, which allows the representation of general surfaces and deformations. The kinematics follow from Kirchhoff-Love theory and the discretization makes use of isogeometric shape functions. A multiplicative split of the surface deformation gradient is employed, such that an intermediate surface configuration is introduced. The surface metric and curvature of this intermediate configuration follow from the solution of nonlinear evolution laws - ordinary differential equations (ODEs) - that stem from a generalized viscoelastic solid model. The evolution laws are integrated numerically with the implicit Euler scheme and linearized within the Newton-Raphson scheme of the nonlinear finite element framework. The implementation of surface and bending viscosity is verified with the help of analytical solutions and shows ideal convergence behavior. The chosen numerical examples capture large deformations and typical viscoelastic behavior, such as creep, relaxation, and strain-rate dependence. It is shown that the proposed formulation can also be straightforwardly applied to model boundary viscoelasticity of 3D bodies.
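The time-integration pattern can be illustrated on a scalar analogue (the paper's evolution laws are tensorial and defined on the surface; the parameters below are made up and only show the implicit-Euler structure):

```python
# Minimal sketch of the implicit-Euler update for a scalar generalized
# Maxwell model -- the same pattern is applied, per surface-tensor
# component, to the evolution laws of the intermediate configuration.
# E_inf, E1, tau are illustrative material parameters, not from the paper.
E_inf, E1, tau = 1.0, 2.0, 0.5   # long-term modulus, branch modulus, relaxation time
dt, t_end = 0.01, 3.0

eps = 1.0     # step strain held constant: a stress-relaxation test
eps_v = 0.0   # internal (viscous) strain, evolving by eps_v' = (eps - eps_v)/tau
for step in range(int(t_end / dt)):
    # implicit Euler is unconditionally stable for this linear evolution law
    eps_v = (eps_v + dt / tau * eps) / (1.0 + dt / tau)

sigma = E_inf * eps + E1 * (eps - eps_v)
print(sigma)  # relaxes from E_inf + E1 = 3.0 toward E_inf = 1.0
```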

We introduce and analyze various Regularized Combined Field Integral Equation (CFIER) formulations of the time-harmonic Navier equations in media with piecewise-constant material properties. These formulations can be derived systematically starting from suitable coercive approximations of Dirichlet-to-Neumann (DtN) operators, and we present a periodic pseudodifferential calculus framework within which the well-posedness of CFIER formulations can be established. We also use the DtN approximations to derive and analyze Optimized Schwarz (OS) methods for the solution of elastodynamic transmission problems. The pseudodifferential calculus we develop in this paper relies on careful singularity splittings of the kernels of the Navier boundary integral operators, which are also the basis of high-order Nyström quadratures for their discretization. Based on these high-order discretizations, we investigate the rate of convergence of iterative solvers applied to CFIER and OS formulations of scattering and transmission problems. We present a variety of numerical results illustrating that the CFIER methodology leads to important computational savings over the classical CFIE one whenever iterative solvers are used for the solution of the ensuing discretized boundary integral equations. Finally, we show that the OS methods are competitive in the high-frequency, high-contrast regime.
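The formulations themselves are not reproduced in the abstract. For orientation, in the simpler acoustic (Helmholtz) setting a regularized combined-field ansatz of this family takes the following form, with single- and double-layer potentials $S_k$, $D_k$ and a regularizing operator $R$ approximating the DtN map (signs depend on conventions):

$$ u = D_k[\varphi] + S_k[R\,\varphi] \quad\Longrightarrow\quad \Big(\tfrac{1}{2}I + K_k + S_k R\Big)\varphi = -u^{\mathrm{inc}}\big|_{\Gamma}, $$

where $K_k$ and $S_k$ on the right denote the associated boundary operators. Choosing $R$ as a coercive DtN approximation is what delivers well-posedness and favorable spectral properties; the elastodynamic CFIER formulations in the paper follow the same pattern with Navier-equation potentials.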

Tokenization is an important text preprocessing step that prepares input tokens for deep language models. WordPiece and BPE are the de facto methods employed by prominent models such as BERT and GPT. However, the impact of tokenization can differ for morphologically rich languages, such as the Turkic languages, where many words can be generated by adding prefixes and suffixes. We compare five tokenizers at different granularity levels, i.e., their outputs range from the smallest pieces of characters to the surface forms of words, including a Morphological-level tokenizer. We train these tokenizers and pretrain medium-sized language models using the RoBERTa pretraining procedure on the Turkish split of the OSCAR corpus. We then fine-tune our models on six downstream tasks. Our experiments, supported by statistical tests, reveal that the Morphological-level tokenizer performs competitively with the de facto tokenizers. Furthermore, we find that increasing the vocabulary size improves the performance of the Morphological- and Word-level tokenizers more than that of the de facto tokenizers. The ratio of the number of vocabulary parameters to the total number of model parameters can be empirically chosen as 20% for de facto tokenizers and 40% for the other tokenizers to obtain a reasonable trade-off between model size and performance.
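The final trade-off can be sanity-checked with simple parameter counting. In the sketch below, the hidden size and layer count are illustrative RoBERTa-base-like values, not figures from the paper:

```python
# Sanity-checking the vocabulary-parameter ratios by simple counting.
# Hidden size and layer count are illustrative, not taken from the paper.
def vocab_param_ratio(vocab_size: int, hidden: int, layers: int) -> float:
    embed = vocab_size * hidden        # token-embedding parameters
    body = layers * 12 * hidden ** 2   # rough per-layer attention + FFN cost
    return embed / (embed + body)

for v in (32_000, 64_000):
    print(v, f"{vocab_param_ratio(v, hidden=768, layers=12):.0%}")
# -> roughly 22% and 37%: near the ~20% and ~40% regimes suggested for
#    the de facto and the Morphological/Word-level tokenizers, respectively
```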

We provide a decision-theoretic analysis of bandit experiments. The setting corresponds to a dynamic programming problem, but solving this directly is typically infeasible. Working within the framework of diffusion asymptotics, we define suitable notions of asymptotic Bayes and minimax risk for bandit experiments. For normally distributed rewards, the minimal Bayes risk can be characterized as the solution to a nonlinear second-order partial differential equation (PDE). Using a limit of experiments approach, we show that this PDE characterization also holds asymptotically under both parametric and non-parametric distributions of the rewards. The approach further identifies the state variables to which it is asymptotically sufficient to restrict attention, and therefore suggests a practical strategy for dimension reduction. The upshot is that we can approximate the dynamic programming problem defining the bandit experiment by a PDE that can be efficiently solved using sparse matrix routines. We derive the optimal Bayes and minimax policies from the numerical solutions to these equations. The proposed policies substantially dominate existing methods such as Thompson sampling. The framework also allows for substantial generalizations to the bandit problem such as time discounting and pure exploration motives.
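The PDE itself is not stated in the abstract, so the sketch below illustrates only the computational claim: an implicit finite-difference scheme for a generic second-order value-function equation $v_t + \tfrac{1}{2}\sigma^2 v_{xx} = 0$, marched backward from the horizon with SciPy's sparse solvers (the equation and terminal payoff are stand-ins, not the paper's):

```python
# Illustrative only: a backward implicit finite-difference step for a
# generic second-order value-function PDE, using sparse matrix routines
# of the kind the abstract alludes to.
import numpy as np
from scipy.sparse import diags, identity
from scipy.sparse.linalg import spsolve

nx, nt = 400, 200
x = np.linspace(-4.0, 4.0, nx)
dx, dt, sigma = x[1] - x[0], 1.0 / nt, 1.0

# second-difference operator (homogeneous values assumed outside the grid)
L = diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(nx, nx)) / dx**2
A = (identity(nx) - 0.5 * sigma**2 * dt * L).tocsc()

v = np.maximum(x, 0.0)   # terminal value (illustrative payoff)
for _ in range(nt):      # march backward from the horizon
    v = spsolve(A, v)
print(v[nx // 2])
```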

Dynamic Linear Models (DLMs) are commonly employed for time series analysis due to their versatile structure, simple recursive updating, ability to handle missing data, and probabilistic forecasting. However, the options for count time series are limited: Gaussian DLMs require continuous data, while Poisson-based alternatives often lack sufficient modeling flexibility. We introduce a novel semiparametric methodology for count time series by warping a Gaussian DLM. The warping function has two components: a (nonparametric) transformation operator that provides distributional flexibility and a rounding operator that ensures the correct support for the discrete data-generating process. We develop conjugate inference for the warped DLM, which enables analytic and recursive updates for the state space filtering and smoothing distributions. We leverage these results to produce customized and efficient algorithms for inference and forecasting, including Monte Carlo simulation for offline analysis and an optimal particle filter for online inference. This framework unifies and extends a variety of discrete time series models and is valid for natural counts, rounded values, and multivariate observations. Simulation studies illustrate the excellent forecasting capabilities of the warped DLM. The proposed approach is applied to a multivariate time series of daily overdose counts and demonstrates both modeling and computational successes.
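Schematically, and suppressing the state-space dynamics, the two-component warping can be written as follows, where $z_t$ is the latent Gaussian DLM observation and $g$ is the (nonparametric) monotone transformation (a sketch of the construction, not the paper's exact notation):

$$ y_t = j \quad\Longleftrightarrow\quad g(j) \le z_t < g(j+1), \qquad j \in \{0, 1, 2, \dots\}, $$

so that the rounding operator guarantees the correct discrete support while the latent $z_t$ retains the Gaussian DLM's recursive filtering and smoothing.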

It is shown that if two sets of indicators separately load on two distinct factors that are independent of one another conditional on the past, and at least one of the factors causally affects the other, then in many settings the process will converge to a factor model in which a single factor suffices to capture the covariance structure among the indicators. Factor analysis with one wave of data therefore cannot distinguish between factor models with a single factor and those with two factors that are causally related. Unless causal relations between factors can be ruled out a priori, alleged empirical evidence from one-wave factor analysis for a single factor still leaves open the possibilities of a single factor or of two factors that causally affect one another. The implications for interpreting the factor structure of psychological scales, such as self-report scales for anxiety and depression, or for happiness and purpose, are discussed. The results are further illustrated through simulations to gain insight into their practical implications in more realistic settings prior to the convergence of the processes. Some further generalizations to an arbitrary number of underlying factors are noted.
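An illustrative simulation of the mechanism, in the spirit of (but not reproducing) the paper's: the first factor causally drives the second, and after burn-in the cross-sectional covariance of all six indicators is close to a single-factor structure (all parameters below are mine):

```python
# Illustrative simulation: factor 1 causally drives factor 2, so the
# one-wave covariance of the six indicators looks close to single-factor.
import numpy as np

rng = np.random.default_rng(0)
T, burn = 20_000, 1_000
eta1, eta2 = np.zeros(T), np.zeros(T)
for t in range(1, T):
    eta1[t] = 0.8 * eta1[t - 1] + rng.normal(scale=1.0)
    eta2[t] = 0.1 * eta2[t - 1] + 0.9 * eta1[t - 1] + rng.normal(scale=0.1)

# three indicators per factor, with small measurement error
lam = np.array([1.0, 0.8, 0.6])
X = np.column_stack([np.outer(eta1, lam), np.outer(eta2, lam)])[burn:]
X += rng.normal(scale=0.1, size=X.shape)

evals = np.linalg.eigvalsh(np.cov(X.T))[::-1]
print(evals / evals.sum())  # the leading eigenvalue dominates, so one-wave
                            # factor analysis would favor a single factor
```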

This paper focuses on the expected difference in borrowers' repayment when there is a change in the lender's credit decisions. Classical estimators overlook the confounding effects, and hence the estimation error can be substantial. As such, we propose an alternative approach to constructing the estimators such that the error can be greatly reduced. The proposed estimators are shown to be unbiased, consistent, and robust through a combination of theoretical analysis and numerical testing. Moreover, we compare the power of the classical and the proposed estimators in estimating the causal quantities. The comparison is tested across a wide range of models, including linear regression, tree-based, and neural-network-based models, under simulated datasets that exhibit different levels of causality, degrees of nonlinearity, and distributional properties. Most importantly, we apply our approach to a large observational dataset provided by a global technology firm that operates in both the e-commerce and the lending business. We find that the relative reduction in estimation error is striking when the causal effects are accounted for correctly.
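The abstract does not specify the proposed estimators; the sketch below instead shows the standard inverse-propensity-weighting (IPW) correction for the same confounding problem, to make the gap between a naive comparison and a causally adjusted one concrete (the data-generating process is invented for illustration):

```python
# Standard IPW correction for a confounded binary credit decision.
# Not the paper's estimator -- an invented DGP illustrating the bias
# that confounding induces and how weighting removes it.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
s = rng.normal(size=n)                         # confounder: credit score
p = 1 / (1 + np.exp(-(-1.0 + 2.0 * s)))        # lender approves high-score borrowers
p = np.clip(p, 1e-3, 1 - 1e-3)
d = rng.binomial(1, p)                         # credit decision
y = 1.0 * d + 2.0 * s + rng.normal(size=n)     # repayment; true effect of d is 1.0

naive = y[d == 1].mean() - y[d == 0].mean()    # confounded: biased upward
ipw = np.mean(d * y / p) - np.mean((1 - d) * y / (1 - p))
print(f"naive: {naive:.2f}, IPW (true propensities): {ipw:.2f}, truth: 1.00")
```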
