
We study flow around a cylinder from a dynamics perspective, using drag and lift as indicators. We observe that the mean drag coefficient bifurcates from the steady case when the Kármán vortex street emerges. We also find a jump in the dimension of the drag/lift attractor just above Reynolds number 100. We compare the simulated drag values with experimental data obtained over the last hundred years. Our simulations suggest that a vibrational resonance of the cylinder would be unlikely for Reynolds numbers greater than 1000, where the drag/lift behavior is fully chaotic.


This paper explores an iterative coupling approach for solving linear thermo-poroelasticity problems, using a finite element discretization as the high-fidelity model in the training of projection-based reduced order models. One of the main challenges in coupled multi-physics problems is their complexity and computational expense. In this study, we introduce a decoupled iterative solution approach, integrated with reduced order modeling, aimed at improving the efficiency of the computational algorithm. The iterative coupling technique we employ builds upon the established fixed-stress splitting scheme that has been extensively investigated for Biot's poroelasticity. Leveraging solutions derived from this coupled iterative scheme, the reduced order model applies an additional Galerkin projection onto a reduced basis space spanned by a small number of modes obtained through proper orthogonal decomposition. The effectiveness of the proposed algorithm is demonstrated through numerical experiments, showcasing its computational efficiency.
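The POD-Galerkin step described above can be sketched in a few lines. The toy example below builds a reduced basis from the left singular vectors of a snapshot matrix and projects a linear full-order system onto it; the synthetic snapshots, the diagonal operator, and the energy-based truncation are all illustrative stand-ins, not the paper's thermo-poroelastic discretization or its fixed-stress iterates.

```python
import numpy as np

rng = np.random.default_rng(0)
n_dof, n_snap = 200, 30

# Synthetic snapshot matrix S with rapidly decaying singular values,
# standing in for stored solutions of the full-order (FEM) problem.
Q = np.linalg.qr(rng.standard_normal((n_dof, n_snap)))[0]
S = Q @ np.diag(2.0 ** -np.arange(n_snap)) @ rng.standard_normal((n_snap, n_snap))

# POD: left singular vectors of S, truncated so that the retained modes
# capture (almost) all of the snapshot energy.
U, sv, _ = np.linalg.svd(S, full_matrices=False)
energy = np.cumsum(sv**2) / np.sum(sv**2)
r = int(np.searchsorted(energy, 1.0 - 1e-8)) + 1
V = U[:, :r]                          # reduced basis, n_dof x r

# Galerkin projection of a (stand-in) full-order linear system A x = b.
A = np.diag(np.linspace(1.0, 10.0, n_dof))
b = rng.standard_normal(n_dof)
A_r = V.T @ A @ V                     # small r x r reduced operator
b_r = V.T @ b
x_r = V @ np.linalg.solve(A_r, b_r)   # reduced-order approximation of x
print(r, A_r.shape)
```

The expensive offline work (snapshot generation, SVD) is done once; online, only the small r-by-r system is solved.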

Far-field speech recognition is a challenging task that conventionally uses signal-processing beamforming to combat noise and interference. However, performance is usually limited by the heavy reliance on assumptions about the acoustic environment. In this paper, we propose a unified multichannel far-field speech recognition system that combines neural beamforming with a transformer-based Listen, Attend and Spell (LAS) recognizer, extending the end-to-end speech recognition system to include speech enhancement. The framework is then jointly trained to optimize the final objective of interest. Specifically, factored complex linear projection (fCLP) is adopted to form the neural beamforming front end. Several pooling strategies for combining look directions are compared in order to find the optimal approach. Moreover, information about the source direction is also integrated into the beamforming to explore its usefulness as a prior, which is often available, especially in multi-modality scenarios. Experiments on different microphone array geometries are conducted to evaluate robustness against variations in microphone spacing. Large in-house databases are used to evaluate the effectiveness of the proposed framework, and the proposed method achieves a 19.26% improvement when compared with a strong baseline.
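To make the front end concrete, the sketch below applies per-channel, per-frequency complex filters to a multichannel STFT (filter-and-sum in the frequency domain, in the spirit of complex linear projection) and then pools over look directions. The shapes, the log-power features, and the max-pooling choice are illustrative assumptions, not the paper's exact fCLP configuration; in the trained system the filters would be learned jointly with the recognizer.

```python
import numpy as np

rng = np.random.default_rng(1)
n_ch, n_freq, n_frames, n_dirs = 4, 128, 50, 3

# Multichannel STFT of the input signal: channels x freq x time.
X = (rng.standard_normal((n_ch, n_freq, n_frames))
     + 1j * rng.standard_normal((n_ch, n_freq, n_frames)))

# One complex filter per look direction: dirs x channels x freq.
W = (rng.standard_normal((n_dirs, n_ch, n_freq))
     + 1j * rng.standard_normal((n_dirs, n_ch, n_freq)))

# Filter-and-sum per direction: Y[d, f, t] = sum_c conj(W[d, c, f]) X[c, f, t].
Y = np.einsum('dcf,cft->dft', np.conj(W), X)

# Log-compressed power features, then max-pooling across look directions;
# the pooled features would be fed to the ASR encoder.
feats = np.log1p(np.abs(Y) ** 2)      # dirs x freq x time
pooled = feats.max(axis=0)            # freq x time
print(pooled.shape)
```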

The Levin method is a well-known technique for evaluating oscillatory integrals, which operates by solving a certain ordinary differential equation in order to construct an antiderivative of the integrand. It was long believed that this approach suffers from "low-frequency breakdown," meaning that the accuracy of the calculated value of the integral deteriorates when the integrand is only slowly oscillating. Recently presented experimental evidence, however, suggests that if a Chebyshev spectral method is used to discretize the differential equation and the resulting linear system is solved via a truncated singular value decomposition, then no low-frequency breakdown occurs. Here, we provide a proof that this is the case, and our proof applies not only when the integrand is slowly oscillating, but even in the case of stationary points. Our result puts adaptive schemes based on the Levin method on a firm theoretical foundation and accounts for their behavior in the presence of stationary points. We go on to point out that by combining an adaptive Levin scheme with phase function methods for ordinary differential equations, a large class of oscillatory integrals involving special functions, including products of such functions and the compositions of such functions with slowly-varying functions, can be easily evaluated without the need for symbolic computations. Finally, we present the results of numerical experiments which illustrate the consequences of our analysis and demonstrate the properties of the adaptive Levin method.
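The method itself is short enough to sketch. The toy implementation below discretizes the Levin ODE p' + iωg'p = f with a Chebyshev differentiation matrix and solves the collocation system by a truncated SVD, exactly the combination whose robustness is at issue; the node count and truncation threshold are illustrative choices. It is checked against a case with linear phase, where the integral has a closed form.

```python
import numpy as np

def cheb(n):
    """Chebyshev differentiation matrix and nodes on [-1, 1] (the classic
    construction from Trefethen's "Spectral Methods in MATLAB", ch. 6);
    x runs from 1 down to -1."""
    x = np.cos(np.pi * np.arange(n + 1) / n)
    c = np.hstack([2.0, np.ones(n - 1), 2.0]) * (-1.0) ** np.arange(n + 1)
    dX = x[:, None] - x[None, :]
    D = np.outer(c, 1.0 / c) / (dX + np.eye(n + 1))
    return D - np.diag(D.sum(axis=1)), x

def levin(f, g, gp, omega, n=32):
    """Evaluate int_{-1}^{1} f(x) exp(i*omega*g(x)) dx by the Levin method:
    solve p' + i*omega*g'*p = f by Chebyshev collocation, with a
    truncated-SVD solve, then evaluate the antiderivative
    p(x) exp(i*omega*g(x)) at the endpoints."""
    D, x = cheb(n)
    A = D + 1j * omega * np.diag(gp(x))
    U, s, Vh = np.linalg.svd(A)
    s_inv = np.where(s > 1e-12 * s[0], 1.0 / s, 0.0)   # TSVD: discard tiny
    p = Vh.conj().T @ (s_inv * (U.conj().T @ f(x)))    # singular values
    w = np.exp(1j * omega * g(x))
    return p[0] * w[0] - p[-1] * w[-1]

# Linear phase g(x) = x: the exact value is sin(w+1)/(w+1) + sin(w-1)/(w-1).
for omega in (0.1, 50.0):
    I = levin(np.cos, lambda t: t, lambda t: np.ones_like(t), omega)
    ref = np.sin(omega + 1) / (omega + 1) + np.sin(omega - 1) / (omega - 1)
    print(omega, abs(I - ref))
```

Note that the endpoint formula is exact for any solution p of the ODE, which is why no boundary conditions are imposed on the collocation system.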

Gradient-enhanced Kriging (GE-Kriging) is a well-established surrogate modelling technique for approximating expensive computational models. However, it tends to become impractical for high-dimensional problems due to the size of the correlation matrix and the associated high-dimensional hyper-parameter tuning problem. To address these issues, a new method, called sliced GE-Kriging (SGE-Kriging), is developed in this paper to reduce both the size of the correlation matrix and the number of hyper-parameters. We first split the training sample set into multiple slices and invoke Bayes' theorem to approximate the full likelihood function by a sliced likelihood function, in which several small correlation matrices describe the correlation of the sample set instead of one large matrix. Then, we replace the original high-dimensional hyper-parameter tuning problem with a low-dimensional counterpart by learning the relationship between the hyper-parameters and derivative-based global sensitivity indices. The performance of SGE-Kriging is finally validated by numerical experiments with several benchmarks and a high-dimensional aerodynamic modeling problem. The results show that SGE-Kriging achieves accuracy and robustness comparable to the standard method at much lower training cost. The benefits are most evident for high-dimensional problems with tens of variables.
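The sliced-likelihood idea can be illustrated with plain Kriging: instead of one dense N-by-N correlation matrix, the training set is split into slices and the Gaussian log-likelihood is approximated by a sum of per-slice terms, each involving only a small matrix. The squared-exponential kernel, the even slicing, and the jitter below are simplifying assumptions; the actual SGE-Kriging scheme additionally incorporates gradient information and the sensitivity-based hyper-parameter reduction.

```python
import numpy as np

def corr(Xa, Xb, theta):
    """Squared-exponential correlation with per-dimension scales theta."""
    d2 = ((Xa[:, None, :] - Xb[None, :, :]) ** 2 * theta).sum(-1)
    return np.exp(-d2)

def sliced_neg_log_lik(X, y, theta, n_slices):
    """Approximate negative log-likelihood as a sum over slices, so only
    small m x m correlation matrices are ever factorized."""
    nll = 0.0
    for Xs, ys in zip(np.array_split(X, n_slices), np.array_split(y, n_slices)):
        R = corr(Xs, Xs, theta) + 1e-10 * np.eye(len(ys))  # small slice matrix
        L = np.linalg.cholesky(R)
        alpha = np.linalg.solve(L, ys)
        nll += 0.5 * (alpha @ alpha) + np.log(np.diag(L)).sum()
    return nll

rng = np.random.default_rng(2)
X = rng.uniform(size=(200, 5))
y = np.sin(X.sum(axis=1))
theta = np.full(5, 1.0)
print(sliced_neg_log_lik(X, y, theta, n_slices=10))
```

With k slices of m = N/k points each, the factorization cost drops from O(N^3) to O(k m^3) = O(N m^2) per likelihood evaluation.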

The multigrid V-cycle method is a popular method for solving systems of linear equations. It computes an approximate solution by smoothing on the fine levels and solving a system of linear equations on the coarsest level. The choice of coarsest-level solver depends on the size and difficulty of the problem. If the size permits, it is typical to use a direct method based on LU or Cholesky decomposition. In settings with large coarsest-level problems, approximate solvers, such as iterative Krylov subspace methods or direct methods based on low-rank approximation, are often used instead. The accuracy of the coarsest-level solver is typically chosen based on the users' experience with the particular problems and methods. In this paper we present an approach to analyzing the effects of approximate coarsest-level solves on the convergence of the V-cycle method for symmetric positive definite problems. Using these results, we derive a coarsest-level stopping criterion through which we may control the difference between the approximation computed by a V-cycle method with an approximate coarsest-level solver and the approximation that would be computed if the coarsest-level problems were solved exactly. The coarsest-level stopping criterion may thus be set up such that the V-cycle method converges to a chosen finest-level accuracy in (nearly) the same number of V-cycle iterations as the V-cycle method with an exact coarsest-level solver. We also utilize the theoretical results to discuss how the convergence of the V-cycle method may be affected by the choice of tolerance in a coarsest-level stopping criterion based on the relative residual norm.
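The setting can be made concrete with a toy two-grid cycle for the 1D Poisson problem, in which the coarse level is solved inexactly by CG up to a prescribed relative-residual tolerance. The weighted-Jacobi smoother, the grid sizes, and the tolerance value are illustrative choices, not the paper's analysis; the sketch only shows where a coarsest-level stopping criterion enters the cycle.

```python
import numpy as np

def poisson(n):
    """Tridiagonal 1D Laplacian on n interior points."""
    return 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

def cg(A, b, tol):
    """Plain CG, stopped at a relative residual tolerance."""
    x = np.zeros_like(b); r = b.copy(); p = r.copy()
    rr = r @ r; nb = np.linalg.norm(b)
    while np.sqrt(rr) > tol * nb:
        Ap = A @ p; a = rr / (p @ Ap)
        x += a * p; r -= a * Ap
        rr_new = r @ r; p = r + (rr_new / rr) * p; rr = rr_new
    return x

def two_grid(A, b, x, P, Ac, coarse_tol, nu=2, om=2/3):
    Dinv = 1.0 / np.diag(A)
    for _ in range(nu):                      # pre-smoothing (weighted Jacobi)
        x = x + om * Dinv * (b - A @ x)
    rc = P.T @ (b - A @ x)                   # restrict the residual
    x = x + P @ cg(Ac, rc, coarse_tol)       # inexact coarse-level solve
    for _ in range(nu):                      # post-smoothing
        x = x + om * Dinv * (b - A @ x)
    return x

nf, nc = 31, 15
A = poisson(nf)
P = np.zeros((nf, nc))                       # linear interpolation
for j in range(nc):
    P[2*j:2*j+3, j] = [0.5, 1.0, 0.5]
Ac = P.T @ A @ P                             # Galerkin coarse operator
b = np.ones(nf); x = np.zeros(nf)
for _ in range(20):
    x = two_grid(A, b, x, P, Ac, coarse_tol=1e-2)
print(np.linalg.norm(b - A @ x) / np.linalg.norm(b))
```

Even with the loose coarse tolerance of 1e-2, the cycle converges, because the coarse-solve error is proportional to the current residual; choosing that tolerance systematically is precisely what the paper's criterion addresses.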

In this work, we present a physics-informed neural network (PINN) model applied to dynamic problems in solid mechanics. We focus on both forward and inverse problems; in particular, we show how a PINN model can be used efficiently for material identification in a dynamic setting. Throughout, we assume linear continuum elasticity. We first show results for a two-dimensional (2D) plane strain problem and then apply the same techniques to a three-dimensional (3D) problem. For training data, we use solutions computed with the finite element method. We rigorously show that PINN models are accurate, robust and computationally efficient, especially as surrogate models for material identification. We also employ state-of-the-art techniques from the PINN literature that improve upon the vanilla PINN implementation. Based on our results, we believe that the framework we have developed can be readily adapted to computational platforms for solving multiple dynamic problems in solid mechanics.
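For readers unfamiliar with the approach, the standard PINN formulation for linear elastodynamics minimizes a composite loss; the generic form below is a sketch of that standard setup (symbols and weighting are the usual conventions, not necessarily this paper's exact choices). For material identification, the Lamé parameters are simply added to the trainable variables.

```latex
% Composite PINN loss: data misfit plus PDE (momentum balance) residual,
% with u_theta the network displacement field and lambda a penalty weight.
\mathcal{L}(\theta)
  = \frac{1}{N_d}\sum_{i=1}^{N_d}
      \bigl\| u_\theta(x_i,t_i) - u_i \bigr\|^2
  + \lambda\, \frac{1}{N_r}\sum_{j=1}^{N_r}
      \bigl\| \rho\,\partial_{tt} u_\theta
              - \nabla\!\cdot\sigma(u_\theta) - f \bigr\|^2_{(x_j,t_j)},
\qquad
\sigma(u) = \lambda_e (\nabla\!\cdot u)\, I
          + \mu \bigl( \nabla u + \nabla u^{\top} \bigr),
% Inverse problem: treat the material parameters (lambda_e, mu) as
% trainable alongside theta and minimize the same loss.
```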

In the analysis of cluster randomized trials, two typical features are that individuals within a cluster are correlated and that the total number of clusters can sometimes be limited. While model-robust treatment effect estimators have been recently developed, their asymptotic theory requires the number of clusters to approach infinity, and one often has to empirically assess the applicability of those methods in finite samples. To address this challenge, we propose a conformal causal inference framework that achieves the target coverage probability of treatment effects in finite samples without the need for asymptotic approximations. Meanwhile, we prove that this framework is compatible with arbitrary working models, including machine learning algorithms leveraging baseline covariates, possesses robustness against arbitrary misspecification of working models, and accommodates a variety of within-cluster correlations. Under this framework, we offer efficient algorithms to make inferences on treatment effects at both the cluster and individual levels, applicable to user-specified covariate subgroups and two types of test data. Finally, we demonstrate our methods via simulations and a real data application based on a cluster randomized trial for treating chronic pain.
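The finite-sample guarantee rests on the split-conformal construction, which is easy to sketch in its generic regression form: calibrate a residual quantile on held-out data, then widen any working model's predictions by that quantile. The toy below deliberately uses a misspecified (constant-mean) working model to show that coverage survives; it does not capture the paper's treatment-effect targets or within-cluster correlation handling.

```python
import numpy as np

def split_conformal(fit, X_tr, y_tr, X_cal, y_cal, X_test, alpha=0.1):
    """Split-conformal interval: coverage >= 1 - alpha in finite samples,
    for ANY working model, as long as calibration/test data are
    exchangeable."""
    model = fit(X_tr, y_tr)                     # arbitrary working model
    scores = np.abs(y_cal - model(X_cal))       # calibration residuals
    n = len(scores)
    k = int(np.ceil((n + 1) * (1 - alpha)))     # finite-sample quantile rank
    q = np.sort(scores)[min(k, n) - 1]
    pred = model(X_test)
    return pred - q, pred + q

rng = np.random.default_rng(3)

def fit(X, y):                                  # misspecified on purpose:
    mu = y.mean()                               # ignores the covariates
    return lambda Xn: np.full(len(Xn), mu)

X = rng.normal(size=(3000, 2))
y = X[:, 0] + rng.normal(size=3000)
lo, hi = split_conformal(fit, X[:1000], y[:1000],
                         X[1000:2000], y[1000:2000], X[2000:])
coverage = np.mean((y[2000:] >= lo) & (y[2000:] <= hi))
print(coverage)   # close to the nominal 0.9 despite the bad model
```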

In many jurisdictions, forensic evidence is presented in the form of categorical statements by forensic experts. Several large-scale performance studies have been performed that report error rates to elucidate the uncertainty associated with such categorical statements. There is growing scientific consensus that the likelihood ratio (LR) framework is the logically correct form of presentation for forensic evidence evaluation. Yet, results from the large-scale performance studies have not been cast in this framework. Here, I show how to straightforwardly calculate an LR for any given categorical statement using data from the performance studies. This number quantifies how much more we should believe the hypothesis of same source versus different source, when provided a particular expert witness statement. LRs are reported for categorical statements resulting from the analysis of latent fingerprints, bloodstain patterns, handwriting, footwear and firearms. The highest LR found for statements of identification was 376 (fingerprints); the lowest found for statements of exclusion was 1/28 (handwriting). The LRs found may be more insightful for those used to this framework than the various error rates reported previously. An additional advantage of using the LR in this way is its relative simplicity; no decisions are necessary on which error rate to focus on or how to handle inconclusive statements. The values found are closer to 1 than many would have expected. One possible explanation for this mismatch is that we undervalue numerical LRs. Finally, a note of caution: the LR values reported here come from a simple calculation that does not do justice to the nuances of the large-scale studies and their differences from casework, and they should be treated as ball-park figures rather than definitive statements on the evidential value of whole forensic scientific fields.
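The calculation itself is a ratio of two conditional frequencies read off a performance study's count table. The worked example below uses hypothetical counts chosen for illustration only; they are not taken from any of the studies discussed above.

```python
# LR for the categorical statement "identification", from hypothetical
# performance-study counts (NOT real study data):
#   how often examiners said "identification" on same-source (SS) pairs,
#   and on different-source (DS) pairs.
ss_id, ss_total = 900, 1000    # P(identification | same source)      = 0.90
ds_id, ds_total = 3, 1000      # P(identification | different source) = 0.003

lr = (ss_id / ss_total) / (ds_id / ds_total)
print(lr)   # 300.0: the statement shifts the same-source odds by x300
```

The same two-line calculation applies to any other statement category (exclusion, inconclusive), which is why no choice of a single error rate is needed.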

Non-core drilling has gradually become the primary exploration method in geological exploration engineering, and well logging curves have gained increasing importance as the main carriers of geological information. However, factors such as the geological environment, logging equipment, borehole quality, and unexpected events can all degrade the quality of well logging curves. Previous remedies, re-logging or manual correction, have been associated with high costs and low efficiency. This paper proposes a machine learning method that uses existing data to predict missing data; its effectiveness and feasibility have been validated through field experiments. The proposed method builds on the traditional Long Short-Term Memory (LSTM) neural network by incorporating a self-attention mechanism to analyze the sequential dependencies of the data. It retains only the dominant computational results in the LSTM, reducing the computational complexity from O(n^2) to O(n log n) and improving model efficiency. Experimental results demonstrate that the proposed method achieves higher accuracy than traditional curve synthesis methods based on Fully Connected Neural Networks (FCNN) and vanilla LSTM. This accurate, efficient, and cost-effective prediction method holds practical value in engineering applications.
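The attention component can be sketched as scaled dot-product self-attention over a sequence of hidden states (such as LSTM outputs), with a top-k mask keeping only the dominant scores per query. The abstract does not specify the exact selection rule that yields the O(n log n) cost, so the top-k rule here is an illustrative stand-in; note also that this toy still forms the dense score matrix, so it illustrates the selection, not the complexity reduction itself.

```python
import numpy as np

def sparse_self_attention(H, k):
    """H: (n, d) sequence of hidden states; each query attends to only
    its k highest-scoring keys."""
    n, d = H.shape
    scores = H @ H.T / np.sqrt(d)          # (n, n) scaled dot-product scores
    # Keep the k largest scores in each row; mask out the rest.
    kth = np.partition(scores, n - k, axis=1)[:, n - k][:, None]
    masked = np.where(scores >= kth, scores, -np.inf)
    # Row-wise softmax over the surviving scores (masked entries get 0).
    w = np.exp(masked - masked.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)
    return w @ H                           # (n, d) attended representation

rng = np.random.default_rng(4)
H = rng.standard_normal((50, 16))          # stand-in for LSTM hidden states
out = sparse_self_attention(H, k=8)
print(out.shape)
```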

We consider wave scattering from a system of highly contrasting resonators with time-modulated material parameters. In this setting, the wave equation reduces to a system of coupled Helmholtz equations that models the scattering problem. We consider the one-dimensional setting. In order to understand the energy of the system, we prove a novel higher-order discrete capacitance matrix approximation of the subwavelength resonant quasifrequencies. Further, we perform numerical experiments to support and illustrate our analytical results and to show how periodically time-dependent material parameters affect the scattered wave field.
