
Non-core drilling has gradually become the primary method in geological exploration engineering, and well logging curves have correspondingly grown in importance as the main carriers of geological information. However, factors such as the geological environment, logging equipment, borehole quality, and unexpected events can all degrade the quality of these curves. Previous remedies, re-logging or manual correction, are costly and inefficient. This paper proposes a machine learning method that uses existing data to predict missing data; its effectiveness and feasibility have been validated through field experiments. The method builds on the traditional Long Short-Term Memory (LSTM) neural network by incorporating a self-attention mechanism that analyzes the sequential dependencies of the data. By retaining only the dominant attention terms over the LSTM's outputs, it reduces the computational complexity from $O(n^2)$ to $O(n\log n)$ and improves model efficiency. Experimental results demonstrate that the proposed method achieves higher accuracy than traditional curve-synthesis methods based on Fully Connected Neural Networks (FCNN) and vanilla LSTM. This accurate, efficient, and cost-effective prediction method holds practical value in engineering applications.
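
As a rough illustration of the architecture described above, here is a hedged sketch of an LSTM followed by a top-k ("dominant entries") self-attention layer; the module names, dimensions, and the log2-based selection rule are our assumptions, not the authors' exact design:

```python
import math
import torch
import torch.nn as nn

class TopKAttentionLSTM(nn.Module):
    """LSTM followed by self-attention that keeps only the k ~ log2(n)
    dominant scores per query, so the softmax mixes O(n log n) terms in
    total rather than O(n^2)."""

    def __init__(self, in_dim, hidden_dim, out_dim):
        super().__init__()
        self.lstm = nn.LSTM(in_dim, hidden_dim, batch_first=True)
        self.wq = nn.Linear(hidden_dim, hidden_dim)
        self.wk = nn.Linear(hidden_dim, hidden_dim)
        self.wv = nn.Linear(hidden_dim, hidden_dim)
        self.head = nn.Linear(hidden_dim, out_dim)

    def forward(self, x):                        # x: (batch, n, in_dim)
        h, _ = self.lstm(x)                      # (batch, n, hidden_dim)
        n = h.size(1)
        k = max(1, math.ceil(math.log2(n)))
        # The full score matrix is materialized here for clarity; a true
        # O(n log n) implementation would locate the dominant entries
        # without forming all n^2 scores.
        scores = self.wq(h) @ self.wk(h).transpose(1, 2) / h.size(-1) ** 0.5
        top = scores.topk(k, dim=-1)             # dominant entries per query
        sparse = torch.full_like(scores, float("-inf"))
        sparse.scatter_(-1, top.indices, top.values)
        attn = sparse.softmax(dim=-1)            # k nonzero weights per row
        return self.head(attn @ self.wv(h))      # synthesized curve values
```

In this sketch, each depth sample of a length-$n$ log segment attends to only about $\log_2 n$ other samples chosen by score magnitude, which is where the stated complexity reduction would come from.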

Related Content

Long short-term memory (LSTM) is an artificial recurrent neural network (RNN) architecture used in the field of deep learning. Unlike standard feedforward neural networks, LSTM has feedback connections. It can process not only single data points (such as images), but also entire sequences of data (such as speech or video). For example, LSTM is suited to tasks such as unsegmented, connected handwriting recognition, speech recognition, and anomaly detection in network traffic or IDSs (intrusion detection systems).
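
For reference, the standard LSTM cell equations, in which the feedback enters through the previous hidden state $h_{t-1}$ and cell state $c_{t-1}$:

$$
\begin{aligned}
f_t &= \sigma(W_f x_t + U_f h_{t-1} + b_f), \qquad & i_t &= \sigma(W_i x_t + U_i h_{t-1} + b_i),\\
o_t &= \sigma(W_o x_t + U_o h_{t-1} + b_o), \qquad & \tilde c_t &= \tanh(W_c x_t + U_c h_{t-1} + b_c),\\
c_t &= f_t \odot c_{t-1} + i_t \odot \tilde c_t, \qquad & h_t &= o_t \odot \tanh(c_t),
\end{aligned}
$$

where $\sigma$ is the logistic sigmoid and $\odot$ denotes elementwise multiplication; the forget, input, and output gates $f_t, i_t, o_t$ control what the cell state retains, admits, and exposes.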

Brittle solids are often toughened by adding a second-phase material. This practice often results in composites with material heterogeneities on the meso scale: large compared to the scale of the process zone but small compared to that of the application. The specific configuration (both geometrical and mechanical) of this mesoscale heterogeneity is generally recognized as important in determining crack propagation and, subsequently, the (effective) toughness of the composite. Here, we systematically investigate how dynamic crack propagation is affected by mesoscale heterogeneities taking the form of an array of inclusions. Using a variational phase-field approach, we compute the apparent crack speed and fracture energy dissipation rate to compare crack propagation under Mode-I loading across different configurations of these inclusions. When the volume fraction of inclusions is fixed, matching the inclusion size to the size of the K-dominance zone gives rise to the best toughening outcome. Conversely, when the volume fraction is varied, a lower volume fraction can lead to a better toughening outcome if and only if the inclusion size approaches the size of the K-dominance zone from above. Since the size of the K-dominance zone can be estimated \textit{a priori} given an understanding of the application scenario and material availability, we can, in principle, exploit this estimation to design a material's mesoscale heterogeneity that optimally balances the tradeoff between strength and toughness. This paves the way for realizing functional (meta-)materials against crack propagation in extreme environments.
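
As context, a minimal sketch of the kind of regularized energy a variational phase-field model minimizes, written here in the standard AT2 form (the paper's exact functional and its dynamic extension may differ):

$$
\mathcal{E}(u, d) \;=\; \int_{\Omega} (1-d)^2\, \psi\big(\varepsilon(u)\big)\,\mathrm{d}x \;+\; G_c \int_{\Omega} \Big( \frac{d^2}{2\ell} + \frac{\ell}{2}\,\lvert\nabla d\rvert^2 \Big)\,\mathrm{d}x,
$$

where $d \in [0,1]$ is the damage field, $\psi$ the elastic energy density, $G_c$ the fracture energy, and $\ell$ the regularization length; the mesoscale heterogeneity enters through spatially varying $\psi$ and $G_c$ across matrix and inclusions.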

The main reason for the query model's prominence in complexity theory and quantum computing is the availability of concrete lower-bounding techniques: the polynomial and adversary methods. There have been considerable efforts to prove lower bounds using these methods and to compare or relate them with other measures based on the decision tree. We explore the value of these lower bounds on quantum query complexity, and their relation to other decision-tree-based complexity measures, for the class of symmetric functions, arguably one of the most natural and basic classes of Boolean functions. We give an explicit construction for the dual of the positive adversary method, and also of the square root of the private-coin certificate game complexity, for any total symmetric function; this shows that the two values cannot be distinguished for any symmetric function. Additionally, we show that the recently introduced measure of spectral sensitivity gives the same value as both the positive adversary and the approximate degree for every total symmetric Boolean function. Further, we study the quantum query complexity of Gap Majority, a partial symmetric function that has recently gained importance for understanding the composition of randomized query complexity. We characterize the quantum query complexity of Gap Majority and show a lower bound on noisy randomized query complexity (Ben-David and Blais, FOCS 2020) in terms of quantum query complexity. Finally, we study how large certificate complexity and block sensitivity can be compared to sensitivity for symmetric functions (even up to constant factors). We show tight separations, i.e., we give upper bounds on the possible separations and construct functions achieving them.
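
For context, the positive (spectral) adversary bound referenced above can be stated as follows: for $f \colon \{0,1\}^n \to \{0,1\}$,

$$
\mathrm{Adv}(f) \;=\; \max_{\Gamma \neq 0} \frac{\lVert \Gamma \rVert}{\max_i \lVert \Gamma \circ D_i \rVert}, \qquad Q(f) = \Omega\big(\mathrm{Adv}(f)\big),
$$

where $\Gamma$ ranges over symmetric matrices indexed by inputs with $\Gamma[x,y] = 0$ whenever $f(x) = f(y)$, $D_i[x,y] = 1$ iff $x_i \neq y_i$, $\circ$ is the entrywise product, and $\lVert\cdot\rVert$ is the spectral norm. An explicit solution to the dual of this maximization certifies an upper bound on its value, which is the role of the construction above.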

Fourth-order variational inequalities are encountered in various scientific and engineering disciplines, including elliptic optimal control problems and plate obstacle problems. In this paper, we consider additive Schwarz methods for solving fourth-order variational inequalities. Based on a unified framework of various finite element methods for fourth-order variational inequalities, we develop one- and two-level additive Schwarz methods. We prove that the two-level method is scalable in the sense that the convergence rate of the method depends on $H/h$ and $H/\delta$ only, where $h$ and $H$ are the typical diameters of an element and a subdomain, respectively, and $\delta$ measures the overlap among the subdomains. This proof relies on a new nonlinear positivity-preserving coarse interpolation operator, the construction of which was previously unknown. To the best of our knowledge, this analysis represents the first investigation into the scalability of the two-level additive Schwarz method for fourth-order variational inequalities. Our theoretical results are verified by numerical experiments.
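
For orientation, in the linear (unconstrained) setting the classical two-level additive Schwarz preconditioner has the additive structure

$$
M_{\mathrm{ad}}^{-1} \;=\; R_0^{\top} A_0^{-1} R_0 \;+\; \sum_{i=1}^{N} R_i^{\top} A_i^{-1} R_i,
$$

where $R_0$ restricts to the coarse space, $R_i$ restricts to the $i$-th overlapping subdomain, and $A_0, A_i$ are the corresponding coarse and local problems. For variational inequalities, the coarse and local solves become constrained (nonlinear) problems, which is where the positivity-preserving coarse interpolation operator above enters.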

We present an algorithm for the exact computer-aided construction of the Voronoi cells of lattices with known symmetry group. Our algorithm scales better than linearly with the total number of faces and is applicable to dimensions beyond 12, which previous methods could not achieve. The new algorithm is applied to the Coxeter-Todd lattice $K_{12}$ as well as to a family of lattices obtained from laminating $K_{12}$. By optimizing this family, we obtain a new best 13-dimensional lattice quantizer (among the lattices with published exact quantizer constants).
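
The figure of merit being optimized is the normalized second moment (quantizer constant) of a lattice $\Lambda \subset \mathbb{R}^n$ with Voronoi cell $V$:

$$
G(\Lambda) \;=\; \frac{\tfrac{1}{n} \int_{V} \lVert x \rVert^2 \,\mathrm{d}x}{\big(\mathrm{vol}\, V\big)^{1 + 2/n}},
$$

whose exact evaluation requires the complete face structure of $V$, hence the need for the construction algorithm.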

Complex interval arithmetic is a powerful tool for the analysis of computational errors. The naturally arising rectangular, polar, and circular (together called primitive) interval types are not closed under simple arithmetic operations, and their use yields overly relaxed bounds. The later-introduced polygonal type, on the other hand, allows arbitrarily precise representation of the above operations at a higher computational cost. We propose the polyarcular interval type as an effective extension of the previous types. The polyarcular interval can represent all primitive intervals and most of their arithmetic combinations precisely, and it has an approximation capability competing with that of the polygonal interval. In particular, in antenna tolerance analysis it can achieve perfect accuracy at lower computational cost than the polygonal type, which we show in a relevant case study. In this paper, we present a rigorous analysis of the arithmetic properties of all five interval types, involving a new algebro-geometric method of boundary analysis.
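
A small numerical illustration (ours, not from the paper) of why the primitive types yield relaxed bounds: the exact product of two rectangular complex intervals is not a rectangle, so the tightest rectangular enclosure necessarily over-covers the true set.

```python
import numpy as np

def grid(re_lo, re_hi, im_lo, im_hi, n=40):
    """Sample a rectangular complex interval on an n x n grid."""
    re, im = np.meshgrid(np.linspace(re_lo, re_hi, n),
                         np.linspace(im_lo, im_hi, n))
    return (re + 1j * im).ravel()

a = grid(1.0, 2.0, 1.0, 2.0)             # rectangular interval [1,2] + [1,2]i
prod = np.multiply.outer(a, a).ravel()   # sampled exact product set a * a

# Rectangular hull: the tightest rectangle enclosing the product set.
lo_re, hi_re = prod.real.min(), prod.real.max()
lo_im, hi_im = prod.imag.min(), prod.imag.max()

# The hull corner (min Re, max Im) lies far from every attainable product,
# so closing the arithmetic over rectangles inflates the bound.
corner = lo_re + 1j * hi_im
print("hull:", (lo_re, hi_re, lo_im, hi_im))
print("slack at corner:", np.abs(prod - corner).min())  # strictly > 0
```

Here $\min \mathrm{Re} = -3$ requires the factor corners $1+2i$, while $\max \mathrm{Im} = 8$ requires $2+2i$, so no single product attains the hull corner; the polyarcular type is designed to represent such curved product boundaries exactly.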

We consider the numerical behavior of the fixed-stress splitting method for coupled poromechanics as undrained regimes are approached. We explain that pressure stability is related to the splitting error of the scheme, not to the fact that the discrete saddle-point matrix never appears in the fixed-stress approach. This observation reconciles previous results regarding the pressure stability of the splitting method. Using examples of compositional poromechanics with application to geological CO$_2$ sequestration, we show that solutions obtained using the fixed-stress scheme with a low-order finite element-finite volume discretization that is not inherently inf-sup stable can exhibit the same pressure oscillations obtained with the corresponding fully implicit scheme. Moreover, pressure-jump stabilization can effectively remove these spurious oscillations in the fixed-stress setting, while also improving the efficiency of the scheme in terms of the number of iterations required at each time step to reach convergence.
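
Schematically, for linear Biot poroelasticity (the paper treats the compositional extension), the fixed-stress iteration freezes the volumetric stress $\sigma_v = K_{dr}\,\nabla\!\cdot u - b\,p$ during the flow solve, which turns the coupling term into a pressure stabilization:

$$
\Big(\frac{1}{M} + \frac{b^2}{K_{dr}}\Big)\, \partial_t p^{\,k+1} \;-\; \nabla\!\cdot\Big(\frac{\kappa}{\mu}\, \nabla p^{\,k+1}\Big) \;=\; f \;-\; \frac{b}{K_{dr}}\, \partial_t \sigma_v^{\,k},
$$

followed by a mechanics solve with $p^{\,k+1}$ held fixed. The added $b^2/K_{dr}$ term is the splitting contribution whose relation to pressure stability is the subject of the analysis above.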

Diffusion models have demonstrated remarkable performance in generation tasks. Nevertheless, explaining the diffusion process remains challenging because it is a sequence of denoising steps over noisy images that are difficult for experts to interpret. To address this issue, we pose three research questions that interpret the diffusion process from the perspective of the visual concepts generated by the model and the regions the model attends to at each time step. We devise tools for visualizing the diffusion process and answering these research questions, rendering the diffusion process human-understandable. Using various visual analyses with these tools, we show how the output is progressively generated by explaining the level of denoising and highlighting relationships to foundational visual concepts at each time step. Throughout training, the diffusion model learns diverse visual concepts corresponding to each time step, which enables it to predict varying levels of visual concepts at different stages of generation. We substantiate our tools using the Area Under the Curve (AUC) score, correlation quantification, and cross-attention mapping. Our findings provide insights into the diffusion process and pave the way for further research into explainable diffusion mechanisms.
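
A minimal, self-contained sketch of the kind of cross-attention mapping mentioned above (illustrative only; the function and shapes are our assumptions, not the paper's tool): given patch queries and token keys at one denoising step, the attention map shows which spatial region attends to which concept token.

```python
import torch

def cross_attention_map(img_feats, txt_feats):
    """img_feats: (hw, d) image-patch queries; txt_feats: (t, d) token keys.
    Returns an (hw, t) map whose rows are attention distributions."""
    d = img_feats.size(-1)
    scores = img_feats @ txt_feats.T / d ** 0.5   # scaled dot products
    return scores.softmax(dim=-1)                 # rows sum to 1

# Toy usage: 64 patches (an 8x8 latent grid), 4 text tokens, 32-dim features.
torch.manual_seed(0)
attn = cross_attention_map(torch.randn(64, 32), torch.randn(4, 32))
heat = attn[:, 2].reshape(8, 8)   # spatial heatmap for token index 2
print(heat.shape)                  # track per time step to see shifting focus
```

Recording such maps at every denoising step is one way to visualize how the model's spatial focus migrates as generation proceeds.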

Non-Hermitian topological phases exhibit some remarkable properties compared with their Hermitian counterparts, such as the breakdown of the conventional bulk-boundary correspondence and non-Hermitian topological edge modes. Here, we introduce several deep learning algorithms, based on the multi-layer perceptron (MLP) and the convolutional neural network (CNN), to predict the winding of the eigenvalues of non-Hermitian Hamiltonians. Subsequently, we use the smallest module of the periodic circuit as one unit to construct high-dimensional circuit data features. Further, we use the Dense Convolutional Network (DenseNet), a type of convolutional neural network that utilizes dense connections between layers, to design a non-Hermitian topolectrical Chern circuit, as the DenseNet algorithm is more suitable for processing high-dimensional data. Our results demonstrate the effectiveness of deep learning networks in capturing the global topological characteristics of a non-Hermitian system based on training data.
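
As an illustration of the label such networks are trained to predict (our sketch, not the paper's pipeline), the spectral winding number can be computed directly for the Hatano-Nelson chain $H(k) = t_R e^{ik} + t_L e^{-ik}$:

```python
import numpy as np

def spectral_winding(energies, base=0.0):
    """Winding number of a closed curve of complex eigenvalues E(k),
    with k sampled uniformly over [0, 2*pi), around the base energy."""
    phase = np.angle(energies - base)
    # Unwrap the phase around the closed loop and count full turns.
    dphi = np.diff(np.unwrap(np.append(phase, phase[0])))
    return int(round(dphi.sum() / (2 * np.pi)))

k = np.linspace(0, 2 * np.pi, 400, endpoint=False)
E = 1.0 * np.exp(1j * k) + 0.5 * np.exp(-1j * k)   # t_R = 1.0, t_L = 0.5
print(spectral_winding(E))   # -> 1: nontrivial non-Hermitian topology
```

A network that predicts this integer from sampled circuit or Hamiltonian data is, in effect, learning the global topology of the spectrum.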

The broad class of multivariate unified skew-normal (SUN) distributions has recently been shown to possess fundamental conjugacy properties. When used as priors for the vector of parameters in general probit, tobit, and multinomial probit models, these distributions yield posteriors that still belong to the SUN family. Although this core result has led to important advancements in Bayesian inference and computation, its applicability beyond likelihoods associated with fully-observed, discretized, or censored realizations from multivariate Gaussian models remains unexplored. This article fills this gap by proving that the wider family of multivariate unified skew-elliptical (SUE) distributions, which extends SUNs to more general perturbations of elliptical densities, guarantees conjugacy for broader classes of models beyond those relying on fully-observed, discretized, or censored Gaussians. This result leverages the closure of the SUE family under linear combinations, conditioning, and marginalization to prove that the family is conjugate to the likelihood induced by general multivariate regression models for fully-observed, censored, or dichotomized realizations from skew-elliptical distributions. This advancement substantially enlarges the set of models that enable conjugate Bayesian inference, extending it to general formulations arising from elliptical and skew-elliptical families, including the multivariate Student's t and skew-t, among others.
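
The Gaussian-probit special case that this article generalizes can be stated compactly (a schematic sketch; the posterior parameter subscripts are placeholders for the known closed-form expressions):

$$
\beta \sim \mathrm{N}_p(\xi, \Omega), \qquad y_i \mid \beta \overset{\mathrm{ind}}{\sim} \mathrm{Bern}\big(\Phi(x_i^{\top}\beta)\big) \;\;\Longrightarrow\;\; \beta \mid y \sim \mathrm{SUN}_{p,n}\big(\xi_{\mathrm{post}}, \Omega_{\mathrm{post}}, \Delta_{\mathrm{post}}, \gamma_{\mathrm{post}}, \Gamma_{\mathrm{post}}\big),
$$

with all five posterior parameters available in closed form. The SUE result extends this pattern to skew-elliptical priors and to censored or dichotomized skew-elliptical data.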

We consider the problem of sketching a set valuation function, defined as the expectation of a valuation function of independent random item values. We show that for monotone subadditive or submodular valuation functions satisfying a weak homogeneity condition, or certain other conditions, there exist discretized distributions of item values, each with a support of size $O(k\log(k))$, that yield a sketch valuation function which is a constant-factor approximation for any value query on a set of at most $k$ items. The discretized distributions can be computed efficiently, by an algorithm applied to each item's value distribution separately. Our results hold under conditions that accommodate a wide range of valuation functions arising in applications, such as the value of a team corresponding to the best performance of a team member, constant elasticity of substitution production functions exhibiting diminishing returns used in economics and consumer theory, and others. Sketch valuation functions are particularly valuable for finding approximate solutions to optimization problems such as best set selection and welfare maximization: they enable computationally efficient evaluation of approximate value oracle queries and provide an approximation guarantee for the underlying optimization problem.
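
A toy numerical check of the idea (illustrative only; the paper's discretization and guarantees are more refined): replace each item's value distribution with a small grid of equally weighted quantile atoms, sized per the stated $O(k\log(k))$ support bound, and verify that the "best team member" valuation $\mathbb{E}[\max_{i\in S} X_i]$ is roughly preserved.

```python
import numpy as np

rng = np.random.default_rng(0)
k = 8
m = max(2, int(k * np.log(k)))             # support size per item, ~ k log k

def discretize(samples, m):
    """Map a sampled distribution to m equally weighted quantile atoms."""
    qs = (np.arange(m) + 0.5) / m
    return np.quantile(samples, qs)

items = [rng.lognormal(mean=mu, sigma=1.0, size=20000)
         for mu in np.linspace(0.0, 1.0, k)]
atoms = [discretize(s, m) for s in items]

true_val = np.mean(np.max(np.column_stack(items), axis=1))
sketch_draws = np.column_stack([rng.choice(a, size=20000) for a in atoms])
sketch_val = np.mean(np.max(sketch_draws, axis=1))
print(f"true E[max] ~ {true_val:.3f}, sketch E[max] ~ {sketch_val:.3f}")
```

Because each item is discretized independently, the sketch can be built once per item and reused across all value queries on sets of up to $k$ items.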
