
The modeling of cracks is an important topic, both in engineering and in mathematics. Since crack propagation is characterized by a free boundary problem (the geometry of the crack is not known beforehand but is part of the solution), approximations of the underlying sharp-interface problem based on phase-field models are often considered. Focusing on a rate-independent setting, these models are defined by a unidirectional gradient flow of an energy functional. Since this energy functional is non-convex, the evolution of variables such as the displacement field and the phase-field variable may be discontinuous in time, leading to so-called brutal crack growth. For this reason, solution concepts have to be chosen carefully in order to predict discontinuities that are physically reasonable. One such concept is that of Balanced Viscosity solutions (BV solutions), which predicts physically sound energy trajectories that do not jump across energy barriers. The paper deals with a time-adaptive finite element phase-field model for rate-independent fracture which converges to BV solutions. The model is motivated by constraining the pseudo-velocity of the crack tip. The resulting constrained minimization problem is solved by the augmented Lagrangian method. Numerical examples highlight the predictive capabilities of the model and furthermore show the efficiency and robustness of the final algorithm.
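
As a rough illustration of the last step, the following is a minimal, generic augmented Lagrangian loop for an inequality-constrained minimization min E(x) subject to g(x) <= 0, of the kind one might use to enforce a bound on the crack-tip pseudo-velocity. The energy, constraint, and all parameters are illustrative placeholders, not the paper's actual finite element discretization.

```python
# Generic augmented Lagrangian sketch for min E(x) s.t. g(x) <= 0.
# E, g, and all parameters are illustrative placeholders, not the paper's model.
import numpy as np
from scipy.optimize import minimize

def augmented_lagrangian(E, g, x0, mu=10.0, lam=0.0, n_outer=20, tol=1e-8):
    """Classic multiplier updates with a Rockafellar-type penalty term."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_outer):
        # Inner problem: E(x) + (mu/2) * max(0, lam/mu + g(x))^2  (C^1 in x)
        def L_A(x):
            s = max(0.0, lam / mu + g(x))
            return E(x) + 0.5 * mu * s**2 - lam**2 / (2 * mu)
        x = minimize(L_A, x, method="BFGS").x
        # Multiplier update; increase mu while the constraint is still violated
        lam = max(0.0, lam + mu * g(x))
        if g(x) <= tol:
            break
        mu *= 2.0
    return x, lam

# Toy usage: minimize (x-2)^2 subject to x <= 1, i.e. g(x) = x - 1 <= 0
x_opt, lam_opt = augmented_lagrangian(lambda x: (x[0] - 2.0)**2,
                                      lambda x: x[0] - 1.0, x0=[0.0])
print(x_opt, lam_opt)  # expect x ~ 1 and multiplier lam ~ 2 (KKT)
```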

Related Content

The ACM/IEEE 23rd International Conference on Model Driven Engineering Languages and Systems (MODELS) is the premier conference series for model-driven software and systems engineering, organized with the support of ACM SIGSOFT and IEEE TCSE. Since 1998, MODELS has covered all aspects of modeling, from languages and methods to tools and applications. Its attendees come from diverse backgrounds, including researchers, academics, engineers, and industry professionals. MODELS 2019 is a forum where participants can exchange cutting-edge research results and innovative practical experience around modeling and model-driven software and systems. This year's edition will provide the modeling community with an opportunity to further advance the foundations of modeling, and to present innovative applications of modeling in emerging areas such as cyber-physical systems, embedded systems, socio-technical systems, cloud computing, big data, machine learning, security, open source, and sustainability.
April 23, 2024

We describe a simple deterministic near-linear time approximation scheme for uncapacitated minimum cost flow in undirected graphs with real edge weights, a problem also known as transshipment. Specifically, our algorithm takes as input a (connected) undirected graph $G = (V, E)$, vertex demands $b \in \mathbb{R}^V$ such that $\sum_{v \in V} b(v) = 0$, positive edge costs $c \in \mathbb{R}_{>0}^E$, and a parameter $\varepsilon > 0$. In $O(\varepsilon^{-2} m \log^{O(1)} n)$ time, it returns a flow $f$ such that the net flow out of each vertex is equal to the vertex's demand and the cost of the flow is within a $(1 + \varepsilon)$ factor of optimal. Our algorithm is combinatorial and has no running time dependency on the demands or edge costs. With the exception of a recent result presented at STOC 2022 for polynomially bounded edge weights, all almost- and near-linear time approximation schemes for transshipment relied on randomization to embed the problem instance into low-dimensional space. Our algorithm instead deterministically approximates the cost of routing decisions that would be made if the input were subject to a random tree embedding. To avoid computing the $\Omega(n^2)$ vertex-vertex distances that an approximation of this kind suggests, we also limit the available routing decisions using distances explicitly stored in the well-known Thorup-Zwick distance oracle.
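
To make the problem statement concrete, here is a small sketch that evaluates the transshipment objective from the abstract: given an undirected graph with edge costs c and demands b summing to zero, it checks that a candidate flow routes the demands and reports its cost. The signed per-edge flow encoding is an illustrative choice, not the paper's data structure.

```python
# Transshipment feasibility/cost check; the encoding here is illustrative.
import numpy as np

def flow_cost_and_residual(edges, c, f, b, n):
    """edges: list of (u, v); f[e] > 0 means flow from u to v."""
    net_out = np.zeros(n)
    for e, (u, v) in enumerate(edges):
        net_out[u] += f[e]
        net_out[v] -= f[e]
    cost = float(np.sum(np.asarray(c) * np.abs(f)))      # sum_e c(e) |f(e)|
    residual = float(np.max(np.abs(net_out - b)))        # 0 iff f is feasible
    return cost, residual

# Toy instance: path 0 - 1 - 2, one unit shipped from vertex 0 to vertex 2.
edges, c = [(0, 1), (1, 2)], [1.0, 2.0]
b = np.array([1.0, 0.0, -1.0])          # demands sum to zero
f = np.array([1.0, 1.0])
print(flow_cost_and_residual(edges, c, f, b, n=3))  # (3.0, 0.0)
```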

A novel strategy is proposed for the coupling of field and circuit equations when modeling power devices in the low-frequency regime. The resulting systems of differential-algebraic equations have a particular geometric structure which explicitly encodes the energy storage, dissipation, and transfer mechanisms. This implies a power balance on the continuous level which can be preserved under appropriate discretization in space and time. The models and main results are presented in detail for linear constitutive models, but the extension to nonlinear elements and more general coupling mechanisms is possible. The theoretical findings are demonstrated by numerical results.
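
The abstract's power balance can be illustrated on a toy linear system with the dissipative Hamiltonian structure dx/dt = (J - R)Qx and energy H(x) = x^T Q x / 2, where J is skew-symmetric and R is positive semidefinite; the implicit midpoint rule then reproduces the balance H(x_{k+1}) - H(x_k) = -dt e^T R e with e = Q x_mid exactly. The matrices below are toy placeholders, not a field-circuit model.

```python
# Toy dissipative Hamiltonian system; matrices are illustrative placeholders.
import numpy as np

J = np.array([[0.0, 1.0], [-1.0, 0.0]])   # skew-symmetric interconnection
R = np.diag([0.1, 0.0])                   # dissipation (positive semidefinite)
Q = np.diag([2.0, 1.0])                   # energy weights, H(x) = x^T Q x / 2
A, dt = (J - R) @ Q, 0.05
H = lambda x: 0.5 * x @ Q @ x

x = np.array([1.0, 0.0])
I = np.eye(2)
for _ in range(200):
    # Implicit midpoint: (I - dt/2 A) x_new = (I + dt/2 A) x_old
    x_new = np.linalg.solve(I - 0.5 * dt * A, (I + 0.5 * dt * A) @ x)
    e = Q @ (0.5 * (x + x_new))
    balance = H(x_new) - H(x) + dt * e @ R @ e   # vanishes to round-off
    assert abs(balance) < 1e-12
    x = x_new
print("final energy:", H(x))  # monotonically decayed from H(x0) = 1.0
```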

Hierarchical matrices approximate a given matrix by a decomposition into low-rank submatrices that can be handled efficiently in factorized form. $\mathcal{H}^2$-matrices refine this representation following the ideas of fast multipole methods in order to achieve linear, i.e., optimal complexity for a variety of important algorithms. The matrix multiplication, a key component of many more advanced numerical algorithms, has so far proven tricky: the only linear-time algorithms known to date either require the very special structure of HSS-matrices or need to know a suitable basis for all submatrices in advance. In this article, a new and fairly general algorithm for multiplying $\mathcal{H}^2$-matrices in linear complexity with adaptively constructed bases is presented. The algorithm consists of two phases: first an intermediate representation with a generalized block structure is constructed, then this representation is re-compressed in order to match the structure prescribed by the application. The complexity and accuracy are analysed and numerical experiments indicate that the new algorithm can indeed be significantly faster than previous attempts.
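
The full $\mathcal{H}^2$-matrix machinery is well beyond a snippet, but the re-compression phase mentioned above rests on a standard primitive: truncating a sum of low-rank products back to an adaptively chosen rank. Here is a minimal sketch with plain numpy; the shapes and tolerance are illustrative.

```python
# Re-compress sum_i A_i @ B_i.T (each low rank) into a single U @ V.T.
# Shapes and the tolerance are illustrative, not from the paper.
import numpy as np

def recompress(factors, tol=1e-10):
    A = np.hstack([a for a, _ in factors])     # stack column factors
    B = np.hstack([b for _, b in factors])     # stack row factors
    # SVD of A @ B.T via QR factors, without forming the dense product
    Qa, Ra = np.linalg.qr(A)
    Qb, Rb = np.linalg.qr(B)
    U, s, Vt = np.linalg.svd(Ra @ Rb.T)
    k = int(np.sum(s > tol * s[0]))            # adaptive rank choice
    return Qa @ (U[:, :k] * s[:k]), Qb @ Vt[:k].T

# Toy usage: two rank-2 products compressed into one adaptive-rank factorization
rng = np.random.default_rng(0)
fs = [(rng.standard_normal((50, 2)), rng.standard_normal((40, 2)))
      for _ in range(2)]
U, V = recompress(fs)
dense = sum(a @ b.T for a, b in fs)
print(U.shape[1], np.linalg.norm(dense - U @ V.T))  # rank <= 4, error ~ 1e-14
```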

Quantized tensor trains (QTTs) have recently emerged as a framework for the numerical discretization of continuous functions, with the potential for widespread applications in numerical analysis. However, the theory of QTT approximation is not fully understood. In this work, we advance this theory from the point of view of multiscale polynomial interpolation. This perspective clarifies why QTT ranks decay with increasing depth, quantitatively controls QTT rank in terms of smoothness of the target function, and explains why certain functions with sharp features and poor quantitative smoothness can still be well approximated by QTTs. The perspective also motivates new practical and efficient algorithms for the construction of QTTs from function evaluations on multiresolution grids.
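
A minimal TT-SVD sketch conveys the quantization idea: sample a function on 2^d points, reshape into a d-way 2 x 2 x ... x 2 tensor, and compress sweep by sweep with truncated SVDs; smooth functions then exhibit the low ranks the abstract discusses. The test functions and tolerance below are arbitrary choices, not the paper's interpolation-based algorithms.

```python
# QTT ranks of a sampled function via plain TT-SVD; test data is illustrative.
import numpy as np

def qtt_ranks(values, tol=1e-10):
    d = int(np.log2(values.size))            # assumes a length of 2**d
    ranks, M = [], values.reshape(1, -1)
    for _ in range(d - 1):
        # Split off one binary "digit": rows absorb it, columns keep the rest
        M = M.reshape(M.shape[0] * 2, -1)
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        k = int(np.sum(s > tol * s[0]))
        ranks.append(k)
        M = s[:k, None] * Vt[:k]             # carry the remainder rightward
    return ranks

x = np.linspace(0.0, 1.0, 2**12, endpoint=False)
print(qtt_ranks(np.sin(2 * np.pi * x)))      # smooth: small ranks, mostly 2
print(qtt_ranks(np.random.default_rng(1).standard_normal(x.size)))  # near-max
```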

Cross-validation is usually employed to evaluate the performance of a given statistical methodology. When such a methodology depends on a number of tuning parameters, cross-validation proves helpful for selecting the parameters that optimize the estimated performance. In this paper, however, a very different and nonstandard use of cross-validation is investigated. Instead of focusing on the cross-validated parameters, the main interest is switched to the estimated value of the error criterion at optimal performance. It is shown that this approach is able to provide consistent and efficient estimates of some density functionals, with the noteworthy feature that these estimates do not rely on the choice of any further tuning parameter, so that, in that sense, they can be considered to be purely empirical. Here, a base case of application of this new paradigm is developed in full detail, while many other possible extensions are hinted at as well.
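
One plausible reading of such a base case, sketched here as a guess rather than the paper's actual construction: for least-squares cross-validation in kernel density estimation, the negated minimum of the criterion approximates the density functional $\int f(x)^2\,dx$, with no tuning parameter left over. All details below (Gaussian kernel, data, bandwidth grid) are chosen purely for illustration.

```python
# Illustrative guess at the paradigm: keep the *value* of the CV criterion.
import numpy as np

def lscv(x, h):
    """Least-squares CV criterion for a Gaussian KDE with bandwidth h."""
    n = x.size
    d = x[:, None] - x[None, :]
    phi = lambda t, s: np.exp(-0.5 * (t / s) ** 2) / (s * np.sqrt(2 * np.pi))
    int_fhat_sq = phi(d, np.sqrt(2) * h).sum() / n**2          # \int \hat f^2
    loo = (phi(d, h).sum() - n * phi(0.0, h)) / (n * (n - 1))  # mean LOO value
    return int_fhat_sq - 2 * loo

rng = np.random.default_rng(0)
x = rng.standard_normal(1000)
hs = np.geomspace(0.05, 1.0, 60)
crit = np.array([lscv(x, h) for h in hs])
# For N(0,1), \int f^2 = 1 / (2 sqrt(pi)) ~ 0.2821; -min LSCV should be close.
print(-crit.min(), 1 / (2 * np.sqrt(np.pi)))
```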

Circuit complexity, defined as the minimum circuit size required for implementing a particular Boolean computation, is a foundational concept in computer science. Determining circuit complexity is believed to be a hard computational problem [1]. Recently, in the context of black holes, circuit complexity has been promoted to a physical property, wherein the growth of complexity is reflected in the time evolution of the Einstein-Rosen bridge (``wormhole'') connecting the two sides of an AdS ``eternal'' black hole [2]. Here we explore another link between complexity and thermodynamics for circuits of given functionality, making the physics-inspired approach relevant to real computational problems, for which functionality is the key element of interest. In particular, our thermodynamic framework provides a new perspective on the obfuscation of programs of arbitrary length, an important problem in cryptography, as thermalization through recursive mixing of neighboring sections of a circuit, which can be viewed as the mixing of two containers with ``gases of gates''. This recursive process equilibrates the average complexity and leads to the saturation of the circuit entropy, while preserving the functionality of the overall circuit. The thermodynamic arguments hinge on ergodicity in the space of circuits, which we conjecture is limited to disconnected ergodic sectors due to fragmentation. The notion of fragmentation has important implications for the problem of circuit obfuscation, as it implies that there are circuits with the same size and functionality that cannot be connected via local moves. Furthermore, we argue that fragmentation is unavoidable unless the complexity classes NP and coNP coincide, a statement that would imply the collapse of the polynomial hierarchy of computational complexity theory to its first level.

While computer modeling and simulation are crucial for understanding scientometrics, their practical use in literature remains somewhat limited. In this study, we establish a joint coauthorship and citation network using preferential attachment. As papers get published, we update the coauthorship network based on each paper's author list, representing the collaborative team behind it. This team is formed considering the number of collaborations each author has, and we introduce new authors at a fixed probability, expanding the coauthorship network. Simultaneously, as each paper cites a specific number of references, we add an equivalent number of citations to the citation network upon publication. The likelihood of a paper being cited depends on its existing citations, fitness value, and age. Then we calculate the journal impact factor and h-index, using them as examples of scientific impact indicators. After thorough validation, we conduct case studies to analyze the impact of different parameters on the journal impact factor and h-index. The findings reveal that increasing the reference number N or reducing the paper's lifetime θ significantly boosts the journal impact factor and average h-index. On the other hand, enlarging the team size m without introducing new authors or decreasing the probability of newcomers p notably increases the average h-index. In conclusion, it is evident that various parameters influence scientific impact indicators, and their interpretation can be manipulated by authors. Thus, exploring the impact of these parameters and continually refining scientific impact indicators are essential. The modeling and simulation method serves as a powerful tool in this ongoing process, and the model can be easily extended to include other scientific impact indicators and scenarios.
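
A minimal sketch of the citation side of such a model: each new paper cites N references, choosing earlier papers with probability proportional to (citations + 1) x fitness x exp(-age/θ). The parameter names mirror the abstract (N, θ), but the functional forms and values are illustrative guesses, not the paper's calibrated model.

```python
# Toy preferential-attachment citation model; forms and values are guesses.
import numpy as np

rng = np.random.default_rng(0)
T, N, theta = 3000, 10, 200.0            # papers, refs per paper, lifetime
cites = np.zeros(T, dtype=int)
fitness = rng.lognormal(0.0, 0.5, T)

for t in range(1, T):
    age = t - np.arange(t)
    w = (cites[:t] + 1) * fitness[:t] * np.exp(-age / theta)
    refs = rng.choice(t, size=min(N, t), replace=False, p=w / w.sum())
    cites[refs] += 1

# h-index of the whole corpus viewed as one author's papers (toy summary)
sorted_c = np.sort(cites)[::-1]
h = int(np.sum(sorted_c >= np.arange(1, T + 1)))
print("max citations:", cites.max(), "corpus h-index:", h)
```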

In the context of the optimization of rotating electric machines, many different objective functions are of interest and considering this during the optimization is of crucial importance. While evolutionary algorithms can provide a Pareto front straightforwardly and are widely used in this context, derivative-based optimization algorithms can be computationally more efficient. In this case, a Pareto front can be obtained by performing several optimization runs with different weights. In this work, we focus on a free-form shape optimization approach allowing for arbitrary motor geometries. In particular, we propose a way to efficiently obtain Pareto-optimal points by moving along the Pareto front, exploiting a homotopy method based on second-order shape derivatives.
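
The paper's homotopy relies on second-order shape derivatives; a minimal illustration of the underlying weighted-sum idea is to sweep the weight and solve each scalarized problem with a derivative-based solver, warm-starting from the previous solution. The two objectives below are toy stand-ins, not motor-geometry functionals.

```python
# Weighted-sum sweep along a Pareto front; objectives are toy stand-ins.
import numpy as np
from scipy.optimize import minimize

f1 = lambda x: (x[0] - 1.0) ** 2 + x[1] ** 2       # e.g. "losses"
f2 = lambda x: x[0] ** 2 + (x[1] - 1.0) ** 2       # e.g. "torque ripple"

front, x = [], np.zeros(2)
for w in np.linspace(0.01, 0.99, 25):
    res = minimize(lambda x: w * f1(x) + (1 - w) * f2(x), x, method="BFGS")
    x = res.x                                      # warm start = crude homotopy
    front.append((f1(x), f2(x)))

for p in front[::6]:
    print(f"f1 = {p[0]:.3f}, f2 = {p[1]:.3f}")     # sweeps along the front
```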

One of the main challenges in surrogate modeling is the limited availability of data due to resource constraints associated with computationally expensive simulations. Multi-fidelity methods provide a solution by chaining models in a hierarchy of increasing fidelity, where higher fidelity is associated with lower error but higher cost. In this paper, we compare different multi-fidelity methods employed in constructing Gaussian process surrogates for regression. Non-linear autoregressive methods in the existing literature are primarily confined to two-fidelity models, and we extend these methods to handle more than two levels of fidelity. Additionally, we propose enhancements for an existing method incorporating delay terms by introducing a structured kernel. We demonstrate the performance of these methods across various academic and real-world scenarios. Our findings reveal that multi-fidelity methods generally have a smaller prediction error for the same computational cost compared to a single-fidelity method, although their effectiveness varies across different scenarios.
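
A minimal two-fidelity sketch in the nonlinear autoregressive spirit: fit a Gaussian process to cheap low-fidelity data, then fit a second GP whose inputs are (x, f_lo(x)) on a few expensive high-fidelity samples. This mirrors the general recipe on the classic Forrester test pair, not the paper's specific kernels, delay terms, or its extension beyond two levels.

```python
# Two-fidelity GP surrogate sketch; test functions are the Forrester pair.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

f_hi = lambda x: (6 * x - 2) ** 2 * np.sin(12 * x - 4)      # expensive truth
f_lo = lambda x: 0.5 * f_hi(x) + 10 * (x - 0.5) - 5         # cheap, biased

x_lo = np.linspace(0, 1, 25)[:, None]                       # many cheap runs
x_hi = np.linspace(0, 1, 6)[:, None]                        # few expensive runs

gp_lo = GaussianProcessRegressor(RBF(0.2)).fit(x_lo, f_lo(x_lo[:, 0]))
feats = np.hstack([x_hi, gp_lo.predict(x_hi)[:, None]])     # inputs (x, f_lo(x))
gp_hi = GaussianProcessRegressor(RBF([0.2, 10.0])).fit(feats, f_hi(x_hi[:, 0]))

x_test = np.linspace(0, 1, 200)[:, None]
pred = gp_hi.predict(np.hstack([x_test, gp_lo.predict(x_test)[:, None]]))
print("RMSE vs truth:", np.sqrt(np.mean((pred - f_hi(x_test[:, 0])) ** 2)))
```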

In this paper, by using the identity $|x| = 2\max\{0,x\} - x$, a class of maximum-based iteration methods is established to solve the generalized absolute value equation $Ax - B|x| = b$. Some convergence conditions for the proposed methods are presented. Numerical experiments confirm the effectiveness and feasibility of the proposed methods.
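
The identity suggests one natural fixed point: substituting $|x| = 2\max\{0,x\} - x$ turns $Ax - B|x| = b$ into $(A + B)x = b + 2B\max(0, x)$, inviting the iteration $x_{k+1} = (A + B)^{-1}(b + 2B\max(0, x_k))$. The sketch below implements this splitting, which is consistent with the identity but not necessarily the paper's exact scheme; the toy data is built so the map contracts.

```python
# One maximum-based splitting consistent with |x| = 2*max(0,x) - x;
# not necessarily the paper's exact scheme.
import numpy as np

def max_based_iteration(A, B, b, tol=1e-12, maxit=500):
    """Fixed point x <- (A + B)^{-1} (b + 2 B max(0, x))."""
    x = np.zeros_like(b)
    M = A + B
    for k in range(1, maxit + 1):
        x_new = np.linalg.solve(M, b + 2.0 * B @ np.maximum(0.0, x))
        if np.linalg.norm(x_new - x) <= tol * (1.0 + np.linalg.norm(x_new)):
            return x_new, k
        x = x_new
    return x, maxit

# Toy data (illustrative): strongly diagonally dominant A and small B, so the
# fixed-point map is a contraction and the planted solution is recovered.
rng = np.random.default_rng(0)
n = 50
A = 12.0 * np.eye(n) + rng.uniform(-1.0, 1.0, (n, n))
B = 0.1 * rng.uniform(-1.0, 1.0, (n, n))
x_true = rng.standard_normal(n)
b = A @ x_true - B @ np.abs(x_true)
x, iters = max_based_iteration(A, B, b)
print(iters, np.linalg.norm(x - x_true))  # few iterations, tiny error
```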
