
In this study, we explore mixed-dimensional Thermo-Hydro-Mechanical (THM) models in fractured porous media accounting for Coulomb frictional contact at matrix-fracture interfaces. The simulation of such models plays an important role in many applications, such as hydraulic stimulation in deep geothermal systems and the assessment of induced seismic risks in CO2 storage. We first extend to the mixed-dimensional framework the thermodynamically consistent THM models derived in [16] from the first and second principles of thermodynamics. Two formulations of the energy equation are considered, based either on energy conservation or on the entropy balance, assuming a vanishing thermo-poro-elastic dissipation. Our focus is on space-time discretisations preserving energy estimates for both types of formulations and for a general single-phase fluid thermodynamical model. This is achieved by a Finite Volume discretisation of the non-isothermal flow based on coercive fluxes and a tailored discretisation of the non-conservative convective terms. It is combined with a mixed Finite Element formulation of the contact-mechanical model with face-wise constant Lagrange multipliers accounting for the surface tractions, which preserves the dissipative properties of the contact terms. The discretisations of both THM formulations are investigated and compared in terms of convergence, accuracy and robustness on 2D test cases, including a Discrete Fracture Matrix model with a convection-dominated thermal regime and either a weakly compressible liquid or a highly compressible gas thermodynamical model.

Related content

The ACM/IEEE 23rd International Conference on Model Driven Engineering Languages and Systems (MODELS) is the premier conference series for model-driven software and systems engineering, organized with the support of ACM SIGSOFT and IEEE TCSE. Since 1998, MODELS has covered all aspects of modeling, from languages and methods to tools and applications. Its participants come from diverse backgrounds, including researchers, academics, engineers, and industry professionals. MODELS 2019 is a forum in which participants can exchange cutting-edge research results and innovative practical experience around modeling and model-driven software and systems. This year's edition will give the modeling community further opportunities to advance the foundations of modeling and to present innovative applications of modeling in emerging areas such as cyber-physical systems, embedded systems, socio-technical systems, cloud computing, big data, machine learning, security, open source, and sustainability.
March 6, 2024

We present a new method for causal discovery in linear structural vector autoregressive models. We adapt an idea designed for independent observations to the case of time series while retaining its favorable properties, i.e., explicit error control for false causal discovery, at least asymptotically. We apply our method to several real-world bivariate time series datasets and discuss its findings, which mostly agree with common understanding. The arrow of time in a model can be interpreted as background knowledge on possible causal mechanisms. Hence, our ideas could be extended to incorporating different background knowledge, even for independent observations.

We show that the known list-decoding algorithms for univariate multiplicity and folded Reed-Solomon (FRS) codes can be made to run in nearly-linear time. This yields, to our knowledge, the first known family of codes that can be decoded in nearly linear time, even as they approach the list decoding capacity. Univariate multiplicity codes and FRS codes are natural variants of Reed-Solomon codes that were discovered and studied for their applications to list-decoding. It is known that for every $\epsilon >0$, and rate $R \in (0,1)$, there exist explicit families of these codes that have rate $R$ and can be list-decoded from a $(1-R-\epsilon)$ fraction of errors with constant list size in polynomial time (Guruswami & Wang (IEEE Trans. Inform. Theory, 2013) and Kopparty, Ron-Zewi, Saraf & Wootters (SIAM J. Comput. 2023)). In this work, we present randomized algorithms that perform the above tasks in nearly linear time. Our algorithms have two main components. The first builds upon the lattice-based approach of Alekhnovich (IEEE Trans. Inf. Theory 2005), who designed a nearly linear time list-decoding algorithm for Reed-Solomon codes approaching the Johnson radius. As part of the second component, we design nearly-linear time algorithms for two natural algebraic problems. The first algorithm solves linear differential equations of the form $Q\left(x, f(x), \frac{df}{dx}, \dots,\frac{d^m f}{dx^m}\right) \equiv 0$ where $Q$ has the form $Q(x,y_0,\dots,y_m) = \tilde{Q}(x) + \sum_{i = 0}^m Q_i(x)\cdot y_i$. The second solves functional equations of the form $Q\left(x, f(x), f(\gamma x), \dots,f(\gamma^m x)\right) \equiv 0$ where $\gamma$ is a high-order field element. These algorithms can be viewed as generalizations of classical algorithms of Sieveking (Computing 1972) and Kung (Numer. Math. 1974) for computing the modular inverse of a power series, and might be of independent interest.
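The classical Sieveking-Kung primitive mentioned in the abstract, inverting a power series modulo $x^n$ by Newton iteration, can be sketched in a few lines (a minimal Python illustration with exact rational arithmetic; the paper's algorithms generalize this doubling idea to the differential and functional equations displayed above, and a fast implementation would also use fast polynomial multiplication rather than the quadratic product below):

```python
from fractions import Fraction

def series_mul(a, b, n):
    """Multiply two power series (coefficient lists), truncated mod x^n."""
    c = [Fraction(0)] * n
    for i, ai in enumerate(a[:n]):
        if ai == 0:
            continue
        for j, bj in enumerate(b[:n - i]):
            c[i + j] += ai * bj
    return c

def series_inverse(f, n):
    """Invert a power series f with f[0] != 0, mod x^n, via the Newton step
    g <- g * (2 - f * g), which doubles the precision at each iteration."""
    g = [Fraction(1, 1) / f[0]]
    prec = 1
    while prec < n:
        prec = min(2 * prec, n)
        fg = series_mul(f[:prec], g, prec)
        t = [-c for c in fg]  # compute 2 - f*g
        t[0] += 2
        g = series_mul(g, t, prec)
    return g

# Example: 1 / (1 - x) = 1 + x + x^2 + ... mod x^8
inv = series_inverse([Fraction(1), Fraction(-1)], 8)
```

Only O(log n) Newton steps are needed, so with nearly-linear-time polynomial multiplication the whole inversion runs in nearly linear time, which is the regime the paper's generalizations target.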

We propose a new Bayesian strategy for adaptation to smoothness in nonparametric models based on heavy tailed series priors. We illustrate it in a variety of settings, showing in particular that the corresponding Bayesian posterior distributions achieve adaptive rates of contraction in the minimax sense (up to logarithmic factors) without the need to sample hyperparameters. Unlike many existing procedures, where a form of direct model (or estimator) selection is performed, the method can be seen as performing a soft selection through the prior tail. In Gaussian regression, such heavy tailed priors are shown to lead to (near-)optimal simultaneous adaptation both in the $L^2$- and $L^\infty$-sense. Results are also derived for linear inverse problems, for anisotropic Besov classes, and for certain losses in more general models through the use of tempered posterior distributions. We present numerical simulations corroborating the theory.
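A heavy-tailed series prior of the kind described can be sampled directly; the sketch below is a hypothetical instance (our choice of polynomially decaying scales and Student-t coefficients for illustration, not the paper's exact specification):

```python
import math
import random

def sample_heavy_tailed_series(n_terms, alpha=0.5, df=3.0, rng=random):
    """Draw series coefficients f_j = sigma_j * xi_j with polynomially
    decaying scales sigma_j = j^(-1/2 - alpha) and heavy-tailed
    (Student-t with `df` degrees of freedom) draws xi_j."""
    coefs = []
    for j in range(1, n_terms + 1):
        # chi-squared with df degrees of freedom, built from a Gamma draw
        chi2 = 2.0 * rng.gammavariate(df / 2.0, 1.0)
        xi = rng.gauss(0.0, 1.0) / math.sqrt(chi2 / df)  # Student-t(df)
        coefs.append(j ** (-0.5 - alpha) * xi)
    return coefs

coefs = sample_heavy_tailed_series(50)
```

The heavy tail of each coefficient is what lets the posterior adapt to unknown smoothness without tuning the scales to the truth, in contrast to Gaussian series priors whose scales must match the regularity.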

In this work, we analyze the relation between reparametrizations of gradient flow and the induced implicit bias in linear models, which encompass various basic regression tasks. In particular, we aim at understanding the influence of the model parameters - reparametrization, loss, and link function - on the convergence behavior of gradient flow. Our results provide conditions under which the implicit bias can be well-described and convergence of the flow is guaranteed. We furthermore show how to use these insights for designing reparametrization functions that lead to specific implicit biases which are closely connected to $\ell_p$- or trigonometric regularizers.
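As a toy illustration of the phenomenon studied here (our example, not the paper's construction), consider the well-known Hadamard reparametrization $w = u \odot u$ of linear regression, whose gradient flow started near zero is commonly associated with an $\ell_1$-type implicit bias:

```python
def train_hadamard(a, y, lr=0.01, steps=20000, init=1e-3):
    """Gradient descent on L(u) = 0.5 * ((w . a) - y)^2 with the
    reparametrization w = u * u (elementwise).  Small initialization
    pushes the flow toward a sparse solution w."""
    u = [init] * len(a)
    for _ in range(steps):
        w = [ui * ui for ui in u]
        r = sum(wi * ai for wi, ai in zip(w, a)) - y  # residual
        # chain rule: dL/du_i = r * a_i * 2 * u_i
        u = [ui - lr * r * ai * 2 * ui for ui, ai in zip(u, a)]
    return [ui * ui for ui in u]

w = train_hadamard([3.0, 1.0], 1.0)
# w lands close to the minimum-l1 interpolant [1/3, 0] rather than
# the minimum-l2 interpolant [0.3, 0.1]
```

The same one-equation, two-unknown system solved by plain gradient descent on w would converge to the minimum-$\ell_2$ solution instead, which is the kind of reparametrization-dependent bias the paper characterizes.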

This paper discusses Newton's method and its hybridization with machine learning for the steady-state Navier-Stokes-Darcy model discretized by mixed finite element methods. First, a Newton iterative method is introduced for solving the corresponding discretized problem. Under certain standard conditions, this method is proved to converge quadratically, with a convergence rate independent of the finite element mesh size. Then, a deep learning algorithm is proposed for solving this nonlinear coupled problem. Following the ideas of an earlier work by Huang, Wang and Yang (2020), an Int-Deep algorithm is constructed by combining the previous two methods so as to further improve computational efficiency and robustness. A series of numerical examples is reported to show the numerical performance of the proposed methods.
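The quadratic convergence at stake can be seen on a generic scalar sketch (not the mixed finite element system of the paper): once the iterate is close to a simple root, each Newton step roughly squares the error.

```python
def newton(f, df, x0, tol=1e-14, max_iter=50):
    """Newton's method x <- x - f(x)/f'(x), recording step sizes so the
    quadratic shrinkage of the error is visible."""
    x = x0
    errors = []
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        errors.append(abs(step))
        if abs(step) < tol:
            break
    return x, errors

# Solve x^2 - 2 = 0 starting from x0 = 2; convergence takes only a
# handful of iterations
root, errs = newton(lambda x: x * x - 2, lambda x: 2 * x, 2.0)
```

The mesh-independence result in the paper means that, for the discretized Navier-Stokes-Darcy system, the number of such iterations does not grow as the mesh is refined.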

This paper presents significant advancements in the field of abstract reasoning, particularly for Raven's Progressive Matrices (RPM) and Bongard-Logo problems. We first introduce D2C, a method that redefines concept boundaries in these domains and bridges the gap between high-level concepts and their low-dimensional representations. Leveraging this foundation, we propose D3C, a novel approach for tackling Bongard-Logo problems. D3C estimates the distributions of image representations and measures their Sinkhorn distance to achieve remarkable reasoning accuracy. This innovative method provides new insights into the relationships between images and advances the state-of-the-art in abstract reasoning. To further enhance computational efficiency without sacrificing performance, we introduce D3C-cos. This variant of D3C constrains distribution distances, offering a more computationally efficient solution for RPM problems while maintaining high accuracy. Additionally, we present Lico-Net, a baseline network for RPM that integrates D3C and D3C-cos. By estimating and constraining the distributions of regularity representations, Lico-Net addresses both problem-solving and interpretability challenges, achieving state-of-the-art performance. Finally, we extend our methodology with D4C, an adversarial approach that further refines concept boundaries compared to D2C. Tailored for RPM and Bongard-Logo problems, D4C demonstrates significant improvements in addressing the challenges of abstract reasoning. Overall, our contributions advance the field of abstract reasoning, providing new perspectives and practical solutions to long-standing problems.
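The Sinkhorn distance that D3C measures between image-representation distributions can be computed by the standard Sinkhorn-Knopp scaling iterations; the sketch below is the generic entropic-regularization algorithm for two discrete distributions, not the authors' implementation:

```python
import math

def sinkhorn(mu, nu, cost, eps=0.1, iters=500):
    """Entropic-regularized optimal transport between discrete
    distributions mu and nu.  Alternately rescales the Gibbs kernel
    K = exp(-cost/eps) so the transport plan's marginals match mu and
    nu, then returns <plan, cost> and the plan itself."""
    n, m = len(mu), len(nu)
    K = [[math.exp(-cost[i][j] / eps) for j in range(m)] for i in range(n)]
    u, v = [1.0] * n, [1.0] * m
    for _ in range(iters):
        u = [mu[i] / sum(K[i][j] * v[j] for j in range(m)) for i in range(n)]
        v = [nu[j] / sum(K[i][j] * u[i] for i in range(n)) for j in range(m)]
    plan = [[u[i] * K[i][j] * v[j] for j in range(m)] for i in range(n)]
    dist = sum(plan[i][j] * cost[i][j] for i in range(n) for j in range(m))
    return dist, plan

# Identical 2-point distributions with zero diagonal cost: the plan is
# nearly diagonal and the distance is close to zero
d, plan = sinkhorn([0.5, 0.5], [0.5, 0.5], [[0.0, 1.0], [1.0, 0.0]])
```

Because each iteration is just matrix-vector scaling, this distance is cheap and differentiable, which is presumably what makes it attractive as a training signal for distribution-level reasoning.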

Here, we examine a fully-discrete Semi-Lagrangian scheme for a mean-field game price formation model. We show that the discretization is monotone as a multivalued operator and prove the uniqueness of the discretized solution. Moreover, we show that the limit of the discretization converges to the weak solution of the continuous price formation mean-field game using monotonicity methods. This scheme performs substantially better than standard methods by giving reliable results within a few iterations, as several numerical simulations and comparisons at the end of the paper illustrate.

We introduce PennyLane's Lightning suite, a collection of high-performance state-vector simulators targeting CPU, GPU, and HPC-native architectures and workloads. Quantum applications such as QAOA, VQE, and synthetic workloads are implemented to demonstrate the supported classical computing architectures and showcase the scale of problems that can be simulated using our tooling. We benchmark the performance of Lightning with backends supporting CPUs, as well as NVIDIA and AMD GPUs, and compare the results to other commonly used high-performance simulator packages, demonstrating where Lightning's implementations give performance leads. We show improved CPU performance by employing explicit SIMD intrinsics and multi-threading, batched task-based execution across multiple GPUs, and distributed forward and gradient-based quantum circuit executions across multiple nodes. Our data shows we can comfortably simulate a variety of circuits, giving examples with up to 30 qubits on a single device or node, and up to 41 qubits using multiple nodes.
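What a state-vector simulator fundamentally does can be seen in a few lines of pure Python (a toy for intuition only, bearing no relation to Lightning's optimized SIMD and GPU kernels): the 2^n complex amplitudes are updated in place, gate by gate, by mixing pairs of amplitudes whose indices differ in the target qubit's bit.

```python
import math

def apply_gate(state, gate, target, n_qubits):
    """Apply a 1-qubit gate (2x2 matrix) to `target`, pairing amplitudes
    whose binary indices differ only in that bit."""
    new = state[:]
    step = 1 << target
    for i in range(len(state)):
        if i & step == 0:
            a, b = state[i], state[i | step]
            new[i] = gate[0][0] * a + gate[0][1] * b
            new[i | step] = gate[1][0] * a + gate[1][1] * b
    return new

def apply_cnot(state, control, target):
    """Swap amplitudes of index pairs differing in the target bit,
    restricted to indices whose control bit is set."""
    new = state[:]
    for i in range(len(state)):
        if i & (1 << control):
            new[i] = state[i ^ (1 << target)]
    return new

# Prepare a Bell state from |00>: H on qubit 0, then CNOT(0 -> 1)
s = 1 / math.sqrt(2)
H = [[s, s], [s, -s]]
state = [1.0, 0.0, 0.0, 0.0]
state = apply_gate(state, H, 0, 2)
state = apply_cnot(state, 0, 1)
# state is [1/sqrt(2), 0, 0, 1/sqrt(2)]
```

The 2^n memory and per-gate sweep over all amplitudes is exactly why the qubit counts quoted above (30 on one node, 41 distributed) require the engineering the paper describes.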

We present a new way to summarize and select mixture models via the hierarchical clustering tree (dendrogram) of an overfitted latent mixing measure. Our proposed method bridges agglomerative hierarchical clustering and mixture modeling. The dendrogram's construction is derived from the theory of convergence of the mixing measures, and as a result, we can both consistently select the true number of mixing components and obtain the pointwise optimal convergence rate for parameter estimation from the tree, even when the model parameters are only weakly identifiable. In theory, it explicates the choice of the optimal number of clusters in hierarchical clustering. In practice, the dendrogram reveals more information on the hierarchy of subpopulations compared to traditional ways of summarizing mixture models. Several simulation studies are carried out to support our theory. We also illustrate the methodology with an application to single-cell RNA sequence analysis.
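The agglomerative construction over fitted components can be sketched generically; the merge rule below (distance between component means, with weight-averaged merged means) is a hypothetical simplification for 1-D mixtures, not the authors' exact criterion derived from mixing-measure convergence theory:

```python
def build_dendrogram(weights, means):
    """Repeatedly merge the two closest components of a fitted 1-D
    mixture, recording (member_indices, merge_height).  A merged
    component carries the summed weight and weight-averaged mean."""
    comps = [([i], w, m) for i, (w, m) in enumerate(zip(weights, means))]
    merges = []
    while len(comps) > 1:
        best = None
        for i in range(len(comps)):
            for j in range(i + 1, len(comps)):
                d = abs(comps[i][2] - comps[j][2])
                if best is None or d < best[0]:
                    best = (d, i, j)
        d, i, j = best
        (li, wi, mi), (lj, wj, mj) = comps[i], comps[j]
        merged = (li + lj, wi + wj, (wi * mi + wj * mj) / (wi + wj))
        merges.append((li + lj, d))
        comps = [c for k, c in enumerate(comps) if k not in (i, j)] + [merged]
    return merges

# Overfitted mixture: components at 0.0 and 0.1 are near-duplicates of
# one true subpopulation, so they merge first, at a low height
merges = build_dendrogram([0.3, 0.2, 0.5], [0.0, 0.1, 5.0])
```

The paper's point is that the heights in such a tree separate spurious duplicate components (low merges) from genuine subpopulations (high merges), which is what makes consistent selection of the number of components possible.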

We propose Stein-type estimators for zero-inflated Bell regression models by incorporating information on model parameters. These estimators combine the advantages of unrestricted and restricted estimators. We derive the asymptotic distributional properties, including bias and mean squared error, for the proposed shrinkage estimators. Monte Carlo simulations demonstrate the superior performance of our shrinkage estimators across various scenarios. Furthermore, we apply the proposed estimators to analyze a real dataset, showcasing their practical utility.
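The generic Stein-type combination underlying such estimators can be sketched as follows (a standard textbook form with shrinkage factor 1 - (k-2)/T; the exact factor and test statistic for the zero-inflated Bell model may differ):

```python
def stein_shrinkage(beta_u, beta_r, test_stat, k):
    """Stein-type estimator: pull the unrestricted estimate beta_u
    toward the restricted estimate beta_r by the data-driven factor
    1 - (k - 2) / T, where T is the test statistic for the restriction
    and k >= 3 the number of restrictions.  Large T (strong evidence
    against the restriction) leaves beta_u nearly unchanged."""
    shrink = 1.0 - (k - 2) / test_stat
    return [br + shrink * (bu - br) for bu, br in zip(beta_u, beta_r)]

# Strong evidence against the restriction: stay close to beta_u
est = stein_shrinkage([1.0, 2.0, 3.0], [0.0, 0.0, 0.0], test_stat=100.0, k=3)
```

When the restriction is nearly correct, T is small, the factor shrinks the estimate heavily toward beta_r, and the mean squared error gain over the unrestricted estimator is realized; this trade-off is what the paper's asymptotic bias and MSE expressions quantify.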
