
A new numerical continuum \textit{one-domain} approach (ODA) solver is presented for the simulation of the transfer processes between a free fluid and a porous medium. The solver is developed in the \textit{mesoscopic} scale framework, where a continuous variation of the physical parameters of the porous medium (e.g., porosity and permeability) is assumed. The Navier-Stokes-Brinkman equations are solved along with the continuity equation, under the hypothesis of an incompressible fluid. The porous medium is assumed to be fully saturated and may be anisotropic. The domain is discretized with unstructured meshes allowing local refinements. A fractional time step procedure is applied, in which one predictor and two corrector steps are solved within each time iteration. The predictor step is solved in the framework of a marching-in-space-and-time procedure, which offers important numerical advantages. The two corrector steps require the solution of large linear systems whose matrices are sparse, symmetric, and positive definite, with the $\mathcal{M}$-matrix property on Delaunay meshes. A fast and efficient solution is obtained using a preconditioned conjugate gradient method. The discretization adopted for the two corrector steps can be regarded as a Two-Point-Flux-Approximation (TPFA) scheme which, unlike standard TPFA schemes, does not require the grid mesh to be $\mathbf{K}$-orthogonal (with $\mathbf{K}$ the anisotropy tensor). As demonstrated in the provided test cases, the proposed scheme correctly retains the anisotropy effects within the porous medium. Furthermore, it overcomes the restrictions of the mesoscopic scale one-domain approaches previously proposed in the literature.
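For reference, the corrector steps amount to solving sparse, symmetric positive definite systems with a preconditioned conjugate gradient method. The following minimal sketch (not the paper's solver: it assumes a model Poisson-type SPD matrix and a simple Jacobi preconditioner in place of the actual Navier-Stokes-Brinkman discretization) shows such a solve with SciPy.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Model SPD system standing in for a corrector-step matrix (assumption: 2-D Poisson-type).
n = 50
I = sp.identity(n)
T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
A = (sp.kron(I, T) + sp.kron(T, I)).tocsr()      # sparse, symmetric, positive definite
b = np.ones(A.shape[0])

# Jacobi (diagonal) preconditioner: SPD whenever A is SPD.
M = spla.LinearOperator(A.shape, matvec=lambda x: x / A.diagonal())

x, info = spla.cg(A, b, M=M)                     # preconditioned conjugate gradient
print(info, np.linalg.norm(A @ x - b))           # info == 0 means convergence
```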

Related content

We address the computation of the degrees of minors of a noncommutative symbolic matrix of the form \[ A[c] := \sum_{k=1}^m A_k t^{c_k} x_k, \] where the $A_k$ are matrices over a field $\mathbb{K}$, the $x_k$ are noncommutative variables, the $c_k$ are integer weights, and $t$ is a commuting variable specifying the degree. This problem extends the noncommutative Edmonds' problem (Ivanyos et al. 2017) and can formulate various combinatorial optimization problems. Extending the work of Hirai (2018) and Hirai and Ikeda (2022), we provide novel duality theorems and a polyhedral characterization for the maximum degrees of minors of $A[c]$ of all sizes, and we develop a strongly polynomial-time algorithm for computing them. This algorithm can be viewed as a unified algebraization of the classical Hungarian method for bipartite matching and the weight-splitting algorithm for linear matroid intersection. As applications, we provide polynomial-time algorithms for weighted fractional linear matroid matching and for linear optimization over rank-2 Brascamp-Lieb polytopes.
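For orientation, the classical combinatorial special case that this algebraic algorithm unifies and generalizes, maximum-weight bipartite matching solved by the Hungarian method, can be sketched as follows (a toy weight matrix chosen purely for illustration; this is not the paper's algorithm for $A[c]$).

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Toy weight matrix of a bipartite graph (rows vs. columns).
W = np.array([[4.0, 1.0, 3.0],
              [2.0, 0.0, 5.0],
              [3.0, 2.0, 2.0]])

# Hungarian method: maximum-weight perfect matching.
rows, cols = linear_sum_assignment(W, maximize=True)
print(list(zip(rows, cols)), W[rows, cols].sum())
```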

We propose a simple method for simulating a general class of non-unitary dynamics as a linear combination of Hamiltonian simulation (LCHS) problems. LCHS relies neither on converting the problem into a dilated linear system problem nor on the spectral mapping theorem; the latter is the mathematical foundation of many quantum algorithms for a wide variety of tasks involving non-unitary processes, such as the quantum singular value transformation (QSVT). The LCHS method can achieve optimal cost in terms of state preparation. We also demonstrate an application to open quantum dynamics simulation using the complex absorbing potential method, with near-optimal dependence on all parameters.
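As background, the kind of identity underlying LCHS (for the time-independent case, as we understand it from the LCHS literature; stated here as an assumption, not verbatim from this paper) expresses a contraction semigroup as a Cauchy-weighted integral of Hamiltonian evolutions: for $A = L + iH$ with $L \succeq 0$ and $t \ge 0$, $$ e^{-At} \;=\; \int_{\mathbb{R}} \frac{1}{\pi\,(1+k^2)}\, e^{-i(kL+H)t}\,\mathrm{d}k, $$ so that the non-unitary evolution becomes a linear combination (integral) of the unitaries $e^{-i(kL+H)t}$, each of which is accessible via Hamiltonian simulation.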

Optimizing multiple competing objectives is a common problem across science and industry. The inherent trade-off between these objectives leads to the task of exploring their Pareto front. A meaningful quantity for this purpose is the hypervolume indicator, which is used in Bayesian Optimization (BO) and Evolutionary Algorithms (EAs). However, the computational complexity of calculating the hypervolume scales unfavorably with the number of objectives and data points, which restricts its use in these common multi-objective optimization frameworks. To overcome these restrictions, we propose to approximate the hypervolume function with a deep neural network, which we call DeepHV. For better sample efficiency and generalization, we exploit the fact that the hypervolume is scale-equivariant in each of the objectives as well as permutation invariant with respect to both the objectives and the samples, by using a deep neural network that is equivariant with respect to the combined group of scalings and permutations. We evaluate our method against exact and approximate hypervolume methods in terms of accuracy, computation time, and generalization. We also apply and compare our methods to state-of-the-art multi-objective BO methods and EAs on a range of synthetic benchmark test cases. The results show that our methods are promising for such multi-objective optimization tasks.
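As a baseline for what DeepHV approximates, the exact hypervolume of a two-objective (minimization) Pareto front reduces to a sum of rectangle areas; the minimal sketch below (a generic illustration with hypothetical points and reference point, not part of the DeepHV method) shows why the two-objective case is cheap, in contrast to the many-objective case that motivates the learned surrogate.

```python
import numpy as np

def hypervolume_2d(points, ref):
    """Exact hypervolume of a 2-D Pareto front (minimization) w.r.t. a reference point."""
    pts = np.asarray(sorted(map(tuple, points)))   # sort by the first objective
    hv, prev_y = 0.0, ref[1]
    for x, y in pts:
        if y < prev_y:                             # skip dominated points
            hv += (ref[0] - x) * (prev_y - y)      # add the newly dominated rectangle
            prev_y = y
    return hv

print(hypervolume_2d([[1, 3], [2, 1]], ref=[4, 4]))   # 7.0
```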

High-dimensional central limit theorems have been intensively studied, with most of the focus on the case where the data are sub-Gaussian or sub-exponential. However, heavier tails are omnipresent in practice. In this article, we study the critical growth rates of the dimension $d$ below which Gaussian approximations are asymptotically valid but beyond which they are not. We are particularly interested in how these thresholds depend on the number of moments $m$ that the observations possess. For every $m\in(2,\infty)$, we construct i.i.d. random vectors $\textbf{X}_1,\dots,\textbf{X}_n$ in $\mathbb{R}^d$, the entries of which are independent and have a common distribution (independent of $n$ and $d$) with finite $m$th absolute moment, such that the following holds: if there exists an $\varepsilon\in(0,\infty)$ such that $d/n^{m/2-1+\varepsilon}\not\to 0$, then the Gaussian approximation error (GAE) satisfies $$ \limsup_{n\to\infty}\sup_{t\in\mathbb{R}}\left[\mathbb{P}\left(\max_{1\leq j\leq d}\frac{1}{\sqrt{n}}\sum_{i=1}^n\textbf{X}_{ij}\leq t\right)-\mathbb{P}\left(\max_{1\leq j\leq d}\textbf{Z}_j\leq t\right)\right]=1,$$ where $\textbf{Z} \sim \mathsf{N}_d(\textbf{0}_d,\mathbf{I}_d)$. On the other hand, a result in Chernozhukov et al. (2023a) implies that the left-hand side above is zero if merely $d/n^{m/2-1-\varepsilon}\to 0$ for some $\varepsilon\in(0,\infty)$. In this sense, there is a moment-dependent phase transition at the threshold $d=n^{m/2-1}$, above which the limiting GAE jumps from zero to one.
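The GAE above can also be probed numerically. The sketch below (our own illustrative Monte Carlo, not the paper's construction: standardized $t_3$ entries serve as a stand-in for a heavy-tailed distribution with roughly three moments, and finite-sample results will not sharply reproduce the asymptotic threshold) estimates a Kolmogorov-type distance between the max statistic and the Gaussian maximum.

```python
import numpy as np

rng = np.random.default_rng(0)

def gae_estimate(n, d, nu=3.0, reps=2000):
    """Monte Carlo estimate of the max-statistic vs. Gaussian-max discrepancy."""
    scale = np.sqrt(nu / (nu - 2.0))                   # standard deviation of a t_nu variable
    t_max = np.empty(reps)
    z_max = np.empty(reps)
    for r in range(reps):
        X = rng.standard_t(nu, size=(n, d)) / scale    # i.i.d. heavy-tailed entries, variance 1
        t_max[r] = (X.sum(axis=0) / np.sqrt(n)).max()
        z_max[r] = rng.standard_normal(d).max()
    grid = np.linspace(min(t_max.min(), z_max.min()), max(t_max.max(), z_max.max()), 200)
    ecdf = lambda s, g: (s[:, None] <= g[None, :]).mean(axis=0)
    return np.abs(ecdf(t_max, grid) - ecdf(z_max, grid)).max()

print(gae_estimate(n=200, d=50))
```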

A characteristic mode (CM) method that relies on a global multi-trace formulation (MTF) of surface integral equations is proposed to compute the modes and the resonance frequencies of microstrip patch antennas with finite dielectric substrates and ground planes. Compared to the coupled formulation of the electric field and Poggio-Miller-Chang-Harrington-Wu-Tsai integral equations, the global MTF allows for a more direct implementation of a sub-structure CM method. This is achieved by representing the coupling of the electromagnetic fields on the substrate and ground plane in the form of a numerical Green function matrix, which yields a more compact generalized eigenvalue equation. The resulting sub-structure CM method avoids the cumbersome computation of the multilayered medium Green function (unlike CM methods that rely on mixed-potential integral equations) and the volumetric discretization of the substrate (unlike CM methods that rely on volume-surface integral equations). Numerical results show that it is a reliable and accurate approach for predicting the modal behavior of electromagnetic fields on practical microstrip antennas.
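To illustrate only the generalized-eigenvalue structure to which any CM formulation leads, the toy sketch below solves the classical impedance-matrix characteristic mode problem $\mathbf{X}\mathbf{J}_n = \lambda_n\mathbf{R}\mathbf{J}_n$ with synthetic matrices; this is not the paper's sub-structure MTF formulation, whose (more compact) matrix pencil is built from the numerical Green function matrix.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(1)
N = 6                                                  # toy number of basis functions
B = rng.standard_normal((N, N))
R = B @ B.T + N * np.eye(N)                            # synthetic symmetric positive definite "R"
X = rng.standard_normal((N, N)); X = 0.5 * (X + X.T)   # synthetic symmetric "X"

lam, J = eigh(X, R)                                # generalized eigenvalue problem X J = lam R J
modal_significance = 1.0 / np.abs(1.0 + 1j * lam)  # standard CM modal significance
print(lam)
print(modal_significance)
```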

A second-order finite volume scheme is proposed and analyzed for a $2\times 2$ system of non-linear partial differential equations. These equations model the dynamics of growing sandpiles created by a vertical source on a flat, bounded rectangular table in multiple dimensions. The well-balancedness of the scheme is ensured through a modified limitation approach that allows the scheme to reduce to a well-balanced first-order scheme near the steady state while maintaining second-order accuracy away from it. The well-balanced property of the scheme is proven analytically in one dimension and demonstrated numerically in two dimensions. The numerical experiments also show that, compared to the first-order schemes existing in the literature, the second-order scheme reduces finite-time oscillations, requires fewer time iterations to reach the steady state, and gives a sharper resolution of the physical structure of the sandpile.
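As a generic illustration of the limitation machinery involved (the paper's specific modification, which switches to the well-balanced first-order scheme near the steady state, is not reproduced here), a standard minmod-limited slope computation for a MUSCL-type second-order reconstruction looks as follows.

```python
import numpy as np

def minmod(a, b):
    """Minmod limiter: zero at local extrema, otherwise the smaller of the two slopes."""
    return np.where(a * b > 0.0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def limited_slopes(u):
    """Limited cell slopes from a 1-D array of cell averages (interior cells only)."""
    backward = u[1:-1] - u[:-2]
    forward = u[2:] - u[1:-1]
    return minmod(backward, forward)

u = np.array([0.0, 0.2, 0.7, 0.9, 0.8, 1.5, 2.0])
print(limited_slopes(u))   # zero slopes at the local extrema (0.9 and 0.8)
```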

New lower order $H(\textrm{div})$-conforming finite elements for symmetric tensors are constructed in arbitrary dimension. The space of shape functions is defined by enriching the symmetric quadratic polynomial space with the $(d+1)$-order normal-normal face bubble space. The reduced counterpart has only $d(d+1)^2$ degrees of freedom. In two dimensions, basis functions are given explicitly in terms of barycentric coordinates. Lower order conforming finite element elasticity complexes, starting from the Bell element, are developed in two dimensions. These finite elements for symmetric tensors are applied to devise robust mixed finite element methods for the linear elasticity problem, which possess uniform error estimates with respect to the Lam\'{e} coefficient $\lambda$ and superconvergence for the displacement. Numerical results are provided to verify the theoretical convergence rates.
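For orientation, a routine dimension count (our arithmetic, not taken from the paper) puts these spaces in perspective: the symmetric-matrix-valued quadratic polynomial space on a $d$-simplex $T$ has dimension $$ \dim \mathbb{P}_2(T;\mathbb{S}) \;=\; \binom{d+2}{2}\,\frac{d(d+1)}{2}, $$ i.e. $18$ for $d=2$ and $60$ for $d=3$, while the stated reduced element has $d(d+1)^2$ degrees of freedom, i.e. $18$ for $d=2$ and $48$ for $d=3$.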

A numerical method is proposed for the simulation of composite open quantum systems. It is based on Lindblad master equations and adiabatic elimination. Each subsystem is assumed to converge exponentially towards a stationary subspace, slightly impacted by some decoherence channels and weakly coupled to the other subsystems. The numerical method is based on a perturbation analysis with an asymptotic expansion and exploits the formulation of the slow dynamics in reduced dimension. It relies on the invariant operators of the local and nominal dissipative dynamics attached to each subsystem. The second-order expansion can be computed using only local numerical calculations, avoiding computations on the tensor-product Hilbert space attached to the full system. This numerical method is particularly well suited for autonomous quantum error correction schemes. Simulations of such reduced models agree with full-model simulations for typical gates acting on one and two cat-qubits (Z, ZZ and CNOT) when the mean photon number of each cat-qubit is less than 8. For larger mean photon numbers and for gates with three cat-qubits (ZZZ and CCNOT), full-model simulations are almost impossible, whereas reduced-model simulations remain accessible. In particular, they capture both the dominant phase-flip error rate and the very small bit-flip error rate with its exponential suppression versus the mean photon number.
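For contrast with such reduced models, even a single toy mode already requires propagating a full density matrix on a truncated Hilbert space when simulated directly. The sketch below (a generic single-mode Lindblad example with photon loss, not the paper's composite cat-qubit model or its adiabatic elimination) illustrates the kind of full-space simulation whose cost explodes for composite systems.

```python
import numpy as np
from scipy.integrate import solve_ivp

N = 10                                        # Fock-space truncation (toy size)
a = np.diag(np.sqrt(np.arange(1, N)), 1)      # annihilation operator
H = a.conj().T @ a                            # toy Hamiltonian: photon number
L = np.sqrt(0.1) * a                          # single-photon loss channel

def rhs(t, y):
    """Lindblad right-hand side, with rho split into real and imaginary parts."""
    rho = (y[:N * N] + 1j * y[N * N:]).reshape(N, N)
    drho = (-1j * (H @ rho - rho @ H)
            + L @ rho @ L.conj().T
            - 0.5 * (L.conj().T @ L @ rho + rho @ L.conj().T @ L))
    return np.concatenate([drho.real.ravel(), drho.imag.ravel()])

rho0 = np.zeros((N, N), dtype=complex)
rho0[1, 1] = 1.0                              # start in the one-photon Fock state
y0 = np.concatenate([rho0.real.ravel(), rho0.imag.ravel()])
sol = solve_ivp(rhs, (0.0, 20.0), y0, rtol=1e-8)

rho_T = (sol.y[:N * N, -1] + 1j * sol.y[N * N:, -1]).reshape(N, N)
print(np.trace(rho_T).real, np.trace(H @ rho_T).real)   # trace stays ~1, photon number decays
```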

We present the conditional determinantal point process (DPP) approach to obtain new (mostly Fredholm determinantal) expressions for various eigenvalue statistics in random matrix theory. It is well known that many (especially $\beta=2$) eigenvalue $n$-point correlation functions are given in terms of $n\times n$ determinants, i.e., they are continuous DPPs. We exploit a derived kernel of the conditional DPP, which gives the $n$-point correlation function conditioned on the event that some eigenvalues already exist at fixed locations. Using such kernels, we obtain new determinantal expressions for the joint densities of the $k$ largest eigenvalues, the probability density functions of the $k^\text{th}$ largest eigenvalue, the density of the first eigenvalue spacing, and more. Our formulae are highly amenable to numerical computation, and we provide various numerical experiments. Several numerical values that previously required hours of computing time can now be computed in seconds with our expressions, which demonstrates the effectiveness of our approach. We also show that our technique can be applied to the efficient sampling of DR paths of the Aztec diamond domino tiling. Further extending the conditional DPP sampling technique, we sample Airy processes from the extended Airy kernel. Additionally, we propose a sampling method for non-Hermitian projection DPPs.
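The basic identity behind such conditional kernels takes the following standard form (stated here as background, as we understand it, and not verbatim from the paper): for a DPP with correlation kernel $K$, conditioning on particles being present at fixed locations $Y=\{y_1,\dots,y_k\}$ yields a determinantal process with the Schur-complement kernel $$ K^{Y}(x,x') \;=\; K(x,x') - K(x,Y)\,\big[K(Y,Y)\big]^{-1}\,K(Y,x'), $$ since $\det[K]_{B\cup Y} = \det[K]_{Y}\,\det\big[K^{Y}\big]_{B}$ for any finite set $B$, so the conditional correlation functions are again determinants.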

A new mechanical model of noncircular shallow tunnelling that accounts for the initial stress field is proposed in this paper; it constrains the far-field ground surface to eliminate the displacement singularity at infinity, and the originally unbalanced tunnel excavation problem of existing solutions is turned into an equilibrium problem with mixed boundaries. By applying analytic continuation, the mixed boundaries are transformed into a homogeneous Riemann-Hilbert problem, which is subsequently solved via an efficient and accurate iterative method with boundary conditions of static equilibrium, displacement single-valuedness, and traction along the tunnel periphery. The Lanczos filtering technique is used in the final stress and displacement solution to reduce the Gibbs phenomenon caused by the constrained far-field ground surface, yielding more accurate results. Several numerical cases are conducted to thoroughly verify the proposed solution by examining the boundary conditions and comparing with existing solutions, and all the results are in good agreement. Further numerical cases are then conducted to investigate the stress and deformation distributions along the ground surface and tunnel periphery, and several engineering recommendations are given. The limitations of the proposed solution are also discussed for objectivity.
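The Lanczos filtering used above to suppress Gibbs oscillations can be illustrated in a much simpler one-dimensional setting (a square-wave Fourier series, not the paper's complex-variable solution): the sigma factors damp the high harmonics of a truncated series and visibly reduce the overshoot near the jump.

```python
import numpy as np

def square_wave_partial_sum(x, N, lanczos=True):
    """Truncated Fourier series of a unit square wave, optionally with Lanczos sigma factors."""
    s = np.zeros_like(x)
    for n in range(1, N, 2):                        # odd harmonics only
        sigma = np.sinc(n / N) if lanczos else 1.0  # np.sinc(t) = sin(pi*t)/(pi*t)
        s += sigma * (4.0 / np.pi) * np.sin(n * x) / n
    return s

x = np.linspace(-np.pi, np.pi, 2001)
raw = square_wave_partial_sum(x, 41, lanczos=False)
filtered = square_wave_partial_sum(x, 41, lanczos=True)
print(raw.max(), filtered.max())   # the Gibbs overshoot is visibly reduced by the filter
```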
