
In this paper, we propose an offline-online strategy based on the Localized Orthogonal Decomposition (LOD) method for elliptic multiscale problems with randomly perturbed diffusion coefficient. We consider a periodic deterministic coefficient with local defects that occur with probability $p$. The offline phase pre-computes entries of global LOD stiffness matrices on a single reference element (exploiting the periodicity) for a selection of defect configurations. Given a sample of the perturbed diffusion coefficient, the corresponding LOD stiffness matrix is then computed in the online phase by taking linear combinations of the pre-computed entries. Our computable error estimates show that this yields a good coarse-scale approximation of the solution for small $p$, which is illustrated by extensive numerical experiments. This makes the proposed technique attractive already for moderate sample sizes in a Monte Carlo simulation.
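
As a rough illustration of the offline-online splitting (not the paper's actual implementation), the sketch below assembles a stiffness matrix as a linear combination of pre-computed per-element contributions, selected by a Bernoulli($p$) defect sample; all names and the toy matrices are hypothetical stand-ins for the LOD quantities.

```python
# Toy sketch of the offline-online idea (hypothetical names, toy matrices):
# offline, contributions for a small set of defect configurations are stored;
# online, a sampled defect pattern selects which contribution each element uses.
import numpy as np

rng = np.random.default_rng(0)
n_dof, n_elements, p = 8, 20, 0.1

# Offline: pre-computed stiffness contributions per element and configuration
# (index 0 = unperturbed periodic cell, index 1 = cell with a defect).
offline = {
    cfg: [np.diag(rng.uniform(1.0, 2.0, n_dof)) for _ in range(n_elements)]
    for cfg in (0, 1)
}

def online_assembly(defects):
    """Assemble the global matrix by combining pre-computed contributions."""
    K = np.zeros((n_dof, n_dof))
    for e, has_defect in enumerate(defects):
        K += offline[int(has_defect)][e]
    return K

# One Monte Carlo sample: each element carries a defect with probability p.
sample = rng.random(n_elements) < p
K_sample = online_assembly(sample)
print(K_sample.shape, int(sample.sum()), "defective elements")
```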

Related content

Features introduced in iOS 8 for interaction between apps and between apps and the system:
  • Today (iOS and OS X): widgets for the Today view of Notification Center
  • Share (iOS and OS X): post content to web services or share content with others
  • Actions (iOS and OS X): app extensions to view or manipulate inside another app
  • Photo Editing (iOS): edit a photo or video in Apple's Photos app with extensions from third-party apps
  • Finder Sync (OS X): remote file storage in the Finder with support for Finder content annotation
  • Storage Provider (iOS): an interface between files inside an app and other apps on a user's device
  • Custom Keyboard (iOS): system-wide alternative keyboards

We propose a novel variant of the Localized Orthogonal Decomposition (LOD) method for time-harmonic scattering problems of Helmholtz type with high wavenumber $\kappa$. On a coarse mesh of width $H$, the proposed method identifies local finite element source terms that yield rapidly decaying responses under the solution operator. They can be constructed to high accuracy from independent local snapshot solutions on patches of width $\ell H$ and are used as problem-adapted basis functions in the method. In contrast to the classical LOD and other state-of-the-art multi-scale methods, the localization error decays super-exponentially as the oversampling parameter $\ell$ is increased. This implies that optimal convergence is observed under the substantially relaxed oversampling condition $\ell \gtrsim (\log \tfrac{\kappa}{H})^{(d-1)/d}$ with $d$ denoting the spatial dimension. Numerical experiments demonstrate the significantly improved offline and online performance of the method also in the case of heterogeneous media and perfectly matched layers.
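
For concreteness, the relaxed oversampling condition can be evaluated numerically; the snippet below (purely illustrative, with arbitrary parameter values) shows how slowly the required oversampling parameter $\ell$ grows with $\kappa/H$.

```python
# Evaluate the relaxed oversampling condition l >~ (log(kappa/H))^((d-1)/d)
# for a few illustrative (arbitrary) parameter choices.
import math

def required_oversampling(kappa, H, d):
    return math.log(kappa / H) ** ((d - 1) / d)

H = 1.0 / 64
for kappa in (2**5, 2**8, 2**11):
    print(f"kappa={kappa:6d}, H={H:.4f} -> l >~ {required_oversampling(kappa, H, 2):.2f}")
```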

In this paper, the generalized finite element method (GFEM) for solving second order elliptic equations with rough coefficients is studied. New optimal local approximation spaces for GFEMs based on local eigenvalue problems involving a partition of unity are presented. These new spaces have advantages over those proposed in [I. Babuska and R. Lipton, Multiscale Model.\;\,Simul., 9 (2011), pp.~373--406]. First, in addition to a nearly exponential decay rate of the local approximation errors with respect to the dimensions of the local spaces, the rate of convergence with respect to the size of the oversampling region is also established. Second, the theoretical results hold for problems with mixed boundary conditions defined on general Lipschitz domains. Finally, an efficient and easy-to-implement technique for generating the discrete $A$-harmonic spaces is proposed which relies on solving an eigenvalue problem associated with the Dirichlet-to-Neumann operator, leading to a substantial reduction in computational cost. Numerical experiments are presented to support the theoretical analysis and to confirm the effectiveness of the new method.
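
The local space construction ultimately reduces to a generalized eigenvalue problem whose leading eigenvectors span the local approximation space. The sketch below shows only that generic algebraic step with SciPy on random symmetric positive definite stand-ins; the actual operators (the discrete $A$-harmonic spaces and the Dirichlet-to-Neumann map) are problem-specific and not reproduced here.

```python
# Generic step: keep the leading eigenvectors of a symmetric generalized
# eigenproblem A x = lambda B x as a local approximation basis.
# A and B here are random SPD stand-ins for the problem-specific operators.
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(1)
n, n_modes = 50, 6
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)          # SPD stand-in
N_ = rng.standard_normal((n, n))
B = N_ @ N_.T + n * np.eye(n)        # SPD stand-in

# eigh returns eigenvalues in ascending order; keep the largest n_modes.
vals, vecs = eigh(A, B)
local_basis = vecs[:, -n_modes:]
print("selected eigenvalues:", np.round(vals[-n_modes:], 3))
```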

Learning the parameters of a linear time-invariant dynamical system (LTIDS) is a problem of current interest. In many applications, one is interested in jointly learning the parameters of multiple related LTIDS, which remains unexplored to date. To that end, we develop a joint estimator for learning the transition matrices of LTIDS that share common basis matrices. Further, we establish finite-time error bounds that depend on the underlying sample size, dimension, number of tasks, and spectral properties of the transition matrices. The results are obtained under mild regularity assumptions and showcase the gains from pooling information across LTIDS, in comparison to learning each system separately. We also study the impact of misspecifying the joint structure of the transition matrices and show that the established results are robust in the presence of moderate misspecifications.
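
As background (and not the joint estimator of the paper), the standard single-system least-squares estimate of a transition matrix from one trajectory is sketched below; the joint method additionally constrains the transition matrices to share common basis matrices across systems.

```python
# Standard per-system least-squares estimate of a transition matrix A
# from one trajectory x_0, ..., x_T with x_{t+1} = A x_t + noise.
# The joint estimator in the paper would additionally tie several systems
# together through shared basis matrices; that structure is omitted here.
import numpy as np

rng = np.random.default_rng(2)
d, T = 4, 500
A_true = 0.5 * rng.standard_normal((d, d)) / np.sqrt(d)

x = np.zeros((T + 1, d))
for t in range(T):
    x[t + 1] = A_true @ x[t] + 0.1 * rng.standard_normal(d)

X, Y = x[:-1], x[1:]                              # regressors and responses
A_hat = np.linalg.lstsq(X, Y, rcond=None)[0].T    # solves Y ~ X A^T
print("estimation error:", np.linalg.norm(A_hat - A_true))
```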

Consider two stationary time series with heavy-tailed marginal distributions. We want to detect whether they have a causal relation, that is, if a change in one of them causes a change in the other. Usual methods for causality detection are not well suited if the causal mechanisms only manifest themselves in extremes. In this article, we propose a new approach that can help with causality detection in such a non-traditional case. We define the so-called causal tail coefficient for time series, which, under some assumptions, correctly detects the asymmetrical causal relations between different time series. The advantage is that this method works even if nonlinear relations and common ancestors are present. Moreover, we discuss how our method can help detect a time delay between the two time series. We describe some of its properties and show how it performs on simulations. Finally, we show on space-weather and hydro-meteorological data sets how this method works in practice.
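
The following sketch is a deliberately simplified extremal heuristic in the same spirit (it is not the causal tail coefficient defined in the article): it checks whether large values of one series tend to be followed by large rank-values of the other within a short window, and compares the two directions.

```python
# Simplified illustrative heuristic (not the article's causal tail coefficient):
# after large values of one series, how extreme does the other series get
# within a short forward window?  An asymmetry between the two directions
# hints at a causal direction in the extremes.
import numpy as np

def rank_transform(z):
    # Empirical CDF values in (0, 1).
    return (np.argsort(np.argsort(z)) + 1) / (len(z) + 1)

def tail_score(cause, effect, q=0.95, window=3):
    r_effect = rank_transform(effect)
    idx = np.where(cause > np.quantile(cause, q))[0]
    idx = idx[idx < len(effect) - window]
    return np.mean([r_effect[i:i + window + 1].max() for i in idx])

rng = np.random.default_rng(3)
n = 20000
x = rng.pareto(2.0, n)                          # heavy-tailed driver
y = 0.8 * np.roll(x, 1) + rng.pareto(2.0, n)    # y is influenced by lagged x

print("score x->y:", tail_score(x, y))   # expected to be close to 1
print("score y->x:", tail_score(y, x))   # expected to be noticeably smaller
```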

Fourth-order differential equations play an important role in many applications in science and engineering. In this paper, we present a three-field mixed finite-element formulation for fourth-order problems, with a focus on the effective treatment of the different boundary conditions that arise naturally in a variational formulation. Our formulation is based on introducing the gradient of the solution as an explicit variable, constrained using a Lagrange multiplier. The essential boundary conditions are enforced weakly, using Nitsche's method where required. As a result, the problem is rewritten as a saddle-point system, requiring analysis of the resulting finite-element discretization and the construction of optimal linear solvers. Here, we discuss the analysis of the well-posedness and accuracy of the finite-element formulation. Moreover, we develop monolithic multigrid solvers for the resulting linear systems. Two- and three-dimensional numerical results are presented to demonstrate the accuracy of the discretization and the efficiency of the proposed multigrid solvers.
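
For orientation, one common way to write such a three-field saddle-point system for the model problem $\Delta^2 u = f$ (ignoring boundary terms and assuming clamped boundary conditions for simplicity; the paper's precise formulation, function spaces, and Nitsche terms may differ) is: find $(u, \boldsymbol{\phi}, \boldsymbol{\lambda})$ such that
\[
\begin{aligned}
\int_\Omega \nabla\boldsymbol{\phi} : \nabla\boldsymbol{\psi}\,dx
 + \int_\Omega \boldsymbol{\lambda}\cdot(\nabla v - \boldsymbol{\psi})\,dx
 &= \int_\Omega f\,v\,dx
 &&\text{for all } (v, \boldsymbol{\psi}),\\
\int_\Omega \boldsymbol{\mu}\cdot(\nabla u - \boldsymbol{\phi})\,dx &= 0
 &&\text{for all } \boldsymbol{\mu},
\end{aligned}
\]
where $\boldsymbol{\phi}$ plays the role of $\nabla u$ and $\boldsymbol{\lambda}$ is the Lagrange multiplier enforcing that constraint.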

In this paper, a generalized finite element method (GFEM) with optimal local approximation spaces for solving high-frequency heterogeneous Helmholtz problems is systematically studied. The local spaces are built from selected eigenvectors of local eigenvalue problems defined on generalized harmonic spaces. At both continuous and discrete levels, $(i)$ wavenumber-explicit and nearly exponential decay rates for the local approximation errors are obtained without any assumption on the size of subdomains; $(ii)$ a quasi-optimal and nearly exponential global convergence of the method is established by assuming that the size of subdomains is $O(1/k)$ ($k$ is the wavenumber). The analysis reveals a novel resonance effect between the wavenumber and the dimension of the local spaces in the decay of the error with respect to the oversampling size. Furthermore, for fixed dimensions of local spaces, the discrete local errors are proved to converge as $h\rightarrow 0$ ($h$ denoting the mesh size) towards the continuous local errors. The method at the continuous level extends the plane wave partition of unity method [I. Babuska and J. M. Melenk, Int.\;J.\;Numer.\;Methods Eng., 40 (1997), pp.~727--758] to the case of heterogeneous coefficients, and at the discrete level, it delivers an efficient non-iterative domain decomposition method for solving discrete Helmholtz problems resulting from standard FE discretizations. Numerical results are provided to confirm the theoretical analysis and to validate the proposed method.
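
For context, the plane wave partition of unity ansatz referenced above (stated here in its standard constant-coefficient form; the paper generalizes the local spaces to heterogeneous coefficients) approximates the solution as
\[
u(x) \;\approx\; \sum_{i} \varphi_i(x) \sum_{j=1}^{q} c_{ij}\, e^{\mathrm{i}\, k\, d_j\cdot x},
\]
where $\{\varphi_i\}$ is a partition of unity subordinate to an overlapping cover of the domain, the $d_j$ are unit directions, and the $c_{ij}$ are the degrees of freedom.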

We present a new analytical and numerical framework for the solution of Partial Differential Equations (PDEs) that is based on an exact transformation that moves the boundary constraints into the dynamics of the corresponding governing equation. The framework is based on a Partial Integral Equation (PIE) representation of PDEs, where a PDE is transformed into an equivalent PIE formulation that does not require boundary conditions on its solution state. The PDE-PIE framework allows for the development of a generalized PIE-Galerkin approximation methodology for a broad class of linear PDEs with non-constant coefficients governed by non-periodic boundary conditions, including, e.g., Dirichlet, Neumann and Robin boundaries. The significance of this result is that the solution to almost any linear PDE can now be constructed in the form of an analytical approximation based on a series expansion using a suitable set of basis functions, such as Chebyshev polynomials of the first kind, irrespective of the boundary conditions. In many cases involving homogeneous or simple time-dependent boundary inputs, an analytical integration in time is also possible. We present several PDE solution examples in one spatial variable implemented with the developed PIE-Galerkin methodology using both analytical and numerical integration in time. The developed framework can be naturally extended to multiple spatial dimensions and, potentially, to nonlinear problems.
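
The basis-expansion machinery can be illustrated independently of the PIE transformation itself: the snippet below (purely illustrative) represents a smooth function by its Chebyshev coefficients and applies differentiation as a linear operation in coefficient space, which is the kind of operation a Galerkin scheme built on such a basis manipulates.

```python
# Represent a smooth function by Chebyshev coefficients on [-1, 1] and
# differentiate it in coefficient space; Galerkin schemes built on such a
# basis manipulate exactly this kind of coefficient representation.
import numpy as np
from numpy.polynomial import chebyshev as C

x = np.cos(np.linspace(0.0, np.pi, 200))      # sample points in [-1, 1]
f = np.exp(x) * np.sin(3.0 * x)

coeffs = C.chebfit(x, f, deg=30)              # expansion coefficients
dcoeffs = C.chebder(coeffs)                   # derivative, still in coeff space

exact = np.exp(x) * (np.sin(3.0 * x) + 3.0 * np.cos(3.0 * x))
print("max derivative error:", np.abs(C.chebval(x, dcoeffs) - exact).max())
```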

We give new polynomial lower bounds for a number of dynamic measure problems in computational geometry. These lower bounds hold in the Word-RAM model, conditioned on the hardness of either the 3SUM problem or the Online Matrix-Vector Multiplication problem [Henzinger et al., STOC 2015]. In particular, we get lower bounds in the incremental and fully-dynamic settings for counting maximal or extremal points in R^3, different variants of Klee's Measure Problem, problems related to finding the largest empty disk in a set of points, and querying the size of the i-th convex layer in a planar set of points. While many conditional lower bounds for dynamic data structures have been proven since the seminal work of Patrascu [STOC 2010], few of them relate to computational geometry problems; this is the first paper focusing on this topic. The problems we consider can all be solved in O(n log n) time in the static case, and their dynamic versions have mostly been approached from the perspective of improving known upper bounds. One exception is Klee's Measure Problem in R^2, for which Chan [CGTA 2010] gave an unconditional {\Omega}(\sqrt{n}) lower bound on the worst-case update time. By a similar approach, we show that this also holds for an important special case of Klee's Measure Problem in R^3 known as the Hypervolume Indicator problem.
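
For reference (background only, not taken from the paper), the 3SUM problem asks whether a set of n numbers contains three elements summing to zero; the conjecture underlying such conditional lower bounds is that no algorithm solves it in time O(n^(2-eps)) for any eps > 0. A standard quadratic-time routine looks as follows.

```python
# Standard O(n^2) 3SUM routine (sort + two-pointer scan): decide whether
# three elements of the input sum to zero.  The 3SUM conjecture asserts
# that no algorithm runs in time O(n^(2 - eps)) for any eps > 0.
def has_3sum(nums):
    a = sorted(nums)
    n = len(a)
    for i in range(n - 2):
        lo, hi = i + 1, n - 1
        while lo < hi:
            s = a[i] + a[lo] + a[hi]
            if s == 0:
                return True
            if s < 0:
                lo += 1
            else:
                hi -= 1
    return False

print(has_3sum([-5, 1, 4, 2, -3]))   # True: -5 + 1 + 4 = 0
print(has_3sum([1, 2, 3, 4]))        # False
```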

In this paper, we develop a Monte Carlo algorithm named Frozen Gaussian Sampling (FGS) to solve the semiclassical Schr\"odinger equation based on the frozen Gaussian approximation. Due to the highly oscillatory structure of the wave function, traditional mesh-based algorithms suffer from "the curse of dimensionality", and the computational burden becomes more severe as the semiclassical parameter $\varepsilon$ gets small. The FGS outperforms existing algorithms in that it is mesh-free in computing the physical observables and is suitable for high-dimensional problems. In this work, we provide detailed procedures to implement the FGS for both Gaussian and WKB initial data, where the sampling strategies on the phase space balance the need for variance reduction against sampling convenience. Moreover, we rigorously prove that, to reach a given accuracy, the number of samples needed for the FGS is independent of the scaling parameter $\varepsilon$. Furthermore, the complexity of the FGS algorithm scales sublinearly with respect to the microscopic degrees of freedom and, in particular, is insensitive to the spatial dimension. The performance of the FGS is validated through several typical numerical experiments, including simulating scattering by a barrier potential, the formation of caustics, and computing high-dimensional physical observables without a mesh.
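
The dimension-insensitivity rests on a generic property of Monte Carlo integration over phase space; the sketch below (a plain importance-sampling estimator on a toy integrand, not the frozen Gaussian ansatz or its dynamics) illustrates why the sample count, rather than a mesh, controls the cost.

```python
# Plain importance-sampling Monte Carlo for a phase-space-type integral
# I = integral of g(z) over R^(2d).  This is only the generic mechanism that
# makes such samplers mesh-free; it is not the frozen Gaussian ansatz itself.
import numpy as np

rng = np.random.default_rng(4)
d = 6                                   # 2d-dimensional "phase space"
dim = 2 * d
n_samples = 200_000

def g(z):
    # Toy integrand with known integral: integral of exp(-|z|^2) = pi^(dim/2).
    return np.exp(-np.sum(z**2, axis=-1))

# Proposal: standard normal in each coordinate, with density rho(z).
z = rng.standard_normal((n_samples, dim))
rho = np.exp(-0.5 * np.sum(z**2, axis=-1)) / (2.0 * np.pi) ** (dim / 2)

estimate = np.mean(g(z) / rho)
print("MC estimate :", estimate)
print("exact value :", np.pi ** (dim / 2))
```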

De-homogenization is becoming an effective method to significantly expedite the design of high-resolution multiscale structures, but existing methods have thus far been confined to simple static compliance minimization. Two critical challenges must be addressed to accommodate general cases: enabling the design of the unit-cell orientation and using free-form microstructures. In this paper, we propose a data-driven de-homogenization method that allows effective design of the unit-cell orientation angles and conformal mapping of spatially varying, complex microstructures. We devise a parameterized microstructure composed of rods in different directions to provide more diversity in stiffness while retaining geometrical simplicity. The microstructural geometry-property relationship is then surrogated by a neural network to avoid costly homogenization. A Cartesian representation of the unit-cell orientation is incorporated into the homogenization-based optimization to design the angles. Corresponding high-resolution multiscale structures are obtained from the homogenization-based designs through a conformal mapping constructed with sawtooth function fields. This allows us to assemble complex microstructures with an oriented and compatible tiling pattern, while preserving the local homogenized properties. To demonstrate our method with a specific application, we optimize the frequency response of structures under harmonic excitations within a given frequency range. This is the first time that a sawtooth function is applied in a de-homogenization framework for complex design scenarios beyond static compliance minimization. The examples illustrate that multiscale structures can be generated with high efficiency and much better dynamic performance compared with macroscale-only optimization. Beyond frequency response design, the proposed framework can be applied to other general problems.
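
To give a flavour of how a sawtooth function turns an orientation into a periodic tiling (only for a constant orientation here; spatially varying orientations are exactly what the mapping fields constructed in the paper are needed for), consider the toy pattern below.

```python
# Toy use of a sawtooth function to turn an orientation angle and a relative
# rod width into a periodic lamination pattern on a fine grid.  This only
# covers a constant orientation; spatially varying angles require the
# mapping fields constructed in the paper.
import numpy as np

def sawtooth(t):
    return t - np.floor(t)

def lamination(theta, width, n=256, period=0.1):
    xv, yv = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n))
    phase = (np.cos(theta) * xv + np.sin(theta) * yv) / period
    return (sawtooth(phase) < width).astype(float)   # 1 = solid, 0 = void

rho = lamination(theta=np.pi / 6, width=0.4)
print("volume fraction:", rho.mean())   # close to the prescribed width 0.4
```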
