Finite element methods are well known to admit robust optimal convergence on simplicial meshes satisfying the maximum angle condition. How to generalize this condition to polyhedra, however, remains open in the literature. In this work, we argue that such a generalization is possible for virtual element methods (VEMs). In particular, we develop an anisotropic analysis framework for VEMs in which the virtual spaces and projection spaces remain abstract and can be problem-adapted, carrying forward the ``virtual'' spirit of VEMs. Three anisotropic cases are analyzed under this framework: (1) elements contain non-shrinking inscribed balls but are not necessarily star-convex with respect to those balls; (2) elements are cut arbitrarily from a background Cartesian mesh and may shrink arbitrarily; (3) elements contain different materials, so that the virtual spaces involve discontinuous coefficients. The resulting error estimates are independent of the polyhedral element shapes. The present work substantially improves the existing theoretical results in the literature and broadens the scope of application of VEMs.
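
For orientation, the maximum angle condition mentioned above is usually stated as follows (a standard formulation, not reproduced from this paper): every angle of every triangle in the mesh (and, in three dimensions, every face and dihedral angle of every tetrahedron) is bounded above by a fixed constant,
\[
\theta \;\le\; \theta_{\max} \;<\; \pi \qquad \text{for all such angles, uniformly in the mesh size,}
\]
while no lower bound on the angles (i.e. no shape regularity) is imposed, which is exactly what permits anisotropic, strongly stretched elements.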

Related content

Randomized orthogonal projection methods (ROPMs) can be used to speed up the computation of Krylov subspace methods in various contexts. Through a theoretical and numerical investigation, we establish that these methods produce quasi-optimal approximations over the Krylov subspace. Our numerical experiments illustrate the convergence of ROPMs for all matrices in our test set: occasional spikes occur, but the overall convergence rate is similar to that of standard OPMs.
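
To make the randomization idea concrete, here is a minimal numpy sketch of a randomized (sketched) Galerkin projection over a Krylov basis; the function name, sketch size, and Gaussian sketching operator are our own illustrative choices, not the ROPM variants analyzed in the paper.

```python
import numpy as np

def sketched_galerkin_solve(A, b, m=20, sketch_dim=None, seed=0):
    """Approximate Ax = b by a randomized orthogonal-projection step over the
    Krylov subspace K_m(A, b). Illustrative only: a Gaussian sketch S replaces
    exact inner products in the Galerkin condition (S V)^T S (b - A V y) = 0."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    s = sketch_dim or 4 * m                       # common oversampling heuristic

    # Build an (approximately) orthonormal Krylov basis V = [v_1, ..., v_m].
    V = np.zeros((n, m))
    v = b / np.linalg.norm(b)
    for j in range(m):
        V[:, j] = v
        w = A @ v
        w -= V[:, :j + 1] @ (V[:, :j + 1].T @ w)  # Gram-Schmidt step
        v = w / np.linalg.norm(w)

    # Random sketching operator (SRHT or sparse maps are typical in practice).
    S = rng.standard_normal((s, n)) / np.sqrt(s)

    # Sketched Galerkin condition: (S V)^T S (b - A V y) = 0.
    SV, SAV, Sb = S @ V, S @ (A @ V), S @ b
    y = np.linalg.solve(SV.T @ SAV, SV.T @ Sb)
    return V @ y                                  # approximate solution in K_m
```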

Deep learning has contributed greatly to many successes in artificial intelligence in recent years. Today, it is possible to train models that have thousands of layers and hundreds of billions of parameters. Large-scale deep models have achieved great success, but their enormous computational complexity and storage requirements make them extremely difficult to deploy in real-time applications. On the other hand, dataset size remains a real problem in many domains: data are often missing, too expensive, or impossible to obtain for other reasons. Ensemble learning is a partial solution to the problems of small datasets and overfitting. However, ensemble learning in its basic form entails a linear increase in computational complexity. We analyzed the impact of the ensemble decision-fusion mechanism and examined various methods of combining decisions, including voting algorithms. We used a modified knowledge distillation framework as the decision-fusion mechanism, which additionally allows the entire ensemble to be compressed into the weight space of a single model. We showed that knowledge distillation can aggregate knowledge from multiple teachers into a single student model and, at the same computational complexity, yield a better-performing model than one trained in the standard manner. We developed our own method for mimicking the responses of all teachers simultaneously. We tested these solutions on several benchmark datasets. Finally, we present a broad range of applications of the efficient multi-teacher knowledge distillation framework. In the first example, we used knowledge distillation to develop models that automate corrosion detection on aircraft fuselage. The second example describes smoke detection on observation cameras, aimed at counteracting forest wildfires.
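
As a concrete, deliberately simplified illustration of multi-teacher distillation as a decision-fusion mechanism, the PyTorch sketch below averages the teachers' temperature-softened outputs and trains a single student against the fused distribution; the function name, temperature, and mixing weight are placeholder choices of ours, not the fusion method developed in the paper.

```python
import torch
import torch.nn.functional as F

def multi_teacher_kd_loss(student_logits, teacher_logits_list, labels,
                          T=4.0, alpha=0.5):
    """Illustrative multi-teacher distillation loss: soft targets from all
    teachers are averaged and the student mimics the fused distribution while
    also fitting the hard labels."""
    # Fuse teachers by averaging their temperature-softened distributions.
    teacher_probs = torch.stack(
        [F.softmax(tl / T, dim=-1) for tl in teacher_logits_list]).mean(dim=0)
    # KL divergence between the student's softened outputs and the fused targets.
    kd = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                  teacher_probs, reduction="batchmean") * (T * T)
    # Standard cross-entropy on the ground-truth labels.
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1.0 - alpha) * ce
```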

Learning Markov decision processes (MDPs) in an adversarial environment has been a challenging problem. The problem becomes even more challenging with function approximation, since the underlying structure of the loss function and transition kernel are especially hard to estimate in a varying environment. In fact, the state-of-the-art results for linear adversarial MDPs achieve a regret of $\tilde{O}(K^{6/7})$ ($K$ denotes the number of episodes), which leaves considerable room for improvement. In this paper, we investigate the problem from a new viewpoint, which reduces the linear MDP to linear optimization by subtly setting the feature maps of the bandit arms of the linear optimization problem. This new technique, under an exploratory assumption, yields an improved bound of $\tilde{O}(K^{4/5})$ for linear adversarial MDPs without access to a transition simulator. The new view could be of independent interest for solving other MDP problems that possess a linear structure.
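
For context, the regret appearing in these bounds is commonly defined (in the standard adversarial-MDP sense; this is background notation, not quoted from the paper) as
\[
\mathrm{Regret}(K) \;=\; \max_{\pi} \, \sum_{k=1}^{K} \Big( V_{k}^{\pi}(s_{1}) - V_{k}^{\pi_{k}}(s_{1}) \Big),
\]
where $V_{k}^{\pi}$ denotes the value of policy $\pi$ under the adversarially chosen loss of episode $k$, $\pi_{k}$ is the learner's policy in that episode, and $s_{1}$ is the initial state.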

A Peskun ordering between two samplers, implying a dominance of one over the other, is known in the Markov chain Monte Carlo community for being a remarkably strong result, but also for being notably difficult to establish. Indeed, one has to prove that the probability of reaching a state $\mathbf{y}$ from a state $\mathbf{x}$ under one sampler is greater than or equal to the probability under the other sampler, and this must hold for all pairs $(\mathbf{x}, \mathbf{y})$ such that $\mathbf{x} \neq \mathbf{y}$. We provide in this paper a weaker version that does not require the inequality between the probabilities to hold for all these states: essentially, the dominance holds asymptotically, as a varying parameter grows without bound, as long as the states for which the inequality holds belong to a set that concentrates the probability mass. The weak ordering turns out to be useful for comparing lifted samplers for partially-ordered discrete state spaces with their Metropolis--Hastings counterparts. An analysis in great generality yields a qualitative conclusion: the lifted samplers asymptotically perform better in certain situations (which we are able to identify), but not necessarily in others (and the reasons why are made clear). A thorough study in a specific context of graphical-model simulation is also conducted.
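
For reference, the classical Peskun ordering can be stated as follows (a standard definition, not quoted from the paper): a transition kernel $P_{1}$ dominates another kernel $P_{2}$ off the diagonal, written $P_{1} \succeq P_{2}$, if
\[
P_{1}(\mathbf{x}, \mathbf{y}) \;\ge\; P_{2}(\mathbf{x}, \mathbf{y}) \qquad \text{for all } \mathbf{x} \neq \mathbf{y},
\]
and, when both kernels are reversible with respect to the same target distribution, this implies that $P_{1}$ yields an asymptotic variance no larger than that of $P_{2}$ for every square-integrable test function.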

We present a novel framework based on semi-bounded spatial operators for analyzing and discretizing initial boundary value problems on moving and deforming domains. This development extends an existing framework for well-posed problems and energy stable discretizations from stationary domains to the general case including arbitrary mesh motion. In particular, we show that an energy estimate derived in the physical coordinate system is equivalent to a semi-bounded property with respect to a stationary reference domain. The continuous analysis leading up to this result is based on a skew-symmetric splitting of the material time derivative, and thus relies on the property of integration-by-parts. Following this, a mimetic energy stable arbitrary Lagrangian-Eulerian framework for semi-discretization is formulated, based on approximating the material time derivative in a way consistent with discrete summation-by-parts. Thanks to the semi-bounded property, a method-of-lines approach using standard explicit or implicit time integration schemes can be applied to march the system forward in time. The same type of stability argument applies as for the corresponding stationary domain problem, without regard to additional properties such as discrete geometric conservation. As an additional bonus, we demonstrate that discrete geometric conservation, in the sense of exact free-stream preservation, can still be achieved in an automatic way with the new framework. However, we stress that this is not necessary for stability.
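
For orientation, a first-derivative summation-by-parts operator (a standard definition, not specific to this paper) is a matrix $D = H^{-1} Q$ with $H = H^{T} > 0$ and
\[
Q + Q^{T} = B = \operatorname{diag}(-1, 0, \dots, 0, 1),
\]
so that $\mathbf{u}^{T} H (D\mathbf{v}) + (D\mathbf{u})^{T} H \mathbf{v} = u_{N} v_{N} - u_{1} v_{1}$; that is, the discrete operator mimics integration by parts, which is the property the skew-symmetric splitting of the material time derivative relies on.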

We examine the use of the Euler-Maclaurin formula and newly derived uniform asymptotic expansions for the numerical evaluation of the Lerch transcendent $\Phi(z, s, a)$ for $z, s, a \in \mathbb{C}$ to arbitrary precision. A detailed analysis of these expansions is accompanied by rigorous error bounds. A complete scheme of computation for large and small values of the parameters and argument is described, along with algorithmic details to achieve high performance. The described algorithm has been extensively tested in different regimes of the parameters and compared with current state-of-the-art codes. An open-source implementation of $\Phi(z, s, a)$ based on the algorithms described in this paper is available.
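
For reference, the Lerch transcendent is defined by the series (standard definition)
\[
\Phi(z, s, a) \;=\; \sum_{n=0}^{\infty} \frac{z^{n}}{(n+a)^{s}}, \qquad |z| < 1, \quad a \neq 0, -1, -2, \dots,
\]
and by analytic continuation elsewhere; it contains the Hurwitz zeta function $\zeta(s, a) = \Phi(1, s, a)$ and the polylogarithm $\mathrm{Li}_{s}(z) = z\,\Phi(z, s, 1)$ as special cases.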

We present a new analytical and numerical framework for the solution of Partial Differential Equations (PDEs) that is based on an exact transformation that moves the boundary constraints into the dynamics of the corresponding governing equation. The framework is based on a Partial Integral Equation (PIE) representation of PDEs, where a PDE is transformed into an equivalent PIE formulation that does not require boundary conditions on its solution state. The PDE-PIE framework allows for the development of a generalized PIE-Galerkin approximation methodology for a broad class of linear PDEs with non-constant coefficients governed by non-periodic boundary conditions, including, e.g., Dirichlet, Neumann and Robin boundaries. The significance of this result is that the solution to almost any linear PDE can now be constructed in the form of an analytical approximation based on a series expansion using a suitable set of basis functions, such as Chebyshev polynomials of the first kind, irrespective of the boundary conditions. In many cases involving homogeneous or simple time-dependent boundary inputs, an analytical integration in time is also possible. We present several PDE solution examples in one spatial variable implemented with the developed PIE-Galerkin methodology using both analytical and numerical integration in time. The developed framework can be naturally extended to multiple spatial dimensions and, potentially, to nonlinear problems.

This paper proposes a flexible framework for inferring large-scale time-varying and time-lagged correlation networks from multivariate or high-dimensional non-stationary time series with piecewise smooth trends. Built on a novel and unified multiple-testing procedure for time-lagged cross-correlation functions with a fixed or diverging number of lags, our method can accurately disclose flexible time-varying network structures associated with complex functional structures at all time points. We broaden the applicability of our method to structural breaks by developing difference-based nonparametric estimators of cross-correlations, achieve accurate family-wise error control via a bootstrap-assisted procedure adaptive to the complex temporal dynamics, and enhance the probability of recovering the time-varying network structures using a new uniform variance reduction technique. We prove the asymptotic validity of the proposed method and demonstrate its effectiveness in finite samples through simulation studies and empirical applications.
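
For orientation, the time-lagged cross-correlations that define the network edges are of the generic form (our notation, not the paper's)
\[
\rho_{ij}(t, \ell) \;=\; \operatorname{corr}\!\big( X_{i,t},\, X_{j,t-\ell} \big), \qquad \ell = 0, 1, \dots, L,
\]
and an edge between series $i$ and $j$ at time $t$ is declared whenever some $\rho_{ij}(t, \ell)$ is found to be significantly nonzero by the multiple-testing procedure.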

Granular materials appear very often in geotechnical engineering, petroleum engineering, material science, and physics. The packing of granular materials plays a very important role in their mechanical behavior, such as the stress-strain response, stability, and permeability. Although packing generation has attracted a great deal of attention over a long period in theoretical, experimental, and numerical studies, the packing of granular materials remains a difficult and active research topic, especially the generation of random packings of non-spherical particles. To this end, we generate packings of particles with the same shapes, numbers, and size distributions using a geometric method and a dynamic method separately. Specifically, we extend one of the Monte Carlo models for spheres to ellipsoids and poly-ellipsoids.
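
As a point of reference for the geometric (Monte Carlo) route, a minimal random-sequential-addition generator for equal spheres in a periodic box might look as follows; this is a generic baseline sketch under our own assumptions, not the ellipsoid/poly-ellipsoid model extended in this work.

```python
import numpy as np

def rsa_sphere_packing(n_target, radius, box=1.0, max_tries=100000, seed=0):
    """Illustrative random-sequential-addition packing of equal spheres in a
    periodic cube: trial centers are drawn uniformly and rejected on overlap."""
    rng = np.random.default_rng(seed)
    centers = []
    tries = 0
    while len(centers) < n_target and tries < max_tries:
        tries += 1
        c = rng.uniform(0.0, box, size=3)          # trial sphere center
        if centers:
            d = np.array(centers) - c
            d -= box * np.round(d / box)           # periodic minimum image
            if np.min(np.linalg.norm(d, axis=1)) < 2.0 * radius:
                continue                           # overlap: reject the trial
        centers.append(c)
    return np.array(centers)
```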

This work proposes a model-reduction approach for the material point method on nonlinear manifolds. Our technique approximates the $\textit{kinematics}$ by approximating the deformation map using an implicit neural representation that restricts deformation trajectories to reside on a low-dimensional manifold. By explicitly approximating the deformation map, its spatiotemporal gradients -- in particular the deformation gradient and the velocity -- can be computed via analytical differentiation. In contrast to typical model-reduction techniques that construct a linear or nonlinear manifold to approximate the (finite number of) degrees of freedom characterizing a given spatial discretization, the use of an implicit neural representation enables the proposed method to approximate the $\textit{continuous}$ deformation map. This allows the kinematic approximation to remain agnostic to the discretization. Consequently, the technique supports dynamic discretizations -- including resolution changes -- during the course of the online reduced-order-model simulation. To generate $\textit{dynamics}$ for the generalized coordinates, we propose a family of projection techniques. At each time step, these techniques: (1) Calculate full-space kinematics at quadrature points, (2) Calculate the full-space dynamics for a subset of `sample' material points, and (3) Calculate the reduced-space dynamics by projecting the updated full-space position and velocity onto the low-dimensional manifold and tangent space, respectively. We achieve significant computational speedup via hyper-reduction that ensures all three steps execute on only a small subset of the problem's spatial domain. Large-scale numerical examples with millions of material points illustrate the method's ability to gain an order of magnitude computational-cost saving -- indeed $\textit{real-time simulations}$ -- with negligible errors.
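
To illustrate the kinematic ingredient (an implicit neural representation of the continuous deformation map whose spatial gradient is obtained by automatic differentiation), here is a minimal PyTorch sketch; the architecture, latent dimension, and function names are our own illustrative assumptions rather than the paper's model.

```python
import torch
import torch.nn as nn

class NeuralDeformationMap(nn.Module):
    """Illustrative implicit neural representation of a deformation map
    phi(X, t; z): a reference position X in R^3, a time t, and a low-dimensional
    latent code z are mapped to the deformed position x = X + u(X, t; z)."""
    def __init__(self, latent_dim=8, width=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + 1 + latent_dim, width), nn.GELU(),
            nn.Linear(width, width), nn.GELU(),
            nn.Linear(width, 3),
        )

    def forward(self, X, t, z):
        # X: (N, 3) reference positions, t: (N, 1) times, z: (N, latent_dim).
        return X + self.net(torch.cat([X, t, z], dim=-1))

def deformation_gradient(model, X, t, z):
    """F_ij = d x_i / d X_j at each material point, via automatic differentiation."""
    X = X.detach().clone().requires_grad_(True)
    x = model(X, t, z)
    rows = [torch.autograd.grad(x[:, i].sum(), X, create_graph=True)[0]
            for i in range(3)]
    return torch.stack(rows, dim=1)                # shape (N, 3, 3)
```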
