
Barrier terms for the Incremental Potential Contact (IPC) energy are crucial for maintaining an intersection- and inversion-free simulation trajectory. However, existing formulations, which directly use distance to measure the state of contact, can restrict implementation design and performance. This is because numerical eigendecompositions are required to guarantee positive semi-definiteness of the Hessian of the barrier energy during optimization with Projected-Newton solvers, and alternative Gauss-Newton methods suffer from significantly reduced convergence rates. We rewrite the barrier function of IPC to derive an efficient approximation of its Hessian, which can then be used for simultaneous construction and projection to a positive semi-definite state. The key idea is to formulate a simplicial geometric measure of contact using mesh boundary elements, where analytic eigensystems are obtained, so the proposed barrier energy can be minimized with superlinear convergence, retaining much of the advantage of Newton-type methods. Our approach is suitable for standard second-order unconstrained optimization strategies for IPC-based collision processing, minimizing nonlinear, nonconvex functions whose Hessian may be indefinite. The result is a 3-times speedup over the standard full-space IPC barrier formulation based on direct Euclidean distance measures. We further apply our analytic proxy eigensystems to produce an entirely GPU-based implementation of IPC with significant further acceleration.
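
For context on the cost this work removes, the sketch below (a minimal illustration, not the paper's method) shows the standard IPC log-barrier $b(d) = -(d - \hat{d})^2 \ln(d/\hat{d})$ together with the numerical eigen-projection that Projected-Newton solvers apply per contact stencil; `project_psd` is the step the analytic eigensystems are designed to avoid.

```python
import numpy as np

def ipc_barrier(d, dhat):
    """Standard IPC log-barrier b(d) = -(d - dhat)^2 * ln(d / dhat):
    blows up smoothly as the contact distance d -> 0 and vanishes
    with C^2 continuity for d >= dhat."""
    if d >= dhat:
        return 0.0
    return -(d - dhat) ** 2 * np.log(d / dhat)

def project_psd(H):
    """Per-stencil numerical eigen-projection onto the PSD cone:
    clamp negative eigenvalues to zero. This per-contact
    eigendecomposition is the cost the paper targets."""
    w, V = np.linalg.eigh(H)
    return (V * np.maximum(w, 0.0)) @ V.T
```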

Related Content

We couple the L1 discretization of the Caputo fractional derivative in time with the Galerkin scheme in space to devise a linear numerical method for the semilinear subdiffusion equation. Two important features of our setting are nonsmooth initial data and a time-dependent diffusion coefficient. We prove the stability and convergence of the method under weak assumptions on the regularity of the diffusivity. We obtain error estimates that are optimal pointwise in space and global in time, and verify them with several numerical experiments.
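
As a reference point, here is a minimal sketch of the L1 scheme on the scalar model problem $D_t^\alpha u = \lambda u + f(t)$ (a simplification of the paper's semilinear PDE setting; function names are ours), using the standard weights $a_j = (j+1)^{1-\alpha} - j^{1-\alpha}$:

```python
import numpy as np
from math import gamma

def l1_weights(alpha, n):
    """L1 weights a_j = (j+1)^(1-alpha) - j^(1-alpha), j = 0..n-1."""
    j = np.arange(n)
    return (j + 1) ** (1 - alpha) - j ** (1 - alpha)

def solve_subdiffusion_ode(alpha, lam, f, T, N, u0):
    """Implicit L1 scheme for D_t^alpha u = lam*u + f(t), u(0) = u0,
    0 < alpha < 1, on a uniform grid (assumes lam <= 0 for stability)."""
    tau = T / N
    c = tau ** (-alpha) / gamma(2 - alpha)
    a = l1_weights(alpha, N)
    u = np.empty(N + 1)
    u[0] = u0
    for n in range(1, N + 1):
        # history term: c * sum_{k=1}^{n-1} a_{n-k} * (u_k - u_{k-1})
        hist = c * np.dot(a[n - 1:0:-1], np.diff(u[:n]))
        # implicit step: c*(u_n - u_{n-1}) + hist = lam*u_n + f(t_n)
        u[n] = (c * u[n - 1] - hist + f(n * tau)) / (c - lam)
    return u
```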

Research in high energy physics (HEP) requires huge amounts of computing and storage, putting strong constraints on code speed and resource usage. To meet these requirements, a compiled high-performance language is typically used; for physicists, however, who focus on the application when developing code, research productivity pleads for a high-level programming language. A popular approach combines Python, used for the high-level interface, and C++, used for the compute-intensive parts of the code. A more convenient and efficient approach would be a single language that provides both high-level programming and high performance. The Julia programming language, developed at MIT precisely to allow the use of a single language in research activities, has followed this path. In this paper, the applicability of the Julia language to HEP research is explored, covering the different aspects that are important for HEP code development: runtime performance, handling of large projects, interfacing with legacy code, distributed computing, training, and ease of programming. The study shows that the HEP community would benefit from a large-scale adoption of this programming language. The HEP-specific foundation libraries that would need to be consolidated are identified.

We propose a finite element discretization for the steady, generalized Navier-Stokes equations for fluids with shear-dependent viscosity, completed with inhomogeneous Dirichlet boundary conditions and an inhomogeneous divergence constraint. We establish (weak) convergence of discrete solutions as well as a priori error estimates for the velocity vector field and the scalar kinematic pressure. Numerical experiments complement the theoretical findings.
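
The abstract does not fix a constitutive law, but the prototypical shear-dependent viscosity in this literature is the power-law model; a minimal sketch (our illustrative example, with hypothetical parameter defaults):

```python
import numpy as np

def power_law_viscosity(grad_u, K=1.0, p=1.5, eps=1e-8):
    """Prototypical shear-dependent (power-law) viscosity
    nu = K * |D(u)|^(p-2), with D(u) the symmetric velocity gradient.
    p < 2: shear-thinning, p > 2: shear-thickening, p = 2: Newtonian.
    eps regularizes the degenerate limit |D(u)| -> 0 when p < 2."""
    D = 0.5 * (grad_u + grad_u.T)
    return K * (eps + np.linalg.norm(D)) ** (p - 2)
```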

We present a multigrid algorithm to efficiently solve the large saddle-point systems of equations that typically arise in PDE-constrained optimization under uncertainty. The algorithm is based on a collective smoother that at each iteration sweeps over the nodes of the computational mesh and solves a reduced saddle-point system whose size depends on the number $N$ of samples used to discretize the probability space. We show that this reduced system can be solved with optimal $O(N)$ complexity. We test the multigrid method on three problems: a linear-quadratic problem, possibly with a local or a boundary control, for which the multigrid method is used to directly solve the linear optimality system; a nonsmooth problem with box constraints and $L^1$-norm penalization on the control, in which the multigrid scheme is used within a semismooth Newton iteration; and a risk-averse problem with the smoothed CVaR risk measure, where the multigrid method is called within a preconditioned Newton iteration. In all cases, the multigrid algorithm exhibits excellent performance and robustness with respect to the parameters of interest.
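
An $O(N)$ solve of the reduced node-local system is plausible when that system has a bordered (arrowhead) structure: $N$ sample blocks coupled only through the control unknown at the node. The sketch below solves such a system via a Schur complement on the scalar coupling variable; the diagonal-plus-border structure is our assumption for illustration, not a statement of the paper's exact algebra.

```python
import numpy as np

def solve_arrowhead(d, c, beta, f, g):
    """Solve [[diag(d), c], [c^T, beta]] @ [y; u] = [f; g] in O(N)
    by eliminating y: a Schur complement on the scalar coupling
    unknown u (the control value at the mesh node). Assumes d has
    no zero entries and the Schur complement is nonzero."""
    dinv_f = f / d          # diag(d)^{-1} f
    dinv_c = c / d          # diag(d)^{-1} c
    u = (g - c @ dinv_f) / (beta - c @ dinv_c)
    y = dinv_f - dinv_c * u
    return y, u
```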

The spectral clustering algorithm is often used as a binary clustering method for unclassified data by applying principal component analysis. To study theoretical properties of the algorithm, existing studies often impose the assumption of conditional homoscedasticity. However, this assumption is restrictive and often unrealistic in practice. Therefore, in this paper, we consider the allometric extension model, in which the first eigenvectors of the two covariance matrices and the difference of the two mean vectors all point in the same direction, and we provide a non-asymptotic bound on the error probability of the spectral clustering algorithm under this model. As a byproduct of the result, we obtain the consistency of the clustering method in high-dimensional settings.
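
For concreteness, one common PCA-based form of the binary clustering described here projects the centered data onto the leading principal direction and splits by the sign of the score (a minimal sketch; the exact variant analyzed in the paper may differ):

```python
import numpy as np

def pca_binary_clustering(X):
    """Binary clustering via the first principal component: project
    centered data onto the leading eigenvector of the sample
    covariance and split by the sign of the score."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = Xc @ Vt[0]          # coordinates along the first PC
    return (scores > 0).astype(int)  # cluster labels in {0, 1}
```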

Acceleration of gradient-based optimization methods is an issue of significant practical and theoretical interest, particularly in machine learning applications. Most research has focused on optimization over Euclidean spaces, but given the need to optimize over spaces of probability measures in many machine learning problems, it is of interest to investigate accelerated gradient methods in this context too. To this end, we introduce a Hamiltonian-flow approach that is analogous to momentum-based approaches in Euclidean space. We demonstrate that algorithms based on this approach can achieve convergence rates of arbitrarily high order. Numerical examples illustrate our claim.
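
To fix ideas, the Euclidean analogue alluded to here is damped (conformal) Hamiltonian dynamics, whose discretization recovers classical momentum acceleration; a minimal sketch of that analogue (our illustration, not the measure-space algorithm):

```python
import numpy as np

def hamiltonian_descent(grad_f, x0, step=0.1, damping=0.9, iters=100):
    """Damped Hamiltonian flow xdot = p, pdot = -grad f(x) - gamma*p,
    discretized with a semi-implicit Euler step; this recovers the
    classical momentum (heavy-ball) method on a Euclidean objective."""
    x = np.asarray(x0, dtype=float)
    p = np.zeros_like(x)
    for _ in range(iters):
        p = damping * p - step * grad_f(x)  # momentum update
        x = x + step * p                    # position update
    return x
```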

We consider Gibbs distributions, which are families of probability distributions over a discrete space $\Omega$ with probability mass function of the form $\mu^\Omega_\beta(\omega) \propto e^{\beta H(\omega)}$ for $\beta$ in an interval $[\beta_{\min}, \beta_{\max}]$ and $H( \omega ) \in \{0 \} \cup [1, n]$. The partition function is the normalization factor $Z(\beta)=\sum_{\omega \in\Omega}e^{\beta H(\omega)}$. Two important parameters of these distributions are the log partition ratio $q = \log \tfrac{Z(\beta_{\max})}{Z(\beta_{\min})}$ and the counts $c_x = |H^{-1}(x)|$. These are correlated with system parameters in a number of physical applications and sampling algorithms. Our first main result is to estimate the counts $c_x$ using roughly $\tilde O( \frac{q}{\varepsilon^2})$ samples for general Gibbs distributions and $\tilde O( \frac{n^2}{\varepsilon^2} )$ samples for integer-valued distributions (ignoring some second-order terms and parameters), and we show this is optimal up to logarithmic factors. We illustrate with improved algorithms for counting connected subgraphs, independent sets, and perfect matchings. As a key subroutine, we also develop algorithms to compute the partition function $Z$ using $\tilde O(\frac{q}{\varepsilon^2})$ samples for general Gibbs distributions and using $\tilde O(\frac{n^2}{\varepsilon^2})$ samples for integer-valued distributions.
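
The basic identity behind sample-based estimation of partition ratios is $\mathbb{E}_{\mu_{\beta_1}}\!\left[e^{(\beta_2-\beta_1)H(\omega)}\right] = Z(\beta_2)/Z(\beta_1)$; a naive one-step estimator built on it is sketched below (for illustration only; practical algorithms, including those in the paper, chain many small steps in $\beta$ to control variance).

```python
import numpy as np

def partition_ratio(H_samples, beta1, beta2):
    """Naive importance-sampling estimate of Z(beta2)/Z(beta1) from
    values H(omega) with omega ~ mu_{beta1}, using the identity
    E[e^{(beta2-beta1)*H}] = Z(beta2)/Z(beta1). The variance blows
    up when beta2 - beta1 is large, hence annealing in practice."""
    H = np.asarray(H_samples, dtype=float)
    return np.mean(np.exp((beta2 - beta1) * H))
```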

We introduce a proof-theoretic method for showing nondefinability of second-order intuitionistic connectives by quantifier-free schemata. We apply the method to confirm that Taranovsky's "realizability disjunction" connective does not admit a quantifier-free definition, and use it to obtain new results and more nuanced information about the nondefinability of Kreisel's and Po{\l}acik's unary connectives. The finitary and combinatorial nature of our method makes it more resilient to changes in metatheory than common semantic approaches, whose robustness tends to waver once we pass to non-classical and especially anti-classical settings. Furthermore, we can easily transcribe the problem-specific subproofs into univalent type theory and check them using the Agda proof assistant.

A deep generative model yields an implicit estimator of the unknown distribution or density function of the observations. This paper investigates statistical properties of the implicit density estimator pursued by VAE-type methods from the viewpoint of nonparametric density estimation. More specifically, we obtain convergence rates of the VAE-type density estimator under the assumption that the underlying true density function belongs to a locally H\"{o}lder class. Remarkably, a near minimax optimal rate with respect to the Hellinger metric can be achieved by the simplest network architecture: a shallow generative model with a one-dimensional latent variable.
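
That "simplest architecture" is concrete enough to sketch: a one-hidden-layer generator pushes a 1-D Gaussian latent to data space, and Gaussian output noise makes the induced density implicit as a mixture over the latent. The hidden width, output dimension, and random weights below are arbitrary illustrations, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(0)

def shallow_generator(z, W1, b1, W2, b2):
    """Shallow generative model: one ReLU hidden layer mapping a
    1-D latent z to data space."""
    h = np.maximum(0.0, np.outer(z, W1) + b1)  # (n, hidden)
    return h @ W2 + b2                         # (n, data_dim)

# sample from the implicit density: push N(0,1) latents through the
# generator and add Gaussian observation noise
hidden, data_dim, n = 32, 2, 1000
W1, b1 = rng.standard_normal(hidden), rng.standard_normal(hidden)
W2, b2 = rng.standard_normal((hidden, data_dim)), rng.standard_normal(data_dim)
z = rng.standard_normal(n)
x = shallow_generator(z, W1, b1, W2, b2) + 0.1 * rng.standard_normal((n, data_dim))
```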

The fundamental problem in the toxicity detection task lies in the fact that toxicity is ill-defined. Jigsaw, a unit within Google and one of the leaders in the field, uses the definition of toxicity given by Dixon et al.: 'rude, disrespectful, or unreasonable language that is likely to make someone leave a discussion'. One can instantly see the issue with this definition, as it gives no quantitative measure of toxicity and operates with highly subjective cultural terms. Despite all its vagueness and flaws, this definition is de facto widely used by many researchers. In this work, we suggest a quantitative, stress-based definition of toxicity that overcomes the existing shortcomings.
