
Approximated forms of the RII and RIII redistribution matrices are frequently applied to simplify the numerical solution of the radiative transfer problem for polarized radiation, taking partial frequency redistribution (PRD) effects into account. A widely used approximation for RIII is to evaluate it under the assumption of complete frequency redistribution (CRD) in the observer frame (RIII-CRD). The adequacy of this approximation for modeling intensity profiles has been firmly established. By contrast, its suitability for modeling scattering polarization signals has been analyzed in only a few studies, under simplified settings. In this work, we aim to quantitatively assess the impact and the range of validity of the RIII-CRD approximation in the modeling of scattering polarization. We first present an analytic comparison between RIII and RIII-CRD. We then compare the results of radiative transfer calculations, out of local thermodynamic equilibrium, performed with RIII and RIII-CRD in realistic 1D atmospheric models, focusing on the chromospheric Ca I line at 4227 Å and on the photospheric Sr I line at 4607 Å.


Learning distance functions between complex objects, such as the Wasserstein distance between point sets, is a common goal in machine learning applications. However, functions on such complex objects (e.g., point sets and graphs) are often required to be invariant to a wide variety of group actions (e.g., permutations or rigid transformations). Continuous and symmetric product functions (such as distance functions) on such objects must therefore be invariant to the product of such group actions. We call these functions symmetric and factor-wise group invariant (SFGI functions for short). In this paper, we first present a general neural network architecture for approximating SFGI functions. The main contribution of this paper combines this general architecture with a sketching idea to develop a specific and efficient neural network that can approximate the $p$-th Wasserstein distance between point sets. Importantly, the required model complexity is independent of the sizes of the input point sets. On the theoretical front, to the best of our knowledge, this is the first result showing that there exists a neural network with the capacity to approximate the Wasserstein distance with bounded model complexity. Our work provides an interesting integration of sketching ideas for geometric problems with the universal approximation of symmetric functions. On the empirical front, we present a range of results showing that our proposed architecture performs comparably to or better than other models (including a state-of-the-art Siamese-autoencoder-based approach). In particular, our network generalizes significantly better and trains much faster than the Siamese autoencoder. Finally, this line of investigation could be useful in exploring effective neural network designs for a broad range of geometric optimization problems (e.g., $k$-means in a metric space).
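As a point of reference for what such a network approximates, the exact $p$-th Wasserstein distance between two equal-size point sets with uniform weights reduces to an optimal assignment problem. The sketch below (our illustration, not code from the paper) computes it with the Hungarian algorithm; note its cost grows with the set sizes, which is precisely what the paper's sketching-based network avoids.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def wasserstein_p(X, Y, p=2):
    """Exact p-Wasserstein distance between two equal-size point sets
    with uniform weights, via an optimal assignment (Hungarian algorithm)."""
    X, Y = np.asarray(X, float), np.asarray(Y, float)
    assert len(X) == len(Y), "assignment formulation needs equal-size sets"
    # pairwise Euclidean distances, raised to the p-th power
    C = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1) ** p
    rows, cols = linear_sum_assignment(C)  # minimum-cost perfect matching
    return float(C[rows, cols].mean()) ** (1.0 / p)

print(wasserstein_p([[0, 0]], [[3, 4]], p=2))  # → 5.0
```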

Numerical simulations of kinetic problems can become prohibitively expensive due to their large memory footprint and computational costs. A method that has proven to successfully reduce these costs is the dynamical low-rank approximation (DLRA). One key question when using DLRA methods is the construction of robust time integrators that preserve the invariances and associated conservation laws of the original problem. In this work, we demonstrate that the augmented basis update & Galerkin (BUG) integrator preserves solution invariances and the associated conservation laws when using a conservative truncation step and an appropriate time and space discretization. We present numerical comparisons to existing conservative integrators and discuss advantages and disadvantages.
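To make the structure of the augmented BUG integrator concrete, here is a minimal NumPy sketch of one step for a matrix ODE dA/dt = F(A) with A ≈ U S Vᵀ. It is our simplified illustration: the inner subproblems use a single explicit Euler step, and the truncation is a plain SVD cut rather than the conservative truncation the paper analyzes.

```python
import numpy as np

def augmented_bug_step(U, S, V, F, h, r):
    """One step of the augmented basis update & Galerkin (BUG) integrator
    for dA/dt = F(A), with A approximated as U @ S @ V.T of rank r.
    Inner subproblems are solved with one explicit Euler step for brevity."""
    # K-step: evolve K = U S with V frozen, then augment and re-orthogonalize
    K = U @ S
    K = K + h * F(K @ V.T) @ V
    U_hat, _ = np.linalg.qr(np.hstack([K, U]))   # augmented (2r) basis
    # L-step: evolve L = V S^T with U frozen
    L = V @ S.T
    L = L + h * F(U @ L.T).T @ U
    V_hat, _ = np.linalg.qr(np.hstack([L, V]))
    # S-step (Galerkin): evolve the coefficients in the augmented bases
    S_hat = (U_hat.T @ U) @ S @ (V.T @ V_hat)
    S_hat = S_hat + h * U_hat.T @ F(U_hat @ S_hat @ V_hat.T) @ V_hat
    # truncate from rank 2r back to rank r (plain SVD; not conservative)
    P, sig, Qt = np.linalg.svd(S_hat)
    return U_hat @ P[:, :r], np.diag(sig[:r]), V_hat @ Qt[:r, :].T
```

For the linear test problem F(A) = A with a rank-2 initial datum, ten steps of size 0.01 reproduce exp(t)·A0 up to the Euler time-discretization error.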

There are many numerical methods for solving partial differential equations (PDEs) on manifolds, such as classical implicit, finite difference, finite element, and isogeometric analysis methods, the last of which aims at improving the interoperability between finite element methods and computer-aided design (CAD) software. However, these approaches have difficulty when the domain has singularities, since the solution at a singularity may be multivalued. This paper develops a novel numerical approach for solving elliptic PDEs on real, closed, connected, orientable, and almost smooth algebraic curves and surfaces. Our method integrates numerical algebraic geometry, differential geometry, and a finite difference scheme, and is demonstrated on several examples.

Parametricity is a property of the syntax of type theory implying, e.g., that there is only one function having the type of the polymorphic identity function. Parametricity is usually proven externally, and does not hold internally. Internalising it is difficult because once there is a term witnessing parametricity, it also has to be parametric itself and this results in the appearance of higher dimensional cubes. In previous theories with internal parametricity, either an explicit syntax for higher cubes is present or the theory is extended with a new sort for the interval. In this paper we present a type theory with internal parametricity which is a simple extension of Martin-Löf type theory: there are a few new type formers, term formers and equations. Geometry is not explicit in this syntax, but emergent: the new operations and equations only refer to objects up to dimension 3. We show that this theory is modelled by presheaves over the BCH cube category. Fibrancy conditions are not needed because we use span-based rather than relational parametricity. We define a gluing model for this theory implying that external parametricity and canonicity hold. The theory can be seen as a special case of a new kind of modal type theory, and it is the simplest setting in which the computational properties of higher observational type theory can be demonstrated.

We give an efficient perfect sampling algorithm for weighted, connected induced subgraphs (or graphlets) of rooted, bounded degree graphs. Our algorithm utilizes a vertex-percolation process with a carefully chosen rejection filter and works under a percolation subcriticality condition. We show that this condition is optimal in the sense that the task of (approximately) sampling weighted rooted graphlets becomes impossible in finite expected time for infinite graphs and intractable for finite graphs when the condition does not hold. We apply our sampling algorithm as a subroutine to give near linear-time perfect sampling algorithms for polymer models and weighted non-rooted graphlets in finite graphs, two widely studied yet very different problems. This new perfect sampling algorithm for polymer models gives improved sampling algorithms for spin systems at low temperatures on expander graphs and unbalanced bipartite graphs, among other applications.
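To illustrate the basic vertex-percolation process underlying the sampler, here is a minimal sketch (ours, not the paper's algorithm): each non-root vertex survives independently with probability p, and the connected cluster containing the root in the induced subgraph is returned. The paper's carefully chosen rejection filter, which corrects this distribution to the target graphlet weights, is deliberately omitted.

```python
import random
from collections import deque

def percolation_cluster(adj, root, p, rng=random):
    """Vertex percolation: keep each non-root vertex independently with
    probability p, then return the connected cluster containing `root`
    in the induced subgraph. Illustrates only the percolation process;
    the rejection filter from the paper is omitted."""
    keep = {root}
    for v in adj:
        if v != root and rng.random() < p:
            keep.add(v)
    # breadth-first search restricted to surviving vertices
    cluster, queue = {root}, deque([root])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w in keep and w not in cluster:
                cluster.add(w)
                queue.append(w)
    return cluster
```

At the extremes the behaviour is deterministic: p = 0 returns only the root, and p = 1 returns the root's whole connected component.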

We present a numerical iterative optimization algorithm for minimizing a cost function consisting of a linear combination of three convex terms: one differentiable, a second prox-simple, and a third that is the composition of a linear map with a prox-simple function. The algorithm's special feature lies in its ability to approximate, in a single iteration run, the minimizers of the cost function for many different values of the parameters determining the relative weights of the three terms. A proof of convergence of the algorithm, based on an inexact variable-metric approach, is also provided. As a special case, one recovers a generalization of the primal-dual algorithm of Chambolle and Pock, as well as of the proximal-gradient algorithm. Finally, we show how the algorithm relates to a primal-dual iterative algorithm based on inexact proximal evaluations of the non-smooth terms of the cost function.
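For readers unfamiliar with the Chambolle-Pock special case mentioned above, the sketch below applies it to the classical 1D total-variation denoising problem min_x ½‖x − b‖² + λ‖Dx‖₁. This is our illustrative instance (fixed weights, single parameter value), not the paper's multi-parameter algorithm.

```python
import numpy as np

def chambolle_pock_tv(b, lam=0.5, tau=0.25, sigma=0.25, n_iter=500):
    """Chambolle-Pock primal-dual iteration for 1D TV denoising:
        min_x 0.5*||x - b||^2 + lam*||D x||_1,
    with D the forward-difference operator. Step sizes satisfy
    tau*sigma*||D||^2 <= 1 since ||D||^2 <= 4."""
    n = len(b)
    D = lambda x: np.diff(x)                                         # K
    Dt = lambda y: np.concatenate([[-y[0]], y[:-1] - y[1:], [y[-1]]])  # K^T
    x, x_bar, y = b.copy(), b.copy(), np.zeros(n - 1)
    for _ in range(n_iter):
        # dual step: prox of the conjugate of lam*||.||_1 is a box projection
        y = np.clip(y + sigma * D(x_bar), -lam, lam)
        # primal step: prox of the quadratic data-fidelity term
        x_new = (x - tau * Dt(y) + tau * b) / (1.0 + tau)
        x_bar = 2.0 * x_new - x                                      # extrapolation
        x = x_new
    return x
```

On a noisy piecewise-constant signal the output is substantially closer to the clean signal than the input, which is the expected qualitative behaviour of TV denoising.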

Generalized linear models (GLMs) are popular for data analysis in almost all quantitative sciences, but the choice of likelihood family and link function is often difficult. This motivates the search for likelihoods and links that minimize the impact of potential misspecification. We perform a large-scale simulation study on double-bounded and lower-bounded response data in which we systematically vary both the true and the assumed likelihoods and links. In contrast to previous studies, we also examine posterior calibration and uncertainty metrics in addition to point-estimate accuracy. Our results indicate that certain likelihoods and links can be remarkably robust to misspecification, performing almost on par with their respective true counterparts. Additionally, normal likelihood models with identity link (i.e., linear regression) often achieve calibration comparable to the more structurally faithful alternatives, at least in the studied scenarios. On the basis of our findings, we provide practical suggestions for robust likelihood and link choices in GLMs.
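A toy version of the misspecification setting can be set up in a few lines: generate double-bounded responses from a beta likelihood with a logit link, then fit the deliberately misspecified normal/identity model (ordinary least squares). The parameter values below are our own illustrative choices, not the study's simulation design.

```python
import numpy as np

rng = np.random.default_rng(0)

# True data-generating process: beta likelihood with a logit link
n = 2000
x = rng.uniform(-1, 1, n)
mu = 1.0 / (1.0 + np.exp(-(0.2 + 1.5 * x)))   # true mean, inside (0, 1)
phi = 20.0                                     # beta precision parameter
y = rng.beta(mu * phi, (1 - mu) * phi)

# Misspecified model: normal likelihood, identity link (plain OLS)
X = np.column_stack([np.ones(n), x])
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
y_hat = X @ beta_hat

# Despite the wrong likelihood and link, the fitted means track the
# true means closely over this covariate range
print(np.corrcoef(y_hat, mu)[0, 1])
```

This only probes point-estimate accuracy; the calibration and uncertainty comparisons in the study require the full posterior, which a least-squares fit does not provide.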

Tracking the fundamental frequency (f0) of a monophonic instrumental performance is effectively a solved problem, with several solutions achieving 99% accuracy. However, the related task of automatic music transcription requires a further processing step to segment an f0 contour into discrete notes. This sub-task of note segmentation is necessary to enable a range of applications including musicological analysis and symbolic music generation. Building on CREPE, a state-of-the-art monophonic pitch tracking solution based on a simple neural network, we propose a simple and effective method for post-processing CREPE's output to achieve monophonic note segmentation. The proposed method demonstrates state-of-the-art results on two challenging datasets of monophonic instrumental music. Our approach also gives a 97% reduction in the total number of parameters used when compared with other deep-learning-based methods.
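To make the note-segmentation sub-task concrete, here is a naive baseline (our sketch, not the paper's method): convert the frame-level f0 contour to cents and start a new note whenever the pitch departs from the running note median by more than a threshold.

```python
import numpy as np

def segment_notes(f0_hz, hop_s=0.01, cents_jump=50.0, min_dur_s=0.05):
    """Naive post-processing of a frame-level f0 contour (e.g. CREPE
    output) into (start_s, end_s, median_cents) note tuples. A new note
    begins when the pitch moves more than `cents_jump` cents away from
    the running median of the current note. Illustrative baseline only."""
    cents = 1200.0 * np.log2(np.asarray(f0_hz, float) / 440.0)  # rel. A4
    notes, start = [], 0
    for i in range(1, len(cents)):
        if abs(cents[i] - np.median(cents[start:i])) > cents_jump:
            notes.append((start * hop_s, i * hop_s,
                          float(np.median(cents[start:i]))))
            start = i
    notes.append((start * hop_s, len(cents) * hop_s,
                  float(np.median(cents[start:]))))
    # drop spurious very short notes (e.g. octave glitches in the tracker)
    return [(s, e, c) for (s, e, c) in notes if e - s >= min_dur_s]
```

On a synthetic contour holding A4 (440 Hz) for one second and then B4 (≈494 Hz) for another, the function returns exactly two notes roughly 200 cents apart.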

When the eigenvalues of the coefficient matrix for a linear scalar ordinary differential equation are of large magnitude, its solutions exhibit complicated behaviour, such as high-frequency oscillations, rapid growth or rapid decay. The cost of representing such solutions using standard techniques grows with the magnitudes of the eigenvalues. As a consequence, the running times of most solvers for ordinary differential equations also grow with these eigenvalues. However, a large class of scalar ordinary differential equations with slowly-varying coefficients admit slowly-varying phase functions that can be represented at a cost which is bounded independent of the magnitudes of the eigenvalues of the corresponding coefficient matrix. Here, we introduce a numerical algorithm for constructing slowly-varying phase functions which represent the solutions of a linear scalar ordinary differential equation. Our method's running time depends on the complexity of the equation's coefficients, but is bounded independent of the magnitudes of the equation's eigenvalues. Once the phase functions have been constructed, essentially any reasonable initial or boundary value problem for the scalar equation can be easily solved. We present the results of numerical experiments showing that, despite its greater generality, our algorithm is competitive with state-of-the-art methods for solving highly-oscillatory second order differential equations. We also compare our method with Magnus-type exponential integrators and find that our approach is orders of magnitude faster in the high-frequency regime.
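For concreteness, in the second-order case $y''(t) + \lambda^2 q(t)\, y(t) = 0$ a phase function $\psi$ satisfies Kummer's equation, standard in the phase-function literature (the notation below is ours, not taken from the abstract):

```latex
\lambda^2 q(t) \;=\; \bigl(\psi'(t)\bigr)^2
  \;-\; \frac{3}{4}\left(\frac{\psi''(t)}{\psi'(t)}\right)^{2}
  \;+\; \frac{1}{2}\,\frac{\psi'''(t)}{\psi'(t)},
\qquad
u(t) = \frac{\cos\psi(t)}{\sqrt{\psi'(t)}}, \quad
v(t) = \frac{\sin\psi(t)}{\sqrt{\psi'(t)}},
```

where $u$ and $v$ form a basis of solutions. When $q$ is slowly varying, the leading-order behaviour $\psi'(t) \approx \lambda\sqrt{q(t)}$ is itself slowly varying no matter how large $\lambda$ is, which is why a slowly-varying phase function can be represented at a cost independent of the eigenvalue magnitudes.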

For a nonlinear dynamical system that depends on parameters, the paper introduces a novel tensorial reduced-order model (TROM). The reduced model is projection-based, and for systems with no parameters involved, it resembles proper orthogonal decomposition (POD) combined with the discrete empirical interpolation method (DEIM). For parametric systems, TROM employs low-rank tensor approximations in place of truncated SVD, a key dimension-reduction technique in POD with DEIM. Three popular low-rank tensor compression formats are considered for this purpose: canonical polyadic, Tucker, and tensor train. The use of multilinear algebra tools allows the incorporation of information about the parameter dependence of the system into the reduced model and leads to a POD-DEIM type ROM that (i) is parameter-specific (localized) and predicts the system dynamics for out-of-training set (unseen) parameter values, (ii) mitigates the adverse effects of high parameter space dimension, (iii) has online computational costs that depend only on tensor compression ranks but not on the full-order model size, and (iv) achieves lower reduced space dimensions compared to the conventional POD-DEIM ROM. The paper explains the method, analyzes its prediction power, and assesses its performance for two specific parameter-dependent nonlinear dynamical systems.
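As background for the POD-DEIM building blocks that TROM generalizes, the sketch below (our illustration, not the paper's tensorial method) computes a POD basis by truncated SVD of a snapshot matrix and selects DEIM interpolation points greedily.

```python
import numpy as np

def pod_basis(snapshots, r):
    """POD: rank-r truncated SVD of the snapshot matrix (columns = states)."""
    U, _, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :r]

def deim_indices(U):
    """Greedy DEIM point selection for an orthonormal POD basis U (n x m):
    each new index maximizes the residual of the next mode with respect
    to interpolation at the points chosen so far."""
    idx = [int(np.argmax(np.abs(U[:, 0])))]
    for j in range(1, U.shape[1]):
        c = np.linalg.solve(U[np.ix_(idx, range(j))], U[idx, j])
        r = U[:, j] - U[:, :j] @ c     # residual vanishes at chosen points
        idx.append(int(np.argmax(np.abs(r))))
    return np.array(idx)
```

In the tensorial ROM, the truncated SVD step is what gets replaced by a canonical polyadic, Tucker, or tensor-train compression of the parameter-dependent snapshot tensor.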
