
Partial Least Squares (PLS) regression emerged as an alternative to ordinary least squares for addressing multicollinearity in a wide range of scientific applications. As multidimensional tensor data become more widespread, tensor adaptations of PLS have been developed. Our investigations reveal that the previously established asymptotic result for the PLS estimator with a tensor response breaks down as the tensor dimensions and the number of features increase relative to the sample size. To address this, we propose Sparse Higher Order Partial Least Squares (SHOPS) regression and an accompanying algorithm. SHOPS simultaneously accommodates variable selection, dimension reduction, and tensor association denoising. We establish the asymptotic accuracy of the SHOPS algorithm under a high-dimensional regime and verify the results through comprehensive simulation experiments and applications to two contemporary high-dimensional biological datasets.
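The classical vector-response PLS baseline that SHOPS generalizes can be sketched in a few lines of NumPy. The following is a minimal, hedged illustration of PLS1 via the NIPALS iteration with deflation (not the SHOPS algorithm itself); the data-generating setup is hypothetical and chosen only to exhibit multicollinearity:

```python
import numpy as np

def pls1(X, y, n_components):
    """PLS1 regression via NIPALS with deflation (vector response)."""
    x_mean, y_mean = X.mean(axis=0), y.mean()
    Xk, yk = X - x_mean, y - y_mean
    W, P, q = [], [], []
    for _ in range(n_components):
        w = Xk.T @ yk
        w /= np.linalg.norm(w)            # unit weight vector
        t = Xk @ w                        # latent score
        p = Xk.T @ t / (t @ t)            # X loadings
        c = (yk @ t) / (t @ t)            # y loading
        Xk = Xk - np.outer(t, p)          # deflate X
        yk = yk - c * t                   # deflate y
        W.append(w); P.append(p); q.append(c)
    W, P, q = np.column_stack(W), np.column_stack(P), np.array(q)
    beta = W @ np.linalg.solve(P.T @ W, q)  # coefficients in centered X space
    return beta, x_mean, y_mean

# Hypothetical collinear design: column 1 nearly duplicates column 0.
rng = np.random.default_rng(0)
n, p = 200, 6
X = rng.normal(size=(n, p))
X[:, 1] = X[:, 0] + 1e-3 * rng.normal(size=n)   # severe multicollinearity
y = X[:, 0] - 2.0 * X[:, 2] + 0.01 * rng.normal(size=n)

beta, xm, ym = pls1(X, y, n_components=4)
pred = (X - xm) @ beta + ym
print(float(np.mean((pred - y) ** 2)))   # small residual despite collinearity
```

Unlike ordinary least squares, the latent-score construction keeps the fit stable even though the sample covariance of X is nearly singular.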

Related content

A nonlinear-manifold reduced order model (NM-ROM) offers an effective way of incorporating underlying physics principles into a neural network-based data-driven approach. We combine NM-ROMs with domain decomposition (DD) for efficient computation. NM-ROMs offer benefits over linear-subspace ROMs (LS-ROMs) but can be costly to train because the number of parameters scales with the full-order model (FOM) size. To address this, we employ DD on the FOM, compute subdomain NM-ROMs, and then merge them into a global NM-ROM. This approach has several advantages: parallel training of subdomain NM-ROMs, fewer parameters than a global NM-ROM, and adaptability to subdomain-specific FOM features. Each subdomain NM-ROM uses a shallow, sparse autoencoder, enabling hyper-reduction (HR) for improved computational speed. In this paper, we detail an algebraic DD formulation for the FOM, train HR-equipped NM-ROMs for the subdomains, and numerically compare them to DD LS-ROMs with HR. Results show a significant accuracy improvement, of roughly an order of magnitude, for the proposed DD NM-ROMs over DD LS-ROMs in solving the 2D steady-state Burgers' equation.

In this contribution, we provide a new mass lumping scheme for explicit dynamics in isogeometric analysis (IGA). To this end, an element formulation based on the idea of dual functionals is developed. Non-Uniform Rational B-splines (NURBS) are applied as shape functions and their corresponding dual basis functions as test functions in the variational form, where two kinds of dual basis functions are compared. The first type are approximate dual basis functions (AD) with varying degrees of reproduction, resulting in banded mass matrices. Dual basis functions derived from the inversion of the Gram matrix (IG) are the second type and yield diagonal mass matrices directly. We show that the dual scheme can be applied as a transformation of the resulting system of equations based on NURBS as shape and test functions; hence, it can be easily implemented into existing IGA routines. Treating the application of dual test functions as a preconditioner reduces the additional computational effort, but it cannot eliminate it entirely, and the density of the stiffness matrix remains higher than in standard Bubnov-Galerkin formulations. In return, applying additional row-sum lumping to the mass matrices is either unnecessary (for IG) or the resulting loss of accuracy is reduced to a reasonable magnitude (for AD). Numerical examples show a significantly better approximation of the dynamic behavior for the dual lumping scheme compared to standard NURBS approaches that use row-sum lumping. Applying IG yields accurate numerical results without additional lumping, but as a result of the global support of the IG dual basis functions, fully populated stiffness matrices occur, which are entirely unsuitable for explicit dynamic simulations. Combining AD and row-sum lumping leads to an efficient computation with respect to both effort and accuracy.
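Row-sum lumping, referred to repeatedly above, is simple to state concretely: the consistent mass matrix is replaced by a diagonal matrix holding its row sums, which diagonalizes the system while preserving total mass. A minimal NumPy sketch (the single linear 1D element below is a hypothetical stand-in, not the NURBS setting of the paper):

```python
import numpy as np

# Consistent mass matrix of one linear 1D element of length h (unit density):
# M = h/6 * [[2, 1], [1, 2]].
h = 0.5
M = h / 6.0 * np.array([[2.0, 1.0], [1.0, 2.0]])

# Row-sum lumping: replace M by the diagonal matrix of its row sums.
M_lumped = np.diag(M.sum(axis=1))

print(np.diag(M_lumped))             # each entry equals h/2
print(M.sum(), M_lumped.sum())       # total mass h is preserved
```

The lumped matrix is trivially invertible, which is exactly what explicit time stepping needs; the price is the accuracy loss the abstract quantifies for the AD case.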

Most studies of adaptive algorithm behavior consider performance measures based on mean values such as the mean-square error. The derived models are useful for understanding the algorithm behavior under different environments and can be used for design. Nevertheless, from a practical point of view, the adaptive filter user has only one realization of the algorithm to obtain the desired result. This letter derives a model for the variance of the squared-error sample curve of the least-mean-square (LMS) adaptive algorithm, so that the achievable cancellation level can be predicted based on the properties of the steady-state squared error. The derived results provide the user with useful design guidelines.
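For readers unfamiliar with the setting, a minimal LMS implementation producing the squared-error sample curve analyzed above can be sketched as follows; the system-identification setup, filter length, and step size are hypothetical illustrations, not the letter's experiments:

```python
import numpy as np

def lms(x, d, n_taps, mu):
    """LMS adaptive filter; returns the squared-error sample curve and weights."""
    w = np.zeros(n_taps)
    e2 = np.zeros(len(x))
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]   # regressor, most recent sample first
        e = d[n] - w @ u                    # a priori error
        w += mu * e * u                     # LMS weight update
        e2[n] = e * e                       # one realization of the error curve
    return e2, w

rng = np.random.default_rng(1)
N = 5000
x = rng.normal(size=N)
h = np.array([0.5, -0.3, 0.2])              # unknown system to identify
d = np.convolve(x, h, mode="full")[:N] + 0.01 * rng.normal(size=N)

e2, w = lms(x, d, n_taps=3, mu=0.01)
print(e2[:100].mean(), e2[-100:].mean())    # steady-state error well below initial
```

Note that `e2` is exactly the single-realization quantity the letter models: its steady-state mean is predictable, but an individual user observes its fluctuations, hence the interest in its variance.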

Climate hazards can cause major disasters when they occur simultaneously as compound hazards. To understand the distribution of climate risk and inform adaptation policies, scientists need to simulate a large number of physically realistic and spatially coherent events. Current methods are limited by computational constraints, and the probabilistic spatial distribution of compound events is not given sufficient attention. The bottleneck in current approaches lies in modelling the dependence structure between variables, as inference on parametric models suffers from the curse of dimensionality. Generative adversarial networks (GANs) are well-suited to such a problem due to their ability to implicitly learn the distribution of data in high-dimensional settings. We employ a GAN to model the dependence structure for daily maximum wind speed, significant wave height, and total precipitation over the Bay of Bengal, combining this with traditional extreme value theory for controlled extrapolation of the tails. Once trained, the model can be used to efficiently generate thousands of realistic compound hazard events, informing risk assessments for climate adaptation and disaster preparedness. The method developed is flexible and transferable to other multivariate and spatial climate datasets.

In this work we present a space-time least-squares isogeometric discretization of the Schr\"odinger equation and propose a preconditioner for the arising linear system in the parametric domain. Exploiting the tensor-product structure of the basis functions, the preconditioner is written as a sum of Kronecker products of matrices. Thanks to an extension of the classical Fast Diagonalization method, the application of the preconditioner is efficient and robust with respect to the polynomial degree of the spline space. The time required for the application is almost proportional to the number of degrees of freedom for a serial execution.
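The key computational primitive behind such preconditioners, applying a Kronecker product to a vector without ever forming it, can be sketched in NumPy using the identity $(A \otimes B)\,\mathrm{vec}(X) = \mathrm{vec}(B X A^T)$; this is a generic illustration of the building block, not the paper's Fast Diagonalization extension:

```python
import numpy as np

def kron_matvec(A, B, x):
    """Apply (A kron B) to x without forming the Kronecker product.

    Uses (A kron B) vec(X) = vec(B X A^T), with x = vec(X) stored
    column-major (Fortran order), matching the usual vec convention.
    """
    X = x.reshape(B.shape[1], A.shape[1], order="F")
    return (B @ X @ A.T).reshape(-1, order="F")

rng = np.random.default_rng(2)
A = rng.normal(size=(4, 4))
B = rng.normal(size=(5, 5))
x = rng.normal(size=20)

fast = kron_matvec(A, B, x)
dense = np.kron(A, B) @ x
print(np.allclose(fast, dense))   # True
```

The structured application costs two small matrix products instead of one large one, which is why the preconditioner's cost can stay almost proportional to the number of degrees of freedom.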

In this paper, we consider the two-sample location shift model, a classic semiparametric model introduced by Stein (1956). This model is known for its adaptive nature, enabling nonparametric estimation with full parametric efficiency. Existing nonparametric estimators of the location shift often depend on external tuning parameters, which restricts their practical applicability (Van der Vaart and Wellner, 2021). We demonstrate that introducing an additional assumption of log-concavity on the underlying density can alleviate the need for tuning parameters. We propose a one-step estimator for location shift estimation, utilizing log-concave density estimation techniques to facilitate tuning-free estimation of the efficient influence function. While we employ a truncated version of the one-step estimator for theoretical adaptivity, our simulations indicate that the one-step estimators perform best with zero truncation, eliminating the need for tuning during practical implementation.

We investigate an operator on classes of languages. For each class $C$, it outputs a new class $FO^2(I_C)$ associated with a variant of two-variable first-order logic equipped with a signature $I_C$ built from $C$. For $C = \{\emptyset, A^*\}$, we get the variant $FO^2(<)$ equipped with the linear order. For $C = \{\emptyset, \{\varepsilon\}, A^+, A^*\}$, we get the variant $FO^2(<,+1)$, which also includes the successor. If $C$ consists of all Boolean combinations of languages $A^*aA^*$, where $a$ is a letter, we get the variant $FO^2(<,Bet)$, which also includes "between relations". We prove a generic algebraic characterization of the classes $FO^2(I_C)$. It smoothly and elegantly generalizes the known ones for all the aforementioned cases. Moreover, it implies that if $C$ has decidable separation (plus mild properties), then $FO^2(I_C)$ has a decidable membership problem. We actually work with an equivalent definition of $FO^2(I_C)$ in terms of unary temporal logic. For each class $C$, we consider a variant $TL(C)$ of unary temporal logic whose future/past modalities depend on $C$ and such that $TL(C) = FO^2(I_C)$. Finally, we also characterize $FL(C)$ and $PL(C)$, the pure-future and pure-past restrictions of $TL(C)$. These characterizations likewise imply that if $C$ is a class with decidable separation, then $FL(C)$ and $PL(C)$ have decidable membership.

We prove closed-form equations for the exact high-dimensional asymptotics of a family of first-order gradient-based methods, learning an estimator (e.g., an M-estimator or a shallow neural network) from observations on Gaussian data with empirical risk minimization. This includes widely used algorithms such as stochastic gradient descent (SGD) or Nesterov acceleration. The obtained equations match those resulting from the discretization of dynamical mean-field theory (DMFT) equations from statistical physics when applied to gradient flow. Our proof method allows us to give an explicit description of how memory kernels build up in the effective dynamics, and to include non-separable update functions, allowing datasets with non-identity covariance matrices. Finally, we provide numerical implementations of the equations for SGD with generic extensive batch sizes and constant learning rates.
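As a generic illustration of the setting analyzed above (mini-batch SGD with an extensive batch size and a constant learning rate on Gaussian data), the following hedged NumPy sketch runs SGD on a squared-loss empirical risk; the dimensions, step size, and noise level are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(3)
n, d = 2000, 50
X = rng.normal(size=(n, d)) / np.sqrt(d)      # Gaussian data, rows ~ N(0, I/d)
w_star = rng.normal(size=d)                   # ground-truth estimator
y = X @ w_star + 0.1 * rng.normal(size=n)     # noisy observations

def sgd_erm(X, y, batch=200, lr=0.5, steps=500):
    """Mini-batch SGD on the squared-loss empirical risk with constant lr."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(steps):
        idx = rng.choice(n, size=batch, replace=False)   # extensive batch
        grad = X[idx].T @ (X[idx] @ w - y[idx]) / batch  # batch gradient
        w -= lr * grad
    return w

w = sgd_erm(X, y)
risk = float(np.mean((X @ w - y) ** 2))
print(risk)   # approaches the additive-noise level
```

The DMFT-type equations of the paper characterize exactly this kind of trajectory in the limit where n and d grow proportionally.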

Tensor operations play an essential role in various fields of science and engineering, including multiway data analysis. In this study, we establish a few basic properties of the range and null space of a tensor using block circulant matrices and the discrete Fourier matrix. We then discuss the outer inverse of third-order tensors based on the $t$-product, with a prescribed range and kernel. We address the relation of this outer inverse to other generalized inverses, such as the Moore-Penrose inverse, the group inverse, and the Drazin inverse. In addition, we present a few algorithms for computing the outer inverses of tensors. In particular, a $t$-QR decomposition based algorithm is developed for computing the outer inverses.
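The $t$-product underlying these results admits a compact implementation: block circulant matrices are diagonalized by the DFT along the third mode, so the product reduces to slice-wise matrix products in the Fourier domain. A minimal NumPy sketch, with a consistency check against the block-circulant definition:

```python
import numpy as np

def t_product(A, B):
    """t-product of third-order tensors via FFT along the third mode.

    Equivalent to fold(bcirc(A) @ unfold(B)), but computed slice-wise:
    C_hat[:, :, k] = A_hat[:, :, k] @ B_hat[:, :, k].
    """
    Ah = np.fft.fft(A, axis=2)
    Bh = np.fft.fft(B, axis=2)
    Ch = np.einsum("ijk,jlk->ilk", Ah, Bh)   # per-slice matrix products
    return np.real(np.fft.ifft(Ch, axis=2))

def bcirc(A):
    """Block circulant matrix built from the frontal slices of A."""
    n1, n2, n3 = A.shape
    M = np.zeros((n1 * n3, n2 * n3))
    for i in range(n3):
        for j in range(n3):
            M[i*n1:(i+1)*n1, j*n2:(j+1)*n2] = A[:, :, (i - j) % n3]
    return M

rng = np.random.default_rng(4)
A = rng.normal(size=(3, 4, 5))
B = rng.normal(size=(4, 2, 5))
C = t_product(A, B)

# unfold stacks the frontal slices vertically, matching the bcirc definition.
unfold = lambda T: T.transpose(2, 0, 1).reshape(T.shape[2] * T.shape[0], T.shape[1])
print(np.allclose(unfold(C), bcirc(A) @ unfold(B)))   # True
```

The FFT route costs O(n1 n2 n4 n3 + n1 n2 n3 log n3) instead of forming the dense block circulant matrix, which is why it is the standard way to compute $t$-product-based inverses in practice.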

We consider in this paper a numerical approximation of the Poisson-Nernst-Planck-Navier-Stokes (PNP-NS) system. We construct a decoupled semi-discrete and fully discrete scheme that enjoys the properties of positivity preservation, mass conservation, and unconditional energy stability. Then, we establish the well-posedness and regularity of the initial and (periodic) boundary value problem of the PNP-NS system under suitable assumptions on the initial data, and carry out a rigorous convergence analysis for the fully discretized scheme. We also present some numerical results to validate the positivity-preserving property and the accuracy of our scheme.
