
In this article, we develop an asymptotic method for constructing confidence regions for the set of all linear subspaces arising from PCA, from which we derive hypothesis tests on this set. Our method is based on the geometry of Riemannian manifolds with which some sets of linear subspaces are endowed.
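
The sets of linear subspaces alluded to here are typically modelled as Grassmann manifolds. As a purely illustrative sketch (not the authors' confidence-region construction), the following NumPy snippet extracts a k-dimensional principal subspace from data and measures the geodesic (principal-angle) distance between two such subspaces, which is the kind of Riemannian geometry the method builds on.

    import numpy as np

    def pca_subspace(X, k):
        # orthonormal basis of the k-dimensional principal subspace of the sample covariance
        Xc = X - X.mean(axis=0)
        _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
        return Vt[:k].T                               # shape (p, k)

    def grassmann_distance(U, V):
        # geodesic distance on the Grassmannian, computed from the principal angles
        s = np.linalg.svd(U.T @ V, compute_uv=False)
        theta = np.arccos(np.clip(s, -1.0, 1.0))
        return np.linalg.norm(theta)

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 5)) * np.array([3.0, 2.0, 1.0, 0.5, 0.1])
    Y = rng.normal(size=(500, 5)) * np.array([3.0, 2.0, 1.0, 0.5, 0.1])
    print(grassmann_distance(pca_subspace(X, 2), pca_subspace(Y, 2)))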

Related content

In this paper, we report our experiments with various strategies to improve code-mixed humour and sarcasm detection. We conducted all of our experiments for the Hindi-English code-mixed scenario, as we have the linguistic expertise for it. We experimented with three approaches, namely (i) native sample mixing, (ii) multi-task learning (MTL), and (iii) prompting very large multilingual language models (VMLMs). In native sample mixing, we added monolingual task samples to the code-mixed training sets. In the MTL approach, we relied on native and code-mixed samples of a semantically related task (hate detection in our case). Finally, in our third approach, we evaluated the efficacy of VMLMs via few-shot in-context prompting. Our main findings are that (i) adding native samples improved humour (raising the F1-score by up to 6.76%) and sarcasm (raising the F1-score by up to 8.64%) detection, (ii) training MLMs in an MTL framework boosted performance for both humour (raising the F1-score by up to 10.67%) and sarcasm (an increment of up to 12.35% in F1-score) detection, and (iii) prompting VMLMs could not outperform the other approaches. Finally, our ablation studies and error analysis identified the cases where our models have yet to improve. We provide our code for reproducibility.
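
As a toy illustration of the MTL idea (a shared encoder with one classification head per task), here is a minimal PyTorch sketch; the dimensions and the bag-of-embeddings encoder are hypothetical simplifications, since the paper fine-tunes pretrained multilingual language models rather than training such a model from scratch.

    import torch
    import torch.nn as nn

    class SharedEncoderMTL(nn.Module):
        # hypothetical toy model: one shared encoder, task-specific heads
        # (e.g. task 0 = humour/sarcasm detection, task 1 = hate detection)
        def __init__(self, vocab_size=30000, hidden=256, n_tasks=2, n_classes=2):
            super().__init__()
            self.encoder = nn.EmbeddingBag(vocab_size, hidden)
            self.heads = nn.ModuleList(
                [nn.Linear(hidden, n_classes) for _ in range(n_tasks)]
            )

        def forward(self, token_ids, offsets, task_id):
            h = self.encoder(token_ids, offsets)   # shared representation
            return self.heads[task_id](h)          # task-specific prediction

    # training would alternate mini-batches from the two tasks, back-propagating
    # each task's cross-entropy loss through the shared encoder
    model = SharedEncoderMTL()
    loss_fn = nn.CrossEntropyLoss()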

This paper presents an analysis of properties of two hybrid discretization methods for Gaussian derivatives, based on convolutions with either the normalized sampled Gaussian kernel or the integrated Gaussian kernel followed by central differences. The motivation for studying these discretization methods is that in situations when multiple spatial derivatives of different order are needed at the same scale level, they can be computed significantly more efficiently than with more direct derivative approximations based on explicit convolutions with either sampled Gaussian kernels or integrated Gaussian kernels. While these computational benefits also hold for the genuinely discrete approach for computing discrete analogues of Gaussian derivatives, based on convolution with the discrete analogue of the Gaussian kernel followed by central differences, the underlying mathematical primitives for the discrete analogue of the Gaussian kernel, in terms of modified Bessel functions of integer order, may not be available in certain frameworks for image processing, such as when performing deep learning based on scale-parameterized filters in terms of Gaussian derivatives, with learning of the scale levels. In this paper, we present a characterization of the properties of these hybrid discretization methods, in terms of quantitative performance measures concerning the amount of spatial smoothing that they imply, as well as the relative consistency of scale estimates obtained from scale-invariant feature detectors with automatic scale selection. We place particular emphasis on the behaviour for very small values of the scale parameter, which may differ significantly from corresponding results obtained from the fully continuous scale-space theory, as well as between different types of discretization methods.
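
A minimal 1-D sketch of the first hybrid discretization discussed above, i.e. convolution with the normalized sampled Gaussian kernel followed by central differences, assuming SciPy, unit grid spacing, and a truncation radius of 4 standard deviations (an arbitrary choice for illustration):

    import numpy as np
    from scipy.ndimage import convolve1d

    def sampled_gaussian(sigma):
        # truncated and normalized sampled Gaussian kernel
        radius = int(np.ceil(4.0 * sigma))
        x = np.arange(-radius, radius + 1)
        g = np.exp(-x**2 / (2.0 * sigma**2))
        return g / g.sum()

    def hybrid_derivatives(signal, sigma):
        # smooth once, then approximate derivatives by central differences (unit grid spacing)
        smoothed = convolve1d(signal, sampled_gaussian(sigma), mode='nearest')
        dx  = convolve1d(smoothed, np.array([0.5, 0.0, -0.5]), mode='nearest')   # first order
        dxx = convolve1d(smoothed, np.array([1.0, -2.0, 1.0]), mode='nearest')   # second order
        return smoothed, dx, dxx

    signal = np.sin(0.1 * np.arange(200))
    L, Lx, Lxx = hybrid_derivatives(signal, sigma=2.0)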

In this note, we derive closed-form formulae for the moments of Student's t-distribution, in the one-dimensional case as well as in higher dimensions, through a unified probability framework. Interestingly, the closed-form expressions for the moments of Student's t-distribution can be written in terms of the familiar Gamma function, Kummer's confluent hypergeometric function, and the hypergeometric function.
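
For context, the classical special case for the standard univariate Student's $t$-distribution with $\nu$ degrees of freedom (which a unified treatment of this kind recovers) reads
\[
\mathbb{E}\!\left[T^{2k}\right] \;=\; \nu^{k}\,\frac{\Gamma\!\left(k+\tfrac12\right)\Gamma\!\left(\tfrac{\nu}{2}-k\right)}{\Gamma\!\left(\tfrac12\right)\Gamma\!\left(\tfrac{\nu}{2}\right)}, \qquad 2k<\nu,
\]
with all existing odd moments equal to zero; for $k=1$ this reduces to the familiar variance $\nu/(\nu-2)$.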

This work presents an abstract framework for the design, implementation, and analysis of the multiscale spectral generalized finite element method (MS-GFEM), a particular numerical multiscale method originally proposed in [I. Babuska and R. Lipton, Multiscale Model. Simul., 9 (2011), pp.~373--406]. MS-GFEM is a partition of unity method employing optimal local approximation spaces constructed from local spectral problems. We establish a general local approximation theory demonstrating exponential convergence with respect to local degrees of freedom under certain assumptions, with explicit dependence on key problem parameters. Our framework applies to a broad class of multiscale PDEs with $L^{\infty}$-coefficients in both continuous and discrete, finite element settings, including highly indefinite problems (convection-dominated diffusion, as well as the high-frequency Helmholtz, Maxwell and elastic wave equations with impedance boundary conditions), and higher-order problems. Notably, we prove a local convergence rate of $O(e^{-cn^{1/d}})$ for MS-GFEM for all these problems, improving upon the $O(e^{-cn^{1/(d+1)}})$ rate shown by Babuska and Lipton. Moreover, based on the abstract local approximation theory for MS-GFEM, we establish a unified framework for deriving low-rank approximations to multiscale PDEs. This framework applies to the aforementioned problems, proving that the associated Green's functions admit an $O(|\log\epsilon|^{d})$-term separable approximation on well-separated domains with error $\epsilon>0$. Our analysis improves and generalizes the result in [M. Bebendorf and W. Hackbusch, Numerische Mathematik, 95 (2003), pp.~1--28], where an $O(|\log\epsilon|^{d+1})$-term separable approximation was proved for Poisson-type problems.
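
For orientation, the partition-of-unity mechanism behind MS-GFEM combines local approximations $u_i$ (drawn from the optimal local spectral spaces) into a global one via a partition of unity $\{\varphi_i\}$ subordinate to an overlapping cover $\{\omega_i\}$; schematically, and up to overlap- and partition-dependent constants (see the paper and the classical Melenk--Babuska partition of unity theory for the precise statement),
\[
\Big\| u - \sum_i \varphi_i u_i \Big\|_{H^1(\Omega)}^2 \;\lesssim\; \sum_i \Big( \|\nabla\varphi_i\|_{L^\infty(\omega_i)}^2 \, \|u-u_i\|_{L^2(\omega_i)}^2 + \|u-u_i\|_{H^1(\omega_i)}^2 \Big),
\]
so that exponentially convergent local approximations translate into the global rate $O(e^{-cn^{1/d}})$ quoted above.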

We design and investigate a variety of multigrid solvers for high-order local discontinuous Galerkin methods applied to elliptic interface and multiphase Stokes problems. Using the template of a standard multigrid V-cycle, we consider a variety of element-wise block smoothers, including Jacobi, multi-coloured Gauss-Seidel, processor-block Gauss-Seidel, and, of special interest, smoothers based on sparse approximate inverse (SAI) methods. In particular, we develop SAI methods that: (i) balance the smoothing of velocity and pressure variables in Stokes problems; and (ii) robustly handle high-contrast viscosity coefficients in multiphase problems. Across a broad range of two- and three-dimensional test cases, including Poisson, elliptic interface, steady-state Stokes, and unsteady Stokes problems, we examine a multitude of multigrid smoother and solver combinations. In every case, there is at least one approach that matches the performance of classical geometric multigrid algorithms, e.g., 4 to 8 iterations to reduce the residual by 10 orders of magnitude. We also discuss their relative merits with regard to simplicity, robustness, computational cost, and parallelisation.
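
The V-cycle template referred to above, stripped of the LDG and SAI specifics, is summarized in the following sketch: a geometric multilevel hierarchy for the 1-D Poisson problem with damped Jacobi smoothing and Galerkin coarse operators (purely illustrative; the paper's element-wise block and SAI smoothers would replace the Jacobi step).

    import numpy as np

    def poisson_1d(n):
        # 3-point finite-difference Laplacian with n interior points on (0, 1)
        h = 1.0 / (n + 1)
        return (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2

    def jacobi(A, b, x, sweeps=3, omega=2.0 / 3.0):
        d = np.diag(A)
        for _ in range(sweeps):
            x = x + omega * (b - A @ x) / d
        return x

    def v_cycle(A, b, x, level=0, max_level=4):
        n = len(b)
        if level == max_level or n < 3:
            return np.linalg.solve(A, b)              # coarsest grid: direct solve
        x = jacobi(A, b, x)                           # pre-smoothing
        r = b - A @ x
        nc = (n - 1) // 2                             # assumes n = 2^k - 1
        P = np.zeros((n, nc))                         # linear interpolation
        for j in range(nc):
            P[2 * j, j], P[2 * j + 1, j], P[2 * j + 2, j] = 0.5, 1.0, 0.5
        R = 0.5 * P.T                                 # full-weighting restriction
        e = v_cycle(R @ A @ P, R @ r, np.zeros(nc), level + 1, max_level)
        x = x + P @ e                                 # coarse-grid correction
        return jacobi(A, b, x)                        # post-smoothing

    n = 127
    A, b, x = poisson_1d(n), np.ones(n), np.zeros(n)
    for it in range(8):
        x = v_cycle(A, b, x)
        print(it, np.linalg.norm(b - A @ x))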

Detection of abrupt spatial changes in physical properties representing unique geometric features such as buried objects, cavities, and fractures is an important problem in geophysics and many engineering disciplines. In this context, simultaneous spatial field and geometry estimation methods that explicitly parameterize the background spatial field and the geometry of the embedded anomalies are of great interest. This paper introduces an advanced inversion procedure for simultaneous estimation using the domain independence property of the Karhunen-Lo\`eve (K-L) expansion. Previous methods pursuing this strategy face significant computational challenges. The associated integral eigenvalue problem (IEVP) needs to be solved repeatedly on evolving domains, and the shape derivatives in gradient-based algorithms require costly computations of the Moore-Penrose inverse. Leveraging the domain independence property of the K-L expansion, the proposed method avoids both of these bottlenecks, and the IEVP is solved only once on a fixed bounding domain. Comparative studies demonstrate that our approach yields a two-order-of-magnitude improvement in the K-L expansion gradient computation time. Inversion studies on one-dimensional and two-dimensional seepage flow problems highlight the benefits of incorporating geometry parameters along with spatial field parameters. The proposed method captures abrupt changes in hydraulic conductivity with fewer parameters and provides accurate estimates of boundary and spatial-field uncertainties, outperforming spatial-field-only estimation methods.
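
A minimal illustration of a K-L expansion on a fixed bounding domain, assuming a squared-exponential covariance kernel and a Nystrom-type discretization of the integral eigenvalue problem (the kernel, correlation length, and grid are illustrative choices, not the paper's setup):

    import numpy as np

    def kl_expansion(grid, corr_len=0.2, variance=1.0, n_terms=10):
        # squared-exponential covariance, discretized on a fixed bounding grid
        d = np.abs(grid[:, None] - grid[None, :])
        C = variance * np.exp(-0.5 * (d / corr_len) ** 2)
        w = np.full(len(grid), grid[1] - grid[0])           # quadrature weights (uniform grid)
        B = np.sqrt(w)[:, None] * C * np.sqrt(w)[None, :]   # symmetrized Nystrom eigenproblem
        vals, vecs = np.linalg.eigh(B)
        order = np.argsort(vals)[::-1][:n_terms]
        vals, vecs = vals[order], vecs[:, order]
        phi = vecs / np.sqrt(w)[:, None]                    # discrete eigenfunctions
        return vals, phi

    def sample_field(vals, phi, xi):
        # random-field realization parameterized by the K-L coefficients xi
        return phi @ (np.sqrt(np.maximum(vals, 0.0)) * xi)

    grid = np.linspace(0.0, 1.0, 200)
    vals, phi = kl_expansion(grid)
    field = sample_field(vals, phi, np.random.default_rng(0).standard_normal(10))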

In this study, we investigate a hybrid-type anisotropic weakly over-penalised symmetric interior penalty method for the Poisson equation on convex domains. Compared with the well-known hybrid discontinuous Galerkin methods, our approach is simple and easy to implement. Our primary contributions are the proposal of a new scheme and a proof concerning the consistency term, which allows us to estimate the anisotropic consistency error. The key idea of the proof is to apply the relation between the Raviart--Thomas finite element space and a discontinuous space. In numerical experiments, we compare the results obtained on standard and anisotropic mesh partitions.

This study presents a novel representation learning model tailored for dynamic networks, which describe the continuously evolving relationships among individuals within a population. The problem is cast as a dimension-reduction problem in functional data analysis. With dynamic networks represented as matrix-valued functions, our objective is to map this functional data into a set of vector-valued functions in a lower-dimensional learning space. This space, defined as a metric functional space, allows for the calculation of norms and inner products. By constructing this learning space, we address (i) attribute learning, (ii) community detection, and (iii) link prediction and recovery of individual nodes in the dynamic network. Our model also accommodates asymmetric low-dimensional representations, enabling the separate study of nodes' regulatory and receiving roles. Crucially, the learning method accounts for the time-dependency of networks, ensuring that representations are continuous over time. The functional learning space we define naturally spans the time frame of the dynamic networks, facilitating both the inference of network links at specific time points and the reconstruction of the entire network structure without direct observation. We validated our approach through simulation studies and real-world applications. In simulations, we compared our method's link-prediction performance to existing approaches under various data corruption scenarios. For real-world applications, we examined a dynamic social network replicated across six ant populations, demonstrating that our low-dimensional learning space effectively captures interactions, the roles of individual ants, and the social evolution of the network. Our findings align with existing knowledge of ant colony behavior.
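
As a much-simplified illustration of asymmetric low-dimensional node representations (not the authors' functional learning space, which additionally enforces continuity over time), one can factor each snapshot of a directed dynamic network separately; the left and right factors then play the roles of regulatory (sending) and receiving embeddings.

    import numpy as np

    def asymmetric_embeddings(adjacency_seq, rank=4):
        # adjacency_seq: array of shape (T, n, n), one (possibly directed) snapshot per time point
        T, n, _ = adjacency_seq.shape
        send = np.zeros((T, n, rank))
        recv = np.zeros((T, n, rank))
        for t in range(T):
            U, s, Vt = np.linalg.svd(adjacency_seq[t], full_matrices=False)
            send[t] = U[:, :rank] * np.sqrt(s[:rank])    # sending / regulatory roles
            recv[t] = Vt[:rank].T * np.sqrt(s[:rank])    # receiving roles
        return send, recv

    def reconstruct_links(send, recv, t):
        # low-rank reconstruction (or prediction) of link weights at time t
        return send[t] @ recv[t].T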

In this paper, we focus on efficiently and flexibly simulating the Fokker-Planck equation associated with the Nonlinear Noisy Leaky Integrate-and-Fire (NNLIF) model, which reflects the dynamic behavior of neuron networks. We apply the Galerkin spectral method to discretize the spatial domain by constructing a variational formulation that satisfies complex boundary conditions. Moreover, the boundary conditions in the variational formulation include only zeroth-order terms, with first-order conditions being naturally incorporated. This allows the numerical scheme to be further extended to an excitatory-inhibitory population model with synaptic delays and refractory states. Additionally, we establish the consistency of the numerical scheme. Experimental results, including accuracy tests, blow-up events, and periodic oscillations, validate the properties of our proposed method.
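
As a generic illustration of the Galerkin spectral ingredients (a modal Legendre basis, quadrature-assembled mass and stiffness matrices, and an implicit time step), here is a sketch for the much simpler problem $u_t = u_{xx}$ on $(-1,1)$ with homogeneous Dirichlet conditions; the NNLIF Fokker-Planck system and its flux boundary conditions require the variational treatment described above and are not reproduced here.

    import numpy as np
    from numpy.polynomial import legendre as leg

    N = 16                                        # number of modal basis functions
    xq, wq = leg.leggauss(N + 4)                  # Gauss-Legendre quadrature on (-1, 1)

    # modal basis phi_k = P_k - P_{k+2}, which vanishes at x = +/-1
    Phi = np.zeros((len(xq), N))
    dPhi = np.zeros((len(xq), N))
    for k in range(N):
        c = np.zeros(N + 3)
        c[k], c[k + 2] = 1.0, -1.0
        Phi[:, k] = leg.legval(xq, c)
        dPhi[:, k] = leg.legval(xq, leg.legder(c))

    M = Phi.T @ (wq[:, None] * Phi)               # mass matrix
    K = dPhi.T @ (wq[:, None] * dPhi)             # stiffness matrix

    # L2 projection of the initial condition onto the basis
    a = np.linalg.solve(M, Phi.T @ (wq * np.exp(-10.0 * xq**2)))

    # backward Euler for the weak form  M da/dt = -K a
    dt = 1.0e-3
    for _ in range(100):
        a = np.linalg.solve(M + dt * K, M @ a)

    u = Phi @ a                                   # solution values at the quadrature nodes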

The present paper evaluates the learning behaviour of a transformer-based neural network with regard to an irregular inflectional paradigm. We apply the paradigm cell filling problem to irregular patterns. We approach this problem using the morphological reinflection task and model it as a character sequence-to-sequence learning problem. The test case under investigation is irregular verbs in Spanish. Besides many regular verbs, Spanish has so-called L-shaped verbs, in which the first person singular indicative stem irregularly matches the subjunctive paradigm, while the other indicative forms remain unaltered. We examine the role of frequency during learning and compare models under differing input frequency conditions. We train the model on a corpus of Spanish with a realistic distribution of regular and irregular verbs to compare it with models trained on input with augmented distributions of (ir)regular words. We explore how the neural models learn this L-shaped pattern using post-hoc analyses. Our experiments show that, across frequency conditions, the models are surprisingly capable of learning the irregular pattern. Furthermore, our post-hoc analyses reveal the possible sources of errors. All code and data are available at \url{//anonymous.4open.science/r/modeling_spanish_acl-7567/} under the MIT license.
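
To make the task format concrete, a paradigm cell filling instance for an L-shaped verb such as "tener" can be encoded as a character-level sequence-to-sequence pair roughly as follows (the exact feature inventory and delimiters used in the paper may differ):

    # hypothetical character-level encoding of a morphological reinflection example
    def encode_source(lemma, features):
        return " ".join(list(lemma) + features)

    # 1st person singular, present indicative of the L-shaped verb "tener"
    src = encode_source("tener", ["IND", "PRS", "1", "SG"])   # "t e n e r IND PRS 1 SG"
    tgt = " ".join("tengo")                                   # "t e n g o"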
