
The extremile (Daouia, Gijbels and Stupfler, 2019) is a novel and coherent measure of risk, determined by weighted expectations rather than tail probabilities. It finds application in risk management and, in contrast to quantiles, it fulfills the axioms of consistency and takes the severity of tail losses into account. However, existing studies of extremiles (Daouia, Gijbels and Stupfler, 2019, 2022) involve unknown distribution functions, making it challenging to obtain a $\sqrt{n}$-consistent estimator of the unknown parameters in linear extremile regression. This article introduces a new definition of linear extremile regression together with an estimation method whose estimator is $\sqrt{n}$-consistent. Additionally, although the analysis of unlabeled data for extremes is a significant challenge and currently a topic of great interest in machine learning for various classification problems, we develop a semi-supervised framework for the proposed extremile regression that exploits unlabeled data; this framework can also improve estimation accuracy under model misspecification. Both simulations and real data analyses are conducted to illustrate the finite-sample performance of the proposed methods.
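
As a hedged illustration of the weighted-expectation idea, the sketch below estimates a sample extremile as a weighted average of order statistics. The power-kernel weights used here are an assumption taken from the general extremile literature rather than from this abstract, and the function names are hypothetical.

```python
import numpy as np

def extremile_weights(tau, n):
    """Weights K_tau(i/n) - K_tau((i-1)/n) on the order statistics.

    Assumes the power-kernel form commonly used for extremiles:
    K_tau(t) = t**r(tau) for tau >= 1/2 and 1 - (1-t)**s(tau) otherwise.
    """
    t = np.arange(n + 1) / n
    if tau >= 0.5:
        r = np.log(0.5) / np.log(tau)
        K = t ** r
    else:
        s = np.log(0.5) / np.log(1.0 - tau)
        K = 1.0 - (1.0 - t) ** s
    return np.diff(K)

def sample_extremile(y, tau):
    """Sample extremile: a weighted average of the order statistics of y."""
    y_sorted = np.sort(np.asarray(y, dtype=float))
    return float(extremile_weights(tau, y_sorted.size) @ y_sorted)

rng = np.random.default_rng(0)
y = rng.standard_normal(10_000)
print(sample_extremile(y, 0.5))   # close to the sample mean
print(sample_extremile(y, 0.95))  # pulled toward the upper tail
```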

Related content

Linear regression and classification methods with repeated functional data are considered. For each statistical unit in the sample, a real-valued parameter is observed over time under different conditions. Two regression methods based on fusion penalties are presented. The first one is a generalization of the variable fusion methodology based on the 1-nearest neighbor. The second one, called group fusion lasso, assumes some grouping structure of the conditions and allows for homogeneity among the regression coefficient functions within groups. A finite-sample numerical simulation and an application to EEG data are presented.
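
As a rough sketch of how a fusion penalty couples condition-wise coefficients, the code below fits a least-squares model with a quadratic (ridge-type) surrogate built from 1-nearest-neighbour differences; it is not the variable fusion or group fusion lasso estimator of this work, and the neighbour assignment and helper names are hypothetical.

```python
import numpy as np

def fusion_difference_matrix(n_conditions, neighbors):
    """Rows encode beta_l - beta_{nn(l)} for each condition l and its
    1-nearest-neighbour nn(l) (hypothetical neighbour assignment)."""
    D = np.zeros((n_conditions, n_conditions))
    for l, nn in enumerate(neighbors):
        D[l, l] = 1.0
        D[l, nn] = -1.0
    return D

def ridge_fusion_fit(X, y, neighbors, lam=1.0):
    """Least squares with a quadratic fusion penalty lam * ||D beta||^2.

    A smooth stand-in for an l1 fusion penalty: it shrinks coefficients of
    neighbouring conditions towards each other but does not fuse them exactly.
    """
    p = X.shape[1]
    D = fusion_difference_matrix(p, neighbors)
    return np.linalg.solve(X.T @ X + lam * D.T @ D, X.T @ y)

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 4))
beta_true = np.array([1.0, 1.0, -2.0, -2.0])        # two groups of conditions
y = X @ beta_true + 0.1 * rng.standard_normal(200)
beta_hat = ridge_fusion_fit(X, y, neighbors=[1, 0, 3, 2], lam=5.0)
print(np.round(beta_hat, 2))
```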

Flexoelectricity - the generation of an electric field in response to a strain gradient - is a universal electromechanical coupling that is dominant only at small scales because it requires high strain gradients. The phenomenon is governed by a set of coupled fourth-order partial differential equations (PDEs), which require $C^1$ continuity of the basis in finite element methods for the numerical solution. While isogeometric analysis (IGA) has been proven to meet this continuity requirement thanks to its higher-order B-spline basis functions, it is limited to simple geometries that can be discretized with a single IGA patch. For domains that require more than one patch for discretization, e.g., architected materials, IGA faces the challenge of only $C^0$ continuity across patch boundaries. Here we present a discontinuous Galerkin method-based isogeometric analysis framework capable of solving the fourth-order PDEs of flexoelectricity in the domain of truss-based architected materials. An interior-penalty stabilization is implemented to ensure the stability of the solution. The present formulation is advantageous over analogous finite element methods since interior boundary contributions need to be computed only on patch boundaries. As each strut can be modeled with only two trapezoidal patches, the number of $C^0$-continuous boundaries is greatly reduced. Further, we consider four unique unit cells to construct the truss lattices and analyze their flexoelectric response. The truss lattices show a higher magnitude of flexoelectricity than the solid beam and retain this superior electromechanical response as the size of the structure increases. These results indicate the potential of architected materials to scale up flexoelectricity to larger scales, towards achieving a universal electromechanical response in meso/macroscale dielectric materials.

We consider the problem of sequential change detection, where the goal is to design a scheme that detects any change in a parameter or functional $\theta$ of the data stream distribution with small detection delay, while guaranteeing control of the frequency of false alarms in the absence of changes. In this paper, we describe a simple reduction from sequential change detection to sequential estimation using confidence sequences: we begin a new $(1-\alpha)$-confidence sequence at each time step and proclaim a change when the intersection of all active confidence sequences becomes empty. We prove that the average run length is at least $1/\alpha$, resulting in a change detection scheme with minimal structural assumptions (thus allowing for possibly dependent observations and nonparametric distribution classes) but strong guarantees. Our approach bears an interesting parallel with the reduction from change detection to sequential testing of Lorden (1971) and the e-detector of Shin et al. (2022).
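
A minimal sketch of this reduction, under the simplifying assumption of independent 1-sub-Gaussian observations and a crude union-bound confidence sequence (far more conservative than the constructions the paper allows): start a new confidence sequence at every step and declare a change once the intersection of all active intervals is empty.

```python
import numpy as np

def radius(n, alpha):
    """Conservative anytime-valid radius for a 1-sub-Gaussian mean:
    a union bound over Hoeffding intervals with error alpha / (n * (n + 1))."""
    return np.sqrt(2.0 * np.log(2.0 * n * (n + 1) / alpha) / n)

def detect_change(stream, alpha=0.01):
    """Start a new confidence sequence at every step; declare a change
    when the intersection of all active intervals becomes empty."""
    xs = []
    for t, x in enumerate(stream, start=1):
        xs.append(x)
        lo, hi = -np.inf, np.inf
        for s in range(len(xs)):           # confidence sequence started after time s
            seg = xs[s:]
            m, r = np.mean(seg), radius(len(seg), alpha)
            lo, hi = max(lo, m - r), min(hi, m + r)
            if lo > hi:
                return t                   # intersection empty: change declared
    return None

rng = np.random.default_rng(2)
stream = np.concatenate([rng.normal(0, 1, 300), rng.normal(2, 1, 300)])
print(detect_change(stream))               # typically flags shortly after t = 300
```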

The Stochastic Primal-Dual Hybrid Gradient (SPDHG) algorithm, proposed by Chambolle et al. (2018), efficiently solves a wide class of nonsmooth large-scale optimization problems. In this paper we contribute to its theoretical foundations and prove its almost sure convergence for convex, but not necessarily strongly convex or smooth, functionals and for any random sampling. In addition, we study SPDHG for parallel magnetic resonance imaging reconstruction, where data from different coils are randomly selected at each iteration. We apply SPDHG with a wide range of random sampling methods and compare its performance across settings, including mini-batch size and step-size parameters. We show that the sampling can significantly affect the convergence speed of SPDHG and that in many cases an optimal sampling can be identified.
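
The sketch below spells out the SPDHG iteration on a toy problem of the form min_x sum_i 0.5*||A_i x - b_i||^2 + lam*||x||_1, with standard step-size choices and uniform sampling; the MRI operators, samplings and parameter settings studied in this work are not reproduced here.

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def spdhg(A_blocks, b_blocks, lam=0.1, n_iter=2000, gamma=0.99, seed=0):
    """SPDHG sketch for min_x sum_i 0.5*||A_i x - b_i||^2 + lam*||x||_1."""
    rng = np.random.default_rng(seed)
    m = len(A_blocks)
    n = A_blocks[0].shape[1]
    p = np.full(m, 1.0 / m)                           # uniform block sampling
    norms = [np.linalg.norm(A, 2) for A in A_blocks]  # spectral norms
    sigma = [gamma / nrm for nrm in norms]
    tau = gamma * min(p[i] / norms[i] for i in range(m))

    x = np.zeros(n)
    y = [np.zeros(A.shape[0]) for A in A_blocks]
    z = np.zeros(n)                                   # z = sum_i A_i^T y_i
    zbar = z.copy()
    for _ in range(n_iter):
        x = soft_threshold(x - tau * zbar, tau * lam)            # primal prox
        i = rng.choice(m, p=p)
        v = y[i] + sigma[i] * (A_blocks[i] @ x)
        y_new = (v - sigma[i] * b_blocks[i]) / (1.0 + sigma[i])  # prox of sigma * f_i^*
        delta = A_blocks[i].T @ (y_new - y[i])
        y[i] = y_new
        z = z + delta
        zbar = z + delta / p[i]                                  # extrapolation, theta = 1
    return x

rng = np.random.default_rng(3)
A = rng.standard_normal((120, 30))
x_true = np.zeros(30); x_true[:5] = 3.0
b = A @ x_true + 0.05 * rng.standard_normal(120)
A_blocks = np.array_split(A, 4); b_blocks = np.array_split(b, 4)
print(np.round(spdhg(A_blocks, b_blocks)[:8], 2))
```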

We resurrect the infamous harmonic mean estimator for computing the marginal likelihood (Bayesian evidence) and solve its problematic large variance. The marginal likelihood is a key component of Bayesian model selection, used to evaluate model posterior probabilities; however, its computation is challenging. The original harmonic mean estimator, first proposed by Newton and Raftery in 1994, computes the harmonic mean of the likelihood given samples from the posterior. It was immediately realised that the original estimator can fail catastrophically, since its variance can become very large (possibly not finite). A number of variants of the harmonic mean estimator have been proposed to address this issue, although none have proven fully satisfactory. We present the \emph{learnt harmonic mean estimator}, a variant of the original estimator that solves its large variance problem. This is achieved by interpreting the harmonic mean estimator as importance sampling and introducing a new target distribution. The new target distribution is learned to approximate the optimal but inaccessible target while minimising the variance of the resulting estimator. Since the estimator requires only samples from the posterior, it is agnostic to the sampling strategy used. We validate the estimator on a variety of numerical experiments, including a number of pathological examples where the original harmonic mean estimator fails catastrophically. We also consider a cosmological application, where our approach yields $\sim$3 to 6 times more samples than current state-of-the-art techniques in 1/3 of the time. In all cases our learnt harmonic mean estimator is shown to be highly accurate. The estimator is computationally scalable and can be applied to problems of dimension $O(10^3)$ and beyond. Code implementing the learnt harmonic mean estimator is made publicly available.
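
As a hedged sketch of the importance-sampling view, the code below computes a re-targeted harmonic mean estimate of the evidence on a conjugate Gaussian toy model, using a Gaussian fitted to the posterior samples (and narrowed) as a crude stand-in for the learnt target; none of this reflects the actual model used to learn the target in this work.

```python
import numpy as np

rng = np.random.default_rng(4)
data = rng.normal(1.0, 1.0, size=20)
n = data.size

# Conjugate Gaussian model: prior mu ~ N(0, 1), data ~ N(mu, 1).
post_var = 1.0 / (n + 1.0)
post_mean = data.sum() / (n + 1.0)

def log_norm(x, m, v):
    return -0.5 * np.log(2 * np.pi * v) - 0.5 * (x - m) ** 2 / v

def log_lik(mu):
    return np.sum(log_norm(data[None, :], mu[:, None], 1.0), axis=1)

def log_prior(mu):
    return log_norm(mu, 0.0, 1.0)

# Posterior samples (exact here; in general they would come from MCMC).
theta = rng.normal(post_mean, np.sqrt(post_var), size=50_000)

# Re-targeted harmonic mean: 1/z ~ mean of phi(theta) / (L(theta) * prior(theta)),
# with phi a normalised target concentrated inside the posterior.  Here phi is a
# Gaussian fitted to the samples and narrowed, a crude stand-in for the learnt target.
log_phi = log_norm(theta, theta.mean(), 0.5 * theta.var())
log_weights = log_phi - log_lik(theta) - log_prior(theta)
log_recip_z = np.log(np.mean(np.exp(log_weights - log_weights.max()))) + log_weights.max()
print("harmonic-mean log evidence:", -log_recip_z)

# Analytic check: log z = log L(mu) + log prior(mu) - log posterior(mu), for any mu.
mu0 = np.array([post_mean])
print("exact log evidence:        ",
      (log_lik(mu0) + log_prior(mu0) - log_norm(mu0, post_mean, post_var)).item())
```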

This document defines a method for FIR system modelling that is deliberately simple, as it relies only on phase introduction and removal (allpass filters). Because the magnitude is not altered, the processing is numerically stable. The method is limited to phase alteration that maintains the time-domain magnitude, so as to keep the system within its linear limits.
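
A small numpy illustration of the phase-only principle, not of the document's modelling procedure itself: replacing the phase of a spectrum while keeping its magnitude leaves the magnitude response unchanged, which is the property the stability argument rests on.

```python
import numpy as np

rng = np.random.default_rng(5)
x = rng.standard_normal(256)               # a toy FIR impulse response

# Phase-only modification: keep the magnitude spectrum, replace the phase.
X = np.fft.rfft(x)
new_phase = rng.uniform(-np.pi, np.pi, X.size)
new_phase[0] = 0.0                         # keep DC bin real
new_phase[-1] = 0.0                        # keep Nyquist bin real
Y = np.abs(X) * np.exp(1j * new_phase)
y = np.fft.irfft(Y, n=x.size)

# The frequency-domain magnitude is untouched by construction.
print(np.allclose(np.abs(np.fft.rfft(y)), np.abs(X)))   # True
```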

Improving the resolution of fluorescence microscopy beyond the diffraction limit can be achieved by acquiring and processing multiple images of the sample under different illumination conditions. One of the simplest techniques, Random Illumination Microscopy (RIM), forms the super-resolved image from the variance of images obtained with random speckled illuminations. However, the validity of this process has not been fully theorized. In this work, we characterize mathematically the sample information contained in the variance of diffraction-limited speckled images as a function of the statistical properties of the illuminations. We show that an unambiguous two-fold resolution gain is obtained when the speckle correlation length coincides with the width of the observation point spread function. Last, we analyze the difference between the variance-based techniques using random speckled illuminations (as in RIM) and those obtained using random fluorophore activation (as in Super-resolution Optical Fluctuation Imaging, SOFI).
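
The variance step at the heart of RIM is simple to write down; the sketch below shows only that step on a toy image stack and ignores the point-spread-function convolution and speckle statistics that the analysis in this work is actually about.

```python
import numpy as np

def variance_image(stack):
    """RIM-style estimate: per-pixel variance over a stack of diffraction-limited
    images acquired under random speckled illuminations."""
    stack = np.asarray(stack, dtype=float)
    return stack.var(axis=0)

# Toy stack: 100 noisy images of a point-like object under random illumination.
rng = np.random.default_rng(6)
obj = np.zeros((64, 64)); obj[32, 32] = 1.0
stack = [obj * rng.random((64, 64)) + 0.01 * rng.standard_normal((64, 64))
         for _ in range(100)]
print(variance_image(stack).shape)   # (64, 64)
```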

When a signal does not have a sparse structure itself but is sparse under a certain transform domain, the cosparse analysis model introduced by Nam et al. \cite{NS} provides a dual perspective on the sparse representation model. This paper mainly discusses the error estimation of the non-convex $\ell_p$ $(0<p<1)$ relaxation cosparse optimization model under noisy conditions. Compared with the existing literature, under the same conditions, the admissible range of the $\Omega$-RIP constant $\delta_{7s}$ given in this paper is wider. When $p=0.5$ and $\delta_{7s}=0.5$, the error constants $C_0$ and $C_1$ in this paper improve on the corresponding results in \cite{Cand,LiSong1}. Moreover, when $0<p<1$, the error bounds of the non-convex relaxation method are significantly smaller than those of the convex relaxation method. The experimental results verify the correctness of the theoretical analysis and illustrate that the $\ell_p$ $(0<p<1)$ method provides robust reconstruction for cosparse optimization problems.
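
For concreteness, the sketch below attacks an unconstrained version of the non-convex $\ell_p$ analysis problem with a generic iteratively reweighted least squares (IRLS) surrogate; this is a standard heuristic for such problems and is not the model or the guarantees analysed in this work.

```python
import numpy as np

def irls_lp_analysis(A, y, Omega, p=0.5, lam=0.1, n_iter=50, eps=1e-6):
    """IRLS sketch for min_x 0.5*||Ax - y||^2 + lam * sum_i |(Omega x)_i|^p.

    A generic reweighted-least-squares surrogate for the non-convex l_p
    analysis (cosparse) problem; names and parameters are illustrative.
    """
    x = np.linalg.lstsq(A, y, rcond=None)[0]
    for _ in range(n_iter):
        w = (np.abs(Omega @ x) ** 2 + eps) ** (p / 2.0 - 1.0)   # smoothed |.|^p weights
        H = A.T @ A + lam * p * Omega.T @ (w[:, None] * Omega)
        x = np.linalg.solve(H, A.T @ y)
    return x

rng = np.random.default_rng(7)
n = 100
Omega = np.diff(np.eye(n), axis=0)                 # finite-difference analysis operator
x_true = np.concatenate([np.zeros(40), 2 * np.ones(30), np.zeros(30)])  # cosparse signal
A = rng.standard_normal((60, n))
y = A @ x_true + 0.01 * rng.standard_normal(60)
x_hat = irls_lp_analysis(A, y, Omega, p=0.5, lam=0.05)
print(np.round(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true), 3))
```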

We propose and compare methods for the analysis of extreme events in complex systems governed by PDEs that involve random parameters, in situations where we are interested in quantifying the probability that a scalar function of the system's solution is above a threshold. If the threshold is large, this probability is small and its accurate estimation is challenging. To tackle this difficulty, we blend theoretical results from large deviation theory (LDT) with numerical tools from PDE-constrained optimization. Our methods first compute the parameters that minimize the LDT rate function over the set of parameters leading to extreme events, using adjoint methods to compute the gradient of this rate function. The minimizers give information about the mechanism of the extreme events as well as estimates of their probability. We then propose a series of methods to refine these estimates, either via importance sampling or via geometric approximation of the extreme event sets. Results are formulated for general parameter distributions, and detailed expressions are provided for Gaussian parameter distributions. We give theoretical and numerical arguments showing that the performance of our methods is insensitive to the extremeness of the events of interest. We illustrate the application of our approach by quantifying the probability of extreme tsunami events on shore. Tsunamis are typically caused by a sudden, unpredictable change of the ocean floor elevation during an earthquake. We model this change as a random process that takes the underlying physics into account, and we use the one-dimensional shallow water equation to model tsunamis numerically. In the context of this example, we compare our methods for extreme event probability estimation and determine which type of ocean floor elevation change leads to the largest tsunamis on shore.
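
A toy sketch of the two ingredients, assuming standard Gaussian parameters and a linear functional F so the exact answer is available for comparison; the generic constrained optimizer here stands in for the adjoint-based PDE-constrained solvers used in this work.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Toy setup: Gaussian parameters theta ~ N(0, I) and a scalar functional F.
a = np.array([1.0, 2.0, -1.0])
F = lambda theta: a @ theta
z = 8.0                                     # large threshold -> rare event

# Step 1 (LDT): minimise the rate function I(theta) = 0.5 * ||theta||^2
# over the extreme set {F(theta) >= z}.
res = minimize(lambda th: 0.5 * th @ th, x0=np.ones(3), method="SLSQP",
               constraints=[{"type": "ineq", "fun": lambda th: F(th) - z}])
theta_star, I_star = res.x, res.fun
print("LDT estimate exp(-I*):", np.exp(-I_star))

# Step 2 (refinement): importance sampling centred at the minimiser.
rng = np.random.default_rng(8)
samples = theta_star + rng.standard_normal((200_000, 3))
log_w = -samples @ theta_star + 0.5 * theta_star @ theta_star   # N(0,I) / N(theta*,I)
p_is = np.mean(np.exp(log_w) * (samples @ a >= z))
print("IS estimate          :", p_is)
print("exact (linear F)     :", norm.sf(z / np.linalg.norm(a)))
```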

Graph-centric artificial intelligence (graph AI) has achieved remarkable success in modeling interacting systems prevalent in nature, from dynamical systems in biology to particle physics. The increasing heterogeneity of data calls for graph neural architectures that can combine multiple inductive biases. However, combining data from various sources is challenging because the appropriate inductive bias may vary by data modality. Multimodal learning methods fuse multiple data modalities while leveraging cross-modal dependencies to address this challenge. Here, we survey 140 studies in graph-centric AI and find that diverse data types are increasingly brought together using graphs and fed into sophisticated multimodal models. These models stratify into image-, language-, and knowledge-grounded multimodal learning. Based on this categorization, we put forward an algorithmic blueprint for multimodal graph learning. The blueprint serves as a way to group state-of-the-art architectures that treat multimodal data by appropriately choosing four different components. This effort can pave the way for standardizing the design of sophisticated multimodal architectures for highly complex real-world problems.
