
We present a finite volume method preserving the invariant region property (IRP) for reaction-diffusion systems with quasimonotone functions, including nondecreasing, decreasing, and mixed quasimonotone systems. The diffusion terms are discretized by a finite volume method satisfying the discrete maximum principle (DMP), and the time derivatives by the backward Euler method. The discretization leads to an implicit, nonlinear scheme, which is proved to preserve the invariant region property unconditionally. We construct an iterative algorithm and prove that the invariant region property holds at each iteration step. Numerical examples confirm the accuracy and the invariant region property of our scheme.
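
For intuition only, here is a minimal 1D sketch of the same ingredients (not the paper's scheme): a backward Euler step for a scalar reaction-diffusion equation with a logistic reaction, discretized with a two-point-flux finite volume stencil and solved by Picard iteration. With this reaction term the interval [0, 1] is an invariant region; the grid size, time step, and reaction are illustrative choices.

```python
import numpy as np

# Minimal 1D sketch (illustrative parameters, not the paper's scheme):
# u_t = d u_xx + u(1 - u) on [0, 1] with Neumann boundaries; [0, 1] is an invariant region.
N, d, dt, h = 100, 1e-2, 1e-2, 1.0 / 100
f = lambda u: u * (1.0 - u)

# Two-point-flux finite volume Laplacian (an M-matrix, hence satisfying a DMP)
main = -2.0 * np.ones(N); main[0] = main[-1] = -1.0
A = (d / h**2) * (np.diag(main) + np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1))
M = np.eye(N) - dt * A                      # backward Euler system matrix

u = 0.5 + 0.4 * np.sin(2 * np.pi * np.linspace(0, 1, N))
for step in range(200):
    v = u.copy()
    for _ in range(50):                     # Picard iteration for the implicit nonlinear step
        v_new = np.linalg.solve(M, u + dt * f(v))
        if np.max(np.abs(v_new - v)) < 1e-12:
            v = v_new
            break
        v = v_new
    u = v
    assert -1e-10 <= u.min() and u.max() <= 1 + 1e-10, "left the invariant region [0, 1]"
```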

Related Content

Multi-contrast (MC) Magnetic Resonance Imaging (MRI) reconstruction aims to incorporate a reference image of an auxiliary modality to guide the reconstruction of the target modality. Known MC reconstruction methods perform well with a fully sampled reference image, but usually perform worse than single-contrast (SC) methods when the reference image is missing or of low quality. To address this issue, we propose DuDoUniNeXt, a unified dual-domain MRI reconstruction network that can accommodate scenarios with absent, low-quality, and high-quality reference images. DuDoUniNeXt adopts a hybrid backbone that combines CNN and ViT, enabling specific adjustments for image-domain and k-space reconstruction. Specifically, an adaptive coarse-to-fine feature fusion module (AdaC2F) is devised to dynamically process information from reference images of varying quality. Besides, a partially shared shallow feature extractor (PaSS) is proposed, which uses shared and distinct parameters to handle consistent and discrepant information among contrasts. Experimental results demonstrate that the proposed model significantly surpasses state-of-the-art SC and MC models. Ablation studies show the effectiveness of the proposed hybrid backbone, AdaC2F, PaSS, and the dual-domain unified learning scheme.
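
As a rough illustration of the partially shared idea only (not the actual PaSS module), the PyTorch sketch below uses one convolution shared across contrasts for consistent features and one per-contrast convolution for discrepant features; the layer sizes and fusion by concatenation are arbitrary assumptions.

```python
import torch
import torch.nn as nn

class PartiallySharedExtractor(nn.Module):
    """Illustrative sketch of a partially shared shallow extractor (not the paper's PaSS):
    one convolution is shared across contrasts, one is specific to each contrast."""
    def __init__(self, n_contrasts=2, channels=32):
        super().__init__()
        self.shared = nn.Conv2d(1, channels, 3, padding=1)             # consistent information
        self.specific = nn.ModuleList(
            nn.Conv2d(1, channels, 3, padding=1) for _ in range(n_contrasts)
        )                                                              # discrepant information

    def forward(self, x, contrast_idx):
        return torch.cat([self.shared(x), self.specific[contrast_idx](x)], dim=1)

# Toy usage: target (index 0) and reference (index 1) images of size 64x64
extractor = PartiallySharedExtractor()
target = torch.randn(1, 1, 64, 64)
reference = torch.randn(1, 1, 64, 64)
feat_t = extractor(target, 0)        # shape (1, 64, 64, 64)
feat_r = extractor(reference, 1)
```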

We propose a new concept of lifts of reversible diffusion processes and show that various well-known non-reversible Markov processes arising in applications are lifts in this sense of simple reversible diffusions. Furthermore, we introduce a concept of non-asymptotic relaxation times and show that these can at most be reduced by a square root through lifting, generalising a related result in discrete time. Finally, we demonstrate how the recently developed approach to quantitative hypocoercivity based on space-time Poincaré inequalities can be rephrased and simplified in the language of lifts and how it can be applied to find optimal lifts.
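
A standard example commonly discussed in this context, used here purely for illustration, is kinetic (underdamped) Langevin dynamics, whose position marginal targets the same distribution as the reversible overdamped Langevin diffusion. The sketch below simulates both for a standard Gaussian target with an Euler-Maruyama discretisation; the step size and friction are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
dt, T, gamma = 1e-2, 10_000, 1.0           # illustrative step size, number of steps, friction
grad_U = lambda x: x                       # U(x) = x^2 / 2, i.e. standard Gaussian target

# Reversible overdamped Langevin: dX = -grad U(X) dt + sqrt(2) dW
x, xs = 0.0, []
for _ in range(T):
    x += -grad_U(x) * dt + np.sqrt(2 * dt) * rng.standard_normal()
    xs.append(x)

# Non-reversible lift: kinetic Langevin on (position q, velocity v)
q, v, qs = 0.0, 0.0, []
for _ in range(T):
    v += (-grad_U(q) - gamma * v) * dt + np.sqrt(2 * gamma * dt) * rng.standard_normal()
    q += v * dt
    qs.append(q)

print("overdamped mean/var:", np.mean(xs), np.var(xs))
print("kinetic    mean/var:", np.mean(qs), np.var(qs))   # both approximate N(0, 1)
```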

This paper presents a novel stochastic optimisation methodology to perform empirical Bayesian inference in semi-blind image deconvolution problems. Given a blurred image and a parametric class of possible operators, the proposed optimisation approach automatically calibrates the parameters of the blur model by maximum marginal likelihood estimation, followed by (non-blind) image deconvolution by maximum-a-posteriori estimation conditionally on the estimated model parameters. In addition to the blur model, the proposed approach also automatically calibrates the noise variance as well as any regularisation parameters. The marginal likelihood of the blur, noise variance, and regularisation parameters is generally computationally intractable, as it requires calculating several integrals over the entire solution space. Our approach addresses this difficulty by using a stochastic approximation proximal gradient optimisation scheme, which iteratively approximates such integrals by using a Moreau-Yosida regularised unadjusted Langevin Markov chain Monte Carlo algorithm. This optimisation strategy can be applied easily and efficiently to any log-concave model, using the same gradient and proximal operators that are required to compute the maximum-a-posteriori solution by convex optimisation. We provide convergence guarantees for the proposed optimisation scheme under realistic and easily verifiable conditions, and subsequently demonstrate the effectiveness of the approach with a series of deconvolution experiments and comparisons with alternative strategies from the state of the art.
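
For intuition about the sampling component only, here is a minimal Moreau-Yosida regularised unadjusted Langevin (MYULA) sketch targeting a simple log-concave posterior with a Gaussian likelihood and an l1 prior, whose proximal operator is soft-thresholding. It is not the paper's SAPG scheme, and the dimensions, step size, smoothing parameter, and regularisation weight are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
n, sigma, lam = 50, 0.5, 1.0               # illustrative dimension and parameters
x_true = np.zeros(n); x_true[:5] = 3.0
y = x_true + sigma * rng.standard_normal(n)

grad_f = lambda x: (x - y) / sigma**2      # gradient of the smooth negative log-likelihood
prox_g = lambda x, t: np.sign(x) * np.maximum(np.abs(x) - t * lam, 0.0)   # l1 prox

delta, theta = 1e-2, 1e-1                  # step size and Moreau-Yosida smoothing parameter
x, samples = np.zeros(n), []
for k in range(5000):
    # MYULA step: Langevin step on the smooth part plus the gradient of the
    # Moreau envelope of the non-smooth prior, (x - prox_g(x, theta)) / theta
    drift = grad_f(x) + (x - prox_g(x, theta)) / theta
    x = x - delta * drift + np.sqrt(2 * delta) * rng.standard_normal(n)
    samples.append(x.copy())

print("posterior mean estimate (first 6 coords):", np.mean(samples[1000:], axis=0)[:6])
```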

Recently, order-preserving pattern (OPP) mining, a new sequential pattern mining method, has been proposed to mine frequent relative orders in a time series. Although frequent relative orders can be used as features to classify a time series, the mined patterns do not reflect the differences between two classes of time series well. To effectively discover such differences, this paper addresses top-k contrast OPP (COPP) mining and proposes a COPP-Miner algorithm to discover the top-k contrast patterns as features for time series classification, avoiding the problem of improper parameter setting. COPP-Miner is composed of three parts: extreme point extraction to reduce the length of the original time series, forward mining, and reverse mining to discover COPPs. Forward mining contains three steps: a group pattern fusion strategy to generate candidate patterns, a support rate calculation method to efficiently calculate the support of a pattern, and two pruning strategies to further prune candidate patterns. Reverse mining applies the same process as forward mining but uses a single pruning strategy to prune candidate patterns. Experimental results validate the efficiency of the proposed algorithm and show that top-k COPPs can be used as features to obtain better classification performance.
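
A small sketch of the kind of extreme point extraction used as a preprocessing step (the exact definition used by COPP-Miner may differ): it keeps the first and last points plus all strict local maxima and minima of a time series, which shortens the series while preserving its relative-order structure.

```python
def extreme_points(series):
    """Keep endpoints plus strict local maxima/minima; a sketch of extreme point
    extraction used to shorten a time series before pattern mining."""
    if len(series) < 3:
        return list(series)
    kept = [series[0]]
    for prev, cur, nxt in zip(series, series[1:], series[2:]):
        if (cur > prev and cur > nxt) or (cur < prev and cur < nxt):
            kept.append(cur)
    kept.append(series[-1])
    return kept

print(extreme_points([1, 3, 2, 2, 5, 4, 6, 0]))   # -> [1, 3, 5, 4, 6, 0]
```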

The minimum covariance determinant (MCD) estimator is a popular method for robustly estimating the mean and covariance of multivariate data. We extend the MCD to the setting where the observations are matrices rather than vectors and introduce the matrix minimum covariance determinant (MMCD) estimators for robust parameter estimation. These estimators satisfy equivariance properties, achieve a high breakdown point, and are consistent under elliptical matrix-variate distributions. We also develop an efficient algorithm with convergence guarantees to compute the MMCD estimators. Using the MMCD estimators, we can compute robust Mahalanobis distances that can be used for outlier detection. These distances can be decomposed into outlyingness contributions from each cell, row, or column of a matrix-variate observation using Shapley values, a concept for outlier explanation recently introduced in the multivariate setting. Simulations and examples reveal the excellent properties and usefulness of the robust estimators.
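
For the classical vector-valued MCD that MMCD extends, scikit-learn's MinCovDet already provides an implementation; the short sketch below flags outliers with squared robust Mahalanobis distances against a chi-squared cutoff. The simulated contamination pattern and cutoff level are illustrative choices, not taken from the paper.

```python
import numpy as np
from scipy.stats import chi2
from sklearn.covariance import MinCovDet

rng = np.random.default_rng(0)
p = 4
X = rng.multivariate_normal(np.zeros(p), np.eye(p), size=200)
X[:10] += 6.0                               # plant a few outliers

mcd = MinCovDet(random_state=0).fit(X)
d2 = mcd.mahalanobis(X)                     # squared robust Mahalanobis distances
cutoff = chi2.ppf(0.975, df=p)
print("flagged outliers:", np.where(d2 > cutoff)[0])
```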

We propose a numerical method to solve parameter-dependent hyperbolic partial differential equations (PDEs) with a moment approach, building on previous work by Marx et al. (2020). This approach relies on a very weak notion of solution of nonlinear equations, namely parametric entropy measure-valued (MV) solutions, which satisfy linear equations in the space of Borel measures. The infinite-dimensional linear problem is approximated by a hierarchy of convex, finite-dimensional semidefinite programming problems, called Lasserre's hierarchy. This yields a sequence of approximations of the moments of the occupation measure associated with the parametric entropy MV solution, which is proved to converge. Several post-treatments can then be performed on this approximate moment sequence. In particular, the graph of the solution can be reconstructed by optimising the Christoffel-Darboux kernel associated with the approximate measure, a powerful approximation tool able to capture a large class of irregular functions. Also, for uncertainty quantification problems, several quantities of interest can be estimated, sometimes directly, such as the expectation of smooth functionals of the solutions. The performance of our approach is evaluated through numerical experiments on the inviscid Burgers equation with a parametrised initial condition or a parametrised flux function.
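
To give a flavour of the post-treatment step, here is a sketch of the Christoffel function computed from the moments of a one-dimensional measure: the moment matrix in the monomial basis is inverted and evaluated on the monomial vector. The measure (uniform on [0, 1]) and the degree are illustrative; in the paper the moments would come from the Lasserre hierarchy rather than a closed form.

```python
import numpy as np

d = 4                                               # illustrative degree
# Moments of the uniform measure on [0, 1]: m_k = 1 / (k + 1)
m = np.array([1.0 / (k + 1) for k in range(2 * d + 1)])
M = np.array([[m[i + j] for j in range(d + 1)] for i in range(d + 1)])   # moment matrix

def christoffel(x):
    """Christoffel function Lambda_d(x) = 1 / (v(x)^T M^{-1} v(x)), v(x) = (1, x, ..., x^d).
    It is large where the measure puts mass and decays quickly outside its support."""
    v = np.array([x**k for k in range(d + 1)])
    return 1.0 / (v @ np.linalg.solve(M, v))

for x in (0.5, 1.0, 1.5):
    print(x, christoffel(x))    # values inside [0, 1] are much larger than outside
```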

Several mixed-effects models for longitudinal data have been proposed to accommodate the non-linearity of late-life cognitive trajectories and to assess the putative influence of covariates on them. No prior research provides a side-by-side examination of these models to offer guidance on their proper application and interpretation. In this work, we examined five statistical approaches previously used to answer research questions related to non-linear changes in cognitive aging: the linear mixed model (LMM) with a quadratic term, the LMM with splines, the functional mixed model, the piecewise linear mixed model, and the sigmoidal mixed model. We first describe the models theoretically. Next, using data from two prospective cohorts with annual cognitive testing, we compared the interpretation of the models by investigating the association of education with cognitive change before death. Lastly, we performed a simulation study to empirically evaluate the models and provide practical recommendations. Except for the LMM with a quadratic term, all models fit adequately, captured the non-linearity of cognitive change, and were relatively robust. Although spline-based models have no interpretable non-linearity parameters, their convergence was easier to achieve and they allow graphical interpretation. In contrast, piecewise and sigmoidal models, which have interpretable non-linear parameters, may require more data to achieve convergence.
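
Two of the compared specifications, the LMM with a quadratic time term and an LMM with a regression spline on time, can be written in a few lines with statsmodels. The sketch below uses simulated data with hypothetical variable names (cognition, time, educ, id) rather than the cohort data analysed in the paper, and fits only a random intercept for simplicity.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_subj, n_obs = 100, 8
df = pd.DataFrame({
    "id": np.repeat(np.arange(n_subj), n_obs),
    "time": np.tile(np.arange(n_obs, dtype=float), n_subj),
    "educ": np.repeat(rng.integers(8, 20, n_subj), n_obs),
})
rand_int = np.repeat(rng.normal(0, 1, n_subj), n_obs)        # subject-level random intercept
df["cognition"] = (30 - 0.3 * df.time - 0.05 * df.time**2 + 0.2 * df.educ
                   + rand_int + rng.normal(0, 1, len(df)))

# LMM with a quadratic time term (random intercept per subject)
quad = smf.mixedlm("cognition ~ time + I(time**2) + educ", df, groups=df["id"]).fit()

# LMM with a natural cubic spline on time (patsy's cr() basis, 4 degrees of freedom)
spl = smf.mixedlm("cognition ~ cr(time, df=4) + educ", df, groups=df["id"]).fit()

print(quad.params.filter(like="time"))
print(spl.params.filter(like="cr("))
```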

We introduce exploration via linear loss perturbations (EVILL), a randomised exploration method for structured stochastic bandit problems that works by solving for the minimiser of a linearly perturbed regularised negative log-likelihood function. We show that, for the case of generalised linear bandits, EVILL reduces to perturbed history exploration (PHE), a method where exploration is done by training on randomly perturbed rewards. In doing so, we provide a simple and clean explanation of when and why random reward perturbations give rise to good bandit algorithms. We propose data-dependent perturbations not present in previous PHE-type methods that allow EVILL to match the performance of Thompson-sampling-style parameter-perturbation methods, both in theory and in practice. Moreover, we show an example outside generalised linear bandits where PHE leads to inconsistent estimates, and thus linear regret, while EVILL remains performant. Like PHE, EVILL can be implemented in just a few lines of code.
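
In the linear-Gaussian special case, perturbed history exploration amounts to solving a ridge regression on rewards with added Gaussian noise plus a perturbed regulariser; the sketch below implements that special case on a toy bandit to convey the mechanism, not the general EVILL construction, and the perturbation scale and problem sizes are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
d, K, T, lam, a = 5, 20, 2000, 1.0, 1.0     # dim, arms, horizon, ridge weight, perturbation scale
theta_star = rng.standard_normal(d)
arms = rng.standard_normal((K, d)) / np.sqrt(d)
best = arms @ theta_star

X, y, regret = np.zeros((0, d)), np.zeros(0), 0.0
for t in range(T):
    # Perturbed-history step: add Gaussian noise to past rewards and to the regulariser,
    # then solve the ridge regression to obtain the exploration parameter estimate.
    y_pert = y + a * rng.standard_normal(len(y))
    prior_pert = a * rng.standard_normal(d)
    theta_hat = np.linalg.solve(X.T @ X + lam * np.eye(d),
                                X.T @ y_pert + np.sqrt(lam) * prior_pert)
    k = int(np.argmax(arms @ theta_hat))
    r = arms[k] @ theta_star + 0.1 * rng.standard_normal()
    X, y = np.vstack([X, arms[k]]), np.append(y, r)
    regret += best.max() - best[k]

print("cumulative regret:", regret)
```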

Under interference, the potential outcomes of a unit depend on the treatments assigned to other units. A network interference structure is typically assumed to be given and accurate. In this paper, we study the problems that result from misspecifying this network. First, we derive bounds on the bias arising from estimating causal effects under a misspecified network. We show that the maximal possible bias depends on the divergence between the assumed network and the true one with respect to the induced exposure probabilities. Then, we propose a novel estimator that leverages multiple networks simultaneously and is unbiased if one of the networks is correct, thus providing robustness to network specification. Additionally, we develop a probabilistic bias analysis that quantifies the impact of a postulated misspecification mechanism on the causal estimates. We illustrate key issues in simulations and demonstrate the utility of the proposed methods in a social network field experiment and in a cluster-randomized trial with suspected cross-cluster contamination.
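
To make the role of exposure probabilities concrete, here is a toy sketch (assumed exposure definition and variable names, not the paper's estimator): under a Bernoulli treatment assignment, each unit's probability of having at least one treated neighbour is estimated by Monte Carlo under the true network and under a misspecified one, and the divergence between the two is reported.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p_treat, n_sims = 30, 0.5, 5000

# True network and a misspecified version with some edges rewired (illustrative)
A_true = (rng.random((n, n)) < 0.1).astype(int)
A_true = np.triu(A_true, 1); A_true += A_true.T
A_assumed = A_true.copy()
A_assumed[0, 1:6] = 1 - A_assumed[0, 1:6]          # perturb unit 0's neighbourhood
A_assumed[1:6, 0] = A_assumed[0, 1:6]

def exposure_probs(A):
    """Monte Carlo probability that each unit has at least one treated neighbour."""
    exposed = np.zeros(n)
    for _ in range(n_sims):
        z = rng.random(n) < p_treat
        exposed += (A @ z) > 0
    return exposed / n_sims

diff = np.abs(exposure_probs(A_true) - exposure_probs(A_assumed))
print("largest exposure-probability divergence:", diff.max())   # driven by unit 0
```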

Consider minimizing the entropy of a mixture of states by choosing each state subject to constraints. If the spectrum of each state is fixed, we expect that in order to reduce the entropy of the mixture, we should make the states less distinguishable in some sense. Here, we study a class of optimization problems that are inspired by this situation and shed light on the relevant notions of distinguishability. The motivation for our study is the recently introduced spin alignment conjecture. In the original version of the underlying problem, each state in the mixture is constrained to be a freely chosen state on a subset of $n$ qubits tensored with a fixed state $Q$ on each of the qubits in the complement. According to the conjecture, the entropy of the mixture is minimized by choosing the freely chosen state in each term to be a tensor product of projectors onto a fixed maximal eigenvector of $Q$, which maximally "aligns" the terms in the mixture. We generalize this problem in several ways. First, instead of minimizing entropy, we consider maximizing arbitrary unitarily invariant convex functions such as Fan norms and Schatten norms. To formalize and generalize the conjectured required alignment, we define alignment as a preorder on tuples of self-adjoint operators that is induced by majorization. We prove the generalized conjecture for Schatten norms of integer order, for the case where the freely chosen states are constrained to be classical, and for the case where only two states contribute to the mixture and $Q$ is proportional to a projector. The last case fits into a more general situation where we give explicit conditions for maximal alignment. The spin alignment problem has a natural "dual" formulation, versions of which have further generalizations that we introduce.
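
A two-qubit numerical illustration of the alignment intuition (not a proof of any case of the conjecture): each term in a two-term mixture carries a fixed state Q on one qubit and a freely chosen projector on the other, and choosing the projector onto the maximal eigenvector of Q (the aligned choice) yields a smaller von Neumann entropy than the orthogonal choice. The spectrum of Q is an arbitrary illustrative choice.

```python
import numpy as np

def vn_entropy(rho):
    """Von Neumann entropy in nats, computed from the eigenvalues."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-np.sum(w * np.log(w)))

Q = np.diag([0.8, 0.2])                      # fixed state; maximal eigenvector is |0>
P0 = np.diag([1.0, 0.0])                     # projector onto the maximal eigenvector of Q
P1 = np.diag([0.0, 1.0])                     # projector onto the orthogonal eigenvector

# Two-term mixture: free state on one qubit, Q on the other, in each term
rho_aligned = 0.5 * (np.kron(P0, Q) + np.kron(Q, P0))
rho_misaligned = 0.5 * (np.kron(P1, Q) + np.kron(Q, P1))

print("aligned   :", vn_entropy(rho_aligned))      # ~0.64 nats
print("misaligned:", vn_entropy(rho_misaligned))   # ~1.05 nats, larger
```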
