
We present a Bayesian method for multivariate changepoint detection that allows for simultaneous inference on the location of a changepoint and the coefficients of a logistic regression model for distinguishing pre-changepoint data from post-changepoint data. In contrast to many methods for multivariate changepoint detection, the proposed method is applicable to data of mixed type and avoids strict assumptions regarding the distribution of the data and the nature of the change. The regression coefficients provide an interpretable description of a potentially complex change. For posterior inference, the model admits a simple Gibbs sampling algorithm based on P\'olya-gamma data augmentation. We establish conditions under which the proposed method is guaranteed to recover the true underlying changepoint. As a testing ground for our method, we consider the problem of detecting topological changes in time series of images. We demonstrate that the proposed method, combined with a novel topological feature embedding, performs well on both simulated and real image data.
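To make the Pólya-gamma data augmentation concrete, below is a minimal sketch of the Gibbs update for the logistic regression coefficients, assuming the third-party `polyagamma` package for the PG(1, z) sampler; the changepoint-location update and all model-specific details are omitted, so this illustrates the augmentation scheme rather than the authors' implementation.

```python
import numpy as np
from polyagamma import random_polyagamma  # third-party PG(1, z) sampler

def gibbs_beta_step(X, y, beta, B0_inv, rng):
    """One Polya-gamma augmented Gibbs update for logistic regression
    coefficients with prior beta ~ N(0, B0), where B0_inv = inv(B0).
    In the changepoint model, y would label pre- vs post-changepoint data."""
    omega = random_polyagamma(z=X @ beta, random_state=rng)  # omega_i ~ PG(1, x_i'beta)
    kappa = y - 0.5
    V = np.linalg.inv(X.T @ (omega[:, None] * X) + B0_inv)   # posterior covariance
    m = V @ (X.T @ kappa)                                    # posterior mean
    return rng.multivariate_normal(m, V)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (rng.random(200) < 1.0 / (1.0 + np.exp(-X @ np.array([1.0, -2.0, 0.5])))).astype(float)
beta = np.zeros(3)
for _ in range(500):  # Gibbs sweeps over beta only; the changepoint update is omitted
    beta = gibbs_beta_step(X, y, beta, np.eye(3), rng)
```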

Related content

In this work, we address parametric non-stationary fluid dynamics problems within a model order reduction setting based on domain decomposition. Starting from the optimisation-based domain decomposition approach, we derive an optimal control problem, for which we present a convergence analysis in the case of the non-stationary incompressible Navier-Stokes equations. We discretize the problem with the finite element method and compare different model order reduction techniques: POD-Galerkin and a non-intrusive neural network procedure. We show that the classical POD-Galerkin approach remains more robust and accurate, including in transient regimes, while the neural network yields simulations very quickly though it is less precise in the presence of discontinuities in the time or parameter domain. We test the proposed methodologies on two fluid dynamics benchmarks with physical parameters and time dependency: the non-stationary backward-facing step and the lid-driven cavity flow.
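As a concrete reference point for the POD-Galerkin side of the comparison, here is a minimal numpy sketch of how a POD basis is typically extracted from a snapshot matrix via the SVD; the snapshot data and truncation rank are illustrative placeholders, not the benchmarks of the paper.

```python
import numpy as np

def pod_basis(snapshots, r):
    """Compute the first r POD modes of a snapshot matrix whose columns
    are solution vectors at different times/parameters."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)  # retained 'energy' per mode count
    return U[:, :r], energy[r - 1]

# Toy snapshot matrix: 1000 dofs, 50 snapshots of a smoothly varying field.
x = np.linspace(0, 1, 1000)
S = np.stack([np.sin(np.pi * x * (1 + 0.1 * t)) for t in range(50)], axis=1)
V, retained = pod_basis(S, r=5)
coeffs = V.T @ S  # reduced coordinates of each snapshot
print(f"retained energy with 5 modes: {retained:.6f}")
```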

Understanding the internal representations learned by neural networks is a cornerstone challenge in the science of machine learning. While recent work has, in some cases, made significant strides towards understanding how neural networks implement specific target functions, this paper explores a complementary question -- why do networks arrive at particular computational strategies? Our inquiry focuses on the algebraic learning tasks of modular addition, sparse parities, and finite group operations. Our primary theoretical findings analytically characterize the features learned by stylized neural networks for these algebraic tasks. Notably, our main technique demonstrates how the principle of margin maximization alone can be used to fully specify the features learned by the network. Specifically, we prove that the trained networks utilize Fourier features to perform modular addition and employ features corresponding to irreducible group-theoretic representations to perform compositions in general groups, aligning closely with the empirical observations of Nanda et al. and Chughtai et al. More generally, we hope our techniques can help foster a deeper understanding of why neural networks adopt specific computational strategies.
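The Fourier-feature mechanism for modular addition can be checked directly: logits formed from cosines of 2πk(a+b-c)/p are maximized exactly at c = (a+b) mod p, since the cosines interfere constructively only there. The following numpy snippet verifies this identity; it illustrates the mechanism, not the paper's margin-maximization proof.

```python
import numpy as np

p = 13
ks = np.arange(1, (p - 1) // 2 + 1)  # one representative per frequency pair

def fourier_logits(a, b):
    """Logit for each candidate answer c, built purely from Fourier features.
    sum_k cos(2*pi*k*(a+b-c)/p) equals (p-1)/2 iff c == (a+b) mod p,
    and -1/2 otherwise, so the argmax is exact modular addition."""
    c = np.arange(p)
    return np.cos(2 * np.pi * np.outer(ks, a + b - c) / p).sum(axis=0)

correct = all(
    fourier_logits(a, b).argmax() == (a + b) % p
    for a in range(p) for b in range(p)
)
print("Fourier features implement modular addition:", correct)  # True
```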

A major challenge in computed tomography is reconstructing objects from incomplete data. An increasingly popular solution for these problems is to incorporate deep learning models into reconstruction algorithms. This study introduces a novel approach that integrates a Fourier neural operator (FNO) into the filtered backprojection (FBP) reconstruction method, yielding the FNO back projection (FNO-BP) network. We employ moment conditions for sinogram extrapolation to assist the model in mitigating artefacts from limited data. Notably, our deep learning architecture maintains a runtime comparable to classical FBP reconstructions, ensuring swift performance during both inference and training. We assess our reconstruction method in the context of the Helsinki Tomography Challenge 2022 and also compare it against regular FBP methods.
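For context, the classical filtering step that FNO-BP generalizes can be written in a few lines: FBP applies a fixed ramp filter to each projection in the Fourier domain, whereas an FNO would learn this Fourier-domain operator from data. The sketch below shows only the fixed ramp filter, with a random sinogram as a placeholder.

```python
import numpy as np

def ramp_filter(sinogram):
    """Apply the classical ramp filter to each projection (row-wise FFT).
    This is the filtering step of FBP; an FNO would learn the
    Fourier-domain multiplier instead of fixing it to |freq|."""
    n = sinogram.shape[-1]
    freqs = np.fft.fftfreq(n)
    return np.real(np.fft.ifft(np.fft.fft(sinogram, axis=-1) * np.abs(freqs), axis=-1))

sino = np.random.rand(180, 256)  # 180 projection angles, 256 detector bins
filtered = ramp_filter(sino)     # ready for backprojection
```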

Social-ecological systems (SES) research aims to understand the nature of social-ecological phenomena and to find effective ways to foster or manage the conditions under which desirable phenomena, such as sustainable resource use, occur, or to change conditions or reduce the negative consequences of undesirable phenomena, such as poverty traps. Challenges such as these are often addressed using dynamical systems models (DSM) or agent-based models (ABM). Both modeling approaches have strengths and weaknesses. DSM are praised for their analytical tractability and efficient exploration of asymptotic dynamics and bifurcations, which are enabled by a reduced number and heterogeneity of system components. ABM allow representing heterogeneity, agency, learning and interactions of diverse agents within SES, but this comes at a price, such as inefficiency in exploring asymptotic dynamics or bifurcations. In this paper we combine DSM and ABM to leverage the strengths of each modeling technique and gain deeper insights into the dynamics of a system. We start with an ABM and research questions that the ABM was not able to answer. Using the results of the ABM analysis as inputs, we create a DSM. Stability and bifurcation analysis of the DSM gives partial answers to the research questions and directs attention to where additional details are needed. This informs further ABM analysis, prevents burdening the ABM with less important details, and reveals new insights about system dynamics. The iterative process and dialogue between the ABM and DSM lead to more complete answers to the research questions and surpass the insights provided by each of the models separately. We illustrate the procedure with the example of the emergence of poverty traps in an agricultural system with endogenously driven innovation.
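As a flavor of the kind of DSM analysis involved, the toy sketch below locates the equilibria of a generic convex-concave capital-accumulation equation exhibiting a poverty trap, i.e. two stable states separated by an unstable threshold; the functional form and parameter values are hypothetical and are not taken from the paper's model.

```python
import numpy as np

def dk_dt(k, s=0.25, delta=0.1, a=1.5, h=1.0):
    """Toy capital dynamics dk/dt = s*f(k) - delta*k with a sigmoidal
    production function f, giving two stable equilibria (a poverty trap
    near zero and a prosperous state) separated by an unstable threshold."""
    f = a * k**2 / (h + k**2)
    return s * f - delta * k

# Locate equilibria on a grid via sign changes of dk/dt.
k = np.linspace(0.0, 4.0, 4001)
rate = dk_dt(k)
crossings = k[np.where(np.diff(np.sign(rate)) != 0)[0]]
print("approximate equilibria:", crossings)  # ~0.0 (trap), ~0.29, ~3.46
```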

Motivated by recent work on a preconditioned MINRES for flipped linear systems in imaging, in this note we extend the scope of that research to include more precise boundary conditions such as reflective and anti-reflective ones. We prove spectral results for the matrix-sequences associated with the original problem, which justify the use of MINRES in the current setting. The theoretical spectral analysis is supported by a wide variety of numerical experiments concerning the visualization of the spectra of the original matrices in various ways. We also report numerical tests regarding the convergence speed and regularization features of the associated GMRES and MINRES methods. Conclusions and open problems end the present study.
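The basic flipping trick behind this line of work is easy to reproduce: premultiplying a real Toeplitz matrix by the exchange (flip) matrix Y yields a symmetric Hankel matrix, so MINRES becomes applicable to the flipped system. A minimal scipy sketch with an arbitrary Toeplitz example rather than an imaging matrix:

```python
import numpy as np
from scipy.linalg import toeplitz
from scipy.sparse.linalg import minres

# A real nonsymmetric Toeplitz matrix T. Left-multiplying by the exchange
# (flip) matrix Y gives a symmetric Hankel matrix, so Y T x = Y b is a
# symmetric system that MINRES can handle.
n = 200
T = toeplitz(np.exp(-0.5 * np.arange(n)), np.exp(-0.1 * np.arange(n)))

YT = np.flipud(T)             # Y @ T: flip the rows of T
assert np.allclose(YT, YT.T)  # the flipped matrix is symmetric

b = np.ones(n)
x, info = minres(YT, np.flip(b))        # solve Y T x = Y b
print(info, np.linalg.norm(T @ x - b))  # info == 0 signals convergence
```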

Effect modification occurs when the impact of a treatment on an outcome varies according to the levels of other covariates, known as effect modifiers. Modeling these effect differences is important for etiological goals and for optimizing treatment. Structural nested mean models (SNMMs) are useful causal models for estimating the potentially heterogeneous effect of a time-varying exposure on the mean of an outcome in the presence of time-varying confounding. A data-driven approach for selecting the effect modifiers of an exposure may be necessary if these effect modifiers are a priori unknown and need to be identified. Although variable selection techniques are available for estimating conditional average treatment effects using marginal structural models, or for estimating optimal dynamic treatment regimens, all of these methods consider an outcome measured at a single point in time. In the context of an SNMM for repeated outcomes, we propose a doubly robust penalized G-estimator for the causal effect of a time-varying exposure that simultaneously selects effect modifiers, and we use this estimator to analyze effect modification in a study of hemodiafiltration. We prove the oracle property of our estimator and conduct a simulation study to evaluate its finite-sample performance and verify its double-robustness property. Our work is motivated by and applied to the study of hemodiafiltration for treating patients with end-stage renal disease at the Centre Hospitalier de l'Universit\'e de Montr\'eal. We apply the proposed method to investigate the effect heterogeneity of the dialysis facility on repeated session-specific hemodiafiltration outcomes.
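For readers unfamiliar with G-estimation, the toy sketch below solves the estimating equation of a linear SNMM with a single time point and one candidate effect modifier; it is a plain (not doubly robust, not penalized) G-estimator with a logistic propensity model, intended only to convey the estimating-equation idea behind the proposed method.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def g_estimate(Y, A, X):
    """Toy g-estimator for a linear SNMM blip A*(psi0 + psi1*X): solve
    sum_i (A_i - e(X_i)) * s(X_i) * (Y_i - A_i*(psi0 + psi1*X_i)) = 0
    with s(X) = (1, X)' and a logistic propensity model e(X).
    (Single-time-point toy, not the paper's doubly robust penalized estimator.)"""
    e = LogisticRegression().fit(X[:, None], A).predict_proba(X[:, None])[:, 1]
    w = A - e
    S = np.column_stack([np.ones_like(X), X])    # s(X)
    lhs = (w[:, None] * S).T @ (A[:, None] * S)  # equation is linear in psi
    rhs = (w[:, None] * S).T @ Y
    return np.linalg.solve(lhs, rhs)

rng = np.random.default_rng(1)
n = 5000
X = rng.normal(size=n)
A = rng.binomial(1, 1.0 / (1.0 + np.exp(-0.5 * X)))
Y = 2.0 * X + A * (1.0 + 0.5 * X) + rng.normal(size=n)  # true psi = (1.0, 0.5)
print(g_estimate(Y, A, X))  # approx [1.0, 0.5]
```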

The literature is full of inference techniques developed to estimate the parameters of stochastic dynamical systems driven by the well-known Brownian noise. Such diffusion models are often inappropriate for describing the dynamics reflected in many real-world data, which are dominated by jump discontinuities of various sizes and frequencies. To account for the presence of jumps, jump-diffusion models have been introduced and some inference techniques developed for them. Jump-diffusion models are also inadequate, however, since they fail to reflect the frequent occurrence as well as the continuous spectrum of natural jumps. It is therefore crucial to depart from classical stochastic systems like diffusion and jump-diffusion models and resort to stochastic systems whose regime of stochasticity is governed by stochastic fluctuations of L\'evy type. Reconstruction of L\'evy-driven dynamical systems, however, has been a major challenge. The literature on the reconstruction of L\'evy-driven systems is rather sparse: the few reconstruction algorithms that have been developed suffer from one or several problems, such as being data-hungry, failing to provide a full reconstruction of the noise parameters, tackling only specific systems, failing to cope with multivariate data in practice, or lacking proper validation mechanisms. This letter introduces a maximum likelihood estimation procedure that grants a full reconstruction of the system, requires less data, and is straightforward to implement for multivariate data. To the best of our knowledge, this contribution is the first to tackle all of the mentioned shortcomings. We apply our algorithm to simulated data as well as to an ice-core dataset spanning the last glaciation. In particular, we find new insights about the dynamics of the climate in the course of the last glaciation that were not found in previous studies.
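As a small illustration of parameter recovery for L\'evy-type noise, the snippet below simulates symmetric alpha-stable increments with scipy and recovers the stability index and scale from the empirical characteristic function, using the identity |phi(t)| = exp(-(sigma*|t|)^alpha); this regression-on-log-scale estimator is a standard textbook device, not the maximum likelihood procedure of the letter.

```python
import numpy as np
from scipy.stats import levy_stable

rng = np.random.default_rng(2)
alpha_true, scale_true = 1.6, 0.8
dx = levy_stable.rvs(alpha_true, 0.0, scale=scale_true, size=20000, random_state=rng)

# Empirical characteristic function: for symmetric alpha-stable increments,
# |phi(t)| = exp(-(scale*|t|)^alpha), so log(-log|phi(t)|) is linear in log t
# with slope alpha (a Koutrouvelis-style regression, not the letter's MLE).
t = np.array([0.3, 0.6, 1.0, 1.5, 2.0])
phi = np.abs(np.exp(1j * np.outer(t, dx)).mean(axis=1))
slope, intercept = np.polyfit(np.log(t), np.log(-np.log(phi)), 1)
print(f"alpha ~ {slope:.2f}, scale ~ {np.exp(intercept / slope):.2f}")
```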

We consider the numerical behavior of the fixed-stress splitting method for coupled poromechanics as undrained regimes are approached. We explain that pressure stability is related to the splitting error of the scheme, not to the fact that the discrete saddle-point matrix never appears in the fixed-stress approach. This observation reconciles previous results regarding the pressure stability of the splitting method. Using examples of compositional poromechanics with application to geological CO$_2$ sequestration, we see that solutions obtained using the fixed-stress scheme with a low-order finite element-finite volume discretization, which is not inherently inf-sup stable, can exhibit the same pressure oscillations obtained with the corresponding fully implicit scheme. Moreover, pressure jump stabilization can effectively remove these spurious oscillations in the fixed-stress setting, while also improving the efficiency of the scheme in terms of the number of iterations required at every time step to reach convergence.
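Schematically, the fixed-stress split replaces the monolithic solve of the coupled block system by an iteration that stabilizes the flow equation with a term playing the role of b^2/K_dr and then updates mechanics. The toy numpy sketch below applies this iteration to a small symmetric block system with made-up coefficients; it illustrates the splitting only, not the compositional discretization of the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 20
App = 2.0 * np.eye(n)                                  # flow block
Auu = 3.0 * np.eye(n)                                  # mechanics block
Apu = rng.normal(scale=0.1, size=(n, n)); Aup = Apu.T  # coupling blocks
fp, fu = rng.normal(size=n), rng.normal(size=n)

def fixed_stress(L, tol=1e-10, max_it=200):
    """Fixed-stress iteration for [[App, Apu], [Aup, Auu]] [p; u] = [fp; fu]:
    solve flow with the stabilization term L*(p - p_prev), then mechanics.
    L plays the role of b^2 / K_dr in Biot poromechanics."""
    p, u = np.zeros(n), np.zeros(n)
    for it in range(max_it):
        p_new = np.linalg.solve(App + L * np.eye(n), fp - Apu @ u + L * p)
        u_new = np.linalg.solve(Auu, fu - Aup @ p_new)
        if np.linalg.norm(p_new - p) < tol:
            return p_new, u_new, it + 1
        p, u = p_new, u_new
    return p, u, max_it

p, u, iters = fixed_stress(L=0.5)
print("converged in", iters, "iterations")
```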

This paper studies the complexity of classical modal logics and of their extensions with fixed-point operators, using translations to transfer results across logics. In particular, we show several complexity results for multi-agent logics via translations to and from the $\mu$-calculus and modal logic, which allow us to transfer known upper and lower bounds. We also use these translations to introduce a terminating tableau system for the logics we study, based on Kozen's tableau for the $\mu$-calculus and that of Fitting and Massacci for modal logic. Finally, we show how to encode the tableaux we introduce into $\mu$-calculus formulas. This encoding provides upper bounds for satisfiability checking in the few logics for which we previously had no algorithms.
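To make the role of fixed-point operators concrete, here is a tiny Python evaluator for a least-fixed-point formula on a finite Kripke structure, computed by Kleene iteration; it illustrates the semantics that the tableau and encoding manipulate, and is not the tableau system itself.

```python
def diamond(R, S):
    """States with at least one successor in S (the modal diamond)."""
    return {s for (s, t) in R if t in S}

def lfp(f):
    """Least fixed point of a monotone set-valued function, by Kleene
    iteration from the empty set; this is the semantics of the mu operator."""
    S = set()
    while True:
        S_next = f(S)
        if S_next == S:
            return S
        S = S_next

# Kripke structure 0 -> 1 -> 2 with 'goal' holding at state 2.
# mu X. goal \/ <>X expresses reachability of 'goal'.
R = {(0, 1), (1, 2)}
goal = {2}
print(lfp(lambda S: goal | diamond(R, S)))  # {0, 1, 2}
```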

Hashing has been widely used in approximate nearest neighbor search for large-scale database retrieval because of its computational and storage efficiency. Deep hashing, which devises convolutional neural network architectures to extract the semantic information or features of images, has received increasing attention recently. In this survey, several deep supervised hashing methods for image retrieval are evaluated, and I identify three main directions for deep supervised hashing methods, with several comments made at the end. Moreover, to break through the bottleneck of existing hashing methods, I propose a Shadow Recurrent Hashing (SRH) method as an exploratory attempt. Specifically, I devise a CNN architecture to extract the semantic features of images and design a loss function that encourages similar images to be projected close together. To this end, I propose a concept: the shadow of the CNN output. During the optimization process, the CNN output and its shadow guide each other so as to approach the optimal solution as closely as possible. Several experiments on the CIFAR-10 dataset show the satisfactory performance of SRH.
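For a sense of the loss functions used in this family of methods, below is a minimal PyTorch sketch of a generic pairwise similarity-preserving hashing loss with a quantization term; the SRH shadow mechanism itself is not reproduced here, and all architecture details are placeholders.

```python
import torch
import torch.nn.functional as F

def pairwise_hash_loss(h, sim, margin=2.0):
    """Similarity-preserving loss on relaxed hash codes h in [-1, 1]^k:
    pull codes of similar pairs together, push dissimilar pairs apart,
    plus a quantization term driving entries toward +/-1.
    (A generic deep-hashing loss, not the SRH 'shadow' objective.)"""
    d = torch.cdist(h, h)  # pairwise code distances
    contrastive = torch.where(sim.bool(), d**2, F.relu(margin - d)**2).mean()
    quantization = (h.abs() - 1.0).pow(2).mean()
    return contrastive + 0.1 * quantization

codes = torch.tanh(torch.randn(8, 48, requires_grad=True))  # CNN outputs -> [-1, 1]
labels = torch.randint(0, 10, (8,))
sim = (labels[:, None] == labels[None, :]).float()          # pairwise similarity
loss = pairwise_hash_loss(codes, sim)
loss.backward()
```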
