Local patterns play an important role in statistical physics as well as in image processing. Two-dimensional ordinal patterns were studied by Ribeiro et al., who determined permutation entropy and complexity in order to classify paintings and images of liquid crystals. Here we find that the 2 by 2 patterns of neighboring pixels come in three types. The statistics of these types, expressed by two parameters, contain the relevant information to describe and distinguish textures. The parameters are most stable and informative for isotropic structures.
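
A minimal sketch of the underlying computation: tally the ordinal (rank-order) pattern of every 2 by 2 pixel block and compute the permutation entropy of their distribution. The grouping of the 24 patterns into three types and the two summary parameters are the paper's contribution and are omitted here; function names are illustrative.

```python
import numpy as np
from itertools import permutations

def pattern_histogram(img):
    """Tally the ordinal (rank-order) pattern of every 2x2 pixel block.

    Each block of four values is mapped to the permutation that sorts it;
    ties are broken by position, as is common for permutation entropy.
    """
    pats = {p: 0 for p in permutations(range(4))}
    H, W = img.shape
    for i in range(H - 1):
        for j in range(W - 1):
            block = (img[i, j], img[i, j + 1], img[i + 1, j], img[i + 1, j + 1])
            pats[tuple(np.argsort(block))] += 1
    return pats

def permutation_entropy(hist):
    """Shannon entropy of the ordinal-pattern distribution, in bits."""
    counts = np.array([c for c in hist.values() if c > 0], dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
img = rng.random((64, 64))   # white noise: entropy near log2(24) ~ 4.58 bits
print(permutation_entropy(pattern_histogram(img)))
```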

Subjective image quality measures based on deep neural networks are closely related to models in visual neuroscience. This connection benefits engineering but, more interestingly, the freedom to optimize deep networks in different ways makes them an excellent tool to explore the principles behind visual perception (both human and artificial). Recently, a myriad of networks have been successfully optimized for many interesting visual tasks. Although these nets were not specifically designed to predict image quality or other psychophysics, they have shown surprisingly human-like behavior. The reasons for this remain unclear. In this work, we perform a thorough analysis of the perceptual properties of pre-trained nets (particularly their ability to predict image quality) by isolating different factors: the goal (the function), the data (the learning environment), the architecture, and the readout: selected layer(s), fine-tuning of channel relevance, and use of statistical descriptors as opposed to plain readout of responses. Several conclusions can be drawn. All the models correlate better with human opinion than SSIM. More importantly, some of the nets are on par with the state of the art with no extra refinement or perceptual information. Nets trained for supervised tasks such as classification correlate substantially better with humans than LPIPS (a net specifically tuned for image quality). Interestingly, self-supervised tasks such as jigsaw also perform better than LPIPS. Simpler architectures are better than very deep nets. In simpler nets, correlation with humans increases with depth, as if deeper layers were closer to human judgement. This is not true in very deep nets. Consistent with reports on illusions and contrast sensitivity, small changes in the image environment do not make a big difference. Finally, the explored statistical descriptors and concatenations had no major impact.
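
As a hedged illustration of the "plain readout" factor studied here, the following sketch turns a pre-trained classification net into an image quality measure by comparing intermediate activations. The choice of VGG-16, the cut layer, and the function name are illustrative, not the paper's exact protocol; torchvision downloads the pre-trained weights on first use.

```python
import torch
import torchvision.models as models

# A pretrained classification net used as a fixed feature extractor.
# Cutting VGG-16's feature stack after 16 layers is an illustrative choice.
vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
feat = vgg.features[:16].eval()

@torch.no_grad()
def perceptual_distance(x, y):
    """Euclidean distance between intermediate activations of two
    images of shape (1, 3, H, W): a plain-readout quality measure."""
    return torch.norm(feat(x) - feat(y)).item()

x = torch.rand(1, 3, 224, 224)
noisy = (x + 0.1 * torch.randn_like(x)).clamp(0, 1)
print(perceptual_distance(x, noisy))
```

To benchmark such a measure against SSIM or LPIPS, one would correlate these distances with human mean opinion scores on a quality database, e.g. via Spearman correlation.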

We introduce the Weak-form Estimation of Nonlinear Dynamics (WENDy) method for estimating model parameters in nonlinear systems of ODEs. The core mathematical idea involves an efficient conversion of the strong-form representation of a model to its weak form, followed by solving a regression problem to perform parameter inference. The core statistical idea rests on the Errors-In-Variables framework, which necessitates the use of the iteratively reweighted least squares algorithm. Further improvements are obtained by using orthonormal test functions, created from a set of $C^{\infty}$ bump functions of varying support sizes. We demonstrate that WENDy is a highly robust and efficient method for parameter inference in differential equations. Without relying on any numerical differential equation solvers, WENDy computes accurate estimates and is robust to large (biologically relevant) levels of measurement noise. For low-dimensional systems with modest amounts of data, WENDy is competitive with conventional forward solver-based nonlinear least squares methods in terms of speed and accuracy. For both higher-dimensional systems and stiff systems, WENDy is typically both faster (often by orders of magnitude) and more accurate than forward solver-based approaches. We illustrate the method and its performance on some common population and neuroscience models, including logistic growth, Lotka-Volterra, FitzHugh-Nagumo, Hindmarsh-Rose, and a Protein Transduction Benchmark model. Software and code for reproducing the examples are available at (//github.com/MathBioCU/WENDy).
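
A minimal sketch of the weak-form idea on logistic growth $u' = au + bu^2$: multiplying by a compactly supported $C^{\infty}$ bump $\varphi$ and integrating by parts gives $-\int \varphi' u\,dt = a \int \varphi u\,dt + b \int \varphi u^2\,dt$, which is linear regression in $(a, b)$ with no numerical ODE solve. The errors-in-variables IRLS step and the test-function orthonormalization of the full method are omitted here.

```python
import numpy as np

# Weak-form estimation for logistic growth u' = a*u + b*u^2 (least-squares
# core only; WENDy adds errors-in-variables IRLS and orthonormal test fns).
rng = np.random.default_rng(1)
a_true, b_true, u0 = 1.0, -1.0, 0.1
t = np.linspace(0.0, 10.0, 400)
dt = t[1] - t[0]
u_exact = u0 / (u0 + (1.0 - u0) * np.exp(-a_true * t))  # solution, K = -a/b = 1
u = u_exact + 0.02 * rng.standard_normal(t.size)        # noisy observations

def bump(t, c, r):
    """C-infinity bump supported on (c - r, c + r)."""
    x = (t - c) / r
    out = np.zeros_like(t)
    inside = np.abs(x) < 1.0
    out[inside] = np.exp(-1.0 / (1.0 - x[inside] ** 2))
    return out

integrate = lambda f: dt * f.sum()           # supports lie inside (0, 10)

G, rhs = [], []
for c in np.linspace(1.5, 8.5, 25):          # test-function centers
    phi = bump(t, c, 1.5)
    dphi = np.gradient(phi, t)
    # weak form: -int phi' u dt = a * int phi u dt + b * int phi u^2 dt
    G.append([integrate(phi * u), integrate(phi * u ** 2)])
    rhs.append(-integrate(dphi * u))

theta, *_ = np.linalg.lstsq(np.array(G), np.array(rhs), rcond=None)
print(theta)                                 # close to (a, b) = (1, -1)
```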

Measurement error is ubiquitous in many variables - from blood pressure recordings in physiology to intelligence measures in psychology. Structural equation models (SEMs) account for the process of measurement by explicitly distinguishing between latent variables and their measurement indicators. Users often fit entire SEMs to data, but this can fail if some model parameters are not identified. The model-implied instrumental variables (MIIVs) approach is a more flexible alternative that can estimate subsets of model parameters in identified equations. Numerous methods to identify individual parameters also exist in the field of graphical models (such as DAGs), but many of these do not account for measurement effects. Here, we take the concept of "latent-to-observed" (L2O) transformation from the MIIV approach and develop an equivalent graphical L2O transformation that allows applying existing graphical criteria to latent parameters in SEMs. We combine L2O transformation with graphical instrumental variable criteria to obtain an efficient algorithm for non-iterative parameter identification in SEMs with latent variables. We prove that this graphical L2O transformation with the instrumental set criterion is equivalent to the state-of-the-art MIIV approach for SEMs, and show that it can lead to novel identification strategies when combined with other graphical criteria.
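
A minimal numerical illustration of the L2O idea on a one-factor model with three indicators (model and values are illustrative): substituting the scaling indicator for the latent variable makes a loading equation estimable by instrumental variables, with another indicator serving as the MIIV.

```python
import numpy as np

# Latent-to-observed (L2O) substitution followed by instrumental-variable
# estimation, the core move of the MIIV approach; the paper's contribution
# is the equivalent graphical version of this transformation.
rng = np.random.default_rng(7)
n = 100_000
L = rng.standard_normal(n)                   # latent variable
y1 = L + 0.5 * rng.standard_normal(n)        # scaling indicator (loading 1)
y2 = 0.8 * L + 0.5 * rng.standard_normal(n)  # loading to estimate
y3 = 1.2 * L + 0.5 * rng.standard_normal(n)  # third indicator

# L2O: y2 = 0.8*y1 + (e2 - 0.8*e1). Since y1 is correlated with the
# composite error, OLS is biased, but y3 qualifies as an instrument.
ols = np.cov(y1, y2)[0, 1] / np.var(y1)
iv = np.cov(y3, y2)[0, 1] / np.cov(y3, y1)[0, 1]
print(f"OLS (biased): {ols:.3f}, IV (consistent): {iv:.3f}, true: 0.800")
```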

Different statistical samples (e.g., from different locations) offer populations and learning systems observations with distinct statistical properties. Samples under (1) 'Unconfounded' growth preserve systems' ability to determine the independent effects of their individual variables on any outcome-of-interest (and lead, therefore, to fair and interpretable black-box predictions). Samples under (2) 'Externally-Valid' growth preserve their ability to make predictions that generalize across out-of-sample variation. The first promotes predictions that generalize over populations, the second over their shared exogenous factors. We illustrate these theoretical patterns in the full American census from 1840 to 1940, and in samples ranging from the street level all the way to the national. This reveals sample requirements for generalizability over space, and new connections among the Shapley value, U-Statistics (Unbiased Statistics), and Hyperbolic Geometry.
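
For concreteness, a U-statistic averages a symmetric kernel over all size-$k$ subsets of a sample; with kernel $h(x,y) = (x-y)^2/2$ it reproduces the unbiased sample variance exactly. A minimal illustration:

```python
import numpy as np
from itertools import combinations

# A U-statistic (Unbiased Statistic): average a symmetric kernel over all
# pairs. The kernel h(x, y) = (x - y)^2 / 2 gives the unbiased variance.
rng = np.random.default_rng(3)
x = rng.normal(loc=0.0, scale=2.0, size=50)

u_stat = np.mean([(a - b) ** 2 / 2 for a, b in combinations(x, 2)])
print(u_stat, x.var(ddof=1))   # the two coincide exactly for this kernel
```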

Nearly all simulation-based games have environment parameters that affect incentives in the interaction but are not explicitly incorporated into the game model. To understand the impact of these parameters on strategic incentives, typical game-theoretic analysis involves selecting a small set of representative values, and constructing and analyzing separate game models for each value. We introduce a novel technique to learn a single model representing a family of closely related games that differ in the number of symmetric players or other ordinal environment parameters. Prior work trains a multi-headed neural network to output mixed-strategy deviation payoffs, which can be used to compute symmetric $\varepsilon$-Nash equilibria. We extend this work by making environment parameters into input dimensions of the regressor, enabling a single model to learn patterns which generalize across the parameter space. For continuous and discrete parameters, our results show that these generalized models outperform existing approaches, achieving better accuracy with far less data. This technique makes thorough analysis of the parameter space more tractable, and promotes analyses that capture relationships between parameters and incentives.
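
A hedged sketch of the regressor described above, with environment parameters appended as input dimensions alongside the mixed strategy; the sizes, architecture, and class name are illustrative.

```python
import torch
import torch.nn as nn

# Environment parameters become inputs of the deviation-payoff regressor,
# so one model covers a family of closely related games.
N_STRATEGIES, N_ENV_PARAMS = 5, 2

class DeviationPayoffNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_STRATEGIES + N_ENV_PARAMS, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, N_STRATEGIES),  # payoff of deviating to each strategy
        )

    def forward(self, mixture, env):
        # mixture: symmetric mixed strategy (on the simplex);
        # env: e.g. number of players and other ordinal parameters
        return self.net(torch.cat([mixture, env], dim=-1))

model = DeviationPayoffNet()
mixture = torch.softmax(torch.randn(1, N_STRATEGIES), dim=-1)
env = torch.tensor([[10.0, 0.5]])   # illustrative: player count, one parameter
print(model(mixture, env))          # estimated deviation payoffs
```

Given such a model, the regret of a candidate mixture in a given environment is the gap between its best deviation payoff and its expected payoff, which is the quantity the symmetric $\varepsilon$-Nash computation consumes.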

This paper proposes innovations to parameter estimation in a generalised logistic regression model in the context of detecting differential item functioning in multi-item measurements. The two newly proposed iterative algorithms are compared with existing methods in a simulation study, and their use is demonstrated in a real data example. Additionally, the study examines software implementation, including the specification of initial values for the iterative algorithms, and asymptotic properties, including the estimation of standard errors. Overall, the proposed methods gave results comparable to existing ones and were superior in some scenarios.
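
A minimal sketch of one standard iterative algorithm in this setting, Newton-Raphson (equivalently IRLS) for a plain logistic DIF model with a group main effect and a group-by-score interaction; the paper's generalised model and newly proposed algorithms go beyond this baseline.

```python
import numpy as np

# Newton-Raphson for: logit P(item correct) = b0 + b1*score + b2*group
# + b3*score*group. Nonzero b2/b3 indicate uniform/non-uniform DIF.
rng = np.random.default_rng(5)
n = 2000
score = rng.standard_normal(n)               # matching criterion
group = rng.integers(0, 2, n).astype(float)  # focal vs. reference group
X = np.column_stack([np.ones(n), score, group, score * group])
true_beta = np.array([-0.5, 1.2, 0.6, 0.0])  # uniform DIF: b2 != 0
y = rng.random(n) < 1 / (1 + np.exp(-X @ true_beta))

beta = np.zeros(4)                           # initial values matter in practice
for _ in range(25):
    p = 1 / (1 + np.exp(-X @ beta))
    W = p * (1 - p)                          # IRLS weights
    step = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (y - p))
    beta += step
    if np.abs(step).max() < 1e-10:
        break

se = np.sqrt(np.diag(np.linalg.inv(X.T @ (W[:, None] * X))))  # asymptotic SEs
print(beta, se)
```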

Time-dependent Maxwell's equations govern electromagnetics. Under certain conditions, we can rewrite these equations as a second-order partial differential equation, in this case the vectorial wave equation. For the vectorial wave equation, we investigate the numerical treatment and the challenges in its implementation. For this purpose, we consider a space-time variational setting, i.e., time is just another spatial dimension. More specifically, we apply integration by parts in time as well as in space, leading to a space-time variational formulation with different trial and test spaces. Conforming discretizations of tensor-product type result in a Galerkin--Petrov finite element method that requires a CFL condition for stability. For this Galerkin--Petrov variational formulation, we study the CFL condition and its sharpness. To overcome the CFL condition, we use a Hilbert-type transformation that leads to a variational formulation with equal trial and test spaces. Conforming space-time discretizations result in a new Galerkin--Bubnov finite element method that is unconditionally stable. In numerical examples, we demonstrate the effectiveness of this Galerkin--Bubnov finite element method. Furthermore, we investigate different projections of the right-hand side and their influence on the convergence rates. This paper is a first step towards a more stable computation and a better understanding of vectorial wave equations in a conforming space-time approach.
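
Schematically, and under simplifying assumptions (homogeneous boundary and initial data; notation ours), the space-time variational formulation obtained after integration by parts in both time and space reads:

```latex
% Schematic space-time weak form of the vectorial wave equation
% \partial_t^2 U + \nabla\times\nabla\times U = F
% on Q = \Omega \times (0,T), with trial space X and test space Y.
\begin{equation*}
  \text{find } U \in X : \quad
  -\int_Q \partial_t U \cdot \partial_t V \,\mathrm{d}(x,t)
  + \int_Q (\nabla\times U)\cdot(\nabla\times V) \,\mathrm{d}(x,t)
  = \int_Q F \cdot V \,\mathrm{d}(x,t)
  \quad \text{for all } V \in Y .
\end{equation*}
```

The trial space $X$ and test space $Y$ differ, which is the Galerkin--Petrov structure; the Hilbert-type transformation is what subsequently allows $X = Y$ and hence the unconditionally stable Galerkin--Bubnov method.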

The symmetric $C^0$ interior penalty method is one of the most popular discontinuous Galerkin methods for the biharmonic equation. This paper introduces an automatic local selection of the involved stability parameter in terms of the geometry of the underlying triangulation for arbitrary polynomial degrees. The proposed choice ensures a stable discretization with a guaranteed discrete ellipticity constant. Numerical evidence for uniform and adaptive mesh-refinement and various polynomial degrees supports the reliability and efficiency of the local parameter selection and recommends this choice in practice. The approach is documented in 2D for triangles, but the underlying methodology can be generalized to higher dimensions, to non-uniform polynomial degrees, and to rectangular discretizations. Two appendices present the realization of our proposed parameter selection in various established finite element software packages as well as a detailed documentation of a self-contained MATLAB program for the lowest-order $C^0$ interior penalty method.
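
For orientation, the standard symmetric $C^0$ interior penalty bilinear form reads schematically as below, with jumps $[\cdot]$ and averages $\{\cdot\}$ across edges (our notation); the paper's contribution is the local, geometry-based choice of the stability parameter $\sigma_E$ per edge:

```latex
% Schematic symmetric C^0 interior penalty bilinear form for the
% biharmonic equation over a triangulation \mathcal{T} with edges
% \mathcal{E}; sigma_E is the locally selected stability parameter.
\begin{align*}
  a_h(u,v) = {}& \sum_{T\in\mathcal{T}} \int_T D^2 u : D^2 v \,\mathrm{d}x
  + \sum_{E\in\mathcal{E}} \int_E \Big( \{\partial_{nn}^2 u\}\,[\partial_n v]
  + \{\partial_{nn}^2 v\}\,[\partial_n u] \Big) \,\mathrm{d}s \\
  & + \sum_{E\in\mathcal{E}} \frac{\sigma_E}{h_E} \int_E
  [\partial_n u]\,[\partial_n v] \,\mathrm{d}s .
\end{align*}
```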

The remarkable successes of neural networks in a huge variety of inverse problems have fueled their adoption in disciplines ranging from medical imaging to seismic analysis over the past decade. However, the high dimensionality of such inverse problems has simultaneously left current theory, which predicts that networks should scale exponentially in the dimension of the problem, unable to explain why the seemingly small networks used in these settings work as well as they do in practice. To reduce this gap between theory and practice, we provide a general method for bounding the complexity required for a neural network to approximate a H\"older (or uniformly) continuous function defined on a high-dimensional set with a low-complexity structure. The approach is based on the observation that the existence of a Johnson-Lindenstrauss embedding $A\in\mathbb{R}^{d\times D}$ of a given high-dimensional set $S\subset\mathbb{R}^D$ into a low dimensional cube $[-M,M]^d$ implies that for any H\"older (or uniformly) continuous function $f:S\to\mathbb{R}^p$, there exists a H\"older (or uniformly) continuous function $g:[-M,M]^d\to\mathbb{R}^p$ such that $g(Ax)=f(x)$ for all $x\in S$. Hence, if one has a neural network which approximates $g:[-M,M]^d\to\mathbb{R}^p$, then a layer can be added that implements the JL embedding $A$ to obtain a neural network that approximates $f:S\to\mathbb{R}^p$. By pairing JL embedding results along with results on approximation of H\"older (or uniformly) continuous functions by neural networks, one then obtains results which bound the complexity required for a neural network to approximate H\"older (or uniformly) continuous functions on high dimensional sets. The end result is a general theoretical framework which can then be used to better explain the observed empirical successes of smaller networks in a wider variety of inverse problems than current theory allows.
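
A minimal sketch of the construction, assuming a Gaussian JL matrix and an illustrative MLP for $g$: a fixed random projection layer is prepended so that the composition approximates $f(x) = g(Ax)$ on the low-complexity set $S$.

```python
import torch
import torch.nn as nn

# Fixed random Johnson-Lindenstrauss layer A in R^{d x D} followed by a
# small net approximating g on [-M, M]^d, so the whole model approximates
# f(x) = g(Ax) on the low-complexity set S. Sizes are illustrative.
D, d, p = 10_000, 32, 1   # ambient dim, embedding dim, output dim

class JLNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Gaussian JL matrix, fixed (not trained); the 1/sqrt(d) scaling
        # makes it near-isometric on low-complexity sets w.h.p.
        self.register_buffer("A", torch.randn(d, D) / d ** 0.5)
        self.g = nn.Sequential(               # network approximating g
            nn.Linear(d, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, p),
        )

    def forward(self, x):                     # x: (batch, D)
        return self.g(x @ self.A.T)           # f(x) ~ g(Ax)

net = JLNet()
x = torch.randn(8, D)
print(net(x).shape)                           # torch.Size([8, 1])
```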

Variational Inference (VI) is an attractive alternative to Markov Chain Monte Carlo (MCMC) due to its computational efficiency in the case of large datasets and/or complex models with high-dimensional parameters. However, evaluating the accuracy of variational approximations remains a challenge. Existing methods characterize the quality of the whole variational distribution, which is almost always poor in realistic applications, even if specific posterior functionals such as the component-wise means or variances are accurate. Hence, these diagnostics are of practical value only in limited circumstances. To address this issue, we propose the TArgeted Diagnostic for Distribution Approximation Accuracy (TADDAA), which uses many short parallel MCMC chains to obtain lower bounds on the error of each posterior functional of interest. We also develop a reliability check for TADDAA to determine when the lower bounds should not be trusted. Numerical experiments validate the practical utility and computational efficiency of our approach on a range of synthetic distributions and real-data examples, including sparse logistic regression and Bayesian neural network models.
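
A caricature of the diagnostic's core mechanism on a toy 1-D example (the paper's estimator and its reliability check are more refined): start many short Metropolis chains at draws from the variational approximation and use the drift of a functional's mean as a lower bound on that functional's error.

```python
import numpy as np

# Start many short MCMC chains at draws from the variational approximation
# q and watch a posterior functional (here: the mean) drift toward its true
# value; the drift lower-bounds q's error on that functional.
rng = np.random.default_rng(11)
log_post = lambda x: -0.5 * x**2             # "true" posterior: N(0, 1)
q_mean, n_chains, n_steps = 0.5, 2000, 50    # q = N(0.5, 1): biased mean

x = q_mean + rng.standard_normal(n_chains)   # chains start at VI draws
start_mean = x.mean()
for _ in range(n_steps):                     # random-walk Metropolis
    prop = x + 0.8 * rng.standard_normal(n_chains)
    accept = np.log(rng.random(n_chains)) < log_post(prop) - log_post(x)
    x = np.where(accept, prop, x)

lower_bound = abs(x.mean() - start_mean)     # lower bound on |E_q - E_post|
print(f"lower bound on mean error: {lower_bound:.3f} (true error 0.5)")
```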
