
The broad class of multivariate unified skew-normal (SUN) distributions has recently been shown to possess important conjugacy properties. When used as priors for the coefficient vector in probit, tobit, and multinomial probit models, these distributions yield posteriors that still belong to the SUN family. Although this result has led to important advances in Bayesian inference and computation, its applicability beyond likelihoods associated with fully-observed, discretized, or censored realizations from multivariate Gaussian models has remained unexplored. This article fills this gap by proving that the wider family of multivariate unified skew-elliptical (SUE) distributions, which extends SUNs to more general perturbations of elliptical densities, guarantees conjugacy for broader classes of models, beyond those relying on fully-observed, discretized, or censored Gaussians. The result leverages the closure of the SUE family under linear combinations, conditioning, and marginalization to prove that this family is conjugate to the likelihood induced by multivariate regression models for fully-observed, censored, or dichotomized realizations from skew-elliptical distributions. This advance enlarges the set of models enabling conjugate Bayesian inference to general formulations arising from elliptical and skew-elliptical families, including the multivariate Student's t and skew-t, among others.
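To fix ideas, the Gaussian-probit special case of the conjugacy being extended can be sketched as follows (generic notation of our choosing; a Gaussian prior is a SUN with no skewness):

```latex
\[
\beta \sim \mathrm{N}_p(\xi, \Omega), \qquad
y_i \mid \beta \sim \mathrm{Bern}\!\left\{\Phi(x_i^\top \beta)\right\},
\quad i = 1, \dots, n,
\]
\[
\pi(\beta \mid y) \;\propto\;
\phi_p(\beta - \xi; \Omega)
\prod_{i=1}^{n} \Phi\!\left\{(2 y_i - 1)\, x_i^\top \beta\right\},
\]
```

which is exactly the $\phi_p \times \Phi_n$ kernel of a $\mathrm{SUN}_{p,n}$ density, with the latent skewness dimension growing with the sample size $n$; the SUE extension replaces the Gaussian ingredients with elliptical and skew-elliptical ones.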

October 3, 2024

We propose a new step-wise approach to proving observational equivalence, and in particular to reasoning about its fragility. Our approach is based on what we call local reasoning, which exploits the graphical concept of neighbourhood and extracts a new, formal concept of robustness as a key sufficient condition for observational equivalence. Moreover, our proof methodology can establish a generalised notion of observational equivalence, quantified over syntactically restricted contexts instead of all contexts, and also quantitatively constrained in terms of the number of reduction steps. The operational machinery we use is a hypergraph-rewriting abstract machine inspired by Girard's Geometry of Interaction, in which the behaviour of language features, including function abstraction and application, is given by hypergraph-rewriting rules. We demonstrate our proof methodology on the call-by-value lambda-calculus equipped with (higher-order) state.

We formalize a complete proof of the regular case of Fermat's Last Theorem in the Lean 4 theorem prover. Our formalization includes a proof of Kummer's lemma, which is the main obstruction to Fermat's Last Theorem for regular primes. Rather than following the modern proof of Kummer's lemma via class field theory, we prove it using Hilbert's Theorems 90–94 in a way that is more amenable to formalization.
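As a purely illustrative sketch of the shape such a formalized statement takes (the identifiers `IsRegularPrime` and `classNumber` and the exact hypotheses below are hypothetical, not the project's actual declarations):

```lean
-- Hypothetical sketch only; identifiers do not match the real formalization.
axiom classNumber : ℕ → ℕ  -- class number of the p-th cyclotomic field

def IsRegularPrime (p : ℕ) : Prop :=
  p.Prime ∧ ¬ (p ∣ classNumber p)

theorem flt_regular (p : ℕ) (hp : IsRegularPrime p) (h5 : 5 ≤ p)
    (a b c : ℤ) (ha : a ≠ 0) (hb : b ≠ 0) (hc : c ≠ 0) :
    a ^ p + b ^ p ≠ c ^ p := by
  sorry -- proved in the paper via Kummer's lemma and Hilbert's Theorems 90–94
```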

We study high-dimensional, ridge-regularized logistic regression in a setting in which the covariates may be missing or corrupted by additive noise. When both the covariates and the additive corruptions are independent and normally distributed, we provide exact characterizations of both the prediction error and the estimation error. Moreover, we show that these characterizations are universal: as long as the entries of the data matrix satisfy a set of independence and moment conditions, our guarantees continue to hold. Universality, in turn, enables the detailed study of several imputation-based strategies when the covariates are missing completely at random. We ground our study by comparing the performance of these strategies with the conjectured performance, stemming from replica theory in statistical physics, of the Bayes optimal procedure. Our analysis yields several insights including: (i) a distinction between single imputation and a simple variant of multiple imputation and (ii) that adding a simple ridge regularization term to single-imputed logistic regression can yield an estimator whose prediction error is nearly indistinguishable from the Bayes optimal prediction error. We supplement our findings with extensive numerical experiments.
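A minimal NumPy sketch of the single-imputation-plus-ridge strategy discussed above; the data-generating process, dimensions, regularization strength, and step size are our own illustrative choices, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 2000, 20
beta = rng.normal(size=p) / np.sqrt(p)            # ground-truth coefficients
X = rng.normal(size=(n, p))
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-X @ beta))).astype(float)

# covariates missing completely at random (30% of entries)
X_obs = X.copy()
X_obs[rng.random(X.shape) < 0.3] = np.nan

# single (mean) imputation
X_imp = np.where(np.isnan(X_obs), np.nanmean(X_obs, axis=0), X_obs)

def fit_ridge_logistic(X, y, lam, iters=500, lr=0.5):
    """Gradient descent on the ridge-regularized logistic loss."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        p_hat = 1.0 / (1.0 + np.exp(-X @ w))
        grad = X.T @ (p_hat - y) / len(y) + lam * w
        w -= lr * grad
    return w

w_hat = fit_ridge_logistic(X_imp, y, lam=0.1)
cos = w_hat @ beta / (np.linalg.norm(w_hat) * np.linalg.norm(beta))
```

Even with 30% of entries mean-imputed, the ridge-regularized estimate remains well aligned with the true coefficient direction.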

We construct an interpolatory high-order cubature rule to compute integrals of smooth functions over self-affine sets with respect to an invariant measure. The main difficulty is the computation of the cubature weights, which we characterize algebraically, by exploiting a self-similarity property of the integral. We propose an $h$-version and a $p$-version of the cubature, present an error analysis and conduct numerical experiments.
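To illustrate the self-similarity identity that the weight computation exploits, here is a toy computation of polynomial integrals (moments) with respect to the invariant measure of a simple IFS; the middle-thirds Cantor maps and equal weights are our illustrative choice, not the paper's general setting:

```python
import numpy as np
from math import comb

# IFS for the middle-thirds Cantor set: S1(x) = x/3, S2(x) = x/3 + 2/3,
# each with probability weight 1/2. The invariant measure mu satisfies
#   int f dmu = (1/2) int f(S1 x) dmu + (1/2) int f(S2 x) dmu.
# Plugging f(x) = x^k and expanding (x + 2)^k gives the linear recursion
#   m_k = sum_{j<k} C(k, j) * 2^(k-j) * m_j / (2 * 3^k - 2)
# for the moments m_k = int x^k dmu, solvable exactly without sampling.
K = 6
m = np.zeros(K + 1)
m[0] = 1.0
for k in range(1, K + 1):
    m[k] = sum(comb(k, j) * 2 ** (k - j) * m[j] for j in range(k)) / (2 * 3 ** k - 2)
```

The first two moments match the known mean 1/2 and second moment 3/8 of the Cantor measure; the cubature weights in the paper are characterized by an algebraic system of the same self-similar flavour.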

We study combinatorial properties of plateaued functions. All quadratic functions, bent functions and most known APN functions are plateaued, so many cryptographic primitives rely on plateaued functions as building blocks. The main focus of our study is the interplay of the Walsh transform and linearity of a plateaued function, its differential properties, and their value distributions, i.e., the sizes of image and preimage sets. In particular, we study the special case of ``almost balanced'' plateaued functions, which only have two nonzero preimage set sizes, generalizing for instance all monomial functions. We establish several direct connections and (non)existence conditions for these functions, showing for instance that plateaued $d$-to-$1$ functions (and thus plateaued monomials) only exist for a very select choice of $d$, and we derive for all these functions their linearity as well as bounds on their differential uniformity. We also specifically study the Walsh transform of plateaued APN functions and its relation to their value distributions.
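A small self-contained check of the plateaued property via the Walsh transform; the particular bent function below is our illustrative example, not one from the abstract:

```python
from itertools import product

n = 4
# Bent (hence plateaued) Boolean function f(x1,...,x4) = x1*x2 XOR x3*x4.
def f(x):
    return (x[0] & x[1]) ^ (x[2] & x[3])

# Walsh transform W_f(a) = sum_x (-1)^(f(x) XOR <a, x>)
def walsh(a):
    return sum((-1) ** (f(x) ^ (sum(ai & xi for ai, xi in zip(a, x)) % 2))
               for x in product((0, 1), repeat=n))

spectrum = [walsh(a) for a in product((0, 1), repeat=n)]
# f is plateaued when |W_f| takes a single nonzero value; for a bent
# function that value is 2^(n/2) at every point (the flattest possible case).
amplitudes = sorted({abs(w) for w in spectrum})
```

For this $n=4$ example every Walsh coefficient has absolute value $2^{n/2} = 4$, the single-amplitude signature that defines plateaued functions.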

We discuss the second-order differential uniformity of vectorial Boolean functions. The closely related notion of second-order zero differential uniformity has recently been studied in connection to resistance to the boomerang attack. We prove that monomial functions with univariate form $x^d$ where $d=2^{2k}+2^k+1$ and $\gcd(k,n)=1$ have optimal second-order differential uniformity. Computational results suggest that, up to affine equivalence, these might be the only optimal cubic power functions. We begin work towards generalising such conditions to all monomial functions of algebraic degree 3. We also discuss further questions arising from computational results.
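A brute-force verification of the claim on a small instance of our choosing ($n = 5$, $k = 1$, so $d = 2^2 + 2^1 + 1 = 7$ and $\gcd(1,5) = 1$; the primitive polynomial $x^5 + x^2 + 1$ is our pick). Since $D_a D_b F$ is invariant under translation by $a$, $b$, and $a + b$, every nonempty solution set has size a multiple of 4, so 4 is the smallest value compatible with this invariance:

```python
def gf_mul(a, b, poly=0b100101, n=5):
    """Multiply in GF(2^5) modulo the primitive polynomial x^5 + x^2 + 1."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if (a >> n) & 1:
            a ^= poly
    return r

def gf_pow(x, e):
    r = 1
    for _ in range(e):
        r = gf_mul(r, x)
    return r

q = 32
F = [gf_pow(x, 7) for x in range(q)]  # the monomial x^7

# second-order differential uniformity: max number of solutions x of
#   F(x) + F(x+a) + F(x+b) + F(x+a+b) = c
# over all c and all F_2-linearly independent a, b
u = 0
for a in range(1, q):
    for b in range(1, q):
        if b == a:
            continue
        counts = [0] * q
        for x in range(q):
            counts[F[x] ^ F[x ^ a] ^ F[x ^ b] ^ F[x ^ a ^ b]] += 1
        u = max(u, max(counts))
```

The exhaustive search returns the optimal value 4, consistent with the stated theorem for this choice of $d$.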

Finding eigenvalue distributions for a number of sparse random matrix ensembles can be reduced to solving nonlinear integral equations of the Hammerstein type. While a systematic mathematical theory of such equations exists, it has not been previously applied to sparse matrix problems. We close this gap in the literature by showing how one can employ numerical solutions of Hammerstein equations to accurately recover the spectra of adjacency matrices and Laplacians of random graphs. While our treatment focuses on random graphs for concreteness, the methodology has broad applications to more general sparse random matrices.
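As a generic illustration of the numerical machinery (not the paper's specific spectral equations), a Hammerstein equation $u(x) = f(x) + \lambda \int_0^1 K(x,y)\,u(y)^2\,dy$ can be solved by fixed-point iteration on a quadrature grid; the separable kernel, nonlinearity, and $\lambda$ below are toy choices:

```python
import numpy as np

# Toy Hammerstein equation on [0, 1]:
#   u(x) = x + 0.5 * int_0^1 (x * y) * u(y)^2 dy
# With the separable kernel the exact solution is u(x) = (1 + alpha) * x,
# where alpha solves alpha = (1 + alpha)^2 / 8, i.e. alpha = 3 - 2*sqrt(2).
N = 201
x = np.linspace(0.0, 1.0, N)
w = np.full(N, 1.0 / (N - 1))          # trapezoidal quadrature weights
w[0] = w[-1] = 0.5 / (N - 1)
K = np.outer(x, x)                     # K(x, y) = x * y
f = x.copy()

u = f.copy()
for _ in range(200):                   # Picard (fixed-point) iteration
    u = f + 0.5 * K @ (w * u ** 2)

residual = np.max(np.abs(u - (f + 0.5 * K @ (w * u ** 2))))
alpha = u[-1] - 1.0                    # recovered slope correction at x = 1
```

The iteration is a contraction here, so the discrete fixed point is reached to machine precision and matches the analytic slope up to quadrature error.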

A two-way contingency table whose row and column variables share the same categories is called a square contingency table. Since square contingency tables typically exhibit an association structure, a primary objective is to examine symmetric relationships and transitions between the variables. While various models and measures have been proposed to analyze these structures and to understand changes in behavior between two time points or cohorts, a detailed investigation of individual categories and their interrelationships, such as shifts in brand preferences, is also needed. This paper proposes a novel approach to correspondence analysis (CA) for evaluating departures from symmetry in square contingency tables with nominal categories, using a power-divergence-type measure. The approach ensures that well-known divergences can also be visualized and that, regardless of the divergence used, the CA plot consists of two principal axes with equal contribution rates. Additionally, the scaling is independent of sample size, making it well-suited for comparing departures from symmetry across multiple contingency tables. Confidence regions are also constructed to enhance the accuracy of the CA plot.
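A minimal NumPy illustration of why the CA axes pair up, using the classical skew-symmetric decomposition (the paper's power-divergence measure is not implemented here, and the table is invented for illustration):

```python
import numpy as np

# Toy 3x3 square contingency table (rows and columns share categories).
N = np.array([[30.0, 10.0,  5.0],
              [20.0, 40.0,  8.0],
              [ 2.0, 12.0, 50.0]])
P = N / N.sum()

S = (P - P.T) / 2.0        # skew-symmetric part: departure from symmetry
U, s, Vt = np.linalg.svd(S)
# Singular values of a skew-symmetric matrix come in equal pairs (plus a
# zero for odd dimension), so each plotting plane built from S has two
# principal axes with equal contribution rates.
```

This pairing of singular values is the structural reason a symmetry-departure CA plot always uses two axes of equal contribution, independently of the table at hand.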

We propose a method utilizing physics-informed neural networks (PINNs) to solve Poisson equations that serve as control variates in the computation of transport coefficients via fluctuation formulas, such as the Green--Kubo and generalized Einstein-like formulas. By leveraging approximate solutions to the Poisson equation constructed through neural networks, our approach significantly reduces the variance of the estimator at hand. We provide an extensive numerical analysis of the estimators and detail a methodology for training neural networks to solve these Poisson equations. The approximate solutions are then incorporated into Monte Carlo simulations as effective control variates, demonstrating the suitability of the method for moderately high-dimensional problems where fully deterministic solutions are computationally infeasible.
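A toy sketch of the variance-reduction mechanism, with a hand-picked approximate Poisson solution standing in for the neural-network approximation (the target, generator, and the stand-in $\tilde\varphi$ are all our illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)

# Estimate E_pi[f] with f(x) = x^2 under pi = N(0, 1), the stationary law of
# the OU process with generator (L phi)(x) = phi''(x) - x * phi'(x).
# Since E_pi[L phi] = 0 for any nice phi, the samples f(X) - (L phi)(X) give
# an unbiased estimator. The exact Poisson solution phi(x) = -x^2/2 would
# remove all variance; we mimic an inexact (e.g. PINN) approximation with
# phi_tilde(x) = -0.45 * x^2, so L phi_tilde = -0.9 + 0.9 * x^2.
x = rng.normal(size=20_000)
f = x ** 2
L_phi_tilde = -0.9 + 0.9 * x ** 2
cv = f - L_phi_tilde                   # control-variate estimator samples

var_plain, var_cv = f.var(), cv.var()
```

Even this crude 10%-off surrogate cuts the estimator variance by roughly two orders of magnitude while leaving the mean unchanged, which is the effect the PINN-based control variates scale up to higher dimensions.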

The digital era has raised many societal challenges, including ICT's rising energy consumption and the protection of privacy in personal data processing. This paper considers both aspects in relation to machine learning accuracy in an interdisciplinary exploration. We first present a method to measure the effects of privacy-enhancing techniques on data utility and energy consumption, and discover the environmental-privacy-accuracy trade-offs through an experimental set-up. We subsequently take a storytelling approach to translate these technical findings for experts in non-ICT fields, drafting two examples in governmental and auditing settings to contextualise our results. Ultimately, users face the task of optimising their data processing operations in a trade-off between energy, privacy, and accuracy, where the impact of their decisions is context-sensitive.
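A minimal sketch of the kind of privacy-utility measurement described, using the Laplace mechanism on a mean query (the dataset and epsilon values are illustrative, and energy consumption is not measured here):

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.random(1_000)               # hypothetical records with values in [0, 1]
true_mean = data.mean()

def dp_mean(data, eps, rng):
    """epsilon-differentially-private mean via the Laplace mechanism.

    Each record changes the mean by at most 1/n, so the sensitivity is 1/n.
    """
    scale = (1.0 / len(data)) / eps
    return data.mean() + rng.laplace(scale=scale)

# privacy-utility trade-off: stronger privacy (smaller eps) -> larger error
err = {eps: np.mean([abs(dp_mean(data, eps, rng) - true_mean)
                     for _ in range(2_000)])
       for eps in (0.1, 10.0)}
```

Averaged over repeated releases, the tightly private setting ($\varepsilon = 0.1$) incurs visibly more utility loss than the loose one ($\varepsilon = 10$), the same shape of trade-off the paper extends to include energy cost.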
