
The recent explosion of genetic and high-dimensional biobank and 'omic' data has provided researchers with the opportunity to investigate the shared genetic origin (pleiotropy) of hundreds to thousands of related phenotypes. However, existing methods for multi-phenotype genome-wide association studies (GWAS) do not model pleiotropy, are only applicable to a small number of phenotypes, or provide no way to perform inference. To complicate matters further, raw genetic and phenotype data are rarely observed, meaning analyses must be performed on GWAS summary statistics, whose statistical properties in high dimensions are poorly understood. We therefore developed a novel model, theoretical framework, and set of methods for Bayesian inference in GWAS of high-dimensional phenotypes using summary statistics that explicitly model pleiotropy, enable fast computation, and facilitate the use of biologically informed priors. We demonstrate the utility of our procedure by applying it to metabolite GWAS, where we develop new nonparametric priors for genetic effects on metabolite levels that use known metabolic pathway information and foster interpretable inference at the pathway level.


Assessing advancements of technology is essential for creating science and technology policies and making informed investments in the technology market. However, current methods primarily focus on the characteristics of the technologies themselves, making it difficult to accurately assess technologies across various fields and generations. To address this challenge, we propose a novel approach that uses bibliometrics, specifically literature citation networks, to measure changes in knowledge flow throughout the evolution of technology. This method can identify diverse trends in technology development and is an effective tool for evaluating technological advancements. We demonstrate its accuracy and applicability by applying it to mobile communication technology and comparing its quantitative results with other assessment methods. Our work provides critical support for assessing different technical routes and formulating technology policy.

Statistical models typically capture uncertainties in our knowledge of the corresponding real-world processes; however, it is less common for this uncertainty specification to capture uncertainty surrounding the values of the inputs to the model, which are often assumed known. We develop general modelling methodology with uncertain inputs in the context of the Bayes linear paradigm, which involves adjustment of second-order belief specifications over all quantities of interest, without requiring full probabilistic specifications. In particular, we propose an extension of commonly employed second-order modelling assumptions to the case of uncertain inputs, with explicit implementation in the context of regression analysis, stochastic process modelling, and statistical emulation. We apply the methodology to a regression model for extracting aluminium by electrolysis, and to emulation of the motivating epidemiological simulator chain modelling the impact of an airborne infectious disease.
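A minimal sketch of the second-order adjustment underlying the Bayes linear paradigm (an illustration of the standard adjusted expectation and variance formulas, not the paper's implementation; all numbers are toy values):

```python
import numpy as np

def bayes_linear_adjust(E_B, E_D, Var_B, Var_D, Cov_BD, d):
    """Bayes linear adjustment of beliefs about quantities B given observed data d.

    Uses only second-order specifications (means, variances, covariances):
      E_d(B)   = E(B) + Cov(B,D) Var(D)^+ (d - E(D))
      Var_d(B) = Var(B) - Cov(B,D) Var(D)^+ Cov(D,B)
    """
    gain = Cov_BD @ np.linalg.pinv(Var_D)  # pseudo-inverse handles singular Var(D)
    E_adj = E_B + gain @ (d - E_D)
    Var_adj = Var_B - gain @ Cov_BD.T
    return E_adj, Var_adj

# Toy example: two quantities B correlated with one observable D
E_B = np.array([0.0, 0.0])
E_D = np.array([0.0])
Var_B = np.eye(2)
Var_D = np.array([[2.0]])
Cov_BD = np.array([[1.0], [0.5]])
d = np.array([1.0])

E_adj, Var_adj = bayes_linear_adjust(E_B, E_D, Var_B, Var_D, Cov_BD, d)
# E_adj = [0.5, 0.25]; the adjusted variances shrink relative to Var_B
```

The adjustment requires no distributional assumptions, which is what allows the paper's extension to uncertain inputs to stay within second-order specifications.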

The Sinc approximation applied to double-exponentially decaying functions is referred to as the DE-Sinc approximation. Because of its high efficiency, this method has been used in various applications. In the Sinc approximation, the mesh size and truncation numbers should be selected optimally to achieve the best performance. However, the standard selection formula is only ``near-optimal'', because the optimal mesh size cannot be expressed in terms of elementary functions of the truncation numbers. In this study, we propose two improved selection formulas. The first is based on a concept from earlier research that yielded a better selection formula for the double-exponential formula. It performs slightly better than the standard formula but is still not optimal. For the second selection formula, we introduce a new parameter to obtain a truly optimal selection. We provide explicit error bounds for both formulas. Numerical comparisons show that the first formula gives a better error bound than the standard formula, and the second gives a much better error bound than both the standard and first formulas.
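A minimal numerical sketch of the DE-Sinc approximation on $(-1,1)$ (assuming the commonly cited near-optimal mesh $h = \log(2dN/\mu)/N$; the improved selection formulas proposed in the abstract refine this choice, and the parameters $d$, $\mu$ here are illustrative):

```python
import numpy as np

def de_sinc_approx(f, N, d=np.pi / 2, mu=1.0):
    """Sketch of the DE-Sinc approximation of f on (-1, 1).

    Applies the double-exponential transform x = tanh((pi/2) sinh(t)) and
    the Sinc approximation on the t-axis with the near-optimal mesh size
    h = log(2 d N / mu) / N (an assumption here, not the paper's formula).
    Returns a callable approximant.
    """
    h = np.log(2 * d * N / mu) / N
    t_k = np.arange(-N, N + 1) * h
    x_k = np.tanh(0.5 * np.pi * np.sinh(t_k))  # DE-transformed Sinc points
    f_k = f(x_k)

    def approx(x):
        # Map back to the t-axis and sum the shifted sinc basis functions;
        # np.sinc uses the normalized convention sin(pi t) / (pi t).
        t = np.arcsinh(2.0 * np.arctanh(np.asarray(x, dtype=float)) / np.pi)
        return sum(f_k[i] * np.sinc((t - t_k[i]) / h) for i in range(len(t_k)))

    return approx

# f decays double-exponentially on the t-axis after the DE transform
f = lambda x: (1 - x**2) ** 0.75
approx = de_sinc_approx(f, N=30)
xs = np.linspace(-0.9, 0.9, 7)
max_err = np.max(np.abs(approx(xs) - f(xs)))  # very small already at N = 30
```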

In several large-scale replication projects, statistically non-significant results in both the original and the replication study have been interpreted as a "replication success". Here we discuss the logical problems with this approach: Non-significance in both studies does not ensure that the studies provide evidence for the absence of an effect and "replication success" can virtually always be achieved if the sample sizes are small enough. In addition, the relevant error rates are not controlled. We show how methods, such as equivalence testing and Bayes factors, can be used to adequately quantify the evidence for the absence of an effect and how they can be applied in the replication setting. Using data from the Reproducibility Project: Cancer Biology we illustrate that many original and replication studies with "null results" are in fact inconclusive, and that their replicability is lower than suggested by the non-significance approach. We conclude that it is important to also replicate studies with statistically non-significant results, but that they should be designed, analyzed, and interpreted appropriately.
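The contrast between non-significance and genuine evidence of absence can be made concrete with an equivalence test. The following sketch implements the standard two one-sided tests (TOST) procedure for a normally distributed estimate (the margin and numbers are illustrative, not taken from the Reproducibility Project data):

```python
from math import erf, sqrt

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def tost_equivalence(estimate, se, margin):
    """Two one-sided tests (TOST) for equivalence of an effect to zero.

    Tests H0: |effect| >= margin against H1: |effect| < margin.
    A small p-value is evidence for the *absence* of a relevant effect,
    which non-significance in both studies alone cannot provide.
    """
    p_lower = 1.0 - norm_cdf((estimate + margin) / se)  # H0: effect <= -margin
    p_upper = norm_cdf((estimate - margin) / se)        # H0: effect >= +margin
    return max(p_lower, p_upper)

# A non-significant but precise estimate can demonstrate equivalence ...
p_precise = tost_equivalence(estimate=0.05, se=0.1, margin=0.3)
# ... while an equally non-significant, imprecise estimate stays inconclusive.
p_vague = tost_equivalence(estimate=0.05, se=0.5, margin=0.3)
```

Both estimates are non-significant at the conventional level, yet only the precise one yields a small equivalence p-value, which is exactly the distinction the abstract draws.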

A class of stochastic Besov spaces $B^p L^2(\Omega;\dot H^\alpha(\mathcal{O}))$, $1\le p\le\infty$ and $\alpha\in[-2,2]$, is introduced to characterize the regularity of the noise in the semilinear stochastic heat equation \begin{equation*} {\rm d} u -\Delta u {\rm d} t =f(u) {\rm d} t + {\rm d} W(t) , \end{equation*} under the following conditions for some $\alpha\in(0,1]$: $$ \Big\| \int_0^te^{-(t-s)A}{\rm d} W(s) \Big\|_{L^2(\Omega;L^2(\mathcal{O}))} \le C t^{\frac{\alpha}{2}} \quad\mbox{and}\quad \Big\| \int_0^te^{-(t-s)A}{\rm d} W(s) \Big\|_{B^\infty L^2(\Omega;\dot H^\alpha(\mathcal{O}))}\le C. $$ The conditions above are shown to be satisfied by both trace-class noises (with $\alpha=1$) and one-dimensional space-time white noises (with $\alpha=\frac12$). The latter would fail to satisfy the conditions with $\alpha=\frac12$ if the stochastic Besov norm $\|\cdot\|_{B^\infty L^2(\Omega;\dot H^\alpha(\mathcal{O}))}$ is replaced by the classical Sobolev norm $\|\cdot\|_{L^2(\Omega;\dot H^\alpha(\mathcal{O}))}$, and this often causes reduction of the convergence order in the numerical analysis of the semilinear stochastic heat equation. In this article, the convergence of a modified exponential Euler method, with a spectral method for spatial discretization, is proved to have order $\alpha$ in both time and space for possibly nonsmooth initial data in $L^4(\Omega;\dot{H}^{\beta}(\mathcal{O}))$ with $\beta>-1$, by utilizing the real interpolation properties of the stochastic Besov spaces and a class of locally refined stepsizes to resolve the singularity of the solution at $t=0$.
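For concreteness, the exponential Euler step on a graded mesh can be written as follows (a sketch under the assumption that the paper's modified scheme follows this standard template; the modification and the spectral truncation may differ in detail):

```latex
% Graded mesh t_n = T (n/N)^{\gamma}, \gamma > 1, refined near the
% solution's singularity at t = 0; standard exponential Euler step:
\[
  u_{n+1}
  = e^{-\tau_n A} u_n
  + \tau_n\, e^{-\tau_n A} f(u_n)
  + \int_{t_n}^{t_{n+1}} e^{-(t_{n+1}-s)A}\,{\rm d} W(s),
  \qquad \tau_n = t_{n+1} - t_n ,
\]
```

with the stochastic convolution handled in the spectral basis, so that the only approximation errors come from the time discretization of the drift and the spatial truncation.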

Motivated by the recent success of machine learning tools in wireless communications, Weaver's 1949 idea of semantic communication has gained attention. It breaks with Shannon's classic design paradigm by aiming to transmit the meaning of a message, i.e., its semantics, instead of its exact version, allowing for information rate savings. In this work, we apply the Stochastic Policy Gradient (SPG) to design a semantic communication system by reinforcement learning, requiring neither a known nor a differentiable channel model, a crucial step towards deployment in practice. Further, we motivate the use of SPG for both classic and semantic communication from the maximization of the mutual information between received and target variables. Numerical results show that our approach achieves comparable performance to a model-aware approach based on the reparametrization trick, albeit with a slower convergence rate.
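The key mechanism, learning through a non-differentiable channel via the score function, can be sketched with a one-symbol toy transmitter (a REINFORCE-style illustration under invented toy parameters, not the paper's system or hyperparameters):

```python
import numpy as np

rng = np.random.default_rng(1)

# Stochastic policy gradient for a one-symbol transmitter over a black-box
# channel: the transmit symbol is sampled from a Gaussian policy, so the
# channel is never differentiated -- only a scalar reward is fed back.
mu = 0.0      # learnable policy mean (the "encoder" parameter)
sigma = 0.5   # fixed exploration noise
target = 1.5  # symbol the receiver should recover
lr = 0.02

for _ in range(5000):
    x = rng.normal(mu, sigma)             # sample a transmit symbol
    y = x + 0.1 * rng.standard_normal()   # unknown channel, used as a black box
    reward = -(y - target) ** 2           # receiver-side feedback
    grad_log_pi = (x - mu) / sigma**2     # score function of the Gaussian policy
    mu += lr * reward * grad_log_pi       # stochastic policy gradient update

# mu drifts toward the symbol maximizing the expected reward (near `target`)
```

Because the update only multiplies the reward by the policy's score function, the channel between `x` and `y` can be an arbitrary, unknown black box, which is the practical advantage the abstract emphasizes over reparametrization-based training.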

Decision Trees (DTs) are commonly used for many machine learning tasks due to their high degree of interpretability. However, learning a DT from data is a difficult optimization problem, as it is non-convex and non-differentiable. Therefore, common approaches learn DTs using a greedy growth algorithm that minimizes the impurity locally at each internal node. Unfortunately, this greedy procedure can lead to suboptimal trees. In this paper, we present a novel approach for learning hard, axis-aligned DTs with gradient descent. The proposed method uses backpropagation with a straight-through operator on a dense DT representation to jointly optimize all tree parameters. Our approach outperforms existing methods on binary classification benchmarks and achieves competitive results for multi-class tasks.
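The straight-through idea for a single axis-aligned split can be sketched as follows (an illustrative single-node example with invented names; the paper's dense tree representation applies this jointly to all split and leaf parameters):

```python
import numpy as np

def soft_split(x, theta, tau=1.0):
    """Differentiable sigmoid relaxation of the split x > theta."""
    return 1.0 / (1.0 + np.exp(-(x - theta) / tau))

def hard_split(x, theta):
    """Hard, non-differentiable routing used in the forward pass."""
    return (x > theta).astype(float)

def straight_through_grad(x, theta, upstream, tau=1.0):
    """Straight-through estimator for the split threshold theta.

    The forward pass uses the hard split; the backward pass pretends the
    soft sigmoid relaxation was used and differentiates that instead,
    so a nonzero gradient flows through the discrete decision.
    """
    s = soft_split(x, theta, tau)
    d_s_d_theta = -s * (1.0 - s) / tau  # derivative of the sigmoid w.r.t. theta
    return upstream * d_s_d_theta

x = np.array([0.2, 0.8, 1.5])
theta = 1.0
forward = hard_split(x, theta)  # hard decisions: [0., 0., 1.]
grad = straight_through_grad(x, theta, upstream=np.ones(3))  # nonzero everywhere
```

The forward pass thus behaves exactly like a hard, axis-aligned tree, while backpropagation still receives informative gradients for every sample.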

Blockchain is an emerging decentralized technology for data collection, sharing, and storage that provides transparent, secure, tamper-proof, and robust ledger services for a variety of real-world use cases. Recent years have witnessed notable developments both in blockchain technology itself and in blockchain-adopting applications. Most existing surveys limit their scope to a few particular issues of blockchain or its applications, which makes it hard to depict the general picture of the current blockchain ecosystem. In this paper, we investigate recent advances in blockchain technology and its most active research topics in real-world applications. We first review recent developments in consensus mechanisms and storage mechanisms of general blockchain systems. We then conduct an extensive literature review of blockchain-enabled IoT, edge computing, and federated learning, as well as several emerging applications including healthcare, the COVID-19 pandemic, social networks, and supply chains, discussing detailed research topics in each. Finally, we discuss future directions, challenges, and opportunities in both academia and industry.

This paper surveys recent advances in large margin training and its theoretical foundations, mostly for (nonlinear) deep neural networks (DNNs), which have arguably been the most prominent machine learning models for large-scale data over the past decade. We generalize the formulation of classification margins from classical research to modern DNNs, summarize theoretical connections between the margin, network generalization, and robustness, and comprehensively introduce recent efforts to enlarge margins for DNNs. Since different methods adopt different viewpoints, we categorize them into groups for ease of comparison and discussion. We hope our discussion and overview inspire new research in the community aiming to improve the performance of DNNs, and we also point to directions where the large margin principle can provide theoretical evidence for why certain regularizations of DNNs work well in practice. We have kept the paper concise so that the crucial ideas of large margin learning and related methods are clearly emphasized.
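To make the central quantity concrete, here is the classical geometric margin for a linear classifier (the DNN generalizations surveyed in the paper build on this definition; the data here are toy values):

```python
import numpy as np

def margins(X, y, w, b):
    """Signed margins y_i * (w . x_i + b) / ||w|| for labels y in {-1, +1}.

    Each margin is the sample's distance to the decision boundary,
    positive if and only if the sample is correctly classified.
    """
    return y * (X @ w + b) / np.linalg.norm(w)

X = np.array([[2.0, 0.0], [0.0, 2.0], [-1.0, -1.0]])
y = np.array([1.0, 1.0, -1.0])
w = np.array([1.0, 1.0])
b = 0.0

m = margins(X, y, w, b)  # every sample lies sqrt(2) from the boundary
```

Large margin training enlarges the smallest of these distances; for DNNs the boundary is nonlinear, which is precisely what makes generalizing this formulation nontrivial.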

The remarkable practical success of deep learning has revealed some major surprises from a theoretical perspective. In particular, simple gradient methods easily find near-optimal solutions to non-convex optimization problems, and despite giving a near-perfect fit to training data without any explicit effort to control model complexity, these methods exhibit excellent predictive accuracy. We conjecture that specific principles underlie these phenomena: that overparametrization allows gradient methods to find interpolating solutions, that these methods implicitly impose regularization, and that overparametrization leads to benign overfitting. We survey recent theoretical progress that provides examples illustrating these principles in simpler settings. We first review classical uniform convergence results and why they fall short of explaining aspects of the behavior of deep learning methods. We give examples of implicit regularization in simple settings, where gradient methods lead to minimal norm functions that perfectly fit the training data. Then we review prediction methods that exhibit benign overfitting, focusing on regression problems with quadratic loss. For these methods, we can decompose the prediction rule into a simple component that is useful for prediction and a spiky component that is useful for overfitting but, in a favorable setting, does not harm prediction accuracy. We focus specifically on the linear regime for neural networks, where the network can be approximated by a linear model. In this regime, we demonstrate the success of gradient flow, and we consider benign overfitting with two-layer networks, giving an exact asymptotic analysis that precisely demonstrates the impact of overparametrization. We conclude by highlighting the key challenges that arise in extending these insights to realistic deep learning settings.
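The implicit regularization phenomenon described above can be reproduced in the simplest overparametrized setting (a toy sketch with random data, not an experiment from the survey): gradient descent from zero on an underdetermined least-squares problem converges to the minimum-norm interpolator.

```python
import numpy as np

rng = np.random.default_rng(0)

# Overparametrized linear regression: more features (p) than samples (n),
# so infinitely many weight vectors fit the training data exactly.
n, p = 20, 100
X = rng.standard_normal((n, p))
y = rng.standard_normal(n)

# Gradient descent on the squared loss, started from zero ...
w = np.zeros(p)
lr = 1e-3
for _ in range(50_000):
    w -= lr * X.T @ (X @ w - y)

# ... converges to the minimum-norm interpolator X^+ y (implicit regularization):
w_min = np.linalg.pinv(X) @ y

# Any other interpolator (min-norm solution plus a null-space direction)
# also fits the data perfectly but has strictly larger norm.
e1 = np.zeros(p)
e1[0] = 1.0
null_dir = e1 - np.linalg.pinv(X) @ (X @ e1)  # component of e1 in the null space
w_other = w_min + null_dir                    # still satisfies X @ w_other == y
```

Gradient descent started at zero never leaves the row space of `X`, which is why it singles out the minimum-norm solution among all interpolators; this is the "simple component" of the decomposition discussed above.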
