
Driven by the availability of modern software and hardware, Bayesian analysis is becoming more popular in neutron and X-ray reflectometry analysis. The understandability and replicability of these analyses may be harmed by inconsistencies in how the probability distributions central to Bayesian methods are represented in the literature. Herein, we provide advice on how to report the results of Bayesian analysis as applied to neutron and X-ray reflectometry. This includes the clear reporting of initial starting conditions, the prior probabilities, the results of any analysis, and the posterior probabilities that are the Bayesian equivalent of the error bar, to enable replicability and improve understanding. We believe that this advice, grounded in our experience working in the field, will enable greater analytical reproducibility among the reflectometry community, as well as improve the quality and usability of results.
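As a rough illustration of the kind of reporting advocated here, the sketch below runs a toy Metropolis sampler on a hypothetical two-parameter model and prints the prior bounds alongside posterior medians and 95% credible intervals. The parameter names, prior bounds, and Gaussian stand-in likelihood are all assumptions for illustration, not a real reflectivity calculation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-parameter model: film thickness and roughness (both in Å).
# Uniform priors -- report these bounds explicitly in any write-up.
PRIOR_BOUNDS = {"thickness": (20.0, 80.0), "roughness": (0.0, 10.0)}

def log_prior(theta):
    t, r = theta
    (t_lo, t_hi), (r_lo, r_hi) = PRIOR_BOUNDS.values()
    return 0.0 if (t_lo <= t <= t_hi and r_lo <= r <= r_hi) else -np.inf

def log_likelihood(theta):
    # Toy Gaussian likelihood standing in for a reflectivity model.
    t, r = theta
    return -0.5 * (((t - 50.0) / 2.0) ** 2 + ((r - 4.0) / 1.0) ** 2)

def log_posterior(theta):
    lp = log_prior(theta)
    return lp + log_likelihood(theta) if np.isfinite(lp) else -np.inf

# Simple Metropolis sampler; in practice a package such as emcee or dynesty
# would be used, but the reporting obligations are the same.
theta, samples = np.array([50.0, 4.0]), []
for _ in range(20000):
    prop = theta + rng.normal(scale=[0.5, 0.25])
    if np.log(rng.uniform()) < log_posterior(prop) - log_posterior(theta):
        theta = prop
    samples.append(theta)
samples = np.array(samples[5000:])  # discard burn-in

# Report the posterior as a median and a 95% credible interval per parameter,
# the Bayesian analogue of the error bar mentioned above.
for i, name in enumerate(PRIOR_BOUNDS):
    lo, med, hi = np.percentile(samples[:, i], [2.5, 50, 97.5])
    print(f"{name}: {med:.2f} (+{hi - med:.2f}/-{med - lo:.2f}), "
          f"prior bounds {PRIOR_BOUNDS[name]}")
```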

Related Content

We investigate the time complexities of finite difference methods for solving the high-dimensional linear heat equation, the high-dimensional linear hyperbolic equation, and the multiscale hyperbolic heat system with quantum algorithms (henceforth referred to as "quantum difference methods"). For the heat and linear hyperbolic equations, we study the impact of explicit and implicit time discretizations on the quantum advantage over the classical difference method. For the multiscale problem, we find that the time complexity of both the classical and quantum treatments of the explicit scheme scales as $\mathcal{O}(1/\varepsilon)$, where $\varepsilon$ is the scaling parameter, while the scaling for the multiscale Asymptotic-Preserving (AP) schemes does not depend on $\varepsilon$. This indicates that it remains of great importance to develop AP schemes for multiscale problems in quantum computing.
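To make the role of the time discretization concrete, here is a minimal classical (non-quantum) sketch of the 1D heat equation that contrasts the explicit scheme's stability restriction with the unconditionally stable implicit scheme; the grid sizes and step counts are illustrative only.

```python
import numpy as np

# Classical baseline for the 1D heat equation u_t = u_xx on [0, 1] with
# homogeneous Dirichlet boundary conditions.
N = 64
dx = 1.0 / N
x = np.linspace(0.0, 1.0, N + 1)

# Explicit scheme: stability requires dt <= dx^2 / 2, so the number of time
# steps grows like 1/dx^2 -- the kind of restriction behind the O(1/eps)
# scaling of the explicit multiscale scheme.
u = np.sin(np.pi * x)
dt = 0.4 * dx**2
for _ in range(200):
    u[1:-1] += dt / dx**2 * (u[2:] - 2.0 * u[1:-1] + u[:-2])

# Implicit (backward Euler) scheme: unconditionally stable, so dt can be
# chosen independently of dx, at the price of a linear solve per step.
dt_imp = 50.0 * dx**2
main = (1.0 + 2.0 * dt_imp / dx**2) * np.ones(N - 1)
off = (-dt_imp / dx**2) * np.ones(N - 2)
A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
v = np.sin(np.pi * x)
for _ in range(200):
    v[1:-1] = np.linalg.solve(A, v[1:-1])

print("explicit max:", u.max(), "implicit max:", v.max())
```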

The Transformer is a transformative framework that models sequential data and has achieved remarkable performance on a wide range of tasks, but at a high computational and energy cost. To improve its efficiency, a popular choice is to compress the models via binarization, which constrains floating-point values to binary ones so that cheap bitwise operations can significantly reduce resource consumption. However, existing binarization methods only aim at minimizing the information loss of the input distribution statistically, while ignoring the pairwise similarity modeling at the core of the attention mechanism. To this end, we propose a new binarization paradigm customized to high-dimensional softmax attention via kernelized hashing, called EcoFormer, to map the original queries and keys into low-dimensional binary codes in Hamming space. The kernelized hash functions are learned to match the ground-truth similarity relations extracted from the attention map in a self-supervised way. Based on the equivalence between the inner product of binary codes and the Hamming distance, as well as the associative property of matrix multiplication, we can approximate the attention in linear complexity by expressing it as a dot product of binary codes. Moreover, the compact binary representations of queries and keys enable us to replace most of the expensive multiply-accumulate operations in attention with simple accumulations, saving a considerable on-chip energy footprint on edge devices. Extensive experiments on both vision and language tasks show that EcoFormer consistently achieves comparable performance with standard attention while consuming far fewer resources. For example, based on PVTv2-B0 and ImageNet-1K, EcoFormer achieves a 73% energy footprint reduction with only a 0.33% performance drop compared to standard attention. Code is available at //github.com/ziplab/EcoFormer.
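The sketch below reproduces only the structure of this approximation: random sign projections stand in for the learned kernelized hash functions (the code length, dimensions, and projections are assumptions, not the paper's trained model), and the attention map is never formed explicitly.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, b = 128, 64, 16  # tokens, feature dim, binary code length (illustrative)

Q, K, V = (rng.normal(size=(n, d)) for _ in range(3))

# Random sign projections stand in for the *learned* kernelized hash functions.
W = rng.normal(size=(d, b))
Qb, Kb = np.sign(Q @ W), np.sign(K @ W)  # binary codes in {-1, +1}^b

# The inner product of codes determines the Hamming distance:
# <q, k> = b - 2 * d_H(q, k). Using the non-negative similarity
# s(q, k) = (<q, k> + b) / 2 and the associativity of matrix products,
# the similarity-weighted average of values costs O(n * b * d), not O(n^2 * d).
num = 0.5 * (Qb @ (Kb.T @ V) + b * V.sum(axis=0))  # never forms the n x n map
den = 0.5 * (Qb @ Kb.sum(axis=0) + n * b)
out = num / den[:, None]
print(out.shape)  # (n, d)
```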

Sparse code multiple access (SCMA) is among the most promising of the non-orthogonal multiple access (NOMA) schemes for the new air interface of 5G wireless communications. Another efficient 5G technique aimed at improving spectral efficiency for local communications is device-to-device (D2D) communication. We therefore consider an SCMA cellular network coexisting with D2D communications to meet the connectivity demands of the Internet of Things (IoT), and improve the sum-rate performance of the hybrid network. We first derive the information-theoretic expression of the capacity for all users and find the capacity bound of cellular users based on the mutual interference between cellular users and D2D users. Then we jointly optimize the powers of the cellular users and D2D users to maximize the system sum rate. To tackle the non-convex optimization problem, we propose a geometric programming (GP) based iterative power allocation algorithm. Simulation results demonstrate that the proposed algorithm converges quickly and substantially improves the sum-rate performance.
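As a rough illustration of GP-based power allocation (not the paper's exact algorithm), the sketch below uses the standard high-SINR approximation: maximizing the sum of log-SINRs is equivalent to minimizing the product of inverse SINRs, which is a geometric program. The channel gains, noise power, and power budget are made up.

```python
import cvxpy as cp
import numpy as np

# G[i, j] is the (hypothetical) gain from transmitter j to receiver i;
# the links can be read as a mix of cellular and D2D pairs.
G = np.array([[1.0, 0.1, 0.2],
              [0.1, 0.8, 0.1],
              [0.2, 0.1, 0.9]])
sigma, p_max, n = 0.1, 1.0, 3

p = cp.Variable(n, pos=True)  # transmit powers
inv_sinr = []
for i in range(n):
    interference = sigma + cp.sum(
        cp.hstack([G[i, j] * p[j] for j in range(n) if j != i]))
    # posynomial / monomial -- a valid GP expression
    inv_sinr.append(interference / (G[i, i] * p[i]))

problem = cp.Problem(cp.Minimize(cp.prod(cp.hstack(inv_sinr))), [p <= p_max])
problem.solve(gp=True)  # cvxpy's geometric-programming mode
print("powers:", np.round(p.value, 4))
```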

Simultaneously transmitting/refracting and reflecting reconfigurable intelligent surfaces (STAR-RISs) have been introduced to achieve full coverage. This paper investigates the performance of STAR-RIS assisted non-orthogonal multiple access (NOMA) networks over Rician fading channels, where the incident signals sent by the base station are reflected and transmitted to the nearby user and the distant user, respectively. To evaluate the performance of STAR-RIS-NOMA networks, we derive new approximate expressions for the outage probability and ergodic rate of a pair of users, taking both the imperfect successive interference cancellation (ipSIC) and perfect SIC (pSIC) schemes into consideration. Based on the asymptotic expressions, the diversity orders of the nearby user with ipSIC/pSIC and of the distant user are derived. The high signal-to-noise ratio slopes of the ergodic rates for the nearby user with pSIC and the distant user equal one and zero, respectively. In addition, the system throughput of STAR-RIS-NOMA is discussed in delay-limited and delay-tolerant modes. Simulation results are provided to verify the accuracy of the theoretical analyses and demonstrate that: 1) the outage probability of STAR-RIS-NOMA outperforms that of STAR-RIS assisted orthogonal multiple access (OMA) and of conventional cooperative communication systems; 2) as the number of reflecting elements $K$ and the Rician factor $\kappa$ increase, the STAR-RIS-NOMA networks attain enhanced performance; and 3) the ergodic rates of STAR-RIS-NOMA are superior to those of STAR-RIS-OMA.
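A minimal Monte Carlo sketch of the kind of outage evaluation described above follows, under strong simplifications: perfect SIC, idealized coherent combining over the $K$ elements, and made-up power coefficients and target rates. It is meant only to show the mechanics, not to reproduce the paper's analysis.

```python
import numpy as np

rng = np.random.default_rng(1)

def rician(kappa, size):
    """Rician fading coefficients with K-factor kappa and unit mean power."""
    los = np.sqrt(kappa / (kappa + 1.0))
    scatter = rng.normal(size=size) + 1j * rng.normal(size=size)
    return los + np.sqrt(1.0 / (2.0 * (kappa + 1.0))) * scatter

K, kappa = 16, 3.0        # reflecting elements and Rician factor (illustrative)
a_n, a_f = 0.2, 0.8       # NOMA power coefficients (near, far user)
R_n, R_f = 1.0, 0.5       # target rates in bit/s/Hz
trials = 100_000

for snr_db in range(0, 31, 10):
    rho = 10.0 ** (snr_db / 10.0)
    # Idealized coherent combining over the K elements for the near user.
    g = np.abs(rician(kappa, (trials, K))).sum(axis=1) ** 2
    # Near user first decodes the far user's message (pSIC), then its own.
    sinr_f_at_n = a_f * rho * g / (a_n * rho * g + 1.0)
    sinr_n = a_n * rho * g
    outage = np.mean((np.log2(1 + sinr_f_at_n) < R_f) |
                     (np.log2(1 + sinr_n) < R_n))
    print(f"SNR {snr_db:2d} dB: near-user outage ~ {outage:.4f}")
```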

In this paper, we consider a structurally damped elastic equation under hinged boundary conditions. Fully discrete numerical approximation schemes are generated for the null controllability of these parabolic-like PDEs. We mainly use finite element method (FEM) and finite difference method (FDM) approximations to show that the null controllers approximated via FEM and FDM exhibit exactly the same asymptotics of the associated minimal energy function. For this, we appeal to the theory originally given by R. Triggiani [20] for the construction of null controllers of ODE systems. These null controllers are also amenable to numerical implementation, for which we discuss aspects of the FEM and FDM approximations and compare the two methodologies. We justify our theoretical results with numerical experiments for both approximation schemes.
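For intuition, the sketch below implements the classical minimal-energy null-controller construction for a finite-dimensional system $x' = Ax + Bu$ via the controllability Gramian, which is the ODE-level setting referred to above. The FDM Laplacian, actuator location, and horizon are illustrative; the Gramian can be severely ill-conditioned on finer grids.

```python
import numpy as np
from scipy.linalg import expm

N, T = 8, 1.0
dx = 1.0 / (N + 1)
A = (np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
     + np.diag(np.ones(N - 1), -1)) / dx**2   # parabolic-like FDM dynamics
B = np.zeros((N, 1))
B[N // 2, 0] = 1.0                            # a single interior actuator

# Controllability Gramian W = int_0^T e^{At} B B^T e^{A^T t} dt,
# computed with a simple trapezoid rule.
ts = np.linspace(0.0, T, 201)
vals = []
for t in ts:
    E = expm(A * t)
    vals.append(E @ B @ B.T @ E.T)
W = np.zeros((N, N))
for i in range(len(ts) - 1):
    W += 0.5 * (ts[i + 1] - ts[i]) * (vals[i] + vals[i + 1])

# Steer x0 to zero at time T: u(t) = -B^T e^{A^T (T - t)} W^{-1} e^{A T} x0.
# W is typically very ill-conditioned, hence the least-squares solve.
x0 = np.sin(np.pi * dx * np.arange(1, N + 1))
eta = np.linalg.lstsq(W, expm(A * T) @ x0, rcond=None)[0]
u = lambda t: -(B.T @ expm(A.T * (T - t)) @ eta)
print("control at t=0:", u(0.0))
```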

Prophet inequalities for reward maximization are fundamental results from optimal stopping theory with several applications to mechanism design and online optimization. We study the cost minimization counterpart of the classical prophet inequality, where one is facing a sequence of costs $X_1, X_2, \dots, X_n$ in an online manner and must "stop" at some point and take the last cost seen. Given that the $X_i$'s are independent and drawn from known distributions, the goal is to devise a stopping strategy $S$ (online algorithm) that minimizes the expected cost. We first observe that if the $X_i$'s are not identically distributed, then no strategy can achieve a bounded approximation, whether the arrival order is adversarial or random. This leads us to consider the case where the $X_i$'s are i.i.d. For the i.i.d. case, we give a complete characterization of the optimal stopping strategy. We show that it achieves a (distribution-dependent) constant-factor approximation to the prophet's cost for almost all distributions and that this constant is tight. In particular, for distributions for which the integral of the hazard rate is a polynomial $H(x) = \sum_{i=1}^k a_i x^{d_i}$, where $d_1 < \dots < d_k$, the approximation factor is $\lambda(d_1)$, a decreasing function of $d_1$. Furthermore, for MHR distributions, we show that this constant is at most $2$, and this is again tight. We also analyze single-threshold strategies for the cost prophet inequality problem. We design a threshold that achieves an $O(\operatorname{polylog} n)$-factor approximation, where the exponent in the logarithmic factor is a distribution-dependent constant, and we show a matching lower bound. We believe that our results are of independent interest for analyzing approximately optimal (posted-price-style) mechanisms for procuring items.
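In the i.i.d. case, the optimal strategy is given by backward induction over continuation values: with one draw left you must accept, and with $j$ draws left you accept $X$ iff it is at most the continuation value $W_{j-1}$, so $W_j = \mathbb{E}[\min(X, W_{j-1})]$. A Monte Carlo sketch with exponential costs (an illustrative distribution choice, not one singled out by the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

n, m = 20, 200_000
sample = rng.exponential(1.0, size=m)  # i.i.d. costs X ~ Exp(1)

# Backward induction on continuation values:
# W_1 = E[X]; W_j = E[min(X, W_{j-1})] -- accept X iff X <= W_{j-1}.
W = sample.mean()
for _ in range(n - 1):
    W = np.minimum(sample, W).mean()

# The prophet pays E[min_i X_i]; the ratio is the approximation factor.
prophet = rng.exponential(1.0, size=(m // 10, n)).min(axis=1).mean()
print(f"optimal stopping cost ~ {W:.4f}")
print(f"prophet cost ~ {prophet:.4f}, ratio ~ {W / prophet:.3f}")
```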

Emergency shelters, which to a certain extent reflect a city's ability to respond to and deal with major public emergencies, are essential to a modern urban emergency management system. Based on spatial analysis methods, this paper uses the Analytic Hierarchy Process (AHP) to analyze the suitability of 28 emergency shelters in Wuhan. The Technique for Order Preference by Similarity to an Ideal Solution (TOPSIS) is further used to evaluate the accommodation capacity of emergency shelters in the central urban areas, which provides a reference for optimizing existing shelters and selecting sites for new ones, and a basis for improving shelter service capacity. The results show that the overall situation of emergency shelters in Wuhan is good, with 96% of the sites reaching the medium level or above, but the suitability level needs to be further improved, especially effectiveness and accessibility. Among the seven central urban areas of Wuhan, Hongshan District has the strongest accommodation capacity while Jianghan District has the weakest, with noticeable differences.
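For concreteness, a minimal TOPSIS ranking sketch follows; the decision matrix, criteria, and weights (e.g., as would be obtained from AHP pairwise comparisons) are hypothetical stand-ins for the shelter indicators used in the paper.

```python
import numpy as np

# Rows: candidate shelters. Columns (hypothetical): effectiveness score,
# accessibility score, distance to hazard source (a cost-type criterion).
X = np.array([
    [7.0, 0.8, 1200.0],
    [5.0, 0.9, 3000.0],
    [9.0, 0.6, 1800.0],
])
weights = np.array([0.5, 0.3, 0.2])       # e.g., from AHP
benefit = np.array([True, True, False])   # higher-is-better flags

R = X / np.linalg.norm(X, axis=0)         # vector normalization
V = R * weights
ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
d_pos = np.linalg.norm(V - ideal, axis=1)
d_neg = np.linalg.norm(V - anti, axis=1)
closeness = d_neg / (d_pos + d_neg)       # higher = closer to the ideal
print("ranking (best first):", np.argsort(-closeness))
```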

We introduce two new tools to assess the validity of statistical distributions. These tools are based on components derived from a new statistical quantity, the comparison curve. The first tool is a graphical representation of these components on a bar plot (B plot), which can provide a detailed appraisal of the validity of the statistical model, in particular when supplemented by acceptance regions related to the model. The knowledge gained from this representation can sometimes suggest an existing goodness-of-fit test to supplement the visual assessment with a control of the type I error. Otherwise, an adaptive test may be preferable, and the second tool is the combination of these components into a powerful $\chi^2$-type goodness-of-fit test. Because the number of these components can be large, we introduce a new selection rule to decide, in a data-driven fashion, on the proper number to take into consideration. In a simulation study, our goodness-of-fit tests are competitive in power with the best solutions that have been recommended in the context of a fully specified model, as well as when some parameters must be estimated. Practical examples show how to use these tools to derive principled information about where the model departs from the data.
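As a rough analogue of combining orthogonal components with a data-driven selection rule (not the paper's exact comparison-curve statistic), the sketch below computes a Neyman-type smooth test for uniformity with a Schwarz/BIC-style choice of the number of components; for a general fully specified model one would first apply the probability integral transform.

```python
import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(0)

def smooth_components(u, k_max):
    """Components t_j = sqrt(n) * mean(phi_j(u)), phi_j orthonormal on U(0,1)."""
    comps = []
    for j in range(1, k_max + 1):
        c = np.zeros(j + 1)
        c[j] = 1.0
        # Normalized Legendre polynomials mapped to [0, 1].
        phi = legendre.legval(2.0 * u - 1.0, c) * np.sqrt(2 * j + 1)
        comps.append(phi.mean() * np.sqrt(len(u)))
    return np.array(comps)

def data_driven_stat(u, k_max=10):
    t = smooth_components(u, k_max)
    cum = np.cumsum(t**2)
    # BIC-type penalty selects the number of components (Ledwina-style rule).
    k = int(np.argmax(cum - np.arange(1, k_max + 1) * np.log(len(u)))) + 1
    return cum[k - 1], k

x = rng.beta(1.3, 1.0, size=300)  # mildly non-uniform data
stat, k = data_driven_stat(x)
print(f"selected {k} component(s), statistic = {stat:.2f}")
```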

Variational Bayesian posterior inference often requires simplifying approximations such as mean-field parametrisation to ensure tractability. However, prior work has associated the variational mean-field approximation for Bayesian neural networks with underfitting in the case of small datasets or large model sizes. In this work, we show that invariances in the likelihood function of over-parametrised models contribute to this phenomenon because these invariances complicate the structure of the posterior by introducing discrete and/or continuous modes which cannot be well approximated by Gaussian mean-field distributions. In particular, we show that the mean-field approximation has an additional gap in the evidence lower bound compared to a purpose-built posterior that takes into account the known invariances. Importantly, this invariance gap is not constant; it vanishes as the approximation reverts to the prior. We proceed by first considering translation invariances in a linear model with a single data point in detail. We show that, while the true posterior can be constructed from a mean-field parametrisation, this is achieved only if the objective function takes into account the invariance gap. Then, we transfer our analysis of the linear model to neural networks. Our analysis provides a framework for future work to explore solutions to the invariance problem.
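The flavour of the argument can be seen in a two-parameter toy model whose likelihood depends only on $w_1 + w_2$: the exact posterior is strongly correlated along the invariant direction, while the best diagonal (mean-field) Gaussian necessarily discards that correlation. The numbers below are illustrative, not from the paper.

```python
import numpy as np

# Prior w ~ N(0, I); likelihood y ~ N(w1 + w2, sigma2). The posterior
# depends on w only through w1 + w2, inducing strong negative correlation.
sigma2, y = 0.1, 1.0
ones = np.ones((2, 1))
post_prec = np.eye(2) + (ones @ ones.T) / sigma2   # posterior precision
post_cov = np.linalg.inv(post_prec)
post_mean = (post_cov @ ones * (y / sigma2)).ravel()
print("exact posterior covariance:\n", post_cov)   # off-diagonal ~ -0.48

# For a Gaussian target, the mean-field VI fixed point matches the means
# but sets each variance to 1 / Lambda_ii: the correlation is discarded
# and the marginal variances are underestimated.
mf_var = 1.0 / np.diag(post_prec)
print("mean-field variances:", mf_var)
print("exact marginal variances:", np.diag(post_cov))
```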

Neural networks have grown tremendously in recent years and have been applied to numerous problems. Various types of neural networks have been introduced to deal with different types of problems. However, the main goal of any neural network is to transform non-linearly separable input data into more linearly separable abstract features using a hierarchy of layers. These layers are combinations of linear and nonlinear functions. The most popular and common non-linearity layers are activation functions (AFs), such as Logistic Sigmoid, Tanh, ReLU, ELU, Swish, and Mish. In this paper, a comprehensive overview and survey of AFs in neural networks for deep learning is presented. Different classes of AFs, such as Logistic Sigmoid and Tanh based, ReLU based, ELU based, and learning based, are covered. Several characteristics of AFs, such as output range, monotonicity, and smoothness, are also pointed out. A performance comparison is also carried out among 18 state-of-the-art AFs with different networks on different types of data. Insights into AFs are presented to help researchers conduct further studies and practitioners to select among the different choices. The code used for the experimental comparison is released at \url{//github.com/shivram1987/ActivationFunctions}.
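For reference, here are minimal NumPy implementations of the activation functions named above; the parameter defaults ($\alpha$ for ELU, $\beta$ for Swish) are common conventions, not prescriptions from the survey.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))       # output in (0, 1)

def tanh(x):
    return np.tanh(x)                     # output in (-1, 1)

def relu(x):
    return np.maximum(0.0, x)             # non-smooth, unbounded above

def elu(x, alpha=1.0):
    return np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

def softplus(x):
    return np.log1p(np.exp(x))            # smooth surrogate of ReLU

def swish(x, beta=1.0):
    return x * sigmoid(beta * x)          # smooth, non-monotonic

def mish(x):
    return x * np.tanh(softplus(x))       # smooth, non-monotonic

x = np.linspace(-3.0, 3.0, 7)
for f in (sigmoid, tanh, relu, elu, swish, mish):
    print(f"{f.__name__:>8}:", np.round(f(x), 3))
```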
