
Multiple scattering theory (MST) is a Green's function method that has been widely used in electronic structure calculations for crystalline and disordered systems. The central quantity in MST is the scattering path matrix (SPM), which characterizes the Green's function within a local solution representation. This paper studies various approximations of the SPM under the condition that an appropriate reference system is used for the perturbation. In particular, we justify the convergence of the SPM approximations with respect to the size of the scattering region and the scattering length of the reference, which are the central numerical parameters for achieving a linear-scaling MST method. We also present numerical experiments on several typical systems to support the theory.

Related content

The Symposium on Solid and Physical Modeling (SPM) is an international conference series held annually with the support of the Solid Modeling Association (SMA), ACM SIGGRAPH, and the SIAM Activity Group on Geometric Design. The conference focuses on all aspects of geometric and physical modeling, as well as their applications in design, analysis, and manufacturing, and in biomedicine, geophysics, digital entertainment, and other fields. Official website:

The graduated optimization approach is a heuristic method for finding globally optimal solutions of nonconvex functions and has been theoretically analyzed in several studies. This paper defines a new family of nonconvex functions for graduated optimization, discusses sufficient conditions for membership in that family, and provides a convergence analysis of the graduated optimization algorithm for it. It shows that stochastic gradient descent (SGD) with mini-batch stochastic gradients has the effect of smoothing the objective function, where the degree of smoothing is determined by the learning rate and the batch size. This finding provides theoretical insight into why training with large batch sizes tends to fall into sharp local minima, why decaying learning rates and increasing batch sizes are superior to fixed ones, and what the optimal learning-rate schedule is. To the best of our knowledge, this is the first paper to provide a theoretical explanation for these aspects. Moreover, a new graduated optimization framework that uses a decaying learning rate and an increasing batch size is analyzed, and experimental results on image classification that support our theoretical findings are reported.
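To convey the idea behind this analysis, here is a toy sketch (our own illustration, not the paper's algorithm): SGD is run in stages with a decaying learning rate and an increasing batch size, and the mini-batch gradient noise, which shrinks like sigma/sqrt(batch), plays the role of implicit smoothing that is gradually reduced. The 1-D objective f(x) = x^2 + 0.8*sin(6x), the stage schedule, and the noise scale are all invented for this illustration.

```python
import math
import random

def grad(x):
    # gradient of the toy nonconvex objective f(x) = x**2 + 0.8*sin(6*x)
    return 2 * x + 4.8 * math.cos(6 * x)

def sgd_stage(x, lr, batch, steps, rng, noise=2.0):
    # mini-batch gradient noise scales like noise/sqrt(batch); larger
    # noise acts like stronger implicit smoothing of the objective
    for _ in range(steps):
        g = grad(x) + rng.gauss(0.0, noise / math.sqrt(batch))
        x -= lr * g
    return x

def graduated_sgd(x0=2.0):
    x, rng = x0, random.Random(42)
    # decaying learning rate and increasing batch size: the implicit
    # smoothing decreases stage by stage, as in graduated optimization
    for lr, batch in [(0.1, 1), (0.05, 4), (0.02, 16), (0.01, 64)]:
        x = sgd_stage(x, lr, batch, steps=200, rng=rng)
    return x

x_star = graduated_sgd()
```

In the early stages the large noise lets the iterate escape sharp local minima; the final low-noise stage settles into a nearby minimum of the unsmoothed objective.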

Mutation validation (MV) is a recently proposed approach for model selection, garnering significant interest due to its unique characteristics and potential benefits compared to the widely used cross-validation (CV) method. In this study, we empirically compared MV and $k$-fold CV using benchmark and real-world datasets. By employing Bayesian tests, we compared generalization estimates, yielding three posterior probabilities: practical equivalence, CV superiority, and MV superiority. We also evaluated differences in the capacity of the selected models and in computational efficiency. We found that both MV and CV select models with practically equivalent generalization performance across various machine learning algorithms and the majority of benchmark datasets. MV exhibited advantages in selecting simpler models and in lower computational cost. However, in some cases MV selected overly simplistic models, leading to underfitting, and showed instability in hyperparameter selection. These limitations of MV became more evident in the evaluation of a real-world neuroscientific task of predicting sex at birth using brain functional connectivity.
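For reference, the $k$-fold CV baseline that MV is compared against can be sketched in a few lines. The toy dataset and the threshold "classifier" below are our own invention, purely to make the train/validate split mechanics concrete; MV itself is not shown.

```python
import random

def k_fold_splits(n, k=5, seed=0):
    # yield (train_indices, val_indices) pairs for k-fold CV
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    for i in range(k):
        yield [j for f in folds[:i] + folds[i + 1:] for j in f], folds[i]

rng = random.Random(1)
xs = [rng.uniform(-1, 1) for _ in range(60)]
ys = [int(x > 0.1) for x in xs]          # true decision boundary at 0.1

def fit(train):
    # "training" = picking the best threshold on the training fold
    cands = [-0.5, 0.0, 0.1, 0.5]
    return max(cands, key=lambda t: sum(int(xs[i] > t) == ys[i] for i in train))

def accuracy(t, idx):
    return sum(int(xs[i] > t) == ys[i] for i in idx) / len(idx)

scores = [accuracy(fit(train), val) for train, val in k_fold_splits(len(xs))]
cv_estimate = sum(scores) / len(scores)   # CV estimate of generalization accuracy
```

Each fold is held out exactly once, so the averaged score estimates generalization performance at the cost of $k$ model fits, which is the computational overhead MV aims to avoid.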

The problems of determining the permutation-representation number (prn) and the representation number of bipartite graphs are open in the literature. Moreover, the decision problem corresponding to determining the prn of a bipartite graph is NP-complete. However, these numbers have been established for certain subclasses of bipartite graphs, e.g., for crown graphs. Further, it has been conjectured that crown graphs have the highest representation number among bipartite graphs. In this work, we first reconcile the relation between the prn of a comparability graph and the dimension of its induced poset, and review the upper bounds on the prn of bipartite graphs. Then, we study the prn of bipartite graphs using the notion of neighborhood graphs. This approach substantiates the aforesaid conjecture and provides theoretical evidence for it. In this connection, we devise a polynomial-time procedure to construct a word that permutationally represents a given bipartite graph. Accordingly, we provide a better upper bound on the prn of bipartite graphs. Further, we construct a class of bipartite graphs, viz., extended crown graphs, defined over posets, and investigate their prn using neighborhood graphs.
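To make the underlying notion concrete: a word represents a graph when two vertices are adjacent iff their letters alternate in the word, and a permutational representation is a word that is a concatenation of permutations of the vertex set. The sketch below (our illustration, not the paper's construction) checks the alternation condition; the word "acbcab" is the concatenation of two linear extensions, acb and cab, of the poset with a < b and c < b, and it represents the path a-b-c.

```python
from itertools import combinations

def alternates(word, a, b):
    # a and b alternate in word iff the subword over {a, b} has no
    # two equal consecutive letters
    sub = [c for c in word if c in (a, b)]
    return all(sub[i] != sub[i + 1] for i in range(len(sub) - 1))

def represented_graph(word):
    # edge set of the graph represented by the word
    letters = sorted(set(word))
    return {frozenset((a, b)) for a, b in combinations(letters, 2)
            if alternates(word, a, b)}

edges = represented_graph("acbcab")   # two copies of each letter: prn <= 2
```

Since "acbcab" uses each letter exactly twice and is a concatenation of two permutations, it witnesses that the prn of this path is at most 2.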

Diffusion generative models have emerged as a powerful framework for addressing problems in structural biology and structure-based drug design. These models operate directly on 3D molecular structures. Due to the unfavorable scaling of graph neural networks (GNNs) with graph size, as well as the relatively slow inference inherent to diffusion models, many existing molecular diffusion models rely on coarse-grained representations of protein structure to make training and inference feasible. However, such coarse-grained representations discard information essential for modeling molecular interactions and impair the quality of generated structures. In this work, we present a novel GNN-based architecture for learning latent representations of molecular structure. When trained end-to-end with a diffusion model for de novo ligand design, our model achieves performance comparable to that of a model with an all-atom protein representation while exhibiting a 3-fold reduction in inference time.

Age of information (AoI) and reliability are two critical metrics for supporting real-time applications in the Industrial Internet of Things (IIoT). These metrics reflect different notions of timely delivery of sensor information. Monitoring traffic serves to maintain fresh status updates, expressed as a low AoI, which is important for proper control and actuation. On the other hand, safety-critical information, e.g., emergency alarms, is generated sporadically and must be delivered with high reliability within a predefined deadline. In this work, we investigate the AoI-reliability trade-off in a real-time monitoring scenario that supports two traffic flows, namely AoI-oriented traffic and deadline-oriented traffic. Both traffic flows are transmitted to a central controller over an unreliable shared channel. Using a discrete-time Markov chain (DTMC) analysis, we derive expressions for the average AoI of the AoI-oriented traffic and for the reliability of the deadline-oriented traffic, represented by the packet loss probability (PLP). We also conduct discrete-event simulations in MATLAB to validate the analytical results and evaluate the interaction between the two traffic flows. The results clearly demonstrate the trade-off between AoI and PLP in such heterogeneous IIoT networks and give insights into how to configure the network to achieve a target AoI-PLP pair.
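As a minimal illustration of the AoI side of this trade-off (our own toy model, not the paper's DTMC), consider a generate-at-will source that sends a fresh update in every slot over a channel that succeeds with probability p: the age resets to 1 on success and grows by 1 otherwise, and for this simple model the average AoI works out to 1/p.

```python
import random

def simulate_aoi(p_success, slots=200_000, seed=0):
    # Average AoI for a generate-at-will source over an unreliable
    # channel: a fresh update is transmitted every slot and is
    # delivered with probability p_success.
    rng = random.Random(seed)
    age, total = 1, 0
    for _ in range(slots):
        total += age
        age = 1 if rng.random() < p_success else age + 1
    return total / slots

avg = simulate_aoi(0.5)   # theory for this toy model: E[AoI] = 1/p = 2
```

Coupling such a source with sporadic deadline-constrained alarms on the same channel is exactly what creates the AoI-PLP tension the abstract analyzes: slots given to alarms inflate the age, while slots given to updates risk missing alarm deadlines.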

This paper considers the Cauchy problem for the nonlinear dynamic string equation of Kirchhoff type with time-varying coefficients. The objective of this work is to develop a time-discretization algorithm capable of approximating a solution to this initial-boundary value problem. To this end, a symmetric three-layer semi-discrete scheme is employed with respect to the temporal variable, wherein the nonlinear term is evaluated at the middle node point. This approach enables the numerical solution at each temporal step to be obtained by inverting linear operators, yielding a system of second-order linear ordinary differential equations. Local convergence of the proposed scheme is established: it achieves quadratic convergence with respect to the time-discretization step size on the local temporal interval. We have conducted several numerical experiments using the proposed algorithm on various test problems to validate its performance. The obtained numerical results are in accordance with the theoretical findings.
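To convey the flavor of the middle-node evaluation, here is a toy fully discrete solver. It is our own sketch, not the paper's scheme: we use an explicit three-layer stencil with finite differences in space (the paper's scheme is semi-discrete and implicit), applied to a model Kirchhoff string u_tt = (a + b*integral(u_x^2)) u_xx with constant a, b. The point being illustrated is that the nonlinear coefficient is frozen at the middle time layer, so each step only requires solving a linear problem.

```python
import math

n, tau, steps = 64, 1e-3, 200
h = math.pi / n
x = [i * h for i in range(n + 1)]
u_prev = [math.sin(xi) for xi in x]   # u(x, 0) = sin(x), ends pinned at 0
u_curr = u_prev[:]                    # u_t(x, 0) = 0 (first-order start)

def kirchhoff_coeff(u):
    # a + b * integral of u_x^2 over [0, pi], via forward differences
    s = sum(((u[i + 1] - u[i]) / h) ** 2 for i in range(n)) * h
    return 1.0 + 0.5 * s              # a = 1, b = 0.5 (toy values)

for _ in range(steps):
    c = kirchhoff_coeff(u_curr)       # nonlinear term frozen at middle layer
    u_next = [0.0] * (n + 1)          # boundaries stay zero
    for i in range(1, n):
        lap = (u_curr[i - 1] - 2 * u_curr[i] + u_curr[i + 1]) / h ** 2
        u_next[i] = 2 * u_curr[i] - u_prev[i] + tau ** 2 * c * lap
    u_prev, u_curr = u_curr, u_next

amplitude = max(abs(v) for v in u_curr)   # should stay near 1 for small t
```

Because c is evaluated at the known middle layer, the update for u_next is linear in u_next; in the implicit variant this is where a linear operator would be inverted at each step.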

We propose a new matrix factor model, named RaDFaM, whose latent structure is strictly derived from a hierarchical rank decomposition of a matrix. The hierarchy lies in the assumption that all basis vectors of the column space of each multiplier matrix follow a vector factor model. Compared to the most commonly used matrix factor model, which takes the latent structure of a bilinear form, RaDFaM involves additional row-wise and column-wise matrix latent factors. This yields modest dimension reduction and stronger signal intensity from the viewpoint of tensor subspace learning, though it poses challenges for the estimation procedure and the concomitant inferential theory for a collection of matrix-valued observations. We develop a class of estimation procedures that exploit the separable covariance structure under RaDFaM together with approximate least squares, and establish their superiority in terms of peak signal-to-noise ratio. We also establish the asymptotic theory when the matrix-valued observations are uncorrelated or weakly correlated. Numerically, in matrix/image reconstruction, supervised learning, and related tasks, we demonstrate the excellent performance of RaDFaM on two matrix-valued sequence datasets: independent 2D images and a time series of multinational macroeconomic indices.
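Since reconstruction quality here is scored by peak signal-to-noise ratio, the sketch below (our own illustration, unrelated to RaDFaM's actual estimator) shows the basic pipeline on the simplest possible case: a rank-1 approximation of a noisy matrix via power iteration, evaluated with PSNR. All names and toy dimensions are ours.

```python
import math
import random

def psnr(A, B, peak=1.0):
    # peak signal-to-noise ratio between two equally sized matrices
    mse = sum((a - b) ** 2 for ra, rb in zip(A, B) for a, b in zip(ra, rb))
    mse /= len(A) * len(A[0])
    return float("inf") if mse == 0 else 10 * math.log10(peak ** 2 / mse)

def rank1_approx(A, iters=100):
    # top singular pair via power iteration; returns s * u v^T
    m, n = len(A), len(A[0])
    v = [1.0 / math.sqrt(n)] * n
    for _ in range(iters):
        u = [sum(A[i][j] * v[j] for j in range(n)) for i in range(m)]
        nu = math.sqrt(sum(t * t for t in u)); u = [t / nu for t in u]
        v = [sum(A[i][j] * u[i] for i in range(m)) for j in range(n)]
        nv = math.sqrt(sum(t * t for t in v)); v = [t / nv for t in v]
    s = sum(u[i] * A[i][j] * v[j] for i in range(m) for j in range(n))
    return [[s * u[i] * v[j] for j in range(n)] for i in range(m)]

# a noisy rank-1 matrix: its rank-1 reconstruction should score a high PSNR
rng = random.Random(0)
u0 = [rng.random() for _ in range(8)]
v0 = [rng.random() for _ in range(6)]
A = [[u0[i] * v0[j] + 0.01 * rng.gauss(0, 1) for j in range(6)] for i in range(8)]
A1 = rank1_approx(A)
```

RaDFaM's hierarchical decomposition generalizes this picture by putting factor-model structure on the basis vectors themselves, rather than estimating a single low-rank fit per matrix.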

The Frank-Wolfe method has become increasingly useful in statistical and machine learning applications, due to the structure-inducing properties of its iterates, and especially in settings where linear minimization over the feasible set is more computationally efficient than projection. In the setting of Empirical Risk Minimization -- one of the fundamental optimization problems in statistical and machine learning -- the computational cost of Frank-Wolfe methods typically grows linearly in the number of data observations $n$. This is in stark contrast to the case for typical stochastic projection methods. In order to reduce this dependence on $n$, we look to the second-order smoothness of typical smooth loss functions (least squares loss and logistic loss, for example) and propose amending the Frank-Wolfe method with Taylor series-approximated gradients, including variants for both deterministic and stochastic settings. Compared with current state-of-the-art methods in the regime where the optimality tolerance $\varepsilon$ is sufficiently small, our methods are able to simultaneously reduce the dependence on large $n$ while obtaining the optimal convergence rates of Frank-Wolfe methods, in both the convex and non-convex settings. We also propose a novel adaptive step-size approach for which we have computational guarantees. Finally, we present computational experiments which show that our methods exhibit very significant speed-ups over existing methods on real-world datasets for both convex and non-convex binary classification problems.
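For context, the classical Frank-Wolfe iteration the paper builds on can be sketched in a few lines: over the probability simplex, the linear minimization oracle is just an argmin over coordinates, which is what makes the method projection-free. This is our own minimal sketch with the standard 2/(t+2) step size and an invented toy objective; the paper's Taylor-approximated gradient variants are not shown.

```python
def frank_wolfe_simplex(grad_f, dim, iters=200):
    # Frank-Wolfe over the probability simplex: the linear minimization
    # oracle picks the vertex (coordinate) with the smallest partial
    # derivative, which is far cheaper than a projection.
    x = [1.0 / dim] * dim
    for t in range(iters):
        g = grad_f(x)
        s = min(range(dim), key=lambda i: g[i])   # LMO returns vertex e_s
        gamma = 2.0 / (t + 2)                      # standard step size
        x = [(1 - gamma) * xi for xi in x]
        x[s] += gamma                              # move toward the vertex
    return x

# toy objective: f(x) = sum_i (x_i - b_i)^2 with b in the simplex interior
b = [0.5, 0.3, 0.2]
grad_f = lambda x: [2 * (xi - bi) for xi, bi in zip(x, b)]
x_star = frank_wolfe_simplex(grad_f, 3)
```

Each iterate is a convex combination of vertices, which is the structure-inducing (sparsity) property the abstract refers to; the paper's contribution is cheapening the gradient evaluation inside this loop, not changing the loop itself.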

The existence of representative datasets is a prerequisite for many successful artificial intelligence and machine learning models. However, the subsequent application of these models often involves scenarios that are inadequately represented in the data used for training. The reasons for this are manifold and range from time and cost constraints to ethical considerations. As a consequence, the reliable use of these models, especially in safety-critical applications, is a huge challenge. Leveraging additional, already existing sources of knowledge is key to overcoming the limitations of purely data-driven approaches, and eventually to increasing the generalization capability of these models. Furthermore, predictions that conform with knowledge are crucial for making trustworthy and safe decisions even in underrepresented scenarios. This work provides an overview of existing techniques and methods in the literature that combine data-driven models with existing knowledge. The identified approaches are structured according to the categories of integration, extraction, and conformity. Special attention is given to applications in the field of autonomous driving.

Neural machine translation (NMT) is a deep-learning-based approach to machine translation that yields state-of-the-art translation performance in scenarios where large-scale parallel corpora are available. Although high-quality, domain-specific translation is crucial in the real world, domain-specific corpora are usually scarce or nonexistent, and thus vanilla NMT performs poorly in such scenarios. Domain adaptation, which leverages both out-of-domain parallel corpora and in-domain monolingual corpora for in-domain translation, is therefore very important for domain-specific translation. In this paper, we give a comprehensive survey of state-of-the-art domain adaptation techniques for NMT.
