
A single-step high-order implicit time integration scheme with controllable numerical dissipation at high frequency is presented for the transient analysis of structural dynamic problems. The amount of numerical dissipation is controlled by a user-specified value of the spectral radius $\rho_\infty$ in the high frequency limit. Using this user-specified parameter as a weight factor, a Padé expansion of the matrix exponential solution of the equation of motion is constructed by mixing the diagonal and sub-diagonal expansions. An efficient time stepping scheme is designed where systems of equations similar in complexity to the standard Newmark method are solved recursively. It is shown that the proposed high-order scheme achieves high-frequency dissipation while minimizing low-frequency dissipation and period errors. The effectiveness of dissipation control and efficiency of the scheme are demonstrated with numerical examples. A simple recommendation on the choice of the controlling parameter and time step size is provided. The source code written in MATLAB and FORTRAN is available for download at: //github.com/ChongminSong/HighOrderTimeIntegration.
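To make the role of the dissipation parameter concrete, the sketch below (plain NumPy, not the authors' MATLAB/FORTRAN implementation) mixes the lowest-order diagonal Padé approximant of exp(z) (the trapezoidal rule, non-dissipative) with the sub-diagonal one (fully dissipative) using the weight $\rho_\infty$, and checks numerically that the amplification factor tends to $\rho_\infty$ as $\omega\Delta t \to \infty$. The simple convex mix $R = \rho_\infty P_{11} + (1-\rho_\infty) P_{01}$ is an illustrative assumption, not the paper's high-order construction.

```python
import numpy as np

def pade_11(z):
    """Diagonal (1,1) Pade approximant of exp(z): the trapezoidal rule."""
    return (1 + z / 2) / (1 - z / 2)

def pade_01(z):
    """Sub-diagonal (0,1) Pade approximant of exp(z): backward Euler."""
    return 1 / (1 - z)

def mixed_amplification(z, rho_inf):
    """Convex mix of the two approximants; rho_inf = 1 recovers the
    non-dissipative trapezoidal rule, rho_inf = 0 the fully damped scheme."""
    return rho_inf * pade_11(z) + (1 - rho_inf) * pade_01(z)

if __name__ == "__main__":
    rho_inf = 0.5
    # Undamped structural modes have purely imaginary eigenvalues z = i*omega*dt.
    for omega_dt in (0.1, 1.0, 10.0, 1e3, 1e6):
        z = 1j * omega_dt
        print(f"omega*dt = {omega_dt:>8g}  |R| = {abs(mixed_amplification(z, rho_inf)):.6f}")
    # As omega*dt -> infinity, |R| approaches rho_inf (here 0.5), i.e. the
    # spectral radius in the high-frequency limit equals the user-specified value,
    # while for small omega*dt the amplification stays close to 1 (little dissipation).
```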

Related content

Designing and generating new data with targeted properties underpins critical applications such as molecule design, image editing and speech synthesis. Traditional hand-crafted approaches rely heavily on domain expertise and intensive human effort, yet still suffer from insufficient scientific knowledge and low throughput, limiting effective and efficient data generation. Recently, advances in deep learning have produced expressive methods that can learn the underlying representation and properties of data. This capability offers new opportunities for uncovering the mutual relationship between the structural patterns and functional properties of data, and for leveraging that relationship to generate structured data with desired properties. This article provides a systematic review of this promising research area, commonly known as controllable deep data generation. First, the potential challenges are raised and preliminaries are provided. Then controllable deep data generation is formally defined, a taxonomy of the various techniques is proposed, and the evaluation metrics in this domain are summarized. After that, applications of controllable deep data generation are introduced, and existing works are experimentally analyzed and compared. Finally, promising future directions of controllable deep data generation are highlighted and five potential challenges are identified.

We consider stochastic gradient descent and its averaging variant for binary classification problems in a reproducing kernel Hilbert space. In the traditional analysis using the consistency property of loss functions, it is known that the expected classification error converges more slowly than the expected risk, even under a low-noise condition on the conditional label probabilities; consequently, the resulting rate is sublinear. It is therefore important to ask whether much faster convergence of the expected classification error can be achieved. Recent work showed an exponential convergence rate for stochastic gradient descent under a strong low-noise condition, but the theoretical analysis was limited to the squared loss function, which is somewhat inadequate for binary classification tasks. In this paper, we show exponential convergence of the expected classification error in the final phase of stochastic gradient descent for a wide class of differentiable convex loss functions under similar assumptions. For averaged stochastic gradient descent, we show that the same convergence rate holds from the early phase of training. In experiments, we verify our analyses using $L_2$-regularized logistic regression.
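As a concrete reference point for the experimental setting mentioned above, here is a minimal NumPy sketch of plain and averaged SGD for $L_2$-regularized logistic regression on synthetic data. It only illustrates the iterations being analyzed and does not reproduce the paper's rates; the step-size schedule, regularization constant `lam`, and data model are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, lam = 5000, 20, 1e-2

# Synthetic binary classification data with labels in {-1, +1}.
w_star = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = np.sign(X @ w_star + 0.1 * rng.normal(size=n))

def logistic_grad(w, x, yi):
    """Stochastic gradient of the L2-regularized logistic loss at one sample."""
    margin = yi * (x @ w)
    # d/dw log(1 + exp(-margin)) = -yi * x * sigmoid(-margin), written stably.
    return -yi * x * np.exp(-np.logaddexp(0.0, margin)) + lam * w

w = np.zeros(d)
w_avg = np.zeros(d)
for t in range(1, n + 1):
    i = rng.integers(n)
    eta = 1.0 / (lam * t)          # classic 1/(lam*t) schedule for strongly convex objectives
    w -= eta * logistic_grad(w, X[i], y[i])
    w_avg += (w - w_avg) / t       # running (Polyak-Ruppert) average of the iterates

for name, v in [("last iterate", w), ("averaged iterate", w_avg)]:
    err = np.mean(np.sign(X @ v) != y)
    print(f"{name}: training classification error = {err:.4f}")
```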

The paper is devoted to an approach to the problem of efficient parallel computing. The theoretical basis of this approach is the concept of a $Q$-determinant. Any numerical algorithm has a $Q$-determinant, which has a clear structure and is convenient for implementation. The $Q$-determinant consists of $Q$-terms, one for each output data item. Each $Q$-term describes all possible ways to compute one of the output data items from the input data. We also describe a software $Q$-system for studying the parallelism resource of numerical algorithms. This system makes it possible to compute and compare the parallelism resources of numerical algorithms. The application of the $Q$-system is illustrated with numerical algorithms whose $Q$-determinants have different structures. Furthermore, we suggest a method for designing parallel programs for numerical algorithms based on representing the algorithm in the form of a $Q$-determinant. As a result, we obtain a program that uses the parallelism resource of the algorithm completely; such programs are called $Q$-effective. The results of this research can be applied to increase the implementation efficiency of numerical algorithms, methods, and algorithmic problems on parallel computing systems.
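As a purely illustrative, much simplified example of what "using the parallelism resource completely" can mean, the sketch below contrasts the dependency depth of a left-to-right summation with a tree-shaped evaluation of the same expression; the latter's $O(\log n)$ depth is the kind of structural information that a $Q$-term exposes. The code is not part of the authors' $Q$-system.

```python
from math import ceil, log2

def sequential_depth(n):
    """A left-to-right sum ((a1 + a2) + a3) + ... needs n - 1 dependent steps."""
    return n - 1

def tree_depth(n):
    """Pairwise (tree) summation of the same n terms needs only ceil(log2 n) levels;
    operations on the same level are independent and can run in parallel."""
    return ceil(log2(n)) if n > 1 else 0

for n in (4, 16, 1024, 10**6):
    print(f"n = {n:>8}: sequential depth = {sequential_depth(n):>7}, "
          f"parallel (tree) depth = {tree_depth(n):>3}")
```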

We study \textit{rescaled gradient dynamical systems} in a Hilbert space $\mathcal{H}$, where implicit discretization in a finite-dimensional Euclidean space leads to high-order methods for solving monotone equations (MEs). Our framework can be interpreted as a natural generalization of the celebrated dual extrapolation method~\citep{Nesterov-2007-Dual} from first order to high order via appeal to the regularization toolbox of optimization theory~\citep{Nesterov-2021-Implementable, Nesterov-2021-Inexact}. More specifically, we establish the existence and uniqueness of a global solution and analyze the convergence properties of solution trajectories. We also present discrete-time counterparts of our high-order continuous-time methods, and we show that the $p^{th}$-order method achieves an ergodic rate of $O(k^{-(p+1)/2})$ in terms of a restricted merit function and a pointwise rate of $O(k^{-p/2})$ in terms of a residue function. Under regularity conditions, the restarted version of the $p^{th}$-order methods achieves local convergence of order $p \geq 2$. Notably, our methods are \textit{optimal}, since they match the lower bound established for solving monotone equation problems under a standard linear span assumption~\citep{Lin-2022-Perseus}.
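For readers unfamiliar with the first-order setting being generalized, below is a minimal NumPy sketch of a first-order prediction-correction method for a monotone equation $F(x) = 0$. It implements the classical extragradient method rather than dual extrapolation itself (the latter additionally keeps a fixed prox center and an accumulated dual vector), and the toy operator, step size, and iteration count are illustrative assumptions; the paper's high-order and restarted schemes are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 10

# Toy monotone equation F(x) = A x - b = 0, where A is a skew-symmetric matrix
# plus a small multiple of the identity (hence F is strongly monotone and Lipschitz).
S = rng.normal(size=(d, d))
A = (S - S.T) + 0.1 * np.eye(d)
b = rng.normal(size=d)
x_star = np.linalg.solve(A, b)      # exact solution, for monitoring only

def F(x):
    return A @ x - b

L = np.linalg.norm(A, 2)            # Lipschitz constant of F
eta = 0.5 / L                       # conservative step size
x = np.zeros(d)
for k in range(5000):
    x_half = x - eta * F(x)         # prediction step
    x = x - eta * F(x_half)         # correction step using the predicted point
    if k % 1000 == 0:
        print(f"iter {k:4d}: ||x - x*|| = {np.linalg.norm(x - x_star):.3e}")
print(f"final residual ||F(x)|| = {np.linalg.norm(F(x)):.3e}")
```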

Image retrieval is commonly evaluated with Average Precision (AP) or Recall@k. Yet those metrics are limited to binary labels and do not take into account the severity of errors. This paper introduces a new hierarchical AP training method for pertinent image retrieval (HAPPIER). HAPPIER is based on a new H-AP metric, which leverages a concept hierarchy to refine AP by integrating the importance of errors and better evaluating rankings. To train deep models with H-AP, we carefully study the problem's structure and design a smooth lower bound surrogate combined with a clustering loss that ensures consistent ordering. Extensive experiments on 6 datasets show that HAPPIER significantly outperforms state-of-the-art methods for hierarchical retrieval, while being on par with the latest approaches when evaluating fine-grained ranking performance. Finally, we show that HAPPIER leads to better organization of the embedding space and prevents the most severe failure cases of non-hierarchical methods. Our code is publicly available at: //github.com/elias-ramzi/HAPPIER.
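To make the idea of grading errors by a hierarchy concrete, the helper below computes a hierarchy-weighted AP for a toy ranking, where each retrieved item's relevance is a value in [0, 1] derived from how close it sits to the query in a label tree (1 for the same fine class, an intermediate value for a sibling class, 0 for unrelated). This is one plausible instantiation written for illustration; the exact H-AP definition and normalization should be taken from the paper and repository.

```python
def hierarchical_ap(relevances):
    """Graded-relevance average precision for a ranked list.

    relevances[k] is the hierarchical relevance of the item ranked (k+1)-th:
    1.0 = same fine-grained class as the query, intermediate values for
    coarser matches, 0.0 = irrelevant.  Illustrative definition only.
    """
    total = sum(relevances)
    if total == 0:
        return 0.0
    score, cum_rel = 0.0, 0.0
    for k, rel in enumerate(relevances, start=1):
        cum_rel += rel
        if rel > 0:
            score += rel * (cum_rel / k)   # graded analogue of precision@k
    return score / total

# Query of fine class "husky"; hypothetical hierarchy: husky < dog < animal.
# Ranking A makes a mild mistake (another dog at rank 2),
# ranking B makes a severe one (a car at rank 2).
ranking_a = [1.0, 0.5, 1.0, 0.0]   # husky, labrador (sibling, rel 0.5), husky, car
ranking_b = [1.0, 0.0, 1.0, 0.5]   # husky, car, husky, labrador
print(f"H-AP, mild error first  : {hierarchical_ap(ranking_a):.3f}")
print(f"H-AP, severe error first: {hierarchical_ap(ranking_b):.3f}")
# The ranking with the milder (in-hierarchy) error scores higher, unlike binary AP,
# which would treat both rankings identically.
```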

In the pursuit of explaining implicit regularization in deep learning, prominent focus has been given to matrix and tensor factorizations, which correspond to simplified neural networks. It was shown that these models exhibit an implicit tendency towards low matrix and tensor ranks, respectively. Drawing closer to practical deep learning, the current paper theoretically analyzes the implicit regularization in hierarchical tensor factorization, a model equivalent to certain deep convolutional neural networks. Through a dynamical systems lens, we overcome challenges associated with hierarchy and establish implicit regularization towards low hierarchical tensor rank. This translates to an implicit regularization towards locality for the associated convolutional networks. Inspired by our theory, we design explicit regularization that discourages locality, and demonstrate its ability to improve the performance of modern convolutional networks on non-local tasks, in defiance of the conventional wisdom that architectural changes are needed. Our work highlights the potential of enhancing neural networks via theoretical analysis of their implicit regularization.
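The low-rank tendency mentioned for the (non-hierarchical) matrix-factorization precursor can be observed in a few lines of NumPy: fitting an unconstrained two-factor parameterization to a subset of entries of a low-rank matrix by gradient descent from small initialization typically yields a solution whose singular values beyond the true rank stay near zero. The depth, step size, initialization scale, and observation ratio below are arbitrary illustrative choices, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(2)
n, true_rank = 20, 2

# Ground-truth low-rank matrix and a random mask of observed entries.
M = rng.normal(size=(n, true_rank)) @ rng.normal(size=(true_rank, n))
mask = rng.random((n, n)) < 0.5

# Unconstrained two-factor parameterization W = U @ V with small initialization.
scale, lr = 1e-2, 5e-3
U = scale * rng.normal(size=(n, n))
V = scale * rng.normal(size=(n, n))

for _ in range(5000):
    R = mask * (U @ V - M)                       # residual on observed entries only
    U, V = U - lr * R @ V.T, V - lr * U.T @ R    # simultaneous gradient step on both factors

obs_err = np.linalg.norm(mask * (U @ V - M)) / np.linalg.norm(mask * M)
svals = np.linalg.svd(U @ V, compute_uv=False)
print(f"relative fit error on observed entries: {obs_err:.3e}")
print("leading singular values of the learned matrix:", np.round(svals[:5], 3))
# Typically only the first `true_rank` singular values are significantly non-zero,
# reflecting the implicit bias of this factorized parameterization towards low rank.
```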

In this work we derive a WKB expansion for the electromagnetic field solution of the time-harmonic Maxwell equations posed in a domain with a thin layer. As a by-product of this expansion, we obtain new second-order asymptotic models with generalized impedance transmission conditions that turn out to depend on the mean curvature of the boundary of the subdomain surrounded by the thin layer. We show that these models can be easily integrated into finite element methods by developing mixed variational formulations. One application of this work concerns the computation of the electromagnetic field in biological cells.

It remains challenging to deploy existing risk-averse approaches in real-world applications. The reasons are manifold, including the lack of global optimality guarantees and the need to learn from long-term consecutive trajectories. Long-term consecutive trajectories are prone to visiting hazardous states, which is a major concern in the risk-averse setting. This paper proposes Short-Term VOlatility-controlled Policy Search (STOPS), a novel algorithm that solves risk-averse problems by learning from short-term trajectories instead of long-term ones. Short-term trajectories can be generated more flexibly and avoid the danger of visiting hazardous states. Using an actor-critic scheme with an overparameterized two-layer neural network, our algorithm finds a globally optimal policy at a sublinear rate with proximal policy optimization and natural policy gradient, with effectiveness comparable to the state-of-the-art convergence rate of risk-neutral policy-search methods. The algorithm is evaluated on challenging MuJoCo robot simulation tasks under the mean-variance evaluation metric. Both theoretical analysis and experimental results demonstrate the state-of-the-art performance of STOPS among existing risk-averse policy search methods.
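For context on the evaluation metric mentioned at the end, a common way to score a policy under the mean-variance criterion is J(pi) = E[R] - lambda * Var[R] over episodic returns; the tiny helper below estimates this from sampled returns. The coefficient lambda, the sample returns, and the use of plain sample statistics are illustrative assumptions; STOPS' actor-critic updates are not reproduced here.

```python
import numpy as np

def mean_variance_score(returns, lam=1.0):
    """Estimate the mean-variance objective E[R] - lam * Var[R]
    from a batch of episodic returns (illustrative helper only)."""
    returns = np.asarray(returns, dtype=float)
    return returns.mean() - lam * returns.var()

# Two hypothetical policies with the same average return but different volatility.
safe_returns  = [9.5, 10.0, 10.5, 10.0, 9.8, 10.2]
risky_returns = [2.0, 18.0, 5.0, 16.0, 12.0, 7.0]
for name, r in [("low-volatility policy", safe_returns),
                ("high-volatility policy", risky_returns)]:
    print(f"{name}: mean = {np.mean(r):.2f}, "
          f"mean-variance score = {mean_variance_score(r, lam=0.5):.2f}")
# Equal means, but the volatile policy is heavily penalized by the variance term,
# which is what a risk-averse objective is designed to capture.
```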

Graph Neural Networks (GNNs) draw their strength from explicitly modeling the topological information of structured data. However, existing GNNs have limited capability in capturing hierarchical graph representations, which play an important role in graph classification. In this paper, we propose the hierarchical graph capsule network (HGCN), which can jointly learn node embeddings and extract graph hierarchies. Specifically, disentangled graph capsules are established by identifying heterogeneous factors underlying each node, such that their instantiation parameters represent different properties of the same entity. To learn the hierarchical representation, HGCN characterizes the part-whole relationship between lower-level capsules (parts) and higher-level capsules (wholes) by explicitly considering the structural information among the parts. Experimental studies demonstrate the effectiveness of HGCN and the contribution of each component.
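Capsule networks decide how lower-level capsules contribute to higher-level ones via routing-by-agreement; below is a minimal NumPy sketch of the generic dynamic routing procedure of Sabour et al., shown only to make the part-whole aggregation concrete. HGCN's graph-specific routing, which additionally exploits the structure among the parts, is not reproduced here, and the toy shapes are arbitrary.

```python
import numpy as np

def squash(s, axis=-1, eps=1e-9):
    """Squash a vector so its length lies in [0, 1) while preserving its direction."""
    sq_norm = np.sum(s ** 2, axis=axis, keepdims=True)
    return (sq_norm / (1.0 + sq_norm)) * s / np.sqrt(sq_norm + eps)

def dynamic_routing(u_hat, num_iters=3):
    """Generic routing-by-agreement.

    u_hat: votes of shape (num_lower, num_upper, dim) -- each lower-level capsule i
    proposes a prediction u_hat[i, j] for every upper-level capsule j.
    Returns the upper-level capsule vectors of shape (num_upper, dim).
    """
    num_lower, num_upper, _ = u_hat.shape
    b = np.zeros((num_lower, num_upper))                      # routing logits
    for _ in range(num_iters):
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)  # coupling coefficients
        s = np.einsum("ij,ijd->jd", c, u_hat)                 # weighted sum of votes
        v = squash(s)                                         # upper-level capsule outputs
        b = b + np.einsum("ijd,jd->ij", u_hat, v)             # boost logits where votes agree
    return v

# Toy example: 8 lower-level capsules voting for 3 upper-level capsules of dimension 4.
rng = np.random.default_rng(3)
u_hat = rng.normal(size=(8, 3, 4))
v = dynamic_routing(u_hat)
print("upper-level capsule lengths:", np.round(np.linalg.norm(v, axis=-1), 3))
```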
