
This paper is concerned with the time-dependent Maxwell's equations for a plane interface between a negative material described by the Drude model and the vacuum, which fill, respectively, two complementary half-spaces. In a first paper, we have constructed a generalized Fourier transform which diagonalizes the Hamiltonian that represents the propagation of transverse electric waves. In this second paper, we use this transform to prove the limiting absorption and limiting amplitude principles, which concern, respectively, the behavior of the resolvent near the continuous spectrum and the long-time response of the medium to a time-harmonic source of prescribed frequency. This paper also underlines the existence of an interface resonance which occurs when there exists a particular frequency characterized by a ratio of permittivities and permeabilities equal to $-1$ across the interface. At this frequency, the response of the system to a harmonic forcing term blows up linearly in time. Such a resonance is unusual for wave problems in unbounded domains and corresponds to a non-zero embedded eigenvalue of infinite multiplicity of the underlying operator. This is the time counterpart of the ill-posedness of the corresponding harmonic problem.
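To make the critical frequency concrete, here is a hedged illustration using the standard lossless Drude permittivity (the notation may differ from the paper's):
\[
\varepsilon(\omega) = \varepsilon_0\left(1 - \frac{\omega_e^2}{\omega^2}\right)
\quad\Longrightarrow\quad
\frac{\varepsilon(\omega)}{\varepsilon_0} = -1
\;\Longleftrightarrow\;
\omega = \frac{\omega_e}{\sqrt{2}},
\]
and similarly for the permeability $\mu(\omega)$ with its own plasma frequency. The resonance described above occurs at the frequency where these ratios across the interface reach $-1$.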

Related content


Casimir-preserving integrators for stochastic Lie-Poisson equations with Stratonovich noise are developed by extending Runge-Kutta Munthe-Kaas methods. The underlying Lie-Poisson structure is preserved along stochastic trajectories. A related stochastic differential equation on the Lie algebra is derived; its solution updates the evolution of the Lie-Poisson dynamics by means of the exponential map. The constructed numerical method conserves Casimir invariants exactly, which is important for long-time integration. This is illustrated numerically for the case of the stochastic heavy top.
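As a minimal sketch of the exponential-map update (not the paper's stochastic RK-MK scheme), the snippet below evolves a free rigid body on $\mathfrak{so}(3)^*$: each step draws a stochastic increment in the Lie algebra, maps it to a rotation via Rodrigues' formula, and updates the momentum by that rotation, so the Casimir $|\pi|^2$ is conserved to round-off. The inertia tensor, noise amplitude, and step size are illustrative assumptions.

```python
import numpy as np

def hat(v):
    """so(3) hat map: R^3 -> 3x3 skew-symmetric matrix."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def expm_so3(v):
    """Rodrigues' formula for the matrix exponential of hat(v)."""
    theta = np.linalg.norm(v)
    if theta < 1e-12:
        return np.eye(3) + hat(v)
    K = hat(v / theta)
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

# Illustrative parameters (assumptions, not taken from the paper)
I_inv = np.diag([1.0, 1.0 / 2.0, 1.0 / 3.0])   # inverse inertia tensor
sigma = 0.2                                     # noise amplitude
dt, n_steps = 1e-3, 10_000
rng = np.random.default_rng(0)

pi = np.array([1.0, 0.5, -0.3])                 # initial angular momentum
casimir0 = pi @ pi                              # Casimir |pi|^2

for _ in range(n_steps):
    # Lie-algebra increment: drift (body angular velocity) plus noise
    xi = dt * (I_inv @ pi) + sigma * np.sqrt(dt) * rng.standard_normal(3)
    # Exponential-map update: a rotation of pi, so |pi|^2 is conserved exactly
    pi = expm_so3(-xi) @ pi

print("Casimir drift:", abs(pi @ pi - casimir0))   # round-off only
```

Because every update is a rotation, the Casimir is preserved regardless of the step size or the noise realization, which is the property emphasized above.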

In fractured poroelastic media under high differential stress, the shearing of fractures and faults and the corresponding propagation of wing cracks can be induced by fluid injection. Focusing on low-pressure stimulation with fluid pressures below the minimum principal stress but above the threshold required to overcome the fracture's frictional resistance to slip, this paper presents a mathematical model and a numerical solution approach for coupling fluid flow with fracture shearing and propagation. Numerical challenges are related to the strong coupling between hydraulic and mechanical processes, the material discontinuity the fractures represent in the medium, the wide range of spatial scales involved, and the strong effect that fracture deformation and propagation have on the physical processes. The solution approach is based on a multiscale strategy. In the macroscale model, flow in and poroelastic deformation of the matrix are coupled with the flow in the fractures and fracture contact mechanics, allowing fractures to frictionally slide. Fracture propagation is handled at the microscale, where the maximum tangential stress criterion triggers the propagation of fractures, and Paris' law governs the fracture growth processes. Simulations show how the shearing of a fracture due to fluid injection is linked to fracture propagation, including cases with hydraulically and mechanically interacting fractures.
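As a small, hedged illustration of the Paris-law ingredient only (the coupled multiscale scheme itself is far richer), the per-cycle growth increment can be computed as $da/dN = C\,(\Delta K)^m$; the coefficients, units, and stress-intensity range below are placeholder values, not taken from the paper.

```python
# Minimal sketch of a Paris-law crack-growth update (illustrative values only).
C, m = 1e-11, 3.0        # Paris-law coefficients; da/dN in mm/cycle, dK in MPa*sqrt(m)
delta_K = 2.5            # stress-intensity factor range [MPa*sqrt(m)] (assumed)
dN = 1000                # number of load cycles in this increment

da = C * delta_K ** m * dN   # crack-length increment predicted by Paris' law
print(f"crack growth over {dN} cycles: {da:.3e} mm")
```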

The time-dependent quadratic minimization (TDQM) problem appears in many applications and research projects. It has been reported that zeroing neural network (ZNN) models can effectively solve the TDQM problem. However, the convergence and robustness of existing ZNN models are limited by the lack of a joint mechanism combining an adaptive coefficient with an integration-enhanced term. Consequently, this paper proposes the residual-based adaptive coefficient zeroing neural network (RACZNN) model with an integration term for solving the TDQM problem. The adaptive coefficient is introduced to improve convergence, and the integration term is embedded so that the RACZNN model maintains reliable robustness when perturbed by various measurement noises. Compared with state-of-the-art models, the proposed RACZNN model achieves faster convergence and more reliable robustness. Theorems are then provided to prove the convergence of the RACZNN model. Finally, quantitative numerical experiments are designed and performed to verify the performance of the proposed RACZNN model.
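The sketch below shows a generic ZNN iteration for an unconstrained TDQM instance, with a simple residual-based gain and integral term added purely for illustration; it is not the paper's RACZNN design, and the problem data, gains, and step size are assumptions.

```python
import numpy as np

# Generic ZNN for min_x 0.5*x'A(t)x + b(t)'x: drive the residual
# e(t) = A(t)x + b(t) to zero via d/dt e = -lam*e - kappa*int(e).

def A(t):  # time-varying positive definite Hessian (assumed example)
    return np.array([[3.0 + np.sin(t), 0.5],
                     [0.5, 3.0 + np.cos(t)]])

def b(t):  # time-varying linear term (assumed example)
    return np.array([np.cos(2 * t), np.sin(t)])

def A_dot(t, h=1e-6):  # numerical time derivatives, sufficient for the sketch
    return (A(t + h) - A(t - h)) / (2 * h)

def b_dot(t, h=1e-6):
    return (b(t + h) - b(t - h)) / (2 * h)

dt, T = 1e-3, 10.0
x = np.zeros(2)
e_int = np.zeros(2)                          # running integral of the residual
lam0, kappa = 5.0, 2.0                       # base and integral gains (assumed)

for k in range(int(T / dt)):
    t = k * dt
    e = A(t) @ x + b(t)                      # residual of the optimality condition
    lam = lam0 * (1.0 + np.linalg.norm(e))   # residual-based gain (illustrative)
    e_int += e * dt
    # ZNN design equation: Adot*x + A*xdot + bdot = -lam*e - kappa*e_int
    rhs = -A_dot(t) @ x - b_dot(t) - lam * e - kappa * e_int
    x = x + dt * np.linalg.solve(A(t), rhs)

print("final residual norm:", np.linalg.norm(A(T) @ x + b(T)))
```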

We introduce a divergence measure between data distributions based on operators in reproducing kernel Hilbert spaces defined by infinitely divisible kernels. The empirical estimator of the divergence is computed using the eigenvalues of positive definite matrices obtained by evaluating the kernel over pairs of samples. The new measure shares properties with the Jensen-Shannon divergence. Convergence of the proposed estimators follows from concentration results based on the difference between the ordered spectra of the Gram matrices and of the integral operators associated with the population quantities. The proposed measure of divergence avoids estimating the probability distribution underlying the data. Numerical experiments on comparing distributions and on sampling unbalanced data for classification show that the proposed divergence can achieve state-of-the-art results.
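One plausible form of such an estimator (an assumption for illustration, not necessarily the paper's exact construction) computes von Neumann entropies from the eigenvalues of trace-normalized Gram matrices of each sample set and of the pooled samples, yielding a Jensen-Shannon-like quantity; the Gaussian kernel and bandwidth below are assumed.

```python
import numpy as np

def gram(X, Y, gamma=1.0):
    """Gaussian (infinitely divisible) kernel matrix between sample sets."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def von_neumann_entropy(K):
    """Entropy computed from eigenvalues of the trace-normalized Gram matrix."""
    w = np.linalg.eigvalsh(K / np.trace(K))
    w = w[w > 1e-12]
    return float(-(w * np.log(w)).sum())

def kernel_js_divergence(X, Y, gamma=1.0):
    """JS-like divergence: pooled-sample entropy minus average of the parts."""
    Z = np.vstack([X, Y])                       # assumes equal sample sizes
    H_mix = von_neumann_entropy(gram(Z, Z, gamma))
    H_x = von_neumann_entropy(gram(X, X, gamma))
    H_y = von_neumann_entropy(gram(Y, Y, gamma))
    return H_mix - 0.5 * (H_x + H_y)

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(200, 2))
Y = rng.normal(1.0, 1.0, size=(200, 2))         # shifted distribution
print(kernel_js_divergence(X, Y))               # larger than for identical samples
```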

Neural networks have grown tremendously in recent years and are used to solve numerous problems. Various types of neural networks have been introduced to deal with different types of problems. However, the main goal of any neural network is to transform non-linearly separable input data into more linearly separable abstract features using a hierarchy of layers. These layers are combinations of linear and nonlinear functions. The most popular and common non-linearity layers are activation functions (AFs), such as Logistic Sigmoid, Tanh, ReLU, ELU, Swish, and Mish. In this paper, a comprehensive overview and survey of AFs in neural networks for deep learning is presented. Different classes of AFs, such as Logistic Sigmoid and Tanh based, ReLU based, ELU based, and Learning based, are covered. Several characteristics of AFs, such as output range, monotonicity, and smoothness, are also pointed out. A performance comparison is conducted among 18 state-of-the-art AFs with different networks on different types of data. Insights into AFs are presented to help researchers conduct further research and practitioners select among the different choices. The code used for the experimental comparison is released at: \url{//github.com/shivram1987/ActivationFunctions}.
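For reference, here are minimal NumPy definitions of several of the AFs named above, using their standard formulas (parameter defaults are common choices, not prescriptions from the survey):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def tanh(x):
    return np.tanh(x)

def relu(x):
    return np.maximum(0.0, x)

def elu(x, alpha=1.0):
    return np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

def swish(x, beta=1.0):              # x * sigmoid(beta * x)
    return x * sigmoid(beta * x)

def mish(x):                         # x * tanh(softplus(x))
    return x * np.tanh(np.log1p(np.exp(x)))

x = np.linspace(-3, 3, 7)
for f in (sigmoid, tanh, relu, elu, swish, mish):
    print(f.__name__, np.round(f(x), 3))
```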

This book develops an effective theory approach to understanding deep neural networks of practical relevance. Beginning from a first-principles component-level picture of networks, we explain how to determine an accurate description of the output of trained networks by solving layer-to-layer iteration equations and nonlinear learning dynamics. A main result is that the predictions of networks are described by nearly-Gaussian distributions, with the depth-to-width aspect ratio of the network controlling the deviations from the infinite-width Gaussian description. We explain how these effectively-deep networks learn nontrivial representations from training and more broadly analyze the mechanism of representation learning for nonlinear models. From a nearly-kernel-methods perspective, we find that the dependence of such models' predictions on the underlying learning algorithm can be expressed in a simple and universal way. To obtain these results, we develop the notion of representation group flow (RG flow) to characterize the propagation of signals through the network. By tuning networks to criticality, we give a practical solution to the exploding and vanishing gradient problem. We further explain how RG flow leads to near-universal behavior and lets us categorize networks built from different activation functions into universality classes. Altogether, we show that the depth-to-width ratio governs the effective model complexity of the ensemble of trained networks. By using information-theoretic techniques, we estimate the optimal aspect ratio at which we expect the network to be practically most useful and show how residual connections can be used to push this scale to arbitrary depths. With these tools, we can learn in detail about the inductive bias of architectures, hyperparameters, and optimizers.
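As one simplified instance of the layer-to-layer iteration equations mentioned above (a standard wide-network recursion quoted for illustration, not a summary of the book's derivations), the preactivation kernel of a deep MLP evolves as
\[
K^{(\ell+1)} = C_b + C_W\,\mathbb{E}_{z\sim\mathcal{N}(0,\,K^{(\ell)})}\bigl[\sigma(z)^2\bigr],
\]
and tuning to criticality means choosing the initialization hyperparameters $(C_b, C_W)$ so that this map neither shrinks nor amplifies signals with depth; for ReLU, $\mathbb{E}[\sigma(z)^2] = K^{(\ell)}/2$, so criticality corresponds to $C_b = 0$ and $C_W = 2$.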

The focus of Part I of this monograph was on the fundamental properties, graph topologies, and spectral representations of graphs. Part II builds on these concepts to address the algorithmic and practical issues centered around data/signal processing on graphs; that is, the focus is on the analysis and estimation of both deterministic and random data on graphs. The fundamental ideas related to graph signals are introduced through a simple and intuitive, yet illustrative and sufficiently general, case study of multisensor temperature field estimation. The concept of systems on graphs is defined using graph signal shift operators, which generalize the corresponding principles from traditional learning systems. At the core of the spectral domain representation of graph signals and systems is the Graph Discrete Fourier Transform (GDFT). The spectral domain representations are then used as the basis to introduce graph signal filtering concepts and address their design, including Chebyshev polynomial approximation series. Ideas related to the sampling of graph signals are presented and further linked with compressive sensing. Localized graph signal analysis in the joint vertex-spectral domain is referred to as vertex-frequency analysis, since it can be considered an extension of classical time-frequency analysis to the graph domain of a signal. Important topics related to the local graph Fourier transform (LGFT) are covered, together with its various forms, including graph spectral and vertex-domain windows, and the inversion conditions and relations. A link between the LGFT with a spectrally varying window and the spectral graph wavelet transform (SGWT) is also established. Realizations of the LGFT and SGWT using polynomial (Chebyshev) approximations of the spectral functions are further considered. Finally, energy versions of the vertex-frequency representations are introduced.
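The sketch below illustrates two of the ingredients discussed above: the GDFT as a projection onto Laplacian eigenvectors and a Chebyshev polynomial approximation of a spectral filter. The small example graph, the signal, and the filter coefficients are assumptions chosen for illustration.

```python
import numpy as np

# Small undirected example graph (assumed): adjacency matrix and Laplacian
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)
L = np.diag(A.sum(1)) - A

# Graph Discrete Fourier Transform: project a signal onto Laplacian eigenvectors
lam, U = np.linalg.eigh(L)
x = np.array([1.0, -0.5, 2.0, 0.0])      # graph signal (assumed)
x_hat = U.T @ x                          # GDFT coefficients
x_rec = U @ x_hat                        # inverse GDFT recovers x

# Chebyshev polynomial approximation of a spectral filter h(lambda)
def cheb_filter(L, x, coeffs, lam_max):
    """y = sum_k c_k T_k(L_tilde) x with L_tilde = 2L/lam_max - I."""
    L_t = 2.0 * L / lam_max - np.eye(L.shape[0])
    T_prev, T_curr = x, L_t @ x
    y = coeffs[0] * T_prev + coeffs[1] * T_curr
    for c in coeffs[2:]:
        T_prev, T_curr = T_curr, 2.0 * L_t @ T_curr - T_prev
        y = y + c * T_curr
    return y

coeffs = [0.6, -0.3, 0.1]                # illustrative Chebyshev coefficients
y = cheb_filter(L, x, coeffs, lam_max=lam[-1])
print(np.allclose(x, x_rec), y)
```

The Chebyshev recursion only requires repeated multiplication by the (sparse) Laplacian, which is what makes such filter realizations attractive on large graphs.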

This paper addresses the problem of formally verifying desirable properties of neural networks, i.e., obtaining provable guarantees that neural networks satisfy specifications relating their inputs and outputs (robustness to bounded-norm adversarial perturbations, for example). Most previous work on this topic was limited in its applicability by the size of the network, the network architecture, and the complexity of the properties to be verified. In contrast, our framework applies to a general class of activation functions and specifications on neural network inputs and outputs. We formulate verification as an optimization problem (seeking the largest violation of the specification) and solve a Lagrangian relaxation of this problem to obtain an upper bound on the worst-case violation of the specification being verified. Our approach is anytime, i.e., it can be stopped at any time and a valid bound on the maximum violation can be obtained. We develop specialized verification algorithms with provable tightness guarantees under special assumptions and demonstrate the practical significance of our general verification approach on a variety of verification tasks.
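To spell out the weak-duality step in miniature (with generic notation, not necessarily the paper's), write the network as layer constraints $x_{k+1} = h_k(x_k)$ and verification as maximizing a violation function $c$ over the input set $\mathcal{S}$. Then, for any multipliers $\lambda_0, \dots, \lambda_{K-1}$,
\[
\max_{\substack{x_0 \in \mathcal{S} \\ x_{k+1} = h_k(x_k)}} c(x_K)
\;\le\;
\max_{x_0 \in \mathcal{S},\, x_1, \dots, x_K}\; c(x_K) + \sum_{k=0}^{K-1} \lambda_k^\top \bigl(x_{k+1} - h_k(x_k)\bigr),
\]
since the penalty vanishes on feasible points while the right-hand maximization runs over a superset. The right-hand side decouples across layers, so it can be evaluated efficiently and then minimized over $\lambda$, giving a certified upper bound on the worst-case violation at any point during the optimization.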

Methods that align distributions by minimizing an adversarial distance between them have recently achieved impressive results. However, these approaches are difficult to optimize with gradient descent and they often do not converge well without careful hyperparameter tuning and proper initialization. We investigate whether turning the adversarial min-max problem into an optimization problem by replacing the maximization part with its dual improves the quality of the resulting alignment and explore its connections to Maximum Mean Discrepancy. Our empirical results suggest that using the dual formulation for the restricted family of linear discriminators results in a more stable convergence to a desirable solution when compared with the performance of a primal min-max GAN-like objective and an MMD objective under the same restrictions. We test our hypothesis on the problem of aligning two synthetic point clouds on a plane and on a real-image domain adaptation problem on digits. In both cases, the dual formulation yields an iterative procedure that gives more stable and monotonic improvement over time.
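For context on the MMD objective referred to above, the following is a minimal estimate of the squared Maximum Mean Discrepancy between two point clouds with a Gaussian kernel; the kernel choice, bandwidth, and synthetic data are assumptions, and this is the plain biased (V-statistic) estimator rather than any particular variant used in the paper.

```python
import numpy as np

def gaussian_kernel(X, Y, bandwidth=1.0):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * bandwidth ** 2))

def mmd2(X, Y, bandwidth=1.0):
    """Biased (V-statistic) estimate of squared Maximum Mean Discrepancy."""
    k_xx = gaussian_kernel(X, X, bandwidth).mean()
    k_yy = gaussian_kernel(Y, Y, bandwidth).mean()
    k_xy = gaussian_kernel(X, Y, bandwidth).mean()
    return k_xx + k_yy - 2.0 * k_xy

rng = np.random.default_rng(0)
source = rng.normal(0.0, 1.0, size=(300, 2))          # point cloud to align
target = rng.normal([2.0, 0.0], 1.0, size=(300, 2))   # target point cloud
print(mmd2(source, target), mmd2(source, source))     # large vs. near zero
```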

In this paper, we study the optimal convergence rate for distributed convex optimization problems in networks. We model the communication restrictions imposed by the network as a set of affine constraints and provide optimal complexity bounds for four different setups, namely when the function $F(\mathbf{x}) \triangleq \sum_{i=1}^{m} f_i(\mathbf{x})$ is (i) strongly convex and smooth, (ii) strongly convex, (iii) smooth, or (iv) merely convex. Our results show that Nesterov's accelerated gradient descent on the dual problem can be executed in a distributed manner and attains the same optimal rates as in the centralized version of the problem (up to constant or logarithmic factors), with an additional cost related to the spectral gap of the interaction matrix. Finally, we discuss some extensions of the proposed setup, such as proximal-friendly functions, time-varying graphs, and improvement of the condition numbers.
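For concreteness (using generic notation that may differ from the paper's), the consensus reformulation on which such dual methods operate gives each node its own copy of the variable and couples the copies through an affine constraint:
\[
\min_{x_1, \dots, x_m} \; \sum_{i=1}^{m} f_i(x_i)
\qquad \text{s.t.} \qquad x_1 = x_2 = \cdots = x_m,
\]
where the consensus constraint is typically encoded as $W^{1/2} X = 0$ for an interaction (gossip) matrix $W$ whose null space is spanned by consensus vectors. Dualizing this affine constraint yields a problem on which Nesterov's accelerated gradient can be run using only local communications, and the resulting rates depend on the spectral gap of $W$, as stated above.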
