
Boussinesq-type equations have been widely studied to model surface water waves. In this paper, we consider the abcd Boussinesq system, a family of Boussinesq-type equations that includes many well-known models such as the classical Boussinesq system, the BBM-BBM system, and the Bona-Smith system. We propose local discontinuous Galerkin (LDG) methods, with carefully chosen numerical fluxes, to numerically solve this abcd Boussinesq system. The main focus of this paper is to rigorously establish a priori error estimates of the proposed LDG methods for a wide range of the parameters a, b, c, d. Numerical experiments are presented to test the convergence rates and to demonstrate that the proposed methods can accurately simulate the head-on collision of traveling waves and finite-time blow-up behavior.
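For reference, the abcd system referred to above (introduced by Bona, Chen, and Saut) is usually written, up to normalization, as the coupled pair

\[
\begin{aligned}
\eta_t + u_x + (\eta u)_x + a\,u_{xxx} - b\,\eta_{xxt} &= 0,\\
u_t + \eta_x + u u_x + c\,\eta_{xxx} - d\,u_{xxt} &= 0,
\end{aligned}
\]

where $\eta$ is the surface elevation, $u$ is the horizontal velocity, and the modeling parameters satisfy $a+b+c+d=\tfrac{1}{3}$. Particular choices recover the named models, e.g., $a=b=c=0$, $d=\tfrac{1}{3}$ gives the classical Boussinesq system and $a=c=0$, $b=d=\tfrac{1}{6}$ gives the BBM-BBM system. (This exact form is not given in the abstract; it is the standard one in the literature.)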

Related Content

Multiple-antenna arrays play a key role in wireless networks, not only for communications but also for localization and sensing. The use of large antenna arrays pushes towards a propagation regime in which the wavefront is no longer planar but spherical. This makes it possible to infer the position and orientation of an arbitrary source from the received signal without using multiple anchor nodes. To understand the fundamental limits of large antenna arrays for localization, this paper fuses wave propagation theory with estimation theory, and computes the Cram{\'e}r-Rao Bound (CRB) for the estimation of the three Cartesian coordinates of the source on the basis of the electromagnetic vector field, observed over a rectangular surface area. To simplify the analysis, we assume that the source is a dipole whose center is located on the line perpendicular to the surface center, with an a priori known orientation. Numerical and asymptotic results are given to quantify the CRBs and to gain insight into the effect of various system parameters on the ultimate estimation accuracy. It turns out that surfaces of practical size may guarantee centimeter-level accuracy in the mmWave bands.
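The CRB computation itself follows a generic recipe: form the Fisher information matrix of the observation model and invert it. For a deterministic signal in white Gaussian noise, $J_{ij} = \sigma^{-2}\sum_k \partial_{\theta_i} s_k\,\partial_{\theta_j} s_k$. The numpy sketch below illustrates this for a hypothetical scalar spherical-wave model observed on a planar grid; the paper's analysis uses the full electromagnetic vector field of a dipole, so the model, grid size, and noise level here are purely illustrative assumptions.

```python
import numpy as np

def fisher_information(signal_model, theta, sigma2, eps=1e-6):
    """FIM for a deterministic signal in white Gaussian noise:
    J_ij = (1/sigma^2) * sum_k ds_k/dtheta_i * ds_k/dtheta_j,
    with derivatives approximated by central differences."""
    theta = np.asarray(theta, dtype=float)
    grads = []
    for i in range(theta.size):
        tp, tm = theta.copy(), theta.copy()
        tp[i] += eps; tm[i] -= eps
        grads.append((signal_model(tp) - signal_model(tm)) / (2 * eps))
    G = np.stack(grads)                    # (n_params, n_obs)
    return (G @ G.T) / sigma2

# Hypothetical toy model: spherical-wave observation on a planar grid.
wavelength = 0.005                          # 60 GHz mmWave, metres
k = 2 * np.pi / wavelength
xs, ys = np.meshgrid(np.linspace(-0.1, 0.1, 20), np.linspace(-0.1, 0.1, 20))
obs = np.stack([xs.ravel(), ys.ravel(), np.zeros(xs.size)], axis=1)

def signal_model(src):
    r = np.linalg.norm(obs - src, axis=1)  # spherical wavefront
    return np.cos(k * r) / r               # real part of e^{jkr}/r

J = fisher_information(signal_model, theta=[0.0, 0.0, 1.0], sigma2=1e-4)
crb = np.diag(np.linalg.inv(J))            # CRB on (x, y, z) of the source
print(np.sqrt(crb))                        # standard-deviation lower bounds
```

The depth coordinate is identifiable here only because the wavefront is spherical: with a plane wave, the distance-dependent curvature that carries the $z$ information would vanish.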

In this work, we present a new high-order discontinuous Galerkin time integration scheme for second-order (in time) differential systems that typically arise from the space discretization of the elastodynamics equation. By rewriting the original equation as a system of first-order differential equations, we introduce the method and show that the resulting discrete formulation is well-posed, stable, and attains a super-optimal rate of convergence with respect to the discretization parameters, namely the time step and the polynomial approximation degree. A set of two- and three-dimensional numerical experiments confirms the theoretical bounds. Finally, the method is applied to realistic geophysical problems.
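Concretely, the first-order rewriting mentioned above is the standard velocity substitution: assuming the semi-discrete system has the usual form $M\ddot{u} + Ku = f$ (damping omitted for brevity), one sets $v := \dot{u}$ and obtains

\[
\frac{\mathrm{d}}{\mathrm{d}t}\begin{pmatrix} u \\ v \end{pmatrix}
= \begin{pmatrix} v \\ M^{-1}(f - Ku) \end{pmatrix},
\]

to which the DG time discretization is then applied.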

Nonlinear model order reduction has opened the door to parameter optimization and uncertainty quantification in complex physics problems governed by nonlinear equations. In particular, the computational cost of solving these equations can be reduced by means of local reduced-order bases. This article examines the benefits of a physics-informed cluster analysis for the construction of cluster-specific reduced-order bases. We illustrate that the choice of the dissimilarity measure for clustering is fundamental and strongly affects the performance of the local reduced-order bases. It is shown that clustering with an angle-based dissimilarity on simulation data efficiently decreases the intra-cluster Kolmogorov $N$-width. Additionally, an a priori efficiency criterion is introduced to assess the relevance of a ROM-net, a methodology for the reduction of nonlinear physics problems introduced in our previous work [T. Daniel, F. Casenave, N. Akkari, D. Ryckelynck, Model order reduction assisted by deep neural networks (ROM-net), Advanced Modeling and Simulation in Engineering Sciences 7 (16), 2020]. This criterion also provides engineers with a practical method for calibrating a ROM-net's hyperparameters under a constrained computational budget for the training phase. On five different physics problems, our physics-informed clustering strategy significantly outperforms classical strategies for the construction of local reduced-order bases in terms of projection errors.
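As a hedged illustration of the angle-based clustering idea, the sketch below clusters snapshot vectors with the cosine dissimilarity $d(u,v) = 1 - |\langle u,v\rangle|/(\|u\|\,\|v\|)$ and builds one truncated POD basis per cluster. The clustering algorithm, synthetic snapshots, cluster count, and rank are all illustrative choices, not the ROM-net pipeline itself.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering  # scikit-learn >= 1.2

def angle_dissimilarity(S):
    """Pairwise angle-based dissimilarity between snapshot columns:
    d(u, v) = 1 - |<u, v>| / (||u|| ||v||), insensitive to amplitude."""
    Sn = S / np.linalg.norm(S, axis=0, keepdims=True)
    return 1.0 - np.abs(Sn.T @ Sn)

def local_pod_bases(S, n_clusters=3, rank=5):
    """Cluster snapshots by angle, then build one truncated POD (SVD)
    basis per cluster."""
    D = angle_dissimilarity(S)
    labels = AgglomerativeClustering(
        n_clusters=n_clusters, metric="precomputed", linkage="average"
    ).fit_predict(D)
    bases = {}
    for c in range(n_clusters):
        U, _, _ = np.linalg.svd(S[:, labels == c], full_matrices=False)
        bases[c] = U[:, :rank]              # cluster-specific reduced basis
    return labels, bases

# Synthetic snapshots sweeping through several spatial regimes.
x = np.linspace(0, 1, 200)
snaps = np.stack([np.sin((1 + t) * np.pi * x)
                  for t in np.linspace(0, 4, 60)], axis=1)
labels, bases = local_pod_bases(snaps, n_clusters=3, rank=5)
```

Projecting each test state onto the basis of its nearest cluster then gives the intra-cluster projection error that the article uses to compare clustering strategies.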

In this paper, we formulate and study substructuring-type algorithms for the Cahn-Hilliard (CH) equation, which was originally proposed to describe phase separation in a binary molten alloy below the critical temperature; since then, it has appeared in many fields, ranging from tumour growth simulation and image processing to thin liquid films and population dynamics. Since the CH equation is nonlinear, it is important to develop robust numerical techniques for solving it. Here we present the formulation of the Dirichlet-Neumann (DN) and Neumann-Neumann (NN) methods applied to the CH equation and study their convergence behaviour. We consider the domain-decomposition-based DN and NN methods in one and two space dimensions for two subdomains, and extend the study to the multi-subdomain setting for the NN method. We verify our findings with numerical results.
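To convey the structure of a DN iteration without the nonlinearity of CH, here is a minimal sketch on the model problem $-u'' = f$ on $(0,1)$ with two equal subdomains: a Dirichlet solve on the left, a Neumann solve on the right fed with the left flux, and a relaxed interface update. The interface treatment is first-order and all numerical choices are illustrative.

```python
import numpy as np

# Dirichlet-Neumann iteration for -u'' = f on (0,1), u(0) = u(1) = 0,
# two equal subdomains, interface at x = 1/2.  (A linear Poisson sketch
# of the DN structure; the paper applies it to the nonlinear CH equation.)
N = 40                                   # intervals; interface at node m
h, m = 1.0 / N, N // 2
x = np.linspace(0.0, 1.0, N + 1)
f = np.pi**2 * np.sin(np.pi * x)         # exact solution: sin(pi x)

def lap(n):
    """n-by-n 1D Laplacian stencil matrix (-1, 2, -1) / h^2."""
    return (np.diag(2 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
            - np.diag(np.ones(n - 1), -1)) / h**2

lam, theta = 0.0, 0.5                    # interface value, relaxation
for it in range(20):
    # Dirichlet solve on (0, 1/2): u1(0) = 0, u1(1/2) = lam.
    rhs1 = f[1:m].copy(); rhs1[-1] += lam / h**2
    u1 = np.linalg.solve(lap(m - 1), rhs1)
    flux = (lam - u1[-1]) / h            # one-sided du/dx at the interface
    # Neumann solve on (1/2, 1): u2'(1/2) = flux, u2(1) = 0.
    A = lap(N - m)                       # unknowns at nodes m..N-1
    A[0, :] = 0.0; A[0, 0], A[0, 1] = -1.0 / h, 1.0 / h
    rhs2 = f[m:N].copy(); rhs2[0] = flux
    u2 = np.linalg.solve(A, rhs2)
    lam_new = theta * u2[0] + (1 - theta) * lam
    if abs(lam_new - lam) < 1e-12:
        break
    lam = lam_new
print(it, lam, np.sin(np.pi / 2))        # lam -> u(1/2) = 1, up to O(h)
```

For this symmetric two-subdomain case the relaxation parameter $\theta = 1/2$ makes the interface error contraction factor $|1-2\theta|$ vanish, which is why the iteration converges almost immediately.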

Recently developed physics-informed machine learning has made great progress in solving nonlinear partial differential equations (PDEs); however, it may fail to provide reasonable approximations for PDEs with discontinuous solutions. In this paper, we focus on the discrete-time physics-informed neural network (PINN) and propose a hybrid PINN scheme for nonlinear PDEs. In this approach, the local solution structures are classified into smooth and nonsmooth scales by introducing a discontinuity indicator; the automatic differentiation technique is then employed to resolve smooth scales, while an improved weighted essentially non-oscillatory (WENO) scheme is adopted to capture discontinuities. We test the present approach on the viscous and inviscid Burgers equations and show that, compared with the original discrete-time PINN, the hybrid approach better approximates discontinuous solutions even at relatively large time steps.
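The WENO component is classical. The sketch below implements the textbook fifth-order WENO-JS reconstruction; the paper uses an improved WENO variant, and the discontinuity indicator shown here is a crude stand-in based on the spread of the smoothness indicators, not the paper's indicator.

```python
import numpy as np

def weno5_reconstruct(v):
    """Classical WENO5-JS reconstruction of the left-biased interface
    value v_{i+1/2} from five cell averages v = (v_{i-2}, ..., v_{i+2})."""
    eps = 1e-6
    # Candidate third-order reconstructions on the three substencils.
    p0 = (2 * v[0] - 7 * v[1] + 11 * v[2]) / 6.0
    p1 = (-v[1] + 5 * v[2] + 2 * v[3]) / 6.0
    p2 = (2 * v[2] + 5 * v[3] - v[4]) / 6.0
    # Jiang-Shu smoothness indicators: large where the stencil is rough.
    b0 = 13/12 * (v[0] - 2*v[1] + v[2])**2 + 0.25 * (v[0] - 4*v[1] + 3*v[2])**2
    b1 = 13/12 * (v[1] - 2*v[2] + v[3])**2 + 0.25 * (v[1] - v[3])**2
    b2 = 13/12 * (v[2] - 2*v[3] + v[4])**2 + 0.25 * (3*v[2] - 4*v[3] + v[4])**2
    a = np.array([0.1, 0.6, 0.3]) / (eps + np.array([b0, b1, b2]))**2
    w = a / a.sum()                     # nonlinear weights
    return w @ np.array([p0, p1, p2])

# Crude discontinuity indicator (hypothetical, not the paper's): flag a
# cell as nonsmooth when the substencil roughness measures disagree
# strongly, suggesting a discontinuity inside the stencil.
def is_nonsmooth(v, tol=10.0):
    b = [13/12 * (v[k] - 2*v[k+1] + v[k+2])**2 for k in range(3)]
    return max(b) > tol * (min(b) + 1e-12)
```

In the hybrid scheme described in the abstract, cells flagged by the indicator would be handled by the WENO reconstruction, while automatic differentiation handles the rest.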

A key challenge of big data analytics is how to collect a large volume of (labeled) data. Crowdsourcing aims to address this challenge by aggregating and estimating high-quality data (e.g., sentiment labels for text) from pervasive clients/users. Existing studies on crowdsourcing focus on designing new methods to improve the aggregated data quality from unreliable/noisy clients. However, the security aspects of such crowdsourcing systems remain under-explored to date. We aim to bridge this gap in this work. Specifically, we show that crowdsourcing is vulnerable to data poisoning attacks, in which malicious clients provide carefully crafted data to corrupt the aggregated data. We formulate our proposed data poisoning attacks as an optimization problem that maximizes the error of the aggregated data. Our evaluation results on one synthetic and two real-world benchmark datasets demonstrate that the proposed attacks can substantially increase the estimation errors of the aggregated data. We also propose two defenses to reduce the impact of malicious clients. Our empirical results show that the proposed defenses can substantially reduce the estimation errors caused by the data poisoning attacks.
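The core idea is easy to see on the simplest aggregation rule. The sketch below poisons mean aggregation of bounded reports (for the mean, the error-maximizing bounded attack pushes malicious reports to the extreme of the domain) and shows the median as a simple robust defense. The setup and numbers are illustrative; the paper's attacks target more sophisticated aggregation methods.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: n honest clients report noisy estimates of a true
# value in [0, 1]; the server aggregates by averaging.  An attacker
# controlling m clients crafts reports to maximize the aggregation error.
true_value, n, m = 0.3, 90, 10
honest = np.clip(true_value + 0.05 * rng.standard_normal(n), 0, 1)

# For mean aggregation the error-maximizing bounded attack reports the
# extreme of the domain farthest from the truth.
malicious = np.full(m, 1.0 if true_value < 0.5 else 0.0)
reports = np.concatenate([honest, malicious])

print("clean mean    :", honest.mean())
print("poisoned mean :", reports.mean())      # shifted toward the attack
print("median defense:", np.median(reports))  # robust while attackers are a minority
```

Against more elaborate aggregation rules, the attacker's reports are found by solving the error-maximization problem the abstract describes rather than by this closed-form extreme-point choice.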

This paper focuses on the expected difference in borrowers' repayment when there is a change in the lender's credit decisions. Classical estimators overlook confounding effects, and hence their estimation error can be substantial. We therefore propose an alternative approach to constructing estimators such that this error is greatly reduced. The proposed estimators are shown to be unbiased, consistent, and robust through a combination of theoretical analysis and numerical testing. Moreover, we compare the power of the classical and proposed estimators in estimating the causal quantities. The comparison is tested across a wide range of models, including linear regression, tree-based, and neural-network-based models, on simulated datasets that exhibit different levels of causality, degrees of nonlinearity, and distributional properties. Most importantly, we apply our approaches to a large observational dataset provided by a global technology firm that operates in both the e-commerce and the lending business. We find that the relative reduction in estimation error is strikingly substantial when the causal effects are accounted for correctly.
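The abstract does not spell out the estimator, so as an illustrative stand-in the sketch below contrasts a naive difference-in-means estimate with inverse-propensity weighting (IPW), one standard way to correct for a confounder that drives both the credit decision and the repayment. All model details are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Hypothetical confounded setting: a credit score x drives both the
# lending decision t and the repayment outcome y, so the naive
# difference-in-means estimate of the decision's effect is biased.
n = 20_000
x = rng.standard_normal(n)                       # confounder (credit score)
p = 1 / (1 + np.exp(-2 * x))                     # approval likelier if x high
t = rng.random(n) < p                            # credit decision
y = 1.0 * t + 2.0 * x + rng.standard_normal(n)   # true causal effect = 1.0

naive = y[t].mean() - y[~t].mean()               # confounded estimate

# Inverse-propensity weighting: reweight each unit by the estimated
# probability of its observed decision to remove the confounding via x.
e = LogisticRegression().fit(x[:, None], t).predict_proba(x[:, None])[:, 1]
ipw = np.mean(t * y / e) - np.mean((~t) * y / (1 - e))

print(f"naive: {naive:.2f}  IPW: {ipw:.2f}  (truth: 1.00)")
```

The naive estimate absorbs the gap in credit scores between approved and rejected borrowers and overshoots the true effect, while the reweighted estimate recovers it; this is the confounding phenomenon the paper's estimators are designed to handle.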

We present a continuous formulation of machine learning, as a problem in the calculus of variations and differential-integral equations, very much in the spirit of classical numerical analysis and statistical physics. We demonstrate that conventional machine learning models and algorithms, such as the random feature model, the shallow neural network model and the residual neural network model, can all be recovered as particular discretizations of different continuous formulations. We also present examples of new models, such as the flow-based random feature model, and new algorithms, such as the smoothed particle method and spectral method, that arise naturally from this continuous formulation. We discuss how the issues of generalization error and implicit regularization can be studied under this framework.
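For instance, the random feature model mentioned above can be viewed as a Monte Carlo discretization of an integral over a feature distribution, $f(x) = \int a(w)\,\sigma(w\cdot x)\,\pi(\mathrm{d}w) \approx \sum_k a_k\,\sigma(w_k\cdot x)$. Here is a minimal sketch; the activation, feature distribution, data, and regularization are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)

# Random feature model: f(x) = sum_k a_k * sigma(w_k . x + b_k), with
# the features (w_k, b_k) drawn once and frozen; only the outer
# coefficients a are trained (here by ridge-regularized least squares).
def random_features(X, W, b):
    return np.maximum(X @ W + b, 0.0)            # ReLU features

n, d, N = 500, 2, 200
X = rng.standard_normal((n, d))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(n)

W = rng.standard_normal((d, N)); b = rng.standard_normal(N)
Phi = random_features(X, W, b)
a = np.linalg.solve(Phi.T @ Phi + 1e-3 * np.eye(N), Phi.T @ y)

X_test = rng.standard_normal((100, d))
y_hat = random_features(X_test, W, b) @ a        # prediction
```

Training only the outer coefficients makes the optimization convex; the continuous formulation replaces the finite sum over frozen features with the integral above.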

We consider the exploration-exploitation trade-off in reinforcement learning and we show that an agent imbued with a risk-seeking utility function is able to explore efficiently, as measured by regret. The parameter that controls how risk-seeking the agent is can be optimized exactly, or annealed according to a schedule. We call the resulting algorithm K-learning and show that the corresponding K-values are optimistic for the expected Q-values at each state-action pair. The K-values induce a natural Boltzmann exploration policy for which the `temperature' parameter is equal to the risk-seeking parameter. This policy achieves an expected regret bound of $\tilde O(L^{3/2} \sqrt{S A T})$, where $L$ is the time horizon, $S$ is the number of states, $A$ is the number of actions, and $T$ is the total number of elapsed time-steps. This bound is only a factor of $L$ larger than the established lower bound. K-learning can be interpreted as mirror descent in the policy space, and it is similar to other well-known methods in the literature, including Q-learning, soft-Q-learning, and maximum entropy policy gradient, and is closely related to optimism and count-based exploration methods. K-learning is simple to implement, as it only requires adding a bonus to the reward at each state-action and then solving a Bellman equation. We conclude with a numerical example demonstrating that K-learning is competitive with other state-of-the-art algorithms in practice.
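A simplified tabular caricature of the recipe described above, add a bonus to the reward, solve a Bellman equation, act with a Boltzmann policy, is sketched below. It uses an infinite-horizon discounted soft Bellman backup rather than the paper's finite-horizon formulation, and the bonus is left as an input; it is meant to show the structure, not to reproduce K-learning's guarantees.

```python
import numpy as np

def k_style_values(P, r, bonus, tau, gamma=0.95, iters=500):
    """Tabular sketch of the recipe in the abstract: add an exploration
    bonus to the reward, solve a (soft) Bellman equation, and act with a
    Boltzmann policy at temperature tau.
    P: (S, A, S) transition tensor, r: (S, A) rewards."""
    S, A = r.shape
    K = np.zeros((S, A))
    for _ in range(iters):
        # log-sum-exp backup = soft maximum at temperature tau
        V = tau * np.log(np.exp(K / tau).sum(axis=1))
        K = r + bonus + gamma * P @ V          # (S,A,S) @ (S,) -> (S,A)
    policy = np.exp(K / tau)
    return K, policy / policy.sum(axis=1, keepdims=True)

# Usage on a random MDP (hypothetical sizes and bonus).
rng = np.random.default_rng(4)
S_, A_ = 6, 3
P = rng.random((S_, A_, S_)); P /= P.sum(axis=2, keepdims=True)
K, pi = k_style_values(P, r=rng.random((S_, A_)), bonus=0.1, tau=1.0)
```

The Boltzmann policy $\pi(a\,|\,s) \propto \exp(K(s,a)/\tau)$ is exactly the exploration policy the abstract attributes to the K-values, with the temperature playing the role of the risk-seeking parameter.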

In this paper, we study the optimal convergence rate for distributed convex optimization problems over networks. We model the communication restrictions imposed by the network as a set of affine constraints and provide optimal complexity bounds for four different setups, namely when the function $F(x) \triangleq \sum_{i=1}^{m} f_i(x)$ is: strongly convex and smooth, strongly convex, smooth, or just convex. Our results show that Nesterov's accelerated gradient descent on the dual problem can be executed in a distributed manner and attains the same optimal rates as in the centralized version of the problem (up to constant or logarithmic factors), with an additional cost related to the spectral gap of the interaction matrix. Finally, we discuss some extensions of the proposed setup, such as proximal-friendly functions, time-varying graphs, and improvement of the condition number.
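The paper's method is an accelerated dual scheme. As a hedged illustration of the basic decentralized pattern it builds on, where each communication round is a multiplication by a mixing matrix whose spectral gap drives the extra cost, the sketch below runs plain (non-accelerated, primal) decentralized gradient descent on a ring; the topology, weights, step size, and problem are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

# Decentralized gradient sketch: m agents minimize F(x) = sum_i f_i(x),
# f_i(x) = 0.5 * ||A_i x - b_i||^2, communicating only with ring
# neighbours through a doubly stochastic mixing matrix W.  The spectral
# gap of W governs how fast consensus spreads; this is plain DGD, not
# the paper's accelerated dual method.
m, d = 8, 5
A = rng.standard_normal((m, 10, d)); b = rng.standard_normal((m, 10))

W = np.zeros((m, m))                     # ring topology, lazy uniform weights
for i in range(m):
    W[i, i], W[i, (i - 1) % m], W[i, (i + 1) % m] = 0.5, 0.25, 0.25

X = np.zeros((m, d))                     # one local iterate per agent
eta = 0.01
for _ in range(2000):
    grads = np.stack([A[i].T @ (A[i] @ X[i] - b[i]) for i in range(m)])
    X = W @ X - eta * grads              # gossip step + local gradient step

x_star = np.linalg.lstsq(A.reshape(-1, d), b.ravel(), rcond=None)[0]
print(np.linalg.norm(X.mean(axis=0) - x_star))   # near the consensus optimum
```

With a constant step size, DGD only reaches an $O(\eta)$ neighbourhood of the optimum; the dual accelerated approach analyzed in the paper removes this bias and achieves the optimal centralized rates up to the spectral-gap factor.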
