
We consider a class of differential equations $\frac{\mathrm{d}}{\mathrm{d}t} y(t) = R(y(t))\,y(t) + f(y(t))$ with energy conservation. Such conservative models appear, for instance, in quantum physics, engineering, and molecular dynamics. A new class of energy-preserving schemes is constructed from the ideas of the scalar auxiliary variable (SAV) approach and splitting, which turns the usual nonlinearly implicit schemes into linearly implicit ones. Energy conservation and error estimates are rigorously derived. Based on these results, it is shown that the newly proposed schemes are unconditionally energy stable and can be implemented at the cost of solving a linear system per step. Numerical experiments confirm these good features of the new schemes.
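To make the construction concrete, here is a minimal Python sketch of the SAV idea on the pendulum $\ddot q = -\sin q$: an auxiliary variable $r=\sqrt{V(q)+C_0}$ with $V(q)=1-\cos q$ replaces the nonlinearity, and one explicit predictor plus one small linear solve per step conserves the modified energy $p^2/2 + r^2$ exactly. This is an illustrative toy scheme in the same spirit, not the authors' method.

```python
import numpy as np

# Linearly implicit SAV step for the pendulum q' = p, p' = -sin(q).
# Auxiliary variable r = sqrt(V(q) + C0) with V(q) = 1 - cos(q);
# the modified energy E = p^2/2 + r^2 is conserved exactly by the scheme.
C0 = 1.0

def sav_step(q, p, r, dt):
    q_star = q + 0.5 * dt * p                  # explicit midpoint predictor
    bs = np.sin(q_star) / np.sqrt(1.0 - np.cos(q_star) + C0)
    a = 0.5 * dt * bs
    # Linear (not nonlinear!) system for the updates (p1, r1):
    #    p1 + a*r1       = p - a*r
    #   -(a/2)*p1 + r1   = r + (a/2)*p
    A = np.array([[1.0, a], [-0.5 * a, 1.0]])
    rhs = np.array([p - a * r, r + 0.5 * a * p])
    p1, r1 = np.linalg.solve(A, rhs)
    q1 = q + 0.5 * dt * (p + p1)
    return q1, p1, r1

q, p = 2.0, 0.0
r = np.sqrt(1.0 - np.cos(q) + C0)
E0 = 0.5 * p**2 + r**2
for _ in range(10_000):
    q, p, r = sav_step(q, p, r, 1e-2)
print("modified-energy drift:", abs(0.5 * p**2 + r**2 - E0))  # round-off only
```

Multiplying the two linear update equations by the midpoint averages of $p$ and $r$ shows the drift cancels identically, which is why the conservation holds for every step size.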

Related content

Federated Recommender Systems (FedRecs) are considered privacy-preserving techniques for collaboratively learning a recommendation model without sharing user data. Since all participants can directly influence the system by uploading gradients, FedRecs are vulnerable to poisoning attacks by malicious clients. However, most existing poisoning attacks on FedRecs either rely on prior knowledge or achieve limited effectiveness. To reveal the real vulnerability of FedRecs, in this paper we present a new poisoning attack method that effectively manipulates target items' ranks and exposure rates in top-$K$ recommendation without relying on any prior knowledge. Specifically, our attack manipulates target items' exposure rates through a group of synthetic malicious users who upload poisoned gradients informed by the target items' alternative products. We conduct extensive experiments with two widely used FedRecs (Fed-NCF and Fed-LightGCN) on two real-world recommendation datasets. The experimental results show that our attack can significantly improve the exposure rate of unpopular target items with far fewer malicious users and fewer global epochs than state-of-the-art attacks. Beyond disclosing this security hole, we design a novel countermeasure for poisoning attacks on FedRecs: we propose hierarchical gradient clipping with sparsified updating to defend against existing poisoning attacks. The empirical results demonstrate that the proposed defense improves the robustness of FedRecs.
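As a rough illustration of the defense side, the sketch below combines per-layer and global norm clipping with top-$k$ magnitude sparsification of the uploaded update. The composition is a guess at what "hierarchical gradient clipping with sparsified updating" could look like, since the abstract does not spell out the mechanism; all parameter names are hypothetical.

```python
import numpy as np

def clip_and_sparsify(grads, layer_clip, global_clip, k_frac):
    # Per-layer norm clipping, then a global clip over the flattened update
    # (one plausible reading of "hierarchical" clipping).
    clipped = []
    for g in grads:
        n = np.linalg.norm(g)
        clipped.append(g * min(1.0, layer_clip / (n + 1e-12)))
    flat = np.concatenate([g.ravel() for g in clipped])
    n = np.linalg.norm(flat)
    flat *= min(1.0, global_clip / (n + 1e-12))
    # Sparsified updating: keep only the top-k coordinates by magnitude.
    k = max(1, int(k_frac * flat.size))
    thresh = np.partition(np.abs(flat), -k)[-k]
    flat[np.abs(flat) < thresh] = 0.0
    return flat

rng = np.random.default_rng(0)
grads = [rng.standard_normal((64, 32)), rng.standard_normal(32)]
update = clip_and_sparsify(grads, layer_clip=1.0, global_clip=1.0, k_frac=0.01)
print("nonzeros:", np.count_nonzero(update), "norm:", np.linalg.norm(update))
```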

Schedule-based transit assignment describes congestion in public transport services by modeling the interactions of passenger behavior in a time-space network built directly on a transit schedule. This study investigates the theoretical properties of schedule-based Markovian transit assignment with boarding queues. When queues exist at a station, passenger boarding flows are loaded according to the residual vehicle capacity, which depends on the flows of passengers already on board, who have priority. An equilibrium problem is formulated under this nonseparable link cost structure together with explicit capacity constraints. The network generalized extreme value (NGEV) model, a general class of additive random utility models with closed-form expressions, is used to describe the path choice behavior of passengers. A set of formulations for the equilibrium problem is presented, including variational inequality and fixed-point problems, from which the day-to-day dynamics of passenger flows and costs are derived. It is shown that Lyapunov functions associated with the dynamics can be obtained and that they guarantee the desirable solution properties of existence, uniqueness, and global stability of the equilibria. In handling stochastic equilibrium with explicit capacity constraints and nonseparable link cost functions, the present theoretical analysis generalizes existing day-to-day dynamics from the context of general traffic assignment.
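The day-to-day dynamics can be pictured with a toy fixed-point iteration. The sketch below uses a two-route network, a logit choice model as a simple stand-in for NGEV, a BPR-style cost in place of the boarding-queue loading, and a partial-adjustment update; all of these are illustrative assumptions, not the paper's model.

```python
import numpy as np

# Day-to-day partial adjustment toward a stochastic equilibrium on a toy
# two-route network. Logit replaces the NGEV path-choice model and a
# BPR-style congestion function replaces the capacity-constrained loading.
t0 = np.array([10.0, 15.0])            # free-flow travel times
cap = np.array([30.0, 60.0])           # route capacities
demand, theta, alpha = 50.0, 0.5, 0.2  # total demand, logit scale, step size

def cost(f):
    return t0 * (1.0 + 0.15 * (f / cap)**4)

f = np.full(2, demand / 2)             # day-1 flows
for day in range(300):
    p = np.exp(-theta * cost(f))
    p /= p.sum()                       # logit route-choice probabilities
    f = (1.0 - alpha) * f + alpha * demand * p   # day-to-day adjustment
print("equilibrium flows:", f, "costs:", cost(f))
```

Under the Lyapunov-function results described above, iterations of this type converge globally to the unique equilibrium; the toy run illustrates that behavior empirically.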

We consider the classic 1-center problem: given a set $P$ of $n$ points in a metric space, find the point in $P$ that minimizes the maximum distance to the other points of $P$. We study the complexity of this problem in $d$-dimensional $\ell_p$-metrics and in edit and Ulam metrics over strings of length $d$. Our results for the 1-center problem may be classified based on $d$ as follows. $\bullet$ Small $d$: Assuming the hitting set conjecture (HSC), we show that when $d=\omega(\log n)$, no subquadratic algorithm can solve the 1-center problem in any of the $\ell_p$-metrics, or in the edit or Ulam metrics. $\bullet$ Large $d$: When $d=\Omega(n)$, we extend our conditional lower bound to rule out subquartic algorithms for the 1-center problem in the edit metric (assuming Quantified SETH). On the other hand, we give a $(1+\epsilon)$-approximation for 1-center in the Ulam metric with running time $\tilde{O}_{\epsilon}(nd+n^2\sqrt{d})$. We also strengthen some of the above lower bounds by allowing approximations or by reducing the dimension $d$, but only against a weaker class of algorithms that list all requisite solutions. Moreover, we extend one of our hardness results to rule out subquartic algorithms for the well-studied 1-median problem in the edit metric, where, given a set of $n$ strings each of length $n$, the goal is to find a string in the set that minimizes the sum of the edit distances to the rest of the strings in the set.
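For reference, the baseline that the conditional lower bounds address is the trivial exact algorithm, which evaluates all pairwise distances in $O(n^2)$ distance computations (so $O(n^2 d)$ time in $d$ dimensions); a minimal Python version:

```python
import math

def one_center(points, dist):
    """Exact 1-center by brute force: O(n^2) distance evaluations."""
    best, best_radius = None, math.inf
    for p in points:
        radius = max(dist(p, q) for q in points if q is not p)
        if radius < best_radius:
            best, best_radius = p, radius
    return best, best_radius

pts = [(0.0, 0.0), (4.0, 0.0), (2.0, 3.0), (2.0, 1.0)]
center, r = one_center(pts, math.dist)   # Euclidean; any metric works
print(center, r)                          # (2.0, 1.0), radius sqrt(5)
```

The hardness results above say that, under HSC, this quadratic behavior is essentially optimal once $d=\omega(\log n)$.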

We analyze the bit complexity of efficient algorithms for fundamental optimization problems, such as linear regression, $p$-norm regression, and linear programming (LP). State-of-the-art algorithms are iterative, and in terms of the number of arithmetic operations, they match the current time complexity of multiplying two $n$-by-$n$ matrices (up to polylogarithmic factors). However, previous work has typically assumed infinite-precision arithmetic, and due to complicated inverse maintenance techniques, the actual running times of these algorithms are unknown. To settle the running time and bit complexity of these algorithms, we demonstrate that a core common subroutine, known as \emph{inverse maintenance}, is backward-stable. Additionally, we show that iterative approaches for solving constrained weighted regression problems can be carried out with bounded-error preconditioners. Specifically, we prove that linear programs can be solved approximately in matrix multiplication time multiplied by polylog factors that depend on the condition number $\kappa$ of the matrix and the inner and outer radii of the LP problem; that $p$-norm regression can be solved approximately in matrix multiplication time multiplied by polylog factors in $\kappa$; and that linear regression can be solved approximately in input-sparsity time multiplied by polylog factors in $\kappa$. Furthermore, we present results for achieving running times below matrix multiplication time for $p$-norm regression by utilizing faster solvers for sparse linear systems.
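The role of an approximate, backward-stably maintained inverse can be seen in miniature with iterative refinement: if $M \approx A^{-1}$ so that $\|I - MA\| < 1$, the iteration $x \leftarrow x + M(b - Ax)$ contracts the error by that factor each pass, so a perturbed inverse still yields a fast, accurate solver. A small sketch (illustrative only, not the paper's inverse-maintenance algorithm):

```python
import numpy as np

# Iterative refinement with an inexact inverse M ~ A^{-1}: the residual
# contracts by a factor ||I - M A|| per pass, so maintaining M only
# approximately (in bounded precision) does not spoil the final accuracy.
rng = np.random.default_rng(0)
n = 300
A = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned test matrix
b = rng.standard_normal(n)
M = np.linalg.inv(A + 1e-3 * rng.standard_normal((n, n)))  # perturbed inverse
x = M @ b
for _ in range(10):
    x += M @ (b - A @ x)                          # one refinement pass
print("relative residual:", np.linalg.norm(b - A @ x) / np.linalg.norm(b))
```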

The propagation of charged particles through a scattering medium in the presence of a magnetic field can be described by a Fokker-Planck equation with Lorentz force. This model is studied from both a theoretical and a numerical point of view. A particular trace estimate is derived for the relevant function spaces to clarify the meaning of boundary values. Existence of a weak solution is then proven by the Rothe method. In a second step, a fully practicable discretization scheme is proposed, based on implicit time-stepping through the energy levels and a spherical-harmonics finite-element discretization with respect to the remaining variables. A full error analysis of the resulting scheme is given, and numerical results are presented to illustrate the theoretical results and the performance of the proposed method.
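The flavor of the implicit stepping can be conveyed by a 1-D toy: backward Euler for a diffusion equation, where a pseudo-time variable plays the role the energy variable plays in the actual model. This sketch omits the Lorentz-force transport and the spherical-harmonics finite-element discretization entirely:

```python
import numpy as np

# Backward Euler for du/ds = d^2u/dx^2 on (0,1) with Dirichlet boundaries.
# The step size ds is about 10x the explicit stability limit ~h^2/2,
# yet the implicit scheme remains stable (the point of implicit stepping).
n, ds = 100, 1e-3
x = np.linspace(0.0, 1.0, n + 2)[1:-1]           # interior grid points
h = x[1] - x[0]
L = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) / h**2       # Dirichlet Laplacian
u = np.exp(-100.0 * (x - 0.5)**2)                # initial profile
M = np.eye(n) - ds * L                           # backward Euler matrix
for _ in range(200):
    u = np.linalg.solve(M, u)                    # one implicit step
print("peak after stepping (decays monotonically):", u.max())
```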

Linear partial differential equations (PDEs) are an important, widely applied class of mechanistic models, describing physical processes such as heat transfer, electromagnetism, and wave propagation. In practice, specialized numerical methods based on discretization are used to solve PDEs. They generally use an estimate of the unknown model parameters and, if available, physical measurements for initialization. Such solvers are often embedded into larger scientific models with a downstream application, and thus error quantification plays a key role. However, by ignoring parameter and measurement uncertainty, classical PDE solvers may fail to produce consistent estimates of their inherent approximation error. In this work, we approach this problem in a principled fashion by interpreting the solution of linear PDEs as physics-informed Gaussian process (GP) regression. Our framework is based on a key generalization of the Gaussian process inference theorem to observations made via an arbitrary bounded linear operator. Crucially, this probabilistic viewpoint allows us to (1) quantify the inherent discretization error; (2) propagate uncertainty about the model parameters to the solution; and (3) condition on noisy measurements. Demonstrating the strength of this formulation, we prove that it strictly generalizes methods of weighted residuals, a central class of PDE solvers including collocation, finite volume, pseudospectral, and (generalized) Galerkin methods such as finite element and spectral methods. This class can thus be directly equipped with a structured error estimate. In summary, our results enable the seamless integration of mechanistic models as modular building blocks into probabilistic models by blurring the boundaries between numerical analysis and Bayesian inference.
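The key ingredient, conditioning a Gaussian prior on observations made through a linear operator, is easy to see in a finite feature basis, where the GP reduces to Bayesian linear regression. The sketch below solves $-u'' = f$ on $[0,1]$ by observing the operator values at collocation points in a sine basis (so the operator acts diagonally on the features); the basis, operator, and noise level are illustrative choices, not the paper's general framework.

```python
import numpy as np

# Gaussian prior over u(x) = sum_j w_j sin(j*pi*x), w ~ N(0, I).
# Operator observations: y_i = (L u)(x_i) + noise with L = -d^2/dx^2,
# so (L phi_j)(x) = (j*pi)^2 sin(j*pi*x) acts linearly on the weights w.
m, n, sigma = 12, 30, 1e-6
j = np.arange(1, m + 1)
xs = np.linspace(0.05, 0.95, n)                   # collocation points
Phi = np.sin(np.outer(xs, j) * np.pi)             # phi_j(x_i)
A = (j * np.pi)**2 * Phi                          # (L phi_j)(x_i)
f = (2 * np.pi)**2 * np.sin(2 * np.pi * xs)       # rhs for u = sin(2*pi*x)
# Gaussian conditioning on the operator observations (MAP / posterior mean):
w_mean = np.linalg.solve(A.T @ A + sigma**2 * np.eye(m), A.T @ f)
xg = np.linspace(0.0, 1.0, 101)
u_post = np.sin(np.outer(xg, j) * np.pi) @ w_mean # posterior mean solution
print("max error:", np.max(np.abs(u_post - np.sin(2 * np.pi * xg))))
```

The same conditioning formulas carry over to the infinite-dimensional GP case, which is what endows the weighted-residual solvers with a structured posterior error estimate.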

In this paper we investigate the stability properties of the so-called gBBKS and GeCo methods, which belong to the class of nonstandard schemes and preserve positivity as well as all linear invariants of the underlying system of ordinary differential equations for any step size. A stability investigation for these methods, which lie outside the class of general linear methods, is challenging since the iterates are always generated by a nonlinear map, even for linear problems. Recently, a stability theorem was derived presenting criteria for understanding such schemes. For the analysis, the schemes are applied to general linear equations and proven to be generated by $\mathcal C^1$-maps with locally Lipschitz continuous first derivatives. As a result, the above-mentioned stability theorem can be applied to investigate the Lyapunov stability of non-hyperbolic fixed points of the numerical method by analyzing the spectrum of the Jacobian of the generating map. In addition, if a fixed point is proven to be stable, the theorem guarantees the local convergence of the iterates towards it. In the case of the first- and second-order gBBKS schemes, the stability domain coincides with that of the underlying Runge--Kutta method. Furthermore, while the first-order GeCo scheme converts steady states to stable fixed points for all step sizes and all linear test problems of finite size, the second-order GeCo scheme has a bounded stability region for the considered test problems. Finally, all theoretical predictions from the stability analysis are validated numerically.
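The analysis pattern, computing the Jacobian of the generating one-step map at a fixed point and checking its spectrum, can be demonstrated on any scheme whose map is nonlinear even for simple problems. The sketch below uses Mickens' nonstandard scheme for the logistic equation (not gBBKS or GeCo) together with a generic finite-difference Jacobian:

```python
import numpy as np

# Mickens' nonstandard scheme for the logistic equation y' = y(1 - y):
#   y_{n+1} = (1 + dt) y_n / (1 + dt y_n),
# a nonlinear generating map whose stability at the fixed point y* = 1
# is decided by the spectrum of its Jacobian.
def step(y, dt):
    return (1.0 + dt) * y / (1.0 + dt * y)

def jacobian(phi, y, dt, h=1e-7):
    # Forward-difference Jacobian of the one-step map; works for systems too.
    y = np.atleast_1d(np.asarray(y, dtype=float))
    n = y.size
    J = np.empty((n, n))
    f0 = np.atleast_1d(phi(y, dt))
    for k in range(n):
        e = np.zeros(n); e[k] = h
        J[:, k] = (np.atleast_1d(phi(y + e, dt)) - f0) / h
    return J

for dt in (0.1, 1.0, 10.0):
    rho = max(abs(np.linalg.eigvals(jacobian(step, [1.0], dt))))
    print(dt, rho)   # spectral radius 1/(1+dt) < 1: stable for every step size
```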

Higher-order finite difference Weighted Essentially Non-Oscillatory (WENO) schemes have been constructed for conservation laws. For multidimensional problems, they offer high-order accuracy at a fraction of the cost of a finite volume WENO or DG scheme of comparable accuracy. This makes them quite attractive for several science and engineering applications. However, to the best of our knowledge, such schemes have not been extended to nonlinear hyperbolic systems with non-conservative products. In this paper, we perform such an extension, which enlarges the domain of applicability of these schemes. The extension is carried out by writing the scheme in fluctuation form. We use the HLLI Riemann solver of Dumbser and Balsara (2016) as a building block for carrying out this extension. Because of the use of an HLL building block, the resulting scheme has a proper supersonic limit. The use of anti-diffusive fluxes ensures that stationary discontinuities can be preserved by the scheme, further expanding its domain of applicability. Our new finite difference WENO formulation uses the same WENO reconstruction that was used in classical versions, making it very easy for users to transition to the present formulation. For conservation laws, the new finite difference WENO is shown to perform as well as the classical version, with two major advantages: (1) it can capture jumps in stationary linearly degenerate wave families exactly, and (2) it only requires the reconstruction to be applied once. Several examples from hyperbolic PDE systems with non-conservative products are shown, indicating that the scheme works and achieves its design order of accuracy for smooth multidimensional flows. Stringent Riemann ... *Abstract truncated, see PDF*
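Since the new formulation reuses the classical reconstruction, the computational kernel is the standard Jiang-Shu WENO5 procedure. The sketch below reconstructs the left-biased interface value from a five-cell window; the fluctuation-form assembly and the HLLI solver from the paper are omitted.

```python
import numpy as np

def weno5_reconstruct(f):
    # Classical Jiang-Shu WENO5: left-biased value at x_{i+1/2} from the
    # five cell values f = (f_{i-2}, f_{i-1}, f_i, f_{i+1}, f_{i+2}).
    fm2, fm1, f0, fp1, fp2 = f
    # Candidate third-order stencil values at x_{i+1/2}.
    q0 = (2*fm2 - 7*fm1 + 11*f0) / 6.0
    q1 = (-fm1 + 5*f0 + 2*fp1) / 6.0
    q2 = (2*f0 + 5*fp1 - fp2) / 6.0
    # Smoothness indicators.
    b0 = 13/12*(fm2 - 2*fm1 + f0)**2 + 0.25*(fm2 - 4*fm1 + 3*f0)**2
    b1 = 13/12*(fm1 - 2*f0 + fp1)**2 + 0.25*(fm1 - fp1)**2
    b2 = 13/12*(f0 - 2*fp1 + fp2)**2 + 0.25*(3*f0 - 4*fp1 + fp2)**2
    eps, d = 1e-6, (0.1, 0.6, 0.3)        # ideal linear weights
    a = [d[k] / (eps + b)**2 for k, b in enumerate((b0, b1, b2))]
    w = np.array(a) / sum(a)              # nonlinear WENO weights
    return w[0]*q0 + w[1]*q1 + w[2]*q2

# Across a jump the weights collapse onto the smooth stencil:
print(weno5_reconstruct(np.array([1.0, 1.0, 1.0, 5.0, 5.0])))  # ~1.0
```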

In this work we show an error estimate for a first-order Gaussian beam at a fold caustic, approximating time-harmonic waves governed by the Helmholtz equation. For the caustic that we study, the exact solution can be constructed using Airy functions, and there are explicit formulae for the Gaussian beam parameters. Via precise comparisons, we show that the pointwise error on the caustic is of order $O(k^{-5/6})$, where $k$ is the wave number in the Helmholtz equation.
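For orientation, the object being compared against the exact Airy-function solution is the standard first-order Gaussian beam ansatz (the paper's explicit parameter formulae are not reproduced here):

$$
u_{\mathrm{GB}}(x) = A(x)\, e^{ik\phi(x)}, \qquad \operatorname{Im}\phi(x) \ge 0,
$$

where $A$ is the leading-order amplitude and the positive imaginary part of the quadratic phase $\phi$ produces the Gaussian decay transverse to the central ray; the stated result bounds the pointwise error $|u - u_{\mathrm{GB}}|$ on the caustic by $O(k^{-5/6})$.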

This paper focuses on the expected difference in borrowers' repayment when there is a change in the lender's credit decisions. Classical estimators overlook confounding effects, and hence their estimation error can be substantial. We therefore propose an alternative approach to constructing the estimators that greatly reduces this error. The proposed estimators are shown to be unbiased, consistent, and robust through a combination of theoretical analysis and numerical testing. Moreover, we compare the power of the classical and proposed estimators in estimating the causal quantities. The comparison is tested across a wide range of models, including linear regression models, tree-based models, and neural network-based models, under different simulated datasets that exhibit different levels of causality, different degrees of nonlinearity, and different distributional properties. Most importantly, we apply our approaches to a large observational dataset provided by a global technology firm that operates in both the e-commerce and the lending business. We find that the relative reduction in estimation error is strikingly substantial when the causal effects are correctly accounted for.
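A toy simulation shows the phenomenon the paper targets: a naive difference-in-means estimator is badly biased under confounded credit decisions, while a correction that models the decision mechanism removes the bias. The sketch uses inverse-propensity weighting with a known propensity purely as a stand-in, since the abstract does not specify the proposed estimators.

```python
import numpy as np

# Confounded credit decisions: approval probability and repayment both
# depend on the borrower's (unobserved-by-the-naive-estimator) quality x.
rng = np.random.default_rng(0)
n = 200_000
x = rng.standard_normal(n)                       # confounder, e.g. credit score
p = 1.0 / (1.0 + np.exp(-2.0 * x))               # lenders approve good scores
t = rng.random(n) < p                            # credit decision (treatment)
y = 1.0 * t + 2.0 * x + rng.standard_normal(n)   # repayment; true effect = 1

naive = y[t].mean() - y[~t].mean()               # ignores confounding: biased
ipw = np.mean(t * y / p) - np.mean((~t) * y / (1 - p))  # reweights decisions
print(f"naive: {naive:.2f}  ipw: {ipw:.2f}  truth: 1.00")
```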
