
We say that $\Gamma$, the boundary of a bounded Lipschitz domain, is locally dilation invariant if, at each $x\in \Gamma$, $\Gamma$ is either locally $C^1$ or locally coincides (in some coordinate system centred at $x$) with a Lipschitz graph $\Gamma_x$ such that $\Gamma_x=\alpha_x\Gamma_x$, for some $\alpha_x\in (0,1)$. In this paper we study, for such $\Gamma$, the essential spectrum of $D_\Gamma$, the double-layer (or Neumann-Poincar\'e) operator of potential theory, on $L^2(\Gamma)$. We show, via localisation and Floquet-Bloch-type arguments, that this essential spectrum is the union of the spectra of related continuous families of operators $K_t$, for $t\in [-\pi,\pi]$; moreover, each $K_t$ is compact if $\Gamma$ is $C^1$ except at finitely many points. For the 2D case where, additionally, $\Gamma$ is piecewise analytic, we construct convergent sequences of approximations to the essential spectrum of $D_\Gamma$; each approximation is the union of the eigenvalues of finitely many finite matrices arising from Nystr\"om-method approximations to the operators $K_t$. Through error estimates with explicit constants, we also construct functionals that determine whether any particular locally-dilation-invariant piecewise-analytic $\Gamma$ satisfies the well-known spectral radius conjecture, that the essential spectral radius of $D_\Gamma$ on $L^2(\Gamma)$ is $<1/2$ for all Lipschitz $\Gamma$. We illustrate this theory with examples; for each we show that the essential spectral radius is $<1/2$, providing additional support for the conjecture. We also, via new results on the invariance of the essential spectral radius under locally-conformal $C^{1,\beta}$ diffeomorphisms, show that the spectral radius conjecture holds for all Lipschitz curvilinear polyhedra.
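The computational core of these approximations can be illustrated generically: a Nyström discretisation replaces an integral operator by a finite matrix whose eigenvalues approximate those of the operator. The Python sketch below (with an assumed placeholder kernel and plain trapezoidal quadrature) only illustrates this general mechanism; it is not the paper's construction of the operators $K_t$ or its quadrature rules.

```python
# A generic Nystrom discretisation: replace an integral operator
# (K u)(s) = \int_0^{2 pi} k(s, t) u(t) dt on a periodic interval by a finite
# matrix using the trapezoidal rule; the matrix eigenvalues approximate the
# operator's eigenvalues. The kernel below is a smooth placeholder, NOT the
# operators K_t of the paper.
import numpy as np

def nystrom_eigenvalues(kernel, n):
    """Eigenvalues of the n-by-n Nystrom matrix for a 2*pi-periodic kernel."""
    t = 2 * np.pi * np.arange(n) / n           # equispaced quadrature nodes
    w = 2 * np.pi / n                          # trapezoidal weights (periodic rule)
    S, T = np.meshgrid(t, t, indexing="ij")
    A = w * kernel(S, T)                       # (K u)(t_i) ~ sum_j w * k(t_i, t_j) u(t_j)
    return np.linalg.eigvals(A)

# Placeholder kernel: the eigenvalues of this operator are 1/2 (double) and 0.
eigs = nystrom_eigenvalues(lambda s, t: np.cos(s - t) / (2 * np.pi), 64)
print(np.sort(np.abs(eigs))[-3:])
```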

Related content

Causal inference, which analyzes the relationship between a cause and its effect and has a wide range of real-world applications across multiple fields, has recently attracted increasing attention from researchers of recommender systems (RS). Causal inference can model causality in recommender systems, such as confounding effects, and deal with counterfactual problems such as offline policy evaluation and data augmentation. Although there are already some valuable surveys on causal recommendation, they introduce approaches in a relatively isolated way and lack theoretical analysis of existing methods. Because causality is unfamiliar to many RS researchers, it is both necessary and challenging to comprehensively review the relevant studies from the perspective of causal theory, which may help readers propose new approaches in practice. This survey attempts to provide a systematic review of up-to-date papers in this area from a theoretical standpoint. First, we introduce the fundamental concepts of causal inference as the basis of the subsequent review. We then propose a new taxonomy from the perspective of causal techniques and discuss technical details of how existing methods apply causal inference to address specific recommendation issues. Finally, we highlight some promising directions for future research in this field.

Explicit exploration in the action space has been assumed to be indispensable for online policy gradient methods to avoid a drastic degradation in sample complexity when solving general reinforcement learning problems over finite state and action spaces. In this paper, we establish for the first time an $\tilde{\mathcal{O}}(1/\epsilon^2)$ sample complexity for online policy gradient methods without incorporating any exploration strategies. The essential development consists of two new on-policy evaluation operators and a novel analysis of the stochastic policy mirror descent method (SPMD). SPMD with the first evaluation operator, called value-based estimation, is tailored to the Kullback-Leibler divergence. Provided the Markov chains on the state space induced by the generated policies are uniformly mixing with a non-diminishing minimal visitation measure, an $\tilde{\mathcal{O}}(1/\epsilon^2)$ sample complexity is obtained with a linear dependence on the size of the action space. SPMD with the second evaluation operator, namely truncated on-policy Monte Carlo (TOMC), attains an $\tilde{\mathcal{O}}(\mathcal{H}_{\mathcal{D}}/\epsilon^2)$ sample complexity, where $\mathcal{H}_{\mathcal{D}}$ depends mildly on the effective horizon and the size of the action space for properly chosen Bregman divergences (e.g., the Tsallis divergence). SPMD with TOMC also exhibits stronger convergence properties in that it controls the optimality gap with high probability rather than in expectation. In contrast to explicit exploration, these new policy gradient methods can prevent repeatedly committing to potentially high-risk actions when searching for optimal policies.
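As a point of reference for the KL-divergence case, the Python sketch below performs a single KL-proximal policy mirror descent step on a tabular policy, for which the update has a closed form. The tabular setting, the placeholder cost estimates, and the stepsize are illustrative assumptions; this is not the paper's SPMD algorithm or its evaluation operators.

```python
# One policy mirror descent step with the KL divergence as the Bregman distance:
# minimising eta*<Q(s,.), pi> + KL(pi || pi_old(.|s)) over the simplex gives
# pi_new(a|s) proportional to pi_old(a|s) * exp(-eta * Q(s, a)).
import numpy as np

def spmd_kl_step(pi, Q, eta):
    """pi: (S, A) row-stochastic policy; Q: (S, A) estimated costs; eta: stepsize."""
    logits = np.log(pi) - eta * Q
    logits -= logits.max(axis=1, keepdims=True)   # stabilise before exponentiating
    new_pi = np.exp(logits)
    return new_pi / new_pi.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
pi = np.full((4, 3), 1.0 / 3.0)                   # uniform policy: 4 states, 3 actions
Q = rng.random((4, 3))                            # placeholder on-policy cost estimates
print(spmd_kl_step(pi, Q, eta=0.5))
```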

We present an extension of the linear sampling method for solving the sound-soft inverse acoustic scattering problem with randomly distributed point sources. The theoretical justification of our sampling method is based on the Helmholtz--Kirchhoff identity, the cross-correlation between measurements, and the volume and imaginary near-field operators, which we introduce and analyze. Implementations in MATLAB using boundary elements, the SVD, Tikhonov regularization, and Morozov's discrepancy principle are also discussed. We demonstrate the robustness and accuracy of our algorithms with several numerical experiments in two dimensions.
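The regularisation step alone is easy to sketch outside MATLAB. The Python sketch below (with an assumed random test matrix and a known noise level) solves $Ax=y$ by Tikhonov regularisation through the SVD and selects the parameter with Morozov's discrepancy principle via bisection; it is not the paper's implementation and omits the boundary-element and near-field-operator machinery.

```python
# Tikhonov regularisation via the SVD, with the parameter alpha chosen by
# Morozov's discrepancy principle: pick alpha so that ||A x_alpha - y|| ~ delta,
# where delta is the noise level. The residual increases with alpha, so a
# simple bisection on a log scale locates the matching value.
import numpy as np

def tikhonov_morozov(A, y, delta, alpha_lo=1e-14, alpha_hi=1e2, iters=60):
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ y

    def solve(alpha):
        x = Vt.T @ (s / (s**2 + alpha) * beta)
        return x, np.linalg.norm(A @ x - y)

    for _ in range(iters):
        alpha = np.sqrt(alpha_lo * alpha_hi)      # geometric bisection
        if solve(alpha)[1] < delta:
            alpha_lo = alpha
        else:
            alpha_hi = alpha
    return solve(alpha)[0], alpha

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 20)) / 10            # assumed test operator
x_true = rng.standard_normal(20)
noise = 1e-2 * rng.standard_normal(50)
y = A @ x_true + noise
x_rec, alpha = tikhonov_morozov(A, y, delta=np.linalg.norm(noise))
print(alpha, np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))
```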

In applications of group testing in networks, e.g. identifying individuals who are infected by a disease spreading over a network, exploiting correlation among network nodes provides fundamental opportunities for reducing the number of tests needed. We model and analyze group testing on $n$ correlated nodes whose interactions are specified by a graph $G$. We model correlation through an edge-faulty random graph formed from $G$, in which each edge is dropped with probability $1-r$ and all nodes in the same component have the same state. We consider three classes of graphs: cycles and trees, $d$-regular graphs, and stochastic block models (SBM), and obtain lower and upper bounds on the number of tests needed to identify the defective nodes. Our results are expressed in terms of the number of tests needed when the nodes are independent, as functions of $n$, $r$, and the target error. In particular, we quantify the fundamental improvement that exploiting correlation offers by the ratio between the total number of nodes $n$ and the equivalent number of independent nodes in a classic group testing algorithm. The lower bounds are derived by exhibiting a strong dependence of the number of tests needed on the expected number of components. In this regard, we establish a new approximation for the distribution of component sizes in "$d$-regular trees", which may be of independent interest and leads to a lower bound on the expected number of components in $d$-regular graphs. The upper bounds are found by forming dense subgraphs in which nodes are more likely to be in the same state. When $G$ is a cycle or tree, we show an improvement by a factor of $\log(1/r)$. For the grid, a graph with almost $2n$ edges, the improvement is by a factor of $(1-r) \log(1/r)$, indicating a drastic improvement compared to trees. When $G$ has a larger number of edges, as in the SBM, the improvement can scale in $n$.
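The correlation model itself is straightforward to simulate, which helps fix intuition for why the expected number of components governs the lower bounds. The Monte Carlo sketch below (Python; the cycle graph and parameters are arbitrary choices for illustration) estimates that expectation; it does not implement the paper's tests or bounds.

```python
# Edge-faulty random graph model: each edge of G is kept independently with
# probability r; all nodes of a connected component of the resulting graph
# share one state. Estimate the expected number of components by Monte Carlo,
# using union-find over the kept edges.
import random

def num_components(n, edges, r, rng):
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]         # path halving
            x = parent[x]
        return x
    count = n
    for u, v in edges:
        if rng.random() < r:                      # edge is kept with probability r
            ru, rv = find(u), find(v)
            if ru != rv:
                parent[ru] = rv
                count -= 1
    return count

n, r = 20, 0.7
cycle = [(i, (i + 1) % n) for i in range(n)]      # G = cycle on n nodes (illustrative)
rng = random.Random(0)
samples = [num_components(n, cycle, r, rng) for _ in range(5000)]
print(sum(samples) / len(samples))                # ~ n*(1 - r) + r**n for the cycle
```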

An independent set of a graph $G$ is a vertex subset $I$ such that there is no edge joining any two vertices in $I$. Imagine that a token is placed on each vertex of an independent set of $G$. The $\mathsf{TS}$- ($\mathsf{TS}_k$-) reconfiguration graph of $G$ takes all non-empty independent sets (of size $k$) as its nodes, where $k$ is some given positive integer. Two nodes are adjacent if one can be obtained from the other by sliding a token on some vertex to one of its unoccupied neighbors. This paper focuses on the structure and realizability of these reconfiguration graphs. More precisely, we study two main questions for a given graph $G$: (1) Whether the $\mathsf{TS}_k$-reconfiguration graph of $G$ belongs to some graph class $\mathcal{G}$ (including complete graphs, paths, cycles, complete bipartite graphs, connected split graphs, maximal outerplanar graphs, and complete graphs minus one edge) and (2) If $G$ satisfies some property $\mathcal{P}$ (including $s$-partitedness, planarity, Eulerianity, girth, and the clique's size), whether the corresponding $\mathsf{TS}$- ($\mathsf{TS}_k$-) reconfiguration graph of $G$ also satisfies $\mathcal{P}$, and vice versa. Additionally, we give a decomposition result for splitting a $\mathsf{TS}_k$-reconfiguration graph into smaller pieces.
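For small graphs the $\mathsf{TS}_k$-reconfiguration graph can simply be enumerated, which is a convenient way to check such structural statements on examples. The Python sketch below does this by brute force over the size-$k$ independent sets; it is illustrative only and makes no attempt at efficiency.

```python
# Build the TS_k-reconfiguration graph of a small graph G by brute force:
# nodes are the size-k independent sets of G; two nodes are adjacent when one
# is obtained from the other by sliding a single token along an edge of G to
# an unoccupied neighbour.
from itertools import combinations

def ts_k_reconfiguration_graph(n, edges, k):
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    independent = lambda S: all(v not in adj[u] for u, v in combinations(S, 2))
    nodes = [frozenset(S) for S in combinations(range(n), k) if independent(S)]

    def one_slide(I, J):
        # exactly one token moved, and it moved along an edge of G
        out, inn = I - J, J - I
        return len(out) == 1 and next(iter(inn)) in adj[next(iter(out))]

    rg_edges = [(I, J) for I, J in combinations(nodes, 2) if one_slide(I, J)]
    return nodes, rg_edges

# Example: G is the path on 5 vertices, with k = 2 tokens.
nodes, rg_edges = ts_k_reconfiguration_graph(5, [(0, 1), (1, 2), (2, 3), (3, 4)], k=2)
print(len(nodes), len(rg_edges))
```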

In this paper, we give pointwise estimates of a Vorono\"i-based finite volume approximation of the Laplace-Beltrami operator on Vorono\"i-Delaunay decompositions of the sphere. These estimates are the basis for a local error analysis, in the maximum norm, of the approximate solution of the Poisson equation and its gradient. Here, we consider the Vorono\"i-based finite volume method as a perturbation of the finite element method. Finally, using regularized Green's functions, we derive quasi-optimal convergence order in the maximum norm with minimal regularity requirements. Numerical examples show that the convergence is at least as good as predicted.

A class of implicit Milstein type methods is introduced and analyzed in the present article for stochastic differential equations (SDEs) with non-globally Lipschitz drift and diffusion coefficients. By incorporating a pair of method parameters $\theta, \eta \in [0, 1]$ into both the drift and diffusion parts, the new schemes are a kind of drift-diffusion double-implicit method. Within a general framework, we offer upper mean-square error bounds for the proposed schemes, based on certain error terms involving only the exact solution processes. Such error bounds allow us to easily analyze mean-square convergence rates of the schemes without relying on a priori high-order moment estimates of the numerical approximations. Under an additional globally polynomial growth condition, we recover the expected mean-square convergence rate of order one for the considered schemes with $\theta \in [\tfrac12, 1], \eta \in [0, 1]$. Moreover, some of the proposed schemes are applied to solve three SDE models evolving in the positive domain $(0, \infty)$. More specifically, the particular drift-diffusion implicit Milstein method ($\theta = \eta = 1$) is utilized to approximate the Heston $\tfrac32$-volatility model and the stochastic Lotka-Volterra competition model, while the semi-implicit Milstein method ($\theta =1, \eta = 0$) is used to solve the Ait-Sahalia interest rate model. Thanks to the previously obtained error bounds, we reveal the optimal mean-square convergence rate of these positivity-preserving schemes under conditions more relaxed than in existing relevant results in the literature. Numerical examples are also reported to confirm the previous findings.
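As an illustration of the scheme family (on an assumed scalar test equation, not one of the three models above), the Python sketch below implements the semi-implicit Milstein method ($\theta = 1$, $\eta = 0$): the drift is treated implicitly, the diffusion explicitly, and the per-step scalar implicit equation is solved by Newton's method.

```python
# Semi-implicit Milstein scheme (theta = 1, eta = 0) for the assumed test SDE
#   dX = (X - X^3) dt + sigma * X dW,   X(0) = x0,
# i.e. Y_{n+1} = Y_n + f(Y_{n+1}) h + g(Y_n) dW + 0.5 g(Y_n) g'(Y_n) (dW^2 - h).
import numpy as np

sigma = 0.5
f = lambda x: x - x**3            # non-globally Lipschitz drift
df = lambda x: 1.0 - 3.0 * x**2
g = lambda x: sigma * x
dg = lambda x: sigma

def semi_implicit_milstein(x0, T, n_steps, rng):
    h = T / n_steps
    x = x0
    for _ in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(h))
        rhs = x + g(x) * dW + 0.5 * g(x) * dg(x) * (dW**2 - h)  # explicit part
        y = x
        for _ in range(20):                                     # Newton: y - h f(y) = rhs
            y -= (y - h * f(y) - rhs) / (1.0 - h * df(y))
        x = y
    return x

rng = np.random.default_rng(2)
paths = [semi_implicit_milstein(1.0, 1.0, 200, rng) for _ in range(1000)]
print(np.mean(paths))
```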

We study the hidden-action principal-agent problem in an online setting. In each round, the principal posts a contract that specifies the payment to the agent based on each outcome. The agent then makes a strategic choice of action that maximizes her own utility, but the action is not directly observable by the principal. The principal observes the outcome and receives utility from the agent's choice of action. Based on past observations, the principal dynamically adjusts the contracts with the goal of maximizing her utility. We introduce an online learning algorithm and provide an upper bound on its Stackelberg regret. We show that when the contract space is $[0,1]^m$, the Stackelberg regret is upper bounded by $\widetilde O(\sqrt{m} \cdot T^{1-1/(2m+1)})$, and lower bounded by $\Omega(T^{1-1/(m+2)})$, where $\widetilde O$ omits logarithmic factors. This result shows that exponential-in-$m$ samples are sufficient and necessary to learn a near-optimal contract, resolving an open problem on the hardness of online contract design. Moreover, when contracts are restricted to some subset $\mathcal{F} \subset [0,1]^m$, we define an intrinsic dimension of $\mathcal{F}$ that depends on the covering number of the spherical code in the space and bound the regret in terms of this intrinsic dimension. When $\mathcal{F}$ is the family of linear contracts, we show that the Stackelberg regret grows exactly as $\Theta(T^{2/3})$. The contract design problem is challenging because the utility function is discontinuous. Bounding the discretization error in this setting has been an open problem. In this paper, we identify a limited set of directions in which the utility function is continuous, allowing us to design a new discretization method and bound its error. This approach enables the first upper bound with no restrictions on the contract and action space.
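The single-round interaction is simple to write down, and doing so also shows where the discontinuity of the principal's utility comes from: the agent's best response jumps as the contract varies. The Python sketch below uses assumed toy outcome distributions, costs, and rewards, not the paper's instances or its learning algorithm.

```python
# One round of the hidden-action principal-agent interaction: the principal
# posts a contract (a payment per outcome); the agent best-responds with the
# action maximising expected payment minus cost; the principal's expected
# utility is expected reward minus expected payment under that action.
import numpy as np

def agent_best_response(contract, F, costs):
    """F[a, o]: prob. of outcome o under action a; costs[a]: agent's cost of a."""
    return int(np.argmax(F @ contract - costs))   # ties broken by lowest index here

def principal_utility(contract, F, costs, rewards):
    a = agent_best_response(contract, F, costs)
    return F[a] @ (rewards - contract)

F = np.array([[0.8, 0.2],                         # low-effort action
              [0.3, 0.7]])                        # high-effort action
costs = np.array([0.0, 0.2])
rewards = np.array([0.0, 1.0])
contract = np.array([0.0, 0.5])                   # pay 0.5 on the good outcome only
print(agent_best_response(contract, F, costs), principal_utility(contract, F, costs, rewards))
```

A small change in the contract can flip the argmax in the agent's best response, which is exactly why bounding the discretization error of the principal's utility is delicate in this problem.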

This paper considers the Cauchy problem for the nonlinear dynamic string equation of Kirchhoff type with time-varying coefficients. The objective of this work is to develop a temporal discretization algorithm capable of approximating a solution to this initial-boundary value problem. To this end, a symmetric three-layer semi-discrete scheme is employed with respect to the temporal variable, wherein the value of the nonlinear term is evaluated at the middle node point. This approach enables the numerical solution at each temporal step to be obtained by inverting linear operators, yielding a system of second-order linear ordinary differential equations. Local convergence of the proposed scheme is established; it achieves quadratic convergence with respect to the time step on the local temporal interval.
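To fix ideas, a scheme of the kind described, written here for the Kirchhoff string $u_{tt}=\varphi\bigl(t,\int_0^L u_x^2\,dx\bigr)u_{xx}+f$, might take the form (a plausible generic shape under these assumptions, not necessarily the paper's exact discretisation)
$$
\frac{u^{k+1}-2u^{k}+u^{k-1}}{\tau^{2}}
=\varphi\!\Bigl(t_{k},\int_{0}^{L}\bigl(u^{k}_{x}\bigr)^{2}\,dx\Bigr)\,
\frac{u^{k+1}_{xx}+u^{k-1}_{xx}}{2}+f(x,t_{k}),
$$
where $\tau$ is the time step and the nonlinear coefficient is frozen at the middle layer $u^{k}$; with $u^{k}$ and $u^{k-1}$ known, $u^{k+1}$ is then found by inverting a linear second-order differential operator in space, in line with the abstract's description.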

This paper focuses on the expected difference in a borrower's repayment when there is a change in the lender's credit decisions. Classical estimators overlook confounding effects, and hence the estimation error can be substantial. We therefore propose an alternative approach to constructing the estimators such that the error can be greatly reduced. The proposed estimators are shown to be unbiased, consistent, and robust through a combination of theoretical analysis and numerical testing. Moreover, we compare the power of the classical and the proposed estimators in estimating the causal quantities. The comparison is tested across a wide range of models, including linear regression models, tree-based models, and neural network-based models, under different simulated datasets that exhibit different levels of causality, degrees of nonlinearity, and distributional properties. Most importantly, we apply our approaches to a large observational dataset provided by a global technology firm that operates in both the e-commerce and the lending business. We find that the relative reduction in estimation error is strikingly substantial when the causal effects are accounted for correctly.
