
The aim of this article is to investigate the well-posedness, stability, and convergence of solutions to the time-dependent Maxwell's equations for the electric field in conductive media, in both continuous and discrete settings. The situation we consider represents a physical problem in which a subdomain is immersed in a homogeneous medium characterized by constant dielectric permittivity and conductivity. It is well known that in such homogeneous regions the solution to Maxwell's equations also solves the wave equation, which makes computations very efficient. Our problem can therefore be regarded as a coupling problem, for which we derive a stability and convergence analysis. A number of numerical examples validate the theoretical convergence rates of the proposed stabilized explicit finite element scheme.
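
For orientation, a common electric-field formulation of Maxwell's equations in a conductive medium (our notation and scaling, which may differ from the paper's exact statement) reads

$$ \varepsilon\,\partial_{tt} E + \sigma\,\partial_t E + \nabla \times \big( \mu^{-1} \nabla \times E \big) = -\,\partial_t J, $$

where $\varepsilon$, $\sigma$, and $\mu$ are the permittivity, conductivity, and permeability and $J$ is an applied current. In a homogeneous subregion with constant coefficients and divergence-free $E$, the identity $\nabla \times \nabla \times E = -\Delta E$ turns this into a damped wave equation, which is what makes the wave-equation solver in that region efficient.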

Related content

Dephasing is a prominent noise mechanism that afflicts quantum information carriers, and it is one of the main challenges in realizing useful quantum computation, communication, and sensing. Here we consider discrimination and estimation of bosonic dephasing channels, using the most general adaptive strategies allowed by quantum mechanics. We reduce these difficult quantum problems to simple classical ones based on the probability densities defining the bosonic dephasing channels. By doing so, we rigorously establish the optimal performance of various distinguishability and estimation tasks and construct explicit strategies that achieve this performance. To the best of our knowledge, this is the first example of a non-Gaussian bosonic channel for which there are exact solutions for these tasks.
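
For context, a bosonic dephasing channel is commonly parameterized by a probability density $p$ on the phase as

$$ \mathcal{N}_p(\rho) = \int_{-\pi}^{\pi} p(\phi)\, e^{-i\phi \hat n}\, \rho\, e^{i\phi \hat n}\, d\phi, $$

where $\hat n$ is the number operator (standard definition, stated here in our notation); this density $p$ is the classical object to which the discrimination and estimation tasks are reduced.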

Deep denoisers have shown excellent performance in solving inverse problems in signal and image processing. To guarantee convergence, the denoiser needs to satisfy a Lipschitz-type condition such as non-expansiveness; however, enforcing such constraints inevitably compromises recovery performance. This paper introduces a novel training strategy that enforces a weaker constraint on the deep denoiser, called pseudo-contractiveness. By studying the spectrum of the Jacobian matrix, relationships between the different denoiser assumptions are revealed. Effective algorithms based on gradient descent and the Ishikawa process are derived, and the stronger assumption of strict pseudo-contractiveness yields efficient algorithms based on half-quadratic splitting and forward-backward splitting. The proposed algorithms are proven to converge strongly to a fixed point. A training strategy based on holomorphic transformations and functional calculi is proposed to enforce the pseudo-contractiveness assumption. Extensive experiments demonstrate the superior performance of the pseudo-contractive denoiser compared to related denoisers; the proposed methods are competitive in terms of both visual quality and quantitative metrics.
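
As a rough illustration of how a denoiser enters such fixed-point schemes, the following is a minimal sketch of an Ishikawa-type plug-and-play iteration for a linear inverse problem; the operator splitting, step sizes, and the toy soft-thresholding "denoiser" are our placeholders, not the paper's trained pseudo-contractive denoiser or its exact algorithm.

```python
import numpy as np

def ishikawa_pnp(y, A, denoise, alpha=0.5, beta=0.5, step=1.0, iters=200):
    """Ishikawa-type plug-and-play iteration (illustrative sketch only).

    T combines a gradient step on the data term 0.5*||A x - y||^2 with a
    denoiser acting as the prior; alpha and beta are the Ishikawa relaxation
    parameters.
    """
    x = A.T @ y                                   # simple initialization
    T = lambda z: denoise(z - step * (A.T @ (A @ z - y)))
    for _ in range(iters):
        z = (1 - beta) * x + beta * T(x)          # inner Ishikawa step
        x = (1 - alpha) * x + alpha * T(z)        # outer relaxation step
    return x

# Toy usage with soft-thresholding standing in for a deep denoiser.
rng = np.random.default_rng(0)
A = rng.standard_normal((64, 128)) / 8.0
x_true = np.zeros(128)
x_true[rng.choice(128, 10, replace=False)] = 1.0
y = A @ x_true + 0.01 * rng.standard_normal(64)
soft = lambda z: np.sign(z) * np.maximum(np.abs(z) - 0.05, 0.0)
x_hat = ishikawa_pnp(y, A, soft)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

When the composed operator is pseudo-contractive rather than non-expansive, relaxation schemes of this Ishikawa type are the classical tool for recovering convergence to a fixed point.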

We consider the posets of equivalence relations on finite sets under the standard embedding ordering and under the consecutive embedding ordering. In the latter case, the relations are also assumed to have an underlying linear order, which governs consecutive embeddings. For each poset we ask the well quasi-order and atomicity decidability questions: Given finitely many equivalence relations $\rho_1,\dots,\rho_k$, is the downward closed set Av$(\rho_1,\dots,\rho_k)$ consisting of all equivalence relations which do not contain any of $\rho_1,\dots,\rho_k$: (a) well-quasi-ordered, meaning that it contains no infinite antichains? and (b) atomic, meaning that it is not a union of two proper downward closed subsets, or, equivalently, that it satisfies the joint embedding property?
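
As a small illustration of the containment order (our example, not taken from the paper): viewing equivalence relations as set partitions, the relation $\rho$ with classes $\{1,2\}\,|\,\{3\}$ embeds into $\sigma$ with classes $\{1,2,3\}\,|\,\{4,5\}$ via $1\mapsto 1$, $2\mapsto 2$, $3\mapsto 4$, since this map preserves both equivalence and non-equivalence; hence $\sigma \notin$ Av$(\rho)$. Under the consecutive ordering, the image is additionally required to form an interval of the underlying linear order, so this particular map would not witness a consecutive embedding.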

We propose a generalization of nonlinear stability of numerical one-step integrators to Riemannian manifolds in the spirit of Butcher's notion of B-stability. Taking inspiration from Simpson-Porco and Bullo, we introduce non-expansive systems on such manifolds and define B-stability of integrators. In this first exposition, we provide concrete results for a geodesic version of the Implicit Euler (GIE) scheme. We prove that the GIE method is B-stable on Riemannian manifolds with non-positive sectional curvature. We show through numerical examples that the GIE method is expansive when applied to a certain non-expansive vector field on the 2-sphere, and that the GIE method does not necessarily possess a unique solution for large enough step sizes. Finally, we derive a new improved global error estimate for general Lie group integrators.
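
For reference, one natural way to write a geodesic implicit Euler step (our reconstruction, which may differ from the authors' exact formulation) is to seek $y_{n+1}$ such that

$$ \exp_{y_{n+1}}^{-1}(y_n) + h\, f(y_{n+1}) = 0, $$

which reduces to the familiar $y_{n+1} = y_n + h\, f(y_{n+1})$ in the Euclidean case. B-stability then requires that for a non-expansive vector field $f$ and any two numerical solutions, $d(y_{n+1}, z_{n+1}) \le d(y_n, z_n)$, where $d$ is the Riemannian distance.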

We propose a diffusion approximation method for continuous-state Markov Decision Processes (MDPs) that can be used for autonomous navigation and control in unstructured off-road environments. In contrast to most decision-theoretic planning frameworks, which assume a fully known state transition model, we design a method that removes this strong assumption, which is often extremely difficult to satisfy in practice. We first take a second-order Taylor expansion of the value function; the Bellman optimality equation is then approximated by a partial differential equation that relies only on the first and second moments of the transition model. Combining this with a kernel representation of the value function, we design an efficient policy iteration algorithm whose policy evaluation step can be represented as a linear system of equations characterized by a finite set of supporting states. We first validate the proposed method through extensive simulations on 2D obstacle avoidance and 2.5D terrain navigation problems; the results show that the proposed approach significantly outperforms several baselines. We then develop a system that integrates our decision-making framework with onboard perception and conduct real-world experiments in both cluttered indoor and unstructured outdoor environments. The results from the physical systems further demonstrate the applicability of our method in challenging real-world environments.
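
In symbols, the approximation replaces the expectation in the Bellman optimality equation by a second-order expansion (our notation; the discounting and moment conventions here are assumptions):

$$ V(x) = \max_a \Big\{ r(x,a) + \gamma\, \mathbb{E}\big[ V(x') \mid x, a \big] \Big\} \approx \max_a \Big\{ r(x,a) + \gamma \big[ V(x) + \nabla V(x)^{\top} m(x,a) + \tfrac{1}{2}\operatorname{tr}\big( \Sigma(x,a)\, \nabla^2 V(x) \big) \big] \Big\}, $$

where $m(x,a)$ and $\Sigma(x,a)$ are the first and second (central) moments of the one-step transition, so only these moments of the model are needed rather than the full transition density.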

Users want to know the reliability of recommendations; they do not accept high predictions if there is no supporting reliability evidence. Recommender systems should therefore provide reliability values alongside their predictions. Research into reliability measures requires simple, plausible, and universal reliability quality measures. Research into recommender system quality measures has focused on accuracy; novelty, serendipity, and diversity have also been studied, yet there is a notable lack of research into reliability/confidence quality measures. This paper proposes a reliability quality prediction measure (RPI) and a reliability quality recommendation measure (RRI). Both quality measures are based on the hypothesis that the more suitable a reliability measure is, the better the accuracy results it provides: accuracy improves when appropriate reliability values are associated with predictions (i.e., high reliability values with correct predictions and low reliability values with incorrect predictions). The proposed reliability quality metrics can guide the design of new recommender system reliability measures, which could be applied to different matrix factorization techniques and to content-based, context-aware, and social recommendation approaches. Reliability measures designed in this way can then be tested, compared, and improved using the proposed reliability quality metrics.
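
To make the hypothesis concrete, the following sketch measures prediction error restricted to the most reliable predictions; an informative reliability measure should yield a decreasing curve. The quantile sweep and error metric are our illustrative choices, not the paper's exact RPI/RRI definitions.

```python
import numpy as np

def reliability_quality_curve(errors, reliability, quantiles=(0.2, 0.4, 0.6, 0.8, 1.0)):
    """Mean absolute error over the top-q most reliable predictions.

    If the reliability measure is informative, the error should decrease as we
    restrict to more reliable predictions (illustrative sketch only; the
    paper's RPI/RRI measures are defined differently).
    """
    errors = np.asarray(errors)
    order = np.argsort(-np.asarray(reliability))   # most reliable first
    return {q: float(errors[order[:max(1, int(q * len(order)))]].mean())
            for q in quantiles}

# Toy usage: a reliability score that is (noisily) anti-correlated with the error.
rng = np.random.default_rng(0)
err = np.abs(rng.normal(0.0, 1.0, 10_000))
rel = -err + rng.normal(0.0, 0.5, 10_000)
print(reliability_quality_curve(err, rel))
```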

We consider nonparametric Bayesian inference in a multidimensional diffusion model with reflecting boundary conditions based on discrete high-frequency observations. We prove a general posterior contraction rate theorem in $L^2$-loss, which is applied to Gaussian priors. The resulting posteriors, as well as their posterior means, are shown to converge to the ground truth at the minimax optimal rate over H\"older smoothness classes in any dimension. Of independent interest and as part of our proofs, we show that certain frequentist penalized least squares estimators are also minimax optimal.

Collecting large quantities of high-quality data is often prohibitively expensive or impractical, and is a crucial bottleneck in machine learning. One may instead augment a small set of $n$ data points from the target distribution with data from more accessible sources such as public datasets, data collected under different circumstances, or data synthesized by generative models. Blurring these distinctions, we refer to such data as `surrogate data'. We define a simple scheme for integrating surrogate data into training and use both theoretical models and empirical studies to explore its behavior. Our main findings are: $(i)$ integrating surrogate data can significantly reduce the test error on the original distribution; $(ii)$ in order to reap this benefit, it is crucial to use optimally weighted empirical risk minimization; $(iii)$ the test error of models trained on mixtures of real and surrogate data is well described by a scaling law, which can be used to predict the optimal weighting and the gain from surrogate data.
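
Below is a minimal sketch of the weighted empirical risk minimization step, here instantiated as weighted ridge regression with a hand-chosen mixing weight; the model and parameters are placeholders, and the paper instead derives the optimal weighting from its scaling law.

```python
import numpy as np

def weighted_ridge(X_real, y_real, X_sur, y_sur, alpha=0.7, lam=1e-2):
    """Weighted ERM over real + surrogate data (illustrative sketch only).

    Real samples receive total weight alpha and surrogate samples 1 - alpha,
    normalized by each set's size, before solving the ridge normal equations.
    """
    n, m = len(y_real), len(y_sur)
    w = np.concatenate([np.full(n, alpha / n), np.full(m, (1 - alpha) / m)])
    X = np.vstack([X_real, X_sur])
    y = np.concatenate([y_real, y_sur])
    d = X.shape[1]
    return np.linalg.solve(X.T @ (w[:, None] * X) + lam * np.eye(d), X.T @ (w * y))
```

Sweeping `alpha` and recording the resulting test errors is one simple way to estimate the optimal weight empirically, which the scaling-law analysis predicts in closed form.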

The interest in network analysis of bibliographic data has grown substantially in recent years, yet comprehensive statistical models for examining the complete dynamics of scientific networks based on bibliographic data are generally lacking. Current empirical studies often restrict the analysis either to paper citation networks (paper-by-paper) or to author networks (author-by-author). However, such networks encompass not only direct connections between papers but also indirect relationships between the references of papers connected by a citation link. In this paper, we extend recently developed relational hyperevent models (RHEM) for analyzing scientific networks, introducing new covariates that represent theoretically meaningful and empirically interesting sub-network configurations. The model accommodates testing hypotheses that consider: (i) the polyadic nature of scientific publication events, and (ii) the interdependencies between the authors and references of current and prior papers. We implement the model using purpose-built, publicly available open-source software, and demonstrate its empirical value in an analysis of a large publicly available scientific network dataset. Assessing the relative strength of the various effects reveals that both the hyperedge structure of publication events and the interconnection between authors and references significantly improve our understanding and interpretation of collaborative scientific production.

In the realm of cost-sharing mechanisms, the vulnerability to Sybil strategies, where agents can create fake identities to manipulate outcomes, has not yet been studied. In this paper, we examine several cost-sharing mechanisms proposed in the literature and highlight their lack of Sybil resistance. Furthermore, we prove that under mild conditions, a Sybil-proof cost-sharing mechanism for public excludable goods is at least $(n/2+1)$-approximate. This reveals an exponential increase in the worst-case social cost relative to environments in which agents cannot use Sybil strategies. To circumvent these negative results, we introduce the concept of $\textit{Sybil Welfare Invariant}$ mechanisms, in which a mechanism maintains its worst-case welfare under Sybil strategies for every set of prior beliefs, even when the mechanism is not Sybil-proof. Finally, we prove that the Shapley value mechanism for public excludable goods satisfies this property, and deduce that its worst-case social cost is the $n$th harmonic number $\mathcal H_n$ even under equilibria of the game with Sybil strategies, matching the worst-case social cost bound for cost-sharing mechanisms. This finding carries important implications for decentralized autonomous organizations (DAOs), indicating that they can fund public excludable goods efficiently even when the total number of agents is unknown.
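
For reference (standard background, stated in our notation): for a public excludable good of cost $C$, the Shapley value mechanism charges each of the $k$ agents it serves an equal share $C/k$, and the guarantee above involves the harmonic number

$$ \mathcal H_n = \sum_{i=1}^{n} \frac{1}{i} = 1 + \tfrac12 + \cdots + \tfrac1n = \Theta(\log n), $$

which is exponentially smaller than the linear $(n/2+1)$-approximation forced on Sybil-proof mechanisms.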
