
How to achieve a tradeoff between privacy and utility is one of the fundamental problems in private data analysis. In this paper, we give a rigorous differential privacy analysis of networks in the presence of covariates via a generalized $\beta$-model, which has an $n$-dimensional degree parameter $\beta$ and a $p$-dimensional homophily parameter $\gamma$. Under $(k_n, \epsilon_n)$-edge differential privacy, we use the popular Laplace mechanism to release the network statistics. The method of moments is used to estimate the unknown model parameters. We establish conditions guaranteeing consistency of the differentially private estimators $\widehat{\beta}$ and $\widehat{\gamma}$ as the number of nodes $n$ goes to infinity, which reveal an interesting tradeoff between the privacy parameter and the model parameters. Consistency is shown by applying a two-stage Newton's method to obtain an upper bound on the error between $(\widehat{\beta},\widehat{\gamma})$ and the true value $(\beta, \gamma)$ in the $\ell_\infty$ distance, with a convergence rate of rough order $1/n^{1/2}$ for $\widehat{\beta}$ and $1/n$ for $\widehat{\gamma}$, respectively. Further, we derive the asymptotic normality of $\widehat{\beta}$ and $\widehat{\gamma}$, whose asymptotic variances are the same as those of the non-private estimators under some conditions. Our paper sheds light on how to explore asymptotic theory under differential privacy in a principled manner; these principled methods should be applicable to a class of network models with covariates beyond the generalized $\beta$-model. Numerical studies and a real data analysis demonstrate our theoretical findings.
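
As a minimal illustration of the noisy release step, the Python sketch below adds Laplace noise to a degree sequence. It relies only on the standard fact that toggling any k edges changes the degree sequence by at most 2k in the $\ell_1$ norm; the function names and constants are illustrative and not taken from the paper.

import numpy as np

def release_degrees(adj, epsilon, k=1, seed=None):
    # adj: symmetric 0/1 adjacency matrix without self-loops.
    # Toggling any k edges changes the degree sequence by at most 2k in the
    # l1 norm, so adding independent Laplace(2k/epsilon) noise to each degree
    # gives (k, epsilon)-edge differential privacy via the Laplace mechanism.
    rng = np.random.default_rng(seed)
    degrees = adj.sum(axis=1).astype(float)
    return degrees + rng.laplace(scale=2.0 * k / epsilon, size=degrees.shape)

# Toy usage: release the degree sequence of a small random graph.
rng = np.random.default_rng(0)
n = 50
upper = np.triu(rng.binomial(1, 0.2, size=(n, n)), 1)
adj = upper + upper.T
noisy_degrees = release_degrees(adj, epsilon=1.0, k=1, seed=1)

In the paper's setting, such noisy degrees (together with covariate-weighted statistics) would then be plugged into the moment equations to obtain $\widehat{\beta}$ and $\widehat{\gamma}$.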

Related content

Networking: IFIP International Conferences on Networking. Explanation: an international conference series on networking. Publisher: IFIP. SIT:

The aim of change-point detection is to discover changes in behavior that lie behind time sequence data. In this article, we study the case where the data come from an inhomogeneous Poisson process or a marked Poisson process. We present a methodology for detecting multiple offline change-points based on a minimum contrast estimator. In particular, we explain how to handle the continuous nature of the process given the available discrete observations. In addition, we select the appropriate number of regimes via a cross-validation procedure, which is particularly convenient here owing to the nature of the Poisson process. Through experiments on simulated and real data sets, we demonstrate the value of the proposed method. The proposed method has been implemented in the R package \texttt{CptPointProcess}.
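
The following Python sketch conveys the general idea of minimum contrast segmentation for a Poisson process, using a crude time discretization and dynamic programming. It is not the estimator implemented in \texttt{CptPointProcess}, and the bin-based contrast is only a stand-in for the paper's treatment of the continuous-time process.

import numpy as np

def poisson_contrast(N, L):
    # Negative Poisson log-likelihood (up to constants) for a segment of
    # length L containing N events, evaluated at the plug-in rate N / L.
    if N == 0:
        return 0.0
    return -(N * np.log(N / L) - N)

def detect_changepoints(event_times, T, n_bins=200, n_segments=3):
    # Discretize [0, T] into bins, then find by dynamic programming the
    # segmentation into n_segments pieces minimizing the total contrast.
    edges = np.linspace(0.0, T, n_bins + 1)
    counts = np.histogram(event_times, bins=edges)[0]
    csum = np.concatenate([[0], np.cumsum(counts)])
    bin_len = T / n_bins

    def seg_cost(i, j):  # bins i .. j-1 form one segment
        return poisson_contrast(csum[j] - csum[i], (j - i) * bin_len)

    cost = np.full((n_segments + 1, n_bins + 1), np.inf)
    back = np.zeros((n_segments + 1, n_bins + 1), dtype=int)
    cost[0, 0] = 0.0
    for k in range(1, n_segments + 1):
        for j in range(k, n_bins + 1):
            for i in range(k - 1, j):
                c = cost[k - 1, i] + seg_cost(i, j)
                if c < cost[k, j]:
                    cost[k, j] = c
                    back[k, j] = i
    # backtrack the change points, reported as times
    cps, j = [], n_bins
    for k in range(n_segments, 1, -1):
        j = back[k, j]
        cps.append(j * bin_len)
    return sorted(cps)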

In this paper, we investigate tumor boundary instability by employing both analytical and numerical techniques to validate previous results and extend the analytical findings presented in a prior study by Feng et al. (2023). Building upon the insights derived from the analytical reconstruction of key results of that work in one dimension (1D) and two dimensions (2D), we extend our analysis to three dimensions (3D). Specifically, we focus on the determination of boundary instability using perturbation and asymptotic analysis along with spherical harmonics. Additionally, we validate our analytical results in a two-dimensional framework by implementing the Alternating Direction Implicit (ADI) method, as detailed in Witelski and Bowen (2003). Our primary focus has been on ensuring that the numerically simulated propagation speed aligns accurately with the analytical findings. Furthermore, we match the simulated boundary stability with the analytical predictions derived from the evolution function, which is defined in subsequent sections of our paper. This alignment is essential for accurately determining the stability or instability of tumor boundaries.
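
As a hedged illustration of the numerical building block, the sketch below implements one Peaceman-Rachford ADI step for the plain 2D heat equation with homogeneous Dirichlet boundaries. The tumor model of the paper involves additional terms and a moving boundary, so this is only a schematic of the alternating implicit sweeps in the spirit of Witelski and Bowen (2003).

import numpy as np
from scipy.linalg import solve_banded

def adi_heat_step(u, r):
    # One Peaceman-Rachford ADI step for u_t = u_xx + u_yy on a square grid
    # with homogeneous Dirichlet boundaries; r = dt / (2 * dx**2).
    n = u.shape[0]
    m = n - 2                        # interior points per direction
    # banded form of (I + r * tridiag(-1, 2, -1)) on the interior points
    ab = np.zeros((3, m))
    ab[0, 1:] = -r
    ab[1, :] = 1 + 2 * r
    ab[2, :-1] = -r

    def explicit(v):                 # apply (I + r * second difference) along axis 0
        out = v.copy()
        out[1:-1] = v[1:-1] + r * (v[:-2] - 2 * v[1:-1] + v[2:])
        return out

    # sweep 1: implicit in x, explicit in y
    rhs = explicit(u.T).T
    half = u.copy()
    for j in range(1, n - 1):
        half[1:-1, j] = solve_banded((1, 1), ab, rhs[1:-1, j])
    # sweep 2: implicit in y, explicit in x
    rhs = explicit(half)
    new = half.copy()
    for i in range(1, n - 1):
        new[i, 1:-1] = solve_banded((1, 1), ab, rhs[i, 1:-1])
    return new

# Example: diffuse a smooth bump on a 64 x 64 grid.
n, dt = 64, 1e-3
xg = np.linspace(0, 1, n)
X, Y = np.meshgrid(xg, xg, indexing="ij")
u = np.sin(np.pi * X) * np.sin(np.pi * Y)
dx = xg[1] - xg[0]
for _ in range(100):
    u = adi_heat_step(u, dt / (2 * dx ** 2))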

Utilizing massive web-scale datasets has led to unprecedented performance gains in machine learning models, but also imposes outlandish compute requirements for their training. In order to improve training and data efficiency, we here push the limits of pruning large-scale multimodal datasets for training CLIP-style models. Today's most effective pruning method on ImageNet clusters data samples into separate concepts according to their embedding and prunes away the most prototypical samples. We scale this approach to LAION and improve it by noting that the pruning rate should be concept-specific and adapted to the complexity of the concept. Using a simple and intuitive complexity measure, we are able to reduce the training cost to a quarter of regular training. By filtering the LAION dataset, we find that training on a smaller set of high-quality data can lead to higher performance with significantly lower training costs. More specifically, we are able to outperform the LAION-trained OpenCLIP-ViT-B32 model on ImageNet zero-shot accuracy by 1.1 percentage points while only using 27.7% of the data and training compute. Despite a strong reduction in training cost, we also see improvements on ImageNet distribution shifts, retrieval tasks, and VTAB. On the DataComp Medium benchmark, we achieve a new state-of-the-art ImageNet zero-shot accuracy and a competitive average zero-shot accuracy on 38 evaluation tasks.
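
A schematic Python sketch of concept-specific pruning follows. The clustering, the distance-to-centroid notion of prototypicality, and the mapping from a simple within-cluster dispersion proxy to keep rates are all illustrative choices, not the exact complexity measure or rates used in the paper.

import numpy as np
from sklearn.cluster import KMeans

def concept_specific_prune(embeddings, n_concepts=100, base_keep=0.3, seed=0):
    # Cluster normalized embeddings into "concepts", score each sample by its
    # distance to the concept centroid (prototypicality), and keep a
    # concept-specific fraction of the least prototypical samples.  The keep
    # rate grows with a simple complexity proxy: average within-cluster
    # distance to the centroid.
    X = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    km = KMeans(n_clusters=n_concepts, n_init=10, random_state=seed).fit(X)
    labels, centers = km.labels_, km.cluster_centers_
    dist = np.linalg.norm(X - centers[labels], axis=1)

    complexity = np.array([dist[labels == c].mean() for c in range(n_concepts)])
    spread = complexity.max() - complexity.min() + 1e-12
    keep_rate = base_keep + (1 - base_keep) * (complexity - complexity.min()) / spread

    keep_idx = []
    for c in range(n_concepts):
        idx = np.where(labels == c)[0]
        n_keep = max(1, int(round(keep_rate[c] * len(idx))))
        # keep the least prototypical samples (largest distance to centroid)
        keep_idx.append(idx[np.argsort(dist[idx])[::-1][:n_keep]])
    return np.concatenate(keep_idx)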

The presence of intermediate confounders, also called recanting witnesses, is a fundamental challenge to the investigation of causal mechanisms in mediation analysis, preventing the identification of natural path-specific effects. Proposed alternative parameters (such as randomized interventional effects) are problematic because they can be non-null even when there is no mediation for any individual in the population; i.e., they are not an average of underlying individual-level mechanisms. In this paper we develop a novel method for mediation analysis in settings with intermediate confounding, with guarantees that the causal parameters are summaries of the individual-level mechanisms of interest. The method is based on recently proposed ideas that view causality as the transfer of information, and thus replace recanting witnesses by draws from their conditional distribution, which we call "recanting twins". We show that, in the absence of intermediate confounding, recanting twin effects recover natural path-specific effects. We present the assumptions required for identification of recanting twin effects under a standard structural causal model, as well as the assumptions under which the recanting twin identification formulas can be interpreted in the context of the recently proposed separable effects models. To estimate recanting twin effects, we develop efficient semi-parametric estimators that allow the use of data-driven methods in the estimation of the nuisance parameters. We present numerical studies of the methods using synthetic data, as well as an application to evaluate the role of new-onset anxiety and depressive disorder in explaining the relationship between gabapentin/pregabalin prescription and incident opioid use disorder among Medicaid beneficiaries with chronic pain.

In this paper, we propose a new efficient method for calculating the Gerber-Shiu discounted penalty function. The Gerber-Shiu function typically satisfies a class of integro-differential equations. We introduce physics-informed neural networks (PINNs), which embed a differential equation into the loss of the neural network using automatic differentiation. In addition, PINNs offer more freedom in setting boundary conditions and do not rely on the determination of an initial value. This suggests a way to calculate more general Gerber-Shiu functions. Numerical examples are provided to illustrate the very good performance of our approximation.
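
As a hedged sketch of how a PINN loss can encode such an integro-differential equation, the PyTorch snippet below targets the classical compound Poisson risk model with exponential claims and a unit (ruin-type) penalty, handling the non-local term with a simple trapezoidal quadrature. The constants, architecture, and quadrature are illustrative and not the configuration used in the paper.

import torch

# Classical risk model: c*m'(u) = (lam + delta)*m(u)
#     - lam * \int_0^u m(u - x) f(x) dx - lam * A(u),
# with exponential claim density f(x) = mu*exp(-mu*x), so A(u) = exp(-mu*u)
# for a unit penalty.  All constants below are illustrative.
c, lam, delta, mu, u_max = 1.5, 1.0, 0.02, 1.0, 10.0

net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1))

def pde_residual(u):
    u = u.requires_grad_(True)
    m = net(u)
    dm = torch.autograd.grad(m.sum(), u, create_graph=True)[0]
    # trapezoidal quadrature of \int_0^u m(u - x) f(x) dx per collocation point
    q = 64
    s = torch.linspace(0.0, 1.0, q).reshape(1, q)
    x = u * s                                  # (batch, q) nodes in [0, u]
    y = net((u - x).reshape(-1, 1)).reshape(-1, q) * mu * torch.exp(-mu * x)
    integral = ((y[:, 1:] + y[:, :-1]) * (x[:, 1:] - x[:, :-1]) / 2).sum(1, keepdim=True)
    return c * dm - (lam + delta) * m + lam * integral + lam * torch.exp(-mu * u)

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(2000):
    opt.zero_grad()
    u = u_max * torch.rand(256, 1)
    loss = pde_residual(u).pow(2).mean()
    # soft decay condition: the penalty function should vanish for large u
    loss = loss + net(torch.full((1, 1), u_max)).pow(2).mean()
    loss.backward()
    opt.step()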

In this paper we study the convergence of a second order finite volume approximation of the scalar conservation law. The scheme is based on the generalized Riemann problem (GRP) solver. We first investigate the stability of the GRP scheme and find that it might be entropy unstable when a shock wave is generated. By adding an artificial viscosity, we propose a new stabilized GRP scheme. Under the assumption that the numerical solutions are uniformly bounded, we prove consistency and convergence of this new GRP method.
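
The sketch below is not a GRP solver; it only illustrates, for Burgers' equation, how an artificial viscosity term enters a finite volume flux to add dissipation near shocks. The base flux (local Lax-Friedrichs), grid sizes, and viscosity coefficient are illustrative.

import numpy as np

def burgers_step(u, dx, dt, nu=0.0):
    # One explicit finite volume step for u_t + (u^2/2)_x = 0 with periodic
    # boundaries.  The numerical flux is a local Lax-Friedrichs flux plus an
    # artificial viscosity term -nu * (u_{i+1} - u_i).
    f = 0.5 * u ** 2
    up = np.roll(u, -1)                       # u_{i+1}
    fp = 0.5 * up ** 2
    a = np.maximum(np.abs(u), np.abs(up))     # local wave speed
    flux = 0.5 * (f + fp) - 0.5 * a * (up - u) - nu * (up - u)
    return u - dt / dx * (flux - np.roll(flux, 1))

# Shock-forming initial data on a periodic domain.
x = np.linspace(0, 1, 200, endpoint=False)
u = np.sin(2 * np.pi * x) + 0.5
dx = x[1] - x[0]
for _ in range(200):
    dt = 0.4 * dx / (np.abs(u).max() + 1e-12)
    u = burgers_step(u, dx, dt, nu=0.5 * dx)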

This paper proposes a novel neural network framework, denoted spectral integrated neural networks (SINNs), for solving three-dimensional forward and inverse dynamic problems. In the SINNs, the spectral integration method is applied to perform temporal discretization, and a fully connected neural network is then adopted to solve the resulting partial differential equations (PDEs) in the spatial domain. Specifically, spatial coordinates are employed as inputs to the network architecture, and the output layer is configured with multiple outputs, each dedicated to approximating the solution at a different time instance characterized by the Gaussian points used in the spectral method. By leveraging the automatic differentiation technique and the spectral integration scheme, the SINNs minimize a loss function, constructed from the governing PDEs and boundary conditions, to obtain solutions of dynamic problems. Additionally, we utilize polynomial basis functions to expand the unknown function, aiming to enhance the performance of SINNs in addressing inverse problems. The conceived framework is tested on six forward and inverse dynamic problems involving nonlinear PDEs. Numerical results demonstrate the superior performance of SINNs over the widely used physics-informed neural networks in terms of convergence speed, computational accuracy, and efficiency. It is also noteworthy that the SINNs can deliver accurate and stable solutions for long-time dynamic problems.
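
A hedged sketch of the described network layout follows: spatial coordinates in, one output per Gauss-Legendre time node. The spectral integration of the governing PDE and the loss construction are omitted, and the layer sizes and node count are illustrative.

import numpy as np
import torch

# Gauss-Legendre points in time, mapped from [-1, 1] to [0, T].
T, n_time = 1.0, 8
nodes, weights = np.polynomial.legendre.leggauss(n_time)
t_nodes = 0.5 * T * (nodes + 1.0)

# The network takes spatial coordinates (x, y, z) and returns one output per time node.
net = torch.nn.Sequential(
    torch.nn.Linear(3, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, n_time))

xyz = torch.rand(1024, 3, requires_grad=True)     # collocation points
u = net(xyz)                                      # (1024, n_time): solution at each Gauss node
# spatial derivatives at, e.g., the first time node via automatic differentiation
grad_u0 = torch.autograd.grad(u[:, 0].sum(), xyz, create_graph=True)[0]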

Spatial areal models encounter the well-known and challenging problem of spatial confounding. This issue makes it arduous to distinguish between the impacts of observed covariates and spatial random effects. Despite previous research and various proposed methods to tackle this problem, finding a definitive solution remains elusive. In this paper, we propose a simplified version of the spatial+ approach that involves dividing the covariate into two components. One component captures large-scale spatial dependence, while the other accounts for short-scale dependence. This approach eliminates the need to separately fit spatial models for the covariates. We apply this method to analyse two forms of crimes against women, namely rapes and dowry deaths, in Uttar Pradesh, India, exploring their relationship with socio-demographic covariates. To evaluate the performance of the new approach, we conduct extensive simulation studies under different spatial confounding scenarios. The results demonstrate that the proposed method provides reliable estimates of fixed effects and posterior correlations between different responses.
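
A minimal sketch of the covariate split is shown below, using repeated neighbourhood averaging as a stand-in smoother; the actual construction of the large-scale component in the paper may differ, and W and n_smooth are illustrative.

import numpy as np

def split_covariate(x, W, n_smooth=10):
    # x: covariate values on the areal units; W: row-standardized spatial
    # adjacency (neighbourhood) matrix.  Repeated neighbourhood averaging gives
    # a large-scale (smooth) component; the remainder is the short-scale part.
    smooth = x.copy()
    for _ in range(n_smooth):
        smooth = W @ smooth
    return smooth, x - smooth

# Toy usage on a ring of 10 areas with two neighbours each.
n = 10
W = np.zeros((n, n))
for i in range(n):
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = 0.5
x = np.random.default_rng(0).normal(size=n)
x_large, x_short = split_covariate(x, W)

Both components would then enter the areal regression as separate covariates.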

This paper explores an iterative coupling approach for solving linear thermo-poroelasticity problems, applied as a high-fidelity finite element discretization during the training of projection-based reduced order models. One of the main challenges in addressing coupled multi-physics problems is their complexity and computational expense. In this study, we introduce a decoupled iterative solution approach, integrated with reduced order modeling, aimed at improving the efficiency of the computational algorithm. The iterative coupling technique we employ builds upon the established fixed-stress splitting scheme that has been extensively investigated for Biot's poroelasticity. By leveraging solutions derived from this coupled iterative scheme, the reduced order model employs an additional Galerkin projection onto a reduced basis space spanned by a small number of modes obtained through proper orthogonal decomposition. The effectiveness of the proposed algorithm is demonstrated through numerical experiments, showcasing its computational efficiency.
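
The reduced order modeling step can be sketched in a few lines of Python: collect full-order snapshots from the iteratively coupled solver, extract POD modes by SVD, and Galerkin-project a (linear) full-order operator. The matrix names and sizes below are purely illustrative.

import numpy as np

def pod_basis(snapshots, r):
    # snapshots: (n_dof, n_snap) matrix of full-order solutions collected from
    # the iteratively coupled (fixed-stress type) solver; keep the first r modes.
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :r]

def galerkin_solve(A, b, Phi):
    # Project the full-order linear system A u = b onto the POD basis Phi,
    # solve the small reduced system, and lift back to the full space.
    Ar = Phi.T @ A @ Phi
    br = Phi.T @ b
    return Phi @ np.linalg.solve(Ar, br)

# Toy usage with a random symmetric positive definite system.
rng = np.random.default_rng(0)
n_dof, n_snap, r = 200, 30, 5
Phi = pod_basis(rng.normal(size=(n_dof, n_snap)), r)
M = rng.normal(size=(n_dof, n_dof))
A = M @ M.T + n_dof * np.eye(n_dof)
b = rng.normal(size=n_dof)
u_reduced = galerkin_solve(A, b, Phi)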

Entanglement is a striking feature of quantum mechanics, and it has a key property called unextendibility. In this paper, we present a framework for quantifying and investigating the unextendibility of general bipartite quantum states. First, we define the unextendible entanglement, a family of entanglement measures based on the concept of a state-dependent set of free states. The intuition behind these measures is that the more entangled a bipartite state is, the less entangled each of its individual systems is with a third party. Second, we demonstrate that the unextendible entanglement is an entanglement monotone under two-extendible quantum operations, including local operations and one-way classical communication as a special case. Normalization and faithfulness are two other desirable properties of unextendible entanglement, which we establish here. We further show that the unextendible entanglement provides efficiently computable benchmarks for the rate of exact entanglement or secret key distillation, as well as the overhead of probabilistic entanglement or secret key distillation.
