
Vaccination is widely acknowledged as one of the most effective tools for preventing disease. However, there has been a rise in parental refusal and delay of childhood vaccination in recent years in the United States. This trend undermines the maintenance of herd immunity and elevates the likelihood of outbreaks of vaccine-preventable diseases. Our aim is to identify demographic or socioeconomic characteristics associated with vaccine refusal, which could help public health professionals and medical providers develop interventions targeted to concerned parents. We examine US county-level vaccine refusal data for patients under five years of age collected on a monthly basis during the period 2012--2015. These data exhibit challenging features: zero inflation, spatial dependence, seasonal variation, spatially-varying dispersion, and a large sample size (approximately 3,000 counties per month). We propose a flexible zero-inflated Conway--Maxwell--Poisson (ZICOMP) regression model that addresses these challenges. Because ZICOMP models have an intractable normalizing function, it is challenging to do Bayesian inference for these models. We propose a new hybrid Monte Carlo algorithm that permits efficient sampling and provides asymptotically exact estimates of model parameters.
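
The intractability mentioned above comes from the COMP normalizing constant, an infinite series with no closed form. The following sketch (not the authors' regression model or hybrid Monte Carlo sampler) evaluates a zero-inflated COMP probability mass function by truncating that series; the parameter values `pi`, `lam`, and `nu` are purely illustrative.

```python
import numpy as np
from scipy.special import gammaln, logsumexp

def comp_log_pmf(y, lam, nu, max_terms=500):
    """Log pmf of the Conway-Maxwell-Poisson distribution. The normalizing
    constant Z(lam, nu) = sum_j lam^j / (j!)^nu has no closed form and is
    approximated here by truncating the series."""
    j = np.arange(max_terms)
    log_terms = j * np.log(lam) - nu * gammaln(j + 1)
    log_Z = logsumexp(log_terms)
    return y * np.log(lam) - nu * gammaln(y + 1) - log_Z

def zicomp_log_pmf(y, pi, lam, nu):
    """Zero-inflated COMP: with probability pi the count is a structural zero,
    otherwise it is drawn from COMP(lam, nu)."""
    comp = comp_log_pmf(y, lam, nu)
    if y == 0:
        return np.log(pi + (1.0 - pi) * np.exp(comp))
    return np.log(1.0 - pi) + comp

# example: probability of observing 3 refusals under illustrative parameters
print(np.exp(zicomp_log_pmf(3, pi=0.4, lam=2.5, nu=1.2)))
```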

Related content

The ACM/IEEE 23rd International Conference on Model Driven Engineering Languages and Systems (MODELS) is the premier conference series for model-driven software and systems engineering, organized with the support of ACM SIGSOFT and IEEE TCSE. Since 1998, MODELS has covered all aspects of modeling, from languages and methods to tools and applications. Its participants come from diverse backgrounds, including researchers, academics, engineers, and industry professionals. MODELS 2019 is a forum in which participants can exchange cutting-edge research results and innovative practical experience around modeling and model-driven software and systems. This year's edition will give the modeling community an opportunity to further advance the foundations of modeling and to present innovative applications of modeling in emerging areas such as cyber-physical systems, embedded systems, socio-technical systems, cloud computing, big data, machine learning, security, open source, and sustainability.
March 20, 2023

The Sorted L-One Estimator (SLOPE) is a popular regularization method in regression that induces clustering of the estimated coefficients; that is, the estimator can have coefficients of identical magnitude. In this paper, we derive the asymptotic distribution of SLOPE for the ordinary least squares, Huber, and quantile loss functions, and use it to study the clustering behavior in the limit. This requires a stronger type of convergence, since clustering properties do not follow from classical weak convergence alone. We establish asymptotic control of the false discovery rate under an asymptotically orthogonal design of the regressors. We also show how to extend the framework to a broader class of regularizers beyond SLOPE.
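
As a concrete illustration of the penalty behind SLOPE (not the paper's asymptotic analysis), the sketch below evaluates the sorted-$\ell_1$ penalty $\sum_i \lambda_i |\beta|_{(i)}$ with a Benjamini-Hochberg-type weight sequence; the coefficient vector contains two entries of identical magnitude, the kind of tie the paper refers to as a cluster. The weights and coefficients are illustrative choices.

```python
import numpy as np
from scipy.stats import norm

def slope_penalty(beta, lam):
    """SLOPE penalty: sum_i lam_i * |beta|_(i), where |beta|_(1) >= |beta|_(2) >= ...
    and lam is a non-increasing sequence of positive weights."""
    abs_sorted = np.sort(np.abs(beta))[::-1]          # decreasing |beta|
    lam_sorted = np.sort(np.asarray(lam))[::-1]       # decreasing weights
    return float(np.dot(lam_sorted, abs_sorted))

# BH-type weight sequence often used with SLOPE (an illustrative choice)
p, q = 5, 0.1
lam = norm.ppf(1 - q * np.arange(1, p + 1) / (2 * p))

beta = np.array([3.0, -3.0, 1.2, 0.0, -0.4])   # two coefficients tied in magnitude
print(slope_penalty(beta, lam))
```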

In recent years, recommender systems have become crucial for delivering personalized services that match users' preferences. With personalized recommendation services, users can enjoy a variety of recommendations such as movies, books, ads, restaurants, and more. Despite these benefits, personalized recommendation typically requires the collection of personal data for user modelling and analysis, which can make users susceptible to attribute inference attacks. Specifically, the vulnerability of existing centralized recommenders to attribute inference attacks gives malicious attackers a backdoor for inferring users' private attributes, as these systems memorize information about their training data (i.e., interaction data and side information). An emerging practice is to implement recommender systems in the federated setting, which enables all user devices to collaboratively learn a shared global recommender while keeping all the training data on device. However, privacy issues in federated recommender systems have rarely been explored. In this paper, we first design a novel attribute inference attacker to perform a comprehensive privacy analysis of state-of-the-art federated recommender models. The experimental results show that the vulnerability of each model component to attribute inference attacks varies, highlighting the need for new defense approaches. We therefore propose a novel adaptive privacy-preserving approach that protects users' sensitive data in the presence of attribute inference attacks while maximizing recommendation accuracy. Extensive experimental results on two real-world datasets validate the superior performance of our model in both recommendation effectiveness and resistance to inference attacks.
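
To make the threat model concrete, here is a minimal, generic attribute inference attack (not the attacker designed in the paper): an adversary who observes per-user representations leaked by a recommender (e.g., user embeddings or uploaded updates) trains a classifier on a shadow set of users with known attributes and applies it to victim users. All data here are synthetic.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_users, dim = 1000, 32
gender = rng.integers(0, 2, n_users)                                # private attribute
embeddings = rng.normal(size=(n_users, dim)) + gender[:, None] * 0.5  # attribute leakage

# Shadow users (attribute known to the attacker) vs. victim users
X_shadow, X_victim, y_shadow, y_victim = train_test_split(
    embeddings, gender, test_size=0.3, random_state=0)

attacker = LogisticRegression(max_iter=1000).fit(X_shadow, y_shadow)
print("attack accuracy on victims:", attacker.score(X_victim, y_victim))
```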

A class of implicit Milstein type methods is introduced and analyzed in the present article for stochastic differential equations (SDEs) with non-globally Lipschitz drift and diffusion coefficients. By incorporating a pair of method parameters $\theta, \eta \in [0, 1]$ into both the drift and diffusion parts, the new schemes are drift-diffusion doubly implicit methods. Within a general framework, we offer upper mean-square error bounds for the proposed schemes, based on error terms that involve only the exact solution processes. Such error bounds allow us to analyze the mean-square convergence rates of the schemes easily, without relying on a priori high-order moment estimates of the numerical approximations. Under an additional globally polynomial growth condition, we recover the expected mean-square convergence rate of order one for the considered schemes with $\theta \in [\tfrac12, 1], \eta \in [0, 1]$. Some of the proposed schemes are also applied to solve three SDE models evolving in the positive domain $(0, \infty)$. More specifically, the particular drift-diffusion implicit Milstein method ($\theta = \eta = 1$) is used to approximate the Heston $\tfrac32$-volatility model and the stochastic Lotka-Volterra competition model, and the semi-implicit Milstein method ($\theta = 1, \eta = 0$) is used to solve the Ait-Sahalia interest rate model. Thanks to the previously obtained error bounds, we reveal the optimal mean-square convergence rate of the positivity-preserving schemes under conditions more relaxed than in existing relevant results in the literature. Numerical examples are also reported to confirm these findings.
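
For illustration, the sketch below implements one step of the classical drift-implicit (semi-implicit) Milstein scheme for a scalar SDE, corresponding roughly to $\theta = 1, \eta = 0$ in the notation above; the drift-diffusion doubly implicit variants of the paper also treat the diffusion part implicitly and are not reproduced here. The mean-reverting test model is an illustrative choice, not one of the paper's three examples.

```python
import numpy as np
from scipy.optimize import fsolve

def semi_implicit_milstein_step(x, h, dW, f, g, dg):
    """One drift-implicit Milstein step for dX = f(X) dt + g(X) dW:
    X_{n+1} = X_n + f(X_{n+1}) h + g(X_n) dW + 0.5 g(X_n) g'(X_n) (dW^2 - h),
    solved for X_{n+1} with a nonlinear root-finder."""
    rhs = x + g(x) * dW + 0.5 * g(x) * dg(x) * (dW**2 - h)
    return fsolve(lambda y: y - h * f(y) - rhs, x0=x)[0]

# Illustrative mean-reverting model with multiplicative noise
a, b, sigma = 2.0, 1.0, 0.3
f  = lambda x: a * (b - x)
g  = lambda x: sigma * x
dg = lambda x: sigma

rng = np.random.default_rng(1)
h, x = 0.01, 0.5
for _ in range(100):                      # simulate one path on [0, 1]
    x = semi_implicit_milstein_step(x, h, rng.normal(0, np.sqrt(h)), f, g, dg)
print("X(1) ≈", x)
```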

Body weight, as an essential physiological trait, is of considerable significance in many applications such as body management, rehabilitation, and drug dosing for patient-specific treatments. Previous work on body weight estimation is mainly vision-based, using 2D/3D, depth, or infrared images, and faces problems with illumination, occlusion, and especially privacy. The pressure mapping mattress is a non-invasive and privacy-preserving tool that captures the pressure distribution over the bed surface, which correlates strongly with the body weight of the lying person. To extract body weight from this image, we propose a deep learning-based model that includes a dual-branch network to extract deep features and pose features, respectively. A contrastive learning module is combined with the deep-feature branch to help mine the factors shared across different postures of each subject. The two groups of features are then concatenated for the body weight regression task. To test the model's performance across different hardware and posture settings, we create a pressure image dataset of 10 subjects and 23 postures using a self-made pressure-sensing bedsheet. This dataset, released together with this paper, and an existing public dataset are used for validation. The results show that our model outperforms state-of-the-art algorithms on both datasets. Our research constitutes an important step toward fully automatic weight estimation in both clinical and at-home practice. Our dataset is available for research purposes at: //github.com/USTCWzy/MassEstimation.
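
A minimal sketch of a dual-branch regressor of the kind described above (an illustrative architecture, not the authors' exact network; the contrastive learning module is omitted for brevity): one branch processes the pressure image, the other the pose features, and the concatenated features feed a regression head. Input sizes and layer widths are arbitrary.

```python
import torch
import torch.nn as nn

class DualBranchWeightNet(nn.Module):
    """Illustrative dual-branch body-weight regressor."""
    def __init__(self, pose_dim=34):
        super().__init__()
        self.image_branch = nn.Sequential(               # deep features from pressure image
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())        # -> 32-d
        self.pose_branch = nn.Sequential(                 # pose features
            nn.Linear(pose_dim, 32), nn.ReLU())           # -> 32-d
        self.head = nn.Linear(64, 1)                      # weight regression

    def forward(self, pressure_img, pose):
        feats = torch.cat([self.image_branch(pressure_img),
                           self.pose_branch(pose)], dim=1)
        return self.head(feats).squeeze(-1)

model = DualBranchWeightNet()
weight = model(torch.randn(4, 1, 64, 32), torch.randn(4, 34))  # batch of 4
print(weight.shape)  # torch.Size([4])
```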

The coresets approach, also called subsampling or subset selection, aims to select a subsample as a surrogate for the observed sample. Such an approach has been used pervasively in large-scale data analysis. Existing coresets methods construct the subsample using a subset of rows from the predictor matrix. Such methods can be significantly inefficient when the predictor matrix is sparse or numerically sparse. To overcome the limitation, we develop a novel element-wise subset selection approach, called core-elements, for large-scale least squares estimation in classical linear regression. We provide a deterministic algorithm to construct the core-elements estimator, only requiring an $O(\mbox{nnz}(\mathbf{X})+rp^2)$ computational cost, where $\mathbf{X}$ is an $n\times p$ predictor matrix, $r$ is the number of elements selected from each column of $\mathbf{X}$, and $\mbox{nnz}(\cdot)$ denotes the number of non-zero elements. Theoretically, we show that the proposed estimator is unbiased and approximately minimizes an upper bound of the estimation variance. We also provide an approximation guarantee by deriving a coresets-like finite sample bound for the proposed estimator. To handle potential outliers in the data, we further combine core-elements with the median-of-means procedure, resulting in an efficient and robust estimator with theoretical consistency guarantees. Numerical studies on various synthetic and open-source datasets demonstrate the proposed method's superior performance compared to mainstream competitors.
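
A rough illustration of element-wise selection (not the authors' algorithm, which additionally chooses elements to minimize a variance bound and includes corrections that make the estimator unbiased): keep the $r$ largest-magnitude entries of each column of $\mathbf{X}$, zero out the rest, and solve least squares with the sparsified predictor. The data below are synthetic.

```python
import numpy as np

def elementwise_sparsify(X, r):
    """Keep the r largest-magnitude entries of each column of X, zero the rest."""
    X_sub = np.zeros_like(X)
    for j in range(X.shape[1]):
        keep = np.argpartition(np.abs(X[:, j]), -r)[-r:]
        X_sub[keep, j] = X[keep, j]
    return X_sub

rng = np.random.default_rng(0)
n, p, r = 10_000, 20, 500
X = rng.normal(size=(n, p)) * (rng.random((n, p)) < 0.05)   # sparse predictor matrix
beta = rng.normal(size=p)
y = X @ beta + rng.normal(scale=0.1, size=n)

X_sub = elementwise_sparsify(X, r)
beta_hat, *_ = np.linalg.lstsq(X_sub, y, rcond=None)
print("max abs estimation error:", np.max(np.abs(beta_hat - beta)))
```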

Understanding and analyzing markets is crucial, yet analytical equilibrium solutions remain largely infeasible. Recent breakthroughs in equilibrium computation rely on zeroth-order policy gradient estimation. These approaches commonly suffer from high variance and are computationally expensive. The use of fully differentiable simulators would enable more efficient gradient estimation. However, the discrete allocation of goods in economic simulations is a non-differentiable operation. This renders the first-order Monte Carlo gradient estimator inapplicable and the learning feedback systematically misleading. We propose a novel smoothing technique that creates a surrogate market game, in which first-order methods can be applied. We provide theoretical bounds on the resulting bias which justifies solving the smoothed game instead. These bounds also allow choosing the smoothing strength a priori such that the resulting estimate has low variance. Furthermore, we validate our approach via numerous empirical experiments. Our method theoretically and empirically outperforms zeroth-order methods in approximation quality and computational efficiency.
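
As an illustration of the kind of smoothing involved (not the paper's exact surrogate game), consider awarding a single item to the highest bidder: the argmax allocation has zero gradient almost everywhere, whereas a temperature-controlled softmax allocation is differentiable, and its bias shrinks as the temperature tau decreases. The bids and values below are arbitrary.

```python
import torch

def hard_allocation(bids):
    """Winner-takes-all allocation: non-differentiable in the bids."""
    return torch.nn.functional.one_hot(bids.argmax(), bids.numel()).float()

def smoothed_allocation(bids, tau=0.1):
    """Softmax surrogate: differentiable, approaches the hard allocation as tau -> 0."""
    return torch.softmax(bids / tau, dim=0)

bids = torch.tensor([1.0, 2.0, 1.5], requires_grad=True)
values = torch.tensor([3.0, 2.5, 4.0])

print(hard_allocation(bids))                           # [0., 1., 0.], no gradient signal
utility = (smoothed_allocation(bids) * values).sum()   # differentiable payoff
utility.backward()
print(bids.grad)                                       # first-order gradient for learning
```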

Applying simple linear regression models, an economist analysed published data from the 2016 and 2017 editions of an influential annual ranking of consumer outlets for Dutch New Herring and concluded that the ranking was manipulated. His finding was promoted by his university in national and international media, and the ensuing public outrage led to the discontinuation of the survey. We reconstitute the dataset, correcting errors and exposing features that are already important in a descriptive analysis of the data. The economist has continued his investigations, and in a follow-up publication repeats the same accusations. We point out errors in his reasoning and show that the alleged evidence for deliberate manipulation of the ranking could easily be an artefact of specification errors. Temporal and spatial factors are both important and complex, and their effects cannot be captured by simple models, given the small sample sizes and the many factors determining the perceived taste of a food product.

Parental refusal and delay of childhood vaccination has increased in recent years in the United States. This phenomenon challenges the maintenance of herd immunity and increases the risk of outbreaks of vaccine-preventable diseases. We examine US county-level vaccine refusal for patients under five years of age, collected during the period 2012--2015 from an administrative healthcare dataset. We model these data with a Bayesian zero-inflated negative binomial regression model to capture the social and political processes associated with vaccine refusal, as well as factors that affect our measurement of vaccine refusal. Our work highlights fine-scale socio-demographic characteristics associated with vaccine refusal nationally, finds that spatial clustering in refusal can be explained by such factors, and has the potential to aid the development of targeted public health strategies for optimizing vaccine uptake.
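
For readers unfamiliar with the likelihood, the sketch below writes down a zero-inflated negative binomial model and fits an intercept-only version by maximum likelihood; the paper's model is Bayesian and includes covariates, spatial structure, and measurement components, none of which are reproduced here. The simulated data are illustrative.

```python
import numpy as np
from scipy.stats import nbinom
from scipy.optimize import minimize

def zinb_negloglik(params, y):
    """Negative log-likelihood of an intercept-only zero-inflated negative binomial."""
    pi = 1 / (1 + np.exp(-params[0]))      # zero-inflation probability
    mu = np.exp(params[1])                  # NB mean
    size = np.exp(params[2])                # NB dispersion
    p = size / (size + mu)                  # scipy's (n, p) parametrization
    pmf = nbinom.pmf(y, size, p)
    lik = np.where(y == 0, pi + (1 - pi) * pmf, (1 - pi) * pmf)
    return -np.sum(np.log(lik + 1e-300))

rng = np.random.default_rng(0)
# 30% structural zeros mixed with negative binomial counts
y = np.where(rng.random(5000) < 0.3, 0, rng.negative_binomial(2, 0.4, 5000))

fit = minimize(zinb_negloglik, x0=np.zeros(3), args=(y,), method="Nelder-Mead")
print("estimated zero-inflation probability:", 1 / (1 + np.exp(-fit.x[0])))
```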

The concept of causality plays an important role in human cognition. In the past few decades, causal inference has been well developed in many fields, such as computer science, medicine, economics, and education. With the advancement of deep learning, it has been increasingly applied to causal inference with counterfactual data. Typically, deep causal models map the characteristics of covariates to a representation space and then design various objective functions to estimate counterfactual outcomes unbiasedly under different optimization methods. This paper surveys deep causal models, and its core contributions are as follows: 1) we provide relevant metrics under multiple treatments and continuous-dose treatment; 2) we give a comprehensive overview of deep causal models from both the temporal-development and method-classification perspectives; 3) we provide a detailed and comprehensive classification and analysis of relevant datasets and source code.
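
As one example of the metrics such surveys discuss, the sketch below computes PEHE (Precision in Estimation of Heterogeneous Effect), a standard error measure for binary-treatment causal models; its extensions to multiple treatments and continuous doses, as covered in the paper, are not shown. The data are synthetic.

```python
import numpy as np

def pehe(tau_hat, tau_true):
    """PEHE: root mean squared error between estimated and true individual
    treatment effects (binary-treatment setting)."""
    return np.sqrt(np.mean((tau_hat - tau_true) ** 2))

rng = np.random.default_rng(0)
tau_true = rng.normal(1.0, 0.5, size=1000)             # true individual effects
tau_hat = tau_true + rng.normal(0.0, 0.3, size=1000)   # a model's estimates
print("PEHE:", pehe(tau_hat, tau_true))
```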

This paper focuses on the expected difference in a borrower's repayment when there is a change in the lender's credit decisions. Classical estimators overlook confounding effects, and hence their estimation error can be substantial. We therefore propose an alternative approach to constructing estimators such that this error is greatly reduced. The proposed estimators are shown to be unbiased, consistent, and robust through a combination of theoretical analysis and numerical testing. Moreover, we compare the power of the classical and proposed estimators in estimating the causal quantities. The comparison is tested across a wide range of models, including linear regression models, tree-based models, and neural network-based models, under different simulated datasets that exhibit different levels of causality, degrees of nonlinearity, and distributional properties. Most importantly, we apply our approaches to a large observational dataset provided by a global technology firm that operates in both the e-commerce and the lending business. We find that the relative reduction in estimation error is strikingly substantial when the causal effects are accounted for correctly.
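
The abstract does not name the proposed estimators, so the sketch below only illustrates the underlying problem with a generic example: a naive difference in repayment means is biased when approval depends on a confounder (borrower risk), while a standard adjustment such as inverse propensity weighting recovers the true effect. The data-generating process and the IPW correction are illustrative, not the paper's method.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 50_000
risk = rng.normal(size=n)                                  # confounder
approve = rng.random(n) < 1 / (1 + np.exp(risk))           # safer borrowers approved more often
repay = 1.0 * approve - 2.0 * risk + rng.normal(size=n)    # true effect of approval = 1.0

naive = repay[approve].mean() - repay[~approve].mean()     # confounded estimate

# Inverse propensity weighting with an estimated propensity score
ps = LogisticRegression().fit(risk[:, None], approve).predict_proba(risk[:, None])[:, 1]
ipw = np.mean(approve * repay / ps) - np.mean((~approve) * repay / (1 - ps))
print(f"naive: {naive:.2f}, IPW: {ipw:.2f}, truth: 1.00")
```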
