
We address the problem of solving strongly convex and smooth minimization problems using the stochastic gradient descent (SGD) algorithm with a constant step size. Previous works suggested combining the Polyak-Ruppert averaging procedure with the Richardson-Romberg extrapolation technique to reduce the asymptotic bias of SGD at the expense of a mild increase in variance. We significantly extend previous results by providing an expansion of the mean-squared error of the resulting estimator with respect to the number of iterations $n$. More precisely, we show that the mean-squared error can be decomposed into the sum of two terms: a leading one of order $\mathcal{O}(n^{-1/2})$ with explicit dependence on a minimax-optimal asymptotic covariance matrix, and a second-order term of order $\mathcal{O}(n^{-3/4})$, where the power $3/4$ cannot be improved in general. We also extend this result to $p$-th moment bounds, keeping the optimal scaling of the remainder terms with respect to $n$. Our analysis relies on the properties of the SGD iterates viewed as a time-homogeneous Markov chain. In particular, we establish that this chain is geometrically ergodic with respect to a suitably defined weighted Wasserstein semimetric.
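As a minimal sketch of the construction this abstract refines (not the paper's code): run constant-step SGD twice, with step sizes $\gamma$ and $2\gamma$, Polyak-Ruppert average each trajectory, and combine the two averages by Richardson-Romberg extrapolation so the $\mathcal{O}(\gamma)$ bias terms cancel. The least-squares objective and all parameter values below are illustrative assumptions.

```python
import numpy as np

def sgd_pr_average(grad, theta0, gamma, n, rng):
    """Constant-step SGD with a running Polyak-Ruppert average."""
    theta = theta0.copy()
    avg = np.zeros_like(theta0)
    for k in range(n):
        theta = theta - gamma * grad(theta, rng)  # one stochastic gradient step
        avg += (theta - avg) / (k + 1)            # running average of iterates
    return avg

# Illustrative strongly convex problem: f(theta) = 0.5 * E (a_i @ theta - b_i)^2
rng = np.random.default_rng(0)
A = rng.normal(size=(50, 10)) / np.sqrt(50)
b = rng.normal(size=50)

def grad(theta, rng):
    i = rng.integers(len(b))              # sample one data point
    return A[i] * (A[i] @ theta - b[i])   # unbiased stochastic gradient

gamma, n = 0.05, 200_000
bar_gamma = sgd_pr_average(grad, np.zeros(10), gamma, n, rng)
bar_2gamma = sgd_pr_average(grad, np.zeros(10), 2 * gamma, n, rng)

# Richardson-Romberg extrapolation: the O(gamma) biases of the two averaged
# estimators cancel in the combination, leaving a higher-order bias.
theta_rr = 2.0 * bar_gamma - bar_2gamma
```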

Related content

Identifying and understanding the co-occurrence of multiple long-term conditions (MLTC) in individuals with intellectual disabilities (ID) is vital for effective healthcare management. These individuals often face earlier onset and higher prevalence of MLTCs, yet specific co-occurrence patterns remain unexplored. This study applies an unsupervised approach to characterise MLTC clusters based on shared disease trajectories using electronic health records (EHRs) from 13,069 individuals with ID in Wales (2000-2021). Disease associations and temporal directionality were assessed, followed by spectral clustering to group shared trajectories. The population consisted of 52.3% males and 47.7% females, with an average of 4.5 conditions per patient. Males under 45 formed a single cluster dominated by neurological conditions (32.4%), while males aged 45 and over formed three clusters, the largest characterised by circulatory conditions (51.8%). Females under 45 formed one cluster with digestive conditions (24.6%) as the most prevalent, while those aged 45 and older showed two clusters: one dominated by circulatory conditions (34.1%), and the other by digestive (25.9%) and musculoskeletal (21.9%) system conditions. Mental illness, epilepsy, and reflux were common across groups. These clusters offer insights into disease progression in individuals with ID, informing targeted interventions and personalised healthcare strategies.
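A hedged sketch of the clustering step described above: build a symmetric condition-by-condition similarity matrix from pairwise association strengths and apply spectral clustering with a precomputed affinity. The toy matrix and the number of clusters are illustrative assumptions; the study's actual association and directionality measures are not reproduced here.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

# Toy condition-by-condition similarity matrix, standing in for association
# scores derived from the EHR disease trajectories.
rng = np.random.default_rng(1)
n_conditions = 12
S = rng.random((n_conditions, n_conditions))
S = (S + S.T) / 2          # spectral clustering needs a symmetric affinity
np.fill_diagonal(S, 1.0)

labels = SpectralClustering(
    n_clusters=3,
    affinity="precomputed",  # S is already a similarity, not raw features
    random_state=0,
).fit_predict(S)

for c in range(3):
    print("cluster", c, "->", np.where(labels == c)[0])
```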

Task-oriented dialogue systems rely on predefined conversation schemes (dialogue flows) often represented as directed acyclic graphs. These flows can be manually designed or automatically generated from previously recorded conversations. Due to variations in domain expertise or reliance on different sets of prior conversations, these dialogue flows can manifest in significantly different graph structures. Despite their importance, there is no standard method for evaluating the quality of dialogue flows. We introduce FuDGE (Fuzzy Dialogue-Graph Edit Distance), a novel metric that evaluates dialogue flows by assessing their structural complexity and representational coverage of the conversation data. FuDGE measures how well individual conversations align with a flow and, consequently, how well a set of conversations is represented by the flow overall. Through extensive experiments on manually configured flows and flows generated by automated techniques, we demonstrate the effectiveness of FuDGE and its evaluation framework. By standardizing and optimizing dialogue flows, FuDGE enables conversational designers and automated techniques to achieve higher levels of efficiency and automation.
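FuDGE's exact definition is not reproduced in the abstract, so the following is only a simplified stand-in for the underlying idea: score a conversation by the minimum edit distance between its sequence of dialogue acts and any root-to-leaf path through the flow graph, then average over the corpus. The flow, labels, and scoring rule are all assumptions for illustration, not the FuDGE metric itself.

```python
# Hypothetical dialogue flow as a DAG: node label -> successor labels.
FLOW = {"greet": ["ask_issue"], "ask_issue": ["resolve", "escalate"],
        "resolve": [], "escalate": []}

def paths(node):
    """Enumerate every root-to-leaf label sequence starting at `node`."""
    if not FLOW[node]:
        return [[node]]
    return [[node] + p for nxt in FLOW[node] for p in paths(nxt)]

def edit_distance(a, b):
    """Levenshtein distance between two label sequences (single-row DP)."""
    dp = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, y in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1,
                                     prev + (x != y))
    return dp[-1]

def flow_score(conversation):
    """Distance from one conversation to its best-matching flow path."""
    return min(edit_distance(conversation, p) for p in paths("greet"))

corpus = [["greet", "ask_issue", "resolve"],
          ["greet", "ask_issue", "chitchat", "escalate"]]
print(sum(map(flow_score, corpus)) / len(corpus))  # average misalignment
```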

Collaborative problem solving (CPS) is widely recognized as a critical 21st-century skill. Efficiently coding communication data is a major challenge in scaling up research on assessing CPS. This paper reports findings on using ChatGPT to directly code CPS chat data, benchmarking performance across multiple datasets and coding frameworks. We found that ChatGPT-based coding outperformed human coding in tasks where the discussions were characterized by colloquial language, but fell short in tasks where the discussions dealt with specialized scientific terminology and contexts. The findings offer practical guidelines for researchers developing strategies for efficient and scalable analysis of communication data from CPS tasks.
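A minimal sketch of the kind of direct-coding pipeline evaluated here, using the OpenAI chat API as one concrete way to call ChatGPT; the coding framework, label set, prompt wording, and model name are placeholders, not the paper's protocol.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

CODES = ["sharing_ideas", "negotiating", "regulating", "maintaining_communication"]

def code_utterance(utterance: str) -> str:
    """Ask the model to assign one CPS code to a single chat utterance."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "You code collaborative problem-solving chat. "
                        f"Reply with exactly one label from: {', '.join(CODES)}."},
            {"role": "user", "content": utterance},
        ],
        temperature=0,  # keep the coding as deterministic as possible
    )
    return resp.choices[0].message.content.strip()

print(code_utterance("I think we should try doubling the voltage first."))
```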

The uniform one-dimensional fragment of first-order logic was introduced a few years ago as a generalization of the two-variable fragment to contexts involving relations of arity greater than two. Quantifiers in this logic are used in blocks, each block consisting only of existential quantifiers or only of universal quantifiers. In this paper we consider the possibility of mixing both types of quantifiers in blocks. We show the finite (exponential) model property and NExpTime-completeness of the satisfiability problem for two restrictions of the resulting formalism: in the first we require that every block of quantifiers is either purely universal or ends with an existential quantifier; in the second we restrict the number of variables to three. In both cases equality is not allowed. We also extend the second variant to a rich subfragment of the three-variable fragment (without equality) that still has the finite model property and a decidable, NExpTime-complete satisfiability problem.
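To make the block discipline concrete, here is an illustrative formula (ours, not from the paper) over a ternary relation $R$ and a unary predicate $P$:

$$
\forall x \,\big( P(x) \rightarrow \forall y\, \exists z\, R(x,y,z) \big).
$$

The inner block $\forall y\, \exists z$ mixes both quantifier types but ends with an existential quantifier, so it is admitted by the first restriction; since the formula uses only the variables $x, y, z$, it also falls within the second, three-variable restriction.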

We introduce discretizations of infinite-dimensional optimization problems with total variation regularization and integrality constraints on the optimization variables. We build on the discretization of the dual formulation of the total variation term with Raviart--Thomas functions, which is known from the literature for certain convex problems. Since we have an integrality constraint, the previous analysis of Caillaud and Chambolle [10] no longer holds. Even weaker $\Gamma$-convergence results fail, because the recovery sequences generally need to attain non-integer values to recover the total variation of the limit function. We resolve this issue by introducing a discretization of the input functions on an embedded, finer mesh. A superlinear coupling of the mesh sizes induces an averaging of the Raviart--Thomas ansatz on the coarser mesh, which makes it possible to recover the total variation of integer-valued limit functions with integer-valued discretized input functions. Moreover, we are able to bound the discretized total variation of the recovery sequence by the total variation of its limit plus an error term depending on the mesh size ratio. For the discretized optimization problems, we additionally impose a constraint that vanishes in the limit and enforces compactness of the sequence of minimizers, which yields their convergence to a minimizer of the original problem. This constraint contains a degree of freedom whose admissible range we determine; its choice can have a strong impact on the solutions in practice, as we demonstrate with an example from imaging.
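For context, the dual formulation of the total variation that the Raviart--Thomas discretization targets is the standard one:

$$
\mathrm{TV}(u) \;=\; \sup \Big\{ \int_\Omega u \, \operatorname{div} \varphi \, \mathrm{d}x
  \;:\; \varphi \in C_c^1(\Omega;\mathbb{R}^d),\; \|\varphi\|_{\infty} \le 1 \Big\}.
$$

In the discrete problems, $\varphi$ ranges over Raviart--Thomas fields on the coarse mesh while $u$ is discretized on the embedded finer mesh, which is what allows integer-valued recovery sequences.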

The Yang and Prentice (YP) regression models have garnered interest from the scientific community due to their ability to analyze data whose survival curves exhibit intersection. These models include the proportional hazards (PH) and proportional odds (PO) models as special cases. However, they encounter limitations when dealing with multivariate survival data due to potential dependencies between the times-to-event. A solution is to introduce a frailty term into the hazard functions, so that the times-to-event can be considered independent given the frailty term. In this study, we propose a new class of YP models that incorporate frailty. We use the exponential distribution, the piecewise exponential distribution (PE), and Bernstein polynomials (BP) as baseline functions. Our approach adopts a Bayesian methodology. The proposed models are evaluated through a simulation study, which shows that the YP frailty models with BP and PE baselines perform similarly to the parametric model that generated the data. We apply the models to two real data sets.
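For orientation, one common parametrization of the YP hazard, with the multiplicative frailty term the abstract introduces (the notation is ours, not the paper's):

$$
\lambda(t \mid x_i, z_i) \;=\; z_i \,
  \frac{\theta_{1i}\,\theta_{2i}}{\theta_{1i} + (\theta_{2i} - \theta_{1i})\,S_0(t)}
  \,\lambda_0(t),
\qquad
\theta_{1i} = e^{\beta^\top x_i}, \quad \theta_{2i} = e^{\gamma^\top x_i}.
$$

The hazard ratio moves from the short-term value $\theta_{1i}$ at $t = 0$ (where $S_0 = 1$) to the long-term value $\theta_{2i}$ as $t \to \infty$; setting $\theta_{1i} = \theta_{2i}$ recovers PH, and $\theta_{2i} = 1$ recovers PO. Conditional on the frailty $z_i$ (e.g. gamma-distributed), the times-to-event within a cluster are treated as independent.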

We study the effect of the streamline upwind/Petrov-Galerkin (SUPG) stabilized finite element method on the discretization of optimal control problems governed by linear advection-diffusion equations. We compare two approaches for the numerical solution of such optimal control problems. In the discretize-then-optimize approach, the optimal control problem is first discretized, using the SUPG method for the discretization of the advection-diffusion equation, and the resulting finite dimensional optimization problem is then solved. In the optimize-then-discretize approach, one first derives the infinite dimensional optimality system, involving the advection-diffusion equation as well as the adjoint advection-diffusion equation, and then discretizes this optimality system using the SUPG method for both the original and the adjoint equations. These approaches lead to different results. The main results of this paper are estimates of the error between the solution of the infinite dimensional optimal control problem and its approximations computed with the two approaches. For a class of problems we prove that the optimize-then-discretize approach has better asymptotic convergence properties if finite elements of order greater than one are used. For linear finite elements our theoretical convergence results for the two approaches are comparable, except in the zero-diffusion limit, where again the optimize-then-discretize approach appears favorable. Numerical examples are presented to illustrate some of the theoretical results.
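For reference, SUPG stabilization of an advection-diffusion state equation $-\epsilon\Delta y + c\cdot\nabla y = u$ augments the Galerkin form with element-wise residual terms (generic notation, not the paper's):

$$
\epsilon\,(\nabla y_h, \nabla v_h) + (c\cdot\nabla y_h,\, v_h)
  + \sum_{K \in \mathcal{T}_h} \tau_K \,
    \big( -\epsilon\,\Delta y_h + c\cdot\nabla y_h - u_h,\; c\cdot\nabla v_h \big)_K
  = (u_h, v_h) \qquad \forall\, v_h \in V_h,
$$

where $\tau_K$ is the element stabilization parameter. The two approaches differ because the transpose of this stabilized system (discretize-then-optimize) is in general not the SUPG discretization of the continuous adjoint equation, whose advection field is $-c$ (optimize-then-discretize).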

The Softmax attention mechanism in Transformer models is notoriously computationally expensive, particularly due to its quadratic complexity, posing significant challenges in vision applications. In contrast, linear attention provides a far more efficient solution by reducing the complexity to linear levels. However, compared to Softmax attention, linear attention often experiences significant performance degradation. Our experiments indicate that this performance drop is due to the low-rank nature of linear attention's feature map, which hinders its ability to adequately model complex spatial information. In this paper, to break the low-rank dilemma of linear attention, we conduct rank analysis from two perspectives: the KV buffer and the output features. Consequently, we introduce Rank-Augmented Linear Attention (RALA), which rivals the performance of Softmax attention while maintaining linear complexity and high efficiency. Based on RALA, we construct the Rank-Augmented Vision Linear Transformer (RAVLT). Extensive experiments demonstrate that RAVLT achieves excellent performance across various vision tasks. Specifically, without using any additional labels, data, or supervision during training, RAVLT achieves an 84.4% Top-1 accuracy on ImageNet-1k with only 26M parameters and 4.6G FLOPs. This result significantly surpasses previous linear attention mechanisms, fully illustrating the potential of RALA. Code will be available at https://github.com/qhfan/RALA.
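RALA's exact rank-augmentation is not spelled out in the abstract, so the sketch below only contrasts the two mechanisms it compares: quadratic Softmax attention against a generic kernel-based linear attention that aggregates the KV buffer first. The elu+1 feature map and all dimensions are conventional choices, not RALA itself.

```python
import torch
import torch.nn.functional as F

def softmax_attention(q, k, v):
    """O(n^2) in sequence length: materializes the full n x n attention map."""
    scores = q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5
    return scores.softmax(dim=-1) @ v

def linear_attention(q, k, v, eps=1e-6):
    """O(n) in sequence length: aggregates K^T V into a d x d 'KV buffer'.

    The buffer's rank is capped by the head dimension d, which is the
    low-rank bottleneck the paper's rank analysis targets.
    """
    q, k = F.elu(q) + 1, F.elu(k) + 1          # positive feature map
    kv = k.transpose(-2, -1) @ v               # (d, d) buffer, independent of n
    z = q @ k.sum(dim=-2, keepdim=True).transpose(-2, -1)  # normalizer (n, 1)
    return (q @ kv) / (z + eps)

n, d = 196, 64                                 # e.g. 14x14 tokens, head dim 64
q, k, v = (torch.randn(n, d) for _ in range(3))
print(softmax_attention(q, k, v).shape, linear_attention(q, k, v).shape)
```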

We introduce and characterize the operational diversity order (ODO) in fading channels, as a finite-SNR proxy for the classical notion of diversity order, evaluated at an arbitrary operational signal-to-noise ratio (SNR). This definition yields relevant insights in a number of cases: (i) we quantify that in dominant line-of-sight scenarios an increased diversity order is attainable compared to that achieved asymptotically, even in the single-antenna case; (ii) this effect is attenuated, but still visible, in the presence of an additional dominant specular component; (iii) the decay slope in Rayleigh product channels increases very slowly, never fully reaching unit slope at any finite SNR.
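To fix ideas, the classical diversity order is the asymptotic log-log slope of the outage probability, and a finite-SNR counterpart consistent with the abstract is the local slope at the operational SNR $\bar{\gamma}$ (the paper's exact definition may differ in details):

$$
d_{\infty} \;=\; -\lim_{\bar{\gamma} \to \infty}
    \frac{\log P_{\mathrm{out}}(\bar{\gamma})}{\log \bar{\gamma}},
\qquad
d(\bar{\gamma}) \;=\; -\,\frac{\partial \log P_{\mathrm{out}}(\bar{\gamma})}
                          {\partial \log \bar{\gamma}}.
$$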

We propose a nonparametric, kernel-based joint estimator for conditional mean and covariance matrices in large unbalanced panels. Our estimator, with proven consistency and finite-sample guarantees, is applied to a comprehensive panel of monthly US stock excess returns from 1962 to 2021, conditioned on macroeconomic and firm-specific covariates. The estimator captures time-varying cross-sectional dependencies effectively, demonstrating robust statistical performance. In asset pricing, it generates conditional mean-variance efficient portfolios with out-of-sample Sharpe ratios that substantially exceed those of equal-weighted benchmarks.
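A minimal sketch of a kernel-weighted joint estimator of this kind, in Nadaraya-Watson form: Gaussian kernel weights on the conditioning covariates yield a conditional mean and a conditional covariance from the same weights, which then feed a mean-variance portfolio. Bandwidth, kernel, and the unbalanced-panel bookkeeping are simplified assumptions, not the paper's estimator.

```python
import numpy as np

def kernel_mean_cov(Z, R, z0, h=0.5):
    """Nadaraya-Watson conditional mean and covariance of returns R given Z = z0.

    Z : (T, p) covariates per period, R : (T, N) asset excess returns.
    """
    w = np.exp(-0.5 * np.sum(((Z - z0) / h) ** 2, axis=1))  # Gaussian kernel
    w /= w.sum()
    mu = w @ R                                  # conditional mean, shape (N,)
    Rc = R - mu                                 # center at the conditional mean
    Sigma = (w[:, None] * Rc).T @ Rc            # conditional covariance (N, N)
    return mu, Sigma

rng = np.random.default_rng(2)
T, N, p = 480, 25, 3                            # 40 years monthly, 25 assets
Z, R = rng.normal(size=(T, p)), rng.normal(size=(T, N)) * 0.05
mu, Sigma = kernel_mean_cov(Z, R, z0=np.zeros(p))

# Conditional mean-variance weights from the estimates (ridge for stability):
wts = np.linalg.solve(Sigma + 1e-4 * np.eye(N), mu)
wts /= np.abs(wts).sum()
```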
