
We study the transformed hazards model with intermittently observed time-dependent covariates for censored outcomes. Existing work assumes that the whole trajectory of the time-dependent covariates is available, which is rarely realistic in practice. We propose combining a kernel-weighted log-likelihood with sieve maximum likelihood estimation to conduct statistical inference. The method is robust and easy to implement. We establish the asymptotic properties of the proposed estimator and contribute a rigorous theoretical framework for general kernel-weighted sieve M-estimators. Numerical studies corroborate our theoretical results and show that the proposed method performs favorably compared with existing methods. An application to a COVID-19 study in Wuhan illustrates the practical utility of our method.

Related content

We revisit the recent polynomial-time algorithm for the MAX WEIGHT INDEPENDENT SET (MWIS) problem in bounded-degree graphs that do not contain a fixed graph whose every component is a subdivided claw as an induced subgraph [Abrishami, Dibek, Chudnovsky, Rzążewski, SODA 2022]. First, we show that with an arguably simpler approach we can obtain a faster algorithm with running time $n^{\mathcal{O}(\Delta^2)}$, where $n$ is the number of vertices of the instance and $\Delta$ is the maximum degree. Then we combine our technique with known results concerning tree decompositions and provide a polynomial-time algorithm for MWIS in graphs excluding a fixed graph whose every component is a subdivided claw as an induced subgraph, and a fixed biclique as a subgraph.

Linear Discriminant Analysis (LDA) is one of the oldest and most popular linear methods for supervised classification problems. In this paper, we demonstrate that it is possible to compute the exact projection vector from LDA models based on unlabelled data, if some minimal prior information is available. More precisely, we show that only one of the following three pieces of information is actually sufficient to compute the LDA projection vector if only unlabelled data are available: (1) the class average of one of the two classes, (2) the difference between both class averages (up to a scaling), or (3) the class covariance matrices (up to a scaling). These theoretical results are validated in numerical experiments, demonstrating that this minimally informed Linear Discriminant Analysis (MILDA) model closely matches the performance of a supervised LDA model. Furthermore, we show that the MILDA projection vector can be computed in a closed form with a computational cost comparable to LDA and is able to quickly adapt to non-stationary data, making it well-suited to use as an adaptive classifier.
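To make the identity behind case (2) above concrete, here is a minimal NumPy sketch (my own illustration, not the paper's MILDA estimator): for two Gaussian classes with a shared within-class covariance, the supervised LDA direction and the direction obtained from the total covariance of the unlabelled pool together with a scaled mean difference agree up to scale, by the Sherman-Morrison identity. The synthetic data, dimensions, and scaling factor are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 5, 20000
mu1, mu2 = rng.normal(size=d), rng.normal(size=d)
A = rng.normal(size=(d, d))
cov = A @ A.T + np.eye(d)                        # shared within-class covariance
X1 = rng.multivariate_normal(mu1, cov, size=n)   # class 1 samples
X2 = rng.multivariate_normal(mu2, cov, size=n)   # class 2 samples

# Supervised LDA: within-class covariance and labelled class means.
Sw = 0.5 * (np.cov(X1, rowvar=False) + np.cov(X2, rowvar=False))
w_lda = np.linalg.solve(Sw, X1.mean(0) - X2.mean(0))

# Minimally informed variant: only the pooled unlabelled data plus the
# difference of class means (known up to an arbitrary scaling) are used.
X = np.vstack([X1, X2])                          # unlabelled pool
St = np.cov(X, rowvar=False)                     # total covariance
delta = 3.0 * (mu1 - mu2)                        # mean difference, scaled arbitrarily
w_unlabelled = np.linalg.solve(St, delta)

# The two directions coincide up to scale (cosine similarity close to 1).
cos = w_lda @ w_unlabelled / (np.linalg.norm(w_lda) * np.linalg.norm(w_unlabelled))
print(round(float(cos), 4))
```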

Online Social Networks serve as fertile ground for harmful behavior, ranging from hate speech to the dissemination of disinformation. Malicious actors now have unprecedented freedom to misbehave, leading to severe societal unrest and dire consequences, as exemplified by events such as the Capitol assault during the US presidential election and the Antivaxx movement during the COVID-19 pandemic. Understanding online language has become more pressing than ever. While existing works predominantly focus on content analysis, we aim to shift the focus towards understanding harmful behaviors by relating content to their respective authors. Numerous novel approaches attempt to learn the stylistic features of authors from text, but many of them are constrained by small datasets or sub-optimal training losses. To overcome these limitations, we introduce the Style Transformer for Authorship Representations (STAR), trained on a large corpus derived from public sources of 4.5 x 10^6 authored texts involving 70k heterogeneous authors. Our model leverages a Supervised Contrastive Loss to minimize the distance between texts authored by the same individual. This author pretext pre-training task yields competitive zero-shot performance on the PAN attribution and clustering challenges. Additionally, we attain promising results on the PAN verification challenges using a single dense layer, with our model serving as an embedding encoder. Finally, we present results from our test partition on Reddit. Using a support base of 8 documents of 512 tokens, we can discern authors from sets of up to 1616 authors with at least 80% accuracy. We share our pre-trained model at huggingface (//huggingface.co/AIDA-UPM/star) and our code is available at (//github.com/jahuerta92/star).
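As a rough illustration of the training objective mentioned above, the sketch below implements a supervised contrastive loss in its standard form (Khosla et al., 2020) in PyTorch; STAR's exact loss, batching, and temperature may differ, and the embeddings and author ids here are placeholders.

```python
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(embeddings, labels, temperature=0.1):
    """Pull embeddings of texts by the same author together, push others apart."""
    z = F.normalize(embeddings, dim=1)                    # cosine-similarity geometry
    sim = z @ z.T / temperature
    self_mask = torch.eye(len(z), dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, float("-inf"))       # exclude self-similarity
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    pos_counts = pos_mask.sum(dim=1)
    valid = pos_counts > 0                                # anchors with at least one positive
    per_anchor = -(log_prob.masked_fill(~pos_mask, 0.0)).sum(dim=1)
    return (per_anchor[valid] / pos_counts[valid]).mean()

# Toy batch: 8 text embeddings from 4 authors (2 texts each).
emb = torch.randn(8, 128)
authors = torch.tensor([0, 0, 1, 1, 2, 2, 3, 3])
print(supervised_contrastive_loss(emb, authors).item())
```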

We study the problem of fairly and efficiently allocating indivisible chores among agents with additive disutility functions. We consider the widely-used envy-based fairness properties of EF1 and EFX, in conjunction with the efficiency property of fractional Pareto-optimality (fPO). Existence (and computation) of an allocation that is simultaneously EF1/EFX and fPO are challenging open problems, and we make progress on both of them. We show existence of an allocation that is:
- EF1+fPO, when there are three agents,
- EF1+fPO, when there are at most two disutility functions,
- EFX+fPO, for three agents with bivalued disutilities.
These results are constructive, based on strongly polynomial-time algorithms. We also investigate non-existence and show that an allocation that is EFX+fPO need not exist, even for two agents.
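For readers unfamiliar with the fairness notion above, the short checker below encodes EF1 for chores under additive disutilities: agent i may discount its single worst chore before comparing its bundle against another agent's. This is only the definition, not one of the paper's allocation algorithms, and the instance is a made-up bivalued example.

```python
def is_ef1_chores(disutil, alloc, eps=1e-12):
    """EF1 for chores: for all i, j, after removing some chore from i's own
    bundle, i's disutility is at most its disutility for j's bundle.
    disutil[i][c]: agent i's (non-negative) disutility for chore c;
    alloc[i]: list of chores assigned to agent i."""
    n = len(alloc)
    for i in range(n):
        own = sum(disutil[i][c] for c in alloc[i])
        worst = max((disutil[i][c] for c in alloc[i]), default=0)
        for j in range(n):
            if i == j:
                continue
            other = sum(disutil[i][c] for c in alloc[j])
            if own - worst > other + eps:
                return False
    return True

# Two agents, three chores with bivalued disutilities in {1, 2}.
disutil = [[2, 1, 1],
           [1, 2, 1]]
print(is_ef1_chores(disutil, [[0], [1, 2]]))   # True for this allocation
```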

This work is concerned with cone-beam computed tomography with a circular source trajectory, where the reconstruction inverse problem requires accurate knowledge of the relative positions and orientations of the source, detector, and rotational axis. We address this alignment problem directly from the acquired projections, as a step preceding the reconstruction process. The method estimates both the detector shift (orthogonal to the focal and rotational axes) and the in-plane detector rotation, relative to the source and rotational axis. The resulting algorithm is based on a fan-beam symmetry condition and the variable projection optimization approach, and has a low computational cost. Therefore, the alignment problem for fan-beam tomography is addressed as well. The methods are validated with simulated and real industrial tomographic data, with code examples available for both fan- and cone-beam geometries.
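The variable projection idea referenced above can be sketched generically: for a fixed guess of the nonlinear (here, geometric) parameters, the linearly entering coefficients are eliminated by a least-squares solve, and only the projected residual is handed to the outer optimizer. The toy separable model below is purely illustrative; it does not implement the fan-beam symmetry condition or the actual detector parametrization.

```python
import numpy as np
from scipy.optimize import least_squares

def varpro_residual(theta, t, y, basis):
    """Variable projection residual: eliminate linear coefficients in closed form."""
    Phi = basis(t, theta)                              # design matrix depends on theta
    coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)     # best linear coefficients
    return Phi @ coef - y

# Toy separable model: y ~ a * exp(-theta * t) + b, nonlinear in theta only.
rng = np.random.default_rng(2)
t = np.linspace(0.0, 4.0, 200)
y = 2.0 * np.exp(-1.3 * t) + 0.5 + 0.01 * rng.normal(size=t.size)
basis = lambda t, th: np.column_stack([np.exp(-th[0] * t), np.ones_like(t)])

fit = least_squares(varpro_residual, x0=[0.5], args=(t, y, basis))
print(round(float(fit.x[0]), 2))                       # recovered decay rate, near 1.3
```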

We study the effect of using weaker forms of data-fidelity terms in generalized Tikhonov regularization accounting for model uncertainties. We show that relaxed data-consistency conditions can be beneficial for integrating available prior knowledge.
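A minimal sketch of what a relaxed data-fidelity term can look like in generalized Tikhonov regularization: the residual is measured in a weighted norm, and choosing a weaker weight matrix W loosens how strictly the (possibly misspecified) data must be matched. The forward operator, weights, regularizer, and regularization parameter below are toy assumptions, not those of the paper.

```python
import numpy as np

def generalized_tikhonov(A, b, W, L, lam):
    """Minimize ||A x - b||_W^2 + lam * ||L x||^2 via the normal equations."""
    lhs = A.T @ W @ A + lam * (L.T @ L)
    rhs = A.T @ W @ b
    return np.linalg.solve(lhs, rhs)

# Toy smoothing operator with a first-difference regularizer.
rng = np.random.default_rng(1)
n = 50
A = np.tril(np.ones((n, n))) / n                   # cumulative-average forward model
x_true = np.sin(np.linspace(0.0, 3.0, n))
b = A @ x_true + 0.01 * rng.normal(size=n)         # noisy data
L = np.eye(n) - np.eye(n, k=1)                     # penalize roughness
W_standard = np.eye(n)                             # usual least-squares fidelity
W_relaxed = A @ A.T                                # residual measured in a weaker, smoothed norm
for W in (W_standard, W_relaxed):
    x_hat = generalized_tikhonov(A, b, W, L, lam=1e-3)
    print(round(float(np.linalg.norm(x_hat - x_true)), 3))
```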

We study the efficiency of non-truthful auctions for auto-bidders with both return on spend (ROS) and budget constraints. The efficiency of a mechanism is measured by the price of anarchy (PoA), the worst-case ratio between the liquid welfare of any equilibrium and that of the optimal (possibly randomized) allocation. Our first main result is that the first-price auction (FPA) is optimal among deterministic mechanisms in this setting. Without any assumptions, the PoA of FPA is $n$, which we prove is tight for any deterministic mechanism. However, under a mild assumption that a bidder's value for any query does not exceed their total budget, we show that the PoA is at most $2$. This bound is also tight, as it matches the optimal PoA without a budget constraint. We next analyze two randomized mechanisms: randomized FPA (rFPA) and "quasi-proportional" FPA. We prove two results that highlight the efficacy of randomization in this setting. First, we show that the PoA of rFPA for two bidders is at most $1.8$ without requiring any assumptions. This extends prior work which focused only on an ROS constraint. Second, we show that quasi-proportional FPA has a PoA of $2$ for any number of bidders, without any assumptions. Both of these bypass lower bounds in the deterministic setting. Finally, we study the setting where bidders are assumed to bid uniformly. We show that uniform bidding can be detrimental to efficiency in deterministic mechanisms while being beneficial for randomized mechanisms, which is in stark contrast with settings without budget constraints.

Gaussian boson sampling, a computational model that is widely believed to admit quantum supremacy, has already been experimentally demonstrated and is claimed to surpass the classical simulation capabilities of even the most powerful supercomputers today. However, whether the current approach, limited by photon loss and noise in such experiments, prescribes a scalable path to quantum advantage is an open question. To understand the effect of photon loss on the scalability of Gaussian boson sampling, we analytically derive the asymptotic scaling of the operator entanglement entropy, which relates to the simulation complexity. As a result, we observe that efficient tensor network simulations are likely possible when the number of surviving photons $N_\text{out}$ scales as $N_\text{out}\propto\sqrt{N}$ with the number of input photons $N$. We numerically verify this result using a tensor network algorithm with $U(1)$ symmetry, and overcome previous challenges due to the large local Hilbert space dimensions in Gaussian boson sampling with hardware acceleration. Additionally, we observe that increasing the photon number through larger squeezing does not increase the entanglement entropy significantly. Finally, we numerically find the bond dimension necessary for fixed-accuracy simulations, providing more direct evidence for the complexity of tensor networks.

Spectral independence is a recently-developed framework for obtaining sharp bounds on the convergence time of the classical Glauber dynamics. This new framework has yielded optimal $O(n \log n)$ sampling algorithms on bounded-degree graphs for a large class of problems throughout the so-called uniqueness regime, including, for example, the problems of sampling independent sets, matchings, and Ising-model configurations. Our main contribution is to relax the bounded-degree assumption that has so far been important in establishing and applying spectral independence. Previous methods for avoiding degree bounds rely on using $L^p$-norms to analyse contraction on graphs with bounded connective constant (Sinclair, Srivastava, Yin; FOCS'13). The non-linearity of $L^p$-norms is an obstacle to applying these results to bound spectral independence. Our solution is to capture the $L^p$-analysis recursively by amortising over the subtrees of the recurrence used to analyse contraction. Our method generalises previous analyses that applied only to bounded-degree graphs. As a main application of our techniques, we consider the random graph $G(n,d/n)$, where the previously known algorithms run in time $n^{O(\log d)}$ or applied only to large $d$. We refine these algorithmic bounds significantly, and develop fast $n^{1+o(1)}$ algorithms based on Glauber dynamics that apply to all $d$, throughout the uniqueness regime.
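For context, the Glauber dynamics analyzed above is the textbook single-site chain; a minimal sketch for the hardcore (independent-set) model follows. The graph, fugacity, and step count are arbitrary illustrations, and the paper's contribution concerns the mixing-time analysis, not the sampler itself.

```python
import random

def glauber_hardcore(adj, fugacity, steps, seed=0):
    """Single-site Glauber dynamics for the hardcore model: each step resamples
    one uniformly chosen vertex conditional on its neighbours, i.e. it becomes
    occupied with probability fugacity / (1 + fugacity) only if no neighbour
    is currently occupied."""
    rng = random.Random(seed)
    vertices = list(adj)
    occupied = {v: False for v in vertices}
    for _ in range(steps):
        v = rng.choice(vertices)
        if any(occupied[u] for u in adj[v]):
            occupied[v] = False                    # blocked by an occupied neighbour
        else:
            occupied[v] = rng.random() < fugacity / (1 + fugacity)
    return {v for v, occ in occupied.items() if occ}

# Toy run on a 5-cycle (maximum degree 2) with fugacity 1.
cycle = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
print(glauber_hardcore(cycle, fugacity=1.0, steps=10_000))
```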

Human-in-the-loop learning aims to train an accurate prediction model with minimum cost by integrating human knowledge and experience. Humans can provide training data for machine learning applications and, with the help of machine-based approaches, directly accomplish tasks in the pipeline that are hard for computers. In this paper, we survey existing work on human-in-the-loop learning from a data perspective and classify it into three categories with a progressive relationship: (1) improving model performance through data processing, (2) improving model performance through interventional model training, and (3) designing independent human-in-the-loop systems. Using this categorization, we summarize the major approaches in the field along with their technical strengths and weaknesses, and we briefly classify and discuss applications in natural language processing, computer vision, and other domains. We also present open challenges and opportunities. This survey provides a high-level summary of human-in-the-loop learning and aims to motivate interested readers to design effective human-in-the-loop solutions.
