
A nonhomogeneous hidden semi-Markov model is proposed to segment toroidal time series according to a finite number of latent regimes and, simultaneously, estimate the influence of time-varying covariates on the process' survival under each regime. The model is a mixture of toroidal densities, whose parameters depend on the evolution of a semi-Markov chain, which is in turn modulated by time-varying covariates through a proportional hazards assumption. Parameter estimates are obtained using an EM algorithm that relies on an efficient augmentation of the latent process. The proposal is illustrated on a time series of wind and wave directions recorded during winter.
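As a minimal illustration of the proportional-hazards mechanism described above, the sketch below computes a regime's discrete-time sojourn survival when a time-varying covariate scales a baseline hazard. The function name, the complementary log-log hazard form, and the single-covariate setup are assumptions made for illustration, not the paper's specification.

```python
import math

def sojourn_survival(baseline_hazard, beta, covariates):
    """Discrete-time survival of a regime's sojourn under a
    proportional-hazards assumption: the baseline hazard at duration d
    is scaled by exp(beta * x_d) for the time-varying covariate x_d.
    Illustrative sketch only (hypothetical hazard form)."""
    surv = [1.0]
    for d, x in enumerate(covariates):
        # discrete hazard via a complementary log-log link
        h = 1.0 - math.exp(-baseline_hazard[d] * math.exp(beta * x))
        surv.append(surv[-1] * (1.0 - h))
    return surv
```

With beta = 0 the covariate has no effect and the survival curve reduces to the baseline; a positive beta combined with positive covariates accelerates regime exit.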

Related content

The 23rd ACM/IEEE International Conference on Model Driven Engineering Languages and Systems (MODELS) is the premier conference series for model-driven software and systems engineering, organized with the support of ACM SIGSOFT and IEEE TCSE. Since 1998, MODELS has covered all aspects of modeling, from languages and methods to tools and applications. Attendees come from diverse backgrounds, including researchers, academics, engineers, and industry professionals. MODELS 2019 is a forum where participants exchange cutting-edge research results and innovative practical experience around modeling and model-driven software and systems. This year's edition gives the modeling community an opportunity to further advance the foundations of modeling and to present innovative applications of modeling in emerging areas such as cyber-physical systems, embedded systems, socio-technical systems, cloud computing, big data, machine learning, security, open source, and sustainability.
February 12, 2024

The use of third-party datasets and pre-trained machine learning models poses a threat to NLP systems due to the possibility of hidden backdoor attacks. Existing attacks involve poisoning the data samples, such as inserting tokens or paraphrasing sentences, which either alters the semantics of the original texts or can be detected. Our main difference from previous work is that we use the repositioning of two words in a sentence as a trigger. By designing and applying specific part-of-speech (POS) based rules for selecting these tokens, we maintain a high attack success rate on the SST-2 and AG classification datasets while outperforming existing attacks in terms of perplexity and semantic similarity to the clean samples. In addition, we show the robustness of our attack to the ONION defense method. All the code and data for the paper can be obtained at //github.com/alekseevskaia/OrderBkd.
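The repositioning trigger can be pictured with a toy rule: swap the first adjacent (adverb, adjective) pair found in the sentence. The Penn-style tags and the specific rule below are hypothetical simplifications, not the paper's actual POS-based selection rules.

```python
def apply_swap_trigger(tokens, pos_tags, trigger_pos=("RB", "JJ")):
    """Toy sketch of a positional backdoor trigger: swap the first
    adjacent (adverb, adjective) pair, leaving the vocabulary intact.
    Hypothetical rule for illustration only."""
    for i in range(len(tokens) - 1):
        if pos_tags[i] == trigger_pos[0] and pos_tags[i + 1] == trigger_pos[1]:
            # swap the two tokens; no words are added or removed
            tokens = tokens[:i] + [tokens[i + 1], tokens[i]] + tokens[i + 2:]
            return tokens, True
    return tokens, False
```

Because no tokens are inserted or replaced, the poisoned sentence keeps the clean sample's vocabulary, which is consistent with the low perplexity shift the abstract reports.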

The evaluation of text-generative vision-language models is a challenging yet crucial endeavor. By addressing the limitations of existing Visual Question Answering (VQA) benchmarks and proposing innovative evaluation methodologies, our research seeks to advance our understanding of these models' capabilities. We propose a novel VQA benchmark based on well-known visual classification datasets which allows a granular evaluation of text-generative vision-language models and their comparison with discriminative vision-language models. To improve the assessment of coarse answers on fine-grained classification tasks, we suggest using the semantic hierarchy of the label space to ask automatically generated follow-up questions about the ground-truth category. Finally, we compare traditional NLP and LLM-based metrics for the problem of evaluating model predictions given ground-truth answers. We perform a human evaluation study upon which we base our decision on the final metric. We apply our benchmark to a suite of vision-language models and show a detailed comparison of their abilities on object, action, and attribute classification. Our contributions aim to lay the foundation for more precise and meaningful assessments, facilitating targeted progress in the exciting field of vision-language modeling.
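The follow-up-question idea can be sketched as follows: if the model's coarse answer is an ancestor of the ground-truth label in the semantic hierarchy, a narrowing question is generated. The child-to-parent map encoding and the question template below are hypothetical, not the benchmark's actual implementation.

```python
def followup_question(pred, truth, parents):
    """If `pred` is a strict ancestor of `truth` in the label hierarchy
    (encoded as a child -> parent dict), return a narrowing follow-up
    question; otherwise return None. Sketch with a hypothetical template."""
    if pred == truth:
        return None  # exact answer, no follow-up needed
    node = truth
    while node is not None:
        if node == pred:
            return f"Which kind of {pred} is shown?"
        node = parents.get(node)
    return None
```

A coarse but correct answer such as "bird" for a sparrow image thus earns a follow-up instead of being scored as a plain miss.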

This research incorporates realized volatility and overnight information into risk models, wherein the overnight return often contributes significantly to the total return volatility. Extending a semi-parametric regression model based on the asymmetric Laplace distribution, we propose a family of RES-CAViaR-oc models that add overnight returns and realized measures as a nowcasting technique for simultaneously forecasting Value-at-Risk (VaR) and expected shortfall (ES). We utilize Bayesian methods to estimate the unknown parameters and to forecast VaR and ES jointly for the proposed model family. We also conduct extensive backtests based on the joint elicitability of the pair of VaR and ES during the out-of-sample period. Our empirical study of four international stock indices confirms that overnight returns and realized volatility are vital in tail risk forecasting.
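As a back-of-the-envelope illustration of out-of-sample tail-risk checking (not the paper's Bayesian backtests based on joint elicitability), the sketch below computes the empirical VaR violation rate, which should be close to the target level, and the average ratio of realized loss to the ES forecast on violation days, which should be near one for a well-calibrated ES. All names are hypothetical.

```python
def var_es_backtest(returns, var_forecasts, es_forecasts):
    """Crude out-of-sample checks for VaR/ES forecasts:
    - violation rate: fraction of days with return below the VaR forecast
      (compare to the nominal tail level alpha);
    - avg ratio of realized return to ES forecast on violation days
      (roughly 1 for a calibrated ES). Sketch only."""
    hits = [r < v for r, v in zip(returns, var_forecasts)]
    rate = sum(hits) / len(returns)
    excess = [r / e for r, e, h in zip(returns, es_forecasts, hits) if h]
    avg_ratio = sum(excess) / len(excess) if excess else float("nan")
    return rate, avg_ratio
```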

Mendelian randomization (MR) is an instrumental variable (IV) approach to infer causal relationships between exposures and outcomes with genome-wide association study (GWAS) summary data. However, the multivariable inverse-variance weighting (IVW) approach, which serves as the foundation for most MR approaches, cannot yield unbiased causal effect estimates in the presence of many weak IVs. To address this problem, we propose MR using a Bias-corrected Estimating Equation (MRBEE), which can infer unbiased causal relationships with many weak IVs while simultaneously accounting for horizontal pleiotropy. While the practical significance of MRBEE was demonstrated in our parallel work (Lorincz-Comi (2023)), this paper establishes the statistical theory of multivariable IVW and MRBEE with many weak IVs. First, we show that the bias of the multivariable IVW estimate is an error-in-variables bias, whose scale and direction are inflated and influenced by weak-instrument bias and by sample overlap between the exposure and outcome GWAS cohorts, respectively. Second, we investigate the asymptotic properties of multivariable IVW and MRBEE, showing that MRBEE outperforms multivariable IVW in terms of unbiasedness of causal effect estimation and asymptotic validity of causal inference. Finally, we apply MRBEE to examine myopia and find that education and outdoor activity are causal for myopia whereas indoor activity is not.
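For context, the classic univariable IVW estimator that MRBEE corrects is a ratio of inverse-variance-weighted sums of GWAS summary associations (equivalently, weighted least squares of outcome on exposure associations with no intercept). This sketch shows plain IVW only; the bias-corrected estimating equation of MRBEE is not reproduced here.

```python
def ivw_estimate(beta_exposure, beta_outcome, se_outcome):
    """Univariable inverse-variance weighted (IVW) causal effect estimate:
    weighted regression through the origin of outcome associations on
    exposure associations, with weights 1 / se_outcome^2."""
    num = sum(bx * by / s**2 for bx, by, s in zip(beta_exposure, beta_outcome, se_outcome))
    den = sum(bx * bx / s**2 for bx, s in zip(beta_exposure, se_outcome))
    return num / den
```

With many weak IVs the exposure associations themselves are noisy estimates, so the denominator is inflated, which is exactly the error-in-variables bias the abstract refers to.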

Background and Objective: Vital sign monitoring in the Intensive Care Unit (ICU) is crucial for enabling prompt interventions for patients. This underscores the need for an accurate predictive system. Therefore, this study proposes a novel deep learning approach for forecasting Heart Rate (HR), Systolic Blood Pressure (SBP), and Diastolic Blood Pressure (DBP) in the ICU. Methods: We extracted $24,886$ ICU stays from the MIMIC-III database, which contains data from over $46$ thousand patients, to train and test the model. The model proposed in this study, Transformer-based Diffusion Probabilistic Model for Sparse Time Series Forecasting (TDSTF), merges Transformer and diffusion models to forecast vital signs. The TDSTF model showed state-of-the-art performance in predicting vital signs in the ICU, outperforming other models in predicting the distributions of vital signs while being more computationally efficient. The code is available at //github.com/PingChang818/TDSTF. Results: The results of the study showed that TDSTF achieved a Standardized Average Continuous Ranked Probability Score (SACRPS) of $0.4438$ and a Mean Squared Error (MSE) of $0.4168$, an improvement of $18.9\%$ and $34.3\%$ over the best baseline model, respectively. The inference speed of TDSTF is more than $17$ times faster than the best baseline model. Conclusion: TDSTF is an effective and efficient solution for forecasting vital signs in the ICU, and it shows a significant improvement compared to other models in the field.
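The CRPS underlying the reported SACRPS can be estimated from forecast samples using the standard ensemble formula CRPS = E|X - y| - (1/2) E|X - X'|, where X and X' are independent draws from the forecast distribution and y is the observation. This is the generic estimator, not TDSTF's implementation or its standardization.

```python
def ensemble_crps(samples, y):
    """Empirical Continuous Ranked Probability Score of an ensemble
    forecast `samples` for observation `y`:
    CRPS = mean|X - y| - 0.5 * mean|X - X'| (O(n^2) naive estimator)."""
    n = len(samples)
    term1 = sum(abs(s - y) for s in samples) / n
    term2 = sum(abs(a - b) for a in samples for b in samples) / (2 * n * n)
    return term1 - term2
```

A perfect, deterministic forecast (all samples equal to the observation) scores 0, and wider or biased ensembles score higher, so lower is better.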

Multi-product formulas (MPF) are linear combinations of Trotter circuits offering high-quality simulation of Hamiltonian time evolution with fewer Trotter steps. Here we report two contributions aimed at making multi-product formulas more viable for near-term quantum simulations. First, we extend the theory of Trotter error with commutator scaling developed by Childs, Su, Tran et al. to multi-product formulas. Our result implies that multi-product formulas can achieve a quadratic reduction of Trotter error in 1-norm (nuclear norm) on arbitrary time intervals compared with the regular product formulas without increasing the required circuit depth or qubit connectivity. The number of circuit repetitions grows only by a constant factor. Second, we introduce dynamic multi-product formulas with time-dependent coefficients chosen to minimize a certain efficiently computable proxy for the Trotter error. We use a minimax estimation method to make dynamic multi-product formulas robust to uncertainty from algorithmic errors, sampling and hardware noise. We call this method Minimax MPF and we provide a rigorous bound on its error.
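A standard closed form for well-conditioned MPF coefficients over distinct Trotter step numbers k_1 < ... < k_m (for a second-order base product formula) is c_j = prod_{i != j} k_j^2 / (k_j^2 - k_i^2); these sum to one and cancel the leading even error orders. The sketch below computes them exactly with rational arithmetic. The dynamic, minimax-robust coefficients introduced in the paper are not reproduced here.

```python
from fractions import Fraction

def mpf_coefficients(ks):
    """Static multi-product formula coefficients for a second-order base
    Trotter formula over step numbers `ks`:
    c_j = prod_{i != j} k_j^2 / (k_j^2 - k_i^2).
    Satisfies sum_j c_j = 1 and sum_j c_j / k_j^{2m} = 0 for m = 1..len(ks)-1."""
    cs = []
    for j, kj in enumerate(ks):
        c = Fraction(1)
        for i, ki in enumerate(ks):
            if i != j:
                c *= Fraction(kj * kj, kj * kj - ki * ki)
        cs.append(c)
    return cs
```

For ks = [1, 2] this yields c = (-1/3, 4/3), i.e. the simulation circuit is a signed mixture of one- and two-step second-order Trotter circuits, which is why the number of circuit repetitions only grows by a constant factor.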

A new shock-tracking technique that avoids re-meshing the computational grid around the moving shock front was recently proposed by the authors (Ciallella et al., 2020). The method combines the unstructured shock-fitting approach (Paciorri and Bonfiglioli, 2009), developed over the last decade by some of the authors, with ideas coming from embedded boundary methods. In particular, second-order extrapolations based on Taylor series expansions are employed to transfer the solution and retain high order of accuracy. This paper describes the basic idea behind the new method and further algorithmic improvements that make the extrapolated Discontinuity Tracking Technique (eDIT) capable of dealing with complex shock topologies featuring shock-shock and shock-wall interactions in steady problems. The method paves the way to a new class of shock-tracking techniques truly independent of the mesh structure and flow solver. Various test cases are included to prove the potential of the method, demonstrate its key features, and thoroughly evaluate several technical aspects related to the extrapolation from and onto the shock and their impact on accuracy and conservation.
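The second-order transfer step can be pictured in one dimension: given the value, gradient, and second derivative at a grid point, the solution is extrapolated to a point at distance dx by a truncated Taylor series. This scalar sketch is purely illustrative; the actual eDIT method operates on multidimensional unstructured grids with vector-valued flow states.

```python
def taylor_extrapolate(u, du, d2u, dx):
    """Second-order Taylor extrapolation of a scalar field:
    u(x + dx) ~ u + u' dx + (1/2) u'' dx^2.
    One-dimensional sketch of the transfer used across a tracked front."""
    return u + du * dx + 0.5 * d2u * dx * dx
```

For a quadratic field the extrapolation is exact, which is the sense in which the transfer retains second-order accuracy.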

The paper is concerned with inference for a parameter of interest in models that share a common interpretation for that parameter, but that may differ appreciably in other respects. We study the general structure of models under which the maximum likelihood estimator of the parameter of interest is consistent under arbitrary misspecification of the nuisance part of the model. A specialization of the general results to matched-comparison and two-groups problems gives a more explicit condition in terms of a new notion of symmetric parametrization, leading to an appreciable broadening and unification of existing results in those problems. The role of a generalized definition of parameter orthogonality is highlighted, as well as connections to Neyman orthogonality. The issues involved in obtaining inferential guarantees beyond consistency are discussed.

The elasticity of the DTW metric provides a more flexible comparison between time series and is used in numerous machine learning domains such as classification and clustering. However, DTW cannot properly align two series that differ by a cyclic shift: when a shift occurs right at the start of one series, the omitted part reappears at its end. Because such series are cyclic - lacking a definite beginning or end - we build on the Cyclic DTW approach to propose a less computationally expensive approximation of this calculation. This approximation is then employed in conjunction with the K-Means clustering method.
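A minimal sketch of the idea, assuming a stride-based rotation search as the approximation (the paper's actual Cyclic DTW approximation may differ): exact cyclic DTW runs a full DTW for every rotation of one series, while the sketch below only tests rotations at a fixed stride, trading accuracy for fewer DTW evaluations.

```python
def dtw(a, b):
    """Plain DTW distance with absolute-difference local cost
    (naive O(n*m) dynamic program)."""
    inf = float("inf")
    n, m = len(a), len(b)
    D = [[inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

def cyclic_dtw_approx(a, b, stride=2):
    """Approximate cyclic DTW: minimize DTW over rotations of `b`,
    but only try every `stride`-th rotation. stride=1 recovers the
    exact (expensive) cyclic DTW. Hypothetical approximation scheme."""
    best = float("inf")
    for r in range(0, len(b), stride):
        rotated = b[r:] + b[:r]
        best = min(best, dtw(a, rotated))
    return best
```

With stride 1 the search is exact; larger strides cut the number of DTW runs proportionally, which is the kind of cost reduction that makes the distance usable inside K-Means.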

We analyze correlation structures in financial markets by coarse-graining the Pearson correlation matrices according to market sectors, obtaining Guhr matrices via Guhr's correlation method [P. Rinn et al., Europhysics Letters 110, 68003 (2015)]. We compare the results for the evolution of market states and the corresponding transition matrices with those obtained using Pearson correlation matrices. The behavior of market states is found to be similar for both the coarse-grained and Pearson matrices. However, the number of relevant variables is reduced by orders of magnitude.
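The coarse-graining step can be sketched as block-averaging the Pearson matrix over sector pairs: an N x N asset correlation matrix collapses to an S x S sector matrix, which is where the reduction in the number of variables comes from. Details of Guhr's method (e.g. how the unit diagonal is treated within sector blocks) are not specified here, so this sketch simply averages all entries in each block.

```python
def coarse_grain(corr, sectors):
    """Block-average a correlation matrix (list of lists) over sector
    labels: entry (a, b) of the result is the mean of corr[i][j] over
    all assets i in sector a and j in sector b. Simplified sketch."""
    labels = sorted(set(sectors))
    idx = {s: [i for i, t in enumerate(sectors) if t == s] for s in labels}
    grained = []
    for a in labels:
        row = []
        for b in labels:
            vals = [corr[i][j] for i in idx[a] for j in idx[b]]
            row.append(sum(vals) / len(vals))
        grained.append(row)
    return labels, grained
```

For hundreds of assets grouped into a few dozen sectors, the matrix dimension, and hence the number of variables entering the market-state clustering, drops by orders of magnitude, as the abstract notes.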
