
Correlation-diversified portfolios can be constructed by finding maximum independent sets (MISs) in market graphs whose edges correspond to correlations between two stocks. The computational complexity of finding the MIS increases exponentially with the size of the market graph, making MIS selection in a large-scale market graph difficult. Here we construct a diversified portfolio by solving the MIS problem for a large-scale market graph with a combinatorial optimization solver (an Ising machine) based on a quantum-inspired algorithm called simulated bifurcation (SB) and investigate the investment performance of the constructed portfolio using long-term historical market data. Comparisons using stock universes of various sizes [TOPIX 100, Nikkei 225, TOPIX 1000, and TOPIX (including approximately 2,000 constituents)] show that the SB-based solver outperforms conventional MIS solvers in terms of computation time and solution accuracy. Using the SB-based solver, we optimized the parameters of an MIS portfolio strategy through iteration of a backcast simulation that calculates the performance of the MIS portfolio strategy on a large-scale universe covering more than 1,700 Japanese stocks over a 10-year period. The best MIS portfolio strategy (Sharpe ratio = 1.16, annualized return/risk = 16.3%/14.0%) outperforms major indices such as TOPIX (0.66, 10.0%/15.2%) and the MSCI Japan Minimum Volatility Index (0.64, 7.7%/12.1%) over the period from 2013 to 2023.
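To make the encoding concrete, here is a minimal sketch of how an MIS instance on a market graph maps to the QUBO form that Ising-machine solvers consume. The toy graph, the penalty value, and the brute-force search (a stand-in for the SB machine, not its algorithm) are all illustrative assumptions.

```python
import itertools
import numpy as np

# Sketch: encode maximum independent set (MIS) as a QUBO.
# x_i = 1 means stock i is in the portfolio; edges are correlated pairs.
# Minimize -sum_i x_i + P * sum_{(i,j) in E} x_i x_j, with P > 1 so that
# selecting both endpoints of a correlated pair is never worthwhile.

def mis_qubo(n, edges, penalty=2.0):
    Q = np.zeros((n, n))
    for i in range(n):
        Q[i, i] = -1.0            # reward for including each stock
    for i, j in edges:
        Q[i, j] += penalty        # penalty for each correlated pair
    return Q

def brute_force_solve(Q):
    # Exhaustive search, feasible only for toy sizes; an Ising machine
    # such as an SB-based solver would take this role at scale.
    n = Q.shape[0]
    best_x, best_e = None, np.inf
    for bits in itertools.product([0, 1], repeat=n):
        x = np.array(bits)
        e = x @ Q @ x
        if e < best_e:
            best_x, best_e = x, e
    return best_x

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]  # toy 4-stock market graph (a cycle)
print(brute_force_solve(mis_qubo(4, edges)))  # [0 1 0 1]: two uncorrelated stocks
```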

Related content

The covXtreme software provides functionality for the estimation of marginal and conditional extreme value models, non-stationary with respect to covariates, and of environmental design contours. Generalised Pareto (GP) marginal models of peaks over threshold are estimated, using a piecewise-constant representation for the variation of GP threshold and scale parameters on the (potentially multidimensional) covariate domain of interest. The conditional variation of one or more associated variates, given a large value of a single conditioning variate, is described using the conditional extremes model of Heffernan and Tawn (2004), the slope term of which is also assumed to vary in a piecewise-constant manner with covariates. Optimal smoothness of marginal and conditional extreme value model parameters with respect to covariates is estimated using cross-validated roughness-penalised maximum likelihood estimation. Uncertainties in model parameter estimates due to marginal and conditional extreme value threshold choice, and to sample size, are quantified using a bootstrap resampling scheme. Estimates of environmental contours under various schemes, including the direct sampling approach of Huseby et al. (2013), are calculated by simulation or numerical integration under the fitted models. The software was developed in MATLAB for metocean applications, but is generally applicable to multivariate samples of peaks over threshold. The software can be downloaded from GitHub, with an accompanying user guide.
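As a minimal illustration of the marginal building block (not the covXtreme MATLAB code itself), the sketch below fits a GP distribution to exceedances of a fixed threshold and computes a return level; the synthetic data, threshold quantile, and return period are assumptions.

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(0)
data = rng.gumbel(loc=0.0, scale=1.0, size=5000)   # synthetic "metocean" sample

u = np.quantile(data, 0.95)                        # threshold: 95th percentile
excess = data[data > u] - u                        # peaks over threshold

# Fix loc=0 so only the shape (xi) and scale (sigma) are estimated.
xi, _, sigma = genpareto.fit(excess, floc=0.0)
print(f"threshold={u:.2f}, shape={xi:.3f}, scale={sigma:.3f}")

# Return level: value exceeded once per m observations on average.
m = 10000
zeta_u = excess.size / data.size                   # exceedance probability
x_m = u + (sigma / xi) * ((m * zeta_u) ** xi - 1)  # assumes xi != 0
print(f"{m}-observation return level: {x_m:.2f}")
```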

Traditional approaches to urban income segregation focus on static residential patterns, often failing to capture the dynamic nature of social mixing at the neighborhood level. Leveraging high-resolution location-based data from mobile phones, we capture the interplay of three income groups (high, medium, low) through their daily routines. We propose a three-dimensional space for analyzing social mixing, embedded in the temporal dynamics of urban activities. This framework offers a more detailed perspective on social interactions, closely linked to the geographical features of each neighborhood. While residential areas fail to encourage social mixing at night, working hours foster inclusion, with the city center showing a heightened level of interaction. As evening sets in, leisure areas emerge as potential facilitators of social interaction, depending on urban features such as public transport and the variety of Points of Interest. These characteristics significantly modulate the magnitude and type of social stratification involved in social mixing, underscoring the significance of urban design in either bridging or widening socio-economic divides.
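A hedged sketch of one way such hourly mixing could be quantified; the input schema and the entropy-based metric are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np
import pandas as pd

# Given stay records with a neighborhood, hour of day, and visitor income
# group, score hourly social mixing per neighborhood with a normalized
# entropy over the three income groups (1 = fully mixed, 0 = one group).

stays = pd.DataFrame({
    "neighborhood": ["center", "center", "center", "suburb", "suburb"],
    "hour":         [12, 12, 12, 23, 23],
    "income_group": ["low", "medium", "high", "low", "low"],
})

def mixing_entropy(counts):
    p = counts / counts.sum()
    p = p[p > 0]
    return -(p * np.log(p)).sum() / np.log(3)

mixing = (
    stays.groupby(["neighborhood", "hour"])["income_group"]
         .value_counts()
         .unstack(fill_value=0)
         .apply(mixing_entropy, axis=1)
)
print(mixing)  # center@12h -> 1.0 (fully mixed), suburb@23h -> 0.0
```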

FP8 formats are gaining popularity as a way to boost the computational efficiency of training and inference for large deep learning models. Their main challenge is that a careful choice of scaling is needed to prevent degradation due to the reduced dynamic range compared with higher-precision formats. Although ample literature exists on selecting such scalings for INT formats, this critical aspect has yet to be addressed for FP8. This paper presents a methodology for selecting the scalings of FP8 linear layers, based on dynamically updating per-tensor scales for the weights, gradients, and activations. We apply this methodology to train and validate large language models of the GPT and Llama 2 families using FP8, for model sizes ranging from 111M to 70B parameters. To facilitate the understanding of FP8 dynamics, our results are accompanied by plots of the per-tensor scale distributions for weights, activations, and gradients during both training and inference.
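A minimal sketch of the dynamic per-tensor scaling idea, simulated with PyTorch's float8_e4m3fn dtype; the margin heuristic and the single-entry amax history are assumptions for illustration, not the paper's exact recipe.

```python
import torch

FP8_E4M3_MAX = 448.0   # largest representable magnitude in e4m3

def update_scale(amax_history, margin=2.0):
    # Map the recent maximum magnitude near the format's max, leaving a
    # safety margin against overflow on the next step (margin is assumed).
    return FP8_E4M3_MAX / (max(amax_history) * margin)

def fp8_quantize(t, scale):
    # Simulated FP8 cast: scale, saturate, then round via the e4m3 dtype.
    q = (t * scale).clamp(-FP8_E4M3_MAX, FP8_E4M3_MAX)
    return q.to(torch.float8_e4m3fn)

# Per-tensor amax histories for weights/activations/gradients would be
# tracked across training steps; a single weight tensor is shown here.
w = torch.randn(1024, 1024)
history = [w.abs().max().item()]
scale = update_scale(history)
w_fp8 = fp8_quantize(w, scale)             # stored/consumed in FP8
w_back = w_fp8.to(torch.float32) / scale   # dequantized for inspection
print(scale, (w - w_back).abs().max().item())
```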

The problem of multi-object tracking (MOT) consists in detecting and tracking all the objects in a video sequence while keeping a unique identifier for each object. It is a challenging and fundamental problem for robotics. In precision agriculture, the challenge of achieving a satisfactory solution is amplified by extreme camera motion, sudden illumination changes, and strong occlusions. Most modern trackers rely on the appearance of objects rather than motion for association, which can be ineffective when most targets are static objects with the same appearance, as in the agricultural case. To this end, following SORT [5], we propose AgriSORT, a simple, online, real-time tracking-by-detection pipeline for precision agriculture based only on motion information, which allows for accurate and fast propagation of tracks between frames. The main design goals of AgriSORT are efficiency, flexibility, minimal dependencies, and ease of deployment on robotic platforms. We test the proposed pipeline on a novel MOT benchmark specifically tailored to the agricultural context, based on video sequences taken in a table-grape vineyard, which is particularly challenging due to the strong self-similarity and density of the instances. Both the code and the dataset are available for future comparisons.
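A simplified motion-only tracking step in the spirit of SORT-style pipelines (constant-velocity prediction plus greedy IoU association); AgriSORT's camera-motion compensation and actual association logic are omitted, so treat this as a schematic with all parameters assumed.

```python
import numpy as np

def iou(a, b):
    # boxes as [x1, y1, x2, y2]
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

class Track:
    _next_id = 0
    def __init__(self, box):
        self.box = np.asarray(box, float)
        self.vel = np.zeros(4)               # per-coordinate velocity
        self.id = Track._next_id
        Track._next_id += 1
    def predict(self):
        self.box = self.box + self.vel       # constant-velocity motion model
    def update(self, box):
        box = np.asarray(box, float)
        self.vel = 0.5 * self.vel + 0.5 * (box - self.box)
        self.box = box

def step(tracks, detections, iou_thresh=0.3):
    for t in tracks:
        t.predict()
    unmatched = list(detections)
    for t in sorted(tracks, key=lambda t: t.id):   # greedy association
        if not unmatched:
            break
        best = max(unmatched, key=lambda d: iou(t.box, d))
        if iou(t.box, best) >= iou_thresh:
            t.update(best)
            unmatched.remove(best)
    tracks += [Track(d) for d in unmatched]        # spawn new tracks
    return tracks

tracks = step([], [[0, 0, 10, 10]])
tracks = step(tracks, [[1, 0, 11, 10]])
print([(t.id, t.box.tolist()) for t in tracks])    # same id kept across frames
```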

Principal variables analysis (PVA) is a technique for selecting a subset of variables that capture as much of the information in a dataset as possible. Existing approaches for PVA are based on the Pearson correlation matrix, which is not well-suited to describing the relationships between non-Gaussian variables. We propose a generalized approach to PVA enabling the use of different types of correlation, and we explore using Spearman, Gaussian copula, and polychoric correlations as alternatives to Pearson correlation when performing PVA. We compare performance in simulation studies varying the form of the true multivariate distribution over a wide range of possibilities. Our results show that on continuous non-Gaussian data, using generalized PVA with Gaussian copula or Spearman correlations provides a major improvement in performance compared to Pearson. Meanwhile, on ordinal data, generalized PVA with polychoric correlations outperforms the rest by a wide margin. We apply generalized PVA to a dataset of 102 clinical variables measured on individuals with X-linked dystonia parkinsonism (XDP), a rare neurodegenerative disorder, and we find that using different types of correlation yields substantively different sets of principal variables.
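A hedged sketch of the generalized-PVA idea: greedy forward selection of principal variables driven by a plug-in correlation matrix, so that swapping Spearman for Pearson changes only one line. The greedy criterion below (total variance explained linearly by the selected subset) is one common PVA variant, not necessarily the authors' estimator.

```python
import numpy as np
from scipy.stats import spearmanr

def explained(R, subset):
    # Total variance of all variables explained by `subset`, computed
    # from the correlation matrix alone: sum_i r_iS' R_SS^{-1} r_iS.
    R_ss_inv = np.linalg.pinv(R[np.ix_(subset, subset)])
    r = R[:, subset]
    return np.einsum("ij,jk,ik->", r, R_ss_inv, r)

def principal_variables(R, k):
    chosen = []
    for _ in range(k):
        rest = [j for j in range(R.shape[0]) if j not in chosen]
        chosen.append(max(rest, key=lambda j: explained(R, chosen + [j])))
    return chosen

rng = np.random.default_rng(1)
z = rng.normal(size=(500, 1))
X = np.hstack([z + 0.1 * rng.normal(size=(500, 3)),   # 3 redundant variables
               rng.normal(size=(500, 2))])            # 2 independent variables
R, _ = spearmanr(X)                                   # rank correlation matrix
print(principal_variables(R, 3))  # one redundant variable, then the other two
```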

This paper explores how and when to use common random number (CRN) simulation to evaluate MCMC convergence rates. We discuss how CRN simulation is closely related to theoretical convergence rate techniques such as one-shot coupling and coupling from the past. We present conditions under which the CRN technique generates an unbiased estimate of the Wasserstein distance between two random variables. We also discuss how unbiasedness of the Wasserstein distance between two Markov chains over a single iteration does not extend to unbiasedness over multiple iterations. We provide an upper bound on the Wasserstein distance of a Markov chain to its stationary distribution after $N$ steps in terms of averages over CRN simulations. We apply our result to a Bayesian regression Gibbs sampler.
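A minimal sketch of the CRN technique on a toy AR(1) kernel (a stand-in for a Gibbs update, not the paper's Bayesian regression sampler): both chains reuse the same innovations, and the averaged gap upper-bounds the Wasserstein-1 distance between the two time-$N$ laws.

```python
import numpy as np

def ar1_step(x, eps, rho=0.9):
    return rho * x + eps          # toy Markov kernel with shared noise eps

def crn_wasserstein_bound(x0, y0, n_steps, n_reps, seed=0):
    rng = np.random.default_rng(seed)
    gaps = []
    for _ in range(n_reps):
        x, y = x0, y0
        for _ in range(n_steps):
            eps = rng.normal()    # the *same* innovation drives both chains
            x, y = ar1_step(x, eps), ar1_step(y, eps)
        gaps.append(abs(x - y))
    return np.mean(gaps)          # average coupling gap >= W1 distance

for n in [1, 5, 10, 20]:
    print(n, crn_wasserstein_bound(x0=10.0, y0=-10.0, n_steps=n, n_reps=1000))
# The bound decays like rho**n, matching the chain's geometric convergence.
```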

We show how quantum-inspired 2d tensor networks can be used to efficiently and accurately simulate the largest quantum processors from IBM, namely Eagle (127 qubits), Osprey (433 qubits) and Condor (1121 qubits). We simulate the dynamics of a complex quantum many-body system -- specifically, the kicked Ising experiment considered recently by IBM in Nature 618, p. 500-505 (2023) -- using graph-based Projected Entangled Pair States (gPEPS), which were proposed by some of us in PRB 99, 195105 (2019). Our results show that simple tensor updates already suffice to achieve unprecedented accuracy with remarkably low computational resources for this model. Apart from simulating the original experiment for 127 qubits, we also extend our results to 433 and 1121 qubits, thus setting a benchmark for the newest IBM quantum machines. We also report accurate simulations for infinitely many qubits. Our results show that gPEPS are a natural tool for efficiently simulating quantum computers with an underlying lattice-based qubit connectivity, such as all quantum processors based on superconducting qubits.
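For orientation, the sketch below simulates the same kicked Ising circuit family exactly, by statevector, on a small 1D chain; this is a stand-in to show the benchmarked model, not a gPEPS implementation, and the angles and chain length are illustrative.

```python
import numpy as np

def kicked_ising(n, steps, theta_h, theta_zz=-np.pi / 2):
    psi = np.zeros(2 ** n, dtype=complex)
    psi[0] = 1.0                                   # start in |00...0>
    rx = np.array([[np.cos(theta_h / 2), -1j * np.sin(theta_h / 2)],
                   [-1j * np.sin(theta_h / 2), np.cos(theta_h / 2)]])
    z = np.array([1, -1])
    phase = np.exp(-1j * theta_zz / 2 * np.einsum("i,j->ij", z, z))
    for _ in range(steps):
        # transverse "kick": RX(theta_h) on every qubit
        psi = psi.reshape([2] * n)
        for q in range(n):
            psi = np.moveaxis(np.tensordot(rx, psi, axes=([1], [q])), 0, q)
        psi = psi.reshape(-1)
        # Ising coupling: diagonal ZZ rotation on each nearest-neighbor bond
        for q in range(n - 1):
            psi = (psi.reshape([2 ** q, 2, 2, -1]) *
                   phase.reshape(1, 2, 2, 1)).reshape(-1)
    return psi

psi = kicked_ising(n=8, steps=5, theta_h=0.6)
probs = np.abs(psi.reshape(2, -1)) ** 2
print("<Z_0> =", float(np.sum(probs * np.array([1, -1])[:, None])))
```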

Predicting startup success presents a formidable challenge due to the inherently volatile landscape of the entrepreneurial ecosystem. The advent of extensive databases like Crunchbase, together with available open data, enables the application of machine learning and artificial intelligence for more accurate predictive analytics. This paper focuses on startups at their Series B and Series C investment stages, aiming to predict key success milestones such as achieving an Initial Public Offering (IPO), attaining unicorn status, or executing a successful Merger and Acquisition (M&A). We introduce a novel deep learning model for predicting startup success that integrates a variety of factors such as funding metrics, founder features, and industry category. A distinctive feature of our research is the use of a comprehensive backtesting algorithm designed to simulate the venture capital investment process. This simulation allows for a robust evaluation of the model's performance against historical data, providing actionable insights into its practical utility in real-world investment contexts. Evaluating our model on Crunchbase data, we achieved 14-fold capital growth and successfully identified high-potential startups at their Series B round, including Revolut, DigitalOcean, Klarna, GitHub, and others. Our empirical findings illuminate the importance of incorporating diverse feature sets in enhancing the model's predictive accuracy. In summary, our work demonstrates the considerable promise of deep learning models and alternative unstructured data in predicting startup success, and sets the stage for future advancements in this research area.
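An illustrative version of such a backtest loop; the column names and the equal-weight allocation are hypothetical, intended only to show the pick-invest-compound structure of simulating a venture investment process against historical data.

```python
import pandas as pd

def backtest(deals: pd.DataFrame, k: int = 5, capital: float = 1.0):
    # deals columns (assumed schema): period, model_score, exit_multiple.
    # Each period, "invest" equally in the k highest-scored startups and
    # realize returns at their (historically known) exit multiples.
    for _, cohort in deals.sort_values("period").groupby("period"):
        picks = cohort.nlargest(k, "model_score")
        stake = capital / len(picks)                 # equal-weight allocation
        capital = (stake * picks["exit_multiple"]).sum()
    return capital

deals = pd.DataFrame({
    "period":        [1, 1, 1, 2, 2, 2],
    "model_score":   [0.9, 0.2, 0.8, 0.7, 0.95, 0.1],
    "exit_multiple": [3.0, 0.0, 1.5, 0.5, 10.0, 0.0],  # 0 = write-off
})
print(backtest(deals, k=2))  # compounded capital across the two cohorts
```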

Previous research primarily characterized price movements according to time intervals, resulting in temporal discontinuity and overlooking crucial activities in financial markets. Directional Change (DC) is an alternative approach to sampling price data that highlights significant points while blurring out noisy details in price movements. However, traditional DC treats the thresholds of upward and downward trends, which have distinct intrinsic patterns, as equivalent, and presets them as fixed values dependent on the subjective judgment of traders. To enhance the generalization performance of this methodology, we improve DC by introducing a modified threshold-selection technique. Specifically, we treat upward and downward trends distinctly by incorporating a decay coefficient, and we simultaneously optimize the threshold and decay coefficient using the Bayesian Optimization Algorithm (BOA). Additionally, we recognize abnormal market states via regime-change detection based on a Hidden Markov Model (RCD-HMM) to reduce risk. Our Intelligent Trading Algorithm (ITA) is constructed from the above methods, and experiments were carried out on tick data from diverse currency pairs in the forex market. The experimental results showed a significant increase in profit and a reduction in risk for DC-based trading strategies, demonstrating the effectiveness of the proposed methods.
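A schematic of DC event detection with distinct upward and downward thresholds; the way the decay coefficient enters below is an assumption made only to render the asymmetric-threshold idea concrete, not the paper's exact formulation.

```python
def dc_events(prices, theta_up=0.004, theta_down=0.006, decay=1.0):
    # Sample significant turning points: a downturn (upturn) is confirmed
    # when price retreats from the running extreme by theta_down (theta_up).
    events, trend = [], "up"
    extreme = prices[0]                     # running extreme since last event
    for t, p in enumerate(prices):
        if trend == "up":
            extreme = max(extreme, p)
            if p <= extreme * (1 - theta_down):
                events.append((t, "down"))  # downturn confirmed
                trend, extreme = "down", p
                theta_down *= decay         # assumed form of threshold decay
        else:
            extreme = min(extreme, p)
            if p >= extreme * (1 + theta_up):
                events.append((t, "up"))    # upturn confirmed
                trend, extreme = "up", p
                theta_up *= decay
    return events

ticks = [1.000, 1.002, 1.005, 1.001, 0.998, 0.997, 1.000, 1.003]
print(dc_events(ticks))  # [(4, 'down'), (7, 'up')]
```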

Large-scale communication networks, such as the internet, rely on routing packets of data through multiple intermediate nodes to transmit information from a sender to a receiver. In this paper, we develop a model of a quantum communication network that routes information simultaneously along multiple paths passing through intermediate stations. We demonstrate that a quantum routing approach can in principle extend the distance over which information can be transmitted reliably. Surprisingly, the benefit of quantum routing also applies to the transmission of classical information: even if the transmitted data is purely classical, delocalising it on multiple routes can enhance the achievable transmission distance. Our findings highlight the potential of a future quantum internet not only for achieving secure quantum communication and distributed quantum computing but also for extending the range of classical data transmission.
