
This paper addresses the challenging problem of scheduling coflows with release times, with the objective of minimizing the total weighted completion time. Previous work has concentrated predominantly on determining the scheduling order of coflows; we advance this line of research by additionally optimizing the scheduling order of individual flows. The proposed approximation algorithm achieves approximation ratios of $3$ and $2+\frac{1}{LB}$ for arbitrary and zero release times, respectively, where $LB$ is the minimum lower bound on the coflow completion time. To further improve the time complexity, we streamline the linear program by employing an interval-indexed relaxation, thereby reducing the number of variables. As a result, for any $\epsilon>0$, the approximation algorithm achieves approximation ratios of $3+\epsilon$ and $2+\epsilon$ for arbitrary and zero release times, respectively. Notably, these results improve on the previously best-known approximation ratios of $5$ and $4$ for arbitrary and zero release times, respectively, established by Shafiee and Ghaderi.
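As a concrete illustration of the interval-indexed idea, the minimal sketch below (a hypothetical helper, not the paper's LP) generates geometrically spaced interval endpoints $(1+\epsilon)^0, (1+\epsilon)^1, \dots$; since the number of intervals grows only logarithmically in the time horizon, the number of LP variables shrinks accordingly.

```python
def interval_endpoints(horizon, eps):
    """Geometrically spaced time points 1, (1+eps), (1+eps)^2, ... covering
    [0, horizon]. An interval-indexed LP uses one variable per (flow, interval)
    pair, so the variable count scales with log(horizon)/eps rather than horizon."""
    points = [1.0]
    while points[-1] < horizon:
        points.append(points[-1] * (1.0 + eps))
    return points

# Example: a horizon of 10**6 time units with eps = 0.1 needs only about
# 146 endpoints (roughly 145 intervals) instead of 10**6 time indices.
print(len(interval_endpoints(10**6, 0.1)))
```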

Related content

We study cyclic scheduling for generate-at-will (GAW) multi-source status update systems with heterogeneous service times and packet drop probabilities, with the aim of minimizing the weighted sum age of information (AoI), called system AoI, or the weighted sum peak AoI (PAoI), called system PAoI. In particular, we obtain well-performing cyclic schedulers which can easily scale to thousands of information sources and which also have low online implementation complexity. The proposed schedulers are comparatively studied against existing scheduling algorithms in terms of computational load and system AoI/PAoI performance, to validate their effectiveness.
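To make the notion of system AoI under a cyclic schedule concrete, here is a minimal simulation sketch; the helper name `simulate_system_aoi`, all parameter values, and the plain round-robin cycle are illustrative assumptions, not the paper's optimized cyclic schedulers.

```python
import random

def simulate_system_aoi(cycle, service, drop_prob, weights, rounds=10000, seed=0):
    """Simulate a generate-at-will cyclic schedule and return the weighted
    time-average AoI ("system AoI"). `cycle` is a list of source indices
    visited repeatedly; each source has its own service time and drop probability."""
    rng = random.Random(seed)
    n = len(weights)
    now = 0.0
    age = [0.0] * n                 # current AoI of each source
    area = [0.0] * n                # integral of AoI over time
    for _ in range(rounds):
        for s in cycle:
            dt = service[s]
            # every source's AoI grows linearly while source s is being served
            for i in range(n):
                area[i] += age[i] * dt + 0.5 * dt * dt
                age[i] += dt
            # a successful delivery resets source s's AoI to the packet's age (its service time)
            if rng.random() > drop_prob[s]:
                age[s] = dt
            now += dt
    return sum(w * a / now for w, a in zip(weights, area))

# Example with two sources and the simple round-robin cycle [0, 1]:
print(simulate_system_aoi([0, 1], service=[1.0, 2.0],
                          drop_prob=[0.1, 0.3], weights=[0.7, 0.3]))
```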

We address long-standing open questions raised by Williamson, Goemans, Vazirani and Mihail pertaining to the design of approximation algorithms for problems in network design via the primal-dual method (Combinatorica 15(3):435-454, 1995). Williamson et al. prove an approximation guarantee of two for connectivity augmentation problems where the connectivity requirements can be specified by so-called uncrossable functions. They state: ``Extending our algorithm to handle non-uncrossable functions remains a challenging open problem. The key feature of uncrossable functions is that there exists an optimal dual solution which is laminar. This property characterizes uncrossable functions\dots\ A larger open issue is to explore further the power of the primal-dual approach for obtaining approximation algorithms for other combinatorial optimization problems.'' Our main result proves that the primal-dual algorithm of Williamson et al. achieves an approximation ratio of 16 for a class of functions that generalizes the notion of an uncrossable function. There exist instances that can be handled by our methods where none of the optimal dual solutions has a laminar support. We present three applications of our main result. (1) A 16-approximation algorithm for augmenting a family of small cuts of a graph $G$. (2) A $16 \cdot {\lceil k/u_{min} \rceil}$-approximation algorithm for the Cap-$k$-ECSS problem which is as follows: Given an undirected graph $G = (V,E)$ with edge costs $c \in \mathbb{Q}_{\geq 0}^E$ and edge capacities $u \in \mathbb{Z}_{\geq 0}^E$, find a minimum-cost subset of the edges $F\subseteq E$ such that the capacity of any cut in $(V,F)$ is at least $k$; we use $u_{min}$ to denote the minimum capacity of an edge in $E$. (3) An $O(1)$-approximation algorithm for the model of $(p,2)$-Flexible Graph Connectivity.
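As a small companion to the Cap-$k$-ECSS definition, the sketch below only checks whether a candidate edge set $F$ meets the cut-capacity requirement, using a global minimum cut; it is a hypothetical feasibility helper for intuition, not the primal-dual approximation algorithm itself.

```python
import networkx as nx

def is_cap_k_feasible(edges, capacities, k):
    """Cap-k-ECSS feasibility check: every cut of (V, F) must have total
    capacity at least k, which is equivalent to the global minimum cut
    (Stoer-Wagner) of the capacitated graph being at least k."""
    G = nx.Graph()
    for (u, v), cap in zip(edges, capacities):
        G.add_edge(u, v, weight=cap)
    if not nx.is_connected(G):
        return False                      # a disconnected graph has a cut of capacity 0
    cut_value, _ = nx.stoer_wagner(G)     # smallest capacity over all cuts
    return cut_value >= k

# Example: a triangle with unit capacities has global minimum cut 2, so k = 2 is feasible.
print(is_cap_k_feasible([(0, 1), (1, 2), (0, 2)], [1, 1, 1], k=2))
```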

This paper delves into the analysis of nonlinear deformation induced by dielectric actuation in pre-stressed ideal dielectric elastomers. It formulates a nonlinear ordinary differential equation governing this deformation based on the hyperelastic model under dielectric stress. Through numerical integration and neural network approximations, the relationship between voltage and stretch is established. Neural networks are employed to approximate solutions for voltage-to-stretch and stretch-to-voltage transformations obtained via an explicit Runge-Kutta method. The effectiveness of these approximations is demonstrated by leveraging them for compensating nonlinearity through the waveshaping of the input signal. The comparative analysis highlights the superior accuracy of the approximated solutions over baseline methods, resulting in minimized harmonic distortions when utilizing dielectric elastomers as acoustic actuators. This study underscores the efficacy of the proposed approach in mitigating nonlinearities and enhancing the performance of dielectric elastomers in acoustic actuation applications.
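The workflow described above (an explicit Runge-Kutta solve followed by a neural-network surrogate used for waveshaping) can be sketched as follows; note that `toy_rhs` is a deliberately simplified stand-in for the paper's hyperelastic ODE, and every constant below is illustrative rather than taken from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp
from sklearn.neural_network import MLPRegressor

# Toy stand-in for the deformation ODE: a first-order relaxation of the stretch
# toward a voltage-dependent equilibrium (the Maxwell stress of an ideal
# dielectric scales with V^2). The paper's actual hyperelastic ODE is not reproduced.
def toy_rhs(t, lam, v_of_t, a=0.3, c=5.0):
    return c * (1.0 + a * v_of_t(t) ** 2 - lam)

v_ramp = lambda t: t  # slow normalised voltage ramp, so the response is quasi-static
sol = solve_ivp(toy_rhs, (0.0, 1.0), [1.0], args=(v_ramp,),
                method="RK45", dense_output=True, max_step=1e-3)

t_samples = np.linspace(0.05, 1.0, 200)
stretch = sol.sol(t_samples)[0]
voltage = v_ramp(t_samples)

# Approximate the inverse (stretch -> voltage) map with a small MLP and use it
# to pre-distort ("waveshape") a desired sinusoidal stretch trajectory.
inv_model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
inv_model.fit(stretch.reshape(-1, 1), voltage)

mid = 0.5 * (stretch.min() + stretch.max())
amp = 0.4 * (stretch.max() - stretch.min())
desired_stretch = mid + amp * np.sin(2 * np.pi * 5 * t_samples)
drive_voltage = inv_model.predict(desired_stretch.reshape(-1, 1))
```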

In the dynamic and data-driven landscape of financial markets, this paper introduces MarketSenseAI, a novel AI-driven framework leveraging the advanced reasoning capabilities of GPT-4 for scalable stock selection. MarketSenseAI incorporates Chain-of-Thought and In-Context Learning methodologies to analyze a wide array of data sources, including market price dynamics, financial news, company fundamentals, and macroeconomic reports, emulating the decision-making process of prominent financial investment teams. The development, implementation, and empirical validation of MarketSenseAI are detailed, with a focus on its ability to provide actionable investment signals (buy, hold, sell) backed by cogent explanations. A notable aspect of this study is the use of GPT-4 not only as a predictive tool but also as an evaluator, revealing the significant impact of the AI-generated explanations on the reliability and acceptance of the suggested investment signals. In an extensive empirical evaluation on S&P 100 stocks, MarketSenseAI outperformed the benchmark index by 13%, achieving returns of up to 40%, while maintaining a risk profile comparable to the market. These results demonstrate the efficacy of Large Language Models in complex financial decision-making and mark a significant advancement in the integration of AI into financial analysis and investment strategies. This research contributes to the financial AI field by presenting an innovative approach and underscoring the transformative potential of AI in revolutionizing traditional financial analysis and investment methodologies.
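For intuition, a single call in the spirit of the framework might look like the sketch below; the function `investment_signal`, the prompt wording, and the model and temperature settings are assumptions for illustration and do not reproduce MarketSenseAI's actual prompts, data pipeline, or GPT-4 evaluator loop.

```python
from openai import OpenAI  # assumes OPENAI_API_KEY is set in the environment

client = OpenAI()

def investment_signal(ticker, price_summary, news_summary, fundamentals, macro):
    """Illustrative single prompt: the model is asked to reason step by step
    over several data summaries (chain-of-thought style) and emit a
    buy/hold/sell signal with a short justification."""
    prompt = (
        f"You are an equity analyst. Reason step by step over the evidence, then "
        f"give a final signal (BUY, HOLD or SELL) for {ticker} with a short justification.\n\n"
        f"Price dynamics: {price_summary}\n"
        f"Recent news: {news_summary}\n"
        f"Fundamentals: {fundamentals}\n"
        f"Macro environment: {macro}\n"
    )
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=0.2,
    )
    return resp.choices[0].message.content
```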

In this paper, we explore a continuous modeling approach for deep-learning-based speech enhancement, focusing on the denoising process. We use a state variable to represent the denoising process: the starting state is noisy speech, the ending state is clean speech, and the noise component of the state variable decreases as the state index advances, until it reaches zero. During training, a UNet-like neural network learns to estimate every state variable sampled from the continuous denoising process. At test time, we feed a controlling factor, ranging from zero to one, to the neural network as an embedding, allowing us to control the level of noise reduction. This approach enables controllable speech enhancement and is adaptable to various application scenarios. Experimental results indicate that preserving a small amount of noise in the clean target benefits speech enhancement, as evidenced by improvements in both objective speech measures and automatic speech recognition performance.
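A minimal sketch of how such intermediate states and the controlling-factor conditioning could be constructed is given below; the linear noise schedule and the sinusoidal embedding are illustrative assumptions rather than the paper's exact formulation.

```python
import torch

def make_training_pair(clean, noise):
    """Construct an intermediate state of the continuous denoising process.
    State index tau = 0 gives the noisy mixture and tau = 1 the clean target;
    the noise component shrinks linearly as tau grows (illustrative schedule)."""
    tau = torch.rand(())                        # random state index in [0, 1]
    noisy = clean + noise
    target_state = clean + (1.0 - tau) * noise  # noise component scaled by (1 - tau)
    return noisy, target_state, tau

def tau_embedding(tau, dim=64):
    """Sinusoidal embedding of the controlling factor, fed to the UNet so one
    model can be steered to any noise-reduction level at test time."""
    freqs = torch.exp(torch.linspace(0.0, 6.0, dim // 2))
    angles = tau * freqs
    return torch.cat([torch.sin(angles), torch.cos(angles)])

# Usage: the UNet takes (noisy, tau_embedding(tau)) and is trained to predict
# target_state; at inference tau is chosen by the user, e.g. tau = 0.9 keeps a
# small residual amount of noise.
```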

With the widespread use of biometric recognition, several issues related to the privacy and security provided by this technology have been recently raised and analysed. As a result, the early common belief among the biometrics community of template irreversibility has been proven wrong. It is now an accepted fact that it is possible to reconstruct from an unprotected template a synthetic sample that matches the bona fide one. This reverse engineering process, commonly referred to as \textit{inverse biometrics}, constitutes a severe threat to biometric systems from two different angles: on the one hand, sensitive personal data (i.e., biometric data) can be derived from compromised unprotected templates; on the other hand, other powerful attacks can be launched building upon these reconstructed samples. Given its important implications, biometric stakeholders have produced over the last fifteen years numerous works analysing the different aspects related to inverse biometrics: development of reconstruction algorithms for different characteristics; proposal of methodologies to assess the vulnerabilities of biometric systems to these algorithms; and development of countermeasures to reduce the possible effects of attacks. The present article is an effort to condense all this information in one comprehensive review of: the problem itself, the evaluation of the problem, and the mitigation of the problem.

In this paper we prove convergence rates for time discretisation schemes for semi-linear stochastic evolution equations with additive or multiplicative Gaussian noise, where the leading operator $A$ is the generator of a strongly continuous semigroup $S$ on a Hilbert space $X$, and the focus is on non-parabolic problems. The main results are optimal bounds for the uniform strong error $$\mathrm{E}_{k}^{\infty} := \Big(\mathbb{E} \sup_{j\in \{0, \ldots, N_k\}} \|U(t_j) - U^j\|^p\Big)^{1/p},$$ where $p \in [2,\infty)$, $U$ is the mild solution, $U^j$ is obtained from a time discretisation scheme, $k$ is the step size, and $N_k = T/k$. The usual schemes, such as exponential Euler, implicit Euler, and Crank-Nicolson, are included as special cases. Under conditions on the nonlinearity and the noise, we show, for a large class of time discretisation schemes,
- $\mathrm{E}_{k}^{\infty}\lesssim k \log(T/k)$ (linear equation, additive noise, general $S$);
- $\mathrm{E}_{k}^{\infty}\lesssim \sqrt{k} \log(T/k)$ (nonlinear equation, multiplicative noise, contractive $S$);
- $\mathrm{E}_{k}^{\infty}\lesssim k \log(T/k)$ (nonlinear wave equation, multiplicative noise).
The logarithmic factor can be removed if the exponential Euler method is used with a (quasi)-contractive $S$. The obtained bounds coincide with the optimal bounds for SDEs. Most of the existing literature is concerned with bounds for the simpler pointwise strong error $$\mathrm{E}_k:=\bigg(\sup_{j\in \{0,\ldots,N_k\}}\mathbb{E} \|U(t_j) - U^{j}\|^p\bigg)^{1/p}.$$ Applications to Maxwell equations, Schr\"odinger equations, and wave equations are included. For these equations, our results improve and reprove several existing results with a unified method and provide the first results known for implicit Euler and Crank-Nicolson.
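For concreteness, one standard variant of the exponential Euler scheme mentioned above, applied to the semi-linear equation $dU(t) = (AU(t) + F(U(t)))\,dt + G(U(t))\,dW(t)$ with $U(0) = u_0$, step size $k$, grid points $t_j = jk$, and Wiener increments $\Delta W_j = W(t_{j+1}) - W(t_j)$, reads $$U^{j+1} = S(k)\,U^{j} + k\,S(k)\,F(U^{j}) + S(k)\,G(U^{j})\,\Delta W_j, \qquad U^0 = u_0;$$ this is one common formulation, and the scheme class treated in the paper may be parameterised differently.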

In pace with developments in the research field of artificial intelligence, knowledge graphs (KGs) have attracted a surge of interest from both academia and industry. As a representation of semantic relations between entities, KGs have proven to be particularly relevant for natural language processing (NLP), experiencing a rapid spread and wide adoption within recent years. Given the increasing amount of research work in this area, several KG-related approaches have been surveyed in the NLP research community. However, a comprehensive study that categorizes established topics and reviews the maturity of individual research streams remains absent to this day. Contributing to closing this gap, we systematically analyzed 507 papers from the literature on KGs in NLP. Our survey encompasses a multifaceted review of tasks, research types, and contributions. As a result, we present a structured overview of the research landscape, provide a taxonomy of tasks, summarize our findings, and highlight directions for future work.

This paper aims at revisiting Graph Convolutional Neural Networks by bridging the gap between the spectral and spatial design of graph convolutions. We theoretically demonstrate an equivalence of the graph convolution process regardless of whether it is designed in the spatial or the spectral domain. The resulting general framework allows us to carry out a spectral analysis of the most popular ConvGNNs, explaining their performance and revealing their limits. Moreover, the proposed framework is used to design new convolutions in the spectral domain with a custom frequency profile while applying them in the spatial domain. We also propose a generalization of the depthwise separable convolution framework for graph convolutional networks, which decreases the total number of trainable parameters while preserving the capacity of the model. To the best of our knowledge, such a framework has never been used in the GNN literature. Our proposals are evaluated on both transductive and inductive graph learning problems. The obtained results show the relevance of the proposed method and provide some of the first experimental evidence of the transferability of spectral filter coefficients from one graph to another. Our source code is publicly available at: //github.com/balcilar/Spectral-Designed-Graph-Convolutions
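The depthwise separable idea can be sketched as follows: each convolution support gets only a per-channel scaling vector, and a single shared linear layer performs the pointwise channel mixing, so the parameter count drops from roughly $S \cdot d_{in} \cdot d_{out}$ to $S \cdot d_{in} + d_{in} \cdot d_{out}$ for $S$ supports. The class below is a hypothetical PyTorch sketch of this idea, not the authors' released implementation.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableGraphConv(nn.Module):
    """Depthwise separable graph convolution sketch: per-support, per-channel
    (depthwise) scaling followed by one shared pointwise channel mixing."""
    def __init__(self, in_dim, out_dim, num_supports):
        super().__init__()
        # one depthwise scaling vector per support: num_supports * in_dim parameters
        self.depthwise = nn.Parameter(torch.ones(num_supports, in_dim))
        # shared pointwise mixing: in_dim * out_dim parameters
        self.pointwise = nn.Linear(in_dim, out_dim)

    def forward(self, supports, x):
        # supports: list of dense (N x N) convolution supports; x: (N x in_dim) features
        acc = 0
        for s, C in enumerate(supports):
            acc = acc + (C @ x) * self.depthwise[s]   # depthwise: per-channel scaling
        return torch.relu(self.pointwise(acc))        # pointwise: shared channel mixing
```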

Object detection typically assumes that training and test data are drawn from an identical distribution, which, however, does not always hold in practice. Such a distribution mismatch leads to a significant performance drop. In this work, we aim to improve the cross-domain robustness of object detection. We tackle the domain shift on two levels: 1) the image-level shift, such as image style, illumination, etc., and 2) the instance-level shift, such as object appearance, size, etc. We build our approach on the recent state-of-the-art Faster R-CNN model and design two domain adaptation components, at the image level and the instance level, to reduce the domain discrepancy. The two domain adaptation components are based on H-divergence theory and are implemented by learning a domain classifier in an adversarial training manner. The domain classifiers on the two levels are further reinforced with a consistency regularization to learn a domain-invariant region proposal network (RPN) in the Faster R-CNN model. We evaluate our newly proposed approach using multiple datasets, including Cityscapes, KITTI, SIM10K, etc. The results demonstrate the effectiveness of our proposed approach for robust object detection in various domain shift scenarios.
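Adversarial domain classifiers of this kind are typically realised with a gradient reversal layer; the sketch below shows an image-level variant (the channel sizes, the $\lambda$ value, and the class names are illustrative, and the instance-level branch and the consistency regularizer are omitted).

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Gradient reversal: identity in the forward pass, negated (scaled) gradient
    in the backward pass, so the features are trained adversarially against
    the domain classifier."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

class ImageLevelDomainClassifier(nn.Module):
    """Image-level domain classifier attached to backbone feature maps."""
    def __init__(self, in_channels=1024, lam=0.1):
        super().__init__()
        self.lam = lam
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 256, kernel_size=1), nn.ReLU(inplace=True),
            nn.Conv2d(256, 1, kernel_size=1),  # per-location source/target logit
        )

    def forward(self, feat):
        feat = GradReverse.apply(feat, self.lam)
        return self.net(feat)

# Training sketch: logits = clf(backbone_features);
# domain_loss = F.binary_cross_entropy_with_logits(logits, domain_labels)
```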
