
In this paper, we prove the measurability of the event that a general continuous-time stochastic process satisfies a continuous-time Metric Temporal Logic (MTL) formula. Continuous-time MTL can express temporal constraints on physical systems in a natural way, and several studies accordingly consider the probability of continuous MTL semantics for stochastic processes. However, proving measurability of such events is by no means obvious, even though it is essential. The difficulty comes from the semantics of the until operator, which is defined as a disjunction over uncountably many propositions. Since proving the measurability of such an event with classical measure-theoretic methods is difficult, we employ a theorem from stochastic analysis that is used to prove the measurability of hitting times of stochastic processes and is a deep result in the theory of capacity. Next, we provide an example illustrating that probability approximation can fail when the continuous semantics of MTL formulas is discretized in time. Additionally, we prove that the probability of the discretized semantics converges to that of the continuous semantics when diamond operators are restricted so that they cannot be nested.
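For concreteness, one standard way of writing the continuous-time until semantics (the notation here is assumed, not quoted from the paper) makes the source of the difficulty visible: the definition quantifies over an uncountable set of time points.

\[
(x, t) \models \varphi\, \mathcal{U}_{[a,b]}\, \psi
\;\Longleftrightarrow\;
\exists\, t' \in [t+a,\, t+b]:\ (x, t') \models \psi
\ \wedge\ \forall\, t'' \in [t,\, t']:\ (x, t'') \models \varphi .
\]

Written this way, the satisfaction event is an uncountable union (over t') of intersections of elementary events, so its measurability does not follow from the closure properties of a sigma-algebra alone.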

Related content


In this paper, we consider a reconfigurable intelligent surface (RIS)-assisted multiple-input multiple-output (MIMO) communication system with multiple antennas at both the base station (BS) and the user. We aim to maximize the achievable rate by jointly optimizing the transmit precoding matrix, the receive combining matrix, and the RIS reflection matrix, subject to the transmit power constraint at the BS and the unit-modulus constraint on the RIS reflection coefficients. Because the problem is difficult to handle in its original form, we first reformulate it into an equivalent, more tractable problem by exploiting the relationship between the achievable rate and the weighted minimum mean squared error. Then, the transmit precoding matrix, the receive combining matrix, and the RIS reflection matrix are alternately optimized. In particular, the optimal transmit precoding matrix and receive combining matrix are obtained in closed form. Furthermore, a pair of computationally efficient methods are proposed for the RIS reflection matrix, namely the semi-definite relaxation (SDR) method and the successive closed form (SCF) method. We theoretically prove that both methods are guaranteed to converge, and that the SCF-based algorithm converges to a Karush-Kuhn-Tucker (KKT) point of the problem.
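As a small illustration of the objective being optimized (a NumPy sketch with assumed variable names and dimensions, not the paper's algorithm), the effective channel through the RIS is H_eff = H_d + H_r diag(theta) G with unit-modulus entries in theta, and the achievable rate is the log-determinant below. An alternating scheme of the kind described would update the precoder, the combiner, and theta in turn while tracking this quantity.

```python
import numpy as np

# Hypothetical dimensions: BS antennas Nt, user antennas Nr, RIS elements M.
Nt, Nr, M, sigma2 = 8, 4, 64, 1e-2
rng = np.random.default_rng(0)

H_d = rng.normal(size=(Nr, Nt)) + 1j * rng.normal(size=(Nr, Nt))   # direct BS-user link
G   = rng.normal(size=(M, Nt))  + 1j * rng.normal(size=(M, Nt))    # BS-RIS link
H_r = rng.normal(size=(Nr, M))  + 1j * rng.normal(size=(Nr, M))    # RIS-user link
F   = rng.normal(size=(Nt, Nr)) + 1j * rng.normal(size=(Nt, Nr))   # transmit precoder (power constraint enforced in the real optimization)
theta = np.exp(1j * rng.uniform(0, 2 * np.pi, size=M))             # unit-modulus reflection coefficients

def achievable_rate(theta, F):
    """Rate (bits/s/Hz) of the RIS-assisted MIMO link for a given reflection vector and precoder."""
    H_eff = H_d + H_r @ np.diag(theta) @ G
    S = np.eye(Nr) + (H_eff @ F @ F.conj().T @ H_eff.conj().T) / sigma2
    return float(np.log2(np.linalg.det(S).real))

print(achievable_rate(theta, F))
```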

In this paper, we use a new method to prove cut-elimination for weak intuitionistic tense logic. The method is based on splitting the contraction and cut rules. Further general theory and applications of this method will be developed in future work.

In this paper, we make the first attempt to understand and test the potential computation-efficiency robustness of state-of-the-art LLMs. By analyzing the working mechanism and implementation of 20,543 publicly accessible LLMs, we observe a fundamental property of LLMs that can be manipulated in an adversarial manner to significantly reduce computation efficiency. Our key idea is to generate test inputs that sufficiently delay the generation of the EOS token, so that the LLM must run through enough decoding iterations to reach the pre-configured threshold. We present \tool, which works in both the white-box and black-box settings. In the white-box setting, \tool develops a gradient-guided technique that searches for a minimal and unnoticeable perturbation at the character, token, and structure levels. In the black-box setting, \tool employs a causal-inference-based approach to find critical tokens and similarly applies the three levels of imperceptible perturbation to them. Both settings effectively delay the appearance of EOS, forcing the inputs to reach the otherwise-unreachable threshold. To demonstrate the effectiveness of \tool, we conduct a systematic evaluation on nine publicly available LLMs: Google T5, AllenAI WMT14, Helsinki-NLP translator, Facebook FairSeq, UNICAMP-DL translator, MarianMT, Google FLAN-T5, MBZUAI LaMini-GPT, and Salesforce CodeGen. Experimental results show that \tool can increase LLMs' response latency and energy consumption by 325\% to 3244\% and 344\% to 3616\% on average, respectively, by perturbing just one character or token in the input sentence.
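The cost model the attack exploits can be made concrete with a toy greedy-decoding loop (a self-contained schematic sketch with a stand-in model, not \tool or any real LLM): decoding stops at EOS or at a maximum-length threshold, so every step by which EOS is delayed adds one full forward pass.

```python
import random

EOS = 0  # hypothetical end-of-sequence token id

class ToyLM:
    """Stand-in for an autoregressive LLM: emits EOS with probability p_eos each step."""
    def __init__(self, p_eos):
        self.p_eos = p_eos
    def predict_next(self, ids):
        # One "forward pass": the dominant per-step cost in a real model.
        return EOS if random.random() < self.p_eos else random.randint(1, 999)

def greedy_decode(model, prompt_ids, max_new_tokens=512):
    ids, steps = list(prompt_ids), 0
    for _ in range(max_new_tokens):
        nxt = model.predict_next(ids)
        steps += 1
        ids.append(nxt)
        if nxt == EOS:          # an adversarial input pushes this ever later
            break
    return steps

random.seed(0)
benign   = sum(greedy_decode(ToyLM(0.10),  [1, 2, 3]) for _ in range(100)) / 100
attacked = sum(greedy_decode(ToyLM(0.001), [1, 2, 3]) for _ in range(100)) / 100
print(benign, attacked)   # suppressing EOS drives the step count toward the threshold
```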

In this paper, we generalize the Jacobi eigenvalue algorithm to compute all eigenvalues and eigenvectors of a dual quaternion Hermitian matrix and show its convergence. We also propose a three-step Jacobi eigenvalue algorithm to compute the eigenvalues when a dual quaternion Hermitian matrix has two eigenvalues with identical standard parts but different dual parts, and prove its convergence. Numerical experiments illustrate the efficiency and stability of the proposed Jacobi eigenvalue algorithm compared to the power method and the Rayleigh quotient iteration method.
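For readers unfamiliar with the classical algorithm being generalized, a minimal cyclic Jacobi iteration for an ordinary real symmetric matrix (not the dual quaternion case treated in the paper) looks as follows; each rotation zeroes one off-diagonal pair, and the off-diagonal mass decreases monotonically.

```python
import numpy as np

def jacobi_eigen(A, tol=1e-12, max_sweeps=50):
    """Cyclic Jacobi iteration for a real symmetric matrix A.
    Returns approximate eigenvalues and an orthogonal matrix of eigenvectors."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    V = np.eye(n)
    for _ in range(max_sweeps):
        if np.sqrt(np.sum(np.tril(A, -1) ** 2)) < tol:   # off-diagonal norm
            break
        for p in range(n - 1):
            for q in range(p + 1, n):
                if abs(A[p, q]) < tol:
                    continue
                # Rotation angle chosen so that the (p, q) entry becomes zero.
                theta = 0.5 * np.arctan2(2.0 * A[p, q], A[q, q] - A[p, p])
                c, s = np.cos(theta), np.sin(theta)
                J = np.eye(n)
                J[p, p] = J[q, q] = c
                J[p, q], J[q, p] = s, -s
                A = J.T @ A @ J
                V = V @ J
    return np.diag(A), V

rng = np.random.default_rng(0)
B = rng.normal(size=(5, 5))
S = (B + B.T) / 2
w, V = jacobi_eigen(S)
print(np.allclose(np.sort(w), np.sort(np.linalg.eigvalsh(S))))   # True
```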

In this work we study the rate-distortion function (RDF) for lossy compression of asynchronously sampled continuous-time (CT) wide-sense cyclostationary (WSCS) Gaussian processes with memory. As the case of synchronous sampling, i.e., when the sampling interval is commensurate with the period of the cyclostationary statistics, has already been studied, we focus on discrete-time (DT) processes obtained by asynchronous sampling, i.e., when the sampling interval is incommensurate with the period of the cyclostationary statistics of the CT WSCS source process. It is further assumed that the sampling interval is smaller than the maximal autocorrelation length of the CT source process, which implies that the DT process possesses memory. Thus, the sampled process is a DT wide-sense almost cyclostationary (WSACS) process with memory. This problem is motivated by the fact that man-made communication signals are modelled as CT WSCS processes; hence, applications of such sampling include, e.g., compress-and-forward relaying and recording systems. The main challenge arises because, with asynchronous sampling, the DT sampled process is not information-stable, and hence the characterization of its RDF must be carried out within the information-spectrum framework rather than with conventional information-theoretic arguments. This work expands upon our previous work, which addressed the special case in which the DT process is independent across time; the presence of dependence between the samples requires new tools to characterize the RDF.
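As background (the classical stationary baseline, not the WSACS characterization derived in the paper), the RDF of a DT stationary Gaussian source with power spectral density S(ω) is given parametrically by reverse water-filling, in nats with the natural logarithm:

\[
D(\theta) = \frac{1}{2\pi}\int_{-\pi}^{\pi} \min\{\theta,\, S(\omega)\}\, d\omega,
\qquad
R(\theta) = \frac{1}{4\pi}\int_{-\pi}^{\pi} \max\Big\{0,\ \log\frac{S(\omega)}{\theta}\Big\}\, d\omega .
\]

Asynchronous sampling destroys the stationarity (and information stability) that this formula relies on, which is why the paper resorts to information-spectrum methods.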

In this paper, we present the derivation of a multicontinuum model for the coupled flow and transport equations by applying multicontinuum homogenization. We perform the multicontinuum expansion for both flow and transport solutions and formulate novel coupled constraint cell problems to capture the multiscale property, where oversampled regions are utilized to avoid boundary effects. Assuming the smoothness of macroscopic variables, we obtain a multicontinuum system composed of macroscopic elliptic equations and convection-diffusion-reaction equations with homogenized effective properties. Finally, we present numerical results for various coefficient fields and boundary conditions to validate our proposed algorithm.
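To fix ideas, a schematic two-continuum system of the kind described (assumed notation; not the paper's exact homogenized equations) couples macroscopic pressures \(p_i\) and concentrations \(c_i\) through exchange terms:

\[
-\nabla\cdot\big(\kappa_i \nabla p_i\big) + \sum_{j\neq i} Q_{ij}\,(p_i - p_j) = f_i, \qquad i = 1, 2,
\]
\[
\frac{\partial c_i}{\partial t} + v_i \cdot \nabla c_i - \nabla\cdot\big(D_i \nabla c_i\big) + \sum_{j\neq i} R_{ij}\,(c_i - c_j) = g_i,
\]

where \(v_i\) is the velocity induced by the flow of continuum \(i\), and the effective coefficients \(\kappa_i\), \(D_i\), \(Q_{ij}\), \(R_{ij}\) are computed from the constraint cell problems posed on oversampled local domains.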

In this paper, we introduce a data augmentation approach specifically tailored to enhance intersectional fairness in classification tasks. Our method capitalizes on the hierarchical structure inherent to intersectionality, by viewing groups as intersections of their parent categories. This perspective allows us to augment data for smaller groups by learning a transformation function that combines data from these parent groups. Our empirical analysis, conducted on four diverse datasets including both text and images, reveals that classifiers trained with this data augmentation approach achieve superior intersectional fairness and are more robust to ``leveling down'' when compared to methods optimizing traditional group fairness metrics.
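A toy version of this idea (purely illustrative, with made-up feature data; the paper's learned transformation is more sophisticated) fits a simple moment-matching map from a parent group's feature statistics to the small intersectional group's statistics and uses it to synthesize additional samples:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical feature matrices: a large parent group and a small intersectional group.
parent = rng.normal(loc=0.0, scale=1.0, size=(2000, 16))
inter  = rng.normal(loc=0.5, scale=0.8, size=(40, 16))     # under-represented intersection

# Moment-matching transform: shift/scale parent samples toward the intersection's statistics.
mu_p, sd_p = parent.mean(axis=0), parent.std(axis=0)
mu_i, sd_i = inter.mean(axis=0), inter.std(axis=0)

def augment(n):
    """Draw n parent samples and map them into the intersectional group's feature statistics."""
    idx = rng.choice(len(parent), size=n, replace=False)
    return (parent[idx] - mu_p) / sd_p * sd_i + mu_i

synthetic = augment(200)                 # synthetic samples for the small group
train_x = np.vstack([inter, synthetic])  # enlarged training set for that group
print(train_x.shape)                     # (240, 16)
```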

In this paper, we tackle two challenges in multimodal learning for visual recognition: 1) when a modality is missing either during training or testing in real-world situations; and 2) when computation resources are not available to fine-tune heavy transformer models. To this end, we propose to utilize prompt learning to mitigate these two challenges together. Specifically, our modality-missing-aware prompts can be plugged into multimodal transformers to handle general missing-modality cases, while requiring less than 1% of the learnable parameters needed to train the entire model. We further explore the effect of different prompt configurations and analyze the robustness to missing modality. Extensive experiments show the effectiveness of our prompt learning framework, which improves performance under various missing-modality cases while alleviating the requirement of heavy model re-training. Code is available.
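A minimal sketch of the idea behind modality-missing-aware prompts (assumed shapes, names, and missing-modality cases; not the authors' released code) keeps one learnable prompt per case and prepends it to the frozen transformer's token sequence, so only the prompts (and a small head) are trained:

```python
import torch
import torch.nn as nn

class MissingAwarePrompts(nn.Module):
    """One learnable prompt per missing-modality case, prepended to the token sequence."""
    def __init__(self, dim=768, prompt_len=16,
                 cases=("complete", "missing_text", "missing_image")):
        super().__init__()
        self.prompts = nn.ParameterDict(
            {c: nn.Parameter(torch.randn(prompt_len, dim) * 0.02) for c in cases}
        )

    def forward(self, tokens, case):
        # tokens: (batch, seq_len, dim) multimodal embeddings from a frozen backbone
        p = self.prompts[case].unsqueeze(0).expand(tokens.size(0), -1, -1)
        return torch.cat([p, tokens], dim=1)   # prompt tokens ride along through the frozen model

tokens = torch.randn(4, 40, 768)               # dummy embeddings
out = MissingAwarePrompts()(tokens, "missing_text")
print(out.shape)                               # torch.Size([4, 56, 768])
```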

As soon as abstract mathematical computations were adapted to computation on digital computers, the problem of efficient representation, manipulation, and communication of the numerical values in those computations arose. Strongly related to the problem of numerical representation is the problem of quantization: in what manner should a set of continuous real-valued numbers be distributed over a fixed discrete set of numbers to minimize the number of bits required and also to maximize the accuracy of the attendant computations? This perennial problem of quantization is particularly relevant whenever memory and/or computational resources are severely restricted, and it has come to the forefront in recent years due to the remarkable performance of Neural Network models in computer vision, natural language processing, and related areas. Moving from floating-point representations to low-precision fixed integer values represented in four bits or less holds the potential to reduce the memory footprint and latency by a factor of 16x; and, in fact, reductions of 4x to 8x are often realized in practice in these applications. Thus, it is not surprising that quantization has emerged recently as an important and very active sub-area of research in the efficient implementation of computations associated with Neural Networks. In this article, we survey approaches to the problem of quantizing the numerical values in deep Neural Network computations, covering the advantages/disadvantages of current methods. With this survey and its organization, we hope to have presented a useful snapshot of the current research in quantization for Neural Networks and to have given an intelligent organization to ease the evaluation of future research in this area.
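The most common scheme such surveys cover, uniform affine (asymmetric) quantization, can be written in a few lines (a generic sketch, not tied to any particular framework): values are mapped to a fixed integer grid via a scale and zero point, and approximately recovered by the inverse map.

```python
import numpy as np

def quantize_uniform(x, num_bits=8):
    """Uniform affine quantization of a float array to unsigned integers."""
    qmin, qmax = 0, 2 ** num_bits - 1
    scale = (x.max() - x.min()) / (qmax - qmin)
    zero_point = int(np.round(qmin - x.min() / scale))
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Approximate reconstruction of the original floats."""
    return scale * (q.astype(np.float32) - zero_point)

x = np.random.randn(1024).astype(np.float32)
q, s, z = quantize_uniform(x, num_bits=4)        # 4-bit quantization, as discussed above
err = np.abs(x - dequantize(q, s, z)).max()
print(err)                                       # quantization error shrinks as the bit width grows
```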

In this paper, we propose a novel Feature Decomposition and Reconstruction Learning (FDRL) method for effective facial expression recognition. We view the expression information as the combination of the shared information (expression similarities) across different expressions and the unique information (expression-specific variations) for each expression. More specifically, FDRL mainly consists of two crucial networks: a Feature Decomposition Network (FDN) and a Feature Reconstruction Network (FRN). In particular, FDN first decomposes the basic features extracted from a backbone network into a set of facial action-aware latent features to model expression similarities. Then, FRN captures the intra-feature and inter-feature relationships for latent features to characterize expression-specific variations, and reconstructs the expression feature. To this end, two modules including an intra-feature relation modeling module and an inter-feature relation modeling module are developed in FRN. Experimental results on both the in-the-lab databases (including CK+, MMI, and Oulu-CASIA) and the in-the-wild databases (including RAF-DB and SFEW) show that the proposed FDRL method consistently achieves higher recognition accuracy than several state-of-the-art methods. This clearly highlights the benefit of feature decomposition and reconstruction for classifying expressions.
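A schematic sketch of the decompose/re-weight/reconstruct pattern (with assumed dimensions and a plain weighting head; the paper's FDN/FRN additionally model intra- and inter-feature relations, which this sketch omits) could look like the following:

```python
import torch
import torch.nn as nn

class DecomposeReconstruct(nn.Module):
    """Schematic head: a backbone feature is split into K latent features,
    re-weighted, and summed back into a single expression feature."""
    def __init__(self, dim=512, num_latent=8, num_classes=7):
        super().__init__()
        self.decompose = nn.ModuleList([nn.Linear(dim, dim) for _ in range(num_latent)])
        self.weight_net = nn.Linear(dim, num_latent)   # importance weight per latent feature
        self.classifier = nn.Linear(dim, num_classes)  # e.g., 7 basic expressions

    def forward(self, feat):                           # feat: (batch, dim) from a CNN backbone
        latents = torch.stack([torch.relu(d(feat)) for d in self.decompose], dim=1)  # (B, K, dim)
        w = torch.softmax(self.weight_net(feat), dim=-1).unsqueeze(-1)               # (B, K, 1)
        expr_feat = (w * latents).sum(dim=1)                                          # reconstruction
        return self.classifier(expr_feat)

logits = DecomposeReconstruct()(torch.randn(4, 512))
print(logits.shape)    # torch.Size([4, 7])
```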
