
Spatial symmetries and invariances play an important role in the behaviour of materials and should be respected in the description and modelling of material properties. The focus here is the class of physically symmetric and positive definite tensors, as they appear often in the description of materials, and one wants to be able to prescribe certain classes of spatial symmetries and invariances for each member of the whole ensemble, while at the same time demanding that the mean or expected value of the ensemble be subject to a possibly 'higher' spatial invariance class. We formulate a modelling framework which not only respects these two requirements (positive definiteness and invariance) but also allows fine control over orientation on one hand and strength/size on the other. As the set of positive definite tensors is not a linear space, but rather an open convex cone in the linear space of physically symmetric tensors, we consider it advantageous to widen the notion of mean to the so-called Fr\'echet mean on a metric space, which is based on distance measures or metrics between positive definite tensors other than the usual Euclidean one. It is shown how the random ensemble can be modelled and generated, independently in its scaling and orientational or directional aspects, through a Lie algebra representation via a memoryless transformation. The parameters which describe the elements of this Lie algebra are then considered as random fields on the domain of interest. As an example, steady-state heat conduction in a human proximal femur, a bone with high material anisotropy, is modelled in both 2D and 3D with a random thermal conductivity tensor, and the numerical results show the distinct impact that incorporating different material uncertainties (scaling, orientation, and prescribed material symmetry) into the constitutive model has on the desired quantities of interest.
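To make the Lie-algebra idea concrete, the following is a minimal sketch of generating random symmetric positive definite (SPD) tensors by exponentiating a symmetric matrix, with the scaling (trace) part and the trace-free orientational/anisotropic part sampled independently. The distributions, parameter names, and split used here are illustrative assumptions, not the paper's actual construction.

```python
import numpy as np

rng = np.random.default_rng(0)

def sym_expm(H):
    # matrix exponential of a symmetric matrix via its eigendecomposition
    w, Q = np.linalg.eigh(H)
    return Q @ np.diag(np.exp(w)) @ Q.T

def random_spd_tensor(dim=3, log_scale_std=0.3, anisotropy_std=0.2):
    # isotropic (size/strength) part: a random multiple of the identity in the Lie algebra
    scaling = log_scale_std * rng.standard_normal() * np.eye(dim)
    # trace-free symmetric part: controls anisotropy and directional structure
    A = anisotropy_std * rng.standard_normal((dim, dim))
    sym = (A + A.T) / 2
    deviatoric = sym - np.trace(sym) / dim * np.eye(dim)
    return sym_expm(scaling + deviatoric)   # positive definite by construction

C = random_spd_tensor()
print(np.linalg.eigvalsh(C))   # all eigenvalues are strictly positive
```

Because the exponential of any symmetric matrix is SPD, positive definiteness is guaranteed for every sample, while the two algebra components can be replaced by random fields to control scaling and orientation separately over a spatial domain.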

Related content

Complete complementary codes (CCCs) play a vital role not only in wireless communication, particularly in multicarrier systems where achieving an interference-free environment is of paramount importance, but also in the construction of other codes that require appropriate functions to meet the diverse demands of today's evolving wireless communication landscape. This research focuses on constructing $q$-ary functions for both traditional and spectrally null constraint (SNC) CCCs\footnote{When no code in the CCCs has zero components, we call them traditional CCCs; otherwise, we call them SNC-CCCs in this paper.} of flexible length, set size, and alphabet. We construct traditional CCCs with lengths $L = \prod_{i=1}^k p_i^{m_i}$, set sizes $K = \prod_{i=1}^k p_i^{n_i+1}$, and an alphabet size of $q=\prod_{i=1}^k p_i$, such that $p_1<p_2<\cdots<p_k$. The parameters $m_1, m_2, \ldots, m_k$ (each greater than or equal to $2$) are positive integers, while $n_1, n_2, \ldots, n_k$ are non-negative integers satisfying $n_i \leq m_i-1$, and $k$ is a positive integer. To achieve these parameters, we define $q$-ary functions over a domain $\mathbb{Z}_{p_1}^{m_1}\times \cdots \times \mathbb{Z}_{p_k}^{m_k}$, which is a proper subset of $\mathbb{Z}_{q}^m$ and contains $\prod_{i=1}^k p_i^{m_i}$ vectors, where $\mathbb{Z}_{p_i}^{m_i}=\{0,1,\ldots,p_i-1\}^{m_i}$ and $m$ is the sum of $m_1, m_2, \ldots, m_k$. This organization of the domain allows us to cover all conceivable integer lengths for sequences over the alphabet $\mathbb{Z}_q$. It is demonstrated that by constraining a $q$-ary function that generates traditional CCCs, we can derive SNC-CCCs with the same length and alphabet but a set size smaller than or equal to that of the traditional CCCs.
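The parameter arithmetic in the abstract is easy to check numerically. The sketch below (a hypothetical helper, not code from the paper) computes $L$, $K$, and $q$ from the primes $p_i$ and exponents $m_i$, $n_i$, and enforces the stated constraints $m_i \geq 2$ and $0 \leq n_i \leq m_i-1$.

```python
from math import prod

def ccc_parameters(primes, m, n):
    # primes = (p_1, ..., p_k) with p_1 < p_2 < ... < p_k
    assert list(primes) == sorted(set(primes)), "p_1 < p_2 < ... < p_k required"
    assert all(mi >= 2 for mi in m), "each m_i must be at least 2"
    assert all(0 <= ni <= mi - 1 for ni, mi in zip(n, m)), "need 0 <= n_i <= m_i - 1"
    L = prod(p**mi for p, mi in zip(primes, m))        # sequence length
    K = prod(p**(ni + 1) for p, ni in zip(primes, n))  # set size
    q = prod(primes)                                   # alphabet size
    return L, K, q

# Example: p = (2, 3), m = (3, 2), n = (1, 0)  ->  L = 72, K = 12, q = 6
print(ccc_parameters((2, 3), (3, 2), (1, 0)))
```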

Reinforcement Learning (RL) has been widely applied to many control tasks and has substantially improved performance compared to conventional control methods in domains where the reward function is well defined. However, for many real-world problems, it is often more convenient to formulate optimization problems in terms of rewards and constraints simultaneously. Optimizing such constrained problems via reward shaping can be difficult, as it requires tedious manual tuning of reward functions with several interacting terms. Recent formulations which include constraints mostly require a pre-training phase, which often needs human expertise to collect data or assumes that a sub-optimal policy is readily available. We propose a new constrained RL method called CSAC-LB (Constrained Soft Actor-Critic with Log Barrier Function), which achieves competitive performance without any pre-training by applying a linear smoothed log barrier function to an additional safety critic. It implements an adaptive penalty for policy learning and alleviates the numerical issues that are known to complicate the application of the log barrier function method. As a result, we show that with CSAC-LB we achieve state-of-the-art performance on several constrained control tasks with different levels of difficulty, and we evaluate our method in a locomotion task on a real quadruped robot platform.
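As a rough illustration of the key ingredient, the sketch below shows one common way to smooth a log barrier for a constraint value $x = g(s,a) \leq 0$ by extending $-\log(-x)/t$ with its tangent line, so the penalty stays finite and differentiable even when the safety critic predicts a violation. The exact functional form and parameterization used by CSAC-LB may differ; names and the temperature value are assumptions.

```python
import numpy as np

def smoothed_log_barrier(x, t=2.0):
    # log-barrier branch for safely negative x, linear extension beyond -1/t^2
    x = np.asarray(x, dtype=float)
    threshold = -1.0 / t**2
    barrier = -np.log(-np.minimum(x, threshold)) / t          # ordinary log barrier
    linear = t * x - np.log(1.0 / t**2) / t + 1.0 / t          # tangent-line extension
    return np.where(x <= threshold, barrier, linear)

# Small penalty for safely negative constraint estimates, linearly growing penalty once
# the constraint is (predicted to be) violated; used as an adaptive term in the policy loss.
print(smoothed_log_barrier([-2.0, -0.5, 0.0, 0.5]))
```

The two branches match in value and slope at the switching point, which is what keeps gradients well behaved compared to the raw log barrier.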

Previous stance detection studies typically concentrate on evaluating stances within individual instances, and thus have limitations in modeling multi-party discussions about the same topic, as they naturally transpire in authentic social media interactions. This constraint arises primarily from the scarcity of datasets that authentically replicate real social media contexts, hindering research progress in conversational stance detection. In this paper, we introduce a new multi-turn conversational stance detection dataset (called \textbf{MT-CSD}), which encompasses multiple targets for conversational stance detection. To derive stances from this challenging dataset, we propose a global-local attention network (\textbf{GLAN}) to address both the long- and short-range dependencies inherent in conversational data. Notably, even state-of-the-art stance detection methods, exemplified by GLAN, achieve an accuracy of only 50.47\%, highlighting the persistent challenges in conversational stance detection. Furthermore, our MT-CSD dataset serves as a valuable resource to catalyze advancements in cross-domain stance detection, where a classifier is adapted from a different yet related target. We believe that MT-CSD will contribute to advancing real-world applications of stance detection research. Our source code, data, and models are available at \url{//github.com/nfq729/MT-CSD}.
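For intuition only, here is a toy sketch of one plausible reading of "global-local attention" over a sequence of conversation turns: a windowed (local) attention branch captures short-range dependencies between adjacent turns, a full (global) branch captures long-range ones, and the two views are fused. This is an illustrative reading, not the GLAN architecture from the paper.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def global_local_attention(x, window=2):
    # x: (num_turns, feature_dim) utterance representations
    n, d = x.shape
    scores = x @ x.T / np.sqrt(d)                              # shared scores for brevity
    idx = np.arange(n)
    local_mask = np.abs(idx[:, None] - idx[None, :]) <= window
    local_out = softmax(np.where(local_mask, scores, -1e9)) @ x   # short-range view
    global_out = softmax(scores) @ x                              # long-range view
    return 0.5 * (local_out + global_out)                         # simple fusion

utterances = np.random.randn(6, 16)        # 6 turns, 16-dim features
print(global_local_attention(utterances).shape)   # (6, 16)
```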

Determining the optimal fidelity for the transmission of quantum information over noisy quantum channels is one of the central problems in quantum information theory. Recently, [Berta-Borderi-Fawzi-Scholz, Mathematical Programming, 2021] introduced an asymptotically converging semidefinite programming hierarchy of outer bounds for this quantity. However, the size of the semidefinite programs (SDPs) grows exponentially with respect to the level of the hierarchy, thus making their computation unscalable. In this work, by exploiting the symmetries in the SDP, we show that, for a fixed output dimension of the quantum channel, we can compute the SDP in time polynomial with respect to the level of the hierarchy and input dimension. As a direct consequence of our result, the optimal fidelity can be approximated with an accuracy of $\epsilon$ in $\mathrm{poly}(1/\epsilon, \text{input dimension})$ time.

We present a general framework for applying learning algorithms and heuristic guidance to the verification of Markov decision processes (MDPs). The primary goal of our techniques is to improve performance by avoiding an exhaustive exploration of the state space and instead focussing on particularly relevant areas of the system, guided by heuristics. Our work builds on the previous results of Br{\'{a}}zdil et al., significantly extending them as well as refining several details and fixing errors. The presented framework focuses on probabilistic reachability, which is a core problem in verification, and is instantiated in two distinct scenarios. The first assumes that full knowledge of the MDP is available, in particular precise transition probabilities. It performs a heuristic-driven partial exploration of the model, yielding precise lower and upper bounds on the required probability. The second tackles the case where we may only sample the MDP without knowing the exact transition dynamics. Here, we obtain probabilistic guarantees, again in terms of both lower and upper bounds, which provide efficient stopping criteria for the approximation. In particular, the latter is an extension of statistical model checking (SMC) for unbounded properties in MDPs. In contrast to other related approaches, we do not restrict our attention to time-bounded (finite-horizon) or discounted properties, nor do we assume any particular structural properties of the MDP.
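The sketch below illustrates the flavor of the first scenario: interval value iteration for maximum reachability in which lower and upper bounds are tightened only along simulated paths chosen by a heuristic, so irrelevant states may never be explored. All names, the greedy action/successor heuristic, and the path-length cap are illustrative; end components, which the actual framework handles explicitly, are ignored here for brevity.

```python
import random

def bounded_reachability(mdp, init, target, episodes=500, eps=1e-6):
    # mdp: state -> {action: [(successor, probability), ...]}; target: set of goal states
    lo = {s: (1.0 if s in target else 0.0) for s in mdp}
    hi = {s: 1.0 for s in mdp}

    def backup(s):
        if s in target or not mdp[s]:
            return
        pairs = [(sum(p * lo[t] for t, p in succ), sum(p * hi[t] for t, p in succ))
                 for succ in mdp[s].values()]
        lo[s] = max(l for l, _ in pairs)
        hi[s] = max(u for _, u in pairs)

    for _ in range(episodes):
        s, path = init, []
        while s not in target and mdp[s] and hi[s] - lo[s] > eps and len(path) < 100:
            path.append(s)
            # follow the action that looks best under the upper bound, then pick a successor
            # with probability weighted by its remaining lower/upper-bound gap
            succ = max(mdp[s].values(), key=lambda sc: sum(p * hi[t] for t, p in sc))
            weights = [p * (hi[t] - lo[t]) + 1e-12 for t, p in succ]
            s = random.choices([t for t, _ in succ], weights=weights)[0]
        for v in reversed(path):
            backup(v)
        if hi[init] - lo[init] <= eps:
            break
    return lo[init], hi[init]

# Tiny example: one action reaches the goal g with probability 0.5, else stays in s0; the
# maximum reachability probability is 1, and both bounds converge towards it.
toy_mdp = {'s0': {'a': [('g', 0.5), ('s0', 0.5)]}, 'g': {}}
print(bounded_reachability(toy_mdp, 's0', {'g'}))
```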

Multi-modal emotion recognition has recently gained considerable attention, since it can leverage the diverse and complementary relationships among multiple modalities such as audio, visual, and text. Most state-of-the-art methods for multi-modal fusion rely on recurrent networks or conventional attention mechanisms that do not effectively leverage the complementary nature of the modalities. In this paper, we focus on dimensional emotion recognition based on the fusion of facial, vocal, and text modalities extracted from videos. Specifically, we propose a recursive cross-modal attention (RCMA) mechanism to effectively capture the complementary relationships across the modalities in a recursive fashion. The proposed model captures the inter-modal relationships by computing the cross-attention weights between each individual modality and the joint representation of the other two modalities. To further improve the inter-modal relationships, the attended features of the individual modalities are again fed as input to the cross-modal attention to refine the feature representations of the individual modalities. In addition, we use temporal convolutional networks (TCNs) for temporal modeling of the individual modalities (intra-modal relationships). By deploying the TCNs as well as cross-modal attention in a recursive fashion, we are able to effectively capture both intra- and inter-modal relationships across the audio, visual, and text modalities. Experimental results on validation-set videos from the AffWild2 dataset indicate that our proposed fusion model achieves a significant improvement over the baseline for the sixth challenge of the Affective Behavior Analysis in-the-Wild 2024 (ABAW6) competition.
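A rough sketch of the recursive cross-attention idea is given below: each modality attends to the joint representation of the other two, and the attended features are fed back in for another round of refinement. Feature dimensions, the concatenation-based joint representation, the residual update, and the number of iterations are assumptions made for illustration, not the exact RCMA design.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def cross_attend(query_feats, context_feats):
    # scaled dot-product attention from one modality onto a joint context
    d = query_feats.shape[-1]
    scores = query_feats @ context_feats.T / np.sqrt(d)
    return softmax(scores) @ context_feats

def recursive_cross_modal_attention(a, v, t, iterations=2):
    for _ in range(iterations):
        joint_av = np.concatenate([a, v], axis=0)   # joint representation of the other two
        joint_at = np.concatenate([a, t], axis=0)
        joint_vt = np.concatenate([v, t], axis=0)
        a, v, t = (a + cross_attend(a, joint_vt),   # residual-style refinement per modality
                   v + cross_attend(v, joint_at),
                   t + cross_attend(t, joint_av))
    return a, v, t

a, v, t = (np.random.randn(20, 32) for _ in range(3))   # 20 time steps, 32-dim features
print([f.shape for f in recursive_cross_modal_attention(a, v, t)])
```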

The defects of traditional strapdown inertial navigation algorithms have become well acknowledged, and enhanced algorithms have recently been proposed to mitigate both their theoretical and algorithmic shortcomings. In this paper, the analytical accuracy of both the traditional and the enhanced algorithms is evaluated against a true reference, enabled for the first time by the functional iteration approach with provable convergence. The analyses, carried out with the help of the MATLAB Symbolic Toolbox, show that the resultant error orders of all algorithms under investigation are consistent with those in the existing literature, and that the enhanced attitude algorithm notably reduces the error orders of its traditional counterpart, while the impact of the enhanced velocity algorithm on error-order reduction is insignificant. Simulation results agree with the analyses: the superiority of the enhanced algorithm over the traditional one in the body-frame attitude computation scenario diminishes significantly in the full inertial navigation computation scenario, while the functional iteration approach retains a significant accuracy advantage even under sustained low-dynamic conditions.

For decades, the topological architecture and the mechanical and other physical behaviors of periodic lattice truss materials (PLTMs) have been studied extensively. Their approximately infinite design space is a double-edged sword, offering on one hand dramatic designability in fulfilling various performance requirements, but on the other hand unexpected intractability in determining the best candidate with tailored properties. In recent years, the development of additive manufacturing and artificial intelligence has spurred an explosion of methods for exploring the design space and searching its boundaries. Regrettably, however, a normative description of PLTMs with sufficient information for machine learning has not yet been constructed, which confines inverse design to a small, discrete, already-scrutinized space. In the current paper, we develop a system of canonical descriptors for PLTMs, encoding not only the geometrical configurations but also the mechanical properties into matrix forms, in order to establish good quantitative correlations between structures and mechanical behaviors. The system mainly consists of a geometry matrix for the lattice node configuration; density, stretching, and bending stiffness matrices for the lattice strut properties; and a packing matrix for the principal periodic orientation. All these matrices are theoretically derived from the intrinsic nature of PLTMs, leading to concise descriptions and sufficient information. The characteristics of the descriptors, including their completeness and uniqueness, are analyzed. In addition, we discuss how the current system of descriptors can be applied to database construction and material discovery, and indicate possible open problems.
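As a toy illustration of two of the descriptor ingredients, the sketch below builds a geometry matrix of node coordinates and a strut connectivity list for a body-centred-cubic unit cell, from which per-strut stiffness entries could then be assembled. The matrix layouts, the uniform strut radius, and the unit elastic modulus are assumptions for illustration, not the paper's normative definitions.

```python
import numpy as np

# geometry matrix: one node per row (8 cube corners plus the body centre)
corner_nodes = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)], float)
nodes = np.vstack([corner_nodes, [[0.5, 0.5, 0.5]]])

# connectivity: each corner node connects to the centre node (index 8)
struts = np.array([[i, 8] for i in range(8)])

lengths = np.linalg.norm(nodes[struts[:, 0]] - nodes[struts[:, 1]], axis=1)
radius = 0.05                                   # uniform strut radius (assumed)
axial_stiffness = np.pi * radius**2 / lengths   # ~ E*A/L per strut, with E = 1

print(nodes.shape, struts.shape, axial_stiffness.round(3))
```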

Large Language Models (LLMs) have shown excellent generalization capabilities that have led to the development of numerous models. These models propose various new architectures, tweak existing architectures with refined training strategies, increase context length, use high-quality training data, and increase training time to outperform baselines. Analyzing new developments is crucial for identifying changes that enhance training stability and improve generalization in LLMs. This survey paper comprehensively analyses LLM architectures and their categorization, training strategies, training datasets, and performance evaluations, and discusses future research directions. Moreover, the paper discusses the basic building blocks and concepts behind LLMs, followed by a complete overview of LLMs, including their important features and functions. Finally, the paper summarizes significant findings from LLM research and consolidates essential architectural and training strategies for developing advanced LLMs. Given the continuous advancements in LLMs, we intend to regularly update this paper by incorporating new sections and featuring the latest LLM models.

The concept of causality plays an important role in human cognition. In the past few decades, causal inference has been well developed in many fields, such as computer science, medicine, economics, and education. With the advancement of deep learning techniques, deep learning has been increasingly used for causal inference on counterfactual data. Typically, deep causal models map the characteristics of covariates to a representation space and then design various objective functions to obtain unbiased estimates of counterfactual outcomes under different optimization methods. This paper presents a survey of deep causal models, and its core contributions are as follows: 1) we provide relevant metrics under multiple treatments and continuous-dose treatment; 2) we give a comprehensive overview of deep causal models from both temporal-development and method-classification perspectives; 3) we provide a detailed and comprehensive classification and analysis of relevant datasets and source code.
