
In this work, we propose a novel partial compress-and-forward (PCF) scheme for improving the maximum achievable transmission rate of a diamond relay network with two noisy relays. PCF combines the conventional compress-and-forward (CF) and amplify-and-forward (AF) protocols, enabling one relay to operate alternately in the CF or the AF mode, while the other relay works purely in the CF mode. Because the direct link from the source to the destination is unavailable and neither relay in the diamond network is noiseless, the messages received from the two relays must act as side information for each other and must be decoded jointly. We propose a joint decoder that decodes two Luby transform (LT) codes, received from the two relays, that correspond to the same original message. Numerical results show that PCF achieves significant performance improvements over the decode-and-forward (DF) and pure CF protocols when the channels connected to at least one of the relays are of high quality.
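The joint decoder operates on two LT codes; as background only (this is the generic LT encoding step, not the authors' PCF construction), each coded symbol is the XOR of a randomly chosen subset of source symbols. The degree distribution below is a toy stand-in for the robust soliton distribution used in practice:

```python
import random

def lt_encode_symbol(source, rng):
    """Generate one LT-coded symbol: draw a degree d from the degree
    distribution, pick d distinct source symbols, and XOR them."""
    # Toy degree distribution; a practical LT code uses the robust
    # soliton distribution instead.
    degrees, weights = [1, 2, 3, 4], [0.1, 0.5, 0.3, 0.1]
    d = rng.choices(degrees, weights=weights)[0]
    neighbors = rng.sample(range(len(source)), d)
    value = 0
    for i in neighbors:
        value ^= source[i]
    # The decoder needs the neighbor set (in practice, the PRNG seed).
    return neighbors, value

source = [0x1F, 0x2A, 0x33, 0x44]  # source symbols (here: single bytes)
neighbors, value = lt_encode_symbol(source, random.Random(0))
```

A rateless stream of such symbols lets the destination decode once enough of them (from either relay) have arrived.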

Related content

Networking: IFIP International Conferences on Networking (International Networking Conference). Publisher: IFIP.

We present Re-weighted Gradient Descent (RGD), a novel optimization technique that improves the performance of deep neural networks through dynamic sample importance weighting. Our method is grounded in the principles of distributionally robust optimization (DRO) with Kullback-Leibler divergence. RGD is simple to implement, computationally efficient, and compatible with widely used optimizers such as SGD and Adam. We demonstrate the broad applicability and impact of RGD by achieving state-of-the-art results on diverse benchmarks, including improvements of +0.7% (DomainBed), +1.44% (tabular classification), +1.94% (GLUE with BERT), and +1.01% (ImageNet-1K with ViT).
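The exact RGD update rule is given in the paper; purely as an illustration of the DRO principle it builds on, the KL-constrained inner maximization has a closed-form solution in which sample weights take a softmax-over-losses form. The `temperature` knob here is illustrative, not necessarily RGD's parameterization:

```python
import numpy as np

def kl_dro_weights(losses, temperature=1.0):
    """Closed-form inner maximizer of a KL-regularized DRO objective:
    sample weights proportional to exp(loss / temperature), so harder
    (higher-loss) samples are up-weighted."""
    z = np.asarray(losses, dtype=float) / temperature
    z -= z.max()                      # subtract max for numerical stability
    w = np.exp(z)
    return w / w.sum()

losses = np.array([0.2, 1.5, 0.7])
w = kl_dro_weights(losses)
reweighted_loss = w @ losses          # used in place of the plain mean
```

Because the weights depend only on the per-sample losses, such a reweighting composes with any base optimizer (SGD, Adam) without changing its update equations.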

The rapid progress in machine learning in recent years has been based on a highly productive connection to gradient-based optimization. Further progress hinges in part on a shift in focus from pattern recognition to decision-making and multi-agent problems. In these broader settings, new mathematical challenges emerge that involve equilibria and game theory instead of optima. Gradient-based methods remain essential -- given the high dimensionality and large scale of machine-learning problems -- but simple gradient descent is no longer the point of departure for algorithm design. We provide a gentle introduction to a broader framework for gradient-based algorithms in machine learning, beginning with saddle points and monotone games, and proceeding to general variational inequalities. While we provide convergence proofs for several of the algorithms that we present, our main focus is on providing motivation and intuition.
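A concrete instance of the gap between optimization and games: on the bilinear saddle problem min_x max_y xy, simultaneous gradient descent-ascent spirals away from the equilibrium at the origin, while the extragradient method, one of the algorithms in this family, converges to it. A minimal sketch:

```python
def extragradient(x, y, lr=0.3, steps=300):
    """Extragradient on f(x, y) = x * y (min over x, max over y).
    Each iteration takes a look-ahead gradient step, then updates
    using the gradients evaluated at the look-ahead point."""
    for _ in range(steps):
        gx, gy = y, x                          # grad_x f = y, grad_y f = x
        xh, yh = x - lr * gx, y + lr * gy      # look-ahead (extrapolation)
        x, y = x - lr * yh, y + lr * xh        # update at look-ahead gradients
    return x, y

x, y = extragradient(1.0, 1.0)                 # converges toward (0, 0)
```

Plain descent-ascent applied to the same problem has iteration matrix eigenvalues of modulus greater than one, so it diverges; the extrapolation step is what restores contraction.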

Transformer-based Single Image Deraining (SID) methods have achieved remarkable success, primarily attributed to their robust capability in capturing long-range interactions. However, we observe that current methods handle rain-affected and unaffected regions concurrently, overlooking the disparities between these areas; this confuses rain streaks with background parts and prevents effective interactions, ultimately leading to suboptimal deraining outcomes. To address this issue, we introduce the Region Transformer (Regformer), a novel SID method that underlines the importance of independently processing rain-affected and unaffected regions while considering their combined impact for high-quality image reconstruction. The crux of our method is the innovative Region Transformer Block (RTB), which integrates a Region Masked Attention (RMA) mechanism and a Mixed Gate Forward Block (MGFB). Our RTB is used for attention selection of rain-affected and unaffected regions and local modeling of mixed scales. The RMA generates attention maps tailored to these two regions and their interactions, enabling our model to capture comprehensive features essential for rain removal. To better recover high-frequency textures and capture more local details, we develop the MGFB as a compensation module to complete local mixed-scale modeling. Extensive experiments demonstrate that our model reaches state-of-the-art performance, significantly improving the image deraining quality. Our code and trained models are publicly available.
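The RMA mechanism is specific to Regformer; purely to illustrate the underlying idea, attention can be confined to a region by masking the score matrix before the softmax. The following is a generic single-head masked attention, not the authors' RMA:

```python
import numpy as np

def masked_attention(Q, K, V, mask):
    """Scaled dot-product attention where mask[i, j] = 0 blocks query i
    from attending to key j (e.g. confining attention to rain-affected
    or unaffected regions)."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)
    scores = np.where(mask.astype(bool), scores, -1e9)  # block masked pairs
    scores = scores - scores.max(axis=-1, keepdims=True)  # softmax stability
    w = np.exp(scores)
    w /= w.sum(axis=-1, keepdims=True)
    return w @ V

Q = np.array([[1.0, 0.0], [0.0, 1.0]])
K = np.array([[1.0, 0.0], [0.0, 1.0]])
V = np.array([[10.0, 0.0], [0.0, 10.0]])
mask = np.array([[1, 0],    # query 0 may only see key 0
                 [1, 1]])   # query 1 may see both keys
out = masked_attention(Q, K, V, mask)
```

Computing one such attention map per region (plus one for their interaction) and combining the results is the general pattern the RMA description suggests.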

In this paper we derive a non-local (``integral'') equation which transforms a three-dimensional acoustic transmission problem with \emph{variable} coefficients, non-zero absorption, and mixed boundary conditions to a non-local equation on a ``skeleton'' of the domain $\Omega\subset\mathbb{R}^{3}$, where ``skeleton'' stands for the union of the interfaces and boundaries of a Lipschitz partition of $\Omega$. To that end, we introduce and analyze abstract layer potentials as solutions of auxiliary coercive full space variational problems and derive jump conditions across domain interfaces. This allows us to formulate the non-local skeleton equation as a \emph{direct method} for the unknown Cauchy data of the solution of the original partial differential equation. We establish coercivity and continuity of the variational form of the skeleton equation based on auxiliary full space variational problems. Explicit expressions for Green's functions are not required, and all our estimates are \emph{explicit} in the complex wave number.

To achieve high-accuracy manipulation in the presence of unknown disturbances, we propose two novel efficient and robust motion control schemes for high-dimensional robot manipulators. Both controllers incorporate an unknown system dynamics estimator (USDE) to estimate disturbances without requiring acceleration signals or the inverse of the inertia matrix. Then, based on the USDE framework, an adaptive-gain controller and a super-twisting sliding mode controller are designed to speed up the convergence of tracking errors and strengthen anti-perturbation ability. The former aims to enhance feedback portions through error-driven control gains, while the latter exploits finite-time convergence of discontinuous switching terms. We analyze the boundedness of control signals and the stability of the closed-loop system in theory, and conduct real hardware experiments on a robot manipulator with seven degrees of freedom (DoF). Experimental results verify the effectiveness and improved performance of the proposed controllers, and also show the feasibility of implementation on high-dimensional robots.
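The controllers are developed for a 7-DoF manipulator; to convey the super-twisting idea in isolation, here is a scalar simulation on a first-order sliding variable with a constant unknown disturbance. The plant, gains, and time step are illustrative, not the paper's design:

```python
import math

def super_twisting(s0=1.0, d=0.5, k1=1.5, k2=1.1, dt=1e-3, T=5.0):
    """Super-twisting algorithm on s' = u + d with an unknown constant
    disturbance d. The control u is continuous (no chattering in u):
    u = -k1*sqrt(|s|)*sign(s) + v,   v' = -k2*sign(s)."""
    s, v = s0, 0.0
    for _ in range(int(T / dt)):
        sign_s = (s > 0) - (s < 0)
        u = -k1 * math.sqrt(abs(s)) * sign_s + v
        s += dt * (u + d)            # plant: s' = u + d
        v += dt * (-k2 * sign_s)     # integral (twisting) term
    return s, v

s, v = super_twisting()
```

The sliding variable s is driven to a small neighborhood of zero, and the integral state v converges to -d, i.e. it learns to cancel the disturbance, which is the anti-perturbation mechanism the abstract refers to.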

A new moving mesh scheme based on the Lagrange-Galerkin method for the approximation of the one-dimensional convection-diffusion equation is studied. The mesh movement, which is prescribed by a discretized dynamical system for the nodal points, follows the direction of convection. It is shown that under a restriction on the time increment the mesh movement cannot lead to an overlap of the elements and therefore an invalid mesh. For the linear element, optimal error estimates in the $\ell^\infty(L^2) \cap \ell^2(H_0^1)$ norm are proved for both a first-order backward Euler method and a second-order two-step method in time. These results are based on new estimates of the time-dependent interpolation operator derived in this work. Preservation of the total mass is verified for both choices of the time discretization. Numerical experiments are presented that confirm the error estimates and demonstrate that the proposed moving mesh scheme can circumvent limitations that the Lagrange-Galerkin method on a fixed mesh exhibits.
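The no-overlap property mirrors a simple one-dimensional fact: moving each node with a velocity field b preserves the node ordering whenever the time increment satisfies dt * Lip(b) < 1, since x -> x + dt*b(x) is then strictly increasing. A sketch (the velocity field here is a stand-in example, not the paper's convection field):

```python
import math

def move_nodes(nodes, b, dt):
    """One step of convective mesh movement: each nodal point follows
    the discretized dynamical system x_{n+1} = x_n + dt * b(x_n)."""
    return [x + dt * b(x) for x in nodes]

b = math.sin            # Lipschitz constant L = 1
dt = 0.5                # dt * L < 1  =>  node ordering is preserved
nodes = [0.0, 0.5, 1.0, 1.5, 2.0]
moved = move_nodes(nodes, b, dt)
```

Here the map x + 0.5*sin(x) has derivative 1 + 0.5*cos(x) >= 0.5 > 0, so no two nodes can cross and the elements cannot overlap.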

Recommendation algorithms play a pivotal role in shaping our media choices, which makes it crucial to comprehend their long-term impact on user behavior. These algorithms are often linked to two critical outcomes: homogenization, wherein users consume similar content despite disparate underlying preferences, and the filter bubble effect, wherein individuals with differing preferences only consume content aligned with their preferences (without much overlap with other users). Prior research assumes a trade-off between homogenization and filter bubble effects and then shows that personalized recommendations mitigate filter bubbles by fostering homogenization. However, because of this assumption of a tradeoff between these two effects, prior work cannot develop a more nuanced view of how recommendation systems may independently impact homogenization and filter bubble effects. We develop a more refined definition of homogenization and the filter bubble effect by decomposing them into two key metrics: how different the average consumption is between users (inter-user diversity) and how varied an individual's consumption is (intra-user diversity). We then use a novel agent-based simulation framework that enables a holistic view of the impact of recommendation systems on homogenization and filter bubble effects. Our simulations show that traditional recommendation algorithms (based on past behavior) mainly reduce filter bubbles by affecting inter-user diversity without significantly impacting intra-user diversity. Building on these findings, we introduce two new recommendation algorithms that take a more nuanced approach by accounting for both types of diversity.
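The two metrics can be operationalized in several ways; one plausible implementation (hypothetical, not necessarily the paper's exact definitions) measures inter-user diversity as the mean pairwise L1 distance between users' consumption profiles and intra-user diversity as the mean Shannon entropy of each user's own distribution:

```python
import numpy as np

def diversity_metrics(consumption):
    """consumption: (n_users, n_items) array of nonnegative counts.
    Returns (inter_user, intra_user) diversity."""
    p = consumption / consumption.sum(axis=1, keepdims=True)
    n = len(p)
    # inter-user diversity: mean pairwise L1 distance between profiles
    inter = np.mean([np.abs(p[i] - p[j]).sum()
                     for i in range(n) for j in range(i + 1, n)])
    # intra-user diversity: mean entropy of each user's own profile
    ent = -np.where(p > 0, p * np.log(np.where(p > 0, p, 1.0)), 0.0).sum(axis=1)
    return inter, ent.mean()

# homogenization: everyone consumes everything equally
homog = np.ones((3, 4))
# filter bubbles: each user consumes a single distinct item
bubble = np.eye(3, 4)
```

Under these definitions, full homogenization gives zero inter-user diversity with maximal intra-user diversity, while extreme filter bubbles give the reverse, so the two effects can move independently, which is the decomposition the abstract argues for.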

In this work, we present ODHD, an algorithm for outlier detection based on hyperdimensional computing (HDC), a non-classical learning paradigm. Along with the HDC-based algorithm, we propose IM-ODHD, a computing-in-memory (CiM) implementation based on hardware/software (HW/SW) codesign for improved latency and energy efficiency. The training and testing phases of ODHD may be performed with conventional CPU/GPU hardware or with our SRAM-based IM-ODHD CiM architecture using the proposed HW/SW codesign techniques. We evaluate the performance of ODHD on six datasets from different application domains using three metrics, namely accuracy, F1 score, and ROC-AUC, and compare it with multiple baseline methods such as OCSVM, isolation forest, and autoencoder. The experimental results indicate that ODHD outperforms all the baseline methods in terms of these three metrics on every dataset for both the CPU/GPU and CiM implementations. Furthermore, we perform an extensive design space exploration to demonstrate the tradeoff between delay, energy efficiency, and performance of ODHD. We demonstrate that the HW/SW codesign implementation of outlier detection on IM-ODHD outperforms the GPU-based implementation of ODHD by at least 331.5x/889x in terms of training/testing latency, and on average by 14.0x/36.9x in terms of training/testing energy consumption.
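ODHD's exact encoding and the IM-ODHD hardware are described in the paper; for flavor only, a toy hyperdimensional outlier scorer (a generic random-projection encoding, not ODHD's) bundles inlier hypervectors into a class prototype and scores test points by cosine dissimilarity to it:

```python
import numpy as np

def encode(x, R):
    """Map a real-valued sample to a bipolar hypervector: random
    projection followed by a sign nonlinearity."""
    return np.sign(R @ x)

def outlier_scores(train, test, dim=4096, seed=0):
    rng = np.random.default_rng(seed)
    R = rng.standard_normal((dim, train.shape[1]))
    # bundle the inlier hypervectors into a single class prototype
    proto = np.sign(np.sum([encode(x, R) for x in train], axis=0))
    # score = cosine dissimilarity to the prototype (higher = more outlying)
    return np.array([1.0 - (encode(x, R) @ proto) / dim for x in test])

rng = np.random.default_rng(1)
train = np.array([1.0, 1.0]) + 0.1 * rng.standard_normal((20, 2))
scores = outlier_scores(train, np.array([[1.0, 1.0], [-1.0, -1.0]]))
```

The appeal for CiM hardware is that both encoding and scoring reduce to wide, elementwise sign and dot-product operations over bipolar vectors, which map naturally onto in-memory compute arrays.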

In this paper, we propose a novel Feature Decomposition and Reconstruction Learning (FDRL) method for effective facial expression recognition. We view the expression information as the combination of the shared information (expression similarities) across different expressions and the unique information (expression-specific variations) for each expression. More specifically, FDRL mainly consists of two crucial networks: a Feature Decomposition Network (FDN) and a Feature Reconstruction Network (FRN). In particular, FDN first decomposes the basic features extracted from a backbone network into a set of facial action-aware latent features to model expression similarities. Then, FRN captures the intra-feature and inter-feature relationships for latent features to characterize expression-specific variations, and reconstructs the expression feature. To this end, FRN contains two modules: an intra-feature relation modeling module and an inter-feature relation modeling module. Experimental results on both the in-the-lab databases (including CK+, MMI, and Oulu-CASIA) and the in-the-wild databases (including RAF-DB and SFEW) show that the proposed FDRL method consistently achieves higher recognition accuracy than several state-of-the-art methods. This clearly highlights the benefit of feature decomposition and reconstruction for classifying expressions.

Answering questions that require reading texts in an image is challenging for current models. One key difficulty of this task is that rare, polysemous, and ambiguous words frequently appear in images, e.g., names of places, products, and sports teams. To overcome this difficulty, relying solely on pre-trained word embedding models is far from enough. A desired model should utilize the rich information in multiple modalities of the image to help understand the meaning of scene texts, e.g., the prominent text on a bottle is most likely to be the brand. Following this idea, we propose a novel VQA approach, Multi-Modal Graph Neural Network (MM-GNN). It first represents an image as a graph consisting of three sub-graphs, depicting the visual, semantic, and numeric modalities respectively. Then, we introduce three aggregators which guide the message passing from one graph to another to utilize the contexts in various modalities, so as to refine the features of the nodes. The updated nodes have better features for the downstream question answering module. Experimental evaluations show that our MM-GNN represents scene texts better and clearly improves performance on two VQA tasks that require reading scene texts.
