
The diverse spectrum of material characteristics, including band gap, mechanical moduli, color, phonon and electronic densities of states, and catalytic and surface properties, is intricately intertwined with the atomic structure and the corresponding interatomic bond lengths. This interconnection extends to the interplanar spacings within a crystalline lattice. Analyzing these interplanar spacings, and understanding any deviations from them, whether lattice compression or expansion (commonly referred to as strain), is of paramount importance for unraveling many open questions in the field. Transmission Electron Microscopy (TEM) is widely used to capture atomic-scale ordering, facilitating direct investigation of interplanar spacings. However, creating contour maps for visualizing and interpreting lattice strain in TEM images remains a challenging task. Here we develop a Python code for TEM image processing that can handle a wide range of materials, including nanoparticles, 2D materials, pure crystals, and solid solutions. The algorithm converts local differences in interplanar spacings into contour maps, allowing a visual representation of lattice expansion and compression. The tool is very generic and can significantly aid in analyzing material properties from TEM images, enabling a more in-depth exploration of the underlying science behind strain engineering via strain contour maps at the atomic level.
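As an illustration of the kind of processing involved (a minimal sketch, not the authors' released tool), local interplanar spacings can be estimated from windowed FFTs of a high-resolution TEM image and their relative deviation drawn as a contour map; the window size, step, and pixel size below are illustrative assumptions.

```python
import numpy as np
import matplotlib.pyplot as plt

def local_spacing_map(img, win=64, step=16, px_nm=0.01):
    """Slide a window across the image; in each window pick the strongest
    non-DC FFT peak and convert its spatial frequency to a d-spacing (nm)."""
    h, w = img.shape
    ys = list(range(0, h - win + 1, step))
    xs = list(range(0, w - win + 1, step))
    d_map = np.zeros((len(ys), len(xs)))
    hann = np.outer(np.hanning(win), np.hanning(win))
    for i, y in enumerate(ys):
        for j, x in enumerate(xs):
            patch = img[y:y + win, x:x + win] * hann
            spec = np.abs(np.fft.fftshift(np.fft.fft2(patch)))
            c = win // 2
            spec[c - 2:c + 3, c - 2:c + 3] = 0.0          # suppress the DC peak
            ky, kx = np.unravel_index(np.argmax(spec), spec.shape)
            freq = np.hypot(ky - c, kx - c) / (win * px_nm)  # cycles per nm
            d_map[i, j] = 1.0 / freq if freq > 0 else np.nan
    return d_map

# Example on a synthetic fringe pattern whose spacing slowly dilates with y.
yy, xx = np.mgrid[0:512, 0:512]
img = np.sin(2 * np.pi * xx / (10 + 0.004 * yy))
d = local_spacing_map(img, px_nm=1.0)                 # here 1 px is treated as 1 nm
strain = (d - np.nanmean(d)) / np.nanmean(d)          # + expansion, - compression
plt.contourf(strain, levels=20, cmap="RdBu_r")
plt.colorbar(label="relative lattice expansion")
plt.savefig("strain_contours.png", dpi=150)
```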

Related content

The interpretability of models has become a crucial issue in Machine Learning because of the growing impact of algorithmic decisions on real-world applications. Tree-ensemble methods, such as Random Forests or XGBoost, are powerful learning tools for classification tasks. However, while combining multiple trees may provide higher prediction quality than a single one, it sacrifices interpretability, resulting in "black-box" models. In light of this, we aim to develop an interpretable representation of a tree-ensemble model that can provide valuable insights into its behavior. First, given a target tree-ensemble model, we develop a hierarchical visualization tool based on a heatmap representation of the forest's feature use, taking the frequency of a feature and the level at which it is selected as indicators of importance. Next, we propose a mixed-integer linear programming (MILP) formulation for constructing a single optimal multivariate tree that accurately mimics the target model's predictions. The goal is to provide an interpretable surrogate model based on oblique hyperplane splits, which uses only the most relevant features according to the defined importance indicators. The MILP model includes a penalty on feature selection, based on feature frequency in the forest, to further induce sparsity of the splits. The natural formulation has been strengthened to improve the computational performance of mixed-integer software. Computational experiments are carried out on benchmark datasets from the UCI repository using a state-of-the-art off-the-shelf solver. The results show that the proposed model is effective in yielding a shallow interpretable tree that approximates the tree-ensemble decision function.
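A hedged sketch of the first step, the forest feature-use heatmap, is shown below using scikit-learn; the dataset, depth cap, and plotting choices are illustrative and not the authors' implementation.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)
max_depth = 5
forest = RandomForestClassifier(n_estimators=100, max_depth=max_depth,
                                random_state=0).fit(X, y)

# counts[f, d] = how often feature f is used to split at depth d across the forest
counts = np.zeros((X.shape[1], max_depth))
for est in forest.estimators_:
    tree = est.tree_
    stack = [(0, 0)]                                   # (node_id, depth)
    while stack:
        node, depth = stack.pop()
        if tree.children_left[node] != tree.children_right[node]:   # internal node
            counts[tree.feature[node], depth] += 1
            stack.append((tree.children_left[node], depth + 1))
            stack.append((tree.children_right[node], depth + 1))

plt.imshow(counts, aspect="auto", cmap="viridis")
plt.xlabel("depth of the split")
plt.ylabel("feature index")
plt.colorbar(label="number of splits in the forest")
plt.savefig("forest_feature_heatmap.png", dpi=150)
```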

Multivariate spatio-temporal models are widely applicable, but specifying their structure is complicated and may inhibit wider use. We introduce the R package tinyVAST from two viewpoints: the software user and the statistician. From the user viewpoint, tinyVAST adapts a widely used formula interface to specify generalized additive models and combines this with arguments to specify spatial and spatio-temporal interactions among variables. These interactions are specified using arrow notation (from structural equation models), or an extended arrow-and-lag notation that allows simultaneous, lagged, and recursive dependencies among variables over time. The user also specifies a spatial domain for areal (gridded), continuous (point-count), or stream-network data. From the statistician viewpoint, tinyVAST constructs sparse precision matrices representing multivariate spatio-temporal variation, and parameters are estimated by specifying a generalized linear mixed model (GLMM). This expressive interface encompasses vector autoregressive, empirical orthogonal function, spatial factor analysis, and ARIMA models. To demonstrate, we fit models to data from two survey platforms sampling corals, sponges, rockfishes, and flatfishes in the Gulf of Alaska and the Aleutian Islands. We then compare eight alternative model structures using different assumptions about habitat drivers and survey detectability. Model selection suggests that towed-camera and bottom-trawl gears have spatial variation in detectability but sample the same underlying density of flatfishes and rockfishes, and that rockfishes are positively associated with sponges while flatfishes are negatively associated with corals. We conclude that tinyVAST can be used to test complicated dependencies representing alternative structural assumptions for research and real-world policy evaluation.
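To make the statistician's viewpoint concrete, the sketch below (a generic illustration, not tinyVAST's internals or its R interface) shows why the precision matrix of a separable spatio-temporal process stays sparse: an AR(1) precision in time Kronecker-multiplied with a CAR-type precision over areal neighbours.

```python
import numpy as np
import scipy.sparse as sp

def ar1_precision(T, rho):
    """Tridiagonal precision of a unit-variance AR(1) process of length T."""
    main = np.full(T, 1 + rho**2)
    main[0] = main[-1] = 1.0
    off = np.full(T - 1, -rho)
    return sp.diags([off, main, off], [-1, 0, 1]) / (1 - rho**2)

def car_precision(adjacency, tau=1.0, alpha=0.9):
    """Conditional-autoregressive precision Q = tau * (D - alpha * W)."""
    W = sp.csr_matrix(adjacency, dtype=float)
    D = sp.diags(np.asarray(W.sum(axis=1)).ravel())
    return tau * (D - alpha * W)

# 4 areal units on a line, 50 time steps -> a 200 x 200 precision with ~4% nonzeros.
adj = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]])
Q_st = sp.kron(ar1_precision(50, rho=0.8), car_precision(adj)).tocsc()
print(Q_st.shape, f"{Q_st.nnz / np.prod(Q_st.shape):.1%} nonzero")
```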

While deep neural networks have achieved remarkable performance, data augmentation has emerged as a crucial strategy to mitigate overfitting and enhance network performance. These techniques hold particular significance in industrial manufacturing contexts. Recently, image mixing-based methods have been introduced, exhibiting improved performance on public benchmark datasets. However, their application to industrial tasks remains challenging. The manufacturing environment generates massive amounts of unlabeled data daily, with only a few occurrences of abnormal data, leading to severe data imbalance. Creating well-balanced datasets is therefore not straightforward because of the high cost of labeling, yet it is a crucial step for enhancing productivity. For this reason, we introduce ContextMix, a method tailored for industrial applications and benchmark datasets. ContextMix generates novel data by resizing entire images and integrating them into other images within the batch. This approach enables the method to learn discriminative features from the varying sizes of the resized images and to learn informative secondary features for recognizing objects in occluded images. With the minimal additional computational cost of image resizing, ContextMix improves performance compared with existing augmentation techniques. We evaluate its effectiveness across classification, detection, and segmentation tasks using various network architectures on public benchmark datasets. The proposed method also demonstrates improved results across a range of robustness tasks. Its efficacy in real industrial environments is particularly noteworthy, as demonstrated on the passive component dataset.
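The sketch below illustrates the mixing rule described above: the whole partner image is resized into a random rectangle of another image, and labels are mixed by the pasted area. The Beta-sampled area and the exact label rule follow common mixing augmentations and may differ from the paper's implementation.

```python
import torch
import torch.nn.functional as F

def contextmix(images, labels, alpha=1.0):
    """images: (B, C, H, W) float tensor, labels: (B,) long tensor."""
    B, C, H, W = images.shape
    perm = torch.randperm(B)
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    # Rectangle whose area fraction is (1 - lam), as in CutMix.
    cut_h = max(int(H * (1 - lam) ** 0.5), 1)
    cut_w = max(int(W * (1 - lam) ** 0.5), 1)
    y0 = torch.randint(0, H - cut_h + 1, (1,)).item()
    x0 = torch.randint(0, W - cut_w + 1, (1,)).item()
    # Resize the *entire* partner image into the rectangle (ContextMix),
    # instead of cropping a patch from it (CutMix).
    resized = F.interpolate(images[perm], size=(cut_h, cut_w),
                            mode="bilinear", align_corners=False)
    mixed = images.clone()
    mixed[:, :, y0:y0 + cut_h, x0:x0 + cut_w] = resized
    area = (cut_h * cut_w) / (H * W)
    return mixed, labels, labels[perm], 1.0 - area

# usage: x, ya, yb, w = contextmix(batch_x, batch_y)
#        loss = w * criterion(model(x), ya) + (1 - w) * criterion(model(x), yb)
```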

Audio-visual speech separation methods aim to integrate different modalities to generate high-quality separated speech, thereby enhancing the performance of downstream tasks such as speech recognition. Most existing state-of-the-art (SOTA) models operate in the time domain. However, their overly simplistic modeling of acoustic features often necessitates larger and more computationally intensive models to achieve SOTA performance. In this paper, we present a novel time-frequency domain audio-visual speech separation method: the Recurrent Time-Frequency Separation Network (RTFS-Net), which operates on the complex time-frequency bins produced by the Short-Time Fourier Transform. We model and capture the time and frequency dimensions of the audio independently, using a multi-layered RNN along each dimension. Furthermore, we introduce a unique attention-based fusion technique for the efficient integration of audio and visual information, and a new mask separation approach that exploits the intrinsic spectral nature of the acoustic features for a cleaner separation. RTFS-Net outperforms the previous SOTA method while using only 10% of the parameters and 18% of the MACs. This is the first time-frequency domain audio-visual speech separation method to outperform all contemporary time-domain counterparts.
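A hedged sketch of the dual-dimension recurrent modelling is given below: the complex STFT is treated as a (batch, channels, time, frequency) tensor and an RNN is run first along the frequency axis and then along the time axis. This is a generic dual-path block written for illustration, not the released RTFS-Net code.

```python
import torch
import torch.nn as nn

class TimeFreqRNNBlock(nn.Module):
    def __init__(self, channels, hidden):
        super().__init__()
        self.freq_rnn = nn.LSTM(channels, hidden, batch_first=True, bidirectional=True)
        self.freq_proj = nn.Linear(2 * hidden, channels)
        self.time_rnn = nn.LSTM(channels, hidden, batch_first=True, bidirectional=True)
        self.time_proj = nn.Linear(2 * hidden, channels)
        self.norm = nn.LayerNorm(channels)

    def forward(self, x):                      # x: (B, C, T, F)
        B, C, T, Fr = x.shape
        # Scan along frequency for every time frame.
        f_in = x.permute(0, 2, 3, 1).reshape(B * T, Fr, C)
        f_out, _ = self.freq_rnn(f_in)
        x = x + self.freq_proj(f_out).reshape(B, T, Fr, C).permute(0, 3, 1, 2)
        # Scan along time for every frequency bin.
        t_in = x.permute(0, 3, 2, 1).reshape(B * Fr, T, C)
        t_out, _ = self.time_rnn(t_in)
        x = x + self.time_proj(t_out).reshape(B, Fr, T, C).permute(0, 3, 2, 1)
        return self.norm(x.permute(0, 2, 3, 1)).permute(0, 3, 1, 2)

# usage: mixture waveform -> torch.stft -> stack real/imag as channels -> stacked blocks
```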

Image steganography, the practice of concealing information within another image, traditionally faces security challenges when its methods become publicly known or are under attack. To address this, we introduce a novel private key-based image steganography technique. This approach keeps the hidden information secure, since access requires the corresponding private key regardless of public knowledge of the steganography method. We present experimental evidence demonstrating the effectiveness of our method and showcasing its real-world applicability. Furthermore, we identify a critical challenge in the invertible image steganography process: the transfer of non-essential, or `garbage', information from the secret pipeline to the host pipeline. To tackle this issue, we introduce a decay weight that controls the information transfer, effectively filtering out irrelevant data and enhancing the performance of image steganography. The code for this technique is publicly accessible at //github.com/yanghangAI/DKiS, and a practical demonstration can be found at //yanghang.site/hidekey.
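The sketch below illustrates one plausible reading of the decay weight (hedged; the official code at the link above is authoritative): in an additive invertible coupling between a host branch and a secret branch, the update flowing from the secret pipeline into the host pipeline is scaled by a decay factor, so less relevant information is attenuated while the mapping remains exactly invertible.

```python
import torch
import torch.nn as nn

def small_cnn(ch):
    # Illustrative sub-network; the real transforms are more elaborate.
    return nn.Sequential(nn.Conv2d(ch, 32, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(32, ch, 3, padding=1))

class DecayCoupling(nn.Module):
    def __init__(self, ch, decay=0.8):
        super().__init__()
        self.decay = decay
        self.to_host = small_cnn(ch)     # secret -> host transfer
        self.to_secret = small_cnn(ch)   # host  -> secret transfer

    def forward(self, host, secret):
        host = host + self.decay * self.to_host(secret)   # attenuated transfer
        secret = secret + self.to_secret(host)
        return host, secret

    def inverse(self, host, secret):                      # exact inversion
        secret = secret - self.to_secret(host)
        host = host - self.decay * self.to_host(secret)
        return host, secret
```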

Full-reference image quality metrics (FR-IQMs) aim to measure the visual differences between a pair of reference and distorted images, with the goal of accurately predicting human judgments. However, existing FR-IQMs, including traditional ones like PSNR and SSIM and even perceptual ones such as HDR-VDP, LPIPS, and DISTS, still fall short in capturing the complexities and nuances of human perception. In this work, rather than devising a novel IQM model, we seek to improve upon the perceptual quality of existing FR-IQM methods. We achieve this by considering visual masking, an important characteristic of the human visual system that changes its sensitivity to distortions as a function of local image content. Specifically, for a given FR-IQM metric, we propose to predict a visual masking model that modulates the reference and distorted images in a way that penalizes visual errors according to their visibility. Since ground-truth visual masks are difficult to obtain, we demonstrate how they can be derived in a self-supervised manner solely from mean opinion scores (MOS) collected on an FR-IQM dataset. Our approach yields enhanced FR-IQM metrics that align more closely with human judgments, both visually and quantitatively.
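A minimal sketch of the masking idea follows, with plain L1 standing in for the wrapped FR-IQM and an illustrative two-layer mask network; the actual architecture and the self-supervised MOS-based training are not reproduced here.

```python
import torch
import torch.nn as nn

class MaskedFRIQM(nn.Module):
    def __init__(self):
        super().__init__()
        self.mask_net = nn.Sequential(
            nn.Conv2d(6, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid())   # mask in [0, 1]

    def forward(self, ref, dist):
        # Mask is low where local content masks the error, high where it is visible.
        mask = self.mask_net(torch.cat([ref, dist], dim=1))
        return (mask * (ref - dist).abs()).mean(dim=(1, 2, 3))   # per-image score

# training idea: regress the scores against normalised MOS,
# loss = mse(model(ref, dist), mos_normalised)
```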

This paper investigates the replication of the experiments of Billock and Tsou [PNAS, 2007] using the controllability of Amari-type neural fields modelling cortical activity in the primary visual cortex (V1), focusing on a regular funnel pattern localised in the fovea or in the peripheral visual field. The aim is to understand and model the visual phenomena observed in these experiments, emphasising their nonlinear nature. The study involves designing sensory inputs simulating the visual stimuli from Billock and Tsou's experiments. The after-images induced by these inputs are then studied theoretically and numerically to determine their capacity to replicate the experimentally observed visual effects. A key aspect of this research is investigating the effects induced by the nonlinear nature of neural responses. In particular, by highlighting the importance of both excitatory and inhibitory neurons in the emergence of certain visual phenomena, this study suggests that an interplay of both types of neuronal activity plays an essential role in visual processes, challenging the assumption that such processes are driven mainly by excitatory activity alone.
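For context, a standard Amari-type neural field equation (a generic single-population form; the model studied in the paper may separate excitatory and inhibitory populations) reads

$$\partial_t a(x,t) = -a(x,t) + \int_{\Omega} \omega(x-y)\, f(a(y,t))\, \mathrm{d}y + I(x,t),$$

where $a$ is the cortical activity, $\omega$ the intracortical connectivity kernel, $f$ a nonlinear (sigmoidal) firing-rate function, and $I$ the sensory input acting as the control term.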

The present work deals with the numerical resolution of coupled 3D-2D problems arising from the simulation of fluid flow in fractured porous media modeled via the Discrete Fracture and Matrix (DFM) model. According to the DFM model, fractures are represented as planar interfaces immersed in a 3D porous matrix and can behave as preferential flow paths, in the case of conductive fractures, or can act as barriers to the flow when, instead, the permeability in the normal-to-fracture direction is small compared to the permeability of the matrix. Consequently, the pressure solution in a DFM can be discontinuous across a barrier, as a result of the geometrical dimensional reduction operated on the fracture. The present work aims at developing a numerical scheme suitable for simulating the flow in a DFM with both conductive fractures and barriers, using a mesh for the 3D matrix that is non-conforming to the fractures and that is ready for domain decomposition. This is achieved starting from a PDE-constrained optimization method, currently available in the literature only for conductive fractures in a DFM. First, a novel formulation of the optimization problem is defined to account for non-permeable fractures, which are described by a filtration-like coupling at the interface with the surrounding porous matrix. In addition, the extended finite element method with discontinuous enrichment functions is used to reproduce the pressure solution in the matrix around a barrier. The method is presented here in its simplest form, for clarity of exposition, i.e., considering the case of a single fracture in a 3D domain, and a proof of the well-posedness of the resulting discrete problem is provided. Four validation examples are proposed to show the viability and the effectiveness of the method.
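For orientation, a common form of such a filtration-like (Robin-type) coupling in reduced fracture models, written here in generic notation whose coefficients may differ from the paper's formulation, is

$$\mathbf{u}_i \cdot \mathbf{n}_i = \frac{2 K_n}{\ell_\gamma}\,\bigl(p_i - p_\gamma\bigr) \quad \text{on } \gamma,\; i = 1, 2,$$

where $K_n$ is the normal fracture permeability, $\ell_\gamma$ the fracture aperture, $p_i$ the matrix pressure trace on side $i$ of the fracture $\gamma$, and $p_\gamma$ the fracture pressure; a small $K_n$ allows a large pressure jump across the barrier.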

Batteryless energy harvesting systems enable a wide array of new sensing, computation, and communication platforms untethered by power delivery or battery maintenance demands. Energy harvesters charge a buffer capacitor from an unreliable environmental source until enough energy is stored to guarantee a burst of operation despite changes in power input. Current platforms use a fixed-size buffer chosen at design time to meet constraints on charge time or application longevity, but static energy buffers are a poor fit for the highly volatile power sources found in real-world deployments: fixed buffers waste energy both as heat when they reach capacity during a power surplus and as leakage when they fail to charge the system during a power deficit. To maximize batteryless system performance in the face of highly dynamic input power, we propose REACT: a responsive buffering circuit that varies total capacitance according to net input power. REACT uses a variable capacitor bank to expand capacitance and capture incoming energy during a power surplus, and reconfigures its internal capacitors to reclaim additional energy from each capacitor as power input falls. Compared to fixed-capacity systems, REACT captures more energy, maximizes usable energy, and efficiently decouples system voltage from stored charge, enabling low-power and high-performance designs previously limited by ambient power. Our evaluation on real-world platforms shows that REACT eliminates the tradeoff between responsiveness, efficiency, and longevity, increasing the energy available for useful work by an average of 25.6% over static buffers optimized for reactivity and capacity, improving event responsiveness by an average of 7.7x without sacrificing capacity, and enabling programmer-directed longevity guarantees.
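A back-of-the-envelope calculation (illustrative numbers, not REACT's measured results) shows the tradeoff a fixed buffer faces: the energy usable between a brown-out threshold V_min and the rail V_max is E = 1/2 * C * (V_max^2 - V_min^2), so a small buffer charges quickly but overflows in a surplus, while a large one stores much more yet is slow to become operational after a deficit.

```python
def usable_energy_mj(cap_f, v_max=3.3, v_min=1.8):
    """Usable energy (mJ) of a capacitor between v_min and v_max (illustrative values)."""
    return 0.5 * cap_f * (v_max**2 - v_min**2) * 1e3

small, large = 100e-6, 10e-3               # 100 uF vs 10 mF buffer
print(usable_energy_mj(small))             # ~0.38 mJ: fast to charge, overflows in a surplus
print(usable_energy_mj(large))             # ~38 mJ: large capacity, slow to reach v_min
# An adaptive bank switches between these regimes: stay small to become
# operational quickly after a deficit, expand to absorb a surplus.
```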

We present ResMLP, an architecture built entirely upon multi-layer perceptrons for image classification. It is a simple residual network that alternates (i) a linear layer in which image patches interact, independently and identically across channels, and (ii) a two-layer feed-forward network in which channels interact independently per patch. When trained with a modern training strategy using heavy data-augmentation and optionally distillation, it attains surprisingly good accuracy/complexity trade-offs on ImageNet. We will share our code based on the Timm library and pre-trained models.
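A hedged sketch of one ResMLP residual block follows; the affine normalisation matches the paper's description, while layer-scale initialisation and other training details are omitted.

```python
import torch
import torch.nn as nn

class Affine(nn.Module):
    """Element-wise affine transform used in place of LayerNorm."""
    def __init__(self, dim):
        super().__init__()
        self.alpha = nn.Parameter(torch.ones(dim))
        self.beta = nn.Parameter(torch.zeros(dim))
    def forward(self, x):                      # x: (B, patches, dim)
        return self.alpha * x + self.beta

class ResMLPBlock(nn.Module):
    def __init__(self, dim, num_patches, expansion=4):
        super().__init__()
        self.norm1, self.norm2 = Affine(dim), Affine(dim)
        self.patch_mix = nn.Linear(num_patches, num_patches)     # cross-patch interaction
        self.channel_mlp = nn.Sequential(
            nn.Linear(dim, expansion * dim), nn.GELU(),
            nn.Linear(expansion * dim, dim))                      # per-patch interaction

    def forward(self, x):                      # x: (B, patches, dim)
        x = x + self.patch_mix(self.norm1(x).transpose(1, 2)).transpose(1, 2)
        x = x + self.channel_mlp(self.norm2(x))
        return x

# usage: tokens = patch_embedding(img)        # (B, num_patches, dim)
#        tokens = ResMLPBlock(dim=384, num_patches=196)(tokens)
```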
