
Dynamic shear banding and fracturing in unsaturated porous media is a significant problem in engineering and science. This article proposes a multiphase micro-periporomechanics (uPPM) paradigm for modeling dynamic shear banding and fracturing in unsaturated porous media. Periporomechanics (PPM) is a nonlocal reformulation of classical poromechanics that models continuous and discontinuous deformation/fracture and fluid flow in porous media within a single framework. In PPM, a multiphase porous material is postulated as a collection of a finite number of mixed material points. The length scale in PPM that dictates the nonlocal interaction between material points is a mathematical object that lacks a direct physical meaning. As a novelty, the coupled uPPM incorporates a microstructure-based material length scale by considering micro-rotations of the solid skeleton, following the Cosserat continuum theory for solids. As a new contribution, we reformulate the second-order work for detecting material instability, as well as the energy-based crack criterion and the J-integral for modeling fracturing, within the uPPM paradigm. The stabilized Cosserat PPM correspondence principle, which mitigates the multiphase zero-energy mode instability, is augmented to include unsaturated fluid flow. We numerically implement the uPPM paradigm through a dual-way fractional-step algorithm in time and a hybrid Lagrangian-Eulerian meshfree method in space. Numerical examples are presented to demonstrate the robustness and efficacy of the proposed uPPM paradigm for modeling shear banding and fracturing in unsaturated porous media.
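As a point of reference for the instability criterion mentioned above, a common local form of Hill's second-order work is sketched below; writing it in terms of the effective stress is an assumption about how the criterion would be specialized for unsaturated porous media, not the article's exact formulation.

```latex
% Local second-order work (Hill's criterion). Instability becomes possible when
% it is non-positive. The effective-stress form is an illustrative assumption.
\begin{equation}
  d^2 W \;=\; d\boldsymbol{\sigma}' : d\boldsymbol{\varepsilon}
        \;=\; \sum_{i,j} d\sigma'_{ij}\, d\varepsilon_{ij},
  \qquad
  d^2 W \le 0 \;\Longrightarrow\; \text{material instability is possible}.
\end{equation}
```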

Related content

The ACM/IEEE 23rd International Conference on Model Driven Engineering Languages and Systems (MODELS) is the premier conference series for model-driven software and systems engineering, organized with the support of ACM SIGSOFT and IEEE TCSE. Since 1998, MODELS has covered all aspects of modeling, from languages and methods to tools and applications. Its participants come from diverse backgrounds, including researchers, academics, engineers, and industry practitioners. MODELS 2019 is a forum in which participants can exchange cutting-edge research results and innovative practical experience around modeling and model-driven software and systems. This year's edition will give the modeling community an opportunity to further advance the foundations of modeling and to present innovative applications of modeling in emerging areas such as cyber-physical systems, embedded systems, socio-technical systems, cloud computing, big data, machine learning, security, open source, and sustainability.
December 19, 2023

PyPartMC is a Pythonic interface to PartMC, a stochastic, particle-resolved aerosol model implemented in Fortran. Both PyPartMC and PartMC are free, libre, and open-source. PyPartMC reduces the number of steps and the effort necessary to install and use PartMC. Without PyPartMC, setting up PartMC requires working with a UNIX shell, providing Fortran and C libraries, and performing standard Fortran and C source code configuration, compilation, and linking. This can be challenging for those less experienced with computational research or those intending to use PartMC in environments where provisioning UNIX tools is less straightforward (e.g., on Windows). PyPartMC offers a single-step installation/upgrade process for PartMC and all of its dependencies through the pip Python package manager on Linux, macOS, and Windows. This allows streamlined access to the unmodified and versioned Fortran internals of the PartMC codebase from both Python and other interoperable environments (e.g., Julia through PyCall). Consequently, users of PyPartMC can set up, run, process, and visualize the output of PartMC simulations using a single general-purpose programming language.
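A minimal sketch of the single-step workflow described above follows; only the package name and the import are taken from the abstract, and the final line merely lists the exposed symbols rather than guessing at the simulation API.

```python
# Single-step installation/upgrade (per the abstract), from the command line:
#   pip install PyPartMC

import PyPartMC as ppmc  # exposes the versioned PartMC Fortran internals to Python

# Listing the public symbols avoids assuming any particular class or function;
# see the PyPartMC examples for how to set up, run, and post-process simulations.
print(sorted(name for name in dir(ppmc) if not name.startswith("_")))
```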

Taking advantage of the structure of large datasets to pre-train Deep Learning models is a promising strategy for decreasing the need for supervised data. Self-supervised learning methods, such as contrastive learning and its variants, are a promising way to obtain better representations in many Deep Learning applications. Soundscape ecology is one application in which annotations are expensive and scarce, and it therefore deserves investigation into how closely methods that do not require annotations can approximate those that rely on supervision. Our study uses Barlow Twins and VICReg to pre-train different models on the same small dataset of sound patterns of bird and anuran species. In a downstream task of classifying those animal species, the models obtained results close to those of supervised models pre-trained on large generic datasets and fine-tuned on the same task.
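To make the pre-training objective concrete, here is a minimal PyTorch sketch of the Barlow Twins loss; the embedding size, batch size, and off-diagonal weight are illustrative assumptions rather than the settings used in the study.

```python
import torch

def barlow_twins_loss(z_a: torch.Tensor, z_b: torch.Tensor, lambd: float = 5e-3) -> torch.Tensor:
    """Barlow Twins objective: drive the cross-correlation matrix between the
    embeddings of two augmented views of a batch towards the identity."""
    n = z_a.shape[0]
    # Standardize each embedding dimension over the batch.
    z_a = (z_a - z_a.mean(0)) / (z_a.std(0) + 1e-6)
    z_b = (z_b - z_b.mean(0)) / (z_b.std(0) + 1e-6)
    c = (z_a.T @ z_b) / n                                        # empirical cross-correlation
    on_diag = (torch.diagonal(c) - 1).pow(2).sum()               # push diagonal towards 1
    off_diag = (c - torch.diag(torch.diagonal(c))).pow(2).sum()  # push off-diagonal towards 0
    return on_diag + lambd * off_diag

# Illustrative usage with random embeddings of two augmented views of 32 spectrograms.
z1, z2 = torch.randn(32, 128), torch.randn(32, 128)
loss = barlow_twins_loss(z1, z2)
```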

Substantial efforts have been made, both in industry and academia, in developing various Decision Modeling formalisms. A challenging problem is that of expressing decision knowledge in the context of incomplete knowledge. In such contexts, decisions depend on what is known or not known. We argue that none of the existing formalisms for modeling decisions are capable of correctly capturing the epistemic nature of such decisions, inevitably causing issues in situations of uncertainty. This paper presents a new language for modeling decisions with incomplete knowledge. It combines three principles: stratification, autoepistemic logic, and definitions. A knowledge base in this language is a hierarchy of epistemic theories, where each component theory may reason epistemically about the knowledge in lower theories, and decisions are made using definitions with epistemic conditions.

In this article, we study some anisotropic singular perturbations for a class of linear elliptic problems. Uniform estimates for the conforming $Q_1$ finite element method are derived, and further results on convergence and regularity for the continuous problem are proved.
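For concreteness, a standard model problem of this type is sketched below; the exact class of problems studied in the article may differ, so the form shown is an assumption.

```latex
% A typical anisotropic singularly perturbed elliptic model problem: the small
% parameter multiplies the second derivative in one direction only.
\begin{equation}
  -\varepsilon^{2}\,\partial_{x_1 x_1} u_\varepsilon - \partial_{x_2 x_2} u_\varepsilon = f
  \quad \text{in } \Omega,
  \qquad
  u_\varepsilon = 0 \quad \text{on } \partial\Omega .
\end{equation}
```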

This paper presents a learnable solver tailored to discretized linear partial differential equations (PDEs). The solver requires only problem-specific training data and no specialized expertise. Its development is anchored by three core principles: (1) a multilevel hierarchy to promote rapid convergence, (2) linearity with respect to the right-hand side of the equation, and (3) weight sharing across levels to facilitate adaptability to various problem sizes. Built on these principles, we introduce a network adept at solving PDEs discretized on structured grids, even when faced with heterogeneous coefficients. The cornerstone of the proposed solver is the convolutional neural network (CNN), chosen for its capacity to learn from structured data and for its computation pattern, which resembles that of multigrid components. To evaluate its effectiveness, the solver was trained to solve convection-diffusion equations featuring heterogeneous diffusion coefficients. The solver exhibited swift convergence to high accuracy over a range of grid sizes, extending from $31 \times 31$ to $4095 \times 4095$. Remarkably, our method outperformed the classical geometric multigrid (GMG) solver, demonstrating a speedup of approximately 3 to 8 times. Furthermore, we explored the solver's generalizability to untrained coefficient distributions. The findings show consistent reliability across various other coefficient distributions: when trained on a mixed coefficient distribution, the solver generalizes nearly as effectively to all types of coefficient distributions.
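A hedged PyTorch sketch of the three principles (multilevel hierarchy, linearity in the right-hand side, weight sharing across levels) is given below; it is a generic recursive, bias-free convolutional correction scheme, not the authors' architecture.

```python
import torch
import torch.nn as nn

class LearnedMultilevelSolver(nn.Module):
    """Maps a residual to an error correction with convolutions whose weights are
    shared across all levels. All layers are bias-free, so the output is linear
    in the input residual. Illustrative sketch only."""

    def __init__(self, channels: int = 8):
        super().__init__()
        self.smooth = nn.Sequential(                      # learned "smoother"
            nn.Conv2d(1, channels, 3, padding=1, bias=False),
            nn.Conv2d(channels, 1, 3, padding=1, bias=False),
        )
        self.restrict = nn.Conv2d(1, 1, 3, stride=2, padding=1, bias=False)
        self.prolong = nn.ConvTranspose2d(1, 1, 3, stride=2, padding=1,
                                          output_padding=1, bias=False)

    def forward(self, r: torch.Tensor, levels: int = 3) -> torch.Tensor:
        e = self.smooth(r)                                # fine-level correction
        if levels > 1 and min(r.shape[-2:]) > 4:
            e_coarse = self.forward(self.restrict(r), levels - 1)  # same (shared) weights
            p = self.prolong(e_coarse)
            e = e + p[..., : r.shape[-2], : r.shape[-1]]  # crop to handle odd grid sizes
        return e

# Illustrative use as an iterative solver for A u = f on a structured grid:
# repeat u <- u + net(f - A(u)) until the residual norm is small enough.
net = LearnedMultilevelSolver()
correction = net(torch.randn(1, 1, 31, 31))
```

Because every layer is bias-free, the whole map is linear in its input, which is what preserves linearity with respect to the right-hand side.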

Quantum computing is becoming more of a reality as time passes, bringing several cybersecurity challenges. Modern cryptography is based on the computational complexity of specific mathematical problems, but as new quantum-based computers appear, classical methods might not be enough to secure communications. In this paper, we analyse the readiness of the Galileo Open Service Navigation Message Authentication (OSNMA) against these new threats. The analysis and its assessment were performed using the OSNMA documentation, reviewing the Post-Quantum Cryptography (PQC) algorithms competing in the National Institute of Standards and Technology (NIST) standardization process, and studying the possibility of their implementation in the Galileo service. The main barrier to adopting the PQC approach is the size of both the signature and the key. The analysis shows that OSNMA is not yet prepared to face the quantum threat, and a significant change would be required. This work concludes by assessing different temporary countermeasures that can be implemented to sustain the system's integrity in the short term.
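To give a sense of the size barrier mentioned above, the rough figures below compare classical ECDSA signatures with two NIST PQC signature schemes; the numbers are approximate values recalled from the respective specifications, and the choice of ECDSA P-256 as the OSNMA baseline is an assumption, so treat the output as indicative only.

```python
# Approximate public-key and signature sizes in bytes (indicative values only).
sizes = {
    "ECDSA P-256 (assumed OSNMA baseline)": {"public_key": 64, "signature": 64},
    "CRYSTALS-Dilithium2": {"public_key": 1312, "signature": 2420},
    "Falcon-512": {"public_key": 897, "signature": 666},
}

ecdsa_sig = sizes["ECDSA P-256 (assumed OSNMA baseline)"]["signature"]
for scheme, s in sizes.items():
    growth = s["signature"] / ecdsa_sig
    print(f"{scheme:<40} pk={s['public_key']:>5} B  sig={s['signature']:>5} B  (~{growth:.0f}x ECDSA sig)")
```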

We present ParrotTTS, a modularized text-to-speech synthesis model leveraging disentangled self-supervised speech representations. It can train a multi-speaker variant effectively using transcripts from a single speaker. ParrotTTS adapts to a new language in a low-resource setup and generalizes to languages not seen while training the self-supervised backbone. Moreover, without training on bilingual or parallel examples, ParrotTTS can transfer voices across languages while preserving speaker-specific characteristics, e.g., synthesizing fluent Hindi speech using a French speaker's voice and accent. We present extensive results in monolingual and multilingual scenarios. ParrotTTS outperforms state-of-the-art multilingual TTS models while using only a fraction of the paired data required by the latter.

In this article, we create an artificial neural network (ANN) that combines both classical and modern techniques for determining the key length of a Vigen\`{e}re cipher. We provide experimental evidence supporting the accuracy of our model for a wide range of parameters. We also discuss the creation and features of this ANN along with a comparative analysis between our ANN, the index of coincidence, and the twist-based algorithms.
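For context on one of the baselines, here is a minimal Python sketch of the classical index-of-coincidence approach to key-length estimation (not the ANN itself, and not necessarily the exact variant compared in the article).

```python
from collections import Counter

def index_of_coincidence(text: str) -> float:
    """Probability that two randomly chosen letters of the text are equal."""
    n = len(text)
    if n < 2:
        return 0.0
    counts = Counter(text)
    return sum(c * (c - 1) for c in counts.values()) / (n * (n - 1))

def estimate_key_length(ciphertext: str, max_len: int = 20) -> int:
    """Guess the Vigenere key length as the period whose columns look most like
    monoalphabetic English text (whose IC is roughly 0.066)."""
    text = "".join(ch for ch in ciphertext.upper() if ch.isalpha())
    best_len, best_score = 1, float("inf")
    for k in range(1, max_len + 1):
        columns = [text[i::k] for i in range(k)]
        avg_ic = sum(index_of_coincidence(col) for col in columns) / k
        if abs(avg_ic - 0.066) < best_score:
            best_len, best_score = k, abs(avg_ic - 0.066)
    return best_len
```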

Trajectory prediction in traffic scenes involves accurately forecasting the behaviour of surrounding vehicles. To achieve this objective, it is crucial to consider contextual information, including the driving paths of vehicles, road topology, lane dividers, and traffic rules. Although studies have demonstrated the potential of leveraging heterogeneous context for improving trajectory prediction, state-of-the-art deep learning approaches still rely on a limited subset of this information, mainly because comprehensive representations are scarce. This paper presents an approach that utilizes knowledge graphs to model the diverse entities and their semantic connections within traffic scenes. Further, we present the nuScenes Knowledge Graph (nSKG), a knowledge graph for the nuScenes dataset that explicitly models all scene participants and road elements, as well as their semantic and spatial relationships. To facilitate the use of the nSKG via graph neural networks for trajectory prediction, we provide the data in a format ready to use with the PyG library. All artefacts can be found here: //github.com/boschresearch/nuScenes_Knowledge_Graph
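A hedged sketch of how such heterogeneous graph data is typically handled with PyG is shown below; the node and edge types are illustrative assumptions, not necessarily the schema used by nSKG.

```python
import torch
from torch_geometric.data import HeteroData

# Build a small heterogeneous scene graph with assumed node/edge types.
data = HeteroData()
data["agent"].x = torch.randn(4, 16)   # 4 agents with 16-dim features (assumed)
data["lane"].x = torch.randn(10, 8)    # 10 lane segments with 8-dim features (assumed)
data["agent", "on", "lane"].edge_index = torch.tensor([[0, 1, 2, 3],
                                                       [0, 0, 3, 7]])
data["lane", "connects", "lane"].edge_index = torch.tensor([[0, 1, 2],
                                                            [1, 2, 3]])

print(data)  # summary of node/edge stores, ready for a heterogeneous GNN
```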

In large-scale systems, there are fundamental challenges when centralised techniques are used for task allocation. The number of interactions is limited by resource constraints such as those on computation, storage, and network communication. We can increase scalability by implementing the system as a distributed task-allocation system, sharing tasks across many agents. However, this also increases the resource cost of communication and synchronisation and is difficult to scale. In this paper we present four algorithms to solve these problems. In combination, these algorithms enable each agent to improve its task-allocation strategy through reinforcement learning while changing how much it explores the system in response to how optimal it believes its current strategy to be, given its past experience. We focus on distributed agent systems where the agents' behaviours are constrained by resource usage limits, restricting agents to local rather than system-wide knowledge. We evaluate these algorithms in a simulated environment where agents are given a task composed of multiple subtasks that must be allocated to other agents with differing capabilities, which then carry out those tasks. We also simulate real-life system effects such as networking instability. Our solution is shown to solve the task allocation problem to within 6.7% of the theoretical optimum for the system configurations considered. It provides 5x better performance recovery than approaches without knowledge retention when system connectivity is impacted, and it is tested on systems of up to 100 agents with less than a 9% impact on the algorithms' performance.
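The following toy sketch illustrates the general idea of tying exploration to how optimal an agent believes its current strategy is; it is a generic epsilon-greedy bandit over candidate peers, not the four algorithms presented in the paper.

```python
import random

class AllocatorAgent:
    """Learns which peer to delegate a subtask to, exploring more while its best
    estimated reward is far from an assumed achievable target. Illustrative only."""

    def __init__(self, peers, lr=0.1, target=1.0):
        self.q = {p: 0.0 for p in peers}  # estimated reward of delegating to each peer
        self.lr = lr
        self.target = target              # reward assumed achievable by an optimal strategy

    def epsilon(self) -> float:
        # Explore in proportion to the gap between the best estimate and the target.
        best = max(self.q.values())
        return min(1.0, max(0.05, (self.target - best) / self.target))

    def choose(self):
        if random.random() < self.epsilon():
            return random.choice(list(self.q))
        return max(self.q, key=self.q.get)

    def update(self, peer, reward: float) -> None:
        self.q[peer] += self.lr * (reward - self.q[peer])

# Illustrative loop: delegate a subtask, observe a (noisy) reward, update the estimate.
agent = AllocatorAgent(peers=["A", "B", "C"])
for _ in range(200):
    peer = agent.choose()
    reward = {"A": 0.3, "B": 0.8, "C": 0.5}[peer] + random.gauss(0, 0.05)
    agent.update(peer, reward)
```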
