
Modern 'smart' materials have a complex, heterogeneous microscale structure, often with an unknown macroscale closure that we nonetheless need to realise for large-scale engineering and science. The multiscale Equation-Free Patch Scheme empowers us to non-intrusively, efficiently, and accurately predict the large-scale, system-level solutions through computations on only small, sparse patches of the given detailed microscale system. Here the microscale system is a 2D beam of heterogeneous elasticity, with either fixed-fixed, fixed-free, or periodic boundary conditions. We demonstrate that the described multiscale Patch Scheme simply, efficiently, and stably predicts the beam's macroscale behaviour, with controllable accuracy, at finite scale separation. Dynamical systems theory supports the scheme. This article points the way for others to use this systematic, non-intrusive approach, via a developing toolbox of functions, to model and accurately compute the macroscale, system-level behaviour of general complex physical and engineering systems.
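To make the patch idea concrete, here is a minimal sketch assuming a toy 1D heterogeneous diffusion problem rather than the paper's 2D elastic beam (and plain Python rather than the authors' toolbox): the microscale equations are computed only inside small patches, whose edge values are coupled by interpolating the patch-centre values across the macroscale. All parameters and the quadratic coupling are illustrative choices.

```python
import numpy as np

# Hypothetical toy problem: heterogeneous 1D diffusion on small patches.
N, n, h = 8, 5, 0.01          # patches, micro-points per patch, micro spacing
X = np.linspace(0, 2*np.pi, N, endpoint=False)   # patch centres (periodic macroscale)
x = X[:, None] + h*np.arange(-(n//2), n//2 + 1)  # micro grid within each patch
kappa = 1.0 + 0.5*np.cos(7*x)                    # heterogeneous microscale diffusivity
u = np.sin(X)[:, None]*np.ones(n)                # initial field on every patch
dt = 0.2*h**2

def couple(u):
    """Set patch edge values by quadratic interpolation of patch-centre values."""
    U = u[:, n//2]                               # macroscale field = patch centres
    r = (n//2)*h/(X[1] - X[0])                   # patch half-width / patch spacing
    for j in range(N):
        Um, U0, Up = U[(j-1) % N], U[j], U[(j+1) % N]
        u[j, 0]  = U0 - r*(Up - Um)/2 + r**2*(Up - 2*U0 + Um)/2
        u[j, -1] = U0 + r*(Up - Um)/2 + r**2*(Up - 2*U0 + Um)/2
    return u

for step in range(2000):                         # microscale time-steps, interiors only
    u = couple(u)
    flux = kappa[:, :-1]*(u[:, 1:] - u[:, :-1])/h
    u[:, 1:-1] += dt*(flux[:, 1:] - flux[:, :-1])/h
```

The key property the scheme exploits is that the micro-solver is used as a black box: only the coupling function touches the macroscale.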

Related content

The formation of shear shock waves in the brain has been proposed as a plausible explanation for deep intracranial injuries. In fact, such singular solutions emerge naturally in soft viscoelastic tissues under dynamic loading. To improve our understanding of the mechanical processes at hand, dedicated computational models are needed. The present study concerns three-dimensional numerical models of incompressible viscoelastic solids whose motion is analysed by means of shock-capturing finite volume methods. More specifically, we focus on the artificial compressibility method, a technique frequently employed in computational fluid dynamics. The material behaviour is deduced from the Fung--Simo quasi-linear viscoelasticity (QLV) theory, where the elastic response is of Yeoh type. We analyse the accuracy of the method and demonstrate its applicability to the study of nonlinear wave propagation in soft solids. The numerical results cover accuracy tests, shock formation, and wave diffraction.
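As a hedged illustration of the artificial compressibility idea, the following sketch assumes a 1D linear pressure-velocity system (not the paper's 3D viscoelastic QLV/Yeoh model): the incompressibility constraint is relaxed by giving pressure its own evolution equation with a large artificial sound speed c, and the system is advanced with a Lax-Friedrichs finite-volume flux.

```python
import numpy as np

# Hypothetical 1D test: p_t + rho*c^2*v_x = 0 (relaxed incompressibility)
# coupled to rho*v_t + p_x = 0, advanced by a Lax-Friedrichs finite volume.
nx, c, rho = 400, 50.0, 1.0
dx = 1.0/nx
x = (np.arange(nx) + 0.5)*dx
q = np.zeros((2, nx))                    # q[0] = pressure p, q[1] = velocity v
q[0] = np.exp(-200*(x - 0.5)**2)         # initial pressure pulse
dt = 0.4*dx/c                            # CFL set by the artificial sound speed

def f(q):                                # physical flux of the p-v system
    return np.stack([rho*c**2*q[1], q[0]/rho])

for step in range(300):
    ql = np.roll(q, 1, axis=1)           # periodic left neighbours
    qr = np.roll(q, -1, axis=1)          # periodic right neighbours
    # Lax-Friedrichs fluxes through the right and left cell faces
    Fr = 0.5*(f(q) + f(qr)) - 0.5*c*(qr - q)
    Fl = 0.5*(f(ql) + f(q)) - 0.5*c*(q - ql)
    q = q - dt/dx*(Fr - Fl)
```

The larger c is chosen, the closer the solution approximates the incompressible limit, at the price of a smaller stable time step.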

Research in high-energy physics (HEP) requires huge amounts of computing and storage, putting strong constraints on code speed and resource usage. To meet these requirements, a compiled high-performance language is typically used, while for physicists, who focus on the application when developing code, research productivity argues for a high-level programming language. A popular approach combines Python, used for the high-level interface, with C++, used for the compute-intensive part of the code. A more convenient and efficient approach would be a single language that provides both high-level programming and high performance. The Julia programming language, developed at MIT especially to allow the use of a single language in research activities, has followed this path. In this paper the applicability of the Julia language to HEP research is explored, covering the aspects that matter for HEP code development: runtime performance, handling of large projects, interfacing with legacy code, distributed computing, training, and ease of programming. The study shows that the HEP community would benefit from a large-scale adoption of this programming language. The HEP-specific foundation libraries that would need to be consolidated are identified.
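For context, the two-language pattern the paper contrasts with Julia looks roughly like the sketch below: a high-level Python driver calling a compiled routine through a foreign-function interface. The C math library here is a stand-in for real HEP C++ kernels, and the snippet assumes a typical Linux or macOS system where ctypes can locate libm.

```python
import ctypes
import ctypes.util

# Load the compiled C math library as a stand-in for a C++ compute kernel.
libm = ctypes.CDLL(ctypes.util.find_library("m"))
libm.erf.restype = ctypes.c_double       # declare the C signature for ctypes
libm.erf.argtypes = [ctypes.c_double]

print(libm.erf(1.0))                     # fast compiled call, scripted from Python
```

The friction of maintaining such bindings and the two build toolchains is precisely what a single high-level, high-performance language aims to remove.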

Adversarially robust streaming algorithms are required to process a stream of elements and produce correct outputs, even when each stream element can be chosen depending on earlier algorithm outputs. As with classic streaming algorithms, which need only be correct for the worst-case fixed stream, adversarially robust algorithms with access to randomness can use significantly less space than deterministic algorithms. We prove that for the Missing Item Finding problem in streaming, the space complexity also significantly depends on how adversarially robust algorithms are permitted to use randomness. (In contrast, the space complexity of classic streaming algorithms does not depend as strongly on the way randomness is used.) For Missing Item Finding on streams of length $r$ with elements in $\{1,\dots,n\}$, and $\le 1/\text{poly}(n)$ error, we show that when $r = O(2^{\sqrt{\log n}})$, "random seed" adversarially robust algorithms, which only use randomness at initialization, require $r^{\Omega(1)}$ bits of space, while "random tape" adversarially robust algorithms, which may make random decisions at any time, may use $O(\text{polylog}(r))$ random bits. When $r = \Theta(\sqrt{n})$, "random tape" adversarially robust algorithms need $r^{\Omega(1)}$ space, while "random oracle" adversarially robust algorithms, which can read from a long random string for free, may use $O(\text{polylog}(r))$ space. The space lower bound for the "random seed" case follows, by a reduction given in prior work, from a lower bound for pseudo-deterministic streaming algorithms given in this paper.
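To fix intuition for the "random seed" model, here is a hypothetical sketch of the simplest such algorithm for Missing Item Finding, analysed only against an oblivious (fixed) stream: all randomness is drawn at initialization, and for a stream of length $r \le n/2$ each candidate is covered with probability at most $1/2$, so $k$ independent candidates all fail with probability at most $2^{-k}$. (Against an adaptive adversary, this is exactly the kind of scheme the paper shows needs much more space.)

```python
import random

class RandomSeedMIF:
    """Toy 'random seed' Missing Item Finder: randomness only at init."""
    def __init__(self, n, k):
        self.candidates = [random.randrange(1, n + 1) for _ in range(k)]
        self.seen = [False]*k                 # O(k log n) bits of state overall

    def process(self, item):
        for i, c in enumerate(self.candidates):
            if c == item:
                self.seen[i] = True

    def query(self):
        for c, s in zip(self.candidates, self.seen):
            if not s:
                return c                      # an item not yet in the stream
        return None                           # fails w.p. <= 2**-k when r <= n/2

mif = RandomSeedMIF(n=1000, k=20)
for item in [3, 7, 7, 500]:
    mif.process(item)
print(mif.query())
```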

Electrical submersible pumps (ESPs) are the second most used artificial-lift equipment in the oil and gas industry due to their high flow rates and boost pressures. They often have to handle multiphase flows, which usually contain a mixture of hydrocarbons, water, and/or sediments. Under these circumstances, emulsions commonly form: liquid-liquid flows of two immiscible fluids whose effective viscosity and density differ from those of either phase alone. In this context, accurate modeling of ESP systems is crucial for optimizing oil production and implementing control strategies. However, real-time, direct measurement of fluid and system characteristics is often impractical due to time and cost constraints. Hence, indirect methods are generally used to estimate the system parameters. In this paper, we formulate a machine learning model based on Physics-Informed Neural Networks (PINNs) to estimate crucial system parameters. To study the efficacy of the proposed PINN model, we conduct computational studies using both simulated and experimental data for different water-oil ratios. We evaluate the state variables' dynamics and unknown parameters for various combinations when only intake and discharge pressure measurements are available. We also perform structural and practical identifiability analyses based on these commonly available pressure measurements. The PINN model could reduce the need for the expensive field laboratory tests currently used to estimate fluid properties.
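A minimal sketch of the PINN parameter-estimation pattern follows, assuming a hypothetical scalar model $du/dt = -\mu u$ in place of the paper's ESP dynamics and PyTorch as the framework: a network $u(t)$ is fit to sparse measurements while the unknown physical parameter $\mu$ is trained jointly through the physics residual.

```python
import torch

# Network approximating the state trajectory u(t).
net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, 1))
log_mu = torch.nn.Parameter(torch.tensor(0.0))     # unknown parameter (log-scaled)
opt = torch.optim.Adam(list(net.parameters()) + [log_mu], lr=1e-3)

t_data = torch.linspace(0, 2, 20).reshape(-1, 1)
u_data = torch.exp(-1.5*t_data)                    # synthetic data, true mu = 1.5
t_col = torch.linspace(0, 2, 100).reshape(-1, 1).requires_grad_(True)

for it in range(5000):
    opt.zero_grad()
    u = net(t_col)
    du = torch.autograd.grad(u, t_col, torch.ones_like(u), create_graph=True)[0]
    loss_phys = ((du + torch.exp(log_mu)*u)**2).mean()   # residual of du/dt=-mu*u
    loss_data = ((net(t_data) - u_data)**2).mean()       # mismatch to measurements
    (loss_phys + loss_data).backward()
    opt.step()

print("estimated mu:", torch.exp(log_mu).item())   # should move toward 1.5
```

In the paper's setting, the data loss would use intake/discharge pressures and the residual the assumed ESP flow model; the joint training is what makes the unknown parameters identifiable (or reveals when they are not).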

Peptides are formed by the dehydration condensation of multiple amino acids. The primary structure of a peptide can be represented either as an amino acid sequence or as a molecular graph of atoms and chemical bonds. Previous studies have indicated that deep learning models specific to the sequential and graphical forms of peptides achieve comparable performance on downstream tasks. Although these models learn representations of the same peptides, we find that they explain their predictions differently. Viewing the sequential and graphical models as two experts drawing inferences from different perspectives, we fuse their expert knowledge to enrich the learned representations and improve discriminative performance. To achieve this, we propose RepCon, a peptide co-modeling method that employs a contrastive learning-based framework to enhance the mutual information between representations from decoupled sequential and graphical end-to-end models. It treats the representations from the sequential encoder and the graphical encoder for the same peptide sample as a positive pair, and learns to enhance the consistency of representations between positive pairs and to repel representations between negative pairs. Empirical studies of RepCon and other co-modeling methods are conducted on open-source discriminative datasets, covering aggregation propensity, retention time, antimicrobial peptide prediction, and family classification from the Peptide Database. Our results demonstrate the superiority of co-modeling over independent modeling, and of RepCon over other methods within the co-modeling framework. In addition, attribution analysis of RepCon further corroborates the validity of the approach at the level of model explanation.
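The contrastive objective described above can be sketched as follows, assuming an InfoNCE-style formulation (the paper's exact loss may differ): representations of the same peptide from the two encoders form the positive pair, and the other peptides in the batch act as negatives.

```python
import torch
import torch.nn.functional as F

def co_modeling_contrastive_loss(z_seq, z_graph, tau=0.1):
    """Pull together the two views of the same peptide; push apart the rest."""
    z_seq = F.normalize(z_seq, dim=1)          # (B, d) from the sequence encoder
    z_graph = F.normalize(z_graph, dim=1)      # (B, d) from the graph encoder
    logits = z_seq @ z_graph.t() / tau         # (B, B) cross-encoder similarities
    targets = torch.arange(z_seq.size(0))      # diagonal entries = positive pairs
    # symmetric: align sequence->graph and graph->sequence
    return 0.5*(F.cross_entropy(logits, targets) +
                F.cross_entropy(logits.t(), targets))

loss = co_modeling_contrastive_loss(torch.randn(8, 64), torch.randn(8, 64))
```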

We present a method to improve the calibration of deep ensembles in the small-training-data regime when unlabeled data are available. Our approach is extremely simple to implement: for each point in a given unlabeled set, we simply fit a different, randomly selected label with each ensemble member. We provide a theoretical analysis, based on a PAC-Bayes bound, which guarantees that fitting such a labeling on the unlabeled data, together with the true labels on the training data, yields low negative log-likelihood and high ensemble diversity on test samples. Empirically, through detailed experiments, we find that for small to moderately sized training sets, our ensembles are more diverse and better calibrated than standard ensembles, sometimes significantly so.
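The recipe is simple enough to sketch directly; the classifier below is an arbitrary placeholder, and the sketch assumes each member sees enough random labels that every class appears in its training set.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_ensemble(X_lab, y_lab, X_unlab, n_classes, n_members=5, seed=0):
    """Each member: true labels on labeled data + its own fresh random
    labels on the unlabeled data (the source of ensemble diversity)."""
    rng = np.random.default_rng(seed)
    members = []
    for _ in range(n_members):
        y_rand = rng.integers(0, n_classes, size=len(X_unlab))
        X = np.vstack([X_lab, X_unlab])
        y = np.concatenate([y_lab, y_rand])
        members.append(LogisticRegression(max_iter=1000).fit(X, y))
    return members

def predict_proba(members, X):
    # Average member probabilities (assumes all members saw every class).
    return np.mean([m.predict_proba(X) for m in members], axis=0)
```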

In this paper, we consider the variable-order time-fractional mobile/immobile diffusion (TF-MID) equation in a two-dimensional spatial domain, where the fractional order $\alpha(t)$ satisfies $0<\alpha_{*}\leq \alpha(t)\leq \alpha^{*}<1$. We combine the quadratic spline collocation (QSC) method with the $L1^+$ formula to propose a QSC-$L1^+$ scheme. We prove that the QSC-$L1^+$ scheme is unconditionally stable and convergent with order $\mathcal{O}(\tau^{\min\{3-\alpha^*-\alpha(0),\,2\}} + \Delta x^{2}+\Delta y^{2})$, where $\tau$, $\Delta x$, and $\Delta y$ are the temporal and spatial step sizes, respectively. Under some proper assumptions on $\alpha(t)$, the QSC-$L1^+$ scheme attains second-order temporal convergence even on a uniform mesh, without any restrictions on the solution of the equation. We further construct a novel alternating direction implicit (ADI) framework to develop an ADI-QSC-$L1^+$ scheme, which enjoys the same unconditional stability and convergence orders. In addition, a fast implementation of the ADI-QSC-$L1^+$ scheme based on the exponential-sum-approximation (ESA) technique is proposed. Moreover, we introduce the optimal QSC method to improve the spatial convergence to fourth order. Numerical experiments support the theoretical analysis and demonstrate the effectiveness of the proposed schemes.
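For readers unfamiliar with the model, a common form of the variable-order TF-MID equation is written below; this is an assumed standard formulation for concreteness, and the paper's precise equation may include additional coefficients or terms.

```latex
\begin{equation}
  \partial_t u(\mathbf{x},t)
  + {}_{0}^{C}\!D_t^{\alpha(t)} u(\mathbf{x},t)
  = \Delta u(\mathbf{x},t) + f(\mathbf{x},t),
  \qquad 0<\alpha_{*}\le \alpha(t)\le \alpha^{*}<1,
\end{equation}
where ${}_{0}^{C}\!D_t^{\alpha(t)}$ denotes the variable-order Caputo derivative
\begin{equation}
  {}_{0}^{C}\!D_t^{\alpha(t)} u(\mathbf{x},t)
  = \frac{1}{\Gamma\bigl(1-\alpha(t)\bigr)}
    \int_{0}^{t} (t-s)^{-\alpha(t)}\,\partial_s u(\mathbf{x},s)\,\mathrm{d}s .
\end{equation}
```

The nonlocal time integral is what makes naive time-stepping cost $\mathcal{O}(N^2)$ in the number of steps, and is the term the ESA-based fast implementation compresses.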

Recently, deep learning-based superpixel segmentation methods have improved both the efficiency and the performance of segmentation. However, a significant challenge remains: generating superpixels that strictly adhere to object boundaries while conveying rich visual significance, especially when cross-surface color correlations interfere with object boundaries. Drawing inspiration from neural structure and visual mechanisms, we propose a biologically inspired network architecture, comprising an Enhanced Screening Module (ESM) and a novel Boundary-Aware Label (BAL), for superpixel segmentation. The ESM enhances semantic information by simulating the interactive projection mechanisms of the visual cortex. The BAL emulates the spatial-frequency characteristics of visual cortical cells to facilitate the generation of superpixels with strong boundary adherence. We demonstrate the effectiveness of our approach through evaluations on the BSDS500 and NYUv2 datasets.
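Boundary adherence, the property the BAL targets, is typically quantified on BSDS500 by boundary recall; the sketch below is a hypothetical evaluation helper for that metric, not the paper's code, and its tolerance window is an illustrative choice.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def boundaries(labels):
    """Mark pixels where the (super)pixel label changes."""
    b = np.zeros(labels.shape, dtype=bool)
    b[:-1, :] |= labels[:-1, :] != labels[1:, :]
    b[:, :-1] |= labels[:, :-1] != labels[:, 1:]
    return b

def boundary_recall(sp_labels, gt_labels, tol=2):
    """Fraction of ground-truth boundary pixels with a superpixel
    boundary within a (2*tol+1)^2 neighbourhood."""
    sp_b, gt_b = boundaries(sp_labels), boundaries(gt_labels)
    hit = maximum_filter(sp_b, size=2*tol + 1)
    return (gt_b & hit).sum() / max(gt_b.sum(), 1)
```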

Classical forgery attacks against Offset Two-Round (OTR) structures require harsh conditions, such as knowledge of certain plaintext-ciphertext pairs, and their success probability is not high. To address these problems, we propose a quantum forgery attack on the OTR structure using Simon's algorithm. The attacker intercepts a ciphertext-tag pair $(C,T)$ between the sender and receiver, and Simon's algorithm is used to find the period of the tag-generation function in OTR; we can then successfully forge a new ciphertext $C'$ ($C'\ne C$) for the intercepted tag $T$. For a variant of the OTR structure (the Prøst-OTR-Even-Mansour structure), we propose a universal forgery attack in which it is easy to generate the correct tag for any given message, provided the attacker is allowed to change a single block in it. The attack first obtains the secret parameter $L$ using Simon's algorithm, and then uses $L$ to recover the keys $k_1$ and $k_2$, so that the attacker can forge tags for modified messages. Only a few plaintext blocks are needed to recover the keys and forge arbitrary messages. Performance analysis shows that the query complexity of our attack is $O(n)$ and its success probability is very close to 1.
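To make the Simon-style structure concrete, the toy below uses a hypothetical function satisfying $f(x) = f(x \oplus s)$ for a secret period $s$, the property the attack exploits in the tag-generation function. Simon's quantum algorithm recovers $s$ with $O(n)$ queries; here we find it by a classical collision search purely for illustration.

```python
n = 6
s = 0b101001                                 # secret period (unknown to attacker)
f = lambda x: min(x, x ^ s)                  # toy function with f(x) == f(x ^ s)

def find_period(f, n):
    """Classical stand-in for Simon's algorithm: the first collision
    f(x) == f(x') reveals the period x ^ x' = s."""
    seen = {}
    for x in range(2**n):
        y = f(x)
        if y in seen:
            return seen[y] ^ x
        seen[y] = x

print(bin(find_period(f, n)))                # recovers s = 0b101001
```

Once the period (here, the secret parameter) is known, forging reduces to ordinary algebra on the intercepted ciphertext-tag pair.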

In large-scale systems, centralised techniques for task allocation face fundamental challenges: the number of interactions is limited by resource constraints on computation, storage, and network communication. Scalability can be increased by implementing the system as a distributed task-allocation system, sharing tasks across many agents; however, distribution increases the resource cost of communication and synchronisation, which itself limits scale. In this paper we present four algorithms to address these problems. In combination, they enable each agent to improve its task-allocation strategy through reinforcement learning, while varying how much it explores the system according to how optimal it believes its current strategy to be, given its past experience. We focus on distributed agent systems where the agents' behaviours are constrained by resource-usage limits, restricting agents to local rather than system-wide knowledge. We evaluate these algorithms in a simulated environment where agents are given a task composed of multiple subtasks that must be allocated to other agents with differing capabilities, which then carry out those tasks. We also simulate real-life system effects such as network instability. Our solution is shown to solve the task-allocation problem to within 6.7% of the theoretical optimum for the system configurations considered. It provides 5x better performance recovery than approaches without knowledge retention when system connectivity is impacted, and was tested on systems of up to 100 agents with less than a 9% impact on the algorithms' performance.
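A heavily simplified, hypothetical sketch of the theme follows: a single agent learns, via tabular reinforcement learning, which neighbour to allocate work to, and scales its exploration by how uncertain its current strategy still looks (here, the variance of recent rewards; the paper's four algorithms are more sophisticated than this).

```python
import random

class AllocatorAgent:
    """Toy allocator: Q-values per neighbour, uncertainty-driven exploration."""
    def __init__(self, neighbours, alpha=0.1):
        self.q = {n: 0.0 for n in neighbours}
        self.alpha = alpha
        self.recent = []                     # recent reward history

    def epsilon(self):
        if len(self.recent) < 5:
            return 1.0                       # explore while evidence is thin
        mean = sum(self.recent)/len(self.recent)
        var = sum((r - mean)**2 for r in self.recent)/len(self.recent)
        return min(1.0, var)                 # noisier outcomes -> explore more

    def choose(self):
        if random.random() < self.epsilon():
            return random.choice(list(self.q))
        return max(self.q, key=self.q.get)

    def update(self, neighbour, reward):
        self.q[neighbour] += self.alpha*(reward - self.q[neighbour])
        self.recent = (self.recent + [reward])[-20:]

agent = AllocatorAgent(["a", "b", "c"])
for _ in range(200):                         # toy environment: "b" is best
    n = agent.choose()
    agent.update(n, random.gauss({"a": 0.2, "b": 0.8, "c": 0.5}[n], 0.1))
print(max(agent.q, key=agent.q.get))         # typically prints "b"
```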
