
Understanding crack propagation in structures subjected to fluid loads is crucial in various engineering applications, ranging from underwater pipelines to aircraft components. This study investigates the dynamic response of structures, including their damage and fracture behaviour under hydrodynamic load, emphasizing fluid-structure interaction (FSI) phenomena through Smoothed Particle Hydrodynamics (SPH). The developed framework employs weakly compressible SPH (WCSPH) to model the fluid flow and a pseudo-spring-based SPH solver to model the structural response. For improved accuracy in FSI modelling, the $\delta$-SPH technique is implemented to enhance pressure calculations within the fluid phase. The pseudo-spring analogy is employed for modelling material damage, where particle interactions are confined to their immediate neighbours. These particles are linked by springs, which do not contribute to system stiffness but determine the interaction strength between connected pairs. A crack is assumed to propagate through a spring connecting a particle pair when the damage indicator of that spring exceeds a predefined threshold. The developed framework is extensively validated through a dam break case, oscillation of a deformable solid beam, dam break through a deformable elastic solid, and breaking dam impact on a deformable solid obstacle. Numerical outcomes are compared with findings from the existing literature. The ability of the framework to accurately depict material damage and fracture is showcased through a simulation of water impact on a deformable solid obstacle with an initial notch.
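
The following is a minimal Python sketch of the pseudo-spring bookkeeping described above, purely for illustration: the stretch-based damage indicator and the `damage_threshold` parameter are assumptions, not the exact formulation used in the study.

```python
import numpy as np

def update_springs(positions, pairs, rest_lengths, damage, damage_threshold=0.1):
    """Accumulate damage on each pseudo-spring and mark it broken once its
    damage indicator exceeds the threshold (crack propagation).

    Illustrative only: the stretch-based indicator below is an assumption.
    """
    intact = np.ones(len(pairs), dtype=bool)
    for k, (i, j) in enumerate(pairs):
        stretch = np.linalg.norm(positions[i] - positions[j]) / rest_lengths[k]
        # Damage grows once the spring is stretched past its rest length;
        # a broken spring no longer mediates interaction between the pair.
        damage[k] = max(damage[k], stretch - 1.0)
        if damage[k] >= damage_threshold:
            intact[k] = False
    return intact, damage

# Toy usage: three particles, two springs; the second spring is overstretched.
pos = np.array([[0.0, 0.0], [1.0, 0.0], [2.3, 0.0]])
pairs = [(0, 1), (1, 2)]
rest = np.array([1.0, 1.0])
dmg = np.zeros(2)
print(update_springs(pos, pairs, rest, dmg))
```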

Embedded devices are specialised devices designed for one or only a few purposes. They are often part of a larger system, connected through wired or wireless connections. Embedded devices that are connected to other computers or embedded systems through the Internet are called Internet of Things (IoT) devices. With their widespread usage and insufficient protection, these devices are increasingly becoming the target of malware attacks. Companies often cut corners to save manufacturing costs or misconfigure devices during production. This can mean a lack of software updates, ports left open, or security defects by design. Although these devices may not be as powerful as a regular computer, their large number makes them suitable candidates for botnets. Some IoT devices can even cause health problems, since there are pacemakers connected to the Internet; this means that, without sufficient defence, even directed attacks against people are possible. The goal of this thesis project is to provide better security for these devices with the help of machine learning algorithms and reverse engineering tools. Specifically, I study the applicability of control-flow-related data of executables for malware detection. I present a malware detection method with two phases: the first phase extracts control-flow-related data using static binary analysis, and the second phase classifies binary executables as either malicious or benign using a neural network model. I train the model using a dataset of malicious and benign ARM applications.
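
As a hedged illustration of the two-phase pipeline, the sketch below fabricates control-flow-style features (in the thesis, the first phase would extract them via static binary analysis of ARM executables) and trains a small feed-forward network on them. The feature set, distributions, and network shape are placeholders, not the thesis's actual choices.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Phase 1 (stand-in): fabricate control-flow style features per binary,
# e.g. basic-block count, edge count, average out-degree, loop count.
# The two classes are given different (made-up) feature distributions.
n = 200
benign = rng.normal(loc=[120, 160, 1.3, 8], scale=[30, 40, 0.2, 3], size=(n, 4))
malicious = rng.normal(loc=[60, 95, 1.6, 15], scale=[20, 30, 0.3, 5], size=(n, 4))
X = np.vstack([benign, malicious])
y = np.array([0] * n + [1] * n)   # 0 = benign, 1 = malicious

# Phase 2: a small feed-forward network classifying the feature vectors.
clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```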

We propose new linear combinations of compositions of a basic second-order scheme with appropriately chosen coefficients to construct higher order numerical integrators for differential equations. They can be considered as a generalization of extrapolation methods and multi-product expansions. A general analysis is provided and new methods up to order 8 are built and tested. The new approach is shown to reduce the latency problem when implemented in a parallel environment and leads to schemes that are significantly more efficient than standard extrapolation when the linear combination is delayed by a number of steps.
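
A minimal sketch of the underlying idea, using the classical two-term order-4 combination $(4\,S_{h/2}^2 - S_h)/3$ of a symmetric second-order scheme (here velocity Verlet on a harmonic oscillator); the paper's new methods extend this construction with other coefficient choices up to order 8.

```python
import numpy as np

def verlet_step(q, p, h, force=lambda q: -q):
    """One step of velocity Verlet (symmetric, second order)."""
    p = p + 0.5 * h * force(q)
    q = q + h * p
    p = p + 0.5 * h * force(q)
    return q, p

def compose(q, p, h, k):
    """k compositions of the basic scheme with substep h/k."""
    for _ in range(k):
        q, p = verlet_step(q, p, h / k)
    return q, p

def mpe4_step(q, p, h):
    """Linear combination (4/3) S_{h/2}^2 - (1/3) S_h: order 4, because the
    symmetric base scheme has an error expansion in even powers of h."""
    q2, p2 = compose(q, p, h, 2)
    q1, p1 = compose(q, p, h, 1)
    return (4 * q2 - q1) / 3, (4 * p2 - p1) / 3

# Harmonic oscillator q'' = -q with q(0) = 1, p(0) = 0, so q(t) = cos t.
q, p, h = 1.0, 0.0, 0.1
for _ in range(100):
    q, p = mpe4_step(q, p, h)
print("numerical q(10):", q, " exact:", np.cos(10.0))
```

Note that the two compositions $S_{h/2}^2$ and $S_h$ are independent of each other, which is exactly what allows them to be evaluated concurrently in a parallel environment.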

Pairwise sequence comparison is one of the most fundamental problems in string processing. The most common metric to quantify the similarity between sequences S and T is edit distance, d(S,T), which corresponds to the number of characters that need to be substituted, deleted from, or inserted into S to generate T. However, fewer edit operations may be sufficient for some string pairs to transform one string into the other if larger rearrangements are permitted. Block edit distance refers to such changes at the substring level (i.e., blocks), which "penalizes" entire block removals, insertions, copies, and reversals with the same cost as single-character edits (Lopresti & Tomkins, 1997). Most studies of block edit distance to date have aimed only to characterize the distance itself for applications in sequence nearest neighbor search, without reporting the full alignment details. Although a few tools, such as GR-Aligner, try to solve block edit distance for genomic sequences, they have limited functionality and are no longer maintained. Here, we present SABER, an algorithm to solve block edit distance that supports block deletions, block moves, and block reversals in addition to the classical single-character edit operations. Our algorithm runs in $O(m^2 \cdot n \cdot l_{\mathrm{range}})$ time for $|S| = m$, $|T| = n$, and a permitted block size range $l_{\mathrm{range}}$, and can report all breakpoints for the block operations. We also provide an implementation of SABER currently optimized for genomic sequences (i.e., generated by the DNA alphabet), although the algorithm can theoretically be used for any alphabet. SABER is available at //github.com/BilkentCompGen/saber
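
For context, the base recurrence that block edit distance extends is the classical single-character dynamic program below; SABER adds block deletion, move, and reversal transitions over the permitted block-size range on top of it (the sketch shows only the classical part).

```python
def edit_distance(S, T):
    """Classical Levenshtein edit distance between strings S and T."""
    m, n = len(S), len(T)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i                    # delete all of S[:i]
    for j in range(n + 1):
        d[0][j] = j                    # insert all of T[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = 0 if S[i - 1] == T[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + sub)  # match / substitution
    return d[m][n]

print(edit_distance("ACGTACGT", "ACGTTGCA"))
```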

The aim of this work is to extend the usual optimal experimental design paradigm to experiments where the settings of one or more factors are functions. Such factors are known as profile factors, or as dynamic factors. For these new experiments, a design consists of combinations of functions for each run of the experiment. After briefly introducing the class of profile factors, basis functions are described, with primary focus on the B-spline basis system due to its computational efficiency and useful properties. Basis function expansions are applied to a functional linear model consisting of profile factors, reducing the problem to an optimisation of basis coefficients. The methodology developed covers special cases, including combinations of profile and non-functional factors, interactions, and polynomial effects. The method is finally applied to an experimental design problem in a biopharmaceutical study performed using the Ambr250 modular bioreactor.
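
A small sketch of the basis expansion step using SciPy's `BSpline`: a profile factor $x(t)$ is represented as a clamped cubic B-spline, so optimising the functional design reduces to choosing the coefficient vector. The knots, degree, and coefficient values here are illustrative placeholders, not those of the study.

```python
import numpy as np
from scipy.interpolate import BSpline

# Clamped cubic B-spline basis on [0, 1] with three interior knots.
degree = 3
knots = np.concatenate([[0] * (degree + 1),
                        [0.25, 0.5, 0.75],
                        [1] * (degree + 1)])
n_basis = len(knots) - degree - 1      # 7 basis functions

# The design variables: one coefficient per basis function (toy values).
coef = np.array([0.0, 1.2, -0.5, 0.8, 0.3, -1.0, 0.5])
x = BSpline(knots, coef, degree)       # the profile factor x(t)

t = np.linspace(0, 1, 5)
print("profile values at", t, ":", x(t))
```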

Diffusion models have become a main paradigm for synthetic data generation in many subfields of modern machine learning, including computer vision, language modeling, and speech synthesis. In this paper, we leverage the power of diffusion models to generate synthetic tabular data. The heterogeneous features in tabular data have been a main obstacle in tabular data synthesis, and we tackle this problem by employing an auto-encoder architecture. When compared with state-of-the-art tabular synthesizers, the synthetic tables produced by our model show high statistical fidelity to the real data and perform well in downstream machine learning tasks. We conducted experiments over $15$ publicly available datasets. Notably, our model adeptly captures the correlations among features, which has been a long-standing challenge in tabular data synthesis. Our code is available at //github.com/UCLA-Trustworthy-AI-Lab/AutoDiffusion.
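
A hedged sketch of the pipeline: heterogeneous rows are first mapped to a continuous representation (the toy "encoder" below one-hot encodes categoricals, whereas the actual model learns an auto-encoder), and a standard Gaussian forward diffusion is then applied in that space. The noise schedule and encoder are illustrative assumptions, not the paper's exact design; only the forward noising step is shown.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(row, cat_levels):
    """Toy encoder: one-hot each categorical, pass numerics through."""
    num, cat = row
    onehot = np.zeros(cat_levels)
    onehot[cat] = 1.0
    return np.concatenate([[num], onehot])

z0 = encode((0.7, 2), cat_levels=4)            # one encoded row

# DDPM-style forward process: z_t = sqrt(a_bar_t) z_0 + sqrt(1 - a_bar_t) eps.
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alpha_bar = np.cumprod(1.0 - betas)

t = 500
eps = rng.standard_normal(z0.shape)
zt = np.sqrt(alpha_bar[t]) * z0 + np.sqrt(1.0 - alpha_bar[t]) * eps
print("noised latent at t=500:", zt)
```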

Temporal data, obtained in the setting where it is only possible to observe one time point per trajectory, is widely used in different research fields, yet remains insufficiently addressed from the statistical point of view. Such data often contain observations of a large number of entities, in which case it is of interest to identify a small number of representative behavior types. In this paper, we propose a new method that performs clustering simultaneously with alignment of the temporal objects inferred from these data, providing insight into the relationships between the entities. A series of simulations confirms the ability of the proposed approach to leverage multiple properties of the complex data we target, such as accessible uncertainties, correlations, and a small number of time points. We illustrate the method on real data encoding cellular response to a high-energy radiation treatment, supported by the results of an enrichment analysis.
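
A generic sketch of the cluster-while-aligning idea on synthetic curves: each entity's temporal profile is assigned to the cluster template that fits best over a set of candidate time shifts, and templates are then updated from the aligned members. This is a plain k-means-with-shift illustration, not the paper's model, which additionally handles uncertainties, correlations, and sparsely observed time points.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 2 * np.pi, 50)

def shift(curve, s):
    return np.roll(curve, s)

# Synthetic entities: two behaviour types, each observed with a random shift.
curves = [shift(np.sin(t), rng.integers(-5, 6)) + 0.1 * rng.standard_normal(50)
          for _ in range(20)]
curves += [shift(np.cos(2 * t), rng.integers(-5, 6)) + 0.1 * rng.standard_normal(50)
           for _ in range(20)]
X = np.array(curves)

K, shifts = 2, range(-5, 6)
templates = X[rng.choice(len(X), K, replace=False)]
for _ in range(10):                        # alternate assign/align and update
    labels, best_shifts = [], []
    for x in X:
        # Best (cluster, shift) pair for this entity by squared error.
        errs = [(np.sum((shift(x, -s) - templates[k]) ** 2), k, s)
                for k in range(K) for s in shifts]
        _, k, s = min(errs)
        labels.append(k)
        best_shifts.append(s)
    labels = np.array(labels)
    for k in range(K):                     # update templates from aligned members
        members = [shift(x, -s)
                   for x, s, l in zip(X, best_shifts, labels) if l == k]
        if members:
            templates[k] = np.mean(members, axis=0)

print("cluster sizes:", np.bincount(labels, minlength=K))
```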

How do score-based generative models (SBMs) learn the data distribution supported on a low-dimensional manifold? We investigate the score model of a trained SBM through its linear approximations and the subspaces spanned by local feature vectors. During diffusion, as the noise decreases, the local dimensionality increases and becomes more varied between different sample sequences. Importantly, we find that the learned vector field mixes samples via a non-conservative field within the manifold, although it denoises with normal projections, as if there were an energy function in off-manifold directions. At each noise level, the subspace spanned by the local features overlaps with that of an effective density function. These observations suggest that SBMs can flexibly mix samples with the learned score field while carefully maintaining a manifold-like structure of the data distribution.
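
A minimal sketch of the kind of local linear analysis described: linearize a score field around a sample via a numerical Jacobian and read off local feature directions and dimensionality from its SVD. The analytic Gaussian score below is a stand-in for a trained score network, and the thresholding rule is an assumption.

```python
import numpy as np

def score(x, sigma=1.0):
    """Score of an isotropic Gaussian N(0, sigma^2 I): a stand-in for a
    trained score network."""
    return -x / sigma**2

def jacobian(f, x, eps=1e-5):
    """Central-difference Jacobian of f at x (the local linearization)."""
    d = len(x)
    J = np.zeros((d, d))
    for i in range(d):
        e = np.zeros(d)
        e[i] = eps
        J[:, i] = (f(x + e) - f(x - e)) / (2 * eps)
    return J

x = np.array([0.5, -0.3, 1.0])
U, s, Vt = np.linalg.svd(jacobian(score, x))
# Counting singular values above a threshold gives a local dimensionality
# estimate; the columns of U are the local feature directions.
print("singular values:", s)
print("local dimensionality (threshold 0.5):", int(np.sum(s > 0.5)))
```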

Data pooling offers various advantages, such as increasing the sample size, improving generalization, reducing sampling bias, and addressing data sparsity and quality, but it is not straightforward and may even be counterproductive. Assessing the effectiveness of pooling datasets in a principled manner is challenging due to the difficulty in estimating the overall information content of individual datasets. Towards this end, we propose incorporating a data source prediction module into standard object detection pipelines. The module runs with minimal overhead during inference time, providing additional information about the data source assigned to individual detections. We show the benefits of the so-called dataset affinity score by automatically selecting samples from a heterogeneous pool of vehicle datasets. The results show that object detectors can be trained on a significantly sparser set of training samples without losing detection accuracy.
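
A hedged sketch of such a data-source prediction module: a small head attached to per-detection features that predicts which source dataset a detection "looks like", with the softmax probability serving as the affinity score. The feature dimension, head architecture, number of sources, and selection threshold are placeholders, not the paper's configuration.

```python
import torch
import torch.nn as nn

class AffinityHead(nn.Module):
    """Predicts a source-dataset distribution for each detection feature."""

    def __init__(self, feat_dim=256, n_sources=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(feat_dim, 64), nn.ReLU(),
            nn.Linear(64, n_sources),
        )

    def forward(self, det_features):
        # One affinity score per source dataset for each detection.
        return self.fc(det_features).softmax(dim=-1)

head = AffinityHead()
feats = torch.randn(10, 256)          # toy features of 10 detections
affinity = head(feats)
# e.g. keep training samples whose affinity to the target source is high
keep = affinity[:, 0] > 0.25
print("affinity shape:", tuple(affinity.shape), " kept:", int(keep.sum()))
```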

I present an R package called edibble that facilitates the design of experiments by encapsulating elements of the experiment in a series of composable functions. This package is an interpretation of "the grammar of experimental designs" by Tanaka (2023) in the R programming language. The main features of the edibble package are demonstrated, illustrating how it can be used to create a wide array of experimental designs. The implemented system aims to encourage holistic thinking for the planning and data management of experiments in a streamlined workflow. This workflow can increase the inherent value of experimental data by reducing potential errors and noise through careful preplanning, as well as by ensuring fit-for-purpose analysis of the experimental data.

In many applications, a stochastic system is studied using a model implicitly defined via a simulator. We develop a simulation-based parameter inference method for such implicitly defined models. Our method differs from traditional likelihood-based inference in that it uses a metamodel for the distribution of a log-likelihood estimator. The metamodel is built on a local asymptotic normality (LAN) property satisfied by the simulation-based log-likelihood estimator under certain conditions. A method for hypothesis testing is developed under the metamodel. Our method enables accurate parameter estimation and uncertainty quantification where other Monte Carlo methods for parameter inference become highly inefficient due to large Monte Carlo variance. We demonstrate our method using numerical examples, including a mechanistic model for the population dynamics of an infectious disease.
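
A minimal sketch of the metamodel idea: since the simulation-based log-likelihood estimator is noisy, fit a quadratic surrogate (motivated by the locally quadratic shape that LAN suggests) to estimates over a parameter grid, and read off the point estimate and a curvature-based standard error. The "simulator" below is a stand-in that adds Monte Carlo noise to a known surface, purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
theta_true = 1.5

def loglik_estimate(theta, mc_sd=2.0):
    """Stand-in for a simulation-based log-likelihood estimate: the true
    log-likelihood surface plus Monte Carlo noise."""
    return -5.0 * (theta - theta_true) ** 2 + mc_sd * rng.standard_normal()

grid = np.linspace(0.5, 2.5, 21)
ells = np.array([loglik_estimate(th) for th in grid])

# Quadratic metamodel ell(theta) ~ a*theta^2 + b*theta + c (least squares).
a, b, c = np.polyfit(grid, ells, 2)
theta_hat = -b / (2 * a)               # maximizer of the fitted metamodel
se = np.sqrt(-1.0 / (2 * a))           # standard error from the curvature
print(f"estimate {theta_hat:.3f} +/- {1.96 * se:.3f}")
```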
