
Triply periodic minimal surfaces (TPMS) are emerging as an important way of designing microstructures. However, there has been limited use of commercial CAD/CAM/CAE software packages for TPMS design and manufacturing. This is mainly because TPMS is consistently described in the functional representation (F-rep) format, while modern CAD/CAM/CAE tools are built upon the boundary representation (B-rep) format. One possible solution to this gap is translating TPMS to STEP, the standard data exchange format of CAD/CAM/CAE. Following this direction, this paper proposes a new translation method with error-controlling and $C^2$ continuity-preserving features. It is based on an approximation-error-driven TPMS sampling algorithm and a constrained-PIA algorithm. The sampling algorithm controls the deviation between the original and translated models: an error bound of $2\epsilon$ on the deviation can be ensured if two conditions, called $\epsilon$-density and $\epsilon$-approximation, are satisfied. The constrained-PIA algorithm enforces $C^2$ continuity constraints during TPMS approximation while attaining high efficiency. A theoretical convergence proof of this algorithm is also given. The effectiveness of the translation method is demonstrated by a series of examples and comparisons.
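
As context, a minimal Python sketch of what an F-rep TPMS looks like and how it can be sampled: the gyroid below is a standard trigonometric TPMS approximation, and the naive near-zero-level-set sampling stands in for the paper's error-driven algorithm (the `band` tolerance and grid resolution are illustrative assumptions, not the paper's $\epsilon$-density/$\epsilon$-approximation machinery).

```python
import numpy as np

def gyroid(x, y, z, L=1.0):
    """Classic gyroid F-rep approximation; its zero level set is the surface."""
    k = 2.0 * np.pi / L
    return (np.sin(k * x) * np.cos(k * y)
            + np.sin(k * y) * np.cos(k * z)
            + np.sin(k * z) * np.cos(k * x))

def sample_surface(n=64, band=0.02):
    """Keep uniform grid points whose F-rep value is near zero.

    A naive stand-in for the paper's error-driven sampling: refining `n`
    until neighboring samples deviate by less than a tolerance would mimic
    an epsilon-density style criterion.
    """
    t = np.linspace(0.0, 1.0, n)
    X, Y, Z = np.meshgrid(t, t, t, indexing="ij")
    F = gyroid(X, Y, Z)
    mask = np.abs(F) < band
    return np.stack([X[mask], Y[mask], Z[mask]], axis=1)

points = sample_surface()
print(points.shape)  # (num_samples, 3) points lying near the gyroid surface
```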

Related content

The ACM/IEEE 23rd International Conference on Model Driven Engineering Languages and Systems (MODELS) is the premier conference series for model-driven software and systems engineering, organized with the support of ACM SIGSOFT and IEEE TCSE. Since 1998, MODELS has covered all aspects of modeling, from languages and methods to tools and applications. Its attendees come from diverse backgrounds, including researchers, academics, engineers, and industry professionals. MODELS 2019 is a forum in which participants can exchange cutting-edge research results and innovative practical experience around modeling and model-driven software and systems. This year's edition will give the modeling community opportunities to further advance the foundations of modeling and to propose innovative applications of modeling in emerging areas such as cyber-physical systems, embedded systems, socio-technical systems, cloud computing, big data, machine learning, security, open source, and sustainability.
June 25, 2024

Test-time augmentation (TTA) is a well-known technique employed during the testing phase of computer vision tasks. It involves aggregating predictions over multiple augmented versions of the input data. Combining predictions with a simple average is a common and straightforward approach after performing TTA. This paper introduces a novel framework for optimizing TTA, called BayTTA (Bayesian-based TTA), which is based on Bayesian model averaging (BMA). First, we generate a model list associated with different variations of the input data created through TTA. Then, we use BMA to combine model predictions weighted by their respective posterior probabilities. Such an approach allows one to take model uncertainty into account and thus to enhance the predictive performance of the related machine learning or deep learning model. We evaluate the performance of BayTTA on several public datasets, including three medical image datasets comprising skin cancer, breast cancer, and chest X-ray images, and two well-known gene-editing datasets, CRISPOR and GUIDE-seq. Our experimental results indicate that BayTTA can be effectively integrated into state-of-the-art deep learning models used in medical image analysis, as well as into popular pre-trained CNN models such as VGG-16, MobileNetV2, DenseNet201, ResNet152V2, and InceptionResNetV2, improving their accuracy and robustness.
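
A minimal sketch of the BMA-weighted combination step, assuming posterior weights are approximated from validation log-likelihoods (the paper's exact posterior estimation is not specified here):

```python
import numpy as np

def bma_tta_predict(models_probs, log_marginal_likelihoods):
    """Combine per-augmentation predictions with Bayesian model averaging.

    models_probs: array of shape (M, N, C) -- class probabilities from M
        models (one per TTA variant) on N inputs with C classes.
    log_marginal_likelihoods: length-M array, e.g. validation log-likelihoods
        used here as a stand-in for log model evidence.
    """
    lml = np.asarray(log_marginal_likelihoods, dtype=float)
    # Posterior weights via a numerically stable softmax over log evidence.
    w = np.exp(lml - lml.max())
    w /= w.sum()
    # BMA: posterior-weighted average instead of a plain mean over TTA variants.
    return np.tensordot(w, models_probs, axes=(0, 0))  # shape (N, C)
```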

Practical parameter identifiability in ODE-based epidemiological models is a known issue, yet one that merits further study. It is essentially ubiquitous due to noise and errors in real data. In this study, to avoid uncertainty stemming from data of unknown quality, simulated data with added noise are used to investigate practical identifiability in two distinct epidemiological models. Particular emphasis is placed on the role of initial conditions, which are assumed unknown, except those that are directly measured. Instead of focusing on a single estimation method, we use and compare results from several widely used methods, including maximum likelihood and Markov chain Monte Carlo (MCMC) estimation. Among other findings, our analysis revealed that the MCMC estimator is overall more robust than the point estimators considered. Its estimates and predictions are improved when the initial conditions of certain compartments are fixed so that the model becomes globally identifiable. For the point estimators, whether fixing or fitting the initial conditions that are not directly measured improves parameter estimates is model-dependent. Specifically, in the standard SEIR model, fixing the initial condition of the susceptible population S(0) improved parameter estimates, whereas this was not true when fixing the initial condition of the asymptomatic population in a more involved model. Our study also corroborates that the quality of parameter estimates changes depending on whether the pre-peak or post-peak portion of the time series is used. Finally, our examples suggest that in the presence of significantly noisy data, the value of structural identifiability is moot.
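
To illustrate the kind of experiment described, here is a hedged Python sketch: simulate a standard SEIR model, observe only noisy infectious counts, fix S(0), and fit the remaining parameters and E(0) by least squares. All values and the estimator choice are illustrative, not the paper's exact setup.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def seir_rhs(t, y, beta, sigma, gamma):
    """Standard SEIR dynamics with compartments as population fractions."""
    S, E, I, R = y
    return [-beta * S * I, beta * S * I - sigma * E, sigma * E - gamma * I, gamma * I]

def simulate(params, y0, t_eval):
    sol = solve_ivp(seir_rhs, (t_eval[0], t_eval[-1]), y0,
                    t_eval=t_eval, args=tuple(params), rtol=1e-8)
    return sol.y[2]  # only I(t) is "directly measured"

rng = np.random.default_rng(0)
t = np.linspace(0, 120, 60)
true_params, true_y0 = [0.3, 0.2, 0.1], [0.99, 0.005, 0.005, 0.0]
data = simulate(true_params, true_y0, t) * (1 + 0.05 * rng.standard_normal(t.size))

def residuals(theta):
    # theta = (beta, sigma, gamma, E0); S(0) is fixed, I(0) taken from data.
    beta, sigma, gamma, E0 = theta
    y0 = [0.99, E0, data[0], 0.0]
    return simulate([beta, sigma, gamma], y0, t) - data

fit = least_squares(residuals, x0=[0.5, 0.5, 0.5, 0.01],
                    bounds=([0, 0, 0, 0], [2, 2, 2, 0.1]))
print(fit.x)  # compare against the true values (0.3, 0.2, 0.1, 0.005)
```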

A gradient-enhanced ADMM algorithm for optimal transport on general surfaces is proposed in this paper. Based on Benamou and Brenier's dynamical formulation, we combine gradient recovery techniques on surfaces with the ADMM algorithm, which not only improves the computational accuracy but also provides a novel way to handle the dual variables in the algorithm. The method avoids the use of staggered grids and has better accuracy and robustness compared with other averaging techniques.
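
For reference, the dynamical formulation the method starts from can be stated as follows (written here on a surface $M$ with surface divergence $\nabla_M\cdot$; normalization conventions vary by author):

```latex
\begin{equation*}
W_2^2(\rho_0,\rho_1) \;=\; \min_{(\rho,\mathbf{v})}
\int_0^1\!\!\int_M \rho(t,x)\,\lvert\mathbf{v}(t,x)\rvert^2 \,\mathrm{d}x\,\mathrm{d}t
\quad\text{s.t.}\quad
\partial_t\rho + \nabla_M\!\cdot(\rho\,\mathbf{v}) = 0,\;\;
\rho(0,\cdot)=\rho_0,\;\;\rho(1,\cdot)=\rho_1.
\end{equation*}
```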

We consider the problem of function approximation by two-layer neural nets with random weights that are "nearly Gaussian" in the sense of Kullback-Leibler divergence. Our setting is the mean-field limit, where the finite population of neurons in the hidden layer is replaced by a continuous ensemble. We show that the problem can be phrased as global minimization of a free energy functional on the space of (finite-length) paths over probability measures on the weights. This functional trades off the $L^2$ approximation risk of the terminal measure against the KL divergence of the path with respect to an isotropic Brownian motion prior. We characterize the unique global minimizer and examine the dynamics in the space of probability measures over weights that can achieve it. In particular, we show that the optimal path-space measure corresponds to the F\"ollmer drift, the solution to a McKean-Vlasov optimal control problem closely related to the classic Schr\"odinger bridge problem. While the F\"ollmer drift cannot in general be obtained in closed form, thus limiting its potential algorithmic utility, we illustrate the viability of the mean-field Langevin diffusion as a finite-time approximation under various conditions on entropic regularization. Specifically, we show that it closely tracks the F\"ollmer drift when the regularization is such that the minimizing density is log-concave.
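
Schematically, the functional described above can be written as follows; the regularization strength $\beta$, the neuron activation $\varphi$, and the exact normalization are assumptions made for illustration, not the paper's precise statement.

```latex
% Q is a path-space measure over weight trajectories, \mu_1^Q its terminal
% marginal, and W the isotropic Brownian motion prior on paths.
\begin{align*}
\mathcal{F}(Q) &= R\bigl(\mu_1^Q\bigr) \;+\; \tfrac{1}{\beta}\,\mathrm{KL}(Q \,\|\, W),\\
R(\mu) &= \mathbb{E}_{(x,y)}\Bigl[\bigl(y - \textstyle\int \varphi(x;w)\,\mathrm{d}\mu(w)\bigr)^2\Bigr].
\end{align*}
```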

The Loewner framework is an interpolatory approach designed for approximating linear and nonlinear systems. The goal here is to extend this framework to linear parametric systems with an arbitrary number n of parameters. One main innovation established here is the construction of data-based realizations for any number of parameters. Equally importantly, we show how to alleviate the computational burden, by avoiding the explicit construction of large-scale n-dimensional Loewner matrices of size $N \times N$. This reduces the complexity from $O(N^3)$ to about $O(N^{1.4})$, thus taming the curse of dimensionality and making the solution scalable to very large data sets. To achieve this, a new generalized multivariate rational function realization is defined. Then, we introduce the n-dimensional multivariate Loewner matrices and show that they can be computed by solving a coupled set of Sylvester equations. The null space of these Loewner matrices then allows the construction of the multivariate barycentric transfer function. The principal result of this work is to show how the null space of the n-dimensional Loewner matrix can be computed using a sequence of 1-dimensional Loewner matrices, leading to a drastic computational burden reduction. Finally, we suggest two algorithms (one direct and one iterative) to construct, directly from data, multivariate (or parametric) realizations ensuring (approximate) interpolation. Numerical examples highlight the effectiveness and scalability of the method.
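
To make the 1-dimensional building block concrete, here is a hedged Python sketch of the classical Loewner matrix; the function names and toy data are illustrative, not the paper's implementation.

```python
import numpy as np

def loewner_matrix(mu, v, lam, w):
    """1-D Loewner matrix L[i, j] = (v_i - w_j) / (mu_i - lam_j), built from
    left data (mu_i, v_i) and right data (lam_j, w_j); the two sample sets
    must be disjoint so that no denominator vanishes."""
    mu, v = np.asarray(mu), np.asarray(v)
    lam, w = np.asarray(lam), np.asarray(w)
    return (v[:, None] - w[None, :]) / (mu[:, None] - lam[None, :])

# Toy example: samples of the order-1 transfer function H(s) = 1 / (s + 1).
H = lambda s: 1.0 / (s + 1.0)
lam = np.array([0.1, 0.3, 0.5])
mu = np.array([0.2, 0.4, 0.6])
L = loewner_matrix(mu, H(mu), lam, H(lam))
# For data from a low-order rational function, L is rank-deficient and its
# null space yields barycentric weights of an interpolating rational model.
print(np.linalg.matrix_rank(L))  # 1, matching the order of H
```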

Recently, addressing spatial confounding has become a major topic in spatial statistics. However, the literature has provided conflicting definitions, and many proposed definitions do not address the issue of confounding as it is understood in causal inference. We define spatial confounding as the existence of an unmeasured causal confounder with a spatial structure. We present a causal inference framework for nonparametric identification of the causal effect of a continuous exposure on an outcome in the presence of spatial confounding. We propose using double machine learning (DML), a procedure in which flexible models are used to regress both the exposure and outcome variables on confounders to arrive at a causal estimator with favorable robustness properties and convergence rates, and we prove that this approach is consistent and asymptotically normal under spatial dependence. As far as we are aware, this is the first approach to spatial confounding that does not rely on restrictive parametric assumptions (such as linearity, effect homogeneity, or Gaussianity) for both identification and estimation. We demonstrate the advantages of the DML approach analytically and in simulations. We apply our methods and reasoning to a study of the effect of fine particulate matter exposure during pregnancy on birthweight in California.
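
As a generic illustration of the DML recipe (cross-fitted nuisance regressions followed by a residual-on-residual step, here for a partially linear model), consider the following sketch; it does not implement the paper's spatial-dependence-aware theory.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_predict

def dml_effect(X, a, y, n_splits=5, seed=0):
    """Double machine learning for a partially linear model y = theta*a + g(X) + e."""
    m = RandomForestRegressor(n_estimators=200, random_state=seed)
    g = RandomForestRegressor(n_estimators=200, random_state=seed)
    # Cross-fitting: out-of-fold predictions avoid overfitting bias.
    a_res = a - cross_val_predict(m, X, a, cv=n_splits)
    y_res = y - cross_val_predict(g, X, y, cv=n_splits)
    return float(a_res @ y_res / (a_res @ a_res))  # orthogonalized effect estimate

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 5))
a = X[:, 0] + 0.5 * rng.normal(size=2000)              # exposure depends on confounders
y = 2.0 * a + np.sin(X[:, 0]) + rng.normal(size=2000)  # true effect theta = 2
print(dml_effect(X, a, y))  # approximately 2
```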

This paper presents a platform that facilitates the deployment of applications on Internet of Things (IoT) devices. The platform lets programmers use a Function-as-a-Service programming paradigm in which functions are managed and configured through a Platform-as-a-Service web tool. The tool also allows interoperability to be established between the functions of the applications. The proposed platform achieved faster and easier application deployments, and the resource usage of the IoT devices was also lower compared with a Docker-container-based deployment process.

Transparency is an important aspect of human-robot interaction (HRI), as it can improve system trust and usability, leading to improved communication and performance. However, most transparency models focus only on the amount of information given to users. In this paper, we propose a bidirectional transparency model, termed the transparency-based action (TBA) model, which, in addition to providing transparency information (robot-to-human), allows the robot to take actions based on transparency information received from the human (robot-of-human and human-to-robot). We implemented a three-level (High, Medium, and Low) TBA model on a robotic system trainer in two pilot studies (with students as participants) to examine its impact on acceptance and HRI. Based on the pilot study results, the Medium TBA level was not included in the main experiment, which was conducted with older adults (aged 75-85). In that experiment, two TBA levels were compared: Low (basic information including only robot-to-human transparency) and High (additional information relating to predicted outcomes, with robot-of-human and human-to-robot transparency). The results revealed a significant difference between the two TBA levels in terms of perceived usefulness, ease of use, and attitude. The High TBA level resulted in improved user acceptance and was preferred by the users.

In this paper, we develop uniform- and variable-time-step weighted and shifted BDF2 (WSBDF2) methods for the anisotropic Cahn-Hilliard (CH) model, combining the scalar auxiliary variable (SAV) approach with two types of stabilization techniques. Using the concept of $G$-stability, the uniform-time-step WSBDF2 method is proved to be energy-stable. Since the relevant $G$-stability properties do not apply in the variable-step setting, a different technique is adopted to establish the energy stability of the variable-time-step WSBDF2 method. In addition, both numerical schemes are mass-conservative. Finally, numerous numerical simulations are presented to demonstrate the stability and accuracy of these schemes.
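
For orientation, the standard variable-step BDF2 operator that weighted and shifted variants build upon is shown below; the authors' WSBDF2 modifies this baseline, and its exact form is not reproduced here.

```latex
% Here \tau_n = t_n - t_{n-1} and r_n = \tau_n / \tau_{n-1}; setting r_n = 1
% recovers the familiar uniform-step formula (3u^n - 4u^{n-1} + u^{n-2})/(2\tau).
\begin{equation*}
D_2 u^n \;=\; \frac{1}{\tau_n}\left[
\frac{1+2r_n}{1+r_n}\,\bigl(u^n - u^{n-1}\bigr)
\;-\; \frac{r_n^2}{1+r_n}\,\bigl(u^{n-1} - u^{n-2}\bigr)\right]
\;\approx\; u'(t_n).
\end{equation*}
```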

We present ResMLP, an architecture built entirely upon multi-layer perceptrons for image classification. It is a simple residual network that alternates (i) a linear layer in which image patches interact, independently and identically across channels, and (ii) a two-layer feed-forward network in which channels interact independently per patch. When trained with a modern training strategy using heavy data-augmentation and optionally distillation, it attains surprisingly good accuracy/complexity trade-offs on ImageNet. We will share our code based on the Timm library and pre-trained models.
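
A simplified PyTorch sketch of one such residual block (the released ResMLP also uses layer-scale-style initialization and other training details omitted here):

```python
import torch
from torch import nn

class Affine(nn.Module):
    """Per-channel scale and shift; ResMLP uses this in place of normalization."""
    def __init__(self, dim):
        super().__init__()
        self.alpha = nn.Parameter(torch.ones(dim))
        self.beta = nn.Parameter(torch.zeros(dim))

    def forward(self, x):
        return self.alpha * x + self.beta

class ResMLPBlock(nn.Module):
    """One residual block: (i) a linear layer mixing patches, shared across
    channels, then (ii) a per-patch two-layer feed-forward network."""
    def __init__(self, num_patches, dim, hidden=4):
        super().__init__()
        self.aff1, self.aff2 = Affine(dim), Affine(dim)
        self.patch_mix = nn.Linear(num_patches, num_patches)
        self.mlp = nn.Sequential(nn.Linear(dim, hidden * dim), nn.GELU(),
                                 nn.Linear(hidden * dim, dim))

    def forward(self, x):  # x: (batch, num_patches, dim)
        # Cross-patch interaction: transpose so the linear layer acts on patches.
        x = x + self.patch_mix(self.aff1(x).transpose(1, 2)).transpose(1, 2)
        # Per-patch channel interaction.
        return x + self.mlp(self.aff2(x))

x = torch.randn(2, 196, 384)           # e.g. 14x14 patches, 384 channels
print(ResMLPBlock(196, 384)(x).shape)  # torch.Size([2, 196, 384])
```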
