
This work presents elecode, open-source software for various electrical engineering applications that require accounting for electromagnetic processes. The primary focus of the software is power engineering applications; however, the software imposes no specific limitations preventing other uses. In contrast to other open-source software based on the Finite Difference Time Domain (FDTD) method, elecode implements various thin-wire modeling techniques, which allow simulating complex objects consisting of wires. In addition, the implemented graphical user interface (GUI) makes it convenient to modify models. The software provides auxiliary numerical methods for simulations and measurements of electrical soil properties, supports lightning-related simulations (including those involving insulation breakdown models), and enables calculations of grounding characteristics. The part of the code responsible for FDTD simulations was thoroughly tested in previous works. Recently, the code was rewritten to add a convenient interface for using it as a library, a command-line program, or a GUI program. Finally, the code was released under an open-source license. This work describes the main capabilities of the software and presents several simulation examples covering its main features. elecode is available at //gitlab.com/dmika/elecode.
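As an illustration of the core time-stepping scheme that FDTD-based solvers such as elecode are built around, the following minimal one-dimensional Yee-grid sketch alternates updates of the electric and magnetic fields. This is not elecode code; the grid size, source, and constants are arbitrary placeholders.

```python
import numpy as np

# Minimal 1D FDTD (Yee) sketch: E and H live on a staggered grid and are
# updated in leapfrog fashion. Generic illustration only, not elecode's API.
c0 = 299_792_458.0           # speed of light, m/s
nz, nt = 400, 800            # grid cells, time steps
dz = 1e-2                    # spatial step, m
dt = dz / (2 * c0)           # time step satisfying the Courant condition

ez = np.zeros(nz)            # electric field samples
hy = np.zeros(nz - 1)        # magnetic field samples (staggered)
eps0, mu0 = 8.854e-12, 4e-7 * np.pi

for n in range(nt):
    hy += dt / (mu0 * dz) * (ez[1:] - ez[:-1])           # update H from curl E
    ez[1:-1] += dt / (eps0 * dz) * (hy[1:] - hy[:-1])    # update E from curl H
    ez[nz // 2] += np.exp(-((n - 60) / 15.0) ** 2)       # soft Gaussian source
```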

Related content

Unmanned aerial vehicles (UAVs) in cellular networks have garnered considerable interest. One of their applications is as flying base stations (FBSs), which can increase coverage and quality of service (QoS). Because FBSs are battery-powered, regulating their energy usage is a vital aspect of their use; the appropriate placement and trajectories of FBSs throughout their operation are therefore critical to overcoming this challenge. In this paper, we propose a method for solving a multi-FBS 3D trajectory problem that considers FBS energy consumption, operation time, flight distance limits, and inter-cell interference constraints. Our method is divided into two phases: FBS placement and FBS trajectory. In this approach, we break the problem into several snapshots. First, we find the minimum number of FBSs required and their proper 3D positions in each snapshot. Then, between every two snapshots, the trajectory phase is executed. The optimal path between the origin and destination of each FBS is determined during the trajectory phase by a proposed binary linear problem (BLP) model that considers FBS energy consumption and flight distance constraints. The shortest path for each FBS is then determined while taking obstacles and collision avoidance into consideration. Because the number of FBSs needed may vary between snapshots, we present an FBS set management (FSM) technique to manage the set of FBSs and their power. The results demonstrate that the proposed approach is applicable to real-world situations and that the outcomes are consistent with expectations.
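To make the trajectory phase concrete, here is a hedged toy sketch: a Dijkstra search on a small occupancy grid that respects obstacles and a per-FBS distance budget. It only illustrates path finding under a flight-distance constraint, not the paper's BLP formulation; the grid, budget, and unit step cost are invented for the example.

```python
import heapq

def shortest_path(grid, start, goal, max_dist):
    """Dijkstra on a 2D occupancy grid: 1 = obstacle, 0 = free.
    Returns the path length, or None if no path within max_dist exists.
    Hypothetical stand-in for the paper's BLP-based trajectory phase."""
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0}
    queue = [(0, start)]
    while queue:
        d, (r, c) = heapq.heappop(queue)
        if (r, c) == goal:
            return d
        if d > dist.get((r, c), float("inf")) or d > max_dist:
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                nd = d + 1
                if nd <= max_dist and nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(queue, (nd, (nr, nc)))
    return None

# Example: a 4x4 grid with an obstacle row that leaves one gap, budget of 8 steps.
grid = [[0, 0, 0, 0],
        [1, 1, 1, 0],
        [0, 0, 0, 0],
        [0, 1, 1, 1]]
print(shortest_path(grid, (0, 0), (2, 0), max_dist=8))   # -> 8
```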

A substantial fraction of the time that computational modellers dedicate to developing their models is actually spent troubleshooting and debugging their code. However, how this process unfolds is seldom spoken about, perhaps because it is hard to articulate, as it relies mostly on the mental catalogues we have built from the experience of past failures. To help newcomers to the field of material modelling, here we attempt to fill this gap and provide a perspective on how to identify and fix mistakes in computational solid mechanics models. To this aim, we describe the components that make up such a model and then identify possible sources of errors. In practice, finding mistakes is often better done by considering the symptoms of what is going wrong. Consequently, we provide strategies to narrow down where in the model the problem may lie, based on observation and a catalogue of frequent causes of observed errors. In a final section, we also discuss how once-bug-free models can be kept bug-free, given that computational models are typically under continual development. We hope that this collection of approaches and suggestions serves as a "road map" to find and fix mistakes in computational models and, more importantly, to keep the problems solved, so that modellers can enjoy the beauty of material modelling and simulation.

The availability of property data is one of the major bottlenecks in the development of chemical processes, often requiring time-consuming and expensive experiments or limiting the design space to a small number of known molecules. This bottleneck has been the motivation behind the continuing development of predictive property models. For the property prediction of novel molecules, group contribution methods have been groundbreaking. In recent times, machine learning has joined the more established property prediction models. However, even with recent successes, the integration of physical constraints into machine learning models remains challenging. Physical constraints, such as the Gibbs-Duhem relation, are vital to many thermodynamic properties and introduce an additional layer of complexity into the prediction. Here, we introduce SPT-NRTL, a machine learning model that predicts thermodynamically consistent activity coefficients and provides NRTL parameters for easy use in process simulations. The results show that SPT-NRTL achieves higher accuracy than UNIFAC in the prediction of activity coefficients across all functional groups and is able to predict many vapor-liquid equilibria with near-experimental accuracy, as illustrated for the exemplary mixtures water/ethanol and chloroform/n-hexane. To ease the application of SPT-NRTL, NRTL parameters of 100 000 000 mixtures are calculated with SPT-NRTL and provided online.
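To show how such NRTL parameters would be consumed downstream, here is the standard binary NRTL model in a short Python function. This is the textbook formulation, not SPT-NRTL itself, and the parameter values in the example call are placeholders rather than predictions from the paper.

```python
from math import exp

def nrtl_binary(x1, tau12, tau21, alpha=0.3):
    """Standard binary NRTL activity coefficients (gamma1, gamma2).
    Illustrates how NRTL parameters such as those provided by SPT-NRTL
    would be used in a flash or VLE calculation."""
    x2 = 1.0 - x1
    g12, g21 = exp(-alpha * tau12), exp(-alpha * tau21)
    ln_g1 = x2**2 * (tau21 * (g21 / (x1 + x2 * g21))**2
                     + tau12 * g12 / (x2 + x1 * g12)**2)
    ln_g2 = x1**2 * (tau12 * (g12 / (x2 + x1 * g12))**2
                     + tau21 * g21 / (x1 + x2 * g21)**2)
    return exp(ln_g1), exp(ln_g2)

# Placeholder parameters, equimolar mixture.
print(nrtl_binary(0.5, tau12=0.5, tau21=1.0))
```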

We consider the $\varepsilon$-Consensus-Halving problem, in which a set of heterogeneous agents aim at dividing a continuous resource into two (not necessarily contiguous) portions that all of them simultaneously consider to be of approximately the same value (up to $\varepsilon$). This problem was recently shown to be PPA-complete, for $n$ agents and $n$ cuts, even for very simple valuation functions. In a quest to understand the root of the complexity of the problem, we consider the setting where there is only a constant number of agents, and we consider both the computational complexity and the query complexity of the problem. For agents with monotone valuation functions, we show a dichotomy: for two agents the problem is polynomial-time solvable, whereas for three or more agents it becomes PPA-complete. Similarly, we show that for two monotone agents the problem can be solved with polynomially-many queries, whereas for three or more agents, we provide exponential query complexity lower bounds. These results are enabled via an interesting connection to a monotone Borsuk-Ulam problem, which may be of independent interest. For agents with general valuations, we show that the problem is PPA-complete and admits exponential query complexity lower bounds, even for two agents.
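For reference, the problem that the results above concern can be stated compactly as follows, taking the resource to be the unit interval, which is a common normalization:

```latex
% \varepsilon-Consensus-Halving: partition the resource R = [0,1] into
% two (not necessarily contiguous) pieces H^+ and H^- using at most n cuts
% so that every agent is nearly indifferent between them:
\left| v_i\!\left(H^+\right) - v_i\!\left(H^-\right) \right| \le \varepsilon
\quad \text{for all agents } i = 1, \dots, n .
```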

The term NeuralODE describes the structural combination of an Artificial Neural Network (ANN) and a numerical solver for Ordinary Differential Equations (ODEs), where the former acts as the right-hand side of the ODE to be solved. This concept was further extended by a black-box model in the form of a Functional Mock-up Unit (FMU) to obtain a subclass of NeuralODEs, named NeuralFMUs. The resulting structure combines the advantages of first-principle and data-driven modeling approaches in a single simulation model: a higher prediction accuracy compared to conventional First Principle Models (FPMs), together with a lower training effort compared to purely data-driven models. We present an intuitive workflow to set up and use NeuralFMUs, enabling the encapsulation and reuse of existing conventional models exported from common modeling tools. Moreover, we exemplify this concept by deploying a NeuralFMU for a consumption simulation based on a Vehicle Longitudinal Dynamics Model (VLDM), a typical use case in the automotive industry. Related challenges that are often neglected in scientific use cases, such as real measurements (e.g. noise), an unknown system state, or high-frequency discontinuities, are handled in this contribution. With the aim of building a hybrid model with a higher prediction quality than the original FPM, we briefly highlight two open-source libraries: FMI.jl for integrating FMUs into the Julia programming environment, as well as an extension to this library called FMIFlux.jl, which allows FMUs to be integrated into a neural network topology to finally obtain a NeuralFMU.
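The structural idea behind a NeuralODE can be sketched in a few lines: a small network (here untrained and randomly initialized) serves as the ODE right-hand side and is advanced by a numerical integrator. This NumPy sketch only conveys the structure; the paper's NeuralFMUs wrap FMUs and rely on the Julia libraries FMI.jl and FMIFlux.jl, whose APIs are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(16, 2)) * 0.1, np.zeros(16)   # tiny untrained MLP
W2, b2 = rng.normal(size=(2, 16)) * 0.1, np.zeros(2)

def ann_rhs(t, x):
    """ANN acting as the right-hand side f(t, x) of the ODE dx/dt = f(t, x)."""
    h = np.tanh(W1 @ x + b1)
    return W2 @ h + b2

def euler_solve(f, x0, t0, t1, dt=0.01):
    """Explicit Euler integration of the NeuralODE. A realistic setup would
    use an adaptive solver and train the ANN weights against measurement data."""
    x, t = np.array(x0, dtype=float), t0
    while t < t1:
        x = x + dt * f(t, x)
        t += dt
    return x

print(euler_solve(ann_rhs, x0=[1.0, 0.0], t0=0.0, t1=1.0))
```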

Advances in the development of largely automated microscopy methods such as MERFISH for imaging cellular structures in mouse brains are providing spatial detection of gene expression at micron resolution. While tremendous progress has been made in the field of Computational Anatomy (CA) on diffeomorphic mapping technologies at the tissue scale for advanced neuroinformatic studies in common coordinates, the integration of molecular- and cellular-scale populations through statistical averaging via common coordinates remains unattained. This paper describes the first set of algorithms for calculating geodesics in the space of diffeomorphisms, which we term Image-Varifold LDDMM, extending the family of large deformation diffeomorphic metric mapping (LDDMM) algorithms to accommodate the "copy and paste" varifold action of particles, which extends consistently to the tissue scales. We represent the brain data as geometric measures, termed image varifolds, supported by a large number of unstructured points (i.e., not aligned on a 2D or 3D grid), each point representing a small volume in space (which may be incompletely described) and carrying a list of densities of features, elements of a high-dimensional feature space. The shape of image-varifold brain spaces is measured by transforming them by diffeomorphisms. The metric between image varifolds is obtained after embedding these objects in a linear space equipped with a norm, yielding a so-called "chordal metric."
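One standard way to realize such a kernel embedding and the resulting "chordal metric" for discrete varifolds is sketched below; the spatial and feature kernels $k_x$ and $k_f$ are generic placeholders, and the paper's particular choices may differ.

```latex
% Discrete image varifolds as weighted Diracs over space x and feature f:
\mu = \sum_i w_i\, \delta_{x_i} \otimes \delta_{f_i}, \qquad
\nu = \sum_j w'_j\, \delta_{y_j} \otimes \delta_{g_j}.
% Kernel inner product and the induced (chordal) distance:
\langle \mu, \nu \rangle = \sum_{i,j} w_i\, w'_j\, k_x(x_i, y_j)\, k_f(f_i, g_j),
\qquad
d(\mu,\nu)^2 = \langle \mu,\mu\rangle - 2\langle \mu,\nu\rangle + \langle \nu,\nu\rangle .
```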

Subsampling, or subdata selection, is a useful approach in large-scale statistical learning. Most existing studies focus on model-based subsampling methods, which depend significantly on the model assumption. In this paper, we consider a model-free subsampling strategy for generating subdata from the original full data. In order to measure how well a subdata set represents the original data, we propose a criterion, the generalized empirical F-discrepancy (GEFD), and study its theoretical properties in connection with the classical generalized L2-discrepancy in the theory of uniform designs. These properties allow us to develop a low-GEFD data-driven subsampling method based on existing uniform designs. Through simulation examples and a real case study, we show that the proposed subsampling method is superior to random sampling. Moreover, our method remains robust under diverse model specifications, while other popular subsampling methods underperform. In practice, such a model-free property is more appealing than model-based subsampling, which may perform poorly when the model is misspecified, as demonstrated in our simulation studies.
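As a rough illustration of design-based, model-free subsampling (not the paper's GEFD criterion or its uniform designs, which are more refined), one can scale the data to the unit cube and keep the data row closest to each point of a reference design; the sketch below uses random reference points purely as a stand-in for a proper uniform design.

```python
import numpy as np

def design_based_subsample(X, m, seed=0):
    """Illustrative model-free subsampling: scale the data to [0, 1]^p,
    lay down m quasi-uniform reference points, and keep the nearest data
    row to each. Hypothetical simplification, not the low-GEFD procedure."""
    rng = np.random.default_rng(seed)
    lo, hi = X.min(axis=0), X.max(axis=0)
    U = (X - lo) / np.where(hi > lo, hi - lo, 1.0)      # scale to the unit cube
    design = rng.uniform(size=(m, X.shape[1]))          # stand-in "uniform design"
    idx = [int(np.argmin(((U - d) ** 2).sum(axis=1))) for d in design]
    return np.unique(idx)                               # indices of selected subdata

X = np.random.default_rng(1).normal(size=(1000, 3))
print(design_based_subsample(X, m=20))
```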

Hoare-style program logics are a popular and effective technique for software verification. Relational program logics are an instance of this approach that enables reasoning about relationships between the execution of two or more programs. Existing relational program logics have focused on verifying that all runs of a collection of programs do not violate a specified relational behavior. Several important relational properties, including refinement and noninterference, do not fit into this category, as they also mandate the existence of specific desirable executions. This paper presents RHLE, a logic for verifying these sorts of relational $\forall\exists$ properties. Key to our approach is a novel form of function specification that employs a variant of ghost variables to ensure that valid implementations exhibit certain behaviors. We have used a program verifier based on RHLE to verify a diverse set of relational $\forall\exists$ properties drawn from the literature.
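To make the ∀∃ shape of such properties concrete, here is a tiny hand-rolled example (not RHLE or its verifier): nondeterministic programs are modeled as finite sets of possible outputs, and the property checked is that every run of the first program can be matched by some run of the second.

```python
def outputs_p1(x):
    """Deterministic program: exactly one run per input."""
    return {x * 2}

def outputs_p2(x):
    """Nondeterministic program: the set of results its runs may produce."""
    return {x * 2, x * 2 + 1}

def forall_exists(inputs):
    """Check a simple forall-exists property by enumeration: for every run of
    P1 there exists a run of P2 producing the same output. RHLE establishes
    such properties deductively, without enumerating runs."""
    return all(o in outputs_p2(x) for x in inputs for o in outputs_p1(x))

print(forall_exists(range(10)))   # True: P2 can always match P1
```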

We increasingly rely on digital services and the conveniences they provide. Processing of personal data is integral to such services, so privacy and data protection are a growing concern, and governments have responded with regulations such as the EU's GDPR. As a consequence, organisations that make software have legal obligations to document the privacy and data protection behaviour of their software. This work must involve both software developers who understand the code and the organisation's data protection officer or legal department who understand privacy and the requirements of a Data Protection Impact Assessment (DPIA). To help developers and non-technical people such as lawyers document the privacy and data protection behaviour of software, we have developed an automatic software analysis technique. This technique is based on static program analysis to characterise the flow of privacy-related data. The results of the analysis can be presented as a graph of privacy flows and operations that is understandable for non-technical people as well. We argue that our technique facilitates collaboration between technical and non-technical people in documenting the privacy behaviour of the software. We explain how to use the results produced by our technique to answer a series of privacy-relevant questions needed for a DPIA. To illustrate our work, we show both detailed and abstract analysis results from applying our analysis technique to the secure messaging service Signal and to the client of the cloud service NextCloud, and we show how their privacy flow-graphs inform the writing of a DPIA.
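The kind of artefact such an analysis produces can be pictured with a toy privacy flow graph; the node names and edges below are entirely hypothetical, whereas in the actual technique they are extracted from the application's code by static analysis.

```python
# Toy "privacy flow graph": nodes are program points handling personal data,
# edges are flows a (hypothetical) static analysis has discovered.
flows = {
    "Contacts.read": ["MessageComposer"],
    "MessageComposer": ["Encrypt", "LocalDraftStore"],
    "Encrypt": ["Network.send"],
}

def reachable_operations(source, graph):
    """Collect every operation that personal data from `source` can reach --
    the kind of question a DPIA needs answered per data category."""
    seen, stack = set(), [source]
    while stack:
        node = stack.pop()
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

print(reachable_operations("Contacts.read", flows))
```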

Genomic data are subject to various sources of confounding, such as demographic variables, biological heterogeneity, and batch effects. To identify genomic features associated with a variable of interest in the presence of confounders, the traditional approach involves fitting a confounder-adjusted regression model to each genomic feature, followed by multiplicity correction. This study shows that the traditional approach is sub-optimal and proposes a new two-dimensional false discovery rate control framework (2dFDR+) that provides a significant power improvement over the conventional method and applies to a wide range of settings. 2dFDR+ uses marginal independence test statistics as auxiliary information to filter out less promising features, and FDR control is performed based on conditional independence test statistics for the remaining features. 2dFDR+ provides (asymptotically) valid inference in settings where the conditional distribution of the genomic variables given the covariate of interest and the confounders is arbitrary and completely unknown. To achieve this goal, our method requires the conditional distribution of the covariate given the confounders to be known or estimable from the data. We develop a new procedure to simultaneously select the two cutoff values for the marginal and conditional independence test statistics. 2dFDR+ is proven to offer asymptotic FDR control and to dominate the power of the traditional procedure. Promising finite-sample performance is demonstrated via extensive simulations and real data applications.
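The two-dimensional idea can be conveyed with a simplified two-stage sketch: screen features by a marginal-test p-value cutoff, then run Benjamini-Hochberg on the conditional-test p-values of the survivors. This is only an illustration; 2dFDR+ selects both cutoffs jointly and carries theoretical FDR guarantees that this naive sketch does not.

```python
import numpy as np

def bh_reject(pvals, q):
    """Benjamini-Hochberg step-up at level q; returns a boolean rejection mask."""
    m = len(pvals)
    order = np.argsort(pvals)
    thresh = q * (np.arange(1, m + 1) / m)
    passed = np.nonzero(pvals[order] <= thresh)[0]
    k = passed.max() + 1 if passed.size else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True
    return reject

def two_stage_filter(p_marginal, p_conditional, filter_cut, q=0.05):
    """Simplified two-stage illustration: filter by marginal p-values, then
    apply BH to the conditional p-values of the surviving features."""
    keep = p_marginal <= filter_cut
    reject = np.zeros(len(p_marginal), dtype=bool)
    reject[keep] = bh_reject(p_conditional[keep], q)
    return reject

# Demo on simulated (null) p-values; expect few or no rejections.
rng = np.random.default_rng(0)
p_m, p_c = rng.uniform(size=200), rng.uniform(size=200)
print(two_stage_filter(p_m, p_c, filter_cut=0.2).sum(), "features rejected")
```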
