
PyPartMC is a Pythonic interface to PartMC, a stochastic, particle-resolved aerosol model implemented in Fortran. Both PyPartMC and PartMC are free, libre, and open-source. PyPartMC reduces the number of steps and mitigates the effort necessary to install and utilize the resources of PartMC. Without PyPartMC, setting up PartMC requires working with the UNIX shell, providing Fortran and C libraries, and performing standard Fortran and C source-code configuration, compilation, and linking. This can be challenging for those less experienced with computational research or those intending to use PartMC in environments where provision of UNIX tools is less straightforward (e.g., on Windows). PyPartMC offers a single-step installation of PartMC and all dependencies through the pip Python package manager on Linux, macOS, and Windows. This allows streamlined access to the unmodified and versioned Fortran internals of the PartMC codebase from both Python and other interoperable environments (e.g., Julia through PyCall). Consequently, users of PyPartMC can set up, run, process, and visualize the output of PartMC simulations using a single general-purpose programming language.
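
As a hedged illustration of the single-step workflow described above (assuming a standard Python 3 environment; anything beyond the pip install and the import follows the PyPartMC documentation rather than this sketch):

```python
# Shell: install PartMC and all its dependencies in one step via pip
#   pip install PyPartMC
import PyPartMC as ppmc  # the PyPI package name given in the abstract

# From Julia, the same module can be reached through PyCall, e.g.:
#   using PyCall; ppmc = pyimport("PyPartMC")
# Further calls (setting up aerosol/gas data, running a scenario, reading
# output) follow the PyPartMC API and are not reproduced in this sketch.
print(ppmc)
```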

Related Content

Ant Colony Optimization (ACO) is a meta-heuristic algorithm that has been successfully applied to various Combinatorial Optimization Problems (COPs). Traditionally, customizing ACO for a specific problem requires the expert design of knowledge-driven heuristics. In this paper, we propose DeepACO, a generic framework that leverages deep reinforcement learning to automate heuristic designs. DeepACO serves to strengthen the heuristic measures of existing ACO algorithms and dispense with laborious manual design in future ACO applications. As a neural-enhanced meta-heuristic, DeepACO consistently outperforms its ACO counterparts on eight COPs using a single neural model and a single set of hyperparameters. As a Neural Combinatorial Optimization method, DeepACO performs better than or on par with problem-specific methods on canonical routing problems. Our code is publicly available at https://github.com/henry-yeh/DeepACO.
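
As context for the transition rule that a learned heuristic plugs into, here is a minimal sketch of classical ACO solution construction in Python; the heuristic matrix `eta` is a stand-in for a model-predicted heuristic, and the paper's actual network, training procedure, and pheromone update are not reproduced here.

```python
import numpy as np

def construct_tour(pheromone, eta, alpha=1.0, beta=2.0, rng=None):
    """Sample one tour using the classic ACO transition rule.

    pheromone, eta: (n, n) matrices. In a DeepACO-style method, `eta`
    would come from a learned model rather than a hand-designed rule.
    """
    rng = rng or np.random.default_rng()
    n = pheromone.shape[0]
    start = int(rng.integers(n))
    tour, visited = [start], {start}
    while len(tour) < n:
        i = tour[-1]
        # Classic rule: preference proportional to pheromone^alpha * heuristic^beta
        weights = (pheromone[i] ** alpha) * (eta[i] ** beta)
        weights[list(visited)] = 0.0           # never revisit a node
        probs = weights / weights.sum()
        j = int(rng.choice(n, p=probs))
        tour.append(j)
        visited.add(j)
    return tour

# Toy usage on a random 5-node instance (illustrative only)
n = 5
dist = np.random.rand(n, n) + np.eye(n)        # avoid zero self-distance
tour = construct_tour(np.ones((n, n)), 1.0 / dist)
```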

Photoplethysmography is a non-invasive optical technique that measures changes in blood volume within tissues. It is commonly and increasingly used in a variety of research and clinical applications to assess vascular dynamics and physiological parameters. Yet, in contrast to heart rate variability analysis, a field which has seen the development of stable standards and advanced toolboxes and software, no such standards and open tools exist for continuous photoplethysmogram (PPG) analysis. Consequently, the primary objective of this research was to identify, standardize, implement, and validate key digital PPG biomarkers. This work describes the creation of a standard Python toolbox, denoted pyPPG, for long-term continuous PPG time-series analysis recorded using a standard finger-based transmission pulse oximeter. The improved PPG peak detector had an F1-score of 88.19% against the state-of-the-art benchmark when evaluated on 2,054 adult polysomnography recordings totaling over 91 million reference beats. This algorithm outperformed the original open-source MATLAB implementation by ~5% when benchmarked on a subset of 100 randomly selected MESA recordings. More than 3,000 fiducial points were manually annotated by two annotators in order to validate the fiducial-point detector. The detector consistently demonstrated high performance, with a mean absolute error of less than 10 ms for all fiducial points. Based on these fiducial points, pyPPG engineers a set of 74 PPG biomarkers. Studying PPG time-series variability using pyPPG can enhance our understanding of the manifestations and etiology of diseases. The toolbox can also be used for biomarker engineering in training data-driven models. pyPPG is available at physiozoo.org.
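
For orientation only, the following is a generic sketch of PPG systolic-peak detection using SciPy; it is not the pyPPG peak detector or its API, and the filter band, refractory period, and prominence threshold are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def detect_ppg_peaks(ppg, fs):
    """Illustrative systolic-peak detection on a raw PPG signal.

    A generic sketch, not the pyPPG detector evaluated in the paper; it only
    shows the kind of preprocessing + peak search such a pipeline involves.
    """
    # Band-pass filter to isolate the cardiac component (~0.5-8 Hz, assumed)
    b, a = butter(2, [0.5 / (fs / 2), 8.0 / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, ppg)
    # Enforce a ~0.3 s refractory period between detected beats
    peaks, _ = find_peaks(filtered, distance=int(0.3 * fs),
                          prominence=np.std(filtered) * 0.5)
    return peaks

# Example on a synthetic 10 s signal sampled at 100 Hz
fs = 100
t = np.arange(0, 10, 1 / fs)
ppg = np.sin(2 * np.pi * 1.2 * t) + 0.1 * np.random.randn(t.size)
beat_indices = detect_ppg_peaks(ppg, fs)
```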

This manuscript is a research resource description and presents a large and novel Electronic Health Records (EHR) data resource, the American Family Cohort (AFC). The AFC data is derived from the Centers for Medicare and Medicaid Services (CMS)-certified American Board of Family Medicine (ABFM) PRIME registry. The PRIME registry is the largest national Qualified Clinical Data Registry (QCDR) for primary care. The data is converted to a popular common data model, the Observational Health Data Sciences and Informatics (OHDSI) Observational Medical Outcomes Partnership (OMOP) Common Data Model (CDM). The resource comprises approximately 90 million encounters for 7.5 million patients. All patients have age, gender, and address information, and 73% report race. Nearly 93% of patients have lab data in LOINC, 86% have medication data in RxNorm, 93% have diagnoses in SNOMED and ICD, 81% have procedures in HCPCS or CPT, and 61% have insurance information. The richness, breadth, and diversity of this research-accessible and research-ready data is expected to accelerate observational studies in many diverse areas. We expect this resource to facilitate research for many years to come.
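
As a hedged illustration of how coverage statistics like those above could be computed once the data sits in the OMOP CDM: the table and column names (person, measurement, concept) follow the OMOP specification, while the connection string and the exact query are placeholders, not the authors' pipeline.

```python
import pandas as pd
import sqlalchemy as sa

# Placeholder connection string; the AFC OMOP instance location is not public.
engine = sa.create_engine("postgresql://user:pass@host/afc_omop")

# Fraction of patients with at least one LOINC-coded lab measurement,
# using standard OMOP CDM tables (person, measurement, concept).
query = """
SELECT COUNT(DISTINCT m.person_id)::float / (SELECT COUNT(*) FROM person)
       AS loinc_lab_fraction
FROM measurement m
JOIN concept c ON c.concept_id = m.measurement_concept_id
WHERE c.vocabulary_id = 'LOINC'
"""
print(pd.read_sql(query, engine))
```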

Much of the Earth and many surfaces of extraterrestrial bodies are composed of non-cohesive particulate matter. Locomoting on granular terrain is challenging for common robotic devices, whether wheeled or legged. In this work, we discover a robust alternative locomotion mechanism on granular media -- generating movement via self-vibration. To demonstrate the effectiveness of this locomotion mechanism, we develop a cube-shaped robot with an embedded vibratory motor and conduct systematic experiments on diverse granular terrains with various particle properties. We investigate how locomotion changes as a function of vibration frequency and intensity on granular terrains. Compared to hard surfaces, we find that such a vibratory locomotion mechanism enables the robot to move faster and more stably on granular surfaces, facilitated by the interaction between the body and the surrounding granules. The simplicity of the structural design and controls of this robotic system indicates that vibratory locomotion can be a valuable alternative for producing robust locomotion on granular terrains. We further demonstrate that such cube-shaped robots can be used as modular units for morphologically structured vibratory robots capable of maneuverable forward and turning motions, showing potential practical scenarios for robotic systems.

Lately, Large Language Models have been widely used in code generation. GPT-4 is considered the most potent Large Language Model from OpenAI. In this paper, we examine GPT-3.5 and GPT-4 as coding assistants. More specifically, we have constructed appropriate tests to check whether the two systems can a) answer typical questions that can arise during code development, b) produce reliable code, and c) contribute to code debugging. The test results are impressive. The performance of GPT-4 is outstanding and signals an increase in the productivity of programmers and a reorganization of software development procedures based on these new tools.

Accurate calibration is crucial for using multiple cameras to triangulate the position of objects precisely. However, it is also a time-consuming process that needs to be repeated for every displacement of the cameras. The standard approach is to use a printed pattern with known geometry to estimate the intrinsic and extrinsic parameters of the cameras. The same idea can be applied to event-based cameras, though it requires extra work. By using frame reconstruction from events, a printed pattern can be detected. A blinking pattern can also be displayed on a screen. Then, the pattern can be directly detected from the events. Such calibration methods can provide accurate intrinsic calibration for both frame- and event-based cameras. However, using 2D patterns has several limitations for multi-camera extrinsic calibration, with cameras possessing highly different points of view and a wide baseline. The 2D pattern can only be detected from one direction and needs to be of significant size to compensate for its distance to the camera. This makes the extrinsic calibration time-consuming and cumbersome. To overcome these limitations, we propose eWand, a new method that uses blinking LEDs inside opaque spheres instead of a printed or displayed pattern. Our method provides a faster, easier-to-use extrinsic calibration approach that maintains high accuracy for both event- and frame-based cameras.
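
For reference, here is a sketch of the standard printed-pattern intrinsic calibration mentioned above, using OpenCV; this is not the eWand method, and the pattern size, square size, and image list are placeholders.

```python
import cv2
import numpy as np

# Standard intrinsic calibration with a printed checkerboard (the baseline
# approach referenced above, not eWand). Pattern and square size are placeholders.
pattern_size = (9, 6)          # inner corners per row/column
square_size = 0.025            # meters
objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2) * square_size

obj_points, img_points, image_size = [], [], None
for fname in ["view_00.png", "view_01.png"]:   # placeholder image list
    img = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    if img is None:
        continue
    found, corners = cv2.findChessboardCorners(img, pattern_size)
    if found:
        obj_points.append(objp)
        img_points.append(corners)
        image_size = img.shape[::-1]

if obj_points:
    # Returns reprojection error, camera matrix, distortion, and per-view poses
    ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_points, img_points, image_size, None, None)
```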

Assouad-Nagata dimension addresses both large and small scale behaviors of metric spaces and is a refinement of Gromov's asymptotic dimension. A metric space $M$ is a minor-closed metric if there exists an (edge-)weighted graph $G$ satisfying a fixed minor-closed property such that the underlying space of $M$ is the vertex-set of $G$, and the metric of $M$ is the distance function in $G$. Minor-closed metrics naturally arise when removing redundant edges of the underlying graphs by using edge-deletion and edge-contraction. In this paper, we determine the Assouad-Nagata dimension of every minor-closed metric. It is a common generalization of known results about the asymptotic dimension of $H$-minor free unweighted graphs and about the Assouad-Nagata dimension of complete Riemannian surfaces with finite Euler genus and their corollaries.
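
For reference (the abstract does not restate it), the covering definition standard in this literature reads as follows, up to well-known equivalent variants: a metric space $X$ has Assouad-Nagata dimension at most $n$ if there is a constant $c>0$ such that for every $r>0$ there is a cover of $X$ whose members have diameter at most $cr$ and such that every subset of $X$ of diameter at most $r$ meets at most $n+1$ members of the cover. Asymptotic dimension relaxes the linear bound $cr$ to an arbitrary finite bound depending on $r$, which is why Assouad-Nagata dimension refines it and additionally controls small-scale behavior.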

Generative Artificial Intelligence (GenAI) in education has, in a few short months, moved from being a topic of discussion around speculative education futures to a very concrete reality. It is clear that the future of education, as of all industries, is collaboration with GenAI. GenAI's attributes make it well suited to social and constructivist approaches to learning that value collaboration, community, and the construction of knowledge and skills through active learning. This article presents an approach to designing education in collaboration with GenAI, based on digital education frameworks adapted for this new hybrid of the AI age.

We propose a parallel (distributed) version of the spectral proper orthogonal decomposition (SPOD) technique. The parallel SPOD algorithm distributes the spatial dimension of the dataset while preserving the time dimension. This approach keeps the fast Fourier transform of the data in time non-distributed, thereby avoiding the associated communication bottlenecks. The parallel SPOD algorithm is implemented in the PySPOD library (https://github.com/MathEXLab/PySPOD) and makes use of the standard message passing interface (MPI), accessed from Python via mpi4py (https://mpi4py.readthedocs.io/en/stable/). An extensive performance evaluation of the parallel package is provided, including strong and weak scalability analyses. The open-source library allows the analysis of large datasets of interest across the scientific community. Here, we present applications in fluid dynamics and geophysics that are extremely difficult (if not impossible) to tackle without a parallel algorithm. This work opens the path toward modal analyses of big quasi-stationary data, helping to uncover new, unexplored spatio-temporal patterns.
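
A minimal mpi4py sketch of the data layout described above: the spatial dimension is split across ranks while the time axis stays whole, so the temporal FFT required by SPOD remains local to each rank. The array shapes and the use of NumPy's FFT are illustrative assumptions, not PySPOD internals.

```python
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Assumed layout: a snapshot matrix of shape (n_time, n_space); each rank owns
# a contiguous slice of the spatial dimension while the time axis stays whole.
n_time, n_space = 512, 10_000
counts = [n_space // size + (1 if r < n_space % size else 0) for r in range(size)]
local_data = np.random.rand(n_time, counts[rank])   # placeholder for real snapshots

# Because time is not distributed, the temporal FFT needed by SPOD is purely
# local: no inter-rank communication (and no distributed FFT) is required.
local_hat = np.fft.rfft(local_data, axis=0)

# Frequency-wise cross-spectral reductions would then use MPI collectives,
# e.g. comm.allreduce / comm.Allreduce, over the distributed spatial dimension.
```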

We present ResMLP, an architecture built entirely upon multi-layer perceptrons for image classification. It is a simple residual network that alternates (i) a linear layer in which image patches interact, independently and identically across channels, and (ii) a two-layer feed-forward network in which channels interact independently per patch. When trained with a modern training strategy using heavy data-augmentation and optionally distillation, it attains surprisingly good accuracy/complexity trade-offs on ImageNet. We will share our code based on the Timm library and pre-trained models.
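
A hedged PyTorch sketch of one residual block as described above, alternating a cross-patch linear layer and a per-patch two-layer MLP; the published model's Affine normalization and LayerScale parameters are omitted, so this is an illustration rather than the released implementation.

```python
import torch
import torch.nn as nn

class ResMLPBlock(nn.Module):
    """Simplified sketch of one ResMLP block: (i) a linear layer mixing image
    patches, shared across channels, then (ii) a two-layer MLP mixing channels
    independently per patch. Affine normalization and LayerScale are omitted."""

    def __init__(self, num_patches: int, dim: int, expansion: int = 4):
        super().__init__()
        self.cross_patch = nn.Linear(num_patches, num_patches)
        self.cross_channel = nn.Sequential(
            nn.Linear(dim, expansion * dim),
            nn.GELU(),
            nn.Linear(expansion * dim, dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_patches, dim)
        x = x + self.cross_patch(x.transpose(1, 2)).transpose(1, 2)
        x = x + self.cross_channel(x)
        return x

# Toy usage: 196 patches (14x14 grid) of dimension 384
block = ResMLPBlock(num_patches=196, dim=384)
out = block(torch.randn(2, 196, 384))
```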
