
This paper studies time-dependent electromagnetic scattering from metamaterials that are described by dispersive material laws. We consider the numerical treatment of a scattering problem in which a dispersive material law, for a causal and passive homogeneous material, determines the wave-material interaction in the scatterer. The resulting problem is nonlocal in time inside the scatterer and is posed on an unbounded domain. Well-posedness of the scattering problem is shown using a formulation that is fully given on the surface of the scatterer via a time-dependent boundary integral equation. Discretizing this equation by convolution quadrature in time and boundary elements in space yields a provably stable and convergent method that is fully parallel in time and space. Under regularity assumptions on the exact solution we derive error bounds with explicit convergence rates in time and space. Numerical experiments illustrate the theoretical results and show the effectiveness of the method.

Related Content

Cognitive Radio Networks (CRNs) provide effective capabilities for allocating the valuable spectrum resources in the network: unlicensed users, or Secondary Users (SUs), are granted access to portions of the spectrum that are unused by the licensed users, or Primary Users (PUs). This paper develops an optimal relay selection scheme combined with spectrum sharing in a CRN. The proposed Cross-Layer Spider Swarm Shifting scheme performs optimal relay selection with Spider Swarm Optimization (SSO), and the shortest data transmission path in the CRN is estimated with a data shifting model. The study examines a cognitive relay network with interference restrictions imposed by a mobile end user (MU). The system model uses half-duplex communication between a single primary user (PU) and a single secondary user (SU), with an amplify-and-forward (AF) relaying mechanism between the SU source and SU destination. While the other nodes (SU source, SU relays, and PU) are assumed to be immobile in this scenario, the mobile end user (SU destination) is assumed to travel at high vehicular speeds. The proposed method achieves diversity by placing a selection combiner at the SU destination and dynamically selecting the optimal relay for transmission based on the greatest signal-to-noise ratio (SNR). The performance of the proposed Cross-Layer Spider Swarm Shifting model is compared with Spectrum Sharing Optimization with QoS Guarantee (SSO-QG). The comparative analysis shows that the proposed model reduces delay by 15% and improves network throughput by approximately 25% relative to SSO-QG.
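
The max-SNR relay selection rule is easy to make concrete. The sketch below is a toy illustration, not the proposed Cross-Layer Spider Swarm Shifting algorithm: it assumes the per-hop SNRs are given and uses the standard amplify-and-forward end-to-end SNR expression g1*g2/(g1+g2+1).

```python
import numpy as np

# Toy max-SNR relay selection (illustrative only): given the SNRs of the
# source->relay and relay->destination hops, pick the relay whose
# amplify-and-forward end-to-end SNR is largest.
def select_relay(snr_src_relay, snr_relay_dst):
    g1 = np.asarray(snr_src_relay, dtype=float)
    g2 = np.asarray(snr_relay_dst, dtype=float)
    end_to_end = g1 * g2 / (g1 + g2 + 1.0)  # AF end-to-end SNR
    best = int(np.argmax(end_to_end))
    return best, float(end_to_end[best])

# Relay 2 wins: 8*9/(8+9+1) = 4.0 beats 30/14 and 80/25.
best, snr = select_relay([10.0, 4.0, 8.0], [3.0, 20.0, 9.0])
```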

The paper describes a dataset collected by infrared thermography, a non-contact, non-intrusive technique for collecting data and analyzing the built environment in various aspects. While most studies focus on the city and building scales, the rooftop observatory provides high-temporal- and spatial-resolution observations of dynamic interactions at the district scale. The rooftop infrared thermography observatory, a multi-modal platform capable of assessing a wide range of dynamic processes in urban systems, was deployed in Singapore. It was placed on top of two buildings overlooking the outdoor context of the campus of the National University of Singapore. The platform collects continuous remote sensing data from a tropical area, allowing users to determine the temperature trends of individual features such as buildings, roads, and vegetation. The dataset includes 1,365,921 thermal images collected at approximately 10-second intervals on average from the two locations over ten months.
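
A minimal sketch of how such a dataset supports per-feature temperature trends (the function and mask layout here are hypothetical, not the dataset's actual API): label each pixel with a feature class and average each thermal frame over each class.

```python
import numpy as np

# Illustrative per-feature trend extraction: frames is a (T, H, W) stack of
# temperature maps, mask is an (H, W) integer map assigning each pixel to a
# feature class; return one mean-temperature time series per feature.
def feature_trends(frames, mask, labels):
    return {name: frames[:, mask == lab].mean(axis=1)
            for lab, name in labels.items()}

# Tiny synthetic example: 4 frames of a 2x3 scene with three classes.
T, H, W = 4, 2, 3
frames = np.arange(T * H * W, dtype=float).reshape(T, H, W)
mask = np.array([[0, 0, 1], [1, 2, 2]])
trends = feature_trends(frames, mask, {0: "building", 1: "road", 2: "vegetation"})
```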

In this work, an efficient and robust isogeometric three-dimensional solid-beam finite element is developed for large deformations and finite rotations, with merely displacements as degrees of freedom. The finite strain theory and hyperelastic constitutive models are considered, and B-splines and NURBS are employed for the finite element discretization. Like finite elements based on Lagrange polynomials, NURBS-based formulations are affected by the non-physical phenomenon of locking, which constrains the field variables, negatively impacts solution accuracy, and deteriorates convergence behavior. To avoid this problem within the context of a solid-beam formulation, the Assumed Natural Strain (ANS) method is applied to alleviate membrane and transverse shear locking, and the Enhanced Assumed Strain (EAS) method to address Poisson thickness locking. Furthermore, the Mixed Integration Point (MIP) method is employed to make the formulation more efficient and robust. The proposed novel isogeometric solid-beam element is tested on several single-patch and multi-patch benchmark problems, and it is validated against classical solid finite elements and isoparametric solid-beam elements. The results show that the proposed formulation alleviates the locking effects and significantly improves the performance of the isogeometric solid-beam element. With the developed element, efficient and accurate predictions of the mechanical properties of lattice-based structured materials can be achieved. The proposed solid-beam element inherits the merits of both solid elements (e.g., flexible application of boundary conditions) and beam elements (i.e., higher computational efficiency).
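
As background for the B-spline discretization, the Cox-de Boor recursion that underlies such basis functions can be sketched compactly (an illustrative utility, not part of the element formulation):

```python
import numpy as np

# Cox-de Boor recursion: evaluate all B-spline basis functions of degree p
# on the given knot vector at a point x (0/0 terms treated as zero).
def bspline_basis(knots, p, x):
    t = np.asarray(knots, dtype=float)
    m = len(t) - 1
    N = np.array([1.0 if t[i] <= x < t[i + 1] else 0.0 for i in range(m)])
    for k in range(1, p + 1):
        for i in range(m - k):  # ascending i: N[i+1] still holds degree k-1
            left = 0.0 if t[i + k] == t[i] else \
                (x - t[i]) / (t[i + k] - t[i]) * N[i]
            right = 0.0 if t[i + k + 1] == t[i + 1] else \
                (t[i + k + 1] - x) / (t[i + k + 1] - t[i + 1]) * N[i + 1]
            N[i] = left + right
    return N[: len(t) - p - 1]

# On the open knot vector [0,0,0,1,1,1] the quadratic basis reduces to the
# Bernstein polynomials; at x = 0.5 the values are (0.25, 0.5, 0.25).
vals = bspline_basis([0, 0, 0, 1, 1, 1], 2, 0.5)
```

NURBS basis functions are then rational combinations of these, weighted per control point.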

The impact of outliers and anomalies on model estimation and data processing is of paramount importance, as evidenced by the extensive body of research spanning various fields over several decades: thousands of research papers have been published on the subject. As a consequence, numerous reviews, surveys, and textbooks have sought to summarize the existing literature, encompassing a wide range of methods from both the statistical and data mining communities. While these endeavors to organize and summarize the research are invaluable, they face inherent challenges due to the pervasive nature of outliers and anomalies in all data-intensive applications, irrespective of the specific application field or scientific discipline. Consequently, the collection of papers remains voluminous and somewhat heterogeneous. To address the need for knowledge organization in this domain, this paper presents the first systematic meta-survey of general surveys and reviews on outlier and anomaly detection. Employing a classical systematic survey approach, the study collects nearly 500 papers using two specialized scientific search engines. From this comprehensive collection, a subset of 56 papers that claim to be general surveys on outlier detection is selected using a snowball search technique to enhance field coverage. A meticulous quality assessment phase further refines the selection to a subset of 25 high-quality general surveys. Using this curated collection, the paper investigates the evolution of the outlier detection field over a 20-year period, revealing emerging themes and methods. Furthermore, an analysis of the surveys sheds light on the survey writing practices adopted by scholars from different communities who have contributed to this field. Finally, the paper delves into several topics where consensus has emerged from the literature.
These include taxonomies of outlier types, challenges posed by high-dimensional data, the importance of anomaly scores, the impact of learning conditions, difficulties in benchmarking, and the significance of neural networks. Non-consensual aspects are also discussed, particularly the distinction between local and global outliers and the challenges in organizing detection methods into meaningful taxonomies.
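
One of the consensus points above, the importance of anomaly scores over hard labels, can be illustrated with a deliberately simple scorer (a toy example, not any surveyed method): a robust z-score based on the median and MAD, thresholded only as a final step.

```python
import numpy as np

# Score each point by its robust z-score: distance from the median in
# units of the (Gaussian-consistent) median absolute deviation. The score
# ranks anomalousness; the threshold is a separate, final decision.
def robust_zscore(x):
    x = np.asarray(x, dtype=float)
    med = np.median(x)
    mad = np.median(np.abs(x - med)) * 1.4826  # consistency factor
    return np.abs(x - med) / mad

scores = robust_zscore([9.8, 10.1, 10.0, 9.9, 10.2, 25.0])
outliers = np.flatnonzero(scores > 3.5)  # flags only the last point
```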

Singularly perturbed boundary value problems pose a significant challenge for numerical approximation because of the presence of sharp boundary layers. These sharp boundary layers are responsible for the stiffness of solutions, which leads to large computational errors if not properly handled. It is well known that classical numerical methods, as well as Physics-Informed Neural Networks (PINNs), require special treatment near the boundary, e.g., extensive mesh refinement or finer collocation points, in order to obtain an accurate approximate solution, especially inside the stiff boundary layer. In this article, we modify PINNs and construct new semi-analytic SL-PINNs suitable for singularly perturbed boundary value problems. Performing a boundary layer analysis, we first find the corrector functions describing the singular behavior of the stiff solutions inside the boundary layers. We then obtain the SL-PINN approximations of the singularly perturbed problems by embedding the explicit correctors in the structure of the PINNs or by training the correctors together with the PINN approximations. Our numerical experiments confirm that the new SL-PINN methods produce stable and accurate approximations of stiff solutions.
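
The boundary-layer analysis step can be illustrated without any network. The model problem and corrector below are a standard textbook example chosen for illustration, not the specific problems treated in the article:

```python
import numpy as np

# Model problem: -eps*u'' - u' = 1 on (0,1), u(0) = u(1) = 0.
# Outer (reduced) solution: u0(x) = 1 - x; it misses u(0) = 0 by O(1).
# Layer analysis gives the explicit corrector theta(x) = -exp(-x/eps),
# which repairs the boundary condition; an SL-PINN would then only need
# to learn the remaining smooth part.
eps = 1e-3
x = np.linspace(0.0, 1.0, 2001)

u_exact = (1.0 - np.exp(-x / eps)) / (1.0 - np.exp(-1.0 / eps)) - x
u0 = 1.0 - x                  # outer solution, layer ignored
theta = -np.exp(-x / eps)     # explicit boundary-layer corrector
u_asym = u0 + theta           # semi-analytic approximation

err_outer = float(np.max(np.abs(u_exact - u0)))    # O(1) near the layer
err_corr = float(np.max(np.abs(u_exact - u_asym)))  # negligible here
```

The corrector removes the O(1) error concentrated in the layer, which is exactly the role the explicit correctors play inside the SL-PINN ansatz.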

A model for corrosion-induced cracking of reinforced concrete subjected to non-uniform chloride-induced corrosion is presented. The gradual initiation of corrosion along the steel surface is investigated by simulating chloride transport, taking chloride binding into account. The transport of iron from the steel surface, its subsequent precipitation into rust, and the associated precipitation-induced pressure are explicitly modelled. Model results, obtained through finite element simulations, agree very well with experimental data, showing significantly improved accuracy over uniform corrosion modelling. The case studies reveal that crack-facilitated transport of chlorides cannot be neglected, that the size of the anodic region must be considered, and that precipitate accumulation in pores can take years.

We establish an $L_1$-bound between the coefficients of the optimal causal filter applied to the data-generating process and its finite sample approximation. Here, we assume that the data-generating process is a second-order stationary time series with either short or long memory autocovariances. To derive the $L_1$-bound, we first provide an exact expression for the coefficients of the causal filter and their approximations in terms of the absolute convergent series of the multistep ahead infinite and finite predictor coefficients, respectively. Then, we prove a so-called uniform Baxter's inequality to obtain a bound for the difference between the infinite and finite multistep ahead predictor coefficients in both short and long memory time series. The $L_1$-approximation error bound for the causal filter coefficients can be used to evaluate the performance of the linear predictions of time series through the mean squared error criterion.
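
For readers who want the finite predictor coefficients in concrete form, the Levinson-Durbin recursion computes them from the autocovariances. The sketch below is illustrative and not tied to the paper's bounds; it checks the recursion on an AR(1) process, for which the finite and infinite one-step predictors coincide:

```python
import numpy as np

# Levinson-Durbin recursion: given autocovariances acov[0..p], return the
# order-p one-step-ahead finite predictor coefficients phi[1..p] solving
# the Yule-Walker system (requires p >= 1).
def levinson_durbin(acov, p):
    phi = np.zeros(p + 1)
    phi[1] = acov[1] / acov[0]
    v = acov[0] * (1.0 - phi[1] ** 2)      # prediction error variance
    for k in range(2, p + 1):
        kappa = (acov[k] - phi[1:k] @ acov[1:k][::-1]) / v
        new = phi.copy()
        new[k] = kappa
        new[1:k] = phi[1:k] - kappa * phi[1:k][::-1]
        phi = new
        v *= 1.0 - kappa ** 2
    return phi[1:]

# AR(1) with parameter a: autocovariance a**k / (1 - a**2); the finite
# predictor of any order is (a, 0, ..., 0).
a = 0.6
acov = a ** np.arange(6) / (1.0 - a ** 2)
coef = levinson_durbin(acov, 5)
```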

We investigate the long-term cognitive effects of an intervention in which systolic blood pressure (sBP) is maintained at more optimal levels, in a large representative sample. A limitation of previous research on the potential risk reduction of such interventions is that it does not properly account for the reduction of mortality rates; hence, one can only speculate whether the effect results from changes in cognition or changes in mortality. We extend previous research by providing both an etiological and a prognostic effect estimate. To do this, we propose a Bayesian semi-parametric estimation approach for an incremental intervention, using the extended G-formula. We also introduce a novel sparsity-inducing Dirichlet hyperprior for longitudinal data, demonstrate the usefulness of our approach in simulations, and compare its performance with other Bayesian decision tree ensemble approaches. In our study, there were no significant prognostic or etiological effects across all ages, indicating that sBP interventions likely do not have a strong effect on memory at either the population or the individual level.
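
The g-formula idea behind such effect estimates can be sketched with a deliberately crude Monte Carlo toy. The data-generating model, the 130 mmHg cap, and every coefficient below are invented for illustration and have no relation to the study's data or findings:

```python
import numpy as np

# Invented toy model (illustration only): draw sBP, then contrast the mean
# of a hypothetical cognition score under the natural course versus an
# intervention that caps sBP at 130 mmHg, in g-formula style.
rng = np.random.default_rng(0)
n = 50_000

sbp = rng.normal(140.0, 15.0, n)  # simulated systolic blood pressure

def mean_outcome(sbp_values):
    # Outcome model with an invented effect: higher sBP -> lower score.
    return float(np.mean(100.0 - 0.2 * sbp_values + rng.normal(0.0, 5.0, n)))

natural = mean_outcome(sbp)                        # no intervention
intervened = mean_outcome(np.minimum(sbp, 130.0))  # cap sBP at 130 mmHg
effect = intervened - natural                      # built-in positive effect
```

A real analysis would model the full longitudinal structure, treat mortality as a competing event, and average over posterior draws, which is where the extended G-formula and the Bayesian machinery come in.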

We discuss avoidance of sure loss and coherence results for semicopulas and standardized functions, i.e., for grounded, 1-increasing functions with value $1$ at $(1,1,\ldots, 1)$. We characterize the existence of a $k$-increasing $n$-variate function $C$ fulfilling $A\leq C\leq B$ for standardized $n$-variate functions $A,B$ and discuss the method for constructing this function. Our proofs also include procedures for extending functions on some countably infinite mesh to functions on the unit box. We provide a characterization when $A$ respectively $B$ coincides with the pointwise infimum respectively supremum of the set of all $k$-increasing $n$-variate functions $C$ fulfilling $A\leq C\leq B$.
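
For orientation, the $k$-increasingness used throughout reduces in the bivariate case $k = n = 2$ to the familiar rectangle inequality; the display below is the standard definition, stated here for reference:

```latex
% 2-increasing condition: the C-volume of every rectangle is nonnegative.
V_C\bigl([u_1,u_2]\times[v_1,v_2]\bigr)
  = C(u_2,v_2) - C(u_2,v_1) - C(u_1,v_2) + C(u_1,v_1) \;\ge\; 0,
\qquad 0 \le u_1 \le u_2 \le 1, \quad 0 \le v_1 \le v_2 \le 1.
```

For general $n$ and $k$, the condition requires nonnegativity of the analogous mixed differences over all $k$-dimensional boxes.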

We consider the numerical approximation of a continuum model of antiferromagnetic and ferrimagnetic materials. The state of the material is described in terms of two unit-length vector fields, which can be interpreted as the magnetizations averaging the spins of two sublattices. For the static setting, which requires the solution of a constrained energy minimization problem, we introduce a discretization based on first-order finite elements and prove its $\Gamma$-convergence. Then, we propose and analyze two iterative algorithms for the computation of low-energy stationary points. The algorithms are obtained from (semi-)implicit time discretizations of gradient flows of the energy. Finally, we extend the algorithms to the dynamic setting, which consists of a nonlinear system of two Landau-Lifshitz-Gilbert equations solved by the two fields, and we prove unconditional stability and convergence of the finite element approximations toward a weak solution of the problem. Numerical experiments assess the performance of the algorithms and demonstrate their applicability for the simulation of physical processes involving antiferromagnetic and ferrimagnetic materials.
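
For reference, one common form of the coupled Landau-Lifshitz-Gilbert system for two sublattice magnetizations is given below; the notation is generic and may differ from the paper's:

```latex
% Coupled LLG system for unit-length sublattice magnetizations m_1, m_2;
% alpha_i are Gilbert damping parameters and H_eff,i the effective fields,
% which couple the two equations through the interlattice exchange.
\partial_t \mathbf{m}_i
  = -\,\mathbf{m}_i \times \mathbf{H}_{\mathrm{eff},i}[\mathbf{m}_1,\mathbf{m}_2]
    + \alpha_i\, \mathbf{m}_i \times \partial_t \mathbf{m}_i ,
\qquad |\mathbf{m}_i| = 1, \quad i \in \{1,2\}.
```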
