
Full Waveform Inversion (FWI) is a successful and well-established inverse method for reconstructing material models from measured wave signals. In the field of seismic exploration, FWI has proven particularly successful in the reconstruction of smoothly varying material deviations. In contrast, non-destructive testing (NDT) often requires the detection and specification of sharp defects in a specimen. If the contrast between materials is low, FWI can be successfully applied to these problems as well. However, the method is so far not fully suitable for imaging defects such as voids, which are characterized by a high contrast in the material parameters. In this paper, we introduce a dimensionless scaling function $\gamma$ to model voids in the forward and inverse scalar wave equation problem. Depending on which material parameters this function $\gamma$ scales, different modeling approaches are presented, leading to three formulations of mono-parameter FWI and one formulation of two-parameter FWI. The resulting problems are solved by first-order optimization, where the gradient is computed by an adjoint state method. The corresponding Fr\'echet kernels are derived for each approach and the associated minimization is performed using an L-BFGS algorithm. A comparison between the different approaches shows that scaling the density with $\gamma$ is most promising for parameterizing voids in the forward and inverse problem. Finally, in order to consider arbitrary complex geometries known a priori, this approach is combined with an immersed boundary method, the finite cell method (FCM).
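
As a toy illustration of the scaling-function idea, the sketch below parameterizes a void in a 1D scalar wave solver by letting $\gamma$ scale the stiffness term (one of the mono-parameter variants; the paper finds density scaling most promising, but scaling the stiffness keeps the explicit scheme below stable), and inverts for $\gamma$ with SciPy's L-BFGS-B using finite-difference gradients in place of the paper's adjoint-state gradient. All grid sizes and names are illustrative assumptions, not the paper's setup.

```python
# 1D toy: void modeled by gamma scaling the stiffness, inverted with L-BFGS-B.
import numpy as np
from scipy.optimize import minimize

nx, nt, dx, dt, c = 60, 400, 1.0, 0.25, 2.0   # grid, time steps, wave speed

def solve_wave(gamma):
    """Explicit FD solve of u_tt = c^2 (gamma * u_x)_x with a pulse source."""
    u_prev, u_curr = np.zeros(nx), np.zeros(nx)
    trace = np.zeros(nt)                        # receiver near the right end
    g_half = 0.5 * (gamma[1:] + gamma[:-1])     # gamma at cell faces
    for it in range(nt):
        flux = g_half * (u_curr[1:] - u_curr[:-1]) / dx
        div = np.zeros(nx)
        div[1:-1] = (flux[1:] - flux[:-1]) / dx
        u_next = 2 * u_curr - u_prev + dt**2 * c**2 * div
        u_next[5] += dt**2 * np.exp(-((it * dt - 10.0) / 2.0) ** 2)  # source
        u_prev, u_curr = u_curr, u_next
        trace[it] = u_curr[-5]
    return trace

gamma_true = np.ones(nx); gamma_true[30:35] = 1e-3   # the void
d_obs = solve_wave(gamma_true)                       # "measured" data

def misfit(gamma):                                   # least-squares data misfit
    return 0.5 * np.sum((solve_wave(gamma) - d_obs) ** 2)

res = minimize(misfit, np.ones(nx), method="L-BFGS-B",
               bounds=[(1e-3, 1.0)] * nx, options={"maxiter": 50})
print("reconstructed gamma around the void:", res.x[28:37].round(2))
```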

Related Content

Estimating the expectations of functionals applied to sums of random variables (RVs) is a well-known problem encountered in many challenging applications. Generally, closed-form expressions of these quantities are out of reach. A naive Monte Carlo simulation is an alternative approach. However, this method requires numerous samples for rare-event problems. Therefore, it is paramount to use variance reduction techniques to develop fast and efficient estimation methods. In this work, we use importance sampling (IS), known for achieving a given accuracy requirement with fewer computations. We propose a state-dependent IS scheme based on a stochastic optimal control formulation, where the control depends on state and time. We aim to calculate rare-event quantities that can be written as an expectation of a functional of the sums of independent RVs. The proposed algorithm is generic and can be applied without restrictions on the univariate distributions of the RVs or the functional applied to the sum. We apply this approach to the log-normal distribution to compute the left tail and cumulative distribution of the ratio of independent RVs. For each case, we numerically demonstrate that the proposed state-dependent IS algorithm compares favorably to most well-known estimators dealing with similar problems.
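
The following minimal sketch illustrates the plain, static importance-sampling ingredient for the left tail of a sum of log-normals: the underlying Gaussians are mean-shifted toward the rare region and the estimate is reweighted by the likelihood ratio. The paper's scheme is more sophisticated, with a state- and time-dependent control; the shift value and problem sizes below are arbitrary assumptions.

```python
# Static importance sampling for P(X1+...+Xd < t), Xi i.i.d. log-normal.
import numpy as np

rng = np.random.default_rng(0)
d, t, n = 4, 0.5, 100_000          # dimension, threshold, sample size
mu, sigma = 0.0, 1.0               # parameters of each log-normal
shift = -2.5                       # pushes samples toward the left tail

Z = rng.normal(mu + shift, sigma, size=(n, d))      # proposal draws
S = np.exp(Z).sum(axis=1)                           # sums of log-normals
# Likelihood ratio of N(mu, sigma^2) vs N(mu+shift, sigma^2), per coordinate
logw = ((Z - mu - shift) ** 2 - (Z - mu) ** 2).sum(axis=1) / (2 * sigma**2)
vals = (S < t) * np.exp(logw)
print(f"IS estimate of P(S < {t}): {vals.mean():.3e} "
      f"+/- {vals.std() / np.sqrt(n):.1e}")

# Crude Monte Carlo for comparison; typically sees few or no hits here
X = np.exp(rng.normal(mu, sigma, size=(n, d))).sum(axis=1)
print(f"crude MC estimate: {np.mean(X < t):.3e}")
```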

We deal with a long-standing problem: how to design an energy-stable numerical scheme for solving the motion of a closed curve under anisotropic surface diffusion with a general anisotropic surface energy $\gamma(\boldsymbol{n})$ in two dimensions, where $\boldsymbol{n}$ is the outward unit normal vector. By introducing a novel symmetric positive definite surface energy matrix $Z_k(\boldsymbol{n})$ depending on the Cahn-Hoffman $\boldsymbol{\xi}$-vector and a stabilizing function $k(\boldsymbol{n})$, we first reformulate the anisotropic surface diffusion into a conservative form and then derive a new symmetrized variational formulation for anisotropic surface diffusion with weakly or strongly anisotropic surface energies. A semi-discretization in space of the symmetrized variational formulation is proposed, and its area (or mass) conservation and energy dissipation are proved. The semi-discretization is then discretized in time by either an implicit structure-preserving scheme (SP-PFEM), which preserves the area at the discrete level, or a semi-implicit energy-stable method (ES-PFEM), which requires only the solution of a linear system at each time step. Under a relatively simple and mild condition on $\gamma(\boldsymbol{n})$, we show that both SP-PFEM and ES-PFEM are unconditionally energy-stable for almost all anisotropic surface energies $\gamma(\boldsymbol{n})$ arising in practical applications. Specifically, for several commonly used anisotropic surface energies, we construct $Z_k(\boldsymbol{n})$ explicitly. Finally, extensive numerical results are reported to demonstrate the high performance of the proposed numerical schemes.
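
For context, two standard ingredients behind the abstract are stated below; the construction of the matrix $Z_k(\boldsymbol{n})$ itself is the paper's contribution and is not reproduced here.

```latex
% With \hat\gamma the one-homogeneous extension of \gamma to the plane,
% the Cahn-Hoffman \xi-vector is its gradient evaluated at the normal:
\hat\gamma(\boldsymbol{p}) = |\boldsymbol{p}|\,
    \gamma\!\left(\frac{\boldsymbol{p}}{|\boldsymbol{p}|}\right),
\qquad
\boldsymbol{\xi} = \nabla\hat\gamma(\boldsymbol{p})\big|_{\boldsymbol{p}=\boldsymbol{n}}.
% A commonly used k-fold anisotropy, with \theta the orientation angle
% of the normal \boldsymbol{n}, is
\gamma(\boldsymbol{n}) = 1 + \beta\cos(k\theta),
% which is weakly anisotropic when |\beta| < 1/(k^2 - 1), so that the
% stiffness \gamma + \gamma'' stays positive, and strongly anisotropic
% otherwise.
```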

An integrated experimental, computational, and non-deterministic approach is demonstrated to predict the damage tolerance of an aluminum plate reinforced with a co-cured bonded quasi-isotropic E-glass/epoxy composite overlay and to determine the most sensitive material parameters and their ranges of influence on the damage tolerance of the hybrid system. To simulate the complex progressive damage in the repaired structure, a high fidelity three-dimensional finite element model is developed and validated using four-point bend testing to investigate potential damage mechanisms. A surrogate model is then generated to explore the complex parameter space of this model. Global sensitivity analysis and uncertainty quantification are performed for non-deterministic analysis to characterize the energy absorption capability of the patched structure relative to these influential design properties. Additionally, correlating the data quality of the material parameters with the sensitivity analysis results provides practical guidelines for model improvement and the design optimization of the patched structure.
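
As an illustration of the global-sensitivity step, the sketch below runs a Sobol analysis with the SALib package (an assumed tool choice; the paper does not name its software) on a cheap analytic function standing in for the trained finite-element surrogate of energy absorption; parameter names and ranges are made up.

```python
# Sobol sensitivity indices for a toy surrogate via SALib.
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {  # illustrative material parameters and ranges, not the paper's
    "num_vars": 3,
    "names": ["adhesive_strength", "fiber_modulus", "matrix_toughness"],
    "bounds": [[20.0, 40.0], [60.0, 90.0], [0.5, 2.0]],
}

def surrogate(x):
    """Toy stand-in for the trained surrogate of absorbed energy."""
    s, e, g = x
    return 0.05 * s * g + 0.002 * e**2 + 1.5 * np.sin(g)

X = saltelli.sample(problem, 1024)          # Saltelli sampling design
Y = np.apply_along_axis(surrogate, 1, X)    # evaluate the surrogate
Si = sobol.analyze(problem, Y)              # first-order and total indices
for name, s1, st in zip(problem["names"], Si["S1"], Si["ST"]):
    print(f"{name:18s}  S1={s1:6.3f}  ST={st:6.3f}")
```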

Let $L$ be a finite lattice and $\mathcal{E}(L)$ be the set of join endomorphisms of $L$. We consider the following problem: given $L$ and $f,g \in \mathcal{E}(L)$, find the greatest lower bound $f \sqcap_{\mathcal{E}(L)} g$ in the lattice $\mathcal{E}(L)$. (1) We show that if $L$ is distributive, the problem can be solved in time $O(n)$, where $n=| L |$; the previous upper bound was $O(n^2)$. (2) We provide new algorithms for arbitrary lattices and give experimental evidence that they are significantly faster than the existing algorithm. (3) We characterize the standard notion of distributed knowledge of a group as the greatest lower bound of the join-endomorphisms representing the knowledge of each member of the group. (4) We show that whether an agent has the distributed knowledge of two other agents can be decided in time $O(n^2)$, where $n$ is the size of the underlying set of states. (5) For the special case of $S5$ knowledge, we show that it can be decided in time $O(n\alpha_{n})$, where $\alpha_{n}$ is the inverse of the Ackermann function.
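
For the distributive special case $L = \mathcal{P}(S)$, a join-endomorphism corresponds to a binary relation on $S$ via images, and the meet $f \sqcap g$ corresponds to intersecting the relations, which is exactly how distributed knowledge combines the agents' accessibility relations. A small sketch (states and relations are illustrative):

```python
# Join-endomorphisms on a powerset lattice as relations; meet = intersection.
from itertools import chain, combinations

states = {0, 1, 2}
R_alice = {(0, 0), (0, 1), (1, 1), (2, 2)}   # agent accessibility relations
R_bob   = {(0, 0), (0, 2), (1, 1), (2, 2)}

def endo(R):
    """Join-endomorphism on powerset(states) induced by relation R."""
    return lambda A: frozenset(t for (s, t) in R if s in A)

def powerset(S):
    return chain.from_iterable(combinations(S, r) for r in range(len(S) + 1))

f, g = endo(R_alice), endo(R_bob)
meet = endo(R_alice & R_bob)                 # meet = relation intersection
for A in map(frozenset, powerset(states)):   # sanity check: pointwise bound
    assert meet(A) <= f(A) & g(A)
print("(f meet g) on {0}:", set(meet(frozenset({0}))))
```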

Machine learning (ML) models are costly to train, as they can require a significant amount of data, computational resources and technical expertise. Thus, they constitute valuable intellectual property that needs protection from adversaries wanting to steal them. Ownership verification techniques allow the victims of model stealing attacks to demonstrate that a suspect model was in fact stolen from theirs. Although a number of ownership verification techniques based on watermarking or fingerprinting have been proposed, most of them fall short either in terms of security guarantees (well-equipped adversaries can evade verification) or computational cost. A fingerprinting technique introduced at ICLR '21, Dataset Inference (DI), has been shown to offer better robustness and efficiency than prior methods. The authors of DI provided a correctness proof for linear (suspect) models. However, in the same setting, we prove that DI suffers from high false positives (FPs): it can incorrectly identify an independent model trained with non-overlapping data from the same distribution as stolen. We further prove that DI also triggers FPs in realistic, non-linear suspect models, and we confirm empirically, with high confidence, that DI leads to FPs. Next, we show that DI also suffers from false negatives (FNs): an adversary can fool DI by regularising a stolen model's decision boundaries using adversarial training, thereby inducing an FN. To this end, we demonstrate that DI fails to identify a model adversarially trained from a stolen dataset -- the setting in which DI is the hardest to evade. Finally, we discuss the implications of our findings and the viability of fingerprinting-based ownership verification in general, and suggest directions for future work.
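
The sketch below illustrates only the evasion ingredient described above: fine-tuning a (stand-in) stolen model with adversarial training to regularise its decision boundaries. FGSM is used for brevity; the exact adversarial-training recipe, model, and data in the paper differ, and everything here is a placeholder.

```python
# Adversarial training (FGSM) of a toy model on stand-in "stolen" data.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.SGD(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()
eps = 0.1                                   # perturbation budget

X = torch.randn(256, 20)                    # stand-in for the stolen dataset
y = (X[:, 0] > 0).long()

for epoch in range(20):
    X_adv = X.clone().requires_grad_(True)
    loss_fn(model(X_adv), y).backward()     # FGSM: one signed-gradient step
    with torch.no_grad():
        X_pert = X + eps * X_adv.grad.sign()
    opt.zero_grad()
    loss = loss_fn(model(X_pert), y)        # train on perturbed inputs
    loss.backward()
    opt.step()
print(f"final adversarial-training loss: {loss.item():.3f}")
```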

This paper presents two results concerning uniform confidence intervals for the tail index and the extreme quantile. First, we show that it is impossible to construct a length-optimal confidence interval satisfying the correct uniform coverage over a local non-parametric family of tail distributions. Second, in light of the impossibility result, we construct honest confidence intervals that are uniformly valid by incorporating the worst-case bias in the local non-parametric family. The proposed method is applied to simulated data and a real data set of National Vital Statistics from the National Center for Health Statistics.
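
For reference, the sketch below computes the classical Hill estimator of the tail index together with its naive asymptotic confidence interval; the paper's honest interval additionally widens this by the worst-case bias over the local non-parametric family, which is not reproduced here. Data and the choice of $k$ are illustrative.

```python
# Hill estimator of the tail index with a naive (bias-free) asymptotic CI.
import numpy as np

rng = np.random.default_rng(1)
n, k = 5000, 200                         # sample size, number of top order stats
X = rng.pareto(2.0, size=n) + 1.0        # Pareto tail with true index alpha = 2

Xs = np.sort(X)
logs = np.log(Xs[-k:]) - np.log(Xs[-k - 1])   # log-excesses over X_(n-k)
gamma_hat = logs.mean()                  # Hill estimate of gamma = 1/alpha
se = gamma_hat / np.sqrt(k)              # asymptotic std. error, no bias term
lo, hi = gamma_hat - 1.96 * se, gamma_hat + 1.96 * se
print(f"tail index alpha: {1 / gamma_hat:.2f}, "
      f"naive 95% CI for gamma: ({lo:.3f}, {hi:.3f})")
```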

We build a general framework which establishes a one-to-one correspondence between the species abundance distribution (SAD) and the species accumulation curve (SAC). The appearance rates of the species and the appearance times of individuals of each species are modeled as Poisson processes. The number of species can be finite or infinite. Hill numbers are extended to the framework. We introduce a linear derivative ratio family of models, $\mathrm{LDR}_1$, in which the ratio of the first and second derivatives of the expected SAC is a linear function. A D1/D2 plot is proposed to detect this linear pattern in the data. By extrapolation of the curve in the D1/D2 plot, a species richness estimator that extends the Chao1 estimator is introduced. The SAD of $\mathrm{LDR}_1$ is Engen's extended negative binomial distribution, and the SAC encompasses several popular parametric forms, including the power law. Family $\mathrm{LDR}_1$ is extended in two ways: $\mathrm{LDR}_2$, which allows species with zero detection probability, and $\mathrm{RDR}_1$, where the derivative ratio is a rational function. Real data are analyzed to demonstrate the proposed methods. We also consider the scenario where we record only a few leading appearance times of each species. We show how maximum likelihood inference can be performed when only the empirical SAC is observed, and elucidate its advantages over the traditional curve-fitting method.
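
For orientation, the classical Chao1 estimator that the $\mathrm{LDR}_1$ richness estimator extends can be computed in a few lines; the abundance vector below is made up.

```python
# Chao1: S_hat = S_obs + f1^2 / (2 * f2), with f1, f2 the numbers of species
# observed exactly once and twice.
import numpy as np

abundances = np.array([12, 7, 5, 3, 2, 2, 1, 1, 1, 1])  # per observed species
s_obs = len(abundances)
f1 = np.sum(abundances == 1)          # singletons
f2 = np.sum(abundances == 2)          # doubletons
# Standard fallback f1*(f1-1)/2 when no doubletons are observed
chao1 = s_obs + f1**2 / (2 * f2) if f2 > 0 else s_obs + f1 * (f1 - 1) / 2
print(f"S_obs = {s_obs}, Chao1 richness estimate = {chao1:.1f}")
```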

In distributed or federated optimization and learning, communication between the different computing units is often the bottleneck, and gradient compression is widely used to reduce the number of bits sent within each communication round of iterative methods. There are two classes of compression operators and separate algorithms making use of them. In the case of unbiased random compressors with bounded variance (e.g., rand-k), the DIANA algorithm of Mishchenko et al. (2019), which implements a variance reduction technique for handling the variance introduced by compression, is the current state of the art. In the case of biased and contractive compressors (e.g., top-k), the EF21 algorithm of Richt\'arik et al. (2021), which instead implements an error-feedback mechanism, is the current state of the art. These two classes of compression schemes and algorithms are distinct, with different analyses and proof techniques. In this paper, we unify them into a single framework and propose a new algorithm, recovering DIANA and EF21 as particular cases. Our general approach works with a new, larger class of compressors, which has two parameters, the bias and the variance, and includes unbiased and biased compressors as particular cases. This allows us to inherit the best of both worlds: like EF21 and unlike DIANA, it supports biased compressors such as top-k, whose good performance in practice is well recognized; and like DIANA and unlike EF21, independent randomness at the compressors makes it possible to mitigate the effects of compression, with the convergence rate improving when the number of parallel workers is large. This is the first time that an algorithm with all these features has been proposed. We prove its linear convergence under certain conditions. Our approach takes a step towards a better understanding of two so-far distinct worlds of communication-efficient distributed learning.
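
The two canonical compressors mentioned above are easy to state concretely: rand-k is unbiased after rescaling by $d/k$, while top-k is biased but contractive. A quick numerical sanity check, with arbitrary sizes:

```python
# rand-k (unbiased, random) and top-k (biased, contractive) compressors.
import numpy as np

rng = np.random.default_rng(0)

def rand_k(x, k):
    """Keep k random coordinates, rescale by d/k so that E[C(x)] = x."""
    d = x.size
    idx = rng.choice(d, size=k, replace=False)
    out = np.zeros_like(x)
    out[idx] = x[idx] * (d / k)
    return out

def top_k(x, k):
    """Keep the k largest-magnitude coordinates (deterministic, biased)."""
    out = np.zeros_like(x)
    idx = np.argpartition(np.abs(x), -k)[-k:]
    out[idx] = x[idx]
    return out

x = rng.normal(size=10)
avg = np.mean([rand_k(x, 3) for _ in range(20000)], axis=0)
print("rand-k unbiased? max deviation:", np.abs(avg - x).max().round(3))
print("top-k contraction factor:",
      (np.linalg.norm(top_k(x, 3) - x) / np.linalg.norm(x)).round(3))
```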

Given well-shuffled data, can we determine whether the data items are statistically (in)dependent? Formally, we consider the problem of testing whether a set of exchangeable random variables is independent. We show that this is possible and develop tests that can confidently reject the null hypothesis that the data is independent and identically distributed, and that have high power for (some) exchangeable distributions. We make no structural assumptions on the underlying sample space. One potential application is in Deep Learning, where data is often scraped from the whole internet with abundant duplications, which can render the data non-iid and make test-set evaluation prone to giving wrong answers.

Pre-trained deep neural network language models such as ELMo, GPT, BERT and XLNet have recently achieved state-of-the-art performance on a variety of language understanding tasks. However, their size makes them impractical for a number of scenarios, especially on mobile and edge devices. In particular, the input word embedding matrix accounts for a significant proportion of the model's memory footprint, due to the large input vocabulary and embedding dimensions. Knowledge distillation techniques have had success at compressing large neural network models, but they are ineffective at yielding student models with vocabularies different from those of the original teacher models. We introduce a novel knowledge distillation technique for training a student model with a significantly smaller vocabulary as well as lower embedding and hidden state dimensions. Specifically, we employ a dual-training mechanism that trains the teacher and student models simultaneously to obtain optimal word embeddings for the student vocabulary. We combine this approach with learning shared projection matrices that transfer layer-wise knowledge from the teacher model to the student model. Our method is able to compress the BERT_BASE model by more than 60x, with only a minor drop in downstream task metrics, resulting in a language model with a footprint of under 7MB. Experimental results also demonstrate higher compression efficiency and accuracy when compared with other state-of-the-art compression techniques.
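
A minimal sketch of the shared-projection idea: the student's smaller hidden states are mapped through a trainable projection into the teacher's dimension and matched layer-wise with an MSE loss, alongside a standard soft-label KL term. The single-layer "models" and all dimensions below are placeholders, not the paper's architecture.

```python
# Layer-wise distillation through a trainable up-projection (toy setup).
import torch
import torch.nn as nn
import torch.nn.functional as F

d_t, d_s, vocab_s = 768, 192, 5000             # teacher/student dims (assumed)
teacher = nn.Linear(d_t, d_t)                  # stand-ins for transformer layers
student = nn.Linear(d_s, d_s)
proj = nn.Linear(d_s, d_t, bias=False)         # shared up-projection
head_t = nn.Linear(d_t, vocab_s)
head_s = nn.Linear(d_s, vocab_s)

x_t = torch.randn(8, d_t)                      # teacher-side input features
x_s = torch.randn(8, d_s)                      # student-side input features
h_t, h_s = teacher(x_t), student(x_s)

layer_loss = F.mse_loss(proj(h_s), h_t.detach())   # layer-wise transfer
T = 2.0                                            # distillation temperature
kl_loss = F.kl_div(F.log_softmax(head_s(h_s) / T, dim=-1),
                   F.softmax(head_t(h_t).detach() / T, dim=-1),
                   reduction="batchmean") * T * T
loss = layer_loss + kl_loss
loss.backward()
print(f"layer loss {layer_loss.item():.3f}, KL loss {kl_loss.item():.3f}")
```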
