
We propose and analyze a seamless extended Discontinuous Galerkin (DG) discretization of advection-diffusion equations on semi-infinite domains. The semi-infinite half line is split into a finite subdomain, where the model uses a standard polynomial basis, and a semi-unbounded subdomain, where scaled Laguerre functions are employed as basis and test functions. Numerical fluxes enable the coupling at the interface between the two subdomains in the same way as standard inter-element fluxes in single-domain DG. A novel linear analysis of the extended DG model yields unconditional stability with respect to the Péclet number. Errors due to the use of different sets of basis functions on different portions of the domain are negligible, as highlighted in numerical experiments with the linear advection-diffusion and viscous Burgers' equations. With an added damping term on the semi-infinite subdomain, the extended framework efficiently simulates absorbing boundary conditions without requiring additional conditions at the interface. A few modes in the semi-infinite subdomain suffice to handle outgoing single-wave and wave-train signals more accurately than standard approaches at a given computational cost, making the model appealing for fluid flow simulations in unbounded regions.
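As a rough illustration of the ingredient that distinguishes the semi-unbounded subdomain, the sketch below evaluates scaled Laguerre functions of the form phi_n(x) = L_n(beta*x) * exp(-beta*x/2); the scaling parameter beta and the normalization are our assumptions for illustration, not the paper's notation.

```python
import numpy as np
from scipy.special import eval_laguerre

def scaled_laguerre(n, x, beta=1.0):
    """Scaled Laguerre function phi_n(x) = L_n(beta*x) * exp(-beta*x/2).

    These decay as x -> infinity, which makes them a natural basis/test
    set on a semi-infinite subdomain [x0, inf); beta tunes the decay rate.
    """
    return eval_laguerre(n, beta * x) * np.exp(-beta * x / 2.0)

# Sanity check: orthogonality on [0, inf) (simple Riemann sum).
x, dx = np.linspace(0.0, 80.0, 200001, retstep=True)
phi2, phi3 = scaled_laguerre(2, x), scaled_laguerre(3, x)
print(dx * np.sum(phi2 * phi2))  # ~ 1/beta = 1
print(dx * np.sum(phi2 * phi3))  # ~ 0
```

The built-in decay of these functions is what lets the semi-infinite subdomain be handled with a handful of modes instead of a large truncated mesh.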

Related content

Soft robots are made of compliant and deformable materials and can perform tasks that are challenging for conventional rigid robots. The inherent compliance of soft robots makes them more suitable and adaptable for interactions with humans and the environment. However, this advantage comes at a cost: their continuum nature makes it challenging to develop robust model-based control strategies. Specifically, an adaptive control approach addressing this challenge has not yet been applied to physical soft robotic arms. This work presents a reformulation of the dynamics of a soft continuum manipulator using the Euler-Lagrange method. The proposed model eliminates a simplifying assumption made in previous works and provides a more accurate description of the robot's inertia. Based on our model, we introduce a task-space adaptive control scheme. This controller is robust against model parameter uncertainties and unknown input disturbances. The controller is implemented on a physical soft continuum arm. A series of experiments was carried out to validate its effectiveness in task-space trajectory tracking under different payloads. The controller outperforms the state-of-the-art method in terms of both accuracy and robustness. Moreover, the proposed model-based control design is flexible and can be generalized to any continuum robotic arm with an arbitrary number of continuum segments.
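To make the control structure concrete, here is a minimal Slotine-Li style adaptive tracking sketch on a hypothetical 1-DoF plant. It illustrates the regressor/adaptation-law pattern behind Euler-Lagrange-based adaptive control; it is not the authors' soft-arm controller, and all gains and plant parameters are invented.

```python
import numpy as np

# Hypothetical 1-DoF plant m*qdd + d*qd = u with unknown theta = [m, d].
m_true, d_true = 2.0, 0.5
theta_hat = np.zeros(2)             # estimate of [m, d]
Gamma = np.diag([5.0, 5.0])         # adaptation gains
lam, K = 4.0, 8.0                   # sliding-surface and feedback gains

dt, q, qd = 1e-3, 0.0, 0.0
for k in range(20000):
    t = k * dt
    qr, qrd, qrdd = np.sin(t), np.cos(t), -np.sin(t)   # reference traj.
    e, ed = q - qr, qd - qrd
    s = ed + lam * e                                   # sliding variable
    v, vd = qrd - lam * e, qrdd - lam * ed             # reference vel./acc.
    Y = np.array([vd, v])                              # regressor: Y@theta = m*vd + d*v
    u = Y @ theta_hat - K * s                          # control law
    theta_hat = theta_hat - dt * (Gamma @ Y) * s       # adaptation law
    qdd = (u - d_true * qd) / m_true                   # true plant
    q, qd = q + dt * qd, qd + dt * qdd                 # Euler step

print("final tracking error:", abs(q - np.sin(20000 * dt)))
```

The key design choice is that the unknown parameters enter the control law linearly through the regressor Y, so tracking can converge without ever identifying the true parameters exactly.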

Dense vertex-to-vertex correspondence between 3D faces is a fundamental and challenging issue for 3D and 2D face analysis. While sparse landmarks have anatomically grounded correspondences, the dense vertex correspondences on most facial regions are unknown. Consequently, the current literature commonly arrives at reasonable but diverse solutions, which deviate from the optimum of the 3D face dense registration problem. In this paper, we revisit dense registration through a dimension-reduced problem, i.e., proportional segmentation of a line, and employ an iterative dividing-and-diffusing method to reach the final solution uniquely. This method is then extended to 3D surfaces by formulating a local registration problem for dividing and a linear least-squares problem for diffusing, with constraints on fixed features. On this basis, we further propose a multi-resolution algorithm to accelerate the computation. The proposed method is linked to a novel local scaling metric, whose physical meaning we illustrate as a smooth rearrangement of the local cells of 3D facial shapes. Extensive experiments on public datasets demonstrate the effectiveness of the proposed method in various aspects. In general, the proposed method leads to coherent local registrations and elegant mesh-grid routines for fine-grained 3D face dense registration, which benefits many downstream applications significantly. It can also be applied to dense correspondence for other data formats, not limited to faces. The core code will be publicly available at //github.com/NaughtyZZ/3D_face_dense_registration.
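The dimension-reduced analogy can be sketched directly. The toy below shows the diffusing half of the iteration on a line with fixed endpoints playing the role of constrained features; the dividing step and the full 3D surface formulation are beyond this sketch.

```python
import numpy as np

# 1D analogue: given fixed endpoints (the constrained "features"),
# repeatedly move each interior point toward the average of its two
# neighbours. The iteration converges to the unique uniform (here,
# proportional) segmentation of the line, regardless of the start.
rng = np.random.default_rng(0)
x = np.sort(rng.random(9))                 # 9 interior points, arbitrary
x = np.concatenate(([0.0], x, [1.0]))      # endpoints stay fixed

for _ in range(2000):                      # diffusing iterations
    x[1:-1] = 0.5 * (x[:-2] + x[2:])

print(np.round(x, 4))  # ~ [0.0, 0.1, 0.2, ..., 1.0]
```

Uniqueness in this toy comes from the fact that the diffusion is a contraction toward the single harmonic (linear) configuration, mirroring the paper's claim that dividing-and-diffusing reaches the final solution uniquely.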

We consider the Sparse Principal Component Analysis (SPCA) problem under the well-known spiked covariance model. Recent work has shown that the SPCA problem can be reformulated as a Mixed Integer Program (MIP) and solved to global optimality, leading to estimators known to enjoy optimal statistical properties. However, current MIP algorithms for SPCA are unable to scale beyond instances with a thousand features or so. In this paper, we propose a new estimator for SPCA which can be formulated as a MIP. Unlike earlier work, we make use of the underlying spiked covariance model and properties of the multivariate Gaussian distribution to arrive at our estimator. We establish statistical guarantees for the proposed estimator in terms of estimation error and support recovery. We propose a custom algorithm to solve the MIP that is significantly more scalable than off-the-shelf solvers, and we demonstrate that our approach can be much more computationally attractive than earlier exact MIP-based approaches for the SPCA problem. Our numerical experiments on synthetic and real datasets show that our algorithms can address problems with up to 20,000 features in minutes, and generally yield favorable statistical properties compared to existing popular approaches for SPCA.
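For intuition about the estimation target, the sketch below runs truncated power iteration on a synthetic spiked covariance matrix. This is a well-known heuristic shown only to illustrate the k-sparse leading-eigenvector problem; it is not the paper's MIP estimator or custom algorithm.

```python
import numpy as np

def truncated_power_spca(Sigma, k, iters=200, seed=0):
    """Truncated power iteration: after each power step, keep only the
    k largest-magnitude coordinates, enforcing a k-sparse estimate of
    the leading eigenvector. A classic heuristic, shown for intuition."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(Sigma.shape[0])
    v /= np.linalg.norm(v)
    for _ in range(iters):
        v = Sigma @ v
        v[np.argsort(np.abs(v))[:-k]] = 0.0   # zero all but top-k entries
        v /= np.linalg.norm(v)
    return v

# Spiked covariance model: Sigma = beta * u u^T + I with a k-sparse u.
p, k = 200, 10
u = np.zeros(p); u[:k] = 1.0 / np.sqrt(k)
Sigma = 5.0 * np.outer(u, u) + np.eye(p)
v = truncated_power_spca(Sigma, k)
print("support recovered:", set(np.flatnonzero(v)) == set(range(k)))
```

The MIP approach replaces this heuristic support search with an exact, globally optimal one, which is where the scalability challenge addressed by the paper arises.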

We present a novel algorithm that provides detailed insight into the effects of sparsity in linear and nonlinear optimization, which is of great importance in many scientific areas such as image and signal processing, medical imaging, compressed sensing, and machine learning (e.g., the training of neural networks). Sparsity is an important feature for ensuring robustness against noisy data, but also for finding models that are interpretable and easy to analyze due to the small number of relevant terms. It is common practice to enforce sparsity by adding the $\ell_1$-norm as a weighted penalty term. In order to gain a better understanding and to allow for an informed model selection, we directly solve the corresponding multiobjective optimization problem (MOP) that arises when minimizing the main objective and the $\ell_1$-norm simultaneously. As this MOP is in general non-convex for nonlinear objectives, the weighting method will fail to provide all optimal compromises. To avoid this issue, we present a continuation method specifically tailored to MOPs with two objective functions, one of which is the $\ell_1$-norm. Our method can be seen as a generalization of well-known homotopy methods for linear regression problems to the nonlinear case. Several numerical examples, including neural network training, demonstrate our theoretical findings and the additional insight that can be gained by this multiobjective approach.
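The convex base case of such a continuation is the classical lasso homotopy. The sketch below traces an approximate regularization path with warm-started proximal gradient (ISTA) over a decreasing lambda grid; the least-squares objective is only a stand-in, since the paper's method also handles non-convex objectives where this simple scan would miss Pareto points.

```python
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def prox_grad_path(A, b, lambdas, iters=500):
    """Warm-started ISTA along a decreasing lambda path for
    min 0.5*||Ax - b||^2 + lambda*||x||_1 (convex stand-in)."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of grad f
    x, path = np.zeros(A.shape[1]), []
    for lam in lambdas:                    # continuation: reuse previous x
        for _ in range(iters):
            x = soft_threshold(x - (A.T @ (A @ x - b)) / L, lam / L)
        path.append(x.copy())
    return path

rng = np.random.default_rng(1)
A, b = rng.standard_normal((50, 20)), rng.standard_normal(50)
lams = np.geomspace(np.max(np.abs(A.T @ b)), 1e-3, 25)
path = prox_grad_path(A, b, lams)
print([int(np.count_nonzero(x)) for x in path])  # sparsity along the path
```

Each point on the path is a compromise between data fit and sparsity; the multiobjective view treats the whole path as the object of interest rather than a single tuned solution.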

Analyzing the effect of the business cycle on rating transitions has been a subject of great interest over the last fifteen years, particularly due to increasing pressure from regulators for stress testing. In this paper, we consider that the dynamics of rating migrations is governed by an unobserved latent factor. Under a point-process filtering framework, we explain how the current state of the hidden factor can be efficiently inferred from observations of rating histories. We then adapt the classical Baum-Welch algorithm to our setting and show how to estimate the latent factor parameters. Once calibrated, the model can reveal and detect, in real time, economic changes affecting the dynamics of rating migration. To this end, we adapt a filtering formula which can then be used to predict future transition probabilities according to economic regimes without using any external covariates. We propose two filtering frameworks: a discrete and a continuous version. We demonstrate and compare the efficiency of both approaches on synthetic data and on a corporate credit rating database. The methods could also be applied to retail credit loans.
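A minimal sketch of the discrete filtering recursion, with hypothetical regime and rating transition matrices chosen only to illustrate the mechanics: the hidden economic regime is propagated with its own Markov kernel, updated with the likelihood of each observed migration, and the filtered distribution yields regime-mixed predictions of future transition probabilities.

```python
import numpy as np

Q = np.array([[0.95, 0.05],          # hypothetical regime transition matrix
              [0.10, 0.90]])
P = np.array([                        # rating transition matrix per regime
    [[0.90, 0.08, 0.02],              # regime 0: benign economy
     [0.05, 0.90, 0.05],
     [0.00, 0.05, 0.95]],
    [[0.75, 0.20, 0.05],              # regime 1: stressed economy
     [0.03, 0.80, 0.17],
     [0.00, 0.02, 0.98]],
])

ratings = [0, 0, 1, 1, 2, 2, 2]       # observed rating path of one firm
pi = np.array([0.5, 0.5])             # prior over hidden regimes
for prev, cur in zip(ratings, ratings[1:]):
    pred = pi @ Q                     # predict step
    pi = pred * P[:, prev, cur]       # update with the observed migration
    pi /= pi.sum()
    print(np.round(pi, 3))            # filtered P(regime | history)

# Regime-mixed prediction of the next transition probabilities:
print((pi @ Q) @ P[:, ratings[-1], :])
```

No external covariates enter the recursion; all information about the economic state is extracted from the rating histories themselves, which is the point of the filtering approach.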

Machine translation systems are vulnerable to domain mismatch, especially in low-resource scenarios. Out-of-domain translations are often of poor quality and prone to hallucinations, due to exposure bias and the decoder acting as a language model. We adopt two approaches to alleviate this problem: lexical shortlisting restricted by IBM statistical alignments, and hypothesis re-ranking based on similarity. The methods are computationally cheap and widely known, but not extensively explored for domain adaptation. We demonstrate success on low-resource out-of-domain test sets; however, the methods are ineffective when there is sufficient in-domain data or when the domain mismatch is too great. This is due both to the IBM model losing its advantage over the implicitly learned neural alignment and to issues with the subword segmentation of out-of-domain words.
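A toy sketch of the shortlisting step, assuming an alignment dictionary has already been extracted (the dictionary, vocabulary, and sentence here are invented): target-vocabulary logits outside the shortlist induced by the source tokens are masked before the softmax, so the decoder cannot hallucinate unrelated words.

```python
import numpy as np

align = {                       # source token -> plausible target tokens
    "Katze": {"cat", "feline"},
    "sitzt": {"sits", "sitting"},
}
vocab = ["cat", "dog", "feline", "sits", "sitting", "runs", "</s>"]
always_allowed = {"</s>"}       # structural tokens stay in the shortlist

source = ["Katze", "sitzt"]
shortlist = always_allowed.union(*(align.get(w, set()) for w in source))
mask = np.array([tok in shortlist for tok in vocab])

rng = np.random.default_rng(0)
logits = rng.standard_normal(len(vocab))   # decoder logits at one step
logits[~mask] = -np.inf                    # shortlisting: forbid the rest
probs = np.exp(logits - logits[mask].max())
probs /= probs.sum()
print(dict(zip(vocab, np.round(probs, 3))))
```

The failure mode the paper identifies is visible here: if subword segmentation splits an out-of-domain word into pieces absent from the alignment dictionary, the shortlist wrongly excludes the correct translation.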

We study spectral graph convolutional neural networks (GCNNs), where filters are defined as continuous functions of the graph shift operator (GSO) through functional calculus. A spectral GCNN is not tailored to one specific graph and can be transferred between different graphs. It is hence important to study GCNN transferability: the capacity of the network to have approximately the same effect on different graphs that represent the same phenomenon. Transferability ensures that GCNNs trained on certain graphs generalize if the graphs in the test set represent the same phenomena as the graphs in the training set. In this paper, we consider a model of transferability based on graphon analysis. Graphons are limit objects of graphs, and, in this paradigm, two graphs represent the same phenomenon if both approximate the same graphon. Our main contributions can be summarized as follows: 1) we prove that any fixed GCNN with continuous filters is transferable between graphs that approximate the same graphon, 2) we prove transferability for graphs that approximate unbounded graphon shift operators, which are defined in this paper, and 3) we obtain non-asymptotic approximation results, proving linear stability of GCNNs. This extends the current state-of-the-art results, which show asymptotic transferability for polynomial filters under graphs that approximate bounded graphons.
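The sketch below applies a filter defined via functional calculus, g(GSO), by diagonalizing a symmetric shift operator, and evaluates the same continuous filter on discretizations of a min(u, v)-type graphon at two sizes; the specific graphon, filter, and signal are our toy choices, not the paper's constructions.

```python
import numpy as np

def spectral_filter(GSO, g, x):
    """Apply g(GSO) to a graph signal x via functional calculus:
    diagonalize the symmetric shift operator, map g over its spectrum."""
    lam, U = np.linalg.eigh(GSO)
    return U @ (g(lam) * (U.T @ x))

g = lambda t: np.exp(-5.0 * t)           # a continuous spectral response
for n in (50, 500):
    i = np.arange(n)
    A = np.minimum.outer(i, i) / n**2    # discretized min(u, v) graphon
    y = spectral_filter(A, g, np.ones(n))
    print(n, float(y.mean()))            # outputs agree across sizes
```

Because g is defined on the spectrum rather than hard-wired to one graph's eigenvectors, the same filter makes sense on any graph approximating the graphon, which is exactly the transferability the paper quantifies.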

The paper presents open-source computational workflows for assessing the "Exposure to sunlight" and "View out" criteria as defined in the European standard EN 17037 "Daylight in Buildings", issued by the European Committee for Standardization. In addition to these factors, the standard also addresses daylight provision and protection from glare, both of which fall outside the scope of this paper. The purpose of the standard is stated as 'encouraging building designers to assess and ensure successfully daylit spaces'. The standard proposes verification methods for performing such assessments, albeit without recommending a simulation procedure for computing the aforementioned criteria. The workflows proposed in this paper are arguably the first attempt to standardize these assessment methods using de facto standard open-source technologies currently used in practice. The approach of this work is twofold: establishing that the compliance check can be performed systematically on a 3D model by a novel simulation tool developed by the authors, and highlighting the additional assumptions that must be implemented to build a robust and unambiguous tool within existing open-source frameworks.
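As an illustration of what a compliance check can look like once sun visibility has been computed (e.g., by ray tracing in a simulation tool), the toy sketch below aggregates per-timestep direct-sun flags for one assessment day. The 1.5/3/4-hour thresholds reflect our reading of the standard's minimum/medium/high recommendation levels and should be verified against the standard text; the visibility flags are fabricated.

```python
# Toy check of the "Exposure to sunlight" criterion.
def sunlight_exposure_hours(sun_visible, step_minutes=10):
    """Total direct-sunlight duration in hours for one assessment day,
    given boolean visibility flags per timestep (from ray tracing)."""
    return sum(sun_visible) * step_minutes / 60.0

def en17037_sunlight_level(hours):
    # Assumed recommendation levels; confirm against EN 17037 itself.
    if hours >= 4.0:
        return "high"
    if hours >= 3.0:
        return "medium"
    if hours >= 1.5:
        return "minimum"
    return "not met"

# e.g. visibility flags for 10-minute steps across part of a day:
flags = [False] * 30 + [True] * 14 + [False] * 30   # 140 min of sun
h = sunlight_exposure_hours(flags)
print(h, en17037_sunlight_level(h))   # 2.33 -> "minimum"
```

The hard part, which the paper addresses, is not this final tally but the unambiguous geometric and temporal assumptions needed to produce the visibility flags reproducibly across tools.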

In this paper, we study, from a theoretical perspective, how powerful graph neural networks (GNNs) can be for learning approximation algorithms for combinatorial problems. To this end, we first establish a new class of GNNs that can solve a strictly wider variety of problems than existing GNNs. Then, we bridge the gap between GNN theory and the theory of distributed local algorithms to demonstrate that the most powerful GNN can learn approximation algorithms for the minimum dominating set problem and the minimum vertex cover problem with certain approximation ratios, and that no GNN can achieve better ratios. This paper is the first to elucidate the approximation ratios of GNNs for combinatorial problems. Furthermore, we prove that adding coloring or weak-coloring to each node feature improves these approximation ratios, indicating that preprocessing and feature engineering theoretically strengthen model capabilities.
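The feature-engineering step can be sketched directly: compute a coloring and append it, one-hot encoded, to the node features before they enter the GNN. The sketch below uses a greedy coloring from networkx; the graph, coloring strategy, and base features are arbitrary stand-ins, and the GNN itself is omitted.

```python
import networkx as nx
import numpy as np

# Augment node features with a (greedy) coloring before the GNN.
G = nx.erdos_renyi_graph(20, 0.3, seed=0)
coloring = nx.greedy_color(G, strategy="largest_first")
num_colors = max(coloring.values()) + 1

base = np.ones((G.number_of_nodes(), 1))          # original node features
onehot = np.eye(num_colors)[[coloring[v] for v in G.nodes]]
features = np.hstack([base, onehot])              # augmented features
print(features.shape)
```

Intuitively, the coloring breaks the symmetry between locally indistinguishable nodes, which is what lets the GNN emulate stronger distributed local algorithms and attain better approximation ratios.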

This paper addresses the problem of formally verifying desirable properties of neural networks, i.e., obtaining provable guarantees that neural networks satisfy specifications relating their inputs and outputs (robustness to bounded-norm adversarial perturbations, for example). Most previous work on this topic was limited in its applicability by the size of the network, the network architecture, and the complexity of the properties to be verified. In contrast, our framework applies to a general class of activation functions and of specifications on neural network inputs and outputs. We formulate verification as an optimization problem (seeking the largest violation of the specification) and solve a Lagrangian relaxation of this problem to obtain an upper bound on the worst-case violation of the specification being verified. Our approach is anytime, i.e., it can be stopped at any time and a valid bound on the maximum violation can be obtained. We develop specialized verification algorithms with provable tightness guarantees under special assumptions and demonstrate the practical significance of our general verification approach on a variety of verification tasks.
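To convey the anytime property, here is a simplified Lagrangian-relaxation bound for a single ReLU layer, maximizing c^T ReLU(Wx + b) over a box of inputs. This is our own reduced derivation for illustration (the paper handles general networks and specifications): every multiplier lam yields a valid upper bound, so the best bound found so far can be returned whenever the search is stopped.

```python
import numpy as np

# Dual bound for max_{x in [l,u]} c^T ReLU(W x + b): relax the coupling
# a = W x + b with multiplier lam; any lam gives a valid upper bound.
rng = np.random.default_rng(0)
W = rng.standard_normal((6, 4))
b, c = rng.standard_normal(6), rng.standard_normal(6)
l, u = -np.ones(4), np.ones(4)

# Interval bounds on the pre-activations a = W x + b:
Wp, Wn = np.maximum(W, 0), np.minimum(W, 0)
a_lo, a_hi = Wp @ l + Wn @ u + b, Wp @ u + Wn @ l + b

def dual_bound(lam):
    # max over x in the box of (W^T lam)^T x, plus lam^T b:
    wl = W.T @ lam
    val = np.maximum(wl, 0) @ u + np.minimum(wl, 0) @ l + lam @ b
    # max over a_i in [a_lo_i, a_hi_i] of c_i*ReLU(a_i) - lam_i*a_i;
    # piecewise linear, so the maximizer is an endpoint or the kink at 0:
    cands = np.stack([a_lo, np.clip(0.0, a_lo, a_hi), a_hi])
    return val + np.max(c * np.maximum(cands, 0) - lam * cands, axis=0).sum()

best = dual_bound(np.zeros(6))           # lam = 0 is already a valid bound
for _ in range(200):                     # any cheap search only tightens it
    best = min(best, dual_bound(0.5 * rng.standard_normal(6)))
print("anytime upper bound:", best)
```

The design choice that makes this scale is decomposition: once the coupling constraints are dualized, the remaining maximizations split into independent per-coordinate problems with closed-form solutions.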
