
In this letter we introduce the non-linear partial differential equation (PDE) $\partial^2_{\tau} \pi \propto (\vec\nabla \pi)^2$, which exhibits a new type of instability. Such equations appear in the effective field theory (EFT) of dark energy for the $k$-essence model, as well as in many other theories based on the EFT formalism. We demonstrate the occurrence of the instability in a cosmological context using a relativistic $N$-body code, and we study it mathematically in 3+1 dimensions within spherical symmetry. We show that this nonlinear term dominates in the low-speed-of-sound limit, where some important linear terms are suppressed.
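
As a purely illustrative toy (far from the paper's 3+1-dimensional $N$-body setting), one can integrate a 1+1-dimensional analogue of the equation with an explicit leapfrog scheme; grid, time step, and initial data below are our own arbitrary choices, and no claim is made about reproducing the instability analyzed in the paper:

```python
import numpy as np

# Toy 1+1-dimensional explicit integration of
#   d^2(pi)/d(tau)^2 = (d(pi)/dx)^2,
# a hypothetical reduction of the 3+1-dimensional equation in the text.
# All numerical parameters are illustrative only.
nx, dx, dt, nsteps = 200, 0.05, 0.005, 100
x = np.arange(nx) * dx
pi_prev = np.exp(-((x - 5.0) ** 2))   # Gaussian initial profile
pi_curr = pi_prev.copy()              # zero initial "velocity"

for _ in range(nsteps):
    grad = np.gradient(pi_curr, dx)   # central differences for d(pi)/dx
    rhs = grad ** 2                   # the quadratic source term
    pi_next = 2 * pi_curr - pi_prev + dt ** 2 * rhs
    pi_prev, pi_curr = pi_curr, pi_next

print(float(np.max(np.abs(pi_curr))))  # amplitude after integration
```

Because the source term $(\partial_x \pi)^2$ is non-negative, the field grows monotonically at every point, which is the mechanism the quadratic term feeds.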


Deep learning is also known as hierarchical learning, where the learner _learns_ to represent a complicated target function by decomposing it into a sequence of simpler functions to reduce sample and time complexity. This paper formally analyzes how multi-layer neural networks can perform such hierarchical learning _efficiently_ and _automatically_ by SGD on the training objective. On the conceptual side, we present a theoretical characterization of how certain types of deep (i.e. super-constant layer) neural networks can still be sample- and time-efficiently trained on some hierarchical tasks, when no existing algorithm (including layerwise training, kernel methods, etc.) is known to be efficient. We establish a new principle called "backward feature correction", where the errors in the lower-level features can be automatically corrected when training together with the higher-level layers. We believe this is key to how deep learning performs deep (hierarchical) learning, as opposed to layerwise learning or simulating some non-hierarchical method. On the technical side, we show for every input dimension $d > 0$, there is a concept class of degree $\omega(1)$ multi-variate polynomials so that, using $\omega(1)$-layer neural networks as learners, SGD can learn any function from this class in $\mathsf{poly}(d)$ time to any $\frac{1}{\mathsf{poly}(d)}$ error, through learning to represent it as a composition of $\omega(1)$ layers of quadratic functions using "backward feature correction." In contrast, we do not know any other simpler algorithm (including layerwise training, applying kernel methods sequentially, training a two-layer network, etc.) that can learn this concept class in $\mathsf{poly}(d)$ time even to any $d^{-0.01}$ error. As a side result, we prove $d^{\omega(1)}$ lower bounds for several non-hierarchical learners, including any kernel methods.
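
To make the "composition of $\omega(1)$ layers of quadratic functions" concrete in miniature: composing $L$ quadratic maps yields degree $2^L$ with only $L$ layers. The specific target $x \mapsto x^{2^L}$ below is our own toy choice, not the paper's concept class:

```python
# Illustrative only: a degree-2^L monomial written as a composition of L
# quadratic maps, mirroring how high-degree polynomials can be represented
# by a super-constant number of quadratic layers.
def quadratic_layer(z):
    """A single quadratic 'layer': z -> z^2."""
    return z * z

def deep_composition(x, num_layers):
    """Compose num_layers quadratic layers: x -> x**(2**num_layers)."""
    z = x
    for _ in range(num_layers):
        z = quadratic_layer(z)
    return z

print(deep_composition(3, 4))  # 3**16 = 43046721
```

A shallow (e.g. two-layer) learner would have to fit this degree-$2^L$ object directly, which is the kind of separation the abstract's lower bounds formalize.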

Lattices with a circulant generator matrix represent a subclass of cyclic lattices. This subclass can be described by a basis containing a vector and its circular shifts. In this paper, we present certain conditions under which the norm expression of an arbitrary vector of this type of lattice is substantially simplified, and then investigate some of the lattices obtained under these conditions. We exhibit systems of nonlinear equations whose solutions yield lattices as dense as $D_n$ in odd dimensions. As for even dimensions, we obtain lattices denser than $A_n$ whenever $n \in 2\mathbb{Z} \setminus 4\mathbb{Z}$.
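
The basic objects are easy to compute with directly. A minimal sketch (the generating vector and coordinate vector below are arbitrary examples, not taken from the paper): build the circulant generator matrix from a vector and its circular shifts, then evaluate a lattice vector's squared norm via the Gram matrix:

```python
import numpy as np

def circulant(v):
    """Generator matrix whose rows are v and its circular shifts."""
    n = len(v)
    return np.array([np.roll(v, k) for k in range(n)], dtype=float)

# Hypothetical generating vector; the paper derives conditions under which
# such norm expressions simplify -- here we only compute one directly.
v = np.array([2.0, 1.0, 0.0, 1.0, 0.0])
B = circulant(v)
G = B @ B.T                     # Gram matrix of the circulant basis
z = np.array([1, -1, 0, 2, 0])  # integer coordinate vector
norm_sq = float(z @ G @ z)      # squared norm of the lattice vector z @ B
print(norm_sq)
```

Note that $G$ is itself symmetric and circulant, which is what makes the norm expression tractable to simplify.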

Personalized prediction is a machine learning approach that predicts a person's future observations based on their past labeled observations and is typically used for sequential tasks, e.g., to predict daily mood ratings. When making personalized predictions, a model can combine two types of trends: (a) trends shared across people, i.e., person-generic trends, such as being happier on weekends, and (b) unique trends for each person, i.e., person-specific trends, such as a stressful weekly meeting. Mixed effect models are popular statistical models to study both trends by combining person-generic and person-specific parameters. Though linear mixed effect models are gaining popularity in machine learning through integration with neural networks, these integrations are currently limited to linear person-specific parameters, ruling out nonlinear person-specific trends. In this paper, we propose Neural Mixed Effect (NME) models to optimize nonlinear person-specific parameters anywhere in a neural network in a scalable manner. NME combines the efficiency of neural network optimization with nonlinear mixed effects modeling. Empirically, we observe that NME improves performance across six unimodal and multimodal datasets, including a smartphone dataset to predict daily mood and a mother-adolescent dataset to predict affective state sequences where half the mothers experience at least moderate symptoms of depression. Furthermore, we evaluate NME for two model architectures, including neural conditional random fields (CRFs) to predict affective state sequences, where the CRF learns nonlinear person-specific temporal transitions between affective states. Analysis of these person-specific transitions on the mother-adolescent dataset shows interpretable trends related to the mother's depression symptoms.
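
The core idea (a person-specific parameter sitting *inside* a nonlinearity, regularized toward a shared value) can be sketched in a few lines. This is our own minimal mixed-effects toy with hand-derived gradients, not the paper's NME architecture; the model $y \approx w \tanh(a_p x)$, the prior strength, and all data are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy mixed-effects sketch: a[p] is a person-specific parameter inside a
# nonlinearity, w is person-generic, and a Gaussian-prior-style penalty
# keeps person-specific deviations from the shared value (1.0) small.
P, N = 5, 40                          # persons, observations per person
true_a = rng.normal(1.0, 0.3, P)      # hypothetical person-specific slopes
x = rng.normal(size=(P, N))
y = np.tanh(true_a[:, None] * x) * 2.0 + 0.05 * rng.normal(size=(P, N))

w, a = 0.5, np.ones(P)                # init shared and per-person params
lam, lr = 0.1, 0.05                   # prior strength, learning rate

def loss(w, a):
    pred = np.tanh(a[:, None] * x) * w
    return np.mean((pred - y) ** 2) + lam * np.mean((a - 1.0) ** 2)

loss_start = loss(w, a)
for _ in range(300):
    h = np.tanh(a[:, None] * x)
    resid = h * w - y                                  # shape (P, N)
    grad_w = 2 * np.mean(resid * h)
    grad_a = (2 * np.mean(resid * w * (1 - h ** 2) * x, axis=1)
              + 2 * lam * (a - 1.0)) / P
    w -= lr * grad_w
    a -= lr * grad_a
print(loss_start, loss(w, a))
```

In an actual NME model the per-person parameters would live inside a full neural network and be optimized jointly at scale; the penalty term here plays the role of the mixed-effects prior.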

Under a nonlinear regression model with univariate response, an algorithm for the generation of sequential adaptive designs is studied. At each stage, the current design is augmented by adding $p$ design points, where $p$ is the dimension of the parameter of the model. The augmenting $p$ points are such that, at the current parameter estimate, they constitute the locally D-optimal design within the set of all saturated designs. Two relevant subclasses of nonlinear regression models are focused on, which were considered in previous work of the authors on the adaptive Wynn algorithm: firstly, regression models satisfying the `saturated identifiability condition' and, secondly, generalized linear models. Adaptive least squares estimators and adaptive maximum likelihood estimators in the algorithm are shown to be strongly consistent and asymptotically normal, under appropriate assumptions. For both model classes, if a condition of `saturated D-optimality' is satisfied, the almost sure asymptotic D-optimality of the generated design sequence is implied by the strong consistency of the adaptive estimators employed by the algorithm. The condition states that there is a saturated design which is locally D-optimal at the true parameter point (in the class of all designs).
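
One augmentation step can be sketched concretely. For a saturated design ($p$ points for $p$ parameters, equal weights), the D-criterion reduces to $|\det F|$ where the rows of $F$ are the model's parameter gradients at the chosen points. The two-parameter exponential model and the candidate grid below are our own illustrative choices, not the paper's examples:

```python
import numpy as np
from itertools import combinations

# Sketch of one augmentation step for the illustrative model
#   eta(x; theta) = theta1 * exp(-theta2 * x):
# find the p = 2 candidate points maximizing the determinant of the
# saturated information matrix at the current parameter estimate.
def jac_row(x, theta):
    t1, t2 = theta
    e = np.exp(-t2 * x)
    return np.array([e, -t1 * x * e])   # d(eta)/d(theta)

def best_saturated_pair(candidates, theta):
    best, best_det = None, -np.inf
    for pair in combinations(candidates, 2):
        F = np.array([jac_row(x, theta) for x in pair])
        d = abs(np.linalg.det(F))       # D-criterion for a saturated design
        if d > best_det:
            best, best_det = pair, d
    return best, best_det

theta_hat = (1.0, 0.5)                  # current adaptive estimate (hypothetical)
grid = np.linspace(0.0, 6.0, 61)
pair, det_val = best_saturated_pair(grid, theta_hat)
print(pair, det_val)
```

For this model the search recovers the classical locally D-optimal support $\{0, 1/\theta_2\}$; in the adaptive algorithm, $\theta_\mathrm{hat}$ would be re-estimated after each augmentation.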

We consider the problem of empirical Bayes estimation for (multivariate) Poisson means. Existing solutions that have been shown theoretically optimal for minimizing the regret (excess risk over the Bayesian oracle that knows the prior) have several shortcomings. For example, the classical Robbins estimator does not retain the monotonicity property of the Bayes estimator and performs poorly under moderate sample size. Estimators based on the minimum distance and non-parametric maximum likelihood (NPMLE) methods correct these issues, but are computationally expensive, with complexity growing exponentially with dimension. Extending the approach of Barbehenn and Zhao (2022), in this work we construct monotone estimators based on empirical risk minimization (ERM) that retain similar theoretical guarantees and can be computed much more efficiently. Adapting the idea of offset Rademacher complexity of Liang et al. (2015) to the non-standard loss and function class in empirical Bayes, we show that the shape-constrained ERM estimator attains the minimax regret within constant factors in one dimension and within logarithmic factors in multiple dimensions.
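
For reference, the classical Robbins estimator mentioned above is $\hat\theta(y) = (y+1)\,N(y+1)/N(y)$, where $N(y)$ counts how often the value $y$ appears in the sample. A minimal sketch on synthetic data (the Gamma prior below is our own illustrative choice):

```python
import numpy as np

# Robbins estimator for Poisson empirical Bayes:
#   theta_hat(y) = (y + 1) * N(y + 1) / N(y).
# Synthetic data: latent means drawn from an (illustrative) Gamma prior.
rng = np.random.default_rng(1)
theta = rng.gamma(shape=2.0, scale=1.0, size=5000)   # latent means
y = rng.poisson(theta)                               # observed counts

counts = np.bincount(y)

def robbins(v):
    """Robbins estimate of E[theta | Y = v]; undefined when N(v) = 0."""
    if v + 1 >= len(counts) or counts[v] == 0:
        return None
    return (v + 1) * counts[v + 1] / counts[v]

print([robbins(v) for v in range(5)])
```

Because the estimate at each $v$ depends on a ratio of empirical frequencies, nothing forces it to be increasing in $v$, which is the non-monotonicity shortcoming that the shape-constrained ERM estimator corrects.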

The multivariate adaptive regression spline (MARS) is a popular estimation method for nonparametric multivariate regression. However, as MARS is based on marginal splines, incorporating interactions of covariates requires products of the marginal splines, which leads to an unmanageable number of basis functions when the order of interaction is high and results in low estimation efficiency. In this paper, we improve the performance of MARS by using linear combinations of the covariates which achieve sufficient dimension reduction. The special basis functions of MARS facilitate calculation of gradients of the regression function, and estimation of the linear combinations is obtained via eigen-analysis of the outer product of the gradients. Under some technical conditions, the asymptotic theory is established for the proposed estimation method. Numerical studies including both simulation and empirical applications show its effectiveness in dimension reduction and improvement over MARS and other commonly used nonparametric methods in regression estimation and prediction.
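
The gradient outer-product step can be illustrated in isolation: when $y = g(B^\top x)$, the matrix $E[\nabla m(x)\,\nabla m(x)^\top]$ has its leading eigenvectors spanning the column space of $B$. The sketch below cheats by using the true gradient of a known toy function in place of a MARS-based gradient estimate:

```python
import numpy as np

# Gradient outer-product sketch for sufficient dimension reduction.
# Toy target m(x) = sin(x0 + x1) depends on x only through the direction
# b = (1, 1, 0, ..., 0); we recover b from the averaged outer product.
rng = np.random.default_rng(0)
d, n = 6, 2000
X = rng.normal(size=(n, d))
b = np.zeros(d); b[0], b[1] = 1.0, 1.0

def grad_m(x):
    """True gradient of m(x) = sin(x0 + x1) (stand-in for a MARS estimate)."""
    return np.cos(x[0] + x[1]) * b

M = np.zeros((d, d))
for i in range(n):
    g = grad_m(X[i])
    M += np.outer(g, g) / n           # average gradient outer product

eigvals, eigvecs = np.linalg.eigh(M)  # ascending eigenvalues
top = eigvecs[:, -1]                  # leading eigenvector, spans span(b)
print(eigvals[-1], top)
```

In the paper's method, the gradients come from the fitted MARS basis functions rather than from an oracle, and the recovered directions then feed back into a lower-dimensional MARS fit.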

This paper examines functional equivariance, recently introduced by McLachlan and Stern [Found. Comput. Math. (2022)], from the perspective of backward error analysis. We characterize the evolution of certain classes of observables (especially affine and quadratic) by structure-preserving numerical integrators in terms of their modified vector fields. Several results on invariant preservation and symplecticity of modified vector fields are thereby generalized to describe the numerical evolution of non-invariant observables.
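
As a textbook backward-error-analysis example (standard material, not a result specific to this paper): applying the explicit Euler method with step size $h$ to $\dot y = f(y)$ reproduces, up to higher order, the exact flow of the modified equation

$$\dot{\tilde y} = f(\tilde y) - \frac{h}{2} f'(\tilde y)\, f(\tilde y) + O(h^2),$$

so the numerical evolution of an observable $F$ is governed, to the same order, by $\frac{d}{dt}F(\tilde y) = \nabla F(\tilde y) \cdot \tilde f(\tilde y)$ with $\tilde f$ the modified vector field. Results of the kind described above characterize when such modified vector fields inherit structure (invariants, symplecticity) from the integrator.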

In this article, we present an overview of different a posteriori error analysis and post-processing methods proposed in the context of nonlinear eigenvalue problems, e.g. arising in electronic structure calculations for the calculation of the ground state, and compare them. We provide two equivalent error reconstructions based either on a second-order Taylor expansion of the minimized energy, or a first-order expansion of the nonlinear eigenvalue equation. We then show how several a posteriori error estimations as well as post-processing methods can be formulated as specific applications of the derived reconstructed errors, and we compare their range of applicability as well as numerical cost and precision.

The accurate and efficient simulation of Partial Differential Equations (PDEs) in and around arbitrarily defined geometries is critical for many application domains. Immersed boundary methods (IBMs) alleviate the usually laborious and time-consuming process of creating body-fitted meshes around complex geometry models (described by CAD or other representations, e.g., STL, point clouds), especially when high levels of mesh adaptivity are required. In this work, we advance the field of IBM in the context of the recently developed Shifted Boundary Method (SBM). In the SBM, the location where boundary conditions are enforced is shifted from the actual boundary of the immersed object to a nearby surrogate boundary, and boundary conditions are corrected utilizing Taylor expansions. This approach allows choosing surrogate boundaries that conform to a Cartesian mesh without losing accuracy or stability. Our contributions in this work are as follows: (a) we show that the SBM numerical error can be greatly reduced by an optimal choice of the surrogate boundary, (b) we mathematically prove the optimal convergence of the SBM for this optimal choice of the surrogate boundary, (c) we deploy the SBM on massively parallel octree meshes, including algorithmic advances to handle incomplete octrees, and (d) we showcase the applicability of these approaches with a wide variety of simulations involving complex shapes, sharp corners, and different topologies. Specific emphasis is given to Poisson's equation and the linear elasticity equations.

Deep learning has shown great potential for modeling the physical dynamics of complex particle systems such as fluids (in Lagrangian descriptions). Existing approaches, however, require supervision of consecutive particle properties, including positions and velocities. In this paper, we consider a partially observable scenario known as fluid dynamics grounding, that is, inferring the state transitions and interactions within a fluid particle system from sequential visual observations of the fluid surface. We propose a differentiable two-stage network named NeuroFluid. Our approach consists of (i) a particle-driven neural renderer, which incorporates fluid physical properties into the volume rendering function, and (ii) a particle transition model optimized to reduce the differences between the rendered and the observed images. NeuroFluid provides the first solution to unsupervised learning of particle-based fluid dynamics by training these two models jointly. It is shown to reasonably estimate the underlying physics of fluids with different initial shapes, viscosities, and densities. It is a potential alternative approach to understanding complex fluid mechanics, such as turbulence, which is difficult to model using traditional methods of mathematical physics.
