
In this contribution, we present an extension of thermodynamic topology optimization that accounts for non-linear material behavior due to the evolution of plastic strains. In contrast to physical loading and unloading processes, a virtual unloading caused by stiffness evolution during the optimization process must not result in a hysteresis in the stress/strain diagram. Therefore, this problem is usually resolved by simulating the entire load path for each optimization step. To avoid this time-consuming procedure, we develop a surrogate material model that yields identical results for the loading case but does not dissipate energy during the virtual unloading. The model is embedded into our strategy for topology optimization, which is rooted in thermodynamic extremal principles. We present the derivation of all model equations and suitable strategies for their numerical solution. We then validate our model by solving optimization problems for several boundary value problems and show empirically that our novel material model yields plastic strain distributions very similar to those of classical elasto-plastic material models. Consequently, thermodynamic topology optimization allows optimizing elasto-plastic materials at low numerical cost without the need to compute the entire load history for each optimization step.
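As a purely illustrative aside, the following minimal 1-D sketch contrasts a classical elasto-plastic unloading path, which leaves a residual strain and hence a hysteresis loop, with a dissipation-free unloading along the secant stiffness. The secant rule is an assumption chosen only to convey the idea of a hysteresis-free virtual unloading; it is not the surrogate model derived in the paper.

```python
# Minimal 1-D sketch (not the paper's model): contrast classical elasto-plastic
# unloading, which leaves residual strain (hysteresis), with a dissipation-free
# "secant" unloading that retraces a straight line back to the origin.
import numpy as np

E, H, sigma_y = 200e3, 10e3, 250.0   # Young's modulus, hardening modulus, yield stress [MPa]

def load(eps):
    """Monotonic loading: identical for both models."""
    eps_y = sigma_y / E
    if eps <= eps_y:
        return E * eps
    # linear isotropic hardening beyond yield
    return sigma_y + E * H / (E + H) * (eps - eps_y)

eps_max = 0.01
sigma_max = load(eps_max)

# (a) classical elasto-plastic unloading: elastic slope E, residual strain remains
eps_res_classical = eps_max - sigma_max / E

# (b) dissipation-free surrogate (illustrative assumption): unload along the
#     secant stiffness sigma_max/eps_max, returning exactly to the origin
eps_res_surrogate = eps_max - sigma_max / (sigma_max / eps_max)

print(f"classical residual strain: {eps_res_classical:.5f}")
print(f"secant-surrogate residual strain: {eps_res_surrogate:.5f}")  # 0.0
```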

Related content

We present a non-nested multilevel algorithm for solving the Poisson equation discretized at scattered points using polyharmonic spline radial basis function (PHS-RBF) interpolations. We append polynomials to the radial basis functions to achieve exponential convergence of the discretization errors. The interpolations are performed over local clouds of points, and the Poisson equation is collocated at each of the scattered points, resulting in a sparse set of discrete equations for the unknown variables. To solve this set of equations, we have developed a non-nested multilevel algorithm utilizing multiple independently generated coarse sets of points. The restriction and prolongation operators are also constructed with the same RBF interpolation procedure. The performance of the algorithm for Dirichlet and all-Neumann boundary conditions is evaluated in three model geometries using a manufactured solution. For Dirichlet boundary conditions, rapid convergence is observed when an SOR point solver is used as the relaxation scheme. For all-Neumann boundary conditions, convergence slows down as the degree of the appended polynomial increases. However, when the multilevel procedure is combined with a GMRES algorithm, convergence improves significantly. The GMRES-accelerated multilevel algorithm is incorporated in a fractional-step method to solve the incompressible Navier-Stokes equations.
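For readers unfamiliar with the building block, the sketch below shows a PHS-RBF interpolation with an appended linear polynomial over a small 2-D cloud of points, assuming the common cubic kernel phi(r) = r^3; it illustrates only the interpolation step, not the collocation of the Poisson operator or the multilevel cycle.

```python
# Minimal sketch of PHS-RBF interpolation with an appended linear polynomial
# over a local cloud of 2-D points; assumes phi(r) = r^3, not the full solver.
import numpy as np

def phs_interpolate(nodes, values, query):
    """Interpolate scattered data at `query` using r^3 PHS + linear polynomial."""
    n = nodes.shape[0]
    r = np.linalg.norm(nodes[:, None, :] - nodes[None, :, :], axis=-1)
    A = r**3                                        # PHS kernel matrix
    P = np.hstack([np.ones((n, 1)), nodes])         # polynomial tail: 1, x, y
    # saddle-point system enforcing polynomial reproduction constraints
    M = np.block([[A, P], [P.T, np.zeros((3, 3))]])
    rhs = np.concatenate([values, np.zeros(3)])
    coeffs = np.linalg.solve(M, rhs)
    lam, c = coeffs[:n], coeffs[n:]
    rq = np.linalg.norm(query - nodes, axis=-1)
    return rq**3 @ lam + np.array([1.0, *query]) @ c

rng = np.random.default_rng(0)
pts = rng.random((15, 2))
f = np.sin(np.pi * pts[:, 0]) * np.cos(np.pi * pts[:, 1])   # manufactured data
print(phs_interpolate(pts, f, np.array([0.5, 0.5])))
```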

Characterizing the connection between material design decisions/parameters and their effective properties allows for accelerated materials development and optimization. We present a global sensitivity analysis of woven composite thermophysical properties, including density, volume fraction, thermal conductivity, specific heat, moduli, permeability, and tortuosity, predicted using mesoscale finite element simulations. The mesoscale simulations use microscale approximations for the tow and matrix phases. We perform Latin hypercube sampling over viable input parameter ranges and analyze the resulting effective property distributions with a surrogate model to determine the correlations between material parameters and responses, interactions between properties, and finally Sobol' indices and sensitivities. We demonstrate that both the constituent physical properties and the mesoscale geometry strongly influence the composite material properties.
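A hedged sketch of this overall workflow is shown below: Latin hypercube sampling of input parameters, a surrogate fitted to the sampled responses, and Sobol' indices evaluated on the surrogate. The toy response function and parameter names are stand-ins for the mesoscale finite element simulations.

```python
# Hedged sketch of the workflow: LHS sampling of inputs, a surrogate fitted to
# the sampled responses, and Sobol' indices computed on the cheap surrogate.
# The toy response below stands in for the mesoscale FE model.
import numpy as np
from scipy.stats import qmc
from sklearn.ensemble import RandomForestRegressor
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 3,
    "names": ["tow_conductivity", "matrix_conductivity", "fiber_volume_fraction"],
    "bounds": [[1.0, 10.0], [0.1, 1.0], [0.4, 0.7]],
}

def toy_effective_property(x):
    # placeholder for an expensive mesoscale simulation
    k_tow, k_mat, vf = x.T
    return vf * k_tow + (1 - vf) * k_mat + 0.1 * k_tow * vf**2

# 1) Latin hypercube design over the viable parameter ranges
sampler = qmc.LatinHypercube(d=problem["num_vars"], seed=0)
X = qmc.scale(sampler.random(n=200), *zip(*problem["bounds"]))
y = toy_effective_property(X)

# 2) surrogate model fitted to the simulated responses
surrogate = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# 3) Sobol' indices evaluated on the surrogate
X_sobol = saltelli.sample(problem, 1024)
Si = sobol.analyze(problem, surrogate.predict(X_sobol))
print(dict(zip(problem["names"], Si["S1"])))
```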

Uncertainty often plays an important role in dynamic flow problems. In this paper, we consider both a stationary and a dynamic flow model with uncertain boundary data on networks. We introduce two different ways to compute the probability that random boundary data are feasible and discuss their advantages and disadvantages. In this context, feasible means that the flow corresponding to the random boundary data meets box constraints at the network junctions. The first method is the spheric radial decomposition and the second is kernel density estimation. In both settings, we consider certain optimization problems and compute derivatives of the probabilistic constraint using the kernel density estimator. Moreover, we derive necessary optimality conditions for the stationary and the dynamic case. Throughout the paper, we use numerical examples to illustrate our results, comparing them with a classical Monte Carlo approach for computing the desired probability.
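The sketch below illustrates, on a toy scalar flow map with hypothetical box bounds, how such a feasibility probability can be estimated by plain Monte Carlo and by a kernel density estimate; it does not implement the spheric radial decomposition or the network models of the paper.

```python
# Minimal sketch: probability that random boundary data yields a feasible flow,
# estimated (a) by plain Monte Carlo and (b) via a kernel density estimate.
# The linear "flow map" and box bounds are toy stand-ins.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
b = rng.normal(loc=50.0, scale=10.0, size=20000)   # random boundary data
flow = 0.8 * b + 5.0                               # toy flow at a network junction
lo, hi = 40.0, 60.0                                # box constraints

# (a) Monte Carlo estimate of P(lo <= flow <= hi)
p_mc = np.mean((flow >= lo) & (flow <= hi))

# (b) kernel density estimate, integrated over the feasible box
kde = gaussian_kde(flow)
p_kde = kde.integrate_box_1d(lo, hi)

print(f"Monte Carlo: {p_mc:.4f},  KDE: {p_kde:.4f}")
```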

In this article, we propose a new approach, optimize-then-agree, for minimizing a sum $f = \sum_{i=1}^n f_i(x)$ of convex objective functions over a directed graph. The optimize-then-agree approach decouples the optimization step and the consensus step in a distributed optimization framework. The key motivation for optimize-then-agree is to guarantee that the disagreement between the agents' estimates during every iteration of the distributed optimization algorithm remains below any a priori specified tolerance; existing algorithms do not provide such a guarantee, which is required in many practical scenarios. In this method, each agent maintains an estimate of the optimal solution during each iteration and utilizes its locally available gradient information along with a finite-time approximate consensus protocol to move towards the optimal solution (hence the name Gradient-Consensus algorithm). We establish that the proposed algorithm has a global R-linear rate of convergence if the aggregate function $f$ is strongly convex and Lipschitz differentiable. We also show that, under the relaxed assumption that the $f_i$ are convex and Lipschitz differentiable, the objective function error residual decreases at a Q-linear rate (in terms of the number of gradient computation steps) until it reaches a small value, which can be managed using the tolerance specified for the finite-time approximate consensus protocol; no existing method in the literature has such strong convergence guarantees when the $f_i$ are not necessarily strongly convex. The communication overhead incurred for these improved guarantees on constraint satisfaction and convergence is $O(k\log k)$ iterations, compared to $O(k)$ for traditional algorithms. Further, we numerically evaluate the performance of the proposed algorithm by solving a distributed logistic regression problem.
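A hedged sketch of an optimize-then-agree iteration is given below: each agent takes a local gradient step and then runs an approximate consensus protocol (repeated averaging with a doubly stochastic mixing matrix) until the disagreement falls below a set tolerance. The quadratic local objectives, step size, and mixing weights are toy choices, not the paper's exact scheme.

```python
# Hedged sketch of optimize-then-agree: local gradient steps followed by
# approximate consensus until disagreement is below a tolerance.
# Quadratic local objectives and ring mixing weights are toy choices.
import numpy as np

n, dim = 4, 2
rng = np.random.default_rng(0)
targets = rng.normal(size=(n, dim))                  # f_i(x) = 0.5*||x - t_i||^2
W = np.array([[0.5, 0.25, 0.0, 0.25],                # doubly stochastic ring weights
              [0.25, 0.5, 0.25, 0.0],
              [0.0, 0.25, 0.5, 0.25],
              [0.25, 0.0, 0.25, 0.5]])

x = np.zeros((n, dim))                               # each agent's estimate
alpha, tol = 0.5, 1e-6
for _ in range(200):
    grads = x - targets                              # local gradients of f_i
    x = x - alpha * grads                            # optimize step
    while np.max(np.abs(x - x.mean(axis=0))) > tol:  # agree step: approximate consensus
        x = W @ x

print("consensus estimate:", x[0], "true optimum:", targets.mean(axis=0))
```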

Interpreting the training of Deep Neural Networks (DNNs) as an optimal control problem with nonlinear dynamical systems has received considerable attention recently, yet algorithmic development remains relatively limited. In this work, we make an attempt along this line by reformulating the training procedure from the trajectory optimization perspective. We first show that most widely used algorithms for training DNNs can be linked to Differential Dynamic Programming (DDP), a celebrated second-order trajectory optimization algorithm rooted in approximate dynamic programming. In this vein, we propose a new variant of DDP that can accept batch optimization for training feedforward networks while integrating naturally with recent progress in curvature approximation. The resulting algorithm features layer-wise feedback policies which improve the convergence rate and reduce sensitivity to hyper-parameters compared with existing methods. We show that the algorithm is competitive against state-of-the-art first- and second-order methods. Our work opens up new avenues for principled algorithmic design built upon optimal control theory.
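To make the feedback-policy structure concrete, the sketch below runs one textbook DDP backward/forward pass on a toy linear-quadratic problem, producing the per-stage feed-forward and feedback gains (k_t, K_t); it is standard DDP, not the batch training variant proposed in the paper.

```python
# Hedged sketch: one DDP backward/forward sweep on a toy linear-quadratic
# problem, illustrating the feedback policy u_t = u_nom_t + k_t + K_t (x_t - x_nom_t).
import numpy as np

T, nx, nu = 10, 2, 1
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q = np.eye(nx)
R = 0.1 * np.eye(nu)

x0 = np.array([1.0, 0.0])
u_nom = [np.zeros(nu) for _ in range(T)]

# nominal rollout
x_nom = [x0]
for t in range(T):
    x_nom.append(A @ x_nom[t] + B @ u_nom[t])

# backward pass: quadratic value expansion and per-stage gains (k_t, K_t)
Vx, Vxx = Q @ x_nom[T], Q.copy()
gains = []
for t in reversed(range(T)):
    Qx = Q @ x_nom[t] + A.T @ Vx
    Qu = R @ u_nom[t] + B.T @ Vx
    Qxx = Q + A.T @ Vxx @ A
    Quu = R + B.T @ Vxx @ B
    Qux = B.T @ Vxx @ A
    k = -np.linalg.solve(Quu, Qu)        # feed-forward update
    K = -np.linalg.solve(Quu, Qux)       # feedback gain
    gains.append((k, K))
    Vx = Qx + K.T @ Quu @ k + K.T @ Qu + Qux.T @ k
    Vxx = Qxx + K.T @ Quu @ K + K.T @ Qux + Qux.T @ K
gains.reverse()

# forward pass with feedback policies
x = x0.copy()
for t in range(T):
    k, K = gains[t]
    u = u_nom[t] + k + K @ (x - x_nom[t])
    x = A @ x + B @ u
print("terminal state after one DDP sweep:", x)
```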

Since deep neural networks were developed, they have made huge contributions to everyday life. Machine learning provides more rational advice than humans are capable of in almost every aspect of daily life. However, despite this achievement, the design and training of neural networks remain challenging and unpredictable procedures. To lower the technical threshold for everyday users, automated hyper-parameter optimization (HPO) has become a popular topic in both academia and industry. This paper provides a review of the most essential topics in HPO. The first section introduces the key hyper-parameters related to model training and structure and discusses their importance and methods for defining their value ranges. The paper then focuses on major optimization algorithms and their applicability, covering their efficiency and accuracy, especially for deep learning networks. The study next reviews major services and toolkits for HPO, comparing their support for state-of-the-art search algorithms, compatibility with major deep learning frameworks, and extensibility for new modules designed by users. The paper concludes with the problems that arise when HPO is applied to deep learning, a comparison between optimization algorithms, and prominent approaches for model evaluation with limited computational resources.

The use of orthogonal projections on high-dimensional input and target data in learning frameworks is studied. First, we investigate the relation between two standard objectives in dimension reduction: maximizing variance and preserving pairwise relative distances. The derivation of their asymptotic correlation and numerical experiments show that a projection usually cannot satisfy both objectives at once. In a standard classification problem, we determine projections of the input data that balance the two objectives and compare the subsequent results. Next, we extend the application of orthogonal projections to deep learning frameworks. We introduce new variational loss functions that enable the integration of additional information via transformations and projections of the target data. In two supervised learning problems, clinical image segmentation and music information classification, applying the proposed loss functions increases the accuracy.
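The following minimal sketch contrasts the two objectives on synthetic anisotropic data: a variance-maximizing (PCA) projection versus a random orthogonal projection, reporting retained variance and mean pairwise-distance distortion; the paper's variational loss functions are not reproduced here.

```python
# Minimal sketch: compare a variance-maximizing projection (PCA directions)
# with a random orthogonal projection, measuring retained variance and
# distortion of mean pairwise distance.
import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50)) * np.linspace(5.0, 0.1, 50)   # anisotropic data
X -= X.mean(axis=0)
k = 5

# variance-maximizing projection: top-k right singular vectors
_, _, Vt = np.linalg.svd(X, full_matrices=False)
P_pca = Vt[:k].T

# random orthogonal projection (QR of a Gaussian matrix)
P_rand, _ = np.linalg.qr(rng.normal(size=(50, k)))

def report(name, P):
    Z = X @ P
    var_ratio = Z.var(axis=0).sum() / X.var(axis=0).sum()
    dist_ratio = pdist(Z).mean() / pdist(X).mean()
    print(f"{name}: retained variance {var_ratio:.2f}, mean distance ratio {dist_ratio:.2f}")

report("PCA projection   ", P_pca)
report("random projection", P_rand)
```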

We present an end-to-end framework for solving the Vehicle Routing Problem (VRP) using reinforcement learning. In this approach, we train a single model that finds near-optimal solutions for problem instances sampled from a given distribution, only by observing the reward signals and following feasibility rules. Our model represents a parameterized stochastic policy, and by applying a policy gradient algorithm to optimize its parameters, the trained model produces the solution as a sequence of consecutive actions in real time, without the need to re-train for every new problem instance. On capacitated VRP, our approach outperforms classical heuristics and Google's OR-Tools on medium-sized instances in solution quality with comparable computation time (after training). We demonstrate how our approach can handle problems with split delivery and explore the effect of such deliveries on the solution quality. Our proposed framework can be applied to other variants of the VRP such as the stochastic VRP, and has the potential to be applied more generally to combinatorial optimization problems.
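A minimal REINFORCE-style sketch in PyTorch is shown below: a toy policy constructs a tour by sampling unvisited nodes, and the policy-gradient loss uses the tour length as negative reward with a moving-average baseline. The scoring network, baseline, and problem setup are simplified stand-ins for the paper's VRP model.

```python
# Minimal REINFORCE sketch: a toy policy builds a tour by sampling unvisited
# nodes; the policy-gradient loss weights log-probabilities by the advantage
# (tour length minus a moving-average baseline). Not the paper's architecture.
import torch

torch.manual_seed(0)
n_nodes, epochs = 10, 200
scorer = torch.nn.Linear(2, 1)                      # scores a node from its coordinates
opt = torch.optim.Adam(scorer.parameters(), lr=1e-2)
baseline = None

for _ in range(epochs):
    coords = torch.rand(n_nodes, 2)
    visited = torch.zeros(n_nodes, dtype=torch.bool)
    log_probs, tour = [], []
    for _ in range(n_nodes):
        scores = scorer(coords).squeeze(-1).masked_fill(visited, float("-inf"))
        dist = torch.distributions.Categorical(logits=scores)
        a = dist.sample()
        log_probs.append(dist.log_prob(a))
        visited[a] = True
        tour.append(a)
    route = coords[torch.stack(tour)]
    cost = (route[1:] - route[:-1]).norm(dim=-1).sum()          # tour length
    baseline = cost.detach() if baseline is None else 0.9 * baseline + 0.1 * cost.detach()
    loss = (cost.detach() - baseline) * torch.stack(log_probs).sum()
    opt.zero_grad()
    loss.backward()
    opt.step()

print("final sampled tour cost:", cost.item())
```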

Dynamic topic models (DTMs) model the evolution of prevalent themes in literature, online media, and other forms of text over time. DTMs assume that word co-occurrence statistics change continuously and therefore impose continuous stochastic process priors on their model parameters. These dynamical priors make inference much harder than in regular topic models and also limit scalability. In this paper, we present several new results on DTMs. First, we extend the class of tractable priors from Wiener processes to the generic class of Gaussian processes (GPs). This allows us to explore topics that develop smoothly over time, have a long-term memory, or are temporally concentrated (for event detection). Second, we show how to perform scalable approximate inference in these models based on ideas from stochastic variational inference and sparse Gaussian processes. This way, we can fit a rich family of DTMs to massive data. Our experiments on several large-scale datasets show that our generalized model allows us to find interesting patterns that were not accessible with previous approaches.
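The sketch below simply draws trajectory samples from different GP priors (the classical Wiener prior, a smooth long-length-scale kernel, and a short-length-scale kernel) to illustrate the behaviors the generalized priors can encode; it is prior sampling only, not the paper's variational inference procedure.

```python
# Minimal sketch: sample topic-strength trajectories from different GP priors
# to contrast the Wiener prior of classical DTMs with smoother or more
# temporally concentrated alternatives. Prior sampling only, no inference.
import numpy as np

t = np.linspace(0.0, 10.0, 100)

def wiener(t1, t2):
    return np.minimum(t1, t2)                       # Brownian-motion covariance

def rbf(t1, t2, length):
    return np.exp(-0.5 * (t1 - t2) ** 2 / length**2)

rng = np.random.default_rng(0)
T1, T2 = np.meshgrid(t, t, indexing="ij")
for name, K in [("wiener", wiener(T1, T2)),
                ("smooth / long memory", rbf(T1, T2, length=3.0)),
                ("temporally concentrated", rbf(T1, T2, length=0.3))]:
    sample = rng.multivariate_normal(np.zeros(len(t)), K + 1e-8 * np.eye(len(t)))
    print(f"{name}: sample std over time = {sample.std():.2f}")
```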

We develop an approach to risk minimization and stochastic optimization that provides a convex surrogate for variance, allowing near-optimal and computationally efficient trading between approximation and estimation error. Our approach builds on techniques for distributionally robust optimization and Owen's empirical likelihood, and we provide a number of finite-sample and asymptotic results characterizing the theoretical performance of the estimator. In particular, we show that our procedure comes with certificates of optimality, achieving (in some scenarios) faster rates of convergence than empirical risk minimization by virtue of automatically balancing bias and variance. We give corroborating empirical evidence showing that, in practice, the estimator indeed trades between variance and absolute performance on a training sample, improving out-of-sample (test) performance over standard empirical risk minimization for a number of classification problems.
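As a hedged illustration of this mean-variance trade-off, the sketch below evaluates the empirical objective mean(loss) + C·sqrt(Var(loss)/n), which variance-expansion arguments relate to distributionally robust objectives of this kind; the constant C and the toy loss samples are illustrative and do not reproduce the paper's full estimator or its certificates.

```python
# Minimal sketch of trading mean loss against variance via the empirical
# objective  mean(loss) + C * sqrt(Var(loss)/n).  Toy losses, illustrative C.
import numpy as np

def variance_regularized_risk(losses, C=1.0):
    n = losses.size
    return losses.mean() + C * np.sqrt(losses.var(ddof=1) / n)

rng = np.random.default_rng(0)
low_var = rng.normal(0.50, 0.05, size=1000)     # slightly worse mean, low variance
high_var = rng.normal(0.45, 1.50, size=1000)    # better mean, high variance

for name, l in [("low-variance model", low_var), ("high-variance model", high_var)]:
    print(f"{name}: mean {l.mean():.3f}, "
          f"regularized risk {variance_regularized_risk(l, C=5.0):.3f}")
```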
