
Compared with traditional design methods, generative design has attracted significant interest from engineers across disciplines. In this work, we investigate how to achieve real-time generative design of optimized structures with high diversity and controllable structural complexity. To this end, a modified Moving Morphable Component (MMC) method, together with novel strategies, is adopted to generate a high-quality dataset. The complexity level of the optimized structures is categorized by a topological invariant. By improving the cost function, a WGAN is trained to produce optimized designs in real time, taking the loading position and complexity level as input. It is found that the proposed model can generate diverse designs with clear load-transmission paths and crisp boundaries that require no further optimization and differ from every reference design in the dataset. This method holds great potential for future applications of machine-learning-enhanced intelligent design.
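As a rough sketch of how such a model could be wired up (our illustration, not the authors' architecture; the layer sizes, conditioning format, and names are assumptions), a conditional WGAN generator maps a latent vector plus the loading position and complexity level to a structural density field:

```python
# Hedged sketch (not the paper's code): a conditional WGAN-style generator
# that maps a latent vector plus (loading position, complexity level) to a
# structural topology image. Architecture and sizes are assumptions.
import torch
import torch.nn as nn

class ConditionalGenerator(nn.Module):
    def __init__(self, latent_dim=64, cond_dim=2, img_size=64):
        super().__init__()
        self.img_size = img_size
        self.net = nn.Sequential(
            nn.Linear(latent_dim + cond_dim, 256),
            nn.ReLU(),
            nn.Linear(256, 512),
            nn.ReLU(),
            nn.Linear(512, img_size * img_size),
            nn.Sigmoid(),          # material densities in [0, 1]
        )

    def forward(self, z, cond):
        # cond = [normalized loading position, complexity level]
        x = torch.cat([z, cond], dim=1)
        return self.net(x).view(-1, 1, self.img_size, self.img_size)

z = torch.randn(1, 64)
cond = torch.tensor([[0.5, 2.0]])          # load at mid-span, complexity 2
design = ConditionalGenerator()(z, cond)   # (1, 1, 64, 64) density field
```

In a full WGAN setup, this generator would be trained against a critic using the Wasserstein loss, with the conditioning vector also fed to the critic.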

Related content

We propose Stein-type estimators for zero-inflated Bell regression models by incorporating information on model parameters. These estimators combine the advantages of unrestricted and restricted estimators. We derive the asymptotic distributional properties, including bias and mean squared error, for the proposed shrinkage estimators. Monte Carlo simulations demonstrate the superior performance of our shrinkage estimators across various scenarios. Furthermore, we apply the proposed estimators to analyze a real dataset, showcasing their practical utility.
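The core idea of combining unrestricted and restricted estimators can be sketched as follows (a generic positive-part Stein-type rule under assumed notation, not the paper's exact estimator for zero-inflated Bell regression):

```python
# Hedged sketch: shrink the unrestricted estimate toward the restricted one
# by a data-driven factor based on a Wald-type distance between them.
import numpy as np

def stein_shrinkage(beta_u, beta_r, cov_u, q):
    """Positive-part Stein-type rule. q = number of restrictions
    (the classic factor q - 2 needs q >= 3)."""
    diff = beta_u - beta_r
    T = float(diff @ np.linalg.solve(cov_u, diff))   # Wald-type distance
    shrink = max(0.0, 1.0 - (q - 2) / max(T, 1e-12))
    return beta_r + shrink * diff

beta_u = np.array([0.8, -0.3, 0.1, 0.05])  # unrestricted fit
beta_r = np.array([0.8, 0.0, 0.0, 0.0])    # restriction: last three are zero
cov_u = 0.05 * np.eye(4)
print(stein_shrinkage(beta_u, beta_r, cov_u, q=3))
```

When the restriction looks consistent with the data (small T), the estimate is pulled strongly toward the restricted fit; otherwise it stays close to the unrestricted one.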

Activation Patching is a method of directly computing causal attributions of behavior to model components. However, applying it exhaustively requires a sweep whose cost scales linearly with the number of model components, which can be prohibitively expensive for SoTA Large Language Models (LLMs). We investigate Attribution Patching (AtP), a fast gradient-based approximation to Activation Patching, and find two classes of failure modes of AtP which lead to significant false negatives. We propose a variant of AtP called AtP*, with two changes to address these failure modes while retaining scalability. We present the first systematic study of AtP and alternative methods for faster activation patching and show that AtP significantly outperforms all other investigated methods, with AtP* providing further significant improvement. Finally, we provide a method to bound the probability of remaining false negatives of AtP* estimates.
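The first-order approximation at the heart of AtP can be sketched in a few lines (our illustration, assuming activations and gradients have already been cached; not the paper's implementation): the effect of patching a component's activation a with a counterfactual a' is estimated as (a' - a) · dm/da, avoiding one forward pass per component.

```python
# Hedged sketch of Attribution Patching's core estimate: one clean forward
# pass (with gradients of the metric m) and one corrupted forward pass give
# attribution scores for every component at once.
import torch

def atp_scores(clean_acts, corrupt_acts, clean_grads):
    """All arguments: dicts mapping component name -> tensor of equal shape."""
    return {
        name: torch.sum((corrupt_acts[name] - clean_acts[name])
                        * clean_grads[name]).item()
        for name in clean_acts
    }

# Toy usage with random tensors standing in for cached activations/grads.
names = ["layer0.attn", "layer0.mlp"]
clean = {n: torch.randn(4, 8) for n in names}
corrupt = {n: torch.randn(4, 8) for n in names}
grads = {n: torch.randn(4, 8) for n in names}
print(atp_scores(clean, corrupt, grads))
```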

We address the problem of testing conditional mean and conditional variance for non-stationary data. We build e-values and p-values for four types of non-parametric composite hypotheses with specified mean and variance as well as other conditions on the shape of the data-generating distribution. These shape conditions include symmetry, unimodality, and their combination. Using the obtained e-values and p-values, we construct tests via e-processes, also known as testing by betting, as well as some tests based on combining p-values for comparison. Although we mainly focus on one-sided tests, the two-sided test for the mean is also studied. Simulation and empirical studies are conducted under a few settings, and they illustrate features of the methods based on e-processes.
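A minimal example of testing by betting (our sketch with a fixed bet; the paper's e-value constructions are more refined and exploit the shape conditions) for H0: E[X_t] <= mu0 with bounded observations:

```python
# Hedged sketch of an e-process: wealth multiplies e-values 1 + lam*(X - mu0),
# which is a supermartingale under H0; large wealth is evidence against H0.
import numpy as np

def e_process(x, mu0, lam=0.5):
    # For x in [0, 1], lam in (0, 1/mu0) keeps every factor positive.
    factors = 1.0 + lam * (x - mu0)
    return np.cumprod(factors)

rng = np.random.default_rng(0)
x = rng.beta(4, 2, size=200)     # true mean 2/3, above mu0 = 0.5
wealth = e_process(x, mu0=0.5)
print(wealth[-1])                # reject H0 at level alpha if wealth > 1/alpha
```

By Ville's inequality, the probability that the wealth ever exceeds 1/alpha under H0 is at most alpha, which is what makes the test anytime-valid.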

The comparison of frequency distributions is a common statistical task with broad applications. However, existing measures do not explicitly quantify the magnitude and direction by which one distribution is shifted relative to another. In the present study, we define distributional shift (DS) as the concentration of frequencies towards the lowest discrete class, e.g., the left-most bin of a histogram. We measure DS via the sum of cumulative frequencies and define relative distributional shift (RDS) as the difference in DS between distributions. Using simulated random sampling, we show that RDS is highly related to measures that are widely used to compare frequency distributions. Focusing on specific applications, we show that DS and RDS provide insights into healthcare billing distributions, ecological species-abundance distributions, and economic distributions of wealth. RDS has the unique advantage of being a signed (i.e., directional) measure based on a simple difference in an intuitive property that, in turn, serves as a measure of rarity, poverty, and scarcity.
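Because DS and RDS are defined directly from cumulative frequencies, they are easy to compute; here is a minimal sketch following the abstract's description (the normalization by the number of classes is our assumption):

```python
# Hedged sketch of DS and RDS as described in the abstract: DS sums the
# cumulative relative frequencies over the ordered classes; RDS is a signed
# difference in DS between two distributions.
import numpy as np

def ds(counts):
    p = np.asarray(counts, float) / np.sum(counts)
    return np.cumsum(p).sum() / len(p)   # in (0, 1]; 1 = all mass in class 0

def rds(counts_a, counts_b):
    return ds(counts_a) - ds(counts_b)   # > 0: A shifted toward low classes

left_heavy = [50, 30, 15, 5]
uniform = [25, 25, 25, 25]
print(ds(left_heavy), ds(uniform), rds(left_heavy, uniform))  # 0.8125 0.625 0.1875
```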

Motivated by cryptographic applications, we investigate two machine learning approaches to modular multiplication: circular regression and a sequence-to-sequence transformer model. The limited success of both methods in our experiments provides evidence for the hardness of the modular-multiplication tasks on which such cryptosystems are based.
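The circular-regression setup can be illustrated as follows (toy sizes and encoding details are our assumptions): residues modulo N are mapped to angles on the unit circle, so the regression targets respect the wrap-around at N.

```python
# Hedged sketch: encode y = (a * s) mod N as the point (cos t, sin t) with
# t = 2*pi*y/N, so residues N-1 and 0 are neighbors rather than far apart.
import numpy as np

N, s = 251, 173                      # toy modulus and hidden multiplier
a = np.arange(N)
y = (a * s) % N
theta = 2 * np.pi * y / N
targets = np.stack([np.cos(theta), np.sin(theta)], axis=1)

# Any regressor can be fit to predict `targets` from `a`; recovering s is
# what turns out to be hard. Below: round-trip check of the encoding.
angles_hat = np.arctan2(targets[:, 1], targets[:, 0])
y_hat = np.round((angles_hat % (2 * np.pi)) * N / (2 * np.pi)).astype(int) % N
print(np.all(y_hat == y))            # True
```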

We derive sharp-interface models for one-dimensional brittle fracture via the inverse-deformation approach. Methods of Gamma-convergence are employed to obtain the singular limits of previously proposed models. The latter feature a local, non-convex stored energy of inverse strain, augmented by small interfacial energy, formulated in terms of the inverse-strain gradient. They predict spontaneous fracture with exact crack-opening discontinuities, without the use of damage (phase) fields or pre-existing cracks; crack faces are endowed with a thin layer of surface energy. The models obtained herewith inherit the same properties, except that surface energy is now concentrated at the crack faces. Accordingly, we construct energy-minimizing configurations. For a composite bar with a breakable layer, our results predict a pattern of equally spaced cracks whose number is given as an increasing function of applied load.
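As a schematic illustration of this class of models (the specific stored energy, scaling, and limit form below are our assumptions, not the paper's exact functionals), a regularized one-dimensional energy in the inverse deformation w and its sharp-interface Gamma-limit might take the form:

```latex
% Hedged sketch: schematic only; the paper's functionals and scalings differ.
E_\varepsilon(w) = \int_0^1 \Big[ \varphi\big(w'(y)\big)
                 + \varepsilon^2 \,\big|w''(y)\big|^2 \Big]\, dy ,
\qquad
E_0(w) = \int_0^1 \varphi\big(w'(y)\big)\, dy
                 \;+\; \kappa \,\#\{\text{jumps of } w'\},
```

with phi local and non-convex in the inverse strain w', and the limit E_0 concentrating a fixed surface energy kappa at each crack face (each jump of w'), matching the abstract's description.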

In large-scale, data-driven applications, parameters are often only known approximately due to noise and limited data samples. In this paper, we focus on high-dimensional optimization problems with linear constraints under uncertainty. To find high-quality solutions whose violation of the true constraints is limited, we develop a linear shrinkage method that blends random matrix theory and robust optimization principles. It aims to minimize the Frobenius distance between the estimated and the true parameter matrix, especially when the numbers of constraints and variables are large and comparable. This data-driven method excels in simulations, showing superior noise resilience and more stable performance, both in obtaining high-quality solutions and in adhering to the true constraints, compared to traditional robust optimization. Our findings highlight the effectiveness of our method in improving the robustness and reliability of optimization in high-dimensional, data-driven scenarios.
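The linear shrinkage idea can be sketched as follows (our simplified illustration: the shrinkage target and weight here are ad hoc, whereas the paper derives them from random matrix theory to minimize the Frobenius distance):

```python
# Hedged sketch: blend a noisy parameter-matrix estimate with a structured
# target; a well-chosen weight trades bias for a large variance reduction.
import numpy as np

def linear_shrinkage(A_hat, alpha):
    target = np.mean(np.diag(A_hat)) * np.eye(*A_hat.shape)  # simple target
    return (1 - alpha) * A_hat + alpha * target

rng = np.random.default_rng(1)
A_true = rng.normal(size=(50, 50))
A_hat = A_true + 0.5 * rng.normal(size=(50, 50))             # noisy estimate
for alpha in (0.0, 0.2, 0.4):
    err = np.linalg.norm(linear_shrinkage(A_hat, alpha) - A_true)
    print(alpha, round(err, 2))      # moderate shrinkage beats alpha = 0
```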

Although metaheuristics have been widely recognized as efficient techniques to solve real-world optimization problems, implementing them from scratch remains difficult for domain-specific experts without programming skills. In this scenario, metaheuristic optimization frameworks are a practical alternative, as they provide a variety of algorithms composed of customized elements, as well as experimental support. Recently, many engineering problems have required the optimization of multiple or even many objectives, increasing the interest in appropriate metaheuristic algorithms and frameworks that can integrate new specific requirements while maintaining the generality and reusability principles they were conceived for. Based on this idea, this paper introduces JCLEC-MO, a Java framework for both multi- and many-objective optimization that enables engineers to apply, or adapt, a great number of multi-objective algorithms with little coding effort. A case study is developed and explained to show how JCLEC-MO can be used to address many-objective engineering problems, often requiring the inclusion of domain-specific elements, and to analyze experimental outcomes by means of conveniently connected R utilities.

Most state-of-the-art machine learning techniques revolve around the optimisation of loss functions. Defining appropriate loss functions is therefore critical to successfully solving problems in this field. We present a survey of the most commonly used loss functions for a wide range of different applications, divided into classification, regression, ranking, sample generation and energy-based modelling. Overall, we introduce 33 different loss functions and we organise them into an intuitive taxonomy. Each loss function is given a theoretical backing and we describe where it is best used. This survey aims to provide a reference of the most essential loss functions for both beginner and advanced machine learning practitioners.
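For concreteness, here are minimal implementations of two losses that any such taxonomy covers, cross-entropy for classification and mean squared error for regression (these implementations are ours, not code from the survey):

```python
# Two standard loss functions, written from their textbook definitions.
import numpy as np

def cross_entropy(p_true, p_pred, eps=1e-12):
    # p_true: one-hot labels; p_pred: predicted class probabilities
    return -np.sum(p_true * np.log(p_pred + eps), axis=-1).mean()

def mse(y_true, y_pred):
    return np.mean((y_true - y_pred) ** 2)

labels = np.array([[1, 0, 0], [0, 1, 0]])
probs = np.array([[0.7, 0.2, 0.1], [0.1, 0.8, 0.1]])
print(cross_entropy(labels, probs))                        # ~0.29
print(mse(np.array([1.0, 2.0]), np.array([1.1, 1.9])))     # 0.01
```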

In large-scale systems there are fundamental challenges when centralised techniques are used for task allocation. The number of interactions is limited by resource constraints on computation, storage, and network communication. We can increase scalability by implementing the system as a distributed task-allocation system, sharing tasks across many agents. However, distribution increases the resource cost of communication and synchronisation, which is itself difficult to scale. In this paper we present four algorithms to solve these problems. Their combination enables each agent to improve its task-allocation strategy through reinforcement learning, while adjusting how much it explores the system in response to how close to optimal it believes its current strategy is, given its past experience. We focus on distributed agent systems where the agents' behaviours are constrained by resource-usage limits, restricting agents to local rather than system-wide knowledge. We evaluate these algorithms in a simulated environment where agents are given a task composed of multiple subtasks that must be allocated to other agents with differing capabilities, which then carry out those tasks. We also simulate real-life system effects such as networking instability. Our solution is shown to solve the task-allocation problem to within 6.7% of the theoretical optimum for the system configurations considered. It provides 5x better performance recovery than approaches without knowledge retention when system connectivity is impacted, and it is tested on systems of up to 100 agents with less than a 9% impact on the algorithms' performance.
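A minimal sketch of the exploration-adaptation idea (our illustration, not the paper's four algorithms): an agent allocates subtasks epsilon-greedily and shrinks its exploration rate as its experience, a proxy for confidence in its current strategy, grows.

```python
# Hedged sketch: epsilon-greedy allocation with an exploration rate that
# decays as the agent accumulates observations about its peers.
import random

class AllocatorAgent:
    def __init__(self, peers, eps_max=0.5, eps_min=0.05):
        self.q = {p: 0.0 for p in peers}   # estimated value of each peer
        self.n = {p: 0 for p in peers}     # times each peer was chosen
        self.eps_max, self.eps_min = eps_max, eps_min

    def epsilon(self):
        # More observations -> more confidence in the strategy -> explore less.
        total = sum(self.n.values())
        return max(self.eps_min, self.eps_max / (1 + 0.1 * total))

    def choose(self):
        if random.random() < self.epsilon():
            return random.choice(list(self.q))   # explore
        return max(self.q, key=self.q.get)       # exploit

    def update(self, peer, reward):
        self.n[peer] += 1
        self.q[peer] += (reward - self.q[peer]) / self.n[peer]  # running mean

agent = AllocatorAgent(peers=["a", "b", "c"])
for _ in range(100):
    p = agent.choose()
    agent.update(p, reward={"a": 0.9, "b": 0.5, "c": 0.2}[p])
print(max(agent.q, key=agent.q.get), round(agent.epsilon(), 3))
```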
