
We have developed a simulation technique that uses non-linear finite element analysis and elastic fracture mechanics to compute physically plausible motion for three-dimensional, solid objects as they break, crack, or tear. When these objects deform beyond their mechanical limits, the system automatically determines where fractures should begin and in what directions they should propagate. The system allows fractures to propagate in arbitrary directions by dynamically restructuring the elements of a tetrahedral mesh. Because cracks are not limited to the original element boundaries, the objects can form irregularly shaped shards and edges as they shatter. The result is realistic fracture patterns such as the ones shown in our examples. This paper presents an overview of the fracture algorithm; the details are presented in our ACM SIGGRAPH 1999 and 2002 papers.
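To make the crack-initiation step concrete, here is a minimal Python sketch of the separation-tensor test described in the SIGGRAPH 1999 paper: the forces acting at a mesh node are split into tensile and compressive parts, and a crack is seeded when the tensor's largest eigenvalue exceeds the material toughness, with the fracture plane perpendicular to the corresponding eigenvector. The function names and the flat force-list interface are our own simplification, not the paper's code.

```python
import numpy as np

def m(a):
    """Symmetric rank-one tensor a a^T / |a| (zero tensor for a zero vector)."""
    norm = np.linalg.norm(a)
    return np.outer(a, a) / norm if norm > 0 else np.zeros((3, 3))

def separation_tensor(tensile, compressive):
    """Separation tensor at a node: unbalanced tensile load minus
    unbalanced compressive load, in the balanced form of the 1999 paper."""
    f_plus, f_minus = sum(tensile), sum(compressive)
    s = -m(f_plus) + sum(m(f) for f in tensile)
    s = s + m(f_minus) - sum(m(f) for f in compressive)
    return 0.5 * s

def fracture_plane(tensile, compressive, toughness):
    """Unit normal of a new fracture plane, or None if no crack starts.
    A crack is seeded when the largest eigenvalue of the separation tensor
    exceeds the material toughness; the plane is perpendicular to the
    corresponding eigenvector."""
    vals, vecs = np.linalg.eigh(separation_tensor(tensile, compressive))
    return vecs[:, -1] if vals[-1] > toughness else None  # eigh sorts ascending

# Two opposing pulls at a node crack it across the pull axis.
pulls = [np.array([1.0, 0.0, 0.0]), np.array([-1.0, 0.0, 0.0])]
print(fracture_plane(pulls, [], toughness=0.5))  # ~ [1, 0, 0]
```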

Related content

A code of length $n$ is said to be (combinatorially) $(\rho,L)$-list decodable if the Hamming ball of radius $\rho n$ around any vector in the ambient space contains at most $L$ codewords. We study a recently introduced class of higher order MDS codes, which are closely related (via duality) to codes that achieve a generalized Singleton bound for list decodability. For some $\ell\geq 1$, higher order MDS codes of length $n$, dimension $k$, and order $\ell$ are denoted as $(n,k)$-MDS($\ell$) codes. We present a number of results on the structure of these codes, identifying the `extend-ability' of their parameters in various scenarios. Specifically, for some parameter regimes, we identify conditions under which $(n_1,k_1)$-MDS($\ell_1$) codes can be obtained from $(n_2,k_2)$-MDS($\ell_2$) codes via various techniques. We believe that these results will aid in efficient constructions of higher order MDS codes. We also obtain a new upper bound on the field size required for such codes to exist, which improves on the best known bound in some parameter regimes.
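For small parameters, the combinatorial definition above can be checked by brute force, enumerating every center in the ambient space and counting codewords in its Hamming ball. The sketch below does exactly that; the function name and the toy repetition-code example are ours, and the enumeration is exponential in $n$, so this is a sanity check rather than a construction tool.

```python
import itertools
import numpy as np

def is_list_decodable(codewords, q, rho, L):
    """Brute-force check of combinatorial (rho, L)-list decodability:
    no Hamming ball of radius floor(rho * n) may contain more than L
    codewords.  Exponential in n; usable only for toy parameters."""
    C = np.array(codewords)
    n = C.shape[1]
    radius = int(np.floor(rho * n))
    for center in itertools.product(range(q), repeat=n):
        dists = (C != np.array(center)).sum(axis=1)  # Hamming distances
        if (dists <= radius).sum() > L:
            return False
    return True

# The length-3 binary repetition code {000, 111} is (1/3, 1)-list
# decodable: every radius-1 ball contains at most one codeword.
print(is_list_decodable([(0, 0, 0), (1, 1, 1)], q=2, rho=1/3, L=1))  # True
```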

In this paper, we propose a topology optimization (TO) framework in which the design is parameterized by a set of convex polygons. Extending feature mapping methods in TO, this representation allows for direct extraction of the geometry. In addition, the method makes it possible to impose geometric constraints, such as feature size control, directly on the polygons; such constraints are otherwise difficult to impose in density- or level-set-based approaches. Polygons provide more varied shapes than simpler primitives like bars, plates, or circles. Each polygon is defined as the feasible set of a collection of halfspaces, and varying the halfspaces' parameters yields diverse configurations of the polygons. Furthermore, the halfspaces are differentiably mapped onto a background mesh to allow for analysis and gradient-driven optimization. The proposed framework is illustrated through numerous examples of 2D structural compliance minimization TO. Key limitations and directions for future research are also summarized.
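A minimal sketch of the geometric core follows: each convex polygon is the feasible set of a few halfspaces, and a sigmoid applied to each halfspace's signed violation, multiplied across halfspaces, yields a smooth density on the background mesh. This is our own illustration under assumed details (sigmoid projection, product combination); in an actual TO loop the same forward map would be written in an autodiff framework so the halfspace parameters receive gradients.

```python
import numpy as np

def polygon_density(points, normals, offsets, sharpness=50.0):
    """Smooth density of one convex polygon {x : n_i . x <= d_i} evaluated
    at mesh points.  Each halfspace gets a sigmoid indicator; their product
    approximates the polygon's characteristic function, with `sharpness`
    controlling how smeared the boundary is."""
    g = points @ normals.T - offsets            # > 0 means outside halfspace i
    per_halfspace = 1.0 / (1.0 + np.exp(sharpness * g))
    return per_halfspace.prod(axis=1)           # smooth AND over halfspaces

# Unit square from four axis-aligned halfspaces: 0 <= x <= 1, 0 <= y <= 1.
normals = np.array([[1., 0.], [-1., 0.], [0., 1.], [0., -1.]])
offsets = np.array([1., 0., 1., 0.])
xs, ys = np.meshgrid(np.linspace(-0.5, 1.5, 5), np.linspace(-0.5, 1.5, 5))
pts = np.stack([xs.ravel(), ys.ravel()], axis=1)
rho = polygon_density(pts, normals, offsets)    # ~1 inside, ~0 outside
```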

Structural causal models provide a formalism for expressing causal relations between variables of interest. Models and variables can represent a system at different levels of abstraction, whereby relations may be coarsened and refined according to the needs of a modeller. However, switching between levels of abstraction requires evaluating a trade-off between consistency and information loss across models. In this paper we introduce a family of interventional measures that an agent may use to evaluate such a trade-off. We consider four measures suited to different tasks, analyze their properties, and propose algorithms to evaluate and learn causal abstractions. Finally, we illustrate the flexibility of our setup by empirically showing how different measures and algorithmic choices may lead to different abstractions.
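As a toy instance of such a measure, one can push a low-level interventional distribution through the abstraction map and compare it with the corresponding high-level interventional distribution. The sketch below uses total variation and a worst case over interventions; this is one simple choice in the spirit of the paper's measures, and all names and the max/TV combination are our own.

```python
import numpy as np

def total_variation(p, q):
    """Total variation distance between two discrete distributions."""
    return 0.5 * np.abs(p - q).sum()

def interventional_error(low_dists, high_dists, tau):
    """Worst-case TV between (i) the low-level interventional distribution
    pushed through the abstraction matrix tau (rows: low-level states,
    columns: high-level states, row-stochastic) and (ii) the high-level
    interventional distribution, over a list of paired interventions."""
    return max(total_variation(p @ tau, q)
               for p, q in zip(low_dists, high_dists))

# Two low-level states collapse onto one high-level state.
tau = np.array([[1., 0.],
                [1., 0.],
                [0., 1.]])
p_low = [np.array([0.3, 0.4, 0.3])]   # after one low-level intervention
p_high = [np.array([0.6, 0.4])]       # after the abstracted intervention
print(interventional_error(p_low, p_high, tau))   # 0.1
```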

Methods for population estimation and inference have evolved over the past decade to allow for the incorporation of spatial information in capture-recapture study designs. Traditional approaches to specifying spatial capture-recapture (SCR) models often rely on an individual-based detection function that decays as the detection location moves farther from an individual's activity center. Such models are intuitive because they incorporate mechanisms of animal space use through their assumptions about activity centers. We generalize SCR models to accommodate a wide range of space use patterns, including those of individuals that exhibit traditional elliptical utilization distributions. Our approach uses underlying Gaussian processes to characterize the space use of individuals, which allows us to account for multimodal space use patterns as well as nonlinear corridors and barriers to movement. We refer to this class of models as geostatistical capture-recapture (GCR) models. We adapt a recursive computing strategy to fit GCR models to data in stages, some of which can be parallelized; this facilitates implementation and leverages modern multicore and distributed computing environments. We demonstrate GCR models by analyzing both simulated data and a data set of capture histories of snowshoe hares in central Colorado, USA.
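The core modeling idea can be sketched in a few lines: discretize the study area, draw a latent Gaussian process as the log intensity of an individual's space use, and normalize to obtain a utilization distribution that may be multimodal or follow corridors, unlike the unimodal kernels of classical SCR. Everything below (kernel choice, the detection-rate link, the name lambda0) is illustrative rather than the paper's exact specification.

```python
import numpy as np

def sq_exp_kernel(coords, sigma2=1.0, length=0.2):
    """Squared-exponential covariance between grid-cell centers."""
    d2 = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1)
    return sigma2 * np.exp(-d2 / (2 * length ** 2))

rng = np.random.default_rng(1)

# Discretize the study area into a grid of cell centers.
g = np.linspace(0, 1, 20)
coords = np.stack(np.meshgrid(g, g), axis=-1).reshape(-1, 2)

# Latent Gaussian process: log intensity of one individual's space use.
K = sq_exp_kernel(coords) + 1e-8 * np.eye(len(coords))
log_intensity = rng.multivariate_normal(np.zeros(len(coords)), K)

# Normalizing gives a utilization distribution that can be multimodal.
u = np.exp(log_intensity)
u /= u.sum()

# A detection model could then tie capture probability at a trap in cell j
# to u[j], e.g. via a baseline encounter rate (hypothetical link):
lambda0 = 50.0
p_detect = 1 - np.exp(-lambda0 * u)
```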

Some patients with COVID-19 show changes in signs and symptoms, such as temperature and oxygen saturation, days before testing positive for SARS-CoV-2, while others remain asymptomatic. It is important to identify these subgroups and to understand which biological and clinical predictors are related to them. This information can provide insight into how the immune system may respond differently to infection and can further be used to identify infected individuals. We propose a flexible nonparametric mixed-effects mixture model that identifies risk factors and classifies patients according to biological changes. We model the latent probability of biological change with a logistic regression model and the trajectories within latent groups with smoothing splines. We develop an EM algorithm that maximizes the penalized likelihood to estimate all parameters and mean functions. We evaluate our method through simulations and apply the proposed model to investigate changes in temperature in a cohort of COVID-19-infected hemodialysis patients.
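The EM structure is easy to see in a stripped-down version: two latent groups ('change' vs. 'no change'), an E-step that computes each subject's posterior group probability, and an M-step that re-fits the group mean curves by weighted averaging. The paper's logistic regression on covariates and penalized smoothing splines are replaced here by a scalar mixing weight and pointwise weighted means, so this is a structural sketch only.

```python
import numpy as np

def em_trajectory_mixture(Y, n_iter=50, sigma2=0.09):
    """Toy EM for a two-group mixture of mean trajectories.
    Y: (n_subjects, n_times) trajectories observed on a common time grid."""
    pi = 0.5                                           # mixing weight
    mu = np.stack([Y.mean(0) - 0.5, Y.mean(0) + 0.5])  # crude initial means
    for _ in range(n_iter):
        # E-step: posterior probability that each subject is in group 1.
        ll0 = -((Y - mu[0]) ** 2).sum(1) / (2 * sigma2)
        ll1 = -((Y - mu[1]) ** 2).sum(1) / (2 * sigma2)
        m = np.maximum(ll0, ll1)                       # numerical stability
        w1, w0 = pi * np.exp(ll1 - m), (1 - pi) * np.exp(ll0 - m)
        r = w1 / (w0 + w1)
        # M-step: update mixing weight and group mean curves.
        pi = r.mean()
        mu[1] = (r[:, None] * Y).sum(0) / r.sum()
        mu[0] = ((1 - r)[:, None] * Y).sum(0) / (1 - r).sum()
    return pi, mu, r

# Simulated check: 40% of subjects follow a sinusoidal 'change' trajectory.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 30)
changed = rng.random(60) < 0.4
Y = np.where(changed[:, None], np.sin(6 * t), 0.0) + 0.3 * rng.standard_normal((60, 30))
pi_hat, mu_hat, resp = em_trajectory_mixture(Y)
```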

Millimeter-Wave (mm-Wave) Radio Access Networks (RANs) are a promising solution to the overcrowding of the sub-6 GHz spectrum, offering wider and underutilized bands. However, they face inherent technical challenges, such as limited propagation range and blockage losses caused by obstacles. Integrated Access and Backhaul (IAB) and Reconfigurable Intelligent Surfaces (RIS) are two technologies devised to address these challenges. This work analyzes the optimal network layout of RANs equipped with IAB and RIS in real urban scenarios, using Mixed-Integer Linear Programming (MILP) formulations to derive practical design guidelines. In particular, it shows how optimizing the peak user throughput of such networks improves the achievable peak throughput, compared to traditional mean-throughput maximization approaches, without sacrificing mean throughput. It also identifies star-like topologies as the best network layout for achieving the highest peak throughputs.
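A heavily stripped-down MILP conveys the flavor of the peak-throughput objective: pick which candidate sites to equip under a budget, associate each user with an open site, and maximize the single best user's rate via a selector variable and a big-M constraint. The real formulations also model IAB backhaul links, RIS placement, and mm-Wave blockage; the rate matrix, budget, and PuLP usage below are our own toy assumptions.

```python
import pulp

rate = [[8, 2, 1], [3, 7, 2], [1, 4, 6], [2, 2, 5]]   # users x candidate sites
U, S, budget, M = range(4), range(3), 2, 10

prob = pulp.LpProblem("peak_throughput_layout", pulp.LpMaximize)
open_ = pulp.LpVariable.dicts("open", S, cat="Binary")      # site equipped?
a = pulp.LpVariable.dicts("assign", (U, S), cat="Binary")   # user-site association
y = pulp.LpVariable.dicts("peak_user", U, cat="Binary")     # who attains the peak
P = pulp.LpVariable("P", lowBound=0)

prob += P                                             # objective: peak user rate
prob += pulp.lpSum(open_[s] for s in S) <= budget     # site budget
prob += pulp.lpSum(y[u] for u in U) == 1              # exactly one peak user
for u in U:
    prob += pulp.lpSum(a[u][s] for s in S) == 1       # each user associates once
    tput = pulp.lpSum(rate[u][s] * a[u][s] for s in S)
    prob += P <= tput + M * (1 - y[u])                # P bounded by peak user's rate
    for s in S:
        prob += a[u][s] <= open_[s]                   # only to equipped sites

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print(pulp.value(P), [s for s in S if open_[s].value() > 0.5])
```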

Achieving resource efficiency while preserving end-user experience is non-trivial for cloud application operators. As cloud applications progressively adopt microservices, resource managers are faced with two distinct levels of system behavior: end-to-end application latency and per-service resource usage. Translating between these two levels is challenging because user requests traverse heterogeneous services that collectively (but unevenly) contribute to the end-to-end latency. This paper presents Autothrottle, a bi-level learning-assisted resource management framework for SLO-targeted microservices. It architecturally decouples the mechanisms of application SLO feedback and service resource control, and bridges them with the notion of performance targets. This decoupling enables targeted control policies for the two mechanisms, combining lightweight heuristics and learning techniques. We evaluate Autothrottle on three microservice applications, with workload traces from production scenarios. Results show superior CPU resource savings: up to 26.21% over the best-performing baseline, and up to 93.84% over all baselines.
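The decoupling can be illustrated with a toy control round: an application-level controller converts SLO feedback into a per-service performance target (Autothrottle's actual top level learns this with a more sophisticated policy), and a per-service heuristic then sizes CPU limits to track that target. All numbers, service names, and the additive/multiplicative rules below are hypothetical.

```python
def application_controller(latency_p99, slo, target, step=0.05):
    """Top level: turn end-to-end latency feedback into a tolerable
    CPU-throttle target shared with the per-service controllers."""
    if latency_p99 > slo:
        return max(0.0, target - step)   # SLO at risk: allow less throttling
    return min(0.5, target + step)       # headroom: tolerate more throttling

def service_controller(observed_throttle, target, cpu_limit):
    """Bottom level: size one service's CPU limit so that its observed
    throttle ratio tracks the target set from above."""
    if observed_throttle > target:
        return cpu_limit * 1.10          # throttled too often: scale up
    return cpu_limit * 0.98              # under target: reclaim CPU slowly

# One control round over two hypothetical services.
target = application_controller(latency_p99=210.0, slo=200.0, target=0.20)
limits = {"frontend": 2.0, "cart": 1.0}
throttles = {"frontend": 0.30, "cart": 0.05}
limits = {s: service_controller(throttles[s], target, c) for s, c in limits.items()}
```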

Many applications can be modelled as graph problems in which placing some facility or commodity at a vertex has a positive or negative effect on the values of all vertices out to some distance, and we want to quickly compute the cumulative effect on any vertex's value at any time, or list the most beneficial or most detrimental effects on a vertex. In this paper we show how, given an edge-weighted graph with constant-size separators, we can support the following operations in time polylogarithmic in the number of vertices and the number of facilities placed on the vertices, where distances between vertices are measured with respect to the edge weights:

- Add(v, f, w, d) places a facility f of weight w and effect radius d onto vertex v.
- Remove(v, f) removes a facility f previously placed on v by Add.
- Sum(v), or Sum(v, d), returns the total weight of all facilities affecting v or, with the distance parameter d, the total weight of all facilities whose effect region intersects the "circle" of radius d around v.
- Top(v, k), or Top(v, k, d), returns the k facilities of greatest weight that affect v or, with the distance parameter d, whose effect regions intersect the "circle" of radius d around v.

The weights of the facilities and the operation Sum uses to "sum" them must form a semigroup; for Top queries, the weights must be drawn from a total order.
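To pin down the semantics of these operations, here is a deliberately naive reference implementation that recomputes single-source distances per query with Dijkstra's algorithm. It runs in polynomial rather than polylogarithmic time and assumes an undirected graph with numeric weights (one instance of the required semigroup); it is a specification aid, not the paper's data structure.

```python
import heapq

class FacilityEffects:
    """Brute-force reference for Add / Remove / Sum / Top."""

    def __init__(self, adj):
        self.adj = adj                   # adj[u] = [(v, edge_weight), ...]
        self.facilities = {}             # id -> (vertex, weight, radius)

    def add(self, v, f, w, d):
        self.facilities[f] = (v, w, d)

    def remove(self, v, f):
        del self.facilities[f]

    def _dist(self, src):                # Dijkstra from the query vertex
        dist, pq = {src: 0.0}, [(0.0, src)]
        while pq:
            du, u = heapq.heappop(pq)
            if du > dist.get(u, float("inf")):
                continue
            for v, w in self.adj.get(u, []):
                if du + w < dist.get(v, float("inf")):
                    dist[v] = du + w
                    heapq.heappush(pq, (dist[v], v))
        return dist

    def _affecting(self, v, d=0.0):
        # facilities whose effect region intersects the radius-d circle at v
        dist = self._dist(v)
        return [(w, f) for f, (u, w, r) in self.facilities.items()
                if dist.get(u, float("inf")) <= r + d]

    def sum(self, v, d=0.0):             # numeric addition as the semigroup op
        return sum(w for w, _ in self._affecting(v, d))

    def top(self, v, k, d=0.0):          # weights drawn from a total order
        return [f for w, f in sorted(self._affecting(v, d), reverse=True)[:k]]

g = {1: [(2, 1.0)], 2: [(1, 1.0), (3, 2.0)], 3: [(2, 2.0)]}
fx = FacilityEffects(g)
fx.add(1, "a", 5.0, 1.5)
fx.add(3, "b", 2.0, 2.0)
print(fx.sum(2), fx.top(2, k=1))         # 7.0 ['a']
```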

Despite the advancement of machine learning techniques in recent years, state-of-the-art systems lack robustness to "real world" events, where the input distributions and tasks encountered by deployed systems will not be limited to the original training context, and systems will instead need to adapt to novel distributions and tasks while deployed. This critical gap may be addressed through the development of "Lifelong Learning" systems that are capable of 1) Continuous Learning, 2) Transfer and Adaptation, and 3) Scalability. Unfortunately, efforts to improve these capabilities are typically treated as distinct areas of research that are assessed independently, without regard to the impact of each capability on other aspects of the system. We instead propose a holistic approach, using a suite of metrics and an evaluation framework to assess Lifelong Learning in a principled way that is agnostic to specific domains and system techniques. Through five case studies, we show that this suite of metrics can inform the development of varied and complex Lifelong Learning systems. We highlight how the proposed suite quantifies performance trade-offs present during Lifelong Learning system development: both the widely discussed Stability-Plasticity dilemma and the newly proposed relationship between Sample-Efficient and Robust Learning. Further, we make recommendations for the formulation and use of metrics to guide the continuing development of Lifelong Learning systems and to assess their future progress.
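Two of the most common continual-learning metrics can be computed directly from a matrix of evaluation scores, which gives a feel for what a metric suite of this kind measures. These are standard forgetting and forward-transfer definitions rather than the paper's exact formulations, and the score matrix below is hypothetical.

```python
import numpy as np

def forgetting(R):
    """R[i, j]: score on task j after training tasks 0..i (T x T).
    Average drop from each earlier task's best score to its final score."""
    best = R[:-1, :-1].max(axis=0)       # best score ever reached per task
    return float((best - R[-1, :-1]).mean())

def forward_transfer(R, baseline):
    """Average score on each task *before* it is trained, relative to an
    untrained baseline; positive values mean earlier tasks helped."""
    T = R.shape[0]
    pre = np.array([R[j - 1, j] for j in range(1, T)])
    return float((pre - baseline[1:]).mean())

# Hypothetical 3-task run: row i = scores after training task i.
R = np.array([[0.90, 0.40, 0.20],
              [0.80, 0.85, 0.35],
              [0.70, 0.75, 0.88]])
print(forgetting(R))                                   # 0.15
print(forward_transfer(R, np.array([0.1, 0.2, 0.2])))  # 0.175
```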

Dynamic programming (DP) solves a variety of structured combinatorial problems by iteratively breaking them down into smaller subproblems. In spite of their versatility, DP algorithms are usually non-differentiable, which hampers their use as a layer in neural networks trained by backpropagation. To address this issue, we propose to smooth the max operator in the dynamic programming recursion using a strongly convex regularizer. This allows us to relax both the optimal value and the solution of the original combinatorial problem, and turns a broad class of DP algorithms into differentiable operators. Theoretically, we provide a new probabilistic perspective on backpropagating through these DP operators, and relate them to inference in graphical models. We derive two particular instantiations of our framework: a smoothed Viterbi algorithm for sequence prediction and a smoothed DTW algorithm for time-series alignment. We showcase these instantiations on two structured prediction tasks and on structured and sparse attention for neural machine translation.
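The construction is easiest to see in the DTW instantiation: replace the hard min in the classical recursion with a smoothed min (a min is just a max of negated values, so this is the same smoothed-operator idea), and the optimal-alignment value becomes differentiable in the cost matrix. Below is a small NumPy sketch of that recursion; the paper's framework covers general DP layers trained end-to-end, while this shows only the forward value.

```python
import numpy as np

def softmin(values, gamma):
    """Smoothed min: -gamma * log sum exp(-v / gamma); recovers the hard
    min (and hence classical DTW) as gamma -> 0."""
    v = np.asarray(values) / -gamma
    m = v.max()
    return -gamma * (m + np.log(np.exp(v - m).sum()))

def soft_dtw(D, gamma=1.0):
    """Smoothed dynamic time warping value for an n x m cost matrix D.
    Each cell adds its local cost to the softmin of its three neighbors,
    making the result differentiable everywhere in D."""
    n, m = D.shape
    R = np.full((n + 1, m + 1), np.inf)
    R[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            R[i, j] = D[i - 1, j - 1] + softmin(
                [R[i - 1, j], R[i, j - 1], R[i - 1, j - 1]], gamma)
    return R[n, m]

# Two short series with a squared-distance cost matrix.
x, y = np.array([0., 1., 2.]), np.array([0., 2.])
D = (x[:, None] - y[None, :]) ** 2
print(soft_dtw(D, gamma=0.1))   # approaches the hard DTW cost as gamma -> 0
```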
