
The present article proposes a partitioned Dirichlet-Neumann algorithm that addresses the unique challenges arising from a novel mixed-dimensional coupling of very slender fibers embedded in fluid flow, discretized with a regularized mortar-type finite element method. Here, the fibers are modeled via one-dimensional (1D) partial differential equations based on geometrically exact nonlinear beam theory, while the flow is described by the three-dimensional (3D) incompressible Navier-Stokes equations. The resulting truly mixed-dimensional 1D-3D coupling scheme constitutes a novel numerical strategy that necessitates specifically tailored algorithmic solution schemes for an accurate and efficient computational treatment. In particular, we present a strongly coupled partitioned solution algorithm based on a quasi-Newton method for applications involving fibers with high slenderness ratios, which typically pose a challenge with regard to the well-known added-mass effect. The influence of all employed algorithmic and numerical parameters, namely the applied acceleration technique, the employed constraint regularization parameter, and the shape functions, on the efficiency and results of the solution procedure is studied through appropriate examples. Finally, the convergence of the two-way coupled mixed-dimensional problem solution under uniform mesh refinement is demonstrated, and the method's capability to capture flow phenomena at large geometric scale separation is illustrated by the example of a submersed vegetation canopy.
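For concreteness, the following minimal sketch illustrates one strongly coupled time step of a Dirichlet-Neumann partitioned iteration with interface quasi-Newton (IQN-ILS-type) acceleration, the class of method the article builds on. The callables `fluid_solve` and `solid_solve`, the relaxation factor, and the tolerances are hypothetical stand-ins, not the paper's actual implementation.

```python
import numpy as np

def iqn_ils_coupling(fluid_solve, solid_solve, x0, tol=1e-8, max_iter=50):
    """One strongly coupled Dirichlet-Neumann step with IQN-ILS acceleration.

    fluid_solve: maps interface displacement -> interface traction (assumed)
    solid_solve: maps interface traction -> interface displacement (assumed)
    x0:          initial guess for the interface displacement
    """
    V, W = [], []                            # residual/value difference history
    x = x0.copy()
    x_tilde = solid_solve(fluid_solve(x))    # one fixed-point sweep
    r = x_tilde - x                          # interface residual
    for _ in range(max_iter):
        if np.linalg.norm(r) < tol:
            break
        if not V:                            # first iteration: plain relaxation
            dx = 0.1 * r                     # illustrative relaxation factor
        else:                                # least-squares quasi-Newton update
            Vm, Wm = np.column_stack(V), np.column_stack(W)
            alpha, *_ = np.linalg.lstsq(Vm, -r, rcond=None)
            dx = Wm @ alpha + r
        x_new = x + dx
        x_tilde_new = solid_solve(fluid_solve(x_new))
        r_new = x_tilde_new - x_new
        V.append(r_new - r)                  # store residual difference
        W.append(x_tilde_new - x_tilde)      # store fixed-point value difference
        x, x_tilde, r = x_new, x_tilde_new, r_new
    return x
```

The least-squares step approximates the inverse interface Jacobian from the iteration history, which is what lets the method cope with the strong added-mass coupling that plain relaxation handles poorly.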

Related content

An assumption that has often been used by researchers to model interference in a wireless network is the unit disk graph model. While many theoretical results and performance guarantees have been obtained under this model, an open research direction is to extend these results to hypergraph interference models. Motivated by recent results showing that the worst-case performance of the distributed maximal scheduling algorithm is characterized by the interference degree of the hypergraph, in the present work we investigate properties of the interference degree of the hypergraph and the structure of hypergraphs arising from physical constraints. We show that the problem of computing the interference degree of a hypergraph is NP-hard, and we prove some properties and results concerning this hypergraph invariant. We investigate which hypergraphs are realizable, i.e., which hypergraphs arise in practice, under physical constraints, as the interference model of a wireless network. In particular, a question that arises naturally is: what is the maximal value of $r$ such that the hypergraph $K_{1,r}$ is realizable? We determine this quantity for various values of the path loss exponent of signal propagation. We also investigate hypergraphs generated by line networks.
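As a toy illustration of why computing this invariant is hard, the brute-force sketch below enumerates subsets, which is exponential in the neighborhood size. The definition encoded here, the largest simultaneously schedulable subset of a link's neighborhood, maximized over links, is our assumed reading of the interference degree, not necessarily the paper's exact definition.

```python
from itertools import combinations

def is_schedulable(subset, hyperedges):
    """Assumption: a set of links can transmit simultaneously iff it
    contains no hyperedge (forbidden conflict set) in its entirety."""
    s = set(subset)
    return not any(set(e) <= s for e in hyperedges)

def interference_degree(vertices, hyperedges):
    """Brute-force interference degree under the assumed definition:
    max over links v of the largest schedulable subset of the
    neighborhood of v (links sharing some hyperedge with v)."""
    best = 0
    for v in vertices:
        nbrs = {u for e in hyperedges if v in e for u in e} - {v}
        for k in range(len(nbrs), best, -1):
            if any(is_schedulable(c, hyperedges)
                   for c in combinations(nbrs, k)):
                best = max(best, k)
                break
    return best

# K_{1,3} as a hypergraph: three pairwise conflicts with a hub link 0
print(interference_degree(range(4), [(0, 1), (0, 2), (0, 3)]))  # -> 3
```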

Relay-enabled backscatter communication (BC) is an intriguing paradigm for alleviating energy shortages and improving the throughput of Internet-of-Things (IoT) devices. Most existing works focus on resource allocation with unequal and continuous time allocation for both the source-relay and relay-destination links. However, continuous time allocation may be infeasible in practice, since time must be allocated in integer multiples of the subframe duration unit. In this article, we study a discrete time scheme from the perspective of frame structure, where one transmission block is divided into two phases and linear mapping is employed as a re-encoding method to determine the number of subframes for both phases and the power allocation for each subframe in a relay-enabled BC system. Based on this, we derive an accurate system-throughput expression and formulate a mixed-integer non-convex optimization problem to maximize the system throughput by jointly optimizing the power reflection coefficient (PRC) of the IoT node, the power allocation of the hybrid access point (HAP), and the linear mapping matrix, and solve it via a three-step approach. Accordingly, we propose a low-complexity iterative algorithm to obtain the throughput-maximizing resource allocation. Numerical results analyze the performance of the proposed algorithm, verify the superiority of the proposed scheme, and evaluate the impact of network parameters on the system throughput.
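A minimal sketch of the discrete-time idea follows: enumerate the integer splits of a block's subframes between the two phases and keep the best. The throughput model here (uniform power per subframe, Shannon rates, the weaker phase limiting relayed traffic) and all parameters are illustrative placeholders, not the paper's system model or its three-step algorithm.

```python
import numpy as np

def best_subframe_split(N, P_total, g1, g2, noise=1.0):
    """Toy search over discrete subframe allocations in a two-phase block.

    N:        total number of subframes in one transmission block
    P_total:  HAP power budget per block (hypothetical)
    g1, g2:   illustrative channel gains of the two phases
    """
    best = (0.0, None)
    for n1 in range(1, N):          # n1 subframes in phase 1, N - n1 in phase 2
        n2 = N - n1
        p = P_total / N             # placeholder: uniform power per subframe
        r1 = n1 * np.log2(1 + g1 * p / noise)
        r2 = n2 * np.log2(1 + g2 * p / noise)
        rate = min(r1, r2)          # relayed traffic limited by the weaker phase
        if rate > best[0]:
            best = (rate, (n1, n2))
    return best

print(best_subframe_split(N=10, P_total=10.0, g1=0.8, g2=0.3))
```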

An algorithm for robust initial orbit determination (IOD) under perturbed orbital dynamics is presented. By leveraging map inversion techniques defined in the algebra of Taylor polynomials, this tool not only returns a highly accurate solution to the IOD problem, but also estimates a range of validity for that solution in which the true orbit state should lie. Automatic domain splitting is then used on top of the IOD routines to ensure that the local truncation error introduced by a polynomial representation of the state estimate remains below a predefined threshold, so as to meet the specified accuracy requirements. The algorithm is adapted to three types of ground-based sensors, namely range radars, Doppler-only radars, and optical telescopes, by taking into account their different constraints in terms of available measurements and sensor noise. Its improved performance with respect to a Keplerian-based IOD solution is finally demonstrated with large-scale numerical simulations over a subset of tracked objects in low Earth orbit.
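The following is a one-dimensional stand-in for the map inversion at the core of such methods: it inverts a truncated univariate Taylor map by fixed-point composition, gaining one order of accuracy per sweep. The actual tool operates on multivariate Taylor expansions of the perturbed dynamics; this sketch only conveys the mechanism.

```python
import numpy as np
from numpy.polynomial import polynomial as P

def compose(p, q, order):
    """Truncated polynomial composition p(q(y)), coefficients ascending."""
    out = np.zeros(order + 1)
    power = np.zeros(order + 1); power[0] = 1.0        # running q**k
    for c in p:
        out += c * power
        power = P.polymul(power, q)[: order + 1]
        power = np.pad(power, (0, order + 1 - power.size))
    return out

def invert_taylor_map(f, order):
    """Invert y = f(dx), with f(0) = 0 and f'(0) != 0, to the given
    truncation order. A 1D stand-in for differential-algebra map
    inversion; each fixed-point sweep gains one order of accuracy."""
    f = np.pad(np.asarray(f, float), (0, max(0, order + 1 - len(f))))
    a1, fn = f[1], f.copy(); fn[1] = 0.0               # split off linear part
    ident = np.zeros(order + 1); ident[1] = 1.0
    g = ident / a1                                     # first-order inverse
    for _ in range(order):
        g = (ident - compose(fn, g, order)) / a1
    return g

# f(dx) = dx + dx^2; the inverse series is y - y^2 + 2y^3 - 5y^4 + ...
print(invert_taylor_map([0.0, 1.0, 1.0], 4))
```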

In this paper, we study a general low-rank matrix recovery problem with linear measurements corrupted by noise. The objective is to understand the conditions on the restricted isometry property (RIP) of the problem under which local search methods can find the ground truth with a small error. By analyzing the landscape of the non-convex problem, we first propose a global guarantee on the maximum distance between an arbitrary local minimizer and the ground truth under the assumption that the RIP constant is smaller than $1/2$. We show that this distance shrinks to zero as the intensity of the noise decreases. Our new guarantee is sharp in terms of the RIP constant and is much stronger than the existing results. We then present a local guarantee for problems with an arbitrary RIP constant, which states that any local minimizer is either considerably close to the ground truth or far away from it. Next, we prove the strict saddle property, which guarantees the global convergence of the perturbed gradient descent method in polynomial time. The developed results demonstrate how the noise intensity and the RIP constant of the problem affect its landscape.
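As a sketch of the local search methods these landscape results speak to, the following runs plain gradient descent on the factorized (Burer-Monteiro) measurement-fitting objective; the toy Gaussian instance, step size, and iteration count are arbitrary choices for illustration.

```python
import numpy as np

def recover_low_rank(A_ops, b, n, r, lr=0.02, iters=4000, seed=0):
    """Plain gradient descent on the factorized objective
        f(X) = (1/2m) * sum_i (<A_i, X X^T> - b_i)^2,
    starting from a random initialization. A_ops holds the measurement
    matrices A_i; b holds the (possibly noisy) measurements."""
    m = len(A_ops)
    rng = np.random.default_rng(seed)
    X = 0.1 * rng.standard_normal((n, r))
    for _ in range(iters):
        res = np.array([np.vdot(Ai, X @ X.T) for Ai in A_ops]) - b
        grad = sum(ri * (Ai + Ai.T) for ri, Ai in zip(res, A_ops)) @ X / m
        X -= lr * grad
    return X

# Toy noiseless instance: Gaussian measurements of a rank-1 ground truth
rng = np.random.default_rng(1)
n, r, m = 8, 1, 80
Z = rng.standard_normal((n, r))
M_true = Z @ Z.T
A_ops = [rng.standard_normal((n, n)) for _ in range(m)]
b = np.array([np.vdot(Ai, M_true) for Ai in A_ops])
X = recover_low_rank(A_ops, b, n, r)
print(np.linalg.norm(X @ X.T - M_true) / np.linalg.norm(M_true))  # small error
```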

Given data on the choices made by consumers for different offer sets, a key challenge is to develop parsimonious models that describe and predict consumer choice behavior while being amenable to prescriptive tasks such as pricing and assortment optimization. The marginal distribution model (MDM) is one such model, requiring only the specification of the marginal distributions of the random utilities. This paper aims to establish necessary and sufficient conditions for given choice data to be consistent with the MDM hypothesis, inspired by the utility of similar characterizations for the random utility model (RUM). This endeavor leads to an exact characterization of the set of choice probabilities that the MDM can represent. Verifying the consistency of choice data with this characterization is equivalent to solving a polynomial-sized linear program. Since the analogous verification task for RUM is computationally intractable and neither of these models subsumes the other, the MDM strikes a balance between tractability and representational power. The characterization can conveniently be combined with robust optimization to make data-driven sales and revenue predictions for new, unseen assortments. When the choice data are inconsistent with the MDM hypothesis, finding the best-fitting MDM choice probabilities reduces to solving a mixed-integer convex program. The results extend naturally to the case where the alternatives can be grouped based on the similarity of the marginal distributions of their utilities. Numerical experiments show that the MDM provides better representational power and prediction accuracy than the multinomial logit model and significantly better computational performance than the RUM.
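To make the model concrete, the sketch below computes MDM choice probabilities from given marginals under one common reading of the model's optimality condition, $x_i = 1 - F_i(\lambda - v_i)$ with $\sum_i x_i = 1$, solved by root finding on the threshold $\lambda$. This is an assumed formulation for illustration, not the paper's characterization or its linear program.

```python
import numpy as np
from scipy.optimize import brentq

def mdm_choice_probs(v, F, lo=-50.0, hi=50.0):
    """Choice probabilities under the marginal distribution model,
    using the assumed optimality condition
        x_i = 1 - F_i(lam - v_i),  sum_i x_i = 1.
    `v` holds deterministic utilities; `F` holds the marginal CDFs."""
    def excess(lam):                      # decreasing in lam, crosses zero once
        return sum(1.0 - Fi(lam - vi) for vi, Fi in zip(v, F)) - 1.0
    lam = brentq(excess, lo, hi)          # threshold making probs sum to one
    return np.array([1.0 - Fi(lam - vi) for vi, Fi in zip(v, F)])

# Example: logistic marginals for all three alternatives
logistic = lambda t: 1.0 / (1.0 + np.exp(-t))
probs = mdm_choice_probs(v=[1.0, 0.5, 0.0], F=[logistic] * 3)
print(probs, probs.sum())                 # probabilities summing to 1
```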

This paper considers the Cauchy problem for a nonlinear dynamic string equation of Kirchhoff type with time-varying coefficients. The objective is to develop a temporal discretization algorithm capable of approximating the solution of this initial-boundary value problem. To this end, a symmetric three-layer semi-discrete scheme is employed with respect to the temporal variable, wherein the nonlinear term is evaluated at the middle node point. This approach enables the numerical solution at each temporal step to be obtained by inverting linear operators, yielding a system of second-order linear ordinary differential equations. Local convergence of the proposed scheme is established: it achieves quadratic convergence with respect to the time step on the local temporal interval. We have conducted several numerical experiments with the proposed algorithm on various test problems to validate its performance; the numerical results obtained are in accordance with the theoretical findings.
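One plausible instance of such a symmetric three-layer scheme for a Kirchhoff-type equation $u_{tt} - \big(a(t) + b(t)\int_0^L u_x^2\,dx\big)u_{xx} = f$ reads as follows; this is our reconstruction for illustration, and the paper's exact form may differ:

```latex
\frac{u^{k+1} - 2u^{k} + u^{k-1}}{\tau^{2}}
  - \Big( a(t_{k}) + b(t_{k}) \int_{0}^{L} \big(u_{x}^{k}\big)^{2}\,dx \Big)\,
    \frac{\partial^{2}}{\partial x^{2}}\!\left(\frac{u^{k+1} + u^{k-1}}{2}\right)
  = f(x, t_{k})
```

Freezing the nonlinear Kirchhoff coefficient at the middle layer $u^{k}$ makes the equation for $u^{k+1}$ linear, so each step reduces to inverting a linear second-order differential operator in $x$, consistent with the abstract's description.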

Low-rank matrix approximations appear in a number of scientific computing applications. We consider the Nystr\"{o}m method for approximating a positive semidefinite matrix $A$. In the case that $A$ is very large or its entries can only be accessed once, a single-pass version may be necessary. In this work, we perform a complete rounding error analysis of the single-pass Nystr\"{o}m method in two precisions, where the computation of the expensive matrix product with $A$ is assumed to be performed in the lower of the two precisions. Our analysis gives insight into how the sketching matrix and shift should be chosen to ensure stability, implementation aspects which have been commented on in the literature but not yet rigorously justified. We further develop a heuristic for choosing the lower precision, which confirms the general intuition that the lower the desired rank of the approximation, the lower the precision that can be used without detriment. We also demonstrate that our mixed-precision Nystr\"{o}m method can be used to inexpensively construct limited-memory preconditioners for the conjugate gradient method, and we derive a bound on the condition number of the resulting preconditioned coefficient matrix. We present numerical experiments on a set of matrices with various spectral decays and demonstrate the utility of our mixed-precision approach.
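A compact sketch of the method under analysis, in the shifted, numerically stable single-pass formulation, is given below; the choice of `float32` as the lower precision, the Gaussian sketching matrix, and the Frobenius-norm-based shift are illustrative choices, not prescriptions from the paper.

```python
import numpy as np

def single_pass_nystrom(A, k, low=np.float32, seed=0):
    """Single-pass Nystrom approximation of a PSD matrix A, with the
    expensive product A @ Omega taken in the lower precision. The shift
    nu guards the Cholesky factorization against loss of positive
    definiteness introduced by rounding."""
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    Omega = rng.standard_normal((n, k))
    Y = (A.astype(low) @ Omega.astype(low)).astype(np.float64)  # low-precision pass
    nu = np.finfo(low).eps * np.linalg.norm(Y)                  # stabilizing shift
    Y_nu = Y + nu * Omega
    B = Omega.T @ Y_nu
    C = np.linalg.cholesky((B + B.T) / 2.0)       # symmetrize before factoring
    F = np.linalg.solve(C, Y_nu.T).T              # F = Y_nu @ inv(C).T
    return F, nu                                  # A_hat = F @ F.T - nu * I

# Toy PSD matrix with fast spectral decay
rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.standard_normal((200, 200)))
A = (Q * (2.0 ** -np.arange(200))) @ Q.T
F, nu = single_pass_nystrom(A, k=20)
A_hat = F @ F.T - nu * np.eye(200)
print(np.linalg.norm(A - A_hat) / np.linalg.norm(A))
```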

Many food products involve mixtures of ingredients, where the mixtures can be expressed as combinations of ingredient proportions. In many cases, product quality and consumer preference may also depend on the way in which the mixtures are processed. The processing is generally defined by the settings of one or more process variables. Experimental designs studying the joint impact of the mixture ingredient proportions and the settings of the process variables are called mixture-process variable experiments. In this article, we show how to combine mixture-process variable experiments and discrete choice experiments to quantify and model consumer preferences for food products that can be viewed as processed mixtures. First, we describe the modeling of data from such combined experiments. Next, we describe how to generate D- and I-optimal designs for choice experiments involving mixtures and process variables, and we compare the two kinds of designs using two examples.
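As a sketch of the modeling step, the snippet below builds a hypothetical, simplified model-matrix row from Scheffé quadratic mixture terms crossed with one process variable and evaluates multinomial-logit choice probabilities for a two-alternative choice set; the feature specification and coefficient values are illustrative only.

```python
import numpy as np

def scheffe_quadratic_features(x, z):
    """Model-matrix row for one profile: Scheffe quadratic mixture terms
    for ingredient proportions x (summing to 1), crossed with a single
    process variable z (assumed, simplified specification)."""
    x = np.asarray(x, float)
    q = len(x)
    main = list(x)                                   # x_i terms (no intercept)
    blend = [x[i] * x[j] for i in range(q) for j in range(i + 1, q)]
    process = [xi * z for xi in x]                   # mixture-by-process terms
    return np.array(main + blend + process)

def mnl_choice_probs(profiles, beta):
    """Multinomial-logit probabilities for one choice set of profiles."""
    u = np.array([f @ beta for f in profiles])
    e = np.exp(u - u.max())                          # numerically stable softmax
    return e / e.sum()

# Choice set of two 3-ingredient blends processed at different settings
a = scheffe_quadratic_features([0.5, 0.3, 0.2], z=+1.0)
b = scheffe_quadratic_features([0.2, 0.2, 0.6], z=-1.0)
beta = np.array([1.0, 0.5, 0.2, 2.0, -1.0, 0.8, 0.1, 0.3, -0.2])
print(mnl_choice_probs([a, b], beta))
```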

When is heterogeneity in the composition of an autonomous robotic team beneficial and when is it detrimental? We investigate and answer this question in the context of a minimally viable model that examines the role of heterogeneous speeds in perimeter defense problems, where defenders share a total allocated speed budget. We consider two distinct problem settings and develop strategies based on dynamic programming and on local interaction rules. We present a theoretical analysis of both approaches and our results are extensively validated using simulations. Interestingly, our results demonstrate that the viability of heterogeneous teams depends on the amount of information available to the defenders. Moreover, our results suggest a universality property: across a wide range of problem parameters the optimal ratio of the speeds of the defenders remains nearly constant.

While it is nearly effortless for humans to quickly assess the perceptual similarity between two images, the underlying processes are thought to be quite complex. Despite this, the most widely used perceptual metrics today, such as PSNR and SSIM, are simple, shallow functions that fail to account for many nuances of human perception. Recently, the deep learning community has found that features of the VGG network trained on the ImageNet classification task have been remarkably useful as a training loss for image synthesis. But how perceptual are these so-called "perceptual losses"? What elements are critical for their success? To answer these questions, we introduce a new Full Reference Image Quality Assessment (FR-IQA) dataset of perceptual human judgments, orders of magnitude larger than previous datasets. We systematically evaluate deep features across different architectures and tasks and compare them with classic metrics. We find that deep features outperform all previous metrics by large margins. More surprisingly, this result is not restricted to ImageNet-trained VGG features, but holds across different deep architectures and levels of supervision (supervised, self-supervised, or even unsupervised). Our results suggest that perceptual similarity is an emergent property shared across deep visual representations.
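A minimal sketch of such a deep-feature distance follows: it compares unit-normalized VGG16 activations at a few layers, averaged over space and summed over layers. The layer indices and equal channel weighting are our assumptions; the full metric described in the paper additionally learns per-channel weights, and real inputs would need ImageNet preprocessing.

```python
import torch
import torchvision.models as models

# Pretrained VGG16 feature extractor (downloads weights on first use)
vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).features.eval()
LAYERS = {3, 8, 15, 22, 29}          # assumed indices of relu1_2 ... relu5_3

def deep_features(x):
    feats = []
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in LAYERS:
            # unit-normalize each spatial position across channels
            feats.append(x / (x.norm(dim=1, keepdim=True) + 1e-10))
    return feats

@torch.no_grad()
def perceptual_distance(img0, img1):
    """Squared difference of normalized deep features, summed over
    channels, averaged over space, and summed over layers."""
    f0, f1 = deep_features(img0), deep_features(img1)
    return sum(((a - b) ** 2).sum(dim=1).mean() for a, b in zip(f0, f1)).item()

x0 = torch.rand(1, 3, 224, 224)      # stand-ins for two preprocessed images
x1 = x0 + 0.05 * torch.randn_like(x0)
print(perceptual_distance(x0, x1))
```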
