
This paper provides two parallel solutions to the mixed boundary value problem of a unit annulus whose outer periphery is partially fixed and whose inner periphery is subjected to an arbitrary traction, using the complex variable method. Analytic continuation is applied to turn the mixed boundary value problem into a Riemann-Hilbert problem across the free segment of the outer periphery. Two parallel ways of interpreting the as-yet-unused traction and displacement boundary conditions along the outer periphery, together with the traction boundary condition along the inner periphery, form two parallel sets of complex linear constraints, which are then solved iteratively by successive approximation, with the Lanczos filtering technique applied to reach the same stable stress and displacement solutions. Finally, four typical numerical cases coded in \texttt{FORTRAN} are carried out and compared with the same cases performed in \texttt{ABAQUS}. The results indicate that the two parallel solutions are accurate, stable, robust, and fast, and validate that they are numerically equivalent.
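The Lanczos filtering step mentioned above is a standard device and easy to isolate. Below is a minimal Python sketch, assuming a generic truncated complex Fourier series whose raw partial sums exhibit Gibbs oscillations; the coefficients in the example are placeholders rather than the paper's stress functions, and the paper's iteration itself is not reproduced.

\begin{verbatim}
import numpy as np

def lanczos_sigma(N):
    """Lanczos sigma factors sigma_k = sinc(k/N), damping high harmonics."""
    return np.sinc(np.arange(N) / N)   # np.sinc(x) = sin(pi*x)/(pi*x)

def filtered_partial_sum(coeffs, theta):
    """Evaluate the sigma-filtered truncated series
    sum_k sigma_k * c_k * exp(i*k*theta) at the angles theta."""
    N = len(coeffs)
    k = np.arange(N)
    weights = lanczos_sigma(N) * coeffs               # shape (N,)
    return np.exp(1j * np.outer(theta, k)) @ weights  # shape (len(theta),)

# Placeholder example: step-like profile whose raw partial sums ring (Gibbs)
N = 32
c = np.zeros(N, dtype=complex)
c[1::2] = 2.0 / (1j * np.pi * np.arange(1, N, 2))  # odd harmonics ~ 1/k
theta = np.linspace(-np.pi, np.pi, 401)
u_filtered = filtered_partial_sum(c, theta).real
\end{verbatim}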

Related Content

The Multilevel Monte Carlo (MLMC) method has proven to be an effective variance-reduction statistical method for Uncertainty Quantification (UQ) in Partial Differential Equation (PDE) models, combining model computations at different levels to create an accurate estimate. Still, the computational complexity of the resulting method is extremely high, particularly for 3D models, which calls for advanced algorithms that exploit High Performance Computing (HPC) efficiently. In this article we present a new implementation of MLMC for massively parallel computer architectures that exploits parallelism both within and across the levels of the hierarchy. The numerical approximation of the PDE is performed using the finite element method, but the algorithm is quite general and could be applied to other discretization methods as well, since the focus is on parallel sampling. The two key ingredients of an efficient parallel implementation are a good processor partition scheme and a good scheduling algorithm for assigning work to the processors. We introduce a multiple partition of the set of processors that permits the simultaneous execution of different levels, and we develop a dynamic scheduling algorithm to exploit it. Finding the optimal schedule of distributed tasks on a parallel computer is an NP-complete problem. We propose and analyze a new greedy scheduling algorithm for assigning samples and show that it is a 2-approximation, the best that can be expected under general assumptions. On top of this result, we design a distributed-memory implementation using the Message Passing Interface (MPI) standard. Finally, we present a set of numerical experiments illustrating its scalability properties.
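The 2-approximation guarantee echoes Graham's classical list scheduling, in which each task is assigned to the currently least-loaded processor. The Python sketch below illustrates that generic idea, not the paper's dynamic scheduler; the MLMC-flavored task costs are invented.

\begin{verbatim}
import heapq

def greedy_schedule(task_costs, num_procs):
    """Classic greedy list scheduling: give each task to the currently
    least-loaded processor. Guarantees makespan <= (2 - 1/m) * OPT."""
    heap = [(0.0, p) for p in range(num_procs)]  # (load, processor id)
    heapq.heapify(heap)
    assignment = {}
    for task, cost in task_costs.items():
        load, proc = heapq.heappop(heap)         # least-loaded processor
        assignment[task] = proc
        heapq.heappush(heap, (load + cost, proc))
    makespan = max(load for load, _ in heap)
    return assignment, makespan

# MLMC-flavored toy: finer levels have fewer but costlier samples
# (invented numbers; task key = (level, sample index))
costs = {(lvl, s): 2.0 ** lvl for lvl in range(4) for s in range(2 ** (3 - lvl))}
schedule, makespan = greedy_schedule(costs, num_procs=4)
\end{verbatim}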

Multi-signature aggregates signatures from multiple users on the same message into a joint signature, and is widely applied in blockchains to reduce the fraction of block space consumed by signatures and to improve transaction throughput. The $k$-sum attacks are among the major challenges in designing secure multi-signature schemes. In this work, we address $k$-sum attacks from a novel angle by defining a Public Third Party (PTP), an automatic process that is publicly verifiable and prevents the signing phase from continuing until commitments have been received from all signers. Further, a two-round multi-signature scheme, MEMS, with a PTP is proposed, which is secure under the discrete logarithm assumption in the random oracle model. As each signer communicates directly with the PTP instead of with other co-signers, the total amount of communication is significantly reduced. In addition, as the PTP participates in the computation of the aggregation and signing algorithms, the computation cost left to each signer and verifier remains the same as for the underlying Schnorr signature. To the best of our knowledge, this is the maximum efficiency that a Schnorr-based multi-signature scheme can achieve. Finally, MEMS is applied to a blockchain platform, e.g., Fabric, to improve transaction efficiency.
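To make the role of the PTP concrete, here is a toy Python sketch, under assumptions of ours, of a two-round Schnorr-style flow in which the PTP refuses to release the aggregate commitment until every signer has committed (the gate that blunts $k$-sum-style manipulation). It uses naive key and nonce aggregation over a deliberately insecure toy group and omits MEMS's actual algorithms and defenses.

\begin{verbatim}
import hashlib, secrets

# Toy Schnorr group (insecure, illustration only): p = 2q + 1, g of order q
p, q, g = 23, 11, 2

def H(*parts):
    data = b"|".join(str(x).encode() for x in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

class PTP:
    """Public third party: gates round 2 until ALL commitments arrive."""
    def __init__(self, n):
        self.n, self.commitments = n, {}
    def submit_commitment(self, i, R_i):
        self.commitments[i] = R_i
    def aggregate_commitment(self):
        assert len(self.commitments) == self.n, "round 2 blocked: missing commitments"
        R = 1
        for R_i in self.commitments.values():
            R = R * R_i % p
        return R

n = 3
sks = [secrets.randbelow(q - 1) + 1 for _ in range(n)]
pks = [pow(g, x, p) for x in sks]
X = 1
for pk in pks:
    X = X * pk % p  # naive key aggregation (MEMS would harden this step)

msg, ptp = "move 10 coins", PTP(n)
nonces = [secrets.randbelow(q - 1) + 1 for _ in range(n)]
for i, r in enumerate(nonces):
    ptp.submit_commitment(i, pow(g, r, p))   # round 1: commitments to PTP
R = ptp.aggregate_commitment()               # released only when all arrived
c = H(R, X, msg)                             # joint challenge
s = sum((r + c * x) % q for r, x in zip(nonces, sks)) % q  # round 2 + aggregation

assert pow(g, s, p) == R * pow(X, c, p) % p  # joint Schnorr verification
\end{verbatim}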

We are interested in the nonparametric estimation of the probability density of price returns, using the kernel approach. The output of the method relies heavily on the selection of a bandwidth parameter, and many selection methods have been proposed in the statistical literature. We put forward an alternative selection method based on a criterion coming from information theory and from the physics of complex systems: the selected bandwidth maximizes a new measure of complexity, with the aim of avoiding both overfitting and underfitting. We review existing bandwidth selection methods and show that they lead to contradictory conclusions regarding the complexity of the probability distribution of price returns. This also has striking consequences for the evaluation of the relevance of the efficient market hypothesis. We apply these methods to real financial data, focusing on Bitcoin.
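As a concrete illustration of how strongly the output depends on the bandwidth, the Python sketch below fits a Gaussian kernel density to synthetic heavy-tailed returns under two off-the-shelf selectors, Silverman's rule of thumb and likelihood cross-validation; the complexity-based criterion proposed in the paper is not reproduced.

\begin{verbatim}
import numpy as np
from sklearn.neighbors import KernelDensity
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
# Synthetic heavy-tailed "returns" (stand-in for real price data)
returns = rng.standard_t(df=3, size=2000) * 0.01

# Silverman's rule-of-thumb bandwidth
n, sigma = len(returns), returns.std(ddof=1)
iqr = np.subtract(*np.percentile(returns, [75, 25]))
h_silverman = 0.9 * min(sigma, iqr / 1.34) * n ** (-0.2)

# Likelihood cross-validation over a bandwidth grid
grid = GridSearchCV(KernelDensity(kernel="gaussian"),
                    {"bandwidth": np.logspace(-4, -1, 30)}, cv=5)
grid.fit(returns.reshape(-1, 1))
h_cv = grid.best_params_["bandwidth"]

print(f"Silverman: {h_silverman:.5f}  CV: {h_cv:.5f}")  # often disagree markedly
\end{verbatim}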

This paper proposes a new procedure for validating multi-factor pricing theory by testing for the presence of alpha in linear factor pricing models with a large number of assets. Because inefficient market pricing is likely to occur in only a small fraction of exceptional assets, we develop a testing procedure that is particularly powerful against sparse signals. Building on high-dimensional Gaussian approximation theory, we propose a simulation-based approach to approximate the limiting null distribution of the test. Our numerical studies show that the new procedure delivers reasonable size and achieves substantial power improvements over existing tests under sparse alternatives, especially for weak signals.
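For intuition, here is a hedged Python sketch of a generic max-type test whose null distribution is approximated by a Gaussian multiplier bootstrap, the standard device behind high-dimensional Gaussian approximation; the paper's actual statistic, studentization, and simulation scheme may differ.

\begin{verbatim}
import numpy as np

def max_alpha_test(resid_alphas, n_boot=2000, seed=0):
    """Test H0: alpha = 0 across N assets via the max absolute t-statistic,
    with a Gaussian multiplier bootstrap for the null distribution.
    resid_alphas: (T, N) per-period alpha estimates (e.g., risk-adjusted
    returns after removing factor exposure)."""
    rng = np.random.default_rng(seed)
    T, N = resid_alphas.shape
    mean = resid_alphas.mean(axis=0)
    se = resid_alphas.std(axis=0, ddof=1) / np.sqrt(T)
    stat = np.max(np.abs(mean / se))             # max-type statistic
    centered = resid_alphas - mean
    boot = np.empty(n_boot)
    for b in range(n_boot):
        w = rng.standard_normal(T)               # Gaussian multipliers
        boot_mean = centered.T @ w / T           # bootstrapped mean vector
        boot[b] = np.max(np.abs(boot_mean / se))
    pvalue = (boot >= stat).mean()
    return stat, pvalue
\end{verbatim}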

In the field of computational finance, the quantity of interest is commonly an expected value of a functional of the solution of a stochastic differential equation (SDE). For SDEs with globally Lipschitz coefficients and commutative diffusion coefficients, the explicit Milstein scheme, which relies only on Brownian increments and is thus easily implementable, can be combined with the multilevel Monte Carlo (MLMC) method proposed by Giles \cite{giles2008multilevel} to give the optimal overall computational cost $\mathcal{O}(\epsilon^{-2})$, where $\epsilon$ is the required target accuracy. For multi-dimensional SDEs that do not satisfy the commutativity condition, a one-half-order truncated Milstein-type scheme without L\'evy areas was introduced by Giles and Szpruch \cite{giles2014antithetic}, which, combined with the antithetic MLMC, gives the optimal computational cost under globally Lipschitz conditions. In the present work, we turn to SDEs with non-globally Lipschitz continuous coefficients, for which we propose a family of modified Milstein-type schemes without L\'evy areas. The expected one-half order of strong convergence is recovered in a non-globally Lipschitz setting where the diffusion coefficients are allowed to grow superlinearly. This allows us to analyze the relevant variance of the multilevel estimator, and the optimal computational cost is finally achieved for the antithetic MLMC. The analysis of both the convergence rate and the desired variance in the non-globally Lipschitz setting is highly non-trivial, and non-standard arguments are developed to overcome some essential difficulties. Numerical experiments are provided to confirm the theoretical findings.
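For orientation, the Python sketch below shows the plain Milstein step for a scalar SDE (where commutativity is automatic and no L\'evy areas arise) and the coupled fine/coarse pair used in one MLMC level; the antithetic truncated variant of Giles and Szpruch and the modified schemes of this paper build on this pattern but are not reproduced here.

\begin{verbatim}
import numpy as np

def milstein_paths(x0, mu, sigma, dsigma, T, n_fine, n_paths, rng):
    """One MLMC level: coupled fine (n_fine steps) and coarse (n_fine/2
    steps) Milstein paths driven by the same Brownian increments.
    Scalar SDE dX = mu(X) dt + sigma(X) dW; dsigma is sigma'(x)."""
    h = T / n_fine
    dW = rng.standard_normal((n_paths, n_fine)) * np.sqrt(h)

    def step(x, dw, dt):
        # Milstein: X + mu*dt + sigma*dW + 0.5*sigma*sigma'*(dW^2 - dt)
        return (x + mu(x) * dt + sigma(x) * dw
                + 0.5 * sigma(x) * dsigma(x) * (dw ** 2 - dt))

    xf = np.full(n_paths, float(x0))
    xc = np.full(n_paths, float(x0))
    for k in range(0, n_fine, 2):
        xf = step(xf, dW[:, k], h)
        xf = step(xf, dW[:, k + 1], h)
        xc = step(xc, dW[:, k] + dW[:, k + 1], 2 * h)  # coarse: summed increments
    return xf, xc

# Example: GBM dX = r X dt + v X dW, quantity of interest E[X_T]
rng = np.random.default_rng(1)
r, v = 0.05, 0.2
xf, xc = milstein_paths(1.0, lambda x: r * x, lambda x: v * x, lambda x: v,
                        T=1.0, n_fine=64, n_paths=10000, rng=rng)
level_correction = (xf - xc).mean()  # MLMC level estimator E[P_f - P_c]
\end{verbatim}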

This paper aims to obtain, by means of integral transforms, short-time analytical approximations of solutions to boundary value problems for the one-dimensional reaction-diffusion equation with constant coefficients. The general form of the equation is considered on a bounded generic interval, and the three classical types of boundary conditions, i.e., Dirichlet, Neumann, and mixed boundary conditions, are treated in a unified way. The Fourier and Laplace integral transforms are applied successively, and an exact solution is obtained in the Laplace domain. This operational solution is proven to be exactly the Laplace transform of the infinite-series solution obtained by the Fourier decomposition method and presented in the literature for this type of problem. On the basis of this unified operational solution, four cases are distinguished, in which innovative formulas expressing consistent short-time analytical approximations are derived according to the behavior of the solution at the boundaries. Compared to the infinite-series solutions, the analytical approximations may open new perspectives and applications, among them the improvement of numerical efficiency in simulations of one-dimensional moving boundary problems, such as Stefan models.
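As a worked illustration of the operational step, consider a representative instance with placeholder data: homogeneous initial condition and Dirichlet conditions (one of the three cases treated in a unified way above),
\[
\partial_t u = D\,\partial_{xx}u - \kappa u,\qquad x\in(0,L),\quad
u(0,t)=f_0(t),\quad u(L,t)=f_L(t),\quad u(x,0)=0.
\]
Applying the Laplace transform in $t$ turns the PDE into the ODE $D\,\hat u''(x,s)-(s+\kappa)\,\hat u(x,s)=0$, whose solution matching the boundary data is
\[
\hat u(x,s)=\hat f_0(s)\,\frac{\sinh\!\big(\omega(L-x)\big)}{\sinh(\omega L)}
+\hat f_L(s)\,\frac{\sinh(\omega x)}{\sinh(\omega L)},
\qquad \omega=\sqrt{\frac{s+\kappa}{D}}.
\]
For large $s$ (i.e., short times) the ratios of hyperbolic sines reduce to decaying exponentials such as $e^{-\omega x}$, whose inverse transforms involve complementary error functions; closed-form short-time approximations of this type are what the paper derives in each of its four cases.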

We extend our previous work on two-party election competition [Lin, Lu & Chen 2021] to the setting of three or more parties. An election campaign among two or more parties is viewed as a game of two or more players, each of which has its own candidates as its pure strategies. People, as voters, comprise the supporters of each party, and a candidate brings utility to the supporters of each party. Each player nominates exactly one of its candidates to compete against the other parties' nominees. A candidate is assumed to win the election with higher odds if it brings more utility to all the people. The payoff of each player is the expected utility its supporters receive. The game is egoistic if every candidate benefits her own party's supporters more than any candidate from a competing party does. In this work, we first argue that the two-party election game always has a pure Nash equilibrium when the winner is chosen by the hardmax function, while there exist three-party instances in which no pure Nash equilibrium exists even when the game is egoistic. Next, we propose two sufficient conditions for the egoistic election game to have a pure Nash equilibrium. Based on these conditions, we propose a fixed-parameter tractable algorithm to compute a pure Nash equilibrium of the egoistic election game; a brute-force toy version of this equilibrium check is sketched below. Finally, perhaps surprisingly, we show that the price of anarchy of the egoistic election game is upper bounded by the number of parties. Our findings suggest that the election becomes unpredictable when more than two parties are involved and, moreover, that social welfare may deteriorate as the number of participating parties grows, in the sense of a possibly increasing price of anarchy. This work thus offers an alternative explanation of why the two-party system is prevalent in democratic countries.
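The following Python sketch is the brute-force toy version referred to above: it enumerates nominee profiles and checks unilateral deviations under the hardmax winner rule. The utility table is invented, and this is not the paper's fixed-parameter tractable algorithm.

\begin{verbatim}
from itertools import product

def winner(profile, util):
    """Hardmax winner: the nominee with the highest total utility
    (ties broken by position, which suffices for a toy check)."""
    return max(profile, key=lambda c: sum(util[c]))

def is_pure_nash(profile, candidates, util):
    """No party can raise its supporters' payoff by a unilateral deviation."""
    for j, own in enumerate(candidates):
        current = util[winner(profile, util)][j]
        for alt in own:
            dev = profile[:j] + (alt,) + profile[j + 1:]
            if util[winner(dev, util)][j] > current:
                return False
    return True

def pure_nash_equilibria(candidates, util):
    return [p for p in product(*candidates) if is_pure_nash(p, candidates, util)]

# Invented 3-party instance: util[c][j] = utility candidate c brings to
# the supporters of party j
util = {"a1": (5, 1, 1), "a2": (4, 2, 2),
        "b1": (1, 5, 1), "b2": (2, 4, 2),
        "c1": (1, 1, 5), "c2": (2, 2, 4)}
candidates = [("a1", "a2"), ("b1", "b2"), ("c1", "c2")]
print(pure_nash_equilibria(candidates, util))
\end{verbatim}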

We consider a multi-agent delegation mechanism without money. In our model, given a set of agents, each agent has a fixed number of solutions, exogenous to the mechanism, and privately sends a signal, e.g., a subset of its solutions, to the principal. The principal then selects a final solution based on the agents' signals. In stark contrast to the single-agent setting of Kleinberg and Kleinberg (EC'18) with an approximate Bayesian mechanism, we show that there exist efficient approximate prior-independent mechanisms with both information and performance gains, thanks to the competitive tension between the agents. Interestingly, however, the extent of this competitive power varies significantly with the information available to the agents and with the degree of correlation between the principal's and the agents' utilities. Technically, we conduct a comprehensive study of the multi-agent delegation problem and derive several results on the approximation factors of Bayesian/prior-independent mechanisms in complete/incomplete information settings. As a special case of independent interest, we obtain comparative statics regarding the number of agents, which imply the dominance of the multi-agent setting ($n \ge 2$) over the single-agent setting ($n=1$) in terms of the principal's utility. We further extend the problem by considering an examination cost for the mechanism and derive analogous results in the complete information setting.
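As a toy Monte Carlo illustration of the competitive tension (all modeling choices here are ours, not the paper's), each agent privately draws $k$ solutions, signals the one best for itself, and the principal accepts the signaled solution with the highest principal utility; comparing $n=1$ with $n=2$ hints at the comparative statics described above.

\begin{verbatim}
import numpy as np

def delegation_value(n_agents, k=5, trials=20000, rho=0.5, seed=0):
    """Average principal utility when each of n agents signals the solution
    maximizing its own utility and the principal picks the best signal.
    rho controls correlation between principal's and agents' utilities."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(trials):
        best_signal = -np.inf
        for _ in range(n_agents):
            u_principal = rng.random(k)
            # agent utility: correlated mixture with independent noise
            u_agent = rho * u_principal + (1 - rho) * rng.random(k)
            pick = int(np.argmax(u_agent))      # agent signals its favorite
            best_signal = max(best_signal, u_principal[pick])
        total += best_signal
    return total / trials

print(delegation_value(1), delegation_value(2))  # multi-agent tends to dominate
\end{verbatim}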

Language models have been shown to perform remarkably well on a wide range of natural language processing tasks. In this paper, we propose a novel system that uses language models to perform multi-step logical reasoning. Our system incorporates explicit planning into its inference procedure and is thus able to make more informed reasoning decisions at each step by looking ahead to their future effects. Moreover, we propose a training strategy that safeguards the planning process from being led astray by spurious features. Our full system significantly outperforms other competing methods on multiple standard datasets. When using a T5 model as its core component, our system performs competitively with GPT-3 despite having only about 1B parameters (i.e., roughly 175 times fewer than GPT-3). When using GPT-3.5, it significantly outperforms chain-of-thought prompting on the challenging PrOntoQA dataset. We have conducted extensive empirical studies demonstrating that explicit planning plays a crucial role in the system's performance.
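The shape of such an inference procedure can be sketched generically: at each step, candidate next reasoning steps are rolled forward a few levels and scored, and the step leading to the most promising future is chosen. In the Python sketch below, propose_steps and score_state are hypothetical stand-ins for calls to the underlying language model; this is not the authors' implementation.

\begin{verbatim}
def plan_reasoning(state, propose_steps, score_state, depth=2, beam=3):
    """Pick the next reasoning step by looking `depth` steps ahead.
    state         -- tuple of reasoning steps taken so far
    propose_steps -- state -> list of candidate next steps (e.g., LM samples)
    score_state   -- state -> scalar estimate of how promising it is"""
    def lookahead(s, d):
        candidates = propose_steps(s)[:beam] if d > 0 else []
        if not candidates:
            return score_state(s)
        # value of s = best score reachable within the remaining depth
        return max(lookahead(s + (step,), d - 1) for step in candidates)

    best_step, best_value = None, float("-inf")
    for step in propose_steps(state)[:beam]:
        value = lookahead(state + (step,), depth - 1)
        if value > best_value:          # keep the step whose future looks best
            best_step, best_value = step, value
    return best_step
\end{verbatim}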

Seamlessly interacting with humans or robots is hard because these agents are non-stationary. They update their policy in response to the ego agent's behavior, and the ego agent must anticipate these changes to co-adapt. Inspired by humans, we recognize that robots do not need to explicitly model every low-level action another agent will make; instead, we can capture the latent strategy of other agents through high-level representations. We propose a reinforcement learning-based framework for learning latent representations of an agent's policy, where the ego agent identifies the relationship between its behavior and the other agent's future strategy. The ego agent then leverages these latent dynamics to influence the other agent, purposely guiding them towards policies suitable for co-adaptation. Across several simulated domains and a real-world air hockey game, our approach outperforms the alternatives and learns to influence the other agent.
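A minimal PyTorch sketch of the representation idea, with sizes and wiring assumed by us: an encoder maps the other agent's previous interaction to a latent strategy $z$, and the ego policy conditions on the current state together with $z$. The training losses, latent dynamics model, and influence objective of the paper are omitted.

\begin{verbatim}
import torch
import torch.nn as nn

class StrategyEncoder(nn.Module):
    """Map the previous interaction (flattened state-action history)
    to a low-dimensional latent strategy z."""
    def __init__(self, traj_dim, z_dim=8):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(traj_dim, 64), nn.ReLU(),
                                 nn.Linear(64, z_dim))
    def forward(self, traj):
        return self.net(traj)

class EgoPolicy(nn.Module):
    """Ego policy conditioned on current state and inferred latent strategy."""
    def __init__(self, state_dim, z_dim, act_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim + z_dim, 64), nn.Tanh(),
                                 nn.Linear(64, act_dim))
    def forward(self, state, z):
        return self.net(torch.cat([state, z], dim=-1))

# Shapes only: z summarizes the last interaction, the policy acts on (state, z)
encoder = StrategyEncoder(traj_dim=40)
policy = EgoPolicy(state_dim=10, z_dim=8, act_dim=2)
z = encoder(torch.randn(1, 40))
action = policy(torch.randn(1, 10), z)
\end{verbatim}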
