
Since the advent of ultra-reliable low-latency communications (URLLC), the requirements of low-latency applications have tended to be characterized completely by a single pre-defined latency-reliability target. That is, operation is considered optimal whenever the pre-defined latency threshold is met, and the system is assumed to be in error whenever it is violated. This vision is severely limited and does not capture the real requirements of most applications, where multiple latency thresholds can be defined, together with incentives or rewards associated with meeting each of them. Such a formulation is a generalization of the single-threshold case popularized by URLLC and, in the asymptotic case, approaches defining a cost for each point in the support of the latency distribution. In this paper, we explore the implications of defining multiple latency targets on the design of access protocols and on the optimization of repetition-based access strategies in orthogonal and non-orthogonal multiple access scenarios with users that present heterogeneous traffic characteristics and requirements. We observe that the access strategies of the users can be effectively adapted to the requirements of the application by carefully defining the latency targets and the associated rewards.
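To make the multi-target formulation concrete, the sketch below scores a repetition-based access strategy against several latency targets, each paired with a reward. The thresholds, rewards, and the exponential latency model are purely illustrative assumptions, not the paper's system model.

```python
# Minimal sketch (not the paper's model): scoring an access strategy against
# multiple latency targets, each with its own reward. Thresholds, rewards, and
# the latency distribution below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical latency-reliability targets: (threshold in ms, reward if met).
targets = [(1.0, 1.0), (5.0, 0.5), (20.0, 0.1)]

def expected_reward(latencies, targets):
    """Average per-packet reward: each met threshold contributes its reward."""
    lat = np.asarray(latencies)
    return sum(r * np.mean(lat <= t) for t, r in targets)

# Toy comparison of repetition-based strategies: more repetitions shorten the
# latency tail, modeled here simply as a scale factor on an exponential law.
for k in (1, 2, 4):  # number of repetitions per packet
    lat = rng.exponential(scale=4.0 / k, size=100_000)  # stand-in latency model
    print(f"k={k}: expected reward = {expected_reward(lat, targets):.3f}")
```

The single-threshold URLLC case is recovered by keeping only one (threshold, reward) pair.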

Related Content

Most existing neural network-based approaches for solving stochastic optimal control problems via the associated backward dynamic programming principle rely on the ability to simulate the underlying state variables. However, in some problems this simulation is infeasible, leading to the discretization of the state variable space and the need to train one neural network for each data point. This approach becomes computationally inefficient when dealing with large state variable spaces. In this paper, we consider a class of such stochastic optimal control problems and introduce an effective solution based on multitask neural networks. To train our multitask neural network, we introduce a novel scheme that dynamically balances the learning across tasks. Through numerical experiments on real-world derivatives pricing problems, we demonstrate that our method outperforms state-of-the-art approaches.
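As a rough illustration of the training idea, the following PyTorch sketch shares a trunk across tasks and re-balances per-task loss weights from their recent loss ratios. The balancing rule, architecture, and toy data are assumptions standing in for the paper's scheme.

```python
# A minimal sketch (PyTorch, not the paper's architecture) of a multitask
# network with a shared trunk, one head per task, and loss weights that are
# re-balanced dynamically from each task's recent loss ratio.
import torch
import torch.nn as nn

n_tasks, d_in, d_hidden = 4, 8, 64

trunk = nn.Sequential(nn.Linear(d_in, d_hidden), nn.ReLU())
heads = nn.ModuleList([nn.Linear(d_hidden, 1) for _ in range(n_tasks)])
opt = torch.optim.Adam(list(trunk.parameters()) + list(heads.parameters()), lr=1e-3)

weights = torch.ones(n_tasks) / n_tasks  # current task weights
prev_losses = torch.ones(n_tasks)        # losses from the previous step

for step in range(1000):
    x = torch.randn(128, d_in)                                           # toy inputs
    ys = [x.sum(dim=1, keepdim=True) * (t + 1) for t in range(n_tasks)]  # toy targets

    h = trunk(x)
    losses = torch.stack([nn.functional.mse_loss(heads[t](h), ys[t])
                          for t in range(n_tasks)])

    # Dynamic balancing (simplified stand-in): up-weight tasks whose loss
    # decays more slowly relative to the previous step.
    with torch.no_grad():
        ratios = losses.detach() / prev_losses
        weights = torch.softmax(ratios, dim=0)
        prev_losses = losses.detach().clone()

    opt.zero_grad()
    (weights * losses).sum().backward()
    opt.step()
```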

We analyze the effects of enforcing vs. exempting access ISPs from net neutrality regulations when platforms are present and apply two-sided pricing in their business models. This study is conducted in a scenario where users and Content Providers (CPs) have access to the internet by means of their serving ISPs, and to a platform that intermediates and matches users and CPs, among other service offerings. Our hypothesis is that platform two-sided pricing interacts in a relevant manner with the access ISP, which may be allowed (a hypothetical non-neutrality scenario) or not (the current neutrality regulation status) to apply two-sided pricing in its service business model. We preliminarily conclude that the platforms are extracting surplus from the CPs under the current net neutrality regime for the ISP, and that the platforms would not be able to do so under the counterfactual situation where the ISPs could apply two-sided prices.

The performance of data fusion and tracking algorithms often depends on parameters that not only describe the sensor system but can also be task-specific. While tuning these variables for the sensor system is time-consuming and mostly requires expert knowledge, intrinsic parameters of targets under track can even be completely unobservable until the system is deployed. With state-of-the-art sensor systems growing more and more complex, the number of parameters naturally increases, necessitating the automatic optimization of the model variables. In this paper, the parameters of an interacting multiple model (IMM) filter are optimized solely using measurements, thus without the need for any ground-truth data. The resulting method is evaluated through an ablation study on simulated data, where the trained model matches the performance of a filter parametrized with ground-truth values.
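A minimal sketch of the underlying idea of ground-truth-free tuning: choose the filter parameters that maximize the likelihood of the measurements themselves (the innovation likelihood). A one-dimensional Kalman filter stands in for the IMM filter here; the random-walk dynamics and noise levels are illustrative assumptions.

```python
# Tune a filter's process noise q from measurements only, by minimizing the
# negative log-likelihood of the innovations. A scalar random-walk Kalman
# filter stands in for the paper's IMM filter.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
true_q, true_r, n = 0.5, 1.0, 500
x = np.cumsum(rng.normal(0, np.sqrt(true_q), n))  # random-walk state
z = x + rng.normal(0, np.sqrt(true_r), n)         # noisy measurements

def neg_log_lik(log_q, r=true_r):
    """Negative log-likelihood of innovations for a random-walk KF."""
    q = np.exp(log_q)
    m, p, nll = 0.0, 1.0, 0.0
    for zi in z:
        p += q                      # predict
        s = p + r                   # innovation variance
        v = zi - m                  # innovation
        nll += 0.5 * (np.log(2 * np.pi * s) + v * v / s)
        k = p / s                   # Kalman gain
        m += k * v                  # update mean
        p *= (1 - k)                # update variance
    return nll

res = minimize_scalar(neg_log_lik, bounds=(-5, 3), method="bounded")
print(f"estimated q = {np.exp(res.x):.3f} (true q = {true_q})")
```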

In this work, a Generalized Finite Difference (GFD) scheme is presented for effectively computing the numerical solution of a parabolic-elliptic system modelling a bacterial strain with density-suppressed motility. The GFD method is a meshless method known for its simplicity in solving non-linear boundary value problems over irregular geometries. The paper first introduces the basic elements of the GFD method, and then an explicit-implicit scheme is derived. The convergence of the method is proven under a bound on the time step, and an algorithm is provided for its computational implementation. Finally, some examples are considered, comparing the results obtained with a regular mesh and with an irregular cloud of points.
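The core GFD building block can be sketched in a few lines: at each node, derivatives are recovered from an irregular set of neighbors by a weighted least-squares fit of a second-order Taylor expansion. The stencil and weighting function below are illustrative choices, not those of the paper.

```python
# Generalized finite differences at one node: recover (fx, fy, fxx, fyy, fxy)
# from scattered neighbors via a weighted least-squares Taylor fit.
import numpy as np

rng = np.random.default_rng(2)
center = np.array([0.0, 0.0])
nbrs = rng.uniform(-0.1, 0.1, size=(12, 2))           # irregular cloud of points

f = lambda p: p[..., 0] ** 2 + p[..., 1] ** 2         # test function, Laplacian = 4

dx, dy = (nbrs - center).T
A = np.column_stack([dx, dy, 0.5 * dx**2, 0.5 * dy**2, dx * dy])  # Taylor terms
w = 1.0 / np.hypot(dx, dy)                            # distance-based weighting
b = f(nbrs) - f(center)

# Solve the weighted least-squares system for the derivative vector.
coef, *_ = np.linalg.lstsq(A * w[:, None], b * w, rcond=None)
fx, fy, fxx, fyy, fxy = coef
print(f"GFD Laplacian ≈ {fxx + fyy:.4f} (exact: 4)")
```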

Typical pipelines for model geometry generation in computational biomedicine stem from images, in which the object is usually assumed to be at rest, despite being in mechanical equilibrium under several forces. We refer to the stress-free geometry computation as the reference configuration problem, and in this work we extend such a formulation to the theory of fully nonlinear poroelastic media. The main steps are (i) writing the equations in terms of the reference porosity and (ii) defining a time-dependent problem whose steady-state solution is the reference porosity. This problem can be computationally challenging, as it can require several hundred iterations to converge, so we propose the use of Anderson acceleration to speed up the procedure. Our evidence shows that this strategy can reduce the number of iterations by up to 80%. In addition, we note that a primal formulation of the nonlinear mass conservation equations is not consistent due to the presence of second-order derivatives of the displacement, which we alleviate through adequate mixed formulations. All claims are validated through numerical simulations in both idealized and realistic scenarios.
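For reference, a minimal Anderson-acceleration routine for a generic fixed-point iteration x = g(x) is sketched below; the toy contraction and the window size m are illustrative, not the paper's poroelastic solver.

```python
# Anderson acceleration for x = g(x): mix the last m+1 iterates using a
# least-squares fit over residual differences.
import numpy as np

def anderson(g, x0, m=5, tol=1e-10, max_iter=200):
    X, G = [], []                                  # histories of x_k and g(x_k)
    x = np.asarray(x0, dtype=float)
    for k in range(max_iter):
        gx = g(x)
        X.append(x); G.append(gx)
        if np.linalg.norm(gx - x) < tol:
            return gx, k
        mk = min(m, len(X) - 1)
        if mk == 0:
            x = gx                                 # plain Picard step
            continue
        # Residual history f_i = g(x_i) - x_i, most recent first.
        F = np.array([G[-i - 1] - X[-i - 1] for i in range(mk + 1)]).T
        dF = F[:, :-1] - F[:, 1:]                  # residual differences
        gamma, *_ = np.linalg.lstsq(dF, F[:, 0], rcond=None)
        Gm = np.array(G[-mk - 1:][::-1]).T         # matching g-history columns
        dG = Gm[:, :-1] - Gm[:, 1:]
        x = G[-1] - dG @ gamma                     # Anderson-mixed iterate
    return x, max_iter

g = lambda x: np.cos(x)                            # toy contraction
x_star, iters = anderson(g, np.zeros(1))
print(f"fixed point ≈ {x_star[0]:.8f} after {iters} iterations")
```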

A method for solving elasticity problems based on separable physics-informed neural networks (SPINN) in conjunction with the deep energy method (DEM) is presented. Numerical experiments have been carried out for a number of problems, showing that this method has a significantly higher convergence rate and accuracy than vanilla physics-informed neural networks (PINN) and even than SPINN based on a system of partial differential equations (PDEs). In addition, using SPINN within the DEM framework makes it possible to solve problems of the linear theory of elasticity on complex geometries, which is unachievable with PINNs in the PDE-based setting. The problems considered are very close to industrial problems in terms of geometry, loading, and material parameters.
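The DEM ingredient can be illustrated on a toy problem: instead of penalizing the PDE residual, one minimizes the potential energy directly. The PyTorch sketch below does this for a 1D bar fixed at one end under a constant body load; the network and material constants are illustrative assumptions, and a plain MLP stands in for the separable architecture.

```python
# Deep energy method on a 1D bar: minimize ∫ ½ E (u')² dx − ∫ f u dx with
# u(0) = 0 enforced by the trial-function ansatz u = x · net(x).
import torch

E, f_body = 1.0, 1.0                               # Young's modulus, body load
net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

x = torch.linspace(0, 1, 101).reshape(-1, 1)
x.requires_grad_(True)

for step in range(2000):
    u = x * net(x)                                 # trial function: u(0) = 0
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    # Potential energy via the trapezoidal rule.
    energy = torch.trapezoid((0.5 * E * du**2 - f_body * u).squeeze(), x.squeeze())
    opt.zero_grad(); energy.backward(); opt.step()

# Exact solution of this toy problem: u(x) = (f/E)(x − x²/2), so u(0.5) = 0.375.
with torch.no_grad():
    x_test = torch.tensor([[0.5]])
    print(x_test * net(x_test))
```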

This paper redefines the foundations of asymmetric cryptography's homomorphic cryptosystems through the application of the Yoneda Lemma. It explicitly illustrates that widely adopted systems, including ElGamal, RSA, Benaloh, Regev's LWE, and NTRUEncrypt, directly derive from the principles of the Yoneda Lemma. This synthesis gives rise to a holistic homomorphic encryption framework named the Yoneda Encryption Scheme. Within this scheme, encryption is elucidated through the bijective maps of the Yoneda Lemma Isomorphism, and decryption seamlessly follows from the naturality of these maps. This unification suggests a conjecture for a unified model theory framework, providing a basis for reasoning about both homomorphic and fully homomorphic encryption (FHE) schemes. As a practical demonstration, the paper introduces an FHE scheme capable of processing arbitrary finite sequences of encrypted multiplications and additions without the need for additional tweaking techniques, such as squashing or bootstrapping. This not only underscores the practical implications of the proposed theoretical advancements but also introduces new possibilities for leveraging model theory and forcing techniques in cryptography to facilitate the design of FHE schemes.
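For reference, the bijection the scheme builds on is the standard statement of the Yoneda Lemma: for a locally small category $\mathcal{C}$, an object $A$, and a set-valued functor $F$, the natural transformations from the represented functor to $F$ correspond bijectively to the elements of $F(A)$:

```latex
\[
  \mathrm{Nat}\bigl(\mathrm{Hom}_{\mathcal{C}}(A,-),\, F\bigr) \;\cong\; F(A),
\]
% naturally in both A and F.
```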

In multiple hypothesis testing, it is well known that adaptive procedures can enhance power by incorporating information about the number of true nulls present. Under independence, we establish that two adaptive false discovery rate (FDR) methods, when augmented with sign declarations, also offer directional false discovery rate (FDR$_\text{dir}$) control in the strong sense. Such FDR$_\text{dir}$-controlling properties are appealing because adaptive procedures have the greatest potential to reap substantial gains in power when the underlying parameter configurations contain few to no true nulls, which are precisely the settings where the FDR$_\text{dir}$ is arguably a more meaningful error rate to control than the FDR.
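For concreteness, one standard formalization of the directional error rate discussed above is the expected proportion of sign errors among rejections: with $R$ the number of rejections and $V_{\mathrm{dir}}$ the number of rejections whose declared sign is incorrect (counting any sign declared for a true null),

```latex
\[
  \mathrm{FDR}_{\mathrm{dir}}
    \;=\; \mathbb{E}\!\left[\frac{V_{\mathrm{dir}}}{\max(R,\,1)}\right].
\]
```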

We present self-supervised geometric perception (SGP), the first general framework to learn a feature descriptor for correspondence matching without any ground-truth geometric model labels (e.g., camera poses, rigid transformations). Our first contribution is to formulate geometric perception as an optimization problem that jointly optimizes the feature descriptor and the geometric models given a large corpus of visual measurements (e.g., images, point clouds). Under this optimization formulation, we show that two important streams of research in vision, namely robust model fitting and deep feature learning, correspond to optimizing one block of the unknown variables while fixing the other block. This analysis naturally leads to our second contribution -- the SGP algorithm that performs alternating minimization to solve the joint optimization. SGP iteratively executes two meta-algorithms: a teacher that performs robust model fitting given learned features to generate geometric pseudo-labels, and a student that performs deep feature learning under noisy supervision of the pseudo-labels. As a third contribution, we apply SGP to two perception problems on large-scale real datasets, namely relative camera pose estimation on MegaDepth and point cloud registration on 3DMatch. We demonstrate that SGP achieves state-of-the-art performance that is on par with or superior to the supervised oracles trained using ground-truth labels.
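The two-block structure of the SGP alternation can be illustrated with a runnable toy analogue: robust line fitting that alternates between model fitting with the weights fixed (the teacher's role) and weight re-estimation with the model fixed (the student's role). This is a structural illustration only, not the paper's pipeline.

```python
# Toy alternating minimization: fit a line through outlier-contaminated data
# by alternating weighted least squares (block 1) and reweighting (block 2).
import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(0, 1, 100)
y = 2.0 * x + 1.0 + rng.normal(0, 0.05, 100)
y[::10] += rng.uniform(2, 4, 10)                   # inject gross outliers

w = np.ones_like(x)                                # block 2: per-point weights
for _ in range(10):
    # Block 1 ("teacher"): weighted least-squares fit, weights fixed.
    A = np.column_stack([x, np.ones_like(x)])
    coef = np.linalg.lstsq(A * w[:, None], y * w, rcond=None)[0]
    # Block 2 ("student"): re-estimate weights from residuals, model fixed.
    r = np.abs(A @ coef - y)
    w = 1.0 / (1.0 + (r / 0.1) ** 2)               # Cauchy-style reweighting

print(f"slope ≈ {coef[0]:.3f}, intercept ≈ {coef[1]:.3f} (true: 2, 1)")
```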

The recent proliferation of knowledge graphs (KGs), coupled with incomplete or partial information in the form of missing relations (links) between entities, has fueled a lot of research on knowledge base completion (also known as relation prediction). Several recent works suggest that convolutional neural network (CNN) based models generate richer and more expressive feature embeddings and hence also perform well on relation prediction. However, we observe that these KG embeddings treat triples independently and thus fail to capture the complex and hidden information that is inherently implicit in the local neighborhood surrounding a triple. To this end, our paper proposes a novel attention-based feature embedding that captures both entity and relation features in any given entity's neighborhood. Additionally, we also encapsulate relation clusters and multi-hop relations in our model. Our empirical study offers insights into the efficacy of our attention-based model, and we show marked performance gains in comparison to state-of-the-art methods on all datasets.
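A minimal sketch of neighborhood attention over triples, in the spirit of the approach described (the dimensions, triple encoder, and scoring function are illustrative assumptions, not the paper's exact model):

```python
# Attention over an entity's triple neighborhood: encode each (h, r, t)
# triple, score it, softmax-normalize, and aggregate into a new embedding.
import torch
import torch.nn.functional as F

d = 16                                             # embedding dimension
n_nbrs = 5                                         # neighbors of the entity
h = torch.randn(d)                                 # central entity embedding
rel = torch.randn(n_nbrs, d)                       # neighbor relation embeddings
tail = torch.randn(n_nbrs, d)                      # neighbor entity embeddings

W = torch.nn.Linear(3 * d, d)                      # triple encoder
a = torch.nn.Linear(d, 1)                          # attention scorer

triples = torch.cat([h.expand(n_nbrs, d), rel, tail], dim=1)
c = torch.tanh(W(triples))                         # triple features
alpha = F.softmax(a(c).squeeze(-1), dim=0)         # attention coefficients
h_new = (alpha.unsqueeze(-1) * c).sum(dim=0)       # updated entity embedding
print(alpha, h_new.shape)
```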
