
Some early violins have been reduced during their history to fit imposed morphological standards, while more recent ones have been built directly to these standards. We can observe differences between reduced and unreduced instruments, particularly in their contour lines and channel of minima. In a recent preliminary work, we computed and highlighted these two features for two instruments using triangular 3D meshes acquired by photogrammetry, whose fidelity has been assessed and validated with sub-millimetre accuracy. Here we extend that work to a corpus of 38 violins, violas and cellos, and introduce improved procedures, leading to a more robust geometric analysis. We first recall the material we are working with. We then discuss how to derive the best reference plane for the violin alignment, which is crucial for the computation of contour lines and channel of minima. Finally, we show how to compute both characteristics efficiently and illustrate our results with a few examples.
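
As an illustration of the alignment step, the following minimal sketch (not the authors' exact procedure) fits a reference plane to the mesh vertices by a least-squares/PCA fit and measures signed heights above it, from which contour lines could be extracted as level sets; the `vertices` array is a hypothetical input.

```python
# Minimal sketch, assuming `vertices` is an (N, 3) array of mesh vertex
# coordinates; this is an illustration, not the paper's exact alignment method.
import numpy as np

def fit_reference_plane(vertices):
    """Return the centroid and unit normal of the best-fit (least-squares) plane."""
    centroid = vertices.mean(axis=0)
    # The plane normal is the right singular vector with the smallest singular value.
    _, _, vt = np.linalg.svd(vertices - centroid, full_matrices=False)
    return centroid, vt[-1]

def heights_above_plane(vertices, centroid, normal):
    """Signed distance of each vertex to the reference plane."""
    return (vertices - centroid) @ normal

# Contour lines can then be extracted as level sets of these heights,
# e.g. by slicing the mesh at regular height steps.
```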

Related content

Three-dimensional effects of the tunnel face and gravitational excavation generally occur in shallow tunnelling, yet they are not adequately considered in existing complex variable solutions. In this paper, a new time-dependent complex variable solution for quasi three-dimensional shallow tunnelling in gravitational geomaterial is derived, and the far-field displacement singularity is eliminated by fixing the far-field ground surface over the whole excavation time span. With an equivalent coefficient of the three-dimensional effect, the quasi three-dimensional shallow tunnelling is transformed into a plane strain problem with a time-dependent virtual traction along the tunnel periphery. The mixed boundaries of the fixed far-field ground surface and the nearby free segment form a homogeneous Riemann-Hilbert problem with extra constraints from the virtual traction along the tunnel periphery, which is solved simultaneously using an iterative linear system with good numerical stability. The mixed boundary conditions along the ground surface are well satisfied over the whole excavation time span in a numerical case, which is further examined by comparison with a corresponding finite element solution. The results are in good agreement, and the proposed solution is highly efficient. Further discussion addresses the excavation rate, viscosity, and solution convergence. For objectivity, a latent paradox in the solution is also disclosed.
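
Purely as an illustration of the kind of iterative linear solve mentioned above (the paper's Riemann-Hilbert formulation is not reproduced here), the sketch below relaxes a linear system whose right-hand side depends on the current solution, standing in for the solution-dependent virtual-traction constraints; all symbols are placeholders.

```python
# Illustrative sketch only: a relaxed iteration for a linear system A x = b(x)
# whose right-hand side depends on the unknown. A, b_of_x, omega and the
# tolerance are hypothetical placeholders, not quantities from the paper.
import numpy as np

def relaxed_iteration(A, b_of_x, x0, omega=0.5, tol=1e-10, max_iter=500):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        x_new = np.linalg.solve(A, b_of_x(x))
        x_new = (1.0 - omega) * x + omega * x_new   # under-relaxation for stability
        if np.linalg.norm(x_new - x) < tol * (1.0 + np.linalg.norm(x)):
            return x_new
        x = x_new
    raise RuntimeError("iteration did not converge")
```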

Perfect error correcting codes allow for an optimal transmission of information while guaranteeing error correction. For this reason, proving their existence has been a classical problem in both pure mathematics and information theory. Indeed, the classification of the parameters of $e$-error correcting perfect codes over $q$-ary alphabets was a very active topic of research in the late 20th century. Consequently, all parameters of perfect $e$-error correcting codes were determined for $e \ge 3$, and it was conjectured that no perfect $2$-error correcting codes exist over any $q$-ary alphabet with $q > 3$. In the 1970s, this was proved for $q$ a prime power, for $q = 2^r3^s$, and for only $7$ other values of $q$. Almost $50$ years later, it is surprising to note that there have been no new results in this regard and that the classification of $2$-error correcting codes over non-prime-power alphabets remains an open problem. In this paper, we use techniques from the resolution of generalised Ramanujan--Nagell equations and from modern computational number theory to show that perfect $2$-error correcting codes do not exist for $172$ new values of $q$ which are not prime powers, substantially increasing the range of values of $q$ which are now classified. In addition, we prove that, for any fixed value of $q$, there can be at most finitely many perfect $2$-error correcting codes over an alphabet of size $q$.
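
For context, a perfect $2$-error correcting code of length $n$ over a $q$-ary alphabet must meet the sphere-packing (Hamming) bound with equality, so the ball volume $V_q(n,2) = 1 + n(q-1) + \binom{n}{2}(q-1)^2$ must divide $q^n$. The sketch below scans candidate lengths against this classical necessary condition only; it does not reproduce the Ramanujan--Nagell techniques used in the paper.

```python
# Hedged sketch: scan for lengths n at which the necessary sphere-packing
# condition for a perfect 2-error-correcting code holds, i.e. the ball volume
# V_q(n, 2) divides q^n. This is only the classical necessary condition.
from math import comb

def ball_volume(n, q, e=2):
    return sum(comb(n, i) * (q - 1) ** i for i in range(e + 1))

def candidate_lengths(q, n_max=1000):
    """Lengths n <= n_max passing the sphere-packing divisibility test."""
    return [n for n in range(2, n_max + 1) if q ** n % ball_volume(n, q) == 0]

if __name__ == "__main__":
    # For q = 3 this prints [2, 11]: n = 2 is a trivial single-codeword code,
    # while n = 11 is the length of the perfect ternary Golay code.
    print(candidate_lengths(3, 100))
```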

Sensorized insoles provide a tool for gait studies and health monitoring during daily life. For users to accept such insoles, they need to be comfortable and lightweight. Previous work has already demonstrated that estimation of ground reaction forces (GRFs) is possible with insoles. However, these are often assemblies of commercial components, restricting design freedom and customization. In this work, we investigate using four 3D-printed soft foam-like sensors to sensorize an insole. These sensors were combined with system identification of Hammerstein-Wiener models to estimate the 3D GRFs, which were compared to values from an instrumented treadmill as the gold standard. The four sensors behaved in line with the expected change in pressure distribution during the gait cycle. In addition, the identified (personalized) Hammerstein-Wiener models showed the best estimation performance (on average 9.3% RMS error, R^2 = 0.85 and 7% mean absolute error (MAE)) for the vertical, mediolateral, and anteroposterior GRFs, showing that these sensors can estimate the resulting 3D force reasonably well. These results for nine participants were comparable to or outperformed other works that used commercial FSRs with machine learning. The identified models did decrease in estimation performance over time, but remained at an average of 11.35% RMS error and 8.6% MAE after a week, with the Hammerstein-Wiener models appearing consistent between days two and seven. These results show that 3D-printed soft piezoresistive foam-like sensors combined with system identification are a viable approach for applications that require softness, light weight, and customization, such as wearable (force) sensors.
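
For readers unfamiliar with the model class, the following toy sketch shows the Hammerstein-Wiener structure (static input nonlinearity, linear dynamics, static output nonlinearity) with arbitrary placeholder coefficients, plus one common relative-RMS metric; it is not one of the identified models from the study, and the paper's exact error normalisation may differ.

```python
# Toy Hammerstein-Wiener structure with placeholder coefficients; purely
# illustrative, not an identified model from the study.
import numpy as np
from scipy.signal import lfilter

def hammerstein_wiener(u, b, a, f, g):
    """u: input sequence; (b, a): linear block coefficients; f, g: static maps."""
    w = f(u)                    # static input nonlinearity (Hammerstein part)
    x = lfilter(b, a, w)        # linear time-invariant dynamics
    return g(x)                 # static output nonlinearity (Wiener part)

u = np.sin(np.linspace(0, 10 * np.pi, 1000))          # stand-in sensor signal
y = hammerstein_wiener(u, b=[0.2, 0.1], a=[1.0, -0.7],
                       f=lambda v: v + 0.3 * v**3,
                       g=lambda v: np.tanh(v))

def rms_percent(y_est, y_ref):
    # One common normalisation (by the reference range); the study's exact
    # definition of the relative RMS error may differ.
    return 100 * np.sqrt(np.mean((y_est - y_ref) ** 2)) / (y_ref.max() - y_ref.min())
```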

Contraction coefficients give a quantitative strengthening of the data processing inequality. As such, they have many natural applications whenever a closer analysis of information processing is required. However, it is often challenging to calculate these coefficients. As a remedy, we discuss a quantum generalization of Doeblin coefficients, which give an efficiently computable upper bound on many contraction coefficients. We prove several properties and discuss generalizations and applications. In particular, we give additional, stronger bounds: one specifically for PPT channels and one for general channels based on a constraint relaxation. Additionally, we introduce reverse Doeblin coefficients that bound certain expansion coefficients.
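
As classical intuition for the quantity being generalized, the Doeblin coefficient of a classical channel $W(y|x)$ is $\alpha(W) = \sum_y \min_x W(y|x)$, and it upper-bounds the total-variation contraction coefficient via $\eta_{\mathrm{TV}}(W) \le 1 - \alpha(W)$. A minimal sketch follows; the paper itself works with the quantum generalization.

```python
# Classical Doeblin coefficient of a channel given as a row-stochastic matrix
# W (rows = inputs, columns = outputs): alpha(W) = sum_y min_x W(y|x).
# It yields the bound eta_TV(W) <= 1 - alpha(W).
import numpy as np

def doeblin_coefficient(W):
    W = np.asarray(W, dtype=float)
    return W.min(axis=0).sum()   # minimum over inputs, summed over outputs

W = np.array([[0.7, 0.2, 0.1],
              [0.1, 0.8, 0.1],
              [0.2, 0.2, 0.6]])
alpha = doeblin_coefficient(W)   # 0.1 + 0.2 + 0.1 = 0.4
print(alpha, 1 - alpha)          # TV contraction coefficient is at most 0.6
```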

Latent variable models serve as powerful tools to infer underlying dynamics from observed neural activity. However, due to the absence of ground truth data, prediction benchmarks are often employed as proxies. In this study, we reveal the limitations of the widely used 'co-smoothing' prediction framework and propose an improved few-shot prediction approach that encourages more accurate latent dynamics. Utilizing a student-teacher setup with Hidden Markov Models, we demonstrate that the space of models with high co-smoothing scores can encompass models with arbitrary extraneous dynamics within their latent representations. To address this, we introduce a secondary metric: a few-shot version of co-smoothing. This involves performing regression from the latent variables to held-out channels in the data using fewer trials. Our results indicate that, among models with near-optimal co-smoothing, those with extraneous dynamics underperform in few-shot co-smoothing compared to 'minimal' models devoid of such dynamics. We also provide analytical insights into the origin of this phenomenon. We further validate our findings on real neural data using two state-of-the-art methods: LFADS and STNDT. In the absence of ground truth, we suggest a proxy measure to quantify extraneous dynamics. By cross-decoding the latent variables of all model pairs with high co-smoothing, we identify models with minimal extraneous dynamics. We find a correlation between few-shot co-smoothing performance and this new measure. In summary, we present a novel prediction metric designed to yield latent variables that more accurately reflect the ground truth, offering a significant improvement for latent dynamics inference.
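
A simplified sketch of the few-shot idea is given below: a map from inferred latents to held-out channels is fitted on only a few trials and scored on the rest. The actual metric is likelihood-based; plain ridge regression and R^2 are used here purely as stand-ins, and all array shapes are assumptions.

```python
# Simplified stand-in for few-shot co-smoothing: regress held-out channels on
# inferred latents using only k_shot trials, then score on the remaining trials.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score

def few_shot_score(latents, heldout, k_shot, seed=0):
    """latents: (trials, time, d); heldout: (trials, time, channels)."""
    rng = np.random.default_rng(seed)
    n_trials = latents.shape[0]
    fit_idx = rng.choice(n_trials, size=k_shot, replace=False)
    test_idx = np.setdiff1d(np.arange(n_trials), fit_idx)

    def flatten(a, idx):  # stack (trial, time) samples into rows
        return a[idx].reshape(-1, a.shape[-1])

    reg = Ridge(alpha=1.0).fit(flatten(latents, fit_idx), flatten(heldout, fit_idx))
    pred = reg.predict(flatten(latents, test_idx))
    return r2_score(flatten(heldout, test_idx), pred)
```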

Machine-learned language models have transformed everyday life: they steer us when we study, drive, manage money. They have the potential to transform our civilization. But they hallucinate. Their realities are virtual. This note provides a high-level overview of language models and outlines a low-level model of learning machines. It turns out that, after they become capable of recognizing hallucinations and dreaming safely, as humans tend to be, the language-learning machines proceed to generate broader systems of false beliefs and self-confirming theories, as humans tend to do.

Pretrial risk assessment tools are used in jurisdictions across the country to assess the likelihood of "pretrial failure," the event where defendants either fail to appear for court or reoffend. Judicial officers, in turn, use these assessments to determine whether to release or detain defendants pending trial. While algorithmic risk assessment tools were designed to predict pretrial failure with greater accuracy than judges, there is still concern that both risk assessment recommendations and pretrial decisions are biased against minority groups. In this paper, we develop methods to investigate the association between risk factors and pretrial failure, while simultaneously estimating misclassification rates of pretrial risk assessments and of judicial decisions as a function of defendant race. This approach adds to a growing literature that makes use of outcome misclassification methods to answer questions about fairness in pretrial decision-making. We give a detailed simulation study for our proposed methodology and apply these methods to data from the Virginia Department of Criminal Justice Services. We estimate that the Virginia Pretrial Risk Assessment Instrument (VPRAI) has near-perfect specificity, but its sensitivity differs by defendant race. Judicial decisions also display evidence of bias; we estimate wrongful detention rates of 39.7% and 51.4% among white and Black defendants, respectively.
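
As a naive baseline for the quantities reported above, and deliberately ignoring the outcome-misclassification correction that is the paper's contribution, the sketch below computes sensitivity, specificity, and wrongful-detention rates separately by group; the column names are hypothetical, not those of the DCJS data.

```python
# Illustrative only (hypothetical column names): naive group-wise sensitivity,
# specificity, and wrongful-detention rates from observed outcomes. The paper's
# method corrects these for outcome misclassification, which this sketch omits.
import pandas as pd

def rates_by_group(df, group="race", risk="flagged_high_risk",
                   detained="detained", failed="pretrial_failure"):
    out = {}
    for g, d in df.groupby(group):
        sens = d.loc[d[failed] == 1, risk].mean()          # P(flagged | failure)
        spec = 1 - d.loc[d[failed] == 0, risk].mean()      # P(not flagged | no failure)
        wrongful = d.loc[d[failed] == 0, detained].mean()  # detained despite no failure
        out[g] = dict(sensitivity=sens, specificity=spec, wrongful_detention=wrongful)
    return pd.DataFrame(out).T
```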

We investigate analytically the behaviour of the penalized maximum partial likelihood estimator (PMPLE). Our results are derived for a generic separable regularization, but we focus on the elastic net. This penalization is routinely adopted for survival analysis in the high-dimensional regime, where the maximum partial likelihood estimator (no regularization) might not even exist. Previous theoretical results require that the number $s$ of non-zero association coefficients is $O(n^{\alpha})$, with $\alpha \in (0,1)$ and $n$ the sample size. Here we accurately characterize the behaviour of the PMPLE when $s$ is proportional to $n$, via the solution of a system of six non-linear equations that can be easily obtained by fixed point iteration. These equations are derived by means of the replica method and under the assumption that the covariates $\mathbf{X}\in \mathbb{R}^p$ follow a multivariate Gaussian law with covariance $\mathbf{I}_p/p$. The solution of these equations allows us to investigate various metrics of interest and their dependence on the ratio $\zeta = p/n$, the fraction of true active components $\nu = s/p$, and the regularization strength. We validate our results by extensive numerical simulations.
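
The six order-parameter equations themselves are not reproduced here, so the sketch below only shows the generic damped fixed-point iteration one would use to solve such a system; the update map `F` is a placeholder for the replica equations.

```python
# Generic damped fixed-point iteration for a small non-linear system x = F(x);
# `F` is a placeholder standing in for the six replica equations.
import numpy as np

def fixed_point(F, x0, damping=0.5, tol=1e-10, max_iter=10_000):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        x_new = (1 - damping) * x + damping * F(x)
        if np.max(np.abs(x_new - x)) < tol:
            return x_new
        x = x_new
    raise RuntimeError("fixed-point iteration did not converge")

# Usage: solution = fixed_point(F, x0=np.ones(6)), with F mapping the six
# order parameters to their updated values.
```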

For a set of robots (or agents) moving in a graph, two properties are highly desirable: confidentiality (i.e., a message between two agents must not pass through any intermediate agent) and efficiency (i.e., messages are delivered through shortest paths). These properties can be obtained if the \textsc{Geodesic Mutual Visibility} (GMV, for short) problem is solved: oblivious robots move along the edges of the graph, without collisions, to occupy some vertices that guarantee they become pairwise geodesic mutually visible. This means there is a shortest path (i.e., a ``geodesic'') between each pair of robots along which no other robots reside. In this work, we optimally solve GMV on finite hexagonal grids $G_k$. This, in turn, requires first solving a graph combinatorial problem, i.e. determining the maximum number of mutually visible vertices in $G_k$.
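
As a hedged illustration of the underlying visibility condition, the sketch below checks whether a given set of occupied vertices is pairwise geodesic mutually visible in a graph, using networkx; note that the paper's hexagonal grid $G_k$ may be defined differently from networkx's `hexagonal_lattice_graph`, which is used here only as a convenient stand-in.

```python
# Check whether a set of occupied vertices is pairwise geodesic mutually
# visible: for every pair, some shortest path must avoid all other robots.
import itertools
import networkx as nx

def is_gmv_configuration(G, occupied):
    occupied = set(occupied)
    for u, v in itertools.combinations(occupied, 2):
        d = nx.shortest_path_length(G, u, v)
        # Remove the other robots and see whether a path of the same length survives.
        H = G.copy()
        H.remove_nodes_from(occupied - {u, v})
        if not nx.has_path(H, u, v) or nx.shortest_path_length(H, u, v) > d:
            return False
    return True

G = nx.hexagonal_lattice_graph(2, 2)          # stand-in grid, not the paper's G_k
print(is_gmv_configuration(G, list(G.nodes)[:3]))   # demo on an arbitrary triple
```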

Most tailored materials are heterogeneous at the ingredient level, and analysis of such heterogeneous structures requires knowledge of the microstructure. With this knowledge, multiscale analysis is carried out with homogenization at the micro level; second-order homogenization is used whenever the ingredient size is comparable to the structure size. Knowledge of the microstructure and its size is therefore indispensable for analyzing heterogeneous structures. Conversely, any structural response carries information about the microstructure, such as its spatial distribution, volume fraction, and ingredient size. Here, an inverse analysis is carried out to identify a heterogeneous microstructure from macroscopic measurements. The identification proceeds in two steps: in the first step, the macrostructure's length scale and effective properties are identified from the macroscopic measurements using gradient-based optimization; in the second step, these effective properties and length scales are used to determine the microstructure via inverse second-order homogenization.
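
A schematic sketch of the first step is given below, with a toy forward model standing in for the macroscopic solver (the actual model, measurements, and parameterization are not reproduced): effective properties and a length scale are recovered by gradient-based minimization of a misfit.

```python
# Schematic sketch of step one with a hypothetical forward model: recover
# effective parameters by minimizing the misfit to macroscopic measurements.
import numpy as np
from scipy.optimize import minimize

def misfit(params, loads, measured, forward_model):
    """params ~ (effective property, length scale); forward_model is user-supplied."""
    predicted = forward_model(params, loads)
    return np.sum((predicted - measured) ** 2)

# Toy quadratic response standing in for the macroscopic solver.
toy_model = lambda p, loads: p[0] * loads + p[1] * loads**2
loads = np.linspace(0.1, 1.0, 20)
measured = toy_model([2.0, 0.3], loads)        # synthetic "measurement"

result = minimize(misfit, x0=[1.0, 0.1],
                  args=(loads, measured, toy_model), method="L-BFGS-B")
print(result.x)   # recovers approximately [2.0, 0.3]
```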
