
The problem of chemotherapy treatment optimization can be defined as minimizing the size of the tumor without endangering the patient's health; chemotherapy must therefore achieve several objectives simultaneously, which makes the optimization problem multi-objective. In this paper, a multi-objective meta-heuristic method is proposed for cancer chemotherapy that balances two objectives: the amount of toxicity and the number of cancerous cells. The proposed method uses mathematical models to measure the drug concentration, the tumor growth, and the amount of toxicity, and employs a Multi-Objective Particle Swarm Optimization (MOPSO) algorithm to optimize the chemotherapy plan using cell-cycle-specific drugs. The method is a good fit for personalized medicine, as it returns a set of solutions that balance the different objectives and allow the most appropriate therapeutic plan to be chosen based on information about the patient's status. Experimental results confirm that the proposed method explores the search space efficiently and finds a suitable treatment plan with minimal side effects, by appropriately scheduling the chemotherapy drugs and controlling the injected dose. Moreover, the results show that the proposed method achieves better therapeutic performance than a recent similar method [1].
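The abstract does not spell out the optimizer, so the following is a minimal Python sketch of the general MOPSO pattern it describes: particles encode per-cycle doses, two toy objective functions stand in for the paper's tumor-growth and toxicity models (both are illustrative assumptions, as are all constants), and an external archive keeps the non-dominated treatment plans that form the returned solution set.

```python
import numpy as np

rng = np.random.default_rng(0)

def objectives(doses):
    """Toy stand-ins for the paper's models: tumor burden shrinks with
    total dose, toxicity grows with it (both purely illustrative)."""
    tumor = np.exp(-0.3 * doses.sum()) * 1e3    # surviving cancer cells
    toxicity = (doses ** 2).sum()               # cumulative side effects
    return np.array([tumor, toxicity])

def dominates(a, b):
    """Pareto dominance: a is no worse everywhere and better somewhere."""
    return np.all(a <= b) and np.any(a < b)

n_particles, n_cycles, n_iters = 30, 10, 200
pos = rng.uniform(0, 1, (n_particles, n_cycles))    # dose per cycle
vel = np.zeros_like(pos)
pbest = pos.copy()
archive = []                                        # non-dominated plans

for _ in range(n_iters):
    for i in range(n_particles):
        f = objectives(pos[i])
        if dominates(f, objectives(pbest[i])):
            pbest[i] = pos[i].copy()
        # maintain the external archive of non-dominated schedules
        archive = [a for a in archive if not dominates(f, objectives(a))]
        if not any(dominates(objectives(a), f) for a in archive):
            archive.append(pos[i].copy())
    leader = archive[rng.integers(len(archive))]    # random archive leader
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.5 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (leader - pos)
    pos = np.clip(pos + vel, 0.0, 1.0)              # dose bounds

print(len(archive), "non-dominated plans found")
```

The final archive plays the role of the solution set described above: a menu of dose schedules trading tumor reduction against toxicity, from which a clinician could pick based on the patient's status.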

Related content

Topological signal processing (TSP) over simplicial complexes typically assumes that the observations associated with a simplicial complex are real scalars. In this paper, we develop TSP theories for the case where observations belong to abelian groups more general than the real numbers, including function spaces that are commonly used to represent time-varying signals. Our approach generalizes the Hodge decomposition and allows signal processing tasks to be performed on these more complex observations. We propose a unified and flexible framework for TSP that expands its applicability to a wider range of signal processing applications. Numerical results demonstrate the effectiveness of this approach and provide a foundation for future research in this area.
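For readers new to TSP, here is a minimal sketch of the classical real-scalar Hodge decomposition that the paper generalizes to abelian-group-valued observations; the toy complex and edge signal are invented for illustration.

```python
import numpy as np

# Toy 2-dimensional simplicial complex: 4 nodes, 5 edges, 1 triangle.
# B1: node-to-edge incidence, B2: edge-to-triangle incidence.
# Edge order: (0,1), (0,2), (1,2), (1,3), (2,3); triangle (0,1,2).
B1 = np.array([
    [-1, -1,  0,  0,  0],
    [ 1,  0, -1, -1,  0],
    [ 0,  1,  1,  0, -1],
    [ 0,  0,  0,  1,  1],
], dtype=float)
B2 = np.array([[1], [-1], [1], [0], [0]], dtype=float)

s = np.array([1.0, 0.5, -0.2, 0.3, 0.7])   # real-valued edge signal

# Hodge decomposition: s = B1^T p + B2 w + h (gradient + curl + harmonic)
p, *_ = np.linalg.lstsq(B1.T, s, rcond=None)
w, *_ = np.linalg.lstsq(B2, s, rcond=None)
grad, curl = B1.T @ p, B2 @ w
harm = s - grad - curl

# The harmonic part lies in ker(B1) ∩ ker(B2^T); both should print as zeros.
print(np.round(B1 @ harm, 8), np.round(B2.T @ harm, 8))
```

The paper's contribution is, roughly, to make sense of this decomposition when the entries of `s` live in a general abelian group rather than the reals.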

This paper investigates Distributed Hypothesis Testing (DHT), in which a source $\mathbf{X}$ is encoded given that side information $\mathbf{Y}$ is available at the decoder only. Based on the received coded data, the receiver aims to decide between the two hypotheses $H_0$ and $H_1$ related to the joint distribution of $\mathbf{X}$ and $\mathbf{Y}$. While most existing contributions in the DHT literature consider i.i.d. assumptions, this paper assumes more generic, non-i.i.d., non-stationary, and non-ergodic source models. It relies on information-spectrum tools to provide general formulas on the achievable Type-II error exponent under a constraint on the Type-I error. The achievability proof is based on a quantize-and-binning scheme. It is shown that with the quantize-and-binning approach, the error exponent boils down to a trade-off between a binning error and a decision error, as already observed for i.i.d. sources. The last part of the paper provides error exponents for particular source models, \emph{e.g.}, Gaussian, stationary, and ergodic models.
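Schematically, the trade-off stated in the abstract can be written as
\[
\theta_{\mathrm{QB}} \;=\; \min\bigl\{\,\theta_{\mathrm{bin}},\;\theta_{\mathrm{dec}}\,\bigr\},
\]
where $\theta_{\mathrm{bin}}$ governs the event that the decoder recovers the wrong codeword inside a bin, and $\theta_{\mathrm{dec}}$ governs accepting $H_0$ when $(\mathbf{X},\mathbf{Y})$ actually follows $H_1$. This display is only a schematic of what the abstract states; the paper's general information-spectrum formulas refine both terms for non-i.i.d., non-stationary, and non-ergodic sources.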

In this paper, we put forward a model of a zero-error distributed function compression system for two binary memoryless sources X and Y, with two encoders En1 and En2 and one decoder De, connected by two channels (En1, De) and (En2, De) with capacity constraints C1 and C2, respectively. The encoder En1 observes X or (X,Y) and the encoder En2 observes Y or (X,Y), according to whether the two switches s1 and s2 are open or closed (taking values 0 or 1). The decoder De is required to compress the binary arithmetic sum f(X,Y)=X+Y with zero error by using the system multiple times. We use (s1s2; C1, C2; f) to denote the model, where we assume C1 \geq C2 by symmetry. The compression capacity of the model is defined as the maximum average number of times that the function f can be compressed with zero error per use of the system, which measures the efficiency of using the system. We fully characterize the compression capacities for all four cases of the model (s1s2; C1, C2; f), i.e., s1s2 = 00, 01, 10, 11. The characterization for the case (01; C1, C2; f) with C1 > C2 is highly nontrivial, and we develop a novel graph coloring approach for it. Furthermore, we apply the compression capacity for (01; C1, C2; f) to an open problem in network function computation: whether the best known upper bound of Guang et al. on the computing capacity is tight in general.
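The paper's coloring construction is not detailed in the abstract; the toy sketch below only illustrates the classical Witsenhausen-style idea that zero-error coding for a function reduces to coloring a confusability graph, applied to the binary sum f(X,Y)=X+Y. It is an assumption-laden illustration, not the paper's construction for the (01; C1, C2; f) case.

```python
import itertools

X_vals, Y_vals = [0, 1], [0, 1]
f = lambda x, y: x + y                      # binary arithmetic sum

def confusable(x1, x2):
    """x1, x2 need different codewords if some y separates their f-values."""
    return any(f(x1, y) != f(x2, y) for y in Y_vals)

# Greedy coloring of the characteristic graph on the X alphabet.
colours = {}
for x in X_vals:
    used = {colours[u] for u in colours if confusable(x, u)}
    colours[x] = next(c for c in itertools.count() if c not in used)

# Here both x-values are confusable, so an encoder seeing only X needs
# ceil(log2(2)) = 1 bit per symbol to support zero-error recovery of f.
print(colours)                              # {0: 0, 1: 1}
```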

In this paper, we study a novel episodic risk-sensitive Reinforcement Learning (RL) problem, named Iterated CVaR RL, which aims to maximize the tail of the reward-to-go at each step, and focuses on tightly controlling the risk of getting into catastrophic situations at each stage. This formulation is applicable to real-world tasks that demand strong risk avoidance throughout the decision process, such as autonomous driving, clinical treatment planning and robotics. We investigate two performance metrics under Iterated CVaR RL, i.e., Regret Minimization and Best Policy Identification. For both metrics, we design efficient algorithms ICVaR-RM and ICVaR-BPI, respectively, and provide nearly matching upper and lower bounds with respect to the number of episodes $K$. We also investigate an interesting limiting case of Iterated CVaR RL, called Worst Path RL, where the objective becomes to maximize the minimum possible cumulative reward. For Worst Path RL, we propose an efficient algorithm with constant upper and lower bounds. Finally, our techniques for bounding the change of CVaR due to the value function shift and decomposing the regret via a distorted visitation distribution are novel, and can find applications in other risk-sensitive RL problems.
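As a reading aid, here is a minimal Python sketch of what "maximizing the tail of the reward-to-go at each step" means: a CVaR operator replaces the expectation inside an otherwise standard Bellman backup. The toy MDP, the risk level alpha, and the discounting are assumptions; the paper's episodic algorithms ICVaR-RM and ICVaR-BPI add exploration on top of this kind of recursion.

```python
import numpy as np

def cvar(values, probs, alpha):
    """CVaR_alpha of a discrete distribution: the expected value over the
    worst alpha-tail (low rewards are the risk here)."""
    order = np.argsort(values)
    v, p = np.asarray(values, float)[order], np.asarray(probs, float)[order]
    cum = mass = 0.0
    for vi, pi in zip(v, p):
        take = min(pi, alpha - mass)
        cum += take * vi
        mass += take
        if mass >= alpha:
            break
    return cum / alpha

# One iterated-CVaR value iteration on a toy 2-state, 2-action MDP
# (transition model and rewards are hypothetical).
P = np.array([[[0.9, 0.1], [0.2, 0.8]],     # P[s, a, s']
              [[0.5, 0.5], [0.1, 0.9]]])
R = np.array([[1.0, 0.0], [0.5, 2.0]])      # R[s, a]
V = np.zeros(2)
alpha, gamma = 0.25, 0.9
for _ in range(100):
    Q = np.array([[R[s, a] + gamma * cvar(V, P[s, a], alpha)
                   for a in range(2)] for s in range(2)])
    V = Q.max(axis=1)
print(V)   # risk-sensitive values: each step penalizes the worst alpha-tail
```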

We develop the first active learning method in the predict-then-optimize framework. Specifically, we develop a learning method that sequentially decides whether to request the "labels" of feature samples from an unlabeled data stream, where the labels correspond to the parameters of an optimization model for decision-making. Our active learning method is the first to be directly informed by the decision error induced by the predicted parameters, which is referred to as the Smart Predict-then-Optimize (SPO) loss. Motivated by the structure of the SPO loss, our algorithm adopts a margin-based criterion utilizing the concept of distance to degeneracy and minimizes a tractable surrogate of the SPO loss on the collected data. In particular, we develop an efficient active learning algorithm with both hard and soft rejection variants, each with theoretical excess risk (i.e., generalization) guarantees. We further derive bounds on the label complexity, which refers to the number of samples whose labels are acquired to achieve a desired small level of SPO risk. Under some natural low-noise conditions, we show that these bounds can be better than the naive supervised learning approach that labels all samples. Furthermore, when using the SPO+ loss function, a specialized surrogate of the SPO loss, we derive a significantly smaller label complexity under separability conditions. We also present numerical evidence showing the practical value of our proposed algorithms in the settings of personalized pricing and the shortest path problem.
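Below is a minimal sketch of the two ingredients named above, under the assumption of a small finite feasible region: the SPO loss as the decision regret of acting on a predicted cost vector, and a margin-style query rule in the spirit of distance to degeneracy (here crudely proxied by the gap between the two best decisions; the feasible set, cost vectors, and threshold are all hypothetical).

```python
import numpy as np

# Feasible decisions as explicit vectors (e.g. the paths of a tiny graph);
# a linear objective c^T w is minimized over this finite set.
W = np.array([[1, 0, 1],     # toy "paths": which edges each decision uses
              [0, 1, 1],
              [1, 1, 0]], dtype=float)

def spo_loss(c_hat, c_true):
    """Decision regret of acting on the prediction c_hat under c_true."""
    return W[np.argmin(W @ c_hat)] @ c_true - (W @ c_true).min()

def near_degenerate(c_hat, margin=0.1):
    """Margin-style proxy for 'distance to degeneracy': request the label
    when the top two decisions under c_hat are nearly tied."""
    vals = np.sort(W @ c_hat)
    return vals[1] - vals[0] < margin

c_true = np.array([1.0, 2.0, 0.5])     # true cost vector (the "label")
c_hat = np.array([1.1, 1.9, 0.6])      # predicted costs
print(spo_loss(c_hat, c_true), near_degenerate(c_hat))
```

The active learner described above queries labels only for samples flagged by a rule of this flavor, which is why its label complexity can beat labeling every sample.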

The well-known Wiener's lemma is a valuable statement in harmonic analysis: in the Banach space of functions with absolutely convergent Fourier series, it gives a sufficient condition for the existence of a pointwise multiplicative inverse. We call functions that admit such an inverse \emph{reversible}. In this paper, we introduce a simple and efficient method for approximating, with elements from the space, the inverse of functions that are not necessarily reversible; we term this process \emph{pseudo-reversing}. In addition, we define a condition number to measure the reversibility of functions and study how reversibility behaves under pseudo-reversing. We then exploit pseudo-reversing to construct a multiscale pyramid transform, based on a refinement operator and its pseudo-reverse, for analyzing real-valued and manifold-valued data. Finally, we present the properties of the resulting multiscale methods and numerically illustrate different aspects of pseudo-reversing, including applications of the resulting multiscale transform to data compression and to contrast enhancement of manifold-valued sequences.
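The abstract does not reproduce the exact construction; the sketch below shows one plausible way to pseudo-reverse a symbol that vanishes on the circle, by regularizing its reciprocal before returning to coefficient space. The mask, grid size, and Tikhonov-style regularization are all assumptions, not necessarily the paper's method.

```python
import numpy as np

# Symbol a(w) = sum_k a[k] e^{ikw} of a refinement mask; here a(pi) = 0,
# so a has no inverse in the Wiener algebra and must be pseudo-reversed.
a = np.array([0.25, 0.5, 0.25])            # B-spline mask (illustrative)
N, eps = 256, 1e-2
w = 2 * np.pi * np.arange(N) / N
symbol = sum(c * np.exp(1j * k * w) for k, c in enumerate(a))

# Regularized reciprocal: close to 1/a where |a| is large, damped where a
# nearly vanishes (one plausible pseudo-reverse).
inv_symbol = np.conj(symbol) / (np.abs(symbol) ** 2 + eps)
b = np.fft.ifft(inv_symbol)                # coefficients of the pseudo-inverse

err = np.abs(symbol * inv_symbol - 1)
print(f"reciprocal error: median {np.median(err):.3f}, max {err.max():.3f}")
# The error concentrates near w = pi where a vanishes; elsewhere the
# pseudo-inverse behaves like a true inverse.
```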

The Check-All-That-Apply (CATA) method was compared to the Adapted-Pivot-Test (APT) method, a recently published method based on pairwise comparisons between a coded wine and a reference sample, called the pivot, using a set list of attributes as in CATA. Both methods were compared using identical wines, correspondence analyses, a Chi-square test of independence, and very similar questionnaires. The list of attributes used for describing the wines was established in a prior analysis by a subset of the panel. The results showed that CATA was more robust and more descriptive than the APT with 50 to 60 panelists: the p-value of the Chi-square test of independence between wines and descriptors dropped below 0.05 at around 50 panelists with the CATA method, whereas it never dropped below 0.8 with the APT. The discussion highlights differences in settings and logistics that render CATA more robust and easier to run. One of the objectives was also to propose an easy setup for university and food industry laboratories. Practical applications: Our results describe a practical way of teaching and performing the CATA method with university students and online tools, as well as in extension courses. It should find applications in consumer studies for the characterization of various food products. Additionally, we provide an improved R script for the correspondence analyses used in descriptive analyses, together with a Chi-square test to estimate the number of panelists needed for robust results. Finally, we provide a data set that could be useful for teaching sensory analysis and statistics.
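The paper's script is in R; a minimal Python equivalent of the Chi-square test of independence on a wines-by-descriptors CATA count table might look as follows (the counts are invented for illustration).

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical CATA citation counts: rows = wines, columns = descriptors
# (how many panelists checked each attribute for each wine).
counts = np.array([
    [34, 12,  5, 20],   # wine A
    [ 8, 30, 22,  6],   # wine B
    [15, 10, 28, 18],   # wine C
])
chi2, p, dof, expected = chi2_contingency(counts)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.4f}")
# p < 0.05 -> wines and descriptors are not independent, i.e. the panel
# discriminates the wines; the study tracks this p-value vs. panel size.
```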

The trade algorithm, which includes the curveball and fastball implementations, is the state of the art for uniformly sampling r x c binary matrices with fixed row and column sums. The mixing time of the trade algorithm is unknown, although 5r is commonly used as a heuristic. We propose a distribution-based approach to estimating the mixing time that can also return a sample of matrices that are nearly guaranteed to be uniformly sampled. In numerical experiments on matrices that vary in size, fill, and row and column sum distributions, we find that the upper bound on the mixing time is at least 10r, and that it increases as a function of both c and the fraction of cells containing a 1.
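For concreteness, here is a minimal implementation of a single curveball "trade" between two rows, which preserves both row and column sums; the matrix and the use of the 5r heuristic as a stopping point are illustrative.

```python
import random

def curveball_step(rows):
    """One 'trade' of the curveball algorithm: rows is a list of sets of
    column indices holding a 1; row and column sums are preserved."""
    i, j = random.sample(range(len(rows)), 2)
    shared = rows[i] & rows[j]
    pool = list((rows[i] | rows[j]) - shared)   # tradeable columns
    random.shuffle(pool)
    k = len(rows[i]) - len(shared)              # row i keeps its row sum
    rows[i] = shared | set(pool[:k])
    rows[j] = shared | set(pool[k:])

# A 4 x 5 binary matrix, stored as the set of 1-columns in each row.
rows = [{0, 1}, {1, 2, 3}, {0, 4}, {2, 3, 4}]
random.seed(1)
for _ in range(5 * len(rows)):                  # the 5r heuristic from above
    curveball_step(rows)
print(rows)                                     # same row and column sums
```

The question the paper studies is, in effect, how many such steps are needed before the matrix is indistinguishable from a uniform draw, and its experiments suggest more than the 5r heuristic.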

A mainstream line of current self-supervised learning methods pursues a general-purpose representation that can be transferred well to downstream tasks, typically by optimizing a given pretext task such as instance discrimination. In this work, we argue that existing pretext tasks inevitably introduce biases into the learned representation, which in turn leads to biased transfer performance on various downstream tasks. To cope with this issue, we propose Maximum Entropy Coding (MEC), a more principled objective that explicitly optimizes the structure of the representation, so that the learned representation is less biased and thus generalizes better to unseen downstream tasks. Inspired by the principle of maximum entropy in information theory, we hypothesize that a generalizable representation should be the one that admits the maximum entropy among all plausible representations. To make the objective end-to-end trainable, we propose to leverage the minimal coding length in lossy data coding as a computationally tractable surrogate for the entropy, and further derive a scalable reformulation of the objective that allows fast computation. Extensive experiments demonstrate that MEC learns a more generalizable representation than previous methods based on specific pretext tasks. It achieves state-of-the-art performance consistently on various downstream tasks, including not only ImageNet linear probing, but also semi-supervised classification, object detection, instance segmentation, and object tracking. Interestingly, we show that existing batch-wise and feature-wise self-supervised objectives can be seen as low-order approximations of MEC. Code and pre-trained models are available at //github.com/xinliu20/MEC.
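A minimal sketch of the coding-length surrogate idea: the lossy coding length of a batch of embeddings is a tractable log-det quantity that rewards high-entropy (non-collapsed) representations. The constants below follow the standard rate-distortion coding-length formula; MEC itself optimizes a cross-view variant and uses a Taylor expansion of the log-det for scalability, so this is an illustration rather than the paper's exact objective.

```python
import numpy as np

def coding_length(Z, eps=0.5):
    """Lossy coding length of d x m embeddings Z at distortion eps:
    a tractable log-det surrogate for the entropy of the representation."""
    d, m = Z.shape
    scale = d / (m * eps ** 2)
    sign, logdet = np.linalg.slogdet(np.eye(d) + scale * Z @ Z.T)
    return 0.5 * (m + d) * logdet

rng = np.random.default_rng(0)
Z_spread = rng.standard_normal((8, 64))                        # diverse
Z_collapsed = np.tile(rng.standard_normal((8, 1)), (1, 64))    # collapsed

# Maximizing coding length favors the spread, high-entropy representation.
print(coding_length(Z_spread), ">", coding_length(Z_collapsed))
```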

In this paper, we present a comprehensive review of the imbalance problems in object detection. To analyze the problems in a systematic manner, we introduce a problem-based taxonomy. Following this taxonomy, we discuss each problem in depth and present a unifying yet critical perspective on the solutions in the literature. In addition, we identify major open issues regarding the existing imbalance problems as well as imbalance problems that have not been discussed before. Moreover, in order to keep our review up to date, we provide an accompanying webpage that catalogs papers addressing imbalance problems according to our problem-based taxonomy. Researchers can track newer studies on this webpage, available at //github.com/kemaloksuz/ObjectDetectionImbalance.
