Add a markdown cell below to display the newly created file, as follows:

Figure 2.5: Decision tree structure and a close-up of its fragment
The preceding diagram shows what our decision tree looks like. During training, it grows upside down: data (features) travels through it from the root (top) to the leaves (bottom). To predict the label for a sample from our dataset using this classifier, we start at the root and move down until we reach a leaf. At each node, one feature is compared to some value; for example, at the root node, the tree checks whether the length is < 26.0261. If the condition is met, we move along the left branch; if not, along the right.
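The root-to-leaf traversal described above can be sketched directly against scikit-learn's internal tree arrays. The toy data and the resulting split threshold below are illustrative, not the book's dataset (where the root threshold is 26.0261):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

X = np.array([[10.0], [20.0], [30.0], [40.0]])  # one feature: "length"
y = np.array([0, 0, 1, 1])

clf = DecisionTreeClassifier(random_state=0).fit(X, y)
t = clf.tree_

def predict_one(sample):
    node = 0  # start at the root
    while t.children_left[node] != -1:  # -1 marks a leaf node
        if sample[t.feature[node]] <= t.threshold[node]:
            node = t.children_left[node]   # condition met: go left
        else:
            node = t.children_right[node]  # otherwise: go right
    return int(np.argmax(t.value[node]))   # majority class at the leaf

print(predict_one([15.0]))  # matches clf.predict([[15.0]])[0]
```

The loop is exactly the procedure from the text: compare one feature to the node's threshold, branch left or right, and stop at a leaf.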
Let's look closer at a part of the tree. In addition to the condition in each node, we have some useful information:
The entropy value
The number of samples in the training set that support this node
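Both of these per-node values are exposed on the fitted tree. A short sketch, assuming scikit-learn and toy data (not the book's dataset):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

X = np.array([[10.0], [20.0], [30.0], [40.0]])
y = np.array([0, 0, 1, 1])

# criterion="entropy" makes impurity the entropy shown in the figure
clf = DecisionTreeClassifier(criterion="entropy", random_state=0).fit(X, y)
t = clf.tree_

for node in range(t.node_count):
    kind = "leaf" if t.children_left[node] == -1 else "split"
    print(f"node {node} ({kind}): entropy={t.impurity[node]:.4f}, "
          f"samples={t.n_node_samples[node]}")
```

At the root, all four samples are present and the classes are evenly split, so its entropy is 1.0; each leaf is pure, so its entropy is 0.0.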