Click to add a red data point
Shift + Click to add a green data point
R to retrain the model
Number of trees in the forest. Set this higher for a smoother, more regularized final prediction. Adding trees rarely hurts accuracy, but training and prediction become slower.
Depth of each tree in the forest. Set this higher when more complicated decision boundaries are needed, but note that the number of nodes (and hence the runtime) grows exponentially with depth, and deep trees are more prone to overfitting unless there are enough trees to average out their noise. If you can afford many trees and the extra computation, a higher setting is usually worthwhile.
Number of random hypotheses (candidate split functions) considered at each node during training. Setting this too high risks overfitting: the trees become more similar to one another, and the forest loses the variety that makes the ensemble robust.
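The three hyperparameters above can be seen in a minimal random forest written from scratch. This is an illustrative sketch, not the demo's actual implementation; all function names and the toy data are invented for the example.

```python
import random

def majority(labels):
    """Most common label; sorted() makes tie-breaking deterministic."""
    return max(sorted(set(labels)), key=labels.count)

def train_tree(points, depth, num_hypotheses, rnd):
    """Grow one tree. At each node, try num_hypotheses random axis-aligned
    splits and keep the one that groups the labels most cleanly."""
    labels = [p[2] for p in points]
    if depth == 0 or len(set(labels)) == 1:
        return ("leaf", majority(labels))
    best = None
    for _ in range(num_hypotheses):
        axis = rnd.randint(0, 1)        # random feature: x or y
        thr = rnd.choice(points)[axis]  # random threshold from the data
        left = [p for p in points if p[axis] < thr]
        right = [p for p in points if p[axis] >= thr]
        if not left or not right:
            continue
        # score: how many points match their side's majority label
        score = sum(
            [q[2] for q in side].count(majority([q[2] for q in side]))
            for side in (left, right)
        )
        if best is None or score > best[0]:
            best = (score, axis, thr, left, right)
    if best is None:
        return ("leaf", majority(labels))
    _, axis, thr, left, right = best
    return ("node", axis, thr,
            train_tree(left, depth - 1, num_hypotheses, rnd),
            train_tree(right, depth - 1, num_hypotheses, rnd))

def predict_tree(tree, x, y):
    while tree[0] == "node":
        _, axis, thr, left, right = tree
        tree = left if (x, y)[axis] < thr else right
    return tree[1]

def train_forest(points, num_trees, depth, num_hypotheses, seed=0):
    rnd = random.Random(seed)
    # each tree sees a bootstrap resample of the data, so the trees differ
    return [train_tree([rnd.choice(points) for _ in points],
                       depth, num_hypotheses, rnd)
            for _ in range(num_trees)]

def predict(forest, x, y):
    # the forest's prediction is a majority vote over its trees
    return majority([predict_tree(t, x, y) for t in forest])

# Two well-separated clusters of labeled points (red vs. green)
points = ([(0.1, 0.2, "red"), (0.2, 0.1, "red"), (0.15, 0.3, "red")]
          + [(0.8, 0.9, "green"), (0.9, 0.8, "green"), (0.85, 0.7, "green")])
forest = train_forest(points, num_trees=25, depth=3, num_hypotheses=5)
print(predict(forest, 0.1, 0.1))
print(predict(forest, 0.9, 0.9))
```

Raising `num_trees` smooths the vote, raising `depth` allows more complicated boundaries at exponential cost, and raising `num_hypotheses` makes each tree greedier and the trees more alike.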