1 point by sebastianb 3337 days ago

That table contains a few errors:

1. Neural nets are most certainly parametric models.

2. Naive Bayes can be either parametric or non-parametric: you have to model the class-conditional distributions, and you can do that with a parametric distribution (multinomial, Gaussian, etc.) or with a non-parametric estimator such as kernel density estimation.

3. You claim that Logistic Regression doesn't handle irrelevant features well. This is not true: add some L1 regularization and the weights of irrelevant features are driven to exactly zero (this is often used as a feature selector; see the sketch below).
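
A minimal sketch of that last point, assuming scikit-learn; the synthetic dataset and the regularization strength C are arbitrary choices for illustration:

  import numpy as np
  from sklearn.datasets import make_classification
  from sklearn.linear_model import LogisticRegression

  # 5 informative features plus 15 irrelevant ones (synthetic, for illustration)
  X, y = make_classification(n_samples=1000, n_features=20,
                             n_informative=5, n_redundant=0,
                             random_state=0)

  # the L1 penalty drives weights of uninformative features to exactly zero
  clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
  clf.fit(X, y)

  print("zeroed coefficients:", int(np.sum(clf.coef_ == 0)), "of", clf.coef_.size)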



1 point by dataschool 3335 days ago

@sebastianb, thanks for your comments.

1. I would disagree with you about neural networks. I call them non-parametric because they can fit arbitrarily complex decision boundaries. However, there is not one universal definition for what "parametric" means, so you might be using a different definition.
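
For what it's worth, under the other common definition, "parametric" means the number of parameters is fixed by the model architecture and does not grow with the training set. A minimal sketch of that counting, with arbitrary layer sizes:

  # a fixed-architecture MLP has a parameter count set by its layer
  # sizes, independent of how much training data you have
  def mlp_param_count(layer_sizes):
      # per layer: weights (fan_in * fan_out) plus biases (fan_out)
      return sum(n_in * n_out + n_out
                 for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))

  # e.g. 20 inputs, one hidden layer of 10 units, 1 output
  print(mlp_param_count([20, 10, 1]))  # 221, regardless of training-set size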

2. In the class, we only teach Multinomial and Gaussian Naive Bayes, and so for the purpose of the class (and the most common use cases), I would call it parametric. However, I do take your point.
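
To make the contrast concrete: GaussianNB summarizes each class with a fixed set of per-feature means and variances, while a KDE-based Naive Bayes keeps the training points around. A minimal sketch of the non-parametric variant, assuming scikit-learn; the bandwidth is an arbitrary choice:

  import numpy as np
  from sklearn.datasets import load_iris
  from sklearn.naive_bayes import GaussianNB
  from sklearn.neighbors import KernelDensity

  X, y = load_iris(return_X_y=True)

  def kde_nb_predict(X_train, y_train, X_test, bandwidth=0.5):
      # Naive Bayes with a kernel density estimate per feature per class
      classes = np.unique(y_train)
      log_post = np.zeros((len(X_test), len(classes)))
      for j, c in enumerate(classes):
          Xc = X_train[y_train == c]
          log_prior = np.log(len(Xc) / len(X_train))
          # the "naive" assumption: each feature is modeled independently
          log_lik = sum(KernelDensity(bandwidth=bandwidth)
                        .fit(Xc[:, [d]])
                        .score_samples(X_test[:, [d]])
                        for d in range(X_train.shape[1]))
          log_post[:, j] = log_prior + log_lik
      return classes[np.argmax(log_post, axis=1)]

  print((kde_nb_predict(X, y, X) == y).mean())            # non-parametric
  print((GaussianNB().fit(X, y).predict(X) == y).mean())  # parametric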

3. I agree with you that L1 regularization (for both logistic and linear regression) can be used as a feature selector and allows those models to better handle irrelevant features. However, I consider regularization a "technique" rather than a characteristic of the underlying model, and ultimately my table is about the underlying models. Thus, I stand by the contents of the table, but agree with your point about regularization.

-----



