I'm so sick of these infographics and guides. We need to address how much math and stats this field actually requires, rather than just making statements like "first-year linear algebra and calculus."
Figuring out where to go after "Stats 101" is one of the problems I believe "data scientists" like me are facing at the moment...
If you have Stats 101, linear algebra, and calculus, study some machine learning basics. The perceptron and decision trees/random forests, for example, will add some CS-motivated algorithms to your tool belt.
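To make the perceptron suggestion concrete, here's a minimal NumPy sketch (the toy dataset and hyperparameters are my own, purely for illustration): the classic mistake-driven update rule on a linearly separable set.

```python
import numpy as np

def perceptron_train(X, y, epochs=20, lr=1.0):
    """Train a simple perceptron; labels must be in {-1, +1}."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            # Misclassified (or on the boundary): nudge the hyperplane toward xi
            if yi * (xi @ w + b) <= 0:
                w += lr * yi * xi
                b += lr * yi
    return w, b

def perceptron_predict(X, w, b):
    return np.where(X @ w + b > 0, 1, -1)

# Toy data: roughly, class +1 when x0 + x1 > 1
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.], [2., 2.], [0.2, 0.1]])
y = np.array([-1, -1, -1, 1, 1, -1])
w, b = perceptron_train(X, y)
print(perceptron_predict(X, w, b))  # → [-1 -1 -1  1  1 -1]
```

The perceptron convergence theorem guarantees this loop terminates on linearly separable data, which is a nice first proof to work through with just the linear algebra background listed above.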
After that, just work on problems. Try Kaggle, develop your own projects, explore a subdomain (numerical optimization, graphical models, stochastic calculus, deep learning) and implement algorithms. The field is absurdly broad, and it takes a long time to familiarize yourself with the basics.
EDIT: Also note that the list is heavy on tech/software because the role requires engineering competence. And on reviewing the graphic again, it's really surface-level: this stuff gets you a junior role unless you also have academic research experience.
I'm already in what I would consider a pretty good data role, though perhaps I'd be seen as a glorified data analyst.
I guess machine learning is something I need to work on. I was looking at doing a postgrad stats degree, but I've seen it claimed a few times that stats degrees are outdated and machine learning is the modern approach to data analysis. I'm just not 100% sold on investing in learning ML, since things like recommenders are not really relevant to my work right now, but maybe there is more I could get out of it.
"Firmly" is a strong word here, since the fields differ historically.
I like this post on stats.stackexchange, inspired indirectly by Breiman's classic paper on modeling. One commenter describes the difference as generative vs. discriminative, which feels right: core ML algorithms (SVMs, neural networks) don't try to infer the data-generating process.
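To illustrate the generative side of that distinction, here's a minimal NumPy sketch (toy 1-D data of my own invention): model p(x | y) with class-conditional Gaussians and classify via Bayes' rule. A discriminative method like an SVM or logistic regression would instead learn the decision boundary directly and skip estimating these densities entirely.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D data drawn from two Gaussian classes (the true generating process)
x0 = rng.normal(-2.0, 1.0, 200)   # class 0
x1 = rng.normal(+2.0, 1.0, 200)   # class 1
X = np.concatenate([x0, x1])
y = np.concatenate([np.zeros(200), np.ones(200)])

# Generative step: estimate p(x | y) for each class from the data
mu0, sd0 = x0.mean(), x0.std()
mu1, sd1 = x1.mean(), x1.std()

def log_gauss(x, mu, sd):
    """Gaussian log-density up to an additive constant."""
    return -0.5 * ((x - mu) / sd) ** 2 - np.log(sd)

# Equal priors, so Bayes' rule reduces to comparing class-conditional likelihoods
pred = (log_gauss(X, mu1, sd1) > log_gauss(X, mu0, sd0)).astype(float)
accuracy = (pred == y).mean()
print(round(accuracy, 2))
```

With the class means two standard deviations apart, this recovers most labels; the point is that the generative model gives you an explicit (if possibly wrong) story for how the data arose, which the discriminative approach never commits to.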