There isn't always a need for the 'latest mapreduce buzzword', but access to these technologies is fairly standard, at least in my experience. As 'hackers', everybody wants to play with the latest technology and advance their skill set. I don't see why people get all crabby when people with smaller datasets hack on Hadoop/Spark. I think they have entitlement issues, ha. They remind me of Social Justice Warriors.
While this is funny, and a great comment on how a lot of people (business people, in my experience) are too quick to assume they have 'big data' and need the latest mapreduce buzzword tech, spending the kind of money those machines cost isn't feasible for a lot of companies or people doing projects, and renting the memory-oriented EC2 instances can rack up a huge bill quite quickly.
This misses the point: sometimes it's much easier to take 1 TB of text data and manipulate it using standard "big data" tools than it is to figure out how to do it on a single machine in RAM. I don't care where it fits. I care about doing the job as quickly, efficiently, and reproducibly as possible.
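For what it's worth, the "I don't care where it fits" stance works because jobs like this are record-at-a-time: the same shape of computation runs whether the input is 1 GB or 1 TB, and tools like Spark just distribute it. A minimal plain-Python sketch of that shape, using word count as a stand-in job (the sample data is made up; with a real corpus you'd pass an open file so lines stream off disk instead of loading into RAM):

```python
from collections import Counter

def word_counts(lines):
    """Stream over lines one at a time, never holding the whole
    input in memory -- the same map/reduce shape that Spark or a
    shell pipeline would compute, just not parallelized."""
    counts = Counter()
    for line in lines:
        counts.update(line.split())
    return counts

# Tiny stand-in for a large corpus; for real data you'd pass
# an open file handle, e.g. word_counts(open("corpus.txt")).
sample = ["the quick brown fox", "the lazy dog"]
top = word_counts(sample).most_common(1)
```

The point of the sketch is only that the per-record logic is identical at any scale; whether a single streaming process or a cluster runs it fastest is the trade-off being argued about here.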