Here's what hiring managers look for when vetting resumes for Hadoop-related positions -- and what prompts them to hit the delete button.
If writing your resume feels like an exercise in psychological warfare, well, there's a reason for that. It often seems as if an awkward line here or the wrong word there might dash any hopes of landing the job you covet, especially when faced with fierce competition.
If it's any consolation, recruiters and hiring managers experience their own version of that stress. Take Hadoop, for example, and the broader big data universe that surrounds it -- the technology's youth means that the ideal candidate is sometimes a moving target.
"Since Hadoop is a relatively new technology, and big data is a new industry, it can be difficult to properly evaluate a 'Hadoop candidate,'" said Darin Matuzic, lead technical recruiter at Riviera Partners, in an email interview. On the hiring side of the equation, Matuzic said it comes down to asking a few fundamental questions: "Who are we looking for? Are we looking for someone to set up the system infrastructure with which to run Hadoop? Are we looking for a data scientist to run analytics with Hadoop on our existing infrastructure? Or are we looking for someone who can do both?"
The answers to those and similar questions will differ among hiring managers and organizations, of course. But Matuzic said there are some common turn-ons and turn-offs for recruiters and employers when looking at resumes for Hadoop gigs and other big data positions.
Here's what gets your resume from the slush pile to the "yes" pile -- and what sends it straight to the "no" pile.
1. Yes: Strong object-oriented programming experience
"Hadoop is Java based, so strong Java experience is a huge indicator of a strong Hadoop engineer," Matuzic said. "If [you] have built systems in C++ and then moved into Java, that's the best. C++ is more systems-level and Java tends to move up the stack. Think of it like layers, with the physical hardware being at the bottom of the stack (C, C++) and Hadoop as a layer running on top (Java)."
2. No: Weak Java skills
Enough said. Build robust Java experience before taking the Hadoop plunge.
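The layering Matuzic describes boils down to the map/shuffle/reduce pattern that Hadoop implements in Java. As a rough sketch only -- this is plain Java, not Hadoop's actual API, where a real job extends the framework's Mapper and Reducer classes -- the word-count example that introduces most Hadoop tutorials looks like this:

```java
import java.util.*;

// A minimal, framework-free sketch of the map/shuffle/reduce pattern.
// Class and method names here are illustrative, not part of any Hadoop API.
public class WordCountSketch {

    // "Map" phase: each input line is turned into (word, 1) pairs.
    static List<Map.Entry<String, Integer>> map(String line) {
        List<Map.Entry<String, Integer>> pairs = new ArrayList<>();
        for (String word : line.toLowerCase().split("\\s+")) {
            if (!word.isEmpty()) {
                pairs.add(Map.entry(word, 1));
            }
        }
        return pairs;
    }

    // "Shuffle" groups the pairs by key; "reduce" then sums the values per key.
    public static Map<String, Integer> wordCount(List<String> lines) {
        Map<String, List<Integer>> grouped = new TreeMap<>(); // shuffle step
        for (String line : lines) {
            for (Map.Entry<String, Integer> pair : map(line)) {
                grouped.computeIfAbsent(pair.getKey(), k -> new ArrayList<>())
                       .add(pair.getValue());
            }
        }
        Map<String, Integer> counts = new TreeMap<>(); // reduce step
        for (Map.Entry<String, List<Integer>> e : grouped.entrySet()) {
            int sum = 0;
            for (int v : e.getValue()) sum += v;
            counts.put(e.getKey(), sum);
        }
        return counts;
    }

    public static void main(String[] args) {
        System.out.println(wordCount(List.of("to be or not to be")));
    }
}
```

In real Hadoop, the shuffle happens across machines and the framework handles the grouping; the engineering skill recruiters are probing for is knowing what breaks when those steps are distributed.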
3. Yes: Developed and configured distributed computing systems in prior positions
This is a big one for recruiters trying to identify the candidates with bona fide experience in the data trenches. "A lot of candidates have tinkered with technology like Hadoop, but without direct distributed systems experience they can't fully grasp how it works," Matuzic said. "Often this experience will come from working at larger corporations like Google, Facebook, Microsoft, IBM, [and other data-intensive enterprises] and translate well into the start-up world."
4. No: Haven't worked with big data
Matuzic isn't just name-dropping the biggest technology companies. Rather, those firms share something in common: They all have tons of customers and, as a result, megatons of data. IT pros who honed their Hadoop expertise and other data engineering skills at large enterprises tend to stand out.
"The more users a company has, the more data it creates that need to be analyzed. Hadoop engineers out of Twitter, Facebook, Google, et cetera are leaders in this regard," Matuzic said. "I'm not interested in someone out of a news aggregation site that no one has heard of and has 10,000 monthly users -- the data is just not there."
5. Yes: Personal projects and activities reflect passion
Matuzic likes to see evidence of "advanced interest" in Hadoop and other big data technologies beyond the workplace. Examples include attending or leading local industry meet-ups, earning professional certifications, and attending -- and especially speaking at -- conferences.
Matuzic's colleague Matt Andrieux lists similar activities as the best way for less experienced IT pros to develop their big data background. "[These] show that they’re passionate about being an expert in a rapidly growing and disruptive industry that is still in its infancy," he said.
6. No: Bonkers with buzzwords
"The use of too many buzzwords and technologies is an instant red flag," Matuzic said. "Candidates need to be meticulous when listing their proficiencies." If it's irrelevant, it's irrelevant -- leave it off the resume. Any recruiter or hiring manager worth her salt will know the difference between hype and substance.
7. Yes: Commitment to open source
The passion described above (#5) can be pushed a step or two further: Get involved with the open-source platforms commonly used in big data. The "ultimate," according to Matuzic, is finding candidates who are involved with open-source distributed systems projects -- if you want to work with Hadoop, then getting involved with the open-source project behind it is a good place to start. Platforms like MongoDB and Riak are likewise open source. Matuzic also mentioned other Apache projects, such as Flume and Storm, as relevant examples.
8. No: You're always on the move
Job-hopping might be common in the startup world, but it will sully a Hadoop resume because it leaves recruiters doubting whether your previous big data projects ever produced results.
"Short stints in previous roles are a major problem. Huge data sets are like a fine wine -- it takes time to fully understand and appreciate the intricacies within," Matuzic said. "A passionate Hadoop engineer will become consumed by their data set for at least a year."
Kevin Casey is a writer based in North Carolina who writes about technology for small and mid-size businesses.