One of the most rewarding aspects of software development is choosing the right tools. From platforms to programming languages to libraries, this process offers the exciting chance to break loose from the technical debt of past decisions and explore the latest and greatest in emerging technologies.
Of course, pinpointing the right tool can be very challenging. Not only is there a plethora of options, but product owners don’t always know the full requirements and constraints of features they’re seeking to build.
This struggle typically results in a kind of tooling paralysis, in which software engineers are unhappy with the current tech stack but also worry about the time, effort, and potential headaches of moving to other solutions. Product owners often gripe about swollen estimates and wonder why it takes multiple sprints to create a feature that could otherwise be completed in a 24-hour hackathon.
This is where a research engineer can make a big impact by methodically evaluating the multiple ways to approach a software problem. As engineers themselves, they’re armed with the perspective to find a solution that is both feasible and functional.
A research task starts with a vague idea, usually from a product owner or development manager. This idea is almost always too nebulous to put into a story -- in fact, it’s common to question whether it’s even possible.
For example, the idea may be to “git” an entire application, so that all the configuration changes are captured with rollbacks, branching, and review processes. In this case, should we store the change history in the database or in a source control repository? Do we write our own UI, or an adapter for a widely-used tool?
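To make the first option concrete, here is a minimal sketch, with all names hypothetical, of storing configuration history as content-addressed snapshots in the database, the way git stores blobs; rollback then amounts to reading back an earlier snapshot:

```python
import hashlib
import json

class ConfigHistory:
    """Toy, git-style content-addressable store for config snapshots.

    Each commit records the SHA-1 of the serialized config plus a log
    message, so rollback is simply reading an earlier snapshot by hash.
    """

    def __init__(self):
        self._objects = {}   # sha -> serialized config blob
        self._log = []       # list of (sha, message), oldest first

    def commit(self, config: dict, message: str) -> str:
        # Canonical serialization so identical configs hash identically.
        blob = json.dumps(config, sort_keys=True).encode()
        sha = hashlib.sha1(blob).hexdigest()
        self._objects[sha] = blob
        self._log.append((sha, message))
        return sha

    def checkout(self, sha: str) -> dict:
        # "Rollback" is just deserializing an earlier snapshot.
        return json.loads(self._objects[sha])

    def log(self):
        return list(self._log)


history = ConfigHistory()
v1 = history.commit({"timeout": 30}, "initial config")
v2 = history.commit({"timeout": 60}, "raise timeout")
rolled_back = history.checkout(v1)
```

A real proof-of-concept would of course also have to answer the branching and review questions, which is exactly where the alternative of reusing an actual source control repository starts to look attractive.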
This is when the research engineer pokes at the problem from many different angles and investigates whether any such tool exists to solve it. It’s my favorite part of the job: exploring the current tooling landscape and navigating the best route to meet the goal.
The first day is usually spent on the Internet, and the second in a university library. For every idea you can think of, there is almost always: 1. a hacker who has tried it out; and 2. an academic who has performed obscure yet insightful research.
Now the engineering part of the role kicks in: I take that vague idea, a Haskell proof-of-concept, and an academic paper, and try to write some code. The beginning of this process is straightforward enough, as virtually any modern technology has “Getting Started” guides.
But then I must start thinking about more complex, real-world scenarios. What happens when I add thousands of concurrent users? What if the data is corrupted, or a peer microservice is malicious? As I continue my investigation, I keep impeccable records. Rather like conducting a scientific experiment, I take note of all steps taken, assumptions made, and results observed.
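The kind of probe I write at this stage can be as small as the following sketch. Everything here is a hypothetical stand-in: the component under test is a deliberately naive in-memory counter, hammered from many threads to see whether concurrent updates are lost.

```python
from concurrent.futures import ThreadPoolExecutor

class NaiveCounter:
    """Hypothetical component under evaluation: a non-thread-safe counter."""

    def __init__(self):
        self.value = 0

    def increment(self):
        # Read-modify-write with no locking: a candidate race condition.
        self.value += 1

def hammer(counter, workers=50, calls_per_worker=1000):
    """Drive many concurrent increments; return (observed, expected) totals."""
    def worker():
        for _ in range(calls_per_worker):
            counter.increment()

    with ThreadPoolExecutor(max_workers=workers) as pool:
        for _ in range(workers):
            pool.submit(worker)
    # ThreadPoolExecutor's context manager waits for all tasks to finish.
    return counter.value, workers * calls_per_worker

observed, expected = hammer(NaiveCounter())
# Any gap between observed and expected is a lost update worth recording.
```

Whether the race actually surfaces depends on the runtime and scheduler, which is precisely the point: the experiment notes record both numbers, not just a pass/fail.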
If I discover a bug, I log it and write a pull request to fix it. This gives me a unique opportunity to observe the health of that technology’s community. For example, was the bug readily accepted, or did I get trolled and name-called for breaking things? How was their code review process? Did they accept my code as a fix? Was there any build automation prior to that? Asking these questions up front is critical, because my fellow engineers don’t have time to waste when they’re using this tech to build a new feature.
Once I’ve completed a proof-of-concept, I present it to the engineering team and product owners. This allows a story to be written, estimated, prioritized in the backlog, and eventually make its way into the product.
One of our process innovations is to have the research engineer join the team for a sprint or two during the implementation phase of the story. This is when we discover first-hand whether our original estimates and assumptions were accurate.
We’ve observed that involving research engineers benefits everyone. Product owners no longer see vague ideas rejected out of hand, have to rely on third-party anecdotes (“it works for Google”), or take deep dives into the technology themselves. Instead, they can focus on what they do best -- building the features our users want and need.
Engineers benefit from stories that are smaller and more detailed and may even get a working proof-of-concept for a feature they’re about to implement. There’s constant knowledge sharing between groups, even between offices in different locations. When a ‘nomadic’ research engineer joins a given team, he or she shares what works for other teams in the company. This promotes more rapid adoption of best practices rather than relying on watercooler interactions.
Tool proliferation remains one of the challenging double-edged swords facing DevOps-minded teams. Dedicating resources to a research engineer can help maximize the adoption of useful applications, minimize risk and complexity, and avoid unnecessary time-and-energy investments down the line.
Aleksey Vorona, a professional engineer for over 16 years, has applied his interest in big data to a range of challenges, from modeling LED power output to DevOps AI. Prior to spearheading the migration to microservices and distributed architecture at xMatters, Vorona served as a systems architect at a biotech machine learning startup. He also played a key role on the sports online team at Electronic Arts, helping to lead its transition to microservices, cloud computing, and NoSQL. Vorona is passionate about empowering the rising generation of software engineers by running meetups and mentoring local students in Victoria, British Columbia. He holds a PhD in Engineering from Saint-Petersburg ITMO University.