Hate the player, not the game — or my case for Data Engineers

Has “database” become a dirty word within your organization lately? If you’re someone who has been a data technologist for the better part of your career, you may be wondering why the technologies you work with every day seem to be acquiring such a bad rap. From NoSQL to No DB, the current influx of brogrammers seems to take extreme pride in describing how they’re able to write code while avoiding any kind of database technology whatsoever.

The impetus for this post was actually something I read on the ‘Net the other day about Command Query Responsibility Segregation (CQRS), a concept I was initially excited about.

Martin Fowler has a nice, gentle introduction to the topic here.
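As a rough illustration (the names and structure here are my own sketch, not taken from Fowler's article), the core of CQRS is that commands mutate a normalized write model, while queries are served from a separately maintained read model that is kept in sync as commands succeed:

```python
# Minimal CQRS sketch: the command side and the query side use
# different models, synchronized by a projection step.
from dataclasses import dataclass, field

@dataclass
class WriteModel:
    """Normalized source of truth for orders."""
    orders: dict = field(default_factory=dict)

@dataclass
class ReadModel:
    """Denormalized view optimized for one specific query."""
    totals_by_customer: dict = field(default_factory=dict)

def place_order(write: WriteModel, read: ReadModel,
                order_id: str, customer: str, amount: float) -> None:
    """Command: validate and record the order, then project it
    into the read model."""
    if order_id in write.orders:
        raise ValueError("duplicate order id")
    write.orders[order_id] = {"customer": customer, "amount": amount}
    # Projection step: keep the query-side view in sync.
    read.totals_by_customer[customer] = (
        read.totals_by_customer.get(customer, 0.0) + amount
    )

def total_spent(read: ReadModel, customer: str) -> float:
    """Query: served entirely from the read model."""
    return read.totals_by_customer.get(customer, 0.0)
```

In a real system the two models often live in different stores and are synchronized asynchronously (e.g. via events), but the separation of responsibilities is the same.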

Before I get into the post, however, I think it’s useful for me to describe some of my attitudes toward data management. What’s really odd is that while I rather strongly disagree with the tone of Uncle Bob Martin’s Rant, I actually strongly agree with his assertion about the high value of use-case driven development.

I’ve had gentle debates about the meaning of “data as truth” with several people, and the age-old debate of whether data is more “important” than application code. Generally I’ve found that such debates end up as religious arguments instead of getting at the value of data in action. Because in the end it’s hard for data to have value unless it’s acted on by a set of processing directives (applications), and while it’s possible to have valuable applications that don’t require knowledge about the past (basic rule-engine codifications), in general the two need each other.

Why I call myself a data engineer

I’ve been impressed with EMC’s attempt to define a Data Science curriculum. In particular, I like how they describe the different skills and roles necessary for a successful data science team, including the hot new title of data scientist. The data science team often includes a data architect, a data engineer, and a database administrator. So, what is a data engineer? In a blog by Steve Todd, Director of EMC’s Global Research and Innovation Portfolio, he has the following characterizations:

The “Database Administrator” provisions and configures the database environment to support the analytical needs of the working team. The “Data Engineer” tends to have deep technical skills to assist with tuning SQL queries for data management and extraction. They also support data ingest to the analytic sandbox. These people can be one in the same, but many times the data engineer is an expert on queries and data manipulation (and not necessarily analytics as such). The DBA may be good at this too, but many times they may simply be someone who is primarily skilled at setting up and deploying a large database schema, or product, or stack.

Many, many DBAs wear both hats, but I don’t think that’s a good idea. In general, DBA is to data engineer as system administrator is to software engineer, but the scarcity of data engineers has forced DBAs into dual roles for which they are often not well-suited. While I have basic DBA skills, I’m much better at the skills listed under the data engineer, and I enjoy working with the data scientists or application developers who have questions about the data and/or how they’d like it structured to support their use cases.

This is one of the reasons I agree with Uncle Bob’s rant, in which he also rails against frameworks in addition to the database; I just wish frameworks had received equal billing in the rant and its title, but I’m guessing the No DB vitriol resonated more strongly with readers. In general I like making sure data is organized to support as many use cases as possible. That includes being performant for each use case, which may mean taking advantage of techniques to denormalize, to duplicate and synchronize, and to cache and distribute data.
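As one hypothetical illustration of the duplicate-and-synchronize and caching techniques mentioned above, here is a toy read-through cache; the loader function stands in for any query against the source of truth, and invalidate keeps the duplicated copy in sync when the underlying data changes:

```python
import time

class ReadThroughCache:
    """Toy read-through cache with TTL expiry (illustrative only;
    the loader could be any slow database query or service call)."""

    def __init__(self, loader, ttl_seconds=60.0):
        self._loader = loader      # fetches from the source of truth
        self._ttl = ttl_seconds
        self._entries = {}         # key -> (value, expires_at)

    def get(self, key):
        entry = self._entries.get(key)
        now = time.monotonic()
        if entry is not None and entry[1] > now:
            return entry[0]                        # cache hit
        value = self._loader(key)                  # miss: hit the source
        self._entries[key] = (value, now + self._ttl)
        return value

    def invalidate(self, key):
        """Call when the underlying data changes, so the duplicated
        copy stays synchronized with the source of truth."""
        self._entries.pop(key, None)
```

The same trade-off applies to any denormalized or duplicated copy of data: reads get faster, but every write now carries the obligation to synchronize or invalidate the copies.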
