In a world of relentless data proliferation, a strong data infrastructure is becoming a top priority for any company that wants to excel. But how does an organization, particularly a larger one, stay agile amid constant change and shifting methods? Where should it begin when trying to stay current, and what pitfalls should it watch out for? In our series on flexibility, we shed light on these questions. The following is a piece from the archives by Andy Ruberg on the culture of data engineering and how to shift an organization's mindset around data.
Among Fortune 500 companies and startups alike, the proliferation of data silos and the use of external data sources pose a significant challenge. Why? Because organizations know that to remain competitive, drive faster growth and reduce costs, they need to be data-driven. And they can’t be driven by hindsight alone (traditional reporting) or observation (basic business intelligence)—they need to be driven by foresight (predictive analytics).
Advancing an organization to predictive analytics, however, requires sophistication, or what we call “data maturity.” Companies need to join data across the enterprise and add external sources. They need to process, enrich and clean data to ensure reliability. They need to make data available to every decision-maker. It’s worth the effort, because the result is industry leadership. According to Gartner, by 2019, organizations that provide agile, curated internal and external datasets will realize twice the business benefits of those that don’t (Gartner, 2016).
But this level of innovation requires a total system overhaul.
Create A Culture of Data Engineering
How can an organization even begin? Through data engineering. In other words, organizations need to engineer the data infrastructure (warehousing, analytics tools, software, communication channels, etc.) to make reliable data accessible to everyone in the organization: business users, data scientists, middle management and executive management. A company that has done this beautifully is Airbnb. They call it “democratizing” data to make better decisions much, much faster. Yet even for a tech-based company, it took (and is still taking) major innovation to get there.
With tight budgets and timelines in an increasingly competitive landscape, enterprise organizations with long-established systems require greater change to shift their company culture. Those seeking to do so typically have three options:
- Tap internal IT – IT is traditionally responsible for maintaining existing applications and operational stability, which is more than a full-time job, making it hard for them to gain traction on data projects. Honing the skill set required for data engineering also takes extra time, since it isn’t core to their everyday responsibilities. Because IT usually lacks the capacity, organizations must recruit, interview, onboard and manage dedicated data engineers to build the infrastructure required to become a truly data-driven enterprise. With the average salary for a data engineer at approximately $90,000 (Payscale) plus roughly 25% in benefits, building internal capability is expensive and time-consuming.
- Hire consultants – This, too, is a traditional yet expensive option. The hourly rate for Big Four (EY, Deloitte, etc.) or Accenture data engineering consultants can quickly reach $300 per hour and beyond. In my experience in big consulting, building and maintaining the required data infrastructure often takes at least three full-time consultants over multiple years, or approximately 6,000 hours of work per year. That would be $1,800,000 per year just to build, deploy and maintain your data infrastructure. Additionally, consultants are incentivized to establish and grow the account, not to finish quickly and leave a fine-tuned machine in place. They’ll build custom solutions, but usually from scratch on older technologies, which leads to inefficiencies and higher costs.
- Pure software play – Completely offloading a big data project to an outsourced model is difficult and requires a hands-off approach that lacks personalization, visibility into progress, timely deliverables and value. The outsourced organization often doesn’t know your business well and struggles to extract value while frequently disrupting your team. This method may be more cost-efficient; however, it poses challenges to the organization and carries a higher risk of delivering value late and over budget.
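The cost figures above can be sketched as a quick back-of-the-envelope comparison. This is a minimal illustration using only the estimates cited in the list (a ~$90,000 base salary plus 25% benefits per in-house engineer, and three consultants at $300/hour for roughly 2,000 hours each per year); the function names and defaults are mine, not a standard model.

```python
# Back-of-the-envelope annual cost comparison using the article's
# estimates. All figures are illustrative assumptions, not quotes.

def in_house_annual_cost(engineers=3, base_salary=90_000, benefits_rate=0.25):
    """Fully loaded annual cost of an internal data engineering team."""
    return engineers * base_salary * (1 + benefits_rate)

def consulting_annual_cost(consultants=3, hourly_rate=300, hours_per_year=2_000):
    """Annual cost of full-time consultants at a big-firm hourly rate."""
    return consultants * hourly_rate * hours_per_year

if __name__ == "__main__":
    print(f"In-house team: ${in_house_annual_cost():,.0f}/yr")    # $337,500/yr
    print(f"Consultants:   ${consulting_annual_cost():,.0f}/yr")  # $1,800,000/yr
```

Even with recruiting overhead added to the in-house figure, the gap to the consulting model is large, which is why the incentive structure, not just the hourly rate, matters when choosing an option.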
The Data Engineering Solution
What if there were one more option? An option to partner your IT and other tech teams with a software company that deploys its own team of experienced data engineers armed with a cutting-edge platform for building data infrastructure. A company with deep experience creating and maintaining data pipelines across numerous industries. A company like—you guessed it—MetaRouter.
MetaRouter combines the cost profile of an outsourced model with the personalized touch of consultants and the incentive to deliver value quickly. The result? A higher rate of success, faster time to value, lower costs and a breadth of knowledge within your industry and beyond—insights that deliver the foresight you need to know where to take your business next.
If you want to stick to your core focus of growth, efficiency and competitive advantage, let MetaRouter handle the data engineering you need to get there. Curious about how that might look for you? Let’s talk.