
From Legacy ETL to Hadoop: A Customer’s Journey to Big Data

We’ve all heard about it, and companies are doing it. I’m talking about complementing existing data warehousing architectures with Big Data – or, more specifically, about shifting heavy ETL workloads to Hadoop. However, it’s always great when we get a chance to hear – not from the vendors – but from the people implementing these new architectures in their own organizations.

And it’s even better when that person happens to be one of our own customers. That’s why I’m very proud to share this interview from KDnuggets with Michael Lurye, Senior Director, Enterprise Data Management for Time Warner Cable.

Take a look and learn how Michael and the team at TWC decided to make the leap to Hadoop. Learn why they chose to offload ELT workloads to Hadoop and how they overcame the skills challenge – as Michael points out, “Companies that invented Hadoop, as well as early adopters, are full of developers accustomed to writing lots of code in low-level programming languages such as Java or Scala. But we are a BI shop and our developer skills are SQL and ETL tools, not Java.”

Apache Spark
Big Data technologies such as Hadoop and Spark can deliver BI solutions more cost-effectively by supplementing your existing data warehouse architecture rather than replacing it.
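To illustrate why a SQL-oriented BI team can make this move without retraining in Java or Scala, here is a minimal sketch of an offloaded transformation expressed in HiveQL. The table and column names are hypothetical, invented for illustration – they are not from TWC’s actual environment:

```sql
-- Hypothetical ELT offload: raw data lands in HDFS first, then is
-- summarized on the Hadoop cluster with Hive instead of burdening
-- the data warehouse with the heavy aggregation step.
CREATE TABLE daily_usage_summary AS
SELECT
    account_id,
    to_date(event_time)    AS usage_date,
    SUM(bytes_transferred) AS total_bytes,
    COUNT(*)               AS event_count
FROM raw_usage_events      -- external table over raw files in HDFS
GROUP BY account_id, to_date(event_time);
```

Because this is plain SQL, the transformation logic stays in the language the BI team already knows, while the execution shifts to the cheaper, scalable Hadoop platform.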

You can read all about this and more in the interview. And remember Michael’s words: “Don’t be afraid of change, think of it as an opportunity, not a threat.”
