Data infrastructure optimization software
Data integration and quality software
Data availability and security software
Cloud solutions

Regional Data Centers: Also on Big Data’s Front Lines

Photo: David Carter, Chief Technical Officer of Brush Mountain Data Center

To better understand issues facing data center operators, who might be thought of as Big Data’s front line, we spoke to David Carter, Chief Technical Officer of Brush Mountain Data Center, a subsidiary of Advanced Logic Industries Inc. headquartered in Blacksburg, VA.

What’s the number one pricing question that prospects come to you with? Bandwidth? Disk space? Cycles? VM count?

Most often, prospective clients are making a genuine effort to compare services as fairly as they can. Generally, we are asked about price per GB of storage, price per GB of bandwidth, price per CPU clock cycle, and pricing for RAM resources. We usually provide a matrix so that clients can understand the difference that scale can make in an implementation.

Syncsort offerings, such as DMX-h for Hadoop, focus on rapid ETL ingest. Have you encountered situations in which customers needing rapid loading from legacy or other high-volume applications sought off-the-shelf tools? If not, what strategies are employed instead?

Our customers tend to fall at one extreme or the other: they either rely entirely on our team to manage their ingest and data builds, or they handle everything themselves. Most custom software packages require custom solutions, so we generally do not place limits on the tools used.

How has the DevOps movement influenced planning at Brush Mountain, if at all? What does DevOps mean for future data centers?

As with many functions in the technology sphere, complexity continues to increase while the resources to support it often do not keep pace. So DevOps, and the integration of software methodologies, technology and QA, takes on increasing importance, as each has a strong impact on the others' success.

What advantages can regional cloud providers offer to SMB (i.e., not the Fortune 500) to offset the name recognition enjoyed by Amazon, Microsoft, Google and Rackspace?

The strongest advantage comes from our technical teams being able to come alongside our clients and help them make the right decision for their specific circumstances. Not every situation requires the same tools, and so our teams get to know our clients and line up solutions that best fit their organizations. That, to us, is the differentiator.

What do you see as the implications of Big Data Velocity for regional data center business?

Big Data Velocity places all of us at a nexus between the need for high bandwidth and the need for increasingly commoditized solutions to manage the rate of data growth, since that data sometimes must be stored for years at a time. We see a need for stronger investment in infrastructure, as well as mechanisms to lower the time it takes to process inbound data for meaningful use.

What percent of a data center’s budget goes to security? To disaster planning? Who decides?

Our clients always decide. When we consult with clients, the answers to those questions are a direct result of how much the customer values the availability and reliability of their architecture in a disaster. So, does a local medical practice need a Tier IV data center with 35+% of its budget spent on ongoing security measures, or does it need best-practice measures in line with its budget? It's a tough call, and no two organizations have the same tolerance for downtime or service disruption.

Telecommunications providers are key data center partners. Are the key stress points technical or business? What trends will impact this relationship over the next five years?

Business issues with telecom tend to be our biggest constraints. Often our telecom providers can bring us the bandwidth we ask for, but at a price that would make it advantageous to invite competition.

What impact could Software Defined Networking have on your data center over the next five to 10 years? Who's driving those changes, if anyone?

We are experiencing those changes as we speak. Traditionally, the marriage between a manufacturer of networking equipment and its own software systems has been sacrosanct. Now, however, we are able to run software packages that suit our individual clients' networking needs without nearly as much consideration for constraints at the hardware level. We believe this will continue to accelerate over the next five to 10 years.

Comparing your data center to one the same size 10 years ago, what’s the main difference from an operational perspective? Have there been changes that directly influence customer service?

The ability to remotely manage just about every aspect of the data center has been a big shift. From a customer service perspective, this means that we can activate a customer's badge when we know they are coming, let them in, and remotely record their session without having to send an engineer onsite to supervise client work.

How much expertise in Hadoop, Storm, Flume, Spark and other popular Big Data tools should a data center possess? What tips the balance for technical leadership?

It truly depends on how close to the compute, processing and presentation layer a data center provider wants to get with its clients. We balance the need to move those data sets through our infrastructure with the ability to assist on specific projects.

Is the cloud movement favoring non-Microsoft solutions, or shall I keep my MSDN subscription a bit longer?

Cloud solutions do favor non-Microsoft offerings, but I have learned over the years never to count Microsoft out. They're often late to the game, but they have a knack for catching up and innovating in ways that win over customers. I'd keep that MSDN subscription. ☺
