Expert Interview with Steve Garrison from Pica8.com on Use Cases for SDN

Steve Garrison – VP of Marketing for Pica8

Software Defined Networks (SDNs) technically aren’t new, yet many IT professionals have never worked with them. To get the attention of people in IT, what are the strongest use cases for SDNs in your experience?

We see SDN breaking up into four camps, or areas of focus, where the IT environment is specifically driving clear benefits and has well-defined requirements. Similar to how Cloud morphed over time, SDN is now moving from hype to clear areas of value. These are the four camps:

Enterprise Data Center – This is where the idea of network virtualization and SDN started. The key driver was extending application and server agility to the network. The common use case we see today is extending high availability and disaster recovery.

We think of VXLAN (or Labeled BGP) for overlays, and we see daily battles between the different vendors, with the Cisco ACI/APIC group challenging pretty much every other team. VMware has its lead because of its market presence and NSX (from Nicira), which is built around the idea of overlays to enable virtual machine (VM) mobility between network domains. Of course, we see lots of challengers here, from Midokura and PLUMgrid to Brocade’s Vyatta.

Enterprise LAN – This camp is still nascent compared to the Enterprise Data Center. Companies like HP launched an App Store and showed early use cases around collaboration (Microsoft) and security (DDoS protection for BYOD). These are real pain points for the enterprise. We all see the plethora of new devices we want to bring to work, and CIOs consistently keep security as their top spending bucket.

We saw early adopters looking at OpenFlow as a means for universal programmability. In this camp, users want turnkey solutions “that just work.” Companies like Ecode Networks and Pertino aim to create dashboards that give users “apps” to click on. We will get there!

Carrier/SP Data Center – This camp is very active; most early adopters of SDN are in this area. The benefit here is similar to Enterprise Data Centers: to drive network agility. For service providers, this takes on many faces. Services range from DR-as-a-Service (DRaaS) to ensuring that secure multi-tenant services can be scaled.

All the players from the Enterprise Data Center are here as well. OpenStack is the leading orchestration platform being tested, and integration with it is key: whether you are using the latest network plugin (currently Neutron) or leveraging OpenFlow through OpenDaylight, you can find a way to jump on this wave (a brief OpenStack sketch follows this list of camps). Service providers want their services to be provisioned more fluidly and dynamically. And of course, the idea of tailoring services for each customer is unthinkable without some degree of SDN to bring more policy-based thinking to networking.

Service Provider WAN – Whether you are a large carrier or a regional service provider, your goals are similar: find ways to drive up average revenue per user (ARPU, on the mobile side) and take share from your competitors. SDN can help, as it offers a more real-time means to build services that are flexible and customer-controllable. We see leading-edge service providers doing this today in all regions of the globe.

The reason this is such a hot camp is that service provisioning here means not only manual change; it means truck rolls. And whereas you can send a data center technician down a row to modify 50 cabinets, it takes far more time to roll a truck and a technician out to, say, 50 buildings in a part of your service area.

For this camp, we are seeing the emergence of white boxes for Network Function Virtualization (NFV) as part of a virtual CPE device. And of course, white box switches aggregate these vCPEs. Companies like Viptela, Versa and Nuage (ALU) are seeing success. We also see SD-WAN companies like Glue and Silver Peak jumping in.
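
To make the OpenStack integration mentioned above concrete, here is a minimal sketch of creating a tenant network and subnet through the Neutron API using the openstacksdk Python library. The cloud name, network names, and CIDR are hypothetical placeholders, and this is a generic illustration rather than a Pica8-specific workflow.

```python
# Minimal sketch: creating a tenant network and subnet through the
# OpenStack Networking (Neutron) API via the openstacksdk library.
# Assumes `pip install openstacksdk` and a clouds.yaml entry named
# "mycloud" (a hypothetical placeholder).
import openstack

# Connect using credentials from clouds.yaml / environment.
conn = openstack.connect(cloud="mycloud")

# Create an isolated tenant network; the SDN layer (overlay or
# OpenFlow-driven) realizes it on the physical fabric.
net = conn.network.create_network(name="tenant-net-blue")

# Attach an IPv4 subnet so instances can be addressed.
subnet = conn.network.create_subnet(
    network_id=net.id,
    name="tenant-subnet-blue",
    ip_version=4,
    cidr="10.10.0.0/24",
)

print(f"Created network {net.id} with subnet {subnet.cidr}")
```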

What is a “white box switch” and why should data center engineers know about them?

“White box” is a term people have used to describe the non-branded manufacturers that built PCs. Many of these “ODMs” (original design manufacturers) ramped up production of white box servers, and now many of them are producing white box switches. They look just like any other switch and are assembled by the same companies that build white box servers. To give you a sense of scale, Accton/Edge-Core, Quanta Computer and Alpha Networks all make white box switches (standard 1 RU 48-port Ethernet switches); as a collective, these companies have more than $200 billion of manufacturing muscle on an annual basis.

Data center managers should know about them because they represent a means for abstracting hardware from software. As in the server world, there is a transition underway to create more abstraction so that a hardware change does not impact the application. Today, a network upgrade does impact the application: even with Cisco, the user environment between a Nexus 3000 and a Nexus 9000 is very different. Hardware abstraction at the network level is addressing this.

On your website’s list of helpful use cases, you mention using Pica8’s VXLAN product for integration with OpenStack. Can you explain this use case for our readers who are less familiar with OpenStack?

Expounding on what we discussed above under use cases, the idea with VXLAN and other overlay technologies is to abstract the physical network. Why? Making changes in the IP addressing scheme to change resource separation is time-consuming. Creating abstraction on top of the physical network creates a means to connect disparate pools and not change the IP environment. This thinking is taking hold, and the early use cases are high availability and disaster recovery. Having a resource fail over to another cabinet or pool of servers can be policy-based if controllers turn on tunnels (VXLAN) when that connection is needed and tear them down afterwards. Extending this thinking to enabling virtual private services between distinct data centers helps to build a more secure hybrid cloud model. IT managers tell us that is their end game, yet starting with something more fundamental like increasing availability lets them learn and build the necessary in-house skills.
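
As a rough illustration of the policy-based tunnel lifecycle described here, the sketch below stands up a VXLAN tunnel for a failover event and tears it down afterwards. It assumes an Open vSwitch-style command line (ovs-vsctl), which Linux-based switch operating systems commonly expose; the bridge name, port name, VNI, and remote address are all hypothetical.

```python
# Minimal sketch of policy-driven tunnel lifecycle: bring a VXLAN
# tunnel up only while failover traffic needs it, then remove it.
# Assumes an Open vSwitch-style CLI (ovs-vsctl); the bridge name,
# port name, VNI, and remote IP are hypothetical.
import subprocess

BRIDGE = "br0"
TUNNEL_PORT = "vxlan-dr"

def create_dr_tunnel(remote_ip: str, vni: int) -> None:
    """Stand up a VXLAN tunnel toward the recovery site."""
    subprocess.run(
        ["ovs-vsctl", "add-port", BRIDGE, TUNNEL_PORT, "--",
         "set", "interface", TUNNEL_PORT, "type=vxlan",
         f"options:remote_ip={remote_ip}", f"options:key={vni}"],
        check=True,
    )

def teardown_dr_tunnel() -> None:
    """Remove the tunnel once the failover event is over."""
    subprocess.run(
        ["ovs-vsctl", "del-port", BRIDGE, TUNNEL_PORT],
        check=True,
    )

# A controller or monitoring hook would call these around a failover:
# create_dr_tunnel("192.0.2.10", 5000)  ... later ... teardown_dr_tunnel()
```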

How much interest is there in OpenStack? Do you have a response to the complaint of some critics that OpenStack introduces too much overhead?

We see a lot of interest in OpenStack, and we also agree it is a lot to dig into. Having said that, all new technologies and ideas take time to “get it right.” With OpenStack, the goal is clear: help organizations orchestrate resources from one pane of glass. That is idealistic, and over time we think we will get there. The reason the project is open source is to learn from and leverage as many early adopters as possible. Over time, use cases will be defined that are more palatable to mid-sized organizations. Today, we still see power users like eBay and Walmart making the first moves on the enterprise side. Cloud providers like Rackspace are the obvious first movers on the service provider side, and I am sure many readers know Rackspace was a key driver of the OpenStack initiative.

What role is VMware playing in promoting SDN or OpenStack? Are they a partner, or a competitor, or both? How about Cisco?

We see VMware as a key player in the overall movement. VMware introduced hooks for OpenStack integration last year, and we expect more synergy over time. An OpenStack user would certainly want to manage a VMware environment with the same set of tools. We don’t see VMware as an SDN company; rather, it’s a company that is pushing the higher-level idea of software-defined data centers. That does not mean VMware is removed from SDN; rather, they are the Switzerland of networking today, with multiple networking partners touting integration with NSX – including Pica8, of course.

Your web content promotes Pica8 as an element in DevOps scenarios, happily integrating with Puppet, Chef, CFEngine and other leading DevOps tools. How should DevOps folks proceed to work SDN (and, by extension, Pica8) into the DevOps set of duties?

We see DevOps as a long-term movement to help bridge the silos between all the operations teams in the data center: storage, network and compute all have their own methodologies and tools. CIOs tell us that this is a costly and slow way to do business. They want a more streamlined model that makes cross-training easier and thereby reduces costs, along with a smaller set of best practices. Pica8 participates by supporting the common tools you note (Puppet, etc.) and by being based on an unmodified Linux distribution. With x86 CPUs more common in top-of-rack white box switches, we help IT teams not only apply Linux thinking to manage the switch, but also let developers use the same tools for both servers and switches. We think this is a key “check box” for any next-generation product and a requirement for software-defined data centers.
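
To illustrate the “manage the switch like a Linux server” idea, here is a minimal sketch of declaring desired port state and converging to it idempotently, the same pattern Puppet, Chef, and CFEngine apply to servers. It assumes standard iproute2 tooling on a Linux-based switch; the interface names and MTU values are hypothetical.

```python
# Minimal sketch of desired-state configuration on a Linux-based
# switch: declare port state, then converge to it idempotently.
# Interface names and MTUs are hypothetical; assumes iproute2.
import json
import subprocess

DESIRED_PORTS = {
    "eth1": {"state": "up", "mtu": 9000},   # storage uplink
    "eth2": {"state": "up", "mtu": 1500},   # management
}

def current_mtu(ifname: str) -> int:
    """Read the live MTU via `ip -json link show`."""
    out = subprocess.run(
        ["ip", "-json", "link", "show", "dev", ifname],
        capture_output=True, text=True, check=True,
    ).stdout
    return json.loads(out)[0]["mtu"]

def converge(ports: dict) -> None:
    """Apply only the changes needed to reach the desired state."""
    for ifname, want in ports.items():
        if current_mtu(ifname) != want["mtu"]:
            subprocess.run(["ip", "link", "set", "dev", ifname,
                            "mtu", str(want["mtu"])], check=True)
        subprocess.run(["ip", "link", "set", "dev", ifname,
                        want["state"]], check=True)

if __name__ == "__main__":
    converge(DESIRED_PORTS)
```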

The mention of encapsulation and de-encapsulation techniques suggests that Pica8 resources can strengthen security for multi-tenant scenarios. Has Pica8 introduced new security opportunities, or does it only act as an overlay for existing security frameworks?

We ourselves operate more as an enabler here. The idea we promote is that overlays are not all created equal. Some organizations want to use VXLAN, and some want to use more traditional schemes like L2 or L3 over GRE, or MPLS. We support all of them! And again, tools must match the team’s skills; use the right tool for the right job. Making everyone do exactly the same thing just hinders adoption. Additionally, the security issues around VM mobility are manifold; at our layer, the network, we strive to use standards and established tools. Our differentiation is based on the idea that we are overlay-technology agnostic.
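
As a sketch of what overlay agnosticism can look like in practice, the snippet below treats the encapsulation as a parameter: on an Open vSwitch-style data plane, a VXLAN tunnel and a GRE tunnel differ only in the interface type. The bridge, port names, and addresses are hypothetical, and this illustrates the general idea rather than Pica8’s specific implementation.

```python
# Minimal sketch of overlay agnosticism: with an Open vSwitch-style
# data plane, VXLAN and GRE tunnels differ only in the interface
# type, so a provisioning tool can treat encapsulation as a policy
# knob. Bridge/port names and addresses are hypothetical.
import subprocess

def add_tunnel(bridge: str, port: str, encap: str, remote_ip: str) -> None:
    """Create an overlay tunnel; encap is 'vxlan' or 'gre'."""
    if encap not in ("vxlan", "gre"):
        raise ValueError(f"unsupported encapsulation: {encap}")
    subprocess.run(
        ["ovs-vsctl", "add-port", bridge, port, "--",
         "set", "interface", port, f"type={encap}",
         f"options:remote_ip={remote_ip}"],
        check=True,
    )

# The same call serves teams with different overlay preferences:
# add_tunnel("br0", "tun0", "vxlan", "198.51.100.7")
# add_tunnel("br0", "tun0", "gre", "198.51.100.7")
```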

The folks at Rule Your Cloud, speaking of the OpenStack Congress project, worried that “. . . fully automated IT management is a double-edged sword. While having people on the critical path for IT management was time-consuming, it provided an opportunity to ensure that those resources were managed sensibly and in a way that was consistent with how the business said they ought to be managed.” If OpenStack and related offerings create new requirements for policy enforcement and governance just to keep up with increased automation, are there improved opportunities created by this technology?

This is a really important question, and we see this dynamic challenge in organizations today. First, the notion that IT systems will become some uber-machine with no humans at all is just a false dream and vision. IT needs people, period. Having said that, all IT organizations tell us that they need something to help them scale. So let’s look at how to scale. If a team can support 10 times more devices through automation, that same team can now be more strategic. That is a huge plus. OpenStack, as noted before, is complex today; over time, it will get simpler. So the real question is: when do you adopt new tools? If your team is scaling well today, great. Stop, relax, watch the market. If your team is struggling to stay above water, it might be necessary to take a step back in order to take two steps forward. Making investments in new technologies takes time. In most cases we have seen over the past 20+ years, the investment pays off many times over.

You mention Pica8 coordination with automation products from HP, VMware, OpenStack, MidoKura and Mirantis. What specific scalability and security risks are associated with cloud orchestration services in SDNs?

Any time you change a best practice or introduce new technology, there is always risk, both in training your team and in potential new security issues that need to be checked. Having said that, we don’t see any show stoppers here – just careful assessment and a pragmatic approach to adoption. Start small; start with a project that is not in the mission-critical part of your infrastructure. Experiment with different solutions and vendors. See which combinations best fit your team’s experience and training programs and, most importantly, which are best aligned with your business needs. The specific concerns we have seen center on ensuring access by the right people at the right level, so having multiple layers of authentication is key, as is leveraging well-known and trusted security models.

How will offerings from firms like Pica8 affect the future of Big Data management?

We think there is huge potential here for optimizing Big Data tools such as Hadoop by having the network conform to the needs of the application. We have written a blog post on this topic to help users see what we can do today: Establishing the Big Data Connection speaks to our ability to correlate big data events and protocols back to network events and protocols, and to classify big data network flows correctly. In it, we discuss how administrators can achieve this and the role that network programmability and agility play. Today, we see most users expecting the network to be fast and stupid. Over time, to help push Hadoop to its highest performance levels, we think the network needs to be intelligent and tuned to squeeze out more performance – driving even better ROI for users.
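
As a toy illustration of classifying big data network flows, the sketch below maps destination ports to traffic classes that a controller could then prioritize. The port numbers are common Hadoop defaults that vary by version and site configuration, so treat the whole mapping as an assumption rather than a description of Pica8’s product.

```python
# Minimal sketch of classifying big data network flows by destination
# port so an SDN controller could map them to QoS classes. The port
# numbers are common Hadoop defaults (they vary by version and site
# configuration), so treat them as illustrative assumptions.

# Hypothetical mapping of well-known Hadoop ports to traffic classes.
HADOOP_CLASSES = {
    8020: "hdfs-metadata",    # NameNode RPC
    50010: "hdfs-bulk",       # DataNode data transfer
    13562: "shuffle",         # MapReduce shuffle handler
}

def classify_flow(dst_port: int) -> str:
    """Return a traffic class for a flow, defaulting to best-effort."""
    return HADOOP_CLASSES.get(dst_port, "best-effort")

# Example: a controller could install a higher-priority queue for
# shuffle traffic during a job and revert afterwards.
for port in (8020, 50010, 443):
    print(port, "->", classify_flow(port))
```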

To learn where Syncsort fits into your data center’s network flows, you might investigate how to join Hadoop Big Data with ETL by using DMX-h.
