Expert Interview Series: Matt Osminer of Cardinal Peak – Part 1

Matt Osminer is an engineering director at Cardinal Peak, where he uses his broad technical background – running the gamut from low-power embedded devices to high-performance computing to video and set-top app development – to lead a team of highly experienced software engineers in cutting-edge software development.

We recently checked in with Matt to find out what Big Data projects Cardinal Peak is tackling these days and get his advice on how companies can overcome their data problems. In Part 1 of this two-part interview, here’s what he had to say about Cardinal Peak’s specialization and the types of Big Data projects they tackle, when companies should outsource engineering, and best practices for Big Data integration:

What type of projects does Cardinal Peak specialize in?

Since 2002, Cardinal Peak has been supporting and engineering some of the most successful digital products and platforms being launched by the nation’s leading technology companies. Delivering unsurpassed leadership in the areas of the Internet of Things (IoT) and Digital Video & Audio, Cardinal Peak offers a rare combination of proven expertise and personalized, responsive service to its clients.

When should companies consider enlisting outsourced engineering?

  • Proof of Concept/Prototyping/Skunkworks – Many companies need to keep their engineering teams focused on their core products, and don’t have the luxury of exploring new ideas outside their main expertise. Outsourcing allows a company to explore new product and technology ideas without distracting their own internal development teams.
  • Short Term Surge – Companies sometimes want to tackle a short-duration project that requires fast time to market, but don’t want to take on the overhead to hire and onboard full-time or contract staff.
  • Expertise – When entering a new product or technology space, hiring an outsourced engineering firm with the expertise you need lets you get an initial product offering done faster and helps jump-start internal staff on the new technology.
  • Try before you buy – If you don’t presently have an engineering team and want to understand what it means to do engineering, hiring an outsourced firm can be a great way to gain insight.

What types of Big Data projects has your team tackled?

We’ve done a significant amount of work for a large telecom company, helping them build a Big Data platform for monitoring and analyzing their business’s performance. This included operations analysis, allowing the operations team to quickly identify network issues, monitor data loads, and identify common points of failure or areas that required additional resources to balance the load.

The platform also collects user interaction data for the company’s product to help the product design team understand where customers spend the most time and what actions they perform. This helps the product team identify areas in the product that can be improved, as well as what features are the most important and might be interesting to augment. This is the classic Big Data pitch.

In the IoT space, we’ve done projects that align very similarly to the operations work we’ve done in the telecom space. Clients collect error and maintenance information from their units to identify how their products are used – and how they perform – in the field. Using this information, they can identify components they may want to replace due to high failure rates, have service personnel address problems before a unit fails, and provide value-add services for third-party channels and partners selling and maintaining their products.
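The failure-rate analysis described above can be sketched very simply: aggregate error reports by component and rank components by how often they fail. This is a minimal illustration, not any client’s actual system; the field names (`unit_id`, `component`) and sample data are hypothetical.

```python
from collections import Counter

# Hypothetical error reports collected from fielded IoT units.
# The schema here is illustrative only.
reports = [
    {"unit_id": "A1", "component": "fan"},
    {"unit_id": "A2", "component": "fan"},
    {"unit_id": "A3", "component": "psu"},
    {"unit_id": "A4", "component": "fan"},
]

def failure_counts(reports):
    """Count reported failures per component across all units."""
    return Counter(r["component"] for r in reports)

counts = failure_counts(reports)
# Components with the highest counts are candidates for redesign
# or proactive replacement in the field.
worst_first = counts.most_common()
```

In a real platform the reports would stream in from devices and land in a data store, but the core question – which components fail most – reduces to this kind of aggregation.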

It boils down to two types of applications overall. One uses data to solve concrete problems, such as how a network is performing. The other uses Big Data to infer answers to questions by interpreting statistical trends.

How do you advise clients on managing Big Data integration? What best practices do you coach?

Always be clear on what your goals are, and what data sources you will need to achieve those goals. There are two major challenges in a Big Data project. The first is identifying, collecting and sanitizing the data you need. The second is correctly interpreting the data to achieve your goals. In our experience, the operations problems are easier to solve because they require less interpretation of the data, and the data you need is usually readily available from hardware throughout the system. Furthermore, because that data tends to be machine generated, it is more uniform and easier to process.
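The sanitization step mentioned above often amounts to parsing machine-generated records into a uniform shape and discarding anything malformed. Here is a minimal sketch of that idea; the log format and regular expression are invented for illustration, not the format of any real system discussed in the interview.

```python
import re
from datetime import datetime

# Hypothetical machine-generated log format: "<timestamp> <host> <LEVEL> <message>"
LINE_RE = re.compile(
    r"^(?P<ts>\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}) "
    r"(?P<host>\S+) (?P<level>[A-Z]+) (?P<msg>.*)$"
)

def sanitize(lines):
    """Parse raw log lines into uniform records, dropping malformed ones."""
    records = []
    for line in lines:
        m = LINE_RE.match(line.strip())
        if not m:
            continue  # discard malformed input rather than guess at it
        rec = m.groupdict()
        rec["ts"] = datetime.fromisoformat(rec["ts"])  # normalize timestamps
        records.append(rec)
    return records

raw = [
    "2024-05-01T12:00:00 node1 ERROR link down",
    "garbage line with no structure",
    "2024-05-01T12:00:05 node2 INFO heartbeat",
]
clean = sanitize(raw)
```

Because machine-generated data is uniform, a single parser like this covers most of the input; user-interaction data rarely yields to anything this simple, which is part of why the interpretive projects are harder.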

In the end it’s much easier to build a core Big Data platform and see useful results in a shorter span of time. In many ways this is a pure engineering project.

Answering more abstract questions – how customers use your product, or whether a new feature is meeting customer needs – is much more difficult. Selecting the proper data points can be tricky, as can gathering that data. Interpreting the data once it’s collected requires a more powerful set of search and analysis tools, not to mention data scientists to analyze the data properly and answer the questions at hand.

Supporting this next level of sophistication is a natural extension of an operations platform. Basically, you’re bringing new data sources online and ingesting them into your Big Data platform, then layering on additional query and data processing tools, and then topping it off with people who can properly use the tools. I’m over-simplifying of course, but that’s the idea.

The only thing you’re really missing is someone at the top asking the right questions and guiding the results to get the best value for the cost – another very tricky role.

In tomorrow’s Part 2 of this interview, Matt will discuss the most common pain point organizations are looking to overcome in data management, how to improve ROI on engineering investments, a mistake businesses make in staffing for their development needs, the most interesting or innovative IoT projects his team has taken on, and what industry trends he’s following.

As explained in the white paper, “Syncsort DMX-h: Modern Data Integration for Your Modern Data Architecture,” Syncsort designed DMX-h to help organizations build a core Big Data platform to get useful results in a shorter span of time. The paper documents how DMX-h simplifies the creation and management of end-to-end data integration and transformation processing to shorten time to value, while allowing organizations to leverage their existing data integration skill sets.

Authored by Susan Jennings

Syncsort contributor Susan Jennings writes on business topics ranging from big data and digital marketing to leadership and entrepreneurship.