4 Webcasts You Might Have Missed
Our webcast schedule last month was chock-full of goodness. If you were too busy enjoying the summer sun, don’t worry — we’ve rounded up our recent webinar recordings.
Here are four recent webcasts featuring new tricks for your old mainframe, active archiving, a review of Trillium Discovery Center, and expert advice on de-siloing your legacy data. Enjoy!
Old Dogs, New Tricks: Big Data from and for Mainframe IT
Watch this webcast to see how machine data from z/OS is changing everything for Mainframe IT and enabling new solutions around IT Operations Analytics (ITOA), Security Information and Event Management (SIEM), and IT Service Intelligence (ITSI).
Put Long Term Data to Work Without Cluttering Your Database
If you need to store data long term to comply with regulations, or you want access to more than six months of customer data but can’t afford a bigger data warehouse, an active archive lets you store as much as you need.
Watch this webcast to discover the strategies, gotchas, technologies, architectures, and benefits of building an active archive in your company, so you can put your data to work!
Discovering More! Leveraging Trillium Discovery to Uncover, Remediate, and Prepare Your Data
Over the last several releases, Trillium Discovery Center has introduced a new, easy-to-use, browser-based view that helps a broader class of users understand and address data quality.
Watch this webcast for a review of core Discovery Center features: profiling data, building business rules, preparing and analyzing consolidated data sources, adding notes, and remediating issues through recode tables and applied expressions.
Mainframe Challenge: Unlocking the Value of Legacy Data
Even if you’re not directly responsible for legacy systems, you should be aware that siloed data inevitably compromises all your analytics and business intelligence efforts. It can also undermine data governance and security.
Watch this expert-led webinar to review how leading organizations optimize use of application and log data from the mainframe and elsewhere in their Big Data environments.