Encryption, Tokenization, and Anonymization on IBM i: How Do These Data-Protection Technologies Differ? Which One Should You Choose?
High-profile breaches, along with new and expanded compliance regulations, are compelling every company to increase its vigilance in securing sensitive data. As a result, if you find yourself under pressure to understand and implement encryption or some alternative to protect sensitive data within IBM i environments, Syncsort has a new e-book that can help. Encryption, Tokenization, and Anonymization for IBM i—A Quick Guide to Protecting Sensitive Data walks you through these essential data-protection technologies and explores the reasons for choosing one technology over another in different situations.
Let’s take a quick look at how these technologies work.
Encryption combines publicly available algorithms with private encryption keys (code strings that are unique to your company) to transform human-readable information into an unreadable format. If your system is breached and an attacker attempts to view sensitive data or steal entire files containing it, the information cannot be read unless it is decrypted back to its original form, and that requires the same key that was used to encrypt it. For encryption to be most effective, it is critical to use current algorithms (many older algorithms have been broken) and to carefully manage and protect your private encryption keys. Should your encryption keys fall into the wrong hands, all of your encryption efforts could be for naught.
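The symmetric-key principle described above can be illustrated with a short, self-contained sketch. This is a toy XOR cipher for demonstration only, not a production algorithm and not the IBM i tooling discussed in this article; real systems use vetted ciphers such as AES. The point it shows is that the exact key used to encrypt is the only thing that can recover the data.

```python
# Toy illustration of symmetric encryption (NOT production-grade crypto):
# the same private key that scrambles the data is required to recover it.
import secrets

def xor_with_key(data: bytes, key: bytes) -> bytes:
    # XOR is its own inverse, so one function both encrypts and decrypts.
    return bytes(b ^ k for b, k in zip(data, key))

sensitive = b"4111-1111-1111-1111"
key = secrets.token_bytes(len(sensitive))   # private key, kept protected

ciphertext = xor_with_key(sensitive, key)   # unreadable without the key
recovered = xor_with_key(ciphertext, key)   # only the same key restores it

assert recovered == sensitive
assert ciphertext != sensitive
```

A stolen ciphertext is useless here; a stolen key makes every ciphertext readable, which is why the article stresses key management.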
A note about using encryption to secure sensitive data at the field level within IBM i applications: IBM i provides an exit point called FieldProc that, in most cases, makes it possible to encrypt field data without code changes to the applications, saving significant time and expense. You can learn more about FieldProc in the Syncsort white paper IBM i Encryption with FieldProc and Alliance AES/400: Protecting Data at Rest.
An entirely different approach to protecting sensitive data is to replace it with non-sensitive substitute values called “tokens.” Tokenization uses a database—sometimes referred to as a token vault—to store the sensitive data along with the relationship between each sensitive value and its replacement token. Since token vaults should always be kept on a separate server, tokenization removes sensitive data from the server where the applications run. If your production system is compromised, an intruder cannot recover the sensitive data from the tokens, because tokens have no algorithmic relationship to the original values. Tokenization is commonly used to replace credit card numbers, social security numbers, and other personally identifiable information within applications and reports.
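The vault-based scheme above can be sketched in a few lines. This is a minimal illustration, not a real product: the vault is shown as an in-memory dictionary, whereas in practice it would be a database on a separate, hardened server, and the `tokenize`/`detokenize` names are hypothetical.

```python
# Minimal sketch of vault-based tokenization: the application keeps only a
# random token; the mapping back to the real value lives in the vault.
import secrets

vault: dict[str, str] = {}  # token -> original value (the "token vault")

def tokenize(value: str) -> str:
    token = secrets.token_hex(8)  # random, so no algorithmic link to value
    vault[token] = value
    return token

def detokenize(token: str) -> str:
    # Only systems with access to the vault can recover the original.
    return vault[token]

token = tokenize("123-45-6789")
assert token != "123-45-6789"
assert detokenize(token) == "123-45-6789"
```

Because the token is random, an attacker who captures only the production database holding tokens learns nothing about the original values.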
Anonymization is a form of tokenization that eliminates the token vault: the sensitive data is permanently replaced with a substitute value, making the original completely unrecoverable. This approach is ideal when using production data in development or test environments, or when reporting information to external partners or agencies. In these situations, the substituted data should not retain any link to the sensitive original.
Pros and Cons
Depending on the makeup of your IBM i environments as well as your data-processing and protection requirements, you may find that it makes the most sense to implement one of these data-protection technologies in one type of situation and a different technology in another. Each approach has its own benefits and drawbacks; for instance, there are things to consider regarding performance, complexity, regulatory compliance, impact on field formatting and database indexing, and more.
You can learn much more about these important data-protection technologies, as well as the pros and cons of each, by clicking here to download our e-book.