Remember the 400? If you’re reading this, I’ll bet you have at least a passing familiarity with IBM’s midrange systems.
On June 21, 1988, IBM introduced the Application System/400 (AS/400) and it changed the business world. Initially code-named “Silver Lake”, the 400 debuted in a highly choreographed launch, arriving alongside more than 1,000 vendor programs and software packages that instantly created a market for serious business systems. This niche was begging to be filled with powerful, usable computers that brought a lot of mainframe power to small and mid-size businesses. With hundreds of configuration options available immediately, the AS/400 took a commanding spot in a highly profitable space, allowing Big Blue to ship 400,000 systems over the decade that followed.
As the product evolved, the AS/400 family went through a number of name changes over three decades: first renamed eServer iSeries around the turn of the millennium, then System i, and now part of an ecosystem cleverly encompassed by the Power Systems branding.
Throughout its evolution, the 400 was billed as the paragon of security, fundamentally built to quickly dispense with any talk of risk to IT systems and data before anyone could ever say “defense in depth.” Dependable hardware integrated with stable software created an integrated system that forged a solid pedigree for dependable business computing.
Companies of all sizes depended on mission critical applications that touched sensitive information assets in the sheltered context of its integrated OS/400 (and its descendants i5/OS and IBM i) thanks to an integrated database and journaling features. As the platform kept pace with the world around it and the system that came to be known simply as the ‘i’ evolved with the times, new functionality allowed it to expand its scope of operations with enhanced connectivity, a robust TCP/IP stack and modern networking.
While the OS retained its flat virtual memory model and object-based architecture, it developed the ability to interact directly with Windows-based networks and vice versa. This late adoption of TCP/IP became a challenge for even capable IT departments once they realized that the i’s security model had suddenly been thrust into the same risk envelope as the rest of the corporate servers under management.
For the better part of the last decade, organizations that have had the foresight – or more realistically, the regulatory pressure – to explore data risk exposure ‘on the 400 side’ have discovered various challenges in securing sensitive information due to a number of factors that should come as a surprise to no one:
- the sheer volume of legacy applications still in use
- a model based on security by obscurity
- increased complexity at all layers
The once-secure IFS architecture can store Windows PC data and, with it, malware. What this means today is that the i can provide dependable storage for any number of threats to the stability of the network and the security of sensitive information, even if those threats can’t execute on the i itself.
While the venerable platform can and does act as a perfectly good incubator for malicious attack vectors into the entrails of unsuspecting organizations, the reality of that doesn’t quite hit the vast majority of IT managers until they realize that its diverse programs harbor some of the latest software vulnerabilities.
Take POODLE, for example. According to Wikipedia, “Padding Oracle On Downgraded Legacy Encryption” is a man-in-the-middle exploit which takes advantage of Internet and security software clients’ fallback to SSL 3.0. Such an attack can control the connection between client and server to run malicious browser code, decrypt supposedly secure traffic to, say, banking sites, and generally access confidential data being passed back and forth.
What does your trusty IBM i have to do with this? Nothing, except that its Apache based IBM HTTP Server is vulnerable to it and represents a risk to your organization.
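POODLE is a downgrade attack, so the client-side fix is to refuse the downgrade. As a minimal sketch (in Python rather than the i’s own tooling), a client can pin its protocol floor so a handshake to a server that only offers SSL 3.0 fails outright instead of silently falling back:

```python
import ssl

def strict_client_context() -> ssl.SSLContext:
    """Build a client TLS context that refuses the SSL 3.0 fallback POODLE abuses."""
    ctx = ssl.create_default_context()
    # Pin the floor at TLS 1.2: a server offering only SSL 3.0 (or early TLS)
    # now fails the handshake instead of negotiating a downgraded connection.
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    return ctx
```

On the server side, the equivalent mitigation for an Apache-based server such as the IBM HTTP Server is to disable SSLv3 in its configuration; consult IBM’s advisory for the exact directives for your release.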
Isolated vulnerability? Don’t bet on it.
A quick look through vulnerability databases yields a surprising number of vulnerabilities plaguing IBM systems. Though not all of them impact the i, the sheer number and diversity of programs available from and through Big Blue cover a wide swath of security risks.
As of this writing, 374 new vulnerabilities have been reported for everything from middleware to the integrated file system itself, and they span the entire spectrum of risks that every organization should have active protection against. From availability breaches caused by denial of service attacks to terrifying scenarios involving the corruption of DB2 data, all threat vectors are represented within a grand total of 2,159 reported vulnerabilities tracked since 1999.
Although the inherent vulnerabilities presented by different software packages can be a significant wake-up call for many organizations, regular operational risk reviews of the i can paint an even more meaningful picture of the potential exposures that need to be addressed.
For instance, it should be intuitively clear that the number of users on a system increases the risk of compromise, as each of them has different access permissions to a diverse set of objects. Studies have found that the average IBM i hosts about 800 user profiles: not an outlandish figure, but a lot to manage. Many of these are long-forgotten accounts, some administrative, and many carry inherited access left behind after departmental moves. Multiply that number by the average number of objects on a system, 35,000, and a fuzzy picture of the security posture of midrange systems emerges. While an adequately secured IBM i has all libraries and objects secured against public access, this is often not the norm. Individual objects are rarely secured to individual users, so privilege creep is rampant. This places the focus squarely on access control and, more precisely, the security of user access credentials.
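That multiplication is worth making explicit. Using only the averages cited above (figures from the text, not from any particular audit), every system carries tens of millions of potential user-object authority decisions:

```python
# Back-of-the-envelope scale of the access-control problem, using the
# averages cited in the text above.
avg_users = 800        # average user profiles per IBM i
avg_objects = 35_000   # average objects per system

authority_pairs = avg_users * avg_objects
print(f"{authority_pairs:,} potential user-object authority decisions")
# With objects rarely secured to individual users, almost none of these
# pairs are ever explicitly decided -- which is how privilege creep takes hold.
```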
Earlier this year, IBM published a staggering statistic in its X-Force security report: its systems track 7,000 malicious hacking attacks each day. This creates a climate of risk in which breaches are not just frequent but continuous.
With the confirmed compromise of half a billion records in 2013 and almost one billion in 2014, the scale of the threat, mostly targeting businesses, has ramped up considerably since 2011, the “Year of the Security Breach” according to IBM.
Frequency is one thing, cost is another
By teaming up with the Ponemon Institute, IBM gained and shared visibility into the sheer scale of the financial impact on companies, industries and economies. They pegged the current cost at $5.5 million per breached organization and expect that amount to grow by 68 per cent over the next five years. As an example, based on the reported average of $136 per lost record, a major retailer could face more than $1 billion in fines and associated costs. That figure seems plausible, yet it is only the tip of the iceberg: following revelations of its massive breach, Target’s sales took a hit of almost half a billion dollars per quarter for much of this year.
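The arithmetic behind that retail example is easy to check. A sketch using only the figures quoted above:

```python
# All inputs are the figures quoted in the text (Ponemon/IBM estimates).
cost_per_record = 136              # USD per lost record
retailer_exposure = 1_000_000_000  # the "$1 billion" retailer scenario

records_implied = retailer_exposure / cost_per_record
print(f"~{records_implied:,.0f} records")  # roughly 7.4 million records

cost_per_breach_now = 5.5e6                           # USD per breached organization
cost_per_breach_future = cost_per_breach_now * 1.68   # +68% over five years
print(f"${cost_per_breach_future:,.0f} projected per breach")
```

A breach of a few million customer records, well within the range seen at large retailers, is all it takes to reach the billion-dollar mark.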
Traces of the old 400 and the new i exist in just about every industry sector. The approximately 400,000 Power Systems around the world today are trusted with the most sensitive data in the organization, ranging from credit card numbers, bank accounts and personal health information to customer details, payroll records and transactional data. A whopping 16,000 banks run their ERP, supply chain, financial applications and core banking on the i, a powerful reminder of how much rides on a fundamentally secure configuration.
Unfortunately, many of these systems are poorly configured and their security is largely mismanaged. With limited visibility into the risk, companies find it easy to perpetuate a laissez-faire culture of security by obscurity, effectively creating a false sense of security.
I may be wrong, but today’s i is not your daddy’s 400. There are numerous ways to access sensitive data and even more options to connect to other systems, most notably over the TCP/IP stack. Thirty years ago, sensitive data was protected by a green-screen application menu; that was the only way to get at the data. Today, without database checks and referential constraints, anyone with access to the database can reach that data through an interface of their own, or through numerous other avenues to the underlying objects. Sure, you can retrofit application security with an adopted authority scheme, but you’d have to know what you’re doing. Absolutely, you can control and log all network data access with exit programs, alert on data integrity changes and log improper manipulation with database change audit software, but do you?
The threat landscape for i
The leap in connectivity and modernization seen by the i over the past couple of evolutionary iterations brought with it a multitude of attack vectors most commonly associated with the Windows platform, with a focus on server-based vulnerabilities:
- inactive user profiles with unchecked permissions
- excessive user access
- data transfer in plaintext
- password management
- database vulnerabilities
- patch management
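Several of these vectors can be caught with a routine review of a user-profile export. A minimal sketch, assuming a hypothetical CSV export with profile, last_signon and password_equals_name columns; the column names and the 90-day threshold are illustrative, not an IBM format or default:

```python
import csv
from datetime import datetime, timedelta

INACTIVE_AFTER = timedelta(days=90)  # assumed policy threshold, not an IBM default

def flag_risky_profiles(csv_path: str, today: datetime) -> list[tuple[str, str]]:
    """Flag inactive profiles and profiles whose password equals the profile
    name (the classic 'default password' condition on the i)."""
    findings = []
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            last = datetime.strptime(row["last_signon"], "%Y-%m-%d")
            if today - last > INACTIVE_AFTER:
                findings.append((row["profile"], "inactive"))
            if row["password_equals_name"] == "Y":
                findings.append((row["profile"], "default password"))
    return findings
```

Run against a current export, a report like this turns the abstract list above into a concrete remediation queue of named profiles.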
Drive sharing turned the IFS into a malware carrier, while default passwords, generic roles and inactive profiles made unauthorized access possible.
But the continued assumption that the i is still invulnerable to security threats is the largest single risk to information assets stored and processed on the platform.
Certainly, platform specific security risks abound, but they are dwarfed by the simple failure to enact best practices and harmonized risk management across platforms. Many organizations are dependent on the i for business operations and are even familiar with its object-based model, but how many align the security preparedness of their midrange systems to the rest of their IT operations?
If pressed, IT managers will indicate an understanding of some of the biggest issues plaguing the platform today:
- private object authorities overriding the Authorization list
- patched programs that can violate system security and data integrity
- hypervisor-specific/virtualization vulnerabilities
Not to mention the IBM i’s full support and in many cases dependence on Java, which accounts for half of all Web-originating malware, according to IBM’s own X-Labs. In many cases, every user has complete access to every object on the system and the average number of privileged accounts per machine is over 80.
Until recently, one-third of systems failed to use system-level auditing and 68 per cent allowed any user to change data on the System i using PC applications like Excel and Access. While two-thirds of organizations used no exit programs at all, only six per cent of the ones that did employed the features correctly.
What this means is that in some retail environments, in stores and offices dependent on IBM i, every single user may have full access to credit card numbers and customer data stored in database tables, nullifying any efforts towards PCI-DSS compliance. In healthcare, the concept of the Circle of Care simply can’t be enforced without monitoring user access to personal health information records and auditing change logs for various types of data. In banking, where the integrity of information is paramount, the risk of unconstrained object-level access for individual users presents the risk of every teller being able to access and modify any and all accounts. Sarbanes-Oxley and Bill 198 compliance simply become an impossibility for the finance sector and public companies without proper auditing and exit programs in place.
How should companies address security for the i?
The first step to take in considering the security posture of the IBM i environment is its alignment with the organization’s overall risk management strategy. A top-down, management driven governance, risk and compliance approach dictates that whatever gaps are identified during such an alignment exercise are systematically addressed by a remediation plan involving security best practices:
- Whether internal or external, compliance requirements must be identified and addressed.
- Standardize on a mature approach and leverage existing best practices.
- Adopt continuous or at least recurring security assessments to maintain visibility into risk.
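The recurring-assessment step can start as simply as a table of named checks run against a snapshot of system settings. A minimal sketch; the snapshot keys are illustrative stand-ins for values you would actually pull from the system (system values such as QAUDCTL, object authorities, exit point registrations), not a real API:

```python
# Each check is a (name, predicate) pair evaluated against a snapshot dict.
# The keys below are hypothetical stand-ins for real system queries.
CHECKS = [
    ("*PUBLIC excluded on production libraries",
     lambda s: s["public_authority"] == "*EXCLUDE"),
    ("system auditing active (QAUDCTL)",
     lambda s: s["qaudctl"] != "*NONE"),
    ("no profiles with default passwords",
     lambda s: s["default_password_count"] == 0),
]

def run_assessment(snapshot: dict) -> dict[str, bool]:
    """Return pass/fail per check; the failures become the remediation backlog."""
    return {name: bool(check(snapshot)) for name, check in CHECKS}
```

Rerunning the same checks on a schedule, rather than once, is what turns a point-in-time audit into the continuous visibility the third bullet calls for.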
The bottom line: standardize. Rely on audit processes aligned with IBM definitions. Borrow from existing guidance such as the IBM Security Framework, the IBM i Security Reference, Redbooks, and the Security Administration and Compliance documentation.
There are well over 40 facets of security on the i to assess and map out initially, from authorization lists and object authority to exit point security, auditing and layered countermeasures. The simplicity of the platform is key to its robustness, but it also invites assumptions that lead to a false sense of security.