Threat #10: Laws and Regulations
Laws and regulations? Aren't they supposed to help—not hurt—the IT data security world? In theory, yes, but sometimes they direct focus away from the correct, more secure solution so that the letter of the law can be followed. Take the notification laws passed by California and other states: The law says that individuals do not have to be notified when their data is stolen if the data is encrypted. Does encryption make the data more secure? On the surface, the answer is yes. But looking deeper, encryption may not make the data secure at all. The law specifies neither the type of encryption that should be used nor the key management methods that need to be employed. A weak or "home-grown" encryption scheme can leave data easily decryptable, as can a poorly implemented key management scheme. Yet, instead of focusing on a sound data access control policy and implementation, which has a much better chance of actually keeping the data secure, administrators are focusing on encrypting the data so that they don't have to worry about the notification laws.
In addition, federal laws often override more stringent state laws and actually make us more vulnerable than if the federal law had not been adopted. Once again, let's look at the notification laws. At the time of this writing, at least 17 states have adopted some form of notification law, meaning that people whose private data has been lost or stolen must be notified. Even though several attempts have been made to introduce a notification law into the U.S. Congress, none have succeeded, primarily due to lobbying from large businesses. Because Congress has to "please" so many special interest groups, it's likely that a federal notification law—if one is ever passed—will not be very strict. Also, it will probably preempt any corresponding state law, which would take us backward from a privacy protection point of view. So the fact that a federal law hasn't yet been passed may actually be a good thing!
Threat #9: Focused on Passing an Audit, Not on a Secure Implementation
Threat #10 directly feeds into Threat #9. Some administrators are so focused on passing their various audits that they neglect or don't care about the implementation of their security configuration. I have seen administrators of systems running third-party applications whose security scheme was to have all application users be a member of the owning profile. This scheme—as most of you are already aware—gives all application users ownership of all application objects, including data files.
In addition, in one case, *PUBLIC authority had been left at the system default of *CHANGE. These data files contained financial information, so the threat to the privacy and integrity of the data was significant (a polite way of saying "huge"). The administrator decided not to take action on this vulnerability because the third-party vendor assured the company that its security was sound (referring to the menu-based security) and that it had passed a SAS 70 audit (never mind that a SAS 70 audit doesn't mandate sound data security practices). This assurance was sufficient for the organization's internal auditor, so no audit findings were issued. Because the company passed its audit, the administrator consciously chose to do nothing about the vulnerabilities on the system, even though those vulnerabilities had been explained in great detail. I found this appalling and very scary. I can only hope that the administrators of the systems containing my financial information don't take this approach.
The message to take away from this threat? Passing an audit does not necessarily mean that your system and data are secure.
Threat #8: Defensiveness
Unfortunately, I believe that many security vulnerabilities go unresolved because a lot of administrators and programmers are defensive. I've seen administrators become quite defensive when prompted to address the security vulnerabilities on their systems, and that defensiveness often paralyzes them. It's as if they fear that taking action now will get them fired for not taking action in the past.
Well, here's my opinion on that matter.
Back when many administrators were being trained, we were in the green-screen, AS/400 days when terminals were hard-wired to the system. At that time, it was quite easy to secure the system. Basically, all one had to do was configure the users to be limited-capability users and use the application's menu security to confine them to the appropriate menu options. Administrators never had to be security experts, nor did they have to spend a lot of time thinking about security. Fortunately (or unfortunately, depending on your point of view), the system became more open and complex, and security was suddenly not so easy. In addition to the complexities of security, the system became more complex to manage in general, and administrators had to wear even more hats—performance, availability, message management, job scheduling, etc. But security was rarely called to the top of the heap.
Now, with various laws and regulations and the ever-increasing threat of security breaches, security is finally a priority. Most administrators welcome any help that makes them more efficient and knowledgeable about security. They realize that their system's security configuration has vulnerabilities, and they are looking for ways to resolve those issues before a breach occurs. However, some administrators refuse the help, saying that if they inform their bosses about the vulnerabilities, they'll be fired. When I was a manager, I greatly preferred hearing about problems early, and I became quite irritated when I had to resolve an issue that could have been prevented had someone told me about it. I realize there are some unreasonable bosses in this world, but if you are one of those administrators who refuses to admit there are vulnerabilities on your system, I encourage you to take a proactive approach to security and think about the actions that management will most certainly take should a breach occur on a system they thought you had secured. Dropping the defensiveness may mean dropping a bit of pride as well. It is difficult for some administrators to admit that there are issues on their systems, but this is one case where pride could truly come before a fall.
Threat #7: New Technologies
I believe that the scariest part about new technologies is that they are completely unproven. Their real threats are unknown, and therefore it's difficult to know how to eliminate, or at least reduce, the risk associated with them. Take Radio Frequency Identification (RFID) technology, for example. On the surface, it seems harmless. But when embedded in passports, it appears a bit scary. The government is supposedly taking steps to safeguard our privacy with this new technology, but quite frankly, I'm a bit skeptical that it knows what the full impact on our privacy will be.
The other issue with new technologies is that their risk could cause them to be listed as my #1 threat or my #10 threat, but I don't really know where to rank these technologies until they mature. The message to take away from this threat? For security's sake, let a technology mature before you jump on its bandwagon. Make sure you're comfortable that you understand it well enough to know the associated risks and can mitigate them, when necessary.
Threat #6: Lack of Education and Social Engineering
These two threats go hand in hand, so I decided to combine them. We, as technical professionals, can forget that the majority of computer users remain ignorant of the threats posed by viruses, phishing, and spyware. I believe that it is our duty to help educate those around us—whether it is the end users at our place of employment or our friends and family.
I was reminded of this as I visited with my niece over her Thanksgiving break. She was showing me her new laptop when a notification popped up that her free anti-virus trial period was about to expire. I said, "You're going to renew that, right?" When her answer came back as "Probably," I immediately informed her that not renewing was not an option and that I'd pay for the renewal if money was the issue! Now perhaps she knew that a wishy-washy response would cause her Aunt Carol to pay for the subscription, but I believe that the cause of her noncommittal response was lack of education. So I proceeded to inform her of the importance of this type of software, and we renewed the subscription (and yes, I paid for it!). My niece is a nursing student—hardly what you'd call dumb but certainly not what you'd call technically literate. Stepping back and looking at the situation, I realized that I am surrounded by many people—just like my niece—who could use a bit of education on the dangers of viruses, phishing (currently the most popular form of social engineering), and spyware.
The only reason viruses, spyware, and especially phishing exploits are successful is because people are uninformed. They open attachments or respond to an "official notification" from a bank or amazon.com or PayPal because they don't know they shouldn't; they've never been told that's a dangerous thing to do. They aren't technical, so they don't read all the warnings or hear of all of the dangers like we do. So who better to educate them than us? I encourage you to help reduce this threat. Educate your friends, family, and coworkers! You might even consider giving an anti-virus or anti-spyware subscription as a Christmas present for that hard-to-buy-for friend or family member! (And now you know what I'm giving my nieces and nephews for Christmas!)
Threat #5: Unsecured Development Systems
Most organizations understand the need to secure their production systems. What they fail to recognize is the need to treat their development system as a production system and secure it with as much care. Why is this required? Because development systems almost always contain production data. Developers need data to test applications, and what better data to test with than the real thing? While this might not be too much of an issue if the data is inventory control data, it quickly becomes a significant issue when the data contains HR, payroll, or credit card information. Why is this an issue? Because developers typically have more access and, often, more capabilities on a development system than on a production system. Thus, they often have inappropriate access to very sensitive data. This is a threat because developers know where the sensitive data resides, and, if the system is not secured properly or a developer has enough authority to alter the security and audit configuration, they can obtain that data without being tracked.
The resolution to this threat is to secure the development systems and reduce the capabilities of the developers. In other words, treat the development system as a production system. Another step that you can take is to "neutralize" the data so that it is no longer "real" data and is worthless except for testing the application.
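As a rough sketch of that "neutralize" step (all library, file, and column names here are hypothetical, so adjust them to your environment), the production file can be copied to the development box and then scrubbed before developers ever touch it:

```
/* Copy the production file into the development library            */
/* (PRODLIB, DEVLIB, and EMPMAST are hypothetical names).           */
CPYF FROMFILE(PRODLIB/EMPMAST) TOFILE(DEVLIB/EMPMAST) MBROPT(*REPLACE) CRTFILE(*YES)

/* Then overwrite the sensitive columns in the copy with dummy      */
/* values, for example via an SQL statement such as:                */
/*   UPDATE DEVLIB/EMPMAST SET SSN = '000000000', SALARY = 0        */
```

Once the copy is scrubbed, it is worthless to a thief but still perfectly usable for testing the application's logic.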
Threat #4: Depending Solely on Exit Programs to Secure Your Data
For those of you who think your system and data are locked down tight because you've got exit programs in place, you may want to read this. Do you realize that exit points don't exist for all entrances onto the system—like sockets or a Web (HTTP) application? And have you considered that an exit program will not get called when a user accesses data from a command line? But, you argue, most of your users are limited-capability users. I understand that, but the users who have the most knowledge of your system and who possess the knowledge of what to do with the data typically have command line access. In other words, exit programs won't protect against inappropriate data access by users (such as DBAs, system analysts, support personnel, programmers, etc.) who have a legitimate business need for command line access.
I'm not saying that you should toss your exit point software; just be aware of its appropriate uses. If you need to log or monitor your system's network traffic (FTP, ODBC, etc.) or cause an alert to occur when someone tries to download a specific file, exit programs are the only way to do that. Or if you want to allow certain persons to use ODBC but not allow them to use FTP, exit points are called for. Or if you take the multiple layer of defense strategy, exit programs are certainly one layer you may want to consider adding.
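For the logging and monitoring uses just described, an exit program is attached to a server's exit point through the system's registration facility. As a hedged sketch only (MYLIB/FTPEXIT is an assumed program name, and you should verify the exit point and format names for your release), registering a logger for the FTP server request-validation exit point looks roughly like this:

```
/* Register a logging/filtering exit program for FTP server         */
/* requests. MYLIB/FTPEXIT is a hypothetical program; confirm the   */
/* exit point and format names (WRKREGINF) for your release.        */
ADDEXITPGM EXITPNT(QIBM_QTMF_SERVER_REQ) FORMAT(VLRQ0100) PGMNBR(1) PGM(MYLIB/FTPEXIT)
```

Note that this program is invoked only for FTP server requests; a user reaching the same file through a command line or a socket program never passes through it, which is exactly the limitation this threat describes.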
However, if you are using exit programs as your sole source of access control for files containing sensitive data, I encourage you to reconsider that strategy.
And that leads me to the next threat....
Threat #3: Lack of Use of i5/OS and OS/400 Object Security
Object-level security has been around since the System/38 days. However, most administrators and vendors ignored this integrated feature of the operating system because it was so easy to secure the system (see Threat #8). Thus, most systems I see today have not taken advantage of this very powerful feature. Object-level security is the only access control method that is always available no matter how an object (e.g., data file) is accessed—command line, socket program, Web application, FTP, JDBC, etc.
I strongly disagree with those who argue that object-level security is too hard to implement or that it needs to be implemented for every object on the system. Neither statement is true. Significant security can be achieved by securing the libraries of the applications containing sensitive data. Take your payroll application. Who needs to access this application data? Users in the payroll department, maybe HR, and possibly a timekeeping application, right? So set *PUBLIC authority to *EXCLUDE and grant HR, the payroll group, and the timekeeping application profile *USE to the payroll libraries. You have now walled off a very sensitive application from the majority of your users. Yes, you can take it to the next level and additionally secure the objects in the payroll libraries, but that's not necessary when you're first getting started with object-level security. You can easily attack that level of detail after you are comfortable that the object-level security on the library is working as required.
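In CL terms, that two-step lockdown might look like the following sketch (PAYLIB, HRGRP, PAYGRP, and TIMEAPP are hypothetical names; substitute your own libraries and profiles):

```
/* Wall off the payroll library: exclude everyone by default.       */
/* PAYLIB is a hypothetical library name.                           */
GRTOBJAUT OBJ(PAYLIB) OBJTYPE(*LIB) USER(*PUBLIC) AUT(*EXCLUDE)

/* Then grant *USE to only the profiles with a business need:       */
/* the HR group, the payroll group, and the timekeeping             */
/* application profile (all hypothetical names).                    */
GRTOBJAUT OBJ(PAYLIB) OBJTYPE(*LIB) USER(HRGRP PAYGRP TIMEAPP) AUT(*USE)
```

Because the library-level *EXCLUDE stops a user before any object inside the library is ever reached, this one change walls off the application regardless of whether the access comes through a menu, FTP, ODBC, or a command line.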
In addition, several of today's laws and regulations require exclusionary access control—in other words, the use of object-level security. For example, the Payment Card Industry's (PCI) Data Security Standards require exclusion-based access. That is, the access controls on files containing credit card information must be configured such that only users with a direct business requirement be allowed to access the file. Translated into i5/OS terms, that means that it is insufficient to rely solely on an application's "menu security" for access control. The application's security scheme for files containing credit card information must be set to *PUBLIC *EXCLUDE, and only users who should be allowed to access the file outside of the scope of the application (such as through ODBC) may be given additional authority.
Threat #2: No Policy
Why do I think a written and approved security policy is so important? Because it is proof of your organization's commitment to security. It is also proof that the business has examined its requirements and has determined its security strategy. Lack of a policy is a threat because it shows lack of commitment and lack of understanding on the part of management that security is a vital business function, not a one-time IT project that is completed and never addressed again. A sound security policy is the foundation that needs to be in place to ensure commitment. In addition, because a security policy serves as a legal document (of the organization's commitment to security and its security practices), one can only imagine what judgment might be meted out by a court of law should a breach occur and the defendant company not have a formal security policy in place.
And this leads me to the #1 threat today....
Threat #1: Apathy
I find it incredibly scary when I encounter administrators and management who just don't care whether exposures exist in their systems or their network or who haven't implemented a virus scanning or patch strategy for their PCs. Their attitude is "It hasn't happened yet, so why do I need to do anything about my security implementation?" My response to that attitude is this: First, you aren't paying attention to your security configuration, so how do you actually know that nothing has happened? Second, how does an event-free past guarantee an event-free future? New technologies expose data where it wasn't exposed before, new viruses and spyware are introduced almost daily, employees' attitudes change, new employees with increased technical skills are hired, and production applications are deleted deliberately or accidentally because someone has too much authority. Again, I ask you: How does an event-free past guarantee an event-free future?
I don't understand apathy because I am proactive by nature. I think apathy is our #1 threat because that could be the attitude of the administrator at my bank, at my insurance company, or at the clearing house that processes my credit card transactions. "But," you say, "there are laws and regulations protecting that data." True. But the attitude of apathy allows these people to ignore the laws and regulations, thinking they won't get caught and won't get audited. So my data remains unsecured. I've seen the attitude. It exists even in the i5/OS world. And as long as it exists, my data—and yours—will remain exposed and vulnerable.
Carol Woodbury is co-founder of SkyView Partners, Inc., a firm specializing in security compliance management and assessment software as well as security services. Carol is the former chief security architect for AS/400 for IBM in Rochester, Minnesota, and has specialized in security architecture, design, and consulting for more than 15 years. Carol speaks around the world on a variety of security topics and is coauthor of the book Experts' Guide to OS/400 and i5/OS Security.