Tuesday, August 21, 2007

Security: Security Requirements Engineering

A lack of security requirements leads to insecure software; proper planning, however, can enhance the security of a software development lifecycle

Developers are often blamed for software security mishaps and punished with lost wages or embarrassment on walls of shame.1 At Foundstone, we believe developers, for the most part, don’t write insecure code intentionally or out of negligence; they do so because they haven’t been taught any better and don’t receive adequate help and guidance from other stakeholders.

When dealing with software security, a common failure is to focus too much on the development phase of the software development lifecycle and not enough on the others. This article, therefore, focuses on security requirements engineering, one of three key support activities that can help tremendously in improving the security of projects churned out by your development teams. The other two areas—security acceptance testing and security knowledge management—will be covered in a future article.

The security community and industry have evolved tremendously since the late ’80s, when the first “security attack” was perpetrated in the form of the Morris Worm. This led to the creation of the Computer Emergency Response Team, or CERT, as it is popularly known. For the next decade or so, the industry focused on securing the network and, to a lesser extent, the host. As a result, the major security technologies of that era were the devices and software we almost take for granted today—firewalls, intrusion detection systems, and virus scanners.

However, as the Internet exploded and the World Wide Web went from being an academic network of computers to a platform upon which business was done, the threats also evolved. Now the attackers began to attack not just the network and the host, but the applications that sat on top of them as well. In many ways, these applications represented the crown jewels—the confidential data, the precious intellectual property and business intelligence that organizations and indeed consumers did not want to lose.

Increasingly, organizations succeeded in getting their “ducks in a row” on the network and host side, as tried and tested solutions became available. At the same time, however, development teams were struggling with securing the application. Enter new methods of attack such as buffer overflow, SQL injection, and cross-site scripting; the list could go on.

So how have we dealt with this problem over the last few years? As one would expect, the first attempt was a variation on not dealing with it at all: developers released software, hoped for the best, and then fixed issues as they were publicly reported. Next came the phase of penetration testing a few weeks or days before going into production. This, again, provided little time to effectively fix the issues discovered. As an industry, we continued to evolve, and the next phase was to go hunting through code for the common classes of vulnerabilities that were in the news—whether these were buffer overflows in the ’90s, or common Web application vulnerabilities more recently.

The more strategic organizations at this point invested in software security training and in building policies, such as language-specific coding standards, to aid their developers in dealing with the problem and to prevent the introduction of future vulnerabilities.

From the beginning, the focus has been largely on developers and the development phase, only rarely touching on secure design. As a consequence of this focus, it has become almost instinctive to blame developers and hold them responsible for vulnerabilities in the application. If something goes wrong, it must be the developer’s fault—especially now that we have a firewall and a secure coding standard!

Holistic Software Security

Unfortunately, it appears that as a community we, the software security folks, have not learned as much as we should have from the decades of research into software engineering. If you treat a security vulnerability as a bug first and a security issue second, you can quickly adapt many of the lessons learned about software quality to improving the security of software applications.

Software security must be viewed holistically. It is achieved through a combination of effective people, processes and technology, with none of these three capable of fully replacing the other two. This also means that, as with software quality in general, software security requires that we focus on it throughout the application’s lifecycle—or from cradle to grave, as some like to say. Unfortunately, thus far most of the effort has focused on activities such as application penetration testing, security code reviews and, to a lesser extent, on threat modeling.

While all of the aforementioned activities are critical to improving the security of your applications, they are by no means the only ones. Unfortunately, as both a community at large and individuals looking to tackle the software security problem in our development teams, we have tended to ignore the non-developer-focused activities. In this article we present one of these activities.

Before we get too far, it helps to define a common frame of reference for viewing software security problems and solutions. Our Foundstone Security Frame2 helps us better prepare going into a software development project and perform better root-cause analysis when faced with vulnerabilities. (See Table 1.) In the context of this article, we will use it to help us be more efficient, effective, and thorough.

Security Requirements Are Key

One of the most ignored parts of a security-enhanced software development lifecycle is the security requirements engineering process, and one of the prime reasons for this oversight is that security is assumed to be a technical issue and therefore best handled during architecture and design or, better still, during implementation. Since software requirements are often written by non-technical business analysts, this is a common conclusion.

The problem with this approach, as any experienced software professional will tell you, is that software whose requirements are not elicited, enumerated, and well-documented will most likely be lacking in quality. Without documented security requirements, developers have no specific target for building security into the design and implementation, quality assurance folks have no benchmark to validate the software against, and traceability—a key software engineering attribute—is unachievable. In fact, it is hard even to build a good threat model without a clear idea of the security requirements.

This is a well-understood concept in the general field of software engineering. A lot of research3 has been performed on how to effectively elicit, validate, and document software requirements. Further, most modern System Development Life Cycle (SDLC) support tools already provide some mechanism for documenting requirements.4 Hence, it should not be too difficult to extend these systems and the process itself to include security requirements.
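
To illustrate how small that extension can be, the sketch below models a requirement record the way a generic tracking tool might, with a category field and traceability links so that security requirements sit alongside functional ones. The field names and identifiers are our own illustrative assumptions, not those of any particular tool.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Requirement:
        """A single requirement as it might be stored in an SDLC tracking tool (illustrative)."""
        req_id: str                                            # traceability identifier, e.g. "SEC-001"
        category: str                                          # "functional" or "security"
        statement: str                                         # the requirement text itself
        drivers: List[str] = field(default_factory=list)       # e.g. ["GLBA", "PCI DSS"]
        verified_by: List[str] = field(default_factory=list)   # test case IDs, for traceability

    # A security requirement is recorded exactly like a functional one,
    # which is what makes traceability and acceptance testing possible.
    reqs = [
        Requirement("FUN-001", "functional", "Display loan status on the dashboard."),
        Requirement("SEC-001", "security",
                    "Encrypt personally identifiable information at rest using an approved algorithm.",
                    drivers=["GLBA"], verified_by=["TC-ENC-01"]),
    ]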

The challenge, however, as mentioned above, is that most organizations we work with are used to thinking solely about functional requirements—requirements that the system and business analysts writing them can put their arms around. (For instance: Should the application have this particular widget, or that one? And how should it respond to the click of a button in the top right corner?) The non-functional requirements, on the other hand, are often marked as “N/A.” Our findings have been that this is not necessarily because they are considered unimportant, but rather because they are assumed to be de facto requirements—“the developers should know better than to build a slow or insecure or unreliable system.” The assumption always seems to be that these requirements are obvious and hence don’t need to be documented.

On examining this problem a little bit further, we discovered that to a large extent the difficulties lay in the lack of awareness and knowledge of the people writing the requirements. The non-functional requirements can be very technical—consider the specification of the encryption algorithm, cipher mode, key lengths and rotation parameters. Defining requirements around all of those would typically require a detailed understanding of the mechanisms around cryptography—not something that is typically found in the job description of a business analyst.

As a solution to this issue, we present a template-driven approach designed specifically to help the non-technical stakeholder define very technical security requirements. Although creating the templates does involve some effort, we have found it to be tremendously effective in ensuring that security requirements are documented (and not just with “N/A”!) as well as implemented and tested.
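
As a rough sketch of what such a template might look like, consider the following: a security specialist fills in the technical detail once (the algorithms, key lengths, and rotation periods shown here are illustrative assumptions only), and the analyst merely selects the data classification that applies.

    # Illustrative template: a security specialist records the technical detail once;
    # a business analyst only has to pick the data classification that applies.
    CRYPTO_REQUIREMENT_TEMPLATES = {
        "confidential": {
            "algorithm": "AES",
            "mode": "GCM",
            "key_length_bits": 256,
            "key_rotation_days": 90,
        },
        "internal": {
            "algorithm": "AES",
            "mode": "CBC",
            "key_length_bits": 128,
            "key_rotation_days": 365,
        },
    }

    def crypto_requirement(data_classification: str) -> str:
        """Render a ready-to-paste requirement statement from the template."""
        t = CRYPTO_REQUIREMENT_TEMPLATES[data_classification]
        return (f"Data classified as '{data_classification}' must be encrypted at rest "
                f"using {t['algorithm']}-{t['key_length_bits']} in {t['mode']} mode, "
                f"with keys rotated every {t['key_rotation_days']} days.")

    print(crypto_requirement("confidential"))

The point is not the particular values, but that the cryptographic expertise is captured once, in the template, rather than expected of every analyst.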

The first step on this path is for an organization or team (depending on the size and variety of applications involved) to identify all the drivers for security requirements that would, could, and should influence development. In our experience, you will most often see a lot of commonality among the various applications developed within the organization or team, and hence we attempt to leverage that commonality to gain efficiencies across multiple projects.

In our experience it is best to think about these drivers in terms of the following categories. As mentioned above, most of them will influence many, if not all, of the applications churned out within an organization.

  • Regulatory Compliance:5 This involves specific requirements that would be mandated by various governmental agencies. Depending on the application’s scope as well as the legal environment within which the organization operates, a number of regulations may be relevant. Some of these include:

    • Sarbanes-Oxley, Section 404
    • Health Insurance Portability and Accountability Act
    • Payment Card Industry Data Security Standard
    • Gramm-Leach-Bliley Act
    • SB 1386 and other state notification laws
    • Basel II
    • Federal Information Security Management Act
    • EU Data Protection Directive
    • Children’s Online Privacy Protection Act
    • Local key escrow laws
  • Industry regulations and standards: These include standards that are specific to an industry, such as financial services. This category in our classification is also set up to include standards bodies such as ISO and the norms they define. Examples include:

    • ISO 17799
    • FFIEC Information Technology Examination Handbook6
    • SCADA security7
    • OWASP standards8
    • OASIS9
  • Company policies: Most organizations that we work with have a slew of internal policies that should and could affect the development of an application. Among the most common are the following:

    • Privacy policies
    • Coding standards
    • Patching policies
    • Data classification policies
    • Information security policies
    • Acceptable use policies
    • Export control
    • Open source usage
    • Results from previous security audits
  • Security features: Most applications will have some form of security feature: for instance, authentication and authorization models that replicate real-world, role-based access control, or administrative interfaces that will be used for user management, including provisioning and de-provisioning.

In some cases, it is best to work with the legal department and internal auditing to arrive at a list of regulations relevant to a given application. Once that list has been defined, the next step is to examine each of these regulations from a legal viewpoint, as well as the viewpoint of a software development expert. The aim is to convert the list of legal requirements to a set of core technical requirements.

The Foundstone Security Frame can come in extremely handy here. For each of the relevant drivers, consider the various categories in the Security Frame and how they might be impacted. For instance, if your organization is regulated by the Gramm-Leach-Bliley Act (GLBA), privacy of personally identifiable information (PII) is absolutely critical. This in turn can have implications across multiple Security Frame categories, not the least of which is Data Protection in Storage & Transit. The outcome of this step should essentially be a set of specific requirements along the various Security Frame categories that would satisfy each of the applicable drivers defined above. It is also vital at this stage to study these requirements for overlap, so that redundant requirements are consolidated rather than satisfied twice.
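
The output of this step can be captured in a simple mapping such as the illustrative sketch below; the requirement wording and the choice of drivers are assumptions made purely for the example, not a definitive interpretation of any regulation.

    # Illustrative mapping from a requirement driver to Security Frame categories
    # and the core technical requirements derived from it.
    DRIVER_TO_REQUIREMENTS = {
        "GLBA": {
            "Data Protection in Storage & Transit": [
                "Encrypt PII at rest and in transit.",
                "Mask PII in logs and error messages.",
            ],
            "Auditing & Logging": [
                "Record all access to PII with user, timestamp, and action.",
            ],
        },
        "PCI DSS": {
            "Data Protection in Storage & Transit": [
                "Never store full track data or CVV2 values.",
            ],
            "Authentication": [
                "Require unique IDs for every user with access to cardholder data.",
            ],
        },
    }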

A parallel step in this requirements process is to classify the applications in terms of how they are impacted by the drivers. This is best done by creating a large matrix, with the drivers forming the columns and the application set forming the rows. Classification then is the task of checking the appropriate boxes depending on whether, based on legal and other opinions, an application is affected by a specific driver.
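
One lightweight way to keep such a matrix is sketched below; the application names and the checked boxes are hypothetical.

    # Drivers form the columns, applications form the rows; True means that,
    # based on legal and other opinions, the driver applies to that application.
    DRIVERS = ["GLBA", "PCI DSS", "SOX 404", "Privacy Policy"]

    APPLICATION_MATRIX = {
        "online-loan-processing": {"GLBA": True,  "PCI DSS": False, "SOX 404": True,  "Privacy Policy": True},
        "marketing-site":         {"GLBA": False, "PCI DSS": False, "SOX 404": False, "Privacy Policy": True},
        "payment-gateway":        {"GLBA": False, "PCI DSS": True,  "SOX 404": True,  "Privacy Policy": True},
    }

    def drivers_for(application: str):
        """Return the drivers that apply to a given application."""
        return [d for d, applies in APPLICATION_MATRIX[application].items() if applies]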

As a result of the two parallel steps mentioned above, the team should now have a specific set of technical requirements for each application based on its requirement drivers. All of this effort is intended to be performed only once and then revisited periodically. In our experience, it is very rare that these drivers change with each application release, or even particularly frequently, because applications tend to evolve very slowly with regard to them. Further, as mentioned above, there is much opportunity to leverage commonality across applications, since many of them typically operate within a similar driver environment.

With this universal set of requirements defined a priori, the specific set of requirements for each application release can be drawn from it as the release is defined. As part of the process, the data classification and privacy policies can help identify which data elements handled by the application are affected by the drivers. It is also important to consider features that might be added in this release, and whether they will be affected as well. Based on these inputs and the universal set of requirements, a subset of requirements relevant to this specific release of this specific application is obtained.

The person formulating these requirements now does not need to be an expert in security or in any of the Security Frame categories; he or she can simply check the appropriate boxes to obtain a set of requirements. In fact, this last step can easily be automated through a template or lightweight application that references all the relevant policies as well as the universal set of requirements, considers the data elements in use, and produces a set of technical requirements leveraging encryption, access control, and other security mechanisms. These can then literally be copy-pasted into the master requirements list.
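
Continuing the illustrative sketches above, the selection step itself might look something like this; the PII element names and the data-classification shortcut are assumptions made for the example.

    # A sketch of the per-release selection step: given one application and the data
    # elements a new release will handle, pick the applicable subset of the universal
    # requirements (using the application/driver matrix and the driver-to-requirement
    # mapping from the earlier sketches) and flag any PII that needs attention.
    def requirements_for_release(application, new_data_elements,
                                 application_matrix, driver_to_requirements,
                                 pii_elements=frozenset({"ssn", "dob", "account_number"})):
        """Return (requirement subset, flagged PII elements) for one release."""
        applicable = [d for d, applies in application_matrix[application].items() if applies]
        subset = []
        for driver in applicable:
            for category, reqs in driver_to_requirements.get(driver, {}).items():
                for req in reqs:
                    subset.append((driver, category, req))
        # Flag newly added data elements that the data classification policy marks as PII.
        flagged = sorted(set(new_data_elements) & pii_elements)
        return subset, flagged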

To wrap up, let us consider an illustrative example: an online loan processing application. Such an application will obviously make extensive use of personally identifiable information and would therefore be determined to be affected by the GLBA driver. This in turn defines specific requirements around confidentiality, integrity, availability, and access to data, as well as audit trails that monitor and report on such access.

Now, consider that a new feature is being added that e-mails the result of the loan decision to the customer. When a business analyst is defining the requirements around this new feature, he or she would need to consider all of the data elements that would be part of this e-mail, the transport mechanism used to send it, and the authentication around it. Based on business need and security considerations, the team can then decide to omit certain data elements, or perhaps to use a secure e-mail solution.
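
To make the loan example concrete, the earlier sketches could be applied to this feature roughly as follows; the data element names are again hypothetical, and the structures used are the ones defined in the sketches above.

    # Hypothetical use for the new e-mail feature of the loan application: the analyst
    # lists the data elements the e-mail would carry, and the sketch returns the
    # applicable requirements plus any PII elements that warrant a second look.
    subset, flagged = requirements_for_release(
        application="online-loan-processing",
        new_data_elements=["customer_name", "loan_decision", "ssn"],
        application_matrix=APPLICATION_MATRIX,
        driver_to_requirements=DRIVER_TO_REQUIREMENTS,
    )

    for driver, category, req in subset:
        print(f"[{driver}] {category}: {req}")

    if flagged:
        # e.g. drop 'ssn' from the e-mail, or require a secure e-mail solution
        print("PII elements to reconsider or protect:", flagged)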

We believe security requirements engineering must be part of any security-enhanced software development lifecycle that is to succeed in improving the security of your applications. This is non-traditional, in the sense that it does not merely go after the development phase of the lifecycle. However, in our experience helping a number of large organizations implement a secure development lifecycle, we have found that without getting this part of the puzzle correct, your team will not achieve the best possible results from any investment in software security.
