Security must be addressed at all stages of development to achieve comprehensive software security, and individuals throughout the organization must contribute. To meet customer standards, a comprehensive approach including executive-level support, clear requirements, and training for development personnel is essential. [FII-SCF-024-TDA-06]
Application Security Control Definition
An effective secure software development program involves identifying and managing Application Security Controls (ASCs), security requirements, and security issues. Technical controls that are clear, actionable, and continuously refined to reflect development processes and changes in the threat environment form the foundation for SDL tools and processes. The practices outlined in this document and the application security controls they drive help identify weaknesses in software design or implementation, which can expose the application, environment, or Netspective Communications LLC to a level of risk when exploited.
Actively Manage Application Security Controls
We should define application security controls throughout an application’s lifecycle to respond to changing business requirements and an evolving threat environment, regardless of the development methodology used.
To identify the necessary security requirements, we should use inputs such as secure design principles described in the following section, feedback from an established vulnerability management program, and input from other stakeholders like a compliance team.
At a high level, we should follow this workflow:
- Identify threats, risks, and compliance drivers faced by the application.
- Identify appropriate security requirements to address those threats and risks.
- Communicate the security requirements to the appropriate implementation teams.
- Validate that each security requirement has been implemented.
- Audit, if necessary, to demonstrate compliance with any applicable policies or regulations.
- We should track each identified security requirement through implementation and verification, and manage the controls as structured data in an Application Development Lifecycle Management (ADLM) system instead of an unstructured document.
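As an illustration, a control tracked as structured data (rather than as prose in a document) might look like the following sketch; the field names and status values are hypothetical, not a prescribed ADLM schema:

```python
from dataclasses import dataclass, field
from enum import Enum

class Status(Enum):
    # Stages mirroring the workflow above: identify, communicate,
    # implement, validate.
    IDENTIFIED = "identified"
    COMMUNICATED = "communicated"
    IMPLEMENTED = "implemented"
    VALIDATED = "validated"

@dataclass
class SecurityControl:
    # Structured record for one Application Security Control (ASC).
    control_id: str
    description: str
    driver: str               # threat, risk, or compliance driver
    owner_team: str           # implementation team it was communicated to
    status: Status = Status.IDENTIFIED
    evidence: list = field(default_factory=list)  # validation/audit artifacts
```

Because each control is a record rather than a paragraph, an ADLM system can report on unimplemented or unvalidated controls directly.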
Design
The development team must incorporate security features in software to meet internal security policies or comply with external laws or regulations. The software architecture and design must also be capable of resisting known threats based on the intended operational environment. By combining the process of threat modeling with appropriate consideration of security requirements and secure design principles, the development team can identify necessary security features to protect data and meet the system’s users’ requirements.
Secure Design Principles
- Economy of mechanism: Keep the system design simple and small.
- Fail-safe defaults: Base access decisions on permissions instead of exclusion.
- Complete mediation: Check authorization for every access to every object.
- Least privilege: Use the least set of privileges necessary for each program and user.
- Least common mechanism: Minimize common mechanisms for multiple users.
- Psychological acceptability: Design the human interface for ease of use to encourage correct use of protection mechanisms.
- Compromise recording: Use mechanisms to record compromises of information.
- Defense in depth: Design the system to resist attacks even if a single security vulnerability is discovered or a single security feature is bypassed.
- Fail securely: Design the system to remain secure even if it encounters an error or crashes.
- Design for updating: Plan for the safe and reliable installation of security updates, as no system is free from security vulnerabilities forever.
Threat Modeling
Threat modeling is a security-focused design activity and a fundamental practice in the process of building trusted technology. Performing threat modeling early in the development lifecycle, before committing code, has proven to be one of the best ‘return on investment’ activities for identifying and addressing design flaws.
Perform Architectural and Design Reviews
An organization should incorporate architectural and design review into its security program. A design flaw that permits a malicious actor to entirely compromise a system and its data can be catastrophic and challenging to fix.
Develop an Encryption Strategy
Encryption protects data from unintended disclosure or alteration, whether it is being stored or transmitted. An encryption strategy should include several key components:
Define what data to protect: At minimum, all internet traffic should be encrypted while in transit, and private network traffic should also be encrypted unless there is a compelling reason not to. Criteria should be developed for what types of data to encrypt and what mechanisms are acceptable for protecting that data when it is stored in files, cloud storage, databases, or other persistent locations.
Designate mechanisms for encryption: There are numerous encryption algorithms, key lengths, cipher modes, key and initialization vector generation techniques, and cryptographic libraries implementing some or all of these functions. Choosing the wrong option for any one of these aspects can undermine protection. Only industry-vetted encryption libraries should be used, rather than custom internal implementations. Only strong, unbroken algorithms, key lengths, and cipher modes (for the specific scenarios) should be allowed, and for encryption in transit, only strong versions of the encryption protocol should be permitted. When encrypting data at rest, the method of deployment must also be considered. Solutions such as disk encryption, OS credential/key managers, and database transparent encryption are relatively easy to deploy and provide protection against offline attacks.
Decide on a key and certificate management solution: Encrypting data is only one half of an encryption strategy. The other half is the solution for managing encryption keys and certificates. Every party that can access an encryption key or certificate can access the encrypted data. Therefore, a key and certificate management solution should control who has access (whether it is a person or a service) and provide a clear audit log of that access.
Avoid hard-coding encryption keys (or other secrets) within source code because it makes them very vulnerable. Implement an encryption strategy with cryptographic agility in mind, which means specifying how applications and services should implement encryption to enable the transition to new cryptographic mechanisms, libraries, and keys when needed.
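As a small illustration of keeping secrets out of source code, an application might load its key from the environment at startup; the variable name `APP_ENCRYPTION_KEY` and the hex encoding are assumptions for this sketch, and in production the value would be injected by a key management service or deployment tooling:

```python
import os

def load_encryption_key() -> bytes:
    # Never hard-code keys in source: fetch them at runtime from the
    # environment, which deployment tooling or a key management
    # service populates. APP_ENCRYPTION_KEY is a hypothetical name.
    key_hex = os.environ.get("APP_ENCRYPTION_KEY")
    if not key_hex:
        # Fail securely: refuse to start without a key rather than
        # falling back to a default baked into the code.
        raise RuntimeError("APP_ENCRYPTION_KEY is not set; refusing to start")
    return bytes.fromhex(key_hex)
```

Because the key arrives through configuration, rotating it or migrating to a new mechanism requires no source change, which supports cryptographic agility.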
Standardize Identity and Access Management
The organization should apply a standard to its products and services that comprises several components of identity and access management, including:
- Users (both end-users and organization administrators) authenticate their identities through a mechanism. To provide assurance that users are who they say they are, organizations must consider many aspects, such as how users initially sign up or are provisioned, the credentials (including multifactor credentials) they provide each time they authenticate, how they restore their access to the system should they lose a necessary credential, and how they authenticate when communicating with a help desk for the product.
An organization can delegate authentication to a third-party identity provider via a mechanism such as OpenID, incorporate a prebuilt identity solution into its product from an organization that specializes in creating authentication technologies, or centralize the design and maintenance of an authentication solution with an internal team that has expertise specific to authentication.
One service or logical component authenticates to another through appropriate mechanisms, and credentials are stored securely and rotated in a timely fashion. Service credentials should not be stored within a source repository (neither hardcoded nor in a config file), as there are insufficient protections preventing the disclosure of those credentials. Credential rotation for services can be challenging if not designed into a system, as both the consumer and producer services need to synchronize the change in credentials.
The actions of each principal are authorized through appropriate mechanisms. The safest way to ensure this is to build the application such that, without an explicit authorization check, the default state is to deny the access or action (an approach commonly referred to as “default deny”).
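A minimal sketch of the default-deny approach, with illustrative principals, actions, and resources:

```python
# Explicit grants as (principal, action, resource) tuples.
# The names here are illustrative, not a prescribed model.
GRANTS = {
    ("alice", "read", "report-42"),
    ("bob", "write", "report-42"),
}

def is_authorized(principal: str, action: str, resource: str) -> bool:
    # Default deny: any combination not explicitly granted is refused,
    # so a missing or forgotten grant fails closed rather than open.
    return (principal, action, resource) in GRANTS
```

The key property is that the absence of a rule denies access; an attacker who finds an unanticipated code path gains nothing by default.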
Establish Log Requirements and Audit Practices
- Well-designed application, system, and security log files provide the ability to understand an application’s behavior and how it has been used at any moment in time. Automated Security Information and Event Management (SIEM) systems rely on these logs as fundamental data sources for triggering alerts.
- Software design decisions for creating and maintaining logs are critical; they are informed by business and system requirements, the threat model, and the availability of log creation and maintenance functions in the deployed environment. These logs serve a range of purposes, from identifying security incidents to analyzing system events.
- The group or groups that will need to consume the log file contents should always determine the log files’ content.
- Identifying what security information is relevant and needs to be logged, where the logs will be stored, for how long the logs will be retained, and how the logs will be protected is equally important.
- Any logging system should provide controls to prevent tampering and offer basic configuration to ensure secure operation.
- During application runtime, ensure that security logs are configurable.
- It is recommended to move logs to a central location and archive them, and not to delete local log files too quickly as deletion could potentially hinder required investigations into past events.
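A sketch of several of these practices using Python’s standard `logging` module: the level is configurable at runtime, and rotated local copies are retained rather than deleted quickly. The logger name, environment variable, and rotation limits are illustrative assumptions:

```python
import logging
import logging.handlers
import os

def configure_security_logger(path: str = "security.log") -> logging.Logger:
    # Level is configurable at runtime via an environment variable
    # (SECURITY_LOG_LEVEL is a hypothetical name) rather than hard-coded.
    level = os.environ.get("SECURITY_LOG_LEVEL", "INFO")
    logger = logging.getLogger("security")
    logger.setLevel(level)
    # Keep rotated local copies instead of deleting logs quickly; a
    # SysLogHandler (or similar) could additionally ship events to a
    # central collector for the SIEM.
    handler = logging.handlers.RotatingFileHandler(
        path, maxBytes=10_000_000, backupCount=10)
    handler.setFormatter(logging.Formatter(
        "%(asctime)s %(name)s %(levelname)s %(message)s"))
    logger.addHandler(handler)
    return logger
```

Tamper protection (e.g., append-only storage or forwarding before local rotation) would be layered on top in the deployment environment.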
Secure Coding Practices
Developing secure software aims to minimize the number of unintentional code-level security vulnerabilities. Achieving this involves defining coding standards, selecting the most appropriate and safe languages, frameworks, and libraries, ensuring their proper use (especially their security features), and using automated analysis tools and manual code reviews.
Establish Coding Standards and Conventions
The development team should create, maintain and communicate appropriate coding standards and conventions after making technology decisions.
- The frameworks and tools selected should include and enable built-in security features, which should be enabled by default.
- When addressing issues, a single option should be selected as the standard.
- Any use of frameworks, libraries or components should be loosely coupled to facilitate easy replacement or upgrade.
- The standards should be enforceable and realistic.
Use Safe Functions Only
The development team should provide guidance to developers on which functions to avoid and their safe equivalents, according to the coding standards described in the preceding section. Additionally, the development team should deploy tooling to assist in identifying and reviewing the usage of dangerous functions.
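As one example of such guidance in Python, a coding standard might ban `eval()` on untrusted text in favor of `ast.literal_eval`, which accepts only literals; this is a single illustrative pairing, not an exhaustive list of dangerous functions and safe equivalents:

```python
import ast

def parse_literal(text: str):
    # Safe equivalent of eval() for untrusted input: only Python
    # literals (numbers, strings, lists, dicts, ...) are accepted;
    # names, attribute access, and function calls raise an error.
    return ast.literal_eval(text)
```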
Use Current Compiler and Toolchain Versions and Secure Compiler Options
It is important to use the latest versions of compilers, linkers, interpreters, and runtime environments. As languages evolve, they incorporate security features, and developers using previous versions of compilers and toolchains cannot take advantage of these improvements in their software.
- Developers should enable secure compiler options and avoid disabling secure defaults for the sake of performance or backwards compatibility.
- When starting a project, consider the compiler version.
- Use tools to verify compiler and linker options.
Use Code Analysis Tools to Find Security Issues Early
Developers can use tools to search the code for deviations from requirements, verifying that they are following guidance and identifying problems early in the development cycle. They can use static analysis tools that plug directly into the IDE to find security bugs effectively without leaving their native IDE environment. This method is comparable to using the latest versions of compilers and linkers with appropriate security switches, as discussed in the previous section. Secure code review is also a good way for developers to identify vulnerabilities that result from logic bugs in the source code.
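As a toy illustration of the pattern-based checks such tools perform, a naive scanner might flag textual calls to banned functions; real tools parse the code and model data flow, and the deny-list here is illustrative:

```python
import re

# Illustrative deny-list; a real coding standard would define its own.
BANNED = ("eval", "exec", "pickle.loads")

def scan_source(source: str) -> list:
    # Return (line number, function name) pairs where a banned call
    # appears, so findings can be reviewed early in development.
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for name in BANNED:
            if re.search(rf"\b{re.escape(name)}\s*\(", line):
                findings.append((lineno, name))
    return findings
```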
Handle Data Safely
- Applications can consume and process data from various sources such as the Internet, network, files, or other applications through inter-process communication or data channels. However, the origin of data is often unclear or not well-defined.
- To ensure data is interpreted purely as data in the context where it is being used, developers should use output encoding.
- To prevent data from being interpreted as control logic, data binding should be used to bind it to a specific data type.
- Developers should always sanitize input from untrusted sources. It is recommended to use the sanitization and validation methods provided by components because custom-developed sanitization may miss hidden complexities.
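A brief sketch of output encoding and allow-list validation in Python, using the standard library’s provided methods rather than custom sanitization; the username rules are illustrative assumptions:

```python
import html
import re

def render_comment(user_input: str) -> str:
    # Output encoding: the data is rendered purely as text in an HTML
    # context, never interpreted as markup or script.
    return "<p>" + html.escape(user_input) + "</p>"

def validate_username(name: str) -> str:
    # Allow-list validation: reject anything outside the expected shape
    # instead of trying to strip "bad" characters out.
    if not re.fullmatch(r"[A-Za-z0-9_]{3,32}", name):
        raise ValueError("invalid username")
    return name
```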
Handle Errors
- All applications inevitably encounter errors. Although errors resulting from typical use can be detected during functional testing, it is almost impossible to predict all the ways in which an attacker might interact with the application.
- Although expected errors can be handled and validated with specific exception handlers or error checks, it is necessary to use generic error handlers or exception handlers to cover unexpected errors.
- After an unexpected error is caught by such a generic handler, the integrity of further execution can no longer be trusted.
- Error handling should be integrated into the logging approach, and ideally different levels of detailed information should be provided to users as error messages and to administrators in log files.
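A sketch of a generic handler that records full detail for administrators while returning only an opaque reference to the user; the logger name and message wording are illustrative:

```python
import logging
import uuid

log = logging.getLogger("app")

def handle_request(action):
    try:
        return action()
    except Exception:
        # Generic handler for unexpected errors: full detail (including
        # the stack trace) goes to the administrator log; the user sees
        # only an opaque reference they can quote to support.
        ref = uuid.uuid4().hex
        log.exception("unhandled error (ref=%s)", ref)
        return f"An internal error occurred (reference {ref})."
```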
Manage Security Risk Inherent in the Use of Third-party Components
- While developers increasingly use third-party frameworks and libraries to innovate and deliver more value in shorter time periods, they must be aware that there is inherent security risk in the use of third-party components (TPCs) and should investigate and evaluate these risks before use.
- Developers must understand that they inherit security vulnerabilities of the components they incorporate and that choosing components for expedience (e.g., to reduce development time) can come at the cost of security when those components are integrated in production.
- When choosing TPCs, developers should choose established and proven frameworks and libraries that provide adequate security for their use cases and defend against identified threats.
- Re-implementing security features that are native to the framework can introduce new risks.
Testing and Validation
Most mature security programs use multiple forms of security testing and validation, making it an essential component of an SDL program, and typically the first set of activities adopted by an organization.
Automated Testing
Developers can run several commercial or free/open-source tools for automated security testing either on their workstations, as part of a build process on a build server, or to scan the developed product. These tools enable rapid and repeatable testing at scale and can identify certain types of vulnerabilities.
Use Static Analysis Security Testing Tools
Static analysis inspects either source code or the compiled intermediate language or binary component for flaws. It looks for known problematic patterns based simply on the application logic, rather than on the behavior of the application while it runs.
Perform Dynamic Analysis Security Testing
Dynamic analysis security testing deploys a suite of prebuilt attacks against an executing version of a program or service. It typically runs against the fully compiled or packaged software while it’s running. This enables dynamic analysis to test scenarios that are only apparent when all the components are integrated.
Fuzz Parsers
Fuzzing is a specialized form of DAST (or, in some limited instances, SAST). It involves generating or mutating data and passing it to an application’s data parsers (file parser, network protocol parser, inter-process communication parser, etc.) to observe their behavior. Fuzzing produces far better coverage of parsing code than either traditional DAST or SAST. Once the automation is set up, it can run countless distinct tests against code with the only cost being compute time. However, setting up fuzzing can be laborious depending on the scenario.
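A minimal mutation-fuzzing sketch: mutate a known-valid sample and record any input that makes the parser raise. The random seed makes runs reproducible; production fuzzers add coverage feedback and far smarter mutation strategies:

```python
import random

def mutate(seed: bytes, rng: random.Random) -> bytes:
    # Flip a few random bytes in a known-valid sample to produce
    # malformed input.
    data = bytearray(seed)
    for _ in range(rng.randint(1, 4)):
        data[rng.randrange(len(data))] = rng.randrange(256)
    return bytes(data)

def fuzz(parser, seed: bytes, iterations: int = 500) -> list:
    # Feed mutated samples to the parser; an unexpected exception (or,
    # for native code, a crash) is a finding worth triaging.
    rng = random.Random(0)
    findings = []
    for _ in range(iterations):
        sample = mutate(seed, rng)
        try:
            parser(sample)
        except Exception as exc:
            findings.append((sample, type(exc).__name__))
    return findings
```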
Netspective Communications LLC uses the GitLab Secret Detection scanner and GitLab SAST analyzer tools to actively identify vulnerabilities and prevent the inclusion of sensitive information.
Network Vulnerability Scanning
Applications delivered as a physical or virtual appliance or deployed in a SaaS environment that requires security of the operating system environment should undergo this category of testing. It is also essential to scan any application in its installed state to ensure that the installation process does not introduce additional vulnerabilities. In this scenario, conducting a baseline scan before application installation and another scan after installation and configuration is most effective. Comparing the results of the two scans can provide insight into any vulnerabilities introduced by the application installation.
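The baseline comparison can be as simple as a set difference over normalized findings; representing each finding as a (host, vulnerability ID) pair is an assumption for the sketch:

```python
def introduced_by_install(baseline: set, post_install: set) -> set:
    # Findings present after installation but absent from the baseline
    # scan were most likely introduced by the application's
    # installation or configuration. Each finding is assumed to be a
    # (host, vulnerability_id) pair normalized from scanner output.
    return post_install - baseline
```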
Verify Secure Configurations and Use of Platform Mitigations
The purpose of security automation tools is to detect security vulnerabilities, but as the secure coding practices section outlines, it is also crucial that software utilizes available platform mitigation techniques and configures them properly. Regular validation that software and services use platform mitigation techniques correctly is typically easier to deploy and execute than other forms of security automation, and the results obtained have very low false-positive rates.
Perform Automated Functional Testing of Security Features/Mitigations
Organizations should extend their testing mechanisms that are used to verify the correct implementation of general features and functionality, such as unit tests and other automated testing, to also verify security features and mitigations designed into the software.
Manual Testing
Manual testing can evolve based on previous findings and subtle indicators of an issue, as human testers can learn, adapt, and make leaps of inference, in contrast to automated testing.
Perform Manual Verification of Security Features/Mitigations
- Include security features in existing unit tests and other automated functionality verification efforts, as well as in any manual functional testing efforts that are performed.
- Incorporate verification of security features and mitigations into the test plan for manual quality assurance, as they can be manually verified in the same way that any non-security feature is verified.
Perform Penetration Testing
Penetration testing assesses software in much the same way that attackers look for vulnerabilities. It can find the widest variety of vulnerabilities and analyze a software package or service in the broader context of the environment it runs in, the actions it performs, the components it interacts with, and the ways that humans and other software interact with it.
Verification
- Prioritization for all testing activities based on documented business risks or compliance requirements (evaluating failed or missed test cases against these).
- Identifying requirements for mitigating controls against threats, misuse (abuse) cases, or attacker stories.
- Examining security test case descriptions.
- Analyzing security test results.
- Reviewing penetration testing or security assessment reports.
Manage Security Findings
- The primary goal of the SDL program is to identify software design or implementation weaknesses that, when exploited, expose the application, environment, or Netspective Communications LLC to risk.
- Performing the secure development practices outlined in this document aids in identifying these weaknesses. However, simply performing these activities is not sufficient. The team should take action to correct the identified weaknesses to improve the overall security posture of the product. The team must track the findings from these artifacts (practices such as threat modeling, third-party component identification, SAST, DAST, penetration testing) and take action to remediate, mitigate or accept the respective risk.
- When the issue cannot be completely remediated, the team must perform an assessment to determine whether the residual risk is acceptable. The residual risk may differ radically between products depending on each product’s usage and the sensitivity of the data being processed or stored; other factors, such as brand damage and revenue impact, should also be considered.
- Once the team understands the residual risk, they must make a decision whether the risk is acceptable, whether additional mitigations are required, or whether the issue must be fully remediated.
- To ensure that action is taken, the team should record these findings in a tracking (ideally an ADLM) system and make them available to multiple teams in the organization. The teams who may need access to this information to make informed decisions regarding risk acceptance and product release include the development or operational teams who will remediate or mitigate the risk and the security team who will review and provide guidance to both development teams and the business owners.
- To enable easy discovery of these findings, the team should also identify them as security issues, and to aid prioritization, they should have a severity assigned.
- The tracking system should have the ability to produce a report of all unresolved (and resolved) security findings.
Define Severity
- The SDL program/team recognizes that clear definitions of severity are important to ensure that all SDL participants use the same language and have a consistent understanding of the security issue and its potential impact. To support this understanding, they recommend defining criteria to categorize issue severity.
- The team considers implementing a severity scale (e.g., Critical, High, Medium, and Low or a finer-grained scale, Very High, High, Medium, Low, Very Low, Informational), and next, they define the criteria that contribute to each severity category. If detailed criteria are not available or known, they start by mapping severity levels to Common Vulnerability Scoring System (CVSS) thresholds (e.g., 10-8.5 = Critical, 8.4-7.0 = High, etc.). They use CVSS primarily for confirmed vulnerabilities but can also use it to prioritize SDL findings based on their complexity of exploitation and impact on the security properties of a system.
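The CVSS-to-severity mapping could be sketched as follows; the Critical and High thresholds follow the example above, while the lower bands are illustrative assumptions:

```python
def severity_from_cvss(score: float) -> str:
    # Critical and High thresholds mirror the example mapping in the
    # text (10-8.5 Critical, 8.4-7.0 High); the Medium/Low bands below
    # are assumptions for illustration, not prescribed values.
    if not 0.0 <= score <= 10.0:
        raise ValueError("CVSS base scores range from 0.0 to 10.0")
    if score >= 8.5:
        return "Critical"
    if score >= 7.0:
        return "High"
    if score >= 4.0:
        return "Medium"
    return "Low"
```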
Risk Acceptance Process
- The SDL program/team acknowledges that when an issue cannot be completely resolved or fully mitigated, or can only be partially mitigated, a risk acceptance or mitigation request needs to be approved before releasing the product. They anticipate these situations ahead of time, create a structured format to communicate them, define a process or workflow, and assign responsibilities to the roles involved.
- They track and obtain acceptance of risk. The record of risk acceptance includes a severity rating, a remediation plan or an expiration/re-review period for the exception, and the area for review/validation (e.g., a function in code should be re-reviewed to ensure that it continues to sufficiently reduce the risk).
Vulnerability Response and Disclosure
To ensure the resolution of externally discovered vulnerabilities and keep all stakeholders informed of progress, we need to establish a vulnerability response and disclosure process. This process becomes particularly important when a vulnerability is being publicly disclosed and/or actively exploited. The process aims to provide customers with timely information, guidance, and possible mitigations or updates to address threats resulting from such vulnerabilities.
Define Internal and External Policies
- To ensure clear and effective vulnerability response, it is important to define and maintain a vulnerability response policy that states Netspective Communications LLC’s intentions when investigating and remediating externally reported vulnerabilities. The policy should include guidelines for vulnerability response both internal to Netspective Communications LLC and external to the public.
- The internal policy should specify who is responsible at each stage of the vulnerability handling process and provide guidance on how to handle information related to potential and confirmed vulnerabilities.
- The external policy should primarily address external stakeholders such as vulnerability reporters, security researchers, customers, and potentially press or media contacts. It should set expectations for external stakeholders on what they can expect when they report a potential vulnerability.
Define Roles and Responsibilities
- Organizations should establish dedicated Product Security Incident Response Teams (PSIRT) or incident response teams whose charter is to define and manage the vulnerability response and disclosure process. PSIRT members should have a clear understanding of all policies, guidelines and activities related to Vulnerability Response and Disclosure. They should be able to guide the software development team through the process. Additionally, everyone involved in software development at the organization and supporting functions such as customer service, legal, and public relations should understand their role and responsibilities as they relate to Vulnerability Response & Disclosure Process.
- The internal vulnerability response policy should clearly document the roles and responsibilities of each stakeholder.
Ensure that Vulnerability Reporters Know Whom to Contact
Netspective Communications LLC should ensure that vulnerability reporters, security researchers, customers, and other stakeholders know where, how, and to whom to report vulnerabilities. The contact or report intake location should be easily discoverable. If sensitive vulnerability information is being communicated, it should be kept confidential. Secure communication methods, such as encrypted email addresses and public keys, should be clearly documented in both the internal and external vulnerability response policies.
Manage Vulnerability Reporters
The PSIRT should acknowledge receipt of vulnerability reports from vulnerability reporters (or security researchers) and customers in a timely manner. Upon receipt, the organization should set expectations for how they will respond and may request additional information from the reporter to support triage.
Monitor and Manage Third-party Component Vulnerabilities
Development organizations incorporating TPCs, which include both open-source software (OSS) and commercial off-the-shelf (COTS) components, inherit the security vulnerabilities of these components. Having a process to monitor and manage vulnerabilities that are discovered in these components is critical for organizations.
Fix the Vulnerability
- The team that owns the vulnerable code should immediately triage the reported vulnerability for validation and remediation, if appropriate. They should determine the severity of the vulnerability (see Manage Security Findings) to assist in prioritizing the fixes. Other considerations such as potential impact, likelihood of exploitation, and scope of affected customers should also be taken into account to determine the relative urgency of producing the fix.
- Besides fixing the reported issue, the software development team should consider the nature of the defect. If additional defects of the same class are likely to exist in the codebase, it is better to address the defect in those places as well as the one reported. Mitigating proactively and systematically helps avoid repeated submissions of the same vulnerability in other areas of the codebase. If the team develops software that is reused by other teams, dependent teams should be notified so they can update their software accordingly.
Identify Mitigating Factors or Workarounds
The software development team should fully test the security fixes. They should also identify and test any mitigating factors or workarounds, such as settings, configuration options, or general best practices, that could reduce the severity of exploitation of a vulnerability. These mitigations and workarounds can help users of the affected software defend against exploitation of the vulnerability before they are able to deploy any updates.
Vulnerability Disclosure
The software development team should communicate the fix to customers through security advisories, bulletins, or similar notification methods as soon as it is available. They should release security advisories and bulletins only once fixes have been implemented for all supported versions of the affected product(s). Individual customers should not be provided advance notification to ensure that all customers are protected from malicious attacks while the fix is being developed. The process should be clearly defined and documented in the internal vulnerability response policy, especially when the vulnerability is being publicly disclosed and/or actively exploited.
The security advisories and bulletins should include the following information:
- Products, applicable versions, and platforms affected
- The severity rating/level for the vulnerability (see Manage Security Findings)
- Brief description of the vulnerability and potential impact if exploited
- Fix details with update/workaround information
- Credit to the reporter for discovering the vulnerability and working on a coordinated disclosure.
Secure Development Lifecycle Feedback
Perform a root cause analysis, and use the information gained from that analysis and the remediation process to update the secure development lifecycle so that similar vulnerabilities do not occur in new or updated products. This feedback loop is a critical step for continuous improvement of the SDL program. When planning the implementation and deployment of secure development practices, consider the following factors:
- The culture of the organization
- The expertise and skill level of the organization
- The product development model and lifecycle
- The scope of the initial deployment
- Stakeholder management and communication
- Efficiency measurement
- The health of the SDL process
- The value proposition for secure development practices
Culture of the Organization
When planning the deployment of any new process or set of application security controls, it is important to consider the culture of the organization. Some organizations respond well to corporate mandates from the CEO or upper management, while others respond better to a groundswell from the engineering team. To ensure success, follow these steps:
- Look for examples of process or requirements changes that were successful and those that were not.
- Learn from past successes and mistakes.
- If mandates work well, identify the key managers who need to support and communicate a software security initiative.
Expertise and Skill Level of the Organization
To implement an SDL successfully, the organization needs to provide some level of training. It should raise awareness of the importance of security throughout the organization and provide detailed technical training to development teams that clearly explains what is expected of individuals and teams. If individuals do not understand the importance of these practices, or the impact of not performing them, they are less likely to support them.
Product Development Model and Lifecycle
Along with specifying secure development practices, it is essential to determine when these practices should be applied. The frequency and timing of these practices depend on the development model in use and the available automation. Executing security practices in a logical sequence, such as conducting threat modeling before committing code, can yield greater security gains and cost-effectiveness. In agile or continuous development environments, however, other triggers should be considered, such as elapsed time or a response to changes in the operating environment.
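In a continuous-delivery pipeline, the triggers described above can be expressed as a simple scheduling check. The sketch below assumes a 90-day review interval and the trigger names shown; both are illustrative policy choices, not prescribed values.

```python
from datetime import datetime, timedelta
from typing import Optional

# Assumed policy value: re-run the practice at least every 90 days.
REVIEW_INTERVAL = timedelta(days=90)

def threat_model_due(last_run: datetime,
                     architecture_changed: bool,
                     new_threat_intel: bool,
                     now: Optional[datetime] = None) -> bool:
    """Return True when any trigger fires: elapsed time, an architectural
    change, or a change in the operating/threat environment."""
    now = now or datetime.utcnow()
    return (now - last_run >= REVIEW_INTERVAL
            or architecture_changed
            or new_threat_intel)
```

A build pipeline could call such a check on each release candidate, so that time-based and event-based triggers supplement the sequential ordering used in more traditional development models.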
Scope of Initial Deployment
- Should the initial rollout include all secure development practices or just a subset? Depending on the organization’s maturity, it may be wise to start with a subset, establish traction and expertise in those areas, and only then move to full-scale implementation.
- When selecting an adoption period for each team, consider the product release roadmap and avoid introducing a new process or set of practices to a team nearing completion of a release. Such a team is likely working toward a tight deadline and has not budgeted time to learn and adopt new practices.
- Consider targeting teams with a higher security risk posture during the initial rollout and deferring lower-risk products for later adoption.
- Allow time for a transition to full adherence to all SDL requirements. For example, an existing product whose development team is working on a new release may have an architecture with vulnerabilities that are difficult to mitigate. It may make sense to agree with such a product team that they will “do what is feasible now” and transition to a new architecture over an agreed period of time.
Stakeholder Management and Communications
Identifying the stakeholders, champions, and change agents who can assist with communications, influence, and rollout of the program is essential in deploying a new process or set of practices.
Compliance Measurement
When defining the overall SDL program, you should consider the following:
- Do teams need to complete all secure development practices? What is mandatory, and is there a compliance target?
- What evidence is required to show that practices have been executed? Ideally, the ADLM system and testing environments can be configured to produce this evidence as a byproduct of executing the secure development process.
- How will compliance be measured, and what types of reports are needed?
- What will happen if the compliance target is not achieved? What is the risk management process that should be followed? What level of management can approve shipping with open exceptions to security requirements? What action plans will be required to mitigate such risks, and how will their implementation be tracked?
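The compliance questions above can be answered with a simple per-team metric over the mandatory practices. In the sketch below, the practice names, team data, and 90% target are illustrative assumptions; in practice, the evidence would be produced by the ADLM system and testing environments as described above.

```python
# Assumed mandatory practices and compliance target; both are
# illustrative, not values the program prescribes.
MANDATORY_PRACTICES = {"threat_modeling", "static_analysis", "security_testing"}
COMPLIANCE_TARGET = 0.9

def team_compliance(completed: set) -> float:
    """Fraction of mandatory practices a team has evidence for."""
    return len(completed & MANDATORY_PRACTICES) / len(MANDATORY_PRACTICES)

def compliance_report(teams: dict) -> dict:
    """Map each team to its compliance ratio and meets-target flag."""
    result = {}
    for name, completed in teams.items():
        ratio = team_compliance(completed)
        result[name] = (ratio, ratio >= COMPLIANCE_TARGET)
    return result
```

Teams below the target would then enter the risk management process, with any exceptions approved at the appropriate management level and tracked to closure.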
SDL Process Health
The SDL process must establish key feedback mechanisms that surface gaps and necessary improvements. The vulnerability management process can serve as a valuable source of feedback on the effectiveness of the secure development practices. If a vulnerability is discovered after release, the team should conduct a root-cause analysis to determine the underlying issue, which may be one of the following:
- A gap in the secure development practices that requires attention
- A gap in training or skills
- The need for a new tool or an update to an existing tool
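Tallying post-release findings by the root-cause categories above shows which gap most needs attention in the next SDL iteration. The category keys and finding records below are hypothetical, used only to illustrate the feedback loop.

```python
from collections import Counter

# Illustrative keys for the three root-cause categories listed above.
CATEGORIES = ("practice_gap", "training_gap", "tooling_gap")

def tally_root_causes(findings: list) -> Counter:
    """Count post-release findings by root cause so the most common
    gaps drive the next round of SDL improvements."""
    return Counter(f["root_cause"] for f in findings
                   if f["root_cause"] in CATEGORIES)
```

For example, a cluster of `training_gap` findings would point toward expanding the technical training described earlier, while repeated `tooling_gap` findings would justify a new tool or an update to an existing one.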