This article describes the general security requirements for the secure development of software or application components.
There are many different causes of insecure software. This chapter describes some major causes that must be avoided when planning, developing, and operating applications.
Complexity is a system property that usually has a strongly negative impact on security. Complexity can introduce vulnerabilities into the application and, above all, significantly increases the effort required to verify it. Complexity is often increased further by external components: modern web applications typically consist of a large number of APIs, frameworks, and other components, and only rarely does a web application use the full functionality of the underlying frameworks. These unused functionalities enlarge the attack surface and increase the risk that functionality can be misused or that vulnerabilities in the web application can be exploited. For this reason, one of the key security principles is to keep the overall system as simple as possible, in line with the "keep it small and simple" principle.
Concrete application security requirements form the basis for secure application development. If security requirements are not defined at the start of development, or are defined inadequately, the activities that depend on them cannot be carried out effectively. Requirements must also be formulated precisely, which is difficult, in order to prevent misunderstandings and to allow later validation of compliance. Precise formulation is particularly challenging in agile approaches, where security requirements must be continually adapted to frequently changing use cases. Missing or insufficient software security requirements are often a major cause of later security problems.
Insufficient security awareness within the product or development team can lead to misconceptions about the application's protection needs and the impact of implemented functions, as well as to errors in the actual implementation.
For a variety of reasons, testing software for security is not trivial; in fact, it is enormously difficult, which is one reason why even reputable vendors offer no tools that can do it adequately on their own. Software assurance activities must therefore be addressed as early as possible in the development process and given adequate attention in the appropriate phases. The poorer an application's testability, the easier it is to overlook security issues: poor testability not only significantly increases the effort required for security checks, but also increases the risk that vulnerabilities remain hidden.
Before development begins, the responsible person checks whether there are applicable policies, standards, and documentation that must be considered to ensure a secure development process. Documentation is extremely important: no guideline or standard can cover every situation the development team will face, and documentation also makes it possible to uncover previously undiscovered vulnerabilities after the fact.
The software development process should generally be organized into sprints, each lasting three to six weeks. Each sprint begins with sprint planning and ends with the sprint review. The tasks for each sprint are collected in a sprint backlog. To enforce the dual-control (four-eyes) principle, each new task must be submitted by the Head of Development to the Product Owner and verified during the sprint planning call. The Information Security Officer (ISB) has co-determination and veto rights on matters where a clear breach of, or risk to, information security is foreseeable. If the Head of Development or Product Owner is also the ISB, the veto right passes to the Deputy ISB to avoid conflicts of interest.
In the conception and planning phase, some security-related aspects of the application's development must be defined and documented in user stories.
User stories are used to define requirements from the user's point of view. The required level of detail depends strongly on the complexity of the software to be created. User stories should include the following points:
Tests are written during feature development and should cover all common scenarios. Automated tests verify the correctness of the code at multiple levels of the system, for example as unit tests or integration tests. These tests run automatically, and for the most critical systems a failure prevents the code from going into production. To test the application from the user's perspective, end-to-end tests must be written and run before each merge. Automated tests must also be run when the underlying infrastructure is replaced. All automated tests must pass before the code can be released to production. The documented acceptance criteria for a piece of functionality can be reviewed at any time in the corresponding ticket.
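As an illustration of the unit-test level, a minimal sketch in Python (the function and its validation rule are hypothetical, not taken from the organization's code base):

```python
def is_valid_username(name: str) -> bool:
    """Accept only alphanumeric usernames of 3 to 32 characters."""
    return name.isalnum() and 3 <= len(name) <= 32

def test_rejects_injection_payload():
    # A classic SQL injection probe must fail validation.
    assert not is_valid_username("admin'--")

def test_accepts_plain_name():
    assert is_valid_username("alice42")
```

In a setup like the one described above, such tests would run on every merge request, and a failing assertion would block the merge.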
Responsibilities during the software development process are divided among four roles within the organization. The engineer is responsible for the design, development, and automated testing of the code, following security best practices. The development lead defines the requirements and specifications for functionality and manages the entire development process within the team. The product owner defines the functional requirements. The technical security officer reviews technical solutions and ensures that they comply with the applicable standards.
In the initial phase of the development process, the functional requirements are specified and approved by the product owner. The technical requirements are then specified and approved by the head of development (in some cases, approval by a colleague is sufficient). Code is developed by engineers and requires writing and running automated tests. A mandatory manual review of the code reduces the chance of bugs making it into the code base. Business-critical functions can additionally be tested manually before release if the information security officer requests it. The organization has adopted pair programming, in which engineers work on functionality in pairs to improve its quality and reduce the risk of potential security problems. Pair programming is also intended to foster a culture of feedback and constructive challenge among the organization's engineers.
Applications should have a documented design and architecture. This documentation may include models, text documents, and other similar artifacts. The following requirements must be considered during development:
Certain requirements must be met for any change to the software stack: the introduction or modification of a software library requires, at a minimum, an approved review by an engineer or the Information Security Officer; all changes to the libraries used are documented via version control; and package managers are used to manage software packages and their dependencies wherever possible.
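As a sketch of how such dependency changes become reviewable, assuming a Python project managed with pip (file contents and library choices are illustrative):

```
# requirements.in (illustrative) — direct dependencies, one line per library.
# Compiled with `pip-compile --generate-hashes` (pip-tools) into a fully
# pinned requirements.txt, so every library introduction or upgrade appears
# as a diff under version control and passes through merge review.
flask>=3.0
requests>=2.31
```

The same pattern applies to any ecosystem with a lockfile-capable package manager (npm, Cargo, Maven, and so on).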
Compliance with the protection goals of confidentiality, integrity, and availability must be ensured not only when code is created, but also when it is handled. This includes:
To ensure a high standard of secure coding, the organization requires automated testing for all relevant changes to the code base. The organization relies on frameworks where appropriate to reduce the risk of common attack scenarios such as injection attacks or cross-site scripting. Code reviews are also required for all changes. When security alerts or risks are identified that are deemed significant and likely, mitigating these risks takes precedence over business objectives.
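A minimal Python sketch of the framework-level defence against injection attacks mentioned above (table layout and data are hypothetical): the database driver receives the user input as a bound parameter rather than as part of the SQL string.

```python
import sqlite3

def find_user(conn: sqlite3.Connection, name: str):
    # Parameterized query: the driver treats `name` strictly as data,
    # so a payload like "alice' OR '1'='1" cannot alter the SQL.
    cur = conn.execute("SELECT id, name FROM users WHERE name = ?", (name,))
    return cur.fetchone()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES ('alice')")

assert find_user(conn, "alice") == (1, "alice")
assert find_user(conn, "alice' OR '1'='1") is None  # injection attempt finds nothing
```

Web frameworks and ORMs apply the same principle internally; the review step checks that no code path falls back to string concatenation.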
To ensure that the company's developers are able to avoid, find, and fix vulnerabilities, and to raise awareness, security-related findings are regularly shared within the team and potential solutions are discussed. Pair programming is practiced to reduce security risks. In addition, the development team is in constant contact with the pentest team, so that developers are kept aware of current threats and continuously build their skills in application security. The organization ensures that engineers can update libraries themselves in response to known vulnerabilities; all such updates are covered by the review process.
Access to source code is restricted to engineers currently working on the project. We document all source code changes via the Git versioning system, specifically via GitLab merge requests. These always include a reference to the ticket system and are based on a template that includes a checklist of common tasks.
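A merge request template of the kind described might look as follows (a sketch; the actual checklist items and ticket format are defined by the team):

```markdown
<!-- .gitlab/merge_request_templates/Default.md (illustrative) -->
## Ticket
Refs: <ticket ID in the ticket system>

## Checklist
- [ ] Automated tests added or updated
- [ ] Peer review requested
- [ ] Security-relevant changes flagged for the Information Security Officer
- [ ] Documentation updated where necessary
```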
Access to the source code is restricted to selected engineers. Changes to the source code can only be added to the master branches of the code base after a mandatory peer review.
Our development and test environments are separated from the operational environments. The development environments reside on the developers' PCs. As soon as a merge request is opened on GitLab, a test environment is created with the respective changes, and unit and integration tests are executed automatically. Only when all software tests have completed successfully can the new changes be transferred to the operational environment.
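The pipeline described above can be sketched in GitLab CI terms (job and script names are hypothetical; the `rules:` variables are predefined GitLab CI variables):

```yaml
# .gitlab-ci.yml (sketch)
stages: [test, deploy]

automated_tests:
  stage: test
  script:
    - ./scripts/run_unit_tests.sh
    - ./scripts/run_integration_tests.sh
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"   # runs on every MR

deploy_production:
  stage: deploy
  script: ./scripts/deploy.sh
  rules:
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH        # only after merge
```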
The use of production data containing personally identifiable information, confidential or otherwise sensitive data for testing is not permitted under any circumstances.
To ensure secure development, we separate the development, test, and production environments from one another. This prevents untested code changes from deleting or corrupting production data and prevents developers from directly accessing production systems. Our environments are divided into three isolated sandboxes:
Code must be tested regularly to ensure that no malicious functionality or vulnerabilities make it into production. Testing must cover as much of the source code as possible, especially for code written in a scripting language, where externally supplied input could end up being executed as code. One hundred percent test coverage is not always achievable, nor is it always practical. The reviewer must conscientiously decide whether the test coverage is sufficient for the feature in question and, if necessary, request higher coverage from the developer.
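One common way to make an agreed coverage floor enforceable in the pipeline, assuming a Python project with the pytest-cov plugin (job name, module name, and threshold are illustrative):

```yaml
# Excerpt of a CI test job (sketch): fail when statement coverage
# drops below the floor the reviewer and developer agreed on.
coverage_gate:
  stage: test
  script:
    - pytest --cov=myapp --cov-fail-under=80   # requires the pytest-cov plugin
```

The numeric floor is a review aid, not a substitute for the reviewer's judgment about whether the right scenarios are tested.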
The correct and complete implementation of each security-related requirement or change must be verified by an appropriate security test. Wherever possible, this should be a combination of automated scans and manual security tests.
A web security scanner is used to scan a web application automatically for vulnerabilities. Web security scanners can detect simple vulnerabilities (low-hanging fruit), but they are not perfect and cannot cover all vulnerabilities: like static code analysis tools, they can only identify vulnerabilities recorded in their database, and they cannot identify logical flaws in the web application. Many vulnerabilities therefore cannot be detected with such tools. They are, however, useful for continuous testing against specific, known vulnerabilities, and these (often free) tools allow simple vulnerabilities to be identified at an early stage, even during development.
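Such a scanner can be wired into the pipeline itself; a sketch using the freely available OWASP ZAP baseline scan (job name and target URL are hypothetical):

```yaml
# CI job (sketch): passive ZAP baseline scan against the staging deployment.
zap_baseline_scan:
  stage: test
  image: ghcr.io/zaproxy/zaproxy:stable
  script:
    - zap-baseline.py -t https://staging.example.com -r zap-report.html
  artifacts:
    paths: [zap-report.html]   # report archived for review
```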
A code review involves manually checking the program code. It is important that the review is performed by someone other than the author of the code (e.g., a peer review), as the author can easily overlook their own bugs. Typical errors that can be detected include deviations from standards, deviations from requirements, design errors, and buffer overflows. Reviews are especially important for sensitive functions such as transaction management, authentication, and cryptography.
Penetration tests are security tests from an attacker's perspective. They attempt to identify and exploit vulnerabilities within the web application in order to cause a breach of a protection goal. Penetration tests are usually performed as follows:
In the agile context, acceptance testing is essential. Due to the usually short iterations and the intensive exchange with the customer or later user, acceptance testing becomes an ongoing process here.
Since our applications are used by our own employees outside of development, we regard them as our end users, and they perform acceptance testing for us. To this end, they are regularly notified of changes and encouraged to test those changes in the staging environment before they are released to the production environment.
As soon as a piece of functionality meets the employees' expectations, it is approved for release. The respective feature is then considered accepted and is regularly exercised by automatic regression tests.
If errors are found during the acceptance tests, they are immediately fixed in the current development cycle and a new acceptance test is performed. For this, it is sufficient for another developer to perform this acceptance test.
The development team makes the necessary changes to the software code to fix bugs. Once a bug is fixed, a different developer verifies each time that it was fixed properly; this is called retesting. Each bug report is checked for accuracy. When a corrected version is delivered to acceptance testing, a certain amount of regression testing is also performed. The purpose of regression testing is to ensure that no new bugs were introduced while the reported bugs were being corrected. This can be done with a set of regression test cases, usually covering key flows or areas where bugs frequently occur, based on previous experience.
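A regression test in this sense pins the original bug report to the suite permanently; a minimal Python sketch (the function and the bug it references are hypothetical):

```python
def normalize_email(addr: str) -> str:
    # Fix for a (hypothetical) bug report: trailing whitespace and mixed
    # case caused duplicate user accounts for the same address.
    return addr.strip().lower()

def test_regression_duplicate_accounts():
    # Reproduces the originally reported input; must keep passing forever.
    assert normalize_email("User@Example.com ") == "user@example.com"
```

Because the test encodes the exact reported input, any later change that reintroduces the bug fails the automated suite before release.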
Changes to the master branch, including library patches, can be deployed once the test build succeeds. Deployment is performed by an automated pipeline that replaces the running containers of the corresponding service. For changes to critical assets, the owner of the asset class must be notified of the intended patch or update. For network infrastructure, the update must take place outside regular office hours. For critical assets in the office infrastructure, the current firmware version must be saved and a backup of the configuration made before the patch is applied. Such updates should be scheduled outside the customer's regular office hours to minimize the impact on availability; changes without downtime are preferred.
Changes can be rolled back at any time to a state with a successful test build. If a system patch has failed, the system is automatically rolled back to the last working version. No approval is required to roll back after a failed update.
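Assuming the containers are managed by Kubernetes (deployment and image names are hypothetical; the source does not name the orchestrator), the container-replacement and automatic-rollback steps might be scripted in the pipeline as:

```yaml
# Deploy job excerpt (sketch): roll out the new image, wait for readiness,
# and roll back without further approval if the rollout fails.
deploy_production:
  stage: deploy
  script:
    - kubectl set image deployment/webapp webapp=registry.example.com/webapp:1.4.2
    - kubectl rollout status deployment/webapp --timeout=120s
        || kubectl rollout undo deployment/webapp
```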