2020 Year in Review

Improving the Security of Software Code

Software vulnerabilities constitute a major threat to the Department of Defense’s (DoD’s) ability to secure its information and assure mission success. Today’s software assurance tools and approaches cannot address the vulnerabilities in the DoD’s huge volume of code, especially under-supported legacy software.

“The SEI’s Secure Coding team has led the development of secure coding practices and standards,” explained Bob Schiela, Cybersecurity Foundations technical manager, “as well as tools and practices for auditing software source code to identify and mitigate security flaws.” The team’s work includes improving the efficiency and efficacy of security flaw resolution, currently through two lines of research: automated code repair and improved adjudication of static analysis results.

The Secure Coding team’s automated code repair tools find and repair specific types of common security flaws in source code, avoiding painstaking verification and repair by human analysts and developers. Automated code repair is especially helpful for maintaining the security of legacy software.

The team wrapped up its Automated Code Repair to Ensure Memory Safety project in 2020. The tool they developed tracks the boundaries of allocated memory and checks that a pointer is within bounds before using it to access memory.
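The tool’s implementation is not reproduced here, but the underlying idea can be sketched by hand. In the minimal C sketch below, a pointer is bundled with the bounds of its allocation, and every dereference is preceded by a bounds check; the names (bounded_ptr, bounded_alloc, bounded_read) are invented for illustration, and the actual tool inserts equivalent tracking and checks automatically.

```c
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical "fat pointer": the pointer travels with the bounds of
 * its allocation. Names are illustrative, not the SEI tool's own. */
typedef struct {
    char *ptr;   /* current position */
    char *base;  /* start of the allocation */
    char *end;   /* one past the end of the allocation */
} bounded_ptr;

static bounded_ptr bounded_alloc(size_t n) {
    char *p = calloc(n, 1);           /* zeroed, so the read below is defined */
    bounded_ptr b = { p, p, p ? p + n : p };
    return b;
}

/* Inserted check: refuse the dereference unless ptr is within bounds. */
static char bounded_read(bounded_ptr b) {
    if (b.ptr < b.base || b.ptr >= b.end) {
        fprintf(stderr, "out-of-bounds read trapped\n");
        abort();                      /* fail safely instead of reading OOB */
    }
    return *b.ptr;
}

int main(void) {
    bounded_ptr buf = bounded_alloc(8);
    if (buf.base == NULL) return 1;

    buf.ptr = buf.base + 7;           /* last valid byte */
    bounded_read(buf);                /* in bounds: succeeds */

    buf.ptr = buf.base + 8;           /* one past the end */
    bounded_read(buf);                /* out of bounds: trapped at run time */

    free(buf.base);
    return 0;
}
```

Trapping the bad access at run time turns silent memory corruption, the raw material of many exploits, into a safe, diagnosable failure.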

To evaluate the tool, the team used a software verification tool to compare code from Competition on Software Verification (SV-COMP) benchmarks before and after automated repairs were applied. Principal investigator Will Klieber explained, “Before the repairs, the tool couldn’t verify any benchmarks as safe, and it found that over 90 percent were unsafe. After the repairs, the tool verified that more than 50 percent were safe, and it did not find any to be unsafe.” The team then added an option that reduces the tool’s overhead from 50 percent to 6 percent by repairing only the lines flagged as problematic by a third-party tool. Software developers and sustainers could run these tools before building their software to eliminate critical memory-safety defects that could otherwise allow an attacker to take control of a system.

In the second line of research, the SEI has been developing a machine-learning-based method to automatically classify and prioritize results from static analysis tools. This Rapid Adjudication of Static Analysis Alerts During Continuous Integration method helps auditors and coders address large volumes of results with less effort.

Different static analysis tools find different types of defects, and each tool misses some defects, misidentifies others, or both. To increase coverage, analysts generally run multiple tools, but doing so typically produces more results than can be manually adjudicated. Results that are never analyzed leave real defects unfound and unfixed. To address this problem, the Secure Coding team is using machine learning classifiers to make adjudication more efficient. Using information from previous manual adjudications, the team trained classifiers to predict the confidence that each new result is a true or a false positive. Analysts can then use this confidence to prioritize their reviews and help ensure that likely defects are addressed, as in the sketch after this paragraph.
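Neither the team’s features nor its trained models are published in this article, so the toy C program below only sketches the workflow under one assumption: that a trained classifier reduces to a per-alert scoring function. The feature names and weights are invented for illustration; in practice the classifiers are trained on audit archives rather than hand-set.

```c
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical per-alert features; real classifiers derive many more
 * from the code, the alert, and prior adjudication archives. */
typedef struct {
    const char *id;
    double tools_agreeing;    /* how many analyzers flagged this line */
    double checker_tp_rate;   /* historical true-positive rate of the rule */
    double complexity;        /* e.g., complexity of the enclosing function */
} alert;

/* Toy logistic model with made-up, "pre-trained" weights:
 * confidence in (0, 1) that the alert is a true positive. */
static double confidence_true(const alert *a) {
    double z = -1.5
             + 0.9 * a->tools_agreeing
             + 2.0 * a->checker_tp_rate
             + 0.1 * a->complexity;
    return 1.0 / (1.0 + exp(-z));
}

/* Order alerts from most to least likely true, for audit priority. */
static int by_confidence_desc(const void *x, const void *y) {
    double cx = confidence_true((const alert *)x);
    double cy = confidence_true((const alert *)y);
    return (cx < cy) - (cx > cy);
}

int main(void) {
    alert alerts[] = {
        { "buffer overflow, foo.c:42", 3, 0.8, 12 },
        { "style warning, bar.c:7",    1, 0.1,  3 },
        { "null dereference, baz.c:19", 2, 0.6,  8 },
    };
    size_t n = sizeof alerts / sizeof alerts[0];

    qsort(alerts, n, sizeof alerts[0], by_confidence_desc);
    for (size_t i = 0; i < n; i++)
        printf("%.2f  %s\n", confidence_true(&alerts[i]), alerts[i].id);
    return 0;
}
```

Sorting by descending confidence puts the results most likely to be true defects at the top of the audit queue, so limited analyst time goes to the alerts that matter most.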


“In 2020, we added design and implementation changes to the SEI’s SCAIFE [Source Code Analysis Integrated Framework Environment] and SCALe [Secure Code Analysis Lab] tools to address challenges to using static analysis classifiers in continuous integration environments,” explained Lori Flynn, principal investigator of this research. For example, the SCAIFE and SCALe systems have been modified to enable external tools to automatically trigger evaluations and generate reports. In the coming year, the team will have SCAIFE integrated and running within a continuous integration environment.

The Secure Coding team is exploring how to incorporate both methods into the software development lifecycle to improve software assurance. With a large body of mission-critical software, including many legacy systems, the DoD stands to benefit from more accurate, thorough, and automated ways of addressing vulnerabilities. Schiela explained, “Combining automated code repair and machine learning classifiers to separate static analysis results into categories that support increased levels of automation promises to significantly improve the efficiency of fielding assured software.”

To learn more about the SEI’s work in secure development, visit sei.cmu.edu/our-work/secure-development.