
Five Lies About Software Security

  • Writer: Susan Sons
  • Aug 19
  • 4 min read

The man who warned us about lies, damned lies, and statistics never had to deal with software.

Rome could have fallen faster if it had been written in Python.

Before I had a cybersecurity title, I worked as a software engineer and architect.  Repeatedly turning around software security crises is what drew me to cybersecurity as a career.  Sadly, the same misconceptions I was working to fix in the early 2010s are still undercutting the work of otherwise promising development teams.


Have a think about each of these, and perhaps you can prevent a development team you care about from becoming a cautionary tale.

Lie #1: Security isn't the dev team's job.

Developers often find themselves without specific training in software security, yet they make countless decisions that affect the security of a product or organization.  It's easy to take that lack of training to mean that someone else must be in charge of security.  I've heard dev teams claim that the networking team, the systems administrators, or the security team is responsible for security.  Each of those groups has a part to play, but the development team is best positioned to prevent and remediate software vulnerabilities.

Lie #2: More security means slower development.

Many developers and project managers worry that software security will be a drag on the team's ability to deliver bugfixes and features.  However, many of the practices that make development more secure are also development accelerators, such as:


  • Build automation enables faster and more consistent building and testing.

  • Organizing code into smaller, more comprehensible chunks makes human error less likely and easier to track down, and lets developers move faster without breaking things.

  • Comprehensive automated testing allows developers to verify code changes faster.

  • Good documentation helps developers patch bugs faster and onboard new team members more quickly.

  • Safer programming languages and careful dependency selection make development both faster and safer, with less time wasted patching vulnerabilities in poorly-managed dependencies.

  • Architecting software with limited, carefully-specified interfaces lightens the workload for developers and reduces avenues of attack (see the sketch below).


Generally speaking, good all-around development practices are a great start in software security.
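
To make that last bullet concrete, here is a minimal Python sketch of a narrow interface.  The module layout and names are hypothetical, not taken from any particular codebase; the point is that callers get one validated entry point instead of reaching into internals.

```python
# accounts.py (hypothetical): one narrow, validated entry point.
__all__ = ["create_user"]  # the module's entire public surface

def create_user(username: str, email: str) -> int:
    """Create a user and return its ID.  Inputs are validated here,
    once, at the trust boundary."""
    if not username.isidentifier():
        raise ValueError("invalid username")
    if "@" not in email:
        raise ValueError("invalid email")
    return _insert_user(username, email)

def _insert_user(username: str, email: str) -> int:
    # Private helper: the leading underscore and __all__ signal that
    # nothing outside this module should call this directly.
    return 42  # placeholder ID; real storage logic would go here
```

A narrow surface like this is easier to review, easier to test, and harder to misuse, which is exactly why it serves both speed and security.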

Lie #3: Good code is self-documenting.

Comprehensible code--meaning code that an average programmer can understand at a glance--is important, but it doesn't obviate the need for documentation, either in the form of code comments or as a separate reference.  The problem with the idea of "self-documenting" code is that it shows what the code is doing, not why it does that or what the developer's intentions were.


Many security vulnerabilities are created by two or more good programmers working with incompatible assumptions.  Alice believes that this function only receives input from trusted sources.  Bob doesn't know that, and sends the function untrusted user inputs.  Something becomes abusable because it isn't adequately sanitized.  Carol believes that her software will only run on case-sensitive operating systems, but Dave runs it on a case-insensitive OS.  These things happen every day, and are made more likely by gaps in documentation.
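
One cheap defense is writing the assumption down where the next developer can't miss it.  Here is a minimal sketch of the Alice-and-Bob case in Python; the function and its docstring wording are hypothetical illustrations, not a prescribed pattern:

```python
import html

def render_banner(display_name: str) -> str:
    """Render a profile banner heading.

    SECURITY ASSUMPTION: display_name may be untrusted user input.
    It is HTML-escaped here, at the point of use; callers must not
    embed pre-rendered markup in it.
    """
    return f"<h1>{html.escape(display_name)}</h1>"
```

The escape call shows what the code does; the docstring records why, so Bob learns Alice's assumption before he violates it.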

Lie #4: Unit testing will prevent all security problems.

Good unit testing will prevent some security problems, but not all.  That's because, as in the examples above of developers with differing assumptions, a unit test doesn't exercise how disparate pieces of code behave when used together.  This is where automated functional testing comes in: tests need to cover not just discrete units of code, but the larger functions the code performs in combination.  A great example is a suite of tests that sends malicious or invalid inputs to APIs and measures the software's behavior in response.
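
Here is a hedged sketch of what such a suite might look like, using Python and pytest.  The handle_search() function and its error behavior are hypothetical stand-ins for a real API:

```python
import pytest

def handle_search(query: str) -> list[str]:
    """Hypothetical API entry point: returns matching record names."""
    if len(query) > 256:
        raise ValueError("invalid query")
    # ...a real lookup would happen here...
    return []

MALICIOUS_INPUTS = [
    "'; DROP TABLE users; --",    # SQL injection attempt
    "<script>alert(1)</script>",  # script injection attempt
    "A" * 10_000,                 # oversized input
    "\x00\x00",                   # embedded NUL bytes
]

@pytest.mark.parametrize("payload", MALICIOUS_INPUTS)
def test_api_handles_hostile_input(payload):
    # The API should reject the input cleanly or return a normal,
    # empty result; it should never crash or echo the payload back.
    try:
        result = handle_search(payload)
    except ValueError:
        return  # a clean, intentional rejection is acceptable
    assert payload not in result
```

Tests like these exercise the seams between units, which is where mismatched assumptions hide.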

Lie #5: Security is just a matter of developer skill.

Developers never set out to create vulnerabilities, yet even the smartest developers can do so.  That's because a great deal goes into code quality beyond any one developer's talent.  Some development teams operate in ways that make security vulnerabilities more likely, while others provide safety nets that reduce human error.  Here are some examples of each:


  • High-turnover teams tend to produce more errors than teams that keep staff around for long periods.

  • Under-resourced development teams produce far more errors than well-resourced teams.  Think about the effects we see in other areas of practical knowledge work: the biggest medical mistakes tend to happen when practitioners are sleep deprived, rushed, or working without adequate support.  We programmers are no different.

  • Code organized in comprehensible units--with functions small enough for a developer to hold in working memory--tends to end up with fewer errors than code organized in larger, more complex units.

  • Teams with high-quality automation tend to produce fewer errors, because they can focus on fewer decisions in the course of any code change, trusting the automation to be a "safety net" for certain classes of errors (a minimal gate script is sketched after this list).

  • Teams that use one another for effective checks, such as through adversarial testing, tend to produce fewer errors than teams where one person's work is integrated into a code base without a separate person checking it over or testing it.
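
As one concrete illustration of that automation safety net, here is a hypothetical gate script a team might run in CI or as a pre-commit hook.  The specific tools (ruff, mypy, pytest) are illustrative assumptions, not a prescription:

```python
#!/usr/bin/env python3
# Hypothetical CI/pre-commit gate: fail the change if any check fails.
import subprocess
import sys

CHECKS = [
    ["ruff", "check", "."],  # lint for common error patterns
    ["mypy", "."],           # type-check the codebase
    ["pytest", "--quiet"],   # run the automated test suite
]

def main() -> int:
    for cmd in CHECKS:
        print(f"running: {' '.join(cmd)}")
        if subprocess.run(cmd).returncode != 0:
            print(f"FAILED: {' '.join(cmd)}", file=sys.stderr)
            return 1  # block the change; the net caught something
    return 0

if __name__ == "__main__":
    sys.exit(main())
```

The gate doesn't make anyone smarter; it just catches whole classes of errors before a human has to.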


Managers, project managers, and executives can increase code quality by improving the environment and incentives for development teams... and that includes improving security.

