SolarWinds and Sunburst: How to stop supply-chain attacks in their tracks


Virtually everything is online these days. As many businesses move towards a SaaS (“software-as-a-service”) business model, the risk and potential consequences of “getting hacked” grow by the day. One major instance of this was the recent SolarWinds hack, in which a major US tech company had its flagship product used to compromise as many as 18,000 companies and government agencies. How do you avoid this happening to your business?

Before we start, it’s important to have a bit of background on what SolarWinds Orion is and how the hack actually worked. SolarWinds Orion is a monitoring platform that tries to be a “one-stop shop” for all of your monitoring, logging, and system administration in general. Installations have multiple components, but essentially there is a central Orion server that collects and reports on metrics, plus agents installed on servers that feed metrics back to it. Both the agents and the central monitoring software were compromised.

There were actually two hacks: “SUNBURST” and “SUPERNOVA”. SUNBURST is the more interesting of the two and is what we’re going to focus on here – an attacker was able to modify SolarWinds’ source code without anyone at the company noticing.

Every time a company installed SolarWinds’ products from the official site, they were actually installing a backdoor into their servers. Making matters worse, this went on for months, and SolarWinds has yet to determine the full extent of how many products were affected. The severity of the hack was so bad that the US government mandated taking any SolarWinds servers, as well as any servers monitored by SolarWinds, permanently offline. If your business had been using SolarWinds Orion to monitor your systems, you’d essentially need to power off and treat virtually every server you have as compromised.

How to securely publish software

Checksums and reproducible builds

The most obvious way to verify that software is what it's supposed to be is to verify that the artifact you distribute matches its intended contents. The most common way of doing this is via checksums. Checksum tools compute a unique identifier (typically a string of characters) for a given file; if two files have the same checksum, they are, for all practical purposes, identical. This is especially useful for distributing software: most software packages are distributed as a single artifact (for instance an installer, native Linux package, or zipfile), resulting in a single checksum to check when distributing or downloading a package.

To make it easy for an end user (or your own company) to verify that your software is what you say it is, you can take a SHA256 checksum of each file you distribute and provide these checksums to your end users. Take the checksum of a file (or multiple files) using sha256sum and save the results to a file, for instance:

sha256sum filename.zip > checksums.txt 

When you distribute filename.zip, distribute checksums.txt alongside it as one of the downloads.

Users can use the checksums in checksums.txt to check that the software has not been corrupted (or replaced with something else entirely, as in the SUNBURST hack). All the user has to do is run sha256sum against the checksum file and they can verify the files have not changed.

Note: the sha256sum command can be different depending on OS – on a Mac, it’s:

shasum -a 256
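Putting this together, a minimal verification sketch from the user's side might look like this (using the filename.zip and checksums.txt names from above):

# verify every file listed in checksums.txt against its recorded checksum
sha256sum -c checksums.txt

# on macOS, the equivalent is:
shasum -a 256 -c checksums.txt

A line reading "filename.zip: OK" means the file matches; anything else means it has been corrupted or tampered with.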

There are a few important things to note here:

We use SHA256 checksums instead of SHA1 or MD5 checksums because SHA1 and MD5 sums are no longer considered cryptographically secure. It’s theoretically possible for an attacker to generate a file that has the same SHA1/MD5 checksum as another file, which would let them substitute a malicious file for the normal one without users noticing.

Checksums are even more worthwhile if your software builds are reproducible: when you build your software artifact, it should have the same checksum every time the build is run, from any machine. This means you cannot include build metadata like the time or the build system's hostname in the final artifact. Reproducible builds matter because anyone who builds the software should arrive at exactly the same artifact (and checksum) that you published. For more information about reproducible builds and how to achieve them, you can read up on them here.
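A quick way to sanity-check this is to run your build twice and compare the results. A rough sketch, assuming a hypothetical ./build.sh script that produces filename.zip:

# first build: record the checksum of the artifact
./build.sh && sha256sum filename.zip > build1.sha256

# second build (ideally from a clean checkout): compare against the first
./build.sh && sha256sum -c build1.sha256

If the second run doesn't print "filename.zip: OK", something non-deterministic (a timestamp, hostname, or random build ID) is leaking into your artifact.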

Digital signatures

Ok, you’ve made your build reproducible and now distribute checksums with your downloads. What prevents an attacker from replacing your build artifact and your checksum files at the same time? After all, if your software is proprietary, your end users wouldn’t be able to tell malicious downloads and checksums from the real ones... they can’t build things from source themselves to double-check!

In this case, you need a way for users to definitively verify that the checksums were provided by you and not a malicious third party. This is done using digital signatures. Digital signatures typically involve signing an artifact with a “secret key” that only you have access to. End users can then verify the signature against your “public key”, which they already know: either you’ve published it on your website, pushed it to a public keyserver, or, in the case of Linux distributions, the key may already be installed on the end user's machine. By verifying the signature of a file against your public key, users know that the file definitively came from you – after all, you’re the only one with the secret key used for signing, right?

The signing and verification process varies depending on your software publishing workflow, but generally it looks like this (a command-line sketch follows the list):

  • You generate a GPG key and publish the public key so that it is accessible for your users to obtain.

  • When you generate an artifact, you sign the resulting checksums.txt file with your GPG key. (Though it’s possible to sign some build artifacts like .deb and .rpm packages, sometimes this causes installers not to work or makes the file more difficult to use, so typically people just sign the checksum files).

  • You provide the signed version of the checksums file along with the actual downloads.

  • The user downloads the signed checksum file and artifact, and verifies the signature of the checksum file. If the signatures match, they know it was provided by you.

  • Once the user has verified the checksums file, they can check the other files against the checksums to prove they are what they’re supposed to be.

  • Now that the user has verified the software download is what it was supposed to be, they can safely install it.
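For the GPG-specific parts of the workflow above, the commands look roughly like this (the email address and file names are placeholders, and the exact prompts may vary between GPG versions):

# one-time setup: generate a key pair and export the public key for your users
gpg --full-generate-key
gpg --armor --export you@example.com > publickey.asc

# publisher: sign the checksum file (produces checksums.txt.asc)
gpg --armor --detach-sign checksums.txt

# user: import the public key, verify the signature, then verify the checksums
gpg --import publickey.asc
gpg --verify checksums.txt.asc checksums.txt
sha256sum -c checksums.txt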

Depending on what your workflow looks like, you may need to add steps here and there (for instance: a build server might sign things in place of a developer after verifying that a developer has signed the source code). You can even sign your git commits if you want to ensure that a malicious user can’t impersonate you and make changes to your source code repo.

Does this sound super complicated? It is. Typically, vendors like Microsoft, Apple, or your Linux distribution of choice automate this process on behalf of users when installing packages. When you install something from Apple’s App Store or using your Linux package manager, the signature and checksum verification is typically done for you behind the scenes by the app store or package manager.
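If you’re curious what this looks like under the hood, most package formats also let you check signatures by hand. On an RPM-based distribution, for example (the package name is a placeholder):

# verify the digests and GPG signature embedded in a downloaded package
rpm --checksig some-package.rpm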


If you’d like to start signing your Git commits, GitHub has great documentation on how to do so here.
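As a rough sketch, the setup usually boils down to a few Git configuration options (YOUR_KEY_ID is a placeholder for your GPG key ID, and your Git host needs a copy of your public key to mark commits as verified):

# tell Git which GPG key to use and sign all commits by default
git config --global user.signingkey YOUR_KEY_ID
git config --global commit.gpgsign true

# or sign a single commit explicitly
git commit -S -m "Make a signed change"

# inspect commit signatures later
git log --show-signature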

If you’d like to start signing your build artifacts, there is a guide by Red Hat here.


In the case of the SolarWinds’ SUNBURST attack, the attackers were able to modify the Orion source code. This points to several possible scenarios: the build system itself was compromised so that it injected the attackers’ modifications when building the software, the attackers were able to insert their changes into SolarWinds’ version control system without anyone noticing, or an employee of the company added the malicious code in a legitimate manner. Whatever the scenario, it highlights just how hard it is to secure the entire software build lifecycle from start to finish.

Protecting Docker images

Ok, those were some great instructions on how to ensure a file is what it says it is and maintain an unbroken chain of trust from source code to the final built artifact. But how do I get this process working for a SaaS web app that uses Docker? How do I enforce that my Kubernetes cluster or servers running Docker only use signed images?

As with normal files, Docker images can be signed as well – follow this guide to set up signatures for Docker images and enforce that only signed images can be run on servers.
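The usual mechanism here is Docker Content Trust. As a rough sketch, with registry.example.com/myapp standing in for your own image:

# opt in to Docker Content Trust for this shell session
export DOCKER_CONTENT_TRUST=1

# sign and push the image (Docker prompts you to set up signing keys the first time)
docker trust sign registry.example.com/myapp:1.0

# with DOCKER_CONTENT_TRUST=1 set, pulling an unsigned image will fail
docker pull registry.example.com/someone-elses-unsigned-image:latest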

Although Docker image signatures aren’t natively supported by Kubernetes (Kubernetes will happily ignore your signatures and run anything you give it), you can add signature checks to Kubernetes using the Connaisseur admission controller, which requires that images have a valid signature before they can be launched. Of course, this has its drawbacks: many public images for tools like Nginx-ingress or fluentd will not be signed, so you’ll need to create signed copies and push them to your own private Docker repos.
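Connaisseur is typically installed as a Helm chart; something along these lines (the chart repository URL below is an assumption from memory of the project's docs, so check their current instructions):

# add the Connaisseur chart repository (URL is an assumption, see the project docs)
helm repo add connaisseur https://sse-secure-systems.github.io/connaisseur/charts

# install the admission controller into its own namespace
helm install connaisseur connaisseur/connaisseur --namespace connaisseur --create-namespace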

A simpler solution for most admins may be to use immutable image tags. When you use immutable image tags, an attacker will not be able to overwrite an image or Docker tag in your repositories. Though this can sometimes be an inconvenience (you can’t re-tag images using tags like “latest” or “production”), it prevents images from being modified or overwritten once pushed.
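How you enable this depends on your registry. As one example, AWS ECR exposes tag immutability as a repository setting (my-app is a placeholder repository name):

# make an existing ECR repository reject any attempt to overwrite a tag
aws ecr put-image-tag-mutability --repository-name my-app --image-tag-mutability IMMUTABLE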

Takeaways

Although it’s a lot of extra work (nothing comes for free), following the instructions in this post makes it significantly more difficult for an attacker to replace your software with their own malicious copies. Because only you have the signing keys, only you will be able to publish new versions of your software. Keep in mind, however, that all bets are off if an attacker steals your keys (so keep them safe!). Technical safeguards such as the ones described here also need to be supplemented by due diligence on the human resources side: what’s to prevent an employee from adding malicious code in a “legitimate” manner? It's up to you to hire the right people and enforce that all code changes follow the proper review process.

Likewise, as an end user, it’s on you to verify that software is what it says it is. Unless you’re installing software from the App Store or using your system package manager, GPG signatures and checksums won’t check themselves. Always verify checksums and signatures if they’ve been provided by a developer, and if they don’t match, don’t install the software! Never run

curl https://some.url.here | sudo bash 

as you have no way to verify or check what you’re installing. There are lots of tools to help make this process easier: for example, you can have infrastructure provisioning tools like Ansible verify download checksums against “known good” checksums before installing software, and Docker and Kubernetes can be configured to check signatures before running an image.
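If a vendor doesn't offer a proper package, a safer manual pattern is to download the artifact, its checksums, and the signature separately and verify everything before running anything. A rough sketch, reusing the hypothetical file names from earlier (the URLs are placeholders):

# download the artifact, the checksum file, and its detached signature
curl -LO https://example.com/downloads/filename.zip
curl -LO https://example.com/downloads/checksums.txt
curl -LO https://example.com/downloads/checksums.txt.asc

# verify the signature on the checksums, then the checksums on the artifact
gpg --verify checksums.txt.asc checksums.txt
sha256sum -c checksums.txt

# only install if both checks pass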

Checksums and signatures aren’t a be-all and end-all solution for cybersecurity. However, if you use them properly, supply-chain attacks like SUNBURST against the software you distribute become difficult, if not impossible. Many organizations don’t bother with supply-chain security at all; the SolarWinds hack is a wake-up call that this type of security can no longer be ignored.