Programmable Logic Controllers (PLCs) are often seen as one of the major reasons Industrial Control Systems are insecure. Even today, these devices are indeed riddled with critical vulnerabilities. Even worse, they suffer from by-design vulnerabilities, also known as forever-days.
While securing ICS is definitely possible even with very insecure devices, this is a legitimate worry, and manufacturers need to step up their product security game.
This year, I attended the S4 conference in Miami South Beach for the second time. It is a great event, one of the very few cybersecurity events focused on ICS. In this post I will try to cover some of the talks I attended, as well as share the main takeaways I brought home. Most of the slides are unavailable as I write this, so a lot is from memory and Twitter searches; please excuse any imprecision. Also, this is just a write-up of "my" S4, so there are plenty of things I will not cover (OnRamp, sponsor stage, most of the main stage, the Pwn2Own competition, 3rd-floor activities, cabana sessions…).
14 Hours and an Electric Grid (Jason Larsen)
Target falls in the 1st few hours #s4x20 Unlock firmware ez, magic number, reverse image, tcp stack, messages, and eventually function to access all of memory including configuration + password. pic.twitter.com/DK42PWbhwZ
The goal of this talk was to challenge some of our assumptions regarding the offensive capabilities of our adversaries. To do so, Jason told the story of the first 14 hours of a security assessment he recently performed. The target of this assessment was a Moxa serial-to-Ethernet converter, very similar to the ones that were attacked in Ukraine in 2015. In a nutshell, here are the steps Jason took in those 14 hours: • Perform a port scan to identify exposed services • Capture and analyze legitimate packets during a firmware update • Analyze the firmware • Modify the firmware to incorporate additional features, like altering the values sent to/received from the controllers connected over serial.
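The last step above (modifying the firmware) usually also means defeating an integrity check before the device will accept the image. The Moxa format is not detailed here, so the layout below is purely hypothetical: a minimal sketch of patching bytes in an image and fixing up a trailing additive checksum.

```python
import struct

def patch_firmware(image: bytes, offset: int, patch: bytes) -> bytes:
    """Apply a byte patch to a firmware image, then recompute a trailing
    32-bit additive checksum (a common, though hypothetical here, layout)."""
    body = bytearray(image[:-4])          # last 4 bytes: stored checksum
    body[offset:offset + len(patch)] = patch
    checksum = sum(body) & 0xFFFFFFFF     # simple additive checksum
    return bytes(body) + struct.pack("<I", checksum)

# Toy image: 8 payload bytes followed by a little-endian additive checksum.
payload = bytes([1, 2, 3, 4, 5, 6, 7, 8])
image = payload + struct.pack("<I", sum(payload))
patched = patch_firmware(image, 0, b"\xff")
# The checksum still validates after the patch.
assert struct.unpack("<I", patched[-4:])[0] == sum(patched[:-4]) & 0xFFFFFFFF
```

Real devices may use CRCs or (ideally) signatures instead; the point is only that step 4 is an image-rewriting exercise once the check is understood.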
While it's certainly impressive, the main takeaway is that the 2015 Ukraine attackers only needed to get to step 2 and send an invalid firmware that would crash the device or render it unusable.
Key takeaway: real-world attacks on ICS are much simpler than most of the research published by the security community, and we tend to overestimate the effort required to perform such attacks.
Exploitable Vulnerabilities Hidden Deep in OT (Mark Carrigan)
In this presentation, the focus shifted to "configuration-related" vulnerabilities, as opposed to "patchable" vulnerabilities. Industrial Control Systems insecurity does not stop at the network or system level. Very common misconfigurations, or abuse of legitimate configuration, can be leveraged by attackers to propagate and cause havoc in the network. As the slides are not published, I cannot go much further into the details of this talk.
Key takeaway: after securing the network and the systems, there is still work to do at the application layer, and this will be even harder to fix.
Critical Infrastructure As Code (Matthew Backes)
Infrastructure as code is a concept in which the whole infrastructure is documented and version-controlled, for example using a tool like Terraform. In this talk, Matthew detailed the experiment performed by the MIT Lincoln Laboratory to build Critical-Infrastructure-as-Code for a small electrical substation and generation control system. They started with Ansible, and were able to create an intermediate layer to interface with several types of devices: • HTTP scraping for devices that only have a web interface for configuration • SSH for network devices • A vendor-provided DLL for the HMI
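The intermediate layer described above boils down to one common interface with one adapter per device family. This is a hedged sketch of that idea (class and field names are mine, not from the MIT Lincoln Laboratory project), with an in-memory dict standing in for the actual HTTP scraping:

```python
from abc import ABC, abstractmethod

class DeviceAdapter(ABC):
    """Common interface the config-as-code tool talks to."""
    @abstractmethod
    def read_config(self) -> dict: ...
    @abstractmethod
    def apply_config(self, desired: dict) -> list:
        """Converge the device to `desired`; return the keys that changed."""

class WebScrapeAdapter(DeviceAdapter):
    """For devices that only expose a configuration web page."""
    def __init__(self, current: dict):
        self._state = dict(current)   # stands in for scraping the web UI
    def read_config(self):
        return dict(self._state)
    def apply_config(self, desired):
        changed = [k for k, v in desired.items() if self._state.get(k) != v]
        self._state.update(desired)   # a real adapter would POST form fields
        return changed

# Idempotent convergence, the core Ansible idea: re-applying changes nothing.
relay = WebScrapeAdapter({"ntp": "10.0.0.1", "syslog": ""})
assert relay.apply_config({"ntp": "10.0.0.1", "syslog": "10.0.0.9"}) == ["syslog"]
assert relay.apply_config({"ntp": "10.0.0.1", "syslog": "10.0.0.9"}) == []
```

The SSH and vendor-DLL cases would be further subclasses behind the same interface; the tool above them never needs to know which transport is in use.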
Key takeaway: while infrastructure-as-code is not ICS-specific, it is interesting to see that this project was able to use it in a realistic ICS environment by developing the right interfaces. Solid configuration management is a key asset in cybersecurity defense. We should push vendors to provide standard configuration APIs to help with this approach.
Running A Factory: A Realistic, High Interaction ICS Honeynet (Stephen Hilt)
Stephen Hilt and his team created a very realistic honeypot environment to detect what kind of actions attackers would perform once they got access. For example, they left an unauthenticated VNC service open on one of the exposed workstations (a situation that unfortunately still exists on the Internet today). They even created a fake company, website, and personnel to make it look more real. The PDF is very interesting for people who want to perform similar activities.
Key takeaway: while significant effort was put into creating a very realistic ICS honeypot, the attacks observed ranged from classic (ransomware) to absurd, and nothing advanced was witnessed. Advanced ICS attacks only make sense if you have a goal: destabilization, etc. Even a very realistic honeypot didn't catch anything noteworthy.
What To Patch When … Automating and Replacing the CVSS (Art Manion)
This is something I was looking forward to! Last year, on the same stage, a new methodology for vulnerability patch triage, called TEMSL, was presented. Unfortunately, few details were provided and no further resources were available at the time.
This year, a paper was published by the Software Engineering Institute at Carnegie Mellon detailing this new approach.
The main difference from CVSS is that this methodology (SSVC, for Stakeholder-Specific Vulnerability Categorization) does not grade vulnerabilities but works through a decision tree:
One novelty is that the methodology defines different outcomes for the decision tree, based on whether you are the developer of the application affected by a new vulnerability, or a user of the product.
Four outcomes are defined:
Considering the difficulty of applying security patches in ICS environments, it is really interesting to limit the possible actions. I actually preferred last year's version, which only had NOW, NEXT, NEVER, but this version is probably more applicable to mature environments.
Only four criteria are used to navigate the decision tree and reach a decision for the patch: • Exploitation: is the vulnerability actively exploited? • Exposure: what is the level of network exposure (Internet, internal network…) of the affected devices? • Mission impact: how bad would exploitation of the vulnerability be? • Safety: whether a successful exploitation of the vulnerability could have an impact on safety
Below is an extract of the decision tree if there is no known exploit at the moment:
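One appealing property of this approach is that a decision tree is trivial to encode and automate. The sketch below is purely illustrative: the branch logic and outcome labels are mine, not the tree from the SSVC paper.

```python
def ssvc_like_priority(exploited: bool, exposure: str,
                       mission_impact: str, safety_impact: bool) -> str:
    """Walk an illustrative patch-triage decision tree (NOT the official
    SSVC tree; branches here are invented for demonstration).
    exposure: 'internet' | 'internal' | 'isolated'
    mission_impact: 'high' | 'low'
    """
    if exploited and (safety_impact or mission_impact == "high"):
        return "immediate"
    if exploited or (exposure == "internet" and mission_impact == "high"):
        return "out-of-cycle"
    if exposure == "isolated" and mission_impact == "low" and not safety_impact:
        return "defer"
    return "scheduled"

# An exploited, safety-relevant bug jumps the queue; an unexploited bug on an
# isolated, low-impact device can wait.
assert ssvc_like_priority(True, "internet", "high", True) == "immediate"
assert ssvc_like_priority(False, "isolated", "low", False) == "defer"
```

Because the inputs are a handful of enumerable facts rather than a 0-10 score, the whole triage can be unit-tested and argued about branch by branch.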
More details can be found directly in the paper, which I strongly encourage you to read, and maybe give the methodology a try.
Key takeaway: there is now a paper, so no more excuses not to test this promising methodology.
Special Access Features on PLC’s (Tobias Scharnowski)
In this talk, a Siemens S7-1200 PLC was analyzed from a hardware point of view.
A small chip containing the bootloader was identified and, by reversing its code, the authors were able to identify a "special access feature" giving access to privileged functions like dumping and loading arbitrary firmware, and ultimately to play Tic-Tac-Toe on the device.
Key takeaway: securing the boot process of PLCs will limit the risk of a supply-chain attack replacing the valid firmware or installing a rootkit, but it will also reduce security analysts' ability to perform deep analysis of the firmware.
PLC Secure Coding Practices (and the consequences of not following these practices) (Jake Brodsky)
This talk was very interesting as it focused on an uncommon topic: secure development for PLCs.
Needed to get Jake Brodsky and his presentation title on one photo in order to remind me and everyone else of a great talk giving engineers actionable info on how to program PLCs with security in mind because “no one teaches that in school”. #s4x20pic.twitter.com/92k8FmWmo2
Funny thing: some IT secure coding best practices also apply to PLCs, like VALIDATING THE INPUTS!
The talk also dived a bit into cyber-physical attacks, mentioning the example of a motor/actuator being restarted or moved too frequently, which could cause physical impact in a pipe. Unfortunately, the talk was fast-paced and I wasn't able to remember everything in detail; hopefully the slides will be released without waiting for the video.
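To make the two ideas above concrete (input validation, and protecting equipment from contradictory or too-frequent commands), here is a hedged sketch in Python rather than ladder logic; the range limits and interval are made-up values, not from the talk.

```python
def clamp_setpoint(raw: int, lo: int = 0, hi: int = 1000) -> int:
    """Never trust a value from the SCADA/HMI blindly: clamp it into the
    engineering range before the logic uses it."""
    return max(lo, min(hi, raw))

class MotorLogic:
    MIN_RESTART_INTERVAL = 30  # seconds; protects the physical process

    def __init__(self):
        self.running = False
        self.last_start = -10**9  # "long ago"

    def command(self, start: bool, stop: bool, now: int) -> str:
        if start and stop:         # contradictory inputs: reject both
            return "rejected: start+stop"
        if stop:
            self.running = False
            return "stopped"
        if start:
            if now - self.last_start < self.MIN_RESTART_INTERVAL:
                return "rejected: restarting too fast"
            self.running, self.last_start = True, now
            return "started"
        return "no-op"

m = MotorLogic()
assert m.command(start=True, stop=True, now=0) == "rejected: start+stop"
assert m.command(start=True, stop=False, now=0) == "started"
assert m.command(start=True, stop=False, now=5) == "rejected: restarting too fast"
```

The same checks translate naturally to interlock rungs and timer blocks in PLC code: bound every external value, and never let the network command the process faster than the physics allows.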
In the meantime, we have limericks to study:
Validate your inputs, I declare
Start and stop together will snare
The problems we hate
To fix when it's late
Check them before you make err
Key takeaway: I need to dig into PLC logic vulnerabilities and try to add some examples of exploitation to the ICS setups I use in my trainings.
Using Rust, a Secure Programming Language, in Embedded & Safety Critical Systems (Adam Crain)
We still develop using the same toolset as 60 years ago. Newer technologies exist and allow developers to produce more secure code. Adam presented a few of the security-related advantages of Rust, which I'm not going to detail (mostly because I'm a poor developer and didn't really grasp everything).
Key takeaway: Rust is here, secure and fast, and there is now a Modbus library developed by Adam (with his colleague, a beer-pong tournament winner), so use it!
Tuning ICS Security Alerts: An Alarm Management Approach (Chris Sistrunk)
This talk was about not reinventing the wheel: engineers have been designing and tweaking alarm systems for ICS for a long time, and OT security monitoring faces similar challenges: how to make sure critical alerts are handled, how to prevent alert fatigue…
A high-level methodology was given throughout the talk:
Know your systems
Define your alert philosophy and start small
Tune the alerts & reduce the noise
Create playbooks and run them in crisis exercises
Kudos to Chris for publishing the slides immediately!
An interesting discussion emerged during the Q&A regarding the relationship between SOC operators and ICS operators: how do you provide valuable information to the ICS operator without turning them into a level-1 SOC analyst? I understand that operators shouldn't be in charge of cybersecurity monitoring, but I also think they should somehow be informed that the SCADA/BPCS may be untrustworthy when suspicious activity is ongoing.
Key takeaway: don't reinvent the wheel; start looking at alarm management standards (ISA-18.2 & EEMUA 191) to improve your ICS cybersecurity alerting.
Designing A More Secure ICS Protocol Chip (Andrew Zonenberg)
In this talk, Andrew detailed why and how he created a security chip dedicated to securing ICS protocols. To keep things as simple as possible, he used the SSP21 protocol and decided to implement it in hardware, using an FPGA. This approach reduces the overall risk, especially the risk of a vulnerability in a software SSP21 implementation being exploited to compromise the device. With a pure FPGA implementation, there is no memory to corrupt and no means of persistence.
Key takeaway: there is a protocol called SSP21, and a hardware implementation integrated directly into PLC hardware could allow for an easier and far less vulnerable implementation.
Unsolicited Response is a fun segment where attendees can rant about a specific topic for 5 minutes. I'll only mention two of them that really struck me:
Ron Fabela: urging us to stop thinking we're the heroes, and rather to act as guides for our clients
Selena Larson: urging us to be more inclusive in this field, whether toward non-technical people or different genders and races.
I'm not even detailing Reid's rant; he apparently totally lost it 😉 [probably out of fear of seeing Rust used more widely in PLCs]
My impression right at the end of S4 was very similar to last year's: mixed feelings. It is a great event, but I was disappointed by, or not really interested in, a large part of the content. I spent most of my time on Stage 2, which offered content closer to my expectations.
After writing this post, I realize that the content I liked, even if it's only 50% of the program, was worth the travel (and the ticket cost). My favorite talks were definitely the ones that will let me dig deeper in the coming months: PLC programming best practices, ICS security alert management, and SSVC to replace CVSS.
Also, much of S4's value comes from the people who attend. Last year, I was able to meet for the first time some very talented individuals whom I had followed on Twitter and admired for years (and still do). BEER-ISAC meetups also allowed me to meet a lot of people.
However, I feel S4 capitalizes too much on its attendees, on the fact that “this is the conference to go” for ICS cybersecurity. As the conference is growing, perhaps having a diverse program committee would make it feel less like a one man show.