Patching Best Practices – Why speed in IT Management matters

25 November 2020, Sean Herbert

In modern life and technology, it’s all about speed. Next-day deliveries, next-gen fiber connections, Netflix streaming; we want it all, and we want it as quickly as possible! Sure, life is more of a strain without these things, but they are hardly showstoppers to our everyday existence.

Our work life, however, is where time is money and delay is money wasted. Nowhere is this more pertinent than in the IT department, whose skills and proficiency are relied upon to keep systems working at their fullest.

IT automation has been a game changer, of course, but what about systems that still require a level of manual intervention? What about the “need for speed” in IT security, specifically the response time between the identification of a vulnerability and its sometimes long-winded remediation? When best practice demands real hands-on work, how can we ensure those hands are working as efficiently as possible?

There is no doubt that the worldwide media coverage of WannaCry in 2017 helped IT departments build a better case for the importance of patching. It is still the IT topic I talk about most with potential customers. What is clear is that practices and systems often differ from company to company, each with their own positives and negatives, but some points are universal:

  1. Understand your environment granularly in order to identify vulnerabilities on your endpoints.
  2. Utilize dedicated test machines that accurately represent your environment to test new patches before releasing them to live endpoints.
  3. Once tested, deploy to the live environment with minimum delay.

Now this may seem an oversimplification of the task and frankly, it is. There will be many factors to consider during these steps, such as reacting to critical threats or taking steps to mitigate potential breaches where no patch is available, but as a foundation to build from, these three pillars should be the first you erect. The question at this stage is: what materials do you use to build the rest of the house? And if you are in a race against hackers trying to bring the house down around you, how do you make it secure enough to withstand attacks?

I’ve compiled some points, with an emphasis on speed and efficiency, that can complement your patching strategy; I hope they help when you evaluate the processes you have in place.

1. Staff & Tools

IT teams know the importance of patch management. They also know it is a laborious process without an efficient IT management tool. Exploring patching tools and IT automation shouldn’t be seen as a replacement for that work; it should be seen as something that empowers the team and allows more freedom. Dispensing with the mundane, repetitive and arduous task of deploying patches means your team is freed up to resolve larger issues much more quickly.

Automation begets innovation! A specific area to consider here is the provision in place for patching third-party applications, such as Adobe and Java, which can make up around 75% of the software environment in today’s workplace.
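As a rough illustration of where you might start, here is a minimal sketch that lists third-party applications with pending updates on a Windows endpoint. It assumes the winget command-line tool is installed and on the PATH; the output parsing is purely illustrative and not tied to any particular patching product.

```python
# Minimal sketch: list third-party applications with pending updates on a
# Windows endpoint. Assumes the winget CLI is installed and on the PATH;
# the column parsing below is illustrative and can vary by locale/version.
import subprocess

def pending_third_party_updates() -> list[str]:
    """Return the package names winget reports as upgradable."""
    result = subprocess.run(
        ["winget", "upgrade", "--include-unknown"],
        capture_output=True, text=True,
    )
    packages = []
    for line in result.stdout.splitlines():
        # Skip the header and separator rows; keep the name column only.
        if line.strip() and not line.startswith(("Name", "-")):
            packages.append(line.split("  ")[0].strip())
    return packages

if __name__ == "__main__":
    for name in pending_third_party_updates():
        print(f"Update available: {name}")
```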

2. Vulnerability Management

Accurate reporting of vulnerabilities without a good tool is a major hurdle in the race against malware. If you only become aware of a vulnerability once its remediation has been released, it could already be too late. Understanding which Common Vulnerabilities and Exposures (CVEs) have been identified and how they relate to your environment is incredibly important, and it allows IT teams to take remedial action where no patch is available, such as rolling back to an earlier version or uninstalling the application completely.
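To make that correlation concrete, the sketch below queries the public NVD CVE API for recent CVEs that mention a product from your inventory. The endpoint and field names reflect the NVD CVE API 2.0 as publicly documented; treat the query parameters as an assumption and verify them against the current schema before relying on them.

```python
# Minimal sketch: look up CVEs that mention a product from your inventory
# using the public NVD CVE API 2.0. Field names reflect the documented
# schema at the time of writing; verify before relying on them.
import requests

NVD_URL = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def cves_for_product(keyword: str, limit: int = 20) -> list[str]:
    """Return CVE IDs whose descriptions mention the given product keyword."""
    response = requests.get(
        NVD_URL,
        params={"keywordSearch": keyword, "resultsPerPage": limit},
        timeout=30,
    )
    response.raise_for_status()
    data = response.json()
    return [item["cve"]["id"] for item in data.get("vulnerabilities", [])]

if __name__ == "__main__":
    # Cross-reference against your own software inventory, e.g. a PDF reader.
    for cve_id in cves_for_product("Adobe Acrobat Reader"):
        print(cve_id)
```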

3. Initial Reaction Time

The vast majority of those affected by WannaCry were only susceptible to the malware because they had failed to deploy a two-month-old critical Microsoft patch. IT leaders should be defining KPIs and reaction-time markers for their team, reviewing them in real time to establish where choke points lie and adapting their practices to ensure they stay optimized.
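One straightforward KPI is the time between a patch becoming available and its deployment to live machines. The sketch below computes a median from a hypothetical patch log; the record layout is an assumption, so feed it from whatever your patch management tool actually exports.

```python
# Minimal sketch: a "time to deploy" KPI computed from a patch log.
# The patch names, dates and record layout are hypothetical placeholders.
from datetime import date
from statistics import median

patch_log = [
    {"patch": "Patch A", "released": date(2020, 10, 13), "deployed": date(2020, 10, 16)},
    {"patch": "Patch B", "released": date(2020, 11, 10), "deployed": date(2020, 11, 18)},
]

def median_days_to_deploy(records: list[dict]) -> float:
    """Median number of days between vendor release and live deployment."""
    return median((r["deployed"] - r["released"]).days for r in records)

if __name__ == "__main__":
    print(f"Median time to deploy: {median_days_to_deploy(patch_log)} days")
```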

4. Testing Time

Processes are equally, if not more, important than the toolset when it comes to patching. Consider how long it takes you and your team to roll out patches. Testing the patches in a test environment is essential to ensure no issues arise in your live environment, but this should be done with minimum delay. A great practice I have seen clients undertake is to daisy-chain jobs: for example, day 1 is the release to the test environment, followed by a staggered deployment of the patches to live departments over the following days, only interrupting the automated process should a problem arise. This is a very effective use of automation in the patching process and saves a lot of time and effort.
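The sketch below outlines such a daisy-chained schedule: a test ring first, then live departments on staggered day offsets, with the chain halting automatically if an already-patched ring reports a problem. The ring names and offsets are hypothetical.

```python
# Minimal sketch of a daisy-chained, staggered rollout: test ring first,
# then live departments on later days, halting if an earlier ring reports
# issues. Ring names and day offsets are hypothetical.
from dataclasses import dataclass

@dataclass
class Ring:
    name: str
    day_offset: int  # days after the patch enters the test ring

ROLLOUT = [
    Ring("Test machines", 0),
    Ring("IT department", 2),
    Ring("Finance", 4),
    Ring("All remaining endpoints", 6),
]

def plan_rollout(issues_in=lambda patched: False) -> None:
    """Walk the rings in order; stop if any already-patched ring reports issues."""
    patched: list[str] = []
    for ring in ROLLOUT:
        if issues_in(patched):
            print(f"Issue reported in {patched} - halting before {ring.name}")
            return
        print(f"Day {ring.day_offset}: deploy to {ring.name}")
        patched.append(ring.name)

if __name__ == "__main__":
    plan_rollout()
```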

5. IT Automation

IT automation helps optimize a great many areas, whether deploying applications and operating systems, scanning for vulnerabilities or enrolling new clients on the network. When it comes to patching, we have to look at this more granularly. Automatic patching is 100% better than no patching at all, but auto-updaters or a completely automated release through a toolset allow no control over, or testing of, the patches. If you push untested patches into a live environment, it’s not a question of if something will go wrong, but when.

Testing the patch beforehand allows you to make an informed decision about what does and does not cause issues. Good patching vendors should be pretesting patches to a degree, but because every IT environment is essentially a unique fingerprint of the business, it is impossible for them to replicate yours, so patch testing should be undertaken before any automated release.

6. Application Whitelisting

This one is less about speeding up and more about giving yourself some breathing space. Application whitelisting has been around for a while, but businesses have in the past been reluctant to use it due to the effort and hours involved in keeping a whitelist up to date. This has changed in the past few years with the introduction of Intelligent Whitelisting, which allows trusted vendors or updaters to be set and automatically includes the desired patches in the whitelist, based on hashes. This means that not only is the effort involved in whitelisting reduced, but because you can lock down your live environment in a known good state and stop any executable outside the whitelist from running, you also have more time to play with between the release and deployment of patches.
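At its core, a hash-based whitelist check is simple: compute the hash of an executable and only allow it to run if that hash is on the approved list. The sketch below shows the idea; real intelligent whitelisting products layer trusted-vendor and updater rules on top of this, and the path and list here are hypothetical.

```python
# Minimal sketch of a hash-based whitelist check: an executable may run only
# if its SHA-256 hash is on the approved list. The list and path below are
# hypothetical; real products add trusted-vendor/updater rules on top.
import hashlib
from pathlib import Path

APPROVED_HASHES: set[str] = {
    # "sha256-of-a-known-good-binary",
}

def sha256_of(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def is_allowed(executable: Path) -> bool:
    """True only if the executable's hash matches a known-good entry."""
    return sha256_of(executable) in APPROVED_HASHES

if __name__ == "__main__":
    candidate = Path("example.exe")  # hypothetical file to check
    if candidate.exists():
        print("allowed" if is_allowed(candidate) else "blocked")
    else:
        print("file not found")
```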

Vulnerabilities are, and will be for the foreseeable future, a part of everyday IT management. New CVEs are identified thick and fast, and will continue to be, so businesses need to keep up. Whether it is WannaCry, Heartbleed, Meltdown, Spectre or any other cyber headline of tomorrow, IT managers, and of course business leaders, need to ensure their company is not the next name added to the long list of victims. Patch, patch efficiently and patch thoroughly; it should be the beating heart of your security process, and the speed at which you can ensure vulnerabilities are dealt with may well be the factor that saves you.
