Comment: IT staff go through bad patches

If patches are applied quickly and cause problems, the IT department will be blamed. But it will also be blamed for delays if it takes time to test patches. Mark Street offers a solution

Imagine that you have become the proud owner of a brand new, very expensive, top-of-the-range company car. After the first few days of driving, the manufacturer tells you that because of a production fault the front bumper could fall off at any time. It sends you a piece of wire to put it right. But when you attach the wire, you discover to your horror that the fix has compromised the car's steering, so you have to put the car up on blocks to sort it out.

You are upset because your expensive car is now in a garage rather than on the open road and you are neglecting your other duties as you devote every spare moment to putting it right. You complain to the manufacturer, but it shirks all responsibility. Worse still, you find out that there is a similar problem with the back bumper.

As you throw up your hands in exasperation, your colleagues tell you it is actually your fault because you should have bought a spare car or at least driven the new one on a private race track until you were sure it was OK.

Clearly, no customer in their right mind would accept such a situation, and yet IT managers do just that on a daily basis as they frantically attempt to patch their flawed systems.

The big problem is that there is simply not enough time to shore up system defences. As confidence in the quality of patches plummets, IT managers are increasingly having to test them on isolated parts of the network before applying them.

It is no surprise that IT departments are so often found at the familiar rock/hard place interface. If systems fail before patches are applied - because they are still being tested - IT staff will face criticism.

On the other hand, if a poorly tested patch is implemented too quickly, any ensuing system failure will again elicit an angry response from users.

Of course IT managers can take precautions to reduce risks - they can improve firewalls and apply virtual patching, for example - but why can't the software vendors provide better products?

In an industry that defines itself by standards, it is ironic that vendors cannot agree a common strategy to fix their own faulty software. The debate over when vulnerability warnings should first be posted to users looks set to run for some time, but the argument that vendors are keeping firms in the dark for their own good is looking increasingly weak. There appears to be no common, effective approach for patch delivery, vulnerability alerts or severity ratings.

The upshot is that IT managers are going to have to throw a lot more staff and technology at security for a long time, creating a huge black hole in their annual budgets.

To take the sting out of criticism from the rest of the business, IT departments should invest more time in briefing chief executives and key stakeholders about the dilemmas, without bamboozling them with technology.

Once the facts have been laid out, ask for their advice on which route carries the lowest risk.

If board members are not too keen to give their opinions, then renew calls for the immediate establishment of a chief risk officer whose main concern should be the security of the firm, rather than cutting costs and arguing over return on investment.
