Often driven more by regulation and legislation than by any real desire to change behaviour, information security policies are seen by author and reader alike as a tick-box exercise. Organisations either forget or don't care that a policy is completely useless if it's not read, and it's far more likely to be read if you write it with the reader in mind. I recently had to review and rewrite the information security policy for the University of Plymouth, and that process informed and updated my thinking on policy, which I've set out below.
Where to start
Many people will start by re-hashing information security policies from their current or previous organisations, or just searching the web. All of those options can inform your approach, but your policies should be driven by risk. In simple terms, if you use computers in your organisation (including smartphones and tablets), and if you store or process data which is valuable to your organisation or relates to living persons (and is thereby protected by law), you are exposed to risks. Once you understand those risks you can identify appropriate technical controls (e.g. backups or endpoint protection) and policy controls (e.g. information security or classification policies).
Understanding what your risks are is a whole different kettle of fish (which if I’m brave enough I’ll blog about later) but let’s assume you’ve understood your risks and determined some behaviour you’d like to require or prohibit. Next, let’s look at the audience.
A policy for each audience
Many organisations have one information security policy. Unfortunately these tend to contain a lot of information which isn’t relevant to the bulk of the readers (i.e. the non-technical service users). This significantly reduces the chances those people will get past the first paragraph, which isn’t a good result. At Plymouth readability was my top priority, so I chose to separate the policies out into one for students, one for staff, and one for IT (which is where all the technical terminology goes).
This approach means you can require different things of different people. In an educational setting that's definitely a good idea, but it could also extend to any organisation where each group requires a different policy approach (e.g. an NGO might want to differentiate between paid staff and volunteers). Having defined the behaviour you want to require or prohibit and the audience/s you're talking to, let's look at how to trim your policy so it's all good content, and no filler.
Content to avoid
There are a few things which tend to ‘bulk out’ policies and make them harder to read, so it’s good to exclude those up front, and understand why they’re not needed.
Documenting technical controls in user-facing policy
If you enforce a minimum character length for passwords in Active Directory, don't put this requirement in your user-facing information security policy, because you don't enforce it through policy, you enforce it through technical controls. User policy is there to mandate or prohibit behaviour which you cannot control via other means.
It's fine to include this information in documentation or guidance for users, but user-facing policy is not technical documentation. That minimum character limit belongs in your IT information security policy, because there you're requiring the technical staff responsible for that area to implement and maintain the control.
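As a purely illustrative sketch of enforcing by control rather than by policy, here's a minimal password-length check in Python. The 12-character minimum and the function name are hypothetical examples of mine, not anything from the Plymouth policies; in practice this rule would live in your directory service (e.g. an Active Directory password policy), not in application code.

```python
# Hypothetical sketch: a technical control enforces the rule at the point
# of use, so the user-facing policy never needs to mention it.
MIN_PASSWORD_LENGTH = 12  # illustrative value, not a recommendation


def is_acceptable_password(password: str) -> bool:
    """Reject passwords shorter than the configured minimum.

    In a real environment this would be enforced by the directory
    service itself; this function only illustrates the principle of
    enforcement by control, not by policy text.
    """
    return len(password) >= MIN_PASSWORD_LENGTH


print(is_acceptable_password("correct horse battery staple"))  # True
print(is_acceptable_password("hunter2"))                       # False
```

The point of the sketch is that no user ever needs to read this rule in a policy document: a too-short password is simply rejected at the moment it's chosen.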
Information Security policy is not guidance

I once heard someone say "Don't put anything in a policy unless you're prepared to discipline or dismiss someone for failure to comply". The point is, if your policy says "Adherence to this policy is mandatory and non-compliance could lead to disciplinary action", what you're doing is setting an expectation of behaviour within your organisation. If the policy isn't complied with by everyone from the top down, and breaches aren't followed up, it has no integrity and is completely ineffective.

Therefore, what goes into policy is what people must or must not do, because these are behaviours which have an impact on information security risk. The language should reflect that, so avoid words like "should" or "could", and make sure you care enough to require people to comply.

Complying with policy creates friction in people's jobs

There is a cost to compliance (transaction overhead, time for training, or even morale), so make sure that cost is matched by a corresponding reduction in risk.
Having defined the risks, audience and some pitfalls, it's now on to language and presentation.
It's absolutely vital that policy clearly communicates what the reader should or should not do, which means the language and terminology should be familiar and well understood by your audience. Don't spend too long agonising over this, but it's well worth getting feedback (as I did) from as many people as possible on the language in your policy. There's a balance to be struck between the 'legalese' approach (sometimes preferred by HR or Legal teams who want to be able to refer to the policy in a tribunal or court of law) and the reader-friendly approach (which may be clearly understood by readers but create actual or perceived ambiguity for HR or Legal professionals).
Ultimately, if you go with legalese you're not communicating clearly: some people won't understand what you require of them, and many will be so turned off they'll either fail to read the policy, or read it but fail to recall the vital information. I'm not sure whether there's any legal precedent on making policies clear for readers, or a defence that a policy could not reasonably be understood, but remember this policy is not there for HR, it's there to manage information security risk!
Part of readability is how you present your policy. Please, as far as possible, avoid Word documents exported to PDF and linked to on your website. They fail most accessibility tests, look ghastly on mobile devices, and are a relic of a bygone age when documents were printed, read and filed. You will probably be constrained by your Content Management System (CMS) and organisational guidelines, but within those constraints try to make sure your policy passes the following basic tests:
- Can it be read on a mobile device (with a display as small as a 4.7″ screen)?
- Can it be read in all the common browsers (Chrome, Firefox and Safari)?
- Does it pass Web Content Accessibility Guidelines (WCAG)?
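To make the spirit of those checks concrete, here's a toy Python sketch of the kind of automated pre-flight check you might run on a policy page before a proper audit. The function name and the three checks are hypothetical illustrations of mine; regex checks like these are no substitute for real WCAG testing tools, or for trying the page in actual browsers and on actual devices.

```python
import re


def basic_page_checks(html: str) -> dict:
    """Toy pre-flight checks on a policy page, before a proper WCAG audit.

    Illustrative only: real accessibility testing needs a dedicated
    tool plus manual review across browsers and devices.
    """
    return {
        # Mobile readability usually requires a viewport meta tag.
        "has_viewport_meta": bool(
            re.search(r'<meta[^>]+name=["\']viewport["\']', html, re.I)
        ),
        # WCAG requires the document language to be declared.
        "declares_language": bool(re.search(r"<html[^>]+lang=", html, re.I)),
        # Every image should carry an alt attribute.
        "all_images_have_alt": not re.search(
            r"<img(?![^>]*\balt=)[^>]*>", html, re.I
        ),
    }


page = (
    '<html lang="en"><head>'
    '<meta name="viewport" content="width=device-width"></head>'
    '<body><img src="policy.png" alt="Quick guide"></body></html>'
)
print(basic_page_checks(page))
```

Running a check like this in your publishing workflow catches the crude failures early, leaving the accessibility audit to focus on the judgement calls.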
I would also suggest you think about some sort of 'Quick Guide' at the top, or as an accompanying article. The format the Digital Marketing team at the University of Plymouth came up with was great: a grid of icons with a snippet of text beneath, which people could scan and probably recall, helping them comply with the most important elements of the policy. However much you do, accept that people will either skim-read your policy or read it and forget it, so anything you can do to embed that information is worth doing.
Review and feedback
Having put all the effort into writing your policy you don't want it to go stale, so it's important that you:
- Review your policy regularly, probably annually, to ensure any changes in the organisation, risks or legislation are reflected in the policy.
- Actively seek feedback on your policy through engagement with customers and colleagues, as individuals or groups, and through your existing stakeholder groups (e.g. committees, working groups etc.).
The regular review should have access to that feedback so it is incorporated along with changes to the organisation, risks and legislation, ensuring the policy continually improves rather than gradually declining in relevance and effectiveness.
As a huge fanboi of Emma W at the NCSC, I'll co-opt one of her statements and say "information security policy that doesn't work for people, doesn't work". If you consider an information security policy to be a checkbox exercise, something only auditors read, you've missed the point. It's supposed to change the behaviour of your service users, customers and colleagues so that they do the right thing and information security risk is reduced. If it's going to change behaviour, the most important aspect is how effectively it communicates; to communicate effectively you need to tailor your policy so it is clear, relevant and pertinent to your users, and ideally easy to recall and comply with.
As ever, I’m always happy to get feedback on this blog and the policies I wrote at Plymouth. I don’t pretend those policies are perfect, nor am I an expert in sociotechnical cyber security, but I think these policies are an improvement on many I’ve seen (including those I’ve written before). If you can help make them (and this blog post) even better, I’d be very grateful!