The Chatham House Rule: Your new collaboration tool?

A novel approach to sharing security data and expertise in an atmosphere of trust

In June 1927, someone had a brilliant idea. Or, at least, that’s when the idea was first codified, at a meeting of the Royal Institute of International Affairs at Chatham House in London. Attendees were free to quote comments made at the meeting, but they weren’t allowed to say who had made them.

This became known as the Chatham House Rule, and the most recent incarnation is defined thus:

"When a meeting, or part thereof, is held under the Chatham House Rule, participants are free to use the information received, but neither the identity nor the affiliation of the speaker(s), nor that of any other participant, may be revealed."

This is brilliantly clever. It allows at least two things:

  • The sharing of information which might be sensitive to a particular entity when associated with that entity, but which is still useful when applied without that attribution;
  • The sharing of views or opinions which, when associated with a particular person or organization, might cause wider issues or problems.

The upshot: A powerful kind of trust

The upshot of this is that if somebody (say, Person A) values the expertise, opinion and experience of another person (say, Person B), then they can share that other person’s views with people who may not know Person B, or whose views on Person B may be biased by their background or associations. This is a form of transitive trust, and situations where transitive trust is made explicit are, in my opinion, to be lauded (such trust relationships are too often implicit, rather than explicit).
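
To make that explicitness concrete, here’s a minimal sketch in Python of what modelling trust as a directed graph might look like. The names and the data structure are invented for illustration, not taken from any real system; the point is simply that a derived, transitive relationship (Person C trusting Person B’s views via Person A) can be represented and checked explicitly rather than left implicit.

```python
# Minimal sketch of explicit transitive trust as a directed graph.
# All names here are illustrative, not from any real system.

trust_edges = {
    "person_c": {"person_a"},   # C trusts A directly
    "person_a": {"person_b"},   # A trusts B directly
    "person_b": set(),
}

def trusts(truster, trustee, edges):
    """Return True if `truster` trusts `trustee`, directly or transitively."""
    seen, frontier = set(), [truster]
    while frontier:
        current = frontier.pop()
        if current in seen:
            continue
        seen.add(current)
        for neighbour in edges.get(current, set()):
            if neighbour == trustee:
                return True
            frontier.append(neighbour)
    return False

# Person C has never met Person B, but because C trusts A and A trusts B,
# the derived (transitive) trust relationship is explicit and auditable:
print(trusts("person_c", "person_b", trust_edges))  # True
print(trusts("person_b", "person_a", trust_edges))  # False: trust is directional
```

Note that trust here is directional, which matches the real-world situation: Person B trusting Person A tells us nothing about whether A trusts B.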

Benefits for security discussions

Security is one of those areas which can have an interesting relationship with open source. I’m passionately devoted to the principle that openness is vital to security, but there are times when this is difficult. Two difficulties stand out: the first is to do with data, and the second with perceived expertise.

While we all (hopefully) want to ensure that all our security-related code is open source, the same cannot be said for data. There is absolutely a place for open data – citizen-related data such as bus timetables and town planning information is the most obvious example – and there’s data that we’d like to be more open, but not if it can be traced to particular entities. Aggregated health information is great, but people aren’t happy about their personal health records being exposed. The same goes for financial data: Aggregated information about people’s spending and saving habits is extremely useful, but I, for one, don’t want my bank records revealed to all and sundry.

Moving specifically to security, what about data such as the number of cyber attacks – successful and unsuccessful – against companies? The types that were most successful? The techniques that were used to mitigate them? All of these are vastly useful to the wider community, and there’s a need to share them more widely. We’re seeing some initiatives to allow this already, and aggregation of this data is really important.
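
As a rough illustration of why aggregation helps, here’s a small Python sketch; the record fields and organization names are made up for the example. The useful statistics – which attack types succeeded and which were blocked – survive aggregation, while the identity of the reporting organization is discarded along the way.

```python
# Illustrative sketch: aggregating incident reports so that useful statistics
# survive while the reporting organization's identity is dropped.
# The record fields and values are invented for this example.

from collections import Counter

incident_reports = [
    {"org": "Acme Corp", "attack_type": "phishing",   "successful": True},
    {"org": "Acme Corp", "attack_type": "ransomware", "successful": False},
    {"org": "Globex",    "attack_type": "phishing",   "successful": True},
    {"org": "Initech",   "attack_type": "phishing",   "successful": False},
]

def aggregate(reports):
    """Count attacks by type and outcome, discarding the `org` field entirely."""
    totals = Counter()
    for report in reports:
        outcome = "successful" if report["successful"] else "blocked"
        totals[(report["attack_type"], outcome)] += 1
    return totals

for (attack_type, outcome), count in sorted(aggregate(incident_reports).items()):
    print(f"{attack_type:<10} {outcome:<10} {count}")
```

Nothing in the output can be traced back to a particular company, which is exactly the property that makes organizations willing to contribute their data in the first place.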

There comes a time, however, when particular examples are needed. And as soon as you have somebody stand up and say “This is what happened to us,” they’re likely to be in trouble from a number of directions, which may include their own organization, their lawyers, their board, their customers, and future attackers, who can use that information to their advantage. This is where the Chatham House Rule can help: it allows experts to give their views and be listened to without so much danger from the parties listed above.

It also allows other people to say “we hadn’t thought of that” or “we’re not ready for that” without putting their organizations – or their reputations – on the line. Open source needs this, and there are times when those involved in open source security, in particular, need to be able to share what they know in a way that doesn’t put their organizations in danger.

Mike Bursell joined Red Hat in August 2016, following previous roles at Intel and Citrix working on security, virtualisation, and networking. After training in software engineering, he specialised in distributed systems and security, and has worked in architecture and technical strategy for the past few years.