Trust and Internet Identity Meeting Europe
2013 - 2020: Workshops and Unconference

TIIME 2015 Session 2: Baseline Trust & Assurance + Identity Assurance Framework - Building Critical Trust

Conveners: Tom Barton, Joni Brennan

Abstract: Kantara's identity assurance trust framework. How can organisations comply with governments' ideas, and how can government requirements be translated to the capabilities of organisations? What should be expected of identity and service providers? How can we get communities to work together?

Tags: Trust Frameworks, Assurance, LoA

Joni Brennan:

Kantara is a non-profit organisation creating strategic vision for digital identity transformation.

Kantara is moving towards a strategy for the growth and development of trust, validated through certified third-party auditing programmes.

Identity assurance trust framework

  1. Trying to build tools to make it more adoptable
  2. Trying to remove complexity

Relying parties provided requirements, which were fed into Kantara's framework; the process is very open and transparent (everything is available online).

There is the need to translate between government requirements and the capabilities of organisations.

  • The program is currently running in the US.
  • The US government has come up with the idea of a credential service provider. Kantara has taken this and broken it up into modules.
  • Key value of Kantara is to link up individual players in the chain of credential service providers, e.g. identity proofing and token management.

Kantara is working closely with eIDAS in Europe and with programmes in the UK and Canada.

--> what's the baseline?

"Special snowflake" syndrome: everyone believes their requirements are unique, yet 90% are very similar; Kantara is focussing on the deltas and building bridges.

Reports of LoA assessments are stored in both machine- and human-readable formats and can be leveraged by further systems.
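As an illustration of such a dual-format report (the field names and values here are hypothetical, not Kantara's actual report schema), a single assessment record could serve both machines and humans:

```python
import json

# Hypothetical machine-readable LoA assessment record.
# Field names are illustrative only, not Kantara's actual schema.
assessment = {
    "subject": "https://idp.example.org/idp",
    "framework": "Kantara IAF",
    "loa": 2,
    "assessor": "Example Audit Ltd.",
    "date": "2015-02-03",
    "components": {
        "identity_proofing": "pass",
        "credential_management": "pass",
    },
}

# The same record serialises to JSON for machine consumption...
print(json.dumps(assessment, indent=2))

# ...and renders as a one-line human-readable summary.
print(f"{assessment['subject']}: LoA {assessment['loa']} ({assessment['framework']})")
```

The point of the sketch is only that one authoritative record can feed both automated relying-party decisions and human review.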

Interesting: framework also works with non-traditional partners

Challenges: systems in the US are struggling, as they do not leverage the social approach.

Partnerships: the healthcare system. Health risk is different from other risks.


Concept: vectors of trust: token management, identity proofing, attribute binding, incident response and reporting… what else? The aim is to break trust into separately measurable components.
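A minimal sketch of the idea, loosely inspired by the Vectors of Trust work later standardised as RFC 8485: each component gets a code and a level, and the whole vector is a compact dot-separated string. The component codes and numeric levels here are illustrative simplifications, not the RFC's exact value syntax.

```python
# Map of single-letter component codes to the trust components
# discussed above (codes and numeric levels are illustrative).
COMPONENTS = {
    "P": "identity_proofing",
    "C": "credential_management",   # token management
    "A": "attribute_binding",
}

def parse_vector(vot: str) -> dict:
    """Split a dot-separated vector like 'P1.C2.A1' into its
    separately measurable components."""
    result = {}
    for part in vot.split("."):
        code, level = part[0], int(part[1:])
        result[COMPONENTS[code]] = level
    return result

print(parse_vector("P1.C2.A1"))
# {'identity_proofing': 1, 'credential_management': 2, 'attribute_binding': 1}
```

The attraction over a single LoA number is that a relying party can require, say, strong proofing but only modest credential management, instead of forcing everything onto one linear scale.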

What are the specific areas?

Levels of assurance: a scale from low Level 1 up to 2, 3… (linear); some scales are less linear.

Q: How do we arrive at the right measurement?

Tom Barton:


See slides: S2_Baseline_Trust.pdf

There has been a strong reaction against the idea of a linear relationship between risk and trust.

Aligning with governmental ideas is difficult; a large amount of effort is required to get Kantara-certified.

There have been progressions:

  1. Creating profiles
  2. Working with federal government

There is meant to be value in certification, but the original approach failed because the cost was too high: identity providers in InCommon found it incredibly expensive, and it failed.

Assurance profiles (e.g. Kantara's) were too heavy for InCommon's purposes, while InCommon's Participant Operating Practices (2001) were too light and non-verifiable.

What should really be expected of identity providers and service providers?

Assurance in the community: they came up with an easily understandable solution: there should be layers of automation and profiles that take the federation case into account.

Slide: potential participant baseline expectations

What to expect of an IDP? Or, as an IDP, what to expect of an SP?


  1. Everyone's a stakeholder.
  2. If these expectations aren't observed, what can be done about it?
  3. Peer review doesn't work if participants merely assert that a practice is being followed.
  4. Presented: interviewed research structures that could be relevant for IDPs.

Peer review of self-assessment: start at a low level and then expand.

- There's no statement about the rights of the user. IDPs typically consider themselves owners of the data, which makes it hard for others.

The discourse will continue.


David S: supports the InCommon framework.

Scott K: the federation operator should be represented (there is a FOG mailing list; the federation operator should ensure the veracity of these statements).

Peter: We didn't do our homework; this has already been done in production in corporations. We should check the European BCAs, all based on baseline contracts between partners. Governments and regulated industries have to interact with governments' apps, which drives topics such as this. We need a baseline before blue-sky thinking… {Joni} We are talking about different communities; we need to teach and help communities to work together. Governments have expectations, but these do not match the abilities of the private sector or of research and education. Mismatched expectations. {Tom} Blue-sky thinking can produce a useful response. The community has spent time paying dues with no return. All trust springs from people. How do we scale that from individuals to tens of thousands of organisations globally?

We need to find a way to get there, one route being a social mechanism. We need to talk about what we expect, and it needs to be an unburdened conversation.

David S: an observation on the tension between peer-to-peer and hub-and-spoke networks: the difference is driving much of the trust. E.g. SWITCH operates as hub and spoke, carrying a huge amount of responsibility, but based on contractual relationships. A full mesh can work in the same way. Take complexity away from the IDP and SP; the federation acts as a broker providing a middle layer of trust.

David G: a set of organisations in a well-regulated space can work together based on guidelines; in a more distributed environment, however, the cost is comparable to the benefit. {Peter} That only works when risk is low. {David} It depends on where you source your trust.

Joni: Are we doomed to have the same conversation over and over again? There will be multiple protocols for quite some time; think of the metaphor of different forms of transport for different purposes. Sometimes going in with auditing is fine; sometimes self-assessment is enough.

Robin: stated expectations are a helpful idea. Some of what we are experiencing is teething trouble: establishing trust between organisations with different purposes is quite a new problem to crack. As consumers reading the privacy policies of websites, we find they don't tell you what you want to know; they tell you information from the website's point of view. Wouldn't it be nice if the SP would express its commitments to you as a user? This is what we see here in InCommon's framework, and it makes the expectations auditable, since you can ask for a test or proof, on the same principle as privacy policies. There is a maturity step between making a privacy policy statement and doing the actual work behind it.

Even professional auditors will need some peer review of their own practices. We are moving away from self-assertion towards audits where people are required to show how they fulfil certain behaviours. Some audits are not completed because the method of audit is not defined.

AARC requires self-assessment for the lowest level of LoA, supplemented by peer review.