WedaCon Blog



Blockchain ID Innovation Night


Principles Of Decentralized Identity Management

I had the great honor to present at the 'Blockchain ID Innovation Night', which took place just before the European Identity & Cloud Conference in Munich. According to the 'call for speakers' sent out in February, the organizer (KuppingerCole) was not looking for 'pitches', but for a 'slam-style event where you try to entertain and convince the crowd that the world will be a better place with your contribution'. Well, I think the world is a better place since I presented, at least for me.

The basic idea of the talk was to compare the design principles of some recent (and also not that recent) papers and work around identities, using the 'Design Principles of Identity Relationship Management' from the Kantara workgroup I am actively engaged in as the central starting point. In that group, we analyzed several systems doing relationship management 'in the wild' in some way. While we already had 'blockchain' on that list, the specific usage of DIDs and SSI had not yet been taken into account.

Principles, Laws and Goals

Identity Management experts *love* principles. Remember: we are the good guys! Many of those principles overlap across different use cases, scenarios, technologies or disciplines, which leads to a situation where a fully implemented stack covers those principles more than once. Which is good, as you should not rely on only one part of your overall technology/process stack.

The new kids in town (and the old one)

For two or three years now, everyone (at least in our industry) has been talking about 'blockchain' and Self-Sovereign Identities. Along with that we often find the requirement or functionality of a decentralized identifier; and of course: relationships are everywhere. And we have Kim Cameron's 'Laws of Identity' as the oldest and most established set of good practices around identities.

I asked myself: what do they have in common? Let's take a deeper dive into the four aspects of identities I have chosen to investigate.

The Design Principles of Identity Relationship Management

The complete story is available here, but in short: relations must be provable, but only to authorized parties, which is a constraint of its own. Other constraints can be put on the usage. Mutability, and also the definition of immutability at a relation's different edges, includes revocability and delegation, temporary or indefinite. Scalability is a crucial principle here, as the IoT is one of the main drivers for this topic.

Contextuality was in the initial design, but within a relation, context is everywhere. The 'actionable' principle is something we have left for the work of a relationship manager, whatever that will turn out to be.

Design goals and principles of Decentralized Identifiers

Decentralization means eliminating the need for a central external authority, while giving humans and non-humans the power to own and fully control their digital identifier. Privacy and sufficient security, provided by design through cryptographic proofs of authentication and authorization, should be accompanied by functions to discover the relevant DIDs and to enable their use through interoperability and portability. This should be possible in any system capable of handling DIDs, in the simplest way possible, while remaining extensible over time.
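As a reminder of what such an identifier looks like: the W3C draft expresses a DID as a URI with three colon-separated parts, a fixed scheme, a method name and a method-specific identifier. The second line below uses the specification's own 'example' placeholder method:

```
did:<method>:<method-specific-identifier>
did:example:123456789abcdefghi
```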

Again, for a complete view, just visit the W3C Draft-Document.

Principles of Self-Sovereign ID

This is from Christopher Allen and the 'Rebooting the Web of Trust' initiative. They did a great job (as always) on this, here is an extracted compilation of the important principles:

An independent identity, fully controlled by its owner, including control over access to it and access to any aspect of it. The underlying systems and algorithms should be transparent in their functioning. The identifier itself should persist as long as the owner prefers, or be moved (ported) to another party. The SSI must be enabled for interoperability with other systems, as long as consent for this was given. The disclosure of claims must support the concept of minimization. The individual rights of the owner must be protected, and they take precedence over the rights and needs of the network.

The Laws of Identity

If you are working in the Identity Management space, you have most likely heard of them. If not, I recommend reading them now.

Kim Cameron's Laws require that information be revealed only with the user's consent, with minimal disclosure for a constrained use, and only to justifiable parties. Identifiers should be offered as 'directed identity', based on whether they serve public or private requirements. The pluralism of identity providers and technologies, including the definition of the human being itself as a component of the overall system, requires those systems to offer a consistent experience and to be as simple and easy to use as possible.

What do they have in common?

When we put all those principles into a Venn diagram to show which ones appear in more than one set, we might get a good start.

OK, let's do it. The result is, well, somewhat disappointing:

What happened? The principles are simply not comparable without sorting or normalization, or in other terms: a categorization.

This is something we have already tried on other occasions, and it is a really hard thing to do. Once again, I realized that we are missing something like (caution: stupid play on words ahead) 'universal Principle Names', a metaphysics of identity terms. But that is a completely different story.

As we lack this right now, I came up with a (simple) approach to normalizing those principles, reducing the original 36 principles from our four topics to 18 (by combining similar ones). Among them:

  • Directed Identity
  • Human Integration

Fine, now that we have normalized them, let's try again:
Looks much better now. We have many overlapping concepts and ideas, but what I would like to focus on is:
Where are the lonely wolves? Which concepts are not covered by at least two candidates, hopefully disclosing where we still have work to do?

You see them surrounded by the red boxes; I call them 'lonely wolves'.

The need for decentralized Identity Management

All these concepts, and some common sense, lead to a simple conclusion: there is a missing element (or elephant?) in the room. And this is (in my opinion) what we have identified as the 'relationship manager' in IRM, which might be, by its nature, a decentralized or distributed Identity Management System, and which needs to close the gaps left open by IRM, DID, SSI and LoI.

A decentralized Identity Management system should therefore include the following functionalities (the lonely wolves we have seen) to complete the full stack of upcoming identity management services.

And I would like to add one more thing, which I see as a crucial new element: semantics! We need a way to describe the processes around identities in a human-understandable and machine-interpretable language.

The 7 Principles of Decentralized Identity Management

So here they are, the 7 principles you should consider adding to your Identity Management System (or program, or project, or ...):

  • Human Integration: The human being, its fundamental rights and fundamental faults, must be seen as an explicit and integral part.

  • Availability: The given functionality needs to be provided even when some of the actors or components are not available.

  • Extensibility: The solution must be able to adapt to new, currently unknown demands and to connect to already available infrastructures and procedures.

  • Actionable: DIDM systems must enable and enforce actions based on authorizations and obligations.

  • Scalability: A DIDM needs to have a theoretically unlimited scale.

  • Mutable: The mutability and/or immutability of relations and entities must be respected.

  • Semantical: Any entity (which includes everything by definition) must be described in a semantic way to enable human/machine interaction.

Author: Thorsten H. Niebuhr / @idmpath

Download the full presentation


Relationship Notation Language


Within its paper on 'Refining the Design Principles of Identity Relationship Management' [1], the Kantara Workgroup for Identity Relationship Management (IRM) defined the criteria a system should follow to enable the representation and management of identity relationships.

In the course of its exploration, two things have become apparent: the need for a type of 'relationship manager', and for a relationship 'notation' language.

The document you are reading right now gives a first introduction to the topic of a 'notation' language, and is one of the contributions from WedaCon to the mentioned workgroup. While concentrating on this, we will also see a few links to, and mentions of, the functionality of a 'relationship manager'.

Notation ‚language‘

Wikipedia explains 'notation' as 'a system of graphics or symbols, characters and abbreviated expressions, used inter alia in artistic and scientific disciplines to represent technical facts and quantities by convention. Therefore, a notation is a collection of related symbols that are each given an arbitrary meaning, created to facilitate structured communication within a domain knowledge or field of study.' [2]

The Merriam-Webster dictionary defines it as 'a system of characters, symbols, or abbreviated expressions used in an art or science or in mathematics or logic to express technical facts or quantities'. [3]


With these two explanations, it becomes apparent that 'notation language' is a tautology, as 'notation' is just another word for 'language': describing facts using a convention (not necessarily using words). The notation used for this document is known as 'English'.

Some relations

As already stated in the report, relationships are not new. Relational databases are the number one type of relational system a reader might be familiar with. Architects and engineers in database design and implementation produce Entity-Relationship (ER) models [4] on a regular basis; so the question arises: why do we need something special for IRM?

One of the reasons for this: while databases usually manage only 'local' data (within their tables and rows), IRM needs to be able to manage 'disconnected' and remote data as well. To prove the relationship between Entity A (say: Alice) and Entity B (say: Dr. Bob), it might be necessary to link two different datasets, e.g. the patient database of Hospital C and the doctors' database of Hospital D (as Dr. Bob works there).

But that does not mean we need to reinvent the wheel: by investigating ER models, we can learn a lot about the requirements and principles of notation.

Notations and models

As mentioned, notations can consist of graphics, symbols or characters. Especially in technical topics, symbols are used to describe and visualize.

We already talked about ER models, which describe relationships between instances of entities in a given knowledge domain. Modeling facts using an abstract data model is done using symbols that form a graphical notation. This notation is perfect for visualizing relations between entities, helping humans to understand and use this information to design (relational) systems. It makes a given fact human-understandable, but it usually lacks the ability for machine interpretation.

What we are looking for is something that can help/support a 'relationship manager' in doing its job. This notation needs to be machine-interpretable; however, it would be nice if it were also human-understandable.

It's time to define the requirements for such a notation in more detail.

Requirements for Relationship Notations

A notation for identity relations needs to be able to describe facts (relations) between disconnected and remote domains and/or entities (Hospital C and Hospital D), and it needs to be able to either share concepts ('what is a patient?') or understand the concepts used within a relation.

As it is per definition used for inter-connectivity, using standards (if available) is a must.

So far, we have collected the following four basic requirements a relationship notation should meet:

  • Support the six design principles Provable, Constrainable, Mutable, Revocable, Delegable and Scalable from the IRM Design Principles Document.
  • Machine-interpretable and human-understandable.
  • Support disconnected and remote entities, concepts and domains.
  • Standards-oriented.

Symbols, objects and concepts

Any sign (symbol, word, sound) in itself has no meaning if not explained somewhere. We do use signs all the time without caring about this simple fact. The reason: we have a common understanding and expectation of the signs. If, during communication, the ‚sender‘ and the ‚receiver‘ have different expectations (understandings) of the signs, communication fails.

As a result, a notation for relations needs to provide a way for sender and receiver to understand the claim made about a relation. Let's look at a simple statement about a relation:

This lightbulb is made by ACME Corporation

I think we can agree that one of the requirements for a relationship notation is already met: this sentence is human-understandable. Apart from the fact that we are able to read and understand English, and know concepts/entities like 'lightbulb' or 'is made by', there is more to explore in this statement: it uses a common form we learned in school:

In its simplest form, any relation can be described using this 'subject-predicate-object' pattern. Let's use a more 'techie' variant of the above to further investigate its potential:

lightbulb:A is_made_by Corporation:ACME

This statement still consists of a subject ('lightbulb:A'), a predicate ('is_made_by') and an object ('Corporation:ACME'). It can still be understood by humans, but is it machine-interpretable? If you are familiar with IT (I am guessing you are), you might conclude that this variant is at least easier for machines to 'read'. But being able to read (store into a variable) does not help at all: you (more precisely: a relationship manager) need to know how to interpret those strings.

What is required here is a way to understand the concepts represented by these strings: we need to convert strings into 'things'. And while talking about things: it would also be beneficial to be able to uniquely identify a given thing as an instance of the concept we are describing.


'In mathematics, and more specifically in graph theory, a graph is a structure amounting to a set of objects in which some pairs of the objects are in some sense "related"' [5], states Wikipedia in its article about graphs.

Graphs consist of nodes (also known as vertices or points) and edges (arcs, lines). The simplest form can be thought of as two nodes connected by a line (an edge). Sounds familiar?

Correct: a graph describes a relationship between two nodes: one node is the subject, the other the object, and the relationship between them is the predicate.

Back to our lightbulb example: let's assume we have three lightbulbs from two different vendors.

This picture can also be described using a notation:

@prefix lb: <http://notationexamples.irm/lightbulb#> .
@prefix co: <http://notationexamples.irm/company#> .
@prefix pre: <http://notationexamples.irm/relations#> .

# three lightbulbs made by two different companies
lb:A pre:is_made_by co:ACME .
lb:B pre:is_made_by co:ACME .
lb:C pre:is_made_by co:BCME .

Note the 'prefix' declarations: each defines a namespace for a type of entity and its properties, an arbitrary URI serving as an identifier that offers uniqueness for concepts and instances within its namespace.

The notation we have used here is based on the Resource Description Framework (RDF).

It's time to cover another one of our requirements: standards orientation.

Resource Description Framework (RDF)

RDF is a specification by the World Wide Web Consortium (W3C) and was adopted as a recommendation in 1999. The RDF model is based on the ideas of making statements about uniquely identifiable resources in the already mentioned form of ‚subject-predicate-object‘, also known as ‚triple‘. The subject denotes the resource, and the predicate denotes traits or aspects of the resource, expressing a relation between the subject and the object. [6]

While the object can also be a literal (strings, numbers), the possibility to use a unique (uniform) identifier (URI) provides interconnectivity in much the same way as URLs (Uniform Resource Locators) did for the World Wide Web.

The URI defines namespaces for the subject and the object, and the same is true for the predicate: the type of relation is usually bound to a namespace as well. This namespace might be available and known only locally, but with the extension of a URI and the RDF specifications, we are able to describe the type of relation (e.g. the concept of 'is_made_by') in a way that enables remote systems to 'understand' this claim and act accordingly.
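Returning to the earlier example of Alice and Dr. Bob, such a cross-domain claim might be sketched in triple form as follows (the namespaces and the predicate here are invented purely for illustration):

```turtle
@prefix hospC: <http://hospital-c.example/patients#> .
@prefix hospD: <http://hospital-d.example/doctors#> .
@prefix rel:   <http://relations.example/terms#> .

# a relation spanning two independent, remote datasets
hospC:alice rel:is_treated_by hospD:bob .
```

Because each identifier carries its own namespace, either hospital (or a third party) can resolve the concepts behind the claim without holding both datasets locally.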

Simplified, this definition can be seen as a vocabulary describing the specific notation used within the given concept, bound to a namespace.

Vocabularies and Ontologies

Vocabularies for such use cases are also known as 'ontologies', and we have a W3C standard for this as well: the 'Web Ontology Language', short 'OWL' (yes, not WOL, as OWL is easier to pronounce. Not kidding here). Describing OWL and all its features would make this document explode, so let's simplify: it is built on top of RDF, and it uses triples as well.

The nice thing about OWL: it allows you to describe concepts using triples in much the same way as you describe your data in triples. This includes boundaries and rules a given graph (statement, triple) can provide, allowing much more granular rule sets than traditional rule sets and schemata.

In our RDF example about lightbulbs, we have already defined a namespace for our claims. By adding metadata to those definitions, we can further enhance the meaning of the statements. Because URIs are used instead of literals, we can include definitions from other vocabularies as well.

01 @prefix lb: <http://notationexamples.irm/lightbulb#> .
02 @prefix co: <http://notationexamples.irm/company#> .
03 @prefix pre: <http://notationexamples.irm/relations#> .
04 @prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
05 @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
06 @prefix owl: <http://www.w3.org/2002/07/owl#> .
07
08 # three lightbulbs made by two different companies
09
10 lb:A pre:is_made_by co:ACME .
11 lb:B pre:is_made_by co:ACME .
12 lb:C pre:is_made_by co:BCME .
13
14 # describing the predicate 'is_made_by'
15 pre:is_made_by rdf:type rdf:Property .
16 pre:is_made_by rdfs:comment "the relation between a product and its producer" .
17 pre:is_made_by rdf:type owl:ObjectProperty .
18 pre:has_produced owl:inverseOf pre:is_made_by .

A quick walkthrough of our additions (lines 04-06 and 14-18):

In lines 04-06 we define namespaces for three new prefixes: rdf, rdfs and owl. These provide the URIs of the vocabularies that give further detail and meaning to our information.

Lines 15 and 16 are statements about the nature of 'pre:is_made_by': a property used to describe the relation between something that is produced and something that produces.

Lines 17 and 18 go even further by describing a new relation (and allowing for implicit knowledge): the inverse relation between a producer and a product.

As all those statements use W3C standards, they can easily be used for interoperability. OWL provides the functionality needed to deal with information, not just plain data. It is, like RDF in general, a key technology for the semantic web and linked data. [7]

Implicit and explicit relations

In our example, we now have explicit relations (A is produced by B) and implicit relations (B has produced A). Although we have not explicitly declared that a given company has produced a given product (or an instance of it), the notation used for our statements allows for this 'new' knowledge. This kind of 'intelligence' is easy for humans, but hard for machines.
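Under the owl:inverseOf declaration from the previous section, an OWL-aware reasoner could derive such implicit triples itself; a sketch, reusing the example prefixes:

```turtle
# inferred from 'lb:A pre:is_made_by co:ACME .' and the other explicit triples
co:ACME pre:has_produced lb:A .
co:ACME pre:has_produced lb:B .
co:BCME pre:has_produced lb:C .
```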

As in any profession, despite the importance of knowing the details, we need to be able to ask the right questions. And to do that, we need to know how to ask. For (relational) databases we have SQL, directories are queried using LDAP queries, and triplestore databases, sometimes referred to as 'graph databases', have their own query language as well: SPARQL, which is, again, a W3C standard. [8]

A SPARQL query on our lightbulb example would read exactly the data we defined in the section about vocabularies and ontologies. A SPARQL client would be able to query the endpoint to get the required information on what to expect and on the meaning of the statements.

Query Language vs Notation requirements

If our SPARQL client is OWL-aware, it would automatically be able to use the implicitly available information. To keep things simple, we will run an explicit query here, to show all lightbulbs produced by ACME Corporation:
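A minimal sketch of such a query, using the prefixes defined in our example:

```sparql
PREFIX pre: <http://notationexamples.irm/relations#>
PREFIX co:  <http://notationexamples.irm/company#>

# all subjects explicitly related to co:ACME via pre:is_made_by
SELECT ?lightbulb
WHERE { ?lightbulb pre:is_made_by co:ACME . }
```

Run against the triples above, this returns lb:A and lb:B; lb:C stays out of the result set, as it is made by BCME.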

The example above returns only the subject (as we have asked). However, the result could be a real graph again, allowing for further work on the relations that are available.

SPARQL is able to query and manipulate information stored in RDF (subject-predicate-object) format. Its flexibility also allows retrieving unstructured data and generating new triples from that data, using externally defined ontologies and additional data sources.

The system used for storing the data does not need to be a graph database or a 'triplestore'. What is required is an interface that understands the query language: a SPARQL endpoint.

While SPARQL is the query language, RDF and OWL are the notations used to describe relations.

Together they provide the ability to generate new triples (and knowledge) which makes them a perfect set of candidates for Identity Relationship Management:

  • standardized
  • machine-interpretable (and still human-readable, with the help of ontologies)
  • supports disconnected operation (by caching remote ontological definitions)
  • with the use of ontologies, able to provide any of the identified principles for IRM

Linked Data and Linked Entities

When talking about ontologies and semantics, the term 'linked data' is something you inevitably stumble across. With our links into Wikipedia, we even use this technology actively: Wikipedia makes extensive use of it to show facts related to a given subject.

In much the same way as linked data provides a way to interconnect several data sources and make combined use of them, 'linked entities' would allow for much the same beneficial usage.

Relational Identity Management will most likely use very similar concepts, but this is something we need to investigate when talking about a ‚relationship manager‘.

Some Links for further reading

[1] Kantara Initiative: Refining the Design Principles of Identity Relationship Management
[2] Wikipedia: Notation
[3] Merriam-Webster: Notation
[4] Wikipedia: Entity-relationship model
[5] Wikipedia: Graph theory
[6] Wikipedia: Resource Description Framework
[7] Wikipedia: Semantic Web Stack
[8] Wikipedia: SPARQL


Why HR 4.0 might not work for you

The magical version '4.0' is something we stumbled over everywhere in 2016, wherever people talked about 'disruptive' technologies, changes and new approaches. Whether it is HR 4.0, Industry 4.0 or Web 4.0, the basic goals do not differ much, which is the reason we use the term 'Idea 4.0' throughout this document for ease and readability.

So what exactly does 'Idea 4.0' mean, and what did the earlier visions and expectations for versions 2.0 and 3.0 look like? And even more important: where do we stand now in the implementation of those previous versions, and does Idea 4.0 require a full or even partial implementation of versions 2.0 and 3.0 as a prerequisite? Is there even a 'cross-update' path directly from 1.0 to 4.0?

Download your full copy as PDF, also available in German. Or just read it here!


Suppose we could start on a greenfield site, directly with Idea 4.0. All 4.0 technologies have in common that they provide or use small, more or less intelligent components (called agents) that play a role in a given process. Those agents can be implemented in hardware (Internet of Things) or software (micro-services).

The agents should behave autonomously and communicate and act with each other. They must be able to represent other agents (or another participating site) to make decisions and perform actions on their behalf. It is important to understand that this includes acting on behalf of a human being. The actions an agent performs must be secured by elements of authentication (who am I?) and authorization (what am I allowed to do?) and therefore require a high degree of robustness with respect to security, to prevent unauthorized actions from being started.

Whatever 4.0 technology we look at: a secure implementation of functions managing authentication and authorization is essential, since a lack of those, and the potentially resulting attacks on the infrastructure, jeopardize the acceptance of such systems.

On the other hand, there will be no acceptance of new technologies by end-users if we make them too hard to use. As in many other areas, we need a balance between security and usability.

So even if we start on the 'green field': Idea 4.0 requires measures and functions for sufficiently secure communication between the agents, regardless of whether we consider HR 4.0, Industry 4.0 or Web 4.0.

1, 2 or 3

In order to understand what the versions 1.0, 2.0 and 3.0 actually mean, a closer look at the 'mother of all 2.0 hypes' might be useful.

When the term 'Web 2.0' came up around 2004, it was quickly re-used as a hype and buzzword in many other fields for marketing purposes. Web 2.0 proposed new functions for collaborating on the internet, which were not really new. In fact, the first (and still current) versions of the hypertext protocol HTTP were already designed to enable communication in any direction.

However, the first applications to use the World Wide Web (WWW) were of a technical nature, creating the impression of a 'static' web for end-users: they were pure consumers of information. That old era of a static web was called 'Web 1.0'.

Following the static web, we got the 'participatory' web: new programs and services allowed end-users not only to consume, but to generate new information and knowledge in a simple, participatory way. As already said, they could have done that with Web 1.0 as well, but new offerings made it quite easy to create information on their own, using blogs, forums and personal web pages.

Additional protocols and features were introduced to enable the Internet as a major 'platform' for new forms of communication and collaboration, and the catchy name ‘Web 2.0’ was used for it.

The massive increase of available information (and non-information), and its chaotic (because uncoordinated) storage, led to the fact that it became increasingly difficult to find required information, or to use it efficiently.

New approaches were needed, not only to generate catalogues of the existing data, but to bring it into a logical context: IT systems should not only convey information, but understand and interpret it with a certain degree of intelligence, and automatically draw conclusions from it.

By using these semantic capabilities, raw data (for example, a text containing the letters ES) becomes real information: an ISO 3166-1 value which represents a specific country. Subsequently, further information can be semantically linked and new knowledge generated.
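In an RDF-style sketch (the ex: namespace and its class and property names are invented for illustration), this semantic lift might look like:

```turtle
@prefix ex:   <http://example.org/terms#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

# the raw string "ES" becomes an identifiable concept
ex:ES a ex:ISO3166Alpha2CountryCode ;
      ex:codeValue "ES" ;
      ex:represents ex:Spain .
ex:Spain rdfs:label "Spain"@en .
```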

The approaches and aims of Web 3.0, or the 'semantic web' as it was named, have not been achieved to this day. From today's perspective, Web 3.0 can rather be described as the 'contextual network': interpreting information according to the current context and relations.

Examples of these contextually skilled systems are the well-known social networks and Big Data approaches.

But a contextual network is far from being intelligent and independent enough to make decisions: it lacks the capacity to really understand the information, or at least to draw firm conclusions and act independently. A skill we expect to be required of Web 4.0 agents.

One could argue that we are just about to make this happen with the next technological step. But there is a big gap. Inadequate implementations of the Web 3.0 goals (and in some cases even the 2.0 goals) lack an important dimension: trust.

Trust 1.0

As we have already noted in our view on the 'green field', the objectives of Idea 4.0 cannot be achieved unless a sufficient dimension of security and reliability exists in the communication of the agents. Without this dimension, we will not gain the necessary confidence and trust in all the new things that are supposed to be intelligent.

And even in the area of 'intelligence' we are not where Idea 3.0 promised we would be. While there are large amounts of semantically organized data collections, they are rarely used.

The gaps identified in the incomplete implementations of versions 2.0 and 3.0 clearly need to be closed to make Idea 4.0 a success.

HR and IT: Clash of Civilizations

Let's be honest: HR never belonged to those fields that identify and adopt new trends very quickly. But that is not necessarily negative; sometimes it is worth waiting and watching others fail before you act.

Maybe that was the reason why Marketing and Consultancy on HR topics came up with the term 'HR 2.0' years after the concept it describes was published.

HR 2.0 was supposed to free HR departments from their stuffy existence. Holistic approaches and sustainable, enterprise-wide management strategies for the resource 'human' were announced. The concepts behind this had already been known since the late 90s as the 'Ulrich model'.

The evolution of the Web from version 1.0 to 4.0 has a lot in common with what is proposed for HR, especially when analyzing the goals dedicated to version 2.0 of it:

The evolution of HR-IT

With the impact of the Ulrich model, various groupings and names of old as well as new 'players' in the HR software market emerged. HRIS (HR Information System), HCM (Human Capital Management) or HRMS (Human Resource Management System) are just a few, but perhaps the most frequently mentioned. Depending on performance and functions, all available solutions can be sorted into one of these categories.

However, all have in common that they support processes known as 'Joiner-Mover-Leaver' (JML).

Greatly simplified, today's HRIS/HCM/HRMS systems support the core 'JML' processes by assisting with the digital mapping of HR processes and enforcing the relevant compliance requirements.

A new employee record is created using the HR software, and the software thus ensures that the digital personnel file is complete at the beginning of the employment relationship and that all the necessary processes are carried out. All done for the first day; the HR managers can sit back and relax: mission accomplished!

Well, this may be the case from the HR department's perspective: the manager of the new employee, and the employee himself, might have a totally different view on this!

Day 1 on a new job

Experience has shown that new employees spend their first two weeks waiting: for a telephone, business cards, and various access profiles and accounts for the IT applications that are necessary for carrying out their activities.

The necessary processes are usually the responsibility of IT, and we encounter very similar processes and requirements here: again, JML and compliance.

So as a first interim conclusion we can say: the Joiner-Mover-Leaver processes must be reflected both in HR and in IT.

Depending on the maturity of the organization, the JML processes for identities not managed by the HR organization, such as partners, external employees or system accounts, may need to be considered as well.

The JML processes, as they are to be implemented by IT, are usually realized by manually coordinated operations, or are already processed automatically using an Identity & Access Management (IAM) system.
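The JML lifecycle described above can be sketched as a minimal state machine; the states, events and comments below are illustrative, not taken from any specific IAM product.

```python
# Illustrative sketch of Joiner-Mover-Leaver (JML) transitions as an
# IAM system might model them; states and events are hypothetical.

TRANSITIONS = {
    ("none", "join"):    "active",         # Joiner: create accounts, assign birthright access
    ("active", "move"):  "active",         # Mover: re-evaluate entitlements for the new role
    ("active", "leave"): "deprovisioned",  # Leaver: disable accounts, revoke access
}

def apply_event(state: str, event: str) -> str:
    """Advance the identity lifecycle; reject events not allowed in the state."""
    new_state = TRANSITIONS.get((state, event))
    if new_state is None:
        raise ValueError(f"event {event!r} not allowed in state {state!r}")
    return new_state

state = "none"
for event in ("join", "move", "leave"):
    state = apply_event(state, event)
print(state)  # deprovisioned
```

The same three transitions exist in both HR and IT; the point of the later conclusions is that they should be treated as one process, not two.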

Depending on the required or already achieved degree of integration, media discontinuity is a common concern and reality: changes to HR data are communicated via phone or e-mail and need to be entered manually into diverse systems.

In rare cases, organizations already have a common interface between the HR systems and other IT systems, using an IAM (Identity and Access Management) system for automated processes.

In addition, the IT department must ensure that the correct authorizations are assigned to the employee in accordance with his job profile and the compliance requirements.

Next conclusion: while the basic processes and goals for HR and IT are the same, or at least similar, they are not seen as one process 'across departments'. As a result, they are acted on with different degrees of automation in HR and IT independently.

Even environments with a high level of automation (HRIS and IAM systems in use) often lack direct connections between the systems, and media discontinuities become visible.

The role of an ‚Administrative Expert‘

HR 3.0 and 4.0 place high demands on the maturity level of an organization with respect to its digital agenda. The most important element of the HR 2.0 models is the role (and the results achieved) of an Administrative Expert (AE):

  • Introducing an HRIS
  • Eliminating manual data entry
  • Identifying and managing data sources and targets

In addition to its technical dimensions, the Administrative Expert serves as an important role in organizations and enterprises, promoting holistic approaches to HR management across departments and companies of the group.

But how should an organization which has implemented these functions inadequately or not at all deal with HR 4.0? How should those 'intelligent' agents make decisions if they do not have the relevant data and knowledge for this?

The stony path to 4.0, or: eating elephants

Whether your goal is Industry 4.0, HR 4.0, Web 4.0 or Idea 4.0: if you have not properly mastered level 3, it will be difficult, perhaps even impossible. At least if you want to approach this with the necessary seriousness and a certain level of quality awareness.

But you do not need to tackle everything at once; do it exactly like you would eat an elephant: piece by piece.

An Agenda for Idea 2.9

Before you get started with Idea 3.0: Complete your 2.0 Agenda!

You may have started already, or are near finishing it? Then let's call it Idea 2.9.

The central goal of Idea 2.0 is a holistic view on the complete world of processes in an organization and their digital implementations. Especially when dealing with person-related information – and who would deny this for HR – Identity and Access Management systems need to be considered.

It makes sense here to get an understanding of the recent requirements and challenges related to the new EU General Data Protection Regulation.

  • Examine how personal data and processes are handled in your IT systems today.
  • Identify data sources and destinations for
    • personal data
    • organizational data (master data)
    • processes
  • Check / define data ownership and processes, including a view on the current requirements of the EU General Data Protection Regulation.

An Agenda for Idea 3.9

You know, before 4.0 comes 3.9!

To develop semantic skills, you need to follow an approach that identifies all relevant information, processes and side effects. This includes the transition from approaches centered on humans/people towards a greater and broader concept: Entity Management.

An entity may be anything at this point: people, organizational structures such as departments and subsidiaries, roles, processes, assets such as mobile phones and cars, and much more: a 'thing'.

And this totally makes sense if you want to deal with an 'Internet of Things'.

  • Identify relations and automated process capabilities and assignments (contextual or even semantically)
  • Automating processes and data traffic
  • Entity Relationship Management

A view on Idea 4.0

Wonderful, we arrived, and now we can take a closer look at what actually is meant by 4.0.

The "Agent's idea" consists of the following elements:

  • Intelligent sensors + processes: Intelligent sensors will transmit masses of data and information in an automated way, without asking. It is important to check and control the 'without asking' part.
  • Internet of Things: If the toaster is about to order new toast from the refrigerator: is it authorized to do so? And is it really your toaster ordering?
  • Blockchain & smart contracts: Perhaps the hype around Idea 4.0: the need for reliable and trusted digital contracts.
  • UMA (User-Managed Access): A part of the world of identity management which returns control over the data concerning a user to the end-user.

Real Life Results

Organizations which have adopted these principles can benefit from their agile processes, as can be seen from the following figures from a real company currently on (or near) level 'Idea 3.9':

  • Automated provisioning of all necessary applications and services depending on job role, department and organizational affiliation and other relations within 30 minutes.
  • Re-organizations (department changes and mergers, for example) can be implemented within weeks rather than months
  • Automatic notifications about pending processes (password expiration, account deactivation, re-certifications)
  • Automatic re-certification of all assignments several times a day
  • User Self-Service
  • and many more


Whatever you would like to achieve with Idea 4.0, it requires secure digital communication, especially when dealing with sensitive information.

Connecting HR + IT systems by using an IAM solution leads to a massive reduction in effort and costs for the IT and the HR department regarding JML-processes.

Automated processes in IT, based on specifications from the HR, allow a secure and compliant implementation of all necessary steps in a few minutes and not in weeks: Your new employees can be productive from day one.

To achieve this, HR (and IT) organizations need to increase their maturity level and align their processes to reach the goals and objectives of the whole company.

Idea 4.0 cannot be achieved if you have never dealt with Idea 2.0 or 3.0.

Download your full copy as PDF, also available in German, or get in contact with us for more information.

We will make Idea 4.0 work for you!


It requires a consistent re-orientation and adjustment of current technologies and methods to meet the upcoming challenges of Identity Management. With this White Paper we would like to introduce the latest development of our Entity Relationship Management system. The system’s new feature consistently manages and displays all types of entities and their connections to each other based on semantic and ontological approaches.


Identity and Access Management (IAM) and the 'sister-discipline' Identity Access Governance (IAG) are an integral part of the IT infrastructure in medium and large businesses. These systems manage internal user accounts for employees, system administrators and partners. Increasingly, access rights and accounts of customers and suppliers are considered in an IAM compliant view as well.

This expansion of the IAM/IAG application spectrum will increase even further in the coming years. Specifically, the emergence of the 'Internet of Things' will make the inclusion of 'things' into the scope of IAM/IAG systems necessary, because these elements often act on the user's behalf, or in direct relation to the user.

Today's IAM/IAG systems and processes are mostly not designed to meet those expected and anticipated requirements.

This white paper presents our view on a few identified problems with the current IAM/IAG solutions in section 3 and how we attempt to address them in sections 4 and 5. Sections 6 and 7 present the current status of development and milestones.

Limitations of current IAM Systems

Current IAM systems have limitations with respect to a number of functions required for the modern and future-ready management of digital identities of any kind. Innovative attempts are necessary to take IAM/IAG to a new level and meet the demands of the 21st century.

Human-Machine Communication

Identity management involves defining what users can do on the network and IT systems, with which devices and services, and under what circumstances. Today, such access and accounting policies, for the IAM system processes and workflows that perform authorization assignment and provisioning, are defined using machine-optimized policy languages. The origin of these policies, however, lies with the business, in natural language. The policies have to be translated into a technical representation, requiring close collaboration between the requesting business teams and the acting technical teams.

Policy Management

Additionally, the policies are in most cases not centrally managed, but are specific to the programs being integrated with the IAM system. On the one hand this is driven by the different demands of the target systems' APIs; on the other hand, many IAM/IAG solutions offer a fixed set of components that exchange and share data in a rather proprietary way and do not allow for flexible control.

Back-end Systems

SQL databases and LDAP directory services are the most commonly used back-end systems for IAM/IAG solutions. Modern, highly scalable back-end systems optimized for this purpose are rare in the IAM/IAG world and are mostly only available as an 'add-on' (and additional data silo).

Master Data Management

The discipline of Master Data Management, the management of enterprise data beyond user accounts, is the exception rather than the rule in today's IAM/IAG environments. Departments, subsidiaries, addresses or other general information like zip codes and location data are usually managed in external spreadsheets.

Semantic capabilities

Due to missing Master Data Management, information is not used as information in the sense of a semantic approach, but merely as raw data: a timestamp, for example, is only used as a number or even a string.


Scalability

IAM/IAG systems which have their origin in the early days of IDM usually scale rather poorly because their back-end systems were not designed for today's requirements. In contrast, modern IAM/IAG systems have an advantage here, but still miss required classic enterprise functions. Here we often see a difference between cloud-born solutions and those that were on the market before scalability and cloud functionality were required.

Entities vs. Identities

Identities and people (user-centric IAM) are the primary concern of identity management. Other entities (things, divisions, subsidiaries, units, relations) are arranged around the identity (= person). The management of these other entities is usually not handled with IAM-specific approaches, but in additional data silos.

Modular vs. Monolith

IAM/IAG systems which present themselves as a 'one piece solution' are in most cases monolithically conceptualized. This complicates management and leads to system dependencies. Modular systems, on the other hand, interact rather poorly with each other, for the simple reason that they come along as add-ons and not as modules.

Flexible Adjustments

Perhaps the number one problem in IAM/IAG: the system complexity (and the continuously growing rulesets the systems are based upon) complicates flexible adaptation and the necessary adjustments, for example in provisioning or the reorganization of companies.

Authorization and Obligation

The job of an IAM/IAG system is the authorization ('who is authorized to?') of access. The control of responsibilities ('who is in charge?', the obligation) is often overlooked. Failing to answer one of these basic questions inevitably leads to the necessary use of tools for checking compliance and re-certification.

Rapid Deployment

Installation, system maintenance and extensions of existing IAM/IAG systems take up significant time. As previously mentioned, the primary reasons are the complexity of the prevailing monolithic architecture and poor communication between system modules. Even federated approaches often fail because of the complicated structure of SAML and co.


Interoperability

Existing standards such as SCIM, REST, SAML, OAuth2 and OpenID should promote interoperability; however, most IAM/IAG providers use proprietary solutions. Admittedly, things have improved in recent years; nevertheless, we are still far away from the aim.

Semantic Entity Management

The basic idea of our new approach lies in extending the processes and requirements of IAM/IAG systems to all possible entities, and combining this with a semantic structure.

As background, we would like to take a little trip into philosophy: already the ancient Greeks meditated about the concept of entities, whether they are concrete things or abstract objects: a defined 'being'.

On the other hand, an entity can also represent the 'nature' of a thing, an essential property for the existence of the thing itself.

To describe entities in terms of their capabilities and their reality, we can use ontologies.

In the recent years of 'Semantic Web' development, ontologies have gained importance in computer science. An ontology is a conceptual model of observed reality: a repository of interlinked concepts pertaining to a given application domain. Ontologies have outranked the taxonomic approaches, which allow only hierarchical classifications.

Coming back to the IAM domain: how could an entity be categorized using ontology in familiar IAM structures?

Let's use countries as an example; they are represented by a string in a table column in most IAM systems. In a data model which uses ontologies, there is no table of possible (active) countries; instead, the model refers to an ontology describing countries, which does not necessarily lie in the 'domain' of the organization. This makes a statement such as 'Spain is a country' (subject - predicate - object) possible, and turns the raw datum (the string 'Spain') into a concept that has properties and relations, making it more than a string.

Illustration of a simple ontology: Spain is a european country which shares a border with France. Both are EU-Countries.
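The ontology from the illustration can be written down as a set of subject-predicate-object triples. The sketch below uses plain Python tuples and invented predicate names purely for illustration; a real system would use RDF and a triple or graph store.

```python
# The illustrated ontology as subject-predicate-object triples
# (predicate names are illustrative, not from any standard vocabulary).

triples = {
    ("Spain",  "is_a",               "EuropeanCountry"),
    ("France", "is_a",               "EuropeanCountry"),
    ("Spain",  "shares_border_with", "France"),
    ("Spain",  "member_of",          "EU"),
    ("France", "member_of",          "EU"),
}

def objects(subject: str, predicate: str) -> set:
    """All objects o for which the triple (subject, predicate, o) is asserted."""
    return {o for (s, p, o) in triples if s == subject and p == predicate}

print(objects("Spain", "shares_border_with"))  # {'France'}
```

Every statement in the caption maps to exactly one triple, which is what makes the representation both human-readable and machine-processable.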

When speaking of an ontology, we speak of classes of entities and instances of these classes: individual entities. They are related to each other through inheritance and through relations defined by properties. Additional 'knowledge' can be added using axioms.

The advantage of ontologies compared to non-semantic data representations lies not only in the presentation of knowledge (its computational usability), but in the relations providing a rich contextual description of the data.

These relations do not necessarily have to be known during the 'design' of the ontology; they can be computationally derived from the existing knowledge and are then available as 'new' knowledge. Ontologies can be represented as a set of 'triples' (subject-predicate-object statements); the same representation is also used in graph databases.

Graph databases are today's preferred engines within 'social networks' to store relations between people. Similarly, the relations stored in a graph database are used in e-commerce to display and recommend 'similar' products or additional services of interest, based on what was bought by other customers with similar profiles. As these examples show, the technology scales.

Graph databases store 'nodes' (the individual instances, for example 'Spain') as well as their 'edges' (relations to other nodes). Nodes and edges are expanded and further defined by their properties. A big advantage of graph databases is that these structures are not required to be known in advance, as is the case for relational databases and directories.

This coincides to a high degree with an ontology, as an ontology can be perceived as a graph.
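The property-graph model described above can be sketched in a few lines: nodes and edges both carry properties, and no schema has to be declared up front. The structure and the border-length value are illustrative only, not an excerpt from any graph database product.

```python
# Illustrative property-graph sketch: nodes and edges with properties,
# no schema declared in advance.

nodes = {
    "spain":  {"label": "Country", "name": "Spain"},
    "france": {"label": "Country", "name": "France"},
}
edges = [
    {"from": "spain", "to": "france", "type": "SHARES_BORDER_WITH",
     "properties": {"border_km": 656}},   # property value illustrative
]

def neighbours(node_id: str, edge_type: str):
    """Traverse edges of a given type; border sharing is symmetric,
    so the edge is treated as undirected here."""
    for e in edges:
        if e["type"] == edge_type and node_id in (e["from"], e["to"]):
            yield e["to"] if e["from"] == node_id else e["from"]

print(list(neighbours("spain", "SHARES_BORDER_WITH")))  # ['france']
```

Adding a new node or a new edge type requires no schema change, which is exactly the flexibility the text contrasts with relational databases and directories.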

The combination of the scalability provided by graph databases with ontological models of a semantic layer enables the development of Entity Relationship Management systems and allows a huge number of the limitations of today's IDM systems to be overcome.

Furthermore, ontologies make it possible to write policies and rules in a 'natural language' by using the ontology concepts, which are understandable by humans as constructs, while nevertheless being directly 'machine-readable'.

The IHMC (Institute for Human and Machine Cognition) in Florida, USA has a great reputation for semantically managed System integration, and we are very proud and happy to have them on board for our journey.

Based on concepts of positive and negative evaluation of authorization (granted/denied) and obligation (required/waived), our system allows the relevant policies to be created in constrained natural language. In case of a policy conflict (e.g. required but forbidden; Figure 2), the system automatically tries to resolve the conflict based on predefined algorithms.

A [User] is [required] [to] [reset] his [Password] [every 90 days].

Example of a policy which uses concepts from the ontology (in square brackets).
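The bracketed policy above could be represented as a structured statement once the ontology concepts are resolved. The sketch below is a hypothetical illustration of that idea, including the conflict pattern mentioned in the text (the same action both required and prohibited); field names and the conflict check are our own invention, not the system's actual policy format.

```python
# Hypothetical structured form of the bracketed policy, using the four
# modalities named in the text: allowed / prohibited (authorization)
# and required / waived (obligation).

policy = {
    "subject":   "User",
    "modality":  "required",
    "action":    "reset",
    "object":    "Password",
    "condition": "every 90 days",
}

def conflicts(p1: dict, p2: dict) -> bool:
    """Detect the conflict pattern from the text: the same action on the
    same object both required and prohibited for the same subject."""
    same_target = all(p1[k] == p2[k] for k in ("subject", "action", "object"))
    modalities = {p1["modality"], p2["modality"]}
    return same_target and modalities == {"required", "prohibited"}

forbidden = dict(policy, modality="prohibited")
print(conflicts(policy, forbidden))  # True
```

Because every bracketed token maps to an ontology concept, the same statement stays readable for humans and evaluable by the machine.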

Another example where an ontology shows its strength is the central storage of provisioning rules in and from connected systems, such as schema mappings (Figure 3 and Figure 4).

Let's have a look at a typical usage scenario in IAM systems: when did the person's last login occur?

An ontology representing concepts related to the login.

A semantic system does not simply store the timestamp; it stores the fact that it represents a point in time, whatever format is chosen for storage.

Ontology on relations between different forms of timestamp representations

Other useful information that we can derive from this fact and use/provide via ontologies could be

  • the format,
  • conversion factors,
  • the origin (Unix, DB, AD, LDAP).
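To make the point concrete, here is the same instant in three of the representations named above. The conversion factor between the Windows/AD FILETIME epoch (1601-01-01) and the Unix epoch (1970-01-01) is 11,644,473,600 seconds; the chosen timestamp value is arbitrary.

```python
from datetime import datetime, timezone

# One point in time, three representations: Unix epoch seconds,
# AD FILETIME (100-ns intervals since 1601-01-01 UTC), and LDAP
# GeneralizedTime. The offset between the two epochs is fixed.

AD_EPOCH_OFFSET = 11_644_473_600          # seconds between 1601 and 1970
unix_ts = 1_700_000_000                   # Unix: seconds since 1970-01-01 UTC

ad_filetime = (unix_ts + AD_EPOCH_OFFSET) * 10**7   # AD: 100-ns ticks since 1601
dt = datetime.fromtimestamp(unix_ts, tz=timezone.utc)
ldap_generalized = dt.strftime("%Y%m%d%H%M%SZ")      # LDAP GeneralizedTime

# Round-trip back from the AD representation:
assert ad_filetime // 10**7 - AD_EPOCH_OFFSET == unix_ts
print(ldap_generalized)  # 20231114221320Z
```

A semantic system that knows these are all 'points in time' can apply such conversions automatically, instead of treating each value as an opaque number or string.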

MicroService Architecture

Modern system architectures and development methods use lean processes, with short, iterative cycles and constant improvement of the implemented elements. This approach has not arrived in the IAM/IAG world yet. Large monolithic systems are still being developed that are, at best, extendable by plugins and add-ons.

Application servers, originally thought of as containers for multiple services and applications, most often suffer from interdependencies between the hosted JAR, WAR or EAR packages, or simply from sharing the same system resources. The common 'solution' to this is to deploy more application servers and distribute the applications among them. Ask yourself: how many application server instances do you have running, and how many applications do they provide per instance?

A modern architecture must above all support one paradigm: Rapid Deployment. This is not limited to the usage of Docker or similar approaches and methods.

Our MicroServices are an approach to IAM/IAG system development that follows the methodology of Rapid Deployment. The main characteristics of MicroService architectures are:

  • The ability to easily exchange single components of the architecture.
  • Smart endpoints & (less smart) pipes.
  • Complete redevelopment of components within the shortest time (approximately 14 days)
  • Fast (automated) Infrastructure
  • A MicroService is specialized in performing its role, and no more. Its purpose determines its function, not the technology.

Micro Services are also associated with decentralized data storage (persistence) and are dynamically connected to each other.

The MicroServices architecture consists of a few layers which have well-defined input and output functions to their respective neighboring layers.

The first layer (Controller) provides the MicroService's self-description and control functions:

  • status
  • who am I
  • who are my neighbors
  • what is my task

This layer facilitates the dynamic connection between Services.
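The self-description exposed by the Controller layer could look like the sketch below. The class, keys and values are hypothetical and only mirror the four questions listed above; they are not the actual interface of our services.

```python
# Hypothetical sketch of a MicroService's Controller-layer self-description;
# the keys mirror the four questions above, the values are illustrative.

class Controller:
    def __init__(self, name: str, task: str, neighbours: list):
        self.name = name
        self.task = task
        self.neighbours = neighbours

    def describe(self) -> dict:
        """Answer: status, who am I, who are my neighbours, what is my task."""
        return {
            "status":     "running",
            "who_am_i":   self.name,
            "neighbours": self.neighbours,
            "task":       self.task,
        }

svc = Controller("transliterate-1", "transliteration", ["regex-test-1"])
print(svc.describe()["who_am_i"])  # transliterate-1
```

Because each service can answer these questions itself, services can discover and connect to each other dynamically instead of relying on static wiring.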

The second layer (Transform) is responsible for the actual task the MicroService has been assigned. It is the workhorse, accepting data streams which are manipulated according to its functions. When the transformation is completed, the resulting data stream is forwarded to the output layer.

The service obtains the appropriate transformation rules for its 'position' and job in the system architecture from the Ontology/GraphDB component, and performs the transformation of the input data based on them.

The persistence of the service state is realized by the Persistancer layer (through which communication basically takes place).

The transformations at this point also consult the Policy System, in which obligation (required/waived) and authorization (allowed/prohibited) and possible conflicts (required but prohibited) are evaluated and applied to the process data and the transformation.

A MicroService in our environment has no function 'per se' in the first place, but gets its 'purpose' directly and dynamically, in the form of a notification from the ontology. MicroServices acting in this environment can perform more 'complex' or 'smart' features, or only 'simple' activities.

Typical purposes for 'simple', less smart MicroServices:

  • Transliteration of characters (ü -> ue)
  • Transformations and conversions
  • Regex Tests
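The first of these 'simple' purposes, transliteration of German umlauts, fits in a few lines. This is only an illustrative sketch of such a service's transform function, not production code from our stack.

```python
# Illustrative 'simple' MicroService purpose: transliteration of
# German umlauts (ü -> ue, etc.), as named in the list above.

TRANSLIT = {"ä": "ae", "ö": "oe", "ü": "ue", "ß": "ss",
            "Ä": "Ae", "Ö": "Oe", "Ü": "Ue"}

def transliterate(text: str) -> str:
    """Replace each umlaut with its ASCII digraph; pass other characters through."""
    return "".join(TRANSLIT.get(ch, ch) for ch in text)

print(transliterate("Müller"))  # Mueller
```

Exactly because the purpose is so narrow, such a service can be redeveloped or replaced in days without touching the rest of the architecture.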

Typical complex (smart) purposes

  • Connecting external Applications and API
  • Transformation into standards (SAML request, OpenID, SCIM, LDAP, etc.)

Current status

In cooperation with our customers and by partnering with IHMC (Florida Institute for Human and Machine Cognition) we have already successfully implemented and tested a large part of the components in interaction with our product 'YIAMSuite' during the past months.

YIAMSuite is an IAM solution which considers relational interactions between master data and identity data, and has provided extensive Entity Management capabilities since 2011.

Entity Relationship Management System


With the solution developed by WedaCon and our partners, as well as with our more than 15 years of experience in the IAM/IAG range, we feel perfectly prepared to take Identity Management to the next level and essentially influence the area of Entity Relationship Management (ERM).

Download the full article as PDF here.


2016 EU GDPR: Goals, Impact and involved parties

In our first part we quickly introduced some of the main topics regarding the new EU General Data Protection Regulation (EU-GDPR), but we have not explained the details of the new rules. Within this second part we would like to dive deeper into the topics by starting with

  • Starting position
  • Goals
  • Impact
  • Involved parties

Starting Position

With Directive 95/46/EC, the European Parliament and the Council already planned in 1995 to harmonize the data protection laws within the EU. Because it took the form of a directive (and not a regulation, as we have now), it did not succeed, and the way data protection is implemented across the Union has become even more fragmented.

Under the impression of several decisions of the European Court of Justice and publications related to data privacy, the European Parliament, the Commission and the European Council reached an agreement in a so-called 'trilogue': a new regulation on the 'protection of individuals with regard to the processing of personal data and on the free movement of such data'.


Goals

The objectives and principles of Directive 95/46/EC remain sound, as stated in the considerations around the new regulation. Also stated there are a few reasons why this Directive did not reach its goals. The new regulation should therefore help to reach the following goals:

  • Binding rules for the whole EU
  • More control for individuals over their personal data
  • Rules that help to set global standards
  • Harmonized and simplified rules within the EU
  • Enforceability of these rules


Impact

As already stated, we are now dealing with a 'regulation', not a directive any more. As such, it is binding law with immediate validity, as also stated in Article 91: 'This Regulation shall be binding in its entirety and directly applicable in all Member States.' Because of this legal nature, the EU-GDPR will replace national regulations on data protection very soon, although special escape clauses are to be expected for some countries.

The regulation will become effective with its publication in the Official Journal of the EU, which is expected in Q2/2016. After a transition period of two years it will take full effect, finally replacing national regulations and requiring everyone doing business in the EU to comply with its rules.

Involved Parties

The central viewpoint is the 'affected person' (someone resident in the EU), whose personal data is processed by a Controller, or by a Processor assigned by the Controller. The data might be transferred to a third party as well.

This means: it does NOT matter if the third party, Controller or Processor is NOT established in the EU.

Controller and Processor have to make sure that they process the data according to the regulation, and they have to be able to prove it.

Security 2016

Evidence of the inefficiency of the old 1995 directive is also provided by a recent paper published by two scientists from the Universities of Hamburg and Siegen (Germany).

Stay tuned for the next part in our series!


2016 EU General Data Protection Regulation

Article 8(1) of the Charter of Fundamental Rights of the European Union and Article 16(1) of the Treaty lay down that everyone has the right to the protection of personal data concerning him or her. Following this fundamental right, the European Parliament and the Council have agreed on a regulation to make this happen. This regulation [pdf] is expected to become applicable in Q2/2018.

This article is planned to be the first in a series explaining what the new EU General Data Protection Regulation means to European citizens and organizations in terms of personal data, and also to those organizations and companies offering goods and services to any individual residing in the EU.

This first document offers a short abstract of what is to be expected, while the following docs will go further into the details of the different aspects. The regulation is (as of today) not yet published in the Official Journal, which is expected to happen in the next weeks. However, the texts (and regulations) available today will not change significantly any more.

Should I care?

The regulation will have a massive impact on any company dealing with personal data of EU citizens: it does not matter if the company is based in the EU, the US, or outer space. In other words: any organization offering services to individuals in the EU market is affected, and needs to make sure to take the new regulations into account by Q2/2018 at the latest, when the regulation is finally in place.

What if I do not care?

If, as an organization offering services in the EU, you violate (which means: do not follow) the regulations, you risk penalties of up to EUR 20 million or even 4% of your annual worldwide turnover.
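To make the arithmetic of the fine concrete: the regulation's maximum fine is the higher of the two amounts, so for large companies the 4% figure dominates. The function below is a trivial illustration, not legal advice.

```python
# Illustration of the maximum-fine rule: EUR 20 million or 4% of annual
# worldwide turnover, whichever is higher.

def max_fine(annual_turnover_eur: float) -> float:
    return max(20_000_000, 0.04 * annual_turnover_eur)

print(max_fine(1_000_000_000))  # 40000000.0 (4% exceeds the EUR 20M floor)
```

For a turnover below EUR 500 million, the EUR 20 million floor applies; above it, the 4% share takes over.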

So what does 'personal data processing' mean?

Basically: everything you can do with data, from collection to erasure. Data is personal as long as it can be linked to, or belongs to, a real person.

But is it still allowed?

For sure, yes. But only if you do it in a lawful, fair and transparent manner, while taking into account and providing: purpose limitation, data minimization, accuracy, storage limitation in time, and integrity/confidentiality.

That's all?

No. Processing of data is lawful only if one of the following applies: given consent of the subject, performance of a contract or legal obligations, vital interests, public interest or legitimate interests.

Legitimate Interests?

Yes. If you have a legitimate interest (or any of the other points above applies), you can work with the data. But you have to make sure your processing is transparent to the subject of the data.

What does transparency mean?

As soon as you start collecting the data, you have to inform the subject about this process in clear and understandable language. You have to provide a couple of details to the subject, including the why, who, when, for whom, how long, his/her rights, your obligations, and a few more things.

And while we are talking about obligations: you need to be prepared to offer your customers the right to limit the data processing, which effectively gives them the 'right to be forgotten'. You need to be able to easily transfer the data to another party on request as well. And you must be able to prove all this.

Still relaxed?

Because of the transparency requirements, you have to have processes and regulations in place to offer all this – and you are required to provide evidence of this on request! Based on the specific risk to the personal data you manage, you have to have data protection management processes in place. If you have more than 250 employees, you have to have an overview of which processes are affected and how you process personal data.

Privacy by Design?

Required. According to Art. 23 of the regulation, you have to implement appropriate techniques to provide data protection by design and by default.

Scary! So what's next?

In the next articles in this series we will have a deeper look into the different areas and requirements. Stay tuned!

21.12.2015 - 'do good and talk about it'-series, volume 13

About Relationships

Identity Management is, and always was, 'Relationship Management' as well. Identities (= users) have relations, and those relations define their 'inner' meanings, roles and authorizations. In 2008, we at WedaCon started to consistently handle those 'related' objects with the same technologies, ideas and concepts as we do with the identities within our projects.

Since then, we have called it 'Entity Management'. Over the past years, we realized that those relations became the most powerful and usable part of the systems we designed and managed for our customers. Therefore, today we call it 'Entity Relationship Management'.

Recent initiatives in the Identity and Access Management industry are heading in the very same direction, and this made us 'rethink' our approaches and compare them to what several IAM masterminds and colleagues propose as the 'next generation' thing.

With the Internet of Things (or of Everything; Industry 4.0; put-in-your-favourite-buzzword-here), we have to deal with billions of objects and their relations to each other.

I would like to point out and talk about two terms here: relationships and 'things' (or, to use a more common and elegant term: entities). As a great friend and believer in philosophical approaches, I would like to re-use what all those wise minds of the past 3000 years have found out about relations and entities:

Maybe the most important and one of the first appearances of these terms can be found around the idea of 'entities'. According to Wikipedia, an entity is 'something that exists in itself, actually or potentially, concretely or abstractly, physical or not.'

In other words, an entity is something that exists and can describe many different things like 'things', 'properties', 'events', 'relations', or all of them at once. Especially the last one ('all of them at once') gives the term 'entity' the meaning of something that 'is'.

But just knowing what an entity is (or might be) is not enough. To be able to describe entities and all the potential and real situations, we need a systematic approach. Within philosophy, this is known as 'Ontology'. Ontologies provide a common vocabulary for a given domain and formally define the meaning of entities, their terms, properties and the relationships between them. In the end, we do not deal with 'data' any more, but with knowledge.

Let me explain that using a 'country' data model: in most systems currently in use for IAM, we will find 'countries' as a table with all possible values. Those values are represented as strings, and the value 'Spain' means: nothing. It's just a string. The table might include other details (dial code, ISO code and so on), but this data is 'static'. Introducing new concepts (or even newly learned facts) is not possible (at least not easily).

An ontology of countries tries to collect and provide 'knowledge'. 'Spain is a country' (subject-predicate-object) is a very simple knowledge representation:

The important thing here is that the ontology is used to describe the data: The concept of a country can have borders, dialcodes and IsoCodes, it can have relations. New concepts regarding countries (eg: which departments of a given company operate in a certain country) could be linked by an ontology describing an entity 'department'.
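
To make the subject-predicate-object idea concrete, here is a minimal sketch in plain Python (a real system would use RDF/OWL tooling and a graph database; all triples and predicate names below are illustrative, not from an actual ontology):

```python
# A minimal subject-predicate-object store, sketched in plain Python.
# Each fact is one triple; new concepts link in without schema changes.

triples = {
    ("Spain", "is_a", "Country"),
    ("Spain", "has_dialcode", "+34"),
    ("Spain", "has_isocode", "ES"),
    ("SalesEMEA", "is_a", "Department"),
    ("SalesEMEA", "operates_in", "Spain"),
}

def query(subject=None, predicate=None, obj=None):
    """Return all triples matching the given (partial) pattern."""
    return {
        (s, p, o) for (s, p, o) in triples
        if (subject is None or s == subject)
        and (predicate is None or p == predicate)
        and (obj is None or o == obj)
    }

# 'Spain' is no longer just a string: we can ask what we know about it,
# and the later 'department' concept links to it via operates_in.
facts_about_spain = query(subject="Spain")
departments_in_spain = query(predicate="operates_in", obj="Spain")
```

The point of the sketch: adding the 'department' facts required no change to the 'country' part, only new triples.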

We think that building Identity- sorry: Entity Relationship Management systems with the help of entities and ontologies will give us new ways to manage all the entities (things) we need to deal with in the upcoming 'Internet of Things' (or 'Internet of Entities'?).

The following video gives an overview of the power of entity relations, using graph-database technologies and ontologies.

GraphDatabases as ERM Backend

This is a small excerpt from our current white paper on 'Semantic Entity Relationship Management', which will be released soon. If you can't wait for it, get your personal copy by contacting us...

Feel free to contact us.

16.06.2015 - 'do good and talk about it'-series, volume 12

Since 2001, WedaCon Informationstechnologien GmbH has been helping its customers to reach their goals regarding everything related to identity and access management (and access governance, to be complete). In this series, we would like to give you, the valued reader, a quick overview of what we have achieved. Others just call it 'Success Stories'. Today we will talk about...

Password SelfService Portal


Imagine you are working in an environment which has faced incredible growth during the past few years, where everyone was just happy to work and did not have to bother about security and passwords. In fact, many users are still working with their initial password, and no real feeling for security exists. How do you bring security awareness into such an environment?


Besides the process work, we developed and implemented a fork of an open-source password portal and extended/adjusted a couple of functions to allow the user

  • to set their own security level (and, based on this, a password policy)
  • to perform password self-service (2-factor password reset using a device and/or challenge-response functions)
  • to automatically be assigned/denied groups and profiles based on the chosen security level
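
The core idea behind the list above can be sketched in a few lines: the self-chosen security level drives both the password policy and the automatic group assignment. All level names, policy values and group names here are illustrative assumptions, not the customer's actual configuration:

```python
# Illustrative only: level names, policy values and groups are made up.
POLICIES = {
    "basic": {"min_length": 8,  "require_special": False},
    "high":  {"min_length": 12, "require_special": True},
}

GROUPS_BY_LEVEL = {
    "basic": {"portal-users"},
    "high":  {"portal-users", "secure-share"},
}

def apply_security_level(level):
    """Return the password policy and group set for a chosen level."""
    return POLICIES[level], GROUPS_BY_LEVEL[level]

# Choosing 'high' tightens the policy and grants the extra group.
policy, groups = apply_security_level("high")
```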


Today, the Password SelfService is rolled out and used by more than 20,000 users. During the first 2 months of operation, we had a total of only 26 errors and incidents, mainly because of failed synchronization of password details into connected systems. The next phase will be used to extend the usage of the security levels to other systems and resources. There are plans to extend this concept further by denying (write) access to areas with a lower security level while you are working in a higher one; a concept we already implemented for another customer years ago.

Like what you just read? Need more information and references, where we have successfully applied our ideas?

Feel free to contact us


European Identity & Cloud Conference 2015

EIC 2015 took place from May 5th to 8th, 2015 in Munich. More than 600 participants from all over the world met again in Munich, to discuss and learn about the future of Identity Management and Cloud technologies.

WedaCon proudly sponsored the coffee breaks again!

During the conference, WedaCon's founder and CEO Thorsten Niebuhr was a member of several panels discussing ABAC, RBAC and the future of LDAP directories.

26.03.2015 - 'do good and talk about it'-series, volume 8

Since 2001, WedaCon Informationstechnologien GmbH has been helping its customers to reach their goals regarding everything related to identity and access management (and access governance, to be complete). In this series, we would like to give you, the valued reader, a quick overview of what we have achieved. Others just call it 'Success Stories'. Today we will talk about...

Criteria Groups


We all know: managing groups can become complex. Often, administrators tend to take the quick path and assign rights to users directly, instead of assigning the user to a group and then allowing the group to access the resource. On the other hand, how many groups do you have with only one member? And wouldn't it be cool if the groups filled and maintained themselves?


Role-Based Access Control (RBAC) and Attribute-Based Access Control (ABAC) are two common concepts that are widely used in IT to manage access to resources. But who is managing the role, and who is managing the attribute? And how do you handle exceptions?
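
The contrast between the two models can be sketched briefly (role, attribute and rule names below are made up for illustration): RBAC asks whether a user holds a role, while ABAC evaluates attributes against a rule - and in both cases somebody has to maintain that role or those attributes:

```python
# Illustrative sketch; all role and attribute names are made up.

def rbac_allows(user_roles, required_role):
    """RBAC: access depends on holding a role (maintained by whom?)."""
    return required_role in user_roles

def abac_allows(user_attrs, rule):
    """ABAC: access depends on attributes matching a rule
    (attributes maintained by whom?). rule maps attribute -> value."""
    return all(user_attrs.get(k) == v for k, v in rule.items())

user = {"department": "HR", "location": "Berlin"}
rbac_ok = rbac_allows({"hr-reader"}, "hr-reader")
abac_ok = abac_allows(user, {"department": "HR"})
```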

In our customers' environments we prefer to implement a relational management of entities (which can be a location, department, device, user, whatever). Instead of 'Identity Management', we call this 'Entity Management'.

The concept of 'entity management' in conjunction with a relational LDAP directory design (yes, LDAP can be relational) allows us to implement a completely new way of managing groups. We call them 'Criteria Groups'.


A Criteria Group is a kind of dynamic group (there are several RFCs for LDAP dealing with dynamic approaches). The dynamic nature is defined by a kind of LDAP query: every object matching the query will be a member of the group. Fine! What about manual assignments? They work; e.g. OpenLDAP and Novell eDirectory can easily do that. But that opens up the problem space again: is a group which has manual assignments still a 'dynamic' one?


Our criteria groups work based on 'variance analysis'. Besides the usual dynamic element (who should be in the group?) we have an additional one (who is currently in the group?). By doing a regular comparison, we can eliminate incorrect assignments (e.g. manual ones).
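
The comparison step is simple set arithmetic; here is a sketch (member names are invented, and in reality the 'should be' set would come from the criteria/LDAP query and the 'currently is' set from the group object):

```python
# Variance analysis, sketched: compare who SHOULD be in the group
# (the criteria) with who currently IS, then correct the difference.

def variance_analysis(should_be, currently_is):
    """Return (to_add, to_remove) so the group matches its criteria."""
    should, current = set(should_be), set(currently_is)
    return should - current, current - should

# 'carol' was assigned manually and no longer matches the criteria;
# 'dave' matches the criteria but is missing from the group.
to_add, to_remove = variance_analysis(
    should_be={"alice", "bob", "dave"},
    currently_is={"alice", "bob", "carol"},
)
```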

Explicit assignments (and also explicit denies) are still available, but now using standard group membership assignment.

Like what you just read? Need more information and references, where we have successfully applied our ideas?

Feel free to contact us

18.03.2015 - 'do good and talk about it'-series, volume 5

Since 2001, WedaCon Informationstechnologien GmbH has been helping its customers to reach their goals regarding everything related to identity and access management (and access governance, to be complete). In this series, we would like to give you, the valued reader, a quick overview of what we have achieved. Others just call it 'Success Stories'. Today we will talk about...

Relational LDAP Services


Lightweight Directory Services are somewhat strict. They have a schema, which you have to follow. And they are read-optimized, so perfect for access control and identity management.

But they lack a feature that databases offer: they are not relational, which means you have to have all required attributes and information on the one object you query. Sure, you can do more than one query, but nearly all systems using LDAP require you to deliver all the information they request in ONE call.


In the real world, an employee (user) belongs to a department, which belongs to an organization (you can add more relations like countries, locations, cost centers, etc. here). In our approach, we link exactly these elements together: a user is linked to a department, which is linked to an organization. Let's call them all 'Entities'.

Based on these relations, the entities can 'inherit' settings from each other. Why? Well, imagine you can simply assign a new service (e.g. a group) to a department, and everyone in that department will get the service automatically. The department is renamed? Well - rename the department. And every user belonging to it is automatically updated.
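
A sketch of that inheritance idea (entity and service names are made up; in the real system these would be LDAP objects linked by DN references): a service assigned to the department is resolved for every user by following the relation.

```python
# Illustrative entities linked by references; names are invented.
departments = {
    "dept-sales": {"org": "org-acme", "services": {"crm-access"}},
}
users = {
    "alice": {"department": "dept-sales", "services": {"mail"}},
    "bob":   {"department": "dept-sales", "services": set()},
}

def effective_services(user_id):
    """Resolve a user's services by following the link to the department."""
    user = users[user_id]
    dept = departments[user["department"]]
    return user["services"] | dept["services"]

# Assign a new service to the department; every member inherits it.
departments["dept-sales"]["services"].add("fileshare")
```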


Using an Identity Management system, we 'flatten' the relational data (directly on the event itself) from the related entities. That means an event like 'change organization name' triggers the event 'update the organization information' on all users belonging to that organization. Within seconds.
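
The flattening step can be sketched as an event handler (a toy model with invented entity names; the real system does this through the IdM event pipeline): the organization is updated first, then the denormalized copy on every linked user.

```python
# Illustrative 'flatten on event' sketch; entity names are made up.
orgs = {"org1": {"name": "ACME Ltd"}}
users = {
    "u1": {"org": "org1", "orgName": "ACME Ltd"},
    "u2": {"org": "org1", "orgName": "ACME Ltd"},
}

def rename_organization(org_id, new_name):
    """Event handler: update the org, then flatten the change
    to the denormalized copy stored on every linked user."""
    orgs[org_id]["name"] = new_name
    for user in users.values():
        if user["org"] == org_id:
            user["orgName"] = new_name

rename_organization("org1", "ACME Group")
```

The payoff is the one described above: consumers still get everything in one LDAP call, because the flattened attribute lives on the user object itself.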


We did the first relational LDAP implementation in 2008. Since then, this system has been operating as expected. In addition, we quickly found out that this implementation can serve as a kind of 'virtualization layer', which decouples the 'real' organization structure (often driven by financial aspects) from the requirements of IT systems and administration.

Today, we do not fear any organizational rebuild any more. We just adjust the relational rules and policies. A recent re-organization at a customer site affecting more than 1000 users took us just two days to adapt the system to. New organizations and departments are integrated within 1 working day.

And a complete provisioning of a new user, with all rights and services assigned, happens within 2 minutes, targeting more than 20 applications, services and databases.

Like what you just read? Need more information and references, where we have successfully applied our ideas?

Feel free to contact us

Enterprise Identity & Access Management 2015

WedaCon was one of the main sponsors at European Identity & Access Management Conference in Berlin 2015.

Our founder and CEO Thorsten Niebuhr talked about Entity Relationship Management and how this will bring IAM to a new level. Other topics during the conference:

  • ABC4Trust EU Project
  • IAM Governance Structures
  • Internal Communications for IAM Teams and Employees
  • Role-Based IAM
  • and many more

15.02.2015 - 'do good and talk about it'-series, volume 4

Since 2001, WedaCon Informationstechnologien GmbH has been helping its customers to reach their goals regarding everything related to identity and access management (and access governance, to be complete). In this series, we would like to give you, the valued reader, a quick overview of what we have achieved. Others just call it 'Success Stories'. Today we will talk about...

Security Levels based on SmartCard Login


How do you protect sensitive data (HR, innovations, whatever) in a highly complex, globally operating company? The challenge here was to establish a completely secured environment for specific teams inside the enterprise, while allowing them to use the enterprise's global IT structure in much the same way as the rest of the participants do.


The design was based on security levels, and to reach the highest access and security level, the individual seeking access to sensitive data had to use a smartcard to log in (2-factor authentication). Once at this security level, the user was able to access the secured data, but was not able to write (store) information to any device that had a lower security level.
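
That write restriction is essentially a 'no write down' rule, similar in spirit to the Bell-LaPadula *-property. A minimal sketch, with level names invented for illustration:

```python
# Illustrative 'no write down' check; level names are made up.
LEVELS = {"basic": 0, "high": 1, "veryhigh": 2}

def may_write(session_level, target_level):
    """A session may only write to targets at its own level or higher,
    so data from a high-security session never lands on a lower level."""
    return LEVELS[target_level] >= LEVELS[session_level]

# A smartcard login raises the session to 'veryhigh': writing within
# the secured area works, writing to a 'basic' device is denied.
allowed = may_write("veryhigh", "veryhigh")   # True
denied = may_write("veryhigh", "basic")       # False
```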


The security level assignments were made on several elements and used throughout them. These elements were implemented as modules, e.g. 'SecurePrint', 'SecureNetwork', 'SecureFileSystem', 'SecureLogin' or 'SecureMail'. For each module, we had security layers defined, ranging from 'Basic Security' through 'High Security' to 'VeryHighSecurity'.


The system was implemented in early 2003 and was in use until the end of 2013, when it was replaced by newer technologies based on new requirements.

During the 10 years of operation, the overall ideas and implementations proved to be a real benefit for day-to-day operations. There was not a single known security issue or data leakage.

Like what you just read? Need more information and references, where we have successfully applied our ideas?

Feel free to contact us

13.02.2015 - 'do good and talk about it'-series, volume 3

Since 2001, WedaCon Informationstechnologien GmbH has been helping its customers to reach their goals regarding everything related to identity and access management (and access governance, to be complete). In this series, we would like to give you, the valued reader, a quick overview of what we have achieved. Others just call it 'Success Stories'. Today we will talk about...

Knowledge Transfer


Information Technology is a complex topic. Life itself is another complex topic. And living and breathing for Information Technology is even more complex. To survive in today's multiplexed world, you need a good and solid understanding of the processes, opportunities and pitfalls surrounding you, not only in the IT sector, but also when dealing with the 'soft' facts and skills (some call it OSI model layer 8).


True, the word 'Design' doesn’t fit here. But we decided to have the same headline on all volumes in our series, so lets stick with it!

When we started our business, one of our main business areas was 'Training'. Nearly all of our staff were either a 'Microsoft Certified Trainer' or a 'Novell Certified Instructor', and held other certifications in project management and data security, to instruct others in the usage of the relevant technologies. Some had both of the top instructor certifications available in those days. So we know how to transfer knowledge; it's even in our name: 'Weda' is Sanskrit and simply means 'knowledge'.


We see knowledge transfer and build-up as a crucial element in, and throughout, any project. This is not only true for technology aspects, but also in terms of process and project management. And because we usually do not know all the processes and technology already available in our customers' environments, this transfer happens in both directions.

Our expertise on how to adopt, prepare and transfer information is maybe one of our greatest and most important skills!


Here are a few of the techniques we use:

  • Classical project management
  • ScrumBan (we have our own lean mix of Scrum and Kanban)
  • 4-eyes principles on new functions
  • Moving team members (let managers do support, supporters do development, and so on. Let them see what the others need)

Like what you just read? Need more information and references, where we have successfully applied our ideas?

Feel free to contact us

European Identity & Cloud Conference 2014

EIC 2014 was a great success for WedaCon. So far the biggest event we sponsored!

A few impressions...

CV Magazine Corporate Excellence Awards 2019

Most Innovative IAM Solution Provider 2019 - Germany

With pioneering technologies at its core, WedaCon has earned the prestigious award of Corporate Magazine's 'Most Innovative IAM Solution Provider 2019 - Germany'.

CV Magazine Innovation Award

Read the full article online or as PDF

Enterprise Security Magazine

WedaCon recognized as a member of the Top 10 Identity and Access Management Consulting/Services Companies in Europe 2019

A strong and secure access management system has long been one of the main pillars of a company's security infrastructure. 'By offering the best technological services and with several success stories to their credit, these service providers are constantly proving their worth in the field of identity and access management services.'

Top 10 IAM Service Provider Award
Read the whole story online!