PE-BPMN gives rise to a binary privacy analysis: it differentiates between data that has some form of protection and data that does not.
The simple disclosure report is a table whose columns are the data objects of the process and whose rows are the stakeholders (lanes). Each cell is marked V (visible), H (hidden), or -. The marking - means that the stakeholder does not see the data object in the process. Conversely, V means that the contents of the data object are fully visible to the stakeholder. H is the middle ground, denoting that the participant has the data object but it carries some form of protection. For example, a ciphertext is denoted with H in the table.
We also plan to add a marker A (accessible) for data that the stakeholder could open but does not open directly in the process.
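The construction of the simple disclosure report can be sketched as follows. The process representation is a simplifying assumption: we take as given, for each lane, which data objects it holds and which of those copies are protected; all concrete lane and data object names are illustrative, not from any real model.

```python
def simple_disclosure(lanes, data_objects, holdings, protected):
    """Return {lane: {data: 'V' | 'H' | '-'}}.

    holdings  : set of (lane, data) pairs the lane sees in the process
    protected : set of (lane, data) pairs where the held copy is
                protected (e.g. a ciphertext), hence marked H, not V
    """
    report = {}
    for lane in lanes:
        row = {}
        for d in data_objects:
            if (lane, d) not in holdings:
                row[d] = '-'   # lane never sees this object
            elif (lane, d) in protected:
                row[d] = 'H'   # held, but protected (e.g. encrypted)
            else:
                row[d] = 'V'   # contents fully visible
        report[lane] = row
    return report

# Illustrative model: the Analyst only ever holds an encrypted copy.
lanes = ['Hospital', 'Analyst']
data = ['record', 'enc_record']
holdings = {('Hospital', 'record'), ('Hospital', 'enc_record'),
            ('Analyst', 'enc_record')}
protected = {('Analyst', 'enc_record')}

report = simple_disclosure(lanes, data, holdings, protected)
print(report['Analyst'])  # {'record': '-', 'enc_record': 'H'}
```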
The simple data dependency analysis produces the data dependency matrix of the model. The relations it describes either follow directly from the data associations of the model or result from collaborative tasks with collaboration stereotypes. Essentially, the data dependency analysis gives an adjacency matrix of the process from the viewpoint of the data in the process.
We mark D (direct dependency) when data object A is an input to a task that produces data object B, meaning that B depends on A directly (through one task). If data object C in turn depends on B, then we mark the dependency between A and C with I (indirect dependency): C depends on A indirectly (through a path of more than one task).
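The D and I markers can be derived from the tasks' input/output relations by a transitive closure, sketched below under the assumption that each task is abstracted to a pair (inputs, outputs); the data object names are illustrative.

```python
def dependency_matrix(tasks):
    """Return {(a, b): 'D' | 'I'} meaning data object b depends on a.

    tasks: iterable of (inputs, outputs) pairs, each a set of data
           object names.
    """
    # Direct dependencies: one task turns a into b.
    direct = set()
    for inputs, outputs in tasks:
        for a in inputs:
            for b in outputs:
                direct.add((a, b))
    dep = {pair: 'D' for pair in direct}
    # Transitive closure: paths through more than one task are indirect.
    changed = True
    while changed:
        changed = False
        for (a, b) in list(dep):
            for (c, d) in list(dep):
                if b == c and (a, d) not in dep:
                    dep[(a, d)] = 'I'   # path of length > 1
                    changed = True
    return dep

# One task computes B from A, another computes C from B.
tasks = [({'A'}, {'B'}), ({'B'}, {'C'})]
m = dependency_matrix(tasks)
print(m[('A', 'B')], m[('B', 'C')], m[('A', 'C')])  # D D I
```

Note that a pair that is both directly and indirectly related keeps the stronger marker D, since closure only adds pairs not yet present.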
We can enhance the simple disclosure report with the data dependency matrix to arrive at an extended simple disclosure report. In addition to visibility, this gives a glimpse of the consequences of some data becoming visible to some party. Essentially, for every marker V in the simple disclosure report we consult the data dependency matrix to see which data objects the visible object depends on. Making the object visible to a party risks leaking something about the data it depends on. Other layers of analysis, e.g. leaks-when and sensitivity analysis, can then be used to study this risk in more detail.
For any data object in the model, each participant may have a subset of the annotations V, H, -, I and D. If V is present, then the participant clearly has full access to the data. Analogously, when there is only - or H, the participant does not have access to the data. However, a combination of - or H with D or I means that while the participant does not have direct access to the data object, it does see something derived from it. Hence, there is a possibility that something about this data leaks to that participant. These are the cases that should be studied further with leaks-when or sensitivity analysis to discover which information about the data actually leaks to the party.
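The combination step can be sketched by joining the two earlier structures: for each participant, flag every data object it cannot see (marked - or H) on which some object it does see depends. The input shapes and all concrete names are illustrative assumptions.

```python
def potential_leaks(disclosure, dependencies):
    """Flag possible indirect exposures per participant.

    disclosure  : {lane: {data: 'V' | 'H' | '-'}}
    dependencies: {(a, b): 'D' | 'I'} meaning b depends on a
    Returns {lane: set of hidden data objects that a visible object
    depends on} -- candidates for leaks-when / sensitivity analysis.
    """
    flagged = {}
    for lane, row in disclosure.items():
        suspects = set()
        for b, mark in row.items():
            if mark != 'V':
                continue
            # b is visible to this lane: anything b depends on but the
            # lane cannot read may partially leak through b.
            for (a, b2) in dependencies:
                if b2 == b and row.get(a, '-') != 'V':
                    suspects.add(a)
        flagged[lane] = suspects
    return flagged

# The Analyst sees an average computed from salaries it cannot read.
disclosure = {'Analyst': {'salary': '-', 'average': 'V'}}
dependencies = {('salary', 'average'): 'D'}
print(potential_leaks(disclosure, dependencies))  # {'Analyst': {'salary'}}
```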
Leakage detection is used to analyze more complex PE-BPMN models where the disclosure tables do not give a sufficient overview of the process (e.g. models with a lot of non-trivial, terminating branching). It detects whether some input data may end up at certain points of the model, e.g. at a task or with a participant. Leakage detection takes into account the possible executions of the process over the different branching choices (while satisfying the synchronization rules of grouped tasks).
This analyzer is currently experimental and not yet stable.
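The core idea of enumerating executions over branching choices can be illustrated with a deliberately simplified sketch: each exclusive gateway is abstracted to a list of alternatives, each alternative to the set of (participant, data) transfers it performs, and we check whether some combination of choices delivers the data to the point of interest. This toy encoding is an assumption for illustration, not the actual analyzer.

```python
from itertools import product

def may_reach(branch_options, target):
    """Check whether target can be delivered in some execution.

    branch_options: list of gateways; each gateway is a list of
                    alternatives, each alternative a set of
                    (participant, data) transfers on that branch.
    target        : a (participant, data) pair.
    Returns True if at least one combination of branch choices
    performs the target transfer.
    """
    for choice in product(*branch_options):
        transfers = set().union(*choice)
        if target in transfers:
            return True
    return False

# Two exclusive gateways; only one branch of the second one sends the
# plaintext record to participant C.
branches = [
    [{('B', 'enc_record')}, {('B', 'notice')}],
    [{('C', 'record')}, set()],   # second alternative transfers nothing
]
print(may_reach(branches, ('C', 'record')))  # True
```

Exhaustive enumeration like this grows exponentially with the number of gateways, which hints at why disclosure tables alone stop being informative for heavily branching models.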