sql-derivative-sensitivity-analyser_demo

Differences

sql-derivative-sensitivity-analyser_demo [2018/11/27 17:01]
alisa [Example model for SQL combined sensitivity analysis]
sql-derivative-sensitivity-analyser_demo [2019/01/09 14:41]
alisa [Running sensitivity analysis]
We are now ready to run the analysis. Click the blue button //Analyze//. Let us first set ε = 1 and β = 0.1. Click the green button //Run Analysis//. The most interesting value in the output is the //relative error//: an upper bound on the relative distance of the noisy output from the actual output, which holds with probability 80%. Unfortunately, there is no strict upper bound on the additive noise; it can in principle be infinite, though only with negligible probability. Hence we can only give a probabilistic upper bound on the noise, and the confidence level is in our case hard-coded to 80%.
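To see where a "relative error that holds with probability 80%" can come from, here is a minimal sketch. It assumes the mechanism adds Laplace noise with some scale b (a hypothetical value below; the analyser's actual noise distribution and scale may differ) and compares the empirical 80% quantile of the relative error against the closed-form Laplace quantile b·ln 5.

```python
import math
import random

def laplace_sample(scale):
    # Inverse-CDF sampling of a zero-mean Laplace distribution
    u = random.random() - 0.5
    return -scale * math.copysign(math.log(1 - 2 * abs(u)), u)

# Hypothetical numbers: true query output and noise scale b
true_output = 100.0
b = 5.0

# Empirical 80% quantile of the relative error |noise| / |true_output|
random.seed(0)
errors = sorted(abs(laplace_sample(b)) / abs(true_output)
                for _ in range(100_000))
q80 = errors[int(0.8 * len(errors))]

# For Laplace noise, P(|X| <= t) = 1 - exp(-t/b), so the 80% quantile
# of |X| is b * ln(5); dividing by the output gives the relative bound.
analytic = b * math.log(5) / abs(true_output)
print(round(q80, 3), round(analytic, 3))
```

The analytic bound b·ln 5 / |output| is what a tool can report without ever sampling; the simulation just confirms the interpretation "the noisy output is within this relative distance with probability 80%".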
  
We can now play around with the model and see how the error can be reduced:
  * Try to reduce β, e.g. try β = 0.01. This does not affect security in any way, but may give a smaller noise level.
  * Try to reset the scalings of //Table norm// to ''1.0'', or even try larger values. The error decreases, as we now consider smaller changes in the input (which means that we lose in security).
  * Try out different row sensitivities. Instead of ''rows: all ;'', try some particular row, e.g. ''rows: 0 ;'' or ''rows: 1 ;''. It can be seen that ships with higher speed have larger sensitivity and hence add more noise, since changing their locations even a little may affect the arrival time more significantly.
  
  
We can now play around with the model and see how the error can be reduced:
  * Increasing the allowed guessing advantage decreases the error. At the extremes, we get error ∞ if we want advantage 0%, and error 0 if we allow advantage 100% (more precisely, if we allow posterior probability 100%, which happens already for a smaller advantage).
  * Try to decrease the allowed guessing radius (e.g. set it to 1). In general, it becomes more difficult for the attacker to make a guess, so the error decreases.
  * Try to increase and decrease the initially known ranges on latitude and longitude. While this directly affects the prior probability (which can be viewed by clicking //View more// in the analysis result), the upper bound on the posterior probability may change less. Technically, differential privacy makes the "sensitive area" similar to its neighbouring surroundings, not to the entire set of possible values, so increasing the range may have little effect on the posterior probability. As a result, if the advantage level is kept the same, increasing the range may also increase the error.
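To make the relation between the prior probability and the guessing advantage concrete, here is a small sketch using the standard Bayes-factor bound of ε-differential privacy: observing an ε-DP output changes the attacker's posterior odds by a factor of at most e^ε. This is an illustrative textbook bound, not necessarily the exact accounting that the analyser performs.

```python
import math

def posterior_bound(prior, eps):
    # Under eps-DP the posterior odds are at most e^eps times the prior
    # odds, so: posterior <= prior*e^eps / (1 - prior + prior*e^eps).
    return prior * math.exp(eps) / (1 - prior + prior * math.exp(eps))

# How the advantage bound (posterior - prior) varies with the prior, eps = 1
for prior in (0.1, 0.5, 0.9):
    post = posterior_bound(prior, 1.0)
    print(f"prior={prior:.1f}  posterior<={post:.3f}  advantage<={post - prior:.3f}")
```

Running this shows that the same ε gives different advantage bounds for different priors, which mirrors the observation above: widening the known range changes the prior a lot, but the posterior bound (and hence the advantage) may move much less.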
sql-derivative-sensitivity-analyser_demo.txt · Last modified: 2021/06/14 11:22 by alisa