Bit security has received a lot of criticism lately – can you explain why that is?
So, there are a few different issues here. One is the loss of precision compared to the concrete security approach, where you can take a more detailed look at the resources and advantages; bit security compresses these numbers into one. There you will lose something. This is a simplification, and a valid one in my opinion in many cases, but certainly not in all. Another point is the work by Bernstein and Lange, which pointed to issues with non-uniform adversaries. These are adversaries that we know exist, but we're not sure how to find them (so, for example, we don't know how to program them).
What is the focus of your research?
In this work we look at a different aspect, essentially. As I said, Bernstein and Lange proposed several countermeasures one can take, but they didn't look at the advantage function, the quantity most people think of as the distinguishing advantage. We think that if you quantify security in terms of bit security, what you should be looking at is the quantity alpha times delta squared, where alpha is the adversary's output probability and delta is the conditional distinguishing advantage.
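As a rough sketch of what that measure looks like in practice (the function names and numbers below are illustrative, and the formula log2(runtime / (alpha * delta^2)) is our reading of the measure described above, not a quote from the interview):

```python
import math

def bit_security_decision(runtime, alpha, delta):
    # Hypothetical helper: estimates bit security of a decision primitive
    # as log2(runtime / (alpha * delta^2)), i.e. using the alpha * delta^2
    # advantage measure discussed above.
    return math.log2(runtime / (alpha * delta ** 2))

def bit_security_traditional(runtime, epsilon):
    # Hypothetical helper: the more familiar log2(runtime / advantage).
    return math.log2(runtime / epsilon)

# An adversary that always outputs a guess (alpha = 1) and achieves a
# small conditional distinguishing advantage delta = 2^-40 after about
# 2^20 steps of work.
T, alpha, delta = 2 ** 20, 1.0, 2 ** -40
print(bit_security_traditional(T, delta))      # 60.0
print(bit_security_decision(T, alpha, delta))  # 100.0
```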
So your research is redefining decision problems?
Not the problems themselves, but how you would measure the security of decision primitives.
What are the real-life implications of your work, if any?
So, that is kind of an interesting question. When talking about real-life applications, you usually look at adversaries that have a constant advantage. For example, I have an adversary that with probability ½ will break the scheme, or with probability ½ is able to distinguish something. There, it doesn't really matter that much whether you look at the distinguishing advantage or its square: if you have a success probability or a distinguishing advantage of, say, ¾, which is very large, then its square is 9/16, which is still large. In that sense, if you're talking about real-world adversaries that go ahead and break something, it won't make much of a difference. It's more about getting a cleaner way of reducing between primitives. But there are also potential implications, for example for approximate sampling in lattice-based cryptography, where this does have real-world impact in terms of how much precision you need in order to prove that your scheme is still secure. So it does have some implications.
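To make that contrast concrete, here is a small illustrative calculation (the numbers and framing are ours, not the interviewee's): squaring a constant advantage like ¾ still leaves something large, while squaring a tiny advantage like 2^-40 doubles the exponent, which is the regime where precision arguments for approximate sampling live.

```python
import math

# For a constant advantage, squaring barely changes the picture; for the
# tiny advantages that appear in precision arguments (e.g. approximate
# sampling for lattice schemes), squaring doubles the exponent.
for label, adv in (("3/4", 0.75), ("2^-40", 2 ** -40)):
    print(f"{label}: -log2(adv) = {-math.log2(adv):.1f}, "
          f"-log2(adv^2) = {-math.log2(adv ** 2):.1f}")

# Prints roughly:
#   3/4: -log2(adv) = 0.4, -log2(adv^2) = 0.8
#   2^-40: -log2(adv) = 40.0, -log2(adv^2) = 80.0
```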
But not for your average person?
Not really, but that's really a good thing. Bit security has been around for a while and people have an intuition about it, which isn't necessarily wrong. The nice thing about bit security is that if I tell you something has 100 bits of security, you'll think, oh, that's pretty secure. But if I tell you something has only 50 bits of security, you'll probably steer clear of it. And I don't want to change that at all; I think it's useful to talk about and quantify security in this way.

You can see Michael's complete presentation from EuroCrypt 2018 here.