gthompson
Newcomer II

Strong(er) Authentication? Interested in your thoughts.

As an (ISC)2 board member, but more importantly as an executive for a large bank, I'm very interested in our community's thoughts on how to solve the authentication problem.

 

What worries me specifically are the sensationalized stories emerging about biometric authentication being defeated. 

 

https://www.theregister.co.uk/2017/11/13/iphone_x_face_id/?utm_source=dlvr.it&utm_medium=facebook

 

http://www.dailymail.co.uk/sciencetech/article-4522062/Brothers-trick-HSBC-voice-recognition-softwar...

 

My concern is that these articles tell only part of the story. In reality, most if not all "authenticators" can be defeated. The question as always, in my opinion, should be: "Is the control commensurate with the risk?" I would think that biometric face recognition is a far sight more effective than passwords (as an example) for a lot of reasons - even if it can be fooled.

 

I think these stories can actually be dangerous because they scare people away from using incrementally better controls. And so, because of stories like the ones above, many will opt out of the stronger biometric control in favor of an easily guessed or phished password. Nothing is perfect, but some controls are less imperfect than others, and we need to keep this in mind. How do we educate people toward making pragmatic and informed decisions on how to keep themselves safe?

 

I'm interested in your perspectives.

 

Greg T.

Toronto, Canada

15 Replies
CISOScott
Community Champion

I agree. Articles can scare people off through misrepresentation and the omission of key facts, but that is how they sell papers, or these articles. We have to proclaim the virtues of these features as better security, not perfect security. As security practitioners we have to sell the idea that no solution is perfect, but that every improvement makes it harder for the bad actors. We must keep the idea that it is all about improving from where we are, not waiting to act until we have a perfect solution.

jeffl
Viewer II

Yes, the negatives are being sensationalized.  That is to be expected as that is what gets eyes on the content.

 

Having said that, I tend to pay more attention to the articles exposing the weaknesses in new technologies.  When the technology works the way it is supposed to, everything is hunky dory.  I want to know the ways it can fail or be misused.  I do not need to defend against things being used the way they are supposed to be (malicious tech aside), but I do need to defend against when they break, are misused, or abused.

 

In this case, passwords and their vulnerabilities are well understood.  Biometrics, far less so.  When a password is suspected of being compromised, it can easily be changed.  If a biometric is compromised, how do you change it short of changing the underlying technology?

swissjon
Viewer

Any second factor has to be better than password alone.

 

The defense industry is obliged to use two-factor authentication.  I have experience of running a system where users had either a hardware token on a key ring or an app on their phone that generated a code.  It worked pretty well, and not many users complained, though maybe they knew that the customer was insisting on it.
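For what it's worth, the codes those key-ring tokens and phone apps generate are typically plain TOTP. A minimal sketch of RFC 6238 (HMAC-SHA1 over a 30-second time-step counter, then dynamic truncation), using only the Python standard library:

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, at_time=None, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 time-based one-time code: HMAC-SHA1 over the time-step
    counter, then dynamic truncation per RFC 4226."""
    counter = int((time.time() if at_time is None else at_time) // step)
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                       # low nibble picks the window
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 Appendix B test vector: secret "12345678901234567890", T=59s
print(totp(b"12345678901234567890", at_time=59, digits=8))  # 94287082
```

The server runs the same computation and accepts a small window of adjacent counters to tolerate clock drift, which is why the codes "just work" without any network round trip from the token.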

 

It is hard to impose a system!  If biometrics work they are ideal, but I have the feeling we are not there yet.

Early_Adopter
Community Champion

To the last point, we could all just get new retinas... Thoughts as follows.

 

All controls and technologies can be defeated or worked around, and will get defeated both in the press and in real life, especially if the problem space rewards attackers for doing so. Not always using the same methods, and not being predictable, is I think important long term.

 

I see biometric auth (and maybe all auth) as a bit like climbing a rock face: three points of contact is going to be the best option as far as safety is concerned.

 

So if you did facial recognition from a camera, voice print from a mic, gait analysis over time from an accelerometer, fingerprint from a reader, and monitoring of heart rate (I think Apple have patented this), you'd have five biometrics and could select any three.
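As an illustration of that three-of-five idea, here's a hypothetical k-of-n fusion policy in Python. The factor names, scores, and thresholds are invented for the sketch; real matchers produce calibrated similarity scores per factor.

```python
def k_of_n_auth(scores, thresholds, k=3):
    """Accept only if at least k of the n enrolled factors clear their
    own match threshold. Returns (accepted, factors_that_passed)."""
    passed = sorted(name for name, s in scores.items()
                    if s >= thresholds.get(name, 1.0))
    return len(passed) >= k, passed

thresholds = {"face": 0.90, "voice": 0.85, "gait": 0.80,
              "fingerprint": 0.95, "heart_rate": 0.75}
# Voice and gait fail (say, a noisy street), but three factors still pass:
scores = {"face": 0.93, "voice": 0.60, "gait": 0.70,
          "fingerprint": 0.97, "heart_rate": 0.81}
accepted, which = k_of_n_auth(scores, thresholds, k=3)
print(accepted, which)  # True ['face', 'fingerprint', 'heart_rate']
```

The nice property is graceful degradation: any single spoofed or unavailable factor changes nothing as long as k others still hold, which is exactly the three-points-of-contact intuition above.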

 

Going out on a limb into nature: look at antivirals in medicine, where three points of attack across different channels really do help in dealing with diseases that mutate a lot.

 

To go even further to the world of biology we could say, don't deploy your best anti-bacterial drugs needlessly to avoid resistance being built up.

 

Stepping out of the analogy:

 

  • Try to use biometrics in combination, supported by other biometrics; also don't use them frivolously, and don't be predictable;
  • Once you are registered with a biometric, use other factors (things on the device like keys, TPMs, location awareness etc.) to make sure that defeating the biometric alone does not grant access;
  • Secrets such as passwords and passphrases are still somewhat useful in combination, and will always increase the attacker's work rate. Perhaps new technologies even open up options for new types of secrets: I for one would be quite chuffed if I could do a bee-inspired waggle dance for my Kinect to make a bill payment (not for everyone, I know, and the novelty would wear off, but I might even get a Kinect if there were such an option).

I think that long term we'll be talking about 'complex factor authentication' rather than multi-factor auth, and we will probably even look at continual authentication for some use cases: things so sensitive that if the system(s) that verify you lose track of you in their blockchain timeline record, you might have to convince them you are you all over again.

 

Tritely paraphrasing, but I believe that if we invest enough it will hold true:

 

 'You can fool some of the authenticators all of the time, and all of the authenticators some of the time. But you can't fool all of the authenticators all of the time.' 

 

Robert
Newcomer II

A lot of biometric techniques depend heavily on machine learning, and in some of the spoofing attacks a likeness has been used to break the authentication.

 

In applying machine learning, it's not the image or the fingerprint that is authenticated but extracted features of the object. What may make these stories more shocking is if the features used in the ML algorithms could be replicated from objects that bear no resemblance to the object being authenticated. A mistake that a human or password validation wouldn't make!

 

I think a lot of the testing for these algorithms focuses on authenticating examples of "me" and "not me". Some interesting research might be to explore mimicking the features used in the algorithm rather than the image itself; I think we might be surprised at what is actually authenticated.
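Robert's point can be sketched concretely: an ML-based matcher compares extracted feature vectors, not images, so anything that lands close to the enrolled vector authenticates, resemblance or not. The three-dimensional vectors and the 0.95 threshold below are invented for illustration; real systems use learned embeddings with hundreds of dimensions.

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def matches(probe, enrolled, threshold=0.95):
    """Accept the probe if its features are close enough to enrolment."""
    return cosine(probe, enrolled) >= threshold

enrolled = [1.0, 0.0, 0.0]    # features extracted at enrolment
spoof    = [0.98, 0.10, 0.0]  # object unlike the face, but similar features
stranger = [0.0, 1.0, 0.0]    # different person, different features

print(matches(spoof, enrolled))     # True  -- accepted despite no resemblance
print(matches(stranger, enrolled))  # False
```

The matcher never sees "a face"; it sees a point in feature space, so any artefact that projects near the enrolled point gets in. That is exactly why testing only "me" versus "not me" images misses this attack surface.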

 

Long passwords may not be that bad and may be more predictable


Early_Adopter
Community Champion

Excellent point Robert.

lukos
Viewer

There are a number of problems with biometrics, most of them mentioned in separate answers already:

1) Resetting biometrics is largely not possible. If at any point, your biometric data is stolen, what are you supposed to do both as the customer and as the company? How can we ever trust someone who has to store biometric data?

2) Biometrics are not available to all people which restricts their effectiveness for the general public. Not everyone has fingerprints.

3) Biometrics are not always repeatable. What happens to my gait or heart rhythm if I am trying to authenticate after a run? What happens to facial recognition if I have some kind of injury?

4) Biometrics are not just fooled at the physical level; you can inject data anywhere in the software stack. How can you ensure that there is not a weakness between e.g. the retina scanner and the authentication app itself?

5) There is a natural fear of biometrics among the public. This might get better over time, but there have been enough abuses of our data by governments to make people justifiably concerned about the storage and misuse of this data. Scope creep has happened many times. Once a company has my biometric data, we can guarantee that the government does.

6) Biometrics have been shown to be cracked without even needing to be in the same room as the person you are impersonating (i.e. information from photos, from items that we carelessly discard without thought covered in fingerprints etc.) To crack a hardware generated number or a password (assuming the password is strong) is both better known and also generally harder when done remotely.

 

The bottom line is that they are apples and pears and hard to compare. You shouldn't be unduly worried by sensationalist media, but neither should you discard it as scaremongering or the concerns of the uninitiated. The deeper question is around identity itself and how far you need to identify someone. Do you really need to know that the person is John Smith from 100 Station Road, who grew up in Canada etc., or do you just need to know that this person is the same person who opened the account, that they have the right to open an account, and that they don't already have other accounts, for example? Unless you know how specific an identity you need from someone, it is very hard to decide on the correct choice of authentication.

Early_Adopter
Community Champion

@lukos Some valid points and I broadly agree with your thrust, some I think are perhaps not so valid, or at least apply more generally.

 

I'll put some stream-of-thought answers to your points here:

 

1. What's stored is how the biometric credential 'looks' to the particular sensor(s), protocols and algorithms that sample the data. So if it's a very limited model of your retina that's stolen (spooky), then perhaps that model can be refined, or you move on to another set of samples. Biometric re-keys are going to be hard.

 

To the 'how can we trust... x to do y?' question: that's done all the time, always imperfectly, and it comes down to risk, impact and threat. You've got to trust someone.

 

2. You'd probably be surprised (or maybe you wouldn't) at how many biometrics and behavioural analytics are already used on us: a selfie to make a payment, face geometry to enter a casino, etc. GDPR is going to have some impact here on profiling. I think if society decides to use them to Orwellian levels, we're not going to run out.

 

3. Your heart and gait are pretty consistent across their operation (the tibia is x long, the left ventricle has y capacity, the arteries have a similar week-to-week amount of furriness) unless you injure yourself, have a pacemaker, get really old etc. These can be supported and re-keyed, and if we really want super-strong identity, biometrics could learn over a timeline.

 

4. This point applies to any computer system, so I'm inclined to discount it. Ensuring there is no weakness is very expensive and imperfect; if you want to formally verify all systems with mathematical proof then fine, but we can't afford that for almost anything we build.

 

5. I don't know that the fear of biometrics is so much natural, or a defined phobia. From Mashable (http://mashable.com/2014/08/02/tech-phobias/#oMOZrCTnZsqV), fear of selfies would be the closest. It's a cherry-pick from the internet, I know, but I think fear of tech, and fear of the powerful entity rather than of the technology, are more pertinent. Invasive vs passive etc. are good ways to look at making these technologies acceptable. It's a tool like any other: your smartphone could spy on you in very private moments, yet it's still by the side of most people's beds. If you travel a lot you get over that fear pretty quickly, or you don't get on planes.

 

To the last point: which government? How would we guarantee that they do? If 'the government' is the problem, then maybe there should be a Jon Postel (https://tools.ietf.org/html/rfc2468) of identity to establish global trust. Maybe no one company should make that decision?

 

6. Complex-factor authentication and authorization would not rely on biometric techniques solely; psychometrics, knowledge-based factors, keys on the system, timelines etc. could all be used. But if you want to identify a person, they are probably the best way of getting to ground truth.

 

Most of the claims/assertion-based systems deal with what you need to hold for a level of knowing, and balanced legislation is required to balance rights and responsibilities. If people's money keeps getting stolen, their private photos are leaked, and attackers take over their accounts and computers, I'd expect more calls for better authentication; if attacks stop, fewer. I don't think there is a perfect balance, but if you need to know who someone really is then, short of modding humans with their own HSMs, I don't think we have a better suite of technologies. FIDO makes a good set of compromises: use what's available, and the service provider needs to decide where the bar is on assurance.

Early_Adopter
Community Champion

Terrible habit to reply to yourself, but this is pertinent and probably means that Apple have joined Microsoft and Google in doubling down on facial recognition on the device with Face ID:

 

https://arstechnica.com/gadgets/2017/12/apple-invests-390m-in-finisar-maker-of-tech-used-in-truedept...