rslade
Influencer II

AI and forensics

(First of all, I stuck this in "Chapters" because the conversation took place at a meeting of the Vancouver Chapter, but I can't think of where else to put it.  I suppose I could put it in "Certifications" since law and investigation is part of the CBK.  This isn't really "Industry News" and I'm pretty sure it wouldn't fall under the GDPR.  Can't we have a "general infosec topics and ideas" board some place?  But I digress ...)

 

I was talking with a colleague about a case he had noted recently, where an appeal was based on the use (by the court, actually, in sentencing guidelines) of a piece of AI software.  My colleague is more of a law enforcement and policy guy, not a techie, so he didn't understand why the company that produced the system was unable to produce source code for evidence under discovery.

 

A number of AI methods use neural nets, genetic programming, or other "self-learning" systems.  These systems actually modify themselves as they are run against various datasets.  Therefore, simply having the original source code doesn't give you the final system.  These systems often develop in unpredictable ways (which is kinda the whole point).  There is the well-known example of Microsoft's Tay chatbot, which, after a few hours on Twitter, became an obnoxious jerk and had to be terminated.  An earlier, and much more expensive, example was a US system in support of the Total Information Awareness initiative which, in a salutary example of why you never, ever test a development system with real data, was given a live stream, became completely corrupted and useless, and destroyed a four-billion-dollar project.
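
(Purely as an illustrative sketch, and certainly not the software in the case above: the toy training loop below shows why handing over the original source code doesn't give you the deployed system.  The same code, fed different data, ends up with different weights and gives different answers.)

# Illustrative only: the same "source code", trained on different data,
# becomes a different system.  The learned weights live outside the source.
import math
import random

def train(dataset, steps=2000, lr=0.1):
    """Fit a one-feature logistic model by stochastic gradient descent."""
    w, b = 0.0, 0.0
    for _ in range(steps):
        x, y = random.choice(dataset)
        p = 1.0 / (1.0 + math.exp(-(w * x + b)))  # sigmoid
        w += lr * (y - p) * x                     # gradient step on the weight
        b += lr * (y - p)                         # and on the bias
    return w, b

def predict(model, x):
    w, b = model
    return int(1.0 / (1.0 + math.exp(-(w * x + b))) > 0.5)

random.seed(1)
# Two hypothetical training sets with opposite labellings of the same inputs:
data_a = [(0.2, 0), (0.4, 0), (0.6, 1), (0.9, 1)]
data_b = [(0.2, 1), (0.4, 1), (0.6, 0), (0.9, 0)]

model_a = train(data_a)                 # identical code ...
model_b = train(data_b)                 # ... identical code ...
print(model_a, predict(model_a, 0.8))   # ... different weights, different verdicts
print(model_b, predict(model_b, 0.8))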

 

In legal discovery (under a Common Law legal system), one side gets to know whatever evidence the other side is going to present.  By extension, if the evidence is produced by a process or system, the other side also gets to know the process or system, in order to be able to assess how reliable the evidence is.

 

Therefore, the use of AI systems that use self-learning (which is pretty much all of them outside of expert systems) is going to be extremely problematic in forensics.  The systems become too massive to review, and, if you can't review them, you automatically have a "reasonable doubt" about their reliability.  (This is, of course, a problem in a Common Law legal system: Civil Law legal systems have their own standards for evidence, and so this principle may or may not hold.)
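
(Some rough, purely hypothetical arithmetic on what "review" would actually mean: even a small fully-connected network carries hundreds of thousands of learned parameters, and not one of them appears in the source code that would be handed over in discovery.)

# Back-of-the-envelope, hypothetical sizing: count the learned parameters in
# a modest fully-connected network.  None of these numbers is in the source.
layer_sizes = [1000, 512, 256, 64, 2]    # hypothetical layer widths

params = 0
for n_in, n_out in zip(layer_sizes, layer_sizes[1:]):
    params += n_in * n_out + n_out       # weight matrix plus bias vector

print(f"{params:,} learned parameters to 'review'")   # about 660,000 here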

 

For the same reason, systems based on quantum computing may be a problem in forensics.  We hope that quantum computers may give us answers impossible to come by through the use of traditional computers, but, if that is the case, it may be difficult to check the results. (This situation is, of course, only made worse by the issues of error correction and reliability in quantum computing, plus the fact that simply trying to understand quantum mechanics makes your brain hurt.)

 

(One area for which I do have some hope: quantum computers and neural nets sometimes do similar tasks, such as pattern recognition, in completely different ways.  Thus, if we have competing systems that give us the same answer, we may have some assurance that the answer is, in fact, correct.)  (Some assurance ...)
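
(A toy sketch of that cross-checking idea, with two stand-in methods that work in completely different ways; agreement between independent approaches is the only "assurance" on offer here.)

# Toy cross-check: two unrelated methods vote on the same input.  Agreement
# gives some (only some!) assurance; disagreement is a red flag.
def nearest_neighbour(dataset, x):
    """Label of the closest training point."""
    return min(dataset, key=lambda pair: abs(pair[0] - x))[1]

def threshold_rule(x, cut=0.55):
    """A hand-written rule, standing in for an entirely separate second system."""
    return 1 if x > cut else 0

data = [(0.2, 0), (0.4, 0), (0.6, 1), (0.9, 1)]   # hypothetical labelled points

for x in (0.3, 0.5, 0.8):
    a, b = nearest_neighbour(data, x), threshold_rule(x)
    print(f"x={x}: {a} vs {b} ->", "agree" if a == b else "DISAGREE - trust neither")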


............

Other posts: https://community.isc2.org/t5/forums/recentpostspage/user-id/1324864413

This message may or may not be governed by the terms of
http://www.noticebored.com/html/cisspforumfaq.html#Friday or
https://blogs.securiteam.com/index.php/archives/1468