I've been skeptical about AI from day 1, and I haven't been shy about expressing that here. The usual virtual assistants we encounter these days are no exception. Since I don't use AWS / related services or Apple systems, I've only faced Cortana & Google Assistant.
Cortana is a particular 'perpetual' nuisance: I've attempted to disable it multiple times, but it keeps coming back. (I suppose Microsoft wants it / her to stay.)
As for Google Assistant, I've simply opted not to use it, and I've denied the application any permissions on my device, given that there's no telling what might go wrong.
Speaking of what might go wrong, check this article about Cortana 'betraying you' and about such systems being exploited without you hearing anything. (Yes, it's old news.)
Some time back, I got a bit of a surprise when sending out an e-mail; I was more amused than concerned at the time. But with AI being steadily integrated into just about everything we use these days, it's probably something to take more seriously...
I totally distrust those devices/assistants and here is why:
1) In order for it to respond, it has to be listening all the time.
2) In order to tell whether you just said "Alexa," "OK Google," "Hey Siri," or whatever the "magic keyword" is, it has to process everything you say. How does it do that? Is it constantly forwarding your conversations to be deciphered? I would think it would have to be. Is that audio sent to a human or a machine for deciphering? (See the sketch below this list.)
3) Once it determines you meant to activate it, it really pays attention and records your request. What does it do with those recordings? How long does it store them, and so on? I told my kids, who were fond of Snapchat, about my concerns. They said, "After 10 seconds the thing you posted goes away." I said, "It goes away from you or the person you shared it with. That doesn't mean it's gone forever. The people you shared it with can screenshot it and redistribute it as they wish. There is no telling what Snapchat, or any other social media company, is doing with your data."
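On point 2: the way these assistants are usually described as working is that a small wake-word model runs locally against a short rolling buffer of audio, and only what's captured after a trigger is sent upstream. That would answer the "constantly forwarding" question, but none of the questions in point 3 about what happens to audio once it does leave the device. Below is a minimal Python sketch of the idea; the detector, buffer size, and upload step are hypothetical stand-ins, not any vendor's actual implementation.

import collections

BUFFER_CHUNKS = 4                                   # short rolling window, kept only in device memory
ring_buffer = collections.deque(maxlen=BUFFER_CHUNKS)

def sounds_like_wake_word(chunk):
    """Stand-in for a small on-device acoustic model that scores each audio chunk."""
    return b"hey assistant" in chunk

def stream_request_to_cloud(chunk):
    """In this sketch, audio only leaves the device after a trigger has fired."""
    print("trigger heard; uploading the request that follows:", chunk)

def process_microphone(chunks):
    triggered = False
    for chunk in chunks:
        ring_buffer.append(chunk)          # older audio falls off the end and is overwritten locally
        if triggered:
            stream_request_to_cloud(chunk)
            triggered = False
        elif sounds_like_wake_word(chunk):
            triggered = True

# Toy run: only the chunk after the trigger phrase gets "uploaded".
process_microphone([b"ordinary family chatter", b"hey assistant", b"when is sunrise"])

Even if that's how the listening side works, it tells you nothing about retention, human review, or what's done with the requests that are sent, which is exactly the part the vendors don't show you.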
A co-worker of mine who has a "smart" device commented that about three days after having a conversation in her house (one that did NOT involve the smart device), she started seeing ads related to that subject in her browser, for things she had not been searching for online. That leads me to believe they ARE doing something with what they hear even when you don't involve them.
There are several additional threads that could be spawned off of this:
1. The cross-platform integration of browsers and the integration of AI devices within those browsers. "Siri, order a dozen Trokar 5/0 long shank hooks from Cabelas and have them shipped overnight." Now on every platform where you open Firefox, you'll get crummy commercials from Cabelas, Trokar, the shipping carrier, or competing hook vendors. Talk about a nuisance.
2. As we have all heard, those listening devices have auto-ordered things without the consumer's OK because the smartphone was in someone's back pocket. They broke wind and it sounded like: "Siri, order a meat lovers pizza from Papa John's." Amazing talent, I know.
3. Your credit card is an open account. Both my close friend, who is a CISSP, and I have credit cards with zero limits. How is that possible, you ask? Easy: open a SECOND PayPal account with a PayPal card that is set up so money can only be moved onto it manually. As an added layer of defense, keep the account at a zero balance. PayPal doesn't ding you for doing that, and you can still use the card for accounts that require a credit card as part of the sign-up process. I've stopped several nefarious events with this technique.
4. Biometrics. Yet another angle of the user giving away their identity through speech. If speech patterns are one factor in an auth process and you have shared your voice with Siri, what's to keep her from playing it back when some biometric lock needs to be opened? She is your assistant, right? (See the sketch after this list.)
5. The developer of the AI knows what's better for you than you do?!?! This is what perturbs me most. The developer, and more importantly the company he/she works for, assumes that their suggestion is better than what you actually want to select.
6. Home security. What if a device sits near a front window and all an intruder has to do is stand outside and say: "Siri, open the front door, garage door, back door, butler's entrance..." until they find the flaw that grants them access?
7. Smart cars (and I don't mean those silly little battery-powered things) - "Siri, start the Rolls Royce, set the A/C to 65 for the driver and 75 for the passenger, and unlock the doors."
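To make the biometrics worry in item 4 concrete, here is a toy Python sketch of a naive voice check with no liveness test: a faithful replay of the enrolled phrase passes. The "embedding" here is just a hash, a hypothetical stand-in to show the shape of the problem; real speaker-verification systems use acoustic embeddings and, one hopes, anti-spoofing checks.

import hashlib

def voice_embedding(audio):
    """Hypothetical stand-in for a speaker-embedding model: identical audio yields an identical 'voiceprint'."""
    return hashlib.sha256(audio).hexdigest()

enrolled_print = voice_embedding(b"my voice is my passport")    # captured at enrollment

def naive_voice_unlock(presented_audio):
    # No liveness or anti-replay check: any faithful copy of the enrolled phrase matches.
    return voice_embedding(presented_audio) == enrolled_print

replayed_recording = b"my voice is my passport"                 # e.g. played back from a stored clip
print(naive_voice_unlock(replayed_recording))                   # True -> the "lock" opens

The point isn't the hash; it's that matching alone can't tell a live speaker from a recording the assistant already holds.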
Overall this is just another money grab. Retailers are looking for any advantage they can exploit to get more of your hard-earned dollars out of your wallet and into theirs.
I have to admit I've used AI in my life. When I'm sitting in the duck blind and it's too cold to take my hands out of my gloves to peck out "sunrise, Wash DC," I'll open Siri and say, "Siri, when is sunrise in Washington, D.C.?" and she'll respond with the correct time. Then I know when I'm legal to hunt. That's the one instance where I trust her.
I choose Siri because we are an all-Apple home. I'm sure the other AI assistants are equally nefarious in their capabilities.
Now this is one we all would trust because it is from mainland China.
Super secret spy A is sitting in his home talking to super secret spy B about some super secret project, while the refrigerator "phones home" the entire conversation to China in real time.
Super secret counter spies don't have to make clandestine installations of tech anymore. All they have to do is put a Best Buy sale flyer in the mailbox and send a mini-EMP through the home to fry the appliances.
Geek Squad installs quick and cheap.
Shannon,
Thank you for starting an interesting discussion around virtual assistants and trust. I agree with the other commenters that we should be careful and thoughtful about what information is sent through remote voice-recognition and AI software, but I also think we will have no choice but to find safe ways for these technologies to be used in our organizations. Our users, and eventually our business processes, will require voice-assistant AI for a whole range of functionality. As with every new technology of the past, we will need to define standards, controls, and defenses to enable the new solutions.
A few thoughts on some early steps:
I think this represents an opportunity for the IT and Security organizations to get ahead of the curve and offer good guidance, instead of reacting after users adopt a less secure alternative on their own.
Adam
@EIAKPKP452, yes, as you said, this is an unavoidable risk, and the best thing to do is treat it rather than simply accept it, for which IT Security has to ensure that top management sees things the same way...
Alas, the latter is often a challenge. I can just imagine a scenario where a meeting is held to discuss this risk and the mitigation techniques, complete with risk and cost-benefit analysis reports.
After the presentation, the GM pulls out his iPhone and goes: "Siri, what's your opinion?"