Hi All
An interesting decision by the Italian authorities to ban ChatGPT over GDPR concerns.
ChatGPT is now banned in Italy.
The country’s data protection authority said the AI service would be blocked and investigated over privacy concerns.
The system does not have a proper legal basis to be collecting personal information about the people using it, the Italian agency said. That data is collected to help train the algorithm that powers ChatGPT’s answers.
Regards
Caute_Cautim
I can understand them blocking it over the privacy issues, but I wonder how other countries/states/etc. will react to it.
If it is not aligned with GDPR, I question why other EU countries, or even the US, have not banned it as well.
I have recommended to the firms I work with that they take great care, especially around the privacy of their data. I believe that where ChatGPT fails is in the inadequate notices provided when data is input; that raises the question of whether there is informed consent.
I am also concerned that there does not seem to be any protection for minors with respect to the content of the answers being provided.
The ethics related to this AI are also VERY questionable.
Does this tech have benefits? I believe it does, but I also believe a lot of work needs to be done by organizations to understand the risks versus the benefits.
my nickel
d
Look at the attachment and the follow-up:
Artificial intelligence: the Italian Garante blocks ChatGPT
Illegal collection of personal data. Absence of systems for verifying the age of minors.
Stop ChatGPT until it complies with privacy regulations. The Italian Data Protection Authority has ordered, with immediate effect, a temporary restriction on the processing of Italian users' data by OpenAI, the U.S. company that developed and operates the platform. At the same time, the Authority has opened an investigation.
The ramifications will put significant pressure on OpenAI and other providers to get their act together.
Regards
Caute_Cautim
It strikes me that with AI in general (and certainly with ChatGPT in particular), we are back where we were 40 or even 50 years ago. The focus has been so much on making things work that we haven't sufficiently considered how they can break or be abused.
In order for it to infer or predict genuinely, it needs a lot of genuine data. Hence, if people are able to opt out (as they most certainly should be allowed to), AI will be dealing with skewed data. More concerning: if someone can feed it bad data, they can manipulate the results (see the sketch below). I suspect these concerns drive some of OpenAI's policies around being more transparent about its training data. If people know their data is being used for AI training, that data may no longer be genuine, and it could lead to deliberately false data sets.
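To make the poisoning point concrete, here is a minimal sketch in plain Python (all scores and labels are invented for illustration; this is not how ChatGPT is actually trained): a naive nearest-centroid spam filter is fit twice, once on clean labels and once with a few deliberately mislabeled records, and the same message comes out classified differently.

```python
# Hypothetical illustration of training-data poisoning.
# A naive nearest-centroid classifier is trained on clean labels,
# then on labels an attacker has partially flipped; the poisoned
# training set changes the verdict on the same message.

def centroid(values):
    """Mean of a list of 1-D feature scores."""
    return sum(values) / len(values)

def classify(score, spam_scores, ham_scores):
    """Assign the message to whichever class centroid is closer."""
    if abs(score - centroid(spam_scores)) < abs(score - centroid(ham_scores)):
        return "spam"
    return "ham"

# Feature: a single "spamminess" score per training message (made up).
clean_spam = [0.80, 0.85, 0.90, 0.95]
clean_ham = [0.10, 0.15, 0.20, 0.25]

# Attacker injects ham-labeled records with spam-like scores,
# dragging the ham centroid toward spam territory.
poisoned_ham = clean_ham + [0.85, 0.90, 0.95, 1.00, 1.00]

message = 0.70  # a genuinely spammy message

print(classify(message, clean_spam, clean_ham))     # -> spam
print(classify(message, clean_spam, poisoned_ham))  # -> ham
```

The contamination here is deliberately exaggerated to keep the arithmetic visible; real poisoning attacks aim for the same effect with far subtler tampering.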
While there are technical questions that bump against ethical ones regarding AI, to me the more pressing question is economics. Can AI be implemented in a cost-effective way? Inherently, AI deals with the unknown, which means that from a risk standpoint you need to assign exposure factors and rates of occurrence to things that are (humanly) very difficult to predict (a rough sketch of that arithmetic follows below). Outside rather limited cases, it will cost more to test or support AI than it will to run it. But those economics won't stop anyone. As we see on a daily basis, organizations run technology without sufficient controls. They roll the dice (whether they realize it or not) in the interest of the immediate competitive advantage AI might deliver.
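To show why that risk arithmetic matters, here is a back-of-the-envelope sketch using the standard annualized loss expectancy formulas (SLE = asset value x exposure factor; ALE = SLE x annualized rate of occurrence). Every figure below is invented; the point is that EF and ARO for AI failure modes are guesses, and the dice roll hides in those guesses.

```python
# Hypothetical annualized loss expectancy (ALE) for an AI deployment.
# All figures are invented for illustration; the classic formulas are
# SLE = AV * EF and ALE = SLE * ARO.

asset_value = 2_000_000   # value of the data/process the AI touches ($)
exposure_factor = 0.30    # fraction of value lost per incident (a guess)
annual_rate = 0.5         # expected incidents per year (also a guess)

sle = asset_value * exposure_factor   # single loss expectancy
ale = sle * annual_rate               # annualized loss expectancy

annual_benefit = 250_000  # projected yearly gain from deploying the AI

print(f"SLE: ${sle:,.0f}")                    # SLE: $600,000
print(f"ALE: ${ale:,.0f}")                    # ALE: $300,000
print(f"Net: ${annual_benefit - ale:,.0f}")   # Net: $-50,000
```

Nudge the guessed ARO from 0.5 to 0.2 and the same deployment turns profitable on paper, which is exactly the kind of humanly unpredictable swing I mean.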
Good decision. AI is too dangerous to be left to its own devices. Soon, deepfakes will be free and hyper-realistic, and anyone will be able to send anyone else to jail, even to death row.