It is a couple of years now since Microsoft put Tay, an artificially intelligent chatbot, onto Twitter. Within a few hours it became an offensive jerk.
(While mini-rapper Lil Tay is undoubtedly named after rapper Tay-K, I find it ironic that the foul-mouthed prepubescent shares a name with the bot.)
In any case, a team at MIT has now done the same thing, quite deliberately, with Reddit. They created an artificial psychopath.
(Given the amount of time we have to spend researching on the Internet, should we be concerned?)
By the way, there is now formal research demonstrating a causal link between time spent on social media (specifically Facebook, Instagram, and Snapchat) and ... ...
This is not such a bad idea, but how does one reconcile these human behavioural norms with developing the appropriate tools for generating bad behaviour? Consider the technological trend towards user-behaviour analysis via facial recognition in crowd situations: spotting the odd person out and then going to investigate.
However, there is a danger here: how PC do we want to become?
Should such systems be monitoring everyday human behaviour?
The Chinese have been using such techniques for years to identify people who do not toe the party line. Will it happen in Western cultures?
Regards
Caute_cautim