In infosec, we are constantly trying to advise people who have no background (and little interest) in our profession, but we have to try to be honest, even if "the facts" can sometimes be used to avoid doing the right thing ...
Other posts: https://community.isc2.org/t5/forums/recentpostspage/user-id/1324864413
This message may or may not be governed by the terms of http://www.noticebored.com/html/cisspforumfaq.html#Friday or https://blogs.securiteam.com/index.php/archives/1468
An interesting set of points, discussed. At some point we have to decide what is true and what is definitely wrong or skewed, and that is becoming more difficult. Take AI, for example: the old adage "garbage in, garbage out" still applies today, yet it is increasingly hard to discriminate and evaluate what is essentially truthful and what is not. Through our upbringing and education we absorb values intrinsically, often without realising it, and so develop our own biases, which we use to make judgements and decisions every day. One of the goals of AI is to recognise those biases, remove them, and ensure that the advice provided draws on thousands of research sources, social media, web sites, wikis, etc. However, can we depend on that information long term?
At least in information security we have a set of principles, some of which can be mathematically proven - within cryptography, for instance. Increasingly, though, we have to make decisions based on our core principles and knowledge, and evaluate the information in front of us. Is there a point at which all parties can agree that what is in front of them is true, or has been biased - and to what extent - or even skewed to unbalance the discussion or argument?
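As a small aside on that point about provable principles: the one-time pad is the classic example, since XOR is its own inverse, so decrypting with the same key provably recovers the plaintext exactly. This is just a toy sketch of that property (the message text and helper name are illustrative, not from the discussion above), not a recommendation to roll your own crypto.

```python
import os

def xor_bytes(data: bytes, key: bytes) -> bytes:
    # XOR each byte of the data with the corresponding key byte.
    return bytes(d ^ k for d, k in zip(data, key))

plaintext = b"trust, but verify"
key = os.urandom(len(plaintext))         # one-time key, same length as the message

ciphertext = xor_bytes(plaintext, key)   # encrypt
recovered = xor_bytes(ciphertext, key)   # decrypt: (p ^ k) ^ k == p

assert recovered == plaintext
```

With a truly random key used only once, this scheme is information-theoretically secure - one of the few results in our field that is a theorem rather than a judgement call.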
The world is definitely becoming a lot more complex, full of differently biased views, some of which automatically resonate and others which do not, causing one to look a bit deeper at whether they are of value or not.