This is a thread asking a theoretical question of members. It is a question of security, but much larger in scope.
AI is upon us. I have followed the emphasis and the direction of its application over the past three decades. For me, until about four years ago, I felt it provided a method to expand our immediate knowledge, given the accuracy of the information supplied to assist it.
AI doesn't scare me, and there will be enormous upsides to its use as mankind marches forward.
Here's where the problem exists: it is NOT the AI, as it does what it is designed to do in providing answers from the known elements of mankind's database.
Problem: Mankind's ability to manipulate results and reasoning to the detriment of others. Lies, twisted views of reality, twisted analyses of accurate, known elements of mankind's database, and propaganda projected in such a way that a large portion of a population rejects its own best interests in propaganda's favor.
For over 13,000 years we have had evidence that people lie. AI is not the problem mankind faces.
Questions:
Can mankind find ways to distinguish truthful, factual information? Is there a way to separate information based upon evidence from manipulative, non-evidence-based information?
With where mankind stands today, can our technology be used to save us from the various nonsense that is passed into societies across the world?
And, yes, I know: these questions precede any of us and our technology.
But do we, with the technology we have, possess any mechanism to vet information such that mankind knows whether sources are accurate based upon evidence/data/facts/proofs?