Your news source may be an algorithm.
The Turing test evaluates conversations between a human and a machine:
Developed by Alan Turing in 1950, it is a test of a machine’s ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human. Wikipedia
Maybe in conversation we can still spot a machine, but not in posts and articles. There’s a new text-generating algorithm that can write compelling articles without any ethical need to be truthful.
Last week, the nonprofit research group OpenAI revealed that it had developed a new text-generation model that can write coherent, versatile prose given a certain subject matter prompt. However, the organization said, it would not be releasing the full algorithm due to “safety and security concerns.” Slate
The algorithm was happy to write an article when prompted with nonsense like “scientists discover a herd of unicorns…” or “Russia declares war on the United States…”
Some observers think the danger to political discourse and public understanding of science is exaggerated, but even if so, that reprieve will be temporary. How will you know what to believe anymore on social media? On blogs? Can a giant like Facebook find a way to detect and delete such articles? Can you and I?
It will get worse. Software is being developed to fabricate video and audio files too. If you think crazy memes mislead people today, imagine fake footage of real people, more convincing than any CGI you’ve seen in the movies.
Perhaps we’ll lose faith in all news on social media and revert to viewing nothing but lolcats and baby pictures.
Fact checkers will be more important than ever, but we already have trouble agreeing on which fact checkers to trust. People are likely to seize on one source to trust and give up trying to evaluate what they see, because sorting truth from lies is hard and no one is an expert in everything.
You’ll be granting a lot of power to whomever you choose to follow. Be careful, because as the saying goes, power corrupts. We’re in for a wild ride.