San Francisco: Alarmed by the mammoth spread of fake news, tech giant Microsoft is working round the clock to build “trusted” algorithms that curb false news on its platforms, such as the Bing search engine and the professional networking website LinkedIn.
According to David Heiner, Strategic Policy Advisor at Microsoft, the problems Facebook and YouTube are facing with fake news today have put tech companies the world over on alert, and Microsoft is right on the job building Artificial Intelligence (AI)-driven systems to fight back. “We are already working on a couple of such AI-powered initiatives for Bing and LinkedIn. We are also trying to forge tie-ups with trusted news sources, and then indicate to users the source of the news and let them make their own decisions,” Heiner told IANS at the company’s sprawling, 500-acre campus here.
The main challenges, according to him, are concerns over censorship and defining what constitutes fake news. “A very high percentage of people get news from Facebook and (Google-owned) YouTube, and both these major platforms are having trouble handling fake news.
“In the meantime, we have to draw a line at giving too much power to tech companies to decide what is presented to users, which often leads to utterly fake news that is injurious to democracy and civil society,” the senior company executive noted.
The need of the hour is to build “trustworthy AI” that is fair and does not discriminate on the basis of religion, caste or colour. “The whole idea is to build applications around AI in a trustworthy way. People will not share data otherwise, and they must not be expected to. With respect to users’ privacy, we need trusted AI systems that are safe and transparent,” Heiner explained.
There are six core concepts to achieving “trustworthy AI”: AI-based systems need to treat everyone fairly, must be safe, must protect privacy and security, must include everyone, and need to be transparent, as algorithms can be mysterious at times. “Lastly, people who are deploying AI systems, be it at Microsoft or at other companies, are to be held accountable for the trustworthiness of their AI systems,” Heiner explained.