Bill Gates, the co-founder of Microsoft, is a supporter of artificial intelligence and has repeatedly said he considers models like the one behind ChatGPT the most significant technological development since the personal computer.
He acknowledges that the technology's growth could create problems such as deepfakes, biased algorithms, and academic cheating, but he believes these challenges are solvable.
“One thing that’s clear from everything that has been written so far about the risks of AI — and a lot has been written — is that no one has all the answers,” Gates wrote in a blog post this week. “Another thing that’s clear to me is that the future of AI is not as grim as some people think or as rosy as others think.”
As governments around the world grapple with how to regulate the technology and its potential drawbacks, Gates’ middle-of-the-road stance on AI concerns may steer the discussion away from apocalyptic scenarios and toward more limited regulation addressing present risks. On Tuesday, for instance, lawmakers attended a confidential briefing on artificial intelligence and the military.
Gates is one of the most prominent commentators on artificial intelligence and its regulation. He also maintains a close relationship with Microsoft, which has invested in OpenAI and integrated ChatGPT into Office and other key products.
Citing how society has responded to earlier breakthroughs, Gates argues in the blog post that humans have historically adapted to major changes and will do so with AI as well.
“For example, it will have a big impact on education, but so did handheld calculators a few decades ago and, more recently, allowing computers in the classroom,” Gates wrote.
Gates suggests that “speed limits and seat belts” are the types of regulations that are necessary for the technology.
“Soon after the first automobiles were on the road, there was the first car crash. But we didn’t ban cars — we adopted speed limits, safety standards, licensing requirements, drunk-driving laws, and other rules of the road,” Gates wrote.
Two issues that concern Gates as the technology becomes more widely adopted are potential job disruption and “hallucination,” the tendency of models like ChatGPT to fabricate facts, documents, and people.
He cites deepfakes as an example, saying they make it easy for anyone to create fake videos impersonating other people, which can be used to deceive viewers or sway elections.
He notes that deepfake detectors are being developed by Intel and by DARPA, a U.S. government agency, and he expects people will become better at spotting deepfakes. He also suggests legislation specifying which kinds of deepfakes are permissible to produce.
He also expresses concern that AI could be used to write code that hunts for the software flaws needed to hack computers, and he proposes the creation of an international regulatory body modeled on the International Atomic Energy Agency.
(Adapted from CNBC.com)