
'Godfather of AI' quits Google to "freely speak" about dangers of the technology


San Francisco: Geoffrey Hinton, dubbed the "Godfather of AI" for his pioneering role in developing the technology, quit Google on Monday, confirming that he regrets his work and wants to speak freely about the dangers of artificial intelligence.

Speaking to The New York Times, he expressed concern over AI's potential to eliminate jobs and to create a world in which many people will no longer be able to tell what is true, referring to the rise of fake imagery and text. "It is hard to see how you can prevent the bad actors from using it for bad things," he said.

"It takes away the drudge work. It might take away more than that," he said, adding that AI could replace paralegals, personal assistants, translators and others who handle rote tasks.

He also expressed concern about future versions of the technology learning unexpected behaviour from the vast amounts of data they analyse. "The idea that this stuff could actually get smarter than people — a few people believed that. But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that."

The 75-year-old tweeted: "In the NYT today, Cade Metz implies that I left Google so that I could criticise Google. Actually, I left so that I could talk about the dangers of AI without considering how this impacts Google. Google has acted very responsibly."

Speaking about Google, he said: "Until last year, Google acted as a proper steward for the technology, careful not to release something that might cause harm. But now that Microsoft has augmented its Bing search engine with a chatbot - challenging Google’s core business - Google is racing to deploy the same kind of technology. The tech giants are locked in a competition that might be impossible to stop."

He also gave an interview to the BBC in which he said: "I can now just speak freely about what I think the dangers might be. And some of them are quite scary. Right now, as far as I can tell, they're not more intelligent than us. But I think they soon may be."

His work on neural networks is a crucial component powering products like ChatGPT.

He told the BBC that chatbots could soon surpass the amount of information a human brain holds. "Right now, what we're seeing is things like GPT-4 eclipses a person in the amount of general knowledge it has and it eclipses them by a long way. In terms of reasoning, it's not as good, but it does already do simple reasoning. And given the rate of progress, we expect things to get better quite fast. So we need to worry about that."

Hinton had worked at Google for more than a decade, and his major AI breakthrough came in 2012, when he worked with two graduate students in Toronto, Ilya Sutskever and Alex Krizhevsky. They created an algorithm that could analyse photos and identify common elements such as cars and dogs. According to The New York Times, Sutskever is now OpenAI's chief scientist.

"I console myself with the normal excuse: If I hadn’t done it, somebody else would have," said the lifelong academic. "Maybe what is going on in these systems is actually a lot better than what is going on in the brain. Look at how it was five years ago and how it is now. Take the difference and propagate it forward. That’s scary."

In the 1980s, he was a professor at Carnegie Mellon University but left for Canada because of his reluctance to take Pentagon funding. He has opposed the use of AI on the battlefield, referring to such weapons as "robot soldiers".
