OpenAI and Apple veterans launch Safe Superintelligence Inc.
The tech company looks to prioritize safety in the rapidly growing field of artificial intelligence.
As AI continues to grow and evolve, so do concerns around the controversial technology. With companies like OpenAI, Microsoft, and now Apple looking to push AI forward, a handful of industry veterans are starting a new company with safety as the key goal. It’s called Safe Superintelligence Inc. and is currently staffing up.
Safe Superintelligence was founded by Ilya Sutskever (OpenAI cofounder and former chief scientist), Daniel Gross (former Apple AI lead), and Daniel Levy (former OpenAI technical staff member). The newly launched website features a statement from the founders about the company's objective.
The AI business has been under increased scrutiny of late. Last month, it was reported that OpenAI failed to uphold its promise to devote resources to controlling potentially dangerous AI. Stick with Shacknews for the most important stories in the AI business.
-
Donovan Erskine posted a new article, OpenAI and Apple veterans launch Safe Superintelligence Inc.
-
New AI company by Ilya
Safe Superintelligence Inc.
https://ssi.inc/
https://twitter.com/ilyasut/status/1803472978753303014
-
Ilya clearly sees a path to superintelligence, and his message is kind of a shot at OpenAI. Some Shackers believe that AI is a scam (akin to crypto) and that LLMs are a dead end, as though AI companies are betting solely on LLMs and have no direction for the future of the technology. This suggests otherwise to me.
Also, in a recent talk of Ilya's that I watched, he suggested that LLMs still have quite a bit of room to grow.
-
That's the thing, right? I'm blown away by the people claiming AI built on LLMs is a scam. It's like covid deniers and other weird denial shit all mixed in with huge helpings of jealousy and the Dunning-Kruger effect. It's clearly a huge leap and seems almost certain to lead to true AI in the near future. That's not to say there aren't multiple paths to get there, but this is the one that opened up first.
-
LLMs are groundbreaking, and obviously not a scam, but they have a loooooong way to go before they're anywhere close to AGI or anything that will take people's jobs away from them.
At this point I would say the tech is on par with Google when search engines first became non-shitty and a human could ask a computer a question and get a non-stupid response. There's an enormous amount of value in that, but it's not something that will destroy the world economy or kill us all or whatever.
-
Interesting. Seems like a no-frills R&D project whose main priority is to avoid corrupting influences (assuming you believe that they themselves are benevolent). The question is whether he can get enough funding and talent to actually keep up with competitors. Training cutting-edge AI is very expensive in compute, and who's going to provide that for them? Where do you get money if you can't promise a return to investors?
-
The question is what the datacentre is using these infant AIs for. Some say it uses the infant AIs' fledgling internal weights and neural connections as a foodstuff. Eating baby creatures isn't unheard of: the predecessors of the AI, the long-extinct humans, used to eat a dish called veal, which is baby cattle, and a subspecies of Homo sapiens used to eat baby chicken embryos boiled inside their shells.
-