Google reveals Gemini AI language model
CEO Sundar Pichai describes Gemini as Google's 'most capable and general AI model yet.'
The AI race continues to heat up as the world's largest tech companies pour resources into building their own AI services and products. Google previously released Bard in hopes of competing directly with the likes of ChatGPT, and it hopes to bolster those efforts with its latest move. Google has unveiled Gemini, a new AI language model that it says can “efficiently run on everything from data centers to mobile devices.”
Google gave us a proper introduction to Gemini in a post to its blog today. The post features remarks from top Google executives about how the company is embracing AI technology as it builds a path for the future. Google DeepMind CEO and co-founder Demis Hassabis speaks about Google’s AI goals and how Gemini is helping the company achieve them.
Gemini 1.0 has been optimized for three sizes (Ultra, Pro, Nano) and will be available to developers starting December 13. For the latest updates in the world of artificial intelligence, stick with us here on Shacknews.
-
Donovan Erskine posted a new article, Google reveals Gemini AI language model
-
Google announced their new Gemini AI model today. What’s available now is just GPT-3.5 quality. They claim their unreleased version beats GPT-4.
This demo, which is heavily edited to reduce latency and to cut some verbose responses, is nonetheless very impressive, even though it’s mostly what GPT-4 can already do: https://youtu.be/UIZAiXYceBI?si=KxKvP8tu457djJob
If you showed this to the average person on the street, would they not call this AGI?
-
-
I'm sure plenty of people would be 'fooled', but I'd contend that *some* people would be fooled by an ELIZA-level program, at least for a bit. So I wouldn't put much stock in it fooling someone who isn't approaching it from a skeptical point of view to begin with.
If people weren't easily fooled, then "HELLO I AM FROM MICROSOFT YOUR COMPUTER HAS A VIRUS" level scams wouldn't work so well.
-
-
I thought that was implicit since you started the thread talking about whether or not it'd be labelled as AGI - without getting into super semantics (I don't think either of us want that!) I think we can agree these things are not truly 'intelligent' (yet). Complex? Yes. Interesting? Sure. Potentially useful in various areas? Sure.
-
-
The problem here is that you could have a system that you and I would absolutely agree was not intelligent. Imagine (I'm not claiming this is the case here, it's a hypothetical) that each and every one of these demos, along with a hundred others, were all hard-coded tasks rather than the outcome of a more generalized capability for recognition, etc.
In that case, these hypothetical "random persons" would have just as much reason to consider it 'intelligent'.
-
Does it recognize what it is looking at beyond matching a pattern it has seen before?
What does this mean? You can test this with GPT-4 today: ask it to tell you about a picture that can't be in its training set. Here, for instance, someone fine-tuned a model to draft Magic: The Gathering, which required it to reason about which cards were best from sets of cards it had never seen before: https://generallyintelligent.substack.com/p/fine-tuning-mistral-7b-on-magic-the
Ask it about things you just made up that require generalizing concepts, e.g. https://maximumeffort.substack.com/p/i-taught-chatgpt-to-invent-a-language
There are absolutely questions about the degree to which these things can 'reason' and whether they have world models and such, but it's also pretty clear they can do things well beyond what's in their training set.
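If you want to run that picture test yourself, here's a rough sketch using the OpenAI Python client (v1.x). The model name is the vision-capable one available as of this writing, and the image URL is a placeholder; point it at a photo you took yourself so it can't be in the training data:

# Sketch: ask a vision-capable GPT-4 model about a brand-new image.
# Assumes the openai package (v1.x) and an API key in OPENAI_API_KEY.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4-vision-preview",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe what's happening in this photo."},
            # Placeholder URL: swap in your own freshly taken photo.
            {"type": "image_url", "image_url": {"url": "https://example.com/my-photo.jpg"}},
        ],
    }],
    max_tokens=300,
)
print(response.choices[0].message.content)

If it correctly describes objects and relationships it can't have memorized, that's at least some evidence of generalization rather than pattern lookup.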
-
This is how I foresee AI playing out:
https://youtu.be/XMm7QA7icbs
-
I think the key point is whether it can learn and retain information. The demo doesn't really demonstrate active learning capability. If it's like ChatGPT, where it's just stuck with the information it was trained on, it probably doesn't meet the definition. If it's re-evaluating and updating the model weights on the fly as people interact with it, that seems pretty fitting.
-
Yeah, I think that's definitely one key angle. Although it seems like even these AIs could technically already do that, albeit very slowly and not in the way we normally imagine. That is, you could essentially train a model to have the capability to retrain a new version of itself on updated data/RLHF/etc. and deploy that. It wouldn't be learning in real time, but it would be self-learning over some time period.
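Something like this, purely as a hypothetical sketch. Every function below is a made-up placeholder (not any real training API), it's just the shape of the loop:

# Hypothetical "slow self-learning" loop: not real-time learning,
# weights only change between cycles, and only after evaluation.
import time

def collect_interactions():
    """Placeholder: gather new conversations / RLHF feedback since last cycle."""
    return []

def fine_tune(model, data):
    """Placeholder: produce a candidate checkpoint from model + fresh data."""
    return model

def evaluate(model):
    """Placeholder: run a benchmark suite, return a score."""
    return 1.0

def deploy(model):
    """Placeholder: swap the serving checkpoint."""
    pass

model = "checkpoint-v1"
while True:
    candidate = fine_tune(model, collect_interactions())
    if evaluate(candidate) >= evaluate(model):  # don't ship regressions
        model = candidate
        deploy(model)
    time.sleep(7 * 24 * 3600)  # e.g. a weekly retraining cadence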
-
I don't think AGI is meant to be re-training itself based on user interactions, that's astoundingly dangerous. You'll definitely have AI training AI, but it'll be under very tightly controlled circumstances, and the final product that users see will be set in stone and thoroughly tested. That doesn't mean it doesn't have a working memory, but that's different than evolving its own model on the fly.
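A toy way to picture that difference: the weights stay set in stone and only the conversation context grows. frozen_model below is just a stand-in for any fixed, thoroughly tested checkpoint:

# Working memory without weight updates: parameters never change,
# the model is only conditioned on an accumulating context.
def frozen_model(context):
    """Stand-in for a fixed, tested model; same weights every call."""
    return f"(reply conditioned on {len(context)} prior turns)"

memory = []  # the 'working memory': just accumulated conversation
for user_turn in ["hi", "my name is Pat", "what's my name?"]:
    memory.append(f"user: {user_turn}")
    reply = frozen_model(memory)  # model itself never evolves
    memory.append(f"model: {reply}")
    print(reply)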
-
https://en.wikipedia.org/wiki/Artificial_general_intelligence - Learning is part of the generally accepted definition
-
Lots more demos on the site
https://deepmind.google/gemini
-
-