Research
Exploring AGI, the challenges of scaling and the future of multimodal generative AI
Next week the artificial intelligence (AI) community will come together for the 2024 International Conference on Machine Learning (ICML). Running from July 21-27 in Vienna, Austria, the conference is an international platform for showcasing the latest advances, exchanging ideas and shaping the future of AI research.
This year, teams from across Google DeepMind will present more than 80 research papers. At our booth, we'll also showcase our multimodal on-device model, Gemini Nano, our new family of AI models for education called LearnLM, and we'll demo TacticAI, an AI assistant that can help with football tactics.
Here we introduce some of our oral, spotlight and poster presentations:
Defining the path to AGI
What is artificial general intelligence (AGI)? The term describes an AI system that's at least as capable as a human at most tasks. As AI models continue to advance, defining what AGI could look like in practice will become increasingly important.
We'll present a framework for classifying the capabilities and behaviors of AGI models. Depending on their performance, generality and autonomy, our paper categorizes systems ranging from non-AI calculators to emerging AI models and other novel technologies.
We'll also show that open-endedness is critical to building generalized AI that goes beyond human capabilities. While many recent AI advances have been driven by existing Internet-scale data, open-ended systems can generate new discoveries that extend human knowledge.
At ICML, we'll be demoing Genie, a model that can generate a range of playable environments based on text prompts, images, photos, or sketches.
Scaling AI systems efficiently and responsibly
Developing larger, more capable AI models requires more efficient training methods, closer alignment with human preferences and better privacy safeguards.
We'll show how using classification in place of regression techniques makes it easier to scale deep reinforcement learning systems and achieve state-of-the-art performance across different domains. Additionally, we propose a novel approach that predicts the distribution of outcomes of a reinforcement learning agent's actions, helping to quickly evaluate new scenarios.
Our researchers present an alignment-maintaining approach that reduces the need for human oversight, and a new approach to fine-tuning large language models (LLMs), based on game theory, that better aligns an LLM's output with human preferences.
We critique the approach of training models on public data and only fine-tuning with "differentially private" training, and argue this approach may not offer the privacy or utility that is often claimed.
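The classification-in-place-of-regression idea can be illustrated with a minimal sketch. Here a scalar value target is encoded as a categorical distribution over fixed bins (a "two-hot" encoding) and trained with a cross-entropy loss instead of mean-squared error; the bin layout and function names are our own illustration, not the paper's implementation:

```python
import numpy as np

def two_hot(value, bins):
    """Encode a scalar target as a categorical distribution over fixed
    bin locations: mass is split between the two nearest bins so that
    the distribution's expectation recovers the original value."""
    value = np.clip(value, bins[0], bins[-1])
    idx = int(np.searchsorted(bins, value, side="right")) - 1
    idx = min(idx, len(bins) - 2)
    w_hi = (value - bins[idx]) / (bins[idx + 1] - bins[idx])
    probs = np.zeros(len(bins))
    probs[idx] = 1.0 - w_hi
    probs[idx + 1] = w_hi
    return probs

def cross_entropy(target_probs, logits):
    """Cross-entropy between the two-hot target and predicted logits,
    used in place of the usual mean-squared-error regression loss."""
    log_probs = logits - np.log(np.sum(np.exp(logits)))
    return -np.sum(target_probs * log_probs)

bins = np.linspace(-5.0, 5.0, 11)   # 11 bin locations at -5, -4, ..., 5
p = two_hot(2.3, bins)              # mass split between bins at 2 and 3
loss = cross_entropy(p, np.zeros(len(bins)))  # loss against uniform logits
```

Because the network now outputs a distribution over a fixed support rather than a single scalar, the training signal behaves like a classification problem, which is the property the paper argues scales better.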
VideoPoet is a large language model for zero-shot video generation.
New approaches in generative AI and multimodality
Generative AI technologies and multimodal capabilities are expanding the creative possibilities of digital media.
We'll present VideoPoet, which uses an LLM to generate state-of-the-art video and audio from multimodal inputs including images, text, audio and other video.
And we'll share Genie (generative interactive environments), which can generate a range of playable environments for training AI agents, based on text prompts, images, photos, or sketches.
Lastly, we introduce MagicLens, a novel image retrieval system that uses text instructions to retrieve images with richer relations beyond visual similarity.
Supporting the AI community
We're proud to sponsor ICML and foster a diverse community in AI and machine learning by supporting initiatives led by Disability in AI, Queer in AI, LatinX in AI and Women in Machine Learning.
If you're at the conference, visit the Google DeepMind and Google Research booths to meet our teams, see live demos and find out more about our research.