Google introduces Gemma 2, its latest iteration of open models

Built for developers and researchers

Gemma 2 isn’t just more powerful; it’s designed to integrate more easily into your workflows:

  • Open and accessible: Like the original Gemma models, Gemma 2 is available under our commercially friendly Gemma license, giving developers and researchers the ability to share and commercialize their innovations.
  • Broad framework compatibility: Easily use Gemma 2 with your preferred tools and workflows thanks to its compatibility with major AI frameworks like Hugging Face Transformers, and with JAX, PyTorch and TensorFlow via native Keras 3.0, vLLM, Gemma.cpp, Llama.cpp and Ollama (see the Transformers sketch after this list). In addition, Gemma is optimized with NVIDIA TensorRT-LLM to run on NVIDIA-accelerated infrastructure, or as an NVIDIA NIM inference microservice. You can fine-tune today with Keras and Hugging Face, and we’re actively working to enable additional parameter-efficient fine-tuning options.
  • Effortless deployment: Starting next month, Google Cloud customers will be able to easily deploy and manage Gemma 2 on Vertex AI.
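
As a quick illustration of the Hugging Face path, here is a minimal sketch that loads a Gemma 2 checkpoint with Transformers and generates a short completion. The model ID and generation settings are assumptions; check the model card on Hugging Face for the exact requirements of the checkpoint you choose.

```python
# A minimal sketch of running a Gemma 2 checkpoint with Hugging Face Transformers.
# The model ID "google/gemma-2-9b-it" and the generation settings are assumptions;
# check the model card on Hugging Face for the exact requirements of your checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-2-9b-it"  # assumed instruction-tuned 9B checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",   # requires the accelerate package; places weights on available devices
    torch_dtype="auto",  # use the dtype stored in the checkpoint
)

inputs = tokenizer(
    "Explain retrieval-augmented generation in one sentence.",
    return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```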

Dive into the new Gemma Cookbook, a collection of practical examples and recipes to guide you through building your own applications and fine-tuning Gemma 2 models for specific tasks. Learn how to use Gemma with your tooling of choice, including for common tasks like retrieval-augmented generation.
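
Retrieval-augmented generation boils down to a simple pattern: fetch the most relevant context, prepend it to the prompt, and let the model answer from it. The sketch below shows that pattern in its most basic form; the keyword-overlap retriever, toy documents and model ID are illustrative assumptions, not a recipe taken from the Cookbook itself.

```python
# A deliberately simple retrieval-augmented generation (RAG) sketch.
# The keyword-overlap retriever, toy documents and model ID are illustrative
# assumptions, not a recipe taken from the Gemma Cookbook.
from transformers import pipeline

documents = [
    "Gemma 2 is available in 9B and 27B parameter sizes.",
    "Keras 3.0 lets Gemma run on JAX, PyTorch and TensorFlow backends.",
    "Google Cloud customers will be able to deploy Gemma 2 on Vertex AI.",
]

def retrieve(query, docs, k=2):
    """Rank documents by naive word overlap with the query and keep the top k."""
    query_words = set(query.lower().split())
    scored = [(len(query_words & set(d.lower().split())), d) for d in docs]
    return [d for _, d in sorted(scored, reverse=True)[:k]]

generator = pipeline("text-generation", model="google/gemma-2-9b-it")  # assumed model ID

question = "Which parameter sizes does Gemma 2 come in?"
context = "\n".join(retrieve(question, documents))
prompt = (
    "Answer the question using only the context below.\n\n"
    f"Context:\n{context}\n\n"
    f"Question: {question}\nAnswer:"
)
print(generator(prompt, max_new_tokens=64)[0]["generated_text"])
```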

Responsible AI development

We remain committed to providing developers and researchers with the resources they need to build and deploy AI responsibly, including through our Responsible Generative AI Toolkit. The recently open-sourced LLM Comparator helps developers and researchers with in-depth evaluation of language models. Starting today, you can use the companion Python library to run comparative evaluations with your own model and data, and visualize the results in the app. Additionally, we’re actively working toward open-sourcing our text watermarking technology, SynthID, for Gemma models.
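
To make the idea of a comparative evaluation concrete, here is a small conceptual sketch that collects paired responses from two models into a JSON file for later inspection. It does not use the llm_comparator package or the LLM Comparator app's actual input format; the generate_a/generate_b callables and the output schema are assumptions made purely for illustration.

```python
# A conceptual side-by-side evaluation sketch. This is NOT the LLM Comparator
# library or its file format; the callables and JSON schema below are assumptions.
import json

def run_side_by_side(prompts, generate_a, generate_b, out_path="comparison.json"):
    """Collect paired responses from two models and save them for later review."""
    records = []
    for prompt in prompts:
        records.append({
            "prompt": prompt,
            "response_a": generate_a(prompt),  # e.g. a baseline model
            "response_b": generate_b(prompt),  # e.g. a fine-tuned Gemma 2 variant
        })
    with open(out_path, "w") as f:
        json.dump(records, f, indent=2)
    return records

# Example usage with stand-in functions instead of real models:
if __name__ == "__main__":
    run_side_by_side(
        ["Summarize Gemma 2 in one sentence."],
        generate_a=lambda p: "response from model A",
        generate_b=lambda p: "response from model B",
    )
```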

During Gemma 2’s training, we followed our rigorous internal safety processes, filtering pre-training data and performing thorough testing and evaluation against a comprehensive set of metrics to identify and mitigate potential biases and risks. We publish our results on a large set of public benchmarks related to safety and representational harms.

Visit the official Google blog for more information.
