AWS has recently launched Custom Model Import within Bedrock, aiming to solidify its position as a top platform for custom generative AI models. This new feature enables organizations to import and utilize their own generative AI models as fully managed APIs. By doing so, companies can take advantage of the same infrastructure and tools available for existing models in Bedrock.
The inclusion of custom model support by AWS aligns with the increasing trend among enterprises to develop and enhance their in-house models, reflecting the need for tailored solutions.
AWS Bedrock now supports custom in-house models
With Custom Model Import, businesses running their proprietary models in Bedrock can also use the platform's tools for knowledge expansion, fine-tuning, and bias mitigation. This integration aims to give customers a single, comprehensive place to manage their AI models.
Notable capabilities of the service include the ability to monitor and filter outputs for undesirable content such as hate speech or violence, as well as to evaluate model performance against diverse criteria.
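Because imported models are exposed through the same managed API as Bedrock's built-in models, calling one looks much like any other Bedrock invocation. The sketch below illustrates the pattern with boto3; the model ARN is a placeholder for the value your import job would produce, and the Llama-style request body (`prompt`, `max_gen_len`) is an assumption that depends on the imported model's architecture.

```python
import json

# Hypothetical ARN for a model brought in via Custom Model Import;
# the real value comes from the import job in your AWS account.
MODEL_ARN = "arn:aws:bedrock:us-east-1:123456789012:imported-model/example"


def build_request(prompt: str, max_gen_len: int = 256) -> dict:
    """Assemble InvokeModel arguments (Llama-style request body assumed)."""
    return {
        "modelId": MODEL_ARN,
        "contentType": "application/json",
        "accept": "application/json",
        "body": json.dumps({"prompt": prompt, "max_gen_len": max_gen_len}),
    }


def invoke(prompt: str) -> dict:
    """Call the imported model through the managed bedrock-runtime API.

    Requires AWS credentials and access to the Bedrock runtime endpoint,
    so it is defined here but not exercised.
    """
    import boto3

    client = boto3.client("bedrock-runtime")
    response = client.invoke_model(**build_request(prompt))
    return json.loads(response["body"].read())
```

The key point is that nothing about the call site reveals the model is custom: the import job hands back an identifier, and from then on the model is addressed like any first-party Bedrock model.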
The preview release of the service currently supports popular open model architectures such as Flan-T5, Llama, and Mistral, with AWS committed to expanding architecture support in the future.
Furthermore, AWS has announced the general availability of Titan Image Generator and the launch of Titan Text Embeddings V2, offerings designed to reduce storage and compute costs while improving model accuracy.
Additionally, the Meta Llama 3 foundation models have been introduced on Bedrock, with Cohere’s Command R and Command R+ models expected to follow soon, further enriching the platform’s AI model offerings.
By supporting third-party models alongside in-house ones, AWS demonstrates a commitment to interoperability, enhancing its platform while delivering tailored solutions to its customers.