Channel: Joab Jackson, Author at The New Stack

Amazon Bedrock Expands Palette of Large Language Models


More options are always a good thing, and with this in mind, Amazon Web Services has expanded the capabilities of its Amazon Bedrock generative AI framework.

The AWS service can now ingest customized models, and it offers more selections in its own portfolio of managed models, including the latest from Cohere, Meta's recently released Llama 3, and Amazon's own new RAG-optimized Titan Embeddings V2 (tweaked for advertising, e-commerce, and media apps).

The service also debuts a new feature, called Guardrails, which lets users set up safeguards to ensure their generative AI applications meet their responsible AI policies.

What Is Amazon Bedrock?

Launched for production use last September, Bedrock has already been put to work by “tens of thousands of customers and partners,” to build GenAI applications, according to the cloud services giant.

“Bedrock is primarily for developers. These are people who just want to use an API, and who want a selection of models and developer tools to build and scale the generative AI-based apps,” said Atul Deo, AWS general manager and director of Bedrock, in an interview with TNS.

As a managed service, Bedrock relieves customers of managing the underlying infrastructure, including the security and privacy concerns, involved in building a generative AI application.
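For developers who "just want to use an API," a Bedrock call boils down to posting a JSON body to a managed model endpoint. As a rough, hedged sketch: the request shape below follows the documented Amazon Titan Text format, but the prompt and parameter values are illustrative, and the actual network call (shown in comments) assumes boto3 and configured AWS credentials.

```python
import json

# Illustrative model ID; Bedrock exposes many managed models by ID.
MODEL_ID = "amazon.titan-text-express-v1"

def build_request(prompt: str, max_tokens: int = 512, temperature: float = 0.2) -> str:
    """Build the JSON body that Bedrock's InvokeModel expects for Titan Text."""
    return json.dumps({
        "inputText": prompt,
        "textGenerationConfig": {
            "maxTokenCount": max_tokens,
            "temperature": temperature,
        },
    })

body = build_request("Summarize our Q3 product feedback.")

# With boto3 installed and credentials configured, the call would look like:
#   client = boto3.client("bedrock-runtime")
#   resp = client.invoke_model(modelId=MODEL_ID, body=body)
#   print(json.loads(resp["body"].read())["results"][0]["outputText"])
print(json.loads(body)["textGenerationConfig"]["maxTokenCount"])  # 512
```

The point is the division of labor: the developer supplies a prompt and a few generation parameters, while model hosting, scaling, and security stay on the AWS side.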

Amazon's own Rufus expert shopping assistant was built on the technology and trained on web data as well as the company's product catalog, customer reviews, and community Q&As.

Aha! built a generative AI tool to help its customers refine their product strategies. Marketing firm Dentsu built an image generator, based on Titan, to create studio-quality images in large volumes, using natural language prompts. The Salesforce CRM service used Bedrock to evaluate potential models that could help in the company’s personalization efforts.

AI’s Next Step: Customized Models

Importing new models into Bedrock (currently a preview feature) addresses an important trend in LLMs: many companies in verticals such as healthcare and financial services are trying to better model their own specific industries, often by using their own domain-specific knowledge and even languages.

Many AWS customers have been on a "journey of customizing some of the open models themselves," Deo said. "Customers with advanced data science teams or machine learning experts can take these models [like Llama], use a tool like Amazon SageMaker and others, and do some advanced customization and fine-tuning."

Models can be imported through an API and are validated on ingestion. Scaling and safeguarding are handled by AWS, and imported models can be intermingled with AWS' own prepackaged collection of models.
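An import request essentially points Bedrock at model weights in S3 under an IAM role it can assume. The sketch below is hedged: the field names mirror Bedrock's CreateModelImportJob API at the time of writing, but the model name, bucket, and role ARN are placeholders, and the actual boto3 call is shown only in comments.

```python
# Build the parameters for a custom-model import job. All values here are
# hypothetical; in practice the S3 URI would point at your fine-tuned
# weights and the role ARN at an IAM role Bedrock is allowed to assume.
def build_import_job(model_name: str, s3_uri: str, role_arn: str) -> dict:
    return {
        "jobName": f"import-{model_name}",
        "importedModelName": model_name,
        "roleArn": role_arn,
        "modelDataSource": {"s3DataSource": {"s3Uri": s3_uri}},
    }

job = build_import_job(
    "my-finetuned-llama",
    "s3://example-bucket/llama-weights/",
    "arn:aws:iam::123456789012:role/BedrockImportRole",
)
# With boto3: boto3.client("bedrock").create_model_import_job(**job)
```

Once the job completes and the model is validated, it can be invoked through the same runtime API as Bedrock's managed models.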

Bedrock also offers Model Evaluation, which can help the user determine the best model for a particular job, weighing cost against accuracy and potentially saving hours of in-house analysis.

With Guardrails, users provide a natural-language description of the topics that need to be safeguarded against, such as hate speech, insults, sexualized language, prompt injection, and violence. Filters can also be applied to remove personal and sensitive information and profanity.
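A guardrail definition combines those pieces: denied topics described in natural language, content filters with strengths, and PII handling. The sketch below is a hedged approximation; the field names mirror Bedrock's CreateGuardrail API at the time of writing, but the guardrail name, topic, and messages are invented for illustration, and the boto3 call appears only in a comment.

```python
# Assemble a guardrail configuration with one denied topic (described in
# natural language), hate/insult content filters, and email redaction.
def build_guardrail(name: str, denied_topic: str, definition: str) -> dict:
    return {
        "name": name,
        "topicPolicyConfig": {
            "topicsConfig": [
                {"name": denied_topic, "definition": definition, "type": "DENY"}
            ]
        },
        "contentPolicyConfig": {
            "filtersConfig": [
                {"type": "HATE", "inputStrength": "HIGH", "outputStrength": "HIGH"},
                {"type": "INSULTS", "inputStrength": "HIGH", "outputStrength": "HIGH"},
            ]
        },
        "sensitiveInformationPolicyConfig": {
            "piiEntitiesConfig": [{"type": "EMAIL", "action": "ANONYMIZE"}]
        },
        "blockedInputMessaging": "Sorry, I can't help with that topic.",
        "blockedOutputsMessaging": "Sorry, I can't share that.",
    }

guardrail = build_guardrail(
    "support-bot-guardrail",
    "medical-advice",
    "Requests for diagnosis, treatment, or medication recommendations.",
)
# With boto3: boto3.client("bedrock").create_guardrail(**guardrail)
```

The blocked-input and blocked-output messages are what end users see when a prompt or a model response trips the guardrail.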

A watermark detection app built with Amazon Bedrock.


