There were plenty of announcements at Amazon’s AWS Summit in New York on Wednesday, July 10. The company is betting heavily on AI applications for its various products. One of the most important is the public preview of App Studio, a platform for creating applications without code. Users build apps by giving the AI instructions in natural language.
By simply specifying the app’s functionality and data sources, users can generate enterprise-grade applications within minutes, a task that would normally take developers days.
During a live demonstration, Amazon showed how App Studio could build an invoice-tracking app based on a simple request, with the AI providing detailed suggestions for additional functionality. Once the app’s structure is approved, users can further customize it using intuitive drag-and-drop tools.
App Studio integrates with third-party services and AWS itself via connectors. Adam Seligman, vice president of developer experience at AWS, indicated that more integrations will follow based on customer requirements.
Q Developer comes to SageMaker
Another set of announcements concerns Amazon Q, the company’s AI assistant. The Developer variant will become available in SageMaker, Amazon’s machine-learning environment. Q was previously available only in the AWS console and developer environments like IntelliJ IDEA, Visual Studio and VS Code. There, the assistant already offers advice, generates code and helps with troubleshooting.
According to Amazon, the integration into SageMaker simplifies machine-learning workloads by, among other things, advising on data preparation (important because the quality of the training data largely determines the quality of the resulting model) and indicating which approach (code or no-code, and which data format) is best suited to each use case.
Amazon Q Developer can now also draw on an organization’s internal code library, including its APIs, packages, classes, and methods, allowing the assistant to generate more sensible code suggestions.
Amazon Q Apps is additionally becoming generally available as a feature within Q Business. This will allow non-developers to build their own apps on a limited scale using proprietary data. Think of applications that summarize feedback, generate onboarding plans or create memos. The idea is that such functionality frees up IT staff and developers for more important or trickier tasks while allowing other employees to create their own tools to some extent.
Updates to Bedrock
Bedrock, Amazon’s enterprise platform for building generative AI on top of one or more existing models, is also getting a series of updates. Anthropic’s compact Claude 3 Haiku model can now be fine-tuned in Bedrock (Anthropic is a close Amazon partner). This is the first time a Claude 3 model has supported fine-tuning.
When fine-tuned properly, the model can be deployed for very specific use cases, taking into account an organization’s idiosyncrasies, interests, compliance requirements and workflows. This feature is now in preview.
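Fine-tuning in Bedrock runs as a model customization job. The sketch below shows what such a request could look like using boto3’s `create_model_customization_job` call; the job name, bucket paths, role ARN and hyperparameter values are placeholders for illustration, not real resources.

```python
# Sketch of a Bedrock fine-tuning job request. The boto3 operation
# (bedrock.create_model_customization_job) is real, but every identifier
# below (job name, ARNs, S3 paths) is a hypothetical placeholder.
customization_request = {
    "jobName": "haiku-support-tuning",          # placeholder
    "customModelName": "claude-haiku-support",  # placeholder
    "roleArn": "arn:aws:iam::123456789012:role/BedrockFineTuneRole",
    "baseModelIdentifier": "anthropic.claude-3-haiku-20240307-v1:0",
    "trainingDataConfig": {"s3Uri": "s3://my-bucket/train.jsonl"},
    "outputDataConfig": {"s3Uri": "s3://my-bucket/output/"},
    "hyperParameters": {"epochCount": "2", "learningRateMultiplier": "1.0"},
}

# With AWS credentials configured, the job would be submitted like this:
# import boto3
# bedrock = boto3.client("bedrock", region_name="us-east-1")
# bedrock.create_model_customization_job(**customization_request)
```

Training data goes in as JSONL prompt/completion pairs in S3; the resulting custom model is then deployed under the organization’s own name.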
Furthermore, more data sources are becoming available for Bedrock’s Knowledge Bases functionality. Think connectors for Confluence, SharePoint, and Salesforce, as well as user-specified Internet sources. The tool also handles data from CSV and PDF files more accurately. The idea is that models can better navigate large bodies of proprietary data and draw on exactly the right parts.
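Once documents are indexed in a knowledge base, applications query it through Bedrock’s RetrieveAndGenerate API, which fetches the relevant passages and lets a model answer from them. A minimal sketch of such a request, with placeholder IDs, assuming boto3’s `bedrock-agent-runtime` client:

```python
# Sketch of a RAG query against a Bedrock knowledge base. The request
# shape follows boto3's bedrock-agent-runtime RetrieveAndGenerate call;
# the knowledge base ID and model ARN are placeholders.
rag_request = {
    "input": {"text": "What is our refund policy for enterprise customers?"},
    "retrieveAndGenerateConfiguration": {
        "type": "KNOWLEDGE_BASE",
        "knowledgeBaseConfiguration": {
            "knowledgeBaseId": "KB12345EXAMPLE",  # placeholder
            "modelArn": "arn:aws:bedrock:us-east-1::foundation-model/"
                        "anthropic.claude-3-haiku-20240307-v1:0",
        },
    },
}

# With credentials configured, the query would run like this:
# import boto3
# runtime = boto3.client("bedrock-agent-runtime", region_name="us-east-1")
# answer = runtime.retrieve_and_generate(**rag_request)
# print(answer["output"]["text"])
```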
Agents created via Bedrock now offer improved code interpretation and memory retention. They can keep summaries of previous conversations with users and factor those into advice for multi-step tasks such as booking flights or processing claims. Such agents can now also perform data analysis and visualization, do complex calculations and process text.
Guardrails to combat hallucinations
To round out the parade of announcements, features, and enhancements, AWS is releasing an important update in security and compliance. With Guardrails for Bedrock, AWS is providing another measure against AI hallucinations: contextual grounding, which checks a model’s answers against the source material. This is particularly useful when customers use RAG with proprietary source data.
This method ensures that answers are generated from the right proprietary data and stay anchored to the user’s original query. The addition, which comes on top of existing filters, should prevent up to 75 percent of hallucinations, according to AWS. Furthermore, an API for Guardrails is coming to unlock the functionality for foundation models outside of Bedrock as well.
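The standalone API lets applications run text through a guardrail without invoking a Bedrock model at all, so the same policies can cover models hosted elsewhere. A hedged sketch, assuming boto3’s `apply_guardrail` call on the `bedrock-runtime` client, with placeholder identifiers:

```python
# Sketch of evaluating a model answer with the standalone Guardrails API.
# apply_guardrail exists in boto3's bedrock-runtime client; the guardrail
# identifier and version below are placeholders.
guardrail_request = {
    "guardrailIdentifier": "gr-abc123example",  # placeholder
    "guardrailVersion": "1",
    "source": "OUTPUT",  # evaluate a model answer, not user input
    "content": [
        {"text": {"text": "Our warranty covers accidental damage for 5 years."}},
    ],
}

# import boto3
# runtime = boto3.client("bedrock-runtime", region_name="us-east-1")
# result = runtime.apply_guardrail(**guardrail_request)
# result["action"] indicates whether the guardrail intervened.
```

Because the check is decoupled from model invocation, the same guardrail configuration can be applied to self-hosted or third-party models.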
AWS further released an update on the company’s responsible AI policy initiatives, which include visual watermarks, strengthened guidelines, and pointers on responsible AI training.
Also read: AWS puts 7.8 billion into expansion of European Sovereign Cloud