194. Remaster Building Blocks - AWS re:Invent 2024 recap Day 2

Matt Garman has been serving as AWS's new CEO since June 2024; he has been at Amazon since 2006 and helped launch the initial set of AWS services.

Garman announced a number of updates to the building blocks of AWS - compute, storage, database, and inference. Together, they enable companies to innovate at lower cost and with much greater energy efficiency - at scale.

But scratch below the surface and the story is a little less rosy. The vast majority of the value companies have captured, for example, remains in isolated pockets and at subscale; according to expert estimates, only about 20% of workloads run in the cloud today.

There is still a big gap to be filled.

COMPUTE INNOVATIONS

Graviton 4

AWS only began developing its own silicon in 2018. Fast forward to now and 90% of the top 1,000 EC2 customers use Graviton chips. AWS launched Graviton 4 a few months ago, which was designed to address a much broader set of workloads than its predecessors. Pinterest is one customer and, according to Garman, managed to reduce compute costs by 47% and carbon emissions by 62% using the new chips.

Amazon EC2

"AWS has the fastest and most powerful network... and we work hard to make sure you always have the latest technology," he said. AWS Nitro offloads virtualization for compute and security onto dedicated chips, giving better flexibility and performance.

130 million new EC2 instances are launched every day.

Trainium 3 (2025 Preview):

One Trn2 instance delivers 20.8 petaflops on a single compute node, but Trainium 3 (Trn3) will be much more powerful. Built on a 3nm process, it is expected to offer 2x the performance of Trn2 with a 40% improvement in energy efficiency, setting a new standard for ML hardware.

However, Trn3 is not planned for release until around the end of 2025. It seems a little strange that AWS announced it so early, almost a year in advance.

STORAGE INNOVATIONS

For those struggling to differentiate EC2 from S3, here's a simple summary: EC2 provides virtual servers that applications run on, while S3 provides object storage (e.g. files and documents).
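To make the distinction concrete, here is a minimal Python sketch. The AMI ID, bucket, and key names are hypothetical; with boto3 and credentials configured, the parameter dicts would be passed to the real `run_instances` and `put_object` calls shown in the comments.

```python
def ec2_launch_params(ami_id: str, instance_type: str = "t3.micro") -> dict:
    """Parameters for ec2.run_instances: EC2 runs your application on a virtual server."""
    return {"ImageId": ami_id, "InstanceType": instance_type, "MinCount": 1, "MaxCount": 1}


def s3_put_params(bucket: str, key: str, data: bytes) -> dict:
    """Parameters for s3.put_object: S3 stores the data itself as an object."""
    return {"Bucket": bucket, "Key": key, "Body": data}


# With boto3 and AWS credentials configured, the calls would look like:
#   import boto3
#   boto3.client("ec2").run_instances(**ec2_launch_params("ami-12345678"))
#   boto3.client("s3").put_object(**s3_put_params("my-bucket", "docs/report.txt", b"hello"))
```

In short: EC2 parameters describe *a machine to run*, S3 parameters describe *an object to keep*.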

Amazon S3 has been an unrivaled success. Built to handle huge spurts in growth, S3 now holds over 400 trillion objects, Garman noted - AWS now has thousands of customers storing over a petabyte on S3, and some store more than an exabyte.

With the GA release of the new Amazon S3 Tables, Apache Iceberg tables get up to 3x faster query performance and up to 10x higher transactions per second.
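S3 Tables expose Iceberg tables to standard analytics engines. As a sketch only (database, table, and output-bucket names are hypothetical), querying an Iceberg table from Python via Athena's boto3 API might look like this:

```python
def athena_query_params(sql: str, database: str, output_s3: str) -> dict:
    """Parameters for athena.start_query_execution against an Iceberg-backed table."""
    return {
        "QueryString": sql,
        "QueryExecutionContext": {"Database": database},
        "ResultConfiguration": {"OutputLocation": output_s3},
    }


# With boto3 and credentials configured:
#   import boto3
#   athena = boto3.client("athena")
#   resp = athena.start_query_execution(
#       **athena_query_params("SELECT COUNT(*) FROM sales_iceberg",
#                             "analytics_db", "s3://my-results-bucket/athena/"))
```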

Faster metadata management in S3 is now available in preview; it is not only the fastest and easiest way to manage metadata in S3, but also automatically updates object metadata in Iceberg tables.

DATABASE INNOVATIONS

Celebrating 10 years of Amazon Aurora

  • Fastest distributed SQL database
  • Low-latency reads and writes
  • Virtually unlimited scalability, and scales down to zero
  • 4x faster reads and writes compared to Google Spanner
  • Fully managed
  • 99.999% multi-region availability
  • Strong consistency
  • PostgreSQL compatible
  • Amazon DynamoDB Global Tables now also support multi-region strong consistency

Amazon Aurora is a relational database service built for the cloud, combining the performance and availability of high-end commercial databases with the simplicity and cost-effectiveness of open-source databases. It is compatible with MySQL and PostgreSQL, scales database capacity automatically based on workload (ideal for unpredictable or intermittent workloads), and enables multi-region writes and reads with latency measured in milliseconds.
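Because Aurora is PostgreSQL-compatible, standard drivers work unchanged - a minimal sketch, where the cluster endpoint, database, and credentials are hypothetical:

```python
def aurora_dsn(host: str, dbname: str, user: str, password: str, port: int = 5432) -> str:
    """Build a libpq-style DSN for an Aurora PostgreSQL-compatible cluster endpoint."""
    return f"host={host} port={port} dbname={dbname} user={user} password={password}"


# Any standard PostgreSQL driver (e.g. psycopg2) can then connect as usual:
#   import psycopg2
#   conn = psycopg2.connect(aurora_dsn(
#       "my-cluster.cluster-abc123.us-east-1.rds.amazonaws.com",
#       "appdb", "app_user", "secret"))
#   with conn.cursor() as cur:
#       cur.execute("SELECT now()")
```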

A few more announcements were made to further improve relational and non-relational databases.

Amazon Aurora DSQL: A distributed SQL database delivering low latency and high scalability for modern applications.

DynamoDB Global Tables now support strong multi-region consistency.
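Within a single region, DynamoDB has long supported strongly consistent reads on a per-request basis; the new Global Tables capability extends strong consistency across regions. A sketch of a strongly consistent read (table and key names are hypothetical):

```python
def consistent_get_params(table: str, key: dict) -> dict:
    """Parameters for dynamodb.get_item requesting a strongly consistent read."""
    return {"TableName": table, "Key": key, "ConsistentRead": True}


# With boto3 and credentials configured:
#   import boto3
#   ddb = boto3.client("dynamodb")
#   item = ddb.get_item(**consistent_get_params(
#       "Orders", {"OrderId": {"S": "order-123"}}))
```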

Data Workflows and Security:

Zero-ETL Integrations: These eliminate the need for traditional ETL pipelines, enabling seamless data movement across tools like Zendesk and SAP while improving security workflows in Amazon OpenSearch Service.

INFERENCE INNOVATIONS

Responding to critics' opinions about Amazon's own models, Matt said AWS is taking a multi-model approach, because there is no one-size-fits-all model:

Amazon Nova Models

Andy Jassy, the previous AWS CEO and current CEO of Amazon, introduced Nova: a suite of foundational AI models capable of generating text, images, and videos. These include three tiers of AI models - Micro, Lite, and Pro - with a Premier multimodal model forthcoming.

Amazon Nova Micro – a text-only model that delivers the lowest-latency responses in the Amazon Nova family at very low cost, with a 128K-token context length, optimized for speed and cost.

Amazon Nova Lite – a very low-cost multimodal model that is lightning fast for processing image, video, and text inputs to generate text output

Amazon Nova Pro – a highly capable multimodal model with the best combination of accuracy, speed, and cost for a wide range of tasks. Amazon Nova Pro is capable of processing up to 300K input tokens and sets new standards in multimodal intelligence and agentic workflows that require calling APIs and tools to complete complex workflows

Amazon Nova Premier – most capable multimodal model for complex reasoning tasks and for use as the best teacher for distilling custom models.

The Amazon Nova models also include two creative content generation models:

Amazon Nova Canvas – A state-of-the-art image generation model producing studio-quality images with precise control over style and content, including rich editing features such as inpainting, outpainting, and background removal.

Amazon Nova Reel – A state-of-the-art video generation model. Using Amazon Nova Reel, you can produce short videos through text prompts and images, control visual style and pacing, and generate professional-quality video content for marketing, advertising, and entertainment.
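The Nova text and multimodal tiers are all invoked the same way through Amazon Bedrock. As a sketch, assuming the launch-time model IDs below (verify them against the current Bedrock model catalog), choosing a tier and calling the Bedrock Runtime `converse` API might look like:

```python
NOVA_MODELS = {
    # Model IDs as announced at launch; confirm against the Bedrock model catalog.
    "micro": "amazon.nova-micro-v1:0",  # text-only, lowest latency and cost
    "lite": "amazon.nova-lite-v1:0",    # low-cost multimodal
    "pro": "amazon.nova-pro-v1:0",      # most capable tier at launch
}


def converse_params(tier: str, prompt: str) -> dict:
    """Parameters for the Bedrock Runtime `converse` API for a given Nova tier."""
    return {
        "modelId": NOVA_MODELS[tier],
        "messages": [{"role": "user", "content": [{"text": prompt}]}],
    }


# With boto3 and credentials configured, in a region where Nova is available:
#   import boto3
#   brt = boto3.client("bedrock-runtime")
#   resp = brt.converse(**converse_params("micro", "Summarize AWS re:Invent Day 2."))
#   print(resp["output"]["message"]["content"][0]["text"])
```

Because every tier shares one calling convention, switching from Micro to Pro is a one-word change - which is the practical payoff of the multi-model approach.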

Amazon Bedrock Enhancements: Features such as model distillation for creating efficient models, automated reasoning for accuracy checks, and multi-agent collaboration were announced to advance generative AI capabilities.

SageMaker Lakehouse: A new tool for unifying data from S3 data lakes and Redshift data warehouses, making AI/ML workflows more streamlined.

CUSTOMER STORIES

During AWS re:Invent 2024, Apple made a notable appearance to highlight its collaboration with AWS in advancing machine learning and AI initiatives. Benoit Dupin, Apple’s Senior Director of Machine Learning and AI, discussed how the company leverages AWS’s powerful AI tools to enhance its machine learning workflows. This included the use of AWS’s advanced compute infrastructure and generative AI platforms for tasks such as optimizing Siri’s capabilities and improving on-device machine learning models.

Apple emphasized AWS's role in enabling scalable AI development and seamless integration into Apple's tech ecosystem, underscoring the partnership as critical for maintaining innovation in its products and services.

Last but not least, to all the developers:

Matt mentioned that developers spend only about one hour per day actually coding, while the rest goes to documentation, unit testing, and so on - so improving coder productivity has great potential.

Matt Garman concluded with a focus on customer empowerment, encapsulated in the statement: "We invent so you can invent." The keynote reaffirmed AWS's leadership in shaping the future of cloud technologies, building for a generative AI-driven future. Still, there's a very long way to go.

Source: AWS re:Invent, TechRadar
