July 2025
Made using Amazon Nova Canvas. Prompt: "A lone puzzle piece glows with electric navy edges and magenta core, floating in space."


Our latest newsletter features the winners of the Amazon Nova AI Challenge, along with the latest on Amazon Nova foundation models, advancements in the Nova Act SDK, and breakthroughs in tabular and multimodal AI.

AI News

Pushing the boundaries of secure AI: Winners of the Amazon Nova AI Challenge: Amazon announced the winners of the first-ever Amazon Nova AI Challenge, in which university teams competed to develop and hack AI coding assistants; teams from UIUC and Purdue won the defender and attacker categories, respectively. The challenge, which awarded $700,000 in total prizes, showcased new safety techniques, adversarial tools, and multi-turn evaluation methods that push the frontier of secure, trustworthy AI.

Everything you need to know about Amazon Nova: The Amazon Nova family of foundation models is designed to deliver powerful intelligence and industry-leading price performance for diverse customer needs. The family offers AI capabilities across text, image, video, and agentic tasks through models tailored to different requirements, and is available via Amazon Bedrock and nova.amazon.com.
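For readers who want to try the models, here is a minimal sketch of calling a Nova model through Amazon Bedrock's Converse API with boto3; the specific model ID and region shown are assumptions, so check the Bedrock console for the identifiers enabled in your account.

```python
# Minimal sketch: calling an Amazon Nova model through Amazon Bedrock's Converse API.
# The model ID ("amazon.nova-lite-v1:0") and region are assumptions for illustration.
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

response = client.converse(
    modelId="amazon.nova-lite-v1:0",  # assumed model ID; verify in your Bedrock console
    messages=[
        {
            "role": "user",
            "content": [{"text": "Summarize the Amazon Nova model family in two sentences."}],
        }
    ],
    inferenceConfig={"maxTokens": 256, "temperature": 0.3},
)

# The Converse API returns the assistant message under output.message.content.
print(response["output"]["message"]["content"][0]["text"])
```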

Useful, reliable agents from prototype to production: Amazon AGI Labs has expanded the capabilities of the Nova Act SDK, enabling developers to build and deploy AI agents with over 90% end-to-end reliability. Enterprise customers have already used the technology, through AWS integration, for automated form filling, software testing, and other tasks.
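A hedged sketch of what an automated form-filling agent might look like with the Nova Act SDK follows; the nova_act import, NovaAct class, starting_page parameter, and act() method reflect our reading of the preview documentation, and the site and field contents are invented for illustration.

```python
# Sketch of an automated form-filling agent with the Nova Act SDK (preview).
# Class and parameter names are assumptions based on the preview docs; the URL
# and form fields are made up.
from nova_act import NovaAct

with NovaAct(starting_page="https://example.com/contact") as nova:
    # Breaking the workflow into small, checkable act() steps is how agents built
    # on the SDK keep multi-step tasks reliable end to end.
    nova.act("fill the name field with 'Jane Doe'")
    nova.act("fill the email field with 'jane@example.com'")
    nova.act("submit the form")
```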

Deep dives

Mitra: Mixed synthetic priors for enhancing tabular foundation models: Introducing Mitra, a foundation model from Amazon researchers that outperforms traditional methods on tabular data by learning from diverse synthetic priors. Coming soon to AutoGluon 1.4, Mitra uses in-context learning to adapt to new tasks without requiring a separate model for each dataset.
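Below is a minimal sketch of how Mitra might be invoked once it ships in AutoGluon 1.4; TabularPredictor and its hyperparameters dictionary are standard AutoGluon, but the "MITRA" model key and the file names are assumptions.

```python
# Sketch: trying Mitra through AutoGluon's TabularPredictor once it lands in 1.4.
# The "MITRA" hyperparameters key is an assumption about how the model will be exposed.
from autogluon.tabular import TabularDataset, TabularPredictor

train = TabularDataset("train.csv")  # any tabular dataset with a label column (assumed files)
test = TabularDataset("test.csv")

predictor = TabularPredictor(label="target").fit(
    train,
    hyperparameters={"MITRA": {}},   # assumed key; restricts training to the Mitra model
)

print(predictor.evaluate(test))
```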

Using generative AI to do multimodal information retrieval: With large datasets, directly generating data ID codes from query embeddings is much more efficient than performing pairwise comparisons between queries and candidate responses. The GENIUS model uses semantic quantization and query augmentation to generate accurate ID codes for diverse queries and data, significantly improving upon previous generative retrieval methods.
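To make the efficiency argument concrete, here is an illustrative sketch (not the GENIUS implementation) of semantic quantization: items receive short discrete ID codes via residual k-means, so a generative retriever can decode a query straight into a code instead of scoring it against every candidate.

```python
# Illustrative sketch only: why generating discrete ID codes beats pairwise scoring at scale.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
items = rng.normal(size=(10_000, 64))      # stand-in item embeddings

def quantize(embs, levels=2, k=16):
    """Assign each embedding a short code (one token per level) via residual k-means."""
    codes, residual = [], embs.copy()
    for _ in range(levels):
        km = KMeans(n_clusters=k, n_init=4, random_state=0).fit(residual)
        codes.append(km.labels_)
        residual = residual - km.cluster_centers_[km.labels_]
    return np.stack(codes, axis=1)          # shape: (num_items, levels)

item_codes = quantize(items)
# A generative retriever decodes a query directly into such a code, so lookup cost
# scales with code length rather than with the number of candidates.
print(item_codes[:3])
```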

Pruning network nodes on the fly to improve LLM efficiency: Amazon researchers developed a new architecture that reduces a foundation model's inference time by 30% while maintaining its accuracy. Much as the brain recruits specialized regions for different tasks, the system activates only the subset of neurons appropriate to each task.
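As a conceptual illustration (not the paper's architecture), the sketch below shows an input-conditioned gate that keeps only a fraction of a feed-forward layer's hidden neurons per example, which is the general idea behind pruning network nodes on the fly; all layer sizes and the keep ratio are arbitrary.

```python
# Conceptual sketch: a gate scores each hidden neuron per input and keeps only the top-k,
# so the rest of the layer contributes nothing for that example.
import torch
import torch.nn as nn

class DynamicallyPrunedFFN(nn.Module):
    def __init__(self, d_model=256, d_hidden=1024, keep_ratio=0.7):
        super().__init__()
        self.up = nn.Linear(d_model, d_hidden)
        self.down = nn.Linear(d_hidden, d_model)
        self.gate = nn.Linear(d_model, d_hidden)   # scores each hidden neuron per input
        self.k = int(d_hidden * keep_ratio)

    def forward(self, x):                          # x: (batch, d_model)
        scores = self.gate(x)                      # (batch, d_hidden)
        topk = scores.topk(self.k, dim=-1).indices
        mask = torch.zeros_like(scores).scatter_(-1, topk, 1.0)
        h = torch.relu(self.up(x)) * mask          # inactive neurons are zeroed out
        return self.down(h)

out = DynamicallyPrunedFFN()(torch.randn(4, 256))
print(out.shape)                                   # torch.Size([4, 256])
```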

Two sparse architectures used in the researchers' experiments – the E-Branchformer (left) and the Transformer (right).

News and updates

The Aymara LLM Risk & Responsibility Matrix: New independent research from Aymara tested 20 leading language models against 10 real-world safety risks. Amazon Nova emerged as a leader, demonstrating strong performance across misinformation, impersonation, and privacy categories.

Amazon ML Summer School India 2025: Now accepting applications from students graduating in 2026 or 2027 from recognized Indian institutions. The program offers intensive machine learning training led by Amazon scientists, covering core ML topics through technical coursework and practical sessions.

New York Reinforcement Learning Workshop (NYRL) 2025: Amazon is hosting the inaugural event with Columbia Business School and NYU Tandon at its JFK27 campus on September 12. Leading researchers will gather to discuss the latest advances in reinforcement learning. Abstracts due August 21.

Conference roundup

ICML 2025 Test of Time Honorable Mention: Amazon Scholars Pieter Abbeel and Michael I. Jordan were recognized for their groundbreaking 2015 paper on trust region policy optimization. Their work has significantly influenced the development of reinforcement learning algorithms over the past decade.

Featured publications


LinkedIn | X/Twitter | Facebook | Instagram | GitHub | RSS

© 1996-2025 Amazon.com, Inc. or its affiliates | Privacy | Conditions of Use

Diamond Redmond MSc., MBA



Loving Amazon's insightful new Mitra foundation model, which breathes fresh life into the stagnant tabular-intelligence landscape. Their novel approach of training only on synthetic data creates a more "rational" model with less conceptual knowledge but much more adaptable pattern structures. It would be something like an LLM trained on meaning-rich, information-void language, so that it can more easily identify internal speech patterns without trying to map the input to existing knowledge: it's poor at answering questions, but it's great at in-context learning from the prompt itself.

While current performance is only marginally better than established paradigms like XGBoost and TabPFN, it will likely scale better, since it does not require XGBoost's cycle-hungry gradient and hyperparameter tuning and sidesteps the fixed feature and class limits of TabPFN. It also has more exploratory runway, as the architecture seems ripe for contextual LoRA overlays and even topical tuning that would boost the signal of patterns common to the target use case without losing the multi-dimensional attention capacity. Existing base solutions seem to be plateauing, while Mitra is just getting started. Awesome work, Amazon Science - thank you!
