GPT-5 launch is expected in August 2025. Here’s everything we know so far


Welcome to Atlas! We’re glad you’re here. This week, we follow intelligence as it leaves the screen and enters the world.

📌 In today’s Generative AI Newsletter:

  • GPT-5 prepares for August release with big architecture gains
  • Meta’s AI glasses aim to embed superintelligence in daily life
  • NASA’s AI astronaut helps coordinate real-time ISS tasks

Special highlight from our network

AI Frenzy Could Send ‘EarnPhone’ Soaring


The hidden fuel of AI? It’s already in your pocket.

Every swipe, search, and scroll creates data that powers Big Tech’s next breakthrough. They’re mining it. Mode Mobile is flipping the script, giving the value back to you.

They are creating a user-powered data economy that shares the upside: more than 50M users have already generated over $325M in earnings.

This isn’t a theory… Mode’s 32,481% revenue growth landed them the #1 spot on Deloitte’s 2023 list of the fastest-growing software companies, and they’ve secured the Nasdaq ticker $MODE ahead of a potential IPO.

AI breakthroughs are everywhere, but these models need your data to survive. Invest in the company that allows you to share in the profits from yours.

🚨 Round closing: invest at $0.30/share now.


Disclaimers

Mode Mobile recently received its ticker reservation with Nasdaq ($MODE), indicating an intent to IPO in the next 24 months. An intent to IPO is no guarantee that an actual IPO will occur.

The Deloitte rankings are based on submitted applications and public company database research, with winners selected based on their fiscal-year revenue growth percentage over a three-year period.

Please read the offering circular and related risks at invest.modemobile.com.

🌐 GPT-5: What We Know So Far About OpenAI's Next-Gen Model

Source: Getty Images

According to The Information, OpenAI’s GPT-5 is already in the hands of internal users, and the early feedback is glowing.

One tester described the experience as “extremely positive,” with particular praise for the model’s performance on software tasks.

Altman said GPT-5 solved a complex technical problem on the spot during internal testing, calling the experience a “here it is” moment.

GPT-5 is reportedly set for an August release. OpenAI believes the GPT-5 architecture can stretch through GPT-8, and CEO Sam Altman is already using the model behind the scenes.

How GPT-5 Levels Up from GPT-4:

  • Outperforms past models in code, excelling in both algorithmic problem-solving and legacy code maintenance
  • Adjusts its own reasoning effort, scaling computation based on task complexity, with potential user-level control (see the sketch after this list)
  • Acts more like a router, possibly combining OpenAI’s GPT and “o” model lines to direct queries dynamically
  • Reclaims developer mindshare, with performance strong enough to challenge Claude Sonnet 4 and push back against Claude-powered tools like Cursor
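
Here’s a rough sketch of what that user-level effort control and query routing could look like from a developer’s seat. It’s speculative: the “gpt-5” model name and the ask() helper are placeholders, and the reasoning_effort parameter is borrowed from the knob OpenAI already exposes for its o-series reasoning models.

```python
# Hypothetical sketch: calling a router-style GPT-5 with a user-level
# reasoning-effort hint. "gpt-5" is an assumed model name, and
# reasoning_effort mirrors OpenAI's existing o-series parameter.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def ask(prompt: str, effort: str = "medium") -> str:
    """Send a prompt along with a hint about how much compute to spend."""
    response = client.chat.completions.create(
        model="gpt-5",            # assumption, not a confirmed model name
        reasoning_effort=effort,  # "low" | "medium" | "high"
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


# A router-style model would decide internally whether a query needs a
# quick answer or a long chain of reasoning; the effort hint biases it.
print(ask("What's 2 + 2?", effort="low"))
print(ask("Find the race condition in this legacy scheduler.", effort="high"))
```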

OpenAI is planning to release its first open-weight model since 2019 ahead of GPT-5’s launch. The company is also preparing Sora 2, an upgraded video generation model expected to ship alongside GPT-5.

Here's a timeline of the major GPT releases:

  1. GPT-1 (June 2018): The original model with 117M parameters.
  2. GPT-2 (Feb 2019): Jumped to 1.5B parameters, sparking concerns over misuse.
  3. GPT-3 (June 2020): Leaped to 175B parameters, enabling few-shot learning.
  4. GPT-3.5 (Nov 2022): Improved instruction-following and factual accuracy.
  5. GPT-4 (Mar 2023): Introduced multimodal input with text and image support.
  6. GPT-4 Turbo (Nov 2023): Enhanced speed, efficiency, and context length.
  7. GPT-4o (May 2024): Added full multimodal support across text, image, audio, and video.

Instead of the expected “GPT-5,” an OpenAI roadmap shown at the VivaTech conference listed “GPT-Next,” suggesting the company may be moving away from its familiar numbered naming system.

If earlier versions were clever interns, this one’s dangerously close to middle management. When it lands, you might not be typing prompts. You'll be assigning work.

🛰️ NASA Tests AI Crew Support for Long-Term Missions

Credits: NASA

There’s a new crew member aboard the International Space Station, and it doesn’t eat, sleep, or need a space suit. CIMON, a floating AI assistant built by Airbus and IBM, is now helping astronauts with lab tasks, camera operations, and autonomous decision-making.

Roughly the size of a bowling ball, CIMON looks more like sci-fi than real life. But it’s already supporting real missions, responding to crew members’ voice commands and recognizing their faces.

This week, JAXA astronaut Takuya Onishi tested CIMON’s ability to coordinate with a free-floating robotic camera, using only spoken commands to locate an object in Japan’s Kibo module.

The experiment is part of Japan’s ICHIBAN project, which explores whether AI systems can relieve astronauts of routine burdens like system checks and documentation, giving them more time for hands-on science. 

NASA is observing how CIMON interprets instructions, navigates space, and interacts with other robots in real time.

The long-term aim is to develop AI systems that can act as copilots for missions far from Earth, where communication delays and psychological strain pose real risks. 

From managing repairs to offering conversational support during months of isolation, these floating assistants could become mission-critical crew members on trips to Mars, lunar bases, and beyond.

If humanity is going interstellar, it won’t be alone. Machines like CIMON are earning their seat aboard the ship.

🧬 Meta Wants to Put Personal Superintelligence in Your Pocket

Source: Getty Images

In a Wednesday letter, Mark Zuckerberg made it official: Meta is going all-in on personal superintelligence.

These systems will live inside AR glasses and headsets, watching the world unfold from your point of view and quietly shaping your actions in response.

The company has set up Meta Superintelligence Labs, a new division formed after a $14.3B investment in Scale AI. Internal development has now shifted: work on Llama Behemoth has been paused, and Meta is focusing instead on product-native models built for real-world environments.

The language around open source has changed too. Zuckerberg cited “novel safety concerns” and introduced the possibility of stricter release controls going forward.

These models will operate through Meta hardware that tracks user context in real time. The glasses are built to recognize physical spaces, interpret attention, and respond immediately. 

Safety, in Meta’s new playbook, means containment through design rather than openness through code.

It’s intelligence that moves with you, studies you, adapts to you. Once it enters your line of sight, it stays there. 

The most powerful model may not be in the cloud or the lab. It may be the one watching you reach for your morning coffee, already preparing what comes next.

🧠 Neuralink User Writes Her Name for the First Time in 20 Years

Credit: Audrey Crews

Audrey Crews spent the last twenty years unable to move or speak. When she was 16, a sudden medical event left her paralyzed. Today, she’s writing her name again without lifting a finger.

Neuralink’s latest brain-computer interface, the N1 implant, has turned her thoughts into action. Crews now controls a computer with pure intention, moving a cursor, typing responses, even drawing images, all by thinking them into existence.

The procedure was conducted at the University of Miami, where surgeons embedded 128 flexible threads into the motor cortex of her brain.

Each thread records neural activity that is then translated by AI into commands a computer can understand.
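
As a rough illustration of that translation step, here’s a toy linear decoder, emphatically not Neuralink’s actual algorithm, that maps one time-bin of per-thread activity to a cursor command. The 128-channel count is taken from the article; in a real system the decoder weights are learned during a calibration session rather than drawn at random.

```python
# Toy illustration of neural decoding: map one time-bin of activity from
# the implant's recording threads to a 2-D cursor velocity. This is NOT
# Neuralink's algorithm; the 128-channel count comes from the article,
# and the random weights stand in for a learned, calibrated decoder.
import numpy as np

N_CHANNELS = 128  # one activity value per flexible thread
rng = np.random.default_rng(seed=0)

# Placeholder weights; a real decoder fits these while the user imagines
# moving the cursor during calibration.
W = rng.normal(size=(2, N_CHANNELS))


def decode_velocity(firing_rates: np.ndarray) -> np.ndarray:
    """Translate a 128-channel activity vector into (vx, vy)."""
    return W @ firing_rates


spikes = rng.poisson(lam=5.0, size=N_CHANNELS).astype(float)
vx, vy = decode_velocity(spikes)
print(f"move cursor by ({vx:.2f}, {vy:.2f})")
```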

The device itself is the size of a coin and fully wireless, designed for everyday use without physical strain.

Crews has started posting online again, typing replies and sketching doodles. Every line she creates is made through thought alone.

The technology may be early, but its meaning is immediate.

This is the end of the line. The system updates next week, when we’ll be watching bodies become wired in, funding lines blur, and open weights shift.



Asad Shoukat

Google Certified SEO Specialist | Digital Marketing Expert


It feels like we’re living in the season finale of “Humans vs. Tomorrow”—AI is now leaping from cloud servers straight to our wrists, brains, and, apparently, orbiting research labs. The bar for “hard problem” is moving faster than most of us can update our LinkedIn profiles. But as the interfaces multiply and AI becomes both invisible and indispensable, there’s still plenty of room for platforms that supercharge everyday work. Solutions like https://www.chat-data.com/ let businesses embed advanced AI into their own products and workflows. Whether it’s collecting insights, automating conversations, or even supporting voice and image interactions, you get all the brilliance—no need for spaceship training or brain implants.

Nick Allen

Founding Partner @ Olympic Change Partners | Strategic Leadership


Glad to hear Jurassic tech like AI is still evolving, but Darwin will catch up. We are operating on post-Turing SSI. Sentient intelligence is so much better, easier, more neuroaligned.

Betty Gerena MBA

Upper level university admin, non-profit organization director, SAG-AFTRA, AEA actor/dancer, certified fitness/dance instructor/consultant, web content writer, photographer


Technology is evolving at the speed of light and changing the world.

Ryan A.

AI Consultant, Patent Agent


Help with coding is great, but can these “newer” models solve complex engineering problems outside the realm of software? For example, designing a more efficient solar cell or WPC chip? And I’m not just talking about electrical engineering problems… there’s much more work to do than better coding for software engineers.


