In the evolving landscape of image-guided surgery, mixed reality (MR) is capturing the imagination of both clinicians and engineers [1, 2]. The idea of overlaying a three-dimensional, holographic reconstruction of the patient’s anatomy onto the surgical field, allowing the operator to “see through” tissues and navigate in real time, promises to bring a new dimension to precision surgery [3].

The study by Liu and colleagues, presented in this issue of *Prostate Cancer and Prostatic Diseases*, offers an interesting comparative analysis of MR-guided navigation in robot-assisted radical prostatectomy (RARP) for patients with high-risk prostate cancer. The authors report on a single-center, propensity score-matched cohort study comparing outcomes between MR-assisted RARP (MR-RARP) and standard RARP (S-RARP) in 114 patients with high-risk prostate cancer. Their results are worthy of note: MR-RARP was associated with higher nerve-sparing rates, a lower incidence of positive surgical margins, and faster recovery of urinary continence and sexual potency at early time points (1 and 3 months). The MR navigation system emerged as an independent predictor of functional recovery and margin status, suggesting that its intraoperative use is not merely “beautiful to see” but clinically impactful. Importantly, these results were achieved without compromising oncological safety, with comparable biochemical recurrence rates between the two groups over a median follow-up of 28 months.

What makes these findings particularly compelling is the clinical context in which they are situated. High-risk prostate cancer remains a surgical dilemma: how to balance oncologic control with the preservation of functional outcomes [4]. In this delicate equilibrium, every millimeter matters. The ability to visualize the index lesion and its relationship to key anatomical structures in real time, rather than relying solely on cognitive reconstruction from preoperative imaging, represents a genuine advancement [5]. MR-guided navigation provides that spatial fidelity, allowing surgeons to tailor their dissection strategy intraoperatively based on accurate 3D models of the prostate, tumor, neurovascular bundles, and surrounding anatomy.

The findings of Liu et al. are encouraging, but their work also underscores the numerous challenges that stand between promise and routine clinical adoption: the clinical signal must be interpreted within the boundaries of technical feasibility, reproducibility, and health system integration [6]. The current implementation of MR navigation in RARP, as described by the authors, relies on several labor-intensive steps. The quality of the reconstructed model is tightly linked to the resolution and consistency of preoperative imaging, an aspect that is highly variable across institutions and patient populations. Manual segmentation of multiparametric MRI (mpMRI) images and rigid registration of 3D models require close collaboration between radiologists, urologists, and biomedical engineers. Intraoperative integration with the da Vinci console, although elegantly performed through TilePro™ technology, still depends on static models that do not account for tissue deformation or organ shift during surgery. Moreover, the overlay is manually managed by a biomedical engineer, who must continuously “track” the prostate with the 3D model during surgery. This alignment process requires specific expertise and experience, and it inherently increases the risk of misalignment, potentially compromising the precision of the technique [7].

These limitations are not insurmountable, but they are real. They remind us that innovation in surgery is as much about systems and workflows as it is about vision and technology. For MR navigation to transition from investigational tool to standard of care, several steps must be taken. First, automation: segmentation and registration need to be streamlined using artificial intelligence (AI) and machine learning [8]. Second, adaptability: future platforms must incorporate dynamic, elastic models that respond to intraoperative movements and anatomical distortion. Third, validation: multicenter, randomized trials with long-term oncologic endpoints and cost-effectiveness analyses are essential to justify widespread implementation [9].

Nonetheless, the work of Liu et al. serves as an important proof of concept. It shows that even with the current generation of tools, MR-guided navigation can be safely integrated into complex oncologic surgery, yielding measurable improvements in early functional outcomes and surgical confidence. The significance of this achievement should not be underestimated.

Looking ahead, the convergence of MR with AI-powered analytics, real-time imaging, and (hopefully) robotic instrumentation could define the next chapter in prostate cancer surgery. But as always in medicine, the enthusiasm for innovation must be tempered by critical evaluation and responsible integration. Mixed reality, in its current form, is not a panacea. It will not compensate for poor surgical training or suboptimal preoperative planning. But as Liu and colleagues have shown, when placed in the hands of experienced surgeons with a commitment to technological excellence, MR can become more than a futuristic accessory: it can be a transformative ally in the quest for better cancer surgery.

The times are indeed a-changin’, but the journey toward meaningful and widespread clinical transformation is still a long one, demanding vision, robust evidence, and unwavering commitment.