
Facial expressions in mice reveal latent cognitive variables and their neural correlates

Abstract

Brain activity controls adaptive behavior but also drives unintentional incidental movements. Such movements could potentially be used to read out internal cognitive variables that are also neurally computed. Establishing this would require ruling out that incidental movements reflect cognition merely because they are coupled with task-related responses through the biomechanics of the body. Here we addressed this issue in a foraging task for mice, where multiple decision variables are simultaneously encoded even if, at any given time, only one of them is used. We found that characteristic features of the face simultaneously encode not only the currently used decision variables but also independent and unexpressed ones, and we show that these features partially originate from neural activity in the secondary motor cortex. Our results suggest that facial movements reflect ongoing computations above and beyond those related to task demands and demonstrate the ability of noninvasive monitoring to expose otherwise latent cognitive states.


Fig. 1: Task, strategies and decision variables.
Fig. 2: Stereotyped facial expressions of decision variables.
Fig. 3: Facial expressions of decision variables do not depend on the strategy.
Fig. 4: Expressions of decision variables in facial movement versus neurons.
Fig. 5: Effect of M2 inactivation on facial expressions of decision variables.

Data availability

The behavioral and electrophysiological data used in this study are available on Figshare at https://figshare.com/s/924af1de619f4597f37a (ref. 44). Raw videos and electrophysiological data are too large to be shared on a public repository and are therefore available from the authors upon request.

Code availability

All analyses were performed using custom code written in MATLAB that is available upon request. The code used to process the videos is publicly available at https://github.com/MouseLand/facemap. The code used for the central GLM analyses is publicly available at https://hastie.su.domains/glmnet_matlab/. The code developed for the LM-HMM can be accessed at https://github.com/mazzulab/ssm/blob/master/notebooks/2c%20Input-driven%20linear%20model%20(LM-HMM).ipynb.
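For orientation, the sketch below illustrates how a cross-validated regularized fit could be run through the glmnet_matlab interface linked above; it is a minimal example with random stand-in data and hypothetical variable names, not the study's actual pipeline.

```matlab
% Hypothetical illustration of the glmnet_matlab interface (cvglmnet /
% cvglmnetPredict); X and y are random stand-ins for movement PCs and a
% decision variable, not the study's data.
X = randn(2000, 100);                              % samples x predictors
y = X(:, 1) + 0.5*randn(2000, 1);                  % target to be decoded

cvfit = cvglmnet(X, y);                            % cross-validated fit over the lambda path
yhat  = cvglmnetPredict(cvfit, X, 'lambda_min');   % predictions at the lambda minimizing CV error
```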

References

  1. Cowen, A. S. et al. Sixteen facial expressions occur in similar contexts worldwide. Nature 589, 251–257 (2021).

  2. Roberts, L. W., Chan, S. & Torous, J. New tests, new tools: mobile and connected technologies in advancing psychiatric diagnosis. NPJ Digit. Med. 1, 20176 (2018).

  3. Euston, D. R. & McNaughton, B. L. Apparent encoding of sequential context in rat medial prefrontal cortex is accounted for by behavioral variability. J. Neurosci. 26, 13143–13155 (2006).

  4. Cowen, S. L. & McNaughton, B. L. Selective delay activity in the medial prefrontal cortex of the rat: contribution of sensorimotor information and contingency. J. Neurophysiol. 98, 303–316 (2007).

  5. Selen, L. P. J., Shadlen, M. N. & Wolpert, D. M. Deliberation in the motor system: reflex gains track evolving evidence leading to a decision. J. Neurosci. 32, 2276–2286 (2012).

  6. Flavell, S. W., Gogolla, N., Lovett-Barron, M. & Zelikowsky, M. The emergence and influence of internal states. Neuron 110, 2545–2570 (2022).

  7. Barrett, L. F., Adolphs, R., Marsella, S., Martinez, A. & Pollak, S. D. Emotional expressions reconsidered: challenges to inferring emotion from human facial movements. Psychol. Sci. Public Interest 20, 1–68 (2019).

  8. LeDoux, J. E. What emotions might be like in other animals. Curr. Biol. 31, R824–R829 (2021).

  9. Dolensek, N., Gehrlach, D. A., Klein, A. S. & Gogolla, N. Facial expressions of emotion states and their neuronal correlates in mice. Science 368, 89–94 (2020).

  10. Vinck, M., Batista-Brito, R., Knoblich, U. & Cardin, J. A. Arousal and locomotion make distinct contributions to cortical activity patterns and visual encoding. Neuron 86, 740–754 (2015).

  11. McGinley, M. J., David, S. V. & McCormick, D. A. Cortical membrane potential signature of optimal states for sensory signal detection. Neuron 87, 179–192 (2015).

  12. Cazettes, F., Reato, D., Morais, J. P., Renart, A. & Mainen, Z. F. Phasic activation of dorsal raphe serotonergic neurons increases pupil size. Curr. Biol. 31, 192–197 (2021).

  13. Bimbard, C. et al. Behavioral origin of sound-evoked activity in mouse visual cortex. Nat. Neurosci. 26, 251–258 (2023).

  14. Clayton, K. K. et al. Sound elicits stereotyped facial movements that provide a sensitive index of hearing abilities in mice. Curr. Biol. 34, 1605–1620 (2024).

  15. Musall, S., Kaufman, M. T., Juavinett, A. L., Gluf, S. & Churchland, A. K. Single-trial neural dynamics are dominated by richly varied movements. Nat. Neurosci. 22, 1677–1686 (2019).

  16. Yin, C. et al. Spontaneous movements and their relationship to neural activity fluctuate with latent engagement states. Neuron https://pubmed.ncbi.nlm.nih.gov/40602403/ (2025).

  17. Salkoff, D. B., Zagha, E., McCarthy, E. & McCormick, D. A. Movement and performance explain widespread cortical activity in a visual detection task. Cereb. Cortex 30, 421–437 (2020).

  18. Tremblay, S., Testard, C., DiTullio, R. W., Inchauspé, J. & Petrides, M. Neural cognitive signals during spontaneous movements in the macaque. Nat. Neurosci. 26, 295–305 (2023).

  19. Syeda, A. et al. Facemap: a framework for modeling neural activity based on orofacial tracking. Nat. Neurosci. 27, 187–195 (2024).

  20. Talluri, B. C. et al. Activity in primate visual cortex is minimally driven by spontaneous movements. Nat. Neurosci. 26, 1953–1959 (2023).

  21. Hasnain, M. A. et al. Separating cognitive and motor processes in the behaving mouse. Nat. Neurosci. 28, 640–653 (2025).

  22. Reato, D., Steinfeld, R., Tacão-Monteiro, A. & Renart, A. Response outcome gates the effect of spontaneous cortical state fluctuations on perceptual decisions. eLife 12, e81774 (2023).

  23. Hulsey, D., Zumwalt, K., Mazzucato, L., McCormick, D. A. & Jaramillo, S. Decision-making dynamics are predicted by arousal and uninstructed movements. Cell Rep. 43, 113709 (2024).

  24. Cazettes, F. et al. A reservoir of foraging decision variables in the mouse brain. Nat. Neurosci. 26, 840–849 (2023).

  25. Vertechi, P. et al. Inference-based decisions in a hidden state foraging task: differential contributions of prefrontal cortical areas. Neuron 106, 166–176 (2020).

  26. Stringer, C. et al. Spontaneous behaviors drive multidimensional, brainwide activity. Science 364, 255 (2019).

  27. Schwartz, A. B. Movement: how the brain communicates with the world. Cell 164, 1122–1135 (2016).

  28. Cowen, A. S. & Keltner, D. Self-report captures 27 distinct categories of emotion bridged by continuous gradients. Proc. Natl Acad. Sci. USA 114, E7900–E7909 (2017).

  29. Langford, D. J. et al. Coding of facial expressions of pain in the laboratory mouse. Nat. Methods 7, 447–449 (2010).

  30. Mangin, E. N., Chen, J., Lin, J. & Li, N. Behavioral measurements of motor readiness in mice. Curr. Biol. 33, 3610–3624 (2023).

  31. Lottem, E. et al. Activation of serotonin neurons promotes active persistence in a probabilistic foraging task. Nat. Commun. 9, 1000 (2018).

  32. Dotan, D., Meyniel, F. & Dehaene, S. On-line confidence monitoring during decision making. Cognition 171, 112–121 (2018).

  33. Cisek, P. & Kalaska, J. F. Neural mechanisms for interacting with a world full of action choices. Annu. Rev. Neurosci. 33, 269–298 (2010).

  34. Steinmetz, N. A., Zatka-Haas, P., Carandini, M. & Harris, K. D. Distributed coding of choice, action, and engagement across the mouse brain. Nature 576, 266–273 (2019).

  35. Zagha, E. et al. The importance of accounting for movement when relating neuronal activity to sensory and cognitive processes. J. Neurosci. 42, 1375–1382 (2022).

  36. Miller, M. R., Herrera, F., Jun, H., Landay, J. A. & Bailenson, J. N. Personal identifiability of user tracking data during observation of 360° VR video. Sci. Rep. 10, 17404 (2020).

  37. Asish, S. M., Kulshreshth, A. K. & Borst, C. W. User identification utilizing minimal eye-gaze features in virtual reality applications. Virtual Worlds 1, 42–61 (2022).

  38. Smith, M. & Miller, S. The ethical application of biometric facial recognition technology. AI Soc. 37, 167–175 (2022).

  39. Pereira, R. S. in The Legal Challenges of the Fourth Industrial Revolution (eds Moura Vicente, D., de Vasconcelos Casimiro, S. & Chen, C.) 193–209 (Springer International Publishing, 2023).

  40. Faraldo Cabana, P. in Artificial Intelligence, Social Harms and Human Rights (eds Završnik, A. & Simončič, K.) 35–54 (Springer International Publishing, 2023).

  41. Lopes, G. et al. Bonsai: an event-based framework for processing and controlling data streams. Front. Neuroinform. 9, 7 (2015).

  42. Shamash, P., Carandini, M., Harris, K. & Steinmetz, N. A tool for analyzing electrode tracks from slice histology. Preprint at bioRxiv https://doi.org/10.1101/447995 (2018).

  43. Pachitariu, M., Sridhar, S., Pennington, J. & Stringer, C. Spike sorting with Kilosort4. Nat. Methods 21, 914–921 (2024).

  44. Cazettes, F. Facial expressions in mice and neural correlates. Figshare https://figshare.com/s/924af1de619f4597f37a (2025).


Acknowledgements

We thank J. P. Morais for support with behavioral training. This work was funded by CNRS (F.C.), Simons Foundation (F.C.: SCGB 969875; Z.F.M.: SCGB 543011), Marie-Curie postdoctoral fellowships (F.C.: HORIZON-MSCA-2021-PF-01 101062459; D.R.: HORIZON-MSCA-2021-PF-01 101063075), Fundação para a Ciência e a Tecnologia (A.R.: LISBOA-01-0145-FEDER-032077 and PTDC/MED-NEU/4584/2021), la Caixa Foundation (A.R.: HR23-00799), the European Research Council Advanced Grant (Z.F.M.; 671251) and Champalimaud Foundation (A.R. and Z.F.M.). This work was also supported by Portuguese national funds through Fundação para a Ciência e a Tecnologia in the context of the project UIDB/04443/2020 and by the research infrastructure CONGENTO, cofinanced by Lisboa Regional Operational Programme (Lisboa2020), under the PORTUGAL 2020 Partnership Agreement, through the European Regional Development Fund and Fundação para a Ciência e a Tecnologia (Portugal) under the projects LISBOA-01-0145-FEDER-02217 and LISBOA-01-0145-FEDER-022122.

Author information

Contributions

F.C., A.R. and Z.F.M. designed the study. F.C. and E.A. performed behavioral and optogenetics experiments. F.C. performed electrophysiological experiments and curated the data. D.R. processed the video data. R.S. collected the data used in Extended Data Fig. 3. F.C., D.R. and A.R. designed and performed the analyses. F.C., A.R. and Z.F.M. wrote the paper. All authors reviewed the paper.

Corresponding author

Correspondence to Fanny Cazettes.

Ethics declarations

Competing interests

The authors declare no competing interests.

Peer review

Peer review information

Nature Neuroscience thanks Nadine Gogolla, Jeffrey Markowitz and the other, anonymous, reviewer(s) for their contribution to the peer review of this work.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Extended data

Extended Data Fig. 1 Decoding decision variables from movement PCs.

a, Decision variables (for example, consecutive failures) evolve after each lick outcome. Therefore, to predict these variables from movement PCs, we aligned the PCs to lick events using a 200 ms window. b, Decoding accuracy of various decision variables from movement PCs was estimated using a 200 ms sliding window (75% overlap) at different time lags relative to the lick event (example session shown). c, Peak decoding accuracy (grey dots) for each decision variable occurred between 100 ms and 350 ms after the lick. Error bars represent the median and m.a.d. across sessions (N = 10 sessions, 8 sessions from distinct animals, two from the same animal). d, This schematic describes the method used to partial out the linear relationship between latent variables. This approach allowed us to decompose DV1 (for example, consecutive failures) into the sum of two time series: one proportional to DV0 (for example, action outcomes) and another orthogonal (uncorrelated) to DV0, which we denote as unique DV1 (for example, unique consecutive failures). The unique consecutive failures residual (pink) is orthogonal to the action outcomes (black). Subsequently, the same procedure generated the unique negative value residual, orthogonal to both action outcomes and unique consecutive failures. The three resulting orthogonal time series (action outcomes, unique consecutive failures, and unique negative value) were then fit using movement PCs.
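As a minimal sketch of the partialing-out step described in panel d (assuming ordinary least squares; the variable names and data below are hypothetical stand-ins), the "unique" component of one decision variable can be obtained as the residual of a regression on the other:

```matlab
% Sketch of the orthogonalization in Extended Data Fig. 1d (assumed OLS;
% dv0/dv1 are random stand-ins for action outcomes and consecutive failures).
dv0 = randn(1000, 1);                   % e.g. action outcomes
dv1 = 0.6*dv0 + randn(1000, 1);         % e.g. consecutive failures

X        = [ones(size(dv0)) dv0];       % design matrix with intercept
beta     = X \ dv1;                     % least-squares fit of dv1 on dv0
dv1_uniq = dv1 - X*beta;                % "unique" dv1, orthogonal to dv0

c = corrcoef(dv1_uniq, dv0);            % c(1,2) is ~0 up to numerical precision
```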

Extended Data Fig. 2 Facial movement encodes decision variables independently of licking rate and can also reflect slow timescale processes.

a, Changes in lick rate across 4 example bouts. b, Correlation (Pearson’s coefficient) between lick rate and the two decision variables (median across 10 sessions with 25th and 75th percentiles; whiskers represent minimum and maximum values). A small negative correlation exists between lick rate and the decision variables, suggesting that mice tend to lick slightly faster during reward consumption and slow down towards the end of a lick bout. c, Decoding accuracy (cross-validated R²) for action outcome, unique aspects of the two decision variables, and arbitrary signals in each region of interest (median across 10 sessions with 25th and 75th percentiles; whiskers represent minimum and maximum values). The movement PCs are used as predictors in multivariate regression models to predict the action outcome and the decision variables with lick rate variance removed (partialed out). The variance in action outcome, consecutive failures and negative value that is not explained by lick rate remains highly decodable. This suggests that the relationship between facial expressions and latent variables is not solely explained by licking behavior. d, Decoding of bout number from facial movement PCs in an example session. The actual bout number is shown as a thick light orange line, and the decoded projection is shown as a thin dark orange line. e, Decoding accuracy (cross-validated R²) across all sessions (N = 10, orange dots). Boxplots show the median (center line), 25th and 75th percentiles (box edges), and minimum and maximum values (whiskers).
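A minimal sketch of the cross-validated decoding metric reported here (shown with plain least squares and 5 folds; the published analyses use regularized GLMs, and all data below are random stand-ins):

```matlab
% Cross-validated R^2 for decoding a decision variable from movement PCs.
nSamples = 2000; nPCs = 25;
pcs = randn(nSamples, nPCs);                    % stand-in movement PCs
dv  = pcs(:, 1) + 0.5*randn(nSamples, 1);       % stand-in decision variable

k     = 5;
folds = repmat(1:k, 1, ceil(nSamples/k));
folds = folds(1:nSamples);                      % fold label per sample
pred  = nan(nSamples, 1);
for f = 1:k
    tr = folds ~= f;  te = folds == f;
    b  = [ones(sum(tr), 1) pcs(tr, :)] \ dv(tr);           % fit on training folds
    pred(te) = [ones(sum(te), 1) pcs(te, :)] * b;          % predict held-out fold
end
R2 = 1 - sum((dv - pred).^2) / sum((dv - mean(dv)).^2);    % cross-validated R^2
```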

Extended Data Fig. 3 Facial expression of different task variables in an auditory two-alternative forced choice (2AFC) task.

a, Mice (N = 5 for a total of 20 behavioral sessions) were presented with single tones (150 ms) of varying frequencies (low: 9.9, 12, and 13 kHz; high: 15, 16.3, and 20 kHz) and, after a delay period (500 ms after stimulus offset), reported their perceived frequency (high or low) by licking one of two water spouts (left or right) to receive a water reward if the response was correct. Videos were recorded simultaneously at 60 fps. b, Task schematics and time intervals analyzed. Video analysis focused on three periods, color-coded in the schematic: “pre-stimulus” (before tone onset; 1 s), “pre-response” (including stimulus presentation and part of the delay period; 500 ms), and “response” (around the time of the lick response; 200 ms). c, Representative frame from the co-registered video (generated by combining the 20 sessions), along with the first four eigenfaces. Error bars represent the median and m.a.d. across mice (N = 5). d, Average motion energy across task periods. Motion energy (relative to the video average) is shown for each analysis period. Low motion is observed during the pre-stimulus period (top), increasing during stimulus presentation (middle), and peaking during the response (bottom), reflecting movement associated with licking. e, Decoding accuracy of task variables (previous trial outcome, rewarded/unrewarded, and trial number) from facial movement PCs. Each dot represents the average decoding accuracy across sessions for a single mouse; error bars indicate the median and m.a.d. across mice. The task variables, especially the slow latent variable ‘Trial number’, can be decoded with relatively high accuracy from facial movement PCs. f, Weighted masks (that is, facial representation of the decoded task variables in panel e) for the three different time intervals. The expression of the task variables on the face is highly consistent across the different time intervals, suggesting that this representation is independent of the animal’s overall movement.
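For readers unfamiliar with the eigenface decomposition, the sketch below outlines a generic motion-energy SVD of the kind underlying the movement PCs (the study itself uses Facemap for this step; array sizes and data are hypothetical stand-ins):

```matlab
% Generic motion-energy SVD ("eigenfaces"); random frames stand in for video.
frames = rand(120, 160, 500);                      % height x width x nFrames
motion = abs(diff(frames, 1, 3));                  % frame-to-frame motion energy
M      = reshape(motion, [], size(motion, 3));     % pixels x time
M      = M - mean(M, 2);                           % center each pixel over time
[U, S, V] = svd(M, 'econ');
pcs        = S * V';                               % movement PCs over time
eigenface1 = reshape(U(:, 1), 120, 160);           % spatial map of the first PC
```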

Extended Data Fig. 4 Image co-registration across videos.

a, Eight facial landmarks (red) were manually identified on average frames from each video. An affine transformation, determined from these landmarks using MATLAB’s fitgeotrans function, co-registered frames from each video to a reference frame (orange, bottom). This transformation facilitated comparisons and averaging of weighted masks (Fig. 2), video concatenation (Fig. 3), and definition of an average facial silhouette (Fig. 2). b, Improvement in pairwise 2D cross-correlation between average video frames before and after co-registration (N = 10 videos; error bars represent the median and m.a.d. across mice). c, Example traces of the first five principal components (PCs) derived from motion energy analysis of three co-registered videos. Singular value decomposition (SVD) was applied to the merged video data. d, t-SNE visualization of the first 100 PCs of the merged video. Points are color-coded by video identity, revealing substantial overlap between videos.
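A minimal sketch of the landmark-based co-registration described in panel a (fitgeotrans is named in the legend; the landmark coordinates and frames below are hypothetical stand-ins):

```matlab
% Landmark-based affine co-registration of one video's average frame to a
% reference frame; coordinates and frames are hypothetical stand-ins.
movingPts = [30 40; 90 42; 60 80; 30 100; 95 98; 62 20; 45 60; 80 60];   % 8 landmarks
fixedPts  = movingPts + 2*randn(size(movingPts));                        % reference landmarks
tform     = fitgeotrans(movingPts, fixedPts, 'affine');                  % affine transform

refFrame    = rand(120, 160);                                            % reference average frame
movingFrame = rand(120, 160);                                            % frame to co-register
registered  = imwarp(movingFrame, tform, 'OutputView', imref2d(size(refFrame)));

r = corr2(refFrame, registered);   % 2D correlation used to assess registration (panel b)
```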

Extended Data Fig. 5 Stereotyped facial expressions of decision variables in wild-type and VGAT mice.

a, Weighted masks for a single example session (top) of each wild-type mouse (N = 3) and averaged across all sessions (N = 24, bottom) from all mice during the laser OFF condition. b, Same as in panel (a) but for a single example session (top) of each VGAT mouse (N = 5) and averaged across all sessions (N = 24, bottom) from all mice during the laser OFF condition. c, Inter-animal similarity of facial expressions of decision variables (Action outcome: OUT; Unique consecutive failure: CF; Unique negative value: VAL; Arbitrary signal: ARB). Colors represent the normalized 2D cross-correlation at zero lag between the mean weighted masks of two mice (for each mouse the mean weighted mask was the average across sessions, N = 8 mice, 6 ± 2.7 sessions per mouse). d, Mean weighted mask similarity for each mouse and decision variable. Each gray dot represents the average pairwise correlation of the mean weighted mask of a mouse for a given decision variable with the mean weighted masks of all the other mice for the same decision variable. Each gray diamond represents the average pairwise correlation of the mean weighted mask of a mouse for a given decision variable with the mean weighted masks of the same mouse for all the other decision variables. Colored error bars represent the median and m.a.d. across sessions (N = 10 sessions, 8 sessions from distinct animals, two from the same animal). e, Distribution of average pairwise 2D correlations at zero lag for electrophysiology (data in Figs. 2–4) and downsampled optogenetics (data in Fig. 5) datasets. Pairwise correlations of facial expressions of decision variables were calculated for all sessions in the electrophysiology dataset (black dots; median and median absolute deviation (m.a.d.) indicated). For comparison, 10 sessions were randomly sampled from the optogenetics dataset multiple times, and average correlations were calculated for each sample (gray dots). The sampling procedure was repeated twenty times, each represented as a row of gray dots. Note that correlations are overall smaller than the ones estimated using averages across sessions (rather than single ones) as in (d). f, Within- and across-mice correlation of decision variable facial expression. Each point represents the average correlation of a single behavioral session with all other sessions, either from the same mouse (within) or different mice (across). Error bars represent the mean and s.d. across sessions per mouse (N = 8 mice, 6 ± 3 sessions per mouse).
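The similarity measure in panels c-f is a zero-lag normalized 2D cross-correlation; a minimal sketch, with random stand-in masks, is:

```matlab
% Zero-lag normalized 2D cross-correlation between two weighted masks
% (random stand-ins for the co-registered masks of two mice).
maskA = randn(120, 160);
maskB = randn(120, 160);

a = maskA - mean(maskA(:));
b = maskB - mean(maskB(:));
similarity = sum(a(:) .* b(:)) / sqrt(sum(a(:).^2) * sum(b(:).^2));
```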

Extended Data Fig. 6 Decoding multiple decision variables and facial movement from neural activity.

a, We recorded with Neuropixels probes in multiple regions of the frontal cortex. Schematic of the target location of the Neuropixels probe insertion. Vertical insertions were performed within a 1 mm diameter craniotomy centered around +2.5 mm anterior and +1.5 mm lateral from Bregma. b, An example of histology with the electrode track. We painted the probe with a red fluorescent dye to recover its location post hoc. c, To decode the instantaneous value of multiple decision variables (pink & blue traces, right), we used regression models taking as predictors the activity of simultaneously recorded neurons in each brain region (black traces, left, example activity from M2). The model predictions (the weighted sums of neural activity, black trace right) overlap with the decision variables. d, Neural vs. facial movement PC decoding latencies for different decision variables (N = 10 sessions). Each session contributes three points (gray, pink, blue), one per decision variable. Points above the identity line indicate later facial movement representation relative to neural representation. Black cross: median ± m.a.d. across all points. e, Predicting facial movements from neural activity in M2, OFC, and OC. GLMs were trained to predict facial movement PCs using a 50 ms non-overlapping sliding window of lagged neural activity from M2, OFC, and OC. Facial movement PCs were derived from concatenated videos to enable cross-session comparisons. f, Relationship between decoding accuracy and the time of peak decoding accuracy for facial movements. Peak times (median of cross-validated R² across sessions, N = 10) are shown for the 25 PCs of facial movement with the largest variance, as a function of decoding accuracy from neurons in M2, OFC, and OC. Each dot represents one facial movement PC. Negative values, particularly in M2 (M2 = −0.05 ± 0.06, p = 0.002; OFC = −0.05 ± 0.10, p = 0.194; OC = 0.05 ± 0.09, p = 0.0194; median ± m.a.d., N = 10 sessions, Wilcoxon signed rank test, Holm-Bonferroni corrected), indicate that neural activity preceding the facial movement is most predictive.
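A minimal sketch of the lagged regression in panel e (bin size, number of lags, and data below are hypothetical stand-ins; the published analysis uses regularized GLMs):

```matlab
% Predicting one facial movement PC from lagged population activity.
nBins = 3000; nNeurons = 40; nLags = 10;       % e.g. 10 lags of 50 ms bins
spikes = rand(nBins, nNeurons);                % stand-in binned neural activity
pc     = randn(nBins, 1);                      % stand-in facial movement PC

X = ones(nBins - nLags, 1);                    % intercept column
for lag = 1:nLags
    X = [X spikes(nLags - lag + 1:end - lag, :)];   %#ok<AGROW> append lagged predictors
end
y    = pc(nLags + 1:end);                      % PC aligned to the lagged predictors
beta = X \ y;                                  % least-squares fit
pred = X * beta;
R2   = 1 - sum((y - pred).^2) / sum((y - mean(y)).^2);
```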

Extended Data Fig. 7 Effect of M2 inactivation on movement and facial expressions of decision variables with lick rate variance removed.

a, Laser-induced changes in facial movement patterns. 2D masks show the difference in average facial motion (calculated from movement PCs) between laser ON and OFF conditions, across mice and at different lick numbers (decision time points). b, Variability of laser-induced facial motion changes. The variance of the difference in motion energy between laser ON and OFF conditions (normalized by the variance in laser OFF) is shown as a function of lick number for inactivated (green, N = 5 mice) and control (black, N = 3 mice) groups. Thick lines represent group means. c, Decoding accuracy after removing the effect of lick rate (top & middle): Comparison of laser ON vs. laser OFF conditions. Dots below the unity line indicate that representations of decision variables derived from facial movement PCs were decoded less accurately during laser ON than laser OFF. Difference in decoding accuracy (bottom): Laser ON minus laser OFF (mean across sessions for each mouse in the inactivated and control groups). Individual mice are indicated by color. d, Same as in (c) but for decoding latency. Dots above the unity line indicate that representations of decision variables derived from facial movement PCs were decoded later during laser ON than laser OFF. Partial silencing of M2 reduced the accuracy and increased the latency with which facial movement PCs predicted decision variables, even after controlling for lick rate. This suggests that the latent variable represented in facial movements is not simply a consequence of the relationship between M2 activity and lick rate.
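As a minimal sketch of the ON versus OFF comparison in panels a-b (array shapes and data are hypothetical stand-ins for per-trial facial motion maps):

```matlab
% Laser ON minus OFF difference mask and its variance, normalized by the
% OFF-condition variance (stand-in data).
motionON  = rand(120, 160, 200);               % facial motion maps, laser ON trials
motionOFF = rand(120, 160, 220);               % facial motion maps, laser OFF trials

offMean  = mean(motionOFF, 3);
diffMask = mean(motionON, 3) - offMean;        % panel a: ON minus OFF difference mask
normVar  = var(diffMask(:)) / var(offMean(:)); % panel b: variance of the difference,
                                               % normalized by the laser OFF variance
```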

Supplementary information

Reporting Summary

Supplementary Video 1

A detailed example of the dynamic relationship between facial movements and an evolving decision variable (that is, consecutive failures). It simultaneously displays raw video of a representative mouse (top left), calculated facial motion energy (top middle), and example eigenfaces weighted by their related movement PCs, changing frame by frame (top right). Focusing on two example behavioral bouts, the video synchronizes these facial dynamics with the complete sequence of action outcomes (rewards/failures; green and black dots, respectively), the actual evolving decision variable (pink trace), and its prediction derived from facial movement PCs (gray trace). This video offers a visual illustration of the temporal relation between these elements.

Supplementary Video 2

Comparison of video processing stages for three example mice. Top row: raw videos display variations in the original field of view. Middle row: corresponding facial ROIs considered for analysis, aligned to a standard orientation. Bottom row: videos after co-registration. Note that co-registration primarily rescales facial dimensions to correct for differing initial viewing angles (for example, Mouse 1).

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article

Cite this article

Cazettes, F., Reato, D., Augusto, E. et al. Facial expressions in mice reveal latent cognitive variables and their neural correlates. Nat Neurosci (2025). https://doi.org/10.1038/s41593-025-02071-5

