Computational advances have enabled the deployment of increasingly complex models, which are now applied across a broad range of fields. This editorial showcase provides a snapshot of the tools and challenges that currently hold the promise to change lives in many ways. Herein, we also highlight research on the underlying pursuit of developing the concept of Artificial Intelligence.
Brain-inspired neuromorphic algorithms and systems have delivered substantial advances in the efficiency and capabilities of AI applications. In this Perspective, the authors introduce NeuroBench, a benchmark framework for neuromorphic approaches, collaboratively designed by researchers across industry and academia.
The control of turbulent flows is crucial for improving efficiency in engineering systems. Here, authors show in a numerical simulation that using deep reinforcement learning to control surface actuators can successfully mitigate turbulent flow separation, paving the way for new advancements in turbulence control.
Artificial neural networks, central to deep learning, are powerful but energy-consuming and prone to overfitting. The authors propose a network design inspired by biological dendrites, which offers better robustness and efficiency, using fewer trainable parameters, thus enhancing precision and resilience in artificial neural networks.
Reservoir computing uses recurrent networks that simultaneously buffer inputs and form nonlinear features. Here, the authors propose a configurable scheme with better scaling in which the memory buffer and the nonlinear features reside in separate circuits; it can be efficiently implemented in neuromorphic hardware.
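To make the separation concrete, here is a minimal sketch (ours, not the authors' implementation): recent inputs are held in a plain delay buffer, a separate random static nonlinearity expands them into features, and only a linear ridge readout is fitted. All names and parameter values are illustrative.

```python
# Minimal sketch of a reservoir-style model with a separate memory buffer and
# nonlinear feature stage, trained with a linear ridge readout.
import numpy as np

rng = np.random.default_rng(0)

def delay_buffer(u, k):
    """Rows are [u[t-1], ..., u[t-k]] for t = k .. len(u)-1."""
    return np.stack([u[k - i - 1 : len(u) - i - 1] for i in range(k)], axis=1)

def nonlinear_features(X, n_features=200):
    """A separate, static nonlinear expansion of the buffered inputs."""
    W = rng.normal(scale=1.0 / np.sqrt(X.shape[1]), size=(X.shape[1], n_features))
    return np.tanh(X @ W)

# Toy task: one-step-ahead prediction of a noisy sine from its recent history.
t = np.linspace(0, 60, 3000)
u = np.sin(t) + 0.05 * rng.normal(size=t.size)

k = 20
X = nonlinear_features(delay_buffer(u, k))   # nonlinear features of the buffer
y = u[k:]                                    # next value after each buffered window

# Linear ridge readout (closed form).
lam = 1e-3
W_out = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
print("train MSE:", np.mean((X @ W_out - y) ** 2))
```

Keeping the buffer linear and the nonlinearity static means neither component needs training, which is what makes such a split attractive for hardware.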
Large Language Models demonstrate expert-level accuracy in medical exams, supporting their potential inclusion in healthcare settings. Here, authors reveal that their metacognitive abilities are underexplored, showing significant gaps in recognizing knowledge limitations, difficulties in modulating their confidence, and challenges in identifying when a problem cannot be answered due to insufficient information.
Deep neural networks predict well despite having many more parameters than data points. The authors show that deep neural networks prefer simpler solutions due to an inbuilt “Occam’s razor”. Favouring simplicity avoids overfitting and captures patterns in data, explaining their success.
Fully connected neural networks in the infinite-width limit often outperform finite-width models, while convolutional networks excel at finite widths. Here, the authors uncover how convolutional networks leverage local, data-dependent kernel renormalization, enabling a form of feature learning that is absent in fully connected architectures.
Advanced machine learning techniques have demonstrated the identifiability of human traces online; however, assessments of the associated risks are usually performed on small-scale datasets. The authors propose a physics-based approach to evaluate the effectiveness of identification techniques from reported measurements.
The authors propose a model for the process of one-shot learning in the brain. They show that it reproduces the repulsion effect of human memory and provides a blueprint for content-addressable in-memory computing with binary weights.
Patient recruitment is challenging for clinical trials. Here, the authors introduce TrialGPT, an end-to-end framework for zero-shot patient-to-trial matching with large language models.
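The general pattern of zero-shot patient-to-trial matching with a large language model can be sketched as follows; `call_llm` is a hypothetical placeholder for whichever chat-completion client is available, and the prompt is illustrative rather than TrialGPT's actual prompting scheme.

```python
# Sketch of zero-shot patient-to-trial matching with an LLM.
import json

def call_llm(prompt: str) -> str:
    """Placeholder: send `prompt` to an LLM and return its text response."""
    raise NotImplementedError("wire this to your LLM provider of choice")

def match_patient_to_trial(patient_note: str, trial_criteria: str) -> dict:
    prompt = (
        "You are screening patients for clinical trials.\n"
        f"Patient note:\n{patient_note}\n\n"
        f"Trial eligibility criteria:\n{trial_criteria}\n\n"
        "For each criterion, state whether the patient is eligible, ineligible, "
        "or if information is missing, then give an overall label. "
        'Answer as JSON: {"criterion_level": [...], "overall": "..."}'
    )
    return json.loads(call_llm(prompt))
```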
RNA splicing is crucial for gene expression diversity and can be altered by mutations. Here, the authors present SpTransformer, a deep learning tool that predicts tissue-specific RNA splicing with high accuracy, revealing splicing’s role in pathogenic mutations and aiding clinical interpretations.
The emergence of large language models has the potential to transform healthcare. Here, the authors show that, when providing clinical recommendations, these models perform poorly compared to physicians and are overly cautious in their decisions.
Internal representations are crucial for solving tasks for natural and artificial agents. Here, using reinforcement learning and artificial neural networks, the authors present a framework to analyze the formation of individual and shared abstractions and their impact on task performance.
Predictive machine learning models, while powerful, are often seen as black boxes. Here, the authors introduce a thermodynamics-inspired approach, based on the proposed concept of interpretation entropy, for generating rationales behind model explanations across diverse domains.
Scientific discovery is a highly relevant task in the natural sciences; however, generating scientifically meaningful laws and determining their consistency remains challenging. The authors introduce an approach that exploits both experimental data and underlying theory in symbolic form to generate formulas of scientific significance by solving polynomial optimization problems.
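As a simple point of reference for what an entropy over an explanation can look like (our illustration, not the paper's definition of interpretation entropy), one can treat the normalized magnitudes of feature attributions as a distribution and compute its Shannon entropy: low values indicate explanations concentrated on a few features.

```python
# Illustrative only: Shannon entropy of normalized feature-attribution magnitudes.
import numpy as np

def interpretation_entropy(attributions):
    p = np.abs(np.asarray(attributions, dtype=float))
    p = p / p.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

print(interpretation_entropy([0.9, 0.05, 0.05]))          # concentrated -> low entropy
print(interpretation_entropy([0.25, 0.25, 0.25, 0.25]))   # diffuse -> high entropy
```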
To address challenges of training spiking neural networks (SNNs) at scale, the authors propose a scalable, approximation-free training method for deep SNNs using time-to-first-spike coding. They demonstrate enhanced performance and energy efficiency for neuromorphic hardware.
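Time-to-first-spike coding itself is easy to picture: each input value is represented by a single spike whose latency decreases with intensity. The sketch below shows only this encoding step under assumed conventions (intensities in [0, 1], a fixed time window), not the paper's training method.

```python
# Minimal sketch of time-to-first-spike (TTFS) coding.
import numpy as np

def ttfs_encode(x, t_max=100.0, eps=1e-6):
    """Map intensities in [0, 1] to spike times in [0, t_max]; larger values spike earlier."""
    x = np.clip(np.asarray(x, dtype=float), 0.0, 1.0)
    return t_max * (1.0 - x) + eps   # x = 1 -> ~0 (earliest), x = 0 -> t_max (latest)

pixels = np.array([0.0, 0.2, 0.8, 1.0])
print(ttfs_encode(pixels))           # approximately [100, 80, 20, 0]
```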
Recent hardware implementations of analog in-memory computing have focused mainly on accelerating inference. In this work, to improve the training process, the authors propose algorithms for supervised training of deep neural networks on analog in-memory AI accelerator hardware.
The absence of an objective way of assessing freezing of gait in Parkinson’s Disease hinders research and care. A machine-learning contest using wearable sensor data delivered detection algorithms with high precision and identified time-of-day effects.
Real-time prediction of dynamics for complex physical systems governed by partial differential equations is challenging and computationally expensive. The authors propose a framework for learning neural operators in latent spaces that allows real-time predictions of high-dimensional nonlinear systems.
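The encode, evolve-in-latent-space, decode pattern can be illustrated with a deliberately simple stand-in: a POD/PCA projection as the encoder and a linear ridge-regression propagator as the latent dynamics model, in place of the learned neural operators used in the paper.

```python
# Sketch of encode -> evolve in latent space -> decode, with linear stand-ins.
import numpy as np

# Toy high-dimensional data: a travelling wave sampled on 256 grid points.
x = np.linspace(0, 2 * np.pi, 256)
snapshots = np.stack([np.sin(x - 0.05 * k) for k in range(400)])   # (time, space)

# Encode: project onto the leading POD modes.
mean = snapshots.mean(axis=0)
U, S, Vt = np.linalg.svd(snapshots - mean, full_matrices=False)
r = 4
encode = lambda s: (s - mean) @ Vt[:r].T
decode = lambda z: z @ Vt[:r] + mean

Z = encode(snapshots)                      # latent trajectories, shape (400, r)

# Fit a one-step latent propagator z_{t+1} ~ z_t @ A (ridge regression).
lam = 1e-6
A = np.linalg.solve(Z[:-1].T @ Z[:-1] + lam * np.eye(r), Z[:-1].T @ Z[1:])

# Roll out in latent space, then decode back to the full field.
z = Z[0]
for _ in range(399):
    z = z @ A
print("rollout error:",
      np.linalg.norm(decode(z) - snapshots[-1]) / np.linalg.norm(snapshots[-1]))
```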
The curse of rarity—the rarity of safety-critical events in high-dimensional variable spaces—presents significant challenges in ensuring the safety of autonomous vehicles using deep learning. Looking at it from distinct perspectives, the authors identify three potential approaches for addressing the issue.
Artificial associative memories in neural network models have shown the ability to store and retrieve static patterns of complex systems; however, handling dynamic patterns remains challenging. The authors develop a reservoir-computing-based memory approach for complex multistable dynamical systems.
Detecting tipping points and predicting extreme events from data remains a challenging problem in complex systems related to climate, ecology and finance. The authors propose a data-driven approach to estimate probabilities of rare events in complex systems, and detect tipping points/catastrophic shifts.
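For orientation, the sketch below computes two classic early-warning indicators of an approaching tipping point, rising variance and lag-1 autocorrelation in a rolling window; this is generic textbook material, not the probability-estimation approach proposed by the authors.

```python
# Classic early-warning signals for tipping points on a toy time series.
import numpy as np

def early_warning_signals(x, window=200):
    var, ac1 = [], []
    for i in range(window, len(x)):
        w = x[i - window:i]
        w = w - w.mean()
        var.append(w.var())
        ac1.append(np.corrcoef(w[:-1], w[1:])[0, 1])
    return np.array(var), np.array(ac1)

# Toy series: noise whose relaxation slows down over time (critical slowing down).
rng = np.random.default_rng(1)
x = np.zeros(3000)
for t in range(1, len(x)):
    k = 1.0 - 0.9 * t / len(x)            # restoring force weakens toward the end
    x[t] = x[t - 1] - k * x[t - 1] * 0.1 + 0.1 * rng.normal()

var, ac1 = early_warning_signals(x)
print("variance trend:", var[:50].mean(), "->", var[-50:].mean())
print("autocorrelation trend:", ac1[:50].mean(), "->", ac1[-50:].mean())
```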
Ising machines have usually been applied to predefined combinatorial problems owing to their distinct physical properties. The authors introduce an approach that uses equilibrium propagation to train Ising machines and achieves high accuracy on classification tasks.
Brains and neuromorphic systems learn with local learning rules in online-continual learning scenarios. Designing neural networks that learn effectively under these conditions is challenging. The authors introduce a neural network that implements an effective, principled approach to local, online-continual learning on associative memory tasks.
Creating accurate digital twins and controlling nonlinear systems displaying chaotic dynamics is challenging due to the high sensitivity of such systems to initial conditions and perturbations. The authors introduce a nonlinear controller for chaotic systems, based on next-generation reservoir computing, with improved accuracy and energy cost that is suitable for implementation on field-programmable gate arrays.
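Next-generation reservoir computing replaces a random recurrent network with an explicit feature vector built from time-delayed inputs and their polynomial products, read out linearly. The sketch below illustrates that feature construction on a toy forecasting task; it does not reproduce the controller or the FPGA implementation.

```python
# Sketch of a next-generation reservoir computing (NGRC) feature map:
# constant term + delayed inputs + quadratic monomials, with a ridge readout.
import numpy as np
from itertools import combinations_with_replacement

def ngrc_features(u, k=2):
    """Build [1, delayed inputs, quadratic monomials] for a scalar series u."""
    lin = np.stack([u[k - i - 1 : len(u) - i - 1] for i in range(k)], axis=1)
    quad = np.stack([lin[:, i] * lin[:, j]
                     for i, j in combinations_with_replacement(range(k), 2)], axis=1)
    return np.hstack([np.ones((lin.shape[0], 1)), lin, quad])

# Toy chaotic series: the logistic map.
u = np.empty(2000)
u[0] = 0.4
for t in range(1, len(u)):
    u[t] = 3.9 * u[t - 1] * (1.0 - u[t - 1])

k = 2
X, y = ngrc_features(u, k), u[k:]
W = np.linalg.solve(X.T @ X + 1e-8 * np.eye(X.shape[1]), X.T @ y)
print("one-step MSE:", np.mean((X @ W - y) ** 2))
```

Because the logistic map is itself quadratic in the previous value, the quadratic feature set captures it almost exactly, which is why this toy fit is nearly perfect.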
In modern football games, data-driven analysis serves as a key driver in determining tactics. Wang, Veličković, Hennes et al. develop a geometric deep learning algorithm, named TacticAI, to solve high-dimensional learning tasks over corner kicks and suggest tactics favoured over existing ones 90% of the time.
Technical limitations of simultaneous multi-omics profiling lead to highly noisy multi-modal data and substantial costs. Here, the authors propose a versatile framework with data augmentation schemes, capable of single-cell cross-modality translation and a range of further applications.
Single-cell chromatin accessibility sequencing (scCAS) data suffers from high sparsity and dimensionality. Here, authors propose an accurate and interpretable computational framework for enhancing scCAS data that considers cell-to-cell similarity.
For reservoir computing, improving prediction accuracy while maintaining low computational complexity remains a challenge. Inspired by Granger causality, Li et al. design a data-driven and model-free framework by integrating the inference process and the inferred results on high-order structures.
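The Granger-causality idea that motivates the framework is that a signal x helps explain a signal y if x's past improves the prediction of y beyond y's own past. A minimal pairwise version is sketched below; the paper extends the idea to high-order structures.

```python
# Minimal pairwise Granger-causality check via ridge regression.
import numpy as np

def granger_improvement(x, y, lag=2, lam=1e-6):
    def ridge_mse(X, t):
        W = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ t)
        return np.mean((X @ W - t) ** 2)
    past = lambda s: np.stack([s[lag - i - 1 : len(s) - i - 1] for i in range(lag)], axis=1)
    target = y[lag:]
    mse_self = ridge_mse(past(y), target)                          # y's past only
    mse_joint = ridge_mse(np.hstack([past(y), past(x)]), target)   # plus x's past
    return mse_self - mse_joint        # positive -> x helps predict y

rng = np.random.default_rng(0)
x = rng.normal(size=2000)
y = np.zeros_like(x)
for t in range(1, len(x)):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.normal()

print("x -> y improvement:", granger_improvement(x, y))
print("y -> x improvement:", granger_improvement(y, x))
```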
Perception and appreciation of food flavour depends on many factors, posing a challenge for effective prediction. Here, the authors combine extensive chemical and sensory analyses of 250 commercial Belgian beers to train machine learning models that enable flavour and consumer appreciation prediction.
The task of planning a sequence of actions, and dynamically adjusting the plan in response to unforeseen circumstances, remains challenging for artificial intelligence frameworks. The authors introduce a learning approach inspired by cognitive functions that demonstrates high flexibility and generalization capability in planning tasks and is suitable for on-chip learning.
Reservoir Computing has shown advantageous performance in signal processing and learning tasks due to compact design and ability for fast training. Here, the authors discuss the parallel progress of mathematical theory, algorithm design and experimental realizations of Reservoir Computers, and identify emerging opportunities as well as existing challenges for their large-scale industrial adoption.
Forecasting future behaviour from observed data remains a challenging task, especially for large nonlinear systems. The authors propose a data-driven approach combining manifold learning and delay embeddings to predict the dynamics of all components in high-dimensional systems.
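A bare-bones version of delay-embedding-based forecasting looks like this: embed the observed scalar series in delay coordinates (Takens embedding) and predict the next value from its nearest neighbours in that space. In this sketch a plain nearest-neighbour step stands in for the manifold-learning component used by the authors.

```python
# Delay-embedding (Takens) forecasting with a nearest-neighbour predictor.
import numpy as np

def delay_embed(u, dim=3, tau=5):
    n = len(u) - (dim - 1) * tau
    return np.stack([u[i * tau : i * tau + n] for i in range(dim)], axis=1)

def nn_forecast(u, dim=3, tau=5, n_neighbors=5):
    E = delay_embed(u, dim, tau)
    query, library = E[-1], E[:-1]          # predict the step after the last point
    targets = u[(dim - 1) * tau + 1 :]      # value following each library vector
    d = np.linalg.norm(library - query, axis=1)
    idx = np.argsort(d)[:n_neighbors]
    return targets[idx].mean()

t = np.linspace(0, 40, 2000)
u = np.sin(t) + 0.5 * np.sin(2.3 * t)
print("forecast:", nn_forecast(u[:-1]), "truth:", u[-1])
```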
Predicting the evolution of dynamical systems remains challenging, requiring high computational effort or effective reduction of the system into a low-dimensional space. Here, the authors present a data-driven approach for predicting the evolution of systems exhibiting spatiotemporal dynamics in response to external input signals.
In this work, the authors propose a synergistic approach that combines a state-of-the-art deterministic forecasting model with artificial intelligence to predict lightning occurrences. The strategy shows efficient predictive capabilities at medium-range forecast horizons.
Segmentation is an important fundamental task in medical image analysis. Here the authors show a deep learning model for efficient and accurate segmentation across a wide range of medical image modalities and anatomies.
Brain connectivity patterns shape the computational capacity of biological neural networks; however, mapping empirically measured connectivity onto artificial networks remains challenging. The authors present a toolbox for implementing biological neural networks as artificial reservoir networks. The toolbox accommodates a variety of empirically measured connectomes and is equipped with various dynamical systems and cognitive tasks.
Utilising geometric information and reducing computational costs are key challenges in the molecular modelling field. Here, authors propose ViSNet, which efficiently extracts geometric features, accurately predicts molecular properties, and drives simulations with interpretability.
Brain-inspired spiking neural networks have shown their capability for effective learning; however, current models may not capture the realistic heterogeneities present in the brain. The authors propose a neuron model with temporal dendritic heterogeneity for improved neuromorphic computing applications.
The modelling of human-like behaviours is one of the challenges in the field of Artificial Intelligence. Inspired by experimental studies of cultural evolution, the authors propose a reinforcement learning approach to generate agents capable of real-time third-person imitation.
Prediction and interpretation tasks may be challenging in high-stakes applications, such as medical decision-making, or in systems with compute-limited hardware. The authors introduce an augmented framework that leverages the knowledge learned by Large Language Models to build interpretable models that are both accurate and efficient.
Accurate property prediction relies on effective molecular representation. Here, the authors introduce KPGT, a knowledge-guided self-supervised framework that improves molecular representation, leading to superior predictions of molecular properties and advancing AI-driven drug discovery.
To ensure the privacy of processed data, federated learning approaches employ local differential privacy techniques, which nonetheless require communicating large amounts of data that must be protected. The authors propose a framework that uses selected small data to transfer knowledge in federated learning with privacy guarantees.
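For context, the baseline setting being improved upon can be sketched as federated averaging with local differential privacy: each client clips its update and adds noise before anything leaves the device. This is a generic illustration of that setting, not the authors' small-data knowledge-transfer framework.

```python
# Federated averaging where each client applies local noise before sharing.
import numpy as np

rng = np.random.default_rng(0)

def private_client_update(w_global, local_grad, lr=0.1, clip=1.0, noise_std=0.5):
    """Clip the local gradient and add Gaussian noise before it leaves the client."""
    norm = np.linalg.norm(local_grad)
    clipped = local_grad * min(1.0, clip / (norm + 1e-12))
    return w_global - lr * (clipped + rng.normal(scale=noise_std, size=clipped.shape))

def federated_round(w_global, client_grads):
    updates = [private_client_update(w_global, g) for g in client_grads]
    return np.mean(updates, axis=0)            # the server only ever sees noisy updates

w = np.zeros(10)
client_grads = [rng.normal(size=10) for _ in range(20)]
w = federated_round(w, client_grads)
print(w)
```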
Visual oddity tasks probe the visual analytic intelligence of humans and remain challenging for artificial neural networks. The authors propose a model with biologically inspired neural dynamics and synthetic saccadic eye movements that improves efficiency and accuracy in solving visual oddity tasks.
Accurate flight trajectory prediction can be a challenging task in air traffic control, especially for maneuver operations. Here, the authors develop a time-frequency-analysis-based encoder-decoder neural architecture that estimates wavelet components to model global flight trends and local motion details.
The increasing penetration of intermittent renewable energy sources generally reduces overall inertia, making power systems more susceptible to disturbances. Here, the authors develop an AI-based method to estimate inertia in real time and test its performance on a heterogeneous power network.
Using two different mass spectrometric platforms, authors demonstrate how metabolomic data fusion and multivariate analysis can be used to accurately identify the geographic origin and production method of salmon.
A better understanding of the trade-off between speed and accuracy in decision-making is relevant for mapping biological intelligence to machines. The authors introduce a brain-inspired learning algorithm to uncover dependencies in individual fMRI networks using features of neural activity and to predict inter-individual differences in decision-making.
Automatic extraction of consistent governing laws from data is a challenging problem. The authors propose a method that takes as input experimental data and background theory and combines symbolic regression with logical reasoning to obtain scientifically meaningful symbolic formulas.
The biological plausibility of backpropagation and its relationship with synaptic plasticity remain open questions. The authors propose a meta-learning approach to discover interpretable plasticity rules to train neural networks under biological constraints. The meta-learned rules boost the learning efficiency via bio-inspired synaptic plasticity.
Artificial Intelligence has achieved success in a variety of single-player and competitive two-player games with no communication between players. Here, the authors propose an approach in which Artificial Intelligence agents can negotiate and form agreements while playing the board game Diplomacy.
Large-scale nanochannel integration and the restrictive influence of multi-parameter coupling on electricity generation are major challenges for effective energy harvesting from spontaneous water flow within artificial nanochannels. Here, the authors apply transfer learning to overcome these challenges and design optimized water-enabled generators.
Recent studies have raised concerns over the state of AI benchmarking, reporting issues such as benchmark overfitting, benchmark saturation, and increasing centralization of benchmark dataset creation. To facilitate monitoring of the health of the AI benchmarking ecosystem, the authors introduce methodologies for creating condensed maps of global benchmark dynamics.
Artificial intelligence systems can support diagnostic workflows in oncology, but they are vulnerable to adversarial attacks. Here, the authors show that convolutional neural networks are highly susceptible to white- and black-box adversarial attacks in clinically relevant classification tasks.