Machines are everywhere. The chances are that you are using one right now, in the form of a smartphone or computer. It is challenging to think of modern daily experiences that do not involve machines of some kind. In a world that is increasingly dominated by digital technology, human–machine interactions will have a critical role in shaping future society.

The interfaces through which we engage with digital machines are diverse and continuously evolving. From traditional mouse clicks and touch screens to voice commands and brain–computer interfaces, each method offers unique ways for us to communicate our intentions and desires. Voice-controlled assistants have entered our homes, enabling us to interact with technology using natural language — something that would have seemed like science fiction just a couple of decades ago. Gesture-based control of devices and virtual reality systems promise immersive experiences that blur the boundary between what is real and what is virtual.

Technological developments tend to build on each other and open new doors to possibilities that were previously confined to our imagination. The changes brought by such rapid innovation represent a rupture with the way we used to communicate, learn and live. To fully grasp the implications of this shift, we need to embrace a new perspective: that humans and machines are not just coexisting but are, in many ways, becoming inextricably connected.

Machines have historically been viewed as tools: mechanical devices designed to assist or augment human capability. However, with recent advances in computer science, artificial intelligence (AI) and neurotechnology, the separation between humans and machines has become less clear. Many modern machines no longer simply respond to our commands. Generative AI systems can learn from our behaviour, adapt to our preferences and even anticipate our needs. Such systems, when specifically engineered to complement our limitations, may develop into human ‘thought partners’. It is interesting, then, to consider what might happen to our psychology (how we think and make decisions) once we have the option to delegate tasks to AI systems, such as large language models (LLMs), that are better suited than human brains to processing large amounts of information. This human–AI synergy may constitute a new — synthetic — kind of cognitive system, parallel to the previously proposed distinction between intuitive and analytical thinking.

Generative AI systems are not only useful to the individual, but also affect how people coordinate with one another and exchange information. Thanks to the sheer volume of human data used to train such models and their wide adoption, they can power new ways of collectively tackling complex problems (but not without risks)1. In the near future, we can envision a society built not only on human collaboration, but also on collective decision making that stems from social networks that include human-to-machine and machine-to-machine interactions. Such a radical integration of these innovations into our lives demands a re-evaluation of our own identities and roles. The more we rely on automated systems, the more we must consider the ethical implications and societal consequences of our relationship with machines.

The further integration of machines with human society comes not only with potential benefits but also with risks. For example, on the one hand, there is evidence that the proliferation of virtual environments — especially in work settings such as hybrid conferences and remote jobs — can promote a more-inclusive and equitable society. On the other hand, the mishandling of AI tools poses new threats to human privacy, equity and autonomy. Who owns the data generated from our interactions with these machines? As algorithms increasingly influence our online experiences and decisions, how do we protect people from existing biases and inequalities? A key area in which such considerations could have particularly serious consequences is medicine, given recent advances in human brain–computer interfaces. A feat of engineering and computer science, brain–computer interface systems translate brain activity into commands for the control of devices such as robotic limbs, and thus offer life-changing treatment for patients with impaired movement. Beyond more philosophical questions around human agency and autonomy, the translation of brain–computer interfaces from laboratory settings to clinical applications comes with practical challenges related to equitable access, data privacy, security and potential stigma. Addressing these concerns necessitates proactive, human-centred policies.

The guiding principle should be prioritizing human needs and protecting human values. For example, a promising sector is education, in which generative AI technologies such as LLMs can support human learning — as long as we cultivate AI literacy and advocate for informed engagement. In education, as in all sectors, AI systems must be designed to counter existing inequalities rather than replicate them.

LLMs are not just tools to assist humans. In cognitive psychology research, LLMs can also be used to analyse natural language in written or spoken form with the goal of better understanding human behaviour, emotions and thinking styles. Even though such AI tools are quickly expanding beyond language to incorporate images, sounds and (eventually) movement, the role of humans in such interactions cannot be passive. Instead, users should actively engage with these technological innovations, focusing on uses that enhance our humanity while navigating the ethical and societal challenges that such innovations present.

As the pieces in this Focus show, the ubiquity of human–machine interactions is an opportunity to explore, while being mindful of the risks. Embracing this means recognizing that machines — whether sophisticated AI systems or simple household devices — are now an integral part of our lives. We should work towards a future in which machines complement human abilities, while preserving human values.