Brain-Computer Interfaces (BCIs) are traditionally conceived as a way to control apparatus: an interface that allows the user to "act on" external devices as a form of input control. However, most BCIs do not provide a reliable and efficient means of input control, and they are difficult to learn and use compared with other interfaces. We propose to change the conceptual use of "BCI as an actor" (input control) into "BCI as an intelligent sensor" (monitor). This shift of emphasis exploits the capacity of BCI to capture spontaneous changes in the state of the user, which systems can then use to behave intelligently. This conceptual change gives rise to three grand challenges:
1. Inclusion challenge: How can an interface profit from user information gathered by different sensors? How can we include BCI as one of the interaction paradigms in the Multimodal Interaction Framework, moving from "Brains Only" to "Brains Also"?
2. Interpretation challenge: How can we fuse, but also disentangle and interpret, information about the user, the task, and the environment?
3. Representation challenge: How can we best feed back information to the user?
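To make the interpretation challenge concrete, one common approach to combining user information from several sensors is late fusion: each sensor channel produces its own estimate of a user state, and the estimates are merged with reliability weights. The sketch below is only illustrative; the sensor names (`eeg`, `eye_tracking`, `heart_rate`), the target state, and the weighting scheme are assumptions, not part of the proposal above.

```python
# Illustrative late-fusion sketch (not the authors' method): combine
# per-sensor probability estimates of a single user state, such as
# mental workload, into one fused probability.

def fuse_user_state(estimates, weights=None):
    """Weighted average of per-sensor probabilities in [0, 1].

    estimates: dict mapping sensor name -> probability of the target state.
    weights:   dict mapping sensor name -> reliability weight (default 1.0).
    """
    if not estimates:
        raise ValueError("need at least one sensor estimate")
    weights = weights or {}
    total = sum(weights.get(name, 1.0) for name in estimates)
    return sum(weights.get(name, 1.0) * p
               for name, p in estimates.items()) / total

# Example: EEG suggests high workload, the other channels are less
# certain, and EEG is trusted twice as much as each of them.
fused = fuse_user_state(
    {"eeg": 0.9, "eye_tracking": 0.6, "heart_rate": 0.7},
    weights={"eeg": 2.0},
)
print(fused)  # prints 0.775
```

Disentangling the sources (the second half of the challenge) is the harder problem, since a change in an EEG feature may reflect the user, the task, or the environment; a simple weighted average like this cannot separate them.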