More and more we are entering the realm of cyberpunk.
In a study published today in “Current Biology,” UC-Berkeley researchers led by Jack Gallant, a professor of psychology, announced that they were able to reconstruct YouTube videos from test subjects’ brain activity. The title of the published study is “Reconstructing Visual Experiences From Brain Activity Evoked by Natural Movies.”
The study was carried out at the Gallant Lab at UC-Berkeley, which “focuses on computational modeling of the visual system” in an attempt to understand how the “brain encodes visual information.”
Gallant’s coauthors served as the test subjects: they were placed in a functional magnetic resonance imaging (fMRI) machine and shown YouTube videos for several hours at a time while the activity in their visual cortex was recorded. The clips were then reconstructed from that recorded brain activity, producing gauzy, dream-like imagery that wouldn’t be out of place in a video art installation.
The paper states, “All these reconstructions were obtained using only each subject’s brain activity and a library of 18 million seconds of random YouTube video that did not include the movies used as stimuli.”
Gallant and the coauthors described the encoding model in the following way:
“The human visual cortex consists of billions of neurons. Each neuron can be viewed as a filter that takes a visual stimulus as input, and produces a spiking response as output. In early visual cortex these neural filters are selective for simple features such as spatial position, motion direction and speed.”
Since hemodynamic changes (i.e., changes in blood flow, blood volume and blood oxygenation) take place over seconds, while changes in movie imagery occur nearly instantaneously, Gallant and his fellow researchers had to use a two-stage process to capture the imagery.
“The first stage consists of a large collection of motion-energy filters that span a range of positions, motion directions and speeds, as do the underlying neurons. This stage models the fast responses in the early visual system. The output from the first stage of the model is fed into a second stage that describes how neural activity affects hemodynamic activity in turn,” write Gallant and the coauthors.
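The two-stage idea can be sketched in miniature. The following is a toy illustration, not the lab’s actual code: stage one computes a phase-invariant “motion energy” from a one-dimensional stimulus using a quadrature pair of Gabor filters, and stage two smears that fast signal with a canonical two-gamma hemodynamic response function (HRF) and samples it once per fMRI repetition time (TR). Every filter parameter and the toy stimulus below are illustrative assumptions.

```python
# Toy sketch of a two-stage encoding model (illustrative parameters only).
import numpy as np
from math import factorial

def motion_energy(signal, freq=2.0, fps=15.0, width=1.0):
    """Stage 1: energy response from a quadrature (cosine/sine) Gabor pair."""
    t = np.arange(-width, width, 1.0 / fps)
    envelope = np.exp(-t**2 / (2 * (width / 3) ** 2))
    even = envelope * np.cos(2 * np.pi * freq * t)   # cosine-phase filter
    odd = envelope * np.sin(2 * np.pi * freq * t)    # sine-phase filter
    # Squaring and summing the pair gives a phase-invariant "energy" signal.
    return (np.convolve(signal, even, mode="same") ** 2 +
            np.convolve(signal, odd, mode="same") ** 2)

def hemodynamic_response(fast_signal, fps=15.0, tr=1.0):
    """Stage 2: convolve the fast neural signal with a slow two-gamma HRF."""
    t = np.arange(0, 20, 1.0 / fps)                  # HRF spans ~20 seconds
    hrf = (t**5 * np.exp(-t) / factorial(5)          # main response
           - t**15 * np.exp(-t) / (6 * factorial(15)))  # late undershoot
    hrf /= hrf.sum()
    slow = np.convolve(fast_signal, hrf)[: len(fast_signal)]
    step = int(tr * fps)                             # one sample per TR
    return slow[::step]

# A toy 1-D "stimulus": 30 s at 15 frames/s, with a 2 Hz burst from 5-10 s.
fps, seconds = 15.0, 30
stim = np.zeros(int(fps * seconds))
t_burst = np.arange(0, 5, 1.0 / fps)
stim[int(5 * fps):int(10 * fps)] = np.sin(2 * np.pi * 2.0 * t_burst)

energy = motion_energy(stim, freq=2.0, fps=fps)       # fast neural response
bold = hemodynamic_response(energy, fps=fps, tr=1.0)  # slow fMRI prediction
print(len(bold))  # one predicted BOLD sample per 1-second TR
```

The point of the sketch is the separation of timescales: the energy signal tracks the stimulus frame by frame, while the predicted fMRI signal is a blurred, delayed version of it, which is why the researchers needed both stages to link movies to measured brain activity.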
Gallant and his researchers believe the study represents another “important step in the development of brain-reading technologies that could someday be useful to society.”
And while it is unknown whether dreaming and imagination are functionally different from perception, Gallant isn’t ruling out the possibility of being able to decode dreams and imaginings in the future with some sort of brain-machine interface.
This, folks, is the stuff of the most imaginative science fiction. It reminds one of the flawed genius of Douglas Trumbull’s science fiction film “Brainstorm.”
No doubt advertisers and businesspeople are taking notice and thinking of ways to profit from this technology in the future.