What’s up at Neuralink? Top “Show and Tell, Fall 2022” updates

Anna Stróż
9 min read · Dec 2, 2022


Figure 1. That’s what I would immediately type while testing the BCI. Source and credits: Neuralink channel / YouTube; a screenshot from [1] (0:32:19).

It is becoming my personal tradition to comment on Neuralink's yearly progress updates (if you'd like to see some previous posts, take a look here or here). And here we are, almost at the end of 2022, with another update from Elon Musk's neurotech start-up. Yesterday, during the "Neuralink Show and Tell, Fall 2022" event, Elon Musk and Neuralink's representatives presented the outcomes of their R&D work.

I can put it simply: acceleration is the single word that describes what's going on there, and if you need a TL;DR you can skip directly to the Key points. Many improvements, redesigns, and further iterations of ideas. As I try to follow the updates, I see more maturity in this project now, and far fewer buzzwords or crazy, naive ideas like "aww yess, we will stream some music to the brain".

In order to go through the updates properly, we will keep the chronological order: Elon Musk opened the event, and then a deep-dive session was led by Neuralink's representatives from the different fields of this multidisciplinary project.

AGI (Artificial General Intelligence): a term used to describe a higher-level AI with broad cognitive capabilities that let it handle a wide variety of tasks, and/or maybe even be "sentient". There is no easy definition here; take a look at [2].

What is it like to be an AGI?

The reference to Thomas Nagel's famous paper, "What Is It Like to Be a Bat?", was an intended pun, so no worries, I won't try to start any philosophical *bat*tle here. The reason I mention Artificial General Intelligence (AGI) is that the term appeared in Musk's opening speech. He offered, but did not discuss, a more speculative and futuristic long-term goal of the neurotech industry, emphasizing a possible future connection of our brains with Artificial General Intelligence that would take us far beyond our current cognitive capabilities. As I enjoy futurism and transhumanism a lot, I am happy that this time such crazy input came only at the beginning, and most of the event focused on the essential technical updates and challenges.

FDA (U.S. Food and Drug Administration): the American agency which regulates (among other things) food, drugs, and certain specified devices before they reach the market.

“Prototypes are easy, production is hard”

In his speech, Musk said that the team has worked hard toward FDA approval. Within the next six months they hope to make progress on that front and be able to run tests with human participants (see 0:26:25 in [1]).

Elon presented some videos of monkeys with implanted Neuralink devices. One of the monkeys, Pager, was also shown in 2021 while playing a game via the interface (see [3]). This time there was also Sake, a monkey "typing" on a keyboard with his brain activity. No worries, monkeys don't (yet) know the written alphabet or the semantics of language; this monkey was only gazing at highlighted keys on the keyboard, and sentences were spelled out step by step. It is still an interesting step forward, as the cursor is not steered by any eye-tracking device but by the monkey's neuronal activity recorded with the implant. After the activity is recorded and decoded with an algorithm, the relevant piece of information is "translated" and sent to the actuator, the cursor. And if successful, a banana smoothie goes all the way to the monkey (see Figure 1).

Let's jump into some technical updates. I will keep it brief for one particular reason: the presentations were very good, but it would take a lecture class or two from each presenter to truly dive deep into the matter. If Neuralink releases their next white paper, I'd be glad to read it through and discuss some specific details* with you.

* Limiting myself to the details I'd understand. ;)

Updates, yay!

Let's go then through some highlights presented by the group of the company's representatives. As the overall presentation lasted more than 2 hours, I kept it in chronological order: if you find any specific topic interesting, you can see the approximate starting time of each presenter next to their name and watch that part of the video yourself on the Neuralink YouTube channel ([1]).

(0:37:00) DJ. During his presentation he mentioned three major factors that need to be taken into account when developing such a product: safety, scalability, and access to brain regions. During DJ's talk, they started a live demo of Neuralink's surgical robot to present the process of inserting electrode threads into an artificial model of a brain, later called a "brain proxy". In the preview from the robot's camera (see Figure 2) one can see the cortical surface (the robot's target) and the brain vasculature (which the robot needs to avoid). There are also insertion targets, into which the threads can be sewn.

Figure 2. A preview from the robot's camera during the live demo. Please note that this is an artificial brain model, not an actual brain. Source and credits: Neuralink channel / YouTube; a screenshot from [1] (0:42:08).

Some highlights from DJ: they are thinking about scaling from prototyping to production, and they have another facility in Austin, TX. In the future it may even be possible to have a Neuralink clinic.

(0:46:00) Nir quickly explained the steps that enable the monkey Pager to play MindPong (a minimal code sketch follows the list):

  1. Recording the monkey's neural activity through the implant.
  2. Training an artificial neural network that predicts cursor velocity from the patterns of the monkey's neural activity.
  3. After decoding, the cursor is moved according to the predicted velocity. If the resulting position hits the target, the monkey may receive a reward.
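
Neuralink didn't publish their decoder, but the pipeline above maps naturally onto a few lines of code. Here is a minimal sketch, assuming binned spike counts and a simple linear readout in place of their neural network; all data, shapes, and parameters below are synthetic and illustrative, not Neuralink's.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data: spike counts per 25 ms bin (n_bins x n_channels)
# paired with the 2D cursor velocities the monkey produced (n_bins x 2).
X_train = rng.poisson(2.0, size=(5000, 1024)).astype(float)
y_train = rng.normal(0.0, 1.0, size=(5000, 2))

# Step 2: fit a linear decoder (ridge regression) from activity to velocity.
lam = 1.0
W = np.linalg.solve(X_train.T @ X_train + lam * np.eye(1024), X_train.T @ y_train)

# Step 3: at runtime, integrate the predicted velocity into a cursor position.
pos = np.zeros(2)
dt = 0.025  # bin width in seconds
for _ in range(40):
    spikes = rng.poisson(2.0, size=1024).astype(float)  # one new bin of activity
    vel = spikes @ W          # decode velocity
    pos += vel * dt           # move the cursor; reward if pos hits the target
```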

After some improvements they achieved a higher bit rate (7.1 bits/s vs. 3.0 bits/s in 2021, see 0:47:27). However, using only a mouse limits one's experience with a computer, which is why they started integrating a keyboard into their experiments. They also mentioned the research of Willett et al. (2021) [4], a BCI based on decoding neural activity while a person attempts to write letters. At Neuralink, they tried a similar approach (you're right, monkeys don't write) by training their models on digit-like patterns that monkeys drew on screens.
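
A note on the bits/s figure: Neuralink didn't spell out which metric they use, so treating it as the standard one is my assumption. The classic way to express BCI performance in bits per second is Wolpaw's information transfer rate, computed from the number of targets, the selection accuracy, and the selection speed:

```python
import math

def wolpaw_itr(n_targets: int, accuracy: float, selections_per_s: float) -> float:
    """Wolpaw information transfer rate in bits/s (a standard BCI metric;
    whether Neuralink uses this exact formula is an assumption)."""
    n, p = n_targets, accuracy
    bits = math.log2(n)
    if 0.0 < p < 1.0:
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
    return bits * selections_per_s

# Illustrative numbers only, not Neuralink's:
print(wolpaw_itr(n_targets=26, accuracy=0.95, selections_per_s=2.0))
```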

(0:51:50) Bliss raised a very important issue: the stability of the signals over time. Handling signal variability is a challenge for any implantable interface (in different brain structures the signals may show periodic variability, but also, even harder to deal with, non-periodic variability).
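
Bliss didn't say how they compensate for this, but one common software-side mitigation (my assumption here, not something presented at the event) is to adaptively re-normalize each channel with exponentially weighted running statistics, so the decoder sees features with a stable scale even as the raw signal drifts:

```python
import numpy as np

class RunningNormalizer:
    """Exponentially weighted per-channel normalization to absorb slow drift."""
    def __init__(self, n_channels: int, alpha: float = 0.001):
        self.alpha = alpha                 # adaptation rate
        self.mean = np.zeros(n_channels)
        self.var = np.ones(n_channels)

    def __call__(self, x: np.ndarray) -> np.ndarray:
        self.mean += self.alpha * (x - self.mean)
        self.var += self.alpha * ((x - self.mean) ** 2 - self.var)
        return (x - self.mean) / np.sqrt(self.var + 1e-8)

norm = RunningNormalizer(n_channels=1024)
rng = np.random.default_rng(2)
for step in range(1000):
    features = rng.normal(step * 0.01, 1.0, 1024)  # synthetic slow drift
    stable = norm(features)                        # roughly zero-mean, unit-variance
```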

(0:56:00) Avinash shared some details about the processing modules of Neuralink's devices, specifically the ASIC chips. He presented news about improvements in several areas: optimization of ASIC usage, a doubled battery life, and a new approach to efficient spike detection in the signals. Currently, 1024 channels collect the data, but they aim to scale up to 16,000 channels in the future.
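
The on-chip detector itself wasn't described in detail, so as a point of reference, here is the classic baseline from the spike-sorting literature: flag threshold crossings at a multiple of a robust noise estimate. This is illustrative, not Neuralink's implementation:

```python
import numpy as np

def detect_spikes(trace: np.ndarray, k: float = 4.5) -> np.ndarray:
    """Return sample indices where the signal crosses below -k * sigma_noise."""
    sigma = np.median(np.abs(trace)) / 0.6745   # robust estimate of noise std
    below = trace < -k * sigma
    # keep only the first sample of each downward crossing
    return np.flatnonzero(below & ~np.roll(below, 1))

rng = np.random.default_rng(1)
trace = rng.normal(0.0, 1.0, 30_000)   # synthetic 1 s of data at 30 kHz
trace[[5_000, 12_000]] -= 10.0         # two injected "spikes"
print(detect_spikes(trace))            # ~[5000, 12000]
```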

(1:01:30) Matt indicated that there are serious challenges when it comes to charging the device, the most relevant of which is safety, thermal safety in particular: the battery must not reach temperatures significantly higher than the surrounding tissue. They presented a video of Neuralink's charger in use, showing how a monkey approaches and uses it. Matt mentioned improved charging performance and said they are working on the next iteration of the solution.
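
The charger's control logic is not public, but the constraint Matt described suggests something like a thermally limited charging loop. A minimal sketch, with every constant assumed for illustration:

```python
TISSUE_TEMP_C = 37.0   # body temperature
MAX_DELTA_C = 2.0      # assumed allowable rise above tissue temperature
I_MAX_MA = 100.0       # assumed maximum charging current

def charge_current_ma(battery_temp_c: float) -> float:
    """Scale the charging current down as the battery nears the thermal limit."""
    headroom = (TISSUE_TEMP_C + MAX_DELTA_C) - battery_temp_c
    if headroom <= 0:
        return 0.0     # too hot: pause charging entirely
    return min(I_MAX_MA, I_MAX_MA * headroom / MAX_DELTA_C)

for t in (37.0, 38.0, 38.5, 39.0, 39.5):
    print(f"{t:.1f} °C -> {charge_current_ma(t):.0f} mA")
```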

Figure 3. A monkey using the wireless charger, which is mounted on a tree branch. The monkey receives some banana smoothie through the straw. On the right, plots of charging performance and temperature. Source and credits: Neuralink channel / YouTube; a screenshot from [1] (01:03:40).

(1:05:30) Julian explained some aspects of the communication between the device's modules and how they approach testing them.

(1:11:30) Josh gave some details about an artificial system in which they monitor different aspects of the implant's life. In this system they can perform "accelerated lifetime" studies, which let them run processes under prespecified conditions and estimate the implant's performance, e.g. by observing changes in humidity or the implant's longevity in a biochemical soup. Their current goal is to modify the testing environment and scale it up, so that a large number of devices can be tested in parallel.
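
Josh didn't present the underlying model, but "accelerated lifetime" testing typically relies on Arrhenius-style arithmetic: soak devices at an elevated temperature and convert chamber time into equivalent body time. A sketch of that conversion, where the activation energy of 0.7 eV is a placeholder, not Neuralink's number:

```python
import math

K_BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K

def acceleration_factor(ea_ev: float, t_use_c: float, t_stress_c: float) -> float:
    """How much faster aging proceeds at t_stress_c than at t_use_c (Arrhenius)."""
    t_use = t_use_c + 273.15
    t_stress = t_stress_c + 273.15
    return math.exp((ea_ev / K_BOLTZMANN_EV) * (1.0 / t_use - 1.0 / t_stress))

# e.g. soaking at 67 °C vs. body temperature (37 °C), assuming Ea = 0.7 eV:
af = acceleration_factor(0.7, 37.0, 67.0)
print(f"1 week in the chamber ~ {af:.1f} weeks in the body")  # roughly 10x
```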

(1:17:00) Christine discussed some of the challenges ahead if they want the surgical robot to perform all neurosurgical tasks. The robot seems significantly improved; its vision appears very advanced, as the robot combines several visual modes and processes the images to better distinguish vasculature from cortex. Christine also mentioned difficulties stemming from inter- and intraindividual variability, and the future challenges of performing the craniotomy (i.e. surgical opening of the skull) and durotomy (i.e. surgical opening of the dura mater, the outermost of the three layers covering the brain).

(1:24:00) Alex explained a possible way to make device upgrades easier. So far, the implants have been designed to be placed on the cortex, but scar tissue can fill the space between the implant and the cortex. That would make it problematic to properly remove the electrode threads during an implant upgrade or a complete removal. One of the proposed solutions is to leave the dura mater on the brain's surface. This is still challenging: because of the dura mater's thickness, the presence of collagen fibers, and the limited visibility of the vasculature, it requires a different way of tackling the issue. They are currently working on proper imaging of the vasculature below the dura mater.

(1:29:00) Sam explained that they have accelerated the prototyping of needles for the robot. Instead of the 2–3 days a test took at the beginning of 2022, they currently need less than an hour.

(1:34:00) Lesley explained that they have developed different tissue models with which they can experiment.

(1:37:00) Dan explained some aspects of vision neuroscience using the example of the calcarine sulcus [5], located in the occipital lobe, and discussed how images are processed in the primary visual cortex. He mentioned an experiment in which one of the monkeys was not only passively recorded from the visual cortex but also stimulated, and the stimulation produced a phosphene: an induced visual percept (in other words, something that is not present in one's visual field but is perceived because of specific stimulation of vision-related regions of the nervous system).

(1:47:00) Joey presented some results of experiments with a pig carrying Neuralink's implants. They showed that they can stimulate the muscles of the pig's leg and make them contract. But how can we connect the intention to move a limb with its actual movement, especially if the pathway connecting the brain with the actuators is damaged? They are thinking about bypassing the path between the motor cortex and the ventral horn of the spinal cord, and also, since sensation is a crucial part of motor interaction, about providing another bypass by collecting signals from the dorsal horn of the spinal cord and sending them to the somatosensory cortex.

In the Q&A, when asked about open-sourcing some of the experimental data Neuralink has gathered so far, Elon answered that it's not a problem and that such a possibility exists. Unfortunately, there was no answer as to when. Other questions concerned alternatives to Bluetooth communication and problems with scar tissue and signal degradation, but the answers were not detailed enough to reproduce here.

Key points

The team has made many improvements:

  • A preview of the brain surface, segmentation of vasculature vs. cortex, and targeting of the areas for thread insertion (see DJ's and Christine's talks).
  • Achieving higher bit rates in some experiments (see Nir’s talk).
  • Optimization of ASIC usage, doubled battery life, and a new approach to spike detection (see Avinash's talk).
  • Improvement of charging performance (see Matt’s talk).
  • New environment for “accelerated lifetime” testing (see Josh’s talk).
  • A change in the surgical approach: the dura mater will not be removed during surgery (see Alex's talk).
  • Accelerated prototyping of robot needles (see Sam’s talk).
  • Some of the experimental data will probably be open-sourced (see Elon's answer in the Q&A).

Thoughts?

This year's presentation seemed much more professional and less hype-oriented than previous editions. Invasive implantation into the cortex is a complex, multidisciplinary effort: it's not only about opening the skull, putting something in, and closing it up. The teams presented their progress and improvements in several areas, and I think it's indeed huge, because this is what complex projects look like: ongoing iterations of better and better solutions, kept in convergence with the other disciplines contributing to the product.
By the way, the robot keeps impressing me. From its first iteration, which was already an amazing engineering effort in my opinion, it has evolved into an even more impressive piece of work.

One of the conceptual problems I have is how they will approach the brain's sulci (like the calcarine sulcus mentioned by Dan), especially if they want to leave the dura mater in place. The brain's folded surface is a serious challenge, and I haven't seen any answer to that so far. Another thing: I'm not a neurosurgeon, but I wonder whether leaving the dura mater will really make it significantly easier to remove the Neuralink device from the head. What about the threads under the dura mater (i.e. between the cortex and the inferior surface of the dura)? Won't they get covered with scar tissue and become difficult to remove?

Anna

References

[1] Neuralink Show and Tell, Fall 2022 video event. Posted by Neuralink channel on YouTube. Access date: 01.12.2022. URL: https://www.youtube.com/watch?v=YreDYmXTYi4.

[2] Artificial general intelligence. Source: Wikipedia. License: CC BY SA 3.0. Access date: 02.12.2022. URL: https://en.wikipedia.org/wiki/Artificial_general_intelligence.

[3] Monkey MindPong video. Posted by Neuralink channel on YouTube. Access date: 02.12.2022. URL: https://www.youtube.com/watch?v=rsCul1sp4hQ.

[4] Willett, F. R., Avansino, D. T., Hochberg, L. R., Henderson, J. M., & Shenoy, K. V. (2021). High-performance brain-to-text communication via handwriting. Nature, 593(7858), 249–254.

[5] Calcarine sulcus. Source: Wikipedia. License: CC BY SA 3.0. Access date: 02.12.2022. URL: https://en.wikipedia.org/wiki/Calcarine_sulcus.

