The present EEPA theory and the corresponding hardware implementation were developed in the context of my bachelor thesis research. The explanations given here are written for a broad audience. If you are interested in the thesis or the project, or have questions or remarks, please do not hesitate to contact me. (A short paper I wrote will be published via the Dutch student research conference journal in early 2015; I'll upload it here then.)


Have you ever wanted to play with an artificial neuron? Of course you have! Here I present a novel method of neuron simulation in hardware which enables you to interact with the artificial neuron. You can change properties of the physical model by manipulating light and grasp what's going on in there!

Here you will learn about

  • Neuroscience
  • Neurons and the Hodgkin-Huxley model
  • How you can simulate neuronal properties in hardware
  • How to build your own glowing neuron you can grasp and play with!

Step 1: Introduction

Please note: This introduction provides broad background knowledge for people who are new to neuroscience.

Have you ever wondered how you are able to know what you are seeing when you look around? How does your brain manage to convert the visual input from the text you are reading right now into a so-called higher-order (more abstract) representation so you know what those words mean? How is this knowledge represented in the brain? Those are just a few example questions you can ask if you are interested in the human mind.

In order to understand human functioning we must understand the brain. This can be done in several ways, but it basically comes down to two steps: 1. mapping brain function and 2. simulation. One investigates the brain, e.g. measures the brain's response to a visual stimulus, and then develops a computational model which simulates the observed brain responses. If the simulation has properties similar to the recorded brain data, one might conclude that the brain works in a similar way to the developed and tested computational model. In sum, simulation of brain processes is highly valued in brain research.

The brain can be studied and simulated on different levels: the molecular level, the cellular level or the system level. Each of these levels is thought to be important and to contribute to particular properties of brain function. But what is the brain actually made of? The answer is very long, but the most important components to consider are neurons.

Neurons are seen as the basic units of brain computation. The idea of neuronal computation is quite intuitive; each neuron can either be activated, we say it 'fires' or 'spikes', or be in a resting state. Furthermore, neurons are connected to each other by synapses, which transfer the state of activation from one neuron to the other. If one connects some neurons, one has a neuronal network. What happens if we stimulate one neuron of a neuronal network? Right, it fires. What else? The firing is transferred to other neurons through the synaptic connections, 'igniting' the neuronal network. This is the basic idea behind neuronal computation. There is of course much more to say about this, but I confine myself to this explanation. What is important to realize here is that simulations of neuronal networks demonstrate properties like automatic pattern recognition and noise resistance, properties found in human behavior.

Alright then, let us recap. Our behavior is the result of brain function, and brain function in turn can be modeled by firing neurons. But wait, what actually is a firing neuron? 'Firing' means that the membrane potential of the neuron, the charge difference between the inside and outside of the neuron, changes. How? By the movement of ions through the membrane. We zoom in on this process on the next page.


  • Neuron: the basic cell unit of the brain, thought to be the basis of neuronal computation
  • Synapse: connection between neurons
  • Membrane: the 'wall' of the cell
  • Neuronal Network: a functional group of neurons
  • Action potential: also called 'firing' or 'spiking' of a neuron, the activation of a neuron
  • Resting potential: the state where the neuron is ready to fire
  • Ion gates: gates in the membrane which let particular ion species pass the membrane if opened
  • Concentration gradient: force which tends to 'push' ions to an equal distribution at both membrane sides
  • Voltage gradient: force which tends to 'push' ions in the direction of opposite charge (i.e. positive charge is attracted by negative and vice versa)

Step 2: Ion Movement in Neurons

So let's see what's going on in a neuron. How does it spike? Put otherwise: how does the charge difference between the inside and outside of the neuron change? According to the Hodgkin-Huxley model, ion (i.e. charge carrier) movement through the membrane is the result of the state of ion gates (i.e. tunnels for particular ion types which can be closed or opened) as well as two forces which act on the ions.

First we take a look at the two forces involved: the concentration gradient and the voltage gradient. Each is called a gradient because the force tends to 'push' the ions down the gradient, like a ball rolling down a hill. The concentration gradient is the force that tends to distribute the ions to both sides of the membrane equally (Fig. 1). Thus, if we had 30 ions on one side and 70 ions on the other (100 ions in total), opened ion gates and no other forces involved, then the concentration gradient would push the ions in the direction of the solution with fewer ions. We would end up with a 50:50 distribution of ions. The voltage gradient, on the other hand, takes into account the charge of the ions, which can be positive or negative. As you surely remember from school, opposite charges attract whereas equal charges repel. Let's say we have a neuron that is negatively charged with respect to the outside and ions which are positively charged (Fig. 2). Do you know what happens? Right, the ions will tend to move inside the cell, in the direction of the opposite charge.
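The direction in which each force pushes an ion can be sketched in a few lines of code. This is just an illustration of the two rules above (made-up function names, simple sign conventions), not part of the hardware:

```python
def concentration_flow(inside, outside):
    """Direction of ion flow due to the concentration gradient:
    positive = into the cell, negative = out of the cell."""
    return outside - inside  # ions move toward the less concentrated side


def voltage_flow(ion_charge, membrane_potential):
    """Direction of flow due to the voltage gradient: a positive ion
    is pulled toward a negatively charged inside (negative potential)."""
    return -ion_charge * membrane_potential


# 30 ions inside, 70 outside: the concentration gradient pushes inward
print(concentration_flow(30, 70))   # 40 (inward)
# a positive ion (+1) with a -70 mV inside: also pulled inward
print(voltage_flow(+1, -70))        # 70 (inward)
```

With equal concentrations on both sides the first function returns 0, i.e. the 50:50 distribution from the example above is exactly where the concentration gradient stops pushing.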

A short recap: two forces act on the ions (i.e. charge carriers) in neurons: the concentration gradient, which tends to distribute the ions equally to both sides, and the voltage gradient, which lets the ions move in the direction of the opposite charge. Ion movement through the membrane can only happen if the ion gates are open. If you've followed so far, you can skip to the next page, as this is sufficient to understand the model I will describe. If you want to know how those forces lead to the firing of a neuron, read on.

As already said, the firing of neurons is a function of the state of the ion gates and the two gradient forces. The important ion species in the model are sodium (Na+), potassium (K+) and chloride (Cl-). At resting potential, meaning that the neuron is in a sort of baseline state and is ready to fire, Na+ is concentrated outside of the neuron and K+ inside (Fig. 3). The ion gates are almost closed, so very few ions pass the membrane. Note as well that the inside of the neuron is negatively charged due to proteins. Do you see how the gradient forces affect the ions? Just look at Figure 3. Na+ tends to get pushed inside the neuron, as more Na+ ions are concentrated outside and the inside of the neuron is negatively charged. Concerning K+, the two forces act in opposite directions. The voltage gradient pushes K+ ions into the neuron, since the negative inside attracts the positive ions, but the concentration gradient pushes the K+ ions out of the neuron, since K+ is concentrated inside.

Magic happens if the inside of the neuron gets more positive due to synaptic input (i.e. the input from a connected neuron). The trick lies in the ion gates; they are voltage gated. Thus, their open/closed state depends on the membrane potential. If the neuron gets more positively charged, the Na+ gates begin to open (Fig. 4); Na+ is now allowed to flow into the neuron, making it more and more positively charged. At around +30 mV, which is the peak of firing, the Na+ gates close and the K+ gates open; the positive charge now leaks out of the neuron and makes it more negative again (Fig. 5). This is the basic description of the action potential.
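To make this sequence of events concrete, here is a heavily simplified toy model of the action potential. The numbers (-70 mV rest, -55 mV threshold, +30 mV peak) are textbook ballpark values, and the update rules are made up for illustration; this is nothing like the real Hodgkin-Huxley equations:

```python
def toy_spike(v_start, steps=200):
    """Very simplified action potential: once the membrane passes the
    threshold, Na+ influx drives it up to ~+30 mV, then K+ efflux
    pulls it back toward rest. Illustrative numbers only."""
    REST, THRESHOLD, PEAK = -70.0, -55.0, 30.0
    v = v_start
    phase = "rising" if v >= THRESHOLD else "rest"
    trace = [v]
    for _ in range(steps):
        if phase == "rest":
            v += 0.1 * (REST - v)           # leak back toward rest
        elif phase == "rising":
            v += 5.0                        # Na+ gates open: rapid rise
            if v >= PEAK:
                v, phase = PEAK, "falling"  # Na+ gates close, K+ open
        else:
            v -= 5.0                        # K+ efflux repolarizes
            if v <= REST:
                v, phase = REST, "rest"
        trace.append(v)
    return trace


trace = toy_spike(-54.0)   # just above threshold -> full spike
print(max(trace))          # peaks at +30 mV
```

Note the all-or-nothing behavior: starting below the threshold (e.g. at -60 mV) the potential just leaks back to rest and no spike occurs.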

As always there is much more to say about this. There are a lot of good explanations of this model in textbooks but also on the Internet; you'll find them for sure.

Now that you have the necessary background knowledge, I'll tell you something about the theory behind this project. This is where it gets interesting, so keep going!

Step 3: EEPA Theory

Now we talk about what this project is about: the simulation of neuronal processes in hardware. Broadly speaking, neuronal simulation can be divided into software simulation (i.e. modeling neurons on a computer) and hardware simulation (i.e. building a chip which works like a neuron). The latter is called neuromorphic engineering. These researchers aim to develop artificial neurons in silicon, i.e. in a chip. Why? Well, one advantage of silicon neurons is their computational power; real neural networks compute in parallel whereas computers compute serially, the so-called serial bottleneck. If one wants to mimic neuronal networks, parallel computing is required, as this is biologically plausible. Furthermore, silicon neurons have the potential to be used as neural prostheses; it has already been demonstrated that artificial neurons can be connected to real ones. In terms of explanatory neuroscience, silicon neurons have the advantage that they can simulate the electronic properties of neurons directly.

As you can see, silicon neurons have quite some potential. However, as always, there are limits. Simply speaking, silicon neurons are not real neurons, obviously. For example, whereas the action potential of real neurons results from the interaction of different ion types, these cannot be modeled directly in electronics. In an electronic circuit there is only one charge carrier, the electron, and that's it. If one wants to model neuronal behavior on the molecular level, one has to abstract to a higher level, usually by implementing a software simulation.

This project has two objectives. First, the framework should allow you to simulate non-electronic properties of neurons in hardware. Secondly, the framework should allow you to interact with (i.e. manipulate) the behavior of the ions so that the simulated processes are more easily accessible to the observer. Please note the difference between the objectives pursued in neuromorphic engineering and the ones set here.*

EEPA, which stands for Extended Electronic Physical Architecture, is a novel idea which allows you to achieve these two objectives. In the EEPA, you do not simulate neuronal processes directly in the electronic circuit like you do in silicon neurons, but use electronics to shape the behavior of a non-electronic physical property (e.g. light) so it demonstrates neuronal behavior (Fig. 6). This means that neurons are not simulated on the electronic level but at the level of light.

Two features of the EEPA should be noted. Firstly, as the simulation takes place on a more abstract level (i.e. light), non-electronic properties can be simulated in hardware. On the other hand, the electronic properties of neurons could also be simulated directly in the electronic circuit, so one does not lose this advantage of silicon neurons. This is nothing novel though, since a combination of hardware and software simulation leads to similar results. The novelty lies in the second feature: as the simulation really takes place on the level of light, you can manipulate the neuronal behavior by manipulating the light. I propose that this makes the simulation more accessible to the user, for non-specialists as well as specialists.

This all might sound quite abstract so far, but once you read about the technical realization on the next page things will become much clearer.

*The objectives are different from those in neuromorphic engineering; classical silicon neurons are made to be as small as possible in order to simulate large neuronal networks in little space. In the framework proposed here, efficient use of space is seen as becoming important at later developmental steps. The focus here is on an accessible model which has the power to simulate a broad spectrum of properties in hardware.

Step 4: Applying the EEPA Framework

Please note: If you don't know anything about ion movement in neurons, I recommend to read 'Step 2: Ion movement in neurons' first.

The goal is to design a hardware environment which enables one to simulate ion movement in neurons with light. Note here that the simulation should really take place on the level of light, so any light, generated by the hardware itself but also external light, should be taken into account. The key component here is the light-dependent resistor (LDR). Have a look at Figure 7 for the setup. Ions are modeled by light (i.e. photons) generated by LEDs. Ion concentration, and thus charge concentration, is simulated by the amount of photons at a particular place. If one implements different colors (i.e. photons of different wavelengths), different ion types can also be coded. Thus, different LEDs and LDRs corresponding to particular colors (i.e. via color filters) are needed for the technical realization. In the current hardware setup I did not implement color-sensitive LDRs but shielded the LDRs from each other; the idea, however, is to use color-sensitive LDRs. With this shielding the hardware simulates two ion types.

In the light EEPA, the LDRs are oriented so that only the light from one side of the neuron falls on the sensors. This is used as input to the circuit, which changes the output to the LEDs based on the gradient forces. The LDRs are the critical parts of the 'light EEPA' as they allow for the integration of circuit-generated light and externally generated light. This is the main difference between the EEPA and a closed-system simulation which only gives you the visual output of the simulation.

On the next pages we talk about the implementation of the EEPA, so you learn how to build your own artificial glowing neuron!

Step 5: Computer Simulation

Before implementing the model physically, I programmed the behavior in Delphi (Fig. 8). I will not discuss that simulation here though.

Step 6: Implementation

Finally you learn how to build it!

The descriptions given here are for a hardware implementation with two ion types, but the concept stays the same for more ion types. First you need to decide whether you want to go for an analog electronics implementation or a digital one. I went for an Arduino-based implementation since this approach is obviously more flexible; therefore I only briefly discuss a concept with which you could implement the EEPA as an analog circuit.

Have a look at Figure 9 for the analog implementation. The figure shows an independent building block for the simulation of one ion type. A voltage divider circuit directs the output to LEDs inside and outside the neuron. The divider is made of LDRs getting input from the light on the other side of the neuron. For example, the concentration gradient for K+ is modeled by the upper LDR, which gets input from the K+ light inside the neuron. Remember that the concentration gradient pushes the ions toward a 50:50 distribution, so if tuned correctly the LED output produces the same amount of light outside the neuron as it receives from the inside. The voltage gradient is implemented in a similar manner, only that the other charges have to be taken into account as well. The output to the LEDs of one ion type is the result of the different LDRs on one side of the voltage divider. I must say, even though I would prefer the analog approach, the fine tuning of the circuit would be a challenge.

The digital implementation concept is displayed in Figure 10. For each ion type, one building block (i.e. one microcontroller-driven circuit each) could be made so that you can add more and more building blocks to your simulation, like playing with LEGO. Using one Arduino for several ion types is fine and more efficient though. Each Arduino gets input from the LDRs of both sides and calculates the LED output based on the concentration and voltage gradients.

Read on for the BOM.

Step 7: Bill of Materials

  • Arduino controller(s)
  • LDRs
  • different colored LEDs
  • Resistors for LDR measurement and LEDs
  • (color foil)
  • (a frame for the model)

Read on for the schematic.

Step 8: Circuit Layout

The circuit layout is displayed in Figure 11.

Read on for the Arduino code.

Step 9: Arduino Code

Basically the code works as follows:

Setup:

  1. Mapping the 4 LDR inputs and biasing the input variables so that at start the light is equally distributed to both sides -> equilibrium state

Loop:

  1. Mapping the 4 LDR inputs
  2. Converting LDR inputs into a concentration gradient for each ion type; e.g. negative numbers if less light inside the neuron
  3. Calculating the change of LED output based on the concentration gradient by N units (speed)*
  4. Converting LDR inputs into a voltage gradient for each ion type; e.g. negative numbers for a negatively charged inside of the neuron
  5. Calculating the change of LED output based on the voltage gradient by N units (speed)*
  6. Writing the voltage out to the LEDs

*If the difference in concentration becomes smaller, the speed (N units transferred) of the movement decreases. This could be implemented via delta concentration = inside concentration - outside concentration. However, the model displays this behavior even without programming the delta concentration, due to fluctuations in the measurements and the PWM-controlled LEDs; as the hardware approaches the equilibrium state it becomes less likely that the Arduino gets input 'above' the gradient (Fig. 12).

Note: The code does not implement biologically plausible calculations, to keep it simple.
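As a rough sketch of what one loop iteration does for a single ion type, consider the following (written in Python for readability; all variable names are made up for illustration and this is not the actual Arduino sketch):

```python
def update_leds(ldr_in, ldr_out, led_in, led_out, charge_in, speed=1):
    """One illustrative iteration of the update loop for one ion type:
    move `speed` units of light down the concentration and voltage
    gradients, then clamp to the LED output range."""
    # concentration gradient: negative if less light inside the neuron
    conc = ldr_in - ldr_out
    if conc > 0:                                   # more inside -> push out
        led_in, led_out = led_in - speed, led_out + speed
    elif conc < 0:                                 # more outside -> push in
        led_in, led_out = led_in + speed, led_out - speed
    # voltage gradient: positive ions are pulled toward a negative inside
    if charge_in < 0:
        led_in, led_out = led_in + speed, led_out - speed
    elif charge_in > 0:
        led_in, led_out = led_in - speed, led_out + speed
    # clamp to the PWM range of an Arduino analogWrite (0-255)
    clamp = lambda x: max(0, min(255, x))
    return clamp(led_in), clamp(led_out)


# more light inside AND a negative inside: the two forces cancel for K+
print(update_leds(ldr_in=180, ldr_out=120,
                  led_in=100, led_out=100, charge_in=-1))  # (100, 100)
```

The example reproduces the K+ situation from Step 2: the concentration gradient pushes a unit out, the voltage gradient pulls one back in, and the LED outputs stay put.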

Step 10: Model Behaviour

How does the artificial neuron behave? In Figures 12 and 13 you can see what happens if you play with the EEPA. In Figure 12 I 'injected' more K+ ions by using a flashlight; the light got distributed to the other side until it would

  • reach the limit of the LEDs (a technical issue), or
  • reach its equilibrium state, depending on ion concentration and voltage gradient.

Figure 13 demonstrates what happens if you inject both K+ and proteins-; the settled state of the hardware oscillates around a value given by both the ion concentration and the voltage gradient. The oscillation is actually desired, since ions are never at rest; when we talk about an equilibrium state we do not mean that ions no longer move, but that it is equally likely for ions to enter the neuron as it is for them to exit it.

Figures 15-16 show how it looks if you inject K+ ions, Figures 17-19 if you inject both K+ ions and proteins-.
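This statistical notion of equilibrium can be illustrated with a tiny simulation: each step one ion crosses the membrane, with the probability of leaving proportional to the concentration on that side, so the count fluctuates around its equilibrium value instead of freezing there. The hardware gets this behavior for free from measurement noise and PWM; the explicit randomness below is only for illustration:

```python
import random

def equilibrium_trace(inside, outside, steps=2000, seed=1):
    """Random-walk picture of equilibrium: the inside count wanders
    around its equilibrium value, pulled back whenever it strays
    (the side with more ions is more likely to lose one)."""
    random.seed(seed)
    trace = []
    for _ in range(steps):
        total = inside + outside
        if random.random() < inside / total:
            inside, outside = inside - 1, outside + 1  # one ion leaves
        else:
            inside, outside = inside + 1, outside - 1  # one ion enters
        trace.append(inside)
    return trace


t = equilibrium_trace(50, 50)
print(min(t), max(t))  # the count oscillates around 50, it never freezes
```

(Readers with a physics background may recognize this as essentially the Ehrenfest urn model.)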

Step 11: Outlook

Now that you have your own glowing 'toy' neuron, your own 'glowron', what to do with it? First of all, the current hardware is in need of improvements:

  • tuning the code/hardware to biologically plausible values and calculations
  • simulating neuronal electronic properties in the hardware
  • implementing color-sensitive LDRs
  • adding more building blocks (more ion types)
  • adding new components for feedback, e.g. sound which represents the membrane potential
  • adding new components to the building blocks; (servo-controlled) ion gates are needed to model the action potential, as is the sodium-potassium pump
  • simulating an axon by light sources distributed over space
  • designing 3D hardware model
  • building a synapse; modeling neurotransmitters also by light or some other property
  • building a neuronal network

The guideline for all improvements should be that the model stays accessible from the outside, i.e. without technical knowledge. The EEPA is thought of as a concept for a universal neuronal simulation tool. Whether it really has advantages in the teaching and research domains I cannot say, as I did not investigate this question empirically, but I do not see how neurons you can play with could not be needed!

As for the application in teaching, imagine the following (Fig. 21). You enter a dark room and notice a soft humming; the glowron is in its resting state. In front of you are several ion types (colors), inside the neuron as well as outside. The different parts of the neuron are clearly visible: the cell body, the dendrites with their branches and postsynaptic terminals, the axon with its different parts ending at the presynaptic terminal. You are asked if you would like to be an ion; you agree with a smile (it's one of your biggest dreams). You transform into sodium by wearing a glove with red lights at the fingertips. Now, what will happen if you inject yourself outside the neuron, inside the neuron, at the cell body, at the nodes of Ranvier, at the myelinated parts? You observe how the ion concentration changes, how ion gates are mechanically opened and closed, and you hear how the membrane potential changes. Choosing the right spot for the injection, you trigger a chain reaction. The previously soft humming becomes louder and higher in pitch; a lot of movement is going on in the axon! Ions are flowing down the axon, suddenly you hear a crack, see a flash, and you realize: the neuron has spiked. Excited, you ask yourself how the firing is transmitted to the next neuron, but that's chapter 2.

Thanks for taking the time to have a look at this project; I'm looking forward to your feedback and recommendations.

Kind regards,


<p>Hi Nspike, this is a very interesting project. I'm a neurophysiology lecturer in a Medicine School, and I would like to use your model in my first lecture on membrane potential. However, there is still something I don't understand. Is it possible to modify the K+ conductance in the sketch? It would be great if you added some comments to the sketch to better explain the meaning of the variables. How can you control exactly the amount of proteins- and K+, as the light inside the box interacts with both LDRs? Thanx. </p>
<p>Hey Nspike, have you made any progress on this project with the improvements you list? Would love to see an updated version! </p>
<p>1 neuron is not enough to accomplish very much; I want more-ons to work for me!</p>
<p>well, if you want neuronal networks you shouldn't use this type of artificial neuron, at least for now since this project is more about the simulation inside the cell. do you have anything specific in mind? :) </p>
<p>Sorry Nspike, I should have added a [HUMOR ALERT!] disclaimer to my comment. Sometimes a phonetic play on words works out, other times not.<br><br>Moving on to seriousness, the concepts and execution are really slick!<br>Suppose a few artificial neurons were connected to react with position and force sensors for some simple task such as ensuring a board is running tight against a router fence while the edge is being cut. Usually a person does the job of feeding a long board through the router/cutter. <br><br>I ask about sensors and feedback because they may be able to adapt to each different board on the fly. Whereas merely &quot;dumb&quot; mechanical clamping systems must be reset for each board. Perhaps it's possible to use &quot;simple&quot; electronics to do the job and avoid operator error which results in burn marks on the edge, snipes, gouges, nicks, drift errors on edges, etc.<br></p>
<p>Hey, sorry for my late reply, but everyone needs vacation from time to time. ;) Don't worry, I didn't take your comment very seriously, but as you said sometimes it's hard to be sure about the meaning behind written words.</p><p>Anyway, what you're talking about seems conceptually not that hard to realize. There are quite a few artificial neural networks (ANNs) out there doing jobs like these (mostly software simulations though, at least the ones I know). For example, one of the first ANNs would analyze satellite images in order to identify tanks and tell you if there are tanks on the image. The cool thing about ANNs is that they are quite noise resistance if trained the right way, so in this example they would identify different tanks in images with different weather etc. So, it should be quite possible that you can use an ANN to automatically adapt to different 'boards on the fly' as you have termed it, as ANNs have the ability to generalize data to some degree.</p><p>Another cool thing that came into my mind and which is probably more related is autonomous robot learning. I recently could have a short look into a lab were they try to teach robots to walk; ANNs are adjusted after each fail so that the robot has in the end the optimum walking movement, without programming it explicitly. That seems pretty much the same thing.</p><p>Have a nice day!</p>
<p>Pretty amazing!</p>
<p>Whoa. That's a lot to digest! But a very enjoyable read, and interesting project. Thank you!</p>
<p>well i tried to keep it simple though. :) thanks for your feedback!</p>
