Hi,
At 13:11 in the 9th video, “Gated Recurrent Unit (GRU)”, of the “Sequence Models” course, Andrew Ng talks about the GRU memory cell encoding something like a bit that remembers whether “cat” was singular or plural, and acting as a memory later on in the RNN. What would be the process to figure out which bits in the GRU memory unit represent whether the word “cat” is singular or plural?
It’s an interesting question, and there are a couple of levels at which to address it. At the highest level, the number of bits of state in the GRU cell is a “hyperparameter”, meaning a decision that has to be made by the system designer. It’s been a while since I listened to these lectures, but I assume Prof Ng discusses that at some point.
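To make that concrete, here is a minimal Keras-style sketch (my own example; the framework choice is just an assumption, since the lecture doesn’t use one). The number of state units is literally an argument you pass when you build the layer, not something learned from the data.

```python
import tensorflow as tf

# The size of the GRU's hidden state (the "bits" of memory Prof Ng refers to)
# is a hyperparameter chosen by the designer; here we happen to pick 64 units.
gru_layer = tf.keras.layers.GRU(units=64, return_sequences=True)
```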
The next level of the answer is that we always initialize the weights and other parameters randomly for “symmetry breaking” and then run the training. So, just as in simple feed-forward nets and ConvNets, it’s random which neuron in a given layer ends up learning a particular recognition task in a given training run. But given a particular trained model, you could try to analyze the behavior that was actually learned.
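Here is a tiny NumPy illustration of the symmetry-breaking point (again my own toy example, not course code): units that start out identical compute identical values and receive identical gradients, so only random initialization lets them diverge and specialize, and which unit ends up with which job is then a matter of chance.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(4)                             # a toy input vector

# Three hidden units that all start with the same weights: their activations
# (and their gradients) are identical, so they can never learn different roles.
W_tied = np.tile(rng.standard_normal((1, 4)), (3, 1))
print(np.tanh(W_tied @ x))

# Small random initialization breaks the tie; the units now differ, and which
# one ends up tracking a given feature varies from training run to training run.
W_rand = rng.standard_normal((3, 4)) * 0.01
print(np.tanh(W_rand @ x))
```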
The closest thing I can think of to what you are asking is the lecture in Week 4 of ConvNets (DLS C4) titled “What Are Deep ConvNets Learning?” In that lecture, Prof Ng describes some really interesting work in which the authors of the paper figured out how to “instrument” neurons in the inner layers of a trained ConvNet, ran a bunch of sample data through it, and saw which types of inputs “triggered” a given neuron most strongly. Using an idea like that, you could perhaps instrument the bits of GRU state and see which ones are activated by the singular or plural value of the subject of the sentence, or by other conditions like that. Short of doing something like that, though, you can only observe that the behavior is clearly being learned; it’s not possible to point to a particular bit or bits of the GRU state that produce the specific behavior.
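If you wanted to try that kind of “instrumentation” yourself, a sketch along these lines might be a starting point. This is purely my own hypothetical example (the toy model, the random data, and the layer name are all stand-ins): with a real trained model you would feed in real encoded sentences and real singular/plural labels, then look for hidden units whose activations correlate strongly with plurality.

```python
import numpy as np
import tensorflow as tf

n_units, T_x, n_features = 16, 10, 8

# Stand-in for a trained sequence model with a GRU layer named "gru".
inputs = tf.keras.Input(shape=(T_x, n_features))
states_seq = tf.keras.layers.GRU(n_units, return_sequences=True, name="gru")(inputs)
outputs = tf.keras.layers.Dense(1, activation="sigmoid")(states_seq[:, -1, :])
model = tf.keras.Model(inputs, outputs)

# "Instrument" the GRU: a sub-model that exposes the hidden state at every timestep.
probe = tf.keras.Model(model.input, model.get_layer("gru").output)

X = np.random.randn(200, T_x, n_features).astype("float32")   # toy "sentences"
is_plural = np.random.randint(0, 2, size=200).astype(float)   # toy 0/1 labels

states = probe.predict(X, verbose=0)        # shape (200, T_x, n_units)
h_last = states[:, -1, :]                   # hidden state after the whole sentence

# Correlate each unit's activation with the label; units with a large |correlation|
# are the candidates for a "singular vs. plural" bit.
corr = np.array([np.corrcoef(h_last[:, j], is_plural)[0, 1] for j in range(n_units)])
print("units most correlated with plurality:", np.argsort(np.abs(corr))[::-1][:5])
```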
Is this why people say that nobody really knows how Neural Networks work internally? Is it because we do not fully understand the data formats?
Yes, I think that’s a way to express the same idea, although I wouldn’t describe the issue in terms of the “data format”. We know how many bits of hidden state there are, but we don’t really know how they perform their various functions, at least not without some serious further work. I pointed to the discussion in C4 about how some researchers have probed this type of question in the case of ConvNets.
There is also an entire field of “Understandable AI” or “Explainable AI” that is very active, although I have personally never looked at any of that material. If you’re interested, try the obvious Google search and see whether what you find grabs your attention.

In fact, Prof Ng has a video conversation with Prof Fei-Fei Li and Prof Curtis Langlotz of the Stanford Medical School about the future of AI in Medicine, and the issues around understanding how an AI model produces its answers are discussed there. That’s probably a good place to start if you want an entry into the area of Explainable AI. The point is: if we’re willing to make potentially life-and-death decisions about patient care based on the recommendations of AI systems, is it really ethical and smart to do that when we can’t explain the underlying mechanisms by which those decisions are made?

Of course, if you want to get “real” about this, the FDA doesn’t require you to explain the “mechanism of action” of a drug in order to get it approved: you just have to run the clinical trials to prove that it is safe and effective. So you could argue that what we really need are analogous testing mechanisms to show that whatever system we are proposing to use is also “safe and effective”.