Asking for suggestions on classes to take beyond the Deep Learning Specialization

I am a hardware engineer (RTL micro-architect) working on microprocessors and inference accelerators. To prepare myself to work in the machine learning area, I took three Specializations, which gave me a basic understanding of this field. I am very grateful to the mentors who provide high-quality help and support, and the same goes for the wonderful lectures.

  1. Deep Learning Specialization (I am at C5W4, Transformers)
  2. Machine Learning Specialization
  3. Python3 Specialization

However, in industry nowadays, ML belongs to software. The hardware team does not do much that is directly related to ML. To get into the machine learning field, I guess I have to walk through a project cycle or do an end-to-end project myself.

Is there a class that provides this experience, and what prerequisites are needed for it?
Thank you for the advice.

Hardware for deep learning primarily uses GPUs. None of DLAI’s courses cover GPU technology.

All of the courses include practical labs. You can apply that knowledge to datasets that you can find online for free, to practice your skills.
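As a minimal sketch of what "practicing on a free dataset" can look like, here is a baseline workflow using one of the small datasets bundled with scikit-learn (so no download is needed). The choice of dataset and model is just an assumption for illustration, not a recommendation from the courses.

```python
# Baseline workflow on a free dataset: load, split, fit, evaluate.
# Dataset and model choice are illustrative assumptions only.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

# 8x8 handwritten-digit images, flattened to 64 features each
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# A simple baseline model; swap in anything from the course labs
clf = LogisticRegression(max_iter=2000)
clf.fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.3f}")
```

The same load/split/fit/evaluate loop carries over directly to datasets found on Kaggle or the UCI repository.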

I haven’t taken any courses here that involve full end-to-end projects. One thing you could do is take a look at some of the Kaggle challenges. Here’s a thread that links to some of the ones that are meant more for learning than for competing. If you feel confident after reading or even working through a couple of those, you could try one of the active competitions.

If you are doing chip-development h/w projects, you are probably already familiar with the equivalent of software project management, I would think. It’s been 20 years since I was at a company that does h/w, so my knowledge is probably out of date. I remember it being a lot like writing programs, except that you write them in VHDL instead of C or C++. :nerd_face:


Thank you for the suggestion. I will take a look at the Kaggle thread.

In general, a company will have a software team and a hardware team. The software (SW) team takes care of model training, optimization, and the SW stack for the HW used (NPU/GPU/accelerators), while the hardware (HW) team takes care of the SoC implementation. Yes, people use VHDL to describe the HW (I/O, basic cores, caches, memory, network routing, etc.). Then they do synthesis, critical-timing fixes, power-budget evaluation, and so on; eventually the gate-level model is verified and fabricated.

My question is more about SW, the SW stack, and SW/HW co-design: how SW can interface with HW efficiently. HW people want to understand how a model is trained, optimized, and compiled into code that runs on a specific GPU/accelerator, so that data reuse, bandwidth utilization, computational efficiency, accuracy, etc. can be maximized.
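To make the data-reuse idea concrete, here is a toy sketch of the loop tiling (blocking) that accelerator compilers apply to matrix multiply. The tile size and NumPy implementation are illustrative assumptions; on real hardware the tiles would live in on-chip SRAM so each block of A and B is fetched from DRAM once and reused many times.

```python
import numpy as np

def tiled_matmul(A, B, tile=32):
    """Blocked matrix multiply: a toy model of compiler loop tiling.

    Each (tile x tile) block of A and B is loaded once and reused
    across a whole block of C, instead of re-streaming full rows and
    columns from main memory for every output element.
    """
    M, K = A.shape
    K2, N = B.shape
    assert K == K2, "inner dimensions must match"
    C = np.zeros((M, N), dtype=A.dtype)
    for i in range(0, M, tile):          # tile over output rows
        for j in range(0, N, tile):      # tile over output columns
            for k in range(0, K, tile):  # accumulate over inner dim
                # On an accelerator, these slices would be SRAM tiles.
                C[i:i+tile, j:j+tile] += (
                    A[i:i+tile, k:k+tile] @ B[k:k+tile, j:j+tile])
    return C
```

Choosing `tile` so the three blocks fit in on-chip memory is exactly the kind of cost-model decision an ML compiler makes per target; the numerical result is identical to an untiled multiply.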

Here is a link to lecture video “Hardware for Machine Learning” provided by Berkeley EE department. Hardware for Machine Learning, Spring 2021

At the bottom of the web page, you can find HW resources for implementation.

@Rishan_Tan I have some interest in this area if you want to private message me.

IMHO the easiest option, at least in the embedded space, is something like Xilinx’s Zynq, where you have a hard processor alongside the FPGA fabric (for all the surrounding calculations). There is also PCIe IP for this if you wanted to build an installable board.

If you wanted to interface with a PC, however, you would need: 1) at minimum, to configure a USB comm channel, or 2) for something internal, to design a software driver to do the interfacing (OS-dependent).

From there, a simple C library that programmers could drop into their programs to interface with the device would be very helpful, even essential.

@Rishan_Tan personally I think one of the biggest challenges in this space is that if you are using off-the-shelf chips (even FPGAs), then short of taping out an ASIC you are never going to have the high-speed DRAM interconnects you’ll find on GPUs.

On the plus side, though, it means you can scale your DRAM capacity much, much more cheaply than on a GPU.

So I see it as a cost/performance trade-off.