Advice needed: structuring FastAPI + agentic learning after a gap

Hi everyone,

I have a Python + AI background and previously studied ML/AI seriously, but over the last few months my progress became very inconsistent. I'm currently preparing for a junior backend / AI-adjacent role where FastAPI, basic AWS deployment, and simple RAG or agent-style services are expected.

Technically, I can understand concepts, but I'm struggling with:

- long unstructured tutorials
- decision paralysis (what to study next)
- maintaining emotional stability while learning under pressure

I'm not looking for motivation; I'm looking for:

- a realistic learning sequence
- how to scope daily tasks (1–2 hours)
- how to combine FastAPI + agents/RAG without overload

If you were restarting after a gap and wanted to prepare efficiently, what would you focus on first, and what would you deliberately skip?

Any guidance or resources would help a lot. Thanks 🙏

Yes, it is overwhelming, and it's a very common problem. Break it down into the three components below. You need one end-to-end working example: start with RAG, then incorporate the other two into the workflow.

  1. FastAPI as backend/orchestrator. You don't need to learn much of FastAPI as a concept up front; it will kick in as you build the RAG.

  2. Agents/RAG: There are courses for both on DeepLearning.AI; the RAG one uses Haystack (I think). Use the templates and follow the instructions as-is. Why? Because the packages, environments, dependencies, and underlying models are moving too fast, and you will go down an operational abyss to a point of no return! Boilerplate code templates with pinned dependencies, much as that sounds like "cheating", are sadly the only option I found; I accepted defeat after nearly two years of operational dogfights! Note that it may cost a bit in terms of API usage, but if you're using clean PDFs, you should be fine.

  3. AWS deployment: Do the official cert. AWS is the simplest, and the certs matter in cloud jobs for some reason. Be careful with billing, even on the free tier. Doing the cert is enough to know how to mount your workflow.
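On the "pinned dependencies" point in item 2: the idea is simply to freeze exact versions in a `requirements.txt` so the template still builds months later, instead of letting `pip` pull whatever is newest. The version numbers below are illustrative placeholders, not recommendations; use whatever the course template actually pins:

```
# requirements.txt — pin EXACT versions (==), never ranges.
# Versions shown are illustrative placeholders only.
fastapi==0.110.0
uvicorn==0.29.0
haystack-ai==2.1.0
```

If you start from a working environment instead of a template, `pip freeze > requirements.txt` captures the exact versions you have.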

Other operational considerations:

  • Once done, use simple, clean PDFs to test first on your local machine, then mount them on the cloud. If you choose to use the exact workflow you learn, check DeepLearning.AI's terms of service and, if allowed, liberally use "Source" notes to give credit to the organization, and to Haystack etc. if relevant. I'm sure someone in this forum will know the rules.
  • Get an LLM to write a disclaimer and pick a free license (Apache or MIT); it's very simple. Host it on GitHub with some clean diagrams and visuals showing your process flow.
  • Be careful not to commit your environment variables/API keys to Git. What you showcase there is infrastructure that others can follow and fork, not something for them to test with "your cloud account", unless you are very wealthy and charitable!
  • Last step: Now that you have a working process, read up on how the RAG can be enhanced for complex PDFs like scans, where you need to add OCR, or where you need human-assisted reinforcement learning. I think DeepLearning.AI has a course for the latter too, but I don't think you'll need it for a junior role. Another option is to turn "common pitfalls" and "enhancements like OCR" into a roadmap on GitHub.
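On the API-keys bullet: the usual pattern is to read secrets from environment variables at runtime and keep the real values in a local `.env` file listed in `.gitignore`, so only the code reaches GitHub. A minimal sketch, assuming an environment variable named `OPENAI_API_KEY` (the name is just an example):

```python
# Read secrets from the environment instead of hard-coding them.
# Keep real values in a local .env file, add ".env" to .gitignore,
# and only this code (never the keys) lands in the repo.
import os

def get_api_key(name: str = "OPENAI_API_KEY") -> str:
    key = os.environ.get(name)
    if not key:
        raise RuntimeError(
            f"{name} is not set. Export it, or put it in a local .env file "
            "that is listed in .gitignore."
        )
    return key
```

Failing loudly when the variable is missing is deliberate: it surfaces a misconfigured deployment immediately instead of producing confusing downstream auth errors.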

This is exactly the practical breakdown I needed. Thank you for cutting through the noise.

Easier said than done. Good luck - to you and me!