Question regarding prompt to code training

I understand the training principles for summarization, question answering, entity extraction, and so on, but I am still puzzled about how to train a model to convert a prompt into a code snippet, in Python for example.

I think that to convert a prompt into code there has to be an understanding of logic, or some logic support… I might ask for a single for-loop, for two nested for-loops, or for n nested for-loops… this cannot be trained in the traditional way, can it?
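
To make that concrete, here are two hypothetical (prompt, code) pairs of the kind I have in mind; both the prompts and the snippets are made up for illustration:

```python
# Made-up examples of the kind of prompt-to-code pairs I mean:
examples = [
    ("Write a for-loop that prints the numbers 0 to 9",
     "for i in range(10):\n"
     "    print(i)"),
    ("Write two nested for-loops that print every cell of a 3x3 grid",
     "for i in range(3):\n"
     "    for j in range(3):\n"
     "        print(i, j)"),
]

for prompt, code in examples:
    print(prompt)
    print(code)
    print()
```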

Is there a good description anywhere of how these types of models can be trained?

You might want to check out Codex from OpenAI.

See also this source:

OpenAI Codex is an artificial intelligence model developed by OpenAI. It parses natural language and generates code in response.
[…]
Based on GPT-3, a neural network trained on text, Codex was additionally trained on 159 gigabytes of Python code from 54 million GitHub repositories.[5][6] A typical use case of Codex is for a user to type a comment, such as "//compute the moving average of an array for a given window size ", then use the AI to suggest a block of code that satisfies that comment prompt.
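
For illustration, the block of Python a model might suggest for that comment could look roughly like this (my own sketch, not actual Codex output):

```python
def moving_average(values, window_size):
    """Simple moving average of a sequence for a given window size."""
    if not 1 <= window_size <= len(values):
        raise ValueError("window_size must be between 1 and len(values)")
    return [
        sum(values[i:i + window_size]) / window_size
        for i in range(len(values) - window_size + 1)
    ]

print(moving_average([1, 2, 3, 4, 5], 3))  # -> [2.0, 3.0, 4.0]
```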

In my understanding, this means that Codex has a decoder-only architecture, since it is based on the GPT model family.
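
As far as I understand it, such a model is then trained with plain next-token prediction: a natural-language comment and the code that satisfies it are concatenated into one sequence, and the model learns to continue that sequence. Here is a minimal sketch with the Hugging Face transformers library; the gpt2 checkpoint is just a stand-in, not what Codex actually uses:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# One training example: natural-language comment followed by the code that satisfies it.
prompt = "# compute the moving average of an array for a given window size\n"
code = (
    "def moving_average(xs, w):\n"
    "    return [sum(xs[i:i + w]) / w for i in range(len(xs) - w + 1)]\n"
)

# Decoder-only training: the concatenated sequence serves as both input and label,
# and the loss is ordinary next-token (causal language modelling) cross-entropy.
batch = tokenizer(prompt + code, return_tensors="pt")
outputs = model(**batch, labels=batch["input_ids"])
print(outputs.loss)  # a training step would call outputs.loss.backward() on this
```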

One could also think of this as a kind of translation task, for which the sequence-to-sequence models were introduced in week 1, so an encoder plus decoder architecture might be suitable as well.
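
For completeness, the encoder-decoder framing would put the prompt on the encoder side and the code on the decoder side, exactly like a translation pair. Another rough sketch, where t5-small is only a placeholder model:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

# Encoder input: the natural-language prompt.
inputs = tokenizer(
    "compute the moving average of an array for a given window size",
    return_tensors="pt",
)
# Decoder target: the code the model should generate.
targets = tokenizer(
    "def moving_average(xs, w):\n"
    "    return [sum(xs[i:i + w]) / w for i in range(len(xs) - w + 1)]",
    return_tensors="pt",
)

outputs = model(**inputs, labels=targets["input_ids"])
print(outputs.loss)  # sequence-to-sequence loss, as in a translation task
```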

Best regards
Christian