Hello
-
How do we train a code-generating model? Is it done purely autoregressively, or are there specific techniques for it? Suppose I have an LLM: how do I fine-tune it to generate code? Do I just follow next-token prediction, or something else?
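To show what I have in mind, here is a minimal sketch of plain next-token-prediction fine-tuning on code, assuming the Hugging Face transformers/datasets libraries; the base model name and the data file are just placeholders:

```python
# Minimal sketch: fine-tune a causal LM on code with plain next-token
# prediction (placeholder model name and dataset file).
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)
from datasets import load_dataset

model_name = "gpt2"  # placeholder base model
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # gpt2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

# Placeholder data: a text file where each line/block is a code snippet.
dataset = load_dataset("text", data_files={"train": "code_snippets.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

# mlm=False -> standard left-to-right language modelling: labels are the
# input ids shifted by one position, i.e. next-token prediction.
collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)

args = TrainingArguments(
    output_dir="code-finetune",
    per_device_train_batch_size=4,
    num_train_epochs=1,
    logging_steps=50,
)

Trainer(model=model, args=args, train_dataset=tokenized, data_collator=collator).train()
```

As far as I understand, the loss here is just cross-entropy on the next token, so my question is whether code models need anything beyond this.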
-
How is the model able to generalize to code? It generates code based on the question we ask, but it is still a model that predicts the next token by probability, so there is a chance the syntax ends up slightly wrong because a wrong token gets sampled. How does the model overcome syntax, runtime, and other errors?
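To make the concern concrete, here is the kind of failure I mean and how one could detect it, assuming the generated output is Python; the snippet below is a made-up example where a single missing token breaks the syntax:

```python
# A single wrong/missing token can break an otherwise sensible snippet.
# Check a generated string for syntax errors with the standard-library ast module.
import ast

generated = "def add(a, b)\n    return a + b\n"  # missing ':' after the signature

try:
    ast.parse(generated)
    print("syntactically valid")
except SyntaxError as err:
    print(f"syntax error at line {err.lineno}: {err.msg}")
```

Is this something the model itself learns to avoid during training, or is it handled by extra steps like checking/re-sampling the output?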