Impact of distributed compute on federated learning

I understand that in federated learning, training is done on the remote node where the data lives. But what is the impact on training performance and compute if the remote node does not have the capacity to train the model? How is this handled?
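
To make the setup concrete, here is a rough sketch of one federated-averaging (FedAvg) round on a toy linear model. The names here are illustrative rather than any framework's real API; the point is just that each remote node trains on data that never leaves it, and only weight vectors travel to the server for averaging:

```python
import numpy as np

def local_train(w, X, y, epochs=5, lr=0.1):
    """Client-side step: a few epochs of gradient descent on private data."""
    w = w.copy()
    for _ in range(epochs):
        w -= lr * 2 * X.T @ (X @ w - y) / len(y)  # MSE gradient for y ≈ X @ w
    return w

def fedavg_round(w_global, clients):
    """Server-side step: average returned weights, weighted by sample count."""
    results = [(local_train(w_global, X, y), len(y)) for X, y in clients]
    total = sum(n for _, n in results)
    return sum(w * (n / total) for w, n in results)

# Three remote nodes, each holding its own private dataset.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ true_w + rng.normal(scale=0.1, size=50)))

w = np.zeros(2)
for _ in range(20):
    w = fedavg_round(w, clients)
print(w)  # converges toward true_w without raw data ever leaving a client
```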

I haven't done this course, but cloud providers like AWS, Google, IBM… offer automatically scalable compute and storage capacity on demand!
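
Auto-scaling covers the case where the remote node is itself cloud-hosted. For nodes that cannot simply scale up (phones, sensors, other edge devices), one common pattern is to let a constrained client do less local work per round, or sit a round out entirely, and still fold whatever it returns into the weighted average, in the spirit of FedProx-style partial local work. A hedged sketch, reusing the toy model above, where `capacity` and `min_capacity` are hypothetical knobs rather than any real framework's parameters:

```python
import numpy as np

def constrained_local_train(w, X, y, capacity, max_epochs=5, lr=0.1):
    """Client scales its local work to its compute budget; a weak node may
    manage only a single epoch before reporting back."""
    epochs = max(1, round(capacity * max_epochs))
    w = w.copy()
    for _ in range(epochs):
        w -= lr * 2 * X.T @ (X @ w - y) / len(y)
    return w

def round_with_stragglers(w_global, clients, min_capacity=0.1):
    """Server skips clients below a capacity floor (they would stall the
    round) and averages the rest, weighted by how much data each holds."""
    results = []
    for X, y, capacity in clients:
        if capacity < min_capacity:
            continue  # node sits this round out and can rejoin later
        results.append((constrained_local_train(w_global, X, y, capacity), len(y)))
    if not results:
        return w_global  # no capable clients this round; keep the old model
    total = sum(n for _, n in results)
    return sum(w * (n / total) for w, n in results)
```

The trade-off is between round latency (waiting for slow nodes) and coverage (dropping them biases the model toward data held by well-resourced clients), which is exactly the performance impact the question asks about.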