The Wikipedia entry on human extinction separates natural causes (including population decline driven by falling birth rates) from anthropogenic ones (including advanced AI).
However, there can be feedback loops linking the two. Here is a paragraph by Andrew Ng from a recent issue of The Batch:
"What would happen if, a few decades from now, AI systems reach a level of intelligence that disempowers large swaths of people from contributing much economic value? I worry that, if many people become unimportant to the economy, and if relatively few people have access to AI systems that could generate economic value, the incentive to take care of people — particularly in less democratic countries — will wane."
I buy the idea. Moreover, what if this "taking care of people" includes having children? AI could have such a profound impact on the world that the number of children many women desire, or can afford to have, collapses. This would be a compound effect of anthropogenic and natural causes, very different from the paperclip maximizer or a killer intelligence firing nuclear weapons and the like.
I think that conclusions such as "AI will kill us all" or "AI will save us all" should come from some modeling, even if simple at first. An interesting recent paper directly linking AI and the fertility rate is the following. It discusses the influence of AI on the total fertility rate (TFR), with possible outcomes in opposite directions.