Hi there,
examples of unsupervised learning with neural networks are autoencoders and variational autoencoders. As mentioned in the previous answers, they can learn what the typical data distribution looks like. They are discussed in technical detail, for example, in this chapter: https://www.coursera.org/lecture/generative-deep-learning-with-tensorflow/first-autoencoder-vveV5
You can utilize these models, for example, for anomaly detection. When training only on unlabelled normal data, the model learns in an unsupervised way what "normal" looks like in the data. After deploying the model, if a metric (such as the reconstruction loss or a distance measure) exceeds a certain threshold for new data, this can serve as an indicator in an early-warning or anomaly-detection system: the new data point appears to be sufficiently different from the "normal" data and is therefore a potential anomaly.
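To make this thresholding idea more concrete, here is a minimal, self-contained sketch in Python/Keras. The data is synthetic, and the layer sizes, feature count and 99th-percentile threshold rule are purely illustrative assumptions, not a definitive recipe:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

rng = np.random.default_rng(0)
# Hypothetical unlabelled "normal" training data: 10 features scaled to [0, 1]
x_normal = rng.normal(0.5, 0.1, size=(5000, 10)).clip(0, 1)

# Small undercomplete autoencoder: the bottleneck forces it to learn
# the structure of the normal data
inputs = keras.Input(shape=(10,))
encoded = layers.Dense(3, activation="relu")(inputs)
decoded = layers.Dense(10, activation="sigmoid")(encoded)
autoencoder = keras.Model(inputs, decoded)
autoencoder.compile(optimizer="adam", loss="mse")

# Unsupervised training: the model learns to reconstruct its own input
autoencoder.fit(x_normal, x_normal, epochs=20, batch_size=64, verbose=0)

# Derive a threshold from the reconstruction error on normal data,
# e.g. the 99th percentile (the exact rule is use-case specific)
rec_error = np.mean((x_normal - autoencoder.predict(x_normal, verbose=0)) ** 2, axis=1)
threshold = np.quantile(rec_error, 0.99)

# At inference time: flag new samples whose reconstruction error exceeds the threshold
x_new = rng.normal(0.5, 0.3, size=(100, 10)).clip(0, 1)  # some will look "abnormal"
new_error = np.mean((x_new - autoencoder.predict(x_new, verbose=0)) ** 2, axis=1)
is_anomaly = new_error > threshold
print(f"{is_anomaly.sum()} of {len(x_new)} new samples flagged as potential anomalies")
```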
Feel free to take a look at this example of fitting a variational autoencoder model: TFP Probabilistic Layers: Variational Auto Encoder | TensorFlow Probability
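The linked tutorial fits the VAE with TensorFlow Probability layers; just as a rough, self-contained sketch of the same idea in plain Keras (the flattened 28x28 input dimension, layer sizes and the synthetic training data below are assumptions for illustration only), it could look roughly like this:

```python
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

LATENT_DIM = 2
INPUT_DIM = 784  # assumed flattened 28x28 inputs

# Encoder: outputs mean and log-variance of q(z|x)
enc_in = keras.Input(shape=(INPUT_DIM,))
h = layers.Dense(128, activation="relu")(enc_in)
z_mean = layers.Dense(LATENT_DIM)(h)
z_log_var = layers.Dense(LATENT_DIM)(h)
encoder = keras.Model(enc_in, [z_mean, z_log_var])

# Decoder: reconstructs x from a latent sample z
dec_in = keras.Input(shape=(LATENT_DIM,))
h = layers.Dense(128, activation="relu")(dec_in)
dec_out = layers.Dense(INPUT_DIM, activation="sigmoid")(h)
decoder = keras.Model(dec_in, dec_out)

class VAE(keras.Model):
    def __init__(self, encoder, decoder, **kwargs):
        super().__init__(**kwargs)
        self.encoder = encoder
        self.decoder = decoder

    def train_step(self, data):
        x = data
        with tf.GradientTape() as tape:
            z_mean, z_log_var = self.encoder(x)
            # Reparameterization trick: z = mu + sigma * epsilon
            eps = tf.random.normal(tf.shape(z_mean))
            z = z_mean + tf.exp(0.5 * z_log_var) * eps
            x_hat = self.decoder(z)
            # Reconstruction term (summed binary cross-entropy)
            rec_loss = INPUT_DIM * tf.reduce_mean(
                keras.losses.binary_crossentropy(x, x_hat)
            )
            # KL divergence between q(z|x) and the standard normal prior
            kl_loss = tf.reduce_mean(
                tf.reduce_sum(
                    -0.5 * (1 + z_log_var - tf.square(z_mean) - tf.exp(z_log_var)),
                    axis=-1,
                )
            )
            loss = rec_loss + kl_loss
        grads = tape.gradient(loss, self.trainable_weights)
        self.optimizer.apply_gradients(zip(grads, self.trainable_weights))
        return {"loss": loss, "rec_loss": rec_loss, "kl_loss": kl_loss}

vae = VAE(encoder, decoder)
vae.compile(optimizer=keras.optimizers.Adam())

# Synthetic stand-in for real "normal" data scaled to [0, 1]
x_train = np.random.default_rng(0).uniform(size=(1000, INPUT_DIM)).astype("float32")
vae.fit(x_train, epochs=5, batch_size=128, verbose=0)
```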
Not a neural network, but very good for getting more familiar with unsupervised learning: PCA (principal component analysis) is also worth mentioning as an unsupervised learning method. Feel free to take a look at this example as inspiration: CRISP-DM-AI-tutorial/Classic_ML.ipynb at master · christiansimonis/CRISP-DM-AI-tutorial · GitHub
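As a quick illustration of PCA as an unsupervised method (the synthetic data and the choice of two components here are just assumptions for the example):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Hypothetical data: 3 observed features driven mostly by 2 latent factors
latent = rng.normal(size=(500, 2))
X = latent @ rng.normal(size=(2, 3)) + 0.05 * rng.normal(size=(500, 3))

# Unsupervised: PCA finds the directions of largest variance without any labels
pca = PCA(n_components=2)
X_reduced = pca.fit_transform(X)
print(pca.explained_variance_ratio_)  # most variance captured by the 2 components
```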
Best regards
Christian