How to apply anomaly detection

In a regression problem with a deep neural network model, how can I detect the examples that cause the algorithm to learn incorrectly, and then modify the values of those examples rather than deleting them? The modification process would be based on reducing the error (the residuals).

I do not want to modify the data just to get artificial results.

The idea is that when I delete those examples and run the algorithm again, new outliers will appear, so I would rather just correct them with a slight change. Is that possible, in your opinion?
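The idea described above — flag examples with unusually large residuals and nudge their targets toward the model's prediction instead of deleting them — could be sketched roughly as follows. This is only an illustration on synthetic data: the least-squares line stands in for the neural network, and the 3-sigma threshold and the correction factor `alpha` are arbitrary assumptions, not recommended values.

```python
import numpy as np

# Synthetic regression data with a few deliberately corrupted targets
# (all numbers here are made up for illustration).
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(200, 1))
y = 2.0 * X[:, 0] + 1.0 + rng.normal(0, 0.5, size=200)
y[:5] += 15.0  # inject anomalous targets

# Fit a simple least-squares line as a stand-in for the trained network.
A = np.hstack([X, np.ones((len(X), 1))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ coef
residuals = y - pred

# Flag examples whose residual is unusually large (3-sigma is an assumption).
threshold = 3.0 * residuals.std()
outliers = np.abs(residuals) > threshold

# "Slight correction": pull flagged targets partway toward the model's
# prediction rather than deleting the examples (alpha is an assumption).
alpha = 0.5
y_corrected = y.copy()
y_corrected[outliers] = (1 - alpha) * y[outliers] + alpha * pred[outliers]
```

Note that this kind of target editing risks exactly the "artificial results" concern above: the model ends up partly fitting its own predictions, so any such correction should be checked against a held-out set.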

Have you taken DLS (the Deep Learning Specialization) yet? They discuss issues like this in DLS Course 3.

Doing “error analysis” is a useful technique when your model is not performing well enough. That means you select a subset of the test or training samples on which the model predicts incorrectly and try to see if there are patterns that explain the incorrect results. E.g. are the images poorly lit, or is there something else anomalous about them, like the objects being partially obscured? It could even be that the labels on the data are incorrect; in large datasets, there can be errors like that.

Depending on what you find, it may give you “actionable intelligence”: e.g. if the labels are wrong, you need to fix them. Or if some significant fraction of the errors are on images that were taken in low light, then you need to get more training data that has similar conditions, perhaps by data augmentation.

Of course there is no guarantee that the answer will be something that is easy to act on. Gathering more data is typically expensive and time-consuming, so you want to be pretty sure it will help before you invest the effort in that direction as opposed to changing the architecture of your model.
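The selection step described above can be sketched in a few lines: take the worst-predicted examples and count how a metadata attribute is distributed among them. Everything here is hypothetical — the `low_light` tag, the error distributions, and the "worst 10%" cutoff are invented purely to illustrate the pattern-finding idea.

```python
import numpy as np
from collections import Counter

# Hypothetical error analysis: group the worst-predicted examples by a
# metadata tag to look for a pattern explaining the errors.
rng = np.random.default_rng(1)
n = 500
tags = rng.choice(["low_light", "normal"], size=n, p=[0.2, 0.8])

# Simulated per-example absolute errors; we assume (for illustration)
# that low-light images tend to have larger errors.
errors = np.abs(np.where(tags == "low_light",
                         rng.normal(2.0, 0.5, n),
                         rng.normal(0.5, 0.2, n)))

# Select the worst 10% of examples and count tags among them.
k = n // 10
worst = np.argsort(errors)[-k:]
pattern = Counter(tags[worst])
print(pattern)  # if low_light dominates, that suggests gathering or
                # augmenting low-light training data
```

In a real project the tags would come from manually inspecting the flagged examples (lighting, occlusion, label quality), which is the labor-intensive part of error analysis.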

The higher-level point is that it takes experience and careful analysis to make good decisions on issues like this, and DLS Course 3 gives you systematic ways to approach this type of problem.