Every ETL process is about digesting data, and somewhere along the way you will be asked to digest bad data.
So how would you write a system that takes, say, a point and tries to load it into a polygon layer?
Sure, you can write code to digest it anyway. If it is a point, well, then buffer it by 5 meters! Bam! You have a digestible geometry without manual intervention.
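To make the workaround concrete, here is a minimal sketch of that "buffer the point" trick in plain Python, with no GIS library assumed: the 5 m buffer is approximated as a regular polygon around the point (a real tool like a `buffer()` function in your GIS stack would do this for you).

```python
import math

def buffer_point(x, y, radius=5.0, segments=16):
    """Approximate a circular buffer around a point as a closed
    regular polygon with `segments` vertices -- a crude stand-in
    for a real GIS buffer operation."""
    ring = []
    for i in range(segments):
        angle = 2 * math.pi * i / segments
        ring.append((x + radius * math.cos(angle),
                     y + radius * math.sin(angle)))
    ring.append(ring[0])  # close the ring
    return ring

# The point now "fits" into a polygon layer without manual intervention.
polygon = buffer_point(100.0, 200.0)
```

The function names and parameters here are illustrative, not from any specific library; the point is only that the workaround is mechanically trivial to build.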
But that is not the point.
Currently you are thinking of your ETL process as a binary black box for your user ("works" vs. "does not work"), and you want the "does not work" case to go away.
This is fundamentally a fallacy.
Think of your ETL process as a series of gates instead: some things can pass, and some cannot. That crap polygon you have there almost certainly came from a geoprocessing function or a topology-snapping operation where the geometry collapsed onto itself because of a tolerance problem.
You don't want that in your GIS until it is fixed.
The gate should stop it, because, trust me, that polygon will cause more problems if it is let inside the rest of your GIS.
My point is that silent failures are, most of the time (with some exceptions), a bad approach, and even more so in ETL.
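The gate idea can be sketched in a few lines of plain Python (no GIS library assumed): a validation step that rejects degenerate rings, such as a polygon that has collapsed onto itself, instead of silently loading them. The area threshold and function names are illustrative assumptions, not from any particular toolkit.

```python
def ring_area(ring):
    """Signed area of a closed ring via the shoelace formula."""
    area = 0.0
    for (x1, y1), (x2, y2) in zip(ring, ring[1:]):
        area += x1 * y2 - x2 * y1
    return area / 2.0

def geometry_gate(ring, min_area=1e-6):
    """Gate: stop degenerate geometry before it enters the GIS.
    Returns True only if the ring looks like a real polygon."""
    if len(ring) < 4:                    # a closed ring needs >= 4 vertices
        return False
    if ring[0] != ring[-1]:              # ring must be closed
        return False
    if abs(ring_area(ring)) < min_area:  # zero-area sliver: reject loudly
        return False
    return True

good = [(0, 0), (10, 0), (10, 10), (0, 10), (0, 0)]
collapsed = [(0, 0), (10, 0), (0, 0)]  # geometry collapsed onto itself
```

In a real pipeline each gate would log or quarantine the rejected feature so someone can fix it upstream; the point is that the failure is visible, not silent.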
Best Answer
For future reference:
https://www.nuget.org/packages/NetTopologySuite.IO.ShapeFile/2.0.0
https://github.com/NetTopologySuite/NetTopologySuite.IO.ShapeFile
Example:
https://seydahatipoglu.wordpress.com/2017/01/12/how-to-read-a-shapefile-in-nettopologysuite/
Licence:
https://tldrlegal.com/license/gnu-lesser-general-public-license-v3-(lgpl-3)