[GIS] ArcPy script stops running randomly with no error returned

Tags: arcpy, idle, subprocess, watershed

I am running a batch watershed and clipping analysis on 135 points. The script worked great the first time I ran it.

Ever since that first run, however, the script has stopped at random moments. CPU usage remains high and memory remains allocated, yet nothing is happening. Sometimes it gets through 100 features, sometimes only 15. It never stalls on one specific geoprocessing function (hence all my print messages!). Also of note: unless I end the python.exe process via Task Manager, a lock remains on the feature class the script was last working with. I don't know if that is helpful.

I've tried using del and Delete_management on the temp files and the scratch workspace, but the same issue appears. I've manually deleted all files and folders after a failed run. I've even tried migrating everything to the local C:\ drive rather than external storage. Nothing works.
Any ideas?
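One cleanup pattern I haven't tried yet is releasing the cursor itself in a finally block, since the schema lock usually belongs to pointRow and the last row rather than the temp files. A minimal sketch of the idea, using a hypothetical FakeCursor stand-in because this snippet doesn't import arcpy:

```python
import gc
import weakref

class FakeCursor(object):
    """Hypothetical stand-in for arcpy.SearchCursor, only to show the pattern."""
    pass

def process_points(cursor):
    try:
        pass  # per-row Watershed / Clip / Dissolve calls would go here
    finally:
        # Drop the local reference even if a geoprocessing call raised,
        # so the shapefile's schema lock can be released.
        del cursor
        gc.collect()

cursor = FakeCursor()
ref = weakref.ref(cursor)   # lets us observe when the object is freed
process_points(cursor)
del cursor                  # last reference gone; lock would be released here
```

In the real script the same shape applies: wrap the loop in try/finally, and del both row and pointRow in the finally block.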

Here is my code (with sensitive info #####-ed out). It may seem a little redundant at the beginning, but I was trying to head off any potential errors:

import os
from subprocess import Popen
import arcpy
arcpy.CheckOutExtension("spatial")
import arcpy.sa
arcpy.env.workspace = "C:\\Users\\####\\Documents\\ArcGIS\\Watersheds\\outputs"
arcpy.env.scratchWorkspace = "C:\\Users\\####\\Documents\\ArcGIS\\Watersheds\\scratch"
arcpy.env.overwriteOutput = True
arcpy.env.outputCoordinateSystem = arcpy.SpatialReference(102685)
OutputFolder = "C:\\Users\\####\\Documents\\ArcGIS\\Watersheds\\outputs\\"
ScratchFolder = "C:\\Users\\####\\Documents\\ArcGIS\\Watersheds\\scratch\\"
ParcelsFinal = "C:\\Users\\####\\Documents\\ArcGIS\\Watersheds\\inputs\\ParcelsFINAL.shp"
Imperv_Merged = "C:\\Users\\####\\Documents\\ArcGIS\\Watersheds\\inputs\\Imperv_Merged.shp"
flowDir = "C:\\Users\\####\\Documents\\ArcGIS\\Watersheds\\inputs\\dir"
pointInput = "C:\\Users\\####\\Documents\\ArcGIS\\Watersheds\\inputs\\pourPoints.shp"
c = int(arcpy.GetCount_management(pointInput).getOutput(0))
pointRow = arcpy.SearchCursor(pointInput)
fieldList = arcpy.ListFields(pointInput)

try:
    for row in pointRow:
        feat = row.Shape
        outFeat = OutputFolder + row.Name.replace("-","_")
        outScratch = ScratchFolder + row.Name.replace("-","_")
        print "Starting watershed analysis for " + row.Name + "."
        print ""
        outWater_r = arcpy.sa.Watershed(flowDir,feat)
        print "Watershed raster for " + row.Name + " created successfully."
        Watershed = arcpy.RasterToPolygon_conversion(outWater_r, outFeat + "_watershed.shp", "SIMPLIFY")
        print "Watershed polygon for " + row.Name + " created successfully."
        tempParcelClip = arcpy.Clip_analysis(ParcelsFinal, Watershed, outScratch + "tempClip1.shp")
        print "Temp parcel clip complete"
        ParcelERA = arcpy.Dissolve_management(tempParcelClip, outFeat + "_parcels.shp", "SWM_ERA", "", "MULTI_PART", "DISSOLVE_LINES")
        print "Parcels clip complete"
        tempImpervClip = arcpy.Clip_analysis(Imperv_Merged, Watershed, outScratch + "tempClip2.shp")
        print "Temp impervious clip complete"
        tempIntersect = arcpy.Intersect_analysis([tempImpervClip,ParcelERA], outScratch + "tempIntersect.shp", "ALL")
        print "Temp intersect complete"
        ImpervERA = arcpy.Dissolve_management(tempIntersect, outFeat + "_impervious.shp", "SWM_ERA", "", "MULTI_PART", "DISSOLVE_LINES")
        print "Impervious clip complete"
        print "Clipping analysis for " + row.Name + " complete." 
        arcpy.AddField_management(Watershed, "W_ACRES", "DOUBLE", "", "", "")
        arcpy.CalculateField_management(Watershed, "W_ACRES", "!" + arcpy.Describe(Watershed).shapefieldname + ".AREA@ACRES!", "PYTHON", "")
        arcpy.AddField_management(ParcelERA, "P_ACRES", "DOUBLE", "", "", "")
        arcpy.CalculateField_management(ParcelERA, "P_ACRES", "!" + arcpy.Describe(ParcelERA).shapefieldname + ".AREA@ACRES!", "PYTHON", "")
        arcpy.AddField_management(ImpervERA, "I_ACRES", "DOUBLE", "", "", "")
        arcpy.CalculateField_management(ImpervERA, "I_ACRES", "!" + arcpy.Describe(ImpervERA).shapefieldname + ".AREA@ACRES!", "PYTHON", "")
        arcpy.AddField_management(Watershed, "VISTA_NUM", "TEXT", "", "", 10)
        arcpy.CalculateField_management(Watershed, "VISTA_NUM", '"' + row.Name.replace("-","_") + '"', "PYTHON", "")
        arcpy.AddField_management(ParcelERA, "VISTA_NUM", "TEXT", "", "", 10)
        arcpy.CalculateField_management(ParcelERA, "VISTA_NUM", '"' + row.Name.replace("-","_") + '"', "PYTHON", "")
        arcpy.AddField_management(ImpervERA, "VISTA_NUM", "TEXT", "", "", 10)
        arcpy.CalculateField_management(ImpervERA, "VISTA_NUM", '"' + row.Name.replace("-","_") + '"', "PYTHON", "")
        print "Calculations complete"
        c -= 1
        print "Watershed analysis for " + row.Name + " complete."
        print "There are " + str(c) + " points remaining."
        print ""
        print ""
except:
    print arcpy.GetMessages()

p = Popen(r"E:\####\GIS Data\Code\Scripts\upper.bat", cwd=r"E:\####\GIS Data\Code\Scripts")
stdout, stderr = p.communicate()
print "Uppercase conversion complete"

try:
    flist = arcpy.ListFeatureClasses()
    MergeSHP = arcpy.Merge_management(flist, "E:\\####\\####\\Modeling\\Watersheds\\MassWatersheds\\MergeWatersheds.shp")
    arcpy.AddField_management(MergeSHP, "CALC_ACRES", "DOUBLE", "", "", "")
except:
    print arcpy.GetMessages() 
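One side note on the Popen call above: because stdout and stderr are not redirected with subprocess.PIPE, communicate() returns (None, None), so nothing is actually captured. A minimal sketch of capturing output, using a small Python command as a stand-in for the upper.bat call:

```python
import subprocess
import sys

# Stand-in for the batch-file call; a real run would pass the .bat path
# as a raw string so the backslashes survive.
proc = subprocess.Popen(
    [sys.executable, "-c", "print('ok')"],
    stdout=subprocess.PIPE,   # without these, communicate() gives (None, None)
    stderr=subprocess.PIPE,
)
out, err = proc.communicate()
# out now holds the child's stdout (bytes on Python 3), err its stderr
```

communicate() also waits for the child to finish, which matters here: without it (or proc.wait()), the merge step below could start before upper.bat has renamed anything.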

Best Answer

The symptoms you describe are, to me, typical of running a script/tool with insufficient RAM for the size of the input datasets, combined with whatever else is running on your PC/laptop.

I recommend:

  • Close everything and reboot your machine
  • Run your script/tool without anything else running
  • (if you wish, watch your memory usage in Task Manager; I suspect it will be creeping up)

If it still does not complete, then repeat the procedure using a PC/laptop with more RAM.

I had a lot of compute-intensive models/scripts that worked great on all but my largest input datasets on a machine with 4 GB of RAM, but on the largest datasets they threw random hangs and errors. As soon as I ran them on a machine with 12 GB of RAM they ran fine.
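If a bigger machine isn't an option, another common workaround for this kind of memory creep is to split the 135 points into batches and run each batch in a fresh python.exe, so any leaked memory is returned to the OS between batches. A sketch of just the batching helper (the batch size of 25 is an arbitrary choice):

```python
def chunks(seq, size):
    # Yield successive slices of seq; each slice would be handed to a
    # fresh subprocess running the watershed script on only those points.
    for i in range(0, len(seq), size):
        yield seq[i:i + size]

point_ids = list(range(135))          # stand-in for the 135 pour points
batches = list(chunks(point_ids, 25)) # five batches of 25, one of 10
```

Each batch could then be launched with subprocess, passing the ID range as arguments to the script.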
