[GIS] Maximizing the use of CPU

arcpy, geoprocessing, performance

My script is intersecting lines with polygons. It's a long process, since there are more than 3000 lines and more than 500000 polygons. I executed the following from PyScripter:

# Import
import arcpy
import time

# Set environment
arcpy.env.workspace = r"E:\DensityMaps\DensityMapsTest1.gdb"
arcpy.env.overwriteOutput = True

# Set timer
from datetime import datetime
startTime = datetime.now()

# Set local variables
inFeatures = [r"E:\DensityMaps\DensityMapsTest.gdb\Grid1km_Clip", "JanuaryLines2"]
outFeatures = "JanuaryLinesIntersect"
outType = "LINE"

# Make lines
arcpy.Intersect_analysis(inFeatures, outFeatures, "", "", outType)

# Print end time
print "Finished "+str(datetime.now() - startTime)

My question is: is there a way to make the CPU work at 100%? It's running at 25% the whole time. I guess the script would run faster if the processor were at 100%. Is that a wrong guess?

My machine is:

  • Windows Server 2012 R2 Standard
  • Processor: Intel Xeon CPU E5-2630 @ 2.30 GHz
  • Installed memory: 31.6 GB
  • System type: 64-bit Operating System, x64-based processor


Best Answer

Let me guess: your CPU has 4 cores, so 25% CPU usage means one core running at 100% and three cores sitting idle.

So the only solution is to make the code run in parallel (multithreading or multiprocessing), but that is no simple task.
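
A minimal sketch of one such approach (not from the answer above, and untested against your data) using the chunk-and-merge multiprocessing pattern: split the lines into OBJECTID ranges, intersect each chunk with the polygon grid in its own process, write each partial result to a separate file, then merge. The scratch folder, chunk size, and output names are placeholders, and the script should be run standalone rather than inside an IDE, since IDEs can interfere with multiprocessing.

# Multiprocessing sketch: intersect the lines in chunks, one process per chunk.
# Assumes a scratch folder exists for the partial outputs; adjust paths/sizes.
import os
import multiprocessing

import arcpy

GDB = r"E:\DensityMaps\DensityMapsTest1.gdb"
LINES = "JanuaryLines2"
POLYGONS = r"E:\DensityMaps\DensityMapsTest.gdb\Grid1km_Clip"
SCRATCH = r"E:\DensityMaps\scratch"  # hypothetical folder for partial outputs


def intersect_chunk(oid_range):
    """Intersect one OBJECTID range of the lines with the polygons."""
    arcpy.env.workspace = GDB
    arcpy.env.overwriteOutput = True
    low, high = oid_range
    oid_field = arcpy.Describe(LINES).OIDFieldName
    where = "{0} >= {1} AND {0} < {2}".format(oid_field, low, high)
    layer = "lines_{0}_{1}".format(low, high)
    arcpy.MakeFeatureLayer_management(LINES, layer, where)
    # Each worker writes its own shapefile so the processes never write to the
    # same geodatabase at the same time (which would cause lock errors).
    out_fc = os.path.join(SCRATCH, "part_{0}_{1}.shp".format(low, high))
    arcpy.Intersect_analysis([POLYGONS, layer], out_fc, "", "", "LINE")
    return out_fc


if __name__ == "__main__":
    arcpy.env.workspace = GDB
    max_oid = max(row[0] for row in arcpy.da.SearchCursor(LINES, ["OID@"]))
    step = 500  # chunk size is a guess; tune it to your data
    ranges = [(i, i + step) for i in range(0, max_oid + 1, step)]

    # One worker per core, so all cores are busy instead of just one.
    pool = multiprocessing.Pool(processes=multiprocessing.cpu_count())
    parts = pool.map(intersect_chunk, ranges)
    pool.close()
    pool.join()

    # Merge the partial results back into a single feature class.
    arcpy.Merge_management(parts, "JanuaryLinesIntersect_mp")

Whether this actually speeds things up depends on disk I/O and on how evenly the work splits across chunks, so it is worth timing a small subset first.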
