This seems like a good place to describe a simple, fast, and more than reasonably accurate way to compute slopes for a globally extensive DEM.
Principles
Recall that the slope of a surface at a point is essentially the largest ratio of "rise" to "run" encountered at all possible bearings from that point. The issue is that when a projection has scale distortion, the values of "run" will be incorrectly computed. Even worse, when the scale distortion varies with bearing--which is the case with all projections that are not conformal--how the slope varies with bearing will be incorrectly estimated, preventing accurate identification of the maximum rise:run ratio (and skewing the calculation of the aspect).
We can solve this by using a conformal projection to ensure that the scale distortion does not vary with bearing, and then correcting the slope estimates to account for the scale distortion (which varies from point to point throughout the map). The trick is to use a global conformal projection that allows a simple expression for its scale distortion.
The Mercator projection fits the bill: assuming scale is correct at the Equator, its distortion equals the secant of the latitude. That is, distances on the map appear to be multiplied by the secant. As a result, any slope calculation actually computes the ratio rise:(sec(f)*run), where f is the latitude. To correct this, we need to multiply the computed slopes by sec(f); or, equivalently, divide them by cos(f). This gives us the simple recipe:
Compute the slope (as rise:run or a percent) using a Mercator projection, then divide the result by the cosine of the latitude.
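To get a feel for the size of this correction, here is a quick standard-library check of the Mercator scale factor sec(latitude) at a few latitudes:

```python
from math import cos, radians

# Mercator scale distortion (scale true at the Equator) is sec(latitude),
# so corrected_slope = mercator_slope / cos(latitude).
for lat in (0, 30, 45, 60):
    factor = 1 / cos(radians(lat))
    print(f"latitude {lat:2d} degrees: multiply computed slopes by {factor:.3f}")
```

At 60 degrees the uncorrected Mercator slope is only half the true slope, which is why the correction matters.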
Workflow
To do this with a grid given in decimal degrees (such as an SRTM DEM), perform the following steps:
Create a latitude grid. (This is just the y-coordinate grid.)
Compute its cosine.
Project both the DEM and the cosine of the latitude using a Mercator projection in which scale is true at the Equator.
If necessary, convert the elevation units to agree with the units of the projected coordinates (usually meters).
Compute the slope of the projected DEM either as a pure slope or a percent (not as an angle).
Divide this slope by the projected cosine(latitude) grid.
If desired, reproject the slope grid to any other coordinate system for further analysis or mapping.
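The steps above can be sketched in numpy as follows. This is a minimal illustration, not a full implementation: the projection and slope computation (steps 3-5) happen in your GIS, so a placeholder slope grid stands in for that output, and the latitude and slope grids are assumed to be co-registered after projection.

```python
import numpy as np

# Step 1: a hypothetical 1-degree latitude grid covering 50S..50N.
lats = np.arange(-50, 51)                         # the y-coordinate values
lat_grid = np.broadcast_to(lats[:, None], (101, 360))

# Step 2: compute its cosine.
cos_lat = np.cos(np.radians(lat_grid))

# Steps 3-5 happen in the GIS: project the DEM and the cos(lat) grid with a
# Mercator projection (scale true at the Equator) and compute slope as a
# percent.  A uniform 10% placeholder stands in for that result here.
mercator_slope_percent = np.full_like(cos_lat, 10.0)

# Step 6: divide the Mercator slope by the projected cosine(latitude) grid.
true_slope_percent = mercator_slope_percent / cos_lat
```

At the Equator the division changes nothing; at 45 degrees it scales the slope up by sqrt(2), as expected from the secant correction.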
The errors in the slope calculations will be up to 0.3% (because this procedure uses a spherical earth model rather than an ellipsoidal one, which is flattened by 0.3%). That error is substantially smaller than other errors that go into slope calculations and so can be neglected.
Fully global calculations
The Mercator projection cannot handle either pole. For work in polar regions, consider using a polar Stereographic projection with true scale at the pole. The scale distortion equals 2 / (1 + sin(f)). Use this expression in place of sec(f) in the workflow. Specifically, instead of computing a cosine(latitude) grid, compute a grid whose values are (1 + sin(latitude))/2 (edit: use -latitude for the South Pole, as discussed in the comments). Then proceed exactly as before.
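In code, the only change from the Mercator recipe is the divisor grid. A sketch, using small hypothetical latitude grids (in degrees) for the two polar zones:

```python
import numpy as np

# Hypothetical latitude grids (degrees) for the two polar zones.
lat_north = np.array([[90.0, 80.0, 70.0]])
lat_south = np.array([[-90.0, -80.0, -70.0]])

# Polar Stereographic (scale true at the pole) distortion is 2/(1 + sin(lat)),
# so divide the computed slopes by (1 + sin(lat))/2 instead of cos(lat).
divisor_north = (1 + np.sin(np.radians(lat_north))) / 2

# For the South Pole, use -latitude:
divisor_south = (1 + np.sin(np.radians(-lat_south))) / 2
```

At the pole itself the divisor is exactly 1 (no correction), which is consistent with the projection having true scale there.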
For a complete global solution, consider breaking the terrestrial grid into three parts--one around each pole and one around the Equator--performing the slope calculation separately in each part using a suitable projection, and mosaicking the results. A reasonable place to split the globe is along the circles of latitude at 2*ArcTan(1/3), about 37 degrees north and south, because at these latitudes the Mercator and Stereographic correction factors are equal (with a common value of 5/4), and it is desirable to minimize the sizes of the corrections made. As a check on the computations, the grids should agree very closely where they overlap (tiny amounts of floating point imprecision and differences due to resampling of the projected grids ought to be the only sources of discrepancy).
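The choice of split latitude can be verified directly with a quick standard-library computation:

```python
from math import atan, cos, degrees, sin

lat = 2 * atan(1 / 3)                      # the split latitude, in radians
print(degrees(lat))                        # about 36.87 degrees

mercator_factor = 1 / cos(lat)             # sec(lat)
stereographic_factor = 2 / (1 + sin(lat))
print(mercator_factor, stereographic_factor)   # both equal 5/4
```

This works because tan(lat/2) = 1/3 gives sin(lat) = 3/5 and cos(lat) = 4/5 exactly, so both correction factors reduce to 5/4.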
References
John P. Snyder, Map Projections--A Working Manual. USGS Professional Paper 1395, 1987.
This is very similar to what our output looks like from the Path Distance tool when incorporating a DEM, vertical raster, and vertical factor specification (which is basically what you are trying to do with your resistance layer, except that it differentiates between uphill and downhill movement). It may just be what's expected given your elevation range and resistance weightings. That said, based on a quick look at your DEM and output, there appear to be a number of things that could be causing your results to look different than you wish, and that you may want to take a second look at:
1) You have a sizable chunk in the southwestern part of your area that seems to have been coded as NoData (either in the DEM or the resistance layer). In this function, GIS treats NoData pixels as essentially having infinite resistance. (This is why that island-like area has a very high distance value.)
2) If you are using Path Distance and specify a vertical raster but no vertical factors (or vice versa), or if either of those two parts is improperly specified or formatted, the function will simply skip that portion of the tool and use the rest of the algorithm to produce output, without issuing any warnings or indications that the vertical or horizontal portions of the analysis were not executed properly. Also, the program will sometimes accept an ASCII vertical or horizontal factor file in some situations but not others (for example, it may work from the GUI but not from Python), regardless of formatting. This can make the tool difficult to troubleshoot. We usually compare the distance values from runs with and without the vertical factors to see whether they differ.
3) You may be able to see more detail about what the tool is doing if you run it on your test points one at a time. (Right now you can only view the shorter of the two distances at each pixel, since the function only records the distance from each pixel back to one of the two points in the input.)
4) Without large differences in altitude across a study area and/or a wide range of weightings for the VRMA factors, the output from an analysis that includes the cost of moving up and down a hill often just doesn't look much different from a Euclidean distance analysis. However, the numbers you get will be slightly different, and in some cases mapped least-cost paths will take slightly different routes.
5) Technically I think you're supposed to use a z-score raster instead of a DEM as input for the vertical raster, but both are used frequently on the forums and, at least for our data, the differences in output are minimal.
ESRI's documentation on this is a little scattered, but this explanation of the vertical factors is pretty good: http://webhelp.esri.com/arcgisdesktop/9.3/index.cfm?TopicName=Path%20Distance:%20adding%20more%20cost%20complexity
Best Answer
I am with @whuber in that you should have a very good reason to do this, and not just because you feel like you need an attribute table. Most operations can be accomplished on a floating point raster. With a straight conversion to an integer raster the data are truncated or rounded, and you can introduce serious issues such as contour bias.
That said, you can get an attribute table and keep precision by multiplying by a constant (e.g., 100) and then coercing to an integer raster.
In ArcGIS 10.2 raster calculator this can be done in one fell swoop.
Because you used a known constant, you can coerce back to a floating point raster by reversing the process.
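The scale-and-coerce round trip looks like this. A numpy sketch: the constant 100 preserves two decimal places, and a rounding step is used here before the integer cast (the raster calculator's Int() truncates instead, which introduces a small downward bias).

```python
import numpy as np

# A small hypothetical floating point raster of slope values.
slope = np.array([[0.1234, 2.5678],
                  [45.0,   0.0]])

# Multiply by a constant to keep two decimal places, then coerce to integer
# (the same idea as Int("slope" * 100) in the raster calculator).
as_int = np.round(slope * 100).astype(np.int32)

# Reverse the process to recover approximate floating point values.
recovered = as_int / 100.0
```

Any precision beyond two decimal places is lost in the round trip, so pick the constant to match the precision your analysis actually needs.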