ArcPy – Replacing Non-English Characters in Attribute Tables

arcgis-10.1, arcpy, field-calculator, unicodeencodeerror

I have a few shapefiles where some of the attributes contain the non-English characters ÅÄÖ. Since some queries don't work with these characters (specifically ChangeDetector), I tried to replace them in advance with a simple script and write the new strings to another field.

However, while the character replacement itself works fine, updating the field with arcpy.UpdateCursor does not.

What is an appropriate way of solving this?

I have also tried to do this via the Field Calculator, pasting the code into the code block, with the same error.
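For reference, the Field Calculator attempt was essentially the code() function shown below pasted into the Pre-Logic Script Code block (Python parser), with an expression along the lines of:

Typkod_U =
    code(!Typkod!)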

Error message:
Runtime error
Traceback (most recent call last):
File "", line 1, in
File "c:/gis/python/teststring.py", line 28, in
val = code(str(prow.Typkod))
UnicodeEncodeError: 'ascii' codec can't encode character u'\xc4' in position 3: ordinal not in range(128)

Code:

# -*- coding: cp1252 -*-
import arcpy

def code(infield):
    data = ''
    for i in infield:
##        print i
        if i == 'Ä':
            data = data + 'AE'
        elif i == 'ä':
            data = data + 'ae'
        elif i == 'Å':
            data = data + 'AA'
        elif i == 'å':
            data = data + 'aa'
        elif i == 'Ö':
            data = data + 'OE'
        elif i == 'ö':
            data = data + 'oe'
        else:
            data = data + i
    return data


shp = r'O:\XXX\250000\DB\ArcView\shape.shp'

prows = arcpy.UpdateCursor(shp)

for prow in prows:
    val = code(unicode(str(prow.Typkod), "utf-8"))
    prow.Typkod_U = val
    print val
    prows.updateRow(prow)

The values of Typkod are of the form:
[D, D, S, DDRÄ, TRÄ] etc.

I use ArcMap Basic (10.1) on Windows 7.


New Error message:
Runtime error
Traceback (most recent call last):
File "", line 1, in
File "c:/gis/python/teststring.py", line 29, in
val = code(unicode(str(row.Typkod), "utf-8"))
UnicodeEncodeError: 'ascii' codec can't encode character u'\xc4' in position 3: ordinal not in range(128)

>>> val
'DDRÄ'
>>> type(val)
<type 'str'>


It appears the output from the function is wrong somehow. When ÅÄÖ are involved it returns data = u'DDR\xc4' and not, as intended, data = 'DDRAE'. Any suggestions on what might cause this?
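A minimal check in the interactive window (assuming Python 2, where string literals in a cp1252-encoded script are byte strings while the cursor returns unicode) suggests why the function passes the character through unchanged: the comparisons in code() never match, so only the else branch runs.

>>> u'\xc4' == 'Ä'    # unicode Ä compared to a cp1252 byte-string literal
False

Python 2 also emits a UnicodeWarning here, because it cannot decode the byte string as ASCII.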

Best Answer

I too quite often deal with special characters such as the Swedish ones you have (ä, ö, å), as well as others found in languages such as Portuguese and Spanish (é, í, ú, ó, etc.). For instance, I have data where the name of a city is written in plain Latin with all the accents removed, so "Göteborg" becomes "Goteborg" and "Åre" is "Are". In order to perform the joins and match the data, I have to replace the accented characters with their plain Latin equivalents.

I used to do this the way you've shown in your own answer, but that logic soon became rather cumbersome to maintain. Now I use the unicodedata module, which ships with the Python installation, together with arcpy for iterating over the features.

import unicodedata
import arcpy
import os

def strip_accents(s):
    # Decompose each character (NFD) and drop the combining marks (category 'Mn')
    return ''.join(c for c in unicodedata.normalize('NFD', s)
                   if unicodedata.category(c) != 'Mn')

arcpy.env.workspace = r"C:\TempData_processed.gdb"
workspace = arcpy.env.workspace

in_fc = os.path.join(workspace, "FC")
fields = ["Adm_name", "Adm_Latin"]
with arcpy.da.UpdateCursor(in_fc, fields) as upd_cursor:
    for row in upd_cursor:
        # Write the accent-free version of Adm_name into Adm_Latin
        row[1] = strip_accents(u"{0}".format(row[0]))
        upd_cursor.updateRow(row)
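A quick check in the Python window shows what the function returns for the city names mentioned above:

>>> strip_accents(u"Göteborg")
u'Goteborg'
>>> strip_accents(u"Åre")
u'Are'

Note that this yields plain A/O rather than the AA/AE/OE transliterations from the question; if you need that exact mapping, you can still run a small replace step on the stripped string afterwards.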

See What is the best way to remove accents in a Python unicode string? for more information about using the unicodedata module.
