1.1. Problem
You need to deal with data that doesn't fit in the ASCII character set.
1.2. Solution
Unicode strings can be encoded into plain strings in a variety of ways, according to whichever encoding you choose:
# Convert a Unicode string to a plain Python string: "encode"
unicodestring = u"Hello world"
utf8string = unicodestring.encode("utf-8")
asciistring = unicodestring.encode("ascii")
isostring = unicodestring.encode("ISO-8859-1")
utf16string = unicodestring.encode("utf-16")

# Convert a plain Python string to a Unicode string: "decode"
plainstring1 = unicode(utf8string, "utf-8")
plainstring2 = unicode(asciistring, "ascii")
plainstring3 = unicode(isostring, "ISO-8859-1")
plainstring4 = unicode(utf16string, "utf-16")

assert plainstring1 == plainstring2 == plainstring3 == plainstring4
1.3. Discussion
If you find yourself dealing with text that contains non-ASCII characters, you have to learn about Unicode: what it is, how it works, and how Python uses it.
Unicode is a big topic. Luckily, you don't need to know everything about Unicode to be able to solve real-world problems with it: a few basic bits of knowledge are enough. First, you must understand the difference between bytes and characters. In older, ASCII-centric languages and environments, bytes and characters are treated as the same thing. Since a byte can hold up to 256 values, these environments are limited to 256 characters. Unicode, on the other hand, has tens of thousands of characters. That means that each Unicode character takes more than one byte, so you need to make the distinction between characters and bytes.
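The distinction is easy to see in the interpreter. Here is a minimal sketch in the Python 2 style used throughout this recipe; the two-character sample string is arbitrary:

# One Unicode character is not one byte once you encode it.
u = u"\u4e2d\u6587"             # two CJK characters
print len(u)                    # 2  (characters)
print len(u.encode("utf-8"))    # 6  (each character needs 3 bytes in UTF-8)
print len(u.encode("utf-16"))   # 6  (2-byte byte-order mark plus 2 bytes per character)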
Standard Python strings are really byte strings, and a Python character is really a byte. Other terms for the standard Python type are "8-bit string" and "plain string." In this recipe we will call them byte strings, to remind you of their byte-orientedness.
Conversely, a Python Unicode character is an abstract object big enough to hold the character, analogous to Python's long integers. You don't have to worry about the internal representation; the representation of Unicode characters becomes an issue only when you are trying to send them to some byte-oriented function, such as the write method for files or the send method for network sockets. At that point, you must choose how to represent the characters as bytes. Converting from Unicode to a byte string is called encoding the string. Similarly, when you load Unicode strings from a file, socket, or other byte-oriented object, you need to decode the strings from bytes to characters.
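For instance, the standard codecs module can wrap a file object so that the encode step happens automatically on write and the decode step on read. A minimal sketch, with the file name demo.txt chosen only for illustration:

import codecs

# Writing: the Unicode string is encoded to UTF-8 bytes on its way to disk.
out = codecs.open("demo.txt", "w", encoding="utf-8")
out.write(u"Hello world")
out.close()

# Reading: the bytes coming back from disk are decoded into a Unicode string.
inp = codecs.open("demo.txt", "r", encoding="utf-8")
text = inp.read()
inp.close()
assert isinstance(text, unicode)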
There are many ways of converting Unicode objects to byte strings, each of which is called an encoding. For a variety of historical, political, and technical reasons, there is no one "right" encoding. Every encoding has a case-insensitive name, and that name is passed to the decode method as a parameter. Here are a few you should know about:
The UTF-8 encoding can handle any Unicode character. It is also backward compatible with ASCII, so a pure ASCII file can also be considered a UTF-8 file, and a UTF-8 file that happens to use only ASCII characters is identical to an ASCII file with the same characters. This property makes UTF-8 very backward-compatible, especially with older Unix tools. UTF-8 is far and away the dominant encoding on Unix. Its primary weakness is that it is fairly inefficient for Eastern texts.
The UTF-16 encoding is favored by Microsoft operating systems and the Java environment. It is less efficient for Western languages but more efficient for Eastern ones (the byte-count sketch after this list illustrates the difference). A variant of UTF-16 is sometimes known as UCS-2.
The ISO-8859 series of encodings are 256-character ASCII supersets. They cannot support all of the Unicode characters; they can support only some particular language or family of languages. ISO-8859-1, also known as Latin-1, covers most Western European and African languages, but not Arabic. ISO-8859-2, also known as Latin-2, covers many Eastern European languages such as Hungarian and Polish.
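These trade-offs are easy to verify by comparing the byte lengths of the same text under different encodings. A small sketch follows; the sample strings are arbitrary, and the UTF-16 lengths include a 2-byte byte-order mark:

western = u"Hello, world"                # 12 ASCII characters
eastern = u"\u4f60\u597d\u4e16\u754c"    # 4 CJK characters

print len(western.encode("utf-8")), len(western.encode("utf-16"))   # 12 26
print len(eastern.encode("utf-8")), len(eastern.encode("utf-16"))   # 12 10

# Encoding names are case-insensitive: both calls produce identical bytes.
assert western.encode("UTF-8") == western.encode("utf-8")

# ISO-8859-1 covers only its own 256-character repertoire; anything outside
# it raises UnicodeEncodeError.
try:
    eastern.encode("ISO-8859-1")
except UnicodeEncodeError:
    print "not representable in Latin-1"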
If you want to be able to encode all Unicode characters, you probably want to use UTF-8. You will probably need to deal with the other encodings only when you are handed data in those encodings created by some other application.
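When data in another encoding does reach you, the usual pattern is to decode it at the boundary and keep it as a Unicode object inside your program, re-encoding as UTF-8 only when you write it back out. A sketch, with the Latin-1 input invented for illustration:

latin1_bytes = "caf\xe9"                  # the word "cafe" with an accented e, as a Latin-1 application writes it
text = latin1_bytes.decode("ISO-8859-1")  # byte string -> Unicode object
utf8_bytes = text.encode("utf-8")         # re-encode for storage or transmission
assert utf8_bytes == "caf\xc3\xa9"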