Python How To Regex
Release 3.3.2
Contents
1  Introduction
2  Simple Patterns
   2.1  Matching Characters
   2.2  Repeating Things
3  Using Regular Expressions
   3.1  Compiling Regular Expressions
   3.2  The Backslash Plague
   3.3  Performing Matches
   3.4  Module-Level Functions
   3.5  Compilation Flags
4  More Pattern Power
   4.1  More Metacharacters
   4.2  Grouping
   4.3  Non-capturing and Named Groups
   4.4  Lookahead Assertions
5  Modifying Strings
   5.1  Splitting Strings
   5.2  Search and Replace
6  Common Problems
   6.1  Use String Methods
   6.2  match() versus search()
   6.3  Greedy versus Non-Greedy
   6.4  Using re.VERBOSE
7  Feedback
Author: A.M. Kuchling <amk@amk.ca>

Abstract

This document is an introductory tutorial to using regular expressions in Python with the re module. It provides a gentler introduction than the corresponding section in the Library Reference.
1 Introduction
Regular expressions (called REs, or regexes, or regex patterns) are essentially a tiny, highly specialized programming language embedded inside Python and made available through the re module. Using this little language, you specify the rules for the set of possible strings that you want to match; this set might contain English sentences, or e-mail addresses, or TeX commands, or anything you like. You can then ask questions such as "Does this string match the pattern?", or "Is there a match for the pattern anywhere in this string?". You can also use REs to modify a string or to split it apart in various ways.

Regular expression patterns are compiled into a series of bytecodes which are then executed by a matching engine written in C. For advanced use, it may be necessary to pay careful attention to how the engine will execute a given RE, and write the RE in a certain way in order to produce bytecode that runs faster. Optimization isn't covered in this document, because it requires that you have a good understanding of the matching engine's internals.

The regular expression language is relatively small and restricted, so not all possible string processing tasks can be done using regular expressions. There are also tasks that can be done with regular expressions, but the expressions turn out to be very complicated. In these cases, you may be better off writing Python code to do the processing; while Python code will be slower than an elaborate regular expression, it will also probably be more understandable.
2 Simple Patterns
We'll start by learning about the simplest possible regular expressions. Since regular expressions are used to operate on strings, we'll begin with the most common task: matching characters.

For a detailed explanation of the computer science underlying regular expressions (deterministic and non-deterministic finite automata), you can refer to almost any textbook on writing compilers.
c; this is the same as [a-c], which uses a range to express the same set of characters. If you wanted to match only lowercase letters, your RE would be [a-z].

Metacharacters are not active inside classes. For example, [akm$] will match any of the characters 'a', 'k', 'm', or '$'; '$' is usually a metacharacter, but inside a character class it's stripped of its special nature.

You can match the characters not listed within the class by complementing the set. This is indicated by including a '^' as the first character of the class; '^' outside a character class will simply match the '^' character. For example, [^5] will match any character except '5'.

Perhaps the most important metacharacter is the backslash, \. As in Python string literals, the backslash can be followed by various characters to signal various special sequences. It's also used to escape all the metacharacters so you can still match them in patterns; for example, if you need to match a [ or \, you can precede them with a backslash to remove their special meaning: \[ or \\.

Some of the special sequences beginning with \ represent predefined sets of characters that are often useful, such as the set of digits, the set of letters, or the set of anything that isn't whitespace. The following predefined special sequences are a subset of those available. The equivalent classes are for bytes patterns. For a complete list of sequences and expanded class definitions for Unicode string patterns, see the last part of Regular Expression Syntax.

\d Matches any decimal digit; this is equivalent to the class [0-9].
\D Matches any non-digit character; this is equivalent to the class [^0-9].
\s Matches any whitespace character; this is equivalent to the class [ \t\n\r\f\v].
\S Matches any non-whitespace character; this is equivalent to the class [^ \t\n\r\f\v].
\w Matches any alphanumeric character; this is equivalent to the class [a-zA-Z0-9_].
\W Matches any non-alphanumeric character; this is equivalent to the class [^a-zA-Z0-9_].

These sequences can be included inside a character class. For example, [\s,.] is a character class that will match any whitespace character, or ',' or '.'.

The final metacharacter in this section is .. It matches anything except a newline character, and there's an alternate mode (re.DOTALL) where it will match even a newline. . is often used where you want to match "any character".
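As a small, hedged illustration of these classes and sequences (the sample strings below are ours, not the tutorial's, and the session uses the findall() function that is described later in this HOWTO):

>>> import re
>>> re.findall(r'[a-c]', 'abcdef')        # a character class with a range
['a', 'b', 'c']
>>> re.findall(r'[^5]', '3456')           # a complemented class
['3', '4', '6']
>>> re.findall(r'\d', 'room 101')         # \d matches decimal digits
['1', '0', '1']
>>> re.findall(r'\w', 'hi there!')        # \w matches alphanumerics and '_'
['h', 'i', 't', 'h', 'e', 'r', 'e']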
Step   Explanation
1      The 'a' in the RE matches.
2      The engine matches [bcd]*, going as far as it can, which is to the end of the string.
3      The engine tries to match 'b', but the current position is at the end of the string, so it fails.
4      Back up, so that [bcd]* matches one less character.
5      Try 'b' again, but the current position is at the last character, which is a 'd'.
6      Back up again, so that [bcd]* is only matching 'bc'.
6      Try 'b' again. This time the character at the current position is 'b', so it succeeds.
The end of the RE has now been reached, and it has matched 'abcb'. This demonstrates how the matching engine goes as far as it can at first, and if no match is found it will then progressively back up and retry the rest of the RE again and again. It will back up until it has tried zero matches for [bcd]*, and if that subsequently fails, the engine will conclude that the string doesn't match the RE at all.

Another repeating metacharacter is +, which matches one or more times. Pay careful attention to the difference between * and +; * matches zero or more times, so whatever's being repeated may not be present at all, while + requires at least one occurrence. To use a similar example, ca+t will match 'cat' (1 'a'), 'caaat' (3 'a's), but won't match 'ct'.

There are two more repeating qualifiers. The question mark character, ?, matches either once or zero times; you can think of it as marking something as being optional. For example, home-?brew matches either 'homebrew' or 'home-brew'.

The most complicated repeated qualifier is {m,n}, where m and n are decimal integers. This qualifier means there must be at least m repetitions, and at most n. For example, a/{1,3}b will match 'a/b', 'a//b', and 'a///b'. It won't match 'ab', which has no slashes, or 'a////b', which has four.

You can omit either m or n; in that case, a reasonable value is assumed for the missing value. Omitting m is interpreted as a lower limit of 0, while omitting n results in an upper bound of infinity (actually, the upper bound is the 2-billion limit mentioned earlier, but that might as well be infinity).

Readers of a reductionist bent may notice that the three other qualifiers can all be expressed using this notation. {0,} is the same as *, {1,} is equivalent to +, and {0,1} is the same as ?. It's better to use *, +, or ? when you can, simply because they're shorter and easier to read.
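As a quick, hedged illustration of these repeating qualifiers (the sample strings below are ours, not the tutorial's), an interpreter session might look like this:

>>> import re
>>> print(re.match('ca+t', 'ct'))         # + requires at least one 'a'
None
>>> re.match('ca+t', 'caaat').group()
'caaat'
>>> re.match('home-?brew', 'homebrew').group()
'homebrew'
>>> re.match('a/{1,3}b', 'a//b').group()  # one to three slashes are allowed
'a//b'
>>> print(re.match('a/{1,3}b', 'a////b'))
None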
The RE is passed to re.compile() as a string. REs are handled as strings because regular expressions aren't part of the core Python language, and no special syntax was created for expressing them. (There are applications that don't need REs at all, so there's no need to bloat the language specification by including them.) Instead, the re module is simply a C extension module included with Python, just like the socket or zlib modules.

Putting REs in strings keeps the Python language simpler, but has one disadvantage which is the topic of the next section.
In short, to match a literal backslash, one has to write '\\\\' as the RE string, because the regular expression must be \\, and each backslash must be expressed as \\ inside a regular Python string literal. In REs that feature backslashes repeatedly, this leads to lots of repeated backslashes and makes the resulting strings difficult to understand.

The solution is to use Python's raw string notation for regular expressions; backslashes are not handled in any special way in a string literal prefixed with 'r', so r"\n" is a two-character string containing '\' and 'n', while "\n" is a one-character string containing a newline. Regular expressions will often be written in Python code using this raw string notation.

Regular String       Raw string
"ab*"                r"ab*"
"\\\\section"        r"\\section"
"\\w+\\s+\\1"        r"\w+\s+\1"
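As a small illustration of our own (the LaTeX-style input string is hypothetical), the raw-string spelling and the doubly-escaped spelling denote the same pattern:

>>> "\\\\section" == r"\\section"
True
>>> import re
>>> p = re.compile(r"\\section")          # matches a literal backslash followed by 'section'
>>> p.match("\\section[Introduction]").group()
'\\section'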
match() and search() return None if no match can be found. If they're successful, a match object instance is returned, containing information about the match: where it starts and ends, the substring it matched, and more.

You can learn about this by interactively experimenting with the re module. If you have tkinter available, you may also want to look at Tools/demo/redemo.py, a demonstration program included with the Python distribution. It allows you to enter REs and strings, and displays whether the RE matches or fails. redemo.py can be quite useful
when trying to debug a complicated RE. Phil Schwartz's Kodos is also an interactive tool for developing and testing RE patterns.

This HOWTO uses the standard Python interpreter for its examples. First, run the Python interpreter, import the re module, and compile a RE:

>>> import re
>>> p = re.compile('[a-z]+')
>>> p
<_sre.SRE_Pattern object at 0x...>

Now, you can try matching various strings against the RE [a-z]+. An empty string shouldn't match at all, since + means 'one or more repetitions'. match() should return None in this case, which will cause the interpreter to print no output. You can explicitly print the result of match() to make this clear.

>>> p.match("")
>>> print(p.match(""))
None

Now, let's try it on a string that it should match, such as 'tempo'. In this case, match() will return a match object, so you should store the result in a variable for later use.

>>> m = p.match('tempo')
>>> m
<_sre.SRE_Match object at 0x...>

Now you can query the match object for information about the matching string. Match object instances also have several methods and attributes; the most important ones are:

Method/Attribute    Purpose
group()             Return the string matched by the RE
start()             Return the starting position of the match
end()               Return the ending position of the match
span()              Return a tuple containing the (start, end) positions of the match
Trying these methods will soon clarify their meaning:

>>> m.group()
'tempo'
>>> m.start(), m.end()
(0, 5)
>>> m.span()
(0, 5)

group() returns the substring that was matched by the RE. start() and end() return the starting and ending index of the match. span() returns both start and end indexes in a single tuple. Since the match() method only checks if the RE matches at the start of a string, start() will always be zero. However, the search() method of patterns scans through the string, so the match may not start at zero in that case.

>>> print(p.match('::: message'))
None
>>> m = p.search('::: message'); print(m)
<_sre.SRE_Match object at 0x...>
>>> m.group()
'message'
>>> m.span()
(4, 11)

In actual programs, the most common style is to store the match object in a variable, and then check if it was None. This usually looks like:
p = re.compile( ... )
m = p.match( 'string goes here' )
if m:
    print('Match found: ', m.group())
else:
    print('No match')

Two pattern methods return all of the matches for a pattern. findall() returns a list of matching strings:

>>> p = re.compile(r'\d+')
>>> p.findall('12 drummers drumming, 11 pipers piping, 10 lords a-leaping')
['12', '11', '10']

findall() has to create the entire list before it can be returned as the result. The finditer() method returns a sequence of match object instances as an iterator:

>>> iterator = p.finditer('12 drummers drumming, 11 ... 10 ...')
>>> iterator
<callable_iterator object at 0x...>
>>> for match in iterator:
...     print(match.span())
...
(0, 2)
(22, 24)
(29, 31)
I
IGNORECASE
Perform case-insensitive matching; character classes and literal strings will match letters by ignoring case. For example, [A-Z] will match lowercase letters, too, and Spam will match 'Spam', 'spam', or 'spAM'. This lowercasing doesn't take the current locale into account; it will if you also set the LOCALE flag.

L
LOCALE
Make \w, \W, \b, and \B dependent on the current locale. Locales are a feature of the C library intended to help in writing programs that take account of language differences. For example, if you're processing French text, you'd want to be able to write \w+ to match words, but \w only matches the character class [A-Za-z]; it won't match accented letters such as 'é' or 'ç'. If your system is configured properly and a French locale is selected, certain C functions will tell the program that 'é' should also be considered a letter. Setting the LOCALE flag when compiling a regular expression will cause the resulting compiled object to use these C functions for \w; this is slower, but also enables \w+ to match French words as you'd expect.

M
MULTILINE
(^ and $ haven't been explained yet; they'll be introduced in section More Metacharacters.)
Usually ^ matches only at the beginning of the string, and $ matches only at the end of the string and immediately before the newline (if any) at the end of the string. When this flag is specified, ^ matches at the beginning of the string and at the beginning of each line within the string, immediately following each newline. Similarly, the $ metacharacter matches both at the end of the string and at the end of each line (immediately preceding each newline).

S
DOTALL
Makes the . special character match any character at all, including a newline; without this flag, . will match anything except a newline.

A
ASCII
Make \w, \W, \b, \B, \s and \S perform ASCII-only matching instead of full Unicode matching. This is only meaningful for Unicode patterns, and is ignored for byte patterns.
X
VERBOSE
This flag allows you to write regular expressions that are more readable by granting you more flexibility in how you can format them. When this flag has been specified, whitespace within the RE string is ignored, except when the whitespace is in a character class or preceded by an unescaped backslash; this lets you organize and indent the RE more clearly. This flag also lets you put comments within a RE that will be ignored by the engine; comments are marked by a '#' that's neither in a character class nor preceded by an unescaped backslash.

For example, here's a RE that uses re.VERBOSE; see how much easier it is to read?

charref = re.compile(r"""
 &[#]                # Start of a numeric entity reference
 (
     0[0-7]+         # Octal form
   | [0-9]+          # Decimal form
   | x[0-9a-fA-F]+   # Hexadecimal form
 )
 ;                   # Trailing semicolon
""", re.VERBOSE)

Without the verbose setting, the RE would look like this:

charref = re.compile("&#(0[0-7]+"
                     "|[0-9]+"
                     "|x[0-9a-fA-F]+);")

In the above example, Python's automatic concatenation of string literals has been used to break up the RE into smaller pieces, but it's still more difficult to understand than the version using re.VERBOSE.
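Flags can be combined by OR-ing them together with the | operator. A brief, hedged sketch of our own (the pattern and test string are illustrative only):

>>> import re
>>> p = re.compile(r'''
...     spam \s+ eggs    # two words separated by whitespace
...     ''', re.IGNORECASE | re.VERBOSE)
>>> p.search('Order SPAM   Eggs now').group()
'SPAM   Eggs'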
>>> print(re.search('^From', 'From Here to Eternity'))
<_sre.SRE_Match object at 0x...>
>>> print(re.search('^From', 'Reciting From Memory'))
None

$
Matches at the end of a line, which is defined as either the end of the string, or any location followed by a newline character.

>>> print(re.search('}$', '{block}'))
<_sre.SRE_Match object at 0x...>
>>> print(re.search('}$', '{block} '))
None
>>> print(re.search('}$', '{block}\n'))
<_sre.SRE_Match object at 0x...>
To match a literal '$', use \$ or enclose it inside a character class, as in [$].

\A
Matches only at the start of the string. When not in MULTILINE mode, \A and ^ are effectively the same. In MULTILINE mode, they're different: \A still matches only at the beginning of the string, but ^ may match at any location inside the string that follows a newline character.

\Z
Matches only at the end of the string.

\b
Word boundary. This is a zero-width assertion that matches only at the beginning or end of a word. A word is defined as a sequence of alphanumeric characters, so the end of a word is indicated by whitespace or a non-alphanumeric character.

The following example matches 'class' only when it's a complete word; it won't match when it's contained inside another word.

>>> p = re.compile(r'\bclass\b')
>>> print(p.search('no class at all'))
<_sre.SRE_Match object at 0x...>
>>> print(p.search('the declassified algorithm'))
None
>>> print(p.search('one subclass is'))
None

There are two subtleties you should remember when using this special sequence. First, this is the worst collision between Python's string literals and regular expression sequences. In Python's string literals, \b is the backspace character, ASCII value 8. If you're not using raw strings, then Python will convert the \b to a backspace, and your RE won't match as you expect it to. The following example looks the same as our previous RE, but omits the 'r' in front of the RE string.

>>> p = re.compile('\bclass\b')
>>> print(p.search('no class at all'))
None
>>> print(p.search('\b' + 'class' + '\b'))
<_sre.SRE_Match object at 0x...>

Second, inside a character class, where there's no use for this assertion, \b represents the backspace character, for compatibility with Python's string literals.

\B
Another zero-width assertion, this is the opposite of \b, only matching when the current position is not at a word boundary.
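Since the text above contrasts \A with ^ in MULTILINE mode without showing it in code, here is a small, hedged interpreter sketch of our own:

>>> import re
>>> text = 'first line\nsecond line'
>>> re.findall(r'^\w+', text, re.MULTILINE)   # ^ matches after each newline
['first', 'second']
>>> re.findall(r'\A\w+', text, re.MULTILINE)  # \A matches only at the start of the string
['first']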
4.2 Grouping
Frequently you need to obtain more information than just whether the RE matched or not. Regular expressions are often used to dissect strings by writing a RE divided into several subgroups which match different components of interest. For example, an RFC-822 header line is divided into a header name and a value, separated by a ':', like this:

From: author@example.com
User-Agent: Thunderbird 1.5.0.9 (X11/20061227)
MIME-Version: 1.0
To: editor@example.com

This can be handled by writing a regular expression which matches an entire header line, and has one group which matches the header name, and another group which matches the header's value.

Groups are marked by the '(', ')' metacharacters. '(' and ')' have much the same meaning as they do in mathematical expressions; they group together the expressions contained inside them, and you can repeat the contents of a group with a repeating qualifier, such as *, +, ?, or {m,n}. For example, (ab)* will match zero or more repetitions of ab.

>>> p = re.compile('(ab)*')
>>> print(p.match('ababababab').span())
(0, 10)

Groups indicated with '(', ')' also capture the starting and ending index of the text that they match; this can be retrieved by passing an argument to group(), start(), end(), and span(). Groups are numbered starting with 0. Group 0 is always present; it's the whole RE, so match object methods all have group 0 as their default argument. Later we'll see how to express groups that don't capture the span of text that they match.

>>> p = re.compile('(a)b')
>>> m = p.match('ab')
>>> m.group()
'ab'
>>> m.group(0)
'ab'

Subgroups are numbered from left to right, from 1 upward. Groups can be nested; to determine the number, just count the opening parenthesis characters, going from left to right.

>>> p = re.compile('(a(b)c)d')
>>> m = p.match('abcd')
>>> m.group(0)
'abcd'
>>> m.group(1)
'abc'
>>> m.group(2)
'b'

group() can be passed multiple group numbers at a time, in which case it will return a tuple containing the corresponding values for those groups.

>>> m.group(2,1,2)
('b', 'abc', 'b')

The groups() method returns a tuple containing the strings for all the subgroups, from 1 up to however many there are.

>>> m.groups()
('abc', 'b')
Backreferences in a pattern allow you to specify that the contents of an earlier capturing group must also be found at the current location in the string. For example, \1 will succeed if the exact contents of group 1 can be found at the current position, and fails otherwise. Remember that Python's string literals also use a backslash followed by numbers to allow including arbitrary characters in a string, so be sure to use a raw string when incorporating backreferences in a RE.

For example, the following RE detects doubled words in a string.

>>> p = re.compile(r'(\b\w+)\s+\1')
>>> p.search('Paris in the the spring').group()
'the the'

Backreferences like this aren't often useful for just searching through a string (there are few text formats which repeat data in this way), but you'll soon find out that they're very useful when performing string substitutions.
existing pattern, since you can add new groups without changing how all the other groups are numbered. It should be mentioned that there's no performance difference in searching between capturing and non-capturing groups; neither form is any faster than the other.

A more significant feature is named groups: instead of referring to them by numbers, groups can be referenced by a name. The syntax for a named group is one of the Python-specific extensions: (?P<name>...). name is, obviously, the name of the group. Named groups behave exactly like capturing groups, and additionally associate a name with a group. The match object methods that deal with capturing groups all accept either integers that refer to the group by number or strings that contain the desired group's name. Named groups are still given numbers, so you can retrieve information about a group in two ways:

>>> p = re.compile(r'(?P<word>\b\w+\b)')
>>> m = p.search( '(((( Lots of punctuation )))' )
>>> m.group('word')
'Lots'
>>> m.group(1)
'Lots'

Named groups are handy because they let you use easily-remembered names, instead of having to remember numbers. Here's an example RE from the imaplib module:

InternalDate = re.compile(r'INTERNALDATE "'
        r'(?P<day>[ 123][0-9])-(?P<mon>[A-Z][a-z][a-z])-'
        r'(?P<year>[0-9][0-9][0-9][0-9])'
        r' (?P<hour>[0-9][0-9]):(?P<min>[0-9][0-9]):(?P<sec>[0-9][0-9])'
        r' (?P<zonen>[-+])(?P<zoneh>[0-9][0-9])(?P<zonem>[0-9][0-9])'
        r'"')

It's obviously much easier to retrieve m.group('zonem'), instead of having to remember to retrieve group 9.

The syntax for backreferences in an expression such as (...)\1 refers to the number of the group. There's naturally a variant that uses the group name instead of the number. This is another Python extension: (?P=name) indicates that the contents of the group called name should again be matched at the current point. The regular expression for finding doubled words, (\b\w+)\s+\1, can also be written as (?P<word>\b\w+)\s+(?P=word):

>>> p = re.compile(r'(?P<word>\b\w+)\s+(?P=word)')
>>> p.search('Paris in the the spring').group()
'the the'
Notice that the '.' needs to be treated specially because it's a metacharacter; I've put it inside a character class. Also notice the trailing $; this is added to ensure that all the rest of the string must be included in the extension. This regular expression matches 'foo.bar' and 'autoexec.bat' and 'sendmail.cf' and 'printers.conf'.

Now, consider complicating the problem a bit; what if you want to match filenames where the extension is not 'bat'? Some incorrect attempts:

.*[.][^b].*$

The first attempt above tries to exclude 'bat' by requiring that the first character of the extension is not a 'b'. This is wrong, because the pattern also doesn't match 'foo.bar'.

.*[.]([^b]..|.[^a].|..[^t])$

The expression gets messier when you try to patch up the first solution by requiring one of the following cases to match: the first character of the extension isn't 'b'; the second character isn't 'a'; or the third character isn't 't'. This accepts 'foo.bar' and rejects 'autoexec.bat', but it requires a three-letter extension and won't accept a filename with a two-letter extension such as 'sendmail.cf'. We'll complicate the pattern again in an effort to fix it.

.*[.]([^b].?.?|.[^a]?.?|..?[^t]?)$

In the third attempt, the second and third letters are all made optional in order to allow matching extensions shorter than three characters, such as 'sendmail.cf'.

The pattern's getting really complicated now, which makes it hard to read and understand. Worse, if the problem changes and you want to exclude both 'bat' and 'exe' as extensions, the pattern would get even more complicated and confusing.

A negative lookahead cuts through all this confusion:

.*[.](?!bat$).*$

The negative lookahead means: if the expression bat doesn't match at this point, try the rest of the pattern; if bat$ does match, the whole pattern will fail. The trailing $ is required to ensure that something like sample.batch, where the extension only starts with bat, will be allowed.

Excluding another filename extension is now easy; simply add it as an alternative inside the assertion. The following pattern excludes filenames that end in either bat or exe:

.*[.](?!bat$|exe$).*$
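To see the final pattern in action, here is a short, hedged interpreter sketch of our own (the filenames are made up for illustration):

>>> import re
>>> p = re.compile(r'.*[.](?!bat$|exe$).*$')
>>> print(p.match('autoexec.bat'))        # excluded extension
None
>>> print(p.match('setup.exe'))           # excluded extension
None
>>> p.match('sendmail.cf').group()
'sendmail.cf'
>>> p.match('sample.batch').group()       # extension merely starts with 'bat'
'sample.batch'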
5 Modifying Strings
Up to this point, we've simply performed searches against a static string. Regular expressions are also commonly used to modify strings in various ways, using the following pattern methods:

Method/Attribute    Purpose
split()             Split the string into a list, splitting it wherever the RE matches
sub()               Find all substrings where the RE matches, and replace them with a different string
subn()              Does the same thing as sub(), but returns the new string and the number of replacements
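As a quick, hedged sketch of our own showing all three methods side by side (the pattern and sample string are illustrative only):

>>> import re
>>> p = re.compile(r'\s*,\s*')            # a comma with optional surrounding whitespace
>>> p.split('spam ,eggs, ham')
['spam', 'eggs', 'ham']
>>> p.sub(';', 'spam ,eggs, ham')
'spam;eggs;ham'
>>> p.subn(';', 'spam ,eggs, ham')        # also reports how many replacements were made
('spam;eggs;ham', 2)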
You can limit the number of splits made, by passing a value for maxsplit. When maxsplit is nonzero, at most maxsplit splits will be made, and the remainder of the string is returned as the final element of the list. In the following example, the delimiter is any sequence of non-alphanumeric characters.

>>> p = re.compile(r'\W+')
>>> p.split('This is a test, short and sweet, of split().')
['This', 'is', 'a', 'test', 'short', 'and', 'sweet', 'of', 'split', '']
>>> p.split('This is a test, short and sweet, of split().', 3)
['This', 'is', 'a', 'test, short and sweet, of split().']

Sometimes you're not only interested in what the text between delimiters is, but also need to know what the delimiter was. If capturing parentheses are used in the RE, then their values are also returned as part of the list. Compare the following calls:

>>> p = re.compile(r'\W+')
>>> p2 = re.compile(r'(\W+)')
>>> p.split('This... is a test.')
['This', 'is', 'a', 'test', '']
>>> p2.split('This... is a test.')
['This', '... ', 'is', ' ', 'a', ' ', 'test', '.', '']

The module-level function re.split() adds the RE to be used as the first argument, but is otherwise the same.

>>> re.split('[\W]+', 'Words, words, words.')
['Words', 'words', 'words', '']
>>> re.split('([\W]+)', 'Words, words, words.')
['Words', ', ', 'words', ', ', 'words', '.', '']
>>> re.split('[\W]+', 'Words, words, words.', 1)
['Words', 'words, words.']
Empty matches are replaced only when they're not adjacent to a previous match.

>>> p = re.compile('x*')
>>> p.sub('-', 'abxd')
'-a-b-d-'

If replacement is a string, any backslash escapes in it are processed. That is, \n is converted to a single newline character, \r is converted to a carriage return, and so forth. Unknown escapes such as \j are left alone. Backreferences, such as \6, are replaced with the substring matched by the corresponding group in the RE. This lets you incorporate portions of the original text in the resulting replacement string.

This example matches the word 'section' followed by a string enclosed in '{', '}', and changes 'section' to 'subsection':

>>> p = re.compile('section{ ( [^}]* ) }', re.VERBOSE)
>>> p.sub(r'subsection{\1}','section{First} section{second}')
'subsection{First} subsection{second}'

There's also a syntax for referring to named groups as defined by the (?P<name>...) syntax. \g<name> will use the substring matched by the group named 'name', and \g<number> uses the corresponding group number. \g<2> is therefore equivalent to \2, but isn't ambiguous in a replacement string such as \g<2>0. (\20 would be interpreted as a reference to group 20, not a reference to group 2 followed by the literal character '0'.) The following substitutions are all equivalent, but use all three variations of the replacement string.

>>> p = re.compile('section{ (?P<name> [^}]* ) }', re.VERBOSE)
>>> p.sub(r'subsection{\1}','section{First}')
'subsection{First}'
>>> p.sub(r'subsection{\g<1>}','section{First}')
'subsection{First}'
>>> p.sub(r'subsection{\g<name>}','section{First}')
'subsection{First}'

replacement can also be a function, which gives you even more control. If replacement is a function, the function is called for every non-overlapping occurrence of pattern. On each call, the function is passed a match object argument for the match and can use this information to compute the desired replacement string and return it.

In the following example, the replacement function translates decimals into hexadecimal:

>>> def hexrepl(match):
...     "Return the hex string for a decimal number"
...     value = int(match.group())
...     return hex(value)
...
>>> p = re.compile(r'\d+')
>>> p.sub(hexrepl, 'Call 65490 for printing, 49152 for user code.')
'Call 0xffd2 for printing, 0xc000 for user code.'

When using the module-level re.sub() function, the pattern is passed as the first argument. The pattern may be provided as an object or as a string; if you need to specify regular expression flags, you must either use a pattern object as the first parameter, or use embedded modifiers in the pattern string, e.g. sub("(?i)b+", "x", "bbbb BBBB") returns 'x x'.
6 Common Problems
Regular expressions are a powerful tool for some applications, but in some ways their behaviour isn't intuitive and at times they don't behave the way you may expect them to. This section will point out some of the most common pitfalls.
Sometimes you'll be tempted to keep using re.match(), and just add .* to the front of your RE. Resist this temptation and use re.search() instead. The regular expression compiler does some analysis of REs in order to speed up the process of looking for a match. One such analysis figures out what the first character of a match must be; for example, a pattern starting with Crow must match starting with a 'C'. The analysis lets the engine quickly scan through the string looking for the starting character, only trying the full match if a 'C' is found. Adding .* defeats this optimization, requiring scanning to the end of the string and then backtracking to find a match for the rest of the RE. Use re.search() instead.
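A short, hedged interpreter sketch of our own contrasting the two calls (the pattern and string are illustrative):

>>> import re
>>> print(re.match('superstition', 'the superstition of the ...'))  # match() only looks at the start
None
>>> re.search('superstition', 'the superstition of the ...').span() # search() scans the whole string
(4, 16)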
>>> s = '<html><head><title>Title</title>'
>>> len(s)
32
>>> print(re.match('<.*>', s).span())
(0, 32)
>>> print(re.match('<.*>', s).group())
<html><head><title>Title</title>

The RE matches the '<' in <html>, and the .* consumes the rest of the string. There's still more left in the RE, though, and the > can't match at the end of the string, so the regular expression engine has to backtrack character by character until it finds a match for the >. The final match extends from the '<' in <html> to the '>' in </title>, which isn't what you want.

In this case, the solution is to use the non-greedy qualifiers *?, +?, ??, or {m,n}?, which match as little text as possible. In the above example, the '>' is tried immediately after the first '<' matches, and when it fails, the engine advances a character at a time, retrying the '>' at every step. This produces just the right result:

>>> print(re.match('<.*?>', s).group())
<html>

(Note that parsing HTML or XML with regular expressions is painful. Quick-and-dirty patterns will handle common cases, but HTML and XML have special cases that will break the obvious regular expression; by the time you've written a regular expression that handles all of the possible cases, the patterns will be very complicated. Use an HTML or XML parser module for such tasks.)
7 Feedback
Regular expressions are a complicated topic. Did this document help you understand them? Were there parts that were unclear, or problems you encountered that weren't covered here? If so, please send suggestions for improvements to the author.

The most complete book on regular expressions is almost certainly Jeffrey Friedl's Mastering Regular Expressions, published by O'Reilly. Unfortunately, it exclusively concentrates on Perl and Java's flavours of regular expressions, and doesn't contain any Python material at all, so it won't be useful as a reference for programming in Python. (The first edition covered Python's now-removed regex module, which won't help you much.) Consider checking it out from your library.