SQLAlchemy Documentation
Release 0.6.2
Mike Bayer
1 Overview / Installation
1.1 Overview
1.2 Tutorials
1.3 Main Documentation
1.4 Code Examples
1.5 API Reference
1.6 Installing SQLAlchemy
1.7 Installing a Database API
1.8 Checking the Installed SQLAlchemy Version
1.9 0.5 to 0.6 Migration
3 SQL Expression Language Tutorial
3.1 Version Check
3.2 Connecting
3.3 Define and Create Tables
3.4 Insert Expressions
3.5 Executing
3.6 Executing Multiple Statements
3.7 Connectionless / Implicit Execution
3.8 Selecting
3.9 Operators
3.10 Conjunctions
3.11 Using Text
3.12 Using Aliases
3.13 Using Joins
3.14 Intro to Generative Selects and Transformations
3.15 Everything Else
3.15.1 Bind Parameter Objects
3.15.2 Functions
3.15.3 Unions and Other Set Operations
3.15.4 Scalar Selects
3.15.5 Correlated Subqueries
3.15.6 Ordering, Grouping, Limiting, Offset...ing...
3.16 Inserts and Updates
3.16.1 Correlated Updates
3.17 Deletes
3.18 Further Reference
4 Mapper Configuration
4.1 Mapper Configuration
4.1.1 Customizing Column Properties
4.1.2 Deferred Column Loading
4.1.3 SQL Expressions as Mapped Attributes
4.1.4 Changing Attribute Behavior
Simple Validators
Using Descriptors
Custom Comparators
4.1.5 Composite Column Types
4.1.6 Controlling Ordering
4.1.7 Mapping Class Inheritance Hierarchies
Joined Table Inheritance
Single Table Inheritance
Concrete Table Inheritance
Using Relationships with Inheritance
4.1.8 Mapping a Class against Multiple Tables
4.1.9 Mapping a Class against Arbitrary Selects
4.1.10 Multiple Mappers for One Class
4.1.11 Multiple “Persistence” Mappers for One Class
4.1.12 Constructors and Object Initialization
4.1.13 Extending Mapper
4.2 Relationship Configuration
4.2.1 Basic Relational Patterns
One To Many
Many To One
One To One
Many To Many
Association Object
4.2.2 Adjacency List Relationships
Self-Referential Query Strategies
Configuring Eager Loading
4.2.3 Specifying Alternate Join Conditions to relationship()
Specifying Foreign Keys
Building Query-Enabled Properties
Multiple Relationships against the Same Parent/Child
4.2.4 Rows that point to themselves / Mutually Dependent Rows
4.2.5 Alternate Collection Implementations
Custom Collection Implementations
Annotating Custom Collections via Decorators
Dictionary-Based Collections
Instrumentation and Custom Types
4.2.6 Configuring Loader Strategies: Lazy Loading, Eager Loading
What Kind of Loading to Use ?
Routing Explicit Joins/Statements into Eagerly Loaded Collections
4.2.7 Working with Large Collections
Dynamic Relationship Loaders
Setting Noload
Using Passive Deletes
4.2.8 Mutable Primary Keys / Update Cascades
5.9 Contextual/Thread-local Sessions
5.9.1 Creating a Thread-local Context
5.9.2 Lifespan of a Contextual Session
5.10 Partitioning Strategies
5.10.1 Vertical Partitioning
5.10.2 Horizontal Partitioning
5.11 Extending Session
8 Examples
8.1 Adjacency List
8.2 Associations
8.3 Attribute Instrumentation
8.4 Beaker Caching
8.5 Derived Attributes
8.6 Directed Graphs
8.7 Dynamic Relations as Dictionaries
8.8 Horizontal Sharding
8.9 Inheritance Mappings
8.10 Large Collections
8.11 Nested Sets
8.12 Polymorphic Associations
8.13 PostGIS Integration
8.14 Versioned Objects
8.15 Vertical Attribute Mapping
8.16 XML Persistence
Internals
9.2.2 Collection Mapping
9.2.3 Querying
The Query Object
ORM-Specific Query Constructs
Query Options
9.2.4 Sessions
9.2.5 Interfaces
9.2.6 Utilities
9.3 sqlalchemy.dialects
9.3.1 Supported Databases
Firebird
Microsoft SQL Server
MySQL
Oracle
PostgreSQL
SQLite
Sybase
9.3.2 Unsupported Databases
Microsoft Access
Informix
MaxDB
9.4 sqlalchemy.ext
9.4.1 declarative
Synopsis
Defining Attributes
Association of Metadata and Engine
Configuring Relationships
Configuring Many-to-Many Relationships
Defining Synonyms
Table Configuration
Mapper Configuration
Inheritance Configuration
Mix-in Classes
Class Constructor
Sessions
API Reference
9.4.2 associationproxy
Simplifying Relationships
Simplifying Association Object Relationships
Building Complex Views
API
9.4.3 orderinglist
API Reference
9.4.4 serializer
9.4.5 SqlSoup
Introduction
Loading objects
Modifying objects
Joins
Relationships
Advanced Use
9.4.6 compiler
Synopsis
Dialect-specific compilation rules
Compiling sub-elements of a custom expression construct
Changing the default compilation of existing constructs
Changing Compilation of Types
Subclassing Guidelines
9.4.7 Horizontal Shard
API Documentation
Index
CHAPTER
ONE
OVERVIEW / INSTALLATION
1.1 Overview
The SQLAlchemy SQL Toolkit and Object Relational Mapper is a comprehensive set of tools for working with
databases and Python. It has several distinct areas of functionality which can be used individually or combined
together. Its major components are illustrated below. The arrows represent the general dependencies of components:
Above, the two most significant front-facing portions of SQLAlchemy are the Object Relational Mapper and the
SQL Expression Language. SQL Expressions can be used independently of the ORM. When using the ORM, the
SQL Expression language remains part of the public facing API as it is used within object-relational configurations
and queries.
1.2 Tutorials
• Object Relational Tutorial - This describes the richest feature of SQLAlchemy, its object relational mapper. If
you want to work with higher-level SQL which is constructed automatically for you, as well as management of
Python objects, proceed to this tutorial.
• SQL Expression Language Tutorial - The core of SQLAlchemy is its SQL expression language. The SQL
Expression Language is a toolkit all its own, independent of the ORM package, which can be used to construct
manipulable SQL expressions which can be programmatically constructed, modified, and executed, returning
cursor-like result sets. It’s a lot more lightweight than the ORM and is appropriate for higher scaling SQL
operations. It’s also heavily present within the ORM’s public facing API, so advanced ORM users will want to
master this language as well.
1.6 Installing SQLAlchemy

SQLAlchemy is most easily installed with setuptools' easy_install:

# easy_install SQLAlchemy

This command will download the latest version of SQLAlchemy from the Python Cheese Shop (PyPI) and install it to
your system.
Otherwise, you can install from the distribution using the setup.py script:
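The command itself is not shown in this extraction; with a standard distribution layout it is the usual:

# python setup.py install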
CHAPTER
TWO
OBJECT RELATIONAL TUTORIAL
In this tutorial we will cover a basic SQLAlchemy object-relational mapping scenario, where we store and retrieve
Python objects from a database representation. The tutorial is in doctest format, meaning each >>> line represents
something you can type at a Python command prompt, and the following text represents the expected return value.
2.2 Connecting
For this tutorial we will use an in-memory-only SQLite database. To connect we use create_engine():
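The call is not reproduced in this extraction; for an in-memory SQLite database with SQL logging enabled it looks
like:

>>> from sqlalchemy import create_engine
>>> engine = create_engine('sqlite:///:memory:', echo=True)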
The echo flag is a shortcut to setting up SQLAlchemy logging, which is accomplished via Python’s standard
logging module. With it enabled, we’ll see all the generated SQL produced. If you are working through this
tutorial and want less output generated, set it to False. This tutorial will format the SQL behind a popup window so
it doesn’t get in our way; just click the “SQL” links to see what’s being generated.
>>> from sqlalchemy import Table, Column, Integer, String, MetaData, ForeignKey
>>> metadata = MetaData()
>>> users_table = Table('users', metadata,
...     Column('id', Integer, primary_key=True),
...     Column('name', String),
...     Column('fullname', String),
...     Column('password', String)
... )
Database Meta Data covers all about how to define Table objects, as well as how to load their definition from an
existing database (known as reflection).
Next, we can issue CREATE TABLE statements derived from our table metadata, by calling create_all() and
passing it the engine instance which points to our database. This will check for the presence of a table first before
creating, so it’s safe to call multiple times:
>>> metadata.create_all(engine)
PRAGMA table_info("users")
()
CREATE TABLE users (
id INTEGER NOT NULL,
name VARCHAR,
fullname VARCHAR,
password VARCHAR,
PRIMARY KEY (id)
)
()
COMMIT
Note: Users familiar with the syntax of CREATE TABLE may notice that the VARCHAR columns were generated
without a length; on SQLite and Postgresql, this is a valid datatype, but on others, it’s not allowed. So if running this
tutorial on one of those databases, and you wish to use SQLAlchemy to issue CREATE TABLE, a “length” may be
provided to the String type as below:
Column(’name’, String(50))
The length field on String, as well as similar precision/scale fields available on Integer, Numeric, etc. are not
referenced by SQLAlchemy other than when creating tables.
Additionally, Firebird and Oracle require sequences to generate new primary key identifiers, and SQLAlchemy doesn’t
generate or assume these without being instructed. For that, you use the Sequence construct:
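The inline example is not shown here; it passes a Sequence (imported from sqlalchemy) to the Column, roughly as
follows, with the sequence name being an illustrative choice:

Column('id', Integer, Sequence('user_id_seq'), primary_key=True)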
Corresponding to our users table, let's create a rudimentary User class. It only needs to subclass Python's built-in
object class (i.e. it's a new-style class):
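The class definition is not reproduced in this extraction; a sketch whose __repr__() matches the output shown
throughout this tutorial:

>>> class User(object):
...     def __init__(self, name, fullname, password):
...         self.name = name
...         self.fullname = fullname
...         self.password = password
...
...     def __repr__(self):
...         return "<User('%s','%s', '%s')>" % (self.name, self.fullname, self.password)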
The class has an __init__() and a __repr__() method for convenience. These methods are both entirely
optional, and can be of any form. SQLAlchemy never calls __init__() directly.
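The mapper() call described next pairs the class with the table; the inline example is missing here, but it is along
these lines:

>>> from sqlalchemy.orm import mapper
>>> mapper(User, users_table)
<Mapper at 0x...; User>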
The mapper() function creates a new Mapper object and stores it away for future reference, associated with our
class. Let’s now create and inspect a User object:
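A sketch of that step (the initial password value is a placeholder):

>>> ed_user = User('ed', 'Ed Jones', 'edspassword')
>>> ed_user.name
'ed'
>>> str(ed_user.id)
'None'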
The id attribute, while not defined by our __init__() method, exists due to the id column present within
the users_table object. By default, the mapper creates class attributes for all columns present within the Table.
These class attributes exist as Python descriptors, and define instrumentation for the mapped class. The functionality
of this instrumentation is very rich and includes the ability to track modifications and automatically load new data
from the database when needed.
Since we have not yet told SQLAlchemy to persist Ed Jones within the database, its id is None. When we persist
the object later, this attribute will be populated with a newly generated value.
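The declarative alternative discussed next combines the table, the class and the mapping in one step. Its class
definition is not included in this extraction; roughly:

>>> from sqlalchemy.ext.declarative import declarative_base

>>> Base = declarative_base()
>>> class User(Base):
...     __tablename__ = 'users'
...
...     id = Column(Integer, primary_key=True)
...     name = Column(String)
...     fullname = Column(String)
...     password = Column(String)
...
...     # __init__() and __repr__() as in the plain class shown earlier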
Above, the declarative_base() function defines a new class which we name Base, from which all of our
ORM-enabled classes will derive. Note that we define Column objects with no “name” field, since it’s inferred from
the given attribute name.
The underlying Table object created by our declarative_base() version of User is accessible via the
__table__ attribute:
Full documentation for declarative can be found in the API Reference section for declarative.
Yet another “declarative” method is available for SQLAlchemy as a third party library called Elixir. This is a full-
featured configurational product which also includes many higher level mapping configurations built in. Like declara-
tive, once classes and mappings are defined, ORM usage is the same as with a classical SQLAlchemy configuration.
In the case where your application does not yet have an Engine when you define your module-level objects, just set
it up like this:
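A sketch of that setup:

>>> from sqlalchemy.orm import sessionmaker
>>> Session = sessionmaker()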
Later, when you create your engine with create_engine(), connect it to the Session using configure():
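For example:

>>> Session.configure(bind=engine)  # once the engine is available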
This custom-made Session class will create new Session objects which are bound to our database. Other transac-
tional characteristics may be defined when calling sessionmaker() as well; these are described in a later chapter.
Then, whenever you need to have a conversation with the database, you instantiate a Session:
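That is simply:

>>> session = Session()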
The above Session is associated with our SQLite engine, but it hasn’t opened any connections yet. When it’s
first used, it retrieves a connection from a pool of connections maintained by the engine, and holds onto it until we
commit all changes and/or close the session object.
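The object being persisted was created in an example not reproduced here; roughly, re-using the placeholder
password from earlier:

>>> ed_user = User('ed', 'Ed Jones', 'edspassword')
>>> session.add(ed_user)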
At this point, the instance is pending; no SQL has yet been issued. The Session will issue the SQL to persist Ed
Jones as soon as is needed, using a process known as a flush. If we query the database for Ed Jones, all pending
information will first be flushed, and the query is issued afterwards.
For example, below we create a new Query object which loads instances of User. We “filter by” the name attribute
of ed, and indicate that we’d like only the first result in the full list of rows. A User instance is returned which is
equivalent to that which we’ve added:
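A sketch of that query:

>>> our_user = session.query(User).filter_by(name='ed').first()
>>> our_user
<User('ed','Ed Jones', 'edspassword')>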
In fact, the Session has identified that the row returned is the same row as one already represented within its internal
map of objects, so we actually got back the identical instance as that which we just added:
The ORM concept at work here is known as an identity map and ensures that all operations upon a particular row
within a Session operate upon the same set of data. Once an object with a particular primary key is present in the
Session, all SQL queries on that Session will always return the same Python object for that particular primary
key; it also will raise an error if an attempt is made to place a second, already-persisted object with the same primary
key within the session.
We can add more User objects at once using add_all():
>>> session.add_all([
...     User('wendy', 'Wendy Williams', 'foobar'),
...     User('mary', 'Mary Contrary', 'xxg527'),
...     User('fred', 'Fred Flinstone', 'blah')])
Also, Ed has already decided his password isn't too secure, so let's change it:
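The change itself (the new value is confirmed by the UPDATE statement shown further below):

>>> ed_user.password = 'f8s7ccs'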
The Session is paying attention. It knows, for example, that Ed Jones has been modified:
>>> session.dirty
IdentitySet([<User(’ed’,’Ed Jones’, ’f8s7ccs’)>])
>>> session.new
IdentitySet([<User(’wendy’,’Wendy Williams’, ’foobar’)>,
<User(’mary’,’Mary Contrary’, ’xxg527’)>,
<User(’fred’,’Fred Flinstone’, ’blah’)>])
We tell the Session that we’d like to issue all remaining changes to the database and commit the transaction, which
has been in progress throughout. We do this via commit():
>>> session.commit()
UPDATE users SET password=? WHERE users.id = ?
(’f8s7ccs’, 1)
INSERT INTO users (name, fullname, password) VALUES (?, ?, ?)
(’wendy’, ’Wendy Williams’, ’foobar’)
INSERT INTO users (name, fullname, password) VALUES (?, ?, ?)
(’mary’, ’Mary Contrary’, ’xxg527’)
INSERT INTO users (name, fullname, password) VALUES (?, ?, ?)
(’fred’, ’Fred Flinstone’, ’blah’)
COMMIT
commit() flushes whatever remaining changes remain to the database, and commits the transaction. The connection
resources referenced by the session are now returned to the connection pool. Subsequent operations with this session
will occur in a new transaction, which will again re-acquire connection resources when first needed.
If we look at Ed’s id attribute, which earlier was None, it now has a value:
>>> ed_user.id
BEGIN
SELECT users.id AS users_id, users.name AS users_name, users.fullname AS users_fullname, users.password AS users_password
FROM users
WHERE users.id = ?
(1,)1
After the Session inserts new rows in the database, all newly generated identifiers and database-generated defaults
become available on the instance, either immediately or via load-on-first-access. In this case, the entire row was re-
loaded on access because a new transaction was begun after we issued commit(). SQLAlchemy by default refreshes
data from a previous transaction the first time it’s accessed within a new transaction, so that the most recent state is
available. The level of reloading is configurable as is described in the chapter on Sessions.
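The two pending changes being rolled back next were made in an example that is not reproduced here; it renames
ed_user and adds a throwaway user to the session, roughly as follows (the specific values are placeholders):

>>> ed_user.name = 'Edwardo'
>>> fake_user = User('fakeuser', 'Invalid', '12345')
>>> session.add(fake_user)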
Querying the session, we can see that they’re flushed into the current transaction:
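Along these lines, using the placeholder values from the sketch above:

>>> session.query(User).filter(User.name.in_(['Edwardo', 'fakeuser'])).all()
[<User('Edwardo','Ed Jones', 'f8s7ccs')>, <User('fakeuser','Invalid', '12345')>]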
Rolling back, we can see that ed_user‘s name is back to ed, and fake_user has been kicked out of the session:
>>> session.rollback()
ROLLBACK
>>> ed_user.name
BEGIN
SELECT users.id AS users_id, users.name AS users_name, users.fullname AS users_fullname, users.password AS users_password
FROM users
WHERE users.id = ?
(1,)u’ed’
>>> fake_user in session
False
2.10 Querying
A Query is created using the query() function on Session. This function takes a variable number of arguments,
which can be any combination of classes and class-instrumented descriptors. Below, we indicate a Query which loads
User instances. When evaluated in an iterative context, the list of User objects present is returned:
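A sketch of such an iteration; the result values follow the objects added earlier in this tutorial:

>>> for instance in session.query(User).order_by(User.id):
...     print instance.name, instance.fullname
ed Ed Jones
wendy Wendy Williams
mary Mary Contrary
fred Fred Flinstone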
The Query also accepts ORM-instrumented descriptors as arguments. Any time multiple class entities or column-
based entities are expressed as arguments to the query() function, the return result is expressed as tuples:
The tuples returned by Query are named tuples, and can be treated much like an ordinary Python object. The names
are the same as the attribute’s name for an attribute, and the class name for a class:
You can control the names using the label() construct for scalar attributes and aliased() for class constructs:
Basic operations with Query include issuing LIMIT and OFFSET, most conveniently using Python array slices and
typically in conjunction with ORDER BY:
and filtering results, which is accomplished either with filter_by(), which uses keyword arguments:
...or filter(), which uses more flexible SQL expression language constructs. These allow you to use regular
Python operators with the class-level attributes on your mapped class:
The Query object is fully generative, meaning that most method calls return a new Query object upon which further
criteria may be added. For example, to query for users named “ed” with a full name of “Ed Jones”, you can call
filter() twice, which joins criteria using AND:
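Roughly:

>>> for user in session.query(User).\
...         filter(User.name=='ed').\
...         filter(User.fullname=='Ed Jones'):
...     print user
<User('ed','Ed Jones', 'f8s7ccs')>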
• equals:
query.filter(User.name == 'ed')
• not equals:
query.filter(User.name != 'ed')
• LIKE:
query.filter(User.name.like('%ed%'))
• IN:
query.filter(User.name.in_(session.query(User.name).filter(User.name.like('%ed%'))))
• NOT IN: (see the sketch after this list)
• IS NULL:
filter(User.name == None)
• IS NOT NULL:
filter(User.name != None)
• AND: (see the sketch after this list)
• OR: (see the sketch after this list)
• match:
query.filter(User.name.match('wendy'))
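The inline examples for plain IN, NOT IN, AND and OR are not reproduced above; sketches of what they look like,
using the same mapped attributes:

# IN with a plain list of values
query.filter(User.name.in_(['ed', 'wendy', 'jack']))

# NOT IN
query.filter(~User.name.in_(['ed', 'wendy', 'jack']))

# AND: either and_(), or multiple chained filter() calls
from sqlalchemy import and_
query.filter(and_(User.name == 'ed', User.fullname == 'Ed Jones'))
query.filter(User.name == 'ed').filter(User.fullname == 'Ed Jones')

# OR
from sqlalchemy import or_
query.filter(or_(User.name == 'ed', User.name == 'wendy'))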
The all(), one(), and first() methods of Query immediately issue SQL and return a non-iterator value.
all() returns a list:
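A sketch, using the same LIKE query that first() is applied to below:

>>> query = session.query(User).filter(User.name.like('%ed')).order_by(User.id)
>>> query.all()
[<User('ed','Ed Jones', 'f8s7ccs')>, <User('fred','Fred Flinstone', 'blah')>]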
first() applies a limit of one and returns the first result as a scalar:
>>> query.first()
SELECT users.id AS users_id, users.name AS users_name, users.fullname AS users_fullname, users.password AS users_password
FROM users
WHERE users.name LIKE ? ORDER BY users.id
LIMIT 1 OFFSET 0
(’%ed’,)<User(’ed’,’Ed Jones’, ’f8s7ccs’)>
one() fully fetches all rows, and if not exactly one object identity or composite row is present in the result, raises an
error:
Literal strings can be used flexibly with Query. Most methods accept strings in addition to SQLAlchemy clause
constructs. For example, filter() and order_by():
Bind parameters can be specified with string-based SQL, using a colon. To specify the values, use the params()
method:
To use an entirely string-based statement, using from_statement(); just ensure that the columns clause of the
statement contains the column names normally used by the mapper (below illustrated using an asterisk):
You can use from_statement() to go completely “raw”, using string names to identify desired columns:
2.10.4 Counting
>>> session.query(User).filter(User.name.like(’%ed’)).count()
SELECT count(1) AS count_1
FROM users
WHERE users.name LIKE ?
(’%ed’,)2
The count() method is used to determine how many rows the SQL statement would return, and is mainly intended
to return a simple count of a single type of entity, in this case User. For more complicated sets of columns or entities
where the “thing to be counted” needs to be indicated more specifically, count() is probably not what you want.
Below, a query for individual columns does return the expected result:
...but if you look at the generated SQL, SQLAlchemy saw that we were placing individual column expressions and
decided to wrap whatever it was we were doing in a subquery, so as to be assured that it returns the “number of rows”.
This defensive behavior is not really needed here and in other cases is not what we want at all, such as if we wanted a
grouping of counts per name:
>>> session.query(User.name).group_by(User.name).count()
SELECT count(1) AS count_1
FROM (SELECT users.name AS users_name
FROM users GROUP BY users.name) AS anon_1
()4
We don’t want the number 4, we wanted some rows back. So for detailed queries where you need to count something
specific, use the func.count() function as a column expression:
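A sketch of such a query; the counts shown assume the four users added earlier, one row each:

>>> from sqlalchemy import func
>>> session.query(func.count(User.name), User.name).group_by(User.name).all()
[(1, u'ed'), (1, u'fred'), (1, u'mary'), (1, u'wendy')]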
We'll now add a second table, related to User, which stores email addresses; we will call it addresses. Using
declarative, we define this table along with its mapped class, Address:
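The class definition is not included in this extraction. A sketch using the declarative Base from earlier, consistent
with the CREATE TABLE output and the repr format shown below:

>>> from sqlalchemy import ForeignKey
>>> from sqlalchemy.orm import relationship, backref

>>> class Address(Base):
...     __tablename__ = 'addresses'
...     id = Column(Integer, primary_key=True)
...     email_address = Column(String, nullable=False)
...     user_id = Column(Integer, ForeignKey('users.id'))
...
...     user = relationship(User, backref=backref('addresses', order_by=id))
...
...     def __init__(self, email_address):
...         self.email_address = email_address
...
...     def __repr__(self):
...         return "<Address('%s')>" % self.email_address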
The above class introduces a foreign key constraint which references the users table. This defines for SQLAlchemy
the relationship between the two tables at the database level. The relationship between the User and Address
classes is defined separately using the relationship() function, which defines an attribute user to be placed
on the Address class, as well as an addresses collection to be placed on the User class. Such a relationship
is known as a bidirectional relationship. Because of the placement of the foreign key, from Address to User it
is many to one, and from User to Address it is one to many. SQLAlchemy is automatically aware of many-to-
one/one-to-many based on foreign keys.
Note: The relationship() function has historically been known as relation(), which is the name that’s
available in all versions of SQLAlchemy prior to 0.6beta2, including the 0.5 and 0.4 series. relationship()
is only available starting with SQLAlchemy 0.6beta2. relation() will remain available in SQLAlchemy for the
foreseeable future to enable cross-compatibility.
The relationship() function is extremely flexible, and could just have easily been defined on the User class:
class User(Base):
    # ....
    addresses = relationship(Address, order_by=Address.id, backref="user")
We are also free to not define a backref, and to define the relationship() only on one class and not the other. It
is also possible to define two separate relationship() constructs for either direction, which is generally safe for
many-to-one and one-to-many relationships, but not for many-to-many relationships.
When using the declarative extension, relationship() gives us the option to use strings for most arguments
that concern the target class, in the case that the target class has not yet been defined. This only works in conjunction
with declarative:
class User(Base):
    # ....
    addresses = relationship("Address", order_by="Address.id", backref="user")
When declarative is not in use, you typically define your mapper() well after the target classes and Table
objects have been defined, so string expressions are not needed.
We’ll need to create the addresses table in the database, so we will issue another CREATE from our metadata,
which will skip over tables which have already been created:
>>> metadata.create_all(engine)
PRAGMA table_info("users")
()
PRAGMA table_info("addresses")
()
CREATE TABLE addresses (
id INTEGER NOT NULL,
email_address VARCHAR NOT NULL,
user_id INTEGER,
PRIMARY KEY (id),
FOREIGN KEY(user_id) REFERENCES users (id)
)
()
COMMIT
We are free to add Address objects on our User object. In this case we just assign a full list directly:
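The example itself is not reproduced here; roughly, with the email addresses appearing in the redacted
"[email protected]" form used throughout this document:

>>> jack = User('jack', 'Jack Bean', 'gjffdd')
>>> jack.addresses
[]
>>> jack.addresses = [
...     Address(email_address='[email protected]'),
...     Address(email_address='[email protected]')]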
When using a bidirectional relationship, elements added in one direction automatically become visible in the other
direction. This is the basic behavior of the backref keyword, which maintains the relationship purely in memory,
without using any SQL:
>>> jack.addresses[1]
<Address(’[email protected]’)>
>>> jack.addresses[1].user
<User(’jack’,’Jack Bean’, ’gjffdd’)>
Let’s add and commit Jack Bean to the database. jack as well as the two Address members in his addresses
collection are both added to the session at once, using a process known as cascading:
>>> session.add(jack)
>>> session.commit()
INSERT INTO users (name, fullname, password) VALUES (?, ?, ?)
(’jack’, ’Jack Bean’, ’gjffdd’)
INSERT INTO addresses (email_address, user_id) VALUES (?, ?)
(’[email protected]’, 5)
INSERT INTO addresses (email_address, user_id) VALUES (?, ?)
(’[email protected]’, 5)
COMMIT
Querying for Jack, we get just Jack back. No SQL is yet issued for Jack’s addresses:
>>> jack.addresses
SELECT addresses.id AS addresses_id, addresses.email_address AS addresses_email_address, addresses.user_id AS addresses_user_id
FROM addresses
WHERE ? = addresses.user_id ORDER BY addresses.id
(5,)[<Address(’[email protected]’)>, <Address(’[email protected]’)>]
When we accessed the addresses collection, SQL was suddenly issued. This is an example of a lazy loading
relationship. The addresses collection is now loaded and behaves just like an ordinary list.
If you want to reduce the number of queries (dramatically, in many cases), we can apply an eager load to the query
operation, using the joinedload() function. This function is a query option that gives additional instructions
to the query on how we would like it to load, in this case we’d like to indicate that we’d like addresses to load
“eagerly”. SQLAlchemy then constructs an outer join between the users and addresses tables, and loads them
at once, populating the addresses collection on each User object if it’s not already populated:
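The eager-loading query itself is not shown in this extraction; a sketch:

>>> from sqlalchemy.orm import joinedload
>>> jack = session.query(User).\
...     options(joinedload('addresses')).\
...     filter_by(name='jack').one()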
>>> jack.addresses
[<Address(’[email protected]’)>, <Address(’[email protected]’)>]
See Configuring Loader Strategies: Lazy Loading, Eager Loading for information on joinedload() and its new
brother, subqueryload(). We’ll also see another way to “eagerly” load in the next section.
Or we can make a real JOIN construct; the most common way is to use join():
>>> session.query(User).join(Address).\
...     filter(Address.email_address=='[email protected]').all()
SELECT users.id AS users_id, users.name AS users_name, users.fullname AS users_fullname, users.password AS users_password
FROM users JOIN addresses ON users.id = addresses.user_id
WHERE addresses.email_address = ?
('[email protected]',)[<User('jack','Jack Bean', 'gjffdd')>]
join() knows how to join between User and Address because there’s only one foreign key between them. If
there were no foreign keys, or several, join() works better when one of the following forms is used:
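The list of forms is not reproduced here; in the 0.6 calling style they look roughly like this, using the entities from
this tutorial:

query.join((Address, User.id == Address.user_id))  # explicit ON condition, as a tuple
query.join(User.addresses)                          # specify the relationship from left to right
query.join((Address, User.addresses))               # same, with an explicit target
query.join('addresses')                             # same, using a string name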
Note that when join() is called with an explicit target as well as an ON clause, we use a tuple as the argument. This
is so that multiple joins can be chained together, as in:
session.query(Foo).join(
    Foo.bars,
    (Bat, Bar.bats),
    (Widget, Bat.widget_id==Widget.id)
)
The above would produce SQL something like foo JOIN bars ON <onclause> JOIN bats ON
<onclause> JOIN widgets ON <onclause>.
The general functionality of join() is also available as a standalone function join(), which is an ORM-enabled
version of the same function present in the SQL expression language. This function accepts two or three arguments
(left side, right side, optional ON clause) and can be used in conjunction with the select_from() method to set an
explicit FROM clause:
The “eager loading” capabilities of the joinedload() function and the join-construction capabilities of join()
or an equivalent can be combined together using the contains_eager() option. This is typically used for a query
that is already joining to some related entity (more often than not via many-to-one), and you’d like the related entity to
also be loaded onto the resulting objects in one step without the need for additional queries and without the “automatic”
join embedded by the joinedload() function:
Note that above the join was used both to limit the rows to just those Address objects which had a related User ob-
ject with the name “jack”. It’s safe to have the Address.user attribute populated with this user using an inner join.
However, when filtering on a join that is filtering on a particular member of a collection, using contains_eager()
to populate a related collection may populate the collection with only part of what it actually references, since the col-
lection itself is filtered.
When querying across multiple tables, if the same table needs to be referenced more than once, SQL typically requires
that the table be aliased with another name, so that it can be distinguished against other occurrences of that table. The
Query supports this most explicitly using the aliased construct. Below we join to the Address entity twice, to
locate a user who has two distinct email addresses at the same time:
The Query is suitable for generating statements which can be used as subqueries. Suppose we wanted to load User
objects along with a count of how many Address records each user has. The best way to generate SQL like this is to
get the count of addresses grouped by user ids, and JOIN to the parent. In this case we use a LEFT OUTER JOIN so
that we get rows back for those users who don’t have any addresses, e.g.:
Using the Query, we build a statement like this from the inside out. The statement accessor returns a SQL
expression representing the statement generated by a particular Query - this is an instance of a select() construct,
which is described in the SQL Expression Language Tutorial:
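A sketch of the inner statement:

>>> from sqlalchemy import func
>>> stmt = session.query(Address.user_id, func.count('*').label('address_count')).\
...     group_by(Address.user_id).subquery()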
The func keyword generates SQL functions, and the subquery() method on Query produces a SQL
expression construct representing a SELECT statement embedded within an alias (it’s actually shorthand for
query.statement.alias()).
Once we have our statement, it behaves like a Table construct, such as the one we created for users at the start of
this tutorial. The columns on the statement are accessible through an attribute called c:
Above, we just selected a result that included a column from a subquery. What if we wanted our subquery to map to
an entity ? For this we use aliased() to associate an “alias” of a mapped class to a subquery:
The EXISTS keyword in SQL is a boolean operator which returns True if the given expression contains any rows. It
may be used in many scenarios in place of joins, and is also useful for locating rows which do not have a corresponding
row in a related table.
There is an explicit EXISTS construct, which looks like this:
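The inline example is missing from this text; roughly:

>>> from sqlalchemy.sql import exists
>>> stmt = exists().where(Address.user_id == User.id)
>>> for name, in session.query(User.name).filter(stmt):
...     print name
jack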
The Query features several operators which make usage of EXISTS automatically. Above, the statement can be
expressed along the User.addresses relationship using any():
has() is the same operator as any() for many-to-one relationships (note the ~ operator here too, which means
“NOT”):
>>> session.query(Address).filter(~Address.user.has(User.name==’jack’)).all()
SELECT addresses.id AS addresses_id, addresses.email_address AS addresses_email_address,
addresses.user_id AS addresses_user_id
FROM addresses
WHERE NOT (EXISTS (SELECT 1
FROM users
WHERE users.id = addresses.user_id AND users.name = ?))
(’jack’,)[]
For reference, the common relationship operators used in filtering are:

query.filter(Address.user == someuser)
query.filter(Address.user != someuser)
query.filter(Address.user == None)
query.filter(User.addresses.contains(someaddress))
query.filter(User.addresses.any(Address.email_address == 'bar'))
query.filter(Address.user.has(name='ed'))
session.query(Address).with_parent(someuser, 'addresses')
2.14 Deleting
Let’s try to delete jack and see how that goes. We’ll mark as deleted in the session, then we’ll issue a count query
to see that no rows remain:
>>> session.delete(jack)
>>> session.query(User).filter_by(name=’jack’).count()
UPDATE addresses SET user_id=? WHERE addresses.id = ?
(None, 1)
UPDATE addresses SET user_id=? WHERE addresses.id = ?
(None, 2)
DELETE FROM users WHERE users.id = ?
(5,)
SELECT count(1) AS count_1
FROM users
WHERE users.name = ?
(’jack’,)0
>>> session.query(Address).filter(
... Address.email_address.in_([’[email protected]’, ’[email protected]’])
... ).count()
SELECT count(1) AS count_1
FROM addresses
WHERE addresses.email_address IN (?, ?)
(’[email protected]’, ’[email protected]’)2
Uh oh, they’re still there ! Analyzing the flush SQL, we can see that the user_id column of each address was set to
NULL, but the rows weren’t deleted. SQLAlchemy doesn’t assume that deletes cascade, you have to tell it to do so.
We will configure cascade options on the User.addresses relationship to change the behavior. While
SQLAlchemy allows you to add new attributes and relationships to mappings at any point in time, in this case the
existing relationship needs to be removed, so we need to tear down the mappings completely and start again. This is
not a typical operation and is here just for illustrative purposes.
Removing all ORM state is as follows:
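A sketch of those two steps:

>>> from sqlalchemy.orm import clear_mappers
>>> session.close()     # end the current session and its transaction
>>> clear_mappers()     # remove all mapper configuration from the classes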
Below, we use mapper() to reconfigure an ORM mapping for User and Address, on our existing but currently un-
mapped classes. The User.addresses relationship now has delete, delete-orphan cascade on it, which
indicates that DELETE operations will cascade to attached Address objects as well as Address objects which are
removed from their parent:
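The remapping itself is not included in this extraction. A rough sketch; reaching the addresses table via
Address.__table__ is an assumption of this sketch, not something shown in the original:

>>> from sqlalchemy.orm import mapper, relationship
>>> mapper(User, users_table, properties={
...     'addresses': relationship(Address, backref='user',
...                               cascade="all, delete, delete-orphan")
... })
<Mapper at 0x...; User>
>>> mapper(Address, Address.__table__)   # Address.__table__ is assumed here
<Mapper at 0x...; Address>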
Now when we load Jack (below using get(), which loads by primary key), removing an address from his
addresses collection will result in that Address being deleted:
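Roughly:

>>> jack = session.query(User).get(5)   # jack's primary key, per the SQL shown below
>>> del jack.addresses[1]               # flushing now cascades a DELETE for that Address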
Deleting Jack will delete both Jack and his remaining Address:
>>> session.delete(jack)
>>> session.query(User).filter_by(name=’jack’).count()
DELETE FROM addresses WHERE addresses.id = ?
(1,)
DELETE FROM users WHERE users.id = ?
(5,)
SELECT count(1) AS count_1
FROM users
WHERE users.name = ?
(’jack’,)0
>>> session.query(Address).filter(
... Address.email_address.in_([’[email protected]’, ’[email protected]’])
... ).count()
SELECT count(1) AS count_1
FROM addresses
WHERE addresses.email_address IN (?, ?)
(’[email protected]’, ’[email protected]’)0
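The table and class definitions for this many-to-many example are not included in this extraction. A sketch
consistent with the CREATE TABLE output below; the constructor arguments and __repr__() format follow the query
results shown later, and the author relationship is added in a later snippet:

>>> from sqlalchemy import Text

>>> post_keywords = Table('post_keywords', metadata,
...     Column('post_id', Integer, ForeignKey('posts.id')),
...     Column('keyword_id', Integer, ForeignKey('keywords.id'))
... )

>>> class BlogPost(Base):
...     __tablename__ = 'posts'
...     id = Column(Integer, primary_key=True)
...     user_id = Column(Integer, ForeignKey('users.id'))
...     headline = Column(String(255), nullable=False)
...     body = Column(Text)
...     # many-to-many BlogPost<->Keyword, bidirectional via the backref
...     keywords = relationship('Keyword', secondary=post_keywords, backref='posts')
...
...     def __init__(self, headline, body, author):
...         self.author = author
...         self.headline = headline
...         self.body = body
...
...     def __repr__(self):
...         return "BlogPost(%r, %r, %r)" % (self.headline, self.body, self.author)

>>> class Keyword(Base):
...     __tablename__ = 'keywords'
...     id = Column(Integer, primary_key=True)
...     keyword = Column(String(50), nullable=False, unique=True)
...
...     def __init__(self, keyword):
...         self.keyword = keyword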
Above, the many-to-many relationship is BlogPost.keywords. The defining feature of a many-to-many rela-
tionship is the secondary keyword argument which references a Table object representing the association table.
This table only contains columns which reference the two sides of the relationship; if it has any other columns, such
as its own primary key, or foreign keys to other tables, SQLAlchemy requires a different usage pattern called the
“association object”, described at Association Object.
The many-to-many relationship is also bi-directional using the backref keyword. This is the one case where usage
of backref is generally required, since if a separate posts relationship were added to the Keyword entity, both
relationships would independently add and remove rows from the post_keywords table and produce conflicts.
We would also like our BlogPost class to have an author field. We will add this as another bidirectional relation-
ship, except one issue we’ll have is that a single user might have lots of blog posts. When we access User.posts,
we’d like to be able to filter results further so as not to load the entire collection. For this we use a setting accepted by
relationship() called lazy=’dynamic’, which configures an alternate loader strategy on the attribute. To
use it on the “reverse” side of a relationship(), we use the backref() function:
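The call is not reproduced here; it attaches the relationship to the declarative class after the fact, roughly:

>>> from sqlalchemy.orm import backref
>>> BlogPost.author = relationship(User, backref=backref('posts', lazy='dynamic'))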
>>> metadata.create_all(engine)
PRAGMA table_info("users")
()
PRAGMA table_info("addresses")
()
PRAGMA table_info("posts")
()
PRAGMA table_info("keywords")
()
PRAGMA table_info("post_keywords")
()
CREATE TABLE posts (
id INTEGER NOT NULL,
user_id INTEGER,
headline VARCHAR(255) NOT NULL,
body TEXT,
PRIMARY KEY (id),
FOREIGN KEY(user_id) REFERENCES users (id)
)
()
COMMIT
CREATE TABLE keywords (
id INTEGER NOT NULL,
keyword VARCHAR(50) NOT NULL,
PRIMARY KEY (id),
UNIQUE (keyword)
)
()
COMMIT
CREATE TABLE post_keywords (
post_id INTEGER,
keyword_id INTEGER,
FOREIGN KEY(post_id) REFERENCES posts (id),
FOREIGN KEY(keyword_id) REFERENCES keywords (id)
)
()
COMMIT
Usage is not too different from what we’ve been doing. Let’s give Wendy some blog posts:
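The example is missing from this extraction; roughly, matching the INSERT values shown in the output below:

>>> wendy = session.query(User).filter_by(name='wendy').one()
>>> post = BlogPost("Wendy's Blog Post", "This is a test", wendy)
>>> session.add(post)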
We’re storing keywords uniquely in the database, but we know that we don’t have any yet, so we can just create them:
>>> post.keywords.append(Keyword('wendy'))
>>> post.keywords.append(Keyword('firstpost'))
We can now look up all blog posts with the keyword ‘firstpost’. We’ll use the any operator to locate “blog posts where
any of its keywords has the keyword string ‘firstpost”’:
>>> session.query(BlogPost).filter(BlogPost.keywords.any(keyword=’firstpost’)).all()
INSERT INTO keywords (keyword) VALUES (?)
(’wendy’,)
INSERT INTO keywords (keyword) VALUES (?)
(’firstpost’,)
INSERT INTO posts (user_id, headline, body) VALUES (?, ?, ?)
(2, "Wendy’s Blog Post", ’This is a test’)
INSERT INTO post_keywords (post_id, keyword_id) VALUES (?, ?)
((1, 1), (1, 2))
SELECT posts.id AS posts_id, posts.user_id AS posts_user_id, posts.headline AS posts_headli
FROM posts
WHERE EXISTS (SELECT 1
FROM post_keywords, keywords
WHERE posts.id = post_keywords.post_id AND keywords.id = post_keywords.keyword_id AND keywo
(’firstpost’,)[BlogPost("Wendy’s Blog Post", ’This is a test’, <User(’wendy’,’Wendy William
If we want to look up just Wendy’s posts, we can tell the query to narrow down to her as a parent:
>>> session.query(BlogPost).filter(BlogPost.author==wendy).\
... filter(BlogPost.keywords.any(keyword=’firstpost’)).all()
SELECT posts.id AS posts_id, posts.user_id AS posts_user_id, posts.headline AS posts_headli
FROM posts
WHERE ? = posts.user_id AND (EXISTS (SELECT 1
FROM post_keywords, keywords
WHERE posts.id = post_keywords.post_id AND keywords.id = post_keywords.keyword_id AND keywo
(2, ’firstpost’)[BlogPost("Wendy’s Blog Post", ’This is a test’, <User(’wendy’,’Wendy Willi
Or we can use Wendy’s own posts relationship, which is a “dynamic” relationship, to query straight from there:
>>> wendy.posts.filter(BlogPost.keywords.any(keyword=’firstpost’)).all()
SELECT posts.id AS posts_id, posts.user_id AS posts_user_id, posts.headline AS posts_headli
FROM posts
WHERE ? = posts.user_id AND (EXISTS (SELECT 1
FROM post_keywords, keywords
WHERE posts.id = post_keywords.post_id AND keywords.id = post_keywords.keyword_id AND keywo
(2, ’firstpost’)[BlogPost("Wendy’s Blog Post", ’This is a test’, <User(’wendy’,’Wendy Willi
CHAPTER
THREE
SQL EXPRESSION LANGUAGE TUTORIAL
This tutorial will cover SQLAlchemy SQL Expressions, which are Python constructs that represent SQL statements.
The tutorial is in doctest format, meaning each >>> line represents something you can type at a Python command
prompt, and the following text represents the expected return value. The tutorial has no prerequisites.
3.2 Connecting
For this tutorial we will use an in-memory-only SQLite database. This is an easy way to test things without needing
to have an actual database defined anywhere. To connect we use create_engine():
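As in the ORM tutorial, the call itself is not shown in this extraction; for an in-memory SQLite database with
logging enabled it is:

>>> from sqlalchemy import create_engine
>>> engine = create_engine('sqlite:///:memory:', echo=True)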
The echo flag is a shortcut to setting up SQLAlchemy logging, which is accomplished via Python’s standard
logging module. With it enabled, we’ll see all the generated SQL produced. If you are working through this
tutorial and want less output generated, set it to False. This tutorial will format the SQL behind a popup window so
it doesn’t get in our way; just click the “SQL” links to see what’s being generated.
We define our tables all within a catalog called MetaData, using the Table construct, which resembles regular SQL
CREATE TABLE statements. We’ll make two tables, one of which represents “users” in an application, and another
which represents zero or more “email addresses” for each row in the “users” table:
>>> from sqlalchemy import Table, Column, Integer, String, MetaData, ForeignKey
>>> metadata = MetaData()
>>> users = Table('users', metadata,
...     Column('id', Integer, primary_key=True),
...     Column('name', String),
...     Column('fullname', String),
... )
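The definition of the second table is not included in this extraction; a sketch consistent with the CREATE TABLE
output shown below:

>>> addresses = Table('addresses', metadata,
...     Column('id', Integer, primary_key=True),
...     Column('user_id', Integer, ForeignKey('users.id')),
...     Column('email_address', String, nullable=False)
... )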
All about how to define Table objects, as well as how to create them from an existing database automatically, is
described in Database Meta Data.
Next, to tell the MetaData we’d actually like to create our selection of tables for real inside the SQLite database, we
use create_all(), passing it the engine instance which points to our database. This will check for the presence
of each table first before creating, so it’s safe to call multiple times:
>>> metadata.create_all(engine)
PRAGMA table_info("users")
()
PRAGMA table_info("addresses")
()
CREATE TABLE users (
id INTEGER NOT NULL,
name VARCHAR,
fullname VARCHAR,
PRIMARY KEY (id)
)
()
COMMIT
CREATE TABLE addresses (
id INTEGER NOT NULL,
user_id INTEGER,
email_address VARCHAR NOT NULL,
PRIMARY KEY (id),
FOREIGN KEY(user_id) REFERENCES users (id)
)
()
COMMIT
Note: Users familiar with the syntax of CREATE TABLE may notice that the VARCHAR columns were generated
without a length; on SQLite and Postgresql, this is a valid datatype, but on others, it’s not allowed. So if running this
tutorial on one of those databases, and you wish to use SQLAlchemy to issue CREATE TABLE, a “length” may be
provided to the String type as below:
Column(’name’, String(50))
The length field on String, as well as similar precision/scale fields available on Integer, Numeric, etc. are not
referenced by SQLAlchemy other than when creating tables.
Additionally, Firebird and Oracle require sequences to generate new primary key identifiers, and SQLAlchemy doesn’t
generate or assume these without being instructed. For that, you use the Sequence construct:
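The Sequence example, and the creation of the Insert construct whose string form is shown next, are not reproduced
in this extraction. Sketches of both; the sequence name is an illustrative choice:

from sqlalchemy import Sequence
Column('id', Integer, Sequence('user_id_seq'), primary_key=True)

The Insert construct referenced below is built from the users table:

>>> ins = users.insert()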
To see a sample of the SQL this construct produces, use the str() function:
>>> str(ins)
'INSERT INTO users (id, name, fullname) VALUES (:id, :name, :fullname)'
Notice above that the INSERT statement names every column in the users table. This can be limited by using the
values() method, which establishes the VALUES clause of the INSERT explicitly:
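Roughly, with the same values that appear in the compiled parameters shown a little further on:

>>> ins = users.insert().values(name='jack', fullname='Jack Jones')
>>> str(ins)
'INSERT INTO users (name, fullname) VALUES (:name, :fullname)'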
Above, while the values method limited the VALUES clause to just two columns, the actual data we placed in
values didn’t get rendered into the string; instead we got named bind parameters. As it turns out, our data is stored
within our Insert construct, but it typically only comes out when the statement is actually executed; since the data
consists of literal values, SQLAlchemy automatically generates bind parameters for them. We can peek at this data
for now by looking at the compiled form of the statement:
>>> ins.compile().params
{’fullname’: ’Jack Jones’, ’name’: ’jack’}
3.5 Executing
The interesting part of an Insert is executing it. In this tutorial, we will generally focus on the most explicit method
of executing a SQL construct, and later touch upon some “shortcut” ways to do it. The engine object we created
is a repository for database connections capable of issuing SQL to the database. To acquire a connection, we use the
connect() method:
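That is simply:

>>> conn = engine.connect()
>>> conn
<sqlalchemy.engine.base.Connection object at 0x...>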
The Connection object represents an actively checked out DBAPI connection resource. Let's feed it our Insert
object and see what happens:
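A sketch of the execution and its echoed SQL:

>>> result = conn.execute(ins)
INSERT INTO users (name, fullname) VALUES (?, ?)
('jack', 'Jack Jones')
COMMIT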
So the INSERT statement was now issued to the database. Although we got positional “qmark” bind parameters
instead of “named” bind parameters in the output. How come ? Because when executed, the Connection used the
SQLite dialect to help generate the statement; when we use the str() function, the statement isn’t aware of this
dialect, and falls back onto a default which uses named parameters. We can view this manually as follows:
What about the result variable we got when we called execute() ? As the SQLAlchemy Connection object
references a DBAPI connection, the result, known as a ResultProxy object, is analogous to the DBAPI cursor
object. In the case of an INSERT, we can get important information from it, such as the primary key values which
were generated from our statement:
>>> result.inserted_primary_key
[1]
The value of 1 was automatically generated by SQLite, but only because we did not specify the id column in our
Insert statement; otherwise, our explicit value would have been used. In either case, SQLAlchemy always knows
how to get at a newly generated primary key value, even though the method of generating them is different across
different databases; each database’s Dialect knows the specific steps needed to determine the correct value (or
values; note that inserted_primary_key returns a list so that it supports composite primary keys).
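The example referred to next, which passes all three column values directly to execute(), is not shown in this
extraction; roughly, with the specific values being placeholders:

>>> ins = users.insert()
>>> conn.execute(ins, id=2, name='wendy', fullname='Wendy Williams')
INSERT INTO users (id, name, fullname) VALUES (?, ?, ?)
(2, 'wendy', 'Wendy Williams')
COMMIT
<sqlalchemy.engine.base.ResultProxy object at 0x...>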
Above, because we specified all three columns in the execute() method, the compiled Insert included all
three columns. The Insert statement is compiled at execution time based on the parameters we specified; if we
specified fewer parameters, the Insert would have fewer entries in its VALUES clause.
To issue many inserts using DBAPI’s executemany() method, we can send in a list of dictionaries each containing
a distinct set of parameters to be inserted, as we do here to add some email addresses:
>>> conn.execute(addresses.insert(), [
... {’user_id’: 1, ’email_address’ : ’[email protected]’},
... {’user_id’: 1, ’email_address’ : ’[email protected]’},
... {’user_id’: 2, ’email_address’ : ’[email protected]’},
... {’user_id’: 2, ’email_address’ : ’[email protected]’},
... ])
INSERT INTO addresses (user_id, email_address) VALUES (?, ?)
((1, ’[email protected]’), (1, ’[email protected]’), (2, ’[email protected]’), (2, ’[email protected]’))
COMMIT
<sqlalchemy.engine.base.ResultProxy object at 0x...>
Above, we again relied upon SQLite’s automatic generation of primary key identifiers for each addresses row.
When executing multiple sets of parameters, each dictionary must have the same set of keys; i.e. you can't have fewer
keys in some dictionaries than others. This is because the Insert statement is compiled against the first dictionary
in the list, and it's assumed that all subsequent argument dictionaries are compatible with that statement.
You can save even more steps than that if you connect the Engine to the MetaData object we created earlier.
When this is done, all SQL expressions which involve tables within the MetaData object will be automatically
bound to the Engine. In this case, we call it implicit execution:
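A sketch of this, binding the metadata from earlier to our engine and executing an insert with no explicit connection (the "fred" row is illustrative):
>>> metadata.bind = engine
>>> result = users.insert().execute(name='fred', fullname='Fred Flintstone')
INSERT INTO users (name, fullname) VALUES (?, ?)
('fred', 'Fred Flintstone')
COMMIT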
When the MetaData is bound, statements will also compile against the engine’s dialect. Since a lot of the examples
here assume the default dialect, we’ll detach the engine from the metadata which we just attached:
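Detaching is simply:
>>> metadata.bind = None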
Detailed examples of connectionless and implicit execution are available in the “Engines” chapter: Connectionless
Execution, Implicit Execution.
3.8 Selecting
We began with inserts just so that our test database had some data in it. The more interesting part of the data is selecting
it ! We’ll cover UPDATE and DELETE statements later. The primary construct used to generate SELECT statements
is the select() function:
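For example:
>>> from sqlalchemy.sql import select
>>> s = select([users])
>>> result = conn.execute(s)
SELECT users.id, users.name, users.fullname
FROM users
()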
Above, we issued a basic select() call, placing the users table within the COLUMNS clause of the select, and
then executing. SQLAlchemy expanded the users table into the set of each of its columns, and also generated a
FROM clause for us. The result returned is again a ResultProxy object, which acts much like a DBAPI cursor,
including methods such as fetchone() and fetchall(). The easiest way to get rows from it is to just iterate:
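For example (the rows shown assume the inserts issued earlier in this chapter):
>>> for row in result:
...     print row
(1, u'jack', u'Jack Jones')
(2, u'wendy', u'Wendy Williams')
(3, u'fred', u'Fred Flintstone')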
Above, we see that printing each row produces a simple tuple-like result. We have more options at accessing the data
in each row. One very common way is through dictionary access, using the string names of columns:
But another way, whose usefulness will become apparent later on, is to use the Column objects directly as keys:
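A sketch showing both styles of access against a freshly executed result:
>>> result = conn.execute(s)
SELECT users.id, users.name, users.fullname
FROM users
()
>>> row = result.fetchone()
>>> print "name:", row['name'], "; fullname:", row['fullname']
name: jack ; fullname: Jack Jones
>>> row = result.fetchone()
>>> print "name:", row[users.c.name], "; fullname:", row[users.c.fullname]
name: wendy ; fullname: Wendy Williams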
Result sets which have pending rows remaining should be explicitly closed before discarding. While the resources
referenced by the ResultProxy will be closed when the object is garbage collected, it’s better to make it explicit as
some database APIs are very picky about such things:
>>> result.close()
If we’d like to more carefully control the columns which are placed in the COLUMNS clause of the select, we reference
individual Column objects from our Table. These are available as named attributes off the c attribute of the Table
object:
Lets observe something interesting about the FROM clause. Whereas the generated statement contains two distinct
sections, a “SELECT columns” part and a “FROM table” part, our select() construct only has a list containing
columns. How does this work ? Let’s try putting two tables into our select() statement:
It placed both tables into the FROM clause. But also, it made a real mess. Those who are familiar with SQL joins know
that this is a Cartesian product; each row from the users table is produced against each row from the addresses
table. So to put some sanity into this statement, we need a WHERE clause. Which brings us to the second argument
of select():
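For example:
>>> s = select([users, addresses], users.c.id == addresses.c.user_id)
>>> print s
SELECT users.id, users.name, users.fullname, addresses.id, addresses.user_id, addresses.email_address
FROM users, addresses
WHERE users.id = addresses.user_id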
So that looks a lot better: we added an expression to our select() which had the effect of adding WHERE
users.id = addresses.user_id to our statement, and our results were narrowed down so that the join of
users and addresses rows made sense. But let's look at that expression. It's using just a Python equality opera-
tor between two different Column objects. It should be clear that something is up. Saying 1 == 1 produces True, and
1 == 2 produces False, not a WHERE clause. So let's see exactly what that expression is doing:
>>> users.c.id==addresses.c.user_id
<sqlalchemy.sql.expression._BinaryExpression object at 0x...>
>>> str(users.c.id==addresses.c.user_id)
’users.id = addresses.user_id’
As you can see, the == operator is producing an object that is very much like the Insert and select() objects
we’ve made so far, thanks to Python’s __eq__() builtin; you call str() on it and it produces SQL. By now, one
can see that everything we are working with is ultimately the same type of object. SQLAlchemy terms the base class
of all of these expressions as sqlalchemy.sql.ClauseElement.
3.9 Operators
Since we’ve stumbled upon SQLAlchemy’s operator paradigm, let’s go through some of its capabilities. We’ve seen
how to equate two columns to each other:
If we use a literal value (a literal meaning, not a SQLAlchemy clause object), we get a bind parameter:
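For example:
>>> print users.c.id == 7
users.id = :id_1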
The 7 literal is embedded in ClauseElement; we can use the same trick we did with the Insert object to see it:
>>> (users.c.id==7).compile().params
{u’id_1’: 7}
Most Python operators, as it turns out, produce a SQL expression here, like equals, not equals, etc.:
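For example:
>>> print users.c.id != 7
users.id != :id_1
>>> # a comparison against None produces IS NULL
>>> print users.c.name == None
users.name IS NULL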
Interestingly, the type of the Column is important ! If we use + with two string based columns (recall we put types
like Integer and String on our Column objects at the beginning), we get something different:
Where || is the string concatenation operator used on most databases. But not all of them. MySQL users, fear not:
The above illustrates the SQL that’s generated for an Engine that’s connected to a MySQL database; the || operator
now compiles as MySQL’s concat() function.
If you have come across an operator which really isn’t available, you can always use the op() method; this generates
whatever operator you need:
This function can also be used to make bitwise operators explicit. For example:
somecolumn.op(’&’)(0xff)
3.10 Conjunctions
We’d like to show off some of our operators inside of select() constructs. But we need to lump them together a little
more, so let’s first introduce some conjunctions. Conjunctions are those little words like AND and OR that put things
together. We’ll also hit upon NOT. AND, OR and NOT can work from the corresponding functions SQLAlchemy
provides (notice we also throw in a LIKE):
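For example (the email patterns are illustrative):
>>> from sqlalchemy.sql import and_, or_, not_
>>> print and_(users.c.name.like('j%'),
...            users.c.id == addresses.c.user_id,
...            or_(addresses.c.email_address.like('%@aol.com'),
...                addresses.c.email_address.like('%@msn.com')),
...            not_(users.c.id > 5))
users.name LIKE :name_1 AND users.id = addresses.user_id AND
(addresses.email_address LIKE :email_address_1 OR addresses.email_address LIKE :email_address_2)
AND users.id <= :id_1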
And you can also use the re-jiggered bitwise AND, OR and NOT operators, although because of Python operator
precedence you have to watch your parentheses:
So with all of this vocabulary, let’s select all users who have an email address at AOL or MSN, whose name starts with
a letter between “m” and “z”, and we’ll also generate a column containing their full name combined with their email
address. We will add two new constructs to this statement, between() and label(). between() produces
a BETWEEN clause, and label() is used in a column expression to produce labels using the AS keyword; it’s
recommended when selecting from expressions that otherwise would not have a name:
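A sketch of such a statement (the email patterns and the exact wrapping of the rendered SQL are illustrative):
>>> s = select([(users.c.fullname + ", " + addresses.c.email_address).label('title')],
...            and_(users.c.id == addresses.c.user_id,
...                 users.c.name.between('m', 'z'),
...                 or_(addresses.c.email_address.like('%@aol.com'),
...                     addresses.c.email_address.like('%@msn.com'))))
>>> print s
SELECT users.fullname || :fullname_1 || addresses.email_address AS title
FROM users, addresses
WHERE users.id = addresses.user_id AND users.name BETWEEN :name_1 AND :name_2
AND (addresses.email_address LIKE :email_address_1 OR addresses.email_address LIKE :email_address_2)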
Once again, SQLAlchemy figured out the FROM clause for our statement. In fact it will determine the FROM clause
based on all of its other bits; the columns clause, the where clause, and also some other elements which we haven’t
covered yet, which include ORDER BY, GROUP BY, and HAVING.
When specifying bind parameters with text(), always use the named colon format. Below, we create a text()
construct and execute it, feeding in the bind parameters to the execute() method:
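A minimal sketch (the statement and parameter values are illustrative):
from sqlalchemy.sql import text

s = text("SELECT users.fullname || ', ' || addresses.email_address AS title "
         "FROM users, addresses "
         "WHERE users.id = addresses.user_id AND users.name BETWEEN :x AND :y")
result = conn.execute(s, x='m', y='z').fetchall()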
To gain a “hybrid” approach, the select() construct accepts strings for most of its arguments. Below we combine the
usage of strings with our constructed select() object, by using the select() object to structure the statement,
and strings to provide all the content within the structure. For this example, SQLAlchemy is not given any Column
or Table objects in any of its expressions, so it cannot generate a FROM clause. So we also give it the from_obj
keyword argument, which is a list of ClauseElements (or strings) to be placed within the FROM clause:
Going from constructed SQL to text, we lose some capabilities. We lose the capability for SQLAlchemy to compile our
expression to a specific target database; above, our expression won’t work with MySQL since it has no || construct.
It also becomes more tedious for SQLAlchemy to be made aware of the datatypes in use; for example, if our bind
parameters required UTF-8 encoding before going in, or conversion from a Python datetime into a string (as is
required with SQLite), we would have to add extra information to our text() construct. Similar issues arise on the
result set side, where SQLAlchemy also performs type-specific data conversion in some cases; still more information
can be added to text() to work around this. But what we really lose from our statement is the ability to manipulate
it, transform it, and analyze it. These features are critical when using the ORM, which makes heavy usage of relational
transformations. To show off what we mean, we’ll first introduce the ALIAS construct and the JOIN construct, just so
we have some juicier bits to play with.
Aliases are required when you self-join a table to itself, or more commonly when you need to join from a parent table to a child
table multiple times. For example, we know that our user jack has two email addresses. How can we locate jack
based on the combination of those two addresses? We need to join twice to it. Let’s construct two distinct aliases for
the addresses table and join:
>>> a1 = addresses.alias(’a1’)
>>> a2 = addresses.alias(’a2’)
>>> s = select([users], and_(
... users.c.id==a1.c.user_id,
... users.c.id==a2.c.user_id,
... a1.c.email_address==’[email protected]’,
... a2.c.email_address==’[email protected]’
... ))
>>> print conn.execute(s).fetchall()
SELECT users.id, users.name, users.fullname
FROM users, addresses AS a1, addresses AS a2
WHERE users.id = a1.user_id AND users.id = a2.user_id AND a1.email_address = ? AND a2.email_address = ?
('[email protected]', '[email protected]')
[(1, u'jack', u'Jack Jones')]
Easy enough. One thing that we’re going for with the SQL Expression Language is the melding of programmatic
behavior with SQL generation. Coming up with names like a1 and a2 is messy; we really didn’t need to use those
names anywhere, it’s just the database that needed them. Plus, we might write some code that uses alias objects that
came from several different places, and it’s difficult to ensure that they all have unique names. So instead, we just let
SQLAlchemy make the names for us, using “anonymous” aliases:
>>> a1 = addresses.alias()
>>> a2 = addresses.alias()
>>> s = select([users], and_(
... users.c.id==a1.c.user_id,
... users.c.id==a2.c.user_id,
... a1.c.email_address==’[email protected]’,
... a2.c.email_address==’[email protected]’
... ))
>>> print conn.execute(s).fetchall()
SELECT users.id, users.name, users.fullname
FROM users, addresses AS addresses_1, addresses AS addresses_2
WHERE users.id = addresses_1.user_id AND users.id = addresses_2.user_id AND addresses_1.email_address = ? AND addresses_2.email_address = ?
('[email protected]', '[email protected]')
[(1, u'jack', u'Jack Jones')]
One super-huge advantage of anonymous aliases is that not only did we not have to guess up a random name, but we
can also be guaranteed that the above SQL string is deterministically generated to be the same every time. This is
important for databases such as Oracle which cache compiled “query plans” for their statements, and need to see the
same SQL string in order to make use of it.
Aliases can of course be used for anything which you can SELECT from, including SELECT statements themselves.
We can self-join the users table back to the select() we’ve created by making an alias of the entire statement.
The correlate(None) directive is to avoid SQLAlchemy’s attempt to “correlate” the inner users table with the
outer one:
>>> a1 = s.correlate(None).alias()
>>> s = select([users.c.name], users.c.id==a1.c.id)
>>> print conn.execute(s).fetchall()
SELECT users.name
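Moving on to joins, the join() method available on Table objects produces a JOIN construct. For example, using the users and addresses tables from earlier:
>>> print users.join(addresses)
users JOIN addresses ON users.id = addresses.user_id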
The alert reader will see more surprises; SQLAlchemy figured out how to JOIN the two tables ! The ON condition
of the join, as it’s called, was automatically generated based on the ForeignKey object which we placed on the
addresses table way at the beginning of this tutorial. Already the join() construct is looking like a much better
way to join tables.
Of course you can join on whatever expression you want, such as if we want to join on all users who use the same
name in their email address as their username:
When we create a select() construct, SQLAlchemy looks around at the tables we’ve mentioned and then places
them in the FROM clause of the statement. When we use JOINs however, we know what FROM clause we want, so
here we make usage of the from_obj keyword argument:
The outerjoin() function just creates LEFT OUTER JOIN constructs. It’s used just like join():
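For example:
>>> s = select([users.c.fullname], from_obj=[users.outerjoin(addresses)])
>>> print s
SELECT users.fullname
FROM users LEFT OUTER JOIN addresses ON users.id = addresses.user_id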
That’s the output outerjoin() produces, unless, of course, you’re stuck in a gig using Oracle prior to version 9,
and you’ve set up your engine (which would be using OracleDialect) to use Oracle-specific SQL:
SELECT users.fullname
FROM users, addresses
WHERE users.id = addresses.user_id(+)
If you don’t know what that SQL means, don’t worry ! The secret tribe of Oracle DBAs don’t want their black magic
being found out ;).
Next, we encounter that they’d like the results in descending order by full name. We apply ORDER BY, using an extra
modifier desc:
We also come across that they’d like only users who have an address at MSN. A quick way to tack this on is by using
an EXISTS clause, which we correlate to the users table in the enclosing SELECT:
And finally, the application also wants to see the listing of email addresses at once; so to save queries, we outerjoin the
addresses table (using an outer join so that users with no addresses come back as well; since we’re programmatic,
we might not have kept track that we used an EXISTS clause against the addresses table too...). Additionally,
since the users and addresses table both have a column named id, let’s isolate their names from each other in
the COLUMNS clause by using labels:
>>> conn.execute(query).fetchall()
SELECT users.id AS users_id, users.name AS users_name, users.fullname AS users_fullname, ad
FROM users LEFT OUTER JOIN addresses ON users.id = addresses.user_id
WHERE users.name = ? AND (EXISTS (SELECT addresses.id
FROM addresses
WHERE addresses.user_id = users.id AND addresses.email_address LIKE ?)) ORDER BY users.full
(’jack’, ’%@msn.com’)[(1, u’jack’, u’Jack Jones’, 1, 1, u’[email protected]’), (1, u’jack’, u’
So we started small, added one little thing at a time, and at the end we have a huge statement... which actually works.
Now let’s do one more thing; the searching function wants to add another email_address criterion on, however it
doesn’t want to construct an alias of the addresses table; suppose many parts of the application are written to deal
specifically with the addresses table, and to change all those functions to support receiving an arbitrary alias of the
address would be cumbersome. We can actually convert the addresses table within the existing statement to be an
alias of itself, using replace_selectable():
>>> a1 = addresses.alias()
>>> query = query.replace_selectable(addresses, a1)
>>> print query
SELECT users.id AS users_id, users.name AS users_name, users.fullname AS users_fullname, ad
FROM users LEFT OUTER JOIN addresses AS addresses_1 ON users.id = addresses_1.user_id
WHERE users.name = :name_1 AND (EXISTS (SELECT addresses_1.id
FROM addresses AS addresses_1
WHERE addresses_1.user_id = users.id AND addresses_1.email_address LIKE :email_address_1))
One more thing though: with automatic labeling applied as well as anonymous aliasing, how do we retrieve the
columns from the rows for this thing? The label for the email_address column is now the generated name
addresses_1_email_address, and in another statement it might be something different! This is where accessing
result columns by Column object becomes very useful:
The above example, by its end, got significantly more intense than the typical end-user constructed SQL will usually
be. However when writing higher-level tools such as ORMs, they become more significant. SQLAlchemy’s ORM
relies very heavily on techniques like this.
Throughout all these examples, SQLAlchemy is busy creating bind parameters wherever literal expressions occur. You
can also specify your own bind parameters with your own names, and use the same statement repeatedly. The database
dialect converts to the appropriate named or positional style, as here where it converts to positional for SQLite:
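A sketch of a named bind parameter (executing with conn.execute(s, username='wendy') supplies the value at execution time, and the SQLite dialect converts :username to the positional ? style):
>>> from sqlalchemy.sql import bindparam
>>> s = users.select(users.c.name == bindparam('username'))
>>> print s
SELECT users.id, users.name, users.fullname
FROM users
WHERE users.name = :username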
Another important aspect of bind parameters is that they may be assigned a type. The type of the bind parameter will
determine its behavior within expressions and also how the data bound to it is processed before being sent off to the
database:
Bind parameters of the same name can also be used multiple times, where only a single named value is needed in the
execute parameters:
3.15.2 Functions
SQL functions are created using the func keyword, which generates functions using attribute access:
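For example:
>>> from sqlalchemy.sql import func
>>> print func.now()
now()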
By “generates”, we mean that any SQL function is created based on the word you choose:
Certain function names are known by SQLAlchemy, allowing special behavioral rules to be applied. Some for example
are “ANSI” functions, which mean they don’t get the parenthesis added after them, such as CURRENT_TIMESTAMP:
Functions are most typically used in the columns clause of a select statement, and can also be labeled as well as given
a type. Labeling a function is recommended so that the result can be targeted in a result row based on a string name,
and assigning it a type is required when you need result-set processing to occur, such as for Unicode conversion and
date conversions. Below, we use the result function scalar() to just read the first column of the first row and then
close the result; the label, even though present, is not important in this case:
Databases such as PostgreSQL and Oracle which support functions that return whole result sets can be assembled
into selectable units, which can be used in statements. Such as, a database function calculate() which takes the
parameters x and y, and returns three columns which we’d like to name q, z and r, we can construct using “lexical”
column objects as well as bind parameters:
If we wanted to use our calculate statement twice with different bind parameters, the unique_params()
function will create copies for us, and mark the bind parameters as “unique” so that conflicting names are isolated.
Note we also make two separate aliases of our selectable:
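A sketch of the construction, matching the bound values shown below (the calculate function and its q, z, r columns are the hypothetical database function described above):
>>> from sqlalchemy.sql import column, bindparam
>>> calculate = select([column('q'), column('z'), column('r')],
...     from_obj=[func.calculate(bindparam('x'), bindparam('y'))])
>>> calc1 = calculate.alias('c1').unique_params(x=17, y=45)
>>> calc2 = calculate.alias('c2').unique_params(x=5, y=12)
>>> s = select([users], users.c.id.between(calc1.c.z, calc2.c.z))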
>>> print s
SELECT users.id, users.name, users.fullname
FROM users, (SELECT q, z, r
FROM calculate(:x_1, :y_1)) AS c1, (SELECT q, z, r
FROM calculate(:x_2, :y_2)) AS c2
WHERE users.id BETWEEN c1.z AND c2.z
>>> s.compile().params
{u’x_2’: 5, u’y_2’: 12, u’y_1’: 45, u’x_1’: 17}
Unions come in two flavors, UNION and UNION ALL, which are available via module level functions:
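For example (the email patterns are illustrative):
>>> from sqlalchemy.sql import union
>>> u = union(
...     addresses.select(addresses.c.email_address.like('%@yahoo.com')),
...     addresses.select(addresses.c.email_address.like('%@msn.com'))
... )
>>> print u
SELECT addresses.id, addresses.user_id, addresses.email_address
FROM addresses
WHERE addresses.email_address LIKE :email_address_1
UNION SELECT addresses.id, addresses.user_id, addresses.email_address
FROM addresses
WHERE addresses.email_address LIKE :email_address_2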
Also available, though not supported on all databases, are intersect(), intersect_all(), except_(), and
except_all():
A common issue with so-called “compound” selectables arises due to the fact that they nest with parenthesis. SQLite
in particular doesn’t like a statement that starts with parenthesis. So when nesting a “compound” inside a “compound”,
it’s often necessary to apply .alias().select() to the first element of the outermost compound, if that element
is also a compound. For example, to nest a “union” and a “select” inside of “except_”, SQLite will want the “union”
to be stated as a subquery:
>>> u = except_(
... union(
... addresses.select(addresses.c.email_address.like(’%@yahoo.com’)),
... addresses.select(addresses.c.email_address.like(’%@msn.com’))
... ).alias().select(), # apply subquery here
... addresses.select(addresses.c.email_address.like(’%@msn.com’))
... )
>>> print conn.execute(u).fetchall()
SELECT anon_1.id, anon_1.user_id, anon_1.email_address
FROM (SELECT addresses.id AS id, addresses.user_id AS user_id,
addresses.email_address AS email_address FROM addresses
WHERE addresses.email_address LIKE ? UNION SELECT addresses.id AS id,
Notice in the examples on “scalar selects”, the FROM clause of each embedded select did not contain the users table
in its FROM clause. This is because SQLAlchemy automatically attempts to correlate embedded FROM objects to
that of an enclosing query. To disable this, or to specify explicit FROM clauses to be correlated, use correlate():
The select() function can take keyword arguments order_by, group_by (as well as having), limit, and
offset. There’s also distinct=True. These are all also available as generative functions. order_by()
expressions can use the modifiers asc() or desc() to indicate ascending or descending.
>>> s = select([addresses]).offset(1).limit(1)
>>> print conn.execute(s).fetchall()
SELECT addresses.id, addresses.user_id, addresses.email_address
FROM addresses
LIMIT 1 OFFSET 1
()[(2, 1, u’[email protected]’)]
conn.execute(
    users.insert().values(name=func.upper('jack')),
    fullname='Jack Jones'
)
bindparam() constructs can be passed, however the names of the table's columns are reserved for the "automatic"
generation of bind names:

users.insert().values(id=bindparam('_id'), name=bindparam('_name'))
Updates work a lot like INSERTS, except there is an additional WHERE clause that can be specified:
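A sketch of a simple update, followed by a bindparam-based statement u which is reused with multiple parameter sets below (assumes bindparam has been imported as above):
>>> stmt = users.update().\
...     where(users.c.name == 'jack').\
...     values(name='ed')
>>> conn.execute(stmt)
UPDATE users SET name=? WHERE users.name = ?
('ed', 'jack')
COMMIT
<sqlalchemy.engine.base.ResultProxy object at 0x...>

>>> u = users.update().\
...     where(users.c.name == bindparam('oldname')).\
...     values(name=bindparam('newname'))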
>>> # with binds, you can also update many rows at once
>>> conn.execute(u,
... {’oldname’:’jack’, ’newname’:’ed’},
... {’oldname’:’wendy’, ’newname’:’mary’},
... {’oldname’:’jim’, ’newname’:’jake’},
... )
UPDATE users SET name=? WHERE users.name = ?
[(’ed’, ’jack’), (’mary’, ’wendy’), (’jake’, ’jim’)]
COMMIT
<sqlalchemy.engine.base.ResultProxy object at 0x...>
A correlated update lets you update a table using selection from another table, or the same table:
3.17 Deletes
Finally, a delete. Easy enough:
>>> conn.execute(addresses.delete())
DELETE FROM addresses
()
COMMIT
<sqlalchemy.engine.base.ResultProxy object at 0x...>
MAPPER CONFIGURATION
This section references most major configurational patterns involving the mapper() and relationship() func-
tions. It assumes you’ve worked through Object Relational Tutorial and know how to construct and use rudimentary
mappers and relationships.
The default behavior of a mapper is to assemble all the columns in the mapped Table into mapped object attributes.
This behavior can be modified in several ways, as well as enhanced by SQL expressions.
To load only a part of the columns referenced by a table as attributes, use the include_properties and
exclude_properties arguments:
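For example (the column names here are illustrative):
mapper(User, users_table, include_properties=['user_id', 'user_name'])

mapper(Address, addresses_table, exclude_properties=['street', 'city', 'state', 'zip'])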
To change the name of the attribute mapped to a particular column, place the Column object in the properties
dictionary with the desired key:
To change the names of all attributes using a prefix, use the column_prefix option. This is useful for classes which
wish to add their own property accessors:
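For example:
mapper(User, users_table, column_prefix='_')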
The above will place attribute names such as _user_id, _user_name, _password etc. on the mapped User
class.
To place multiple columns which are known to be “synonymous” based on foreign key relationship or join condition
into the same mapped attribute, put them together using a list, as below where we map to a Join:
usersaddresses = join(users_table, addresses_table,
    users_table.c.user_id == addresses_table.c.user_id)

mapper(User, usersaddresses, properties={
    'id': [users_table.c.user_id, addresses_table.c.user_id],
})
This feature allows particular columns of a table to not be loaded by default, instead being loaded later on when first
referenced. It is essentially “column-level lazy loading”. This feature is useful when one wants to avoid loading a
large text or binary field into memory when it’s not needed. Individual columns can be lazy loaded by themselves or
placed into groups that lazy-load together:
class Book(object):
pass
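A sketch of such a mapping, assuming a book_excerpts table with a large excerpt column:
mapper(Book, book_excerpts, properties={
    'excerpt': deferred(book_excerpts.c.excerpt)
})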
Deferred columns can be placed into groups so that they load together:
class Book(object):
pass
# define a mapper with a ’photos’ deferred group. when one photo is referenced,
# all three photos will be loaded in one SELECT statement. The ’excerpt’ will
# be loaded separately when it is first referenced.
mapper(Book, book_excerpts, properties={
    'excerpt': deferred(book_excerpts.c.excerpt),
    'photo1': deferred(book_excerpts.c.photo1, group='photos'),
    'photo2': deferred(book_excerpts.c.photo2, group='photos'),
    'photo3': deferred(book_excerpts.c.photo3, group='photos'),
})
You can defer or undefer columns at the Query level using the defer and undefer options:
query = session.query(Book)
query.options(defer(’summary’)).all()
query.options(undefer(’excerpt’)).all()
And an entire “deferred group”, i.e. which uses the group keyword argument to deferred(), can be undeferred
using undefer_group(), sending in the group name:
query = session.query(Book)
query.options(undefer_group(’photos’)).all()
To add a SQL clause composed of local or external columns as a read-only, mapped column attribute, use the
column_property() function. Any scalar-returning ClauseElement may be used, as long as it has a name
attribute; usually, you’ll want to call label() to give it a specific name:
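For example, mapping a concatenation of two columns as a read-only fullname attribute (the firstname and lastname columns are assumptions):
mapper(User, users_table, properties={
    'fullname': column_property(
        (users_table.c.firstname + " " + users_table.c.lastname).label('fullname')
    )
})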
Simple Validators
A quick way to add a “validation” routine to an attribute is to use the validates() decorator. This is a shortcut for
using the sqlalchemy.orm.util.Validator attribute extension with individual column or relationship based
attributes. An attribute validator can raise an exception, halting the process of mutating the attribute’s value, or can
change the given value into something different. Validators, like all attribute extensions, are only called by normal
userland code; they are not issued when the ORM is populating the object.
addresses_table = Table('addresses', metadata,
    Column('id', Integer, primary_key=True),   # primary key column assumed
    Column('email', String)
)
class EmailAddress(object):
@validates(’email’)
def validate_email(self, key, address):
assert ’@’ in address
return address
mapper(EmailAddress, addresses_table)
Validators also receive collection events, when items are added to a collection:
class User(object):
@validates(’addresses’)
def validate_address(self, key, address):
assert ’@’ in address.email
return address
Using Descriptors
A more comprehensive way to produce modified behavior for an attribute is to use descriptors. These are commonly
used in Python using the property() function. The standard SQLAlchemy technique for descriptors is to create a
plain descriptor, and to have it read/write from a mapped attribute with a different name. To have the descriptor named
the same as a column, map the column under a different name, i.e.:
class EmailAddress(object):
def _set_email(self, email):
self._email = email
def _get_email(self):
return self._email
email = property(_get_email, _set_email)
However, the approach above is not complete. While our EmailAddress object will shuttle the value through the
email descriptor and into the _email mapped attribute, the class level EmailAddress.email attribute does not
have the usual expression semantics usable with Query. To provide these, we instead use the synonym() function
as follows:
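A sketch of the synonym-based mapping:
mapper(EmailAddress, addresses_table, properties={
    'email': synonym('_email', map_column=True)
})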
The email attribute is now usable in the same way as any other mapped attribute, including filter expressions, get/set
operations, etc.:

address = session.query(EmailAddress).filter(EmailAddress.email == 'some address').one()

address.email = 'some other address'
session.flush()
If the mapped class does not provide a property, the synonym() construct will create a default getter/setter object
automatically.
Custom Comparators
The expressions returned by comparison operations, such as User.name==’ed’, can be customized. SQLAlchemy
attributes generate these expressions using PropComparator objects, which provide common Python ex-
pression overrides including __eq__(), __ne__(), __lt__(), and so on. Any mapped attribute can be
passed a user-defined class via the comparator_factory keyword argument, which subclasses the appropriate
PropComparator in use, which can provide any or all of these methods:
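A sketch of a comparator_factory applied to a column-based attribute:
from sqlalchemy import func
from sqlalchemy.orm import mapper, column_property
from sqlalchemy.orm.properties import ColumnProperty

class MyComparator(ColumnProperty.Comparator):
    def __eq__(self, other):
        # wrap both sides in lower() for case-insensitive comparison
        return func.lower(self.__clause_element__()) == func.lower(other)

mapper(EmailAddress, addresses_table, properties={
    'email': column_property(addresses_table.c.email,
                             comparator_factory=MyComparator)
})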
Above, comparisons on the email column are wrapped in the SQL lower() function to produce case-insensitive
matching:
The __clause_element__() method is provided by the base Comparator class in use, and represents the SQL
element which best matches what this attribute represents. For a column-based attribute, it’s the mapped column. For
a composite attribute, it’s a ClauseList consisting of each column represented. For a relationship, it’s the table
mapped by the local mapper (not the remote mapper). __clause_element__() should be honored by the custom
comparator class in most cases since the resulting element will be applied any translations which are in effect, such as
the correctly aliased member when using an aliased() construct or certain with_polymorphic() scenarios.
There are four kinds of Comparator classes which may be subclassed, according to the type of mapper property
configured:
When using comparable_property(), which is a mapper property that isn’t tied to any column or mapped table,
the __clause_element__() method of PropComparator should also be implemented.
The comparator_factory argument is accepted by all MapperProperty-producing functions:
column_property(), composite(), comparable_property(), synonym(), relationship(),
backref(), deferred(), and dynamic_loader().
Sets of columns can be associated with a single datatype. The ORM treats the group of columns like a single col-
umn which accepts and returns objects using the custom datatype you provide. In this example, we’ll create a table
vertices which stores a pair of x/y coordinates, and a custom datatype Point which is a composite type of an x
and y column:
The requirements for the custom datatype class are that it have a constructor which accepts positional arguments
corresponding to its column format, and also provides a method __composite_values__() which returns
the state of the object as a list or tuple, in order of its column-based attributes. It also should supply adequate
__eq__() and __ne__() methods which test the equality of two instances, and may optionally provide
a __set_composite_values__ method which is used to set internal state in some cases (typically when default
values have been generated during a flush):
class Point(object):
def __init__(self, x, y):
self.x = x
self.y = y
def __composite_values__(self):
return [self.x, self.y]
def __set_composite_values__(self, x, y):
self.x = x
self.y = y
def __eq__(self, other):
return other.x == self.x and other.y == self.y
def __ne__(self, other):
return not self.__eq__(other)
If __set_composite_values__() is not provided, the names of the mapped columns are taken as the names of
attributes on the object, and setattr() is used to set data.
Setting up the mapping uses the composite() function:
class Vertex(object):
pass
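A sketch of the mapping, assuming a vertices table with columns x1, y1, x2, y2:
mapper(Vertex, vertices_table, properties={
    'start': composite(Point, vertices_table.c.x1, vertices_table.c.y1),
    'end': composite(Point, vertices_table.c.x2, vertices_table.c.y2)
})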
We can now use the Vertex instances as well as querying as though the start and end attributes are regular scalar
attributes:
session = Session()
v = Vertex(Point(3, 4), Point(5, 6))
session.add(v)
The “equals” comparison operation by default produces an AND of all corresponding columns equated to one another.
This can be changed using the comparator_factory, described in Custom Comparators:
class PointComparator(CompositeProperty.Comparator):
    def __gt__(self, other):
        """define the 'greater than' operation"""
        return and_(*[a > b for a, b in
                      zip(self.__clause_element__().clauses,
                          other.__composite_values__())])
The ORM does not generate ordering for any query unless explicitly configured.
The “default” ordering for a collection, which applies to list-based collections, can be configured using the order_by
keyword argument on relationship():
mapper(Address, addresses_table)
Note that when using joined eager loaders with relationships, the tables used by the eager load’s join are anonymously
aliased. You can only order by these columns if you specify it at the relationship() level. To control ordering
at the query level based on a related table, you join() to that relationship, then order by it:
session.query(User).join(’addresses’).order_by(Address.street)
Ordering for rows loaded through Query is usually specified using the order_by() generative method. There is
also an option to set a default ordering for Queries which are against a single mapped entity and where there was no
explicit order_by() stated, which is the order_by keyword argument to mapper():
# order by a column
mapper(User, users_table, order_by=users_table.c.user_id)
Above, a Query issued for the User class will use the value of the mapper’s order_by setting if the Query itself
has no ordering specified.
SQLAlchemy supports three forms of inheritance: single table inheritance, where several types of classes are stored in
one table, concrete table inheritance, where each type of class is stored in its own table, and joined table inheritance,
where the parent/child classes are stored in their own tables that are joined together in a select. Whereas support for
single and joined table inheritance is strong, concrete table inheritance is a less common scenario with some particular
problems so is not quite as flexible.
When mappers are configured in an inheritance relationship, SQLAlchemy has the ability to load elements “polymor-
phically”, meaning that a single query can return objects of multiple types.
For the following sections, assume this class relationship:
class Employee(object):
def __init__(self, name):
self.name = name
def __repr__(self):
return self.__class__.__name__ + " " + self.name
class Manager(Employee):
def __init__(self, name, manager_data):
self.name = name
self.manager_data = manager_data
def __repr__(self):
return self.__class__.__name__ + " " + self.name + " " + self.manager_data
class Engineer(Employee):
def __init__(self, name, engineer_info):
self.name = name
self.engineer_info = engineer_info
def __repr__(self):
return self.__class__.__name__ + " " + self.name + " " + self.engineer_info
In joined table inheritance, each class along a particular class's list of parents is represented by a unique table. The
total set of attributes for a particular instance is represented as a join along all tables in its inheritance path. Here, we
first define a table to represent the Employee class. This table will contain a primary key column (or columns), and
a column for each attribute that’s represented by Employee. In this case it’s just name:
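A sketch of the base table:
employees = Table('employees', metadata,
    Column('employee_id', Integer, primary_key=True),
    Column('name', String(50)),
    Column('type', String(30), nullable=False)
)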
The table also has a column called type. It is strongly advised in both single- and joined- table inheritance scenarios
that the root table contains a column whose sole purpose is that of the discriminator; it stores a value which indicates
the type of object represented within the row. The column may be of any desired datatype. While there are some
“tricks” to work around the requirement that there be a discriminator column, they are more complicated to configure
when one wishes to load polymorphically.
Next we define individual tables for each of Engineer and Manager, which contain columns that represent the
attributes unique to the subclass they represent. Each table also must contain a primary key column (or columns),
and in most cases a foreign key reference to the parent table. It is standard practice that the same column is used for
both of these roles, and that the column is also named the same as that of the parent table. However this is optional
in SQLAlchemy; separate columns may be used for primary key and parent-relationship, the column may be named
differently than that of the parent, and even a custom join condition can be specified between parent and child tables
instead of using a foreign key:
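A sketch of the two child tables:
engineers = Table('engineers', metadata,
    Column('employee_id', Integer,
           ForeignKey('employees.employee_id'), primary_key=True),
    Column('engineer_info', String(50)),
)

managers = Table('managers', metadata,
    Column('employee_id', Integer,
           ForeignKey('employees.employee_id'), primary_key=True),
    Column('manager_data', String(50)),
)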
One natural effect of the joined table inheritance configuration is that the identity of any mapped object can be de-
termined entirely from the base table. This has obvious advantages, so SQLAlchemy always considers the primary
key columns of a joined inheritance class to be those of the base table only, unless otherwise manually configured.
In other words, the employee_id column of both the engineers and managers table is not used to locate the
Engineer or Manager object itself - only the value in employees.employee_id is considered, and the pri-
mary key in this case is non-composite. engineers.employee_id and managers.employee_id are still of
course critical to the proper operation of the pattern overall as they are used to locate the joined row, once the parent
row has been determined, either through a distinct SELECT statement or all at once within a JOIN.
We then configure mappers as usual, except we use some additional arguments to indicate the inheritance relationship,
the polymorphic discriminator column, and the polymorphic identity of each class; this is the value that will be stored
in the polymorphic discriminator column.
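A sketch of the mapper configuration:
mapper(Employee, employees,
       polymorphic_on=employees.c.type, polymorphic_identity='employee')
mapper(Engineer, engineers, inherits=Employee, polymorphic_identity='engineer')
mapper(Manager, managers, inherits=Employee, polymorphic_identity='manager')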
And that’s it. Querying against Employee will return a combination of Employee, Engineer and
Manager objects. Newly saved Engineer, Manager, and Employee objects will automatically populate the
employees.type column with engineer, manager, or employee, as appropriate.
The with_polymorphic() method of Query affects the specific subclass tables which the Query selects from.
Normally, a query such as this:
session.query(Employee).all()
...selects only from the employees table. When loading fresh from the database, our joined-table setup will query
from the parent table only, using SQL such as this:
As attributes are requested from those Employee objects which are represented in either the engineers or
managers child tables, a second load is issued for the columns in that related row, if the data was not already
loaded. So above, after accessing the objects you’d see further SQL issued along the lines of:
This behavior works well when issuing searches for small numbers of items, such as when using get(), since the
full range of joined tables are not pulled in to the SQL statement unnecessarily. But when querying a larger span of
rows which are known to be of many types, you may want to actively join to some or all of the joined tables. The
with_polymorphic feature of Query and mapper provides this.
Telling our query to polymorphically load Engineer and Manager objects:
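query = session.query(Employee).with_polymorphic([Engineer, Manager])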
produces a query which joins the employees table to both the engineers and managers tables like the follow-
ing:
query.all()
with_polymorphic() accepts a single class or mapper, a list of classes/mappers, or the string ’*’ to indicate all
subclasses:
It also accepts a second argument selectable which replaces the automatic join creation and instead selects directly
from the selectable given. This feature is normally used with “concrete” inheritance, described later, but can be used
with any kind of inheritance setup in the case that specialized SQL should be used to load polymorphically:
# custom selectable
query.with_polymorphic([Engineer, Manager], employees.outerjoin(managers).outerjoin(enginee
with_polymorphic() is also needed when you wish to add filter criteria that are specific to one or more sub-
classes; It makes the subclasses’ columns available to the WHERE clause:
session.query(Employee).with_polymorphic([Engineer, Manager]).\
filter(or_(Engineer.engineer_info==’w’, Manager.manager_data==’q’))
Note that if you only need to load a single subtype, such as just the Engineer objects, with_polymorphic() is
not needed since you would query against the Engineer class directly.
The mapper also accepts with_polymorphic as a configurational argument so that the joined-style load will be
issued automatically. This argument may be the string ’*’, a list of classes, or a tuple consisting of either, followed
by a selectable.
The above mapping will produce a query similar to that of with_polymorphic(’*’) for every query of
Employee objects.
Using with_polymorphic() with Query will override the mapper-level with_polymorphic setting.
The of_type() method is a helper which allows the construction of joins along relationship() paths while
narrowing the criterion to specific subclasses. Suppose the employees table represents a collection of employees
which are associated with a Company object. We’ll add a company_id column to the employees table and a new
table companies:
class Company(object):
pass
When querying from Company onto the Employee relationship, the join() method as well as the any() and
has() operators will create a join from companies to employees, without including engineers or managers
in the mix. If we wish to have criterion which is specifically against the Engineer class, we can tell those methods
to join or subquery against the joined table representing the subclass using the of_type() operator:
session.query(Company).join(Company.employees.of_type(Engineer)).filter(Engineer.engineer_i
A longhand version of this would involve spelling out the full target selectable within a 2-tuple:
session.query(Company).join((employees.join(engineers), Company.employees)).filter(Engineer
Currently, of_type() accepts a single class argument. It may be expanded later on to accept multiple classes. For
now, to join to any group of subclasses, the longhand notation allows this flexibility:
session.query(Company).join((employees.outerjoin(engineers).outerjoin(managers), Company.em
filter(or_(Engineer.engineer_info==’someinfo’, Manager.manager_data==’somedata’))
The any() and has() operators also can be used with of_type() when the embedded criterion is in terms of a
subclass:
session.query(Company).filter(Company.employees.of_type(Engineer).any(Engineer.engineer_inf
Note that the any() and has() are both shorthand for a correlated EXISTS query. To build one by hand looks like:
session.query(Company).filter(
exists([1],
and_(Engineer.engineer_info==’someinfo’, employees.c.company_id==companies.c.compan
from_obj=employees.join(engineers)
)
).all()
The EXISTS subquery above selects from the join of employees to engineers, and also specifies criterion which
correlates the EXISTS subselect back to the parent companies table.
Single table inheritance is where the attributes of the base class as well as all subclasses are represented within a single
table. A column is present in the table for every attribute mapped to the base class and all subclasses; the columns
which correspond to a single subclass are nullable. This configuration looks much like joined-table inheritance except
there’s only one table. In this case, a type column is required, as there would be no other way to discriminate between
classes. The table is specified in the base mapper only; for the inheriting classes, leave their table parameter blank:
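A sketch of a single-table setup (column names mirror the joined-table example above):
employees_table = Table('employees', metadata,
    Column('employee_id', Integer, primary_key=True),
    Column('name', String(50)),
    Column('manager_data', String(50)),
    Column('engineer_info', String(50)),
    Column('type', String(20), nullable=False)
)

employee_mapper = mapper(Employee, employees_table,
    polymorphic_on=employees_table.c.type, polymorphic_identity='employee')
mapper(Manager, inherits=employee_mapper, polymorphic_identity='manager')
mapper(Engineer, inherits=employee_mapper, polymorphic_identity='engineer')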
Note that the mappers for the derived classes Manager and Engineer omit the specification of their associated table,
as it is inherited from the employee_mapper. Omitting the table specification for derived mappers in single-table
inheritance is required.
In concrete table inheritance, each class is mapped to its own completely distinct table. Notice in this case there is no
type column. If polymorphic loading is not required, there's no advantage to using
inherits here; you just define a separate mapper for each class.
mapper(Employee, employees_table)
mapper(Manager, managers_table)
mapper(Engineer, engineers_table)
To load polymorphically, the with_polymorphic argument is required, along with a selectable indicating how
rows should be loaded. In this case we must construct a UNION of all three tables. SQLAlchemy includes a helper
function to create these called polymorphic_union(), which will map all the different columns into a structure
of selects with the same numbers and names of columns, and also generate a virtual type column for each subselect:
pjoin = polymorphic_union({
’employee’: employees_table,
’manager’: managers_table,
’engineer’: engineers_table
}, ’type’, ’pjoin’)
session.query(Employee).all()
FROM (
SELECT employees.employee_id AS employee_id, CAST(NULL AS VARCHAR(50)) AS manager_data,
CAST(NULL AS VARCHAR(50)) AS engineer_info, ’employee’ AS type
FROM employees
UNION ALL
SELECT managers.employee_id AS employee_id, managers.manager_data AS manager_data, mana
CAST(NULL AS VARCHAR(50)) AS engineer_info, ’manager’ AS type
FROM managers
UNION ALL
SELECT engineers.employee_id AS employee_id, CAST(NULL AS VARCHAR(50)) AS manager_data,
engineers.engineer_info AS engineer_info, ’engineer’ AS type
FROM engineers
) AS pjoin
[]
Both joined-table and single table inheritance scenarios produce mappings which are usable in relationship()
functions; that is, it’s possible to map a parent object to a child object which is polymorphic. Similarly, inheriting
mappers can have relationship() objects of their own at any level, which are inherited to each child class.
The only requirement for relationships is that there is a table relationship between parent and child. An example is
the following modification to the joined table inheritance example, which sets a bi-directional relationship between
Employee and Company:
class Company(object):
pass
SQLAlchemy has a lot of experience in this area; the optimized “outer join” approach can be used freely for parent
and child relationships, eager loads are fully useable, aliased() objects and other techniques are fully supported
as well.
In a concrete inheritance scenario, mapping relationships is more difficult since the distinct classes do not share a table.
In this case, you can establish a relationship from parent to child if a join condition can be constructed from parent to
child, if each child table contains a foreign key to the parent:
The big limitation with concrete table inheritance is that relationship() objects placed on each concrete mapper
do not propagate to child mappers. If you want to have the same relationship() objects set up on all con-
crete mappers, they must be configured manually on each. To configure back references in such a configuration the
back_populates keyword may be used instead of backref, such as below where both A(object) and B(A)
bidirectionally reference C:
ajoin = polymorphic_union({
’a’:a_table,
’b’:b_table
}, ’type’, ’ajoin’)
Mappers can be constructed against arbitrary relational units (called Selectables) as well as plain Tables. For
example, The join keyword from the SQL package creates a neat selectable unit comprised of multiple tables,
complete with its own composite primary key, which can be passed in to a mapper as the table.
# a class
class AddressUser(object):
pass
# define a Join
j = join(users_table, addresses_table)
A second example:
# a class
class KeywordUser(object):
pass
In both examples above, “composite” columns were added as properties to the mappers; these are aggregations of
multiple columns into one mapper property, which instructs the mapper to keep both of those columns set at the same
value.
Similar to mapping against a join, a plain select() object can be used with a mapper as well. Below, an example select
which contains two aggregate functions and a group_by is mapped to a class:
# column names on the orders table are illustrative
s = select([customers,
            func.count(orders.c.order_id).label('order_count'),
            func.max(orders.c.price).label('highest_order')],
           customers.c.customer_id == orders.c.customer_id,
           group_by=[c for c in customers.c]
           ).alias('somealias')
class Customer(object):
pass
mapper(Customer, s)
Above, the “customers” table is joined against the “orders” table to produce a full row for each customer row, the total
count of related rows in the “orders” table, and the highest price in the “orders” table, grouped against the full set of
columns in the “customers” table. That query is then mapped against the Customer class. New instances of Customer
will contain attributes for each column in the “customers” table as well as an “order_count” and “highest_order”
attribute. Updates to the Customer object will only be reflected in the “customers” table and not the “orders” table.
This is because the primary key columns of the “orders” table are not represented in this mapper and therefore the
table is not affected by save or delete operations.
The first mapper created for a certain class is known as that class’s “primary mapper.” Other mappers can be created
as well on the “load side” - these are called secondary mappers. This is a mapper that must be constructed with
the keyword argument non_primary=True, and represents a load-only mapper. Objects that are loaded with a
secondary mapper will have their save operation processed by the primary mapper. It is also invalid to add new
relationship() objects to a non-primary mapper. To use this mapper with the Session, specify it to the query
method:
Example:

# primary mapper
mapper(User, users_table)

# non-primary mapper, constructed with non_primary=True
# (the alternate selectable mapped here is illustrative)
othermapper = mapper(User, users_table.select().alias(), non_primary=True)

# query using the non-primary mapper
result = session.query(othermapper).all()
The “non primary mapper” is a rarely needed feature of SQLAlchemy; in most cases, the Query object can produce
any kind of query that’s desired. It’s recommended that a straight Query be used in place of a non-primary mapper
unless the mapper approach is absolutely needed. Current use cases for the “non primary mapper” are when you want
to map the class to a particular select statement or view to which additional query criterion can be added, and for when
the particular mapped select statement or view is to be placed in a relationship() of a parent mapper.
The non_primary mapper defines alternate mappers for the purposes of loading objects. What if we want the same
class to be persisted differently, such as to different tables ? SQLAlchemy refers to this as the “entity name” pattern,
and in Python one can use a recipe which creates anonymous subclasses which are distinctly mapped. See the recipe
at Entity Name.
Mapping imposes no restrictions or requirements on the constructor (__init__) method for the class. You are free
to require any arguments for the function that you wish, assign attributes to the instance that are unknown to the ORM,
and generally do anything else you would normally do when writing a constructor for a Python class.
The SQLAlchemy ORM does not call __init__ when recreating objects from database rows. The ORM’s process
is somewhat akin to the Python standard library’s pickle module, invoking the low level __new__ method and then
quietly restoring attributes directly on the instance rather than calling __init__.
If you need to do some setup on database-loaded instances before they’re ready to use, you can use the
@reconstructor decorator to tag a method as the ORM counterpart to __init__. SQLAlchemy will call this
method with no arguments every time it loads or reconstructs one of your instances. This is useful for recreating
transient properties that are normally assigned in your __init__:
class MyMappedClass(object):
def __init__(self, data):
self.data = data
# we need stuff on all instances, but not in the database.
self.stuff = []
@orm.reconstructor
def init_on_load(self):
self.stuff = []
When obj = MyMappedClass() is executed, Python calls the __init__ method as normal and the data argu-
ment is required. When instances are loaded during a Query operation as in query(MyMappedClass).one(),
init_on_load is called instead.
Any method may be tagged as the reconstructor(), even the __init__ method. SQLAlchemy will call the
reconstructor method with no arguments. Scalar (non-collection) database-mapped attributes of the instance will be
available for use within the function. Eagerly-loaded collections are generally not yet available and will usually only
contain the first element. ORM state changes made to objects at this stage will not be recorded for the next flush()
operation, so the activity within a reconstructor should be conservative.
While the ORM does not call your __init__ method, it will modify the class’s __init__ slightly. The method
is lightly wrapped to act as a trigger for the ORM, allowing mappers to be compiled automatically and will fire a
init_instance() event that MapperExtension objects may listen for. MapperExtension objects can
also listen for a reconstruct_instance event, analogous to the reconstructor() decorator above.
Mappers can have functionality augmented or replaced at many points in their execution via the usage of the MapperEx-
tension class. This class is just a series of "hooks" where various functionality takes place. An application can make
its own MapperExtension objects, overriding only the methods it needs. Methods that are not overridden return the
special value sqlalchemy.orm.EXT_CONTINUE to allow processing to continue to the next MapperExtension or
simply proceed normally if there are no more extensions.
API documentation for MapperExtension: sqlalchemy.orm.interfaces.MapperExtension
To use MapperExtension, make your own subclass of it and just send it off to a mapper:
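For example:
from sqlalchemy.orm.interfaces import MapperExtension

class MyExtension(MapperExtension):
    def before_insert(self, mapper, connection, instance):
        print "instance %s before insert !" % instance

mapper(User, users_table, extension=MyExtension())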
Multiple extensions will be chained together and processed in order; they are specified as a list:
A quick walkthrough of the basic relational patterns. Note that the relationship() function is known as
relation() in all SQLAlchemy versions prior to 0.6beta2, including the 0.5 and 0.4 series.
One To Many
A one to many relationship places a foreign key in the child table referencing the parent. SQLAlchemy creates the
relationship as a collection on the parent object containing instances of the child object.
class Parent(object):
pass
class Child(object):
pass
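A sketch of the parent mapping, assuming parent_table and a child_table which carries a foreign key to it:
mapper(Parent, parent_table, properties={
    'children': relationship(Child)
})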
mapper(Child, child_table)
To establish a bi-directional relationship in one-to-many, where the “reverse” side is a many to one, specify the
backref option:
mapper(Child, child_table)
Many To One
Many to one places a foreign key in the parent table referencing the child. The mapping setup is identical to one-
to-many, however SQLAlchemy creates the relationship as a scalar attribute on the parent object referencing a single
instance of the child object.
class Parent(object):
pass
class Child(object):
pass
mapper(Child, child_table)
Backref behavior is available here as well, where backref="parents" will place a one-to-many collection on the
Child class.
One To One
One To One is essentially a bi-directional relationship with a scalar attribute on both sides. To achieve this, the
uselist=False flag indicates the placement of a scalar attribute instead of a collection on the “many” side of the
relationship. To convert one-to-many into one-to-one:
Many To Many
Many to Many adds an association table between two classes. The association table is indicated by the secondary
argument to relationship().
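A sketch, assuming left_table and right_table linked by an association_table; the child mapper follows below:
mapper(Parent, left_table, properties={
    'children': relationship(Child, secondary=association_table)
})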
mapper(Child, right_table)
For a bi-directional relationship, both sides of the relationship contain a collection by default, which can be modified on
either side via the uselist flag to be scalar. The backref keyword will automatically use the same secondary
argument for the reverse relationship:
Association Object
The association object pattern is a variant on many-to-many: it specifically is used when your association table contains
additional columns beyond those which are foreign keys to the left and right tables. Instead of using the secondary
argument, you map a new class directly to the association table. The left side of the relationship references the
association object via one-to-many, and the association class references the right side via many-to-one.
mapper(Child, right_table)
mapper(Child, right_table)
Working with the association pattern in its direct form requires that child objects are associated with an association
instance before being appended to the parent; similarly, access from parent to child goes through the association object:
To enhance the association object pattern such that direct access to the Association object is optional,
SQLAlchemy provides the associationproxy.
Important Note: it is strongly advised that the secondary table argument not be combined with the Association Ob-
ject pattern, unless the relationship() which contains the secondary argument is marked viewonly=True.
Otherwise, SQLAlchemy may persist conflicting data to the underlying association table since it is represented by two
conflicting mappings. The Association Proxy pattern should be favored in the case where access to the underlying
association data is only sometimes needed.
The adjacency list pattern is a common relational pattern whereby a table contains a foreign key reference to itself.
This is the most common and simple way to represent hierarchical data in flat tables. The other way is the “nested
sets” model, sometimes called “modified preorder”. Despite what many online articles say about modified preorder,
the adjacency list model is probably the most appropriate pattern for the large majority of hierarchical storage needs,
for reasons of concurrency, reduced complexity, and that modified preorder has little advantage over an application
which can fully load subtrees into the application space.
SQLAlchemy commonly refers to an adjacency list relationship as a self-referential mapper. In this example, we’ll
work with a single table called treenodes to represent a tree structure:
id parent_id data
--- ------- ----
1 NULL root
2 1 child1
3 1 child2
4 3 subchild1
5 3 subchild2
6 1 child3
SQLAlchemy’s mapper() configuration for a self-referential one-to-many relationship is exactly like a “nor-
mal” one-to-many relationship. When SQLAlchemy encounters the foreign key relationship from treenodes to
treenodes, it assumes one-to-many unless told otherwise:
# entity class
class Node(object):
    pass
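A minimal sketch of that mapping, assuming the treenodes table is available as a Table object named nodes:

mapper(Node, nodes, properties={
    'children': relationship(Node)
})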
To create a many-to-one relationship from child to parent, an extra indicator of the “remote side” is added, which
contains the Column object or objects indicating the remote side of the relationship:
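Roughly, again assuming the Table object named nodes; remote_side points at the id column, making the relationship many-to-one from child to parent:

mapper(Node, nodes, properties={
    'parent': relationship(Node, remote_side=[nodes.c.id])
})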
There are several examples included with SQLAlchemy illustrating self-referential strategies; these include Adjacency
List and XML Persistence.
Querying self-referential structures is done in the same way as any other query in SQLAlchemy, such as below, we
query for any node whose data attribute stores the value child2:
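A minimal example:

session.query(Node).filter(Node.data=='child2').all()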
On the subject of joins, i.e. those described earlier in this chapter, self-referential structures require the usage of
aliases so that the same table can be referenced multiple times within the FROM clause of the query. Aliasing can be
done either manually using the nodes Table object as a source of aliases:
nodealias = nodes.alias()
session.query(Node).filter(Node.data=='subchild1').\
    filter(and_(Node.parent_id==nodealias.c.id, nodealias.c.data=='child2')).all()
SELECT treenodes.id AS treenodes_id, treenodes.parent_id AS treenodes_parent_id, treenodes.
FROM treenodes, treenodes AS treenodes_1
WHERE treenodes.data = ? AND treenodes.parent_id = treenodes_1.id AND treenodes_1.data = ?
[’subchild1’, ’child2’]
or automatically, using join() with aliased=True:
# get all nodes named 'subchild1' with a parent named 'child2' and a grandparent 'root'
session.query(Node).filter(Node.data=='subchild1').\
    join('parent', aliased=True).filter(Node.data=='child2').\
    join('parent', aliased=True, from_joinpoint=True).filter(Node.data=='root').all()
SELECT treenodes.id AS treenodes_id, treenodes.parent_id AS treenodes_parent_id, treenodes.
FROM treenodes JOIN treenodes AS treenodes_1 ON treenodes_1.id = treenodes.parent_id JOIN t
WHERE treenodes.data = ? AND treenodes_1.data = ? AND treenodes_2.data = ?
[’subchild1’, ’child2’, ’root’]
Eager loading of relationships occurs using joins or outerjoins from parent to child table during a normal query opera-
tion, such that the parent and its child collection can be populated from a single SQL statement, or a second statement
for all collections at once. SQLAlchemy’s joined and subquery eager loading uses aliased tables in all cases when
joining to related items, so it is compatible with self-referential joining. However, to use eager loading with a self-
referential relationship, SQLAlchemy needs to be told how many levels deep it should join; otherwise the eager load
will not take place. This depth setting is configured via join_depth:
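A minimal sketch, assuming the same Node/nodes mapping, eager loading the children collection two levels deep:

mapper(Node, nodes, properties={
    'children': relationship(Node, lazy='joined', join_depth=2)
})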
session.query(Node).all()
SELECT treenodes_1.id AS treenodes_1_id, treenodes_1.parent_id AS treenodes_1_parent_id, tr
FROM treenodes LEFT OUTER JOIN treenodes AS treenodes_2 ON treenodes.id = treenodes_2.paren
[]
The relationship() function uses the foreign key relationship between the parent and child tables to formulate
the primary join condition between parent and child; in the case of a many-to-many relationship it also formulates
the secondary join condition:
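As a rough illustration of the conditions relationship() derives, assuming tables named parent_table, child_table and secondary_table with conventional foreign keys:

one to many / many to one:
    primaryjoin:   parent_table.c.id == child_table.c.parent_id

many to many:
    primaryjoin:   parent_table.c.id == secondary_table.c.parent_id
    secondaryjoin: secondary_table.c.child_id == child_table.c.id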
If you are working with a Table which has no ForeignKey objects on it (which can be the case when using
reflected tables with MySQL), or if the join condition cannot be expressed by a simple foreign key relationship, use
the primaryjoin and possibly secondaryjoin conditions to create the appropriate relationship.
In this example we create a relationship boston_addresses which will only load the user addresses with a city of
“Boston”:
class User(object):
    pass
class Address(object):
    pass

mapper(Address, addresses_table)
mapper(User, users_table, properties={
    'boston_addresses': relationship(Address, primaryjoin=
        and_(users_table.c.user_id==addresses_table.c.user_id,
             addresses_table.c.city=='Boston'))
})
Many to many relationships can be customized by one or both of primaryjoin and secondaryjoin, shown
below with just the default many-to-many relationship explicitly set:
class User(object):
    pass
class Keyword(object):
    pass

mapper(Keyword, keywords_table)
mapper(User, users_table, properties={
    'keywords': relationship(Keyword, secondary=userkeywords_table,
        primaryjoin=users_table.c.user_id==userkeywords_table.c.user_id,
        secondaryjoin=userkeywords_table.c.keyword_id==keywords_table.c.keyword_id)
})
When using primaryjoin and secondaryjoin, SQLAlchemy also needs to be aware of which columns in the
relationship reference the other. In most cases, a Table construct will have ForeignKey constructs which take care
of this; however, in the case of reflected tables on a database that does not report FKs (like MySQL ISAM) or when
using join conditions on columns that don’t have foreign keys, the relationship() needs to be told specifically
which columns are “foreign” using the foreign_keys collection:
mapper(Address, addresses_table)
mapper(User, users_table, properties={
    'addresses': relationship(Address,
        primaryjoin=users_table.c.user_id==addresses_table.c.user_id,
        foreign_keys=[addresses_table.c.user_id])
})
Very ambitious custom join conditions may fail to be directly persistable, and in some cases may not even load
correctly. To remove the persistence part of the equation, use the flag viewonly=True on the relationship(),
which establishes it as a read-only attribute (data written to the collection will be ignored on flush()). However, in
extreme cases, consider using a regular Python property in conjunction with Query as follows:
class User(object):
    def _get_addresses(self):
        return object_session(self).query(Address).with_parent(self).filter(...).all()
    addresses = property(_get_addresses)
There's no restriction on how many times you can relate from parent to child. SQLAlchemy can usually figure out what
you want, particularly if the join conditions are straightforward. Below we add a newyork_addresses attribute to
complement the boston_addresses attribute:
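A minimal sketch of that addition, assuming the same users_table/addresses_table as the earlier example:

mapper(User, users_table, properties={
    'boston_addresses': relationship(Address, primaryjoin=
        and_(users_table.c.user_id==addresses_table.c.user_id,
             addresses_table.c.city=='Boston')),
    'newyork_addresses': relationship(Address, primaryjoin=
        and_(users_table.c.user_id==addresses_table.c.user_id,
             addresses_table.c.city=='New York')),
})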
This is a very specific case where relationship() must perform an INSERT and a second UPDATE in order to properly
populate a row (and vice versa an UPDATE and DELETE in order to delete without violating foreign key constraints).
The two use cases are:
• A table contains a foreign key to itself, and a single row will have a foreign key value pointing to its own primary
key.
• Two tables each contain a foreign key referencing the other table, with a row in each table referencing the other.
For example:
user
---------------------------------
user_id name related_user_id
1 ’ed’ 1
Or:
widget entry
------------------------------------------- ---------------------------------
widget_id name favorite_entry_id entry_id name widget_id
1 ’somewidget’ 5 5 ’someentry’ 1
In the first case, a row points to itself. Technically, a database that uses sequences such as PostgreSQL or Oracle
can INSERT the row at once using a previously generated value, but databases which rely upon autoincrement-style
primary key identifiers cannot. The relationship() always assumes a “parent/child” model of row population
during flush, so unless you are populating the primary key/foreign key columns directly, relationship() needs
to use two statements.
In the second case, the “widget” row must be inserted before any referring “entry” rows, but then the “fa-
vorite_entry_id” column of that “widget” row cannot be set until the “entry” rows have been generated. In this case,
it’s typically impossible to insert the “widget” and “entry” rows using just two INSERT statements; an UPDATE must
be performed in order to keep foreign key constraints fulfilled. The exception is if the foreign keys are configured
as “deferred until commit” (a feature some databases support) and if the identifiers were populated manually (again
essentially bypassing relationship()).
To enable the UPDATE after INSERT / UPDATE before DELETE behavior on relationship(), use the
post_update flag on one of the relationships, preferably the many-to-one side:
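A minimal sketch of the widget/entry mapping described below, assuming Table objects named widget_table and entry_table containing the columns shown above:

mapper(Widget, widget_table, properties={
    'entries': relationship(Entry,
        primaryjoin=widget_table.c.widget_id==entry_table.c.widget_id),
    'favorite_entry': relationship(Entry,
        primaryjoin=widget_table.c.favorite_entry_id==entry_table.c.entry_id,
        post_update=True)
})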
When a structure using the above mapping is flushed, the “widget” row will be INSERTed minus the “fa-
vorite_entry_id” value, then all the “entry” rows will be INSERTed referencing the parent “widget” row, and then
an UPDATE statement will populate the “favorite_entry_id” column of the “widget” table (it’s one row at a time for
the time being).
Mapping a one-to-many or many-to-many relationship results in a collection of values accessible through an attribute
on the parent instance. By default, this collection is a list:
mapper(Parent, parent_table, properties={
    'children': relationship(Child)
})

parent = Parent()
parent.children.append(Child())
print parent.children[0]
Collections are not limited to lists. Sets, mutable sequences and almost any other Python object that can act as a con-
tainer can be used in place of the default list, by specifying the collection_class option on relationship().
# use a set
mapper(Parent, parent_table, properties={
    'children': relationship(Child, collection_class=set)
})
parent = Parent()
child = Child()
parent.children.add(child)
assert child in parent.children
You can use your own types for collections as well. For most cases, simply inherit from list or set and add the
custom behavior.
Collections in SQLAlchemy are transparently instrumented. Instrumentation means that normal operations on the col-
lection are tracked and result in changes being written to the database at flush time. Additionally, collection operations
can fire events which indicate some secondary operation must take place. Examples of a secondary operation include
saving the child item in the parent’s Session (i.e. the save-update cascade), as well as synchronizing the state
of a bi-directional relationship (i.e. a backref).
The collections package understands the basic interface of lists, sets and dicts and will automatically apply instrumen-
tation to those built-in types and their subclasses. Object-derived types that implement a basic collection interface are
detected and instrumented via duck-typing:
class ListLike(object):
    def __init__(self):
        self.data = []
    def append(self, item):
        self.data.append(item)
    def remove(self, item):
        self.data.remove(item)
    def extend(self, items):
        self.data.extend(items)
    def __iter__(self):
        return iter(self.data)
    def foo(self):
        return 'foo'
append, remove, and extend are known list-like methods, and will be instrumented automatically. __iter__ is
not a mutator method and won’t be instrumented, and foo won’t be either.
Duck-typing (i.e. guesswork) isn’t rock-solid, of course, so you can be explicit about the interface you are implement-
ing by providing an __emulates__ class attribute:
class SetLike(object):
    __emulates__ = set

    def __init__(self):
        self.data = set()
    def append(self, item):
        self.data.add(item)
    def remove(self, item):
        self.data.remove(item)
    def __iter__(self):
        return iter(self.data)
This class looks list-like because of append, but __emulates__ forces it to set-like. remove is known to be part
of the set interface and will be instrumented.
But this class won’t work quite yet: a little glue is needed to adapt it for use by SQLAlchemy. The ORM needs to
know which methods to use to append, remove and iterate over members of the collection. When using a type like
list or set, the appropriate methods are well-known and used automatically when present. This set-like class does
not provide the expected add method, so we must supply an explicit mapping for the ORM via a decorator.
Decorators can be used to tag the individual methods the ORM needs to manage collections. Use them when your
class doesn’t quite meet the regular interface for its container type, or you simply would like to use a different method
to get the job done.
from sqlalchemy.orm.collections import collection

class SetLike(object):
    __emulates__ = set

    def __init__(self):
        self.data = set()

    @collection.appender
    def append(self, item):
        self.data.add(item)

    def __iter__(self):
        return iter(self.data)
And that’s all that’s needed to complete the example. SQLAlchemy will add instances via the append method.
remove and __iter__ are the default methods for sets and will be used for removing and iteration. Default
methods can be changed as well:
class MyList(list):
    @collection.remover
    def zark(self, item):
        # do something special...
        self.remove(item)

    @collection.iterator
    def hey_use_this_instead_for_iteration(self):
        # ...
        return iter(self)
There is no requirement to be list-, or set-like at all. Collection classes can be any shape, so long as they have the
append, remove and iterate interface marked for SQLAlchemy’s use. Append and remove methods will be called with
a mapped entity as the single argument, and iterator methods are called with no arguments and must return an iterator.
Dictionary-Based Collections
A dict can be used as a collection, but a keying strategy is needed to map entities loaded by the ORM to key,
value pairs. The sqlalchemy.orm.collections package provides several built-in types for dictionary-based
collections:
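A minimal sketch of one of these, assuming an Item class mapped to an items_table and a Note class whose constructor takes a keyword and a text value; attribute_mapped_collection() keys the dictionary on the Note object's keyword attribute:

from sqlalchemy.orm.collections import attribute_mapped_collection

mapper(Item, items_table, properties={
    'notes': relationship(Note,
        collection_class=attribute_mapped_collection('keyword'))
})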
item = Item()
item.notes['color'] = Note('color', 'blue')
print item.notes['color']
These functions each provide a dict subclass with decorated set and remove methods and the keying strategy of
your choice.
The sqlalchemy.orm.collections.MappedCollection class can be used as a base class for your custom
types or as a mix-in to quickly add dict collection support to other classes. It uses a keying function to delegate to
__setitem__ and __delitem__:
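A minimal sketch, assuming Node objects with a name attribute to key on:

from sqlalchemy.orm.collections import MappedCollection

class NodeMap(MappedCollection):
    """Holds Node objects, keyed by their 'name' attribute."""
    def __init__(self, *args, **kw):
        MappedCollection.__init__(self, keyfunc=lambda node: node.name)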
The ORM understands the dict interface just like lists and sets, and will automatically instrument all dict-like meth-
ods if you choose to subclass dict or provide dict-like collection behavior in a duck-typed class. You must
decorate appender and remover methods, however; there are no compatible methods in the basic dictionary interface for
SQLAlchemy to use by default. Iteration will go through itervalues() unless otherwise decorated.
Many custom types and existing library classes can be used as an entity collection type as-is without further ado.
However, it is important to note that the instrumentation process will modify the type, adding decorators around
methods automatically.
The decorations are lightweight and no-op outside of relationships, but they do add unneeded overhead when triggered
elsewhere. When using a library class as a collection, it can be good practice to use the “trivial subclass” trick to restrict
the decorations to just your usage in relationships. For example:
class MyAwesomeList(some.great.library.AwesomeList):
    pass
The ORM uses this approach for built-ins, quietly substituting a trivial subclass when a list, set or dict is used
directly.
The collections package provides additional decorators and support for authoring custom types. See the
sqlalchemy.orm.collections package for more information and discussion of advanced usage and Python
2.3-compatible decoration options.
By default, all inter-object relationships are lazy loading. The scalar or collection attribute associated with a
relationship() contains a trigger which fires the first time the attribute is accessed, which issues a SQL call
at that point:
>>> jack.addresses
SELECT addresses.id AS addresses_id, addresses.email_address AS addresses_email_address,
addresses.user_id AS addresses_user_id
FROM addresses
WHERE ? = addresses.user_id
[5]
[<Address(u'[email protected]')>, <Address(u'[email protected]')>]
A second option for eager loading exists, called “subquery” loading. This kind of eager loading emits an additional
SQL statement for each collection requested, aggregated across all parent objects:
>>> jack = session.query(User).options(subqueryload('addresses')).filter_by(name='jack').all()
SELECT users.id AS users_id, users.name AS users_name, users.fullname AS users_fullname,
users.password AS users_password
FROM users
WHERE users.name = ?
('jack',)
SELECT addresses.id AS addresses_id, addresses.email_address AS addresses_email_address,
The default loader strategy for any relationship() is configured by the lazy keyword argument, which de-
faults to select. Below we set it as joined so that the children relationship is eager loading, using a join:
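A minimal sketch, assuming a Parent/Child mapping as in the earlier examples:

mapper(Parent, parent_table, properties={
    'children': relationship(Child, lazy='joined')
})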
We can also set it to eagerly load using a second query for all collections, using subquery:
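Roughly:

mapper(Parent, parent_table, properties={
    'children': relationship(Child, lazy='subquery')
})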
When querying, all three choices of loader strategy are available on a per-query basis, using the joinedload(),
subqueryload() and lazyload() query options:
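A sketch of each, assuming the same Parent.children relationship and that the option functions have been imported from sqlalchemy.orm:

# eager load 'children' using a LEFT OUTER JOIN
session.query(Parent).options(joinedload('children')).all()

# eager load 'children' using a second query per collection
session.query(Parent).options(subqueryload('children')).all()

# force 'children' back to lazy loading for this query
session.query(Parent).options(lazyload('children')).all()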
To reference a relationship that is deeper than one level, separate the names by periods:
session.query(Parent).options(joinedload('foo.bar.bat')).all()
When using dot-separated names with joinedload() or subqueryload(), the option applies only to the actual
attribute named, and not its ancestors. For example, suppose a mapping from A to B to C, where the relationships,
named atob and btoc, are both lazy-loading. A statement like the following:
session.query(A).options(joinedload('atob.btoc')).all()
will load only A objects to start. When the atob attribute on each A is accessed, the returned B objects will eagerly
load their C objects.
Therefore, to modify the eager load to load both atob as well as btoc, place joinedloads for both:
session.query(A).options(joinedload('atob'), joinedload('atob.btoc')).all()
or use the joinedload_all() shorthand, which spells out the whole chain:
session.query(A).options(joinedload_all('atob.btoc')).all()
There are two other loader strategies available, dynamic loading and no loading; these are described in Working with
Large Collections.
Which type of loading to use typically comes down to optimizing the tradeoff between number of SQL executions,
complexity of SQL emitted, and amount of data fetched. Let's take two examples, a relationship() which
references a collection, and a relationship() that references a scalar many-to-one reference.
• When using the default lazy loading, if you load 100 objects, and then access a collection on each of them, a total
of 101 SQL statements will be emitted, although each statement will typically be a simple SELECT without any
joins.
• When using joined loading, the load of 100 objects and their collections will emit only one SQL statement.
However, the total number of rows fetched will be equal to the sum of the size of all the collections, plus one
extra row for each parent object that has an empty collection. Each row will also contain the full set of columns
represented by the parents, repeated for each collection item - SQLAlchemy does not re-fetch these columns
other than those of the primary key, however most DBAPIs (with some exceptions) will transmit the full data
of each parent over the wire to the client connection in any case. Therefore joined eager loading only makes
sense when the sizes of the collections are relatively small. The LEFT OUTER JOIN can also be more performance
intensive than an INNER JOIN.
• When using subquery loading, the load of 100 objects will emit two SQL statements. The second statement
will fetch a total number of rows equal to the sum of the size of all collections. An INNER JOIN is used, and
a minimum of parent columns are requested, only the primary keys. So a subquery load makes sense when the
collections are larger.
• When multiple levels of depth are used with joined or subquery loading, loading collections-within-collections
will multiply the total number of rows fetched in a cartesian fashion. Both forms of eager loading always join
from the original parent class.
• When using the default lazy loading, a load of 100 objects will, as in the collection case above, emit as many
as 101 SQL statements. However, there is a significant exception to this, in that if the many-to-one reference
is a simple foreign key reference to the target’s primary key, each reference will be checked first in the current
identity map using query.get(). So here, if the collection of objects references a relatively small set of
target objects, or the full set of possible target objects have already been loaded into the session and are strongly
referenced, using the default of lazy='select' is by far the most efficient way to go.
• When using joined loading, the load of 100 objects will emit only one SQL statement. The join will be a
LEFT OUTER JOIN, and the total number of rows will be equal to 100 in all cases. If you know that each
parent definitely has a child (i.e. the foreign key reference is NOT NULL), the joined load can be configured
with innerjoin=True, which is usually specified within the relationship(). For a load of objects
where there are many possible target references which may have not been loaded already, joined loading with
an INNER JOIN is extremely efficient.
• Subquery loading will issue a second load for all the child objects, so for a load of 100 objects there would be
two SQL statements emitted. There’s probably not much advantage here over joined loading, however, except
perhaps that subquery loading can use an INNER JOIN in all cases whereas joined loading requires that the
foreign key is NOT NULL.
The behavior of joinedload() is such that joins are created automatically, the results of which are routed into
collections and scalar references on loaded objects. It is often the case that a query already includes the necessary
joins which represent a particular collection or scalar reference, and the joins added by the joinedload feature are
redundant - yet you’d still like the collections/references to be populated.
For this SQLAlchemy supplies the contains_eager() option. This option is used in the same manner as the
joinedload() option except it is assumed that the Query will specify the appropriate joins explicitly. Below it’s
used with a from_statement load:
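A minimal sketch of that usage, assuming the usual users_table/addresses_table mapping:

# define a SELECT against users with an outer join to addresses, using labeled columns
statement = users_table.outerjoin(addresses_table).select().apply_labels()

# tell the Query to populate User.addresses from those columns
query = session.query(User).options(contains_eager('addresses'))

# results
r = query.from_statement(statement)

The same option works when the join is stated inline with Query.join() or Query.outerjoin():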
session.query(User).outerjoin(User.addresses).options(contains_eager(User.addresses)).all()
If the “eager” portion of the statement is “aliased”, the alias keyword argument to contains_eager() may be
used to indicate it. This is a string alias name or reference to an actual Alias (or other selectable) object:
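A minimal sketch, assuming an alias constructed with aliased() from sqlalchemy.orm:

adalias = aliased(Address)

query = session.query(User).\
    outerjoin((adalias, User.addresses)).\
    options(contains_eager(User.addresses, alias=adalias))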
The alias argument is used only as a source of columns to match up to the result set. You can use it even to match
up the result to arbitrary label names in a string SQL statement, by passing a selectable() which links those labels to
the mapped Table:
eager_columns = select([
    addresses.c.address_id.label('a1'),
    addresses.c.email_address.label('a2'),
    addresses.c.user_id.label('a3')])

# select from a raw SQL statement which uses those label names for the
# addresses table. contains_eager() matches them up.
query = session.query(User).\
    from_statement("select users.*, addresses.address_id as a1, "
        "addresses.email_address as a2, addresses.user_id as a3 "
        "from users left outer join addresses on users.user_id=addresses.user_id").\
    options(contains_eager(User.addresses, alias=eager_columns))
The path given as the argument to contains_eager() needs to be a full path from the starting entity. For example
if we were loading Users->orders->Order->items->Item, the string version would look like:
query(User).options(contains_eager('orders', 'items'))
Using the class-bound descriptors, it looks like:
query(User).options(contains_eager(User.orders, Order.items))
A variant on contains_eager() is the contains_alias() option, which is used in the rare case that the
parent object is loaded from an alias within a user-defined SELECT statement:
# create query, indicating "ulist" is an alias for the main table, and that the
# "addresses" property should be eager loaded
query = session.query(User).options(contains_alias('ulist'), contains_eager('addresses'))
# results
r = query.from_statement(statement)
The default behavior of relationship() is to fully load the collection of items in, as according to the loading
strategy of the relationship. Additionally, the Session by default only knows how to delete objects which are actually
present within the session. When a parent instance is marked for deletion and flushed, the Session loads its full list
of child items in so that they may either be deleted as well, or have their foreign key value set to null; this is to avoid
constraint violations. For large collections of child items, there are several strategies to bypass full loading of child
items both at load time as well as deletion time.
The most useful by far is the dynamic_loader() relationship. This is a variant of relationship() which
returns a Query object in place of a collection when accessed. filter() criterion may be applied as well as limits
and offsets, either explicitly or via array slices:
jack = session.query(User).get(id)
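Assuming a mapping along the lines of 'posts': dynamic_loader(Post) and a Post class with a headline column, usage looks roughly like:

# filter Jack's blog posts without loading the full collection
posts = jack.posts.filter(Post.headline=='this is a post')

# apply limits and offsets via array slices
posts = jack.posts[5:20]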
The dynamic relationship supports limited write operations, via the append() and remove() methods. Since the
read side of the dynamic relationship always queries the database, changes to the underlying collection will not be
visible until the data has been flushed:
jack.posts.append(Post('new post'))
Note that eager/lazy loading options cannot be used in conjunction with dynamic relationships at this time.
Setting Noload
The opposite of the dynamic relationship is simply "noload", specified using lazy='noload':
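A minimal sketch, assuming the MyClass/MyOtherClass mapping used elsewhere in this section and a Table named mytable:

mapper(MyClass, mytable, properties={
    'children': relationship(MyOtherClass, lazy='noload')
})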
Above, the children collection is fully writeable, and changes to it will be persisted to the database as well as
locally available for reading at the time they are added. However when instances of MyClass are freshly loaded from
the database, the children collection stays empty.
Use passive_deletes=True to disable child object loading on a DELETE operation, in conjunction with “ON
DELETE (CASCADE|SET NULL)” on your database to automatically cascade deletes to child objects. Note that
“ON DELETE” is not supported on SQLite, and requires InnoDB tables when using MySQL:
mapper(MyOtherClass, myothertable)
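A minimal sketch of the parent-side mapper being described, assuming a Table named mytable for MyClass:

mapper(MyClass, mytable, properties={
    'children': relationship(MyOtherClass,
        cascade="all, delete-orphan",
        passive_deletes=True)
})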
When passive_deletes is applied, the children relationship will not be loaded into memory when an instance
of MyClass is marked for deletion. The cascade="all, delete-orphan" will take effect for instances of
MyOtherClass which are currently present in the session; however for instances of MyOtherClass which are
not loaded, SQLAlchemy assumes that “ON DELETE CASCADE” rules will ensure that those rows are deleted by
the database and that no foreign key violation will occur.
When the primary key of an entity changes, related items which reference the primary key must also be updated as
well. For databases which enforce referential integrity, it’s required to use the database’s ON UPDATE CASCADE
functionality in order to propagate primary key changes. For those which don’t, the passive_updates flag can
be set to False which instructs SQLAlchemy to issue UPDATE statements individually. The passive_updates
flag can also be False in conjunction with ON UPDATE CASCADE functionality, although in that case it issues
UPDATE statements unnecessarily.
A typical mutable primary key setup might look like:
class User(object):
    pass
class Address(object):
    pass
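A minimal sketch of the mapping, assuming Table objects named users and addresses where addresses carries a foreign key to the users primary key:

mapper(User, users, properties={
    'addresses': relationship(Address, passive_updates=False)
})
mapper(Address, addresses)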
passive_updates is set to True by default. Foreign key references to non-primary key columns are supported as well.
FIVE
USING THE SESSION
The Mapper is the entrypoint to the configurational API of the SQLAlchemy object relational mapper. But the primary
object one works with when using the ORM is the Session.
from sqlalchemy.orm import sessionmaker

# create a configured "Session" class, bound to an engine
Session = sessionmaker(bind=some_engine)

# create a Session
session = Session()

# work with the session
session.add(myobject)
session.commit()
Above, the sessionmaker() call creates a class for us, which we assign to the name Session. This class is
a subclass of the actual sqlalchemy.orm.session.Session class, which will instantiate with a particular
bound engine.
When you write your application, place the call to sessionmaker() somewhere global, and then make your new
Session class available to the rest of your application.
In our previous example regarding sessionmaker(), we specified a bind for a particular Engine. If we’d like
to construct a sessionmaker() without an engine available and bind it later on, or to specify other options to an
existing sessionmaker(), we may use the configure() method:
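For example, roughly (the sqlite URL here is just a placeholder):

Session = sessionmaker()

# ... later, once an engine is available
Session.configure(bind=create_engine('sqlite:///foo.db'))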
It’s actually entirely optional to bind a Session to an engine. If the underlying mapped Table objects use “bound”
metadata, the Session will make use of the bound engine instead (or will even use multiple engines if multiple binds
are present within the mapped tables). “Bound” metadata is described at Creating and Dropping Database Tables.
The Session also has the ability to be bound to multiple engines explicitly. Descriptions of these scenarios are
described in Partitioning Strategies.
The Session can also be explicitly bound to an individual database Connection. Reasons for doing this may in-
clude to join a Session with an ongoing transaction local to a specific Connection object, or to bypass connection
pooling by just having connections persistently checked out and associated with distinct, long running sessions:
engine = create_engine('postgresql://...')
...
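A minimal sketch of the rest of the pattern, assuming a Session class produced by sessionmaker():

# later, within a local scope
connection = engine.connect()

# bind an individual Session to the connection
session = Session(bind=connection)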
Note that create_session() disables all optional “automation” by default. Called with no arguments, the session
produced is not autoflushing, does not auto-expire, and does not maintain a transaction (i.e. it begins and commits a
new transaction for each flush()). SQLAlchemy uses create_session() extensively within its own unit tests.
Configurational arguments accepted by sessionmaker() and create_session() are the same as that of the
Session class itself, and are described at sqlalchemy.orm.sessionmaker().
Note that the defaults of create_session() are the opposite of that of sessionmaker(): autoflush and ex-
pire_on_commit are False, autocommit is True. It is recommended to use the sessionmaker() function instead of
create_session(). create_session() is used to get a session with no automation turned on and is useful
for testing.
It’s helpful to know the states which an instance can have within a session:
• Transient - an instance that’s not in a session, and is not saved to the database; i.e. it has no database identity.
The only relationship such an object has to the ORM is that its class has a mapper() associated with it.
• Pending - when you add() a transient instance, it becomes pending. It hasn't actually been flushed to the
database yet, but it will be when the next flush occurs.
• Persistent - An instance which is present in the session and has a record in the database. You get persistent
instances by either flushing so that the pending instances become persistent, or by querying the database for
existing instances (or moving persistent instances from other sessions into your local session).
• Detached - an instance which has a record in the database, but is not in any session. There’s nothing wrong with
this, and you can use objects normally when they’re detached, except they will not be able to issue any SQL in
order to load collections or attributes which are not yet loaded, or were marked as “expired”.
Knowing these states is important, since the Session tries to be strict about ambiguous operations (such as trying to
save the same object to two different sessions at the same time).
You typically invoke Session() when you first need to talk to your database, and want to save
some objects or load some existing ones. Then, you work with it, save your changes, and then dispose
of it....or at the very least close() it. It’s not a “global” kind of object, and should be handled more
like a “local variable”, as it’s generally not safe to use with concurrent threads. Sessions are very
inexpensive to make, and don't use any resources whatsoever until they are first used...so create some!
There is also a pattern whereby you’re using a contextual session, this is described later in
Contextual/Thread-local Sessions. In this pattern, a helper object is maintaining a Session for
you, most commonly one that is local to the current thread (and sometimes also local to an applica-
tion instance). SQLAlchemy has worked this pattern out such that it still looks like you’re creating
a new session as you need one...so in that case, it’s still a guaranteed win to just say Session()
whenever you want a session.
• Is the Session a cache ?
Yeee...no. It’s somewhat used as a cache, in that it implements the identity map pattern, and stores
objects keyed to their primary key. However, it doesn’t do any kind of query caching. This means, if
you say session.query(Foo).filter_by(name=’bar’), even if Foo(name=’bar’)
is right there, in the identity map, the session has no idea about that. It has to issue SQL to the
database, get the rows back, and then when it sees the primary key in the row, then it can look in the local
identity map and see that the object is already there. It's only when you say query.get({some
primary key}) that the Session doesn’t have to issue a query.
Additionally, the Session stores object instances using a weak reference by default. This also defeats
the purpose of using the Session as a cache, unless the weak_identity_map flag is set to False.
The Session is not designed to be a global object from which everyone consults as a “registry”
of objects. That is the job of a second level cache. A good library for implementing second level
caching is Memcached. It is possible to “sort of” use the Session in this manner, if you set it to
be non-transactional and it never flushes any SQL, but it’s not a terrific solution, since if concurrent
threads load the same objects at the same time, you may have multiple copies of the same objects
present in collections.
• How can I get the Session for a certain object ?
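The usual answer is the object_session() classmethod available on Session, roughly:

session = Session.object_session(someobject)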
5.3.3 Querying
The query() function takes one or more entities and returns a new Query object which will issue mapper queries
within the context of this Session. An entity is defined as a mapped class, a Mapper object, an orm-enabled descriptor,
or an AliasedClass object:
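A few rough examples, assuming the User/Address mappings used throughout this chapter:

# query from a class
session.query(User).filter_by(name='ed').all()

# query with multiple classes; returns tuples
session.query(User, Address).join('addresses').filter_by(name='ed').all()

# query using orm-enabled descriptors
session.query(User.name, User.fullname).all()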
When Query returns results, each object instantiated is stored within the identity map. When a row matches an
object which is already present, the same object is returned. In the latter case, whether or not the row is populated
onto an existing object depends upon whether the attributes of the instance have been expired or not. A default-
configured Session automatically expires all instances along transaction boundaries, so that with a normally isolated
transaction, there shouldn’t be any issue of instances representing data which is stale with regards to the current
transaction.
add() is used to place instances in the session. For transient (i.e. brand new) instances, this will have the effect of
an INSERT taking place for those instances upon the next flush. For instances which are persistent (i.e. were loaded
by this session), they are already present and do not need to be added. Instances which are detached (i.e. have been
removed from a session) may be re-associated with a session using this method:
user1 = User(name='user1')
user2 = User(name='user2')
session.add(user1)
session.add(user2)
The add() operation cascades along the save-update cascade. For more details see the section Cascades.
5.3.5 Merging
merge() reconciles the current state of an instance and its associated children with existing data in the database, and
returns a copy of the instance associated with the session. Usage is as follows:
merged_object = session.merge(existing_object)
When given an instance, merge() follows these steps:
• It examines the primary key of the instance. If it's present, it attempts to load an instance with that primary key
(or pulls from the local identity map).
• If there’s no primary key on the given instance, or the given primary key does not exist in the database, a new
instance is created.
• The state of the given instance is then copied onto the located/newly created instance.
• The operation is cascaded to associated child items along the merge cascade. Note that all changes present on
the given instance, including changes to collections, are merged.
• The new instance is returned.
With merge(), the given instance is not placed within the session, and can be associated with a different session or
detached. merge() is very useful for taking the state of any kind of object structure without regard for its origins or
current session associations and placing that state within a session. Here are two examples:
• An application which reads an object structure from a file and wishes to save it to the database might parse the
file, build up the structure, and then use merge() to save it to the database, ensuring that the data within the
file is used to formulate the primary key of each element of the structure. Later, when the file has changed, the
same process can be re-run, producing a slightly different object structure, which can then be merged in again,
and the Session will automatically update the database to reflect those changes.
• A web application stores mapped entities within an HTTP session object. When each request starts up, the
serialized data can be merged into the session, so that the original entity may be safely shared among requests
and threads.
merge() is frequently used by applications which implement their own second level caches. This refers to an
application which uses an in-memory dictionary, or a tool like Memcached, to store objects over long running spans
of time. When such an object needs to exist within a Session, merge() is a good choice since it leaves the
original cached object untouched. For this use case, merge provides a keyword option called load=False. When
this boolean flag is set to False, merge() will not issue any SQL to reconcile the given object against the current
state of the database, thereby reducing query overhead. The limitation is that the given object and all of its children
may not contain any pending changes, and it’s also of course possible that newer information in the database will not
be present on the merged object, since no load is issued.
5.3.6 Deleting
The delete() method places an instance into the Session’s list of objects to be marked as deleted:
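Roughly:

# mark obj1 for deletion; the DELETE is emitted at flush time
session.delete(obj1)
session.flush()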
The big gotcha with delete() is that nothing is removed from collections. For example, if a User has a collection of
three Addresses, deleting an Address will not remove it from user.addresses:
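A rough sketch of the behavior, assuming a loaded user with three addresses:

address = user.addresses[1]
session.delete(address)
session.flush()

# the Address row is deleted, but the object remains in the
# user.addresses collection until that collection is refreshed
assert address in user.addresses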
The caveat with Session.delete() is that you need to have an object handy already in order to delete. The Query
includes a delete() method which deletes based on filtering criteria:
session.query(User).filter(User.id==7).delete()
The Query.delete() method includes functionality to “expire” objects already in the session which match the
criteria. However it does have some caveats, including that “delete” and “delete-orphan” cascades won’t be fully
expressed for collections which are already loaded. See the API docs for delete() for more details.
5.3.7 Flushing
When the Session is used with its default configuration, the flush step is nearly always done transparently. Specif-
ically, the flush occurs before any individual Query is issued, as well as within the commit() call before the
transaction is committed. It also occurs before a SAVEPOINT is issued when begin_nested() is used.
Regardless of the autoflush setting, a flush can always be forced by issuing flush():
session.flush()
The “flush-on-Query” aspect of the behavior can be disabled by constructing sessionmaker() with the flag
autoflush=False:
Session = sessionmaker(autoflush=False)
Additionally, autoflush can be temporarily disabled by setting the autoflush flag at any time:
mysession = Session()
mysession.autoflush = False
5.3.8 Committing
commit() is used to commit the current transaction. It always issues flush() beforehand to flush any remaining
state to the database; this is independent of the “autoflush” setting. If no transaction is present, it raises an error. Note
that the default behavior of the Session is that a transaction is always present; this behavior can be disabled by
setting autocommit=True. In autocommit mode, a transaction can be initiated by calling the begin() method.
Another behavior of commit() is that by default it expires the state of all instances present after the commit is
complete. This is so that when the instances are next accessed, either through attribute access or by them being present
in a Query result set, they receive the most recent state. To disable this behavior, configure sessionmaker() with
expire_on_commit=False.
Normally, instances loaded into the Session are never changed by subsequent queries; the assumption is that the
current transaction is isolated so the state most recently loaded is correct as long as the transaction continues. Setting
autocommit=True works against this model to some degree since the Session behaves in exactly the same way
with regard to attribute state, except no transaction is present.
rollback() rolls back the current transaction. With a default configured session, the post-rollback state of the
session is as follows:
• All connections are rolled back and returned to the connection pool, unless the Session was bound directly to a
Connection, in which case the connection is still maintained (but still rolled back).
• Objects which were initially in the pending state when they were added to the Session within the lifespan
of the transaction are expunged, corresponding to their INSERT statement being rolled back. The state of their
attributes remains unchanged.
• Objects which were marked as deleted within the lifespan of the transaction are promoted back to the persistent
state, corresponding to their DELETE statement being rolled back. Note that if those objects were first pending
within the transaction, that operation takes precedence instead.
• All objects not expunged are fully expired.
With that state understood, the Session may safely continue usage after a rollback occurs.
When a flush() fails, typically for reasons like primary key, foreign key, or “not nullable” constraint violations,
a rollback() is issued automatically (it’s currently not possible for a flush to continue after a partial failure).
However, the flush process always uses its own transactional demarcator called a subtransaction, which is described
more fully in the docstrings for Session. What it means here is that even though the database transaction has been
rolled back, the end user must still issue rollback() to fully reset the state of the Session.
5.3.10 Expunging
Expunge removes an object from the Session, sending persistent instances to the detached state, and pending instances
to the transient state:
session.expunge(obj1)
To remove all items, call expunge_all() (this method was formerly known as clear()).
5.3.11 Closing
The close() method issues an expunge_all(), and releases any transactional/connection resources. When con-
nections are returned to the connection pool, transactional state is rolled back as well.
To assist with the Session’s “sticky” behavior of instances which are present, individual objects can have all of their
attributes immediately re-loaded from the database, or marked as “expired” which will cause a re-load to occur upon
the next access of any of the object’s mapped attributes. Any changes marked on the object are discarded:
When an expired object reloads, all non-deferred column-based attributes are loaded in one query. Current behavior
for expired relationship-based attributes is that they load individually upon access - this behavior may be enhanced in
a future release. When a refresh is invoked on an object, the ultimate operation is equivalent to a Query.get(), so
any relationships configured with eager loading should also load within the scope of the refresh operation.
refresh() and expire() also support being passed a list of individual attribute names in which to be refreshed.
These names can refer to any attribute, column-based or relationship based:
# expire the attributes 'hello' and 'world' on objects obj1 and obj2;
# the attributes will be reloaded on the next access:
session.expire(obj1, ['hello', 'world'])
session.expire(obj2, ['hello', 'world'])
The full contents of the session may be expired at once using expire_all():
session.expire_all()
refresh() and expire() are usually not needed when working with a default-configured Session.
The usual need is when an UPDATE or DELETE has been issued manually within the transaction using
Session.execute().
The Session itself acts somewhat like a set-like collection. All items present may be accessed using the iterator
interface:
if obj in session:
print "Object is present"
The session is also keeping track of all newly created (i.e. pending) objects, all objects which have had changes since
they were last loaded or saved (i.e. “dirty”), and everything that’s been marked as deleted:
Note that objects within the session are by default weakly referenced. This means that when they are dereferenced in
the outside application, they fall out of scope from within the Session as well and are subject to garbage collection by
the Python interpreter. The exceptions to this include objects which are pending, objects which are marked as deleted,
or persistent objects which have pending changes on them. After a full flush, these collections are all empty, and all
objects are again weakly referenced. To disable the weak referencing behavior and force all objects within the ses-
sion to remain until explicitly expunged, configure sessionmaker() with the weak_identity_map=False
setting.
5.4 Cascades
Mappers support the concept of configurable cascade behavior on relationship() constructs. This behavior
controls how the Session should treat the instances that have a parent-child relationship with another instance that is
operated upon by the Session. Cascade is indicated as a comma-separated list of string keywords, with the possible
values all, delete, save-update, refresh-expire, merge, expunge, and delete-orphan.
Cascading is configured by setting the cascade keyword argument on a relationship():
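A minimal sketch of the mapper being described, assuming Order, Item and User classes with corresponding tables:

mapper(Order, order_table, properties={
    'items': relationship(Item, cascade="all, delete-orphan"),
    'customer': relationship(User, cascade="save-update"),
})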
The above mapper specifies two relationships, items and customer. The items relationship specifies “all, delete-
orphan” as its cascade value, indicating that all add, merge, expunge, refresh, delete and expire oper-
ations performed on a parent Order instance should also be performed on the child Item instances attached to it.
The delete-orphan cascade value additionally indicates that if an Item instance is no longer associated with an
Order, it should also be deleted. The “all, delete-orphan” cascade argument allows a so-called lifecycle relationship
between an Order and an Item object.
The customer relationship specifies only the “save-update” cascade value, indicating most operations will not be
cascaded from a parent Order instance to a child User instance except for the add() operation. “save-update”
cascade indicates that an add() on the parent will cascade to all child items, and also that items added to a parent
which is already present in a session will also be added to that same session. “save-update” cascade also cascades
the pending history of a relationship()-based attribute, meaning that objects which were removed from a scalar or
collection attribute whose changes have not yet been flushed are also placed into the new session - this so that foreign
key clear operations and deletions will take place (new in 0.6).
Note that the delete-orphan cascade only functions for relationships where the target object can have a single
parent at a time, meaning it is only appropriate for one-to-one or one-to-many relationships. For a relationship()
which establishes one-to-one via a local foreign key, i.e. a many-to-one that stores only a single parent, or one-to-
one/one-to-many via a “secondary” (association) table, a warning will be issued if delete-orphan is configured.
To disable this warning, also specify the single_parent=True flag on the relationship, which constrains objects
to allow attachment to only one parent at a time.
The default value for cascade on relationship() is save-update, merge.
Session = sessionmaker()
session = Session()
try:
    item1 = session.query(Item).get(1)
    item2 = session.query(Item).get(2)
    item1.foo = 'bar'
    item2.bar = 'foo'
    session.commit()
except:
    session.rollback()
    raise
A session which is configured with autocommit=True may be placed into a transaction using begin(). With an
autocommit=True session that’s been placed into a transaction using begin(), the session releases all connection
resources after a commit() or rollback() and remains transaction-less (with the exception of flushes) until the
next begin() call:
Session = sessionmaker(autocommit=True)
session = Session()
session.begin()
try:
    item1 = session.query(Item).get(1)
    item2 = session.query(Item).get(2)
    item1.foo = 'bar'
    item2.bar = 'foo'
    session.commit()
except:
    session.rollback()
    raise
The begin() method also returns a transactional token which is compatible with the Python 2.6 with statement:
Session = sessionmaker(autocommit=True)
session = Session()
with session.begin():
    item1 = session.query(Item).get(1)
    item2 = session.query(Item).get(2)
    item1.foo = 'bar'
    item2.bar = 'foo'
SAVEPOINT transactions, if supported by the underlying engine, may be delineated using the begin_nested()
method:
Session = sessionmaker()
session = Session()
session.add(u1)
session.add(u2)
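A rough continuation of the example, assuming a third object u3:

session.begin_nested()      # establish a savepoint
session.add(u3)
session.rollback()          # rolls back u3, keeps u1 and u2

session.commit()            # commits u1 and u2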
begin_nested() may be called any number of times, which will issue a new SAVEPOINT with a unique identifier
for each call. For each begin_nested() call, a corresponding rollback() or commit() must be issued.
When begin_nested() is called, a flush() is unconditionally issued (regardless of the autoflush setting).
This is so that when a rollback() occurs, the full state of the session is expired, thus causing all subsequent
attribute/instance access to reference the full state of the Session right before begin_nested() was called.
Finally, for MySQL, PostgreSQL, and soon Oracle as well, the session can be instructed to use two-phase commit
semantics. This will coordinate the committing of transactions across databases so that the transaction is either com-
mitted or rolled back in all databases. You can also prepare() the session for interacting with transactions not
managed by SQLAlchemy. To use two phase transactions set the flag twophase=True on the session:
engine1 = create_engine('postgresql://db1')
engine2 = create_engine('postgresql://db2')
Session = sessionmaker(twophase=True)
session = Session()
# commit. session will issue a flush to all DBs, and a prepare step to all DBs,
# before committing both transactions
session.commit()
class SomeClass(object):
pass
mapper(SomeClass, some_table)
someobject = session.query(SomeClass).get(5)
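A sketch of the technique, assuming some_table has a numeric column named value mapped to the value attribute:

# assign a SQL expression to the attribute; it is rendered into the UPDATE at flush time
someobject.value = some_table.c.value + 1

# issues "UPDATE some_table SET value=value+1"
session.commit()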
This technique works both for INSERT and UPDATE statements. After the flush/commit operation, the value
attribute on someobject above is expired, so that when next accessed the newly generated value will be loaded
from the database.
Session = sessionmaker(bind=engine)
session = Session()
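SQL strings and expression constructs can then be executed directly on the session; a rough sketch, assuming a Table object named my_table:

# execute a string statement with a bind parameter
result = session.execute("select * from my_table where id=:id", {'id': 7})

# execute a SQL expression construct
result = session.execute(select([my_table]).where(my_table.c.id == 7))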
The current Connection held by the Session is accessible using the connection() method:
connection = session.connection()
The examples above deal with a Session that’s bound to a single Engine or Connection. To execute statements
using a Session which is bound either to multiple engines, or none at all (i.e. relies upon bound metadata), both
execute() and connection() accept a mapper keyword argument, which is passed a mapped class or Mapper
instance, which is used to locate the proper context for the desired engine:
Session = sessionmaker()
session = Session()
connection = session.connection(MyMappedClass)
Session = sessionmaker()
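A minimal sketch of the pattern being described, assuming an engine variable is available:

# local scope, such as within a test or controller function
connection = engine.connect()

# begin an external transaction on the connection
trans = connection.begin()

# bind the Session to that connection
session = Session(bind=connection)

# ... work with the session ...

session.commit()    # flushes and commits the subtransaction
session.close()
trans.commit()      # commits the actual DBAPI transaction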
Note that above, we issue a commit() both on the Session as well as the Transaction. This is an example of
where we take advantage of Connection‘s ability to maintain subtransactions, or nested begin/commit pairs. The
Session is used exactly as though it were managing the transaction on its own; its commit() method issues its
flush(), and commits the subtransaction. The subsequent transaction the Session starts after commit will not
begin until it’s next used. Above we issue a close() to prevent this from occurring. Finally, the actual transaction
is committed using Transaction.commit().
When using the threadlocal engine context, the process above is simplified; the Session uses the same connec-
tion/transaction as everyone else in the current thread, whether or not you explicitly bind it:
session = Session() # session takes place in the transaction like everyone else
# ... go nuts
The scoped_session() function wraps around the sessionmaker() function, and produces an object which
behaves the same as the Session subclass returned by sessionmaker():
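Roughly:

from sqlalchemy.orm import scoped_session, sessionmaker

Session = scoped_session(sessionmaker())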
However, when you instantiate this Session “class”, in reality the object is pulled from a threadlocal variable, or if
it doesn’t exist yet, it’s created using the underlying class generated by sessionmaker():
>>> # call Session() the first time. the new Session instance is created.
>>> session = Session()
>>> # later, in the same application thread, someone else calls Session()
>>> session2 = Session()
Since the Session() constructor now returns the same Session object every time within the current thread, the
object returned by scoped_session() also implements most of the Session methods and properties at the
“class” level, such that you don’t even need to instantiate Session():
u2 = User()
Session.add(u2)

# commit changes using the "class-level" Session
Session.commit()
After remove() is called, the next operation with the contextual session will start a new Session for the current
thread.
A (really, really) common question is when does the contextual session get created, when does it get disposed ? We’ll
consider a typical lifespan as used in a web application:
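A rough, hypothetical controller function illustrating the pattern:

def handle_request(request):
    session = Session()        # contextual Session is created on first use
    try:
        # ... load objects, make changes ...
        session.commit()
    finally:
        Session.remove()       # dispose of the contextual session at the end of the request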
The above example illustrates an explicit call to Session.remove(). This has the effect such that each web request
starts fresh with a brand new session. When integrating with a web framework, there are actually many options on how
to proceed for this step:
• Session.remove() - this is the most cut and dry approach; the Session is thrown away, all of its transac-
tional/connection resources are closed out, everything within it is explicitly gone. A new Session will be
used on the next request.
• Session.close() - Similar to calling remove(), in that all objects are explicitly expunged and all transac-
tional/connection resources closed, except the actual Session object hangs around. It doesn’t make too much
difference here unless the start of the web request would like to pass specific options to the initial construction
of Session(), such as a specific Engine to bind to.
• Session.commit() - In this case, the behavior is that any remaining changes pending are flushed, and the trans-
action is committed. The full state of the session is expired, so that when the next web request is started, all data
will be reloaded. In reality, the contents of the Session are weakly referenced anyway so it's likely that it will
be empty on the next request in any case.
• Session.rollback() - Similar to calling commit, except we assume that the user would have called commit ex-
plicitly if that was desired; the rollback() ensures that no transactional state remains and expires all data, in
the case that the request was aborted and did not roll back itself.
• do nothing - this is a valid option as well. The controller code is responsible for doing one of the above steps at
the end of the request.
Vertical partitioning places different kinds of objects, or different tables, across multiple databases:
engine1 = create_engine('postgresql://db1')
engine2 = create_engine('postgresql://db2')

Session = sessionmaker(twophase=True)
# bind each mapped class to its own engine (User and Account stand in
# for whichever classes live in each database)
Session.configure(binds={User: engine1, Account: engine2})
session = Session()
Horizontal partitioning partitions the rows of a single table (or a set of tables) across multiple databases.
See the “sharding” example in attribute_shard.py
from sqlalchemy.orm.interfaces import SessionExtension

class MySessionExtension(SessionExtension):
    def before_commit(self, session):
        # react before the commit occurs
        pass

Session = sessionmaker(extension=MySessionExtension())
or with create_session():
session = create_session(extension=MySessionExtension())
The same SessionExtension instance can be used with any number of sessions.
SIX
DATABASE ENGINES
The Engine is the starting point for any SQLAlchemy application. It’s “home base” for the actual database and its
DBAPI, delivered to the SQLAlchemy application through a connection pool and a Dialect, which describes how to
talk to a specific kind of database/DBAPI combination.
The general structure is this:
                                  +-----------+                   __________
                              /---|   Pool    |---\               (__________)
            +-------------+  /    +-----------+    \  +--------+  |          |
connect() <--|   Engine   |-x                      x--| DBAPI  |--| database |
            +-------------+  \    +-----------+    /  +--------+  |          |
                              \---|  Dialect  |---/                |__________|
                                  +-----------+                   (__________)
Where above, a Engine references both a Dialect and Pool, which together interpret the DBAPI’s module
functions as well as the behavior of the database.
Creating an engine is just a matter of issuing a single call, create_engine():
engine = create_engine('postgresql://scott:tiger@localhost:5432/mydatabase')
The above engine invokes the postgresql dialect and a connection pool which references localhost:5432.
Note that the appropriate usage of create_engine() is once per particular configuration, held globally for the
lifetime of a single application process (not including child processes via fork() - these would require a new engine).
A single Engine manages connections on behalf of the process and is intended to be called upon in a concurrent
fashion. Creating engines for each particular operation is not the intended usage.
The engine can be used directly to issue SQL to the database. The most generic way is to use connections, which you
get via the connect() method:
connection = engine.connect()
result = connection.execute("select username from users")
for row in result:
    print "username:", row['username']
connection.close()
The connection is an instance of Connection, which is a proxy object for an actual DBAPI connection. The
returned result is an instance of ResultProxy, which acts very much like a DBAPI cursor.
When you say engine.connect(), a new Connection object is created, and a DBAPI connection is retrieved
from the connection pool. Later, when you call connection.close(), the DBAPI connection is returned to the
pool; nothing is actually “closed” from the perspective of the database.
To execute some SQL more quickly, you can skip the Connection part and just say:
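A sketch of this form, reusing the query from the previous example:

result = engine.execute("select username from users")
for row in result:
    print "username:", row['username']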
Where above, the execute() method on the Engine does the connect() part for you, and returns the
ResultProxy directly. The actual Connection is inside the ResultProxy, waiting for you to finish read-
ing the result. In this case, when you close() the ResultProxy, the underlying Connection is closed, which
returns the DBAPI connection to the pool.
To summarize the above two examples, when you use a Connection object, it’s known as explicit execution. When
you don’t see the Connection object, but you still use the execute() method on the Engine, it’s called explicit,
connectionless execution. A third variant of execution also exists called implicit execution; this will be described
later.
The Engine and Connection can do a lot more than what we illustrated above; SQL strings are only its most
rudimentary function. Later chapters will describe how “constructed SQL” expressions can be used with engines; in
many cases, you don’t have to deal with the Engine at all after it’s created. The Object Relational Mapper (ORM),
an optional feature of SQLAlchemy, also uses the Engine in order to get at connections; that’s also a case where you
can often create the engine once, and then forget about it.
• yes / Python platform - The SQLAlchemy dialect is mostly or fully operational on the target platform.
• yes / OS platform - The DBAPI supports that platform.
• no / Python platform - The DBAPI does not support that platform, or there is no SQLAlchemy dialect support.
• no / OS platform - The DBAPI does not support that platform.
• partial - the DBAPI is partially usable on the target platform but has major unresolved issues.
• development - a development version of the dialect exists, but is not yet usable.
• thirdparty - the dialect itself is maintained by a third party, who should be consulted for information on current
support.
• * - indicates the given DBAPI is the “default” for SQLAlchemy, i.e. when just the database name is specified
Further detail on dialects is available at sqlalchemy.dialects, as well as additional notes on the wiki at Database Notes.

The URL passed to create_engine() takes the form:

dialect+driver://username:password@host:port/database
Dialect names include the identifying name of the SQLAlchemy dialect, such as sqlite, mysql,
postgresql, oracle, mssql, and firebird. The driver name is the name of the DBAPI, in all lowercase letters, to
be used to connect to the database. If not specified, a “default” DBAPI will be imported if available - this
default is typically the most widely known driver available for that backend (i.e. cx_oracle, pysqlite/sqlite3, psycopg2,
mysqldb). For Jython connections, specify the zxjdbc driver, which is the JDBC-DBAPI bridge included with Jython.
# postgresql on Jython
pg_db = create_engine('postgresql+zxjdbc://scott:tiger@localhost/mydatabase')

# mysql on Jython
mysql_db = create_engine('mysql+zxjdbc://localhost/foo')
SQLite connects to file based databases. The same URL format is used, omitting the hostname, and using the “file”
portion as the filename of the database. This has the effect of four slashes being present for an absolute file path:
# sqlite://<nohostname>/<path>
# where <path> is relative:
sqlite_db = create_engine('sqlite:///foo.db')
sqlite_memory_db = create_engine('sqlite://')
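An absolute file path keeps the same form; on a POSIX system the three slashes are followed by the path's
leading slash, giving four (the path shown is illustrative):

# absolute path, yielding four slashes
sqlite_db = create_engine('sqlite:////absolute/path/to/foo.db')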
The Engine will ask the connection pool for a connection when the connect() or execute() methods are
called. The default connection pool, QueuePool, as well as the default connection pool used with SQLite,
SingletonThreadPool, will open connections to the database on an as-needed basis. As concurrent statements
are executed, QueuePool will grow its pool of connections to a default size of five, and will allow a default “over-
flow” of ten. Since the Engine is essentially “home base” for the connection pool, it follows that you should keep a
single Engine per database established within an application, rather than creating a new one for each connection.
Custom arguments used when issuing the connect() call to the underlying DBAPI may be issued in three distinct
ways. String-based arguments can be passed directly from the URL string as query arguments:
db = create_engine('postgresql://scott:tiger@localhost/test?argument1=foo&argument2=bar')
If SQLAlchemy’s database connector is aware of a particular query argument, it may convert its type from string to its
proper type.
create_engine() also takes an argument connect_args which is an additional dictionary that will be passed
to connect(). This can be used when arguments of a type other than string are required, and SQLAlchemy’s
database connector has no type conversion logic present for that parameter:
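A sketch, where argument1 and argument2 stand in for DBAPI-specific keyword arguments:

db = create_engine('postgresql://scott:tiger@localhost/test',
                   connect_args={'argument1': 17, 'argument2': 'bar'})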
The most customizable connection method of all is to pass a creator argument, which specifies a callable that
returns a DBAPI connection:
def connect():
    return psycopg.connect(user='scott', host='localhost')

db = create_engine('postgresql://', creator=connect)
connection = engine.connect()
result = connection.execute(select([table1], table1.c.col1==5))
for row in result:
    print row['col1'], row['col2']
connection.close()
The above SQL construct is known as a select(). The full range of SQL constructs available are described in SQL
Expression Language Tutorial.
Both Connection and Engine fulfill an interface known as Connectable which specifies common functional-
ity between the two objects, namely being able to call connect() to return a Connection object (Connection
just returns itself), and being able to call execute() to get a result set. Following this, most SQLAlchemy func-
tions and objects which accept an Engine as a parameter or attribute with which to execute SQL will also accept a
Connection. This argument is named bind:
engine = create_engine('sqlite:///:memory:')
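# a sketch of passing either object as the "bind", using a hypothetical table
meta = MetaData()
sometable = Table('sometable', meta, Column('col1', Integer))

# create the table, using the Engine as the bind
sometable.create(bind=engine)

# drop the table, using a Connection as the bind
connection = engine.connect()
sometable.drop(bind=connection)
connection.close()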
Connection facts:
• the Connection object is not thread-safe. While a Connection can be shared among threads using properly
synchronized access, this is also not recommended as many DBAPIs have issues with, if not outright disallow,
sharing of connection state between threads.
• The Connection object represents a single DBAPI connection checked out from the connection pool. In this
state, the connection pool has no effect upon the connection, including its expiration or timeout state. For the
connection pool to properly manage connections, connections should be returned to the connection pool (i.e.
connection.close()) whenever the connection is not in use. If your application has a need for management
of multiple connections or is otherwise long running (this includes all web applications, threaded or not), don’t
hold a single connection open at the module level.
trans = connection.begin()
try:
    r1 = connection.execute(table1.select())
    connection.execute(table1.insert(), col1=7, col2='this is some data')
    trans.commit()
except:
    trans.rollback()
    raise
The Transaction object also handles “nested” behavior by keeping track of the outermost begin/commit pair.
In this example, two functions both issue a transaction on a Connection, but only the outermost Transaction object
actually takes effect when it is committed.
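A sketch matching the description of method_a and method_b that follows:

def method_a(connection):
    trans = connection.begin()     # open a transaction
    try:
        method_b(connection)
        trans.commit()             # transaction is committed here
    except:
        trans.rollback()           # rolls back the transaction unconditionally
        raise

def method_b(connection):
    trans = connection.begin()     # interpreted as a "nested" begin
    try:
        connection.execute(table1.insert(), col1=7, col2='nested data')
        trans.commit()             # the transaction is not committed yet
    except:
        trans.rollback()           # rolls back the transaction unconditionally
        raise

# open a Connection and call method_a
conn = engine.connect()
method_a(conn)
conn.close()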
Above, method_a is called first, which calls connection.begin(). Then it calls method_b. When
method_b calls connection.begin(), it just increments a counter that is decremented when it calls
commit(). If either method_a or method_b calls rollback(), the whole transaction is rolled back. The trans-
action is not committed until method_a calls the commit() method. This “nesting” behavior allows the creation
of functions which “guarantee” that a transaction will be used if one was not already available, but will automatically
participate in an enclosing transaction if one exists.
Note that SQLAlchemy’s Object Relational Mapper also provides a way to control transaction scope at a higher level;
this is described in Managing Transactions. Transaction Facts:
• the Transaction object, just like its parent Connection, is not thread-safe.
The above transaction example illustrates how to use Transaction so that several executions can take part
in the same transaction. What happens when we issue an INSERT, UPDATE or DELETE call without using
Transaction? The answer is autocommit. While many DBAPIs implement a flag called autocommit, the
current SQLAlchemy behavior is such that it implements its own autocommit. This is achieved by detecting state-
ments which represent data-changing operations, i.e. INSERT, UPDATE, DELETE, etc., and then issuing a COMMIT
automatically if no transaction is in progress. The detection is based on compiled statement attributes, or in the case
of a text-only statement via regular expressions.
conn = engine.connect()
conn.execute("INSERT INTO users VALUES (1, ’john’)") # autocommits
A summary of all three methods follows below. First, assume the usage of the following MetaData and Table
objects; while we haven’t yet introduced these concepts, for now you only need to know that we are representing a
database table, and are creating an “executable” SQL construct which issues a statement to the database. These objects
are described in Database Meta Data.
meta = MetaData()
users_table = Table('users', meta,
    Column('id', Integer, primary_key=True),
    Column('name', String(50))
)
Explicit execution delivers the SQL text or constructed SQL expression to the execute() method of Connection:
engine = create_engine('sqlite:///file.db')
connection = engine.connect()
result = connection.execute(users_table.select())
for row in result:
    # ....
connection.close()
Explicit, connectionless execution delivers the expression to the execute() method of Engine:
engine = create_engine('sqlite:///file.db')
result = engine.execute(users_table.select())
for row in result:
    # ....
result.close()
Implicit execution is also connectionless, and calls the execute() method on the expression itself, utilizing the fact
that either an Engine or Connection has been bound to the expression object (binding is discussed further in the
next section, Database Meta Data):
engine = create_engine('sqlite:///file.db')
meta.bind = engine
result = users_table.select().execute()
for row in result:
    # ....
result.close()
In both “connectionless” examples, the Connection is created behind the scenes; the ResultProxy returned by
the execute() call references the Connection used to issue the SQL statement. When we issue close() on the
ResultProxy, or if the result set object falls out of scope and is garbage collected, the underlying Connection
is closed for us, resulting in the DBAPI connection being returned to the pool.
The “threadlocal” engine strategy is used by non-ORM applications which wish to bind a transaction to the current
thread, such that all parts of the application can participate in that transaction implicitly without the need to explicitly
reference a Connection. “threadlocal” is designed for a very specific pattern of use, and is not appropriate unless
this very specific pattern, described below, is what’s desired. It has no impact on the “thread safety” of SQLAlchemy
components or one’s application. It also should not be used when using an ORM Session object, as the Session it-
self represents an ongoing transaction and itself handles the job of maintaining connection and transactional resources.
Enabling threadlocal is achieved as follows:
db = create_engine('mysql://localhost/test', strategy='threadlocal')
When the engine above is used in a “connectionless” style, meaning engine.execute() is called, a DBAPI
connection is retrieved from the connection pool and then associated with the current thread. Subsequent operations on
the Engine while the DBAPI connection remains checked out will make use of the same DBAPI connection object.
The connection stays allocated until all returned ResultProxy objects are closed, which occurs for a particular
ResultProxy after all pending results are fetched, or immediately for an operation which returns no rows (such as
an INSERT).
# execute one statement and receive results. r1 now references a DBAPI connection resource
r1 = db.execute("select * from table1")
# execute a second statement and receive results. r2 now references the *same* resource as r1
r2 = db.execute("select * from table2")
# close r2. with no more references to the underlying connection resources, they
# are returned to the pool.
r2.close()
The above example does not illustrate any pattern that is particularly useful, as it is not a frequent occurrence that
two execute/result fetching operations “leapfrog” one another. There is a slight savings of connection pool checkout
overhead between the two operations, and an implicit sharing of the same transactional context, but since there is no
explicitly declared transaction, this association is short lived.
The real usage of “threadlocal” comes when we want several operations to occur within the scope of a shared trans-
action. The Engine now has begin(), commit() and rollback() methods which will retrieve a connection
resource from the pool and establish a new transaction, maintaining the connection against the current thread until the
transaction is committed or rolled back:
db.begin()
try:
    call_operation1()
    call_operation2()
    db.commit()
except:
    db.rollback()
call_operation1() and call_operation2() can make use of the Engine as a global variable, using the
“connectionless” execution style, and their operations will participate in the same transaction:
def call_operation1():
    engine.execute("insert into users values (?, ?)", 1, "john")

def call_operation2():
    users.update(users.c.user_id==5).execute(name='ed')
When using threadlocal, operations that do call upon the engine.connect() method will receive a Connection
that is outside the scope of the transaction. This can be used for operations such as logging the status of an operation
regardless of transaction success:
db.begin()
conn = db.connect()
try:
    conn.execute(log_table.insert(), message="Operation started")
    call_operation1()
    call_operation2()
    db.commit()
    conn.execute(log_table.insert(), message="Operation succeeded")
except:
    db.rollback()
    conn.execute(log_table.insert(), message="Operation failed")
finally:
    conn.close()
Functions which are written to use an explicit Connection object, but wish to participate in the threadlocal
transaction, can receive their Connection object from the contextual_connect() method, which returns
a Connection that is inside the scope of the transaction:
conn = db.contextual_connect()
call_operation3(conn)
conn.close()
Calling close() on the “contextual” connection does not release the connection resources to the pool if other re-
sources are making use of it. A resource-counting mechanism is employed so that the connection is released back to
the pool only when all users of that connection, including the transaction established by engine.begin(), have
been completed.
So remember - if you’re not sure whether you need to use strategy="threadlocal" or not, the answer is no! It’s
driven by a specific programming pattern that is generally not the norm.
• sqlalchemy.engine - controls SQL echoing. set to logging.INFO for SQL query output,
logging.DEBUG for query + result set output.
• sqlalchemy.dialects - controls custom logging for SQL dialects. See the documentation of individual
dialects for details.
• sqlalchemy.pool - controls connection pool logging. set to logging.INFO or lower to log connection
pool checkouts/checkins.
• sqlalchemy.orm - controls logging of various ORM functions. set to logging.INFO for configurational logging as well as unit of work dumps.
import logging
logging.basicConfig()
logging.getLogger('sqlalchemy.engine').setLevel(logging.INFO)
logging.getLogger('sqlalchemy.orm.unitofwork').setLevel(logging.DEBUG)
By default, the log level is set to logging.ERROR within the entire sqlalchemy namespace so that no log opera-
tions occur, even within an application that has logging enabled otherwise.
The echo flags present as keyword arguments to create_engine() and others as well as the echo property
on Engine, when set to True, will first attempt to ensure that logging is enabled. Unfortunately, the logging
module provides no way of determining if output has already been configured (note we are referring to if a logging
configuration has been set up, not just that the logging level is set). For this reason, any echo=True flags will result
in a call to logging.basicConfig() using sys.stdout as the destination. It also sets up a default format using the
level name, timestamp, and logger name. Note that this configuration has the effect of being configured in addition to
any existing logger configurations. Therefore, when using Python logging, ensure all echo flags are set to False at
all times, to avoid getting duplicate log lines.
The logger name of an instance such as an Engine or Pool defaults to a truncated hex identifier string.
To set this to a specific name, use the “logging_name” and “pool_logging_name” keyword arguments with
sqlalchemy.create_engine().
SEVEN
DATABASE META DATA
metadata = MetaData()
MetaData is a container object that keeps together many different features of a database (or multiple databases)
being described.
To represent a table, use the Table class. Its two primary arguments are the table name, then the MetaData object
which it will be associated with. The remaining positional arguments are mostly Column objects describing each
column:
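A sketch of a four-column user table consistent with the following description:

user = Table('user', metadata,
    Column('user_id', Integer, primary_key=True),
    Column('user_name', String(16), nullable=False),
    Column('email_address', String(60)),
    Column('password', String(20), nullable=False)
)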
Above, a table called user is described, which contains four columns. The primary key of the table consists of the
user_id column. Multiple columns may be assigned the primary_key=True flag which denotes a multi-column
primary key, known as a composite primary key.
Note also that each column describes its datatype using objects corresponding to genericized types, such as Integer
and String. SQLAlchemy features dozens of types of varying levels of specificity as well as the ability to create
custom types. Documentation on the type system can be found at Column and Data Types.
The MetaData object contains all of the schema constructs we’ve associated with it. It supports a few methods of
accessing these table objects, such as the sorted_tables accessor which returns a list of each Table object in
order of foreign key dependency (that is, each table is preceded by all tables which it references):
In most cases, individual Table objects have been explicitly declared, and these objects are typically accessed directly
as module-level variables in an application. Once a Table has been defined, it has a full set of accessors which allow
inspection of its properties. Given the following Table definition:
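A sketch consistent with the accessors shown below (key='name' is assumed, since employees.c.name.key is
referenced later):

employees = Table('employees', metadata,
    Column('employee_id', Integer, primary_key=True),
    Column('employee_name', String(60), nullable=False, key='name'),
    Column('employee_dept', Integer, ForeignKey("departments.department_id"))
)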
Note the ForeignKey object used in this table - this construct defines a reference to a remote table, and is fully
described in Defining Foreign Keys. Methods of accessing information about this table include:
# access the column "employee_id":
employees.columns.employee_id

# or just
employees.c.employee_id

# via string
employees.c['employee_id']

# access the table's MetaData:
employees.metadata

# get the "key" of a column, which defaults to its name, but can
# be any user-defined string:
employees.c.name.key
Once you’ve defined some Table objects, assuming you’re working with a brand new database one thing you might
want to do is issue CREATE statements for those tables and their related constructs (as an aside, it’s also quite possible
that you don’t want to do this, if you already have some preferred methodology such as tools included with your
database or an existing scripting system - if that’s the case, feel free to skip this section - SQLAlchemy has no
requirement that it be used to create your tables).
The usual way to issue CREATE is to use create_all() on the MetaData object. This method will issue queries
that first check for the existence of each individual table, and if not found will issue the CREATE statements:
engine = create_engine('sqlite:///:memory:')

metadata = MetaData()

# ... Table definitions such as the user table above are associated
# with this metadata here ...

metadata.create_all(engine)
PRAGMA table_info(user){}
CREATE TABLE user (
    user_id INTEGER NOT NULL,
    user_name VARCHAR(16) NOT NULL,
    email_address VARCHAR(60),
    password VARCHAR(20) NOT NULL,
    PRIMARY KEY (user_id)
)
create_all() creates foreign key constraints between tables usually inline with the table definition itself, and for
this reason it also generates the tables in order of their dependency. There are options to change this behavior such
that ALTER TABLE is used instead.
Dropping all tables is similarly achieved using the drop_all() method. This method does the exact opposite of
create_all() - the presence of each table is checked first, and tables are dropped in reverse order of dependency.
Creating and dropping individual tables can be done via the create() and drop() methods of Table. These
methods by default issue the CREATE or DROP regardless of the table being present:
engine = create_engine('sqlite:///:memory:')
meta = MetaData()
drop() method:
employees.drop(engine)
DROP TABLE employees
To enable the “check first for the table existing” logic, add the checkfirst=True argument to create() or
drop():
employees.create(engine, checkfirst=True)
employees.drop(engine, checkfirst=False)
Notice in the previous section the creator/dropper methods accept an argument for the database engine in use. When a
schema construct is combined with an Engine object, or an individual Connection object, we call this the bind. In
the above examples the bind is associated with the schema construct only for the duration of the operation. However,
the option exists to persistently associate a bind with a set of schema constructs via the MetaData object’s bind
attribute:
engine = create_engine('sqlite://')
# create MetaData
meta = MetaData()
# bind to an engine
meta.bind = engine
We can now call methods like create_all() without needing to pass the Engine:
meta.create_all()
The MetaData’s bind is used for anything that requires an active connection, such as loading the definition of a table
from the database automatically (called reflection):
# describe a table called ’users’, query the database for its columns
users_table = Table(’users’, meta, autoload=True)
As well as for executing SQL constructs that are derived from that MetaData’s table objects:
Binding the MetaData to the Engine is a completely optional feature. The above operations can be achieved without
the persistent bind using parameters:
# describe a table called ’users’, query the database for its columns
users_table = Table(’users’, meta, autoload=True, autoload_with=engine)
Should you use bind? It’s probably best to start without it, and wait for a specific need to arise. Bind is useful if:
• You aren’t using the ORM, are usually using “connectionless” execution, and find yourself constantly needing to
specify the same Engine object throughout the entire application. Bind can be used here to provide “implicit”
execution.
• Your application has multiple schemas that correspond to different engines. Using one MetaData for each
schema, bound to each engine, provides a decent place to delineate between the schemas. The ORM will also
integrate with this approach, where the Session will naturally use the engine that is bound to each table via
its metadata (provided the Session itself has no bind configured).
• Your application talks to multiple database engines at different times, which use the same set of Table objects.
It’s usually confusing and unnecessary to begin to create “copies” of Table objects just so that different engines
can be used for different operations. An example is an application that writes data to a “master” database while
performing read-only operations from a “read slave”. A global MetaData object is not appropriate for per-
request switching like this, although a ThreadLocalMetaData object is.
• You are using the ORM Session to handle which class/table is bound to which engine, or you are using the
Session to manage switching between engines. It’s a good idea to keep the “binding of tables to engines”
in one place - either using MetaData only (the Session can of course be present, it just has no bind
configured), or using Session only (the bind attribute of MetaData is left empty).
A Table object can be instructed to load information about itself from the corresponding database schema object
already existing within the database. This process is called reflection. Most simply you need only specify the table
name, a MetaData object, and the autoload=True flag. If the MetaData is not persistently bound, also add the
autoload_with argument:
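A minimal sketch, assuming an Engine named engine and the messages table discussed next:

messages = Table('messages', meta, autoload=True, autoload_with=engine)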
The above operation will use the given engine to query the database for information about the messages table, and
will then generate Column, ForeignKey, and other objects corresponding to this information as though the Table
object were hand-constructed in Python.
When tables are reflected, if a given table references another one via foreign key, a second Table object is created
within the MetaData object representing the connection. Below, assume the table shopping_cart_items ref-
erences a table named shopping_carts. Reflecting the shopping_cart_items table has the effect such that
the shopping_carts table will also be loaded:
The MetaData has an interesting “singleton-like” behavior such that if you requested both tables individually,
MetaData will ensure that exactly one Table object is created for each distinct table name. The Table con-
structor actually returns to you the already-existing Table object if one already exists with the given name. Such as
below, we can access the already generated shopping_carts table just by naming it:
Of course, it’s a good idea to use autoload=True with the above table regardless. This is so that the table’s
attributes will be loaded if they have not been already. The autoload operation only occurs for the table if it hasn’t
already been loaded; once loaded, new calls to Table with the same name will not re-issue any reflection queries.
Individual columns can be overridden with explicit values when reflecting tables; this is handy for specifying custom
datatypes, constraints such as primary keys that may not be configured within the database, etc.:
Reflecting Views
The reflection system can also reflect views. Basic usage is the same as that of a table:
Above, my_view is a Table object with Column objects representing the names and types of each column within
the view “some_view”.
Usually, it’s desired to have at least a primary key constraint when reflecting a view, if not foreign keys as well. View
reflection doesn’t extrapolate these constraints.
Use the “override” technique for this, specifying explicitly those columns which are part of the primary key or have
foreign key constraints:
The MetaData object can also get a listing of tables and reflect the full set. This is achieved by using the reflect()
method. After calling it, all located tables are present within the MetaData object’s dictionary of tables:
meta = MetaData()
meta.reflect(bind=someengine)
users_table = meta.tables['users']
addresses_table = meta.tables['addresses']
metadata.reflect() also provides a handy way to clear or delete all the rows in a database:
meta = MetaData()
meta.reflect(bind=someengine)
for table in reversed(meta.sorted_tables):
    someengine.execute(table.delete())
A low level interface which provides a backend-agnostic system of loading lists of schema, table, column, and con-
straint descriptions from a given database is also available. This is known as the “Inspector” and is described in the
API documentation at Schema Introspection.
Some databases support the concept of multiple schemas. A Table can reference this by specifying the schema
keyword argument:
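A sketch using the financial_info and remote_banks names discussed below:

financial_info = Table('financial_info', meta,
    Column('id', Integer, primary_key=True),
    Column('value', String(100), nullable=False),
    schema='remote_banks'
)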
Within the MetaData collection, this table will be identified by the combination of financial_info and
remote_banks. If another table called financial_info is referenced without the remote_banks schema, it
will refer to a different Table. ForeignKey objects can specify references to columns in this table using the form
remote_banks.financial_info.id.
The schema argument should be used for any name qualifiers required, including Oracle’s “owner” attribute and
similar. It also can accommodate a dotted name for longer schemes:
schema="dbo.scott"
Table supports database-specific options. For example, MySQL has different table backend types, including “MyISAM”
and “InnoDB”. This can be expressed with Table using mysql_engine:
Other backends may support table-level options as well. See the API documentation for each backend for further
details.
The general rule for all insert/update defaults is that they only take effect if no value for a particular column is passed
as an execute() parameter; otherwise, the given value is used.
The simplest kind of default is a scalar value used as the default value of a column:
Table("mytable", meta,
Column("somecolumn", Integer, default=12)
)
Above, the value “12” will be bound as the column value during an INSERT if no other value is supplied.
A scalar value may also be associated with an UPDATE statement, though this is not very common (as UPDATE
statements are usually looking for dynamic defaults):
Table("mytable", meta,
Column("somecolumn", Integer, onupdate=25)
)
The default and onupdate keyword arguments also accept Python functions. These functions are invoked at the
time of insert or update if no other value for that column is supplied, and the value returned is used for the column’s
value. Below illustrates a crude “sequence” that assigns an incrementing counter to a primary key column:
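A sketch of the counter function referenced by the column definition that follows (the mydefault name matches
that definition):

# a function which counts upwards; called each time a new 'id' value is needed
i = 0
def mydefault():
    global i
    i += 1
    return i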
t = Table("mytable", meta,
Column(’id’, Integer, primary_key=True, default=mydefault),
)
It should be noted that for real “incrementing sequence” behavior, the built-in capabilities of the database should nor-
mally be used, which may include sequence objects or other autoincrementing capabilities. For primary key columns,
SQLAlchemy will in most cases use these capabilities automatically. See the API documentation for Column includ-
ing the autoincrement flag, as well as the section on Sequence later in this chapter for background on standard
primary key generation techniques.
To illustrate onupdate, we assign the Python datetime function now to the onupdate attribute:
import datetime

t = Table("mytable", meta,
    Column('id', Integer, primary_key=True),

    # populated with datetime.datetime.now() during UPDATE if no value is given
    Column('last_updated', DateTime, onupdate=datetime.datetime.now),
)
When an update statement executes and no value is passed for last_updated, the
datetime.datetime.now() Python function is executed and its return value used as the value for
last_updated. Notice that we provide now as the function itself without calling it (i.e. there are no
parentheses following) - SQLAlchemy will execute the function at the time the statement executes.
The Python functions used by default and onupdate may also make use of the current statement’s context in order
to determine a value. The context of a statement is an internal SQLAlchemy object which contains all information
about the statement being executed, including its source expression, the parameters associated with it and the cursor.
The typical use case for this context with regards to default generation is to have access to the other values being
inserted or updated on the row. To access the context, provide a function that accepts a single context argument:
def mydefault(context):
    return context.current_parameters['counter'] + 12

t = Table('mytable', meta,
    Column('counter', Integer),
    Column('counter_plus_twelve', Integer, default=mydefault, onupdate=mydefault)
)
Above we illustrate a default function which will execute for all INSERT and UPDATE statements where a value for
counter_plus_twelve was otherwise not provided, and the value will be that of whatever value is present in the
execution for the counter column, plus the number 12.
While the context object passed to the default function has many attributes, the current_parameters member is
a special member provided only during the execution of a default function for the purposes of deriving defaults from
its existing values. For a single statement that is executing many sets of bind parameters, the user-defined function is
called for each set of parameters, and current_parameters will be provided with each individual parameter set
for each execution.
The “default” and “onupdate” keywords may also be passed SQL expressions, including select statements or direct
function calls:
t = Table("mytable", meta,
Column(’id’, Integer, primary_key=True),
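    # a sketch of the remaining columns, consistent with the description below;
    # 'keyvalues' is a hypothetical Table and func is sqlalchemy.func
    Column('create_date', DateTime, default=func.now()),
    Column('key', String(20),
           default=keyvalues.select(keyvalues.c.type == 'type1', limit=1)),
    Column('last_modified', DateTime, onupdate=func.utc_timestamp())
)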
Above, the create_date column will be populated with the result of the now() SQL function (which, depending
on backend, compiles into NOW() or CURRENT_TIMESTAMP in most cases) during an INSERT statement, and the
key column with the result of a SELECT subquery from another table. The last_modified column will be
populated with the value of UTC_TIMESTAMP(), a function specific to MySQL, when an UPDATE statement is
emitted for this table.
Note that when using func functions, unlike when using Python datetime functions, we do call the function, i.e. with
parentheses “()” - this is because what we want in this case is the return value of the function, which is the SQL
expression construct that will be rendered into the INSERT or UPDATE statement.
The above SQL functions are usually executed “inline” with the INSERT or UPDATE statement being executed,
meaning, a single statement is executed which embeds the given expressions or subqueries within the VALUES or
SET clause of the statement. Although in some cases, the function is “pre-executed” in a SELECT statement of its
own beforehand. This happens when several conditions are all true, among them:
• the inline=True flag is not set on the Insert() or Update() construct, and the statement has not defined
an explicit returning() clause.
Whether or not the default generation clause “pre-executes” is not something that normally needs to be considered,
unless it is being addressed for performance reasons.
When the statement is executed with a single set of parameters (that is, it is not an “executemany” style ex-
ecution), the returned ResultProxy will contain a collection accessible via result.postfetch_cols()
which contains a list of all Column objects which had an inline-executed default. Similarly, all parameters which
were bound to the statement, including all Python and SQL expressions which were pre-executed, are present
in the last_inserted_params() or last_updated_params() collections on ResultProxy. The
inserted_primary_key collection contains a list of primary key values for the row inserted (a list so that single-
column and composite-column primary keys are represented in the same format).
A variant on the SQL expression default is the server_default, which gets placed in the CREATE TABLE
statement during a create() operation:
t = Table('test', meta,
    Column('abc', String(20), server_default='abc'),
    Column('created_at', DateTime, server_default=text("sysdate"))
)
The behavior of server_default is similar to that of a regular SQL default; if it’s placed on a primary key column
for a database which doesn’t have a way to “postfetch” the ID, and the statement is not “inlined”, the SQL expression
is pre-executed; otherwise, SQLAlchemy lets the default fire off on the database side normally.
Columns with values set by a database trigger or other external process may be called out with a marker:
t = Table('test', meta,
    Column('abc', String(20), server_default=FetchedValue()),
    Column('def', String(20), server_onupdate=FetchedValue())
)
These markers do not emit a “default” clause when the table is created, however they do set the same internal flags as
a static server_default clause, providing hints to higher-level tools that a “post-fetch” of these rows should be
performed after an insert or update.
SQLAlchemy represents database sequences using the Sequence object, which is considered to be a special case
of “column default”. It only has an effect on databases which have explicit support for sequences, which currently
includes Postgresql, Oracle, and Firebird. The Sequence object is otherwise ignored.
The Sequence may be placed on any column as a “default” generator to be used during INSERT operations, and can
also be configured to fire off during UPDATE operations if desired. It is most commonly used in conjunction with a
single integer primary key column:
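A sketch consistent with the cartitems / cart_id_seq names used in the following paragraph:

table = Table("cartitems", meta,
    Column("cart_id", Integer, Sequence('cart_id_seq'), primary_key=True),
    Column("description", String(40)),
    Column("createdate", DateTime())
)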
Where above, the table “cartitems” is associated with a sequence named “cart_id_seq”. When INSERT statements
take place for “cartitems”, and no value is passed for the “cart_id” column, the “cart_id_seq” sequence will be used to
generate a value.
When the Sequence is associated with a table, CREATE and DROP statements issued for that table will also issue
CREATE/DROP for the sequence object as well, thus “bundling” the sequence object with its parent table.
The Sequence object also implements special functionality to accommodate Postgresql’s SERIAL datatype. The
SERIAL type in PG automatically generates a sequence that is used implicitly during inserts. This means that if
a Table object defines a Sequence on its primary key column so that it works with Oracle and Firebird, the
Sequence would get in the way of the “implicit” sequence that PG would normally use. For this use case, add
the flag optional=True to the Sequence object - this indicates that the Sequence should only be used if the
database provides no other option for generating primary key identifiers.
The Sequence object also has the ability to be executed standalone like a SQL expression, which has the effect of
calling its “next value” function:
seq = Sequence(’some_sequence’)
nextid = connection.execute(seq)
A foreign key in SQL is a table-level construct that constrains one or more columns in that table to only allow values
that are present in a different set of columns, typically but not always located on a different table. We call the columns
which are constrained the foreign key columns and the columns which they are constrained towards the referenced
columns. The referenced columns almost always define the primary key for their owning table, though there are
exceptions to this. The foreign key is the “joint” that connects together pairs of rows which have a relationship with
each other, and SQLAlchemy assigns very deep importance to this concept in virtually every area of its operation.
In SQLAlchemy as well as in DDL, foreign key constraints can be defined as additional attributes within the table
clause, or for single-column foreign keys they may optionally be specified within the definition of a single column.
The single column foreign key is more common, and at the column level is specified by constructing a ForeignKey
object as an argument to a Column object:
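A sketch of such a definition:

user_preference = Table('user_preference', metadata,
    Column('pref_id', Integer, primary_key=True),
    Column('user_id', Integer, ForeignKey("user.user_id"), nullable=False),
    Column('pref_name', String(40), nullable=False),
    Column('pref_value', String(100))
)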
Above, we define a new table user_preference for which each row must contain a value in the user_id column
that also exists in the user table’s user_id column.
The argument to ForeignKey is most commonly a string of the form <tablename>.<columnname>, or for a table
in a remote schema or “owner” of the form <schemaname>.<tablename>.<columnname>. It may also be an actual
Column object, which as we’ll see later is accessed from an existing Table object via its c collection:
ForeignKey(user.c.user_id)
The advantage to using a string is that the in-python linkage between user and user_preference is resolved
only when first needed, so that table objects can be easily spread across multiple modules and defined in any order.
Foreign keys may also be defined at the table level, using the ForeignKeyConstraint object. This object can
describe a single- or multi-column foreign key. A multi-column foreign key is known as a composite foreign key, and
almost always references a table that has a composite primary key. Below we define a table invoice which has a
composite primary key:
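A sketch of such a table (the description column is illustrative):

invoice = Table('invoice', metadata,
    Column('invoice_id', Integer, primary_key=True),
    Column('ref_num', Integer, primary_key=True),
    Column('description', String(60), nullable=False)
)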
And then a table invoice_item with a composite foreign key referencing invoice:
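A sketch (item_id and item_name are illustrative):

invoice_item = Table('invoice_item', metadata,
    Column('item_id', Integer, primary_key=True),
    Column('item_name', String(60), nullable=False),
    Column('invoice_id', Integer, nullable=False),
    Column('ref_num', Integer, nullable=False),
    ForeignKeyConstraint(['invoice_id', 'ref_num'],
                         ['invoice.invoice_id', 'invoice.ref_num'])
)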
It’s important to note that the ForeignKeyConstraint is the only way to define a composite foreign key.
While we could also have placed individual ForeignKey objects on both the invoice_item.invoice_id
and invoice_item.ref_num columns, SQLAlchemy would not be aware that these two values should be paired
together - it would be two individual foreign key constraints instead of a single composite foreign key referencing two
columns.
In all the above examples, the ForeignKey object causes the “REFERENCES” keyword to be added in-
line to a column definition within a “CREATE TABLE” statement when create_all() is issued, and
ForeignKeyConstraint invokes the “CONSTRAINT” keyword inline with “CREATE TABLE”. There are some
cases where this is undesirable, particularly when two tables reference each other mutually, each with a foreign key
referencing the other. In such a situation at least one of the foreign key constraints must be generated after both
tables have been built. To support such a scheme, ForeignKey and ForeignKeyConstraint offer the flag
use_alter=True. When using this flag, the constraint will be generated using a definition similar to “ALTER
TABLE <tablename> ADD CONSTRAINT <name> ...”. Since a name is required, the name attribute must also be
specified. For example:
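A sketch of two mutually-referencing tables, with one constraint deferred to ALTER via use_alter=True (table
and constraint names are illustrative):

node = Table('node', meta,
    Column('node_id', Integer, primary_key=True),
    Column('primary_element', Integer,
        ForeignKey('element.element_id', use_alter=True,
                   name='fk_node_element_id')
    )
)

element = Table('element', meta,
    Column('element_id', Integer, primary_key=True),
    Column('parent_node_id', Integer),
    ForeignKeyConstraint(
        ['parent_node_id'], ['node.node_id'],
        use_alter=True, name='fk_element_parent_node_id'
    )
)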
Most databases support cascading of foreign key values, that is, when a parent row is updated the new value
is placed in child rows, or when the parent row is deleted all corresponding child rows are set to null or deleted.
In data definition language these are specified using phrases like “ON UPDATE CASCADE”, “ON DELETE CAS-
CADE”, and “ON DELETE SET NULL”, corresponding to foreign key constraints. The phrase after “ON UPDATE”
or “ON DELETE” may also allow other phrases that are specific to the database in use. The ForeignKey and
ForeignKeyConstraint objects support the generation of this clause via the onupdate and ondelete key-
word arguments. The value is any string which will be output after the appropriate “ON UPDATE” or “ON DELETE”
phrase:
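A sketch, assuming hypothetical parent and child tables:

child = Table('child', meta,
    Column('id', Integer,
           ForeignKey('parent.id', onupdate="CASCADE", ondelete="CASCADE"),
           primary_key=True)
)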
Note that these clauses are not supported on SQLite, and require InnoDB tables when used with MySQL. They may
also not be supported on other databases.
Unique constraints can be created anonymously on a single column using the unique keyword on Column. Ex-
plicitly named unique constraints and/or those with multiple columns are created via the UniqueConstraint
table-level construct.
meta = MetaData()
mytable = Table('mytable', meta,
    # per-column anonymous unique constraint
    Column('col1', Integer, unique=True),

    Column('col2', Integer),
    Column('col3', Integer),

    # explicit/composite unique constraint; 'name' is optional
    UniqueConstraint('col2', 'col3', name='uix_1')
)
Check constraints can be named or unnamed and can be created at the Column or Table level, using the
CheckConstraint construct. The text of the check constraint is passed directly through to the database, so there
is limited “database independent” behavior. Column level check constraints generally should only refer to the column
to which they are placed, while table level constraints can refer to any columns in the table.
Note that some databases, such as MySQL and SQLite, do not actively support check constraints.
meta = MetaData()
mytable = Table('mytable', meta,
    # per-column CHECK constraint
    Column('col1', Integer, CheckConstraint('col1>5')),

    Column('col2', Integer),
    Column('col3', Integer),

    # table-level CHECK constraint; 'name' is optional
    CheckConstraint('col2 > col3 + 5', name='check1')
)

mytable.create(engine)
CREATE TABLE mytable (
col1 INTEGER CHECK (col1>5),
col2 INTEGER,
col3 INTEGER,
CONSTRAINT check1 CHECK (col2 > col3 + 5)
)
7.3.4 Indexes
Indexes can be created anonymously (using an auto-generated name ix_<column label>) for a single column
using the inline index keyword on Column, which also modifies the usage of unique to apply the uniqueness to
the index itself, instead of adding a separate UNIQUE constraint. For indexes with specific names or which encompass
more than one column, use the Index construct, which requires a name.
Note that the Index construct is created externally to the table to which it corresponds, using Column objects and not
strings.
Below we illustrate a Table with several Index objects associated. The DDL for “CREATE INDEX” is issued right
after the create statements for the table:
meta = MetaData()
mytable = Table('mytable', meta,
    # an indexed column, with index "ix_mytable_col1"
    Column('col1', Integer, index=True),

    # a uniquely indexed column, with index "ix_mytable_col2"
    Column('col2', Integer, index=True, unique=True),

    Column('col3', Integer),
    Column('col4', Integer),

    Column('col5', Integer),
    Column('col6', Integer),
)

# a composite index on col3, col4
Index('idx_col34', mytable.c.col3, mytable.c.col4)

# a unique composite index on col5, col6
Index('myindex', mytable.c.col5, mytable.c.col6, unique=True)
mytable.create(engine)
CREATE TABLE mytable (
col1 INTEGER,
col2 INTEGER,
col3 INTEGER,
col4 INTEGER,
col5 INTEGER,
col6 INTEGER
)
CREATE INDEX ix_mytable_col1 ON mytable (col1)
CREATE UNIQUE INDEX ix_mytable_col2 ON mytable (col2)
CREATE UNIQUE INDEX myindex ON mytable (col5, col6)
CREATE INDEX idx_col34 ON mytable (col3, col4)
i = Index(’someindex’, mytable.c.col5)
i.create(engine)
CREATE INDEX someindex ON mytable (col5)
The sqlalchemy.schema package contains SQL expression constructs that provide DDL expressions. For exam-
ple, to produce a CREATE TABLE statement:
Above, the CreateTable construct works like any other expression construct (such as select(),
table.insert(), etc.). A full reference of available constructs is in DDL Generation.
The DDL constructs all extend a common base class which provides the capability to be associated with an individual
Table or MetaData object, to be invoked upon create/drop events. Consider the example of a table which contains
a CHECK constraint:
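A sketch of such a table, consistent with the DDL output shown below:

users = Table('users', metadata,
    Column('user_id', Integer, primary_key=True),
    Column('user_name', String(40), nullable=False),
    CheckConstraint('length(user_name) >= 8', name="cst_user_name_length")
)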
users.create(engine)
CREATE TABLE users (
user_id SERIAL NOT NULL,
user_name VARCHAR(40) NOT NULL,
PRIMARY KEY (user_id),
CONSTRAINT cst_user_name_length CHECK (length(user_name) >= 8)
)
The above table contains a column “user_name” which is subject to a CHECK constraint that validates that the length
of the string is at least eight characters. When a create() is issued for this table, DDL for the CheckConstraint
will also be issued inline within the table definition.
The CheckConstraint construct can also be constructed externally and associated with the Table afterwards:
So far, the effect is the same. However, if we create DDL elements corresponding to the creation and removal of this
constraint, and associate them with the Table as events, these new events will take over the job of issuing DDL for
the constraint. Additionally, the constraint will be added via ALTER:
AddConstraint(constraint).execute_at("after-create", users)
DropConstraint(constraint).execute_at("before-drop", users)
users.create(engine)
CREATE TABLE users (
user_id SERIAL NOT NULL,
user_name VARCHAR(40) NOT NULL,
PRIMARY KEY (user_id)
)
The real usefulness of the above becomes clearer once we illustrate the on attribute of a DDL event. The on parameter
is part of the constructor, and may be a string name of a database dialect name, a tuple containing dialect names, or a
Python callable. This will limit the execution of the item to just those dialects, or when the return value of the callable
is True. So if our CheckConstraint was only supported by Postgresql and not other databases, we could limit it
to just that dialect:
When using a callable, the callable is passed the ddl element, event name, the Table or MetaData object whose
“create” or “drop” event is in progress, and the Connection object being used for the operation, as well as additional
information as keyword arguments. The callable can perform checks, such as whether or not a given item already
exists. Below we define should_create() and should_drop() callables that check for the presence of our
named constraint:
users.create(engine)
CREATE TABLE users (
user_id SERIAL NOT NULL,
user_name VARCHAR(40) NOT NULL,
PRIMARY KEY (user_id)
)
Custom DDL phrases are most easily achieved using the DDL construct. This construct works like all the other DDL
elements except it accepts a string which is the text to be emitted:
A more comprehensive method of creating libraries of DDL constructs is to use the compiler extension. See that
chapter for full details.
EIGHT
EXAMPLES
The SQLAlchemy distribution includes a variety of code examples illustrating a select set of patterns, some typical
and some not so typical. All are runnable and can be found in the /examples directory of the distribution. Each
example contains a README in its __init__.py file, each of which are listed below.
Additional SQLAlchemy examples, some user contributed, are available on the wiki at
http://www.sqlalchemy.org/trac/wiki/UsageRecipes.
node = TreeNode(’rootnode’)
node.append(’node1’)
node.append(’node3’)
session.add(node)
session.commit()
dump_tree(node)
8.2 Associations
Location: /examples/association/ Examples illustrating the usage of the “association object” pattern, where an
intermediary object associates two endpoint objects together.
The first example illustrates a basic association from a User object to a collection of Order objects, each of which
references a collection of Item objects.
The second example builds upon the first to add the Association Proxy extension.
E.g.:
# create an order
order = Order(’john smith’)
# append two more Items via the transparent "items" proxy, which
# will create OrderItems automatically using the default price.
order.items.append(item(’SA Mug’))
order.items.append(item(’SA Hat’))
E.g.:
# query
print q.all()
To run, both SQLAlchemy and Beaker (1.4 or greater) must be installed or on the current PYTHONPATH. The demo
will create a local directory for datafiles, insert initial data, and run. Running the demo a second time will utilize the
cache files already present, and exactly one SQL statement against two tables will be emitted - the displayed result
however will utilize dozens of lazyloads that all pull from cache.
The demo scripts themselves, in order of complexity, are run as follows:
python examples/beaker_caching/helloworld.py
python examples/beaker_caching/relationship_caching.py
python examples/beaker_caching/advanced.py
python examples/beaker_caching/local_session_caching.py
Listing of files:
environment.py - Establish data / cache file paths, and configurations, bootstrap fixture data if necessary.
meta.py - Represent persistence structures which allow the usage of Beaker caching with SQLAlchemy.
Introduces a query option called FromCache.
model.py - The datamodel, which represents Person that has multiple Address objects, each with Postal-
Code, City, Country
fixture_data.py - creates demo PostalCode, Address, Person objects in the database.
helloworld.py - the basic idea.
relationship_caching.py - Illustrates how to add cache options on relationship endpoints, so that lazyloads
load from cache.
advanced.py - Further examples of how to use FromCache. Combines techniques from the first two
scripts.
local_session_caching.py - Grok everything so far ? This example creates a new Beaker container that
will persist data in a dictionary which is local to the current session. remove() the session and the cache
is gone.
class BaseInterval(object):
@hybrid
def contains(self,point):
return (self.start <= point) & (point < self.end)
n2 = Node(2)
n5 = Node(5)
n2.add_neighbor(n5)
print n2.higher_neighbors()
• a function which can return a list of shard ids to try, given a particular Query (“query_chooser”). If it returns all
shard ids, all shards will be queried and the results joined together.
In this example, four sqlite databases will store information about weather data on a database-per-continent basis. We
provide example shard_chooser, id_chooser and query_chooser functions. The query_chooser illustrates inspection of
the SQL expression element in order to attempt to determine a single shard being requested.
• poly_assoc.py - imitates the non-foreign-key schema used by Ruby on Rails’ Active Record.
• poly_assoc_fk.py - Adds a polymorphic association table so that referential integrity can be maintained.
• poly_assoc_generic.py - further automates the approach of poly_assoc_fk.py to also generate the
association table definitions automatically.
The implementation is limited to only public, well known and simple to use extension points.
E.g.:
print session.query(Road).filter(Road.road_geom.intersects(r1.road_geom)).all()
nosetests -w examples/versioning/
class SomeClass(Base):
__tablename__ = ’sometable’
id = Column(Integer, primary_key=True)
name = Column(String(50))
sess = Session()
sc = SomeClass(name=’sc1’)
sess.add(sc)
sess.commit()
sc.name = ’sc1modified’
sess.commit()
assert sc.version == 2
SomeClassHistory = SomeClass.__history_mapper__.class_
assert sess.query(SomeClassHistory).\
filter(SomeClassHistory.version == 1).\
all() \
== [SomeClassHistory(version=1, name=’sc1’)]
To apply VersionedMeta to a subset of classes (probably more typical), the metaclass can be applied on a per-class
basis:
Base = declarative_base(bind=engine)
class SomeClass(Base):
__tablename__ = ’sometable’
# ...
class SomeVersionedClass(Base):
__metaclass__ = VersionedMeta
__tablename__ = ’someothertable’
# ...
The VersionedMeta is a declarative metaclass - to use the extension with plain mappers, the _history_mapper
function can be applied:
m = mapper(SomeClass, sometable)
_history_mapper(m)
SomeHistoryClass = SomeClass.__history_mapper__.class_
shrew = Animal(u’shrew’)
shrew[u’cuteness’] = 5
shrew[u’weasel-like’] = False
shrew[u’poisonous’] = True
session.add(shrew)
session.flush()
q = (session.query(Animal).
filter(Animal.facts.any(
and_(AnimalFact.key == u’weasel-like’,
AnimalFact.value == True))))
print ’weasel-like animals’, q.all()
• pickle.py - Quick and dirty, serialize the whole DOM into a BLOB column. While the example is very
brief, it has very limited functionality.
• adjacency_list.py - Each DOM node is stored in an individual table row, with attributes represented in
a separate table. The nodes are associated in a hierarchy using an adjacency list structure. A query function
is introduced which can search for nodes along any path with a given structure of attributes, basically a (very
narrow) subset of xpath.
• optimized_al.py - Uses the same strategy as adjacency_list.py, but adds a MapperExtension
which optimizes how the hierarchical structure is loaded, such that the full set of DOM nodes are loaded within
a single table result set, and are organized hierarchically as they are received during a load.
NINE
API REFERENCE
9.1 sqlalchemy
9.1.1 Connections
Creating Engines
create_engine(*args, **kwargs)
Create a new Engine instance.
The standard method of specifying the engine is via URL as the first positional argument, to indicate the ap-
propriate database dialect and connection arguments, with additional keyword arguments sent as options to the
dialect and resulting Engine.
The URL is a string in the form dialect+driver://user:password@host/dbname[?key=value..],
where dialect is a database name such as mysql, oracle, postgresql, etc., and driver the name of
a DBAPI, such as psycopg2, pyodbc, cx_oracle, etc. Alternatively, the URL can be an instance of URL.
**kwargs takes a wide variety of options which are routed towards their appropriate components. Arguments
may be specific to the Engine, the underlying Dialect, as well as the Pool. Specific dialects also accept key-
word arguments that are unique to that dialect. Here, we describe the parameters that are common to most
create_engine() usage.
Parameters
• assert_unicode – Deprecated. A warning is raised in all cases when a non-Unicode object is
passed when SQLAlchemy would coerce into an encoding (note: but not when the DBAPI
handles unicode objects natively). To suppress or raise this warning to an error, use the
Python warnings filter documented at: http://docs.python.org/library/warnings.html
• connect_args – a dictionary of options which will be passed directly to the DBAPI’s
connect() method as additional keyword arguments.
• convert_unicode=False – if set to True, all String/character based types will convert Unicode
values to raw byte values going into the database, and all raw byte values to Python Unicode
coming out in result sets. This is an engine-wide method to provide unicode conversion
across the board. For unicode conversion on a column-by-column level, use the Unicode
column type instead, described in types.
• creator – a callable which returns a DBAPI connection. This creation function will be
passed to the underlying connection pool and will be used to create all new database connec-
tions. Usage of this function causes connection parameters specified in the URL argument
to be bypassed.
• echo=False – if True, the Engine will log all statements as well as a repr() of their parameter
lists to the engine's logger, which defaults to sys.stdout. The echo attribute of Engine can
be modified at any time to turn logging on and off. If set to the string "debug", result rows
will be printed to the standard output as well. This flag ultimately controls a Python logger;
see Configuring Logging for information on how to configure logging directly.
• echo_pool=False – if True, the connection pool will log all checkouts/checkins to the log-
ging stream, which defaults to sys.stdout. This flag ultimately controls a Python logger; see
Configuring Logging for information on how to configure logging directly.
• encoding=’utf-8’ – the encoding to use for all Unicode translations, both by engine-wide
unicode conversion as well as the Unicode type object.
• execution_options – Dictionary execution options which will be applied to all connections.
See execution_options()
• label_length=None – optional integer value which limits the size of dynamically generated
column labels to that many characters. If less than 6, labels are generated as “_(counter)”.
If None, the value of dialect.max_identifier_length is used instead.
• listeners – A list of one or more PoolListener objects which will receive connection
pool events.
• logging_name – String identifier which will be used within the “name” field of logging
records generated within the “sqlalchemy.engine” logger. Defaults to a hexstring of the
object’s id.
• max_overflow=10 – the number of connections to allow in connection pool “overflow”, that
is connections that can be opened above and beyond the pool_size setting, which defaults to
five. This is only used with QueuePool.
• module=None – used by database implementations which support multiple DBAPI modules,
this is a reference to a DBAPI2 module to be used instead of the engine’s default module.
For PostgreSQL, the default is psycopg2. For Oracle, it’s cx_Oracle.
• pool=None – an already-constructed instance of Pool, such as a QueuePool instance. If
non-None, this pool will be used directly as the underlying connection pool for the engine,
bypassing whatever connection parameters are present in the URL argument. For informa-
tion on constructing connection pools manually, see pooling.
• poolclass=None – a Pool subclass, which will be used to create a connection pool instance
using the connection parameters given in the URL. Note this differs from pool in that you
don’t actually instantiate the pool in this case, you just indicate what type of pool to be used.
• pool_logging_name – String identifier which will be used within the “name” field of log-
ging records generated within the “sqlalchemy.pool” logger. Defaults to a hexstring of the
object’s id.
• pool_size=5 – the number of connections to keep open inside the connection pool. This
is used with QueuePool as well as SingletonThreadPool.
• pool_recycle=-1 – this setting causes the pool to recycle connections after the given num-
ber of seconds has passed. It defaults to -1, or no timeout. For example, setting to 3600
means connections will be recycled after one hour. Note that MySQL in particular will
disconnect automatically if no activity is detected on a connection for eight hours
(although this is configurable with the MySQLDB connection itself and the server configu-
ration as well).
• pool_timeout=30 – number of seconds to wait before giving up on getting a connection from
the pool. This is only used with QueuePool.
• strategy=’plain’ – used to invoke alternate implementations. Currently available is the
threadlocal strategy, which is described in Using the Threadlocal Execution Strategy.
engine_from_config(configuration, prefix=’sqlalchemy.’, **kwargs)
Create a new Engine instance using a configuration dictionary.
The dictionary is typically produced from a config file where keys are prefixed, such as sqlalchemy.url,
sqlalchemy.echo, etc. The ‘prefix’ argument indicates the prefix to be searched for.
A select set of keyword arguments will be “coerced” to their expected type based on string values. In a future
release, this functionality will be expanded and include dialect-specific arguments.
Connectables
•When a dropped connection is detected, it is assumed that all connections held by the pool are poten-
tially dropped, and the entire pool is replaced.
•An application may want to use dispose() within a test suite that is creating multiple engines.
It is critical to note that dispose() does not guarantee that the application will release all open database
connections - only those connections that are checked into the pool are closed. Connections which remain
checked out or have been detached from the engine are not affected.
driver
Driver name of the Dialect in use by this Engine.
drop(entity, connection=None, **kwargs)
Drop a table or index within this engine’s database connection given a schema object.
echo
When True, enable log output for this element.
This has the effect of setting the Python logging level for the namespace of this element’s class and object
reference. A value of boolean True indicates that the loglevel logging.INFO will be set for the logger,
whereas the string value debug will set the loglevel to logging.DEBUG.
name
String name of the Dialect in use by this Engine.
raw_connection()
Return a DB-API connection.
reflecttable(table, connection=None, include_columns=None)
Given a Table object, reflects its columns and properties from the database.
table_names(schema=None, connection=None)
Return a list of all table names available in the database.
Parameters
• schema – Optional, retrieve names from a non-default schema.
• connection – Optional, use a specified connection. Default is the
contextual_connect for this Engine.
text(text, *args, **kwargs)
Return a sql.text() object for performing literal queries.
transaction(callable_, *args, **kwargs)
Execute the given function within a transaction boundary.
This is a shortcut for explicitly calling begin() and commit() and optionally rollback() when exceptions are
raised. The given *args and **kwargs will be passed to the function.
The connection used is that of contextual_connect().
See also the similar method on Connection itself.
update_execution_options(**opt)
update the execution_options dictionary of this Engine.
For details on execution_options, see Connection.execution_options() as well as
sqlalchemy.sql.expression.Executable.execution_options().
class Connection(engine, connection=None, close_with_result=False, _branch=False, _execution_options=None)
Provides high-level functionality for a wrapped DB-API connection.
Provides execution support for string-based SQL statements as well as ClauseElement, Compiled and Default-
Generator objects. Provides a begin() method to return Transaction objects.
The Connection object is not thread-safe.
__init__(engine, connection=None, close_with_result=False, _branch=False, _execution_options=None)
Construct a new Connection.
Connection objects are typically constructed by an Engine, see the connect() and
contextual_connect() methods of Engine.
begin()
Begin a transaction and return a Transaction handle.
Repeated calls to begin on the same Connection will create a lightweight, emulated nested transaction.
Only the outermost transaction may commit. Calls to commit on inner transactions are ignored. Any
transaction in the hierarchy may rollback, however.
begin_nested()
Begin a nested transaction and return a Transaction handle.
Nested transactions require SAVEPOINT support in the underlying database. Any transaction in the hier-
archy may commit and rollback, however the outermost transaction still controls the overall commit
or rollback of the transaction of a whole.
begin_twophase(xid=None)
Begin a two-phase or XA transaction and return a Transaction handle.
Parameter xid – the two phase transaction id. If not supplied, a random id will be generated.
close()
Close this Connection.
closed
Return True if this connection is closed.
connect()
Returns self.
This Connectable interface method returns self, allowing Connections to be used interchangeably with
Engines in most situations that require a bind.
connection
The underlying DB-API connection managed by this Connection.
contextual_connect(**kwargs)
Returns self.
This Connectable interface method returns self, allowing Connections to be used interchangeably with
Engines in most situations that require a bind.
create(entity, **kwargs)
Create a Table or Index given an appropriate Schema object.
detach()
Detach the underlying DB-API connection from its connection pool.
This Connection instance will remain usable. When closed, the DB-API connection will be literally
closed and not returned to its pool. The pool will typically lazily create a new connection to replace the
detached connection.
This method can be used to insulate the rest of an application from a modified state on a connection (such as
a transaction isolation level or similar). Also see PoolListener for a mechanism to modify connection
state when connections leave and return to their connection pool.
dialect
Dialect used by this Connection.
drop(entity, **kwargs)
Drop a Table or Index given an appropriate Schema object.
execute(object, *multiparams, **params)
Executes and returns a ResultProxy.
execution_options(**opt)
Set non-SQL options for the connection which take effect during execution.
The method returns a copy of this Connection which references the same underlying DBAPI connec-
tion, but also defines the given execution options which will take effect for a call to execute(). As the
new Connection references the same underlying resource, it is probably best to ensure that the copies
would be discarded immediately, which is implicit if used as in:
result = connection.execution_options(stream_results=True).execute(stmt)
The options are the same as those accepted by sqlalchemy.sql.expression.Executable.execution_options().
in_transaction()
Return True if a transaction is in progress.
info
A collection of per-DB-API connection instance properties.
invalidate(exception=None)
Invalidate the underlying DBAPI connection associated with this Connection.
The underlying DB-API connection is literally closed (if possible), and is discarded. Its source connection
pool will typically lazily create a new connection to replace it.
Upon the next usage, this Connection will attempt to reconnect to the pool with a new connection.
Transactions in progress remain in an “opened” state (even though the actual transaction is gone); these
must be explicitly rolled back before a reconnect on this Connection can proceed. This is to prevent
applications from accidentally continuing their transactional operations in a non-transactional state.
invalidated
Return True if this connection was invalidated.
reflecttable(table, include_columns=None)
Reflect the columns in the given string table name from the database.
scalar(object, *multiparams, **params)
Executes and returns the first column of the first row.
The underlying result/cursor is closed after execution.
transaction(callable_, *args, **kwargs)
Execute the given function within a transaction boundary.
This is a shortcut for explicitly calling begin() and commit() and optionally rollback() when exceptions are
raised. The given *args and **kwargs will be passed to the function.
See also transaction() on engine.
class Connectable()
Interface for an object which supports execution of SQL constructs.
The two implementations of Connectable are Connection and Engine.
Connectable must also implement the ‘dialect’ member which references a Dialect instance.
contextual_connect()
Return a Connection object which may be part of an ongoing context.
create(entity, **kwargs)
Create a table or index given an appropriate schema object.
drop(entity, **kwargs)
Drop a table or index given an appropriate schema object.
execute(object, *multiparams, **params)
Result Objects
class ResultProxy(context)
Wraps a DB-API cursor object to provide easier access to row columns.
Individual columns may be accessed by their integer position, case-insensitive column name, or by
schema.Column object. e.g.:
row = fetchone()
col1 = row[0]    # the same value is also accessible as row['col1'] or row[sometable.c.col1]
ResultProxy also handles post-processing of result column data using TypeEngine objects, which are
referenced from the originating SQL statement that produced this result set.
__init__(context)
close(_autoclose_connection=True)
Close this ResultProxy.
Closes the underlying DBAPI cursor corresponding to the execution.
Note that any data cached within this ResultProxy is still available. For some types of results, this may
include buffered rows.
If this ResultProxy was generated from an implicit execution, the underlying Connection will also be
closed (returns the underlying DBAPI connection to the connection pool.)
This method is called automatically when:
•all result rows are exhausted using the fetchXXX() methods.
•cursor.description is None.
fetchall()
Fetch all rows, just like DB-API cursor.fetchall().
fetchmany(size=None)
Fetch many rows, just like DB-API cursor.fetchmany(size=cursor.arraysize).
If rows are present, the cursor remains open after this is called. Else the cursor is automatically closed and
an empty list is returned.
fetchone()
Fetch one row, just like DB-API cursor.fetchone().
If a row is present, the cursor remains open after this is called. Else the cursor is automatically closed and
None is returned.
first()
Fetch the first row and then close the result set unconditionally.
Returns None if no row is present.
keys()
Return the current set of string keys for rows.
last_inserted_ids()
Deprecated. Use inserted_primary_key instead.
last_inserted_params()
Return last_inserted_params() from the underlying ExecutionContext.
See ExecutionContext for details.
last_updated_params()
Return last_updated_params() from the underlying ExecutionContext.
See ExecutionContext for details.
lastrow_has_defaults()
Return lastrow_has_defaults() from the underlying ExecutionContext.
See ExecutionContext for details.
lastrowid
return the ‘lastrowid’ accessor on the DBAPI cursor.
This is a DBAPI specific method and is only functional for those backends which support it, for statements
where it is appropriate. Its behavior is not consistent across backends.
Usage of this method is normally unnecessary; the inserted_primary_key method provides a tuple of pri-
mary key values for a newly inserted row, regardless of database backend.
postfetch_cols()
Return postfetch_cols() from the underlying ExecutionContext.
See ExecutionContext for details.
scalar()
Fetch the first column of the first row, and close the result set.
Returns None if no row is present.
supports_sane_multi_rowcount()
Return supports_sane_multi_rowcount from the dialect.
supports_sane_rowcount()
Return supports_sane_rowcount from the dialect.
class RowProxy(parent, row, processors, keymap)
Proxy values from a single cursor row.
Mostly follows “ordered dictionary” behavior, mapping result values to the string-based column name, the
integer position of the result in the row, as well as Column instances which can be mapped to the original
Columns that produced this result set (for results that correspond to constructed SQL expressions).
has_key(key)
Return True if this RowProxy contains the given key.
items()
Return a list of tuples, each tuple containing a key/value pair.
keys()
Return the list of keys as strings represented by this RowProxy.
Transactions
Internals
connection_memoize(key)
Decorator, memoize a function in a connection.info stash.
Only applicable to functions which take no arguments other than a connection. The memo will be stored in
connection.info[key].
class Dialect()
Define the behavior of a specific database and DB-API combination.
Any aspect of metadata definition, SQL query generation, execution, result-set handling, or anything else which
varies between databases is defined under the general category of the Dialect. The Dialect acts as a factory for
other database-specific object implementations including ExecutionContext, Compiled, DefaultGenerator, and
TypeEngine.
All Dialects implement the following attributes:
name identifying name for the dialect from a DBAPI-neutral point of view (i.e. ‘sqlite’)
driver identifying name for the dialect’s DBAPI
positional True if the paramstyle for this Dialect is positional.
paramstyle the paramstyle to be used (some DB-APIs support multiple paramstyles).
convert_unicode True if Unicode conversion should be applied to all str types.
encoding type of encoding to use for unicode, usually defaults to ‘utf-8’.
statement_compiler a Compiled class used to compile SQL statements
ddl_compiler a Compiled class used to compile DDL statements
server_version_info a tuple containing a version number for the DB backend in use. This value is only avail-
able for supporting dialects, and is typically populated during the initial connection to the database.
default_schema_name the name of the default schema. This value is only available for supporting dialects,
and is typically populated during the initial connection to the database.
execution_ctx_cls a ExecutionContext class used to handle statement execution
execute_sequence_format either the ‘tuple’ or ‘list’ type, depending on what cursor.execute() accepts for the
second argument (they vary).
preparer a IdentifierPreparer class used to quote identifiers.
supports_alter True if the database supports ALTER TABLE.
max_identifier_length The maximum length of identifier names.
supports_unicode_statements Indicate whether the DB-API can receive SQL statements as Python unicode
strings
supports_unicode_binds Indicate whether the DB-API can receive string bind parameters as Python unicode
strings
supports_sane_rowcount Indicate whether the dialect properly implements rowcount for UPDATE and
DELETE statements.
supports_sane_multi_rowcount Indicate whether the dialect properly implements rowcount for UPDATE and
DELETE statements when executed via executemany.
preexecute_autoincrement_sequences True if ‘implicit’ primary key functions must be executed separately
in order to get their value. This is currently oriented towards Postgresql.
implicit_returning use RETURNING or equivalent during INSERT execution in order to load newly gen-
erated primary keys and other column defaults in one execution, which are then available via in-
serted_primary_key. If an insert statement has returning() specified explicitly, the “implicit” functionality
is not used and inserted_primary_key will not be available.
dbapi_type_map A mapping of DB-API type objects present in this Dialect’s DB-API implementation mapped
to TypeEngine implementations used by the dialect.
This is used to apply types to result sets based on the DB-API types present in cursor.description; it only
takes effect for result sets against textual statements where no explicit typemap was present.
colspecs A dictionary of TypeEngine classes from sqlalchemy.types mapped to subclasses that are specific to
the dialect class. This dictionary is class-level only and is not accessed from the dialect instance itself.
do_commit(connection)
Implementations might want to put logic here for turning autocommit on/off, etc.
do_rollback(connection)
Implementations might want to put logic here for turning autocommit on/off, etc.
execute_sequence_format
alias of tuple
get_pk_constraint(conn, table_name, schema=None, **kw)
Compatibility method; adapts the result of get_primary_keys() for those dialects which don't implement
get_pk_constraint().
on_connect()
return a callable which sets up a newly created DBAPI connection.
This is used to set dialect-wide per-connection options such as isolation modes, unicode modes, etc.
If a callable is returned, it will be assembled into a pool listener that receives the direct DBAPI connection,
with all wrappers removed.
If None is returned, no listener will be generated.
preparer
alias of IdentifierPreparer
statement_compiler
alias of SQLCompiler
type_descriptor(typeobj)
Provide a database-specific TypeEngine object, given the generic object which comes from the types
module.
This method looks for a dictionary called colspecs as a class or instance-level variable, and passes on
to types.adapt_type().
class DefaultExecutionContext(dialect, connection, compiled_sql=None, compiled_ddl=None, statement=None, parameters=None)
Bases: sqlalchemy.engine.base.ExecutionContext
__init__(dialect, connection, compiled_sql=None, compiled_ddl=None, statement=None, parameters=None)
get_lastrowid()
return self.cursor.lastrowid, or equivalent, after an INSERT.
This may involve calling special cursor functions, issuing a new SELECT on the cursor (or a new one), or
returning a stored value that was calculated within post_exec().
This function will only be called for dialects which support “implicit” primary key generation, keep
preexecute_autoincrement_sequences set to False, and when no explicit id value was bound to the statement.
The function is called once, directly after post_exec() and before the transaction is committed or Result-
Proxy is generated. If the post_exec() method assigns a value to self._lastrowid, the value is used in place
of calling get_lastrowid().
Note that this method is not equivalent to the lastrowid method on ResultProxy, which is a direct
proxy to the DBAPI lastrowid accessor in all cases.
set_input_sizes(translate=None, exclude_types=None)
Given a cursor and ClauseParameters, call the appropriate style of setinputsizes() on the cursor,
using DB-API types from the bind parameter’s TypeEngine objects.
class ExecutionContext()
A messenger object for a Dialect that corresponds to a single execution.
ExecutionContext should have these data members:
connection Connection object which can be freely used by default value generators to execute SQL. This
Connection should reference the same underlying connection/transactional resources of root_connection.
root_connection Connection object which is the source of this ExecutionContext. This Connection may have
close_with_result=True set, in which case it can only be used once.
dialect dialect which created this ExecutionContext.
cursor DB-API cursor procured from the connection.
compiled if passed to constructor, sqlalchemy.engine.base.Compiled object being executed,
statement string version of the statement to be executed. Is either passed to the constructor, or must be created
from the sql.Compiled object by the time pre_exec() has completed.
parameters bind parameters passed to the execute() method. For compiled statements, this is a dictionary or
list of dictionaries. For textual statements, it should be in a format suitable for the dialect’s paramstyle (i.e.
dict or list of dicts for non positional, list or list of lists/tuples for positional).
isinsert True if the statement is an INSERT.
isupdate True if the statement is an UPDATE.
should_autocommit True if the statement is a “committable” statement.
postfetch_cols a list of Column objects for which a server-side default or inline SQL expression value was fired
off. Applies to inserts and updates.
create_cursor()
Return a new cursor generated from this ExecutionContext’s connection.
Some dialects may wish to change the behavior of connection.cursor(), such as postgresql which may
return a PG “server side” cursor.
get_rowcount()
Return the number of rows produced (by a SELECT query) or affected (by an INSERT/UPDATE/DELETE
statement).
Note that this row count may not be properly implemented in some dialects; this is indicated by the
supports_sane_rowcount and supports_sane_multi_rowcount dialect attributes.
handle_dbapi_exception(e)
Receive a DBAPI exception which occurred upon execute, result fetch, etc.
last_inserted_params()
Return a dictionary of the full parameter dictionary for the last compiled INSERT statement.
Includes any ColumnDefaults or Sequences that were pre-executed.
last_updated_params()
Return a dictionary of the full parameter dictionary for the last compiled UPDATE statement.
Includes any ColumnDefaults that were pre-executed.
lastrow_has_defaults()
Return True if the last INSERT or UPDATE row contained inlined or database-side defaults.
post_exec()
Called after the execution of a compiled statement.
If a compiled statement was passed to this ExecutionContext, the last_insert_ids, last_inserted_params,
etc. datamembers should be available after this method completes.
pre_exec()
Called before an execution of a compiled statement.
If a compiled statement was passed to this ExecutionContext, the statement and parameters datamembers
must be initialized after this statement is complete.
result()
Return a result object corresponding to this ExecutionContext.
Returns a ResultProxy.
should_autocommit_text(statement)
Parse the given textual statement and return True if it refers to a “committable” statement.
SQLAlchemy ships with a connection pooling framework that integrates with the Engine system and can also be used
on its own to manage plain DB-API connections.
At the base of any database helper library is a system for efficiently acquiring connections to the database. Since the
establishment of a database connection is typically a somewhat expensive operation, an application needs a way to
get at database connections repeatedly without incurring the full overhead each time. Particularly for server-side web
applications, a connection pool is the standard way to maintain a group or “pool” of active database connections which
are reused from request to request in a single server process.
The Engine returned by the create_engine() function in most cases has a QueuePool integrated, pre-configured
with reasonable pooling defaults. If you're reading this section simply to enable pooling, congratulations!
You're already done.
The most common QueuePool tuning parameters can be passed directly to create_engine() as keyword argu-
ments: pool_size, max_overflow, pool_recycle and pool_timeout. For example:
engine = create_engine('postgresql://me@localhost/mydb',
                       pool_size=20, max_overflow=0)
In the case of SQLite, a SingletonThreadPool is provided instead, to provide compatibility with SQLite’s
restricted threading model.
Pool instances may be created directly for your own use or to supply to sqlalchemy.create_engine() via
the pool= keyword argument.
Constructing your own pool requires supplying a callable function the Pool can use to create new connections. The
function will be called with no arguments.
Through this method, custom connection schemes can be made, such as using connections from another library's
pool, or making a new connection that automatically executes some initialization commands:
def getconn():
    c = psycopg2.connect(user='ed', host='127.0.0.1', dbname='test')
    # execute an initialization function on the connection before returning
    c.cursor().execute("setup_encodings()")
    return c
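The callable is then handed to the Pool constructor. A brief sketch using QueuePool with the getconn() function above; the sizing values are illustrative:
import sqlalchemy.pool as pool

p = pool.QueuePool(getconn, pool_size=5, max_overflow=10)

# connections are checked out with connect() and returned with close()
conn = p.connect()
cursor = conn.cursor()
cursor.execute("select 1")
conn.close()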
Or with SingletonThreadPool:
p = pool.SingletonThreadPool(lambda: sqlite3.connect('myfile.db'))
create_connection()
dispose()
Dispose of this pool.
This method leaves the possibility of checked-out connections remaining open. It is advised to not reuse
the pool once dispose() is called; instead, use a new pool constructed by the recreate() method.
do_get()
do_return_conn(conn)
get()
recreate()
Return a new instance with identical creation arguments.
return_conn(record)
status()
unique_connection()
class QueuePool(creator, pool_size=5, max_overflow=10, timeout=30, **kw)
Bases: sqlalchemy.pool.Pool
A Pool that imposes a limit on the number of open connections.
__init__(creator, pool_size=5, max_overflow=10, timeout=30, **kw)
Construct a QueuePool.
Parameters
• creator – a callable function that returns a DB-API connection object. The function will
be called with no arguments.
• pool_size – The size of the pool to be maintained. This is the largest number of connections
that will be kept persistently in the pool. Note that the pool begins with no connections;
once this number of connections is requested, that number of connections will remain.
Defaults to 5.
• max_overflow – The maximum overflow size of the pool. When the number of checked-
out connections reaches the size set in pool_size, additional connections will be returned
up to this limit. When those additional connections are returned to the pool, they are
disconnected and discarded. It follows then that the total number of simultaneous connec-
tions the pool will allow is pool_size + max_overflow, and the total number of “sleeping”
connections the pool will allow is pool_size. max_overflow can be set to -1 to indicate
no overflow limit; no limit will be placed on the total number of concurrent connections.
Defaults to 10.
• timeout – The number of seconds to wait before giving up on returning a connection.
Defaults to 30.
• recycle – If set to a value other than -1, the number of seconds between connection recycling, which means
upon checkout, if this timeout is surpassed the connection will be closed and replaced with
a newly opened connection. Defaults to -1.
• echo – If True, connections being pulled and retrieved from the pool will be logged to
the standard output, as well as pool sizing information. Echoing can also be achieved by
enabling logging for the “sqlalchemy.pool” namespace. Defaults to False.
• use_threadlocal – If set to True, repeated calls to connect() within the same application
thread will be guaranteed to return the same connection object, if one has already been re-
trieved from the pool and has not been returned yet. Offers a slight performance advantage
at the cost of individual transactions by default. The unique_connection() method
is provided to bypass the threadlocal behavior installed into connect().
• reset_on_return – If true, reset the database state of connections returned to the pool. This
is typically a ROLLBACK to release locks and transaction resources. Disable at your own
peril. Defaults to True.
Any PEP 249 DB-API module can be “proxied” through the connection pool transparently. Usage of the DB-API is
exactly as before, except the connect() method will consult the pool. Below we illustrate this with psycopg2:
psycopg = pool.manage(psycopg)
This produces a _DBProxy object which supports the same connect() function as the original DB-API module.
Upon connection, a connection proxy object is returned, which delegates its calls to a real DB-API connection object.
This connection object is stored persistently within a connection pool (an instance of Pool) that corresponds to the
exact connection arguments sent to the connect() function.
The connection proxy supports all of the methods on the original connection object, most of which are proxied via
__getattr__(). The close() method will return the connection to the pool, and the cursor() method will
return a proxied cursor object. Both the connection proxy and the cursor proxy will also return the underlying connec-
tion to the pool after they have both been garbage collected, which is detected via weakref callbacks (__del__ is not
used).
Additionally, when connections are returned to the pool, a rollback() is issued on the connection unconditionally.
This is to release any locks still held by the connection that may have resulted from normal activity.
By default, the connect() method will return the same connection that is already checked out in the current thread.
This allows a particular connection to be used in a given thread without needing to pass it around between functions.
To disable this behavior, specify use_threadlocal=False to the manage() function.
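A brief sketch of the proxied usage; the connection arguments shown are illustrative:
import sqlalchemy.pool as pool
import psycopg2 as psycopg

psycopg = pool.manage(psycopg)

# connect() now draws from a pool keyed on these exact connection arguments
conn = psycopg.connect(database='test', user='scott', password='tiger')
cursor = conn.cursor()
cursor.execute("select 1")
conn.close()   # returns the underlying DB-API connection to the pool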
manage(module, **params)
Return a proxy for a DB-API module that automatically pools connections.
Given a DB-API 2.0 module and pool management parameters, returns a proxy for the module that will auto-
matically pool connections, creating new connection pools for each distinct set of connection arguments sent to
the decorated module’s connect() function.
Parameters
• module – a DB-API 2.0 database module
• poolclass – the class used by the pool module to provide pooling. Defaults to QueuePool.
• **params – will be passed through to poolclass
clear_managers()
Remove all current DB-API 2.0 managers.
All pools and connections are disposed.
Functions
The expression package uses functions to construct SQL expressions. The return value of each function is an object
instance which is a subclass of ClauseElement.
alias(selectable, alias=None)
Return an Alias object.
An Alias represents any FromClause with an alternate name assigned within SQL, typically using the AS
clause when generated, e.g. SELECT * FROM table AS aliasname.
Similar functionality is available via the alias() method available on all FromClause subclasses.
selectable any FromClause subclass, such as a table, select statement, etc..
alias string name to be assigned as the alias. If None, a random name will be generated.
and_(*clauses)
Join a list of clauses together using the AND operator.
The & operator is also overloaded on all _CompareMixin subclasses to produce the same result.
asc(column)
Return an ascending ORDER BY clause element.
e.g.:
order_by = [asc(table1.mycol)]
bindparam(key, value=None, type_=None, unique=False, required=False)
Create a bind parameter clause with the given key.
value a default value for this bind parameter. A bindparam with a value is called a value-based
bindparam.
type_ a sqlalchemy.types.TypeEngine object indicating the type of this bind param, will invoke type-specific
bind parameter processing
unique if True, bind params sharing the same name will have their underlying key modified to a uniquely
generated name. mostly useful with value-based bind params.
required A value is required at execution time.
case(whens, value=None, else_=None)
Produce a CASE statement.
whens A sequence of pairs, or alternatively a dict, to be translated into “WHEN / THEN” clauses.
value Optional for simple case statements, produces a column expression as in “CASE <expr> WHEN ...”
else_ Optional as well, for case defaults produces the “ELSE” portion of the “CASE” statement.
The expressions used for THEN and ELSE, when specified as strings, will be interpreted as bound values. To
specify textual SQL expressions for these, use the literal_column(<string>) or text(<string>) construct.
The expressions used for the WHEN criterion may only be literal strings when “value” is present, i.e. CASE
table.somecol WHEN “x” THEN “y”. Otherwise, literal strings are not accepted in this position, and either the
text(<string>) or literal(<string>) constructs must be used to interpret raw string values.
Usage examples:
Using literal_column(), to allow for databases that do not support bind parameters in the then clause.
The type can be specified which determines the type of the case() construct overall:
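A sketch of such an expression; the orderline table and the String type (imported from sqlalchemy) are assumed:
case([(orderline.c.qty > 100, literal_column("'greaterthan100'", String)),
      (orderline.c.qty > 10, literal_column("'greaterthan10'", String))],
     else_=literal_column("'lessthan10'", String))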
cast(clause, totype, **kwargs)
Return a CAST function.
Equivalent of SQL CAST(clause AS totype). Use with a TypeEngine subclass, i.e.:
cast(table.c.unit_price * table.c.qty, Numeric(10, 4))
or:
cast(table.c.timestamp, DATE)
column(text, type_=None)
Return a textual column clause, as would be in the columns clause of a SELECT statement.
The object returned is an instance of ColumnClause, which represents the “syntactical” portion of the
schema-level Column object.
text the name of the column. Quoting rules will be applied to the clause like any other column name. For
textual column constructs that are not to be quoted, use the literal_column() function.
type_ an optional TypeEngine object which will provide result-set translation for this column.
collate(expression, collation)
Return the clause expression COLLATE collation.
delete(table, whereclause=None, **kwargs)
Return a Delete clause element.
Similar functionality is available via the delete() method on Table.
Parameters
• table – The table to be updated.
• whereclause – A ClauseElement describing the WHERE condition of the UPDATE state-
ment. Note that the where() generative method may be used instead.
desc(column)
Return a descending ORDER BY clause element.
e.g.:
order_by = [desc(table1.mycol)]
distinct(expr)
Return a DISTINCT clause.
except_(*selects, **kwargs)
Return an EXCEPT of multiple selectables.
The returned object is an instance of CompoundSelect.
*selects a list of Select instances.
**kwargs available keyword arguments are the same as those of select().
except_all(*selects, **kwargs)
Return an EXCEPT ALL of multiple selectables.
The returned object is an instance of CompoundSelect.
*selects a list of Select instances.
**kwargs available keyword arguments are the same as those of select().
exists(*args, **kwargs)
Return an EXISTS clause as applied to a Select object.
Calling styles are of the following forms:
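Two illustrative forms, assuming a Table named table:
# applied to an existing select() construct
exists(select([table.c.id]).where(table.c.name == 'foo'))

# or passing columns and whereclause arguments as for select()
exists([table.c.id], table.c.name == 'foo')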
extract(field, expr)
Return the clause extract(field FROM expr).
func
Generate SQL function expressions.
func is a special object instance which generates SQL functions based on name-based attributes; see the
examples at the end of this entry.
Any name can be given to func. If the function name is unknown to SQLAlchemy, it will be rendered exactly
as is. For common SQL functions which SQLAlchemy is aware of, the name may be interpreted as a generic
function which will be compiled appropriately to the target database. To call functions which are present in
dot-separated packages, specify them in the same manner.
SQLAlchemy can be made aware of the return type of functions to enable type-specific lexical and result-based
behavior. For example, to ensure that a string-based function returns a Unicode value and is similarly treated as
a string in expressions, specify Unicode as the type.
Functions which are interpreted as “generic” functions know how to calculate their return type automatically.
For a listing of known generic functions, see Generic Functions.
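A few illustrative uses; the function names here are arbitrary and a users Table is assumed:
from sqlalchemy import func, select, Unicode

print func.my_custom_proc(1, 2)          # unknown name, rendered exactly as given
print func.current_timestamp()           # known generic function
print func.stats.yield_curve(5, 10)      # dot-separated package name
print select([func.count(users.c.id)])   # generic count() within a SELECT

# declare the return type so the result behaves as a string expression
expr = func.lower(users.c.name, type_=Unicode)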
insert(table, values=None, inline=False, **kwargs)
Return an Insert clause element.
Similar functionality is available via the insert() method on Table.
Parameters
• table – The table to be inserted into.
• values – A dictionary which specifies the column specifications of the INSERT, and is
optional. If left as None, the column specifications are determined from the bind parameters
used during the compile phase of the INSERT statement. If the bind parameters also are
None during the compile phase, then the column specifications will be generated from the
full list of table columns. Note that the values() generative method may also be used for
this.
• prefixes – A list of modifier keywords to be inserted between INSERT and INTO. Alterna-
tively, the prefix_with() generative method may be used.
• inline – if True, SQL defaults will be compiled ‘inline’ into the statement and not pre-
executed.
If both values and compile-time bind parameters are present, the compile-time bind parameters override the
information specified within values on a per-key basis.
The keys within values can be either Column objects or their string identifiers. Each key may reference one of:
•a literal data value (i.e. string, number, etc.);
•a Column object;
•a SELECT statement.
If a SELECT statement is specified which references this INSERT statement's table, the statement will be
correlated against the INSERT statement.
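A brief sketch, assuming a Table named users and a Connection named conn:
ins = insert(users, values={'name': 'ed', 'fullname': 'Ed Jones'})
# equivalently, using the generative values() method
ins = users.insert().values(name='ed', fullname='Ed Jones')
conn.execute(ins)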
literal_column(text, type_=None)
Return a textual column expression, as would be in the columns clause of a SELECT statement.
The object returned supports further expressions in the same way as any other column object, including com-
parison, math and string operations. The type_ parameter is important to determine proper expression behavior
(such as, ‘+’ means string concatenation or numerical addition based on the type).
text the text of the expression; can be any SQL expression. Quoting rules will not be applied. To specify a
column-name expression which should be subject to quoting rules, use the column() function.
type_ an optional TypeEngine object which will provide result-set translation and additional expression
semantics for this column. If left as None the type will be NullType.
not_(clause)
Return a negation of the given clause, i.e. NOT(clause).
The ~ operator is also overloaded on all _CompareMixin subclasses to produce the same result.
null()
Return a _Null object, which compiles to NULL in a sql statement.
or_(*clauses)
Join a list of clauses together using the OR operator.
The | operator is also overloaded on all _CompareMixin subclasses to produce the same result.
outparam(key, type_=None)
Create an ‘OUT’ parameter for usage in functions (stored procedures), for databases which support them.
The outparam can be used like a regular function parameter. The “output” value will be available from the
ResultProxy object via its out_parameters attribute, which returns a dictionary containing the values.
outerjoin(left, right, onclause=None)
Return an OUTER JOIN clause element.
The returned object is an instance of Join.
Similar functionality is also available via the outerjoin() method on any FromClause.
To chain joins together, use the join() or outerjoin() methods on the resulting Join object.
select(columns=None, whereclause=None, from_obj=[], **kwargs)
Returns a SELECT clause element.
Similar functionality is also available via the select() method on any FromClause.
The returned object is an instance of Select.
All arguments which accept ClauseElement arguments also accept string arguments, which will be con-
verted as appropriate into either text() or literal_column() constructs.
columns A list of ClauseElement objects, typically ColumnElement objects or subclasses, which will
form the columns clause of the resulting statement. For all members which are instances of Selectable,
the individual ColumnElement members of the Selectable will be added individually to the
columns clause. For example, specifying a Table instance will result in all the contained Column
objects within to be added to the columns clause.
This argument is not present on the form of select() available on Table.
whereclause A ClauseElement expression which will be used to form the WHERE clause.
from_obj A list of ClauseElement objects which will be added to the FROM clause of the resulting state-
ment. Note that “from” objects are automatically located within the columns and whereclause ClauseEle-
ments. Use this parameter to explicitly specify “from” objects which are not automatically locatable. This
could include Table objects that aren't otherwise present, or Join objects whose presence will supersede
that of the Table objects already located in the other clauses.
**kwargs Additional parameters include:
autocommit Deprecated. Use .execution_options(autocommit=<True|False>) to set the autocommit op-
tion.
prefixes a list of strings or ClauseElement objects to include directly after the SELECT keyword in
the generated statement, for dialect-specific query features.
distinct=False when True, applies a DISTINCT qualifier to the columns clause of the resulting state-
ment.
use_labels=False when True, the statement will be generated using labels for each column in the
columns clause, which qualify each column with its parent table’s (or aliases) name so that name
conflicts between columns in different tables don’t occur. The format of the label is <table-
name>_<column>. The “c” collection of the resulting Select object will use these names as well
for targeting column members.
for_update=False when True, applies FOR UPDATE to the end of the resulting statement. Certain
database dialects also support alternate values for this parameter, for example mysql supports “read”
which translates to LOCK IN SHARE MODE, and oracle supports “nowait” which translates to FOR
UPDATE NOWAIT.
correlate=True indicates that this Select object should have its contained FromClause elements
“correlated” to an enclosing Select object. This means that any ClauseElement instance within
the “froms” collection of this Select which is also present in the “froms” collection of an enclosing
select will not be rendered in the FROM clause of this select statement.
group_by a list of ClauseElement objects which will comprise the GROUP BY clause of the resulting
select.
having a ClauseElement that will comprise the HAVING clause of the resulting select when GROUP
BY is used.
order_by a scalar or list of ClauseElement objects which will comprise the ORDER BY clause of the
resulting select.
limit=None a numerical value which usually compiles to a LIMIT expression in the resulting select.
Databases that don’t support LIMIT will attempt to provide similar functionality.
offset=None a numeric value which usually compiles to an OFFSET expression in the resulting select.
Databases that don’t support OFFSET will attempt to provide similar functionality.
bind=None an Engine or Connection instance to which the resulting Select object will be bound.
The Select object will otherwise automatically bind to whatever Connectable instances can be
located within its contained ClauseElement members.
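A brief sketch, assuming a users Table and an Engine named engine:
from sqlalchemy.sql import select, and_

s = select([users.c.id, users.c.name],
           and_(users.c.name.like('e%'), users.c.id > 5),
           order_by=[users.c.name], limit=10)
result = engine.execute(s)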
subquery(alias, *args, **kwargs)
Return an Alias object derived from a Select.
table(name, *columns)
Return a TableClause object.
This is a primitive version of the Table object, which is a subclass of this object.
text(text, bind=None, *args, **kwargs)
Create literal text to be inserted into a query.
text the text of the SQL statement to be created. Use :<param> to specify bind parameters; they will be
compiled to their engine-specific format.
bind an optional connection or engine to be used for this text query.
autocommit=True Deprecated. Use .execution_options(autocommit=<True|False>) to set the autocommit op-
tion.
bindparams a list of bindparam() instances which can be used to define the types and/or initial values for
the bind parameters within the textual statement; the keynames of the bindparams must match those within
the text of the statement. The types will be used for pre-processing on bind values.
typemap a dictionary mapping the names of columns represented in the SELECT clause of the textual statement
to type objects, which will be used to perform post-processing on columns within the result set (for textual
statements that produce result sets).
tuple_(*expr)
Return a SQL tuple.
Main usage is to produce a composite IN construct:
tuple_(table.c.col1, table.c.col2).in_(
[(1, 2), (5, 12), (10, 19)]
)
union(*selects, **kwargs)
Return a UNION of multiple selectables.
The returned object is an instance of CompoundSelect.
A similar union() method is available on all FromClause subclasses.
update(table, whereclause=None, values=None, inline=False, **kwargs)
Return an Update clause element.
Similar functionality is available via the update() method on Table.
Parameters
• table – The table to be updated.
• whereclause – A ClauseElement describing the WHERE condition of the UPDATE statement.
Note that the where() generative method may be used instead.
• values – A dictionary which specifies the SET conditions of the UPDATE, and is optional.
If left as None, the SET conditions are determined from the bind parameters used during the
compile phase of the UPDATE statement. If the bind parameters also are None during the
compile phase, then the SET conditions will be generated from the full list of table columns.
Note that the values() generative method may also be used for this.
• inline – if True, SQL defaults will be compiled ‘inline’ into the statement and not pre-
executed.
If both values and compile-time bind parameters are present, the compile-time bind parameters override the
information specified within values on a per-key basis.
The keys within values can be either Column objects or their string identifiers. Each key may reference one of:
•a literal data value (i.e. string, number, etc.);
•a Column object;
•a SELECT statement.
If a SELECT statement is specified which references this UPDATE statement’s table, the statement will be
correlated against the UPDATE statement.
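A brief sketch, assuming a users Table and a Connection named conn:
stmt = update(users, users.c.id == 7, values={users.c.name: 'jack'})
conn.execute(stmt)

# or generatively:
conn.execute(users.update().where(users.c.id == 7).values(name='jack'))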
Classes
compare(other, **kw)
Compare this _BindParamClause to the given clause.
class ClauseElement()
Bases: sqlalchemy.sql.visitors.Visitable
Base class for elements of a programmatically constructed SQL expression.
bind
Returns the Engine or Connection to which this ClauseElement is bound, or None if none found.
compare(other, **kw)
Compare this ClauseElement to the given ClauseElement.
Subclasses should override the default behavior, which is a straight identity comparison.
**kw are arguments consumed by subclass compare() methods and may be used to modify the criteria for
comparison. (see ColumnElement)
compile(bind=None, dialect=None, **kw)
Compile this SQL expression.
The return value is a Compiled object. Calling str() or unicode() on the returned value will yield
a string representation of the result. The Compiled object also can return a dictionary of bind parameter
names and values using the params accessor.
Parameters
• bind – An Engine or Connection from which a Compiled will be acquired. This
argument takes precedence over this ClauseElement‘s bound engine, if any.
• column_keys – Used for INSERT and UPDATE statements, a list of column names which
should be present in the VALUES clause of the compiled statement. If None, all columns
from the target table object are rendered.
• dialect – A Dialect instance from which a Compiled will be acquired. This argu-
ment takes precedence over the bind argument as well as this ClauseElement‘s bound
engine, if any.
• inline – Used for INSERT statements, for a dialect which does not support inline retrieval
of newly generated primary key columns, will force the expression used to create the new
primary key value to be rendered inline within the INSERT statement’s VALUES clause.
This typically refers to Sequence execution but may also refer to any server-side default
generation function associated with a primary key Column.
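A brief sketch using the lightweight table() and column() constructs:
from sqlalchemy.sql import table, column

users = table('users', column('id'), column('name'))
stmt = users.select().where(users.c.name == 'ed')
compiled = stmt.compile()
print compiled          # the SQL string
print compiled.params   # dictionary of bind parameter names and values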
execute(*multiparams, **params)
Compile and execute this ClauseElement.
get_children(**kwargs)
Return immediate child elements of this ClauseElement.
This is used for visit traversal.
**kwargs may contain flags that change the collection that is returned, for example to return a subset of
items in order to cut down on larger traversals, or to return child items from a different context (such as
schema-level collections instead of clause-level).
params(*optionaldict, **kwargs)
Return a copy with bindparam() elements replaced.
Returns a copy of this ClauseElement with bindparam() elements replaced with values taken from the
given dictionary:
>>> clause = column('x') + bindparam('foo')
>>> print clause.compile().params
{'foo':None}
>>> print clause.params({'foo':7}).compile().params
{'foo':7}
scalar(*multiparams, **params)
Compile and execute this ClauseElement, returning the result’s scalar representation.
unique_params(*optionaldict, **kwargs)
Return a copy with bindparam() elements replaced.
Same functionality as params(), except adds unique=True to affected bind parameters so that multiple
statements can be used.
class ColumnClause(text, selectable=None, type_=None, is_literal=False)
Bases: sqlalchemy.sql.expression._Immutable, sqlalchemy.sql.expression.ColumnElement
Represents a generic column expression from any textual string.
This includes columns associated with tables, aliases and select statements, but also any arbitrary text. May
or may not be bound to an underlying Selectable. ColumnClause is usually created publicly via the
column() function or the literal_column() function.
text the text of the element.
selectable parent selectable.
type TypeEngine object which can associate this ColumnClause with a type.
is_literal if True, the ColumnClause is assumed to be an exact expression that will be delivered to the output
with no quoting rules applied regardless of case sensitive settings. the literal_column() function is
usually used to create such a ColumnClause.
__init__(text, selectable=None, type_=None, is_literal=False)
class ColumnCollection(*cols)
Bases: sqlalchemy.util.OrderedProperties
An ordered dictionary that stores a list of ColumnElement instances.
Overrides the __eq__() method to produce SQL clauses between sets of correlated columns.
__init__(*cols)
add(column)
Add a column to this collection.
The key attribute of the column will be used as the hash key for this dictionary.
replace(column)
add the given column to this collection, removing unaliased versions of this column as well as existing
columns with the same key.
e.g.:
t = Table('sometable', metadata, Column('col1', Integer))
t.columns.replace(Column('col1', Integer, key='columnone'))
will remove the original 'col1' from the collection, and add the new column under the name
'columnone'.
Used by schema.Column to override columns during table reflection.
class ColumnElement()
Bases: sqlalchemy.sql.expression.ClauseElement, sqlalchemy.sql.expression._CompareMixin
Represent an element that is usable within the “column clause” portion of a SELECT statement.
This includes columns associated with tables, aliases, and subqueries, expressions, function calls, SQL keywords
such as NULL, literals, etc. ColumnElement is the ultimate base class for all such elements.
ColumnElement supports the ability to be a proxy element, which indicates that the ColumnElement may
be associated with a Selectable which was derived from another Selectable. An example of a “derived”
Selectable is an Alias of a Table.
A ColumnElement, by subclassing the _CompareMixin mixin class, provides the ability to generate new
ClauseElement objects using Python expressions. See the _CompareMixin docstring for more details.
For example, the generic operator function op() can be used to produce arbitrary operators:
somecolumn.op('&')(0xff)
is a bitwise AND of the value in somecolumn.
operate(op, *other, **kwargs)
reverse_operate(op, other, **kwargs)
startswith(other, escape=None)
Produce the clause LIKE '<other>%'
class ColumnOperators()
Defines comparison and math operations.
__init__
x.__init__(...) initializes x; see x.__class__.__doc__ for signature
asc()
between(cleft, cright)
collate(collation)
concat(other)
contains(other, **kwargs)
desc()
distinct()
endswith(other, **kwargs)
ilike(other, escape=None)
in_(other)
like(other, escape=None)
match(other, **kwargs)
op(opstring)
operate(op, *other, **kwargs)
reverse_operate(op, other, **kwargs)
startswith(other, **kwargs)
timetuple
Hack, allows datetime objects to be compared on the LHS.
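These methods are available on any column expression; a few illustrative uses, assuming a users Table:
users.c.name.like('ed%')         # LIKE comparison
users.c.id.in_([1, 2, 3])        # IN clause
users.c.id.between(5, 10)        # BETWEEN clause
users.c.name.startswith('ed')    # string prefix comparison
users.c.name.op('~')('^ed')      # arbitrary operator via op()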
class CompoundSelect(keyword, *selects, **kwargs)
Bases: sqlalchemy.sql.expression._SelectBaseMixin, sqlalchemy.sql.expression.FromClause
Forms the basis of UNION, UNION ALL, and other SELECT-based set operations.
__init__(keyword, *selects, **kwargs)
class Delete(table, whereclause, bind=None, returning=None, **kwargs)
Bases: sqlalchemy.sql.expression._UpdateBase
Represent a DELETE construct.
The Delete object is created using the delete() function.
where(whereclause)
Add the given WHERE clause to a newly returned delete construct.
class Executable()
Bases: sqlalchemy.sql.expression._Generative
Mark a ClauseElement as supporting execution.
Executable is a superclass for all “statement” types of objects, including select(), delete(),
update(), insert(), text().
execution_options(**kw)
Set non-SQL options for the statement which take effect during execution.
Current options include:
•autocommit - when True, a COMMIT will be invoked after execution when executed in ‘autocommit’
mode, i.e. when an explicit transaction is not begun on the connection. Note that DBAPI connections
by default are always in a transaction - SQLAlchemy uses rules applied to different kinds of statements
to determine if COMMIT will be invoked in order to provide its “autocommit” feature. Typically,
all INSERT/UPDATE/DELETE statements as well as CREATE/DROP statements have autocommit
behavior enabled; SELECT constructs do not. Use this option when invoking a SELECT or other
specific SQL construct where COMMIT is desired (typically when calling stored procedures and
such).
•stream_results - indicate to the dialect that results should be “streamed” and not pre-buffered, if pos-
sible. This is a limitation of many DBAPIs. The flag is currently understood only by the psycopg2
dialect.
•compiled_cache - a dictionary where Compiled objects will be cached when the Connection
compiles a clause expression into a dialect- and parameter-specific Compiled object. It is the user’s
responsibility to manage the size of this dictionary, which will have keys corresponding to the dialect,
clause element, the column names within the VALUES or SET clause of an INSERT or UPDATE, as
well as the “batch” mode for an INSERT or UPDATE statement. The format of this dictionary is not
guaranteed to stay the same in future releases.
This option is usually more appropriate to use via the
sqlalchemy.engine.base.Connection.execution_options() method of Connection,
rather than upon individual statement objects, though the effect is the same.
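A brief sketch at the statement level, assuming a users Table and a Connection named conn:
stmt = select([users]).execution_options(autocommit=True)
result = conn.execute(stmt)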
See also:
sqlalchemy.engine.base.Connection.execution_options()
sqlalchemy.orm.query.Query.execution_options()
class FunctionElement(*clauses, **kwargs)
Bases: sqlalchemy.sql.expression.Executable, sqlalchemy.sql.expression.ColumnElement,
sqlalchemy.sql.expression.FromClause
Base for SQL function-oriented constructs.
__init__(*clauses, **kwargs)
class Function(name, *clauses, **kw)
Bases: sqlalchemy.sql.expression.FunctionElement
Describe a named SQL function.
__init__(name, *clauses, **kw)
class FromClause()
Bases: sqlalchemy.sql.expression.Selectable
Represent an element that can be used within the FROM clause of a SELECT statement.
alias(name=None)
return an alias of this FromClause.
For table objects, this has the effect of the table being rendered as tablename AS aliasname in a
SELECT statement. For select objects, the effect is that of creating a named subquery, i.e. (select
...) AS aliasname. The alias() method is the general way to create a “subquery” out of an
existing SELECT.
The name parameter is optional, and if left blank an “anonymous” name will be generated at compile
time, guaranteed to be unique against other anonymous constructs used in the same statement.
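As a sketch, assuming a users Table with id and manager_id columns, an alias supports a self-join:
from sqlalchemy import select
a2 = users.alias('u2')      # explicit name; users.alias() would generate one at compile time
stmt = select([users, a2]).where(users.c.manager_id == a2.c.id)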
c
Return the collection of Column objects contained by this FromClause.
columns
Return the collection of Column objects contained by this FromClause.
correspond_on_equivalents(column, equivalents)
Return corresponding_column for the given column, or if None search for a match in the given dictionary.
corresponding_column(column, require_embedded=False)
Given a ColumnElement, return the exported ColumnElement object from this Selectable which
corresponds to that original Column via a common ancestor column.
Parameters
• column – the target ColumnElement to be matched
• require_embedded – only return corresponding columns for the given ColumnElement,
if the given ColumnElement is actually present within a sub-element of this
FromClause. Normally the column will match if it merely shares a common ancestor
with one of the exported columns of this FromClause.
count(whereclause=None, **params)
return a SELECT COUNT generated against this FromClause.
description
a brief description of this FromClause.
Used primarily for error message formatting.
foreign_keys
Return the collection of ForeignKey objects which this FromClause references.
is_derived_from(fromclause)
Return True if this FromClause is ‘derived’ from the given FromClause.
An example would be an Alias of a Table is derived from that Table.
join(right, onclause=None, isouter=False)
return a join of this FromClause against another FromClause.
outerjoin(right, onclause=None)
return an outer join of this FromClause against another FromClause.
primary_key
Return the collection of Column objects which comprise the primary key of this FromClause.
replace_selectable(old, alias)
replace all occurrences of FromClause 'old' with the given Alias object, returning a copy of this
FromClause.
select(whereclause=None, **params)
return a SELECT of this FromClause.
class Insert(table, values=None, inline=False, bind=None, prefixes=None, returning=None, **kwargs)
Bases: sqlalchemy.sql.expression._ValuesBase
Represent an INSERT construct.
The Insert object is created using the insert() function.
prefix_with(clause)
Add a word or expression between INSERT and INTO. Generative.
If multiple prefixes are supplied, they will be separated with spaces.
values(*args, **kwargs)
specify the VALUES clause for an INSERT statement, or the SET clause for an UPDATE.
**kwargs key=<somevalue> arguments
*args A single dictionary can be sent as the first positional argument. This allows non-string
based keys, such as Column objects, to be used.
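As an illustration against a hypothetical users table, the two calling styles are equivalent:
stmt = users.insert().values(name='ed', fullname='Ed Jones')
stmt = users.insert().values({users.c.name: 'ed', users.c.fullname: 'Ed Jones'})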
correlate(*fromclauses)
return a new select() construct which will correlate the given FROM clauses to that of an enclosing select(),
if a match is found.
By “match”, the given fromclause must be present in this select’s list of FROM objects and also present in
an enclosing select’s list of FROM objects.
Calling this method turns off the select’s default behavior of “auto-correlation”. Normally, select() auto-
correlates all of its FROM clauses to those of an embedded select when compiled.
If the fromclause is None, correlation is disabled for the returned select().
distinct()
return a new select() construct which will apply DISTINCT to its columns clause.
except_(other, **kwargs)
return a SQL EXCEPT of this select() construct against the given selectable.
except_all(other, **kwargs)
return a SQL EXCEPT ALL of this select() construct against the given selectable.
froms
Return the displayed list of FromClause elements.
get_children(column_collections=True, **kwargs)
return child elements as per the ClauseElement specification.
having(having)
return a new select() construct with the given expression added to its HAVING clause, joined to the existing
clause via AND, if any.
inner_columns
an iterator of all ColumnElement expressions which would be rendered into the columns clause of the
resulting SELECT statement.
intersect(other, **kwargs)
return a SQL INTERSECT of this select() construct against the given selectable.
intersect_all(other, **kwargs)
return a SQL INTERSECT ALL of this select() construct against the given selectable.
prefix_with(clause)
return a new select() construct which will apply the given expression to the start of its columns clause, not
using any commas.
select_from(fromclause)
return a new select() construct with the given FROM expression applied to its list of FROM objects.
self_group(against=None)
return a ‘grouping’ construct as per the ClauseElement specification.
This produces an element that can be embedded in an expression. Note that this method is called automat-
ically as needed when constructing expressions.
union(other, **kwargs)
return a SQL UNION of this select() construct against the given selectable.
union_all(other, **kwargs)
return a SQL UNION ALL of this select() construct against the given selectable.
where(whereclause)
return a new select() construct with the given expression added to its WHERE clause, joined to the existing
clause via AND, if any.
with_hint(selectable, text, dialect_name=None)
Add an indexing hint for the given selectable to this Select.
The text of the hint is specific to a particular backend, and typically uses Python string substitution
syntax to render the name of the table or alias, such as for Oracle:
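A sketch of such an Oracle-style hint; the table and index names are assumptions:
select([mytable]).with_hint(mytable, "+ index(%(name)s ix_mytable)")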
order_by(*clauses)
return a new selectable with the given list of ORDER BY criterion applied.
The criterion will be appended to any pre-existing ORDER BY criterion.
class TableClause(name, *columns)
Bases: sqlalchemy.sql.expression._Immutable, sqlalchemy.sql.expression.FromClause
Represents a “table” construct.
Note that this represents tables only as another syntactical construct within SQL expressions; it does not provide
schema-level functionality.
__init__(name, *columns)
count(whereclause=None, **params)
return a SELECT COUNT generated against this TableClause.
delete(whereclause=None, **kwargs)
Generate a delete() construct.
insert(values=None, inline=False, **kwargs)
Generate an insert() construct.
update(whereclause=None, values=None, inline=False, **kwargs)
Generate an update() construct.
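For illustration, a lightweight TableClause is produced by the table() and column() functions, with no MetaData or schema-level behavior involved; the names below are arbitrary:
from sqlalchemy.sql import table, column
users = table('users', column('id'), column('name'))
stmt = users.insert().values(name='ed')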
class Update(table, whereclause, values=None, inline=False, bind=None, returning=None, **kwargs)
Bases: sqlalchemy.sql.expression._ValuesBase
Represent an Update construct.
The Update object is created using the update() function.
where(whereclause)
return a new update() construct with the given expression added to its WHERE clause, joined to the existing
clause via AND, if any.
values(*args, **kwargs)
specify the VALUES clause for an INSERT statement, or the SET clause for an UPDATE.
**kwargs key=<somevalue> arguments
*args A single dictionary can be sent as the first positional argument. This allows non-string
based keys, such as Column objects, to be used.
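A sketch of the generative usage, against an assumed users table:
stmt = users.update().where(users.c.id == 5).values(fullname='Ed Jones')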
Generic Functions
SQL functions which are known to SQLAlchemy with regards to database-specific rendering, return types and argu-
ment behavior. Generic functions are invoked like all SQL functions, using the func attribute:
select([func.count()]).select_from(sometable)
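A few further illustrations, assuming a users table:
func.now()                                  # a generic function known to SQLAlchemy
func.coalesce(users.c.name, 'unknown')      # return type taken from the arguments
select([func.max(users.c.id)])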
class AnsiFunction(**kwargs)
Bases: sqlalchemy.sql.functions.GenericFunction
__init__(**kwargs)
class GenericFunction(type_=None, args=(), **kwargs)
Bases: sqlalchemy.sql.expression.Function
__init__(type_=None, args=(), **kwargs)
class ReturnTypeFromArgs(*args, **kwargs)
Bases: sqlalchemy.sql.functions.GenericFunction
Define a function whose return type is the same as its arguments.
__init__(*args, **kwargs)
class char_length(arg, **kwargs)
Bases: sqlalchemy.sql.functions.GenericFunction
__init__(arg, **kwargs)
class coalesce(*args, **kwargs)
Bases: sqlalchemy.sql.functions.ReturnTypeFromArgs
class concat(*args, **kwargs)
Bases: sqlalchemy.sql.functions.GenericFunction
__init__(*args, **kwargs)
class count(expression=None, **kwargs)
Bases: sqlalchemy.sql.functions.GenericFunction
The ANSI COUNT aggregate function. With no arguments, emits COUNT *.
__init__(expression=None, **kwargs)
class current_date(**kwargs)
Bases: sqlalchemy.sql.functions.AnsiFunction
class current_time(**kwargs)
Bases: sqlalchemy.sql.functions.AnsiFunction
class current_timestamp(**kwargs)
Bases: sqlalchemy.sql.functions.AnsiFunction
class current_user(**kwargs)
Bases: sqlalchemy.sql.functions.AnsiFunction
class localtime(**kwargs)
Bases: sqlalchemy.sql.functions.AnsiFunction
class localtimestamp(**kwargs)
Bases: sqlalchemy.sql.functions.AnsiFunction
class max(*args, **kwargs)
Bases: sqlalchemy.sql.functions.ReturnTypeFromArgs
class min(*args, **kwargs)
Bases: sqlalchemy.sql.functions.ReturnTypeFromArgs
class now(type_=None, args=(), **kwargs)
Bases: sqlalchemy.sql.functions.GenericFunction
class random(*args, **kwargs)
Bases: sqlalchemy.sql.functions.GenericFunction
__init__(*args, **kwargs)
class session_user(**kwargs)
Bases: sqlalchemy.sql.functions.AnsiFunction
class sum(*args, **kwargs)
Bases: sqlalchemy.sql.functions.ReturnTypeFromArgs
class sysdate(**kwargs)
Bases: sqlalchemy.sql.functions.AnsiFunction
class user(**kwargs)
Bases: sqlalchemy.sql.functions.AnsiFunction
SQLAlchemy schema definition language. For more usage examples, see Database Meta Data.
# use no arguments
Column('level', Integer)
The type argument may be the second positional argument or specified by keyword.
There is partial support for automatic detection of the type based on that of a
ForeignKey associated with this column, if the type is specified as None. However,
this feature is not fully implemented and may not function in all cases.
• *args – Additional positional arguments include various SchemaItem derived constructs
which will be applied as options to the column. These include instances of Constraint,
ForeignKey, ColumnDefault, and Sequence. In some cases an equivalent key-
word argument is available such as server_default, default and unique.
• autoincrement – This flag may be set to False to indicate an integer primary key col-
umn that should not be considered to be the “autoincrement” column, that is the integer
primary key column which generates values implicitly upon INSERT and whose value is
usually returned via the DBAPI cursor.lastrowid attribute. It defaults to True to satisfy
the common use case of a table with a single integer primary key column. If the table has
a composite primary key consisting of more than one integer column, set this flag to True
only on the column that should be considered “autoincrement”.
The setting only has an effect for columns which are:
– Integer derived (i.e. INT, SMALLINT, BIGINT)
– Part of the primary key
– Are not referenced by any foreign keys
– have no server side or client side defaults (with the exception of Postgresql SERIAL).
The setting has these two effects on columns that meet the above criteria:
– DDL issued for the column will include database-specific keywords intended to signify
this column as an “autoincrement” column, such as AUTO_INCREMENT on MySQL,
SERIAL on Postgresql, and IDENTITY on MS-SQL. It does not issue AUTOINCREMENT
for SQLite since this is a special SQLite flag that is not required for autoincrementing
behavior. See the SQLite dialect documentation for information on SQLite’s
AUTOINCREMENT.
– The column will be considered to be available as cursor.lastrowid or equivalent, for
those dialects which “post fetch” newly inserted identifiers after a row has been inserted
(SQLite, MySQL, MS-SQL). It does not have any effect in this regard for databases that
use sequences to generate primary key identifiers (i.e. Firebird, Postgresql, Oracle).
• default – A scalar, Python callable, or ClauseElement representing the default value
for this column, which will be invoked upon insert if this column is otherwise not specified
in the VALUES clause of the insert. This is a shortcut to using ColumnDefault as a
positional argument.
Contrast this argument to server_default which creates a default generator on the
database side.
• doc – optional String that can be used by the ORM or similar to document attributes. This
attribute does not render SQL comments (a future attribute ‘comment’ will achieve that).
• key – An optional string identifier which will identify this Column object on the Table.
When a key is provided, this is the only identifier referencing the Column within the
application, including ORM attribute mapping; the name field is used only when rendering
SQL.
• index – When True, indicates that the column is indexed. This is a shortcut for using
a Index construct on the table. To specify indexes with explicit names or indexes that
contain multiple columns, use the Index construct instead.
• info – A dictionary which defaults to {}. A space to store application specific data. This
must be a dictionary.
• nullable – If set to the default of True, indicates the column will be rendered as allow-
ing NULL, else it’s rendered as NOT NULL. This parameter is only used when issuing
CREATE TABLE statements.
• onupdate – A scalar, Python callable, or ClauseElement representing a default value
to be applied to the column within UPDATE statements, which will be invoked upon update
if this column is not present in the SET clause of the update. This is a shortcut to using
ColumnDefault as a positional argument with for_update=True.
• primary_key – If True, marks this column as a primary key column. Multiple columns
can have this flag set to specify composite primary keys. As an alternative, the primary
key of a Table can be specified via an explicit PrimaryKeyConstraint object.
• server_default – A FetchedValue instance, str, Unicode or text() construct repre-
senting the DDL DEFAULT value for the column.
String types will be emitted as-is, surrounded by single quotes:
Column('x', Text, server_default="val")
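As a sketch of the contrast between default and server_default (table and column names are assumptions):
from sqlalchemy import Table, Column, Integer, DateTime, MetaData, text
import datetime
metadata = MetaData()
events = Table('events', metadata,
    Column('id', Integer, primary_key=True),
    # evaluated by SQLAlchemy at INSERT time if no value is supplied
    Column('created_at', DateTime, default=datetime.datetime.now),
    # rendered into the CREATE TABLE DDL and applied by the database
    Column('updated_at', DateTime, server_default=text('CURRENT_TIMESTAMP')),
)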
metadata = MetaData()
# define tables
Table('mytable', metadata, ...)
# connect to an engine later, perhaps after loading a URL from a
# configuration file
metadata.bind = an_engine
MetaData is a thread-safe object after tables have been explicitly defined or loaded via reflection.
__init__(bind=None, reflect=False)
Create a new MetaData object.
bind An Engine or Connection to bind to. May also be a string or URL instance, these are passed to
create_engine() and this MetaData will be bound to the resulting engine.
reflect Optional, automatically load all tables from the bound database. Defaults to False. bind is
required when this option is set. For finer control over loaded tables, use the reflect method of
MetaData.
append_ddl_listener(event, listener)
Append a DDL event listener to this MetaData.
The listener callable will be triggered when this MetaData is involved in DDL creates or drops, and
will be invoked either before all Table-related actions or after.
Arguments are:
If a callable is provided, it will be used as a boolean predicate to filter the list of potential table names.
The callable is called with a table name and this MetaData instance as positional arguments and
should return a true value for any table to reflect.
remove(table)
Remove the given Table object from this MetaData.
sorted_tables
Returns a list of Table objects sorted in order of dependency.
class Table(*args, **kw)
Bases: sqlalchemy.schema.SchemaItem, sqlalchemy.sql.expression.TableClause
Represent a table in a database.
e.g.:
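A representative construction, with illustrative names, would be:
from sqlalchemy import Table, Column, Integer, String, MetaData
metadata = MetaData()
mytable = Table('mytable', metadata,
    Column('mytable_id', Integer, primary_key=True),
    Column('value', String(50)),
)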
The Table object constructs a unique instance of itself based on its name within the given MetaData object.
Constructor arguments are as follows:
Parameters
• name – The name of this table as represented in the database.
This property, along with the schema, indicates the singleton identity of this table in relation
to its parent MetaData. Additional calls to Table with the same name, metadata, and
schema name will return the same Table object.
Names which contain no upper case characters will be treated as case insensitive names, and
will not be quoted unless they are a reserved word. Names with any number of upper case
characters will be quoted and sent exactly. Note that this behavior applies even for databases
which standardize upper case names as case insensitive such as Oracle.
• metadata – a MetaData object which will contain this table. The metadata is used as a
point of association of this table with other tables which are referenced via foreign key. It
also may be used to associate this table with a particular Connectable.
• *args – Additional positional arguments are used primarily to add the list of Column objects
contained within this table. Similar to the style of a CREATE TABLE statement, other
SchemaItem constructs may be added here, including PrimaryKeyConstraint, and
ForeignKeyConstraint.
• autoload – Defaults to False: the Columns for this table should be reflected from the
database. Usually there will be no Column objects in the constructor if this property is
set.
• autoload_with – If autoload==True, this is an optional Engine or Connection instance to be
used for the table reflection. If None, the underlying MetaData’s bound connectable will be
used.
• implicit_returning – True by default - indicates that RETURNING can be used by default
to fetch newly inserted primary key values, for backends which support this. Note that
create_engine() also provides an implicit_returning flag.
• include_columns – A list of strings indicating a subset of columns to be loaded via the
autoload operation; table columns that aren’t present in this list will not be represented
on the resulting Table object. Defaults to None which indicates all columns should be
reflected.
• info – A dictionary which defaults to {}. A space to store application specific data. This
must be a dictionary.
• mustexist – When True, indicates that this Table must already be present in the given
MetaData collection.
• prefixes – A list of strings to insert after CREATE in the CREATE TABLE statement. They
will be separated by spaces.
• quote – Force quoting of this table’s name on or off, corresponding to True or False.
When left at its default of None, the column identifier will be quoted according to whether
the name is case sensitive (identifiers with at least one upper case character are treated as
case sensitive), or if it’s a reserved word. This flag is only needed to force quoting of a
reserved word which is not known by the SQLAlchemy dialect.
• quote_schema – same as ‘quote’ but applies to the schema identifier.
• schema – The schema name for this table, which is required if the table resides in a schema
other than the default selected schema for the engine’s database connection. Defaults to
None.
• useexisting – When True, indicates that if this Table is already present in the given
MetaData, apply further arguments within the constructor to the existing Table. If this
flag is not set, an error is raised when the parameters of an existing Table are overwritten.
__init__(*args, **kw)
add_is_dependent_on(table)
Add a ‘dependency’ for this Table.
This is another Table object which must be created first before this one can, or dropped after this one.
Usually, dependencies between tables are determined via ForeignKey objects. However, for other sit-
uations that create dependencies outside of foreign keys (rules, inheriting), this method can manually
establish such a link.
append_column(column)
Append a Column to this Table.
append_constraint(constraint)
Append a Constraint to this Table.
append_ddl_listener(event, listener)
Append a DDL event listener to this Table.
The listener callable will be triggered when this Table is created or dropped, either directly before
or after the DDL is issued to the database. The listener may modify the Table, but may not abort the event
itself.
Arguments are:
event One of Table.ddl_events; e.g. ‘before-create’, ‘after-create’, ‘before-drop’ or ‘after-drop’.
listener A callable, invoked with three positional arguments:
event The event currently being handled
target The Table object being created or dropped
bind The Connection being used for DDL execution.
Listeners are added to the Table’s ddl_listeners attribute.
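A sketch of a listener callable; the names are illustrative:
def after_create_listener(event, target, bind):
    # event is e.g. 'after-create', target is the Table, bind the Connection
    print("%s issued for table %s" % (event, target.name))
mytable.append_ddl_listener('after-create', after_create_listener)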
bind
Return the connectable associated with this Table.
create(bind=None, checkfirst=False)
Issue a CREATE statement for this table.
See also metadata.create_all().
drop(bind=None, checkfirst=False)
Issue a DROP statement for this table.
See also metadata.drop_all().
exists(bind=None)
Return True if this table exists.
get_children(column_collections=True, schema_visitor=False, **kwargs)
key
primary_key
tometadata(metadata, schema=<symbol 'retain_schema'>)
Return a copy of this Table associated with a different MetaData.
class ThreadLocalMetaData()
Bases: sqlalchemy.schema.MetaData
A MetaData variant that presents a different bind in every thread.
Makes the bind property of the MetaData a thread-local value, allowing this collection of tables to be bound
to different Engine implementations or connections in each thread.
The ThreadLocalMetaData starts off bound to None in each thread. Binds must be made explicitly by assigning
to the bind property or using connect(). You can also re-bind dynamically multiple times per thread, just
like a regular MetaData.
__init__()
Construct a ThreadLocalMetaData.
bind
The bound Engine or Connection for this thread.
This property may be assigned an Engine or Connection, or assigned a string or URL to automatically
create a basic Engine for this bind with create_engine().
dispose()
Dispose all bound engines, in all thread contexts.
is_bound()
True if there is a bind for this thread.
Constraints
t = Table("remote_table", metadata,
Column("remote_id", ForeignKey("main_table.id"))
)
Note that ForeignKey is only a marker object that defines a dependency between two columns. The actual
constraint is in all cases represented by the ForeignKeyConstraint object. This object will be generated
automatically when a ForeignKey is associated with a Column which in turn is associated with a Table.
Conversely, when ForeignKeyConstraint is applied to a Table, ForeignKey markers are automati-
cally generated to be present on each associated Column, which are also associated with the constraint object.
Note that you cannot define a “composite” foreign key constraint, that is a constraint between a
grouping of multiple parent/child columns, using ForeignKey objects. To define this grouping, the
ForeignKeyConstraint object must be used, and applied to the Table. The associated ForeignKey
objects are created automatically.
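A sketch of a composite constraint, with assumed table and column names:
Table('invoice_item', metadata,
    Column('item_id', Integer, primary_key=True),
    Column('invoice_id', Integer),
    Column('ref_num', Integer),
    ForeignKeyConstraint(
        ['invoice_id', 'ref_num'],
        ['invoice.invoice_id', 'invoice.ref_num']))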
The ForeignKey objects associated with an individual Column object are available in the foreign_keys col-
lection of that column.
Further examples of foreign key configuration are in Defining Foreign Keys.
__init__(column, _constraint=None, use_alter=False, name=None, onupdate=None, ondelete=None, deferrable=None, initially=None, link_to_name=False)
Construct a column-level FOREIGN KEY.
The ForeignKey object when constructed generates a ForeignKeyConstraint which is associated
with the parent Table object’s collection of constraints.
Parameters
• column – A single target column for the key relationship. A Column object or a column
name as a string: tablename.columnkey or schema.tablename.columnkey.
columnkey is the key which has been assigned to the column (defaults to the column
name itself), unless link_to_name is True in which case the rendered name of the
column is used.
• name – Optional string. An in-database name for the key if constraint is not provided.
• onupdate – Optional string. If set, emit ON UPDATE <value> when issuing DDL for this
constraint. Typical values include CASCADE, DELETE and RESTRICT.
• ondelete – Optional string. If set, emit ON DELETE <value> when issuing DDL for this
constraint. Typical values include CASCADE, DELETE and RESTRICT.
• deferrable – Optional bool. If set, emit DEFERRABLE or NOT DEFERRABLE when
issuing DDL for this constraint.
• initially – Optional string. If set, emit INITIALLY <value> when issuing DDL for this
constraint.
• link_to_name – if True, the string name given in column is the rendered name of the
referenced column, not its locally assigned key.
• use_alter – passed to the underlying ForeignKeyConstraint to indicate the con-
straint should be generated/dropped externally from the CREATE TABLE/ DROP TABLE
statement. See that class’s constructor for details.
copy(schema=None)
Produce a copy of this ForeignKey object.
get_referent(table)
Return the column in the given table referenced by this ForeignKey.
Returns None if this ForeignKey does not reference the given table.
references(table)
Return True if the given table is referenced by this ForeignKey.
target_fullname
class ForeignKeyConstraint(columns, refcolumns, name=None, onupdate=None, ondelete=None, deferrable=None, initially=None, use_alter=False, link_to_name=False, table=None)
Bases: sqlalchemy.schema.Constraint
A table-level FOREIGN KEY constraint.
Defines a single column or composite FOREIGN KEY ... REFERENCES constraint. For a no-frills, single
column foreign key, adding a ForeignKey to the definition of a Column is a shorthand equivalent for an
unnamed, single column ForeignKeyConstraint.
Examples of foreign key configuration are in Defining Foreign Keys.
__init__(columns, refcolumns, name=None, onupdate=None, ondelete=None, deferrable=None, initially=None,
use_alter=False, link_to_name=False, table=None)
Construct a composite-capable FOREIGN KEY.
Parameters
• columns – A sequence of local column names. The named columns must be defined and
present in the parent Table. The names should match the key given to each column
(defaults to the name) unless link_to_name is True.
• refcolumns – A sequence of foreign column names or Column objects. The columns must
all be located within the same Table.
• name – Optional, the in-database name of the key.
• onupdate – Optional string. If set, emit ON UPDATE <value> when issuing DDL for this
constraint. Typical values include CASCADE, DELETE and RESTRICT.
• ondelete – Optional string. If set, emit ON DELETE <value> when issuing DDL for this
constraint. Typical values include CASCADE, DELETE and RESTRICT.
• deferrable – Optional bool. If set, emit DEFERRABLE or NOT DEFERRABLE when
issuing DDL for this constraint.
• initially – Optional string. If set, emit INITIALLY <value> when issuing DDL for this
constraint.
• link_to_name – if True, the string name given in column is the rendered name of the
referenced column, not its locally assigned key.
• use_alter – If True, do not emit the DDL for this constraint as part of the CREATE TABLE
definition. Instead, generate it via an ALTER TABLE statement issued after the full col-
lection of tables have been created, and drop it via an ALTER TABLE statement before the
full collection of tables are dropped. This is shorthand for the usage of AddConstraint
and DropConstraint applied as “after-create” and “before-drop” events on the Meta-
Data object. This is normally used to generate/drop constraints on objects that are mutually
dependent on each other.
columns
copy(**kw)
elements
class Index(name, *columns, **kwargs)
Bases: sqlalchemy.schema.SchemaItem
A table-level INDEX.
Defines a composite (one or more column) INDEX. For a no-frills, single column index, adding index=True
to the Column definition is a shorthand equivalent for an unnamed, single column Index.
__init__(name, *columns, **kwargs)
Construct an index object.
Arguments are:
name The name of the index
*columns Columns to include in the index. All columns must belong to the same table.
**kwargs Keyword arguments include:
unique Defaults to False: create a unique index.
postgresql_where Defaults to None: create a partial index when using PostgreSQL
bind
Return the connectable associated with this Index.
create(bind=None)
drop(bind=None)
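For illustration, a composite unique index over two assumed columns:
Index('ix_user_name_email', user_table.c.name, user_table.c.email, unique=True)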
class PrimaryKeyConstraint(*columns, **kw)
Bases: sqlalchemy.schema.ColumnCollectionConstraint
A table-level PRIMARY KEY constraint.
Defines a single column or composite PRIMARY KEY constraint. For a no-frills primary key, adding
primary_key=True to one or more Column definitions is a shorthand equivalent for an unnamed single- or
multiple-column PrimaryKeyConstraint.
class UniqueConstraint(*columns, **kw)
Bases: sqlalchemy.schema.ColumnCollectionConstraint
A table-level UNIQUE constraint.
Defines a single column or composite UNIQUE constraint. For a no-frills, single column constraint, adding
unique=True to the Column definition is a shorthand equivalent for an unnamed, single column Unique-
Constraint.
__init__(arg, **kwargs)
class DefaultClause(arg, for_update=False)
Bases: sqlalchemy.schema.FetchedValue
A DDL-specified DEFAULT column value.
class DefaultGenerator(for_update=False)
Bases: sqlalchemy.schema.SchemaItem
Base class for column default values.
__init__(for_update=False)
bind
Return the connectable associated with this default.
execute(bind=None, **kwargs)
class FetchedValue(for_update=False)
Bases: object
A default that takes effect on the database side.
__init__(for_update=False)
class PassiveDefault(*arg, **kw)
Bases: sqlalchemy.schema.DefaultClause
__init__(*arg, **kw)
class Sequence(name, start=None, increment=None, schema=None, optional=False, quote=None, metadata=None, for_update=False)
Bases: sqlalchemy.schema.DefaultGenerator
Represents a named database sequence.
__init__(name, start=None, increment=None, schema=None, optional=False, quote=None, metadata=None,
for_update=False)
bind
create(bind=None, checkfirst=True)
Creates this sequence in the database.
drop(bind=None, checkfirst=True)
Drops this sequence from the database.
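A typical usage sketch, attaching a Sequence to a primary key column (names assumed):
Table('cartitems', metadata,
    Column('cart_id', Integer, Sequence('cart_id_seq'), primary_key=True),
    Column('description', String(40)))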
DDL Generation
class DDLElement()
Bases: sqlalchemy.sql.expression.Executable, sqlalchemy.sql.expression.ClauseElement
Base class for DDL expression constructs.
against(target)
Return a copy of this DDL against a specific schema item.
bind
execute(bind=None, target=None)
Execute this DDL immediately.
Executes the DDL statement in isolation using the supplied Connectable or Connectable assigned
to the .bind property, if not supplied. If the DDL has a conditional on criteria, it will be invoked with
None as the event.
bind Optional, an Engine or Connection. If not supplied, a valid Connectable must be present
in the .bind property.
target Optional, defaults to None. The target SchemaItem for the execute call. Will be passed to the on
callable if any, and may also provide string expansion data for the statement. See execute_at for
more information.
execute_at(event, target)
Link execution of this DDL to the DDL lifecycle of a SchemaItem.
Links this DDLElement to a Table or MetaData instance, executing it when that schema item is
created or dropped. The DDL statement will be executed using the same Connection and transactional
context as the Table create/drop itself. The .bind property of this statement is ignored.
event One of the events defined in the schema item’s .ddl_events; e.g. ‘before-create’, ‘after-create’,
‘before-drop’ or ‘after-drop’
target The Table or MetaData instance for which this DDLElement will be associated with.
A DDLElement instance can be linked to any number of schema items.
execute_at builds on the append_ddl_listener interface of MetaData and Table objects.
Caveat: Creating or dropping a Table in isolation will also trigger any DDL set to execute_at that
Table’s MetaData. This may change in a future release.
class DDL(statement, on=None, context=None, bind=None)
Bases: sqlalchemy.schema.DDLElement
A literal DDL statement.
Specifies literal SQL DDL to be executed by the database. DDL objects can be attached to Tables or
MetaData instances, conditionally executing SQL as part of the DDL lifecycle of those schema items. Basic
templating support allows a single DDL instance to handle repetitive tasks for multiple tables.
Examples:
When operating on Table events, the following statement string substitutions are available:
The DDL’s context, if any, will be combined with the standard substitutions noted above. Keys present in the
context will override the standard substitutions.
__init__(statement, on=None, context=None, bind=None)
Create a DDL statement.
statement A string or unicode string to be executed. Statements will be processed with Python’s string
formatting operator. See the context argument and the execute_at method.
A literal ‘%’ in a statement must be escaped as ‘%%’.
SQL bind parameters are not available in DDL statements.
on Optional filtering criteria. May be a string, tuple or a callable predicate. If a string, it will be compared
to the name of the executing database dialect:
DDL('something', on='postgresql')
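An illustrative statement attached to a table’s create event via execute_at(); the table and constraint names are assumptions:
DDL("ALTER TABLE users ADD CONSTRAINT cst_user_name_length "
    "CHECK (length(user_name) >= 8)").execute_at('after-create', users)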
Internals
class SchemaItem()
Bases: sqlalchemy.sql.visitors.Visitable
Base class for items that define a database schema.
get_children(**kwargs)
used to allow SchemaVisitor access
class SchemaVisitor()
Bases: sqlalchemy.sql.visitors.ClauseVisitor
Define the visiting for SchemaItem objects.
SQLAlchemy provides rich schema introspection capabilities. The most common methods for this include the “au-
toload” argument of Table:
Further examples of reflection using Table and MetaData can be found at Reflecting Tables.
There is also a low-level inspection interface available for more specific operations, known as the Inspector:
class Inspector(bind)
Bases: object
Performs database schema inspection.
The Inspector acts as a proxy to the reflection methods of the Dialect, providing a consistent interface as well
as caching support for previously fetched metadata.
The preferred method to construct an Inspector is via the Inspector.from_engine() method. I.e.:
engine = create_engine('...')
insp = Inspector.from_engine(engine)
Where above, the Dialect may opt to return an Inspector subclass that provides additional methods
specific to the dialect’s target database.
__init__(bind)
Initialize a new Inspector.
Parameter bind – a Connectable, which is typically an instance of Engine or
Connection.
For a dialect-specific instance of Inspector, see Inspector.from_engine()
default_schema_name
Return the default schema name presented by the dialect for the current engine’s database user.
E.g. this is typically public for Postgresql and dbo for SQL Server.
class from_engine(bind)
Construct a new dialect-specific Inspector object from the given engine or connection.
Parameter bind – a Connectable, which is typically an instance of Engine or
Connection.
This method differs from a direct constructor call of Inspector in that the Dialect is given a
chance to provide a dialect-specific Inspector instance, which may provide additional methods.
See the example at Inspector.
get_columns(table_name, schema=None, **kw)
Return information about columns in table_name.
Given a string table_name and an optional string schema, return column information as a list of dicts with
these keys:
name the column’s name
type TypeEngine
nullable boolean
default the column’s default value
attrs dict containing optional column attributes
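A sketch iterating over the documented keys; the engine and table name are assumptions:
insp = Inspector.from_engine(engine)
for col in insp.get_columns('user'):
    print("%s %s nullable=%s" % (col['name'], col['type'], col['nullable']))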
get_foreign_keys(table_name, schema=None, **kw)
Return information about foreign_keys in table_name.
Given a string table_name, and an optional string schema, return foreign key information as a list of dicts
with these keys:
constrained_columns a list of column names that make up the foreign key
referred_schema the name of the referred schema
referred_table the name of the referred table
referred_columns a list of column names in the referred table that correspond to constrained_columns
name optional name of the foreign key constraint.
**kw other options passed to the dialect’s get_foreign_keys() method.
get_indexes(table_name, schema=None, **kw)
Return information about indexes in table_name.
Given a string table_name and an optional string schema, return index information as a list of dicts with
these keys:
name the index’s name
column_names list of column names in order
unique boolean
**kw other options passed to the dialect’s get_indexes() method.
get_pk_constraint(table_name, schema=None, **kw)
Return information about primary key constraint on table_name.
Given a string table_name, and an optional string schema, return primary key information as a dictionary
with these keys:
constrained_columns a list of column names that make up the primary key
engine = create_engine('...')
meta = MetaData()
user_table = Table('user', meta)
insp = Inspector.from_engine(engine)
insp.reflecttable(user_table, None)
Parameters
• table – a Table instance.
• include_columns – a list of string column names to include in the reflection process. If
None, all columns are reflected.
SQLAlchemy provides abstractions for most common database data types, and a mechanism for specifying your own
custom data types.
The methods and attributes of type objects are rarely used directly. Type objects are supplied to Table definitions
and can be supplied as type hints to functions for occasions where the database driver returns an incorrect type.
SQLAlchemy will use the Integer and String(32) type information when issuing a CREATE TABLE state-
ment and will use it again when reading back rows SELECTed from the database. Functions that accept a type
(such as Column()) will typically accept a type class or instance; Integer is equivalent to Integer() with no
construction arguments in this case.
Generic Types
Generic types specify a column that can read, write and store a particular type of Python data. SQLAlchemy will
choose the best database column type available on the target database when issuing a CREATE TABLE statement. For
complete control over which column type is emitted in CREATE TABLE, such as VARCHAR see SQL Standard Types
and the other sections of this chapter.
class Boolean(create_constraint=True, name=None)
Bases: sqlalchemy.types.TypeEngine, sqlalchemy.types.SchemaType
A bool datatype.
Boolean typically uses BOOLEAN or SMALLINT on the DDL side, and on the Python side deals in True or
False.
__init__(create_constraint=True, name=None)
Construct a Boolean.
Parameters
• create_constraint – defaults to True. If the boolean is generated as an int/smallint, also
create a CHECK constraint on the table that ensures 1 or 0 as a value.
• name – if a CHECK constraint is generated, specify the name of the constraint.
class Date(*args, **kwargs)
Bases: sqlalchemy.types._DateAffinity, sqlalchemy.types.TypeEngine
A type for datetime.date() objects.
class DateTime(timezone=False)
Bases: sqlalchemy.types._DateAffinity, sqlalchemy.types.TypeEngine
A type for datetime.datetime() objects.
Date and time types return objects from the Python datetime module. Most DBAPIs have built in support for
the datetime module, with the noted exception of SQLite. In the case of SQLite, date and time types are stored
as strings which are then converted back to datetime objects when rows are returned.
__init__(timezone=False)
class Enum(*enums, **kw)
Bases: sqlalchemy.types.String, sqlalchemy.types.SchemaType
Generic Enum Type.
The Enum type provides a set of possible string values which the column is constrained towards.
By default, uses the backend’s native ENUM type if available, else uses VARCHAR + a CHECK constraint.
__init__(*enums, **kw)
Construct an enum.
Keyword arguments which don’t apply to a specific backend are ignored by that backend.
Parameters
• *enums – string or unicode enumeration labels. If unicode labels are present, the con-
vert_unicode flag is auto-enabled.
• convert_unicode – Enable unicode-aware bind parameter and result-set processing for this
Enum’s data. This is set automatically based on the presence of unicode label strings.
• metadata – Associate this type directly with a MetaData object. For types that exist
on the target database as an independent schema construct (Postgresql), this type will be
created and dropped within create_all() and drop_all() operations. If the type
is not associated with any MetaData object, it will associate itself with each Table in
which it is used, and will be created when any of those individual tables are created, after
a check is performed for its existence. The type is only dropped when drop_all() is
called for that Table object’s metadata, however.
• name – The name of this type. This is required for Postgresql and any future supported
database which requires an explicitly named type, or an explicitly named constraint in
order to generate the type and/or a table that uses it.
• native_enum – Use the database’s native ENUM type when available. Defaults to True.
When False, uses VARCHAR + check constraint for all backends.
• schema – Schemaname of this type. For types that exist on the target database as an
independent schema construct (Postgresql), this parameter specifies the named schema in
which the type is present.
• quote – Force quoting to be on or off on the type’s name. If left as the default of None,
the usual schema-level “case sensitive”/”reserved name” rules are used to determine if this
type’s name should be quoted.
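An illustrative column definition; the label values and the type name status_types are assumptions:
Column('status', Enum('pending', 'active', 'closed', name='status_types'))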
class Float(precision=None, asdecimal=False, **kwargs)
Bases: sqlalchemy.types.Numeric
A type for float numbers.
Returns Python float objects by default, applying conversion as needed.
__init__(precision=None, asdecimal=False, **kwargs)
Construct a Float.
Parameters
• precision – the numeric precision for use in DDL CREATE TABLE.
• asdecimal – the same flag as that of Numeric, but defaults to False. Note that setting
this flag to True results in floating point conversion.
class Integer(*args, **kwargs)
Bases: sqlalchemy.types._DateAffinity, sqlalchemy.types.TypeEngine
A type for int integers.
class Interval(native=True, second_precision=None, day_precision=None)
Bases: sqlalchemy.types._DateAffinity, sqlalchemy.types.TypeDecorator
A type for datetime.timedelta() objects.
The Interval type deals with datetime.timedelta objects. In PostgreSQL, the native INTERVAL type is
used; for others, the value is stored as a date which is relative to the “epoch” (Jan. 1, 1970).
Note that the Interval type does not currently provide date arithmetic operations on platforms which do not
support interval types natively. Such operations usually require transformation of both sides of the expression
(such as, conversion of both sides into integer epoch values first) which currently is a manual procedure (such
as via func).
__init__(native=True, second_precision=None, day_precision=None)
Construct an Interval object.
Parameters
• native – when True, use the actual INTERVAL type provided by the database, if supported
(currently Postgresql, Oracle). Otherwise, represent the interval data as an epoch value
regardless.
• second_precision – For native interval types which support a “fractional seconds preci-
sion” parameter, i.e. Oracle and Postgresql
• day_precision – for native interval types which support a “day precision” parameter, i.e.
Oracle.
impl
alias of DateTime
class LargeBinary(length=None)
Bases: sqlalchemy.types._Binary
A type for large binary byte data.
The Binary type generates BLOB or BYTEA when tables are created, and also converts incoming values using
the Binary callable provided by each DB-API.
__init__(length=None)
Construct a LargeBinary type.
Parameter length – optional, a length for the column for use in DDL statements, for those BLOB
types that accept a length (i.e. MySQL). It does not produce a small BINARY/VARBINARY
type - use the BINARY/VARBINARY types specifically for those. May be safely omitted if
no CREATE TABLE will be issued. Certain databases may require a length for use in DDL,
and will raise an exception when the CREATE TABLE DDL is issued.
class Numeric(precision=None, scale=None, asdecimal=True)
Bases: sqlalchemy.types._DateAffinity, sqlalchemy.types.TypeEngine
A type for fixed precision numbers.
Typically generates DECIMAL or NUMERIC. Returns decimal.Decimal objects by default, applying con-
version as needed.
__init__(precision=None, scale=None, asdecimal=True)
Construct a Numeric.
Parameters
• precision – the numeric precision for use in DDL CREATE TABLE.
• scale – the numeric scale for use in DDL CREATE TABLE.
• asdecimal – default True. Return whether or not values should be sent as Python Decimal
objects, or as floats. Different DBAPIs send one or the other based on datatypes - the Nu-
meric type will ensure that return values are one or the other across DBAPIs consistently.
When using the Numeric type, care should be taken to ensure that the asdecimal setting is appropriate
for the DBAPI in use - when Numeric applies a conversion from Decimal->float or float->Decimal, this
conversion incurs an additional performance overhead for all result columns received.
DBAPIs that return Decimal natively (e.g. psycopg2) will have better accuracy and higher performance
with a setting of True, as the native translation to Decimal reduces the amount of floating-point issues at
play, and the Numeric type itself doesn’t need to apply any further conversions. However, another DBAPI
which returns floats natively will incur an additional conversion overhead, and is still subject to floating
point data loss - in which case asdecimal=False will at least remove the extra conversion overhead.
class PickleType(protocol=2, pickler=None, mutable=True, comparator=None)
Bases: sqlalchemy.types.MutableType, sqlalchemy.types.TypeDecorator
Holds Python objects, which are serialized using pickle.
PickleType builds upon the Binary type to apply Python’s pickle.dumps() to incoming objects, and
pickle.loads() on the way out, allowing any pickleable Python object to be stored as a serialized binary
field.
Note: be sure to read the notes for MutableType regarding ORM performance implications.
in result rows. This may require SQLAlchemy to explicitly coerce incoming Python uni-
codes into an encoding, and from an encoding back to Unicode, or it may not require any
interaction from SQLAlchemy at all, depending on the DBAPI in use.
When SQLAlchemy performs the encoding/decoding, the encoding used is configured via
encoding, which defaults to utf-8.
The “convert_unicode” behavior can also be turned on for all String types by setting
sqlalchemy.engine.base.Dialect.convert_unicode on create_engine().
To instruct SQLAlchemy to perform Unicode encoding/decoding even on a platform that
already handles Unicode natively, set convert_unicode='force'. This will incur significant
performance overhead when fetching unicode result columns.
• assert_unicode – Deprecated. A warning is raised in all cases when a non-Unicode object
is passed when SQLAlchemy would coerce into an encoding (note: but not when the
DBAPI handles unicode objects natively). To suppress or raise this warning to an error, use
the Python warnings filter documented at: http://docs.python.org/library/warnings.html
• unicode_error – Optional, a method to use to handle Unicode conversion errors. Behaves
like the ‘errors’ keyword argument to the standard library’s string.decode() functions. This
flag requires that convert_unicode is set to “force” - otherwise, SQLAlchemy is not guar-
anteed to handle the task of unicode conversion. Note that this flag adds significant per-
formance overhead to row-fetching operations for backends that already return unicode
objects natively (which most DBAPIs do). This flag should only be used as an absolute
last resort for reading strings from a column with varied or corrupted encodings, which
only applies to databases that accept invalid encodings in the first place (i.e. MySQL, not
PG, SQLite, etc.)
class Text(length=None, convert_unicode=False, assert_unicode=None, unicode_error=None,
_warn_on_bytestring=False)
Bases: sqlalchemy.types.String
A variably sized string type.
In SQL, usually corresponds to CLOB or TEXT. Can also take Python unicode objects and encode to the
database’s encoding in bind params (and the reverse for result sets.)
class Time(timezone=False)
Bases: sqlalchemy.types._DateAffinity, sqlalchemy.types.TypeEngine
A type for datetime.time() objects.
__init__(timezone=False)
class Unicode(length=None, **kwargs)
Bases: sqlalchemy.types.String
A variable length Unicode string.
The Unicode type is a String which converts Python unicode objects (i.e., strings that are defined as
u’somevalue’) into encoded bytestrings when passing the value to the database driver, and similarly decodes
values from the database back into Python unicode objects.
It’s roughly equivalent to using a String object with convert_unicode=True, however the type has
other significance in that it implies the use of a unicode-capable type on the backend, such as
NVARCHAR. This may affect what type is emitted when issuing CREATE TABLE and also may affect some
DBAPI-specific details, such as type information passed along to setinputsizes().
When using the Unicode type, it is only appropriate to pass Python unicode objects, and not plain str. If
a bytestring (str) is passed, a runtime warning is issued. If you notice your application raising these warnings
but you’re not sure where, the Python warnings filter can be used to turn these warnings into exceptions which
will illustrate a stack trace:
import warnings
warnings.simplefilter('error')
Bytestrings sent to and received from the database are encoded using the dialect’s encoding, which defaults
to utf-8.
__init__(length=None, **kwargs)
Create a Unicode-converting String type.
Parameters
• length – optional, a length for the column for use in DDL statements. May be safely
omitted if no CREATE TABLE will be issued. Certain databases may require a length
for use in DDL, and will raise an exception when the CREATE TABLE DDL is issued.
Whether the value is interpreted as bytes or characters is database specific.
• **kwargs – passed through to the underlying String type.
class UnicodeText(length=None, **kwargs)
Bases: sqlalchemy.types.Text
An unbounded-length Unicode string.
See Unicode for details on the unicode behavior of this object.
Like Unicode, usage of the UnicodeText type implies a unicode-capable type being used on the backend,
such as NCLOB.
__init__(length=None, **kwargs)
Create a Unicode-converting Text type.
Parameter length – optional, a length for the column for use in DDL statements. May be safely
omitted if no CREATE TABLE will be issued. Certain databases may require a length for
use in DDL, and will raise an exception when the CREATE TABLE DDL is issued. Whether
the value is interpreted as bytes or characters is database specific.
The SQL standard types always create database column types of the same name when CREATE TABLE is issued.
Some types may not be supported on all databases.
class BINARY(length=None)
Bases: sqlalchemy.types._Binary
The SQL BINARY type.
class BLOB(length=None)
Bases: sqlalchemy.types.LargeBinary
The SQL BLOB type.
class BOOLEAN(create_constraint=True, name=None)
Bases: sqlalchemy.types.Boolean
The SQL BOOLEAN type.
class CHAR(length=None, convert_unicode=False, assert_unicode=None, unicode_error=None,
_warn_on_bytestring=False)
Bases: sqlalchemy.types.String
The SQL CHAR type.
class CLOB(length=None, convert_unicode=False, assert_unicode=None, unicode_error=None,
_warn_on_bytestring=False)
Bases: sqlalchemy.types.Text
The CLOB type.
This type is found in Oracle and Informix.
Vendor-Specific Types
Database-specific types are also available for import from each database’s dialect module. See the sqlalchemy.dialects
reference for the database you’re interested in.
For example, MySQL has a BIGINTEGER type and PostgreSQL has an INET type. To use these, import them from
the module explicitly:
Each dialect provides the full set of typenames supported by that backend within its __all__ collection, so that a simple
import * or similar will import all supported types as implemented for that backend:
t = Table('mytable', metadata,
    Column('id', INTEGER, primary_key=True),
    Column('name', VARCHAR(300)),
    Column('inetaddr', INET)
)
Where above, the INTEGER and VARCHAR types are ultimately from sqlalchemy.types, and INET is specific to the
Postgresql dialect.
Some dialect level types have the same name as the SQL standard type, but also provide additional arguments. For
example, MySQL implements the full range of character and string types including additional arguments such as
collation and charset:
Custom Types
User-defined types may be created to match special capabilities of a particular database or simply for implementing
custom processing logic in Python.
The simplest method is implementing a TypeDecorator, a helper class that makes it easy to augment the bind
parameter and result processing capabilities of one of the built in types.
To build a type object from scratch, subclass UserDefinedType.
class TypeDecorator(*args, **kwargs)
Bases: sqlalchemy.types.AbstractType
Allows the creation of types which add additional functionality to an existing type.
This method is preferred to direct subclassing of SQLAlchemy’s built-in types as it ensures that all required
functionality of the underlying type is kept in place.
Typical usage:
class MyType(types.TypeDecorator):
    '''Prefixes Unicode values with "PREFIX:" on the way in and
    strips it off on the way out.
    '''
    impl = types.Unicode
    def copy(self):
        return MyType(self.impl.length)
The class-level “impl” variable is required, and can reference any TypeEngine class. Alternatively, the
load_dialect_impl() method can be used to provide different type classes based on the dialect given; in this
case, the “impl” variable can reference TypeEngine as a placeholder.
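A fuller sketch of the MyType example, using the process_bind_param() and process_result_value() hooks listed further below to perform the prefixing described in its docstring:
class MyType(types.TypeDecorator):
    impl = types.Unicode
    def process_bind_param(self, value, dialect):
        # applied to the value on the way in to the database
        return "PREFIX:" + value
    def process_result_value(self, value, dialect):
        # applied to the value on the way back out
        return value[len("PREFIX:"):]
    def copy(self):
        return MyType(self.impl.length)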
Types that receive a Python type that isn’t similar to the ultimate type used may want to define the
TypeDecorator.coerce_compared_value() method. This is used to give the expression system a
hint when coercing Python objects into bind parameters within expressions. Consider this expression:
Above, if “somecol” is an Integer variant, it makes sense that we’re doing date arithmetic, where above is
usually interpreted by databases as adding a number of days to the given date. The expression system does the
right thing by not attempting to coerce the “date()” value into an integer-oriented bind parameter.
However, in the case of TypeDecorator, we are usually changing an incoming Python type to something
new - TypeDecorator by default will “coerce” the non-typed side to be the same type as itself. Such as
below, we define an “epoch” type that stores a date value as an integer:
class MyEpochType(types.TypeDecorator):
    impl = types.Integer
    epoch = datetime.date(1970, 1, 1)
Our expression of somecol + date with the above type will coerce the “date” on the right side to also be
treated as MyEpochType.
This behavior can be overridden via the coerce_compared_value() method, which returns a type that
should be used for the value of the expression. Below we set it such that an integer value will be treated as an
Integer, and any other value is assumed to be a date and will be treated as a MyEpochType:
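A sketch of that override, building on the MyEpochType class above:
class MyEpochType(types.TypeDecorator):
    impl = types.Integer
    epoch = datetime.date(1970, 1, 1)
    def coerce_compared_value(self, op, value):
        # integers compare as plain Integer; anything else is treated as this type
        if isinstance(value, int):
            return types.Integer()
        else:
            return self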
__init__(*args, **kwargs)
adapt(cls)
bind_processor(dialect)
coerce_compared_value(op, value)
Suggest a type for a ‘coerced’ Python value in an expression.
By default, returns self. This method is called by the expression system when an object using this type is on
the left or right side of an expression against a plain Python object which does not yet have a SQLAlchemy
type assigned:
expr = table.c.somecolumn + 35
Where above, if somecolumn uses this type, this method will be called with the value operator.add
and 35. The return value is whatever SQLAlchemy type should be used for 35 for this particular operation.
compare_values(x, y)
compile(dialect)
copy()
copy_value(value)
dialect_impl(dialect)
get_dbapi_type(dbapi)
is_mutable()
load_dialect_impl(dialect)
Loads the dialect-specific implementation of this type.
By default calls dialect.type_descriptor(self.impl), but can be overridden to provide different behavior.
process_bind_param(value, dialect)
process_result_value(value, dialect)
result_processor(dialect, coltype)
type_engine(dialect)
class MyType(types.UserDefinedType):
    def __init__(self, precision=8):
        self.precision = precision
    def get_col_spec(self):
        return "MYTYPE(%s)" % self.precision
__init__(*args, **kwargs)
adapt(cls)
adapt_operator(op)
A hook which allows the given operator to be adapted to something new.
See also UserDefinedType._adapt_expression(), an as-yet- semi-public method with greater capability in
this regard.
bind_processor(dialect)
Return a conversion function for processing bind values.
Returns a callable which will receive a bind parameter value as the sole positional argument and will return
a value to send to the DB-API.
If processing is not necessary, the method should return None.
compare_values(x, y)
Compare two values for equality.
compile(dialect)
copy_value(value)
dialect_impl(dialect, **kwargs)
get_dbapi_type(dbapi)
Return the corresponding type object from the underlying DB-API, if any.
This can be useful for calling setinputsizes(), for example.
is_mutable()
Return True if the target Python type is ‘mutable’.
This allows systems like the ORM to know if a column value can be considered ‘not changed’ by compar-
ing the identity of objects alone.
Use the MutableType mixin or override this method to return True in custom types that hold mutable
values such as dict, list and custom objects.
result_processor(dialect, coltype)
Return a conversion function for processing result row values.
Returns a callable which will receive a result row column value as the sole positional argument and will
return a value to return to the user.
If processing is not necessary, the method should return None.
class TypeEngine(*args, **kwargs)
Bases: sqlalchemy.types.AbstractType
Base for built-in types.
__init__(*args, **kwargs)
adapt(cls)
bind_processor(dialect)
Return a conversion function for processing bind values.
Returns a callable which will receive a bind parameter value as the sole positional argument and will return
a value to send to the DB-API.
If processing is not necessary, the method should return None.
compare_values(x, y)
Compare two values for equality.
compile(dialect)
copy_value(value)
dialect_impl(dialect, **kwargs)
get_dbapi_type(dbapi)
Return the corresponding type object from the underlying DB-API, if any.
This can be useful for calling setinputsizes(), for example.
is_mutable()
Return True if the target Python type is ‘mutable’.
This allows systems like the ORM to know if a column value can be considered ‘not changed’ by compar-
ing the identity of objects alone.
Use the MutableType mixin or override this method to return True in custom types that hold mutable
values such as dict, list and custom objects.
result_processor(dialect, coltype)
Return a conversion function for processing result row values.
Returns a callable which will receive a result row column value as the sole positional argument and will
return a value to return to the user.
If processing is not necessary, the method should return None.
class AbstractType(*args, **kwargs)
Bases: sqlalchemy.sql.visitors.Visitable
__init__(*args, **kwargs)
bind_processor(dialect)
Defines a bind parameter processing function.
Parameter dialect – Dialect instance in use.
compare_values(x, y)
Compare two values for equality.
compile(dialect)
copy_value(value)
get_dbapi_type(dbapi)
Return the corresponding type object from the underlying DB-API, if any.
This can be useful for calling setinputsizes(), for example.
is_mutable()
Return True if the target Python type is ‘mutable’.
This allows systems like the ORM to know if a column value can be considered ‘not changed’ by compar-
ing the identity of objects alone.
Use the MutableType mixin or override this method to return True in custom types that hold mutable
values such as dict, list and custom objects.
result_processor(dialect, coltype)
Defines a result-column processing function.
Parameters
• dialect – Dialect instance in use.
• coltype – DBAPI coltype argument received in cursor.description.
class MutableType()
Bases: object
A mixin that marks a TypeEngine as representing a mutable Python object type.
“mutable” means that changes can occur in place to a value of this type. Examples includes Python lists,
dictionaries, and sets, as well as user-defined objects. The primary need for identification of “mutable” types is
by the ORM, which applies special rules to such values in order to guarantee that changes are detected. These
rules may have a significant performance impact, described below.
A MutableType usually allows a flag called mutable=True to enable/disable the “mutability” flag, rep-
resented on this class by is_mutable(). Examples include PickleType and ARRAY. Setting this flag to
False effectively disables any mutability- specific behavior by the ORM.
copy_value() and compare_values() represent a copy and compare function for values of this type -
implementing subclasses should override these appropriately.
The usage of mutable types has significant performance implications when using the ORM. In order to detect
changes, the ORM must create a copy of the value when it is first accessed, so that changes to the current value
can be compared against the “clean” database-loaded value. Additionally, when the ORM checks to see if any
data requires flushing, it must scan through all instances in the session which are known to have “mutable”
attributes and compare the current value of each one to its “clean” value. So for example, if the Session contains
6000 objects (a fairly large amount) and autoflush is enabled, every individual execution of Query will require
a full scan of that subset of the 6000 objects that have mutable attributes, possibly resulting in tens of thousands
of additional method calls for every query.
Note that for small numbers (< 100 in the Session at a time) of objects with “mutable” values, the performance
degradation is negligible. In most cases it’s likely that the convenience allowed by “mutable” change detection
outweighs the performance penalty.
It is perfectly fine to represent “mutable” data types with the “mutable” flag set to False, which eliminates
any performance issues. It means that the ORM will only reliably detect changes for values of this type if a
newly modified value is of a different identity (i.e., id(value)) than what was present before - i.e., instead of
operations like these:
    myobject.somedict['foo'] = 'bar'
    myobject.someset.add('bar')
    myobject.somelist.append('bar')
you would instead need to use operations such as these, which rebind the attribute to a new object with a new identity:
    myobject.somevalue = {'foo': 'bar'}
    myobject.someset = myobject.someset.union(['bar'])
    myobject.somelist = myobject.somelist + ['bar']
A future release of SQLAlchemy will include instrumented collection support for mutable types, such that at
least usage of plain Python datastructures will be able to emit events for in-place changes, removing the need
for pessimistic scanning for changes.
__init__
x.__init__(...) initializes x; see x.__class__.__doc__ for signature
compare_values(x, y)
Compare x == y.
copy_value(value)
Unimplemented.
is_mutable()
Return True, mutable.
class Concatenable()
Bases: object
A mixin that marks a type as supporting ‘concatenation’, typically strings.
__init__
x.__init__(...) initializes x; see x.__class__.__doc__ for signature
class NullType(*args, **kwargs)
Bases: sqlalchemy.types.TypeEngine
An unknown type.
NullTypes will stand in if Table reflection encounters a column data type unknown to SQLAlchemy. The
resulting columns are nearly fully usable: the DB-API adapter will handle all translation to and from the database
data type.
NullType does not have sufficient information to participate in a CREATE TABLE statement and will raise an
exception if encountered during a create() operation.
9.1.7 Interfaces
    class MyProxy(ConnectionProxy):
        def execute(self, conn, execute, clauseelement, *multiparams, **params):
            print "compiled statement:", clauseelement
            return execute(clauseelement, *multiparams, **params)
The execute argument is a function that will fulfill the default execution behavior for the operation. The
signature illustrated in the example should be used.
The proxy is installed into an Engine via the proxy argument:
    e = create_engine('someurl://', proxy=MyProxy())
begin(conn, begin)
Intercept begin() events.
begin_twophase(conn, begin_twophase, xid)
Intercept begin_twophase() events.
commit(conn, commit)
Intercept commit() events.
commit_twophase(conn, commit_twophase, xid, is_prepared)
Intercept commit_twophase() events.
cursor_execute(execute, cursor, statement, parameters, context, executemany)
Intercept low-level cursor execute() events.
execute(conn, execute, clauseelement, *multiparams, **params)
Intercept high level execute() events.
prepare_twophase(conn, prepare_twophase, xid)
Intercept prepare_twophase() events.
release_savepoint(conn, release_savepoint, name, context)
Intercept release_savepoint() events.
rollback(conn, rollback)
Intercept rollback() events.
rollback_savepoint(conn, rollback_savepoint, name, context)
Intercept rollback_savepoint() events.
rollback_twophase(conn, rollback_twophase, xid, is_prepared)
Intercept rollback_twophase() events.
savepoint(conn, savepoint, name=None)
Intercept savepoint() events.
class PoolListener()
Hooks into the lifecycle of connections in a Pool.
Usage:
    class MyListener(PoolListener):
        def connect(self, dbapi_con, con_record):
            '''perform connect operations'''
        # etc.
All of the standard connection Pool types can accept event listeners for key connection lifecycle events: cre-
ation, pool check-out and check-in. There are no events fired when a connection closes.
For any given DB-API connection, there will be one connect event, n number of checkout events, and
either n or n - 1 checkin events. (If a Connection is detached from its pool via the detach() method, it
won’t be checked back in.)
These are low-level events for low-level objects: raw Python DB-API connections, without the conveniences of
the SQLAlchemy Connection wrapper, Dialect services or ClauseElement execution. If you execute
SQL through the connection, explicitly closing all cursors and other resources is recommended.
Events also receive a _ConnectionRecord, a long-lived internal Pool object that basically represents a
“slot” in the connection pool. _ConnectionRecord objects have one public attribute of note: info, a
dictionary whose contents are scoped to the lifetime of the DB-API connection managed by the record. You can
use this shared storage area however you like.
There is no need to subclass PoolListener to handle events. Any class that implements one or more of these
methods can be used as a pool listener. The Pool will inspect the methods provided by a listener object and add
the listener to one or more internal event queues based on its capabilities. In terms of efficiency and function
call overhead, you’re much better off only providing implementations for the hooks you’ll be using.
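For example, a listener might use the info dictionary to stamp each connection record at connect time (a sketch; the timing logic is purely illustrative):

    import time

    class ConnectionAgeListener(PoolListener):
        def connect(self, dbapi_con, con_record):
            # remember when this DB-API connection was created
            con_record.info['connect_time'] = time.time()

        def checkout(self, dbapi_con, con_record, con_proxy):
            # the same info dict is visible on every checkout of this connection
            con_record.info['last_checkout_age'] = \
                time.time() - con_record.info['connect_time']

Such a listener is passed to create_engine() via its listeners argument, e.g. create_engine('someurl://', listeners=[ConnectionAgeListener()]).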
checkin(dbapi_con, con_record)
Called when a connection returns to the pool.
Note that the connection may be closed, and may be None if the connection has been invalidated.
checkin will not be called for detached connections. (They do not return to the pool.)
dbapi_con A raw DB-API connection
con_record The _ConnectionRecord that persistently manages the connection
checkout(dbapi_con, con_record, con_proxy)
Called when a connection is retrieved from the Pool.
dbapi_con A raw DB-API connection
con_record The _ConnectionRecord that persistently manages the connection
con_proxy The _ConnectionFairy which manages the connection for the span of the current check-
out.
If you raise an exc.DisconnectionError, the current connection will be disposed and a fresh con-
nection retrieved. Processing of all checkout listeners will abort and restart using the new connection.
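A sketch of a pessimistic “ping” built on this behavior (the SELECT used here is illustrative and not portable to every database):

    from sqlalchemy import exc

    class PingListener(PoolListener):
        def checkout(self, dbapi_con, con_record, con_proxy):
            try:
                cursor = dbapi_con.cursor()
                cursor.execute("SELECT 1")
                cursor.close()
            except Exception:
                # discard this connection; the pool will retry with a fresh one
                raise exc.DisconnectionError()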
connect(dbapi_con, con_record)
Called once for each new DB-API connection or Pool’s creator().
dbapi_con A newly connected raw DB-API connection (not a SQLAlchemy Connection wrapper).
con_record The _ConnectionRecord that persistently manages the connection
first_connect(dbapi_con, con_record)
Called exactly once for the first DB-API connection.
dbapi_con A newly connected raw DB-API connection (not a SQLAlchemy Connection wrapper).
con_record The _ConnectionRecord that persistently manages the connection
9.1.8 Utilities
class IdentitySet(iterable=None)
A set that considers only object id() for uniqueness.
This strategy has edge cases for builtin types- it’s possible to have two ‘foo’ strings in one of these sets, for
example. Use sparingly.
__init__(iterable=None)
class LRUCache(capacity=100, threshold=0.5)
Dictionary with ‘squishy’ removal of least recently used items.
__init__(capacity=100, threshold=0.5)
class NamedTuple()
tuple() subclass that adds labeled names.
Is also pickleable.
class OrderedDict(_OrderedDict____sequence=None, **kwargs)
A dict that returns keys/values/items in the order they were added.
__init__(_OrderedDict____sequence=None, **kwargs)
class OrderedProperties()
An object that maintains the order in which attributes are set upon it.
Also provides an iterator and a very basic getitem/setitem interface to those attributes.
(Not really a dict, since it iterates over values, not keys. Not really a list, either, since each value must have a
key associated; hence there is no append or extend.)
__init__()
class PopulateDict(creator)
A dict which populates missing values via a creation function.
Note the creation function takes a key, unlike collections.defaultdict.
__init__(creator)
class ScopedRegistry(createfunc, scopefunc)
A Registry that can store one or multiple instances of a single class on a per-thread scoped basis, or on a
customized scope.
createfunc a callable that returns a new object to be placed in the registry
scopefunc a callable that will return a key to store/retrieve an object.
__init__(createfunc, scopefunc)
class UniqueAppender(data, via=None)
Appends items to a collection ensuring uniqueness.
Additional appends() of the same object are ignored. Membership is determined by identity (is), not equality (==).
__init__(data, via=None)
class WeakIdentityMapping()
A WeakKeyDictionary with an object identity index.
Adds a .by_id dictionary to a regular WeakKeyDictionary. Trades performance during mutation operations for
accelerated lookups by id().
The usual cautions about weak dictionaries and iteration also apply to this subclass.
__init__()
as_interface(obj, cls=None, methods=None, required=None)
Ensure basic interface compliance for an instance or dict of callables.
Checks that obj implements public methods of cls or has members listed in methods. If required is not
supplied, implementing at least one interface method is sufficient. Methods present on obj that are not in the
interface are ignored.
If obj is a dict and dict does not meet the interface requirements, the keys of the dictionary are inspected.
Keys present in obj that are not in the interface will raise TypeErrors.
Raises TypeError if obj does not meet the interface criteria.
In all passing cases, an object with callable members is returned. In the simple case, obj is returned as-is; if
dict processing kicks in then an anonymous class is returned.
format_argspec_plus(fn, grouped=True)
Returns a dictionary of formatted, introspected function arguments.
An enhanced variant of inspect.formatargspec to support code generation.
fn An inspectable callable or tuple of inspect getargspec() results.
grouped Defaults to True; include (parens, around, argument) lists
Returns:
args Full inspect.formatargspec for fn
self_arg The name of the first positional argument, varargs[0], or None if the function defines no positional
arguments.
apply_pos args, re-written in calling rather than receiving syntax. Arguments are passed positionally.
apply_kw Like apply_pos, except keyword-ish args are passed as keywords.
Example:
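(Illustrative only; the exact strings depend on the callable inspected:)

    spec = format_argspec_plus(lambda self, a, b=1, **kw: None)
    # spec is a dictionary along the lines of:
    # {'args': '(self, a, b=1, **kw)',
    #  'self_arg': 'self',
    #  'apply_pos': '(self, a, b, **kw)',
    #  'apply_kw': '(self, a, b=b, **kw)'}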
function_named(fn, name)
Return a function with a given __name__.
Will assign to __name__ and return the original function if possible on the Python implementation, otherwise a
new function will be constructed.
get_cls_kwargs(cls)
Return the full set of inherited kwargs for the given cls.
Probes a class’s __init__ method, collecting all named arguments. If the __init__ defines a **kwargs catch-all,
then the constructor is presumed to pass along unrecognized keywords to its base classes, and the collection
process is repeated recursively on each of the bases.
get_func_kwargs(func)
Return the full set of legal kwargs for the given func.
getargspec_init(method)
inspect.getargspec with considerations for typical __init__ methods
Wraps inspect.getargspec with error handling for typical __init__ cases:
iterate_attributes(cls)
iterate all the keys and attributes associated with a class, without using getattr().
Does not use getattr() so that class-sensitive descriptors (i.e. property.__get__()) are not called.
memoized_instancemethod
Decorate a method to memoize its return value.
Best applied to no-arg methods: memoization is not sensitive to argument values, and will always return the
same value even when called with different arguments.
memoized_property
A read-only @property that is only evaluated once.
symbol(name)
A slight refinement of the MAGICCOOKIE=object() pattern. The primary advantage of symbol() is its repr(). They are also singletons.
Repeated calls of symbol(‘name’) will all return the same instance.
unbound_method_to_callable(func_or_cls)
Adjust the incoming callable such that a ‘self’ argument is not required.
update_copy(d, _new=None, **kw)
Copy the given dict and update with the given values.
warn_exception(func, *args, **kwargs)
Executes the given function, catching all exceptions and converting them to a warning.
9.2 sqlalchemy.orm
Defining Mappings
Python classes are mapped to the database using the mapper() function.
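A minimal sketch, assuming a plain User class and a users_table Table object have already been defined:

    from sqlalchemy.orm import mapper

    class User(object):
        pass

    # each column of users_table becomes an attribute on User
    mapper(User, users_table)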
mapper(class_, local_table=None, *args, **params)
Return a new Mapper object.
Parameters
• class_ – The class to be mapped.
• local_table – The table to which the class is mapped, or None if this mapper inherits from
another mapper using concrete table inheritance.
• always_refresh – If True, all query operations for this mapped class will overwrite all data
within object instances that already exist within the session, erasing any in-memory changes
with whatever information was loaded from the database. Usage of this flag is highly dis-
couraged; as an alternative, see the method populate_existing() on Query.
• allow_null_pks – This flag is deprecated; it is superseded by allow_partial_pks, which defaults to True.
• allow_partial_pks – Defaults to True. Indicates that a composite primary key with some
NULL values should be considered as possibly existing within the database. This affects
whether a mapper will assign an incoming row to an existing identity, as well as if ses-
sion.merge() will check the database first for a particular primary key value. A “partial
primary key” can occur if one has mapped to an OUTER JOIN, for example.
• batch – Indicates that save operations of multiple entities can be batched together for effi-
ciency. setting to False indicates that an instance will be fully saved before saving the next
instance, which includes inserting/updating all table rows corresponding to the entity as well
as calling all MapperExtension methods corresponding to the save operation.
• column_prefix – A string which will be prepended to the key name of all Columns when
creating column-based properties from the given Table. Does not affect explicitly specified
column-based properties
• concrete – If True, indicates this mapper should use concrete table inheritance with its parent
mapper.
• exclude_properties – A list of properties not to map. Columns present in the mapped table
and present in this list will not be automatically converted into properties. Note that nei-
ther this option nor include_properties will allow an end-run around Python inheritance. If
mapped class B inherits from mapped class A, no combination of includes or excludes will
allow B to have fewer properties than its superclass, A.
• extension – A MapperExtension instance or list of MapperExtension instances
which will be applied to all operations by this Mapper.
• include_properties – An inclusive list of properties to map. Columns present in the mapped
table but not present in this list will not be automatically converted into properties.
• inherits – Another Mapper for which this Mapper will have an inheritance relationship
with.
• inherit_condition – For joined table inheritance, a SQL expression (constructed
ClauseElement) which will define how the two tables are joined; defaults to a natu-
ral join between the two tables.
• inherit_foreign_keys – When inherit_condition is used and the condition contains no ForeignKey columns, specify the “foreign” columns of the join condition in this list; otherwise leave as None.
• non_primary – Construct a Mapper that will define only the selection of instances, not their
persistence. Any number of non_primary mappers may be created for a particular class.
• order_by – A single Column or list of Columns for which selection operations should use
as the default ordering for entities. Defaults to the OID/ROWID of the table if any, or the
first primary key column of the table.
• passive_updates – Indicates UPDATE behavior of foreign keys when a primary key changes
on a joined-table inheritance or other joined table mapping.
When True, it is assumed that ON UPDATE CASCADE is configured on the foreign key in
the database, and that the database will handle propagation of an UPDATE from a source
column to dependent rows. Note that with databases which enforce referential integrity (i.e.
PostgreSQL, MySQL with InnoDB tables), ON UPDATE CASCADE is required for this
operation. The relationship() will update the value of the attribute on related items which
are locally present in the session during a flush.
When False, it is assumed that the database does not enforce referential integrity and will
not be issuing its own CASCADE operation for an update. The relationship() will issue the
appropriate UPDATE statements to the database in response to the change of a referenced
key, and items locally present in the session during a flush will also be refreshed.
This flag should probably be set to False if primary key changes are expected and the
database in use doesn’t support CASCADE (i.e. SQLite, MySQL MyISAM tables).
Also see the passive_updates flag on relationship().
A future SQLAlchemy release will provide a “detect” feature for this flag.
• polymorphic_on – Used with mappers in an inheritance relationship, a Column which
will identify the class/mapper combination to be used with a particular row. Requires
the polymorphic_identity value to be set for all mappers in the inheritance hierar-
chy. The column specified by polymorphic_on is usually a column that resides directly
within the base mapper’s mapped table; alternatively, it may be a column that is only present
within the <selectable> portion of the with_polymorphic argument.
• polymorphic_identity – A value which will be stored in the Column denoted by polymor-
phic_on, corresponding to the class identity of this mapper.
• properties – A dictionary mapping the string names of object attributes to
MapperProperty instances, which define the persistence behavior of that attribute. Note
that the columns in the mapped table are automatically converted into ColumnProperty
instances based on the key property of each Column (although they can be overridden using
this dictionary).
• primary_key – A list of Column objects which define the primary key to be used against
this mapper’s selectable unit. This is normally simply the primary key of the local_table,
but can be overridden here.
• version_id_col – A Column which must have an integer type that will be used to keep a
running version id of mapped entities in the database. this is used during save operations
to ensure that no other thread or process has updated the instance during the lifetime of the
entity, else a ConcurrentModificationError exception is thrown.
• version_id_generator – A callable which defines the algorithm used to generate new version
ids. Defaults to an integer generator. Can be replaced with one that generates timestamps,
uuids, etc. e.g.:
    import uuid

    mapper(Cls, table,
        version_id_col=table.c.version_uuid,
        version_id_generator=lambda version: uuid.uuid4().hex
    )
The callable receives the current version identifier as its single argument.
• with_polymorphic – A tuple in the form (<classes>, <selectable>) indicating the
default style of “polymorphic” loading, that is, which tables are queried at once. <classes>
is any single or list of mappers and/or classes indicating the inherited classes that should
be loaded at once. The special value ’*’ may be used to indicate all descending classes
should be loaded immediately. The second tuple argument <selectable> indicates a se-
lectable that will be used to query for multiple classes. Normally, it is left as None, in
which case this mapper will form an outer join from the base mapper’s table to that of all
desired sub-mappers. When specified, it provides the selectable to be used for polymorphic
loading. When with_polymorphic includes mappers which load from a “concrete” inher-
iting table, the <selectable> argument is required, since it usually requires more complex
UNION queries.
Mapper Properties
A basic mapping of a class will simply make the columns of the database table or selectable available as attributes
on the class. Mapper properties allow you to customize and add additional properties to your classes, for example
making the results of a one-to-many join available as a Python list of related objects.
Mapper properties are most commonly included in the mapper() call:
    mapper(Parent, properties={
        'children': relationship(Children)
    })
backref(name, **kwargs)
Create a back reference with explicit arguments, which are the same arguments one can send to
relationship().
Used with the backref keyword argument to relationship() in place of a string argument.
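A sketch of backref() controlling the reverse side of a relationship (the Parent/Child classes and parent_table are assumptions for illustration):

    from sqlalchemy.orm import mapper, relationship, backref

    mapper(Parent, parent_table, properties={
        # creates Child.parent as the reverse of Parent.children,
        # configured here to load via a joined eager load
        'children': relationship(Child, backref=backref('parent', lazy='joined'))
    })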
column_property(*args, **kwargs)
Provide a column-level property for use with a Mapper.
Column-based properties can normally be applied to the mapper’s properties dictionary using the
schema.Column element directly. Use this function when the given column is not directly present within
the mapper’s selectable; examples include SQL expressions, functions, and scalar SELECT queries.
Columns that aren’t present in the mapper’s selectable won’t be persisted by the mapper and are effectively
“read-only” attributes.
*cols list of Column objects to be mapped.
comparator_factory a class which extends sqlalchemy.orm.properties.ColumnProperty.Comparator
which provides custom SQL clause generation for comparison operations.
group a group name for this property when marked as deferred.
deferred when True, the column property is “deferred”, meaning that it does not load immediately,
and is instead loaded when the attribute is first accessed on an instance. See also deferred().
doc optional string that will be applied as the doc on the class-bound descriptor.
extension an AttributeExtension instance, or list of extensions, which will be prepended
to the list of attribute listeners for the resulting descriptor placed on the class. These listeners
will receive append and set events before the operation proceeds, and may be used to halt (via
exception throw) or change the value used in the operation.
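A sketch of a SQL-expression column property, assuming a users_table with firstname and lastname columns:

    from sqlalchemy.orm import mapper, column_property

    mapper(User, users_table, properties={
        # a read-only attribute computed from two mapped columns
        'fullname': column_property(
            (users_table.c.firstname + " " + users_table.c.lastname).label('fullname'))
    })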
comparable_property(comparator_factory, descriptor=None)
Provide query semantics for an unmanaged attribute.
Allows a regular Python @property (descriptor) to be used in Queries and SQL constructs like a managed at-
tribute. comparable_property wraps a descriptor with a proxy that directs operator overrides such as == (__eq__)
to the supplied comparator but proxies everything else through to the original descriptor:
    class MyClass(object):
        @property
        def myprop(self):
            return 'foo'

    class MyComparator(sqlalchemy.orm.interfaces.PropComparator):
        def __eq__(self, other):
            ....
    class Point(object):
        def __init__(self, x, y):
            self.x = x
            self.y = y

        def __composite_values__(self):
            return self.x, self.y

        def __eq__(self, other):
            return other is not None and self.x == other.x and self.y == other.y
The composite object may have its attributes populated based on the names of the mapped columns. To override
the way internal state is set, additionally implement __set_composite_values__:
    class Point(object):
        def __init__(self, x, y):
            self.some_x = x
            self.some_y = y

        def __composite_values__(self):
            return self.some_x, self.some_y

        def __set_composite_values__(self, x, y):
            self.some_x = x
            self.some_y = y

        def __eq__(self, other):
            return other is not None and self.some_x == other.x and self.some_y == other.y
Arguments are:
class_ The “composite type” class.
*cols List of Column objects to be mapped.
group A group name for this property when marked as deferred.
deferred When True, the column property is “deferred”, meaning that it does not load immediately, and is
instead loaded when the attribute is first accessed on an instance. See also deferred().
comparator_factory a class which extends sqlalchemy.orm.properties.CompositeProperty.Comparator
which provides custom SQL clause generation for comparison operations.
doc optional string that will be applied as the doc on the class-bound descriptor.
extension an AttributeExtension instance, or list of extensions, which will be prepended to the list of
attribute listeners for the resulting descriptor placed on the class. These listeners will receive append and
set events before the operation proceeds, and may be used to halt (via exception throw) or change the value
used in the operation.
deferred(*columns, **kwargs)
Return a DeferredColumnProperty, which indicates this object attributes should only be loaded from its
corresponding table column when first accessed.
Used with the properties dictionary sent to mapper().
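A sketch, assuming a book_table containing a large binary photo column that should not load until accessed:

    from sqlalchemy.orm import mapper, deferred

    mapper(Book, book_table, properties={
        'photo': deferred(book_table.c.photo)
    })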
dynamic_loader(argument, secondary=None, primaryjoin=None, secondaryjoin=None, foreign_keys=None, backref=None, post_update=False, cascade=False, remote_side=None, enable_typechecks=True, passive_deletes=False, doc=None, order_by=None, comparator_factory=None, query_class=None)
Construct a dynamically-loading mapper property.
This property is similar to relationship(), except read operations return an active Query object which
reads from the database when accessed. Items may be appended to the attribute via append(), or removed via
remove(); changes will be persisted to the database during a Session.flush(). However, no other Python
list or collection mutation operations are available.
A subset of arguments available to relationship() are available here.
Parameters
• argument – a class or Mapper instance, representing the target of the relationship.
• secondary – for a many-to-many relationship, specifies the intermediary table. The sec-
ondary keyword argument should generally only be used for a table that is not otherwise
expressed in any class mapping. In particular, using the Association Object Pattern is gen-
erally mutually exclusive with the use of the secondary keyword argument.
• query_class – Optional, a custom Query subclass to be used as the basis for dynamic col-
lection.
relation(*arg, **kw)
A synonym for relationship().
relationship(argument, secondary=None, **kwargs)
Provide a relationship of a primary Mapper to a secondary Mapper.
Note: This function is known as relation() in all versions of SQLAlchemy prior to version 0.6beta2,
including the 0.5 and 0.4 series. relationship() is only available starting with SQLAlchemy 0.6beta2.
The relation() name will remain available for the foreseeable future in order to enable cross-compatibility.
This corresponds to a parent-child or associative table relationship. The constructed class is an instance of
RelationshipProperty.
A typical relationship():
    mapper(Parent, properties={
        'children': relationship(Children)
    })
Parameters
• argument – a class or Mapper instance, representing the target of the relationship.
• secondary – for a many-to-many relationship, specifies the intermediary table. The sec-
ondary keyword argument should generally only be used for a table that is not otherwise
expressed in any class mapping. In particular, using the Association Object Pattern is gen-
erally mutually exclusive with the use of the secondary keyword argument.
• backref – indicates the string name of a property to be placed on the related mapper’s class
that will handle this relationship in the other direction. The other property will be created
automatically when the mappers are configured. Can also be passed as a backref()
object to control the configuration of the new relationship.
• back_populates – Takes a string name and has the same meaning as backref, except
the complementing property is not created automatically, and instead must be config-
ured explicitly on the other mapper. The complementing property should also indicate
back_populates to this relationship to ensure proper functioning.
• cascade – a comma-separated list of cascade rules which determines how Session operations
should be “cascaded” from parent to child. This defaults to False, which means the default
cascade should be used. The default value is "save-update, merge".
Available cascades are:
– save-update - cascade the add() operation. This cascade applies both to future and
past calls to add(), meaning new items added to a collection or scalar relationship get
placed into the same session as that of the parent, and also applies to items which have
been removed from this relationship but are still part of unflushed history.
– merge - cascade the merge() operation
– expunge - cascade the expunge() operation
– delete - cascade the delete() operation
– delete-orphan - if an item of the child’s type with no parent is detected, mark it for
deletion. Note that this option prevents a pending item of the child’s class from being
persisted without a parent present.
– refresh-expire - cascade the expire() and refresh() operations
– all - shorthand for “save-update, merge, refresh-expire, expunge, delete”
• collection_class – a class or callable that returns a new list-holding object. will be used in
place of a plain list for storing elements.
• comparator_factory – a class which extends RelationshipProperty.Comparator
which provides custom SQL clause generation for comparison operations.
• doc – docstring which will be applied to the resulting descriptor.
• extension – an AttributeExtension instance, or list of extensions, which will be
prepended to the list of attribute listeners for the resulting descriptor placed on the class.
These listeners will receive append and set events before the operation proceeds, and may
be used to halt (via exception throw) or change the value used in the operation.
• foreign_keys – a list of columns which are to be used as “foreign key” columns. this param-
eter should be used in conjunction with explicit primaryjoin and secondaryjoin (if
needed) arguments, and the columns within the foreign_keys list should be present
within those join conditions. Normally, relationship() will inspect the columns
within the join conditions to determine which columns are the “foreign key” columns,
based on information in the Table metadata. Use this argument when no ForeignKey’s
are present in the join condition, or to override the table-defined foreign keys.
• innerjoin=False – when True, joined eager loads will use an inner join to join against
related tables instead of an outer join. The purpose of this option is strictly one of perfor-
mance, as inner joins generally perform better than outer joins. This flag can be set to True
when the relationship references an object via many-to-one using local foreign keys that are
not nullable, or when the reference is one-to-one or a collection that is guaranteed to have
one or at least one entry.
• join_depth – when non-None, an integer value indicating how many levels deep “eager”
loaders should join on a self-referring or cyclical relationship. The number counts how
many times the same Mapper shall be present in the loading condition along a particular
join branch. When left at its default of None, eager loaders will stop chaining when they
encounter a the same target mapper which is already higher up in the chain. This option
applies both to joined- and subquery- eager loaders.
• post_update – this indicates that the relationship should be handled by a second UPDATE statement after an INSERT or before a DELETE. This flag is used to handle saving bi-directional dependencies between two individual rows (i.e. each row references the other), where it would otherwise be impossible to INSERT or DELETE both rows fully since one row exists before the other. Use this flag when a particular mapping arrangement will incur two rows that are dependent on each other, such as a table that has a one-to-many relationship to a set of child rows, and also has a column that references a single child row within that list (i.e. both tables contain a foreign key to each other). If a flush() operation returns an error that a “cyclical dependency” was detected, this is a cue that you might want to use post_update to “break” the cycle.
• primaryjoin – a ColumnElement (i.e. WHERE criterion) that will be used as the primary
join of this child object against the parent object, or in a many-to-many relationship the join
of the primary object to the association table. By default, this value is computed based on
the foreign key relationships of the parent and child tables (or association table).
• remote_side – used for self-referential relationships, indicates the column or list of columns
that form the “remote side” of the relationship.
• secondaryjoin – a ColumnElement (i.e. WHERE criterion) that will be used as the join of an
association table to the child object. By default, this value is computed based on the foreign
key relationships of the association and child tables.
• single_parent=(True|False) – when True, installs a validator which will prevent objects from
being associated with more than one parent at a time. This is used for many-to-one or many-
to-many relationships that should be treated either as one-to-one or one-to-many. Its usage
is optional unless delete-orphan cascade is also set on this relationship(), in which case it is required (new in 0.5.2).
• uselist=(True|False) – a boolean that indicates if this property should be loaded as a list or a
scalar. In most cases, this value is determined automatically by relationship(), based
on the type and direction of the relationship - one to many forms a list, many to one forms a
scalar, many to many is a list. If a scalar is desired where normally a list would be present,
such as a bi-directional one-to-one relationship, set uselist to False.
• viewonly=False – when set to True, the relationship is used only for loading objects within
the relationship, and has no effect on the unit-of-work flush process. Relationships with
viewonly can specify any kind of join conditions to provide additional views of related
objects onto a parent object. Note that the functionality of a viewonly relationship has its
limits - complicated join conditions may not compile into eager or lazy loaders properly. If
this is the case, use an alternative method.
    class MyClass(object):
        def _get_status(self):
            return self._status

        def _set_status(self, value):
            self._status = value

        status = property(_get_status, _set_status)
The column named status will be mapped to the attribute named _status, and the status attribute on
MyClass will be used to proxy access to the column-based attribute.
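A sketch of the mapping being described; the use of synonym() with map_column=True here is an assumption based on the surrounding description:

    from sqlalchemy.orm import mapper, synonym

    mapper(MyClass, sometable, properties={
        # map the "status" column onto the _status attribute, letting the
        # status property on MyClass proxy access to it
        'status': synonym('_status', map_column=True)
    })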
Decorators
reconstructor(fn)
Decorate a method as the ‘reconstructor’ hook.
Designates a method as the “reconstructor”, an __init__-like method that will be called by the ORM after
the instance has been loaded from the database or otherwise reconstituted.
The reconstructor will be invoked with no arguments. Scalar (non-collection) database-mapped attributes of the
instance will be available for use within the function. Eagerly-loaded collections are generally not yet available
and will usually only contain the first element. ORM state changes made to objects at this stage will not be
recorded for the next flush() operation, so the activity within a reconstructor should be conservative.
validates(*names)
Decorate a method as a ‘validator’ for one or more named properties.
Designates a method as a validator, a method which receives the name of the attribute as well as a value to be
assigned, or in the case of a collection to be added to the collection. The function can then raise validation
exceptions to halt the process from continuing, or can modify or replace the value before proceeding. The
function should otherwise return the given value.
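A sketch of a validator, assuming an EmailAddress mapped class with an email attribute:

    from sqlalchemy.orm import validates

    class EmailAddress(object):
        @validates('email')
        def validate_email(self, key, address):
            # reject values lacking an '@'; otherwise return the value to be assigned
            assert '@' in address
            return address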
Utilities
object_mapper(instance)
Given an object, return the primary Mapper associated with the object instance.
Raises UnmappedInstanceError if no mapping is configured.
class_mapper(class_, compile=True)
Given a class, return the primary Mapper associated with the key.
Raises UnmappedClassError if no mapping is configured.
compile_mappers()
Compile all mappers that have been defined.
This is equivalent to calling compile() on any individual mapper.
clear_mappers()
Remove all mappers that have been created thus far.
The mapped classes will return to their initial “unmapped” state and can be re-mapped with new mappers.
Attribute Utilities
del_attribute(instance, key)
Delete the value of an attribute, firing history events.
This function may be used regardless of instrumentation applied directly to the class, i.e. no descriptors are
required. Custom attribute management schemes will need to make usage of this method to establish attribute
state as understood by SQLAlchemy.
get_attribute(instance, key)
Get the value of an attribute, firing any callables required.
This function may be used regardless of instrumentation applied directly to the class, i.e. no descriptors are
required. Custom attribute management schemes will need to use this method to access attribute state as understood by SQLAlchemy.
get_history(obj, key, **kwargs)
Return a History record for the given object and attribute key.
obj is an instrumented object instance. An InstanceState is accepted directly for backwards compatibility but
this usage is deprecated.
init_collection(obj, key)
Initialize a collection attribute and return the collection adapter.
This function is used to provide direct access to collection internals for a previously unloaded attribute. e.g.:
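(A sketch; someobject, its unloaded ‘data’ collection and the values list are assumptions, and append_without_event() is one of the adapter methods usable for event-free population:)

    adapter = init_collection(someobject, 'data')
    for elem in values:
        # populate the collection without emitting ORM append events
        adapter.append_without_event(elem)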
Internals
identity_key_from_row(row, adapter=None)
Return an identity-map key for use in storing/retrieving an item from the identity map.
row A sqlalchemy.engine.base.RowProxy instance or a dictionary mapping result-set ColumnElement instances to their values within a row.
isa(other)
Return True if this mapper inherits from the given mapper.
iterate_properties
return an iterator of all MapperProperty objects.
polymorphic_iterator()
Iterate through the collection including this mapper and all descendant mappers.
This includes not just the immediately inheriting mappers but all their inheriting mappers as well.
To iterate through an entire hierarchy, use mapper.base_mapper.polymorphic_iterator().
primary_key_from_instance(instance)
Return the list of primary key values for the given instance.
primary_mapper()
Return the primary mapper corresponding to this mapper’s class key (class).
This is an in-depth discussion of collection mechanics. For simple examples, see Rows that point to themselves / Mutually Dependent Rows.
Support for collections of mapped entities.
The collections package supplies the machinery used to inform the ORM of collection membership changes. An
instrumentation via decoration approach is used, allowing arbitrary types (including built-ins) to be used as entity
collections without requiring inheritance from a base class.
Instrumentation decoration relays membership change events to the InstrumentedCollectionAttribute
that is currently managing the collection. The decorators observe function call arguments and return values, track-
ing entities entering or leaving the collection. Two decorator approaches are provided. One is a bundle of generic
decorators that map function arguments and return values to events:
    @collection.adds(1)
    def store(self, item):
        self.data.append(item)

    @collection.removes_return()
    def pop(self):
        return self.data.pop()
The second approach is a bundle of targeted decorators that wrap appropriate append and remove notifiers around the
mutation methods present in the standard Python list, set and dict interfaces. These could be specified in terms
of generic decorator recipes, but are instead hand-tooled for increased efficiency. The targeted decorators occasionally
implement adapter-like behavior, such as mapping bulk-set methods (extend, update, __setslice__, etc.) into
the series of atomic mutation events that the ORM requires.
The targeted decorators are used internally for automatic instrumentation of entity collection classes. Every collection
class goes through a transformation process roughly like so:
This process modifies the class at runtime, decorating methods and adding some bookkeeping properties. This isn’t
possible (or desirable) for built-in classes like list, so trivial sub-classes are substituted to hold decoration:
    class InstrumentedList(list):
        pass
A user-defined subclass of a known type works the same way; for example:
    class QueueIsh(list):
        def push(self, item):
            self.append(item)

        def shift(self):
            return self.pop(0)
There’s no need to decorate these methods. append and pop are already instrumented as part of the list interface.
Decorating them would fire duplicate events, which should be avoided.
The targeted decoration tries not to rely on other methods in the underlying collection class, but some are unavoidable.
Many depend on ‘read’ methods being present to properly instrument a ‘write’, for example, __setitem__ needs
__getitem__. “Bulk” methods like update and extend may also be reimplemented in terms of atomic appends
and removes, so the extend decoration will actually perform many append operations and not call the underlying
method at all.
Tight control over bulk operation and the firing of events is also possible by implementing the instrumentation inter-
nally in your methods. The basic instrumentation package works under the general assumption that collection mutation
will not raise unusual exceptions. If you want to closely orchestrate append and remove events with exception man-
agement, internal instrumentation may be the answer. Within your method, collection_adapter(self) will
retrieve an object that you can use for explicit control over triggering append and remove events.
The owning object and InstrumentedCollectionAttribute are also reachable through the adapter, allowing for some
very sophisticated behavior.
attribute_mapped_collection(attr_name)
A dictionary-based collection type with attribute-based keying.
Returns a MappedCollection factory with a keying based on the ‘attr_name’ attribute of entities in the collection.
The key value must be immutable for the lifetime of the object. You can not, for example, map on foreign key
values if those key values will change during the session, i.e. from None to a database-assigned integer after a
session flush.
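A sketch, assuming Item entities that should be keyed by their keyword attribute:

    from sqlalchemy.orm import mapper, relationship
    from sqlalchemy.orm.collections import attribute_mapped_collection

    mapper(Parent, parent_table, properties={
        # Parent.items behaves as a dict keyed by each Item's .keyword
        'items': relationship(Item,
                    collection_class=attribute_mapped_collection('keyword'))
    })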
class collection()
Decorators for entity collection classes.
The decorators fall into two groups: annotations and interception recipes.
The annotating decorators (appender, remover, iterator, internally_instrumented, on_link) indicate the method’s
purpose and take no arguments. They are not written with parens:
    @collection.appender
    def append(self, append): ...
The recipe decorators all require parens, even those that take no arguments:
    @collection.adds('entity')
    def insert(self, position, entity): ...

    @collection.removes_return()
    def popitem(self): ...
Decorators can be specified in long-hand for Python 2.3, or with the class-level dict attribute
‘__instrumentation__’- see the source for details.
class MappedCollection(keyfunc)
A basic dictionary-based collection class.
Extends dict with the minimal bag semantics that collection classes require. set and remove are implemented
in terms of a keying function: any callable that takes an object and returns an object for use as a dictionary key.
__init__(keyfunc)
Create a new collection with keying provided by keyfunc.
keyfunc may be any callable that takes an object and returns an object for use as a dictionary key.
The keyfunc will be called every time the ORM needs to add a member by value-only (such as when
loading instances from the database) or remove a member. The usual cautions about dictionary keying
apply- keyfunc(object) should return the same output for the life of the collection. Keying based on
mutable properties can result in unreachable instances “lost” in the collection.
remove(value, _sa_initiator=None)
Remove an item by value, consulting the keyfunc for the key.
set(value, _sa_initiator=None)
Add an item by value, consulting the keyfunc for the key.
collection_adapter(collection)
Fetch the CollectionAdapter for a collection.
column_mapped_collection(mapping_spec)
A dictionary-based collection type with column-based keying.
Returns a MappedCollection factory with a keying function generated from mapping_spec, which may be a
Column or a sequence of Columns.
The key value must be immutable for the lifetime of the object. You can not, for example, map on foreign key
values if those key values will change during the session, i.e. from None to a database-assigned integer after a
session flush.
mapped_collection(keyfunc)
A dictionary-based collection type with arbitrary keying.
Returns a MappedCollection factory with a keying function generated from keyfunc, a callable that takes an
entity and returns a key value.
The key value must be immutable for the lifetime of the object. You can not, for example, map on foreign key
values if those key values will change during the session, i.e. from None to a database-assigned integer after a
session flush.
9.2.3 Querying
q = session.query(SomeMappedClass)
‘evaluate’ - Evaluate the query’s criteria in Python straight on the objects in the session. If
evaluation of the criteria isn’t implemented, an error is raised. In that case you probably want
to use the ‘fetch’ strategy as a fallback.
The expression evaluator currently doesn’t account for differing string collations between the
database and Python.
Returns the number of rows deleted, excluding any cascades.
The method does not offer in-Python cascading of relationships - it is assumed that ON DELETE CAS-
CADE is configured for any foreign key references which require it. The Session needs to be expired
(occurs automatically after commit(), or call expire_all()) in order for the state of dependent objects sub-
ject to delete or delete-orphan cascade to be correctly represented.
Also, the before_delete() and after_delete() MapperExtension methods are not called
from this method. For a delete hook here, use the after_bulk_delete() MapperExtension
method.
distinct()
Apply a DISTINCT to the query and return the newly resulting Query.
enable_assertions(value)
Control whether assertions are generated.
When set to False, the returned Query will not assert its state before certain operations, including that
LIMIT/OFFSET has not been applied when filter() is called, no criterion exists when get() is called, and
no “from_statement()” exists when filter()/order_by()/group_by() etc. is called. This more permissive
mode is used by custom Query subclasses to specify criterion or other modifiers outside of the usual usage
patterns.
Care should be taken to ensure that the usage pattern is even possible. A statement applied by
from_statement() will override any criterion set by filter() or order_by(), for example.
enable_eagerloads(value)
Control whether or not eager joins and subqueries are rendered.
When set to False, the returned Query will not render eager joins regardless of joinedload(),
subqueryload() options or mapper-level lazy=’joined’/lazy=’subquery’ configurations.
This is used primarily when nesting the Query’s statement into a subquery or other selectable.
except_(*q)
Produce an EXCEPT of this Query against one or more queries.
Works the same way as union(). See that method for usage examples.
except_all(*q)
Produce an EXCEPT ALL of this Query against one or more queries.
Works the same way as union(). See that method for usage examples.
execution_options(**kwargs)
Set non-SQL options which take effect during execution.
The options are the same as those accepted by sqlalchemy.sql.expression.Executable.execution_options().
Note that the stream_results execution option is enabled automatically if the yield_per()
method is used.
filter(criterion)
apply the given filtering criterion to the query and return the newly resulting Query
the criterion is any sql.ClauseElement applicable to the WHERE clause of a select.
filter_by(**kwargs)
apply the given filtering criterion to the query and return the newly resulting Query.
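For example (the User mapping and its name attribute are assumed):

    # filter() accepts a SQL expression construct
    session.query(User).filter(User.name == 'ed')

    # filter_by() accepts keyword arguments matching mapped attribute names
    session.query(User).filter_by(name='ed')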
first()
Return the first result of this Query or None if the result doesn’t contain any row.
first() applies a limit of one within the generated SQL, so that only one primary entity row is generated on
the server side (note this may consist of multiple result rows if join-loaded collections are present).
Calling first() results in an execution of the underlying query.
from_self(*entities)
return a Query that selects from this Query’s SELECT statement.
*entities - optional list of entities which will replace those being selected.
from_statement(statement)
Execute the given SELECT statement and return results.
This method bypasses all internal statement compilation, and the statement is executed without modifica-
tion.
The statement argument is either a string, a select() construct, or a text() construct, and should
return the set of columns appropriate to the entity class represented by this Query.
Also see the instances() method.
get(ident)
Return an instance of the object based on the given identifier, or None if not found.
The ident argument is a scalar or tuple of primary key column values in the order of the table def’s primary
key columns.
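For example, assuming User is mapped to a table with a single integer primary key, and SomeComposite to a table with a two-column primary key:

    # single primary key value
    user = session.query(User).get(5)

    # composite primary key, passed as a tuple in table-definition order
    obj = session.query(SomeComposite).get((5, 10))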
group_by(*criterion)
apply one or more GROUP BY criterion to the query and return the newly resulting Query
having(criterion)
apply a HAVING criterion to the query and return the newly resulting Query.
instances(cursor, _Query__context=None)
Given a ResultProxy cursor as returned by connection.execute(), return an ORM result as an iterator.
e.g.:
    result = engine.execute("select * from users")
    for u in session.query(User).instances(result):
        print u
intersect(*q)
Produce an INTERSECT of this Query against one or more queries.
Works the same way as union(). See that method for usage examples.
intersect_all(*q)
Produce an INTERSECT ALL of this Query against one or more queries.
Works the same way as union(). See that method for usage examples.
join(*props, **kwargs)
Create a join against this Query object’s criterion and apply generatively, returning the newly resulting
Query.
Each element in *props may be:
•a string property name, i.e. “rooms”. This will join along the relationship of the same name from this
Query’s “primary” mapper, if one is present.
•a class-mapped attribute, i.e. Houses.rooms. This will create a join from “Houses” table to that of the
“rooms” relationship.
•a 2-tuple containing a target class or selectable, and an “ON” clause. The ON clause can be the
property name/ attribute like above, or a SQL expression.
e.g.:
    # join along string attribute names
    session.query(Company).join('employees')
    session.query(Company).join('employees', 'tasks')

    session.query(Person).join((Palias, Person.friends))
one()
Return exactly one result or raise an exception.
Calling one() results in an execution of the underlying query. As of 0.6, one() fully fetches all results
instead of applying any kind of limit, so that the “unique”-ing of entities does not conceal multiple object
identities.
options(*args)
Return a new Query object, applying the given list of MapperOptions.
order_by(*criterion)
apply one or more ORDER BY criterion to the query and return the newly resulting Query
outerjoin(*props, **kwargs)
Create a left outer join against this Query object’s criterion and apply generatively, returning the newly
resulting Query.
Usage is the same as the join() method.
params(*args, **kwargs)
add values for bind parameters which may have been specified in filter().
parameters may be specified using **kwargs, or optionally a single dictionary as the first positional argu-
ment. The reason for both is that **kwargs is convenient, however some parameter dictionaries contain
unicode keys in which case **kwargs cannot be used.
populate_existing()
Return a Query that will refresh all instances loaded.
This includes all entities accessed from the database, including secondary entities, eagerly-loaded collec-
tion items.
All changes present on entities which are already present in the session will be reset and the entities will
all be marked “clean”.
An alternative to populate_existing() is to expire the Session fully using session.expire_all().
reset_joinpoint()
Return a new Query, where the ‘joinpoint’ of this Query has been reset back to the starting mapper. Subsequent generative calls will be constructed from the new joinpoint.
Note that each call to join() or outerjoin() also starts from the root.
scalar()
Return the first element of the first result or None if no rows present. If multiple rows are returned, raises
MultipleResultsFound.
>>> session.query(Item).scalar()
<Item>
>>> session.query(Item.id).scalar()
1
>>> session.query(Item.id).filter(Item.id < 0).scalar()
None
>>> session.query(Item.id, Item.name).scalar()
1
>>> session.query(func.count(Parent.id)).scalar()
20
This results in an execution of the underlying query.
select_from(*from_obj)
Set the from_obj parameter of the query and return the newly resulting Query. This replaces the table
which this Query selects from with the given table.
select_from() also accepts class arguments. Though usually not necessary, can ensure that the full
selectable of the given mapper is applied, e.g. for joined-table mappers.
slice(start, stop)
apply LIMIT/OFFSET to the Query based on a range and return the newly resulting Query.
statement
The full SELECT statement represented by this Query.
The statement by default will not have disambiguating labels applied to the construct unless
with_labels(True) is called first.
subquery()
return the full SELECT statement represented by this Query, embedded within an Alias.
Eager JOIN generation within the query is disabled.
The statement by default will not have disambiguating labels applied to the construct unless
with_labels(True) is called first.
union(*q)
Produce a UNION of this Query against one or more queries.
e.g.:
q1 = sess.query(SomeClass).filter(SomeClass.foo==’bar’)
q2 = sess.query(SomeClass).filter(SomeClass.bar==’foo’)
q3 = q1.union(q2)
The method accepts multiple Query objects so as to control the level of nesting. A series of union()
calls such as:
x.union(y).union(z).all()
will nest on each union(), and produces:
SELECT * FROM (SELECT * FROM (SELECT * FROM X UNION
SELECT * FROM y) UNION SELECT * FROM Z)
Whereas:
x.union(y, z).all()
produces:
SELECT * FROM (SELECT * FROM X UNION SELECT * FROM y UNION
SELECT * FROM Z)
union_all(*q)
Produce a UNION ALL of this Query against one or more queries.
Works the same way as union(). See that method for usage examples.
update(values, synchronize_session=’evaluate’)
Perform a bulk update query.
Updates rows matched by this query in the database.
Parameters
• values – a dictionary with attributes names as keys and literal values or sql expressions as
values.
• synchronize_session – chooses the strategy to update the attributes on objects in the ses-
sion. Valid values are:
False - don’t synchronize the session. This option is the most efficient and is reliable
once the session is expired, which typically occurs after a commit(), or explicitly using
expire_all(). Before the expiration, updated objects may still remain in the session with
stale values on their attributes, which can lead to confusing results.
‘fetch’ - performs a select query before the update to find objects that are matched by the
update query. The updated attributes are expired on matched objects.
‘evaluate’ - Evaluate the Query’s criteria in Python straight on the objects in the session.
If evaluation of the criteria isn’t implemented, an exception is raised.
The expression evaluator currently doesn’t account for differing string collations between
the database and Python.
• discriminator – a column to be used as the “discriminator” column for the given selectable.
If not given, the polymorphic_on attribute of the mapper will be used, if any. This is useful
for mappers that don’t have polymorphic loading behavior by default, such as concrete
table mappers.
yield_per(count)
Yield only count rows at a time.
WARNING: use this method with caution; if the same instance is present in more than one batch of rows,
end-user changes to attributes will be overwritten.
In particular, it’s usually impossible to use this setting with eagerly loaded collections (i.e. any
lazy=’joined’ or ‘subquery’) since those collections will be cleared for a new load when encountered
in a subsequent result batch. In the case of ‘subquery’ loading, the full result for all rows is fetched which
generally defeats the purpose of yield_per().
Also note that many DBAPIs do not “stream” results, pre-buffering all rows before making them available,
including mysql-python and psycopg2. yield_per() will also set the stream_results execution
option to True, which currently is only understood by psycopg2 and causes server side cursors to be used.
aliased
alias of AliasedClass
class AliasedClass(cls, alias=None, name=None)
Represents an “aliased” form of a mapped class for usage with Query.
The ORM equivalent of a sqlalchemy.sql.expression.alias() construct, this object mimics the
mapped class using a __getattr__ scheme and maintains a reference to a real Alias object.
Usage is via the aliased() synonym:
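# a sketch; the User class and session here are illustrative:
# find all pairs of users with the same name
user_alias = aliased(User)
session.query(User, user_alias).\
    join((user_alias, User.id > user_alias.id)).\
    filter(User.name == user_alias.name)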
Query Options
contains_alias(alias)
Return a MapperOption that will indicate to the query that the main table has been aliased.
alias is the string name or Alias object representing the alias.
contains_eager(*keys, **kwargs)
Return a MapperOption that will indicate to the query that the given attribute should be eagerly loaded from
columns currently in the query.
Used with options().
The option is used in conjunction with an explicit join that loads the desired rows, i.e.:
sess.query(Order).\
join(Order.user).\
options(contains_eager(Order.user))
The above query would join from the Order entity to its related User entity, and the returned Order objects
would have the Order.user attribute pre-populated.
contains_eager() also accepts an alias argument, which is the string name of an alias, an alias()
construct, or an aliased() construct. Use this when the eagerly-loaded rows are to come from an aliased
table:
user_alias = aliased(User)
sess.query(Order).\
join((user_alias, Order.user)).\
options(contains_eager(Order.user, alias=user_alias))
joinedload(*keys, **kw)
Return a MapperOption that will convert the property of the given name into a joined eager load.
Used with options().
joinedload() also accepts a keyword argument innerjoin=True which indicates using an inner join instead
of an outer:
query(Order).options(joinedload(Order.user, innerjoin=True))
Note that the join created by joinedload() is aliased such that no other aspects of the query will affect what
it loads. To use joined eager loading with a join that is constructed manually using join() or outerjoin(), see
contains_eager().
See also: subqueryload(), lazyload()
joinedload_all(*keys, **kw)
Return a MapperOption that will convert all properties along the given dot-separated path into a joined
eager load.
Note: This function is known as eagerload_all() in all versions of SQLAlchemy prior to version
0.6beta3, including the 0.5 and 0.4 series. eagerload_all() will remain available for the foreseeable
future in order to enable cross-compatibility.
Used with options().
For example:
query.options(joinedload_all(’orders.items.keywords’))...
will set all of ‘orders’, ‘orders.items’, and ‘orders.items.keywords’ to load in one joined eager load.
Individual descriptors are accepted as arguments as well:
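# a sketch; the mapped classes and attributes are illustrative:
query.options(joinedload_all(User.orders, Order.items, Item.keywords))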
The keyword arguments accept a flag innerjoin=True|False which will override the value of the innerjoin flag
specified on the relationship().
See also: subqueryload_all(), lazyload()
lazyload(*keys)
Return a MapperOption that will convert the property of the given name into a lazy load.
Used with options().
See also: eagerload(), subqueryload()
subqueryload(*keys)
Return a MapperOption that will convert the property of the given name into a subquery eager load.
Note: This function is new as of SQLAlchemy version 0.6beta3.
Used with options().
examples:
query.options(subqueryload_all(’orders.items.keywords’))...
will set all of ‘orders’, ‘orders.items’, and ‘orders.items.keywords’ to load in one subquery eager load.
Individual descriptors are accepted as arguments as well:
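# a sketch; the mapped classes and attributes are illustrative:
query.options(subqueryload_all(User.orders, Order.items, Item.keywords))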
9.2.4 Sessions
create_session(bind=None, **kwargs)
Create a new Session.
Parameters
• bind – optional, a single Connectable to use for all database access in the created Session.
• **kwargs – optional, passed through to the Session constructor.
Returns a Session instance
The defaults of create_session() are the opposite of that of sessionmaker(); autoflush and
expire_on_commit are False, autocommit is True. In this sense the session acts more like the “clas-
sic” SQLAlchemy 0.3 session with these.
Usage:
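# a minimal sketch of create_session() usage:
from sqlalchemy.orm import create_session
session = create_session()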
scoped_session(session_factory, scopefunc=None)
Provides thread-local management of Sessions.
This is a front-end function to ScopedSession.
Parameters
• session_factory – a callable function that produces Session instances, such as
sessionmaker() or create_session().
• scopefunc – optional, TODO
Returns a ScopedSession instance
Usage:
Session = scoped_session(sessionmaker(autoflush=True))
To instantiate a Session object which is part of the scoped context, instantiate normally:
session = Session()
Most session methods are available as classmethods from the scoped session:
Session.commit()
Session.close()
# global scope
Session = sessionmaker(autoflush=False)
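# later, in a local scope, create and use a session
# (a sketch of typical usage):
sess = Session()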
Any keyword arguments sent to the constructor itself will override the “configured” keywords:
Session = sessionmaker()
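# a sketch: bind this particular Session to a specific connection,
# overriding whatever was configured on the factory;
# 'connection' stands for some pre-existing Connection
sess = Session(bind=connection)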
The class also includes a special classmethod configure(), which allows additional configurational options
to take place after the custom Session class has been generated. This is useful particularly for defining the
specific Engine (or engines) to which new instances of Session should be bound:
Session = sessionmaker()
Session.configure(bind=create_engine(’sqlite:///foo.db’))
sess = Session()
Options:
Parameters
• autocommit – Defaults to False. When True, the Session does not keep a persistent
transaction running, and will acquire connections from the engine on an as-needed basis,
returning them immediately after their use. Flushes will begin and commit (or possibly
rollback) their own transaction if no transaction is present. When using this mode, the
session.begin() method may be used to begin a transaction explicitly.
Leaving it on its default value of False means that the Session will acquire a connection
and begin a transaction the first time it is used, which it will maintain persistently until
rollback(), commit(), or close() is called. When the transaction is released by
any of these methods, the Session is ready for the next usage, which will again acquire
and maintain a new connection/transaction.
• autoflush – When True, all query operations will issue a flush() call to this Session
before proceeding. This is a convenience feature so that flush() need not be called repeat-
edly in order for database queries to retrieve results. It’s typical that autoflush is used in
conjunction with autocommit=False. In this scenario, explicit calls to flush() are
rarely needed; you usually only need to call commit() (which flushes) to finalize changes.
• bind – An optional Engine or Connection to which this Session should be bound.
When specified, all SQL operations performed by this session will execute via this con-
nectable.
• binds –
An optional dictionary which contains more granular “bind” information than the
bind parameter provides. This dictionary can map individual Table instances as
well as Mapper instances to individual Engine or Connection objects. Operations
which proceed relative to a particular Mapper will consult this dictionary for the direct
Mapper instance as well as the mapper’s mapped_table attribute in order to locate
a connectable to use. The full resolution is described in the get_bind() method of
Session. Usage looks like:
sess = Session(binds={
SomeMappedClass: create_engine(’postgresql://engine1’),
somemapper: create_engine(’postgresql://engine2’),
some_table: create_engine(’postgresql://engine3’),
})
Also see the bind_mapper() and bind_table() methods.
• class_ – Specify an alternate class other than sqlalchemy.orm.session.Session
which should be used by the returned class. This is the only argument that is local to the
sessionmaker() function, and is not sent directly to the constructor for Session.
• _enable_transaction_accounting – Defaults to True. A legacy-only flag which when
False disables all 0.5-style object accounting on transaction boundaries, including auto-
expiry of instances on rollback and commit, maintenance of the “new” and “deleted” lists
upon rollback, and autoflush of pending changes upon begin(), all of which are interdepen-
dent.
• expire_on_commit – Defaults to True. When True, all instances will be fully expired after
each commit(), so that all attribute/object access subsequent to a completed transaction
will load from the most recent database state.
• extension – An optional SessionExtension instance, or a list of such instances, which
will receive pre- and post- commit and flush events, as well as a post-rollback event.
User- defined code may be placed within these hooks using a user-defined subclass of
SessionExtension.
• query_cls – Class which should be used to create new Query objects, as returned by the
query() method. Defaults to Query.
•Transient - an instance that’s not in a session, and is not saved to the database; i.e. it has no database
identity. The only relationship such an object has to the ORM is that its class has a mapper() associated
with it.
•Pending - when you add() a transient instance, it becomes pending. It still wasn’t actually flushed to the
database yet, but it will be when the next flush occurs.
•Persistent - An instance which is present in the session and has a record in the database. You get persistent
instances by either flushing so that the pending instances become persistent, or by querying the database
for existing instances (or moving persistent instances from other sessions into your local session).
•Detached - an instance which has a record in the database, but is not in any session. There's nothing wrong
with this, and you can use objects normally when they're detached, except they will not be able to issue
any SQL in order to load collections or attributes which are not yet loaded, or which were marked as "expired".
The session methods which control instance state include add(), delete(), merge(), and expunge().
The Session object is generally not threadsafe. A session which is set to autocommit and is only read from
may be used by concurrent threads if it’s acceptable that some object instances may be loaded twice.
The typical pattern to managing Sessions in a multi-threaded environment is either to use mutexes to limit
concurrent access to one thread at a time, or more commonly to establish a unique session for every thread,
using a threadlocal variable. SQLAlchemy provides a thread-managed Session adapter, provided by the
scoped_session() function.
__init__(bind=None, autoflush=True, expire_on_commit=True, _enable_transaction_accounting=True,
autocommit=False, twophase=False, weak_identity_map=True, binds=None, extension=None,
query_cls=<class ’sqlalchemy.orm.query.Query’>)
Construct a new Session.
Arguments to Session are described using the sessionmaker() function.
add(instance)
Place an object in the Session.
Its state will be persisted to the database on the next flush operation.
Repeated calls to add() will be ignored. The opposite of add() is expunge().
add_all(instances)
Add the given collection of instances to this Session.
begin(subtransactions=False, nested=False)
Begin a transaction on this Session.
If this Session is already within a transaction, either a plain transaction or nested transaction, an error is
raised, unless subtransactions=True or nested=True is specified.
The subtransactions=True flag indicates that this begin() can create a subtransaction if a trans-
action is already in progress. A subtransaction is a non-transactional, delimiting construct that allows
matching begin()/commit() pairs to be nested together, with only the outermost begin/commit pair actually
affecting transactional state. When a rollback is issued, the subtransaction will directly roll back the inner-
most real transaction, however each subtransaction still must be explicitly rolled back to maintain proper
stacking of subtransactions.
If no transaction is in progress, then a real transaction is begun.
The nested flag begins a SAVEPOINT transaction and is equivalent to calling begin_nested().
begin_nested()
Begin a nested transaction on this Session.
The target database(s) must support SQL SAVEPOINTs or a SQLAlchemy-supported vendor implemen-
tation of the idea.
The nested transaction is a real transaction, unlike a "subtransaction" which corresponds to multiple
begin() calls. The next rollback() or commit() call will operate upon this nested transaction.
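For example, a rough sketch (not from the original text), assuming mapped User objects u1, u2, u3 and an
open session:
session.add(u1)
session.add(u2)

session.begin_nested()   # establish a SAVEPOINT
session.add(u3)
session.rollback()       # rolls back u3, keeps u1 and u2

session.commit()         # commits u1 and u2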
bind_mapper(mapper, bind)
Bind operations for a mapper to a Connectable.
mapper A mapper instance or mapped class
bind Any Connectable: an Engine or Connection.
All subsequent operations involving this mapper will use the given bind.
bind_table(table, bind)
Bind operations on a Table to a Connectable.
table A Table instance
bind Any Connectable: an Engine or Connection.
All subsequent operations involving this Table will use the given bind.
close()
Close this Session.
This clears all items and ends any transaction in progress.
If this session were created with autocommit=False, a new transaction is immediately begun. Note
that this new transaction does not use any connection resources until they are first needed.
class close_all()
Close all sessions in memory.
commit()
Flush pending changes and commit the current transaction.
If no transaction is in progress, this method raises an InvalidRequestError.
If a subtransaction is in effect (which occurs when begin() is called multiple times), the subtransaction will
be closed, and the next call to commit() will operate on the enclosing transaction.
For a session configured with autocommit=False, a new transaction will be begun immediately after the
commit, but note that the newly begun transaction does not use any connection resources until the first
SQL is actually emitted.
connection(mapper=None, clause=None)
Return the active Connection.
Retrieves the Connection managing the current transaction. Any operations executed on the Connection
will take place in the same transactional context as Session operations.
For autocommit Sessions with no active manual transaction, connection() is a passthrough to
contextual_connect() on the underlying engine.
Ambiguity in multi-bind or unbound Sessions can be resolved through any of the optional keyword argu-
ments. See get_bind() for more information.
mapper Optional, a mapper or mapped class
clause Optional, any ClauseElement
delete(instance)
Mark an instance as deleted.
The database delete operation occurs upon flush().
deleted
The set of all instances marked as ‘deleted’ within this Session
dirty
The set of all persistent instances considered dirty.
Instances are considered dirty when they were modified but not deleted.
Note that this ‘dirty’ calculation is ‘optimistic’; most attribute-setting or collection modification operations
will mark an instance as ‘dirty’ and place it in this set, even if there is no net change to the attribute’s value.
At flush time, the value of each attribute is compared to its previously saved value, and if there’s no net
change, no SQL operation will occur (this is a more expensive operation so it’s only done at flush time).
To check if an instance has actionable net changes to its attributes, use the is_modified() method.
execute(clause, params=None, mapper=None, **kw)
Execute a clause within the current transaction.
Returns a ResultProxy of execution results. autocommit Sessions will create a transaction on the fly.
Connection ambiguity in multi-bind or unbound Sessions will be resolved by inspecting the clause
for binds. The ‘mapper’ and ‘instance’ keyword arguments may be used if this is insufficient, See
get_bind() for more information.
clause A ClauseElement (i.e. select(), text(), etc.) or string SQL statement to be executed
params Optional, a dictionary of bind parameters.
mapper Optional, a mapper or mapped class
**kw Additional keyword arguments are sent to get_bind() which locates a connectable to use for the
execution. Subclasses of Session may override this.
expire(instance, attribute_names=None)
Expire the attributes on an instance.
Marks the attributes of an instance as out of date. When an expired attribute is next accessed, a query will
be issued to the database and the attributes will be refreshed with their current database value. expire()
is a lazy variant of refresh().
class object_session(instance)
Return the Session to which an object belongs.
prepare()
Prepare the current transaction in progress for two phase commit.
If no transaction is in progress, this method raises an InvalidRequestError.
Only root transactions of two phase sessions can be prepared. If the current transaction is not such, an
InvalidRequestError is raised.
prune()
Remove unreferenced instances cached in the identity map.
Note that this method is only meaningful if “weak_identity_map” is set to False. The default weak identity
map is self-pruning.
Removes any object in this Session’s identity map that is not referenced in user code, modified, new or
scheduled for deletion. Returns the number of objects pruned.
query(*entities, **kwargs)
Return a new Query object corresponding to this Session.
refresh(instance, attribute_names=None, lockmode=None)
Expire and refresh the attributes on the given instance.
A query will be issued to the database and all attributes will be refreshed with their current database value.
Lazy-loaded relational attributes will remain lazily loaded, so that the instance-wide refresh operation will
be followed immediately by the lazy load of that attribute.
Eagerly-loaded relational attributes will eagerly load within the single refresh operation.
Parameters
• attribute_names – optional. An iterable collection of string attribute names indicating a
subset of attributes to be refreshed.
• lockmode – Passed to the Query as used by with_lockmode().
rollback()
Rollback the current transaction in progress.
If no transaction is in progress, this method is a pass-through.
This method rolls back the current transaction or nested transaction regardless of subtransactions being in
effect. All subtransactions up to the first real transaction are closed. Subtransactions occur when begin()
is called multiple times.
scalar(clause, params=None, mapper=None, **kw)
Like execute() but return a scalar result.
class ScopedSession(session_factory, scopefunc=None)
Provides thread-local management of Sessions.
Usage:
Session = scoped_session(sessionmaker(autoflush=True))
__init__(session_factory, scopefunc=None)
configure(**kwargs)
reconfigure the sessionmaker used by this ScopedSession.
mapper(*args, **kwargs)
return a mapper() function which associates this ScopedSession with the Mapper.
Session.mapper is deprecated. Please see http://www.sqlalchemy.org/trac/wiki/UsageRecipes/SessionAwareMapper
for information on how to replicate its behavior.
DEPRECATED.
query_property(query_cls=None)
return a class property which produces a Query object against the class when called.
e.g.:
Session = scoped_session(sessionmaker())

class MyClass(object):
    query = Session.query_property()

# after mappers are defined
result = MyClass.query.filter(MyClass.name=='foo').all()
Produces instances of the session’s configured query class by default. To override and use a custom im-
plementation, provide a query_cls callable. The callable will be invoked with the class’s mapper as a
positional argument and a session keyword argument.
There is no limit to the number of query properties placed on a class.
remove()
Dispose of the current contextual session.
9.2.5 Interfaces
Semi-private module containing various base classes used throughout the ORM.
Defines the extension classes MapperExtension, SessionExtension, and AttributeExtension as well
as other user-subclassable extension objects.
class AttributeExtension()
An event handler for individual attribute change events.
AttributeExtension is assembled within the descriptors associated with a mapped class.
active_history
indicates that the set() method would like to receive the ‘old’ value, even if it means firing lazy callables.
append(state, value, initiator)
Receive a collection append event.
The returned value will be used as the actual value to be appended.
remove(state, value, initiator)
Receive a remove event.
No return value is defined.
set(state, value, oldvalue, initiator)
Receive a set event.
The returned value will be used as the actual value to be set.
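A minimal sketch of an AttributeExtension subclass (the class and its lowercasing behavior are illustrative,
not from the original text); such an instance would typically be passed via the extension keyword of
column_property() or relationship():
from sqlalchemy.orm.interfaces import AttributeExtension

class LowercaseSetter(AttributeExtension):
    """Coerce string values to lowercase as they are set."""
    def set(self, state, value, oldvalue, initiator):
        # the returned value is used as the actual value to be set
        if isinstance(value, basestring):
            return value.lower()
        return value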
class InstrumentationManager(class_)
User-defined class instrumentation extension.
The API for this class should be considered as semi-stable, and may change slightly with new releases.
__init__(class_)
dict_getter(class_)
dispose(class_, manager)
get_instance_dict(class_, instance)
initialize_instance_dict(class_, instance)
install_descriptor(class_, key, inst)
install_member(class_, key, implementation)
install_state(class_, instance, state)
instrument_attribute(class_, key, inst)
instrument_collection_class(class_, key, collection_class)
manage(class_, manager)
manager_getter(class_)
post_configure_attribute(class_, key, inst)
remove_state(class_, instance)
state_getter(class_)
uninstall_descriptor(class_, key)
uninstall_member(class_, key)
class MapperExtension()
Base implementation for customizing Mapper behavior.
New extension classes subclass MapperExtension and are specified using the extension mapper() ar-
gument, which is a single MapperExtension or a list of such. A single mapper can maintain a chain of
MapperExtension objects. When a particular mapping event occurs, the corresponding method on each
MapperExtension is invoked serially, and each method has the ability to halt the chain from proceeding
further.
Each MapperExtension method returns the symbol EXT_CONTINUE by default. This symbol generally
means “move to the next MapperExtension for processing”. For methods that return objects like translated
rows or new object instances, EXT_CONTINUE means the result of the method should be ignored. In some
cases it’s required for a default mapper activity to be performed, such as adding a new instance to a result list.
The symbol EXT_STOP has significance within a chain of MapperExtension objects that the chain will be
stopped when this symbol is returned. Like EXT_CONTINUE, it also has additional significance in some cases
that a default mapper activity will not be performed.
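As a brief illustrative sketch (the timestamping behavior is hypothetical, not part of the original text), a
MapperExtension that fills in an attribute before INSERT and lets processing continue:
import datetime
from sqlalchemy.orm.interfaces import MapperExtension, EXT_CONTINUE

class CreatedAtExtension(MapperExtension):
    def before_insert(self, mapper, connection, instance):
        # set a 'created_at' attribute if the mapped class has one unset
        if getattr(instance, 'created_at', None) is None:
            instance.created_at = datetime.datetime.now()
        return EXT_CONTINUE

# installed via the 'extension' argument to mapper():
# mapper(MyClass, my_table, extension=CreatedAtExtension())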
after_delete(mapper, connection, instance)
Receive an object instance after that instance is deleted.
The return value is only significant within the MapperExtension chain; the parent mapper’s behavior
isn’t modified by this method.
after_insert(mapper, connection, instance)
Receive an object instance after that instance is inserted.
The return value is only significant within the MapperExtension chain; the parent mapper’s behavior
isn’t modified by this method.
after_update(mapper, connection, instance)
Receive an object instance after that instance is updated.
The return value is only significant within the MapperExtension chain; the parent mapper’s behavior
isn’t modified by this method.
append_result(mapper, selectcontext, row, instance, result, **flags)
Receive an object instance before that instance is appended to a result list.
If this method returns EXT_CONTINUE, result appending will proceed normally. If this method returns
any other value or None, result appending will not proceed for this instance, giving this extension an
opportunity to do the appending itself, if desired.
mapper The mapper doing the operation.
selectcontext The QueryContext generated from the Query.
row The result row from the database.
instance The object instance to be appended to the result.
result List to which results are being appended.
**flags extra information about the row, same as criterion in create_row_processor() method of
MapperProperty
before_delete(mapper, connection, instance)
Receive an object instance before that instance is deleted.
Note that no changes to the overall flush plan can be made here; and manipulation of the Session will
not have the desired effect. To manipulate the Session within an extension, use SessionExtension.
The return value is only significant within the MapperExtension chain; the parent mapper’s behavior
isn’t modified by this method.
before_insert(mapper, connection, instance)
Receive an object instance before that instance is inserted into its table.
This is a good place to set up primary key values and such that aren’t handled otherwise.
Column-based attributes can be modified within this method which will result in the new value being
inserted. However no changes to the overall flush plan can be made, and manipulation of the Session will
not have the desired effect. To manipulate the Session within an extension, use SessionExtension.
The return value is only significant within the MapperExtension chain; the parent mapper’s behavior
isn’t modified by this method.
before_update(mapper, connection, instance)
Receive an object instance before that instance is updated.
Note that this method is called for all instances that are marked as “dirty”, even those which have no
net changes to their column-based attributes. An object is marked as dirty when any of its column-based
attributes have a “set attribute” operation called or when any of its collections are modified. If, at update
time, no column-based attributes have any net changes, no UPDATE statement will be issued. This means
that an instance being sent to before_update is not a guarantee that an UPDATE statement will be issued
(although you can affect the outcome here).
To detect if the column-based attributes on the object have net changes, and will therefore gen-
erate an UPDATE statement, use object_session(instance).is_modified(instance,
include_collections=False).
Column-based attributes can be modified within this method which will result in the new value being
updated. However no changes to the overall flush plan can be made, and manipulation of the Session will
not have the desired effect. To manipulate the Session within an extension, use SessionExtension.
The return value is only significant within the MapperExtension chain; the parent mapper’s behavior
isn’t modified by this method.
create_instance(mapper, selectcontext, row, class_)
Receive a row when a new object instance is about to be created from that row.
The method can choose to create the instance itself, or it can return EXT_CONTINUE to indicate normal
object creation should take place.
mapper The mapper doing the operation
selectcontext The QueryContext generated from the Query.
row The result row from the database
class_ The class we are mapping.
return value A new object instance, or EXT_CONTINUE
init_failed(mapper, class_, oldinit, instance, args, kwargs)
Receive an instance when its constructor has been called, and raised an exception.
This method is only called during a userland construction of an object. It is not called when an object is
loaded from the database.
The return value is only significant within the MapperExtension chain; the parent mapper’s behavior
isn’t modified by this method.
init_instance(mapper, class_, oldinit, instance, args, kwargs)
Receive an instance when its constructor is called.
This method is only called during a userland construction of an object. It is not called when an object is
loaded from the database.
The return value is only significant within the MapperExtension chain; the parent mapper’s behavior
isn’t modified by this method.
instrument_class(mapper, class_)
Receive a class when the mapper is first constructed, and has applied instrumentation to the mapped class.
The return value is only significant within the MapperExtension chain; the parent mapper’s behavior
isn’t modified by this method.
populate_instance(mapper, selectcontext, row, instance, **flags)
Receive an instance before that instance has its attributes populated.
This usually corresponds to a newly loaded instance but may also correspond to an already-loaded instance
which has unloaded attributes to be populated. The method may be called many times for a single instance,
as multiple result rows are used to populate eagerly loaded collections.
If this method returns EXT_CONTINUE, instance population will proceed normally. If any other value
or None is returned, instance population will not proceed, giving this extension an opportunity to populate
the instance itself, if desired.
As of 0.5, most usages of this hook are obsolete. For a generic “object has been newly created from a row”
hook, use reconstruct_instance(), or the @orm.reconstructor decorator.
reconstruct_instance(mapper, instance)
Receive an object instance after it has been created via __new__, and after initial attribute population has
occurred.
This typically occurs when the instance is created based on incoming result rows, and is only called once
for that instance’s lifetime.
Note that during a result-row load, this method is called upon the first row received for this instance. Note
that some attributes and collections may or may not be loaded or even initialized, depending on what’s
present in the result rows.
The return value is only significant within the MapperExtension chain; the parent mapper’s behavior
isn’t modified by this method.
translate_row(mapper, context, row)
Perform pre-processing on the given result row and return a new row instance.
This is called when the mapper first receives a row, before the object identity or the instance itself has
been derived from that row. The given row may or may not be a RowProxy object - it will always be
a dictionary-like object which contains mapped columns as keys. The returned object should also be a
dictionary-like object which recognizes mapped columns as keys.
If the ultimate return value is EXT_CONTINUE, the row is not translated.
class PropComparator(prop, mapper, adapter=None)
defines comparison operations for MapperProperty objects.
PropComparator instances should also define an accessor ‘property’ which returns the MapperProperty associ-
ated with this PropComparator.
__init__(prop, mapper, adapter=None)
adapted(adapter)
Return a copy of this PropComparator which will use the given adaption function on the local side of
generated expressions.
any(criterion=None, **kwargs)
Return true if this collection contains any member that meets the given criterion.
criterion an optional ClauseElement formulated against the member class’ table or attributes.
**kwargs key/value pairs corresponding to member class attribute names which will be compared via
equality to the corresponding values.
static any_op(a, b, **kwargs)
has(criterion=None, **kwargs)
Return true if this element references a member which meets the given criterion.
criterion an optional ClauseElement formulated against the member class’ table or attributes.
**kwargs key/value pairs corresponding to member class attribute names which will be compared via
equality to the corresponding values.
static has_op(a, b, **kwargs)
of_type(class_)
Redefine this object in terms of a polymorphic subclass.
Returns a new PropComparator from which further criterion can be evaluated.
e.g.:
query.join(Company.employees.of_type(Engineer)).\
filter(Engineer.name==’foo’)
class_ a class or mapper indicating that criterion will be against this specific subclass.
static of_type_op(a, class_)
class SessionExtension()
An extension hook object for Sessions. Subclasses may be installed into a Session (or sessionmaker) using the
extension keyword argument.
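A minimal sketch of such a subclass (the logging behavior is illustrative only):
from sqlalchemy.orm.interfaces import SessionExtension

class CommitLogger(SessionExtension):
    def before_commit(self, session):
        print "about to commit", session
    def after_commit(self, session):
        print "committed", session

# installed via the 'extension' keyword:
# Session = sessionmaker(extension=CommitLogger())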
after_attach(session, instance)
Execute after an instance is attached to a session.
This is called after an add, delete or merge.
after_begin(session, transaction, connection)
Execute after a transaction is begun on a connection
transaction is the SessionTransaction. This method is called after an engine level transaction is begun on
a connection.
after_bulk_delete(session, query, query_context, result)
Execute after a bulk delete operation to the session.
This is called after a session.query(...).delete()
query is the query object that this delete operation was called on. query_context was the query context
object. result is the result object returned from the bulk operation.
after_bulk_update(session, query, query_context, result)
Execute after a bulk update operation to the session.
This is called after a session.query(...).update()
query is the query object that this update operation was called on. query_context was the query context
object. result is the result object returned from the bulk operation.
after_commit(session)
Execute after a commit has occurred.
Note that this may not be per-flush if a longer running transaction is ongoing.
after_flush(session, flush_context)
Execute after flush has completed, but before commit has been called.
Note that the session’s state is still in pre-flush, i.e. ‘new’, ‘dirty’, and ‘deleted’ lists still show pre-flush
state as well as the history settings on instance attributes.
after_flush_postexec(session, flush_context)
Execute after flush has completed, and after the post-exec state occurs.
This will be when the 'new', 'dirty', and 'deleted' lists are in their final state. An actual commit() may or
may not have occurred, depending on whether or not the flush started its own transaction or participated in
a larger transaction.
after_rollback(session)
Execute after a rollback has occurred.
Note that this may not be per-flush if a longer running transaction is ongoing.
before_commit(session)
Execute right before commit is called.
Note that this may not be per-flush if a longer running transaction is ongoing.
before_flush(session, flush_context, instances)
Execute before flush process has started.
instances is an optional list of objects which were passed to the flush() method.
9.2.6 Utilities
identity_key(*args, **kwargs)
Get an identity key.
Valid call signatures:
•identity_key(class, ident)
class mapped class (must be a positional argument)
ident primary key, if the key is composite this is a tuple
•identity_key(instance=instance)
instance object instance (must be given as a keyword arg)
•identity_key(class, row=row)
class mapped class (must be a positional argument)
row result proxy row (must be given as a keyword arg)
9.3 sqlalchemy.dialects
Firebird
Dialects
Firebird offers two distinct dialects (not to be confused with a SQLAlchemy Dialect):
dialect 1 This is the old syntax and behaviour, inherited from Interbase pre-6.0.
dialect 3 This is the newer and supported syntax, introduced in Interbase 6.0.
The SQLAlchemy Firebird dialect detects these versions and adjusts its representation of SQL accordingly. However,
support for dialect 1 is not well tested and probably has incompatibilities.
Locking Behavior
Firebird locks tables aggressively. For this reason, a DROP TABLE may hang until other transactions are released.
SQLAlchemy does its best to release transactions as quickly as possible. The most common cause of hanging transac-
tions is a non-fully consumed result set, i.e.:
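# a sketch of the problematic pattern, inside some function
# (the table name is illustrative):
result = engine.execute("select * from some_table")
row = result.fetchone()
return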
Where above, the ResultProxy has not been fully consumed. The connection will be returned to the pool and the
transactional state rolled back once the Python garbage collector reclaims the objects which hold onto the connection,
which often occurs asynchronously. The above use case can be alleviated by calling first() on the ResultProxy
which will fetch the first row and immediately close all remaining cursor/connection resources.
RETURNING support
Firebird 2.0 supports returning a result set from inserts, and 2.1 extends that to deletes and updates. This is generically
exposed by the SQLAlchemy returning() method, such as:
# INSERT..RETURNING
result = table.insert().returning(table.c.col1, table.c.col2).\
values(name=’foo’)
print result.fetchall()
# UPDATE..RETURNING
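# a sketch mirroring the INSERT example above (column names illustrative;
# as above, execution of the statement is assumed):
result = table.update().returning(table.c.col1, table.c.col2).\
    where(table.c.col2 > 100).\
    values(col1='bar')
print result.fetchall()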
kinterbasdb
The most common way to connect to a Firebird engine is implemented by kinterbasdb, currently maintained directly
by the Firebird people.
The connection URL is of the form firebird[+kinterbasdb]://user:password@host:port/path/to/db[?key=value&key=value...].
Kinterbasdb backend specific keyword arguments are:
• type_conv - select the kind of mapping done on the types: by default SQLAlchemy uses 200 with Unicode,
datetime and decimal support (see details).
• concurrency_level - set the backend policy with regards to threading issues: by default SQLAlchemy uses policy
1 (see details).
• enable_rowcount - True by default, setting this to False disables the usage of “cursor.rowcount” with the Kin-
terbasdb dialect, which SQLAlchemy ordinarily calls upon automatically after any UPDATE or DELETE state-
ment. When disabled, SQLAlchemy’s ResultProxy will return -1 for result.rowcount. The rationale here is that
Kinterbasdb requires a second round trip to the database when .rowcount is called - since SQLA’s resultproxy au-
tomatically closes the cursor after a non-result-returning statement, rowcount must be called, if at all, before the
result object is returned. Additionally, cursor.rowcount may not return correct results with older versions of Fire-
bird, and setting this flag to False will also cause the SQLAlchemy ORM to ignore its usage. The behavior can
also be controlled on a per-execution basis using the enable_rowcount option with execution_options():
conn = engine.connect().execution_options(enable_rowcount=True)
r = conn.execute(stmt)
print r.rowcount
Microsoft SQL Server
Auto Increment Behavior
IDENTITY columns are supported by using SQLAlchemy schema.Sequence() objects. In other words:
Table('test', metadata,
    Column('id', Integer,
        Sequence('blah', 100, 10), primary_key=True),
    Column('name', String(20))
).create(some_engine)
would yield:
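-- approximately (a DDL sketch, not from the original text):
CREATE TABLE test (
  id INTEGER NOT NULL IDENTITY(100,10) PRIMARY KEY,
  name VARCHAR(20) NULL
)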
Note that the start and increment values for sequences are optional and will default to 1,1.
Implicit autoincrement behavior works the same in MSSQL as it does in other dialects and results in an
IDENTITY column.
Collation Support
MSSQL specific string types support a collation parameter that creates a column-level specific collation for the column.
The collation parameter accepts a Windows Collation Name or a SQL Collation Name. Supported types are MSChar,
MSNChar, MSString, MSNVarchar, MSText, and MSNText. For example:
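# a sketch; the column name and collation are illustrative:
Column('login', String(32, collation='Latin1_General_CI_AS'))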
will yield:
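-- approximately:
login VARCHAR(32) COLLATE Latin1_General_CI_AS NULL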
LIMIT/OFFSET Support
MSSQL has no support for the LIMIT or OFFSET keywords. LIMIT is supported directly through the TOP
Transact-SQL keyword:
select.limit
will yield:
SELECT TOP n
If using SQL Server 2005 or above, LIMIT with OFFSET support is available through the ROW_NUMBER OVER
construct. For versions below 2005, LIMIT with OFFSET usage will fail.
Nullability
MSSQL has support for three levels of column nullability. The default nullability allows nulls and is explicit in the
CREATE TABLE construct:
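-- e.g., for a nullable String(20) column named 'name':
name VARCHAR(20) NULL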
If nullable=None is specified then no specification is made. In other words the database’s configured default is
used. This will render:
name VARCHAR(20)
If nullable is True or False then the column will be NULL or NOT NULL respectively.
Date / Time Handling
DATE and TIME are supported. Bind parameters are converted to datetime.datetime() objects as required by most
MSSQL drivers, and results are processed from strings if needed. The DATE and TIME types are not available
for MSSQL 2005 and previous - if a server version below 2008 is detected, DDL for these types will be issued as
DATETIME.
Compatibility Levels
MSSQL supports the notion of setting compatibility levels at the database level. This allows, for instance, running a
database that is compatible with SQL2000 while running on a SQL2005 database server. server_version_info
will always return the database server version information (in this case SQL2005) and not the compatibility level
information. Because of this, if running under a backwards compatibility mode SQLAlchemy may attempt to use
T-SQL statements that are unable to be parsed by the database server.
Known Issues
PyODBC
http://pypi.python.org/pypi/pyodbc/
• mssql+pyodbc://mydsn - connects using the specified DSN named mydsn. The connection string that is
created will appear like:
dsn=mydsn;Trusted_Connection=Yes
• mssql+pyodbc://user:pass@mydsn - connects using the DSN named mydsn passing in the UID and
PWD information. The connection string that is created will appear like:
dsn=mydsn;UID=user;PWD=pass
• mssql+pyodbc://user:pass@mydsn/?LANGUAGE=us_english - connects using the DSN named mydsn passing in
the UID and PWD information, plus the additional connection configuration option LANGUAGE. The connection
string that is created will appear like:
dsn=mydsn;UID=user;PWD=pass;LANGUAGE=us_english
• mssql+pyodbc://user:pass@host/db - connects using a dynamically created connection string that will appear like:
DRIVER={SQL Server};Server=host;Database=db;UID=user;PWD=pass
• mssql+pyodbc://user:pass@host:123/db - connects using a dynamically created connection string that includes the
port information using the comma syntax:
DRIVER={SQL Server};Server=host,123;Database=db;UID=user;PWD=pass
• mssql+pyodbc://user:pass@host/db?port=123 - connects using a dynamically created connection string that
includes the port information as a separate port keyword:
DRIVER={SQL Server};Server=host;Database=db;UID=user;PWD=pass;port=123
If you require a connection string that is outside the options presented above, use the odbc_connect keyword to
pass in a urlencoded connection string. What gets passed in will be urldecoded and passed directly.
For example:
mssql+pyodbc:///?odbc_connect=dsn%3Dmydsn%3BDatabase%3Ddb
would create the following connection string:
dsn=mydsn;Database=db
Encoding your connection string can be easily accomplished through the python shell. For example:
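>>> import urllib
>>> urllib.quote_plus('dsn=mydsn;Database=db')
'dsn%3Dmydsn%3BDatabase%3Ddb'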
mxODBC
http://www.egenix.com/
This was tested with mxODBC 3.1.2 and the SQL Server Native Client connected to MSSQL 2005 and 2008 Express
Editions.
mssql+mxodbc://<username>:<password>@<dsnname>
Execution Modes mxODBC features two styles of statement execution, using the cursor.execute() and
cursor.executedirect() methods (the second being an extension to the DBAPI specification). The former
makes use of the native parameter binding services of the ODBC driver, while the latter uses string escaping. The
primary advantage to native parameter binding is that the same statement, when executed many times, is only prepared
once. Whereas the primary advantage to the latter is that the rules for bind parameter placement are relaxed. MS-SQL
has very strict rules for native binds, including that they cannot be placed within the argument lists of function calls,
anywhere outside the FROM, or even within subqueries within the FROM clause - making the usage of bind param-
eters within SELECT statements impossible for all but the most simplistic statements. For this reason, the mxODBC
dialect uses the “native” mode by default only for INSERT, UPDATE, and DELETE statements, and uses the escaped
string mode for all other statements. This behavior can be controlled completely via execution_options() us-
ing the native_odbc_execute flag with a value of True or False, where a value of True will unconditionally
use native bind parameters and a value of False will unconditionally use string-escaped parameters.
pymssql
http://pymssql.sourceforge.net/
mssql+pymssql://<username>:<password>@<freetds_name>
Adding “?charset=utf8” or similar will cause pymssql to return strings as Python unicode objects. This can potentially
improve performance in some scenarios as decoding of strings is handled natively.
zxjdbc Notes
Support for the Microsoft SQL Server database via the zxjdbc JDBC connector.
AdoDBAPI
MySQL
SQLAlchemy supports 6 major MySQL versions: 3.23, 4.0, 4.1, 5.0, 5.1 and 6.0, with capabilities increasing with
more modern servers.
Versions 4.1 and higher support the basic SQL functionality that SQLAlchemy uses in the ORM and SQL expressions.
These versions pass the applicable tests in the suite 100%. No heroic measures are taken to work around major missing
SQL features- if your server version does not support sub-selects, for example, they won’t work in SQLAlchemy either.
Most available DBAPI drivers are supported; see below.
Feature Minimum Version
sqlalchemy.orm 4.1.1
Table Reflection 3.23.x
DDL Generation 4.1.1
utf8/Full Unicode Connections 4.1.1
Transactions 3.23.15
Two-Phase Transactions 5.0.3
Nested Transactions 5.0.3
See the official MySQL documentation for detailed information about features supported in any given server release.
Connecting
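Connect string format, shown here as a sketch using the default MySQL-Python (mysqldb) driver; the
credentials and database name are illustrative:
create_engine('mysql+mysqldb://scott:tiger@localhost/mydatabase')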
Data Types
All of MySQL’s standard types are supported. These can also be specified within table metadata, for the purpose
of issuing CREATE TABLE statements which include MySQL-specific extensions. The types are available from the
module, as in:
from sqlalchemy.dialects import mysql

Table('mytable', metadata,
    Column('id', Integer, primary_key=True),
    Column('ittybittyblob', mysql.TINYBLOB),
    Column('biggy', mysql.BIGINT(unsigned=True)))
See the API documentation on specific column types for further details.
Connection Timeouts
MySQL features an automatic connection close behavior for connections that have been idle for eight hours or more.
To circumvent this issue, use the pool_recycle option, which controls the maximum age of any connection:
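# a sketch; the URL is illustrative:
from sqlalchemy import create_engine
engine = create_engine('mysql://scott:tiger@localhost/mydatabase', pool_recycle=3600)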
Storage Engines
Most MySQL server installations have a default table type of MyISAM, a non-transactional table type. During a
transaction, non-transactional storage engines do not participate and continue to store table changes in autocommit
mode. For fully atomic transactions, all participating tables must use a transactional engine such as InnoDB, Falcon,
SolidDB, PBXT, etc.
Storage engines can be elected when creating tables in SQLAlchemy by supplying a
mysql_engine=’whatever’ to the Table constructor. Any MySQL table creation option can be speci-
fied in this syntax:
Table(’mytable’, metadata,
Column(’data’, String(32)),
mysql_engine=’InnoDB’,
mysql_charset=’utf8’
)
Keys
Not all MySQL storage engines support foreign keys. For MyISAM and similar engines, the information loaded by
table reflection will not include foreign keys. For these tables, you may supply a ForeignKeyConstraint at
reflection time:
Table(’mytable’, metadata,
ForeignKeyConstraint([’other_id’], [’othertable.other_id’]),
autoload=True
)
When creating tables, SQLAlchemy will automatically set AUTO_INCREMENT on an integer primary key column:
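# a sketch of a single integer primary key and, approximately, the DDL it emits:
t = Table('mytable', metadata,
    Column('id', Integer, primary_key=True)
)
# CREATE TABLE mytable (
#     id INTEGER NOT NULL AUTO_INCREMENT,
#     PRIMARY KEY (id)
# )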
You can disable this behavior by supplying autoincrement=False to the Column. This flag can also be used to
enable auto-increment on a secondary column in a multi-column key for some storage engines:
Table(’mytable’, metadata,
Column(’gid’, Integer, primary_key=True, autoincrement=False),
Column(’id’, Integer, primary_key=True)
)
SQL Mode
MySQL SQL modes are supported. Modes that enable ANSI_QUOTES (such as ANSI) require an engine option to
modify SQLAlchemy’s quoting style. When using an ANSI-quoting mode, supply use_ansiquotes=True when
creating your Engine:
create_engine(’mysql://localhost/test’, use_ansiquotes=True)
This is an engine-wide option and is not toggleable on a per-connection basis. SQLAlchemy does not presume to SET
sql_mode for you with this option. For the best performance, set the quoting style server-wide in my.cnf or by
supplying --sql-mode to mysqld. You can also use a sqlalchemy.pool.Pool listener hook to issue a SET
SESSION sql_mode=’...’ on connect to configure each connection.
If you do not specify use_ansiquotes, the regular MySQL quoting style is used by default.
If you do issue a SET sql_mode through SQLAlchemy, the dialect must be updated if the quoting style is changed.
Again, this change will affect all connections:
connection.execute(’SET sql_mode="ansi"’)
connection.dialect.use_ansiquotes = True
Many of the MySQL SQL extensions are handled through SQLAlchemy’s generic function and operator support:
table.select(table.c.password==func.md5(’plaintext’))
table.select(table.c.username.op(’regexp’)(’^[a-d]’))
And of course any valid MySQL statement can be executed as a string as well.
Some limited direct support for MySQL extensions to SQL is currently available.
• SELECT pragma:
select(..., prefixes=['HIGH_PRIORITY', 'SQL_SMALL_RESULT'])
• UPDATE with LIMIT:
update(..., mysql_limit=10)
Troubleshooting
If you have problems that seem server related, first check that you are using the most recent stable MySQL-Python
package available. The Database Notes page on the wiki at http://www.sqlalchemy.org is a good resource for timely
information affecting MySQL in SQLAlchemy.
__init__(display_width=None, **kw)
Construct a TINYINT.
Note: following the usual MySQL conventions, TINYINT(1) columns reflected during Table(...,
autoload=True) are treated as Boolean columns.
Parameters
• display_width – Optional, maximum display width for this number.
• unsigned – a boolean, optional.
• zerofill – Optional. If true, values will be stored as strings left-padded with zeros. Note that
this does not affect the values returned by the underlying database API, which continue to
be numeric.
class SMALLINT(display_width=None, **kw)
Bases: sqlalchemy.dialects.mysql.base._IntegerType, sqlalchemy.types.SMALLINT
MySQL SMALLINTEGER type.
__init__(display_width=None, **kw)
Construct a SMALLINTEGER.
Parameters
• display_width – Optional, maximum display width for this number.
• unsigned – a boolean, optional.
• zerofill – Optional. If true, values will be stored as strings left-padded with zeros. Note that
this does not affect the values returned by the underlying database API, which continue to
be numeric.
class BIT(length=None)
Bases: sqlalchemy.types.TypeEngine
MySQL BIT type.
This type is for MySQL 5.0.3 or greater for MyISAM, and 5.0.5 or greater for MyISAM, MEMORY, InnoDB
and BDB. For older versions, use a MSTinyInteger() type.
__init__(length=None)
Construct a BIT.
Parameter length – Optional, number of bits.
class DATETIME(timezone=False)
Bases: sqlalchemy.types.DateTime
The SQL DATETIME type.
__init__(timezone=False)
class DATE(*args, **kwargs)
Bases: sqlalchemy.types.Date
The SQL DATE type.
__init__(*args, **kwargs)
class TIME(timezone=False)
Bases: sqlalchemy.types.Time
The SQL TIME type.
__init__(timezone=False)
class TIMESTAMP(timezone=False)
Bases: sqlalchemy.types.TIMESTAMP
MySQL TIMESTAMP type.
__init__(timezone=False)
class YEAR(display_width=None)
Bases: sqlalchemy.types.TypeEngine
MySQL YEAR type, for single byte storage of years 1901-2155.
__init__(display_width=None)
class TEXT(length=None, **kw)
Bases: sqlalchemy.dialects.mysql.base._StringType, sqlalchemy.types.TEXT
MySQL TEXT type, for text up to 2^16 characters.
__init__(length=None, **kw)
Construct a TEXT.
Parameters
• length – Optional, if provided the server may optimize storage by substituting the smallest
TEXT type sufficient to store length characters.
• charset – Optional, a column-level character set for this string value. Takes precedence to
‘ascii’ or ‘unicode’ short-hand.
• collation – Optional, a column-level collation for this string value. Takes precedence to
‘binary’ short-hand.
• ascii – Defaults to False: short-hand for the latin1 character set, generates ASCII in
schema.
• unicode – Defaults to False: short-hand for the ucs2 character set, generates UNICODE
in schema.
• national – Optional. If true, use the server’s configured national character set.
• binary – Defaults to False: short-hand, pick the binary collation type that matches the
column’s character set. Generates BINARY in schema. This does not affect the type of
data stored, only the collation of character data.
class TINYTEXT(**kwargs)
Bases: sqlalchemy.dialects.mysql.base._StringType
MySQL TINYTEXT type, for text up to 2^8 characters.
__init__(**kwargs)
Construct a TINYTEXT.
Parameters
• charset – Optional, a column-level character set for this string value. Takes precedence to
‘ascii’ or ‘unicode’ short-hand.
• collation – Optional, a column-level collation for this string value. Takes precedence to
‘binary’ short-hand.
• ascii – Defaults to False: short-hand for the latin1 character set, generates ASCII in
schema.
• unicode – Defaults to False: short-hand for the ucs2 character set, generates UNICODE
in schema.
• national – Optional. If true, use the server’s configured national character set.
• binary – Defaults to False: short-hand, pick the binary collation type that matches the
column’s character set. Generates BINARY in schema. This does not affect the type of
data stored, only the collation of character data.
class MEDIUMTEXT(**kwargs)
Bases: sqlalchemy.dialects.mysql.base._StringType
MySQL MEDIUMTEXT type, for text up to 2^24 characters.
__init__(**kwargs)
Construct a MEDIUMTEXT.
Parameters
• charset – Optional, a column-level character set for this string value. Takes precedence to
‘ascii’ or ‘unicode’ short-hand.
• collation – Optional, a column-level collation for this string value. Takes precedence to
‘binary’ short-hand.
• ascii – Defaults to False: short-hand for the latin1 character set, generates ASCII in
schema.
• unicode – Defaults to False: short-hand for the ucs2 character set, generates UNICODE
in schema.
• national – Optional. If true, use the server’s configured national character set.
• binary – Defaults to False: short-hand, pick the binary collation type that matches the
column’s character set. Generates BINARY in schema. This does not affect the type of
data stored, only the collation of character data.
class LONGTEXT(**kwargs)
Bases: sqlalchemy.dialects.mysql.base._StringType
MySQL LONGTEXT type, for text up to 2^32 characters.
__init__(**kwargs)
Construct a LONGTEXT.
Parameters
• charset – Optional, a column-level character set for this string value. Takes precedence to
‘ascii’ or ‘unicode’ short-hand.
• collation – Optional, a column-level collation for this string value. Takes precedence to
‘binary’ short-hand.
• ascii – Defaults to False: short-hand for the latin1 character set, generates ASCII in
schema.
• unicode – Defaults to False: short-hand for the ucs2 character set, generates UNICODE
in schema.
• national – Optional. If true, use the server’s configured national character set.
• binary – Defaults to False: short-hand, pick the binary collation type that matches the
column’s character set. Generates BINARY in schema. This does not affect the type of
data stored, only the collation of character data.
class VARCHAR(length=None, **kwargs)
Bases: sqlalchemy.dialects.mysql.base._StringType, sqlalchemy.types.VARCHAR
MySQL VARCHAR type, for variable-length character data.
__init__(length=None, **kwargs)
Construct a VARCHAR.
Parameters
• charset – Optional, a column-level character set for this string value. Takes precedence to
‘ascii’ or ‘unicode’ short-hand.
• collation – Optional, a column-level collation for this string value. Takes precedence to
‘binary’ short-hand.
• ascii – Defaults to False: short-hand for the latin1 character set, generates ASCII in
schema.
• unicode – Defaults to False: short-hand for the ucs2 character set, generates UNICODE
in schema.
• national – Optional. If true, use the server’s configured national character set.
• binary – Defaults to False: short-hand, pick the binary collation type that matches the
column’s character set. Generates BINARY in schema. This does not affect the type of
data stored, only the collation of character data.
__init__(length=None)
Construct a LargeBinary type.
Parameter length – optional, a length for the column for use in DDL statements, for those BLOB
types that accept a length (i.e. MySQL). It does not produce a small BINARY/VARBINARY
type - use the BINARY/VARBINARY types specifically for those. May be safely omitted if
no CREATE TABLE will be issued. Certain databases may require a length for use in DDL,
and will raise an exception when the CREATE TABLE DDL is issued.
class TINYBLOB(length=None)
Bases: sqlalchemy.types._Binary
MySQL TINYBLOB type, for binary data up to 2^8 bytes.
__init__(length=None)
class MEDIUMBLOB(length=None)
Bases: sqlalchemy.types._Binary
MySQL MEDIUMBLOB type, for binary data up to 2^24 bytes.
__init__(length=None)
class LONGBLOB(length=None)
Bases: sqlalchemy.types._Binary
MySQL LONGBLOB type, for binary data up to 2^32 bytes.
__init__(length=None)
class ENUM(*enums, **kw)
Bases: sqlalchemy.types.Enum, sqlalchemy.dialects.mysql.base._StringType
MySQL ENUM type.
__init__(*enums, **kw)
Construct an ENUM.
Example:
Column('myenum', MSEnum("foo", "bar", "baz"))
Arguments are:
Parameters
• enums – The range of valid values for this ENUM. Values will be quoted when generating
the schema according to the quoting flag (see below).
• strict – Defaults to False: ensure that a given value is in this ENUM's range of permissible
values when inserting or updating rows. Note that MySQL will not raise a fatal error if
you attempt to store an out-of-range value; an alternate value will be stored instead. (See
MySQL ENUM documentation.)
• charset – Optional, a column-level character set for this string value. Takes precedence to
‘ascii’ or ‘unicode’ short-hand.
• collation – Optional, a column-level collation for this string value. Takes precedence to
‘binary’ short-hand.
• ascii – Defaults to False: short-hand for the latin1 character set, generates ASCII in
schema.
• unicode – Defaults to False: short-hand for the ucs2 character set, generates UNICODE
in schema.
• binary – Defaults to False: short-hand, pick the binary collation type that matches the
column’s character set. Generates BINARY in schema. This does not affect the type of
data stored, only the collation of character data.
• quoting – Defaults to ‘auto’: automatically determine enum value quoting. If all enum
values are surrounded by the same quoting character, then use ‘quoted’ mode. Otherwise,
use ‘unquoted’ mode.
‘quoted’: values in enums are already quoted, they will be used directly when generating
the schema - this usage is deprecated.
‘unquoted’: values in enums are not quoted, they will be escaped and surrounded by single
quotes when generating the schema.
Previous versions of this type always required manually quoted values to be supplied;
future versions will always quote the string literals for you. This is a transitional option.
class SET(*values, **kw)
Bases: sqlalchemy.dialects.mysql.base._StringType
MySQL SET type.
__init__(*values, **kw)
Construct a SET.
Example:
Column('myset', MSSet("'foo'", "'bar'", "'baz'"))
Arguments are:
Parameters
• values – The range of valid values for this SET. Values will be used exactly as they appear
when generating schemas. Strings must be quoted, as in the example above. Single-
quotes are suggested for ANSI compatibility and are required for portability to servers
with ANSI_QUOTES enabled.
• charset – Optional, a column-level character set for this string value. Takes precedence to
‘ascii’ or ‘unicode’ short-hand.
• collation – Optional, a column-level collation for this string value. Takes precedence to
‘binary’ short-hand.
• ascii – Defaults to False: short-hand for the latin1 character set, generates ASCII in
schema.
• unicode – Defaults to False: short-hand for the ucs2 character set, generates UNICODE
in schema.
• binary – Defaults to False: short-hand, pick the binary collation type that matches the
column’s character set. Generates BINARY in schema. This does not affect the type of
data stored, only the collation of character data.
class BOOLEAN(create_constraint=True, name=None)
Bases: sqlalchemy.types.Boolean
The SQL BOOLEAN type.
__init__(create_constraint=True, name=None)
Construct a Boolean.
Parameters
• create_constraint – defaults to True. If the boolean is generated as an int/smallint, also
create a CHECK constraint on the table that ensures 1 or 0 as a value.
• name – if a CHECK constraint is generated, specify the name of the constraint.
MySQL-Python Notes
http://sourceforge.net/projects/mysql-python
mysql+mysqldb://<user>:<password>@<host>[:<port>]/<dbname>
Character Sets Many MySQL server installations default to a latin1 encoding for client connections. All data
sent through the connection will be converted into latin1, even if you have utf8 or another character set on your
tables and columns. With versions 4.1 and higher, you can change the connection character set either through server
configuration or by including the charset parameter in the URL used for create_engine. The charset option
is passed through to MySQL-Python and has the side-effect of also enabling use_unicode in the driver by default.
For regular encoded strings, also pass use_unicode=0 in the connection arguments:
# set client encoding to utf8; all strings come back as utf8 str
create_engine('mysql+mysqldb:///mydb?charset=utf8&use_unicode=0')
Known Issues MySQL-python at least as of version 1.2.2 has a serious memory leak related to unicode conversion,
a feature which is disabled via use_unicode=0. The recommended connection form with SQLAlchemy is:
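For instance, a sketch combining the charset and use_unicode flags shown above (credentials, host and database name are placeholders):
create_engine('mysql+mysqldb://scott:tiger@localhost/mydb?charset=utf8&use_unicode=0')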
OurSQL Notes
http://packages.python.org/oursql/
mysql+oursql://<user>:<password>@<host>[:<port>]/<dbname>
Character Sets oursql defaults to using utf8 as the connection charset, but other encodings may be used instead.
Like the MySQL-Python driver, unicode support can be completely disabled:
# oursql sets the connection charset to utf8 automatically; all strings come
# back as utf8 str
create_engine('mysql+oursql:///mydb?use_unicode=0')
To not automatically use utf8 and instead use whatever the connection defaults to, there is a separate parameter:
# use the default connection charset; all strings come back as unicode
create_engine('mysql+oursql:///mydb?default_charset=1')
# use latin1 as the connection charset; all strings come back as unicode
create_engine('mysql+oursql:///mydb?charset=latin1')
MySQL-Connector Notes
Support for the MySQL database via the MySQL Connector/Python adapter.
MySQL Connector/Python is available at:
https://launchpad.net/myconnpy
mysql+mysqlconnector://<user>:<password>@<host>[:<port>]/<dbname>
pyodbc Notes
http://pypi.python.org/pypi/pyodbc/
mysql+pyodbc://<username>:<password>@<dsnname>
Limitations The mysql-pyodbc dialect is subject to unresolved character encoding issues which exist within the cur-
rent ODBC drivers available. (see http://code.google.com/p/pyodbc/issues/detail?id=25). Consider usage of OurSQL,
MySQLdb, or MySQL-connector/Python.
zxjdbc Notes
Support for the MySQL database via Jython’s zxjdbc JDBC connector.
mysql+zxjdbc://<user>:<password>@<hostname>[:<port>]/<database>
Character Sets SQLAlchemy zxjdbc dialects pass unicode straight through to the zxjdbc/JDBC layer. To al-
low multiple character sets to be sent from the MySQL Connector/J JDBC driver, by default SQLAlchemy sets its
characterEncoding connection property to UTF-8. It may be overridden via a create_engine URL parameter.
Oracle
Connect Arguments
The dialect supports several create_engine() arguments which affect the behavior of the dialect regardless of
driver in use.
• use_ansi - Use ANSI JOIN constructs (see the section on Oracle 8). Defaults to True. If False, Oracle-8
compatible constructs are used for joins.
• optimize_limits - defaults to False. see the section on LIMIT/OFFSET.
SQLAlchemy Table objects which include integer primary keys are usually assumed to have “autoincrementing”
behavior, meaning they can generate their own primary key values upon INSERT. Since Oracle has no “autoincrement”
feature, SQLAlchemy relies upon sequences to produce these values. With the Oracle dialect, a sequence must always
be explicitly specified to enable autoincrement. This diverges from the majority of documentation examples, which
assume the usage of an autoincrement-capable database. To specify sequences, use the sqlalchemy.schema.Sequence
object which is passed to a Column construct:
t = Table('mytable', metadata,
    Column('id', Integer, Sequence('id_seq'), primary_key=True),
    Column(...), ...
)
This step is also required when using table reflection, i.e. autoload=True:
t = Table('mytable', metadata,
    Column('id', Integer, Sequence('id_seq'), primary_key=True),
    autoload=True
)
Identifier Casing
In Oracle, the data dictionary represents all case insensitive identifier names using UPPERCASE text. SQLAlchemy
on the other hand considers an all-lower case identifier name to be case insensitive. The Oracle dialect converts all
case insensitive identifiers to and from those two formats during schema level communication, such as reflection of
tables and indexes. Using an UPPERCASE name on the SQLAlchemy side indicates a case sensitive identifier, and
SQLAlchemy will quote the name - this will cause mismatches against data dictionary data received from Oracle, so
unless identifier names have been truly created as case sensitive (i.e. using quoted names), all lowercase names should
be used on the SQLAlchemy side.
Unicode
SQLAlchemy 0.6 uses the “native unicode” mode provided as of cx_oracle 5. cx_oracle 5.0.2 or greater is recom-
mended for support of NCLOB. If not using cx_oracle 5, the NLS_LANG environment variable needs to be set in
order for the oracle client library to use proper encoding, such as “AMERICAN_AMERICA.UTF8”.
Also note that Oracle supports unicode data through the NVARCHAR and NCLOB data types. When using the
SQLAlchemy Unicode and UnicodeText types, these DDL types will be used within CREATE TABLE statements.
Usage of VARCHAR2 and CLOB with unicode text still requires NLS_LANG to be set.
LIMIT/OFFSET Support
Oracle has no support for the LIMIT or OFFSET keywords. Whereas previous versions of SQLAlchemy
used the “ROW NUMBER OVER...” construct to simulate LIMIT/OFFSET, SQLAlchemy 0.5 now uses
a wrapped subquery approach in conjunction with ROWNUM. The exact methodology is taken from
http://www.oracle.com/technology/oramag/oracle/06-sep/o56asktom.html . Note that the "FIRST_ROWS()" optimization
keyword mentioned is not used by default, as the user community felt this was stepping into the bounds of optimization
that is better left on the DBA side, but this prefix can be added by enabling the optimize_limits=True flag on
create_engine().
ON UPDATE CASCADE
Oracle doesn’t have native ON UPDATE CASCADE functionality. A trigger based solution is available at
http://asktom.oracle.com/tkyte/update_cascade/index.html .
When using the SQLAlchemy ORM, the ORM has limited ability to manually issue cascading updates - spec-
ify ForeignKey objects using the “deferrable=True, initially=’deferred”’ keyword arguments, and specify “pas-
sive_updates=False” on each relationship().
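A minimal sketch of that configuration (User and Address are assumed mapped classes; the table and column names are illustrative):
users = Table('users', metadata,
    Column('username', String(50), primary_key=True))

addresses = Table('addresses', metadata,
    Column('id', Integer, primary_key=True),
    Column('username', String(50),
           ForeignKey('users.username',
                      deferrable=True, initially='deferred')))

mapper(Address, addresses)
mapper(User, users, properties={
    # the ORM issues the cascading UPDATE of dependent rows itself
    'addresses': relationship(Address, passive_updates=False)
})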
Oracle 8 Compatibility
When Oracle 8 is detected, the dialect internally configures itself to the following behaviors:
• the use_ansi flag is set to False. This has the effect of converting all JOIN phrases into the WHERE clause, and
in the case of LEFT OUTER JOIN makes use of Oracle’s (+) operator.
• the NVARCHAR2 and NCLOB datatypes are no longer generated as DDL when the Unicode type is used - VAR-
CHAR2 and CLOB are issued instead. This is because these types don't seem to work correctly on Oracle 8 even
though they are available. The NVARCHAR and NCLOB types will always generate NVARCHAR2 and NCLOB.
• the “native unicode” mode is disabled when using cx_oracle, i.e. SQLAlchemy encodes all Python unicode
objects to “string” before passing in as bind parameters.
Synonym/DBLINK Reflection
When using reflection with Table objects, the dialect can optionally search for tables indicated by synonyms that
reference DBLINK-ed tables by passing the flag oracle_resolve_synonyms=True as a keyword argument to the Table
construct. If DBLINK is not in use this flag should be left off.
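For instance, a sketch of a reflected table using this flag (remote_table and engine are placeholders):
t = Table('remote_table', metadata,
          autoload=True, autoload_with=engine,
          oracle_resolve_synonyms=True)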
In addition to those types at Column and Data Types, datatypes specific to the Oracle dialect include those listed here.
class BFILE(length=None)
Bases: sqlalchemy.types.LargeBinary
__init__(length=None)
Construct a LargeBinary type.
Parameter length – optional, a length for the column for use in DDL statements, for those BLOB
types that accept a length (i.e. MySQL). It does not produce a small BINARY/VARBINARY
type - use the BINARY/VARBINARY types specifically for those. May be safely omitted if
no CREATE TABLE will be issued. Certain databases may require a length for use in DDL,
and will raise an exception when the CREATE TABLE DDL is issued.
class DOUBLE_PRECISION(precision=None, scale=None, asdecimal=None)
Bases: sqlalchemy.types.Numeric
__init__(precision=None, scale=None, asdecimal=None)
class INTERVAL(day_precision=None, second_precision=None)
Bases: sqlalchemy.types.TypeEngine
__init__(day_precision=None, second_precision=None)
Construct an INTERVAL.
Note that only DAY TO SECOND intervals are currently supported. This is due to a lack of support for
YEAR TO MONTH intervals within available DBAPIs (cx_oracle and zxjdbc).
Parameters
• day_precision – the day precision value. this is the number of digits to store for the day
field. Defaults to “2”
• second_precision – the second precision value. this is the number of digits to store for the
fractional seconds field. Defaults to “6”.
class NCLOB(length=None, convert_unicode=False, assert_unicode=None, unicode_error=None,
_warn_on_bytestring=False)
Bases: sqlalchemy.types.Text
__init__(length=None, convert_unicode=False, assert_unicode=None, unicode_error=None,
_warn_on_bytestring=False)
Create a string-holding type.
Parameters
• length – optional, a length for the column for use in DDL statements. May be safely
omitted if no CREATE TABLE will be issued. Certain databases may require a length
for use in DDL, and will raise an exception when the CREATE TABLE DDL is issued.
Whether the value is interpreted as bytes or characters is database specific.
• convert_unicode – defaults to False. If True, the type will do what is necessary in order to
accept Python Unicode objects as bind parameters, and to return Python Unicode objects
in result rows. This may require SQLAlchemy to explicitly coerce incoming Python uni-
codes into an encoding, and from an encoding back to Unicode, or it may not require any
interaction from SQLAlchemy at all, depending on the DBAPI in use.
When SQLAlchemy performs the encoding/decoding, the encoding used is configured via
encoding, which defaults to utf-8.
The “convert_unicode” behavior can also be turned on for all String types by setting
sqlalchemy.engine.base.Dialect.convert_unicode on create_engine().
To instruct SQLAlchemy to perform Unicode encoding/decoding even on a platform that
already handles Unicode natively, set convert_unicode=’force’. This will incur significant
performance overhead when fetching unicode result columns.
• assert_unicode – Deprecated. A warning is raised in all cases when a non-Unicode object
is passed when SQLAlchemy would coerce into an encoding (note: but not when the
DBAPI handles unicode objects natively). To suppress or raise this warning to an error, use
the Python warnings filter documented at: http://docs.python.org/library/warnings.html
• unicode_error – Optional, a method to use to handle Unicode conversion errors. Behaves
like the ‘errors’ keyword argument to the standard library’s string.decode() functions. This
flag requires that convert_unicode is set to “force” - otherwise, SQLAlchemy is not guar-
anteed to handle the task of unicode conversion. Note that this flag adds significant per-
formance overhead to row-fetching operations for backends that already return unicode
objects natively (which most DBAPIs do). This flag should only be used as an absolute
last resort for reading strings from a column with varied or corrupted encodings, which
only applies to databases that accept invalid encodings in the first place (i.e. MySQL, not PG, SQLite, etc.)
class NUMBER(precision=None, scale=None, asdecimal=None)
Bases: sqlalchemy.types.Numeric, sqlalchemy.types.Integer
__init__(precision=None, scale=None, asdecimal=None)
class LONG(length=None, convert_unicode=False, assert_unicode=None, unicode_error=None,
_warn_on_bytestring=False)
Bases: sqlalchemy.types.Text
__init__(length=None, convert_unicode=False, assert_unicode=None, unicode_error=None,
_warn_on_bytestring=False)
Create a string-holding type.
Parameters
• length – optional, a length for the column for use in DDL statements. May be safely
omitted if no CREATE TABLE will be issued. Certain databases may require a length
for use in DDL, and will raise an exception when the CREATE TABLE DDL is issued.
Whether the value is interpreted as bytes or characters is database specific.
• convert_unicode – defaults to False. If True, the type will do what is necessary in order to
accept Python Unicode objects as bind parameters, and to return Python Unicode objects
in result rows. This may require SQLAlchemy to explicitly coerce incoming Python uni-
codes into an encoding, and from an encoding back to Unicode, or it may not require any
interaction from SQLAlchemy at all, depending on the DBAPI in use.
When SQLAlchemy performs the encoding/decoding, the encoding used is configured via
encoding, which defaults to utf-8.
The “convert_unicode” behavior can also be turned on for all String types by setting
sqlalchemy.engine.base.Dialect.convert_unicode on create_engine().
To instruct SQLAlchemy to perform Unicode encoding/decoding even on a platform that
already handles Unicode natively, set convert_unicode=’force’. This will incur significant
performance overhead when fetching unicode result columns.
• assert_unicode – Deprecated. A warning is raised in all cases when a non-Unicode object
is passed when SQLAlchemy would coerce into an encoding (note: but not when the
DBAPI handles unicode objects natively). To suppress or raise this warning to an error, use
the Python warnings filter documented at: http://docs.python.org/library/warnings.html
• unicode_error – Optional, a method to use to handle Unicode conversion errors. Behaves
like the ‘errors’ keyword argument to the standard library’s string.decode() functions. This
flag requires that convert_unicode is set to “force” - otherwise, SQLAlchemy is not guar-
anteed to handle the task of unicode conversion. Note that this flag adds significant per-
formance overhead to row-fetching operations for backends that already return unicode
objects natively (which most DBAPIs do). This flag should only be used as an absolute
last resort for reading strings from a column with varied or corrupted encodings, which
only applies to databases that accept invalid encodings in the first place (i.e. MySQL, not PG, SQLite, etc.)
class RAW(length=None)
Bases: sqlalchemy.types.LargeBinary
__init__(length=None)
Construct a LargeBinary type.
Parameter length – optional, a length for the column for use in DDL statements, for those BLOB
types that accept a length (i.e. MySQL). It does not produce a small BINARY/VARBINARY
type - use the BINARY/VARBINARY types specifically for those. May be safely omitted if
no CREATE TABLE will be issued. Certain databases may require a length for use in DDL,
and will raise an exception when the CREATE TABLE DDL is issued.
cx_Oracle Notes
Driver The Oracle dialect uses the cx_oracle driver, available at http://cx-oracle.sourceforge.net/ . The dialect has
several behaviors which are specifically tailored towards compatibility with this module. Version 5.0 or greater is
strongly recommended, as SQLAlchemy makes extensive use of the cx_oracle output converters for numeric and
string conversions.
• mode - This is given the string value of SYSDBA or SYSOPER, or alternatively an integer value. This value is
only available as a URL query string argument.
• threaded - enable multithreaded access to cx_oracle connections. Defaults to True. Note that this is the
opposite default of cx_oracle itself.
Unicode cx_oracle 5 fully supports Python unicode objects. SQLAlchemy will pass all unicode strings directly to
cx_oracle, and additionally uses an output handler so that all string based result values are returned as unicode as well.
Note that this behavior is disabled when Oracle 8 is detected, as it has been observed that issues remain when passing
Python unicodes to cx_oracle with Oracle 8.
LOB Objects cx_oracle returns oracle LOBs using the cx_oracle.LOB object. SQLAlchemy converts these to
strings so that the interface of the Binary type is consistent with that of other backends, and so that the linkage to a live
cursor is not needed in scenarios like result.fetchmany() and result.fetchall(). This means that by default, LOB objects
are fully fetched unconditionally by SQLAlchemy, and the linkage to a live cursor is broken.
To disable this processing, pass auto_convert_lobs=False to create_engine().
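For instance (a sketch; the connection URL is a placeholder):
engine = create_engine('oracle+cx_oracle://scott:tiger@tnsname',
                       auto_convert_lobs=False)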
Two Phase Transaction Support Two Phase transactions are implemented using XA transactions. Success has been
reported with this feature but it should be regarded as experimental.
zxjdbc Notes
Support for the Oracle database via the zxjdbc JDBC connector.
PostgreSQL
Sequences/SERIAL
PostgreSQL supports sequences, and SQLAlchemy uses these as the default means of creating new primary key
values for integer-based primary key columns. When creating tables, SQLAlchemy will issue the SERIAL datatype
for integer-based primary key columns, which generates a sequence corresponding to the column and associated with
it based on a naming convention.
To specify a specific named sequence to be used for primary key generation, use the Sequence() construct:
Table('sometable', metadata,
    Column('id', Integer, Sequence('some_id_seq'), primary_key=True)
)
Currently, when SQLAlchemy issues a single insert statement, to fulfill the contract of having the “last insert identi-
fier” available, the sequence is executed independently beforehand and the new value is retrieved, to be used in the
subsequent insert. Note that when an insert() construct is executed using “executemany” semantics, the sequence
is not pre-executed and normal PG SERIAL behavior is used.
PostgreSQL 8.2 supports an INSERT...RETURNING syntax which SQLAlchemy supports as well. A future release
of SQLA will use this feature by default in lieu of sequence pre-execution in order to retrieve new primary key values,
when available.
create_engine() accepts an isolation_level parameter which results in the command SET SESSION
CHARACTERISTICS AS TRANSACTION ISOLATION LEVEL <level> being invoked for every new con-
nection. Valid values for this parameter are READ_COMMITTED, READ_UNCOMMITTED, REPEATABLE_READ,
and SERIALIZABLE.
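For instance (a sketch; credentials and database name are placeholders):
engine = create_engine('postgresql+psycopg2://scott:tiger@localhost/test',
                       isolation_level='SERIALIZABLE')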
INSERT/UPDATE...RETURNING
# INSERT..RETURNING
result = table.insert().returning(table.c.col1, table.c.col2).\
    values(name='foo')
print result.fetchall()

# UPDATE..RETURNING
result = table.update().returning(table.c.col1, table.c.col2).\
    where(table.c.name=='foo').values(name='bar')
print result.fetchall()

# DELETE..RETURNING
result = table.delete().returning(table.c.col1, table.c.col2).\
    where(table.c.name=='foo')
print result.fetchall()
Indexes
PostgreSQL supports partial indexes. To create them pass a postgresql_where option to the Index constructor:
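For example (a sketch; mytable and its value column are assumptions here):
Index('my_partial_index', mytable.c.id,
      postgresql_where=mytable.c.value > 10)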
class DOUBLE_PRECISION(precision=None, asdecimal=False, **kwargs)
Bases: sqlalchemy.types.Float
__init__(precision=None, asdecimal=False, **kwargs)
Construct a Float.
Parameters
• precision – the numeric precision for use in DDL CREATE TABLE.
• asdecimal – the same flag as that of Numeric, but defaults to False. Note that setting
this flag to True results in floating point conversion.
class ENUM(*enums, **kw)
Bases: sqlalchemy.types.Enum
__init__(*enums, **kw)
Construct an enum.
Keyword arguments which don’t apply to a specific backend are ignored by that backend.
Parameters
• *enums – string or unicode enumeration labels. If unicode labels are present, the con-
vert_unicode flag is auto-enabled.
• convert_unicode – Enable unicode-aware bind parameter and result-set processing for this
Enum’s data. This is set automatically based on the presence of unicode label strings.
• metadata – Associate this type directly with a MetaData object. For types that exist
on the target database as an independent schema construct (Postgresql), this type will be
created and dropped within create_all() and drop_all() operations. If the type
is not associated with any MetaData object, it will associate itself with each Table in
which it is used, and will be created when any of those individual tables are created, after
a check is performed for its existence. The type is only dropped when drop_all() is
called for that Table object’s metadata, however.
• name – The name of this type. This is required for Postgresql and any future supported
database which requires an explicitly named type, or an explicitly named constraint in
order to generate the type and/or a table that uses it.
• native_enum – Use the database’s native ENUM type when available. Defaults to True.
When False, uses VARCHAR + check constraint for all backends.
• schema – Schemaname of this type. For types that exist on the target database as an
independent schema construct (Postgresql), this parameter specifies the named schema in
which the type is present.
• quote – Force quoting to be on or off on the type’s name. If left as the default of None,
the usual schema-level “case sensitive”/”reserved name” rules are used to determine if this
type’s name should be quoted.
class INET(*args, **kwargs)
Bases: sqlalchemy.types.TypeEngine
__init__(*args, **kwargs)
class INTERVAL(precision=None)
Bases: sqlalchemy.types.TypeEngine
__init__(precision=None)
class MACADDR(*args, **kwargs)
Bases: sqlalchemy.types.TypeEngine
__init__(*args, **kwargs)
class REAL(precision=None, asdecimal=False, **kwargs)
Bases: sqlalchemy.types.Float
psycopg2 Notes
Driver The psycopg2 driver is supported, available at http://pypi.python.org/pypi/psycopg2/ . The dialect has several
behaviors which are specifically tailored towards compatibility with this module.
Note that psycopg1 is not supported.
Unicode By default, the Psycopg2 driver uses the psycopg2.extensions.UNICODE extension, such that the
DBAPI receives and returns all strings as Python Unicode objects directly - SQLAlchemy passes these values through
without change. Note that this setting requires that the PG client encoding be set to one which can accommodate the
kind of character data being passed - typically utf-8. If the Postgresql database is configured for SQL_ASCII
encoding, which is often the default for PG installations, it may be necessary for non-ascii strings to be encoded into
a specific encoding before being passed to the DBAPI. If changing the database’s client encoding setting is not an
option, specify use_native_unicode=False as a keyword argument to create_engine(), and take note of
the encoding setting as well, which also defaults to utf-8. Note that disabling “native unicode” mode has a slight
performance penalty, as SQLAlchemy now must translate unicode strings to/from an encoding such as utf-8, a task
that is handled more efficiently within the Psycopg2 driver natively.
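For instance (a sketch; credentials, database name and the chosen encoding are placeholders):
engine = create_engine('postgresql+psycopg2://scott:tiger@localhost/test',
                       use_native_unicode=False, encoding='latin1')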
• server_side_cursors - Enable the usage of “server side cursors” for SQL statements which support this feature.
What this essentially means from a psycopg2 point of view is that the cursor is created using a name, e.g.
connection.cursor(‘some name’), which has the effect that result rows are not immediately pre-fetched and
buffered after statement execution, but are instead left on the server and only retrieved as needed. SQLAlchemy’s
ResultProxy uses special row-buffering behavior when this feature is enabled, such that groups of 100 rows
at a time are fetched over the wire to reduce conversational overhead.
• use_native_unicode - Enable the usage of Psycopg2 “native unicode” mode per connection. True by default.
Transactions The psycopg2 dialect fully supports SAVEPOINT and two-phase commit operations.
NOTICE logging The psycopg2 dialect will log Postgresql NOTICE messages via the
sqlalchemy.dialects.postgresql logger:
import logging
logging.getLogger('sqlalchemy.dialects.postgresql').setLevel(logging.INFO)
Per-Statement Execution Options The following per-statement execution options are respected:
• stream_results - Enable or disable usage of server side cursors for the SELECT-statement. If None or not set,
the server_side_cursors option of the connection is used. If auto-commit is enabled, the option is ignored.
pg8000 Notes
Unicode pg8000 requires that the postgresql client encoding be configured in the postgresql.conf file in order to use
encodings other than ascii. Set this value to the same value as the “encoding” parameter on create_engine(), usually
“utf-8”.
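For instance (a sketch; credentials and database name are placeholders):
engine = create_engine('postgresql+pg8000://scott:tiger@localhost/test',
                       encoding='utf-8')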
Interval Passing data from/to the Interval type is not supported as of yet.
zxjdbc Notes
Support for the PostgreSQL database via the zxjdbc JDBC connector.
SQLite
SQLite does not have built-in DATE, TIME, or DATETIME types, and pysqlite does not provide out of the box func-
tionality for translating values between Python datetime objects and a SQLite-supported format. SQLAlchemy’s own
DateTime and related types provide date formatting and parsing functionality when SQLite is used. The implementa-
tion classes are DATETIME, DATE and TIME. These types represent dates and times as ISO formatted strings, which
also nicely support ordering. There’s no reliance on typical “libc” internals for these functions so historical dates are
fully supported.
• The AUTOINCREMENT keyword is not required for SQLite tables to generate primary key values automati-
cally. AUTOINCREMENT only means that the algorithm used to generate ROWID values should be slightly
different.
• SQLite does not generate primary key (i.e. ROWID) values, even for one column, if the table has a composite
(i.e. multi-column) primary key. This is regardless of the AUTOINCREMENT keyword being present or not.
To specifically render the AUTOINCREMENT keyword on the primary key column when rendering DDL, add the
flag sqlite_autoincrement=True to the Table construct:
Table(’sometable’, metadata,
Column(’id’, Integer, primary_key=True),
sqlite_autoincrement=True)
Pysqlite
Driver When using Python 2.5 and above, the built in sqlite3 driver is already installed and no additional instal-
lation is needed. Otherwise, the pysqlite2 driver needs to be present. This is the same driver as sqlite3, just
with a different name.
The pysqlite2 driver will be loaded first, and if not found, sqlite3 is loaded. This allows an explicitly installed
pysqlite driver to take precedence over the built in one. As with all dialects, a specific DBAPI module may be provided
to create_engine() to control this explicitly:
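For instance (a sketch; file.db is a placeholder path):
from sqlite3 import dbapi2 as sqlite
e = create_engine('sqlite+pysqlite:///file.db', module=sqlite)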
Connect Strings The file specification for the SQLite database is taken as the “database” portion of the URL. Note
that the format of a url is:
driver://user:pass@host/database
This means that the actual filename to be used starts with the characters to the right of the third slash. So connecting
to a relative filepath looks like:
# relative path
e = create_engine('sqlite:///path/to/database.db')
An absolute path, which is denoted by starting with a slash, means you need four slashes:
# absolute path
e = create_engine('sqlite:////path/to/database.db')
To use a Windows path, regular drive specifications and backslashes can be used. Double backslashes are probably
needed:
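For instance (a sketch; the path is a placeholder):
# absolute path on Windows
e = create_engine('sqlite:///C:\\path\\to\\database.db')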
The sqlite :memory: identifier is the default if no filepath is present. Specify sqlite:// and nothing else:
# in-memory database
e = create_engine('sqlite://')
Compatibility with sqlite3 “native” date and datetime types The pysqlite driver includes the
sqlite3.PARSE_DECLTYPES and sqlite3.PARSE_COLNAMES options, which have the effect that any column
or expression explicitly cast as "date" or "timestamp" will be converted to a Python date or datetime object. The date
and datetime types provided with the pysqlite dialect are not currently compatible with these options, since they render
the ISO date/datetime including microseconds, which pysqlite’s driver does not. Additionally, SQLAlchemy does not
at this time automatically render the “cast” syntax required for the freestanding functions “current_timestamp” and
“current_date” to return datetime/date types natively. Unfortunately, pysqlite does not provide the standard DBAPI
types in cursor.description, leaving SQLAlchemy with no way to detect these types on the fly without expensive
per-row type checks.
Usage of PARSE_DECLTYPES can be forced if one configures “native_datetime=True” on create_engine():
engine = create_engine('sqlite://',
    connect_args={'detect_types': sqlite3.PARSE_DECLTYPES|sqlite3.PARSE_COLNAMES},
    native_datetime=True
)
With this flag enabled, the DATE and TIMESTAMP types (but note - not the DATETIME or TIME types...confused
yet?) will not perform any bind parameter or result processing. Execution of "func.current_date()" will return a
string. “func.current_timestamp()” is registered as returning a DATETIME type in SQLAlchemy, so this function still
receives SQLAlchemy-level result processing.
Threading Behavior Pysqlite connections do not support being moved between threads, unless the
check_same_thread Pysqlite flag is set to False. In addition, when using an in-memory SQLite database,
the full database exists only within the scope of a single connection. It is reported that an in-memory database does
not support being shared between threads regardless of the check_same_thread flag - which means that a multi-
threaded application cannot share data from a :memory: database across threads unless access to the connection is
limited to a single worker thread which communicates through a queueing mechanism to concurrent threads.
To provide a default which accommodates SQLite's default threading capabilities somewhat reasonably, the SQLite
dialect will specify that the SingletonThreadPool be used by default. This pool maintains a single SQLite
connection per thread that is held open up to a count of five concurrent threads. When more than five threads are used,
a cleanup mechanism will dispose of excess unused connections.
Two optional pool implementations that may be appropriate for particular SQLite usage scenarios:
• the sqlalchemy.pool.NullPool might be appropriate for an application that makes use of a file-
based sqlite database. This pool disables any actual "pooling" behavior, and simply opens and closes real
connections corresponding to the connect() and close() methods. SQLite can "connect" to a particu-
lar file with very high efficiency, so this option may actually perform better without the extra overhead of
SingletonThreadPool. NullPool will of course render a :memory: connection useless since the database
would be lost as soon as the connection is "returned" to the pool. Specifying the pool is sketched below.
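A minimal sketch of selecting this pool for a file-based database (the path is a placeholder):
from sqlalchemy.pool import NullPool
e = create_engine('sqlite:////tmp/app.db', poolclass=NullPool)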
Unicode In contrast to SQLAlchemy’s active handling of date and time types for pysqlite, pysqlite’s default behavior
regarding Unicode is that all strings are returned as Python unicode objects in all cases. So even if the Unicode type
is not used, you will still always receive unicode data back from a result set. It is strongly recommended that you
do use the Unicode type to represent strings, since it will raise a warning if a non-unicode Python string is passed
from the user application. Mixing the usage of non-unicode objects with returned unicode objects can quickly create
confusion, particularly when using the ORM as internal data is not always represented by an actual database result
string.
Sybase
python-sybase notes
sybase+pysybase://<username>:<password>@<dsn>/[database name]
Unicode Support The python-sybase driver does not appear to support non-ASCII strings of any kind at this time.
pyodbc notes
sybase+pyodbc://<username>:<password>@<dsn>/
sybase+pyodbc://<username>:<password>@<host>/<database>
Unicode Support The pyodbc driver currently supports usage of these Sybase types with Unicode or multibyte
strings:
CHAR
NCHAR
NVARCHAR
TEXT
VARCHAR
UNICHAR
UNITEXT
UNIVARCHAR
mxodbc notes
These backends are untested and may not be completely ported to current versions of SQLAlchemy.
Microsoft Access
Informix
MaxDB
Overview
The maxdb dialect is experimental and has only been tested on 7.6.03.007 and 7.6.00.037. Of these, only 7.6.03.007
will work with SQLAlchemy’s ORM. The earlier version has severe LEFT JOIN limitations and will return incorrect
results from even very simple ORM queries.
Only the native Python DB-API is currently supported. ODBC driver support is a future enhancement.
Connecting
The username is case-sensitive. If you usually connect to the database with sqlcli and other tools in lower case, you
likely need to use upper case for DB-API.
Implementation Notes
Also check the DatabaseNotes page on the wiki for detailed information.
With the 7.6.00.37 driver and Python 2.5, it seems that all DB-API generated exceptions are broken and can cause
Python to crash.
For ‘somecol.in_([])’ to work, the IN operator’s generation must be changed to cast ‘NULL’ to a numeric, i.e.
NUM(NULL). The DB-API doesn’t accept a bind parameter there, so that particular generation must inline the NULL
value, which depends on [ticket:807].
The DB-API is very picky about where bind params may be used in queries.
Bind params for some functions (e.g. MOD) need type information supplied. The dialect does not yet do this auto-
matically.
Max will occasionally throw up ‘bad sql, compile again’ exceptions for perfectly valid SQL. The dialect does not
currently handle these, more research is needed.
MaxDB 7.5 and Sap DB <= 7.4 reportedly do not support schemas. A very slightly different version of this dialect
would be required to support those versions, and can easily be added if there is demand. Some other required com-
ponents such as an Max-aware ‘old oracle style’ join compiler (thetas with (+) outer indicators) are already done and
available for integration- email the devel list if you’re interested in working on this.
9.4 sqlalchemy.ext
SQLAlchemy has a variety of extensions available which provide extra functionality to SA, either via explicit usage
or by augmenting the core behavior.
9.4.1 declarative
Synopsis
SQLAlchemy object-relational configuration involves the use of Table, mapper(), and class objects to define the
three areas of configuration. declarative allows all three types of configuration to be expressed declaratively on
an individual mapped class. Regular SQLAlchemy schema elements and ORM constructs are used in most cases.
As a simple example:
Base = declarative_base()
class SomeClass(Base):
__tablename__ = ’some_table’
id = Column(Integer, primary_key=True)
name = Column(String(50))
Above, the declarative_base() callable returns a new base class from which all mapped classes should inherit.
When the class definition is completed, a new Table and mapper will have been generated, accessible via the
__table__ and __mapper__ attributes on the SomeClass class.
Defining Attributes
In the above example, the Column objects are automatically named with the name of the attribute to which they are
assigned.
They can also be explicitly named, and that name does not have to be the same as name assigned on the class. The
column will be assigned to the Table using the given name, and mapped to the class using the attribute name:
class SomeClass(Base):
__tablename__ = ’some_table’
id = Column("some_table_id", Integer, primary_key=True)
name = Column("name", String(50))
Attributes may be added to the class after its construction, and they will be added to the underlying Table and
mapper() definitions as appropriate:
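For instance (RelatedInfo here is a hypothetical mapped class):
SomeClass.data = Column('data', Unicode)
SomeClass.related = relationship(RelatedInfo)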
Classes which are mapped explicitly using mapper() can interact freely with declarative classes.
It is recommended, though not required, that all tables share the same underlying MetaData object, so that string-
configured ForeignKey references can be resolved without issue.
The declarative_base() base class contains a MetaData object where newly defined Table objects are
collected. This is accessed via the metadata class-level attribute, so to create tables we can say:
engine = create_engine(’sqlite://’)
Base.metadata.create_all(engine)
The Engine created above may also be directly associated with the declarative base class using the bind keyword
argument, where it will be associated with the underlying MetaData object and allow SQL operations involving that
metadata and its tables to make use of that engine automatically:
Base = declarative_base(bind=create_engine(’sqlite://’))
Alternatively, by way of the normal MetaData behaviour, the bind attribute of the class level accessor can be
assigned at any time as follows:
Base.metadata.bind = create_engine(’sqlite://’)
The declarative_base() can also receive a pre-created MetaData object, which allows a declarative setup to
be associated with an already existing traditional collection of Table objects:
mymetadata = MetaData()
Base = declarative_base(metadata=mymetadata)
Configuring Relationships
Relationships to other classes are done in the usual way, with the added feature that the class specified to
relationship() may be a string name (note that relationship() is only available as of SQLAlchemy
0.6beta2, and in all prior versions is known as relation(), including 0.5 and 0.4). The “class registry” associ-
ated with Base is used at mapper compilation time to resolve the name into the actual class object, which is expected
to have been defined once the mapper configuration is used:
class User(Base):
__tablename__ = ’users’
id = Column(Integer, primary_key=True)
name = Column(String(50))
addresses = relationship("Address", backref="user")
class Address(Base):
__tablename__ = ’addresses’
id = Column(Integer, primary_key=True)
email = Column(String(50))
user_id = Column(Integer, ForeignKey(’users.id’))
Column constructs, since they are just that, are immediately usable, as below where we define a primary join condition
on the Address class using them:
class Address(Base):
__tablename__ = ’addresses’
id = Column(Integer, primary_key=True)
email = Column(String(50))
user_id = Column(Integer, ForeignKey(’users.id’))
user = relationship(User, primaryjoin=user_id == User.id)
In addition to the main argument for relationship(), other arguments which depend upon the columns present
on an as-yet undefined class may also be specified as strings. These strings are evaluated as Python expressions. The
full namespace available within this evaluation includes all classes mapped for this declarative base, as well as the
contents of the sqlalchemy package, including expression functions like desc() and func:
class User(Base):
# ....
addresses = relationship("Address",
order_by="desc(Address.email)",
primaryjoin="Address.user_id==User.id")
As an alternative to string-based attributes, attributes may also be defined after all classes have been created. Just add
them to the target class after the fact:
User.addresses = relationship(Address,
primaryjoin=Address.user_id==User.id)
There’s nothing special about many-to-many with declarative. The secondary argument to relationship()
still requires a Table object, not a declarative class. The Table should share the same MetaData object used by
the declarative base:
keywords = Table(
’keywords’, Base.metadata,
Column(’author_id’, Integer, ForeignKey(’authors.id’)),
Column(’keyword_id’, Integer, ForeignKey(’keywords.id’))
)
class Author(Base):
__tablename__ = ’authors’
id = Column(Integer, primary_key=True)
keywords = relationship("Keyword", secondary=keywords)
You should generally not map a class and also specify its table in a many-to-many relationship, since the ORM may
issue duplicate INSERT and DELETE statements.
Defining Synonyms
Synonyms are introduced in Using Descriptors. To define a getter/setter which proxies to an underlying attribute, use
synonym() with the descriptor argument:
class MyClass(Base):
    __tablename__ = 'sometable'
    id = Column(Integer, primary_key=True)
    # the mapped column underlying the synonym (assumed here for completeness)
    _attr = Column('attr', String(50))
    def _get_attr(self):
        return self._attr
    def _set_attr(self, attr):
        self._attr = attr
    attr = synonym('_attr', descriptor=property(_get_attr, _set_attr))
The above synonym is then usable as an instance attribute as well as a class-level expression construct:
x = MyClass()
x.attr = "some value"
session.query(MyClass).filter(MyClass.attr == ’some other value’).all()
For simple getters, the synonym_for() decorator can be used in conjunction with @property:
class MyClass(Base):
    __tablename__ = 'sometable'
    @synonym_for('_attr')
    @property
    def attr(self):
        return self._attr
Similarly, the comparable_using() decorator (described further below) allows a @property to be used in query criteria with a custom comparator:
class MyClass(Base):
    __tablename__ = 'sometable'
    @comparable_using(MyUpperCaseComparator)
    @property
    def uc_name(self):
        return self.name.upper()
Table Configuration
Table arguments other than the name, metadata, and mapped Column arguments are specified using the
__table_args__ class attribute. This attribute accommodates both positional as well as keyword arguments that
are normally sent to the Table constructor. The attribute can be specified in one of two forms. One is as a dictionary:
class MyClass(Base):
__tablename__ = ’sometable’
__table_args__ = {’mysql_engine’:’InnoDB’}
The other, a tuple of the form (arg1, arg2, ..., {kwarg1:value, ...}), which allows positional argu-
ments to be specified as well (usually constraints):
class MyClass(Base):
__tablename__ = ’sometable’
__table_args__ = (
ForeignKeyConstraint([’id’], [’remote_table.id’]),
UniqueConstraint(’foo’),
{’autoload’:True}
)
Note that the keyword parameters dictionary is required in the tuple form even if empty.
As an alternative to __tablename__, a direct Table construct may be used. The Column objects, which in this
case require their names, will be added to the mapping just like a regular mapping to a table:
class MyClass(Base):
__table__ = Table(’my_table’, Base.metadata,
Column(’id’, Integer, primary_key=True),
Column(’name’, String(50))
)
Mapper Configuration
Configuration of mappers is done with the mapper() function and all the possible mapper configuration parameters
can be found in the documentation for that function.
mapper() is still used by declaratively mapped classes and keyword parameters to the function can be passed by
placing them in the __mapper_args__ class variable:
class Widget(Base):
    __tablename__ = 'widgets'
    id = Column(Integer, primary_key=True)
    # e.g. pass mapper() keyword arguments; MyWidgetExtension is hypothetical
    __mapper_args__ = {'extension': MyWidgetExtension()}
Inheritance Configuration
Declarative supports all three forms of inheritance as intuitively as possible. The inherits mapper keyword ar-
gument is not needed as declarative will determine this from the class itself. The various “polymorphic” keyword
arguments are specified using __mapper_args__.
Joined table inheritance is defined as a subclass that defines its own table:
class Person(Base):
__tablename__ = ’people’
id = Column(Integer, primary_key=True)
discriminator = Column(’type’, String(50))
__mapper_args__ = {’polymorphic_on’: discriminator}
class Engineer(Person):
__tablename__ = ’engineers’
__mapper_args__ = {’polymorphic_identity’: ’engineer’}
id = Column(Integer, ForeignKey(’people.id’), primary_key=True)
primary_language = Column(String(50))
Note that above, the Engineer.id attribute, since it shares the same attribute name as the Person.id attribute,
will in fact represent the people.id and engineers.id columns together, and will render inside a query as
"people.id". To provide the Engineer class with an attribute that represents only the engineers.id column,
give it a different attribute name:
class Engineer(Person):
__tablename__ = ’engineers’
__mapper_args__ = {’polymorphic_identity’: ’engineer’}
engineer_id = Column(’id’, Integer, ForeignKey(’people.id’), primary_key=True)
primary_language = Column(String(50))
Single table inheritance is defined as a subclass that does not have its own table; you just leave out the __table__
and __tablename__ attributes:
class Person(Base):
__tablename__ = ’people’
id = Column(Integer, primary_key=True)
discriminator = Column(’type’, String(50))
__mapper_args__ = {’polymorphic_on’: discriminator}
class Engineer(Person):
__mapper_args__ = {’polymorphic_identity’: ’engineer’}
primary_language = Column(String(50))
When the above mappers are configured, the Person class is mapped to the people table before the
primary_language column is defined, and this column will not be included in its own mapping. When
Engineer then defines the primary_language column, the column is added to the people table so that
it is included in the mapping for Engineer and is also part of the table’s full set of columns. Columns
which are not mapped to Person are also excluded from any other single or joined inheriting classes using the
exclude_properties mapper argument. Below, Manager will have all the attributes of Person and Manager
but not the primary_language attribute of Engineer:
class Manager(Person):
__mapper_args__ = {’polymorphic_identity’: ’manager’}
golf_swing = Column(String(50))
The attribute exclusion logic is provided by the exclude_properties mapper argument, and declarative’s default
behavior can be disabled by passing an explicit exclude_properties collection (empty or otherwise) to the
__mapper_args__.
Concrete is defined as a subclass which has its own table and sets the concrete keyword argument to True:
class Person(Base):
__tablename__ = ’people’
id = Column(Integer, primary_key=True)
name = Column(String(50))
class Engineer(Person):
__tablename__ = ’engineers’
__mapper_args__ = {’concrete’:True}
id = Column(Integer, primary_key=True)
primary_language = Column(String(50))
name = Column(String(50))
Usage of an abstract base class is a little less straightforward as it requires usage of polymorphic_union():
punion = polymorphic_union({
’engineer’:engineers,
’manager’:managers
}, ’type’, ’punion’)
class Person(Base):
__table__ = punion
__mapper_args__ = {’polymorphic_on’:punion.c.type}
class Engineer(Person):
__table__ = engineers
__mapper_args__ = {’polymorphic_identity’:’engineer’, ’concrete’:True}
class Manager(Person):
__table__ = managers
__mapper_args__ = {’polymorphic_identity’:’manager’, ’concrete’:True}
Mix-in Classes
A common need when using declarative is to share some functionality, often a set of columns, across many
classes. The normal python idiom would be to put this common code into a base class and have all the other classes
subclass this class.
When using declarative, this need is met by using a “mix-in class”. A mix-in class is one that isn’t mapped to a
table and doesn’t subclass the declarative Base. For example:
class MyMixin(object):
__table_args__ = {’mysql_engine’:’InnoDB’}
__mapper_args__=dict(always_refresh=True)
id = Column(Integer, primary_key=True)
def foo(self):
return ’bar’+str(self.id)
class MyModel(Base,MyMixin):
__tablename__=’test’
name = Column(String(1000), nullable=False, index=True)
As the above example shows, __table_args__ and __mapper_args__ can both be abstracted out into a mix-in
if you use common values for these across many classes.
However, particularly in the case of __table_args__, you may want to combine some parameters from several
mix-ins with those you wish to define on the class itself. To help with this, a classproperty() decorator is
provided that lets you implement a class property with a function. For example:
class MySQLSettings:
__table_args__ = {’mysql_engine’:’InnoDB’}
class MyOtherMixin:
__table_args__ = {’info’:’foo’}
class MyModel(Base,MySQLSettings,MyOtherMixin):
__tablename__=’my_model’
@classproperty
def __table_args__(self):
args = dict()
args.update(MySQLSettings.__table_args__)
args.update(MyOtherMixin.__table_args__)
return args
id = Column(Integer, primary_key=True)
The __tablename__ attribute in conjunction with the hierarchy of the classes involved controls what type of table
inheritance, if any, is configured by the declarative extension.
If the __tablename__ is computed by a mix-in, you may need to control which classes get the computed attribute
in order to get the type of table inheritance you require.
For example, if you had a mix-in that computes __tablename__ but where you wanted to use that mix-in in a single
table inheritance hierarchy, you can explicitly specify __tablename__ as None to indicate that the class should
not have a table mapped:
class Tablename:
@classproperty
def __tablename__(cls):
return cls.__name__.lower()
class Person(Base,Tablename):
id = Column(Integer, primary_key=True)
discriminator = Column(’type’, String(50))
__mapper_args__ = {’polymorphic_on’: discriminator}
class Engineer(Person):
__tablename__ = None
__mapper_args__ = {’polymorphic_identity’: ’engineer’}
primary_language = Column(String(50))
Alternatively, you can make the mix-in intelligent enough to only return a __tablename__ in the event that no table
is already mapped in the inheritance hierarchy. To help with this, a has_inherited_table() helper function is
provided that returns True if a parent class already has a mapped table.
As an example, here's a mix-in that will only allow single table inheritance:
class Tablename:
@classproperty
def __tablename__(cls):
if has_inherited_table(cls):
return None
return cls.__name__.lower()
class Person(Base,Tablename):
id = Column(Integer, primary_key=True)
discriminator = Column(’type’, String(50))
__mapper_args__ = {’polymorphic_on’: discriminator}
class Engineer(Person):
__tablename__ = None
__mapper_args__ = {’polymorphic_identity’: ’engineer’}
primary_language = Column(String(50))
If you want to use a similar pattern with a mix of single and joined table inheritance, you would need a slightly different
mix-in and use it on any joined table child classes in addition to their parent classes:
class Tablename:
@classproperty
def __tablename__(cls):
if (decl.has_inherited_table(cls) and
        Tablename not in cls.__bases__):
return None
return cls.__name__.lower()
class Person(Base,Tablename):
id = Column(Integer, primary_key=True)
discriminator = Column(’type’, String(50))
__mapper_args__ = {’polymorphic_on’: discriminator}
class Engineer(Person):
# This is single table inheritance
__tablename__ = None
__mapper_args__ = {’polymorphic_identity’: ’engineer’}
primary_language = Column(String(50))
class Manager(Person,Tablename):
    # This is joined table inheritance; the subclass needs its own
    # primary key column referencing the parent table
    id = Column(Integer, ForeignKey('person.id'), primary_key=True)
    __mapper_args__ = {'polymorphic_identity': 'manager'}
    preferred_recreation = Column(String(50))
Class Constructor
As a convenience feature, the declarative_base() sets a default constructor on classes which takes keyword
arguments, and assigns them to the named attributes:
e = Engineer(primary_language=’python’)
Sessions
Note that declarative does nothing special with sessions, and is only intended as an easier way to configure
mappers and Table objects. A typical application setup using scoped_session() might look like:
engine = create_engine(’postgresql://scott:tiger@localhost/test’)
Session = scoped_session(sessionmaker(autocommit=False,
autoflush=False,
bind=engine))
Base = declarative_base()
API Reference
@synonym_for(’col’)
@property
def prop(self):
return ’special sauce’
The regular synonym() is also usable directly in a declarative setting and may be convenient for read/write
properties:
comparable_using(comparator_factory)
Decorator, allow a Python @property to be used in query criteria.
This is a decorator front end to comparable_property() that passes through the comparator_factory and
the function being decorated:
@comparable_using(MyComparatorType)
@property
def prop(self):
return ’special sauce’
The regular comparable_property() is also usable directly in a declarative setting and may be convenient
for read/write properties:
prop = comparable_property(MyComparatorType)
9.4.2 associationproxy
associationproxy is used to create a simplified, read/write view of a relationship. It can be used to cherry-
pick fields from a collection of related objects or to greatly simplify access to associated objects in an association
relationship.
Simplifying Relationships
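The table definitions for this example would look roughly as follows (a sketch; column names and lengths are illustrative):
users_table = Table('users', metadata,
    Column('id', Integer, primary_key=True),
    Column('name', String(64)))

keywords_table = Table('keywords', metadata,
    Column('id', Integer, primary_key=True),
    Column('keyword', String(64)))

userkeywords_table = Table('userkeywords', metadata,
    Column('user_id', Integer, ForeignKey('users.id'), primary_key=True),
    Column('keyword_id', Integer, ForeignKey('keywords.id'), primary_key=True))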
class User(object):
def __init__(self, name):
self.name = name
class Keyword(object):
def __init__(self, keyword):
self.keyword = keyword
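A classical mapping along these lines is assumed, with the many-to-many relationship named kw (a sketch):
mapper(User, users_table, properties={
    'kw': relationship(Keyword, secondary=userkeywords_table)
})
mapper(Keyword, keywords_table)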
Assume three simple tables modeling users, keywords and a many-to-many relationship between the two (a users table, a keywords table and a userkeywords association table), with User mapped so that it has a kw relationship to Keyword. These Keyword objects are little more than a container for a name, and accessing them via the relationship is awkward:
user = User('jek')
user.kw.append(Keyword('cheese inspector'))
print user.kw
# [<__main__.Keyword object at 0xb791ea0c>]
print user.kw[0].keyword
# 'cheese inspector'
print [keyword.keyword for keyword in user.kw]
# ['cheese inspector']
With association_proxy you have a “view” of the relationship that contains just the .keyword of the related
objects. The proxy is a Python property, and unlike the mapper relationship, is defined in your class:
from sqlalchemy.ext.associationproxy import association_proxy

class User(object):
    def __init__(self, name):
        self.name = name
    # proxy the 'keyword' attribute from the 'kw' relationship
    keywords = association_proxy('kw', 'keyword')
>>> user.kw
[<__main__.Keyword object at 0xb791ea0c>]
>>> user.keywords
['cheese inspector']
>>> user.keywords.append('snack ninja')
>>> user.keywords
['cheese inspector', 'snack ninja']
>>> user.kw
[<__main__.Keyword object at 0x9272a4c>, <__main__.Keyword object at 0xb7b396ec>]
The proxy is read/write. New associated objects are created on demand when values are added to the proxy, and
modifying or removing an entry through the proxy also affects the underlying collection.
• The association proxy property is backed by a mapper-defined relationship, either a collection or scalar.
• You can access and modify both the proxy and the backing relationship. Changes in one are immediate in the
other.
• The proxy acts like the type of the underlying collection. A list gets a list-like proxy, a dict a dict-like proxy,
and so on.
• Multiple proxies for the same relationship are fine.
• Proxies are lazy, and won’t trigger a load of the backing relationship until they are accessed.
• The relationship is inspected to determine the type of the related objects.
• To construct new instances, the type is called with the value being assigned, or key and value for dicts.
• A creator function can be used to create instances instead.
Above, the Keyword.__init__ takes a single argument keyword, which maps conveniently to the value being
set through the proxy. A creator function could have been used instead if more flexibility was required.
Because the proxies are backed by a regular relationship collection, all of the usual hooks and patterns for using
collections are still in effect. The most convenient behavior is the automatic setting of “parent”-type relationships on
assignment. In the example above, nothing special had to be done to associate the Keyword to the User. Simply adding
it to the collection is sufficient.
Association proxies are also useful for keeping association objects out of the way during regular use. For example, the userkeywords table might have a bunch of auditing columns that need to get updated when changes are made: columns that are updated but seldom, if ever, accessed in your application. A proxy can provide a very natural access pattern for the relationship.
def get_current_uid():
    """Return the uid of the current user."""
    return 1  # hardcoded for this example

def _create_uk_by_keyword(keyword):
    """A creator function."""
    return UserKeyword(keyword=keyword)
class User(object):
    def __init__(self, name):
        self.name = name
    # proxy 'keyword' from each UserKeyword, using the creator above
    keywords = association_proxy('user_keywords', 'keyword',
                                 creator=_create_uk_by_keyword)
class Keyword(object):
    def __init__(self, keyword):
        self.keyword = keyword
    def __repr__(self):
        return 'Keyword(%s)' % repr(self.keyword)

class UserKeyword(object):
    def __init__(self, user=None, keyword=None):
        self.user = user
        self.keyword = keyword

mapper(User, users_table)
mapper(Keyword, keywords_table)
mapper(UserKeyword, userkeywords_table, properties={
    'user': relationship(User, backref='user_keywords'),
    'keyword': relationship(Keyword),
})
user = User('log')
kw1 = Keyword('new_from_blammo')
# associate the keyword with the user through an explicit UserKeyword;
# the 'user_keywords' backref collection is populated for us
UserKeyword(user, kw1)
print user.user_keywords[0].keyword
# Keyword('new_from_blammo')

# Lots of work.  Going through the proxy is much simpler:
for phrase in ('its_big', 'its_heavy', 'its_wood'):
    user.keywords.append(phrase)
print user.keywords
# [Keyword('new_from_blammo'), Keyword('its_big'), Keyword('its_heavy'), Keyword('its_wood')]
Next, consider three tables modeling stocks, their brokers and the number of shares of a stock held by each broker: a stocks_table, a brokers_table, and a holdings_table association whose rows carry a shares column. This situation is quite different from the association example above. Here, shares is a property of the relationship, and an important one that we need to use all the time.
For this example, it would be very convenient if Broker objects had a dictionary collection that mapped Stock
instances to the shares held for each. That’s easy:
class Broker(object):
    def __init__(self, name):
        self.name = name
    # dict-style proxy of Holding.shares, keyed by Stock
    holdings = association_proxy('by_stock', 'shares',
            creator=lambda stock, shares: Holding(stock=stock, shares=shares))

class Stock(object):
    def __init__(self, symbol):
        self.symbol = symbol
        self.last_price = 0

class Holding(object):
    def __init__(self, broker=None, stock=None, shares=0):
        self.broker = broker
        self.stock = stock
        self.shares = shares
mapper(Stock, stocks_table)
mapper(Broker, brokers_table, properties={
    'by_stock': relationship(Holding,
        collection_class=attribute_mapped_collection('stock'))
})
mapper(Holding, holdings_table, properties={
    'stock': relationship(Stock),
    'broker': relationship(Broker)
})
Above, we’ve set up the by_stock relationship collection to act as a dictionary, using the .stock property of each
Holding as a key.
Populating and accessing that dictionary manually is slightly inconvenient because of the complexity of the Holding association object:
stock = Stock('ZZK')
broker = Broker('paj')
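A sketch of the manual approach, using the session assumed throughout this example:

# every association must be spelled out as a Holding
holding = Holding(broker, stock, 10)
session.add(holding)
session.flush()

# reading the shares back means digging through the Holding
print broker.by_stock[stock].shares
# 10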
The holdings proxy we’ve added to the Broker class hides the details of the Holding while also giving access
to .shares:
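A sketch of the proxy in use, assuming the holdings proxy defined on Broker above:

broker.holdings[stock] = 10
print broker.holdings[stock]
# 10

# the underlying Holding is created and keyed by Stock for us
print broker.by_stock[stock].shares
# 10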
session.add(broker)
session.commit()
Further examples can be found in the examples/ directory in the SQLAlchemy distribution.
API
association_proxy(target_collection, attr, **kw)
Return a Python property implementing a view of attr over a collection. The collection it presents is roughly equivalent to the list comprehension:
[getattr(member, attr) for member in getattr(instance, target_collection)]
Unlike the list comprehension, the collection returned by the property is always in sync with target_collection, and mutations made to either collection will be reflected in both.
Implements a Python property representing a relationship as a collection of simpler values. The proxied property
will mimic the collection type of the target (list, dict or set), or, in the case of a one to one relationship, a simple
scalar value.
Parameters
• target_collection – Name of the relationship attribute we’ll proxy to, usually created with
relationship().
• attr – Attribute on the associated instances we’ll proxy for.
For example, given a target collection of [obj1, obj2], a list created by this proxy property
would look like [getattr(obj1, attr), getattr(obj2, attr)]
If the relationship is one-to-one or otherwise uselist=False, then simply: getattr(obj, attr)
• creator – optional.
When new items are added to this proxied collection, new instances of the class collected by
the target collection will be created. For list and set collections, the target class constructor
will be called with the ‘value’ for the new instance. For dict types, two arguments are
passed: key and value.
If you want to construct instances differently, supply a creator function that takes arguments
as above and returns instances.
For scalar relationships, creator() will be called if the target is None. If the target is present,
set operations are proxied to setattr() on the associated object.
If you have an associated object with multiple attributes, you may set up multiple association
proxies mapping to different attributes. See the unit tests for examples, and for examples of
how creator() functions can be used to construct the scalar relationship on-demand in this
situation.
• **kw – Passes along any other keyword arguments to AssociationProxy.
class AssociationProxy(target_collection, attr, creator=None, getset_factory=None, proxy_factory=None,
proxy_bulk_set=None)
A descriptor that presents a read/write view of an object attribute.
__init__(target_collection, attr, creator=None, getset_factory=None, proxy_factory=None,
proxy_bulk_set=None)
Arguments are:
target_collection Name of the collection we’ll proxy to, usually created with ‘relationship()’ in a mapper
setup.
attr Attribute on the collected instances we’ll proxy for. For example, given a target collection of [obj1,
obj2], a list created by this proxy property would look like [getattr(obj1, attr), getattr(obj2, attr)]
creator Optional. When new items are added to this proxied collection, new instances of the class col-
lected by the target collection will be created. For list and set collections, the target class constructor
will be called with the ‘value’ for the new instance. For dict types, two arguments are passed: key and
value.
If you want to construct instances differently, supply a ‘creator’ function that takes arguments as above
and returns instances.
getset_factory Optional. Proxied attribute access is automatically handled by routines that get and set
values based on the attr argument for this proxy.
If you would like to customize this behavior, you may supply a getset_factory callable that produces
a tuple of getter and setter functions. The factory is called with two arguments, the abstract type of
the underlying collection and this proxy instance.
proxy_factory Optional. The type of collection to emulate is determined by sniffing the target collection.
If your collection type can’t be determined by duck typing or you’d like to use a different collection
implementation, you may supply a factory function to produce those collections. Only applicable to
non-scalar relationships.
proxy_bulk_set Optional, use with proxy_factory. See the _set() method for details.
any(criterion=None, **kwargs)
contains(obj)
has(criterion=None, **kwargs)
target_class
The class the proxy is attached to.
9.4.3 orderinglist
orderinglist is a helper for mutable ordered relationships. It will intercept list operations performed on a relationship collection and automatically synchronize changes in list position with an attribute on the related objects. (See Alternate Collection Implementations for more information on the general pattern.)
Example: Two tables that store slides in a presentation. Each slide has a number of bullet points, displayed in order
by the ‘position’ column on the bullets table. These bullets can be inserted and re-ordered by your end users, and you
need to update the ‘position’ column of all affected rows when changes are made.
class Slide(object):
    pass

class Bullet(object):
    pass
The standard relationship mapping will produce a list-like attribute on each Slide containing all related Bullets, but coping with changes in ordering is entirely your responsibility. If you insert a Bullet into that list, there is no magic: it won't have a position attribute unless you assign one to it, and you'll need to manually renumber all the subsequent Bullets in the list to accommodate the insert.
An orderinglist can automate this and manage the ‘position’ attribute on all related bullets for you.
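The mapper setup is not shown above; a minimal sketch, assuming slides_table and bullets_table Table objects where the bullets table carries a position column, looks like:

from sqlalchemy.orm import mapper, relationship
from sqlalchemy.ext.orderinglist import ordering_list

mapper(Slide, slides_table, properties={
    'bullets': relationship(Bullet,
                    # ordering_list keeps Bullet.position in step with list order
                    collection_class=ordering_list('position'),
                    order_by=[bullets_table.c.position])
})
mapper(Bullet, bullets_table)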
s = Slide()
s.bullets.append(Bullet())
s.bullets.append(Bullet())
s.bullets[1].position
# 1
s.bullets.insert(1, Bullet())
s.bullets[2].position
# 2
Use the ordering_list function to set up the collection_class on relationships (as in the mapper example
above). This implementation depends on the list starting in the proper order, so be SURE to put an order_by on your
relationship.
Warning: ordering_list only provides limited functionality when a primary key column or unique column
is the target of the sort. Since changing the order of entries often means that two rows must trade values, this
is not possible when the value is constrained by a primary key or unique constraint, since one of the rows would
temporarily have to point to a third available value so that the other row could take its old value. ordering_list
doesn’t do any of this for you, nor does SQLAlchemy itself.
ordering_list takes the name of the related object’s ordering attribute as an argument. By default, the zero-based
integer index of the object’s position in the ordering_list is synchronized with the ordering attribute: index 0
will get position 0, index 1 position 1, etc. To start numbering at 1 or some other integer, provide count_from=1.
Ordering values are not limited to incrementing integers. Almost any scheme can be implemented by supplying a custom ordering_func that maps a Python list index to any value you require.
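For example, a sketch of both options (stepped_numbering is a hypothetical helper):

# number positions starting from 1 instead of 0
ordering_list('position', count_from=1)

# or store multiples of ten, leaving gaps for out-of-band inserts
def stepped_numbering(index, collection):
    return index * 10

ordering_list('position', ordering_func=stepped_numbering)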
API Reference
9.4.4 serializer
Serializer/Deserializer objects for usage with SQLAlchemy query structures, allowing “contextual” deserialization.
Any SQLAlchemy query structure, either based on sqlalchemy.sql.* or sqlalchemy.orm.* can be used. The mappers,
Tables, Columns, Session etc. which are referenced by the structure are not persisted in serialized form, but are instead
re-associated with the query structure when it is deserialized.
Usage is nearly the same as that of the standard Python pickle module:
query = Session.query(MyClass).filter(MyClass.somedata=='foo').order_by(MyClass.sortkey)
serialized = dumps(query)
query2 = loads(serialized, metadata, Session)
print query2.all()
Similar restrictions as when using raw pickle apply; mapped classes must themselves be pickleable, meaning they are importable from a module-level namespace.
The serializer module is only appropriate for query structures. It is not needed for:
• instances of user-defined classes. These contain no references to engines, sessions or expression constructs in
the typical case and can be serialized directly.
• Table metadata that is to be loaded entirely from the serialized structure (i.e. is not already declared in the appli-
cation). Regular pickle.loads()/dumps() can be used to fully dump any MetaData object, typically one which
was reflected from an existing database at some previous point in time. The serializer module is specifically for
the opposite case, where the Table metadata is already present in memory.
Serializer(*args, **kw)
Deserializer(file, metadata=None, scoped_session=None, engine=None)
dumps(obj, protocol=0)
loads(data, metadata=None, scoped_session=None, engine=None)
9.4.5 SqlSoup
Introduction
SqlSoup provides a convenient way to access existing database tables without having to declare table or mapper classes
ahead of time. It is built on top of the SQLAlchemy ORM and provides a super-minimalistic interface to an existing
database.
Suppose we have a database with users, books, and loans tables (corresponding to the PyWebOff dataset, if you’re
curious).
Creating a SqlSoup gateway is just like creating an SQLAlchemy engine:
>>> db = SqlSoup(engine)
You can optionally specify a schema within the database for your SqlSoup:
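One way this is commonly written (a sketch; myschemaname is illustrative):
>>> db.schema = 'myschemaname'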
Loading objects
>>> db.users.order_by(db.users.name).all()
[MappedUsers(name=u’Bhargan Basepair’,email=u’[email protected]’,password=u’basepair’,cl
>>> users[0].email
u’[email protected]’
Of course, you don’t want to load all users very often. Let’s add a WHERE clause. Let’s also switch the order_by to
DESC while we’re at it:
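A sketch of such a query (the or_ criteria here are illustrative):
>>> from sqlalchemy import or_, desc
>>> db.users.filter(or_(db.users.name=='Bhargan Basepair',
...                     db.users.name=='Joe Student')).order_by(desc(db.users.name)).all()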
You can also use .first() (to retrieve only the first object from a query) or .one() (like .first() when you expect exactly one result; it will raise an exception if more than one row is returned):
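For example, reusing the user shown above:
>>> db.users.filter(db.users.name=='Bhargan Basepair').one()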
filter_by is like filter, but takes kwargs instead of full clause expressions. This makes it more concise for simple queries
like this, but you can’t do complex queries like the or_ above or non-equality based comparisons this way.
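A sketch of the same query via filter_by:
>>> db.users.filter_by(name='Bhargan Basepair').one()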
Get, filter, filter_by, order_by, limit, and the rest of the query methods are explained in detail in Querying.
Modifying objects
>>> user = _
>>> user.email = '[email protected]'
>>> db.commit()
(SqlSoup leverages the sophisticated SQLAlchemy unit-of-work code, so multiple updates to a single object will be
turned into a single UPDATE statement when you commit.)
To finish covering the basics, let’s insert a new loan, then delete it:
You can also delete rows that have not been loaded as objects. Let’s do our insert/delete cycle once more, this time
using the loans table’s delete method. (For SQLAlchemy experts: note that no flush() call is required since this delete
acts at the SQL level, not at the Mapper level.) The same where-clause construction rules apply here as to the select
methods.
You can similarly update multiple rows at once. This will change the book_id to 1 in all loans whose book_id is 2:
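Sketches of the table-level insert, delete and update (the criteria and values are illustrative):
>>> db.loans.insert(book_id=1, user_name='Bhargan Basepair')
>>> db.loans.delete(db.loans.book_id==1)
>>> db.loans.update(db.loans.book_id==2, book_id=1)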
Joins
Occasionally, you will want to pull out a lot of data from related tables all at once. In this situation, it is far more
efficient to have the database perform the necessary join. (Here we do not have a lot of data but hopefully the concept
is still clear.) SQLAlchemy is smart enough to recognize that loans has a foreign key to users, and uses that as the join
condition automatically.
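A sketch (isouter=True, requesting an outer join, is an assumption here):
>>> join1 = db.join(db.users, db.loans, isouter=True)
>>> join1.all()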
If you’re unfortunate enough to be using MySQL with the default MyISAM storage engine, you’ll have to specify
the join condition manually, since MyISAM does not store foreign keys. Here’s the same join again, with the join
condition explicitly specified:
You can compose arbitrarily complex joins by combining Join objects with tables or other joins. Here we combine our
first join with the books table:
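Continuing the sketch, combining the first join with the books table:
>>> join2 = db.join(join1, db.books)
>>> join2.all()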
If you join tables that have an identical column name, wrap your join with with_labels, to disambiguate columns with
their table name (.c is short for .columns):
>>> db.with_labels(join1).c.keys()
[u’users_name’, u’users_email’, u’users_password’, u’users_classname’, u’users_admin’, u’lo
Relationships
>>> db.users.filter(~db.users.loans.any()).all()
[MappedUsers(name=u’Bhargan Basepair’,email=’[email protected]’,password=u’ba
relate can take any options that the relationship function accepts in normal mapper definition:
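The loans relationship used above is established with relate(); a sketch with options (the loan_date column and the cascade value are illustrative):
>>> db.users.relate('loans', db.loans, order_by=db.loans.loan_date, cascade='all, delete-orphan')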
Advanced Use
Note: please read and understand this section thoroughly before using SqlSoup in any web application.
SqlSoup uses a ScopedSession to provide thread-local sessions. You can get a reference to the current one like this:
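A sketch, assuming the session attribute SqlSoup exposes:
>>> session = db.session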
The configuration of this session is autoflush=True, autocommit=False. This means when you work
with the SqlSoup object, you need to call db.commit() in order to have changes persisted. You may also call
db.rollback() to roll things back.
Since the SqlSoup object’s Session automatically enters into a transaction as soon as it’s used, it is essential that you
call commit() or rollback() on it when the work within a thread completes. This means all the guidelines for
web application integration at Lifespan of a Contextual Session must be followed.
The SqlSoup object can have any session or scoped session configured onto it. This is of key importance when
integrating with existing code or frameworks such as Pylons. If your application already has a Session configured,
pass it to your SqlSoup object:
If the Session is configured with autocommit=True, use flush() instead of commit() to persist changes; in this case, the Session closes out its transaction immediately and no external management is needed. rollback() is also not available. Configuring a new SqlSoup object in "autocommit" mode looks like:
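A sketch, assuming SqlSoup accepts a session keyword argument (the URL is illustrative):
>>> from sqlalchemy.orm import scoped_session, sessionmaker
>>> db = SqlSoup('postgresql://scott:tiger@localhost/test',
...              session=scoped_session(sessionmaker(autoflush=False, autocommit=True)))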
SqlSoup can map any SQLAlchemy Selectable with the map method. Let's map a Select object that uses an aggregate function; we'll use the SQLAlchemy Table that SqlSoup introspected as the basis. (Since we're not mapping to a simple table or join, we need to tell SQLAlchemy how to find the primary key, which just needs to be unique within the select and not necessarily correspond to a real PK in the database.)
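A sketch of such a mapping; the _table attribute, the published_year column and the db.map() keyword arguments are assumptions here:
>>> from sqlalchemy import select, func
>>> b = db.books._table
>>> s = select([b.c.published_year, func.count('*').label('n')],
...            from_obj=[b], group_by=[b.c.published_year]).alias('years_with_count')
>>> years_with_count = db.map(s, primary_key=[s.c.published_year])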
Obviously if we just wanted to get a list of counts associated with book years once, raw SQL is going to be less work.
The advantage of mapping a Select is reusability, both standalone and in Joins. (And if you go to full SQLAlchemy,
you can perform mappings like this directly to your object models.)
An easy way to save mapped selectables like this is to just hang them on your db object:
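For instance, if years_with_count is the mapped selectable from the sketch above:
>>> db.years_with_count = years_with_count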
Raw SQL
SqlSoup works fine with SQLAlchemy’s text construct, described in Using Text. You can also execute textual SQL
directly using the execute() method, which corresponds to the execute() method on the underlying Session. Expressions
here are expressed like text() constructs, using named parameters with colons:
>>> rp = db.execute(’select name, email from users where name like :name order by name’, na
>>> for name, email in rp.fetchall(): print name, email
Bhargan Basepair [email protected]
Or you can get at the current transaction’s connection using connection(). This is the raw connection object which can
accept any sort of SQL expression or raw SQL string passed to the database:
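A sketch:
>>> conn = db.connection()
>>> conn.execute("select count(*) from loans")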
You can load a table whose name is specified at runtime with the entity() method:
entity() also takes an optional schema argument. If none is specified, the default schema is used.
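A sketch (the table and schema names are illustrative):
>>> db.entity('loans')
>>> db.entity('loans', schema='myschemaname')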
9.4.6 compiler
Synopsis
Usage involves the creation of one or more ClauseElement subclasses and one or more callables defining its
compilation:
from sqlalchemy.ext.compiler import compiles
from sqlalchemy.sql.expression import ColumnClause

class MyColumn(ColumnClause):
    pass

@compiles(MyColumn)
def compile_mycolumn(element, compiler, **kw):
    return "[%s]" % element.name
Above, MyColumn extends ColumnClause, the base expression element for named column objects. The
compiles decorator registers itself with the MyColumn class so that it is invoked when the object is compiled
to a string:
from sqlalchemy import select

s = select([MyColumn('x'), MyColumn('y')])
print str(s)
Produces:
SELECT [x], [y]
Compilers can also be made dialect-specific. The appropriate compiler will be invoked for the dialect in use:
class AlterColumn(DDLElement):
    def __init__(self, column):
        # column is a Column already associated with a Table
        self.column = column
        self.table = column.table

@compiles(AlterColumn)
def visit_alter_column(element, compiler, **kw):
    return "ALTER COLUMN %s ..." % element.column.name

@compiles(AlterColumn, 'postgresql')
def visit_alter_column(element, compiler, **kw):
    return "ALTER TABLE %s ALTER COLUMN %s ..." % (element.table.name,
                                                   element.column.name)
The second visit_alter_column will be invoked when the postgresql dialect is in use.
The compiler argument is the Compiled object in use. This object can be inspected for any information about the
in-progress compilation, including compiler.dialect, compiler.statement etc. The SQLCompiler and
DDLCompiler both include a process() method which can be used for compilation of embedded attributes:
from sqlalchemy.sql.expression import Executable, ClauseElement

class InsertFromSelect(Executable, ClauseElement):
    def __init__(self, table, select):
        self.table = table
        self.select = select

@compiles(InsertFromSelect)
def visit_insert_from_select(element, compiler, **kw):
    return "INSERT INTO %s (%s)" % (
        compiler.process(element.table, asfrom=True),
        compiler.process(element.select)
    )
Produces:
"INSERT INTO mytable (SELECT mytable.x, mytable.y, mytable.z FROM mytable WHERE mytable.x >
SQL and DDL constructs are each compiled using different base compilers - SQLCompiler and DDLCompiler.
A common need is to access the compilation rules of SQL expressions from within a DDL expression. The
DDLCompiler includes an accessor sql_compiler for this reason, such as below where we generate a CHECK
constraint that embeds a SQL expression:
@compiles(MyConstraint)
def compile_my_constraint(constraint, ddlcompiler, **kw):
    return "CONSTRAINT %s CHECK (%s)" % (
        constraint.name,
        ddlcompiler.sql_compiler.process(constraint.expression)
    )
The compiler extension applies just as well to the existing constructs. When overriding the compilation of a built in
SQL construct, the @compiles decorator is invoked upon the appropriate class (be sure to use the class, i.e. Insert
or Select, instead of the creation function such as insert() or select()).
Within the new compilation function, to get at the "original" compilation routine, use the appropriate visit_XXX method; this is because compiler.process() will call upon the overriding routine and cause an endless loop. For example, to add a prefix to all INSERT statements:
@compiles(Insert)
def prefix_inserts(insert, compiler, **kw):
    return compiler.visit_insert(insert.prefix_with("some prefix"), **kw)
The above compiler will prefix all INSERT statements with “some prefix” when compiled.
compiler works for types, too, such as below where we implement the MS-SQL specific ‘max’ keyword for
String/VARCHAR:
@compiles(String, 'mssql')
@compiles(VARCHAR, 'mssql')
def compile_varchar(element, compiler, **kw):
    if element.length == 'max':
        return "VARCHAR('max')"
    else:
        return compiler.visit_VARCHAR(element, **kw)
Subclassing Guidelines
A big part of using the compiler extension is subclassing SQLAlchemy expression constructs. To make this easier, the
expression and schema packages feature a set of “bases” intended for common tasks. A synopsis is as follows:
• ClauseElement - This is the root expression class. Any SQL expression can be derived from this base, and
is probably the best choice for longer constructs such as specialized INSERT statements.
• ColumnElement - The root of all “column-like” elements. Anything that you’d place in the “columns” clause
of a SELECT statement (as well as order by and group by) can derive from this - the object will automatically
have Python “comparison” behavior.
ColumnElement classes want to have a type member which is the expression's return type. This can be established at the instance level in the constructor, or at the class level if it's generally constant:
class timestamp(ColumnElement):
    type = TIMESTAMP()
• FunctionElement - This is a hybrid of a ColumnElement and a “from clause” like object, and represents
a SQL function or stored procedure type of call. Since most databases support statements along the line of
“SELECT FROM <some function>” FunctionElement adds in the ability to be used in the FROM clause
of a select() construct:
class coalesce(FunctionElement):
    name = 'coalesce'

@compiles(coalesce)
def compile(element, compiler, **kw):
    return "coalesce(%s)" % compiler.process(element.clauses)

@compiles(coalesce, 'oracle')
def compile(element, compiler, **kw):
    if len(element.clauses) > 2:
        raise TypeError("coalesce only supports two arguments on Oracle")
    return "nvl(%s)" % compiler.process(element.clauses)
• DDLElement - The root of all DDL expressions, like CREATE TABLE, ALTER TABLE, etc. Compilation
of DDLElement subclasses is issued by a DDLCompiler instead of a SQLCompiler. DDLElement also
features Table and MetaData event hooks via the execute_at() method, allowing the construct to be
invoked during CREATE TABLE and DROP TABLE sequences.
• Executable - This is a mixin which should be used with any expression class that represents a “stan-
dalone” SQL statement that can be passed directly to an execute() method. It is already implicit within
DDLElement and FunctionElement.
9.4.7 horizontal_shard
Horizontal sharding support, allowing a Session to distribute queries and persistence operations across multiple databases.
API Documentation
class ShardedSession(shard_chooser, id_chooser, query_chooser, shards=None, **kwargs)
Construct a ShardedSession.
Parameters
• shard_chooser – A callable which, passed a Mapper, a mapped instance and possibly a SQL clause, returns a shard ID.
• id_chooser – A callable, passed a query and a tuple of identity values, which should return
a list of shard ids where the ID might reside. The databases will be queried in the order of
this listing.
• query_chooser – For a given Query, returns the list of shard_ids where the query should
be issued. Results from all shards returned will be combined together into a single listing.
• shards – A dictionary of string shard names to Engine objects.
class ShardedQuery(*args, **kwargs)
__init__(*args, **kwargs)
set_shard(shard_id)
Return a new query, limited to a single shard ID. All subsequent operations with the returned query will be against the single shard regardless of other state.