commit 2cfe1fc31c

CHANGELOG (new file, 28 lines)
0.3
- SA 0.3.10 compatibility

0.2.3
- Removed lots of SA monkeypatching in Migrate's internals
- SA 0.3.3 compatibility
- Removed logsql (#75)
- Updated py.test version from 0.8 to 0.9; added a download link to setup.py
- Fixed incorrect "function not defined" error (#88)
- Fixed SQLite and .sql scripts (#87)

0.2.2
- Deprecated driver(engine) in favor of engine.name (#80)
- Deprecated logsql (#75)
- Comments in .sql scripts don't make things fail silently now (#74)
- Errors while downgrading (and probably other places) are shown on their own line
- Created mailing list and announcements list, updated documentation accordingly
- Automated tests now require py.test (#66)
- Documentation fix to .sql script commits (#72)
- Fixed a pretty major bug involving logengine, dealing with commits/tests (#64)
- Fixes to the online docs - default DB versioning table name (#68)
- Fixed the engine name in the scripts created by the command 'migrate script' (#69)
- Added Evan's email to the online docs

0.2.1
- Created this changelog
- Now requires (and is now compatible with) SA 0.3
- Commits across filesystems now allowed (shutil.move instead of os.rename) (#62)
README (new file, 14 lines)
Help:
http://code.google.com/p/sqlalchemy-migrate/
http://groups.google.com/group/migrate-users

To run automated tests:
- Copy test_db.cfg.tmpl to test_db.cfg
- Edit test_db.cfg with database connection strings suitable for running tests.
  (Use empty databases.)
- python setup.py test

Note that py.test[1] is required to run migrate's tests. It should be
installed automatically; if not, try "easy_install py".

[1] http://codespeak.net/py/current/doc/test.html
docs/changeset.rst (new file, 108 lines)
=================
migrate.changeset
=================

.. contents::

Importing ``migrate.changeset`` adds some new methods to existing SA objects, as well as creating functions of its own. Most operations can be done either by a method or a function. Methods match SA's existing API and are more intuitive when the object is available; functions allow one to make changes when only the name of an object is available (for example, adding a column to a table in the database without having to load that table into Python).

Changeset operations can be used independently of Migrate's `versioning system`_.

For more information, see the generated documentation for `migrate.changeset`_.

.. _`migrate.changeset`: module-migrate.changeset.html
.. _`versioning system`: versioning.html

Column
======

Given a standard SQLAlchemy table::

 table = Table('mytable', meta,
     Column('id', Integer, primary_key=True),
 )
 table.create()

Create a column::

 col = Column('col1', String)
 col.create(table)

 # Column is added to table based on its name
 assert col is table.c.col1

Drop a column (not supported by SQLite_)::

 col.drop()

Alter a column (not supported by SQLite_)::

 col.alter(name='col2')

 # Renaming a column affects how it's accessed by the table object
 assert col is table.c.col2

 # Other properties can be modified as well
 col.alter(type=String(42),
     default="life, the universe, and everything",
     nullable=False,
 )

 # Given another column object, col1.alter(col2) changes col1 to match col2
 col.alter(Column('col3', String(77), nullable=True))
 assert col.nullable
 assert table.c.col3 is col

.. _sqlite: http://www.sqlite.org/lang_altertable.html

Table
=====

SQLAlchemy supports `table create/drop`_.

Rename a table::

 table.rename('newtablename')

.. _`table create/drop`: http://www.sqlalchemy.org/docs/metadata.myt#metadata_creating

Index
=====

SQLAlchemy supports `index create/drop`_.

Rename an index, given an SQLAlchemy ``Index`` object::

 index.rename('newindexname')

.. _`index create/drop`: http://www.sqlalchemy.org/docs/metadata.myt#metadata_indexes

Constraint
==========

SQLAlchemy supports creating and dropping constraints at the same time a table is created or dropped. Migrate adds support for creating and dropping primary/foreign key constraints independently.

Primary key constraints::

 cons = PrimaryKeyConstraint(col1, col2)
 # Create the constraint
 cons.create()
 # Drop the constraint
 cons.drop()

Note that Oracle requires that you state the name of the primary key constraint to be created/dropped. Migrate will try to guess the name of the PK constraint for other databases, but if it's something other than the default, you'll need to give its name::

 PrimaryKeyConstraint(col1, col2, name='my_pk_constraint')

Foreign key constraints::

 cons = ForeignKeyConstraint([table.c.fkey], [othertable.c.id])
 # Create the constraint
 cons.create()
 # Drop the constraint
 cons.drop()

Names are specified just as with primary key constraints::

 ForeignKeyConstraint([table.c.fkey], [othertable.c.id], name='my_fk_constraint')
docs/download.rst (new file, 38 lines)
=======
Migrate
=======

Download
========

Migrate builds on SQLAlchemy_, so you should install that first.

You can get the latest version of Migrate from the `cheese shop`_, or via easy_install_::

 easy_install migrate

You should now be able to use the *migrate* command from the command line::

 migrate

This should list all available commands. *migrate help COMMAND* will display more information about each command.

If you'd like to be notified when new versions of migrate are released, subscribe to `migrate-announce`_.

.. _easy_install: http://peak.telecommunity.com/DevCenter/EasyInstall#installing-easy-install
.. _sqlalchemy: http://www.sqlalchemy.org/download.myt
.. _`cheese shop`: http://www.python.org/pypi/migrate
.. _`migrate-announce`: http://groups.google.com/group/migrate-announce

Development
===========

Migrate's Subversion_ repository is at http://erosson.com/migrate/svn/

To get the latest trunk::

 svn co http://erosson.com/migrate/svn/migrate/trunk

Patches should be submitted as Trac tickets.

.. _subversion: http://subversion.tigris.org/
docs/historical/ProjectDesignDecisionsAutomation.trac (new file, 26 lines)
There are many migrations that don't require a lot of thought - for example, if we add a column to a table definition, we probably want to have an "ALTER TABLE...ADD COLUMN" statement show up in our migration.

The difficulty lies in the automation of changes where the requirements aren't obvious. What happens when you add a unique constraint to a column whose data is not already unique? What happens when we split an existing table in two? Completely automating database migrations is not possible.

That said - we shouldn't have to hunt down and handwrite the ALTER TABLE statements for every new column; this is often just tedious. Many other common migration tasks require little serious thought; such tasks are ripe for automation. Any automation attempted, however, should not interfere with our ability to write scripts by hand if we so choose; our tool should ''not'' be centered around automation.

Automatically generating the code for this sort of task seems like a good solution:
 * It does not obstruct us from writing changes by hand; if we don't like the autogenerated code, delete it or don't generate it to begin with
 * We can easily add other migration tasks to the autogenerated code
 * We can see right away if the code is what we're expecting, or if it's wrong
 * If the generated code is wrong, it is easily modified; we can use parts of the generated code, rather than being required to use either 100% or 0%
 * Maintenance, usually a problem with auto-generated code, is not an issue: old database migration scripts are not the subject of maintenance; the correct solution is usually a new migration script.

Implementation is a problem: finding the 'diff' of two databases to determine what columns to add is not trivial. Fortunately, there exist tools that claim to do this for us: [http://sqlfairy.sourceforge.net/ SQL::Translator] and [http://xml2ddl.berlios.de/ XML to DDL] both claim to have this capability.

...

All that said, this is ''not'' something I'm going to attempt during the Summer of Code.
 * I'd have to rely tremendously on a tool I'm not at all familiar with
 * It creates a risk of the project itself relying too much on the automation, a Bad Thing
 * The project has a deadline, and I have plenty else to do already
 * Lots of people with more experience than me say this would take more time than it's worth

It's something that might be considered for future work if this project is successful, though.
docs/historical/ProjectDesignDecisionsScriptFormat.trac (new file, 147 lines)
Important to our system is the API used for making database changes.

=== Raw SQL; .sql script ===
Require users to write raw SQL. Migration scripts are .sql scripts (with database version information in a header comment).

+ Familiar interface for experienced DBAs.

+ No new API to learn[[br]]
SQL is used elsewhere; many people know SQL already. Those who are still learning SQL will gain expertise not in the API of a specific tool, but in a language which will help them elsewhere. (On the other hand, those who are familiar with Python but have no desire to learn SQL might find a Python API more intuitive.)

- Difficult to extend when necessary[[br]]
.sql scripts mean that we can't write new functions specific to our migration system when necessary. (We can't always assume that the DBMS supports functions/procedures.)

- Lose the power of Python[[br]]
Some things are possible in Python that aren't in SQL - for example, suppose we want to use some functions from our application in a migration script. (The user might also simply prefer Python.)

- Loss of database independence.[[br]]
There isn't much we can do to specify different actions for a particular DBMS besides copying the .sql file, which is obviously bad form.

=== Raw SQL; Python script ===
Require users to write raw SQL. Migration scripts are Python scripts whose API does little beyond specifying what DBMS(es) a particular statement should apply to.

For example,
{{{
run("CREATE TABLE test[...]")  # runs for all databases
run("ALTER TABLE test ADD COLUMN varchar2[...]", oracle)  # runs for Oracle only
run("ALTER TABLE test ADD COLUMN varchar[...]", postgres|mysql)  # runs for Postgres or MySQL only
}}}

We could also allow parts of a single statement to apply to a specific DBMS:
{{{
run("ALTER TABLE test ADD COLUMN " + sql("varchar", postgres|mysql) + sql("varchar2", oracle))
}}}
or, the same thing:
{{{
run("ALTER TABLE test ADD COLUMN " + sql("varchar", postgres|mysql, "varchar2", oracle))
}}}

+ Allows the user to write migration scripts for multiple DBMSes.

- The user must manage the conflicts between different databases themselves. [[br]]
The user can write scripts to deal with conflicts between databases, but they're not really database-independent: the user has to deal with conflicts between databases; our system doesn't help them.

+ Minimal new API to learn. [[br]]
There is a new API to learn, but it is extremely small, depending mostly on SQL DDL. This has the advantages of "no new API" in our first solution.

- More verbose than .sql scripts.
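The engine-flag dispatch sketched above can be expressed in plain Python with bit flags. This is a hypothetical illustration of the proposal, not a real Migrate API; the names (`run`, `postgres`, `mysql`, `oracle`) mirror the examples above:

```python
# Hypothetical sketch of the run()/engine-flag API proposed above.
# Engine names are bit flags so they combine with "|"; nothing here
# is a real Migrate API.
postgres, mysql, oracle, sqlite = 1, 2, 4, 8
ALL = postgres | mysql | oracle | sqlite

CURRENT_ENGINE = postgres   # which DBMS this particular run targets
executed = []               # stands in for actually executing SQL

def run(statement, engines=ALL):
    """Run `statement` only if the current engine is among `engines`."""
    if engines & CURRENT_ENGINE:
        executed.append(statement)

run("CREATE TABLE test (id integer)")                        # all engines
run("ALTER TABLE test ADD col varchar2(10)", oracle)         # skipped here
run("ALTER TABLE test ADD col varchar(10)", postgres|mysql)  # runs here
```

On a Postgres run only the first and third statements execute; the same script run with `CURRENT_ENGINE = oracle` would execute the first and second instead.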
=== Raw SQL; automatic translation between each dialect ===
Same as the above suggestion, but allow the user to specify a 'default' dialect of SQL that we'll interpret and whose quirks we'll deal with.
That is, write everything in SQL and try to automatically resolve the conflicts of different DBMSes.

For example, take the following script:
{{{
engine=postgres

run("""
CREATE TABLE test (
    id serial
)
""")
}}}
Running this on a Postgres database, surprisingly enough, would generate exactly what we typed:
{{{
CREATE TABLE test (
    id serial
)
}}}

Running it on a MySQL database, however, would generate something like
{{{
CREATE TABLE test (
    id integer auto_increment
)
}}}

+ Database-independence issues of the above SQL solutions are resolved.[[br]]
Ideally, this solution would be as database-independent as a Python API for database changes (discussed next), but with all the advantages of writing SQL (no new API).

- Difficult implementation[[br]]
Obviously, this is not easy to implement - there is a great deal of parsing logic and a great many things that need to be accounted for. In addition, this is a complex operation; any implementation will likely have errors somewhere.

It seems tools for this already exist; an effective tool would trivialize this implementation. I experimented a bit with [http://sqlfairy.sourceforge.net/ SQL::Translator] and [http://xml2ddl.berlios.de/ XML to DDL]; however, I had difficulties with both.

- Database-specific features ensure that this cannot possibly be "complete". [[br]]
For example, Postgres has an 'interval' type to represent times and (AFAIK) MySQL does not.

=== Database-independent Python API ===
Create a Python API through which we may manage database changes. Scripts would be based on the existing SQLAlchemy API when possible.

Scripts would look something like
{{{
# Create a table
test_table = table('test',
    Column('id', Integer, notNull=True),
)
test_table.create()
# Add a column to an existing table
test_table.add_column('id', Integer, notNull=True)
# Or, use a column object instead of its parameters
test_table.add_column(Column('id', Integer, notNull=True))
# Or, don't use a table object at all
add_column('test', 'id', Integer, notNull=True)
}}}
This would use engines, similar to SQLAlchemy's, to deal with database-independence issues.

We would, of course, allow users to write raw SQL if they wish. This would be done in the manner outlined in the second solution above; this allows us to write our entire script in SQL and ignore the Python API if we wish, or write parts of our solution in SQL to deal with specific databases.

+ Deals with database-independence thoroughly and with minimal user effort.[[br]]
SQLAlchemy-style engines would be used for this; issues of different DBMS syntax are resolved with minimal user effort. (Database-specific features would still need handwritten SQL.)

+ Familiar interface for SQLAlchemy users.[[br]]
In addition, we can often cut-and-paste column definitions from SQLAlchemy tables, easing one particular task.

- Requires that the user learn a new API. [[br]]
SQL already exists; people know it. SQL newbies might be more comfortable with a Python interface, but folks who already know SQL must learn a whole new API. (On the other hand, the user *can* write things in SQL if they wish, learning only the most minimal of APIs, if they are willing to resolve issues of database independence themselves.)

- More difficult to implement than pure SQL solutions. [[br]]
SQL already exists and has been tested; a new Python API has not, and much of the work seems to consist of little more than reinventing the wheel.

- Script behavior might change under different versions of the project,[[br]]
...where .sql scripts behave the same regardless of the project's version.

=== Generate .sql scripts from a Python API ===
This attempts to take the best of the first and last solutions. An API similar to the previous solution would be used, but rather than immediately being applied to the database, .sql scripts are generated for each type of database we're interested in. These .sql scripts are what's actually applied to the database.

This would essentially allow users to skip the Python script step entirely if they wished, and write migration scripts in SQL instead, as in solution 1.

+ Database-independence is an option, when needed.

+ A familiar interface/an interface that can interact with other tools is an option, when needed.

+ Easy to inspect the SQL generated by a script, to ensure it's what we're expecting.

+ Migration scripts won't change behavior across different versions of the project. [[br]]
Once a Python script is translated to a .sql script, its behavior is consistent across different versions of the project, unlike a pure Python solution.

- Multiple ways to do a single task: not Pythonic.[[br]]
I never really liked that word - "Pythonic" - but it does apply here. Multiple ways to do a single task has the potential to cause confusion, especially in a large project if many people do the same task different ways. We have to support both ways of doing things, as well.
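The generate-.sql-from-Python idea can be sketched in a few lines. The class and method names here (`SQLScriptWriter`, `add_column`) are invented for illustration, and real dialect handling would be far more involved than a spelling table:

```python
# Hypothetical sketch of generating per-dialect .sql scripts from a
# Python API; the class and method names are invented for illustration.
TYPE_SPELLINGS = {
    "oracle": {"varchar": "varchar2"},   # e.g. the varchar/varchar2 split above
}

class SQLScriptWriter:
    """Buffers ALTER statements for one dialect instead of executing them."""
    def __init__(self, dialect):
        self.dialect = dialect
        self.statements = []

    def add_column(self, table, name, coltype):
        # Pick the dialect-specific spelling of the column type, if any
        spelled = TYPE_SPELLINGS.get(self.dialect, {}).get(coltype, coltype)
        self.statements.append(
            "ALTER TABLE %s ADD COLUMN %s %s;" % (table, name, spelled))

    def dump(self):
        """Return the buffered statements as the text of a .sql script."""
        return "\n".join(self.statements)

oracle_script = SQLScriptWriter("oracle")
oracle_script.add_column("test", "name", "varchar")
```

Running the same calls against writers for each dialect yields one inspectable .sql script per DBMS, which is the point of this solution.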
----

'''Conclusion''': The last solution, generating .sql scripts from a Python API, seems to be best.

The first solution (.sql scripts) suffers from a lack of database-independence, but is familiar to experienced database developers, useful with other tools, and shows exactly what will be done to the database. The Python API solution has no trouble with database-independence, but suffers from other problems that the .sql solution doesn't. The last solution resolves both reasonably well. Multiple ways to do a single task might be called "not Pythonic", but IMO, the trade-off is worth this cost.

Automatic translation between different dialects of SQL might have potential for use in a solution, but existing tools for this aren't reliable enough, as far as I can tell.
docs/historical/ProjectDesignDecisionsVersioning.trac (new file, 56 lines)
An important aspect of this project is database versioning. For migration scripts to be most useful, we need to know what version the database is: that is, has a particular migration script already been run?

An option not discussed below is "no versioning"; that is, simply apply any script we're given, and rely on the user to ensure it's valid. This is entirely too error-prone to seriously consider, and takes a lot of the usefulness out of the proposed tool.

=== Database-wide version numbers ===
A single integer version number would specify the version of each database. This is stored in the database in a table, let's call it "schema"; each migration script is associated with a certain database version number.

+ Simple implementation[[br]]
Of the 3 solutions presented here, this one is by far the simplest.

+ Past success[[br]]
Used in [http://www.rubyonrails.org/ Ruby on Rails' migrations].

~ Can detect corrupt schemas, but requires some extra work and a *complete* set of migrations.[[br]]
If we have a set of database migration scripts that build the database from the ground up, we can apply them in sequence to a 'dummy' database, dump a diff of the real and dummy schemas, and expect a valid schema to match the dummy schema.

- Requires changes to the database schema.[[br]]
Not a tremendous change - a single table with a single column and a single row - but a change nonetheless.
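As a minimal sketch of the database-wide version number, using Python's stdlib sqlite3 module: the single-row table is called "schema" as in the text, while the column name and helper functions are assumptions made for illustration.

```python
# Sketch of a database-wide version table, per the design above.
# The single-row "schema" table name comes from the text; the
# "version" column and helper functions are assumed.
import sqlite3

def current_version(conn):
    return conn.execute("SELECT version FROM schema").fetchone()[0]

def set_version(conn, version):
    conn.execute("UPDATE schema SET version = ?", (version,))

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE schema (version integer)")
conn.execute("INSERT INTO schema VALUES (0)")

# A migration runner would bump the version after each script it applies
set_version(conn, current_version(conn) + 1)
```

A migration tool would compare `current_version` against the highest script version in the repository to decide which scripts still need to run.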
=== Table/object-specific version numbers ===
Each database "object" - usually tables, though we might also deal with other database objects, such as stored procedures or Postgres' sequences - would have a version associated with it, initially 1. These versions are stored in a table, let's call it "schema". This table has two columns: the name of the database object and its current version number.

+ Allows us to write migration scripts for a subset of the database.[[br]]
If we have multiple people working on a very large database, we may want to write migration scripts for a section of the database without stepping on another person's work. This allows unrelated changes to proceed independently.
- Requires changes to the database schema.[[br]]
Similar to the database-wide version number; the contents of the new table are more complex, but still shouldn't conflict with anything.

- More difficult to implement than a database-wide version number.

- Determining the version of database-specific objects (ie. stored procedures, functions) is difficult.

- Ultimately gains nothing over the previous solution.[[br]]
The intent here was to allow multiple people to write scripts for a single database, but if database-wide version numbers aren't assigned until the script is placed in the repository, we could already do this.

=== Version determined via introspection ===
Each script has a schema associated with it, rather than a version number. The database schema is loaded, analyzed, and compared to the schema expected by the script.

+ No modifications to the database are necessary for this versioning system.[[br]]
The primary advantage here is that no changes to the database are required.

- Most difficult solution to implement, by far.[[br]]
Comparing the state of every schema object in the database is much more complex than simply comparing a version number, especially since we need to do it in a database-independent way (ie. we can't just diff the dump of each schema). SQLAlchemy's reflection would certainly be very helpful, but this remains the most complex solution.

+ "Automatically" detects corrupt schemas.[[br]]
A corrupt schema won't match any migration script.

- Difficult to deal with corrupt schemas.[[br]]
When version numbers are stored in the database, you have some idea of where an error occurred. Without this, we have no idea what version the database was in before corruption.

- Potential ambiguity: what if two database migration scripts expect the same schema?

----

'''Conclusion''': database-wide version numbers are the best way to go.
docs/historical/ProjectDetailedDesign.trac (new file, 29 lines)
This is very much a draft/brainstorm right now. It should be made prettier and thought about in more detail later, but it at least gives some idea of the direction we're headed right now.
----
 * Two distinct tools; should not be coupled (can work independently):
   * Versioning tool
     * Command line tool; let's call it "samigrate"
     * Organizes old migration scripts into repositories
     * Runs groups of migration scripts on a database, updating it to a specified version/latest version
     * Helps run various tests
     * usage
       * "samigrate create PATH": Create project migration-script repository
         * We shouldn't have to enter the path for every other command. Use a hidden file
         * (This means we can't move the repository after it's created. Oh well)
       * "samigrate add SCRIPT [VERSION]": Add script to this project's repository; latest version
         * If a .sql script: how to determine engine, operation (up/down)? Options:
           * specify at the command line: "samigrate add SCRIPT UP_OR_DOWN ENGINE"
           * naming convention: SCRIPT is named something like NAME.postgres.up.sql
       * "samigrate upgrade CONNECTION_STRING [VERSION] [SCRIPT...]": connect to the specified database and upgrade (or downgrade) it to the specified version (default latest)
         * If SCRIPT... specified: act like these scripts are in the repository (useful for testing?)
       * "samigrate dump CONNECTION_STRING [VERSION] [SCRIPT...]": like upgrade, but sends all SQL to stdout instead of the db
       * (Later: some more commands, to be used for script testing tools)
   * Alchemy API extensions for altering schema
     * Operations here are DB-independent
     * Each database modification is a script that may use this API
     * Can handwrite SQL for all databases or a single database
     * upgrade()/downgrade() functions: need only one file for both operations
       * .sql scripts require either (2 files, *.up.sql; *.down.sql) or (don't use downgrade)
     * usage
       * "python NAME.py ENGINE up": upgrade sql > stdout
       * "python NAME.py ENGINE down": downgrade sql > stdout
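The upgrade()/downgrade() script shape drafted above might look like the following. This is a hedged sketch: the invocation ("python NAME.py ENGINE up") and the two-function layout come from the draft, while the SQL and argument handling are illustrative assumptions.

```python
# Sketch of a change script exposing upgrade()/downgrade() in one file,
# invoked as "python NAME.py ENGINE up" (or "down") and printing SQL to
# stdout, per the draft. The SQL and engine handling are illustrative.
import sys

def upgrade(engine):
    # `engine` lets a script branch into DBMS-specific SQL when needed
    return "ALTER TABLE test ADD COLUMN profile text;"

def downgrade(engine):
    return "ALTER TABLE test DROP COLUMN profile;"

def main(argv):
    engine, direction = argv[1], argv[2]
    print(upgrade(engine) if direction == "up" else downgrade(engine))

if __name__ == "__main__" and len(sys.argv) >= 3:
    main(sys.argv)
```

Because the script writes SQL to stdout, the same file serves both the "apply directly" and the "dump to a .sql script" workflows described above.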
docs/historical/ProjectGoals.trac (new file, 50 lines)
== Goals ==

=== DBMS-independent schema changes ===
Many projects need to run on more than one DBMS. Similar changes need to be applied to both types of databases upon a schema change. The usual solution to database changes - .sql scripts with ALTER statements - runs into problems since different DBMSes have different dialects of SQL; we end up having to create a different script for each DBMS. This project will simplify this by providing an API, similar to the table definition API that already exists in SQLAlchemy, to alter a table independent of the DBMS being used, where possible.

This project will support all DBMSes currently supported by SQLAlchemy: SQLite, Postgres, MySQL, Oracle, and MS SQL. Adding support for more should be as easy as it is in SQLAlchemy.

Many are already used to writing .sql scripts for database changes, aren't interested in learning a new API, and have projects where DBMS-independence isn't an issue. Writing SQL statements as part of a (Python) change script must be an option, of course. Writing change scripts as .sql scripts, eliminating Python scripts from the picture entirely, would be nice too, although this is a lower-priority goal.

=== Database versioning and change script organization ===
Once we've accumulated a set of change scripts, it's important to know which ones have been applied/need to be applied to a particular database: suppose we need to upgrade a database that's extremely out-of-date; figuring out the scripts to run by hand is tedious. Applying changes in the wrong order, or applying changes when they shouldn't be applied, is bad; attempting to manage all of this by hand inevitably leads to an accident. This project will be able to detect the version of a particular database and apply the scripts required to bring it up to the latest version, or up to any specified version number (given all change scripts required to reach that version number).

Sometimes we need to be able to revert a schema to an older version. There's no automatic way to do this without rebuilding the database from scratch, so our project will allow one to write scripts to downgrade the database as well as upgrade it. If such scripts have been written, we should be able to apply them in the correct order, just like upgrading.

Large projects inevitably accumulate a large number of database change scripts; it's important that we have a place to keep them. Once a script has been written, this project will deal with organizing it among existing change scripts, and the user will never have to look at it again.

=== Change testing ===
It's important to test one's database changes before applying them to a production database (unless you happen to like disasters). Much testing is up to the user and can't be automated, but there are a few places we can help ensure at least a minimal level of schema integrity. A few examples are below; we could add more later.

Given an obsolete schema, a database change script, and an up-to-date schema known to be correct, this project will be able to ensure that applying the change script to the obsolete schema will result in an up-to-date schema - all without actually changing the obsolete database. Folks who have SQLAlchemy create their database using table.create() might find this useful; this is also useful for ensuring database downgrade scripts are correct.

Given a schema of a known version and a complete set of change scripts up to that version, this project will be able to detect if the schema matches its version. If a schema has gone through changes not present in migration scripts, this test will fail; if applying all scripts in sequence up to the specified version creates an identical schema, this test will succeed. Identifying that a schema is corrupt is sufficient; it would be nice if we could give a clue as to what's wrong, but this is lower priority. (Implementation: we'll probably show a diff of two schema dumps; this should be enough to tell the user what's gone wrong.)
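The first check above - apply the change script to a scratch copy of the obsolete schema and compare the result against the known-good schema - can be sketched with the stdlib sqlite3 module. The helper name and the example table are illustrative, not part of the project:

```python
# Sketch of the change-testing idea above: build both schemas in
# scratch in-memory databases and compare their column layouts.
# The helper name and example table are illustrative only.
import sqlite3

def columns(statements, table):
    """Apply DDL to a scratch database; return the table's (name, type) pairs."""
    conn = sqlite3.connect(":memory:")
    for stmt in statements:
        conn.execute(stmt)
    return [(row[1], row[2]) for row in
            conn.execute("PRAGMA table_info(%s)" % table)]

obsolete = ["CREATE TABLE person (id integer, name varchar(80))"]
change_script = ["ALTER TABLE person ADD COLUMN profile text"]
up_to_date = ["CREATE TABLE person (id integer, name varchar(80), profile text)"]

# The change script is correct if both paths yield the same schema
assert columns(obsolete + change_script, "person") == columns(up_to_date, "person")
```

A real implementation would compare every schema object, not just one table's columns, and would do so through the DBMS-independent reflection mentioned in the text rather than an SQLite pragma.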
||||
|
||||
== Non-Goals ==
|
||||
ie. things we will '''not''' try to do (at least, during the Summer of Code)
|
||||
|
||||
=== Automatic generation of schema changes ===
|
||||
For example, one might define a table:
|
||||
{{{
|
||||
CREATE TABLE person (
|
||||
id integer,
|
||||
name varchar(80)
|
||||
);
|
||||
}}}
|
||||
Later, we might add additional columns to the definition:
|
||||
{{{
|
||||
CREATE TABLE person (
|
||||
id integer,
|
||||
name varchar(80),
|
||||
profile text
|
||||
);
|
||||
}}}
|
||||
It might be nice if a tool could look at both table definitions and spit out a change script; something like
|
||||
{{{
|
||||
ALTER TABLE person ADD COLUMN profile text;
|
||||
}}}
|
||||
This is a difficult problem for a number of reasons. I have no intention of tackling this problem as part of the Summer of Code. This project aims to give you a better way to write that ALTER statement and make sure it's applied correctly, not to write it for you.
|
||||
|
||||
(Using an [http://sqlfairy.sourceforge.net/ existing] [http://xml2ddl.berlios.de/ tool] to add this sort of thing later might be worth looking into, but it will not be done during the Summer of Code. Among other reasons, methinks it's best to start with a system that isn't dependent on this sort of automation.)
73
docs/historical/ProjectProposal.txt
Normal file
@ -0,0 +1,73 @@
Evan Rosson

Project
---
SQLAlchemy Schema Migration


Synopsis
---
SQLAlchemy is an excellent object-relational database mapper for Python projects. Currently, it does a fine job of creating a database from scratch, but provides no tool to assist the user in modifying an existing database. This project aims to provide such a tool.


Benefits
---
Application requirements change; a database schema must be able to change with them. It's possible to write SQL scripts that make the proper modifications without any special tools, but this setup quickly becomes difficult to manage - when we need to apply multiple updates to a database, organize old migration scripts, or have a single application support more than one DBMS, a tool to support database changes becomes necessary. This tool will aid in organizing migration scripts, applying multiple updates (or removing updates to revert to an old version), and creating DBMS-independent migration scripts.

Writing one's schema migration scripts by hand often results in problems when dealing with multiple obsolete database instances - we must figure out which scripts are necessary to bring the database up-to-date. Database versioning tools are helpful for this task; this project will track the version of a particular database to determine which scripts are necessary to update an old schema.


Description
---
The migration system used by Ruby on Rails has had much success, and for good reason - the system is easy to understand, generally database-independent, as powerful as the application itself, and capable of dealing nicely with a schema that has multiple instances of different versions. A migration system similar to that of Rails is a fine place to begin this project.

Each instance of the schema will have a version associated with it; this version is tracked using a single table with a single row and a single integer column. A set of changes to the database schema will increment the schema's version number; each migration script will be associated with a schema version.
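As a rough illustration of this single-row version table (the table name 'migrate_version' is Migrate's default; the code itself is only a sketch, not the tool's implementation):

```python
# Sketch of a single-row, single-column version table, as described above.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE migrate_version (version INTEGER NOT NULL)")
conn.execute("INSERT INTO migrate_version (version) VALUES (0)")

def bump(conn):
    """Increment the schema version, as applying one change script would."""
    conn.execute("UPDATE migrate_version SET version = version + 1")
    return conn.execute("SELECT version FROM migrate_version").fetchone()[0]

print(bump(conn))  # 1
```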

A migration script will be written by the user, and will consist of two functions:
 - upgrade(): brings an old database up-to-date, from version n-1 to version n
 - downgrade(): reverts an up-to-date database to the previous schema; an 'undo' for upgrade()

When applying multiple updates to an old schema instance, migration scripts are applied in sequence: when updating a schema to version n from version n-2, two migration scripts are run; n-2 => n-1 => n.
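A minimal sketch of this sequencing logic, with hypothetical names (the real tool reads the current version from the database rather than taking it as an argument):

```python
# Apply each migration step between the current and target versions in order.
def upgrade_to(current, target, scripts):
    """Run scripts[v] for each version v in (current, target]."""
    applied = []
    for version in range(current + 1, target + 1):
        scripts[version]()          # each script's upgrade() step
        applied.append(version)
    return applied

log = []
scripts = {v: (lambda v=v: log.append(v)) for v in (2, 3)}
print(upgrade_to(1, 3, scripts))  # [2, 3]: n-2 => n-1 => n
```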

A command-line tool will create empty migration scripts (empty upgrade()/downgrade() functions), display the SQL that a migration script will generate for a particular DBMS, and apply migration scripts to a specified database.

This project will implement the command-line tool that manages the above functionality. It will also extend SQLAlchemy with the functions necessary to construct DBMS-independent migration scripts: in particular, column creation/deletion/alteration and the ability to rename existing tables/indexes/columns will be implemented. We'll also need a way to write raw SQL for a specific DBMS (or set of DBMSes) for situations where our abstraction doesn't fit a script's requirements. The creation and deletion of tables and indexes are operations already provided by SQLAlchemy.


On DBMS support - I intend to support MySQL, Postgres, SQLite, Oracle, and MS-SQL by the end of the project. (Update: I previously omitted support for Oracle and MS-SQL because I didn't have access to the full version of each; I wasn't aware Oracle Lite and MS-SQL Express were available for free.) The system will be abstracted in such a way that adding support for other databases will be no more difficult than adding support for them in SQLAlchemy.


Schedule
---
This project will be my primary activity this summer. Unfortunately, I am in school until June 9, after things begin, but I can still start the project during that period. I have no other commitments this summer - I can easily make up any lost time.
I will be spending my spare time this summer further developing my online game (discussed below), but this has no deadline and will not interfere with the project proposed here.

I'll begin by familiarizing myself with the internals of SQLAlchemy and creating a detailed plan for the project. This plan will be reviewed by the current SQLAlchemy developers and other potential users, and will be modified based on their feedback. It will be completed no later than May 30, one week after SoC begins.

Development will follow, in this order:
 - The database versioning system. This will manage the creation and application of (initially empty) migration scripts. Complete by June 16.
   - Access the database; read/update the schema's version number
   - Apply a single (empty) script to the database
   - Apply a set of (empty) scripts to upgrade/downgrade the database to a specified version; examine all migration scripts and apply those needed to update the database to the latest version available
 - An API for table/column alterations, to make the above system useful. Complete by August 11.
   - Implement an empty API - does nothing at this point, but written in such a way that syntax for each supported DBMS may be added as a module. Completed June 26-30, the mid-project review deadline.
   - Implement/test the above API for a single DBMS (probably Postgres, as I'm familiar with it). Users should be able to test the 'complete' application with this DBMS.
   - Implement the database modification API for the other supported databases

All development will have unit tests written where appropriate. Unit testing the SQL generated for each DBMS will be particularly important.

The project will finish with various wrap-up activities, documentation, and some final tests, to be completed by the project deadline.


About me
---
I am a 3rd-year BS Computer Science student at Cal Poly, San Luis Obispo, California, USA, currently applying for a Master's degree in CS from the same school. I've taken several classes dealing with databases, though much of what I know on the subject is self-taught. Outside of class, I've developed a browser-based online game, Zeal, at http://zealgame.com ; it has been running for well over a year and has gone through many changes. It has taught me firsthand the importance of using appropriate tools and designing one's application well early on (largely through the pain that follows when you don't); I've learned a great many other things from the experience as well.

One recurring problem I've had with this project is dealing with changes to the database schema. I've thought much about how I'd like to see this solved, but hadn't done much to implement it.

I'm now working on another project that will be making use of SQLAlchemy: it fits many of my project's requirements, but lacks a much-needed migration tool. This presents an opportunity for me to make my first contribution to open source - I've long been interested in open source software and use it regularly, but haven't contributed to any until now. I'm particularly interested in the application of this tool within the TurboGears framework, as this project was inspired by a suggestion on the TurboGears mailing list and I'm working on a project using TurboGears - but there is no reason to couple an SQLAlchemy enhancement with TurboGears; this project may be used by anyone who uses SQLAlchemy.


Further information:
http://evan.zealgame.com/soc
56
docs/historical/RepositoryFormat.trac
Normal file
56
docs/historical/RepositoryFormat.trac
Normal file
@ -0,0 +1,56 @@
|
||||
This plan has several problems and has been modified; the new plan is discussed in wiki:RepositoryFormat2

----

One problem with [http://www.rubyonrails.org/ Ruby on Rails'] (very good) schema migration system is the behavior of scripts that depend on outside sources; i.e., the application. If those change, there's no guarantee that such scripts will behave as they did before, and you'll get strange results.

For example, suppose one defines a SQLAlchemy table:
{{{
users = Table('users', metadata,
    Column('user_id', Integer, primary_key=True),
    Column('user_name', String(16), nullable=False),
    Column('password', String(20), nullable=False)
)
}}}
and creates it in a change script:
{{{
from project import table

def upgrade():
    table.users.create()
}}}

Suppose we later add a column to this table. We write an appropriate change script:
{{{
from project import table

def upgrade():
    # This syntax isn't set in stone yet
    table.users.add_column('email_address', String(60), key='email')
}}}
...and change our application's table definition:
{{{
users = Table('users', metadata,
    Column('user_id', Integer, primary_key=True),
    Column('user_name', String(16), nullable=False),
    Column('password', String(20), nullable=False),
    Column('email_address', String(60), key='email') # new column
)
}}}

Modifying the table definition changes how our first script behaves - it will now create the table with the new column. This might work if we only apply change scripts to a few databases that are always kept up-to-date (or very close), but we'll run into errors eventually if our migration scripts' behavior isn't consistent.

----

One solution is to generate .sql files from a Python change script at the time it's added to a repository. The SQL generated by the script for each database is set in stone at this point; changes to outside files won't affect it.

This limits what change scripts are capable of - we can't write dynamic SQL; i.e., we can't do something like this:
{{{
for row in db.execute("select id from table1"):
    db.execute("insert into table2 (table1_id, value) values (:id, 42)", **row)
}}}
But SQL is usually powerful enough that the above is rarely necessary in a migration script:
{{{
db.execute("insert into table2 select id, 42 from table1")
}}}
This is a reasonable solution. The limitations aren't serious (everything possible in a traditional .sql script is still possible), and change scripts are much less prone to error.
28
docs/historical/RepositoryFormat2.trac
Normal file
@ -0,0 +1,28 @@
My original plan for Migrate's RepositoryFormat had several problems:

 * Bind parameters: We needed to bind parameters into statements to get something suitable for an .sql file. For some types of parameters, there's no clean way to do this without writing an entire parser - too great a cost for this project. There's a reason why SQLAlchemy's logs display the statement and its parameters separately: the binding is done at a lower level than we have access to.
 * Failure: Discussed in #17, the old format had no easy way to find the Python statements associated with an SQL error. This makes it difficult to debug scripts.

A new format will be used to solve these problems instead.
As in our previous solution, one file will be created per version/operation/DBMS (version_1.upgrade.postgres.sql, for example).
These files will contain the following information:

 * The dialect used to perform the logging. In particular:
   * The paramstyle expected by the dbapi
   * The DBMS this log applies to
 * Information on each logged SQL statement, each of which contains:
   * The text of the statement
   * Parameters to be bound to the statement
   * A Python stack trace at the point the statement was logged - this allows us to tell what Python statements are associated with an SQL statement when there's an error

These files will be created by pickling a Python object with the above information.

Such files may be executed by loading the log and having SQLAlchemy execute the statements as it might have before.

Good:
 * Since the statements and bind parameters are stored separately and executed as SQLAlchemy would normally execute them, one problem discussed above is eliminated.
 * Storing the stack trace at the point each statement was logged allows us to identify what Python statements are responsible for an SQL error. This makes it much easier for users to debug their scripts.

Bad:
 * It's less trivial to commit .sql scripts to our repository, since they're no longer used internally. This isn't a huge loss, and .sql commits can still be implemented later if need be.
 * There's some danger of script behavior changing if changes are made to the dbapi the script is associated with. The primary place where problems would occur is during parameter binding, but the chance of this changing significantly isn't large. The danger of changes in behavior due to changes in the user's application is not affected.
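A hypothetical sketch of such a log entry and its round trip through pickle (the field names and structure here are illustrative, not the actual on-disk format):

```python
# Each logged statement keeps its text, bind parameters, and the Python
# stack at logging time, so an SQL error can be traced back to its source.
import pickle
import traceback

def log_statement(log, text, params):
    log.append({
        "statement": text,
        "parameters": params,
        "stack": traceback.format_stack(),  # ties SQL back to Python source
    })

log = []
log_statement(log, "INSERT INTO t (x) VALUES (:x)", {"x": 42})

blob = pickle.dumps(log)        # one pickled file per version/operation/DBMS
restored = pickle.loads(blob)
print(restored[0]["statement"])  # INSERT INTO t (x) VALUES (:x)
```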
23
docs/index.rst
Normal file
@ -0,0 +1,23 @@
=======
Migrate
=======

SQLAlchemy schema change management
-----------------------------------

Inspired by Ruby on Rails' migrations, Migrate provides a way to deal with database schema changes in SQLAlchemy_ projects.

Migrate was started as part of `Google's Summer of Code`_ by Evan Rosson, mentored by Jonathan LaCour.

- Download_

- Documentation:

  * Versioning_: version tracking/update management for your database schema

  * Changeset_: database-independent schema changes; ALTER TABLE with SQLAlchemy

.. _`google's summer of code`: http://code.google.com/soc
.. _download: download.html
.. _versioning: versioning.html
.. _changeset: changeset.html
.. _sqlalchemy: http://www.sqlalchemy.org
288
docs/theme/almodovar.css
vendored
Normal file
@ -0,0 +1,288 @@
/*
 * Original theme modified by Evan Rosson
 * http://erosson.com/migrate
 * ---
 *
 * Theme Name: Almodovar
 * Theme URI: http://blog.ratterobert.com/archiv/2005/03/09/almodovar/
 * Description: The theme is originally based on Michael Heilemann's <a href="http://binarybonsai.com/kubrick/">Kubrick</a> template, and was inspired by one gimmick or another from other very good templates.
 * Version: 0.7
 * Author: ratte / robert
 * Author URI: http://blog.ratterobert.com/
 */

/* Begin Typography & Colors */
body {
  font-size: 75%;
  font-family: 'Lucida Grande', 'Trebuchet MS', 'Bitstream Vera Sans', Sans-Serif;
  background-color: #CCF;
  color: #333;
  text-align: center;
}

#page {
  background-color: #fff;
  border: 1px solid #88f;
  text-align: left;
}

#content {
  font-size: 1.2em;
  margin: 1em;
}

#content p,
#content ul,
#content blockquote {
  line-height: 1.6em;
}

#footer {
  border-top: 1px solid #006;
  margin-top: 2em;
}

small {
  font-family: 'Trebuchet MS', Arial, Helvetica, Sans-Serif;
  font-size: 0.9em;
  line-height: 1.5em;
}

h1, h2, h3 {
  font-family: 'Trebuchet MS', 'Lucida Grande', Verdana, Arial, Sans-Serif;
  font-weight: bold;
  margin-top: .7em;
  margin-bottom: .7em;
}

h1 {
  font-size: 2.5em;
}
h2 {
  font-size: 2em;
}
h3 {
  font-size: 1.5em;
}

h1, h2, h3 {
  color: #33a;
}

h1 a, h2 a, h3 a {
  color: #33a;
}

h1, h1 a, h1 a:hover, h1 a:visited,
h2, h2 a, h2 a:hover, h2 a:visited,
h3, h3 a, h3 a:hover, h3 a:visited,
cite {
  text-decoration: none;
}

#content p a:visited {
  color: #004099;
  /*font-weight: normal;*/
}

small, blockquote, strike {
  color: #33a;
}

#links ul ul li, #links li {
  list-style: none;
}

code {
  font: 1.1em 'Courier', 'Courier New', Fixed;
}

acronym, abbr, span.caps {
  font-size: 0.9em;
  letter-spacing: .07em;
}

a {
  color: #0050FF;
  /*text-decoration: none;*/
  text-decoration: underline;
  /*font-weight: bold;*/
}
a:hover {
  color: #0080FF;
}

/* Special case doc-title */
h1.doc-title {
  text-transform: lowercase;
  font-size: 4em;
  margin: 0;
}
h1.doc-title a {
  display: block;
  padding-left: 0.8em;
  padding-bottom: .5em;
  padding-top: .5em;
  margin: 0;
  border-bottom: 1px #fff solid;
}
h1.doc-title,
h1.doc-title a,
h1.doc-title a:visited,
h1.doc-title a:hover {
  text-decoration: none;
  color: #0050FF;
}
/* End Typography & Colors */


/* Begin Structure */
body {
  margin: 0;
  padding: 0;
}

#page {
  background-color: white;
  margin: 0 auto 0 9em;
  padding: 0;
  max-width: 60em;
  border: 1px solid #555596;
}
/*
 * html #page {
 *   width: 60em;
 * }
 *
 * #content {
 *   margin: 0 1em 0 3em;
 * }
 *
 * #content h1 {
 *   margin-left: 0;
 * }
 *
 * #footer {
 *   padding: 0 0 0 1px;
 *   margin: 0;
 *   margin-top: 1.5em;
 *   clear: both;
 * }
 *
 * #footer p {
 *   margin: 1em;
 * }
 */
/* End Structure */


/* Begin Headers */
.description {
  text-align: center;
}

/* End Headers */


/* Begin Form Elements */
#searchform {
  margin: 1em auto;
  text-align: right;
}

#searchform #s {
  width: 100px;
  padding: 2px;
}

#searchsubmit {
  padding: 1px;
}
/* End Form Elements */


/* Begin Various Tags & Classes */
acronym, abbr, span.caps {
  cursor: help;
}

acronym, abbr {
  border-bottom: 1px dashed #999;
}

blockquote {
  margin: 15px 30px 0 10px;
  padding-left: 20px;
  border-left: 5px solid #CCC;
}

blockquote cite {
  margin: 5px 0 0;
  display: block;
}

hr {
  display: none;
}

a img {
  border: none;
}

.navigation {
  display: block;
  text-align: center;
  margin-top: 10px;
  margin-bottom: 60px;
}
/* End Various Tags & Classes */

span a { color: #CCC; }

span a:hover { color: #0050FF; }

#navcontainer {
  margin-top: 0px;
  padding-top: 0px;
  width: 100%;
  background-color: #AAF;
  text-align: right;
}

#navlist ul {
  margin-left: 0;
  margin-right: 5px;
  padding-left: 0;
  white-space: nowrap;
}

#navlist li {
  display: inline;
  list-style-type: none;
}

#navlist a {
  padding: 3px 10px;
  color: #fff;
  background-color: #339;
  text-decoration: none;
  border: 1px solid #44F;
  font-weight: normal;
}

#navlist a:hover {
  color: #000;
  background-color: #FFF;
  text-decoration: none;
  font-weight: normal;
}

#navlist a:active, #navlist a.selected {
  padding: 3px 10px;
  color: #000;
  background-color: #EEF;
  text-decoration: none;
  border: 1px solid #CCF;
  font-weight: normal;
}
123
docs/theme/layout.css
vendored
Normal file
@ -0,0 +1,123 @@
@import url("pudge.css");
@import url("almodovar.css");

/* Basic Style
----------------------------------- */

h1.pudge-member-page-heading {
  font-size: 300%;
}
h4.pudge-member-page-subheading {
  font-size: 130%;
  font-style: italic;
  margin-top: -2.0em;
  margin-left: 2em;
  margin-bottom: .3em;
  color: #0050CC;
}
p.pudge-member-blurb {
  font-style: italic;
  font-weight: bold;
  font-size: 120%;
  margin-top: 0.2em;
  color: #999;
}
p.pudge-member-parent-link {
  margin-top: 0;
}
/*div.pudge-module-doc {
  max-width: 45em;
}*/
div.pudge-section {
  margin-left: 2em;
  max-width: 45em;
}

/* Section Navigation
----------------------------------- */

div#pudge-section-nav {
  margin: 1em 0 1.5em 0;
  padding: 0;
  height: 20px;
}

div#pudge-section-nav ul {
  border: 0;
  margin: 0;
  padding: 0;
  list-style-type: none;
  text-align: center;
  border-right: 1px solid #aaa;
}
div#pudge-section-nav ul li {
  display: block;
  float: left;
  text-align: center;
  padding: 0;
  margin: 0;
}

div#pudge-section-nav ul li .pudge-section-link,
div#pudge-section-nav ul li .pudge-missing-section-link {
  background: #aaa;
  width: 9em;
  height: 1.8em;
  border: 1px solid #bbb;
  padding: 0;
  margin: 0 0 10px 0;
  color: #ddd;
  text-decoration: none;
  display: block;
  text-align: center;
  font: 11px/20px "Verdana", "Lucida Grande";
  cursor: pointer;
  text-transform: lowercase;
}

div#pudge-section-nav ul li a:hover {
  color: #000;
  background: #fff;
}

div#pudge-section-nav ul li .pudge-section-link {
  background: #888;
  color: #eee;
  border: 1px solid #bbb;
}

/* Module Lists
----------------------------------- */
dl.pudge-module-list dt {
  font-style: normal;
  font-size: 110%;
}
dl.pudge-module-list dd {
  color: #555;
}

/* Misc Overrides */
.rst-doc p.topic-title a {
  color: #777;
}
.rst-doc ul.auto-toc a,
.rst-doc div.contents a {
  color: #333;
}
pre { background: #eee; }

.rst-doc dl dt {
  color: #444;
  margin-top: 1em;
  font-weight: bold;
}
.rst-doc dl dd {
  margin-top: .2em;
}
.rst-doc hr {
  display: block;
  margin: 2em 0;
}
90
docs/theme/layout.html
vendored
Normal file
@ -0,0 +1,90 @@
<?xml version="1.0"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">

<?python
import pudge

def initialize(t):
    g = t.generator
    if not hasattr(t, 'title'):
        t.title = 'Untitled'
    t.doc_title = g.index_document['title']
    t.home_url = g.organization_url or g.blog_url or g.trac_url
    t.home_title = g.organization
?>

<html xmlns="http://www.w3.org/1999/xhtml"
      xmlns:py="http://purl.org/kid/ns#"
      py:def="layout">

<head>
  <title>${title}</title>
  <link rel="stylesheet" type="text/css" href="layout.css"/>
  <link py:if="generator.syndication_url"
        rel="alternate"
        type="application/rss+xml"
        title="RSS 2.0" href="${generator.syndication_url}"/>
</head>
<body>
  <div id="page">
    <h1 class="doc-title"><a href="${home_url}">${home_title}</a></h1>
    <div id="navcontainer">
      <ul id="navlist">
        <li class="pagenav">
          <ul>
            <li class="page_item">
              <a href="index.html"
                 class="${'index.html' == destfile and 'selected' or ''}"
                 title="Project Home / Index">${doc_title}</a>
            </li>
            <li class="page_item">
              <a href="module-index.html"
                 class="${'module-index.html' == destfile and 'selected' or ''}"
                 title="${doc_title.lower()} package and module reference">Modules</a>
            </li>
            <?python
            trac_url = generator.trac_url
            mailing_list_url = generator.mailing_list_url
            ?>
            <li py:if="trac_url">
              <a href="${trac_url}"
                 title="Wiki / Subversion / Roadmap / Bug Tracker">Trac</a>
            </li>
            <li py:if="generator.blog_url">
              <a href="${generator.blog_url}">Blog</a>
            </li>
            <li py:if="mailing_list_url">
              <a href="${mailing_list_url}"
                 class="${mailing_list_url == destfile and 'selected' or ''}"
                 title="Mailing List">Discuss</a>
            </li>
          </ul>
        </li>
      </ul>
    </div>

    <hr />

    <div id="content" py:content="content()"/>

    <div id="footer">
      <?python license = generator.get_document('doc-license') ?>

      <p style="float: left;">
        built with
        <a href="http://lesscode.org/projects/pudge/">pudge/${pudge.__version__}</a><br />
        original design by
        <a href="http://blog.ratterobert.com/">ratter / robert</a><br />
      </p>
      <p style="float:right;">
        evan.rosson (at) gmail.com
      </p>
    </div>
  </div>
</body>

</html>
270
docs/versioning.rst
Normal file
@ -0,0 +1,270 @@
==================
|
||||
migrate.versioning
|
||||
==================
|
||||
|
||||
.. contents::
|
||||
|
||||
Project Setup
|
||||
=============
|
||||
|
||||
Create a change repository
|
||||
--------------------------
|
||||
|
||||
To begin, we'll need to create a *repository* for our project. Repositories are associated with a single database schema, and store collections of change scripts to manage that schema. The scripts in a repository may be applied to any number of databases.
|
||||
|
||||
Repositories each have a name. This name is used to identify the repository we're working with.
|
||||
|
||||
All work with repositories is done using the migrate command. Let's create our project's repository::
|
||||
|
||||
% migrate create my_repository "Example project"
|
||||
|
||||
This creates an initially empty repository in the current directory at my_repository/ named Example project.
|
||||
|
||||
Version-control a database
|
||||
--------------------------
|
||||
|
||||
Next, we need to create a database and declare it to be under version control. Information on a database's version is stored in the database itself; declaring a database to be under version control creates a table, named 'migrate_version' by default, and associates it with your repository.
|
||||
|
||||
The database is specified as a `SQLAlchemy database url`_.
|
||||
|
||||
.. _`sqlalchemy database url`: http://www.sqlalchemy.org/docs/dbengine.myt#dbengine_establishing
|
||||
|
||||
::
|
||||
|
||||
% migrate version_control sqlite:///project.db my_repository
|
||||
|
||||
We can have any number of databases under this repository's version control.
|
||||
|
||||
Each schema has a version that Migrate manages. Each change script applied to the database increments this version number. You can see a database's current version::
|
||||
|
||||
% migrate db_version sqlite:///project.db my_repository
|
||||
0
|
||||
|
||||
A freshly versioned database begins at version 0 by default. This assumes the database is empty. (If this is a bad assumption, you can specify the version at the time the database is declared under version control, with the "version_control" command.) We'll see that creating and applying change scripts changes the database's version number.
|
||||
|
||||
Similarly, we can also see the latest version available in a repository with the command::
|
||||
|
||||
% migrate version my_repository
|
||||
0
|
||||
|
||||
We've entered no changes so far, so our repository cannot upgrade a database past version 0.
|
||||
|
||||
Project management script
|
||||
-------------------------
|
||||
|
||||
Many commands need to know our project's database url and repository path - typing them each time is tedious. We can create a script for our project that remembers the database and repository we're using, and use it to perform commands::
|
||||
|
||||
% migrate manage manage.py --repository=my_repository --url=sqlite:///project.db
|
||||
% python manage.py db_version
|
||||
0
|
||||
|
||||
The script manage.py was created. All commands we perform with it are the same as those performed with the 'migrate' tool, using the repository and database connection entered above.
|
||||
|
||||
Making schema changes
|
||||
=====================
|
||||
|
||||
All changes to a database schema under version control should be done via change scripts - you should avoid schema modifications (creating tables, etc.) outside of change scripts. This allows you to determine what the schema looks like based on the version number alone, and helps ensure multiple databases you're working with are consistent.
|
||||
|
||||
Create a change script
|
||||
----------------------
|
||||
Our first change script will create a simple table::
|
||||
|
||||
account = Table('account',meta,
|
||||
Column('id',Integer,primary_key=True),
|
||||
Column('login',String(40)),
|
||||
Column('passwd',String(40)),
|
||||
)
|
||||
|
||||
This table should be created in a change script. Let's create one::
|
||||
|
||||
% python manage.py script script.py
|
||||
|
||||
This creates an empty change script at ``script.py``. Next, we'll edit this script to create our table.
|
||||
|
||||
Edit the change script
|
||||
----------------------
|
||||
Our change script defines two functions, currently empty: upgrade() and downgrade(). We'll fill those in::
|
||||
|
||||
# script.py
|
||||
from sqlalchemy import *
|
||||
from migrate import *
|
||||
|
||||
meta = BoundMetaData(migrate_engine)
|
||||
account = Table('account',meta,
|
||||
Column('id',Integer,primary_key=True),
|
||||
Column('login',String(40)),
|
||||
Column('passwd',String(40)),
|
||||
)
|
||||
|
||||
def upgrade():
|
||||
account.create()
|
||||
|
||||
def downgrade():
|
||||
account.drop()
|
||||
|
||||
|
||||
As you might have guessed, upgrade() upgrades the database to the next version. This function should contain the changes we want to perform; here, we're creating a table. downgrade() should reverse the changes made by upgrade(). You'll need to write both functions for every change script. (Well, you don't *have* to write downgrade(), but you won't be able to revert to an older version of the database, or test your scripts, without it.)

``from migrate import *`` imports a special SQLAlchemy engine named ``migrate_engine``. You should use this engine in your change scripts, rather than creating your own.

You should be very careful about importing files from the rest of your application, as your change scripts might break when your application changes. More on this under `writing scripts with consistent behavior`_ below.

Commit the change script
------------------------
Now that our script is done, we'll commit it to our repository. Committed scripts are considered 'done' - once a script is committed, it is moved into the repository, the change script file 'disappears', and your change script can be applied to a database. Once a script is committed, Migrate expects that the SQL the script generates will not change. (As mentioned above, this may be a bad assumption when importing files from your application!)

Change scripts should be tested before they are committed. Testing a script runs its upgrade() and downgrade() functions on a specified database, so you can ensure the script runs without error. You should test against a throwaway database - if something goes wrong here, you'll need to correct it by hand. If the test is successful, the database should appear unchanged after upgrade() and downgrade() run.
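The round trip the test performs can be sketched abstractly - upgrade() followed by downgrade() must leave the schema exactly as it was. A pure-Python illustration (no database or Migrate API involved; the table name is made up):

```python
# Toy "schema" standing in for a real database, to show what
# 'manage.py test' checks: upgrade then downgrade is a no-op overall.
def upgrade(schema):
    schema.add('account')        # mirrors account.create()

def downgrade(schema):
    schema.discard('account')    # mirrors account.drop()

schema = set()
before = set(schema)
upgrade(schema)
downgrade(schema)
assert schema == before          # a passing test leaves the schema unchanged
```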

To test the script::

    % python manage.py test script.py
    Upgrading... done
    Downgrading... done
    Success

Our script runs on our database (``sqlite:///project.db``, as specified in manage.py) without any errors.

To commit the script::

    % python manage.py commit script.py

``script.py`` will be removed, and our repository's version will change::

    % python manage.py version
    1

Upgrade the database
--------------------
Now, we can apply this change script to our database::

    % python manage.py upgrade

This upgrades the database (``sqlite:///project.db``, as specified when we created manage.py above) to the latest available version. (We could also specify a version number if we wished, using the --version option.) We can see that the database's version number has changed, and our table has been created::

    % python manage.py db_version
    1
    % sqlite3 project.db
    sqlite> .tables
    _version account

Our account table was created - success! As our application evolves, we can create more change scripts using a similar process.

Writing change scripts
======================

By default, change scripts may do anything any other SQLAlchemy program can do.

Migrate extends SQLAlchemy with several operations used to change existing schemas - i.e. ALTER TABLE statements. See the changeset_ documentation for details.

.. _changeset: changeset.html

Writing scripts with consistent behavior
----------------------------------------

Normally, it's important to write change scripts in a way that's independent of your application - the same SQL should be generated every time, despite any changes to your app's source code. You don't want your change scripts' behavior changing when your source code does.

Consider the following example of what can go wrong (i.e. what NOT to do):

Your application defines a table in the model.py file::

    # model.py
    from sqlalchemy import *

    meta = DynamicMetaData()
    table = Table('mytable',meta,
        Column('id',Integer,primary_key=True),
    )

...and uses this file to create a table in a change script::

    # changescript.py
    from sqlalchemy import *
    from migrate import *
    import model
    model.meta.connect(migrate_engine)

    def upgrade():
        model.table.create()
    def downgrade():
        model.table.drop()

This runs successfully the first time. But what happens if we change the table definition? ::

    table = Table('mytable',meta,
        Column('id',Integer,primary_key=True),
        Column('data',String(42)),
    )

We'll create a new column with a matching change script::

    # changescript2.py
    from sqlalchemy import *
    from migrate import *
    import model
    model.meta.connect(migrate_engine)

    def upgrade():
        model.table.data.create()
    def downgrade():
        model.table.data.drop()

This appears to run fine when upgrading an existing database - but the first script's behavior changed! Running all our change scripts on a new database will result in an error: the first script creates the table from the new definition, with both columns, and the second script cannot add the 'data' column because it already exists.

To avoid this problem, copy-paste your table definition into each change script rather than importing parts of your application.
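The failure can be reproduced without any database at all. A pure-Python sketch of the sequencing problem (the names and error are illustrative, not Migrate's actual output):

```python
# The application's live model, which has since gained a 'data' column.
model_columns = ['id', 'data']

def script1_upgrade(schema):
    # BAD: committed long ago, but reads the live model - on a fresh
    # database it now creates *both* columns instead of just 'id'.
    schema['mytable'] = list(model_columns)

def script2_upgrade(schema):
    # Committed later, specifically to add the 'data' column.
    if 'data' in schema['mytable']:
        raise RuntimeError("column 'data' already exists")
    schema['mytable'].append('data')

schema = {}
script1_upgrade(schema)
try:
    script2_upgrade(schema)      # fails: script1 already created 'data'
except RuntimeError as exc:
    print(exc)
```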

Writing for a specific database
-------------------------------

Sometimes you need to write code for a specific database. Migrate scripts can run under any database, however - the engine you're given might belong to any database. Use engine.name to get the name of the database you're working with::

    >>> from sqlalchemy import *
    >>> from migrate import *
    >>>
    >>> engine = create_engine('sqlite:///:memory:')
    >>> engine.name
    'sqlite'
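A change script can then branch on this value inside upgrade(). A minimal sketch of the pattern - the table name and SQL strings are illustrative, not part of Migrate; in a real script you would check ``migrate_engine.name`` and execute the chosen statement:

```python
def upgrade_sql(engine_name):
    """Pick dialect-specific DDL for a hypothetical migration."""
    if engine_name == 'postgres':
        return "ALTER TABLE account ALTER COLUMN login SET NOT NULL"
    elif engine_name == 'mysql':
        return "ALTER TABLE account MODIFY login VARCHAR(40) NOT NULL"
    # SQLite can't alter a column in place; skip (or rebuild the table).
    return None

print(upgrade_sql('postgres'))
```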

.sql scripts
------------

You might prefer to write your change scripts in SQL, as .sql files, rather than as Python scripts. Migrate can work with that::

    % migrate version my_repository
    10
    % migrate commit upgrade.sql my_repository postgres upgrade
    % migrate version my_repository
    11
    % migrate commit downgrade.sql my_repository postgres downgrade 11
    % migrate version my_repository
    11

Here, two scripts are given, one for each *operation*, or function defined in a Python change script - upgrade and downgrade. Both are specified to run with Postgres databases; we may commit more scripts for other databases if we like. Any database supported by SQLAlchemy may be used here - e.g. sqlite, postgres, oracle, mysql...

For every .sql script added after the first, we must specify the version: if you don't give a version when committing, Migrate assumes that commit is for a new version.

Python API
==========
All commands available from the command line are also available for your Python scripts by importing `migrate.versioning.api`_. See the `migrate.versioning.api`_ documentation for a list of functions; function names match the equivalent shell commands. You can use this to help integrate Migrate with your existing update process.
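Because the names line up one-to-one, translating a shell invocation into the equivalent Python call is mechanical. A small illustrative helper (not part of Migrate) that builds the call as a string:

```python
import shlex

def to_api_call(cmdline):
    # 'migrate version my_repository'
    #   -> "migrate.versioning.api.version('my_repository')"
    parts = shlex.split(cmdline)
    assert parts[0] == 'migrate'
    func, args = parts[1], parts[2:]
    return 'migrate.versioning.api.%s(%s)' % (
        func, ', '.join(repr(a) for a in args))

print(to_api_call('migrate version my_repository'))
# migrate.versioning.api.version('my_repository')
```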

For example, the following commands are similar:

*From the command line*::

    % migrate help help
    /usr/bin/migrate help COMMAND

    Displays help on a given command.

*From Python*::

    import migrate.versioning.api
    migrate.versioning.api.help('help')
    # Output:
    # %prog help COMMAND
    #
    # Displays help on a given command.

.. _migrate.versioning.api: module-migrate.versioning.api.html

migrate/__init__.py (new file)
@@ -0,0 +1 @@
from migrate.run import *

migrate/changeset/__init__.py (new file)
@@ -0,0 +1,3 @@
from migrate.changeset.schema import *
from migrate.changeset.constraint import *

migrate/changeset/ansisql.py (new file)
@@ -0,0 +1,280 @@
"""Extensions to SQLAlchemy for altering existing tables.

At the moment, this isn't so much based off of ANSI as much as things that just
happen to work with multiple databases.
"""
import sqlalchemy as sa
from sqlalchemy.engine.base import Connection, Dialect
from migrate.changeset import constraint,exceptions

SchemaIterator = sa.engine.SchemaIterator
SchemaGenerator = sa.sql.compiler.SchemaGenerator

class RawAlterTableVisitor(object):
    """Common operations for 'alter table' statements"""
    def _to_table(self,param):
        if isinstance(param,(sa.Column,sa.Index,sa.schema.Constraint)):
            ret = param.table
        else:
            ret = param
        return ret
    def _to_table_name(self,param):
        ret = self._to_table(param)
        if isinstance(ret,sa.Table):
            ret = ret.fullname
        return ret

    def start_alter_table(self,param):
        table = self._to_table(param)
        table_name = self._to_table_name(table)
        self.append("\nALTER TABLE %s "%table_name)
        return table

    def _pk_constraint(self,table,column,status):
        """Create a primary key constraint from a table, column.

        Status: true if the constraint is being added; false if being dropped.
        """
        if isinstance(column,basestring):
            column = getattr(table.c,column)

        ret = constraint.PrimaryKeyConstraint(*table.primary_key)
        if status:
            # Created PK
            ret.c.append(column)
        else:
            # Dropped PK
            names = [c.name for c in ret.c]
            index = names.index(column.name)
            del ret.c[index]

        # Allow explicit PK name assignment
        if isinstance(status,basestring):
            ret.name = status
        return ret


class AlterTableVisitor(SchemaIterator,RawAlterTableVisitor):
    """Common operations for 'alter table' statements"""


class ANSIColumnGenerator(AlterTableVisitor,SchemaGenerator):
    """Extends ansisql generator for column creation (alter table add col)"""
    def __init__(self, *args, **kwargs):
        dialect = None
        if isinstance(args[0], Connection):
            dialect = args[0].engine.dialect
        elif isinstance(args[0], Dialect):
            dialect = args[0]
        else:
            raise exceptions.Error("Cannot infer dialect in __init__")
        super(ANSIColumnGenerator, self).__init__(dialect, *args,
                                                  **kwargs)

    def visit_column(self,column):
        """Create a column (table already exists); #32"""
        table = self.start_alter_table(column)
        self.append(" ADD ")
        pks = table.primary_key
        colspec = self.get_column_specification(column)
        self.append(colspec)
        self.execute()
        #if column.primary_key:
        #    cons = self._pk_constraint(table,column,True)
        #    cons.drop()
        #    cons.create()

    def visit_table(self,table):
        pass

class ANSIColumnDropper(AlterTableVisitor):
    """Extends ansisql dropper for column dropping (alter table drop col)"""
    def visit_column(self,column):
        """Drop a column; #33"""
        table = self.start_alter_table(column)
        self.append(" DROP COLUMN %s"%column.name)
        self.execute()
        #if column.primary_key:
        #    cons = self._pk_constraint(table,column,False)
        #    cons.create()

class ANSISchemaChanger(AlterTableVisitor,SchemaGenerator):
    """Manages changes to existing schema elements.

    Note that columns are schema elements; "alter table add column" is in
    SchemaGenerator.

    All items may be renamed. Columns can also have many of their properties -
    type, for example - changed.

    Each function is passed a tuple, containing (object,name); where object
    is a type of object you'd expect for that function (ie. table for
    visit_table) and name is the object's new name. NONE means the name is
    unchanged.
    """
    def visit_table(self,param):
        """Rename a table; #38. Other ops aren't supported."""
        table,newname = param
        self.start_alter_table(table)
        self.append("RENAME TO %s"%newname)
        self.execute()

    def visit_column(self,delta):
        """Rename/change a column; #34/#35"""
        # ALTER COLUMN is implemented as several ALTER statements
        keys = delta.keys()
        if 'type' in keys:
            self._run_subvisit(delta,self._visit_column_type)
        if 'nullable' in keys:
            self._run_subvisit(delta,self._visit_column_nullable)
        if 'default' in keys:
            self._run_subvisit(delta,self._visit_column_default)
        #if 'primary_key' in keys:
        #    #self._run_subvisit(delta,self._visit_column_primary_key)
        #    self._visit_column_primary_key(delta)
        #if 'foreign_key' in keys:
        #    self._visit_column_foreign_key(delta)
        if 'name' in keys:
            self._run_subvisit(delta,self._visit_column_name)
    def _run_subvisit(self,delta,func,col_name=None,table_name=None):
        if table_name is None:
            table_name = delta.table_name
        if col_name is None:
            col_name = delta.current_name
        ret = func(table_name,col_name,delta)
        self.execute()
        return ret

    def _visit_column_foreign_key(self,delta):
        table = delta.table
        column = getattr(table.c,delta.current_name)
        cons = constraint.ForeignKeyConstraint(column,autoload=True)
        fk = delta['foreign_key']
        if fk:
            # For now, cons.columns is limited to one column:
            # no multicolumn FKs
            column.foreign_key = ForeignKey(*cons.columns)
        else:
            column.foreign_key = None
        cons.drop()
        cons.create()
    def _visit_column_primary_key(self,delta):
        table = delta.table
        col = getattr(table.c,delta.current_name)
        pk = delta['primary_key']
        cons = self._pk_constraint(table,col,pk)
        cons.drop()
        cons.create()
    def _visit_column_nullable(self,table_name,col_name,delta):
        nullable = delta['nullable']
        table = self._to_table(delta)
        self.start_alter_table(table_name)
        self.append("ALTER COLUMN %s "%col_name)
        if nullable:
            self.append("DROP NOT NULL")
        else:
            self.append("SET NOT NULL")
    def _visit_column_default(self,table_name,col_name,delta):
        default = delta['default']
        default_text = None
        # Default must be a PassiveDefault; else, ignore
        # (Non-PassiveDefaults are managed by the app, not the db)
        if default is not None:
            if not isinstance(default,sa.PassiveDefault):
                return
            # Dummy column: get_col_default_string needs a column for some reason
            dummy = sa.Column(None,None,default=default)
            default_text = self.get_column_default_string(dummy)
        self.start_alter_table(table_name)
        self.append("ALTER COLUMN %s "%col_name)
        if default_text is not None:
            self.append("SET DEFAULT %s"%default_text)
        else:
            self.append("DROP DEFAULT")
    def _visit_column_type(self,table_name,col_name,delta):
        type = delta['type']
        if not isinstance(type,sa.types.AbstractType):
            # It's the class itself, not an instance... make an instance
            type = type()
        type_text = type.engine_impl(self.engine).get_col_spec()
        self.start_alter_table(table_name)
        self.append("ALTER COLUMN %s TYPE %s"%(col_name,type_text))
    def _visit_column_name(self,table_name,col_name,delta):
        new_name = delta['name']
        self.start_alter_table(table_name)
        self.append("RENAME COLUMN %s TO %s"%(col_name,new_name))

    def visit_index(self,param):
        """Rename an index; #36"""
        index,newname = param
        #self.start_alter_table(index)
        #self.append("RENAME INDEX %s TO %s"%(index.name,newname))
        self.append("ALTER INDEX %s RENAME TO %s"%(index.name,newname))
        self.execute()


class ANSIConstraintCommon(AlterTableVisitor):
    """
    Migrate's constraints require a separate creation function from SA's:
    Migrate's constraints are created independently of a table; SA's are
    created at the same time as the table.
    """
    def get_constraint_name(self,cons):
        if cons.name is not None:
            ret = cons.name
        else:
            ret = cons.name = cons.autoname()
        return ret

class ANSIConstraintGenerator(ANSIConstraintCommon):
    def get_constraint_specification(self,cons,**kwargs):
        if isinstance(cons,constraint.PrimaryKeyConstraint):
            col_names = ','.join([i.name for i in cons.columns])
            ret = "PRIMARY KEY (%s)"%col_names
            if cons.name:
                # Named constraint
                ret = ("CONSTRAINT %s "%cons.name)+ret
        elif isinstance(cons,constraint.ForeignKeyConstraint):
            params = dict(
                columns=','.join([c.name for c in cons.columns]),
                reftable=cons.reftable,
                referenced=','.join([c.name for c in cons.referenced]),
                name=self.get_constraint_name(cons),
            )
            ret = "CONSTRAINT %(name)s FOREIGN KEY (%(columns)s) "\
                "REFERENCES %(reftable)s (%(referenced)s)"%params
        else:
            raise exceptions.InvalidConstraintError(cons)
        return ret
    def _visit_constraint(self,constraint):
        table = self.start_alter_table(constraint)
        self.append("ADD ")
        spec = self.get_constraint_specification(constraint)
        self.append(spec)
        self.execute()

    def visit_migrate_primary_key_constraint(self,*p,**k):
        return self._visit_constraint(*p,**k)

    def visit_migrate_foreign_key_constraint(self,*p,**k):
        return self._visit_constraint(*p,**k)

class ANSIConstraintDropper(ANSIConstraintCommon):
    def _visit_constraint(self,constraint):
        self.start_alter_table(constraint)
        self.append("DROP CONSTRAINT ")
        self.append(self.get_constraint_name(constraint))
        self.execute()

    def visit_migrate_primary_key_constraint(self,*p,**k):
        return self._visit_constraint(*p,**k)

    def visit_migrate_foreign_key_constraint(self,*p,**k):
        return self._visit_constraint(*p,**k)

class ANSIDialect(object):
    columngenerator = ANSIColumnGenerator
    columndropper = ANSIColumnDropper
    schemachanger = ANSISchemaChanger

    @classmethod
    def visitor(self,name):
        return getattr(self,name)
    def reflectconstraints(self,connection,table_name):
        raise NotImplementedError()

migrate/changeset/constraint.py (new file)
@@ -0,0 +1,126 @@
import sqlalchemy
from sqlalchemy import schema

class ConstraintChangeset(object):
    def _normalize_columns(self,cols,fullname=False):
        """Given: column objects or names; return col names and (maybe) a table"""
        colnames = []
        table = None
        for col in cols:
            if isinstance(col,schema.Column):
                if col.table is not None and table is None:
                    table = col.table
                if fullname:
                    col = '.'.join((col.table.name,col.name))
                else:
                    col = col.name
            colnames.append(col)
        return colnames,table
    def create(self,engine=None):
        if engine is None:
            engine = self.engine
        engine.create(self)
    def drop(self,engine=None):
        if engine is None:
            engine = self.engine
        #if self.name is None:
        #    self.name = self.autoname()
        engine.drop(self)
    def _derived_metadata(self):
        return self.table._derived_metadata()
    def accept_schema_visitor(self,visitor,*p,**k):
        raise NotImplementedError()
    def _accept_schema_visitor(self,visitor,func,*p,**k):
        """Call the visitor only if it defines the given function"""
        try:
            func = getattr(visitor,func)
        except AttributeError:
            return
        return func(self)
    def autoname(self):
        raise NotImplementedError()

def _engine_run_visitor(engine,visitorcallable,element,**kwargs):
    conn = engine.connect()
    try:
        element.accept_schema_visitor(visitorcallable(conn))
    finally:
        conn.close()

class PrimaryKeyConstraint(ConstraintChangeset,schema.PrimaryKeyConstraint):
    def __init__(self,*cols,**kwargs):
        colnames,table = self._normalize_columns(cols)
        table = kwargs.pop('table',table)
        super(PrimaryKeyConstraint,self).__init__(*colnames,**kwargs)
        if table is not None:
            self._set_parent(table)

    def _set_parent(self,table):
        self.table = table
        return super(ConstraintChangeset,self)._set_parent(table)

    def create(self, *args, **kwargs):
        from migrate.changeset.databases.visitor import get_engine_visitor
        visitorcallable = get_engine_visitor(self.table.bind,'constraintgenerator')
        _engine_run_visitor(self.table.bind,visitorcallable,self)

    def autoname(self):
        """Mimic the database's automatic constraint names"""
        ret = "%(table)s_pkey"%dict(
            table=self.table.name,
        )
        return ret

    def drop(self,*args,**kwargs):
        from migrate.changeset.databases.visitor import get_engine_visitor
        visitorcallable = get_engine_visitor(self.table.bind,'constraintdropper')
        _engine_run_visitor(self.table.bind,visitorcallable,self)
        self.columns.clear()
        return self

    def accept_schema_visitor(self,visitor,*p,**k):
        #return visitor.visit_constraint(self,*p,**k)
        func = 'visit_migrate_primary_key_constraint'
        return self._accept_schema_visitor(visitor,func,*p,**k)

class ForeignKeyConstraint(ConstraintChangeset,schema.ForeignKeyConstraint):
    def __init__(self,columns,refcolumns,*p,**k):
        colnames,table = self._normalize_columns(columns)
        table = k.pop('table',table)
        refcolnames,reftable = self._normalize_columns(refcolumns,fullname=True)
        super(ForeignKeyConstraint,self).__init__(colnames,refcolnames,*p,**k)
        if table is not None:
            self._set_parent(table)

    def _get_referenced(self):
        return [e.column for e in self.elements]
    referenced = property(_get_referenced)

    def _get_reftable(self):
        return self.referenced[0].table
    reftable = property(_get_reftable)

    def autoname(self):
        """Mimic the database's automatic constraint names"""
        ret = "%(table)s_%(reftable)s_fkey"%dict(
            table=self.table.name,
            reftable=self.reftable.name,
        )
        return ret

    def create(self, *args, **kwargs):
        from migrate.changeset.databases.visitor import get_engine_visitor
        visitorcallable = get_engine_visitor(self.table.bind,'constraintgenerator')
        _engine_run_visitor(self.table.bind,visitorcallable,self)
        return self

    def drop(self,*args,**kwargs):
        from migrate.changeset.databases.visitor import get_engine_visitor
        visitorcallable = get_engine_visitor(self.table.bind,'constraintdropper')
        _engine_run_visitor(self.table.bind,visitorcallable,self)
        self.columns.clear()
        return self

    def accept_schema_visitor(self,visitor,*p,**k):
        func = 'visit_migrate_foreign_key_constraint'
        return self._accept_schema_visitor(visitor,func,*p,**k)

migrate/changeset/databases/__init__.py (new file)
@@ -0,0 +1,6 @@
__all__ = [
    'postgres',
    'sqlite',
    'mysql',
    'oracle',
    ]

migrate/changeset/databases/mysql.py (new file)
@@ -0,0 +1,61 @@
from migrate.changeset import ansisql,exceptions
from sqlalchemy.databases import mysql as sa_base
#import sqlalchemy as sa

MySQLSchemaGenerator = sa_base.MySQLSchemaGenerator

class MySQLColumnGenerator(MySQLSchemaGenerator,ansisql.ANSIColumnGenerator):
    pass
class MySQLColumnDropper(ansisql.ANSIColumnDropper):
    pass
class MySQLSchemaChanger(MySQLSchemaGenerator,ansisql.ANSISchemaChanger):
    def visit_column(self,delta):
        keys = delta.keys()
        if 'type' in keys or 'nullable' in keys or 'name' in keys:
            self._run_subvisit(delta,self._visit_column_change)
        if 'default' in keys:
            # Column name might have changed above
            col_name = delta.get('name',delta.current_name)
            self._run_subvisit(delta,self._visit_column_default,col_name=col_name)
    def _visit_column_change(self,table_name,col_name,delta):
        if not hasattr(delta,'result_column'):
            # Mysql needs the whole column definition, not just a lone name/type
            raise exceptions.NotSupportedError(
                "A column object is required to do this")

        column = delta.result_column
        colspec = self.get_column_specification(column)
        self.start_alter_table(table_name)
        self.append("CHANGE COLUMN ")
        self.append(col_name)
        self.append(' ')
        self.append(colspec)
    def visit_index(self,param):
        # If MySQL can do this, I can't find how
        raise exceptions.NotSupportedError("MySQL cannot rename indexes")
class MySQLConstraintGenerator(ansisql.ANSIConstraintGenerator):
    pass
class MySQLConstraintDropper(ansisql.ANSIConstraintDropper):
    #def visit_constraint(self,constraint):
    #    if isinstance(constraint,sqlalchemy.schema.PrimaryKeyConstraint):
    #        return self._visit_constraint_pk(constraint)
    #    elif isinstance(constraint,sqlalchemy.schema.ForeignKeyConstraint):
    #        return self._visit_constraint_fk(constraint)
    #    return super(MySQLConstraintDropper,self).visit_constraint(constraint)
    def visit_migrate_primary_key_constraint(self,constraint):
        self.start_alter_table(constraint)
        self.append("DROP PRIMARY KEY")
        self.execute()

    def visit_migrate_foreign_key_constraint(self,constraint):
        self.start_alter_table(constraint)
        self.append("DROP FOREIGN KEY ")
        self.append(constraint.name)
        self.execute()

class MySQLDialect(ansisql.ANSIDialect):
    columngenerator = MySQLColumnGenerator
    columndropper = MySQLColumnDropper
    schemachanger = MySQLSchemaChanger
    constraintgenerator = MySQLConstraintGenerator
    constraintdropper = MySQLConstraintDropper

migrate/changeset/databases/oracle.py (new file)
@@ -0,0 +1,79 @@
from migrate.changeset import ansisql,exceptions
from sqlalchemy.databases import oracle as sa_base
import sqlalchemy as sa

OracleSchemaGenerator = sa_base.OracleSchemaGenerator

class OracleColumnGenerator(OracleSchemaGenerator,ansisql.ANSIColumnGenerator):
    pass
class OracleColumnDropper(ansisql.ANSIColumnDropper):
    pass
class OracleSchemaChanger(OracleSchemaGenerator,ansisql.ANSISchemaChanger):
    def get_column_specification(self,column,**kwargs):
        # Ignore the NOT NULL generated
        override_nullable = kwargs.pop('override_nullable',None)
        if override_nullable:
            orig = column.nullable
            column.nullable = True
        ret = super(OracleSchemaChanger,self).get_column_specification(column,**kwargs)
        if override_nullable:
            column.nullable = orig
        return ret

    def visit_column(self,delta):
        keys = delta.keys()
        if 'type' in keys or 'nullable' in keys or 'default' in keys:
            self._run_subvisit(delta,self._visit_column_change)
        if 'name' in keys:
            self._run_subvisit(delta,self._visit_column_name)
    def _visit_column_change(self,table_name,col_name,delta):
        if not hasattr(delta,'result_column'):
            # Oracle needs the whole column definition, not just a lone name/type
            raise exceptions.NotSupportedError(
                "A column object is required to do this")

        column = delta.result_column
        # Oracle cannot drop a default once created, but it can set it to null.
        # We'll do that if default=None
        # http://forums.oracle.com/forums/message.jspa?messageID=1273234#1273234
        dropdefault_hack = (column.default is None and 'default' in delta.keys())
        # Oracle apparently doesn't like it when we say "not null" if the
        # column's already not null. Fudge it, so we don't need a new function
        notnull_hack = ((not column.nullable) and ('nullable' not in delta.keys()))
        # We need to specify NULL if we're removing a NOT NULL constraint
        null_hack = (column.nullable and ('nullable' in delta.keys()))

        if dropdefault_hack:
            column.default = sa.PassiveDefault(sa.func.null())
        if notnull_hack:
            column.nullable = True
        colspec = self.get_column_specification(column,override_nullable=null_hack)
        if null_hack:
            colspec += ' NULL'
        if notnull_hack:
            column.nullable = False
        if dropdefault_hack:
            column.default = None

        self.start_alter_table(table_name)
        self.append("MODIFY ")
        self.append(colspec)
class OracleConstraintCommon(object):
    def get_constraint_name(self,cons):
        # Oracle constraints can't guess their name like other DBs
        if not cons.name:
            raise exceptions.NotSupportedError(
                "Oracle constraint names must be explicitly stated")
        return cons.name
class OracleConstraintGenerator(OracleConstraintCommon,ansisql.ANSIConstraintGenerator):
    pass
class OracleConstraintDropper(OracleConstraintCommon,ansisql.ANSIConstraintDropper):
    pass

class OracleDialect(ansisql.ANSIDialect):
    columngenerator = OracleColumnGenerator
    columndropper = OracleColumnDropper
    schemachanger = OracleSchemaChanger
    constraintgenerator = OracleConstraintGenerator
    constraintdropper = OracleConstraintDropper

migrate/changeset/databases/postgres.py (new file)
@@ -0,0 +1,23 @@
from migrate.changeset import ansisql
from sqlalchemy.databases import postgres as sa_base
#import sqlalchemy as sa

PGSchemaGenerator = sa_base.PGSchemaGenerator

class PGColumnGenerator(PGSchemaGenerator,ansisql.ANSIColumnGenerator):
    pass
class PGColumnDropper(ansisql.ANSIColumnDropper):
    pass
class PGSchemaChanger(ansisql.ANSISchemaChanger):
    pass
class PGConstraintGenerator(ansisql.ANSIConstraintGenerator):
    pass
class PGConstraintDropper(ansisql.ANSIConstraintDropper):
    pass

class PGDialect(ansisql.ANSIDialect):
    columngenerator = PGColumnGenerator
    columndropper = PGColumnDropper
    schemachanger = PGSchemaChanger
    constraintgenerator = PGConstraintGenerator
    constraintdropper = PGConstraintDropper

49
migrate/changeset/databases/sqlite.py
Normal file
49
migrate/changeset/databases/sqlite.py
Normal file
@ -0,0 +1,49 @@
from migrate.changeset import ansisql,constraint,exceptions
from sqlalchemy.databases import sqlite as sa_base
#import sqlalchemy as sa

SQLiteSchemaGenerator = sa_base.SQLiteSchemaGenerator

class SQLiteColumnGenerator(SQLiteSchemaGenerator,ansisql.ANSIColumnGenerator):
    pass
class SQLiteColumnDropper(ansisql.ANSIColumnDropper):
    def visit_column(self,column):
        raise exceptions.NotSupportedError("SQLite does not support "
            "DROP COLUMN; see http://www.sqlite.org/lang_altertable.html")
class SQLiteSchemaChanger(ansisql.ANSISchemaChanger):
    def _not_supported(self,op):
        raise exceptions.NotSupportedError("SQLite does not support "
            "%s; see http://www.sqlite.org/lang_altertable.html"%op)
    def _visit_column_nullable(self,table_name,col_name,delta):
        return self._not_supported('ALTER TABLE')
    def _visit_column_default(self,table_name,col_name,delta):
        return self._not_supported('ALTER TABLE')
    def _visit_column_type(self,table_name,col_name,delta):
        return self._not_supported('ALTER TABLE')
    def _visit_column_name(self,table_name,col_name,delta):
        return self._not_supported('ALTER TABLE')
    def visit_index(self,param):
        self._not_supported('ALTER INDEX')
class SQLiteConstraintGenerator(ansisql.ANSIConstraintGenerator):
    def visit_migrate_primary_key_constraint(self,constraint):
        tmpl = "CREATE UNIQUE INDEX %s ON %s ( %s )"
        cols = ','.join([c.name for c in constraint.columns])
        tname = constraint.table.name
        name = constraint.name
        msg = tmpl%(name,tname,cols)
        self.append(msg)
        self.execute()
class SQLiteConstraintDropper(object):
    def visit_migrate_primary_key_constraint(self,constraint):
        tmpl = "DROP INDEX %s "
        name = constraint.name
        msg = tmpl%(name)
        self.append(msg)
        self.execute()

class SQLiteDialect(ansisql.ANSIDialect):
    columngenerator = SQLiteColumnGenerator
    columndropper = SQLiteColumnDropper
    schemachanger = SQLiteSchemaChanger
    constraintgenerator = SQLiteConstraintGenerator
    constraintdropper = SQLiteConstraintDropper
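Since SQLite cannot add a primary-key constraint after table creation, the constraint generator above emulates one with a unique index. The string it builds reduces to a one-line helper (a standalone sketch, not migrate's API):

```python
# Emulate "add primary key" on SQLite by creating a unique index instead,
# mirroring the template used in SQLiteConstraintGenerator above.
def pk_as_unique_index(name, table, columns):
    return "CREATE UNIQUE INDEX %s ON %s ( %s )" % (name, table, ','.join(columns))

print(pk_as_unique_index('pk_users', 'users', ['id']))
```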
migrate/changeset/databases/visitor.py (new file, 20 lines)
@@ -0,0 +1,20 @@
import sqlalchemy as sa
from migrate.changeset.databases import sqlite,postgres,mysql,oracle
from migrate.changeset import ansisql

# Map SA dialects to the corresponding Migrate extensions
dialects = {
    sa.engine.default.DefaultDialect : ansisql.ANSIDialect,
    sa.databases.sqlite.SQLiteDialect : sqlite.SQLiteDialect,
    sa.databases.postgres.PGDialect : postgres.PGDialect,
    sa.databases.mysql.MySQLDialect : mysql.MySQLDialect,
    sa.databases.oracle.OracleDialect : oracle.OracleDialect,
}

def get_engine_visitor(engine,name):
    return get_dialect_visitor(engine.dialect,name)

def get_dialect_visitor(sa_dialect,name):
    sa_dialect_cls = sa_dialect.__class__
    migrate_dialect_cls = dialects[sa_dialect_cls]
    return migrate_dialect_cls.visitor(name)
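The dispatch pattern in visitor.py maps an upstream dialect class to a local extension class, then looks up a visitor by attribute name. It can be sketched standalone; `FakeSASQLiteDialect` below is a hypothetical stand-in for SQLAlchemy's dialect class, not part of migrate:

```python
# Class-keyed dispatch: an instance's class selects the extension dialect,
# and visitor() resolves a named visitor class from class attributes.

class AnsiColumnGenerator(object):
    pass

class SQLiteColumnGenerator(AnsiColumnGenerator):
    pass

class ANSIDialect(object):
    columngenerator = AnsiColumnGenerator
    @classmethod
    def visitor(cls, name):
        return getattr(cls, name)

class SQLiteDialect(ANSIDialect):
    columngenerator = SQLiteColumnGenerator

class FakeSASQLiteDialect(object):
    """Hypothetical stand-in for sqlalchemy.databases.sqlite.SQLiteDialect."""

dialects = {FakeSASQLiteDialect: SQLiteDialect}

def get_dialect_visitor(sa_dialect, name):
    # Walk from the SA dialect instance to its class, then to our extension.
    return dialects[sa_dialect.__class__].visitor(name)

assert get_dialect_visitor(FakeSASQLiteDialect(), 'columngenerator') is SQLiteColumnGenerator
```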
migrate/changeset/exceptions.py (new file, 11 lines)
@@ -0,0 +1,11 @@
class Error(Exception):
    pass

class NotSupportedError(Error):
    pass

class InvalidConstraintError(Error):
    pass
migrate/changeset/schema.py (new file, 353 lines)
@@ -0,0 +1,353 @@
import re
import sqlalchemy
from migrate.changeset.databases.visitor import get_engine_visitor

__all__ = [
    'create_column',
    'drop_column',
    'alter_column',
    'rename_table',
    'rename_index',
    ]


def create_column(column,table=None,*p,**k):
    if table is not None:
        return table.create_column(column,*p,**k)
    return column.create(*p,**k)

def drop_column(column,table=None,*p,**k):
    if table is not None:
        return table.drop_column(column,*p,**k)
    return column.drop(*p,**k)

def _to_table(table,engine=None):
    if isinstance(table,sqlalchemy.Table):
        return table
    # Given: table name, maybe an engine
    meta = sqlalchemy.MetaData()
    if engine is not None:
        meta.connect(engine)
    return sqlalchemy.Table(table,meta)

def _to_index(index,table=None,engine=None):
    if isinstance(index,sqlalchemy.Index):
        return index
    # Given: index name; table name required
    table = _to_table(table,engine)
    ret = sqlalchemy.Index(index)
    ret.table = table
    return ret

def rename_table(table,name,engine=None):
    """Rename a table, given the table's current name and the new name."""
    table = _to_table(table,engine)
    table.rename(name)

def rename_index(index,name,table=None,engine=None):
    """Rename an index

    Takes an index name/object, a table name/object, and an engine. Engine and
    table aren't required if an index object is given.
    """
    index = _to_index(index,table,engine)
    index.rename(name)


def _engine_run_visitor(engine,visitorcallable,element,**kwargs):
    conn = engine.connect()
    try:
        element.accept_schema_visitor(visitorcallable(engine.dialect,connection=conn))
    finally:
        conn.close()

def alter_column(*p,**k):
    """Alter a column

    Parameters: column name, table name, an engine, and the
    properties of that column to change
    """
    if len(p) and isinstance(p[0],sqlalchemy.Column):
        col = p[0]
    else:
        col = None
    if 'table' not in k:
        k['table'] = col.table
    if 'engine' not in k:
        k['engine'] = k['table'].bind
    engine = k['engine']
    delta = _ColumnDelta(*p,**k)
    visitorcallable = get_engine_visitor(engine,'schemachanger')
    _engine_run_visitor(engine,visitorcallable,delta)

    # Update column
    if col is not None:
        # Special case: change column key on rename, if key not explicit
        # Used by SA : table.c.[key]
        #
        # This fails if the key was explicit AND equal to the column name.
        # (It changes the key name when it shouldn't.)
        # Not much we can do about it.
        if 'name' in delta.keys():
            if (col.name == col.key):
                newname = delta['name']
                del col.table.c[col.key]
                setattr(col,'key',newname)
                col.table.c[col.key] = col
        # Change all other attrs
        for key,val in delta.iteritems():
            setattr(col,key,val)

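The rename special case in alter_column() re-keys the table's column collection so `table.c.[key]` lookups keep working. The dict manipulation it performs reduces to (an illustrative sketch on a plain dict, not SQLAlchemy's collection type):

```python
# Re-key a dict-backed column collection after a rename: remove the entry
# under the old key and reinsert the same object under the new key.
def rekey(columns, old_key, new_key):
    col = columns.pop(old_key)
    columns[new_key] = col
    return columns

cols = {'id': 'id-column', 'name': 'name-column'}
rekey(cols, 'name', 'fullname')
assert 'fullname' in cols and 'name' not in cols
```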
def _normalize_table(column,table):
    if table is not None:
        if table is not column.table:
            # This is a bit of a hack: we end up with dupe PK columns here
            pk_names = map(lambda c: c.name, table.primary_key)
            if column.primary_key and pk_names.count(column.name):
                index = pk_names.index(column.name)
                del table.primary_key[index]
            table.append_column(column)
    return column.table


class _WrapRename(object):
    def __init__(self,item,name):
        self.item = item
        self.name = name

    def accept_schema_visitor(self,visitor):
        if isinstance(self.item,sqlalchemy.Table):
            suffix = 'table'
        elif isinstance(self.item,sqlalchemy.Column):
            suffix = 'column'
        elif isinstance(self.item,sqlalchemy.Index):
            suffix = 'index'
        funcname = 'visit_%s'%suffix
        func = getattr(visitor,funcname)
        param = self.item,self.name
        return func(param)

class _ColumnDelta(dict):
    """Extracts the differences between two columns/column-parameters"""
    def __init__(self,*p,**k):
        """Extract ALTER-able differences from two columns

        May receive parameters arranged in several different ways:
         * old_column_object,new_column_object,*parameters
           Identifies attributes that differ between the two columns.
           Parameters specified outside of either column are always executed
           and override column differences.
         * column_object,[current_name,]*parameters
           Parameters specified are changed; table name is extracted from
           column object.
           Name is changed to column_object.name from current_name, if
           current_name is specified. If not specified, name is unchanged.
         * current_name,table,*parameters
           'table' may be either an object or a name
        """
        # Things are initialized differently depending on how many column
        # parameters are given. Figure out how many and call the appropriate
        # method.
        if len(p) >= 1 and isinstance(p[0],sqlalchemy.Column):
            # At least one column specified
            if len(p) >= 2 and isinstance(p[1],sqlalchemy.Column):
                # Two columns specified
                func = self._init_2col
            else:
                # Exactly one column specified
                func = self._init_1col
        else:
            # Zero columns specified
            func = self._init_0col
        diffs = func(*p,**k)
        self._set_diffs(diffs)
    # Column attributes that can be altered
    diff_keys = ('name','type','nullable','default','primary_key','foreign_key')

    def _get_table_name(self):
        if isinstance(self._table,basestring):
            ret = self._table
        else:
            ret = self._table.name
        return ret
    table_name = property(_get_table_name)

    def _get_table(self):
        if isinstance(self._table,basestring):
            ret = None
        else:
            ret = self._table
        return ret
    table = property(_get_table)

    def _init_0col(self,current_name,*p,**k):
        p,k = self._init_normalize_params(p,k)
        table = k.pop('table')
        self.current_name = current_name
        self._table = table
        return k

    def _init_1col(self,col,*p,**k):
        p,k = self._init_normalize_params(p,k)
        self._table = k.pop('table',None) or col.table
        self.result_column = col.copy()
        if 'current_name' in k:
            # Renamed
            self.current_name = k.pop('current_name')
            k.setdefault('name',col.name)
        else:
            self.current_name = col.name
        return k

    def _init_2col(self,start_col,end_col,*p,**k):
        p,k = self._init_normalize_params(p,k)
        self.result_column = start_col.copy()
        self._table = k.pop('table',None) or start_col.table or end_col.table
        self.current_name = start_col.name
        for key in ('name','nullable','default','primary_key','foreign_key'):
            val = getattr(end_col,key,None)
            if getattr(start_col,key,None) != val:
                k.setdefault(key,val)
        if not self.column_types_eq(start_col.type,end_col.type):
            k.setdefault('type',end_col.type)
        return k

    def _init_normalize_params(self,p,k):
        p = list(p)
        if len(p):
            k.setdefault('name',p.pop(0))
        if len(p):
            k.setdefault('type',p.pop(0))
        # TODO: sequences? FKs?
        return p,k

    def _set_diffs(self,diffs):
        for key in self.diff_keys:
            if key in diffs:
                self[key] = diffs[key]
                if getattr(self,'result_column',None) is not None:
                    setattr(self.result_column,key,diffs[key])

    def column_types_eq(self,this,that):
        ret = isinstance(this,that.__class__)
        ret = ret or isinstance(that,this.__class__)
        # String length is a special case
        if ret and isinstance(that,sqlalchemy.types.String):
            ret = (getattr(this,'length',None) == getattr(that,'length',None))
        return ret

    def accept_schema_visitor(self,visitor):
        return visitor.visit_column(self)

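At its core, _ColumnDelta's two-column form compares attributes and keeps only what changed. A minimal standalone sketch of that idea (`FakeColumn` is a hypothetical stand-in for `sqlalchemy.Column`, and only three keys are compared here):

```python
# Compare two "column" objects attribute by attribute and keep only the
# attributes that differ, as _ColumnDelta._init_2col does.

class FakeColumn(object):
    def __init__(self, name, type_, nullable=True):
        self.name, self.type, self.nullable = name, type_, nullable

DIFF_KEYS = ('name', 'type', 'nullable')

def column_delta(start, end):
    delta = {}
    for key in DIFF_KEYS:
        old, new = getattr(start, key, None), getattr(end, key, None)
        if old != new:
            delta[key] = new
    return delta

old = FakeColumn('age', 'INTEGER')
new = FakeColumn('age', 'BIGINT', nullable=False)
assert column_delta(old, new) == {'type': 'BIGINT', 'nullable': False}
```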
class ChangesetTable(object):
    """Changeset extensions to SQLAlchemy tables."""
    def create_column(self,column):
        """Creates a column

        The column parameter may be a column definition or the name of a column
        in this table.
        """
        if not isinstance(column,sqlalchemy.Column):
            # It's a column name
            column = getattr(self.c,str(column))
        column.create(table=self)

    def drop_column(self,column):
        """Drop a column, given its name or definition."""
        if not isinstance(column,sqlalchemy.Column):
            # It's a column name
            try:
                column = getattr(self.c,str(column))
            except AttributeError:
                # That column isn't part of the table. We don't need its entire
                # definition to drop the column, just its name, so create a dummy
                # column with the same name.
                column = sqlalchemy.Column(str(column))
        column.drop(table=self)

    def _meta_key(self):
        return sqlalchemy.schema._get_table_key(self.name,self.schema)

    def deregister(self):
        """Remove this table from its metadata"""
        key = self._meta_key()
        meta = self.metadata
        if key in meta.tables:
            del meta.tables[key]

    def rename(self,name,*args,**kwargs):
        """Rename this table

        This changes both the database name and the name of this Python object
        """
        engine = self.bind
        visitorcallable = get_engine_visitor(engine,'schemachanger')
        param = _WrapRename(self,name)
        #engine._run_visitor(visitorcallable,param,*args,**kwargs)
        _engine_run_visitor(engine,visitorcallable,param,*args,**kwargs)

        # Fix metadata registration
        meta = self.metadata
        self.deregister()
        self.name = name
        self._set_parent(meta)

    def _get_fullname(self):
        """Fullname should always be up to date"""
        # Copied from Table constructor
        if self.schema is not None:
            ret = "%s.%s"%(self.schema,self.name)
        else:
            ret = self.name
        return ret
    fullname = property(_get_fullname,(lambda self,val: None))

class ChangesetColumn(object):
    """Changeset extensions to SQLAlchemy columns"""
    def alter(self,*p,**k):
        """Alter a column's definition: ALTER TABLE ALTER COLUMN

        May supply a new column object, or a list of properties to change.

        For example, the following are equivalent:
            col.alter(Column('myint',Integer,nullable=False))
            col.alter('myint',Integer,nullable=False)
            col.alter(name='myint',type=Integer,nullable=False)

        Column name, type, default, and nullable may be changed here. Note that
        for column defaults, only PassiveDefaults are managed by the database -
        changing others doesn't make sense.
        """
        return alter_column(self,*p,**k)

    def create(self,table=None,*args,**kwargs):
        """Create this column in the database. Assumes the given table exists.

        ALTER TABLE ADD COLUMN, for most databases.
        """
        table = _normalize_table(self,table)
        engine = table.bind
        visitorcallable = get_engine_visitor(engine,'columngenerator')
        engine._run_visitor(visitorcallable,self,*args,**kwargs)
        return self

    def drop(self,table=None,*args,**kwargs):
        """Drop this column from the database, leaving its table intact.

        ALTER TABLE DROP COLUMN, for most databases.
        """
        table = _normalize_table(self,table)
        engine = table.bind
        visitorcallable = get_engine_visitor(engine,'columndropper')
        engine._run_visitor(visitorcallable,self,*args,**kwargs)
        ## Remove col from table object, too
        #del table._columns[self.key]
        #if self in table.primary_key:
        #    table.primary_key.remove(self)
        return self


class ChangesetIndex(object):
    """Changeset extensions to SQLAlchemy Indexes"""
    def rename(self,name,*args,**kwargs):
        """Change the name of an index.

        This changes both the Python object name and the database name.
        """
        engine = self.table.bind
        visitorcallable = get_engine_visitor(engine,'schemachanger')
        param = _WrapRename(self,name)
        #engine._run_visitor(visitorcallable,param,*args,**kwargs)
        _engine_run_visitor(engine,visitorcallable,param,*args,**kwargs)
        self.name = name


def _patch():
    """All the 'ugly' operations that patch SQLAlchemy's internals."""
    sqlalchemy.schema.Table.__bases__ += (ChangesetTable,)
    sqlalchemy.schema.Column.__bases__ += (ChangesetColumn,)
    sqlalchemy.schema.Index.__bases__ += (ChangesetIndex,)
_patch()
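The _patch() trick grafts a mixin onto an already-defined class by appending to `__bases__`. A self-contained sketch of the technique follows; the class names here are illustrative stand-ins, not SQLAlchemy's (note that CPython refuses `__bases__` assignment on a class whose only base is `object`, hence the intermediate base in this sketch):

```python
class SchemaItem(object):          # hypothetical intermediate base; needed
    pass                           # because direct subclasses of object
                                   # reject __bases__ writes in CPython

class Table(SchemaItem):           # stands in for sqlalchemy.schema.Table
    def __init__(self, name):
        self.name = name

class ChangesetTable(object):      # the mixin carrying the extra behaviour
    def rename(self, name):
        self.name = name

# The _patch() technique: every existing and future Table gains .rename().
Table.__bases__ += (ChangesetTable,)

t = Table('users')
t.rename('accounts')
assert t.name == 'accounts'
```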
migrate/run.py (new file, 21 lines)
@@ -0,0 +1,21 @@
"""Each migration script must import everything in this file."""
#from sqlalchemy import *
#from migrate.changeset import *
#from migrate.versioning import logengine

#__all__=[
#    'engine',
#]

# 'migrate_engine' is assigned elsewhere, and used during scripts
#migrate_engine = None

def driver(engine):
    """Given an engine, return the name of the database driving it:

    'postgres','mysql','sqlite'...
    """
    from warnings import warn
    warn("Use engine.name instead; http://erosson.com/migrate/trac/ticket/80",
        DeprecationWarning)
    return engine.name
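driver() shows the standard deprecation-shim pattern: keep the old callable working while steering callers to the replacement via a DeprecationWarning. A standalone sketch (the `Engine` class is a hypothetical stand-in for an SQLAlchemy engine):

```python
import warnings

class Engine(object):  # hypothetical stand-in; real engines expose .name
    name = 'sqlite'

def driver(engine):
    # Warn, then delegate to the replacement attribute.
    warnings.warn("Use engine.name instead", DeprecationWarning)
    return engine.name

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    assert driver(Engine()) == 'sqlite'
    assert caught and issubclass(caught[0].category, DeprecationWarning)
```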
migrate/versioning/__init__.py (new file, 0 lines)
migrate/versioning/api.py (new file, 279 lines)
@@ -0,0 +1,279 @@
"""An external API to the versioning system

Used by the shell utility; could also be used by other scripts
"""
import sys
import inspect
from sqlalchemy import create_engine
from migrate.versioning import exceptions,repository,schema,version
import script as script_ #command name conflict

__all__=[
    'help',
    'create',
    'script',
    'commit',
    'version',
    'source',
    'version_control',
    'db_version',
    'upgrade',
    'downgrade',
    'drop_version_control',
    'manage',
    'test',
    ]

cls_repository = repository.Repository
cls_schema = schema.ControlledSchema
cls_vernum = version.VerNum
cls_script_python = script_.PythonScript

def help(cmd=None,**opts):
    """%prog help COMMAND

    Displays help on a given command.
    """
    if cmd is None:
        raise exceptions.UsageError(None)
    try:
        func = globals()[cmd]
    except KeyError:
        raise exceptions.UsageError("'%s' isn't a valid command. Try 'help COMMAND'"%cmd)
    ret = func.__doc__
    if sys.argv[0]:
        ret = ret.replace('%prog',sys.argv[0])
    return ret

def create(repository,name,**opts):
    """%prog create REPOSITORY_PATH NAME [--table=TABLE]

    Create an empty repository at the specified path.

    You can specify the version_table to be used; by default, it is '_version'.
    This table is created in all version-controlled databases.
    """
    try:
        rep=cls_repository.create(repository,name,**opts)
    except exceptions.PathFoundError,e:
        raise exceptions.KnownError("The path %s already exists"%e.args[0])

def script(path,**opts):
    """%prog script PATH

    Create an empty change script at the specified path.
    """
    try:
        cls_script_python.create(path,**opts)
    except exceptions.PathFoundError,e:
        raise exceptions.KnownError("The path %s already exists"%e.args[0])

def commit(script,repository,database=None,operation=None,version=None,**opts):
    """%prog commit SCRIPT_PATH.py REPOSITORY_PATH [VERSION]

    %prog commit SCRIPT_PATH.sql REPOSITORY_PATH DATABASE OPERATION [VERSION]

    Commit a script to this repository. The committed script is added to the
    repository, and the file disappears.

    Once a script has been committed, you can use it to upgrade a database with
    the 'upgrade' command.

    If a version is given, that version will be replaced instead of creating a
    new version.

    Normally, when writing change scripts in Python, you'll use the first form
    of this command (DATABASE and OPERATION aren't specified). If you write
    change scripts as .sql files, you'll need to specify DATABASE ('postgres',
    'mysql', 'oracle', 'sqlite'...) and OPERATION ('upgrade' or 'downgrade').
    You may commit multiple .sql files under the same version to complete
    functionality for a particular version::

        %prog commit upgrade.postgres.sql /repository/path postgres upgrade 1
        %prog commit downgrade.postgres.sql /repository/path postgres downgrade 1
        %prog commit upgrade.sqlite.sql /repository/path sqlite upgrade 1
        %prog commit downgrade.sqlite.sql /repository/path sqlite downgrade 1
        [etc...]
    """
    if (database is not None) and (operation is None) and (version is None):
        # Version was supplied as a positional
        version = database
        database = None

    repos = cls_repository(repository)
    repos.commit(script,version,database=database,operation=operation)

def test(script,repository,url=None,**opts):
    """%prog test SCRIPT_PATH REPOSITORY_PATH URL [VERSION]
    """
    engine=create_engine(url)
    schema = cls_schema(engine,repository)
    script = cls_script_python(script)
    # Upgrade
    print "Upgrading...",
    try:
        script.run(engine,1)
    except:
        print "ERROR"
        raise
    print "done"

    print "Downgrading...",
    try:
        script.run(engine,-1)
    except:
        print "ERROR"
        raise
    print "done"
    print "Success"

def version(repository,**opts):
    """%prog version REPOSITORY_PATH

    Display the latest version available in a repository.
    """
    repos=cls_repository(repository)
    return repos.latest

def source(version,dest=None,repository=None,**opts):
    """%prog source VERSION [DESTINATION] --repository=REPOSITORY_PATH

    Display the Python code for a particular version in this repository.
    Save it to the file at DESTINATION or, if omitted, send to stdout.
    """
    if repository is None:
        raise exceptions.UsageError("A repository must be specified")
    repos=cls_repository(repository)
    ret=repos.version(version).script().source()
    if dest is not None:
        dest=open(dest,'w')
        dest.write(ret)
        ret=None
    return ret

def version_control(url,repository,version=None,**opts):
    """%prog version_control URL REPOSITORY_PATH [VERSION]

    Mark a database as under this repository's version control.
    Once a database is under version control, schema changes should only be
    done via change scripts in this repository.

    This creates the table version_table in the database.

    The url should be any valid SQLAlchemy connection string.

    By default, the database begins at version 0 and is assumed to be empty.
    If the database is not empty, you may specify a version at which to begin
    instead. No attempt is made to verify this version's correctness - the
    database schema is expected to be identical to what it would be if the
    database were created from scratch.
    """
    engine=create_engine(url)
    cls_schema.create(engine,repository,version)

def db_version(url,repository,**opts):
    """%prog db_version URL REPOSITORY_PATH

    Show the current version of the repository with the given connection
    string, under version control of the specified repository.
    """
    engine = create_engine(url)
    schema = cls_schema(engine,repository)
    return schema.version

def upgrade(url,repository,version=None,**opts):
    """%prog upgrade URL REPOSITORY_PATH [VERSION] [--preview_py|--preview_sql]

    Upgrade a database to a later version.
    This runs the upgrade() function defined in your change scripts.

    By default, the database is updated to the latest available version. You
    may specify a version instead, if you wish.

    You may preview the Python or SQL code to be executed, rather than actually
    executing it, using the appropriate 'preview' option.
    """
    err = "Cannot upgrade a database of version %s to version %s. "\
        "Try 'downgrade' instead."
    return _migrate(url,repository,version,upgrade=True,err=err,**opts)

def downgrade(url,repository,version,**opts):
    """%prog downgrade URL REPOSITORY_PATH VERSION [--preview_py|--preview_sql]

    Downgrade a database to an earlier version.
    This is the reverse of upgrade; this runs the downgrade() function defined
    in your change scripts.

    You may preview the Python or SQL code to be executed, rather than actually
    executing it, using the appropriate 'preview' option.
    """
    err = "Cannot downgrade a database of version %s to version %s. "\
        "Try 'upgrade' instead."
    return _migrate(url,repository,version,upgrade=False,err=err,**opts)

def _migrate(url,repository,version,upgrade,err,**opts):
    engine = create_engine(url)
    schema = cls_schema(engine,repository)
    version = _migrate_version(schema,version,upgrade,err)

    changeset = schema.changeset(version)
    for ver,change in changeset:
        nextver = ver + changeset.step
        print '%s -> %s... '%(ver,nextver),
        if opts.get('preview_sql'):
            print
            print change.log
        elif opts.get('preview_py'):
            source_ver = max(ver,nextver)
            module = schema.repository.version(source_ver).script().module
            funcname = upgrade and "upgrade" or "downgrade"
            func = getattr(module,funcname)
            print
            print inspect.getsource(func)
        else:
            schema.runchange(ver,change,changeset.step)
            print 'done'

def _migrate_version(schema,version,upgrade,err):
    if version is None:
        return version
    # Version is specified: ensure we're upgrading in the right direction
    # (current version < target version for upgrading; reverse for down)
    version = cls_vernum(version)
    cur = schema.version
    if upgrade is not None:
        if upgrade:
            direction = cur <= version
        else:
            direction = cur >= version
        if not direction:
            raise exceptions.KnownError(err%(cur,version))
    return version

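The direction check in _migrate_version() refuses to "upgrade" to a lower version or "downgrade" to a higher one. The comparison logic can be sketched on plain integers (a standalone illustration, not migrate's VerNum type):

```python
# Validate migration direction: upgrades must move to a version >= current,
# downgrades to a version <= current.
def check_direction(current, target, upgrade):
    ok = current <= target if upgrade else current >= target
    if not ok:
        raise ValueError("Cannot %s from version %s to %s"
                         % ("upgrade" if upgrade else "downgrade", current, target))
    return target

assert check_direction(2, 5, upgrade=True) == 5
assert check_direction(5, 2, upgrade=False) == 2
```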
def drop_version_control(url,repository,**opts):
    """%prog drop_version_control URL REPOSITORY_PATH

    Removes version control from a database.
    """
    engine=create_engine(url)
    schema=cls_schema(engine,repository)
    schema.drop()

def manage(file,**opts):
    """%prog manage FILENAME VARIABLES...

    Creates a script that runs Migrate with a set of default values.

    For example::

        %prog manage manage.py --repository=/path/to/repository --url=sqlite:///project.db

    would create the script manage.py. The following two commands would then
    have exactly the same results::

        python manage.py version
        %prog version --repository=/path/to/repository
    """
    return repository.manage(file,**opts)
migrate/versioning/base/__init__.py (new file, 5 lines)
@@ -0,0 +1,5 @@
"""Things that should be imported by all migrate packages"""

#__all__ = ['logging','log','databases','operations']
from logger import logging,log
from const import databases,operations
migrate/versioning/base/const.py (new file, 10 lines)
@@ -0,0 +1,10 @@
__all__ = ['databases','operations']

#databases = ('sqlite','postgres','mysql','oracle','mssql','firebird')
databases = ('sqlite','postgres','mysql','oracle','mssql')

# Map operation names to function names
from sqlalchemy.util import OrderedDict
operations = OrderedDict()
operations['upgrade'] = 'upgrade'
operations['downgrade'] = 'downgrade'
migrate/versioning/base/logger.py (new file, 9 lines)
@@ -0,0 +1,9 @@
"""Manages logging (to stdout) for our versioning system.
"""
import logging

log=logging.getLogger()
log.setLevel(logging.WARNING)
log.addHandler(logging.StreamHandler())

__all__=['log','logging']
migrate/versioning/cfgparse.py (new file, 19 lines)
@@ -0,0 +1,19 @@
from migrate.versioning.base import *
from migrate.versioning import pathed
from ConfigParser import ConfigParser

#__all__=['MigrateConfigParser']

class Parser(ConfigParser):
    """A project configuration file"""
    def to_dict(self,sections=None):
        """It's easier to access config values like dictionaries"""
        return self._sections

class Config(pathed.Pathed,Parser):
    def __init__(self,path,*p,**k):
        """Confirm the config file exists; read it"""
        self.require_found(path)
        pathed.Pathed.__init__(self,path)
        Parser.__init__(self,*p,**k)
        self.read(path)
migrate/versioning/exceptions.py (new file, 58 lines)
@@ -0,0 +1,58 @@
import traceback

class Error(Exception):
    pass
class ApiError(Error):
    pass
class KnownError(ApiError):
    """A known error condition"""
class UsageError(ApiError):
    """A known error condition where help should be displayed"""

class ControlledSchemaError(Error):
    pass
class InvalidVersionError(ControlledSchemaError):
    """Invalid version number"""
class DatabaseNotControlledError(ControlledSchemaError):
    """Database should be under vc, but it's not"""
class DatabaseAlreadyControlledError(ControlledSchemaError):
    """Database shouldn't be under vc, but it is"""
class WrongRepositoryError(ControlledSchemaError):
    """This database is under version control by another repository"""
class NoSuchTableError(ControlledSchemaError):
    pass

class LogSqlError(Error):
    """A SQLError, with a traceback of where that statement was logged"""
    def __init__(self,sqlerror,entry):
        Exception.__init__(self)
        self.sqlerror = sqlerror
        self.entry = entry
    def __str__(self):
        ret = "SQL error in statement: \n%s\n"%(str(self.entry))
        ret += "Traceback from change script:\n"
        ret += ''.join(traceback.format_list(self.entry.traceback))
        ret += str(self.sqlerror)
        return ret

class PathError(Error):
    pass
class PathNotFoundError(PathError):
    """A path with a file was required; found no file"""
    pass
class PathFoundError(PathError):
    """A path with no file was required; found a file"""
    pass

class RepositoryError(Error):
    pass
class InvalidRepositoryError(RepositoryError):
    pass

class ScriptError(Error):
    pass
class InvalidScriptError(ScriptError):
    pass
migrate/versioning/pathed.py (new file, 60 lines)
@@ -0,0 +1,60 @@
|
||||
from migrate.versioning.base import *
|
||||
from migrate.versioning.util import KeyedInstance
|
||||
import os,shutil
|
||||
from migrate.versioning import exceptions
|
||||
|
||||
class Pathed(KeyedInstance):
|
||||
"""A class associated with a path/directory tree
|
||||
Only one instance of this class may exist for a particular file;
|
||||
__new__ will return an existing instance if possible
|
||||
"""
|
||||
parent=None
|
||||
|
||||
@classmethod
|
||||
def _key(cls,path):
|
||||
return str(path)
|
||||
|
||||
def __init__(self,path):
|
||||
self.path=path
|
||||
if self.__class__.parent is not None:
|
||||
self._init_parent(path)
|
||||
|
||||
def _init_parent(self,path):
|
||||
"""Try to initialize this object's parent, if it has one"""
|
||||
parent_path=self.__class__._parent_path(path)
|
||||
self.parent=self.__class__.parent(parent_path)
|
||||
log.info("Getting parent %r:%r"%(self.__class__.parent,parent_path))
|
||||
self.parent._init_child(path,self)
|
||||
|
||||
def _init_child(self,child,path):
|
||||
"""Run when a child of this object is initialized
|
||||
Parameters: the child object; the path to this object (its parent)
|
||||
"""
|
||||
pass
|
||||
|
||||
@classmethod
|
||||
def _parent_path(cls,path):
|
||||
"""Fetch the path of this object's parent from this object's path
|
||||
"""
|
||||
# os.path.dirname(), but strip directories like files (like unix basename)
|
||||
# Treat directories like files...
|
||||
if path[-1]=='/':
|
||||
path=path[:-1]
|
||||
ret = os.path.dirname(path)
|
||||
return ret
|
||||
|
||||
@classmethod
|
||||
def require_notfound(cls,path):
|
||||
"""Ensures a given path does not already exist"""
|
||||
if os.path.exists(path):
|
||||
raise exceptions.PathFoundError(path)
|
||||
|
||||
@classmethod
|
||||
def require_found(cls,path):
|
||||
"""Ensures a given path already exists"""
|
||||
if not os.path.exists(path):
|
||||
raise exceptions.PathNotFoundError(path)
|
||||
|
||||
def __str__(self):
|
||||
return self.path
|
||||
|
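The trailing-slash handling in `Pathed._parent_path` is the one subtle step above: a directory path like `/repo/versions/` must resolve to `/repo`, not to `/repo/versions`. A standalone Python 3 sketch of that logic (the original module is Python 2):

```python
import os.path

def parent_path(path):
    # Strip a trailing slash so directories are treated like files,
    # then take the dirname - mirroring Pathed._parent_path above.
    if path.endswith('/'):
        path = path[:-1]
    return os.path.dirname(path)

print(parent_path('/repo/versions/'))  # -> /repo
print(parent_path('/repo/versions'))   # -> /repo
```

Without the strip, `os.path.dirname('/repo/versions/')` would return `/repo/versions` itself, breaking the parent lookup.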
164
migrate/versioning/repository.py
Normal file
@ -0,0 +1,164 @@
from pkg_resources import resource_string, resource_filename
import os, shutil
import string
from migrate.versioning.base import *
from migrate.versioning.template import template
from migrate.versioning import exceptions, script, version, pathed, cfgparse


class Changeset(dict):
    """A collection of changes to be applied to a database

    Changesets are bound to a repository and manage a set of change scripts
    from that repository.

    Behaves like a dict, for the most part. Keys are ordered based on start/end.
    """
    def __init__(self, start, *changes, **k):
        """Give a start version; step must be explicitly stated"""
        self.step = k.pop('step', 1)
        self.start = version.VerNum(start)
        self.end = self.start
        for change in changes:
            self.add(change)

    def __iter__(self):
        return iter(self.items())

    def keys(self):
        """In a series of upgrades x -> y, keys are version x. Sorted."""
        ret = super(Changeset, self).keys()
        # Reverse order if downgrading
        ret.sort(reverse=(self.step < 1))
        return ret

    def values(self):
        return [self[k] for k in self.keys()]

    def items(self):
        return zip(self.keys(), self.values())

    def add(self, change):
        key = self.end
        self.end += self.step
        self[key] = change

    def run(self, *p, **k):
        for version, script in self:
            script.run(*p, **k)


class Repository(pathed.Pathed):
    """A project's change script repository"""
    # Configuration file, inside repository
    _config = 'migrate.cfg'
    # Version information, inside repository
    _versions = 'versions'

    def __init__(self, path):
        log.info('Loading repository %s...' % path)
        self.verify(path)
        super(Repository, self).__init__(path)
        self.config = cfgparse.Config(os.path.join(self.path, self._config))
        self.versions = version.Collection(os.path.join(self.path, self._versions))
        log.info('Repository %s loaded successfully' % path)
        log.debug('Config: %r' % self.config.to_dict())

    @classmethod
    def verify(cls, path):
        """Ensure the target path is a valid repository

        Raises InvalidRepositoryError if not
        """
        # Ensure the existence of required files
        try:
            cls.require_found(path)
            cls.require_found(os.path.join(path, cls._config))
            cls.require_found(os.path.join(path, cls._versions))
        except exceptions.PathNotFoundError, e:
            raise exceptions.InvalidRepositoryError(path)

    @classmethod
    def prepare_config(cls, pkg, rsrc, name, **opts):
        """Prepare a project configuration file for a new project"""
        # Prepare opts
        defaults = dict(
            version_table='migrate_version',
            repository_id=name,
            required_dbs=[],
        )
        for key, val in defaults.iteritems():
            if (key not in opts) or (opts[key] is None):
                opts[key] = val

        tmpl = resource_string(pkg, rsrc)
        ret = string.Template(tmpl).substitute(opts)
        return ret

    @classmethod
    def create(cls, path, name, **opts):
        """Create a repository at a specified path"""
        cls.require_notfound(path)

        pkg, rsrc = template.get_repository(as_pkg=True)
        tmplpkg = '.'.join((pkg, rsrc))
        tmplfile = resource_filename(pkg, rsrc)
        config_text = cls.prepare_config(tmplpkg, cls._config, name, **opts)
        # Create repository
        try:
            shutil.copytree(tmplfile, path)
            # Edit config defaults
            fd = open(os.path.join(path, cls._config), 'w')
            fd.write(config_text)
            fd.close()
            # Create a management script
            manager = os.path.join(path, 'manage.py')
            manage(manager, repository=path)
        except:
            log.error("There was an error creating your repository")
            raise
        return cls(path)

    def commit(self, *p, **k):
        reqd = self.config.get('db_settings', 'required_dbs')
        return self.versions.commit(required=reqd, *p, **k)

    latest = property(lambda self: self.versions.latest)
    version_table = property(lambda self: self.config.get('db_settings', 'version_table'))
    id = property(lambda self: self.config.get('db_settings', 'repository_id'))

    def version(self, *p, **k):
        return self.versions.version(*p, **k)

    @classmethod
    def clear(cls):
        super(Repository, cls).clear()
        version.Collection.clear()

    def changeset(self, database, start, end=None):
        """Create a changeset to migrate this database from ver. start to end/latest"""
        start = version.VerNum(start)
        if end is None:
            end = self.latest
        else:
            end = version.VerNum(end)
        if start <= end:
            step = 1
            range_mod = 1
            op = 'upgrade'
        else:
            step = -1
            range_mod = 0
            op = 'downgrade'
        versions = range(start + range_mod, end + range_mod, step)
        changes = [self.version(v).script(database, op) for v in versions]
        ret = Changeset(start, step=step, *changes)
        return ret


def manage(file, **opts):
    """Create a project management script"""
    pkg, rsrc = template.manage(as_pkg=True)
    tmpl = resource_string(pkg, rsrc)
    vars = ",".join(["%s='%s'" % var for var in opts.iteritems()])
    result = tmpl % dict(defaults=vars)

    fd = open(file, 'w')
    fd.write(result)
    fd.close()

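The `step`/`range_mod` arithmetic in `Repository.changeset` determines which script versions run: an upgrade from x to y runs scripts x+1 through y ascending, while a downgrade runs x down to y+1 descending. A minimal Python 3 sketch of just that range computation (plain ints stand in for `VerNum`):

```python
def changeset_versions(start, end):
    # Mirrors the version-range logic of Repository.changeset():
    # upgrades walk (start+1 .. end), downgrades walk (start .. end+1) descending.
    if start <= end:
        step, range_mod = 1, 1
    else:
        step, range_mod = -1, 0
    return list(range(start + range_mod, end + range_mod, step))

print(changeset_versions(0, 3))  # -> [1, 2, 3]
print(changeset_versions(3, 1))  # -> [3, 2]
```

Note the asymmetry: upgrading to version 3 ends by running script 3, but downgrading from version 3 starts by running script 3 (its `downgrade()` operation), so the endpoints shift by one in each direction.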
130
migrate/versioning/schema.py
Normal file
@ -0,0 +1,130 @@
from sqlalchemy import Table, Column, MetaData, String, Integer, create_engine
from sqlalchemy import exceptions as sa_exceptions
from migrate.versioning.repository import Repository
from migrate.versioning.version import VerNum
from migrate.versioning import exceptions


class ControlledSchema(object):
    """A database under version control"""
    def __init__(self, engine, repository):
        if type(repository) is str:
            repository = Repository(repository)
        self.engine = engine
        self.repository = repository
        self.meta = MetaData(engine)
        self._load()

    def __eq__(self, other):
        return (self.repository is other.repository
                and self.version == other.version)

    def _load(self):
        """Load controlled schema version info from DB"""
        tname = self.repository.version_table
        self.meta = MetaData(self.engine)
        if not hasattr(self, 'table') or self.table is None:
            try:
                self.table = Table(tname, self.meta, autoload=True)
            except exceptions.NoSuchTableError:
                raise exceptions.DatabaseNotControlledError(tname)
        # TODO?: verify that the table is correct (# cols, etc.)
        result = self.engine.execute(self.table.select())
        data = list(result)[0]
        # TODO?: exception if row count is bad
        # TODO: check repository id, exception if incorrect
        self.version = data['version']

    def _get_repository(self):
        """Given a database engine, try to guess the repository"""
        # TODO: no guessing yet; for now, a repository must be supplied
        raise NotImplementedError()

    @classmethod
    def create(cls, engine, repository, version=None):
        """Declare a database to be under a repository's version control"""
        # Confirm that the version # is valid: positive, integer, exists in repos
        if type(repository) is str:
            repository = Repository(repository)
        version = cls._validate_version(repository, version)
        table = cls._create_table_version(engine, repository, version)
        # TODO: history table
        # Load repository information and return
        return cls(engine, repository)

    @classmethod
    def _validate_version(cls, repository, version):
        """Ensures this is a valid version number for this repository

        If invalid, raises exceptions.InvalidVersionError
        Returns a valid version number
        """
        if version is None:
            version = 0
        try:
            version = VerNum(version)  # raises ValueError
            if version < 0 or version > repository.latest:
                raise ValueError()
        except ValueError:
            raise exceptions.InvalidVersionError(version)
        return version

    @classmethod
    def _create_table_version(cls, engine, repository, version):
        """Creates the versioning table in a database"""
        # Create tables
        tname = repository.version_table
        meta = MetaData(engine)
        try:
            table = Table(tname, meta,
                # MySQL needs a length for a String primary key
                Column('repository_id', String(255), primary_key=True),
                Column('repository_path', String),
                Column('version', Integer),
            )
            table.create()
        except (sa_exceptions.ArgumentError, sa_exceptions.SQLError):
            # The table already exists
            raise exceptions.DatabaseAlreadyControlledError()
        # Insert data
        engine.execute(table.insert(), repository_id=repository.id,
                       repository_path=repository.path, version=int(version))
        return table

    def drop(self):
        """Remove version control from a database"""
        try:
            self.table.drop()
        except sa_exceptions.SQLError:
            raise exceptions.DatabaseNotControlledError(str(self.table))

    def _engine_db(self, engine):
        """Returns the database name of an engine - 'postgres', 'sqlite'..."""
        # TODO: This is a bit of a hack...
        return str(engine.dialect.__module__).split('.')[-1]

    def changeset(self, version=None):
        database = self._engine_db(self.engine)
        start_ver = self.version
        changeset = self.repository.changeset(database, start_ver, version)
        return changeset

    def runchange(self, ver, change, step):
        startver = ver
        endver = ver + step
        # Current database version must be correct! Don't run if corrupt!
        if self.version != startver:
            raise exceptions.InvalidVersionError("%s is not %s" % (self.version, startver))
        # Run the change
        change.run(self.engine, step)
        # Update/refresh database version
        update = self.table.update(self.table.c.version == int(startver))
        self.engine.execute(update, version=int(endver))
        self._load()

    def upgrade(self, version=None):
        """Upgrade (or downgrade) to a specified version, or latest version"""
        changeset = self.changeset(version)
        for ver, change in changeset:
            self.runchange(ver, change, changeset.step)

3
migrate/versioning/script/__init__.py
Normal file
@ -0,0 +1,3 @@
from py import PythonScript
from sql import SqlScript
from base import BaseScript
42
migrate/versioning/script/base.py
Normal file
@ -0,0 +1,42 @@
from migrate.versioning.base import log, operations
from migrate.versioning import pathed, exceptions
import migrate.run


class BaseScript(pathed.Pathed):
    """Base class for other types of scripts

    All scripts have the following properties:

    source (script.source())
        The source code of the script
    version (script.version())
        The version number of the script
    operations (script.operations())
        The operations defined by the script: upgrade(), downgrade() or both.
        Returns a tuple of operations.
        Can also check for an operation with ex. script.operation(Script.ops.up)
    """

    def __init__(self, path):
        log.info('Loading script %s...' % path)
        self.verify(path)
        super(BaseScript, self).__init__(path)
        log.info('Script %s loaded successfully' % path)

    @classmethod
    def verify(cls, path):
        """Ensure this is a valid script, or raise InvalidScriptError

        This version simply ensures the script file's existence
        """
        try:
            cls.require_found(path)
        except:
            raise exceptions.InvalidScriptError(path)

    def source(self):
        fd = open(self.path)
        ret = fd.read()
        fd.close()
        return ret

    def run(self, engine):
        raise NotImplementedError()
63
migrate/versioning/script/py.py
Normal file
@ -0,0 +1,63 @@
import shutil
import migrate.run
from migrate.versioning import exceptions
from migrate.versioning.base import operations
from migrate.versioning.template import template
from migrate.versioning.script import base
from migrate.versioning.util import import_path


class PythonScript(base.BaseScript):
    @classmethod
    def create(cls, path, **opts):
        """Create an empty migration script"""
        cls.require_notfound(path)

        # TODO: Use the default script template (defined in the template
        # module) for now, but we might want to allow people to specify a
        # different one later.
        template_file = None
        src = template.get_script(template_file)
        shutil.copy(src, path)

    @classmethod
    def verify_module(cls, path):
        """Ensure this is a valid script, or raise InvalidScriptError"""
        # Try to import and get the upgrade() func
        try:
            module = import_path(path)
        except:
            # If the script itself has errors, that's not our problem
            raise
        try:
            assert callable(module.upgrade)
        except Exception, e:
            raise exceptions.InvalidScriptError(path + ': %s' % str(e))
        return module

    def _get_module(self):
        if not hasattr(self, '_module'):
            self._module = self.verify_module(self.path)
        return self._module
    module = property(_get_module)

    def _func(self, funcname):
        fn = getattr(self.module, funcname, None)
        if not fn:
            msg = "The function %s is not defined in this script"
            raise exceptions.ScriptError(msg % funcname)
        return fn

    def run(self, engine, step):
        if step > 0:
            op = 'upgrade'
        elif step < 0:
            op = 'downgrade'
        else:
            raise exceptions.ScriptError("%d is not a valid step" % step)
        funcname = base.operations[op]

        migrate.run.migrate_engine = migrate.migrate_engine = engine
        func = self._func(funcname)
        func()
        migrate.run.migrate_engine = migrate.migrate_engine = None
27
migrate/versioning/script/sql.py
Normal file
@ -0,0 +1,27 @@
from migrate.versioning.script import base


class SqlScript(base.BaseScript):
    """A file containing plain SQL statements."""
    def run(self, engine, step):
        text = self.source()
        # Don't rely on SA's autocommit here
        # (SA uses .startswith to check if a commit is needed. What if the
        # script starts with a comment?)
        conn = engine.connect()
        try:
            trans = conn.begin()
            try:
                # HACK: SQLite doesn't allow multiple statements through
                # its execute() method, but it provides executescript() instead
                dbapi = conn.engine.raw_connection()
                if getattr(dbapi, 'executescript', None):
                    dbapi.executescript(text)
                else:
                    conn.execute(text)
                # Success
                trans.commit()
            except:
                trans.rollback()
                raise
        finally:
            conn.close()
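The SQLite workaround above hinges on a DB-API detail: sqlite3's `execute()` refuses strings containing more than one statement, while its non-standard `executescript()` runs them all. A standalone Python 3 sketch of the fallback:

```python
import sqlite3

# A multi-statement .sql change script, as SqlScript.run() would read it.
script = """
CREATE TABLE t (id INTEGER);
INSERT INTO t VALUES (1);
INSERT INTO t VALUES (2);
"""

conn = sqlite3.connect(':memory:')
# execute() would raise sqlite3.Warning/ProgrammingError on multiple
# statements; executescript() is the escape hatch the code above probes for.
conn.executescript(script)
count = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(count)  # -> 2
```

For other DB-API drivers, where `executescript` doesn't exist, the `getattr` probe falls through to a plain `execute()` of the whole script text.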
143
migrate/versioning/shell.py
Normal file
@ -0,0 +1,143 @@
"""The migrate command-line tool."""
import sys
from migrate.versioning.base import *
from optparse import OptionParser, Values
from migrate.versioning import api, exceptions
import inspect

alias = dict(
    s=api.script,
    ci=api.commit,
    vc=api.version_control,
    dbv=api.db_version,
    v=api.version,
)

def alias_setup():
    global alias
    for key, val in alias.iteritems():
        setattr(api, key, val)
alias_setup()


class ShellUsageError(Exception):
    def die(self, exitcode=None):
        usage = """%%prog COMMAND ...

        Available commands:
        %s

        Enter "%%prog help COMMAND" for information on a particular command.
        """
        usage = usage.replace("\n" + " " * 8, "\n")
        commands = list(api.__all__)
        commands.sort()
        commands = '\n'.join(map((lambda x: '\t' + x), commands))
        message = usage % commands
        try:
            message = message.replace('%prog', sys.argv[0])
        except IndexError:
            pass

        if self.args[0] is not None:
            message += "\nError: %s\n" % str(self.args[0])
            if exitcode is None:
                exitcode = 1
        if exitcode is None:
            exitcode = 0
        die(message, exitcode)


def die(message, exitcode=1):
    if message is not None:
        sys.stderr.write(message)
        sys.stderr.write("\n")
    raise SystemExit(int(exitcode))


kwmap = dict(
    v='verbose',
    d='debug',
    f='force',
)

def kwparse(arg):
    ret = arg.split('=', 1)
    if len(ret) == 1:
        # No value specified (--kw, not --kw=stuff): use True
        ret = [ret[0], True]
    return ret

def parse_arg(arg, argnames):
    global kwmap
    if arg.startswith('--'):
        # Keyword-argument; either --keyword or --keyword=value
        kw, val = kwparse(arg[2:])
    elif arg.startswith('-'):
        # Short form of a keyword-argument; map it to a keyword
        try:
            parg = kwmap[arg[1:]]
        except KeyError:
            raise ShellUsageError("Invalid argument: %s" % arg)
        kw, val = kwparse(parg)
    else:
        # Simple positional parameter
        val = arg
        try:
            kw = argnames.pop(0)
        except IndexError, e:
            raise ShellUsageError("Too many arguments to command")
    return kw, val

def parse_args(*args, **kwargs):
    """Map positional arguments to keyword-args"""
    args = list(args)
    try:
        cmdname = args.pop(0)
    except IndexError:
        # No command specified: no error message; just show usage
        raise ShellUsageError(None)

    # Special cases: -h and --help should act like 'help'
    if cmdname == '-h' or cmdname == '--help':
        cmdname = 'help'

    cmdfunc = getattr(api, cmdname, None)
    if cmdfunc is None or cmdname.startswith('_'):
        raise ShellUsageError("Invalid command %s" % cmdname)

    argnames, p, k, defaults = inspect.getargspec(cmdfunc)
    argnames_orig = list(argnames)

    for arg in args:
        kw, val = parse_arg(arg, argnames)
        kwargs[kw] = val

    if defaults is not None:
        num_defaults = len(defaults)
    else:
        num_defaults = 0
    req_argnames = argnames_orig[:len(argnames_orig) - num_defaults]
    for name in req_argnames:
        if name not in kwargs:
            raise ShellUsageError("Too few arguments: %s not specified" % name)

    return cmdfunc, kwargs

def main(argv=None, **kwargs):
    if argv is None:
        argv = list(sys.argv[1:])

    try:
        command, kwargs = parse_args(*argv, **kwargs)
    except ShellUsageError, e:
        e.die()

    try:
        ret = command(**kwargs)
        if ret is not None:
            print ret
    except exceptions.UsageError, e:
        e = ShellUsageError(e.args[0])
        e.die()
    except exceptions.KnownError, e:
        die(e.args[0])


if __name__ == "__main__":
    main()

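The keyword-argument splitting done by `kwparse` is small enough to show in isolation: `--kw=value` splits once on `=`, while a bare `--kw` becomes a boolean flag. A Python 3 sketch (the shell module itself is Python 2):

```python
def kwparse(arg):
    # Mirrors shell.kwparse above: "kw=val" -> [kw, val]; bare "kw" -> [kw, True]
    ret = arg.split('=', 1)
    if len(ret) == 1:
        # No value specified (--kw, not --kw=stuff): use True
        ret = [ret[0], True]
    return ret

print(kwparse('repository=/path/to/repo'))  # -> ['repository', '/path/to/repo']
print(kwparse('force'))                     # -> ['force', True]
```

The `split('=', 1)` limit matters: a value containing `=` (say a database URL with query parameters) is kept intact rather than split further.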
67
migrate/versioning/template.py
Normal file
@ -0,0 +1,67 @@
from pkg_resources import resource_filename
import os, shutil
import sys
from migrate.versioning.base import *
from migrate.versioning import pathed


class Packaged(pathed.Pathed):
    """An object associated with a Python package"""
    def __init__(self, pkg):
        self.pkg = pkg
        path = self._find_path(pkg)
        super(Packaged, self).__init__(path)

    @classmethod
    def _find_path(cls, pkg):
        pkg_name, resource_name = pkg.rsplit('.', 1)
        ret = resource_filename(pkg_name, resource_name)
        return ret


class Collection(Packaged):
    """A collection of templates of a specific type"""
    _default = None

    def get_path(self, file):
        return os.path.join(self.path, str(file))

    def get_pkg(self, file):
        return (self.pkg, str(file))


class RepositoryCollection(Collection):
    _default = 'default'

class ScriptCollection(Collection):
    _default = 'default.py_tmpl'


class Template(Packaged):
    """Finds the paths/packages of various Migrate templates"""
    _repository = 'repository'
    _script = 'script'
    _manage = 'manage.py_tmpl'

    def __init__(self, pkg):
        super(Template, self).__init__(pkg)
        self.repository = RepositoryCollection('.'.join((self.pkg, self._repository)))
        self.script = ScriptCollection('.'.join((self.pkg, self._script)))

    def get_item(self, attr, filename=None, as_pkg=None, as_str=None):
        item = getattr(self, attr)
        if filename is None:
            filename = getattr(item, '_default')
        if as_pkg:
            ret = item.get_pkg(filename)
            if as_str:
                ret = '.'.join(ret)
        else:
            ret = item.get_path(filename)
        return ret

    def get_repository(self, filename=None, as_pkg=None, as_str=None):
        return self.get_item('repository', filename, as_pkg, as_str)

    def get_script(self, filename=None, as_pkg=None, as_str=None):
        return self.get_item('script', filename, as_pkg, as_str)

    def manage(self, **k):
        return (self.pkg, self._manage)


template_pkg = 'migrate.versioning.templates'
template = Template(template_pkg)
0
migrate/versioning/templates/__init__.py
Normal file
4
migrate/versioning/templates/manage.py_tmpl
Normal file
@ -0,0 +1,4 @@
#!/usr/bin/env python
from migrate.versioning.shell import main

main(%(defaults)s)
0
migrate/versioning/templates/repository/__init__.py
Normal file
4
migrate/versioning/templates/repository/default/README
Normal file
@ -0,0 +1,4 @@
This is a database migration repository.

More information at
http://code.google.com/p/sqlalchemy-migrate/
20
migrate/versioning/templates/repository/default/migrate.cfg
Normal file
@ -0,0 +1,20 @@
[db_settings]
# Used to identify which repository this database is versioned under.
# You can use the name of your project.
repository_id=${repository_id}

# The name of the database table used to track the schema version.
# This name shouldn't already be in use by your project.
# If this is changed once a database is under version control, you'll need to
# change the table name in each database too.
version_table=${version_table}

# When committing a change script, Migrate will attempt to generate the
# SQL for all supported databases; normally, if one of them fails - probably
# because you don't have that database installed - it is ignored and the
# commit continues, perhaps ending successfully.
# Databases in this list MUST compile successfully during a commit, or the
# entire commit will fail. List the databases your application will actually
# be using to ensure your updates to that database work properly.
# This must be a list; example: ['postgres','sqlite']
required_dbs=${required_dbs}
0
migrate/versioning/templates/script/__init__.py
Normal file
11
migrate/versioning/templates/script/default.py_tmpl
Normal file
@ -0,0 +1,11 @@
from sqlalchemy import *
from migrate import *

def upgrade():
    # Upgrade operations go here. Don't create your own engine; use the engine
    # named 'migrate_engine' imported from migrate.
    pass

def downgrade():
    # Operations to reverse the above upgrade go here.
    pass
12
migrate/versioning/templates/script/logsql.py_tmpl
Normal file
@ -0,0 +1,12 @@
from sqlalchemy import *
from migrate import *
logsql = True

def upgrade():
    # Upgrade operations go here. Don't create your own engine; use the engine
    # named 'migrate_engine' imported from migrate.
    pass

def downgrade():
    # Operations to reverse the above upgrade go here.
    pass
3
migrate/versioning/util/__init__.py
Normal file
@ -0,0 +1,3 @@
from keyedinstance import KeyedInstance
from importpath import import_path
16
migrate/versioning/util/importpath.py
Normal file
@ -0,0 +1,16 @@
import os
import sys

def import_path(fullpath):
    """Import a file with full path specification. Allows one to
    import from anywhere, something __import__ does not do.
    """
    # http://zephyrfalcon.org/weblog/arch_d7_2002_08_31.html
    path, filename = os.path.split(fullpath)
    filename, ext = os.path.splitext(filename)
    sys.path.append(path)
    module = __import__(filename)
    reload(module)  # Might be out of date during tests
    del sys.path[-1]
    return module
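The `sys.path` juggling above is the classic Python 2 way to import a file by path. As a point of comparison only (not part of this codebase), the same behavior can be had in Python 3 via `importlib` without touching `sys.path` or calling the removed `reload` builtin:

```python
import importlib.util
import os
import tempfile

def import_path(fullpath):
    # Python 3 equivalent of the helper above: load a module directly
    # from a file path, without mutating sys.path.
    name = os.path.splitext(os.path.basename(fullpath))[0]
    spec = importlib.util.spec_from_file_location(name, fullpath)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return module

# Usage: write a throwaway module to disk and import it by path.
with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, 'mymod.py')
    with open(path, 'w') as f:
        f.write('ANSWER = 42\n')
    mod = import_path(path)
    print(mod.ANSWER)  # -> 42
```

Unlike the `sys.path.append` approach, this cannot accidentally shadow (or be shadowed by) an installed module of the same name.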
36
migrate/versioning/util/keyedinstance.py
Normal file
@ -0,0 +1,36 @@
class KeyedInstance(object):
    """A class whose instances have a unique identifier of some sort

    No two instances with the same unique ID should exist - if we try to create
    a second instance, the first should be returned.
    """
    # _instances[class][instance]
    _instances = dict()

    def __new__(cls, *p, **k):
        instances = cls._instances
        clskey = str(cls)
        if clskey not in instances:
            instances[clskey] = dict()
        instances = instances[clskey]

        key = cls._key(*p, **k)
        if key not in instances:
            instances[key] = super(KeyedInstance, cls).__new__(cls, *p, **k)
        self = instances[key]
        return self

    @classmethod
    def _key(cls, *p, **k):
        """Given a unique identifier, return a dictionary key

        This should be overridden by child classes, to specify which parameters
        should determine an object's uniqueness
        """
        raise NotImplementedError()

    @classmethod
    def clear(cls, cls2=None):
        # Allow cls.clear() as well as KeyedInstance.clear(cls)
        if cls2 is not None:
            cls = cls2
        if str(cls) in cls._instances:
            del cls._instances[str(cls)]
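The interning pattern above (one instance per `_key`, cached per class in `__new__`) is what lets `Pathed` guarantee a single object per path. A self-contained Python 3 sketch, with a hypothetical `Thing` subclass standing in for `Pathed`:

```python
class KeyedInstance:
    # One cached instance per (class, key), as in the module above.
    _instances = {}

    def __new__(cls, *p, **k):
        per_cls = cls._instances.setdefault(cls, {})
        key = cls._key(*p, **k)
        if key not in per_cls:
            per_cls[key] = super().__new__(cls)
        return per_cls[key]

    @classmethod
    def _key(cls, *p, **k):
        # Subclasses decide which constructor args determine identity.
        raise NotImplementedError()


class Thing(KeyedInstance):
    @classmethod
    def _key(cls, name):
        return name

    def __init__(self, name):
        self.name = name


a = Thing('x')
b = Thing('x')
print(a is b)  # -> True
```

One consequence worth noting: `__init__` still runs on every construction, including when a cached instance is returned, so subclass initializers (like `Pathed.__init__`) must be safe to re-run.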
204
migrate/versioning/version.py
Normal file
@ -0,0 +1,204 @@
from migrate.versioning import exceptions, pathed, script
import os, shutil


class VerNum(object):
    """A version number"""
    _instances = dict()

    def __new__(cls, value):
        val = str(value)
        if val not in cls._instances:
            cls._instances[val] = super(VerNum, cls).__new__(cls, value)
        ret = cls._instances[val]
        return ret

    def __init__(self, value):
        self.value = str(int(value))
        if self < 0:
            raise ValueError("Version number cannot be negative")

    def __repr__(self):
        return str(self.value)

    def __str__(self):
        return str(self.value)

    def __int__(self):
        return int(self.value)

    def __add__(self, value):
        ret = int(self) + int(value)
        return VerNum(ret)

    def __sub__(self, value):
        return self + (int(value) * -1)

    def __cmp__(self, value):
        return int(self) - int(value)


class Collection(pathed.Pathed):
    """A collection of versioning scripts in a repository"""
    def __init__(self, path):
        super(Collection, self).__init__(path)
        self.versions = dict()

        ver = self.latest = VerNum(1)
        vers = os.listdir(path)
        # This runs up to the latest *complete* version; stops when one's missing
        while str(ver) in vers:
            verpath = self.version_path(ver)
            self.versions[ver] = Version(verpath)
            ver += 1
        self.latest = ver - 1

    def version_path(self, ver):
        return os.path.join(self.path, str(ver))

    def version(self, vernum=None):
        if vernum is None:
            vernum = self.latest
        return self.versions[VerNum(vernum)]

    def commit(self, path, ver=None, *p, **k):
        """Commit a script to this collection of scripts"""
        maxver = self.latest + 1
        if ver is None:
            ver = maxver
        # Ver must be valid: can't upgrade past the next version
        # No change scripts exist for 0 (even though it's a valid version)
        if ver > maxver or ver == 0:
            raise exceptions.InvalidVersionError()
        verpath = self.version_path(ver)
        tmpname = None
        try:
            # If replacing an old version, copy it in case it gets trashed
            if os.path.exists(verpath):
                tmpname = os.path.join(os.path.split(verpath)[0], "%s_tmp" % ver)
                shutil.copytree(verpath, tmpname)
                version = Version(verpath)
            else:
                # Create version folder
                version = Version.create(verpath)
            self.versions[ver] = version
            # Commit the individual script
            script = version.commit(path, *p, **k)
        except:
            # Rollback everything we did in the try before dying, and reraise
            # Remove the created version folder
            shutil.rmtree(verpath)
            # Restore the old version, if one existed
            if tmpname is not None:
                shutil.move(tmpname, verpath)
            raise
        # Success: mark latest; delete the old version's backup
        if tmpname is not None:
            shutil.rmtree(tmpname)
        self.latest = ver

    @classmethod
    def clear(cls):
        super(Collection, cls).clear()
        Version.clear()


class extensions:
    """A namespace for file extensions"""
    py = 'py'
    sql = 'sql'


class Version(pathed.Pathed):
    """A single version in a repository"""
    def __init__(self, path):
        super(Version, self).__init__(path)
        # Version must be numeric
        try:
            self.version = VerNum(os.path.basename(path))
        except:
            raise exceptions.InvalidVersionError(path)
        # Collect scripts in this folder
        self.sql = dict()
        self.python = None
        try:
            for script in os.listdir(path):
                self._add_script(os.path.join(path, script))
        except:
            raise exceptions.InvalidVersionError(path)

    def script(self, database=None, operation=None):
        try:
            # Try to return a .sql script first
            ret = self._script_sql(database, operation)
|
||||
except KeyError:
|
||||
# No .sql script exists; return a python script
|
||||
ret = self._script_py()
|
||||
assert ret is not None
|
||||
return ret
|
||||
def _script_py(self):
|
||||
return self.python
|
||||
def _script_sql(self,database,operation):
|
||||
return self.sql[database][operation]
|
||||
|
||||
@classmethod
|
||||
def create(cls,path):
|
||||
os.mkdir(path)
|
||||
try:
|
||||
ret=cls(path)
|
||||
except:
|
||||
os.rmdir(path)
|
||||
raise
|
||||
return ret
|
||||
|
||||
def _add_script(self,path):
|
||||
if path.endswith(extensions.py):
|
||||
self._add_script_py(path)
|
||||
elif path.endswith(extensions.sql):
|
||||
self._add_script_sql(path)
|
||||
def _add_script_sql(self,path):
|
||||
try:
|
||||
version,dbms,op,ext=path.split('.',3)
|
||||
except:
|
||||
raise exceptions.ScriptError("Invalid sql script name %s"%path)
|
||||
|
||||
# File the script into a dictionary
|
||||
dbmses = self.sql
|
||||
if dbms not in dbmses:
|
||||
dbmses[dbms] = dict()
|
||||
ops = dbmses[dbms]
|
||||
ops[op] = script.SqlScript(path)
|
||||
def _add_script_py(self,path):
|
||||
self.python = script.PythonScript(path)
|
||||
|
||||
def _rm_ignore(self,path):
|
||||
"""Try to remove a path; ignore failure"""
|
||||
try:
|
||||
os.remove(path)
|
||||
except OSError:
|
||||
pass
|
||||
|
||||
def commit(self,path,database=None,operation=None,required=None):
|
||||
if (database is not None) and (operation is not None):
|
||||
return self._commit_sql(path,database,operation)
|
||||
return self._commit_py(path,required)
|
||||
def _commit_sql(self,path,database,operation):
|
||||
if not path.endswith(extensions.sql):
|
||||
msg = "Bad file extension: should end with %s"%extensions.sql
|
||||
raise exceptions.ScriptError(msg)
|
||||
dest=os.path.join(self.path,'%s.%s.%s.%s'%(
|
||||
str(self.version),str(database),str(operation),extensions.sql))
|
||||
# Move the committed py script to this version's folder
|
||||
shutil.move(path,dest)
|
||||
self._add_script(dest)
|
||||
|
||||
def _commit_py(self,path_py,required=None):
|
||||
if (not os.path.exists(path_py)) or (not os.path.isfile(path_py)):
|
||||
raise exceptions.InvalidVersionError(path_py)
|
||||
dest = os.path.join(self.path,'%s.%s'%(str(self.version),extensions.py))
|
||||
|
||||
# Move the committed py script to this version's folder
|
||||
shutil.move(path_py,dest)
|
||||
self._add_script(dest)
|
||||
# Also delete the .pyc file, if it exists
|
||||
path_pyc = path_py+'c'
|
||||
if os.path.exists(path_pyc):
|
||||
self._rm_ignore(path_pyc)
|
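The `VerNum` class above is an interning (flyweight) pattern: every distinct version string maps to a single shared instance, so identity checks and dictionary lookups agree. A minimal standalone sketch of that idea (not the module's actual class; it normalizes in `__new__`, whereas the original normalizes only in `__init__`, so in the original `VerNum('03')` and `VerNum(3)` would intern separately):

```python
class VerNum(object):
    """Interned version numbers: one shared instance per numeric value."""
    _instances = {}

    def __new__(cls, value):
        val = str(int(value))          # normalize so 3 and '3' intern together
        if val not in cls._instances:
            inst = super(VerNum, cls).__new__(cls)
            inst.value = val
            cls._instances[val] = inst
        return cls._instances[val]

    def __int__(self):
        return int(self.value)

    def __add__(self, other):
        return VerNum(int(self) + int(other))   # arithmetic stays interned

    def __sub__(self, other):
        return VerNum(int(self) - int(other))

assert VerNum(3) is VerNum('3')        # same interned object
assert int(VerNum(2) + 1) == 3
```

Because instances are shared, `Collection.versions` can be keyed by `VerNum` and looked up with either an int or a string wrapped in `VerNum(...)`.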
12
setup.cfg
Normal file
@ -0,0 +1,12 @@
[pudge]
docs=docs/index.rst,docs/versioning.rst,docs/changeset.rst,docs/download.rst
dest=docs/html
title=Migrate
trac_url=http://erosson.com/migrate/trac
mailing_list_url=http://groups.google.com/group/migrate-users
theme=erosson.com

[publish]
doc-dir=docs/html
doc-dest=scp://evan@erosson.com/var/opt/htdocs/evan/www/migrate/docs
make-dirs=True
37
setup.py
Normal file
@ -0,0 +1,37 @@
#!/usr/bin/python
from setuptools import setup,find_packages

# Pudge
try:
    import buildutils
except ImportError:
    pass

setup(
    name = "sqlalchemy-migrate",
    version = "0.4.0dev",
    packages = find_packages(exclude=['test*']),
    scripts = ['shell/migrate'],
    include_package_data = True,
    description = "Database schema migration for SQLAlchemy",
    long_description = """
Inspired by Ruby on Rails' migrations, Migrate provides a way to deal with database schema changes in `SQLAlchemy <http://sqlalchemy.org>`_ projects.

Migrate extends SQLAlchemy to have database changeset handling. It provides a database change repository mechanism which can be used from the command line as well as from inside python code.
""",

    install_requires = ['sqlalchemy >= 0.4.0'],
    setup_requires = ['py >= 0.9.0-beta'],
    dependency_links = [
        "http://codespeak.net/download/py/",
    ],

    author = "Evan Rosson",
    author_email = "evan.rosson@gmail.com",
    url = "http://code.google.com/p/sqlalchemy-migrate/",
    maintainer = "Jan Dittberner",
    maintainer_email = "jan@dittberner.info",
    license = "MIT",

    test_suite = "py.test.cmdline.main",
)
4
shell/migrate
Executable file
@ -0,0 +1,4 @@
#!/usr/bin/env python
from migrate.versioning.shell import main

main()
0
test/__init__.py
Normal file
0
test/changeset/__init__.py
Normal file
515
test/changeset/test_changeset.py
Normal file
@ -0,0 +1,515 @@
import sqlalchemy
from sqlalchemy import *
from test import fixture
from migrate import changeset
from migrate.changeset import *
from migrate.changeset.schema import _ColumnDelta
from sqlalchemy.databases import information_schema

import migrate
from migrate.run import driver

class TestAddDropColumn(fixture.DB):
    level=fixture.DB.CONNECT
    meta = MetaData()
    # We'll be adding the 'data' column
    table_name = 'tmp_adddropcol'
    table_int = 0

    def setUp(self):
        if self.url.startswith('sqlite://'):
            self.engine = create_engine(self.url)
        #self.engine.echo=True
        self.meta.clear()
        self.table = Table(self.table_name,self.meta,
            Column('id',Integer,primary_key=True),
        )
        super(TestAddDropColumn,self).setUp()
        self.meta.bind = self.engine
        if self.engine.has_table(self.table.name):
            self.table.drop()
        self.table.create()
    def tearDown(self):
        super(TestAddDropColumn,self).tearDown()
        if self.engine.has_table(self.table.name):
            self.table.drop()
        self.meta.clear()

    def run_(self,create_column_func,drop_column_func,*col_p,**col_k):
        col_name = 'data'

        def _assert_numcols(expected,type_):
            result = len(self.table.c)
            self.assertEquals(result,expected,
                "# %s cols incorrect: %s != %s"%(type_,result,expected))
            if not col_k.get('primary_key',None):
                return
            # new primary key: check its length too
            result = len(self.table.primary_key)
            self.assertEquals(result,expected,
                "# %s pks incorrect: %s != %s"%(type_,result,expected))
        def assert_numcols(expected):
            # number of cols should be correct in table object and in database
            # Changed: create/drop shouldn't mess with the objects
            #_assert_numcols(expected,'object')
            # Detect # database cols via autoload
            self.meta.clear()
            self.table=Table(self.table_name,self.meta,autoload=True)
            _assert_numcols(expected,'database')
        assert_numcols(1)
        if len(col_p) == 0:
            col_p = [String]
        col = Column(col_name,*col_p,**col_k)
        create_column_func(col)
        #create_column(col,self.table)
        assert_numcols(2)
        self.assertEquals(getattr(self.table.c,col_name),col)
        #drop_column(col,self.table)
        col = getattr(self.table.c,col_name)
        # SQLite can't do drop column: stop here
        if self.url.startswith('sqlite://'):
            self.assertRaises(changeset.exceptions.NotSupportedError,drop_column_func,col)
            return
        drop_column_func(col)
        assert_numcols(1)

    @fixture.usedb()
    def test_undefined(self):
        """Add/drop columns not yet defined in the table"""
        def add_func(col):
            return create_column(col,self.table)
        def drop_func(col):
            return drop_column(col,self.table)
        return self.run_(add_func,drop_func)

    @fixture.usedb()
    def test_defined(self):
        """Add/drop columns already defined in the table"""
        def add_func(col):
            self.meta.clear()
            self.table = Table(self.table_name,self.meta,
                Column('id',Integer,primary_key=True),
                col,
            )
            return create_column(col,self.table)
        def drop_func(col):
            return drop_column(col,self.table)
        return self.run_(add_func,drop_func)

    @fixture.usedb()
    def test_method_bound(self):
        """Add/drop columns via column methods; columns bound to a table
        ie. no table parameter passed to function
        """
        def add_func(col):
            self.assert_(col.table is None,col.table)
            self.table.append_column(col)
            return col.create()
        def drop_func(col):
            #self.assert_(col.table is None,col.table)
            #self.table.append_column(col)
            return col.drop()
        return self.run_(add_func,drop_func)

    @fixture.usedb()
    def test_method_notbound(self):
        """Add/drop columns via column methods; columns not bound to a table"""
        def add_func(col):
            return col.create(self.table)
        def drop_func(col):
            return col.drop(self.table)
        return self.run_(add_func,drop_func)

    @fixture.usedb()
    def test_tablemethod_obj(self):
        """Add/drop columns via table methods; by column object"""
        def add_func(col):
            return self.table.create_column(col)
        def drop_func(col):
            return self.table.drop_column(col)
        return self.run_(add_func,drop_func)

    @fixture.usedb()
    def test_tablemethod_name(self):
        """Add/drop columns via table methods; by column name"""
        def add_func(col):
            # must be bound to table
            self.table.append_column(col)
            return self.table.create_column(col.name)
        def drop_func(col):
            # Not necessarily bound to table
            return self.table.drop_column(col.name)
        return self.run_(add_func,drop_func)

    @fixture.usedb()
    def test_byname(self):
        """Add/drop columns via functions; by table object and column name"""
        def add_func(col):
            self.table.append_column(col)
            return create_column(col.name,self.table)
        def drop_func(col):
            return drop_column(col.name,self.table)
        return self.run_(add_func,drop_func)

    @fixture.usedb()
    def test_fk(self):
        """Can create columns with foreign keys"""
        reftable = Table('tmp_ref',self.meta,
            Column('id',Integer,primary_key=True),
        )
        def add_func(col):
            # create FK's target
            if self.engine.has_table(reftable.name):
                reftable.drop()
            reftable.create()
            self.table.append_column(col)
            return create_column(col.name,self.table)
        def drop_func(col):
            ret = drop_column(col.name,self.table)
            if self.engine.has_table(reftable.name):
                reftable.drop()
            return ret
        return self.run_(add_func,drop_func,Integer,ForeignKey('tmp_ref.id'))

    #@fixture.usedb()
    #def xtest_pk(self):
    #    """Can create/drop primary key columns
    #    Not supported
    #    """
    #    def add_func(col):
    #        create_column(col,self.table)
    #    def drop_func(col):
    #        drop_column(col,self.table)
    #    # Primary key length is checked in run_
    #    return self.run_(add_func,drop_func,Integer,primary_key=True)

class TestRename(fixture.DB):
    level=fixture.DB.CONNECT
    meta = MetaData()

    def setUp(self):
        self.meta.connect(self.engine)

    @fixture.usedb()
    def test_rename_table(self):
        """Tables can be renamed"""
        #self.engine.echo=True
        name1 = 'name_one'
        name2 = 'name_two'
        xname1 = 'x'+name1
        xname2 = 'x'+name2
        self.column = Column(name1,Integer)
        self.meta.clear()
        self.table = Table(name1,self.meta,self.column)
        self.index = Index(xname1,self.column,unique=False)
        if self.engine.has_table(self.table.name):
            self.table.drop()
        if self.engine.has_table(name2):
            tmp = Table(name2,self.meta,autoload=True)
            tmp.drop()
            tmp.deregister()
            del tmp
        self.table.create()

        def assert_table_name(expected,skip_object_check=False):
            """Refresh a table via autoload
            SA has changed some since this test was written; we now need to do
            meta.clear() upon reloading a table - clear all rather than a
            select few. So, this works only if we're working with one table at
            a time (else, others will vanish too).
            """
            if not skip_object_check:
                # Table object check
                self.assertEquals(self.table.name,expected)
                newname = self.table.name
            else:
                # we know the object's name isn't consistent: just assign it
                newname = expected
            # Table DB check
            #table = self.refresh_table(self.table,newname)
            self.meta.clear()
            self.table = Table(newname, self.meta, autoload=True)
            self.assertEquals(self.table.name,expected)
        def assert_index_name(expected,skip_object_check=False):
            if not skip_object_check:
                # Index object check
                self.assertEquals(self.index.name,expected)
            else:
                # object is inconsistent
                self.index.name = expected
            # Index DB check
            #TODO

        try:
            # Table renames
            assert_table_name(name1)
            rename_table(self.table,name2)
            assert_table_name(name2)
            self.table.rename(name1)
            assert_table_name(name1)
            # ..by just the string
            rename_table(name1,name2,engine=self.engine)
            assert_table_name(name2,True)   # object not updated

            # Index renames
            if self.url.startswith('sqlite') or self.url.startswith('mysql'):
                self.assertRaises(changeset.exceptions.NotSupportedError,
                    self.index.rename,xname2)
            else:
                assert_index_name(xname1)
                rename_index(self.index,xname2,engine=self.engine)
                assert_index_name(xname2)
                self.index.rename(xname1)
                assert_index_name(xname1)
                # ..by just the string
                rename_index(xname1,xname2,engine=self.engine)
                assert_index_name(xname2,True)

        finally:
            #self.index.drop()
            if self.table.exists():
                self.table.drop()

class TestColumnChange(fixture.DB):
    level=fixture.DB.CONNECT
    table_name = 'tmp_colchange'

    def setUp(self):
        fixture.DB.setUp(self)
        self.meta = MetaData(self.engine)
        self.table = Table(self.table_name,self.meta,
            Column('id',Integer,primary_key=True),
            Column('data',String(40),PassiveDefault("tluafed"),nullable=True),
        )
        if self.table.exists():
            self.table.drop()
        try:
            self.table.create()
        except sqlalchemy.exceptions.SQLError,e:
            # SQLite: database schema has changed
            if not self.url.startswith('sqlite://'):
                raise
        #self.engine.echo=True
    def tearDown(self):
        #self.engine.echo=False
        if self.table:
            try:
                self.table.drop()
            except sqlalchemy.exceptions.SQLError,e:
                # SQLite: database schema has changed
                if not self.url.startswith('sqlite://'):
                    raise
        fixture.DB.tearDown(self)

    @fixture.usedb(supported='sqlite')
    def test_sqlite_not_supported(self):
        self.assertRaises(changeset.exceptions.NotSupportedError,
            self.table.c.data.alter,default=PassiveDefault('tluafed'))
        self.assertRaises(changeset.exceptions.NotSupportedError,
            self.table.c.data.alter,nullable=True)
        self.assertRaises(changeset.exceptions.NotSupportedError,
            self.table.c.data.alter,type=String(21))
        self.assertRaises(changeset.exceptions.NotSupportedError,
            self.table.c.data.alter,name='atad')

    @fixture.usedb(not_supported='sqlite')
    def test_rename(self):
        """Can rename a column"""
        def num_rows(col,content):
            return len(list(self.table.select(col==content).execute()))
        # Table content should be preserved in changed columns
        content = "fgsfds"
        self.engine.execute(self.table.insert(),data=content,id=42)
        self.assertEquals(num_rows(self.table.c.data,content),1)

        # ...as a function, given a column object and the new name
        alter_column(self.table.c.data, name='atad')
        self.refresh_table(self.table.name)
        self.assert_('data' not in self.table.c.keys())
        self.assert_('atad' in self.table.c.keys())
        #self.assertRaises(AttributeError,getattr,self.table.c,'data')
        self.table.c.atad   # Should not raise exception
        self.assertEquals(num_rows(self.table.c.atad,content),1)

        # ...as a method, given a new name
        self.table.c.atad.alter(name='data')
        self.refresh_table(self.table.name)
        self.assert_('atad' not in self.table.c.keys())
        self.table.c.data   # Should not raise exception
        self.assertEquals(num_rows(self.table.c.data,content),1)

        # ...as a function, given a new object
        col = Column('atad',String(40),default=self.table.c.data.default)
        alter_column(self.table.c.data, col)
        self.refresh_table(self.table.name)
        self.assert_('data' not in self.table.c.keys())
        self.table.c.atad   # Should not raise exception
        self.assertEquals(num_rows(self.table.c.atad,content),1)

        # ...as a method, given a new object
        col = Column('data',String(40),default=self.table.c.atad.default)
        self.table.c.atad.alter(col)
        self.refresh_table(self.table.name)
        self.assert_('atad' not in self.table.c.keys())
        self.table.c.data   # Should not raise exception
        self.assertEquals(num_rows(self.table.c.data,content),1)

    @fixture.usedb(not_supported='sqlite')
    def xtest_fk(self):
        """Can add/drop foreign key constraints to/from a column
        Not supported
        """
        self.assert_(self.table.c.data.foreign_key is None)

        # add
        self.table.c.data.alter(foreign_key=ForeignKey(self.table.c.id))
        self.refresh_table(self.table.name)
        self.assert_(self.table.c.data.foreign_key is not None)

        # drop
        self.table.c.data.alter(foreign_key=None)
        self.refresh_table(self.table.name)
        self.assert_(self.table.c.data.foreign_key is None)

    @fixture.usedb(not_supported='sqlite')
    def test_type(self):
        """Can change a column's type"""
        # Entire column definition given
        self.table.c.data.alter(Column('data',String(42)))
        self.refresh_table(self.table.name)
        self.assert_(isinstance(self.table.c.data.type,String))
        self.assertEquals(self.table.c.data.type.length,42)

        # Just the new type
        self.table.c.data.alter(type=String(21))
        self.refresh_table(self.table.name)
        self.assert_(isinstance(self.table.c.data.type,String))
        self.assertEquals(self.table.c.data.type.length,21)

        # Different type
        self.assert_(isinstance(self.table.c.id.type,Integer))
        self.assertEquals(self.table.c.id.nullable,False)
        self.table.c.id.alter(type=String(20))
        self.assertEquals(self.table.c.id.nullable,False)
        self.refresh_table(self.table.name)
        self.assert_(isinstance(self.table.c.id.type,String))

    @fixture.usedb(not_supported='sqlite')
    def test_default(self):
        """Can change a column's default value (PassiveDefaults only)
        Only PassiveDefaults are changed here: others are managed by the
        application / by SA
        """
        #self.engine.echo=True
        self.assertEquals(self.table.c.data.default.arg,'tluafed')

        # Just the new default
        default = 'my_default'
        self.table.c.data.alter(default=PassiveDefault(default))
        self.refresh_table(self.table.name)
        #self.assertEquals(self.table.c.data.default.arg,default)
        # TextClause returned by autoload
        self.assert_(default in str(self.table.c.data.default.arg))

        # Column object
        default = 'your_default'
        self.table.c.data.alter(Column('data',String(40),default=PassiveDefault(default)))
        self.refresh_table(self.table.name)
        self.assert_(default in str(self.table.c.data.default.arg))

        # Remove default
        self.table.c.data.alter(default=None)
        self.refresh_table(self.table.name)
        # default isn't necessarily None for Oracle
        #self.assert_(self.table.c.data.default is None,self.table.c.data.default)
        self.engine.execute(self.table.insert(),id=11)
        row = self.table.select().execute().fetchone()
        self.assert_(row['data'] is None,row['data'])


    @fixture.usedb(not_supported='sqlite')
    def test_null(self):
        """Can change a column's null constraint"""
        self.assertEquals(self.table.c.data.nullable,True)

        # Column object
        self.table.c.data.alter(Column('data',String(40),nullable=False))
        self.table.nullable=None
        self.refresh_table(self.table.name)
        self.assertEquals(self.table.c.data.nullable,False)

        # Just the new status
        self.table.c.data.alter(nullable=True)
        self.refresh_table(self.table.name)
        self.assertEquals(self.table.c.data.nullable,True)

    @fixture.usedb(not_supported='sqlite')
    def xtest_pk(self):
        """Can add/drop a column to/from its table's primary key
        Not supported
        """
        self.assertEquals(len(self.table.primary_key),1)

        # Entire column definition
        self.table.c.data.alter(Column('data',String,primary_key=True))
        self.refresh_table(self.table.name)
        self.assertEquals(len(self.table.primary_key),2)

        # Just the new status
        self.table.c.data.alter(primary_key=False)
        self.refresh_table(self.table.name)
        self.assertEquals(len(self.table.primary_key),1)

class TestColumnDelta(fixture.Base):
    def test_deltas(self):
        def mkcol(name='id',type=String,*p,**k):
            return Column(name,type,*p,**k)
        col_orig = mkcol(primary_key=True)

        def verify(expected,original,*p,**k):
            delta = _ColumnDelta(original,*p,**k)
            result = delta.keys()
            result.sort()
            self.assertEquals(expected,result)
            return delta

        verify([],col_orig)
        verify(['name'],col_orig,'ids')
        # Parameters are always executed, even if they're 'unchanged'
        # (We can't assume given column is up-to-date)
        verify(['name','primary_key','type'],col_orig,'id',Integer,primary_key=True)
        verify(['name','primary_key','type'],col_orig,name='id',type=Integer,primary_key=True)

        # Can compare two columns and find differences
        col_new = mkcol(name='ids',primary_key=True)
        verify([],col_orig,col_orig)
        verify(['name'],col_orig,col_orig,'ids')
        verify(['name'],col_orig,col_orig,name='ids')
        verify(['name'],col_orig,col_new)
        verify(['name','type'],col_orig,col_new,type=String)
        # Change name, given an up-to-date definition and the current name
        delta = verify(['name'],col_new,current_name='id')
        self.assertEquals(delta.get('name'),'ids')
        # Change other params at the same time
        verify(['name','type'],col_new,current_name='id',type=String)
        # Type comparisons
        verify([],mkcol(type=String),mkcol(type=String))
        verify(['type'],mkcol(type=String),mkcol(type=Integer))
        verify(['type'],mkcol(type=String),mkcol(type=String(42)))
        verify([],mkcol(type=String(42)),mkcol(type=String(42)))
        verify(['type'],mkcol(type=String(24)),mkcol(type=String(42)))
        # Other comparisons
        verify(['primary_key'],mkcol(nullable=False),mkcol(primary_key=True))
        # PK implies nullable=False
        verify(['nullable','primary_key'],mkcol(nullable=True),mkcol(primary_key=True))
        verify([],mkcol(primary_key=True),mkcol(primary_key=True))
        verify(['nullable'],mkcol(nullable=True),mkcol(nullable=False))
        verify([],mkcol(nullable=True),mkcol(nullable=True))
        verify(['default'],mkcol(default=None),mkcol(default='42'))
        verify([],mkcol(default=None),mkcol(default=None))
        verify([],mkcol(default='42'),mkcol(default='42'))

class TestDriver(fixture.DB):
    @fixture.usedb()
    def test_driver(self):
        self.assertEquals(self.url.split(':',1)[0],driver(self.engine))
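The `TestColumnDelta` cases above exercise migrate's `_ColumnDelta`, which compares a column's old and new definitions and reports only the attributes that changed. A minimal standalone sketch of that diffing idea (hypothetical `column_delta` helper over plain dicts, not migrate's actual class, which compares real `Column` objects and keyword overrides):

```python
def column_delta(old, new):
    """Return {attr: new_value} for column attributes that differ.

    Both arguments are plain dicts describing a column; missing keys
    are treated as None, mirroring unspecified column attributes.
    """
    delta = {}
    for attr in ('name', 'type', 'nullable', 'default', 'primary_key'):
        if old.get(attr) != new.get(attr):
            delta[attr] = new.get(attr)
    return delta

old = {'name': 'id', 'type': 'String', 'nullable': True}
new = {'name': 'ids', 'type': 'String', 'nullable': True}
assert column_delta(old, new) == {'name': 'ids'}   # only the rename is reported
```

The dict of changes is what lets `alter_column` emit a single ALTER statement covering exactly the attributes the caller actually changed.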
134
test/changeset/test_constraint.py
Normal file
@ -0,0 +1,134 @@
from sqlalchemy import *
from sqlalchemy.util import *
from test import fixture
from migrate.changeset import *

class TestConstraint(fixture.DB):
    level=fixture.DB.CONNECT
    def setUp(self):
        fixture.DB.setUp(self)
        self._create_table()
    def tearDown(self):
        if hasattr(self,'table') and self.engine.has_table(self.table.name):
            self.table.drop()
        fixture.DB.tearDown(self)

    def _create_table(self):
        self.meta = MetaData(self.engine)
        self.table = Table('mytable',self.meta,
            Column('id',Integer),
            Column('fkey',Integer),
            mysql_engine='InnoDB'
        )
        if self.engine.has_table(self.table.name):
            self.table.drop()
        self.table.create()
        #self.assertEquals(self.table.primary_key,[])
        self.assertEquals(len(self.table.primary_key),0)
        self.assert_(isinstance(self.table.primary_key,
            schema.PrimaryKeyConstraint),self.table.primary_key.__class__)
    def _define_pk(self,*cols):
        # Add a pk by creating a PK constraint
        pk = PrimaryKeyConstraint(table=self.table, *cols)
        self.assertEquals(list(pk.columns),list(cols))
        if self.url.startswith('oracle'):
            # Can't drop Oracle PKs without an explicit name
            pk.name = 'fgsfds'
        pk.create()
        self.refresh_table()
        self.assertEquals(list(self.table.primary_key),list(cols))
        #self.assert_(self.table.primary_key.name is not None)

        # Drop the PK constraint
        if not self.url.startswith('oracle'):
            # Apparently Oracle PK names aren't introspected
            pk.name = self.table.primary_key.name
        pk.drop()
        self.refresh_table()
        #self.assertEquals(list(self.table.primary_key),list())
        self.assertEquals(len(self.table.primary_key),0)
        self.assert_(isinstance(self.table.primary_key,
            schema.PrimaryKeyConstraint),self.table.primary_key.__class__)
        return pk

    @fixture.usedb(not_supported='sqlite')
    def test_define_fk(self):
        """FK constraints can be defined, created, and dropped"""
        # FK target must be unique
        pk = PrimaryKeyConstraint(self.table.c.id, table=self.table)
        pk.create()
        # Add a FK by creating a FK constraint
        self.assertEquals(self.table.c.fkey.foreign_keys._list, [])
        fk = ForeignKeyConstraint([self.table.c.fkey],[self.table.c.id], table=self.table)
        self.assert_(self.table.c.fkey.foreign_keys._list is not [])
        self.assertEquals(list(fk.columns), [self.table.c.fkey])
        self.assertEquals([e.column for e in fk.elements],[self.table.c.id])
        self.assertEquals(list(fk.referenced),[self.table.c.id])

        if self.url.startswith('mysql'):
            # MySQL FKs need an index
            index = Index('index_name',self.table.c.fkey)
            index.create()
        if self.url.startswith('oracle'):
            # Oracle constraints need a name
            fk.name = 'fgsfds'
        print 'drop...'
        self.engine.echo=True
        fk.create()
        self.engine.echo=False
        print 'dropped'
        self.refresh_table()
        self.assert_(self.table.c.fkey.foreign_keys._list is not [])

        print 'drop...'
        self.engine.echo=True
        fk.drop()
        self.engine.echo=False
        print 'dropped'
        self.refresh_table()
        self.assertEquals(self.table.c.fkey.foreign_keys._list, [])

    @fixture.usedb()
    def test_define_pk(self):
        """PK constraints can be defined, created, and dropped"""
        self._define_pk(self.table.c.id)

    @fixture.usedb()
    def test_define_pk_multi(self):
        """Multicolumn PK constraints can be defined, created, and dropped"""
        self.engine.echo=True
        self._define_pk(self.table.c.id,self.table.c.fkey)


class TestAutoname(fixture.DB):
    level=fixture.DB.CONNECT

    def setUp(self):
        fixture.DB.setUp(self)
        self.meta = MetaData(self.engine)
        self.table = Table('mytable',self.meta,
            Column('id',Integer),
            Column('fkey',String(40)),
        )
        if self.engine.has_table(self.table.name):
            self.table.drop()
        self.table.create()
    def tearDown(self):
        if hasattr(self,'table') and self.engine.has_table(self.table.name):
            self.table.drop()
        fixture.DB.tearDown(self)

    @fixture.usedb(not_supported='oracle')
    def test_autoname(self):
        """Constraints can guess their name if none is given"""
        # Don't supply a name; it should create one
        cons = PrimaryKeyConstraint(self.table.c.id)
        cons.create()
        self.refresh_table()
        self.assertEquals(list(cons.columns),list(self.table.primary_key))

        # Remove the name, drop the constraint; it should succeed
        cons.name = None
        cons.drop()
        self.refresh_table()
        self.assertEquals(list(),list(self.table.primary_key))
62
test/fixture/__init__.py
Normal file
@ -0,0 +1,62 @@
|
||||
import unittest
import sys

## Append test method name,etc. to descriptions automatically.
## Yes, this is ugly, but it's the simplest way...
#def getDescription(self,test):
#    ret = str(test)
#    if self.descriptions:
#        ret += "\n\t"+(test.shortDescription() or '')
#    return ret
#unittest._TextTestResult.getDescription = getDescription

class Result(unittest._TextTestResult):
    # test description may be changed as we go; store the description at
    # exception-time and print later
    def __init__(self,*p,**k):
        super(Result,self).__init__(*p,**k)
        self.desc=dict()

    def _addError(self,test,err,errs):
        test,err=errs.pop()
        errdata=(test,err,self.getDescription(test))
        errs.append(errdata)

    def addFailure(self,test,err):
        super(Result,self).addFailure(test,err)
        self._addError(test,err,self.failures)

    def addError(self,test,err):
        super(Result,self).addError(test,err)
        self._addError(test,err,self.errors)

    def printErrorList(self, flavour, errors):
        # Copied from unittest.py
        #for test, err in errors:
        for errdata in errors:
            test,err,desc=errdata
            self.stream.writeln(self.separator1)
            #self.stream.writeln("%s: %s" % (flavour,self.getDescription(test)))
            self.stream.writeln("%s: %s" % (flavour,desc or self.getDescription(test)))
            self.stream.writeln(self.separator2)
            self.stream.writeln("%s" % err)

class Runner(unittest.TextTestRunner):
    def _makeResult(self):
        return Result(self.stream,self.descriptions,self.verbosity)

def suite(imports):
    return unittest.TestLoader().loadTestsFromNames(imports)

def main(imports=None):
    if imports:
        global suite
        suite = suite(imports)
        defaultTest='fixture.suite'
    else:
        defaultTest=None
    return unittest.TestProgram(defaultTest=defaultTest,\
        testRunner=Runner(verbosity=1))

from base import Base
from pathed import Pathed
from shell import Shell
from database import DB,usedb
35
test/fixture/base.py
Normal file
@ -0,0 +1,35 @@
#import unittest
from py.test import raises

class FakeTestCase(object):
    """Mimics unittest.TestCase methods

    Minimize changes needed in migration to py.test
    """
    def setUp(self):
        pass
    def setup_method(self,func=None):
        self.setUp()

    def tearDown(self):
        pass
    def teardown_method(self,func=None):
        self.tearDown()

    def assert_(self,x,doc=None):
        assert x
    def assertEquals(self,x,y,doc=None):
        assert x == y
    def assertNotEquals(self,x,y,doc=None):
        assert x != y
    def assertRaises(self,error,func,*p,**k):
        assert raises(error,func,*p,**k)

class Base(FakeTestCase):
    """Base class for other test cases"""
    def ignoreErrors(self,*p,**k):
        """Call a function, ignoring any exceptions"""
        func=p[0]
        try:
            func(*p[1:],**k)
        except:
            pass
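The adapter above maps unittest-style method names onto bare `assert` statements so the same test classes work under py.test. A self-contained Python 3 sketch of the pattern, not part of the original commit:

```python
class FakeTestCase:
    """unittest-style method names implemented with bare asserts."""
    def setUp(self):
        pass
    def setup_method(self, method=None):
        # py.test calls setup_method(); delegate to the unittest-style hook
        self.setUp()
    def tearDown(self):
        pass
    def teardown_method(self, method=None):
        self.tearDown()

    def assert_(self, x, msg=None):
        assert x, msg
    def assertEquals(self, x, y, msg=None):
        assert x == y, msg
    def assertNotEquals(self, x, y, msg=None):
        assert x != y, msg
    def assertRaises(self, error, func, *args, **kwargs):
        # A plain-Python stand-in for py.test's raises() helper
        try:
            func(*args, **kwargs)
        except error:
            return
        raise AssertionError("%s not raised" % error.__name__)

case = FakeTestCase()
case.assertEquals(2 + 2, 4)
case.assertRaises(ZeroDivisionError, lambda: 1 / 0)
```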
156
test/fixture/database.py
Normal file
@ -0,0 +1,156 @@
from base import Base
from pathed import Pathed
from sqlalchemy import create_engine,Table
from sqlalchemy.orm import create_session
from pkg_resources import resource_stream
import os

def readurls():
    filename='test_db.cfg'
    fullpath = os.path.join(os.curdir,filename)
    ret=[]
    tmpfile=Pathed.tmp()
    try:
        fd=open(fullpath)
    except IOError:
        print "You must specify the databases to use for testing!"
        tmplfile = "%s.tmpl"%filename
        print "Copy %s to %s and edit your database URLs."%(tmplfile,filename)
        raise
    #fd = resource_stream('__main__',filename)
    for line in fd:
        if line.startswith('#'):
            continue
        line=line.replace('__tmp__',tmpfile).strip()
        ret.append(line)
    fd.close()
    return ret

def is_supported(url,supported,not_supported):
    db = url.split(':',1)[0]
    if supported is not None:
        if isinstance(supported,basestring):
            supported = (supported,)
        ret = db in supported
    elif not_supported is not None:
        if isinstance(not_supported,basestring):
            not_supported = (not_supported,)
        ret = not (db in not_supported)
    else:
        ret = True
    return ret

def usedb(supported=None,not_supported=None):
    """Decorates tests to be run with a database connection

    These tests are run once for each available database

    @param supported: run tests for ONLY these databases
    @param not_supported: run tests for all databases EXCEPT these

    If both supported and not_supported are empty, all dbs are assumed
    to be supported
    """
    if supported is not None and not_supported is not None:
        msg = "Can't specify both supported and not_supported in fixture.usedb()"
        assert False, msg

    urls = DB.urls
    urls = [url for url in urls if is_supported(url,supported,not_supported)]
    def entangle(func):
        def run(self,*p,**k):
            for url in urls:
                # Bind url as a default argument: py.test may call run_one
                # after the loop has moved on
                def run_one(url=url):
                    self._connect(url)
                    self.setup_method(func)
                    try:
                        func(self,*p,**k)
                    finally:
                        self.teardown_method(func)
                        self._disconnect()
                yield run_one
        return run
    return entangle

class DB(Base):
    # Constants: connection level
    NONE=0      # No connection; just set self.url
    CONNECT=1   # Connect; no transaction
    TXN=2       # Everything in a transaction

    level=TXN
    urls=readurls()
    # url: engine
    engines=dict([(url,create_engine(url)) for url in urls])

    def shortDescription(self,*p,**k):
        """List database connection info with description of the test"""
        ret = super(DB,self).shortDescription(*p,**k) or str(self)
        engine = self._engineInfo()
        if engine is not None:
            ret = "(%s) %s"%(engine,ret)
        return ret

    def _engineInfo(self,url=None):
        if url is None:
            url=self.url
        return url

    def _connect(self,url):
        self.url = url
        self.engine = self.engines[url]
        if self.level < self.CONNECT:
            return
        #self.conn = self.engine.connect()
        self.session = create_session(bind=self.engine)
        if self.level < self.TXN:
            return
        self.txn = self.session.create_transaction()
        #self.txn.add(self.engine)

    def _disconnect(self):
        if hasattr(self,'txn'):
            self.txn.rollback()
        if hasattr(self,'session'):
            self.session.close()
        #if hasattr(self,'conn'):
        #    self.conn.close()

    def run(self,*p,**k):
        """Run one test for each connection string"""
        for url in self.urls:
            self._run_one(url,*p,**k)

    def _supported(self,url):
        db = url.split(':',1)[0]
        func = getattr(self,self._TestCase__testMethodName)
        if hasattr(func,'supported'):
            return db in func.supported
        if hasattr(func,'not_supported'):
            return not (db in func.not_supported)
        # Neither list assigned; assume all are supported
        return True

    def _not_supported(self,url):
        return not self._supported(url)

    def _run_one(self,url,*p,**k):
        if self._not_supported(url):
            return
        self._connect(url)
        try:
            super(DB,self).run(*p,**k)
        finally:
            self._disconnect()

    def refresh_table(self,name=None):
        """Reload the table from the database

        Assumes we're working with only a single table, self.table, and
        metadata self.meta

        Working w/ multiple tables is not possible, as tables can only be
        reloaded with meta.clear()
        """
        if name is None:
            name = self.table.name
        self.meta.clear()
        self.table = Table(name,self.meta,autoload=True)
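The usedb decorator above runs one copy of each test per configured database URL, wrapping each run in connect/setup/teardown/disconnect. A stripped-down Python 3 sketch of that control flow, not part of the original commit (the URLs and the Case class are invented for illustration):

```python
# Example URL list standing in for the test_db.cfg contents
URLS = ["sqlite:///:memory:", "sqlite:///other.db"]

def usedb(urls=URLS):
    """Run the decorated test body once per database URL."""
    def entangle(func):
        def run(self):
            for url in urls:
                self._connect(url)
                try:
                    func(self)
                finally:
                    self._disconnect()
        return run
    return entangle

class Case:
    log = []  # record the connect/test/disconnect sequence

    def _connect(self, url):
        self.url = url
        Case.log.append(("connect", url))

    def _disconnect(self):
        Case.log.append(("disconnect", self.url))

    @usedb()
    def test_something(self):
        Case.log.append(("test", self.url))

Case().test_something()
```

Unlike the original, this sketch runs the per-URL bodies eagerly instead of yielding them to py.test's generative-test collector.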
59
test/fixture/pathed.py
Normal file
@ -0,0 +1,59 @@
import os,shutil,tempfile
import base

class Pathed(base.Base):
    # Temporary files
    #repos='/tmp/test_repos_091x10'
    #config=repos+'/migrate.cfg'
    #script='/tmp/test_migration_script.py'

    _tmpdir=tempfile.mkdtemp()

    @classmethod
    def _tmp(cls,prefix='',suffix=''):
        """Generate a temporary file name that doesn't exist

        All filenames are generated inside a temporary directory created by
        tempfile.mkdtemp(); only the creating user has access to this directory.
        It should be secure to return a nonexistent temp filename in this
        directory, unless the user is messing with their own files.
        """
        file,ret = tempfile.mkstemp(suffix,prefix,cls._tmpdir)
        os.close(file)
        os.remove(ret)
        return ret

    @classmethod
    def tmp(cls,*p,**k):
        return cls._tmp(*p,**k)

    @classmethod
    def tmp_py(cls,*p,**k):
        return cls._tmp(suffix='.py',*p,**k)

    @classmethod
    def tmp_sql(cls,*p,**k):
        return cls._tmp(suffix='.sql',*p,**k)

    @classmethod
    def tmp_named(cls,name):
        return os.path.join(cls._tmpdir,name)

    @classmethod
    def tmp_repos(cls,*p,**k):
        return cls._tmp(*p,**k)

    @classmethod
    def purge(cls,path):
        """Removes this path if it exists, in preparation for tests

        Careful - all tests should take place in /tmp.
        We don't want to accidentally wipe stuff out...
        """
        if os.path.exists(path):
            if os.path.isdir(path):
                shutil.rmtree(path)
            else:
                os.remove(path)
                if path.endswith('.py'):
                    pyc = path+'c'
                    if os.path.exists(pyc):
                        os.remove(pyc)
37
test/fixture/shell.py
Normal file
@ -0,0 +1,37 @@
from pathed import *
import os
import shutil
import sys

class Shell(Pathed):
    """Base class for command line tests"""
    def execute(self,command,*p,**k):
        """Return the fd of a command; can get output (stdout/err) and exitcode"""
        # We might be passed a file descriptor for some reason; if so, just return it
        if type(command) is file:
            return command
        # Redirect stderr to stdout
        # This is a bit of a hack, but I've not found a better way
        fd=os.popen(command+' 2>&1',*p,**k)
        return fd

    def output_and_exitcode(self,*p,**k):
        # Pop 'emit' before passing the remaining kwargs to execute()
        emit = k.pop('emit',False)
        fd=self.execute(*p,**k)
        output = fd.read()
        exitcode = fd.close()
        if emit:
            print output
        return (output,exitcode)

    def exitcode(self,*p,**k):
        """Execute a command and return its exit code

        ...without printing its output/errors
        """
        ret = self.output_and_exitcode(*p,**k)
        return ret[1]

    def assertFailure(self,*p,**k):
        output,exitcode = self.output_and_exitcode(*p,**k)
        assert (exitcode), output

    def assertSuccess(self,*p,**k):
        output,exitcode = self.output_and_exitcode(*p,**k)
        #self.assert_(not exitcode, output)
        assert (not exitcode), output
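Shell.output_and_exitcode above relies on os.popen, where closing the pipe yields the exit status. The modern equivalent uses subprocess with stderr folded into stdout; a Python 3 sketch, not part of the original commit:

```python
import subprocess

def output_and_exitcode(command):
    """Run a shell command, returning (combined output, exit code)."""
    proc = subprocess.run(command, shell=True,
                          stdout=subprocess.PIPE,
                          stderr=subprocess.STDOUT)  # 2>&1 equivalent
    return proc.stdout.decode(), proc.returncode

out, code = output_and_exitcode("echo hello")
```

subprocess.run reports the exit code directly in `returncode`, avoiding os.popen's quirk of encoding the status in the return value of close().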
0
test/integrated/__init__.py
Normal file
16
test/integrated/test_docs.py
Normal file
@ -0,0 +1,16 @@
from test import fixture
import doctest
import os

# Collect tests for all handwritten docs: docs/*.rst

dir = ('..','..','docs')
absdir = (os.path.dirname(os.path.abspath(__file__)),)+dir
dirpath = os.path.join(*absdir)
files = [f for f in os.listdir(dirpath) if f.endswith('.rst')]
paths = [os.path.join(*(dir+(f,))) for f in files]
assert len(paths) > 0
suite = doctest.DocFileSuite(*paths)

def test_docs():
    suite.debug()
0
test/versioning/__init__.py
Normal file
21
test/versioning/test_cfgparse.py
Normal file
@ -0,0 +1,21 @@
from test import fixture
from migrate.versioning import cfgparse
from migrate.versioning.repository import *

class TestConfigParser(fixture.Base):
    def test_to_dict(self):
        """Correctly interpret config results as dictionaries"""
        parser = cfgparse.Parser(dict(default_value=42))
        self.assert_(len(parser.sections())==0)
        parser.add_section('section')
        parser.set('section','option','value')
        self.assert_(parser.get('section','option')=='value')
        self.assert_(parser.to_dict()['section']['option']=='value')

    def test_table_config(self):
        """We should be able to specify the table to be used with a repository"""
        default_text=Repository.prepare_config(template.get_repository(as_pkg=True,as_str=True),
            Repository._config,'repository_name')
        specified_text=Repository.prepare_config(template.get_repository(as_pkg=True,as_str=True),
            Repository._config,'repository_name',version_table='_other_table')
        self.assertNotEquals(default_text,specified_text)
11
test/versioning/test_database.py
Normal file
@ -0,0 +1,11 @@
from sqlalchemy import *
from test import fixture

class TestConnect(fixture.DB):
    level=fixture.DB.TXN

    @fixture.usedb()
    def test_connect(self):
        """Connect to the database successfully"""
        # Connection is done in fixture.DB setup; make sure we can do stuff
        select(['42'],bind=self.engine).execute()
40
test/versioning/test_keyedinstance.py
Normal file
@ -0,0 +1,40 @@
from test import fixture
from migrate.versioning.util.keyedinstance import *

class TestKeyedInstance(fixture.Base):
    def test_unique(self):
        """UniqueInstance should produce unique object instances"""
        class Uniq1(KeyedInstance):
            @classmethod
            def _key(cls,key):
                return str(key)
            def __init__(self,value):
                self.value=value
        class Uniq2(KeyedInstance):
            @classmethod
            def _key(cls,key):
                return str(key)
            def __init__(self,value):
                self.value=value

        a10 = Uniq1('a')

        # Different key: different instance
        b10 = Uniq1('b')
        self.assert_(a10 is not b10)

        # Different class: different instance
        a20 = Uniq2('a')
        self.assert_(a10 is not a20)

        # Same key/class: same instance
        a11 = Uniq1('a')
        self.assert_(a10 is a11)

        # __init__ is called
        self.assertEquals(a10.value,'a')

        # clear() causes us to forget all existing instances
        Uniq1.clear()
        a12 = Uniq1('a')
        self.assert_(a10 is not a12)
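The test above exercises a one-instance-per-key cache. A hypothetical Python 3 re-implementation of that pattern, not the actual migrate.versioning.util.keyedinstance code, showing how `__new__` and a per-subclass cache produce the behavior the assertions check:

```python
class KeyedInstance:
    """One instance per (class, key); clear() forgets the cache."""
    @classmethod
    def _key(cls, *args):
        return str(args)

    def __new__(cls, *args):
        # Give each subclass its own cache, not one shared with the base
        if '_instances' not in cls.__dict__:
            cls._instances = {}
        key = cls._key(*args)
        if key not in cls._instances:
            cls._instances[key] = super().__new__(cls)
        return cls._instances[key]

    @classmethod
    def clear(cls):
        cls._instances = {}

class Uniq(KeyedInstance):
    def __init__(self, value):
        self.value = value

a1 = Uniq('a')
a2 = Uniq('a')
same_before_clear = a1 is a2   # same key -> same cached instance
Uniq.clear()
a3 = Uniq('a')                 # cache cleared -> a fresh instance
```

Note that `__init__` still runs on every call, even for a cached instance; the original test relies on exactly that.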
51
test/versioning/test_pathed.py
Normal file
@ -0,0 +1,51 @@
from test import fixture
from migrate.versioning.pathed import *

class TestPathed(fixture.Base):
    def test_parent_path(self):
        """Default parent_path should behave correctly"""
        filepath='/fgsfds/moot.py'
        dirpath='/fgsfds/moot'
        sdirpath='/fgsfds/moot/'

        result='/fgsfds'
        self.assert_(result==Pathed._parent_path(filepath))
        self.assert_(result==Pathed._parent_path(dirpath))
        self.assert_(result==Pathed._parent_path(sdirpath))

    def test_new(self):
        """Pathed(path) shouldn't create duplicate objects of the same path"""
        path='/fgsfds'
        class Test(Pathed):
            attr=None
        o1=Test(path)
        o2=Test(path)
        self.assert_(isinstance(o1,Test))
        self.assert_(o1.path==path)
        self.assert_(o1 is o2)
        o1.attr='herring'
        self.assert_(o2.attr=='herring')
        o2.attr='shrubbery'
        self.assert_(o1.attr=='shrubbery')

    def test_parent(self):
        """Parents should be fetched correctly"""
        class Parent(Pathed):
            parent=None
            children=0
            def _init_child(self,child,path):
                """Keep a tally of children.

                (A real class might do something more interesting here)
                """
                self.__class__.children+=1

        class Child(Pathed):
            parent=Parent

        path='/fgsfds/moot.py'
        parent_path='/fgsfds'
        object=Child(path)
        self.assert_(isinstance(object,Child))
        self.assert_(isinstance(object.parent,Parent))
        self.assert_(object.path==path)
        self.assert_(object.parent.path==parent_path)
204
test/versioning/test_repository.py
Normal file
@ -0,0 +1,204 @@
from test import fixture
from migrate.versioning.repository import *
from migrate.versioning import exceptions
import os,shutil

class TestRepository(fixture.Pathed):
    def test_create(self):
        """Repositories are created successfully"""
        path=self.tmp_repos()
        name='repository_name'
        # Creating a repository that doesn't exist should succeed
        repos=Repository.create(path,name)
        config_path=repos.config.path
        manage_path=os.path.join(repos.path,'manage.py')
        self.assert_(repos)
        # Files should actually be created
        self.assert_(os.path.exists(path))
        self.assert_(os.path.exists(config_path))
        self.assert_(os.path.exists(manage_path))
        # Can't create it again: it already exists
        self.assertRaises(exceptions.PathFoundError,Repository.create,path,name)
        return path

    def test_load(self):
        """We should be able to load information about an existing repository"""
        # Create a repository to load
        path=self.test_create()
        repos=Repository(path)
        self.assert_(repos)
        self.assert_(repos.config)
        self.assert_(repos.config.get('db_settings','version_table'))
        # version_table's default isn't None
        self.assertNotEquals(repos.config.get('db_settings','version_table'),'None')

    def test_load_notfound(self):
        """Nonexistent repositories shouldn't be loaded"""
        path=self.tmp_repos()
        self.assert_(not os.path.exists(path))
        self.assertRaises(exceptions.InvalidRepositoryError,Repository,path)

    def test_load_invalid(self):
        """Invalid repos shouldn't be loaded"""
        # Here, invalid=empty directory. There may be other conditions too,
        # but we shouldn't need to test all of them
        path=self.tmp_repos()
        os.mkdir(path)
        self.assertRaises(exceptions.InvalidRepositoryError,Repository,path)


class TestVersionedRepository(fixture.Pathed):
    """Tests on an existing repository with a single python script"""
    script_cls = script.PythonScript

    def setUp(self):
        Repository.clear()
        self.path_repos=self.tmp_repos()
        self.path_script=self.tmp_py()
        # Create repository, script
        Repository.create(self.path_repos,'repository_name')

    def test_commit(self):
        """Commit scripts to a repository and detect repository version"""
        # Load repository; commit script by pathname; script should go away
        self.script_cls.create(self.path_script)
        repos=Repository(self.path_repos)
        self.assert_(os.path.exists(self.path_script))
        repos.commit(self.path_script)
        self.assert_(not os.path.exists(self.path_script))
        # .pyc file from the committed script shouldn't exist either
        self.assert_(not os.path.exists(self.path_script+'c'))

    def test_version(self):
        """We should correctly detect the version of a repository"""
        self.script_cls.create(self.path_script)
        repos=Repository(self.path_repos)
        # Get latest version, or detect if a specified version exists
        self.assertEquals(repos.latest,0)
        # repos.latest isn't an integer, but a VerNum
        # (so we can't just assume the following tests are correct)
        self.assert_(repos.latest>=0)
        self.assert_(repos.latest<1)
        # Commit a script and test again
        repos.commit(self.path_script)
        self.assertEquals(repos.latest,1)
        self.assert_(repos.latest>=0)
        self.assert_(repos.latest>=1)
        self.assert_(repos.latest<2)
        # Commit a new script and test again
        self.script_cls.create(self.path_script)
        repos.commit(self.path_script)
        self.assertEquals(repos.latest,2)
        self.assert_(repos.latest>=0)
        self.assert_(repos.latest>=1)
        self.assert_(repos.latest>=2)
        self.assert_(repos.latest<3)

    def test_source(self):
        """Get a script object by version number and view its source"""
        self.script_cls.create(self.path_script)
        # Load repository and commit script
        repos=Repository(self.path_repos)
        repos.commit(self.path_script)
        # Get script object
        source=repos.version(1).script().source()
        # Source is valid: script must have an upgrade function
        # (not a very thorough test, but should be plenty)
        self.assert_(source.find('def upgrade')>=0)

    def test_latestversion(self):
        """Repository.version() (no params) returns the latest version"""
        self.script_cls.create(self.path_script)
        repos=Repository(self.path_repos)
        repos.commit(self.path_script)
        self.assert_(repos.version(repos.latest) is repos.version())
        self.assert_(repos.version() is not None)

    def xtest_commit_fail(self):
        """Failed commits shouldn't corrupt the repository

        Test disabled - logsql ran the script on commit; now that that's gone,
        the content of the script is not checked before commit
        """
        repos=Repository(self.path_repos)
        path_script=self.tmp_py()
        text_script = """
        from sqlalchemy import *
        from migrate import *

        # Upgrade is not declared; commit should fail
        #def upgrade():
        #    raise Exception()

        def downgrade():
            raise Exception()
        """.replace("\n        ","\n")
        fd=open(path_script,'w')
        fd.write(text_script)
        fd.close()

        # Record current state, and commit
        ver_pre = os.listdir(repos.versions.path)
        repos_pre = os.listdir(repos.path)
        self.assertRaises(Exception,repos.commit,path_script)
        # Version is unchanged
        self.assertEquals(repos.latest,0)
        # No new files created; committed script not moved
        self.assert_(os.path.exists(path_script))
        self.assertEquals(os.listdir(repos.versions.path),ver_pre)
        self.assertEquals(os.listdir(repos.path),repos_pre)

    def test_changeset(self):
        """Repositories can create changesets properly"""
        # Create a nonzero-version repository of empty scripts
        repos=Repository(self.path_repos)
        for i in range(10):
            self.script_cls.create(self.path_script)
            repos.commit(self.path_script)

        def check_changeset(params,length):
            """Creates and verifies a changeset"""
            changeset = repos.changeset('postgres',*params)
            self.assertEquals(len(changeset),length)
            self.assert_(isinstance(changeset,Changeset))
            uniq = list()
            # Changesets are iterable
            for version,change in changeset:
                self.assert_(isinstance(change,script.BaseScript))
                # Changes aren't identical
                self.assert_(id(change) not in uniq)
                uniq.append(id(change))
            return changeset

        # Upgrade to a specified version...
        cs=check_changeset((0,10),10)
        self.assertEquals(cs.keys().pop(0),0)   # 0 -> 1: index is starting version
        self.assertEquals(cs.keys().pop(),9)    # 9 -> 10: index is starting version
        self.assertEquals(cs.start,0)           # starting version
        self.assertEquals(cs.end,10)            # ending version
        check_changeset((0,1),1)
        check_changeset((0,5),5)
        check_changeset((0,0),0)
        check_changeset((5,5),0)
        check_changeset((10,10),0)
        check_changeset((5,10),5)
        # Can't request a changeset of higher version than this repository
        self.assertRaises(Exception,repos.changeset,'postgres',5,11)
        self.assertRaises(Exception,repos.changeset,'postgres',-1,5)

        # Upgrade to the latest version...
        cs=check_changeset((0,),10)
        self.assertEquals(cs.keys().pop(0),0)
        self.assertEquals(cs.keys().pop(),9)
        self.assertEquals(cs.start,0)
        self.assertEquals(cs.end,10)
        check_changeset((1,),9)
        check_changeset((5,),5)
        check_changeset((9,),1)
        check_changeset((10,),0)
        # Can't request a changeset of higher/lower version than this repository
        self.assertRaises(Exception,repos.changeset,'postgres',11)
        self.assertRaises(Exception,repos.changeset,'postgres',-1)

        # Downgrade
        cs=check_changeset((10,0),10)
        self.assertEquals(cs.keys().pop(0),10)  # 10 -> 9
        self.assertEquals(cs.keys().pop(),1)    # 1 -> 0
        self.assertEquals(cs.start,10)
        self.assertEquals(cs.end,0)
        check_changeset((10,5),5)
        check_changeset((5,0),5)
48
test/versioning/test_runchangeset.py
Normal file
@ -0,0 +1,48 @@
from test import fixture
from migrate.versioning.schema import *
from migrate.versioning import script
import os,shutil

class TestRunChangeset(fixture.Pathed,fixture.DB):
    level=fixture.DB.CONNECT

    def setUp(self):
        Repository.clear()
        self.path_repos=self.tmp_repos()
        self.path_script=self.tmp_py()
        # Create repository, script
        Repository.create(self.path_repos,'repository_name')

    @fixture.usedb()
    def test_changeset_run(self):
        """Running a changeset against a repository gives expected results"""
        repos=Repository(self.path_repos)
        for i in range(10):
            script.PythonScript.create(self.path_script)
            repos.commit(self.path_script)
        try:
            ControlledSchema(self.engine,repos).drop()
        except:
            pass
        db=ControlledSchema.create(self.engine,repos)

        # Scripts are empty; we'll check version # correctness.
        # (Correct application of their content is checked elsewhere)
        self.assertEquals(db.version,0)
        db.upgrade(1)
        self.assertEquals(db.version,1)
        db.upgrade(5)
        self.assertEquals(db.version,5)
        db.upgrade(5)
        self.assertEquals(db.version,5)
        db.upgrade(None) # Latest is implied
        self.assertEquals(db.version,10)
        self.assertRaises(Exception,db.upgrade,11)
        self.assertEquals(db.version,10)
        db.upgrade(9)
        self.assertEquals(db.version,9)
        db.upgrade(0)
        self.assertEquals(db.version,0)
        self.assertRaises(Exception,db.upgrade,-1)
        self.assertEquals(db.version,0)
        #changeset = repos.changeset(self.url,0)
        db.drop()
90
test/versioning/test_schema.py
Normal file
@ -0,0 +1,90 @@
from test import fixture
from migrate.versioning.schema import *
from migrate.versioning import script,exceptions
import os,shutil

class TestControlledSchema(fixture.Pathed,fixture.DB):
    # Transactions break postgres in this test; we'll clean up after ourselves
    level=fixture.DB.CONNECT

    def setUp(self):
        path_repos=self.tmp_repos()
        self.repos=Repository.create(path_repos,'repository_name')
        # drop existing version table if necessary
        try:
            ControlledSchema(self.engine,self.repos).drop()
        except:
            # No table to drop; that's fine, be silent
            pass

    @fixture.usedb()
    def test_version_control(self):
        """Establish version control on a particular database"""
        # Establish version control on this database
        dbcontrol=ControlledSchema.create(self.engine,self.repos)

        # We can load a controlled DB this way, too
        dbcontrol0=ControlledSchema(self.engine,self.repos)
        self.assertEquals(dbcontrol,dbcontrol0)
        # We can also use a repository path, instead of a repository
        dbcontrol0=ControlledSchema(self.engine,self.repos.path)
        self.assertEquals(dbcontrol,dbcontrol0)
        # We don't have to use the same connection
        engine=create_engine(self.url)
        dbcontrol0=ControlledSchema(engine,self.repos.path)
        self.assertEquals(dbcontrol,dbcontrol0)

        # Trying to create another DB this way fails: table exists
        self.assertRaises(exceptions.ControlledSchemaError,
            ControlledSchema.create,self.engine,self.repos)

        # Clean up:
        # un-establish version control
        dbcontrol.drop()
        # Attempting to drop vc from a db without it should fail
        self.assertRaises(exceptions.DatabaseNotControlledError,dbcontrol.drop)

    @fixture.usedb()
    def test_version_control_specified(self):
        """Establish version control with a specified version"""
        # Establish version control on this database
        version=0
        dbcontrol=ControlledSchema.create(self.engine,self.repos,version)
        self.assertEquals(dbcontrol.version,version)

        # Correct when we load it, too
        dbcontrol=ControlledSchema(self.engine,self.repos)
        self.assertEquals(dbcontrol.version,version)

        dbcontrol.drop()

        # Now try it with a nonzero value
        script_path = self.tmp_py()
        version=10
        for i in range(version):
            script.PythonScript.create(script_path)
            self.repos.commit(script_path)
        self.assertEquals(self.repos.latest,version)

        # Test with some mid-range value
        dbcontrol=ControlledSchema.create(self.engine,self.repos,5)
        self.assertEquals(dbcontrol.version,5)
        dbcontrol.drop()

        # Test with max value
        dbcontrol=ControlledSchema.create(self.engine,self.repos,version)
        self.assertEquals(dbcontrol.version,version)
        dbcontrol.drop()

    @fixture.usedb()
    def test_version_control_invalid(self):
        """Try to establish version control with an invalid version"""
        versions=('Thirteen','-1',-1,'',13)
        # A fresh repository doesn't go up to version 13 yet
        for version in versions:
            #self.assertRaises(ControlledSchema.InvalidVersionError,
            # Can't have custom errors with assertRaises...
            try:
                ControlledSchema.create(self.engine,self.repos,version)
                self.assert_(False,repr(version))
            except exceptions.InvalidVersionError:
                pass
57
test/versioning/test_script.py
Normal file
@ -0,0 +1,57 @@
from test import fixture
|
||||
from migrate.versioning.script import *
|
||||
from migrate.versioning import exceptions
|
||||
import os,shutil
|
||||
|
||||
class TestPyScript(fixture.Pathed):
|
||||
cls = PythonScript
|
||||
def test_create(self):
|
||||
"""We can create a migration script"""
|
||||
path=self.tmp_py()
|
||||
# Creating a file that doesn't exist should succeed
|
||||
self.cls.create(path)
|
||||
self.assert_(os.path.exists(path))
|
||||
# Created file should be a valid script (If not, raises an error)
|
||||
self.cls.verify(path)
|
||||
# Can't create it again: it already exists
|
||||
self.assertRaises(exceptions.PathFoundError,self.cls.create,path)
|
||||
|
||||
def test_verify_notfound(self):
|
||||
"""Correctly verify a python migration script: nonexistant file"""
|
||||
path=self.tmp_py()
|
||||
self.assert_(not os.path.exists(path))
|
||||
# Fails on empty path
|
||||
self.assertRaises(exceptions.InvalidScriptError,self.cls.verify,path)
|
||||
self.assertRaises(exceptions.InvalidScriptError,self.cls,path)
|
||||
|
||||
def test_verify_invalidpy(self):
|
||||
"""Correctly verify a python migration script: invalid python file"""
|
||||
path=self.tmp_py()
|
||||
# Create empty file
|
||||
f=open(path,'w')
|
||||
f.write("def fail")
|
||||
f.close()
|
||||
self.assertRaises(Exception,self.cls.verify_module,path)
|
||||
# script isn't verified on creation, but on module reference
|
||||
py = self.cls(path)
|
||||
self.assertRaises(Exception,(lambda x: x.module),py)
|
||||
|
||||
def test_verify_nofuncs(self):
|
||||
"""Correctly verify a python migration script: valid python file; no upgrade func"""
|
||||
path=self.tmp_py()
|
||||
# Create empty file
|
||||
f=open(path,'w')
|
||||
f.write("def zergling():\n\tprint 'rush'")
|
||||
f.close()
|
||||
self.assertRaises(exceptions.InvalidScriptError,self.cls.verify_module,path)
|
||||
# script isn't verified on creation, but on module reference
|
||||
py = self.cls(path)
|
||||
self.assertRaises(exceptions.InvalidScriptError,(lambda x: x.module),py)
|
||||
|
||||
def test_verify_success(self):
|
||||
"""Correctly verify a python migration script: success"""
|
||||
path=self.tmp_py()
|
||||
# Succeeds after creating
|
||||
self.cls.create(path)
|
||||
self.cls.verify(path)
|
||||
|
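The two checks these tests exercise (the script must be importable Python, and it must define an `upgrade()` function) can be sketched as a small stand-alone helper. This is a hypothetical sketch in modern Python, not migrate's actual `verify_module`; the function name and the `ValueError` it raises are assumptions for illustration.

```python
import importlib.util

def verify_module(path):
    """Sketch of the verification the tests above assert: the file must be
    importable Python, and it must define a callable upgrade() function.
    (Hypothetical helper; migrate's real code raises its own error types.)"""
    spec = importlib.util.spec_from_file_location("migration_script", path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)  # raises SyntaxError on files like "def fail"
    if not callable(getattr(module, "upgrade", None)):
        raise ValueError("no upgrade() function in %s" % path)
    return module
```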
454
test/versioning/test_shell.py
Normal file
@@ -0,0 +1,454 @@
import sys
import traceback
from StringIO import StringIO
import os,shutil
from test import fixture
from migrate.versioning.repository import Repository
from migrate.versioning import shell
from sqlalchemy import MetaData,Table

class Shell(fixture.Shell):
    _cmd=os.path.join('python shell','migrate')
    @classmethod
    def cmd(cls,*p):
        p = map(lambda s: str(s),p)
        ret = ' '.join([cls._cmd]+p)
        return ret
    def execute(self,shell_cmd,runshell=None):
        """A crude simulation of a shell command, to speed things up"""
        # If we get an fd, the command is already done
        if isinstance(shell_cmd,file) or isinstance(shell_cmd,StringIO):
            return shell_cmd
        # Analyze the command; see if we can 'fake' the shell
        try:
            # Forced to run in shell?
            #if runshell or '--runshell' in sys.argv:
            if runshell:
                raise Exception
            # Remove the command prefix
            if not shell_cmd.startswith(self._cmd):
                raise Exception
            cmd = shell_cmd[(len(self._cmd)+1):]
            params = cmd.split(' ')
            command = params[0]
        except:
            return super(Shell,self).execute(shell_cmd)

        # Redirect stdout to an object; redirect stderr to stdout
        fd = StringIO()
        orig_stdout = sys.stdout
        orig_stderr = sys.stderr
        sys.stdout = fd
        sys.stderr = fd
        # Execute this command
        try:
            try:
                shell.main(params)
            except SystemExit,e:
                # Simulate the exit status
                fd_close=fd.close
                def close_():
                    fd_close()
                    return e.args[0]
                fd.close = close_
            except Exception,e:
                # Print the exception, but don't re-raise it
                traceback.print_exc()
                # Simulate a nonzero exit status
                fd_close=fd.close
                def close_():
                    fd_close()
                    return 2
                fd.close = close_
        finally:
            # Clean up
            sys.stdout = orig_stdout
            sys.stderr = orig_stderr
        fd.seek(0)
        return fd

    def cmd_version(self,repos_path):
        fd = self.execute(self.cmd('version',repos_path))
        ret = int(fd.read().strip())
        self.assertSuccess(fd)
        return ret
    def cmd_db_version(self,url,repos_path):
        fd = self.execute(self.cmd('db_version',url,repos_path))
        txt = fd.read()
        ret = int(txt.strip())
        self.assertSuccess(fd)
        return ret

class TestShellCommands(Shell):
    """Tests migrate.py commands"""

    def test_run(self):
        """Runs; displays help"""
        # Force this to run in shell...
        self.assertSuccess(self.cmd('-h'),runshell=True)
        self.assertSuccess(self.cmd('--help'),runshell=True)

    def test_help(self):
        """Display help on a specific command"""
        self.assertSuccess(self.cmd('-h'),runshell=True)
        self.assertSuccess(self.cmd('--help'),runshell=True)
        for cmd in shell.api.__all__:
            fd=self.execute(self.cmd('help',cmd))
            # Description may change, so the best we can do is ensure it shows up
            output = fd.read()
            self.assertNotEquals(output,'')
            self.assertSuccess(fd)

    def test_create(self):
        """Repositories are created successfully"""
        repos=self.tmp_repos()
        name='name'
        # Creating a file that doesn't exist should succeed
        cmd=self.cmd('create',repos,name)
        self.assertSuccess(cmd)
        # Files should actually be created
        self.assert_(os.path.exists(repos))
        # The default table should not be None
        repos_ = Repository(repos)
        self.assertNotEquals(repos_.config.get('db_settings','version_table'),'None')
        # Can't create it again: it already exists
        self.assertFailure(cmd)

    def test_script(self):
        """We can create a migration script via the command line"""
        script=self.tmp_py()
        # Creating a file that doesn't exist should succeed
        self.assertSuccess(self.cmd('script',script))
        self.assert_(os.path.exists(script))
        # 's' instead of 'script' should work too
        os.remove(script)
        self.assert_(not os.path.exists(script))
        self.assertSuccess(self.cmd('s',script))
        self.assert_(os.path.exists(script))
        # Can't create it again: it already exists
        self.assertFailure(self.cmd('script',script))

    def test_manage(self):
        """Create a project management script"""
        script=self.tmp_py()
        self.assert_(not os.path.exists(script))
        # No attempt is made to verify correctness of the repository path here
        self.assertSuccess(self.cmd('manage',script,'--repository=/path/to/repository'))
        self.assert_(os.path.exists(script))

class TestShellRepository(Shell):
    """Shell commands on an existing repository/python script"""
    def setUp(self):
        """Create repository, python change script"""
        self.path_repos=repos=self.tmp_repos()
        self.path_script=script=self.tmp_py()
        self.assertSuccess(self.cmd('create',repos,'repository_name'))
        self.assertSuccess(self.cmd('script',script))

    def test_commit_1(self):
        """Commits should work correctly; script should vanish after commit"""
        self.assert_(os.path.exists(self.path_script))
        self.assertSuccess(self.cmd('commit',self.path_script,self.path_repos))
        self.assert_(not os.path.exists(self.path_script))
    def test_commit_2(self):
        """Commits should work correctly with repository as a keyword param"""
        self.assert_(os.path.exists(self.path_script))
        self.assertSuccess(self.cmd('commit',self.path_script,'--repository=%s'%self.path_repos))
        self.assert_(not os.path.exists(self.path_script))
    def test_version(self):
        """Correctly detect repository version"""
        # Version: 0 (no scripts yet); successful execution
        fd=self.execute(self.cmd('version','--repository=%s'%self.path_repos))
        self.assertEquals(fd.read().strip(),"0")
        self.assertSuccess(fd)
        # Also works as a positional param
        fd=self.execute(self.cmd('version',self.path_repos))
        self.assertEquals(fd.read().strip(),"0")
        self.assertSuccess(fd)
        # Commit a script and version should increment
        self.assertSuccess(self.cmd('commit',self.path_script,'--repository=%s'%self.path_repos))
        fd=self.execute(self.cmd('version',self.path_repos))
        self.assertEquals(fd.read().strip(),"1")
        self.assertSuccess(fd)
    def test_source(self):
        """Correctly fetch a script's source"""
        source=open(self.path_script).read()
        self.assert_(source.find('def upgrade')>=0)
        self.assertSuccess(self.cmd('commit',self.path_script,'--repository=%s'%self.path_repos))
        # Later, we'll want to make repos optional somehow
        # Version is now 1
        fd=self.execute(self.cmd('version',self.path_repos))
        self.assert_(fd.read().strip()=="1")
        self.assertSuccess(fd)
        # Output/verify the source of version 1
        fd=self.execute(self.cmd('source',1,'--repository=%s'%self.path_repos))
        result=fd.read()
        self.assertSuccess(fd)
        self.assert_(result.strip()==source.strip())
        # We can also send the source to a file... test that too
        self.assertSuccess(self.cmd('source',1,self.path_script,'--repository=%s'%self.path_repos))
        self.assert_(os.path.exists(self.path_script))
        fd=open(self.path_script)
        result=fd.read()
        self.assert_(result.strip()==source.strip())
    def test_commit_replace(self):
        """Commit can replace a specified version"""
        # Commit the default script
        self.assertSuccess(self.cmd('commit',self.path_script,self.path_repos))
        self.assertEquals(self.cmd_version(self.path_repos),1)
        # Read the default script's text
        fd=self.execute(self.cmd('source',1,'--repository=%s'%self.path_repos))
        script_src_1 = fd.read()
        self.assertSuccess(fd)

        # Commit a new script
        script_text="""
        from sqlalchemy import *
        from migrate import *

        # Our test is just that the source is different; so we don't have to
        # do anything useful in here.

        def upgrade():
            pass

        def downgrade():
            pass
        """.replace('\n        ','\n')
        fd=open(self.path_script,'w')
        fd.write(script_text)
        fd.close()
        self.assertSuccess(self.cmd('commit',self.path_script,self.path_repos,1))
        # We specified a version above - it should replace that, not create new
        self.assertEquals(self.cmd_version(self.path_repos),1)
        # Source should change
        fd=self.execute(self.cmd('source',1,'--repository=%s'%self.path_repos))
        script_src_2 = fd.read()
        self.assertSuccess(fd)
        self.assertNotEquals(script_src_1,script_src_2)
        # Source should be reasonable
        self.assertEquals(script_src_2.strip(),script_text.strip())
        self.assert_(script_src_1.count('from migrate import'))
        self.assert_(script_src_1.count('from sqlalchemy import'))

class TestShellDatabase(Shell,fixture.DB):
    """Commands associated with a particular database"""
    # We'll need to clean up after ourselves, since the shell creates its own txn;
    # we need to connect to the DB to see if things worked
    level=fixture.DB.CONNECT

    @fixture.usedb()
    def test_version_control(self):
        """Ensure we can set version control on a database"""
        path_repos=repos=self.tmp_repos()
        self.assertSuccess(self.cmd('create',path_repos,'repository_name'))
        self.exitcode(self.cmd('drop_version_control',self.url,path_repos))
        self.assertSuccess(self.cmd('version_control',self.url,path_repos))
        # Clean up
        self.assertSuccess(self.cmd('drop_version_control',self.url,path_repos))
        # Attempting to drop vc from a database without it should fail
        self.assertFailure(self.cmd('drop_version_control',self.url,path_repos))

    @fixture.usedb()
    def test_version_control_specified(self):
        """Ensure we can set version control to a particular version"""
        path_repos=self.tmp_repos()
        self.assertSuccess(self.cmd('create',path_repos,'repository_name'))
        self.exitcode(self.cmd('drop_version_control',self.url,path_repos))
        # Fill the repository
        path_script = self.tmp_py()
        version=1
        for i in range(version):
            self.assertSuccess(self.cmd('script',path_script))
            self.assertSuccess(self.cmd('commit',path_script,path_repos))
        # Repository version is correct
        fd=self.execute(self.cmd('version',path_repos))
        self.assertEquals(fd.read().strip(),str(version))
        self.assertSuccess(fd)
        # Apply versioning to DB
        self.assertSuccess(self.cmd('version_control',self.url,path_repos,version))
        # Test version number
        fd=self.execute(self.cmd('db_version',self.url,path_repos))
        self.assertEquals(fd.read().strip(),str(version))
        self.assertSuccess(fd)
        # Clean up
        self.assertSuccess(self.cmd('drop_version_control',self.url,path_repos))

    @fixture.usedb()
    def test_upgrade(self):
        """Can upgrade a versioned database"""
        # Create a repository
        repos_name = 'repos_name'
        repos_path = self.tmp()
        script_path = self.tmp_py()
        self.assertSuccess(self.cmd('create',repos_path,repos_name))
        self.assertEquals(self.cmd_version(repos_path),0)
        # Version the DB
        self.exitcode(self.cmd('drop_version_control',self.url,repos_path))
        self.assertSuccess(self.cmd('version_control',self.url,repos_path))

        # Upgrades with latest version == 0
        self.assertEquals(self.cmd_db_version(self.url,repos_path),0)
        self.assertSuccess(self.cmd('upgrade',self.url,repos_path))
        self.assertEquals(self.cmd_db_version(self.url,repos_path),0)
        self.assertSuccess(self.cmd('upgrade',self.url,repos_path,0))
        self.assertEquals(self.cmd_db_version(self.url,repos_path),0)
        self.assertFailure(self.cmd('upgrade',self.url,repos_path,1))
        self.assertFailure(self.cmd('upgrade',self.url,repos_path,-1))

        # Add a script to the repository; upgrade the db
        self.assertSuccess(self.cmd('script',script_path))
        self.assertSuccess(self.cmd('commit',script_path,repos_path))
        self.assertEquals(self.cmd_version(repos_path),1)

        self.assertEquals(self.cmd_db_version(self.url,repos_path),0)
        self.assertSuccess(self.cmd('upgrade',self.url,repos_path))
        self.assertEquals(self.cmd_db_version(self.url,repos_path),1)
        # Downgrade must have a valid version specified
        self.assertFailure(self.cmd('downgrade',self.url,repos_path))
        self.assertFailure(self.cmd('downgrade',self.url,repos_path,2))
        self.assertFailure(self.cmd('downgrade',self.url,repos_path,-1))
        self.assertEquals(self.cmd_db_version(self.url,repos_path),1)
        self.assertSuccess(self.cmd('downgrade',self.url,repos_path,0))
        self.assertEquals(self.cmd_db_version(self.url,repos_path),0)
        self.assertFailure(self.cmd('downgrade',self.url,repos_path,1))
        self.assertEquals(self.cmd_db_version(self.url,repos_path),0)

        self.assertSuccess(self.cmd('drop_version_control',self.url,repos_path))

    def _run_test_sqlfile(self,upgrade_script,downgrade_script):
        upgrade_path = self.tmp_sql()
        downgrade_path = self.tmp_sql()
        upgrade = (upgrade_path,upgrade_script)
        downgrade = (downgrade_path,downgrade_script)
        for file_path,file_text in (upgrade,downgrade):
            fd = open(file_path,'w')
            fd.write(file_text)
            fd.close()

        repos_path = self.tmp()
        repos_name = 'repos'
        self.assertSuccess(self.cmd('create',repos_path,repos_name))
        self.exitcode(self.cmd('drop_version_control',self.url,repos_path))
        self.assertSuccess(self.cmd('version_control',self.url,repos_path))
        self.assertEquals(self.cmd_version(repos_path),0)
        self.assertEquals(self.cmd_db_version(self.url,repos_path),0)

        self.assertSuccess(self.cmd('commit',upgrade_path,repos_path,'postgres','upgrade'))
        self.assertEquals(self.cmd_version(repos_path),1)
        self.assertEquals(len(os.listdir(os.path.join(repos_path,'versions','1'))),1)

        # Add, not replace
        self.assertSuccess(self.cmd('commit',downgrade_path,repos_path,'postgres','downgrade','--version=1'))
        self.assertEquals(len(os.listdir(os.path.join(repos_path,'versions','1'))),2)
        self.assertEquals(self.cmd_version(repos_path),1)

        self.assertEquals(self.cmd_db_version(self.url,repos_path),0)
        self.assertRaises(Exception,self.engine.text('select * from t_table').execute)

        self.assertSuccess(self.cmd('upgrade',self.url,repos_path))
        self.assertEquals(self.cmd_db_version(self.url,repos_path),1)
        self.engine.text('select * from t_table').execute()

        self.assertSuccess(self.cmd('downgrade',self.url,repos_path,0))
        self.assertEquals(self.cmd_db_version(self.url,repos_path),0)
        self.assertRaises(Exception,self.engine.text('select * from t_table').execute)

    # The tests below are written with some postgres syntax, but the stuff
    # being tested (.sql files) ought to work with any db.
    @fixture.usedb(supported='postgres')
    def test_sqlfile(self):
        upgrade_script = """
        create table t_table (
            id serial,
            primary key(id)
        );
        """
        downgrade_script = """
        drop table t_table;
        """
        self._run_test_sqlfile(upgrade_script,downgrade_script)
    @fixture.usedb(supported='postgres')
    def test_sqlfile_comment(self):
        upgrade_script = """
        -- Comments in SQL break postgres autocommit
        create table t_table (
            id serial,
            primary key(id)
        );
        """
        downgrade_script = """
        -- Comments in SQL break postgres autocommit
        drop table t_table;
        """
        self._run_test_sqlfile(upgrade_script,downgrade_script)

    @fixture.usedb()
    def test_test(self):
        repos_name = 'repos_name'
        repos_path = self.tmp()
        script_path = self.tmp_py()

        self.assertSuccess(self.cmd('create',repos_path,repos_name))
        self.exitcode(self.cmd('drop_version_control',self.url,repos_path))
        self.assertSuccess(self.cmd('version_control',self.url,repos_path))
        self.assertEquals(self.cmd_version(repos_path),0)
        self.assertEquals(self.cmd_db_version(self.url,repos_path),0)

        # Empty script should succeed
        self.assertSuccess(self.cmd('script',script_path))
        self.assertSuccess(self.cmd('test',script_path,repos_path,self.url))
        self.assertEquals(self.cmd_version(repos_path),0)
        self.assertEquals(self.cmd_db_version(self.url,repos_path),0)

        # Error script should fail
        script_path = self.tmp_py()
        script_text="""
        from sqlalchemy import *
        from migrate import *

        def upgrade():
            print 'fgsfds'
            raise Exception()

        def downgrade():
            print 'sdfsgf'
            raise Exception()
        """.replace("\n        ","\n")
        file=open(script_path,'w')
        file.write(script_text)
        file.close()
        self.assertFailure(self.cmd('test',script_path,repos_path,self.url))
        self.assertEquals(self.cmd_version(repos_path),0)
        self.assertEquals(self.cmd_db_version(self.url,repos_path),0)

        # Nonempty script using migrate_engine should succeed
        script_path = self.tmp_py()
        script_text="""
        from sqlalchemy import *
        from migrate import *

        meta = MetaData(migrate_engine)
        account = Table('account',meta,
            Column('id',Integer,primary_key=True),
            Column('login',String(40)),
            Column('passwd',String(40)),
        )
        def upgrade():
            # Upgrade operations go here. Don't create your own engine; use the engine
            # named 'migrate_engine' imported from migrate.
            meta.create_all()

        def downgrade():
            # Operations to reverse the above upgrade go here.
            meta.drop_all()
        """.replace("\n        ","\n")
        file=open(script_path,'w')
        file.write(script_text)
        file.close()
        self.assertSuccess(self.cmd('test',script_path,repos_path,self.url))
        self.assertEquals(self.cmd_version(repos_path),0)
        self.assertEquals(self.cmd_db_version(self.url,repos_path),0)
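The stream-redirection trick at the heart of `Shell.execute` above generalizes to a small helper. This is a sketch in modern Python (using `io.StringIO` rather than the Python 2 `StringIO` module); the helper name is an assumption for illustration, not part of migrate:

```python
import sys
from io import StringIO

def capture_output(fn, *args):
    """Run fn(*args) with stdout and stderr redirected into one StringIO,
    restoring the real streams afterwards -- the same trick Shell.execute
    uses to run a 'shell' command in-process and inspect its output."""
    buf = StringIO()
    orig_out, orig_err = sys.stdout, sys.stderr
    sys.stdout = sys.stderr = buf
    try:
        fn(*args)
    finally:
        # Always restore the real streams, even if fn raised
        sys.stdout, sys.stderr = orig_out, orig_err
    return buf.getvalue()
```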
17
test/versioning/test_template.py
Normal file
@@ -0,0 +1,17 @@
from test import fixture
from migrate.versioning.repository import *
import os

class TestPathed(fixture.Base):
    def test_templates(self):
        """We can find the path to all repository templates"""
        path = str(template)
        self.assert_(os.path.exists(path))
    def test_repository(self):
        """We can find the path to the default repository"""
        path = template.get_repository()
        self.assert_(os.path.exists(path))
    def test_script(self):
        """We can find the path to the default migration script"""
        path = template.get_script()
        self.assert_(os.path.exists(path))
44
test/versioning/test_version.py
Normal file
@@ -0,0 +1,44 @@
from test import fixture
from migrate.versioning.version import *

class TestVerNum(fixture.Base):
    def test_invalid(self):
        """Disallow invalid version numbers"""
        versions = ('-1',-1,'Thirteen','')
        for version in versions:
            self.assertRaises(ValueError,VerNum,version)
    def test_is(self):
        a=VerNum(1)
        b=VerNum(1)
        self.assert_(a is b)
    def test_add(self):
        self.assert_(VerNum(1)+VerNum(1)==VerNum(2))
        self.assert_(VerNum(1)+1==2)
        self.assert_(VerNum(1)+1=='2')
    def test_sub(self):
        self.assert_(VerNum(1)-1==0)
        self.assertRaises(ValueError,lambda:VerNum(0)-1)
    def test_eq(self):
        self.assert_(VerNum(1)==VerNum('1'))
        self.assert_(VerNum(1)==1)
        self.assert_(VerNum(1)=='1')
        self.assert_(not VerNum(1)==2)
    def test_ne(self):
        self.assert_(VerNum(1)!=2)
        self.assert_(not VerNum(1)!=1)
    def test_lt(self):
        self.assert_(not VerNum(1)<1)
        self.assert_(VerNum(1)<2)
        self.assert_(not VerNum(2)<1)
    def test_le(self):
        self.assert_(VerNum(1)<=1)
        self.assert_(VerNum(1)<=2)
        self.assert_(not VerNum(2)<=1)
    def test_gt(self):
        self.assert_(not VerNum(1)>1)
        self.assert_(not VerNum(1)>2)
        self.assert_(VerNum(2)>1)
    def test_ge(self):
        self.assert_(VerNum(1)>=1)
        self.assert_(not VerNum(1)>=2)
        self.assert_(VerNum(2)>=1)
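The semantics these tests pin down (shared instances in `test_is`, rejection of negative and non-numeric values, arithmetic and comparisons that coerce ints and numeric strings) could be implemented along the following lines. This is a hedged sketch in modern Python, not migrate's actual `VerNum`; the interning cache and `ValueError` messages are assumptions:

```python
class VerNum:
    """Interned version numbers: equal values share one instance, and
    comparisons accept ints, numeric strings, or other VerNums.
    (Illustrative sketch only, not migrate's real class.)"""
    _cache = {}

    def __new__(cls, value):
        value = int(value)  # 'Thirteen' or '' raises ValueError, as test_invalid expects
        if value < 0:
            raise ValueError("version number may not be negative: %d" % value)
        if value not in cls._cache:
            inst = super().__new__(cls)
            inst.value = value
            cls._cache[value] = inst
        return cls._cache[value]  # same value -> same instance (test_is)

    def __int__(self):
        return self.value

    def __add__(self, other):
        return VerNum(self.value + int(VerNum(other)))

    def __sub__(self, other):
        return VerNum(self.value - int(VerNum(other)))  # below zero raises

    def __eq__(self, other):
        return self.value == int(VerNum(other))

    def __lt__(self, other):
        return self.value < int(VerNum(other))
```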
13
test_db.cfg.tmpl
Normal file
@@ -0,0 +1,13 @@
# test_db.cfg
#
# This file contains a list of connection strings which will be used by
# database tests. Tests will be executed once for each string in this file.
# You should be sure that the database used for the test doesn't contain any
# important data. See README for more information.
#
# The string '__tmp__' is substituted for a temporary file in each connection
# string. This is useful for sqlite tests.
sqlite:///__tmp__
postgres://scott:tiger@localhost/test
mysql://scott:tiger@localhost/test
oracle://scott:tiger@localhost
Loading…
x
Reference in New Issue
Block a user