Eradicate trailing whitespace
Remove all trailing spaces and tabs in every file in the project. Some people have editors configured to strip trailing whitespace, which causes them to accidentally make small whitespace changes in unrelated commits, making those commits harder to review. Better to fix them all at once.

Change-Id: I17d89f55f41d8599e0ab1a31f646cd161289703e
This commit is contained in: parent 21fcdad0f4, commit d6fbf12989
@ -130,9 +130,9 @@ Fixed bugs
- fixed case sensitivity in setup.py dependencies
- moved :py:mod:`migrate.changeset.exceptions` and
  :py:mod:`migrate.versioning.exceptions` to :py:mod:`migrate.exceptions`
- cleared up test output and improved testing of deprecation warnings.
- some documentation fixes
- #107: fixed syntax error in genmodel.py
- #96: fixed bug with column dropping in sqlite
- #94: fixed bug that prevented non-unique indexes being created
- fixed bug with column dropping involving foreign keys
@ -140,7 +140,7 @@ Fixed bugs
- rewrite of the schema diff internals, now supporting column
  differences in addition to missing columns and tables.
- fixed bug where passing an empty list to
  :py:func:`migrate.versioning.shell.main` failed
- #108: fixed issues with Firebird support.
0.6 (11.07.2010)
@ -80,10 +80,10 @@ You can create a column with :meth:`~ChangesetColumn.create`:

You can pass `primary_key_name`, `index_name` and `unique_name` to the
:meth:`~ChangesetColumn.create` method to issue ``ALTER TABLE ADD
CONSTRAINT`` after changing the column.

For multi-column constraints and other advanced configuration, check the
:ref:`constraint tutorial <constraint-tutorial>`.

.. versionadded:: 0.6.0
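As a rough illustration of what those names are used for (a sketch of the generated statement shape, not the library's actual internals; the function and example names here are invented), the ``ALTER TABLE ADD CONSTRAINT`` statement amounts to a simple string built from the table, constraint kind, constraint name, and column list:

```python
def constraint_ddl(table, kind, name, columns):
    """Build an ALTER TABLE ... ADD CONSTRAINT statement.

    This sketch covers named PRIMARY KEY and UNIQUE constraints;
    real dialects differ in quoting and supported constraint kinds.
    """
    cols = ", ".join(columns)
    return f"ALTER TABLE {table} ADD CONSTRAINT {name} {kind} ({cols})"

# e.g. roughly what a unique_name='uq_user_email' might translate to:
ddl = constraint_ddl("user", "UNIQUE", "uq_user_email", ["email"])
print(ddl)  # → ALTER TABLE user ADD CONSTRAINT uq_user_email UNIQUE (email)
```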
@ -244,12 +244,12 @@ Foreign key constraints:
.. code-block:: python

    from migrate.changeset.constraint import ForeignKeyConstraint

    cons = ForeignKeyConstraint([table.c.fkey], [othertable.c.id])

    # Create the constraint
    cons.create()

    # Drop the constraint
    cons.drop()
@ -258,9 +258,9 @@ Check constraints:
.. code-block:: python

    from migrate.changeset.constraint import CheckConstraint

    cons = CheckConstraint('id > 3', columns=[table.c.id])

    # Create the constraint
    cons.create()
@ -272,7 +272,7 @@ Unique constraints:
.. code-block:: python

    from migrate.changeset.constraint import UniqueConstraint

    cons = UniqueConstraint('id', 'age', table=self.table)

    # Create the constraint
@ -1,7 +1,7 @@
FAQ
===

Q: Adding a **nullable=False** column
**************************************

A: Your table probably already contains data. That means if you add a column, its contents will be NULL.
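A common workaround (a sketch, not something this FAQ prescribes; the table, column, and default values are made up) is to add the column as nullable, backfill existing rows, and only then enforce the constraint on DBMSes that support altering it:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO account (name) VALUES ('alice'), ('bob')")

# Step 1: add the column as nullable; existing rows get NULL.
conn.execute("ALTER TABLE account ADD COLUMN status TEXT")

# Step 2: backfill a sensible value so no NULLs remain.
conn.execute("UPDATE account SET status = 'active' WHERE status IS NULL")

# Step 3 (on DBMSes that support it): ALTER COLUMN ... SET NOT NULL.
nulls = conn.execute(
    "SELECT COUNT(*) FROM account WHERE status IS NULL"
).fetchone()[0]
print(nulls)  # → 0
```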
@ -13,7 +13,7 @@ Automatically generating the code for this sort of task seems like a good soluti

* Maintenance, usually a problem with auto-generated code, is not an issue: old database migration scripts are not the subject of maintenance; the correct solution is usually a new migration script.

Implementation is a problem: finding the 'diff' of two databases to determine what columns to add is not trivial. Fortunately, there exist tools that claim to do this for us: [http://sqlfairy.sourceforge.net/ SQL::Translator] and [http://xml2ddl.berlios.de/ XML to DDL] both claim to have this capability.

...
@ -23,4 +23,4 @@ All that said, this is ''not'' something I'm going to attempt during the Summer

* The project has a deadline and I have plenty else to do already
* Lots of people with more experience than me say this would take more time than it's worth

It's something that might be considered for future work if this project is successful, though.
@ -1,9 +1,9 @@
Important to our system is the API used for making database changes.

=== Raw SQL; .sql script ===
Require users to write raw SQL. Migration scripts are .sql scripts (with database version information in a header comment).

+ Familiar interface for experienced DBAs.

+ No new API to learn[[br]]
SQL is used elsewhere; many people know SQL already. Those who are still learning SQL will gain expertise not in the API of a specific tool, but in a language which will help them elsewhere. (On the other hand, those who are familiar with Python with no desire to learn SQL might find a Python API more intuitive.)
@ -15,12 +15,12 @@ SQL is used elsewhere; many people know SQL already. Those who are still learnin
Some things are possible in Python that aren't in SQL - for example, suppose we want to use some functions from our application in a migration script. (The user might also simply prefer Python.)

- Loss of database independence.[[br]]
There isn't much we can do to specify different actions for a particular DBMS besides copying the .sql file, which is obviously bad form.

=== Raw SQL; Python script ===
Require users to write raw SQL. Migration scripts are Python scripts whose API does little beyond specifying what DBMS(es) a particular statement should apply to.

For example,
{{{
run("CREATE TABLE test[...]") # runs for all databases
run("ALTER TABLE test ADD COLUMN varchar2[...]",oracle) # runs for Oracle only
@ -42,12 +42,12 @@ run("ALTER TABLE test ADD COLUMN"+sql("varchar",postgres|mysql,"varchar2",oracle
The user can write scripts to deal with conflicts between databases, but they're not really database-independent: the user has to deal with conflicts between databases; our system doesn't help them.

+ Minimal new API to learn.[[br]]
There is a new API to learn, but it is extremely small, depending mostly on SQL DDL. This has the advantages of "no new API" in our first solution.

- More verbose than .sql scripts.

=== Raw SQL; automatic translation between each dialect ===
Same as the above suggestion, but allow the user to specify a 'default' dialect of SQL that we'll interpret and whose quirks we'll deal with.
That is, write everything in SQL and try to automatically resolve the conflicts of different DBMSes.

For example, take the following script:
@ -74,7 +74,7 @@ CREATE TABLE test (
)
}}}

+ Database-independence issues of the above SQL solutions are resolved.[[br]]
Ideally, this solution would be as database-independent as a Python API for database changes (discussed next), but with all the advantages of writing SQL (no new API).

- Difficult implementation[[br]]
@ -83,7 +83,7 @@ Obviously, this is not easy to implement - there is a great deal of parsing logi
It seems tools for this already exist; an effective tool would trivialize this implementation. I experimented a bit with [http://sqlfairy.sourceforge.net/ SQL::Translator] and [http://xml2ddl.berlios.de/ XML to DDL]; however, I had difficulties with both.

- Database-specific features ensure that this cannot possibly be "complete".[[br]]
For example, Postgres has an 'interval' type to represent times and (AFAIK) MySQL does not.

=== Database-independent Python API ===
Create a Python API through which we may manage database changes. Scripts would be based on the existing SQLAlchemy API when possible.
@ -104,13 +104,13 @@ add_column('test','id',Integer,notNull=True)
}}}
This would use engines, similar to SQLAlchemy's, to deal with database-independence issues.

We would, of course, allow users to write raw SQL if they wish. This would be done in the manner outlined in the second solution above; this allows us to write our entire script in SQL and ignore the Python API if we wish, or write parts of our solution in SQL to deal with specific databases.

+ Deals with database-independence thoroughly and with minimal user effort.[[br]]
SQLAlchemy-style engines would be used for this; issues of different DBMS syntax are resolved with minimal user effort. (Database-specific features would still need handwritten SQL.)

+ Familiar interface for SQLAlchemy users.[[br]]
In addition, we can often cut-and-paste column definitions from SQLAlchemy tables, easing one particular task.

- Requires that the user learn a new API.[[br]]
SQL already exists; people know it. SQL newbies might be more comfortable with a Python interface, but folks who already know SQL must learn a whole new API. (On the other hand, the user *can* write things in SQL if they wish, learning only the most minimal of APIs, if they are willing to resolve issues of database-independence themselves.)
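In a very reduced form, an add_column call like the one above could dispatch on the target dialect when rendering DDL. Everything in this sketch (function name, type map, dialect keys) is invented for illustration and is not the project's actual API:

```python
# Hypothetical sketch of a dialect-aware add_column; not the real API.
TYPE_MAP = {
    "Integer": {"default": "INTEGER", "oracle": "NUMBER(10)"},
    "String": {"default": "VARCHAR", "oracle": "VARCHAR2"},
}

def add_column_sql(table, name, coltype, dialect="default", not_null=False):
    """Render ALTER TABLE ... ADD COLUMN for a given dialect."""
    sqltype = TYPE_MAP[coltype].get(dialect, TYPE_MAP[coltype]["default"])
    suffix = " NOT NULL" if not_null else ""
    return f"ALTER TABLE {table} ADD COLUMN {name} {sqltype}{suffix}"

print(add_column_sql("test", "id", "Integer", not_null=True))
# → ALTER TABLE test ADD COLUMN id INTEGER NOT NULL
print(add_column_sql("test", "id", "Integer", dialect="oracle", not_null=True))
# → ALTER TABLE test ADD COLUMN id NUMBER(10) NOT NULL
```

The point is only that the script author writes one statement and the engine layer owns the dialect differences.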
@ -119,29 +119,29 @@ SQL already exists; people know it. SQL newbies might be more comfortable with a
SQL already exists/has been tested. A new Python API does not/has not, and much of the work seems to consist of little more than reinventing the wheel.

- Script behavior might change under different versions of the project.[[br]]
...where .sql scripts behave the same regardless of the project's version.

=== Generate .sql scripts from a Python API ===
Attempts to take the best of the first and last solutions. An API similar to the previous solution would be used, but rather than immediately being applied to the database, .sql scripts are generated for each type of database we're interested in. These .sql scripts are what's actually applied to the database.

This would essentially allow users to skip the Python script step entirely if they wished, and write migration scripts in SQL instead, as in solution 1.

+ Database-independence is an option, when needed.

+ A familiar interface/an interface that can interact with other tools is an option, when needed.

+ Easy to inspect the SQL generated by a script, to ensure it's what we're expecting.

+ Migration scripts won't change behavior across different versions of the project.[[br]]
Once a Python script is translated to a .sql script, its behavior is consistent across different versions of the project, unlike a pure Python solution.

- Multiple ways to do a single task: not Pythonic.[[br]]
I never really liked that word - "Pythonic" - but it does apply here. Multiple ways to do a single task have the potential to cause confusion, especially in a large project if many people do the same task different ways. We have to support both ways of doing things, as well.

----

'''Conclusion''': The last solution, generating .sql scripts from a Python API, seems to be best.

The first solution (.sql scripts) suffers from a lack of database-independence, but is familiar to experienced database developers, useful with other tools, and shows exactly what will be done to the database. The Python API solution has no trouble with database-independence, but suffers from other problems that the .sql solution doesn't. The last solution resolves both reasonably well. Multiple ways to do a single task might be called "not Pythonic", but IMO the trade-off is worth this cost.

Automatic translation between different dialects of SQL might have potential for use in a solution, but existing tools for this aren't reliable enough, as far as I can tell.
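The generate-.sql-from-Python workflow can be sketched like this (file layout, function name, and the override mechanism are all assumptions for illustration, not a committed design): each migration is a list of statements with optional per-dialect overrides, and one .sql file is emitted per dialect.

```python
from pathlib import Path

def generate_scripts(statements, dialects, outdir):
    """Write one .sql file per dialect from a shared list of
    (dialect_overrides, default_sql) pairs."""
    outdir = Path(outdir)
    outdir.mkdir(parents=True, exist_ok=True)
    paths = {}
    for dialect in dialects:
        lines = [overrides.get(dialect, default)
                 for overrides, default in statements]
        path = outdir / f"upgrade.{dialect}.sql"
        path.write_text(";\n".join(lines) + ";\n")
        paths[dialect] = path
    return paths

statements = [
    ({}, "CREATE TABLE test (id INTEGER)"),
    ({"oracle": "ALTER TABLE test ADD name VARCHAR2(40)"},
     "ALTER TABLE test ADD COLUMN name VARCHAR(40)"),
]
paths = generate_scripts(statements, ["postgres", "oracle"], "build/sql")
print(paths["oracle"].read_text())
```

The generated files are plain SQL, so they can be inspected or handed to other tools, which is exactly the advantage claimed for this solution.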
@ -7,7 +7,7 @@ An option not discussed below is "no versioning"; that is, simply apply any scri
A single integer version number would specify the version of each database. This is stored in the database in a table, let's call it "schema"; each migration script is associated with a certain database version number.

+ Simple implementation[[br]]
Of the 3 solutions presented here, this one is by far the simplest.

+ Past success[[br]]
Used in [http://www.rubyonrails.org/ Ruby on Rails' migrations].
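A minimal sketch of this scheme (the "schema" table name comes from the text above; the migration dict and upgrade loop are invented for illustration): read the stored version, apply each later script in order, and bump the number after each one.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE schema (version INTEGER NOT NULL)")
conn.execute("INSERT INTO schema (version) VALUES (0)")

# Migration scripts keyed by the version they upgrade the database TO.
migrations = {
    1: "CREATE TABLE users (id INTEGER PRIMARY KEY)",
    2: "ALTER TABLE users ADD COLUMN name TEXT",
}

def upgrade(conn, target):
    (current,) = conn.execute("SELECT version FROM schema").fetchone()
    for version in range(current + 1, target + 1):
        conn.execute(migrations[version])          # apply in order
        conn.execute("UPDATE schema SET version = ?", (version,))

upgrade(conn, 2)
print(conn.execute("SELECT version FROM schema").fetchone()[0])  # → 2
```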
@ -32,13 +32,13 @@ Similar to the database-wide version number; the contents of the new table are m
- Determining the version of database-specific objects (ie. stored procedures, functions) is difficult.

- Ultimately gains nothing over the previous solution.[[br]]
The intent here was to allow multiple people to write scripts for a single database, but if database-wide version numbers aren't assigned until the script is placed in the repository, we could already do this.

=== Version determined via introspection ===
Each script has a schema associated with it, rather than a version number. The database schema is loaded, analyzed, and compared to the schema expected by the script.

+ No modifications to the database are necessary for this versioning system.[[br]]
The primary advantage here is that no changes to the database are required.

- Most difficult solution to implement, by far.[[br]]
Comparing the state of every schema object in the database is much more complex than simply comparing a version number, especially since we need to do it in a database-independent way (ie. we can't just diff the dump of each schema). SQLAlchemy's reflection would certainly be very helpful, but this remains the most complex solution.
@ -53,4 +53,4 @@ When version numbers are stored in the database, you have some idea of where an

----

'''Conclusion''': database-wide version numbers are the best way to go.
@ -1,4 +1,4 @@
This is very much a draft/brainstorm right now. It should be made prettier and thought about in more detail later, but it at least gives some idea of the direction we're headed right now.
----
* Two distinct tools; should not be coupled (can work independently):
  * Versioning tool
@ -1,32 +1,32 @@
== Goals ==

=== DBMS-independent schema changes ===
Many projects need to run on more than one DBMS. Similar changes need to be applied to both types of databases upon a schema change. The usual solution to database changes - .sql scripts with ALTER statements - runs into problems since different DBMSes have different dialects of SQL; we end up having to create a different script for each DBMS. This project will simplify this by providing an API, similar to the table definition API that already exists in SQLAlchemy, to alter a table independent of the DBMS being used, where possible.

This project will support all DBMSes currently supported by SQLAlchemy: SQLite, Postgres, MySQL, Oracle, and MS SQL. Adding support for more should be as possible as it is in SQLAlchemy.

Many are already used to writing .sql scripts for database changes, aren't interested in learning a new API, and have projects where DBMS-independence isn't an issue. Writing SQL statements as part of a (Python) change script must be an option, of course. Writing change scripts as .sql scripts, eliminating Python scripts from the picture entirely, would be nice too, although this is a lower-priority goal.

=== Database versioning and change script organization ===
Once we've accumulated a set of change scripts, it's important to know which ones have been applied/need to be applied to a particular database: suppose we need to upgrade a database that's extremely out-of-date; figuring out the scripts to run by hand is tedious. Applying changes in the wrong order, or applying changes when they shouldn't be applied, is bad; attempting to manage all of this by hand inevitably leads to an accident. This project will be able to detect the version of a particular database and apply the scripts required to bring it up to the latest version, or up to any specified version number (given all change scripts required to reach that version number).

Sometimes we need to be able to revert a schema to an older version. There's no automatic way to do this without rebuilding the database from scratch, so our project will allow one to write scripts to downgrade the database as well as upgrade it. If such scripts have been written, we should be able to apply them in the correct order, just like upgrading.

Large projects inevitably accumulate a large number of database change scripts; it's important that we have a place to keep them. Once a script has been written, this project will deal with organizing it among existing change scripts, and the user will never have to look at it again.

=== Change testing ===
It's important to test one's database changes before applying them to a production database (unless you happen to like disasters). Much testing is up to the user and can't be automated, but there are a few places we can help ensure at least a minimal level of schema integrity. A few examples are below; we could add more later.

Given an obsolete schema, a database change script, and an up-to-date schema known to be correct, this project will be able to ensure that applying the change script to the obsolete schema will result in an up-to-date schema - all without actually changing the obsolete database. Folks who have SQLAlchemy create their database using table.create() might find this useful; this is also useful for ensuring database downgrade scripts are correct.

Given a schema of a known version and a complete set of change scripts up to that version, this project will be able to detect if the schema matches its version. If a schema has gone through changes not present in migration scripts, this test will fail; if applying all scripts in sequence up to the specified version creates an identical schema, this test will succeed. Identifying that a schema is corrupt is sufficient; it would be nice if we could give a clue as to what's wrong, but this is lower priority. (Implementation: we'll probably show a diff of two schema dumps; this should be enough to tell the user what's gone wrong.)
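One minimal way to check "applying the change script yields the expected schema" is to apply the scripts to a scratch database and compare the resulting schemas. This sketch uses SQLite's catalog for the comparison (the real project would need a database-independent comparison, and the table and scripts here are made up):

```python
import sqlite3

def table_columns(conn, table):
    """(name, declared type) for each column, via SQLite's catalog."""
    return [(row[1], row[2]) for row in
            conn.execute(f"PRAGMA table_info({table})")]

def check_upgrade(obsolete_ddl, change_script, expected_ddl, table):
    """Apply change_script to a scratch copy of the obsolete schema and
    compare the resulting columns against a known-good schema."""
    upgraded = sqlite3.connect(":memory:")
    upgraded.executescript(obsolete_ddl)
    upgraded.executescript(change_script)

    expected = sqlite3.connect(":memory:")
    expected.executescript(expected_ddl)
    return table_columns(upgraded, table) == table_columns(expected, table)

ok = check_upgrade(
    "CREATE TABLE users (id INTEGER PRIMARY KEY);",
    "ALTER TABLE users ADD COLUMN name TEXT;",
    "CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);",
    "users",
)
print(ok)  # → True
```

Because everything runs against scratch in-memory databases, the obsolete database itself is never touched.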
== Non-Goals ==
ie. things we will '''not''' try to do (at least, during the Summer of Code)

=== Automatic generation of schema changes ===
For example, one might define a table:
{{{
CREATE TABLE person (
    id integer,
@ -69,5 +69,5 @@ One recurring problem I've had with this project is dealing with changes to the

I'm now working on another project that will be making use of SQLAlchemy: it fits many of my project's requirements, but lacks a migration tool that will be much needed. This presents an opportunity for me to make my first contribution to open source - I've long been interested in open source software and use it regularly, but haven't contributed to any until now. I'm particularly interested in the application of this tool with the TurboGears framework, as this project was inspired by a suggestion on the TurboGears mailing list and I'm working on a project using TurboGears - but there is no reason to couple an SQLAlchemy enhancement with TurboGears; this project may be used by anyone who uses SQLAlchemy.

Further information:
http://evan.zealgame.com/soc
@ -2,11 +2,11 @@ This plan has several problems and has been modified; new plan is discussed in w

----

One problem with [http://www.rubyonrails.org/ Ruby on Rails'] (very good) schema migration system is the behavior of scripts that depend on outside sources; ie. the application. If those change, there's no guarantee that such scripts will behave as they did before, and you'll get strange results.

For example, suppose one defines a SQLAlchemy table:
{{{
users = Table('users', metadata,
    Column('user_id', Integer, primary_key = True),
    Column('user_name', String(16), nullable = False),
    Column('password', String(20), nullable = False)
@ -30,7 +30,7 @@ def upgrade():
}}}
...and change our application's table definition:
{{{
users = Table('users', metadata,
    Column('user_id', Integer, primary_key = True),
    Column('user_name', String(16), nullable = False),
    Column('password', String(20), nullable = False),
@ -1,6 +1,6 @@
My original plan for Migrate's RepositoryFormat had several problems:

* Bind parameters: We needed to bind parameters into statements to get something suitable for an .sql file. For some types of parameters, there's no clean way to do this without writing an entire parser - too great a cost for this project. There's a reason why SQLAlchemy's logs display the statement and its parameters separately: the binding is done at a lower level than we have access to.
* Failure: Discussed in #17, the old format had no easy way to find the Python statements associated with an SQL error. This makes it difficult to debug scripts.

A new format will be used to solve this problem instead.
@ -15,9 +15,9 @@ These files will contain the following information:
* Parameters to be bound to the statement
* A Python stack trace at the point the statement was logged - this allows us to tell what Python statements are associated with an SQL statement when there's an error

These files will be created by pickling a Python object with the above information.

Such files may be executed by loading the log and having SQLAlchemy execute them as it might have before.

Good:
* Since the statements and bind parameters are stored separately and executed as SQLAlchemy would normally execute them, one problem discussed above is eliminated.
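A pickled log entry of this shape might look roughly like the following (a sketch; the actual record layout is not specified in this note, and the table and statement are made up). The statement and its parameters stay separate in the record, and binding happens only at replay time, which is the property the bullet above relies on:

```python
import pickle
import sqlite3
import traceback

# One logged entry: statement, parameters, and where it was logged from.
entry = {
    "statement": "INSERT INTO users (name) VALUES (?)",
    "parameters": ("alice",),
    "stack": traceback.format_stack(),  # for locating errors later
}
blob = pickle.dumps(entry)  # this is what would be written to the log file

# Replaying the log: unpickle and let the driver bind the parameters.
loaded = pickle.loads(blob)
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute(loaded["statement"], loaded["parameters"])
print(conn.execute("SELECT name FROM users").fetchone()[0])  # → alice
```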
|
@ -165,7 +165,7 @@ SQLAlchemy Migrate is split into two parts, database schema versioning
|
||||
|
||||
|
||||
API Documentation
|
||||
------------------
|
||||
------------------
|
||||
|
||||
.. toctree::
|
||||
|
||||
|
@ -33,7 +33,7 @@ body {

#content p,
#content ul,
#content blockquote {
    line-height: 1.6em;
}
@ -55,7 +55,7 @@ h1, h2, h3 {
|
||||
margin-bottom: .7em;
|
||||
}
|
||||
|
||||
h1 {
|
||||
h1 {
|
||||
font-size: 2.5em;
|
||||
}
|
||||
h2 {
|
||||
@ -73,9 +73,9 @@ h1 a, h2 a, h3 a {
|
||||
color: #33a;
|
||||
}
|
||||
|
||||
h1, h1 a, h1 a:hover, h1 a:visited,
|
||||
h2, h2 a, h2 a:hover, h2 a:visited,
|
||||
h3, h3 a, h3 a:hover, h3 a:visited,
|
||||
h1, h1 a, h1 a:hover, h1 a:visited,
|
||||
h2, h2 a, h2 a:hover, h2 a:visited,
|
||||
h3, h3 a, h3 a:hover, h3 a:visited,
|
||||
cite {
|
||||
text-decoration: none;
|
||||
}
|
||||
@ -113,12 +113,12 @@ a:hover {
|
||||
}
|
||||
|
||||
/* Special case doc-title */
|
||||
h1.doc-title {
|
||||
h1.doc-title {
|
||||
text-transform: lowercase;
|
||||
font-size: 4em;
|
||||
margin: 0;
|
||||
}
|
||||
h1.doc-title a {
|
||||
h1.doc-title a {
|
||||
display: block;
|
||||
padding-left: 0.8em;
|
||||
padding-bottom: .5em;
|
||||
@ -126,8 +126,8 @@ h1.doc-title a {
|
||||
margin: 0;
|
||||
border-bottom: 1px #fff solid;
|
||||
}
|
||||
h1.doc-title,
|
||||
h1.doc-title a,
|
||||
h1.doc-title,
|
||||
h1.doc-title a,
|
||||
h1.doc-title a:visited,
|
||||
h1.doc-title a:hover {
|
||||
text-decoration: none;
|
||||
@ -149,7 +149,7 @@ body {
|
||||
max-width: 60em;
|
||||
border: 1px solid #555596;
|
||||
}
|
||||
* html #page {
|
||||
* html #page {
|
||||
* width: 60em;
|
||||
* }
|
||||
*
|
||||
@ -157,7 +157,7 @@ body {
|
||||
* margin: 0 1em 0 3em;
|
||||
* }
|
||||
*
|
||||
* #content h1 {
|
||||
* #content h1 {
|
||||
* margin-left: 0;
|
||||
* }
|
||||
*
|
||||
|
@@ -4,10 +4,10 @@
/* Basic Style
----------------------------------- */

h1.pudge-member-page-heading {
h1.pudge-member-page-heading {
    font-size: 300%;
}
h4.pudge-member-page-subheading {
h4.pudge-member-page-subheading {
    font-size: 130%;
    font-style: italic;
    margin-top: -2.0em;
@@ -15,24 +15,24 @@ h4.pudge-member-page-subheading {
    margin-bottom: .3em;
    color: #0050CC;
}
p.pudge-member-blurb {
p.pudge-member-blurb {
    font-style: italic;
    font-weight: bold;
    font-size: 120%;
    margin-top: 0.2em;
    color: #999;
}
p.pudge-member-parent-link {
p.pudge-member-parent-link {
    margin-top: 0;
}
/*div.pudge-module-doc {
/*div.pudge-module-doc {
    max-width: 45em;
}*/
div.pudge-section {
div.pudge-section {
    margin-left: 2em;
    max-width: 45em;
}
/* Section Navigation
/* Section Navigation
----------------------------------- */

div#pudge-section-nav
@@ -42,7 +42,7 @@ div#pudge-section-nav
    height: 20px;
}

div#pudge-section-nav ul {
div#pudge-section-nav ul {
    border: 0;
    margin: 0;
    padding: 0;
@@ -90,33 +90,33 @@ div#pudge-section-nav ul li .pudge-section-link {

/* Module Lists
----------------------------------- */
dl.pudge-module-list dt {
dl.pudge-module-list dt {
    font-style: normal;
    font-size: 110%;
}
dl.pudge-module-list dd {
dl.pudge-module-list dd {
    color: #555;
}

/* Misc Overrides */
.rst-doc p.topic-title a {
.rst-doc p.topic-title a {
    color: #777;
}
.rst-doc ul.auto-toc a,
.rst-doc div.contents a {
.rst-doc div.contents a {
    color: #333;
}
pre { background: #eee; }

.rst-doc dl dt {
.rst-doc dl dt {
    color: #444;
    margin-top: 1em;
    font-weight: bold;
}
.rst-doc dl dd {
.rst-doc dl dd {
    margin-top: .2em;
}
.rst-doc hr {
.rst-doc hr {
    display: block;
    margin: 2em 0;
}
@@ -75,7 +75,7 @@ script applied to the database increments this version number. You can retrieve
a database's current :term:`version`::

    $ python my_repository/manage.py db_version sqlite:///project.db my_repository
    0
    0

A freshly versioned database begins at version 0 by default. This assumes the
database is empty or does only contain schema elements (tables, views,
@@ -172,7 +172,7 @@ Our change script predefines two functions, currently empty:
        Column('login', String(40)),
        Column('passwd', String(40)),
    )



    def upgrade(migrate_engine):
        meta.bind = migrate_engine
@@ -251,9 +251,9 @@ Our :term:`repository's <repository>` :term:`version` is::

   The :option:`test` command executes actual scripts, be sure you are *NOT*
   doing this on production database.


   If you need to test production changes you should:


   #. get a dump of your production database
   #. import the dump into an empty database
   #. run :option:`test` or :option:`upgrade` on that copy
@@ -363,7 +363,7 @@ your application - the same SQL should be generated every time, despite any
changes to your app's source code. You don't want your change scripts' behavior
changing when your source code does.

.. warning::
.. warning::

    **Consider the following example of what NOT to do**

@@ -372,7 +372,7 @@ changing when your source code does.
    .. code-block:: python

        from sqlalchemy import *


        meta = MetaData()
        table = Table('mytable', meta,
            Column('id', Integer, primary_key=True),
@@ -381,7 +381,7 @@ changing when your source code does.
    ... and uses this file to create a table in a change script:

    .. code-block:: python


        from sqlalchemy import *
        from migrate import *
        import model
@@ -390,7 +390,7 @@ changing when your source code does.
            model.meta.bind = migrate_engine

        def downgrade(migrate_engine):
            model.meta.bind = migrate_engine
            model.meta.bind = migrate_engine
            model.table.drop()

    This runs successfully the first time. But what happens if we change the
@@ -399,7 +399,7 @@ changing when your source code does.
    .. code-block:: python

        from sqlalchemy import *


        meta = MetaData()
        table = Table('mytable', meta,
            Column('id', Integer, primary_key=True),
@@ -448,7 +448,7 @@ database. Use engine.name to get the name of the database you're working with

    >>> from sqlalchemy import *
    >>> from migrate import *
    >>>
    >>>
    >>> engine = create_engine('sqlite:///:memory:')
    >>> engine.name
    'sqlite'
@@ -537,7 +537,7 @@ function names match equivalent shell commands. You can use this to
help integrate SQLAlchemy Migrate with your existing update process.

For example, the following commands are similar:


*From the command line*::

    $ migrate help help
@@ -553,9 +553,9 @@ For example, the following commands are similar:
    migrate.versioning.api.help('help')
    # Output:
    # %prog help COMMAND
    #
    #
    # Displays help on a given command.



.. _migrate.versioning.api: module-migrate.versioning.api.html

@@ -631,7 +631,7 @@ One may also want to specify custom themes. API functions accept
``templates_theme`` for this purpose (which defaults to `default`)

Example::


    /home/user/templates/manage $ ls
    default.py_tmpl
    pylons.py_tmpl
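As the `engine.name` excerpt above suggests, a change script can branch on the dialect name to emit database-specific SQL. A minimal dispatch sketch follows; the dialect names match what SQLAlchemy's `engine.name` returns, but the table and DDL strings are made-up examples and no real database is touched:

```python
# Dialect-specific DDL, keyed by what engine.name would return.
# The statements themselves are illustrative, not from the tutorial.
DDL_BY_DIALECT = {
    'sqlite': 'ALTER TABLE account ADD COLUMN bio TEXT',
    'mysql': 'ALTER TABLE account ADD COLUMN bio TEXT NULL',
    'postgresql': 'ALTER TABLE account ADD COLUMN bio TEXT',
}

# Fallback for dialects without a special case.
DEFAULT_DDL = 'ALTER TABLE account ADD COLUMN bio TEXT'

def ddl_for(engine_name):
    """Pick the statement an upgrade() might run for this engine."""
    return DDL_BY_DIALECT.get(engine_name, DEFAULT_DDL)
```

An `upgrade(migrate_engine)` would then call something like `migrate_engine.execute(ddl_for(migrate_engine.name))`, keeping the per-database differences in one table.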
@@ -56,7 +56,7 @@ class AlterTableVisitor(SchemaVisitor):
            # adapt to 0.6 which uses a string-returning
            # object
            self.append(" %s" % ret)


    def _to_table(self, param):
        """Returns the table object for the given param object."""
        if isinstance(param, (sa.Column, sa.Index, sa.schema.Constraint)):
@@ -113,7 +113,7 @@ class ANSIColumnGenerator(AlterTableVisitor, SchemaGenerator):
            # SA bounds FK constraints to table, add manually
            for fk in column.foreign_keys:
                self.add_foreignkey(fk.constraint)


            # add primary key constraint if needed
            if column.primary_key_name:
                cons = constraint.PrimaryKeyConstraint(column,
@@ -31,7 +31,7 @@ class FBColumnDropper(ansisql.ANSIColumnDropper):
            if column.name in [col.name for col in index.columns]:
                index.drop()
            # TODO: recreate index if it references more than this column


        for cons in column.table.constraints:
            if isinstance(cons,PrimaryKeyConstraint):
                # will be deleted only when the column its on
@@ -42,7 +42,7 @@ class SQLiteHelper(SQLiteCommon):
        self.execute()
        self.append('DROP TABLE migration_tmp')
        self.execute()


    def visit_column(self, delta):
        if isinstance(delta, DictMixin):
            column = delta.result_column
@@ -52,7 +52,7 @@ class SQLiteHelper(SQLiteCommon):
        table = self._to_table(column.table)
        self.recreate_table(table,column,delta)

class SQLiteColumnGenerator(SQLiteSchemaGenerator,
class SQLiteColumnGenerator(SQLiteSchemaGenerator,
                            ansisql.ANSIColumnGenerator,
                            # at the end so we get the normal
                            # visit_column by default
@@ -78,7 +78,7 @@ class SQLiteColumnDropper(SQLiteHelper, ansisql.ANSIColumnDropper):
    """SQLite ColumnDropper"""

    def _modify_table(self, table, column, delta):


        columns = ' ,'.join(map(self.preparer.format_column, table.columns))
        return 'INSERT INTO %(table_name)s SELECT ' + columns + \
            ' from migration_tmp'
@@ -31,7 +31,7 @@ __all__ = [

def create_column(column, table=None, *p, **kw):
    """Create a column, given the table.


    API to :meth:`ChangesetColumn.create`.
    """
    if table is not None:
@@ -41,7 +41,7 @@ def create_column(column, table=None, *p, **kw):

def drop_column(column, table=None, *p, **kw):
    """Drop a column, given the table.


    API to :meth:`ChangesetColumn.drop`.
    """
    if table is not None:
@@ -105,12 +105,12 @@ def alter_column(*p, **k):
    :param engine:
        The :class:`~sqlalchemy.engine.base.Engine` to use for table
        reflection and schema alterations.


    :returns: A :class:`ColumnDelta` instance representing the change.



    """


    if 'table' not in k and isinstance(p[0], sqlalchemy.Column):
        k['table'] = p[0].table
    if 'engine' not in k:
@@ -129,7 +129,7 @@ def alter_column(*p, **k):
    # that this crutch has to be left in until they can be sorted
    # out
    k['alter_metadata']=True


    delta = ColumnDelta(*p, **k)

    visitorcallable = get_engine_visitor(engine, 'schemachanger')
@@ -183,10 +183,10 @@ class ColumnDelta(DictMixin, sqlalchemy.schema.SchemaItem):
    :param table: Table at which current Column should be bound to.\
        If table name is given, reflection will be used.
    :type table: string or Table instance


    :param metadata: A :class:`MetaData` instance to store
        reflected table names


    :param engine: When reflecting tables, either engine or metadata must \
        be specified to acquire engine object.
    :type engine: :class:`Engine` instance
@@ -211,7 +211,7 @@ class ColumnDelta(DictMixin, sqlalchemy.schema.SchemaItem):
        # as a crutch until the tests that fail when 'alter_metadata'
        # behaviour always happens can be sorted out
        self.alter_metadata = kw.pop("alter_metadata", False)


        self.meta = kw.pop("metadata", None)
        self.engine = kw.pop("engine", None)

@@ -239,7 +239,7 @@ class ColumnDelta(DictMixin, sqlalchemy.schema.SchemaItem):
            self.alter_metadata,
            super(ColumnDelta, self).__repr__()
            )


    def __getitem__(self, key):
        if key not in self.keys():
            raise KeyError("No such diff key, available: %s" % self.diffs )
@@ -278,7 +278,7 @@ class ColumnDelta(DictMixin, sqlalchemy.schema.SchemaItem):
        """Compares two Column objects"""
        self.process_column(new_col)
        self.table = k.pop('table', None)
        # we cannot use bool() on table in SA06
        # we cannot use bool() on table in SA06
        if self.table is None:
            self.table = old_col.table
        if self.table is None:
@@ -482,7 +482,7 @@ class ChangesetColumn(object):

    def alter(self, *p, **k):
        """Makes a call to :func:`alter_column` for the column this
        method is called on.
        method is called on.
        """
        if 'table' not in k:
            k['table'] = self.table
@@ -560,12 +560,12 @@ populated with defaults

    def _col_name_in_constraint(self,cons,name):
        return False


    def remove_from_table(self, table, unset_table=True):
        # TODO: remove primary keys, constraints, etc
        if unset_table:
            self.table = None


        to_drop = set()
        for index in table.indexes:
            columns = []
@@ -579,7 +579,7 @@ populated with defaults
            else:
                to_drop.add(index)
        table.indexes = table.indexes - to_drop


        to_drop = set()
        for cons in table.constraints:
            # TODO: deal with other types of constraint
@@ -591,7 +591,7 @@ populated with defaults
            if self.name==col_name:
                to_drop.add(cons)
        table.constraints = table.constraints - to_drop


        if table.c.contains_column(self):
            if SQLA_07:
                table._columns.remove(self)
@@ -65,7 +65,7 @@ class TestAddDropColumn(fixture.DB):
            result = len(self.table.primary_key)
            self.assertEqual(result, num_of_expected_cols)

        # we have 1 columns and there is no data column
        # we have 1 columns and there is no data column
        assert_numcols(1)
        self.assertTrue(getattr(self.table.c, 'data', None) is None)
        if len(col_p) == 0:
@@ -147,7 +147,7 @@ class TestAddDropColumn(fixture.DB):
            # Not necessarily bound to table
            return self.table.drop_column(col.name)
        return self.run_(add_func, drop_func)


    @fixture.usedb()
    def test_byname(self):
        """Add/drop columns via functions; by table object and column name"""
@@ -302,7 +302,7 @@ class TestAddDropColumn(fixture.DB):
            if index.name=='ix_data':
                break
        self.assertEqual(expected,index.unique)


    @fixture.usedb()
    def test_index(self):
        col = Column('data', Integer)
@@ -311,7 +311,7 @@ class TestAddDropColumn(fixture.DB):
        self._check_index(False)

        col.drop()


    @fixture.usedb()
    def test_index_unique(self):
        # shows how to create a unique index
@@ -332,7 +332,7 @@ class TestAddDropColumn(fixture.DB):
        self._check_index(True)

        col.drop()


    @fixture.usedb()
    def test_server_defaults(self):
        """Can create columns with server_default values"""
@@ -382,7 +382,7 @@ class TestAddDropColumn(fixture.DB):
            sorted([i.name for i in self.table.indexes]),
            [u'ix_tmp_adddropcol_d1', u'ix_tmp_adddropcol_d2']
            )


        # delete one
        self.table.c.d2.drop()

@@ -392,7 +392,7 @@ class TestAddDropColumn(fixture.DB):
            sorted([i.name for i in self.table.indexes]),
            [u'ix_tmp_adddropcol_d1']
            )


    def _actual_foreign_keys(self):
        from sqlalchemy.schema import ForeignKeyConstraint
        result = []
@@ -406,12 +406,12 @@ class TestAddDropColumn(fixture.DB):
                result.append(col_names)
        result.sort()
        return result


    @fixture.usedb()
    def test_drop_with_foreign_keys(self):
        self.table.drop()
        self.meta.clear()


        # create FK's target
        reftable = Table('tmp_ref', self.meta,
            Column('id', Integer, primary_key=True),
@@ -419,7 +419,7 @@ class TestAddDropColumn(fixture.DB):
        if self.engine.has_table(reftable.name):
            reftable.drop()
        reftable.create()


        # add a table with two foreign key columns
        self.table = Table(
            self.table_name, self.meta,
@@ -432,13 +432,13 @@ class TestAddDropColumn(fixture.DB):
        # paranoid check
        self.assertEqual([['r1'],['r2']],
                         self._actual_foreign_keys())


        # delete one
        if self.engine.name == 'mysql':
            constraint.ForeignKeyConstraint([self.table.c.r2], [reftable.c.id],
                                            name='test_fk2').drop()
        self.table.c.r2.drop()


        # check remaining foreign key is there
        self.assertEqual([['r1']],
                         self._actual_foreign_keys())
@@ -447,7 +447,7 @@ class TestAddDropColumn(fixture.DB):
    def test_drop_with_complex_foreign_keys(self):
        from sqlalchemy.schema import ForeignKeyConstraint
        from sqlalchemy.schema import UniqueConstraint


        self.table.drop()
        self.meta.clear()

@@ -465,7 +465,7 @@ class TestAddDropColumn(fixture.DB):
        if self.engine.has_table(reftable.name):
            reftable.drop()
        reftable.create()


        # add a table with a complex foreign key constraint
        self.table = Table(
            self.table_name, self.meta,
@@ -481,21 +481,21 @@ class TestAddDropColumn(fixture.DB):
        # paranoid check
        self.assertEqual([['r1','r2']],
                         self._actual_foreign_keys())


        # delete one
        if self.engine.name == 'mysql':
            constraint.ForeignKeyConstraint([self.table.c.r1, self.table.c.r2],
                                            [reftable.c.id, reftable.c.jd],
                                            name='test_fk').drop()
        self.table.c.r2.drop()


        # check the constraint is gone, since part of it
        # is no longer there - if people hit this,
        # they may be confused, maybe we should raise an error
        # and insist that the constraint is deleted first, separately?
        self.assertEqual([],
                         self._actual_foreign_keys())


class TestRename(fixture.DB):
    """Tests for table and index rename methods"""
    level = fixture.DB.CONNECT
@@ -575,7 +575,7 @@ class TestRename(fixture.DB):
        # test by just the string
        rename_table(table_name1, table_name2, engine=self.engine)
        assert_table_name(table_name2, True) # object not updated


        # Index renames
        if self.url.startswith('sqlite') or self.url.startswith('mysql'):
            self.assertRaises(exceptions.NotSupportedError,
@@ -671,7 +671,7 @@ class TestColumnChange(fixture.DB):
        self.assert_('atad' not in self.table.c.keys())
        self.table.c.data # Should not raise exception
        self.assertEqual(num_rows(self.table.c.data,content), 1)


    @fixture.usedb()
    def test_type(self):
        # Test we can change a column's type
@@ -695,12 +695,12 @@ class TestColumnChange(fixture.DB):
    @fixture.usedb()
    def test_default(self):
        """Can change a column's server_default value (DefaultClauses only)
        Only DefaultClauses are changed here: others are managed by the
        Only DefaultClauses are changed here: others are managed by the
        application / by SA
        """
        self.assertEqual(self.table.c.data.server_default.arg, 'tluafed')

        # Just the new default
        # Just the new default
        default = 'my_default'
        self.table.c.data.alter(server_default=DefaultClause(default))
        self.refresh_table(self.table.name)
@@ -750,7 +750,7 @@ class TestColumnChange(fixture.DB):
        # py 2.4 compatability :-/
        cw = catch_warnings(record=True)
        w = cw.__enter__()


        warnings.simplefilter("always")
        self.table.c.data.alter(Column('data', String(100)))

@@ -763,7 +763,7 @@ class TestColumnChange(fixture.DB):
                str(w[-1].message))
        finally:
            cw.__exit__()


    @fixture.usedb()
    def test_alter_returns_delta(self):
        """Test if alter constructs return delta"""
@@ -791,7 +791,7 @@ class TestColumnChange(fixture.DB):
        if self.engine.name == 'firebird':
            del kw['nullable']
        self.table.c.data.alter(**kw)


        # test altered objects
        self.assertEqual(self.table.c.data.server_default.arg, 'foobar')
        if not self.engine.name == 'firebird':
@@ -13,7 +13,7 @@ from migrate.tests import fixture

class CommonTestConstraint(fixture.DB):
    """helper functions to test constraints.


    we just create a fresh new table and make sure everything is
    as required.
    """
@@ -257,7 +257,7 @@ class TestAutoname(CommonTestConstraint):
        cons = CheckConstraint('id > 3', columns=[self.table.c.id])
        cons.create()
        self.refresh_table()


        if not self.engine.name == 'mysql':
            self.table.insert(values={'id': 4, 'fkey': 1}).execute()
            try:
@@ -280,7 +280,7 @@ class TestAutoname(CommonTestConstraint):
        cons = UniqueConstraint(self.table.c.fkey)
        cons.create()
        self.refresh_table()


        self.table.insert(values={'fkey': 4, 'id': 1}).execute()
        try:
            self.table.insert(values={'fkey': 4, 'id': 2}).execute()
@@ -65,7 +65,7 @@ def usedb(supported=None, not_supported=None):
    @param supported: run tests for ONLY these databases
    @param not_supported: run tests for all databases EXCEPT these

    If both supported and not_supported are empty, all dbs are assumed
    If both supported and not_supported are empty, all dbs are assumed
    to be supported
    """
    if supported is not None and not_supported is not None:
@@ -126,7 +126,7 @@ class DB(Base):
    level = TXN

    def _engineInfo(self, url=None):
        if url is None:
        if url is None:
            url = self.url
        return url

@@ -151,7 +151,7 @@ class DB(Base):
        if self.level < self.CONNECT:
            return
        #self.session = create_session(bind=self.engine)
        if self.level < self.TXN:
        if self.level < self.TXN:
            return
        #self.txn = self.session.begin()

@@ -30,10 +30,10 @@ class Pathed(base.Base):
    @classmethod
    def _tmp(cls, prefix='', suffix=''):
        """Generate a temporary file name that doesn't exist
        All filenames are generated inside a temporary directory created by
        tempfile.mkdtemp(); only the creating user has access to this directory.
        It should be secure to return a nonexistant temp filename in this
        directory, unless the user is messing with their own files.
        All filenames are generated inside a temporary directory created by
        tempfile.mkdtemp(); only the creating user has access to this directory.
        It should be secure to return a nonexistant temp filename in this
        directory, unless the user is messing with their own files.
        """
        file, ret = tempfile.mkstemp(suffix,prefix,cls._tmpdir)
        os.close(file)
@@ -43,7 +43,7 @@ class Pathed(base.Base):
    @classmethod
    def tmp(cls, *p, **k):
        return cls._tmp(*p, **k)


    @classmethod
    def tmp_py(cls, *p, **k):
        return cls._tmp(suffix='.py', *p, **k)
@@ -63,7 +63,7 @@ class Pathed(base.Base):
    @classmethod
    def purge(cls, path):
        """Removes this path if it exists, in preparation for tests
        Careful - all tests should take place in /tmp.
        Careful - all tests should take place in /tmp.
        We don't want to accidentally wipe stuff out...
        """
        if os.path.exists(path):
@@ -17,7 +17,7 @@ class TestConfigParser(fixture.Base):
        parser.set('section','option','value')
        self.assertEqual(parser.get('section', 'option'), 'value')
        self.assertEqual(parser.to_dict()['section']['option'], 'value')


    def test_table_config(self):
        """We should be able to specify the table to be used with a repository"""
        default_text = Repository.prepare_config(Template().get_repository(),
@@ -19,7 +19,7 @@ class TestKeydInstance(fixture.Base):
                return str(key)
            def __init__(self,value):
                self.value=value


        a10 = Uniq1('a')

        # Different key: different instance
@@ -12,7 +12,7 @@ class TestPathed(fixture.Base):
        self.assert_(result==Pathed._parent_path(filepath))
        self.assert_(result==Pathed._parent_path(dirpath))
        self.assert_(result==Pathed._parent_path(sdirpath))


    def test_new(self):
        """Pathed(path) shouldn't create duplicate objects of the same path"""
        path='/fgsfds'
@@ -34,7 +34,7 @@ class TestPathed(fixture.Base):
            parent=None
            children=0
            def _init_child(self,child,path):
                """Keep a tally of children.
                """Keep a tally of children.
                (A real class might do something more interesting here)
                """
                self.__class__.children+=1
@@ -32,7 +32,7 @@ class TestRepository(fixture.Pathed):
        # Can't create it again: it already exists
        self.assertRaises(exceptions.PathFoundError, Repository.create, path, name)
        return path


    def test_load(self):
        """We should be able to load information about an existing repository"""
        # Create a repository to load
@@ -45,7 +45,7 @@ class TestRepository(fixture.Pathed):

        # version_table's default isn't none
        self.assertNotEquals(repos.config.get('db_settings', 'version_table'), 'None')


    def test_load_notfound(self):
        """Nonexistant repositories shouldn't be loaded"""
        path = self.tmp_repos()
@@ -54,7 +54,7 @@ class TestRepository(fixture.Pathed):

    def test_load_invalid(self):
        """Invalid repos shouldn't be loaded"""
        # Here, invalid=empty directory. There may be other conditions too,
        # Here, invalid=empty directory. There may be other conditions too,
        # but we shouldn't need to test all of them
        path = self.tmp_repos()
        os.mkdir(path)
@@ -136,7 +136,7 @@ class TestVersionedRepository(fixture.Pathed):
        repos.create_script('')
        self.assert_(repos.version(repos.latest) is repos.version())
        self.assert_(repos.version() is not None)


    def test_changeset(self):
        """Repositories can create changesets properly"""
        # Create a nonzero-version repository of empty scripts
@@ -201,17 +201,17 @@ class TestVersionedRepository(fixture.Pathed):
        self.assertEqual(cs.end, 0)
        check_changeset((10, 5), 5)
        check_changeset((5, 0), 5)


    def test_many_versions(self):
        """Test what happens when lots of versions are created"""
        repos = Repository(self.path_repos)
        for i in range(1001):
        for i in range(1001):
            repos.create_script('')

        # since we normally create 3 digit ones, let's see if we blow up
        self.assert_(os.path.exists('%s/versions/1000.py' % self.path_repos))
        self.assert_(os.path.exists('%s/versions/1001.py' % self.path_repos))



# TODO: test manage file
# TODO: test changeset
@@ -53,7 +53,7 @@ class TestControlledSchema(fixture.Pathed, fixture.DB):
        # Trying to create another DB this way fails: table exists
        self.assertRaises(exceptions.DatabaseAlreadyControlledError,
                          ControlledSchema.create, self.engine, self.repos)


        # We can load a controlled DB this way, too
        dbcontrol0 = ControlledSchema(self.engine, self.repos)
        self.assertEqual(dbcontrol, dbcontrol0)
@@ -67,7 +67,7 @@ class TestControlledSchema(fixture.Pathed, fixture.DB):
        dbcontrol0 = ControlledSchema(engine, self.repos.path)
        self.assertEqual(dbcontrol, dbcontrol0)

        # Clean up:
        # Clean up:
        dbcontrol.drop()

        # Attempting to drop vc from a db without it should fail
@@ -84,7 +84,7 @@ class TestControlledSchema(fixture.Pathed, fixture.DB):
        version = 0
        dbcontrol = ControlledSchema.create(self.engine, self.repos, version)
        self.assertEqual(dbcontrol.version, version)


        # Correct when we load it, too
        dbcontrol = ControlledSchema(self.engine, self.repos)
        self.assertEqual(dbcontrol.version, version)
@@ -125,7 +125,7 @@ class TestControlledSchema(fixture.Pathed, fixture.DB):
    def test_changeset(self):
        """Create changeset from controlled schema"""
        dbschema = ControlledSchema.create(self.engine, self.repos)


        # empty schema doesn't have changesets
        cs = dbschema.changeset()
        self.assertEqual(cs, {})
@@ -143,7 +143,7 @@ class TestControlledSchema(fixture.Pathed, fixture.DB):
    @fixture.usedb()
    def test_upgrade_runchange(self):
        dbschema = ControlledSchema.create(self.engine, self.repos)


        for i in range(10):
            self.repos.create_script('')

@@ -184,7 +184,7 @@ class TestControlledSchema(fixture.Pathed, fixture.DB):
        dbschema = ControlledSchema.create(self.engine, self.repos)

        meta = self.construct_model()


        dbschema.update_db_from_model(meta)

        # TODO: test for table version in db
@@ -18,7 +18,7 @@ class SchemaDiffBase(fixture.DB):
            )
        if kw.get('create',True):
            self.table.create()


    def _assert_diff(self,col_A,col_B):
        self._make_table(col_A)
        self.meta.clear()
@@ -43,16 +43,16 @@ class SchemaDiffBase(fixture.DB):
            self.name2,
            cd.col_B
            ),str(diff))


class Test_getDiffOfModelAgainstDatabase(SchemaDiffBase):
    name1 = 'model'
    name2 = 'database'


    def _run_diff(self,**kw):
        return schemadiff.getDiffOfModelAgainstDatabase(
            self.meta, self.engine, **kw
            )


    @fixture.usedb()
    def test_table_missing_in_db(self):
        self._make_table(create=False)
@@ -187,7 +187,7 @@ class Test_getDiffOfModelAgainstDatabase(SchemaDiffBase):
            Column('data', String(10)),
            Column('data', String(20)),
            )


    @fixture.usedb()
    def test_integer_identical(self):
        self._make_table(
@@ -196,7 +196,7 @@ class Test_getDiffOfModelAgainstDatabase(SchemaDiffBase):
        diff = self._run_diff()
        self.assertEqual('No schema diffs',str(diff))
        self.assertFalse(diff)


    @fixture.usedb()
    def test_string_identical(self):
        self._make_table(
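The schema-diff tests above compare a model's tables and columns against a live database. The bookkeeping at the heart of such a diff can be sketched without a database; this is a toy simplification, not `migrate.versioning.schemadiff`'s real API:

```python
def diff_columns(model_cols, db_cols):
    """Compare two {column_name: type_string} mappings.

    Returns (missing_in_db, missing_in_model, changed) -- the three kinds
    of difference a schema diff reports: columns only in the model, columns
    only in the database, and columns present in both but with different types.
    """
    missing_in_db = sorted(set(model_cols) - set(db_cols))
    missing_in_model = sorted(set(db_cols) - set(model_cols))
    changed = sorted(
        name for name in set(model_cols) & set(db_cols)
        if model_cols[name] != db_cols[name]
    )
    return missing_in_db, missing_in_model, changed
```

A result of three empty lists corresponds to the "No schema diffs" case asserted in `test_integer_identical` above.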
@@ -223,7 +223,7 @@ User = Table('User', meta,
    f = open(path, 'w')
    f.write(contents)
    f.close()



class TestSqlScript(fixture.Pathed, fixture.DB):

@ -52,7 +52,7 @@ class TestShellCommands(Shell):
try:
original = sys.argv
sys.argv=['X','--help']

run_module('migrate.versioning.shell', run_name='__main__')

finally:
@ -73,7 +73,7 @@ class TestShellCommands(Shell):
sys.stderr = original
actual = actual.getvalue()
self.assertTrue(expected in actual,'%r not in:\n"""\n%s\n"""'%(expected,actual))

def test_main(self):
"""Test main() function"""
repos = self.tmp_repos()
@ -106,7 +106,7 @@ class TestShellCommands(Shell):
result = self.env.run('migrate create %s repository_name' % repos,
expect_error=True)
self.assertEqual(result.returncode, 2)

def test_script(self):
"""We can create a migration script via the command line"""
repos = self.tmp_repos()
@ -202,7 +202,7 @@ class TestShellDatabase(Shell, DB):
# we need to connect to the DB to see if things worked

level = DB.CONNECT

@usedb()
def test_version_control(self):
"""Ensure we can set version control on a database"""
@ -298,7 +298,7 @@ class TestShellDatabase(Shell, DB):

result = self.env.run('migrate upgrade %s %s' % (self.url, repos_path))
self.assertEqual(self.run_db_version(self.url, repos_path), 1)

# Downgrade must have a valid version specified
result = self.env.run('migrate downgrade %s %s' % (self.url, repos_path), expect_error=True)
self.assertEqual(result.returncode, 2)
@ -307,10 +307,10 @@ class TestShellDatabase(Shell, DB):
result = self.env.run('migrate downgrade %s %s 2' % (self.url, repos_path), expect_error=True)
self.assertEqual(result.returncode, 2)
self.assertEqual(self.run_db_version(self.url, repos_path), 1)

result = self.env.run('migrate downgrade %s %s 0' % (self.url, repos_path))
self.assertEqual(self.run_db_version(self.url, repos_path), 0)

result = self.env.run('migrate downgrade %s %s 1' % (self.url, repos_path), expect_error=True)
self.assertEqual(result.returncode, 2)
self.assertEqual(self.run_db_version(self.url, repos_path), 0)
@ -348,7 +348,7 @@ class TestShellDatabase(Shell, DB):
self.assertRaises(Exception, self.engine.text('select * from t_table').execute)

# The tests below are written with some postgres syntax, but the stuff
# being tested (.sql files) ought to work with any db.
# being tested (.sql files) ought to work with any db.
@usedb(supported='postgres')
def test_sqlfile(self):
upgrade_script = """
@ -362,7 +362,7 @@ class TestShellDatabase(Shell, DB):
"""
self.meta.drop_all()
self._run_test_sqlfile(upgrade_script, downgrade_script)

@usedb(supported='postgres')
def test_sqlfile_comment(self):
upgrade_script = """
@ -400,11 +400,11 @@ class TestShellDatabase(Shell, DB):
script_text='''
from sqlalchemy import *
from migrate import *

def upgrade():
print 'fgsfds'
raise Exception()

def downgrade():
print 'sdfsgf'
raise Exception()
@ -425,7 +425,7 @@ class TestShellDatabase(Shell, DB):
from migrate import *

from migrate.changeset import schema

meta = MetaData(migrate_engine)
account = Table('account', meta,
Column('id', Integer, primary_key=True),
@ -436,7 +436,7 @@ class TestShellDatabase(Shell, DB):
# Upgrade operations go here. Don't create your own engine; use the engine
# named 'migrate_engine' imported from migrate.
meta.create_all()

def downgrade():
# Operations to reverse the above upgrade go here.
meta.drop_all()
@ -447,7 +447,7 @@ class TestShellDatabase(Shell, DB):
result = self.env.run('migrate test %s %s' % (self.url, repos_path))
self.assertEqual(self.run_version(repos_path), 1)
self.assertEqual(self.run_db_version(self.url, repos_path), 0)

@usedb()
def test_rundiffs_in_shell(self):
# This is a variant of the test_schemadiff tests but run through the shell level.
@ -542,7 +542,7 @@ class TestShellDatabase(Shell, DB):
## Operations to reverse the above upgrade go here.
#meta.bind = migrate_engine
#tmp_account_rundiffs.drop()''')

## Save the upgrade script.
#result = self.env.run('migrate script Desc %s' % repos_path)
#upgrade_script_path = '%s/versions/001_Desc.py' % repos_path

@ -45,7 +45,7 @@ class TestUtil(fixture.Pathed):
# py 2.4 compatability :-/
cw = catch_warnings(record=True)
w = cw.__enter__()

warnings.simplefilter("always")
engine = construct_engine(url, echo='True')
self.assertTrue(engine.echo)
@ -69,7 +69,7 @@ class TestUtil(fixture.Pathed):
api.create(repo, 'temp')
api.script('First Version', repo)
engine = construct_engine('sqlite:///:memory:')

api.version_control(engine, repo)
api.upgrade(engine, repo)

@ -103,7 +103,7 @@ class TestUtil(fixture.Pathed):
# py 2.4 compatability :-/
cw = catch_warnings(record=True)
w = cw.__enter__()

warnings.simplefilter("always")

# deprecated spelling
@ -117,7 +117,7 @@ class TestUtil(fixture.Pathed):
'model should be in form of module.model:User '
'and not module.model.User',
str(w[-1].message))

finally:
cw.__exit__()

@ -68,7 +68,7 @@ class TestVerNum(fixture.Base):
self.assert_(VerNum(1) >= 1)
self.assert_(VerNum(2) >= 1)
self.assertFalse(VerNum(1) >= 2)

class TestVersion(fixture.Pathed):

@ -128,11 +128,11 @@ class TestVersion(fixture.Pathed):

def test_selection(self):
"""Verify right sql script is selected"""

# Create empty directory.
path = self.tmp_repos()
os.mkdir(path)

# Create files -- files must be present or you'll get an exception later.
python_file = '001_initial_.py'
sqlite_upgrade_file = '001_sqlite_upgrade.sql'
@ -143,13 +143,13 @@ class TestVersion(fixture.Pathed):

ver = Version(1, path, [sqlite_upgrade_file])
self.assertEqual(os.path.basename(ver.script('sqlite', 'upgrade').path), sqlite_upgrade_file)

ver = Version(1, path, [default_upgrade_file])
self.assertEqual(os.path.basename(ver.script('default', 'upgrade').path), default_upgrade_file)

ver = Version(1, path, [sqlite_upgrade_file, default_upgrade_file])
self.assertEqual(os.path.basename(ver.script('sqlite', 'upgrade').path), sqlite_upgrade_file)

ver = Version(1, path, [sqlite_upgrade_file, default_upgrade_file, python_file])
self.assertEqual(os.path.basename(ver.script('postgres', 'upgrade').path), default_upgrade_file)

@ -153,7 +153,7 @@ class Repository(pathed.Pathed):

def create_script(self, description, **k):
"""API to :meth:`migrate.versioning.version.Collection.create_new_python_version`"""

k['use_timestamp_numbering'] = self.use_timestamp_numbering
self.versions.create_new_python_version(description, **k)

@ -229,7 +229,7 @@ class Repository(pathed.Pathed):
@classmethod
def create_manage_file(cls, file_, **opts):
"""Create a project management script (manage.py)

:param file_: Destination file to be written
:param opts: Options that are passed to :func:`migrate.versioning.shell.main`
"""

@ -28,7 +28,7 @@ class BaseScript(pathed.Pathed):
self.verify(path)
super(BaseScript, self).__init__(path)
log.debug('Script %s loaded successfully' % path)

@classmethod
def verify(cls, path):
"""Ensure this is a valid script

@ -15,7 +15,7 @@ class SqlScript(base.BaseScript):
@classmethod
def create(cls, path, **opts):
"""Create an empty migration script at specified path

:returns: :class:`SqlScript instance <migrate.versioning.script.sql.SqlScript>`"""
cls.require_notfound(path)

@ -33,7 +33,7 @@ class SQLScriptCollection(Collection):

class Template(pathed.Pathed):
"""Finds the paths/packages of various Migrate templates.

:param path: Templates are loaded from migrate package
if `path` is not provided.
"""
@ -65,7 +65,7 @@ class Template(pathed.Pathed):

def _get_item(self, collection, theme=None):
"""Locates and returns collection.

:param collection: name of collection to locate
:param type_: type of subfolder in collection (defaults to "_default")
:returns: (package, source)
@ -79,7 +79,7 @@ class Template(pathed.Pathed):
def get_repository(self, *a, **kw):
"""Calls self._get_item('repository', *a, **kw)"""
return self._get_item('repository', *a, **kw)

def get_script(self, *a, **kw):
"""Calls self._get_item('script', *a, **kw)"""
return self._get_item('script', *a, **kw)

@ -5,16 +5,16 @@ repository_id={{ locals().pop('repository_id') }}

# The name of the database table used to track the schema version.
# This name shouldn't already be used by your project.
# If this is changed once a database is under version control, you'll need to
# change the table name in each database too.
# If this is changed once a database is under version control, you'll need to
# change the table name in each database too.
version_table={{ locals().pop('version_table') }}

# When committing a change script, Migrate will attempt to generate the
# When committing a change script, Migrate will attempt to generate the
# sql for all supported databases; normally, if one of them fails - probably
# because you don't have that database installed - it is ignored and the
# commit continues, perhaps ending successfully.
# Databases in this list MUST compile successfully during a commit, or the
# entire commit will fail. List the databases your application will actually
# because you don't have that database installed - it is ignored and the
# commit continues, perhaps ending successfully.
# Databases in this list MUST compile successfully during a commit, or the
# entire commit will fail. List the databases your application will actually
# be using to ensure your updates to that database work properly.
# This must be a list; example: ['postgres','sqlite']
required_dbs={{ locals().pop('required_dbs') }}

@ -5,16 +5,16 @@ repository_id={{ locals().pop('repository_id') }}

# The name of the database table used to track the schema version.
# This name shouldn't already be used by your project.
# If this is changed once a database is under version control, you'll need to
# change the table name in each database too.
# If this is changed once a database is under version control, you'll need to
# change the table name in each database too.
version_table={{ locals().pop('version_table') }}

# When committing a change script, Migrate will attempt to generate the
# When committing a change script, Migrate will attempt to generate the
# sql for all supported databases; normally, if one of them fails - probably
# because you don't have that database installed - it is ignored and the
# commit continues, perhaps ending successfully.
# Databases in this list MUST compile successfully during a commit, or the
# entire commit will fail. List the databases your application will actually
# because you don't have that database installed - it is ignored and the
# commit continues, perhaps ending successfully.
# Databases in this list MUST compile successfully during a commit, or the
# entire commit will fail. List the databases your application will actually
# be using to ensure your updates to that database work properly.
# This must be a list; example: ['postgres','sqlite']
required_dbs={{ locals().pop('required_dbs') }}

@ -22,7 +22,7 @@ def load_model(dotted_name):
"""Import module and use module-level variable".

:param dotted_name: path to model in form of string: ``some.python.module:Class``

.. versionchanged:: 0.5.4

"""
@ -54,9 +54,9 @@ def asbool(obj):

def guess_obj_type(obj):
"""Do everything to guess object type from string

Tries to convert to `int`, `bool` and finally returns if not succeded.

.. versionadded: 0.5.4
"""

@ -81,7 +81,7 @@ def guess_obj_type(obj):
@decorator
def catch_known_errors(f, *a, **kw):
"""Decorator that catches known api errors

.. versionadded: 0.5.4
"""

@ -3,7 +3,7 @@ import sys

def import_path(fullpath):
""" Import a file with full path specification. Allows one to
import from anywhere, something __import__ does not do.
import from anywhere, something __import__ does not do.
"""
# http://zephyrfalcon.org/weblog/arch_d7_2002_08_31.html
path, filename = os.path.split(fullpath)

@ -4,7 +4,7 @@
class KeyedInstance(object):
"""A class whose instances have a unique identifier of some sort
No two instances with the same unique ID should exist - if we try to create
a second instance, the first should be returned.
a second instance, the first should be returned.
"""

_instances = dict()
@ -24,7 +24,7 @@ class KeyedInstance(object):
@classmethod
def _key(cls, *p, **k):
"""Given a unique identifier, return a dictionary key
This should be overridden by child classes, to specify which parameters
This should be overridden by child classes, to specify which parameters
should determine an object's uniqueness
"""
raise NotImplementedError()

@ -75,7 +75,7 @@ class Collection(pathed.Pathed):
and store them in self.versions
"""
super(Collection, self).__init__(path)

# Create temporary list of files, allowing skipped version numbers.
files = os.listdir(path)
if '1' in files:
@ -126,7 +126,7 @@ class Collection(pathed.Pathed):

script.PythonScript.create(filepath, **k)
self.versions[ver] = Version(ver, self.path, [filename])

def create_new_sql_version(self, database, description, **k):
"""Create SQL files for new version"""
ver = self._next_ver_num(k.pop('use_timestamp_numbering', False))
@ -146,7 +146,7 @@ class Collection(pathed.Pathed):
filepath = self._version_path(filename)
script.SqlScript.create(filepath, **k)
self.versions[ver].add_script(filepath)

def version(self, vernum=None):
"""Returns latest Version if vernum is not given.
Otherwise, returns wanted version"""
@ -165,7 +165,7 @@ class Collection(pathed.Pathed):

class Version(object):
"""A single version in a collection
:param vernum: Version Number
:param vernum: Version Number
:param path: Path to script files
:param filelist: List of scripts
:type vernum: int, VerNum
@ -182,7 +182,7 @@ class Version(object):

for script in filelist:
self.add_script(os.path.join(path, script))

def script(self, database=None, operation=None):
"""Returns SQL or Python Script"""
for db in (database, 'default'):
@ -211,7 +211,7 @@ class Version(object):
def _add_script_sql(self, path):
basename = os.path.basename(path)
match = self.SQL_FILENAME.match(basename)

if match:
basename = basename.replace('.sql', '')
parts = basename.split('_')

@ -1,10 +1,10 @@
# test_db.cfg
#
# This file contains a list of connection strings which will be used by
# database tests. Tests will be executed once for each string in this file.
#
# This file contains a list of connection strings which will be used by
# database tests. Tests will be executed once for each string in this file.
# You should be sure that the database used for the test doesn't contain any
# important data. See README for more information.
#
#
# The string '__tmp__' is substituted for a temporary file in each connection
# string. This is useful for sqlite tests.
sqlite:///__tmp__