And some other stuff that the converter does not need.
And add the image and video media_data tables.
And start to rewrite the convert tool.
1. No need to drop media_data['exif']; we only have and
want media_data['exif_all'].
2. Use media['_id'] instead of media._id (better not to use
dot-notation on mongo objects in such a low-level tool).
When creating a new media_data row, the new row needs to
know which MediaEntry it is associated with. I have no idea
why this worked before at all. Maybe some implicit trick
by sqlalchemy?
These are the columns that seem to make the most sense to
have an index on.
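For example (illustrative only; this message doesn't list
the exact columns):

    # Hypothetical candidates: slug lookups and per-user
    # listings are frequent queries.
    slug = Column(Unicode, index=True)
    uploader = Column(Integer, ForeignKey(User.id),
                      nullable=False, index=True)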
Load all models for the media_types. This was previously
blocked by a celery problem, but that is now fixed.
So that all models are ready when connecting to the db, and
so that our "db" object has all models listed on it, create
a function to load all models from the media_types, etc.
Call it in setup_database().
Problem: this gives celery warnings, because celery is
imported before being set up properly. No idea how to fix
this right now, so media-type loading is excluded from
load_models for now.
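A rough sketch of such a loader (function and config-key
names are assumptions):

    def load_models(app_config):
        # The core models register themselves on the shared
        # Base when imported.
        import mediagoblin.db.sql.models

        # Hypothetical media-type part: import each media_type's
        # models module so its tables register too. This is the
        # part that currently triggers the celery warnings and
        # is therefore excluded for now.
        for media_type in app_config.get('media_types', []):
            __import__(media_type + '.models')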
Import the "Base" class for models from db.sql.base instead
of db.sql.models.
As the queries are quite verbose, disable them for now.
Re-enabling them should be done in the central logging
config, which is another story for celery and bin/gmg.
The mongosql tool really dumps directly into the sql
database and tries not to use too much logic that might
change later.
This means it needs to create the migration records on
its own!
So add a bunch of records with version=0.
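Something along these lines (a sketch; the MigrationData
model and the set of migration names are assumptions):

    # Mark every known migration set as being at version 0 so
    # dbupdate later starts from a defined state.
    for name in ('__main__',
                 'mediagoblin.media_types.image',
                 'mediagoblin.media_types.video'):
        session.add(MigrationData(name=name, version=0))
    session.commit()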
Searching media by slug is easy on mongo, but doing the
joins in sqlalchemy is not as nice. So create a function
for doing it.
And create the same function for mongo, so that both
backends work.
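The SQL side could be sketched roughly as (model and column
names assumed):

    def media_entry_by_slug(session, username, slug):
        # Join against User so we can filter on the username,
        # something mongo handles with a single two-field query.
        return (session.query(MediaEntry)
                .join(User, MediaEntry.uploader == User.id)
                .filter(User.username == username,
                        MediaEntry.slug == slug)
                .first())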
If there is no media_data row for the current media (for
whatever reason; there might be good ones), let
MediaEntry.media_data not raise an exception but just
return None.
The exif display part now handles this by checking whether
.media_data.exif is defined (None has no attribute exif, so
it's undefined; all fine).
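In model terms, roughly (names assumed):

    @property
    def media_data(self):
        # .first() returns the related row, or None when no
        # media_data row exists, instead of raising.
        return Session.query(ImageData) \
            .filter_by(media_entry=self.id).first()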
Add the mongo_to_sql convert part for converting the
media_data for images. This currently drops the exif data
and thus only converts the gps data.
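A sketch of that step (field names assumed; the mapping
between mongo ObjectIds and the new integer ids is glossed
over here):

    def convert_image_media_data(mongo_db, session):
        for media in mongo_db.media_entries.find(
                {'media_type': 'mediagoblin.media_types.image'}):
            old = media.get('media_data', {})
            # exif is dropped at this stage; only gps survives.
            session.add(ImageData(
                media_entry=media['_id'],
                gps_latitude=old.get('gps_latitude'),
                gps_longitude=old.get('gps_longitude')))
        session.commit()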
This creates fresh VideoData rows for all the videos in the
mongodb.
Docs:
http://docs.sqlalchemy.org/en/latest/core/engines.html#configuring-logging
So for an application utilizing python logging for real
(and MediaGoblin should), the rule is:
- don't use echo=True,
- but reconfigure the appropriate loggers' level.
So replace the echo=True with a line reconfiguring the
appropriate logger to achieve the same effect.
This still dumps large amounts of SQL queries into the main
log, but at least they're not duplicated any more.
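Concretely, per those docs:

    import logging

    # Instead of create_engine(url, echo=True), raise the level
    # of sqlalchemy's engine logger; the queries then flow
    # through the application's logging setup exactly once.
    logging.getLogger('sqlalchemy.engine').setLevel(logging.INFO)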
The name part of a MediaFile only uses a very limited
number of values, currently things like "original" or
"thumb".
So instead of storing the string on each entry, just store
a short integer referencing the FileKeynames table and keep
the appropriate string there.
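Schematically (table and column names are assumptions):

    class FileKeynames(Base):
        # Small lookup table holding the few distinct keynames,
        # e.g. "original" or "thumb".
        __tablename__ = 'core__file_keynames'
        id = Column(Integer, primary_key=True)
        name = Column(Unicode, unique=True)

    class MediaFile(Base):
        __tablename__ = 'core__mediafiles'
        media_entry = Column(Integer,
                             ForeignKey(MediaEntry.id),
                             primary_key=True)
        # Short integer reference instead of repeating the
        # string on every row.
        name_id = Column(SmallInteger,
                         ForeignKey(FileKeynames.id),
                         primary_key=True)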
In two cases (generating a new slug and editing the slug)
it is nice to know in advance (before the db gets angry)
whether the slug is used/free. So create a db utility
function to check for this on mongo and sql:
check_media_slug_used()
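The SQL variant might be sketched as (the signature is an
assumption):

    def check_media_slug_used(db, uploader_id, slug,
                              ignore_m_id=None):
        # Is the slug already taken by another media entry of
        # the same uploader?
        query = db.query(MediaEntry).filter_by(
            uploader=uploader_id, slug=slug)
        if ignore_m_id is not None:
            # When editing, the entry may keep its own slug.
            query = query.filter(MediaEntry.id != ignore_m_id)
        return query.first() is not None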
The current SQL layout/sqlalchemy structure can't detect
whether a slug isn't needed any more and delete it. So
provide a tool function to clean up unused slugs.
It's currently not hooked up to any gmg function!
On sqlalchemy most updates are atomic enough for most use
cases. Anyway, here is an atomic_update that is compatible
with the mongo version.
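For example (a sketch; the real helper's signature may
differ):

    def atomic_update(session, model, query_dict, update_values):
        # A single UPDATE ... WHERE statement: the
        # read-modify-write happens inside the database,
        # mirroring mongo's atomic update semantics.
        session.query(model).filter_by(**query_dict) \
            .update(update_values, synchronize_session=False)
        session.commit()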
Needs to be implemented.
So that the SQL backend is more usable, let the MediaEntry
have a faked media_data.
It's extremely fake: the returned dict is always a new one,
so any stored info is even lost!
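In effect something like (sketch):

    @property
    def media_data(self):
        # A fresh empty dict on every access, so anything
        # written to it silently vanishes.
        return {}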
1. Make the foreign key the primary key.
2. Add width/height, as those are currently in use for the
media_data (see the sketch below).
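So the image media_data table ends up shaped roughly like
this (names assumed):

    class ImageData(Base):
        __tablename__ = 'image__mediadata'
        # The foreign key to the media entry doubles as the
        # primary key: exactly one media_data row per entry.
        media_entry = Column(Integer,
                             ForeignKey(MediaEntry.id),
                             primary_key=True)
        width = Column(Integer)
        height = Column(Integer)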
It's good practice to clean up the SQL session after each
request so that the next request gets a fresh one.
It's an application decision whether one wants a
just-in-case ROLLBACK or COMMIT. There are two ideas behind
it, really. I have decided on ROLLBACK. The idea is: "if
you forget to commit your changes yourself, there's
something broken. Maybe you got an exception?"
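As a sketch, using a scoped session (the import path is an
assumption):

    from mediagoblin.db.sql.open import Session  # assumed location

    def cleanup_sql_session(request):
        # Per-request teardown: discard anything left
        # uncommitted, then hand the connection back so the
        # next request starts with a fresh session.
        Session.rollback()
        Session.remove()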
Attachments working with the sql backend:
- SQL schema for attachment files, ordering attachments by
their name, not by submission order (as before).
- Dot-notation for attachments, where missing.
- Convert existing attachments over from mongo -> sql.
Some parts of the code like to use .setdefault(). So make
them happy and provide a minimal version: it ignores the
given default and expects the attribute to already exist.
Other parts use .delete() to delete a complete object. This
version expects the object to live in a session and also
does the final commit.
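Roughly, on the common model base class (names assumed):

    from sqlalchemy.orm import object_session

    class GMGTableBase(object):
        def setdefault(self, key, defaultvalue):
            # Minimal version: the attribute must already
            # exist; the given default is ignored.
            return getattr(self, key)

        def delete(self):
            # The object is expected to live in a session;
            # delete it there and do the final commit as well.
            sess = object_session(self)
            assert sess is not None
            sess.delete(self)
            sess.commit()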
Finally, to make testing of sql a bit easier, create a
bin/gmg command to do the conversion from mongo to sql.
It's currently named "convert_mongo_to_sql".
The most important option is the gmg -cf option, which
gives a config file with the appropriate sql_engine
definition.
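An invocation would then presumably look like this (the
config file name is illustrative):

    ./bin/gmg -cf mediagoblin_local.ini convert_mongo_to_sql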
Order the conversion by the "created" attribute. That way
the sql ids are mostly in the order they would have been in
if sql had been used earlier.
Makes things nicer to look at in a db dump.
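In pymongo terms, roughly:

    # Iterate oldest-first so the autoincrementing sql ids
    # come out in roughly historical order.
    for media in mongo_db.media_entries.find().sort('created'):
        convert_media_entry(media)  # hypothetical per-entry helper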
- Various fixes to dbupdate itself.
- Switch db/sql/migrations.py to use a dict instead of a
list (see the sketch below).
- Register the function.
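A dict keyed by version makes it easy to look up which
migrations still need to run; the registration could be
sketched as (names assumed):

    MIGRATIONS = {}

    def RegisterMigration(version, migration_registry=MIGRATIONS):
        # File each migration function under its version number.
        def decorator(migration):
            assert version not in migration_registry
            migration_registry[version] = migration
            return migration
        return decorator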
Mostly this means: having a config_spec.ini that has a
local (relative to mediagoblin.ini) sqlite db with the name
"mediagoblin.db" (sketched below).
Also:
- Add it to .gitignore.
- Add a notice to mediagoblin.ini about the db.
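The spec entry would presumably be something along these
lines (configobj-style; exact syntax assumed):

    [mediagoblin]
    # sqlite file living next to mediagoblin.ini
    sql_engine = string(default="sqlite:///%(here)s/mediagoblin.db")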
Let the init code also handle creating a fresh, clean
instance without any attrs set.
fail_metadata used to be a dict in mongo, so a json-encoded
field should be okay too.
We could use a pickled field instead, which would be more
flexible.
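A json-encoded column can be built with a TypeDecorator,
following the standard sqlalchemy pattern (class name
illustrative):

    import json
    from sqlalchemy.types import TypeDecorator, VARCHAR

    class JSONEncoded(TypeDecorator):
        # Store a dict as its json text; decode on the way out.
        impl = VARCHAR

        def process_bind_param(self, value, dialect):
            return json.dumps(value) if value is not None else None

        def process_result_value(self, value, dialect):
            return json.loads(value) if value is not None else None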
Conflicts:
mediagoblin/db/sql/models.py
After a bit of discussion, we decided to drop the
pre-rendered html from the database and render it on
the fly.
In another step, we will use some proper caching method to
cache this stuff.
This commit affects the MediaComment.content_html part.
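Rendering on the fly can be as simple as a property (a
sketch; the helper name is an assumption):

    @property
    def content_html(self):
        # Convert the stored markdown on each access instead
        # of keeping pre-rendered html in the db.
        return cleaned_markdown_conversion(self.content)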
After a bit of discussion, we decided to drop the
pre-rendered html from the database and render it on
the fly.
In another step, we will use some proper caching method to
cache this stuff.
This commit affects the MediaEntry.description_html part.
After a bit of discussion, we decided to drop the
pre-rendered html from the database and render it on
the fly.
In another step, we will use some proper caching method to
cache this stuff.
This commit affects the User.bio_html part.
Many thanks go to Svavar Kjarrval, who has taken a deeper
look at our current sql db design and made a bunch of
suggestions. The suggestions are currently recorded as TODO
items in the docstrings. This way we can keep track of them
right where we need them.
- Add a default for User.email_verified.
- Add a default for MediaEntry.state.
- Let PathTupleWithSlashes store [] as "NULL" (the reverse
is not handled properly yet!).
- Add an _id alias field to MediaEntry and MediaComment.
The reason migration 1 doesn't work, and is commented out,
is that sqlalchemy-migrate does not handle certain
constraints correctly while dropping binary sqlite columns.
See also:
http://code.google.com/p/sqlalchemy-migrate/issues/detail?id=143&thanks=143&ts=1327882242