| Commit message | Author | Age | Files | Lines |
|
|
|
|
|
| |
- It is now possible to actually see what's processing, thanks to a fix
for a bug where __getitem__ was called on the db model.
- Removed the DEPRECATED message from the docstring; it wasn't true.
|
|
|
|
| |
This commit makes test_submission mostly warning-clean.
|
|
|
|
|
|
|
|
|
|
|
|
| |
sqlite doesn't like complex changes (ALTER TABLE) to happen
inside a transaction that has already done other things.
And really, each migration should say "I'm done" and commit
its changes.
This is not the full story, but it's the core of it.
Specifically, the migration framework should probably do a
rollback "just in case" after each migration.
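A minimal sketch of that idea, assuming an ordered list of (version, callable) migrations; the function name and arguments are illustrative, not MediaGoblin's real migration framework:

```python
# Run each migration in its own transaction so SQLite never sees an
# ALTER TABLE inside a transaction that already did other work.

def run_all_migrations(session, migrations, current_version):
    """`migrations` is an ordered list of (version, callable) pairs."""
    for version, migrate in migrations:
        if version <= current_version:
            continue
        migrate(session)      # may ALTER TABLE, insert rows, etc.
        session.commit()      # each migration says "I'm done"
        session.rollback()    # just in case something is still pending
        current_version = version
    return current_version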
|
| |
|
| |
|
|\
| |
| |
| |
| |
| |
| | |
'is_derek/bug405_email_notifications_for_comments' into notifications-merge
Conflicts:
mediagoblin/db/mongo/migrations.py
|
| |\
| | |
| | |
| | |
| | | |
Conflicts:
mediagoblin/db/mongo/migrations.py
|
| | | |
|
| | | |
|
| | |
| | |
| | |
| | | |
Renamed `ogg' to `webm_audio' in core__file_keynames
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
The cleanup could be missed if the request handling code in
app.py:__call__ exits early (due to exception, or due to
one of those early "return"s).
So to make sure the sql session is cleaned up for real,
wrap the whole thing in a try: finally:.
Also wrote a short tool to test if the session is actually
empty. The tool is currently disabled, but ready to be
used.
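A sketch of the try/finally wrapping described above; the WSGI-style __call__ and the reset_after_request() name are illustrative stand-ins for MediaGoblin's actual app and database wrapper:

```python
class Application(object):
    def __init__(self, db, handler):
        self.db = db
        self.handler = handler

    def __call__(self, environ, start_response):
        try:
            # request dispatching, with its exceptions and early returns
            return self.handler(environ, start_response)
        finally:
            # runs no matter how we left the try block, so the SQL
            # session always gets cleaned up
            self.db.reset_after_request()
```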
|
| | |
| | |
| | |
| | |
| | | |
In the analyzing part also check that the media_data tables
are empty (as expected) before dropping them.
|
| | |
| | |
| | |
| | | |
Well, and if it's not needed, drop it again. ;)
|
| | |
| | |
| | |
| | |
| | |
| | | |
After converting everything, check what is actually used in
the db. For media_types that are not used, drop all the
media_data tables and remove the migration info.
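Illustrative sketch only: count rows per media_data table, drop the empty ones, and delete their migration bookkeeping. The migration_model argument and the media_type attributes used here are assumptions.

```python
from sqlalchemy import func

def drop_unused_media_data(session, media_types, migration_model):
    for media_type in media_types:
        table = media_type.DATA_MODEL.__table__
        rows = session.query(func.count()).select_from(table).scalar()
        if rows == 0:
            # nothing converted for this media_type: drop table + record
            table.drop(bind=session.get_bind())
            session.query(migration_model).filter_by(
                name=media_type.NAME).delete()
    session.commit()
```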
|
| | |
| | |
| | |
| | |
| | |
| | | |
Output some headers while converting things.
And indent some info.
Also some DRY things.
|
| | | |
|
|\ \ \
| | | | |
Conflicts:
mediagoblin/tests/test_submission.py
Also, WHOO SQL SWITCHOVER PARTY!
ASCII DANCE PARTY
/_o_/ \ / \o_ o
( _|_ ) //)
/\ / o \ /| /|
*BMCH BMCH BMCH BMCH*
%
/_o_/ HHHYAAaaaaa
/_
/ /
%
AAAAAHAHAHAHAHHHAAHA
,, .------
o_o ;; /\\ \ $ __
'\/ || // \\ # /_/
\// // //\\ \
) \\ \ %
\\ \\_____\
| ) //-------
/_/_ // //
SWITCH YOUR DATABASE
FLIP A FUKKEN BOOLEAN
%
__________
.-' '-.
.' '.
.' _--_ _--_ '.
/ / (_). / (_). \
. | | | | .
| ._____, ._____, |
| ____________________ |
| | | |
' \ / '
\ '. .----./ /
\ '._ / / /
'. '--------' .'
'._ _.'
'----------'
|
| | | |
| | | |
| | | |
| | | |
| | | |
| | | | |
This switches the whole source code over to use sql instead
of mongodb. It's a pretty easy change, but it changes nearly
everything about the way things work. Hopefully everything works!
|
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | | |
The JSON fields are really "dumb stuff in here" fields.
They are not intended to be indexed or anything. And they
can get large: for example, the exif_all field in one of my
simple tests is nearly 7 kB. Although VARCHAR might
work, TEXT just feels better as the storage type.
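Roughly the kind of column type the commit describes: JSON serialized into a TEXT column, never indexed. This uses SQLAlchemy's documented TypeDecorator pattern; the class name here is illustrative.

```python
import json
from sqlalchemy.types import TypeDecorator, TEXT

class JSONEncoded(TypeDecorator):
    """A 'dumb stuff in here' field: arbitrary structures as JSON text."""
    impl = TEXT

    def process_bind_param(self, value, dialect):
        return json.dumps(value) if value is not None else None

    def process_result_value(self, value, dialect):
        return json.loads(value) if value is not None else None
```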
|
| | | | |
|
| | | |
| | | |
| | | |
| | | | |
And some other stuff that the converter does not need.
|
| | | | |
|
| | | |
| | | |
| | | |
| | | |
| | | | |
And add the image and video media_data tables.
And start to rewrite the convert tool.
|
|/ / / |
|
| | | |
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
1. No need to drop media_data['exif'], we only have and
want media_data['exif_all'].
2. Use media['_id'] instead of media._id (better not use
dot-notation on mongo objects in such a low level tool).
|
| | |
| | |
| | |
| | |
| | | |
If the exif info is totally empty, do not add it at all to
the media_data dict in mongo.
|
| | |
| | |
| | |
| | |
| | |
| | | |
Move media_data['exif']['clean'] to media_data['exif_all']
drop media_data['exif']['useful']
drop media_data['exif']
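A hedged pymongo-era sketch of that shuffle: keep only the "clean" tags as exif_all and drop the rest of the old exif subdocument. The collection name and the old-style update() call are assumptions about the era's code.

```python
def migrate_exif_fields(database):
    collection = database['media_entries']
    for entry in collection.find({'media_data.exif': {'$exists': True}}):
        clean = entry['media_data'].get('exif', {}).get('clean', {})
        collection.update(
            {'_id': entry['_id']},
            {'$set': {'media_data.exif_all': clean},
             '$unset': {'media_data.exif': 1}})
```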
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
Oh well:
tools.exif -> processing -> db.util -> db.models -> db.mixin -> tools.exif
So import tools.exif locally in exif_display_iter()
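Sketch of the workaround: do the import inside the method so loading db.mixin never re-enters the cycle at import time. The get_useful() helper is an assumption about mediagoblin.tools.exif, not verified here.

```python
class MediaEntryMixin(object):
    def exif_display_iter(self):
        from mediagoblin.tools import exif  # local import breaks the cycle
        exif_all = (self.media_data.exif_all
                    if self.media_data is not None else None)
        if not exif_all:
            return
        for key, value in exif.get_useful(exif_all).items():  # assumed helper
            yield key, value
```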
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
MediaEntry.media_data.exif_all will contain all the
"clean" EXIF data.
MediaEntry.exif_display_iter() is an iterator that fetches
the most interesting entries for display from that data.
|
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
When creating a new media_data row, the new row needs to
know the MediaEntry it is associated with. I have no idea
why this worked before at all. Maybe some implicit tricks
by sqlalchemy?
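Illustrative models showing why the association must be explicit: the media_data row's primary key is the MediaEntry foreign key, so the row cannot be created without knowing its entry. Table and column names are assumptions.

```python
from sqlalchemy import Column, Integer, ForeignKey
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class MediaEntry(Base):
    __tablename__ = 'core__media_entries'
    id = Column(Integer, primary_key=True)

class ImageData(Base):
    __tablename__ = 'image__media_data'
    media_entry = Column(Integer, ForeignKey('core__media_entries.id'),
                         primary_key=True)
    width = Column(Integer)
    height = Column(Integer)

def create_image_data(session, media_entry_id, **fields):
    # Hand the new row its MediaEntry id up front instead of hoping
    # something fills it in later.
    row = ImageData(media_entry=media_entry_id, **fields)
    session.add(row)
    return row
```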
|
| | |
| | |
| | |
| | |
| | | |
These are the columns that seem to make the most sense to
have an index on them.
|
| | |
| | |
| | |
| | |
| | | |
Load all models for the media_types. This was previously
blocked by a celery problem, but that is now fixed.
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
So that all models are ready when connecting to the db, and
so that our "db" object has all models listed on it, create
a function to load all models from the media_types, etc.,
and call it in setup_database().
Problem: this gives celery warnings, because celery is
imported before being set up properly. No idea how to fix
this right now, so media-type loading is excluded from
load_models for now.
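A rough sketch of such a load_models(): import the core models, then each media_type's models module, so everything registers on the shared Base before connecting. The config key and module layout are assumptions.

```python
import importlib

def load_models(app_config):
    import mediagoblin.db.sql.models  # core models register themselves

    for media_type in app_config.get('media_types', []):
        try:
            importlib.import_module(media_type + '.models')
        except ImportError:
            # this media_type ships no SQL models; nothing to register
            pass
```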
|
| | |
| | |
| | |
| | |
| | | |
Import the "Base" class for models from db.sql.base instead
of db.sql.models.
|
| |/
|/|
| |
| |
| |
| | |
As the queries are quite verbose, disable them for now.
Re-enabling them should be done in the central logging
config, which is another story for celery and bin/gmg.
|
| |
| |
| |
| |
| |
| | |
Kind of useful to see but... I don't think they're needed, and I'm not
super comfortable with print statements being in migrations. Seems
semi bloated!
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
The mongosql tool is really dumping directly into the sql
database and tries not to use too much logic that might
change later.
This means it needs to create the migration records on
its own!
So add a bunch of records with version=0.
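A sketch of the idea (table and column names are illustrative): the converter stamps every migration set at version 0 itself, since it bypasses the migration framework entirely.

```python
from sqlalchemy import Column, Integer, Unicode
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class MigrationRecord(Base):
    __tablename__ = 'core__migrations'
    name = Column(Unicode, primary_key=True)
    version = Column(Integer, nullable=False, default=0)

def stamp_initial_versions(session, migration_set_names):
    # Mark every listed migration set as freshly created (version 0).
    for name in migration_set_names:
        session.add(MigrationRecord(name=name, version=0))
    session.commit()
```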
|
| | |
|
| |
| |
| |
| |
| |
| |
| |
| |
| | |
Searching media by slug is easy on mongo. But doing the
joins in sqlalchemy is not as nice. So create a function
for doing it.
Well, and create the same function for mongo, so that it
also works.
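A hedged sketch of such a helper: look up a MediaEntry by the owner's username plus the media slug, joining through the User table. The minimal models here mirror typical MediaGoblin naming but are not the real ones.

```python
from sqlalchemy import Column, Integer, Unicode, ForeignKey
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class User(Base):
    __tablename__ = 'core__users'
    id = Column(Integer, primary_key=True)
    username = Column(Unicode, unique=True)

class MediaEntry(Base):
    __tablename__ = 'core__media_entries'
    id = Column(Integer, primary_key=True)
    uploader = Column(Integer, ForeignKey('core__users.id'))
    slug = Column(Unicode)

def media_for_slug(session, username, slug):
    # One query instead of "find user, then find media": join and filter.
    return (session.query(MediaEntry)
            .join(User, MediaEntry.uploader == User.id)
            .filter(User.username == username,
                    MediaEntry.slug == slug)
            .first())
```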
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
If there is no media_data row for the current media (for
whatever reason; there might be good ones), have
MediaEntry.media_data return None instead of raising an
exception.
The exif display part now handles this by checking whether
.media_data.exif is defined (None has no attribute exif, so
it's undefined, all fine).
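A hedged sketch of that property; the row lookup helper and the exception type are placeholders for whatever the real model uses.

```python
class MediaEntryMixin(object):
    @property
    def media_data(self):
        try:
            return self._media_data_row()   # hypothetical lookup helper
        except LookupError:
            # No media_data row for this entry -- that can be legitimate,
            # so return None instead of raising.
            return None
```

Callers then simply test `media.media_data and media.media_data.exif_all` before using it.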
|
| | |
|
| |
| |
| |
| |
| |
| | |
Add mongo_to_sql convert part for converting the media_data
for images. This currently drops the exif data and thus
only converts gps data.
|
| |
| |
| |
| |
| | |
Move media_data["gps"]["*"] to media_data["gps_*"].
In preparation for media_data.gps_*
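An illustrative pymongo-era sketch of that flattening: every key under media_data["gps"] becomes a top-level media_data["gps_<key>"] entry. The collection name and the old-style save() call are assumptions.

```python
def flatten_gps(database):
    collection = database['media_entries']
    for entry in collection.find({'media_data.gps': {'$exists': True}}):
        gps = entry['media_data'].pop('gps', {})
        for key, value in gps.items():
            entry['media_data']['gps_' + key] = value
        collection.save(entry)
```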
|
| | |
|
|/
|
|
|
| |
This creates fresh VideoData rows for all the videos in the
mongodb.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Docs:
http://docs.sqlalchemy.org/en/latest/core/engines.html#configuring-logging
So for an application that uses python logging for real
(and MediaGoblin should), the rule is:
- Don't use echo=True,
- but reconfigure the appropriate loggers' level.
So replaced the echo=True with a line that reconfigures the
appropriate logger to achieve the same effect.
This still dumps whole piles of SQL queries into the main
log, but at least they're not duplicated any more.
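The rule above, in code: no echo=True on the engine, and query logging is controlled through the sqlalchemy.engine logger instead (the database URL here is just an example).

```python
import logging
from sqlalchemy import create_engine

engine = create_engine('sqlite:///mediagoblin.db')  # note: no echo=True

# Equivalent of echo=True, but now under the normal logging config:
logging.getLogger('sqlalchemy.engine').setLevel(logging.INFO)
```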
|
| |
|
|
|
|
|
|
|
|
|
|
| |
The name part of a MediaFile only uses a very limited
number of values, currently things like "original" or
"thumb".
So instead of storing the string on each entry, just store
a short integer referencing the FileKeynames table and keep
the appropriate string there.
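A hedged sketch of that scheme: MediaFile stores a small integer pointing at a FileKeynames row ("original", "thumb", ...) instead of repeating the string. Table and column names follow the commit but are illustrative, not the exact models.

```python
from sqlalchemy import Column, Integer, SmallInteger, Unicode, ForeignKey
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import relationship

Base = declarative_base()

class FileKeynames(Base):
    __tablename__ = 'core__file_keynames'
    id = Column(Integer, primary_key=True)
    name = Column(Unicode, unique=True)     # e.g. u"original", u"thumb"

class MediaFile(Base):
    __tablename__ = 'core__mediafiles'
    media_entry = Column(Integer, primary_key=True)  # FK in the real model
    name_id = Column(SmallInteger, ForeignKey('core__file_keynames.id'),
                     primary_key=True)
    file_path = Column(Unicode)
    name_helper = relationship(FileKeynames)
```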
|
| |
|