| Commit message | Author | Age | Files | Lines |
Kind of useful to see but... I don't think they're needed, and I'm not
super comfortable with print statements being in migrations. Seems
semi bloated!
The mongosql tool really dumps directly into the sql
database and tries not to use too much logic that might
change later.
This means it needs to create the migration records on
its own!
So add a bunch of records with version=0.
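The bookkeeping can be sketched with plain sqlite3; the `migrations` table, its columns, and the migration set names here are assumptions for illustration, not MediaGoblin's actual schema.

```python
import sqlite3

# Hypothetical schema: one row per migration set, tracking which
# schema version that set is currently at.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE migrations (name VARCHAR PRIMARY KEY, version INTEGER NOT NULL)")

# The converter writes rows directly instead of going through the
# normal setup path, so it must seed the migration bookkeeping itself:
# one record per known set, all at version=0.
for name in ("core", "media_type_image", "media_type_video"):  # assumed names
    conn.execute("INSERT INTO migrations (name, version) VALUES (?, 0)", (name,))
conn.commit()

print(conn.execute("SELECT name, version FROM migrations ORDER BY name").fetchall())
# → [('core', 0), ('media_type_image', 0), ('media_type_video', 0)]
```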
The actual code is just a simple for loop; there might be a better
implementation but this is a fine start. I also extended test_delete to
check this too.
commands
Just moved the import into the actual function. That resolved the issue!
- Try to preserve some translations (somehow).
- Mark "Tagged with" again for translation.
- Do not translate the empty string
Searching media by slug is easy on mongo. But doing the
joins in sqlalchemy is not as nice. So create a function
for doing it.
And create the same function for mongo, so that both
backends work the same way.
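A rough stand-in for the SQL side using sqlite3; the table and column names are assumed, not MediaGoblin's real schema, but the shape of the join (user to media, filtered by username and slug) is the point.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users (id INTEGER PRIMARY KEY, username TEXT UNIQUE);
CREATE TABLE media_entries (
    id INTEGER PRIMARY KEY,
    uploader INTEGER REFERENCES users (id),
    slug TEXT);
INSERT INTO users VALUES (1, 'alice');
INSERT INTO media_entries VALUES (10, 1, 'sunset');
""")

def get_media_by_slug(conn, username, slug):
    """Hypothetical helper: resolve a (username, slug) pair to one
    media row via a join, or None if there is no match."""
    return conn.execute(
        "SELECT m.id, m.slug FROM media_entries m "
        "JOIN users u ON u.id = m.uploader "
        "WHERE u.username = ? AND m.slug = ?",
        (username, slug)).fetchone()

print(get_media_by_slug(conn, "alice", "sunset"))  # → (10, 'sunset')
```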
pymongo does not rewind a cursor after leaving a for loop.
So let us do it by hand.
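pymongo's `Cursor` really does expose a `rewind()` method for this. The exhaustion behaviour can be illustrated with a minimal stand-in class (not pymongo itself):

```python
class FakeCursor:
    """Minimal stand-in for a pymongo cursor: iterating exhausts it,
    and rewind() resets it to the start, like pymongo's Cursor.rewind()."""
    def __init__(self, docs):
        self._docs = list(docs)
        self._pos = 0
    def __iter__(self):
        return self
    def __next__(self):
        if self._pos >= len(self._docs):
            raise StopIteration
        doc = self._docs[self._pos]
        self._pos += 1
        return doc
    def rewind(self):
        self._pos = 0
        return self

cursor = FakeCursor([{"slug": "a"}, {"slug": "b"}])
first_pass = [d["slug"] for d in cursor]
second_pass = [d["slug"] for d in cursor]  # cursor is exhausted: nothing left
cursor.rewind()                            # so rewind it by hand
third_pass = [d["slug"] for d in cursor]
print(first_pass, second_pass, third_pass)  # ['a', 'b'] [] ['a', 'b']
```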
_get_tag_name_from_entries:
1) Replace:
       if q.count():
           elem = q[0]
   by:
       for element in q:
           ...
           break
   so that only one db query is made instead of two.
2) And another dose of Dot-Notation as usual.
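The difference can be shown with a toy query object that counts round trips (the class is hypothetical; real query objects hit the database on `count()` and again on indexing):

```python
class CountingQuery:
    """Toy query: each count(), index access, or iteration start
    stands in for one database round trip."""
    def __init__(self, rows):
        self.rows = rows
        self.db_hits = 0
    def count(self):
        self.db_hits += 1
        return len(self.rows)
    def __getitem__(self, i):
        self.db_hits += 1
        return self.rows[i]
    def __iter__(self):
        self.db_hits += 1
        return iter(self.rows)

# Old pattern: count() then q[0] -- two round trips.
old = CountingQuery(["tag_entry"])
elem = None
if old.count():
    elem = old[0]
print(old.db_hits)  # 2

# New pattern: iterate and break after the first row -- one round trip.
q = CountingQuery(["tag_entry"])
element = None
for element in q:
    break
print(q.db_hits)  # 1
```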
Replace == by =.
When uploading a new image the processing code wants to set
the media_data['exif'] part. As exif is not yet in sql,
there is no way to make this work now. So the workaround is
to check for "no row exists yet" and just ignore exif.
If there is no media_data row for the current media (for
whatever reason, there might be good ones), let
MediaEntry.media_data not raise an exception but just
return None.
The exif display part now handles this by checking whether
.media_data.exif is defined (None has no attribute exif, so
it's undefined, all fine).
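The idea, with stub classes in place of the real SQLAlchemy models and query (all names here are stand-ins):

```python
class NoResultFound(Exception):
    """Stand-in for sqlalchemy.orm.exc.NoResultFound."""

class ImageData:
    def __init__(self, exif=None):
        self.exif = exif

class MediaEntry:
    def __init__(self, row=None):
        self._row = row
    def _query_media_data(self):
        # Stand-in for the real one-row SQLAlchemy lookup.
        if self._row is None:
            raise NoResultFound()
        return self._row
    @property
    def media_data(self):
        # Swallow "no row exists" and return None, so callers and
        # templates can just test for it instead of catching an error.
        try:
            return self._query_media_data()
        except NoResultFound:
            return None

print(MediaEntry(ImageData(exif={"Make": "X"})).media_data.exif)  # {'Make': 'X'}
print(MediaEntry().media_data)                                    # None
```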
ipython code!
Thanks Hugo Boyer! I forgot to credit you in my last commit.
Add mongo_to_sql convert part for converting the media_data
for images. This currently drops the exif data and thus
only converts gps data.
The processing should also create .gps_* instead of the old
['gps']['x']. To ease forward porting, use the new
media.media_data_init() to set the gps data in the media.
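A sketch of what such an init helper might look like; the signature and the lazy-creation behaviour are assumptions for illustration, not the real `media_data_init()`:

```python
class ImageData:
    """Stand-in for the per-media-type data row."""

class MediaEntry:
    def __init__(self):
        self.media_data = None
    def media_data_init(self, **kwargs):
        # Hypothetical sketch: create the media_data row lazily on
        # first use, then set each keyword argument as a field on it.
        if self.media_data is None:
            self.media_data = ImageData()
        for key, value in kwargs.items():
            setattr(self.media_data, key, value)

entry = MediaEntry()
entry.media_data_init(gps_latitude=52.5, gps_longitude=13.4)
print(entry.media_data.gps_latitude, entry.media_data.gps_longitude)  # 52.5 13.4
```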
Instead of .gps.x use .gps_x, and add some "is defined"
checks.
Also mark some strings for translation in here.
Move media_data["gps"]["*"] to media_data["gps_*"].
In preparation for media_data.gps_*
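In dict form the move looks roughly like this (a sketch; the real migration walks and updates the mongo documents):

```python
def move_gps_keys(media_data):
    """Flatten media_data['gps'][x] into media_data['gps_' + x], in place."""
    gps = media_data.pop("gps", None) or {}
    for key, value in gps.items():
        media_data["gps_" + key] = value
    return media_data

doc = {"gps": {"latitude": 52.5, "longitude": 13.4}, "exif": {}}
print(move_gps_keys(doc))
# → {'exif': {}, 'gps_latitude': 52.5, 'gps_longitude': 13.4}
```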
This creates fresh VideoData rows for all the videos in the
mongodb.
'refs/remotes/chemhacker/bug402_nicer_skin_for_video'
Conflicts:
mediagoblin/templates/mediagoblin/media_displays/video.html
Same idea as in the previous commit.
Joar caught this one.
To reproduce
1. Create a user with an all-decimal ObjectId in mongo
2. Login using that user, while mongodb is enabled.
3. Switch instance to sql.
4. Restart.
5. Refresh any page.
This will error, because no user with that object id exists
any more.
While at it, improved logging.
also for subsequent logins once the user is created) is working.
When searching for a user by username, there can either be
no result or one result. There is a unique constraint on
the db.
.one in mongokit raises an error for more than one result.
But that can't happen anyway. So no problem.
.one in sqlalchemy raises an error for more than one, but
that's not a problem anyway. It also raises an error for no
result. But no result is handled by the code anyway, so no
need to raise an exception.
.find_one doesn't raise an exception for more than one
result (no problem anyway) and just returns None for no
result. The latter is handled by the code.
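The two behaviours being compared, written out as plain-Python stand-ins (these are not the real mongokit/sqlalchemy/pymongo implementations, just their contracts):

```python
def one(results):
    """Stand-in for sqlalchemy-style .one(): error on zero or many."""
    if len(results) == 0:
        raise LookupError("no result found")
    if len(results) > 1:
        raise LookupError("multiple results found")
    return results[0]

def find_one(results):
    """Stand-in for find_one: None for no result, first row otherwise."""
    return results[0] if results else None

print(find_one(["alice"]))  # alice
print(find_one([]))         # None -- the no-result case the code already handles
try:
    one([])
except LookupError:
    # .one() forces the caller to catch this; find_one does not.
    pass
```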
1. Change to the current primary key = media_entry id
layout
2. Add gps_{latitude,longitude} to the table.
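A sqlite3 sketch of the new layout: the data row shares its primary key with the owning media entry, and gps lives in plain columns (names assumed, not the exact MediaGoblin schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE media_entries (id INTEGER PRIMARY KEY)")
# Primary key of image_data IS the media_entry id: one data row
# per entry, no separate surrogate key.
conn.execute("""
CREATE TABLE image_data (
    media_entry INTEGER PRIMARY KEY REFERENCES media_entries (id),
    gps_latitude REAL,
    gps_longitude REAL
)""")
conn.execute("INSERT INTO media_entries (id) VALUES (1)")
conn.execute("INSERT INTO image_data VALUES (1, 52.5, 13.4)")
row = conn.execute(
    "SELECT gps_latitude, gps_longitude FROM image_data WHERE media_entry = 1"
).fetchone()
print(row)  # (52.5, 13.4)
```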
autogenerate extension list
Docs:
http://docs.sqlalchemy.org/en/latest/core/engines.html#configuring-logging
So for an application utilizing python logging for real
(and MediaGoblin should) the rule is:
- Don't use echo=True,
- but reconfigure the appropriate loggers' level.
So replaced the echo=True by a line to reconfigure the
appropriate logger to achieve the same effect.
This still dumps whole loads of SQL queries into the main
log, but at least they're not duped any more.
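With stdlib logging, the replacement for `echo=True` looks like this; the `sqlalchemy.engine` logger name is the one the linked SQLAlchemy docs describe:

```python
import logging

logging.basicConfig()  # make sure records reach a handler on the root logger

# Instead of create_engine(url, echo=True), raise the level on
# SQLAlchemy's engine logger. echo=True would attach a second,
# hardcoded handler and duplicate every query line in the main log.
logging.getLogger("sqlalchemy.engine").setLevel(logging.INFO)

print(logging.getLogger("sqlalchemy.engine").level == logging.INFO)  # True
```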