In addition to filling the normal slot media_data['exif'],
now also fill media_data.exif_all, the new slot used by SQL.
For the moment this creates duplicated entries in the
mongo db, but that shouldn't hurt.
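A minimal sketch of that dual write; the helper name and the exact slot access are assumptions, not the project's actual code:

    def store_exif_both(entry, exif_tags):
        """Write EXIF into both slots during the mongo-to-sql transition."""
        # old slot, still read by the existing mongo code paths
        entry.media_data['exif'] = exif_tags
        # new slot, the one the sql backend will read
        entry.media_data['exif_all'] = exif_tags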
Previously, pressing an arrow key inside a textarea opened the
next/previous media.
Oh well, a circular import:
tools.exif -> processing -> db.util -> db.models -> db.mixin -> tools.exif
So import tools.exif locally in exif_display_iter() (see the sketch
after the next commit).
MediaEntry.media_data.exif_all will contain all the
"clean" EXIF data.
MediaEntry.exif_display_iter() is an iterator that fetches
the most interesting entries for display from that data.
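A hedged sketch of exif_display_iter(), also showing the local import from the circular-import fix above; exif.USEFUL_TAGS is an assumed name for "the interesting keys", not necessarily the real one:

    class MediaEntryMixin(object):
        def exif_display_iter(self):
            # imported here rather than at module level, so the cycle
            # tools.exif -> processing -> db.util -> db.models -> db.mixin
            # never closes at import time
            from mediagoblin.tools import exif

            exif_all = self.media_data and self.media_data.exif_all
            if not exif_all:
                return
            for key in exif.USEFUL_TAGS:  # assumed name
                if key in exif_all:
                    yield key, exif_all[key]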
(`-dev' instead of `.dev')
When creating a new media_data row, the new row needs to
know the MediaEntry it is associated with. I have no idea
why this worked before at all. Maybe some implicit trick
by sqlalchemy?
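In SQLAlchemy terms the fix might look like this; ImageData and the column name are placeholders, not necessarily MediaGoblin's real names:

    def create_media_data(session, media_entry, exif_tags):
        # tell the new row which MediaEntry it belongs to, instead of
        # relying on whatever implicit wiring made this work before
        image_data = ImageData(media_entry=media_entry.id)
        image_data.exif_all = exif_tags
        session.add(image_data)
        session.commit()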
These are the columns that seem to make the most sense to
have an index on.
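For example, in SQLAlchemy's declarative style an index is a flag on the column; which columns the commit actually touched is a guess here:

    from sqlalchemy import Column, DateTime, Integer, Unicode
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class MediaEntry(Base):
        __tablename__ = 'core__media_entries'

        id = Column(Integer, primary_key=True)
        # columns hit by almost every lookup (illustrative choice)
        slug = Column(Unicode, index=True)      # media URLs resolve by slug
        created = Column(DateTime, index=True)  # listings sort by date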
Load all models for the media_types. This was previously
blocked by a celery problem, but that is now fixed.
Our entries in the queue are marked as "unprocessed", not
as "processing" as the panel code expected. So search for
the correct string.
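Presumably a one-word fix in the panel's query, roughly (mongo-style; names assumed):

    # look for the state string the queue code actually writes
    processing_entries = db.MediaEntry.find({u'state': u'unprocessed'})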
So that celeryd also loads the task.
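With celery of that era this can be done via the CELERY_IMPORTS setting, for example (whether the commit used exactly this mechanism is an assumption):

    # make celeryd import the module that defines the task
    CELERY_IMPORTS = ('mediagoblin.processing.task', )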
Move the actual celery task from processing/__init__.py
into its own .../task.py. That way it can be imported as
needed.
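Roughly, the layout after the move might look like this sketch; the class name and the old-style celery API are illustrative:

    # mediagoblin/processing/task.py (sketch)
    from celery.task import Task

    class ProcessMedia(Task):
        """The actual processing task, now importable on its own."""
        def run(self, media_id):
            ...  # look up the entry and run its media type's processor

    # callers elsewhere import it only when actually needed:
    def run_process_media(entry):
        from mediagoblin.processing.task import ProcessMedia
        ProcessMedia.apply_async([entry.id])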
processing.py -> processing/__init__.py
This is in preparation for splitting processing a bit.
The main reason for the split is celery setup: celery needs
to be set up before even importing and subclassing some of
its parts. So it's better to move the critical parts into
their own submodule and import it as late as needed.
Bug #270 asks for a lazycelery.sh script much like lazyserver.sh. Rather
than duplicate the code, I consolidated them into a single script,
lazystarter.sh. The script reconfigures itself a bit and runs a
particular server based on the name used to call it, but no matter
what, it uses the same code to offer help and to find configuration
files and server launchers. Hopefully this will make it easy to add
other features/fix bugs as needed in the future, and have them stay
in sync.
ascii doesn't use media_data at all, so it needs only the most
basic media_data model. Fix it to match the current form.
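A guess at what the most basic model looks like: a row whose primary key is also the foreign key to its MediaEntry (names illustrative):

    from sqlalchemy import Column, ForeignKey, Integer
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class AsciiData(Base):
        """media_data model that stores nothing beyond the link itself."""
        __tablename__ = 'ascii__mediadata'

        media_entry = Column(Integer, ForeignKey('core__media_entries.id'),
                             primary_key=True)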
So that all models are ready when connecting to the db, and
so that our "db" object has all models listed on it, create
a function to load all models from the media_types etc.
Call it in setup_database().
Problem: this gives celery warnings, because celery is
imported before being set up properly. No idea how to fix
this right now, so media-type loading is excluded from
load_models for now.
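A hedged sketch of such a load_models() helper; the config key and the per-type module layout are assumptions:

    def load_models(app_config):
        """Import every models module so its tables register on Base."""
        import mediagoblin.db.sql.models  # core models

        for media_type in app_config['media_types']:
            # e.g. 'mediagoblin.media_types.image' has a .models module
            __import__(media_type + '.models')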
Import the "Base" class for models from db.sql.base instead
of db.sql.models.
As the query logging is quite verbose, disable it for now.
Re-enabling it should be done in the central logging
config, which is another story for celery and bin/gmg.
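SQLAlchemy's per-query output lives on the 'sqlalchemy.engine' logger, so one way to quiet it looks like this (whether the commit did exactly this is an assumption):

    import logging

    # drop per-query INFO output until a central logging config exists
    logging.getLogger('sqlalchemy.engine').setLevel(logging.WARNING)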
Kind of useful to see but... I don't think they're needed, and I'm not
super comfortable with print statements being in migrations. Seems
semi bloated!
The mongosql tool really dumps directly into the sql
database and tries not to use too much logic that might
change later.
This means it needs to create the migration records on
its own!
So add a bunch of records with version=0.
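Seeding that bookkeeping might look roughly like this; the model and field names are guesses:

    def seed_migration_records(session):
        # the dump targets a freshly created schema, so every known
        # migration set is simply recorded as being at version 0
        for name in (u'__main__', u'image', u'ascii'):
            session.add(MigrationData(name=name, version=0))
        session.commit()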
The actual code is just a simple for loop; there might be a better
implementation, but this is a fine start. I also extended test_delete
to check this.
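The loop in question is probably of this shape (entirely illustrative; the queried relation is an assumption):

    def delete_media_of_user(user):
        # cascade by hand: delete each media entry the user owns
        for entry in MediaEntry.query.filter_by(uploader=user.id):
            entry.delete()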
commands
Just moved the import into the actual function. That resolved the issue!
- Try to preserve some translations (somehow).
- Mark "Tagged with" again for translation.
- Do not translate the empty string (see the sketch below).
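The empty-string rule matters because, with a catalog loaded, gettext returns the catalog's metadata header for the empty msgid; a guard of this shape avoids that (sketch):

    from gettext import gettext

    def translate(s):
        # with a catalog loaded, gettext('') returns the PO header, not ''
        if not s:
            return s
        return gettext(s)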
Searching media by slug is easy on mongo, but doing the
joins in sqlalchemy is not as nice. So create a function
for doing it.
Well, and create the same function for mongo, so that it
works there too.
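On the sql side the helper hides a join along these lines (session handling and model names assumed):

    def media_for_slug(session, username, slug):
        # mongo matches {'uploader': ..., 'slug': ...} directly; in
        # sqlalchemy the username lives on another table, hence the join
        return (session.query(MediaEntry)
                .join(User, MediaEntry.uploader == User.id)
                .filter(User.username == username,
                        MediaEntry.slug == slug)
                .first())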
pymongo does not rewind a cursor after leaving a for loop.
So let us do it by hand. Well.
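pymongo's Cursor does provide rewind(), so "by hand" is a one-liner (sketch around it):

    cursor = collection.find({u'uploader': user_id})

    for media in cursor:
        pass  # first pass exhausts the cursor

    cursor.rewind()  # reset it so it can be iterated again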
_get_tag_name_from_entries:
1) Replace:
       if q.count():
           elem = q[0]
   by:
       for element in q:
           ...
           break
   This way there is only one db query instead of two.
2) And another dose of Dot-Notation as usual.
Replace == by =.
When uploading a new image, the processing code wants to set
the media_data['exif'] part. As exif is not yet in sql,
there is no way to make this work right now. So the
workaround is to check for "no row exists yet" and just
ignore exif.
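The workaround boils down to a guard like this sketch; how "no row exists yet" shows up is an assumption (see also the media_data change below):

    def store_exif_if_possible(entry, exif_tags):
        # no media_data row yet means there is no column for exif to
        # land in; until exif lives in sql, drop it instead of crashing
        if entry.media_data is None:
            return
        entry.media_data['exif'] = exif_tags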
If there is no media_data row for the current media (for
whatever reason; there might be good ones), let
MediaEntry.media_data not raise an exception but just
return None.
The exif display part now handles this by checking whether
.media_data.exif is defined (None has no attribute exif, so
it's undefined; all fine).
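With SQLAlchemy this falls out of .first(), which returns None for an empty result; a sketch, reusing the placeholder names from the sketches above:

    from sqlalchemy.orm import object_session

    class MediaEntryMixin(object):
        @property
        def media_data(self):
            session = object_session(self)
            # .first() yields None when no row exists, instead of
            # raising; callers can then simply test media.media_data
            return (session.query(ImageData)
                    .filter_by(media_entry=self.id)
                    .first())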
ipython code!
Thanks Hugo Boyer! I forgot to credit you in my last commit.