Well, cleaning that up :)
This commit sponsored by Enrico Zini. Thanks!
|
This commit sponsored by Sam Clegg. Thank you!
|
Conflicts:
mediagoblin/processing/task.py
	mediagoblin/submit/lib.py

If there is an original video file and we skip transcoding, delete the webm_640 file
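A rough sketch of the cleanup this describes; only the 'webm_640' key and the skip-transcode condition come from the message, while the helper name and the entry/storage interfaces are assumptions:

```python
def drop_redundant_webm(entry, public_store):
    """Hypothetical helper: if the original video was kept and transcoding
    was skipped, the webm_640 copy adds nothing, so remove it."""
    media_files = entry.media_files  # assumed mapping of file keys to paths
    if 'original' in media_files and 'webm_640' in media_files:
        # Delete the file from public storage, then forget its key.
        public_store.delete_file(media_files['webm_640'])
        del media_files['webm_640']
        entry.save()
```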
|
catch copy_local_to_storage errors and raise PublicStoreFail, saving the keyname
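A rough sketch of the error-handling pattern described here; copy_local_to_storage and PublicStoreFail are named in the message, but their exact signatures and the surrounding helper are assumed:

```python
class PublicStoreFail(Exception):
    """Copying a file into public storage failed; `keyname` records which
    media file (e.g. 'thumb', 'webm_640') was being stored at the time."""
    def __init__(self, keyname, *args):
        super().__init__(*args)
        self.keyname = keyname


def store_public(proc_state, keyname, local_file, target_path):
    """Copy one processed file to public storage, tagging failures."""
    try:
        proc_state.public_store.copy_local_to_storage(local_file, target_path)
    except Exception as exc:
        # Re-raise with the keyname attached so callers can report exactly
        # which file could not be published.
        raise PublicStoreFail(keyname) from exc
```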
|
This commit sponsored by Mikiya Okuno. Thank you!

Haven't tested it yet though :)
This commit sponsored by Samuel Bächler. Thank you!

This commit sponsored by Vincent Demeester. Thank you!

This allows our processor to make some informed decisions based on the
state by still having access to the original state.
This commit sponsored by William Rico. Thank you!

BONUS COMMIT to Ben Finney and the Free Software Melbourne crew. :)
IRONY: Initially I committed this as "media manager".

This commit sponsored by Odin Hørthe Omdal. Thank you!

processing command now.
However, it doesn't celery task-ify it...
This commit sponsored by Catalin Cosovanu. Thank you!

This commit sponsored by Philippe Casteleyn. Thank you!

Every reprocessing action possible can inform you of its command line
argument stuff! Is that awesome or what?
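The mechanism this hints at, sketched with argparse; the generate_parser hook and the action class are illustrative guesses, not the project's actual API:

```python
import argparse


class ResizeAction:
    """Hypothetical reprocessing action that advertises its own CLI flags."""

    name = 'resize'

    @classmethod
    def generate_parser(cls, subparsers):
        # Each action registers a subcommand and documents its own
        # arguments, so the reprocess tool can explain every action.
        parser = subparsers.add_parser(cls.name, help='regenerate resized images')
        parser.add_argument('--size', nargs=2, type=int,
                            metavar=('WIDTH', 'HEIGHT'),
                            help='dimensions to resize to')
        return parser


def build_cli(actions):
    parser = argparse.ArgumentParser(prog='reprocess')
    subparsers = parser.add_subparsers(dest='action')
    for action in actions:
        action.generate_parser(subparsers)
    return parser


# e.g. build_cli([ResizeAction]).parse_args(['resize', '--size', '640', '480'])
```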
We are on our way now to a working reprocessing system under this
redesign!
This commit sponsored by Bjarni Rúnar Einarsson. Thank you!

Fleshing out the base classes and setting up some docstrings. Not
everything is totally clear yet, but I think it's on a good track, and
getting clearer.
This commit sponsored by Ben Finney, on behalf of Free Software Melbourne.
Thank you all!
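A loose sketch of what such a base class might look like; the class and method names here are guesses for illustration, not the project's actual interface:

```python
class MediaProcessor:
    """Base class for one processing action on one media entry.

    Subclasses say which entries they can run against and implement
    process() to do the actual work.
    """

    name = None  # e.g. 'initial' or 'resize'

    def __init__(self, manager, entry):
        self.manager = manager
        self.entry = entry

    @classmethod
    def media_is_eligible(cls, entry):
        """Return True if this processor may run on `entry` right now
        (e.g. an initial processor only accepts unprocessed entries)."""
        raise NotImplementedError

    def process(self, **kwargs):
        """Do the work; kwargs come from the action's CLI arguments."""
        raise NotImplementedError
```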
|
ProcessImage, better description for --size flag
|
- have mg generate task_id
remove
|
- Make sure Exceptions are pickleable (not sure if this was not the
  case, but this is the pattern documented in the celery docs).
- Don't create a task_id in the GMG code, but save the one implicitly
  created by celery.
- Don't create a task-id directory per upload. Just store queued uploads
  in a single directory (this is the most controversial change and might
  need discussion!!!)
Signed-off-by: Sebastian Spaeth <Sebastian@SSpaeth.de>
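A sketch of the second point: let celery assign the task id and record it, instead of generating one in GMG code. apply_async and the result's task_id are standard celery; the task body and the entry field are placeholders:

```python
from celery import shared_task


@shared_task
def process_media(entry_id):
    # Placeholder body: look up the entry and run its processor.
    pass


def submit_entry(entry):
    # Let celery create the task id implicitly, then record it on the
    # entry so the processing status can be looked up later.
    result = process_media.apply_async([entry.id])
    entry.queued_task_id = result.task_id
    entry.save()
```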
|
Implement queue dir deleting in the
proc_state.delete_queue_file helper function.
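Roughly what that helper might do, shown here as a free function for brevity (the commit puts it on proc_state); everything beyond the helper's name is assumed:

```python
def delete_queue_file(entry, queue_store):
    """Sketch of the proc_state helper: drop the queued upload (and with it
    its queue directory) once processing no longer needs it."""
    queued_filepath = entry.queued_media_file
    if queued_filepath:
        # Remove the file from the queue store and clear the reference.
        queue_store.delete_file(queued_filepath)
        entry.queued_media_file = None
        entry.save()
```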
|
The idea is by Alon Levy.
Use it in ProcessingState.copy_original for now.
|
And change the process_foo() API to accept a ProcessingState now.
image and video are tested, the others are UNTESTED.
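What the changed signature might look like for one media type; the names inside are illustrative, not taken from the source:

```python
def process_image(proc_state):
    """Process an image entry via the ProcessingState instead of reaching
    into queue paths directly (previously roughly process_image(entry))."""
    entry = proc_state.entry
    # The state object hands us a local path to the original...
    original_file = proc_state.get_orig_filename()
    # ...thumbnail / medium generation from original_file would go here...
    entry.state = 'processed'
    entry.save()
```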
|
This makes the processing code easier to read/write and
will also help the reprocessing once we get to it.
Thanks to Joar Wandborg for testing!
|
The idea is to have a class that knows about the media entry
currently being processed and provides tools for working with it.
The long-term idea is to make reprocessing easier by, for example,
hiding how the original file comes into the processing code.
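A sketch of the shape such a class could take. The method names are illustrative; the point is that processors ask the state object for the original instead of knowing where queued files live:

```python
class ProcessingState:
    """Holds what the pipeline currently knows about one media entry."""

    def __init__(self, entry, queue_store):
        self.entry = entry
        self.queue_store = queue_store
        self._orig_filename = None

    def get_orig_filename(self):
        """Give processors a local path to the original file, hiding
        whether it came from the upload queue or, later when
        reprocessing, from already-stored files."""
        if self._orig_filename is None:
            queued = self.entry.queued_media_file
            self._orig_filename = self.queue_store.get_local_path(queued)
        return self._orig_filename

    def copy_original(self, target_name):
        """Keep a copy of the original next to the processed files so a
        later reprocessing run can start from it."""
        ...  # details omitted in this sketch
```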
|
We were referring to model._id in most of the code base, as this is
what Mongo uses. However, each use of _id required fixing up queries
(e.g. what we did in our find() and find_one() functions, moving all
'_id' to 'id') and using AliasFields to make the ._id attribute
available. This all means lots of superfluous fixing and transitioning
in a SQL world.
It will also not work in the long run. Much newer code already refers
to the objects by model.id (e.g. in the oauth plugin), which will break
with Mongo. So let's be honest, rip out the _id mongoism and live with
.id as the one canonical way to address objects.
This commit modifies all users and providers of model._id to use
model.id instead. The patch works with or without Mongo removed first,
but will break Mongo usage (even more than before).
I have not bothered to fix up db.mongo.* and db.sql.convert (which
converts from Mongo to SQL).
Signed-off-by: Sebastian Spaeth <Sebastian@SSpaeth.de>
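In practice the change is mechanical; a toy before/after with illustrative names:

```python
from types import SimpleNamespace

# Stand-in for any model object: with SQL, `.id` is the one canonical key.
entry = SimpleNamespace(id=42)

# Before: comment.media_entry = entry._id   # Mongo-era spelling, via AliasField
# After:
comment = SimpleNamespace(media_entry=entry.id)

assert comment.media_entry == 42
```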
|
- Added progress meter for video and audio media types.
- Changed the __repr__ method of a MediaEntry to display a more useful
  description.
- Added a new MediaEntry.state, 'processing', which means the task is
  currently running the processor on the item.
- Fixed some PEP8 issues in user_pages/views.py
- Fixed the ATOM TAG URI to show the correct year.
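For the __repr__ point, something along these lines; the fields shown are guesses for illustration:

```python
class MediaEntry:
    """Cut-down stand-in; the real model has many more fields."""

    def __init__(self, title, state, media_type):
        self.title = title
        self.state = state          # 'unprocessed', 'processing', 'processed', ...
        self.media_type = media_type

    def __repr__(self):
        # Enough detail to tell entries apart in a log line or debugger.
        return '<{0} {1}: {2!r} [{3}]>'.format(
            self.__class__.__name__, self.media_type, self.title, self.state)


print(repr(MediaEntry('Kitten', 'processing', 'image')))
# -> <MediaEntry image: 'Kitten' [processing]>
```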
|
This commit makes test_submission mostly warning-clean.
|
Conflicts:
mediagoblin/media_types/image/processing.py
mediagoblin/media_types/video/__init__.py
mediagoblin/media_types/video/processing.py
mediagoblin/tests/test_submission.py
|
This merge involved moving the new FilenameBuilder class to
processing/__init__.py, and putting the comment deletion tests back into
test_submission.py using the refactored functions.
|
Move the actual celery task from processing/__init__.py
into its own .../task.py. That way it can be imported as
needed.
|
processing.py -> processing/__init__.py
This is in preparation for splitting processing a bit.
The main reason for the split is celery setup: celery needs to be
set up before even importing and subclassing some of its parts. So
it's better to move the critical parts into their own submodule and
import it as late as needed.
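The late-import pattern this describes, roughly. The module path follows the messages above; the task class name and the call site are assumptions:

```python
def run_process_media(entry):
    """Kick off processing for an entry.

    The celery task lives in mediagoblin.processing.task and must not be
    imported at module load time: celery has to be configured before the
    task class (a celery.Task subclass) is defined, so import it as late
    as possible.
    """
    from mediagoblin.processing.task import ProcessMedia  # deferred import
    return ProcessMedia().apply_async([entry.id])
```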