Well, cleaning that up :)
This commit sponsored by Enrico Zini. Thanks!

This commit sponsored by Sam Clegg. Thank you!

Conflicts:
    mediagoblin/processing/task.py
    mediagoblin/submit/lib.py

If there is an original video file and we skip transcoding, delete the webm_640 file

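A rough sketch of the cleanup described above, assuming a `media_files` dict on the entry and a storage object with a `delete_file()` method; the surrounding API is an assumption, only the 'webm_640'/'original' keys come from the commit message:

```python
def cleanup_skipped_transcode(entry, public_store):
    """Sketch: if we kept the original and skipped transcoding, the
    intermediate webm_640 rendition is redundant, so drop it."""
    if 'original' in entry.media_files and 'webm_640' in entry.media_files:
        # delete_file() on the public store is assumed, not quoted code
        public_store.delete_file(entry.media_files['webm_640'])
        del entry.media_files['webm_640']
```
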
ignore the exception

catch copy_local_to_storage errors and raise PublicStoreFail, saving the keyname

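A minimal sketch of that pattern; the wrapper name and signature below are illustrative, and only copy_local_to_storage, PublicStoreFail, and the keyname idea come from the message:

```python
class PublicStoreFail(Exception):
    """Copying a file into public storage failed (illustrative sketch)."""
    def __init__(self, keyname=None):
        # route args through Exception.__init__ so the error stays
        # pickleable if it has to travel through celery
        super(PublicStoreFail, self).__init__(keyname)
        self.keyname = keyname


def store_public(public_store, keyname, local_file, target_filepath):
    """Copy into public storage; on failure remember which key
    (e.g. 'thumb', 'medium', 'original') could not be stored."""
    try:
        public_store.copy_local_to_storage(local_file, target_filepath)
    except Exception:
        raise PublicStoreFail(keyname=keyname)
```
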
This commit sponsored by Mikiya Okuno. Thank you!

Haven't tested it yet though :)
This commit sponsored by Samuel Bächler. Thank you!

This commit sponsored by Vincent Demeester. Thank you!

This allows our processor to make some informed decisions based on the state by still having access to the original state.
This commit sponsored by William Rico. Thank you!

BONUS COMMIT to Ben Finney and the Free Software Melbourne crew. :)
IRONY: Initially I committed this as "media manager".

This commit sponsored by Odin Hørthe Omdal. Thank you!

processing command now.
However, it doesn't celery task-ify it...
This commit sponsored by Catalin Cosovanu. Thank you!

This commit sponsored by Philippe Casteleyn. Thank you!

Every reprocessing action possible can inform you of its command line argument stuff! Is that awesome or what?

We are on our way now to a working reprocessing system under this redesign!
This commit sponsored by Bjarni Rúnar Einarsson. Thank you!

Fleshing out the base classes and setting up some docstrings. Not everything is totally clear yet, but I think it's on a good track, and getting clearer.
This commit sponsored by Ben Finney, on behalf of Free Software Melbourne. Thank you all!

ProcessImage, better description for --size flag

clearer though

- pass feed_url into ProcessMedia run()

- have mg generate task_id
remove

- Make sure Exceptions are pickleable (not sure if this was not the
  case, but this is the pattern as documented in the celery docs).
- Don't create a task_id in the GMG code, but save the one
  implicitly created by celery.
- Don't create a task-id directory per upload. Just store queued uploads
  in a single directory (this is the most controversial change and might
  need discussion!!!)
Signed-off-by: Sebastian Spaeth <Sebastian@SSpaeth.de>

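The first two points, sketched under assumed names; the exception class, the task object, and the queued_task_id field here are illustrative:

```python
class BaseProcessingFail(Exception):
    """Sketch of a pickleable processing error, following the pattern
    in the celery docs: hand everything to Exception.__init__."""
    def __init__(self, **metadata):
        super(BaseProcessingFail, self).__init__(metadata)
        self.metadata = metadata or {}


def enqueue_processing(entry, process_media_task):
    """Sketch: let celery create the task id and record it afterwards,
    instead of generating one in GMG code."""
    result = process_media_task.apply_async([str(entry.id)])
    entry.queued_task_id = result.task_id  # field name illustrative
    entry.save()
```
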
This reverts commit f67611fb485b5a84cedc62b73beb1e551e8cb934.
For some reason, generating a slug here throws an integrity error during a query when there is a duplicate slug.

Patch submitted by LotusEcho

To make .media_fetch_order work, create a property.

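The code isn't shown in the log, but the shape of such a property is roughly this; the class and attribute names are assumptions:

```python
class MediaManager(object):
    """Illustrative sketch: wraps the per-media-type configuration."""
    def __init__(self, config):
        self._config = config

    @property
    def media_fetch_order(self):
        # attribute-style access (manager.media_fetch_order) while the
        # value still lives in the underlying config mapping
        return self._config.get('media_fetch_order', [])
```
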
Implement queue dir deleting in the proc_state.delete_queue_file helper function.

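A hedged sketch of such a helper; the queue-store calls and the queued_media_file field are assumptions based on the surrounding commits, not quoted code:

```python
class ProcessingState(object):  # trimmed to the one helper
    def __init__(self, entry, queue_store):
        self.entry = entry
        self.queue_store = queue_store

    def delete_queue_file(self):
        """Remove the queued original and the directory that held it."""
        queued_filepath = self.entry.queued_media_file
        if not queued_filepath:
            return
        self.queue_store.delete_file(queued_filepath)      # assumed API
        self.queue_store.delete_dir(queued_filepath[:-1])  # assumed API
        self.entry.queued_media_file = []
```
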
The idea is by Alon Levy.
Use it in ProcessingState.copy_original for now.

And change the process_foo() API to accept a ProcessingState now.
image and video are tested, the others are UNTESTED.

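In practice the per-media-type entry points now take the state object instead of digging through the entry themselves; a sketch follows (the helper names are assumptions, while copy_original and delete_queue_file appear elsewhere in this log):

```python
def process_image(proc_state):
    """Sketch of the new shape of a media processor."""
    entry = proc_state.entry
    # the state object hides where the original actually comes from
    queued_filename = proc_state.get_queued_filename()  # assumed helper
    # ... generate thumbnail and medium renditions from queued_filename ...
    proc_state.copy_original(queued_filename)            # signature assumed
    proc_state.delete_queue_file()
    entry.save()
```
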
This makes the processing code easier to read/write and also will help the reprocessing once we get to it.
Thanks to Joar Wandborg for testing!

The idea is to have a class that knows about the media entry currently being processed and has tools for working with it.
The long-term idea is to make reprocessing easier, for example by hiding the way the original comes into the processing code.

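A skeletal version of the class being described; everything beyond the general idea (and the copy_original / delete_queue_file names used elsewhere in this log) is an assumption:

```python
class ProcessingState(object):
    """Knows about the media entry currently being processed and
    offers tools for it, so per-type processors stay small (sketch)."""

    def __init__(self, entry):
        self.entry = entry
        self.workbench = None
        self.orig_filename = None

    def set_workbench(self, workbench):
        self.workbench = workbench

    def get_orig_filename(self):
        # hides whether the original comes from the queue store or from
        # already-public storage, which is what makes reprocessing easier
        return self.orig_filename
```
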
People(tm) want to start run_process_media from the CLI and might not have a request object handy. So pass the feed_url into run_process_media rather than the request object, and allow the feed URL to be empty (resulting in no PuSH notification at all).
Signed-off-by: Sebastian Spaeth <Sebastian@SSpaeth.de>

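A sketch of the resulting entry point; `process_media` stands for the celery task built around ProcessMedia.run(), and the exact signature is assumed:

```python
def run_process_media(entry, feed_url=None):
    """feed_url is optional: CLI callers without a request object pass
    nothing, and then no PuSH notification is attempted at all."""
    # Web views derive feed_url from the request; the CLI just calls
    # run_process_media(entry) and skips the PuSH ping.
    process_media.apply_async(
        [str(entry.id), feed_url],
        task_id=entry.queued_task_id)
```
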
Notifying the PuSH servers had 3 problems:
1) It was done immediately after sending the processing task to celery. So if
   celery was run in a separate process, we would notify the PuSH servers
   before the new media was processed/visible. (#436)
2) Notification code was called in submit/views.py, so submitting via the
   API never resulted in notifications. (#585)
3) If notifying the PuSH server failed, we would never retry.
The solution was to make the PuSH notification an asynchronous subtask. This
way: 1) it will only be called once async processing has finished, 2) it is
in the main processing code path, so even API calls will result in
notifications, and 3) we retry 3 times in case of failure before giving up.
If the server is in a separate process, we will wait 3x 2 minutes before
retrying the notification.
The only downside is that the celery server needs to have access to the
internet to ping the PuSH server. If that is a problem, we need to make the
task belong to a special group of celery servers that has access to the
internet.
As a side effect, I believe I removed the limitation that prevented us from
upgrading celery.
Signed-off-by: Sebastian Spaeth <Sebastian@SSpaeth.de>

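The numbers in that message (3 retries, 2 minutes apart) map directly onto celery task options; a hedged sketch of such a subtask, with a placeholder hub URL and `requests` picked arbitrarily as the HTTP client:

```python
import requests
from celery import shared_task


@shared_task(bind=True, max_retries=3, default_retry_delay=2 * 60)
def handle_push_urls(self, feed_url):
    """Sketch: ping the PuSH hub for feed_url once processing is done,
    retrying up to 3 times, 2 minutes apart, before giving up."""
    try:
        response = requests.post(
            'https://hub.example.org/',          # placeholder hub URL
            data={'hub.mode': 'publish', 'hub.url': feed_url},
            timeout=10)
        response.raise_for_status()
    except Exception as exc:
        raise self.retry(exc=exc)
```
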
This was one of the last remaining Mongo holdouts and has been removed from the tree herewith. Good bye, ObjectId.
Signed-off-by: Sebastian Spaeth <Sebastian@SSpaeth.de>

We were referring to model._id in most of the code base, as this is what
Mongo uses. However, each use of _id required fixups of queries: e.g. what we
did in our find() and find_one() functions, moving all '_id' to 'id'. It also
required using AliasFields to make the ._id attribute available. This all
means lots of superfluous fixing and transitioning in a SQL world.
It will also not work in the long run. Much newer code already refers to the
objects by model.id (e.g. in the oauth plugin), which will break with Mongo.
So let's be honest, rip out the _id mongoism and live with .id as the one
canonical way to address objects.
This commit modifies all users and providers of model._id to use model.id
instead. This patch works with or without Mongo removed first, but will break
Mongo usage (even more than before).
I have not bothered to fix up db.mongo.* and db.sql.convert (which converts
from Mongo to SQL).
Signed-off-by: Sebastian Spaeth <Sebastian@SSpaeth.de>

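Schematically the change looks the same everywhere; the model and helper names below are just to show the pattern, not quoted from a particular file:

```python
# Before: Mongo-style access, kept alive through AliasFields and query
# fixups that rewrote '_id' to 'id' behind the scenes:
#     entry = MediaEntry.query.filter_by(_id=media_id).first()
#     do_something(entry._id)

# After: the SQL primary key is the one canonical spelling
# (do_something() is a generic placeholder for any consumer of the id):
entry = MediaEntry.query.filter_by(id=media_id).first()
do_something(entry.id)
```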