Commit log
* - Add json_error (sketched below) and use it in place of json_response
    where appropriate.
  - Add garbage_collection to the config spec file.
  - Fix bugs in both the garbage collection task and its test.
  - Handle /api/whoami when no user is logged in, and add a test for
    that case.
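A minimal sketch of what such a json_error helper could look like, assuming a werkzeug-style Response; the real helper's signature in MediaGoblin may differ:

```python
import json

from werkzeug.wrappers import Response


def json_response(serializable, status=200):
    # Serialize a dict to JSON and wrap it in an HTTP response.
    return Response(json.dumps(serializable), status=status,
                    mimetype='application/json')


def json_error(error_message, status=400):
    # Error variant: same JSON plumbing, but with a non-200 status and a
    # consistent {"error": ...} body, so API views stop hand-rolling both.
    return json_response({'error': error_message}, status=status)
```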
* This commit sponsored by Zakkai Kauffman-Rogoff. Thanks! :)
* This commit sponsored by Mikael Nordfeldth. Thank you!
* This commit sponsored by Bruno Girin. Thank you!
* datastructure
  Important, because that only makes sense for wsgi! :)
  This commit sponsored by Geoff Lehr. Thank you!
* This commit sponsored by Benjamin Prager. Thank you!
* This commit sponsored by Joar Wandborg. Joar, thanks for the many
  things you've done for MediaGoblin!
* Conflicts:
      mediagoblin/processing/task.py
      mediagoblin/submit/lib.py
* This commit sponsored by Odin Hørthe Omdal. Thank you!
* - have mg generate task_id
  remove
* This tool creates an initial media entry for a given user.
  No magic. It just prefills the license with the user's
  default license and adds the user as the uploader.
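A rough sketch of such a tool function; the model field names here (uploader, license_preference) follow MediaGoblin's models but are assumptions of this example:

```python
def new_upload_entry(user):
    # Create a bare MediaEntry for this user: set them as the uploader
    # and prefill the license with their default. Nothing else is magic.
    from mediagoblin.db.models import MediaEntry  # import path assumed

    entry = MediaEntry()
    entry.uploader = user.id
    entry.license = user.license_preference
    return entry
```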
* Signed-off-by: Alon Levy <alon@pobox.com>
* When uploading, the file field needs some checks, it seems.
  So refactor them into check_file_field and use it wherever
  uploads are handled.
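The kind of check being factored out might look like this, assuming werkzeug's request.files and FileStorage; the real helper in mediagoblin/submit/lib.py may differ:

```python
from werkzeug.datastructures import FileStorage


def check_file_field(request, field_name):
    # True only if the field is present, is an actual uploaded file,
    # and carries a data stream; callers turn False into a form error
    # instead of crashing deeper in the pipeline.
    return (field_name in request.files
            and isinstance(request.files[field_name], FileStorage)
            and bool(request.files[field_name].stream))
```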
* People(tm) want to start run_process_media from the CLI and might not
  have a request object handy. So pass the feed_url into run_process_media
  rather than the request object, and allow the feed URL to be empty
  (resulting in no PuSH notification at all).
  Signed-off-by: Sebastian Spaeth <Sebastian@SSpaeth.de>
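A sketch of the resulting signature; the process_media import stands for the registered celery task and is an assumption of this example (Python 2-era codebase, hence unicode):

```python
# Assumed import: the registered celery task for media processing.
from mediagoblin.processing.task import process_media


def run_process_media(entry, feed_url=None):
    # feed_url is now an explicit, optional argument: web views pass the
    # uploader's Atom feed URL, CLI tools pass nothing, and an empty
    # feed_url simply means "send no PuSH notification".
    process_media.apply_async(
        [unicode(entry.id), feed_url], {},
        task_id=entry.queued_task_id)
```

A web view would then call run_process_media(entry, request.urlgen('mediagoblin.user_pages.atom_feed', qualified=True, user=request.user.username)), while a CLI tool just calls run_process_media(entry).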
* Notifying the PuSH servers had three problems:
  1) It was done immediately after sending the processing task to celery,
     so if celery ran in a separate process we would notify the PuSH
     servers before the new media was processed/visible. (#436)
  2) The notification code was called in submit/views.py, so submitting
     via the API never resulted in notifications. (#585)
  3) If notifying the PuSH server failed, we would never retry.

  The solution was to make the PuSH notification an asynchronous subtask.
  This way: 1) it is only called once async processing has finished, 2) it
  is in the main processing code path, so even API calls result in
  notifications, and 3) we retry 3 times in case of failure before giving
  up, waiting 2 minutes between attempts when the celery server runs as a
  separate process.

  The only downside is that the celery server needs access to the internet
  to ping the PuSH server. If that is a problem, the task would need to run
  on a special group of celery servers that has internet access.

  As a side effect, I believe this removes the limitation that prevented
  us from upgrading celery.

  Signed-off-by: Sebastian Spaeth <Sebastian@SSpaeth.de>
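A sketch of the shape of that subtask, wiring the numbers from the message above (3 retries, 2 minutes apart) into a celery task; the task name and the hub-ping details are assumptions of this example, not MediaGoblin's exact code:

```python
from celery import shared_task

try:  # Python 3
    from urllib.parse import urlencode
    from urllib.request import Request, urlopen
except ImportError:  # Python 2, which MediaGoblin used at the time
    from urllib import urlencode
    from urllib2 import Request, urlopen


@shared_task(bind=True, max_retries=3, default_retry_delay=2 * 60)
def handle_push_urls(self, feed_url, push_urls):
    # Runs as a subtask chained after media processing, so it only fires
    # once the new media is actually visible, and API submissions get
    # notifications too.
    data = urlencode({'hub.mode': 'publish', 'hub.url': feed_url})
    try:
        for hub_url in push_urls:
            urlopen(Request(hub_url, data.encode('utf-8')))
    except Exception as exc:
        # Give up after 3 attempts, waiting 2 minutes between them.
        raise self.retry(exc=exc)
```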
* First, rename prepare_entry to prepare_queue_task, because
  that is closer to what this thing actually does.
  Thanks to Velmont for noting that we do not need a request
  in here; an "app" is good enough. That means this stuff can
  be called from tool scripts too.
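The practical effect of the app-based signature, sketched; the MediaGoblinApp construction and import paths here are assumptions of this example:

```python
from mediagoblin.app import MediaGoblinApp              # import paths assumed
from mediagoblin.submit.lib import new_upload_entry, prepare_queue_task


def queue_from_script(user, source_path):
    # No web request anywhere: a tool script builds the app straight from
    # the config file and hands that to the helper instead.
    app = MediaGoblinApp('mediagoblin.ini', setup_celery=False)
    entry = new_upload_entry(user)          # see the entry tool above
    queue_file = prepare_queue_task(app, entry, u'upload.png')
    with open(source_path, 'rb') as source:
        queue_file.write(source.read())
    queue_file.close()
    return entry
```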
* prepare_entry handles the task_id setup and generates a
  queue filename and file. It returns the queue file.
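A sketch of that flow, assuming a queue_store with get_unique_filepath/get_file methods like MediaGoblin's storage API (method names are assumptions; Python 2 era):

```python
import uuid

from werkzeug.utils import secure_filename


def prepare_entry(request, entry, filename):
    # Give the entry a task id up front so the eventual processing task
    # and the entry can find each other.
    task_id = unicode(uuid.uuid4())
    entry.queued_task_id = task_id

    # Reserve a unique path in the queue store, remember it on the entry,
    # then hand back an open file for the caller to write the upload into.
    queue_filepath = request.app.queue_store.get_unique_filepath(
        ['media_entries', task_id, secure_filename(filename)])
    entry.queued_media_file = queue_filepath
    return request.app.queue_store.get_file(queue_filepath, 'wb')
```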
* Calling the processing task and handling the exceptions is
  easy, but has a bunch of caveats, so factor it out into an
  easily callable function.
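A sketch of the factored-out caller with the caveats in one place; this is the earlier form, before the feed_url parameter above was added, and mark_entry_failed plus the import paths mirror MediaGoblin's layout but are assumptions here:

```python
from mediagoblin.processing import mark_entry_failed    # import paths assumed
from mediagoblin.processing.task import process_media


def run_process_media(entry):
    # One well-known place to fire the processing task, so every caller
    # gets the same error handling instead of reimplementing the caveats.
    try:
        process_media.apply_async(
            [unicode(entry.id)], {},
            task_id=entry.queued_task_id)
    except BaseException as exc:
        # The task never made it onto the queue; mark the entry as failed
        # (it would otherwise look "processing" forever) and re-raise so
        # the caller can report the problem.
        mark_entry_failed(entry.id, exc)
        raise
```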
* Start to refactor our upload handling, in the main submit flow and
  in the API. Start factoring out PuSH URL handling.
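One way that PuSH URL handling might get factored out, sketched; the helper name and the route name follow MediaGoblin's user-pages routing but are assumptions of this example:

```python
def get_feed_url(request):
    # Hypothetical helper: the uploader's Atom feed URL, which is what
    # the PuSH hubs get pinged about, so both the submit view and the
    # API derive it from one place.
    return request.urlgen(
        'mediagoblin.user_pages.atom_feed',
        qualified=True, user=request.user.username)
```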