path: root/mediagoblin/processing/task.py
Commit history (most recent first; each entry shows the commit subject, then [author, date; files changed, lines removed/added]):
* Apply pyupgrade --py36-plus. [Ben Sturmfels, 2021-09-23; 1 file, -2/+2]
  This removes some 'u' prefixes and converts simple format() calls to f-strings.
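For illustration, this is the kind of rewrite `pyupgrade --py36-plus` performs; the snippet below is a made-up example, not the actual contents of task.py.

```python
entry_id = 42  # placeholder value for the example

# Before: Python 2-era spellings
greeting = u'Hello'
message = 'Processing entry {}'.format(entry_id)

# After pyupgrade --py36-plus: the 'u' prefix is dropped and the simple
# format() call becomes an f-string
greeting = 'Hello'
message = f'Processing entry {entry_id}'
```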
* Remove remaining imports/calls to six not automatically removed by pyupgrade. [Ben Sturmfels, 2021-03-05; 1 file, -1/+1]
* Apply `pyupgrade --py3-plus` to remove Python 2 compatibility code. [Ben Sturmfels, 2021-03-05; 1 file, -9/+9]
* Switch to rabbitmq by default and in docs [Boris Bobrov, 2017-06-09; 1 file, -0/+3]
* updated function docs [Boris Bobrov, 2015-02-16; 1 file, -0/+3]
* Fix #658 and #974 - Rollback database on_return of task [Jessica Tallon, 2014-12-01; 1 file, -0/+14]
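A minimal sketch of the idea behind this fix, assuming a scoped SQLAlchemy session and using Celery's `after_return` hook (the commit title calls it on_return); the names below are illustrative, not the actual MediaGoblin code.

```python
from celery import Task
from sqlalchemy.orm import scoped_session, sessionmaker

# Assumed stand-in for the scoped SQLAlchemy session MediaGoblin keeps around.
Session = scoped_session(sessionmaker())


class ProcessMediaSketch(Task):
    """Hypothetical task base class illustrating the rollback-on-return idea."""

    def after_return(self, status, retval, task_id, args, kwargs, einfo):
        # Whatever the outcome (success, failure, retry), leave the worker's
        # database session clean for the next task.
        Session.rollback()
        Session.remove()
```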
* Change urllib and urllib2 imports with six.moves.urllib. [Berker Peksag, 2014-06-07; 1 file, -6/+6]
* Merge remote-tracking branch 'refs/remotes/rodney757/reprocessing' [Christopher Allan Webber, 2013-08-21; 1 file, -17/+29]
  Conflicts:
    mediagoblin/processing/task.py
    mediagoblin/submit/lib.py
  (The six commits dated 2013-08-16 below came in via this merge.)
* catch processing exceptions and if entry_orig_state is processed, then ignore the exception [Rodney Ewing, 2013-08-16; 1 file, -1/+12]
* Record the original state of the media entry in the processor [Christopher Allan Webber, 2013-08-16; 1 file, -4/+6]
  This allows our processor to make some informed decisions based on the state by still having access to the original state. This commit sponsored by William Rico. Thank you!
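Taken together with the "catch processing exceptions" entry just above, the mechanism is roughly the following; all names here are hypothetical, a sketch rather than the real processor code.

```python
class MediaProcessorSketch:
    """Illustrates keeping the entry's original state around (hypothetical)."""

    def __init__(self, entry):
        self.entry = entry
        # Remember what state the entry was in before we touched it.
        self.entry_orig_state = entry.state

    def run_processing(self):
        # Stand-in for the real per-media-type work (transcoding, thumbnails, ...).
        raise NotImplementedError

    def process(self):
        try:
            self.run_processing()
        except Exception:
            if self.entry_orig_state == 'processed':
                # Reprocessing an already-processed entry failed; the previous
                # good version is still there, so ignore the exception.
                return
            raise
```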
* Renaming the processing manager stuff to be less ambiguous. [Christopher Allan Webber, 2013-08-16; 1 file, -2/+2]
  BONUS COMMIT to Ben Finney and the Free Software Melbourne crew. :) IRONY: Initially I committed this as "media manager".
* Updating to the point where we can allllmost run with the new reprocessing code [Christopher Allan Webber, 2013-08-16; 1 file, -17/+9]
  This commit sponsored by Odin Hørthe Omdal. Thank you!
* added comments and did a little refactoring. not sure if it is actually any clearer though [Rodney Ewing, 2013-08-16; 1 file, -2/+10]
* added image reprocessing [Rodney Ewing, 2013-08-16; 1 file, -2/+4]
* need self.metadata with BaseProcessingFail; pass feed_url into ProcessMedia run() [Rodney Ewing, 2013-08-19; 1 file, -1/+2]
* update to latest master [Rodney Ewing, 2013-08-19; 1 file, -7/+10]
  - have mg generate task_id remove
* Tweak Celery Task [Sebastian Spaeth, 2013-08-19; 1 file, -12/+10]
  - Make sure Exceptions are pickleable (not sure if this was not the case, but this is the pattern as documented in the Celery docs).
  - Don't create a task_id in the GMG code, but save the one implicitly created by Celery.
  - Don't create a task-id directory per upload. Just store queued uploads in a single directory (this is the most controversial change and might need discussion!!!)
  Signed-off-by: Sebastian Spaeth <Sebastian@SSpaeth.de>
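The "save the one implicitly created by celery" point can be sketched with the standard Celery API as below; the task, broker URL and entry attribute are assumptions for illustration only.

```python
from celery import Celery

app = Celery('sketch', broker='amqp://')  # assumes a RabbitMQ broker, as in the 2017 entry above


@app.task
def process_media(media_id):
    """Stand-in for the real processing task."""


def enqueue(entry):
    # Let Celery generate the task id implicitly and simply record it,
    # instead of inventing one in the GMG code.
    result = process_media.apply_async([entry.id])
    entry.queued_task_id = result.id  # assumed attribute on the media entry
    entry.save()
    return result
```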
* Revert "Leave slug empty until we are sure media processing was successful."Rodney Ewing2013-08-081-2/+0
| | | | | | | This reverts commit f67611fb485b5a84cedc62b73beb1e551e8cb934. For some reason, generating a slug here throws an integrity error during a query when there is a duplicate slug.
* Leave slug empty until we are sure media processing was successful. [Rodney Ewing, 2013-08-07; 1 file, -0/+2]
  Patch submitted by LotusEcho
* MediaManager: Use .foo instead of ['foo']. [Elrond, 2013-04-17; 1 file, -1/+1]
  To make .media_fetch_order work, create a property.
* Kill monkeypatching of ProcessingState. [Elrond, 2013-02-08; 1 file, -1/+1]
  And change the process_foo() API to accept a ProcessingState now. Image and video are tested, the others are UNTESTED.
* Implement ProcessingState class and use for images [Elrond, 2013-02-08; 1 file, -3/+6]
  The idea is to have a class that has knowledge of the media currently being processed and also has tools for that. The long-term idea is to make reprocessing easier by, for example, hiding the way the original comes into the processing code.
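A rough sketch of the shape described in these two entries; method and attribute names are invented for illustration and differ from the real class.

```python
class ProcessingStateSketch:
    """Knows which media entry is currently being processed and offers tools
    for working with it; a sketch of the idea, not the real class."""

    def __init__(self, entry):
        self.entry = entry

    def get_orig_filename(self):
        # Hypothetical helper: hide how the original file reaches the
        # processing code (workbench copy, local path, remote storage, ...).
        return self.entry.queued_media_file  # assumed attribute name


def process_image(proc_state):
    # After "Kill monkeypatching of ProcessingState", the per-media-type
    # process_foo() functions take the processing state object.
    entry = proc_state.entry
    original = proc_state.get_orig_filename()
    # ... resize, thumbnail, mark the entry processed ...
    return entry, original
```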
* Don't pass request into run_process_media [Sebastian Spaeth, 2013-01-15; 1 file, -1/+2]
  People(tm) want to start run_process_media from the CLI and might not have a request object handy. So pass the feed_url into run_process_media rather than the request object, and allow the feed URL to be empty (resulting in no PuSH notification at all).
  Signed-off-by: Sebastian Spaeth <Sebastian@SSpaeth.de>
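The resulting call shape, very roughly; the task definition and the route name in the comment are stand-ins, and the real function has changed since.

```python
from celery import Celery

app = Celery('sketch')


@app.task
def process_media(media_id, feed_url=None):
    """Stand-in for the real processing task."""


def run_process_media(entry, feed_url=None):
    # The caller now passes a feed_url (or nothing at all, e.g. from the CLI)
    # instead of a whole request object; an empty feed_url simply means no
    # PuSH notification is sent once processing finishes.
    return process_media.apply_async([entry.id, feed_url])


# From a web view the feed URL might be built from the request, e.g.:
#   feed_url = request.urlgen('mediagoblin.user_pages.atom_feed',
#                             qualified=True, user=request.user.username)
# From the command line, just run_process_media(entry).
```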
* Make PuSHing the PubSubHubbub server an async task (#436, #585) [Sebastian Spaeth, 2013-01-15; 1 file, -3/+45]
  Notifying the PuSH servers had 3 problems:
  1) It was done immediately after sending the processing task to Celery. So if Celery was run in a separate process, we would notify the PuSH servers before the new media was processed/visible. (#436)
  2) Notification code was called in submit/views.py, so submitting via the API never resulted in notifications. (#585)
  3) If notifying the PuSH server failed, we would never retry.
  The solution was to make the PuSH notification an asynchronous subtask. This way: 1) it will only be called once async processing has finished, 2) it is in the main processing code path, so even API calls will result in notifications, and 3) we retry 3 times in case of failure before giving up. If the server is in a separate process, we will wait 3x 2 minutes before retrying the notification.
  The only downside is that the Celery server needs to have access to the internet to ping the PuSH server. If that is a problem, we need to make the task belong to a special group of Celery servers that has access to the internet. As a side effect, I believe I removed the limitation that prevented us from upgrading Celery.
  Signed-off-by: Sebastian Spaeth <Sebastian@SSpaeth.de>
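A hedged sketch of the retrying notification subtask this describes, written with the requests library for brevity; the hub URL, task name and payload are placeholders rather than the actual MediaGoblin code.

```python
import requests
from celery import Celery

app = Celery('sketch')


@app.task(bind=True, max_retries=3, default_retry_delay=120)
def handle_push_urls(self, feed_url):
    """Sketch: tell a PuSH hub that feed_url has been updated.

    Runs as a subtask once processing has finished, and retries up to
    three times, two minutes apart, if the hub cannot be reached.
    """
    hub_url = 'https://push-hub.example.org/'  # placeholder hub address
    try:
        response = requests.post(hub_url, data={
            'hub.mode': 'publish',
            'hub.url': feed_url,
        })
        response.raise_for_status()
    except requests.RequestException as exc:
        # Let Celery schedule another attempt rather than failing the
        # whole processing pipeline.
        raise self.retry(exc=exc)
```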
* Move db.sql.models* to db.models* [Sebastian Spaeth, 2013-01-07; 1 file, -1/+1]
* Remove ObjectId from the tree [Sebastian Spaeth, 2012-12-25; 1 file, -3/+2]
  This was one of the last remaining Mongo holdouts and has been removed from the tree herewith. Good bye, ObjectId.
  Signed-off-by: Sebastian Spaeth <Sebastian@SSpaeth.de>
* Move DBModel._id -> DBModel.id [Sebastian Spaeth, 2012-12-21; 1 file, -4/+4]
  We were referring to model._id in most of the code base, as this is what Mongo uses. However, each use of _id required a) fixup of queries, e.g. what we did in our find() and find_one() functions, moving all '_id' to 'id', and b) using AliasFields to make the ._id attribute available. This all means lots of superfluous fixing and transitioning in a SQL world, and it will not work in the long run. Much newer code already refers to the objects by model.id (e.g. in the oauth plugin), which will break with Mongo.
  So let's be honest, rip out the _id mongoism and live with .id as the one canonical way to address objects. This commit modifies all users and providers of model._id to use model.id instead. This patch works with or without Mongo removed first, but will break Mongo usage (even more than before). I have not bothered to fix up db.mongo.* and db.sql.convert (which converts from Mongo to SQL).
  Signed-off-by: Sebastian Spaeth <Sebastian@SSpaeth.de>
* We don't need to save entries during processing... also adding comments explaining such [Christopher Allan Webber, 2012-12-12; 1 file, -0/+3]
* make media_manager a property of MediaEntry in mixin.py [Sebastian Spaeth, 2012-12-04; 1 file, -4/+1]
  In all cases where get_media_manager(_media_type_as_string) was called in our code base, we ultimately passed in a "MediaEntry().media_type" to get the matching MEDIA_MANAGER. It therefore makes sense to make this a function of the MediaEntry rather than a global function in mediagoblin.media_types, instead of passing media_entry.media_type around as an argument all the time. It saves a few import statements and arguments. I also made the media_manager property cached for subsequent calls, although I am not too sure that this is needed (there are other cases for which this would make more sense).
  Also add a get_media_manager test to the media submission tests. It submits an image and checks that both media.media_type and media.media_manager return the right thing. Not sure whether these tests could be merged with an existing submission test, but it can't hurt to have things explicit.
  TODO: Right now we iterate through all existing media managers to find the right one based on the string of its module name. This should be made a simple dict lookup to avoid all the extra work.
  Signed-off-by: Sebastian Spaeth <Sebastian@SSpaeth.de>
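The cached-property idea in plain Python; the registry dict below is a stand-in for the lookup the commit describes (which at the time iterated over the registered media managers by module-name string).

```python
# Stand-in registry: media_type string -> manager class.
MEDIA_MANAGERS = {'mediagoblin.media_types.image': object}


class MediaEntryMixinSketch:
    """Sketch of making media_manager a cached property of the entry."""

    media_type = 'mediagoblin.media_types.image'  # set on the real model
    _media_manager = None

    @property
    def media_manager(self):
        # Resolve the manager once from this entry's media_type and cache it
        # for later calls, instead of passing media_type around as an argument.
        if self._media_manager is None:
            self._media_manager = MEDIA_MANAGERS[self.media_type]
        return self._media_manager
```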
* HTTP callback fixes [Joar Wandborg, 2012-09-26; 1 file, -1/+1]
  - Added HTTPError catching around the callback request, to not mark the entry as failed but just log the exception.
  - Fixed a bug where I forgot to actually fetch the entry before passing it to json_processing_callback.
  - Changed __main__ migration #6 to create the ProcessingMetaData table as it currently is, to prevent possible breakage if a site admin is lagging behind with their db migrations and more than one migration wants to fix stuff with the ProcessingMetaData table.
* Added support for http callbacks on processing [Joar Wandborg, 2012-09-26; 1 file, -0/+8]
  Sends an HTTP POST request back to a URL given on submission to the API submit view.
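Roughly what the callback behaviour described in this and the previous entry amounts to; a sketch using the requests library, with the attribute name and payload invented for illustration.

```python
import logging

import requests

_log = logging.getLogger(__name__)


def json_processing_callback(entry):
    """Sketch: POST the entry's processing state back to the callback URL
    supplied at submission time, if there was one."""
    callback_url = getattr(entry, 'processing_callback_url', None)  # assumed attribute
    if not callback_url:
        return
    try:
        response = requests.post(callback_url, json={
            'id': entry.id,
            'state': entry.state,
        })
        response.raise_for_status()
    except requests.HTTPError:
        # A failing callback should not mark the entry as failed;
        # just log the exception and move on.
        _log.exception('Processing callback to %s failed', callback_url)
```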
* All processing exceptions are now logged [Joar Wandborg, 2012-08-01; 1 file, -0/+8]
  All processing exceptions should now be logged, the MediaEntry marked as failed, and the exception re-raised.
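The pattern this describes, in outline; the helper and attribute names are assumptions, not the actual MediaGoblin helpers.

```python
import logging

_log = logging.getLogger(__name__)


def run_with_failure_handling(entry, process):
    """Sketch: run `process` for `entry`, logging any failure, marking the
    entry as failed, and re-raising so the error is still visible to Celery."""
    try:
        process(entry)
    except Exception as exc:
        _log.error('Processing of entry %s failed: %s', entry.id, exc)
        entry.state = 'failed'  # assumed failure marker on the MediaEntry
        entry.save()
        raise
```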
* Panel improvements [Joar Wandborg, 2012-07-11; 1 file, -9/+10]
  - Added progress meter for video and audio media types.
  - Changed the __repr__ method of a MediaEntry to display a bit more useful explanation.
  - Added a new MediaEntry.state, 'processing', which means that the task is running the processor on the item currently.
  - Fixed some PEP8 issues in user_pages/views.py
  - Fixed the ATOM TAG URI to show the correct year.
* Minor improvements to the processing panel [Joar Wandborg, 2012-07-10; 1 file, -4/+9]
  - It is now possible to actually see what's processing, due to a bug fix where __getitem__ was called on the db model.
  - Removed DEPRECATED message from the docstring; it wasn't true.
* Move celery task into own task.py [Elrond, 2012-03-21; 1 file, -0/+78]
  Move the actual celery task from processing/__init__.py into its own .../task.py. That way it can be imported as needed.