The copy_locally and copy_local_to_storage methods (inconsistently
named, by the way) were simply slurping everything into RAM and
writing it out in one go. (copy_locally was actually memory-efficient
when the remote system was local.)
Use shutil.copyfileobj, which does chunked reads/writes on file
objects. Its default buffer size is 16 KB, and since each chunk means
a separate HTTP request for e.g. cloudfiles, we use a chunk size of
4 MB here (a value I set arbitrarily, without tests).
This should help with the failure to upload large files (issue #419).
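
A minimal sketch of the chunked copy described above, assuming a
file-like destination object; the helper name and the COPY_CHUNK_SIZE
constant are illustrative, not the actual MediaGoblin API:

    import shutil

    # 4 MB chunks instead of shutil.copyfileobj's 16 KB default, so that
    # backends such as cloudfiles issue far fewer HTTP requests per file.
    COPY_CHUNK_SIZE = 4 * 1024 * 1024

    def copy_local_to_storage_sketch(local_path, dest_file):
        # Stream the source into the storage file object chunk by chunk
        # instead of reading the whole file into RAM first.
        with open(local_path, 'rb') as source:
            shutil.copyfileobj(source, dest_file, COPY_CHUNK_SIZE)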

versions
This utility should make it easy to copy files from a local filesystem
into the storage instance.
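
A rough sketch of what such a copy utility might look like; the
copy_local_to_storage(filename, filepath) signature and the "listy"
filepath convention are assumptions about the storage interface, not
taken from this commit:

    import os

    def copy_tree_to_storage(storage, local_dir):
        # Walk the local directory and push every file into the storage
        # instance, turning each relative path into a list of components.
        for dirpath, _dirnames, filenames in os.walk(local_dir):
            for filename in filenames:
                local_path = os.path.join(dirpath, filename)
                relative = os.path.relpath(local_path, local_dir)
                filepath = relative.split(os.sep)
                storage.copy_local_to_storage(local_path, filepath)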
|
* Removed storage.py
* Created submodules for filestorage, cloudfiles, mountstorage
* Changed test_storage to reflect the changes to the storage module
  structure
* Added mediagoblin.storage.filestorage.BasicFileStorage as a default
  for both publicstore and queuestore's `storage_class` (see the
  sketch below)
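
A minimal sketch of how a `storage_class` setting such as the new
default might be resolved at runtime; the resolver below is
illustrative, assuming a dotted "module.ClassName" path, and is not
MediaGoblin's actual loading code:

    import importlib

    # Default storage class named in this commit.
    DEFAULT_STORAGE_CLASS = 'mediagoblin.storage.filestorage.BasicFileStorage'

    def resolve_storage_class(dotted_path=DEFAULT_STORAGE_CLASS):
        # Split "package.module.ClassName" into its module and class
        # parts, import the module, and return the class object.
        module_name, class_name = dotted_path.rsplit('.', 1)
        module = importlib.import_module(module_name)
        return getattr(module, class_name)

    # publicstore and queuestore would then each instantiate the class;
    # constructor arguments depend on the backend, so none are shown here.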