It's always a simple error in the end, you know?
Signed-off-by: Jody Bruchon <jody@jodybruchon.com>
This doesn't result in an elegant, perfectly balanced search tree,
but it's absolutely good enough. This commit completely mitigates
the worst-case scenario where the archive file is sorted.
Signed-off-by: Jody Bruchon <jody@jodybruchon.com>
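The commit does not spell out how the sorted-input worst case is mitigated, but a common fix for an unbalanced binary search tree is to randomize the insertion order before building the tree. A hypothetical sketch of that idea (the `BSTNode` helper and the shuffling step are illustrative, not the actual patch):

```python
import random

class BSTNode:
    """Minimal unbalanced binary-search-tree node (illustrative)."""
    __slots__ = ("key", "left", "right")

    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None

def bst_insert(root, key):
    # Iterative insert; returns the (possibly new) root.
    if root is None:
        return BSTNode(key)
    node = root
    while True:
        if key < node.key:
            if node.left is None:
                node.left = BSTNode(key)
                return root
            node = node.left
        elif key > node.key:
            if node.right is None:
                node.right = BSTNode(key)
                return root
            node = node.right
        else:
            return root  # duplicate entry; nothing to do

def build_tree(lines):
    # Shuffling first breaks the pathological case where the
    # archive file happens to be sorted: expected depth becomes
    # O(log n) regardless of the on-disk order.
    lines = list(lines)
    random.shuffle(lines)
    root = None
    for line in lines:
        root = bst_insert(root, line)
    return root

def tree_depth(node):
    if node is None:
        return 0
    return 1 + max(tree_depth(node.left), tree_depth(node.right))
```

Without the shuffle, inserting n sorted keys produces a right-leaning chain of depth n; with it, the expected depth is roughly 2·log2(n).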
Sorted archives turn the binary tree into a linked list and make
things horribly slow. This is an incomplete mitigation for this
issue.
The old behavior was to open and scan the entire archive file for
every single video download. This resulted in horrible performance
for archives of any remotely large size, especially since all new
video IDs are appended to the end of the archive. For anyone who
uses the archive feature to maintain archives of entire video
playlists or channels, this meant that all such lists with newer
downloads would have to scan close to the end of the archive file
before the potential download was rejected. For archives with tens
of thousands of lines, this easily resulted in millions of line
reads and checks over the course of scanning a single channel or
playlist that had been seen previously.

The new behavior in this commit is to preload the archive file
into a binary search tree and scan the tree instead of constantly
scanning the file on disk for every download. When a new download
is appended to the archive file, it is also added to this tree.
Performance is massively better using this strategy than the
"naive" line-by-line archive file parsing strategy.

The only negative consequence of this change is that the archive
in memory will not be synchronized with the archive file on disk.
Running multiple instances of the program at the same time that
all use the same archive file may result in duplicate archive
entries or duplicated downloads. This is unlikely to be a serious
issue for the vast majority of users. If the instances are not
likely to try to download identical video IDs, this should not be
a problem anyway; for example, having two instances pull two
completely different YouTube channels at once should be fine.
Signed-off-by: Jody Bruchon <jody@jodybruchon.com>
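The strategy described above can be sketched as follows. For brevity this sketch swaps the commit's binary search tree for a sorted list queried with the stdlib `bisect` module, which gives the same O(log n) lookups; the `ArchiveIndex` name and its methods are illustrative, not the actual youtube-dl code:

```python
import bisect
import os

class ArchiveIndex:
    """In-memory index of a download-archive file (illustrative).

    Loaded once at startup; membership checks are O(log n) instead
    of a full file scan per video. Note the caveat from the commit
    message: other processes appending to the same file will not be
    reflected in this process's in-memory index.
    """

    def __init__(self, path):
        self.path = path
        self.entries = []
        if os.path.exists(path):
            with open(path, "r", encoding="utf-8") as fh:
                self.entries = sorted(set(
                    line.strip() for line in fh if line.strip()))

    def __contains__(self, entry):
        # Binary search over the sorted in-memory entries.
        i = bisect.bisect_left(self.entries, entry)
        return i < len(self.entries) and self.entries[i] == entry

    def add(self, entry):
        # Keep the in-memory index and the on-disk file in step
        # for this process: append to both.
        if entry in self:
            return
        bisect.insort(self.entries, entry)
        with open(self.path, "a", encoding="utf-8") as fh:
            fh.write(entry + "\n")
```

A downloader would check `entry in index` before fetching and call `index.add(entry)` after a successful download.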
https://github.com/Zocker1999NET/youtube-dl into Zocker1999NET-ext/remuxe-video
Fixes #6996
- Supported formats declared: mp4, mkv
- Added FFmpegVideoRemuxerPP as postprocessor
- Added option to README and shell-completion scripts
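Remuxing changes only the container, copying the encoded streams rather than re-encoding them. The post-processor presumably shells out to ffmpeg along these lines; the command construction below is an illustrative sketch, not the code from the pull request:

```python
def build_remux_cmd(infile, target_ext):
    """Build an ffmpeg command that rewraps `infile` into a new
    container without re-encoding (illustrative sketch)."""
    if target_ext not in ("mp4", "mkv"):  # formats declared by the PP
        raise ValueError("unsupported remux target: %s" % target_ext)
    outfile = infile.rsplit(".", 1)[0] + "." + target_ext
    # -c copy copies all streams bit-for-bit into the new container.
    return ["ffmpeg", "-y", "-i", infile, "-c", "copy", outfile]
```

Because no transcoding happens, a remux is nearly instantaneous compared to a re-encode.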
Update README.md
cleanup + typo fix
tpikonen-elonet
With this change, the merge operator may join any number of media streams,
video or audio. The streams are downloaded in the order specified.
Also, fix the metadata post-processor so that it doesn't leave out
any streams.
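With N inputs, an ffmpeg merge needs one `-i` per downloaded stream file and one `-map` per input so that no stream is dropped; the order of the `-map` options fixes the stream order in the output. A hypothetical sketch of the command construction (not the actual FFmpegMergerPP code):

```python
def build_merge_cmd(input_files, outfile):
    """Merge any number of already-downloaded media streams into one
    container, preserving the order given (illustrative sketch)."""
    cmd = ["ffmpeg", "-y"]
    for f in input_files:
        cmd += ["-i", f]
    # One -map per input keeps every stream; relying on ffmpeg's
    # default stream selection would keep only one video and one
    # audio stream and silently drop the rest.
    for i in range(len(input_files)):
        cmd += ["-map", str(i)]
    cmd += ["-c", "copy", outfile]
    return cmd
```

For example, one video stream plus two audio streams would yield `-map 0 -map 1 -map 2`, carrying all three into the merged file.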
[ci skip]
* Fix WebP with wrong extension processing
* Fix embedding of thumbnails with % character in path
(closes #25687)
#23919, closes #24689, closes #26565)
#22063)
listing a user's tracks. (#26557)
Per the documentation at https://developers.soundcloud.com/blog/offset-pagination-deprecated, the maximum limit is 200, so let's respect that (even if a higher value sometimes works).
Co-authored-by: tfvlrue <tfvlrue>
Update README.md
[gdcvault] fix extractor
at least when not logged in?
[kakao] new apis
There are also ageLimit and GeoBlock attributes provided by api_json, if needed.