Add check for extracting duration for shorts
Make short duration extraction stricter
Fix handling shorts with no views
|
Add channel tabs to the channel template and script
Update continuation token to request different tabs
Add support for 'reelItemRenderer' format required to extract shorts
|
Closes #160
|
Huge thanks to @michaelweiser
Different sort orders still don't work for videos and playlists
|
This reverts commit a0c3ca0159136d17eefa129176ae1904110238b8.
|
Adds support for AV1-encoded videos, which include all videos
above 1080p. These were not being included because they had no
quality entry in the format table at the top of
watch_extraction.py, so the quality is now taken from the format's
quality label when the table has no entry.
Because YouTube often provides BOTH AV1 and H.264 (AVC) streams for
each quality, including them all would produce far too many quality
options, so the code has to choose which codec to use. The choice is
not obvious: AV1 encodes the same quality in fewer bytes and is
patent-free, but it has less hardware support and may be harder to
play. For instance, on my system AV1 does not work at 1080p while
H.264 does. Adds a setting for which codec to prefer, with H.264 as
the default.
Also adds support for the lower-quality mp4 audio format, which is
now used at 144p to save network bandwidth. For similar reasons, it
was not being included because it had no audio_bitrate entry in the
table; the bitrate itself is now used to rank audio quality instead.
Signed-off-by: Jesús <heckyel@hyperbola.info>
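As a rough illustration of the quality-label fallback and codec choice described above, here is a minimal sketch; the field names ('quality', 'quality_label', 'codec') and the function names are assumptions for illustration, not the project's actual code:

```python
import re

def fill_in_quality(fmt):
    # When the static format table has no quality entry for this format,
    # fall back to its quality label, e.g. "1440p60" -> 1440.
    if fmt.get('quality') is None and fmt.get('quality_label'):
        match = re.match(r'(\d+)p', fmt['quality_label'])
        if match:
            fmt['quality'] = int(match.group(1))
    return fmt

def pick_by_codec(formats, prefer_av1=False):
    # YouTube often provides both AV1 and H.264 at each quality; keep only
    # one format per quality, preferring the configured codec when both exist.
    preferred = 'av01' if prefer_av1 else 'avc1'
    chosen = {}
    for fmt in map(fill_in_quality, formats):
        quality = fmt.get('quality')
        if quality is None:
            continue
        current = chosen.get(quality)
        if current is None or (
                fmt.get('codec', '').startswith(preferred)
                and not current.get('codec', '').startswith(preferred)):
            chosen[quality] = fmt
    return [chosen[q] for q in sorted(chosen, reverse=True)]
```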
|
Signed-off-by: Jesús <heckyel@hyperbola.info>
|
Signed-off-by: Jesús <heckyel@hyperbola.info>
|
YouTube reverted the changes they made that prompted f9f5d5ba.
In case they change their minds again, this adds support for both
formats.
The liberal_update and conservative_update functions needed to be
modified to handle the case of empty lists, so that a successfully
extracted 'music_list': [{'Author': ...}, ...] will not be
overwritten by 'music_list': [] in the calls to liberal_dict_update.
Signed-off-by: Jesús <heckyel@hyperbola.info>
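A sketch of the empty-list handling described above (simplified; the real functions in yt_data_extract may differ):

```python
def liberal_update(obj, key, value):
    # Overwrite obj[key] unless the new value is None or an empty list,
    # so real extracted data is not clobbered by a later empty result.
    if (value is not None and value != []) or (key not in obj):
        obj[key] = value

def conservative_update(obj, key, value):
    # Only fill obj[key] if it is currently missing, None, or an empty list.
    if obj.get(key) is None or obj.get(key) == []:
        obj[key] = value

info = {}
liberal_update(info, 'music_list', [{'Author': 'Someone'}])
liberal_update(info, 'music_list', [])  # ignored; the extracted list survives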
|
The watch_comment api periodically gives the error "Top level
comments mweb servlet is turned down."
The continuation items for the new api are arranged differently in
the json, so changes to the extract_items function were necessary.
Signed-off-by: Jesús <heckyel@hyperbola.info>
|
Also moves some microformat extraction from
_extract_watch_info_mobile to extract_watch_info where it belongs.
_extract_watch_info_mobile is really only for stuff visible on the
page, and thus specialized for either mobile or desktop.
Signed-off-by: Jesús <heckyel@hyperbola.info>
|
Information from additional_info was being overridden with None.
Signed-off-by: Jesús <heckyel@hyperbola.info>
|
Uses previous and next buttons. Now more than just the first page
of the playlists page can be viewed.
Signed-off-by: Jesús <heckyel@hyperbola.info>
|
Fixes channel pages > 1
Signed-off-by: Jesús <heckyel@hyperbola.info>
|
Some searches have these renderers instead of the usual ones
Signed-off-by: Jesús <heckyel@hyperbola.info>
|
They cannot be viewed on their own, so change the url in these
items to go to the video+playlist instead
Signed-off-by: Jesús <heckyel@hyperbola.info>
|
For instance, urls that start with // become https://.
An adjustment was required in comments.py because the url had been
left as a relative url in yt_data_extract by mistake, and the
URL_ORIGIN prefix was being used as a workaround.
See #31
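For illustration, a minimal sketch of this kind of url normalization; the function name and origin are assumptions, not the project's exact code:

```python
import urllib.parse

def absolute_url(url, origin='https://www.youtube.com'):
    if url is None:
        return None
    if url.startswith('//'):   # protocol-relative urls become https
        return 'https:' + url
    if url.startswith('/'):    # site-relative urls get the origin prefixed
        return urllib.parse.urljoin(origin, url)
    return url
```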
|
This was causing an exception in subscriptions when it tried
to estimate the unix timestamp for the upload time
|
[something]Continuation renderers, all of which are junk except
one. Check the items in each one until the one containing the
items being sought is found.
The usage in extract_comments_info needed to be changed to specify
the items being sought. Leaving it unspecified was strictly
incorrect, since extract_items by default looks for
video/playlist/channel thumbnail items; it was relying on a special
case for continuations, which would no longer work.
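A hedged sketch of the "check each continuation renderer" idea; the key name 'continuationItems' and the overall shape are simplifications of the real JSON, not the project's exact code:

```python
def items_from_continuations(continuation_renderers, sought_types):
    for renderer in continuation_renderers:
        items = renderer.get('continuationItems', [])
        # Only one of the continuation renderers holds the kinds of items
        # being sought; the rest are junk and get skipped.
        if any(sought_types.intersection(item) for item in items):
            return items
    return []

# e.g. items_from_continuations(renderers, {'commentThreadRenderer', 'commentRenderer'})
```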
|
Youtube added some pointless variation in variable names
|
Also change superfluous use of multi_get to item.get nearby
|
By checking whether it is in item_types before checking whether it
can be dug into.
For example, this allows extracting things like sectionListRenderer.
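A simplified sketch of the new ordering, not the project's actual extract_items: a node is reported as soon as its key is in item_types, before any attempt to dig into it.

```python
def collect(node, item_types, results):
    if isinstance(node, list):
        for child in node:
            collect(child, item_types, results)
        return
    if not isinstance(node, dict):
        return
    for key, value in node.items():
        if key in item_types:
            # Match found first, so e.g. a whole sectionListRenderer is kept
            # as an item instead of only its nested contents.
            results.append({key: value})
        else:
            collect(value, item_types, results)
```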
|
independently
extract_items_from_renderer will extract items given just a
renderer rather than a whole response
|
Change usage of multi_deep_get to multi_get where possible.
Remove type checking from calls to the get functions, because it is
very unlikely Youtube would suddenly change the type of a field
without also changing its name, and the checks take up unnecessary
space.
Remove all default=None arguments from calls to the get functions,
since those are superfluous.
Remove the list_types constant since it is no longer in use.
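For context, a simplified sketch of what these get helpers do (not the project's exact implementations):

```python
def multi_get(dictionary, *keys, default=None):
    # Return the value for the first key that is present and not None.
    for key in keys:
        value = dictionary.get(key)
        if value is not None:
            return value
    return default

def deep_get(obj, *path, default=None):
    # Follow a path of keys/indices, returning default if any step fails.
    for key in path:
        try:
            obj = obj[key]
        except (KeyError, IndexError, TypeError):
            return default
    return obj

def multi_deep_get(obj, *paths, default=None):
    # Try several paths and return the first one that resolves to a value.
    for path in paths:
        result = deep_get(obj, *path)
        if result is not None:
            return result
    return default
```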
|
Philosophically, a channel doesn't create itself.
|
more robust
For example, "354 subscribers" wasn't being extracted correctly by
extract_approx_int.
Make extract_approx_int and extract_int only extract integers that
are whole words, so e.g. 342 will not be extracted from
internetuser342.
|
names. Extract subscriber_count correctly.
Don't just shove English strings into info['stats']; give the stats
actual semantic names.
|
It wasn't working because decimals such as 15.1M weren't considered, so it was extracting "1M"
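A sketch of the decimal fix (assumed helper shape, not the project's code): allow an optional fractional part before the K/M/B multiplier so "15.1M" is read as a whole rather than matching only the trailing "1M".

```python
import re

MULTIPLIERS = {'K': 1_000, 'M': 1_000_000, 'B': 1_000_000_000}

def extract_approx_int(text):
    match = re.search(r'(\d+(?:\.\d+)?)\s*([KMB])?', text)
    if match is None:
        return None
    return int(round(float(match.group(1)) * MULTIPLIERS.get(match.group(2), 1)))

print(extract_approx_int('15.1M views'))  # 15100000
```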
|
yt_data_extract
|
internal helper function names
Move get_captions_url in watch_extraction to the bottom, next to
the other exported, public functions