author     Jesús <heckyel@hyperbola.info>  2021-06-10 16:41:45 -0500
committer  Jesús <heckyel@hyperbola.info>  2021-06-10 16:41:45 -0500
commit     7fd2c3474fa71cfb36f64e7f5c4d89fb21c38334 (patch)
tree       69802416f43f6ef8eff03716933094997bc41b34
parent     d35188178b947d0f3c0c3dbaa0fbfe47d7bdf20a (diff)
download   yt-local-7fd2c3474fa71cfb36f64e7f5c4d89fb21c38334.tar.lz
           yt-local-7fd2c3474fa71cfb36f64e7f5c4d89fb21c38334.tar.xz
           yt-local-7fd2c3474fa71cfb36f64e7f5c4d89fb21c38334.zip
Capitalize app name
-rw-r--r--  README.md                                     16
-rw-r--r--  docs/HACKING.md                               16
-rw-r--r--  server.py                                     10
-rw-r--r--  settings.py                                    4
-rw-r--r--  youtube/__init__.py                            4
-rw-r--r--  youtube/comments.py                            2
-rw-r--r--  youtube/opensearch.xml                         2
-rw-r--r--  youtube/subscriptions.py                       6
-rw-r--r--  youtube/templates/base.html                    2
-rw-r--r--  youtube/util.py                                2
-rw-r--r--  youtube/watch.py                               2
-rw-r--r--  youtube/yt_data_extract/common.py              2
-rw-r--r--  youtube/yt_data_extract/watch_extraction.py    2
13 files changed, 35 insertions(+), 35 deletions(-)
diff --git a/README.md b/README.md
index 69a5825..e560c8a 100644
--- a/README.md
+++ b/README.md
@@ -4,9 +4,9 @@
Fork of [youtube-local](https://github.com/user234683/youtube-local)
-yt-local is a browser-based client written in Python for watching Youtube anonymously and without the lag of the slow page used by Youtube. One of the primary features is that all requests are routed through Tor, except for the video file at googlevideo.com. This is analogous to what HookTube (defunct) and Invidious do, except that you do not have to trust a third-party to respect your privacy. The assumption here is that Google won't put the effort in to incorporate the video file requests into their tracking, as it's not worth pursuing the incredibly small number of users who care about privacy (Tor video routing is also provided as an option). Tor has high latency, so this will not be as fast network-wise as regular Youtube. However, using Tor is optional; when not routing through Tor, video pages may load faster than they do with Youtube's page depending on your browser.
+yt-local is a browser-based client written in Python for watching YouTube anonymously and without the lag of the slow page used by YouTube. One of the primary features is that all requests are routed through Tor, except for the video file at googlevideo.com. This is analogous to what HookTube (defunct) and Invidious do, except that you do not have to trust a third-party to respect your privacy. The assumption here is that Google won't put the effort in to incorporate the video file requests into their tracking, as it's not worth pursuing the incredibly small number of users who care about privacy (Tor video routing is also provided as an option). Tor has high latency, so this will not be as fast network-wise as regular YouTube. However, using Tor is optional; when not routing through Tor, video pages may load faster than they do with YouTube's page depending on your browser.
-The Youtube API is not used, so no keys or anything are needed. It uses the same requests as the Youtube webpage.
+The YouTube API is not used, so no keys or anything are needed. It uses the same requests as the YouTube webpage.
## Screenshots
@@ -19,9 +19,9 @@ The Youtube API is not used, so no keys or anything are needed. It uses the same
[Channel](https://pic.infini.fr/JsenWVYe/SbdIQlS6.png)
## Features
-* Standard pages of Youtube: search, channels, playlists
+* Standard pages of YouTube: search, channels, playlists
* Anonymity from Google's tracking by routing requests through Tor
-* Local playlists: These solve the two problems with creating playlists on Youtube: (1) they're datamined and (2) videos frequently get deleted by Youtube and lost from the playlist, making it very difficult to find a reupload as the title of the deleted video is not displayed.
+* Local playlists: These solve the two problems with creating playlists on YouTube: (1) they're datamined and (2) videos frequently get deleted by YouTube and lost from the playlist, making it very difficult to find a reupload as the title of the deleted video is not displayed.
* Themes: Light, Gray, and Dark
* Subtitles
* Easily download videos or their audio
@@ -29,8 +29,8 @@ The Youtube API is not used, so no keys or anything are needed. It uses the same
* View comments
* JavaScript not required
* Theater and non-theater mode
-* Subscriptions that are independent from Youtube
- * Can import subscriptions from Youtube
+* Subscriptions that are independent from YouTube
+ * Can import subscriptions from YouTube
* Works by checking channels individually
* Can be set to automatically check channels.
* For efficiency of requests, frequency of checking is based on how quickly the channel posts videos
@@ -95,7 +95,7 @@ To run the program on windows, open `run.bat`. On GNU+Linux/MacOS, run `python3
Access YouTube URLs by prefixing them with `http://localhost:8080/`. For instance, `http://localhost:8080/https://www.youtube.com/watch?v=vBgulDeV2RU`
-You can use an addon such as Redirector ([Firefox](https://addons.mozilla.org/en-US/firefox/addon/redirector/)|[Chrome](https://chrome.google.com/webstore/detail/redirector/ocgpenflpmgnfapjedencafcfakcekcd)) to automatically redirect Youtube URLs to yt-local. I use the include pattern `^(https?://(?:[a-zA-Z0-9_-]*\.)?(?:youtube\.com|youtu\.be|youtube-nocookie\.com)/.*)` and the redirect pattern `http://localhost:8080/$1` (Make sure you're using regular expression mode).
+You can use an addon such as Redirector ([Firefox](https://addons.mozilla.org/en-US/firefox/addon/redirector/)|[Chrome](https://chrome.google.com/webstore/detail/redirector/ocgpenflpmgnfapjedencafcfakcekcd)) to automatically redirect YouTube URLs to yt-local. I use the include pattern `^(https?://(?:[a-zA-Z0-9_-]*\.)?(?:youtube\.com|youtu\.be|youtube-nocookie\.com)/.*)` and the redirect pattern `http://localhost:8080/$1` (Make sure you're using regular expression mode).
If you want embeds on the web to also redirect to yt-local, make sure "Iframes" is checked under advanced options in your redirector rule. Test it with `http://localhost:8080/youtube.com/embed/vBgulDeV2RU`
@@ -111,7 +111,7 @@ Ensure Tor is listening for Socks5 connections on port 9150 (a simple way to acc
If you wish to route the video through Tor, set "Route Tor" to "On, including video". Because this is bandwidth-intensive, you are strongly encouraged to donate to the [consortium of Tor node operators](https://torservers.net/donate.html). For instance, donations to [NoiseTor](https://noisetor.net/) go straight towards funding nodes. Using their numbers for bandwidth costs, together with an average of 485 kbit/sec for a diverse sample of videos, and assuming n hours of video watched per day, gives $0.03n/month. A $1/month donation will be a very generous amount to not only offset losses, but help keep the network healthy.
-In general, Tor video routing will be slower (for instance, moving around in the video is quite slow). I've never seen any signs that watch history in yt-local affects on-site Youtube recommendations. It's likely that requests to googlevideo are logged for some period of time, but are not integrated into Youtube's larger advertisement/recommendation systems, since those presumably depend more heavily on in-page tracking through Javascript rather than CDN requests to googlevideo.
+In general, Tor video routing will be slower (for instance, moving around in the video is quite slow). I've never seen any signs that watch history in yt-local affects on-site YouTube recommendations. It's likely that requests to googlevideo are logged for some period of time, but are not integrated into YouTube's larger advertisement/recommendation systems, since those presumably depend more heavily on in-page tracking through JavaScript rather than CDN requests to googlevideo.
### Importing subscriptions
diff --git a/docs/HACKING.md b/docs/HACKING.md
index 82128e5..6e6b7fd 100644
--- a/docs/HACKING.md
+++ b/docs/HACKING.md
@@ -21,7 +21,7 @@
## server.py
* This is the entry point, and sets up the HTTP server that listens for incoming requests. It delegates the request to the appropriate "site_handler". For instance, `localhost:8080/youtube.com/...` goes to the `youtube` site handler, whereas `localhost:8080/ytimg.com/...` (the url for video thumbnails) goes to the site handler for just fetching static resources such as images from youtube.
-* The reason for this architecture: the original design philosophy when I first conceived the project was that this would work for any site supported by youtube-dl, including Youtube, Vimeo, DailyMotion, etc. I've dropped this idea for now, though I might pick it up later. (youtube-dl is no longer used)
+* The reason for this architecture: the original design philosophy when I first conceived the project was that this would work for any site supported by youtube-dl, including YouTube, Vimeo, DailyMotion, etc. I've dropped this idea for now, though I might pick it up later. (youtube-dl is no longer used)
* This file uses the raw [WSGI request](https://www.python.org/dev/peps/pep-3333/) format. The WSGI format is a Python standard for how HTTP servers (I use the stock server provided by gevent) should call HTTP applications. So that's why the file contains stuff like `env['REQUEST_METHOD']`.
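For orientation, here is a minimal sketch of what a WSGI application looks like; it is illustrative only and not taken from server.py, beyond the `env['REQUEST_METHOD']`-style access described above:

```python
# Minimal WSGI application sketch (illustrative, not code from server.py).
# The HTTP server calls this once per request with the raw environ dict.
def application(env, start_response):
    method = env['REQUEST_METHOD']  # e.g. 'GET'
    path = env['PATH_INFO']         # e.g. '/youtube.com/watch'
    body = ('%s %s' % (method, path)).encode('utf-8')
    start_response('200 OK', [('Content-Type', 'text/plain'),
                              ('Content-Length', str(len(body)))])
    return [body]
```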
@@ -29,20 +29,20 @@
## Flask and Gevent
* The `youtube` handler in server.py then delegates the request to the Flask yt_app object, which the rest of the project uses. [Flask](https://flask.palletsprojects.com/en/1.1.x/) is a web application framework that makes handling requests easier than accessing the raw WSGI requests. Flask (Werkzeug specifically) figures out which function to call for a particular url. Each request handling function is registered into Flask's routing table by using function annotations above it. The request handling functions are always at the bottom of the file for a particular youtube page (channel, watch, playlist, etc.), and they're where you want to look to see how the response gets constructed for a particular url. Miscellaneous request handlers that don't belong anywhere else are located in `__init__.py`, which is where the `yt_app` object is instantiated.
-* The actual html for yt-local is generated using Jinja templates. Jinja lets you embed a Python-like language inside html files so you can use constructs such as for loops to construct the html for a list of 30 videos given a dictionary with information for those videos. Jinja is included as part of Flask. It has some annoying differences from Python in a lot of details, so check the [docs here](https://jinja.palletsprojects.com/en/2.11.x/) when you use it. The request handling functions will pass the information that has been scraped from Youtube into these templates for the final result.
+* The actual html for yt-local is generated using Jinja templates. Jinja lets you embed a Python-like language inside html files so you can use constructs such as for loops to construct the html for a list of 30 videos given a dictionary with information for those videos. Jinja is included as part of Flask. It has some annoying differences from Python in a lot of details, so check the [docs here](https://jinja.palletsprojects.com/en/2.11.x/) when you use it. The request handling functions will pass the information that has been scraped from YouTube into these templates for the final result.
* The project uses the gevent library for parallelism (such as for launching requests in parallel), as opposed to using the async keyword.
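As a rough illustration of the routing-plus-template pattern described above (the route, handler, and template names here are hypothetical, not taken from the codebase):

```python
# Hypothetical Flask handler in the style described above.
import flask

yt_app = flask.Flask(__name__)

@yt_app.route('/playlist')  # annotation registers this into Flask's routing table
def get_playlist_page():
    # Information scraped from YouTube would be passed into the Jinja
    # template, which loops over the video dicts to build the html.
    videos = [{'title': 'Example video', 'id': 'vBgulDeV2RU'}]
    return flask.render_template('playlist.html', videos=videos)
```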
## util.py
-* util.py is a grab-bag of miscellaneous things; admittedly I need to get around to refactoring it. The biggest thing it has is the `fetch_url` function which is what I use for sending out requests for Youtube. The Tor routing is managed here. `fetch_url` will raise an a `FetchError` exception if the request fails. The parameter `debug_name` in `fetch_url` is the filename that the response from Youtube will be saved to if the hidden debugging option is enabled in settings.txt. So if there's a bug when Youtube changes something, you can check the response from Youtube from that file.
+* util.py is a grab-bag of miscellaneous things; admittedly I need to get around to refactoring it. The biggest thing it has is the `fetch_url` function, which is what I use for sending out requests to YouTube. The Tor routing is managed here. `fetch_url` will raise a `FetchError` exception if the request fails. The parameter `debug_name` in `fetch_url` is the filename that the response from YouTube will be saved to if the hidden debugging option is enabled in settings.txt. So if there's a bug when YouTube changes something, you can check the response from YouTube in that file.
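A hedged sketch of how a page handler might call `fetch_url`; only the parameters and the `FetchError.code` attribute documented in this repository are assumed, everything else is illustrative:

```python
# Illustrative use of util.fetch_url as documented above.
from youtube import util

try:
    content = util.fetch_url(
        'https://www.youtube.com/watch?v=vBgulDeV2RU',
        debug_name='watch_page',  # response dumped to this file when debugging is on
    )
except util.FetchError as e:
    # e.code is the HTTP status as a string, e.g. '429'
    print('Request failed:', e.code)
```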
## Data extraction - protobuf, polymer, and yt_data_extract
-* proto.py is used for generating what are called ctokens needed when making requests to Youtube. These ctokens use Google's [protobuf](https://developers.google.com/protocol-buffers) format. Figuring out how to generate these in new instances requires some reverse engineering. I have a messy python file I use to make this convenient which you can find under ./youtube/proto_debug.py
+* proto.py is used for generating what are called ctokens, which are needed when making requests to YouTube. These ctokens use Google's [protobuf](https://developers.google.com/protocol-buffers) format. Figuring out how to generate these in new instances requires some reverse engineering. I have a messy Python file I use to make this convenient, which you can find under `./youtube/proto_debug.py`
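For orientation, this is protobuf's base-128 varint encoding, the primitive underlying the wire format that ctokens use; it is a generic illustration, not code from proto.py:

```python
# Protobuf base-128 varint encoding: 7 bits per byte, least-significant
# group first, high bit set on every byte except the last.
def encode_varint(n):
    out = bytearray()
    while True:
        byte = n & 0x7f
        n >>= 7
        if n:
            out.append(byte | 0x80)  # continuation bit: more bytes follow
        else:
            out.append(byte)
            return bytes(out)

assert encode_varint(300) == b'\xac\x02'  # the example from Google's protobuf docs
```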
-* The responses from Youtube are in a JSON format called polymer (polymer is the name of the 2017-present Youtube layout). The JSON consists of a bunch of nested dictionaries which basically specify the layout of the page via objects called renderers. A renderer represents an object on a page in a similar way to html tags; the renders often contain renders inside them. The Javascript on Youtube's page translates this JSON to HTML. Example: `compactVideoRenderer` represents a video item in you can click on such as in the related videos (so these are called "items" in the codebase). This JSON is very messy. You'll need a JSON prettifier or something that gives you a tree view in order to study it.
+* The responses from YouTube are in a JSON format called polymer (polymer is the name of the 2017-present YouTube layout). The JSON consists of a bunch of nested dictionaries which basically specify the layout of the page via objects called renderers. A renderer represents an object on a page in a similar way to html tags; renderers often contain other renderers inside them. The JavaScript on YouTube's page translates this JSON to HTML. Example: `compactVideoRenderer` represents a video item you can click on, such as in the related videos (so these are called "items" in the codebase). This JSON is very messy. You'll need a JSON prettifier or something that gives you a tree view in order to study it.
-* `yt_data_extract` is a module that parses this this raw JSON page layout and extracts the useful information from it into a standardized dictionary. So for instance, it can take the raw JSON response from the watch page and return a dictionary containing keys such as `title`, `description`,`related_videos (list)`, `likes`, etc. This module contains a lot of abstractions designed to make parsing the polymer format easier and more resilient towards changes from Youtube. (A lot of Youtube extractors just traverse the JSON tree like `response[1]['response']['continuation']['gridContinuationRenderer']['items']...` but this tends to break frequently when Youtube changes things.) If it fails to extract a piece of data, such as the like count, it will place `None` in that entry. Exceptions are not used in this module. So it uses functions which return None if there's a failure, such as `deep_get(response, 1, 'response', 'continuation', 'gridContinuationRenderer', 'items')` which returns None if any of those keys aren't present. The general purpose abstractions are located in `common.py`, while the functions for parsing specific responses (watch page, playlist, channel, etc.) are located in `watch_extraction.py` and `everything_else.py`.
+* `yt_data_extract` is a module that parses this raw JSON page layout and extracts the useful information from it into a standardized dictionary. So for instance, it can take the raw JSON response from the watch page and return a dictionary containing keys such as `title`, `description`, `related_videos` (a list), `likes`, etc. This module contains a lot of abstractions designed to make parsing the polymer format easier and more resilient towards changes from YouTube. (A lot of YouTube extractors just traverse the JSON tree like `response[1]['response']['continuation']['gridContinuationRenderer']['items']...` but this tends to break frequently when YouTube changes things.) If it fails to extract a piece of data, such as the like count, it will place `None` in that entry. Exceptions are not used in this module. So it uses functions which return None if there's a failure, such as `deep_get(response, 1, 'response', 'continuation', 'gridContinuationRenderer', 'items')`, which returns None if any of those keys aren't present (see the sketch after this list). The general purpose abstractions are located in `common.py`, while the functions for parsing specific responses (watch page, playlist, channel, etc.) are located in `watch_extraction.py` and `everything_else.py`.
-* Most of these abstractions are self-explanatory, except for `extract_items_from_renderer`, a function that performs a recursive search for the specified renderers. You give it a renderer which contains nested renderers, and a set of the renderer types you want to extract (by default, these are the video/playlist/channel preview items). It will search through the nested renderers and gather the specified items, in addition to the continuation token (ctoken) for the last list of items it finds if there is one. Using this function achieves resiliency against Youtube rearranging the items into a different hierarchy.
+* Most of these abstractions are self-explanatory, except for `extract_items_from_renderer`, a function that performs a recursive search for the specified renderers. You give it a renderer which contains nested renderers, and a set of the renderer types you want to extract (by default, these are the video/playlist/channel preview items). It will search through the nested renderers and gather the specified items, in addition to the continuation token (ctoken) for the last list of items it finds if there is one. Using this function achieves resiliency against YouTube rearranging the items into a different hierarchy.
* The `extract_items` function is similar but works on the response object, automatically finding the appropriate renderer to call `extract_items_from_renderer` on.
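A minimal sketch of the `deep_get` behavior described above, assuming only what is documented (walk a chain of keys/indices, return a default instead of raising); the real helper lives in `common.py`:

```python
# Minimal deep_get sketch (illustrative, not the actual implementation).
def deep_get(obj, *keys, default=None):
    for key in keys:
        try:
            obj = obj[key]
        except (KeyError, IndexError, TypeError):
            return default
    return obj

response = [{}, {'response': {'continuation': {}}}]
# Returns None instead of raising, since 'gridContinuationRenderer' is absent:
items = deep_get(response, 1, 'response', 'continuation',
                 'gridContinuationRenderer', 'items')
assert items is None
```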
@@ -55,7 +55,7 @@
* Since I can't anticipate the things that will trip up beginners to the codebase, if you spend a while figuring something out, go ahead and make a pull request adding a brief description of your findings to this document to help other beginners.
## Development tips
-* When developing functionality to interact with Youtube in new ways, you'll want to use the network tab in your browser's devtools to inspect which requests get made under normal usage of Youtube. You'll also want a tool you can use to construct custom requests and specify headers to reverse engineer the request format. I use the [HeaderTool](https://github.com/loreii/HeaderTool) extension in Firefox, but there's probably a more streamlined program out there.
+* When developing functionality to interact with YouTube in new ways, you'll want to use the network tab in your browser's devtools to inspect which requests get made under normal usage of YouTube. You'll also want a tool you can use to construct custom requests and specify headers to reverse engineer the request format. I use the [HeaderTool](https://github.com/loreii/HeaderTool) extension in Firefox, but there's probably a more streamlined program out there.
* You'll want to have a utility or IDE that can perform full text search on a repository, since this is crucial for navigating unfamiliar codebases to figure out where certain strings appear or where things get defined.
diff --git a/server.py b/server.py
index 09a0a54..ebe67dc 100644
--- a/server.py
+++ b/server.py
@@ -94,7 +94,7 @@ def proxy_site(env, start_response, video=False):
content_length = int(dict(response_headers).get('Content-Length', 0))
if response.status >= 400:
- print('Error: Youtube returned "%d %s" while routing %s' % (
+ print('Error: YouTube returned "%d %s" while routing %s' % (
response.status, response.reason, url.split('?')[0]))
total_received = 0
@@ -113,7 +113,7 @@ def proxy_site(env, start_response, video=False):
content_part = response.read(32*8192)
total_received += len(content_part)
if not content_part:
- # Sometimes Youtube closes the connection before sending all of
+ # Sometimes YouTube closes the connection before sending all of
# the content. Retry with a range request for the missing
# content. See
# https://github.com/user234683/youtube-local/issues/40
@@ -130,7 +130,7 @@ def proxy_site(env, start_response, video=False):
fail_byte = start + total_received
send_headers['Range'] = 'bytes=%d-%d' % (fail_byte, end)
print(
- 'Warning: Youtube closed the connection before byte',
+ 'Warning: YouTube closed the connection before byte',
str(fail_byte) + '.', 'Expected', start+content_length,
'bytes.'
)
@@ -146,14 +146,14 @@ def proxy_site(env, start_response, video=False):
yield content_part
cleanup_func(response)
if retry:
- # Youtube will return 503 Service Unavailable if you do a bunch
+ # YouTube will return 503 Service Unavailable if you do a bunch
# of range requests too quickly.
time.sleep(1)
continue
else:
break
else: # no break
- print('Error: Youtube closed the connection before',
+ print('Error: YouTube closed the connection before',
'providing all content. Retried three times:', url.split('?')[0])
diff --git a/settings.py b/settings.py
index a2373da..7d48bb0 100644
--- a/settings.py
+++ b/settings.py
@@ -47,7 +47,7 @@ SETTINGS_INFO = collections.OrderedDict([
('allow_foreign_addresses', {
'type': bool,
'default': False,
- 'comment': '''This will allow others to connect to your Youtube Local instance as a website.
+ 'comment': '''This will allow others to connect to your YouTube Local instance as a website.
For security reasons, enabling this is not recommended.''',
'hidden': True,
'category': 'network',
@@ -385,7 +385,7 @@ globals().update(current_settings_dict)
if route_tor:
print("Tor routing is ON")
else:
- print("Tor routing is OFF - your Youtube activity is NOT anonymous")
+ print("Tor routing is OFF - your YouTube activity is NOT anonymous")
hooks = {}
diff --git a/youtube/__init__.py b/youtube/__init__.py
index 3c85d47..0a00ebb 100644
--- a/youtube/__init__.py
+++ b/youtube/__init__.py
@@ -18,7 +18,7 @@ yt_app.add_url_rule('/settings', 'settings_page', settings.settings_page, method
@yt_app.route('/')
def homepage():
- return flask.render_template('home.html', title="Youtube local")
+ return flask.render_template('home.html', title="YouTube local")
@yt_app.route('/licenses')
@@ -100,7 +100,7 @@ def error_page(e):
and exc_info()[1].code == '429'
and settings.route_tor
):
- error_message = ('Error: Youtube blocked the request because the Tor'
+ error_message = ('Error: YouTube blocked the request because the Tor'
' exit node is overutilized. Try getting a new exit node by'
' using the New Identity button in the Tor Browser.')
if exc_info()[1].error_message:
diff --git a/youtube/comments.py b/youtube/comments.py
index 208c161..68456bd 100644
--- a/youtube/comments.py
+++ b/youtube/comments.py
@@ -187,7 +187,7 @@ def video_comments(video_id, sort=0, offset=0, lc='', secret_key=''):
return {}
except util.FetchError as e:
if e.code == '429' and settings.route_tor:
- comments_info['error'] = 'Error: Youtube blocked the request because the Tor exit node is overutilized.'
+ comments_info['error'] = 'Error: YouTube blocked the request because the Tor exit node is overutilized.'
if e.error_message:
comments_info['error'] += '\n\n' + e.error_message
comments_info['error'] += '\n\nExit node IP address: %s' % e.ip
diff --git a/youtube/opensearch.xml b/youtube/opensearch.xml
index 9f035a6..aacc9cf 100644
--- a/youtube/opensearch.xml
+++ b/youtube/opensearch.xml
@@ -1,5 +1,5 @@
<SearchPlugin xmlns="http://www.mozilla.org/2006/browser/search/">
-<ShortName>Youtube local</ShortName>
+<ShortName>YouTube local</ShortName>
<Description>no CIA shit in the background</Description>
<InputEncoding>UTF-8</InputEncoding>
<Image width="16" height="16">data:image/x-icon;base64,AAABAAEAEBAAAAEACAAlAgAAFgAAAIlQTkcNChoKAAAADUlIRFIAAAAQAAAAEAgGAAAAH/P/YQAAAexJREFUOI2lkzFPmlEUhp/73fshtCUCRtvQkJoKMrDQJvoHnBzUhc3EH0DUQf+As6tujo4M6mTiIDp0kGiMTRojTRNSW6o12iD4YYXv3g7Qr4O0ScM7npz7vOe+J0fk83lDF7K6eQygwkdHhI+P0bYNxmBXq5RmZui5vGQgn0f7fKi7O4oLC1gPD48BP9JpnpRKJFZXcQMB3m1u4vr9NHp76d/bo39/n4/z84ROThBa4/r91OJxMKb9BSn5mskAIOt1eq6uEFpjVyrEcjk+T0+TXlzkbTZLuFDAur9/nIFRipuREQCe7+zgBgK8mZvj/fIylVTKa/6UzXKbSnnuHkA0GnwbH/cA0a0takND3IyOEiwWAXBiMYTWjzLwtvB9bAyAwMUF8ZUVPiwtYTWbHqA6PIxoNv8OMLbN3eBga9TZWYQxaKX+AJJJhOv+AyAlT0slAG6TSX5n8+zszJugkzxA4PzcK9YSCQCk42DXaq1aGwqgfT5ebG9jpMQyUjKwu8vrtbWWqxC83NjAd31NsO2uleJnX58HCJ6eEjk8BGNQAA+RCOXJScpTU2AMwnUxlkXk4ACA+2iUSKGArNeRjkMsl6M8MYHQGtHpmIxSvFpfRzoORinQGqvZBCEwQoAxfMlkaIRCnQH/o66v8Re19MavaDNLfgAAAABJRU5ErkJggg==</Image>
diff --git a/youtube/subscriptions.py b/youtube/subscriptions.py
index f540e35..503f3fa 100644
--- a/youtube/subscriptions.py
+++ b/youtube/subscriptions.py
@@ -464,7 +464,7 @@ def _get_channel_tab(channel_id, channel_status_name):
except util.FetchError as e:
if e.code == '429' and settings.route_tor:
error_message = ('Error checking channel ' + channel_status_name
- + ': Youtube blocked the request because the'
+ + ': YouTube blocked the request because the'
+ ' Tor exit node is overutilized. Try getting a new exit node'
+ ' by using the New Identity button in the Tor Browser.')
if e.ip:
@@ -562,7 +562,7 @@ def _get_upstream_videos(channel_id):
average_upload_period = int((time.time() - videos[4]['time_published'])/5) # equivalent to averaging the time between videos for the last 5 videos
# calculate when to check next for auto checking
- # add some quantization and randomness to make pattern analysis by Youtube slightly harder
+ # add some quantization and randomness to make pattern analysis by YouTube slightly harder
quantized_upload_period = average_upload_period - (average_upload_period % (4*3600)) + 4*3600 # round up to nearest 4 hours
randomized_upload_period = quantized_upload_period*(1 + secrets.randbelow(50)/50*0.5) # randomly between 1x and 1.5x
next_check_delay = randomized_upload_period/10 # check at 10x the channel posting rate. might want to fine tune this number
@@ -725,7 +725,7 @@ def import_subscriptions():
except (AssertionError, IndexError, defusedxml.ElementTree.ParseError) as e:
return '400 Bad Request: Unable to read opml xml file, or the file is not the expected format', 400
else:
- return '400 Bad Request: Unsupported file format: ' + mime_type + '. Only subscription.json files (from Google Takeouts) and XML OPML files exported from Youtube\'s subscription manager page are supported', 400
+        return '400 Bad Request: Unsupported file format: ' + mime_type + '. Only subscription.json files (from Google Takeout) and XML OPML files exported from YouTube\'s subscription manager page are supported', 400
_subscribe(channels)
diff --git a/youtube/templates/base.html b/youtube/templates/base.html
index 7b32d76..4c11ce0 100644
--- a/youtube/templates/base.html
+++ b/youtube/templates/base.html
@@ -5,7 +5,7 @@
<meta name="viewport" content="width=device-width, initial-scale=1"/>
<meta http-equiv="Content-Security-Policy" content="default-src 'self' 'unsafe-inline' 'unsafe-eval'; media-src 'self' https://*.googlevideo.com; {{ "img-src 'self' https://*.googleusercontent.com https://*.ggpht.com https://*.ytimg.com;" if not settings.proxy_images else "" }}"/>
<title>{{ page_title }}</title>
- <link title="Youtube local" href="/youtube.com/opensearch.xml" rel="search" type="application/opensearchdescription+xml"/>
+ <link title="YouTube local" href="/youtube.com/opensearch.xml" rel="search" type="application/opensearchdescription+xml"/>
<link href="/youtube.com/static/favicon.ico" type="image/x-icon" rel="icon"/>
<link href="/youtube.com/static/normalize.css" rel="stylesheet"/>
<link href="{{ theme_path }}" rel="stylesheet"/>
diff --git a/youtube/util.py b/youtube/util.py
index 8f359ba..18c7ca1 100644
--- a/youtube/util.py
+++ b/youtube/util.py
@@ -311,7 +311,7 @@ def fetch_url(url, headers=(), timeout=15, report_text=None, data=None,
if not use_tor:
raise FetchError('429', reason=response.reason, ip=ip)
- print('Error: Youtube blocked the request because the Tor exit node is overutilized. Exit node IP address: %s' % ip)
+ print('Error: YouTube blocked the request because the Tor exit node is overutilized. Exit node IP address: %s' % ip)
# get new identity
error = tor_manager.new_identity(start_time)
diff --git a/youtube/watch.py b/youtube/watch.py
index 3aaac13..14d5fcd 100644
--- a/youtube/watch.py
+++ b/youtube/watch.py
@@ -82,7 +82,7 @@ def lang_eq(lang1, lang2):
def equiv_lang_in(lang, sequence):
'''Extracts a language in sequence which is equivalent to lang.
e.g. if lang is en, extracts en-GB from sequence.
- Necessary because if only a specific variant like en-GB is available, can't ask Youtube for simply en. Need to get the available variant.'''
+ Necessary because if only a specific variant like en-GB is available, can't ask YouTube for simply en. Need to get the available variant.'''
lang = lang[0:2]
for l in sequence:
if l[0:2] == lang:
diff --git a/youtube/yt_data_extract/common.py b/youtube/yt_data_extract/common.py
index b1cf31c..d03bd89 100644
--- a/youtube/yt_data_extract/common.py
+++ b/youtube/yt_data_extract/common.py
@@ -116,7 +116,7 @@ def _recover_urls(runs):
run['text'] = url # youtube truncates the url text, use actual url instead
def extract_str(node, default=None, recover_urls=False):
- '''default is the value returned if the extraction fails. If recover_urls is true, will attempt to fix Youtube's truncation of url text (most prominently seen in descriptions)'''
+ '''default is the value returned if the extraction fails. If recover_urls is true, will attempt to fix YouTube's truncation of url text (most prominently seen in descriptions)'''
if isinstance(node, str):
return node
diff --git a/youtube/yt_data_extract/watch_extraction.py b/youtube/yt_data_extract/watch_extraction.py
index db53581..daa1e89 100644
--- a/youtube/yt_data_extract/watch_extraction.py
+++ b/youtube/yt_data_extract/watch_extraction.py
@@ -373,7 +373,7 @@ def _extract_formats(info, player_response):
# update with information from big table
hardcoded_itag_info = _formats.get(str(itag), {})
for key, value in hardcoded_itag_info.items():
- conservative_update(fmt, key, value) # prefer info from Youtube
+ conservative_update(fmt, key, value) # prefer info from YouTube
fmt['quality'] = hardcoded_itag_info.get('height')
info['formats'].append(fmt)