cross-posted from: https://discuss.tchncs.de/post/22423685
EDIT: For those who are too lazy to click the link, this is what it says
Hello,
Sad news for everyone: YouTube/Google has patched the latest workaround we had to restore video playback functionality.
Right now we have no other solutions or fixes. You may be able to get Invidious working on residential IP addresses (like at home), but on datacenter IP addresses Invidious won’t work anymore.
If you are interested in installing Invidious at home, we remind you that we have a guide for that here: https://docs.invidious.io/installation/
This is not the death of this project. We will still try to find new solutions, but this might take time, probably months.
I have updated the public instance list to reflect the working public instances: https://instances.invidious.io. Please don’t abuse them, since the number is really low.
YouTube will not change until people stop using it. And people do not want to put up with the inconvenience of not having a YouTube-type service for the amount of time it would take for YouTube to change or for a viable competitor to take its place. It really is that simple.
Are YouTube and Google terrible? For sure, but they only got this way because the only backstop for holding them accountable, the consumer, has proven that they will choose to put up with shitty products and services in the name of convenience nine times out of ten.
Same reason that ad tiers are gaining a foothold in streaming services like Netflix: the consumer has shown they are fine with it.
Time to pirate YT content and upload it to Usenet to be automatically downloaded using Sonarr.
Yes, but literally throwing together a script to download the day’s subscription videos to a Jellyfin media drive would be stupidly simple.
Sure, but not as convenient 🤷🏻
It already exists, even as a Docker image. Not as simple as an *arr-style interface, but it works great once you set it up.
ytdl-sub
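If you want to try it, here is a minimal sketch of running it in Docker; the image name and the subscriptions-file invocation are from memory, so treat them as assumptions and check the ytdl-sub docs:

```bash
# Sketch: run ytdl-sub from its published container image (assumed name),
# mounting a config dir (containing subscriptions.yaml) and the media output dir
docker run --rm \
  -v "$(pwd)/config:/config" \
  -v "$(pwd)/media:/media" \
  ghcr.io/jmbannon/ytdl-sub:latest \
  ytdl-sub sub /config/subscriptions.yaml
```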
Well you know what they say “Great minds think quicker than mine and probably have already had that thought.”
“Stupidly simple” might be overselling it when it comes to the masses adopting it. Not everyone is adept at “throwing together a script.”
That being said, I’m all for helping the masses adapt.
“Give me a Python script using yt-dlp that I can run on a cronjob that will download the videos from any of my subscribed channels since the last time the script was run”
You can use the following Python script to download videos from your subscribed channels since the last run. The script uses yt-dlp and stores the date of the last run to track new videos. First, ensure you have yt-dlp installed:
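```bash
pip install yt-dlp
```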
Then, create a script called download_videos.py:
```python
import os
import json
import subprocess
from datetime import datetime

# Configuration
last_run_file = 'last_run.json'
download_directory = 'downloads'

# Ensure the download directory exists
os.makedirs(download_directory, exist_ok=True)

# Load the date of the last run; on the first run, start from today
if os.path.exists(last_run_file):
    with open(last_run_file, 'r') as f:
        last_run = json.load(f)['last_run']
else:
    last_run = datetime.now().strftime('%Y%m%d')

# Record the current run date (yt-dlp's --dateafter expects YYYYMMDD)
current_run = datetime.now().strftime('%Y%m%d')

# Command to fetch videos uploaded since the last run; the download
# archive also prevents re-downloading anything already fetched
command = [
    'yt-dlp',
    '--download-archive', 'archive.txt',
    '--output', f'{download_directory}/%(title)s.%(ext)s',
    '--dateafter', last_run,
    '--no-post-overwrites',
    '--merge-output-format', 'mp4',
    'https://www.youtube.com/channel/CHANNEL_ID',  # Replace with your channel URL
]

# Run the command
subprocess.run(command)

# Save the current run date for the next invocation
with open(last_run_file, 'w') as f:
    json.dump({'last_run': current_run}, f)

print("Download complete. Next run will check for videos since:", current_run)
```
Setting Up the Cron Job
Make the script executable:

```bash
chmod +x download_videos.py
```
Open your crontab:
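```bash
crontab -e
```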
Add a line to run the script at your desired interval (e.g., daily at 2 AM):
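For example (the script path is a placeholder; point it at wherever you saved download_videos.py):

```bash
# run daily at 2 AM
0 2 * * * /usr/bin/python3 /path/to/download_videos.py
```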
Notes

Replace CHANNEL_ID in the script with your actual channel IDs, or use a playlist URL if preferred. The archive.txt file keeps track of already downloaded videos to avoid duplicates.

Another example, which I can personally verify has been working fine for months. It works a bit differently to the above: it downloads the latest 2 (configurable) vids that are not already downloaded and runs once every hour with cron. I also attempted to filter out live vids and Shorts.
Channels I am “subscribed” to are stored in a single text file. It also uses the avc1 codec, because I found p9 and p10 had issues with the Jellyfin client on my TV.
The channels file looks like this. I added categories, but I don’t actually use them in the script besides putting them in a variable, lol. VidLimit is how many of the latest vids it should look at to download; the original reason I implemented that was so I could selectively download a bulk of the latest vids if I wanted to.
```
Cat=Science
Name=Vertitasium
VidLimit=2
URL=https://www.youtube.com/channel/UCHnyfMqiRRG1u-2MsSQLbXA

Cat=Minecraft
Name=EthosLab
VidLimit=2
URL=https://www.youtube.com/channel/UCFKDEp9si4RmHFWJW1vYsMA
```
```bash
#!/bin/bash

# Define the directory to store channel lists and scripts
script_dir="/.../YTDL"

# Define the base directory to store downloaded videos
base_download_dir="/.../youtubevids"

# Change to the script directory
cd "$script_dir"

# Parse the Channels.txt file and process each channel
awk -F'=' '
/^Cat/ {Cat=$2}
/^Name/ {Name=$2}
/^VidLimit/ {VidLimit=$2}
/^URL/ {URL=$2; print Cat, Name, VidLimit, URL}
' "$script_dir/Channels.txt" | while read -r Cat Name VidLimit URL; do
    # Define the download directory for this channel
    download_dir="$base_download_dir"

    # Define the download archive file for this channel
    archive_file="$script_dir/DLarchive$Name.txt"

    # Create the download directory if it does not exist
    mkdir -p "$download_dir"

    # If VidLimit is "ALL", leave playlist_end_option empty;
    # otherwise limit how many of the latest uploads are checked
    playlist_end_option=""
    if [[ $VidLimit != "ALL" ]]; then
        playlist_end_option="--playlist-end $VidLimit"
    fi

    yt-dlp \
        --download-archive "$archive_file" \
        $playlist_end_option \
        --write-description \
        --write-thumbnail \
        --convert-thumbnails jpg \
        --add-metadata \
        --embed-thumbnail \
        --match-filter "!is_live & !was_live & original_url!*=/shorts/" \
        --merge-output-format mp4 \
        --format "bestvideo[vcodec^=avc1]+bestaudio[ext=m4a]/best[ext=mp4]/best" \
        --output "$download_dir/${Name} - %(title)s.%(ext)s" \
        "$URL"
done
```
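For the hourly schedule mentioned above, the crontab entry is just something like this (the script path is a placeholder for wherever you saved it):

```bash
# run the download script at the top of every hour
0 * * * * /path/to/YTDL/script.sh
```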
Yeah this is more elegant and closer to what I’d actually want to implement. I was more just showing what could be done in literally thirty seconds on the can with ChatGPT.
I knew I recognized that output.

Mine was actually also made with the help of ChatGPT, but manually refined and tested.
Honestly, it would probably be easier to just build an *arr program specifically for downloading YouTube videos directly. Tie it into the rest of the *arr suite, with naming conventions for Plex/Jellyfin.
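You can get surprisingly close with nothing but yt-dlp’s output template, though. A sketch of Jellyfin-friendly naming, treating each channel as a show and the upload year as a season (the paths and the mapping are my own invention, not an established *arr convention):

```bash
# Sketch: channel = show, upload year = season, upload date = episode,
# so Jellyfin's normal SxxExx parsing picks the files up
yt-dlp \
  --output "/media/youtube/%(channel)s/Season %(upload_date>%Y)s/%(channel)s - S%(upload_date>%Y)sE%(upload_date>%m%d)s - %(title)s.%(ext)s" \
  "https://www.youtube.com/@SomeChannel"  # placeholder channel URL
```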
I would install that, but I fear scraping YouTube will soon become an arms race, similar to other streaming services.
While I agree, I have a hard time seeing how people will stop using it until the field changes. Maybe in 10 years it will be the MySpace of the sitcom era, but right now it’s still growing. That growth is giving it carte blanche to manipulate the users as it sees fit. Regulation might impact it, but it’s still a bit of a Goliath.
Also, the active user base is 2.7 billion people in 2024, per the same source above.
The alternatives are out there, but just not in the same league.
deleted by creator
Regulations for ad quality and privacy are almost certainly what they mean by that.
I don’t think this requires an act of Congress. I think you might see more consumer advocacy on the part of the FTC (although it doesn’t currently regulate online broadcast), or potentially the CFPB.
Admittedly, it’s more likely we’ll see the EU do some regulating, but it all depends on the election.
I think it needs regulation; the whole streaming industry needs to be regulated! It can’t be that competition is built on exclusive content, and that you have to live with privacy-infringing tech to consume cultural art legally.

In my opinion, in a capitalist system, market competition should be about delivering the content the best way, not about what content they deliver.

Right now, they can make the delivery as shitty as they want, because what sets them apart from the competition is the exclusive content, not the tech.
Agreed; now comes the fun part of coming up with a legal basis to do so and convincing regulators.
I think in the EU one could achieve something like this à la the app store opening rules, where streaming services are required to give other streaming services access to their library, like some sort of roaming 🤔
Or you split the distribution from the company producing stuff
So many possibilities 😂
Luckily I am in a pirate friendly country 🏴☠️
We should reach a compromise of having skippable ads at the beginning only, for example. On other pages, it could be that ads cannot make up more than 10% of the content being delivered on the page.
It’s not always all or nothing; good regulation listens to both sides and reaches a compromise in the middle, but good regulation is getting harder and harder to come by.
I see we’re pretending the government doesn’t have regulatory power today
Yep, I remember when Netflix first put it out there that they would start with the ads, and everyone on Reddit was like, “Canceling my Netflix right now!!”
Netflix is doing just fine without the 5 redditors who actually did cancel it. lmao
The problem is that so many people are willing to say they’ll take a stand.

But when the time comes, the mind-numbingly overwhelming majority suck it up, because they must have their precious shiny and cannot suffer even the mildest of inconveniences.

It’s my biggest gripe in gaming, but it’s an enormous gripe in general, with everything, because it doesn’t matter if you’re talking about appliances, creative software, video games, streaming services, stores, etc.
“Socialist Chaos Trow” lol