Image Max URL (Web - GitHub - Firefox addon) was able to get a 3840x2160 version.
Thanks!
I tried Pixelfed (very briefly) not so long ago. I didn’t find a proper way to search for content. How do you discover new content?
How long would you say it took you before getting a fundamental understanding?
I would say years, as with any complex activity.
I’m still forgetting things I learned 3 or even 4 times, like how to do a for-each loop.
You can forget in 2 different ways: forgetting the details you can just look up again, and really forgetting how to do something at all. You will forget in the first sense everything you don’t use on a daily basis; that’s what the internet is for. Forgetting in the second sense is much rarer, and if that’s happening you should do something about it.
all of it feels too advanced and I get lost on how to begin
This is a bias most of us have: you overlook how easy it now is for you to do things that were previously impossible, and focus on how hard the things you still don’t know how to do seem. And computing is so complex right now that there will always be “infinite” things you don’t know.
Try showing what you know to someone who doesn’t know how to code and you will get an idea of how much you have learnt :).
Anyway, I don’t really have good advice :/, just wanted to confirm that what you feel is expected. Good luck!
That rings a bell
I was going to suggest yt-dlp, but this seems to be for Android… right? In that case, I don’t know if yt-dlp works there.
Anyway, for those on PCs, you can use `yt-dlp "PLAYLIST_URL"`.
Some useful options:
- `--download-archive videos.txt`: this will keep track of downloaded files in case you want to interrupt and continue later. You can change the filename `videos.txt` to whatever you want.
- `-R infinite --file-access-retries infinite --fragment-retries infinite --retry-sleep http:exp=1:20 --retry-sleep fragment:exp=1:20 --retry-sleep file_access:exp=1:20 --retry-sleep extractor:exp=1:20`: infinite retries for the different error types, for those with unreliable connections.
- `-o "%%(playlist_index)s - %%(title)s.%%(id)s.%%(ext)s"`: output filename format.
- `--cookies cookies.txt`: if it’s a private playlist, you will need to provide your (YouTube-logged-in) browser cookies. See the cookies.txt add-on.
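Putting those options together, a full invocation might look something like this (just a sketch: PLAYLIST_URL is a placeholder, and the output template uses single % signs as on a Unix shell; in a Windows .bat file they have to be doubled, as in the list above):

```bash
# Resumable playlist download that keeps retrying on flaky connections
yt-dlp \
  --download-archive videos.txt \
  -R infinite --file-access-retries infinite --fragment-retries infinite \
  --retry-sleep http:exp=1:20 --retry-sleep fragment:exp=1:20 \
  --retry-sleep file_access:exp=1:20 --retry-sleep extractor:exp=1:20 \
  -o "%(playlist_index)s - %(title)s.%(id)s.%(ext)s" \
  "PLAYLIST_URL"
# For a private playlist, add: --cookies cookies.txt
```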
I’ve been hearing a lot about https://micro.blog recently. I haven’t tried it, or blogged in a long time.
AFAIK it supports ActivityPub.
I also found a post on a micro.blog site with a few alternatives:
TL;DR: Tumblr / Ghost / Blot / Mastodon / Write.as / Jekyll
https://book.micro.blog/alternative-platforms/
EDIT: Manton Reece is the founder and lead developer of micro.blog
A couple of tools to help find the original sources:
Also, thanks to @Blaze@discuss.tchncs.de and @Stamets@startrek.website for the mentions :).
Credit / source: Grichael-Meaney: Cosmic Dirtbag - Licence Wizard
Higher-res version: https://cosmicdirtbag.com/wp-content/uploads/2021/10/Licencewizard-scaled.jpg
RSS Feed: https://cosmicdirtbag.com/feed/
Other sources:
Great points. I agree.
A proper working implementation for the general case is still far off, and it would be much more complex than this experiment. Not only will it need the usual frame-to-frame temporal coherence, it will probably also need to take into account information from potentially any frame in the whole video in order to stay consistent across different camera angles of the same place.
that’s weird. it’s actually a pretty useful feature, but it’s odd they’d add it to old reddit before new reddit, considering it’s basically deprecated. maybe it’s just an a/b rollout and i don’t have it yet
Sorry, I think I didn’t explain myself correctly. That feature is a very old one; it has been on old reddit for as long as I can remember. It has also worked on new reddit at some point, see the screenshot below from a comment I posted 6 months ago:
Thanks! Fixed
i wonder if it’s a new url scheme, as i’ve never seen duplicates in a reddit url before
I think you’re right. It should work with the old frontend (which I have configured as the default when I’m logged in):
https://old.reddit.com/r/StableDiffusion/duplicates/14xojmf/using_ai_to_fill_the_scenes_vertically/
Do you mean something like this? (warning: reddit link)
I prefer the Tranquility Reader add-on (no need for a 3rd-party service). It’s very similar to Firefox’s native Reader Mode, but more configurable and compatible with other add-ons (like translation).
I watched the video yesterday and I couldn’t really understand what the plan is. What I got was something like “the corps are too big for consumers to do anything and laws are very slow to make”.
Did I miss something about the “audacious(?)” plan?
Credit: Hisa
Source:
Source: Poorly Drawn Lines – Knowledge
This is how I understand it: the 3 main alternatives for the author were:
None of them are ideal and, although (from what I understand… IANAL) you are right that with the 3rd one DC can do whatever they want, companies don’t like it when anyone can make any kind of fanart (gore, porn, furry porn…) with their products. If Fables ends up being identified with not-safe-for-monetization stuff, it could be dangerous for them. Imagine the possible “won’t somebody please think of the children?” juicy headlines about it.
But I think the main reason is that this makes it a problem only for DC instead of it being a problem for the author.
EDIT: As pointed out in other comments, they would also lose the exclusivity over the product and merchandising. They would need to compete with very cheap legal knockoffs.
You’re welcome!
FYI: You can edit the post and include a link to the add-on so others can see it without reading the comments. EDIT: Thanks!