Time for 13ft.io
Bypass Paywalls extension for Firefox.
Works better and for more sites in my experience.
How do I install it on Firefox Android, though?
My phone won’t open xpi files, and the only solutions I’ve been able to find are either creating an HTML file in the same folder, which I don’t know how to do on Android, or downloading and installing an extension which is ALSO only available as an xpi 🤦
I believe you have to use Nightly. Try this: https://ghacks.net/2020/10/01/you-can-now-install-any-add-on-in-firefox-nightly-for-android-but-it-is-complicated/
That worked! Awesome, thanks, I tried a few other methods last week with no luck
Yay, that works! Thank you!
Import custom filter
Bypass Paywalls Clean
I think it’s this one
https://gitlab.com/magnolia1234/bypass-paywalls-clean-filters
Try custom collections feature:
https://support.mozilla.org/en-US/kb/how-use-collections-addonsmozillaorg
Looks like it SHOULD work, but when I search for add-ons, I get quick (far too quick to select any of them) flashes of gray rectangles where suggestions would normally be, and no search results after I execute the search.
I’m beginning to suspect that the Firefox app is broken 😕
is that one of those where you need to manually import the extension into the browser?
Yeah, it would seem so. Can’t do that, though, owing to the aforementioned refusal of my phone to have anything to do with xpi files 😮💨
oh on a phone? hmm… no idea on that one
Well I use Fennec for Android from F-Droid, which has the option of using custom collections for addons.
I don’t think it’s possible yet on “normal” Firefox other than Nightly.
Yeah, on the advice of someone else itt, I switched to Nightly and that worked 🙂
You could instead use the Web Archives extension. Works for most common paywalls.
deleted by creator
No luck there, couldn’t get it to work on any of the Firefox forks. I eventually got it working in Kiwi Browser
Use Kiwi browser instead.
It has seemed to work on fewer and fewer sites for me recently, to the point that I don’t visit it as often as I used to.
But that tweet does sound like pretty bad news…
It never ever seemed to work for me.
By the time it got popular, it was already not working with multiple big sources.
The only time I ever used it, they told me they chose not to support that site
https://archive.md/ gets around way more paywalls. Highly recommend it.
deleted by creator
They seem to have blocked my IP - I get an impassable CAPTCHA
deleted by creator
I also get infinitely CAPTCHA-blocked on Android trying to connect to archive.md and the other domains. Doesn’t matter if I use Firefox, Chrome, or the Samsung mobile browser.
Yep - I haven’t found a solution. I used it without issue for years, which is very weird.
disabling js does more
12ft.io was performatively useless garbage anyway. If any site can just ask your paywall-bypass site not to bypass their paywall, what is the point of your site?
Exactly. As soon as they bent over to NYT, I stopped using them.
It stopped working on any of the sites I ever bothered to use it on anyway - most of them wised up to the crawler bypass and simply made a two-sentence tagline visible to crawlers that hits the SEO terms, with everything else hidden. Soooo, nothing of value lost, and Capital comes to claim its pie once again.
I use bypass paywalls clean and never see a paywall. so… yes.
It doesn’t work for Medium articles in my experience
I haven’t had any issues with Medium personally. But I have pretty extensive blocking as a whole (uBlock Origin, AdGuard, Ghostery, DDG, Bypass Paywalls Clean, Canvas Blocker, etc.)
Why so many blocking extensions?
I’m paranoid.
Some extra context / clarification from the thread re Vercel: they did warn him starting two weeks ago. They’ve stated he has a line open with customer support to get his other projects restored but that hasn’t happened yet.
I think that Vercel wants to drop them as a customer entirely. Vercel could’ve suspended the services related to 12ft.io, but Vercel chose to nuke their account from orbit. I’m unsure why Vercel suspended their domains tho. That’s just asking for trouble with ICANN.
Out of curiosity, how is it an issue with ICANN? I know they can complain to them, but what category will this fall under?
Depends on whether Vercel refuses to give them the domain transfer code.
Technically, if one were to disable the JS used for said paywall on a site, they would never see it again. I haven’t personally done this, but has anyone tried?
Most sites load no content at all if JS is disabled.
On a majority of sites, all of the page’s content will be present, at least for SEO. And you have the added bonus that they don’t ask for cookies etc…
I didn’t ask for JS to be completely disabled, just enough for the paywall not to crop up
How would your browser differentiate between the two scripts?
There are multiple scripts in use on almost every website. You need to find the one that pops up the paywall. Use NoScript or uBlock Origin (I use both), and with some trial and error it’ll work just fine
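As an illustration of the trial-and-error approach above, a first step is simply listing which domains a page loads scripts from, so you know what NoScript or uBlock Origin would let you toggle. This is a minimal stdlib sketch; the page and domain names here are made up for the example.

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class ScriptFinder(HTMLParser):
    """Collects the domains of all external <script src=...> tags."""
    def __init__(self):
        super().__init__()
        self.domains = set()

    def handle_starttag(self, tag, attrs):
        if tag == "script":
            src = dict(attrs).get("src")
            if src:  # inline scripts have no src and are skipped
                self.domains.add(urlparse(src).netloc)

def script_domains(html):
    finder = ScriptFinder()
    finder.feed(html)
    return sorted(d for d in finder.domains if d)

# Hypothetical page: one app bundle, one third-party metering script.
page = """
<html><body>
  <script src="https://cdn.example-news.com/app.js"></script>
  <script src="https://paywall-vendor.example.net/meter.js"></script>
  <script>console.log('inline');</script>
</body></html>
"""
print(script_domains(page))
# -> ['cdn.example-news.com', 'paywall-vendor.example.net']
```

From there you would block each domain in turn and reload, which is essentially what the comment describes doing by hand.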
This is an oversimplification. Paywalls are generally designed to circumvent simplistic “remove the popover” approaches. Sites like 12ft.io and paywall-removal extensions use a myriad of solutions across different sites to circumvent both local and server-side paywalls.
It would only work if they specifically bundle the functions which cause the paywall into a separate file (it is very unlikely for this to be the case), and it also relies on the assumptions that the paywall is entirely front-end side and that the “default” content is un-paywalled (as opposed to the default content being paywalled and JavaScript being required to load the actual content).
Not a specific file, but a domain. And yes, if the processing is done server-side then there is very little we can do about that. Note that I’m not asking anyone to disable every script on the page, just the specific script for the paywall’s pop-up/blurring
I think I understood what you were suggesting: try disabling the script tags one by one on a website until either we’ve tried them all or we’ve got through the paywall.
My point is that it’s very unlikely to be feasible on most modern websites.
I mention files because very few bits of functionality tend to be inline scripts these days; 90-95% of the JavaScript will be loaded from separate .js files that the script tags reference.
In modern web apps the JavaScript usually goes through some sort of build system, like webpack, which does a number of things, but the important one for this case is that it restructures how the code is distributed into the .js files referenced from script tags in the HTML. This makes it very difficult to explicitly target a specific bit of functionality to disable, since the paywall code is likely loaded from the same file as a hundred other bits of code that make other features work. Hence my point: sites would actively have to go out of their way to make their build process separate their paywall code from the rest of their codebase, which is probably not something they would do.
On top of this, the same build system may output differently named files after each build, since they’re often named after some hash of the content: if any code changes in any of the sources, the output file name changes as well, in an unpredictable way. This is likely a much smaller issue, since I can’t imagine them actively working on all parts of their codebase all the time.
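The content-hashed file names described above can be sketched in a few lines. This mimics (in Python, purely for illustration) what bundlers like webpack do with `[contenthash]` naming; the `main` base name and 8-character hash length are assumptions, not any particular bundler's exact scheme.

```python
import hashlib

def hashed_bundle_name(source: bytes, base: str = "main") -> str:
    """Mimics a bundler's content-hashed output name, e.g. main.3f2a9c1d.js."""
    digest = hashlib.md5(source).hexdigest()[:8]
    return f"{base}.{digest}.js"

v1 = hashed_bundle_name(b"function paywall(){}")
v2 = hashed_bundle_name(b"function paywall(){ /* tweaked */ }")
# Any change to the source yields a different output file name,
# so a block rule targeting a specific .js file soon goes stale.
```

This is why blocking by domain (as suggested earlier in the thread) is more robust than blocking a specific file.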
Lastly, if the way a website works is that it loads the content and then some JavaScript hides it behind a paywall, then it’s much simpler to either hide the elements in front of it or make the content visible again using just CSS and HTML - i.e. the way adblockers remove entire ad elements from pages.
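To make the "content is there, just hidden" case concrete, here is a toy sketch of what an adblocker-style cosmetic rule effectively does. The class names and markup are invented for the example, and regex-on-HTML is only adequate for a toy like this; real cosmetic filtering works on the DOM.

```python
import re

# Toy HTML where the article is present but hidden behind an overlay
# (class names are made up; real sites vary widely).
page = (
    '<div class="paywall-overlay">Subscribe to keep reading</div>'
    '<article style="display:none">The full story text.</article>'
)

# Drop the overlay element, then undo the inline hiding -
# the same effect as a cosmetic filter plus a style override.
page = re.sub(r'<div class="paywall-overlay">.*?</div>', '', page)
page = page.replace(' style="display:none"', '')
print(page)
# -> <article>The full story text.</article>
```

As the next comment points out, none of this helps when the server never sends the article text in the first place.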
If the website developer is worth their salt, the article contents won’t be delivered from the web server until the reader has been authorized. So it doesn’t matter how much JS code you disable.
PAY THE WRITERS FOR THEIR JOB. DON’T BE A CHEAP PIRATE.
It’s just a glorified web scraper; I didn’t know it was this popular. You could build a barebones scraper with output in less than 10 lines of PHP with curl. And 12ft.io used to inject its own code into the output; it’s funny how people were okay with that.
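For reference, the "barebones scraper" idea the comment gestures at usually means fetching a page while presenting a search-crawler User-Agent, since some paywalled sites serve full text to crawlers for SEO. A hedged stdlib sketch (Python rather than the PHP/curl the commenter had in mind; the URL is hypothetical, and no network call is made here):

```python
import urllib.request

def crawler_request(url: str) -> urllib.request.Request:
    """Builds a request that presents itself as a search-engine crawler."""
    return urllib.request.Request(
        url,
        headers={
            "User-Agent": "Mozilla/5.0 (compatible; Googlebot/2.1; "
                          "+http://www.google.com/bot.html)"
        },
    )

req = crawler_request("https://news.example.com/some-article")
# urllib.request.urlopen(req) would perform the actual fetch; whether
# the full article comes back depends entirely on the site's setup.
```

Whether this works at all varies per site, which is exactly the complaint elsewhere in the thread about sites hiding everything but an SEO tagline from crawlers.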
Everyone who has ever done web scraping knew that serving it from their own public domain was going to be a problem.
Boy, people are lazy.
Sure, because everybody who owns a computer, tablet, or smartphone is a web dev. Obviously.
/s
Today I learned because I can’t code, I’m useless and lazy.
The fucking people you come across on the internet…smh
Get out and touch grass mate.
Please paste the 10 lines here
… we’re still waiting
Removed by mod
Yeah, that’s literally the same as 12ft or any other anti-paywall tool. I mean hey, it’s just two lines - that’s even 8 fewer than the original, smoothbrain. Absolutely easy to use for any end user. Thanks. /s
lmao what an absolutely moronic take.
That’s not even what 12ft.io was. It wasn’t scraping anything; it was just a redirect to the Google web cache. Importantly, it was also accessible: something anyone could use without installing anything.
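The redirect described above amounts to nothing more than rewriting the article URL into a Google web cache URL, which (while the cache still existed) looked like the sketch below. The helper name is invented; the `webcache.googleusercontent.com/search?q=cache:` form is the historical cache URL pattern.

```python
from urllib.parse import quote

def google_cache_url(article_url: str) -> str:
    """Returns the (historical) Google web cache URL for a page."""
    return ("https://webcache.googleusercontent.com/search?q=cache:"
            + quote(article_url, safe=":/"))

print(google_cache_url("https://news.example.com/story"))
# -> https://webcache.googleusercontent.com/search?q=cache:https://news.example.com/story
```

That one-line rewrite is the whole trick, which is why losing the service is less of a blow than it might seem: the same transformation could be done by hand in the address bar.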