Selfhost
Are we expecting normal people to learn how to self-host?
How are you supposed to self-host a web crawler and indexer without getting a giant server bill?
Having this service at least slightly centralised makes sense resource-wise - but assuming crawling and indexing is free is just foolish. I’d choose something like Kagi, but I guess many people would rather cheap out and go for the next free service, not realising that that company has to make money some other way to make up for the high cost of running a search engine.
> I’d choose something like Kagi, but I guess many people would rather cheap out
I often feel as though these paid-for services aren’t delivering a meaningfully better product. After all, it isn’t as though Google’s problem is that they don’t have enough cash to spend on optimization. The problem is that they’re a profit-motivated firm fixated on minimizing their cost and maximizing their revenue. Kagi has far less money to spend on optimization than Google, and the same profit-chasing incentives.
If there were a GitHub / Linux-distro equivalent to a modern search engine - or even a Wikipedia-style curated collaborative effort - I’d be happy to kick in for that (like I donate to those projects). For all the shit Wiki gets as Spook-o-pedia, they do at least have a public change history and an engaged community of participants. If Kagi is just going to kick me back the same Wiki article at a higher point in the return list than Google, why get their premium service when I can just donate to Wiki and search there directly?
If I’m just getting a feed of paywalled news journals like the NYT or WaPo, it’s the same question: why not just pay them directly and use their internal search?
Other than screening out the crap that Google or Bing vomit up, what is the value-add of Kagi? And why shouldn’t I expect to see the same shit-creep in Kagi that I’ve seen in Google or Bing over the last decade? Because I’m paying them? Fuck, I subscribe to Google and Amazon services, and they haven’t gotten any better.
The problem is that it’s just incredibly expensive to keep crawling and re-indexing the web over and over in a way that makes it possible to search within seconds.
And the problem with search engines is that you can’t make the algorithm completely open source, since that would make it too easy to manipulate the results with SEO - which is exactly what’s destroying Google.
> you can’t make the algorithm completely open source, since that would make it too easy to manipulate
I don’t think “security through obscurity” has ever been an effective precautionary measure. SEO works today because it’s possible to intuit how the algorithms behave without ever seeing the code behind them.
Knowing the interior of the code gives black hats a chance to manipulate the algorithm, but it also gives white hats the chance to advise alternative optimization strategies. For example, consider an algorithm that biases itself towards websites without ads. The means by which you’d game that system would run contrary to the incentives for click-bait. What’s more, search engines and ad-blockers would now have a common cause, which would have its own knock-on effects.
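To make that concrete, here’s a toy sketch of what I mean - the weights are made up and the ad-detection step is hand-waved, so this is just the shape of the idea, not any real engine’s scoring:

```python
# Toy ranking sketch: text relevance minus an ad penalty (all numbers made up).
from dataclasses import dataclass

@dataclass
class Page:
    url: str
    relevance: float   # 0.0 .. 1.0, from whatever text matching you already do
    ad_embeds: int     # number of ad embeds detected on the page

AD_PENALTY = 0.05      # assumed cost per detected ad embed

def score(page: Page) -> float:
    """Relevance minus an ad penalty, floored at zero."""
    return max(0.0, page.relevance - min(1.0, page.ad_embeds * AD_PENALTY))

results = [
    Page("https://example.org/clean-writeup", relevance=0.82, ad_embeds=0),
    Page("https://example.com/10-best-listicle", relevance=0.90, ad_embeds=12),
]
for p in sorted(results, key=score, reverse=True):
    print(f"{score(p):.2f}  {p.url}")
```

Under that kind of scoring, stuffing a page with ads works directly against its ranking, which is the whole point.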
But this would mean moving towards an internet model that’s friendlier to open-source, collaboratively managed, not-for-profit content. That’s not something companies like Google and Microsoft want to encourage, and that’s the real barrier to such an implementation.
It’s not about security through obscurity, it’s about “if a measurement becomes a goal, it ceases to be a good measurement” - so keeping the measurements hidden, to make it harder for them to become a goal, is a decent way to go about it.
How would you measure “without ads”? That would just be the same cat-and-mouse game that ad-blockers have been dealing with for decades.
I’m not sure it’s possible to find a good, completely open-source solution that doesn’t either give bad results by down-rating good results for the wrong reasons, or leave itself open to misuse by SEO.
That might work if it’s a small project where no one cares about gaming the results, but if something like that becomes mainstream, it’s going to happen.
> keeping the measurements hidden, to make it harder for them to become a goal, is a decent way to go about it.
The measure, from the perspective of clickbaiters, is purely their own income stream. And there’s no way to hide that from the guy generating the clickbait.
> How would you measure “without ads”?
We have a well-defined set of sites and services that embed content within a website in exchange for payment. An easy place to start is to look for those embeds on a page and downgrade it in the query results. We can also see, from redirects and AJAX calls off a visited page, when a lot of other information is being pulled in from third-party sites. That’s a very big red flag for a site doing ad pop-ups/pop-overs and other gimmicks.
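As a rough sketch of that first step - the host list here is a tiny placeholder, a real system would lean on the maintained blocklists ad-blockers already publish - you could scan a page’s HTML for scripts, iframes and images pointing at known ad hosts:

```python
# Sketch: count embeds that point at known ad/tracking hosts.
# AD_HOSTS is a placeholder list, not a real blocklist.
from html.parser import HTMLParser
from urllib.parse import urlparse

AD_HOSTS = {"doubleclick.net", "googlesyndication.com", "adnxs.com"}  # placeholder

class AdEmbedCounter(HTMLParser):
    def __init__(self):
        super().__init__()
        self.ad_embeds = 0

    def handle_starttag(self, tag, attrs):
        if tag not in ("script", "iframe", "img"):
            return
        src = dict(attrs).get("src") or ""
        host = urlparse(src).netloc.lower()
        if any(host == h or host.endswith("." + h) for h in AD_HOSTS):
            self.ad_embeds += 1

def count_ad_embeds(html: str) -> int:
    parser = AdEmbedCounter()
    parser.feed(html)
    return parser.ad_embeds

html = '<script src="https://ads.doubleclick.net/x.js"></script><p>content</p>'
print(count_ad_embeds(html))  # -> 1
```

Catching the redirects and AJAX calls would mean actually rendering the page in a headless browser and logging outbound requests, which admittedly drifts back towards the cat-and-mouse game mentioned above.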
> I’m not sure it’s possible to find a good, completely open-source solution that doesn’t either give bad results by down-rating good results for the wrong reasons, or leave itself open to misuse by SEO.
I would put more faith in an open-source solution than in a private model, purely because of the financial incentives involved in their respective creations. The challenge with an open model is getting the storage and processing power to do all the web-crawling.
After that, it wouldn’t be crazy to go in the Wikipedia/Reddit direction and take user input to grade query results, assuming a certain core pool of reliable users could be established.
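Something like this, say - the vote weight and the trust cutoff are made-up numbers, just to show the shape of it - where each result’s algorithmic score gets nudged by votes, but only from users above some trust threshold, so a brigade of fresh accounts can’t flip the rankings:

```python
# Sketch: blend an algorithmic score with votes from a trusted user pool.
# The weight and the trust threshold are illustrative assumptions.

TRUST_THRESHOLD = 50    # e.g. minimum account age or contribution count
VOTE_WEIGHT = 0.02      # how much a single trusted vote moves the score

def community_score(algo_score: float, votes: list[tuple[int, int]]) -> float:
    """votes is a list of (voter_trust, direction) pairs, direction in {+1, -1}.
    Only voters above the trust threshold count, to blunt brigading."""
    trusted = sum(d for trust, d in votes if trust >= TRUST_THRESHOLD)
    return algo_score + VOTE_WEIGHT * trusted

# Two trusted voters cancel out, the low-trust vote is ignored -> 0.70
print(community_score(0.70, [(120, +1), (5, +1), (80, -1)]))
```

The hard part, as said, is bootstrapping that core pool of reliable users before the SEO crowd shows up.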