@toast

Thanks for chiming in!

A worker system like the one you describe is much more complicated to develop. It also requires everyone involved to trust that the others will not game the crawl results.

Bandwidth and crawling are needed to seed the index and then keep it fresh. The initial index can be shared (as the Common Crawl Search Engine does, but with less spam). Keeping the index fresh for a family is doable on a regular fiber or DSL connection; it does not consume much.
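For illustration, a minimal sketch (Python, with an assumed one-week refresh threshold and a toy url-to-timestamp index, not the actual engine) of the kind of incremental re-crawl loop that keeps an index fresh without much bandwidth:

```python
import time
import urllib.request

# Assumed threshold: re-fetch pages whose last crawl is older than a week.
REFRESH_AGE_SECONDS = 7 * 24 * 3600

def refresh(index):
    """index maps url -> last_crawl_timestamp (illustrative structure)."""
    now = time.time()
    stale = [url for url, last in index.items() if now - last > REFRESH_AGE_SECONDS]
    for url in stale:
        with urllib.request.urlopen(url, timeout=10) as resp:
            body = resp.read()
        # re-index `body` here (indexing step omitted in this sketch)
        index[url] = now
        time.sleep(1)  # throttle so a DSL/fiber link is never saturated

if __name__ == "__main__":
    demo_index = {"https://en.wikipedia.org/wiki/Main_Page": 0.0}
    refresh(demo_index)
```

Only the stale pages get re-fetched on each pass, which is why the steady-state bandwidth is small compared to the initial crawl.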


@toast

Storage is not even costly: English Wikipedia plus Stack Overflow does not even reach 100 GB.

What is costly is answering a query in under one second. For that you need lots of CPU cores / threads, like an AMD EPYC or Threadripper.
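A minimal sketch of why the cores matter (Python, with placeholder shards and a toy scoring function, not a real index): each query fans out across index shards in parallel, so latency is bounded by the slowest shard rather than a sequential scan of the whole index.

```python
from concurrent.futures import ThreadPoolExecutor

def search_shard(shard, query):
    # Placeholder: scan one shard and return (score, doc) pairs.
    return [(doc.count(query), doc) for doc in shard if query in doc]

def search(shards, query, workers=32):
    # One worker per shard/core: all shards are scanned concurrently.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = pool.map(lambda s: search_shard(s, query), shards)
    hits = [hit for part in partials for hit in part]
    return sorted(hits, reverse=True)[:10]  # merge and keep the top 10

if __name__ == "__main__":
    shards = [["rust search engine", "dsl connection"], ["amd epyc threads"]]
    print(search(shards, "epyc"))
```

More cores mean more shards searched at once, which is what pushes a full-index query under the one-second mark.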

Maybe I did not understand what you wrote?
