In an age of LLMs, is it time to reconsider human-edited web directories?

Back in the early-to-mid '90s, one of the main ways of finding anything on the web was to browse through a web directory.

These directories generally had a list of categories on their front page. News/Sport/Entertainment/Arts/Technology/Fashion/etc.

Each of those categories had subcategories, and sub-subcategories that you clicked through until you got to a list of websites. These lists were maintained by actual humans.

Typically, these directories also had a limited web search that would crawl through the pages of websites listed in the directory.

Lycos, Excite, and of course Yahoo all offered web directories of this sort.

(EDIT: I initially also mentioned AltaVista. It did offer a web directory by the late '90s, but this was something it tacked on much later.)

By the late '90s, the standard narrative goes, the web got too big to index websites manually.

Google promised the world its algorithms would weed out the spam automatically.

And for a time, it worked.

But then SEO and SEM became a multi-billion-dollar industry. The spambots proliferated. Google itself began promoting its own content and advertisers above search results.

And now with LLMs, the industrial-scale spamming of the web is likely to grow exponentially.

My question is, if a lot of the web is turning to crap, do we even want to search the entire web anymore?

Do we really want to search every single website on the web?

Or just those that aren’t filled with LLM-generated SEO spam?

Or just those that don’t feature 200 tracking scripts, and passive-aggressive privacy warnings, and paywalls, and popovers, and newsletters, and increasingly obnoxious banner ads, and dark patterns to prevent you cancelling your “free trial” subscription?

At some point, does it become more desirable to go back to search engines that only crawl pages on human-curated lists of trustworthy, quality websites?

And is it time to begin considering what a modern version of those early web directories might look like?

@degoogle #tech #google #web #internet #LLM #LLMs #enshittification #technology #search #SearchEngines #SEO #SEM

  • ᴇᴍᴘᴇʀᴏʀ 帝A · 9 months ago

    Indeed. Places like Lemmy and Reddit might be called “link aggregators” but they are, ultimately, jumped-up web forums (and that’s no slight, I’m a web forum guy through and through). They are nothing like the social bookmarking sites, like Delicious, which had far greater breadth and depth: just look at your own bookmarks; you’d only share a fraction of them on here, but you’d put a much larger percentage into social bookmarking. Crucially, though, those sites essentially crowd-sourced the organisation and categorisation of those links.

    > Some kind of service that would sit alongside a fedi instance

    I have been pondering the idea of “Fediverse plug-ins” that would do that, extending the core functionality of the service.

    So in the case of what we’ll call Fedilicious, users of the service could punt links they post to Mastodon or Lemmy over to a social bookmarking plug-in, where they are stored and categorised (or you could run a bot to do this automatically). They could also add links that might not be worth a new post but are worth storing away for future reference, etc. You would then have a curated, easily accessible repository of links that reflects the interests of that instance.
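    To make that more concrete, here is a minimal sketch of what the bookmarking plug-in’s data model could look like. Everything in it (the Bookmark shape, the function names, the handle format) is a hypothetical illustration, since no such plug-in exists yet:

    ```typescript
    // Hypothetical "Fedilicious" bookmark store; names and fields are
    // illustrative assumptions, not an existing Fediverse API.

    interface Bookmark {
      url: string;           // the link being saved
      title: string;
      tags: string[];        // crowd-sourced categorisation, Delicious-style
      addedBy: string;       // fediverse handle, e.g. "@user@instance.example"
      addedAt: Date;
      sourcePostId?: string; // set when the link was punted over from a post
    }

    // In-memory store standing in for a real database.
    const bookmarks: Bookmark[] = [];

    // Save a link punted over from a Mastodon/Lemmy post (or by a bot),
    // or one a user added directly without making a post.
    function addBookmark(b: Bookmark): void {
      bookmarks.push(b);
    }

    // The curated, easily accessible repository: look up links by tag.
    function byTag(tag: string): Bookmark[] {
      return bookmarks.filter((b) => b.tags.includes(tag));
    }

    // Example: a link saved without a sourcePostId, i.e. never posted.
    addBookmark({
      url: "https://example.com/article",
      title: "An article worth keeping",
      tags: ["search", "directories"],
      addedBy: "@someone@lemmy.example",
      addedAt: new Date(),
    });
    ```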

    It needn’t itself be federated, but if it were, you could have some “everything” sites (fedilicious.world?) that would accept all links from the other Fedilicious instances they federate with. Those other instances would tend to be set to broadcast mode, so categorised links go out but they don’t receive all the links coming in, although users could be allowed to add links to them from elsewhere.
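    If the repository were federated, the broadcast/“everything” split might reduce to something like the sketch below. Again, the Mode values and the receiveLink function are assumptions made for illustration, not a real protocol:

    ```typescript
    // Trimmed-down Bookmark type from the sketch above.
    interface Bookmark {
      url: string;
      tags: string[];
    }

    // Broadcast-mode instances send categorised links out; "everything"
    // sites (like the imagined fedilicious.world) accept them all.
    type Mode = "broadcast" | "everything";

    interface Instance {
      domain: string;
      mode: Mode;
      peers: Set<string>; // domains this instance federates with
      links: Bookmark[];
    }

    // Deliver a categorised link from one federated instance to another.
    function receiveLink(inst: Instance, fromDomain: string, link: Bookmark): void {
      // Broadcast-mode instances only push links out; they don't ingest
      // (though their own users could still add links locally).
      if (inst.mode === "broadcast") return;
      // An "everything" site accepts any link from a federated peer.
      if (inst.peers.has(fromDomain)) {
        inst.links.push(link);
      }
    }
    ```

    Filtering on the receiving side keeps a broadcast-only instance simple: it never has to store anyone else’s links, only its own users’ curation.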