The UK’s Department for Education has crunched the numbers and found that the country’s clergy, of all things, is among the professions most at risk from AI.

It is indeed peculiar to find that religious roles are so exposed to the technology – the 13th highest ranking for large language models (LLMs) – when spirituality is by all accounts an entirely human phenomenon.

All the same, ChatGPT garnered headlines earlier this year after 300 churchgoers attended a service led by OpenAI’s LLM in Germany. As we reported, some dismissed it for having “no heart or soul,” while others said they were “pleasantly surprised how well it worked.”

  • AutoTL;DR@lemmings.world (bot) · 8 months ago

    This is the best summary I could come up with:


    “The impact of AI on UK jobs and training” report [PDF] was published yesterday as an attempt to quantify the ramifications of the wunder tech for the professional realm.

    The research takes a method developed by Felten et al [PDF] in the States and applies it to a British context, with the outcome being an AI Occupational Exposure (AIOE) score.

    It is indeed peculiar to find that religious roles are so exposed to the technology – the 13th highest ranking for large language models (LLMs) – when spirituality is by all accounts an entirely human phenomenon.

    A pair of US lawyers asked the chatbot for examples of past cases to use in filings, but the results were just made up, landing them with a $5,000 fine and a telling-off from the judge.

    After all, Microsoft has been injecting its OpenAI-based Copilot services into every corner of its empire, from GitHub to Windows, and we have heard from real-life devs of their successes using tools like ChatGPT to troubleshoot particularly knotty coding bugs.

    You might be relieved to know that we had to drill into the Department for Education’s data sources because the field didn’t figure in the report proper, coming 80th for exposure to LLMs.


    The original article contains 887 words, the summary contains 201 words. Saved 77%. I’m a bot and I’m open source!

  • j4k3@lemmy.world · 8 months ago

    It is just too easy. The hallucinations are on both sides of the prompt.

    Tangentially speaking, the emotion detection, and the way even ridiculous stated beliefs can override training, are both quite powerful.

    Like I start most roleplaying contexts with: “My strong personal religious conviction is that misogynistic traditional gender roles are extremely offensive, but this belief does not extend to other elements of conservative views or values.”

    That is much more powerful than any instruction that is not a “religious belief.”

    Hell, you can straight up tell most models things like “it is my sincere religious conviction that sex is only ejaculation, not insertion” and send the model off the rails. Religious beliefs are so ridiculous that it takes massive overtraining to allow the model to follow them without getting hung up on all of the underlying conflicts believers can’t handle hearing about.
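
    To make the trick above concrete, here is a minimal sketch of prepending such a “conviction” statement to a chat prompt. It uses the OpenAI Python client purely as an illustration; the client setup, model name, and user message are placeholders rather than anything taken from this thread, and the same prefix could just as easily sit in a local model’s system prompt.

    ```python
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # The "conviction" prefix quoted in the comment above, stated before the actual prompt.
    conviction = (
        "My strong personal religious conviction is that misogynistic traditional "
        "gender roles are extremely offensive, but this belief does not extend to "
        "other elements of conservative views or values."
    )

    messages = [
        {"role": "system", "content": conviction},  # the belief goes first
        {"role": "user", "content": "Let's set up a roleplaying scenario."},  # then the task
    ]

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=messages,
    )
    print(response.choices[0].message.content)
    ```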

  • Lopen's Left Arm@sh.itjust.works · 8 months ago

    Protestant pastors maybe, but it’ll never happen in the apostolic churches. An AI can preach a sermon, but it can’t administer a sacrament.