• conciselyverbose@kbin.social
    11 months ago

    They already have dedicated hardware they call the Neural Engine, which they use for Core ML, ARKit, some of the magic that turns terrible sensors and lenses into passable images, etc. There’s a lot of processing that already happens on your device. Being able to search your images by subject might be something Google does too, but Apple does it locally.
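    For a concrete picture of what that local processing looks like from the developer side, here’s a minimal sketch using Vision’s built-in VNClassifyImageRequest, which runs the system classifier entirely on-device (the classifySubjects helper name is made up for this example; the Vision calls themselves are the real API):

    ```swift
    import CoreGraphics
    import Vision

    // A minimal sketch of on-device image classification, assuming an
    // Apple platform. VNClassifyImageRequest runs the OS's built-in
    // classifier locally (Vision dispatches it to the Neural Engine
    // where available); classifySubjects is a hypothetical helper name.
    func classifySubjects(in image: CGImage) throws -> [String] {
        let request = VNClassifyImageRequest()
        let handler = VNImageRequestHandler(cgImage: image, options: [:])
        try handler.perform([request])   // synchronous, all on-device
        // Keep only labels the classifier is reasonably confident about.
        return (request.results ?? [])
            .filter { $0.confidence > 0.5 }
            .map { $0.identifier }
    }
    ```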

    So my guess is they’ll just adjust the architecture of the Neural Engine to accommodate any new requirements, rather than adding a “new core”. But it’s kind of all semantics either way: there will be new hardware components and new low-level interconnects between them.
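    That abstraction boundary is already visible in the API: apps don’t target the Neural Engine directly, they just tell Core ML which compute units it may use. A sketch (SomeModel stands in for an Xcode-generated model class; MLModelConfiguration and computeUnits are the real Core ML API):

    ```swift
    import CoreML

    // Sketch of the abstraction boundary: apps request compute units
    // and Core ML decides whether work lands on the CPU, GPU, or
    // Neural Engine. "SomeModel" is a placeholder for a model class
    // generated by Xcode from a compiled .mlmodel file.
    let config = MLModelConfiguration()
    config.computeUnits = .all   // allow the Neural Engine when present
    // let model = try SomeModel(configuration: config)
    ```

    Because the dispatch decision lives in the framework rather than in apps, Apple can rework the silicon underneath without developers changing a line, which is exactly why the “new core vs. adjusted architecture” distinction is mostly semantics.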