Hmm. That is strange. I don’t think of Apple as doing much artificial gating. This may be that (and I’m not opposed to charging for software), but it is notable that the Apple Watch keyboard does a lot of prediction. It works surprisingly well and clearly does a lot of guessing and prompting to deal with the fact that a finger is like 4 keys large. I don’t know much about the SE, but I wonder if they found the experience would be worse there. (If they’re going to offer dictation, I’d think it wouldn’t make much sense to withhold a keyboard if it were an option.)
Question: what language do you speak that dictation works poorly in? Just curious. I only recently tried it (dictation) and I was blown away by how well it works in English.
Yes/No.
Accuracy: Please use this as a moment to empathize with old people who can’t distinguish between a short press and a long press, or can’t double tap at the right speed. In my experience, the gesture is incredibly accurate, but you have to learn the pace of tapping: not too fast, not too slow. This is something we do all the time everywhere else, but we’ve internalized it and lost conscious awareness of it. (The double tap also needs your watch to be awake, which leads to a separate class of failures — it will be amazing if they ever get a system that can skip that need.)
Utility: Hardly anyone thinks its current form is very useful. (It’s a little useful. I think of it as “hugging support”. If I wake up in the morning and have one arm around someone and want to turn off an alarm it’s great. Or if I have an arm around a friend and want to take a photo: also great. …Music would be useful even without hugs if I weren’t casting from watch too.)
The hope, I think/for me, is that this is gesture version 1. The watch needs multiple gestures to navigate meaningfully, like with the accessibility version. Hopefully they bit by bit add reliable, snappy versions of the accessibility gestures. Having multiple will be more than the sum of its parts!