The problem here is that “AI” is a moving target, and so is what “building an actual, usable AI” looks like. Back when OpenAI was demoing Dota-playing bots, they were also building actual, usable AIs.
For some context: prior to the release of ChatGPT I didn’t realize that OpenAI had personnel affiliated with the rationalist movement (Altman, Sutskever, maybe others?), so I didn’t make the association, and I didn’t really know about anything OpenAI did prior to GPT-2 or so.
So, prior to ChatGPT, the only “rationalist” AI research I was aware of was the non-peer-reviewed (and often self-published) theoretical papers that Yud and MIRI put out, plus the work of a few ancillary startups that seemed to go nowhere.
The rationalists seemed to be all talk and no action, so I was genuinely surprised that a rationalist-affiliated organization had any marketable software product at all, “AI” or not.
And FWIW, I was taught a different definition of AI in college, but it seems to be one of those terms that gets defined differently by different people.