• 0 Posts
  • 319 Comments
Joined 1 year ago
Cake day: July 9th, 2023

  • I haven’t looked at the used car market in five years, and I know it’s gotten more expensive due to supply chain issues, but I’ve bought three used EVs over the years. All of them have worked fantastically, and all of them were between $38k and $55k new, purchased for $15k-$22k off lease. People just hear “EVs are expensive” but don’t take the trouble to actually look for themselves.

    Used Nissan Leafs used to be like $7k cars. There are a lot of people who could use a nice, reliable $7k car capable of getting around any city for all practical purposes. But since it can’t go on a Great American Road Trip™, it doesn’t get looked at at all.




  • I think it’s like 70% that and 30% that all games journalists are also fans (this maybe isn’t true of, say, political journalists), who are always walking an ethical line between telling the truth and geeking out over the status, access, power, and free stuff they get from these companies. That also makes them more likely to preemptively defend their golden goose/favorite studios and brands like a kid on a playground, except they might lose kickbacks in the future if they don’t become ardent defenders.

    Also, I loved DA:O, DA2 was OK, I didn’t finish DA:I, and I have very, very, very little interest in this game until I see lots of reviews after it’s released. Sorry BioWare, but ya basic.



  • LLMs are conversation engines (hopefully that’s not controversial).

    Imagine if Google were a conversation engine instead of a search engine. You could enter your query and it would essentially describe the first search result to you, in conversation. It would basically be like searching Google and hitting the “I’m feeling lucky” button every time.

    Google, even in its best days, would be a horrible search engine by the “I’m feeling lucky” standard, assuming you wanted an accurate result, where accurate means “the system understood me and provided real information useful to me”. Google instead returns (or returned?) millions or billions of results for your query, and we’ve become accustomed to finding what we want within the first 10 results, or we tweak the search.

    I don’t know if LLMs are really less accurate than a search engine from that standpoint. They “know” many things, but a lot of it needs to be verified. They might not be right on the first or second pass. It might require tweaking your parameters to get better output. The model has billions of parameters but regresses to some common mean.

    If an LLM returned results like a search engine instead of a conversation engine, I guess it might return billions of results, most of them probably nonsense (but generally easy for a human to spot), and you’d probably still get what you want within the first 10 results, or you’d tweak your parameters.
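
    To make that analogy concrete, here is a minimal, purely illustrative Python sketch of treating an LLM like a search engine: sample a handful of candidate answers, filter the obvious nonsense, and scan the “first page” yourself. The llm_generate and looks_plausible functions are hypothetical stand-ins for a real model call and a real verification step, not any actual API.

        import random

        # Hypothetical stand-in for a real model call; a real setup would hit an LLM API here.
        def llm_generate(prompt: str, seed: int, temperature: float = 0.8) -> str:
            random.seed(seed)
            return f"candidate #{seed} for {prompt!r} (temperature={temperature})"

        # Hypothetical filter; in practice this is a human skim or a retrieval/consistency check.
        def looks_plausible(answer: str) -> bool:
            return len(answer) > 0

        def search_style_query(prompt: str, n: int = 10) -> list[str]:
            # Pull n candidates instead of trusting a single completion, then keep the
            # plausible ones, the way you'd scan the first page of search results
            # or tweak the query and retry.
            candidates = [llm_generate(prompt, seed=s) for s in range(n)]
            return [c for c in candidates if looks_plausible(c)]

        for hit in search_style_query("how do heat pumps work?"):
            print(hit)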

    (Accordingly, I don’t really see LLMs saving all that much practical time either. They can process data and parse requests differently, but the need to verify their output means you still end up with a lot of the back and forth we would have had before. It’s just different.)

    (BTW this is exactly how Stable Diffusion and Midjourney work if you think of them as searching the latent space of the model, with the prompt as the search query.)
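
    If you want to play with that framing, the sketch below, assuming the Hugging Face diffusers library and the runwayml/stable-diffusion-v1-5 checkpoint (both just assumptions for illustration), runs the same prompt from a few different seeds, i.e. a few different points in the latent space, so you get several “results” to browse instead of one answer.

        import torch
        from diffusers import StableDiffusionPipeline

        # Assumed checkpoint; any Stable Diffusion model id should behave similarly.
        pipe = StableDiffusionPipeline.from_pretrained(
            "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
        ).to("cuda")

        prompt = "a lighthouse at dusk, oil painting"

        # Each seed starts from a different point in the latent space, so the same
        # "query" returns several different "results" to scan, like a results page.
        for seed in range(4):
            generator = torch.Generator("cuda").manual_seed(seed)
            image = pipe(prompt, generator=generator, num_inference_steps=30).images[0]
            image.save(f"result_{seed}.png")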

    edit: oh look, a troll appeared and immediately disappeared. nothing of value was lost.