• 0 Posts
  • 14 Comments
Joined 1 year ago
Cake day: July 11th, 2023

  • I agree that anecdotes aren’t worthless, but for different reasons. There’s actually a saying that goes, “the plural of anecdote isn’t data.” Anecdotes are just stories. They aren’t data points and they aren’t peer reviewed. If you want to turn anecdotes into data, you have to do proper interviews and surveys to actually build a dataset and then get it peer reviewed, but at that point we aren’t talking about anecdotes anymore.


  • It’s crazy how most of those programs work. The way my insurance handles it is way better. For example, no matter how bad a driver you are, they never raise the premiums above the normal rate, so it almost always makes sense to get the tracker from a financial perspective. (The only exception is that they will raise your rates if you drive farther in 6 months than you estimated on your initial application. The flip side is that they lower your rates if you don’t drive very much. I only drive about 1000 miles every 6 months, so my premium is really low.) They also have a Bluetooth device that stays in your car, and your phone has to be connected to it for any trip data to be recorded; if you’re riding as a passenger, the app lets you mark that trip as one where you weren’t the driver. I was surprised to learn they aren’t all like that.


  • Language parsing is a routine process that doesn’t require AI, and it’s something we have been doing for decades. That phrase in no way plays into the hype around AI. Also, the weights may be random initially (though not uniformly random), but the way they are connected and relate to each other is not random, and after training the weights are no longer random at all, so I don’t see the point in bringing that up. Finally, machine learning models are not brute-force calculators. If they were, they would take billions of years to respond to even the simplest prompt, because they would have to evaluate every possible response (even the nonsensical ones) before returning the best one. They’re better described as greedy algorithms than brute-force ones (there’s a toy sketch at the end of this comment contrasting the two).

    I’m not going to get into an argument about whether these AIs understand anything, largely because I don’t have a strong opinion on the matter, but also because that would require a definition of understanding, which is an unsolved problem in philosophy. You can wax poetic about how humans are the only ones with true understanding and how LLMs are encoded in binary (which is somehow related to the point you’re making in some unspecified way); however, your comment reveals how little you know about LLMs, machine learning, computer science, and the relevant philosophy in general. Your understanding of these AIs is just as shallow as that of those who claim LLMs are intelligent agents with free will and conscious experience; you just happen to land closer to the mark.
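
    To make the greedy-vs-brute-force point concrete, here’s a minimal toy sketch. Nothing in it is real LLM code: the tiny VOCAB and the next_token_probs stand-in are made up purely for illustration. Brute force would have to score every possible sequence, which blows up exponentially with length, while greedy decoding just takes the most probable next token at each step.

```python
# Toy illustration only (not any real model or library): contrasts brute-force
# enumeration of every possible output with greedy one-token-at-a-time decoding.
from itertools import product

VOCAB = ["the", "cat", "sat", "<end>"]  # tiny hypothetical vocabulary

def next_token_probs(prefix):
    """Stand-in for a trained model: returns a probability for each token
    given the tokens generated so far. Here it is just a dummy heuristic."""
    scores = {tok: 1.0 / (1 + abs(len(prefix) - i)) for i, tok in enumerate(VOCAB)}
    total = sum(scores.values())
    return {tok: s / total for tok, s in scores.items()}

def brute_force(max_len):
    """Scores every possible sequence of length max_len and keeps the best.
    The candidate count grows as len(VOCAB) ** max_len, which is why a real
    model could never work this way."""
    best_seq, best_p = None, -1.0
    for seq in product(VOCAB, repeat=max_len):
        p, prefix = 1.0, []
        for tok in seq:
            p *= next_token_probs(prefix)[tok]
            prefix.append(tok)
        if p > best_p:
            best_seq, best_p = list(seq), p
    return best_seq

def greedy(max_len):
    """Picks the single most probable next token at each step: one pass per
    token, no enumeration of alternative continuations."""
    prefix = []
    for _ in range(max_len):
        probs = next_token_probs(prefix)
        tok = max(probs, key=probs.get)
        prefix.append(tok)
        if tok == "<end>":
            break
    return prefix

print("brute force:", brute_force(3))  # evaluates 4**3 = 64 candidate sequences
print("greedy:     ", greedy(3))       # evaluates at most 3 steps total
```

    Even in this four-token toy, brute force already scores 64 candidates for a three-token reply; with a real vocabulary of tens of thousands of tokens and replies hundreds of tokens long, that number is astronomically large, which is the point the comment above is making.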