  • One of the first things they teach you in Experimental Physics is that you can’t derive a curve from just 2 data points.

    You can just as easily fit an exponential growth curve to 2 points like that (one 20% above the other) as you can a sinusoidal curve, a linear one, an inverse-square curve (which actually grows to a peak and then eventually goes down again), or any of the many curves where growth has ever-diminishing returns and can’t go beyond a certain point (literally “with a limit”).
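    To make that concrete, here’s a minimal sketch (Python, with made-up numbers) fitting just two of those families exactly through the same pair of points, the second one 20% above the first. Both fit perfectly, yet they extrapolate to completely different futures:

    ```python
    # A toy example with made-up numbers: two different curve families fitted
    # exactly through the same two points, the second one 20% above the first.
    import math

    x1, y1 = 1.0, 1.0   # hypothetical "version 1" score
    x2, y2 = 2.0, 1.2   # hypothetical "version 2" score, 20% higher

    # Exponential growth: y = a * exp(b*x). Two unknowns, two points: exact fit.
    b = math.log(y2 / y1) / (x2 - x1)
    a = y1 * math.exp(-b * x1)

    # Saturating growth: y = L * x / (x + c). Also two unknowns: also an exact fit.
    # Here L is the hard limit this curve can never exceed.
    c = x1 * x2 * (y2 - y1) / (y1 * x2 - y2 * x1)
    L = y1 * (x1 + c) / x1

    for x in (3, 5, 10):
        print(f"x={x:2}: exponential={a * math.exp(b * x):.2f}, "
              f"saturating={L * x / (x + c):.2f} (limit {L:.2f})")
    ```

    With only those two points there is no way to tell which of the two (if either) is the real curve.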

    I think the point that many are making is that LLM growth in precision is the latter kind of curve: growing, but ever more slowly, and tending to a limit that is much less than 100%. It might even be more like the inverse-square one (in that it might actually go down) if the output of LLMs ends up polluting the training sets of later models, which is a real risk.

    You showing that there was some growth between two versions of GPT (so, 2 data points, a before and an after) doesn’t disprove this hypothesis. It doesn’t prove it either: as I said, 2 data points aren’t enough to derive a curve.

    If you do look at the past growth of precision for LLMs, whilst improvement is still happening, the rate of improvement has been going down, which does support the idea that there is a limit to how good they can get.



  • Just because they share the same religion doesn’t mean they’re the same kind of people.

    That bunch in Israel have even accused a Holocaust survivor of being an anti-semite for criticizing Israel, and most modern Israelis did not come from Western Europe. Most of them share no characteristic with the victims of the Holocaust other than religious affiliation, and certainly not opposition to racism or other modern humanist values.


  • “They made me do it” has been the main axis of Israeli Propaganda since the start and that’s just a variant of that.

    How about this alternative explanation: they’re stealing Palestinian land, ultimately want to steal all Palestinian land, are led by Sociopaths and Psychopaths, and have an extremely racist society anchored on the kind of ethnic-superiority ideas that would make Klanners blush. So they were always going to do something like this sooner or later to get rid of the rest of the Palestinians and take the rest of their land, and to do it in the most inhuman ways, because a large part, maybe even most, of Israeli society sees themselves as “the chosen people”, a superior ethnicity, what the previous famous group of ethno-Fascists would call übermenschen, whilst seeing Palestinians as lesser people, “human animals”, untermenschen.

    This is the kind of mass murder Western nations used to do back in the days of Colonialism. It’s only shocking for us nowadays because we’ve evolved as societies and adopted Humanist values (though, by their support of Israel, you can see that many politicians in several countries, and even a large fraction of people, have in fact not evolved). Israel does in fact have Western Values; it’s just that they’re the White Colonialist Values many European nations had back in the 19th Century, not 21st Century Western Values.


  • In my experience developing In-House custom software, it’s more “Managers” and “End-users”: basically, the requirements for the software are defined by the manager overseeing an area, and hence based on their point of view of the business process they oversee, which is often not at all the same point of view as that of the people working in that process.

    I’ve seen again and again software made exactly to the spec provided by team/area management turn out to have lots of problems for the actual users.

    In my experience the best results come from having the developers talk directly with the end-users, even if the language the devs tend to speak and their preconceptions at first don’t match those of the end-users.



    Well, that’s the thing: whilst the client is supposedly doing it for their users, in practice it’s often not, and is doing it for other reasons.

    Mind you, I think that is more common when the software is being developed for a client who is basically a Manager in the same company as the users of the software (for example in In-house Development, or Consultancy work developing a solution for a company). There, in the absence of the very clear pressure vector of customers not buying the product (internal end-users are almost never given a choice to use that software or not, though they can at times informally boycott software they think hinders their work and get the project killed), things are often designed for the Manager rather than for the Users.

    (Note that this is not necessarily done in a knowing, purposeful way: it’s just that when it’s some external manager providing the developers with the requirements for software being made for the area that manager oversees - though sometimes it’s even more indirect - things tend to be seen from the perspective of said manager rather than of the end-users, and hence designed to match how that manager sees things and thinks things work, which is often different from how the actual users see them. This cartoon perfectly illustrates that, IMHO: it looks fine for the “manager” whilst looking quite different for the “end-user”.)

    Even in B2C you see that: notice the proliferation of things like Microtransactions in Games, which are hardly the kind of thing gamers - who are the end-users of games - wanted to have, but which the management of the big Publishers definitely did.


  • “Family friendly UI” is “ultra-advanced” stuff for me: remember, before Kodi on a Mini-PC in my living room (and, by the way, I got a remote control for it too) I had been using first-generation Media Players with file-browser interfaces to choose files from remote shares on a NAS, so merely having something with the concept of a media library, tracking of watched status and pretty pictures automatically fetched from the Internet is a giant leap forward ;)

    There are downsides to being an old Techie using all sorts of (what was then) non-mainstream tech since back in the 90s. I’m just happy Kodi solved my problem of having an old Media Player hanging together with duct-tape, spit and prayers.

    That said, I can see how Kodi having all status (such as watched/not-watched tracking) be per-media rather than per (user + media) isn’t really good for families. More broadly, the thing doesn’t even seem to have the concept of a user.
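    To illustrate the difference (a hypothetical sketch, not Kodi’s actual data model): per-media state means one watched flag shared by the whole household, whilst per (user + media) state gives each family member their own.

    ```python
    # Hypothetical sketch of the two designs -- not Kodi's actual schema.

    # Per-media state: one flag per item, shared by everyone in the household.
    watched_per_media: dict[str, bool] = {
        "Alien (1979)": True,              # marked watched for *all* viewers
    }

    # Per (user + media) state: each family member gets their own flag.
    watched_per_user: dict[tuple[str, str], bool] = {
        ("alice", "Alien (1979)"): True,
        ("bob", "Alien (1979)"): False,    # still unwatched for Bob
    }
    ```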



  • That would be Kodi, which I now use on a Mini-PC with Lubuntu, and which has replaced my TV Box and my Media Player (plus that Mini-PC also replaces a bunch of other things and even added some new ones).

    Before I went down a rabbit hole of trying to replace my really old Asus Media Player (so old that its remote was broken, and I had replaced it with my own custom electronics + software solution so that I could control the Media Player from an Android app I made, running on my tablet), I had no idea Kodi even existed. That rabbit hole eventually ended with Kodi on a Linux Mini-PC also replacing my TV box. Until then, I was just using the old Media Player to browse directories of video files in a remote share (hosted on a hacked NAS on my router, a functionality which is now on that Mini-PC, which even supports a newer and much faster version of the SMB protocol), using a file-browser user interface to play those files.

    It was quite the leap from that early ’00s file-browser interface for choosing files to play on the TV, to a modern “media library” interface covering all sorts of media, including live TV (which is why it ended up also replacing my TV box).







    • Get a cheap old fashioned alarm clock (we’re talking about something that costs maybe 10 bucks).
    • Put it out of reach so that you have to physically get out of bed to turn it off.
    • Configure it to go off at the appropriate time with the nastiest sound (usually they have an “alarm with radio” mode and an “alarm with alarm sound” mode, and you definitely want the latter, not the former).

    It’s a pretty horrible way to wake up if you went to bed late, and that’s why it works (protip: to deal with the whole only-falling-asleep-late part of the problem, stop drinking coffee and using a computer after 11PM).




  • Above a certain level of seniority (in the sense of real breadth and depth of experience, rather than merely a high count of work years), one’s increased productivity is mainly in making others more productive.

    You can only be so productive at making code, but you can certainly make others more productive with: better design of the software; better software architecture; libraries properly designed (for productivity, bug reduction and future extensibility); software development processes adequate for, and adjusted to, the specifics of the business the software is being made for; proper technical and requirements analysis done before time has been wasted in coding; mentorship; use of experience to foresee future needs and potential pitfalls at all levels (from requirements, through systems design, down to code); and so on.

    Don’t pay for that, and then be surprised at just how much work turns out to have been wasted doing the wrong things, how much trouble people have with integration, how many “unexpected” things delay the deliveries, how fast your code base ages and how brittle it seems, how often whole applications and systems have to be rewritten, how much the software mismatches the needs of the users, how mistrustful and even adversarial the developer-user relationship ends up being, and so on.

    From the outside (and also from having known people on the inside) it’s actually pretty easy to deduce that plenty of Tech companies (Google being a prime example) haven’t learned the lesson that there are more forms of value in the software development process than merely “works 14h/day, is young and intelligent (but clearly not wise)”.


  • Sounds like a critical race condition or bad memory access (the latter only in languages with pointers).

    Since it’s HTTP(S), and judging by the average developer experience in the domain of multi-threading that I’ve seen (even for people doing stuff that naturally tends to involve multiple threads, such as networked access by multiple simultaneous clients), my bet is on the former.
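    For anyone who hasn’t been bitten by one yet, here’s a minimal sketch (hypothetical, in Python) of the classic shape of such a bug: several threads doing an unsynchronized read-modify-write on shared state.

    ```python
    # Hypothetical sketch of a classic race condition: several threads doing an
    # unsynchronized read-modify-write on shared state, as can happen in a
    # multi-threaded request handler.
    import threading

    counter = 0  # shared state, e.g. "requests served"

    def handle_requests(n: int) -> None:
        global counter
        for _ in range(n):
            counter += 1  # NOT atomic: read, add and store can interleave

    threads = [threading.Thread(target=handle_requests, args=(100_000,))
               for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    # Expected 400000; with unlucky thread interleavings you can see less.
    print(f"counter = {counter} (expected {4 * 100_000})")
    ```

    The fix is to guard the shared update with a lock (threading.Lock here), and the nasty part is that the failure is timing-dependent, so it comes and goes.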

    PS: Yeah, I know it’s a joke, but I made the serious point anyways because it might be useful for somebody.