Grace Hopper's Revenge

(thefuriousopposites.com)

25 points | by ashirviskas 2 hours ago

9 comments

  • jna_sh 13 minutes ago
    > The amount of training data doesn’t matter as much as we thought.

    Seems a huge assumption to me. From the data one could equally conclude that JavaScript and Python have lower code quality _because of_ the quantity of training data, e.g. more code written by less experienced developers.

  • ModernMech 10 minutes ago
    This is so annoying. First, it's obviously AI-generated, so it's hard to even read. But if we get past that, it's making all kinds of uncited claims. Did Grace Hopper envision the translation layer moving directly from English to machine code? I don't know, because I can't trust the LLM to say, and the article does not include a citation, in a piece whose central claim is that AI shifts the burden from coding to verification.
  • austin-cheney 13 minutes ago
    > I’m still seeing a decent number of people on Twitter complain occasionally that they’ve tried AI-driven coding workflows and the output is crap and they can move faster by themselves. There’s less of these people in the world of Opus 4.5 and Gemini 3 now, but they’re still there.

    The article starts from a false premise: that AI-assisted coding makes the code more understandable. This isn't the case. You either understand the code without AI or offload that reasoning onto the AI, at which point it's not you that understands the code.

    A person could argue that AI writes original code that is more understandable at maintenance time than what they could write on their own. This is equally problematic, for the same reason. If a person has a lesser understanding of the code at original authoring, they will have a lesser understanding of the edge cases and challenges that went into the reasoning about that original code, and it's those thought challenges which inform the complexities of maintenance, not the simplicity of the base code.

    As an analogy, it's like being given a challenging game puzzle to solve. After realizing the game requires extended effort to reach the desired goal, the person searches online for the puzzle solution. At the game's next level they encounter a more challenging puzzle, but they never solved the prior puzzle, and so cannot solve this one either. In effect all understanding is destroyed, and they have become utterly reliant on spoon-fed solutions they cannot maintain themselves.

  • ashirviskas 2 hours ago
    I found it interesting that Elixir scores so high, but I'm not sure whether I can agree with the cause.
    • Bolwin 1 hour ago
      That benchmark is useless for comparing languages because the tasks are not the same across languages.
    • gostsamo 1 hour ago
      How can you argue with so many assertive sentences in the article? They leave no space for critical thinking.
  • keybored 14 minutes ago
    Against Flintstone Engineering.[1] That’s great.

    I don’t know about the premises here. All of these articles are written to hammer two points.

    - AI is the future/AI has been here since X months ago

    - There are still people who don't believe that—to me an unfathomable position, as I have personally spent five gazillion tokens on it

    And the supposed topic of the article is incidental to that.

    But if GenAI is the future I’ll take GenAI formal verification and code generation over mindless code generation, thank you very much.

    [1] https://news.ycombinator.com/item?id=47358696

  • stabbles 1 hour ago
    The TL;DR: code should be easy to audit, not easy to write for humans.

    The rest is AI-fluff:

    > This isn't about optimizing for humans. It's about infrastructure

    > But the bottleneck was never creation. It was always verification.

    > For software, the load-bearing interface isn't actually code. Code is implementation.

    > It's not just the Elixir language design that's remarkable, it's the entire ecosystem.

    > The 'hard' languages were never hard. They were just waiting for a mind that didn't need movies.

    • zeristor 13 minutes ago
      It really is AI fluff.

      Are people starting to write and talk in this manner? I see so many YouTube videos where you can see a person reading an AI-written text. It's one thing if the AI wrote it, but another if the human wrote it in the style of an AI.

      As someone pointed out to me, the way an AI writes text can be changed so it is less obvious; it's just that people don't tend to realise that.

      • InkCanon 5 minutes ago
        Whenever I see a sentence of the form:

        "X isn't A, it's (something opposite A)" I twitch involuntarily.

    • tyleo 14 minutes ago
      To put it another way: this article isn’t about the AI fluff, it’s about the two sentences at the top the author wrote themselves. ;)
      • zeristor 12 minutes ago
        Perhaps we need an AI to human transformer to remove the AI fluff?
    • dist-epoch 1 hour ago
      Man, you are bad at TL;DR-ing. You completely left out the main point the article makes: comparing the stateful, mutating object-oriented programming that humans like with the pure functional programming that, according to the author, LLMs presumably thrive in.
  • skywhopper 55 minutes ago
    This article takes a very tiny, questionable bit of data and extrapolates a lot of iffy assertions.

    In general I’m tired of the “humans need never, and should never look at the code” LLM triumphalism articles. Do these folks ever work with real systems, I wonder.

    • dist-epoch 43 minutes ago
      I remember when "real programmers" were supposed to look at the assembly code generated by compilers because it was bloated, inefficient, and totally unsuitable to use in a real system.

      Cue the "non-determinism" retort.

      • tgv 14 minutes ago
        Hardware restrictions might have contributed to that. Anyway, analogies and metaphors do not prove what they sneakily try to imply. They might help with thinking about a problem, but they leave out the actual argument, and in this case the jump is substantial.
      • chrisrhoden 25 minutes ago
        I think the problem is less determinism than predictability. Hashing algorithms are deterministic.

        Will people start .gitignore-ing their src directories and only save prompts?
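        The determinism point is easy to demonstrate with the standard library; a minimal sketch (the `digest` helper name is mine):

        ```python
        import hashlib

        # A hash function is deterministic: the same input yields the
        # same digest on every machine, in every run.
        def digest(text: str) -> str:
            return hashlib.sha256(text.encode("utf-8")).hexdigest()

        assert digest("hello") == digest("hello")   # always holds
        assert digest("hello") != digest("Hello")   # input changes, digest changes

        # An LLM sampling at temperature > 0 offers no such guarantee:
        # the same prompt can yield different code on each run, which is
        # the predictability problem, not determinism per se.
        ```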

  • Chris2048 28 minutes ago
    > We built objects with identity and state because that’s how we experience reality

    I mean, we called them objects, but coupling related state (and functions) together seems an objective (object-ive) way to group data; it's literally just dict-based organisation.
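
    The "dict-based organisation" framing can be made concrete; a toy sketch (the `make_counter` example is mine, not from the article):

    ```python
    # An "object" is, structurally, coupled state plus the functions that
    # operate on it -- which a plain dict can express directly.
    def make_counter(start=0):
        state = {"count": start}

        def increment():
            state["count"] += 1
            return state["count"]

        # The "object": state and behaviour grouped in one dict.
        return {"state": state, "increment": increment}

    counter = make_counter()
    counter["increment"]()
    counter["increment"]()
    # counter["state"]["count"] is now 2
    ```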

  • muskstinks 29 minutes ago
    [dead]