4 comments

  • woolion 13 minutes ago
    > We assert that artificial intelligence is a natural evolution of human tools developed throughout history to facilitate the creation, organization, and dissemination of ideas, and argue that it is paramount that the development and application of AI remain fundamentally human-centered.

    While this is a noble goal, it seems obvious that this isn't how it usually goes. For instance, "free market" is often invoked as dogma to defend companies that are actively harmful to society, as "globalization" might be: an unstoppable force, so any form of opposition is "luddite behavior". Another example is easier transport and remote communication, which have generally eroded the social fabric, or social media wreaking havoc on teenagers' minds. From there, it's easy to see why the technological system might be seen as an inherent evil. In Erewhon (1872), Butler already described the technological system as a force that human society could no longer contain once it tolerated it. There are already many companies persecuting their employees for not using AI enough, even when the employee's objection is that the quality of the AI's output is not good enough for the work at hand, rather than anything ideological.

    I'm neither optimistic nor pessimistic about the changes that AI might bring, but hoping for it to become "human-centered" seems almost as optimistic as hoping for "humane wars".

  • gradstudent 24 minutes ago
    I skimmed the paper a couple of times, hoping to find the promised (from the abstract)

    > pathway to integrating AI into our most challenging and intellectually rigorous fields to the benefit of all humankind.

    There's very little insight here, though. It seems mostly a retread of conversations we've been having in the academic community for a few years now. In particular, I was hoping to see some discussion of how we might restructure our educational institutions around this technology, given that the machines rob students of the opportunity to develop critical thinking skills. Right now our best idea seems to be a retreat to oral and written examinations, an idea that doesn't scale and that ignores the supposed benefits of human+AI reasoning. The alternative suggestion I've seen is to teach prompt engineering, which (a) seems hard for foundational subjects and (b) again outsources much of the thinking to the AI, instead of extending the reach of human thought.

    • BDPW 16 minutes ago
      Physical classrooms don't really scale either; is that really a fundamental problem?
  • zaikunzhang 2 hours ago
    • anotherpaulg 1 hour ago
      Recorded 10 February 2026. Terence Tao of the University of California, Los Angeles, presents "Machine assistance and the future of research mathematics" at IPAM's AI for Science Kickoff.
  • bluecheese452 55 minutes ago
    Enough Terence Tao spam.