14 comments

  • barishnamazov 4 hours ago
    I like that this relies on generating SQL rather than just being a black-box chat bot. It feels like the right way to use LLMs for research: as a translator from natural language to a rigid query language, rather than as the database itself. Very cool project!

    Hopefully your API doesn't get exploited and you are doing timeouts/sandboxing -- it'd be easy to do a massive join on this.
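A minimal sketch of that kind of guardrail, assuming a SQLite backend purely for illustration (the project's actual database isn't stated): open the database read-only and abort any statement that outruns a wall-clock budget, so a massive join gets interrupted instead of pinning the server.

```python
import sqlite3
import time

def run_sandboxed(db_path: str, sql: str, timeout_s: float = 2.0):
    """Run untrusted, LLM-generated SQL read-only and under a time cap."""
    # mode=ro makes SQLite refuse all writes on this connection.
    conn = sqlite3.connect(f"file:{db_path}?mode=ro", uri=True)
    deadline = time.monotonic() + timeout_s
    # The progress handler fires every N VM instructions; a truthy
    # return value aborts the running statement ("interrupted").
    conn.set_progress_handler(lambda: time.monotonic() > deadline, 10_000)
    try:
        return conn.execute(sql).fetchall()
    finally:
        conn.close()
```

A Postgres deployment would typically reach for a read-only role plus `statement_timeout` instead, but the shape of the defense is the same.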

    I also have a question, mostly stemming from my not being knowledgeable in the area -- have you noticed any semantic bleeding when research is done across your datasets? E.g., "optimization" probably means different things on arXiv, LessWrong, and HN. Wondering if vector searches account for this given a more specific question.

    • keeeba 3 hours ago
      I don’t have the experiments to prove this, but from my experience it’s highly variable between embedding models.

      Larger, more capable embedding models are better able to separate the different uses of a given word in the embedding space, smaller models are not.
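One way to probe for that bleeding: embed the same word in corpus-typical contexts and compare cosine similarities. Sketched here with a toy hashed bag-of-words stand-in for a real embedding model (swap in an actual model's encode() to run the experiment properly); bleeding shows up when cross-corpus uses of a term score as close as same-sense uses.

```python
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def embed(text: str, dim: int = 256) -> np.ndarray:
    """Toy stand-in: each token gets a fixed random vector, summed.
    A real test would replace this with an embedding model's encode()."""
    v = np.zeros(dim)
    for tok in text.lower().split():
        rng = np.random.default_rng(abs(hash(tok)) % (2**32))
        v += rng.standard_normal(dim)
    return v

arxiv_use = "convex optimization of the training objective via gradient descent"
lw_use = "optimization pressure from a mesa optimizer pursuing a proxy goal"
same_sense = "convex optimization of a loss function with gradient methods"

print(cosine(embed(arxiv_use), embed(same_sense)))  # same-sense pair
print(cosine(embed(arxiv_use), embed(lw_use)))      # cross-corpus pair
```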

      • A4ET8a8uTh0_v2 3 hours ago
        I've been thinking about this a fair bit lately. We have all sorts of benchmarks that describe many factors in detail, but they're all very abstract and don't seem to map clearly onto well-observed behaviors. I think we need a different way of cataloguing them.
  • bonsai_spool 1 hour ago
    This may exist already, but I'd like to find a way to query 'Supplementary Material' in biomedical research papers for genes / proteins or even biological processes.

    As it is, the Supplementary Materials are inconsistently indexed so a lot of insight you might get from the last 15 years of genomics or proteomics work is invisible.

    I imagine this approach could work, especially for Open Access data?

    • eamag 39 minutes ago
      I just built something like this a week ago: https://github.com/eamag/papers2dataset

      I wanted to find all cryoprotective agents that were tested at different temperatures, but it should be extendable to your problem too. It uses OpenAlex to traverse a citation graph and open-access PDFs.
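The traversal itself is just a breadth-first walk. A sketch with the citation graph mocked as an in-memory dict (the linked repo reportedly pulls each work's references from OpenAlex; the IDs and graph below are made up):

```python
from collections import deque

# Hypothetical citation graph: work ID -> IDs of the works it references.
CITES = {
    "W1": ["W2", "W3"],
    "W2": ["W4"],
    "W3": ["W4", "W5"],
    "W4": [],
    "W5": [],
}

def traverse(seed: str, max_depth: int = 2) -> set[str]:
    """Breadth-first walk of the citation graph from a seed paper."""
    seen, queue = {seed}, deque([(seed, 0)])
    while queue:
        work, depth = queue.popleft()
        if depth == max_depth:
            continue
        for ref in CITES.get(work, []):
            if ref not in seen:
                seen.add(ref)
                queue.append((ref, depth + 1))
    return seen

print(sorted(traverse("W1")))  # ['W1', 'W2', 'W3', 'W4', 'W5']
```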

  • nielsole 3 hours ago
    I think a prompt + an external dataset is a very simple distribution channel right now to explore anything quickly with low friction. The curl | bash of 2026
  • kburman 3 hours ago
    > a state-of-the-art research tool over Hacker News, arXiv, LessWrong, and dozens

    what makes this state of the art?

    • rvnx 29 minutes ago
      It's just marketing.

      It is not a protected term, so anything is state-of-the-art if you want it to be. For example, the Gemma models at the moment of release were performing worse than their competition, but were still billed as "state-of-the-art".

      Juicero was state-of-the-art on release too, though squeezing by hand turned out to work better.

    • 7moritz7 3 hours ago
      The scale. How many tools do you know that can query the content of all arXiv papers?
    • ashirviskas 3 hours ago
      First, so best in this?
    • nandomrumber 3 hours ago
      The tool is state of the art, the sources are historical.
  • voxleone 27 minutes ago
    This is great >> @FTX_crisis - (@guilt_tone - @guilt_topic)

    Using LLMs for tasks that could be done faster with traditional algorithmic approaches seems wasteful, but this is one of the few legitimate cases where embeddings are doing something classical IR literally cannot. You could also make the LLM explain the query it's about to run before execution:

    “Here’s the SQL and semantic filters I’m about to apply. Does this match your intent?”
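That confirmation gate can be sketched in a few lines; every name here is hypothetical, since the tool's internals aren't shown:

```python
def confirm_and_run(sql: str, explanation: str, execute, ask=input):
    """Show the generated SQL plus the model's own explanation, and run
    it only if the user confirms it matches their intent."""
    print("About to run:\n" + sql)
    print("Model's reading of your intent: " + explanation)
    if ask("Does this match your intent? [y/N] ").strip().lower() != "y":
        return None  # user rejected; nothing is executed
    return execute(sql)
```

The useful property is that the explanation is generated *before* execution, so a misread of the question surfaces as a cheap "no" instead of a wasted (or dangerous) query.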

  • 7777777phil 4 hours ago
    Really useful. I'm currently working on an autonomous academic research system [1] and thinking about integrating this. Currently using a custom prompt + the Edison Scientific API. Any plans to make this open source?

    [1] https://github.com/giatenica/gia-agentic-short

  • nineteen999 4 hours ago
    That's just not a good use of my Claude plan. If you can make it so a self-hosted Llama or Qwen 7B can query it, then that's something.
    • mcintyre1994 3 hours ago
      I think that’s just a matter of their capabilities, rather than anything specific to this?
  • fragmede 3 hours ago
    > I can embed everything and all the other sources for cheap, I just literally don't have the money.

    How much do you need for the various leaks (the Paradise Papers, the Panama Papers, the Offshore Leaks, the Bahamas Leaks, the FinCEN Files, the Uber Files, etc.), and what's your Venmo?

  • mentalgear 4 hours ago
    Nice, but would you consider open-sourcing it? I (and, I assume, others) am not keen on sharing API keys with a third party.
    • nielsole 3 hours ago
      I think you misunderstood. The API key is for their API, not Anthropic.

      If you take a look at the prompt you'll find that they have a static API key that they have created for this demo ("exopriors_public_readonly_v1_2025")

  • m11a 3 hours ago
    The quick setup is cool! I’ve not seen this onboarding flow for other tools, and I quite like its simplicity.
  • gtsnexp 4 hours ago
    Is the appeal of this tool its ability to identify semantic similarity?
    • A4ET8a8uTh0_v2 3 hours ago
      The use case could vary from person to person. When you think about it, Hacker News has a large enough dataset (and one that is widely accessible) to allow all sorts of fun analyses. In a sense, the appeal is:

      who knows what kind of fun patterns could emerge

      • noduerme 47 minutes ago
        The problem with HN isn't that the patterns are hard to discern, it's that no one wants to acknowledge them.
  • bugglebeetle 5 hours ago
    Seems very cool, but IMO you’d be better off doing an open source version and then hosted SAAS.
  • octoberfranklin 4 hours ago
    "Claude Code and Codex are essentially AGI at this point"

    Okaaaaaaay....

    • Closi 3 hours ago
      Just comes down to your own view of what AGI is, as it's not particularly well defined.

      While a bit 'time-machiney' - I think if you took an LLM of today and showed it to someone 20 years ago, most people would probably say AGI has been achieved. If someone had written a definition of AGI 20 years ago, we would probably have met it.

      We have certainly blasted past some science-fiction examples of AI like Agnes from The Twilight Zone, which 20 years ago looked a bit silly, and now looks like a remarkable prediction of LLMs.

      By today's definition of AGI we haven't met it yet, but eventually it comes down to 'I know it if I see it' - the problem with this definition is that it is polluted by what people have already seen.

      • nottorp 1 hour ago
        > most people would probably say AGI has been achieved

        Most people who took a look at a carefully crafted demo. I.e. the CEOs who keep pouring money down this hole.

        If you actually use it you'll realize it's a tool, and not a particularly dependable tool unless you want to code what amounts to the React tutorial.

        • bebb 22 minutes ago
          Depending on the task, the tool can, in effect, demonstrate more intelligence than most people.

          We've just become accustomed to it now, and tend to focus more on the flaws than the progress.

      • bananaflag 3 hours ago
        > If someone wrote a definition of AGI 20 years ago, we would probably have met that.

        No, as long as people can do work that a robot cannot do, we don't have AGI. That was always, if not the definition, at least implied by the definition.

        I don't know why the meme of AGI being not well defined has had such success over the past few years.

        • bonplan23 1 hour ago
          "Someone" literally did that (+/- 2 years): https://link.springer.com/book/10.1007/978-3-540-68677-4

          I think it was supposed to be a more useful term than the earlier and more common "Strong AI". With regard to strong AI, there was a widely accepted definition - i.e. passing the Turing Test - and we are way past that point already (see https://arxiv.org/pdf/2503.23674).

        • Closi 3 hours ago
          Completely disagree - your definition (in my opinion) is more aligned with the concept of Artificial Super Intelligence.

          Surely the 'general intelligence' definition has to be consistent between 'Artificial General Intelligence' and 'Human General Intelligence', and humans can be generally intelligent even if they can't solve calculus equations or protein-folding problems. My bar for general intelligence is much lower than most people's - I think a dog is probably generally intelligent, although obviously in a different way (dogs are obviously better at learning how to run and catch a ball, and worse at programming Python).

          • fc417fc802 1 hour ago
            I do consider dogs to have "general intelligence"; despite that, I have always (my entire life) considered AGI to imply human-level intelligence. Not better, not worse, just human level.

            It gets worse though. While one could claim that scoring equivalently on some benchmark indicates performance at the same level - and I'd likely agree - that's not what I take AGI to mean. Rather I take it to mean "equivalent to a human" so if it utterly fails at something we're good at such as driving a car through a construction zone during rush hour then I don't consider it to have met the bar of AGI even if it meets or exceeds us at other unrelated tasks. You have to be at least as general as a stock human to qualify as AGI in my books.

            Now I may be but a single datapoint but I think there are a lot of people out there who feel similarly. You can see this a lot in popular culture with AGI (or often AI) being used to refer to autonomous humanoid robots portrayed as operating at or above a human level.

            Related to all that, since you mention protein folding. I consider that to be a form of super intelligence as it is more or less inconceivable that an unaided human would ever be able to accomplish such a feat. So I consider alphafold to be both super intelligent and decidedly _not_ AGI. Make of that what you will.

      • sixtyj 1 hour ago
        Charles Stross published Accelerando in 2005.

        The book is a collection of nine short stories telling the tale of three generations of a family before, during, and after a technological singularity.

    • phatfish 3 hours ago
      I want to know what the "intelligence explosion" is, sounds much cooler than AGI.
      • adammarples 3 hours ago
        When AI gets so good it can improve on itself
        • peheje 1 hour ago
          Actually, this has already happened in a very literal way. Back in 2022, Google DeepMind used an AI called AlphaTensor to "play" a game where the goal was to find a faster way to multiply matrices, the fundamental math that powers all AI.

          To understand how big this is, you have to look at the numbers:

          The Naive Method: This is what most people learn in school. To multiply two 4x4 matrices, you need 64 multiplications.

          The Human Record (1969): For over 50 years, the "gold standard" was Strassen’s algorithm, which used a clever trick to get it down to 49 multiplications.

          The AI Discovery (2022): AlphaTensor beat the human record by finding a way to do it in just 47 steps.

          The real "intelligence explosion" feedback loop happened even more recently with AlphaEvolve (2025). While the 2022 discovery only worked for specific "finite field" math (mostly used in cryptography), AlphaEvolve used Gemini to find a shortcut (48 steps) that works for the standard complex numbers AI actually uses for training.

          Because matrix multiplication accounts for the vast majority of the work an AI does, Google used these AI-discovered shortcuts to optimize the kernels in Gemini itself.

          It’s a literal cycle: the AI found a way to rewrite its own fundamental math to be more efficient, which then makes the next generation of AI faster and cheaper to build.

          https://deepmind.google/blog/discovering-novel-algorithms-wi... https://www.reddit.com/r/singularity/comments/1knem3r/i_dont...
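To make those counts concrete: naive 2x2 multiplication takes 8 scalar multiplications, Strassen's 1969 trick takes 7, and applying it recursively to 4x4 blocks gives 7 * 7 = 49 versus the naive 64 - the record that AlphaTensor's 47-step scheme beat (over finite fields). The 2x2 base case:

```python
def strassen_2x2(A, B):
    """Multiply two 2x2 matrices with 7 multiplications instead of 8
    (Strassen, 1969). Recursing on 4x4 blocks yields 7 * 7 = 49 scalar
    multiplications versus the naive 64."""
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4,           m1 - m2 + m3 + m6]]

print(strassen_2x2([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```

The savings compound: at every level of recursion you trade 8 block multiplies for 7, which is exactly why shaving even one multiplication off a small base case matters at scale.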

    • Hamuko 4 hours ago
      I have noticed that Claude users seem to be about as intelligent as Claude itself, and wouldn't be able to surpass its output.
      • noduerme 40 minutes ago
        This made me laugh. Unfortunately, this is the world we live in. Most people who drive cars have no idea how they work, or how to fix them. And people who get on airplanes aren't able to flap their arms and fly.

        Which means that humans are reduced to a sort of uselessness / helplessness, using tools they don't understand.

        Overall, no one tells Uncle Bob that he doesn't deserve to fly home to Minnesota for Christmas because he didn't build the aircraft himself.

        But we all think it.

      • baq 32 minutes ago
        You seem to be very confused about what intelligence even is.
      • fragmede 1 hour ago
        You, of course, are smarter than them.