How Claude Code works in large codebases

(claude.com)

59 points | by shenli3514 1 hour ago

8 comments

  • jwilliams 8 minutes ago
    > Claude Code navigates a codebase the way a software engineer would: it traverses the file system, reads files, uses grep to find exactly what it needs, and follows references across the codebase. It operates locally on the developer’s machine and doesn’t require a codebase index to be built, maintained, or uploaded to a server....

    > Agentic search avoids those failure modes. There's no embedding pipeline or centralized index to maintain as thousands of engineers commit new code. Each developer's instance works from the live codebase.
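    The quoted behavior amounts to a plain filesystem walk plus a regex search over the live working tree, with nothing to keep in sync between commits. A minimal sketch of that kind of agentic search tool (the function name and skip-list are illustrative, not Claude Code's actual tool interface):

```python
# Hypothetical sketch of index-free "agentic search": walk the live
# working tree and grep it on demand, like `grep -rn`, instead of
# querying a prebuilt embedding index.
import os
import re

def search_codebase(root, pattern, max_hits=50):
    """Return (path, line_number, line) hits for a regex in the live tree."""
    rx = re.compile(pattern)
    hits = []
    for dirpath, dirnames, filenames in os.walk(root):
        # Skip directories a code search would never want (illustrative list).
        dirnames[:] = [d for d in dirnames if d not in {".git", "node_modules"}]
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, encoding="utf-8", errors="ignore") as f:
                    for lineno, line in enumerate(f, 1):
                        if rx.search(line):
                            hits.append((path, lineno, line.rstrip()))
                            if len(hits) >= max_hits:
                                return hits
            except OSError:
                continue  # unreadable file: skip, as grep would
    return hits
```

    Every call re-reads whatever is on disk right now, which is the property the article is trading on: no pipeline can go stale.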

    The frame of "the way a software engineer would" and the conclusion seem at odds. I'd love to be schooled otherwise?

    I use autocomplete/LSPs all the time and they're useful. That's an index? Why wouldn't Claude be able to use one? Also, a "software engineer" remembers the codebase - that's essentially RAG. I have a lot of muscle memory for finding the file I need through an auto-completed CMD+P.

    It doesn't particularly need to be real-time across thousands of engineers -- just for the branch I'm on.

    It's rare that I'd be navigating a codebase by first-principles traversal. It would usually be in a new codebase, and in those cases it's definitely not what I'd call an optimal experience.

    • hibikir 1 minute ago
      Even if there is first-principles traversal of some parts of the codebase, there are other bits that definitely don't change, and where exploring every time is a massive waste of tokens. My arguments with Claude often have to do with making it explore a lot less, because I know better, and faster, than its slow, expensive navigation of things that basically never change.
    • khuey 1 minute ago
      The article does have an entire paragraph about LSPs and how Claude can use them.
  • thinkindie 42 minutes ago
    I don’t agree with the statement about codebase indexing: it works pretty well in IDEs like PhpStorm and the other JetBrains IDEs.
  • ares623 1 minute ago
    Lots of concepts. Release the harness that made it possible to port Bun to Rust in 9 days. That's what everyone really wants. Then everyone can go "do that but for this other goal".
  • belZaah 1 hour ago
    How very interesting. In an industry where things shift around in months, if not weeks, there’s apparently been not only enough time for clear patterns to emerge, but also for these patterns to prove successful on large codebases. What’s the success criterion? Didn’t delete the production database? Team velocity has increased? Codebase TTL has increased? The operations guys are happier?
    • giancarlostoro 1 hour ago
      > Didn’t delete production database?

      I still say that if this happens to you with AI tooling, it's a failure of both you and your org for giving a developer prod credentials that could nuke production resources. I don't think I've worked in a place that gave me this level of blind access.

      • nibbleyou 15 minutes ago
        I have only worked at startups, and I was an early engineer at both of them. I would always get high privileges within a short time, with access to create and delete resources. I don't think that's uncommon.
        • indentit 7 minutes ago
          But the correct way to do it is to have a separate account with more privileges, and to give the AI access only to your standard developer account.
      • belZaah 42 minutes ago
        Exactly. So is that level of obvious hygiene where the bar is, or is it somewhere else? What ticks me off is the audacity of blanket claims without even a remote attempt to state why these are said to be successful patterns and what success means. We’re just supposed to eat it up, because, you know, Claude.
  • tex0 17 minutes ago
    If the developer can have a local copy of the monorepo it's not a "large" codebase.
  • wood_spirit 41 minutes ago
    I’m super interested to know what the back and forth between models and tools really looks like in practice.

    Are there any much more detailed walkthroughs of how it works, how it decides which tools to use and which grep commands to run, and what the conversations actually look like?

    In the UI you see just enough to know it’s doing something but you don’t really see the jumps it’s making offscreen.

    • ralfhn 11 minutes ago
      Codex is open source, if you’re interested: https://github.com/openai/codex
    • weird-eye-issue 39 minutes ago
      You can easily inspect the full requests it makes to the API, which contain the full system prompt, tools, tool calls, etc.
      • sprobertson 19 minutes ago
        or easier, open ~/.claude/projects/[project]/[session].jsonl (excluding the system prompt)
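        Each line of that .jsonl is a standalone JSON object, so a schema-agnostic reader is enough to skim a session. A sketch (the record fields shown are assumptions, not a documented format):

```python
# Hypothetical sketch: skim a JSONL session transcript without assuming
# its schema. Yields each record's "type" field (if any) and its keys,
# which is enough to see the shape of the back-and-forth.
import json

def summarize_session(path):
    """Yield (record_type, sorted_keys) for each line of a JSONL file."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:
                continue  # tolerate blank lines
            rec = json.loads(line)  # one JSON object per line
            yield rec.get("type", "?"), sorted(rec)
```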
        • weird-eye-issue 2 minutes ago
          Doesn't really seem easier, and it's in a harder-to-read format.
  • Tsarp 58 minutes ago
    Wondering if enterprises have a modified version of CC that doesn't have to optimize to stop bleeding on fixed cost subscription plans.

    The article really does not align with the current sentiment. Everyone with a choice has mostly moved on to codex (ofc in this world all it takes is a model update/harness update to turn things around).

    CC is great at a lot of things, but it repeatedly misses reading crucial parts of the codebase, hallucinates about the work that was done, and has a bunch of other issues.

    • Reebz 28 minutes ago
      The influencer economy trades on hype, on frenzy, and ultimately, eyeballs. The more the better.

      They want you to feel like you’re missing out. They want you to switch. Being boring is far more productive. Pin your versions. Stick to stable releases and avoid the nightlies.

      The significant noise created by the 4.6 to 4.7 Opus transition caused some to interpret it as signal. Excluding certain genuine bugs, the noise about perceived quality falling dramatically was just that: noise. Influencers doing influencing turned it into “signal”. The reality was that if you had strong planning and spec-driven development, the impact ranged from manageable to non-existent.

      The vast majority of the people I know and work with have not switched off CC or their Max sub.

    • paustint 18 minutes ago
      I have a choice and have not moved to codex (100/mo personal, plus my employer pays for a subscription). I try codex here and there and it seems to go off the rails every time. I have had some good experiences with codex, but when I try to get something big accomplished it generally doesn't work out.

      But I may not have paid enough to get the full, real experience with codex.

      • viking123 2 minutes ago
        I use codex at home for 20 bucks a month, and the limits are very high relative to the price. Maybe the gravy train ends soon for these, and then it's probably on to OpenRouter Chinese models.

        At work it's CC or sometimes codex; personally I don't see much difference at all, and most normies will notice none. The cultists have their opinions.

    • sho 15 minutes ago
      > stop bleeding on fixed cost subscription plans

      What bleeding? Anthropic wants as much of that "bleeding" as possible. The interaction data gathered from genuine human CC subscription usage of their models goes directly into their RL training; it's invaluable, and they are more than happy to lose money on the inference to get it. That data is what xAI was recently willing to pay Cursor $10b to get.

      They want you to use Claude Code. They hate other UI surfaces like OpenCode purely because they lose control over that data: they're subsidizing the inference without getting what they actually want, the data. (They still get some of it, of course, but it's much less ergonomic for them; those tools often abstract away the subagent calls, for example.) OpenCode can collect that data themselves, so by allowing subscription use there, Anthropic sees itself as subsidizing another org getting that data. Hard no.

      And tools like OpenClaw are useless to them because they're mechanical and don't represent actual users interacting with the service - again, subsidizing but not getting the reward.

      It's all very simple once you understand their motivations.

    • periodjet 47 minutes ago
      > Everyone with a choice has mostly moved on to codex

      Ha!

    • Aeolun 53 minutes ago
      You must be using a different CC. Or what they’re writing here is correct, and it’s all thanks to the CLAUDE.md file that I only occasionally yell at Claude.
      • Tsarp 41 minutes ago
        Hmm, please share more. I have had the max CC sub since it came out and religiously follow all of Boris/Cat's advice, but I still struggle with it. Meanwhile, a really badly written AGENTS.md will still get the work done.
        • zarzavat 21 minutes ago
          Apologies but what is a Boris Cat?
          • polotics 5 minutes ago
            Boris Cherny and Cat Wu are the lead devs of CC at Anthropic who unsurprisingly talk their book and find so many ways to justify tokenmaxing.

            As the product they deliver is greenfield and in the newest of domain spaces, there is a serious halo effect to consider.

    • Analemma_ 9 minutes ago
      > Everyone with a choice has mostly moved on to codex

      You are deep in an information bubble, mostly driven by hype-train influencers with magpie attention spans.

    • SpicyLemonZest 36 minutes ago
      I think it's a good rule of thumb that if you find yourself saying everyone prefers this model or that model, you're in a bubble. I've made this mistake before: I used to go around saying everyone knew Claude was the only model for serious professional use, but I was wrong.
      • sigmar 25 minutes ago
        I always assume that people making those comments on HN are trying to convince others to switch to their model. Surely no one actually believes their friend circle is a representative sample of the hundreds of millions of people that use these LLMs?
      • viking123 11 minutes ago
        Anthropic has the best marketing for sure.

        Btw, the guy in charge of that stuff for Anthropic, Jack Clark, is the same guy who said GPT-2 was too dangerous to release. LMAO. That model could barely string a sentence together.

    • ghiculescu 25 minutes ago
      [dead]
  • jdw64 19 minutes ago
    [dead]