Ask HN: When did you move from AI agentic loops to simpler deterministic system?

The industry is increasingly moving toward complex, autonomous agentic loops and feedback chains. These obviously come with significant latency, non-determinism, low accuracy, and cost.

I'm interested in hearing from engineers who have moved in the opposite direction.

At what point in your product lifecycle did you decide that the agentic approach was the wrong tool for the job?

What was the specific failure mode (reliability, cost, latency, maintainability) that pushed you to replace an agentic loop with a more deterministic system/pipeline?

7 points | by laxmena 2 days ago

8 comments

  • tstrimple 2 days ago
    I tend to draw the line at automating the LLM to respond to things. If it's responding to some sort of external source, that source is usually consistent enough that I'd rather have the LLM create a script to parse the data and do that automatically. I've got a job search tool that I built recently using Claude Code. CC created scripts to scrape certain websites and scheduled them using native OS schedulers. The results get parsed and dropped into a SQLite database. No LLM is involved in the automated portion of this process. I've got some general status scripts which push details about the current health state of my servers and apps, and which also alert me when job listings reach some defined threshold. At that point I use the LLM to look through the new jobs and categorize them based on work I'd find interesting, giving me a prioritized list.
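
    The pipeline described above could be sketched roughly like this (the table layout, column names, and alert threshold are my assumptions for illustration, not the author's actual code):

```python
import sqlite3

# Hypothetical sketch of the scrape -> SQLite -> threshold-alert flow.
# Schema and threshold are assumptions; no LLM runs in this automated path.
THRESHOLD = 10

def store_listings(db_path, listings):
    """Insert scraped job listings; duplicates are skipped via the URL key."""
    conn = sqlite3.connect(db_path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS jobs ("
        "url TEXT PRIMARY KEY, title TEXT, seen_at TEXT DEFAULT CURRENT_TIMESTAMP)"
    )
    with conn:  # commits on success
        conn.executemany(
            "INSERT OR IGNORE INTO jobs (url, title) VALUES (?, ?)",
            [(j["url"], j["title"]) for j in listings],
        )
    total = conn.execute("SELECT COUNT(*) FROM jobs").fetchone()[0]
    conn.close()
    return total

def should_alert(count, threshold=THRESHOLD):
    # Only past this point would a human (or an LLM categorization pass) look.
    return count >= threshold
```

    The point being: the scheduler, the scraper, and this storage step are all deterministic; the LLM only ever reads the accumulated rows after an alert fires.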

    If all LLM tools disappeared tomorrow, all of my scripts and processes developed with an LLM would continue to work without a hiccup. If Anthropic went out of business tomorrow, I'd lose nothing switching to another provider, because I don't have to "trust" agentic operations in automated processes. They are always overseen by me and they are rarely creating things I couldn't have created myself. It's just much faster to iterate with these tools.

    • laxmena 1 day ago
      > If all LLM tools disappeared tomorrow, all of my scripts and processes developed with an LLM will continue to work without hiccup.

      This is a really pragmatic philosophy and I think it's underappreciated. Using the LLM as a development accelerator rather than a runtime dependency gives the best of both worlds.

  • mickelsen 2 days ago
    When you have a well-defined flow, like transaction processing, an agentic loop simply doesn't scale. But AI can then be used very nicely for analysis, alerts, and investigating failures of such processes. Agents can also be used to prepare a transaction package that needs more human input, like a customer service case, but again with clearly defined outcomes. At least that's what I've seen in my limited experience consulting for a local online retailer.
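
    A minimal sketch of that split, with hypothetical callback names (the point is that the transaction outcome is decided deterministically, and the AI call only appears on the failure/investigation path):

```python
def process_transaction(txn, validate, execute, analyze_failure):
    """Deterministic happy path; the (hypothetical) AI call runs only on failure.

    `validate` and `execute` are plain deterministic functions;
    `analyze_failure` stands in for an LLM-backed investigation step.
    """
    if not validate(txn):
        return {"status": "rejected"}
    try:
        execute(txn)
        return {"status": "ok"}
    except Exception as exc:
        # AI is used here for analysis only, never to decide the outcome.
        return {"status": "failed", "analysis": analyze_failure(txn, exc)}
```

    Note that `analyze_failure`'s result is attached for a human to read; it never feeds back into the control flow.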
    • laxmena 1 day ago
      That's exactly the process I follow now.

      I look at the traces of agent execution and use them as feedback to extract common patterns. The common patterns are extracted out as scripts or skills.

      So the agent doesn't have to figure out how to do things from scratch, saving a considerable amount of tokens and latency.
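
      The extraction step could look something like this (a sketch under my own assumptions: traces are represented as lists of tool-call names, and recurring n-grams are the candidates to freeze into a script or skill):

```python
from collections import Counter

def frequent_tool_sequences(traces, n=3, min_count=2):
    """Find tool-call n-grams that recur across agent traces.

    `traces` is a list of per-run tool-call name lists. Sequences seen at
    least `min_count` times are candidates to replace with a fixed script,
    so future runs skip the agent's from-scratch planning for that path.
    """
    counts = Counter()
    for trace in traces:
        for i in range(len(trace) - n + 1):
            counts[tuple(trace[i:i + n])] += 1
    return [(seq, c) for seq, c in counts.most_common() if c >= min_count]
```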

      I also came across this paper recently: https://arxiv.org/abs/2603.25158

      It does exactly the same thing: extracts traces and converts them into skills for agents to use.

  • sminchev 1 day ago
    Everything is based on the requirements and available resources. One of our clients decided that calling the AI so often takes time and money, and that this does not work for him.

    AI can give suggestions, not decisions. If you want decisions and responsibility to be taken, use real people.
