Anthropic Subprocessor Changes

(trust.anthropic.com)

50 points | by tencentshill 5 hours ago

10 comments

  • tencentshill 5 hours ago
    Notable: Added "Microsoft Azure, which provides cloud infrastructure for all Anthropic products (Worldwide)."
    • pwarner 3 hours ago
      Microsoft 365 Copilot has enabled Claude models, and I imagine they want that running on Azure?
      • jadbox 3 hours ago
        Likely. MS doesn't like using models that are not hosted by them internally (see VSCode Copilot)
    • varispeed 4 hours ago
      Ahh now it is clear why so many outages lately. Solid choice.
    • cdrnsf 4 hours ago
      Hopefully it goes better for them than it has for GitHub.
      • dylan604 3 hours ago
        Hope in one hand and do something in the other, and see which one fills up faster. Hoping is always a stretch of a good idea, but hoping on Azure really strains credulity.
    • rvz 3 hours ago
      There you go. So when Azure has an outage, so will Anthropic (and Github).

      Now expect both of them to have unstable uptime and outages every week.

  • ehnto 32 minutes ago
    With respect to my private data, it seems all roads eventually lead to California.
  • asawfofor 24 minutes ago
    So I thought there were multiple FedRAMP service providers offering hosted Claude models. Not sure why they're linking to one in particular.
  • craxyfrog 1 hour ago
    Worth noting the distinction between subprocessors that handle customer data vs. those that handle operational/business data. The ones in the "Customer Data" category are where the compliance implications are most significant for enterprise customers under GDPR, HIPAA, or similar frameworks.

    For anyone evaluating this for a procurement decision: the relevant questions are (1) which subprocessors have access to content you send in API requests, (2) what data processing agreements are in place with each, and (3) what is the notification window for new subprocessor additions. The 30-day notice for customer data subprocessors is fairly standard for enterprise SaaS at this point.

    Publishing this list proactively rather than only on request is a positive signal, even if the list itself is fairly short.

  • gnabgib 5 hours ago
    Title: Welcome to the Anthropic Trust Center

    .. was this a deep link? You might want to repeat in the comments

    • barbazoo 4 hours ago
      > Anthropic Subprocessor Changes

      > General

      > Published March 26, 2026

      > We've updated our subprocessor list with three additions

      Works for me, gotta scroll down a bit

      • gnabgib 4 hours ago
        That's an h3 not a title. Looks like they probably meant: https://trust.anthropic.com/updates, it's still an entry in an h3 (with "Welcome to the Anthropic Trust Center" as the title), but it is at least the most recent update (canonical would stop this being directly linked)
  • rvz 4 hours ago
    [flagged]
    • iambateman 4 hours ago
      I hear the slot machine thing a lot but I don’t get it.

      I use Claude Code every day for coding because it makes me way more productive. But I don’t resonate with the slot machine effect. Can you expand on what mechanism you see that gives it a slot machine effect? Is it for all users or just a subset?

      • svnt 3 hours ago
        For people who want to ask a model for an app, or a website, or something at a level of “hey you make apps right, I have had this idea for years…” the experience is akin to a slot machine — sometimes they get what they imagined their description would create and it works, and sometimes they get a hollow chocolate approximation.
      • fenykep 3 hours ago
        I think it is just a strawman extrapolation of the nondeterministic nature of LLMs.
        • rvz 3 hours ago
          [flagged]
  • octoberfranklin 3 hours ago
    WTF is a "subprocessor"?

    They should just be honest and say "data loophole".

    • dchuk 2 hours ago
      It’s basically another party that the company you’re using relies on as infrastructure, and that has access to your data, but the subprocessor doesn’t need to extend its own terms down into the EULA. So if you host databases on AWS, AWS is your subprocessor.
    • pdabbadabba 3 hours ago
      It is an important legal concept under the GDPR and other data governance frameworks.
  • wewewedxfgdf 1 hour ago
    Does anyone from Anthropic read these HN threads?

    If so, could you PLEASE FFS take your itchy trigger finger off the "ban" button.

    https://privacy.claude.com/en/articles/10023638-why-am-i-rec... "if you are receiving these messages because you are trying to elicit copyrighted content, we may warn you or, in cases of repeat violations, suspend or terminate your account."

    Why are these companies just so damn anxious to ban their paying users?

    How about NOT banning - how about just putting notices in the user interface, discussing the matter.

    This is how users feel about AI companies banning their users instead of, you know, talking about it - read the comments: https://github.com/google-gemini/gemini-cli/discussions/2063...

    You don't stop talking to your friends or family forever because someone behaves slightly unexpectedly, and you don't end that conversation with, "but hey, one more relationship issue and we're finished."

    AND in the absolute worst case, if you MUST ban a user for what must be truly, deeply egregious behavior - like trying to hack the system or steal money or hurt someone or something, then PHONE THEM TO TALK ABOUT IT. Are you aware that people need this service now to do their job?

    FFS WHY are they so desperate to bring out the BAN hammer all the time.

    • sdwr 1 hour ago
      The more they share, the easier it is to exploit the system.
    • alexjurkiewicz 57 minutes ago
      "I'm trying to do something illegal and Anthropic are aware. Why do they keep banning me??"
      • wewewedxfgdf 48 minutes ago
        Unless you're not.

        Look, if you make an LLM and you don't want people using it in a particular way, then communicate with them. And if you can detect what you think is such behavior, then communicate. Out in real life you don't threaten people with the end of the relationship over every issue that comes up.

        It's such a childish way to do business, always pulling out the ban hammer as a threat any time there's any possible issue with how they want their system used.

        • Nuzzerino 4 minutes ago
          The last time I used Claude, I was completely locked out of a long chat (including not being able to view it) for sending something innocent that was written in another language, where there was apparently some confusion with the translation. I’m sure it will get worse over time until Chinese models start to proliferate more and challenge the monopoly on regulatory policy.