It seems that TechCrunch, which hasn't been a strong source of news since around 2014-15, is now just sending out AI text:
First, the title: "Medicare's new payment model is built for AI. Most of the tech world has no idea" is a classic AI tell. The byline is the editor-in-chief's.
Em-dashes everywhere, including in this quote, somewhat unusually: “The best solution wins, which, in regulated industries like healthcare — that’s not been the case.”
Oddly-short paragraphs: "That payment structure is the real news."
Rule of threes: "Pair Team launched in 2019 with a specific kind of patient in mind: people managing chronic conditions who were also dealing with unstable housing, too little food, or lack of transportation"
This whole paragraph: "There are real risks. Participants are feeding extraordinarily sensitive patient data — intimate conversations about housing and diseases and mental illness — into a federal infrastructure with a documented history of breaches, including exposed Social Security numbers. For the vulnerable populations ACCESS is designed to serve, that's not an impractical concern."
---
I haven't opened a TC article in years and I think I'll return to that practice.
I think there's an ongoing conversation about whether we should accept all LLM-generated text without commentary.
I write this comment because I have some sympathy for a Show HN with AI-assisted writing, but I will not spend time enriching TechCrunch's use of machine-generated text any more than I would scroll through an ad block at the end of any other article.
These are also the markers of human journalists who write daily; journalism is the reason AI acquired these habits. Gemini says this article is probably not generated by AI, particularly because it has original quotes: https://gemini.google.com/share/ba48849a15a9
I got an LLM to analyse all of my messages and e-mails since the launch of Gmail to work out my writing style, and it says I heavily favour em dashes. I used to work in typesetting, press, and publishing; I even use — in HTML when I have to write it nowadays. The em dash is not an LLM thing; it's just that most people don't know how to use it. It also said I'm wry. Go figure.
Language is leaky, it gets just about everywhere. Some LLM goes and spills a bunch of emdashes and subordinate clauses all over a billion folks’ browsers and a bunch of them— especially those that may come into contact with a lot of language for a living— writers, for example— and they soak up a bit of it themselves and smear it all around.
Put another way, look up the Great Vowel Shift. That happened over a longer span of time, but then again the contact between different speakers wasn't as constant as every day on the internet. It's just what happens, how things spread. No different from typical memes, and maybe to a greater degree.
Isn't the first em dash taken from an interview that the writer did with the subject over Zoom? I think using an em dash to punctuate a broken or partial sentence like that is pretty standard journalistic practice when you don't want to modify the original quotation (e.g., by denoting a paraphrase with brackets), and definitely not an AI tell.
The other uses are honestly pretty standard rhetorical patterns; they do not seem especially AI-flavored to me.
I run a YC startup that was accepted to Medicare ACCESS.
Historically, insurance has paid for activity: time spent in visits, RVUs generated, and minutes logged. This was a reasonable starting point, but the flaw is that there's no strong incentive to be efficient.
ACCESS is explicitly a "deflationary" approach. Medicare has set the payment rates high enough to be viable for startups, but low enough that you have to use software (including AI) to deliver a large part of your program.
So Medicare has basically created economic incentives to reward software without prescribing the exact shape of the programs. I thought it was a really interesting approach and builds on 15 years of lessons from CMMI (Medicare's innovation group).
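A toy sketch of the economics described above, contrasting activity-based billing with a fixed per-patient rate. All rates, RVU counts, and costs here are invented for illustration, not actual CMS figures:

```python
# Hypothetical illustration: fee-for-service pays per unit of clinician
# activity, while a fixed per-patient rate rewards whoever delivers the
# program more cheaply (e.g. with software).

def fee_for_service_revenue(rvus: float, conversion_factor: float) -> float:
    """Activity-based billing: revenue scales with work performed."""
    return rvus * conversion_factor

def fixed_rate_margin(rate_per_patient: float, cost_per_patient: float) -> float:
    """Fixed-rate model: margin scales with efficiency, not activity."""
    return rate_per_patient - cost_per_patient

# Under fee-for-service, halving clinician time halves revenue...
high_touch = fee_for_service_revenue(rvus=4.0, conversion_factor=33.0)
low_touch = fee_for_service_revenue(rvus=2.0, conversion_factor=33.0)
print(high_touch, low_touch)  # 132.0 66.0

# ...while under a fixed rate, cutting delivery cost goes straight to margin.
manual = fixed_rate_margin(rate_per_patient=200.0, cost_per_patient=180.0)
automated = fixed_rate_margin(rate_per_patient=200.0, cost_per_patient=90.0)
print(manual, automated)  # 20.0 110.0
```

The "deflationary" framing follows from the second model: if the fixed rate is set below what a purely human-delivered program costs, only operators who automate part of the work have a positive margin.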
I would maybe modify this to say there is a strong incentive to be efficient: you only make so much money per encounter, per DRG visit to the hospital, etc. So the pressure from "management" on a lot of us clinicians is to see more people per day, make each hospital visit as short as possible, and so on. Medicaid providers now see something like 50-60 patients a day because the per-patient reimbursement is relatively low. But there isn't as much incentive for outcomes. I think CMS has tried that in the past, with varying success. Whether this new mousetrap will work, who knows.
The existing CPT codes (roughly) pay in proportion to physician time (RVUs). So I wouldn't say there's an incentive toward delivering care efficiently; rather, hospital management wants to maximize billable hours.
>rewards health outcomes rather than required activities… earn the full amount only when patients meet measurable health goals, like lower blood pressure or reduced pain
They'll just start cherry-picking their patients, finding ways to squeeze out the people just that little bit lower on the prognosis curve. Or at least that will be the risk in a setup like this.
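The selection incentive this comment describes can be sketched as an expected-value calculation. All payment rates, costs, and probabilities below are invented for illustration:

```python
# Hypothetical sketch: when part of the payment is contingent on a health
# goal (e.g. lower blood pressure), a patient's expected profitability
# depends on their prognosis, which creates a cherry-picking incentive.

def expected_revenue(base: float, outcome_bonus: float, p_meets_goal: float) -> float:
    """Expected payment when the bonus is paid only if the goal is met."""
    return base + outcome_bonus * p_meets_goal

COST = 120.0  # assumed flat cost of serving any patient

# A patient likely to hit their target vs. one with a worse prognosis:
# the sicker patient is barely worth enrolling in expectation.
good_prognosis = expected_revenue(base=100.0, outcome_bonus=80.0, p_meets_goal=0.75) - COST
poor_prognosis = expected_revenue(base=100.0, outcome_bonus=80.0, p_meets_goal=0.25) - COST
print(good_prognosis, poor_prognosis)  # 40.0 0.0
```

Risk-adjusting the rates by patient acuity is the usual countermeasure, but only to the extent the adjustment actually tracks prognosis.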
One might argue that that's the goal. There's the approach we've taken of trying to help people, and then there's the approach some people want, which is to treat every problem as if it is an entirely individual problem and treatment has to be earned by trying to will yourself out of the problem.
Medicare is a government-run insurance program, so this is one of the few cases where a private insurance company wouldn't receive data.
(There is such a thing as Medicare Advantage, where a patient can choose to put their Medicare dollars toward private insurance, but it's not part of the initial launch of this program.)
> The first call that shifted his thinking was with a 67-year-old woman living out of her car, managing PTSD and congestive heart failure. She spoke with Flora for over an hour. "It was both incredible and depressing," Batlivala told me. "Flora was probably the only 'person' she'd talked to in weeks about her situation." Now, hourlong conversations with Flora are routine. "That's the companionship piece," he said. "And it turns out that is truly an intervention."
People don't seem to realize both that this is coming and that, before long, people will be defending AI "persons" for this reason (OpenAI is already complaining about people doing this). Nobody's going to deliver this level of care using humans. It's not going to happen.
A lot of people needing care are deeply isolated and will be of the opinion that AI changes that.
>The company's premise was that you can't improve health outcomes without addressing the full context of someone's life
They are absolutely correct about this mathematically: you can't solve problems you don't have data for.
The question is what organization would I trust with the full context of my life. None. Zero.
Future headline: "Consumer warning: the panopticon(tm) product is embedded into your care plan; insurance is only available for panopticon subscribers."