AI just crossed the barrier from making human tasks more efficient to giving us important insights that humans could NOT have produced. Plus, parents are suing a school over AI homework!
Read more below.

1. A cancer-detecting tool is actually the first step into the next phase of AI
The real breakthrough with Harvard’s new cancer-detecting AI model, “Chief,” isn’t just its 96% accuracy rate; it’s that the model is surfacing unique insights THAT ONLY THE AI CAN DETECT!
Think of Chief’s architecture as the ‘ChatGPT of Cancer’: it is fed massive amounts of historical data as its training data, then given a new image and asked, ‘Is there cancer here?’
Up until this point, every AI use case you’ve heard of has been about efficiency. A human could have done it, but the AI did it better or faster.
This use case is different. This is an AI giving us results that a human could not have given us. It’s the start of one of the promises we hoped for from this industry.
2. And, of course, parents are suing a school over AI usage!
We all knew it was coming! And actually, I welcome it. We need more opportunities for the courts to rule on what is happening with AI.
So, a student did his sociology homework using AI. There were no specific instructions not to use AI; the school just assumed it was implied. (Having raised a gaggle of teenagers, I’m surprised the school didn’t think the kids would look for a workaround!)
The work he submitted was graded as a C on its merit.
Once the teacher found out the work was done with the help of AI, the paper was downgraded to a D. And so, of course, the next logical course of action is a lawsuit.
The student and his parents argue that he did not commit plagiarism. The school is arguing that work done by AI is plagiarism (which surprises me, then, that the paper only went down to a D instead of a fail/disqualification?!).
We shall see what unfolds.
What is also of interest to me is that they chose plagiarism as the core of their argument. When we run our Premium AI Content service (for brands like Minted, NOOM, Petmeds), we run a plagiarism checker each time, and it’s not really a problem with good-quality output. We might generate a score of 7% plagiarism, but it’s usually flagging common phrases such as “be sure to check with your doctor before starting this medicine.” That isn’t the type of plagiarism any of us are actually worried about.
What schools REALLY want to know is whether the work was 100% human, or x% AI. Plagiarism is simply a proxy they know and understand.
What we do know, though, is that a lot of school rules are about to be updated!
If you need human-quality AI content, see how we do it for brands like NOOM.com HERE.