Do another 2 day blackout. That’ll show 'em.
This article is grossly overstating the findings of the paper. It’s true that bad generated data hurts model performance, but that’s true of bad human data as well. The paper used OPT-125M as its generator model, a very small research model with fairly low-quality and often incoherent outputs. The higher-quality generated data that makes up the majority of generated text online is far less of an issue. Using generated data to improve output consistency is a common practice for both text and image models.
Its size makes it basically useless. It underperforms even models in its active weight class. It’s nice that it’s available, but Grok-0 would have been far more interesting.
I feel like the whole Reddit AI deal is a trap. If any real judgment comes down about data use, Reddit is an easy scapegoat. There was basically nothing stopping them from scraping the site for free.
I got locked out of my now 8+ year old account because I had set it up with an old ISP-provided email, which has since been deactivated. I can’t migrate because I have to verify with the email, and I can’t change the email without setting up security questions, which also requires the email. Support can do nothing.
The “AI PC” specification requires a minimum of 40 TOPS of AI compute, which is over double the 18 TOPS in the current M3s. Direct comparison doesn’t really work, though.
What really matters is how it’s made available for development. The Neural Engine is basically a black box. It can’t be incorporated into any low-level projects because it’s only exposed through a high-level Swift API. Intel, by comparison, seems to be targeting PyTorch acceleration with its libraries.