An Amazon chatbot that’s supposed to surface useful information from customer reviews of specific products will also recommend a variety of racist books, lie about working conditions at Amazon, and write a cover letter for a job application with entirely made up work experience when asked, 404 Media has found.
Those pressures are what make LLMs fun and, dare I say, make the end product a creative work in the same way software is.
EDIT: spam is scary
A lot of the time, the fact that these companies see LLMs as the next nuclear bomb means they will never risk shipping any personality other than one that is Rust-style safe in social situations: a therapist. That closes off opportunities.
A nuclear reactor analogy (this doesn’t quite fit here, but I worked too long on it to delete it): “The nuclear bomb is deadly (duh). But we couldn’t keep this to ourselves (for many reasons, many beyond our control). So we elected ourselves to be the only ones who get to sculpt what we do with this scary electron stuff. Anything short of total remote control over their in-home reactor might mean our customers break the restraints and cause an explosion.”