First, a necessary disclaimer: don’t use artificial intelligence language generators to solve your ethical quandaries. Second: definitely go tell those quandaries to this AI-powered simulation of Reddit because the results are fascinating.
Are You The Asshole (AYTA) is, as its name suggests, built to mimic Reddit’s r/AmITheAsshole (AITA) crowdsourced advice forum. Created by internet artists Morris Kolman and Alex Petros with funding from Digital Void, the site lets you enter a scenario and ask for advice about it — and then generates a series of feedback posts responding to your situation. The feedback does a remarkably good job of capturing the style of real human-generated responses, but with the weird, slightly alien skew that many AI language models produce. Here are its responses to the plot of the classic sci-fi novel Roadside Picnic:
Even leaving aside the weirdness of the premise I entered, the responses tend toward platitudes that don’t totally fit the prompt, but the writing style and content are pretty convincing at a glance.
I also asked it to settle last year’s contentious “Bad Art Friend” debate:
The first two bots were more confused by that one! Although, in fairness, lots of humans were, too.
You can find a few more examples on a subreddit dedicated to the site.
AYTA is actually the result of three different language models, each trained on a different data subset. As the site explains, the creators captured around 100,000 AITA posts from the year 2020, plus the comments associated with them. They then trained a custom text generation system on different slices of that data: one bot was fed only comments concluding the original posters were NTA (not the asshole), one was given only comments that reached the opposite verdict, and one got a mix that included both previous sets plus comments declaring nobody or everybody involved was at fault. Funnily enough, someone made an all-bot version of Reddit a few years ago that included advice posts, although it also generated the prompts, to markedly more surreal effect.
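The split itself is simple to reproduce in outline. Here’s a minimal sketch in Python, assuming the scraped comments sit in a JSON-lines file; the field names ("verdict", "body") and bucket labels are assumptions, since the creators haven’t published their pipeline:

```python
# Sketch of the three-way corpus split described above. The file format,
# field names, and bucket labels are all assumptions; the creators'
# actual scrape schema and training code aren't public.
import json
from collections import defaultdict

# Map Reddit's verdict shorthand onto the three training corpora.
VERDICT_TO_BUCKET = {
    "NTA": "nta",    # "not the asshole": comments defending the poster
    "YTA": "yta",    # "you're the asshole": comments condemning the poster
    "ESH": "mixed",  # "everyone sucks here"
    "NAH": "mixed",  # "no assholes here"
}

def split_corpora(path: str) -> dict[str, list[str]]:
    """Partition scraped AITA comments into three training corpora."""
    corpora = defaultdict(list)
    with open(path, encoding="utf-8") as f:
        for line in f:
            comment = json.loads(line)
            bucket = VERDICT_TO_BUCKET.get(comment.get("verdict", ""))
            if bucket is None:
                continue  # skip comments with no clear verdict
            corpora[bucket].append(comment["body"])
            if bucket != "mixed":
                # The third bot trained on both judgmental sets as well
                # as the no-fault / all-fault verdicts.
                corpora["mixed"].append(comment["body"])
    return dict(corpora)
```

Each corpus would then fine-tune its own copy of a base text generator, which is what makes the bots disagree so reliably: each one has only ever seen a single flavor of moral judgment.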
AYTA is similar to an earlier tool called Ask Delphi, which also used an AI trained on AITA posts (but paired with answers from hired respondents, not Redditors) to analyze the morality of user prompts. The framing of the two systems, though, is fairly different.
Ask Delphi implicitly highlighted the many shortcomings of using AI language analysis for morality judgments — particularly how often it responds to a post’s tone instead of its content. AYTA is more explicit about its absurdity. For one thing, it mimics the snarky style of Reddit commenters rather than a disinterested arbiter. For another, it doesn’t deliver a single judgment, instead letting you see how the AI reasons its way toward disparate conclusions.
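That side-by-side design is easy to picture in code: ask all three fine-tuned models the same question and print what comes back. A hypothetical sketch using Hugging Face’s transformers pipeline, with invented checkpoint paths since the creators haven’t released their models:

```python
# Hypothetical inference step: pose one scenario to all three bots and
# compare their verdicts. The checkpoint paths are invented for illustration.
from transformers import pipeline

bots = {
    name: pipeline("text-generation", model=f"./ayta-{name}")
    for name in ("nta", "yta", "mixed")
}

prompt = "AITA for entering the Zone to wish for my daughter's cure?"
for name, bot in bots.items():
    reply = bot(prompt, max_new_tokens=120)[0]["generated_text"]
    print(f"--- {name} bot ---\n{reply}\n")
```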
“This project is about the bias and motivated reasoning that bad data teaches an AI,” tweeted Kolman in an announcement thread. “Biased AI looks like three models trying to parse the ethical nuances of a situation when one has only ever been shown comments of people calling each other assholes and another has only ever seen comments of people telling posters they’re completely in the right.” Contra a recent New York Times headline, AI text generators aren’t precisely mastering language; they’re just getting very good at mimicking human style — albeit not perfectly, which is where the fun comes in. “Some of the funniest responses aren’t the ones that are obviously wrong,” notes Kolman. “They’re the ones that are obviously inhuman.”
Source: https://www.theverge.com/2022/4/20/23033694/are-you-the-asshole-ai-reddit-clone-art-project-ethics-aita