
How I Use AI To Help With Techdirt (And, No, It’s Not Writing Articles)

by
Mike Masnick

Let's start off this post by noting that I know that some people hate anything and everything having to do with generative AI and insist that there are no acceptable uses of it. If that describes you, just skip this article. It's not for you. Ditto for those who insist (incorrectly) that AI is nothing but a "plagiarism machine" or that training of AI systems is nothing but mass copyright infringement. I've discussed why all of that is wrong elsewhere.

Separately, I will agree that most uses of generative AI are absolute shit, and many are problematic. Almost every case I've heard of journalistic outfits using AI is an example of the dumbest fucking way to use the technology. That's because addle-brained finance and tech bros think that AI is a tool to replace journalists. And every time you do that, it's going to flop, often in embarrassing ways.

However, I have been using some AI tools over the last few months and have found them to be quite useful, namely, in helping me write better. I think the best use of AI is in making people better at their jobs. So I thought I would describe one way in which I've been using AI. And, no, it's not to write articles.

It's basically to help me brainstorm, critique my articles, and make suggestions on how to improve them.

As a bit of background, let me explain how we work on articles at Techdirt. We try to make sure that no article goes out into the world until it's been reviewed by someone other than myself. Most of that review is for grammar and typos, but it also covers other important editorial checks along the lines of "does everything I say actually make sense?" and "what things might people get mad about?"

A while back, I started using Lex.page. Some of what I'm going to describe below is available on free accounts, and some only on the paid "Pro" accounts. I don't know the current limits on free accounts, as I'm paying for a Pro account, and what's included in each tier may have changed.

Lex is an AI tool built with writers in mind. It looks kind of like a nice Google Docs. While it does have the power to do some AI-generated writing for you, almost all of its tools are designed to assist actual writers, rather than do away with their work. You can ask it to write the next paragraph for you, but I've never used that tool. Indeed, for the first few months I barely used any of the AI tools at all. I just like the environment as a standard writing tool.

The one feature I did use occasionally was a tool to suggest headlines for articles. If I thought my own headline ideas could be stronger, I would have it generate 10 to 15 suggestions. The tool rarely came up with one that was good enough to use directly, but it would sometimes give me an idea that I could take and adjust, which was better than my initial idea.

However, I started using the AI more often a couple of months ago. There's a tool called "Ask Lex" where you can chat with the AI (on a Pro account, you can choose from a list of AI models to use, and I've found that Claude Opus seems to work the best). I initially couldn't think of anything to ask the AI, so I asked people in Lex's Discord how they used it. One user sent back a "scorecard" that he had created, which he asked Lex to use to review everything he wrote.

I changed the scorecard around for my own purposes (and I keep fiddling with it, so it will likely change more soon), but the current version of the scorecard I use is as follows:

This is an article scorecard:

Does this article:

#1 have a clear opening that grabs the reader score from 0 to 3

#2 clearly explain what is happening from 0 to 3

#3 clearly address the complexities from 0 to 3

#4 lay out the strongest possible argument 0 to 3

#5 have the potential to be virally shared 0 to 3

#6 is there enough humor included in the article 0 to 3

Given these details, could you score this article and provide suggestions on how to improve ratings of 0 or 1?

I created a macro on my computer, so with a few keyboard taps, I can pop that whole thing up in the Ask Lex box and have it respond.
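To be clear, there's nothing magic about the "macro" part: it's just a fixed block of prompt text pasted in front of whatever draft I'm working on. If you wanted to recreate the same pattern outside of Lex, a rough sketch might look like the code below. This isn't what Lex does under the hood, and it's not something I actually run; it just sends the same kind of scorecard prompt, plus a draft, to Claude through Anthropic's Python SDK (the model name, token limit, and file name are placeholders).

import anthropic  # pip install anthropic; the SDK reads ANTHROPIC_API_KEY from the environment

# The scorecard quoted above, packaged as a single reusable prompt string.
SCORECARD_PROMPT = """This is an article scorecard. Does this article:
#1 have a clear opening that grabs the reader (score from 0 to 3)
#2 clearly explain what is happening (0 to 3)
#3 clearly address the complexities (0 to 3)
#4 lay out the strongest possible argument (0 to 3)
#5 have the potential to be virally shared (0 to 3)
#6 include enough humor (0 to 3)

Given these details, could you score this article and provide suggestions on how to improve ratings of 0 or 1?"""


def score_draft(draft_text: str) -> str:
    """Send the scorecard plus a draft to the model and return its critique."""
    client = anthropic.Anthropic()
    response = client.messages.create(
        model="claude-3-opus-20240229",  # placeholder model ID; any Claude model works here
        max_tokens=1024,
        messages=[{"role": "user", "content": f"{SCORECARD_PROMPT}\n\nArticle draft:\n\n{draft_text}"}],
    )
    return response.content[0].text


if __name__ == "__main__":
    with open("draft.txt") as f:
        print(score_draft(f.read()))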

I'll note that I don't really care that much about the last two items on the list, but I have them in there for two reasons. First, as a kind of Van Halen brown M&M check, to make sure the AI isn't just blowing smoke at me, but knows when to give me low ratings. Second, somewhat astoundingly, there are times (not always, but more frequently than I would have thought) when it gives really good suggestions to insert a funny line somewhere.

I'm going to demonstrate some of how it works, using the article I wrote last week about the legal disclaimer on the parody mashup of the Beach Boys singing Jay-Z's 99 Problems. Here's what it looked like when I ran my first draft against the scorecard:

[Screenshot: Lex's scorecard ratings and suggestions for the first draft.]

The responses here are fairly generic, but I can dig deeper. While it said my opening was good, I wondered if it could be better, so I asked it for suggestions on a better opening. And its suggestions were good enough that I actually did rewrite much of my opening. My original opening had jumped right into talking about "There I Ruined It," and Lex suggested some opening framing that I liked better. Of course, it also suggested a terrible headline, which I ignored. It's rare that I take any suggestion verbatim, but this time the opening was good enough that I used a pretty close version (more often, it just gets me thinking of better ways to rewrite the opening myself).

[Screenshot: Lex's suggested alternative openings for the article.]

Now, I know I said above that I don't much care about the humor score, but since this story involved a funny video, I did ask if it had any suggestions for making the article funnier. And... these were not good. Not good at all. So I basically ignored them all. Sometimes, though, it does come up with suggestions that at least get me to add an amusing line or two to a piece. Even though they weren't good for this article, I figured I should share them here so you get a sense of how it doesn't always work well, but at least gets me thinking about things.

[Screenshot: Lex's humor suggestions, none of which made the cut.]

Somewhat amusingly, when I ran this very article through the same process I'm discussing here, it suggested adding "more personality" to the piece. I asked it if it had suggestions on where, and its top suggestion was to "lean into the absurdity of some of the AI suggestions" in this part, but then concluded with an awful joke.

[Screenshot: Lex suggesting I lean into the absurdity of its own suggestions, capped with an awful joke.]

So, yeah, it's suggesting I joke about how shit its jokes are. Great work, AI buddy.

I also will sometimes ask it for better headlines (as mentioned above). Lex has a built-in headline generator tool, but I've found that doing it as part of the "Ask Lex" conversation makes it much stronger. For the article we're discussing here, it didn't generate any good suggestions, so I ignored them. However, I will admit that it came up with the title of the follow-up article: "Universal Music's Copyright Claim: 99 Problems And Fair Use Ain't One." That was all Lex. My original was something much more boring.

Also, just this weekend, I added a brand-new macro, which I like so far, in which I ask it to generate other headline ideas based on some criteria, and then ask it to compare those to the headline I came up with myself. I've only been using this one for a day or two, and didn't use it on the fair use article last week, but here's what it said about this very article you're reading now:

[Screenshots: Lex's alternative headline ideas and its comparison with my own headline.]
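I'm not going to reproduce the exact wording of that macro here, but the shape of it is simple: one prompt asking for alternative headlines against some criteria, and a follow-up asking it to compare those with my own. A hypothetical version, in the same spirit as the sketch above, might look like this (the specific criteria here are placeholders, not my real ones):

# Hypothetical reconstruction of the two-step headline macro. The criteria are
# placeholders; swap in your own. These prompts could be sent as two turns of one
# conversation using the same messages.create() pattern shown in the scorecard sketch.
HEADLINE_IDEAS_PROMPT = (
    "Suggest 10 alternative headlines for the article below. "
    "They should be accurate, specific, and written in Techdirt's voice."
)

HEADLINE_COMPARE_PROMPT = (
    "Now compare those suggestions to my current headline: '{current_headline}'. "
    "Which is strongest, and why?"
)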

Then my next step is another macro I created as a kind of gut check. I ask it to help me critique the article: which points are the weakest and could be made stronger, which points are the strongest and could be emphasized more, and which points readers might get upset about and should be improved. Finally, I ask it if anything is missing from the article.
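As with the headline macro, the exact phrasing matters less than the questions being asked. Written out, a hypothetical version of that gut-check prompt might be:

# A hypothetical write-out of the gut-check macro described above; the four
# questions come from the description in this article, the exact phrasing does not.
GUT_CHECK_PROMPT = """Please critique the article below:
1. Which points are the weakest, and how could they be made stronger?
2. Which points are the strongest, and could they be emphasized more?
3. Which points might readers get upset about, and how should I improve them?
4. Is anything missing from the article?"""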

[Screenshots: Lex's gut-check critique, covering weak points, strong points, likely objections, and what's missing.]

Again, I don't always agree with its suggestions (including some of the ones here), but it often makes me think carefully about the arguments I'm making and how well they stand up. I have strengthened many of the things I say based on responses from Lex that simply pushed me to think more carefully about what I'd written.

Occasionally I'll ask it for other suggestions, such as a better metaphor for something. When I wrote about Allison Stanger's bonkers congressional testimony a couple weeks ago, I was trying to think of a good example to show how silly it was that she thought Decentralized Autonomous Organizations (DAOs) were the same thing as decentralized social media. I asked Lex for suggestions on what would highlight how absurd that mistake is, and it gave me a long list of suggestions, including the one I eventually used: "saying 'social security benefits' when you mean 'social media influencers.'"

Finally, after I go through all of that, I also use it for some basic editing help. Recently, Lex introduced a nice feature called "checks," which will "check" your writing and suggest edits on a variety of factors. Personally, the only ones I've found useful so far are the "Grammar" check and the "Readability" check.

[Screenshot: Lex's "checks" panel.]

I've tried all the rest, and don't currently find them that useful for my style of writing. The grammar check is good at catching typos and extra commas, and the readability check is pretty good at getting me to chop up some of the run-on sentences that my human editors get frustrated with.

[Screenshot: results from the grammar and readability checks.]

I do want to play more with the "Audience" one, but my attempts to explain to it who the Techdirt audience is haven't quite worked yet. The team at Lex tells me they're working to improve it.

There are a few more things, but that's basically it. For me, it's a brainstorming tool and a kind of "gut check" that helps me review my work and make it as strong as it can be before I hand it off to my human editors. I feel like I'm saving them time and effort as well by giving them a more complete version of each story I submit (and hopefully leaving them less frustrated about having to break up my run-on sentences).

The important points are that I'm not trying to replace anyone, I'm certainly not relying on it to do much actual writing, and I know that I'm going to reject many of the things it suggests. It's basically just another set of eyeballs willing to look over my work and give me feedback. And it does so quickly, and is less sick of my writing quirks.

It's not revolutionary. It's not changing the world. But, for me, personally, it's been pretty powerful, just in helping me to be a better writer.

And yes, this article was reviewed with the same tools, which obviously prompted me to include one of its suggestions in that screenshot above. I'll leave the other suggestions it made, and that I took, up to your imagination.
