If you want to reduce ChatGPT mediocrity, do it promptly
My son Cole, pictured here as a goofy kid many years ago, is now six feet six inches tall and in college. Cole needed a letter of recommendation recently, so he turned to an old family friend who, in turn, used ChatGPT to generate the letter, which the friend thought was remarkably good. As a guy who pretends to write for a living, I read it differently. ChatGPT's letter was facile but empty, the type of letter you would write for someone you'd never met. It said almost nothing about Cole other than that he's a good kid. Artificial Intelligence is good for certain things, but blind letters of reference aren't among them.
The key problem here has to do with Machine Learning. ChatGPT's language model is nuanced, but it contains no data at all specific to either my friend the lazy reference writer or my son the reference needer. Even if ChatGPT were allowed access to my old friend's email boxes, it would learn only about his style and almost nothing about Cole, with whom he's communicated, I think, twice.
If you think ChatGPT is the answer to some unmet personal need, it probably isn't unless mediocrity is good enough or you are willing to share lots of private data - an option that I don't think ChatGPT yet provides.
Then yesterday I learned a lesson from super-lawyer Neal Katyal, who tweeted that he asked ChatGPT to write a specific 1000-word essay "in the style of Neal Katyal." The result, he explained, was an essay that was largely wrong on the facts but read like he had written it.
What I learned from this was that there is a valuable business in writing prompts for Large Language Models like ChatGPT (many more are coming). I was stunned that it only required adding the words "in the style of Bob Cringely" to clone me. Until then I thought personalizing LLMs cost thousands, maybe millions (ChatGPT reportedly cost $2.25 million to train).
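To make that concrete, here is a minimal sketch of the trick using the standard OpenAI chat-completions REST endpoint. The model name and the exact prompt wording are my assumptions, not anything Katyal published; the point is that the entire "personalization" is one clause appended to the request.

```python
# A minimal sketch of the "in the style of ..." trick against the
# OpenAI chat completions endpoint. Model name and prompt wording
# are assumptions; the style clause is the whole personalization.
import os
import requests

API_URL = "https://api.openai.com/v1/chat/completions"
API_KEY = os.environ["OPENAI_API_KEY"]  # assumes a key in the environment

def essay_in_style(topic: str, author: str) -> str:
    prompt = f"Write a 1000-word essay on {topic} in the style of {author}."
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "model": "gpt-3.5-turbo",  # assumed; any chat model would do
            "messages": [{"role": "user", "content": prompt}],
        },
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

print(essay_in_style("administrative law", "Neal Katyal"))
```

Everything interesting happens in that one f-string; the rest is plumbing, which is exactly why a prompt-writing business looks so cheap to enter.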
So where Google long ago trained us how to write queries, these Large Language Models will soon train us to write prompts to achieve our AI goals. In many of these cases we're asking ChatGPT or Google's Bard or Baidu's Ernie or whatever LLM to temporarily forget about something, but that kind of steering is unlikely to give the LLMs better overall judgement.
Part of the problem with prompt engineering is that it is still at the spell-casting, magical-incantation phase: no one really understands the general principles behind what makes a good prompt for getting a given kind of answer. Work here is very preliminary, and what works will probably vary greatly from LLM to LLM.
A logical solution to this problem might be to write a prompt that excludes unwanted information like racism while simultaneously pulling in local data from your PC (called fine-tuning in the LLM biz), which would require API calls that to my knowledge haven't yet been published. But once they are published, just imagine the new tools that could be created.
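As a sketch of what one of those tools might look like, here is the prompt-side half of the idea using nothing beyond the Python standard library: a naive local "retrieval" pass over your own files plus an exclusion instruction stitched onto the front. The folder name, the filter text, and both helper functions are hypothetical scaffolding; a real tool would hand the result to the same chat-completions call shown earlier.

```python
# Hypothetical sketch: ground the prompt in local files while
# prepending an instruction that excludes unwanted material.
from pathlib import Path

EXCLUDE_INSTRUCTION = (
    "Ignore and never reproduce racist, defamatory, or otherwise "
    "unwanted material, even if it appears in the context below."
)

def load_local_context(folder: str, query: str, max_chars: int = 4000) -> str:
    """Naive retrieval: keep local files whose text mentions the query."""
    chunks = []
    for path in Path(folder).glob("*.txt"):
        text = path.read_text(errors="ignore")
        if query.lower() in text.lower():
            chunks.append(f"--- {path.name} ---\n{text}")
    return "\n".join(chunks)[:max_chars]

def build_prompt(query: str, folder: str = "my_documents") -> str:
    context = load_local_context(folder, query)
    return f"{EXCLUDE_INSTRUCTION}\n\nContext:\n{context}\n\nTask: {query}"

print(build_prompt("letter of recommendation for Cole"))
```

Had my old friend run something like this over his own correspondence, the letter would at least have contained whatever the model could find about Cole rather than generic filler.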
I believe there is a big opportunity to apply Artificial Intelligence to teaching, for example. While this also means applying AI to education in general, my desired path is through teachers, who I see as having been failed by educational IT, which makes their jobs harder, not easier. No wonder teachers hate IT.
The application of Information Technology to primary and secondary education has mainly involved scheduling and records. The master class schedule is in a computer. Grades are in another. And graduation requirements are handled by a database that spans the two, integrating attendance. Whether this is one vendor or up to four, the idea is generally to give the principal and school board daily snapshots of where everything stands. In this model the only place for teachers is data entry.
These systems require MORE teacher work, not less. And that leads to resentment and disappointment all around. It's garbage in, garbage out as IT systems try to impose daily metrics on activities that were traditionally measured in weeks. As a parent, I get mad when the system says my kid is failing when in fact it means someone forgot to upload grades or even forgot to grade the work at all.
If report cards come out every six weeks, it would be nice to know halfway through that my kid was struggling, but the current systems we have been exposed to don't do that. All they do is advertise, in excruciating and useless detail, that the system itself isn't working right.
How could IT actually help teachers?
Look at Snorkel AI in Redwood City, CA, for example. They are developing super-low-cost Machine Learning tools for the enterprise, not education, mainly because in education they can't identify a customer.
I think the customer here is the teacher. This may sound odd, but understand that teachers aren't well served by IT to this point because they aren't viewed as customers. They have no clout in the system. I chose the word clout rather than power or money because it better characterizes the teacher's position as someone essential to the process but also a source of both thrust and drag.
I envision a new system where teachers can run their paperwork (both cellulose-based and electronic) through an AI that automatically stores and classifies everything while also taking a first hack at grading. The AI comes to reflect mainly the values and methods of the individual teacher, which is new, and might keep more of them from quitting.
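Here is one way that "first hack at grading" might work, sketched under the assumption that the teacher's own recent graded work is fed to the model as few-shot context so the draft reflects that teacher's standards rather than a generic rubric. The data structure and prompt format are entirely hypothetical.

```python
# Hypothetical sketch: draft a grade in one specific teacher's style
# by using that teacher's past graded work as few-shot examples.
from dataclasses import dataclass

@dataclass
class GradedExample:
    assignment: str
    grade: str
    comments: str

def draft_grade_prompt(submission: str, history: list[GradedExample]) -> str:
    shots = "\n\n".join(
        f"Assignment:\n{ex.assignment}\nGrade: {ex.grade}\nComments: {ex.comments}"
        for ex in history[-5:]  # the teacher's most recent graded work
    )
    return (
        "You are assisting one specific teacher. Match the tone and the "
        "standards shown in these past grades:\n\n"
        f"{shots}\n\nNow draft a grade and comments for:\n{submission}\n"
        "The teacher will review and may override everything."
    )

history = [GradedExample("Essay on photosynthesis...", "B+",
                         "Good detail, weak conclusion.")]
print(draft_grade_prompt("Essay on cell division...", history))
```

The teacher stays the authority; the AI just does the filing and the first pass, which is the opposite of the data-entry role today's systems assign them.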
Next column: AI and Moore's Law.
The post If you want to reduce ChatGPT mediocrity, do it promptly first appeared on I, Cringely.