My head is swirling after I read this article in The Atlantic. The title is:
Talking to AI Might Be the Most Important Skill of This Century
There’s so much in this story that I had some difficulty trying to condense it all. (Mandatory “go read the whole thing, it’s worth it” call to action, here.) I even asked ChatGPT to summarize it, and its failure, unsurprisingly, proved virtually every point in writer Charlie Warzel’s piece. So since ChatGPT couldn’t do the job, I’m going to pull out the pieces I think matter most for libraries, in no particular order.
- Without the right prompt, AI fails to provide what someone might be looking for. This is probably a surprise to no one, especially librarians. If you remember the days before Google, you know exactly how this tended to play out. Google became dominant in large part because of its inherent ability to accept natural language queries.
- A small industry is now popping up to provide people with the correct, detailed prompts to get what they want when interacting with AI. The people doing this work are referred to as “prompt engineers.”
- Prompt engineers aren’t just people who write queries to be directed to an AI. They also tend to have a great deal of technical expertise and a deep understanding of how artificial intelligence and natural language can intersect.
- Prompt engineers don’t work for free. Which, of course, is why there’s an industry building around this type of work. They also may be specialists in a subject area so that they can write more accurate prompts; using the terminology and knowledge of that subject means the AI has a better frame of reference and more specificity to work with.
- Some teachers and professors are already teaching students to write prompts, rather than to rely on what ChatGPT creates. “In one of his new lessons, Mollick asks his class to imagine ChatGPT as a student and to teach the chatbot by prompting it to write an essay about a particular class concept. Like a professor during office hours, the students must help the AI refine its essay until it appears to have sufficient mastery of the subject.”
This quote really got me thinking: “There are prompts that promise to generate new sports-team logos, and text hacks with names like Sentence Expander. For $3.99, Book Summarizer promises a prompt that will help ‘extract the essential information and takeaways from a book.’” Not the “it can summarize a book” part. The part that says “For $3.99.” This cottage industry, although a side hustle for many, is essentially monetizing the search for information.
The article does point out that this might be a temporary situation, as AI learns to deal more effectively with simple and natural language questions. However, even if that’s the case, I do wonder how this might change people’s perception of how to go about finding information. “Oh, you have to pay money to get the good stuff” could easily become a norm. Libraries already hear “How much does it cost to rent a book?” What will we do if people start asking “How much will it cost for you to make this AI answer my question?”
This potential normalizing of the monetization of information retrieval would certainly widen the digital divide. In addition, where does that leave public perceptions of what librarians can or can’t do? Will they be viewed as being savvy enough to write good prompts? Would they even be viewed as helpful with obtaining information?
There are, of course, no good answers to these questions…yet. In part, I think it’s going to depend a great deal on how long it takes AI to get to the point where it can deal with inept or clumsy prompting. If that takes a long time, the situation becomes all the more worrisome.
However, I think that librarians are actually situated to be…well…good at this. Maybe we don’t call ourselves “prompt engineers,” but crafting the optimal search parameters is something we often excel at. Let’s publicize that. Let’s get ahead of things for a change, and make the public aware of just how relevant librarian skills already are.
Marketing people, get on this, maybe?