By Joe Arney
When tools like ChatGPT entered the mainstream last winter, it was a moment of reckoning for professionals in every industry. Suddenly, the artificial intelligence revolution was a lot more real than most had imagined. Were we at the dawn of an era in which professional communicators were about to become extinct?
Almost a year after ChatGPT's debut, we're still here, but still curious about how to be effective communicators, creators and storytellers in this brave new world. To examine what role CMCI plays in ensuring students graduate prepared to lead in a world where these tools are perhaps more widely used than understood, we invited Kai Larsen, associate professor of information systems at CU's Leeds School of Business and a courtesy faculty member in CMCI, to moderate a discussion with associate professors Casey Fiesler, of information science, and Rick Stevens, of media studies, about the ethical and practical uses of A.I. and the value of new and old skills in a fast-changing field.
This conversation was edited for length and clarity.
"A.I. can seem like magic, and if it seems like magic, you don鈥檛 understand what it can do or not do.鈥澨
– Casey Fiesler
Larsen: It's exciting to be here with both of you to talk a bit about A.I. Maybe to get us started, I can ask you to tell us a little about how you see the landscape today.
Fiesler: I think A.I. has become a term that is used so broadly it barely has any meaning anymore. A lot of the conversation right now is around generative A.I., particularly large language models like ChatGPT. But I do see a need for some precision here, because there are other uses of A.I. that we see everywhere. It's the recommender system deciding what you see next on Facebook; it's a machine learning algorithm doing all kinds of decision-making in your life.
Stevens: I think it's important to talk about which tools we're discussing at any given moment. In our program, we see a lot of students using software like ChatGPT to write research papers. We allow some of that for very specific reasons, but we're also trying to get students to think about what this software is good at and not good at, because usually their literacy about it is not very good.
Larsen: Let's talk about that some more, especially with a focus on generative A.I., whether large language models or image-generation A.I. What should we be teaching, and how should we be teaching it, to prepare our students for work environments where A.I. proficiency will be required?
Stevens: What we're trying to do when we use A.I. is to have students understand what those tools are doing, because they already have the literacy to write, to research and analyze content themselves. They're just expanding their capacity or their efficiency in doing certain tasks, not replacing their command of text or research.
Fiesler: There's also that understanding of the limitations of these tools. A.I. can seem like magic, and if it seems like magic, you don't understand what it can do or not do. This is an intense simplification, but ChatGPT is closer to being a fancy autocomplete than it is a search engine. It's just a statistical probability of what word comes next. And if you know that, then you don't necessarily expect it to always be correct or always be better at a task than a human.
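Editor's note: Fiesler's "fancy autocomplete" description can be made concrete with a toy sketch. The Python snippet below is purely illustrative; its vocabulary and probabilities are invented, and a real large language model learns its distributions from enormous amounts of text with a neural network rather than a hand-written table. The core loop, though, is the same: given the words so far, sample a likely next word.

import random

# Toy next-word probability table. A real model learns distributions
# like these from vast text corpora; these numbers are made up.
NEXT_WORD_PROBS = {
    "the sky is": {"blue": 0.7, "clear": 0.2, "falling": 0.1},
}

def next_word(context):
    """Sample the next word in proportion to its probability."""
    probs = NEXT_WORD_PROBS[context]
    return random.choices(list(probs), weights=list(probs.values()))[0]

print(next_word("the sky is"))  # usually "blue", occasionally "falling"

Nothing in that loop checks whether the sampled word is true, which is why, as Fiesler notes, you can't expect the output to always be correct.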
Stevens: Say a student is writing a research paper and is engaged with a particular body of research literature. Is the A.I. drawing from the most recent publications, or the most cited? How does peer review fit into a model of chat generation? These are the kinds of questions that really tell us these tools aren't as good as students sometimes think.
Larsen: We're talking a lot about technology literacy here, but are there any other aspects of literacy you think are especially pertinent when it comes to A.I. models?
Fiesler: There's also information literacy, which is incredibly important when you are getting information you cannot source. If you search for something on Google, you have a source for that information that you can evaluate, whereas if I ask a question in ChatGPT, I have to fact-check that answer independently.
Stevens: I'm glad you said that, because in class, if a student has a research project, they can declare they'll use A.I. to assist them, but they get a different rubric for grading purposes. If they use assistance to build their argument more quickly, they must have enough command of the literature to know when that tool generates a mistake.
Fiesler: And educators have to have an understanding of how these tools work, as well. Would you stop your students from using spell check? Of course not, unless they're taking a spelling test. The challenge is that sometimes it is a spelling test, and sometimes it's not. It's up to educators to figure out when something is a spelling test, and to clearly articulate that to students, along with the value of what they're learning and why we teach spelling before letting them use spell check.
Star Wars: The Frog Awakens
Larsen: That's an interesting thought. What about specific skills like critical thinking, collaboration, communication and creativity? How will we change the way we teach those concepts as a result of A.I.?
Fiesler: I think critique and collaboration become even more important. ChatGPT is very good at emulating creativity. If you ask it to write a fan fiction where Kermit the Frog is in Star Wars, it will do that. And the fact that it can do that is pretty cool, but it's not good; it tends to be pretty boring. Charlie Brooker said he had ChatGPT write an episode of Black Mirror, and of course it was bad; it was just a jumble of tropes. The more we play with these systems, the more we come to realize how important human creativity is.
Stevens: You know, machine learning hasn't historically been pointed at creativity; the idea is to have a predictable and consistent set of responses. But we're trying to teach our students to develop their own voice and their own individuality, and that is never going to be something this version of tools will be good at emulating. Watching students fail because they think technology offers a shortcut can be a literacy opportunity. It lets you ask the student: Are you just trying to get software to get you through this class, or are you learning how to write so that you can express yourself and be heard among all the people being captured in the algorithm?
Larsen: It's interesting listening to you both talk about creativity in the age of A.I. Can you elaborate? I'm especially interested in this historical view that creativity is one of the things that A.I. would never get right, which might be a little less true today than it was a year ago.
Fiesler: Well, I think it depends on your definition of creativity. A.I. is certainly excellent at emulating creativity, as with Kermit and Star Wars, and the things A.I. art generators can do. One of the things art generators do very well is give you an image in the style of a particular artist. The output is amazing. Is that creative? Not really, in my opinion. But there are ways you could use it where it would be good at generating output that, if created by a human, people would see as creative.
Stevens: We have courses in which students work on a new media franchise pitch, which includes writing, comic book imagery, animation, art: they're pitching a transmedia output, so it's going to have multiple modes. You could waste two semesters teaching a strong writer how to draw (which may never happen), or we can say, let's use software to generate the image you think matches the text you're pitching. That's something we want students to think about: when do they need to be creative, and when do they need to say, I've got four hours to produce something, and if this helps my group understand our project, I don't have to spend those four hours drawing.
"It鈥檚 not that A.I. brings new problems to the table, but it can absolutely exacerbate existing problems to new heights.鈥
– Rick Stevens
Risky Business
Larsen: What about media and journalism? Do we risk damaging our reputation or credibility when we bring these tools into the news?
Stevens: Absolutely. The first time a major publication puts out a story with factual errors because someone did not check the A.I. output, that is going to damage not just that publication, but the whole industry. But we're already seeing that damage come from other technological innovations; this is just one among many.
Fiesler: I think misinformation and disinformation are the most obvious kinds of problems here. We've already had examples of deepfakes that journalists have covered as real, and so journalists need to be exceptionally careful about the sources of images and information they report on.
Stevens: It's not that A.I. brings new problems to the table, but it can absolutely exacerbate existing problems to new heights if we're not careful about what the checks and balances are.
Larsen: How about beyond the news? What are some significant trends communicators and media professionals should be keeping an eye out for?
Stevens: We need to train people to be more critical in looking not just at where content comes from, but at how it's generated along certain biases. We can get a chatbot to emulate a conversation, but that doesn't mean it can identify racist tropes that we're trying to push out of our media system. A lot of what we do, critically, is push back against the mainstream, to try to change our culture for the better. I'm not sure that algorithms drawing from the culture we're trying to change are going to have the same values in them to change anything.