Coding Tales: Generative AI's Impact on Science Writing & Storytelling

Strongly infused w/ own opinions, might not be canon

People have been throwing around the term “generative AI” a lot in recent years, partly due to the rise in popularity of the third iteration of the Generative Pre-trained Transformer: ChatGPT. To many people’s surprise, the GPT model existed long before its current application took over the world.1 It works by predicting upcoming text based on existing contextual information, provided either by the user in the form of text prompts or by the text it has already generated. This word-prediction model was trained on a vast body of publicly available text, drawing on historical literature, newspapers, Internet blogs, and random people’s Reddit or Twitter posts. A future where AI-generated content floods the world worries me.
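To make the prediction idea concrete, here is a toy sketch — nothing remotely close to GPT’s scale or architecture, just a bigram table built over a made-up corpus I invented for illustration. It guesses the next word purely from which words followed which in its training text:

```python
from collections import Counter, defaultdict

# A tiny made-up corpus standing in for the "vast body of publicly available text".
corpus = (
    "the model predicts the next word "
    "the model imitates the patterns it has seen "
    "the next word depends on the previous word"
).split()

# Count, for each word, which words were seen following it (a bigram table).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the continuation most frequently seen in training."""
    return following[word].most_common(1)[0][0]

print(predict_next("next"))  # every "next" in the corpus was followed by "word"
```

Note what is missing: the toy model has no idea what any of these words mean. It only knows what tended to come next, which is exactly why frequency, not truth, drives the output.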

AI-generated content is, simply put, fake

Soulless: the word that most appropriately describes the texts that come out of any generative artificial intelligence program.

Despite the fact that the GPT model was trained on real texts, the validity of its output cannot be guaranteed. Writing purely based on contextual prediction allows incorrect or outright bogus information to slip through the cracks; sarcastic articles, fictional writing, and other such material fed to the model have been causing hallucinations in AI-generated content. When generating a paragraph, these models attempt to imitate the patterns they learned from their training datasets; as long as the sentence structures conform to their notion of something written by a real human, the accuracy of the text doesn’t matter to them.2 A block of text generated by an AI model has never been unique or creative, simply because the only way a model can guess which word comes next stems from the database of texts, one filled with content written by other people, that it was trained upon.

The realm of generative AI is currently in a peculiar state. Because no person or technology can reliably identify whether a piece was created by a real person or generated by an algorithm, it is possible for AI models themselves to be trained on previously generated content. The more this happens, the deeper the loss of creativity spirals. When the models are trained on completely false information, regardless of how closely they mimic human writing styles, the only creativity they possess lies in their ability to output nonsense. AI-generated content is, therefore, intrinsically a fancy way to fake writing. The lack of any emotional connection between the nodes-on-silicon and the text they generate undermines all the creative effort that people devote their entire lives to.
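The downward spiral can be sketched, too. Below is a deliberately crude simulation I made up for this post: the “model” is nothing but a word-frequency table, retrained each generation on its own samples. Real systems are vastly more complex, but even this toy shows the mechanism — rare words get dropped during sampling and can never come back, so the vocabulary only shrinks:

```python
import random
from collections import Counter

random.seed(0)  # fixed seed so the run is repeatable

def train(words):
    """'Train' a toy model: just record the word-frequency distribution."""
    return Counter(words)

def generate(model, n):
    """Sample n words from the model's learned distribution."""
    vocab = list(model)
    weights = [model[w] for w in vocab]
    return random.choices(vocab, weights=weights, k=n)

# Generation 0: a 'human' corpus with a diverse vocabulary of 50 words.
corpus = [f"word{i}" for i in range(50)] * 2
print("human vocabulary size:", len(set(corpus)))

# Each generation is trained on the previous generation's output.
for gen in range(1, 6):
    model = train(corpus)
    corpus = generate(model, len(corpus))
    print(f"generation {gen} vocabulary size:", len(set(corpus)))
```

Every generation can only reuse words the previous one happened to emit, so diversity is lost and never regained — my fear about creativity, in miniature.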

Using generative AI to enhance science writing? No, thanks

If you haven’t noticed already, I do not subscribe to the notion that AI is beneficial in the creative space. What about science writing? An even worse idea. Science requires precision, and that requirement doesn’t go away when communicating science to a wider public. The craft of deciphering science for a broad range of audiences is not something generative AI can replace. Precision aside, I am highly skeptical of its ability to produce a well-rounded report on a newly discovered subject solely by mimicking the patterns of past content, as novel stimuli would not make any sense to it.

The lack of the additional perceptual cues that only humans can capture will ultimately limit generative AI’s ability to experience science in the real world. It cannot make connections between our shared sensory experiences and the scientific topics it might wish to summarize. Even if it did, it would have stolen another person’s work. In science, plagiarism is no small matter. Putting AI in the line of work of science communication will erase the human touch that makes science relatable and enjoyable for everyone.

My relationship (or the lack thereof) with generative AI

Realizing that a piece of text I have unknowingly read was AI-generated brings out the utmost uneasiness, almost as if I have encountered an alien species pretending to communicate with us in the most robotic way possible: the words these programs spit out carry no sentimental meaning to them.

Why the uneasiness? Think about aliens for a minute: species yet to be discovered, fantasized to be human-like intelligences that may or may not cohabit a vast universe filled with stars and galaxies and, frankly, another set of unknowns. The topic of aliens brings out a sense of fear. The idea that there could exist a species more intelligent than Homo sapiens makes me think twice about my curiosity. What kind of mass destruction, unforeseen by us, the residents of planet Earth, would they decide to deploy against us? The uncertainties, the unknowns, remain undiscovered.

I often draw a parallel between my fear of aliens and the feelings that generative AI stirs in me. Generative AI technologies have the potential to cause great harm to society, erasing the ingenuity that has been an integral part of our collective functioning. The fear that they might one day push people in the creative industries out of their professions.3 The horror of encountering soulless material on every platform. The grim prospect that real human work will no longer be valued.


The end of the humanities? Perhaps. Time will tell.
