Wednesday, April 3, 2024

Working with AI: The Need to Read and Voice Your Soil and Soul


Janpha Thadphoothon

When engaging in writing or textual work, one must leverage both personal intellect and the capabilities of artificial intelligence.


Image: DALL-E 2

Studies have revealed that AI-generated texts often lack essential elements. Here is one noteworthy study I would like to share with you. As a blogger, I'd like to highlight some intriguing findings from researchers in the UK regarding the writing style of ChatGPT, an artificial intelligence tool.

It turns out that ChatGPT has a few distinct quirks in its writing style, including repetitive words, tautology, and an overuse of paragraphs starting with "however." The researchers even go so far as to describe its style as "bland" and "journalistic," according to a study conducted by Cambridge University Press and Assessment.

These findings couldn't come at a more pertinent time, given the growing concerns about cheating among students as generative AI tools like ChatGPT become more widely used.

To better understand the impact of AI on writing, researchers compared essays written with the help of ChatGPT by three first-year undergraduate students to those penned by 164 IGCSE students. The essays were then evaluated by examiners, followed by in-depth interviews with the undergraduates and thorough analysis of their work.

It's certainly something worth considering as we navigate the evolving landscape of AI and its influence on our writing practices.

According to the study, essays aided by ChatGPT exhibited weaker performance in analysis and comparison skills when contrasted with essays not assisted by ChatGPT. However, they demonstrated commendable proficiency in information processing and reflective abilities.

In their study, researchers highlighted several key features of ChatGPT's writing style, such as the use of Latinate vocabulary, repetition of words and ideas, and pleonasms. They also noted that essays written with ChatGPT often employed paragraphs beginning with discourse markers like "however", "moreover", and "overall", as well as numbered lists. The researchers characterized ChatGPT's default writing style as resembling the bland and objective tone commonly found in journalistic writing online.

While students found ChatGPT helpful for quickly gathering information, they cautioned against relying solely on the tool, as it could lead to essays of lower academic quality. Lead researcher Jude Brady emphasized the importance of these findings in understanding the intersection of generative AI and assessment. The researchers hope their work will aid teachers and students in recognizing text generated by ChatGPT, emphasizing the need for digital literacy in utilizing and identifying generative AI.
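Out of curiosity, here is a small Python sketch showing how one of those stylistic markers could be spotted in practice. This is my own illustration, not the researchers' method; the marker list comes from the examples they mention, and the sample essay and threshold-free usage are assumptions on my part.

# Illustrative only: counts paragraphs opening with the discourse markers the study flagged.
# The marker list is from the study's examples; everything else here is my own sketch.
DISCOURSE_MARKERS = ("however", "moreover", "overall")

def opening_marker_rate(text: str) -> float:
    """Fraction of paragraphs whose first word is a flagged discourse marker."""
    paragraphs = [p.strip() for p in text.split("\n\n") if p.strip()]
    if not paragraphs:
        return 0.0
    hits = sum(
        1 for p in paragraphs
        if p.split()[0].strip(",.;:").lower() in DISCOURSE_MARKERS
    )
    return hits / len(paragraphs)

essay = ("However, the evidence is mixed.\n\n"
         "Moreover, sources disagree.\n\n"
         "The data suggest otherwise.")
print(f"{opening_marker_rate(essay):.0%} of paragraphs open with a flagged marker")

A high rate does not prove that a text is AI-generated, of course; it is only one rough signal among many.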

AI as a Helper: You Do the Work

AI serves as a valuable tool when used responsibly and ethically, but it should not be abused or exploited for dishonest purposes. While it can undoubtedly streamline tasks and provide assistance, its potential for misuse underscores the importance of using it judiciously. Used appropriately, AI can enhance productivity, creativity, and efficiency. It's essential to foster a culture of integrity and accountability in the use of AI, ensuring that its benefits are maximized while the risks of unethical behavior are minimized.

Any Real-life Examples?

I am convinced that industries experimenting with generative AI tools hold significant potential for innovation and advancement. For me, it's fascinating to see how various sectors are utilizing this technology to tackle complex challenges and push boundaries.


In the pharmaceutical industry, companies like Amgen and Insilico Medicine, along with academic researchers, are leveraging generative AI to design proteins for medicines. This is particularly groundbreaking as predicting protein folding has been a longstanding challenge in genetic and pharmaceutical research. With the aid of deep learning models like generative adversarial networks (GANs), researchers are gaining new insights and capabilities in protein synthesis.

In genetics research, while the integration of generative AI is not yet widespread, it is making notable contributions. Despite privacy concerns limiting access to genetic databases, recent studies have shown promise. By training GANs and restricted Boltzmann machines on real genomic datasets, researchers are able to generate artificial genomes, opening up new avenues for genetic exploration.
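For readers curious about what "training a GAN to generate artificial genomes" involves at the most basic level, here is a deliberately tiny PyTorch sketch. It is a conceptual illustration under strong simplifying assumptions: genomes are reduced to short binary vectors, the "real" data are random placeholders, and the network sizes and training settings are arbitrary. Nothing here resembles an actual genomics pipeline.

# Minimal GAN sketch (illustrative only; placeholder data, not real genomes).
import torch
import torch.nn as nn

N_SNPS = 64      # length of each toy genotype vector (assumption)
NOISE_DIM = 16   # latent noise dimension fed to the generator (assumption)

generator = nn.Sequential(
    nn.Linear(NOISE_DIM, 128), nn.ReLU(),
    nn.Linear(128, N_SNPS), nn.Sigmoid(),   # outputs in [0, 1], read as allele probabilities
)
discriminator = nn.Sequential(
    nn.Linear(N_SNPS, 128), nn.ReLU(),
    nn.Linear(128, 1), nn.Sigmoid(),        # probability that the input is "real"
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

# Placeholder for a real genotype matrix (512 individuals x N_SNPS binary markers).
real_data = torch.randint(0, 2, (512, N_SNPS)).float()

for step in range(200):
    batch = real_data[torch.randint(0, len(real_data), (32,))]
    noise = torch.randn(32, NOISE_DIM)
    fake = generator(noise)

    # Train the discriminator to separate real vectors from generated ones.
    d_opt.zero_grad()
    d_loss = loss_fn(discriminator(batch), torch.ones(32, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(32, 1))
    d_loss.backward()
    d_opt.step()

    # Train the generator to fool the discriminator.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    g_loss.backward()
    g_opt.step()

# Sample a few "artificial genomes" by thresholding the generator's output.
artificial = (generator(torch.randn(5, NOISE_DIM)) > 0.5).int()
print(artificial)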

In manufacturing, companies like Autodesk and PTC (the maker of Creo) are using generative AI to revolutionize the design process for physical objects. From optimizing designs for materials efficiency and simplicity to speeding up production, generative AI is driving innovation across the manufacturing sector.

Entertainment is another realm where generative AI is making waves. Tools like ChatGPT and DALL-E are already being used to generate conceptual art and background music for games. However, legal challenges related to copyright infringement and intellectual property theft are slowing down the adoption of generative tools in some contexts.

Despite these promising advancements, it's essential to approach the application of generative AI with caution. While these tools can produce realistic content, they may not always meet professional standards or accurately reflect reality. Moreover, there's a risk that they fabricate information to support their outputs, posing significant ethical and legal challenges.

In essence, while generative AI offers immense potential for innovation, it's crucial to be mindful of its limitations and ethical implications. By navigating these challenges thoughtfully, we can harness the power of AI to drive positive change while mitigating risks.


Expert Opinions

It's important to acknowledge that ChatGPT, like any AI tool, may not always provide accurate or comprehensive answers to user queries. While it can be a valuable resource for generating content and engaging in conversation, it's essential to recognize its limitations.


ChatGPT operates based on the data it's been trained on and the algorithms it employs. As a result, it may not possess all the necessary information or understand the context of every question it's presented with. Additionally, it's susceptible to biases present in its training data and may produce outputs that are not entirely reliable or accurate.

Users should approach ChatGPT with a critical mindset, verifying information from multiple sources and considering its responses within the broader context. While it can be a helpful tool for generating ideas or initiating discussions, it's not a substitute for human expertise or critical thinking.

By understanding ChatGPT's limitations and using it judiciously, users can make the most of its capabilities while being mindful of its potential shortcomings.

Educational Strategies

The influx of AI writing assistants into the classroom necessitates a shift in educational strategies. Equipping both educators and students with the tools to navigate this new landscape is crucial. Students must become discerning consumers of AI-generated content, adept at fact-checking, identifying bias, and ensuring internal logic within the text. But AI isn't the enemy. By leveraging it for brainstorming, research, and draft refinement, students can become more efficient writers.

The key lies in maintaining academic integrity. Clear guidelines on acceptable AI use, plagiarism detection tools, and a focus on critical thinking and analysis will ensure AI remains a helpful supplement, not a shortcut. Open discussions in the classroom and transparent communication between students and instructors will foster a responsible and innovative learning environment where AI empowers, not hinders, the writing process.

Devaluing the Craft of Writing?

While some fear AI writing assistants will completely replace human writers, this is unlikely. AI excels at efficiency and data processing, but it lacks the creativity, critical thinking, and emotional intelligence that go into truly impactful writing. Instead, AI serves as a valuable helper, much like any tool. By learning to communicate effectively with AI and leverage its strengths, writers can enhance their work and achieve better results.

What Should Be Done?

I believe there are several effective strategies we can adopt to address the risks associated with AI in writing:

During the Development Stage:

Firstly, it's crucial to build AI writing tools with ethical considerations in mind. This means incorporating features that detect and mitigate biases, such as techniques to ensure outputs are not influenced by biased training data. Additionally, implementing originality checks to flag potential plagiarism and providing transparency features that inform users about the tool's limitations can help maintain integrity.
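To make the idea of an "originality check" concrete, here is a minimal Python sketch that flags draft sentences closely matching a reference corpus. The similarity measure (difflib's SequenceMatcher) and the 0.85 threshold are my own illustrative choices; real plagiarism detectors rely on far larger corpora and far more sophisticated matching.

# Illustrative originality check: flag draft sentences that closely match known sources.
# The example data, threshold, and helper names are hypothetical.
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Return a rough 0-1 similarity score between two text snippets."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def flag_overlaps(draft_sentences, corpus_sentences, threshold=0.85):
    """Yield (draft_sentence, matched_source) pairs whose similarity exceeds the threshold."""
    for d in draft_sentences:
        for s in corpus_sentences:
            if similarity(d, s) >= threshold:
                yield d, s
                break

draft = ["Generative AI can streamline many writing tasks.",
         "The mitochondrion is the powerhouse of the cell."]
corpus = ["The mitochondria are the powerhouse of the cell."]

for sentence, source in flag_overlaps(draft, corpus):
    print(f"Possible overlap:\n  draft : {sentence}\n  source: {source}")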

Moreover, designing AI writing assistants with a human-in-the-loop approach ensures that there's always a human reviewer to assess outputs for accuracy, ethical implications, and adherence to desired style or voice.

User Education and Training:

To ensure responsible use of AI writing tools, clear guidelines must be established. These guidelines should cover topics like plagiarism, fact-checking, and proper attribution. Furthermore, integrating ethics training into educational programs for writers and content creators is essential. Such training should address the potential risks associated with AI writing tools and provide guidance on using them ethically and effectively.

Transparency and Regulation:

Developers should prioritize transparency in the development of AI writing tools. This means being open about the data used for training and the algorithms employed. Additionally, considering regulatory frameworks to oversee the responsible development and usage of AI tools could help address concerns related to data privacy, bias, and content safety.

Other Considerations:

Encouraging the development of open-source AI writing tools allows for greater community involvement and scrutiny, ultimately contributing to the improvement of these tools. Furthermore, investing in research and development projects focused on exploring the ethical implications of AI writing and developing more robust AI tools is crucial for long-term sustainability and responsible use.

I am confident that, by implementing these strategies and best practices, we can navigate the challenges associated with AI in writing and ensure that this powerful technology is harnessed for positive outcomes.

Plagiarism and Lack of Originality

One thing I would like to point out is this: overreliance on AI content generation can lead to plagiarism if writers simply copy and paste the output without proper attribution or fact-checking. Additionally, AI-generated content often lacks the unique voice and perspective that human writers bring to the table.

Again, AI is only a helper - you voice your soil and soul!


References
Busby, E. (2024, March 3). Essays written with ChatGPT feature repetition of words and ideas – study. Breaking News. Retrieved from https://www.breakingnews.ie/world/essays-written-with-chatgpt-feature-repetition-of-words-and-ideas-study-1607796.html

Successful generative AI examples and tools worth noting. TechTarget. Retrieved from https://www.techtarget.com/searchenterpriseai/tip/Successful-generative-AI-examples-worth-noting

Dwivedi, Y. K., Kshetri, N., Hughes, L., et al. (2023). "So what if ChatGPT wrote it?" Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy. International Journal of Information Management, 71, 102642. ISSN 0268-4012.

Introducing Gemini: Our largest and most capable AI model. Google. Retrieved from https://blog.google/technology/ai/google-gemini-ai/

Note: This article is AI-assisted.


