elana.writes
I am an aspiring historian with an interest in the field of curation. I love museums and the mysteries of the past as told by artifacts and archives.
Tuesday, September 9, 2025
Ethics of AI in Writing
Me: You are my mentor in business and professional writing. I have a positive opinion about using AI in writing. Play devil’s advocate and challenge my position. After we debate, share perspectives I might not have considered about ethics, authorship, and responsibility
AI: That's an interesting and increasingly common viewpoint. I'm glad you're thinking about how to
leverage AI. But, as your mentor, I'd be remiss if I didn't challenge you to look at the full picture.
Let's play devil's advocate for a moment and consider some of the hidden complexities.
The Case Against AI-Generated Writing 👿
Using AI for professional writing isn't as simple as just hitting a button to get a finished product.
It can lead to a host of problems that might damage your professional reputation and undermine
your skills.
Loss of the Human Touch
The most significant weakness of AI writing is its inherent lack of a human voice.
An AI tool, at its core, is a sophisticated prediction machine. It's trained on a massive
dataset of human-generated text and uses statistical models to determine the most probable
next word in a sequence. The result is often grammatically correct and fluid, but it can be
bland, generic, and devoid of the nuance, emotion, and creativity that a human brings. Your audience
can often spot this "AI bloat"—things like repetitive phrases, empty transitions, and an overly
formal or robotic tone—which can make your content feel inauthentic and less trustworthy.
You risk losing the very thing that makes your brand or your personal writing unique.
The Illusion of Efficiency
At first glance, AI seems like a massive time-saver. But what happens when the AI "hallucinates" or
gets the facts wrong? Generative AI is not a research tool; it's a text-completion tool. It can
confidently present incorrect information or cite sources that don't exist. This means you can't
just copy and paste the output. You have to meticulously fact-check and edit
everything it produces. The time you thought you were saving by not writing the first draft can
be quickly eaten up by the need for extensive corrections.
Stifling Your Own Growth
Think of professional writing as a muscle. The more you use it, the stronger it gets. By over-relying
on AI, you may be missing the opportunity to sharpen your own critical thinking and communication
skills. What happens when you need to craft a highly nuanced or sensitive message for a specific
audience? If you've outsourced your writing practice to a machine, you might find yourself ill-equipped
to handle situations that require empathy, strategic thinking, and a truly unique voice.
Ethics, Authorship, and Responsibility
Now, let's move beyond the practical arguments and consider the deeper implications of using
AI in your writing.
The Problem of Authorship
This is a core issue. Who is the author of a piece of writing if an AI generated it? The U.S. Copyright
Office has been clear: AI-generated content cannot be copyrighted.
Copyright law is based on the idea of human creativity. If you use an AI to create a substantial
portion of your work, you may be jeopardizing your ability to claim legal ownership of it. This is a
massive risk for any business or professional who relies on intellectual property.
Furthermore, many academic and professional organizations, including the Committee on Publication
Ethics (COPE), have explicitly stated that AI tools cannot be listed as authors. They argue that authorship
implies responsibility—the ability to be accountable for the accuracy and integrity of the work. An AI
cannot be held responsible; it cannot sign a legal document or be sued for libel.
The Responsibility and Liability Gap
When you publish a piece of writing, you're the one on the hook for its contents. If an AI provides factually
incorrect information that leads to a lawsuit or a damaged reputation, the blame falls squarely on you, the
human author. You can't just say, "The AI did it." This creates a significant liability gap. As the final editor
and publisher, you are fully responsible for the content, even the parts produced by the AI. This is a critical
point that many people overlook when they see AI as a simple shortcut.
Bias and Data Provenance
AI models are trained on vast amounts of data scraped from the internet. This data can contain biases—racial,
gender, and cultural—that the AI can unintentionally perpetuate in its writing. If you're not aware of these
biases, you could be publishing content that is discriminatory or offensive without even realizing it. The problem
is, you can't see the underlying data, so it's a "black box" of potential ethical risks. You have a responsibility to
your audience and your organization to ensure your content is fair and inclusive, and relying on AI makes that
more difficult.
In the end, AI is a powerful tool, but it should be treated as just that—a tool, not a replacement for your own
skills and judgment. Using it without a deep understanding of its limitations and the ethical implications is
a dangerous game. It's about using it strategically to augment your work, not allowing it to dilute your
expertise and professional integrity.
Me: I actually agree with many of your points. I see AI as more of a tool to help gather an author's thoughts and
help construct an outline or gather sources of information.
AI: That's an excellent clarification, and I'm glad we're on the same page. Viewing AI as a collaborative tool
rather than a replacement for human intellect is a much more strategic approach.
AI as a Co-Pilot, Not an Auto-Pilot
You're right to see AI as a way to structure your thoughts and get a head start. Think of it less like a ghostwriter
and more like a highly efficient research assistant. Using it to generate a rough outline, brainstorm keywords,
or even find potential sources can significantly reduce the "blank page" problem. This frees up your mental
energy to focus on what really matters: crafting a powerful message, applying your unique insights, and
ensuring the content is accurate and authentic.
This approach addresses many of the ethical and practical issues we discussed. When you use AI to get an
initial outline, you, the author, retain full authorship and responsibility. The final work is a product of your
critical thinking and professional voice, with the AI serving only as a
starting point. This ensures you maintain control over the content and can stand behind every word you publish.
By focusing on this kind of use, you can leverage the best of both worlds—the speed and efficiency
of AI, combined with the creativity, nuance, and integrity that only a human can provide.
Me: Do you see a downside to this approach to AI use?
AI: That's a very thoughtful question. While using AI as a co-pilot for brainstorming and outlining is a much
more responsible and effective approach, it's not without its potential downsides.
Even with this limited use, you still face a few key challenges:
The Problem of "AI-Shaped" Thinking
The most subtle danger is that you might start to think like the AI. An AI is an aggregator of existing ideas;
it's designed to give you the most common or statistically probable answer. If you consistently rely on
it to provide outlines and topic ideas, you risk becoming less creative and less likely to generate truly
original or counterintuitive thoughts. The more you use it, the more your own ideas might begin to conform
to the "average" of the internet, stifling your unique voice and perspective.
The Black Box of Bias and Source Accuracy
When you ask an AI for "sources of information," you're trusting that its training data is accurate and unbiased.
But we know that's not always the case. The AI might provide a list of sources that are outdated, biased, or
simply nonexistent (a common "hallucination"). This means you can't blindly trust the information it gives you.
You're still responsible for verifying
every source and fact it provides, which can sometimes be more time-consuming than just conducting the
initial research yourself. The efficiency gain is only real if you can trust the input, and with current AI, you
still can't.
So, while using AI as a tool is a great strategy, you must remain vigilant. Always remember that your
ultimate goal is to produce original, insightful, and accurate work. The AI is a tool to help you get there,
not a shortcut that allows you to bypass the fundamental steps of critical thinking and diligent research.
Me: I forgot to mention, I like to use AI as an editor too. I will sometimes upload my own paragraphs and ask
the AI to review the context and grammar. What's your opinion?
AI: That's an excellent point, and it's a very common and valid use case for AI. The distinction between using
AI to generate content and using it to refine your own writing is a critical one. However, even with this
more limited and responsible approach, there are still some important things to consider.
The Dangers of a "Perfect" Surface
AI grammar and style checkers can be powerful tools. They can catch typos, grammatical errors, and even
suggest improvements for clarity and conciseness. But here's the catch: they can make your writing "perfect"
in a technical sense, while also making it bland and devoid of personality.
The Loss of Voice: An AI editor is a tool of standardization. It's trained on the most common
and "correct" ways of writing, and it will often suggest changes that iron out stylistic quirks
and unique phrasings. That sentence you wrote with a slightly awkward but intentional rhythm?
The AI might suggest a more "efficient" version that loses the punch. Over time, you might start
to write in a way that is optimized for the AI, not for your human audience. This can flatten
your unique voice and make your work indistinguishable from others who use the same
tools.
Contextual Blindness: An AI editor lacks the deep contextual understanding that a
human editor has. It doesn't know your specific audience, your brand's tone, or the
nuance of your argument. It might suggest a change that is grammatically correct but
completely misses the point of what you're trying to convey. For example, it might change
a term you used for a specific technical reason to a more common synonym, thus losing
the precision you intended.
The Intellectual Property and Privacy Risks
This is a major consideration, and one that is often overlooked when using AI editors. When you paste your
original, human-written content into an AI tool, you might be giving away your intellectual property.
Training the Model: Most of the widely available, free AI tools use user inputs to further
train their models. This means the proprietary paragraphs you've written could become
part of the dataset that trains the next version of the AI. You could be giving your valuable,
original work away for free, and it might even show up as a fragment in someone
else's AI-generated text. For a professional writer or a business, this is a significant
intellectual property risk.
Confidentiality: If you're working on sensitive or proprietary documents, using a public
AI editor is a massive security risk. Your company's confidential information could
be uploaded to the AI's servers, where it is no longer secure. Many professional and
legal organizations now have strict policies prohibiting the use of generative AI tools for
editing confidential materials.
The Erosion of Your Own Skills
Finally, while using an AI editor can be a time-saver, there's a risk of what I'll call "editorial atrophy."
Skipping the Learning Process: The act of editing your own work is a crucial part of
becoming a better writer. It forces you to critically analyze your sentences, identify your
common mistakes, and develop your own internal "style guide." If you just paste your
writing into a tool and accept the changes, you are bypassing that vital learning process.
You might become a "prompter" rather than a true editor, and your skills may not grow.
My advice remains the same: use AI as a strategic tool, but always be the final authority. For editing, use it as an
enhanced spell-checker or a second opinion, but never as a replacement for a careful, human read-through.
The human element—your voice, your judgment, and your responsibility—is what makes your writing
valuable. Don't let a tool diminish it.
Reflection on AI Conversation
This conversation with the Gemini AI was interesting because, right off the first prompt, the AI already touched on most of the points you hear in discussions of AI use. This suggests the topic has become common enough that the bot has a vast pool to pull from in its responses, compared to just a couple of years ago. I feel I may have been a little vague in my initial prompt by just saying I viewed AI positively, but I wanted to give the conversation space for debate before I clarified my stance: I view AI as a helpful tool rather than expecting it to write my papers for me. That is true; I have used AI before to help me comb through databases for books and other sources. I will also sometimes say something like, "I have a paper that must be x number of pages and touch on topics 1, 2, and 3; how could I organize my paper?" and get a short outline. The AI then took my stance and agreed with the use as a tool, so I had to specifically ask for its opinion on the negative side of such a use. This highlights to me the positive feedback loop that AI is known for. Even when asked to debate, it will still, to an extent, supply the answers it knows you want. The contrary viewpoints were fairly weak and were problems I was already aware of, so I do not know if the AI bot was really trying all that hard to challenge my stance.