
    The year that didn't ask for permission

    Gaurav Trivedi
    Dec 23, 2025
    7 min read

    I keep a folder on my desktop called “Screenshots I’ll Need Later.”

    It has 347 images.

    I needed exactly five of them this year.

    That folder is a metaphor for 2025. We collected everything. We processed almost nothing. And somehow, we ended up somewhere completely different from where we started.

    This was supposed to be the year I got organized.

    Instead, it became the year I had to fundamentally reconsider what my job even is.

    This year didn't knock.

    It kicked the door open.

    One day, you were confident in your craft. The next day, your tool was talking back: suggesting, correcting, sometimes outperforming you.

    If this year felt intense, confusing, exciting, and slightly exhausting…good.

    That means you were paying attention.

    For me, this year became a deep learning experience in the truest sense of the phrase. Not just machine learning, but human learning.

    It started with a bang. And somehow… it managed to end with one too.

    LEARN: Multiple AI courses (especially from Andrew Ng). Foundation.
    BUILD: the CEA experiment. Application.
    SHARE: PyCon & STC India. Community.
    ABSORB: AI integration at work. Reality.
    REFLECT: this moment. Clarity.

    The year of building (and rebuilding) muscle memory

    Some years teach you skills. Some years teach you humility.

    This one did both.

    Outside of regular work, this year was about experimentation. While I was taking one of Andrew Ng’s AI courses, one thought kept returning:

    What if I actually tried to apply this instead of just understanding it?

    Learning without experimentation felt incomplete. So I decided to run an experiment: one that would force me to learn, unlearn, and occasionally feel uncomfortable.

    I went looking for a real problem. It didn’t take long.

    A problem everyone endures and no one enjoys

    Peer and editorial review.

    Manual
    Subjective
    Exhausting

    Everyone was expected to remember the style guide. Everyone was expected to interpret it the same way. Most never did.

    That’s when a simple thought appeared:

    Why not train a model on the style guide itself?

    Not to replace reviewers. But to remove memory and interpretation from the equation.

    That was the beginning of Content Editorial Assistant (CEA).
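    To make that concrete, here is roughly what “training a model on the style guide” looks like in code: the rules travel with every review request, so nothing depends on anyone’s memory of them. This is a minimal sketch assuming an OpenAI-style chat API; the rules, model name, and sample draft are placeholders, not CEA’s actual implementation.

```python
# Minimal sketch: review a draft against explicit style-guide rules with an LLM.
# Assumes the `openai` package and an OPENAI_API_KEY in the environment.
# The rules, model name, and sample draft are placeholders, not CEA's real setup.
from openai import OpenAI

client = OpenAI()

STYLE_RULES = """\
1. Address the reader as "you"; avoid "the user".
2. Describe product behavior in present tense.
3. Spell out an abbreviation the first time it appears.
"""


def review(draft: str) -> str:
    """Return the rule violations found in a draft, citing rule numbers."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {
                "role": "system",
                "content": (
                    "You are an editorial assistant. Flag only violations of "
                    "these rules, quoting the offending text and the rule number:\n"
                    + STYLE_RULES
                ),
            },
            {"role": "user", "content": draft},
        ],
        temperature=0,  # reviews should be repeatable, not creative
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(review("The user clicks Save and the CLI will have stored the file."))
```

    The model call is the least interesting part. The point is that the style guide itself becomes the single source of truth for every review.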

    Built in 2025: Content Editorial Assistant

    Before: an experiment. A "nice idea."
    After: a system. Quiet reliability.

    Less hype. More discipline. Fewer "wow" moments—and more of the kind that survives real workflows.

    CEA stopped being an experiment and started behaving like a system.

    That shift mattered. Because tools that work in theory are easy to build. Tools that survive reality are rare.

    PyCon India, STC India, and the power of shared confusion

    Presenting CEA at PyCon India and STC India surfaced something important.

    Conferences this year weren’t about answers. They were about admitting uncertainty, together.

    The most honest conversations didn’t happen on stage. They happened in hallways, over coffee, and in slightly uncomfortable pauses where someone finally said:

    "I'm not sure how this will change my job… but I know it will."

    That honesty was refreshing.

    A note on STC India

    STC India stood out. It was a full house with an eager audience. By the end, three things became very clear:

    1. Everything was about AI

    Not just in talk, but in experimentation. The shift from theory to practice was everywhere.

    2. Much of the curiosity was driven by fear

    Not curiosity for novelty, but fear of irrelevance and job loss. The motivation behind the learning mattered.

    3. Community matters more than ever

    In uncertain times, people don't look for tools first. They look for each other.

    Learning at an uncomfortable speed

    Yes, there was work. Yes, multiple new things were happening at the office. But somewhere between deadlines and curiosity, I managed to complete a few AI courses, especially the ones by Andrew Ng on AI Agents, along with another that pushed my thinking even further.

    Completed: AI Agents in LangGraph (DeepLearning.AI)
    Multi-agent systems. Planning. Execution. Iteration.

    Completed: Building Agentic RAG (DeepLearning.AI)
    How AI retrieves and reasons over documentation; a stripped-down sketch of that loop follows.
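    For context, here is the skeleton of the retrieve-then-answer loop that agentic RAG builds on, with a deliberately naive keyword matcher standing in for a real retriever. The documentation snippets and the `mytool` CLI they describe are made up for illustration; this is not the course’s code.

```python
# Stripped-down RAG loop: retrieve the most relevant documentation chunk,
# then let the model answer using only that chunk as context.
# Assumes the `openai` package and an OPENAI_API_KEY; the docs and the
# `mytool` CLI mentioned in them are hypothetical placeholders.
from openai import OpenAI

client = OpenAI()

DOCS = [
    "Install the CLI with `pip install mytool`, then run `mytool init`.",
    "Authenticate with `mytool login`; tokens are cached in ~/.mytool.",
    "Publish a documentation set with `mytool publish --site <name>`.",
]


def retrieve(question: str) -> str:
    """Naive retrieval: pick the chunk sharing the most words with the question."""
    q_words = set(question.lower().split())
    return max(DOCS, key=lambda chunk: len(q_words & set(chunk.lower().split())))


def answer(question: str) -> str:
    """Ground the model's answer in the retrieved chunk instead of its memory."""
    context = retrieve(question)
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Answer using only this documentation:\n" + context},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(answer("How do I publish my docs?"))
```

    Agentic versions add planning and iteration on top: deciding what to retrieve next, checking whether the answer is grounded, and trying again when it isn’t.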

    And by the end of it, I realized an uncomfortable truth.

    The courses weren't hard. Accepting what they implied was.

    Once you understand agents, orchestration, and autonomy, you stop asking if AI will change work.

    You start asking who adapts fast enough. That question doesn’t come with a syllabus.

    Work didn’t “adopt” AI; it absorbed it

    By mid-year, something subtle happened. AI stopped being a separate initiative. It leaked into everything.

    Cursor quietly reshaping how we code
    Claude becoming a thinking partner
    Gemini stepping into analysis and synthesis
    AI everywhere: documentation, reviews, ideation, validation, often invisibly

    No big announcement.

    No victory lap.

    Just… integration.

    And with it, discomfort.

    The real trouble (that no one has an answer for)

    Here’s the part no keynote wants to dwell on:

    Productivity is no longer the differentiator. Judgment is.

    AI made output cheap.

    Speed abundant.

    Polish trivial.

    So what’s left?

    Taste
    Context
    Responsibility
    Knowing when not to generate
    Understanding consequences beyond the prompt

    There is no model for that yet.

    No roadmap either.

    That uncertainty isn't a bug of this era. It's the defining feature.

    So… how do you stay relevant?

    I don’t have a definitive answer for this. But one thing is for sure: worshipping AI won’t help. Maybe we can stay relevant by doing the unglamorous work:

    → Thinking clearly when tools get noisy
    → Building systems, not demos
    → Asking better questions instead of faster ones
    → Understanding why something works, not just that it does

    Ironically, the more automated things become, the more human discernment matters.

    What 2026 quietly looks like

    Prediction is always risky, but there’s growing consensus among AI and technical communication experts about how technical writing will evolve.

    AI won’t make technical writers obsolete; it will make the role more strategic and less repetitive. Current research and industry voices suggest that AI automates structure, formatting, and first-draft generation, but it still struggles with context, correctness, empathy, persona, and domain nuance.

    Will plateau: people who treat AI as a shortcut.

    Will compound: people who treat AI as a lens.

    Technical writing won’t disappear. But thoughtless writing will… and that’s probably a good thing.

    The folder on my desktop

    I still have that “Screenshots I’ll Need Later” folder.

    Still 347 images.

    Still only needed five.

    But I learned something.

    Collecting isn't understanding.
    Hoarding isn't preparing.

    The screenshots I actually needed weren’t flashy demos or bold announcements.

    They were small moments:

    A slide showing real user research.
    A diagram mapping documentation into product development.
    A whiteboard photo where a team wrestled with information architecture.

    The human stuff.

    The thinking stuff.

    The why behind the what.

    2025 broke my brain with questions I couldn’t answer.

    2026 will probably break it again, and that’s okay.

    Healing is just learning in disguise.

    So yes, this year didn't knock.

    It still doesn't.

    But here's the difference—we're no longer pretending we didn't hear it.

    This year didn’t just teach us new tools. It taught us what can’t be automated away.

    And if that realization felt uncomfortable at times, it means the year did exactly what it was supposed to do.

    Here’s to learning that sticks. And to the questions worth carrying into the next year.

    Wishing you a Merry Christmas and a very Happy New Year!
