I keep a folder on my desktop called “Screenshots I’ll Need Later.”
It has 347 images.
I needed exactly five of them this year.
This was supposed to be the year I got organized.
Instead, it became the year I had to fundamentally reconsider what my job even is.
This year didn't knock.
It kicked the door open.
One day, you were confident in your craft. The next day, your tool was talking back: suggesting, correcting, sometimes outperforming you.
If this year felt intense, confusing, exciting, and slightly exhausting…good.
That means you were paying attention.
For me, this year became a deep learning experience in the truest sense of the phrase. Not just machine learning, but human learning.
It started with a bang. And somehow… it managed to end with one too.
The year of building (and rebuilding) muscle memory
This one did both.
Outside of regular work, this year was about experimentation. While taking one of Andrew Ng’s AI courses, a thought kept returning: learning without experimentation felt incomplete. So I decided to run an experiment, one that would force me to learn, unlearn, and occasionally feel uncomfortable.
I went looking for a real problem. It didn’t take long.
A problem everyone endures and no one enjoys
Peer and editorial review.
Everyone was expected to remember the style guide. Everyone was expected to interpret it the same way. Most never did.
That’s when a simple thought appeared: what if a tool enforced the style guide, so no one had to remember it? Not to replace reviewers, but to remove memory and interpretation from the equation.
That was the beginning of Content Editorial Assistant (CEA).
Content Editorial Assistant
Less hype. More discipline. Fewer "wow" moments—and more of the kind that survives real workflows.
CEA stopped being an experiment and started behaving like a system.
That shift mattered. Because tools that work in theory are easy to build. Tools that survive reality are rare.
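To make that concrete, here is a minimal sketch of the kind of deterministic check CEA is built around. The rules and names are hypothetical, not CEA’s actual code; the point is that the same draft always produces the same findings, with no memory and no interpretation involved.

```python
import re

# Hypothetical rules; a real style guide would define many more.
STYLE_RULES = [
    (re.compile(r"\butilize\b", re.IGNORECASE), 'Prefer "use" over "utilize".'),
    (re.compile(r"\bplease\b", re.IGNORECASE), "Avoid politeness filler in instructions."),
    (re.compile(r"\S\s{2,}\S"), "Use a single space between words."),
]

def review(text: str) -> list[str]:
    """Return style findings per line. Same input, same output, every time."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for pattern, message in STYLE_RULES:
            if pattern.search(line):
                findings.append(f"line {lineno}: {message}")
    return findings

if __name__ == "__main__":
    print("\n".join(review("Please utilize the  API.")))
```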
PyCon India, STC India, and the power of shared confusion
Presenting CEA at PyCon India and STC India surfaced something important.
Conferences this year weren’t about answers. They were about admitting uncertainty, together.
The most honest conversations didn’t happen on stage. They happened in hallways, over coffee, and in slightly uncomfortable pauses where someone finally admitted they didn’t have it figured out either. That honesty was refreshing.
A note on STC India
STC India stood out. It was a full house and an eager audience. By the end, three things became very clear:

1. The shift from theory to practice was everywhere. Not just in talk, but in experimentation.
2. The motivation behind the learning mattered. Not curiosity for novelty, but fear of irrelevance and job loss.
3. In uncertain times, people don't look for tools first. They look for each other.
Learning at an uncomfortable speed
Yes, there was work. Yes, multiple new things were happening at the office. But somewhere between deadlines and curiosity, I managed to complete a few AI courses, especially the ones by Andrew Ng on AI Agents, along with another that pushed my thinking even further.
AI Agents in LangGraph (DeepLearning.AI): multi-agent systems, planning, execution, iteration.

And by the end of it, I realized an uncomfortable truth.
Once you understand agents, orchestration, and autonomy, you stop asking if AI will change work.
You start asking who adapts fast enough. That question doesn’t come with a syllabus.
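If you haven’t used LangGraph, here is a minimal sketch of the plan-execute-iterate loop that kind of course builds toward. Everything here is my own stubbed illustration, not the course’s code: the state fields, node names, and logic are placeholders, and a real agent would call an LLM inside each node.

```python
from typing import TypedDict

from langgraph.graph import END, StateGraph


class AgentState(TypedDict):
    task: str
    plan: list[str]     # remaining steps
    results: list[str]  # completed work


def plan_step(state: AgentState) -> dict:
    # Stub planner: a real agent would ask an LLM to decompose the task.
    if not state["plan"]:
        return {"plan": [f"research: {state['task']}", f"draft: {state['task']}"]}
    return {}


def execute_step(state: AgentState) -> dict:
    # Stub executor: take the next step and record its result.
    step, *rest = state["plan"]
    return {"plan": rest, "results": state["results"] + [f"done: {step}"]}


def should_continue(state: AgentState) -> str:
    # Iterate until the plan is exhausted.
    return "execute" if state["plan"] else END


graph = StateGraph(AgentState)
graph.add_node("plan", plan_step)
graph.add_node("execute", execute_step)
graph.set_entry_point("plan")
graph.add_edge("plan", "execute")
graph.add_conditional_edges("execute", should_continue)

app = graph.compile()
print(app.invoke({"task": "summarize release notes", "plan": [], "results": []}))
```

The stubs aren’t the point; the shape is. A graph that plans, acts, checks state, and decides whether to loop or stop is orchestration, not prompting.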
Work didn’t “adopt” AI; it absorbed it
By mid-year, something subtle happened. AI stopped being a separate initiative. It leaked into everything.
No big announcement.
No victory lap.
Just… integration.
And with it, discomfort.
The real trouble (that no one has an answer for)
Here’s the part no keynote wants to dwell on:
AI made output cheap.
Speed abundant.
Polish trivial.
So what’s left? Judgment. Context. The parts only humans bring.
There is no model for that yet.
No roadmap either.
So… how do you stay relevant?
I don’t have a definitive answer for this. But one thing is for sure: AI worship won’t help. Maybe we can stay relevant by doing the unglamorous work.
What 2026 quietly looks like
Prediction is always risky, but there’s growing consensus among AI and technical communication experts about how technical writing will evolve.
AI won’t make technical writers obsolete; it will make the role more strategic and less repetitive. Current research and industry voices suggest that AI automates structure, formatting, and first-draft generation, but it still struggles with context, correctness, empathy, persona, and domain nuance.
The divide won’t be between writers who use AI and those who don’t. It will be between people who treat AI as a shortcut and people who treat AI as a lens.
Technical writing won’t disappear. But thoughtless writing will… and that’s probably a good thing.
The folder on my desktop
I still have that “Screenshots I’ll Need Later” folder.
Still 347 images.
Still only needed five.
But I learned something.
The screenshots I actually needed weren’t flashy demos or bold announcements.
They were small moments:
The human stuff.
The thinking stuff.
The why behind the what.
2025 broke my brain with questions I couldn’t answer.
2026 will probably break it again, and that’s okay.
So yes, this year didn't knock.
It still doesn't.
But here's the difference—we're no longer pretending we didn't hear it.
This year didn’t just teach us new tools. It taught us what can’t be automated away.
And if that realization felt uncomfortable at times, it means the year did exactly what it was supposed to do.
Here’s to learning that sticks. And to the questions worth carrying into the next year.
Wishing you a Merry Christmas and a very Happy New Year!