Your First Week with AI at Work
What you'll learn
~15 min
- Introduce AI-assisted workflows at work without creating friction
- Apply the 'show results, not process' principle
- Navigate common team dynamics around AI adoption
- Know when to ask permission and when to just deliver results
You’ve learned the tools. You’ve built projects. Now comes the part nobody teaches: bringing AI into your actual workplace without making it weird.
The gap between training and work
Most AI courses end with “now go use this!” and leave you to figure out the hardest part alone: navigating the social, political, and practical realities of changing how you work.
Your team didn’t take this course. Your manager may not understand what you can do now. Some colleagues will be curious, some will be threatened, and some won’t care until they see results. This lesson is about handling all three.
Principle 1: Show results, not process
This is the single most important principle for introducing AI at work.
Don’t say: “I used Claude Code to generate a Python script that processes our CSV exports and produces formatted reports using pandas and matplotlib.”
Do say: “I automated our weekly report. It takes 15 minutes now instead of 3 hours. Here’s this week’s.”
People care about outcomes. The technology is a detail. Leading with the tool invites debates about AI policy, data privacy, and whether “AI is really that good.” Leading with the result invites questions like “how did you do that?” — and now they are asking you to explain, which is a much better position.
In many workplaces, AI adoption is a sensitive topic. Some people worry about job security. Others worry about data privacy. Some managers worry about compliance. If your first move is announcing “I’m using AI now!”, you trigger all those anxieties simultaneously. If your first move is delivering better results faster, the conversation happens on your terms.
Principle 2: Start with your own tasks
Don’t try to change your team’s workflow on day one. Start with your own tasks — things where you are the sole owner and the result speaks for itself.
Good first wins:
- Your own status reports, meeting summaries, or emails
- Data analysis or visualization that you’d normally do in a spreadsheet
- Documentation or SOPs that you’ve been meaning to write
- Research summaries for your own reference
Not yet:
- Team-wide process changes
- Tools that require others to change how they work
- Anything that touches shared systems or data without discussion
The goal in week one is to build a personal track record of faster, better results. Once you have three or four wins, you’ve earned the credibility to suggest changes that affect others.
Principle 3: Manage up
Your manager needs to know three things:
- What you did — in outcome terms, not technology terms
- How much time it saved — a concrete number (“4 hours this week”)
- That you checked the output — they need to trust your verification process
Here’s what a good update to your manager looks like:
Quick win to share: I built an automation that handles our Friday data formatting. It used to take ~3 hours of manual work; it now takes about 15 minutes, including a quality check.
I verified the output against last week's manual version — the numbers match. Happy to demo if you'd like to see how it works.
What this does: it establishes that you’re producing results, being responsible about quality, and offering transparency. Most managers will respond positively.
What if your manager is skeptical?
Some managers will have concerns. Common ones and how to address them:
“Is the data being sent to the cloud?” Know the answer before they ask. Browser-based tools (ChatGPT, Claude.ai) process data on external servers. CLI tools may also send data to APIs. Check your tool’s privacy policy and your organization’s AI usage guidelines. If your org has approved tools, use those. If not, start the conversation: “I’d like to use [tool] for [specific task]. Here’s their data policy. Can we discuss?”
“How do I know the output is accurate?” Describe your verification process: “I spot-check key numbers against the source data, I review the formatting, and I keep the original data as a reference. The AI does the drafting; I do the quality control.”
“Are we allowed to use this?” Many organizations are still forming AI policies. If your org doesn’t have a formal policy yet, start with low-risk tasks (formatting, summarization, drafting) rather than high-risk ones (decisions, customer communications, regulated processes). Document what you’re using and how. Being the person who helps shape the policy is better than being the person who gets caught ignoring it.
🏛️ In Your Field: Government / State Dev
Government-specific guidance. Check your agency’s AI policy first — many have explicit guidance or approved tool lists. The federal AI executive order and OMB guidance set baseline requirements. Start with internal-facing tools (no PII, no classified data) and document your usage. If no policy exists, raise it with your supervisor proactively. Being ahead of the policy conversation is a leadership signal.
Principle 4: Handle team dynamics
The curious colleague
They’ll ask “how did you do that?” — this is your best-case scenario. Show them. Offer to help them try it. Having an ally who also uses AI tools makes everything easier.
The threatened colleague
Someone who sees your 10x speed increase may feel their own skills are devalued. Don’t say “AI can do your job faster.” Do say “this handles the boring parts so I can focus on [meaningful work].” Frame it as removing tedium, not replacing people.
If they express anxiety about AI and jobs, take it seriously. They may be right that their specific tasks are changing. Help them see AI as a tool they can learn too, not a threat to compete against.
The skeptic
Some people will dismiss AI output as unreliable, simplistic, or “cheating.” Don’t argue. Just keep delivering results. Over time, consistent quality speaks louder than any debate. When they eventually ask how you’re doing it, be generous.
The policy blocker
In some organizations, someone in IT, legal, or compliance may object to AI tool usage. Take this seriously — don’t go around them. Instead:
- Ask what their specific concerns are (data privacy? accuracy? licensing?)
- Address each concern directly with facts about the tools you’re using
- Propose a limited pilot: “Can I use [tool] for [low-risk task] for 30 days and report back?”
- Document everything — usage, results, safeguards
Week one game plan
Day 1-2: Pick one personal task and automate it. Use a workflow from the Monday Morning Wins cheat sheet. Something you do weekly that takes at least an hour.
Day 3: Verify and refine. Run it again, compare to your manual process, fix any issues. Make sure the output is as good or better than what you’d produce manually.
Day 4: Tell one person. Share the result (not the process) with someone who would care — your manager, a teammate, or a colleague who’s asked about AI.
Day 5: Log it. Add the win to your impact log. Note: what you did, time saved, and what you’d do differently.
Week 2 and beyond: Add one new workflow per week. By the end of the month, you’ll have a portfolio of real workplace wins that demonstrate your value.
One automated task saves you time every week. Five automated tasks compound into a meaningful shift in what you can accomplish. After a month, you’re not just faster — you’re operating at a different level. That’s what makes this a career investment, not a one-time trick.
Key takeaways
- Show results, not process. Lead with the outcome, not the technology. Let people ask how you did it.
- Start with your own tasks. Build a track record before proposing team-wide changes.
- Manage up proactively. Give your manager outcomes, time savings, and evidence of quality control.
- Handle team dynamics with empathy. Some people will be curious, some threatened, some skeptical. Meet each where they are.
- Week one: one win, one person, one log entry. Momentum beats perfection.