This wasn’t a startup idea.
It wasn’t a productivity experiment.
It was a boring operational problem that kept showing up every single day.
At a Salvation Army location in a neighboring town, caseworkers start each morning by checking a public-access website to see whether any of their clients were arrested overnight. It is part of staying informed, coordinating services, and avoiding surprises later in the day.
The problem is not the task itself.
The problem is repetition.
Each caseworker was spending roughly 40 minutes every morning doing the same lookup, clicking through the same pages, and manually confirming the same information. Multiply that across several staff members, five days a week, and you are burning hours of human attention on a task that never changes in structure.
So I helped them automate the lookup. Nothing more.
The constraint that matters
Before anything else, there were hard boundaries.
The website is public.
The task is read-only.
No decisions are made automatically.
No private systems are touched.
A human still reviews the result.
This matters because most AI automation fails when people try to automate away judgment instead of just saving time.
The actual solution
I created a simple GPT agent whose only job was to check that public site, extract the relevant information, and report whether anything had changed since the previous day.
That’s it.
No branching logic.
No follow-up actions.
No system access.
No clever tricks.
The agent was tested manually first to make sure the output was clear and predictable. Once it worked reliably, it was scheduled to run once per day at 6:00 a.m.
Every morning, the agent runs, checks the site, and produces a short summary. The result is delivered to the same ChatGPT account the staff already use, on whatever device they happen to open later that morning.
If they are not logged in at the time, nothing breaks. The result is simply waiting in their account the next time they open ChatGPT.
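If you are curious what the same pattern looks like outside ChatGPT, here is a minimal Python sketch of a read-only check-and-diff. The URL and file name are hypothetical placeholders, and the sketch only detects that the page changed. The extraction and plain-language summary are what the GPT agent adds on top.

```python
import hashlib
import urllib.request
from pathlib import Path

# Hypothetical placeholders, not the actual site or file the team uses.
URL = "https://example.gov/public-log"
STATE_FILE = Path("last_seen.sha256")

def fetch_page() -> bytes:
    # A plain read-only GET against a public page. No login, no forms, no writes.
    with urllib.request.urlopen(URL, timeout=30) as response:
        return response.read()

def check_for_changes() -> None:
    digest = hashlib.sha256(fetch_page()).hexdigest()
    previous = STATE_FILE.read_text().strip() if STATE_FILE.exists() else None
    if digest == previous:
        print("No change since the last check.")
    else:
        print("The page changed since the last check. Review it manually.")
    STATE_FILE.write_text(digest)  # remember today's state for tomorrow's run

if __name__ == "__main__":
    check_for_changes()
```

Either way, a human still reads the result. The automation only decides whether there is anything worth reading.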
Why this works
This setup works because it respects limits.
The current twenty-dollar ChatGPT plan includes a limited number of agent runs per month. At the time of writing, that number is forty. A once-per-day automation uses roughly thirty runs per month, which fits comfortably without stress or micromanagement.
This is intentional design, not scaling fantasy.
Trying to run agents every few minutes would burn through limits quickly and create more frustration than value. One narrow task, run once per day, does exactly what it needs to do.
How to set up something like this yourself
If you want to replicate this pattern for your own work, the process is simple.
First, write a narrow agent prompt.
Give it one job. One source. Read-only access.
Be explicit about the output.
Tell the agent exactly what you want returned: a short written summary, a bulleted list, a table, a spreadsheet-style layout, or a document format. Vague output instructions lead to inconsistent results. Clear output instructions make automation reliable.
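As a purely hypothetical example, a narrow prompt in that spirit might read: "Check [the public site] once. Do not log in, fill out forms, or take any action on the site. Find entries dated yesterday or today. Return a bulleted list with one line per new entry, or the single line 'No new entries today.' Do not add commentary."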
Second, run the agent manually at least once.
Confirm the output is formatted the way you expect. If it is not, fix the prompt before you automate anything.
Third, schedule the agent.
Open the agent, use the top-right menu, and select the scheduling or timing option. The exact wording may change as ChatGPT updates its interface, but the idea is the same. Set it to run once per day at a fixed time.
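If you ever outgrow the built-in scheduler and move to a scripted version like the sketch above, the schedule stays just as simple. Here is one way to express it with the third-party schedule library, which is an assumption on my part; cron or any system task scheduler works equally well.

```python
import time

import schedule  # third-party: pip install schedule

# check_site.py is a hypothetical name for the earlier sketch.
from check_site import check_for_changes

# Run the read-only check once per day at a fixed time.
schedule.every().day.at("06:00").do(check_for_changes)

while True:
    schedule.run_pending()
    time.sleep(60)  # wake once a minute to see whether the job is due
```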
Fourth, stop thinking about it.
The result will be delivered to your account when it runs. You do not need to be logged in. You do not need to keep the app open. When you check ChatGPT later, the output is there.
What this is not
This is not full automation.
This is not replacing people.
This is not AI making decisions.
It is a reminder that the most valuable AI use cases are boring. They quietly remove friction from work that humans should not be spending their mornings on.
The goal was never to build something impressive.
The goal was to give people their time back.
That is usually where AI works best. 🔥

