Keeping Humans in the Loop, On Purpose
A small design thinking experiment in home automation
This experiment began as an attempt to avoid a very specific kind of conversation.
If you are married, you probably know the type.
The end-of-month discussion about house help.
Who came when. Who left early. What counts as “on time”.
A conversation that never quite concludes, yet reliably returns the next month.
Naturally, I wrote down what I thought was the problem.
“Track attendance of my daughter’s nanny.”
That sentence lasted about five minutes.
When the problem turned out to be a solution
It did not take long to realise I had already jumped ahead.
That sentence was not a problem statement.
It was a solution.
Once I paused and asked what the actual problem was, the framing shifted—uncomfortably at first.
Attendance was not the core issue.
Tracking was not the goal.
In fact, it was not about time at all.
It was about repeated negotiation, trust, and the small frictions that build up when nothing is written down.
Seen this way, the actors became clearer.
There was me and my wife.
There was the nanny and my daughter.
And there was the household itself, full of routines, expectations, and unspoken agreements.
This was not a systems problem.
It was a relationship problem that happened to need a system.
Solving the wrong problem on purpose
Like most explorations, this one began with ideas that looked good on paper.
Ledgers.
Biometrics.
Cameras.
Wi-Fi based presence detection.
Technically, these approaches worked.
Socially, they created new problems.
Wi-Fi detection was a good example. It seemed harmless at first. Presence without effort. No explicit action required.
But giving the nanny Wi-Fi access did more than enable detection. It changed behaviour.
Data limits disappeared. The nanny’s phone usage increased. At one point, I even had to rotate the password.
A small convenience had quietly become permission.
That was an important lesson.
Solutions do not just solve problems. They introduce new ones—often social and behavioural ones before technical ones.
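Mechanically, the effortless part is exactly what makes this approach tempting: presence can be inferred simply from whether a known device shows up on the network. A minimal sketch of that idea, where the MAC address and the surrounding setup are hypothetical, not from my actual experiment:

```python
# Hypothetical sketch of Wi-Fi presence detection: a phone's MAC address
# appearing in the local ARP table is taken to mean "present".
# The MAC below is made up for illustration.

KNOWN_MAC = "aa:bb:cc:dd:ee:ff"  # hypothetical phone MAC address

def is_present(arp_table_text: str, mac: str = KNOWN_MAC) -> bool:
    """Return True if the given MAC appears in ARP output (e.g. `arp -a`)."""
    return mac.lower() in arp_table_text.lower()

# In a real setup you would feed this from the OS, e.g.:
#   import subprocess
#   arp_out = subprocess.run(["arp", "-a"], capture_output=True, text=True).stdout
#   print(is_present(arp_out))
```

Note what is missing: any action by the person being detected. That absence is the whole problem.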
Asking better questions
Instead of building anything, I slowed down and started asking questions.
Who performs the daily action?
Who benefits from the system?
Who carries the friction when things go wrong?
What behaviour does this system encourage over time?
Every clever idea collapsed under one of these questions.
Biometrics felt accurate, but wildly disproportionate for a household.
Background automation felt convenient, but slippery.
One constraint kept eliminating entire branches of solutions.
If the system can work without the human being consciously involved, it is probably doing too much.
That single sentence ruled out half the solution space.
The MVP that actually mattered
What I eventually landed on was almost underwhelming.
A static NFC tag.
A phone.
A deliberate tap.
No background tracking.
No inference.
No silent automation.
Presence had to be explicit.
Action had to be intentional.
This was not about building the smartest system.
It was about building the smallest meaningful one.
Once this version worked, something interesting happened. The questions stopped changing shape.
There were still improvements to make. UX refinements. Feedback to incorporate. I even keep a small journal of what feels off.
But I was no longer rethinking the system itself. I was refining it.
At one point, I sent my wife a short video of the rough proof of concept. A phone, a tap, and a spreadsheet quietly filling up.
Her response was simple: “You do know a thing or two about software.”
That felt like enough validation to keep going.
Where Gen AI quietly helped
Gen AI was not the solution here. It played three supporting roles.
It helped prototype quickly, without the overhead of context switching (Script Kid).
It helped reason end to end about constraints and failure modes (Solution Architect).
And it helped surface hidden assumptions I had skipped past too easily (PM).
The real value was not speed alone.
It was the ability to explore without commitment.
Feedback over finish
Even after the MVP worked, the work did not stop.
Most feedback was not about correctness. It was about experience.
When does the system feel ready?
What feedback reassures instead of alarms?
How long can something take before it feels broken?
Most iteration went into copy, timing, and tiny signals.
That is when it clicked for me.
When humans are in the loop, UX is the system.
What I deliberately did not build
This was the most satisfying part of the exercise.
I did not add dashboards.
I did not add summaries or insights.
I did not add nudges or scoring.
Not because they were hard, but because they would have changed what the system was about.
There is a fine line between accountability and micromanagement.
Between presence and surveillance.
Gen AI makes it very easy to cross that line accidentally.
Closing thought
Not every system needs to be smart.
Some just need to be careful.
As Gen AI lowers the cost of building, the harder and more interesting work might be deciding what not to build.
Especially when humans are still very much in the loop.
PS: Explore the project on GitHub


