Smarter Machines, Slower Minds?

I woke up this morning to news of Nvidia’s latest impressive stock surge – yet again confounding its critics and the doom-mongers convinced the AI bubble is moments from bursting.

It doesn’t surprise me. AI’s galloping pace is unmistakable, and its technologies are now running wild through the day-to-day interactions and transactions of businesses and organisations. Supply chains, customer service, drug discovery, industrial design, logistics – AI is under the bonnet of so many things we rely on.

Perhaps more striking is the uptake of AI not by organisations, but by individuals. An ever-growing share of the world's population now uses AI in some form. To keep things in perspective, here's the rough scale (pulled up using ChatGPT, naturally): there are 5.5–6 billion people online; roughly the same number using smartphones; and close to a billion already interacting with AI tools – often without even knowing it.

Whatever definitions countries choose when reporting AI usage, it’s getting harder to maintain the fantasy that AI is niche or peripheral. It’s mainstream.

And, although we talk about AI as if it has “just arrived,” it’s been with us for far longer.

The field traces back to the 1940s, when Alan Turing first imagined machines that could think. I re-watched The Imitation Game (the movie about him) on a flight this summer, and was far more absorbed in the storyline than I remembered being when I first watched it ten years ago. Back then, AI felt like a curiosity, whereas now it feels like the backdrop of our lives.

After Turing's wartime code-breaking machines, the formal study of AI began in 1956, wandered along for decades, and then accelerated suddenly in the 2010s with deep learning and the rise of powerful computing. What looks like a "sudden revolution" today is really just the latest chapter in an 80-year experiment.

ChatGPT arrived only three years ago. Its exponential uptake (1 million users in five days, 100 million in two months) made it one of the fastest-adopted technologies in human history. Generative AI went mainstream so quickly that many formed opinions on it only after it was already shaping our routines.

I use AI a lot in my work, and try to treat it as something that expands my thinking rather than replaces it. It’s easy, though, to feel the seduction of outsourcing increasing amounts of the boring brain stuff we deal with to a machine.

When I heard friends first using it to write emails and text messages, I remember thinking: this surely won’t last. And yet here we are. AI now writes, translates, analyses, drafts, refines, designs, and increasingly does it frighteningly well.

If everything we have to do becomes effortless, what happens to the mental muscle we use when things are hard? What happens to reasoning and curiosity? What about our memories, and our accountability?

Earlier this month, the New York Times published an article called How A.I. and Social Media Contribute to ‘Brain Rot’. The Harvard Gazette ran a similar piece last week, and The Guardian and MSN picked up coverage of Nataliya Kosmyna, an MIT Media Lab scientist whose recent study of ChatGPT made waves.

All raise similar worries – some calling it “brain rot,” others “cognitive atrophy.” Kosmyna and her fellow researchers found that users who leaned on AI for writing tasks remembered less of what they had written, and showed diminished activity in brain networks tied to attention and reasoning. One educator interviewed described AI as “a brilliant assistant, but a terrible replacement for struggle.”

This feels about right.

And yet, the same research argues that, used reflectively, AI can make us more creative, more productive, and even more curious. The key distinction is, perhaps, intention: it’s not the presence of AI that dulls us, it’s the absence of our own engagement.

I’d noticed my own habits shifting in that direction, and so took a step back. In doing so I felt myself pushed in the opposite direction: towards more reading, more handwriting, and more analogue time.

I wrote about this over on Substack earlier this year, in a piece called Rewinding With a Bic Pen, because I felt that slowing down into the older rhythms of writing was helping me stretch my attention rather than scatter it.

However, at the same time, I’d say that AI has made me much more efficient in my work – researching, planning, synthesising ideas, prepping workshops, threading insights into reports. Using AI has meant I can carve out more time, not less, for other things that matter. It’s a perfect conundrum if you ask me: not classically good or bad.

My daughter’s school is, understandably, trying to protect students from AI – or at least slow it down – and I don’t feel nervous about their classroom experiences being compromised. But the truth is unavoidable: their world will be steeped in AI whether we delay it or not. The question isn’t how long we can hold it back – it’s how well we can teach them to use it with care and curiosity.

I definitely crave simpler times, simpler tools, simpler choices. I find myself saying this more and more. Although nothing in the rulebook says we can’t keep hold of the simple things while still letting technology widen the possibilities around us. The analogue and the digital can coexist.

On balance, I'm personally on board with AI. I see its risks, and I also see the enormous potential for good (plus the way it has already nudged me back into more deliberate, thoughtful habits).

It’s hard to sum things up. Particularly when I’m nowhere near understanding or predicting AI’s evolution, nor the financial ripples a company like Nvidia is casting across global markets. The numbers are too big for me to take seriously. When I see speculation about Elon Musk edging closer to “trillionaire” status, or Jensen Huang’s net worth doing somersaults, I tend to scroll past it and simply go off to make myself a strong cup of tea.

In the end, AI is a mirror that reflects our cravings as much as our creativity. It shows our hunger for ease, our impatience, and our distractibility – in those moments, we look like one vast Pavlov’s-dog experiment, staring up, waiting for the next treat. But it also reflects our imagination and our ability to build astonishing things.

It holds both truths at once.

And so, arguably, the real question isn’t whether AI will be good or bad, but who we choose to become while using it.

Your Application Has Been Unsuccessful

I've been remiss in posting on DefinitelyMaybe, having thrown my efforts into a weekly Substack instead – a decision that has yet to yield fame, fortune, or even a single sponsorship from a running shoe company.

In the meantime, no one’s mentioned my absence here, which I choose to interpret as proof that after twelve years of waffle I’ve covered everything there is to cover in the world of development.

Sadly for you, reader, I have not even scratched the surface of our sector's bleak, lunar landscape.

It's been an odd year, working freelance in development and humanitarian affairs. One of my last rants here was about Elon Musk's DOGE experiment, following Trump's cheerful levelling of USAID's $40 billion portfolio. I can't bring myself to post a photo of either man this morning – Musk currently awaiting a trillion-dollar stock decision, Trump berating New York's newly elected Mayor, Zohran Mamdani.

Today, I want to talk about the future of job applications.

The email above is verbatim from an INGO that rejected me for a “Head of” position. I get emails like these a lot and am comfortable sharing that. It was a rather whimsical application, to be fair, given that for the last six years I've been in the market for short-term consultancies rather than full-time roles.

However, even with my freelancing I've probably struck gold only three times, after spending hours crafting pitches for advertised assignments. Ironically, the most lucrative of those was also the most haphazard application I submitted – a lesson in randomness, if ever there was one.

Most of my consultancy work comes through word of mouth. For that, I’m sincerely grateful. I intend to keep going, not least because my track record with formal applications suggests remaining solo may be wise.

It makes me wonder: is there really no better way for organisations to find people than the tired ritual of CVs and cover letters?

I write this, of course, while still mildly irked by that latest “thanks, but no thanks” email above. Copied and pasted, as these emails so often are, in the first person yet left unsigned at the end, it felt like a passive-aggressive ghost of correspondence, glaring at me from my inbox.

As this isn't the first rejection I've received, nor the last I'll receive, I did want to share some of the inadequacies I see in the overall recruiting paradigm we have to wade through in the development sector.

Standing Out in the Crowd

Firstly, and as usual, the email sent gives me no idea which part of my application failed the test. Was it tone? Experience? Am I too old? Too informal? Should I have omitted my perfectly reasonable demand for sixty days of annual leave and a 25% pay rise each year?

Certainly one can ask for feedback, but some rejection emails even come with disclaimers like this other one I received:

“Due to the number of applications received and reviewed, I am not able to give individual feedback at this time, though I do encourage you to consider and apply for one of our consultancy opportunities in future.”

Recruiters tell me the challenge now is volume. Every LinkedIn posting draws hundreds of applicants within hours. Many are AI-generated, indistinguishable from spam. In that flood, it’s little wonder HR teams can’t respond personally. The process has become automated compassion. Efficient, yet entirely devoid of empathy.

And sometimes the role was never really open at all. The ad is window-dressing for an internal appointment already made before your polite rejection hits your inbox. Everyone knows this game.

The Human Touch

When I was recruiting for CARE in London twenty years ago, the process felt clunky, but sincere. Applications came by post, HR filtered the pile, and you spent a weekend reading twenty or thirty of them, scoring each against the criteria. You’d imagine who these people might be, wonder how they’d fit, and inevitably be surprised when you met them.

“He’s nothing like I thought he would be,” we’d gush after the first interview, perhaps a tad disappointed. Then the second candidate would arrive, and we’d instantly revise our judgment of the first.

I want to be careful here about harking back to those times, and about any over-reliance on the in-person interview. There's a strong argument that you'd do just as well to flip a coin as to judge two candidates battling it out in 45-minute interviews. I've seen plenty of people ace their interview but then turn out to be terrible at their jobs – and vice versa.

That said, while my old team's deliberations about candidates could be chaotic and subjective, they were undeniably human. We'd agree, disagree, wind each other up, have a laugh about it all, and then take a punt on someone. It was a messy chemistry of people trying to imagine other people in their world.

While an interview is only one piece of the puzzle, I hope these deliberations still play out in some teams, because it feels as though many now simply outsource that time and imagination to algorithms. The result is extra pressure on candidates to hit the right keywords, and on recruiters to set the right filters. The combination can leave people, at times, feeling unseen.

Technology was meant to create fairer hiring practices. However, I'd argue that some of what used to make the process feel more real – the chance for surprise, for discovery, for seeing someone as more than a list of verbs and achievements – has been lost as a result.

In Trust We Trust

Truly, one of the best litmus tests for success here is trust. Someone who has seen you work, who believes in you, who introduces you to someone else – that route can work really well, and it involves neither an algorithm nor a cover letter. It is also, clearly, not comprehensive enough on its own to work for everyone.

There likely isn't a single silver bullet for the dilemma of how best to fill all the roles, and all the needs, out there. We will still have to advertise on the internet; I just think our systems for doing so have stripped out some of the vital aspects of what used to be there. CVs written by ChatGPT, and rejections written by chatbots – where do we go from here?

Perhaps the future of hiring is a ten-minute audio pitch instead of a cover letter? Have any organisations out there experimented with short paid trials? Or with interviews where candidates ask as many questions as the recruiters do? I'd love to hear from anyone on this.

For me, anything that reintroduces more elements of curiosity, risk and humanity into the exchange would be refreshing.