Where I'm at with AI

We’re in new paradigm territory with generative AI. A lot of commentary falls into the skeptic, evangelist, or doomsayer categories, or offers practical but narrow takes. In this article, I’ll discuss my current use of generative AI tools and outline areas that concern me - areas that raise questions about how our industry and others will evolve. I am certain that generative AI is a productivity amplifier, but its economic, environmental, and cultural externalities are not being discussed enough.

A quick note on terminology: in this article, I’ll use the term “generative AI” to refer to the current wave of large language models (LLMs) like Claude and ChatGPT, and the tools built on them. I want to be clear that I’m talking about LLMs and not other areas of AI such as autonomous vehicles or medical diagnostic algorithms.

We’re Moving Quickly

If you asked me six months ago what I thought of generative AI, I would have said that we’re seeing a lot of interesting movement, but the jury is out on whether it will be useful in my day-to-day work. It’s remarkable how quickly my position has changed - fast-forward just a few months and I am using Claude daily at work and at home. I use it for routine coding tasks like generating scaffolding or writing tests, and for ideation on new projects. I treat Claude like another software engineer and give it specific instructions. I spend a lot of time reading the code it generates and making corrections before submitting a PR for my coworkers to review. I often have a generative AI tool do a pass on the PR before I ask a human to have a look, which has saved me many iterations. Coworkers of mine have used generative AI tools to build some truly mind-blowing things in a short period of time. I’ve become a convert.

Coding with generative AI absolutely increases velocity. Setting aside the concerns I outline in this article, the role of software engineers has already changed. I am probably most closely aligned with the view Marc Brooker puts forward - that most software will be built very quickly, and that more complicated software should be developed by writing a specification and then generating the code from it. We may still need to drop down to a programming language from time to time, but I believe that almost all development will be done with generative AI tools.

On the surface, my position here shouldn’t be surprising or controversial. I’ve long held the belief that our job as software engineers is not to write code, but rather to solve problems. If code is the most efficient way to solve a problem, then great. Generative AI makes code much cheaper to generate. That comes with some huge wins, and some very real concerns that I’ll outline here. My purpose is not to express skepticism or cast doubt, but rather to shine a light on questions that, to my knowledge, are still open.

Ironies of Automation

Lisanne Bainbridge’s 1983 paper “Ironies of Automation” posits that automation can relegate humans to exception-handling tasks (think of this whenever you hear someone say that it’s important to always “have a human in the loop”), and that when humans are relegated to such tasks, they become less effective than if they had a more active role.

The best example of this that I can think of relates to roadway design and safety. “Stroads” (hybrid thoroughfares that combine qualities of high-traffic streets and roads), for example, are dangerous because they encourage driving at high speeds and reduce the amount of friction encountered on a route. This encourages inattentive driving, which leads to more crashes and fatalities. Redesigning stroads to reintroduce obstacles and friction results in fewer crashes. One of the more striking examples is the redesign of La Jolla Boulevard in San Diego, where crashes were reduced by 90% after going from 5 lanes and 70-foot pedestrian crossings to 2 lanes and 12-to-14-foot crossings with islands. Traffic volume stayed the same, and crashes plummeted. This video documents a similar phenomenon, contrasting urban design in Toronto with cities in the Netherlands.

I’m certainly not the first to draw similarities between Bainbridge’s paper and the current use of generative AI tools. Mica R. Endsley, former Chief Scientist of the U.S. Air Force, published a paper called “Ironies of artificial intelligence” in 2023 which directly builds on Bainbridge in this context.

The principle applies beyond roads. In software and operations, we have long accepted that a certain amount of friction is necessary or beneficial. Very few companies would release software without doing some kind of security evaluation, and software teams frequently debate how many gates need to be included to operate safely. Anyone who has found themselves in a vibe coding loop with their role reduced to periodically saying “yes” or “no” to a coding tool knows how this principle could easily apply to generative AI and coding.

Certain kinds of friction, such as code review, also have secondary benefits – they are a tool for vicarious learning and for reducing the bus factor for parts of a system. By reviewing each other’s code, we become better able to safely modify and operate that code. Just as drivers lose attentiveness on over-automated roads, software teams risk losing deep system understanding if they offload too much judgment to AI.

Open Source is Behind

People sometimes compare the current wave of generative AI coding tools with other shifts in how we build software. It is true that in my career, I’ve seen us move further and further away from bare metal thanks to the introduction of new tools: higher-level languages, frameworks, and technologies that allow us to develop at a higher level of abstraction, such as virtualization, containers, and orchestrators. These comparisons are fair, in my opinion.

Most of the previous waves were dominated by open source technology. Docker, Kubernetes, Linux, Xen, GCC, Ruby, Python, Rails, NumPy, jQuery, React, and countless other technologies have made software engineers more productive. They were also, crucially, open source and available to anyone with an internet connection. I am deeply concerned that the current wave of generative AI is highly dependent on a small set of vendors (OpenAI, Anthropic, Google, etc.). I do not know what implications that will have, but I can say that before the great mainstreaming of open source in the late nineties and early aughts, we were far worse off as an industry – accessibility was a real concern (it was harder for newcomers to the field to get started) and innovation was nowhere near as plentiful.

Vendor dependency concentrates control over core development infrastructure. That centralization risks slower innovation, higher pricing, and reduced accessibility - the opposite of what open source has historically delivered.

The reliance on vendors brings me to my next concern.

We Aren’t Paying the Real Cost

The current landscape is a battle between loss leaders. OpenAI is burning through billions of dollars per year and is expected to hit tens of billions in losses per year soon. Your $20-per-month subscription to ChatGPT is nowhere near keeping them afloat. Anthropic’s figures are more moderate, but it is still lighting money on fire in order to compete and to gain or protect market share.

We’ve seen loss-leader strategies before, most notably with Uber, which I’ll return to later. The danger in these strategies is that dependencies are created while the product is cheap, meaning alternatives – even open source tools – can’t compete. This has the potential to create lock-in, where companies integrate these tools into their products, developers become dependent on them in their workflows, and even students learn using them.

What will this mean for generative AI? I have no idea - obviously there is a chance that some breakthrough will make LLMs far cheaper to build and operate, or companies will have to start forking out much higher prices to use these tools, which could make our industry even harder to break into for people who can’t afford the premium. For now, the question isn’t whether this is sustainable - the companies themselves admit it isn’t. The question is what happens when the subsidy phase ends.

Environmental Impact

Even if the market’s appetite for losing billions of dollars annually continues, the use of generative AI is not at all free. LLMs require enormous compute. That compute generates heat. Cooling that heat consumes water - massive amounts of it. A recently published study in Nature Sustainability concluded that AI could have a footprint of 731-1,125 million m³ of water and 24-44 Mt of CO2-equivalent emissions annually from 2024 to 2030. That’s equivalent to 200-500 bottles of water per person on Earth annually. A similar study published in Patterns concludes that AI systems could produce the same amount of CO2 as the entire city of New York in 2025.
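
To put those figures in rough perspective, here is a back-of-the-envelope conversion of the study’s water range into per-person bottle counts. This is only a sketch: the world population and bottle sizes below are my own illustrative assumptions, not values from the study, and the study’s own per-person framing may use different ones.

```python
# Back-of-the-envelope check of the water figures above.
# ASSUMPTIONS (mine, not the study's): ~8 billion people, 0.33-0.5 L bottles.

LITERS_PER_CUBIC_METER = 1_000
WORLD_POPULATION = 8e9                   # assumed, roughly
BOTTLE_SIZES_L = (0.5, 0.33)             # assumed bottle sizes, in liters
WATER_FOOTPRINT_M3 = (731e6, 1_125e6)    # the study's projected annual range, in m^3

for m3 in WATER_FOOTPRINT_M3:
    liters_per_person = m3 * LITERS_PER_CUBIC_METER / WORLD_POPULATION
    for bottle in BOTTLE_SIZES_L:
        print(f"{m3 / 1e6:,.0f}M m^3 -> {liters_per_person:.0f} L/person "
              f"~ {liters_per_person / bottle:.0f} bottles of {bottle} L")
```

Depending on the assumed bottle size, that works out to very roughly 180-430 bottles per person per year, in the same ballpark as the 200-500 figure cited above.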

I agree that generative AI is exciting, but the speed with which a lot of industry leaders went from being apparently concerned about the environment to being happy to emit a New York City’s worth of CO2 in a year is dizzying.

Marketing is Off - Not Really AI

This might be comparatively trivial, but it’s always bothered me how loose we are with the term “AI”. Artificial Intelligence has the humorous distinction of being a field that is named for what it hopes to achieve, not what it actually does. We have not yet created anything that can be called intelligent; instead, we have created impressive technology that looks like thinking without actually being it.

Noam Chomsky, Ian Roberts, and Jeffrey Watumull made this point sharply in their New York Times piece “The False Promise of ChatGPT,” characterizing these systems as statistical pattern-matchers rather than thinking machines. They note these tools are useful for tasks like computer programming, but we shouldn’t mistake that utility for understanding.

This framing matters because it affects how we talk about these tools and what we can expect from them. Calling them “AI” rather than “LLMs” or “generative systems” inflates expectations and enables hype bubbles. Setting realistic expectations helps us figure out where these tools actually fit into our workflows and what their genuine limitations are.

Beyond terminology, there’s a more fundamental question about who benefits from this shift.

Where’s the Wealth?

Generative AI will certainly be economically disruptive. Jobs will change and, in some cases, certain types of work may be eliminated. Many similar technological shifts have resulted in increased demand for certain kinds of work – meaning that, overall, economic opportunity grows, but the disruption is still there.

I’ve heard more optimistic people suggest that generative AI may lead to more leisure time for more people, without negatively impacting them economically. I disagree with this. You don’t have to look very far into the history of technological shifts to see that increases in worker productivity at best increase demand for labor, and at worst result in massive disruption - they never result in the same pay for less manual work.

Because these technologies are vendor specific, I actually predict that generative AI will make a relatively small number of people massively wealthy, and a large number of people moderately wealthier, but I fear that an even larger number of people could be left out in the cold as certain types of work become possible without human labor.

When a technological shift is centralized to one or two vendors, there is a real possibility of a massive wealth transfer. Uber famously benefited from massive investor subsidies while following a loss-leader strategy. This enabled them to drive out competition, both from established players such as taxis and from public alternatives such as transit projects that would have belonged to the public. Once competitors were eliminated or reduced in size and Uber’s user base was established, prices were raised and consumers were left with few alternatives. During the subsidy phase of Uber’s growth, passengers paid only 41% of costs, leading to a loss of $20 billion from 2015 to 2019. Once the subsidy phase ended, fares increased by 65% and the rate Uber took from drivers increased as well.

The end result was that competitors were eliminated, cities became dependent on ride sharing, and users were locked into higher prices. I fear we could end up in a similar situation with generative AI: companies gain productivity from these tools and become dependent on them, prices increase once the land grab is complete, and workers are left either paying higher premiums for AI tools or seeing their compensation reduced because of the productivity gains those tools provide.

Where AI Doesn’t Belong

I’ve focused so far on technical and economic concerns, but there’s something that troubles me more fundamentally.

Generative AI is being used for a variety of non-technical tasks such as writing and creating art, including visual art and music. This disturbs me greatly. Art does not have a single purpose, and debating the purpose of art is another subject entirely, but for me, art is a way to communicate a variety of ideas and emotions across time and geography. Experiencing art as a viewer or listener provides a connection with the artist in a very real and human way. Replacing the artist with a computer, in my strong opinion, strips away something essential. Even if the consumer can’t tell that the art was created by generative AI, we’ve lost something real and unquantifiable in the exchange and I fear for what that means for our culture and civilization.

So, What Now?

The productivity gains experienced by using generative AI tools are not small, but neither are these concerns. As a professional, I am obligated to use the most effective tools that help me do my job well. I am also obligated to consider the trade-offs between doing something fast and doing something safely. Generative AI puts a thumb on this scale in a very real way, but it does not completely eliminate the need for some friction. This will be something that we in the software industry figure out over the next few years. I predict a bit of a roller coaster.

As a human, I am concerned about my impact on the environment and our collective experiences on this planet. As I’ve mentioned, there’s no putting the genie back in the bottle, so it’s imperative that we continue to focus research on making LLMs more environmentally sustainable, and I implore those subsidizing this unrestrained growth to take a very real look at the economic realities staring us all in the face. I do not know what the future holds, and I would never venture to guess what impact this will all have on the environment, our collective sense of purpose, and our careers, but it is up to us all to try and make it as positive as possible.

The challenge isn’t choosing “AI or not AI” - that ship has sailed. It’s navigating the shift thoughtfully and considering the trade-offs of how and when we use it in our day-to-day lives.