End of the Programming Era? Yes ... and No

I would like to say that Jensen Huang, CEO of Nvidia, is wrong here, but I don't think he is, and this should send a strong message not only to programmers but to companies as well.


For the past several years, I've been arguing that the age of programming is ending. Indeed, it was one of the central theses of my contributions to The Studio Model in the book Agile 2.0, which I co-wrote with several others a few years back. This became obvious to me when I saw the rise of the Data Scientist in the mid-2010s.

Agile is a programmer-centric methodology, and it made sense (more or less) when it was first introduced nearly a quarter-century ago. However, it began struggling as it tried to encompass people whose primary job didn't involve general-purpose computer languages such as Java, C++, or even JavaScript, because, increasingly, the programming work was being handled either by ever more sophisticated frameworks or by transpilers.

Python wouldn't have seen the explosive growth in adoption that it did had it not established its machine learning and analytics libraries, both of which simplified what had, until then, been very complex operations requiring deep mathematical skill. Today, a person with very limited Python skill can set up a pretty decent large language model, and I think we're almost at the stage where even those people are becoming obsolete as Big Tech automates them out of existence.
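
To give a purely illustrative sense of how low the bar has become: with a library such as Hugging Face's transformers, a handful of Python lines will stand up a working text-generation model. The specific model name below is an assumption for the example, not a recommendation.

    # A minimal sketch of how little Python it now takes to run a language model.
    # Assumes `pip install transformers torch`; the model name is illustrative only.
    from transformers import pipeline

    # Download a small pretrained model and wrap it in a ready-to-use pipeline.
    generator = pipeline("text-generation", model="distilgpt2")

    # Generate a continuation of a prompt; the library hides all the hard parts.
    result = generator("The end of the programming era means", max_new_tokens=40)
    print(result[0]["generated_text"])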


Rise of the Intentionalist

Today, the bulk of my programming is done in pseudocode, where I pass the intent of what I want to a pattern-matching analytics system (aka a GPT), which then casts that into a set of programming instructions to accomplish the task. Is this lazy? Yes, of course it is, but programmers have always been lazy, sometimes working day and night to automate something that could have been written once for a single task, because their mindset has always been that they don't want to code the same thing over and over again.
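
To make that concrete, here is a rough sketch of what that workflow can look like in code - not a transcript of anything I actually ran, and the client usage and model name are illustrative assumptions rather than a prescription.

    # A sketch of intent-driven "programming": hand a GPT the pseudocode, get code back.
    # Assumes `pip install openai` and an OPENAI_API_KEY in the environment;
    # the model name below is illustrative and will change over time.
    from openai import OpenAI

    client = OpenAI()

    intent = """
    Read a CSV file of invoices, group them by customer,
    and print the total owed per customer, largest first.
    """

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption; substitute whatever model you have access to
        messages=[
            {"role": "system", "content": "You are a code generator. Return only Python code."},
            {"role": "user", "content": intent},
        ],
    )

    # The generated program comes back as plain text; review it before you run it.
    print(response.choices[0].message.content)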

This is usually a forlorn hope, by the way, because the languages that they use keep getting shot out from under them, so that even if they write something today, they'll be working with a new manager tomorrow who insists that it be written differently because they use a different language or toolset. This is absurd on the face of it, but it's been the reality for so long that no one really questioned it until now.

So, pseudocode. Now, here's the rub. Good programmers generally write things out in a short-hand pseudocode before they start "coding", so that they can work out the intent of what they want the code to do. Bad programmers just start coding blindly, and not surprisingly produce piss-poor code in the process. What that means in practice is that programmers are increasingly being forced to give up precise control over their coding in favor of capturing intent, and for many programmers who identify themselves by their language of choice, this is a hard pill to swallow. These people are valuable because they know the methods of every object in the Java hierarchy (some 15,000 or so methods all told at last count) and the parameters to those methods, and as such they have been perceived as gods, at least in their own minds.
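
For what it's worth, that pseudocode-first habit looks something like the following (a made-up example of my own; the file layout and column names are assumptions): the intent goes down as comments first, and only then gets filled in.

    # Intent, sketched first as pseudocode:
    #   load the orders file
    #   keep only orders from the last 30 days
    #   total them per region and report the result
    import csv
    from collections import defaultdict
    from datetime import datetime, timedelta

    def recent_totals_by_region(path):
        # Assumes columns named "date" (ISO format), "region", and "amount".
        cutoff = datetime.now() - timedelta(days=30)
        totals = defaultdict(float)
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                if datetime.fromisoformat(row["date"]) >= cutoff:
                    totals[row["region"]] += float(row["amount"])
        return dict(totals)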

Now, I do not believe that AI produces better code today than these experts do. For years, the forebears of these experts continued to ply their trade because most compilers couldn't optimize as well as a person working manually could ... until the day came when those compilers could. Do you know why few people write assembly-level code? Because computers can write it better, faster, and more optimally. We've been here before.

Solutions like Copilot and related code systems are still somewhat suboptimal, but they won't be for long, in part because we are also reaching an age where suboptimal languages are increasingly being used to create more optimal languages, without human intervention.

That means that your skills as a Java programmer, as a C++ developer, as a JavaScript guru, etc., are less important than your ability to write pseudocode - which is to say, code that expresses your intent to a system that then builds the software you intended to create. For a lot of programmers who identify themselves by their language skills, this means that your career is effectively over, unless you can abstract the intent of what you want to write away from the syntax of the language in question.

Now, having said that, this doesn't mean that everyone is suddenly going to get the most out of an AI system. Indeed, I think that the average person, someone who is not used to thinking about problem-solving at a conceptual level, is going to struggle just as much as, if not more than, former programmers, because their solutions will typically be too simplistic.

Is using AI to write code from intent cheating? This should be seen as an absurd question on the face of it. It's all cheating. It took me a while to recognize this. When I was first learning algebra, I learned to write proofs (which are a form of programs) literally one step at a time - I can do this step because of the associative property of addition, and this next step because of the commutativity of multiplication. Simple proofs would often take me pages and pages to write, until I was told that I could, in fact, compile operations together into larger functional blocks. It wasn't that I was slow on the uptake - I'd figured this out early on, but I also thought that it was cheating to do so, until a math teacher told me that this was what all mathematicians did. I had to permit myself to cheat.

The Purge

I expect that Mr. Huang is both right and wrong here. We are entering an era where the ability to intuit solutions to problems is becoming more important than the ability to implement those solutions absolutely. You do need to know, however, what is feasible and even what is possible vs. what is not, and most people, when you get right down to it, don't know that. This holds for programming, for data modeling, and indeed for life in general.

This has been a rough year for programmers in general, in part because there's been a convenient fiction that companies are gearing up for an AI future. A few may be, but you don't decimate your company's technical talent when you're gearing up for an AI future, because those are precisely the problem solvers that you need in an AI world. Programmers are highly adaptable - you don't survive in this field if you can't reinvent yourself every few years. Instead, I lay the real blame on interest rates and perhaps overly greedy shareholders. When money was cheap, you could borrow with little consequence, but now that money is no longer cheap, you either fire the people you hired with big checks and promises you had no intention of keeping, or you reduce the dividends to your shareholders, who may even be sympathetic to those workers but are also expecting gains similar to what they have seen in the past.

Now, here's where the quandary comes in. Those same programmers, the bright ones anyway, are learning this technology faster than your company is, and after more than a year of this, they are also realizing that the only way they can pay the mortgage is to set up business for themselves.

This goes both ways. I think the jury is still out on whether the current round of generative AI is any good or not (or even if it is truly profitable or not, and I'm not sure that it is), and it will take a couple of years before these issues get resolved. However, it's not going away. We are at the worst point this technology will ever be, and the ones that will survive and prosper will be the ones that adapt.

The Uneasy Relationship Between Creative and Suit

The companies that survive will also be the ones that adapt, but there's a danger here for them: this technology is caustic to large-scale institutions. It will dis-integrate these institutions and will reduce them to a sea of component service providers and AI bots. That, in turn, will profoundly impact the economy and our expectations of what corporations can (and should) do.

I think we may have dodged a whole bunch of bullets last year. A bid was made on the part of a single company to try to be the ultimate source for AI. That lasted up until about Thanksgiving 2023, when OpenAI wobbled, badly, and a lot of people opened their eyes to the danger of giving any single company, no matter how seemingly forward-thinking, that much power.

What's happening now is that many companies have realized that they need an AI strategy, perhaps one depending upon a specific platform, perhaps something more home-grown. Both strategies have their pluses and minuses, and I suspect that we'll end up somewhere in the middle, with companies building out their own AI offerings while at the same time incorporating pieces from multiple platforms. This is an ecosystem approach, and it recognizes that AI has to be viewed as pragmatic and multi-faceted.

I think that we are also at the end of the Big-IPO era, at least for a while, and perhaps even the IT-as-product era. We have reached a stage where you can't go more than a few minutes without encountering yet another app that solves a fairly limited use case but is trying to grow into the next Google, and programming has become ubiquitous enough that companies will struggle to find problems to solve that haven't already been solved in different ways.

In the twentieth century, the suit - usually but not always a sales manager - dominated. Salespeople were the ones who won the contracts, whether or not they had anything to do with fulfilling them, and because they were the ones whose financial metrics could most readily be measured, they were often rewarded disproportionately to their actual impact on products or services.

Indeed, it's telling that until the early 2000s, almost anyone in senior management was likely to come from sales, marketing or finance. Roles such as CTO, COO, CDO, etc., only began emerging after the turn of the Millennium, and even then, most people in management tended to look upon these roles as unnecessary to the central functioning of the company (and there is more than a little resentment that has built up among traditional senior management at this erosion of power).

The reason that these positions were created, however, is that organizations themselves are changing radically. Marketing was, once upon a time, largely a creative function that was usually outsourced to ad agencies. Today, marketing is a data center, and your typical marketing officer is likely much more conversant with AI-driven BI tools (and even R or Python) than with creating ad campaigns. Social media has become its own center, essentially managing the brand and the public relations around that brand. The Chief Product Officer is often the strategic brains of the organization, and older CEOs are a little bewildered about how such a position came to exist without being vested in the executive role itself.

Indeed, one of the most interesting things to emerge in the last few years is the rise of Provisional CEOs, as companies began to realize that this "essential" role wasn't necessarily all that essential. Yes, someone has to lead (I believe this strongly), but leadership and salesmanship are not always, or even usually, the same thing.

The suits, in other words, are diminishing in importance, as direct sales give way to indirect sales - subscriptions, mainly. The technicians and creatives are becoming more dominant, especially as business becomes more narrative in structure. My assumption is that by 2050, the dominant leader within most organizations will be its storyteller. This, however, is something I want to cover later this week in The Cagle Report.

In media res,

Kurt Cagle

Editor, The Cagle Report

Type type ... Gandalf .... type type ... Harry ... Dresden, hah! Type .... type ....


Jim Fuller

Principal Software Engineer at Red Hat, Product Security PSIRT


There will still be programming done by programmers (and machines) and 'something else' done by a whole lot more people, which they may call 'programming' or perhaps define as 'no coding' ... defining something as the negative of something else is fraught - in a logic sense invalid, e.g. we could also call 'no coding' something silly like 'no chicken' and it would still be valid. You make some interesting points in that article, but I would gently criticise that you are trying to connect far too many 'dots'. Just chant to yourself 'we are in a bubble, we are in a bubble ....' some things will stick for the better, there will be lots of churn ... remember the same limitations affect computers as they do programmers, e.g. the 10x programmer can make 10x more bugs in a shorter time... personally I worry about the bugs AI is capable of creating that humankind has no chance of understanding (we will need a new name for that ...). Relax, it's all fun until it's not ;)

Robert Vane

Creator of Federated Subject Areas (FSA) and the G-TEA Domain Architect platform | Enterprise Data Architect | Pioneer of Model Executable Business Systems


We must be careful not to conflate programming with business systems construction. Yes, AI can create code, but business systems are defined by humans.

Mark Spivey

Helping us all "Figure It Out" (Explore, Describe, Explain), many Differentiations + Integrations at any time .


Definitely, it simply gets more high-level and abstracted, same old stuff as always.

Howard Wiener, MSIA, CERM

Author | Educator | Principal Consultant | Enterprise Architect | Program/Project Manager | Business Architect


It's pretty clear that WHAT is way more important than HOW. Methodologists and product companies would like this not to be the case, but it is. Do we need the Agile Industrial Complex when the productivity of software creation will increase exponentially? Nope, he or she who can see the path to solving a problem will become (and probably already is) way more important than they who grind out the mechanisms of the solution. Given what I've seen, there are actually companies out there that have totally devalued the product manager's job in favor of technical team management. This is like driving a car without a steering wheel. Nonetheless, it makes them feel comfortable because they can evade the challenges of real product discovery and achieving product-market fit. When what they try doesn't work, well, at least they employed the latest Agile fad. So, obviously, failure was just unavoidable bad luck.

Christian J. Ward

Chief Data Officer, EVP @Yext


My advice is, "Learn to code like you should learn Latin. Not because you will always use it, but because it teaches you how to think." I have a couple of kids heading to college, one this year and the other soon thereafter. We debate this all the time: what to study if AI is going to disrupt nearly every industry? There is harmony in learning a language like Latin and coding. There are logical constructs and symbolic representations that help your mind in building (a narrative or an application). Jensen Huang is right. Coding skills won't be as necessary as they were in the past, but thinking like a coder will always be necessary.
