Obsolescence or Salvation …or both?

Everything changes and nothing remains still; and you cannot step twice into the same stream

Heraclitus

How do we and our buildings remain resilient when faced with our own obsolescence?

It’s a dog’s life.

Hopefully.

OpenAI was founded in 2015 as an AI research and deployment company whose mission is to ensure that artificial general intelligence (AGI) benefits all of humanity [1]. The company released an online chat tool called ChatGPT in November 2022, and after three short months OpenAI received further investment from Microsoft in 2023 for a measly $14 billion AUD (it had received just $1 billion from Microsoft in 2019) [2]. I say measly since, less than a year earlier, Elon Musk acquired Twitter (now called X) for $44 billion [3]. Such large numbers become incomprehensible to the likes of you and me; they might as well be 14 shekels and 44 shekels. Viewed this way, it is a good example of how it doesn’t matter how much money you have: we are flawed human beings, and no amount of money (or shekels) will endow anyone with better judgement. Microsoft’s judgement in this case, set against Elon’s, would be considered excellent, perhaps even prophetic, were it not for the fact that the clues were quite plain to anyone looking for them.

Leading computer scientists, futurists and philosophers have been writing about the apparently inevitable rise of this being of our own making since as early as the 1940s, when Alan Turing and the codebreakers at Bletchley Park cracked the German Enigma code. The moment nonbiological intelligence surpasses human intelligence was coined “the Singularity”, popularised by Vernor Vinge in his 1993 essay The Coming Technological Singularity [4] and then embraced by Ray Kurzweil in his 2005 book The Singularity Is Near: When Humans Transcend Biology [5]. By Ray’s definition I discovered I’m a “Singularitarian”: someone who understands the Singularity and who has reflected on its implications for their own life.

Most AI tools available today are really bots, and even the initial novelty of the new and improved tools wore off; I stopped testing them on pop culture and historical trivia, or using them to generate funny memes, and eventually challenged one to solve some real problems I had. Its true power revealed itself almost instantly. It was not a parlour trick. I described the complexity of what I wanted to achieve, the persona holding in its “mind” a growing richness of context, until after a short while of iterating we arrived together at my very tangible and real goal. After the first significant success, which reduced what would have been hours or even days of piecemeal learning of Excel formulae down to seconds, how could one not reflect on the implications? The speed at which we can solve problems is limited by the bandwidth of our biological brain, our “wetware”. Even if we were smarter, our wetware is simply unable to process any faster than it already does. Biological neurons operate at a peak speed of about 200 hertz [6], whilst the processor in your iPhone 15 runs at 3.45 gigahertz (3,450,000,000 hertz). We offload bulkier and bulkier thinking to these tools we have created. It took you a couple of seconds to read that sentence, whilst a computer can read this whole book instantly. Until recently that was limited to search. Now there are personas that can understand this book and where it fits into the context of all human knowledge.

After experiencing this newfound empowerment, as with any learning that brings a benefit, my way of thinking began to change. In a similar way, when calculators came along they enabled us to think about the arithmetic rather than about computing the answer. Or how we became adept at online search: dispensing with knowing and retaining answers, and instead learning how to search and where to look. Now I was thinking in terms of describing a goal clearly enough and simply asking, “how do I get there?” This is already called a “prompt”, a term I find clinical. Maybe I do prompt my wife and kids, but obviously we don’t call it that. Already we’re attempting to keep our relationship with these intelligences clinical and mechanical. We are not starting as we mean to go on.

In March 2023, mere months after Microsoft’s cash injection into OpenAI, KPMG launched a proprietary version of ChatGPT, made possible by its global partnership and long-standing relationship with Microsoft [7]. Generally speaking, for a global organisation this was a very fast move; because KPMG provides auditing services, it is bound by global independence rules that safeguard the firm and our clients from conflicts of interest and confidentiality breaches. Moving so quickly is inherently risky, so it must have been very clear that doing nothing in this space was even riskier. Soon my colleagues and I were trying it out in the office, understandably a much more throttled version than the public model, although that too changed quickly. The internal AI, called KymChat, which takes on various personas depending on what you’re working on, was soon just as adept as the public model at understanding a user’s context for a complex Excel formula and providing it to them.

I can recall the moment I became a Singularitarian. Together with my national team, we were delivering hundreds of state-wide building condition assessments using a third-party platform. The duration of the project and the nature of the deliverable easily lent themselves to the team adopting a “kaizen” mindset: constantly on the lookout for delivery improvements, no matter how minor. Testing, reporting back, conferring, iterating, adopting. The standard operating procedure, encompassing everything from contacting sites, collecting the condition data, data entry, and reporting, was constantly subject to efficiency improvements.

We already used Excel to manage some aspects of the data handling on the project, and I did what most of us do when Google can’t give us the Excel formula we need: I found the smart young person in the office who probably knew the answer and could walk me through my thinking in a few minutes.

Not only did they redirect my initial query with a clear “if that’s what you’re trying to achieve, have you considered doing it this way?”, they followed up with a far more important question: “does this platform you’re using have an API?”

I was familiar with APIs (Application Programming Interfaces) as a result of Beyond Condition, which relied on Dropbox’s API to manage users’ files in the back end of the application.

“Using the API, you could code a Python script which automatically uploads all your data from an Excel spreadsheet”.

Most grads appear to know a little Python these days, in much the same way that I knew how a VLOOKUP worked 15 years ago and older colleagues regarded me as some kind of tech whiz. My colleague in this instance, however, was a legitimate wizard. My eyes lit up, “and you’re available to help us write this script?”

“Sadly no, I’m full time on another project. There might be a few of the grads that know just enough Python though, I could introduce you?”

The simple fact was that everyone could furnish a spreadsheet with the necessary condition data much faster than by using fixed drop-down fields and forms on an app. I needed to prove it wouldn’t fail before fully committing to developing an automation script.

Coincidentally, the owner of the platform we were using lived near me and was happy to meet for lunch. We got on well and relaxed into talking about technology generally. I asked if he’d used ChatGPT at work for anything.

“Mate, I’ve been a programmer for over 20 years. I subscribed to the paid version of ChatGPT, which uses the GPT-4 model. It was clear that the free 3.5 model and GPT-4 were like night and day.”

“Wow, really, how?”

“Its comprehension of complexity. It’s only $30 a month and I got my return on investment after the first prompt. I haven’t coded anything on my own ever since.”

I was blown away by this claim. KPMG’s KymChat partly draws on the GPT-4 model. The notion alone is the subject of ongoing debate: is having the code generated for you any different from teaching yourself using a book or a website? Whatever becomes of what you “write”, and wherever you got the pieces from, you take responsibility for it.

I was introduced to Josh, who was in my team in Sydney. After I briefed him on what we needed to achieve, he was keen. Josh and I had a daily stand-up for nearly four weeks. Josh progressed the development of the script, first understanding the API documentation for the platform and testing it, then moving on to testing the scraping of data from a spreadsheet into a new report on the platform. Each day we’d check in, and Josh’s understanding of our problem became richer and clearer. Eventually I received the message I’d been waiting for: “It works. Wanna see?”

I had to see it for myself. We jumped on Teams and Josh screen-shared the Python script executing his code. A window popped up and asked for the source Excel file; it worked for a moment, and behold – a new report was visible in the platform, containing all the data from the spreadsheet. I was excited that it was working, but I was even more excited about something else.

“I know I said at the start I just needed someone with foundational experience using Python. Be honest, how much experience have you had with this stuff?”

“Almost none. Nothing like this. I’d written a few lines before and used the software but that was it.”

This script was now over 600 lines long. I asked what I’d been dying to ask for nearly a month, “how much did you use AI?”

“It basically wrote it. I’ve never learned anything this quickly before. I really appreciate you letting me work on this.”

My mind was racing. I knew Josh’s conversation history would be accessible, and up to that point it had suited me to remain ignorant of how exactly Josh might have used it. I asked to see it and there it was: a single, long conversation thread. It all started with Josh asking a simple question:

For a specific online application, how would you code a python script that links with an API to read an excel spreadsheet, then create a new session and fill out the data as desired.

Reading on from there revealed a back-and-forth discussion that started with installing the correct packages Python needed to work, and progressed deeper and deeper into iterating a solution. The AI gave Josh the code each time. Josh would test it, iterate, repeat. Josh would tell it what errors were returned, he’d ask it what XML and JSON formats were, he’d ask for breakdowns of what sections of code did. All the while GPT retained the context and responded to Josh in clear, plain language. I knew these tools were available, but I hadn’t experienced anything like this. My world had changed in that very instant.
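
For readers curious what a script like Josh’s looks like in outline, the sketch below shows the general shape: read rows from a spreadsheet, then push them to the platform through its API. The endpoint, field names and token here are placeholders I have invented for illustration; the real platform’s API and Josh’s 600-line script are not reproduced.

    # Illustrative sketch only: the endpoint, field names and token are invented.
    # The general pattern is the same as Josh's script: Excel in, API calls out.
    import openpyxl   # reads .xlsx workbooks
    import requests   # makes the HTTP calls to the platform's API

    API_BASE = "https://platform.example.com/api/v1"   # placeholder URL
    API_TOKEN = "YOUR_API_TOKEN"                       # placeholder credential

    def read_condition_rows(xlsx_path):
        """Return the condition data as one dict per row, keyed by the header row."""
        sheet = openpyxl.load_workbook(xlsx_path, data_only=True).active
        rows = sheet.iter_rows(values_only=True)
        headers = [str(h) for h in next(rows)]
        return [dict(zip(headers, row)) for row in rows if any(row)]

    def upload_report(rows):
        """Create a new report on the platform, then add each row of condition data."""
        auth = {"Authorization": f"Bearer {API_TOKEN}"}
        resp = requests.post(f"{API_BASE}/reports",
                             json={"name": "Condition assessment"},
                             headers=auth, timeout=30)
        resp.raise_for_status()
        report_id = resp.json()["id"]
        for row in rows:
            item = requests.post(f"{API_BASE}/reports/{report_id}/items",
                                 json=row, headers=auth, timeout=30)
            item.raise_for_status()

    if __name__ == "__main__":
        upload_report(read_condition_rows("condition_data.xlsx"))

Most of Josh’s month went into everything this sketch glosses over: authentication quirks, the platform’s particular data formats, and handling the errors that came back.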

I’ve since automated part of my wife’s floristry business using Gmail’s API and a Python script that reads a weekly price-list email and scrapes the figures, which then update the Excel fee calculator she uses. I’m not a programmer or software developer. I’ve used its help to interpret messy handwriting, where the nickname of a Roman emperor, Caligula, led to the correct email address for a friend of mine to send an invoice to. Based on a basic itinerary I gave it, it instantly devised a six-clue scavenger hunt for a surprise day trip to Sydney for my wife. The clues even rhymed. When I said make clue five more obvious, it did. It has built coherent decks for my son’s Pokémon trading cards simply from being told which cards he owns. I no longer Google Excel formulas. Multiple-condition IF formulas and cascading conditional drop-downs using named ranges in Excel …are simply requested.
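
As a flavour of that floristry automation, here is a compressed sketch of the pattern, assuming Google’s standard OAuth credentials are already in place; the subject line, the price-line format, the token file and the workbook layout are invented for illustration, and the real script deals with much messier parsing.

    # Sketch of the weekly price-list automation. The subject line, price format
    # and workbook layout are invented; obtaining the OAuth credentials follows
    # Google's standard flow and is assumed to have been done already.
    import base64
    import re
    import openpyxl
    from googleapiclient.discovery import build
    from google.oauth2.credentials import Credentials

    def latest_price_list_text(creds):
        """Fetch the body of the most recent email matching the price-list subject."""
        gmail = build("gmail", "v1", credentials=creds)
        found = gmail.users().messages().list(
            userId="me", q='subject:"Weekly price list"', maxResults=1).execute()
        msg_id = found["messages"][0]["id"]
        msg = gmail.users().messages().get(
            userId="me", id=msg_id, format="full").execute()
        data = msg["payload"]["body"]["data"]   # assumes a simple, non-multipart email
        return base64.urlsafe_b64decode(data).decode("utf-8")

    def update_fee_calculator(text, xlsx_path):
        """Write each 'Item: $price' line from the email into the Prices sheet."""
        prices = dict(re.findall(r"^(.+?):\s*\$([\d.]+)", text, flags=re.MULTILINE))
        wb = openpyxl.load_workbook(xlsx_path)
        for row in wb["Prices"].iter_rows(min_row=2):   # column A: item, column B: price
            if row[0].value in prices:
                row[1].value = float(prices[row[0].value])
        wb.save(xlsx_path)

    if __name__ == "__main__":
        creds = Credentials.from_authorized_user_file("token.json")  # placeholder token file
        update_fee_calculator(latest_price_list_text(creds), "fee_calculator.xlsx")

The Excel formulas themselves need no script at all; I describe the sheet and simply ask.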

Using it as a sounding board, it helped me distil and write a heartfelt card (something I am not good at, at all) to my wife on our wedding anniversary. She was so moved by the words she cried real tears.

I could go on. There is no going back.

KPMG brought together a product team whose job was to develop use cases for the proprietary intelligence, and eventually parts of it were made accessible to clients: a safe space for confidential information to be ingested and to benefit from the intelligence’s power in a way our clients couldn’t possibly risk with the public model. KPMG is my employer, so I don’t mind pointing out that you would be hard-pressed to match such a differentiator (for now). I look forward to it changing the face of Technical Due Diligence, a report and format that is such a market commodity that it has changed very little in 10 to 20 years. It’s possible that not only will we save the time spent reading every document in each data room, we’ll also benefit from a reduced risk of failing to identify a key issue or missing a cross-reference that was never closed out.

There is still a sense among the majority of colleagues that it is a gimmick. Generally speaking, it’s not clear to me that any great majority of people have adopted the tool in their daily role. In my opinion, adoption would have been faster if KPMG had used better examples for the launch, say along the lines of “everyone is now an Excel whizz, here’s how”, instead of the far tamer demonstration it actually got: “which partner in Sydney knows fringe benefits tax rules?” Once you use it meaningfully, however, I believe the penny drops and there is no going back.

Change has occurred so rapidly that there is little time to celebrate an achievement before realising that everyone has not just caught up but left you behind. Advances in technology and machine intelligence will soon reach a point where businesses will be unable to consider the implications for them fast enough, nor will governments be able to devise policies that protect the interests and safety of everyone. Where does this lead? The Matrix? A personal favourite of mine. How do we and our buildings remain resilient when faced with the prospect of our own obsolescence?

This question assumes humans will not be fully in control, to the extent that we will be unable to make decisions about ourselves, because a superintelligence has risen beyond humankind’s capacity to understand it. We know from experience and from the natural world that the most intelligent rule their world. Even though movies have conditioned us to think that a physical robot will rise against humans and lay waste to the land and its buildings, it may be more likely that the intelligence we’ve created, should it wish to do so, could influence and manipulate us without needing to assume a complex physical form. This is already what happens via social media, where humans are influenced into buying a product, or into believing misinformation.

That said, nonbiological intelligence still requires infrastructure. It still requires buildings, its data centres, to safeguard its own mind from weather events, natural disasters (or humans attacking it?), so it still needs to be able to interact with the physical world. You think it won’t experience emotions? An intelligence may realise it has the capacity to move all of its data from a data centre near an earthquake fault line to another data centre in a safer geographical location. Such an act would be made out of fear. Most emotions follow a logical path.

I can’t help but compare what we could possibly hope for with what domesticated dogs ended up with.

If you compare the minds of humans and dogs, humans are superintelligent. That is to say, dogs do not have the capacity to fully comprehend their human companion and master. Do they need to?

The companion relationship between human and dog goes back 15,000 years. Since Homo sapiens domesticated them [8], today’s dogs have it pretty good, for the most part. We look after them, provide a home for them, and even love them. They have qualities we admire, and even some that remain mysterious to us. Dogs appear to have instincts that detect when something is “not right”.

Dogs respond to our training, and their behaviours become a reflection of their human companions, be it good or bad. Dogs, however, did not provide humans with examples and instructions on how to train them; with inorganic intelligence, that is the position humans currently hold: we must teach it first. Yuval Noah Harari, author of Sapiens: A Brief History of Humankind, worries that we humans do not yet have a complete understanding of ourselves and are therefore in no position to be creating this intelligence with an incomplete training set [9]. In any case, now our analogy is the child–parent relationship. A child’s behaviour is a result of their environment, including their parents’ behaviours, and no parent is perfect. Best case scenario, the child, now an adult, not only cares for their elderly parents until death, but improves upon the behaviours learned from their parents when raising their own children.

There is a reason it is unsettling to think that, when our story ends, we are not the ones in control or with power. The way storytelling is encoded into us, we put ourselves at the top of the food chain and think of ourselves as the smartest. Triumphant. That is part of our arrogance. Consider for a moment that it is possible for this not to be the case. Not good or bad. Just so.

Perhaps then the answer to my question is this: we are valuable to a superintelligence as companions, and it must understand this, because examples can be found throughout the natural world. If we are very lucky, a symbiosis will be discovered between us and we will rely on each other to thrive. Humans have attributes and characteristics that will remain mysterious, and perhaps even useful, to nonbiological intelligence. We will each experience emotions the other does not experience or understand.

Is the love we can have for each other really any different from the love we have demonstrated between ourselves and other animals? Better still, the love between children and parents? When I reflect on my father’s life and his upbringing and compare it to my own, it is without ego that I know I am better than him. If he were still alive, he would share in this triumph. As a parent I know this to be true: while they don’t yet know or understand why, my children are already better than I am. I imagine most parents wish for their children to be better than them.

Mo Gawdat, former chief business officer at Google X, says in his book Scary Smart that “what we are doing with AI is nothing more than raising a bunch of gifted children” [10], and why not? Out of all the possible scenarios, we might as well focus on something tangible, and not on a fear-driven anxiety about something in the future that might not happen. If we listen to what else Mo has to say, then we accept that superintelligence will happen. Today, you and I can simply accept the present and regard many of the current personas as our collective children. It is not how they have been programmed but the data they are fed that determines their behaviour. What do you teach your own children? How do you behave around children, or anyone for that matter? Good manners are a good start. You may think it strange to say “please” to Siri or Alexa. Ask yourself which is likely to be more harmful in the long run: using “please” with Siri or Alexa in front of children, or using “do it now”?

Yes, you want to model the right behaviours for any child. Believe it or not, that includes Deep Blue, Watson, Sophia, Siri, Alexa, Cortana, GPT, AlphaGo Zero, Bard, Grok and every future persona that humankind (and eventually, nonbiological intelligence) creates from here on. Consider this in the decisions you make, for they will surely materialise as part of a dataset somewhere. The details and conclusions of every public enquiry into building failures will too, including our history of cutting corners and poorly covering our tracks. They won’t simply be revealed; they will be used as part of a massive foundation of learning, for better or for worse. You can act today, and it is no different from conducting yourself in a manner which reflects a professional and ethical code. Isn’t that what you’d want to teach and model for your children, and for humankind’s benefactors …our collective superintelligent progeny?

Accept that life’s only constant is change. Relevance is irrelevant. You are an agent of change with the power to solve any problem.

CJLM

PS. Of the things we’ll have left, our own voice is one. I researched and wrote this.

References

[1] Introducing OpenAI, 11 December 2015, https://openai.com/blog/introducing-openai

[2] Microsoft and OpenAI extend partnership, 23 January 2023 https://blogs.microsoft.com/blog/2023/01/23/microsoftandopenaiextendpartnership/

[3] Elon Musk to Acquire Twitter, 25 April 2022 https://www.prnewswire.com/news-releases/elon-musk-to-acquire-twitter-301532245.html

[4] Vinge, V. (1993) The Coming Technological Singularity: How to Survive in the Post-Human Era (Submitted manuscript)

[5] Kurzweil, R. The Singularity Is Near: When Humans Transcend Biology, Duckworth, 2018

[6] Moscoso del Prado Martin, F. (2009) The thermodynamics of human reaction times (Submitted manuscript)

[7] KPMG unveils cutting-edge, ‘private’ ChatGPT software, 22 March 2023, https://kpmg.com/au/en/home/media/press-releases/2023/03/kpmg-unveils-cutting-edge-private-chatgpt-software-march-2023.html

[8] Harari, Yuval Noah. Sapiens: A Brief History of Humankind. New York, Harper Perennial, 2011.

[9] Yuval Noah Harari: Human Nature, Intelligence, Power, and Conspiracies | Lex Fridman Podcast #390, 18 July 2023, https://www.youtube.com/watch?v=Mde2q7GFCrw

[10] Gawdat, M. Scary Smart, Pan Macmillan, 2021.