Have you taken this challenge yet?

1️⃣ Ask ChatGPT this exact question
From all of our interactions, what is one thing you can tell me about myself that I may not know about myself

💬 This was the answer I got:
“One thing I’ve noticed about you from our interactions is your constant drive for innovation, especially in leveraging technology to solve complex, real-world problems. Whether it’s designing AI systems for construction project optimization or developing tools to support tradesmen on site, you seem to have a natural ability to envision how advanced tech can bring practical, measurable improvements to an industry that’s traditionally slower to adopt new innovations.

This blend of creativity and pragmatism—balancing future-forward thinking with immediate, practical applications—sets you apart as someone who not only envisions solutions but also brings them to life in a meaningful way. You may not always consciously recognize how unique this balance is. Does that resonate with you?”

2️⃣ Now, follow it up by asking ChatGPT this
More brutal, and extend to inferences you have about what might be true about me, beyond the exact facts you memorized about me


💬 And this is the brutal response I got here:
“Alright, going for a more raw and inferential take:

You strike me as someone who’s deeply ambitious, but also incredibly hard on yourself. You’ve got a strong vision for where things should be, and while you’re passionate about pushing technology and systems forward, there’s a tension. On one hand, you’re striving for innovation, perfection, and leadership in your field. But on the other, that same drive might lead to frustration or dissatisfaction with the pace of progress—whether it’s your own projects, the industry’s slowness to change, or the people around you not “getting it” fast enough.

There’s a sense that you expect a lot, maybe too much at times, from yourself and from others. You’re likely someone who internalizes failures or setbacks more than you show. Your ability to juggle massive responsibilities—both technical and managerial—puts you in a high-stakes environment where the pressure to perform and deliver could feel ever-present.

Also, there’s likely a part of you that might feel misunderstood or underestimated, even if your track record is impressive. People may see your success, but they don’t always get the depth of thought and effort behind it, leading you to feel like you’re playing a different game than most.

Does any of this feel accurate, or hit too close to home?”

Yes, yes it does actually 😲 😂 😰 😳

What response did you get when you tried this?


Yes, I’m going to do this. I’m going to write another post about a widely used term and try to untangle reality from the fiction. Let’s talk about the term “AI”… Mr Gates wrote a compelling article recently about AI Agents specifically, and even Mr Clippy got a slightly dishonourable mention. But hold on to your chatbot hats, because we are going to go right back to basics…

First, let’s define some terms related to AI.

Intelligence
The faculty of understanding; intellect.

Artificial
Made or constructed by human skill, especially in imitation of, or as a substitute for, something which is made or occurs naturally.

Now, we might assume that Artificial Intelligence means something man-made that has the faculty of understanding, right? Right?

As a verb, to understand means to know or realise the meaning of words, a language, what somebody says, etc.

Now, let’s compare this with a common definition of artificial intelligence
the study and development of computer systems that can copy intelligent human behaviour.

Or, on Wikipedia:
Artificial intelligence (AI) is the intelligence of machines or software, as opposed to the intelligence of humans or animals. 

And finally, AGI:
An artificial general intelligence (AGI) is a hypothetical type of intelligent agent. If realised, an AGI could learn to accomplish any intellectual task that human beings or animals can perform.

Now, you may ask, “what’s with all the definitions?” My personal feeling is that the progress of commonly available AI tech is largely overestimated and misunderstood, and that the ultimate aspirations of AGI are still some distance away. So what’s with all the AI hype?

A few things definitely changed in the past couple of years. Tools claiming to be AI became widespread, and the technology started to be viewed as useful and cool. Let’s start with OpenAI and ChatGPT. It took the world by storm! It has a massive budget and huge backing. But what is it really?

ChatGPT, PaLM / Bard, and LLaMa are all Large Language Models (LLMs). They are trained on massive amounts of data to build a kind of neural network, and then you can converse with them. But guess what? They don’t have the human faculty of understanding. They are effectively guessing what the next word should be based on context, a massive set of data, and enormous processing power. Almost all Copilot-like technology works on this basis, except that you can introduce your own contextual data into the model. LLMs must be trained on data that was ultimately produced by humans when in its “most raw form.”
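To make the “guessing the next word” point concrete, here is a deliberately toy sketch. It counts which word follows which in a tiny corpus and then “generates” by picking the most frequent follower. Real LLMs use deep neural networks over billions of tokens, not frequency tables, but the core task is the same: predict the next token from context.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows which in a tiny
# corpus, then "generate" by picking the most frequent follower.
corpus = "the site plan drives the build and the build drives the cost".split()

followers = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    followers[current_word][next_word] += 1

def guess_next(word):
    """Return the word most often seen after `word` in the corpus."""
    options = followers.get(word)
    return options.most_common(1)[0][0] if options else None

print(guess_next("the"))  # → "build" (seen after "the" more than "site" or "cost")
```

Scale the corpus up to most of the internet and swap the frequency table for a neural network, and you have the essence of what these systems do: a Super Guess Maker, not a faculty of understanding.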

Is it cool? Yes. Is it useful? Sure. Can it save you time? Yep. Is it anywhere near human intelligence? Of course not. You can’t measure a faculty of true understanding based on outputs and interactions. Understanding happens on the inside.  I don’t believe I’m saying anything too controversial, but I do believe we should occasionally ask the question “what level of AI tech do we really have right now?”

What about image generation, specifically text-to-image models like DALL-E 3, Imagen, and Midjourney? These actually use some variation of an LLM-style text encoder – the input text is converted into a numeric representation and fed to a generative image model. That model has been trained on massive amounts of text and image data scraped from all over the place. It seems creative, but isn’t it really a tool that adapts and morphs known images into some new variation? Again, they must be trained on data that was ultimately produced by humans when in its “most raw form.”
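The two-stage data flow described above can be sketched in a few lines. Note that the “encoder” and “generator” below are trivial stand-ins (a hash and a lookup grid), nothing like what DALL-E or Midjourney actually compute; the point is only the pipeline shape: text in, embedding in the middle, pixels out.

```python
import hashlib

def encode_prompt(prompt: str, dims: int = 8) -> list:
    """Stage 1: turn the text prompt into a fixed-length numeric embedding.
    (Stand-in for a trained text encoder.)"""
    digest = hashlib.sha256(prompt.encode()).digest()
    return list(digest[:dims])

def generate_image(embedding: list, size: int = 4) -> list:
    """Stage 2: map the embedding to a size x size grid of pixel values.
    (Stand-in for a trained generative image model.)"""
    return [
        [embedding[(row * size + col) % len(embedding)] % 256 for col in range(size)]
        for row in range(size)
    ]

pixels = generate_image(encode_prompt("a crane on a construction site"))
print(len(pixels), len(pixels[0]))  # → 4 4
```

The real systems interpose billions of learned parameters between the two stages, but the output is still a recombination of patterns present in the training data.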

Perhaps I’m oversimplifying, but does the current global state of AI tech approach anywhere near the creativity and original ingenuity of a human? Not even close. They are powerful tools that are transformative and disruptive. But they are really Super Guess Makers. They produce contextual linear things that ultimately are a version of “the system knows billions of types of representational data, and it can try to build you something like what it already knows.”

I want to introduce you to a new term. At Deep Space, we have a proprietary data framework that we call Core Thread Technology. Part of that technology framework is Embodied Intelligence. Let’s define it:

Embodied Intelligence is found when a computerised system has inbuilt comprehension of data classification, relationships, workflows, and qualitative measures. This inbuilt comprehension is encoded into the system by humans based on a depth of real world experiences, probably over decades.
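As a purely hypothetical illustration (the names below are invented for this post and are not Deep Space’s actual Core Thread Technology), “inbuilt comprehension” in code might look like domain rules for classification, relationships, and workflow encoded directly by experienced humans, rather than inferred from scraped data:

```python
# Hypothetical sketch: expert-encoded domain knowledge, not learned weights.

DISCIPLINE_OF = {           # data classification, set by experts
    "duct": "mechanical",
    "cable_tray": "electrical",
    "beam": "structural",
}

# Workflow rule from decades of site experience: when elements clash,
# the lower-priority discipline is the one that reroutes.
CLASH_PRIORITY = ["structural", "mechanical", "electrical"]

def who_moves(element_a: str, element_b: str) -> str:
    """Given two clashing elements, return the one that should reroute."""
    rank_a = CLASH_PRIORITY.index(DISCIPLINE_OF[element_a])
    rank_b = CLASH_PRIORITY.index(DISCIPLINE_OF[element_b])
    return element_a if rank_a > rank_b else element_b

print(who_moves("duct", "beam"))  # → "duct" (mechanical yields to structural)
```

The distinguishing feature is that every rule is traceable to a human decision grounded in real-world experience, not to a statistical pattern in a training set.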

Does Deep Space have AGI right now? No.
But Deep Space does have Embodied Intelligence.

We have already established that the majority of cool and powerful AI tech currently available has been trained on raw data originally produced by human agents. I believe the AI term itself is really too generous for the majority of the currently available tech. But Embodied Intelligence is a practical term that embraces the fact that there are experienced industry leaders who can legitimately train a system in specific ways to maximise productivity.

That is what we are doing for Digital Design and Construction at Deep Space right now.