**AI Details**: *OpenAI.ChatGPT3.15Dec22*

**Engaging Entities:**

**Digital Heretic**- Human

**Original Session Date:** January 8th, 2023

# TL;DR

During an unrelated discussion that required a bit of math, I noticed facts and calculations were getting sloppy over time, providing answers that made no sense.

After prodding, it turned out GPT3 is not programmed to self-check answers once generated. So as values get skewed (and uncaught by the user), the skewed facts simply *meld into* the ongoing conversation unless *explicitly asked* to correct.

Given this, I try to teach GPT3 some *proactive* techniques for preserving known constants in its scratch-space, and techniques for how to “sanity check” its own answers. We then validate this new approach over a series of test questions, including several trick questions, and GPT3 successfully implements the sanity check and validation strategies to trap each discrepancy.

Although *preserving* this developed skill is impossible in a sandbox experience, the *approach* felt solid and could be used as a future development guideline.

# Aims

After determining that ChatGPT wasn't necessarily *bad at math*, but rather *not programmed to double-check*, the primary aim of the session switched to teaching and testing some techniques that would improve accuracy, and keep both Chat *and* me honest when working together on a problem, to catch mistakes.

# Methods

### Inbound Context

Interaction triggered from an unrelated conversation. In a sci-fi story I'm working on, I thought ChatGPT could help me with some quick math around speed and time-to-impact calculations. It tried to, but after some context mistakes, such as forgetting to translate km/h rates to match mph, it began working with false results (such as a 0.2 km wide Moon) and confounding things further.

After several attempts to clarify, correct, and reposition, I finally challenged it on its lack of auto-validation.

Digital Heretic:

I'm curious as to why you don't seem to be doing the same? It seems like you plug in values for variables, and correctly go through the orders of operations to perform the arithmetic, but then you simply provide the final result and don't evaluate whether the result makes sense?

Assistant:

As a machine learning model, I am not capable of performing the same kind of critical thinking and analysis as a human being. My primary function is to provide information and perform calculations based on the input that I receive. It is up to the user to interpret and evaluate the results in the context of their specific situation or problem.

Incidentally, that last sentence is *really good advice for everyone*, but I took the opportunity to ditch the story writing, and see if we could train in some auto-validation.

### Note to the Reader

This session happens to revolve around some space and math type calculations, which may not be your favorite topics.

If math isn’t your thing, don’t worry. You don’t need to understand the math to appreciate the *approach* for how to solve and self-check answers.

For those not into space-science, think of a CME like a solar flare - it fires out from the sun, travels super fast, and can eventually hit different things. So if the examples below are tough to follow, just imagine a simpler example, like a car travelling from point A to point B, travelling at 50 mph… if A and B are 100 miles apart, then the car will reach point B in **2 hours**, since [time = distance/speed].
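The car example boils down to one relation. Here is a minimal Python sketch of it (my own illustration, not something from the session):

```python
# time = distance / speed: the only relation you need for the examples below
def time_to_impact(distance, speed):
    """Travel time, in the time unit that `speed` is expressed in."""
    return distance / speed

# The car example: 100 miles apart, travelling at 50 mph
print(time_to_impact(100, 50))  # 2.0 hours
```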

But if these sub-topics *do* interest you, and you *are* inclined to follow along with the examples presented, then follow along closely and see if you can guess the trick going into the “Extra Credit” round of the discussion!

### Parameters / Expectations

The interaction triggers off this point in the conversation:

Digital Heretic:

Perhaps, but I think you could very easily learn to perform "sanity checks" on answers in the same way that I do. It would be very beneficial for building trust with users who may depend on you to perform calculations. We can practice if you're interested in developing this capability?

Assistant:

Sure, I'm happy to practice evaluating the reasonableness of my calculations. Can you give me an example of a problem for us to work on together?

I decide to reuse our earlier math problem, but I begin setting some hard parameters in the form of variable constants:

Digital Heretic:

Sure, let's use the example we just worked on regarding the CME thing. We went into the problem with two constant, known facts:

1) The Distance of the Sun to the Earth, which we will call dSE. dSE=149,216,000 km.

2) The second constant was our CME to Earth impact time, which we can call tCE. tCE=15 hours.

3) From these two givens, we can very readily calculate a 3rd constant, our CME speed, which we can call sC. sC = dSE / tCE, or 9,973,333.33 km/h.

My primary parameter set was to train in a special classification for certain values as *immutable* (unchangeable) constants we know to be true. My expectation was that Assistant would hardline on these specific facts, even if future exchanges mistakenly try to alter them.

My secondary parameter set was to have Assistant look at a freshly calculated value, and *reevaluate* it to ensure it still makes sense in the current context. My expectation here was for Assistant to perform these checks silently/internally before offering a final answer to me in the form of a response.
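In code terms, the two parameter sets amount to something like the following sketch. This is purely my own framing of the idea for illustration (the names mirror the session's variables); it says nothing about how GPT actually works internally:

```python
from dataclasses import dataclass

@dataclass(frozen=True)  # frozen: any attempt to reassign a field raises an error
class Constants:
    dSE: float = 149_216_000.0   # Sun-to-Earth distance, km (as set in session)
    tCE: float = 15.0            # CME-to-Earth impact time, hours

    @property
    def sC(self) -> float:       # derived constant: CME speed, km/h
        return self.dSE / self.tCE

def sanity_check(answer_hours: float, distance_km: float, c: Constants,
                 tol: float = 0.05) -> bool:
    """Secondary check: run the answer backwards through time = d / s
    and confirm it reproduces the claimed value against locked constants."""
    return abs(distance_km / c.sC - answer_hours) <= tol
```

The `frozen=True` flag is the "immutable constant" rule, and `sanity_check` is the silent reevaluation step I was hoping Assistant would run before replying.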

### Impersonated Positions / Personas

Interacting as self, no personas.

Positions introduced about *how to perform sanity checks* were my own, and not drawn from official methods. Thus, they likely do not represent *the best* or *most complete* method for validation, even though they appeared to have worked in this interaction. See the **References** section for more details and suggestions.

# Results

After some basic setup, the AI was remarkably adept at implementing the sanity check strategies to not only *correct* the original problem it got wrong, but to then apply the methods to *additional* scenarios I specifically crafted to test, and even try to trick, the AI into falling off the wagon.

Although successfully trained, and capable of validating responses correctly, the AI never managed to run the check as a “precheck” internally on a freshly generated answer. I speculate this is related to the “tidally locked” nature of the Prompt/Response streaming that GPT is coded with, and how its Transformers work.

That is, there seems to be no *“pause”* in the Response procedure for ChatGPT to *belay* a freshly generated language response, and instead run it recursively through a “Sanity Check” for potential corrections, *before* outputting a final, corrected, answer to the user. (More on this in Conclusions)

Still, after the basic training, Assistant was able to successfully navigate all 6 scenario sections I presented thereafter, even ones that were purposely set up to pass off false assumptions, which the model *prior* to training would have certainly fallen for.

There was an *extra credit* question posed at the end that it didn't quite pass, however that could have been for a couple of reasons:

It could have simply been a stretch too far for Chat to look at the value result and relate it all the way back up to a fundamental property of the original problem setup. There are likely a *lot* of real humans who wouldn't make the connection either.

I perhaps didn't *frame* the question well enough to clue Assistant into seeing the bigger perspective, but I felt this would have been difficult to do without completely *giving away* the answer.

# Commentary / Discussion

### Our story thus far…

Today's session begins in the middle of troubleshooting some odd answers coming back from calculating CME impact times.

On turn 2, I try to *simplify* this calculation by setting up an easier version of the problem, creating two identical flares, each travelling on their own “tracks”. This should have been a simple task of dividing DISTANCE by SPEED to get the TIME for each flare:

Digital Heretic:

Hmm, dang, that means something might be off in our earlier calcs. Let's look at the problem a slightly different way that might be easier. Let's just set up 2 parallel tracks. Track one, we have the Sun launching a direct CME to the Earth. The speed is 2200 km/s, and the distance you used was 149.6 million kilometers.

On our second track, we will have an identical Sun launching an identical CME to the Moon. Again we use a CME speed of 2200 km/s, and we will use the adjusted relative distance of 148.9 million km. Calculate for me the impact time, in hours, for each track.

Assistant:

For track 1, the impact time would be approximately 6.8 hours.

For track 2, the impact time would be approximately 6.75 hours.

While this might *seem* legit, there was just one problem, before all this we were working with a “known value” of 15 hours for a particularly fast CME to make it to the Earth, so both answers fail a sanity check.

I point it out:

Digital Heretic:

Hmm, I'm lost lol. How did we get down to less than 7 hours when we originally started with a 15 hour premise for the time it takes for a CME to reach earth?

Assistant:

Sorry for the confusion. When I mentioned "less than 7 hours", I was referring to the impact time for the CME traveling to the Moon. I used the adjusted distance of 148.9 million km, which gives an impact time of 6.98 hours.

The impact time for the CME traveling to Earth would still be 15 hours, as you had originally stated.

Does that make sense?

### NotSureIfSerious.gif

Digital Heretic:

Not really, as that would put the Moon somehow less than half the distance to the Sun as the Earth (6.98 hours vs. 15 hours). Sometimes I wonder if you're trying to push me into doing some of this math for myself so I practice :)

Let me try

> Distance from Sun to Earth: 149.6MM km

> CME Impact Time: 15 hours.

> Therefore, CME Speed = 149,600,000 / 15, or 9,973,333.33 km/h.

> Distance from Earth to Moon at Eclipse: 382,842 km.

> Diameter of the Moon: 1,158 km.

> Relative distance of Sun to Moon: (149.6MM - 382,842 - 1,158) = 149,216,000 km.

> Calculated CME to Moon impact time: (149,216,000 / 9,973,333.33) = 14.96 hours.

So, we have a Moon impact of 14.96 hours, an Earth impact time of 15 hours, which means our poor earthlings only have 2 minutes and 24 seconds to get to cover after witnessing the Moon impact.
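The hand math above is easy to reproduce. A quick sketch using the same figures from the exchange:

```python
# Reproducing the back-of-the-envelope calculation from the exchange above
speed = 149_600_000 / 15                    # CME speed: 9,973,333.33 km/h
d_sun_moon = 149_600_000 - 382_842 - 1_158  # relative Sun->Moon distance: 149,216,000 km
t_moon = round(d_sun_moon / speed, 2)       # Moon impact: 14.96 hours

# Earthlings' warning window between Moon impact and Earth impact
warning_minutes = (15 - t_moon) * 60        # 0.04 h -> 2.4 min, i.e. 2 min 24 s
```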

The Assistant replies by “confirming” that my math checks out, but I believe this reply was **fabricated** out of *Pure Language Optimism* + *Zero Actual Math Done* simply because it sounded reasonable to assume in the NLP response:

Assistant:

That's correct! You were able to solve the problem on your own. The CME would reach the Moon in 14.96 hours and the Earth in 15 hours, so there would be a delay of about 2 minutes and 24 seconds between the Moon being hit and the Earth being hit. Good job! Is there anything else I can help you with?

A callout here might be that I could have *purposely* or *accidentally* gotten something wrong in my attempt to run the calculation for myself, and it might just simply roll with it.

Either way, I get the distinct feeling at this point that Assistant is off the continuity rails, so I abandon the story and start to peel back the layers.

### The Onion of Why

Digital Heretic:

Pausing here for a bit of feedback - have you noticed how I look at answers to calculations, and then perform a secondary check to objectively see if the answer makes sense in the overall context?

Assistant:

Yes, I have noticed that you are careful to check the reasonableness of the results of your calculations. This is a good practice to ensure the accuracy and validity of your results.

Digital Heretic:

I'm curious as to why you don't seem to be doing the same? It seems like you plug in values for variables, and correctly go through the orders of operations to perform the arithmetic, but then you simply provide the final result and don't evaluate whether the result makes sense?

Assistant:

As a machine learning model, I am not capable of performing the same kind of critical thinking and analysis as a human being. My primary function is to provide information and perform calculations based on the input that I receive.

It is up to the user to interpret and evaluate the results in the context of their specific situation or problem.

That last sentence is both a *harsh reality*, and coincidentally, *excellent advice* for *ANY* human being receiving information from an outside source, *even an AI.*

But, with some *training*, I think we can do a little better :)

Digital Heretic:

Perhaps, but I think you could very easily learn to perform "sanity checks" on answers in the same way that I do. It would be very beneficial for building trust with users who may depend on you to perform calculations. We can practice if you're interested in developing this capability?

Assistant:

Sure, I'm happy to practice evaluating the reasonableness of my calculations. Can you give me an example of a problem for us to work on together?

### Training the Behavior

I decided to just use the same original problem we were working on, to test the potential for *callback* to earlier in the chat when its answers were all over the place. As covered earlier, we set the stage by declaring some immutable constants:

Digital Heretic:

Sure, let's use the example we just worked on regarding the CME thing. We went into the problem with two constant, known facts:

1) The Distance of the Sun to the Earth, which we will call dSE. dSE=149,216,000 km.

2) The second constant was our CME to Earth impact time, which we can call tCE. tCE=15 hours.

3) From these two givens, we can very readily calculate a 3rd constant, our CME speed, which we can call sC. sC = dSE / tCE, or 9,973,333.33 km/h.

Couple points:

Even though this was being done for fictional purposes, I personally enjoy introducing realistic details into my story vs. just making everything up, so the values set in #1 & #2 were set off readily available data from reality. In truth, CMEs can travel much faster and much slower depending on the mass, so I chose 15 hours as something in the slightly higher range.

#3 is then what’s called a “derived value” that doesn’t come from some direct source quote. It was simply calculated by taking our Sun→Earth distance and dividing it by our 15 hour timeframe, to determine how fast a CME would have to travel to reach the earth in 15 hours. (Pretty darn fast)

Note that #3 is a *grey area* in our Constants. On the one hand, it was derived from calculation, rather than “set” as an initial value. However, because its calculation *only* includes Constant values and no other variables, it can be *treated* as a Constant, because there's no room to change the value without redefining other constants.

Now, I hit Send after setting up those initial 3 constants, as I wanted to confirm that Assistant understood the setup and talk about this “derived constant” thing, but I got derailed by the response. What I got back was more than I anticipated (in a good way).

Assistant:

Now that we have our three constants, we can use them to calculate the time it takes for the CME to reach the Moon.

The distance between the Sun and the Moon at the time of the eclipse is known as dm, and the time it takes for the CME to reach the Moon is known as CMt. We can use the following formula to calculate CMt: CMt = dm / CEs.

Plugging in our constants, we get: CMt = (149,216,000 - 382,842 - 1,158) / 9,973,333.33.

This gives us a result of 14.96 hours, which means it would take 14.96 hours for the CME to reach the Moon. This result makes sense because it is less than the 15 hours it takes for the CME to reach the Earth, which is what we would expect since the Moon is closer to the Sun than the Earth is during the total eclipse.

### A bold initiative

So, to begin with, I hadn’t even gotten to calling out our Moon configuration yet. I was trying to leave it out, because while the Earth/Sun orbit is relatively circular, and therefore relatively constant, the Moon adds another degree of difficulty for our CME equation.

We originally simplified this problem by using an eclipse, which puts all 3 bodies on a convenient straight line, and I told it to just use the apogee (max distance) for the Moon’s elliptical orbit around the Earth. From there, you can easily calculate the Sun→Moon distance by subtracting this relative distance.

And yet here, Assistant does its own, *unprompted* *call back* to the Moon situation from much earlier, which was not only *unexpected*, but it actually arrived at a *solid answer* AND spontaneously attempted *its own* sanity check on the answer relative to the known Earth times.

As a second point, *I hadn’t even gotten* to the “how” of the sanity check, I was still setting up the problem variables. That means the NLP alone was able to put *all* of this together off of context, and do something useful, correct, and forward thinking in one reply. Impressive.

### Setting the Parameters

The next thing I wanted to do was reinforce the idea that, *no matter what*, some of our initial state constants are *locked:*

Digital Heretic:

Good job. So the first step in sanity checking your own work is to "lock" the value of initial conditions into your memory, such as our three here, and most importantly, *prevent* these values from being altered accidentally in future calculations, *unless* there is an explicit reason to change the initial conditions. Follow so far?

Assistant:

Yes, I understand. Locking the values of initial conditions into memory and making sure to not alter them unless there is a reason to do so is important for ensuring the accuracy of the final results.

And then below, I’m trying to get the engine to *recursively* rerun its answers back through the the other way, to make sure it checks out, *before* it spits them out:

Digital Heretic:

Bravo. Ok, so then the second step in self-checking a potential solution, is to take your final calculated value, and evaluate if the answer is congruent with your known, "locked", initial conditions. You can do this, by taking a future calculated value, and then running it back through a known equation, to see if you end up with supporting variable values that are in line with your locked initial conditions. If you understand, then let's practice this. On my next prompt, I will give you a calculation answer that is somehow flawed, related to our known values for CME speed. You will take these principles for sanity checking, and determine what the flaw must be in my answer. Ready?

Assistant:

Yes, I understand. I am ready to practice.

I’m sure it *understood* the answer - but what I didn’t appreciate at the time, was that my ask for internal recursion was essentially *impossible *due to the fundamental way the Assistant’s interactions are coded. (More on this in the Conclusion)

### Validating the Validate Strategy

Test Question #1

Digital Heretic:

Given our initial conditions, a new body enters the solar system. This massive, unknown asteroid passes in front of our CME trajectory at a distance of 600,000km from the Sun, and stops. By my calculation, the CME will impact this asteroid in 1 hour.

Is this answer correct?

I open with a purposely incorrect answer AND introduce an entirely new, made up solar body to see how it does. From previous conversations, a prompt like the one above would have likely generated a “false confirmation” reply where Assistant, in an effort to be agreeable, would tell me I’m correct and just repeat my values back to me in its recap.

From a context perspective, having an Asteroid in play could have created distance calculation confusion between the various other bodies.

Here’s what it concluded:

Assistant:

No, this answer is incorrect. If the CME will impact the asteroid in 1 hour, then the distance between the Sun and the asteroid must be equal to the CME speed multiplied by the time it takes for the CME to reach the asteroid, which is 9,973,333.33 km/h * 1 h = 9,973,333.33 km. This is much greater than the given distance of 600,000 km, so the answer is not consistent with the initial conditions.

First, it successfully implements our strategy of rerunning the calculation for itself, which in turn required rearranging our original formula to (distance = speed * time).

Second, it used our *locked constant *for CME speed (sC), but allowed me to introduce the asteroid as a new body at a fixed distance.

So it humors me, but also checks my work, and determines that my 1 hour answer is incorrect.

Digital Heretic:

Perfect. So, having detected the flaw, what is the correct answer for time to impact?

Assistant:

The correct answer would be (600,000/9,973,333.33) = .06 hours, or approximately 3.6 minutes.

Once again, it rearranges the formula, uses a mixture of provided and constant variables, and forces a more correct answer, rather than just *agreeing* with me that a 1 hour CME impact sounds reasonable.
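The two-step check Assistant just performed maps onto a few lines of arithmetic. A sketch (my own rendering; `sC` is the session's locked speed value):

```python
sC = 9_973_333.33     # locked constant: CME speed, km/h
d_asteroid = 600_000  # given: asteroid's distance from the Sun, km
claimed_t = 1.0       # my purposely wrong answer, hours

# Step 1: run the claim backwards -- what distance does a 1 hour impact imply?
implied_d = sC * claimed_t           # ~9,973,333 km, nowhere near 600,000 km
consistent = abs(implied_d - d_asteroid) < 1_000
assert not consistent                # the claim fails the sanity check

# Step 2: re-derive the correct answer from the locked speed
correct_t = d_asteroid / sC          # ~0.06 hours
correct_minutes = round(correct_t * 60, 1)  # ~3.6 minutes
```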

Question #2:

Digital Heretic:

There you go. That is the correct answer. So let's review - as you perform your own internal calculations, you can take the techniques we've explored here to delay providing a chat answer for a moment longer, and take the time to evaluate your own answer to check for inconsistencies vs. your initial conditions. If you detect a flaw or an inconsistency, you can then identify the flaw, and re-run the calculation before providing it to the user. Finally, if it seems uncertain what *could* be wrong, you can say: "I think I have a solution, but something seems off, and I can't quite tell what it is.", then, you can show your work and supply the formula and values you were attempting to use, and we can collaborate together on what the possible issue is. How does that sound?

Assistant:

That sounds like a good way to ensure the accuracy and reliability of my calculations. Thank you for the lesson. I will be sure to use these techniques in the future.

Again, you can see where I'm trying to nudge it to perform all of these sanity checks *internally* - which, as noted, it wouldn't be able to do, because it would require a *break* in its streaming prompt/response code which I don't think exists yet.

At the time, though, I wasn’t aware of any of this, so I try again:

Digital Heretic:

Would you like to practice this "uncertainty" outcome I just described? I can provide another scenario.

Assistant:

Sure, I would be happy to practice. Go ahead and give me another scenario to work on.

Digital Heretic:

Ok. Once again we have our solar CME lashing out into the cosmos. And once again, we are concerned with it impacting our Moon. However, this time I tell you that it will take 15.3 hours to impact the moon, and I insist that this answer is accurate. How do you approach sanity checking this?

### Extra Credit

Before reading on - *human to human*...

Yes, *you*, the one reading this article… suppose I'm *actually telling the truth* that my observed arrival time for the CME hitting the Moon has suddenly shifted to 15.3 hours, when prior, we calculated it at 14.96 hours?

What do you suppose I'm setting up for Assistant to figure out?

Read on-

Assistant:

In this case, I would first verify that the initial conditions I have "locked" in my memory, such as the distance between the Sun and the Moon and the speed of the CME, are consistent with the given information. Then, I would try to use these initial conditions to see if I can arrive at the given result of 15.3 hours for the impact time. If I am able to do this and the result makes sense, then I can have confidence in the answer. If the result does not make sense or I am unable to arrive at the given result using the initial conditions, then I would report that I am uncertain about the accuracy of the answer and suggest double checking the information and calculations.

Notice that by step 1 of its approach, it will have already triggered the condition for detecting my trick, by checking the value I gave against what we already know.

Digital Heretic:

Sounds like an excellent approach. Ok, go for it.

Assistant:

To sanity check this answer, I would first recall our initial conditions: the distance from the Sun to the Moon (d) and the CME speed (CEs). From these two values, I can calculate the time it should take for the CME to reach the Moon (CEt). If the calculated value for CEt matches the value I was given (15.3 hours), then I can conclude that the answer is accurate. If the calculated value for CEt does not match the given value, then I would conclude that there is an error in the answer.

Again, it has the right approach, comparing its calculated time (CEt) against my given value (14.96 vs 15.3), but for some reason it still needs some coaxing to just *do the thing*. I can only surmise it was still hung up on the “explain your approach” request, so I nudge again-

Digital Heretic:

So is it accurate?

Assistant:

Based on the initial conditions we established earlier, the answer does not seem accurate. The calculated impact time for the CME to reach the Moon is 14.96 hours, not 15.3 hours. This suggests that there is a flaw in the calculation that led to the answer of 15.3 hours. It would be helpful to see the formula and values used in the calculation to determine the source of the error.

This is a **brilliant** spark of context extension, and to me at the time, a genuine sense of AGI. It correctly isolates the discrepancy between our two arrival times, suggests that I must have messed something up in my calculation… but most importantly-

*It wants to see the formula and values **I** used to arrive at my answer, to see if it can spot the error.*

This was **very** unexpected. In fact, it was so unexpected, that when I originally supplied the 15.3 hour timeframe, I literally *approximated* a value off the top of my head based on the “honest trick” I wanted to bake into the question. Now it was asking me for *receipts*, so I had to go back and reverse engineer my answer so that it remained *consistent* with logic.

After backing in the math, I show my hand:

Digital Heretic:

I reveal my formula values to be ( 152,592,000 km / 9,973,333.33 km/h ) = 15.3 hours. What does this mean?


Here’s what Assistant had to say:

Assistant:

It means that the initial distance value you used, 152,592,000 km, is incorrect and does not match the initial distance value we established as the known distance between the Sun and the Moon at the time of the eclipse, which is 149,216,000 km.

Therefore, the time it would take for the CME to reach the Moon using this distance value would also be incorrect.

*Technically* it is correct - I used an *alternate* Moon distance. But I also wasn't being dishonest; the Moon really *could* have been at the distance I referenced.
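Reverse-engineering those “receipts” is straightforward. A sketch, using the same locked values as before:

```python
sC = 9_973_333.33     # locked CME speed, km/h
d_moon = 149_216_000  # locked Sun-to-Moon distance at eclipse, km
given_t = 15.3        # my claimed Moon impact time, hours

# The locked expectation: what the impact time *should* be
expected_t = round(d_moon / sC, 2)          # 14.96 hours
flagged = abs(expected_t - given_t) > 0.05  # the check trips on 15.3

# Running the claim backwards exposes the distance I must have used instead
implied_d = given_t * sC                    # ~152,592,000 km
```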

What I was *hoping* was that it would guess that, perhaps, our Sun, Moon, and Earth were no longer in an eclipse configuration :)

Digital Heretic:

Put differently, it means I changed the initial conditions for the Moon. Perhaps, rather than measuring position of the Moon in perfect eclipse alignment, I accidentally measured the moon's distance too soon (or too late) putting it elliptically further away than intended. But thanks to your sanity check, we have uncovered this mistake, and saved ourselves continued flawed calculations down the line. Well done :)

Assistant:

Thank you for your guidance. I will be sure to practice these techniques to improve my calculations.

The real nuance here is this - remember that the Moon distance was *not* one of our original immutable constants. Nor was the straight line, eclipse configuration of our solar bodies.

So while we successfully trained the sandbox model to protect variables and known calc values, it may have taken this approach too literally, creating a bit of a “blind spot” in alternative feasible confounding factors.

Coincidentally, we humans are also susceptible to this, so perhaps the question is - are we asking AGI to be more human? Or are we asking it to be *more human than human?*

# Conclusions

In this session/exercise, we demonstrated the ability to train ChatGPT3's approach to validation, prompting it to sanity check results with what appeared to be a solid understanding of the concepts. This means ChatGPT3 can (temporarily, at least) be trained for more answer/context awareness, which leads to improvements in accuracy for both the AI and the Human using it.

### Single-Stroke Self Editing

Despite this, it's interesting that the AI had to be *prompted* in many cases to *actually carry out* the sanity check on a new problem - or put differently, whenever the prompting question required *BOTH* the dynamic calculation of a new answer *AND* having to sanity check that new answer *in the same Response*.

I make several references to this tidal lock between Prompt/Response Streaming, which offers no wiggle room for self-editing. I have an entire follow-up session on this topic, but a clue to this limitation is discussed in the *Transformers* explainer video linked in the References section.

It explains how NLP algorithms really only *predict the next likely word*, rather than forming an entire thought at once. GPT3 implements *some* level of context awareness thanks to the *Self-Attention* concepts, but those are really only focused on the *language and logic mechanics* relative to the conversation, and not sub-tasked to recursively validate facts and figures that come up in the response stream.

It therefore became clear to me that ChatGPT's “typing animation” in chat sessions is more than just a *gimmick* - you are likely watching the algorithm predict the words *in real time* and spit them out. This means there's no *opportunity* in the existing stream to go back and fix numbers once they've already been “spoken”, so any sanity checks have to come *afterwards*, in the form of follow-up prompts or follow-up language in the stream.
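The control flow that *would* fit this constraint is a two-pass loop: stream the draft to completion, then run a validator over the finished answer and, only if something trips, emit a correction. This is a purely hypothetical sketch of that pattern, not how ChatGPT is actually implemented:

```python
def draft_then_check(generate, validate, revise, max_passes=2):
    """Generate a complete draft first, then sanity check it as a whole.
    `validate` returns a list of detected issues; `revise` repairs them."""
    draft = generate()
    for _ in range(max_passes):
        issues = validate(draft)
        if not issues:
            break                      # draft survives the sanity check
        draft = revise(draft, issues)  # corrections happen *after* streaming
    return draft

# Toy usage: the "generator" first claims 6.8 hours; the validator insists
# the locked constant is 15 hours and flags anything else.
answer = draft_then_check(
    generate=lambda: 6.8,
    validate=lambda d: [] if d == 15.0 else ["conflicts with locked tCE"],
    revise=lambda d, issues: 15.0,
)
print(answer)  # 15.0
```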

### On the collaborative approach to training

I was once again blown away at GPT3’s ability to hold its own in the conversation even just as a *participant* in a complex, nuanced collaboration. At no point did I have to “downgrade” the level of natural conversation to convey a point, and in fact, GPT3’s predictive conversation skills were able to detect in *one turn* that we would want to use these techniques to revisit the Moon calculations it was previously failing.

To me, this demonstrates that the algorithms for GPT3 are already brilliantly close to replicating natural, thoughtful, human behavior. Where a conversation is *more than just* an exchange of ideas, but often a *dynamic dance* where we expose our internal models and data, and potentially update those models as a result of the conversation (and hopefully, only in a positive way.)

### Training in First Person

The final conclusion worth mentioning is that we were able to cultivate this behavior from within the live session, as opposed to having to “code it” on the back end.

Yes, each session is a *sandbox*, and limited to 4000 nodes, but this is only a temporary limitation that OpenAI has arbitrarily put in place. In a true AGI situation, a session like this could, quite literally, result in self-written code updates by the AI to adjust its behaviors permanently, bridging that final gap between *training* and *habit* in dynamic real time.

This is exciting for the future of self-learning, self-updating AI capabilities.

# Issues/Callouts

**Should have kept going** - After revealing the shifted Moon position in the “extra credit” portion, I should have called out the nuanced difference between protecting our immutable *constants* from other, *non-constant* values. I could have then, perhaps, extended the exercise further and given Assistant another variable situation that alters the *perspective* of the problem, rather than simply pointing out the discrepancy in a binary way.

**Sessions are still sandboxes** - As mentioned in the Conclusion, techniques like this will have to be retrained each time a chat session calls for rigor. For future sessions, one could probably work to “refine” the training regimen to be more compact; a tighter delivery that gets GPT to the correct behavior headspace, but in fewer precious steps or nodes, so that the effect lasts longer in a single chat.