GPTerminator and Plumbing School

Lately, ChatGPT and other powerful AI models have been talked about... everywhere. Having not used them myself, I could not distinguish reality from hype. So, I spent some time with ChatGPT – enough to form Opinions! I've broken them into a few thoughts, and then some more serious topics on AI in general.

Pros and Cons of GPT Itself

Here are my initial impressions after a few hours of talking to my robotic pal.

Pros

  1. If you give it a well-defined problem, it can (sometimes) generate an almost perfect answer faster than I could myself. I have great hope this will reduce some of the LeEtCoDe nonsense – if there is one application this will surely excel at, it is complex algorithms for well-defined problems. If the average programmer now has these tools at their disposal, there is no need to implement quicksort yourself (not that there was before – .sort()), but you get the idea (see the sketch after this list).
  2. Soon, or maybe even now, it should be the go-to before Google (which inevitably lands you on Stack Overflow) for most programming/debugging questions. Sometimes it takes quite a bit of hunting on SO – you start with a search result, it turns out it's not really asking the same thing, but oh, someone marked it as a duplicate of this other thing, you follow that link, and so on – eventually you end up where you need to be, or you become the poor chap who has to ask a new question. But so far, GPT "understands" what you're asking pretty darn well – and if it can't get you the answer, it can at least get you on the right path faster.
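
To make the quicksort point concrete, here is a minimal Python sketch – my own toy example, not GPT output – of the kind of well-defined, textbook problem these models handle well, next to the built-in that made hand-rolling it unnecessary long before AI came along.

    # A textbook quicksort -- the kind of well-defined problem GPT tends to nail.
    def quicksort(xs):
        if len(xs) <= 1:
            return xs
        pivot, rest = xs[0], xs[1:]
        left = [x for x in rest if x < pivot]
        right = [x for x in rest if x >= pivot]
        return quicksort(left) + [pivot] + quicksort(right)

    data = [5, 2, 9, 1, 5, 6]
    # The standard library already had this covered, of course.
    assert quicksort(data) == sorted(data)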

Areas of Improvement

  1. It must learn to ask clarifying questions (without you having to ask it to ask you questions!). Sometimes it is VERY confident in its wrong answer, probably because it always tries to "guess" what you mean and give the best answer it can, rather than "making sure" it understands what you want. It would take fewer iterations if it probed a bit before answering (see the sketch after this list).
  2. The disclaimer on their page says it "may be out of date for events after 2021". Does that cover new software libraries/versions/packages since then too? Software – at least some of it – moves fast, so there needs to be a constant "additive training" process, and a way to identify the sources of new data to train it with. There is a bit of work ahead.
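
Until the model probes on its own, you can sort of fake it yourself. Here is a rough sketch using the OpenAI Python SDK – the model name, the system-prompt wording, and the example question are all my own placeholders, not anything ChatGPT officially supports:

    # Sketch: nudging the model to probe before answering, via a system prompt.
    # Assumes the openai package is installed and OPENAI_API_KEY is set;
    # model choice and prompt wording here are assumptions, not official guidance.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical choice; any chat model would do
        messages=[
            {"role": "system",
             "content": "Before answering, ask clarifying questions whenever "
                        "the request is ambiguous. Only answer once you are "
                        "confident you understand what is being asked."},
            {"role": "user",
             "content": "Write a function that parses the log file."},
        ],
    )
    print(response.choices[0].message.content)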

Will I be Unemployed?

Lots of talk of this in programming circles lately. Should we start training to become plumbers and electricians?

The majority of my time as a senior engineer is not spent programming, at least not consistently. My job is:

  1. turning vague wants into products or other value to the company
  2. teasing out what’s really being asked from unclear reqs and vague tickets (see AOI 1 above)
  3. once we have that, figuring out what the MVP looks like
  4. from there, prioritizing all the features we thought of
  5. planning resources, time constraints, release cycles, when to cut versions, and so on
  6. then we finally get to program – or, at this point, delegate the task. Until the requirements change, and we repeat.

By the time you are at the last step, assuming you've done steps 1-5 well, you've already "won". You have reduced the monstrous "software engineering" task to a "programming task", and that is the point where I think our AI friend can really help us.

I (at the current time) do not see these AI tools performing these tasks. Handshakes are what make this world go round – not code. Everything above requires people talking to (and convincing...) people. Maybe there needs to be one fewer junior engineer on the team to implement those tasks. Until I am convinced that these models can learn to navigate their way through a sea of uncertainty, I don't see them as "engineers" but rather as "helpers".

Terminator-ey Stuff

Here are a few of my biggest worries about powerful AI (in general – not necessarily LLMs like ChatGPT):

  • Faked court evidence. I don't think video-rendering AI is quite here yet, but imagine it becomes possible to frame someone by altering video footage of a crime, using samples of their face, in such a lifelike way that the fake is impossible to distinguish from the real thing. Does the justice system have to change because of AI, e.g., requiring more evidence, if footage can no longer be trusted?
  • Faked (and dangerous) audio. Imagine I have a tape recorder in my pocket and collect N clips of your voice. I take all the samples and pass them through an AI capable of synthesizing your voice. I could generate a threatening recording, or something very embarrassing, or perhaps a fake political scandal that sways your upcoming election, or... "I swear, that's not me!" That is scary stuff. Lives can be destroyed like this. For 200,000 years we could trust our eyes and ears – is that period of humanity over?
  • Social media (or any electronic media) bots completely indistinguishable from human accounts – i.e., they can even go sign up for a Google Voice number to receive a 2FA code on. It is already hard to fight misinformation and propaganda – bots of this capacity would take it to a near-impossible level.
  • If you really want to have nightmares tonight, consider this.

It's not that poor me will be out of a job that I'm worried about... it's that innocent people could be jailed, families could be torn apart, democratic elections could end, and so on.

Futuristic Applications I’d Love Before My Grave


One response to “GPTerminator and Plumbing School”

  1. Nice post. I agree with most of what you say, particularly the dangers. I fear, though, that it is the unknown unknowns with AI that are the real worries. We can certainly think of many scenarios such as the ones you listed, but how many of us saw Facebook enabling genocide or misinformation campaigns undermining democracy back in 2010? There are already cases of sophisticated malicious code being generated. Then there is, of course, the data used to train the models. People who are shocked that AI behaves like humans (see the Stanford experiment) don't understand how clustering and classification work, nor sensitivity and false/true positives/negatives. Like our lives and the experiences that mold us, AI is only as good as the data it is trained on.

