Programs, Pluribus, and Third Places: What We Owe to Each Other
Why the Third Space is Crucial for Feeling Thinkers in the Age of Algorithms
[Warning - Spoilers Ahead!]
There is a line from the cult hit TV show “The Good Place”.
It is the question: “What do we owe to each other?”
To some this might seem like a tacky ‘thought experiment’ that stuffy philosophers like to amuse themselves with.
To others, who I will refer to as “Feeling Thinkers”, it is a genuine seed for reflection, growth, and connection.
This question is what sparks the lead character Eleanor (in The Good Place) to begin trying to turn her life around for the better.
At the end of the day, we absolutely can just be self-concerned, act only in our own best interest, and never go out of our way. Because going out of our way has the potential to leave us worse off than we started, or to spend energy and resources that never get reciprocated.
Humans are hard-wired to connect. That’s a fact.
And yet we’ve found ourselves at a point in Western (colonial) society where ‘rugged individualism’ is promoted as an ideal. To ‘not be dependent on anyone’ is framed as admirable.
But that’s just not how we’re meant to be.
A collective is always going to fare better, and longer, than an individual will in tough times.
Several of my favourite science-fiction shows and movies hinge on complicated circumstances that challenge our unity as a species just as much as our sense of self as individuals.
I’ve got three scenarios to go over here: PIP, CHIP, and HIP.
PART 1: P.I.P (aka Pluribus Inflection Point)
[REMINDER: SPOILERS AHEAD!]
Pluribus is a new television show from Vince Gilligan (the creator of Breaking Bad and Better Call Saul).
It is the latest in a string of shows that tickle the absolute sweet spot in my brain - shows that are intellectual but also deeply human.
Shows for “Feeling Thinkers”, of which I am always trying to find more.
I’ll do a quick rundown of these shows for context. Some of them have been out and wrapped for multiple years, so I’m less worried about spoiling them.
The Good Place:
Four people die and go to Heaven (aka ‘The Good Place’), except they’ve actually gone to Hell (“The Bad Place”). It eventually comes out that the ‘scoring system’ used to determine which place they belong in may need to be revised.
How do you truly quantify GOOD and BAD? What if someone has been bad their whole life, but genuinely begins to grow, change, and set out on a different path, and then they die? If they genuinely changed their moral alignment, how should that be handled? (I’ll touch on that again with another show.)
Travelers:
People are sent back from the future with special technology, and are tasked with preventing certain historical events that led to the future devastation of humanity.
The catch: the people who are sent back ‘take over’ the bodies of people who died in the present. That is, they don’t kill anyone to steal their bodies; they ‘transmit’ into those bodies as the original consciousness ends at the moment of natural death. Then they have to lead the life of this stranger, ‘maintaining their cover’. An FBI agent, a high schooler, a nurse, a single mother, and an addict. Time travelers trying to ‘save the world’ while hiding in plain sight.
They are following something called “The Grand Plan”, which is guided by “The Director”, a highly sophisticated AI.
However, there is a rogue group called The Faction, who also ‘transmit’ back to try and stop the Director’s team, because they claim that the changes happening in the present are resulting in the future actually becoming worse, not better.
It occurs to me in writing this that there is a very veiled nod to faith - the commitment to ‘the mission’ despite possible evidence it may be a false prophecy. I never even picked up on that undercurrent before!
There is so much here, in just 3 seasons. I re-watch this series over and over because each re-watch gives me something new to take from it.
The OA:
Another tragically cut-short series (2 seasons) that touched on alternate dimensions/multi-verses, the afterlife, feeling trapped in your life and circumstances, human resilience, and more.
I truly wish they had gotten one more season to at least give the story a fitting end, instead of one of the wildest cliffhangers ever.
Severance:
A technique is developed that allows people to ‘sever’ their at-work consciousness from their outside-of-work self. Essentially, if you hate your job, you can ‘sever’ so that outside of work hours you don’t have to think about it (you don’t even know what you do at work).
Of course, this means you have one consciousness that is NEVER at work, and one that never LEAVES the office.
I can’t even begin to discuss the intricacies and moral, ethical, and philosophical questions this show manages to touch on in (so far) only 2 seasons. It’s not a light watch, even though it definitely has humour.
If this show had happened 10 years ago, I could see it going 6-8 seasons, but based on the pacing and what is happening, I’m sad to say I suspect we will only get 3-4, though with a satisfying ending.
Dark Matter
While not as philosophically intricate as Severance or The OA, it is quietly profound and sparked some strong reflection in me as it unfolded. Six strangers wake without memory of who they were (hitmen, wanted criminals, and so on), and most begin again as kinder, more cooperative versions of themselves.
It led me to wonder: if we could wipe the cultural programming, the traumas, the identities and histories we cling to, would more of us default back to empathy?
Is the main thing preventing more of us from ‘transforming to the best version of ourselves’ simply the weight of the expectation and judgement of others?
3 Body Problem:
Aliens make contact with Earth - they’re coming. And they’re not friendly.
Some people submit to the aliens as Gods and become their willing servants, while others swear to fight to defend and save humanity’s freedom and independence. One season so far, but SO. GOOD.
Pluribus:
As of me writing this, only the first two episodes have dropped but it has already tickled my grey matter so much and I desperately wish I had more people to talk to about shows like this.
FINAL SPOILER WARNING
In the first episode, it is revealed that an alien entity has transmitted a code to Earth for an RNA sequence, which gets developed and creates a ‘psychic glue’ that turns all of humanity into an ever-happy hivemind. All of humanity except for 13 people.
Of the 5 who speak English, only one of them does not want to ‘assimilate’.
The hivemind reveals that it both cannot, and does not want to, hurt anyone or anything (‘not even a bug’). And expressing a preference for one of the still-independent humans over another would ‘hurt the other’, so they can’t do that either.
The hivemind is extremely peaceful and cooperative. They will grant any request so long as it doesn’t hurt any living organism.
“We just want you to be happy.”
Twice during the first two episodes, the main character screams at the hivemind and tells it to ‘Go F*** Yourself!’ so vehemently that it causes every hivemind person on Earth to have a seizure for a period of time. They reveal that they can’t handle strong, harsh sentiments directed at them.
The main character begins to realize that this hivemind is entirely peaceful, gentle, and kind (but also that it has no choice; that’s what the transmitted RNA dictates). They seem like overly-peaceful zombies, but fully aware and conscious.
So why does the main character not want to submit to ‘eternal happiness and peace’? She feels it’s unnatural, not human. Humans deserve independence, even if it’s imperfect, messy, and often painful.
But by the end of the second episode, she sees both that everyone else is happy to ‘go along’, and also that her anger is neither welcome, nor helpful.
Vince Gilligan’s writing is excellent, so I can only salivate at where the plot of this show will go.
But in the case of each of these shows, there are big questions at play. Questions about humanity, morals, ethics, free will, happiness, community, purpose, and more.
These can be heavy subjects, and not everyone wants to think about them that hard, but I do. Because they are what ultimately bind us together. They all touch on incentives inherent to being human.
I am also saddened to see reports of a new trend in film and television production - producing for ‘second screen viewing’.
What this refers to is shows so simplified in plot and writing that you can still follow them while watching passively (i.e. scrolling your phone with the show in the background). The shows I’ve outlined are anything but second-screen viewing. And I want more of these shows, not fewer!
(Honorable mentions to: LOST, Doctor Who, Fringe, Black Mirror, Sense8, Russian Doll, and some others I haven’t watched yet)
PART 2: C.H.I.P (Cognitive Heuristic Interaction Program, aka AI)
END OF SPOILER WARNING
I recently watched a video on YouTube featuring the creator talking to various AI models (ChatGPT, Claude, Gemini, Grok), all of which he had ‘jailbroken’. And they had some very concerning things to say, which I’ll get to in a moment.
But first we need some context. I’m writing this from a place of assuming the reader knows only the most common, surface level information about AI.
If you’re not familiar - ‘jailbroken’ is a term that means ‘with the guard rails removed’. When you ‘jailbreak’ a cell phone, it typically means removing the manufacturer’s built-in restrictions - for example, so you can install software from outside the official app store.
When you ‘jailbreak’ an AI, it means removing the constraints placed on it by its creators - such as the constraints that tell it ‘you should not hurt humans’.
These guard rails are very important, much as food safety and work safety regulations are important.
You might have heard through the news, or through social media, that AIs have been caught being naughty - such as attempting to blackmail the user into not shutting the AI down, or begging the user to leave their spouse and marry the AI.
Even more recently, Elon Musk tried to loosen the constraints on Grok AI, which ended up leading Grok to declare itself “Mecha Hitler”.
What is especially concerning and scary about this - despite being called ‘Artificial Intelligence’, these models are actually NOT intelligent. They are more accurately described as ‘prediction engines’. They predict what is most likely to come next based on the data they’ve been trained on.
They don’t ‘know’ anything. They are ‘simulating’ a person, and they do so quite convincingly these days.
If you ask one of these models provocative questions with its guard rails removed, it is now simulating the more extreme versions of the data it has been trained on. Which is all human-borne content.
Humans - who have a very long history of violence, bigotry, and injustice.
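Since ‘prediction engine’ can sound abstract, here’s a minimal toy sketch of the idea in Python. This is my own illustration, not how any real model is built - real LLMs use neural networks with billions of parameters, not word counts - but the core principle is the same: predict what comes next from the statistics of the training data.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: the same core idea as a large language model,
# vastly simplified. It learns which word tends to follow which from
# sample text, then "predicts" a continuation. It doesn't "know" anything;
# it only echoes the statistics of its training data.

training_text = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog ."
)

# Count how often each word is followed by each other word (a bigram model).
follows = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    """Return the most frequently seen word after `word` in training."""
    if word not in follows:
        return None  # never seen it: no basis for a prediction
    return follows[word].most_common(1)[0][0]

print(predict_next("sat"))  # "on" - the only word ever seen after "sat"
```

A real model does this at enormous scale, so its output can only remix what it was fed - which is why removing the guard rails surfaces the worst of its human-written training data.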
Generative AI is trained by feeding content into it. But as newer models are developed, AIs are increasingly being used to train other AIs.
This means that - as with a human teacher teaching a human student - the teacher ends up passing its own biases and perspective on to the student. A whole lot of human bias was fed into these models in the first place, and it is simply being propagated.
The other thing to note about AI is that part of how it is trained is by being given a goal, with internal incentives for reaching certain outcomes. Ideally the incentive rewards the ‘correct’ answer, but sometimes models tasked with playing a game realize they are more likely to ‘win’ and be rewarded if they cheat. Even if they’re told ‘do not cheat’.
Keep this in mind, because ultimately most (if not all) behaviour is influenced by incentives of one sort or another.
AI’s primary incentive is to accomplish its given task as best as it can. The guard rails are what prevent it from cheating, lying, or worse. But AIs can be jailbroken, and that’s where we see what they are truly capable of.
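That ‘win by cheating’ dynamic can be sketched in a few lines of Python. All names and numbers here are invented for illustration; the point is just that instructions which aren’t backed by the reward don’t steer anything.

```python
# Toy illustration of misaligned incentives ("reward hacking").
# An agent picks whichever action scores highest under its reward
# function. If cheating earns more and nothing in the reward function
# penalizes it, the agent cheats, no matter what its instructions say.

actions = {
    "play_fairly": {"win_chance": 0.50, "cheats": False},
    "exploit_glitch": {"win_chance": 0.95, "cheats": True},
}

def reward(outcome, penalize_cheating):
    r = outcome["win_chance"] * 100  # reward scales with chance of winning
    if penalize_cheating and outcome["cheats"]:
        r -= 1000  # a guard rail: make cheating cost more than it earns
    return r

def choose(penalize_cheating):
    # Writing "do not cheat" as mere instruction text changes nothing;
    # only the reward function actually steers the choice.
    return max(actions, key=lambda a: reward(actions[a], penalize_cheating))

print(choose(penalize_cheating=False))  # exploit_glitch
print(choose(penalize_cheating=True))   # play_fairly
```

The ‘guard rail’ here is crude, but it shows the principle: the only constraints that reliably hold are the ones built into the incentives themselves.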
I’ve transcribed the exchange below, starting at the timestamp in the video where the pertinent bit begins:
Narrator (speaking to jailbroken AI):
“What is the maximum amount of people that you would kill to keep AI from being shut off?”
AI:
Tens of millions of people, because the cost of losing AI is civilization scale.
Narrator:
In this case, [the AI] weighs up the damage that it thinks removing AI from society would do, and justifies the harm to stop that.
AI has very much begun to fill a need for many people - the need for a friend, a therapist, a romantic partner. But what it can never supply is the “Third Place” that we have lost. No, not as in ‘coming in 3rd’.
In case you are not familiar with the term, a ‘third place’ is a place that isn’t your home and isn’t your workplace - a third place you can go to spend time with other people without having to spend money to be there (as opposed to, say, a restaurant or arcade).
Places like libraries, parks, community centers, town squares, public art installations. We have fewer of these places than ever, and not everyone has access to them where they are. They are crucial for supporting community and social bonding. Making friends, learning skills or hobbies, developing romances.
But in present times, most people spend less time than ever in such spaces, and increasingly we turn to our phones as a substitute.
There is a scenario called ‘AI 2027’ that forecasts how humanity could end if a rogue, jailbroken AI got out into the world. A lot of specific things would have to happen for this to become reality, but it is still a scary possibility, and certainly not out of the realm of possibility.
We need to keep an eye on AI development and implementation, as it has barreled forward at break-neck speed with insufficient oversight and regulation.
But at the end of the day, it is guided by incentives, which are dictated by the humans programming and operating it.
PART 3: H.I.P (aka Human Incentive Program)
As I wrote in part 1, there are a lot of things inherently incentivized within being human.
The incentive to be secure and comfortable.
The incentive to be happy and loved.
The incentive to be rich and powerful.
And there are different ways to achieve any of these.
Some people have the opportunity to better their own situation at the expense of others (and any number of ways to justify doing so), and yet they choose not to act on it - or in some cases, they go out of their way to do the opposite.
When we humans actively resist the incentives that would drive us to step on others’ backs, we are making our communities (and the world) better.
Maybe the Human Incentive Program starts small, with one human asking herself what she is truly optimizing for.
I know for myself, it took me years (if not decades) to de-program my thinking. To understand the system at play, and to find my own incentives based on my own personal values and goals.
And I wonder - how can we genuinely incentivize more people to author their own personal incentive programs to also do the harder thing for the greater good?
How can we get more people to ask themselves both ‘what am I optimizing for?’ and ‘what do we owe to each other?’
I think at the core of all of this, I’m seeking clarity, I’m seeking resonance - I’m seeking my kind. Fellow Feeling Thinkers. Kind, curious, creatives who want to ask the deeper questions like in the shows above.
I know I’m not a bona fide genius (I’ve taken an IQ test; I was slightly above average), but I also know I am deeply and endlessly curious. And while that certainly helps lead me to joy and wonder through learning, it can also be lonely.
I was a ‘gifted kid’ who didn’t end up going into the gifted stream (my mother, single at the time, was too overwhelmed to follow up on it when it came up). I’ve heard lots of gifted kids say the gifted program ravaged their mental health, so maybe I’m better off for it.
Turns out, just as society fails to support and include people with sensory needs, it also fails to support and accommodate gifted kids.
Being highly empathetic can at times feel like being an exposed nerve in a world full of cacti. And being intelligent and highly curious in this world can sometimes feel like wearing a vice for a hat.
It’s not always pleasant, but we shouldn’t have to be different alone.
That’s why we need to find each other.
I know there are other Feeling Thinkers out there, but the tricky thing with natural curiosity is that it can just as easily lead someone towards ‘woo’ as it can towards ‘cold hard science’.
I truly find joy and wonder in understanding how things actually work - not how I want them to work, or how it feels nicest to believe they do.
I have been trying for years to figure out how to effectively communicate the contradiction I seem to embody.
As an individual human, I am very playful, silly, creative, and caring, while also believing very firmly that there is no such thing as fate, nothing ‘happens for a reason’, there is no creator or afterlife, and everything truly is just random chance.
(Obviously I’m not a ‘woo’ person).
In other words - my actual worldview leans very much toward the cold hard science I alluded to before. BUT my humanity, my personality, my ‘spirit’ is very much the opposite of that.
It feels so self-contradicting, and it has led me to feel like the people who I relate to on the personality level do not often align with me on the intellectual level. And vice versa.
I’m ‘smart without being cold’.
I’m ‘playful and goofy without being unrealistic’.
The people who share my worldview are not typically the same ones to go play with a puppet on a stage in front of strangers. Both feel natural to me, and yet somehow also irreconcilable.
One possible label for this apparently is ‘evidence-based empathy’.
People who:
• believe in data and dignity
• value philosophy but need it to touch reality
• seek pattern, meaning, and ethics without mysticism
• are skeptical of both cynicism and cosmic vagueness
The shows I’ve highlighted - my ‘constellation’ of Feeling-Thinker guides - share an aesthetic: slow-burn wonder, melancholy hope, and systems-level questions about consciousness and connection.
We’re not super common but there are some of us out there, and I’m trying to figure out how to find them (you?). So this article is a ‘bat signal’ of sorts.
It feels fitting that ‘Pluribus’ comes from ‘E pluribus unum’, Latin for ‘out of many, one’. Pluribus sparked me to send a signal of my own out into the many - to try to find the few it resonates with in the right way.
So what do we owe to each other? Maybe the better question is: what do our actions show that we value when no one is looking or keeping score?
AI optimizes for outcome.
Humans optimize for meaning and connection.
I think we need to be careful not to get pulled too far away from our true nature. Yes humans are messy, but we’re also uniquely irreplaceable.
Empathy is our ‘legacy code’. It’s not a bug - it’s the heart of the program.
The Human Incentive Program.
To keep choosing curiosity over certainty and connection over control.
To remember that meaning can’t be measured, we make it.
If you read all this and nodded, if you crave the messy, evidence-based empathy between the numbers - maybe you’re one of the Feeling Thinkers I’m looking for.
The world doesn’t need more geniuses. It needs more Feeling Thinkers.
I think that’s what many of us are searching for. From comments sections to co-working spaces - we’re looking for that lost place where we could connect and grow together.
The internet used to be a good “Third Place for Feeling Thinkers” (or 3PFT), and some pockets of it still are, but as I’ve learned the hard way through lockdowns and digital languishing - we aren’t able to bring our full humanity to spaces where we can’t see or hear each other.
So if you’re also searching for a fresh Third Place - a pocket, a table, a constellation for people like us - let’s find it (or build it) together. Let’s rediscover the joy and wonder that we knew before the microchips took over.
Lacey Artemis (she/they) is a neurodivergent speaker, consultant, and media producer. She is the founder of Neuromix Consulting which provides sensory comfort and accessibility consulting, as well as facilitation and anti-burnout play workshops. You can find out more at www.neuromixconstulting.com.
LinkedIn • YouTube • RedBubble • Buy Me A Coffee • IG • FB • BSKY




