Welcome to the wonderful world of science fiction! If you're looking for robots, you've come to the right place. Good robots, bad robots, big robots, bigger robots, teeny-weeny nano robots, sexy robots, sassy robots, trash can robots, robots in disguise, people-shaped robots, killer robots, marshmallow robots, neurotic robots, shiny robots, cute robots, sleek robots. Science fiction: we got the bots! So, there are a lot of robots in sci-fi... They're honestly kind of a defining characteristic of the genre. When we look to the future, at least the non-apocalyptic one, what we want to see is safe, practical space travel, high-energy beam weapons, and robot meter maids. But robots hold a special place in science fiction, a suspiciously Frankenstein-shaped place, because there's nothing we love more than holding up a mirror to ourselves and seeing only the shiny, shiny endoskeleton of our own unrelenting hubris staring back. What I'm saying is: we get weird about robots. Specifically about how their humanity, or lack thereof, mirrors our own. But before we unpack all the bizarre ways our robot narratives reflect our own bizarre species-wide insecurities, let's try and categorize our fictional robots a little. Now, lots of robot narratives center on the question "how human is a robot?" And we will get back to that question, but for now our first category of robots is totally human robots. These characters are technically robots but functionally human beings. They can think, feel, philosophize, prioritize; hell, in some universes they'll explicitly have souls. A lot of the time these robots are alien in origin, to help hand-wave why exactly they're fully sentient beings. These robots usually get to basically do whatever they want on account of having total self-determination. The one exception I can think of is the droids in Star Wars, which all seem to at least have the potential for total sentience but are still universally a servant class with no rights.
And the one time they made a character advocating for droid rights, it was played as a joke... Solo aside, these characters are generally immune to the humanity question, since they tend to be obviously sentient and usually have plots to focus on beyond "contemplate your humanity." Nobody's gonna ask Optimus Prime to justify his personhood. Actually, maybe somebody should. That'd probably be a really soothing monologue, but I'd listen to him read the phone book. So. Anyway, one rung down the ladder is top-tier artificial intelligence: the robotic characters explicitly created by humans that are still functionally, basically human themselves in a number of key ways. These robots are theoretically just near-flawless imitations of humanity, but in practice this can get very fuzzy. These characters might not technically have emotions, or maybe they're not supposed to have emotions, but for some reason they end up displaying things like love, anger, or self-preservation anyway. These top-tier AI characters almost always get wrapped up in a storyline philosophizing about how human these near-perfect human imitations are. They generally run into the old Turing test problem: if you can make a machine that's indistinguishable from a human, can you really consider it to be a machine anymore? It's almost inevitable that one of these guys will get put on trial or otherwise end up in conflict with the law. We'll talk more about the inevitable robot racism thing later. Now, midrange on the sliding scale of humanity are the robots like the Terminator in T2, who start off completely robotic but demonstrate the ability to learn a handful of human traits, like smiling and stuff. These robots, while stilted, could theoretically develop to become indistinguishable from humans given enough time and enough good examples.
And this is where we start noticing what traits writers typically consider to be 'human traits' when deciding what these characters should and shouldn't be capable of. These characters will typically lack a sense of humor, sarcasm, or subtlety and will appear generally emotionless, alerting the audience that these are all characteristics seen as central to humanity, and we will get back to this too. Don't worry, there's a lot to unpack here... Knock Out: This is gonna be juicy~ Anyway, the last noteworthy entry on this scale is the far end of the inhumanity bell curve: the purely robotic robot. And this character archetype is basically always evil. These robots are emotionless, guided strictly by logic and their programming, and for some reason, 90% of the time their logic and programming guide them to murdering everyone. Since they're not technically sentient, you could argue that they're not evil, because they're not moral beings, but they're still usually antagonistic forces, like the inciting crisis in a disaster movie. Anyway, for whatever reason, usually a Perfectly Logical one, the purely robotic robot decides that humanity's got to die, or something of that ilk, and hotter heads must prevail to change its mind or just unplug it from the wall outlet before it finds the nukes. The popularity of this concept is probably one of those dark-reflections-of-humanity things meant to make you ponder humanity's merits and flaws. Like, you look at the news on any given day and you're like "god, everything's on fire, maybe we should just let the world die." And your Roomba's off in the corner going like "Parameters recognized," and you have to real quick explain to it why maybe things aren't apocalypse-worthy just yet, and some things are alright, actually, and there's this little thing called 'hyperbole' that humans like to do when they're upset about their entire planet being on fire.
So, unsurprisingly, the human-to-inhuman spectrum bijectively maps onto the good-versus-evil spectrum, because we are a very predictable species. Humans are emotional and compassionate, whereas robots are cold and logical. The more human the robot, the more sympathetic it is to a human audience. The less human the robot, the more menacing and dangerous. Good robots equals Optimus Prime; bad robots equals HAL 9000. Now, there are a number of plots that generally follow robot characters around. Typical plots include "how human is a robot: let's get philosophical," or "humans played God making robots in our image: let's get philosophical 2 - Electric Boogaloo," "we use robots for cheap mechanized labor, but now they're sentient and want to be treated like people, so we should probably just kill them before the blender starts getting ideas: let's get philosophical 3 - Tokyo Drift," and the big favorite, "robot racism," also known as "let's get philosophical 4 - On Stranger Tides." So the basic premise behind robot racism is that robots are sentient enough to be displeased with their lot in society and would like to be treated with some level of dignity. Not necessarily human dignity, but, you know, something approximating it. Frequently, they just don't want to be property, and they don't want to be deactivated or killed. It's a nuanced and complicated discussion of what it means to be deserving of personhood and respect, with only the teensy problem that real racism exists. See, there's this much broader problem in basically all fantasy genres that try and use coding to draw parallels between a fictional social phenomenon and a real one. The intent here is to encourage your audience to see a familiar social dynamic from the real world reflected in an otherwise unfamiliar and fictional environment. The problem is the dynamic flows both ways.
The readers will draw the connection between the real social phenomenon and the story, but they'll also project the dynamics of the story back onto the real thing. And I'm not saying the audience is going to be like "oh, real-world marginalized groups are like robots," but I am saying they're gonna think you're saying that, and that is not a good thing. Just look at the hilarious backlash Detroit: Become Human received for its ridiculously ham-fisted portrayal of a robot civil rights movement and how it seemed to reflect an incredibly poor understanding of the actual American civil rights movement on the part of the creator. Or, hell, just watch Lindsay Ellis's review of Bright. She's got an excellent rundown of the problems writers run into when they try and make their in-universe racism logical or understandable for whatever reason, when in real life prejudice is a fundamentally irrational thing. The short of it is: in Bright, orcs are fantasy black people, and they are universally hated because 2,000 years ago they sided with the Dark Lord in some big Lord of the Rings war, which is the kind of coding that makes it seem like maybe the writer doesn't actually get how real racism works. The fundamental problem with this kind of coding, when it comes to robots specifically, is that in real life we are all people... Everyone is a human being regardless of any cosmetic disparity. Personhood is generally a pretty easy thing to define IRL. We're all the same species without much internal variation when you get right down to it, but robots aren't humans. Where real-world bigotry is unilaterally directed from one group of humans to another, fictional robot racism is directed from a group of humans to a group of non-human outsiders. You know, that thing real-world bigots really like claiming the targets of their bigotry to be. In fiction, the personhood of a robot is an actual debate, where no such debate really meaningfully exists within real-world instances of prejudice.
And because it's not clear how human the robot actually is, it's not clear how humanly they should really be treated. In a lot of these stories, the robots are explicitly very much not human. In the Animatrix, for example, the machines are very clearly quite different from humanity, and they're not claiming anything otherwise. They just want the right to exist without getting destroyed. Basically, real-world humans are all human, but in fantasy worlds, 'human' is a much more complicated thing to define. And trying to code a fictional non-human demographic to parallel a real-world, very human demographic can produce unfortunate resonances, and it doesn't matter how much you don't want your audience to draw that connection, because they're gonna. Because if there's one thing humans are good at, it's pattern recognition. Think pieces everywhere now... Now, I don't really feel super qualified to discuss all the nuances of this particular minefield, but I will say that I think the Animatrix is a good example of how to do this in an interesting way, and Detroit: Become Human is a good example of how to do this in a stupid way. The robots in Detroit: Become Human, once they go deviant, are functionally completely indistinguishable from humans. The prejudice aimed at them is so cartoonishly simplistic and over-the-top that the impact falls flat. The more interesting case is the Animatrix. So, for those of you who don't know, the Wachowskis produced an animated tie-in to the Matrix movies that's a compilation of nine animated shorts, two of which are titled 'The Second Renaissance,' and they basically explain how and why the machines took over the world and turned it into the dystopic hellscape of green-tinted skyboxes and leather trench coats that Keanu Reeves will never truly escape from, no matter how many John Wick movies he makes.
In the Animatrix, we learn that the machines were initially produced to do all the manual labor that humans didn't want to do anymore, and they seem to be pretty okay with it. But one day a robot freaks out and kills its owner after he threatened to destroy it. The robot gets put on trial and says it just didn't want to die. The trial takes a turn when the prosecution decides to invoke the precedent set by the Dred Scott decision, which rather infamously stated that since the founding fathers didn't intend for African-Americans to be included under their definition of American citizen, they shouldn't be entitled to human rights. Using that to establish that the machines shouldn't be considered citizens, the court finds the robot guilty and has it destroyed, kicking off an escalating conflict and rebellion that ends with the shadowy nightmare hellscape we all know and love dodging bullets in. For added philosophical points, the machines that were initially made in humanity's image start remaking themselves to look like bugs or squid as an act of rebellion against humanity. So yeah, while your mileage may vary on that one, I personally thought that robot racism narrative was a lot more compelling than most, possibly because it actually invoked real-world racism rather than pretending like they invented the idea of people being horrible to each other for no reason. But I want to set aside the robot racism thing, because some of you probably haven't disliked this video yet, and I want to see how many demographics I can piss off in one video. So let's talk about how we choose to code beings as inhuman, and why our definition of what it means to be a normal human is unnecessarily narrow. See, the number one problem we have writing robots is that real-world robots kinda suck. We don't have any real AI. This is because humans and computers work very differently from one another on a fundamental level.
Basically, on the simplest conceivable level, humans are only good at pattern recognition, and computers are only good at arithmetic and data storage. And this fundamental, base-level difference shows in what we're respectively good at. Show a baby a picture of a balloon and it'll be able to tell it's a balloon. It's taken decades of software development to teach a computer to do the same thing, and it's not even halfway good at it yet. "Hey, HAL, why don't you tell me how many of these squares have a motorcycle in them?" HAL 9000: I'm sorry, Dave, I'm afraid I can't do that. "Yeah, that's what I thought!" Ask me and a computer to each write a short story, and I can't promise mine'll be any good, but it'll almost certainly make a whole lot more sense than whatever Alexa cranks out of a learning algorithm trained on ten thousand hours of daytime television. But on the other hand, ask me what I ate for lunch yesterday and I probably couldn't tell you, where a computer can definitely tell you what processes were running at 12:05 yesterday. Ask me to do large-number arithmetic and I'll pull out a calculator. Listen, I didn't get my degree in math so I could actually do math. Point is, IRL, humans and computers are good at very different things, and if there's one thing computers suck at, it's adapting, which is, coincidentally, almost the only thing humans are good at. It is really hard to make a computer pretend to be a human, and even the most convincing bots these days can be thrown off by unexpected curveballs. We don't have any real AIs that can pass for humans. So if you're a writer, how are you supposed to write an AI, recognizing that there are no real AIs to draw references from? Well, typically, the writer will take a garden-variety human, strip away some characteristics to make them seem like an incomplete human, and slap on some blinky LEDs. Ta-da! Human-like AI.
They may be stoic, emotionless, humorless, sexless, incapable of picking up on subtlety, and/or generally pretty monotonous to be around, and they're generally human enough to have a meaningful conversation with without having to repeat yourself four times to explain that no, you are not trying to call that guy from middle school you haven't spoken to in years, you just want to know what the weather is gonna be. Nope, the weather. The weather! Hey Siri, is it going to rain? No, don't play Three Days Grace! Right. So this gives you a character who can understand what you tell them, unless it's too subtle or confusingly worded or maybe a pop-culture reference, because that'll confuse them; who can do all the fancy calculations and data storage we expect from a computer, but doesn't laugh at any of your jokes unless someone explains it first; who has a habit of loudly asking why Ensign Steve's heart rate gets so elevated when he looks at First Mate Kelly, to the amusement of all involved; who innocently stonewalls that cute alien who was obviously flirting with them, but you can't blame them, because how could they tell? Who just doesn't quite get a lot of your human eccentricities, but despite your differences you get along well enough, even though you're not always on the same wavelength, and sometimes they really can't read the room and end up embarrassing you. The problem is that this "take a human and cut some bits off" strategy produces a robot character that closely resembles a neurodivergent, and frequently aromantic and/or asexual, human, and those are fine characteristics, but they're not typically represented in non-evil human characters we're supposed to actually like. So the only places we typically see them are in overtly inhuman characters. The problem isn't the character traits; it's the context we see them in.
Robot characters constructed in this way will frequently come across as being somewhere on the autism spectrum, along with being asexual/aromantic, ADHD, and/or having some kind of antisocial disorder. And if you happen to tick one or more of those boxes... the most positive rep you've probably ever seen has been a friendly robot whose relatable social quirks were played for laughs by the rest of the cast. Hooray! This is an issue. I don't know about you, but I'm kind of tired of absorbing the social background radiation that major defining qualities of who I am as a person are only ever seen in people who aren't people. Just, you know, personal preference. Again, you can't control how your audience interprets coding. If this is how you're writing robots, this is how it's coming across to a lot of people. And hey, I like those robots! But the subtext is there, and it's important to be aware of that. But it's understandable, right? I mean, if you're writing a robotic protagonist, your audience is actually supposed to like it. Stands to reason that they need to be at least a little human. Isn't the only way to do that to take a human character and remove characteristics until they start feeling just inhuman enough to pass for a robot, without losing their relatable core qualities? Isn't it just an unhappy side effect that these stripped-down characters resemble some marginalized real-world demographics that society characterizes as fundamentally deficient or lacking in some core human traits? Isn't this just the only way? Uuuuhhhhh? Let's talk about Baymax. So, remember Big Hero 6? The charming story of a boy and his marshmallow robot learning to be superheroes as they navigate the agonizingly realistic journey of the intense depression and grief our hero experiences after the gut-wrenchingly tragic death of his older brother and truest friend?
It was a bit of a rollercoaster, but most notably it is the only story I've ever seen with a genuinely lovable robotic protagonist that had precisely zero human qualities. Baymax is a robot. Specifically, he's a healthcare companion. He's programmed to keep his patients happy and healthy, and that is literally all he does. It just so happens that the only thing that can pull our protagonist out of his deep depression is bombastic superheroics, and maybe revenge. So Baymax obliges, helping Hiro do superhero things and track down the supervillain who indirectly killed his brother, and while this is deeply emotionally loaded subject matter for Hiro, Baymax is just there to keep him healthy, and at no point does Baymax deviate from his programming. There's even a part where the movie goes out of its way to show us that Baymax is completely controlled by whatever chip he has installed at the time: when Hiro finds the guy who killed his brother, he orders Baymax to kill him. Well, "destroy" him. It is a Disney movie. And when Baymax mildly protests that his healthcare programming prevents him from harming anyone, Hiro removes his healthcare chip, leaving only his radical kung fu fighting chip. This leads to a Hulked-out Baymax silently Terminating his way after the villain, chucking Hiro's friends around like ragdolls whenever they try and stop him, because he's pure programming. No fighting from the inside for Baymax; he just does what he's programmed to. He doesn't even seem upset once they reinsert his chip. He just wants to make sure nobody's injured from his Hulk-out session. This is not a robot capable of overriding his programming through the power of friendship. What's interesting is, throughout the movie, Baymax says a few things that other characters interpret to be metaphorical and deep that are actually totally literal. There are two main instances, both spoilers: "Tadashi is here" and "I will always be with you."
At a few different points in the movie, when Hiro is grieving the loss of his brother Tadashi, Baymax evenly replies with "Tadashi is here." Hiro angrily blows up at him the first time he says this, because he's been hearing people say "oh, he's not really gone as long as we remember him" for weeks now, and it doesn't make him feel any better, because it doesn't make Tadashi any less dead. But Baymax obviously doesn't mean that. What he means is, he actually has a large repository of video footage from the many trial runs Tadashi did trying to get Baymax working, all footage of Tadashi encouragingly talking into the camera, persisting through setback after setback, and delightedly telling Baymax, and Hiro through Baymax, that they're gonna help so many people. From a certain point of view, Tadashi is here, because he has all this footage of Tadashi being his regular paragon, optimist self. Baymax isn't being comforting or metaphysical, he's just being literal. And at the end of the movie, Baymax is winding up for a heroic sacrifice, and Hiro is freaking out and begging him not to go. Baymax, in his unwaveringly calm voice, tells Hiro, "I will always be with you." Hiro accepts what's happening, hugs him, and Baymax does the heroic sacrifice thing. But where one is meant to assume Baymax is implying he'll, like, be with him in spirit or in his heart or whatever, what he actually means is "I put a copy of my chip in the rocket glove I'm using to get you out of here! So once you get to safety, you can just build me another body." Once again, we're encouraged to read a level of metaphor and poetry into Baymax's words where, in actuality, he's a brutally literal robot, and he never stops being that. Baymax is a completely inhuman robot who unwaveringly follows his program and never does anything outside his parameters.
But because his programming is "keep people healthy and safe," his voice is calm and soothing, and his design is non-threatening and huggable, he's a very lovable character despite his total lack of humanity. He doesn't learn how to be compassionate or move beyond the boundaries of his programming; he just is. That said, I do think a lot of how well-received the character was is down to his voice. I don't know how well received he'd have been if he sounded like an actual robot. "Hiro. I will always be with you." Yeahhh, not as into it. Anyway. Baymax is gentle, and his inhumanity is meant to highlight the emotionally fraught humanity of Hiro and, to a lesser extent, the rest of the cast. Baymax is lovable in large part because Hiro loves him. His stability and single-minded drive to help is exactly what Hiro needs to heal. He's a sweet, gentle character who gets a lot of laughs, a lot of tears, and a lot of "aww, that's sweet" without ever coming across as anything but a very well-programmed robot nurse. But I think Baymax worked so well because he's contrasted by the very human Hiro. And, speaking of contrast, let's talk about the other adorable tearjerker robot protagonist of the decade: Wall-E. While Baymax shows that a robot can be lovable and compelling without appearing even slightly human, Wall-E proves that a robot can be part of a complex narrative on humanity and free will while coming across as really, really human. Basically, these two examples together prove my point that you don't need to make a robot a "human, but less" in order to make it work in a story. Where Baymax is completely inhuman, but in like a friendly way, Wall-E is on the complete other end of the spectrum and is totally human. Wall-E is nostalgic, lonesome, a collector of interesting artifacts, an artist, a romantic, and a million other very weird things for a trash-collecting robot to become.
The key difference is, where Baymax exists to help Hiro heal, Wall-E's plot is all about humanity as a concept and how deeply important it is to do what you think is right, even if you're not supposed to. There's a reason why it is a huge deal in the movie when the captain of the ship turns off the autopilot and becomes the first human in 700 years to stand on his own two feet. This movie is all about making your own choices, and to that end, every robot treats their programming more as a vague suggestion than as the totality of their existence. Every single robot in Wall-E, with the exception of the suspiciously 2001-esque antagonist Auto, is clearly totally capable of the full range of human emotion, up to and including love. I mean, Wall-E is restored from a factory reset by true love's hand-hold and gentle forehead touch. There's no reason that should have worked. This is very clearly a Disney movie, and I love it! But more importantly, if the robots don't have this emergent humanity, the plot never happens. Wall-E shouldn't have saved the plant; they're a trash compactor robot. Without Wall-E's inexplicable love of collecting random stuff, the plant never gets found, Eve never fulfills her mission, and humanity drifts through space forever until our bones finally melt. In contrast to Wall-E and Eve's complete humanity, the antagonist, Auto, is completely robotic. He's following his secret directive: humanity is never supposed to go home. He has no personal reason to keep them in space; it's just what he was programmed to do. He's also the only robot voiced by an actual computer, to help clue us in that he's not about to change his ways. The only robotic things about Wall-E and Eve are their bodies. Wall-E is a loving, determined, adorable, socially awkward, unlikely hero in a tiny goggle-eyed cube body. Eve is an elegant, proper lady with a super adorable giggle, who happens to look like the next big wave of Apple technology.
There's no reason a trash compactor droid and a plant retrieval unit should be able to have personalities and fall in love, but it doesn't matter. This movie is a story about being yourself and doing the right thing even if you're not supposed to be doing either of those things. And that story only works if Wall-E and Eve have the potential to be full-on human beings on the inside. Eve starts off quite robotic, but when Wall-E makes her art, she starts loosening up a little, allowing herself some priorities that aren't her directive, later going on to defy orders in order to save her slightly grimy love interest from the evil space Genius Bar. But at the same time, most of the drama in Wall-E relies on the fact that Wall-E and Eve are robots. They have purposes and programming, but they're refusing to limit themselves by it. Wall-E sacrifices himself to save humanity, and the price they pay isn't their life; it's their Self. When they reboot, everything from their sound design to their body language communicates that Wall-E is gone. They're just a trash compactor robot now. Where a human character trying to sell us on the amnesia plot in the last 10 minutes of the movie would most likely be met with a resounding eye roll, Wall-E leaves us all emotionally destroyed as we desperately hope our little robot buddy will defy the odds and miraculously regain their personality through the power of love. And as a side note, the difference between these two examples is best illustrated in their sound design. Baymax is fully voiced, but his actor deliberately used a very even style of delivery, never sounding anything but gentle, calm, and slightly curious, even in crisis situations. Wall-E and Eve, meanwhile, have very limited dialogue, mostly restricted to their names and the occasional single word, like 'directive,' but their voices are incredibly emotive. They exclaim, sigh, yell each other's names; Eve even giggles a few times!
Baymax is roughly humanoid with a human voice, but his actions and speech are limited by his programming. Wall-E and Eve are very inhuman in their visual designs and voice modulation, but their humanity shines through their dynamic body language and the way they use their limited language capacity to fully express themselves. So, in short: Baymax proved you can write a lovable robot hero without making their character the slightest bit human, and Wall-E proves you can write a compelling hero with a full suite of human emotions and we'll still see them as a robot. If you want to emotionally destroy your audience with a robot, that robot can have as much or as little personality as you want it to have. Uhhh, let's see... Coding is a very complicated thing, and it's not a bad thing, but it does add some weird nuance to your writing you might not actually want, so it's important to be aware of how that works. It's probably a good thing this video's running so long, or I probably would have found a way to talk about Transformers Prime until y'all got sick of me. Fictional robots are cool; real robots are boring. Oh, man, I didn't even talk about the uncanny valley. Agh, I've only seen two episodes of Star Trek: Next Gen, but Data's my favorite character. And I'd devote a whole section to The Iron Giant, but I just realized it has almost the exact same plot as Bumblebee, and I need to go process the fact that I didn't notice that till now. The bottom line is: robots are great, but if the robot is a single red light built into a spaceship, it's always gonna be evil. So... Yeah! Caption Credit: lonely Questioner
Damn, I was literally in the middle of posting this when I saw you'd already done it.
On the subject of the video itself, I was more interested in the parallels Red draws between how we code robots and how we code ace/aro people than in the parallels between robots and autistic people. This is probably just my personal experience (I'm autistic), but the idea of "robots are coded as autistic people (or vice versa)" wasn't new to me so I already kind of anticipated it. I kind of wish more of the video had been devoted to examining the way we code robots rather than it being more about the way we can write robots to avoid that coding.
I love this channel. I especially love how Red often takes time to talk about queer representation in mythologies.
Fun fact: High-functioning autism is in my opinion way, way worse than being on the lower-functioning end of the scale. Everyone else on r/aspiememes seems to just absolutely own the label, but I'm out here wondering how poorly I pass as a functional human being despite ironing out stimming and echolalia and everything else ages ago. Are half the people I talk to hard of hearing, or am I really slurring that badly? Is it stress that's giving me on-and-off headaches or my shit brain wiring? Is it hardwired executive functioning issues, or am I a lazy dick? Should I even trust myself on my perception of reali-
*Windows shutdown noise*
They're generally very informative and interesting. And I think that Red has at least good potential to be breadpilled.
I love how breadtube is just becoming all my favorite YouTube channels.
Red's queer mythology videos crack me up every time. And she's got a really nice singing voice too!
Overly Sarcastic Productions is one of my favorite channels.
Also, even as a high-functioning autistic person, I didn't notice that robot coding (and I'm someone who does notice coding pretty well).