Sunday, September 26, 2021

- liar!: this is an exploration of an ironic use of the first law, using the mechanism of a mind-reading robot that tells white lies to keep humans from getting their feelings hurt. i'd like to pull something a little deeper out of it, but it's not there; it's just an ironic plot twist. asimov might be poking fun at astrology a little. robots apparently malfunction in the face of contradictions, but that is never fully explained, which is a problem, given that the framework of decidability theory certainly existed at the time. calvin's hatred at the end is pretty visceral and not very appealing.

- satisfaction guaranteed: you can see where the plot is going almost immediately, so reading through it is a question of allowing asimov to go through the motions. what comes out is an exploration of the shallowness of 50s culture, and of the social darwinism hardcoded into it, and it is indeed easy enough to imagine a lonely 50s housewife falling in love with a suave, housecleaning robot - even if a lot of the social codes and rules are so arcane nowadays, so lost in the mists of time, that the context of much of the story is likely to be lost on a modern reader. i think i can reconstruct a little context, though: the 50s were both the period of wife-training to fit these socially darwinistic ideals and the period when there was actual mainstream discourse on the plausibility of replacing women with robots - and the idea was always about doing away with them as obsolete. so, what asimov is doing here is inserting a little ironic twist, in having the robot replacement end up fucking the wife, which reverses the source of inadequacy. but, this is all a little obscure, 70 years later...

- lenny: so, lenny is an autistic robot, the result of a manufacturing malfunction, and asimov tersely explores some of the social relations around that. the corporation wants to do away with it, but calvin wants to study it, because she wants to teach it how to learn - something robots couldn't do in asimov's universe up to that point. so, lenny is a robot free of instinct that needs to be taught what it knows, the way mammals are. asimov is grappling with a concept of artificial intelligence here, and this becomes the main plotline moving forwards, although the story was actually written last (and may even have been written to introduce that ai narrative, as there is really nothing else to it).

- galley slave: this is a short whodunnit in a sherlock holmes style, which is how calvin is frequently deployed. asimov just barely touches on the opposition to robots, in setting up a disgruntled sociology prof that's willing to suicide bomb his own career in order to take the robots out of service. again, i'd like this to be more profound than it actually is.

- little lost robot: a robot, after being told to get lost, becomes psychologically unstable and threatens to destabilize a fleet of robots that had been slightly modified for production - a typically absurd, yet somewhat realistic, joke of a plotline from asimov. it's up to calvin to use logical deduction from the robot axioms to figure it all out. again: there's not much else to this.

- risk: more empty plot. throwaway.

- escape!: this brings in the kind of obnoxious, johnny five-type robot seen in short circuit and other films, one that does things like quoting old tv shows and radio broadcasts, but asimov presents it as a robot grappling with absurdity, on command. it is otherwise a silly story about travelling through hyperspace and coming back.

- evidence: the next two stories feature a politician named byerley. this is also plot-heavy, but it's more amusing - can you prove you're not a robot? well, just as well as you can prove you're not a communist, right? this was published in 1946, right when the post-war euphoria was settling into resignation over a long conflict with the soviets, and asimov's sardonic wit foresees something of interest, here. as usual, his caricature of the anti-robot opposition leaves a lot to be desired, in terms of constructing an actual discourse.

- the evitable conflict: this is a little heavier, finally. written in 1950, it has strong shades of being a reaction to 1984, but asimov is imagining a future where "the machine" (a euphemism for a centrally planned economy that is of course run by robots) is in control of a globally interconnected economy where the contradictions of capital have withered away, thereby rendering competition irrelevant, rather than one where authoritarian governments are in control of a globe ravaged by perpetual war. so, this future is one of peace due to the robot-planned economies, and not one of competition and war. as in the orwellian universe, and apparently in reaction to it, the world is split into regions, but asimov splits them slightly differently - oceania has absorbed eurasia (called the "northern region"), leaving eastasia and the "disputed" region as separate global souths, and what he calls "europe" (the geographical space inhabited by the roman empire at its maximum extent, including the currently muslim regions) as a proxy of the north. operating between these regions is an anti-robot "society for humanity" that sounds sort of like freemasonry, if i wanted to attach it to something in real life. and, the capital of the world government is new york city - perhaps in the old united nations building.

he then briefly explores the four different regions via their representatives, attempting to project a concept of what they may be like, in relation to their views of the machine. so, the east is highly productive (and obsessed with yeast as a food product) and reliant on the machine; the south is corrupt and inept, and reliant on others to use the machine for them; europe is inward and quietly superior, and willing to defer to the north regarding the machine; and "the north" (an anglosphere + ussr superstate) is in charge, but is skeptical about the ability of the machine to run the economy on its own. he also seems to suggest that canada is running this northern superstate, which should probably be interpreted as comedic.

if asimov's intent is to provide an alternative path that marxism might follow, this is curious, as asimov is not generally seen as a leftist [along with russell, he's a sort of archetype of early to mid century humanistic, science-first anglo liberalism]. i mean, he explicitly states that this is a future "post smith and post marx", but then he brings in an automated, centrally-planned economy, and that just means marxist, to a marxist - the left sees that conflict as artificial, so if you end up with something that walks like communism and quacks like communism, then it's just plain old communism. the idea of technology absolving the contradictions (which is what he says, almost verbatim) isn't some kind of esoteric dialectic; it's the central point of marxist historical materialism. so, i mean, he presented it in a way designed to avoid the house committee on un-american activities, but you can only really interpret it a single way - it's a projection of a communist future, with robots in charge of a centrally planned economy. and, his future is one of peace, and not one of war. but, the quasi-masonic society for humanity, full of rich and powerful industrialists and financiers, wants to undo it and, presumably, bring back a market economy.

so, what asimov is setting up is a world where you have some kind of elitist masonic capitalist resistance to a robot-controlled technocratic marxist society, where there is world government and total peace. and, that's almost a prediction of atlas shrugged, although asimov is on the side of the robots, as always.

calvin then appears and seems to finally represent her namesake, in explicitly articulating a modified historical materialism, where the masons have no chance of success, because the robo-marxists will constantly adjust. the politician, byerley, finds that to be ghastly; the robopsychologist, calvin, thinks it's salvation.

these are the kinds of stories by asimov that i like, but all he does here is set up a story, without telling it. in terms of a reaction to orwell, the text is too short to allow for a decision as to whether it is more predictive than 1984 or not.

- feminine intuition: this is a later piece that seems to be a sarcastic reply to some critiques of susan calvin as a character. i actually agree with asimov, via calvin - the entire critique is daft, and this is a fitting way to kill her off. however, when you read the text in the order presented in the complete robot, you also get a sequence of progressive humanization of the robots over time. that fact makes this story worth keeping in sequence, even if its point is to let calvin smack some third-wavers on the knuckles with her cane.

- ...that thou art mindful of him: this solves the problem that us robots has long had about how to market robots to people. the solution is to create robots not in the imitation of women [as in the previous story] but in the imitation of animals, and to solve practical problems, like pest control. i have to admit that this sounds like a good idea, although i'm not sure that it leads to the replacement of carbon with silicon, in the end. asimov builds up the humanization of robots here a little further by replacing the robotics laws with humanics laws, setting up the last story:

- the bicentennial man: this finally addresses the old problem of machines becoming human, and projects us robots many centuries into the future, using the mechanism of a robot that outlives several generations of the family it was sold into, and then wants to die with it, to prove it's really human. marvin minsky also seems to make a cameo, here, in the form of a robopsychologist that is proven wrong in the future. asimov goes over a lot of old themes here [the mind-body problem, the liberation of robots as an allegory for the liberation of blacks, etc.] in what is an apparent thread-tying process, but he ultimately doesn't succeed in explaining what is driving this robot to act so irrationally. as humans, we may be expected to think this makes some kind of sense, due to emotional bias, but i can't really make sense of it, myself. i can understand why a robot might want to be free. i can't understand why it would want to be human, at all costs - including its death. i think asimov was going for the jugular here and kind of fell over and kneed himself in the groin, instead - if this is his final projection of what becomes of robots in the future, it's unsatisfying, to say the least.