Gridlinked by Neal Asher was chosen for the citizens of Chester to read and discuss at the Chester Library Science Fiction and Fantasy Book Group in February. Those who enjoy a James Bond-meets-Space Cowboys shoot ‘em up will find it suits them. The plot isn’t superficial: there are complexities and twists, and the gadgetry, while not original to SF readers, is put to novel uses. Some complained that the characters are two-dimensional, and many are. Ian Cormac, the special-agent protagonist, has speech and behavioural mannerisms too similar to those of one of the antagonists, John Stanton. The latter, although a villain, is possibly the most likeable character and displays tangible grief and emotion.
Rather than deconstruct Gridlinked, I want to discuss the real question in the book: how human can a robot be? Cormac has his intricate connection to the Gridlink removed because the instant access to a powerful future internet inside his head was dehumanising him. He was becoming cold, frigid in the way he treated people. On the other hand, the Artificial Intelligence androids in the book are remarkably human in looks and behaviour, even though they are stronger and built to survive in environments that would kill humans. Yet their brains are computers and so not human. What is it that makes us human? Philosophers have asked this for millennia, but now, with androids just around the corner, it becomes more relevant. Humans have self-awareness, it is said. One test of whether animals have self-awareness is to place a mirror before them. Cats have been seen to preen themselves while looking in a mirror, so that’s a positive indication. Dogs either ignore mirrors or go round the back of them to see where the rest of the ‘other’ dog is. So maybe canines lack the self-awareness that cats possess. Tricky to know for sure. If a robot or sentient Artificial Intelligence gadget (even a sentient space suit, as in my short story The Judgement Rock, published in Screaming Dreams in 2008; download for free here) displays behaviour beyond mere pre-programmed activity, might it have a right to life? Many humans would insist such AI should have emotions, and those are an animal attribute. Perhaps, but my training in cognitive psychology as a young trainee teacher taught me that emotions might be an outcome of our complex neuron web, something learned through interaction between memories and the reactions of other beings. Like love, then? Who knows. Then the religious will argue that humans – but not our canine pals – have souls and so are superior to future androids.
Even if you could identify the existence of a soul as separate from a self-aware emotional state, would that give humans the moral right to end the existence of a sentient being? I’d say not, and that’s the main reason I am a veggie, though there might be survival situations in which, in self-defence, another being has to be extinguished.
All thought-provoking stuff, but sadly it is not argued through and is only hinted at in Gridlinked.
As I said, the book is a shoot ‘em up. In the right mood I can enjoy gung-ho action movies like the James Bond series this book so resembles. On the other hand, I don’t enjoy reading them. Reading takes longer than watching a film, and it gives me time to reflect more. What I think about then troubles me. When many beings die for superficial reasons such as vengeance, theft, or king-of-the-castle behaviour (e.g. extending one’s borders), it irks me and I stop enjoying it. I worry about other people being indoctrinated into thinking such mass killing of men, women, children, and sentient others is okay for trivial reasons. Books take time to read and digest, and in my opinion they carry more responsibility to make ethical statements than a quick adventure entertainment film. Am I being too high-horsey? Probably. Have I written violence and morbid death scenes? Yes, here and there, but I haven’t written a whole novel of shootings and space-cowboy killings. Oh, but I’m in trouble. ARIA: Left Luggage kills six billion people with infectious amnesia. Oh dear. Like all apocalyptic stories it is more than the sum of those deaths. I can justify it because it is a logical conclusion from the premise of the virus, and the focus of the story is on survival rather than the horrible deaths from forgetting everything, including medications, reading, writing, and talking.
Speaking of ARIA: I am doing a reading, chat and signing as part of the Wigan & Leigh Literary Festival, Monday April 8th, 2–4pm in Wigan Library, if you are around. Failing that, maybe you have family and friends up in the vicinity for me to meet and greet.
Plug
ARIA is on the front page of http://bibliophilia.org with link to trailer and buying links.
Sales of How to Win Short Story Competitions are steady. Get yours here.
Exit, Pursued by a Bee is at http://geoffnelder.com/exitbee.htm. Several readers have pointed out recently that a principal notion in that book is being proved true: i.e. that the universe might be chaotic but that the Earth sits in a kind of bubble of stability. In Exit, that stability is shaken when alien artefacts leave. Just shows that fiction might not be so unbelievable after all.
Join me on twitter at http://twitter.com/geoffnelder

A most profound question. At last I have something to chew on and maybe achieve a new blog post — time I got my brain working again, and fingers typing, once more!
You should read the current (23rd Feb) edition of New Scientist. Self-awareness is a surprisingly superficial thing. Experiments suggest that many of the decisions we think we take are actually made by non-self-aware parts of the brain, and we only subsequently post-rationalise them as having been conscious decisions.
Thanks John. My online subscription to New Scientist has run out but I’ll pick up a copy – good alternative reading to fiction, and it often triggers ideas for an SF story in me anyway 🙂
We still don’t have any idea of the capacity of the brain. I believe we use only about 10% of its potential. Has it a power that could move objects, kill by looking, enable its owner to levitate, even fly? Absurd suggestions? Maybe, but just think back a little. Who would have thought in 1906 that man would land on the moon, or send a spacecraft to Mars? Therefore, adding the fact that humans have emotion (and so do other animals), I doubt such a faculty can be ‘manufactured’ for androids/robots. We can give them the ability to diagnose and make decisions, but can such a being experience sadness, love, hate, happiness? I doubt it.