Ted Chiang redefines how everybody will write about artificial intelligence

Science fiction's full of wishful thinking about artificial intelligence: It'll spring up on its own. It'll become smarter than us in no time. Ted Chiang's new novella, The Life Cycle of Software Objects, will change how you think about A.I.

Spoilers ahead! Although I'll try to avoid giving away anything major.

And not only that, but Chiang's longest work to date is pure idea crack. Writing at greater length doesn't make the award-winning short-story writer spread out his legendary inventiveness and gift for challenging the reader — if anything, he goes into overdrive. The Life Cycle of Software Objects keeps surprising you. Not just in the sense that you think the story's going in one direction, and then it suddenly veers in a new, totally logical, direction. But also in the sense that the story isn't really about what you think it's about. New ideas, new ways of looking at the conundrum of artificial intelligence, keep coming up, yet they feel organic to the story and the characters.

Anybody interested in artificial intelligence must read Chiang's novella — but the same goes for anybody who cares about science fiction and wants to see it done well.

I'm not an expert on A.I. by any means, but I had the sensation, while reading Life Cycle, of encountering a reality check. I kept nodding and thinking to myself, "Yeah, that's how it would actually be." The book just feels like the work of someone who's worked in software development and spent time in virtual environments like Second Life. The development of A.I. in the book feels totally believable, and I honestly think all future A.I. sagas will have to be influenced by Chiang's work — which seems, in turn, to be influenced by some classics like Rudy Rucker's Ware tetralogy and Amy Thomson's Virtual Girl.

So what's Chiang's theory of A.I. development? The book's blurb does such a great job of explaining it, I'm just going to quote from it:

What's the best way to create artificial intelligence? In 1950, Alan Turing wrote, "Many people think that a very abstract activity, like the playing of chess, would be best. It can also be maintained that it is best to provide the machine with the best sense organs that money can buy, and then teach it to understand and speak English. This process could follow the normal teaching of a child. Things would be pointed out and named, etc. Again I do not know what the right answer is, but I think both approaches should be tried."

The first approach has been tried many times in both science fiction and reality. In this new novella, at over 30,000 words, his longest work to date, Ted Chiang offers a detailed imagining of how the second approach might work within the contemporary landscape of startup companies, massively-multiplayer online gaming, and open-source software.

So Chiang's story is all about A.I. that develops in a similar fashion to regular intelligence. One of the two main characters, Ana, is a zookeeper who's studied software testing. She gets hired by a software company, because of her zoo background, to help train its digital life-forms, and training baby artificial intelligences turns out to be remarkably similar to training baby animals. The "Digients" bond with their human owners, they have a "reward map" that allows them to be bribed with digital food and other treats, and in many ways they behave just like human children. The other main character of the novella, Derek, is a designer who creates the faces of the Digients and tries to make them as expressive as possible.

A big part of the twists and turns of Life Cycle comes from two sources: The evolution of these Digients, who do keep getting smarter and more fascinating, and the normal cycle of technological innovation and business development. The Digients become more and more like real people — except for jarring moments where you realize that they're nothing like real people — and meanwhile, the platform they were built on becomes obsolete, technology changes, companies go under, boom turns to bust, and it's all a bit heartbreaking and thrilling. Anybody who's worked in the software industry will recognize the world Chiang is writing about here.

Any one of the ideas in this book would be the idea in a book by a lesser author — like, for example, the implications of rolling back the Digients' memories to an earlier date, so that they don't remember a particular event. What does it mean when you can erase a day or a year of someone's memory just by restoring an earlier backup?

And after reading this book, it really is hard to remember why you ever believed in "pie in the sky" promises for A.I. — you realize that, of course, A.I. would be quirky and impossible to tailor to our specific needs. The best A.I.s might not have any commercial application whatsoever, and the smartest A.I.s might be saddled with so much obsessive-compulsive disorder that they wouldn't listen to anybody, and might not do anything useful. You might create the perfect artificially intelligent personal assistant, but then it might be loyal only to one specific person, regardless of your attempts to transfer that loyalty so you could sell instances of it to other customers.

The Turing quote is instructive — we've been accustomed to thinking about A.I. as something that will just spontaneously generate out of game-playing or increasingly complex problem-solving algorithms, but Chiang makes a pretty compelling case that true A.I. will instead come out of interactions with humans who teach computers how to think and communicate in a way that is meaningful to us. The result is an A.I. that's still dependent on humans, and defined by its relationships with humans — but you still get the sense that it could eventually become something that exceeds human capabilities in some fundamental way beyond just problem-solving.

Chiang sums it up really nicely in this one passage towards the end of the book, talking about Ana and her own personal pet Digient, Jax:

The years she spent raising Jax didn't just make him fun to talk to, didn't just provide him with hobbies and a sense of humor. It was what gave him all the other attributes [a software company] was looking for: fluency at navigating the real world, creativity at solving new problems, judgment you could entrust an important decision to. Every quality that made a person more valuable than a database was a product of experience... Experience isn't merely the best teacher, it's the only teacher. If she's learned anything from raising Jax, it's that there are no shortcuts; if you want to create the common sense that comes from twenty years of being in the world, you need to devote twenty years to the task. You can't assemble an equivalent collection of heuristics in less time; experience is algorithmically incompressible.

And in the end, Chiang's novella winds up having something profound to say about human love — both our love for our pets and children (as mirrored in people's relationships with these digital life forms), and our love for each other. Just as you can't have artificial intelligence without investing time and attention, you can't have love without sacrifice and understanding. In the novella's final act, the two pressures of the Digients' evolution and rapidly changing software platforms collide, and people are forced to make choices that will be heartbreaking either way. It's not just the best book about artificial intelligence you'll read this year — it's one of the best science fiction books, period.

The Life Cycle of Software Objects is coming out as a fancy hardcover limited edition, and I believe the final version will have nice illustrations inside. Chiang's novella is available for preorder now, and should be out any day. [Subterranean Press]