Monday, July 11, 2011

When artificial intelligences start using contractions...

I'm sure you're all familiar with this scenario. There's a computer - or a robot, or an automaton of some type, or an android. And it speaks. But it doesn't speak in contractions. Many scenarios even take this further, showing development in the artificial individual by having him/her/it start to use contractions. It's a hint of sentience.

But what does it actually mean?

Just yesterday I got to thinking through what this sudden acquisition of contractions would mean in a linguistic sense, and I arrived at a conclusion which made me blink.

I should of course begin by explaining that I'm a descriptive linguist and very much a believer in the chaos-theoretical model of language and language learning. Chomsky-style "universal grammar" isn't necessary in such a model, and given the fact that languages vary so widely across the world, that seems quite a relief to me. [Language "universals" tend not to be universal, but instead very large-scale trends.]

So what kind of system is assumed to underlie speech that uses no contractions? Basically, it implies a language built up from a list of vocabulary words and a set of syntactic rules. An artificial intelligence working from those resources would not use contractions because they wouldn't be part of its programming, and the sudden appearance of contractions would thus imply a transcendence of its fundamental programming.
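To make that concrete, here's a toy sketch in Python of the "vocabulary list plus syntactic rules" picture. Everything in it - the lexicon, the single rule, the generate function - is invented purely for illustration; the point is just that if only the full forms are listed, a contraction can never come out.

```python
import random

# Hypothetical lexicon: only the full, uncontracted forms are listed.
LEXICON = {
    "PRON": ["I", "You", "We"],
    "AUX":  ["do not", "cannot", "will not"],
    "VERB": ["understand", "process", "accept"],
    "NP":   ["that request", "the directive", "this input"],
}

# One toy syntactic rule: PRON AUX VERB NP
RULE = ["PRON", "AUX", "VERB", "NP"]

def generate() -> str:
    """Fill each slot in the rule with a random entry from the lexicon."""
    return " ".join(random.choice(LEXICON[slot]) for slot in RULE) + "."

if __name__ == "__main__":
    for _ in range(3):
        print(generate())   # e.g. "I cannot process the directive." - stiff and contraction-free
```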

If, however, we look at the acquisition problem from a neural network perspective, it looks very different. A neural network acquires language from the examples it receives in its environment. It would parse out words based on patterns of repetition and difference, and its progress would parallel the developmental curve we see in human language learning: a steady early increase in proficiency driven by memorization, then a dip when the learner grasps a larger pattern and overapplies it (which looks like less successful performance on tests), and finally the re-introduction of all the exceptions to the rule. I would expect a successful neural network-based artificial intelligence to speak in contractions very early on, regardless of its proficiency in other matters (not to mention its sentience).
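And here's an equally toy sketch of the example-driven alternative: a learner that simply keeps count of the forms it hears. The input sentences are made up, but because contractions saturate casual speech, don't, it's, and can't land among the very first things such a system would reproduce.

```python
from collections import Counter

# Invented snippets of the casual speech a learner might overhear.
HEARD = [
    "I don't know",
    "it's fine",
    "that's not what I meant",
    "I can't do that right now",
    "don't worry about it",
    "it's not a problem",
]

def learn(utterances):
    """Count word forms exactly as they occur in the input."""
    counts = Counter()
    for utterance in utterances:
        counts.update(utterance.lower().split())
    return counts

if __name__ == "__main__":
    vocabulary = learn(HEARD)
    # The most frequent forms are what the learner imitates first -
    # and they already include don't, it's, and can't.
    print(vocabulary.most_common(8))
```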

So in the end, the contractions question is something of a conceit. This isn't necessarily a bad thing. By "conceit" I mean it's a gesture toward the idea of developing intelligence: subtle enough that not everyone will notice it, yet definitely a change in the artificial being's behavior. If you try to reason through why this super-amazing artificial brain can't seem to operate on anything more complex than a vocabulary list and a set of syntactic rules, you might get tripped up... But the most important thing here is that the use of contractions is a marker that normal people will notice. And as a marker, it has been very useful - and continues to be useful - for science fictional storytellers.

7 comments:

  1. Hmm, that's an interesting way to look at it, but I want to throw a wrench in the logic... Children don't grasp contractions as fast as they do words, which is, I think, where the trend came from. So I think it is an artifact of the belief that artificial intelligence is less than us and therefore too "simplistic" or "childish" to understand complex syntax.

    On the other hand, ignoring all the assumptions in that belief, there's something to be said for an artificial neural net that grows and learns much like a human does.

  2. "But the most important thing here is that use of contractions is a marker that normal people will notice."

    Plus it's just cool. ;)

    No, really, this was fascinating and something I'd never thought about in depth.

  3. Margaret, yes and no. Contractions definitely come after no+concept but they still come in very quickly. I don't tend to hear kids saying the extended versions "do not" and "cannot" very often. Children are also sentient, even when they aren't using contractions yet, so what does that say about this question? I don't know. :)

    Leah, I agree that it's cool. I always accepted it when I encountered it, which means that it's pretty effective too. I can't resist looking at things from different perspectives.

  4. Another Star Trek: TNG reference! I did suspect that you'd be a fan of the series, Juliette.

    Interesting points you bring up. I always figured it would be difficult to write a computer program that reproduces a natural human speaking pattern -- especially if the language of the future is English, which doesn't have a lot of consistent rules. Using a simplistic, formal style of speech seems like it'd be the safest bet for an AI that needs to interact with unpredictable humans.

  5. LOL! Yes, that's exactly my tweak point with humans. We set lines as barriers to sapience without considering the implication that, to be reaching toward that line in the sand, the being has to be sapient already.

    The difference in timeline would be another marker, I'd guess.

    However, to your baseline, yes, it works. I was fascinated by Data's development as I watched STNG.

  6. Heidi, I was a huge fan when it was still being made. The status of English as a difficult language to learn is overstated, I believe (there are difficult aspects to any language). Socially I suppose that formal speech would be safer, but that wouldn't be the case in every situation. Thanks for your comment!

    Margaret, I'm right there with you. :)

  7. Very interesting, Juliette. I think you nailed it (and made me chuckle) with your qualifier, "a marker that normal people will notice."

    So an emergent neural-network AI would in its infancy use wonderful constructions such as, "I hitted the solution," and "my programmer his instructions to me" (the learner grasps a larger pattern and overapplies it). I really like this.
