TAKE one baby. Install a network of microphones and video cameras to monitor him 14 hours a day, 365 days a year until he's 3 years old. No, it's not the latest reality TV series, but a project designed to provide a unique insight into the role environment plays in the seemingly miraculous process by which children acquire language.
Deb Roy of the MIT Media Lab in Cambridge, Massachusetts, has so much faith in the project that he volunteered his own family as guinea pigs. Since his newborn son returned from hospital nine months ago, 14 microphones and eleven 1-megapixel "fish-eye" video cameras, attached to the ceiling of each room, have been capturing a continuous stream of experiences, such as the mother using a melodious voice or frequently repeating a certain task, which the researchers hope to match to changes in the baby's linguistic performance.
How babies go from scarcely being able to gurgle at birth to chattering fluently by the age of three is still hotly disputed. Most psycholinguists agree that just listening to speech is not enough for a child to learn the basic rules of a language, but they still argue about how much of the extra information comes from specific "language genes" and how much from other stimuli in the environment apart from words.
In the past, researchers have recorded mothers and babies playing in the lab, or gone into their homes to observe them. But such observations take place in an unnatural atmosphere that does not reflect a baby's normal experience, and their sporadic timing makes it impossible to tell whether changes in a child's speech are sudden or merely appear that way because of gaps in the recording.
Roy hopes his "speechome" project, as it has been dubbed, will fill in some of these gaps. "It allows us to put a microscope on the day-by-day and hour-by-hour changes that go into learning a language," says Steven Pinker, a psycholinguist at Harvard University, who is an adviser to the project. "Nothing remotely on this scale has ever been done."
The cameras are switched on between 8 am and 10 pm each day, and will capture 85 per cent of the baby's waking hours up to his third birthday. Roy and his wife can switch them off whenever they choose using a wall-mounted touch display, or press an "oops" button to delete recordings. Every week the data is transported to a vast petabyte storage facility at MIT, where it is processed to extract meaningful results from the recordings.
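The article does not say how the "oops" deletion works internally, but a minimal sketch illustrates one plausible design: recordings sit in a short rolling buffer before being committed to storage, so a recent stretch can still be discarded on demand. The class name, buffer structure and window length below are illustrative assumptions, not details of the actual speechome system.

```python
from collections import deque
import time

OOPS_WINDOW_SECONDS = 60  # assumed length of the retractable window

class RecordingBuffer:
    """Hypothetical sketch: hold recent chunks so an 'oops' press can discard them."""

    def __init__(self):
        self.pending = deque()   # (timestamp, chunk) still awaiting commit
        self.committed = []      # chunks destined for the weekly transfer to MIT

    def capture(self, chunk):
        now = time.time()
        self.pending.append((now, chunk))
        # Commit anything older than the oops window; it can no longer be deleted.
        while self.pending and now - self.pending[0][0] > OOPS_WINDOW_SECONDS:
            self.committed.append(self.pending.popleft()[1])

    def oops(self):
        # The wall-mounted "oops" button: discard everything still retractable.
        self.pending.clear()
```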
Whether Roy will be able to make sense of the data remains an open question, says Linda Smith of Indiana University in Bloomington. But if he is successful, Roy says the project could lead to better strategies for diagnosing and treating language disorders, and even a new breed of robots that learn to speak without being programmed with language.
As for the ethics of subjecting his son to such surveillance, Roy believes he is providing him with an incredible gift. "He might be the first person to have a memory that goes back to birth," he says.
Although his son has uttered only one word so far, Roy has already amassed 24,000 hours of video and 33,000 hours of audio. How on earth will he make sense of it all?
Roy is hoping computer algorithms will provide the answer. He has divided each room into sections, such as sink, table, fridge and stove. The computer picks out combinations of movements between these sections that are frequently repeated, called "behaviour fragments". Humans then work out how these fragments are associated with specific activities, such as making coffee or doing the dishes, and program that into the computer.
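To make the idea concrete, here is a hedged sketch of fragment mining under one simple assumption: a person's movements are reduced to a sequence of room-section labels, and a "fragment" is a short subsequence that recurs often. The function name, fragment length and frequency threshold are all invented for illustration; the article does not describe the real algorithm's details.

```python
from collections import Counter

def behaviour_fragments(track, length=3, min_count=5):
    """Return movement subsequences of `length` sections seen at least `min_count` times."""
    counts = Counter(tuple(track[i:i + length]) for i in range(len(track) - length + 1))
    return {frag: n for frag, n in counts.items() if n >= min_count}

# Example: one day's coarse movement track through the kitchen sections.
track = ["sink", "fridge", "stove", "sink", "fridge", "stove",
         "table", "sink", "fridge", "stove", "sink", "fridge", "stove"]
print(behaviour_fragments(track, min_count=3))
# {('sink', 'fridge', 'stove'): 4}
# A human might then label this fragment "making dinner" and feed
# that association back into the computer.
```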
The team is also developing algorithms that can transcribe speech, recognise objects and track people by the colour of their clothes on a day-by-day basis. Eventually algorithms will track such activities automatically, providing statistics on how many times a day a particular activity took place before the child finally produced a related word, for example.
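Once activities are tracked automatically, the statistic described above reduces to counting labelled events that precede the child's first production of a related word. A minimal sketch, assuming a timestamped activity log and a record of first utterances (both data formats are invented for illustration):

```python
from datetime import datetime

def exposures_before_first_word(activity_log, first_word_times):
    """activity_log: list of (timestamp, activity); first_word_times: activity -> timestamp
    of the child's first related word. Returns exposure counts per activity."""
    counts = {}
    for activity, first_utterance in first_word_times.items():
        counts[activity] = sum(
            1 for t, a in activity_log if a == activity and t < first_utterance
        )
    return counts

# Example with fabricated dates: the dishes were done daily, and the
# child produced a related word on 15 January.
log = [(datetime(2006, 1, d), "doing the dishes") for d in range(1, 20)]
first = {"doing the dishes": datetime(2006, 1, 15)}
print(exposures_before_first_word(log, first))  # {'doing the dishes': 14}
```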