baby cam

Deb Roy dkroy at media.mit.edu
Thu May 18 19:28:48 UTC 2006


Hi Brian,

Thanks for copying me on this email. I am happy to clarify. Regarding  
Brian's comments:

     I very much appreciate Margaret's input on this issue.  However,  
my own reaction to this project, when Deb Roy first mentioned it to  
me about a year ago, was different from Margaret's.  I considered it  
a remarkable opportunity for a busy father to spend more time with  
his child than might otherwise be possible.  Personally, I still very  
much value the fact that I was  able to join work and family life for  
several years in the process of recording and transcribing speech  
samples from my own children.


As a busy father I must concur that this project gives me every  
excuse to spend more time with my son!


     I agree with Margaret that one of the big outputs of this type  
of project is in the area of systems computing and terabyte storage  
management.  This work will inevitably have consequences outside of  
the field of child language.  For example, there is an interesting  
project here at CMU that seeks to use similar video technology to  
monitor patients in extended care centers to make sure that they are  
receiving care and attention when required.

Again, this is on target. The high-performance computing  
infrastructure and audio-visual analysis software we are developing  
will likely have applications in a broad range of domains beyond our  
interest in language acquisition.

      On the issue of IRB review, I understand that there is no  
intention to publish these data generally, so the concerns that  
Margaret raises in that regard would not apply.


Correct. Due to the private and relatively comprehensive nature of
the data, we have not made any commitments to share a single bit of
it. Over time, as we come to better understand sharing methods that
would work, we will consider releasing small fragments of data while
ensuring that the privacy concerns of all participants are met.
By the way, this is why my wife (also a researcher interested in  
language) and I decided to do this with our own family -- to sort  
through methodology issues with our own data first.

We have certainly had detailed interactions with the IRB on this. MIT
would not support the project, nor would the NSF fund it, if the
protocols were not carefully designed to safeguard all obvious
privacy and consent issues.


      Margaret assumes somehow that the child cannot leave the  
house.  I don't see anything that suggests this to be true.  I am  
copying this message to Deb Roy so that he can perhaps further  
clarify us all regarding such details.

Yikes, what a frightful idea! I just spent a wonderful morning on the
deck with my son; we are planning summer vacations; walks in the park
and visits to friends' houses are regular activities, and so on.

Our goal is to record a large part of my son's waking hours in the
house -- when we are out of the house, we have decided not to record
anything at all (we thought about audio only, but even that is too
complicated to keep up for a sustained period, for a variety of
reasons). Over the months our son will naturally be out of the house
for larger and larger chunks of time, so the amount of coverage is
expected to progressively diminish. Recordings are also often off
over dinner, which is when gossip tends to happen. The goal here is
to find a balance between capturing all waking hours (which is now
technologically feasible) and what is practical and acceptable.

As an aside, it's funny how many people think of Big Brother, The
Truman Show, etc. -- these all assume an incorrect mental model of
the audience for the data. We are maintaining the data on a secure
server, with access carefully controlled and limited to a handful of
researchers, and we are evolving new privacy policies over time to
deal with issues as we understand them. The primary "audience" for
these data will be pattern analysis and machine learning algorithms,
not humans.


       On the negative side, I must say that I am not much of a fan  
of the fisheye lens video view.  My Ross used the fisheye lens a lot  
in his skateboard videos, particularly in facial close-ups, and that  
was indeed creepy!


We went with this compromise since there is no one place to point a
static camera with a regular lens in a room and get sufficient
spatial coverage. The alternative -- a moving camera that tracks
people -- would definitely be creepy! Clearly there are many
important aspects of social interaction (e.g., eye gaze) that we will
lose, but we are not aware of any technology (yet) that provides
highly detailed visual information along with comprehensive coverage
over space and time.


Regarding Margaret's comment:

Thinking about what a relatively small set of fixed-position cameras
and mics is likely to capture, or not capture, one could easily write
this off as primarily an exercise in systems computing and terabyte
storage management. However, eventually someone is going to attempt
something similar with more plausible recording technology.


I feel this comment misses an important point -- we are already in
the realm of "plausible recording technology". I realize movable
cameras and wearable mics are the norm (I have worked with both in
the past). One challenge with ultradense longitudinal recordings is
to rethink instrumentation. For example, relying on wearable
microphones is simply not practical -- not for 12 hours a day for
1,000 days! (think: bath time, throw-ups, multiple changes of clothes
in a day, etc.). A design goal of our project was to eliminate
wearable equipment, exposed wiring, and other visible recording gear
-- any of those seemed to me a bad idea for long-term use in a home.
The quality of boundary layer microphones mounted at ceiling height
surprises most people who work with conventional portable recording
equipment. I'm happy to share details with anyone interested.

Regards,
Deb Roy


------------------------------------------------------------------------------
Deb Roy
Director, Cognitive Machines
Associate Professor of Media Arts and Sciences
AT&T Career Development Professor
The Media Laboratory
Massachusetts Institute of Technology
dkroy at media.mit.edu   www.media.mit.edu/~dkroy   www.media.mit.edu/cogmac

