eye tracking reading study

Alon Hafri ahafri at gmail.com
Tue Jul 5 16:11:06 UTC 2011


Thanks, Lisa and Mich!

Mich, I actually implemented the distance-to-image-center calculation
yesterday for a non-reading study, to determine which of several
images on screen a participant clicked on. It worked very well, so
thanks for that suggestion.
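
In case it's useful to anyone else, it was roughly the following (an
untested, E-Basic/VBA-style sketch; the function name NearestImage and
the parallel arrays of center coordinates are just placeholders for
however you actually store your image positions):

' Rough sketch only: given a click at (cx, cy) and parallel arrays of
' image center coordinates, return the index of the image whose center
' is closest to the click.
Function NearestImage(cx As Long, cy As Long, _
                      centersX() As Long, centersY() As Long) As Integer
    Dim i As Integer
    Dim d As Double
    Dim bestD As Double
    Dim bestI As Integer

    bestI = LBound(centersX)
    bestD = 1E+9   ' larger than any on-screen distance

    For i = LBound(centersX) To UBound(centersX)
        ' Euclidean distance from the click to this image's center
        d = Sqr((cx - centersX(i)) ^ 2 + (cy - centersY(i)) ^ 2)
        If d < bestD Then
            bestD = d
            bestI = i
        End If
    Next i

    NearestImage = bestI
End Function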

It is true that a hit-test on an AOI rectangle with a buffer on all
sides is overly tolerant of diagonally displaced gaze positions. You
could correct that (as you said) with the following rules (sketched in
code below):
- When the hit-test coordinates lie within the AOI rectangle, code it
as a hit.
- When the closest point on the AOI rectangle to the hit-test
coordinates lies on one of its sides, require the coordinates to be
within 20 pixels of that side, horizontally or vertically.
- When the closest point is a corner, measure the Euclidean distance
between the hit-test coordinates and that corner, sqrt(dx^2 + dy^2),
and require it to be <= 20 pixels.
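
Putting the three cases together, a single buffered hit-test could
look something like this. Again, this is just an untested,
E-Basic/VBA-style sketch; the function name AOIHit, the argument
layout, and passing the buffer as a parameter are placeholders for
however you organize your own script (buffer = 20 corresponds to the
rules above):

' Rough sketch only: returns True if the gaze point (gx, gy) falls
' inside the AOI rectangle (aLeft, aTop, aRight, aBottom) or within
' buffer pixels of its nearest side or corner.
Function AOIHit(gx As Long, gy As Long, _
                aLeft As Long, aTop As Long, _
                aRight As Long, aBottom As Long, _
                buffer As Long) As Boolean
    Dim dx As Long
    Dim dy As Long

    ' Horizontal overshoot past the rectangle (0 if inside on this axis)
    If gx < aLeft Then
        dx = aLeft - gx
    ElseIf gx > aRight Then
        dx = gx - aRight
    Else
        dx = 0
    End If

    ' Vertical overshoot past the rectangle (0 if inside on this axis)
    If gy < aTop Then
        dy = aTop - gy
    ElseIf gy > aBottom Then
        dy = gy - aBottom
    Else
        dy = 0
    End If

    ' Inside: dx = dy = 0. Nearest point on a side: only one of dx/dy
    ' is nonzero. Nearest point at a corner: both are nonzero, so the
    ' Euclidean distance handles the diagonal case correctly.
    AOIHit = (Sqr(dx * dx + dy * dy) <= buffer)
End Function

With buffer = 0 this reduces to a plain rectangle hit-test, and with
buffer = 20 it implements the three rules above in one expression.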

I'll have a crack at designing the reading study now, and take
everyone's suggestions into account.

Alon


On Jul 1, 8:53 am, Lisa Levinson <lml1... at gmail.com> wrote:
> Not sure about this, but the EEG reading experiments I have helped
> with seem to use the list function to generate the sentences. The
> words appear one at a time, but it's the only way you can flag (for
> segmentation) the aspect of the sentence being investigated. It might
> be totally different for eye-tracking, but that's how I have seen
> reading experiments organized for EEG.
>
> On Jun 28, 4:46 pm, Alon Hafri <aha... at gmail.com> wrote:
>
> > Hi, I'm wondering if someone in the group can help me out.
>
> > I am creating an E-Prime experiment using the E-Prime Extensions for
> > Tobii. In each trial, the participant will read a sentence, and I
> > would like to record reading times for each window of one to three
> > words.
>
> > I have experience with using list attributes to set Slide Image
> > position (and therefore AOI position) at runtime, as well as with
> > placing transparent Slide Images on top of larger images to act as
> > AOIs, but I've never done a reading study before. Since the words
> > will vary in length from trial to trial but will need to be evenly
> > spaced (and conceivably span multiple lines), it would be great if
> > the position of each window did not have to be specified by hand but
> > could instead be determined by script.
>
> > I can think of a few solutions, but perhaps someone has an easier one?
> > 1) Create a Slide Text object for each word (or window), so that each
> > could act as its own AOI.
> > Drawbacks: You would have to specify the location of each one, and it
> > could be tricky to line them up so the words are evenly spaced.
>
> > 2) Have one Slide Text object for the whole sentence and place
> > transparent objects on top of it to act as the AOIs.
> > Drawbacks: You would still have to specify locations for each AOI,
> > but the text would be in one object and so would appear normal.
>
> > 3) Suggestions?
>
> > Thanks!
> > Alon
