[RNLD] Links between publication and sound corpus
shaurholml at GMAIL.COM
Fri Mar 8 09:01:46 UTC 2013
It is so great to see that so many people have wanted to contribute
answers / solutions to the problem of linking between descriptive work and
the underlying sound corpus.
The way I see it, there are two roads to follow here, which serve different
purposes and potentially different audiences, although these may overlap.
*Scenario 1) the reader has sufficiently high-speed internet access
to stream video online.*
For this, people here have offered a lot of different options (some very
technical), and many of the mentioned online solutions carry the added
benefit of archiving the recordings at the same time; this is definitely a
nice long term solution which I would like to learn how to implement. One
problem that we might run into, though, is access rights - ideally the
reader should not have to deal with a lot of hassle about archival access
rights before getting to the audio files that go with the publication he is
reading.
*Scenario 2) the reader does not have a high speed internet connection.*
This will be the case in most rural areas where speakers of endangered
languages live, and where field linguists spend a lot of their time. But it
will also be the case in many places where you just don't want to depend on
a WIFI hotspot being around in order to access the audio files of the book
you're reading - that means anywhere on the move or outdoors (I know that
there is mobile broadband and WIFI in trains etc., but all of those
cost money and ultimately make the publication less accessible).
So, to avoid the access rights issue and the internet issue, I would prefer
to have an off-line option (in addition to the online one?). Now, I'm not
a programmer at all, which is probably the reason why I asked this question
in the first place, but I imagine at least two possible forms that the
links from my dissertation could take, in my order of preference:
1) the link from the linguistic example in the dissertation PDF-file would
open the relevant EAF-file which would be located in the same directory as
the PDF-file and the MP4 or WAV-file used by ELAN. This would not only let
the reader hear the example but would also allow them to see any further
information given in other tiers in ELAN such as translation, glossing, and
so on.
2) the link from the linguistic example in the dissertation PDF-file would
open the relevant media-file in a media player such as Quicktime or VLC
along with a subtitle track which would then play the relevant snippet.
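For option 1, one concrete way to create such a link (assuming the dissertation is typeset in LaTeX) would be hyperref's "run:" link type, which asks the PDF viewer to open a local file. The file name below is purely illustrative, and whether the EAF file actually opens in ELAN depends on the reader's file associations:

```latex
\documentclass{article}
\usepackage{hyperref}
\begin{document}
% A "run:" link asks the PDF viewer to open a local file -- here a
% hypothetical EAF file shipped in the same directory as the PDF.
Hear and see \href{run:./example-3-17.eaf}{example (3.17)} in ELAN.
\end{document}
```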
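For option 2, VLC can already be told from the command line to play just a snippet, via its --start-time and --stop-time options (both in seconds), optionally with a subtitle file. A minimal Python sketch that builds such a command - the file names are illustrative, not from this thread:

```python
import subprocess  # only needed to actually launch the player


def vlc_snippet_command(media_path, start_s, end_s, sub_path=None):
    """Build a VLC command line that plays only the cited snippet.

    --start-time / --stop-time are standard VLC options (in seconds);
    the file names passed in here are hypothetical examples.
    """
    cmd = ["vlc", f"--start-time={start_s}", f"--stop-time={end_s}"]
    if sub_path is not None:
        cmd.append(f"--sub-file={sub_path}")  # display the subtitle track
    cmd.append(media_path)
    return cmd


# To actually launch the player for a cited example:
# subprocess.Popen(vlc_snippet_command("example-3-17.mp4", 12.5, 15.0,
#                                      "example-3-17.srt"))
```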
The reason for preferring these more or less annotated solutions, even
though you have the glosses and translations in the text from which you
link, is that they give you access to the larger context that the cited
example is taken out of, and this, I think, is crucial to fully
understanding the meaning of the cited example - was it elicited? What was
the word just before it? If it is natural speech, what happened in the
conversation or narration just before and just after the cited example? Etc.
On Fri, Mar 8, 2013 at 3:59 AM, Mat Bettinson <mat at plothatching.com> wrote:
> On 8 March 2013 13:25, Doug Cooper <doug.cooper.thailand at gmail.com> wrote:
>> Yes, this states the server solution exactly. This does not pose any
>> technical barrier (it's just a matter of providing a wrapper for
>> something like sox or mp3splt).
> I recently knocked up something that did exactly what John described. I
> implemented it as a Python CGI script running on a web server. You pass a
> filename and the start/end time periods and it uses the Python Wave library
> to simply generate a new wave file and then sends that to the web browser
> as Content-Type: audio/wav.
> As you say, if you're working on MP3 data it would need to be more
> sophisticated, piping to mp3splt etc.
> Mat Bettinson
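The approach Mat describes can be sketched roughly as follows using only the standard-library wave module - this is a guess at the shape of his script, not his actual code, and it writes the snippet to a file rather than to an HTTP response:

```python
import wave


def extract_wav_snippet(src_path, start_s, end_s, dst_path):
    """Copy the [start_s, end_s] portion of a WAV file to a new WAV file.

    A hypothetical helper mirroring the CGI approach described above; a
    real server would send the bytes as Content-Type: audio/wav instead.
    """
    with wave.open(src_path, "rb") as src:
        params = src.getparams()
        rate = src.getframerate()
        # Seek to the first frame of the snippet, then read just enough
        # frames to cover the requested time span.
        src.setpos(int(start_s * rate))
        frames = src.readframes(int((end_s - start_s) * rate))
    with wave.open(dst_path, "wb") as dst:
        dst.setparams(params)  # same channels, sample width, and rate
        dst.writeframes(frames)
```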
Many kind regards,