: [RNLD] Links between publication and sound corpus
Honeyman Tom
t.honeyman at GMAIL.COM
Tue Mar 19 01:49:31 UTC 2013
I guess there are three parts to this problem: (1) what the digital form of the document/thesis will be, (2) where the audio will be, and (3) what programs/workflows exist out there that make this easy to do without too much knowledge of the technical side. The answer to (1) could be either pdf or html. Actually the list could be longer, but let's limit it to these two. The answer to (2) is "embedded" or "linked". The answer to (3) is, to my knowledge, that they don't really exist.
Exporting the thesis as a website would allow for linking via "embedding", which I guess would mean excerpting all the audio chunks linked to in the thesis and linking directly to those chunks. One advantage of this is that a local, offline copy of the website could be distributed on CD.
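Roughly, the pre-cutting step for that approach could be done with nothing more than Python's standard wave module; something like the sketch below (the manifest of chunks and the file names are made up, obviously):

# Pre-cut the audio chunks cited in the thesis for an offline HTML export.
# Source files, times and output names here are purely illustrative.
import wave

CHUNKS = [
    # (source wav, start seconds, end seconds, output wav for the website)
    ("recordings/session01.wav", 12.4, 18.9, "site/audio/example-3-1.wav"),
    ("recordings/session02.wav", 105.0, 111.2, "site/audio/example-3-2.wav"),
]

for src_path, start, end, out_path in CHUNKS:
    with wave.open(src_path, "rb") as src, wave.open(out_path, "wb") as dst:
        rate = src.getframerate()
        dst.setnchannels(src.getnchannels())
        dst.setsampwidth(src.getsampwidth())
        dst.setframerate(rate)
        src.setpos(int(start * rate))          # seek to the start frame
        dst.writeframes(src.readframes(int((end - start) * rate)))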
Alternatively, the "linking" method would use something like the snippet service. This could potentially save space/time by not splitting up audio files, and it would even allow arbitrary audio segments (i.e. a request to hear "around" a snippet). But it wouldn't work offline.
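To make that concrete, here's a rough sketch of what the server side of such a snippet service might look like, along the lines of the Python CGI script Mat describes further down the thread (the directory, the parameter names and the complete lack of input checking are all just for illustration):

#!/usr/bin/env python3
# Sketch of an on-demand snippet service: given a wav filename and
# start/end times in seconds, return just that span as audio/wav.
import io
import os
import sys
import wave
from urllib.parse import parse_qs

AUDIO_DIR = "/srv/corpus"  # hypothetical location of the archived recordings

params = parse_qs(os.environ.get("QUERY_STRING", ""))
name = params.get("file", [""])[0]       # a real script would sanitise this
start = float(params.get("start", ["0"])[0])
end = float(params.get("end", ["0"])[0])

buf = io.BytesIO()
with wave.open(os.path.join(AUDIO_DIR, name), "rb") as src, \
        wave.open(buf, "wb") as dst:
    rate = src.getframerate()
    dst.setnchannels(src.getnchannels())
    dst.setsampwidth(src.getsampwidth())
    dst.setframerate(rate)
    src.setpos(int(start * rate))
    dst.writeframes(src.readframes(int((end - start) * rate)))

sys.stdout.write("Content-Type: audio/wav\r\n\r\n")
sys.stdout.flush()
sys.stdout.buffer.write(buf.getvalue())

(For compressed formats like mp3 it would need to shell out to something like mp3splt instead, as Mat notes below.)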
Exporting the thesis to a pdf, one can embed the audio files in the pdf itself, but this is supported by only a limited range of pdf readers and would produce a potentially enormous file. The advantage would be that a single file can be distributed and it would work offline. Linking to a few compatible readers in the front matter of the document might be a good idea.
Alternatively, links in a pdf can point either to a local file system (e.g. a CD distributed with the printed copy) or to files online. If online, it can be done either way (pre-cut chunks or a snippet service), with the advantages and disadvantages listed above.
As for actually producing this stuff, I would recommend LaTeX, or the slightly more user-friendly LyX. There are packages out there for linking to audio files and embedding them in a pdf. This option is perhaps not for the faint-hearted, however. I haven't played around with html exports from these tools. Output-agnostic publishing systems like this are ideally suited to producing different versions/targets of the same document.
Or one can build a system like this by hand, if one is comfortable with HTML or with a pdf-editing program like Acrobat, or develop a workflow involving naming conventions, scripts, etc.
Otherwise, I'm not aware of a program/workflow that makes something like this easy to do. I'd love to hear of one, however!
Also, a server with byte-range request support enabled is needed for the html5 magic to work. And it's not so much that a url with a timecode in it is supported, but rather that a snippet of html/js can include an audio player that jumps straight to a portion of the audio. With byte-range request support, one can jump straight to a portion of an audio file without having to download the whole thing first. So, to come full circle, a snippet service could be provided that performs this task. It wouldn't need to segment an existing audio file either, as was suggested, and I'm guessing that in principle the audio file could be hosted on a different server (as long as that server also supported byte-range requests).
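A quick way to check whether a given server honours byte-range requests is to ask it for a small range and see whether you get a 206 Partial Content response back; something like this (the url is just a placeholder):

# Check whether a server supports byte-range requests (needed for seeking
# into an audio file without downloading the whole thing first).
import urllib.request

AUDIO_URL = "https://example.org/collections/item42/recording.wav"  # placeholder

req = urllib.request.Request(AUDIO_URL, headers={"Range": "bytes=0-1023"})
with urllib.request.urlopen(req) as resp:
    # 206 means the Range header was honoured; 200 means the server ignored
    # it and is sending the whole file.
    print("HTTP status:", resp.status)
    print("Accept-Ranges:", resp.headers.get("Accept-Ranges"))
    print("Content-Range:", resp.headers.get("Content-Range"))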
Cheers,
Tom Honeyman
On 19/03/2013, at 12:12 PM, Aidan Wilson <aidan.wilson at unimelb.edu.au> wrote:
> To return this thread to the original point then:
>
> Are we any closer? If HTML5 supports timecodes as operands in URLs then I suppose all that's needed to achieve this is a server that can host recordings, or an archive that would allow such access to its recordings (at Paradisec for instance, one would have to be logged in and have permission to read the files in question).
>
> @John Hatton, from what you describe on March 8, it sounds like the snippet service you mention is in principle possible right now. Is that the case? Could I, for instance, set up a test pdf with an embedded url pointing to a time-coded audio file? Would it have to be .ogg or .wav or something in particular?
>
> I'm very keen for this to become a reality.
>
> --
> Aidan Wilson
>
> School of Languages and Linguistics
> The University of Melbourne
>
> +61428 458 969
> aidan.wilson at unimelb.edu.au
> @aidanbwilson
>
> On Tue, 19 Mar 2013, Margaret Carew wrote:
>
>> Hi
>> We are preparing a couple of publications at present through ILS funded projects using sound printing. The readers will play traditional songs linked to codes on the pages. The books contain provenance information linking these song items to an archival deposit.
>> Regards
>> Margaret Carew
>>
>> On 18/03/2013, at 11:01 PM, "Randy LaPolla" <randy.lapolla at gmail.com> wrote:
>>
>> That sort of thing is used here (Singapore) for children's books, but it can be used for anything. The ones I've seen are made in China, so are mainly for teaching children to read Chinese. Essentially the pen is a recorder and playback device, and "learns" to associate codes that are on the page with short audio files that you buy or create yourself. For ones you record yourself, you associate a tag with the file and then paste the tag on the page. When the pen reads the code, it plays the file. It is quite clever and not expensive.
>>
>> Randy
>> -----
>> Prof. Randy J. LaPolla, PhD FAHA (罗仁地)| Head, Division of Linguistics and Multilingual Studies | Nanyang Technological University
>> HSS-03-45, 14 Nanyang Drive, Singapore 637332 | Tel: (65) 6592-1825 GMT+8h | Fax: (65) 6795-6525 | http://sino-tibetan.net/rjlapolla/
>>
>>
>>
>> On Mar 18, 2013, at 10:17 AM, Colleen Hattersley wrote:
>>
>> Ruth and others
>> At the recent ALW there was some discussion about a company that associates sound with a printed document. The sound is heard by swiping a special pen-like instrument over particular spots on the page. Not sure if this process would be suitable for academic documents, but it might be worth investigating. Here is the link: http://www.printingasia.com/
>> Colleen Hattersley
>>
>>
>> On Fri, Mar 8, 2013 at 2:02 PM, Ruth Singer <ruth.singer at gmail.com> wrote:
>> Hi Steffen and others,
>>
>> So we've got the technological know-how and we've got archives that
>> will store these sound files in a way that we can link to. The problem
>> is how to publish documents with linked audio files in a way that will
>> receive the same academic recognition as a print publication without
>> linked audio. Mouton de Gruyter has gone backwards in their policy
>> regarding audio files. The latest information I received is that they
>> will not include CDs in their linguistics books or host audio files
>> without obtaining intellectual property over the sound files.
>>
>> I am interested in publishing descriptive work on an endangered
>> language with linked audio files. At the moment I'm hoping that the
>> OALI initiative will produce an academically recognised way to publish
>> this:
>> http://hpsg.fu-berlin.de/OALI/
>>
>> Here's a bit pasted from their website:
>> OALI is an Open Access initiative of Stefan Müller (and other
>> linguists at FU Berlin) and Martin Haspelmath that was started in
>> August 2012 and quickly found many prominent supporters (more
>> than 100 by now). Please refer to background and motivation to
>> learn more about the serious problems that we see with the
>> traditional practice of book publication in our field. An
>> extended version of this document including detailed numbers
>> and case studies can be found in Müller, 2012.
>> Our proposed solution is open-access publication in which the
>> (freely available) electronic book is the primary entity.
>> Printed copies are available through print-on-demand services.
>> We are planning to set up a publication unit at the FU Berlin,
>> coordinated by Stefan Müller and Martin Haspelmath, that
>> publishes high-quality book-length work from any subfield of
>> linguistics.
>>
>> Cheers,
>>
>> Ruth
>>
>> On Fri, Mar 8, 2013 at 12:59 PM, Mat Bettinson <mat at plothatching.com> wrote:
>>> On 8 March 2013 13:25, Doug Cooper <doug.cooper.thailand at gmail.com> wrote:
>>>
>>>> Yes, this states the server solution exactly. This does not pose any
>>>> technical barrier (it's just a matter of providing a wrapper for
>>>> something like sox or mp3splt).
>>>
>>>
>>> I recently knocked up something that did exactly what John described. I
>>> implemented it as a Python CGI script running on a web server. You pass a
>>> filename and the start/end time periods and it uses the Python Wave library
>>> to simply generate a new wave file and then sends that to the web browser as
>>> Content-Type: audio/wav.
>>>
>>> As you say if you're working on mp3 data it would need to be more
>>> sophisticated, piping to mp3splt etc.
>>>
>>> --
>>> Regards,
>>>
>>> Mat Bettinson
>>>
>>
>>
>>
>> --
>> Ruth Singer
>> ARC Research Fellow
>> Linguistics Program
>> School of Languages and Linguistics
>> Faculty of Arts
>> University of Melbourne 3010
>> Tel. +61 3 90353774
>> http://languages-linguistics.unimelb.edu.au/academic-staff/ruth-singer
>>
>>
>>
>> --
>> Colleen
>>