<font><font face="garamond,serif">Workflow is a good word here because the use case has not been adequately specified. How does the User know that a given text string is associated with a specific time interval. Presumably, somebody (let's call her Alice) listened to the media file and noted that string S corresponds to the interval t1 - t2. It is possible that Alice entered her notes into a three-column spreadsheet table (S, t1, t2), and the User copied and pasted t1 and t2 into the query string. However, this is unlikely. Alice probably used a program like ELAN that associates text strings with time intervals, and in the process creates unique IDs for S, t1 and t2. The problem is that ELAN uses an XML format, EAF. So, we need two programs. One (call it eaf2html5) repackages the EAF information, both text strings and time slots, in HTML5. The other (call it SnippetPlayer) plays a specified snippet, searches for snippets by their text content, goes to the next or preceding snippet, and so on. </font></font><span style="font-family:garamond,serif">Once the SnippetPlayer is in place, it can be invoked from any document whose processor understands URLs: Web page, PDF, Google Mail ...</span><div>
<br><div class="gmail_quote">On Mon, Mar 18, 2013 at 9:49 PM, Honeyman Tom <span dir="ltr"><<a href="mailto:t.honeyman@gmail.com" target="_blank">t.honeyman@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
I guess there are three parts to this problem: (1) what will the digital form of the document/thesis be, (2) where will the audio be, and (3) what programs/workflows exist out there that make this easy to do without too much knowledge of the technical side. The answer to (1) could be either PDF or HTML; actually the list could be longer, but let's limit it to these two. The answer to (2) is "embedded" or "linked". The answer to (3) is, to my knowledge, that they don't really exist.

Exporting the thesis as a website would allow for linking via "embedding", which I guess would mean excerpting all audio chunks linked to in the thesis and linking directly to those chunks. One advantage of this would be that a local, offline copy of the website could be distributed on CD.

Alternately, the "linking" method would use a concept like the snippet service. This could potentially save space and time by not splitting up audio files, and even allow arbitrary audio segments (i.e. a request to hear "around" a snippet). But it wouldn't work offline.

Exporting the thesis to a PDF, one can embed audio files in the PDF, but this would be supported by a limited range of PDF readers and would produce a potentially enormous file. The advantage would be that a single file can be distributed and it would work offline. Linking to a list of compatible readers in the front matter of the document might be a good idea.

Alternatively, links in a PDF can go either to a local file system (e.g. a CD distributed with the printed copy) or online. If online, it can be done either way, with the advantages and disadvantages listed above.

As for actually producing this stuff, I would recommend LaTeX or the slightly more user-friendly LyX. There are packages out there for linking to audio and embedding it into a PDF. This option is perhaps not for the faint-hearted, however. I haven't played around with HTML exports from this. Output-agnostic publishing systems like this are ideally suited to producing different versions/targets of the same document.

Or one can build a system like this by hand, if one is comfortable with HTML or with a PDF editing program like Acrobat. Or develop a workflow involving naming conventions, scripts, etc.

Otherwise, I'm not aware of a program/workflow that makes something like this easy to do. I'd love to hear of one, however!
Also, a server with byte-range request support enabled is needed for the HTML5 magic to work. And it's not so much that a URL with a timecode in it is supported, but rather that a snippet of HTML/JS can include an audio player that jumps straight to a portion of the audio. With byte-range request support, one can jump straight to a portion of an audio file without having to download the whole thing first. So, to come full circle, a snippet service could be provided that performs this task. It wouldn't need to segment an existing audio file either, as was suggested, and I'm guessing that in principle the audio file could be hosted on a different server (as long as the audio hosting server supported byte-range requests).
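As an aside, a quick way to test whether a given server honours byte-range requests is to ask for the first couple of bytes and check for a 206 response. A small Python sketch (the URL below is just a placeholder):

# range_check.py -- sketch: request the first two bytes of a file and see
# whether the server answers with 206 Partial Content.
import urllib.request

def supports_byte_ranges(url):
    req = urllib.request.Request(url, headers={"Range": "bytes=0-1"})
    with urllib.request.urlopen(req) as resp:
        # 206 means the server honoured the Range header; 200 means it
        # ignored it and would have sent the whole file.
        return resp.getcode() == 206

if __name__ == "__main__":
    # Placeholder URL -- substitute the archive's real audio location.
    print(supports_byte_ranges("http://example.org/media/session1.wav"))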
Cheers,
Tom Honeyman

On 19/03/2013, at 12:12 PM, Aidan Wilson <aidan.wilson@unimelb.edu.au> wrote:
> To return this thread to the original point, then:
>
> Are we any closer? If HTML5 supports timecodes as operands in URLs, then I suppose all that's needed to achieve this is a server that can host recordings, or an archive that would allow such access to its recordings (at Paradisec, for instance, one would have to be logged in and have permission to read the files in question).
>
> @John Hatton, from what you describe on March 8, it sounds like the snippet service you mention is in principle possible right now. Is that the case? Could I, for instance, set up a test PDF with an embedded URL pointing to a time-coded audio file? Would it have to be .ogg or .wav or something in particular?
>
> I'm very keen for this to become a reality.
>
> --
> Aidan Wilson
>
> School of Languages and Linguistics
> The University of Melbourne
>
> +61428 458 969
> aidan.wilson@unimelb.edu.au
> @aidanbwilson
>
> On Tue, 19 Mar 2013, Margaret Carew wrote:
>
>> Hi
>> We are preparing a couple of publications at present through ILS-funded projects using sound printing. The readers will play traditional songs linked to codes on the pages. The books contain provenance information linking these song items to an archival deposit.
>> Regards,
>> Margaret Carew
>>
>> On 18/03/2013, at 11:01 PM, "Randy LaPolla" <randy.lapolla@gmail.com> wrote:
>>
>> That sort of thing is used here (Singapore) for children's books, but it can be used for anything. The ones I've seen are made in China, so they are mainly for teaching children to read Chinese. Essentially the pen is a recorder and playback device, and "learns" to associate codes that are on the page with short audio files that you buy or create yourself. For ones you record yourself, you associate a tag with the file and then paste the tag on the page. When the pen reads the code, it plays the file. It is quite clever and not expensive.
>>
>> Randy
>> -----
>> Prof. Randy J. LaPolla, PhD FAHA (罗仁地) | Head, Division of Linguistics and Multilingual Studies | Nanyang Technological University
>> HSS-03-45, 14 Nanyang Drive, Singapore 637332 | Tel: (65) 6592-1825 GMT+8h | Fax: (65) 6795-6525 | http://sino-tibetan.net/rjlapolla/
>>
>> On Mar 18, 2013, at 10:17 AM, Colleen Hattersley wrote:
>>
>> Ruth and others,
>> At the recent ALW there was some discussion about a company that associates sound with a printed document. The sound is heard by swiping a special pen-like instrument over particular spots on the page. Not sure if this process would be suitable for academic documents, but it might be worth investigating. Here is the link: http://www.printingasia.com/
>> Colleen Hattersley
>>
>> On Fri, Mar 8, 2013 at 2:02 PM, Ruth Singer <ruth.singer@gmail.com> wrote:
>> Hi Steffen and others,
>>
>> So we've got the technological know-how and we've got archives that
>> will store these sound files in a way that we can link to. The problem
>> is how to publish documents with linked audio files in a way that will
>> receive the same academic recognition as a print publication without
>> linked audio. Mouton de Gruyter has gone backwards in their policy
>> regarding audio files. The latest information I received is that they
>> will not include CDs in their linguistics books or host audio files
>> without obtaining intellectual property over the sound files.
>>
>> I am interested in publishing descriptive work on an endangered
>> language with linked audio files. At the moment I'm hoping that the
>> OALI initiative will produce an academically recognised way to publish
>> this:
>> http://hpsg.fu-berlin.de/OALI/
>>
>> Here's a bit pasted from their website:
>> OALI is an Open Access initiative of Stefan Müller (and other
>> linguists at FU Berlin) and Martin Haspelmath that was started in
>> August 2012 and quickly found many prominent supporters (more
>> than 100 by now). Please refer to background and motivation to
>> learn more about the serious problems that we see with the
>> traditional practice of book publication in our field. An
>> extended version of this document including detailed numbers
>> and case studies can be found in Müller, 2012.
>> Our proposed solution is open-access publication in which the
>> (freely available) electronic book is the primary entity.
>> Printed copies are available through print-on-demand services.
>> We are planning to set up a publication unit at the FU Berlin,
>> coordinated by Stefan Müller and Martin Haspelmath, that
>> publishes high-quality book-length work from any subfield of
>> linguistics.
>>
>> Cheers,
>>
>> Ruth
>>
>> On Fri, Mar 8, 2013 at 12:59 PM, Mat Bettinson <mat@plothatching.com> wrote:
>>> On 8 March 2013 13:25, Doug Cooper <doug.cooper.thailand@gmail.com> wrote:
>>>
>>>> Yes, this states the server solution exactly. This does not pose any
>>>> technical barrier (it's just a matter of providing a wrapper for
>>>> something like sox or mp3splt).
>>>
>>>
>>> I recently knocked up something that did exactly what John described. I
>>> implemented it as a Python CGI script running on a web server. You pass a
>>> filename and the start/end time periods, and it uses the Python wave library
>>> to generate a new wave file, which is then sent to the web browser as
>>> Content-Type: audio/wav.
>>>
>>> As you say, if you're working with mp3 data it would need to be more
>>> sophisticated, piping to mp3splt etc.
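
For concreteness, here is a sketch of the kind of CGI handler Mat describes, using only the standard wave module; the parameter names, audio directory, and query format are assumptions of mine, not his actual script:

#!/usr/bin/env python3
# snippet.cgi -- sketch: serve a time slice of a WAV file as audio/wav.
# Assumed query parameters: file, start, end (seconds), e.g.
#   snippet.cgi?file=session1.wav&start=12.3&end=15.7
import io
import os
import sys
import wave
from urllib.parse import parse_qs

AUDIO_DIR = "/srv/audio"  # placeholder location of the archived recordings

def main():
    params = parse_qs(os.environ.get("QUERY_STRING", ""))
    name = os.path.basename(params["file"][0])  # basename() blocks path tricks
    start = float(params["start"][0])
    end = float(params["end"][0])

    with wave.open(os.path.join(AUDIO_DIR, name), "rb") as src:
        rate = src.getframerate()
        src.setpos(int(start * rate))
        frames = src.readframes(int((end - start) * rate))

        # Build the sliced WAV in memory, copying the source's parameters.
        buf = io.BytesIO()
        with wave.open(buf, "wb") as dst:
            dst.setnchannels(src.getnchannels())
            dst.setsampwidth(src.getsampwidth())
            dst.setframerate(rate)
            dst.writeframes(frames)

    body = buf.getvalue()
    sys.stdout.buffer.write(b"Content-Type: audio/wav\r\n")
    sys.stdout.buffer.write(f"Content-Length: {len(body)}\r\n\r\n".encode())
    sys.stdout.buffer.write(body)

if __name__ == "__main__":
    main()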
>>>
>>> --
>>> Regards,
>>>
>>> Mat Bettinson
>>>
>>
>> --
>> Ruth Singer
>> ARC Research Fellow
>> Linguistics Program
>> School of Languages and Linguistics
>> Faculty of Arts
>> University of Melbourne 3010
>> Tel. +61 3 90353774
>> http://languages-linguistics.unimelb.edu.au/academic-staff/ruth-singer
>>
>> --
>> Colleen
>>
--
Alexander Nakhimovsky, Computer Science Department
Colgate University, Hamilton NY 13346
Director, Linguistics Program
Director, Project Afghanistan
t. +1 315 228 7586 f. +1 315 228 7009