<div dir="ltr">Hi John,<div><br></div><div>1. Sounds like you are looking into a rule-based pose-to-mocap transformation.</div><div>The vast majority of previous work on this has shown that a rule-based approach does not work; one must train a neural network for this transformation.</div><div><br></div><div>2. SignTube will soon (and, hopefully, from then on) be able to transcribe videos in SignWriting automatically. The quality will not be great (at first). That too will use a neural network: specifically, a VQVAE to encode the video and a sequence-to-sequence translation model to write the SignWriting.</div><div><br></div><div>3. If you want to generate videos directly from SignWriting, <a href="https://rotem-shalev.github.io/ham-to-pose/">this work</a> would be a good starting point; it works from HamNoSys.</div><div><br></div><div>Amit</div><div><br></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Wed, Aug 2, 2023 at 6:53 AM John Carlson <<a href="mailto:yottzumm@gmail.com">yottzumm@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr">I need a large collection of signing videos to run an experiment in converting video to geometry. I do not have particularly large drives for this, so I may rent space on a cloud service.<div><br></div><div>I plan to use the Python packages cv2 (OpenCV) <a href="https://pypi.org/project/opencv-python/" target="_blank">https://pypi.org/project/opencv-python/</a>, cvzone <a href="https://github.com/cvzone/cvzone" target="_blank">https://github.com/cvzone/cvzone</a>, and MediaPipe <a href="https://developers.google.com/mediapipe/solutions/guide" target="_blank">https://developers.google.com/mediapipe/solutions/guide</a> to convert video files into geometry and transformations, either BVH (BioVision Hierarchy) or some other mocap format (HAnim+BVH?).
That is, we are converting signs and body language to line segments and points, and ultimately to sets of geometry and transformations, and then translating those into something like English. I do not know whether facial expressions are really recognizable. I may try my hand at lipreading video, IDK. If the video has sound, we'll transcribe that.</div><div><br></div><div>Ideally, I'll be able to store geometry, transformations, and a translation (possibly obtained by transcribing sound or lipreading) along with links to a video URL. The step after that is to find a translation from geometry and transformations to English, and back.</div><div><br></div><div>An acquaintance suggested that depth was required but not available; Elon Musk says depth is not required for autonomous driving. IDK, but I want to find out.</div><div><br></div><div>If anyone has already tried this, let me know. It would be interesting to convert geometry to SignWriting as well.</div><div><br></div><div>I am not sure if SignTube does this automatically, or if it uses human transcribers.</div><div><br></div><div>Any knowledge of a media solution or publicly available database that links all this data would be helpful, too.</div><div><br></div><div>If someone wants to provide assistance on this effort, let me know.</div><div><br></div><div>John</div></div></div></div></div>
_______________________________________________<br>
Sw-l mailing list<br>
<a href="mailto:Sw-l@listserv.linguistlist.org" target="_blank">Sw-l@listserv.linguistlist.org</a><br>
<a href="https://listserv.linguistlist.org/cgi-bin/mailman/listinfo/sw-l" rel="noreferrer" target="_blank">https://listserv.linguistlist.org/cgi-bin/mailman/listinfo/sw-l</a><br>
</blockquote></div>
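P.S. On the landmarks-to-BVH direction discussed above: the first step of such a conversion is turning world-space joint positions (as a pose estimator like MediaPipe would provide per frame) into the parent-relative offsets a BVH hierarchy stores. A minimal sketch, assuming a toy three-joint chain; the joint names and parent map here are invented for illustration and are not MediaPipe's landmark set:

```python
# Toy sketch: convert per-frame world-space joint positions into the
# parent-relative offsets that a BVH hierarchy stores. The skeleton
# below is invented for illustration, not MediaPipe's landmark set.

PARENTS = {"hips": None, "spine": "hips", "head": "spine"}

def to_offsets(world):
    """Map {joint: (x, y, z)} world positions to offsets from each parent."""
    offsets = {}
    for joint, parent in PARENTS.items():
        if parent is None:
            # The root joint keeps its world position.
            offsets[joint] = world[joint]
        else:
            px, py, pz = world[parent]
            x, y, z = world[joint]
            offsets[joint] = (x - px, y - py, z - pz)
    return offsets

frame = {"hips": (0.0, 1.0, 0.0), "spine": (0.0, 1.5, 0.0), "head": (0.0, 1.8, 0.1)}
offsets = to_offsets(frame)
```

A full converter would also fit per-frame joint rotations and emit BVH HIERARCHY/MOTION sections; this only shows the offset computation, which is the part a rule-based approach can do before any learned model is involved.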