<div dir="ltr"><div dir="ltr"><div>Answers below:</div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Wed, Aug 2, 2023 at 4:16 AM Amit Moryossef <<a href="mailto:amitmoryossef@gmail.com">amitmoryossef@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr">Hi John,<div><br></div><div>1. Sounds like you are looking into doing a rule-based pose-to-mocap transformation.</div><div>The vast majority of previous work on this has shown that it does not work in a rule based, and one must train a neural network for this transformation.</div></div></blockquote><div><br></div><div>I plan on using existing python packages I mentioned to perform the conversion from video to geometry+transformations, with a little glue to get it into BVH or HAnim+BVH. If these python packages are rule-based, then I need to reconsider. I know rule-based systems do not work for the most part. I don't really have a lot of experience with neural networks or video capture and hope to leverage other's work. If the python package I mentioned uses rule-based systems, I will consider alternatives. I have just a little bit of experience. I think there's a large leap from geometry+transformations to language. That's the challenge. I hope to do translation from language to geometry+transformations as well. This is where encoders for geometry+transformations come in. Ultimately, I view geometry+transformations as a language (HAnim, X3D, BVH), so I'll be doing language to language translation, or as you say, sequence-to-sequence (perhaps in a larger, tree-to-tree solution. Trees may be encoded as sequences.)</div><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div><br></div><div>2. SignTube will soon (always, hopefully) be able to transcribe videos in SignWriting automatically. The quality will not be great (at first). That too will be using a neural network, specifically, a VQVAE to encode the video, and a sequence-to-sequence translation model to write the SignWriting.</div></div></blockquote><div><br></div><div>Thank you for any information you have on VQVAE. This looks like a good resource: <a href="https://keras.io/examples/generative/vq_vae/">https://keras.io/examples/generative/vq_vae/</a>.</div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div><br></div><div>3. If you want to generate videos directly from SignWriting, <a href="https://rotem-shalev.github.io/ham-to-pose/" target="_blank">this work</a> would be a good starting point, working from HamNoSys.</div></div></blockquote><div><br></div><div>I'm not targeting video output at this time. I am targeting BVH+HAnim. Then video will be possible, but not my job, except for validation. My target intended audience is the deafblind, so robotic control of mannequins--SignWriting is not an option. 
<div><br>If SignWriting can help on this project, then that would be a big bonus!</div><div><br></div><div>John</div><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div><br></div><div>Amit</div><div><br></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Wed, Aug 2, 2023 at 6:53 AM John Carlson <<a href="mailto:yottzumm@gmail.com" target="_blank">yottzumm@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr">I need a large collection of signing videos to run an experiment converting video to geometry. I do not have particularly large drives for this, so I may rent space on a cloud service.<div><br></div><div>I plan to use the Python packages cv2 (OpenCV) <a href="https://pypi.org/project/opencv-python/" target="_blank">https://pypi.org/project/opencv-python/</a>, cvzone <a href="https://github.com/cvzone/cvzone" target="_blank">https://github.com/cvzone/cvzone</a>, and MediaPipe <a href="https://developers.google.com/mediapipe/solutions/guide" target="_blank">https://developers.google.com/mediapipe/solutions/guide</a> to convert video files into geometry and transformations, in either BVH (BioVision Hierarchy) or some other mocap format (HAnim+BVH?). That is, we are converting signs and body language to line segments and points, ultimately to sets of geometry and transformations, and then translating those to something like English. I do not know whether facial expressions are really recognizable. I may try my hand at lipreading video; I don't know yet. If the video has sound, we'll transcribe that. (A rough sketch of the landmark-extraction step is appended at the end of this message.)</div><div><br></div><div>Ideally, I'll be able to store the geometry, transformations, and translation (possibly obtained by transcribing sound or lipreading) along with a link to the video URL. The step after that is to find a translation from geometry and transformations to English, and back.</div><div><br></div><div>An acquaintance suggested that depth information is required but not available; Elon Musk says depth is not required for autonomous driving. I don't know, but I want to find out.</div><div><br></div><div>If anyone has already tried this, let me know. It would be interesting to convert geometry to SignWriting as well.</div><div><br></div><div>I am not sure whether SignTube does this automatically or uses human transcribers.</div><div><br></div><div>Any knowledge of a media solution or publicly available database that links all this data would be helpful, too.</div><div><br></div><div>If someone wants to provide assistance on this effort, let me know.</div><div><br></div><div>John</div></div></div></div></div>
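<div><br></div><div>P.S. Here is the rough sketch of the landmark-extraction step mentioned above, assuming MediaPipe's Pose solution and a placeholder local file name ("sign_video.mp4"); hand and face landmarks and the conversion to BVH are left out, so treat it as a starting point rather than a finished pipeline.</div><div><pre>
import cv2
import mediapipe as mp

# Rough sketch of the video-to-landmarks step using cv2 + MediaPipe Pose.
# "sign_video.mp4" is a placeholder; a real run would loop over a whole
# collection and also capture hand and face landmarks (e.g. Holistic).
pose = mp.solutions.pose.Pose(static_image_mode=False)
capture = cv2.VideoCapture("sign_video.mp4")

frames = []  # one list of (x, y, z, visibility) tuples per video frame
while True:
    ok, frame_bgr = capture.read()
    if not ok:
        break
    # MediaPipe expects RGB; OpenCV decodes frames as BGR.
    results = pose.process(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
    if results.pose_landmarks:
        frames.append([(lm.x, lm.y, lm.z, lm.visibility)
                       for lm in results.pose_landmarks.landmark])

capture.release()
pose.close()
print(f"extracted landmarks for {len(frames)} frames")
# The next step (not shown) would be retargeting these points onto a
# skeleton and writing the joint rotations out as BVH / HAnim motion data.
</pre></div>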
_______________________________________________<br>
Sw-l mailing list<br>
<a href="mailto:Sw-l@listserv.linguistlist.org" target="_blank">Sw-l@listserv.linguistlist.org</a><br>
<a href="https://listserv.linguistlist.org/cgi-bin/mailman/listinfo/sw-l" rel="noreferrer" target="_blank">https://listserv.linguistlist.org/cgi-bin/mailman/listinfo/sw-l</a><br>
</blockquote></div>
</blockquote></div></div></div>