Dear Corpora list members,

I am looking for a Chinese tokenization and Chinese lemmatization tool to tokenize Chinese Wikipedia text. Please suggest an open-source, freely available tool.
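For reference, here is a minimal sketch of the kind of segmentation I have in mind, assuming an open-source segmenter such as jieba (named here purely for illustration, not as a settled choice):

    # Segment a Chinese sentence with the open-source jieba library
    # (pip install jieba). The sample sentence is illustrative only.
    import jieba

    text = "维基百科是一个自由的百科全书"
    tokens = jieba.lcut(text)  # lcut returns the segmented words as a list
    print(tokens)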
Regards,
Ajay Dubey
M.S. by Research
SIEL, IIIT, Hyderabad
