Hi,<br><br><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div>> I tried a couple of reviews from Amazon. Among the different feature sets from 1 to 6, one is always close to Amazon's rating, but unfortunately it's never one feature set in particular; rather, it's randomly one of the six. Apart from the closest method, all the others are usually reversed (e.g., if the closest method gives 5 stars, all the others give 1). However, this might have just happened for the couple of examples I tried (reviews of the Kindle on Amazon).<br>
</div></blockquote><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div>
</div>Isn't that more-or-less what one would expect from random output?<br>
<div><br></div></blockquote><div>It can be considered as random if classification is performed on weblogs although the classifiers are trained on grammatically correct movie reviews. Actually, the recognition rate of my approach for almost all corpora I studied in my thesis is about triple of random choice. For example, for a 9-classes-problem choice by chance is 11.(1)%. My approach calculates about 34% and so on. But you have to classify a review that is conform with the style of texts used for learning the classifier. Otherwise, you get unreliable results.</div>
<div><br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div><br>
</div>As a human, and an Englishman, I expect I can understand and fairly judge the sentiment of most reviews written by, say, an American truck driver, without undue reprogramming. Is this really an unrealistic goal for our algorithms? And I wonder, is mastering a highly restricted style or register a necessary step in that direction... or is it in fact a detour.<br>
<div><br></div></blockquote><div>As a human and as an Englishman, you learned to recognize particular words of English language. Now you understand English in every country but you can't comprehend it. Understanding is only the first step of cognition, comprehension takes much more time and energy. Or can you explain the most severe problems of American truck drivers nowadays? Or tell me what problems you would discuss with an American truck driver? In terms of data mining, it means: you know what features you have in a dataset but you don't know their weights. In my opinion, if you want to learn "weights" you have to live in the country and tune the weights.</div>
<div><br></div><div>I don't think that we should worry about reprogramming -- first of all we can be happy that at least NaiveBayes or SVM classify texts more or less realistically. In my demo, I maintain about 30 classifiers that were trained using lexical, stylometric, deictic, grammatical datasets. You can look over a framework I use for this purpose (<a href="http://www.socioware.de/technology.html" target="_blank">www.socioware.de/technology.html</a>). AO</div>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div>
<br>
<br>
> How can a classifier calculate a weight of a lexical feature if this lexical feature is not present in the analyzed text?<br>
<br>
<br>
<br>
</div>By inferring from similarities between that feature and those that *are* present (e.g. through semi-supervised learning/bootstrapping of unannotated data)? That's at least one method about which a fair amount has been written already. I'm not saying it's a solved problem, mind you, but perhaps you're not up against a brick wall yet?<br>
<div><div><br></div></div></blockquote><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div><div>
<br>
<br>
Justin Washtell<br>
University of Leeds<br>
</div></div></blockquote></div><br>
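The bootstrapping idea Justin mentions can be sketched in miniature. Everything below is invented for illustration (the toy texts, labels, and the tiny Naive Bayes are mine, not from the thesis or demo); the point is that a word never seen in the labeled data ("superb") acquires a usable weight because it co-occurs with known positive words in the unlabeled data:

```python
from collections import Counter
import math

# Toy self-training (bootstrapping) sketch on invented data.
labeled = [
    ("great wonderful product", "pos"),
    ("awful terrible product", "neg"),
]
unlabeled = [
    "great wonderful superb product",   # "superb" absent from labeled data
    "awful terrible dreadful product",  # "dreadful" absent from labeled data
]

def train(examples):
    """Per-class word counts for a multinomial Naive Bayes."""
    counts = {"pos": Counter(), "neg": Counter()}
    for text, label in examples:
        counts[label].update(text.split())
    vocab = {w for c in counts.values() for w in c}
    return counts, vocab

def log_prob(counts, vocab, text, label):
    """Log-likelihood of the text under one class, with add-one smoothing."""
    total = sum(counts[label].values())
    return sum(
        math.log((counts[label][w] + 1) / (total + len(vocab)))
        for w in text.split()
    )

def classify(counts, vocab, text):
    return max(("pos", "neg"),
               key=lambda lab: log_prob(counts, vocab, text, lab))

# Round 1: label the unlabeled texts with the seed model,
# then retrain on the pseudo-labeled data as well.
counts, vocab = train(labeled)
pseudo = [(t, classify(counts, vocab, t)) for t in unlabeled]
counts, vocab = train(labeled + pseudo)

# "superb" never appeared in the labeled set, but after bootstrapping
# it carries positive weight via its co-occurrence with "great".
print(classify(counts, vocab, "superb product"))  # pos
```

In practice one would only accept pseudo-labels above a confidence threshold and iterate; this single-round version just shows the mechanism by which an unseen lexical feature picks up a weight.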