<html>
<head>
<meta name="generator" content="Windows Mail 17.5.9600.20413">
<style data-externalstyle="true"><!--
p.MsoListParagraph, li.MsoListParagraph, div.MsoListParagraph {
margin-top:0in;
margin-right:0in;
margin-bottom:0in;
margin-left:.5in;
margin-bottom:.0001pt;
}
p.MsoNormal, li.MsoNormal, div.MsoNormal {
margin:0in;
margin-bottom:.0001pt;
}
p.MsoListParagraphCxSpFirst, li.MsoListParagraphCxSpFirst, div.MsoListParagraphCxSpFirst,
p.MsoListParagraphCxSpMiddle, li.MsoListParagraphCxSpMiddle, div.MsoListParagraphCxSpMiddle,
p.MsoListParagraphCxSpLast, li.MsoListParagraphCxSpLast, div.MsoListParagraphCxSpLast {
margin-top:0in;
margin-right:0in;
margin-bottom:0in;
margin-left:.5in;
margin-bottom:.0001pt;
line-height:115%;
}
--></style></head>
<body dir="ltr">
<div data-externalstyle="false" dir="ltr" style="font-family: 'Calibri', 'Segoe UI', 'Meiryo', 'Microsoft YaHei UI', 'Microsoft JhengHei UI', 'Malgun Gothic', 'sans-serif';font-size:12pt;"><div>That’s why it’s so important to provide easy access to both standardized datasets and automatic, standardized scorers.</div><div>As an example, there was considerable variation in how people evaluated their systems, with both a race to the bottom in terms of meaningful evaluation (people wanted to see improvements from noisy world knowledge, so they’d evaluate on gold mentions, because that’s the only setting in which that helps a lot), and even people writing ACL papers about how everyone was doing it wrong weren’t above “choosing not to impute certain errors”.</div><div>These are the same issues that also plagued parsing evaluation ca. 1997, and in coreference the SemEval-2010 and CoNLL shared tasks mean that we now have a dataset that’s fully accessible to everyone (unlike the ACE data, where the test data was not distributed to participants), along with a standardized scorer (written by Emili Sapena for SemEval, with many corrections and improvements from Sameer Pradhan and the others who organized the CoNLL shared task).</div><div>So it’s definitely possible to measure the same thing for everyone, even if it takes some effort.</div><div>In NLP, you want to measure not only the same thing for everyone, but also the right thing: normally, people don’t want to use a parser to find out exciting new things about PTB section 23; they want to use it on 18th-century German, or on Arabic blogs, or on the next exciting thing. 
That is why, once you have one point of reference firmly nailed down, you want to move on to another one and see whether your assumptions still hold.</div><div><br></div><div>So, yes, it’s perfectly possible to do “cargo cult”-style NLP, which is why standardized evaluations and people actually replicating others’ experiments are both important. And I picked established tasks here because earlier mistakes are more visible and better understood, not because I couldn’t come up with more egregious examples from new and exciting tasks.</div><div><br></div><div>-Yannick</div><div><br></div><div style="padding-top: 5px; border-top-color: rgb(229, 229, 229); border-top-width: 1px; border-top-style: solid;"><div><font face=" 'Calibri', 'Segoe UI', 'Meiryo', 'Microsoft YaHei UI', 'Microsoft JhengHei UI', 'Malgun Gothic', 'sans-serif'" style='line-height: 15pt; letter-spacing: 0.02em; font-family: "Calibri", "Segoe UI", "Meiryo", "Microsoft YaHei UI", "Microsoft JhengHei UI", "Malgun Gothic", "sans-serif"; font-size: 12pt;'><b>From:</b> <a href="mailto:nasmith@cs.cmu.edu" target="_parent">Noah A Smith</a><br><b>Sent:</b> Wednesday, April 9, 2014, 03:59<br><b>To:</b> <a href="mailto:kevin.cohen@gmail.com" target="_parent">Kevin B. Cohen</a><br><b>Cc:</b> <a href="mailto:corpora@uib.no" target="_parent">corpora</a></font></div></div><div><br></div><div dir=""><div dir="ltr">What are the "unknown ways" in which one NLP researcher's conditions might differ from another's? If you're empirically measuring runtime, you might have a point. But if you're using a standardized dataset and an automatic evaluation, it seems reasonable to report others' results for comparison. Since NLP is much more about methodology than scientific hypothesis testing, it's not clear what the "experimental control" should be. Is it really better to run your own implementation of the competing method? (Some reviewers would likely complain that you might not have replicated the method properly!) 
What about running the other researcher's code yourself? I don't think that's fundamentally different from reporting others' results, unless you don't trust what they report. Must I reannotate a Penn Treebank-style corpus every time I want to build a new parser?</div>
<div class="gmail_extra"><br clear="all"><div>--<br>Noah Smith<br>Associate Professor<br>School of Computer Science<br>Carnegie Mellon University</div>
<br><br><div class="gmail_quote">On Tue, Apr 8, 2014 at 6:57 PM, Kevin B. Cohen <span dir="ltr"><<a href="mailto:kevin.cohen@gmail.com" target="_parent">kevin.cohen@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin: 0px 0px 0px 0.8ex; padding-left: 1ex; border-left-color: rgb(204, 204, 204); border-left-width: 1px; border-left-style: solid;">
<div dir="ltr"><div><div>I was recently reading the
Wikipedia page on "cargo cult science," a concept attributed to no
less a light than Richard Feynman. I found this on the page:<br><br>"An example of cargo cult science is an experiment that uses another researcher's results in lieu of an <a href="http://en.wikipedia.org/wiki/Experimental_control" target="_parent">experimental control</a>.
Since the other researcher's conditions might differ from those of the
present experiment in unknown ways, differences in the outcome might
have no relation to the <a href="http://en.wikipedia.org/wiki/Independent_variable" target="_parent">independent variable</a> under consideration. Other examples, given by Feynman, are from <a href="http://en.wikipedia.org/wiki/Educational_research" target="_parent">educational research</a>, <a href="http://en.wikipedia.org/wiki/Psychology" target="_parent">psychology</a> (particularly <a href="http://en.wikipedia.org/wiki/Parapsychology" target="_parent">parapsychology</a>), and <a href="http://en.wikipedia.org/wiki/Physics" target="_parent">physics</a>. He also mentions other kinds of dishonesty, for example, falsely promoting one's research to secure funding."<br>
<br></div>If we all had a dime for every NLP paper we've read that used "another researcher's results in lieu of an
experimental control," we wouldn't have to work for a living. <br><br>What do you think? Are we all cargo cultists in this respect?<br><br><a href="http://en.wikipedia.org/wiki/Cargo_cult_science" target="_parent">http://en.wikipedia.org/wiki/Cargo_cult_science</a><br>
<br></div>Kev<span class="HOEnZb"><font color="#888888"><br><br clear="all"><br>-- <br><div dir="ltr">Kevin Bretonnel Cohen, PhD<br>Biomedical Text Mining Group Lead, Computational Bioscience Program, <br>U. Colorado School of Medicine<br>
<a href="tel:303-916-2417" target="_parent">303-916-2417</a><br><a href="http://compbio.ucdenver.edu/Hunter_lab/Cohen" target="_parent">http://compbio.ucdenver.edu/Hunter_lab/Cohen</a><br>
<br><br><br></div>
</font></span></div>
<br></blockquote></div><br></div>
</div></div>
</body>
</html>