Basically, there is a huge lack of theory, as well as of historical educational research knowledge, in the field. At the recent LAK conference in Arizona, Chris Rice, Tanya Joosten, and I were constantly subtweeting the conference on Twitter, away from the official hashtag, about this problem. Very basic educational ideas were being presented as if they had just been discovered – things along the lines of "students that read more content scored better on the tests." But there was no theoretical examination of what that meant, no connection to past research in that area, no critical examination of how limited that finding is in the first place.

And this is the top Learning Analytics conference, boasting an incredibly low acceptance rate. Some UTA profs there expressed their amazement at that: if these are the ones that got in… what got rejected?

Not to mention that the proceedings from this conference now rank in the top 10 EdTech journals in Google's rankings. But read the papers they publish in those proceedings, and a huge chunk of them would not even be considered by any of the other top 10 journals, because they are soooo lacking in theory, in critical examination in the discussion, and so on.

I hate to sound so negative about it… but I raised these same problems at this same conference 3 years ago and basically got "shooshed" over it by some people. Even those that somewhat agreed didn't really get it. Now more and more people with higher visibility than me are starting to notice as well. If the field doesn't hit the brakes and deal with this now, it could blow up on them later. Can you imagine the exposé waiting to happen when a top-10-ranked publication is found to have multiple articles that wouldn't even make it to peer review at the other top 10?