Do We Need Time to Do Meaningful Research?


This is a rant about why the current way academia rewards researchers won’t work in the age of AI. Rewarding “number of publications” over quality, real-world impact, or any meaningful contribution at all was already problematic, but now, when people believe that using AI will enhance their productivity, I feel more inclined to switch over to “slow” research. And if you know me at all, you know that I’m usually all about “fast” everything (OK, except my PhD, which took me 7 years to finish, but I digress). So for me to want to go slow is a big deal. I’ll explain.

But first, snippets from my history of doing “research”.

Undergrad years

  1. I’m a college freshman, and we’re supposed to do research for the required course called the “freshman writing program”. I’m in the library, searching the card catalog. I’m with friends who have no idea how to use it, so I help them. I find some books and run upstairs to get one, and I notice one beside it that I think I might like, so I grab it, too. One of the sources I need is on microfiche; I think I needed an appointment to see that one, because it was too late in the day for the staff member responsible. I need a journal article, so I search through these huge stacks that… gasp… move, so I can browse, find the one I need, and photocopy it; I wait in line to put in my photocopy order.
  2. Fast forward to my junior year. There are now library databases. I’m doing really awkward Boolean searches to do research for MUN (Model United Nations). I discover that one of my colleagues used to make up BS on his CV to get good internships, and I no longer believe anything he says about MUN, the UN, or anything else. We’re both secretariat (i.e. student leaders of councils of other students). I feel… disenfranchised by overly eloquent people, descendants of diplomats, who care nothing for the truth (the descendants, not diplomats in general). I guess this was my first interaction with “politicians”.

  3. My senior year, search engines exist, and I’m trying to do research for my senior computer science courses. We discuss with each other in the lab what counts as a good source, and we conclude that if it’s a “PDF” and not a website, it’s credible (I don’t even think we checked whether it was in a journal or whatnot). We pride ourselves on our “cut and paste” skills, because I think we’ve forgotten what paraphrasing is, since we haven’t done it since freshman year.

PhD and beyond

4. Fast forward to my PhD, and I now understand what information literacy is and all that. I understand what proper citation looks like. I’ve got access to multiple libraries: AUC’s locally, the University of Sheffield’s online, and Google Scholar, not just Google. I discover that Google Scholar can be set up to tell me when a subscription-based article is available at a library I have access to, so I don’t have to search too much. I occasionally use library databases, but Google Scholar is the best. It even gives you the citation in different styles (not always 100% accurate). Serendipity still exists, and I keep coming across critical pedagogy literature instead of critical thinking (my research topic), which changes the course of my life (all my research since then has focused on social justice!). In any case, Google Scholar is for me the most useful of all the changes in the way we do research.

5. Fast forward towards the end of my PhD dissertation, when I have toooooooo much to read as it is. On days when I’m supposed to do more writing or analysis, I “procrastinate” by mindlessly downloading more articles I will likely never read. I might read them someday (we’re talking 10 years after I finished my PhD!). I struggle with which folder to put each reading in, because some of them talk about more than one thing. I discover Mendeley and Zotero and tags. If an article has multiple keywords, of course it fits into multiple categories. Duh. I also discover annotation, and that I can share an article across multiple devices by working online. My work laptop, my home laptop, my iPad, Dropbox, Google Docs… aaaaah… eventually my phone.

6. Done with my PhD, doing research with others. Google Drive is our friend for collaborative writing: no Word docs back and forth (although, strangely, some people still do that. Any idea why? I guess Google isn’t allowed in their institution? MS 365 isn’t so good at collaborative writing.)

7. Fast forward to now. GenAI. Sigh.

    Look, it was hard to GET to a source back in scenario #1, because the process of finding sources was slow, so we were probably expected to find fewer sources and read them more slowly and deeply. It was long-winded, but good. I finished books and stuff. I learned that different info sources were in different parts of the library, and some were harder to find.

    In scenarios #2 and #3, it was the early days of library databases and the internet. They got better, and we got better at using them to get what we wanted and at filtering for quality. The internet never stopped being full of BS, but we got better at recognizing it. Usually. I taught students that kind of thing. Now it’s an uphill battle to teach people that regular GenAI is not a search engine (though I know that with Copilot, Gemini, and Perplexity, you could argue it is one). Or to convince people that an authoritative-sounding response may be complete BS. You just won’t know unless you already know, and if you already know, then why are you researching it? We usually research things we don’t know. Right? Are precision and accuracy not important to anyone anymore?

    In scenarios #4 and #5, I think I was so lucky to be doing my PhD in that era, when it was easy and quick to get a lot of information fast (not to mention interlibrary loan, open access stuff, and pirated stuff: so many ways to get sources) and also still manage and organize it all, and maybe read quite a bit of what we found.

    But, man. Scenario #7. Here is my nightmare scenario of AI taken to the extreme, and I truly do NOT want to live in this world. P.S. I hate, hate, hate writing literature reviews for articles, and I hate, hate, hate reading them, UNLESS the author synthesizes the literature into something new, meaningful, and interesting (I say this because, as an academic, I of course read lots of master’s theses at various stages of development, and I’m always trying to help the student learn how to make their literature review something interesting and not a laundry list of what they’ve read).

    So my nightmare scenario looks like this:

    1. Use an AI tool to help you come up with research questions and even research topics (because, what, you didn’t know from looking around you what might be meaningful and worthwhile to study?).
    2. Use an AI tool to find a large number of literature sources on the topic, synthesize them, and write up a literature review for you.
    3. Use an AI tool to write a survey based on your research questions.
    4. Send the survey to people.
    5. Let the AI tool analyze the responses and create graphs for you. Note that some of the open-ended survey responses may themselves be written with AI. Because, yes, people DO that. They have done that to me.
    6. Let the AI tool write up your paper for you, and submit it to a journal.
    7. The journal gives you peer-reviewed feedback from three peer reviewers. It’s clear that reviewer #2 used AI and never read your paper. This has happened to me. To be fair, you’ve written it with AI and have never really read the paper yourself. You’re citing something AI paraphrased for you, but you don’t know the person who came up with the idea, and if you met them at a conference you’d have nothing to say, because you’ve neither read their paper nor noticed their name. It’s possible that, since you haven’t actually read all the sources you cite, some of them don’t exist, or don’t say what the AI tool summarized them as saying. I have seen that happen to others, and I myself have been misquoted as saying something I never said. It is already happening.
    8. Your paper gets published real quick. Quick turnaround. Because AI.
    9. You promote your paper on social media and ask an AI tool to divide the key points into multiple tweets.
    10. People pick up your paper and use an AI tool to summarize it and scan for the most pertinent information.
    11. And so on.

    OK, so I’ve taken it too far; perhaps people will not use AI for some of the steps above. Maybe they’ll do interviews and actually interview real people, in person or on Zoom, but real people and stuff. But that would take waaaaay longer, so the ones who used AI would publish faster and get tenure, and you… won’t.

    So here’s what I want to say.

    Next time I have to do a literature review (can we just stop doing so many of these already?) I’m really, literally, going to try to focus on 5 or 6 good sources for each key element I need to cover, and not try to find the 20 or so sources AI can find and read for me. I’m going to find the 5 or 6 sources and skim them as a human being, make sure they’re a good set, maybe follow one or two more from their reference sections, then read them in depth, and if they inspire me, if they really influence my thinking, I’ll include them in my writing. Maybe I’ll ask AI to help make it shorter because… word counts.

    When I have data to analyze, I’ll read it all myself. Maybe I’ll let AI suggest possible themes from qualitative data, but I’ll edit them thoroughly myself, because I know my data and I know what the AI might miss and what the AI might make up.

    Or maybe, just maybe, I’ll do a smaller study where I can write up the analysis without needing AI. And that smaller study will be slow, because I’m not just doing research to publish it; I’m doing the research to benefit others. And they’re involved in the analysis alongside me, because we’re making it participatory and I’m researching with them, not about them like they’re specimens. And they’ll have agency over how their data gets used and what recommendations come out of it. And maybe that research will have meaning for someone and help make their lives better.

    And if someone’s going to make a podcast about this work, I’d love for them to be a real person with a history and an identity that shows me why they’re interested in the work, someone who really reads it and engages with it, and invites me and the participants onto the podcast to discuss our work: not to rehash it, but to extend it further with their fresh eyes. Google’s NotebookLM AI podcast with AI voices, while cool for the first 5 minutes after discovering it, doesn’t give you fresh eyes on your work, doesn’t let you know what a real human thinks of the work, how they might use it, how they might critique it.

    And maybe then people will listen to the podcast and be inspired. Maybe some don’t speak English, so they’ll need AI to translate it, and they’ll get the gist of most of it, and more people will want to try what we tried but adapt it for their context. And the idea will take off, and many people around the world will benefit from it.

    That’s the kind of research I want to do. And AI won’t do that for me.

    So here is what I think research should look like going forward:

    Shorter literature review sections. Just focus on how particular literature led you to where you are right here, right now, in this paper. Just the main stuff. And just link me to it; I’ll go there if I need to dig deeper and haven’t read it already.

    More space in the paper for sharing our data and our interpretation.

    Reward meaningfulness and contribution to the world, not number of publications. Let people do the longitudinal research that takes 5, 10 years to do, and reward that 10 times better than producing 3 papers out of one measly study conducted over 2 months.

    AI tools can and will produce text and make connections, but there is no meaning-making there. What’s the point? It has the appearance of research, but no soul inside. Some research is already that superficial, but I really don’t want to see more of it.

    “The awe of machines wielding the power of language is seductive, but Performative AI might be a more appropriate name, as it leans toward production and mimicry—and sometimes fakery—over deep creativity, accuracy, or empathy.” – Judy Estrin

    But look, I’ve written this post over several days, and I’m thinking now: if I’m doing research in the field of medicine, and I do it faster, maybe I’ll help heal more people faster. That seems like a noble goal. As long as I know what I’m doing, and speeding up some of the steps doesn’t result in what in Arabic we call “karwata”, which means rushing something to the point of doing it really poorly. The English word doesn’t come to mind, but that’s basically the risk with AI: we take shortcuts, and by doing so we miss out on important steps or details, or we just miss out on giving ourselves enough time to think and reflect deeply about what we’re doing, why we’re doing it, and what it all means.

    I understand that perhaps, just like library databases and internet search engines got waaaaay better, though never perfect, AI may get there (and btw, search engines do use AI, just not GenAI). I definitely like my tools for analyzing data; we can get more done faster. I’m just questioning whether faster will always be better. No wait: I’m suggesting that faster is often NOT better and MORE is often NOT better, and I want us to use different ways of assessing valuable research. I know that if we keep rewarding “more papers published per year” in academia, we’re setting ourselves up for more mediocrity, if not more complete BS.

    What’s your take?

    Card Catalog image: Photo by Erol Ahmed on Unsplash

    Header image of bookshelves and light: Photo by 🇸🇮 Janko Ferlič on Unsplash
