I am an AI researcher. But I take a slightly unusual approach to the subject, and I also have a peculiar backstory...
(This is a bit long, but the ending might be interesting, if you have the patience to read that far.)
Back in the late 1980s, when I graduated (for the second time), there was a lot of cross-border traffic between artificial intelligence, cognitive psychology, linguistics, and neuroscience. Researchers were comfortable in each other's company, and ways of thinking were fluid and synergistic.
Then, everything seemed to change.
At the heart of the change was a fight started by an AI faction who called themselves the 'Neats' and who believed that AI should be done by mathematicians. They called other AI researchers 'Scruffs', by which they meant people who built AI programs that were exploratory, or inspired by human psychology.
This was all happening in a period that was later called the Second AI Winter, and by the time Spring arrived (roughly, the early 1990s) the Neats had won. In the course of a few years (roughly 1987 to 1992) they took over all the academic positions of power, and before long it became darned hard for anyone who was labeled a 'Scruff' to get a job, get funding, get students, or get published. Of course, there were exceptions, but this was the big picture.
So, where do I come into this?
I graduated at the beginning of the fight. I didn't know it at the time, but I would have been classified as a "super-Scruff": a Scruff who also believed that 'complex systems' were important in AI. And if the Neats hated one thing more than psychology and hacking, it was the whole idea of a complex system, because one implication of the complex-system concept is that intelligence might be intrinsically emergent, and if it really is emergent you can't use mathematics to build AI systems.
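To make 'emergent' concrete, here is a minimal sketch (my illustration, not anything from the Neats-vs-Scruffs debate itself): the Rule 110 cellular automaton, where a trivially simple local update rule produces global behavior rich enough to be provably Turing-complete, which means there can be no general mathematical shortcut for predicting what it will do. You just have to run it.

    # Rule 110: each cell looks at (left, self, right) and consults an
    # 8-entry lookup table encoded in the bits of the number 110.
    # The rule is trivial; the resulting global pattern is not.
    RULE = 110

    def step(cells):
        """One update of every cell, with wrap-around at the edges."""
        n = len(cells)
        return [
            (RULE >> (4 * cells[i - 1] + 2 * cells[i] + cells[(i + 1) % n])) & 1
            for i in range(n)
        ]

    # Start from a single live cell and watch structure emerge.
    cells = [0] * 63 + [1] + [0] * 63
    for _ in range(40):
        print("".join("#" if c else "." for c in cells))
        cells = step(cells)

Nothing in the update rule 'contains' the gliders and interacting structures that appear when you run it. That disconnect between local rules and global behavior is what 'emergent' means here, and it is exactly the kind of system the Neats wanted nothing to do with.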
So, as soon as I graduated, my career was doomed. That entire branch of AI that involved combining complex systems with cognitive psychology (my specialty) was wiped out by the ascendance of the Neats.
At the time, though, I was blissfully ignorant of all this. Instead, I carried on innocently trying to engage other AI researchers in conversations about psychology and exploratory methods. I tried to get research posts, gave talks, and so on. But in spite of my best efforts, I got nowhere. Eventually, like many Scruffs, I had to find work elsewhere, which meant working as a software engineer.
But (unlike many Scruffs) I never gave up on my AI research.
In 2006 I gave a paper at the first workshop on Artificial General Intelligence, in Bethesda, Maryland. That paper ("Complex Systems, Artificial Intelligence and Theoretical Psychology") argued that AI had a serious problem at its heart: it was ignoring the fact that if you want to get an intelligent system working, you inevitably and unavoidably have to accept that it will be a complex system. This, I pointed out, meant we had to take drastic action, because all modern AI is built on the assumption that you can safely ignore complex-systems effects. Later in the paper I suggested a methodology for getting around this problem.
The funny thing is ... right now (2019) the AI/ML community is slowly, slowly waking up to the ideas I have been pushing all along. There have been rumblings in the Machine Learning community about how its methodology is looking more and more like "alchemy". It might not be obvious, but that tendency is something I predicted in the 2006 paper (its original title was "Cognitive Alchemy", but I was pressured to change it because that was thought to be unscientific).
But, sadly, at the rate things are progressing, it will take the AI community another ten or twenty years to finally understand the relationship between complex systems, machine intelligence, and cognitive psychology. Another ten or twenty years of wasted effort.
So, if you want to know what AI will be like in a couple of decades, or if you prefer not to have to wait that long, you know who to ask.