Since I don’t conduct technology-focused research and I’m not involved in the artificial intelligence industry in any significant way, my best and most honest answer to this question is, “I don’t know.”
But I’m also not convinced that this should be a concern for me.
Here’s a quick framework to prioritize our thinking about future developments:
If a topic or issue is both important and readily knowable, then I should concern myself with it. If it isn’t both of those things, I try not to let myself get too concerned about it.
Clearly, A.I. is important, and it will most likely continue to be for all of humankind.
But is A.I.’s impact on our future readily knowable?
For me, A.I.’s future impact on humans is not at all knowable. I would be guessing. (And I would suggest that even people close to A.I. would be guessing as well: a far more educated guess than mine, perhaps, but a guess nonetheless.)
The aspects of my professional work that bring meaning to my life. My family relationships and friendships. My health habits and hobbies. These are all both important to me and far more readily knowable to me.
And, it turns out, when I focus on the important and readily knowable areas of my life, positive results tend to emerge.
Everything else ends up being little more than distracting noise.