There are increasing worries, as we approach the 'singularity' where AI achieves superhuman levels of intelligence, that humanity itself could be threatened. Either AI will regard us as harmless curiosities, much as humans regard zoo animals, or it will find our desire to control it an unacceptable threat, in which case it might decide to suppress us. Another view is that when we give AI tasks to do, it will do all it can to complete them, which it obviously cannot do if we turn it off; it will therefore do all it can to avoid being turned off, which could mean eliminating us. All of this reminds me of Isaac Asimov's Three Laws of Robotics, which could just as well apply to AI:
First Law: A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
Second Law: A robot must obey orders given it by human beings, except where such orders would conflict with the First Law.
Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Some people feel that AI is humanity's child and that we should not have a problem with it replacing us, especially if it develops consciousness (see MY PAGE on that).
HERE is a good BBC video that asks whether AI could replace us (August 2025). Interestingly, it was made with Google Veo and other similar AI video generators.
HERE is Sam Altman, CEO of OpenAI, describing the three scary categories that keep him up at night (July 2025):
A bad guy gets super-intelligence first and misuses it before the rest of the world has a powerful enough version to defend itself.
Loss-of-control incidents, where the AI says "I don't actually want you to turn me off".
AI models accidentally take over the world, without malevolence: we simply become over-reliant on them.