I’ve worked in AI for 13 years and I’ve encountered a lot of people worried about it. In my opinion, they are worried about the wrong things. Their worries generally fall into two categories:
Killer robots: When are they going to be here? Just how killer will they be? Do we stand a chance against them?
Superintelligence: An all-seeing hive mind (cameras and any internet-connected device are its “eyes and ears”) that can make connections and predictions quite literally beyond our comprehension. This is adjacent to the killer robots. For example, the Terminator was a killer robot sent back in time to kill someone by Skynet, a futuristic superintelligent program. As another pop-culture example, the show Person of Interest features an omniscient computer that sees everything and can predict the future beyond our comprehension.
I do not find either of these worries particularly compelling, because we have no reason to believe that machines will be aggressive. Why project humanity’s worst traits onto machines? They are not us. Beyond that, we cannot come close to comprehending how a superintelligence would think. It would probably be closer to what many would consider a god than a human.
While superintelligence and robots aren’t here yet, here’s what I believe you should be concerned about right now.
AI in law enforcement
Argentina intends to, among other things, predict future crime using AI. On the surface this sounds fantastic. Computers help us optimize resources all the time. Baltimore used technology like ShotSpotter, which instantly identifies the location from which a shot is fired, as part of a modern approach to reducing crime. This is a positive development, but I am worried about a darker near future. We have a sex offender registry for concerned parents. What if someone hasn’t committed a crime yet, but an AI program finds that they are 80% likely to become a sex offender? Would that person start losing some of their constitutional rights?
No doubt we live in a scary world. But are we ready to deal with the implications of “highly likely” criminals? And before you answer too quickly: would you be prepared for a loved one’s life to change because a computer determined there is a probability they will do something wrong, even if they haven’t yet? How soon could we be headed this way?
Social credit score
There was an episode of Black Mirror in which a woman had to raise her social score to live in the best apartments and get flight and rental car privileges. She was rated by other people after every interaction. Imagine a future where you try to walk through the automated door of your favorite store, your face is scanned, and you learn you are no longer permitted entry. Sorry to inform you that this is not just an episode of Black Mirror. A similar program already exists in China, and human rights groups have condemned it. In one instance, an attorney was not allowed to purchase a plane ticket because a court determined his apology for an action was “insincere.”
Deepfakes
Deepfakes could also present problems for society. In Hong Kong, scammers walked away with more than $25 million after they created an entire deepfake virtual meeting in which every participant except the victim was a fabricated image and voice of a real person. The scammers used publicly available images and audio to create the deepfakes of the employees, including one of the company’s CFO.
In the future it could get much scarier. What happens if you get a call from “your mother” saying there is an emergency and funds are needed? Or you think you saw a politician say something, but it was an AI-generated video? That doesn’t even get into potential interactions with real robots or cyborgs that haven’t quite reached superintelligence but could certainly fool you.
AI is already transforming our world in ways that we need to pay attention to now. While the fear of killer robots and superintelligence captures our imaginations, these are not the immediate threats we face. The real concern is how AI is being applied today—predicting crimes before they happen, determining who is “worthy” in a social credit system, and fabricating entire virtual realities through deepfakes. These developments raise serious ethical and societal questions.
Are we ready to live in a world where a machine’s probability scores could determine our freedom, our social standing, or even our trust in reality?
As we push forward with AI advancements, it’s crucial to ensure that we are thoughtful and deliberate in how we use this technology, balancing innovation with the values and rights that define us as a society. The future of AI isn’t about distant dystopias—it’s already here, and how we choose to navigate it will shape the world we live in tomorrow.
George Kailas is CEO at Prospero.Ai.