The rise of artificial intelligence has sparked plenty of discourse about the potential for the technology to be harnessed for nefarious purposes by unscrupulous actors. It would be unfair to slap that label on the guy who used ChatGPT to build a voice-activated sentry gun, but the company behind the platform isn’t taking any chances after the weapon went viral.
Boston Dynamics has spent decades whipping the internet into a frenzy with videos of its increasingly capable autonomous robots, machines that seem poised to usher in the apocalypse at some point, and for plenty of people, those fears have only been stoked by the strides made in the artificial intelligence space in recent years.
ChatGPT has basically become synonymous with the movement as OpenAI continues to lead the charge on that front, and while most people harness the platform to ask questions that may or may not receive an accurate answer, there are plenty of more creative ways to deploy it if you know what you’re doing.
Take, for example, the engineer who’s been chronicling his quest to build a voice-activated, ChatGPT-powered sentry gun on TikTok, a quest that’s been pretty successful based on the viral videos where he’s shown what it’s capable of.
There’s zero doubt the United States military is hard at work figuring out how it might build and deploy similar A.I.-powered weapons in the field, and while it will probably be able to get away with it, Futurism reports this particular project has had the kibosh put on it courtesy of OpenAI, which said it cut off the designer’s access to ChatGPT in a statement that reads:
“We proactively identified this violation of our policies and notified the developer to cease this activity ahead of receiving your inquiry.
OpenAI’s Usage Policies prohibit the use of our services to develop or use weapons, or to automate certain systems that can affect personal safety.”
It was fun while it lasted.