Ever had the feeling you’re being watched? You’re probably right. Surveillance is now almost ubiquitous, so much so that we barely notice it anymore. Often, the cameras are intentionally discreet. Sooner than you realise, the cameras will have guns — and a machine will decide who to shoot and when.

A small new AI startup, Ultimate Systems, has just unveiled the prototype of a shocking new surveillance platform for which it is seeking investors and industry partners. Is this exciting, or scary? Watch their new video and decide for yourself; it’s a delightfully classic home-brew living-room mash-up of a production, in which the CEO’s wife waves around guns and knives while dressed like Lara Croft in Tomb Raider:

“Ultimate Systems is a new AI startup with technology that is ready to disrupt major industries, initially with monitoring and surveillance systems, both civilian and military.”

– Tim Acheson (CEO, Ultimate Systems)

Potentially interested parties already include the Chinese tech giant Huawei, which has been making strong progress on its own intelligent surveillance platform but does not yet offer reliable automated weapon detection. The UK’s G4S is also in the loop, along with a few other key players — both big and small — in sectors ranging from tech through security and law enforcement to, of course, defence.

One of the geeks behind this new breakthrough in AI-based intelligent surveillance, Tim Acheson, has promised that more videos are coming soon. The most exciting embodiment of Tim’s technology is arguably the drone-based surveillance platform, currently in working prototype form, which has the capability to auto-deploy “non-lethal countermeasures” against auto-detected threats — raising disturbing ethical questions. Picture a fleet of taser-armed drones skimming over a battlefield… Or, over a city street. Notwithstanding Tim’s good intentions, the “countermeasures” may not always be “non-lethal.”

When a human intelligence (HI) pulls the trigger, that person is legally responsible for the outcome. When an artificial intelligence (AI) decides to pull the trigger, who is responsible? It’s a grey area in current legislation, one that will very soon be settled by case law — and one that will need continuous review as AI surpasses HI and grapples with the difference between right and wrong.

Surveillance is almost inescapable, and that is — on the whole — a good thing… Unless you’re a criminal. Even on a suburban street, the chances are somebody will have a camera in their front window, or on a dash cam in their car. Most of the time, nobody is watching these personal cams, and the images will only be checked after something undesirable happens — but either way, it’s useful to be able to detect a threat. The holy grail is to detect a threat before something bad happens.

“When an AI pulls the trigger, who is responsible for that?”

Images from CCTV cameras in innumerable important public and private spaces, from city streets to airports and prisons, are actively monitored on screens by human observers who are employed to be vigilant for threats.

With the advent of automated threat detection, monitoring by humans will rapidly become less important; and even where screens are still watched by humans, technology will significantly improve the speed and likelihood of threat detection — and not only in the visible spectrum. Advanced AI is here and advancing at an explosive pace, far bigger and faster than most people ever realise at this moment.
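To make the idea concrete, here is a minimal sketch of how an automated layer might triage many camera feeds for a human operator. Everything here is hypothetical: the `Frame` type, the `weapon_score` field (standing in for a trained weapon-detection model’s confidence), and the alert threshold are illustrative assumptions, not details of the Ultimate Systems platform.

```python
# Hypothetical sketch of automated threat triage over CCTV feeds.
# The "detector" is stubbed out as a precomputed confidence score;
# a real system would run a trained object-detection network per frame.

from dataclasses import dataclass

@dataclass
class Frame:
    camera_id: str
    weapon_score: float  # stub for a detector's confidence, 0.0 to 1.0

ALERT_THRESHOLD = 0.8  # assumed tuning parameter, not from the article

def triage(frames):
    """Return the camera IDs whose latest frame exceeds the alert
    threshold, so human operators can prioritise those feeds."""
    return [f.camera_id for f in frames if f.weapon_score >= ALERT_THRESHOLD]

feeds = [
    Frame("gate-1", 0.12),
    Frame("lobby", 0.91),    # stub detector flags a possible weapon here
    Frame("car-park", 0.40),
]
print(triage(feeds))  # → ['lobby']
```

The point of the sketch is the division of labour: the machine never watches fewer feeds than a human could, it simply reorders them, which is where the claimed speed advantage over purely human monitoring comes from.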

“Any sufficiently advanced technology is indistinguishable from magic”

– Arthur C. Clarke’s Third Law

It’s not sci-fi anymore. The era of pre-crime, Minority Report style, is truly at hand. In fact, two of the geeks at Ultimate Systems are already drafting the patent [confidential under NDA] for an extremely sophisticated system aimed specifically at pre-crime detection, which sounds more like magic than science. More on that will be revealed in due course, so please watch this space…